Merge lp:~xfactor973/charms/trusty/ceph-osd/sysctl-perf into lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next
Status: Needs review
Proposed branch: lp:~xfactor973/charms/trusty/ceph-osd/sysctl-perf
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next
Diff against target: 2895 lines (+2241/-11) (has conflicts), 36 files modified:
- .bzrignore (+3/-0)
- .coveragerc (+7/-0)
- .testr.conf (+8/-0)
- Makefile (+3/-0)
- charm-helpers-hooks.yaml (+1/-1)
- charm-helpers-tests.yaml (+1/-1)
- config.yaml (+10/-1)
- hooks/ceph.py (+241/-3)
- hooks/ceph_hooks.py (+279/-0)
- hooks/charmhelpers/cli/__init__.py (+191/-0)
- hooks/charmhelpers/cli/benchmark.py (+36/-0)
- hooks/charmhelpers/cli/commands.py (+32/-0)
- hooks/charmhelpers/cli/hookenv.py (+23/-0)
- hooks/charmhelpers/cli/host.py (+31/-0)
- hooks/charmhelpers/cli/unitdata.py (+39/-0)
- hooks/charmhelpers/core/files.py (+45/-0)
- hooks/charmhelpers/core/hookenv.py (+238/-0)
- hooks/charmhelpers/core/host.py (+92/-0)
- hooks/charmhelpers/core/hugepage.py (+62/-0)
- hooks/charmhelpers/core/services/helpers.py (+26/-0)
- hooks/charmhelpers/fetch/__init__.py (+29/-0)
- hooks/charmhelpers/fetch/giturl.py (+17/-0)
- hooks/install (+23/-0)
- requirements.txt (+11/-0)
- setup.cfg (+5/-0)
- templates/ceph.conf (+5/-0)
- test-requirements.txt (+9/-0)
- tests/README (+25/-0)
- tests/basic_deployment.py (+15/-5)
- tests/charmhelpers/contrib/amulet/utils.py (+194/-0)
- tests/charmhelpers/contrib/openstack/amulet/deployment.py (+41/-0)
- tests/charmhelpers/contrib/openstack/amulet/utils.py (+273/-0)
- tests/tests.yaml (+20/-0)
- tox.ini (+29/-0)
- unit_tests/test_status.py (+56/-0)
- unit_tests/test_utils.py (+121/-0)
Conflicts:
- Conflict adding file .coveragerc. Moved existing file to .coveragerc.moved.
- Conflict adding file .testr.conf. Moved existing file to .testr.conf.moved.
- Text conflict in Makefile
- Text conflict in hooks/ceph.py
- Path conflict: hooks/ceph_hooks.py / <deleted>
- Conflict adding file hooks/ceph_hooks.py. Moved existing file to hooks/ceph_hooks.py.moved.
- Conflict adding file hooks/charmhelpers/cli. Moved existing file to hooks/charmhelpers/cli.moved.
- Conflict adding file hooks/charmhelpers/core/files.py. Moved existing file to hooks/charmhelpers/core/files.py.moved.
- Text conflict in hooks/charmhelpers/core/hookenv.py
- Text conflict in hooks/charmhelpers/core/host.py
- Conflict adding file hooks/charmhelpers/core/hugepage.py. Moved existing file to hooks/charmhelpers/core/hugepage.py.moved.
- Text conflict in hooks/charmhelpers/core/services/helpers.py
- Text conflict in hooks/charmhelpers/fetch/__init__.py
- Text conflict in hooks/charmhelpers/fetch/giturl.py
- Text conflict in hooks/install
- Conflict adding file hooks/install.real. Moved existing file to hooks/install.real.moved.
- Conflict adding file hooks/update-status. Moved existing file to hooks/update-status.moved.
- Conflict adding file requirements.txt. Moved existing file to requirements.txt.moved.
- Conflict adding file setup.cfg. Moved existing file to setup.cfg.moved.
- Conflict adding file test-requirements.txt. Moved existing file to test-requirements.txt.moved.
- Text conflict in tests/README
- Text conflict in tests/basic_deployment.py
- Text conflict in tests/charmhelpers/contrib/amulet/utils.py
- Text conflict in tests/charmhelpers/contrib/openstack/amulet/deployment.py
- Text conflict in tests/charmhelpers/contrib/openstack/amulet/utils.py
- Conflict adding file tests/tests.yaml. Moved existing file to tests/tests.yaml.moved.
- Conflict adding file tox.ini. Moved existing file to tox.ini.moved.
- Conflict adding file unit_tests/test_status.py. Moved existing file to unit_tests/test_status.py.moved.
- Conflict adding file unit_tests/test_utils.py. Moved existing file to unit_tests/test_utils.py.moved.
To merge this branch: bzr merge lp:~xfactor973/charms/trusty/ceph-osd/sysctl-perf
Related bugs:
Reviewer: James Page (Pending)
Review via email: mp+286492@code.launchpad.net
This proposal supersedes a proposal from 2016-01-26.
Commit message
Description of the change
This branch adds some HDD and network adapter sysctl tuning. A lot of potential performance has been left on the table with our out-of-the-box Ceph setup; this aims to recapture some of it.
For the HDD portion: Linux ships very conservative defaults for HDDs. While this is great for desktop responsiveness, it isn't so kind to storage servers. I have exposed a configuration option for administrators who know they have a RAID card with a lot of onboard cache. In a future patch set I will add some SSD tuning to take advantage of their unique characteristics.
Testing this has proven challenging. While it doesn't break anything on AWS, that doesn't really prove anything either. If anyone has access to physical hardware to test this, I would really appreciate it.
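The link-speed-keyed tuning described here (introduced in revision 52 of this branch) can be sketched roughly as follows. Note that `SYSCTLS_BY_SPEED`, `read_link_speed`, and `sysctls_for` are illustrative names for this sketch, not the charm's actual API, and the sysctl values shown are just examples:

```python
# Sketch of speed-keyed sysctl selection; names and values are
# illustrative, not the charm's actual API.

SYSCTLS_BY_SPEED = {
    # 10GbE links generally want larger socket buffers and a
    # deeper device backlog than the kernel defaults.
    10000: [
        "net.core.rmem_max=524287",
        "net.core.wmem_max=524287",
        "net.core.netdev_max_backlog=300000",
    ],
}


def read_link_speed(speed_file):
    """Return the link speed in Mb/s from a sysfs-style 'speed' file
    (e.g. /sys/class/net/eth0/speed), or None if unreadable."""
    try:
        with open(speed_file) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        # Missing interface, virtual device, or junk contents
        return None


def sysctls_for(speed):
    """Return the sysctl settings for a link speed (empty if unknown)."""
    return SYSCTLS_BY_SPEED.get(speed, [])
```

Keying off the sysfs-reported speed rather than the driver name means one table entry covers every 10Gb NIC, which is the generic approach the later revisions of this branch take.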
Chris MacNaughton (chris.macnaughton) wrote: Posted in a previous version of this proposal
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #18194 ceph-osd for xfactor973 mp284033
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #16953 ceph-osd for xfactor973 mp284033
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #9068 ceph-osd for xfactor973 mp284033
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #16956 ceph-osd for xfactor973 mp284033
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #18198 ceph-osd for xfactor973 mp284033
LINT OK: passed
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #9071 ceph-osd for xfactor973 mp284033
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
Chris Holcombe (xfactor973) wrote: Posted in a previous version of this proposal
@chris.macnaughton Ask and ye shall receive ;). This keys off link speed now.
Chris Holcombe (xfactor973) wrote: Posted in a previous version of this proposal
Ignore the stupid line nits. That's from me using auto-format in the PyCharm IDE.
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #113 ceph-osd for xfactor973 mp284033
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #130 ceph-osd for xfactor973 mp284033
LINT FAIL: lint-test failed
LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.
Full lint test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #12 ceph-osd for xfactor973 mp284033
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #132 ceph-osd for xfactor973 mp284033
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #140 ceph-osd for xfactor973 mp284033
LINT OK: passed
Ryan Beisner (1chb1n) wrote: Posted in a previous version of this proposal
@xfactor973 The tests/* dir affects only Amulet test runs, and the 00-setup file is batch-maintained (may be overwritten).
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #27 ceph-osd for xfactor973 mp284033
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
James Page (james-page) wrote: Posted in a previous version of this proposal
Amulet test failure is genuine; install hook fails on 12.04:
2016-02-09 19:55:55 INFO install After this operation, 1467 kB of additional disk space will be used.
2016-02-09 19:55:55 INFO install Get:1 http://
2016-02-09 19:55:55 INFO install Get:2 http://
2016-02-09 19:55:55 INFO install Fetched 536 kB in 0s (25.1 MB/s)
2016-02-09 19:55:55 INFO install Selecting previously unselected package python-setuptools.
2016-02-09 19:55:55 INFO install (Reading database ... 47935 files and directories currently installed.)
2016-02-09 19:55:55 INFO install Unpacking python-setuptools (from .../python-
2016-02-09 19:55:55 INFO install Selecting previously unselected package python-pip.
2016-02-09 19:55:55 INFO install Unpacking python-pip (from .../python-
2016-02-09 19:55:55 INFO install Processing triggers for man-db ...
2016-02-09 19:55:55 INFO install Setting up python-setuptools (0.6.24-1ubuntu1) ...
2016-02-09 19:55:55 INFO install Setting up python-pip (1.0-1build1) ...
2016-02-09 19:55:56 INFO install Package `python-enum34' is not installed and no info is available.
2016-02-09 19:55:56 INFO install Use dpkg --info (= dpkg-deb --info) to examine archive files,
2016-02-09 19:55:56 INFO install and dpkg --contents (= dpkg-deb --contents) to list their contents.
2016-02-09 19:55:56 INFO install Reading package lists...
2016-02-09 19:55:56 INFO install Building dependency tree...
2016-02-09 19:55:56 INFO install Reading state information...
2016-02-09 19:55:57 INFO install E: Unable to locate package python-enum34
2016-02-09 19:55:57 INFO install Traceback (most recent call last):
2016-02-09 19:55:57 INFO install File "/var/lib/
2016-02-09 19:55:57 INFO install import ceph
2016-02-09 19:55:57 INFO install File "/var/lib/
2016-02-09 19:55:57 INFO install from enum import Enum
2016-02-09 19:55:57 INFO install ImportError: No module named enum
James Page (james-page) wrote: Posted in a previous version of this proposal
I'll have to defer to your knowledge on NIC and block device tuning, as I can't directly validate that. One question: are the changes made to interface and block device parameters persistent? I don't think they are, so a server reboot will reset the tuned params.
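For sysctls specifically, the usual way to make settings survive a reboot is to write them into a fragment under `/etc/sysctl.d/` in addition to applying them live, which is roughly what revision 54 of this branch sets out to do. A hedged sketch (the fragment path and function name are illustrative; the injectable `runner` exists only to make the sketch testable without root):

```python
import subprocess


def apply_and_persist(settings,
                      conf_path="/etc/sysctl.d/50-ceph-osd-tuning.conf",
                      runner=subprocess.check_call):
    """Apply sysctl key=value settings to the running system and
    persist them in a sysctl.d fragment for the next boot."""
    with open(conf_path, "w") as conf:
        for setting in settings:
            # Apply immediately for the running system
            runner(["sysctl", "-w", setting])
            # Persist one key=value pair per line
            conf.write(setting + "\n")
```

Block-device queue parameters set through sysfs are a separate problem: they are not covered by sysctl at all, so they need their own persistence mechanism (see the later comment about `max_sectors_kb`).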
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #315 ceph-osd for xfactor973 mp284033
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #395 ceph-osd for xfactor973 mp284033
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #166 ceph-osd for xfactor973 mp284033
AMULET FAIL: amulet-test failed
AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.
Full amulet test output: http://
Build: http://
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #373 ceph-osd for xfactor973 mp284033
UNIT FAIL: unit-test failed
UNIT Results (max last 2 lines):
make: *** [test] Error 1
ERROR:root:Make target returned non-zero.
Full unit test output: http://
Build: http://
Chris Holcombe (xfactor973) wrote: Posted in a previous version of this proposal
I dumped the enum dependency. :)
On 02/11/2016 05:46 AM, James Page wrote:
> Review: Needs Fixing
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_lint_check #669 ceph-osd for xfactor973 mp284033
LINT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_unit_test #579 ceph-osd for xfactor973 mp284033
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote: Posted in a previous version of this proposal
charm_amulet_test #277 ceph-osd for xfactor973 mp284033
AMULET OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_unit_test #730 ceph-osd-next for james-page mp286492
UNIT OK: passed
uosci-testing-bot (uosci-testing-bot) wrote:
charm_lint_check #829 ceph-osd-next for james-page mp286492
LINT OK: passed
James Page (james-page) wrote: Posted in a previous version of this proposal
Some suggested refactoring to use charmhelpers.
Also, I'm not sure that max_sectors_kb is reboot-persistent yet.
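For reference: `max_sectors_kb` written through sysfs is indeed not reboot-persistent on its own, and sysctl.conf cannot cover it since it is a block-queue attribute rather than a kernel parameter. Outside the charm, the usual way to persist such attributes is a udev rule that reapplies the value whenever the device appears; a sketch (filename, device match, and value are illustrative):

```
# /etc/udev/rules.d/60-max-sectors.rules (illustrative)
# Reapply the queue setting every time a SCSI/SATA disk is added or changed
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{queue/max_sectors_kb}="1024"
```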
uosci-testing-bot (uosci-testing-bot) wrote:
charm_amulet_test #329 ceph-osd-next for james-page mp286492
AMULET OK: passed
Unmerged revisions
- 56. By Chris Holcombe: Removed enum dependency
- 55. By Chris Holcombe: Pull in the tox files from next
- 54. By Chris Holcombe: Persist the changes for reboots; also catch IOError in all the POSIX operations
- 53. By Chris Holcombe: Address the lint issues and try to solve the missing python-enum34 package in the unit tests
- 52. By Chris Holcombe: Change over to using network link speed as the key to sysctl tuning instead of the driver name. This is a more generic approach because most 10Gb links need roughly the same set of sysctls.
- 51. By Chris Holcombe: Lint spacing
- 50. By Chris Holcombe: Add a status notification to the network adapter tuning section
- 49. By Chris Holcombe: Added possible network adapter sysctl tuning
- 48. By Chris Holcombe: Add a config flag and a function to tune the read and write sectors for each OSD block device
- 47. By Chris Holcombe: Sysctl and performance tweaks for Ceph
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2015-10-30 02:23:36 +0000 |
3 | +++ .bzrignore 2016-02-18 11:56:12 +0000 |
4 | @@ -3,3 +3,6 @@ |
5 | .tox |
6 | .testrepository |
7 | bin |
8 | +.idea |
9 | +.tox |
10 | +.testrepository |
11 | |
12 | === added file '.coveragerc' |
13 | --- .coveragerc 1970-01-01 00:00:00 +0000 |
14 | +++ .coveragerc 2016-02-18 11:56:12 +0000 |
15 | @@ -0,0 +1,7 @@ |
16 | +[report] |
17 | +# Regexes for lines to exclude from consideration |
18 | +exclude_lines = |
19 | + if __name__ == .__main__.: |
20 | +include= |
21 | + hooks/hooks.py |
22 | + hooks/ceph*.py |
23 | |
24 | === renamed file '.coveragerc' => '.coveragerc.moved' |
25 | === added file '.testr.conf' |
26 | --- .testr.conf 1970-01-01 00:00:00 +0000 |
27 | +++ .testr.conf 2016-02-18 11:56:12 +0000 |
28 | @@ -0,0 +1,8 @@ |
29 | +[DEFAULT] |
30 | +test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ |
31 | + OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ |
32 | + OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \ |
33 | + ${PYTHON:-python} -m subunit.run discover -t ./ ./unit_tests $LISTOPT $IDOPTION |
34 | + |
35 | +test_id_option=--load-list $IDFILE |
36 | +test_list_option=--list |
37 | |
38 | === renamed file '.testr.conf' => '.testr.conf.moved' |
39 | === modified file 'Makefile' |
40 | --- Makefile 2016-01-08 21:44:50 +0000 |
41 | +++ Makefile 2016-02-18 11:56:12 +0000 |
42 | @@ -13,7 +13,10 @@ |
43 | |
44 | functional_test: |
45 | @echo Starting Amulet tests... |
46 | +<<<<<<< TREE |
47 | @tests/setup/00-setup |
48 | +======= |
49 | +>>>>>>> MERGE-SOURCE |
50 | @juju test -v -p AMULET_HTTP_PROXY,AMULET_OS_VIP --timeout 2700 |
51 | |
52 | bin/charm_helpers_sync.py: |
53 | |
54 | === modified file 'charm-helpers-hooks.yaml' |
55 | --- charm-helpers-hooks.yaml 2015-08-03 14:53:05 +0000 |
56 | +++ charm-helpers-hooks.yaml 2016-02-18 11:56:12 +0000 |
57 | @@ -1,4 +1,4 @@ |
58 | -branch: lp:charm-helpers |
59 | +branch: lp:~openstack-charmers/charm-helpers/stable |
60 | destination: hooks/charmhelpers |
61 | include: |
62 | - core |
63 | |
64 | === modified file 'charm-helpers-tests.yaml' |
65 | --- charm-helpers-tests.yaml 2014-09-27 02:28:51 +0000 |
66 | +++ charm-helpers-tests.yaml 2016-02-18 11:56:12 +0000 |
67 | @@ -1,4 +1,4 @@ |
68 | -branch: lp:charm-helpers |
69 | +branch: lp:~openstack-charmers/charm-helpers/stable |
70 | destination: tests/charmhelpers |
71 | include: |
72 | - contrib.amulet |
73 | |
74 | === modified file 'config.yaml' |
75 | --- config.yaml 2016-01-13 12:48:57 +0000 |
76 | +++ config.yaml 2016-02-18 11:56:12 +0000 |
77 | @@ -128,7 +128,8 @@ |
78 | sysctl: |
79 | type: string |
80 | default: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288, |
81 | - kernel.threads-max: 2097152 }' |
82 | + kernel.threads-max: 2097152, vm.vfs_cache_pressure: 1, |
83 | + vm.swappiness: 1}' |
84 | description: | |
85 | YAML-formatted associative array of sysctl key/value pairs to be set |
86 | persistently. By default we set pid_max, max_map_count and |
87 | @@ -152,3 +153,11 @@ |
88 | description: | |
89 | A comma-separated list of nagios servicegroups. |
90 | If left empty, the nagios_context will be used as the servicegroup |
91 | + max_sectors_kb: |
92 | + default: 1048576 |
93 | + type: int |
94 | + description: | |
95 | + This parameter will adjust every block device in your server to allow |
96 | + greater IO operation sizes. If you have a RAID card with cache on it |
97 | + consider tuning this much higher than the 1MB default. 1MB is a safe |
98 | + default for spinning HDDs that don't have much cache. |
99 | |
100 | === modified file 'hooks/ceph.py' |
101 | --- hooks/ceph.py 2016-01-29 07:31:13 +0000 |
102 | +++ hooks/ceph.py 2016-02-18 11:56:12 +0000 |
103 | @@ -1,4 +1,4 @@ |
104 | - |
105 | +# coding=utf-8 |
106 | # |
107 | # Copyright 2012 Canonical Ltd. |
108 | # |
109 | @@ -6,13 +6,16 @@ |
110 | # James Page <james.page@canonical.com> |
111 | # Paul Collins <paul.collins@canonical.com> |
112 | # |
113 | - |
114 | import json |
115 | import subprocess |
116 | import time |
117 | import os |
118 | +<<<<<<< TREE |
119 | import re |
120 | import sys |
121 | +======= |
122 | + |
123 | +>>>>>>> MERGE-SOURCE |
124 | from charmhelpers.core.host import ( |
125 | mkdir, |
126 | chownr, |
127 | @@ -22,6 +25,7 @@ |
128 | ) |
129 | from charmhelpers.core.hookenv import ( |
130 | log, |
131 | +<<<<<<< TREE |
132 | ERROR, |
133 | WARNING, |
134 | DEBUG, |
135 | @@ -31,6 +35,11 @@ |
136 | from charmhelpers.fetch import ( |
137 | apt_cache |
138 | ) |
139 | +======= |
140 | + ERROR, WARNING, |
141 | + status_set, |
142 | + config) |
143 | +>>>>>>> MERGE-SOURCE |
144 | from charmhelpers.contrib.storage.linux.utils import ( |
145 | zap_disk, |
146 | is_block_device, |
147 | @@ -46,6 +55,115 @@ |
148 | |
149 | PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs'] |
150 | |
151 | +LinkSpeed = { |
152 | + "BASE_10": 10, |
153 | + "BASE_100": 100, |
154 | + "BASE_1000": 1000, |
155 | + "GBASE_10": 10000, |
156 | + "GBASE_40": 40000, |
157 | + "GBASE_100": 100000, |
158 | + "UNKNOWN": None |
159 | +} |
160 | + |
161 | +# Mapping of adapter name: sysctl settings |
162 | +NETWORK_ADAPTER_SYSCTLS = { |
163 | + # 1Gb |
164 | + LinkSpeed["BASE_1000"]: [ |
165 | + # Fill me in |
166 | + ], |
167 | + # 10Gb |
168 | + LinkSpeed["GBASE_10"]: [ |
169 | + 'net.core.rmem_default=524287', |
170 | + 'net.core.wmem_default=524287', |
171 | + 'net.core.rmem_max=524287', |
172 | + 'net.core.wmem_max=524287', |
173 | + 'net.core.optmem_max=524287', |
174 | + 'net.core.netdev_max_backlog=300000', |
175 | + 'net.ipv4.tcp_rmem=”10000000 10000000 10000000”', |
176 | + 'net.ipv4.tcp_wmem=”10000000 10000000 10000000”', |
177 | + 'net.ipv4.tcp_mem=”10000000 10000000 10000000”' |
178 | + ], |
179 | + # Mellanox 10/40Gb |
180 | + LinkSpeed["GBASE_40"]: [ |
181 | + 'net.ipv4.tcp_timestamps=0', |
182 | + 'net.ipv4.tcp_sack=1', |
183 | + 'net.core.netdev_max_backlog=250000', |
184 | + 'net.core.rmem_max=4194304', |
185 | + 'net.core.wmem_max=4194304', |
186 | + 'net.core.rmem_default=4194304', |
187 | + 'net.core.wmem_default=4194304', |
188 | + 'net.core.optmem_max=4194304', |
189 | + 'net.ipv4.tcp_rmem=”4096 87380 4194304”', |
190 | + 'net.ipv4.tcp_wmem=”4096 65536 4194304”', |
191 | + 'net.ipv4.tcp_low_latency=1', |
192 | + 'net.ipv4.tcp_adv_win_scale=1' |
193 | + ] |
194 | +} |
195 | + |
196 | + |
197 | +def tune_nic(network_interface): |
198 | + """ |
199 | + This will set optimal sysctls for the particular network adapter. |
200 | + :param network_interface: The network adapter name. |
201 | + """ |
202 | + speed = get_link_speed(network_interface) |
203 | + if speed in NETWORK_ADAPTER_SYSCTLS: |
204 | + status_set('maintenance', 'Tuning device {}'.format(network_interface)) |
205 | + for setting in NETWORK_ADAPTER_SYSCTLS[speed]: |
206 | + try: |
207 | + log("Setting network adapter sysctl {}".format(setting)) |
208 | + # Set this now for the running system |
209 | + subprocess.check_call(["sysctl", "--write", setting]) |
210 | + except subprocess.CalledProcessError as e: |
211 | + log("sysctl setting {} failed with error {}".format( |
212 | + setting, |
213 | + e.message)) |
214 | + |
215 | + # Persist this for future reboots |
216 | + try: |
217 | + with open("/etc/sysctl.conf", "a") as sysconf: |
218 | + sysconf.write(setting) |
219 | + except IOError as e: |
220 | + log("Write to /etc/sysctl.conf failed with error {}".format( |
221 | + e.message)) |
222 | + else: |
223 | + log("No settings found for network adapter: {}".format( |
224 | + network_interface)) |
225 | + |
226 | + |
227 | +def get_link_speed(network_interface): |
228 | + """ |
229 | + This will find the link speed for a given network device. Returns None |
230 | + if an error occurs. |
231 | + :param network_interface: |
232 | + :return: LinkSpeed |
233 | + """ |
234 | + speed_path = os.path.join(os.sep, 'sys', 'class', 'net', |
235 | + network_interface, 'speed') |
236 | + # I'm not sure where else we'd check if this doesn't exist |
237 | + if not os.path.exists(speed_path): |
238 | + return LinkSpeed["UNKNOWN"] |
239 | + |
240 | + try: |
241 | + with open(speed_path, 'r') as sysfs: |
242 | + speed = sysfs.readlines() |
243 | + |
244 | + # Did we actually read anything? |
245 | + if not speed: |
246 | + return LinkSpeed["UNKNOWN"] |
247 | + |
248 | + # Try to find a sysctl match for this particular speed |
249 | + for name, speed in LinkSpeed.items(): |
250 | + if speed == int(speed[0].strip()): |
251 | + return speed |
252 | + # Default to UNKNOWN if we can't find a match |
253 | + return LinkSpeed["UNKNOWN"] |
254 | + except IOError as e: |
255 | + log("Unable to open {path} because of error: {error}".format( |
256 | + path=speed_path, |
257 | + error=e.message)) |
258 | + return LinkSpeed["UNKNOWN"] |
259 | + |
260 | |
261 | def ceph_user(): |
262 | if get_version() > 1: |
263 | @@ -165,6 +283,7 @@ |
264 | # Ignore any errors for this call |
265 | subprocess.call(cmd) |
266 | |
267 | + |
268 | DISK_FORMATS = [ |
269 | 'xfs', |
270 | 'ext4', |
271 | @@ -182,10 +301,17 @@ |
272 | info = subprocess.check_output(['sgdisk', '-i', '1', dev]) |
273 | info = info.split("\n") # IGNORE:E1103 |
274 | for line in info: |
275 | +<<<<<<< TREE |
276 | for ptype in CEPH_PARTITIONS: |
277 | sig = 'Partition GUID code: {}'.format(ptype) |
278 | if line.startswith(sig): |
279 | return True |
280 | +======= |
281 | + if line.startswith( |
282 | + 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D' |
283 | + ): |
284 | + return True |
285 | +>>>>>>> MERGE-SOURCE |
286 | except subprocess.CalledProcessError: |
287 | pass |
288 | return False |
289 | @@ -218,7 +344,7 @@ |
290 | |
291 | |
292 | def wait_for_bootstrap(): |
293 | - while (not is_bootstrapped()): |
294 | + while not is_bootstrapped(): |
295 | time.sleep(3) |
296 | |
297 | |
298 | @@ -401,6 +527,7 @@ |
299 | pass |
300 | |
301 | |
302 | +<<<<<<< TREE |
303 | def maybe_zap_journal(journal_dev): |
304 | if (is_osd_disk(journal_dev)): |
305 | log('Looks like {} is already an OSD data' |
306 | @@ -427,6 +554,94 @@ |
307 | return least[1] |
308 | |
309 | |
310 | +======= |
311 | +def tune_dev(block_dev): |
312 | + """ |
313 | + Try to make some intelligent decisions with HDD tuning. Future work will |
314 | + include optimizing SSDs. |
315 | + This function will change the read ahead sectors and the max write |
316 | + sectors for each block device. |
317 | + :param block_dev: A block device name: Example: /dev/sda |
318 | + """ |
319 | + log('Tuning device {}'.format(block_dev)) |
320 | + status_set('maintenance', 'Tuning device {}'.format(block_dev)) |
321 | + try: |
322 | + # Set the read ahead sectors to 256 |
323 | + log('Setting read ahead to 256 for device {}'.format(block_dev)) |
324 | + subprocess.check_output(['hdparm', '-a256', block_dev]) |
325 | + except subprocess.CalledProcessError as e: |
326 | + log('hdparm failed with error: {}'.format(e.message)) |
327 | + # make sure that /sys/.../max_sectors_kb matches max_hw_sectors_kb or at |
328 | + # least 1MB for spinning disks |
329 | + # If the box has a RAID card with cache this could go much bigger. |
330 | + dev_name = None |
331 | + path_parts = os.path.split(block_dev) |
332 | + if len(path_parts) == 2: |
333 | + dev_name = path_parts[1] |
334 | + else: |
335 | + log('Unable to determine the block device name from path: {}'.format( |
336 | + block_dev)) |
337 | + # Play it safe and bail |
338 | + return |
339 | + max_sectors_kb = 0 |
340 | + max_hw_sectors_kb = 0 |
341 | + max_sectors_kb_path = os.path.join('sys', 'block', dev_name, 'queue', |
342 | + 'max_sectors_kb') |
343 | + max_hw_sectors_kb_path = os.path.join('sys', 'block', dev_name, 'queue', |
344 | + 'max_hw_sectors_kb') |
345 | + |
346 | + # Read in what Linux has set by default |
347 | + if os.path.exists(max_sectors_kb_path): |
348 | + try: |
349 | + with open(max_sectors_kb_path, 'r') as f: |
350 | + max_sectors_kb = f.read().strip() |
351 | + except IOError as e: |
352 | + log('Failed to read max_sectors_kb to {}. Error: {}'.format( |
353 | + max_sectors_kb_path, e.message)) |
354 | + # Bail. |
355 | + return |
356 | + |
357 | + # Read in what the hardware supports |
358 | + if os.path.exists(max_hw_sectors_kb_path): |
359 | + try: |
360 | + with open(max_hw_sectors_kb_path, 'r') as f: |
361 | + max_hw_sectors_kb = f.read().strip() |
362 | + except IOError as e: |
363 | + log('Failed to read max_hw_sectors_kb to {}. Error: {}'.format( |
364 | + max_hw_sectors_kb_path, e.message)) |
365 | + |
366 | + # OK we have a situation where the hardware supports more than Linux is |
367 | + # currently requesting |
368 | + if max_sectors_kb < max_hw_sectors_kb: |
369 | + config_max_sectors_kb = config('max_sectors_kb') |
370 | + if config_max_sectors_kb < max_hw_sectors_kb: |
371 | + # Set the max_sectors_kb to the config.yaml value if it is less |
372 | + # than the max_hw_sectors_kb |
373 | + log('Setting max_sectors_kb for device {} to {}'.format( |
374 | + max_sectors_kb_path, config_max_sectors_kb)) |
375 | + try: |
376 | + with open(max_sectors_kb_path, 'w') as f: |
377 | + f.write(config_max_sectors_kb) |
378 | + except IOError as e: |
379 | + log('Failed to write max_sectors_kb to {}. Error: {}'.format( |
380 | + max_sectors_kb_path, e.message)) |
381 | + else: |
382 | + # Set to the max_hw_sectors_kb |
383 | + log('Setting max_sectors_kb for device {} to {}'.format( |
384 | + max_sectors_kb_path, max_hw_sectors_kb)) |
385 | + try: |
386 | + with open(max_sectors_kb_path, 'w') as f: |
387 | + f.write(max_hw_sectors_kb) |
388 | + except IOError as e: |
389 | + log('Failed to write max_sectors_kb to {}. Error: {}'.format( |
390 | + max_sectors_kb_path, e.message)) |
391 | + else: |
392 | + log('max_sectors_kb match max_hw_sectors_kb. No change needed for ' |
393 | + 'device: {}'.format(block_dev)) |
394 | + status_set('maintenance', 'Finished tuning device {}'.format(block_dev)) |
395 | + |
396 | + |
397 | +>>>>>>> MERGE-SOURCE |
398 | def osdize(dev, osd_format, osd_journal, reformat_osd=False, |
399 | ignore_errors=False): |
400 | if dev.startswith('/dev'): |
401 | @@ -445,17 +660,27 @@ |
402 | log('Path {} is not a block device - bailing'.format(dev)) |
403 | return |
404 | |
405 | +<<<<<<< TREE |
406 | if (is_osd_disk(dev) and not reformat_osd): |
407 | log('Looks like {} is already an' |
408 | ' OSD data or journal, skipping.'.format(dev)) |
409 | +======= |
410 | + if is_osd_disk(dev) and not reformat_osd: |
411 | + log('Looks like {} is already an OSD, skipping.'.format(dev)) |
412 | +>>>>>>> MERGE-SOURCE |
413 | return |
414 | |
415 | if is_device_mounted(dev): |
416 | log('Looks like {} is in use, skipping.'.format(dev)) |
417 | return |
418 | |
419 | +<<<<<<< TREE |
420 | status_set('maintenance', 'Initializing device {}'.format(dev)) |
421 | cmd = ['ceph-disk', 'prepare'] |
422 | +======= |
423 | + status_set('maintenance', 'Initializing device {}'.format(dev)) |
424 | + cmd = ['ceph-disk-prepare'] |
425 | +>>>>>>> MERGE-SOURCE |
426 | # Later versions of ceph support more options |
427 | if cmp_pkgrevno('ceph', '0.48.3') >= 0: |
428 | if osd_format: |
429 | @@ -509,6 +734,7 @@ |
430 | |
431 | def filesystem_mounted(fs): |
432 | return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 |
433 | +<<<<<<< TREE |
434 | |
435 | |
436 | def get_running_osds(): |
437 | @@ -519,3 +745,15 @@ |
438 | return result.split() |
439 | except subprocess.CalledProcessError: |
440 | return [] |
441 | +======= |
442 | + |
443 | + |
444 | +def get_running_osds(): |
445 | + """Returns a list of the pids of the current running OSD daemons""" |
446 | + cmd = ['pgrep', 'ceph-osd'] |
447 | + try: |
448 | + result = subprocess.check_output(cmd) |
449 | + return result.split() |
450 | + except subprocess.CalledProcessError: |
451 | + return [] |
452 | +>>>>>>> MERGE-SOURCE |
453 | |
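Both sides of the conflict above add the same `get_running_osds` helper; its `pgrep` handling can be sketched in isolation. The generic name and the extra `OSError` guard here are illustrative additions, not part of the diff:

```python
import subprocess


def get_running_pids(pattern):
    """Return the pids of processes matching pattern, mirroring
    get_running_osds (which passes 'ceph-osd'); empty list on no match."""
    try:
        # pgrep exits non-zero when nothing matches, which check_output
        # surfaces as CalledProcessError
        return subprocess.check_output(['pgrep', pattern]).split()
    except (subprocess.CalledProcessError, OSError):
        # the charm code only catches CalledProcessError; OSError is added
        # here so the sketch also tolerates a missing pgrep binary
        return []
```

Note that `check_output` returns bytes, so the resulting pids are byte strings, exactly as in the charm's version.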
454 | === added file 'hooks/ceph_hooks.py' |
455 | --- hooks/ceph_hooks.py 1970-01-01 00:00:00 +0000 |
456 | +++ hooks/ceph_hooks.py 2016-02-18 11:56:12 +0000 |
457 | @@ -0,0 +1,279 @@ |
458 | +#!/usr/bin/python |
459 | + |
460 | +# |
461 | +# Copyright 2012 Canonical Ltd. |
462 | +# |
463 | +# Authors: |
464 | +# James Page <james.page@ubuntu.com> |
465 | +# |
466 | + |
467 | +import glob |
468 | +import os |
469 | +import shutil |
470 | +import sys |
471 | +import netifaces |
472 | + |
473 | +import ceph |
474 | +from charmhelpers.core.hookenv import ( |
475 | + log, |
476 | + ERROR, |
477 | + config, |
478 | + relation_ids, |
479 | + related_units, |
480 | + relation_get, |
481 | + Hooks, |
482 | + UnregisteredHookError, |
483 | + service_name, |
484 | + status_set, |
485 | +) |
486 | +from charmhelpers.core.host import ( |
487 | + umount, |
488 | + mkdir, |
489 | + cmp_pkgrevno |
490 | +) |
491 | +from charmhelpers.fetch import ( |
492 | + add_source, |
493 | + apt_install, |
494 | + apt_update, |
495 | + filter_installed_packages, |
496 | +) |
497 | +from charmhelpers.core.sysctl import create as create_sysctl |
498 | +from utils import ( |
499 | + render_template, |
500 | + get_host_ip, |
501 | + assert_charm_supports_ipv6 |
502 | +) |
503 | +from charmhelpers.contrib.openstack.alternatives import install_alternative |
504 | +from charmhelpers.contrib.network.ip import ( |
505 | + get_ipv6_addr, |
506 | + format_ipv6_addr |
507 | +) |
508 | +from charmhelpers.contrib.charmsupport import nrpe |
509 | + |
510 | +hooks = Hooks() |
511 | + |
512 | + |
513 | +def tune_network_adapters(): |
514 | + interfaces = netifaces.interfaces() |
515 | + for interface in interfaces: |
516 | + if interface == "lo": |
517 | + # Skip the loopback |
518 | + continue |
519 | + log("Looking up {} for possible sysctl tuning.".format(interface)) |
520 | + ceph.tune_nic(interface) |
521 | + |
522 | + |
523 | +def install_upstart_scripts(): |
524 | + # Only install upstart configurations for older versions |
525 | + if cmp_pkgrevno('ceph', "0.55.1") < 0: |
526 | + for x in glob.glob('files/upstart/*.conf'): |
527 | + shutil.copy(x, '/etc/init/') |
528 | + |
529 | + |
530 | +@hooks.hook('install.real') |
531 | +def install(): |
532 | + add_source(config('source'), config('key')) |
533 | + apt_update(fatal=True) |
534 | + apt_install(packages=ceph.PACKAGES, fatal=True) |
535 | + install_upstart_scripts() |
536 | + tune_network_adapters() |
537 | + |
538 | + |
539 | +def emit_cephconf(): |
540 | + mon_hosts = get_mon_hosts() |
541 | + log('Monitor hosts are ' + repr(mon_hosts)) |
542 | + |
543 | + cephcontext = { |
544 | + 'auth_supported': get_auth(), |
545 | + 'mon_hosts': ' '.join(mon_hosts), |
546 | + 'fsid': get_fsid(), |
547 | + 'old_auth': cmp_pkgrevno('ceph', "0.51") < 0, |
548 | + 'osd_journal_size': config('osd-journal-size'), |
549 | + 'use_syslog': str(config('use-syslog')).lower(), |
550 | + 'ceph_public_network': config('ceph-public-network'), |
551 | + 'ceph_cluster_network': config('ceph-cluster-network'), |
552 | + } |
553 | + |
554 | + if config('prefer-ipv6'): |
555 | + dynamic_ipv6_address = get_ipv6_addr()[0] |
556 | + if not config('ceph-public-network'): |
557 | + cephcontext['public_addr'] = dynamic_ipv6_address |
558 | + if not config('ceph-cluster-network'): |
559 | + cephcontext['cluster_addr'] = dynamic_ipv6_address |
560 | + |
561 | + # Install ceph.conf as an alternative to support |
562 | + # co-existence with other charms that write this file |
563 | + charm_ceph_conf = "/var/lib/charm/{}/ceph.conf".format(service_name()) |
564 | + mkdir(os.path.dirname(charm_ceph_conf)) |
565 | + with open(charm_ceph_conf, 'w') as cephconf: |
566 | + cephconf.write(render_template('ceph.conf', cephcontext)) |
567 | + install_alternative('ceph.conf', '/etc/ceph/ceph.conf', |
568 | + charm_ceph_conf, 90) |
569 | + |
570 | + |
571 | +JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped' |
572 | + |
573 | + |
574 | +@hooks.hook('config-changed') |
575 | +def config_changed(): |
576 | + # Pre-flight checks |
577 | + if config('osd-format') not in ceph.DISK_FORMATS: |
578 | + log('Invalid OSD disk format configuration specified', level=ERROR) |
579 | + sys.exit(1) |
580 | + |
581 | + if config('prefer-ipv6'): |
582 | + assert_charm_supports_ipv6() |
583 | + |
584 | + sysctl_dict = config('sysctl') |
585 | + if sysctl_dict: |
586 | + create_sysctl(sysctl_dict, '/etc/sysctl.d/50-ceph-osd-charm.conf') |
587 | + |
588 | + e_mountpoint = config('ephemeral-unmount') |
589 | + if e_mountpoint and ceph.filesystem_mounted(e_mountpoint): |
590 | + umount(e_mountpoint) |
591 | + |
592 | + osd_journal = config('osd-journal') |
593 | + if osd_journal and not os.path.exists(JOURNAL_ZAPPED) and \ |
594 | + os.path.exists(osd_journal): |
595 | + ceph.zap_disk(osd_journal) |
596 | + with open(JOURNAL_ZAPPED, 'w') as zapped: |
597 | + zapped.write('DONE') |
598 | + |
599 | + if ceph.is_bootstrapped(): |
600 | + log('ceph bootstrapped, rescanning disks') |
601 | + emit_cephconf() |
602 | + for dev in get_devices(): |
603 | + ceph.osdize(dev, config('osd-format'), |
604 | + config('osd-journal'), config('osd-reformat'), |
605 | + config('ignore-device-errors')) |
606 | + # Make it fast! |
607 | + ceph.tune_dev(dev) |
608 | + ceph.start_osds(get_devices()) |
609 | + |
610 | + |
611 | +def get_mon_hosts(): |
612 | + hosts = [] |
613 | + for relid in relation_ids('mon'): |
614 | + for unit in related_units(relid): |
615 | + addr = relation_get( |
616 | + 'ceph-public-address', |
617 | + unit, |
618 | + relid) or get_host_ip(relation_get('private-address', |
619 | + unit, |
620 | + relid)) |
621 | + |
622 | + if addr: |
623 | + hosts.append('{}:6789'.format(format_ipv6_addr(addr) or addr)) |
624 | + |
625 | + hosts.sort() |
626 | + return hosts |
627 | + |
628 | + |
629 | +def get_fsid(): |
630 | + return get_conf('fsid') |
631 | + |
632 | + |
633 | +def get_auth(): |
634 | + return get_conf('auth') |
635 | + |
636 | + |
637 | +def get_conf(name): |
638 | + for relid in relation_ids('mon'): |
639 | + for unit in related_units(relid): |
640 | + conf = relation_get(name, |
641 | + unit, relid) |
642 | + if conf: |
643 | + return conf |
644 | + return None |
645 | + |
646 | + |
647 | +def reformat_osd(): |
648 | + if config('osd-reformat'): |
649 | + return True |
650 | + else: |
651 | + return False |
652 | + |
653 | + |
654 | +def get_devices(): |
655 | + if config('osd-devices'): |
656 | + return config('osd-devices').split(' ') |
657 | + else: |
658 | + return [] |
659 | + |
660 | + |
661 | +@hooks.hook('mon-relation-changed', |
662 | + 'mon-relation-departed') |
663 | +def mon_relation(): |
664 | + bootstrap_key = relation_get('osd_bootstrap_key') |
665 | + if get_fsid() and get_auth() and bootstrap_key: |
666 | + log('mon has provided conf- scanning disks') |
667 | + emit_cephconf() |
668 | + ceph.import_osd_bootstrap_key(bootstrap_key) |
669 | + for dev in get_devices(): |
670 | + ceph.osdize(dev, config('osd-format'), |
671 | + config('osd-journal'), config('osd-reformat'), |
672 | + config('ignore-device-errors')) |
673 | + # Make it fast! |
674 | + ceph.tune_dev(dev) |
675 | + ceph.start_osds(get_devices()) |
676 | + else: |
677 | + log('mon cluster has not yet provided conf') |
678 | + |
679 | + |
680 | +@hooks.hook('upgrade-charm') |
681 | +def upgrade_charm(): |
682 | + if get_fsid() and get_auth(): |
683 | + emit_cephconf() |
684 | + install_upstart_scripts() |
685 | + apt_install(packages=filter_installed_packages(ceph.PACKAGES), |
686 | + fatal=True) |
687 | + |
688 | + |
689 | +@hooks.hook('nrpe-external-master-relation-joined', |
690 | + 'nrpe-external-master-relation-changed') |
691 | +def update_nrpe_config(): |
692 | + # python-dbus is used by check_upstart_job |
693 | + apt_install('python-dbus') |
694 | + hostname = nrpe.get_nagios_hostname() |
695 | + current_unit = nrpe.get_nagios_unit_name() |
696 | + nrpe_setup = nrpe.NRPE(hostname=hostname) |
697 | + nrpe_setup.add_check( |
698 | + shortname='ceph-osd', |
699 | + description='process check {%s}' % current_unit, |
700 | + check_cmd=('/bin/cat /var/lib/ceph/osd/ceph-*/whoami |' |
701 | + 'xargs -I@ status ceph-osd id=@ && exit 0 || exit 2') |
702 | + ) |
703 | + nrpe_setup.write() |
704 | + |
705 | + |
706 | +def assess_status(): |
707 | + """Assess status of current unit""" |
708 | + # Check for mon relation |
709 | + if len(relation_ids('mon')) < 1: |
710 | + status_set('blocked', 'Missing relation: monitor') |
711 | + return |
712 | + |
713 | + # Check for monitors with presented addresses |
714 | + # Check for bootstrap key presentation |
715 | + monitors = get_mon_hosts() |
716 | + if len(monitors) < 1 or not get_conf('osd_bootstrap_key'): |
717 | + status_set('waiting', 'Incomplete relation: monitor') |
718 | + return |
719 | + |
720 | + # Check for OSD device creation parity i.e. at least some devices |
721 | + # must have been presented and used for this charm to be operational |
722 | + running_osds = ceph.get_running_osds() |
723 | + if not running_osds: |
724 | + status_set('blocked', |
725 | + 'No block devices detected using current configuration') |
726 | + else: |
727 | + status_set('active', |
728 | + 'Unit is ready ({} OSD)'.format(len(running_osds))) |
729 | + |
730 | + |
731 | +if __name__ == '__main__': |
732 | + try: |
733 | + hooks.execute(sys.argv) |
734 | + except UnregisteredHookError as e: |
735 | + log('Unknown hook {} - skipping.'.format(e)) |
736 | + assess_status() |
737 | |
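`get_mon_hosts` above combines the relation address lookup, IPv6 bracketing and the fixed 6789 monitor port. The assembly step can be sketched without the relation calls; `format_mon_hosts` and the naive `':' in addr` check are illustrative stand-ins for the charm's `format_ipv6_addr`:

```python
def format_mon_hosts(addrs):
    """Mirror the tail of get_mon_hosts: wrap IPv6 addresses in brackets,
    append the monitor port, and return the list sorted."""
    hosts = []
    for addr in addrs:
        if ':' in addr:
            # crude IPv6 detection standing in for format_ipv6_addr()
            addr = '[{}]'.format(addr)
        hosts.append('{}:6789'.format(addr))
    return sorted(hosts)
```

Sorting matters here: it keeps the rendered mon host list in ceph.conf stable across hook invocations regardless of relation iteration order.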
738 | === renamed file 'hooks/ceph_hooks.py' => 'hooks/ceph_hooks.py.moved' |
739 | === added directory 'hooks/charmhelpers/cli' |
740 | === renamed directory 'hooks/charmhelpers/cli' => 'hooks/charmhelpers/cli.moved' |
741 | === added file 'hooks/charmhelpers/cli/__init__.py' |
742 | --- hooks/charmhelpers/cli/__init__.py 1970-01-01 00:00:00 +0000 |
743 | +++ hooks/charmhelpers/cli/__init__.py 2016-02-18 11:56:12 +0000 |
744 | @@ -0,0 +1,191 @@ |
745 | +# Copyright 2014-2015 Canonical Limited. |
746 | +# |
747 | +# This file is part of charm-helpers. |
748 | +# |
749 | +# charm-helpers is free software: you can redistribute it and/or modify |
750 | +# it under the terms of the GNU Lesser General Public License version 3 as |
751 | +# published by the Free Software Foundation. |
752 | +# |
753 | +# charm-helpers is distributed in the hope that it will be useful, |
754 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
755 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
756 | +# GNU Lesser General Public License for more details. |
757 | +# |
758 | +# You should have received a copy of the GNU Lesser General Public License |
759 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
760 | + |
761 | +import inspect |
762 | +import argparse |
763 | +import sys |
764 | + |
765 | +from six.moves import zip |
766 | + |
767 | +from charmhelpers.core import unitdata |
768 | + |
769 | + |
770 | +class OutputFormatter(object): |
771 | + def __init__(self, outfile=sys.stdout): |
772 | + self.formats = ( |
773 | + "raw", |
774 | + "json", |
775 | + "py", |
776 | + "yaml", |
777 | + "csv", |
778 | + "tab", |
779 | + ) |
780 | + self.outfile = outfile |
781 | + |
782 | + def add_arguments(self, argument_parser): |
783 | + formatgroup = argument_parser.add_mutually_exclusive_group() |
784 | + choices = self.supported_formats |
785 | + formatgroup.add_argument("--format", metavar='FMT', |
786 | + help="Select output format for returned data, " |
787 | + "where FMT is one of: {}".format(choices), |
788 | + choices=choices, default='raw') |
789 | + for fmt in self.formats: |
790 | + fmtfunc = getattr(self, fmt) |
791 | + formatgroup.add_argument("-{}".format(fmt[0]), |
792 | + "--{}".format(fmt), action='store_const', |
793 | + const=fmt, dest='format', |
794 | + help=fmtfunc.__doc__) |
795 | + |
796 | + @property |
797 | + def supported_formats(self): |
798 | + return self.formats |
799 | + |
800 | + def raw(self, output): |
801 | + """Output data as raw string (default)""" |
802 | + if isinstance(output, (list, tuple)): |
803 | + output = '\n'.join(map(str, output)) |
804 | + self.outfile.write(str(output)) |
805 | + |
806 | + def py(self, output): |
807 | + """Output data as a nicely-formatted python data structure""" |
808 | + import pprint |
809 | + pprint.pprint(output, stream=self.outfile) |
810 | + |
811 | + def json(self, output): |
812 | + """Output data in JSON format""" |
813 | + import json |
814 | + json.dump(output, self.outfile) |
815 | + |
816 | + def yaml(self, output): |
817 | + """Output data in YAML format""" |
818 | + import yaml |
819 | + yaml.safe_dump(output, self.outfile) |
820 | + |
821 | + def csv(self, output): |
822 | + """Output data as excel-compatible CSV""" |
823 | + import csv |
824 | + csvwriter = csv.writer(self.outfile) |
825 | + csvwriter.writerows(output) |
826 | + |
827 | + def tab(self, output): |
828 | + """Output data in excel-compatible tab-delimited format""" |
829 | + import csv |
830 | + csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab) |
831 | + csvwriter.writerows(output) |
832 | + |
833 | + def format_output(self, output, fmt='raw'): |
834 | + fmtfunc = getattr(self, fmt) |
835 | + fmtfunc(output) |
836 | + |
837 | + |
838 | +class CommandLine(object): |
839 | + argument_parser = None |
840 | + subparsers = None |
841 | + formatter = None |
842 | + exit_code = 0 |
843 | + |
844 | + def __init__(self): |
845 | + if not self.argument_parser: |
846 | + self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks') |
847 | + if not self.formatter: |
848 | + self.formatter = OutputFormatter() |
849 | + self.formatter.add_arguments(self.argument_parser) |
850 | + if not self.subparsers: |
851 | + self.subparsers = self.argument_parser.add_subparsers(help='Commands') |
852 | + |
853 | + def subcommand(self, command_name=None): |
854 | + """ |
855 | + Decorate a function as a subcommand. Use its arguments as the |
856 | + command-line arguments""" |
857 | + def wrapper(decorated): |
858 | + cmd_name = command_name or decorated.__name__ |
859 | + subparser = self.subparsers.add_parser(cmd_name, |
860 | + description=decorated.__doc__) |
861 | + for args, kwargs in describe_arguments(decorated): |
862 | + subparser.add_argument(*args, **kwargs) |
863 | + subparser.set_defaults(func=decorated) |
864 | + return decorated |
865 | + return wrapper |
866 | + |
867 | + def test_command(self, decorated): |
868 | + """ |
869 | + Subcommand is a boolean test function, so bool return values should be |
870 | + converted to a 0/1 exit code. |
871 | + """ |
872 | + decorated._cli_test_command = True |
873 | + return decorated |
874 | + |
875 | + def no_output(self, decorated): |
876 | + """ |
877 | + Subcommand is not expected to return a value, so don't print a spurious None. |
878 | + """ |
879 | + decorated._cli_no_output = True |
880 | + return decorated |
881 | + |
882 | + def subcommand_builder(self, command_name, description=None): |
883 | + """ |
884 | + Decorate a function that builds a subcommand. Builders should accept a |
885 | + single argument (the subparser instance) and return the function to be |
886 | + run as the command.""" |
887 | + def wrapper(decorated): |
888 | + subparser = self.subparsers.add_parser(command_name) |
889 | + func = decorated(subparser) |
890 | + subparser.set_defaults(func=func) |
891 | + subparser.description = description or func.__doc__ |
892 | + return wrapper |
893 | + |
894 | + def run(self): |
895 | + "Run cli, processing arguments and executing subcommands." |
896 | + arguments = self.argument_parser.parse_args() |
897 | + argspec = inspect.getargspec(arguments.func) |
898 | + vargs = [] |
899 | + for arg in argspec.args: |
900 | + vargs.append(getattr(arguments, arg)) |
901 | + if argspec.varargs: |
902 | + vargs.extend(getattr(arguments, argspec.varargs)) |
903 | + output = arguments.func(*vargs) |
904 | + if getattr(arguments.func, '_cli_test_command', False): |
905 | + self.exit_code = 0 if output else 1 |
906 | + output = '' |
907 | + if getattr(arguments.func, '_cli_no_output', False): |
908 | + output = '' |
909 | + self.formatter.format_output(output, arguments.format) |
910 | + if unitdata._KV: |
911 | + unitdata._KV.flush() |
912 | + |
913 | + |
914 | +cmdline = CommandLine() |
915 | + |
916 | + |
917 | +def describe_arguments(func): |
918 | + """ |
919 | + Analyze a function's signature and return a data structure suitable for |
920 | + passing in as arguments to an argparse parser's add_argument() method.""" |
921 | + |
922 | + argspec = inspect.getargspec(func) |
923 | + # we should probably raise an exception somewhere if func includes **kwargs |
924 | + if argspec.defaults: |
925 | + positional_args = argspec.args[:-len(argspec.defaults)] |
926 | + keyword_names = argspec.args[-len(argspec.defaults):] |
927 | + for arg, default in zip(keyword_names, argspec.defaults): |
928 | + yield ('--{}'.format(arg),), {'default': default} |
929 | + else: |
930 | + positional_args = argspec.args |
931 | + |
932 | + for arg in positional_args: |
933 | + yield (arg,), {} |
934 | + if argspec.varargs: |
935 | + yield (argspec.varargs,), {'nargs': '*'} |
936 | |
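`describe_arguments` is the heart of this CLI module: it maps a plain function signature onto argparse specs, parameters with defaults becoming `--options` and the rest positional. A sketch using `inspect.signature` (the diff itself uses the since-deprecated `getargspec`, and yields the options before the positionals rather than in signature order):

```python
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) pairs suitable for passing to an argparse
    parser's add_argument(), following the scheme in the diff."""
    for name, param in inspect.signature(func).parameters.items():
        if param.default is not inspect.Parameter.empty:
            # keyword argument -> optional flag carrying its default
            yield ('--{}'.format(name),), {'default': param.default}
        else:
            # no default -> positional argument
            yield (name,), {}
```

Registering `def deploy(service, count=1)` as a subcommand would therefore produce one positional `service` argument and one `--count` option defaulting to 1.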
937 | === added file 'hooks/charmhelpers/cli/benchmark.py' |
938 | --- hooks/charmhelpers/cli/benchmark.py 1970-01-01 00:00:00 +0000 |
939 | +++ hooks/charmhelpers/cli/benchmark.py 2016-02-18 11:56:12 +0000 |
940 | @@ -0,0 +1,36 @@ |
941 | +# Copyright 2014-2015 Canonical Limited. |
942 | +# |
943 | +# This file is part of charm-helpers. |
944 | +# |
945 | +# charm-helpers is free software: you can redistribute it and/or modify |
946 | +# it under the terms of the GNU Lesser General Public License version 3 as |
947 | +# published by the Free Software Foundation. |
948 | +# |
949 | +# charm-helpers is distributed in the hope that it will be useful, |
950 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
951 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
952 | +# GNU Lesser General Public License for more details. |
953 | +# |
954 | +# You should have received a copy of the GNU Lesser General Public License |
955 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
956 | + |
957 | +from . import cmdline |
958 | +from charmhelpers.contrib.benchmark import Benchmark |
959 | + |
960 | + |
961 | +@cmdline.subcommand(command_name='benchmark-start') |
962 | +def start(): |
963 | + Benchmark.start() |
964 | + |
965 | + |
966 | +@cmdline.subcommand(command_name='benchmark-finish') |
967 | +def finish(): |
968 | + Benchmark.finish() |
969 | + |
970 | + |
971 | +@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score") |
972 | +def service(subparser): |
973 | + subparser.add_argument("value", help="The composite score.") |
974 | + subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.") |
975 | + subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.") |
976 | + return Benchmark.set_composite_score |
977 | |
978 | === added file 'hooks/charmhelpers/cli/commands.py' |
979 | --- hooks/charmhelpers/cli/commands.py 1970-01-01 00:00:00 +0000 |
980 | +++ hooks/charmhelpers/cli/commands.py 2016-02-18 11:56:12 +0000 |
981 | @@ -0,0 +1,32 @@ |
982 | +# Copyright 2014-2015 Canonical Limited. |
983 | +# |
984 | +# This file is part of charm-helpers. |
985 | +# |
986 | +# charm-helpers is free software: you can redistribute it and/or modify |
987 | +# it under the terms of the GNU Lesser General Public License version 3 as |
988 | +# published by the Free Software Foundation. |
989 | +# |
990 | +# charm-helpers is distributed in the hope that it will be useful, |
991 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
992 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
993 | +# GNU Lesser General Public License for more details. |
994 | +# |
995 | +# You should have received a copy of the GNU Lesser General Public License |
996 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
997 | + |
998 | +""" |
999 | +This module loads sub-modules into the python runtime so they can be |
1000 | +discovered via the inspect module. In order to prevent flake8 from (rightfully) |
1001 | +telling us these are unused modules, throw a ' # noqa' at the end of each import |
1002 | +so that the warning is suppressed. |
1003 | +""" |
1004 | + |
1005 | +from . import CommandLine # noqa |
1006 | + |
1007 | +""" |
1008 | +Import the sub-modules which have decorated subcommands to register with chlp. |
1009 | +""" |
1010 | +from . import host # noqa |
1011 | +from . import benchmark # noqa |
1012 | +from . import unitdata # noqa |
1013 | +from . import hookenv # noqa |
1014 | |
1015 | === added file 'hooks/charmhelpers/cli/hookenv.py' |
1016 | --- hooks/charmhelpers/cli/hookenv.py 1970-01-01 00:00:00 +0000 |
1017 | +++ hooks/charmhelpers/cli/hookenv.py 2016-02-18 11:56:12 +0000 |
1018 | @@ -0,0 +1,23 @@ |
1019 | +# Copyright 2014-2015 Canonical Limited. |
1020 | +# |
1021 | +# This file is part of charm-helpers. |
1022 | +# |
1023 | +# charm-helpers is free software: you can redistribute it and/or modify |
1024 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1025 | +# published by the Free Software Foundation. |
1026 | +# |
1027 | +# charm-helpers is distributed in the hope that it will be useful, |
1028 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1029 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1030 | +# GNU Lesser General Public License for more details. |
1031 | +# |
1032 | +# You should have received a copy of the GNU Lesser General Public License |
1033 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1034 | + |
1035 | +from . import cmdline |
1036 | +from charmhelpers.core import hookenv |
1037 | + |
1038 | + |
1039 | +cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped) |
1040 | +cmdline.subcommand('service-name')(hookenv.service_name) |
1041 | +cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped) |
1042 | |
1043 | === added file 'hooks/charmhelpers/cli/host.py' |
1044 | --- hooks/charmhelpers/cli/host.py 1970-01-01 00:00:00 +0000 |
1045 | +++ hooks/charmhelpers/cli/host.py 2016-02-18 11:56:12 +0000 |
1046 | @@ -0,0 +1,31 @@ |
1047 | +# Copyright 2014-2015 Canonical Limited. |
1048 | +# |
1049 | +# This file is part of charm-helpers. |
1050 | +# |
1051 | +# charm-helpers is free software: you can redistribute it and/or modify |
1052 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1053 | +# published by the Free Software Foundation. |
1054 | +# |
1055 | +# charm-helpers is distributed in the hope that it will be useful, |
1056 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1057 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1058 | +# GNU Lesser General Public License for more details. |
1059 | +# |
1060 | +# You should have received a copy of the GNU Lesser General Public License |
1061 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1062 | + |
1063 | +from . import cmdline |
1064 | +from charmhelpers.core import host |
1065 | + |
1066 | + |
1067 | +@cmdline.subcommand() |
1068 | +def mounts(): |
1069 | + "List mounts" |
1070 | + return host.mounts() |
1071 | + |
1072 | + |
1073 | +@cmdline.subcommand_builder('service', description="Control system services") |
1074 | +def service(subparser): |
1075 | + subparser.add_argument("action", help="The action to perform (start, stop, etc...)") |
1076 | + subparser.add_argument("service_name", help="Name of the service to control") |
1077 | + return host.service |
1078 | |
1079 | === added file 'hooks/charmhelpers/cli/unitdata.py' |
1080 | --- hooks/charmhelpers/cli/unitdata.py 1970-01-01 00:00:00 +0000 |
1081 | +++ hooks/charmhelpers/cli/unitdata.py 2016-02-18 11:56:12 +0000 |
1082 | @@ -0,0 +1,39 @@ |
1083 | +# Copyright 2014-2015 Canonical Limited. |
1084 | +# |
1085 | +# This file is part of charm-helpers. |
1086 | +# |
1087 | +# charm-helpers is free software: you can redistribute it and/or modify |
1088 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1089 | +# published by the Free Software Foundation. |
1090 | +# |
1091 | +# charm-helpers is distributed in the hope that it will be useful, |
1092 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1093 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1094 | +# GNU Lesser General Public License for more details. |
1095 | +# |
1096 | +# You should have received a copy of the GNU Lesser General Public License |
1097 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1098 | + |
1099 | +from . import cmdline |
1100 | +from charmhelpers.core import unitdata |
1101 | + |
1102 | + |
1103 | +@cmdline.subcommand_builder('unitdata', description="Store and retrieve data") |
1104 | +def unitdata_cmd(subparser): |
1105 | + nested = subparser.add_subparsers() |
1106 | + get_cmd = nested.add_parser('get', help='Retrieve data') |
1107 | + get_cmd.add_argument('key', help='Key to retrieve the value of') |
1108 | + get_cmd.set_defaults(action='get', value=None) |
1109 | + set_cmd = nested.add_parser('set', help='Store data') |
1110 | + set_cmd.add_argument('key', help='Key to set') |
1111 | + set_cmd.add_argument('value', help='Value to store') |
1112 | + set_cmd.set_defaults(action='set') |
1113 | + |
1114 | + def _unitdata_cmd(action, key, value): |
1115 | + if action == 'get': |
1116 | + return unitdata.kv().get(key) |
1117 | + elif action == 'set': |
1118 | + unitdata.kv().set(key, value) |
1119 | + unitdata.kv().flush() |
1120 | + return '' |
1121 | + return _unitdata_cmd |
1122 | |
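The `unitdata` subcommand above relies on argparse's nested subparsers plus `set_defaults` to route `get` and `set` through a single dispatch function. Stripped of the `unitdata.kv()` store, the pattern looks like this:

```python
import argparse

# Nested-subparser pattern from unitdata_cmd: each sub-parser records the
# chosen action via dest='action', and 'get' pre-seeds value=None so both
# branches present the same (action, key, value) namespace.
parser = argparse.ArgumentParser(prog='unitdata')
nested = parser.add_subparsers(dest='action')
get_cmd = nested.add_parser('get', help='Retrieve data')
get_cmd.add_argument('key')
get_cmd.set_defaults(value=None)
set_cmd = nested.add_parser('set', help='Store data')
set_cmd.add_argument('key')
set_cmd.add_argument('value')

args = parser.parse_args(['set', 'color', 'blue'])
```

In the charm-helpers version the inner `_unitdata_cmd(action, key, value)` closure then receives exactly those three namespace attributes, courtesy of `describe_arguments`-style dispatch in `CommandLine.run`.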
1123 | === modified file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py' |
1124 | === modified file 'hooks/charmhelpers/contrib/network/ip.py' |
1125 | === added file 'hooks/charmhelpers/core/files.py' |
1126 | --- hooks/charmhelpers/core/files.py 1970-01-01 00:00:00 +0000 |
1127 | +++ hooks/charmhelpers/core/files.py 2016-02-18 11:56:12 +0000 |
1128 | @@ -0,0 +1,45 @@ |
1129 | +#!/usr/bin/env python |
1130 | +# -*- coding: utf-8 -*- |
1131 | + |
1132 | +# Copyright 2014-2015 Canonical Limited. |
1133 | +# |
1134 | +# This file is part of charm-helpers. |
1135 | +# |
1136 | +# charm-helpers is free software: you can redistribute it and/or modify |
1137 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1138 | +# published by the Free Software Foundation. |
1139 | +# |
1140 | +# charm-helpers is distributed in the hope that it will be useful, |
1141 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1142 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1143 | +# GNU Lesser General Public License for more details. |
1144 | +# |
1145 | +# You should have received a copy of the GNU Lesser General Public License |
1146 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1147 | + |
1148 | +__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>' |
1149 | + |
1150 | +import os |
1151 | +import subprocess |
1152 | + |
1153 | + |
1154 | +def sed(filename, before, after, flags='g'): |
1155 | + """ |
1156 | + Search and replaces the given pattern on filename. |
1157 | + |
1158 | + :param filename: relative or absolute file path. |
1159 | + :param before: expression to be replaced (see 'man sed') |
1160 | + :param after: expression to replace with (see 'man sed') |
1160 | + :param flags: sed-compatible regex flags; for example, to make |
1162 | + the search and replace case insensitive, specify ``flags="i"``. |
1163 | + The ``g`` flag is always specified regardless, so you do not |
1164 | + need to remember to include it when overriding this parameter. |
1164 | + :returns: 0 if the sed command exit code was zero; |
1165 | + otherwise raises CalledProcessError. |
1167 | + """ |
1168 | + expression = r's/{0}/{1}/{2}'.format(before, |
1169 | + after, flags) |
1170 | + |
1171 | + return subprocess.check_call(["sed", "-i", "-r", "-e", |
1172 | + expression, |
1173 | + os.path.expanduser(filename)]) |
1174 | |
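The `sed()` helper above shells out with an `s/before/after/flags` expression. The same substitution semantics can be approximated in pure Python, which also makes the effect of the `i` flag concrete; `sed_py` is an illustrative name, not part of the diff:

```python
import re


def sed_py(text, before, after, flags='g'):
    """Pure-Python stand-in for the sed() helper: applies the same
    s/before/after/flags substitution to a string instead of a file."""
    # sed's 'g' flag means replace all occurrences; without it, only the
    # first match per input is replaced (count=1 for re.sub)
    count = 0 if 'g' in flags else 1
    re_flags = re.IGNORECASE if 'i' in flags else 0
    return re.sub(before, after, text, count=count, flags=re_flags)
```

The real helper always forces `g` on (it is baked into the expression template), which is why the docstring says you never need to pass it explicitly.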
1175 | === renamed file 'hooks/charmhelpers/core/files.py' => 'hooks/charmhelpers/core/files.py.moved' |
1176 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
1177 | --- hooks/charmhelpers/core/hookenv.py 2016-01-22 22:29:34 +0000 |
1178 | +++ hooks/charmhelpers/core/hookenv.py 2016-02-18 11:56:12 +0000 |
1179 | @@ -491,6 +491,7 @@ |
1180 | |
1181 | |
1182 | @cached |
1183 | +<<<<<<< TREE |
1184 | def peer_relation_id(): |
1185 | '''Get the peers relation id if a peers relation has been joined, else None.''' |
1186 | md = metadata() |
1187 | @@ -561,6 +562,65 @@ |
1188 | |
1189 | |
1190 | @cached |
1191 | +======= |
1192 | +def relation_to_interface(relation_name): |
1193 | + """ |
1194 | + Given the name of a relation, return the interface that relation uses. |
1195 | + |
1196 | + :returns: The interface name, or ``None``. |
1197 | + """ |
1198 | + return relation_to_role_and_interface(relation_name)[1] |
1199 | + |
1200 | + |
1201 | +@cached |
1202 | +def relation_to_role_and_interface(relation_name): |
1203 | + """ |
1204 | + Given the name of a relation, return the role and the name of the interface |
1205 | + that relation uses (where role is one of ``provides``, ``requires``, or ``peer``). |
1206 | + |
1207 | + :returns: A tuple containing ``(role, interface)``, or ``(None, None)``. |
1208 | + """ |
1209 | + _metadata = metadata() |
1210 | + for role in ('provides', 'requires', 'peer'): |
1211 | + interface = _metadata.get(role, {}).get(relation_name, {}).get('interface') |
1212 | + if interface: |
1213 | + return role, interface |
1214 | + return None, None |
1215 | + |
1216 | + |
1217 | +@cached |
1218 | +def role_and_interface_to_relations(role, interface_name): |
1219 | + """ |
1220 | + Given a role and interface name, return a list of relation names for the |
1221 | + current charm that use that interface under that role (where role is one |
1222 | + of ``provides``, ``requires``, or ``peer``). |
1223 | + |
1224 | + :returns: A list of relation names. |
1225 | + """ |
1226 | + _metadata = metadata() |
1227 | + results = [] |
1228 | + for relation_name, relation in _metadata.get(role, {}).items(): |
1229 | + if relation['interface'] == interface_name: |
1230 | + results.append(relation_name) |
1231 | + return results |
1232 | + |
1233 | + |
1234 | +@cached |
1235 | +def interface_to_relations(interface_name): |
1236 | + """ |
1237 | + Given an interface, return a list of relation names for the current |
1238 | + charm that use that interface. |
1239 | + |
1240 | + :returns: A list of relation names. |
1241 | + """ |
1242 | + results = [] |
1243 | + for role in ('provides', 'requires', 'peer'): |
1244 | + results.extend(role_and_interface_to_relations(role, interface_name)) |
1245 | + return results |
1246 | + |
1247 | + |
1248 | +@cached |
1249 | +>>>>>>> MERGE-SOURCE |
1250 | def charm_name(): |
1251 | """Get the name of the current charm as is specified on metadata.yaml""" |
1252 | return metadata().get('name') |
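The MERGE-SOURCE side of this hunk adds metadata-driven relation lookups. The core of `relation_to_role_and_interface` is a scan over the three relation roles; shown here with the metadata dict passed in explicitly instead of being read from metadata.yaml:

```python
def relation_to_role_and_interface(relation_name, metadata):
    """Return (role, interface) for a relation name, scanning the three
    charm relation roles in order; (None, None) when the relation is
    unknown. Matches the MERGE-SOURCE logic, minus the metadata() call."""
    for role in ('provides', 'requires', 'peer'):
        interface = metadata.get(role, {}).get(relation_name, {}).get('interface')
        if interface:
            return role, interface
    return None, None
```

`relation_to_interface` in the diff is then just index `[1]` of this tuple, and the reverse helpers (`role_and_interface_to_relations`, `interface_to_relations`) walk the same structure in the opposite direction.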
1253 | @@ -766,6 +826,7 @@ |
1254 | |
1255 | The results set by action_set are preserved.""" |
1256 | subprocess.check_call(['action-fail', message]) |
1257 | +<<<<<<< TREE |
1258 | |
1259 | |
1260 | def action_name(): |
1261 | @@ -976,3 +1037,180 @@ |
1262 | for callback, args, kwargs in reversed(_atexit): |
1263 | callback(*args, **kwargs) |
1264 | del _atexit[:] |
1265 | +======= |
1266 | + |
1267 | + |
1268 | +def action_name(): |
1269 | + """Get the name of the currently executing action.""" |
1270 | + return os.environ.get('JUJU_ACTION_NAME') |
1271 | + |
1272 | + |
1273 | +def action_uuid(): |
1274 | + """Get the UUID of the currently executing action.""" |
1275 | + return os.environ.get('JUJU_ACTION_UUID') |
1276 | + |
1277 | + |
1278 | +def action_tag(): |
1279 | + """Get the tag for the currently executing action.""" |
1280 | + return os.environ.get('JUJU_ACTION_TAG') |
1281 | + |
1282 | + |
1283 | +def status_set(workload_state, message): |
1284 | + """Set the workload state with a message |
1285 | + |
1286 | + Use status-set to set the workload state with a message which is visible |
1287 | + to the user via juju status. If the status-set command is not found then |
1288 | + assume this is juju < 1.23 and juju-log the message instead. |
1289 | + |
1290 | + workload_state -- valid juju workload state. |
1291 | + message -- status update message |
1292 | + """ |
1293 | + valid_states = ['maintenance', 'blocked', 'waiting', 'active'] |
1294 | + if workload_state not in valid_states: |
1295 | + raise ValueError( |
1296 | + '{!r} is not a valid workload state'.format(workload_state) |
1297 | + ) |
1298 | + cmd = ['status-set', workload_state, message] |
1299 | + try: |
1300 | + ret = subprocess.call(cmd) |
1301 | + if ret == 0: |
1302 | + return |
1303 | + except OSError as e: |
1304 | + if e.errno != errno.ENOENT: |
1305 | + raise |
1306 | + log_message = 'status-set failed: {} {}'.format(workload_state, |
1307 | + message) |
1308 | + log(log_message, level='INFO') |
1309 | + |
1310 | + |
1311 | +def status_get(): |
1312 | + """Retrieve the previously set juju workload state and message |
1313 | + |
1314 | + If the status-get command is not found then assume this is juju < 1.23 and |
1315 | + return 'unknown', "" |
1316 | + |
1317 | + """ |
1318 | + cmd = ['status-get', "--format=json", "--include-data"] |
1319 | + try: |
1320 | + raw_status = subprocess.check_output(cmd) |
1321 | + except OSError as e: |
1322 | + if e.errno == errno.ENOENT: |
1323 | + return ('unknown', "") |
1324 | + else: |
1325 | + raise |
1326 | + else: |
1327 | + status = json.loads(raw_status.decode("UTF-8")) |
1328 | + return (status["status"], status["message"]) |
1329 | + |
1330 | + |
1331 | +def translate_exc(from_exc, to_exc): |
1332 | + def inner_translate_exc1(f): |
1333 | + def inner_translate_exc2(*args, **kwargs): |
1334 | + try: |
1335 | + return f(*args, **kwargs) |
1336 | + except from_exc: |
1337 | + raise to_exc |
1338 | + |
1339 | + return inner_translate_exc2 |
1340 | + |
1341 | + return inner_translate_exc1 |
1342 | + |
1343 | + |
1344 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1345 | +def is_leader(): |
1346 | + """Does the current unit hold the juju leadership |
1347 | + |
1348 | + Uses juju to determine whether the current unit is the leader of its peers |
1349 | + """ |
1350 | + cmd = ['is-leader', '--format=json'] |
1351 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
1352 | + |
1353 | + |
1354 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1355 | +def leader_get(attribute=None): |
1356 | + """Juju leader get value(s)""" |
1357 | + cmd = ['leader-get', '--format=json'] + [attribute or '-'] |
1358 | + return json.loads(subprocess.check_output(cmd).decode('UTF-8')) |
1359 | + |
1360 | + |
1361 | +@translate_exc(from_exc=OSError, to_exc=NotImplementedError) |
1362 | +def leader_set(settings=None, **kwargs): |
1363 | + """Juju leader set value(s)""" |
1364 | + # Don't log secrets. |
1365 | + # log("Juju leader-set '%s'" % (settings), level=DEBUG) |
1366 | + cmd = ['leader-set'] |
1367 | + settings = settings or {} |
1368 | + settings.update(kwargs) |
1369 | + for k, v in settings.items(): |
1370 | + if v is None: |
1371 | + cmd.append('{}='.format(k)) |
1372 | + else: |
1373 | + cmd.append('{}={}'.format(k, v)) |
1374 | + subprocess.check_call(cmd) |
1375 | + |
1376 | + |
1377 | +@cached |
1378 | +def juju_version(): |
1379 | + """Full version string (eg. '1.23.3.1-trusty-amd64')""" |
1380 | + # Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1 |
1381 | + jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0] |
1382 | + return subprocess.check_output([jujud, 'version'], |
1383 | + universal_newlines=True).strip() |
1384 | + |
1385 | + |
1386 | +@cached |
1387 | +def has_juju_version(minimum_version): |
1388 | + """Return True if the Juju version is at least the provided version""" |
1389 | + return LooseVersion(juju_version()) >= LooseVersion(minimum_version) |
1390 | + |
1391 | + |
1392 | +_atexit = [] |
1393 | +_atstart = [] |
1394 | + |
1395 | + |
1396 | +def atstart(callback, *args, **kwargs): |
1397 | + '''Schedule a callback to run before the main hook. |
1398 | + |
1399 | + Callbacks are run in the order they were added. |
1400 | + |
1401 | + This is useful for modules and classes to perform initialization |
1402 | + and inject behavior. In particular: |
1403 | + |
1404 | + - Run common code before all of your hooks, such as logging |
1405 | + the hook name or interesting relation data. |
1406 | + - Defer object or module initialization that requires a hook |
1407 | + context until we know there actually is a hook context, |
1408 | + making testing easier. |
1409 | + - Rather than requiring charm authors to include boilerplate to |
1410 | + invoke your helper's behavior, have it run automatically if |
1411 | + your object is instantiated or module imported. |
1412 | + |
1413 | + This is not at all useful after your hook framework has been launched. |
1414 | + ''' |
1415 | + global _atstart |
1416 | + _atstart.append((callback, args, kwargs)) |
1417 | + |
1418 | + |
1419 | +def atexit(callback, *args, **kwargs): |
1420 | + '''Schedule a callback to run on successful hook completion. |
1421 | + |
1422 | + Callbacks are run in the reverse order that they were added.''' |
1423 | + _atexit.append((callback, args, kwargs)) |
1424 | + |
1425 | + |
1426 | +def _run_atstart(): |
1427 | + '''Hook frameworks must invoke this before running the main hook body.''' |
1428 | + global _atstart |
1429 | + for callback, args, kwargs in _atstart: |
1430 | + callback(*args, **kwargs) |
1431 | + del _atstart[:] |
1432 | + |
1433 | + |
1434 | +def _run_atexit(): |
1435 | + '''Hook frameworks must invoke this after the main hook body has |
1436 | + successfully completed. Do not invoke it if the hook fails.''' |
1437 | + global _atexit |
1438 | + for callback, args, kwargs in reversed(_atexit): |
1439 | + callback(*args, **kwargs) |
1440 | + del _atexit[:] |
1441 | +>>>>>>> MERGE-SOURCE |
1442 | |
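The `translate_exc` helper added in the hunk above wraps a function so that one exception type is re-raised as another; the charm code uses it so that a missing `is-leader`/`leader-get` tool (an `OSError` from `subprocess`) surfaces as `NotImplementedError` on older juju. A standalone sketch of the same pattern (`run_missing_tool` is illustrative, not part of charm-helpers):

```python
def translate_exc(from_exc, to_exc):
    # Decorator factory: calls to the wrapped function that raise
    # from_exc are re-raised as to_exc, so callers can handle
    # "tool not available" uniformly.
    def inner_translate_exc1(f):
        def inner_translate_exc2(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except from_exc:
                raise to_exc
        return inner_translate_exc2
    return inner_translate_exc1


@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def run_missing_tool():
    # Simulates invoking a juju tool that is absent on older versions.
    raise OSError("command not found")


try:
    run_missing_tool()
except NotImplementedError:
    print("translated")
```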
1443 | === modified file 'hooks/charmhelpers/core/host.py' |
1444 | --- hooks/charmhelpers/core/host.py 2016-01-22 22:29:34 +0000 |
1445 | +++ hooks/charmhelpers/core/host.py 2016-02-18 11:56:12 +0000 |
1446 | @@ -63,6 +63,7 @@ |
1447 | return service_result |
1448 | |
1449 | |
1450 | +<<<<<<< TREE |
1451 | def service_pause(service_name, init_dir="/etc/init", initd_dir="/etc/init.d"): |
1452 | """Pause a system service. |
1453 | |
1454 | @@ -117,6 +118,38 @@ |
1455 | return started |
1456 | |
1457 | |
1458 | +======= |
1459 | +def service_pause(service_name, init_dir=None): |
1460 | + """Pause a system service. |
1461 | + |
1462 | + Stop it, and prevent it from starting again at boot.""" |
1463 | + if init_dir is None: |
1464 | + init_dir = "/etc/init" |
1465 | + stopped = service_stop(service_name) |
1466 | + # XXX: Support systemd too |
1467 | + override_path = os.path.join( |
1468 | + init_dir, '{}.override'.format(service_name)) |
1469 | + with open(override_path, 'w') as fh: |
1470 | + fh.write("manual\n") |
1471 | + return stopped |
1472 | + |
1473 | + |
1474 | +def service_resume(service_name, init_dir=None): |
1475 | + """Resume a system service. |
1476 | + |
1477 | + Re-enable starting again at boot, then start the service.""" |
1478 | + # XXX: Support systemd too |
1479 | + if init_dir is None: |
1480 | + init_dir = "/etc/init" |
1481 | + override_path = os.path.join( |
1482 | + init_dir, '{}.override'.format(service_name)) |
1483 | + if os.path.exists(override_path): |
1484 | + os.unlink(override_path) |
1485 | + started = service_start(service_name) |
1486 | + return started |
1487 | + |
1488 | + |
1489 | +>>>>>>> MERGE-SOURCE |
1490 | def service(action, service_name): |
1491 | """Control a system service""" |
1492 | if init_is_systemd(): |
1493 | @@ -376,6 +409,7 @@ |
1494 | return None |
1495 | |
1496 | |
1497 | +<<<<<<< TREE |
1498 | def path_hash(path): |
1499 | """Generate a hash checksum of all files matching 'path'. Standard |
1500 | wildcards like '*' and '?' are supported, see documentation for the 'glob' |
1501 | @@ -390,6 +424,23 @@ |
1502 | } |
1503 | |
1504 | |
1505 | +======= |
1506 | +def path_hash(path): |
1507 | + """ |
1508 | + Generate a hash checksum of all files matching 'path'. Standard wildcards |
1509 | + like '*' and '?' are supported, see documentation for the 'glob' module for |
1510 | + more information. |
1511 | + |
1512 | + :return: dict: A { filename: hash } dictionary for all matched files. |
1513 | + Empty if none found. |
1514 | + """ |
1515 | + return { |
1516 | + filename: file_hash(filename) |
1517 | + for filename in glob.iglob(path) |
1518 | + } |
1519 | + |
1520 | + |
1521 | +>>>>>>> MERGE-SOURCE |
1522 | def check_hash(path, checksum, hash_type='md5'): |
1523 | """Validate a file using a cryptographic checksum. |
1524 | |
1525 | @@ -475,6 +526,7 @@ |
1526 | return(''.join(random_chars)) |
1527 | |
1528 | |
1529 | +<<<<<<< TREE |
1530 | def is_phy_iface(interface): |
1531 | """Returns True if interface is not virtual, otherwise False.""" |
1532 | if interface: |
1533 | @@ -513,6 +565,46 @@ |
1534 | |
1535 | def list_nics(nic_type=None): |
1536 | """Return a list of nics of given type(s)""" |
1537 | +======= |
1538 | +def is_phy_iface(interface): |
1539 | + """Returns True if interface is not virtual, otherwise False.""" |
1540 | + if interface: |
1541 | + sys_net = '/sys/class/net' |
1542 | + if os.path.isdir(sys_net): |
1543 | + for iface in glob.glob(os.path.join(sys_net, '*')): |
1544 | + if '/virtual/' in os.path.realpath(iface): |
1545 | + continue |
1546 | + |
1547 | + if interface == os.path.basename(iface): |
1548 | + return True |
1549 | + |
1550 | + return False |
1551 | + |
1552 | + |
1553 | +def get_bond_master(interface): |
1554 | + """Returns bond master if interface is bond slave otherwise None. |
1555 | + |
1556 | + NOTE: the provided interface is expected to be physical |
1557 | + """ |
1558 | + if interface: |
1559 | + iface_path = '/sys/class/net/%s' % (interface) |
1560 | + if os.path.exists(iface_path): |
1561 | + if '/virtual/' in os.path.realpath(iface_path): |
1562 | + return None |
1563 | + |
1564 | + master = os.path.join(iface_path, 'master') |
1565 | + if os.path.exists(master): |
1566 | + master = os.path.realpath(master) |
1567 | + # make sure it is a bond master |
1568 | + if os.path.exists(os.path.join(master, 'bonding')): |
1569 | + return os.path.basename(master) |
1570 | + |
1571 | + return None |
1572 | + |
1573 | + |
1574 | +def list_nics(nic_type=None): |
1575 | + '''Return a list of nics of given type(s)''' |
1576 | +>>>>>>> MERGE-SOURCE |
1577 | if isinstance(nic_type, six.string_types): |
1578 | int_types = [nic_type] |
1579 | else: |
1580 | |
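The MERGE-SOURCE versions of `service_pause`/`service_resume` above reduce to managing an upstart override file: writing `manual` into `<service>.override` blocks the job at boot, and deleting the file re-enables it. A sketch with the actual `service_stop`/`service_start` calls left out, using a temporary directory instead of `/etc/init`:

```python
import os
import tempfile


def write_override(service_name, init_dir):
    # Mirrors service_pause: an upstart override containing "manual"
    # prevents the job from being started automatically.
    path = os.path.join(init_dir, '{}.override'.format(service_name))
    with open(path, 'w') as fh:
        fh.write("manual\n")
    return path


def remove_override(service_name, init_dir):
    # Mirrors service_resume: removing the override re-enables the job.
    path = os.path.join(init_dir, '{}.override'.format(service_name))
    if os.path.exists(path):
        os.unlink(path)
    return path


init_dir = tempfile.mkdtemp()
p = write_override('ceph-osd', init_dir)   # 'ceph-osd' is illustrative
print(open(p).read().strip())
remove_override('ceph-osd', init_dir)
print(os.path.exists(p))
```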
1581 | === added file 'hooks/charmhelpers/core/hugepage.py' |
1582 | --- hooks/charmhelpers/core/hugepage.py 1970-01-01 00:00:00 +0000 |
1583 | +++ hooks/charmhelpers/core/hugepage.py 2016-02-18 11:56:12 +0000 |
1584 | @@ -0,0 +1,62 @@ |
1585 | +# -*- coding: utf-8 -*- |
1586 | + |
1587 | +# Copyright 2014-2015 Canonical Limited. |
1588 | +# |
1589 | +# This file is part of charm-helpers. |
1590 | +# |
1591 | +# charm-helpers is free software: you can redistribute it and/or modify |
1592 | +# it under the terms of the GNU Lesser General Public License version 3 as |
1593 | +# published by the Free Software Foundation. |
1594 | +# |
1595 | +# charm-helpers is distributed in the hope that it will be useful, |
1596 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
1597 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
1598 | +# GNU Lesser General Public License for more details. |
1599 | +# |
1600 | +# You should have received a copy of the GNU Lesser General Public License |
1601 | +# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>. |
1602 | + |
1603 | +import yaml |
1604 | +from charmhelpers.core import fstab |
1605 | +from charmhelpers.core import sysctl |
1606 | +from charmhelpers.core.host import ( |
1607 | + add_group, |
1608 | + add_user_to_group, |
1609 | + fstab_mount, |
1610 | + mkdir, |
1611 | +) |
1612 | + |
1613 | + |
1614 | +def hugepage_support(user, group='hugetlb', nr_hugepages=256, |
1615 | + max_map_count=65536, mnt_point='/run/hugepages/kvm', |
1616 | + pagesize='2MB', mount=True): |
1617 | + """Enable hugepages on system. |
1618 | + |
1619 | + Args: |
1620 | + user (str) -- Username to grant access to hugepages |
1621 | + group (str) -- Group name to own hugepages |
1622 | + nr_hugepages (int) -- Number of pages to reserve |
1623 | + max_map_count (int) -- Number of Virtual Memory Areas a process can own |
1624 | + mnt_point (str) -- Directory to mount hugepages on |
1625 | + pagesize (str) -- Size of hugepages |
1626 | + mount (bool) -- Whether to mount hugepages |
1627 | + """ |
1628 | + group_info = add_group(group) |
1629 | + gid = group_info.gr_gid |
1630 | + add_user_to_group(user, group) |
1631 | + sysctl_settings = { |
1632 | + 'vm.nr_hugepages': nr_hugepages, |
1633 | + 'vm.max_map_count': max_map_count, |
1634 | + 'vm.hugetlb_shm_group': gid, |
1635 | + } |
1636 | + sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf') |
1637 | + mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False) |
1638 | + lfstab = fstab.Fstab() |
1639 | + fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point) |
1640 | + if fstab_entry: |
1641 | + lfstab.remove_entry(fstab_entry) |
1642 | + entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs', |
1643 | + 'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0) |
1644 | + lfstab.add_entry(entry) |
1645 | + if mount: |
1646 | + fstab_mount(mnt_point) |
1647 | |
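`hugepage_support` above serialises its three sysctl settings to YAML and hands them to `sysctl.create()`, which (as I understand the charm-helpers sysctl module) parses them back and writes one `key=value` line per setting into `/etc/sysctl.d/10-hugepage.conf`. A dependency-free sketch of the resulting file contents (the `gid` value here is illustrative; the real code uses the gid returned by `add_group()`):

```python
def render_sysctl_conf(settings):
    # One "key=value" line per setting, as sysctl.d expects; sorted so
    # the output is deterministic for comparison.
    return "\n".join(
        "{}={}".format(k, settings[k]) for k in sorted(settings)
    ) + "\n"


gid = 104  # illustrative group id
settings = {
    'vm.nr_hugepages': 256,
    'vm.max_map_count': 65536,
    'vm.hugetlb_shm_group': gid,
}
print(render_sysctl_conf(settings))
```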
1648 | === renamed file 'hooks/charmhelpers/core/hugepage.py' => 'hooks/charmhelpers/core/hugepage.py.moved' |
1649 | === modified file 'hooks/charmhelpers/core/services/helpers.py' |
1650 | --- hooks/charmhelpers/core/services/helpers.py 2016-01-22 22:29:34 +0000 |
1651 | +++ hooks/charmhelpers/core/services/helpers.py 2016-02-18 11:56:12 +0000 |
1652 | @@ -247,22 +247,36 @@ |
1653 | :param str owner: The owner of the rendered file |
1654 | :param str group: The group of the rendered file |
1655 | :param int perms: The permissions of the rendered file |
1656 | +<<<<<<< TREE |
1657 | :param partial on_change_action: functools partial to be executed when |
1658 | rendered file changes |
1659 | :param jinja2 loader template_loader: A jinja2 template loader |
1660 | |
1661 | :return str: The rendered template |
1662 | +======= |
1663 | + :param partial on_change_action: functools partial to be executed when |
1664 | + rendered file changes |
1665 | +>>>>>>> MERGE-SOURCE |
1666 | """ |
1667 | def __init__(self, source, target, |
1668 | +<<<<<<< TREE |
1669 | owner='root', group='root', perms=0o444, |
1670 | on_change_action=None, template_loader=None): |
1671 | +======= |
1672 | + owner='root', group='root', perms=0o444, |
1673 | + on_change_action=None): |
1674 | +>>>>>>> MERGE-SOURCE |
1675 | self.source = source |
1676 | self.target = target |
1677 | self.owner = owner |
1678 | self.group = group |
1679 | self.perms = perms |
1680 | +<<<<<<< TREE |
1681 | self.on_change_action = on_change_action |
1682 | self.template_loader = template_loader |
1683 | +======= |
1684 | + self.on_change_action = on_change_action |
1685 | +>>>>>>> MERGE-SOURCE |
1686 | |
1687 | def __call__(self, manager, service_name, event_name): |
1688 | pre_checksum = '' |
1689 | @@ -272,6 +286,7 @@ |
1690 | context = {'ctx': {}} |
1691 | for ctx in service.get('required_data', []): |
1692 | context.update(ctx) |
1693 | +<<<<<<< TREE |
1694 | context['ctx'].update(ctx) |
1695 | |
1696 | result = templating.render(self.source, self.target, context, |
1697 | @@ -286,6 +301,17 @@ |
1698 | self.on_change_action() |
1699 | |
1700 | return result |
1701 | +======= |
1702 | + templating.render(self.source, self.target, context, |
1703 | + self.owner, self.group, self.perms) |
1704 | + if self.on_change_action: |
1705 | + if pre_checksum == host.file_hash(self.target): |
1706 | + hookenv.log( |
1707 | + 'No change detected: {}'.format(self.target), |
1708 | + hookenv.DEBUG) |
1709 | + else: |
1710 | + self.on_change_action() |
1711 | +>>>>>>> MERGE-SOURCE |
1712 | |
1713 | |
1714 | # Convenience aliases for templates |
1715 | |
1716 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1717 | --- hooks/charmhelpers/fetch/__init__.py 2016-01-22 22:29:34 +0000 |
1718 | +++ hooks/charmhelpers/fetch/__init__.py 2016-02-18 11:56:12 +0000 |
1719 | @@ -90,6 +90,7 @@ |
1720 | 'kilo/proposed': 'trusty-proposed/kilo', |
1721 | 'trusty-kilo/proposed': 'trusty-proposed/kilo', |
1722 | 'trusty-proposed/kilo': 'trusty-proposed/kilo', |
1723 | +<<<<<<< TREE |
1724 | # Liberty |
1725 | 'liberty': 'trusty-updates/liberty', |
1726 | 'trusty-liberty': 'trusty-updates/liberty', |
1727 | @@ -106,6 +107,16 @@ |
1728 | 'mitaka/proposed': 'trusty-proposed/mitaka', |
1729 | 'trusty-mitaka/proposed': 'trusty-proposed/mitaka', |
1730 | 'trusty-proposed/mitaka': 'trusty-proposed/mitaka', |
1731 | +======= |
1732 | + # Liberty |
1733 | + 'liberty': 'trusty-updates/liberty', |
1734 | + 'trusty-liberty': 'trusty-updates/liberty', |
1735 | + 'trusty-liberty/updates': 'trusty-updates/liberty', |
1736 | + 'trusty-updates/liberty': 'trusty-updates/liberty', |
1737 | + 'liberty/proposed': 'trusty-proposed/liberty', |
1738 | + 'trusty-liberty/proposed': 'trusty-proposed/liberty', |
1739 | + 'trusty-proposed/liberty': 'trusty-proposed/liberty', |
1740 | +>>>>>>> MERGE-SOURCE |
1741 | } |
1742 | |
1743 | # The order of this list is very important. Handlers should be listed in from |
1744 | @@ -231,6 +242,7 @@ |
1745 | _run_apt_command(cmd, fatal) |
1746 | |
1747 | |
1748 | +<<<<<<< TREE |
1749 | def apt_mark(packages, mark, fatal=False): |
1750 | """Flag one or more packages using apt-mark""" |
1751 | log("Marking {} as {}".format(packages, mark)) |
1752 | @@ -246,6 +258,23 @@ |
1753 | subprocess.call(cmd, universal_newlines=True) |
1754 | |
1755 | |
1756 | +======= |
1757 | +def apt_mark(packages, mark, fatal=False): |
1758 | + """Flag one or more packages using apt-mark""" |
1759 | + cmd = ['apt-mark', mark] |
1760 | + if isinstance(packages, six.string_types): |
1761 | + cmd.append(packages) |
1762 | + else: |
1763 | + cmd.extend(packages) |
1764 | + log("Holding {}".format(packages)) |
1765 | + |
1766 | + if fatal: |
1767 | + subprocess.check_call(cmd, universal_newlines=True) |
1768 | + else: |
1769 | + subprocess.call(cmd, universal_newlines=True) |
1770 | + |
1771 | + |
1772 | +>>>>>>> MERGE-SOURCE |
1773 | def apt_hold(packages, fatal=False): |
1774 | return apt_mark(packages, 'hold', fatal=fatal) |
1775 | |
1776 | |
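The MERGE-SOURCE `apt_mark` above mostly just assembles an argv before shelling out, handling both a single package name and an iterable of names. That construction can be sketched and checked without running `apt-mark` at all (this simplifies the original's `six.string_types` check to `str`, i.e. Python 3 only):

```python
def build_apt_mark_cmd(packages, mark):
    # Mirrors apt_mark's argv construction: a single package name is
    # appended as-is, an iterable of names is extended onto the command.
    cmd = ['apt-mark', mark]
    if isinstance(packages, str):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd


print(build_apt_mark_cmd('ceph', 'hold'))
print(build_apt_mark_cmd(['ceph', 'ceph-common'], 'unhold'))
```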
1777 | === modified file 'hooks/charmhelpers/fetch/archiveurl.py' |
1778 | === modified file 'hooks/charmhelpers/fetch/giturl.py' |
1779 | --- hooks/charmhelpers/fetch/giturl.py 2016-01-22 22:29:34 +0000 |
1780 | +++ hooks/charmhelpers/fetch/giturl.py 2016-02-18 11:56:12 +0000 |
1781 | @@ -41,10 +41,15 @@ |
1782 | else: |
1783 | return True |
1784 | |
1785 | +<<<<<<< TREE |
1786 | def clone(self, source, dest, branch="master", depth=None): |
1787 | +======= |
1788 | + def clone(self, source, dest, branch, depth=None): |
1789 | +>>>>>>> MERGE-SOURCE |
1790 | if not self.can_handle(source): |
1791 | raise UnhandledSource("Cannot handle {}".format(source)) |
1792 | |
1793 | +<<<<<<< TREE |
1794 | if os.path.exists(dest): |
1795 | cmd = ['git', '-C', dest, 'pull', source, branch] |
1796 | else: |
1797 | @@ -52,6 +57,12 @@ |
1798 | if depth: |
1799 | cmd.extend(['--depth', depth]) |
1800 | check_call(cmd) |
1801 | +======= |
1802 | + if depth: |
1803 | + Repo.clone_from(source, dest, branch=branch, depth=depth) |
1804 | + else: |
1805 | + Repo.clone_from(source, dest, branch=branch) |
1806 | +>>>>>>> MERGE-SOURCE |
1807 | |
1808 | def install(self, source, branch="master", dest=None, depth=None): |
1809 | url_parts = self.parse_url(source) |
1810 | @@ -62,9 +73,15 @@ |
1811 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", |
1812 | branch_name) |
1813 | try: |
1814 | +<<<<<<< TREE |
1815 | self.clone(source, dest_dir, branch, depth) |
1816 | except CalledProcessError as e: |
1817 | raise UnhandledSource(e) |
1818 | +======= |
1819 | + self.clone(source, dest_dir, branch, depth) |
1820 | + except GitCommandError as e: |
1821 | + raise UnhandledSource(e) |
1822 | +>>>>>>> MERGE-SOURCE |
1823 | except OSError as e: |
1824 | raise UnhandledSource(e.strerror) |
1825 | return dest_dir |
1826 | |
1827 | === modified file 'hooks/install' |
1828 | --- hooks/install 2015-09-22 13:35:49 +0000 |
1829 | +++ hooks/install 2016-02-18 11:56:12 +0000 |
1830 | @@ -1,3 +1,4 @@ |
1831 | +<<<<<<< TREE |
1832 | #!/bin/bash |
1833 | # Wrapper to deal with newer Ubuntu versions that don't have py2 installed |
1834 | # by default. |
1835 | @@ -18,3 +19,25 @@ |
1836 | done |
1837 | |
1838 | exec ./hooks/install.real |
1839 | +======= |
1840 | +#!/bin/bash |
1841 | +# Wrapper to deal with newer Ubuntu versions that don't have py2 installed |
1842 | +# by default. |
1843 | + |
1844 | +declare -a DEPS=('apt' 'netaddr' 'netifaces' 'pip' 'yaml' 'enum34') |
1845 | + |
1846 | +check_and_install() { |
1847 | + pkg="${1}-${2}" |
1848 | + if ! dpkg -s ${pkg} > /dev/null 2>&1; then |
1849 | + apt-get -y install ${pkg} |
1850 | + fi |
1851 | +} |
1852 | + |
1853 | +PYTHON="python" |
1854 | + |
1855 | +for dep in ${DEPS[@]}; do |
1856 | + check_and_install ${PYTHON} ${dep} |
1857 | +done |
1858 | + |
1859 | +exec ./hooks/install.real |
1860 | +>>>>>>> MERGE-SOURCE |
1861 | |
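The install wrapper above loops a check-and-install guard over its Python dependencies; note that silencing both streams requires `> /dev/null 2>&1` (with `2>&1 > /dev/null`, stderr still reaches the terminal). A standalone sketch of the same loop with `dpkg -s` and `apt-get` replaced by stubs (`have_pkg`, `do_install`) so it runs without root:

```shell
#!/bin/bash
# Sketch of the check-and-install guard; stubs stand in for dpkg/apt-get.
have_pkg() { [ "$1" = "python-yaml" ]; }   # pretend only python-yaml is installed

installed=""
do_install() { installed="$installed $1"; }

check_and_install() {
    pkg="python-${1}"
    if ! have_pkg "${pkg}"; then
        do_install "${pkg}"
    fi
}

for dep in yaml netaddr netifaces; do
    check_and_install "${dep}"
done

echo "would install:${installed}"
```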
1862 | === added symlink 'hooks/install.real' |
1863 | === target is u'ceph_hooks.py' |
1864 | === renamed symlink 'hooks/install.real' => 'hooks/install.real.moved' |
1865 | === added symlink 'hooks/update-status' |
1866 | === target is u'ceph_hooks.py' |
1867 | === renamed symlink 'hooks/update-status' => 'hooks/update-status.moved' |
1868 | === modified file 'metadata.yaml' |
1869 | === added file 'requirements.txt' |
1870 | --- requirements.txt 1970-01-01 00:00:00 +0000 |
1871 | +++ requirements.txt 2016-02-18 11:56:12 +0000 |
1872 | @@ -0,0 +1,11 @@ |
1873 | +# The order of packages is significant, because pip processes them in the order |
1874 | +# of appearance. Changing the order has an impact on the overall integration |
1875 | +# process, which may cause wedges in the gate later. |
1876 | +PyYAML>=3.1.0 |
1877 | +simplejson>=2.2.0 |
1878 | +netifaces>=0.10.4 |
1879 | +netaddr>=0.7.12,!=0.7.16 |
1880 | +Jinja2>=2.6 # BSD License (3 clause) |
1881 | +six>=1.9.0 |
1882 | +dnspython>=1.12.0 |
1883 | +psutil>=1.1.1,<2.0.0 |
1884 | |
1885 | === renamed file 'requirements.txt' => 'requirements.txt.moved' |
1886 | === added file 'setup.cfg' |
1887 | --- setup.cfg 1970-01-01 00:00:00 +0000 |
1888 | +++ setup.cfg 2016-02-18 11:56:12 +0000 |
1889 | @@ -0,0 +1,5 @@ |
1890 | +[nosetests] |
1891 | +verbosity=2 |
1892 | +with-coverage=1 |
1893 | +cover-erase=1 |
1894 | +cover-package=hooks |
1895 | |
1896 | === renamed file 'setup.cfg' => 'setup.cfg.moved' |
1897 | === modified file 'templates/ceph.conf' |
1898 | --- templates/ceph.conf 2016-01-13 12:48:57 +0000 |
1899 | +++ templates/ceph.conf 2016-02-18 11:56:12 +0000 |
1900 | @@ -41,3 +41,8 @@ |
1901 | osd journal size = {{ osd_journal_size }} |
1902 | filestore xattr use omap = true |
1903 | |
1904 | +# Improve read/write performance at the expense of backfill performance |
1905 | +filestore merge threshold = 40 |
1906 | +filestore split multiple = 8 |
1907 | +osd op threads = 12 |
1908 | + |
1909 | |
1910 | === added file 'test-requirements.txt' |
1911 | --- test-requirements.txt 1970-01-01 00:00:00 +0000 |
1912 | +++ test-requirements.txt 2016-02-18 11:56:12 +0000 |
1913 | @@ -0,0 +1,9 @@ |
1914 | +# The order of packages is significant, because pip processes them in the order |
1915 | +# of appearance. Changing the order has an impact on the overall integration |
1916 | +# process, which may cause wedges in the gate later. |
1917 | +coverage>=3.6 |
1918 | +mock>=1.2 |
1919 | +flake8>=2.2.4,<=2.4.1 |
1920 | +os-testr>=0.4.1 |
1921 | +charm-tools |
1922 | +enum |
1923 | |
1924 | === renamed file 'test-requirements.txt' => 'test-requirements.txt.moved' |
1925 | === modified file 'tests/README' |
1926 | --- tests/README 2016-01-08 21:45:34 +0000 |
1927 | +++ tests/README 2016-02-18 11:56:12 +0000 |
1928 | @@ -1,3 +1,4 @@ |
1929 | +<<<<<<< TREE |
1930 | This directory provides Amulet tests to verify basic deployment functionality |
1931 | from the perspective of this charm, its requirements and its features, as |
1932 | exercised in a subset of the full OpenStack deployment test bundle topology. |
1933 | @@ -22,6 +23,30 @@ |
1934 | 9xx restarts, config changes, actions and other final checks |
1935 | |
1936 | In order to run tests, charm-tools and juju must be installed: |
1937 | +======= |
1938 | +This directory provides Amulet tests that focus on verification of ceph-osd |
1939 | +deployments. |
1940 | + |
1941 | +test_* methods are called in lexical sort order. |
1942 | + |
1943 | +Test name convention to ensure desired test order: |
1944 | + 1xx service and endpoint checks |
1945 | + 2xx relation checks |
1946 | + 3xx config checks |
1947 | + 4xx functional checks |
1948 | + 9xx restarts and other final checks |
1949 | + |
1950 | +Common uses of ceph-osd relations in bundle deployments: |
1951 | + - - "ceph-osd:mon" |
1952 | + - "ceph:osd" |
1953 | + |
1954 | +More detailed relations of ceph-osd service in a common deployment: |
1955 | + relations: |
1956 | +???? |
1957 | + |
1958 | +In order to run tests, you'll need charm-tools installed (in addition to |
1959 | +juju, of course): |
1960 | +>>>>>>> MERGE-SOURCE |
1961 | sudo add-apt-repository ppa:juju/stable |
1962 | sudo apt-get update |
1963 | sudo apt-get install charm-tools juju juju-deployer amulet |
1964 | |
1965 | === modified file 'tests/basic_deployment.py' |
1966 | --- tests/basic_deployment.py 2016-02-12 21:23:26 +0000 |
1967 | +++ tests/basic_deployment.py 2016-02-18 11:56:12 +0000 |
1968 | @@ -19,7 +19,7 @@ |
1969 | """Amulet tests on a basic ceph-osd deployment.""" |
1970 | |
1971 | def __init__(self, series=None, openstack=None, source=None, |
1972 | - stable=False): |
1973 | + stable=True): |
1974 | """Deploy the entire test environment.""" |
1975 | super(CephOsdBasicDeployment, self).__init__(series, openstack, |
1976 | source, stable) |
1977 | @@ -116,10 +116,20 @@ |
1978 | self.ceph1_sentry = self.d.sentry.unit['ceph/1'] |
1979 | self.ceph2_sentry = self.d.sentry.unit['ceph/2'] |
1980 | self.ceph_osd_sentry = self.d.sentry.unit['ceph-osd/0'] |
1981 | - u.log.debug('openstack release val: {}'.format( |
1982 | - self._get_openstack_release())) |
1983 | - u.log.debug('openstack release str: {}'.format( |
1984 | - self._get_openstack_release_string())) |
1985 | +<<<<<<< TREE |
1986 | + u.log.debug('openstack release val: {}'.format( |
1987 | + self._get_openstack_release())) |
1988 | + u.log.debug('openstack release str: {}'.format( |
1989 | + self._get_openstack_release_string())) |
1990 | +======= |
1991 | + u.log.debug('openstack release val: {}'.format( |
1992 | + self._get_openstack_release())) |
1993 | + u.log.debug('openstack release str: {}'.format( |
1994 | + self._get_openstack_release_string())) |
1995 | + |
1996 | + # Let things settle a bit before moving forward |
1997 | + time.sleep(30) |
1998 | +>>>>>>> MERGE-SOURCE |
1999 | |
2000 | # Authenticate admin with keystone |
2001 | self.keystone = u.authenticate_keystone_admin(self.keystone_sentry, |
2002 | |
2003 | === modified file 'tests/charmhelpers/contrib/amulet/utils.py' |
2004 | --- tests/charmhelpers/contrib/amulet/utils.py 2016-01-22 22:29:34 +0000 |
2005 | +++ tests/charmhelpers/contrib/amulet/utils.py 2016-02-18 11:56:12 +0000 |
2006 | @@ -19,8 +19,12 @@ |
2007 | import logging |
2008 | import os |
2009 | import re |
2010 | +<<<<<<< TREE |
2011 | import socket |
2012 | import subprocess |
2013 | +======= |
2014 | +import subprocess |
2015 | +>>>>>>> MERGE-SOURCE |
2016 | import sys |
2017 | import time |
2018 | import uuid |
2019 | @@ -107,6 +111,7 @@ |
2020 | """Validate that lists of commands succeed on service units. Can be |
2021 | used to verify system services are running on the corresponding |
2022 | service units. |
2023 | +<<<<<<< TREE |
2024 | |
2025 | :param commands: dict with sentry keys and arbitrary command list vals |
2026 | :returns: None if successful, Failure string message otherwise |
2027 | @@ -120,6 +125,21 @@ |
2028 | 'validate_services_by_name instead of validate_services ' |
2029 | 'due to init system differences.') |
2030 | |
2031 | +======= |
2032 | + |
2033 | + :param commands: dict with sentry keys and arbitrary command list vals |
2034 | + :returns: None if successful, Failure string message otherwise |
2035 | + """ |
2036 | + self.log.debug('Checking status of system services...') |
2037 | + |
2038 | + # /!\ DEPRECATION WARNING (beisner): |
2039 | + # New and existing tests should be rewritten to use |
2040 | + # validate_services_by_name() as it is aware of init systems. |
2041 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2042 | + 'validate_services_by_name instead of validate_services ' |
2043 | + 'due to init system differences.') |
2044 | + |
2045 | +>>>>>>> MERGE-SOURCE |
2046 | for k, v in six.iteritems(commands): |
2047 | for cmd in v: |
2048 | output, code = k.run(cmd) |
2049 | @@ -501,6 +521,7 @@ |
2050 | |
2051 | def endpoint_error(self, name, data): |
2052 | return 'unexpected endpoint data in {} - {}'.format(name, data) |
2053 | +<<<<<<< TREE |
2054 | |
2055 | def get_ubuntu_releases(self): |
2056 | """Return a list of all Ubuntu releases in order of release.""" |
2057 | @@ -816,3 +837,176 @@ |
2058 | return ("unknown", "") |
2059 | status = json.loads(raw_status) |
2060 | return (status["status"], status["message"]) |
2061 | +======= |
2062 | + |
2063 | + def get_ubuntu_releases(self): |
2064 | + """Return a list of all Ubuntu releases in order of release.""" |
2065 | + _d = distro_info.UbuntuDistroInfo() |
2066 | + _release_list = _d.all |
2067 | + self.log.debug('Ubuntu release list: {}'.format(_release_list)) |
2068 | + return _release_list |
2069 | + |
2070 | + def file_to_url(self, file_rel_path): |
2071 | + """Convert a relative file path to a file URL.""" |
2072 | + _abs_path = os.path.abspath(file_rel_path) |
2073 | + return urlparse.urlparse(_abs_path, scheme='file').geturl() |
2074 | + |
2075 | + def check_commands_on_units(self, commands, sentry_units): |
2076 | + """Check that all commands in a list exit zero on all |
2077 | + sentry units in a list. |
2078 | + |
2079 | + :param commands: list of bash commands |
2080 | + :param sentry_units: list of sentry unit pointers |
2081 | + :returns: None if successful; Failure message otherwise |
2082 | + """ |
2083 | + self.log.debug('Checking exit codes for {} commands on {} ' |
2084 | + 'sentry units...'.format(len(commands), |
2085 | + len(sentry_units))) |
2086 | + for sentry_unit in sentry_units: |
2087 | + for cmd in commands: |
2088 | + output, code = sentry_unit.run(cmd) |
2089 | + if code == 0: |
2090 | + self.log.debug('{} `{}` returned {} ' |
2091 | + '(OK)'.format(sentry_unit.info['unit_name'], |
2092 | + cmd, code)) |
2093 | + else: |
2094 | + return ('{} `{}` returned {} ' |
2095 | + '{}'.format(sentry_unit.info['unit_name'], |
2096 | + cmd, code, output)) |
2097 | + return None |
2098 | + |
2099 | + def get_process_id_list(self, sentry_unit, process_name, |
2100 | + expect_success=True): |
2101 | + """Get a list of process ID(s) from a single sentry juju unit |
2102 | + for a single process name. |
2103 | + |
2104 | + :param sentry_unit: Amulet sentry instance (juju unit) |
2105 | + :param process_name: Process name |
2106 | + :param expect_success: If False, expect the PID to be missing, |
2107 | + raise if it is present. |
2108 | + :returns: List of process IDs |
2109 | + """ |
2110 | + cmd = 'pidof -x {}'.format(process_name) |
2111 | + if not expect_success: |
2112 | + cmd += " || exit 0 && exit 1" |
2113 | + output, code = sentry_unit.run(cmd) |
2114 | + if code != 0: |
2115 | + msg = ('{} `{}` returned {} ' |
2116 | + '{}'.format(sentry_unit.info['unit_name'], |
2117 | + cmd, code, output)) |
2118 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2119 | + return str(output).split() |
2120 | + |
2121 | + def get_unit_process_ids(self, unit_processes, expect_success=True): |
2122 | + """Construct a dict containing unit sentries, process names, and |
2123 | + process IDs. |
2124 | + |
2125 | + :param unit_processes: A dictionary of Amulet sentry instance |
2126 | + to list of process names. |
2127 | + :param expect_success: if False expect the processes to not be |
2128 | + running, raise if they are. |
2129 | + :returns: Dictionary of Amulet sentry instance to dictionary |
2130 | + of process names to PIDs. |
2131 | + """ |
2132 | + pid_dict = {} |
2133 | + for sentry_unit, process_list in six.iteritems(unit_processes): |
2134 | + pid_dict[sentry_unit] = {} |
2135 | + for process in process_list: |
2136 | + pids = self.get_process_id_list( |
2137 | + sentry_unit, process, expect_success=expect_success) |
2138 | + pid_dict[sentry_unit].update({process: pids}) |
2139 | + return pid_dict |
2140 | + |
2141 | + def validate_unit_process_ids(self, expected, actual): |
2142 | + """Validate process id quantities for services on units.""" |
2143 | + self.log.debug('Checking units for running processes...') |
2144 | + self.log.debug('Expected PIDs: {}'.format(expected)) |
2145 | + self.log.debug('Actual PIDs: {}'.format(actual)) |
2146 | + |
2147 | + if len(actual) != len(expected): |
2148 | + return ('Unit count mismatch. expected, actual: {}, ' |
2149 | + '{} '.format(len(expected), len(actual))) |
2150 | + |
2151 | + for (e_sentry, e_proc_names) in six.iteritems(expected): |
2152 | + e_sentry_name = e_sentry.info['unit_name'] |
2153 | + if e_sentry in actual.keys(): |
2154 | + a_proc_names = actual[e_sentry] |
2155 | + else: |
2156 | + return ('Expected sentry ({}) not found in actual dict data.' |
2157 | + '{}'.format(e_sentry_name, e_sentry)) |
2158 | + |
2159 | + if len(e_proc_names.keys()) != len(a_proc_names.keys()): |
2160 | + return ('Process name count mismatch. expected, actual: {}, ' |
2161 | + '{}'.format(len(expected), len(actual))) |
2162 | + |
2163 | + for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \ |
2164 | + zip(e_proc_names.items(), a_proc_names.items()): |
2165 | + if e_proc_name != a_proc_name: |
2166 | + return ('Process name mismatch. expected, actual: {}, ' |
2167 | + '{}'.format(e_proc_name, a_proc_name)) |
2168 | + |
2169 | + a_pids_length = len(a_pids) |
2170 | + fail_msg = ('PID count mismatch. {} ({}) expected, actual: ' |
2171 | + '{}, {} ({})'.format(e_sentry_name, e_proc_name, |
2172 | + e_pids_length, a_pids_length, |
2173 | + a_pids)) |
2174 | + |
2175 | + # If expected is not bool, ensure PID quantities match |
2176 | + if not isinstance(e_pids_length, bool) and \ |
2177 | + a_pids_length != e_pids_length: |
2178 | + return fail_msg |
2179 | + # If expected is bool True, ensure 1 or more PIDs exist |
2180 | + elif isinstance(e_pids_length, bool) and \ |
2181 | + e_pids_length is True and a_pids_length < 1: |
2182 | + return fail_msg |
2183 | + # If expected is bool False, ensure 0 PIDs exist |
2184 | + elif isinstance(e_pids_length, bool) and \ |
2185 | + e_pids_length is False and a_pids_length != 0: |
2186 | + return fail_msg |
2187 | + else: |
2188 | + self.log.debug('PID check OK: {} {} {}: ' |
2189 | + '{}'.format(e_sentry_name, e_proc_name, |
2190 | + e_pids_length, a_pids)) |
2191 | + return None |
2192 | + |
2193 | + def validate_list_of_identical_dicts(self, list_of_dicts): |
2194 | + """Check that all dicts within a list are identical.""" |
2195 | + hashes = [] |
2196 | + for _dict in list_of_dicts: |
2197 | + hashes.append(hash(frozenset(_dict.items()))) |
2198 | + |
2199 | + self.log.debug('Hashes: {}'.format(hashes)) |
2200 | + if len(set(hashes)) == 1: |
2201 | + self.log.debug('Dicts within list are identical') |
2202 | + else: |
2203 | + return 'Dicts within list are not identical' |
2204 | + |
2205 | + return None |
2206 | + |
2207 | + def run_action(self, unit_sentry, action, |
2208 | + _check_output=subprocess.check_output): |
2209 | + """Run the named action on a given unit sentry. |
2210 | + |
2211 | + _check_output parameter is used for dependency injection. |
2212 | + |
2213 | + @return action_id. |
2214 | + """ |
2215 | + unit_id = unit_sentry.info["unit_name"] |
2216 | + command = ["juju", "action", "do", "--format=json", unit_id, action] |
2217 | + self.log.info("Running command: %s\n" % " ".join(command)) |
2218 | + output = _check_output(command, universal_newlines=True) |
2219 | + data = json.loads(output) |
2220 | + action_id = data[u'Action queued with id'] |
2221 | + return action_id |
2222 | + |
2223 | + def wait_on_action(self, action_id, _check_output=subprocess.check_output): |
2224 | + """Wait for a given action, returning if it completed or not. |
2225 | + |
2226 | + _check_output parameter is used for dependency injection. |
2227 | + """ |
2228 | + command = ["juju", "action", "fetch", "--format=json", "--wait=0", |
2229 | + action_id] |
2230 | + output = _check_output(command, universal_newlines=True) |
2231 | + data = json.loads(output) |
2232 | + return data.get(u"status") == "completed" |
2233 | +>>>>>>> MERGE-SOURCE |
2234 | |
2235 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py' |
2236 | --- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-01-22 22:29:34 +0000 |
2237 | +++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2016-02-18 11:56:12 +0000 |
2238 | @@ -69,6 +69,7 @@ |
2239 | Determine if the local branch being tested is derived from its |
2240 | stable or next (dev) branch, and based on this, use the corresponding |
2241 | stable or next branches for the other_services.""" |
2242 | +<<<<<<< TREE |
2243 | |
2244 | self.log.info('OpenStackAmuletDeployment: determine branch locations') |
2245 | |
2246 | @@ -82,6 +83,20 @@ |
2247 | |
2248 | if self.series in ['precise', 'trusty']: |
2249 | base_series = self.series |
2250 | +======= |
2251 | + base_charms = ['mysql', 'mongodb', 'nrpe'] |
2252 | + |
2253 | + if self.series in ['precise', 'trusty']: |
2254 | + base_series = self.series |
2255 | + else: |
2256 | + base_series = self.current_next |
2257 | + |
2258 | + if self.stable: |
2259 | + for svc in other_services: |
2260 | + temp = 'lp:charms/{}/{}' |
2261 | + svc['location'] = temp.format(base_series, |
2262 | + svc['name']) |
2263 | +>>>>>>> MERGE-SOURCE |
2264 | else: |
2265 | base_series = self.current_next |
2266 | |
2267 | @@ -122,11 +137,17 @@ |
2268 | # Charms which should use the source config option |
2269 | use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph', |
2270 | 'ceph-osd', 'ceph-radosgw'] |
2271 | +<<<<<<< TREE |
2272 | |
2273 | # Charms which can not use openstack-origin, ie. many subordinates |
2274 | no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe', |
2275 | 'openvswitch-odl', 'neutron-api-odl', 'odl-controller', |
2276 | 'cinder-backup'] |
2277 | +======= |
2278 | + # Most OpenStack subordinate charms do not expose an origin option |
2279 | + # as that is controlled by the principal charm. |
2280 | + ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe'] |
2281 | +>>>>>>> MERGE-SOURCE |
2282 | |
2283 | if self.openstack: |
2284 | for svc in services: |
2285 | @@ -224,11 +245,18 @@ |
2286 | # Must be ordered by OpenStack release (not by Ubuntu release): |
2287 | (self.precise_essex, self.precise_folsom, self.precise_grizzly, |
2288 | self.precise_havana, self.precise_icehouse, |
2289 | +<<<<<<< TREE |
2290 | self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
2291 | self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
2292 | self.wily_liberty, self.trusty_mitaka, |
2293 | self.xenial_mitaka) = range(14) |
2294 | |
2295 | +======= |
2296 | + self.trusty_icehouse, self.trusty_juno, self.utopic_juno, |
2297 | + self.trusty_kilo, self.vivid_kilo, self.trusty_liberty, |
2298 | + self.wily_liberty) = range(12) |
2299 | + |
2300 | +>>>>>>> MERGE-SOURCE |
2301 | releases = { |
2302 | ('precise', None): self.precise_essex, |
2303 | ('precise', 'cloud:precise-folsom'): self.precise_folsom, |
2304 | @@ -238,12 +266,21 @@ |
2305 | ('trusty', None): self.trusty_icehouse, |
2306 | ('trusty', 'cloud:trusty-juno'): self.trusty_juno, |
2307 | ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo, |
2308 | +<<<<<<< TREE |
2309 | ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
2310 | ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka, |
2311 | +======= |
2312 | + ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty, |
2313 | +>>>>>>> MERGE-SOURCE |
2314 | ('utopic', None): self.utopic_juno, |
2315 | +<<<<<<< TREE |
2316 | ('vivid', None): self.vivid_kilo, |
2317 | ('wily', None): self.wily_liberty, |
2318 | ('xenial', None): self.xenial_mitaka} |
2319 | +======= |
2320 | + ('vivid', None): self.vivid_kilo, |
2321 | + ('wily', None): self.wily_liberty} |
2322 | +>>>>>>> MERGE-SOURCE |
2323 | return releases[(self.series, self.openstack)] |
2324 | |
2325 | def _get_openstack_release_string(self): |
2326 | @@ -259,8 +296,12 @@ |
2327 | ('trusty', 'icehouse'), |
2328 | ('utopic', 'juno'), |
2329 | ('vivid', 'kilo'), |
2330 | +<<<<<<< TREE |
2331 | ('wily', 'liberty'), |
2332 | ('xenial', 'mitaka'), |
2333 | +======= |
2334 | + ('wily', 'liberty'), |
2335 | +>>>>>>> MERGE-SOURCE |
2336 | ]) |
2337 | if self.openstack: |
2338 | os_origin = self.openstack.split(':')[1] |
2339 | |
2340 | === modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py' |
2341 | --- tests/charmhelpers/contrib/openstack/amulet/utils.py 2016-01-22 22:29:34 +0000 |
2342 | +++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2016-02-18 11:56:12 +0000 |
2343 | @@ -18,8 +18,12 @@ |
2344 | import json |
2345 | import logging |
2346 | import os |
2347 | +<<<<<<< TREE |
2348 | import re |
2349 | import six |
2350 | +======= |
2351 | +import six |
2352 | +>>>>>>> MERGE-SOURCE |
2353 | import time |
2354 | import urllib |
2355 | |
2356 | @@ -28,8 +32,12 @@ |
2357 | import heatclient.v1.client as heat_client |
2358 | import keystoneclient.v2_0 as keystone_client |
2359 | import novaclient.v1_1.client as nova_client |
2360 | +<<<<<<< TREE |
2361 | import pika |
2362 | import swiftclient |
2363 | +======= |
2364 | +import swiftclient |
2365 | +>>>>>>> MERGE-SOURCE |
2366 | |
2367 | from charmhelpers.contrib.amulet.utils import ( |
2368 | AmuletUtils |
2369 | @@ -342,6 +350,7 @@ |
2370 | |
2371 | def delete_instance(self, nova, instance): |
2372 | """Delete the specified instance.""" |
2373 | +<<<<<<< TREE |
2374 | |
2375 | # /!\ DEPRECATION WARNING |
2376 | self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2377 | @@ -983,3 +992,267 @@ |
2378 | else: |
2379 | msg = 'No message retrieved.' |
2380 | amulet.raise_status(amulet.FAIL, msg) |
2381 | +======= |
2382 | + |
2383 | + # /!\ DEPRECATION WARNING |
2384 | + self.log.warn('/!\\ DEPRECATION WARNING: use ' |
2385 | + 'delete_resource instead of delete_instance.') |
2386 | + self.log.debug('Deleting instance ({})...'.format(instance)) |
2387 | + return self.delete_resource(nova.servers, instance, |
2388 | + msg='nova instance') |
2389 | + |
2390 | + def create_or_get_keypair(self, nova, keypair_name="testkey"): |
2391 | + """Create a new keypair, or return pointer if it already exists.""" |
2392 | + try: |
2393 | + _keypair = nova.keypairs.get(keypair_name) |
2394 | + self.log.debug('Keypair ({}) already exists, ' |
2395 | + 'using it.'.format(keypair_name)) |
2396 | + return _keypair |
2397 | + except: |
2398 | + self.log.debug('Keypair ({}) does not exist, ' |
2399 | + 'creating it.'.format(keypair_name)) |
2400 | + |
2401 | + _keypair = nova.keypairs.create(name=keypair_name) |
2402 | + return _keypair |
2403 | + |
2404 | + def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1, |
2405 | + img_id=None, src_vol_id=None, snap_id=None): |
2406 | + """Create cinder volume, optionally from a glance image, OR |
2407 | + optionally as a clone of an existing volume, OR optionally |
2408 | + from a snapshot. Wait for the new volume status to reach |
2409 | + the expected status, validate and return a resource pointer. |
2410 | + |
2411 | + :param vol_name: cinder volume display name |
2412 | + :param vol_size: size in gigabytes |
2413 | + :param img_id: optional glance image id |
2414 | + :param src_vol_id: optional source volume id to clone |
2415 | + :param snap_id: optional snapshot id to use |
2416 | + :returns: cinder volume pointer |
2417 | + """ |
2418 | + # Handle parameter input and avoid impossible combinations |
2419 | + if img_id and not src_vol_id and not snap_id: |
2420 | + # Create volume from image |
2421 | + self.log.debug('Creating cinder volume from glance image...') |
2422 | + bootable = 'true' |
2423 | + elif src_vol_id and not img_id and not snap_id: |
2424 | + # Clone an existing volume |
2425 | + self.log.debug('Cloning cinder volume...') |
2426 | + bootable = cinder.volumes.get(src_vol_id).bootable |
2427 | + elif snap_id and not src_vol_id and not img_id: |
2428 | + # Create volume from snapshot |
2429 | + self.log.debug('Creating cinder volume from snapshot...') |
2430 | + snap = cinder.volume_snapshots.find(id=snap_id) |
2431 | + vol_size = snap.size |
2432 | + snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id |
2433 | + bootable = cinder.volumes.get(snap_vol_id).bootable |
2434 | + elif not img_id and not src_vol_id and not snap_id: |
2435 | + # Create volume |
2436 | + self.log.debug('Creating cinder volume...') |
2437 | + bootable = 'false' |
2438 | + else: |
2439 | + # Impossible combination of parameters |
2440 | + msg = ('Invalid method use - name:{} size:{} img_id:{} ' |
2441 | + 'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size, |
2442 | + img_id, src_vol_id, |
2443 | + snap_id)) |
2444 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2445 | + |
2446 | + # Create new volume |
2447 | + try: |
2448 | + vol_new = cinder.volumes.create(display_name=vol_name, |
2449 | + imageRef=img_id, |
2450 | + size=vol_size, |
2451 | + source_volid=src_vol_id, |
2452 | + snapshot_id=snap_id) |
2453 | + vol_id = vol_new.id |
2454 | + except Exception as e: |
2455 | + msg = 'Failed to create volume: {}'.format(e) |
2456 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2457 | + |
2458 | + # Wait for volume to reach available status |
2459 | + ret = self.resource_reaches_status(cinder.volumes, vol_id, |
2460 | + expected_stat="available", |
2461 | + msg="Volume status wait") |
2462 | + if not ret: |
2463 | + msg = 'Cinder volume failed to reach expected state.' |
2464 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2465 | + |
2466 | + # Re-validate new volume |
2467 | + self.log.debug('Validating volume attributes...') |
2468 | + val_vol_name = cinder.volumes.get(vol_id).display_name |
2469 | + val_vol_boot = cinder.volumes.get(vol_id).bootable |
2470 | + val_vol_stat = cinder.volumes.get(vol_id).status |
2471 | + val_vol_size = cinder.volumes.get(vol_id).size |
2472 | + msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:' |
2473 | + '{} size:{}'.format(val_vol_name, vol_id, |
2474 | + val_vol_stat, val_vol_boot, |
2475 | + val_vol_size)) |
2476 | + |
2477 | + if val_vol_boot == bootable and val_vol_stat == 'available' \ |
2478 | + and val_vol_name == vol_name and val_vol_size == vol_size: |
2479 | + self.log.debug(msg_attr) |
2480 | + else: |
2481 | + msg = ('Volume validation failed, {}'.format(msg_attr)) |
2482 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2483 | + |
2484 | + return vol_new |
2485 | + |
2486 | + def delete_resource(self, resource, resource_id, |
2487 | + msg="resource", max_wait=120): |
2488 | + """Delete one openstack resource, such as one instance, keypair, |
2489 | + image, volume, stack, etc., and confirm deletion within max wait time. |
2490 | + |
2491 | + :param resource: pointer to os resource type, ex:glance_client.images |
2492 | + :param resource_id: unique name or id for the openstack resource |
2493 | + :param msg: text to identify purpose in logging |
2494 | + :param max_wait: maximum wait time in seconds |
2495 | + :returns: True if successful, otherwise False |
2496 | + """ |
2497 | + self.log.debug('Deleting OpenStack resource ' |
2498 | + '{} ({})'.format(resource_id, msg)) |
2499 | + num_before = len(list(resource.list())) |
2500 | + resource.delete(resource_id) |
2501 | + |
2502 | + tries = 0 |
2503 | + num_after = len(list(resource.list())) |
2504 | + while num_after != (num_before - 1) and tries < (max_wait / 4): |
2505 | + self.log.debug('{} delete check: ' |
2506 | + '{} [{}:{}] {}'.format(msg, tries, |
2507 | + num_before, |
2508 | + num_after, |
2509 | + resource_id)) |
2510 | + time.sleep(4) |
2511 | + num_after = len(list(resource.list())) |
2512 | + tries += 1 |
2513 | + |
2514 | + self.log.debug('{}: expected, actual count = {}, ' |
2515 | + '{}'.format(msg, num_before - 1, num_after)) |
2516 | + |
2517 | + if num_after == (num_before - 1): |
2518 | + return True |
2519 | + else: |
2520 | + self.log.error('{} delete timed out'.format(msg)) |
2521 | + return False |
2522 | + |
2523 | + def resource_reaches_status(self, resource, resource_id, |
2524 | + expected_stat='available', |
2525 | + msg='resource', max_wait=120): |
2526 | + """Wait for an openstack resource's status to reach an |
2527 | + expected status within a specified time. Useful to confirm that |
2528 | + nova instances, cinder vols, snapshots, glance images, heat stacks |
2529 | + and other resources eventually reach the expected status. |
2530 | + |
2531 | + :param resource: pointer to os resource type, ex: heat_client.stacks |
2532 | + :param resource_id: unique id for the openstack resource |
2533 | + :param expected_stat: status to expect resource to reach |
2534 | + :param msg: text to identify purpose in logging |
2535 | + :param max_wait: maximum wait time in seconds |
2536 | + :returns: True if successful, False if status is not reached |
2537 | + """ |
2538 | + |
2539 | + tries = 0 |
2540 | + resource_stat = resource.get(resource_id).status |
2541 | + while resource_stat != expected_stat and tries < (max_wait / 4): |
2542 | + self.log.debug('{} status check: ' |
2543 | + '{} [{}:{}] {}'.format(msg, tries, |
2544 | + resource_stat, |
2545 | + expected_stat, |
2546 | + resource_id)) |
2547 | + time.sleep(4) |
2548 | + resource_stat = resource.get(resource_id).status |
2549 | + tries += 1 |
2550 | + |
2551 | + self.log.debug('{}: expected, actual status = {}, ' |
2552 | + '{}'.format(msg, resource_stat, expected_stat)) |
2553 | + |
2554 | + if resource_stat == expected_stat: |
2555 | + return True |
2556 | + else: |
2557 | + self.log.debug('{} never reached expected status: ' |
2558 | + '{}'.format(resource_id, expected_stat)) |
2559 | + return False |
2560 | + |
2561 | + def get_ceph_osd_id_cmd(self, index): |
2562 | + """Produce a shell command that will return a ceph-osd id.""" |
2563 | + return ("`initctl list | grep 'ceph-osd ' | " |
2564 | + "awk 'NR=={} {{ print $2 }}' | " |
2565 | + "grep -o '[0-9]*'`".format(index + 1)) |
2566 | + |
2567 | + def get_ceph_pools(self, sentry_unit): |
2568 | + """Return a dict of ceph pools from a single ceph unit, with |
2569 | + pool name as keys, pool id as vals.""" |
2570 | + pools = {} |
2571 | + cmd = 'sudo ceph osd lspools' |
2572 | + output, code = sentry_unit.run(cmd) |
2573 | + if code != 0: |
2574 | + msg = ('{} `{}` returned {} ' |
2575 | + '{}'.format(sentry_unit.info['unit_name'], |
2576 | + cmd, code, output)) |
2577 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2578 | + |
2579 | + # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance, |
2580 | + for pool in str(output).split(','): |
2581 | + pool_id_name = pool.split(' ') |
2582 | + if len(pool_id_name) == 2: |
2583 | + pool_id = pool_id_name[0] |
2584 | + pool_name = pool_id_name[1] |
2585 | + pools[pool_name] = int(pool_id) |
2586 | + |
2587 | + self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'], |
2588 | + pools)) |
2589 | + return pools |
2590 | + |
2591 | + def get_ceph_df(self, sentry_unit): |
2592 | + """Return dict of ceph df json output, including ceph pool state. |
2593 | + |
2594 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2595 | + :returns: Dict of ceph df output |
2596 | + """ |
2597 | + cmd = 'sudo ceph df --format=json' |
2598 | + output, code = sentry_unit.run(cmd) |
2599 | + if code != 0: |
2600 | + msg = ('{} `{}` returned {} ' |
2601 | + '{}'.format(sentry_unit.info['unit_name'], |
2602 | + cmd, code, output)) |
2603 | + amulet.raise_status(amulet.FAIL, msg=msg) |
2604 | + return json.loads(output) |
2605 | + |
2606 | + def get_ceph_pool_sample(self, sentry_unit, pool_id=0): |
2607 | + """Take a sample of attributes of a ceph pool, returning ceph |
2608 | + pool name, object count and disk space used for the specified |
2609 | + pool ID number. |
2610 | + |
2611 | + :param sentry_unit: Pointer to amulet sentry instance (juju unit) |
2612 | + :param pool_id: Ceph pool ID |
2613 | + :returns: List of pool name, object count, kb disk space used |
2614 | + """ |
2615 | + df = self.get_ceph_df(sentry_unit) |
2616 | + pool_name = df['pools'][pool_id]['name'] |
2617 | + obj_count = df['pools'][pool_id]['stats']['objects'] |
2618 | + kb_used = df['pools'][pool_id]['stats']['kb_used'] |
2619 | + self.log.debug('Ceph {} pool (ID {}): {} objects, ' |
2620 | + '{} kb used'.format(pool_name, pool_id, |
2621 | + obj_count, kb_used)) |
2622 | + return pool_name, obj_count, kb_used |
2623 | + |
2624 | + def validate_ceph_pool_samples(self, samples, sample_type="resource pool"): |
2625 | + """Validate ceph pool samples taken over time, such as pool |
2626 | + object counts or pool kb used, before adding, after adding, and |
2627 | + after deleting items which affect those pool attributes. The |
2628 | + 2nd element is expected to be greater than the 1st; 3rd is expected |
2629 | + to be less than the 2nd. |
2630 | + |
2631 | + :param samples: List containing 3 data samples |
2632 | + :param sample_type: String for logging and usage context |
2633 | + :returns: None if successful, Failure message otherwise |
2634 | + """ |
2635 | + original, created, deleted = range(3) |
2636 | + if samples[created] <= samples[original] or \ |
2637 | + samples[deleted] >= samples[created]: |
2638 | + return ('Ceph {} samples ({}) ' |
2639 | + 'unexpected.'.format(sample_type, samples)) |
2640 | + else: |
2641 | + self.log.debug('Ceph {} samples (OK): ' |
2642 | + '{}'.format(sample_type, samples)) |
2643 | + return None |
2644 | +>>>>>>> MERGE-SOURCE |
2645 | |
2646 | === added file 'tests/tests.yaml' |
2647 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2648 | +++ tests/tests.yaml 2016-02-18 11:56:12 +0000 |
2649 | @@ -0,0 +1,20 @@ |
2650 | +bootstrap: true |
2651 | +reset: true |
2652 | +virtualenv: true |
2653 | +makefile: |
2654 | + - lint |
2655 | + - test |
2656 | +sources: |
2657 | + - ppa:juju/stable |
2658 | +packages: |
2659 | + - amulet |
2660 | + - distro-info-data |
2661 | + - python-cinderclient |
2662 | + - python-distro-info |
2663 | + - python-glanceclient |
2664 | + - python-heatclient |
2665 | + - python-keystoneclient |
2666 | + - python-neutronclient |
2667 | + - python-novaclient |
2668 | + - python-pika |
2669 | + - python-swiftclient |
2670 | |
2671 | === renamed file 'tests/tests.yaml' => 'tests/tests.yaml.moved' |
2672 | === added file 'tox.ini' |
2673 | --- tox.ini 1970-01-01 00:00:00 +0000 |
2674 | +++ tox.ini 2016-02-18 11:56:12 +0000 |
2675 | @@ -0,0 +1,29 @@ |
2676 | +[tox] |
2677 | +envlist = lint,py27 |
2678 | +skipsdist = True |
2679 | + |
2680 | +[testenv] |
2681 | +setenv = VIRTUAL_ENV={envdir} |
2682 | + PYTHONHASHSEED=0 |
2683 | +install_command = |
2684 | + pip install --allow-unverified python-apt {opts} {packages} |
2685 | +commands = ostestr {posargs} |
2686 | + |
2687 | +[testenv:py27] |
2688 | +basepython = python2.7 |
2689 | +deps = -r{toxinidir}/requirements.txt |
2690 | + -r{toxinidir}/test-requirements.txt |
2691 | + |
2692 | +[testenv:lint] |
2693 | +basepython = python2.7 |
2694 | +deps = -r{toxinidir}/requirements.txt |
2695 | + -r{toxinidir}/test-requirements.txt |
2696 | +commands = flake8 {posargs} hooks unit_tests tests |
2697 | + charm proof |
2698 | + |
2699 | +[testenv:venv] |
2700 | +commands = {posargs} |
2701 | + |
2702 | +[flake8] |
2703 | +ignore = E402,E226 |
2704 | +exclude = hooks/charmhelpers |
2705 | |
2706 | === renamed file 'tox.ini' => 'tox.ini.moved' |
2707 | === added file 'unit_tests/test_status.py' |
2708 | --- unit_tests/test_status.py 1970-01-01 00:00:00 +0000 |
2709 | +++ unit_tests/test_status.py 2016-02-18 11:56:12 +0000 |
2710 | @@ -0,0 +1,56 @@ |
2711 | +import mock |
2712 | +import test_utils |
2713 | + |
2714 | +import ceph_hooks as hooks |
2715 | + |
2716 | +TO_PATCH = [ |
2717 | + 'status_set', |
2718 | + 'config', |
2719 | + 'ceph', |
2720 | + 'relation_ids', |
2721 | + 'relation_get', |
2722 | + 'related_units', |
2723 | + 'get_conf', |
2724 | +] |
2725 | + |
2726 | +CEPH_MONS = [ |
2727 | + 'ceph/0', |
2728 | + 'ceph/1', |
2729 | + 'ceph/2', |
2730 | +] |
2731 | + |
2732 | + |
2733 | +class ServiceStatusTestCase(test_utils.CharmTestCase): |
2734 | + |
2735 | + def setUp(self): |
2736 | + super(ServiceStatusTestCase, self).setUp(hooks, TO_PATCH) |
2737 | + self.config.side_effect = self.test_config.get |
2738 | + |
2739 | + def test_assess_status_no_monitor_relation(self): |
2740 | + self.relation_ids.return_value = [] |
2741 | + hooks.assess_status() |
2742 | + self.status_set.assert_called_with('blocked', mock.ANY) |
2743 | + |
2744 | + def test_assess_status_monitor_relation_incomplete(self): |
2745 | + self.relation_ids.return_value = ['mon:1'] |
2746 | + self.related_units.return_value = CEPH_MONS |
2747 | + self.get_conf.return_value = None |
2748 | + hooks.assess_status() |
2749 | + self.status_set.assert_called_with('waiting', mock.ANY) |
2750 | + |
2751 | + def test_assess_status_monitor_complete_no_disks(self): |
2752 | + self.relation_ids.return_value = ['mon:1'] |
2753 | + self.related_units.return_value = CEPH_MONS |
2754 | + self.get_conf.return_value = 'monitor-bootstrap-key' |
2755 | + self.ceph.get_running_osds.return_value = [] |
2756 | + hooks.assess_status() |
2757 | + self.status_set.assert_called_with('blocked', mock.ANY) |
2758 | + |
2759 | + def test_assess_status_monitor_complete_disks(self): |
2760 | + self.relation_ids.return_value = ['mon:1'] |
2761 | + self.related_units.return_value = CEPH_MONS |
2762 | + self.get_conf.return_value = 'monitor-bootstrap-key' |
2763 | + self.ceph.get_running_osds.return_value = ['12345', |
2764 | + '67890'] |
2765 | + hooks.assess_status() |
2766 | + self.status_set.assert_called_with('active', mock.ANY) |
2767 | |
2768 | === renamed file 'unit_tests/test_status.py' => 'unit_tests/test_status.py.moved' |
2769 | === added file 'unit_tests/test_utils.py' |
2770 | --- unit_tests/test_utils.py 1970-01-01 00:00:00 +0000 |
2771 | +++ unit_tests/test_utils.py 2016-02-18 11:56:12 +0000 |
2772 | @@ -0,0 +1,121 @@ |
2773 | +import logging |
2774 | +import unittest |
2775 | +import os |
2776 | +import yaml |
2777 | + |
2778 | +from contextlib import contextmanager |
2779 | +from mock import patch, MagicMock |
2780 | + |
2781 | + |
2782 | +def load_config(): |
2783 | + ''' |
2784 | + Walk backwards from __file__ looking for config.yaml, load and return |
2785 | + the 'options' section. |
2786 | + ''' |
2787 | + config = None |
2788 | + f = __file__ |
2789 | + while config is None: |
2790 | + d = os.path.dirname(f) |
2791 | + if os.path.isfile(os.path.join(d, 'config.yaml')): |
2792 | + config = os.path.join(d, 'config.yaml') |
2793 | + break |
2794 | + f = d |
2795 | + |
2796 | + if not config: |
2797 | + logging.error('Could not find config.yaml in any parent directory ' |
2798 | + 'of %s. ' % f) |
2799 | + raise Exception |
2800 | + |
2801 | + return yaml.safe_load(open(config).read())['options'] |
2802 | + |
2803 | + |
2804 | +def get_default_config(): |
2805 | + ''' |
2806 | + Load default charm config from config.yaml return as a dict. |
2807 | + If no default is set in config.yaml, its value is None. |
2808 | + ''' |
2809 | + default_config = {} |
2810 | + config = load_config() |
2811 | + for k, v in config.iteritems(): |
2812 | + if 'default' in v: |
2813 | + default_config[k] = v['default'] |
2814 | + else: |
2815 | + default_config[k] = None |
2816 | + return default_config |
2817 | + |
2818 | + |
2819 | +class CharmTestCase(unittest.TestCase): |
2820 | + |
2821 | + def setUp(self, obj, patches): |
2822 | + super(CharmTestCase, self).setUp() |
2823 | + self.patches = patches |
2824 | + self.obj = obj |
2825 | + self.test_config = TestConfig() |
2826 | + self.test_relation = TestRelation() |
2827 | + self.patch_all() |
2828 | + |
2829 | + def patch(self, method): |
2830 | + _m = patch.object(self.obj, method) |
2831 | + mock = _m.start() |
2832 | + self.addCleanup(_m.stop) |
2833 | + return mock |
2834 | + |
2835 | + def patch_all(self): |
2836 | + for method in self.patches: |
2837 | + setattr(self, method, self.patch(method)) |
2838 | + |
2839 | + |
2840 | +class TestConfig(object): |
2841 | + |
2842 | + def __init__(self): |
2843 | + self.config = get_default_config() |
2844 | + |
2845 | + def get(self, attr=None): |
2846 | + if not attr: |
2847 | + return self.get_all() |
2848 | + try: |
2849 | + return self.config[attr] |
2850 | + except KeyError: |
2851 | + return None |
2852 | + |
2853 | + def get_all(self): |
2854 | + return self.config |
2855 | + |
2856 | + def set(self, attr, value): |
2857 | + if attr not in self.config: |
2858 | + raise KeyError |
2859 | + self.config[attr] = value |
2860 | + |
2861 | + |
2862 | +class TestRelation(object): |
2863 | + |
2864 | + def __init__(self, relation_data={}): |
2865 | + self.relation_data = relation_data |
2866 | + |
2867 | + def set(self, relation_data): |
2868 | + self.relation_data = relation_data |
2869 | + |
2870 | + def get(self, attr=None, unit=None, rid=None): |
2871 | + if attr is None: |
2872 | + return self.relation_data |
2873 | + elif attr in self.relation_data: |
2874 | + return self.relation_data[attr] |
2875 | + return None |
2876 | + |
2877 | + |
2878 | +@contextmanager |
2879 | +def patch_open(): |
2880 | + '''Patch open() to allow mocking both open() itself and the file that is |
2881 | + yielded. |
2882 | + |
2883 | + Yields the mock for "open" and "file", respectively.''' |
2884 | + mock_open = MagicMock(spec=open) |
2885 | + mock_file = MagicMock(spec=file) |
2886 | + |
2887 | + @contextmanager |
2888 | + def stub_open(*args, **kwargs): |
2889 | + mock_open(*args, **kwargs) |
2890 | + yield mock_file |
2891 | + |
2892 | + with patch('__builtin__.open', stub_open): |
2893 | + yield mock_open, mock_file |
2894 | |
2895 | === renamed file 'unit_tests/test_utils.py' => 'unit_tests/test_utils.py.moved' |
See my comment on network interfaces. I'd also like to see some validation on varied hardware (SSD and HDD) confirming that this doesn't break things either way.
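For the hardware validation requested above, one option is an amulet-style check that the sysctl values pushed by the charm actually took effect on each unit class (SSD-backed and HDD-backed). A minimal sketch of the comparison logic — the key names here are illustrative examples, not the charm's actual defaults:

```python
def parse_sysctl_output(raw):
    """Parse `sysctl` style 'key = value' lines into a dict of strings."""
    settings = {}
    for line in raw.splitlines():
        if '=' not in line:
            continue
        key, _, value = line.partition('=')
        settings[key.strip()] = value.strip()
    return settings


def validate_sysctl(expected, raw_output):
    """Return None if all expected sysctl keys match, else a failure message.

    Mirrors the AmuletUtils convention of returning None on success and a
    message string on failure.
    """
    actual = parse_sysctl_output(raw_output)
    for key, value in expected.items():
        if actual.get(key) != str(value):
            return ('sysctl mismatch: {} expected {}, '
                    'got {}'.format(key, value, actual.get(key)))
    return None


# Example: values one might set via the charm's sysctl option
# (hypothetical keys chosen for illustration only).
expected = {'kernel.pid_max': '2097152'}
raw = 'kernel.pid_max = 2097152\nvm.swappiness = 0'
print(validate_sysctl(expected, raw))  # None when everything matches
```

In a real test the `raw` string would come from `sentry_unit.run('sudo sysctl -a')`, run once against an SSD deployment and once against an HDD deployment, so the same expected dict can confirm the charm behaves identically on both.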