Merge lp:~james-page/charms/precise/ceph/charm-helpers into lp:~charmers/charms/precise/ceph/trunk
Proposed by: James Page
Status: Merged
Merged at revision: 62
Proposed branch: lp:~james-page/charms/precise/ceph/charm-helpers
Merge into: lp:~charmers/charms/precise/ceph/trunk
Diff against target: 1983 lines (+1146/-437), 13 files modified:
- .pydevproject (+1/-3)
- Makefile (+8/-0)
- README.md (+9/-9)
- charm-helpers-sync.yaml (+7/-0)
- hooks/ceph.py (+126/-26)
- hooks/charmhelpers/contrib/storage/linux/utils.py (+25/-0)
- hooks/charmhelpers/core/hookenv.py (+334/-0)
- hooks/charmhelpers/core/host.py (+273/-0)
- hooks/charmhelpers/fetch/__init__.py (+152/-0)
- hooks/charmhelpers/fetch/archiveurl.py (+43/-0)
- hooks/hooks.py (+149/-233)
- hooks/utils.py (+18/-164)
- metadata.yaml (+1/-2)

To merge this branch: bzr merge lp:~james-page/charms/precise/ceph/charm-helpers
Related bugs:

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Mark Mims (community) | | | Approve |

Review via email: mp+173245@code.launchpad.net
Commit message
Description of the change
Refactoring to support use with charm-helpers.

Significant rework to use charm-helpers rather than the charm's own utils.py.

Also fixes a couple of issues with newer versions of Ceph, which no longer automatically zap disks.
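For context, the zap fix lands in osdize() in hooks/ceph.py: when reformat is requested, the device is cleared with zap_disk() (sgdisk --zap-all) before ceph-disk-prepare runs, since newer Ceph no longer does this itself. A minimal, hypothetical sketch of how the prepare command is assembled — the pure-function shape and the name build_prepare_cmd are illustrative, not the charm's actual API:

```python
def build_prepare_cmd(dev, osd_format=None, osd_journal=None,
                      ceph_version="0.61.2"):
    """Assemble the ceph-disk-prepare argument list for a device.

    Mirrors the version check in osdize(): newer Ceph releases accept
    a filesystem type and an optional journal device, while older ones
    only take the device itself.
    """
    cmd = ['ceph-disk-prepare']
    # The charm compares version strings directly; reproduced here as-is.
    if ceph_version >= "0.48.3":
        if osd_format:
            cmd.extend(['--fs-type', osd_format])
        cmd.append(dev)
        if osd_journal:
            cmd.append(osd_journal)
    else:
        # Older ceph: no extra options, just the device.
        cmd.append(dev)
    return cmd
```

In the charm itself, zap_disk(dev) runs on the reformat path before this command is executed, and the journal path is only appended when it actually exists on disk.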
80. By James Page: Fixup dodgy disk detection
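The disk-detection fixup hinges on verifying that a configured path is really a block device node before trying to prepare it, via the new is_block_device() helper in hooks/charmhelpers/contrib/storage/linux/utils.py. A rough equivalent — note this defensive variant also tolerates a missing path, which the helper itself (a bare stat) does not:

```python
import os
import stat


def is_block_device(path):
    """True if path exists and is a block device node (e.g. /dev/vdb)."""
    return os.path.exists(path) and stat.S_ISBLK(os.stat(path).st_mode)
```

osdize() calls this (after an existence check) and bails out with a log message rather than running ceph-disk-prepare against a regular file or a typo'd device path.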
Revision history for this message
Mark Mims (mark-mims) wrote:
Added bug https:/
Preview Diff
1 | === modified file '.pydevproject' | |||
2 | --- .pydevproject 2012-10-18 08:24:36 +0000 | |||
3 | +++ .pydevproject 2013-07-08 08:34:31 +0000 | |||
4 | @@ -1,7 +1,5 @@ | |||
5 | 1 | <?xml version="1.0" encoding="UTF-8" standalone="no"?> | 1 | <?xml version="1.0" encoding="UTF-8" standalone="no"?> |
9 | 2 | <?eclipse-pydev version="1.0"?> | 2 | <?eclipse-pydev version="1.0"?><pydev_project> |
7 | 3 | |||
8 | 4 | <pydev_project> | ||
10 | 5 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property> | 3 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python 2.7</pydev_property> |
11 | 6 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property> | 4 | <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property> |
12 | 7 | <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH"> | 5 | <pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH"> |
13 | 8 | 6 | ||
14 | === added file 'Makefile' | |||
15 | --- Makefile 1970-01-01 00:00:00 +0000 | |||
16 | +++ Makefile 2013-07-08 08:34:31 +0000 | |||
17 | @@ -0,0 +1,8 @@ | |||
18 | 1 | #!/usr/bin/make | ||
19 | 2 | |||
20 | 3 | lint: | ||
21 | 4 | @flake8 --exclude hooks/charmhelpers hooks | ||
22 | 5 | @charm proof | ||
23 | 6 | |||
24 | 7 | sync: | ||
25 | 8 | @charm-helper-sync -c charm-helpers-sync.yaml | ||
26 | 0 | 9 | ||
27 | === modified file 'README.md' | |||
28 | --- README.md 2012-12-17 10:22:51 +0000 | |||
29 | +++ README.md 2013-07-08 08:34:31 +0000 | |||
30 | @@ -15,28 +15,28 @@ | |||
31 | 15 | fsid: | 15 | fsid: |
32 | 16 | uuid specific to a ceph cluster used to ensure that different | 16 | uuid specific to a ceph cluster used to ensure that different |
33 | 17 | clusters don't get mixed up - use `uuid` to generate one. | 17 | clusters don't get mixed up - use `uuid` to generate one. |
35 | 18 | 18 | ||
36 | 19 | monitor-secret: | 19 | monitor-secret: |
37 | 20 | a ceph generated key used by the daemons that manage to cluster | 20 | a ceph generated key used by the daemons that manage to cluster |
38 | 21 | to control security. You can use the ceph-authtool command to | 21 | to control security. You can use the ceph-authtool command to |
39 | 22 | generate one: | 22 | generate one: |
41 | 23 | 23 | ||
42 | 24 | ceph-authtool /dev/stdout --name=mon. --gen-key | 24 | ceph-authtool /dev/stdout --name=mon. --gen-key |
44 | 25 | 25 | ||
45 | 26 | These two pieces of configuration must NOT be changed post bootstrap; attempting | 26 | These two pieces of configuration must NOT be changed post bootstrap; attempting |
46 | 27 | todo this will cause a reconfiguration error and new service units will not join | 27 | todo this will cause a reconfiguration error and new service units will not join |
47 | 28 | the existing ceph cluster. | 28 | the existing ceph cluster. |
49 | 29 | 29 | ||
50 | 30 | The charm also supports specification of the storage devices to use in the ceph | 30 | The charm also supports specification of the storage devices to use in the ceph |
51 | 31 | cluster. | 31 | cluster. |
52 | 32 | 32 | ||
53 | 33 | osd-devices: | 33 | osd-devices: |
54 | 34 | A list of devices that the charm will attempt to detect, initialise and | 34 | A list of devices that the charm will attempt to detect, initialise and |
55 | 35 | activate as ceph storage. | 35 | activate as ceph storage. |
57 | 36 | 36 | ||
58 | 37 | This this can be a superset of the actual storage devices presented to | 37 | This this can be a superset of the actual storage devices presented to |
59 | 38 | each service unit and can be changed post ceph bootstrap using `juju set`. | 38 | each service unit and can be changed post ceph bootstrap using `juju set`. |
61 | 39 | 39 | ||
62 | 40 | At a minimum you must provide a juju config file during initial deployment | 40 | At a minimum you must provide a juju config file during initial deployment |
63 | 41 | with the fsid and monitor-secret options (contents of cepy.yaml below): | 41 | with the fsid and monitor-secret options (contents of cepy.yaml below): |
64 | 42 | 42 | ||
65 | @@ -44,7 +44,7 @@ | |||
66 | 44 | fsid: ecbb8960-0e21-11e2-b495-83a88f44db01 | 44 | fsid: ecbb8960-0e21-11e2-b495-83a88f44db01 |
67 | 45 | monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg== | 45 | monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg== |
68 | 46 | osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde | 46 | osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde |
70 | 47 | 47 | ||
71 | 48 | Specifying the osd-devices to use is also a good idea. | 48 | Specifying the osd-devices to use is also a good idea. |
72 | 49 | 49 | ||
73 | 50 | Boot things up by using: | 50 | Boot things up by using: |
74 | @@ -62,7 +62,7 @@ | |||
75 | 62 | James Page <james.page@ubuntu.com> | 62 | James Page <james.page@ubuntu.com> |
76 | 63 | Report bugs at: http://bugs.launchpad.net/charms/+source/ceph/+filebug | 63 | Report bugs at: http://bugs.launchpad.net/charms/+source/ceph/+filebug |
77 | 64 | Location: http://jujucharms.com/charms/ceph | 64 | Location: http://jujucharms.com/charms/ceph |
79 | 65 | 65 | ||
80 | 66 | Technical Bootnotes | 66 | Technical Bootnotes |
81 | 67 | =================== | 67 | =================== |
82 | 68 | 68 | ||
83 | @@ -89,4 +89,4 @@ | |||
84 | 89 | implement it. | 89 | implement it. |
85 | 90 | 90 | ||
86 | 91 | See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph | 91 | See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph |
88 | 92 | monitor cluster deployment strategies and pitfalls. | 92 | monitor cluster deployment strategies and pitfalls. |
89 | 93 | 93 | ||
90 | === added file 'charm-helpers-sync.yaml' | |||
91 | --- charm-helpers-sync.yaml 1970-01-01 00:00:00 +0000 | |||
92 | +++ charm-helpers-sync.yaml 2013-07-08 08:34:31 +0000 | |||
93 | @@ -0,0 +1,7 @@ | |||
94 | 1 | branch: lp:charm-helpers | ||
95 | 2 | destination: hooks/charmhelpers | ||
96 | 3 | include: | ||
97 | 4 | - core | ||
98 | 5 | - fetch | ||
99 | 6 | - contrib.storage.linux: | ||
100 | 7 | - utils | ||
101 | 0 | 8 | ||
102 | === modified file 'hooks/ceph.py' | |||
103 | --- hooks/ceph.py 2012-12-18 10:25:38 +0000 | |||
104 | +++ hooks/ceph.py 2013-07-08 08:34:31 +0000 | |||
105 | @@ -10,23 +10,36 @@ | |||
106 | 10 | import json | 10 | import json |
107 | 11 | import subprocess | 11 | import subprocess |
108 | 12 | import time | 12 | import time |
109 | 13 | import utils | ||
110 | 14 | import os | 13 | import os |
111 | 15 | import apt_pkg as apt | 14 | import apt_pkg as apt |
112 | 15 | from charmhelpers.core.host import ( | ||
113 | 16 | mkdir, | ||
114 | 17 | service_restart, | ||
115 | 18 | log | ||
116 | 19 | ) | ||
117 | 20 | from charmhelpers.contrib.storage.linux.utils import ( | ||
118 | 21 | zap_disk, | ||
119 | 22 | is_block_device | ||
120 | 23 | ) | ||
121 | 24 | from utils import ( | ||
122 | 25 | get_unit_hostname | ||
123 | 26 | ) | ||
124 | 16 | 27 | ||
125 | 17 | LEADER = 'leader' | 28 | LEADER = 'leader' |
126 | 18 | PEON = 'peon' | 29 | PEON = 'peon' |
127 | 19 | QUORUM = [LEADER, PEON] | 30 | QUORUM = [LEADER, PEON] |
128 | 20 | 31 | ||
129 | 32 | PACKAGES = ['ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs'] | ||
130 | 33 | |||
131 | 21 | 34 | ||
132 | 22 | def is_quorum(): | 35 | def is_quorum(): |
134 | 23 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) | 36 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
135 | 24 | cmd = [ | 37 | cmd = [ |
136 | 25 | "ceph", | 38 | "ceph", |
137 | 26 | "--admin-daemon", | 39 | "--admin-daemon", |
138 | 27 | asok, | 40 | asok, |
139 | 28 | "mon_status" | 41 | "mon_status" |
141 | 29 | ] | 42 | ] |
142 | 30 | if os.path.exists(asok): | 43 | if os.path.exists(asok): |
143 | 31 | try: | 44 | try: |
144 | 32 | result = json.loads(subprocess.check_output(cmd)) | 45 | result = json.loads(subprocess.check_output(cmd)) |
145 | @@ -44,13 +57,13 @@ | |||
146 | 44 | 57 | ||
147 | 45 | 58 | ||
148 | 46 | def is_leader(): | 59 | def is_leader(): |
150 | 47 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) | 60 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
151 | 48 | cmd = [ | 61 | cmd = [ |
152 | 49 | "ceph", | 62 | "ceph", |
153 | 50 | "--admin-daemon", | 63 | "--admin-daemon", |
154 | 51 | asok, | 64 | asok, |
155 | 52 | "mon_status" | 65 | "mon_status" |
157 | 53 | ] | 66 | ] |
158 | 54 | if os.path.exists(asok): | 67 | if os.path.exists(asok): |
159 | 55 | try: | 68 | try: |
160 | 56 | result = json.loads(subprocess.check_output(cmd)) | 69 | result = json.loads(subprocess.check_output(cmd)) |
161 | @@ -73,14 +86,14 @@ | |||
162 | 73 | 86 | ||
163 | 74 | 87 | ||
164 | 75 | def add_bootstrap_hint(peer): | 88 | def add_bootstrap_hint(peer): |
166 | 76 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname()) | 89 | asok = "/var/run/ceph/ceph-mon.{}.asok".format(get_unit_hostname()) |
167 | 77 | cmd = [ | 90 | cmd = [ |
168 | 78 | "ceph", | 91 | "ceph", |
169 | 79 | "--admin-daemon", | 92 | "--admin-daemon", |
170 | 80 | asok, | 93 | asok, |
171 | 81 | "add_bootstrap_peer_hint", | 94 | "add_bootstrap_peer_hint", |
172 | 82 | peer | 95 | peer |
174 | 83 | ] | 96 | ] |
175 | 84 | if os.path.exists(asok): | 97 | if os.path.exists(asok): |
176 | 85 | # Ignore any errors for this call | 98 | # Ignore any errors for this call |
177 | 86 | subprocess.call(cmd) | 99 | subprocess.call(cmd) |
178 | @@ -89,7 +102,7 @@ | |||
179 | 89 | 'xfs', | 102 | 'xfs', |
180 | 90 | 'ext4', | 103 | 'ext4', |
181 | 91 | 'btrfs' | 104 | 'btrfs' |
183 | 92 | ] | 105 | ] |
184 | 93 | 106 | ||
185 | 94 | 107 | ||
186 | 95 | def is_osd_disk(dev): | 108 | def is_osd_disk(dev): |
187 | @@ -99,7 +112,7 @@ | |||
188 | 99 | for line in info: | 112 | for line in info: |
189 | 100 | if line.startswith( | 113 | if line.startswith( |
190 | 101 | 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D' | 114 | 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D' |
192 | 102 | ): | 115 | ): |
193 | 103 | return True | 116 | return True |
194 | 104 | except subprocess.CalledProcessError: | 117 | except subprocess.CalledProcessError: |
195 | 105 | pass | 118 | pass |
196 | @@ -110,16 +123,11 @@ | |||
197 | 110 | cmd = [ | 123 | cmd = [ |
198 | 111 | 'udevadm', 'trigger', | 124 | 'udevadm', 'trigger', |
199 | 112 | '--subsystem-match=block', '--action=add' | 125 | '--subsystem-match=block', '--action=add' |
201 | 113 | ] | 126 | ] |
202 | 114 | 127 | ||
203 | 115 | subprocess.call(cmd) | 128 | subprocess.call(cmd) |
204 | 116 | 129 | ||
205 | 117 | 130 | ||
206 | 118 | def zap_disk(dev): | ||
207 | 119 | cmd = ['sgdisk', '--zap-all', dev] | ||
208 | 120 | subprocess.check_call(cmd) | ||
209 | 121 | |||
210 | 122 | |||
211 | 123 | _bootstrap_keyring = "/var/lib/ceph/bootstrap-osd/ceph.keyring" | 131 | _bootstrap_keyring = "/var/lib/ceph/bootstrap-osd/ceph.keyring" |
212 | 124 | 132 | ||
213 | 125 | 133 | ||
214 | @@ -140,7 +148,7 @@ | |||
215 | 140 | '--create-keyring', | 148 | '--create-keyring', |
216 | 141 | '--name=client.bootstrap-osd', | 149 | '--name=client.bootstrap-osd', |
217 | 142 | '--add-key={}'.format(key) | 150 | '--add-key={}'.format(key) |
219 | 143 | ] | 151 | ] |
220 | 144 | subprocess.check_call(cmd) | 152 | subprocess.check_call(cmd) |
221 | 145 | 153 | ||
222 | 146 | # OSD caps taken from ceph-create-keys | 154 | # OSD caps taken from ceph-create-keys |
223 | @@ -148,10 +156,10 @@ | |||
224 | 148 | 'mon': [ | 156 | 'mon': [ |
225 | 149 | 'allow command osd create ...', | 157 | 'allow command osd create ...', |
226 | 150 | 'allow command osd crush set ...', | 158 | 'allow command osd crush set ...', |
228 | 151 | r'allow command auth add * osd allow\ * mon allow\ rwx', | 159 | r'allow command auth add * osd allow\ * mon allow\ rwx', |
229 | 152 | 'allow command mon getmap' | 160 | 'allow command mon getmap' |
232 | 153 | ] | 161 | ] |
233 | 154 | } | 162 | } |
234 | 155 | 163 | ||
235 | 156 | 164 | ||
236 | 157 | def get_osd_bootstrap_key(): | 165 | def get_osd_bootstrap_key(): |
237 | @@ -169,14 +177,14 @@ | |||
238 | 169 | '--create-keyring', | 177 | '--create-keyring', |
239 | 170 | '--name=client.radosgw.gateway', | 178 | '--name=client.radosgw.gateway', |
240 | 171 | '--add-key={}'.format(key) | 179 | '--add-key={}'.format(key) |
242 | 172 | ] | 180 | ] |
243 | 173 | subprocess.check_call(cmd) | 181 | subprocess.check_call(cmd) |
244 | 174 | 182 | ||
245 | 175 | # OSD caps taken from ceph-create-keys | 183 | # OSD caps taken from ceph-create-keys |
246 | 176 | _radosgw_caps = { | 184 | _radosgw_caps = { |
247 | 177 | 'mon': ['allow r'], | 185 | 'mon': ['allow r'], |
248 | 178 | 'osd': ['allow rwx'] | 186 | 'osd': ['allow rwx'] |
250 | 179 | } | 187 | } |
251 | 180 | 188 | ||
252 | 181 | 189 | ||
253 | 182 | def get_radosgw_key(): | 190 | def get_radosgw_key(): |
254 | @@ -186,7 +194,7 @@ | |||
255 | 186 | _default_caps = { | 194 | _default_caps = { |
256 | 187 | 'mon': ['allow r'], | 195 | 'mon': ['allow r'], |
257 | 188 | 'osd': ['allow rwx'] | 196 | 'osd': ['allow rwx'] |
259 | 189 | } | 197 | } |
260 | 190 | 198 | ||
261 | 191 | 199 | ||
262 | 192 | def get_named_key(name, caps=None): | 200 | def get_named_key(name, caps=None): |
263 | @@ -196,16 +204,16 @@ | |||
264 | 196 | '--name', 'mon.', | 204 | '--name', 'mon.', |
265 | 197 | '--keyring', | 205 | '--keyring', |
266 | 198 | '/var/lib/ceph/mon/ceph-{}/keyring'.format( | 206 | '/var/lib/ceph/mon/ceph-{}/keyring'.format( |
269 | 199 | utils.get_unit_hostname() | 207 | get_unit_hostname() |
270 | 200 | ), | 208 | ), |
271 | 201 | 'auth', 'get-or-create', 'client.{}'.format(name), | 209 | 'auth', 'get-or-create', 'client.{}'.format(name), |
273 | 202 | ] | 210 | ] |
274 | 203 | # Add capabilities | 211 | # Add capabilities |
275 | 204 | for subsystem, subcaps in caps.iteritems(): | 212 | for subsystem, subcaps in caps.iteritems(): |
276 | 205 | cmd.extend([ | 213 | cmd.extend([ |
277 | 206 | subsystem, | 214 | subsystem, |
278 | 207 | '; '.join(subcaps), | 215 | '; '.join(subcaps), |
280 | 208 | ]) | 216 | ]) |
281 | 209 | output = subprocess.check_output(cmd).strip() # IGNORE:E1103 | 217 | output = subprocess.check_output(cmd).strip() # IGNORE:E1103 |
282 | 210 | # get-or-create appears to have different output depending | 218 | # get-or-create appears to have different output depending |
283 | 211 | # on whether its 'get' or 'create' | 219 | # on whether its 'get' or 'create' |
284 | @@ -221,6 +229,42 @@ | |||
285 | 221 | return key | 229 | return key |
286 | 222 | 230 | ||
287 | 223 | 231 | ||
288 | 232 | def bootstrap_monitor_cluster(secret): | ||
289 | 233 | hostname = get_unit_hostname() | ||
290 | 234 | path = '/var/lib/ceph/mon/ceph-{}'.format(hostname) | ||
291 | 235 | done = '{}/done'.format(path) | ||
292 | 236 | upstart = '{}/upstart'.format(path) | ||
293 | 237 | keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname) | ||
294 | 238 | |||
295 | 239 | if os.path.exists(done): | ||
296 | 240 | log('bootstrap_monitor_cluster: mon already initialized.') | ||
297 | 241 | else: | ||
298 | 242 | # Ceph >= 0.61.3 needs this for ceph-mon fs creation | ||
299 | 243 | mkdir('/var/run/ceph', perms=0755) | ||
300 | 244 | mkdir(path) | ||
301 | 245 | # end changes for Ceph >= 0.61.3 | ||
302 | 246 | try: | ||
303 | 247 | subprocess.check_call(['ceph-authtool', keyring, | ||
304 | 248 | '--create-keyring', '--name=mon.', | ||
305 | 249 | '--add-key={}'.format(secret), | ||
306 | 250 | '--cap', 'mon', 'allow *']) | ||
307 | 251 | |||
308 | 252 | subprocess.check_call(['ceph-mon', '--mkfs', | ||
309 | 253 | '-i', hostname, | ||
310 | 254 | '--keyring', keyring]) | ||
311 | 255 | |||
312 | 256 | with open(done, 'w'): | ||
313 | 257 | pass | ||
314 | 258 | with open(upstart, 'w'): | ||
315 | 259 | pass | ||
316 | 260 | |||
317 | 261 | service_restart('ceph-mon-all') | ||
318 | 262 | except: | ||
319 | 263 | raise | ||
320 | 264 | finally: | ||
321 | 265 | os.unlink(keyring) | ||
322 | 266 | |||
323 | 267 | |||
324 | 224 | def get_ceph_version(): | 268 | def get_ceph_version(): |
325 | 225 | apt.init() | 269 | apt.init() |
326 | 226 | cache = apt.Cache() | 270 | cache = apt.Cache() |
327 | @@ -233,3 +277,59 @@ | |||
328 | 233 | 277 | ||
329 | 234 | def version_compare(a, b): | 278 | def version_compare(a, b): |
330 | 235 | return apt.version_compare(a, b) | 279 | return apt.version_compare(a, b) |
331 | 280 | |||
332 | 281 | |||
333 | 282 | def update_monfs(): | ||
334 | 283 | hostname = get_unit_hostname() | ||
335 | 284 | monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname) | ||
336 | 285 | upstart = '{}/upstart'.format(monfs) | ||
337 | 286 | if os.path.exists(monfs) and not os.path.exists(upstart): | ||
338 | 287 | # Mark mon as managed by upstart so that | ||
339 | 288 | # it gets start correctly on reboots | ||
340 | 289 | with open(upstart, 'w'): | ||
341 | 290 | pass | ||
342 | 291 | |||
343 | 292 | |||
344 | 293 | def osdize(dev, osd_format, osd_journal, reformat_osd=False): | ||
345 | 294 | if not os.path.exists(dev): | ||
346 | 295 | log('Path {} does not exist - bailing'.format(dev)) | ||
347 | 296 | return | ||
348 | 297 | |||
349 | 298 | if not is_block_device(dev): | ||
350 | 299 | log('Path {} is not a block device - bailing'.format(dev)) | ||
351 | 300 | return | ||
352 | 301 | |||
353 | 302 | if (is_osd_disk(dev) and not reformat_osd): | ||
354 | 303 | log('Looks like {} is already an OSD, skipping.'.format(dev)) | ||
355 | 304 | return | ||
356 | 305 | |||
357 | 306 | if device_mounted(dev): | ||
358 | 307 | log('Looks like {} is in use, skipping.'.format(dev)) | ||
359 | 308 | return | ||
360 | 309 | |||
361 | 310 | cmd = ['ceph-disk-prepare'] | ||
362 | 311 | # Later versions of ceph support more options | ||
363 | 312 | if get_ceph_version() >= "0.48.3": | ||
364 | 313 | if osd_format: | ||
365 | 314 | cmd.append('--fs-type') | ||
366 | 315 | cmd.append(osd_format) | ||
367 | 316 | cmd.append(dev) | ||
368 | 317 | if osd_journal and os.path.exists(osd_journal): | ||
369 | 318 | cmd.append(osd_journal) | ||
370 | 319 | else: | ||
371 | 320 | # Just provide the device - no other options | ||
372 | 321 | # for older versions of ceph | ||
373 | 322 | cmd.append(dev) | ||
374 | 323 | |||
375 | 324 | if reformat_osd: | ||
376 | 325 | zap_disk(dev) | ||
377 | 326 | |||
378 | 327 | subprocess.check_call(cmd) | ||
379 | 328 | |||
380 | 329 | |||
381 | 330 | def device_mounted(dev): | ||
382 | 331 | return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0 | ||
383 | 332 | |||
384 | 333 | |||
385 | 334 | def filesystem_mounted(fs): | ||
386 | 335 | return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 | ||
387 | 236 | 336 | ||
388 | === added directory 'hooks/charmhelpers' | |||
389 | === added file 'hooks/charmhelpers/__init__.py' | |||
390 | === added directory 'hooks/charmhelpers/contrib' | |||
391 | === added file 'hooks/charmhelpers/contrib/__init__.py' | |||
392 | === added directory 'hooks/charmhelpers/contrib/storage' | |||
393 | === added file 'hooks/charmhelpers/contrib/storage/__init__.py' | |||
394 | === added directory 'hooks/charmhelpers/contrib/storage/linux' | |||
395 | === added file 'hooks/charmhelpers/contrib/storage/linux/__init__.py' | |||
396 | === added file 'hooks/charmhelpers/contrib/storage/linux/utils.py' | |||
397 | --- hooks/charmhelpers/contrib/storage/linux/utils.py 1970-01-01 00:00:00 +0000 | |||
398 | +++ hooks/charmhelpers/contrib/storage/linux/utils.py 2013-07-08 08:34:31 +0000 | |||
399 | @@ -0,0 +1,25 @@ | |||
400 | 1 | from os import stat | ||
401 | 2 | from stat import S_ISBLK | ||
402 | 3 | |||
403 | 4 | from subprocess import ( | ||
404 | 5 | check_call | ||
405 | 6 | ) | ||
406 | 7 | |||
407 | 8 | |||
408 | 9 | def is_block_device(path): | ||
409 | 10 | ''' | ||
410 | 11 | Confirm device at path is a valid block device node. | ||
411 | 12 | |||
412 | 13 | :returns: boolean: True if path is a block device, False if not. | ||
413 | 14 | ''' | ||
414 | 15 | return S_ISBLK(stat(path).st_mode) | ||
415 | 16 | |||
416 | 17 | |||
417 | 18 | def zap_disk(block_device): | ||
418 | 19 | ''' | ||
419 | 20 | Clear a block device of partition table. Relies on sgdisk, which is | ||
420 | 21 | installed as pat of the 'gdisk' package in Ubuntu. | ||
421 | 22 | |||
422 | 23 | :param block_device: str: Full path of block device to clean. | ||
423 | 24 | ''' | ||
424 | 25 | check_call(['sgdisk', '--zap-all', block_device]) | ||
425 | 0 | 26 | ||
426 | === added directory 'hooks/charmhelpers/core' | |||
427 | === added file 'hooks/charmhelpers/core/__init__.py' | |||
428 | === added file 'hooks/charmhelpers/core/hookenv.py' | |||
429 | --- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
430 | +++ hooks/charmhelpers/core/hookenv.py 2013-07-08 08:34:31 +0000 | |||
431 | @@ -0,0 +1,334 @@ | |||
432 | 1 | "Interactions with the Juju environment" | ||
433 | 2 | # Copyright 2013 Canonical Ltd. | ||
434 | 3 | # | ||
435 | 4 | # Authors: | ||
436 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
437 | 6 | |||
438 | 7 | import os | ||
439 | 8 | import json | ||
440 | 9 | import yaml | ||
441 | 10 | import subprocess | ||
442 | 11 | import UserDict | ||
443 | 12 | |||
444 | 13 | CRITICAL = "CRITICAL" | ||
445 | 14 | ERROR = "ERROR" | ||
446 | 15 | WARNING = "WARNING" | ||
447 | 16 | INFO = "INFO" | ||
448 | 17 | DEBUG = "DEBUG" | ||
449 | 18 | MARKER = object() | ||
450 | 19 | |||
451 | 20 | cache = {} | ||
452 | 21 | |||
453 | 22 | |||
454 | 23 | def cached(func): | ||
455 | 24 | ''' Cache return values for multiple executions of func + args | ||
456 | 25 | |||
457 | 26 | For example: | ||
458 | 27 | |||
459 | 28 | @cached | ||
460 | 29 | def unit_get(attribute): | ||
461 | 30 | pass | ||
462 | 31 | |||
463 | 32 | unit_get('test') | ||
464 | 33 | |||
465 | 34 | will cache the result of unit_get + 'test' for future calls. | ||
466 | 35 | ''' | ||
467 | 36 | def wrapper(*args, **kwargs): | ||
468 | 37 | global cache | ||
469 | 38 | key = str((func, args, kwargs)) | ||
470 | 39 | try: | ||
471 | 40 | return cache[key] | ||
472 | 41 | except KeyError: | ||
473 | 42 | res = func(*args, **kwargs) | ||
474 | 43 | cache[key] = res | ||
475 | 44 | return res | ||
476 | 45 | return wrapper | ||
477 | 46 | |||
478 | 47 | |||
479 | 48 | def flush(key): | ||
480 | 49 | ''' Flushes any entries from function cache where the | ||
481 | 50 | key is found in the function+args ''' | ||
482 | 51 | flush_list = [] | ||
483 | 52 | for item in cache: | ||
484 | 53 | if key in item: | ||
485 | 54 | flush_list.append(item) | ||
486 | 55 | for item in flush_list: | ||
487 | 56 | del cache[item] | ||
488 | 57 | |||
489 | 58 | |||
490 | 59 | def log(message, level=None): | ||
491 | 60 | "Write a message to the juju log" | ||
492 | 61 | command = ['juju-log'] | ||
493 | 62 | if level: | ||
494 | 63 | command += ['-l', level] | ||
495 | 64 | command += [message] | ||
496 | 65 | subprocess.call(command) | ||
497 | 66 | |||
498 | 67 | |||
499 | 68 | class Serializable(UserDict.IterableUserDict): | ||
500 | 69 | "Wrapper, an object that can be serialized to yaml or json" | ||
501 | 70 | |||
502 | 71 | def __init__(self, obj): | ||
503 | 72 | # wrap the object | ||
504 | 73 | UserDict.IterableUserDict.__init__(self) | ||
505 | 74 | self.data = obj | ||
506 | 75 | |||
507 | 76 | def __getattr__(self, attr): | ||
508 | 77 | # See if this object has attribute. | ||
509 | 78 | if attr in ("json", "yaml", "data"): | ||
510 | 79 | return self.__dict__[attr] | ||
511 | 80 | # Check for attribute in wrapped object. | ||
512 | 81 | got = getattr(self.data, attr, MARKER) | ||
513 | 82 | if got is not MARKER: | ||
514 | 83 | return got | ||
515 | 84 | # Proxy to the wrapped object via dict interface. | ||
516 | 85 | try: | ||
517 | 86 | return self.data[attr] | ||
518 | 87 | except KeyError: | ||
519 | 88 | raise AttributeError(attr) | ||
520 | 89 | |||
521 | 90 | def __getstate__(self): | ||
522 | 91 | # Pickle as a standard dictionary. | ||
523 | 92 | return self.data | ||
524 | 93 | |||
525 | 94 | def __setstate__(self, state): | ||
526 | 95 | # Unpickle into our wrapper. | ||
527 | 96 | self.data = state | ||
528 | 97 | |||
529 | 98 | def json(self): | ||
530 | 99 | "Serialize the object to json" | ||
531 | 100 | return json.dumps(self.data) | ||
532 | 101 | |||
533 | 102 | def yaml(self): | ||
534 | 103 | "Serialize the object to yaml" | ||
535 | 104 | return yaml.dump(self.data) | ||
536 | 105 | |||
537 | 106 | |||
538 | 107 | def execution_environment(): | ||
539 | 108 | """A convenient bundling of the current execution context""" | ||
540 | 109 | context = {} | ||
541 | 110 | context['conf'] = config() | ||
542 | 111 | if relation_id(): | ||
543 | 112 | context['reltype'] = relation_type() | ||
544 | 113 | context['relid'] = relation_id() | ||
545 | 114 | context['rel'] = relation_get() | ||
546 | 115 | context['unit'] = local_unit() | ||
547 | 116 | context['rels'] = relations() | ||
548 | 117 | context['env'] = os.environ | ||
549 | 118 | return context | ||
550 | 119 | |||
551 | 120 | |||
552 | 121 | def in_relation_hook(): | ||
553 | 122 | "Determine whether we're running in a relation hook" | ||
554 | 123 | return 'JUJU_RELATION' in os.environ | ||
555 | 124 | |||
556 | 125 | |||
557 | 126 | def relation_type(): | ||
558 | 127 | "The scope for the current relation hook" | ||
559 | 128 | return os.environ.get('JUJU_RELATION', None) | ||
560 | 129 | |||
561 | 130 | |||
562 | 131 | def relation_id(): | ||
563 | 132 | "The relation ID for the current relation hook" | ||
564 | 133 | return os.environ.get('JUJU_RELATION_ID', None) | ||
565 | 134 | |||
566 | 135 | |||
567 | 136 | def local_unit(): | ||
568 | 137 | "Local unit ID" | ||
569 | 138 | return os.environ['JUJU_UNIT_NAME'] | ||
570 | 139 | |||
571 | 140 | |||
572 | 141 | def remote_unit(): | ||
573 | 142 | "The remote unit for the current relation hook" | ||
574 | 143 | return os.environ['JUJU_REMOTE_UNIT'] | ||
575 | 144 | |||
576 | 145 | |||
577 | 146 | @cached | ||
578 | 147 | def config(scope=None): | ||
579 | 148 | "Juju charm configuration" | ||
580 | 149 | config_cmd_line = ['config-get'] | ||
581 | 150 | if scope is not None: | ||
582 | 151 | config_cmd_line.append(scope) | ||
583 | 152 | config_cmd_line.append('--format=json') | ||
584 | 153 | try: | ||
585 | 154 | return json.loads(subprocess.check_output(config_cmd_line)) | ||
586 | 155 | except ValueError: | ||
587 | 156 | return None | ||
588 | 157 | |||
589 | 158 | |||
590 | 159 | @cached | ||
591 | 160 | def relation_get(attribute=None, unit=None, rid=None): | ||
592 | 161 | _args = ['relation-get', '--format=json'] | ||
593 | 162 | if rid: | ||
594 | 163 | _args.append('-r') | ||
595 | 164 | _args.append(rid) | ||
596 | 165 | _args.append(attribute or '-') | ||
597 | 166 | if unit: | ||
598 | 167 | _args.append(unit) | ||
599 | 168 | try: | ||
600 | 169 | return json.loads(subprocess.check_output(_args)) | ||
601 | 170 | except ValueError: | ||
602 | 171 | return None | ||
603 | 172 | |||
604 | 173 | |||
605 | 174 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
606 | 175 | relation_cmd_line = ['relation-set'] | ||
607 | 176 | if relation_id is not None: | ||
608 | 177 | relation_cmd_line.extend(('-r', relation_id)) | ||
609 | 178 | for k, v in (relation_settings.items() + kwargs.items()): | ||
610 | 179 | if v is None: | ||
611 | 180 | relation_cmd_line.append('{}='.format(k)) | ||
612 | 181 | else: | ||
613 | 182 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
614 | 183 | subprocess.check_call(relation_cmd_line) | ||
615 | 184 | # Flush cache of any relation-gets for local unit | ||
616 | 185 | flush(local_unit()) | ||
617 | 186 | |||
618 | 187 | |||
619 | 188 | @cached | ||
620 | 189 | def relation_ids(reltype=None): | ||
621 | 190 | "A list of relation_ids" | ||
622 | 191 | reltype = reltype or relation_type() | ||
623 | 192 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
624 | 193 | if reltype is not None: | ||
625 | 194 | relid_cmd_line.append(reltype) | ||
626 | 195 | return json.loads(subprocess.check_output(relid_cmd_line)) | ||
627 | 196 | return [] | ||
628 | 197 | |||
629 | 198 | |||
630 | 199 | @cached | ||
631 | 200 | def related_units(relid=None): | ||
632 | 201 | "A list of related units" | ||
633 | 202 | relid = relid or relation_id() | ||
634 | 203 | units_cmd_line = ['relation-list', '--format=json'] | ||
635 | 204 | if relid is not None: | ||
636 | 205 | units_cmd_line.extend(('-r', relid)) | ||
637 | 206 | return json.loads(subprocess.check_output(units_cmd_line)) | ||
638 | 207 | |||
639 | 208 | |||
640 | 209 | @cached | ||
641 | 210 | def relation_for_unit(unit=None, rid=None): | ||
642 | 211 | "Get the json represenation of a unit's relation" | ||
643 | 212 | unit = unit or remote_unit() | ||
644 | 213 | relation = relation_get(unit=unit, rid=rid) | ||
645 | 214 | for key in relation: | ||
646 | 215 | if key.endswith('-list'): | ||
647 | 216 | relation[key] = relation[key].split() | ||
648 | 217 | relation['__unit__'] = unit | ||
649 | 218 | return relation | ||
650 | 219 | |||
651 | 220 | |||
652 | 221 | @cached | ||
653 | 222 | def relations_for_id(relid=None): | ||
654 | 223 | "Get relations of a specific relation ID" | ||
655 | 224 | relation_data = [] | ||
656 | 225 | relid = relid or relation_ids() | ||
657 | 226 | for unit in related_units(relid): | ||
658 | 227 | unit_data = relation_for_unit(unit, relid) | ||
659 | 228 | unit_data['__relid__'] = relid | ||
660 | 229 | relation_data.append(unit_data) | ||
661 | 230 | return relation_data | ||
662 | 231 | |||
663 | 232 | |||
664 | 233 | @cached | ||
665 | 234 | def relations_of_type(reltype=None): | ||
666 | 235 | "Get relations of a specific type" | ||
667 | 236 | relation_data = [] | ||
668 | 237 | reltype = reltype or relation_type() | ||
669 | 238 | for relid in relation_ids(reltype): | ||
670 | 239 | for relation in relations_for_id(relid): | ||
671 | 240 | relation['__relid__'] = relid | ||
672 | 241 | relation_data.append(relation) | ||
673 | 242 | return relation_data | ||
674 | 243 | |||
675 | 244 | |||
676 | 245 | @cached | ||
677 | 246 | def relation_types(): | ||
678 | 247 | "Get a list of relation types supported by this charm" | ||
679 | 248 | charmdir = os.environ.get('CHARM_DIR', '') | ||
680 | 249 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
681 | 250 | md = yaml.safe_load(mdf) | ||
682 | 251 | rel_types = [] | ||
683 | 252 | for key in ('provides', 'requires', 'peers'): | ||
684 | 253 | section = md.get(key) | ||
685 | 254 | if section: | ||
686 | 255 | rel_types.extend(section.keys()) | ||
687 | 256 | mdf.close() | ||
688 | 257 | return rel_types | ||
689 | 258 | |||
690 | 259 | |||
691 | 260 | @cached | ||
692 | 261 | def relations(): | ||
693 | 262 | rels = {} | ||
694 | 263 | for reltype in relation_types(): | ||
695 | 264 | relids = {} | ||
696 | 265 | for relid in relation_ids(reltype): | ||
697 | 266 | units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} | ||
698 | 267 | for unit in related_units(relid): | ||
699 | 268 | reldata = relation_get(unit=unit, rid=relid) | ||
700 | 269 | units[unit] = reldata | ||
701 | 270 | relids[relid] = units | ||
702 | 271 | rels[reltype] = relids | ||
703 | 272 | return rels | ||
704 | 273 | |||
705 | 274 | |||
706 | 275 | def open_port(port, protocol="TCP"): | ||
707 | 276 | "Open a service network port" | ||
708 | 277 | _args = ['open-port'] | ||
709 | 278 | _args.append('{}/{}'.format(port, protocol)) | ||
710 | 279 | subprocess.check_call(_args) | ||
711 | 280 | |||
712 | 281 | |||
713 | 282 | def close_port(port, protocol="TCP"): | ||
714 | 283 | "Close a service network port" | ||
715 | 284 | _args = ['close-port'] | ||
716 | 285 | _args.append('{}/{}'.format(port, protocol)) | ||
717 | 286 | subprocess.check_call(_args) | ||
718 | 287 | |||
719 | 288 | |||
720 | 289 | @cached | ||
721 | 290 | def unit_get(attribute): | ||
722 | 291 | _args = ['unit-get', '--format=json', attribute] | ||
723 | 292 | try: | ||
724 | 293 | return json.loads(subprocess.check_output(_args)) | ||
725 | 294 | except ValueError: | ||
726 | 295 | return None | ||
727 | 296 | |||
728 | 297 | |||
729 | 298 | def unit_private_ip(): | ||
730 | 299 | return unit_get('private-address') | ||
731 | 300 | |||
732 | 301 | |||
733 | 302 | class UnregisteredHookError(Exception): | ||
734 | 303 | pass | ||
735 | 304 | |||
736 | 305 | |||
737 | 306 | class Hooks(object): | ||
738 | 307 | def __init__(self): | ||
739 | 308 | super(Hooks, self).__init__() | ||
740 | 309 | self._hooks = {} | ||
741 | 310 | |||
742 | 311 | def register(self, name, function): | ||
743 | 312 | self._hooks[name] = function | ||
744 | 313 | |||
745 | 314 | def execute(self, args): | ||
746 | 315 | hook_name = os.path.basename(args[0]) | ||
747 | 316 | if hook_name in self._hooks: | ||
748 | 317 | self._hooks[hook_name]() | ||
749 | 318 | else: | ||
750 | 319 | raise UnregisteredHookError(hook_name) | ||
751 | 320 | |||
752 | 321 | def hook(self, *hook_names): | ||
753 | 322 | def wrapper(decorated): | ||
754 | 323 | for hook_name in hook_names: | ||
755 | 324 | self.register(hook_name, decorated) | ||
756 | 325 | else: | ||
757 | 326 | self.register(decorated.__name__, decorated) | ||
758 | 327 | if '_' in decorated.__name__: | ||
759 | 328 | self.register( | ||
760 | 329 | decorated.__name__.replace('_', '-'), decorated) | ||
761 | 330 | return decorated | ||
762 | 331 | return wrapper | ||
763 | 332 | |||
764 | 333 | def charm_dir(): | ||
765 | 334 | return os.environ.get('CHARM_DIR') | ||
766 | 0 | 335 | ||
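The `Hooks` dispatcher added above is the heart of this refactoring: each hook becomes a decorated function, and `execute()` selects the handler from the invoking script's basename. A condensed, standalone sketch of that pattern (the handler name and invocation path below are illustrative, not taken from the charm):

```python
import os


class UnregisteredHookError(Exception):
    pass


class Hooks(object):
    """Reproduction of the dispatcher from hookenv.py above."""
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def execute(self, args):
        # Juju invokes each hook through a symlink named after the hook,
        # so the basename of argv[0] picks the handler.
        hook_name = os.path.basename(args[0])
        if hook_name in self._hooks:
            self._hooks[hook_name]()
        else:
            raise UnregisteredHookError(hook_name)

    def hook(self, *hook_names):
        def wrapper(decorated):
            for hook_name in hook_names:
                self.register(hook_name, decorated)
            else:
                # for/else: the loop never breaks, so the function is also
                # registered under its own name and its dashed variant.
                self.register(decorated.__name__, decorated)
                if '_' in decorated.__name__:
                    self.register(
                        decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper


hooks = Hooks()
calls = []


@hooks.hook('config-changed')
def config_changed():
    calls.append('config-changed')


# Simulate Juju running the hook via its symlink.
hooks.execute(['hooks/config-changed'])
print(calls)  # ['config-changed']
```

This is why the rewritten hooks.py below can end with a single `hooks.execute(sys.argv)` entry point instead of a hand-written name-to-function table.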
767 | === added file 'hooks/charmhelpers/core/host.py' | |||
768 | --- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
769 | +++ hooks/charmhelpers/core/host.py 2013-07-08 08:34:31 +0000 | |||
770 | @@ -0,0 +1,273 @@ | |||
771 | 1 | """Tools for working with the host system""" | ||
772 | 2 | # Copyright 2012 Canonical Ltd. | ||
773 | 3 | # | ||
774 | 4 | # Authors: | ||
775 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
776 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
777 | 7 | |||
778 | 8 | import apt_pkg | ||
779 | 9 | import os | ||
780 | 10 | import pwd | ||
781 | 11 | import grp | ||
782 | 12 | import subprocess | ||
783 | 13 | import hashlib | ||
784 | 14 | |||
785 | 15 | from collections import OrderedDict | ||
786 | 16 | |||
787 | 17 | from hookenv import log, execution_environment | ||
788 | 18 | |||
789 | 19 | |||
790 | 20 | def service_start(service_name): | ||
791 | 21 | service('start', service_name) | ||
792 | 22 | |||
793 | 23 | |||
794 | 24 | def service_stop(service_name): | ||
795 | 25 | service('stop', service_name) | ||
796 | 26 | |||
797 | 27 | |||
798 | 28 | def service_restart(service_name): | ||
799 | 29 | service('restart', service_name) | ||
800 | 30 | |||
801 | 31 | |||
802 | 32 | def service_reload(service_name, restart_on_failure=False): | ||
803 | 33 | if not service('reload', service_name) and restart_on_failure: | ||
804 | 34 | service('restart', service_name) | ||
805 | 35 | |||
806 | 36 | |||
807 | 37 | def service(action, service_name): | ||
808 | 38 | cmd = ['service', service_name, action] | ||
809 | 39 | return subprocess.call(cmd) == 0 | ||
810 | 40 | |||
811 | 41 | |||
812 | 42 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | ||
813 | 43 | """Add a user""" | ||
814 | 44 | try: | ||
815 | 45 | user_info = pwd.getpwnam(username) | ||
816 | 46 | log('user {0} already exists!'.format(username)) | ||
817 | 47 | except KeyError: | ||
818 | 48 | log('creating user {0}'.format(username)) | ||
819 | 49 | cmd = ['useradd'] | ||
820 | 50 | if system_user or password is None: | ||
821 | 51 | cmd.append('--system') | ||
822 | 52 | else: | ||
823 | 53 | cmd.extend([ | ||
824 | 54 | '--create-home', | ||
825 | 55 | '--shell', shell, | ||
826 | 56 | '--password', password, | ||
827 | 57 | ]) | ||
828 | 58 | cmd.append(username) | ||
829 | 59 | subprocess.check_call(cmd) | ||
830 | 60 | user_info = pwd.getpwnam(username) | ||
831 | 61 | return user_info | ||
832 | 62 | |||
833 | 63 | |||
834 | 64 | def add_user_to_group(username, group): | ||
835 | 65 | """Add a user to a group""" | ||
836 | 66 | cmd = [ | ||
837 | 67 | 'gpasswd', '-a', | ||
838 | 68 | username, | ||
839 | 69 | group | ||
840 | 70 | ] | ||
841 | 71 | log("Adding user {} to group {}".format(username, group)) | ||
842 | 72 | subprocess.check_call(cmd) | ||
843 | 73 | |||
844 | 74 | |||
845 | 75 | def rsync(from_path, to_path, flags='-r', options=None): | ||
846 | 76 | """Replicate the contents of a path""" | ||
847 | 77 | context = execution_environment() | ||
848 | 78 | options = options or ['--delete', '--executability'] | ||
849 | 79 | cmd = ['/usr/bin/rsync', flags] | ||
850 | 80 | cmd.extend(options) | ||
851 | 81 | cmd.append(from_path.format(**context)) | ||
852 | 82 | cmd.append(to_path.format(**context)) | ||
853 | 83 | log(" ".join(cmd)) | ||
854 | 84 | return subprocess.check_output(cmd).strip() | ||
855 | 85 | |||
856 | 86 | |||
857 | 87 | def symlink(source, destination): | ||
858 | 88 | """Create a symbolic link""" | ||
859 | 89 | context = execution_environment() | ||
860 | 90 | log("Symlinking {} as {}".format(source, destination)) | ||
861 | 91 | cmd = [ | ||
862 | 92 | 'ln', | ||
863 | 93 | '-sf', | ||
864 | 94 | source.format(**context), | ||
865 | 95 | destination.format(**context) | ||
866 | 96 | ] | ||
867 | 97 | subprocess.check_call(cmd) | ||
868 | 98 | |||
869 | 99 | |||
870 | 100 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
871 | 101 | """Create a directory""" | ||
872 | 102 | context = execution_environment() | ||
873 | 103 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
874 | 104 | perms)) | ||
875 | 105 | uid = pwd.getpwnam(owner.format(**context)).pw_uid | ||
876 | 106 | gid = grp.getgrnam(group.format(**context)).gr_gid | ||
877 | 107 | realpath = os.path.abspath(path) | ||
878 | 108 | if os.path.exists(realpath): | ||
879 | 109 | if force and not os.path.isdir(realpath): | ||
880 | 110 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
881 | 111 | os.unlink(realpath) | ||
882 | 112 | else: | ||
883 | 113 | os.makedirs(realpath, perms) | ||
884 | 114 | os.chown(realpath, uid, gid) | ||
885 | 115 | |||
886 | 116 | |||
887 | 117 | def write_file(path, fmtstr, owner='root', group='root', perms=0444, **kwargs): | ||
888 | 118 | """Create or overwrite a file with the contents of a string""" | ||
889 | 119 | context = execution_environment() | ||
890 | 120 | context.update(kwargs) | ||
891 | 121 | log("Writing file {} {}:{} {:o}".format(path, owner, group, | ||
892 | 122 | perms)) | ||
893 | 123 | uid = pwd.getpwnam(owner.format(**context)).pw_uid | ||
894 | 124 | gid = grp.getgrnam(group.format(**context)).gr_gid | ||
895 | 125 | with open(path.format(**context), 'w') as target: | ||
896 | 126 | os.fchown(target.fileno(), uid, gid) | ||
897 | 127 | os.fchmod(target.fileno(), perms) | ||
898 | 128 | target.write(fmtstr.format(**context)) | ||
899 | 129 | |||
900 | 130 | |||
901 | 131 | def render_template_file(source, destination, **kwargs): | ||
902 | 132 | """Create or overwrite a file using a template""" | ||
903 | 133 | log("Rendering template {} for {}".format(source, | ||
904 | 134 | destination)) | ||
905 | 135 | context = execution_environment() | ||
906 | 136 | with open(source.format(**context), 'r') as template: | ||
907 | 137 | write_file(destination.format(**context), template.read(), | ||
908 | 138 | **kwargs) | ||
909 | 139 | |||
910 | 140 | |||
911 | 141 | def filter_installed_packages(packages): | ||
912 | 142 | """Returns a list of packages that require installation""" | ||
913 | 143 | apt_pkg.init() | ||
914 | 144 | cache = apt_pkg.Cache() | ||
915 | 145 | _pkgs = [] | ||
916 | 146 | for package in packages: | ||
917 | 147 | try: | ||
918 | 148 | p = cache[package] | ||
919 | 149 | p.current_ver or _pkgs.append(package) | ||
920 | 150 | except KeyError: | ||
921 | 151 | log('Package {} has no installation candidate.'.format(package), | ||
922 | 152 | level='WARNING') | ||
923 | 153 | _pkgs.append(package) | ||
924 | 154 | return _pkgs | ||
925 | 155 | |||
926 | 156 | |||
927 | 157 | def apt_install(packages, options=None, fatal=False): | ||
928 | 158 | """Install one or more packages""" | ||
929 | 159 | options = options or [] | ||
930 | 160 | cmd = ['apt-get', '-y'] | ||
931 | 161 | cmd.extend(options) | ||
932 | 162 | cmd.append('install') | ||
933 | 163 | if isinstance(packages, basestring): | ||
934 | 164 | cmd.append(packages) | ||
935 | 165 | else: | ||
936 | 166 | cmd.extend(packages) | ||
937 | 167 | log("Installing {} with options: {}".format(packages, | ||
938 | 168 | options)) | ||
939 | 169 | if fatal: | ||
940 | 170 | subprocess.check_call(cmd) | ||
941 | 171 | else: | ||
942 | 172 | subprocess.call(cmd) | ||
943 | 173 | |||
944 | 174 | |||
945 | 175 | def apt_update(fatal=False): | ||
946 | 176 | """Update local apt cache""" | ||
947 | 177 | cmd = ['apt-get', 'update'] | ||
948 | 178 | if fatal: | ||
949 | 179 | subprocess.check_call(cmd) | ||
950 | 180 | else: | ||
951 | 181 | subprocess.call(cmd) | ||
952 | 182 | |||
953 | 183 | |||
954 | 184 | def mount(device, mountpoint, options=None, persist=False): | ||
955 | 185 | '''Mount a filesystem''' | ||
956 | 186 | cmd_args = ['mount'] | ||
957 | 187 | if options is not None: | ||
958 | 188 | cmd_args.extend(['-o', options]) | ||
959 | 189 | cmd_args.extend([device, mountpoint]) | ||
960 | 190 | try: | ||
961 | 191 | subprocess.check_output(cmd_args) | ||
962 | 192 | except subprocess.CalledProcessError, e: | ||
963 | 193 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
964 | 194 | return False | ||
965 | 195 | if persist: | ||
966 | 196 | # TODO: update fstab | ||
967 | 197 | pass | ||
968 | 198 | return True | ||
969 | 199 | |||
970 | 200 | |||
971 | 201 | def umount(mountpoint, persist=False): | ||
972 | 202 | '''Unmount a filesystem''' | ||
973 | 203 | cmd_args = ['umount', mountpoint] | ||
974 | 204 | try: | ||
975 | 205 | subprocess.check_output(cmd_args) | ||
976 | 206 | except subprocess.CalledProcessError, e: | ||
977 | 207 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
978 | 208 | return False | ||
979 | 209 | if persist: | ||
980 | 210 | # TODO: update fstab | ||
981 | 211 | pass | ||
982 | 212 | return True | ||
983 | 213 | |||
984 | 214 | |||
985 | 215 | def mounts(): | ||
986 | 216 | '''List of all mounted volumes as [[mountpoint,device],[...]]''' | ||
987 | 217 | with open('/proc/mounts') as f: | ||
988 | 218 | # [['/mount/point','/dev/path'],[...]] | ||
989 | 219 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
990 | 220 | for l in f.readlines()]] | ||
991 | 221 | return system_mounts | ||
992 | 222 | |||
993 | 223 | |||
994 | 224 | def file_hash(path): | ||
995 | 225 | ''' Generate an md5 hash of the contents of 'path' or None if not found ''' | ||
996 | 226 | if os.path.exists(path): | ||
997 | 227 | h = hashlib.md5() | ||
998 | 228 | with open(path, 'r') as source: | ||
999 | 229 | h.update(source.read()) # IGNORE:E1101 - it does have update | ||
1000 | 230 | return h.hexdigest() | ||
1001 | 231 | else: | ||
1002 | 232 | return None | ||
1003 | 233 | |||
1004 | 234 | |||
1005 | 235 | def restart_on_change(restart_map): | ||
1006 | 236 | ''' Restart services based on configuration files changing | ||
1007 | 237 | |||
1008 | 238 | This function is used as a decorator, for example | ||
1009 | 239 | |||
1010 | 240 | @restart_on_change({ | ||
1011 | 241 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | ||
1012 | 242 | }) | ||
1013 | 243 | def ceph_client_changed(): | ||
1014 | 244 | ... | ||
1015 | 245 | |||
1016 | 246 | In this example, the cinder-api and cinder-volume services | ||
1017 | 247 | would be restarted if /etc/ceph/ceph.conf is changed by the | ||
1018 | 248 | ceph_client_changed function. | ||
1019 | 249 | ''' | ||
1020 | 250 | def wrap(f): | ||
1021 | 251 | def wrapped_f(*args): | ||
1022 | 252 | checksums = {} | ||
1023 | 253 | for path in restart_map: | ||
1024 | 254 | checksums[path] = file_hash(path) | ||
1025 | 255 | f(*args) | ||
1026 | 256 | restarts = [] | ||
1027 | 257 | for path in restart_map: | ||
1028 | 258 | if checksums[path] != file_hash(path): | ||
1029 | 259 | restarts += restart_map[path] | ||
1030 | 260 | for service_name in list(OrderedDict.fromkeys(restarts)): | ||
1031 | 261 | service('restart', service_name) | ||
1032 | 262 | return wrapped_f | ||
1033 | 263 | return wrap | ||
1034 | 264 | |||
1035 | 265 | |||
1036 | 266 | def lsb_release(): | ||
1037 | 267 | '''Return /etc/lsb-release in a dict''' | ||
1038 | 268 | d = {} | ||
1039 | 269 | with open('/etc/lsb-release', 'r') as lsb: | ||
1040 | 270 | for l in lsb: | ||
1041 | 271 | k, v = l.split('=') | ||
1042 | 272 | d[k.strip()] = v.strip() | ||
1043 | 273 | return d | ||
1044 | 0 | 274 | ||
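The `restart_on_change` decorator at the end of host.py is the piece a charm leans on to bounce daemons when a rendered config such as `/etc/ceph/ceph.conf` changes. A self-contained sketch of the same hash-before/hash-after logic, with a stubbed `service()` and a temporary file standing in for a real config (the service names are illustrative):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

restarted = []


def service(action, service_name):
    # Stub for the service() wrapper in host.py; records calls instead.
    restarted.append((action, service_name))
    return True


def file_hash(path):
    """md5 of a file's contents, or None if the file does not exist."""
    if os.path.exists(path):
        with open(path, 'rb') as source:
            return hashlib.md5(source.read()).hexdigest()
    return None


def restart_on_change(restart_map):
    """Same shape as the decorator above: hash the watched files, run the
    wrapped hook, then restart the services whose files changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as the original does.
            for service_name in list(OrderedDict.fromkeys(restarts)):
                service('restart', service_name)
        return wrapped_f
    return wrap


conf = tempfile.NamedTemporaryFile(delete=False)
conf.write(b'original contents')
conf.close()


@restart_on_change({conf.name: ['ceph-mon', 'ceph-osd']})
def fake_hook():
    # A hook body that rewrites the watched file.
    with open(conf.name, 'w') as f:
        f.write('new contents')


fake_hook()
os.unlink(conf.name)
print(restarted)  # [('restart', 'ceph-mon'), ('restart', 'ceph-osd')]
```

If the hook leaves the file untouched, the checksums match and no restart is issued, which is what makes the decorator safe to put on frequently-run hooks like config-changed.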
1045 | === added directory 'hooks/charmhelpers/fetch' | |||
1046 | === added file 'hooks/charmhelpers/fetch/__init__.py' | |||
1047 | --- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
1048 | +++ hooks/charmhelpers/fetch/__init__.py 2013-07-08 08:34:31 +0000 | |||
1049 | @@ -0,0 +1,152 @@ | |||
1050 | 1 | import importlib | ||
1051 | 2 | from yaml import safe_load | ||
1052 | 3 | from charmhelpers.core.host import ( | ||
1053 | 4 | apt_install, | ||
1054 | 5 | apt_update, | ||
1055 | 6 | filter_installed_packages, | ||
1056 | 7 | lsb_release | ||
1057 | 8 | ) | ||
1058 | 9 | from urlparse import ( | ||
1059 | 10 | urlparse, | ||
1060 | 11 | urlunparse, | ||
1061 | 12 | ) | ||
1062 | 13 | import subprocess | ||
1063 | 14 | from charmhelpers.core.hookenv import ( | ||
1064 | 15 | config, | ||
1065 | 16 | log, | ||
1066 | 17 | ) | ||
1067 | 18 | |||
1068 | 19 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
1069 | 20 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1070 | 21 | """ | ||
1071 | 22 | PROPOSED_POCKET = """# Proposed | ||
1072 | 23 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
1073 | 24 | """ | ||
1074 | 25 | |||
1075 | 26 | |||
1076 | 27 | def add_source(source, key=None): | ||
1077 | 28 | if ((source.startswith('ppa:') or | ||
1078 | 29 | source.startswith('http:'))): | ||
1079 | 30 | subprocess.check_call(['add-apt-repository', source]) | ||
1080 | 31 | elif source.startswith('cloud:'): | ||
1081 | 32 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
1082 | 33 | fatal=True) | ||
1083 | 34 | pocket = source.split(':')[-1] | ||
1084 | 35 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1085 | 36 | apt.write(CLOUD_ARCHIVE.format(pocket)) | ||
1086 | 37 | elif source == 'proposed': | ||
1087 | 38 | release = lsb_release()['DISTRIB_CODENAME'] | ||
1088 | 39 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
1089 | 40 | apt.write(PROPOSED_POCKET.format(release)) | ||
1090 | 41 | if key: | ||
1091 | 42 | subprocess.check_call(['apt-key', 'import', key]) | ||
1092 | 43 | |||
1093 | 44 | |||
1094 | 45 | class SourceConfigError(Exception): | ||
1095 | 46 | pass | ||
1096 | 47 | |||
1097 | 48 | |||
1098 | 49 | def configure_sources(update=False, | ||
1099 | 50 | sources_var='install_sources', | ||
1100 | 51 | keys_var='install_keys'): | ||
1101 | 52 | """ | ||
1102 | 53 | Configure multiple sources from charm configuration | ||
1103 | 54 | |||
1104 | 55 | Example config: | ||
1105 | 56 | install_sources: | ||
1106 | 57 | - "ppa:foo" | ||
1107 | 58 | - "http://example.com/repo precise main" | ||
1108 | 59 | install_keys: | ||
1109 | 60 | - null | ||
1110 | 61 | - "a1b2c3d4" | ||
1111 | 62 | |||
1112 | 63 | Note that 'null' (a.k.a. None) should not be quoted. | ||
1113 | 64 | """ | ||
1114 | 65 | sources = safe_load(config(sources_var)) | ||
1115 | 66 | keys = safe_load(config(keys_var)) | ||
1116 | 67 | if isinstance(sources, basestring) and isinstance(keys, basestring): | ||
1117 | 68 | add_source(sources, keys) | ||
1118 | 69 | else: | ||
1119 | 70 | if not len(sources) == len(keys): | ||
1120 | 71 | msg = 'Install sources and keys lists are different lengths' | ||
1121 | 72 | raise SourceConfigError(msg) | ||
1122 | 73 | for src_num in range(len(sources)): | ||
1123 | 74 | add_source(sources[src_num], keys[src_num]) | ||
1124 | 75 | if update: | ||
1125 | 76 | apt_update(fatal=True) | ||
1126 | 77 | |||
1127 | 78 | # The order of this list is very important. Handlers should be listed from | ||
1128 | 79 | # least- to most-specific URL matching. | ||
1129 | 80 | FETCH_HANDLERS = ( | ||
1130 | 81 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1131 | 82 | ) | ||
1132 | 83 | |||
1133 | 84 | |||
1134 | 85 | class UnhandledSource(Exception): | ||
1135 | 86 | pass | ||
1136 | 87 | |||
1137 | 88 | |||
1138 | 89 | def install_remote(source): | ||
1139 | 90 | """ | ||
1140 | 91 | Install a file tree from a remote source | ||
1141 | 92 | |||
1142 | 93 | The specified source should be a url of the form: | ||
1143 | 94 | scheme://[host]/path[#[option=value][&...]] | ||
1144 | 95 | |||
1145 | 96 | Schemes supported are based on this module's submodules | ||
1146 | 97 | Options supported are submodule-specific""" | ||
1147 | 98 | # We ONLY check for True here because can_handle may return a string | ||
1148 | 99 | # explaining why it can't handle a given source. | ||
1149 | 100 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
1150 | 101 | for handler in handlers: | ||
1151 | 102 | try: | ||
1152 | 103 | installed_to = handler.install(source) | ||
1153 | 104 | except UnhandledSource: | ||
1154 | 105 | pass | ||
1155 | 106 | if not installed_to: | ||
1156 | 107 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
1157 | 108 | return installed_to | ||
1158 | 109 | |||
1159 | 110 | |||
1160 | 111 | def install_from_config(config_var_name): | ||
1161 | 112 | charm_config = config() | ||
1162 | 113 | source = charm_config[config_var_name] | ||
1163 | 114 | return install_remote(source) | ||
1164 | 115 | |||
1165 | 116 | |||
1166 | 117 | class BaseFetchHandler(object): | ||
1167 | 118 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1168 | 119 | def can_handle(self, source): | ||
1169 | 120 | """Returns True if the source can be handled. Otherwise returns | ||
1170 | 121 | a string explaining why it cannot""" | ||
1171 | 122 | return "Wrong source type" | ||
1172 | 123 | |||
1173 | 124 | def install(self, source): | ||
1174 | 125 | """Try to download and unpack the source. Return the path to the | ||
1175 | 126 | unpacked files or raise UnhandledSource.""" | ||
1176 | 127 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1177 | 128 | |||
1178 | 129 | def parse_url(self, url): | ||
1179 | 130 | return urlparse(url) | ||
1180 | 131 | |||
1181 | 132 | def base_url(self, url): | ||
1182 | 133 | """Return url without querystring or fragment""" | ||
1183 | 134 | parts = list(self.parse_url(url)) | ||
1184 | 135 | parts[4:] = ['' for i in parts[4:]] | ||
1185 | 136 | return urlunparse(parts) | ||
1186 | 137 | |||
1187 | 138 | |||
1188 | 139 | def plugins(fetch_handlers=None): | ||
1189 | 140 | if not fetch_handlers: | ||
1190 | 141 | fetch_handlers = FETCH_HANDLERS | ||
1191 | 142 | plugin_list = [] | ||
1192 | 143 | for handler_name in fetch_handlers: | ||
1193 | 144 | package, classname = handler_name.rsplit('.', 1) | ||
1194 | 145 | try: | ||
1195 | 146 | handler_class = getattr(importlib.import_module(package), classname) | ||
1196 | 147 | plugin_list.append(handler_class()) | ||
1197 | 148 | except (ImportError, AttributeError): | ||
1198 | 149 | # Skip missing plugins so that they can be omitted from | ||
1199 | 150 | # installation if desired | ||
1200 | 151 | log("FetchHandler {} not found, skipping plugin".format(handler_name)) | ||
1201 | 152 | return plugin_list | ||
1202 | 0 | 153 | ||
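The `plugins()` loader above resolves each dotted name in `FETCH_HANDLERS` at import time and silently skips anything that fails to load, so optional fetch backends can be omitted from a charm payload. A minimal reproduction of that dynamic-loading pattern using stdlib classes as stand-ins (the handler names below are placeholders, not real fetch handlers):

```python
import importlib

# Stand-ins for fetch handler classes; the second entry is deliberately
# missing so the skip path is exercised.
FETCH_HANDLERS = (
    'collections.OrderedDict',
    'no.such.Handler',
)


def plugins(fetch_handlers=FETCH_HANDLERS):
    plugin_list = []
    for handler_name in fetch_handlers:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(
                importlib.import_module(package), classname)
            plugin_list.append(handler_class())
        except (ImportError, AttributeError):
            # Missing plugins are skipped, not fatal, matching plugins()
            # in fetch/__init__.py above.
            print("FetchHandler {} not found, skipping plugin".format(
                handler_name))
    return plugin_list


loaded = plugins()
print(type(loaded[0]).__name__)  # OrderedDict
```

Catching `AttributeError` as well as `ImportError` matters: it covers the case where the module imports but no longer exposes the named class.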
1203 | === added file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
1204 | --- hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
1205 | +++ hooks/charmhelpers/fetch/archiveurl.py 2013-07-08 08:34:31 +0000 | |||
1206 | @@ -0,0 +1,43 @@ | |||
1207 | 1 | import os | ||
1208 | 2 | import urllib2 | ||
1209 | 3 | from charmhelpers.fetch import ( | ||
1210 | 4 | BaseFetchHandler, | ||
1211 | 5 | UnhandledSource | ||
1212 | 6 | ) | ||
1213 | 7 | from charmhelpers.payload.archive import ( | ||
1214 | 8 | get_archive_handler, | ||
1215 | 9 | extract, | ||
1216 | 10 | ) | ||
1217 | 11 | |||
1218 | 12 | |||
1219 | 13 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
1220 | 14 | """Handler for archives via generic URLs""" | ||
1221 | 15 | def can_handle(self, source): | ||
1222 | 16 | url_parts = self.parse_url(source) | ||
1223 | 17 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
1224 | 18 | return "Wrong source type" | ||
1225 | 19 | if get_archive_handler(self.base_url(source)): | ||
1226 | 20 | return True | ||
1227 | 21 | return False | ||
1228 | 22 | |||
1229 | 23 | def download(self, source, dest): | ||
1230 | 24 | # propagate all exceptions | ||
1231 | 25 | # URLError, OSError, etc | ||
1232 | 26 | response = urllib2.urlopen(source) | ||
1233 | 27 | with open(dest, 'w') as dest_file: | ||
1234 | 28 | dest_file.write(response.read()) | ||
1235 | 29 | |||
1236 | 30 | def install(self, source): | ||
1237 | 31 | url_parts = self.parse_url(source) | ||
1238 | 32 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
1239 | 33 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
1240 | 34 | try: | ||
1241 | 35 | self.download(source, dld_file) | ||
1242 | 36 | except urllib2.URLError as e: | ||
1243 | 37 | return UnhandledSource(e.reason) | ||
1244 | 38 | except OSError as e: | ||
1245 | 39 | return UnhandledSource(e.strerror) | ||
1246 | 40 | finally: | ||
1247 | 41 | if os.path.isfile(dld_file): | ||
1248 | 42 | os.unlink(dld_file) | ||
1249 | 43 | return extract(dld_file) | ||
1250 | 0 | 44 | ||
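`ArchiveUrlFetchHandler.can_handle()` relies on `BaseFetchHandler.base_url()` (defined in fetch/__init__.py above) to strip the query string and fragment before probing for an archive handler. The same slice-and-blank trick, sketched with Python 3's `urllib.parse` in place of the Python 2 `urlparse` module used in the diff:

```python
from urllib.parse import urlparse, urlunparse


def base_url(url):
    """Blank the query and fragment components, as
    BaseFetchHandler.base_url() does, leaving scheme://host/path."""
    # urlparse yields a 6-tuple: scheme, netloc, path, params, query, fragment.
    parts = list(urlparse(url))
    parts[4:] = ['' for _ in parts[4:]]
    return urlunparse(parts)


print(base_url('http://example.com/charm.tgz?token=abc#sha1=deadbeef'))
# http://example.com/charm.tgz
```

Stripping the fragment is what lets a source like `...tgz#sha1=...` still match the archive handler by file extension.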
1251 | === modified file 'hooks/hooks.py' | |||
1252 | --- hooks/hooks.py 2013-06-20 21:15:17 +0000 | |||
1253 | +++ hooks/hooks.py 2013-07-08 08:34:31 +0000 | |||
1254 | @@ -10,12 +10,35 @@ | |||
1255 | 10 | 10 | ||
1256 | 11 | import glob | 11 | import glob |
1257 | 12 | import os | 12 | import os |
1258 | 13 | import subprocess | ||
1259 | 14 | import shutil | 13 | import shutil |
1260 | 15 | import sys | 14 | import sys |
1261 | 16 | 15 | ||
1262 | 17 | import ceph | 16 | import ceph |
1264 | 18 | import utils | 17 | from charmhelpers.core.hookenv import ( |
1265 | 18 | log, ERROR, | ||
1266 | 19 | config, | ||
1267 | 20 | relation_ids, | ||
1268 | 21 | related_units, | ||
1269 | 22 | relation_get, | ||
1270 | 23 | relation_set, | ||
1271 | 24 | remote_unit, | ||
1272 | 25 | Hooks, UnregisteredHookError | ||
1273 | 26 | ) | ||
1274 | 27 | from charmhelpers.core.host import ( | ||
1275 | 28 | apt_install, | ||
1276 | 29 | apt_update, | ||
1277 | 30 | filter_installed_packages, | ||
1278 | 31 | service_restart, | ||
1279 | 32 | umount | ||
1280 | 33 | ) | ||
1281 | 34 | from charmhelpers.fetch import add_source | ||
1282 | 35 | |||
1283 | 36 | from utils import ( | ||
1284 | 37 | render_template, | ||
1285 | 38 | get_host_ip, | ||
1286 | 39 | ) | ||
1287 | 40 | |||
1288 | 41 | hooks = Hooks() | ||
1289 | 19 | 42 | ||
1290 | 20 | 43 | ||
1291 | 21 | def install_upstart_scripts(): | 44 | def install_upstart_scripts(): |
1292 | @@ -25,328 +48,221 @@ | |||
1293 | 25 | shutil.copy(x, '/etc/init/') | 48 | shutil.copy(x, '/etc/init/') |
1294 | 26 | 49 | ||
1295 | 27 | 50 | ||
1296 | 51 | @hooks.hook('install') | ||
1297 | 28 | def install(): | 52 | def install(): |
1301 | 29 | utils.juju_log('INFO', 'Begin install hook.') | 53 | log('Begin install hook.') |
1302 | 30 | utils.configure_source() | 54 | add_source(config('source'), config('key')) |
1303 | 31 | utils.install('ceph', 'gdisk', 'ntp', 'btrfs-tools', 'python-ceph', 'xfsprogs') | 55 | apt_update(fatal=True) |
1304 | 56 | apt_install(packages=ceph.PACKAGES, fatal=True) | ||
1305 | 32 | install_upstart_scripts() | 57 | install_upstart_scripts() |
1307 | 33 | utils.juju_log('INFO', 'End install hook.') | 58 | log('End install hook.') |
1308 | 34 | 59 | ||
1309 | 35 | 60 | ||
1310 | 36 | def emit_cephconf(): | 61 | def emit_cephconf(): |
1311 | 37 | cephcontext = { | 62 | cephcontext = { |
1313 | 38 | 'auth_supported': utils.config_get('auth-supported'), | 63 | 'auth_supported': config('auth-supported'), |
1314 | 39 | 'mon_hosts': ' '.join(get_mon_hosts()), | 64 | 'mon_hosts': ' '.join(get_mon_hosts()), |
1316 | 40 | 'fsid': utils.config_get('fsid'), | 65 | 'fsid': config('fsid'), |
1317 | 41 | 'version': ceph.get_ceph_version() | 66 | 'version': ceph.get_ceph_version() |
1319 | 42 | } | 67 | } |
1320 | 43 | 68 | ||
1321 | 44 | with open('/etc/ceph/ceph.conf', 'w') as cephconf: | 69 | with open('/etc/ceph/ceph.conf', 'w') as cephconf: |
1323 | 45 | cephconf.write(utils.render_template('ceph.conf', cephcontext)) | 70 | cephconf.write(render_template('ceph.conf', cephcontext)) |
1324 | 46 | 71 | ||
1325 | 47 | JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped' | 72 | JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped' |
1326 | 48 | 73 | ||
1327 | 49 | 74 | ||
1328 | 75 | @hooks.hook('config-changed') | ||
1329 | 50 | def config_changed(): | 76 | def config_changed(): |
1331 | 51 | utils.juju_log('INFO', 'Begin config-changed hook.') | 77 | log('Begin config-changed hook.') |
1332 | 52 | 78 | ||
1334 | 53 | utils.juju_log('INFO', 'Monitor hosts are ' + repr(get_mon_hosts())) | 79 | log('Monitor hosts are ' + repr(get_mon_hosts())) |
1335 | 54 | 80 | ||
1336 | 55 | # Pre-flight checks | 81 | # Pre-flight checks |
1347 | 56 | if not utils.config_get('fsid'): | 82 | if not config('fsid'): |
1348 | 57 | utils.juju_log('CRITICAL', 'No fsid supplied, cannot proceed.') | 83 | log('No fsid supplied, cannot proceed.', level=ERROR) |
1349 | 58 | sys.exit(1) | 84 | sys.exit(1) |
1350 | 59 | if not utils.config_get('monitor-secret'): | 85 | if not config('monitor-secret'): |
1351 | 60 | utils.juju_log('CRITICAL', | 86 | log('No monitor-secret supplied, cannot proceed.', level=ERROR) |
1352 | 61 | 'No monitor-secret supplied, cannot proceed.') | 87 | sys.exit(1) |
1353 | 62 | sys.exit(1) | 88 | if config('osd-format') not in ceph.DISK_FORMATS: |
1354 | 63 | if utils.config_get('osd-format') not in ceph.DISK_FORMATS: | 89 | log('Invalid OSD disk format configuration specified', level=ERROR) |
1345 | 64 | utils.juju_log('CRITICAL', | ||
1346 | 65 | 'Invalid OSD disk format configuration specified') | ||
1355 | 66 | sys.exit(1) | 90 | sys.exit(1) |
1356 | 67 | 91 | ||
1357 | 68 | emit_cephconf() | 92 | emit_cephconf() |
1358 | 69 | 93 | ||
1363 | 70 | e_mountpoint = utils.config_get('ephemeral-unmount') | 94 | e_mountpoint = config('ephemeral-unmount') |
1364 | 71 | if (e_mountpoint and | 95 | if e_mountpoint and ceph.filesystem_mounted(e_mountpoint): |
1365 | 72 | filesystem_mounted(e_mountpoint)): | 96 | umount(e_mountpoint) |
1362 | 73 | subprocess.call(['umount', e_mountpoint]) | ||
1366 | 74 | 97 | ||
1371 | 75 | osd_journal = utils.config_get('osd-journal') | 98 | osd_journal = config('osd-journal') |
1372 | 76 | if (osd_journal and | 99 | if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) |
1373 | 77 | not os.path.exists(JOURNAL_ZAPPED) and | 100 | and os.path.exists(osd_journal)): |
1370 | 78 | os.path.exists(osd_journal)): | ||
1374 | 79 | ceph.zap_disk(osd_journal) | 101 | ceph.zap_disk(osd_journal) |
1375 | 80 | with open(JOURNAL_ZAPPED, 'w') as zapped: | 102 | with open(JOURNAL_ZAPPED, 'w') as zapped: |
1376 | 81 | zapped.write('DONE') | 103 | zapped.write('DONE') |
1377 | 82 | 104 | ||
1380 | 83 | for dev in utils.config_get('osd-devices').split(' '): | 105 | for dev in config('osd-devices').split(' '): |
1381 | 84 | osdize(dev) | 106 | ceph.osdize(dev, config('osd-format'), config('osd-journal'), |
1382 | 107 | reformat_osd()) | ||
1383 | 85 | 108 | ||
1384 | 86 | # Support use of single node ceph | 109 | # Support use of single node ceph |
1388 | 87 | if (not ceph.is_bootstrapped() and | 110 | if (not ceph.is_bootstrapped() and int(config('monitor-count')) == 1): |
1389 | 88 | int(utils.config_get('monitor-count')) == 1): | 111 | ceph.bootstrap_monitor_cluster(config('monitor-secret')) |
1387 | 89 | bootstrap_monitor_cluster() | ||
1390 | 90 | ceph.wait_for_bootstrap() | 112 | ceph.wait_for_bootstrap() |
1391 | 91 | 113 | ||
1392 | 92 | if ceph.is_bootstrapped(): | 114 | if ceph.is_bootstrapped(): |
1393 | 93 | ceph.rescan_osd_devices() | 115 | ceph.rescan_osd_devices() |
1394 | 94 | 116 | ||
1396 | 95 | utils.juju_log('INFO', 'End config-changed hook.') | 117 | log('End config-changed hook.') |
1397 | 96 | 118 | ||
1398 | 97 | 119 | ||
1399 | 98 | def get_mon_hosts(): | 120 | def get_mon_hosts(): |
1400 | 99 | hosts = [] | 121 | hosts = [] |
1402 | 100 | hosts.append('{}:6789'.format(utils.get_host_ip())) | 122 | hosts.append('{}:6789'.format(get_host_ip())) |
1403 | 101 | 123 | ||
1406 | 102 | for relid in utils.relation_ids('mon'): | 124 | for relid in relation_ids('mon'): |
1407 | 103 | for unit in utils.relation_list(relid): | 125 | for unit in related_units(relid): |
1408 | 104 | hosts.append( | 126 | hosts.append( |
1413 | 105 | '{}:6789'.format(utils.get_host_ip( | 127 | '{}:6789'.format(get_host_ip(relation_get('private-address', |
1414 | 106 | utils.relation_get('private-address', | 128 | unit, relid))) |
1415 | 107 | unit, relid))) | 129 | ) |
1412 | 108 | ) | ||
1416 | 109 | 130 | ||
1417 | 110 | hosts.sort() | 131 | hosts.sort() |
1418 | 111 | return hosts | 132 | return hosts |
1419 | 112 | 133 | ||
1420 | 113 | 134 | ||
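The reworked `get_mon_hosts()` boils down to formatting each monitor address as `ip:6789` and sorting, so every unit renders an identical `mon_hosts` line into ceph.conf. A sketch of just that assembly step, with the `unit_get()`/relation lookups replaced by a plain list of addresses:

```python
def get_mon_hosts(host_ips):
    """Condensed form of get_mon_hosts() above; the real hook gathers
    host_ips from its own private-address plus the 'mon' relation."""
    hosts = ['{}:6789'.format(ip) for ip in host_ips]
    # Sorting keeps the rendered mon_hosts line stable across units.
    hosts.sort()
    return hosts


print(get_mon_hosts(['10.0.0.3', '10.0.0.1', '10.0.0.2']))
# ['10.0.0.1:6789', '10.0.0.2:6789', '10.0.0.3:6789']
```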
1421 | 114 | def update_monfs(): | ||
1422 | 115 | hostname = utils.get_unit_hostname() | ||
1423 | 116 | monfs = '/var/lib/ceph/mon/ceph-{}'.format(hostname) | ||
1424 | 117 | upstart = '{}/upstart'.format(monfs) | ||
1425 | 118 | if (os.path.exists(monfs) and | ||
1426 | 119 | not os.path.exists(upstart)): | ||
1427 | 120 | # Mark mon as managed by upstart so that | ||
1428 | 121 | # it gets start correctly on reboots | ||
1429 | 122 | with open(upstart, 'w'): | ||
1430 | 123 | pass | ||
1431 | 124 | |||
1432 | 125 | |||
1433 | 126 | def bootstrap_monitor_cluster(): | ||
1434 | 127 | hostname = utils.get_unit_hostname() | ||
1435 | 128 | path = '/var/lib/ceph/mon/ceph-{}'.format(hostname) | ||
1436 | 129 | done = '{}/done'.format(path) | ||
1437 | 130 | upstart = '{}/upstart'.format(path) | ||
1438 | 131 | secret = utils.config_get('monitor-secret') | ||
1439 | 132 | keyring = '/var/lib/ceph/tmp/{}.mon.keyring'.format(hostname) | ||
1440 | 133 | |||
1441 | 134 | if os.path.exists(done): | ||
1442 | 135 | utils.juju_log('INFO', | ||
1443 | 136 | 'bootstrap_monitor_cluster: mon already initialized.') | ||
1444 | 137 | else: | ||
1445 | 138 | # Ceph >= 0.61.3 needs this for ceph-mon fs creation | ||
1446 | 139 | os.makedirs('/var/run/ceph', mode=0755) | ||
1447 | 140 | os.makedirs(path) | ||
1448 | 141 | # end changes for Ceph >= 0.61.3 | ||
1449 | 142 | try: | ||
1450 | 143 | subprocess.check_call(['ceph-authtool', keyring, | ||
1451 | 144 | '--create-keyring', '--name=mon.', | ||
1452 | 145 | '--add-key={}'.format(secret), | ||
1453 | 146 | '--cap', 'mon', 'allow *']) | ||
1454 | 147 | |||
1455 | 148 | subprocess.check_call(['ceph-mon', '--mkfs', | ||
1456 | 149 | '-i', hostname, | ||
1457 | 150 | '--keyring', keyring]) | ||
1458 | 151 | |||
1459 | 152 | with open(done, 'w'): | ||
1460 | 153 | pass | ||
1461 | 154 | with open(upstart, 'w'): | ||
1462 | 155 | pass | ||
1463 | 156 | |||
1464 | 157 | subprocess.check_call(['start', 'ceph-mon-all-starter']) | ||
1465 | 158 | except: | ||
1466 | 159 | raise | ||
1467 | 160 | finally: | ||
1468 | 161 | os.unlink(keyring) | ||
1469 | 162 | |||
1470 | 163 | |||
1471 | 164 | def reformat_osd(): | 135 | def reformat_osd(): |
1473 | 165 | if utils.config_get('osd-reformat'): | 136 | if config('osd-reformat'): |
1474 | 166 | return True | 137 | return True |
1475 | 167 | else: | 138 | else: |
1476 | 168 | return False | 139 | return False |
1477 | 169 | 140 | ||
1478 | 170 | 141 | ||
1524 | 171 | def osdize(dev): | 142 | @hooks.hook('mon-relation-departed', |
1525 | 172 | if not os.path.exists(dev): | 143 | 'mon-relation-joined') |
1481 | 173 | utils.juju_log('INFO', | ||
1482 | 174 | 'Path {} does not exist - bailing'.format(dev)) | ||
1483 | 175 | return | ||
1484 | 176 | |||
1485 | 177 | if (ceph.is_osd_disk(dev) and not | ||
1486 | 178 | reformat_osd()): | ||
1487 | 179 | utils.juju_log('INFO', | ||
1488 | 180 | 'Looks like {} is already an OSD, skipping.' | ||
1489 | 181 | .format(dev)) | ||
1490 | 182 | return | ||
1491 | 183 | |||
1492 | 184 | if device_mounted(dev): | ||
1493 | 185 | utils.juju_log('INFO', | ||
1494 | 186 | 'Looks like {} is in use, skipping.'.format(dev)) | ||
1495 | 187 | return | ||
1496 | 188 | |||
1497 | 189 | cmd = ['ceph-disk-prepare'] | ||
1498 | 190 | # Later versions of ceph support more options | ||
1499 | 191 | if ceph.get_ceph_version() >= "0.48.3": | ||
1500 | 192 | osd_format = utils.config_get('osd-format') | ||
1501 | 193 | if osd_format: | ||
1502 | 194 | cmd.append('--fs-type') | ||
1503 | 195 | cmd.append(osd_format) | ||
1504 | 196 | cmd.append(dev) | ||
1505 | 197 | osd_journal = utils.config_get('osd-journal') | ||
1506 | 198 | if (osd_journal and | ||
1507 | 199 | os.path.exists(osd_journal)): | ||
1508 | 200 | cmd.append(osd_journal) | ||
1509 | 201 | else: | ||
1510 | 202 | # Just provide the device - no other options | ||
1511 | 203 | # for older versions of ceph | ||
1512 | 204 | cmd.append(dev) | ||
1513 | 205 | subprocess.call(cmd) | ||
1514 | 206 | |||
1515 | 207 | |||
1516 | 208 | def device_mounted(dev): | ||
1517 | 209 | return subprocess.call(['grep', '-wqs', dev + '1', '/proc/mounts']) == 0 | ||
1518 | 210 | |||
1519 | 211 | |||
1520 | 212 | def filesystem_mounted(fs): | ||
1521 | 213 | return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 | ||
1522 | 214 | |||
1523 | 215 | |||
1526 | 216 | def mon_relation(): | 144 | def mon_relation(): |
1528 | 217 | utils.juju_log('INFO', 'Begin mon-relation hook.') | 145 | log('Begin mon-relation hook.') |
1529 | 218 | emit_cephconf() | 146 | emit_cephconf() |
1530 | 219 | 147 | ||
1532 | 220 | moncount = int(utils.config_get('monitor-count')) | 148 | moncount = int(config('monitor-count')) |
1533 | 221 | if len(get_mon_hosts()) >= moncount: | 149 | if len(get_mon_hosts()) >= moncount: |
1535 | 222 | bootstrap_monitor_cluster() | 150 | ceph.bootstrap_monitor_cluster(config('monitor-secret')) |
1536 | 223 | ceph.wait_for_bootstrap() | 151 | ceph.wait_for_bootstrap() |
1537 | 224 | ceph.rescan_osd_devices() | 152 | ceph.rescan_osd_devices() |
1538 | 225 | notify_osds() | 153 | notify_osds() |
1539 | 226 | notify_radosgws() | 154 | notify_radosgws() |
1540 | 227 | notify_client() | 155 | notify_client() |
1541 | 228 | else: | 156 | else: |
1545 | 229 | utils.juju_log('INFO', | 157 | log('Not enough mons ({}), punting.' |
1546 | 230 | 'Not enough mons ({}), punting.'.format( | 158 | .format(len(get_mon_hosts()))) |
1544 | 231 | len(get_mon_hosts()))) | ||
1547 | 232 | 159 | ||
1549 | 233 | utils.juju_log('INFO', 'End mon-relation hook.') | 160 | log('End mon-relation hook.') |
1550 | 234 | 161 | ||
1551 | 235 | 162 | ||
1552 | 236 | def notify_osds(): | 163 | def notify_osds(): |
1562 | 237 | utils.juju_log('INFO', 'Begin notify_osds.') | 164 | log('Begin notify_osds.') |
1563 | 238 | 165 | ||
1564 | 239 | for relid in utils.relation_ids('osd'): | 166 | for relid in relation_ids('osd'): |
1565 | 240 | utils.relation_set(fsid=utils.config_get('fsid'), | 167 | relation_set(relation_id=relid, |
1566 | 241 | osd_bootstrap_key=ceph.get_osd_bootstrap_key(), | 168 | fsid=config('fsid'), |
1567 | 242 | auth=utils.config_get('auth-supported'), | 169 | osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1568 | 243 | rid=relid) | 170 | auth=config('auth-supported')) |
1569 | 244 | 171 | ||
1570 | 245 | utils.juju_log('INFO', 'End notify_osds.') | 172 | log('End notify_osds.') |
1571 | 246 | 173 | ||
1572 | 247 | 174 | ||
1573 | 248 | def notify_radosgws(): | 175 | def notify_radosgws(): |
1582 | 249 | utils.juju_log('INFO', 'Begin notify_radosgws.') | 176 | log('Begin notify_radosgws.') |
1583 | 250 | 177 | ||
1584 | 251 | for relid in utils.relation_ids('radosgw'): | 178 | for relid in relation_ids('radosgw'): |
1585 | 252 | utils.relation_set(radosgw_key=ceph.get_radosgw_key(), | 179 | relation_set(relation_id=relid, |
1586 | 253 | auth=utils.config_get('auth-supported'), | 180 | radosgw_key=ceph.get_radosgw_key(), |
1587 | 254 | rid=relid) | 181 | auth=config('auth-supported')) |
1588 | 255 | 182 | ||
1589 | 256 | utils.juju_log('INFO', 'End notify_radosgws.') | 183 | log('End notify_radosgws.') |
1590 | 257 | 184 | ||
1591 | 258 | 185 | ||
1592 | 259 | def notify_client(): | 186 | def notify_client(): |
1594 | 260 | utils.juju_log('INFO', 'Begin notify_client.') | 187 | log('Begin notify_client.') |
1595 | 261 | 188 | ||
1598 | 262 | for relid in utils.relation_ids('client'): | 189 | for relid in relation_ids('client'): |
1599 | 263 | units = utils.relation_list(relid) | 190 | units = related_units(relid) |
1600 | 264 | if len(units) > 0: | 191 | if len(units) > 0: |
1601 | 265 | service_name = units[0].split('/')[0] | 192 | service_name = units[0].split('/')[0] |
1609 | 266 | utils.relation_set(key=ceph.get_named_key(service_name), | 193 | relation_set(relation_id=relid, |
1610 | 267 | auth=utils.config_get('auth-supported'), | 194 | key=ceph.get_named_key(service_name), |
1611 | 268 | rid=relid) | 195 | auth=config('auth-supported')) |
1612 | 269 | 196 | ||
1613 | 270 | utils.juju_log('INFO', 'End notify_client.') | 197 | log('End notify_client.') |
1614 | 271 | 198 | ||
1615 | 272 | 199 | ||
1616 | 200 | @hooks.hook('osd-relation-joined') | ||
1617 | 273 | def osd_relation(): | 201 | def osd_relation(): |
1619 | 274 | utils.juju_log('INFO', 'Begin osd-relation hook.') | 202 | log('Begin osd-relation hook.') |
1620 | 275 | 203 | ||
1621 | 276 | if ceph.is_quorum(): | 204 | if ceph.is_quorum(): |
1627 | 277 | utils.juju_log('INFO', | 205 | log('mon cluster in quorum - providing fsid & keys') |
1628 | 278 | 'mon cluster in quorum - providing fsid & keys') | 206 | relation_set(fsid=config('fsid'), |
1629 | 279 | utils.relation_set(fsid=utils.config_get('fsid'), | 207 | osd_bootstrap_key=ceph.get_osd_bootstrap_key(), |
1630 | 280 | osd_bootstrap_key=ceph.get_osd_bootstrap_key(), | 208 | auth=config('auth-supported')) |
1626 | 281 | auth=utils.config_get('auth-supported')) | ||
1631 | 282 | else: | 209 | else: |
1638 | 283 | utils.juju_log('INFO', | 210 | log('mon cluster not in quorum - deferring fsid provision') |
1639 | 284 | 'mon cluster not in quorum - deferring fsid provision') | 211 | |
1640 | 285 | 212 | log('End osd-relation hook.') | |
1641 | 286 | utils.juju_log('INFO', 'End osd-relation hook.') | 213 | |
1642 | 287 | 214 | ||
1643 | 288 | 215 | @hooks.hook('radosgw-relation-joined') | |
1644 | 289 | def radosgw_relation(): | 216 | def radosgw_relation(): |
1649 | 290 | utils.juju_log('INFO', 'Begin radosgw-relation hook.') | 217 | log('Begin radosgw-relation hook.') |
1650 | 291 | 218 | ||
1651 | 292 | utils.install('radosgw') # Install radosgw for admin tools | 219 | # Install radosgw for admin tools |
1652 | 293 | 220 | apt_install(packages=filter_installed_packages(['radosgw'])) | |
1653 | 294 | if ceph.is_quorum(): | 221 | if ceph.is_quorum(): |
1659 | 295 | utils.juju_log('INFO', | 222 | log('mon cluster in quorum - providing radosgw with keys') |
1660 | 296 | 'mon cluster in quorum - \ | 223 | relation_set(radosgw_key=ceph.get_radosgw_key(), |
1661 | 297 | providing radosgw with keys') | 224 | auth=config('auth-supported')) |
1657 | 298 | utils.relation_set(radosgw_key=ceph.get_radosgw_key(), | ||
1658 | 299 | auth=utils.config_get('auth-supported')) | ||
1662 | 300 | else: | 225 | else: |
1669 | 301 | utils.juju_log('INFO', | 226 | log('mon cluster not in quorum - deferring key provision') |
1670 | 302 | 'mon cluster not in quorum - deferring key provision') | 227 | |
1671 | 303 | 228 | log('End radosgw-relation hook.') | |
1672 | 304 | utils.juju_log('INFO', 'End radosgw-relation hook.') | 229 | |
1673 | 305 | 230 | ||
1674 | 306 | 231 | @hooks.hook('client-relation-joined') | |
1675 | 307 | def client_relation(): | 232 | def client_relation(): |
1677 | 308 | utils.juju_log('INFO', 'Begin client-relation hook.') | 233 | log('Begin client-relation hook.') |
1678 | 309 | 234 | ||
1679 | 310 | if ceph.is_quorum(): | 235 | if ceph.is_quorum(): |
1686 | 311 | utils.juju_log('INFO', | 236 | log('mon cluster in quorum - providing client with keys') |
1687 | 312 | 'mon cluster in quorum - \ | 237 | service_name = remote_unit().split('/')[0] |
1688 | 313 | providing client with keys') | 238 | relation_set(key=ceph.get_named_key(service_name), |
1689 | 314 | service_name = os.environ['JUJU_REMOTE_UNIT'].split('/')[0] | 239 | auth=config('auth-supported')) |
1684 | 315 | utils.relation_set(key=ceph.get_named_key(service_name), | ||
1685 | 316 | auth=utils.config_get('auth-supported')) | ||
1690 | 317 | else: | 240 | else: |
1697 | 318 | utils.juju_log('INFO', | 241 | log('mon cluster not in quorum - deferring key provision') |
1698 | 319 | 'mon cluster not in quorum - deferring key provision') | 242 | |
1699 | 320 | 243 | log('End client-relation hook.') | |
1700 | 321 | utils.juju_log('INFO', 'End client-relation hook.') | 244 | |
1701 | 322 | 245 | ||
1702 | 323 | 246 | @hooks.hook('upgrade-charm') | |
1703 | 324 | def upgrade_charm(): | 247 | def upgrade_charm(): |
1705 | 325 | utils.juju_log('INFO', 'Begin upgrade-charm hook.') | 248 | log('Begin upgrade-charm hook.') |
1706 | 326 | emit_cephconf() | 249 | emit_cephconf() |
1708 | 327 | utils.install('xfsprogs') | 250 | apt_install(packages=filter_installed_packages(ceph.PACKAGES), fatal=True) |
1709 | 328 | install_upstart_scripts() | 251 | install_upstart_scripts() |
1714 | 329 | update_monfs() | 252 | ceph.update_monfs() |
1715 | 330 | utils.juju_log('INFO', 'End upgrade-charm hook.') | 253 | log('End upgrade-charm hook.') |
1716 | 331 | 254 | ||
1717 | 332 | 255 | ||
1718 | 256 | @hooks.hook('start') | ||
1719 | 333 | def start(): | 257 | def start(): |
1720 | 334 | # In case we're being redeployed to the same machines, try | 258 | # In case we're being redeployed to the same machines, try |
1721 | 335 | # to make sure everything is running as soon as possible. | 259 | # to make sure everything is running as soon as possible. |
1723 | 336 | subprocess.call(['start', 'ceph-mon-all-starter']) | 260 | service_restart('ceph-mon-all') |
1724 | 337 | ceph.rescan_osd_devices() | 261 | ceph.rescan_osd_devices() |
1725 | 338 | 262 | ||
1726 | 339 | 263 | ||
1740 | 340 | utils.do_hooks({ | 264 | if __name__ == '__main__': |
1741 | 341 | 'config-changed': config_changed, | 265 | try: |
1742 | 342 | 'install': install, | 266 | hooks.execute(sys.argv) |
1743 | 343 | 'mon-relation-departed': mon_relation, | 267 | except UnregisteredHookError as e: |
1744 | 344 | 'mon-relation-joined': mon_relation, | 268 | log('Unknown hook {} - skipping.'.format(e)) |
1732 | 345 | 'osd-relation-joined': osd_relation, | ||
1733 | 346 | 'radosgw-relation-joined': radosgw_relation, | ||
1734 | 347 | 'client-relation-joined': client_relation, | ||
1735 | 348 | 'start': start, | ||
1736 | 349 | 'upgrade-charm': upgrade_charm, | ||
1737 | 350 | }) | ||
1738 | 351 | |||
1739 | 352 | sys.exit(0) | ||
1745 | 353 | 269 | ||
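The new dispatch at the bottom of hooks.py replaces the old `utils.do_hooks` table with charm-helpers' `Hooks` registry: handlers are tagged with `@hooks.hook(...)` and `hooks.execute(sys.argv)` routes on the script's basename, raising `UnregisteredHookError` for anything unknown. A toy sketch of how such a registry works — simplified from, not identical to, the real `charmhelpers.core.hookenv` implementation:

```python
import os


class UnregisteredHookError(Exception):
    """Raised when a hook fires that no function was registered for."""


class Hooks(object):
    """Toy version of the charm-helpers hook registry."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        """Decorator: register a function under one or more hook names."""
        def wrapper(func):
            for name in hook_names or (func.__name__,):
                self._hooks[name] = func
            return func
        return wrapper

    def execute(self, args):
        """Dispatch on the basename the hook script was invoked as."""
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        self._hooks[hook_name]()


hooks = Hooks()


@hooks.hook('mon-relation-joined', 'mon-relation-departed')
def mon_relation():
    pass  # the real handler emits ceph.conf, bootstraps, notifies peers
```

One decorator can register several hook names for a single function, which is exactly how this branch maps both `mon-relation-joined` and `mon-relation-departed` onto `mon_relation()`.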
1746 | === modified file 'hooks/utils.py' | |||
1747 | --- hooks/utils.py 2013-02-08 11:09:00 +0000 | |||
1748 | +++ hooks/utils.py 2013-07-08 08:34:31 +0000 | |||
1749 | @@ -7,97 +7,41 @@ | |||
1750 | 7 | # Paul Collins <paul.collins@canonical.com> | 7 | # Paul Collins <paul.collins@canonical.com> |
1751 | 8 | # | 8 | # |
1752 | 9 | 9 | ||
1753 | 10 | import os | ||
1754 | 11 | import subprocess | ||
1755 | 12 | import socket | 10 | import socket |
1756 | 13 | import sys | ||
1757 | 14 | import re | 11 | import re |
1781 | 15 | 12 | from charmhelpers.core.hookenv import ( | |
1782 | 16 | 13 | unit_get, | |
1783 | 17 | def do_hooks(hooks): | 14 | cached |
1784 | 18 | hook = os.path.basename(sys.argv[0]) | 15 | ) |
1785 | 19 | 16 | from charmhelpers.core.host import ( | |
1786 | 20 | try: | 17 | apt_install, |
1787 | 21 | hook_func = hooks[hook] | 18 | filter_installed_packages |
1788 | 22 | except KeyError: | 19 | ) |
1766 | 23 | juju_log('INFO', | ||
1767 | 24 | "This charm doesn't know how to handle '{}'.".format(hook)) | ||
1768 | 25 | else: | ||
1769 | 26 | hook_func() | ||
1770 | 27 | |||
1771 | 28 | |||
1772 | 29 | def install(*pkgs): | ||
1773 | 30 | cmd = [ | ||
1774 | 31 | 'apt-get', | ||
1775 | 32 | '-y', | ||
1776 | 33 | 'install' | ||
1777 | 34 | ] | ||
1778 | 35 | for pkg in pkgs: | ||
1779 | 36 | cmd.append(pkg) | ||
1780 | 37 | subprocess.check_call(cmd) | ||
1789 | 38 | 20 | ||
1790 | 39 | TEMPLATES_DIR = 'templates' | 21 | TEMPLATES_DIR = 'templates' |
1791 | 40 | 22 | ||
1792 | 41 | try: | 23 | try: |
1793 | 42 | import jinja2 | 24 | import jinja2 |
1794 | 43 | except ImportError: | 25 | except ImportError: |
1796 | 44 | install('python-jinja2') | 26 | apt_install(filter_installed_packages(['python-jinja2']), |
1797 | 27 | fatal=True) | ||
1798 | 45 | import jinja2 | 28 | import jinja2 |
1799 | 46 | 29 | ||
1800 | 47 | try: | 30 | try: |
1801 | 48 | import dns.resolver | 31 | import dns.resolver |
1802 | 49 | except ImportError: | 32 | except ImportError: |
1804 | 50 | install('python-dnspython') | 33 | apt_install(filter_installed_packages(['python-dnspython']), |
1805 | 34 | fatal=True) | ||
1806 | 51 | import dns.resolver | 35 | import dns.resolver |
1807 | 52 | 36 | ||
1808 | 53 | 37 | ||
1809 | 54 | def render_template(template_name, context, template_dir=TEMPLATES_DIR): | 38 | def render_template(template_name, context, template_dir=TEMPLATES_DIR): |
1810 | 55 | templates = jinja2.Environment( | 39 | templates = jinja2.Environment( |
1813 | 56 | loader=jinja2.FileSystemLoader(template_dir) | 40 | loader=jinja2.FileSystemLoader(template_dir)) |
1812 | 57 | ) | ||
1814 | 58 | template = templates.get_template(template_name) | 41 | template = templates.get_template(template_name) |
1815 | 59 | return template.render(context) | 42 | return template.render(context) |
1816 | 60 | 43 | ||
1817 | 61 | 44 | ||
1818 | 62 | CLOUD_ARCHIVE = \ | ||
1819 | 63 | """ # Ubuntu Cloud Archive | ||
1820 | 64 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1821 | 65 | """ | ||
1822 | 66 | |||
1823 | 67 | |||
1824 | 68 | def configure_source(): | ||
1825 | 69 | source = str(config_get('source')) | ||
1826 | 70 | if not source: | ||
1827 | 71 | return | ||
1828 | 72 | if source.startswith('ppa:'): | ||
1829 | 73 | cmd = [ | ||
1830 | 74 | 'add-apt-repository', | ||
1831 | 75 | source | ||
1832 | 76 | ] | ||
1833 | 77 | subprocess.check_call(cmd) | ||
1834 | 78 | if source.startswith('cloud:'): | ||
1835 | 79 | install('ubuntu-cloud-keyring') | ||
1836 | 80 | pocket = source.split(':')[1] | ||
1837 | 81 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1838 | 82 | apt.write(CLOUD_ARCHIVE.format(pocket)) | ||
1839 | 83 | if source.startswith('http:'): | ||
1840 | 84 | with open('/etc/apt/sources.list.d/ceph.list', 'w') as apt: | ||
1841 | 85 | apt.write("deb " + source + "\n") | ||
1842 | 86 | key = config_get('key') | ||
1843 | 87 | if key: | ||
1844 | 88 | cmd = [ | ||
1845 | 89 | 'apt-key', | ||
1846 | 90 | 'adv', '--keyserver keyserver.ubuntu.com', | ||
1847 | 91 | '--recv-keys', key | ||
1848 | 92 | ] | ||
1849 | 93 | subprocess.check_call(cmd) | ||
1850 | 94 | cmd = [ | ||
1851 | 95 | 'apt-get', | ||
1852 | 96 | 'update' | ||
1853 | 97 | ] | ||
1854 | 98 | subprocess.check_call(cmd) | ||
1855 | 99 | |||
1856 | 100 | |||
1857 | 101 | def enable_pocket(pocket): | 45 | def enable_pocket(pocket): |
1858 | 102 | apt_sources = "/etc/apt/sources.list" | 46 | apt_sources = "/etc/apt/sources.list" |
1859 | 103 | with open(apt_sources, "r") as sources: | 47 | with open(apt_sources, "r") as sources: |
1860 | @@ -109,105 +53,15 @@ | |||
1861 | 109 | else: | 53 | else: |
1862 | 110 | sources.write(line) | 54 | sources.write(line) |
1863 | 111 | 55 | ||
1958 | 112 | # Protocols | 56 | |
1959 | 113 | TCP = 'TCP' | 57 | @cached |
1866 | 114 | UDP = 'UDP' | ||
1867 | 115 | |||
1868 | 116 | |||
1869 | 117 | def expose(port, protocol='TCP'): | ||
1870 | 118 | cmd = [ | ||
1871 | 119 | 'open-port', | ||
1872 | 120 | '{}/{}'.format(port, protocol) | ||
1873 | 121 | ] | ||
1874 | 122 | subprocess.check_call(cmd) | ||
1875 | 123 | |||
1876 | 124 | |||
1877 | 125 | def juju_log(severity, message): | ||
1878 | 126 | cmd = [ | ||
1879 | 127 | 'juju-log', | ||
1880 | 128 | '--log-level', severity, | ||
1881 | 129 | message | ||
1882 | 130 | ] | ||
1883 | 131 | subprocess.check_call(cmd) | ||
1884 | 132 | |||
1885 | 133 | |||
1886 | 134 | def relation_ids(relation): | ||
1887 | 135 | cmd = [ | ||
1888 | 136 | 'relation-ids', | ||
1889 | 137 | relation | ||
1890 | 138 | ] | ||
1891 | 139 | return subprocess.check_output(cmd).split() # IGNORE:E1103 | ||
1892 | 140 | |||
1893 | 141 | |||
1894 | 142 | def relation_list(rid): | ||
1895 | 143 | cmd = [ | ||
1896 | 144 | 'relation-list', | ||
1897 | 145 | '-r', rid, | ||
1898 | 146 | ] | ||
1899 | 147 | return subprocess.check_output(cmd).split() # IGNORE:E1103 | ||
1900 | 148 | |||
1901 | 149 | |||
1902 | 150 | def relation_get(attribute, unit=None, rid=None): | ||
1903 | 151 | cmd = [ | ||
1904 | 152 | 'relation-get', | ||
1905 | 153 | ] | ||
1906 | 154 | if rid: | ||
1907 | 155 | cmd.append('-r') | ||
1908 | 156 | cmd.append(rid) | ||
1909 | 157 | cmd.append(attribute) | ||
1910 | 158 | if unit: | ||
1911 | 159 | cmd.append(unit) | ||
1912 | 160 | value = str(subprocess.check_output(cmd)).strip() | ||
1913 | 161 | if value == "": | ||
1914 | 162 | return None | ||
1915 | 163 | else: | ||
1916 | 164 | return value | ||
1917 | 165 | |||
1918 | 166 | |||
1919 | 167 | def relation_set(**kwargs): | ||
1920 | 168 | cmd = [ | ||
1921 | 169 | 'relation-set' | ||
1922 | 170 | ] | ||
1923 | 171 | args = [] | ||
1924 | 172 | for k, v in kwargs.items(): | ||
1925 | 173 | if k == 'rid': | ||
1926 | 174 | cmd.append('-r') | ||
1927 | 175 | cmd.append(v) | ||
1928 | 176 | else: | ||
1929 | 177 | args.append('{}={}'.format(k, v)) | ||
1930 | 178 | cmd += args | ||
1931 | 179 | subprocess.check_call(cmd) | ||
1932 | 180 | |||
1933 | 181 | |||
1934 | 182 | def unit_get(attribute): | ||
1935 | 183 | cmd = [ | ||
1936 | 184 | 'unit-get', | ||
1937 | 185 | attribute | ||
1938 | 186 | ] | ||
1939 | 187 | value = str(subprocess.check_output(cmd)).strip() | ||
1940 | 188 | if value == "": | ||
1941 | 189 | return None | ||
1942 | 190 | else: | ||
1943 | 191 | return value | ||
1944 | 192 | |||
1945 | 193 | |||
1946 | 194 | def config_get(attribute): | ||
1947 | 195 | cmd = [ | ||
1948 | 196 | 'config-get', | ||
1949 | 197 | attribute | ||
1950 | 198 | ] | ||
1951 | 199 | value = str(subprocess.check_output(cmd)).strip() | ||
1952 | 200 | if value == "": | ||
1953 | 201 | return None | ||
1954 | 202 | else: | ||
1955 | 203 | return value | ||
1956 | 204 | |||
1957 | 205 | |||
1960 | 206 | def get_unit_hostname(): | 58 | def get_unit_hostname(): |
1961 | 207 | return socket.gethostname() | 59 | return socket.gethostname() |
1962 | 208 | 60 | ||
1963 | 209 | 61 | ||
1965 | 210 | def get_host_ip(hostname=unit_get('private-address')): | 62 | @cached |
1966 | 63 | def get_host_ip(hostname=None): | ||
1967 | 64 | hostname = hostname or unit_get('private-address') | ||
1968 | 211 | try: | 65 | try: |
1969 | 212 | # Test to see if already an IPv4 address | 66 | # Test to see if already an IPv4 address |
1970 | 213 | socket.inet_aton(hostname) | 67 | socket.inet_aton(hostname) |
1971 | 214 | 68 | ||
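One subtle bug this hunk fixes: the old signature `def get_host_ip(hostname=unit_get('private-address'))` evaluates `unit_get` once, at module import time, when the hook environment may not be usable, and then freezes that value for every later call. The new code defers the lookup to call time (`hostname=None`, then `hostname or unit_get(...)`) and memoizes with `@cached`. A self-contained sketch of the fixed shape — the `cached` helper here is a stand-in for the charm-helpers one, `'127.0.0.1'` stands in for `unit_get('private-address')`, and stdlib `gethostbyname` substitutes for the `dns.resolver` lookup in the real code:

```python
import functools
import socket


def cached(func):
    """Simplified stand-in for charmhelpers.core.hookenv.cached."""
    memo = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in memo:
            memo[args] = func(*args)
        return memo[args]
    return wrapper


@cached
def get_host_ip(hostname=None):
    # Default is resolved at call time, not import time; '127.0.0.1'
    # stands in for unit_get('private-address'), which needs a live
    # hook context to work.
    hostname = hostname or '127.0.0.1'
    try:
        socket.inet_aton(hostname)  # already an IPv4 dotted quad?
        return hostname
    except socket.error:
        return socket.gethostbyname(hostname)
```

With the old signature, a charm upgrade or tooling import of utils.py outside a hook context would crash before any hook could run; deferring the default avoids that entirely.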
1972 | === modified file 'metadata.yaml' | |||
1973 | --- metadata.yaml 2013-04-22 19:49:09 +0000 | |||
1974 | +++ metadata.yaml 2013-07-08 08:34:31 +0000 | |||
1975 | @@ -1,7 +1,6 @@ | |||
1976 | 1 | name: ceph | 1 | name: ceph |
1977 | 2 | summary: Highly scalable distributed storage | 2 | summary: Highly scalable distributed storage |
1980 | 3 | maintainer: James Page <james.page@ubuntu.com>, | 3 | maintainer: James Page <james.page@ubuntu.com> |
1979 | 4 | Paul Collins <paul.collins@canonical.com> | ||
1981 | 5 | description: | | 4 | description: | |
1982 | 6 | Ceph is a distributed storage and network file system designed to provide | 5 | Ceph is a distributed storage and network file system designed to provide |
1983 | 7 | excellent performance, reliability, and scalability. | 6 | excellent performance, reliability, and scalability. |
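Throughout the branch, bare `utils.install(...)` calls become `apt_install(packages=filter_installed_packages([...]), fatal=True)`, which keeps hooks idempotent: packages already on the system are filtered out before apt is invoked at all. A rough sketch of that pattern — the injectable `is_installed` predicate is an assumption added here so the filter can be exercised without dpkg; the real charm-helpers version queries the apt cache directly:

```python
import subprocess


def filter_installed_packages(packages, is_installed=None):
    """Return the subset of `packages` that still needs installing."""
    if is_installed is None:
        def is_installed(pkg):  # exit status 0 == dpkg knows the package
            return subprocess.call(
                ['dpkg-query', '-W', pkg],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL) == 0
    return [p for p in packages if not is_installed(p)]


def apt_install(packages, fatal=False):
    """Install the given packages; raise on apt failure only when fatal."""
    if not packages:
        return  # everything already present - the hook stays idempotent
    cmd = ['apt-get', '--assume-yes', 'install'] + list(packages)
    if fatal:
        subprocess.check_call(cmd)
    else:
        subprocess.call(cmd)
```

Passing `fatal=True`, as the new `upgrade-charm` hook does for `ceph.PACKAGES`, makes a failed install abort the hook via `check_call` instead of silently continuing with missing dependencies.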
similarly to ceph-osd, two requests for the future:
- consider refactoring $CHARM_DIR/hooks/hooks.py into $CHARM_DIR/lib/ceph_tools with accompanying unit-type $CHARM_DIR/lib/ceph_tools/tests where possible
- please think up some decent integration tests and add them into $CHARM_DIR/tests
Thanks!