Merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk into lp:charms/trusty/hdp-hadoop
Proposed by: Cory Johns
Status: Needs review
Proposed branch: lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Merge into: lp:charms/trusty/hdp-hadoop
Diff against target: 3106 lines (+361/-2350), 18 files modified
- README.md (+3/-3)
- files/scripts/terasort.sh (+15/-0)
- hooks/bdutils.py (+44/-28)
- hooks/bootstrap.py (+9/-0)
- hooks/charmhelpers/core/fstab.py (+0/-114)
- hooks/charmhelpers/core/hookenv.py (+0/-498)
- hooks/charmhelpers/core/host.py (+0/-325)
- hooks/charmhelpers/fetch/__init__.py (+0/-349)
- hooks/charmhelpers/fetch/archiveurl.py (+0/-63)
- hooks/charmhelpers/fetch/bzrurl.py (+0/-50)
- hooks/charmhelpers/lib/ceph_utils.py (+0/-315)
- hooks/charmhelpers/lib/cluster_utils.py (+0/-128)
- hooks/charmhelpers/lib/utils.py (+0/-221)
- hooks/charmhelpers/setup.py (+0/-12)
- hooks/hdp-hadoop-common.py (+199/-110)
- hooks/hdputils.py (+14/-12)
- resources.yaml (+11/-0)
- tests/01-hadoop-cluster-deployment-1.py (+66/-122)
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
amir sanjar (community) | Approve | |
charmers | Pending | |
Review via email: mp+245602@code.launchpad.net
Commit message
Description of the change
Updates to support HDP 2.2
Unmerged revisions
- 41. By Cory Johns: Fix jps bug (and un-vendor charmhelpers)
- 40. By amir sanjar: fixes for enabling hdp 2.2 support
- 39. By Cory Johns: Fixed mapreduce TODO config items and refactored setHadoopConfigXML
- 38. By Cory Johns: Cleaned up mapreduce test case
- 37. By Cory Johns: Refactor test case and fix "exit on PASS" bug in test
- 36. By Cory Johns: Default memory settings
- 35. By Cory Johns: Fixed hostname yarn property and cleaned up test case
- 34. By Cory Johns: Fixed mapreduce paths
- 33. By Cory Johns: Fixed yarn paths
- 32. By Cory Johns: Fixed TODO-JDK-PATH placeholder from upstream config
Preview Diff
1 | === modified file 'README.md' | |||
2 | --- README.md 2014-09-10 18:29:03 +0000 | |||
3 | +++ README.md 2015-02-13 00:38:49 +0000 | |||
4 | @@ -1,5 +1,5 @@ | |||
5 | 1 | ## Overview | 1 | ## Overview |
7 | 2 | **What is Hortonworks Apache Hadoop (HDP 2.1.3) ?** | 2 | **What is Hortonworks Apache Hadoop (HDP 2.2.0) ?** |
8 | 3 | The Apache Hadoop software library is a framework that allows for the | 3 | The Apache Hadoop software library is a framework that allows for the |
9 | 4 | distributed processing of large data sets across clusters of computers | 4 | distributed processing of large data sets across clusters of computers |
10 | 5 | using a simple programming model. | 5 | using a simple programming model. |
11 | @@ -10,7 +10,7 @@ | |||
12 | 10 | at the application layer, so delivering a highly-availabile service on top of a | 10 | at the application layer, so delivering a highly-availabile service on top of a |
13 | 11 | cluster of computers, each of which may be prone to failures. | 11 | cluster of computers, each of which may be prone to failures. |
14 | 12 | 12 | ||
16 | 13 | Apache Hadoop 2.4.1 consists of significant improvements over the previous | 13 | Apache Hadoop 2.6.x consists of significant improvements over the previous |
17 | 14 | stable release (hadoop-1.x). | 14 | stable release (hadoop-1.x). |
18 | 15 | 15 | ||
19 | 16 | Here is a short overview of the improvments to both HDFS and MapReduce. | 16 | Here is a short overview of the improvments to both HDFS and MapReduce. |
20 | @@ -51,7 +51,7 @@ | |||
21 | 51 | 51 | ||
22 | 52 | This supports deployments of Hadoop in a number of configurations. | 52 | This supports deployments of Hadoop in a number of configurations. |
23 | 53 | 53 | ||
25 | 54 | ### HDP 2.1.3 Usage #1: Combined HDFS and MapReduce | 54 | ### HDP 2.2.0 Usage #1: Combined HDFS and MapReduce |
26 | 55 | 55 | ||
27 | 56 | In this configuration, the YARN ResourceManager is deployed on the same | 56 | In this configuration, the YARN ResourceManager is deployed on the same |
28 | 57 | service units as HDFS namenode and the HDFS datanodes also run YARN NodeManager:: | 57 | service units as HDFS namenode and the HDFS datanodes also run YARN NodeManager:: |
29 | 58 | 58 | ||
30 | === added file 'files/jujuresources-0.2.3.tar.gz' | |||
31 | 59 | Binary files files/jujuresources-0.2.3.tar.gz 1970-01-01 00:00:00 +0000 and files/jujuresources-0.2.3.tar.gz 2015-02-13 00:38:49 +0000 differ | 59 | Binary files files/jujuresources-0.2.3.tar.gz 1970-01-01 00:00:00 +0000 and files/jujuresources-0.2.3.tar.gz 2015-02-13 00:38:49 +0000 differ |
32 | === added file 'files/scripts/terasort.sh' | |||
33 | --- files/scripts/terasort.sh 1970-01-01 00:00:00 +0000 | |||
34 | +++ files/scripts/terasort.sh 2015-02-13 00:38:49 +0000 | |||
35 | @@ -0,0 +1,15 @@ | |||
36 | 1 | #!/bin/bash | ||
37 | 2 | |||
38 | 3 | |||
39 | 4 | SIZE=10000 | ||
40 | 5 | NUM_MAPS=100 | ||
41 | 6 | NUM_REDUCES=100 | ||
42 | 7 | IN_DIR=in_dir | ||
43 | 8 | OUT_DIR=out_dir | ||
44 | 9 | hadoop fs -rm -r -skipTrash ${IN_DIR} || true | ||
45 | 10 | hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen ${SIZE} ${IN_DIR} | ||
46 | 11 | |||
47 | 12 | sleep 20 | ||
48 | 13 | |||
49 | 14 | hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort ${IN_DIR} ${OUT_DIR} | ||
50 | 15 | |||
51 | 0 | 16 | ||
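The new terasort.sh drives the stock TeraGen/TeraSort example jobs from the HDP 2.2 examples jar. As a rough sketch, the same three-step sequence (best-effort cleanup, generate, sort) can be expressed in Python; the jar path and defaults below are taken from the script above, and `terasort_commands`/`run_terasort` are illustrative names, not part of the charm:

```python
import subprocess

# Examples jar path used by terasort.sh above (HDP 2.2 layout).
EXAMPLES_JAR = '/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar'

def terasort_commands(size=10000, in_dir='in_dir', out_dir='out_dir'):
    """Build the three hadoop invocations terasort.sh runs, in order."""
    return [
        # Clear any stale input; terasort.sh ignores failure here (|| true).
        ['hadoop', 'fs', '-rm', '-r', '-skipTrash', in_dir],
        ['hadoop', 'jar', EXAMPLES_JAR, 'teragen', str(size), in_dir],
        ['hadoop', 'jar', EXAMPLES_JAR, 'terasort', in_dir, out_dir],
    ]

def run_terasort(**kw):
    """Execute the sequence; only the cleanup step is allowed to fail."""
    cmds = terasort_commands(**kw)
    subprocess.call(cmds[0])           # best-effort cleanup
    for cmd in cmds[1:]:
        subprocess.check_call(cmd)     # fail fast on teragen/terasort errors
```

Separating command construction from execution makes the sequence easy to exercise in a unit test without a running cluster.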
52 | === modified file 'hooks/bdutils.py' | |||
53 | --- hooks/bdutils.py 2014-09-17 14:49:16 +0000 | |||
54 | +++ hooks/bdutils.py 2015-02-13 00:38:49 +0000 | |||
55 | @@ -1,11 +1,15 @@ | |||
56 | 1 | #!/usr/bin/python | 1 | #!/usr/bin/python |
57 | 2 | import os | 2 | import os |
58 | 3 | import pwd | 3 | import pwd |
59 | 4 | import sys | ||
60 | 4 | import grp | 5 | import grp |
61 | 5 | import subprocess | 6 | import subprocess |
62 | 7 | import hashlib | ||
63 | 8 | import shlex | ||
64 | 6 | 9 | ||
65 | 7 | from shutil import rmtree | 10 | from shutil import rmtree |
66 | 8 | from charmhelpers.core.hookenv import log | 11 | from charmhelpers.core.hookenv import log |
67 | 12 | from charmhelpers.contrib.bigdata.utils import jps | ||
68 | 9 | 13 | ||
69 | 10 | def createPropertyElement(name, value): | 14 | def createPropertyElement(name, value): |
70 | 11 | import xml.etree.ElementTree as ET | 15 | import xml.etree.ElementTree as ET |
71 | @@ -22,26 +26,21 @@ | |||
72 | 22 | import xml.dom.minidom as minidom | 26 | import xml.dom.minidom as minidom |
73 | 23 | print("==> setHadoopConfigXML ","INFO") | 27 | print("==> setHadoopConfigXML ","INFO") |
74 | 24 | import xml.etree.ElementTree as ET | 28 | import xml.etree.ElementTree as ET |
75 | 25 | found = False | ||
76 | 26 | with open(xmlfileNamePath,'rb+') as f: | 29 | with open(xmlfileNamePath,'rb+') as f: |
77 | 27 | root = ET.parse(f).getroot() | 30 | root = ET.parse(f).getroot() |
81 | 28 | proList = root.findall("property") | 31 | for p in root.findall("property"): |
82 | 29 | for p in proList: | 32 | if p.find('name').text == name: |
83 | 30 | if found: | 33 | if value is None: |
84 | 34 | root.remove(p) | ||
85 | 35 | else: | ||
86 | 36 | p.find('value').text = value | ||
87 | 31 | break | 37 | break |
94 | 32 | cList = p.getchildren() | 38 | else: |
95 | 33 | for c in cList: | 39 | if value is not None: |
96 | 34 | if c.text == name: | 40 | root.append(createPropertyElement(name, value)) |
91 | 35 | p.find("value").text = value | ||
92 | 36 | found = True | ||
93 | 37 | break | ||
97 | 38 | 41 | ||
98 | 39 | f.seek(0) | 42 | f.seek(0) |
104 | 40 | if not found: | 43 | f.write(ET.tostring(root, encoding='UTF-8')) |
100 | 41 | root.append(createPropertyElement(name, value)) | ||
101 | 42 | f.write((minidom.parseString(ET.tostring(root, encoding='UTF-8'))).toprettyxml(indent="\t")) | ||
102 | 43 | else: | ||
103 | 44 | f.write(ET.tostring(root, encoding='UTF-8')) | ||
105 | 45 | f.truncate() | 44 | f.truncate() |
106 | 46 | 45 | ||
107 | 47 | def setDirPermission(path, owner, group, access): | 46 | def setDirPermission(path, owner, group, access): |
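The refactored setHadoopConfigXML above collapses the old found-flag scan into a single for/else pass: update the value when the property exists, remove the property when the new value is None, and append it only when it is missing and a value was given. A standalone sketch of that logic, operating on an already-parsed tree rather than the config file (`set_hadoop_property` is an illustrative name):

```python
import xml.etree.ElementTree as ET

def set_hadoop_property(root, name, value):
    """Update, remove (value=None), or append a <property> under root,
    mirroring the for/else flow of the refactored setHadoopConfigXML."""
    for p in root.findall('property'):
        if p.find('name').text == name:
            if value is None:
                root.remove(p)              # drop the property entirely
            else:
                p.find('value').text = value
            break
    else:
        # Not found: only append when there is actually a value to set.
        if value is not None:
            prop = ET.SubElement(root, 'property')
            ET.SubElement(prop, 'name').text = name
            ET.SubElement(prop, 'value').text = value
    return root
```

The for/else form reads as "if the loop ran to completion without break, the property was never found", which is exactly the case the old `found` flag tracked.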
108 | @@ -67,15 +66,21 @@ | |||
109 | 67 | for r,d,f in os.walk(path): | 66 | for r,d,f in os.walk(path): |
110 | 68 | os.chmod( r , mode) | 67 | os.chmod( r , mode) |
111 | 69 | 68 | ||
113 | 70 | def wgetPkg(pkgName, crypType): | 69 | def wgetPkg(url, dest): |
114 | 71 | log("==> wgetPkg ") | 70 | log("==> wgetPkg ") |
122 | 72 | crypFileName= pkgName+'.'+crypType | 71 | output = subprocess.check_output(['wget', '-S', url, '-O', dest], |
123 | 73 | cmd = 'wget '+pkgName | 72 | stderr=subprocess.STDOUT) |
124 | 74 | subprocess.call(cmd.split()) | 73 | etag = [l for l in output.split('\n') if l.startswith(' ETag')] |
125 | 75 | if crypType: | 74 | if etag: |
126 | 76 | cmd = ['wget', crypFileName] | 75 | etag = etag[0].split(':')[1].strip(' "') |
127 | 77 | subprocess.call(cmd) | 76 | if '-' in etag: |
128 | 78 | #TODO -- cryption validation | 77 | log.warn('Multi-part ETag not supported; cannot verify checksum') |
129 | 78 | return # TODO: Support multi-part ETag algorithm | ||
130 | 79 | with open(dest) as fp: | ||
131 | 80 | md5 = hashlib.md5(fp.read()).hexdigest() | ||
132 | 81 | if md5 != etag: | ||
133 | 82 | log.error('Checksum mismatch: %s != %s' % (md5, etag)) | ||
134 | 83 | sys.exit(1) | ||
135 | 79 | 84 | ||
136 | 80 | def append_bashrc(line): | 85 | def append_bashrc(line): |
137 | 81 | log("==> append_bashrc","INFO") | 86 | log("==> append_bashrc","INFO") |
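The rewritten wgetPkg verifies the download by comparing the file's MD5 against the HTTP ETag reported by `wget -S`, and skips verification for multi-part ETags (which contain a '-' and are not a plain MD5). The header parsing and comparison can be sketched as two small pure functions (`parse_etag` and `md5_matches_etag` are illustrative names):

```python
import hashlib

def parse_etag(wget_headers):
    """Extract the ETag value from `wget -S` header output, as wgetPkg does."""
    lines = [l for l in wget_headers.split('\n') if l.strip().startswith('ETag')]
    if not lines:
        return None
    return lines[0].split(':', 1)[1].strip(' "')

def md5_matches_etag(data, etag):
    """Single-part ETags (e.g. from S3) are the MD5 of the body; multi-part
    ETags use a different algorithm, so return None (cannot verify)."""
    if etag is None or '-' in etag:
        return None
    return hashlib.md5(data).hexdigest() == etag
```

Note that an ETag is only the body MD5 by convention on some servers, so a mismatch is worth logging loudly (as the diff does) rather than silently ignoring.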
138 | @@ -124,10 +129,21 @@ | |||
139 | 124 | os.environ[ll[0]] = ll[1].strip().strip(';').strip("\"").strip() | 129 | os.environ[ll[0]] = ll[1].strip().strip(';').strip("\"").strip() |
140 | 125 | 130 | ||
141 | 126 | def is_jvm_service_active(processname): | 131 | def is_jvm_service_active(processname): |
146 | 127 | cmd=["jps"] | 132 | return jps(processname) |
147 | 128 | p = subprocess.Popen(cmd, stdout=subprocess.PIPE) | 133 | |
148 | 129 | out, err = p.communicate() | 134 | |
149 | 130 | if err == None and str(out).find(processname) != -1: | 135 | def HDFS_command(command): |
150 | 136 | cmd = shlex.split("su hdfs -c 'hdfs {c}'".format(c=command)) | ||
151 | 137 | return subprocess.check_output(cmd) | ||
152 | 138 | |||
153 | 139 | def fconfigured(filename): | ||
154 | 140 | fpath = os.path.join(os.path.sep, 'home', 'ubuntu', filename) | ||
155 | 141 | if os.path.isfile(fpath): | ||
156 | 131 | return True | 142 | return True |
157 | 132 | else: | 143 | else: |
158 | 133 | return False | ||
159 | 134 | \ No newline at end of file | 144 | \ No newline at end of file |
160 | 145 | touch(fpath) | ||
161 | 146 | return False | ||
162 | 147 | |||
163 | 148 | def touch(fname, times=None): | ||
164 | 149 | with open(fname, 'a'): | ||
165 | 150 | os.utime(fname, times) | ||
166 | 135 | 151 | ||
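The new fconfigured/touch helpers implement a run-once guard: the first call creates a marker file and returns False (caller performs the step), later calls find the marker and return True (caller skips it). A self-contained sketch; the `base` parameter is added here for illustration, while the hook hard-codes /home/ubuntu:

```python
import os

def touch(fname, times=None):
    """Create fname if needed and update its access/modify times."""
    with open(fname, 'a'):
        os.utime(fname, times)

def fconfigured(filename, base='/home/ubuntu'):
    """Return True if this step already ran (marker file exists); otherwise
    drop the marker and return False so the caller runs the step exactly once."""
    fpath = os.path.join(base, filename)
    if os.path.isfile(fpath):
        return True
    touch(fpath)
    return False
```

This pattern keeps Juju hooks idempotent across repeated invocations, which matters because hooks can be re-run after errors or upgrades.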
167 | === added file 'hooks/bootstrap.py' | |||
168 | --- hooks/bootstrap.py 1970-01-01 00:00:00 +0000 | |||
169 | +++ hooks/bootstrap.py 2015-02-13 00:38:49 +0000 | |||
170 | @@ -0,0 +1,9 @@ | |||
171 | 1 | import subprocess | ||
172 | 2 | |||
173 | 3 | |||
174 | 4 | def install_charmhelpers(): | ||
175 | 5 | subprocess.check_call(['apt-get', 'install', '-yq', 'python-pip']) | ||
176 | 6 | subprocess.check_call(['pip', 'install', 'files/jujuresources-0.2.3.tar.gz']) | ||
177 | 7 | import jujuresources | ||
178 | 8 | jujuresources.fetch() | ||
179 | 9 | jujuresources.install(['pathlib', 'pyaml', 'six', 'charmhelpers']) | ||
180 | 0 | 10 | ||
181 | === removed directory 'hooks/charmhelpers' | |||
182 | === removed file 'hooks/charmhelpers/__init__.py' | |||
183 | === removed directory 'hooks/charmhelpers/core' | |||
184 | === removed file 'hooks/charmhelpers/core/__init__.py' | |||
185 | === removed file 'hooks/charmhelpers/core/fstab.py' | |||
186 | --- hooks/charmhelpers/core/fstab.py 2014-07-09 12:37:36 +0000 | |||
187 | +++ hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000 | |||
188 | @@ -1,114 +0,0 @@ | |||
189 | 1 | #!/usr/bin/env python | ||
190 | 2 | # -*- coding: utf-8 -*- | ||
191 | 3 | |||
192 | 4 | __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>' | ||
193 | 5 | |||
194 | 6 | import os | ||
195 | 7 | |||
196 | 8 | |||
197 | 9 | class Fstab(file): | ||
198 | 10 | """This class extends file in order to implement a file reader/writer | ||
199 | 11 | for file `/etc/fstab` | ||
200 | 12 | """ | ||
201 | 13 | |||
202 | 14 | class Entry(object): | ||
203 | 15 | """Entry class represents a non-comment line on the `/etc/fstab` file | ||
204 | 16 | """ | ||
205 | 17 | def __init__(self, device, mountpoint, filesystem, | ||
206 | 18 | options, d=0, p=0): | ||
207 | 19 | self.device = device | ||
208 | 20 | self.mountpoint = mountpoint | ||
209 | 21 | self.filesystem = filesystem | ||
210 | 22 | |||
211 | 23 | if not options: | ||
212 | 24 | options = "defaults" | ||
213 | 25 | |||
214 | 26 | self.options = options | ||
215 | 27 | self.d = d | ||
216 | 28 | self.p = p | ||
217 | 29 | |||
218 | 30 | def __eq__(self, o): | ||
219 | 31 | return str(self) == str(o) | ||
220 | 32 | |||
221 | 33 | def __str__(self): | ||
222 | 34 | return "{} {} {} {} {} {}".format(self.device, | ||
223 | 35 | self.mountpoint, | ||
224 | 36 | self.filesystem, | ||
225 | 37 | self.options, | ||
226 | 38 | self.d, | ||
227 | 39 | self.p) | ||
228 | 40 | |||
229 | 41 | DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab') | ||
230 | 42 | |||
231 | 43 | def __init__(self, path=None): | ||
232 | 44 | if path: | ||
233 | 45 | self._path = path | ||
234 | 46 | else: | ||
235 | 47 | self._path = self.DEFAULT_PATH | ||
236 | 48 | file.__init__(self, self._path, 'r+') | ||
237 | 49 | |||
238 | 50 | def _hydrate_entry(self, line): | ||
239 | 51 | return Fstab.Entry(*filter( | ||
240 | 52 | lambda x: x not in ('', None), | ||
241 | 53 | line.strip("\n").split(" "))) | ||
242 | 54 | |||
243 | 55 | @property | ||
244 | 56 | def entries(self): | ||
245 | 57 | self.seek(0) | ||
246 | 58 | for line in self.readlines(): | ||
247 | 59 | try: | ||
248 | 60 | if not line.startswith("#"): | ||
249 | 61 | yield self._hydrate_entry(line) | ||
250 | 62 | except ValueError: | ||
251 | 63 | pass | ||
252 | 64 | |||
253 | 65 | def get_entry_by_attr(self, attr, value): | ||
254 | 66 | for entry in self.entries: | ||
255 | 67 | e_attr = getattr(entry, attr) | ||
256 | 68 | if e_attr == value: | ||
257 | 69 | return entry | ||
258 | 70 | return None | ||
259 | 71 | |||
260 | 72 | def add_entry(self, entry): | ||
261 | 73 | if self.get_entry_by_attr('device', entry.device): | ||
262 | 74 | return False | ||
263 | 75 | |||
264 | 76 | self.write(str(entry) + '\n') | ||
265 | 77 | self.truncate() | ||
266 | 78 | return entry | ||
267 | 79 | |||
268 | 80 | def remove_entry(self, entry): | ||
269 | 81 | self.seek(0) | ||
270 | 82 | |||
271 | 83 | lines = self.readlines() | ||
272 | 84 | |||
273 | 85 | found = False | ||
274 | 86 | for index, line in enumerate(lines): | ||
275 | 87 | if not line.startswith("#"): | ||
276 | 88 | if self._hydrate_entry(line) == entry: | ||
277 | 89 | found = True | ||
278 | 90 | break | ||
279 | 91 | |||
280 | 92 | if not found: | ||
281 | 93 | return False | ||
282 | 94 | |||
283 | 95 | lines.remove(line) | ||
284 | 96 | |||
285 | 97 | self.seek(0) | ||
286 | 98 | self.write(''.join(lines)) | ||
287 | 99 | self.truncate() | ||
288 | 100 | return True | ||
289 | 101 | |||
290 | 102 | @classmethod | ||
291 | 103 | def remove_by_mountpoint(cls, mountpoint, path=None): | ||
292 | 104 | fstab = cls(path=path) | ||
293 | 105 | entry = fstab.get_entry_by_attr('mountpoint', mountpoint) | ||
294 | 106 | if entry: | ||
295 | 107 | return fstab.remove_entry(entry) | ||
296 | 108 | return False | ||
297 | 109 | |||
298 | 110 | @classmethod | ||
299 | 111 | def add(cls, device, mountpoint, filesystem, options=None, path=None): | ||
300 | 112 | return cls(path=path).add_entry(Fstab.Entry(device, | ||
301 | 113 | mountpoint, filesystem, | ||
302 | 114 | options=options)) | ||
303 | 115 | 0 | ||
304 | === removed file 'hooks/charmhelpers/core/hookenv.py' | |||
305 | --- hooks/charmhelpers/core/hookenv.py 2014-07-09 12:37:36 +0000 | |||
306 | +++ hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000 | |||
307 | @@ -1,498 +0,0 @@ | |||
308 | 1 | "Interactions with the Juju environment" | ||
309 | 2 | # Copyright 2013 Canonical Ltd. | ||
310 | 3 | # | ||
311 | 4 | # Authors: | ||
312 | 5 | # Charm Helpers Developers <juju@lists.ubuntu.com> | ||
313 | 6 | |||
314 | 7 | import os | ||
315 | 8 | import json | ||
316 | 9 | import yaml | ||
317 | 10 | import subprocess | ||
318 | 11 | import sys | ||
319 | 12 | import UserDict | ||
320 | 13 | from subprocess import CalledProcessError | ||
321 | 14 | |||
322 | 15 | CRITICAL = "CRITICAL" | ||
323 | 16 | ERROR = "ERROR" | ||
324 | 17 | WARNING = "WARNING" | ||
325 | 18 | INFO = "INFO" | ||
326 | 19 | DEBUG = "DEBUG" | ||
327 | 20 | MARKER = object() | ||
328 | 21 | |||
329 | 22 | cache = {} | ||
330 | 23 | |||
331 | 24 | |||
332 | 25 | def cached(func): | ||
333 | 26 | """Cache return values for multiple executions of func + args | ||
334 | 27 | |||
335 | 28 | For example: | ||
336 | 29 | |||
337 | 30 | @cached | ||
338 | 31 | def unit_get(attribute): | ||
339 | 32 | pass | ||
340 | 33 | |||
341 | 34 | unit_get('test') | ||
342 | 35 | |||
343 | 36 | will cache the result of unit_get + 'test' for future calls. | ||
344 | 37 | """ | ||
345 | 38 | def wrapper(*args, **kwargs): | ||
346 | 39 | global cache | ||
347 | 40 | key = str((func, args, kwargs)) | ||
348 | 41 | try: | ||
349 | 42 | return cache[key] | ||
350 | 43 | except KeyError: | ||
351 | 44 | res = func(*args, **kwargs) | ||
352 | 45 | cache[key] = res | ||
353 | 46 | return res | ||
354 | 47 | return wrapper | ||
355 | 48 | |||
356 | 49 | |||
357 | 50 | def flush(key): | ||
358 | 51 | """Flushes any entries from function cache where the | ||
359 | 52 | key is found in the function+args """ | ||
360 | 53 | flush_list = [] | ||
361 | 54 | for item in cache: | ||
362 | 55 | if key in item: | ||
363 | 56 | flush_list.append(item) | ||
364 | 57 | for item in flush_list: | ||
365 | 58 | del cache[item] | ||
366 | 59 | |||
367 | 60 | |||
368 | 61 | def log(message, level=None): | ||
369 | 62 | """Write a message to the juju log""" | ||
370 | 63 | command = ['juju-log'] | ||
371 | 64 | if level: | ||
372 | 65 | command += ['-l', level] | ||
373 | 66 | command += [message] | ||
374 | 67 | subprocess.call(command) | ||
375 | 68 | |||
376 | 69 | |||
377 | 70 | class Serializable(UserDict.IterableUserDict): | ||
378 | 71 | """Wrapper, an object that can be serialized to yaml or json""" | ||
379 | 72 | |||
380 | 73 | def __init__(self, obj): | ||
381 | 74 | # wrap the object | ||
382 | 75 | UserDict.IterableUserDict.__init__(self) | ||
383 | 76 | self.data = obj | ||
384 | 77 | |||
385 | 78 | def __getattr__(self, attr): | ||
386 | 79 | # See if this object has attribute. | ||
387 | 80 | if attr in ("json", "yaml", "data"): | ||
388 | 81 | return self.__dict__[attr] | ||
389 | 82 | # Check for attribute in wrapped object. | ||
390 | 83 | got = getattr(self.data, attr, MARKER) | ||
391 | 84 | if got is not MARKER: | ||
392 | 85 | return got | ||
393 | 86 | # Proxy to the wrapped object via dict interface. | ||
394 | 87 | try: | ||
395 | 88 | return self.data[attr] | ||
396 | 89 | except KeyError: | ||
397 | 90 | raise AttributeError(attr) | ||
398 | 91 | |||
399 | 92 | def __getstate__(self): | ||
400 | 93 | # Pickle as a standard dictionary. | ||
401 | 94 | return self.data | ||
402 | 95 | |||
403 | 96 | def __setstate__(self, state): | ||
404 | 97 | # Unpickle into our wrapper. | ||
405 | 98 | self.data = state | ||
406 | 99 | |||
407 | 100 | def json(self): | ||
408 | 101 | """Serialize the object to json""" | ||
409 | 102 | return json.dumps(self.data) | ||
410 | 103 | |||
411 | 104 | def yaml(self): | ||
412 | 105 | """Serialize the object to yaml""" | ||
413 | 106 | return yaml.dump(self.data) | ||
414 | 107 | |||
415 | 108 | |||
416 | 109 | def execution_environment(): | ||
417 | 110 | """A convenient bundling of the current execution context""" | ||
418 | 111 | context = {} | ||
419 | 112 | context['conf'] = config() | ||
420 | 113 | if relation_id(): | ||
421 | 114 | context['reltype'] = relation_type() | ||
422 | 115 | context['relid'] = relation_id() | ||
423 | 116 | context['rel'] = relation_get() | ||
424 | 117 | context['unit'] = local_unit() | ||
425 | 118 | context['rels'] = relations() | ||
426 | 119 | context['env'] = os.environ | ||
427 | 120 | return context | ||
428 | 121 | |||
429 | 122 | |||
430 | 123 | def in_relation_hook(): | ||
431 | 124 | """Determine whether we're running in a relation hook""" | ||
432 | 125 | return 'JUJU_RELATION' in os.environ | ||
433 | 126 | |||
434 | 127 | |||
435 | 128 | def relation_type(): | ||
436 | 129 | """The scope for the current relation hook""" | ||
437 | 130 | return os.environ.get('JUJU_RELATION', None) | ||
438 | 131 | |||
439 | 132 | |||
440 | 133 | def relation_id(): | ||
441 | 134 | """The relation ID for the current relation hook""" | ||
442 | 135 | return os.environ.get('JUJU_RELATION_ID', None) | ||
443 | 136 | |||
444 | 137 | |||
445 | 138 | def local_unit(): | ||
446 | 139 | """Local unit ID""" | ||
447 | 140 | return os.environ['JUJU_UNIT_NAME'] | ||
448 | 141 | |||
449 | 142 | |||
450 | 143 | def remote_unit(): | ||
451 | 144 | """The remote unit for the current relation hook""" | ||
452 | 145 | return os.environ['JUJU_REMOTE_UNIT'] | ||
453 | 146 | |||
454 | 147 | |||
455 | 148 | def service_name(): | ||
456 | 149 | """The name service group this unit belongs to""" | ||
457 | 150 | return local_unit().split('/')[0] | ||
458 | 151 | |||
459 | 152 | |||
460 | 153 | def hook_name(): | ||
461 | 154 | """The name of the currently executing hook""" | ||
462 | 155 | return os.path.basename(sys.argv[0]) | ||
463 | 156 | |||
464 | 157 | |||
465 | 158 | class Config(dict): | ||
466 | 159 | """A Juju charm config dictionary that can write itself to | ||
467 | 160 | disk (as json) and track which values have changed since | ||
468 | 161 | the previous hook invocation. | ||
469 | 162 | |||
470 | 163 | Do not instantiate this object directly - instead call | ||
471 | 164 | ``hookenv.config()`` | ||
472 | 165 | |||
473 | 166 | Example usage:: | ||
474 | 167 | |||
475 | 168 | >>> # inside a hook | ||
476 | 169 | >>> from charmhelpers.core import hookenv | ||
477 | 170 | >>> config = hookenv.config() | ||
478 | 171 | >>> config['foo'] | ||
479 | 172 | 'bar' | ||
480 | 173 | >>> config['mykey'] = 'myval' | ||
481 | 174 | >>> config.save() | ||
482 | 175 | |||
483 | 176 | |||
484 | 177 | >>> # user runs `juju set mycharm foo=baz` | ||
485 | 178 | >>> # now we're inside subsequent config-changed hook | ||
486 | 179 | >>> config = hookenv.config() | ||
487 | 180 | >>> config['foo'] | ||
488 | 181 | 'baz' | ||
489 | 182 | >>> # test to see if this val has changed since last hook | ||
490 | 183 | >>> config.changed('foo') | ||
491 | 184 | True | ||
492 | 185 | >>> # what was the previous value? | ||
493 | 186 | >>> config.previous('foo') | ||
494 | 187 | 'bar' | ||
495 | 188 | >>> # keys/values that we add are preserved across hooks | ||
496 | 189 | >>> config['mykey'] | ||
497 | 190 | 'myval' | ||
498 | 191 | >>> # don't forget to save at the end of hook! | ||
499 | 192 | >>> config.save() | ||
500 | 193 | |||
501 | 194 | """ | ||
502 | 195 | CONFIG_FILE_NAME = '.juju-persistent-config' | ||
503 | 196 | |||
504 | 197 | def __init__(self, *args, **kw): | ||
505 | 198 | super(Config, self).__init__(*args, **kw) | ||
506 | 199 | self._prev_dict = None | ||
507 | 200 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) | ||
508 | 201 | if os.path.exists(self.path): | ||
509 | 202 | self.load_previous() | ||
510 | 203 | |||
511 | 204 | def load_previous(self, path=None): | ||
512 | 205 | """Load previous copy of config from disk so that current values | ||
513 | 206 | can be compared to previous values. | ||
514 | 207 | |||
515 | 208 | :param path: | ||
516 | 209 | |||
517 | 210 | File path from which to load the previous config. If `None`, | ||
518 | 211 | config is loaded from the default location. If `path` is | ||
519 | 212 | specified, subsequent `save()` calls will write to the same | ||
520 | 213 | path. | ||
521 | 214 | |||
522 | 215 | """ | ||
523 | 216 | self.path = path or self.path | ||
524 | 217 | with open(self.path) as f: | ||
525 | 218 | self._prev_dict = json.load(f) | ||
526 | 219 | |||
527 | 220 | def changed(self, key): | ||
528 | 221 | """Return true if the value for this key has changed since | ||
529 | 222 | the last save. | ||
530 | 223 | |||
531 | 224 | """ | ||
532 | 225 | if self._prev_dict is None: | ||
533 | 226 | return True | ||
534 | 227 | return self.previous(key) != self.get(key) | ||
535 | 228 | |||
536 | 229 | def previous(self, key): | ||
537 | 230 | """Return previous value for this key, or None if there | ||
538 | 231 | is no "previous" value. | ||
539 | 232 | |||
540 | 233 | """ | ||
541 | 234 | if self._prev_dict: | ||
542 | 235 | return self._prev_dict.get(key) | ||
543 | 236 | return None | ||
544 | 237 | |||
545 | 238 | def save(self): | ||
546 | 239 | """Save this config to disk. | ||
547 | 240 | |||
548 | 241 | Preserves items in _prev_dict that do not exist in self. | ||
549 | 242 | |||
550 | 243 | """ | ||
551 | 244 | if self._prev_dict: | ||
552 | 245 | for k, v in self._prev_dict.iteritems(): | ||
553 | 246 | if k not in self: | ||
554 | 247 | self[k] = v | ||
555 | 248 | with open(self.path, 'w') as f: | ||
556 | 249 | json.dump(self, f) | ||
557 | 250 | |||
558 | 251 | |||
559 | 252 | @cached | ||
560 | 253 | def config(scope=None): | ||
561 | 254 | """Juju charm configuration""" | ||
562 | 255 | config_cmd_line = ['config-get'] | ||
563 | 256 | if scope is not None: | ||
564 | 257 | config_cmd_line.append(scope) | ||
565 | 258 | config_cmd_line.append('--format=json') | ||
566 | 259 | try: | ||
567 | 260 | config_data = json.loads(subprocess.check_output(config_cmd_line)) | ||
568 | 261 | if scope is not None: | ||
569 | 262 | return config_data | ||
570 | 263 | return Config(config_data) | ||
571 | 264 | except ValueError: | ||
572 | 265 | return None | ||
573 | 266 | |||
574 | 267 | |||
575 | 268 | @cached | ||
576 | 269 | def relation_get(attribute=None, unit=None, rid=None): | ||
577 | 270 | """Get relation information""" | ||
578 | 271 | _args = ['relation-get', '--format=json'] | ||
579 | 272 | if rid: | ||
580 | 273 | _args.append('-r') | ||
581 | 274 | _args.append(rid) | ||
582 | 275 | _args.append(attribute or '-') | ||
583 | 276 | if unit: | ||
584 | 277 | _args.append(unit) | ||
585 | 278 | try: | ||
586 | 279 | return json.loads(subprocess.check_output(_args)) | ||
587 | 280 | except ValueError: | ||
588 | 281 | return None | ||
589 | 282 | except CalledProcessError, e: | ||
590 | 283 | if e.returncode == 2: | ||
591 | 284 | return None | ||
592 | 285 | raise | ||
593 | 286 | |||
594 | 287 | |||
595 | 288 | def relation_set(relation_id=None, relation_settings={}, **kwargs): | ||
596 | 289 | """Set relation information for the current unit""" | ||
597 | 290 | relation_cmd_line = ['relation-set'] | ||
598 | 291 | if relation_id is not None: | ||
599 | 292 | relation_cmd_line.extend(('-r', relation_id)) | ||
600 | 293 | for k, v in (relation_settings.items() + kwargs.items()): | ||
601 | 294 | if v is None: | ||
602 | 295 | relation_cmd_line.append('{}='.format(k)) | ||
603 | 296 | else: | ||
604 | 297 | relation_cmd_line.append('{}={}'.format(k, v)) | ||
605 | 298 | subprocess.check_call(relation_cmd_line) | ||
606 | 299 | # Flush cache of any relation-gets for local unit | ||
607 | 300 | flush(local_unit()) | ||
608 | 301 | |||
609 | 302 | |||
610 | 303 | @cached | ||
611 | 304 | def relation_ids(reltype=None): | ||
612 | 305 | """A list of relation_ids""" | ||
613 | 306 | reltype = reltype or relation_type() | ||
614 | 307 | relid_cmd_line = ['relation-ids', '--format=json'] | ||
615 | 308 | if reltype is not None: | ||
616 | 309 | relid_cmd_line.append(reltype) | ||
617 | 310 | return json.loads(subprocess.check_output(relid_cmd_line)) or [] | ||
618 | 311 | return [] | ||
619 | 312 | |||
620 | 313 | |||
621 | 314 | @cached | ||
622 | 315 | def related_units(relid=None): | ||
623 | 316 | """A list of related units""" | ||
624 | 317 | relid = relid or relation_id() | ||
625 | 318 | units_cmd_line = ['relation-list', '--format=json'] | ||
626 | 319 | if relid is not None: | ||
627 | 320 | units_cmd_line.extend(('-r', relid)) | ||
628 | 321 | return json.loads(subprocess.check_output(units_cmd_line)) or [] | ||
629 | 322 | |||
630 | 323 | |||
631 | 324 | @cached | ||
632 | 325 | def relation_for_unit(unit=None, rid=None): | ||
633 | 326 | """Get the json represenation of a unit's relation""" | ||
634 | 327 | unit = unit or remote_unit() | ||
635 | 328 | relation = relation_get(unit=unit, rid=rid) | ||
636 | 329 | for key in relation: | ||
637 | 330 | if key.endswith('-list'): | ||
638 | 331 | relation[key] = relation[key].split() | ||
639 | 332 | relation['__unit__'] = unit | ||
640 | 333 | return relation | ||
641 | 334 | |||
642 | 335 | |||
643 | 336 | @cached | ||
644 | 337 | def relations_for_id(relid=None): | ||
645 | 338 | """Get relations of a specific relation ID""" | ||
646 | 339 | relation_data = [] | ||
647 | 340 | relid = relid or relation_ids() | ||
648 | 341 | for unit in related_units(relid): | ||
649 | 342 | unit_data = relation_for_unit(unit, relid) | ||
650 | 343 | unit_data['__relid__'] = relid | ||
651 | 344 | relation_data.append(unit_data) | ||
652 | 345 | return relation_data | ||
653 | 346 | |||
654 | 347 | |||
655 | 348 | @cached | ||
656 | 349 | def relations_of_type(reltype=None): | ||
657 | 350 | """Get relations of a specific type""" | ||
658 | 351 | relation_data = [] | ||
659 | 352 | reltype = reltype or relation_type() | ||
660 | 353 | for relid in relation_ids(reltype): | ||
661 | 354 | for relation in relations_for_id(relid): | ||
662 | 355 | relation['__relid__'] = relid | ||
663 | 356 | relation_data.append(relation) | ||
664 | 357 | return relation_data | ||
665 | 358 | |||
666 | 359 | |||
667 | 360 | @cached | ||
668 | 361 | def relation_types(): | ||
669 | 362 | """Get a list of relation types supported by this charm""" | ||
670 | 363 | charmdir = os.environ.get('CHARM_DIR', '') | ||
671 | 364 | mdf = open(os.path.join(charmdir, 'metadata.yaml')) | ||
672 | 365 | md = yaml.safe_load(mdf) | ||
673 | 366 | rel_types = [] | ||
674 | 367 | for key in ('provides', 'requires', 'peers'): | ||
675 | 368 | section = md.get(key) | ||
676 | 369 | if section: | ||
677 | 370 | rel_types.extend(section.keys()) | ||
678 | 371 | mdf.close() | ||
679 | 372 | return rel_types | ||
680 | 373 | |||
681 | 374 | |||
682 | 375 | @cached | ||
683 | 376 | def relations(): | ||
684 | 377 | """Get a nested dictionary of relation data for all related units""" | ||
685 | 378 | rels = {} | ||
686 | 379 | for reltype in relation_types(): | ||
687 | 380 | relids = {} | ||
688 | 381 | for relid in relation_ids(reltype): | ||
689 | 382 | units = {local_unit(): relation_get(unit=local_unit(), rid=relid)} | ||
690 | 383 | for unit in related_units(relid): | ||
691 | 384 | reldata = relation_get(unit=unit, rid=relid) | ||
692 | 385 | units[unit] = reldata | ||
693 | 386 | relids[relid] = units | ||
694 | 387 | rels[reltype] = relids | ||
695 | 388 | return rels | ||
696 | 389 | |||
697 | 390 | |||
698 | 391 | @cached | ||
699 | 392 | def is_relation_made(relation, keys='private-address'): | ||
700 | 393 | ''' | ||
701 | 394 | Determine whether a relation is established by checking for | ||
702 | 395 | presence of key(s). If a list of keys is provided, they | ||
703 | 396 | must all be present for the relation to be identified as made | ||
704 | 397 | ''' | ||
705 | 398 | if isinstance(keys, str): | ||
706 | 399 | keys = [keys] | ||
707 | 400 | for r_id in relation_ids(relation): | ||
708 | 401 | for unit in related_units(r_id): | ||
709 | 402 | context = {} | ||
710 | 403 | for k in keys: | ||
711 | 404 | context[k] = relation_get(k, rid=r_id, | ||
712 | 405 | unit=unit) | ||
713 | 406 | if None not in context.values(): | ||
714 | 407 | return True | ||
715 | 408 | return False | ||
716 | 409 | |||
717 | 410 | |||
718 | 411 | def open_port(port, protocol="TCP"): | ||
719 | 412 | """Open a service network port""" | ||
720 | 413 | _args = ['open-port'] | ||
721 | 414 | _args.append('{}/{}'.format(port, protocol)) | ||
722 | 415 | subprocess.check_call(_args) | ||
723 | 416 | |||
724 | 417 | |||
725 | 418 | def close_port(port, protocol="TCP"): | ||
726 | 419 | """Close a service network port""" | ||
727 | 420 | _args = ['close-port'] | ||
728 | 421 | _args.append('{}/{}'.format(port, protocol)) | ||
729 | 422 | subprocess.check_call(_args) | ||
730 | 423 | |||
731 | 424 | |||
732 | 425 | @cached | ||
733 | 426 | def unit_get(attribute): | ||
734 | 427 | """Get an attribute of the local unit via unit-get""" | ||
735 | 428 | _args = ['unit-get', '--format=json', attribute] | ||
736 | 429 | try: | ||
737 | 430 | return json.loads(subprocess.check_output(_args)) | ||
738 | 431 | except ValueError: | ||
739 | 432 | return None | ||
740 | 433 | |||
741 | 434 | |||
742 | 435 | def unit_private_ip(): | ||
743 | 436 | """Get this unit's private IP address""" | ||
744 | 437 | return unit_get('private-address') | ||
745 | 438 | |||
746 | 439 | |||
747 | 440 | class UnregisteredHookError(Exception): | ||
748 | 441 | """Raised when an undefined hook is called""" | ||
749 | 442 | pass | ||
750 | 443 | |||
751 | 444 | |||
752 | 445 | class Hooks(object): | ||
753 | 446 | """A convenient handler for hook functions. | ||
754 | 447 | |||
755 | 448 | Example: | ||
756 | 449 | hooks = Hooks() | ||
757 | 450 | |||
758 | 451 | # register a hook, taking its name from the function name | ||
759 | 452 | @hooks.hook() | ||
760 | 453 | def install(): | ||
761 | 454 | ... | ||
762 | 455 | |||
763 | 456 | # register a hook, providing a custom hook name | ||
764 | 457 | @hooks.hook("config-changed") | ||
765 | 458 | def config_changed(): | ||
766 | 459 | ... | ||
767 | 460 | |||
768 | 461 | if __name__ == "__main__": | ||
769 | 462 | # execute a hook based on the name the program is called by | ||
770 | 463 | hooks.execute(sys.argv) | ||
771 | 464 | """ | ||
772 | 465 | |||
773 | 466 | def __init__(self): | ||
774 | 467 | super(Hooks, self).__init__() | ||
775 | 468 | self._hooks = {} | ||
776 | 469 | |||
777 | 470 | def register(self, name, function): | ||
778 | 471 | """Register a hook""" | ||
779 | 472 | self._hooks[name] = function | ||
780 | 473 | |||
781 | 474 | def execute(self, args): | ||
782 | 475 | """Execute a registered hook based on args[0]""" | ||
783 | 476 | hook_name = os.path.basename(args[0]) | ||
784 | 477 | if hook_name in self._hooks: | ||
785 | 478 | self._hooks[hook_name]() | ||
786 | 479 | else: | ||
787 | 480 | raise UnregisteredHookError(hook_name) | ||
788 | 481 | |||
789 | 482 | def hook(self, *hook_names): | ||
790 | 483 | """Decorator, registering the wrapped function as a hook""" | ||
791 | 484 | def wrapper(decorated): | ||
792 | 485 | for hook_name in hook_names: | ||
793 | 486 | self.register(hook_name, decorated) | ||
794 | 487 | else: | ||
795 | 488 | self.register(decorated.__name__, decorated) | ||
796 | 489 | if '_' in decorated.__name__: | ||
797 | 490 | self.register( | ||
798 | 491 | decorated.__name__.replace('_', '-'), decorated) | ||
799 | 492 | return decorated | ||
800 | 493 | return wrapper | ||
801 | 494 | |||
802 | 495 | |||
803 | 496 | def charm_dir(): | ||
804 | 497 | """Return the root directory of the current charm""" | ||
805 | 498 | return os.environ.get('CHARM_DIR') | ||
806 | 499 | 0 | ||
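The removed `hookenv.Hooks` class dispatches hook functions by the basename of the invoking script, registering each function under any explicit names plus its own name (with underscores also mapped to hyphens, matching Juju's hook-file naming). A minimal standalone sketch of that dispatch pattern, independent of Juju (the `config-changed` hook name here is illustrative):

```python
import os


class UnregisteredHookError(Exception):
    """Raised when no function is registered for the requested hook."""


class Hooks(object):
    """Dispatch hook functions by the basename of the invoking script."""

    def __init__(self):
        self._hooks = {}

    def hook(self, *hook_names):
        def wrapper(decorated):
            # Register under the explicit names, plus the function's own
            # name, and a hyphenated alias to match hook-file naming.
            for hook_name in hook_names:
                self._hooks[hook_name] = decorated
            self._hooks[decorated.__name__] = decorated
            if '_' in decorated.__name__:
                self._hooks[decorated.__name__.replace('_', '-')] = decorated
            return decorated
        return wrapper

    def execute(self, args):
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        return self._hooks[hook_name]()


hooks = Hooks()

@hooks.hook('config-changed')
def config_changed():
    return 'configured'

# Dispatch as if the charm were invoked as hooks/config-changed
print(hooks.execute(['hooks/config-changed']))  # -> configured
```

Unlike the removed helper, this sketch returns the hook's result so it can be exercised directly; the charm version discards it.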
807 | === removed file 'hooks/charmhelpers/core/host.py' | |||
808 | --- hooks/charmhelpers/core/host.py 2014-07-09 12:37:36 +0000 | |||
809 | +++ hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000 | |||
810 | @@ -1,325 +0,0 @@ | |||
811 | 1 | """Tools for working with the host system""" | ||
812 | 2 | # Copyright 2012 Canonical Ltd. | ||
813 | 3 | # | ||
814 | 4 | # Authors: | ||
815 | 5 | # Nick Moffitt <nick.moffitt@canonical.com> | ||
816 | 6 | # Matthew Wedgwood <matthew.wedgwood@canonical.com> | ||
817 | 7 | |||
818 | 8 | import os | ||
819 | 9 | import pwd | ||
820 | 10 | import grp | ||
821 | 11 | import random | ||
822 | 12 | import string | ||
823 | 13 | import subprocess | ||
824 | 14 | import hashlib | ||
825 | 15 | import apt_pkg | ||
826 | 16 | |||
827 | 17 | from collections import OrderedDict | ||
828 | 18 | |||
829 | 19 | from hookenv import log | ||
830 | 20 | from fstab import Fstab | ||
831 | 21 | |||
832 | 22 | |||
833 | 23 | def service_start(service_name): | ||
834 | 24 | """Start a system service""" | ||
835 | 25 | return service('start', service_name) | ||
836 | 26 | |||
837 | 27 | |||
838 | 28 | def service_stop(service_name): | ||
839 | 29 | """Stop a system service""" | ||
840 | 30 | return service('stop', service_name) | ||
841 | 31 | |||
842 | 32 | |||
843 | 33 | def service_restart(service_name): | ||
844 | 34 | """Restart a system service""" | ||
845 | 35 | return service('restart', service_name) | ||
846 | 36 | |||
847 | 37 | |||
848 | 38 | def service_reload(service_name, restart_on_failure=False): | ||
849 | 39 | """Reload a system service, optionally falling back to restart if | ||
850 | 40 | reload fails""" | ||
851 | 41 | service_result = service('reload', service_name) | ||
852 | 42 | if not service_result and restart_on_failure: | ||
853 | 43 | service_result = service('restart', service_name) | ||
854 | 44 | return service_result | ||
855 | 45 | |||
856 | 46 | |||
857 | 47 | def service(action, service_name): | ||
858 | 48 | """Control a system service""" | ||
859 | 49 | cmd = ['service', service_name, action] | ||
860 | 50 | return subprocess.call(cmd) == 0 | ||
861 | 51 | |||
862 | 52 | |||
863 | 53 | def service_running(service): | ||
864 | 54 | """Determine whether a system service is running""" | ||
865 | 55 | try: | ||
866 | 56 | output = subprocess.check_output(['service', service, 'status']) | ||
867 | 57 | except subprocess.CalledProcessError: | ||
868 | 58 | return False | ||
869 | 59 | else: | ||
870 | 60 | if ("start/running" in output or "is running" in output): | ||
871 | 61 | return True | ||
872 | 62 | else: | ||
873 | 63 | return False | ||
874 | 64 | |||
875 | 65 | |||
876 | 66 | def adduser(username, password=None, shell='/bin/bash', system_user=False): | ||
877 | 67 | """Add a user to the system""" | ||
878 | 68 | try: | ||
879 | 69 | user_info = pwd.getpwnam(username) | ||
880 | 70 | log('user {0} already exists!'.format(username)) | ||
881 | 71 | except KeyError: | ||
882 | 72 | log('creating user {0}'.format(username)) | ||
883 | 73 | cmd = ['useradd'] | ||
884 | 74 | if system_user or password is None: | ||
885 | 75 | cmd.append('--system') | ||
886 | 76 | else: | ||
887 | 77 | cmd.extend([ | ||
888 | 78 | '--create-home', | ||
889 | 79 | '--shell', shell, | ||
890 | 80 | '--password', password, | ||
891 | 81 | ]) | ||
892 | 82 | cmd.append(username) | ||
893 | 83 | subprocess.check_call(cmd) | ||
894 | 84 | user_info = pwd.getpwnam(username) | ||
895 | 85 | return user_info | ||
896 | 86 | |||
897 | 87 | |||
898 | 88 | def add_user_to_group(username, group): | ||
899 | 89 | """Add a user to a group""" | ||
900 | 90 | cmd = [ | ||
901 | 91 | 'gpasswd', '-a', | ||
902 | 92 | username, | ||
903 | 93 | group | ||
904 | 94 | ] | ||
905 | 95 | log("Adding user {} to group {}".format(username, group)) | ||
906 | 96 | subprocess.check_call(cmd) | ||
907 | 97 | |||
908 | 98 | |||
909 | 99 | def rsync(from_path, to_path, flags='-r', options=None): | ||
910 | 100 | """Replicate the contents of a path""" | ||
911 | 101 | options = options or ['--delete', '--executability'] | ||
912 | 102 | cmd = ['/usr/bin/rsync', flags] | ||
913 | 103 | cmd.extend(options) | ||
914 | 104 | cmd.append(from_path) | ||
915 | 105 | cmd.append(to_path) | ||
916 | 106 | log(" ".join(cmd)) | ||
917 | 107 | return subprocess.check_output(cmd).strip() | ||
918 | 108 | |||
919 | 109 | |||
920 | 110 | def symlink(source, destination): | ||
921 | 111 | """Create a symbolic link""" | ||
922 | 112 | log("Symlinking {} as {}".format(source, destination)) | ||
923 | 113 | cmd = [ | ||
924 | 114 | 'ln', | ||
925 | 115 | '-sf', | ||
926 | 116 | source, | ||
927 | 117 | destination, | ||
928 | 118 | ] | ||
929 | 119 | subprocess.check_call(cmd) | ||
930 | 120 | |||
931 | 121 | |||
932 | 122 | def mkdir(path, owner='root', group='root', perms=0555, force=False): | ||
933 | 123 | """Create a directory""" | ||
934 | 124 | log("Making dir {} {}:{} {:o}".format(path, owner, group, | ||
935 | 125 | perms)) | ||
936 | 126 | uid = pwd.getpwnam(owner).pw_uid | ||
937 | 127 | gid = grp.getgrnam(group).gr_gid | ||
938 | 128 | realpath = os.path.abspath(path) | ||
939 | 129 | if os.path.exists(realpath): | ||
940 | 130 | if force and not os.path.isdir(realpath): | ||
941 | 131 | log("Removing non-directory file {} prior to mkdir()".format(path)) | ||
942 | 132 | os.unlink(realpath) | ||
943 | 133 | else: | ||
944 | 134 | os.makedirs(realpath, perms) | ||
945 | 135 | os.chown(realpath, uid, gid) | ||
946 | 136 | |||
947 | 137 | |||
948 | 138 | def write_file(path, content, owner='root', group='root', perms=0444): | ||
949 | 139 | """Create or overwrite a file with the contents of a string""" | ||
950 | 140 | log("Writing file {} {}:{} {:o}".format(path, owner, group, perms)) | ||
951 | 141 | uid = pwd.getpwnam(owner).pw_uid | ||
952 | 142 | gid = grp.getgrnam(group).gr_gid | ||
953 | 143 | with open(path, 'w') as target: | ||
954 | 144 | os.fchown(target.fileno(), uid, gid) | ||
955 | 145 | os.fchmod(target.fileno(), perms) | ||
956 | 146 | target.write(content) | ||
957 | 147 | |||
958 | 148 | |||
959 | 149 | def fstab_remove(mp): | ||
960 | 150 | """Remove the given mountpoint entry from /etc/fstab | ||
961 | 151 | """ | ||
962 | 152 | return Fstab.remove_by_mountpoint(mp) | ||
963 | 153 | |||
964 | 154 | |||
965 | 155 | def fstab_add(dev, mp, fs, options=None): | ||
966 | 156 | """Adds the given device entry to the /etc/fstab file | ||
967 | 157 | """ | ||
968 | 158 | return Fstab.add(dev, mp, fs, options=options) | ||
969 | 159 | |||
970 | 160 | |||
971 | 161 | def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"): | ||
972 | 162 | """Mount a filesystem at a particular mountpoint""" | ||
973 | 163 | cmd_args = ['mount'] | ||
974 | 164 | if options is not None: | ||
975 | 165 | cmd_args.extend(['-o', options]) | ||
976 | 166 | cmd_args.extend([device, mountpoint]) | ||
977 | 167 | try: | ||
978 | 168 | subprocess.check_output(cmd_args) | ||
979 | 169 | except subprocess.CalledProcessError, e: | ||
980 | 170 | log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output)) | ||
981 | 171 | return False | ||
982 | 172 | |||
983 | 173 | if persist: | ||
984 | 174 | return fstab_add(device, mountpoint, filesystem, options=options) | ||
985 | 175 | return True | ||
986 | 176 | |||
987 | 177 | |||
988 | 178 | def umount(mountpoint, persist=False): | ||
989 | 179 | """Unmount a filesystem""" | ||
990 | 180 | cmd_args = ['umount', mountpoint] | ||
991 | 181 | try: | ||
992 | 182 | subprocess.check_output(cmd_args) | ||
993 | 183 | except subprocess.CalledProcessError, e: | ||
994 | 184 | log('Error unmounting {}\n{}'.format(mountpoint, e.output)) | ||
995 | 185 | return False | ||
996 | 186 | |||
997 | 187 | if persist: | ||
998 | 188 | return fstab_remove(mountpoint) | ||
999 | 189 | return True | ||
1000 | 190 | |||
1001 | 191 | |||
1002 | 192 | def mounts(): | ||
1003 | 193 | """Get a list of all mounted volumes as [[mountpoint,device],[...]]""" | ||
1004 | 194 | with open('/proc/mounts') as f: | ||
1005 | 195 | # [['/mount/point','/dev/path'],[...]] | ||
1006 | 196 | system_mounts = [m[1::-1] for m in [l.strip().split() | ||
1007 | 197 | for l in f.readlines()]] | ||
1008 | 198 | return system_mounts | ||
1009 | 199 | |||
1010 | 200 | |||
1011 | 201 | def file_hash(path): | ||
1012 | 202 | """Generate a md5 hash of the contents of 'path' or None if not found """ | ||
1013 | 203 | if os.path.exists(path): | ||
1014 | 204 | h = hashlib.md5() | ||
1015 | 205 | with open(path, 'r') as source: | ||
1016 | 206 | h.update(source.read()) # IGNORE:E1101 - it does have update | ||
1017 | 207 | return h.hexdigest() | ||
1018 | 208 | else: | ||
1019 | 209 | return None | ||
1020 | 210 | |||
1021 | 211 | |||
1022 | 212 | def restart_on_change(restart_map, stopstart=False): | ||
1023 | 213 | """Restart services based on configuration files changing | ||
1024 | 214 | |||
1025 | 215 | This function is used as a decorator, for example | ||
1026 | 216 | |||
1027 | 217 | @restart_on_change({ | ||
1028 | 218 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] | ||
1029 | 219 | }) | ||
1030 | 220 | def ceph_client_changed(): | ||
1031 | 221 | ... | ||
1032 | 222 | |||
1033 | 223 | In this example, the cinder-api and cinder-volume services | ||
1034 | 224 | would be restarted if /etc/ceph/ceph.conf is changed by the | ||
1035 | 225 | ceph_client_changed function. | ||
1036 | 226 | """ | ||
1037 | 227 | def wrap(f): | ||
1038 | 228 | def wrapped_f(*args): | ||
1039 | 229 | checksums = {} | ||
1040 | 230 | for path in restart_map: | ||
1041 | 231 | checksums[path] = file_hash(path) | ||
1042 | 232 | f(*args) | ||
1043 | 233 | restarts = [] | ||
1044 | 234 | for path in restart_map: | ||
1045 | 235 | if checksums[path] != file_hash(path): | ||
1046 | 236 | restarts += restart_map[path] | ||
1047 | 237 | services_list = list(OrderedDict.fromkeys(restarts)) | ||
1048 | 238 | if not stopstart: | ||
1049 | 239 | for service_name in services_list: | ||
1050 | 240 | service('restart', service_name) | ||
1051 | 241 | else: | ||
1052 | 242 | for action in ['stop', 'start']: | ||
1053 | 243 | for service_name in services_list: | ||
1054 | 244 | service(action, service_name) | ||
1055 | 245 | return wrapped_f | ||
1056 | 246 | return wrap | ||
1057 | 247 | |||
1058 | 248 | |||
1059 | 249 | def lsb_release(): | ||
1060 | 250 | """Return /etc/lsb-release in a dict""" | ||
1061 | 251 | d = {} | ||
1062 | 252 | with open('/etc/lsb-release', 'r') as lsb: | ||
1063 | 253 | for l in lsb: | ||
1064 | 254 | k, v = l.split('=') | ||
1065 | 255 | d[k.strip()] = v.strip() | ||
1066 | 256 | return d | ||
1067 | 257 | |||
1068 | 258 | |||
1069 | 259 | def pwgen(length=None): | ||
1070 | 260 | """Generate a random password.""" | ||
1071 | 261 | if length is None: | ||
1072 | 262 | length = random.choice(range(35, 45)) | ||
1073 | 263 | alphanumeric_chars = [ | ||
1074 | 264 | l for l in (string.letters + string.digits) | ||
1075 | 265 | if l not in 'l0QD1vAEIOUaeiou'] | ||
1076 | 266 | random_chars = [ | ||
1077 | 267 | random.choice(alphanumeric_chars) for _ in range(length)] | ||
1078 | 268 | return(''.join(random_chars)) | ||
1079 | 269 | |||
1080 | 270 | |||
1081 | 271 | def list_nics(nic_type): | ||
1082 | 272 | '''Return a list of nics of given type(s)''' | ||
1083 | 273 | if isinstance(nic_type, basestring): | ||
1084 | 274 | int_types = [nic_type] | ||
1085 | 275 | else: | ||
1086 | 276 | int_types = nic_type | ||
1087 | 277 | interfaces = [] | ||
1088 | 278 | for int_type in int_types: | ||
1089 | 279 | cmd = ['ip', 'addr', 'show', 'label', int_type + '*'] | ||
1090 | 280 | ip_output = subprocess.check_output(cmd).split('\n') | ||
1091 | 281 | ip_output = (line for line in ip_output if line) | ||
1092 | 282 | for line in ip_output: | ||
1093 | 283 | if line.split()[1].startswith(int_type): | ||
1094 | 284 | interfaces.append(line.split()[1].replace(":", "")) | ||
1095 | 285 | return interfaces | ||
1096 | 286 | |||
1097 | 287 | |||
1098 | 288 | def set_nic_mtu(nic, mtu): | ||
1099 | 289 | '''Set MTU on a network interface''' | ||
1100 | 290 | cmd = ['ip', 'link', 'set', nic, 'mtu', mtu] | ||
1101 | 291 | subprocess.check_call(cmd) | ||
1102 | 292 | |||
1103 | 293 | |||
1104 | 294 | def get_nic_mtu(nic): | ||
1105 | 295 | cmd = ['ip', 'addr', 'show', nic] | ||
1106 | 296 | ip_output = subprocess.check_output(cmd).split('\n') | ||
1107 | 297 | mtu = "" | ||
1108 | 298 | for line in ip_output: | ||
1109 | 299 | words = line.split() | ||
1110 | 300 | if 'mtu' in words: | ||
1111 | 301 | mtu = words[words.index("mtu") + 1] | ||
1112 | 302 | return mtu | ||
1113 | 303 | |||
1114 | 304 | |||
1115 | 305 | def get_nic_hwaddr(nic): | ||
1116 | 306 | cmd = ['ip', '-o', '-0', 'addr', 'show', nic] | ||
1117 | 307 | ip_output = subprocess.check_output(cmd) | ||
1118 | 308 | hwaddr = "" | ||
1119 | 309 | words = ip_output.split() | ||
1120 | 310 | if 'link/ether' in words: | ||
1121 | 311 | hwaddr = words[words.index('link/ether') + 1] | ||
1122 | 312 | return hwaddr | ||
1123 | 313 | |||
1124 | 314 | |||
1125 | 315 | def cmp_pkgrevno(package, revno, pkgcache=None): | ||
1126 | 316 | '''Compare supplied revno with the revno of the installed package | ||
1127 | 317 | 1 => Installed revno is greater than supplied arg | ||
1128 | 318 | 0 => Installed revno is the same as supplied arg | ||
1129 | 319 | -1 => Installed revno is less than supplied arg | ||
1130 | 320 | ''' | ||
1131 | 321 | if not pkgcache: | ||
1132 | 322 | apt_pkg.init() | ||
1133 | 323 | pkgcache = apt_pkg.Cache() | ||
1134 | 324 | pkg = pkgcache[package] | ||
1135 | 325 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) | ||
1136 | 326 | 0 | ||
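The removed `host.restart_on_change` decorator works by hashing each watched config file before and after the wrapped call and restarting only the services mapped to files whose hash changed, de-duplicating while preserving order. A standalone sketch of that mechanism with the `service()` call replaced by a recorder callback (the `demo-service` name and the callback parameter are illustrative, not part of the removed API):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict


def file_hash(path):
    """Return the md5 hex digest of path, or None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as source:
        return hashlib.md5(source.read()).hexdigest()


def restart_on_change(restart_map, restarter):
    """Call restarter(name) for each service whose config file changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as the original does.
            for name in OrderedDict.fromkeys(restarts):
                restarter(name)
        return wrapped_f
    return wrap


restarted = []
conf = tempfile.NamedTemporaryFile(delete=False)
conf.close()

@restart_on_change({conf.name: ['demo-service']}, restarted.append)
def change_config():
    with open(conf.name, 'w') as f:
        f.write('new setting')

change_config()
print(restarted)  # -> ['demo-service']
os.unlink(conf.name)
```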
1137 | === removed directory 'hooks/charmhelpers/fetch' | |||
1138 | === removed file 'hooks/charmhelpers/fetch/__init__.py' | |||
1139 | --- hooks/charmhelpers/fetch/__init__.py 2014-07-09 12:37:36 +0000 | |||
1140 | +++ hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000 | |||
1141 | @@ -1,349 +0,0 @@ | |||
1142 | 1 | import importlib | ||
1143 | 2 | import time | ||
1144 | 3 | from yaml import safe_load | ||
1145 | 4 | from charmhelpers.core.host import ( | ||
1146 | 5 | lsb_release | ||
1147 | 6 | ) | ||
1148 | 7 | from urlparse import ( | ||
1149 | 8 | urlparse, | ||
1150 | 9 | urlunparse, | ||
1151 | 10 | ) | ||
1152 | 11 | import subprocess | ||
1153 | 12 | from charmhelpers.core.hookenv import ( | ||
1154 | 13 | config, | ||
1155 | 14 | log, | ||
1156 | 15 | ) | ||
1157 | 16 | import apt_pkg | ||
1158 | 17 | import os | ||
1159 | 18 | |||
1160 | 19 | |||
1161 | 20 | CLOUD_ARCHIVE = """# Ubuntu Cloud Archive | ||
1162 | 21 | deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main | ||
1163 | 22 | """ | ||
1164 | 23 | PROPOSED_POCKET = """# Proposed | ||
1165 | 24 | deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted | ||
1166 | 25 | """ | ||
1167 | 26 | CLOUD_ARCHIVE_POCKETS = { | ||
1168 | 27 | # Folsom | ||
1169 | 28 | 'folsom': 'precise-updates/folsom', | ||
1170 | 29 | 'precise-folsom': 'precise-updates/folsom', | ||
1171 | 30 | 'precise-folsom/updates': 'precise-updates/folsom', | ||
1172 | 31 | 'precise-updates/folsom': 'precise-updates/folsom', | ||
1173 | 32 | 'folsom/proposed': 'precise-proposed/folsom', | ||
1174 | 33 | 'precise-folsom/proposed': 'precise-proposed/folsom', | ||
1175 | 34 | 'precise-proposed/folsom': 'precise-proposed/folsom', | ||
1176 | 35 | # Grizzly | ||
1177 | 36 | 'grizzly': 'precise-updates/grizzly', | ||
1178 | 37 | 'precise-grizzly': 'precise-updates/grizzly', | ||
1179 | 38 | 'precise-grizzly/updates': 'precise-updates/grizzly', | ||
1180 | 39 | 'precise-updates/grizzly': 'precise-updates/grizzly', | ||
1181 | 40 | 'grizzly/proposed': 'precise-proposed/grizzly', | ||
1182 | 41 | 'precise-grizzly/proposed': 'precise-proposed/grizzly', | ||
1183 | 42 | 'precise-proposed/grizzly': 'precise-proposed/grizzly', | ||
1184 | 43 | # Havana | ||
1185 | 44 | 'havana': 'precise-updates/havana', | ||
1186 | 45 | 'precise-havana': 'precise-updates/havana', | ||
1187 | 46 | 'precise-havana/updates': 'precise-updates/havana', | ||
1188 | 47 | 'precise-updates/havana': 'precise-updates/havana', | ||
1189 | 48 | 'havana/proposed': 'precise-proposed/havana', | ||
1190 | 49 | 'precise-havana/proposed': 'precise-proposed/havana', | ||
1191 | 50 | 'precise-proposed/havana': 'precise-proposed/havana', | ||
1192 | 51 | # Icehouse | ||
1193 | 52 | 'icehouse': 'precise-updates/icehouse', | ||
1194 | 53 | 'precise-icehouse': 'precise-updates/icehouse', | ||
1195 | 54 | 'precise-icehouse/updates': 'precise-updates/icehouse', | ||
1196 | 55 | 'precise-updates/icehouse': 'precise-updates/icehouse', | ||
1197 | 56 | 'icehouse/proposed': 'precise-proposed/icehouse', | ||
1198 | 57 | 'precise-icehouse/proposed': 'precise-proposed/icehouse', | ||
1199 | 58 | 'precise-proposed/icehouse': 'precise-proposed/icehouse', | ||
1200 | 59 | # Juno | ||
1201 | 60 | 'juno': 'trusty-updates/juno', | ||
1202 | 61 | 'trusty-juno': 'trusty-updates/juno', | ||
1203 | 62 | 'trusty-juno/updates': 'trusty-updates/juno', | ||
1204 | 63 | 'trusty-updates/juno': 'trusty-updates/juno', | ||
1205 | 64 | 'juno/proposed': 'trusty-proposed/juno', | ||
1206 | 65 | 'juno/proposed': 'trusty-proposed/juno', | ||
1207 | 66 | 'trusty-juno/proposed': 'trusty-proposed/juno', | ||
1208 | 67 | 'trusty-proposed/juno': 'trusty-proposed/juno', | ||
1209 | 68 | } | ||
1210 | 69 | |||
1211 | 70 | # The order of this list is very important. Handlers should be listed in from | ||
1212 | 71 | # least- to most-specific URL matching. | ||
1213 | 72 | FETCH_HANDLERS = ( | ||
1214 | 73 | 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler', | ||
1215 | 74 | 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler', | ||
1216 | 75 | ) | ||
1217 | 76 | |||
1218 | 77 | APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT. | ||
1219 | 78 | APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks. | ||
1220 | 79 | APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times. | ||
1221 | 80 | |||
1222 | 81 | |||
1223 | 82 | class SourceConfigError(Exception): | ||
1224 | 83 | pass | ||
1225 | 84 | |||
1226 | 85 | |||
1227 | 86 | class UnhandledSource(Exception): | ||
1228 | 87 | pass | ||
1229 | 88 | |||
1230 | 89 | |||
1231 | 90 | class AptLockError(Exception): | ||
1232 | 91 | pass | ||
1233 | 92 | |||
1234 | 93 | |||
1235 | 94 | class BaseFetchHandler(object): | ||
1236 | 95 | |||
1237 | 96 | """Base class for FetchHandler implementations in fetch plugins""" | ||
1238 | 97 | |||
1239 | 98 | def can_handle(self, source): | ||
1240 | 99 | """Returns True if the source can be handled. Otherwise returns | ||
1241 | 100 | a string explaining why it cannot""" | ||
1242 | 101 | return "Wrong source type" | ||
1243 | 102 | |||
1244 | 103 | def install(self, source): | ||
1245 | 104 | """Try to download and unpack the source. Return the path to the | ||
1246 | 105 | unpacked files or raise UnhandledSource.""" | ||
1247 | 106 | raise UnhandledSource("Wrong source type {}".format(source)) | ||
1248 | 107 | |||
1249 | 108 | def parse_url(self, url): | ||
1250 | 109 | return urlparse(url) | ||
1251 | 110 | |||
1252 | 111 | def base_url(self, url): | ||
1253 | 112 | """Return url without querystring or fragment""" | ||
1254 | 113 | parts = list(self.parse_url(url)) | ||
1255 | 114 | parts[4:] = ['' for i in parts[4:]] | ||
1256 | 115 | return urlunparse(parts) | ||
1257 | 116 | |||
1258 | 117 | |||
1259 | 118 | def filter_installed_packages(packages): | ||
1260 | 119 | """Returns a list of packages that require installation""" | ||
1261 | 120 | apt_pkg.init() | ||
1262 | 121 | |||
1263 | 122 | # Tell apt to build an in-memory cache to prevent race conditions (if | ||
1264 | 123 | # another process is already building the cache). | ||
1265 | 124 | apt_pkg.config.set("Dir::Cache::pkgcache", "") | ||
1266 | 125 | |||
1267 | 126 | cache = apt_pkg.Cache() | ||
1268 | 127 | _pkgs = [] | ||
1269 | 128 | for package in packages: | ||
1270 | 129 | try: | ||
1271 | 130 | p = cache[package] | ||
1272 | 131 | p.current_ver or _pkgs.append(package) | ||
1273 | 132 | except KeyError: | ||
1274 | 133 | log('Package {} has no installation candidate.'.format(package), | ||
1275 | 134 | level='WARNING') | ||
1276 | 135 | _pkgs.append(package) | ||
1277 | 136 | return _pkgs | ||
1278 | 137 | |||
1279 | 138 | |||
1280 | 139 | def apt_install(packages, options=None, fatal=False): | ||
1281 | 140 | """Install one or more packages""" | ||
1282 | 141 | if options is None: | ||
1283 | 142 | options = ['--option=Dpkg::Options::=--force-confold'] | ||
1284 | 143 | |||
1285 | 144 | cmd = ['apt-get', '--assume-yes'] | ||
1286 | 145 | cmd.extend(options) | ||
1287 | 146 | cmd.append('install') | ||
1288 | 147 | if isinstance(packages, basestring): | ||
1289 | 148 | cmd.append(packages) | ||
1290 | 149 | else: | ||
1291 | 150 | cmd.extend(packages) | ||
1292 | 151 | log("Installing {} with options: {}".format(packages, | ||
1293 | 152 | options)) | ||
1294 | 153 | _run_apt_command(cmd, fatal) | ||
1295 | 154 | |||
1296 | 155 | |||
1297 | 156 | def apt_upgrade(options=None, fatal=False, dist=False): | ||
1298 | 157 | """Upgrade all packages""" | ||
1299 | 158 | if options is None: | ||
1300 | 159 | options = ['--option=Dpkg::Options::=--force-confold'] | ||
1301 | 160 | |||
1302 | 161 | cmd = ['apt-get', '--assume-yes'] | ||
1303 | 162 | cmd.extend(options) | ||
1304 | 163 | if dist: | ||
1305 | 164 | cmd.append('dist-upgrade') | ||
1306 | 165 | else: | ||
1307 | 166 | cmd.append('upgrade') | ||
1308 | 167 | log("Upgrading with options: {}".format(options)) | ||
1309 | 168 | _run_apt_command(cmd, fatal) | ||
1310 | 169 | |||
1311 | 170 | |||
1312 | 171 | def apt_update(fatal=False): | ||
1313 | 172 | """Update local apt cache""" | ||
1314 | 173 | cmd = ['apt-get', 'update'] | ||
1315 | 174 | _run_apt_command(cmd, fatal) | ||
1316 | 175 | |||
1317 | 176 | |||
1318 | 177 | def apt_purge(packages, fatal=False): | ||
1319 | 178 | """Purge one or more packages""" | ||
1320 | 179 | cmd = ['apt-get', '--assume-yes', 'purge'] | ||
1321 | 180 | if isinstance(packages, basestring): | ||
1322 | 181 | cmd.append(packages) | ||
1323 | 182 | else: | ||
1324 | 183 | cmd.extend(packages) | ||
1325 | 184 | log("Purging {}".format(packages)) | ||
1326 | 185 | _run_apt_command(cmd, fatal) | ||
1327 | 186 | |||
1328 | 187 | |||
1329 | 188 | def apt_hold(packages, fatal=False): | ||
1330 | 189 | """Hold one or more packages""" | ||
1331 | 190 | cmd = ['apt-mark', 'hold'] | ||
1332 | 191 | if isinstance(packages, basestring): | ||
1333 | 192 | cmd.append(packages) | ||
1334 | 193 | else: | ||
1335 | 194 | cmd.extend(packages) | ||
1336 | 195 | log("Holding {}".format(packages)) | ||
1337 | 196 | |||
1338 | 197 | if fatal: | ||
1339 | 198 | subprocess.check_call(cmd) | ||
1340 | 199 | else: | ||
1341 | 200 | subprocess.call(cmd) | ||
1342 | 201 | |||
1343 | 202 | |||
1344 | 203 | def add_source(source, key=None): | ||
1345 | 204 | if source is None: | ||
1346 | 205 | log('Source is not present. Skipping') | ||
1347 | 206 | return | ||
1348 | 207 | |||
1349 | 208 | if (source.startswith('ppa:') or | ||
1350 | 209 | source.startswith('http') or | ||
1351 | 210 | source.startswith('deb ') or | ||
1352 | 211 | source.startswith('cloud-archive:')): | ||
1353 | 212 | subprocess.check_call(['add-apt-repository', '--yes', source]) | ||
1354 | 213 | elif source.startswith('cloud:'): | ||
1355 | 214 | apt_install(filter_installed_packages(['ubuntu-cloud-keyring']), | ||
1356 | 215 | fatal=True) | ||
1357 | 216 | pocket = source.split(':')[-1] | ||
1358 | 217 | if pocket not in CLOUD_ARCHIVE_POCKETS: | ||
1359 | 218 | raise SourceConfigError( | ||
1360 | 219 | 'Unsupported cloud: source option %s' % | ||
1361 | 220 | pocket) | ||
1362 | 221 | actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket] | ||
1363 | 222 | with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt: | ||
1364 | 223 | apt.write(CLOUD_ARCHIVE.format(actual_pocket)) | ||
1365 | 224 | elif source == 'proposed': | ||
1366 | 225 | release = lsb_release()['DISTRIB_CODENAME'] | ||
1367 | 226 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: | ||
1368 | 227 | apt.write(PROPOSED_POCKET.format(release)) | ||
1369 | 228 | if key: | ||
1370 | 229 | subprocess.check_call(['apt-key', 'adv', '--keyserver', | ||
1371 | 230 | 'hkp://keyserver.ubuntu.com:80', '--recv', | ||
1372 | 231 | key]) | ||
1373 | 232 | |||
1374 | 233 | |||
1375 | 234 | def configure_sources(update=False, | ||
1376 | 235 | sources_var='install_sources', | ||
1377 | 236 | keys_var='install_keys'): | ||
1378 | 237 | """ | ||
1379 | 238 | Configure multiple sources from charm configuration | ||
1380 | 239 | |||
1381 | 240 | Example config: | ||
1382 | 241 | install_sources: | ||
1383 | 242 | - "ppa:foo" | ||
1384 | 243 | - "http://example.com/repo precise main" | ||
1385 | 244 | install_keys: | ||
1386 | 245 | - null | ||
1387 | 246 | - "a1b2c3d4" | ||
1388 | 247 | |||
1389 | 248 | Note that 'null' (a.k.a. None) should not be quoted. | ||
1390 | 249 | """ | ||
1391 | 250 | sources = safe_load(config(sources_var)) | ||
1392 | 251 | keys = config(keys_var) | ||
1393 | 252 | if keys is not None: | ||
1394 | 253 | keys = safe_load(keys) | ||
1395 | 254 | if isinstance(sources, basestring) and ( | ||
1396 | 255 | keys is None or isinstance(keys, basestring)): | ||
1397 | 256 | add_source(sources, keys) | ||
1398 | 257 | else: | ||
1399 | 258 | if not len(sources) == len(keys): | ||
1400 | 259 | msg = 'Install sources and keys lists are different lengths' | ||
1401 | 260 | raise SourceConfigError(msg) | ||
1402 | 261 | for src_num in range(len(sources)): | ||
1403 | 262 | add_source(sources[src_num], keys[src_num]) | ||
1404 | 263 | if update: | ||
1405 | 264 | apt_update(fatal=True) | ||
1406 | 265 | |||
1407 | 266 | |||
1408 | 267 | def install_remote(source): | ||
1409 | 268 | """ | ||
1410 | 269 | Install a file tree from a remote source | ||
1411 | 270 | |||
1412 | 271 | The specified source should be a url of the form: | ||
1413 | 272 | scheme://[host]/path[#[option=value][&...]] | ||
1414 | 273 | |||
1415 | 274 | Schemes supported are based on this modules submodules | ||
1416 | 275 | Options supported are submodule-specific""" | ||
1417 | 276 | # We ONLY check for True here because can_handle may return a string | ||
1418 | 277 | # explaining why it can't handle a given source. | ||
1419 | 278 | handlers = [h for h in plugins() if h.can_handle(source) is True] | ||
1420 | 279 | installed_to = None | ||
1421 | 280 | for handler in handlers: | ||
1422 | 281 | try: | ||
1423 | 282 | installed_to = handler.install(source) | ||
1424 | 283 | except UnhandledSource: | ||
1425 | 284 | pass | ||
1426 | 285 | if not installed_to: | ||
1427 | 286 | raise UnhandledSource("No handler found for source {}".format(source)) | ||
1428 | 287 | return installed_to | ||
1429 | 288 | |||
1430 | 289 | |||
1431 | 290 | def install_from_config(config_var_name): | ||
1432 | 291 | charm_config = config() | ||
1433 | 292 | source = charm_config[config_var_name] | ||
1434 | 293 | return install_remote(source) | ||
1435 | 294 | |||
1436 | 295 | |||
1437 | 296 | def plugins(fetch_handlers=None): | ||
1438 | 297 | if not fetch_handlers: | ||
1439 | 298 | fetch_handlers = FETCH_HANDLERS | ||
1440 | 299 | plugin_list = [] | ||
1441 | 300 | for handler_name in fetch_handlers: | ||
1442 | 301 | package, classname = handler_name.rsplit('.', 1) | ||
1443 | 302 | try: | ||
1444 | 303 | handler_class = getattr( | ||
1445 | 304 | importlib.import_module(package), | ||
1446 | 305 | classname) | ||
1447 | 306 | plugin_list.append(handler_class()) | ||
1448 | 307 | except (ImportError, AttributeError): | ||
1449 | 308 | # Skip missing plugins so that they can be omitted from | ||
1450 | 309 | # installation if desired | ||
1451 | 310 | log("FetchHandler {} not found, skipping plugin".format( | ||
1452 | 311 | handler_name)) | ||
1453 | 312 | return plugin_list | ||
1454 | 313 | |||
1455 | 314 | |||
1456 | 315 | def _run_apt_command(cmd, fatal=False): | ||
1457 | 316 | """ | ||
1458 | 317 | Run an APT command, checking output and retrying if the fatal flag is set | ||
1459 | 318 | to True. | ||
1460 | 319 | |||
1461 | 320 | :param: cmd: str: The apt command to run. | ||
1462 | 321 | :param: fatal: bool: Whether the command's output should be checked and | ||
1463 | 322 | retried. | ||
1464 | 323 | """ | ||
1465 | 324 | env = os.environ.copy() | ||
1466 | 325 | |||
1467 | 326 | if 'DEBIAN_FRONTEND' not in env: | ||
1468 | 327 | env['DEBIAN_FRONTEND'] = 'noninteractive' | ||
1469 | 328 | |||
1470 | 329 | if fatal: | ||
1471 | 330 | retry_count = 0 | ||
1472 | 331 | result = None | ||
1473 | 332 | |||
1474 | 333 | # If the command is considered "fatal", we need to retry if the apt | ||
1475 | 334 | # lock was not acquired. | ||
1476 | 335 | |||
1477 | 336 | while result is None or result == APT_NO_LOCK: | ||
1478 | 337 | try: | ||
1479 | 338 | result = subprocess.check_call(cmd, env=env) | ||
1480 | 339 | except subprocess.CalledProcessError, e: | ||
1481 | 340 | retry_count = retry_count + 1 | ||
1482 | 341 | if retry_count > APT_NO_LOCK_RETRY_COUNT: | ||
1483 | 342 | raise | ||
1484 | 343 | result = e.returncode | ||
1485 | 344 | log("Couldn't acquire DPKG lock. Will retry in {} seconds." | ||
1486 | 345 | "".format(APT_NO_LOCK_RETRY_DELAY)) | ||
1487 | 346 | time.sleep(APT_NO_LOCK_RETRY_DELAY) | ||
1488 | 347 | |||
1489 | 348 | else: | ||
1490 | 349 | subprocess.call(cmd, env=env) | ||
1491 | 350 | 0 | ||
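The handler discovery removed above resolves each dotted `package.ClassName` string to a class at runtime via `importlib`, silently skipping entries that fail to import. That pattern can be sketched standalone (the handler paths below are illustrative stand-ins, not real fetch handlers):

```python
import importlib

# Stand-in handler paths: one resolvable class, one deliberately missing.
FETCH_HANDLERS = (
    "json.JSONDecoder",   # resolves and instantiates
    "no.such.Module",     # missing plugins are skipped, not fatal
)

def plugins(fetch_handlers=FETCH_HANDLERS):
    plugin_list = []
    for handler_name in fetch_handlers:
        package, classname = handler_name.rsplit('.', 1)
        try:
            handler_class = getattr(importlib.import_module(package), classname)
            plugin_list.append(handler_class())
        except (ImportError, AttributeError):
            pass  # omit missing plugins, as the removed charmhelpers code did
    return plugin_list

print([type(p).__name__ for p in plugins()])
```

Catching `AttributeError` as well as `ImportError` matters: the module may import fine while the named class is absent.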
1492 | === removed file 'hooks/charmhelpers/fetch/archiveurl.py' | |||
1493 | --- hooks/charmhelpers/fetch/archiveurl.py 2014-07-09 12:37:36 +0000 | |||
1494 | +++ hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000 | |||
1495 | @@ -1,63 +0,0 @@ | |||
1496 | 1 | import os | ||
1497 | 2 | import urllib2 | ||
1498 | 3 | import urlparse | ||
1499 | 4 | |||
1500 | 5 | from charmhelpers.fetch import ( | ||
1501 | 6 | BaseFetchHandler, | ||
1502 | 7 | UnhandledSource | ||
1503 | 8 | ) | ||
1504 | 9 | from charmhelpers.payload.archive import ( | ||
1505 | 10 | get_archive_handler, | ||
1506 | 11 | extract, | ||
1507 | 12 | ) | ||
1508 | 13 | from charmhelpers.core.host import mkdir | ||
1509 | 14 | |||
1510 | 15 | |||
1511 | 16 | class ArchiveUrlFetchHandler(BaseFetchHandler): | ||
1512 | 17 | """Handler for archives via generic URLs""" | ||
1513 | 18 | def can_handle(self, source): | ||
1514 | 19 | url_parts = self.parse_url(source) | ||
1515 | 20 | if url_parts.scheme not in ('http', 'https', 'ftp', 'file'): | ||
1516 | 21 | return "Wrong source type" | ||
1517 | 22 | if get_archive_handler(self.base_url(source)): | ||
1518 | 23 | return True | ||
1519 | 24 | return False | ||
1520 | 25 | |||
1521 | 26 | def download(self, source, dest): | ||
1522 | 27 | # propagate all exceptions | ||
1523 | 28 | # URLError, OSError, etc | ||
1524 | 29 | proto, netloc, path, params, query, fragment = urlparse.urlparse(source) | ||
1525 | 30 | if proto in ('http', 'https'): | ||
1526 | 31 | auth, barehost = urllib2.splituser(netloc) | ||
1527 | 32 | if auth is not None: | ||
1528 | 33 | source = urlparse.urlunparse((proto, barehost, path, params, query, fragment)) | ||
1529 | 34 | username, password = urllib2.splitpasswd(auth) | ||
1530 | 35 | passman = urllib2.HTTPPasswordMgrWithDefaultRealm() | ||
1531 | 36 | # Realm is set to None in add_password to force the username and password | ||
1532 | 37 | # to be used whatever the realm | ||
1533 | 38 | passman.add_password(None, source, username, password) | ||
1534 | 39 | authhandler = urllib2.HTTPBasicAuthHandler(passman) | ||
1535 | 40 | opener = urllib2.build_opener(authhandler) | ||
1536 | 41 | urllib2.install_opener(opener) | ||
1537 | 42 | response = urllib2.urlopen(source) | ||
1538 | 43 | try: | ||
1539 | 44 | with open(dest, 'w') as dest_file: | ||
1540 | 45 | dest_file.write(response.read()) | ||
1541 | 46 | except Exception as e: | ||
1542 | 47 | if os.path.isfile(dest): | ||
1543 | 48 | os.unlink(dest) | ||
1544 | 49 | raise e | ||
1545 | 50 | |||
1546 | 51 | def install(self, source): | ||
1547 | 52 | url_parts = self.parse_url(source) | ||
1548 | 53 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched') | ||
1549 | 54 | if not os.path.exists(dest_dir): | ||
1550 | 55 | mkdir(dest_dir, perms=0755) | ||
1551 | 56 | dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path)) | ||
1552 | 57 | try: | ||
1553 | 58 | self.download(source, dld_file) | ||
1554 | 59 | except urllib2.URLError as e: | ||
1555 | 60 | raise UnhandledSource(e.reason) | ||
1556 | 61 | except OSError as e: | ||
1557 | 62 | raise UnhandledSource(e.strerror) | ||
1558 | 63 | return extract(dld_file) | ||
1559 | 64 | 0 | ||
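The removed downloader leans on Python 2's `urllib2.splituser`/`splitpasswd` to pull credentials out of the URL before installing a basic-auth opener. Under Python 3 the same credential extraction can be sketched with `urllib.parse` (the URL below is illustrative):

```python
from urllib.parse import urlparse, urlunparse

def strip_auth(source):
    """Split user:password out of a URL, returning (bare_url, user, password)."""
    parts = urlparse(source)
    if parts.username is None:
        return source, None, None
    # Rebuild the netloc without the credential prefix.
    barehost = parts.hostname + (':%d' % parts.port if parts.port else '')
    bare = urlunparse((parts.scheme, barehost, parts.path,
                       parts.params, parts.query, parts.fragment))
    return bare, parts.username, parts.password

print(strip_auth("https://user:secret@example.com/archive.tgz"))
```

The bare URL (credentials removed) is what would then be registered with a password manager, mirroring the `add_password(None, source, ...)` call in the removed code.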
1560 | === removed file 'hooks/charmhelpers/fetch/bzrurl.py' | |||
1561 | --- hooks/charmhelpers/fetch/bzrurl.py 2014-07-09 12:37:36 +0000 | |||
1562 | +++ hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000 | |||
1563 | @@ -1,50 +0,0 @@ | |||
1564 | 1 | import os | ||
1565 | 2 | from charmhelpers.fetch import ( | ||
1566 | 3 | BaseFetchHandler, | ||
1567 | 4 | UnhandledSource | ||
1568 | 5 | ) | ||
1569 | 6 | from charmhelpers.core.host import mkdir | ||
1570 | 7 | |||
1571 | 8 | try: | ||
1572 | 9 | from bzrlib.branch import Branch | ||
1573 | 10 | except ImportError: | ||
1574 | 11 | from charmhelpers.fetch import apt_install | ||
1575 | 12 | apt_install("python-bzrlib") | ||
1576 | 13 | from bzrlib.branch import Branch | ||
1577 | 14 | |||
1578 | 15 | |||
1579 | 16 | class BzrUrlFetchHandler(BaseFetchHandler): | ||
1580 | 17 | """Handler for bazaar branches via generic and lp URLs""" | ||
1581 | 18 | def can_handle(self, source): | ||
1582 | 19 | url_parts = self.parse_url(source) | ||
1583 | 20 | if url_parts.scheme not in ('bzr+ssh', 'lp'): | ||
1584 | 21 | return False | ||
1585 | 22 | else: | ||
1586 | 23 | return True | ||
1587 | 24 | |||
1588 | 25 | def branch(self, source, dest): | ||
1589 | 26 | url_parts = self.parse_url(source) | ||
1590 | 27 | # If we use lp:branchname scheme we need to load plugins | ||
1591 | 28 | if not self.can_handle(source): | ||
1592 | 29 | raise UnhandledSource("Cannot handle {}".format(source)) | ||
1593 | 30 | if url_parts.scheme == "lp": | ||
1594 | 31 | from bzrlib.plugin import load_plugins | ||
1595 | 32 | load_plugins() | ||
1596 | 33 | try: | ||
1597 | 34 | remote_branch = Branch.open(source) | ||
1598 | 35 | remote_branch.bzrdir.sprout(dest).open_branch() | ||
1599 | 36 | except Exception as e: | ||
1600 | 37 | raise e | ||
1601 | 38 | |||
1602 | 39 | def install(self, source): | ||
1603 | 40 | url_parts = self.parse_url(source) | ||
1604 | 41 | branch_name = url_parts.path.strip("/").split("/")[-1] | ||
1605 | 42 | dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", | ||
1606 | 43 | branch_name) | ||
1607 | 44 | if not os.path.exists(dest_dir): | ||
1608 | 45 | mkdir(dest_dir, perms=0755) | ||
1609 | 46 | try: | ||
1610 | 47 | self.branch(source, dest_dir) | ||
1611 | 48 | except OSError as e: | ||
1612 | 49 | raise UnhandledSource(e.strerror) | ||
1613 | 50 | return dest_dir | ||
1614 | 51 | 0 | ||
1615 | === removed directory 'hooks/charmhelpers/lib' | |||
1616 | === removed file 'hooks/charmhelpers/lib/__init__.py' | |||
1617 | === removed file 'hooks/charmhelpers/lib/ceph_utils.py' | |||
1618 | --- hooks/charmhelpers/lib/ceph_utils.py 2014-07-09 12:37:36 +0000 | |||
1619 | +++ hooks/charmhelpers/lib/ceph_utils.py 1970-01-01 00:00:00 +0000 | |||
1620 | @@ -1,315 +0,0 @@ | |||
1621 | 1 | # | ||
1622 | 2 | # Copyright 2012 Canonical Ltd. | ||
1623 | 3 | # | ||
1624 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1625 | 5 | # | ||
1626 | 6 | # Authors: | ||
1627 | 7 | # James Page <james.page@ubuntu.com> | ||
1628 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
1629 | 9 | # | ||
1630 | 10 | |||
1631 | 11 | import commands | ||
1632 | 12 | import json | ||
1633 | 13 | import subprocess | ||
1634 | 14 | import os | ||
1635 | 15 | import shutil | ||
1636 | 16 | import time | ||
1637 | 17 | import lib.utils as utils | ||
1638 | 18 | |||
1639 | 19 | KEYRING = '/etc/ceph/ceph.client.%s.keyring' | ||
1640 | 20 | KEYFILE = '/etc/ceph/ceph.client.%s.key' | ||
1641 | 21 | |||
1642 | 22 | CEPH_CONF = """[global] | ||
1643 | 23 | auth supported = %(auth)s | ||
1644 | 24 | keyring = %(keyring)s | ||
1645 | 25 | mon host = %(mon_hosts)s | ||
1646 | 26 | log to syslog = %(use_syslog)s | ||
1647 | 27 | err to syslog = %(use_syslog)s | ||
1648 | 28 | clog to syslog = %(use_syslog)s | ||
1649 | 29 | """ | ||
1650 | 30 | |||
1651 | 31 | |||
1652 | 32 | def execute(cmd): | ||
1653 | 33 | subprocess.check_call(cmd) | ||
1654 | 34 | |||
1655 | 35 | |||
1656 | 36 | def execute_shell(cmd): | ||
1657 | 37 | subprocess.check_call(cmd, shell=True) | ||
1658 | 38 | |||
1659 | 39 | |||
1660 | 40 | def install(): | ||
1661 | 41 | ceph_dir = "/etc/ceph" | ||
1662 | 42 | if not os.path.isdir(ceph_dir): | ||
1663 | 43 | os.mkdir(ceph_dir) | ||
1664 | 44 | utils.install('ceph-common') | ||
1665 | 45 | |||
1666 | 46 | |||
1667 | 47 | def rbd_exists(service, pool, rbd_img): | ||
1668 | 48 | (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' % | ||
1669 | 49 | (service, pool)) | ||
1670 | 50 | return rbd_img in out | ||
1671 | 51 | |||
1672 | 52 | |||
1673 | 53 | def create_rbd_image(service, pool, image, sizemb): | ||
1674 | 54 | cmd = [ | ||
1675 | 55 | 'rbd', | ||
1676 | 56 | 'create', | ||
1677 | 57 | image, | ||
1678 | 58 | '--size', | ||
1679 | 59 | str(sizemb), | ||
1680 | 60 | '--id', | ||
1681 | 61 | service, | ||
1682 | 62 | '--pool', | ||
1683 | 63 | pool] | ||
1684 | 64 | execute(cmd) | ||
1685 | 65 | |||
1686 | 66 | |||
1687 | 67 | def pool_exists(service, name): | ||
1688 | 68 | (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service) | ||
1689 | 69 | return name in out | ||
1690 | 70 | |||
1691 | 71 | |||
1692 | 72 | def ceph_version(): | ||
1693 | 73 | ''' Retrieve the local version of ceph ''' | ||
1694 | 74 | if os.path.exists('/usr/bin/ceph'): | ||
1695 | 75 | cmd = ['ceph', '-v'] | ||
1696 | 76 | output = subprocess.check_output(cmd) | ||
1697 | 77 | output = output.split() | ||
1698 | 78 | if len(output) > 3: | ||
1699 | 79 | return output[2] | ||
1700 | 80 | else: | ||
1701 | 81 | return None | ||
1702 | 82 | else: | ||
1703 | 83 | return None | ||
1704 | 84 | |||
1705 | 85 | |||
1706 | 86 | def get_osds(service): | ||
1707 | 87 | ''' | ||
1708 | 88 | Return a list of all Ceph Object Storage Daemons | ||
1709 | 89 | currently in the cluster | ||
1710 | 90 | ''' | ||
1711 | 91 | version = ceph_version() | ||
1712 | 92 | if version and version >= '0.56': | ||
1713 | 93 | cmd = ['ceph', '--id', service, 'osd', 'ls', '--format=json'] | ||
1714 | 94 | return json.loads(subprocess.check_output(cmd)) | ||
1715 | 95 | else: | ||
1716 | 96 | return None | ||
1717 | 97 | |||
1718 | 98 | |||
1719 | 99 | def create_pool(service, name, replicas=2): | ||
1720 | 100 | ''' Create a new RADOS pool ''' | ||
1721 | 101 | if pool_exists(service, name): | ||
1722 | 102 | utils.juju_log('WARNING', | ||
1723 | 103 | "Ceph pool {} already exists, " | ||
1724 | 104 | "skipping creation".format(name)) | ||
1725 | 105 | return | ||
1726 | 106 | |||
1727 | 107 | osds = get_osds(service) | ||
1728 | 108 | if osds: | ||
1729 | 109 | pgnum = (len(osds) * 100 / replicas) | ||
1730 | 110 | else: | ||
1731 | 111 | # NOTE(james-page): Default to 200 for older ceph versions | ||
1732 | 112 | # which don't support OSD query from cli | ||
1733 | 113 | pgnum = 200 | ||
1734 | 114 | |||
1735 | 115 | cmd = [ | ||
1736 | 116 | 'ceph', '--id', service, | ||
1737 | 117 | 'osd', 'pool', 'create', | ||
1738 | 118 | name, str(pgnum) | ||
1739 | 119 | ] | ||
1740 | 120 | subprocess.check_call(cmd) | ||
1741 | 121 | cmd = [ | ||
1742 | 122 | 'ceph', '--id', service, | ||
1743 | 123 | 'osd', 'pool', 'set', name, | ||
1744 | 124 | 'size', str(replicas) | ||
1745 | 125 | ] | ||
1746 | 126 | subprocess.check_call(cmd) | ||
1747 | 127 | |||
1748 | 128 | |||
1749 | 129 | def keyfile_path(service): | ||
1750 | 130 | return KEYFILE % service | ||
1751 | 131 | |||
1752 | 132 | |||
1753 | 133 | def keyring_path(service): | ||
1754 | 134 | return KEYRING % service | ||
1755 | 135 | |||
1756 | 136 | |||
1757 | 137 | def create_keyring(service, key): | ||
1758 | 138 | keyring = keyring_path(service) | ||
1759 | 139 | if os.path.exists(keyring): | ||
1760 | 140 | utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring) | ||
1761 | 141 | cmd = [ | ||
1762 | 142 | 'ceph-authtool', | ||
1763 | 143 | keyring, | ||
1764 | 144 | '--create-keyring', | ||
1765 | 145 | '--name=client.%s' % service, | ||
1766 | 146 | '--add-key=%s' % key] | ||
1767 | 147 | execute(cmd) | ||
1768 | 148 | utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring) | ||
1769 | 149 | |||
1770 | 150 | |||
1771 | 151 | def create_key_file(service, key): | ||
1772 | 152 | # create a file containing the key | ||
1773 | 153 | keyfile = keyfile_path(service) | ||
1774 | 154 | if os.path.exists(keyfile): | ||
1775 | 155 | utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile) | ||
1776 | 156 | fd = open(keyfile, 'w') | ||
1777 | 157 | fd.write(key) | ||
1778 | 158 | fd.close() | ||
1779 | 159 | utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile) | ||
1780 | 160 | |||
1781 | 161 | |||
1782 | 162 | def get_ceph_nodes(): | ||
1783 | 163 | hosts = [] | ||
1784 | 164 | for r_id in utils.relation_ids('ceph'): | ||
1785 | 165 | for unit in utils.relation_list(r_id): | ||
1786 | 166 | hosts.append(utils.relation_get('private-address', | ||
1787 | 167 | unit=unit, rid=r_id)) | ||
1788 | 168 | return hosts | ||
1789 | 169 | |||
1790 | 170 | |||
1791 | 171 | def configure(service, key, auth, use_syslog): | ||
1792 | 172 | create_keyring(service, key) | ||
1793 | 173 | create_key_file(service, key) | ||
1794 | 174 | hosts = get_ceph_nodes() | ||
1795 | 175 | mon_hosts = ",".join(map(str, hosts)) | ||
1796 | 176 | keyring = keyring_path(service) | ||
1797 | 177 | with open('/etc/ceph/ceph.conf', 'w') as ceph_conf: | ||
1798 | 178 | ceph_conf.write(CEPH_CONF % locals()) | ||
1799 | 179 | modprobe_kernel_module('rbd') | ||
1800 | 180 | |||
1801 | 181 | |||
1802 | 182 | def image_mapped(image_name): | ||
1803 | 183 | (rc, out) = commands.getstatusoutput('rbd showmapped') | ||
1804 | 184 | return image_name in out | ||
1805 | 185 | |||
1806 | 186 | |||
1807 | 187 | def map_block_storage(service, pool, image): | ||
1808 | 188 | cmd = [ | ||
1809 | 189 | 'rbd', | ||
1810 | 190 | 'map', | ||
1811 | 191 | '%s/%s' % (pool, image), | ||
1812 | 192 | '--user', | ||
1813 | 193 | service, | ||
1814 | 194 | '--secret', | ||
1815 | 195 | keyfile_path(service)] | ||
1816 | 196 | execute(cmd) | ||
1817 | 197 | |||
1818 | 198 | |||
1819 | 199 | def filesystem_mounted(fs): | ||
1820 | 200 | return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0 | ||
1821 | 201 | |||
1822 | 202 | |||
1823 | 203 | def make_filesystem(blk_device, fstype='ext4'): | ||
1824 | 204 | count = 0 | ||
1825 | 205 | e_noent = os.errno.ENOENT | ||
1826 | 206 | while not os.path.exists(blk_device): | ||
1827 | 207 | if count >= 10: | ||
1828 | 208 | utils.juju_log('ERROR', 'ceph: gave up waiting on block ' | ||
1829 | 209 | 'device %s' % blk_device) | ||
1830 | 210 | raise IOError(e_noent, os.strerror(e_noent), blk_device) | ||
1831 | 211 | utils.juju_log('INFO', 'ceph: waiting for block device %s ' | ||
1832 | 212 | 'to appear' % blk_device) | ||
1833 | 213 | count += 1 | ||
1834 | 214 | time.sleep(1) | ||
1835 | 215 | else: | ||
1836 | 216 | utils.juju_log('INFO', 'ceph: Formatting block device %s ' | ||
1837 | 217 | 'as filesystem %s.' % (blk_device, fstype)) | ||
1838 | 218 | execute(['mkfs', '-t', fstype, blk_device]) | ||
1839 | 219 | |||
1840 | 220 | |||
1841 | 221 | def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'): | ||
1842 | 222 | # mount block device into /mnt | ||
1843 | 223 | cmd = ['mount', '-t', fstype, blk_device, '/mnt'] | ||
1844 | 224 | execute(cmd) | ||
1845 | 225 | |||
1846 | 226 | # copy data to /mnt | ||
1847 | 227 | try: | ||
1848 | 228 | copy_files(data_src_dst, '/mnt') | ||
1849 | 229 | except: | ||
1850 | 230 | pass | ||
1851 | 231 | |||
1852 | 232 | # umount block device | ||
1853 | 233 | cmd = ['umount', '/mnt'] | ||
1854 | 234 | execute(cmd) | ||
1855 | 235 | |||
1856 | 236 | _dir = os.stat(data_src_dst) | ||
1857 | 237 | uid = _dir.st_uid | ||
1858 | 238 | gid = _dir.st_gid | ||
1859 | 239 | |||
1860 | 240 | # re-mount where the data should originally be | ||
1861 | 241 | cmd = ['mount', '-t', fstype, blk_device, data_src_dst] | ||
1862 | 242 | execute(cmd) | ||
1863 | 243 | |||
1864 | 244 | # ensure original ownership of new mount. | ||
1865 | 245 | cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst] | ||
1866 | 246 | execute(cmd) | ||
1867 | 247 | |||
1868 | 248 | |||
1869 | 249 | # TODO: re-use | ||
1870 | 250 | def modprobe_kernel_module(module): | ||
1871 | 251 | utils.juju_log('INFO', 'Loading kernel module') | ||
1872 | 252 | cmd = ['modprobe', module] | ||
1873 | 253 | execute(cmd) | ||
1874 | 254 | cmd = 'echo %s >> /etc/modules' % module | ||
1875 | 255 | execute_shell(cmd) | ||
1876 | 256 | |||
1877 | 257 | |||
1878 | 258 | def copy_files(src, dst, symlinks=False, ignore=None): | ||
1879 | 259 | for item in os.listdir(src): | ||
1880 | 260 | s = os.path.join(src, item) | ||
1881 | 261 | d = os.path.join(dst, item) | ||
1882 | 262 | if os.path.isdir(s): | ||
1883 | 263 | shutil.copytree(s, d, symlinks, ignore) | ||
1884 | 264 | else: | ||
1885 | 265 | shutil.copy2(s, d) | ||
1886 | 266 | |||
1887 | 267 | |||
1888 | 268 | def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point, | ||
1889 | 269 | blk_device, fstype, system_services=[], | ||
1890 | 270 | rbd_pool_replicas=2): | ||
1891 | 271 | """ | ||
1892 | 272 | To be called from the current cluster leader. | ||
1893 | 273 | Ensures given pool and RBD image exists, is mapped to a block device, | ||
1894 | 274 | and the device is formatted and mounted at the given mount_point. | ||
1895 | 275 | |||
1896 | 276 | If formatting a device for the first time, data existing at mount_point | ||
1897 | 277 | will be migrated to the RBD device before being remounted. | ||
1898 | 278 | |||
1899 | 279 | All services listed in system_services will be stopped prior to data | ||
1900 | 280 | migration and restarted when complete. | ||
1901 | 281 | """ | ||
1902 | 282 | # Ensure pool, RBD image, RBD mappings are in place. | ||
1903 | 283 | if not pool_exists(service, pool): | ||
1904 | 284 | utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool) | ||
1905 | 285 | create_pool(service, pool, replicas=rbd_pool_replicas) | ||
1906 | 286 | |||
1907 | 287 | if not rbd_exists(service, pool, rbd_img): | ||
1908 | 288 | utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img) | ||
1909 | 289 | create_rbd_image(service, pool, rbd_img, sizemb) | ||
1910 | 290 | |||
1911 | 291 | if not image_mapped(rbd_img): | ||
1912 | 292 | utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.') | ||
1913 | 293 | map_block_storage(service, pool, rbd_img) | ||
1914 | 294 | |||
1915 | 295 | # make file system | ||
1916 | 296 | # TODO: What happens if for whatever reason this is run again and | ||
1917 | 297 | # the data is already in the rbd device and/or is mounted?? | ||
1918 | 298 | # When it is mounted already, it will fail to make the fs | ||
1919 | 299 | # XXX: This is really sketchy! Need to at least add an fstab entry | ||
1920 | 300 | # otherwise this hook will blow away existing data if its executed | ||
1921 | 301 | # after a reboot. | ||
1922 | 302 | if not filesystem_mounted(mount_point): | ||
1923 | 303 | make_filesystem(blk_device, fstype) | ||
1924 | 304 | |||
1925 | 305 | for svc in system_services: | ||
1926 | 306 | if utils.running(svc): | ||
1927 | 307 | utils.juju_log('INFO', | ||
1928 | 308 | 'Stopping services %s prior to migrating ' | ||
1929 | 309 | 'data' % svc) | ||
1930 | 310 | utils.stop(svc) | ||
1931 | 311 | |||
1932 | 312 | place_data_on_ceph(service, blk_device, mount_point, fstype) | ||
1933 | 313 | |||
1934 | 314 | for svc in system_services: | ||
1935 | 315 | utils.start(svc) | ||
1936 | 316 | 0 | ||
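The removed `create_pool` sizes placement groups at roughly 100 per OSD divided by the replica count, falling back to a fixed 200 when the OSD list is unavailable. A minimal standalone sketch of that sizing rule (integer division made explicit for Python 3):

```python
def placement_groups(osd_count, replicas=2):
    """Placement-group count: ~100 PGs per OSD, divided by replica count."""
    if osd_count:
        return osd_count * 100 // replicas
    # Older ceph versions can't report OSDs from the CLI; the removed
    # code defaulted to 200 in that case.
    return 200

print(placement_groups(6))
print(placement_groups(0))
```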
1937 | === removed file 'hooks/charmhelpers/lib/cluster_utils.py' | |||
1938 | --- hooks/charmhelpers/lib/cluster_utils.py 2014-07-09 12:37:36 +0000 | |||
1939 | +++ hooks/charmhelpers/lib/cluster_utils.py 1970-01-01 00:00:00 +0000 | |||
1940 | @@ -1,128 +0,0 @@ | |||
1941 | 1 | # | ||
1942 | 2 | # Copyright 2012 Canonical Ltd. | ||
1943 | 3 | # | ||
1944 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
1945 | 5 | # | ||
1946 | 6 | # Authors: | ||
1947 | 7 | # James Page <james.page@ubuntu.com> | ||
1948 | 8 | # Adam Gandelman <adamg@ubuntu.com> | ||
1949 | 9 | # | ||
1950 | 10 | |||
1951 | 11 | from utils import ( | ||
1952 | 12 | juju_log, | ||
1953 | 13 | relation_ids, | ||
1954 | 14 | relation_list, | ||
1955 | 15 | relation_get, | ||
1956 | 16 | get_unit_hostname, | ||
1957 | 17 | config_get) | ||
1958 | 18 | import subprocess | ||
1959 | 19 | import os | ||
1960 | 20 | |||
1961 | 21 | |||
1962 | 22 | def is_clustered(): | ||
1963 | 23 | for r_id in (relation_ids('ha') or []): | ||
1964 | 24 | for unit in (relation_list(r_id) or []): | ||
1965 | 25 | clustered = relation_get('clustered', | ||
1966 | 26 | rid=r_id, | ||
1967 | 27 | unit=unit) | ||
1968 | 28 | if clustered: | ||
1969 | 29 | return True | ||
1970 | 30 | return False | ||
1971 | 31 | |||
1972 | 32 | |||
1973 | 33 | def is_leader(resource): | ||
1974 | 34 | cmd = [ | ||
1975 | 35 | "crm", "resource", | ||
1976 | 36 | "show", resource] | ||
1977 | 37 | try: | ||
1978 | 38 | status = subprocess.check_output(cmd) | ||
1979 | 39 | except subprocess.CalledProcessError: | ||
1980 | 40 | return False | ||
1981 | 41 | else: | ||
1982 | 42 | if get_unit_hostname() in status: | ||
1983 | 43 | return True | ||
1984 | 44 | else: | ||
1985 | 45 | return False | ||
1986 | 46 | |||
1987 | 47 | |||
1988 | 48 | def peer_units(): | ||
1989 | 49 | peers = [] | ||
1990 | 50 | for r_id in (relation_ids('cluster') or []): | ||
1991 | 51 | for unit in (relation_list(r_id) or []): | ||
1992 | 52 | peers.append(unit) | ||
1993 | 53 | return peers | ||
1994 | 54 | |||
1995 | 55 | |||
1996 | 56 | def oldest_peer(peers): | ||
1997 | 57 | local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1] | ||
1998 | 58 | for peer in peers: | ||
1999 | 59 | remote_unit_no = peer.split('/')[1] | ||
2000 | 60 | if remote_unit_no < local_unit_no: | ||
2001 | 61 | return False | ||
2002 | 62 | return True | ||
2003 | 63 | |||
2004 | 64 | |||
2005 | 65 | def eligible_leader(resource): | ||
2006 | 66 | if is_clustered(): | ||
2007 | 67 | if not is_leader(resource): | ||
2008 | 68 | juju_log('INFO', 'Deferring action to CRM leader.') | ||
2009 | 69 | return False | ||
2010 | 70 | else: | ||
2011 | 71 | peers = peer_units() | ||
2012 | 72 | if peers and not oldest_peer(peers): | ||
2013 | 73 | juju_log('INFO', 'Deferring action to oldest service unit.') | ||
2014 | 74 | return False | ||
2015 | 75 | return True | ||
2016 | 76 | |||
2017 | 77 | |||
2018 | 78 | def https(): | ||
2019 | 79 | ''' | ||
2020 | 80 | Determines whether enough data has been provided in configuration | ||
2021 | 81 | or relation data to configure HTTPS | ||
2022 | 82 | . | ||
2023 | 83 | returns: boolean | ||
2024 | 84 | ''' | ||
2025 | 85 | if config_get('use-https') == "yes": | ||
2026 | 86 | return True | ||
2027 | 87 | if config_get('ssl_cert') and config_get('ssl_key'): | ||
2028 | 88 | return True | ||
2029 | 89 | for r_id in relation_ids('identity-service'): | ||
2030 | 90 | for unit in relation_list(r_id): | ||
2031 | 91 | if (relation_get('https_keystone', rid=r_id, unit=unit) and | ||
2032 | 92 | relation_get('ssl_cert', rid=r_id, unit=unit) and | ||
2033 | 93 | relation_get('ssl_key', rid=r_id, unit=unit) and | ||
2034 | 94 | relation_get('ca_cert', rid=r_id, unit=unit)): | ||
2035 | 95 | return True | ||
2036 | 96 | return False | ||
2037 | 97 | |||
2038 | 98 | |||
2039 | 99 | def determine_api_port(public_port): | ||
2040 | 100 | ''' | ||
2041 | 101 | Determine correct API server listening port based on | ||
2042 | 102 | existence of HTTPS reverse proxy and/or haproxy. | ||
2043 | 103 | |||
2044 | 104 | public_port: int: standard public port for given service | ||
2045 | 105 | |||
2046 | 106 | returns: int: the correct listening port for the API service | ||
2047 | 107 | ''' | ||
2048 | 108 | i = 0 | ||
2049 | 109 | if len(peer_units()) > 0 or is_clustered(): | ||
2050 | 110 | i += 1 | ||
2051 | 111 | if https(): | ||
2052 | 112 | i += 1 | ||
2053 | 113 | return public_port - (i * 10) | ||
2054 | 114 | |||
2055 | 115 | |||
2056 | 116 | def determine_haproxy_port(public_port): | ||
2057 | 117 | ''' | ||
2058 | 118 | Description: Determine correct proxy listening port based on public IP + | ||
2059 | 119 | existence of HTTPS reverse proxy. | ||
2060 | 120 | |||
2061 | 121 | public_port: int: standard public port for given service | ||
2062 | 122 | |||
2063 | 123 | returns: int: the correct listening port for the HAProxy service | ||
2064 | 124 | ''' | ||
2065 | 125 | i = 0 | ||
2066 | 126 | if https(): | ||
2067 | 127 | i += 1 | ||
2068 | 128 | return public_port - (i * 10) | ||
2069 | 129 | 0 | ||
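One caveat in the removed `oldest_peer`: it compares unit numbers as strings, so lexically `"10" < "9"` and unit 10 would be treated as older than unit 9. A sketch of the numeric comparison (unit names here are illustrative):

```python
def oldest_peer(peers, local_unit):
    """True when no peer has a lower unit number than local_unit."""
    local_no = int(local_unit.split('/')[1])  # int, not str: '10' < '9' lexically
    for peer in peers:
        if int(peer.split('/')[1]) < local_no:
            return False
    return True

# Unit 9 is older than unit 10 numerically, so 9 remains the oldest peer.
print(oldest_peer(["hdp-hadoop/10"], "hdp-hadoop/9"))
```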
2070 | === removed file 'hooks/charmhelpers/lib/utils.py' | |||
2071 | --- hooks/charmhelpers/lib/utils.py 2014-07-09 12:37:36 +0000 | |||
2072 | +++ hooks/charmhelpers/lib/utils.py 1970-01-01 00:00:00 +0000 | |||
2073 | @@ -1,221 +0,0 @@ | |||
2074 | 1 | # | ||
2075 | 2 | # Copyright 2012 Canonical Ltd. | ||
2076 | 3 | # | ||
2077 | 4 | # This file is sourced from lp:openstack-charm-helpers | ||
2078 | 5 | # | ||
2079 | 6 | # Authors: | ||
2080 | 7 | # James Page <james.page@ubuntu.com> | ||
2081 | 8 | # Paul Collins <paul.collins@canonical.com> | ||
2082 | 9 | # Adam Gandelman <adamg@ubuntu.com> | ||
2083 | 10 | # | ||
2084 | 11 | |||
2085 | 12 | import json | ||
2086 | 13 | import os | ||
2087 | 14 | import subprocess | ||
2088 | 15 | import socket | ||
2089 | 16 | import sys | ||
2090 | 17 | |||
2091 | 18 | |||
2092 | 19 | def do_hooks(hooks): | ||
2093 | 20 | hook = os.path.basename(sys.argv[0]) | ||
2094 | 21 | |||
2095 | 22 | try: | ||
2096 | 23 | hook_func = hooks[hook] | ||
2097 | 24 | except KeyError: | ||
2098 | 25 | juju_log('INFO', | ||
2099 | 26 | "This charm doesn't know how to handle '{}'.".format(hook)) | ||
2100 | 27 | else: | ||
2101 | 28 | hook_func() | ||
2102 | 29 | |||
2103 | 30 | |||
2104 | 31 | def install(*pkgs): | ||
2105 | 32 | cmd = [ | ||
2106 | 33 | 'apt-get', | ||
2107 | 34 | '-y', | ||
2108 | 35 | 'install'] | ||
2109 | 36 | for pkg in pkgs: | ||
2110 | 37 | cmd.append(pkg) | ||
2111 | 38 | subprocess.check_call(cmd) | ||
2112 | 39 | |||
2113 | 40 | TEMPLATES_DIR = 'templates' | ||
2114 | 41 | |||
2115 | 42 | try: | ||
2116 | 43 | import jinja2 | ||
2117 | 44 | except ImportError: | ||
2118 | 45 | install('python-jinja2') | ||
2119 | 46 | import jinja2 | ||
2120 | 47 | |||
2121 | 48 | try: | ||
2122 | 49 | import dns.resolver | ||
2123 | 50 | except ImportError: | ||
2124 | 51 | install('python-dnspython') | ||
2125 | 52 | import dns.resolver | ||
2126 | 53 | |||
2127 | 54 | |||
2128 | 55 | def render_template(template_name, context, template_dir=TEMPLATES_DIR): | ||
2129 | 56 | templates = jinja2.Environment(loader=jinja2.FileSystemLoader( | ||
2130 | 57 | template_dir)) | ||
2131 | 58 | template = templates.get_template(template_name) | ||
2132 | 59 | return template.render(context) | ||
2133 | 60 | |||
2134 | 61 | # Protocols | ||
2135 | 62 | TCP = 'TCP' | ||
2136 | 63 | UDP = 'UDP' | ||
2137 | 64 | |||
2138 | 65 | |||
2139 | 66 | def expose(port, protocol='TCP'): | ||
2140 | 67 | cmd = [ | ||
2141 | 68 | 'open-port', | ||
2142 | 69 | '{}/{}'.format(port, protocol)] | ||
2143 | 70 | subprocess.check_call(cmd) | ||
2144 | 71 | |||
2145 | 72 | |||
2146 | 73 | def juju_log(severity, message): | ||
2147 | 74 | cmd = [ | ||
2148 | 75 | 'juju-log', | ||
2149 | 76 | '--log-level', severity, | ||
2150 | 77 | message] | ||
2151 | 78 | subprocess.check_call(cmd) | ||
2152 | 79 | |||
2153 | 80 | |||
2154 | 81 | def relation_ids(relation): | ||
2155 | 82 | cmd = [ | ||
2156 | 83 | 'relation-ids', | ||
2157 | 84 | relation] | ||
2158 | 85 | result = str(subprocess.check_output(cmd)).split() | ||
2159 | 86 | if result == "": | ||
2160 | 87 | return None | ||
2161 | 88 | else: | ||
2162 | 89 | return result | ||
2163 | 90 | |||
2164 | 91 | |||
2165 | 92 | def relation_list(rid): | ||
2166 | 93 | cmd = [ | ||
2167 | 94 | 'relation-list', | ||
2168 | 95 | '-r', rid] | ||
2169 | 96 | result = str(subprocess.check_output(cmd)).split() | ||
2170 | 97 | if result == "": | ||
2171 | 98 | return None | ||
2172 | 99 | else: | ||
2173 | 100 | return result | ||
2174 | 101 | |||
2175 | 102 | |||
2176 | 103 | def relation_get(attribute, unit=None, rid=None): | ||
2177 | 104 | cmd = [ | ||
2178 | 105 | 'relation-get'] | ||
2179 | 106 | if rid: | ||
2180 | 107 | cmd.append('-r') | ||
2181 | 108 | cmd.append(rid) | ||
2182 | 109 | cmd.append(attribute) | ||
2183 | 110 | if unit: | ||
2184 | 111 | cmd.append(unit) | ||
2185 | 112 | value = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
2186 | 113 | if value == "": | ||
2187 | 114 | return None | ||
2188 | 115 | else: | ||
2189 | 116 | return value | ||
2190 | 117 | |||
2191 | 118 | |||
2192 | 119 | def relation_set(**kwargs): | ||
2193 | 120 | cmd = [ | ||
2194 | 121 | 'relation-set'] | ||
2195 | 122 | args = [] | ||
2196 | 123 | for k, v in kwargs.items(): | ||
2197 | 124 | if k == 'rid': | ||
2198 | 125 | if v: | ||
2199 | 126 | cmd.append('-r') | ||
2200 | 127 | cmd.append(v) | ||
2201 | 128 | else: | ||
2202 | 129 | args.append('{}={}'.format(k, v)) | ||
2203 | 130 | cmd += args | ||
2204 | 131 | subprocess.check_call(cmd) | ||
2205 | 132 | |||
2206 | 133 | |||
2207 | 134 | def unit_get(attribute): | ||
2208 | 135 | cmd = [ | ||
2209 | 136 | 'unit-get', | ||
2210 | 137 | attribute] | ||
2211 | 138 | value = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
2212 | 139 | if value == "": | ||
2213 | 140 | return None | ||
2214 | 141 | else: | ||
2215 | 142 | return value | ||
2216 | 143 | |||
2217 | 144 | |||
2218 | 145 | def config_get(attribute): | ||
2219 | 146 | cmd = [ | ||
2220 | 147 | 'config-get', | ||
2221 | 148 | '--format', | ||
2222 | 149 | 'json'] | ||
2223 | 150 | out = subprocess.check_output(cmd).strip() # IGNORE:E1103 | ||
2224 | 151 | cfg = json.loads(out) | ||
2225 | 152 | |||
2226 | 153 | try: | ||
2227 | 154 | return cfg[attribute] | ||
2228 | 155 | except KeyError: | ||
2229 | 156 | return None | ||
2230 | 157 | |||
2231 | 158 | |||
2232 | 159 | def get_unit_hostname(): | ||
2233 | 160 | return socket.gethostname() | ||
2234 | 161 | |||
2235 | 162 | |||
2236 | 163 | def get_host_ip(hostname=unit_get('private-address')): | ||
2237 | 164 | try: | ||
2238 | 165 | # Test to see if already an IPv4 address | ||
2239 | 166 | socket.inet_aton(hostname) | ||
2240 | 167 | return hostname | ||
2241 | 168 | except socket.error: | ||
2242 | 169 | answers = dns.resolver.query(hostname, 'A') | ||
2243 | 170 | if answers: | ||
2244 | 171 | return answers[0].address | ||
2245 | 172 | return None | ||
2246 | 173 | |||
2247 | 174 | |||
2248 | 175 | def _svc_control(service, action): | ||
2249 | 176 | subprocess.check_call(['service', service, action]) | ||
2250 | 177 | |||
2251 | 178 | |||
2252 | 179 | def restart(*services): | ||
2253 | 180 | for service in services: | ||
2254 | 181 | _svc_control(service, 'restart') | ||
2255 | 182 | |||
2256 | 183 | |||
2257 | 184 | def stop(*services): | ||
2258 | 185 | for service in services: | ||
2259 | 186 | _svc_control(service, 'stop') | ||
2260 | 187 | |||
2261 | 188 | |||
2262 | 189 | def start(*services): | ||
2263 | 190 | for service in services: | ||
2264 | 191 | _svc_control(service, 'start') | ||
2265 | 192 | |||
2266 | 193 | |||
2267 | 194 | def reload(*services): | ||
2268 | 195 | for service in services: | ||
2269 | 196 | try: | ||
2270 | 197 | _svc_control(service, 'reload') | ||
2271 | 198 | except subprocess.CalledProcessError: | ||
2272 | 199 | # Reload failed - either service does not support reload | ||
2273 | 200 | # or it was not running - restart will fixup most things | ||
2274 | 201 | _svc_control(service, 'restart') | ||
2275 | 202 | |||
2276 | 203 | |||
2277 | 204 | def running(service): | ||
2278 | 205 | try: | ||
2279 | 206 | output = subprocess.check_output(['service', service, 'status']) | ||
2280 | 207 | except subprocess.CalledProcessError: | ||
2281 | 208 | return False | ||
2282 | 209 | else: | ||
2283 | 210 | if ("start/running" in output or "is running" in output): | ||
2284 | 211 | return True | ||
2285 | 212 | else: | ||
2286 | 213 | return False | ||
2287 | 214 | |||
2288 | 215 | |||
2289 | 216 | def is_relation_made(relation, key='private-address'): | ||
2290 | 217 | for r_id in (relation_ids(relation) or []): | ||
2291 | 218 | for unit in (relation_list(r_id) or []): | ||
2292 | 219 | if relation_get(key, rid=r_id, unit=unit): | ||
2293 | 220 | return True | ||
2294 | 221 | return False | ||
2295 | 222 | 0 | ||
2296 | === removed file 'hooks/charmhelpers/setup.py' | |||
2297 | --- hooks/charmhelpers/setup.py 2014-07-09 12:37:36 +0000 | |||
2298 | +++ hooks/charmhelpers/setup.py 1970-01-01 00:00:00 +0000 | |||
2299 | @@ -1,12 +0,0 @@ | |||
2300 | 1 | #!/usr/bin/env python | ||
2301 | 2 | |||
2302 | 3 | from distutils.core import setup | ||
2303 | 4 | |||
2304 | 5 | setup(name='charmhelpers', | ||
2305 | 6 | version='1.0', | ||
2306 | 7 | description='this is dumb', | ||
2307 | 8 | author='nobody', | ||
2308 | 9 | author_email='dummy@amulet', | ||
2309 | 10 | url='http://google.com', | ||
2310 | 11 | packages=[], | ||
2311 | 12 | ) | ||
2312 | 13 | 0 | ||
2313 | === modified file 'hooks/hdp-hadoop-common.py' | |||
2314 | --- hooks/hdp-hadoop-common.py 2014-11-21 19:35:29 +0000 | |||
2315 | +++ hooks/hdp-hadoop-common.py 2015-02-13 00:38:49 +0000 | |||
2316 | @@ -1,19 +1,28 @@ | |||
2317 | 1 | #!/usr/bin/env python | 1 | #!/usr/bin/env python |
2318 | 2 | import os | 2 | import os |
2319 | 3 | import re | ||
2320 | 3 | import subprocess | 4 | import subprocess |
2321 | 5 | import pwd | ||
2322 | 4 | import sys | 6 | import sys |
2323 | 5 | import tarfile | 7 | import tarfile |
2324 | 6 | import shlex | 8 | import shlex |
2325 | 7 | import shutil | 9 | import shutil |
2326 | 8 | import inspect | 10 | import inspect |
2327 | 9 | import time | 11 | import time |
2328 | 12 | from functools import partial | ||
2329 | 13 | import socket | ||
2330 | 14 | |||
2331 | 15 | import bootstrap | ||
2332 | 16 | bootstrap.install_charmhelpers() | ||
2333 | 10 | 17 | ||
2334 | 11 | from hdputils import install_base_pkg, updateHDPDirectoryScript, config_all_nodes, \ | 18 | from hdputils import install_base_pkg, updateHDPDirectoryScript, config_all_nodes, \ |
2337 | 12 | setHadoopEnvVar, home, hdpScript, configureJAVA, config_all_nodes | 19 | setHadoopEnvVar, home, hdpScript, configureJAVA, config_all_nodes, \ |
2338 | 13 | from bdutils import setDirPermission, fileSetKV, append_bashrc, is_jvm_service_active, setHadoopConfigXML, chownRecursive | 20 | javaPath |
2339 | 21 | from bdutils import setDirPermission, fileSetKV, append_bashrc, is_jvm_service_active, setHadoopConfigXML, \ | ||
2340 | 22 | chownRecursive, HDFS_command, fconfigured | ||
2341 | 14 | 23 | ||
2344 | 15 | from charmhelpers.lib.utils import config_get, get_unit_hostname | 24 | from shutil import rmtree, copyfile, copy |
2345 | 16 | from shutil import rmtree, copyfile | 25 | from charmhelpers.core import hookenv |
2346 | 17 | from charmhelpers.core.hookenv import log, Hooks, relation_get, relation_set, unit_get, open_port, local_unit, related_units | 26 | from charmhelpers.core.hookenv import log, Hooks, relation_get, relation_set, unit_get, open_port, local_unit, related_units |
2347 | 18 | from charmhelpers.core.host import service_start, service_stop, add_user_to_group | 27 | from charmhelpers.core.host import service_start, service_stop, add_user_to_group |
2348 | 19 | from time import sleep | 28 | from time import sleep |
2349 | @@ -68,27 +77,71 @@ | |||
2350 | 68 | # in config.yaml | 77 | # in config.yaml |
2351 | 69 | def updateDirectoriesScript(): | 78 | def updateDirectoriesScript(): |
2352 | 70 | log("==> {}".format(inspect.stack()[0][3]),"INFO") | 79 | log("==> {}".format(inspect.stack()[0][3]),"INFO") |
2359 | 71 | fileSetKV(directoriesScript, "DFS_NAME_DIR=", "\""+config_get('dfs_name_dir')+"\";") | 80 | fileSetKV(directoriesScript, "DFS_NAME_DIR=", "\""+hookenv.config('dfs_name_dir')+"\";") |
2360 | 72 | fileSetKV(directoriesScript, "DFS_DATA_DIR=","\""+ config_get('dfs_data_dir')+"\";") | 81 | fileSetKV(directoriesScript, "DFS_DATA_DIR=","\""+ hookenv.config('dfs_data_dir')+"\";") |
2361 | 73 | fileSetKV(directoriesScript, "FS_CHECKPOINT_DIR=", "\""+config_get('fs_checkpoint_dir')+"\";") | 82 | fileSetKV(directoriesScript, "FS_CHECKPOINT_DIR=", "\""+hookenv.config('fs_checkpoint_dir')+"\";") |
2362 | 74 | fileSetKV(directoriesScript, "YARN_LOCAL_DIR=", "\""+config_get('yarn_local_dir')+"\";") | 83 | fileSetKV(directoriesScript, "YARN_LOCAL_DIR=", "\""+hookenv.config('yarn_local_dir')+"\";") |
2363 | 75 | fileSetKV(directoriesScript, "YARN_LOCAL_LOG_DIR=", "\""+config_get('yarn_local_log_dir')+"\";") | 84 | fileSetKV(directoriesScript, "YARN_LOCAL_LOG_DIR=", "\""+hookenv.config('yarn_local_log_dir')+"\";") |
2364 | 76 | fileSetKV(directoriesScript, "ZOOKEEPER_DATA_DIR=", "\""+config_get('zookeeper_data_dir')+"\";") | 85 | fileSetKV(directoriesScript, "ZOOKEEPER_DATA_DIR=", "\""+hookenv.config('zookeeper_data_dir')+"\";") |
2365 | 77 | 86 | ||
2366 | 78 | 87 | ||
2367 | 79 | def createHDPHadoopConf(): | 88 | def createHDPHadoopConf(): |
2368 | 80 | log("==> {}".format(inspect.stack()[0][3]),"INFO") | 89 | log("==> {}".format(inspect.stack()[0][3]),"INFO") |
2369 | 81 | 90 | ||
2370 | 91 | edits = { | ||
2371 | 92 | 'hadoop-env.sh': { | ||
2372 | 93 | r'TODO-JDK-PATH': javaPath, | ||
2373 | 94 | }, | ||
2374 | 95 | 'yarn-env.sh': { | ||
2375 | 96 | r'TODO-JDK-PATH': javaPath, | ||
2376 | 97 | }, | ||
2377 | 98 | } | ||
2378 | 99 | |||
2379 | 82 | HADOOP_CONF_DIR = os.environ["HADOOP_CONF_DIR"] | 100 | HADOOP_CONF_DIR = os.environ["HADOOP_CONF_DIR"] |
2380 | 83 | HDPConfPath = os.path.join(os.path.sep,home, hdpScript, "configuration_files", "core_hadoop") | 101 | HDPConfPath = os.path.join(os.path.sep,home, hdpScript, "configuration_files", "core_hadoop") |
2381 | 84 | source = os.listdir(HDPConfPath) | 102 | source = os.listdir(HDPConfPath) |
2382 | 85 | for files in source: | 103 | for files in source: |
2383 | 86 | srcFile = os.path.join(os.path.sep, HDPConfPath, files) | 104 | srcFile = os.path.join(os.path.sep, HDPConfPath, files) |
2384 | 87 | desFile = os.path.join(os.path.sep, HADOOP_CONF_DIR, files) | 105 | desFile = os.path.join(os.path.sep, HADOOP_CONF_DIR, files) |
2386 | 88 | shutil.copyfile(srcFile, desFile) | 106 | edit_file(srcFile, desFile, edits.get(files)) |
2387 | 89 | subprocess.call(createDadoopConfDir) | 107 | subprocess.call(createDadoopConfDir) |
2388 | 90 | 108 | ||
2389 | 91 | 109 | ||
2390 | 110 | def edit_file(src, dst=None, edits=None): | ||
2391 | 111 | """ | ||
2392 | 112 | Copy file from src to dst, applying one or more per-line edits. | ||
2393 | 113 | |||
2394 | 114 | :param str src: Path of source file to copy from | ||
2395 | 115 | :param str dst: Path of destination file. If None or the same as `src`, | ||
2396 | 116 | the edits will be made in-place. | ||
2397 | 117 | :param dict edits: A mapping of regular expression patterns to regular | ||
2398 | 118 | expression replacement patterns (using backreferences). | ||
2399 | 119 | Any matching replacements will be made for each line. | ||
2400 | 120 | If the order of replacements is important, an instance | ||
2401 | 121 | of the `OrderedDict` class from the `collections` | ||
2402 | 122 | package should be used. | ||
2403 | 123 | """ | ||
2404 | 124 | if not edits: | ||
2405 | 125 | if not dst or dst == src: | ||
2406 | 126 | return # in-place edit with no changes is a noop | ||
2407 | 127 | shutil.copyfile(src, dst) # no edits is just a copy | ||
2408 | 128 | return | ||
2409 | 129 | cleanup = False | ||
2410 | 130 | if not dst or dst == src: | ||
2411 | 131 | dst = src | ||
2412 | 132 | src = '{}.bak'.format(src) | ||
2413 | 133 | os.rename(dst, src) | ||
2414 | 134 | cleanup = True | ||
2415 | 135 | with open(src, 'r') as infile: | ||
2416 | 136 | with open(dst, 'w') as outfile: | ||
2417 | 137 | for line in infile.readlines(): | ||
2418 | 138 | for pat, repl in edits.iteritems(): | ||
2419 | 139 | line = re.sub(pat, repl, line) | ||
2420 | 140 | outfile.write(line) | ||
2421 | 141 | if cleanup: | ||
2422 | 142 | os.remove(src) | ||
2423 | 143 | |||
2424 | 144 | |||
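The new `edit_file` helper applies a mapping of regex patterns to every line while copying a file. As a quick illustration of that per-line loop, here is a self-contained sketch that re-implements it against an in-memory list of lines; the `TODO-JDK-PATH` marker mirrors the `hadoop-env.sh` edit in this diff, while the concrete JDK path is a placeholder, not a value taken from the charm:

```python
import re
from collections import OrderedDict

def apply_edits(lines, edits):
    # Same per-line loop as edit_file above: every pattern is applied to
    # every line, in dict order (use OrderedDict when order matters).
    out = []
    for line in lines:
        for pat, repl in edits.items():
            line = re.sub(pat, repl, line)
        out.append(line)
    return out

# Mirrors the hadoop-env.sh edit in the diff; the JDK path is a placeholder.
edits = OrderedDict([(r'TODO-JDK-PATH', '/usr/lib/jvm/java-7-openjdk-amd64')])
lines = ['export JAVA_HOME=TODO-JDK-PATH\n', 'export HADOOP_HEAPSIZE=1024\n']
print(apply_edits(lines, edits))
```

Lines that match no pattern pass through unchanged, which is why `edit_file` can safely run over every file in the `core_hadoop` configuration directory.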
2425 | 92 | def uninstall_base_pkg(): | 145 | def uninstall_base_pkg(): |
2426 | 93 | log("==> {}".format(inspect.stack()[0][3]),"INFO") | 146 | log("==> {}".format(inspect.stack()[0][3]),"INFO") |
2427 | 94 | packages = ['ntp', | 147 | packages = ['ntp', |
2428 | @@ -104,7 +157,6 @@ | |||
2429 | 104 | 'liblzo2-2', | 157 | 'liblzo2-2', |
2430 | 105 | 'liblzo2-dev', | 158 | 'liblzo2-dev', |
2431 | 106 | 'libhdfs0', | 159 | 'libhdfs0', |
2432 | 107 | 'libhdfs0-dev', | ||
2433 | 108 | 'hadoop-lzo'] | 160 | 'hadoop-lzo'] |
2434 | 109 | apt_purge(packages) | 161 | apt_purge(packages) |
2435 | 110 | shutil.rmtree(hdpScriptPath) | 162 | shutil.rmtree(hdpScriptPath) |
2436 | @@ -130,26 +182,41 @@ | |||
2437 | 130 | setDirPermission(os.environ['MAPRED_PID_DIR'], os.environ['MAPRED_USER'], group, 0755) | 182 | setDirPermission(os.environ['MAPRED_PID_DIR'], os.environ['MAPRED_USER'], group, 0755) |
2438 | 131 | setDirPermission(os.environ['YARN_LOCAL_DIR'], os.environ['YARN_USER'], group, 0755) | 183 | setDirPermission(os.environ['YARN_LOCAL_DIR'], os.environ['YARN_USER'], group, 0755) |
2439 | 132 | setDirPermission(os.environ['YARN_LOCAL_LOG_DIR'], os.environ['YARN_USER'], group, 0755) | 184 | setDirPermission(os.environ['YARN_LOCAL_LOG_DIR'], os.environ['YARN_USER'], group, 0755) |
2444 | 133 | hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml') | 185 | hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'hdfs-site.xml') |
2445 | 134 | setHadoopConfigXML(hdfsConfPath, "dfs.namenode.name.dir", config_get('dfs_name_dir')) | 186 | setHadoopConfigXML(hdfsConfPath, "dfs.namenode.name.dir", hookenv.config('dfs_name_dir')) |
2446 | 135 | setHadoopConfigXML(hdfsConfPath, "dfs.datanode.data.dir", config_get('dfs_data_dir')) | 187 | setHadoopConfigXML(hdfsConfPath, "dfs.datanode.data.dir", hookenv.config('dfs_data_dir')) |
2447 | 136 | setHadoopConfigXML(hdfsConfPath, "dfs.namenode.checkpoint.dir", config_get('fs_checkpoint_dir')) | 188 | setHadoopConfigXML(hdfsConfPath, "dfs.namenode.checkpoint.dir", hookenv.config('fs_checkpoint_dir')) |
2448 | 189 | yarnConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'yarn-site.xml') | ||
2449 | 190 | setHadoopConfigXML(yarnConfPath, "yarn.scheduler.maximum-allocation-mb", "2048") | ||
2450 | 191 | setHadoopConfigXML(yarnConfPath, "yarn.scheduler.minimum-allocation-mb", "682") | ||
2451 | 192 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.cpu-vcores", "2") | ||
2452 | 193 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.memory-mb", "2048") | ||
2453 | 194 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.percentage-physical-cpu-limit", "100") | ||
2454 | 195 | mapredConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'mapred-site.xml') | ||
2455 | 196 | # setHadoopConfigXML(mapredConfPath, "mapreduce.application.framework.path", None) | ||
2456 | 197 | setHadoopConfigXML(mapredConfPath, "mapreduce.map.memory.mb", "682") | ||
2457 | 198 | setHadoopConfigXML(mapredConfPath, "mapreduce.reduce.memory.mb", "682") | ||
2458 | 199 | setHadoopConfigXML(mapredConfPath, "mapreduce.map.java.opts", "-Xmx546m") | ||
2459 | 200 | setHadoopConfigXML(mapredConfPath, "mapreduce.reduce.java.opts", "-Xmx546m") | ||
2460 | 201 | setHadoopConfigXML(mapredConfPath, "mapreduce.task.io.sort.factor", "100") | ||
2461 | 202 | setHadoopConfigXML(mapredConfPath, "mapreduce.task.io.sort.mb", "273") | ||
2462 | 203 | setHadoopConfigXML(mapredConfPath, "yarn.app.mapreduce.am.resource.mb", "682") | ||
2463 | 137 | 204 | ||
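The hard-coded memory defaults above are mutually consistent: with 2048 MB per NodeManager split three ways, each container gets 682 MB, and the JVM heap (`-Xmx546m`) is about 80% of the container size. A small sketch of that arithmetic follows; the three-container split and the 0.8 heap ratio are assumptions inferred from the values in the diff, not settings stated anywhere in the charm:

```python
# Sketch of where the hard-coded YARN/MapReduce memory defaults above could
# come from. The container count (3) and heap ratio (0.8) are assumptions
# inferred from the values in the diff, not actual charm configuration.
def container_defaults(node_mem_mb=2048, containers=3, heap_ratio=0.8):
    container_mb = node_mem_mb // containers            # 2048 // 3 == 682
    heap_mb = int(round(container_mb * heap_ratio))     # ~80% of 682 == 546
    return {
        'yarn.nodemanager.resource.memory-mb': node_mem_mb,
        'yarn.scheduler.minimum-allocation-mb': container_mb,
        'mapreduce.map.memory.mb': container_mb,
        'mapreduce.map.java.opts': '-Xmx{}m'.format(heap_mb),
    }

print(container_defaults())
```

Keeping the heap below the container size leaves headroom for JVM overhead, so containers are not killed for exceeding their YARN memory allocation.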
2464 | 138 | # candidate for BD charm helper | 205 | # candidate for BD charm helper |
2465 | 139 | def format_namenode(hdfsUser): | 206 | def format_namenode(hdfsUser): |
2466 | 140 | log("==> hdfs format for user={}".format(hdfsUser),"INFO") | 207 | log("==> hdfs format for user={}".format(hdfsUser),"INFO") |
2468 | 141 | cmd = shlex.split("su {} -c '/usr/lib/hadoop/bin/hadoop namenode -format'".format(hdfsUser)) | 208 | cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-client/bin/hadoop namenode -format'".format(hdfsUser)) |
2469 | 142 | subprocess.call(cmd) | 209 | subprocess.call(cmd) |
2470 | 143 | 210 | ||
2471 | 144 | def callHDFS_fs(command): | 211 | def callHDFS_fs(command): |
2473 | 145 | cmd = shlex.split("su hdfs -c '/usr/lib/hadoop/bin/hadoop fs {}'".format(command)) | 212 | cmd = shlex.split("su hdfs -c '/usr/hdp/current/hadoop-client/bin/hadoop fs {}'".format(command)) |
2474 | 146 | subprocess.call(cmd) | 213 | subprocess.call(cmd) |
2475 | 147 | 214 | ||
2476 | 148 | 215 | ||
2477 | 149 | ''' Start Jobhistory Server ''' | 216 | ''' Start Jobhistory Server ''' |
2478 | 150 | def start_jobhistory(): | 217 | def start_jobhistory(): |
2479 | 151 | log("==> Start Job History Server") | 218 | log("==> Start Job History Server") |
2481 | 152 | path = os.path.join(os.path.sep, 'usr', 'lib', 'hadoop-yarn', 'bin', 'container-executor') | 219 | path = '/usr/hdp/current/hadoop-yarn-client/bin/container-executor' |
2482 | 153 | chownRecursive(path, 'root', 'hadoop') | 220 | chownRecursive(path, 'root', 'hadoop') |
2483 | 154 | os.chmod(path, 650) | 221 | os.chmod(path, 650) |
2484 | 155 | callHDFS_fs("-mkdir -p /mr-history/tmp") | 222 | callHDFS_fs("-mkdir -p /mr-history/tmp") |
2485 | @@ -161,16 +228,16 @@ | |||
2486 | 161 | callHDFS_fs("-chmod -R 1777 /app-logs ") | 228 | callHDFS_fs("-chmod -R 1777 /app-logs ") |
2487 | 162 | callHDFS_fs("-chown yarn /app-logs ") | 229 | callHDFS_fs("-chown yarn /app-logs ") |
2488 | 163 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 230 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] |
2491 | 164 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 231 | os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec" |
2492 | 165 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config {} start historyserver'".\ | 232 | cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-mapreduce-client/sbin/mr-jobhistory-daemon.sh --config {} start historyserver'".\ |
2493 | 166 | format(os.environ['MAPRED_USER'], hadoopConfDir)) | 233 | format(os.environ['MAPRED_USER'], hadoopConfDir)) |
2494 | 167 | subprocess.call(cmd) | 234 | subprocess.call(cmd) |
2495 | 168 | 235 | ||
2496 | 169 | '''Stop Job History Server''' | 236 | '''Stop Job History Server''' |
2497 | 170 | def stop_jobhistory(): | 237 | def stop_jobhistory(): |
2498 | 171 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 238 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] |
2501 | 172 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 239 | os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec" |
2502 | 173 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config {} stop historyserver'".\ | 240 | cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-mapreduce-client/sbin/mr-jobhistory-daemon.sh --config {} stop historyserver'".\ |
2503 | 174 | format(os.environ['MAPRED_USER'], hadoopConfDir)) | 241 | format(os.environ['MAPRED_USER'], hadoopConfDir)) |
2504 | 175 | subprocess.call(cmd) | 242 | subprocess.call(cmd) |
2505 | 176 | 243 | ||
2506 | @@ -182,47 +249,49 @@ | |||
2507 | 182 | stop_jobhistory() | 249 | stop_jobhistory() |
2508 | 183 | start_jobhistory() | 250 | start_jobhistory() |
2509 | 184 | 251 | ||
2513 | 185 | ''' Start Namenode Service | 252 | |
2514 | 186 | hdfsUser: System user for Hadoop Distributed Filesystem | 253 | def hdfs_service(service, action, username): |
2515 | 187 | ''' | 254 | daemon = '/usr/hdp/current/hadoop-hdfs-{}/../hadoop/sbin/hadoop-daemon.sh' |
2516 | 255 | log("==> {} {} for user={}".format(action, service, username), "INFO") | ||
2517 | 256 | subprocess.check_call(shlex.split( | ||
2518 | 257 | "su {user} -c '{daemon} --config {conf_dir} {action} {service}'".format( | ||
2519 | 258 | user=username, | ||
2520 | 259 | daemon=daemon.format(service), | ||
2521 | 260 | service=service, | ||
2522 | 261 | action=action, | ||
2523 | 262 | conf_dir=os.environ["HADOOP_CONF_DIR"]))) | ||
2524 | 263 | |||
2525 | 264 | |||
2526 | 188 | def start_namenode(hdfsUser): | 265 | def start_namenode(hdfsUser): |
2536 | 189 | log("==> start namenode for user={}".format(hdfsUser), "INFO") | 266 | ''' |
2537 | 190 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 267 | Start Namenode Service |
2538 | 191 | cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} start namenode'".\ | 268 | hdfsUser: System user for Hadoop Distributed Filesystem |
2539 | 192 | format(hdfsUser, hadoopConfDir)) | 269 | ''' |
2540 | 193 | subprocess.check_call(cmd) | 270 | hdfs_service('namenode', 'start', hdfsUser) |
2541 | 194 | 271 | ||
2542 | 195 | '''Stop Name Node server | 272 | |
2534 | 196 | hdfsUser: System user for Hadoop Distributed Filesystem | ||
2535 | 197 | ''' | ||
2543 | 198 | def stop_namenode(hdfsUser): | 273 | def stop_namenode(hdfsUser): |
2554 | 199 | log("==> stop namenode for user={}".format(hdfsUser), "INFO") | 274 | ''' |
2555 | 200 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 275 | Stop Name Node server |
2556 | 201 | cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop namenode'".\ | 276 | hdfsUser: System user for Hadoop Distributed Filesystem |
2557 | 202 | format(hdfsUser, hadoopConfDir)) | 277 | ''' |
2558 | 203 | subprocess.call(cmd) | 278 | hdfs_service('namenode', 'stop', hdfsUser) |
2559 | 204 | 279 | ||
2560 | 205 | ''' | 280 | |
2551 | 206 | Start Data Node | ||
2552 | 207 | hdfsUser: System user for Hadoop Distributed Filesystem | ||
2553 | 208 | ''' | ||
2561 | 209 | def start_datanode(hdfsUser): | 281 | def start_datanode(hdfsUser): |
2572 | 210 | log("==> start datanode for user={}".format(hdfsUser), "INFO") | 282 | ''' |
2573 | 211 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 283 | Start Data Node |
2574 | 212 | cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} start datanode'".\ | 284 | hdfsUser: System user for Hadoop Distributed Filesystem |
2575 | 213 | format(hdfsUser, hadoopConfDir)) | 285 | ''' |
2576 | 214 | subprocess.check_call(cmd) | 286 | hdfs_service('datanode', 'start', hdfsUser) |
2577 | 215 | 287 | ||
2578 | 216 | ''' | 288 | |
2569 | 217 | Stop Data Node | ||
2570 | 218 | hdfsUser: System user for Hadoop Distributed Filesystem | ||
2571 | 219 | ''' | ||
2579 | 220 | def stop_datanode(hdfsUser): | 289 | def stop_datanode(hdfsUser): |
2585 | 221 | log("==> stop datanode for user={}".format(hdfsUser), "INFO") | 290 | ''' |
2586 | 222 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 291 | Stop Data Node |
2587 | 223 | cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop datanode'".\ | 292 | hdfsUser: System user for Hadoop Distributed Filesystem |
2588 | 224 | format(hdfsUser, hadoopConfDir)) | 293 | ''' |
2589 | 225 | subprocess.call(cmd) | 294 | hdfs_service('datanode', 'stop', hdfsUser) |
2590 | 226 | # candidate for BD charm helper | 295 | # candidate for BD charm helper |
2591 | 227 | 296 | ||
2592 | 228 | ''' | 297 | ''' |
2593 | @@ -238,52 +307,49 @@ | |||
2594 | 238 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.resource-tracker.address", RMhostname+":8025") | 307 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.resource-tracker.address", RMhostname+":8025") |
2595 | 239 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.scheduler.address", RMhostname+":8030") | 308 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.scheduler.address", RMhostname+":8030") |
2596 | 240 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.address", RMhostname+":8050") | 309 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.address", RMhostname+":8050") |
2597 | 310 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.hostname", RMhostname) | ||
2598 | 241 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.admin.address", RMhostname+":8141") | 311 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.admin.address", RMhostname+":8141") |
2601 | 242 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.local-dirs", config_get('yarn_local_dir')) | 312 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.local-dirs", hookenv.config('yarn_local_dir')) |
2602 | 243 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.log-dirs", config_get('yarn_local_log_dir')) | 313 | setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.log-dirs", hookenv.config('yarn_local_log_dir')) |
2603 | 244 | setHadoopConfigXML(yarnConfPath, "yarn.log.server.url","http://"+RMhostname+":19888") | 314 | setHadoopConfigXML(yarnConfPath, "yarn.log.server.url","http://"+RMhostname+":19888") |
2604 | 245 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.webapp.address", RMhostname+":8088") | 315 | setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.webapp.address", RMhostname+":8088") |
2605 | 246 | #jobhistory server | 316 | #jobhistory server |
2606 | 247 | setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.webapp.address", RMhostname+":19888") | 317 | setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.webapp.address", RMhostname+":19888") |
2607 | 248 | setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.address", RMhostname+":10020") | 318 | setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.address", RMhostname+":10020") |
2608 | 249 | 319 | ||
2610 | 250 | ''' Start Resource Manager server ''' | 320 | |
2611 | 321 | def yarn_service(service, action, username): | ||
2612 | 322 | daemon = '/usr/hdp/current/hadoop-yarn-client/sbin/yarn-daemon.sh' | ||
2613 | 323 | log("==> {} {} for user={}".format(action, service, username), "INFO") | ||
2614 | 324 | os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec" | ||
2615 | 325 | subprocess.check_call(shlex.split( | ||
2616 | 326 | "su {user} -c '{daemon} --config {conf_dir} {action} {service}'".format( | ||
2617 | 327 | user=username, | ||
2618 | 328 | daemon=daemon.format(service), | ||
2619 | 329 | service=service, | ||
2620 | 330 | action=action, | ||
2621 | 331 | conf_dir=os.environ["HADOOP_CONF_DIR"]))) | ||
2622 | 332 | |||
2623 | 333 | |||
2624 | 251 | def start_resourcemanager(yarnUser): | 334 | def start_resourcemanager(yarnUser): |
2633 | 252 | log("==> start resourcemanager", "INFO") | 335 | ''' Start Resource Manager server ''' |
2634 | 253 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 336 | yarn_service('resourcemanager', 'start', yarnUser) |
2635 | 254 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 337 | |
2636 | 255 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} start resourcemanager'".\ | 338 | |
2629 | 256 | format(yarnUser, hadoopConfDir)) | ||
2630 | 257 | subprocess.call(cmd) | ||
2631 | 258 | |||
2632 | 259 | ''' Stop Resource Manager server ''' | ||
2637 | 260 | def stop_resourcemanager(yarnUser): | 339 | def stop_resourcemanager(yarnUser): |
2646 | 261 | log("==> stop resourcemanager", "INFO") | 340 | ''' Stop Resource Manager server ''' |
2647 | 262 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 341 | yarn_service('resourcemanager', 'stop', yarnUser) |
2648 | 263 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 342 | |
2649 | 264 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop resourcemanager'".\ | 343 | |
2642 | 265 | format(yarnUser, hadoopConfDir)) | ||
2643 | 266 | subprocess.check_call(cmd) | ||
2644 | 267 | |||
2645 | 268 | ''' Start Node Manager daemon on each compute node ''' | ||
2650 | 269 | def start_nodemanager(yarnUser): | 344 | def start_nodemanager(yarnUser): |
2660 | 270 | log("==> start nodemanager", "INFO") | 345 | ''' Start Node Manager daemon on each compute node ''' |
2661 | 271 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 346 | yarn_service('nodemanager', 'start', yarnUser) |
2662 | 272 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 347 | |
2654 | 273 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} start nodemanager'".\ | ||
2655 | 274 | format(yarnUser, hadoopConfDir)) | ||
2656 | 275 | subprocess.check_call(cmd) | ||
2657 | 276 | |||
2658 | 277 | |||
2659 | 278 | ''' Stop Node Manager daemon on each compute node ''' | ||
2663 | 279 | 348 | ||
2664 | 280 | def stop_nodemanager(yarnUser): | 349 | def stop_nodemanager(yarnUser): |
2671 | 281 | log("==> stop nodemanager", "INFO") | 350 | ''' Stop Node Manager daemon on each compute node ''' |
2672 | 282 | hadoopConfDir = os.environ["HADOOP_CONF_DIR"] | 351 | yarn_service('nodemanager', 'stop', yarnUser) |
2673 | 283 | os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec" | 352 | |
2668 | 284 | cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop nodemanager'".\ | ||
2669 | 285 | format(yarnUser, hadoopConfDir)) | ||
2670 | 286 | subprocess.call(cmd) | ||
2674 | 287 | 353 | ||
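Both `hdfs_service` and `yarn_service` replace four copy-pasted start/stop functions with one wrapper that builds the same `su <user> -c '<daemon> --config <conf_dir> <action> <service>'` command line. This sketch shows how `shlex.split` tokenizes that string so the quoted payload reaches `su` as a single `-c` argument; the daemon path is copied from the diff, `/etc/hadoop/conf` stands in for `HADOOP_CONF_DIR`, and no process is actually spawned here:

```python
import shlex

def build_daemon_cmd(user, daemon, conf_dir, action, service):
    # Same string the wrappers above pass to subprocess.check_call;
    # shlex.split keeps the single-quoted su -c payload as one token.
    return shlex.split(
        "su {user} -c '{daemon} --config {conf_dir} {action} {service}'".format(
            user=user, daemon=daemon, conf_dir=conf_dir,
            action=action, service=service))

cmd = build_daemon_cmd(
    'yarn', '/usr/hdp/current/hadoop-yarn-client/sbin/yarn-daemon.sh',
    '/etc/hadoop/conf', 'start', 'resourcemanager')
print(cmd)
```

Using `shlex.split` rather than `shell=True` means only `su` itself interprets the inner quoted command, which is why the daemon invocation survives as one argument.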
2675 | 288 | '''Stop all running hadoop services | 354 | '''Stop all running hadoop services |
2676 | 289 | NOTE: Order is important - DO NOT CHANGE | 355 | NOTE: Order is important - DO NOT CHANGE |
2677 | @@ -336,6 +402,8 @@ | |||
2678 | 336 | break | 402 | break |
2679 | 337 | 403 | ||
2680 | 338 | def configureHDFS(hostname): | 404 | def configureHDFS(hostname): |
2681 | 405 | cmd = "/usr/bin/hdp-select set all 2.2.0.0-2041" | ||
2682 | 406 | subprocess.check_call(shlex.split(cmd)) | ||
2683 | 339 | hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml') | 407 | hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml') |
2684 | 340 | coreConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'core-site.xml') | 408 | coreConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'core-site.xml') |
2685 | 341 | setHadoopConfigXML(coreConfPath, "fs.defaultFS", "hdfs://"+hostname+":8020") | 409 | setHadoopConfigXML(coreConfPath, "fs.defaultFS", "hdfs://"+hostname+":8020") |
2686 | @@ -361,11 +429,10 @@ | |||
2687 | 361 | 'liblzo2-2', | 429 | 'liblzo2-2', |
2688 | 362 | 'liblzo2-dev', | 430 | 'liblzo2-dev', |
2689 | 363 | 'libhdfs0', | 431 | 'libhdfs0', |
2690 | 364 | 'libhdfs0-dev', | ||
2691 | 365 | 'hadoop-lzo'] | 432 | 'hadoop-lzo'] |
2692 | 366 | install_base_pkg(packages) | 433 | install_base_pkg(packages) |
2693 | 367 | config_hadoop_nodes() | 434 | config_hadoop_nodes() |
2695 | 368 | fileSetKV(hosts_path, unit_get('private-address')+' ', get_unit_hostname()) | 435 | fileSetKV(hosts_path, unit_get('private-address')+' ', socket.gethostname()) |
2696 | 369 | 436 | ||
2697 | 370 | 437 | ||
2698 | 371 | @hooks.hook('resourcemanager-relation-joined') | 438 | @hooks.hook('resourcemanager-relation-joined') |
2699 | @@ -373,17 +440,23 @@ | |||
2700 | 373 | log ("==> resourcemanager-relation-joined","INFO") | 440 | log ("==> resourcemanager-relation-joined","INFO") |
2701 | 374 | if is_jvm_service_active("ResourceManager"): | 441 | if is_jvm_service_active("ResourceManager"): |
2702 | 375 | relation_set(resourceManagerReady=True) | 442 | relation_set(resourceManagerReady=True) |
2704 | 376 | relation_set(resourceManager_hostname=get_unit_hostname()) | 443 | relation_set(resourceManager_hostname=socket.gethostname()) |
2705 | 377 | return | 444 | return |
2706 | 445 | # waiting for namenode, however this will not work on a distributed hdfs system. | ||
2707 | 378 | if not is_jvm_service_active("NameNode"): | 446 | if not is_jvm_service_active("NameNode"): |
2708 | 379 | sys.exit(0) | 447 | sys.exit(0) |
2709 | 448 | shutil.copy(os.path.join(os.path.sep, os.environ['CHARM_DIR'],\ | ||
2710 | 449 | 'files', 'scripts', "terasort.sh"), home) | ||
2711 | 380 | setHadoopEnvVar() | 450 | setHadoopEnvVar() |
2712 | 381 | relation_set(resourceManager_ip=unit_get('private-address')) | 451 | relation_set(resourceManager_ip=unit_get('private-address')) |
2714 | 382 | relation_set(resourceManager_hostname=get_unit_hostname()) | 452 | relation_set(resourceManager_hostname=socket.gethostname()) |
2715 | 383 | configureYarn(unit_get('private-address')) | 453 | configureYarn(unit_get('private-address')) |
2716 | 384 | start_resourcemanager(os.environ["YARN_USER"]) | 454 | start_resourcemanager(os.environ["YARN_USER"]) |
2717 | 385 | start_jobhistory() | 455 | start_jobhistory() |
2718 | 386 | open_port(8025) | 456 | open_port(8025) |
2719 | 457 | open_port(8050) | ||
2720 | 458 | open_port(8020) | ||
2721 | 459 | open_port(8042) | ||
2722 | 387 | open_port(8030) | 460 | open_port(8030) |
2723 | 388 | open_port(8050) | 461 | open_port(8050) |
2724 | 389 | open_port(8141) | 462 | open_port(8141) |
2725 | @@ -414,7 +487,9 @@ | |||
2726 | 414 | open_port(19888) | 487 | open_port(19888) |
2727 | 415 | open_port(8088) | 488 | open_port(8088) |
2728 | 416 | open_port(10020) | 489 | open_port(10020) |
2730 | 417 | relation_set(nodemanager_hostname=get_unit_hostname()) | 490 | open_port(8042) |
2731 | 491 | |||
2732 | 492 | relation_set(nodemanager_hostname=socket.gethostname()) | ||
2733 | 418 | 493 | ||
2734 | 419 | 494 | ||
2735 | 420 | @hooks.hook('resourcemanager-relation-broken') | 495 | @hooks.hook('resourcemanager-relation-broken') |
2736 | @@ -437,25 +512,32 @@ | |||
2737 | 437 | 512 | ||
2738 | 438 | if is_jvm_service_active("NameNode"): | 513 | if is_jvm_service_active("NameNode"): |
2739 | 439 | relation_set(nameNodeReady=True) | 514 | relation_set(nameNodeReady=True) |
2741 | 440 | relation_set(namenode_hostname=get_unit_hostname()) | 515 | relation_set(namenode_hostname=socket.gethostname()) |
2742 | 441 | return | 516 | return |
2743 | 442 | setHadoopEnvVar() | 517 | setHadoopEnvVar() |
2744 | 443 | setDirPermission(os.environ['DFS_NAME_DIR'], os.environ['HDFS_USER'], os.environ['HADOOP_GROUP'], 0755) | 518 | setDirPermission(os.environ['DFS_NAME_DIR'], os.environ['HDFS_USER'], os.environ['HADOOP_GROUP'], 0755) |
2746 | 444 | relation_set(namenode_hostname=get_unit_hostname()) | 519 | relation_set(namenode_hostname=socket.gethostname()) |
2747 | 445 | configureHDFS(unit_get('private-address')) | 520 | configureHDFS(unit_get('private-address')) |
2748 | 446 | format_namenode(os.environ["HDFS_USER"]) | 521 | format_namenode(os.environ["HDFS_USER"]) |
2749 | 447 | start_namenode(os.environ["HDFS_USER"]) | 522 | start_namenode(os.environ["HDFS_USER"]) |
2750 | 523 | # Namenode might not be ready yet | ||
2751 | 524 | sleep(15) | ||
2752 | 525 | # Create hdfs (superuser) and ubuntu (default user) | ||
2753 | 526 | HDFS_command("dfs -mkdir -p /user/hdfs") | ||
2754 | 527 | HDFS_command("dfs -mkdir -p /user/ubuntu") | ||
2755 | 528 | HDFS_command("dfs -chown ubuntu /user/ubuntu") | ||
2756 | 529 | HDFS_command("dfs -chmod -R 755 /user/ubuntu") | ||
2757 | 448 | start_jobhistory() | 530 | start_jobhistory() |
2758 | 449 | open_port(8020) | 531 | open_port(8020) |
2759 | 450 | open_port(8010) | 532 | open_port(8010) |
2760 | 451 | open_port(50070) | 533 | open_port(50070) |
2761 | 452 | open_port(50075) | 534 | open_port(50075) |
2762 | 453 | open_port(8480) | 535 | open_port(8480) |
2764 | 454 | open_port(50470) | 536 | open_port(50070) |
2765 | 537 | open_port(45454) | ||
2766 | 455 | if not is_jvm_service_active("NameNode"): | 538 | if not is_jvm_service_active("NameNode"): |
2767 | 456 | log("error ==> NameNode failed to start") | 539 | log("error ==> NameNode failed to start") |
2768 | 457 | sys.exit(1) | 540 | sys.exit(1) |
2769 | 458 | sleep(5) | ||
2770 | 459 | relation_set(nameNodeReady=True) | 541 | relation_set(nameNodeReady=True) |
2771 | 460 | 542 | ||
2772 | 461 | 543 | ||
2773 | @@ -480,7 +562,7 @@ | |||
2774 | 480 | open_port(8480) | 562 | open_port(8480) |
2775 | 481 | open_port(50010) | 563 | open_port(50010) |
2776 | 482 | open_port(50075) | 564 | open_port(50075) |
2778 | 483 | relation_set(datanode_hostname = get_unit_hostname()) | 565 | relation_set(datanode_hostname = socket.gethostname()) |
2779 | 484 | 566 | ||
2780 | 485 | 567 | ||
2781 | 486 | @hooks.hook('config-changed') | 568 | @hooks.hook('config-changed') |
2782 | @@ -499,6 +581,15 @@ | |||
2783 | 499 | sys.exit(0) | 581 | sys.exit(0) |
2784 | 500 | log("Configuring namenode - changed phase", "INFO") | 582 | log("Configuring namenode - changed phase", "INFO") |
2785 | 501 | fileSetKV(hosts_path, relation_get('private-address')+' ', datanode_host) | 583 | fileSetKV(hosts_path, relation_get('private-address')+' ', datanode_host) |
2786 | 584 | |||
2787 | 585 | # Upload the MapReduce tarball to HDFS. | ||
2788 | 586 | if not fconfigured("mapred_init"): | ||
2789 | 587 | HDFS_command("dfs -mkdir -p /hdp/apps/2.2.0.0-2041/mapreduce/") | ||
2790 | 588 | HDFS_command("dfs -put /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.2.0.0-2041/mapreduce/") | ||
2791 | 589 | HDFS_command("dfs -chown -R hdfs:hadoop /hdp") | ||
2792 | 590 | HDFS_command("dfs -chmod -R 555 /hdp/apps/2.2.0.0-2041/mapreduce") | ||
2793 | 591 | HDFS_command("dfs -chmod -R 444 /hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz") | ||
2794 | 592 | |||
2795 | 502 | 593 | ||
2796 | 503 | @hooks.hook('start') | 594 | @hooks.hook('start') |
2797 | 504 | def start(): | 595 | def start(): |
2798 | @@ -525,14 +616,14 @@ | |||
2799 | 525 | 616 | ||
2800 | 526 | @hooks.hook('compute-nodes-relation-joined') | 617 | @hooks.hook('compute-nodes-relation-joined') |
2801 | 527 | def compute_nodes_relation_joined(): | 618 | def compute_nodes_relation_joined(): |
2804 | 528 | log("==> compute_nodes_relation_joined {}".format(get_unit_hostname()),"INFO") | 619 | log("==> compute_nodes_relation_joined {}".format(socket.gethostname()),"INFO") |
2805 | 529 | relation_set(hostname=get_unit_hostname()) | 620 | relation_set(hostname=socket.gethostname()) |
2806 | 530 | 621 | ||
2807 | 531 | 622 | ||
2808 | 532 | @hooks.hook('hadoop-nodes-relation-joined') | 623 | @hooks.hook('hadoop-nodes-relation-joined') |
2809 | 533 | def hadoop_nodes_relation_joined(): | 624 | def hadoop_nodes_relation_joined(): |
2812 | 534 | log("==> hadoop_nodes_relation_joined {}".format(get_unit_hostname()),"INFO") | 625 | log("==> hadoop_nodes_relation_joined {}".format(socket.gethostname()),"INFO") |
2813 | 535 | relation_set(hostname=get_unit_hostname()) | 626 | relation_set(hostname=socket.gethostname()) |
2814 | 536 | 627 | ||
2815 | 537 | 628 | ||
2816 | 538 | 629 | ||
2817 | @@ -555,7 +646,5 @@ | |||
2818 | 555 | resourceManagerReady = False | 646 | resourceManagerReady = False |
2819 | 556 | 647 | ||
2820 | 557 | 648 | ||
2821 | 558 | |||
2822 | 559 | |||
2823 | 560 | if __name__ == "__main__": | 649 | if __name__ == "__main__": |
2824 | 561 | hooks.execute(sys.argv) | 650 | hooks.execute(sys.argv) |
2825 | 562 | 651 | ||
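Review note on the fixed `sleep(15)` inserted before the `HDFS_command` bootstrap calls above: if the NameNode takes longer than 15 seconds to come up and leave safe mode, the `mkdir`/`chown` steps fail anyway. Polling would be more robust. A minimal sketch, assuming `hdfs dfsadmin -safemode get` is on the unit's PATH and prints "Safe mode is OFF" once the NameNode is ready; `wait_for_namenode` and `namenode_ready` are hypothetical helpers, not part of this branch:

```python
import subprocess
import time


def namenode_ready(safemode_output):
    """Parse the output of 'hdfs dfsadmin -safemode get'."""
    return 'Safe mode is OFF' in safemode_output


def wait_for_namenode(timeout=120, interval=5):
    """Poll until the NameNode answers RPC and has left safe mode,
    instead of sleeping for a fixed 15 seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            out = subprocess.check_output(
                ['su', 'hdfs', '-c', 'hdfs dfsadmin -safemode get'])
            if namenode_ready(out.decode()):
                return True
        except (subprocess.CalledProcessError, OSError):
            pass  # RPC server not up yet; retry
        time.sleep(interval)
    return False
```

The hook could then call `wait_for_namenode()` and only proceed to the `HDFS_command` calls on success, logging an error otherwise.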
=== modified file 'hooks/hdputils.py'
--- hooks/hdputils.py	2014-11-21 18:37:23 +0000
+++ hooks/hdputils.py	2015-02-13 00:38:49 +0000
@@ -17,24 +17,27 @@
 
 def install_base_pkg(packages):
     log ("==> install_base_pkg", "INFO")
-    wgetPkg("http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.1.3.0/hdp.list -O /etc/apt/sources.list.d/hdp.list","")
-    cmd =gpg_script
+    wgetPkg("http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/GA/2.2.0.0/hdp.list",
+            "/etc/apt/sources.list.d/hdp.list")
+    cmd = gpg_script
     subprocess.call(cmd)
     apt_update()
     apt_install(packages)
     if not os.path.isdir(os.path.join(os.path.sep,'usr','lib', 'hadoop')):
         log("Error, apt-get install Hadoop failed", "ERROR")
         sys.exit(1)
-    os.chdir(home);
-    wgetPkg("http://public-repo-1.hortonworks.com/HDP/tools/2.2.0.0/hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz","")
-    if tarfile.is_tarfile(tarfilename):
-        tball = tarfile.open(tarfilename)
+    os.chdir(home)
+    wgetPkg("http://public-repo-1.hortonworks.com/HDP/tools/2.2.0.0/hdp_manual_install_rpm_helper_files-2.2.0.0.2041.tar.gz",
+            tarfilename)
+    if not tarfile.is_tarfile(tarfilename):
+        log.error('Unable to extract Hadoop Tarball')
+        sys.exit(1)
+    with tarfile.open(tarfilename) as tball:
+        extracted_dirname = tball.next().name
         tball.extractall(home)
-    else:
-        log ("Unable to extract Hadoop Tarball ", "ERROR")
     if os.path.isdir(hdpScript):
         shutil.rmtree(hdpScript)
-    os.rename(tarfilenamePre, hdpScript)
+    os.rename(extracted_dirname, hdpScript)
     log("<== install_base_pkg", "INFO")
 
 def uninstall_base_pkg(packages):
@@ -116,10 +119,9 @@
 ################ Global values #########################
 home = os.path.join(os.path.sep, "home", "ubuntu")
 javaPath = "/usr/lib/jvm/java-1.7.0-openjdk-amd64"
-tarfilename="hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz"
-tarfilenamePre="hdp_manual_install_rpm_helper_files-2.1.1.385"
+tarfilename="/tmp/hdp-helpers.tar.gz"
 HDP_PGP_SCRIPT = 'gpg_ubuntu.sh'
-gpg_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'], os.path.sep, os.environ['CHARM_DIR'], 'files', 'scripts',HDP_PGP_SCRIPT)
+gpg_script = os.path.join(os.environ['CHARM_DIR'], 'files', 'scripts', HDP_PGP_SCRIPT)
 hdpScript = "hdp_scripts"
 hdpScriptPath = os.path.join(os.path.sep,home, hdpScript,'scripts')
 usersAndGroupsScript = os.path.join(os.path.sep, hdpScriptPath, "usersAndGroups.sh")
 
=== added file 'resources.yaml'
--- resources.yaml	1970-01-01 00:00:00 +0000
+++ resources.yaml	2015-02-13 00:38:49 +0000
@@ -0,0 +1,11 @@
+resources:
+  pathlib:
+    pypi: path.py>=7.0
+  pyaml:
+    pypi: pyaml
+  six:
+    pypi: six
+  charmhelpers:
+    pypi: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150212185722-5i0c57g7725yw71r/charmhelpers0.2.2.ta-20150203190143-yrb1xsbp2bffbyp5-1/charmhelpers-0.2.2.tar.gz
+    hash: 4a6daacc352f7381b38e3efffe760f6c43a15452bdaf66e50f234ea425b91873
+    hash_type: sha256
+
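The new resources.yaml pins the un-vendored charmhelpers tarball to a sha256 digest via the `hash`/`hash_type` fields. Those fields imply an integrity check along these lines when the resource is fetched (a generic sketch only; the actual fetching and verification is done by the juju resources tooling, and `verify_resource` is a hypothetical name):

```python
import hashlib


def verify_resource(spec, payload):
    """Check downloaded bytes against one resources.yaml entry, e.g.
    spec = {'hash': '<hex digest>', 'hash_type': 'sha256'}."""
    digest = hashlib.new(spec['hash_type'], payload).hexdigest()
    return digest == spec['hash']
```

Pinning the digest means a silently replaced or truncated download fails loudly at install time instead of producing a half-working charm.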
=== modified file 'tests/01-hadoop-cluster-deployment-1.py'
--- tests/01-hadoop-cluster-deployment-1.py	2014-11-21 18:37:23 +0000
+++ tests/01-hadoop-cluster-deployment-1.py	2015-02-13 00:38:49 +0000
@@ -1,141 +1,85 @@
 #!/usr/bin/python3
 
+import unittest
 import amulet
 import yaml
 import os
-class TestDeployment(object):
-    def __init__(self):
-        self.d = amulet.Deployment(series='trusty')
-        bpath = os.path.join(os.path.dirname( __file__), "hadoop_cluster.yaml")
+
+
+class TestDeployment(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        cls.d = amulet.Deployment(series='trusty')
+        bpath = os.path.join(os.path.dirname(__file__), "hadoop_cluster.yaml")
         with open(bpath) as f:
             bun = f.read()
-        self.d.load(yaml.safe_load(bun))
+        cls.d.load(yaml.safe_load(bun))
         try:
-            self.d.setup(timeout=9000)
-            self.d.sentry.wait()
+            cls.d.setup(timeout=9000)
+            cls.d.sentry.wait()
         except amulet.helpers.TimeoutError:
             amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
         except:
             raise
-        self.master_unit = self.d.sentry.unit['yarn-hdfs-master/0']
-        self.compute_unit = self.d.sentry.unit['compute-node/0']
+        cls.master_unit = cls.d.sentry.unit['yarn-hdfs-master/0']
+        cls.compute_unit = cls.d.sentry.unit['compute-node/0']
 
-    def run(self):
-        for test in dir(self):
-            if test.startswith('test_'):
-                getattr(self, test)()
-    ############################################################
-    # Validate hadoop services on master node have been started
-    ############################################################
     def test_hadoop_master_service_status(self):
-        o,c= self.master_unit.run("jps | awk '{print $2}'")
-        if o.find('ResourceManager') == -1:
-            amulet.raise_status(amulet.FAIL, msg="ResourceManager not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="ResourceManager started")
-        if o.find('JobHistoryServer') == -1:
-            amulet.raise_status(amulet.FAIL, msg="JobHistoryServer not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="JobHistoryServer started")
-        if o.find('NameNode') == -1:
-            amulet.raise_status(amulet.FAIL, msg="NameNode not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="NameNode started")
+        """
+        Validate hadoop services on master node have been started
+        """
+        output, retcode = self.master_unit.run("pgrep -a java")
+        assert 'ResourceManager' in output, "ResourceManager not started"
+        assert 'JobHistoryServer' in output, "JobHistoryServer not started"
+        assert 'NameNode' in output, "NameNode not started"
 
-    ############################################################
-    # Validate hadoop services on compute node have been started
-    ############################################################
     def test_hadoop_compute_service_status(self):
-        o,c= self.compute_unit.run("jps | awk '{print $2}'")
-        if o.find('NodeManager') == -1:
-            amulet.raise_status(amulet.FAIL, msg="NodeManager not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="NodeManager started")
-        if o.find('DataNode') == -1:
-            amulet.raise_status(amulet.FAIL, msg="DataServer not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="DataServer started")
-    ###########################################################################
-    # Validate admin few hadoop activities on HDFS cluster.
-    ###########################################################################
-    # 1) This test validates mkdir on hdfs cluster
-    ###########################################################################
-    def test_hdfs_mkdir(self):
-        o,c= slef.master_unit.run("su hdfs -c 'hdfs dfs -mkdir -p /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully created a user directory on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" Created a user directory on hdfs FAILED")
-    ###########################################################################
-    # 2) This test validates change hdfs dir owner on the cluster
-    ###########################################################################
-    def test_hdfs_dir_owner(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -chown ubuntu:ubuntu /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully assigned an owner to directory on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" assigning an owner to hdfs directory FAILED")
-
-    ###########################################################################
-    # 3) This test validates setting hdfs directory access permission on the cluster
-    ###########################################################################
-    def test_hdfs_dir_permission(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -chmod -R 755 /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully set directory permission on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" setting directory permission on hdfs FAILED")
-
-    ###########################################################################
-    # Validate yarn mapreduce operation
-    # 1) validate mapreduce execution - writing to hdfs
-    # 2) validate successful mapreduce operation after the execution
-    ###########################################################################
-    def test_yarn_mapreduce_exe1(self):
-        o,c= self.master_unit.run("su ubuntu -c 'hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-*.jar teragen 10000 /user/ubuntu/teragenout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successful execution of teragen mapreduce app")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" teragen FAILED")
-        ###########################################################################
-        # 2) validate successful mapreduce operation after the execution
-        ###########################################################################
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -ls /user/ubuntu/teragenout/_SUCCESS'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce operation was a success")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce operation was not a success")
-
-    ###########################################################################
-    # Validate yarn mapreduce operation
-    # 1) validate mapreduce execution - reading and writing to hdfs
-    # 2) validate successful mapreduce operation after the execution
-    ###########################################################################
-    def test_yarn_mapreduce_exe2(self):
-        o,c= self.master_unit.run("su ubuntu -c 'hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-*.jar terasort /user/ubuntu/teragenout /user/ubuntu/terasortout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successful execution of terasort mapreduce app")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" terasort FAILED")
-        ###########################################################################
-        # 2) validate a successful mapreduce operation after the execution
-        ###########################################################################
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -ls /user/ubuntu/terasortout/_SUCCESS'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce operation was a success")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce operation was not a success")
-    ###########################################################################
-    # validate a successful deletion of mapreduce operation result from hdfs
-    ###########################################################################
-    def test_hdfs_delete_dir(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -rm -r /user/ubuntu/teragenout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce result deleted from hdfs cluster")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce result was not deleted from hdfs cluster")
+        """
+        Validate hadoop services on compute node have been started
+        """
+        output, retcode = self.compute_unit.run("pgrep -a java")
+        assert 'NodeManager' in output, "NodeManager not started"
+        assert 'DataNode' in output, "DataNode not started"
+
+    def test_hdfs_dir(self):
+        """
+        Validate a few admin hadoop activities on the HDFS cluster:
+        1) This test validates mkdir on hdfs cluster
+        2) This test validates change hdfs dir owner on the cluster
+        3) This test validates setting hdfs directory access permission on the cluster
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -mkdir -p /user/ubuntu'")
+        assert retcode == 0, "Creating a user directory on hdfs FAILED:\n{}".format(output)
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -chown ubuntu:ubuntu /user/ubuntu'")
+        assert retcode == 0, "Assigning an owner to hdfs directory FAILED:\n{}".format(output)
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -chmod -R 755 /user/ubuntu'")
+        assert retcode == 0, "Setting directory permission on hdfs FAILED:\n{}".format(output)
+
+    def test_yarn_mapreduce_exe(self):
+        """
+        Validate yarn mapreduce operations:
+        1) validate mapreduce execution - writing to hdfs
+        2) validate successful mapreduce operation after the execution
+        3) validate mapreduce execution - reading and writing to hdfs
+        4) validate successful mapreduce operation after the execution
+        5) validate successful deletion of mapreduce operation result from hdfs
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        jar_file = '/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar'
+        test_steps = [
+            ('teragen', "su ubuntu -c 'hadoop jar {} teragen 10000 /user/ubuntu/teragenout'".format(jar_file)),
+            ('mapreduce #1', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/teragenout/_SUCCESS'"),
+            ('terasort', "su ubuntu -c 'hadoop jar {} terasort /user/ubuntu/teragenout /user/ubuntu/terasortout'".format(jar_file)),
+            ('mapreduce #2', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/terasortout/_SUCCESS'"),
+            ('cleanup', "su hdfs -c 'hdfs dfs -rm -r /user/ubuntu/teragenout'"),
+        ]
+        for name, step in test_steps:
+            output, retcode = self.master_unit.run(step)
+            assert retcode == 0, "{} FAILED:\n{}".format(name, output)
 
 
 if __name__ == '__main__':
-    runner = TestDeployment()
-    runner.run()
+    unittest.main()
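The key fix in this test rewrite is the "exit on PASS" bug noted in revision 37: `amulet.raise_status()` calls `sys.exit()`, so the old per-check PASS branches terminated the entire test run after the first successful check, and every later assertion was silently skipped. The new tests use plain `assert` under `unittest`, so all checks run. A minimal, self-contained illustration of the two patterns (no amulet required; `old_style_check`/`new_style_check` are demo names, not code from the branch):

```python
import sys


def old_style_check(output):
    """Sketch of the pre-rewrite pattern: exiting on PASS means the
    remaining service checks are never reached."""
    if 'ResourceManager' not in output:
        sys.exit("FAIL: ResourceManager not started")
    else:
        sys.exit("PASS: ResourceManager started")  # exits here even on success
    # Unreachable in the old test:
    if 'JobHistoryServer' not in output:
        sys.exit("FAIL: JobHistoryServer not started")


def new_style_check(output):
    """The rewritten pattern: plain asserts, so every check runs and a
    failure reports which service is missing."""
    assert 'ResourceManager' in output, "ResourceManager not started"
    assert 'JobHistoryServer' in output, "JobHistoryServer not started"
    return True
```

Under `unittest`, each failed assert marks only its own test case as failed, and `unittest.main()` still reports the full pass/fail summary for the suite.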