Merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk into lp:charms/trusty/hdp-hadoop

Proposed by Cory Johns
Status: Needs review
Proposed branch: lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Merge into: lp:charms/trusty/hdp-hadoop
Diff against target: 3106 lines (+361/-2350)
18 files modified
README.md (+3/-3)
files/scripts/terasort.sh (+15/-0)
hooks/bdutils.py (+44/-28)
hooks/bootstrap.py (+9/-0)
hooks/charmhelpers/core/fstab.py (+0/-114)
hooks/charmhelpers/core/hookenv.py (+0/-498)
hooks/charmhelpers/core/host.py (+0/-325)
hooks/charmhelpers/fetch/__init__.py (+0/-349)
hooks/charmhelpers/fetch/archiveurl.py (+0/-63)
hooks/charmhelpers/fetch/bzrurl.py (+0/-50)
hooks/charmhelpers/lib/ceph_utils.py (+0/-315)
hooks/charmhelpers/lib/cluster_utils.py (+0/-128)
hooks/charmhelpers/lib/utils.py (+0/-221)
hooks/charmhelpers/setup.py (+0/-12)
hooks/hdp-hadoop-common.py (+199/-110)
hooks/hdputils.py (+14/-12)
resources.yaml (+11/-0)
tests/01-hadoop-cluster-deployment-1.py (+66/-122)
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Reviewer Review Type Date Requested Status
amir sanjar (community) Approve
charmers Pending
Review via email: mp+245602@code.launchpad.net

Description of the change

Updates to support HDP 2.2

Revision history for this message
amir sanjar (asanjar) :
review: Approve
Unmerged revisions

41. By Cory Johns

Fix jps bug (and un-vendor charmhelpers)

40. By amir sanjar

fixes for enabling hdp 2.2 support

39. By Cory Johns

Fixed mapreduce TODO config items and refactored setHadoopConfigXML

38. By Cory Johns

Cleaned up mapreduce test case

37. By Cory Johns

Refactor test case and fix "exit on PASS" bug in test

36. By Cory Johns

Default memory settings

35. By Cory Johns

Fixed hostname yarn property and cleaned up test case

34. By Cory Johns

Fixed mapreduce paths

33. By Cory Johns

Fixed yarn paths

32. By Cory Johns

Fixed TODO-JDK-PATH placeholder from upstream config

Preview Diff

=== modified file 'README.md'
--- README.md 2014-09-10 18:29:03 +0000
+++ README.md 2015-02-13 00:38:49 +0000
@@ -1,5 +1,5 @@
 ## Overview
-**What is Hortonworks Apache Hadoop (HDP 2.1.3) ?**
+**What is Hortonworks Apache Hadoop (HDP 2.2.0) ?**
 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.
@@ -10,7 +10,7 @@
 at the application layer, so delivering a highly-availabile service on top of a
 cluster of computers, each of which may be prone to failures.
 
-Apache Hadoop 2.4.1 consists of significant improvements over the previous
+Apache Hadoop 2.6.x consists of significant improvements over the previous
 stable release (hadoop-1.x).
 
 Here is a short overview of the improvments to both HDFS and MapReduce.
@@ -51,7 +51,7 @@
 
 This supports deployments of Hadoop in a number of configurations.
 
-### HDP 2.1.3 Usage #1: Combined HDFS and MapReduce
+### HDP 2.2.0 Usage #1: Combined HDFS and MapReduce
 
 In this configuration, the YARN ResourceManager is deployed on the same
 service units as HDFS namenode and the HDFS datanodes also run YARN NodeManager::
 
=== added file 'files/jujuresources-0.2.3.tar.gz'
Binary files files/jujuresources-0.2.3.tar.gz 1970-01-01 00:00:00 +0000 and files/jujuresources-0.2.3.tar.gz 2015-02-13 00:38:49 +0000 differ
=== added file 'files/scripts/terasort.sh'
--- files/scripts/terasort.sh 1970-01-01 00:00:00 +0000
+++ files/scripts/terasort.sh 2015-02-13 00:38:49 +0000
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+
+SIZE=10000
+NUM_MAPS=100
+NUM_REDUCES=100
+IN_DIR=in_dir
+OUT_DIR=out_dir
+hadoop fs -rm -r -skipTrash ${IN_DIR} || true
+hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen ${SIZE} ${IN_DIR}
+
+sleep 20
+
+hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort ${IN_DIR} ${OUT_DIR}
+
=== modified file 'hooks/bdutils.py'
--- hooks/bdutils.py 2014-09-17 14:49:16 +0000
+++ hooks/bdutils.py 2015-02-13 00:38:49 +0000
@@ -1,11 +1,15 @@
 #!/usr/bin/python
 import os
 import pwd
+import sys
 import grp
 import subprocess
+import hashlib
+import shlex
 
 from shutil import rmtree
 from charmhelpers.core.hookenv import log
+from charmhelpers.contrib.bigdata.utils import jps
 
 def createPropertyElement(name, value):
     import xml.etree.ElementTree as ET
@@ -22,26 +26,21 @@
     import xml.dom.minidom as minidom
     print("==> setHadoopConfigXML ","INFO")
     import xml.etree.ElementTree as ET
-    found = False
     with open(xmlfileNamePath,'rb+') as f:
         root = ET.parse(f).getroot()
-        proList = root.findall("property")
-        for p in proList:
-            if found:
+        for p in root.findall("property"):
+            if p.find('name').text == name:
+                if value is None:
+                    root.remove(p)
+                else:
+                    p.find('value').text = value
                 break
-            cList = p.getchildren()
-            for c in cList:
-                if c.text == name:
-                    p.find("value").text = value
-                    found = True
-                    break
+        else:
+            if value is not None:
+                root.append(createPropertyElement(name, value))
 
         f.seek(0)
-        if not found:
-            root.append(createPropertyElement(name, value))
-            f.write((minidom.parseString(ET.tostring(root, encoding='UTF-8'))).toprettyxml(indent="\t"))
-        else:
-            f.write(ET.tostring(root, encoding='UTF-8'))
+        f.write(ET.tostring(root, encoding='UTF-8'))
         f.truncate()
 
 def setDirPermission(path, owner, group, access):
@@ -67,15 +66,21 @@
     for r,d,f in os.walk(path):
         os.chmod( r , mode)
 
-def wgetPkg(pkgName, crypType):
+def wgetPkg(url, dest):
     log("==> wgetPkg ")
-    crypFileName= pkgName+'.'+crypType
-    cmd = 'wget '+pkgName
-    subprocess.call(cmd.split())
-    if crypType:
-        cmd = ['wget', crypFileName]
-        subprocess.call(cmd)
-        #TODO -- cryption validation
+    output = subprocess.check_output(['wget', '-S', url, '-O', dest],
+                                     stderr=subprocess.STDOUT)
+    etag = [l for l in output.split('\n') if l.startswith(' ETag')]
+    if etag:
+        etag = etag[0].split(':')[1].strip(' "')
+        if '-' in etag:
+            log.warn('Multi-part ETag not supported; cannot verify checksum')
+            return  # TODO: Support multi-part ETag algorithm
+        with open(dest) as fp:
+            md5 = hashlib.md5(fp.read()).hexdigest()
+        if md5 != etag:
+            log.error('Checksum mismatch: %s != %s' % (md5, etag))
+            sys.exit(1)
 
 def append_bashrc(line):
     log("==> append_bashrc","INFO")
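The new wgetPkg verifies a download by comparing the file's MD5 against the HTTP ETag header, giving up on multi-part (hyphenated) ETags. That check in isolation can be sketched as follows; the `verify_etag` helper is illustrative, not part of the charm, and operates on bytes rather than a downloaded file:

```python
import hashlib


def verify_etag(data, etag_header):
    """Return True if the ETag matches the MD5 of data, None if the
    ETag is multi-part (contains '-') and so cannot be verified,
    otherwise False.

    Follows the logic added to wgetPkg above: strip quotes, bail out
    on multi-part ETags, otherwise compare against hashlib.md5.
    """
    etag = etag_header.strip(' "')
    if '-' in etag:
        return None  # multi-part ETag: checksum not verifiable
    return hashlib.md5(data).hexdigest() == etag
```

Note that an ETag is only conventionally an MD5 (S3 and some servers do this for single-part uploads); the early return for hyphenated ETags exists because multi-part uploads hash per-chunk.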
@@ -124,10 +129,21 @@
         os.environ[ll[0]] = ll[1].strip().strip(';').strip("\"").strip()
 
 def is_jvm_service_active(processname):
-    cmd=["jps"]
-    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-    out, err = p.communicate()
-    if err == None and str(out).find(processname) != -1:
+    return jps(processname)
+
+
+def HDFS_command(command):
+    cmd = shlex.split("su hdfs -c 'hdfs {c}'".format(c=command))
+    return subprocess.check_output(cmd)
+
+def fconfigured(filename):
+    fpath = os.path.join(os.path.sep, 'home', 'ubuntu', filename)
+    if os.path.isfile(fpath):
         return True
     else:
-        return False
\ No newline at end of file
+        touch(fpath)
+        return False
+
+def touch(fname, times=None):
+    with open(fname, 'a'):
+        os.utime(fname, times)
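The setHadoopConfigXML rewrite above replaces the `found` flag with a single for/else scan over the `<property>` elements: the `else` branch runs only when the loop completes without a `break`, i.e. when no matching property exists. A standalone sketch of the same pattern (the `set_property` helper and sample XML here are illustrative, not part of the charm):

```python
import xml.etree.ElementTree as ET


def set_property(xml_text, name, value):
    """Set or replace a <property> by name; value=None removes it.

    Mirrors the for/else flow of the refactored setHadoopConfigXML:
    break on a match, append in the else branch when nothing matched.
    """
    root = ET.fromstring(xml_text)
    for p in root.findall('property'):
        if p.find('name').text == name:
            if value is None:
                root.remove(p)          # delete the matched property
            else:
                p.find('value').text = value  # update in place
            break
    else:
        if value is not None:
            prop = ET.SubElement(root, 'property')
            ET.SubElement(prop, 'name').text = name
            ET.SubElement(prop, 'value').text = value
    return ET.tostring(root, encoding='unicode')


conf = ('<configuration>'
        '<property><name>dfs.replication</name><value>3</value></property>'
        '</configuration>')
```

The for/else form also fixes the bug in the old code, which scanned all child elements (including `<value>`) for a text match and wrote the file twice when pretty-printing.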
=== added file 'hooks/bootstrap.py'
--- hooks/bootstrap.py 1970-01-01 00:00:00 +0000
+++ hooks/bootstrap.py 2015-02-13 00:38:49 +0000
@@ -0,0 +1,9 @@
+import subprocess
+
+
+def install_charmhelpers():
+    subprocess.check_call(['apt-get', 'install', '-yq', 'python-pip'])
+    subprocess.check_call(['pip', 'install', 'files/jujuresources-0.2.3.tar.gz'])
+    import jujuresources
+    jujuresources.fetch()
+    jujuresources.install(['pathlib', 'pyaml', 'six', 'charmhelpers'])
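bootstrap.py is how the branch un-vendors charmhelpers: install pip, install the bundled jujuresources tarball, then import it inside the same hook process. The install-then-import pattern can be sketched generically; the `ensure_module` helper below is illustrative, not part of the charm:

```python
import importlib
import subprocess
import sys


def ensure_module(module_name, pip_name=None):
    """Import module_name, pip-installing pip_name on ImportError.

    Same shape as install_charmhelpers above: fetch the package first,
    then import it in-process rather than shelling out to use it.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        subprocess.check_call(
            [sys.executable, '-m', 'pip', 'install', pip_name or module_name])
        return importlib.import_module(module_name)
```

Installing at hook time trades a network dependency during `install` for not having to keep a synced copy of charmhelpers in the charm tree.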
=== removed directory 'hooks/charmhelpers'
=== removed file 'hooks/charmhelpers/__init__.py'
=== removed directory 'hooks/charmhelpers/core'
=== removed file 'hooks/charmhelpers/core/__init__.py'
=== removed file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
@@ -1,114 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
-
-import os
-
-
-class Fstab(file):
-    """This class extends file in order to implement a file reader/writer
-    for file `/etc/fstab`
-    """
-
-    class Entry(object):
-        """Entry class represents a non-comment line on the `/etc/fstab` file
-        """
-        def __init__(self, device, mountpoint, filesystem,
-                     options, d=0, p=0):
-            self.device = device
-            self.mountpoint = mountpoint
-            self.filesystem = filesystem
-
-            if not options:
-                options = "defaults"
-
-            self.options = options
-            self.d = d
-            self.p = p
-
-        def __eq__(self, o):
-            return str(self) == str(o)
-
-        def __str__(self):
-            return "{} {} {} {} {} {}".format(self.device,
-                                              self.mountpoint,
-                                              self.filesystem,
-                                              self.options,
-                                              self.d,
-                                              self.p)
-
-    DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
-
-    def __init__(self, path=None):
-        if path:
-            self._path = path
-        else:
-            self._path = self.DEFAULT_PATH
-        file.__init__(self, self._path, 'r+')
-
-    def _hydrate_entry(self, line):
-        return Fstab.Entry(*filter(
-            lambda x: x not in ('', None),
-            line.strip("\n").split(" ")))
-
-    @property
-    def entries(self):
-        self.seek(0)
-        for line in self.readlines():
-            try:
-                if not line.startswith("#"):
-                    yield self._hydrate_entry(line)
-            except ValueError:
-                pass
-
-    def get_entry_by_attr(self, attr, value):
-        for entry in self.entries:
-            e_attr = getattr(entry, attr)
-            if e_attr == value:
-                return entry
-        return None
-
-    def add_entry(self, entry):
-        if self.get_entry_by_attr('device', entry.device):
-            return False
-
-        self.write(str(entry) + '\n')
-        self.truncate()
-        return entry
-
-    def remove_entry(self, entry):
-        self.seek(0)
-
-        lines = self.readlines()
-
-        found = False
-        for index, line in enumerate(lines):
-            if not line.startswith("#"):
-                if self._hydrate_entry(line) == entry:
-                    found = True
-                    break
-
-        if not found:
-            return False
-
-        lines.remove(line)
-
-        self.seek(0)
-        self.write(''.join(lines))
-        self.truncate()
-        return True
-
-    @classmethod
-    def remove_by_mountpoint(cls, mountpoint, path=None):
-        fstab = cls(path=path)
-        entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
-        if entry:
-            return fstab.remove_entry(entry)
-        return False
-
-    @classmethod
-    def add(cls, device, mountpoint, filesystem, options=None, path=None):
-        return cls(path=path).add_entry(Fstab.Entry(device,
-                                                    mountpoint, filesystem,
-                                                    options=options))
=== removed file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
@@ -1,498 +0,0 @@
-"Interactions with the Juju environment"
-# Copyright 2013 Canonical Ltd.
-#
-# Authors:
-#  Charm Helpers Developers <juju@lists.ubuntu.com>
-
-import os
-import json
-import yaml
-import subprocess
-import sys
-import UserDict
-from subprocess import CalledProcessError
-
-CRITICAL = "CRITICAL"
-ERROR = "ERROR"
-WARNING = "WARNING"
-INFO = "INFO"
-DEBUG = "DEBUG"
-MARKER = object()
-
-cache = {}
-
-
-def cached(func):
-    """Cache return values for multiple executions of func + args
-
-    For example:
-
-        @cached
-        def unit_get(attribute):
-            pass
-
-        unit_get('test')
-
-    will cache the result of unit_get + 'test' for future calls.
-    """
-    def wrapper(*args, **kwargs):
-        global cache
-        key = str((func, args, kwargs))
-        try:
-            return cache[key]
-        except KeyError:
-            res = func(*args, **kwargs)
-            cache[key] = res
-            return res
-    return wrapper
-
-
-def flush(key):
-    """Flushes any entries from function cache where the
-    key is found in the function+args """
-    flush_list = []
-    for item in cache:
-        if key in item:
-            flush_list.append(item)
-    for item in flush_list:
-        del cache[item]
-
-
-def log(message, level=None):
-    """Write a message to the juju log"""
-    command = ['juju-log']
-    if level:
-        command += ['-l', level]
-    command += [message]
-    subprocess.call(command)
-
-
-class Serializable(UserDict.IterableUserDict):
-    """Wrapper, an object that can be serialized to yaml or json"""
-
-    def __init__(self, obj):
-        # wrap the object
-        UserDict.IterableUserDict.__init__(self)
-        self.data = obj
-
-    def __getattr__(self, attr):
-        # See if this object has attribute.
-        if attr in ("json", "yaml", "data"):
-            return self.__dict__[attr]
-        # Check for attribute in wrapped object.
-        got = getattr(self.data, attr, MARKER)
-        if got is not MARKER:
-            return got
-        # Proxy to the wrapped object via dict interface.
-        try:
-            return self.data[attr]
-        except KeyError:
-            raise AttributeError(attr)
-
-    def __getstate__(self):
-        # Pickle as a standard dictionary.
-        return self.data
-
-    def __setstate__(self, state):
-        # Unpickle into our wrapper.
-        self.data = state
-
-    def json(self):
-        """Serialize the object to json"""
-        return json.dumps(self.data)
-
-    def yaml(self):
-        """Serialize the object to yaml"""
-        return yaml.dump(self.data)
-
-
-def execution_environment():
-    """A convenient bundling of the current execution context"""
-    context = {}
-    context['conf'] = config()
-    if relation_id():
-        context['reltype'] = relation_type()
-        context['relid'] = relation_id()
-        context['rel'] = relation_get()
-    context['unit'] = local_unit()
-    context['rels'] = relations()
-    context['env'] = os.environ
-    return context
-
-
-def in_relation_hook():
-    """Determine whether we're running in a relation hook"""
-    return 'JUJU_RELATION' in os.environ
-
-
-def relation_type():
-    """The scope for the current relation hook"""
-    return os.environ.get('JUJU_RELATION', None)
-
-
-def relation_id():
-    """The relation ID for the current relation hook"""
-    return os.environ.get('JUJU_RELATION_ID', None)
-
-
-def local_unit():
-    """Local unit ID"""
-    return os.environ['JUJU_UNIT_NAME']
-
-
-def remote_unit():
-    """The remote unit for the current relation hook"""
-    return os.environ['JUJU_REMOTE_UNIT']
-
-
-def service_name():
-    """The name service group this unit belongs to"""
-    return local_unit().split('/')[0]
-
-
-def hook_name():
-    """The name of the currently executing hook"""
-    return os.path.basename(sys.argv[0])
-
-
-class Config(dict):
-    """A Juju charm config dictionary that can write itself to
-    disk (as json) and track which values have changed since
-    the previous hook invocation.
-
-    Do not instantiate this object directly - instead call
-    ``hookenv.config()``
-
-    Example usage::
-
-        >>> # inside a hook
-        >>> from charmhelpers.core import hookenv
-        >>> config = hookenv.config()
-        >>> config['foo']
-        'bar'
-        >>> config['mykey'] = 'myval'
-        >>> config.save()
-
-
-        >>> # user runs `juju set mycharm foo=baz`
-        >>> # now we're inside subsequent config-changed hook
-        >>> config = hookenv.config()
-        >>> config['foo']
-        'baz'
-        >>> # test to see if this val has changed since last hook
-        >>> config.changed('foo')
-        True
-        >>> # what was the previous value?
-        >>> config.previous('foo')
-        'bar'
-        >>> # keys/values that we add are preserved across hooks
-        >>> config['mykey']
-        'myval'
-        >>> # don't forget to save at the end of hook!
-        >>> config.save()
-
-    """
-    CONFIG_FILE_NAME = '.juju-persistent-config'
-
-    def __init__(self, *args, **kw):
-        super(Config, self).__init__(*args, **kw)
-        self._prev_dict = None
-        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
-        if os.path.exists(self.path):
-            self.load_previous()
-
-    def load_previous(self, path=None):
-        """Load previous copy of config from disk so that current values
-        can be compared to previous values.
-
-        :param path:
-
-            File path from which to load the previous config. If `None`,
-            config is loaded from the default location. If `path` is
-            specified, subsequent `save()` calls will write to the same
-            path.
-
-        """
-        self.path = path or self.path
-        with open(self.path) as f:
-            self._prev_dict = json.load(f)
-
-    def changed(self, key):
-        """Return true if the value for this key has changed since
-        the last save.
-
-        """
-        if self._prev_dict is None:
-            return True
-        return self.previous(key) != self.get(key)
-
-    def previous(self, key):
-        """Return previous value for this key, or None if there
-        is no "previous" value.
-
-        """
-        if self._prev_dict:
-            return self._prev_dict.get(key)
-        return None
-
-    def save(self):
-        """Save this config to disk.
-
-        Preserves items in _prev_dict that do not exist in self.
-
-        """
-        if self._prev_dict:
-            for k, v in self._prev_dict.iteritems():
-                if k not in self:
-                    self[k] = v
-        with open(self.path, 'w') as f:
-            json.dump(self, f)
-
-
-@cached
-def config(scope=None):
-    """Juju charm configuration"""
-    config_cmd_line = ['config-get']
-    if scope is not None:
-        config_cmd_line.append(scope)
-    config_cmd_line.append('--format=json')
-    try:
-        config_data = json.loads(subprocess.check_output(config_cmd_line))
-        if scope is not None:
-            return config_data
-        return Config(config_data)
-    except ValueError:
-        return None
-
-
-@cached
-def relation_get(attribute=None, unit=None, rid=None):
-    """Get relation information"""
-    _args = ['relation-get', '--format=json']
-    if rid:
-        _args.append('-r')
-        _args.append(rid)
-    _args.append(attribute or '-')
-    if unit:
-        _args.append(unit)
-    try:
-        return json.loads(subprocess.check_output(_args))
-    except ValueError:
-        return None
-    except CalledProcessError, e:
-        if e.returncode == 2:
-            return None
-        raise
-
-
-def relation_set(relation_id=None, relation_settings={}, **kwargs):
-    """Set relation information for the current unit"""
-    relation_cmd_line = ['relation-set']
-    if relation_id is not None:
-        relation_cmd_line.extend(('-r', relation_id))
-    for k, v in (relation_settings.items() + kwargs.items()):
-        if v is None:
-            relation_cmd_line.append('{}='.format(k))
-        else:
-            relation_cmd_line.append('{}={}'.format(k, v))
-    subprocess.check_call(relation_cmd_line)
-    # Flush cache of any relation-gets for local unit
-    flush(local_unit())
-
-
-@cached
-def relation_ids(reltype=None):
-    """A list of relation_ids"""
-    reltype = reltype or relation_type()
-    relid_cmd_line = ['relation-ids', '--format=json']
-    if reltype is not None:
-        relid_cmd_line.append(reltype)
-        return json.loads(subprocess.check_output(relid_cmd_line)) or []
-    return []
-
-
-@cached
-def related_units(relid=None):
-    """A list of related units"""
-    relid = relid or relation_id()
-    units_cmd_line = ['relation-list', '--format=json']
-    if relid is not None:
-        units_cmd_line.extend(('-r', relid))
-    return json.loads(subprocess.check_output(units_cmd_line)) or []
-
-
-@cached
-def relation_for_unit(unit=None, rid=None):
-    """Get the json represenation of a unit's relation"""
-    unit = unit or remote_unit()
-    relation = relation_get(unit=unit, rid=rid)
-    for key in relation:
-        if key.endswith('-list'):
-            relation[key] = relation[key].split()
-    relation['__unit__'] = unit
-    return relation
-
-
-@cached
-def relations_for_id(relid=None):
-    """Get relations of a specific relation ID"""
-    relation_data = []
-    relid = relid or relation_ids()
-    for unit in related_units(relid):
-        unit_data = relation_for_unit(unit, relid)
-        unit_data['__relid__'] = relid
-        relation_data.append(unit_data)
-    return relation_data
-
-
-@cached
-def relations_of_type(reltype=None):
-    """Get relations of a specific type"""
-    relation_data = []
-    reltype = reltype or relation_type()
-    for relid in relation_ids(reltype):
-        for relation in relations_for_id(relid):
-            relation['__relid__'] = relid
-            relation_data.append(relation)
-    return relation_data
-
-
-@cached
-def relation_types():
-    """Get a list of relation types supported by this charm"""
-    charmdir = os.environ.get('CHARM_DIR', '')
-    mdf = open(os.path.join(charmdir, 'metadata.yaml'))
-    md = yaml.safe_load(mdf)
-    rel_types = []
-    for key in ('provides', 'requires', 'peers'):
-        section = md.get(key)
-        if section:
-            rel_types.extend(section.keys())
-    mdf.close()
-    return rel_types
-
-
-@cached
-def relations():
-    """Get a nested dictionary of relation data for all related units"""
-    rels = {}
-    for reltype in relation_types():
-        relids = {}
-        for relid in relation_ids(reltype):
-            units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
-            for unit in related_units(relid):
-                reldata = relation_get(unit=unit, rid=relid)
-                units[unit] = reldata
-            relids[relid] = units
-        rels[reltype] = relids
-    return rels
-
-
-@cached
-def is_relation_made(relation, keys='private-address'):
-    '''
-    Determine whether a relation is established by checking for
-    presence of key(s).  If a list of keys is provided, they
-    must all be present for the relation to be identified as made
-    '''
-    if isinstance(keys, str):
-        keys = [keys]
-    for r_id in relation_ids(relation):
-        for unit in related_units(r_id):
-            context = {}
-            for k in keys:
-                context[k] = relation_get(k, rid=r_id,
-                                          unit=unit)
-            if None not in context.values():
-                return True
-    return False
-
-
-def open_port(port, protocol="TCP"):
-    """Open a service network port"""
-    _args = ['open-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
-
-
-def close_port(port, protocol="TCP"):
-    """Close a service network port"""
-    _args = ['close-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
-
-
-@cached
-def unit_get(attribute):
-    """Get the unit ID for the remote unit"""
-    _args = ['unit-get', '--format=json', attribute]
-    try:
-        return json.loads(subprocess.check_output(_args))
-    except ValueError:
-        return None
-
-
-def unit_private_ip():
-    """Get this unit's private IP address"""
-    return unit_get('private-address')
-
-
-class UnregisteredHookError(Exception):
-    """Raised when an undefined hook is called"""
-    pass
-
-
-class Hooks(object):
-    """A convenient handler for hook functions.
-
-    Example:
-        hooks = Hooks()
-
-        # register a hook, taking its name from the function name
-        @hooks.hook()
-        def install():
-            ...
-
-        # register a hook, providing a custom hook name
-        @hooks.hook("config-changed")
-        def config_changed():
-            ...
-
-        if __name__ == "__main__":
-            # execute a hook based on the name the program is called by
-            hooks.execute(sys.argv)
-    """
-
-    def __init__(self):
-        super(Hooks, self).__init__()
-        self._hooks = {}
-
-    def register(self, name, function):
-        """Register a hook"""
-        self._hooks[name] = function
-
-    def execute(self, args):
-        """Execute a registered hook based on args[0]"""
-        hook_name = os.path.basename(args[0])
-        if hook_name in self._hooks:
-            self._hooks[hook_name]()
-        else:
-            raise UnregisteredHookError(hook_name)
-
-    def hook(self, *hook_names):
-        """Decorator, registering them as hooks"""
-        def wrapper(decorated):
-            for hook_name in hook_names:
-                self.register(hook_name, decorated)
-            else:
-                self.register(decorated.__name__, decorated)
-                if '_' in decorated.__name__:
-                    self.register(
-                        decorated.__name__.replace('_', '-'), decorated)
-            return decorated
-        return wrapper
-
-
-def charm_dir():
-    """Return the root directory of the current charm"""
-    return os.environ.get('CHARM_DIR')
=== removed file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
@@ -1,325 +0,0 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15import apt_pkg
16
17from collections import OrderedDict
18
19from hookenv import log
20from fstab import Fstab
21
22
23def service_start(service_name):
24 """Start a system service"""
25 return service('start', service_name)
26
27
28def service_stop(service_name):
29 """Stop a system service"""
30 return service('stop', service_name)
31
32
33def service_restart(service_name):
34 """Restart a system service"""
35 return service('restart', service_name)
36
37
38def service_reload(service_name, restart_on_failure=False):
39 """Reload a system service, optionally falling back to restart if
40 reload fails"""
41 service_result = service('reload', service_name)
42 if not service_result and restart_on_failure:
43 service_result = service('restart', service_name)
44 return service_result
45
46
47def service(action, service_name):
48 """Control a system service"""
49 cmd = ['service', service_name, action]
50 return subprocess.call(cmd) == 0
51
52
53def service_running(service):
54 """Determine whether a system service is running"""
55 try:
56 output = subprocess.check_output(['service', service, 'status'])
57 except subprocess.CalledProcessError:
58 return False
59 else:
60 if ("start/running" in output or "is running" in output):
61 return True
62 else:
63 return False
64
65
66def adduser(username, password=None, shell='/bin/bash', system_user=False):
67 """Add a user to the system"""
68 try:
69 user_info = pwd.getpwnam(username)
70 log('user {0} already exists!'.format(username))
71 except KeyError:
72 log('creating user {0}'.format(username))
73 cmd = ['useradd']
74 if system_user or password is None:
75 cmd.append('--system')
76 else:
77 cmd.extend([
78 '--create-home',
79 '--shell', shell,
80 '--password', password,
81 ])
82 cmd.append(username)
83 subprocess.check_call(cmd)
84 user_info = pwd.getpwnam(username)
85 return user_info
86
87
88def add_user_to_group(username, group):
89 """Add a user to a group"""
90 cmd = [
91 'gpasswd', '-a',
92 username,
93 group
94 ]
95 log("Adding user {} to group {}".format(username, group))
96 subprocess.check_call(cmd)
97
98
99def rsync(from_path, to_path, flags='-r', options=None):
100 """Replicate the contents of a path"""
101 options = options or ['--delete', '--executability']
102 cmd = ['/usr/bin/rsync', flags]
103 cmd.extend(options)
104 cmd.append(from_path)
105 cmd.append(to_path)
106 log(" ".join(cmd))
107 return subprocess.check_output(cmd).strip()
108
109
110def symlink(source, destination):
111 """Create a symbolic link"""
112 log("Symlinking {} as {}".format(source, destination))
113 cmd = [
114 'ln',
115 '-sf',
116 source,
117 destination,
118 ]
119 subprocess.check_call(cmd)
120
121
122def mkdir(path, owner='root', group='root', perms=0555, force=False):
123 """Create a directory"""
124 log("Making dir {} {}:{} {:o}".format(path, owner, group,
125 perms))
126 uid = pwd.getpwnam(owner).pw_uid
127 gid = grp.getgrnam(group).gr_gid
128 realpath = os.path.abspath(path)
129 if os.path.exists(realpath):
130 if force and not os.path.isdir(realpath):
131 log("Removing non-directory file {} prior to mkdir()".format(path))
132 os.unlink(realpath)
133 else:
134 os.makedirs(realpath, perms)
135 os.chown(realpath, uid, gid)
136
137
138def write_file(path, content, owner='root', group='root', perms=0444):
139 """Create or overwrite a file with the contents of a string"""
140 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
141 uid = pwd.getpwnam(owner).pw_uid
142 gid = grp.getgrnam(group).gr_gid
143 with open(path, 'w') as target:
144 os.fchown(target.fileno(), uid, gid)
145 os.fchmod(target.fileno(), perms)
146 target.write(content)
147
148
149def fstab_remove(mp):
150 """Remove the given mountpoint entry from /etc/fstab
151 """
152 return Fstab.remove_by_mountpoint(mp)
153
154
155def fstab_add(dev, mp, fs, options=None):
156 """Adds the given device entry to the /etc/fstab file
157 """
158 return Fstab.add(dev, mp, fs, options=options)
159
160
161def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
162 """Mount a filesystem at a particular mountpoint"""
163 cmd_args = ['mount']
164 if options is not None:
165 cmd_args.extend(['-o', options])
166 cmd_args.extend([device, mountpoint])
167 try:
168 subprocess.check_output(cmd_args)
169 except subprocess.CalledProcessError, e:
170 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
171 return False
172
173 if persist:
174 return fstab_add(device, mountpoint, filesystem, options=options)
175 return True
176
177
178def umount(mountpoint, persist=False):
179 """Unmount a filesystem"""
180 cmd_args = ['umount', mountpoint]
181 try:
182 subprocess.check_output(cmd_args)
183 except subprocess.CalledProcessError, e:
184 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
185 return False
186
187 if persist:
188 return fstab_remove(mountpoint)
189 return True
190
191
192def mounts():
193 """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
194 with open('/proc/mounts') as f:
195 # [['/mount/point','/dev/path'],[...]]
196 system_mounts = [m[1::-1] for m in [l.strip().split()
197 for l in f.readlines()]]
198 return system_mounts
199
200
201def file_hash(path):
202 """Generate a md5 hash of the contents of 'path' or None if not found """
203 if os.path.exists(path):
204 h = hashlib.md5()
205 with open(path, 'r') as source:
206 h.update(source.read()) # IGNORE:E1101 - it does have update
207 return h.hexdigest()
208 else:
209 return None
210
211
212def restart_on_change(restart_map, stopstart=False):
213 """Restart services based on configuration files changing
214
215 This function is used a decorator, for example
216
217 @restart_on_change({
218 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
219 })
220 def ceph_client_changed():
221 ...
222
223 In this example, the cinder-api and cinder-volume services
224 would be restarted if /etc/ceph/ceph.conf is changed by the
225 ceph_client_changed function.
226 """
227 def wrap(f):
228 def wrapped_f(*args):
229 checksums = {}
230 for path in restart_map:
231 checksums[path] = file_hash(path)
232 f(*args)
233 restarts = []
234 for path in restart_map:
235 if checksums[path] != file_hash(path):
236 restarts += restart_map[path]
237 services_list = list(OrderedDict.fromkeys(restarts))
238 if not stopstart:
239 for service_name in services_list:
240 service('restart', service_name)
241 else:
242 for action in ['stop', 'start']:
243 for service_name in services_list:
244 service(action, service_name)
245 return wrapped_f
246 return wrap
247
248
249def lsb_release():
250 """Return /etc/lsb-release in a dict"""
251 d = {}
252 with open('/etc/lsb-release', 'r') as lsb:
253 for l in lsb:
254 k, v = l.split('=')
255 d[k.strip()] = v.strip()
256 return d
257
258
259def pwgen(length=None):
260 """Generate a random password."""
261 if length is None:
262 length = random.choice(range(35, 45))
263 alphanumeric_chars = [
264 l for l in (string.letters + string.digits)
265 if l not in 'l0QD1vAEIOUaeiou']
266 random_chars = [
267 random.choice(alphanumeric_chars) for _ in range(length)]
268 return(''.join(random_chars))
269
270
271def list_nics(nic_type):
272 '''Return a list of nics of given type(s)'''
273 if isinstance(nic_type, basestring):
274 int_types = [nic_type]
275 else:
276 int_types = nic_type
277 interfaces = []
278 for int_type in int_types:
279 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
280 ip_output = subprocess.check_output(cmd).split('\n')
281 ip_output = (line for line in ip_output if line)
282 for line in ip_output:
283 if line.split()[1].startswith(int_type):
284 interfaces.append(line.split()[1].replace(":", ""))
285 return interfaces
286
287
288def set_nic_mtu(nic, mtu):
289 '''Set MTU on a network interface'''
290 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
291 subprocess.check_call(cmd)
292
293
294def get_nic_mtu(nic):
295 cmd = ['ip', 'addr', 'show', nic]
296 ip_output = subprocess.check_output(cmd).split('\n')
297 mtu = ""
298 for line in ip_output:
299 words = line.split()
300 if 'mtu' in words:
301 mtu = words[words.index("mtu") + 1]
302 return mtu
303
304
305def get_nic_hwaddr(nic):
306 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
307 ip_output = subprocess.check_output(cmd)
308 hwaddr = ""
309 words = ip_output.split()
310 if 'link/ether' in words:
311 hwaddr = words[words.index('link/ether') + 1]
312 return hwaddr
313
314
315def cmp_pkgrevno(package, revno, pkgcache=None):
316 '''Compare supplied revno with the revno of the installed package
317 1 => Installed revno is greater than supplied arg
318 0 => Installed revno is the same as supplied arg
319 -1 => Installed revno is less than supplied arg
320 '''
321 if not pkgcache:
322 apt_pkg.init()
323 pkgcache = apt_pkg.Cache()
324 pkg = pkgcache[package]
325 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
3260
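The removed `restart_on_change` decorator above checksums each config file before the wrapped hook runs and restarts the mapped services afterwards if a checksum changed. A minimal standalone sketch of the same pattern, shown in Python 3 (the `restart` callable parameter is an assumption added here so the sketch does not depend on the charmhelpers `service` helper):

```python
import hashlib
import os
from collections import OrderedDict


def file_hash(path):
    """Return the md5 hex digest of a file, or None if it does not exist."""
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


def restart_on_change(restart_map, restart=lambda name: None):
    """Decorator: restart services whose config files changed during the call."""
    def wrap(f):
        def wrapped_f(*args):
            # Snapshot checksums before the hook body runs.
            before = {path: file_hash(path) for path in restart_map}
            result = f(*args)
            restarts = []
            for path, services in restart_map.items():
                if before[path] != file_hash(path):
                    restarts += services
            # Deduplicate while preserving order, as the original does.
            for name in OrderedDict.fromkeys(restarts):
                restart(name)
            return result
        return wrapped_f
    return wrap
```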
=== removed directory 'hooks/charmhelpers/fetch'
=== removed file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,349 +0,0 @@
1import importlib
2import time
3from yaml import safe_load
4from charmhelpers.core.host import (
5 lsb_release
6)
7from urlparse import (
8 urlparse,
9 urlunparse,
10)
11import subprocess
12from charmhelpers.core.hookenv import (
13 config,
14 log,
15)
16import apt_pkg
17import os
18
19
20CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
21deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
22"""
23PROPOSED_POCKET = """# Proposed
24deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
25"""
26CLOUD_ARCHIVE_POCKETS = {
27 # Folsom
28 'folsom': 'precise-updates/folsom',
29 'precise-folsom': 'precise-updates/folsom',
30 'precise-folsom/updates': 'precise-updates/folsom',
31 'precise-updates/folsom': 'precise-updates/folsom',
32 'folsom/proposed': 'precise-proposed/folsom',
33 'precise-folsom/proposed': 'precise-proposed/folsom',
34 'precise-proposed/folsom': 'precise-proposed/folsom',
35 # Grizzly
36 'grizzly': 'precise-updates/grizzly',
37 'precise-grizzly': 'precise-updates/grizzly',
38 'precise-grizzly/updates': 'precise-updates/grizzly',
39 'precise-updates/grizzly': 'precise-updates/grizzly',
40 'grizzly/proposed': 'precise-proposed/grizzly',
41 'precise-grizzly/proposed': 'precise-proposed/grizzly',
42 'precise-proposed/grizzly': 'precise-proposed/grizzly',
43 # Havana
44 'havana': 'precise-updates/havana',
45 'precise-havana': 'precise-updates/havana',
46 'precise-havana/updates': 'precise-updates/havana',
47 'precise-updates/havana': 'precise-updates/havana',
48 'havana/proposed': 'precise-proposed/havana',
49 'precise-havana/proposed': 'precise-proposed/havana',
50 'precise-proposed/havana': 'precise-proposed/havana',
51 # Icehouse
52 'icehouse': 'precise-updates/icehouse',
53 'precise-icehouse': 'precise-updates/icehouse',
54 'precise-icehouse/updates': 'precise-updates/icehouse',
55 'precise-updates/icehouse': 'precise-updates/icehouse',
56 'icehouse/proposed': 'precise-proposed/icehouse',
57 'precise-icehouse/proposed': 'precise-proposed/icehouse',
58 'precise-proposed/icehouse': 'precise-proposed/icehouse',
59 # Juno
60 'juno': 'trusty-updates/juno',
61 'trusty-juno': 'trusty-updates/juno',
62 'trusty-juno/updates': 'trusty-updates/juno',
63 'trusty-updates/juno': 'trusty-updates/juno',
64 'juno/proposed': 'trusty-proposed/juno',
65 'juno/proposed': 'trusty-proposed/juno',
66 'trusty-juno/proposed': 'trusty-proposed/juno',
67 'trusty-proposed/juno': 'trusty-proposed/juno',
68}
69
70# The order of this list is very important. Handlers should be listed in from
71# least- to most-specific URL matching.
72FETCH_HANDLERS = (
73 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
74 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
75)
76
77APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
78APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
79APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
80
81
82class SourceConfigError(Exception):
83 pass
84
85
86class UnhandledSource(Exception):
87 pass
88
89
90class AptLockError(Exception):
91 pass
92
93
94class BaseFetchHandler(object):
95
96 """Base class for FetchHandler implementations in fetch plugins"""
97
98 def can_handle(self, source):
99 """Returns True if the source can be handled. Otherwise returns
100 a string explaining why it cannot"""
101 return "Wrong source type"
102
103 def install(self, source):
104 """Try to download and unpack the source. Return the path to the
105 unpacked files or raise UnhandledSource."""
106 raise UnhandledSource("Wrong source type {}".format(source))
107
108 def parse_url(self, url):
109 return urlparse(url)
110
111 def base_url(self, url):
112 """Return url without querystring or fragment"""
113 parts = list(self.parse_url(url))
114 parts[4:] = ['' for i in parts[4:]]
115 return urlunparse(parts)
116
117
118def filter_installed_packages(packages):
119 """Returns a list of packages that require installation"""
120 apt_pkg.init()
121
122 # Tell apt to build an in-memory cache to prevent race conditions (if
123 # another process is already building the cache).
124 apt_pkg.config.set("Dir::Cache::pkgcache", "")
125
126 cache = apt_pkg.Cache()
127 _pkgs = []
128 for package in packages:
129 try:
130 p = cache[package]
131 p.current_ver or _pkgs.append(package)
132 except KeyError:
133 log('Package {} has no installation candidate.'.format(package),
134 level='WARNING')
135 _pkgs.append(package)
136 return _pkgs
137
138
139def apt_install(packages, options=None, fatal=False):
140 """Install one or more packages"""
141 if options is None:
142 options = ['--option=Dpkg::Options::=--force-confold']
143
144 cmd = ['apt-get', '--assume-yes']
145 cmd.extend(options)
146 cmd.append('install')
147 if isinstance(packages, basestring):
148 cmd.append(packages)
149 else:
150 cmd.extend(packages)
151 log("Installing {} with options: {}".format(packages,
152 options))
153 _run_apt_command(cmd, fatal)
154
155
156def apt_upgrade(options=None, fatal=False, dist=False):
157 """Upgrade all packages"""
158 if options is None:
159 options = ['--option=Dpkg::Options::=--force-confold']
160
161 cmd = ['apt-get', '--assume-yes']
162 cmd.extend(options)
163 if dist:
164 cmd.append('dist-upgrade')
165 else:
166 cmd.append('upgrade')
167 log("Upgrading with options: {}".format(options))
168 _run_apt_command(cmd, fatal)
169
170
171def apt_update(fatal=False):
172 """Update local apt cache"""
173 cmd = ['apt-get', 'update']
174 _run_apt_command(cmd, fatal)
175
176
177def apt_purge(packages, fatal=False):
178 """Purge one or more packages"""
179 cmd = ['apt-get', '--assume-yes', 'purge']
180 if isinstance(packages, basestring):
181 cmd.append(packages)
182 else:
183 cmd.extend(packages)
184 log("Purging {}".format(packages))
185 _run_apt_command(cmd, fatal)
186
187
188def apt_hold(packages, fatal=False):
189 """Hold one or more packages"""
190 cmd = ['apt-mark', 'hold']
191 if isinstance(packages, basestring):
192 cmd.append(packages)
193 else:
194 cmd.extend(packages)
195 log("Holding {}".format(packages))
196
197 if fatal:
198 subprocess.check_call(cmd)
199 else:
200 subprocess.call(cmd)
201
202
203def add_source(source, key=None):
204 if source is None:
205 log('Source is not present. Skipping')
206 return
207
208 if (source.startswith('ppa:') or
209 source.startswith('http') or
210 source.startswith('deb ') or
211 source.startswith('cloud-archive:')):
212 subprocess.check_call(['add-apt-repository', '--yes', source])
213 elif source.startswith('cloud:'):
214 apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
215 fatal=True)
216 pocket = source.split(':')[-1]
217 if pocket not in CLOUD_ARCHIVE_POCKETS:
218 raise SourceConfigError(
219 'Unsupported cloud: source option %s' %
220 pocket)
221 actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
222 with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
223 apt.write(CLOUD_ARCHIVE.format(actual_pocket))
224 elif source == 'proposed':
225 release = lsb_release()['DISTRIB_CODENAME']
226 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
227 apt.write(PROPOSED_POCKET.format(release))
228 if key:
229 subprocess.check_call(['apt-key', 'adv', '--keyserver',
230 'hkp://keyserver.ubuntu.com:80', '--recv',
231 key])
232
233
234def configure_sources(update=False,
235 sources_var='install_sources',
236 keys_var='install_keys'):
237 """
238 Configure multiple sources from charm configuration
239
240 Example config:
241 install_sources:
242 - "ppa:foo"
243 - "http://example.com/repo precise main"
244 install_keys:
245 - null
246 - "a1b2c3d4"
247
248 Note that 'null' (a.k.a. None) should not be quoted.
249 """
250 sources = safe_load(config(sources_var))
251 keys = config(keys_var)
252 if keys is not None:
253 keys = safe_load(keys)
254 if isinstance(sources, basestring) and (
255 keys is None or isinstance(keys, basestring)):
256 add_source(sources, keys)
257 else:
258 if not len(sources) == len(keys):
259 msg = 'Install sources and keys lists are different lengths'
260 raise SourceConfigError(msg)
261 for src_num in range(len(sources)):
262 add_source(sources[src_num], keys[src_num])
263 if update:
264 apt_update(fatal=True)
265
266
267def install_remote(source):
268 """
269 Install a file tree from a remote source
270
271 The specified source should be a url of the form:
272 scheme://[host]/path[#[option=value][&...]]
273
274 Schemes supported are based on this module's submodules
275 Options supported are submodule-specific"""
276 # We ONLY check for True here because can_handle may return a string
277 # explaining why it can't handle a given source.
278 handlers = [h for h in plugins() if h.can_handle(source) is True]
279 installed_to = None
280 for handler in handlers:
281 try:
282 installed_to = handler.install(source)
283 except UnhandledSource:
284 pass
285 if not installed_to:
286 raise UnhandledSource("No handler found for source {}".format(source))
287 return installed_to
288
289
290def install_from_config(config_var_name):
291 charm_config = config()
292 source = charm_config[config_var_name]
293 return install_remote(source)
294
295
296def plugins(fetch_handlers=None):
297 if not fetch_handlers:
298 fetch_handlers = FETCH_HANDLERS
299 plugin_list = []
300 for handler_name in fetch_handlers:
301 package, classname = handler_name.rsplit('.', 1)
302 try:
303 handler_class = getattr(
304 importlib.import_module(package),
305 classname)
306 plugin_list.append(handler_class())
307 except (ImportError, AttributeError):
308 # Skip missing plugins so that they can be omitted from
309 # installation if desired
310 log("FetchHandler {} not found, skipping plugin".format(
311 handler_name))
312 return plugin_list
313
314
315def _run_apt_command(cmd, fatal=False):
316 """
317 Run an APT command, checking output and retrying if the fatal flag is set
318 to True.
319
:param: cmd: list: The apt command to run.
321 :param: fatal: bool: Whether the command's output should be checked and
322 retried.
323 """
324 env = os.environ.copy()
325
326 if 'DEBIAN_FRONTEND' not in env:
327 env['DEBIAN_FRONTEND'] = 'noninteractive'
328
329 if fatal:
330 retry_count = 0
331 result = None
332
333 # If the command is considered "fatal", we need to retry if the apt
334 # lock was not acquired.
335
336 while result is None or result == APT_NO_LOCK:
337 try:
338 result = subprocess.check_call(cmd, env=env)
339 except subprocess.CalledProcessError, e:
340 retry_count = retry_count + 1
341 if retry_count > APT_NO_LOCK_RETRY_COUNT:
342 raise
343 result = e.returncode
344 log("Couldn't acquire DPKG lock. Will retry in {} seconds."
345 "".format(APT_NO_LOCK_RETRY_DELAY))
346 time.sleep(APT_NO_LOCK_RETRY_DELAY)
347
348 else:
349 subprocess.call(cmd, env=env)
3500
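The `_run_apt_command` helper being removed retries only when apt exits with its "couldn't acquire lock" return code (100), sleeping between attempts. A self-contained sketch of that retry loop (the function name and the injectable `run` parameter are assumptions added here for testability):

```python
import subprocess
import time

APT_NO_LOCK = 100   # apt's return code when the dpkg lock is held
RETRY_DELAY = 10    # seconds between attempts
RETRY_COUNT = 30    # maximum number of retries


def run_with_lock_retry(cmd, run=subprocess.check_call,
                        delay=RETRY_DELAY, retries=RETRY_COUNT):
    """Run cmd, retrying while it fails with the apt lock return code."""
    attempt = 0
    while True:
        try:
            return run(cmd)
        except subprocess.CalledProcessError as e:
            # Any other failure propagates immediately, as in the original.
            if e.returncode != APT_NO_LOCK:
                raise
            attempt += 1
            if attempt > retries:
                raise
            time.sleep(delay)
```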
=== removed file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py 1970-01-01 00:00:00 +0000
@@ -1,63 +0,0 @@
1import os
2import urllib2
3import urlparse
4
5from charmhelpers.fetch import (
6 BaseFetchHandler,
7 UnhandledSource
8)
9from charmhelpers.payload.archive import (
10 get_archive_handler,
11 extract,
12)
13from charmhelpers.core.host import mkdir
14
15
16class ArchiveUrlFetchHandler(BaseFetchHandler):
17 """Handler for archives via generic URLs"""
18 def can_handle(self, source):
19 url_parts = self.parse_url(source)
20 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
21 return "Wrong source type"
22 if get_archive_handler(self.base_url(source)):
23 return True
24 return False
25
26 def download(self, source, dest):
28 # propagate all exceptions
28 # URLError, OSError, etc
29 proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
30 if proto in ('http', 'https'):
31 auth, barehost = urllib2.splituser(netloc)
32 if auth is not None:
33 source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
34 username, password = urllib2.splitpasswd(auth)
35 passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
36 # Realm is set to None in add_password to force the username and password
37 # to be used regardless of the realm
38 passman.add_password(None, source, username, password)
39 authhandler = urllib2.HTTPBasicAuthHandler(passman)
40 opener = urllib2.build_opener(authhandler)
41 urllib2.install_opener(opener)
42 response = urllib2.urlopen(source)
43 try:
44 with open(dest, 'w') as dest_file:
45 dest_file.write(response.read())
46 except Exception as e:
47 if os.path.isfile(dest):
48 os.unlink(dest)
49 raise e
50
51 def install(self, source):
52 url_parts = self.parse_url(source)
53 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
54 if not os.path.exists(dest_dir):
55 mkdir(dest_dir, perms=0755)
56 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
57 try:
58 self.download(source, dld_file)
59 except urllib2.URLError as e:
60 raise UnhandledSource(e.reason)
61 except OSError as e:
62 raise UnhandledSource(e.strerror)
63 return extract(dld_file)
640
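The removed `download()` above strips `user:password@` credentials out of the URL before installing a basic-auth opener, using the Python 2 `urllib2.splituser`/`splitpasswd` helpers. A sketch of the equivalent split using the modern `urllib.parse` API (the `split_auth` name is an assumption introduced here):

```python
from urllib.parse import urlsplit, urlunsplit


def split_auth(url):
    """Separate userinfo credentials from a URL; return (bare_url, user, pw)."""
    parts = urlsplit(url)
    if parts.username is None:
        return url, None, None
    # Rebuild the netloc without the user:password@ prefix.
    bare_netloc = parts.hostname
    if parts.port:
        bare_netloc += ':%d' % parts.port
    bare = urlunsplit((parts.scheme, bare_netloc, parts.path,
                       parts.query, parts.fragment))
    return bare, parts.username, parts.password
```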
=== removed file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py 1970-01-01 00:00:00 +0000
@@ -1,50 +0,0 @@
1import os
2from charmhelpers.fetch import (
3 BaseFetchHandler,
4 UnhandledSource
5)
6from charmhelpers.core.host import mkdir
7
8try:
9 from bzrlib.branch import Branch
10except ImportError:
11 from charmhelpers.fetch import apt_install
12 apt_install("python-bzrlib")
13 from bzrlib.branch import Branch
14
15
16class BzrUrlFetchHandler(BaseFetchHandler):
17 """Handler for bazaar branches via generic and lp URLs"""
18 def can_handle(self, source):
19 url_parts = self.parse_url(source)
20 if url_parts.scheme not in ('bzr+ssh', 'lp'):
21 return False
22 else:
23 return True
24
25 def branch(self, source, dest):
26 url_parts = self.parse_url(source)
27 # If we use lp:branchname scheme we need to load plugins
28 if not self.can_handle(source):
29 raise UnhandledSource("Cannot handle {}".format(source))
30 if url_parts.scheme == "lp":
31 from bzrlib.plugin import load_plugins
32 load_plugins()
33 try:
34 remote_branch = Branch.open(source)
35 remote_branch.bzrdir.sprout(dest).open_branch()
36 except Exception as e:
37 raise e
38
39 def install(self, source):
40 url_parts = self.parse_url(source)
41 branch_name = url_parts.path.strip("/").split("/")[-1]
42 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
43 branch_name)
44 if not os.path.exists(dest_dir):
45 mkdir(dest_dir, perms=0755)
46 try:
47 self.branch(source, dest_dir)
48 except OSError as e:
49 raise UnhandledSource(e.strerror)
50 return dest_dir
510
=== removed directory 'hooks/charmhelpers/lib'
=== removed file 'hooks/charmhelpers/lib/__init__.py'
=== removed file 'hooks/charmhelpers/lib/ceph_utils.py'
--- hooks/charmhelpers/lib/ceph_utils.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/lib/ceph_utils.py 1970-01-01 00:00:00 +0000
@@ -1,315 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11import commands
12import json
13import subprocess
14import os
15import shutil
16import time
17import lib.utils as utils
18
19KEYRING = '/etc/ceph/ceph.client.%s.keyring'
20KEYFILE = '/etc/ceph/ceph.client.%s.key'
21
22CEPH_CONF = """[global]
23 auth supported = %(auth)s
24 keyring = %(keyring)s
25 mon host = %(mon_hosts)s
26 log to syslog = %(use_syslog)s
27 err to syslog = %(use_syslog)s
28 clog to syslog = %(use_syslog)s
29"""
30
31
32def execute(cmd):
33 subprocess.check_call(cmd)
34
35
36def execute_shell(cmd):
37 subprocess.check_call(cmd, shell=True)
38
39
40def install():
41 ceph_dir = "/etc/ceph"
42 if not os.path.isdir(ceph_dir):
43 os.mkdir(ceph_dir)
44 utils.install('ceph-common')
45
46
47def rbd_exists(service, pool, rbd_img):
48 (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %
49 (service, pool))
50 return rbd_img in out
51
52
53def create_rbd_image(service, pool, image, sizemb):
54 cmd = [
55 'rbd',
56 'create',
57 image,
58 '--size',
59 str(sizemb),
60 '--id',
61 service,
62 '--pool',
63 pool]
64 execute(cmd)
65
66
67def pool_exists(service, name):
68 (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
69 return name in out
70
71
72def ceph_version():
73 ''' Retrieve the local version of ceph '''
74 if os.path.exists('/usr/bin/ceph'):
75 cmd = ['ceph', '-v']
76 output = subprocess.check_output(cmd)
77 output = output.split()
78 if len(output) > 3:
79 return output[2]
80 else:
81 return None
82 else:
83 return None
84
85
86def get_osds(service):
87 '''
88 Return a list of all Ceph Object Storage Daemons
89 currently in the cluster
90 '''
91 version = ceph_version()
92 if version and version >= '0.56':
93 cmd = ['ceph', '--id', service, 'osd', 'ls', '--format=json']
94 return json.loads(subprocess.check_output(cmd))
95 else:
96 return None
97
98
99def create_pool(service, name, replicas=2):
100 ''' Create a new RADOS pool '''
101 if pool_exists(service, name):
102 utils.juju_log('WARNING',
103 "Ceph pool {} already exists, "
104 "skipping creation".format(name))
105 return
106
107 osds = get_osds(service)
108 if osds:
109 pgnum = (len(osds) * 100 / replicas)
110 else:
111 # NOTE(james-page): Default to 200 for older ceph versions
112 # which don't support OSD query from cli
113 pgnum = 200
114
115 cmd = [
116 'ceph', '--id', service,
117 'osd', 'pool', 'create',
118 name, str(pgnum)
119 ]
120 subprocess.check_call(cmd)
121 cmd = [
122 'ceph', '--id', service,
123 'osd', 'pool', 'set', name,
124 'size', str(replicas)
125 ]
126 subprocess.check_call(cmd)
127
128
129def keyfile_path(service):
130 return KEYFILE % service
131
132
133def keyring_path(service):
134 return KEYRING % service
135
136
137def create_keyring(service, key):
138 keyring = keyring_path(service)
139 if os.path.exists(keyring):
140 utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
141 cmd = [
142 'ceph-authtool',
143 keyring,
144 '--create-keyring',
145 '--name=client.%s' % service,
146 '--add-key=%s' % key]
147 execute(cmd)
148 utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
149
150
151def create_key_file(service, key):
152 # create a file containing the key
153 keyfile = keyfile_path(service)
154 if os.path.exists(keyfile):
155 utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
156 fd = open(keyfile, 'w')
157 fd.write(key)
158 fd.close()
159 utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
160
161
162def get_ceph_nodes():
163 hosts = []
164 for r_id in utils.relation_ids('ceph'):
165 for unit in utils.relation_list(r_id):
166 hosts.append(utils.relation_get('private-address',
167 unit=unit, rid=r_id))
168 return hosts
169
170
171def configure(service, key, auth, use_syslog):
172 create_keyring(service, key)
173 create_key_file(service, key)
174 hosts = get_ceph_nodes()
175 mon_hosts = ",".join(map(str, hosts))
176 keyring = keyring_path(service)
177 with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
178 ceph_conf.write(CEPH_CONF % locals())
179 modprobe_kernel_module('rbd')
180
181
182def image_mapped(image_name):
183 (rc, out) = commands.getstatusoutput('rbd showmapped')
184 return image_name in out
185
186
187def map_block_storage(service, pool, image):
188 cmd = [
189 'rbd',
190 'map',
191 '%s/%s' % (pool, image),
192 '--user',
193 service,
194 '--secret',
195 keyfile_path(service)]
196 execute(cmd)
197
198
199def filesystem_mounted(fs):
200 return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
201
202
203def make_filesystem(blk_device, fstype='ext4'):
204 count = 0
205 e_noent = os.errno.ENOENT
206 while not os.path.exists(blk_device):
207 if count >= 10:
208 utils.juju_log('ERROR', 'ceph: gave up waiting on block '
209 'device %s' % blk_device)
210 raise IOError(e_noent, os.strerror(e_noent), blk_device)
211 utils.juju_log('INFO', 'ceph: waiting for block device %s '
212 'to appear' % blk_device)
213 count += 1
214 time.sleep(1)
215 else:
216 utils.juju_log('INFO', 'ceph: Formatting block device %s '
217 'as filesystem %s.' % (blk_device, fstype))
218 execute(['mkfs', '-t', fstype, blk_device])
219
220
221def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
222 # mount block device into /mnt
223 cmd = ['mount', '-t', fstype, blk_device, '/mnt']
224 execute(cmd)
225
226 # copy data to /mnt
227 try:
228 copy_files(data_src_dst, '/mnt')
229 except:
230 pass
231
232 # umount block device
233 cmd = ['umount', '/mnt']
234 execute(cmd)
235
236 _dir = os.stat(data_src_dst)
237 uid = _dir.st_uid
238 gid = _dir.st_gid
239
240 # re-mount where the data should originally be
241 cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
242 execute(cmd)
243
244 # ensure original ownership of new mount.
245 cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
246 execute(cmd)
247
248
249# TODO: re-use
250def modprobe_kernel_module(module):
251 utils.juju_log('INFO', 'Loading kernel module')
252 cmd = ['modprobe', module]
253 execute(cmd)
254 cmd = 'echo %s >> /etc/modules' % module
255 execute_shell(cmd)
256
257
258def copy_files(src, dst, symlinks=False, ignore=None):
259 for item in os.listdir(src):
260 s = os.path.join(src, item)
261 d = os.path.join(dst, item)
262 if os.path.isdir(s):
263 shutil.copytree(s, d, symlinks, ignore)
264 else:
265 shutil.copy2(s, d)
266
267
268def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
269 blk_device, fstype, system_services=[],
270 rbd_pool_replicas=2):
271 """
272 To be called from the current cluster leader.
273 Ensures given pool and RBD image exists, is mapped to a block device,
274 and the device is formatted and mounted at the given mount_point.
275
276 If formatting a device for the first time, data existing at mount_point
277 will be migrated to the RBD device before being remounted.
278
279 All services listed in system_services will be stopped prior to data
280 migration and restarted when complete.
281 """
282 # Ensure pool, RBD image, RBD mappings are in place.
283 if not pool_exists(service, pool):
284 utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
285 create_pool(service, pool, replicas=rbd_pool_replicas)
286
287 if not rbd_exists(service, pool, rbd_img):
288 utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
289 create_rbd_image(service, pool, rbd_img, sizemb)
290
291 if not image_mapped(rbd_img):
292 utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
293 map_block_storage(service, pool, rbd_img)
294
295 # make file system
296 # TODO: What happens if for whatever reason this is run again and
297 # the data is already in the rbd device and/or is mounted??
298 # When it is mounted already, it will fail to make the fs
299 # XXX: This is really sketchy! Need to at least add an fstab entry
300 # otherwise this hook will blow away existing data if its executed
301 # after a reboot.
302 if not filesystem_mounted(mount_point):
303 make_filesystem(blk_device, fstype)
304
305 for svc in system_services:
306 if utils.running(svc):
307 utils.juju_log('INFO',
308 'Stopping services %s prior to migrating '
309 'data' % svc)
310 utils.stop(svc)
311
312 place_data_on_ceph(service, blk_device, mount_point, fstype)
313
314 for svc in system_services:
315 utils.start(svc)
3160
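The `create_pool` helper being removed sizes a new RADOS pool at roughly 100 placement groups per OSD divided across the replica count, falling back to a fixed 200 when the OSD list cannot be queried from older ceph releases. That calculation, isolated as a sketch (function name is an assumption):

```python
def placement_groups(osd_count, replicas=2, default=200):
    """Placement-group count as computed by the removed create_pool():
    ~100 PGs per OSD divided across replicas, or a fixed default when
    the OSD count is unavailable (older ceph without 'osd ls')."""
    if osd_count:
        return osd_count * 100 // replicas
    return default
```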
=== removed file 'hooks/charmhelpers/lib/cluster_utils.py'
--- hooks/charmhelpers/lib/cluster_utils.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/lib/cluster_utils.py 1970-01-01 00:00:00 +0000
@@ -1,128 +0,0 @@
1#
2# Copyright 2012 Canonical Ltd.
3#
4# This file is sourced from lp:openstack-charm-helpers
5#
6# Authors:
7# James Page <james.page@ubuntu.com>
8# Adam Gandelman <adamg@ubuntu.com>
9#
10
11from utils import (
12 juju_log,
13 relation_ids,
14 relation_list,
15 relation_get,
16 get_unit_hostname,
17 config_get)
18import subprocess
19import os
20
21
22def is_clustered():
23 for r_id in (relation_ids('ha') or []):
24 for unit in (relation_list(r_id) or []):
25 clustered = relation_get('clustered',
26 rid=r_id,
27 unit=unit)
28 if clustered:
29 return True
30 return False
31
32
33def is_leader(resource):
34 cmd = [
35 "crm", "resource",
36 "show", resource]
37 try:
38 status = subprocess.check_output(cmd)
39 except subprocess.CalledProcessError:
40 return False
41 else:
42 if get_unit_hostname() in status:
43 return True
44 else:
45 return False
46
47
48def peer_units():
49 peers = []
50 for r_id in (relation_ids('cluster') or []):
51 for unit in (relation_list(r_id) or []):
52 peers.append(unit)
53 return peers
54
55
56def oldest_peer(peers):
57 local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
58 for peer in peers:
59 remote_unit_no = peer.split('/')[1]
60 if remote_unit_no < local_unit_no:
61 return False
62 return True
63
64
65def eligible_leader(resource):
66 if is_clustered():
67 if not is_leader(resource):
68 juju_log('INFO', 'Deferring action to CRM leader.')
69 return False
70 else:
71 peers = peer_units()
72 if peers and not oldest_peer(peers):
73 juju_log('INFO', 'Deferring action to oldest service unit.')
74 return False
75 return True
76
77
78def https():
79 '''
80 Determines whether enough data has been provided in configuration
81 or relation data to configure HTTPS
82 .
83 returns: boolean
84 '''
85 if config_get('use-https') == "yes":
86 return True
87 if config_get('ssl_cert') and config_get('ssl_key'):
88 return True
89 for r_id in relation_ids('identity-service'):
90 for unit in relation_list(r_id):
91 if (relation_get('https_keystone', rid=r_id, unit=unit) and
92 relation_get('ssl_cert', rid=r_id, unit=unit) and
93 relation_get('ssl_key', rid=r_id, unit=unit) and
94 relation_get('ca_cert', rid=r_id, unit=unit)):
95 return True
96 return False
97
98
99def determine_api_port(public_port):
100 '''
101 Determine correct API server listening port based on
102 existence of HTTPS reverse proxy and/or haproxy.
103
104 public_port: int: standard public port for given service
105
106 returns: int: the correct listening port for the API service
107 '''
108 i = 0
109 if len(peer_units()) > 0 or is_clustered():
110 i += 1
111 if https():
112 i += 1
113 return public_port - (i * 10)
114
115
116def determine_haproxy_port(public_port):
117 '''
118 Description: Determine correct proxy listening port based on public IP +
119 existence of HTTPS reverse proxy.
120
121 public_port: int: standard public port for given service
122
123 returns: int: the correct listening port for the HAProxy service
124 '''
125 i = 0
126 if https():
127 i += 1
128 return public_port - (i * 10)
1290
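One caveat worth noting in the removed `oldest_peer` above: it compares unit numbers as strings (`'10' < '2'` is true lexicographically), which misorders leadership once a service grows past unit 9. A sketch of the integer-safe comparison (the `local_unit` parameter is an assumption added here; the original reads `JUJU_UNIT_NAME` from the environment):

```python
def oldest_peer(peers, local_unit):
    """True if the local unit has the lowest unit number among its peers.

    Unit numbers are compared as integers; the removed helper compared
    them as strings, which breaks for two-digit unit numbers.
    """
    local_no = int(local_unit.split('/')[1])
    return all(int(peer.split('/')[1]) >= local_no for peer in peers)
```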
=== removed file 'hooks/charmhelpers/lib/utils.py'
--- hooks/charmhelpers/lib/utils.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/lib/utils.py 1970-01-01 00:00:00 +0000
@@ -1,221 +0,0 @@
-#
-# Copyright 2012 Canonical Ltd.
-#
-# This file is sourced from lp:openstack-charm-helpers
-#
-# Authors:
-#  James Page <james.page@ubuntu.com>
-#  Paul Collins <paul.collins@canonical.com>
-#  Adam Gandelman <adamg@ubuntu.com>
-#
-
-import json
-import os
-import subprocess
-import socket
-import sys
-
-
-def do_hooks(hooks):
-    hook = os.path.basename(sys.argv[0])
-
-    try:
-        hook_func = hooks[hook]
-    except KeyError:
-        juju_log('INFO',
-                 "This charm doesn't know how to handle '{}'.".format(hook))
-    else:
-        hook_func()
-
-
-def install(*pkgs):
-    cmd = [
-        'apt-get',
-        '-y',
-        'install']
-    for pkg in pkgs:
-        cmd.append(pkg)
-    subprocess.check_call(cmd)
-
-TEMPLATES_DIR = 'templates'
-
-try:
-    import jinja2
-except ImportError:
-    install('python-jinja2')
-    import jinja2
-
-try:
-    import dns.resolver
-except ImportError:
-    install('python-dnspython')
-    import dns.resolver
-
-
-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
-    templates = jinja2.Environment(loader=jinja2.FileSystemLoader(
-        template_dir))
-    template = templates.get_template(template_name)
-    return template.render(context)
-
-# Protocols
-TCP = 'TCP'
-UDP = 'UDP'
-
-
-def expose(port, protocol='TCP'):
-    cmd = [
-        'open-port',
-        '{}/{}'.format(port, protocol)]
-    subprocess.check_call(cmd)
-
-
-def juju_log(severity, message):
-    cmd = [
-        'juju-log',
-        '--log-level', severity,
-        message]
-    subprocess.check_call(cmd)
-
-
-def relation_ids(relation):
-    cmd = [
-        'relation-ids',
-        relation]
-    result = str(subprocess.check_output(cmd)).split()
-    if result == "":
-        return None
-    else:
-        return result
-
-
-def relation_list(rid):
-    cmd = [
-        'relation-list',
-        '-r', rid]
-    result = str(subprocess.check_output(cmd)).split()
-    if result == "":
-        return None
-    else:
-        return result
-
-
-def relation_get(attribute, unit=None, rid=None):
-    cmd = [
-        'relation-get']
-    if rid:
-        cmd.append('-r')
-        cmd.append(rid)
-    cmd.append(attribute)
-    if unit:
-        cmd.append(unit)
-    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def relation_set(**kwargs):
-    cmd = [
-        'relation-set']
-    args = []
-    for k, v in kwargs.items():
-        if k == 'rid':
-            if v:
-                cmd.append('-r')
-                cmd.append(v)
-        else:
-            args.append('{}={}'.format(k, v))
-    cmd += args
-    subprocess.check_call(cmd)
-
-
-def unit_get(attribute):
-    cmd = [
-        'unit-get',
-        attribute]
-    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def config_get(attribute):
-    cmd = [
-        'config-get',
-        '--format',
-        'json']
-    out = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    cfg = json.loads(out)
-
-    try:
-        return cfg[attribute]
-    except KeyError:
-        return None
-
-
-def get_unit_hostname():
-    return socket.gethostname()
-
-
-def get_host_ip(hostname=unit_get('private-address')):
-    try:
-        # Test to see if already an IPv4 address
-        socket.inet_aton(hostname)
-        return hostname
-    except socket.error:
-        answers = dns.resolver.query(hostname, 'A')
-        if answers:
-            return answers[0].address
-    return None
-
-
-def _svc_control(service, action):
-    subprocess.check_call(['service', service, action])
-
-
-def restart(*services):
-    for service in services:
-        _svc_control(service, 'restart')
-
-
-def stop(*services):
-    for service in services:
-        _svc_control(service, 'stop')
-
-
-def start(*services):
-    for service in services:
-        _svc_control(service, 'start')
-
-
-def reload(*services):
-    for service in services:
-        try:
-            _svc_control(service, 'reload')
-        except subprocess.CalledProcessError:
-            # Reload failed - either service does not support reload
-            # or it was not running - restart will fixup most things
-            _svc_control(service, 'restart')
-
-
-def running(service):
-    try:
-        output = subprocess.check_output(['service', service, 'status'])
-    except subprocess.CalledProcessError:
-        return False
-    else:
-        if ("start/running" in output or "is running" in output):
-            return True
-        else:
-            return False
-
-
-def is_relation_made(relation, key='private-address'):
-    for r_id in (relation_ids(relation) or []):
-        for unit in (relation_list(r_id) or []):
-            if relation_get(key, rid=r_id, unit=unit):
-                return True
-    return False
=== removed file 'hooks/charmhelpers/setup.py'
--- hooks/charmhelpers/setup.py 2014-07-09 12:37:36 +0000
+++ hooks/charmhelpers/setup.py 1970-01-01 00:00:00 +0000
@@ -1,12 +0,0 @@
-#!/usr/bin/env python
-
-from distutils.core import setup
-
-setup(name='charmhelpers',
-      version='1.0',
-      description='this is dumb',
-      author='nobody',
-      author_email='dummy@amulet',
-      url='http://google.com',
-      packages=[],
-)
=== modified file 'hooks/hdp-hadoop-common.py'
--- hooks/hdp-hadoop-common.py 2014-11-21 19:35:29 +0000
+++ hooks/hdp-hadoop-common.py 2015-02-13 00:38:49 +0000
@@ -1,19 +1,28 @@
 #!/usr/bin/env python
 import os
+import re
 import subprocess
+import pwd
 import sys
 import tarfile
 import shlex
 import shutil
 import inspect
 import time
+from functools import partial
+import socket
+
+import bootstrap
+bootstrap.install_charmhelpers()
 
 from hdputils import install_base_pkg, updateHDPDirectoryScript, config_all_nodes, \
-    setHadoopEnvVar, home, hdpScript, configureJAVA, config_all_nodes
-from bdutils import setDirPermission, fileSetKV, append_bashrc, is_jvm_service_active, setHadoopConfigXML, chownRecursive
+    setHadoopEnvVar, home, hdpScript, configureJAVA, config_all_nodes, \
+    javaPath
+from bdutils import setDirPermission, fileSetKV, append_bashrc, is_jvm_service_active, setHadoopConfigXML, \
+    chownRecursive, HDFS_command, fconfigured
 
-from charmhelpers.lib.utils import config_get, get_unit_hostname
-from shutil import rmtree, copyfile
+from shutil import rmtree, copyfile, copy
+from charmhelpers.core import hookenv
 from charmhelpers.core.hookenv import log, Hooks, relation_get, relation_set, unit_get, open_port, local_unit, related_units
 from charmhelpers.core.host import service_start, service_stop, add_user_to_group
 from time import sleep
@@ -68,27 +77,71 @@
 # in config.yaml
 def updateDirectoriesScript():
     log("==> {}".format(inspect.stack()[0][3]),"INFO")
-    fileSetKV(directoriesScript, "DFS_NAME_DIR=", "\""+config_get('dfs_name_dir')+"\";")
-    fileSetKV(directoriesScript, "DFS_DATA_DIR=","\""+ config_get('dfs_data_dir')+"\";")
-    fileSetKV(directoriesScript, "FS_CHECKPOINT_DIR=", "\""+config_get('fs_checkpoint_dir')+"\";")
-    fileSetKV(directoriesScript, "YARN_LOCAL_DIR=", "\""+config_get('yarn_local_dir')+"\";")
-    fileSetKV(directoriesScript, "YARN_LOCAL_LOG_DIR=", "\""+config_get('yarn_local_log_dir')+"\";")
-    fileSetKV(directoriesScript, "ZOOKEEPER_DATA_DIR=", "\""+config_get('zookeeper_data_dir')+"\";")
+    fileSetKV(directoriesScript, "DFS_NAME_DIR=", "\""+hookenv.config('dfs_name_dir')+"\";")
+    fileSetKV(directoriesScript, "DFS_DATA_DIR=","\""+ hookenv.config('dfs_data_dir')+"\";")
+    fileSetKV(directoriesScript, "FS_CHECKPOINT_DIR=", "\""+hookenv.config('fs_checkpoint_dir')+"\";")
+    fileSetKV(directoriesScript, "YARN_LOCAL_DIR=", "\""+hookenv.config('yarn_local_dir')+"\";")
+    fileSetKV(directoriesScript, "YARN_LOCAL_LOG_DIR=", "\""+hookenv.config('yarn_local_log_dir')+"\";")
+    fileSetKV(directoriesScript, "ZOOKEEPER_DATA_DIR=", "\""+hookenv.config('zookeeper_data_dir')+"\";")
 
 
 def createHDPHadoopConf():
     log("==> {}".format(inspect.stack()[0][3]),"INFO")
 
+    edits = {
+        'hadoop-env.sh': {
+            r'TODO-JDK-PATH': javaPath,
+        },
+        'yarn-env.sh': {
+            r'TODO-JDK-PATH': javaPath,
+        },
+    }
+
     HADOOP_CONF_DIR = os.environ["HADOOP_CONF_DIR"]
     HDPConfPath = os.path.join(os.path.sep,home, hdpScript, "configuration_files", "core_hadoop")
     source = os.listdir(HDPConfPath)
     for files in source:
         srcFile = os.path.join(os.path.sep, HDPConfPath, files)
         desFile = os.path.join(os.path.sep, HADOOP_CONF_DIR, files)
-        shutil.copyfile(srcFile, desFile)
+        edit_file(srcFile, desFile, edits.get(files))
     subprocess.call(createDadoopConfDir)
 
 
+def edit_file(src, dst=None, edits=None):
+    """
+    Copy file from src to dst, applying one or more per-line edits.
+
+    :param str src: Path of source file to copy from
+    :param str dst: Path of destination file. If None or the same as `src`,
+        the edits will be made in-place.
+    :param dict edits: A mapping of regular expression patterns to regular
+        expression replacement patterns (using backreferences).
+        Any matching replacements will be made for each line.
+        If the order of replacements is important, an instance
+        of the `OrderedDict` class from the `collections`
+        package should be used.
+    """
+    if not edits:
+        if not dst or dst == src:
+            return  # in-place edit with no changes is a noop
+        shutil.copyfile(src, dst)  # no edits is just a copy
+        return
+    cleanup = False
+    if not dst or dst == src:
+        dst = src
+        src = '{}.bak'.format(src)
+        os.rename(dst, src)
+        cleanup = True
+    with open(src, 'r') as infile:
+        with open(dst, 'w') as outfile:
+            for line in infile.readlines():
+                for pat, repl in edits.iteritems():
+                    line = re.sub(pat, repl, line)
+                outfile.write(line)
+    if cleanup:
+        os.remove(src)
+
+
 def uninstall_base_pkg():
     log("==> {}".format(inspect.stack()[0][3]),"INFO")
     packages = ['ntp',
@@ -104,7 +157,6 @@
                 'liblzo2-2',
                 'liblzo2-dev',
                 'libhdfs0',
-                'libhdfs0-dev',
                 'hadoop-lzo']
     apt_purge(packages)
     shutil.rmtree(hdpScriptPath)
@@ -130,26 +182,41 @@
     setDirPermission(os.environ['MAPRED_PID_DIR'], os.environ['MAPRED_USER'], group, 0755)
     setDirPermission(os.environ['YARN_LOCAL_DIR'], os.environ['YARN_USER'], group, 0755)
     setDirPermission(os.environ['YARN_LOCAL_LOG_DIR'], os.environ['YARN_USER'], group, 0755)
-    hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml')
-    setHadoopConfigXML(hdfsConfPath, "dfs.namenode.name.dir", config_get('dfs_name_dir'))
-    setHadoopConfigXML(hdfsConfPath, "dfs.datanode.data.dir", config_get('dfs_data_dir'))
-    setHadoopConfigXML(hdfsConfPath, "dfs.namenode.checkpoint.dir", config_get('fs_checkpoint_dir'))
+    hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'hdfs-site.xml')
+    setHadoopConfigXML(hdfsConfPath, "dfs.namenode.name.dir", hookenv.config('dfs_name_dir'))
+    setHadoopConfigXML(hdfsConfPath, "dfs.datanode.data.dir", hookenv.config('dfs_data_dir'))
+    setHadoopConfigXML(hdfsConfPath, "dfs.namenode.checkpoint.dir", hookenv.config('fs_checkpoint_dir'))
+    yarnConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'yarn-site.xml')
+    setHadoopConfigXML(yarnConfPath, "yarn.scheduler.maximum-allocation-mb", "2048")
+    setHadoopConfigXML(yarnConfPath, "yarn.scheduler.minimum-allocation-mb", "682")
+    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.cpu-vcores", "2")
+    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.memory-mb", "2048")
+    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.resource.percentage-physical-cpu-limit", "100")
+    mapredConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'], 'mapred-site.xml')
+#    setHadoopConfigXML(mapredConfPath, "mapreduce.application.framework.path", None)
+    setHadoopConfigXML(mapredConfPath, "mapreduce.map.memory.mb", "682")
+    setHadoopConfigXML(mapredConfPath, "mapreduce.reduce.memory.mb", "682")
+    setHadoopConfigXML(mapredConfPath, "mapreduce.map.java.opts", "-Xmx546m")
+    setHadoopConfigXML(mapredConfPath, "mapreduce.reduce.java.opts", "-Xmx546m")
+    setHadoopConfigXML(mapredConfPath, "mapreduce.task.io.sort.factor", "100")
+    setHadoopConfigXML(mapredConfPath, "mapreduce.task.io.sort.mb", "273")
+    setHadoopConfigXML(mapredConfPath, "yarn.app.mapreduce.am.resource.mb", "682")
 
 # candidate for BD charm helper
 def format_namenode(hdfsUser):
     log("==> hdfs format for user={}".format(hdfsUser),"INFO")
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop/bin/hadoop namenode -format'".format(hdfsUser))
+    cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-client/bin/hadoop namenode -format'".format(hdfsUser))
     subprocess.call(cmd)
 
 def callHDFS_fs(command):
-    cmd = shlex.split("su hdfs -c '/usr/lib/hadoop/bin/hadoop fs {}'".format(command))
+    cmd = shlex.split("su hdfs -c '/usr/hdp/current/hadoop-client/bin/hadoop fs {}'".format(command))
     subprocess.call(cmd)
 
 
 ''' Start Jobhistory Server '''
 def start_jobhistory():
     log("==> Start Job History Server")
-    path = os.path.join(os.path.sep, 'usr', 'lib', 'hadoop-yarn', 'bin', 'container-executor')
+    path = '/usr/hdp/current/hadoop-yarn-client/bin/container-executor'
     chownRecursive(path, 'root', 'hadoop')
     os.chmod(path, 650)
     callHDFS_fs("-mkdir -p /mr-history/tmp")
@@ -161,16 +228,16 @@
     callHDFS_fs("-chmod -R 1777 /app-logs ")
     callHDFS_fs("-chown yarn /app-logs ")
     hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config {} start historyserver'".\
+    os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec"
+    cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-mapreduce-client/sbin/mr-jobhistory-daemon.sh --config {} start historyserver'".\
         format(os.environ['MAPRED_USER'], hadoopConfDir))
     subprocess.call(cmd)
 
 '''Stop Job History Server'''
 def stop_jobhistory():
     hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config {} stop historyserver'".\
+    os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec"
+    cmd = shlex.split("su {} -c '/usr/hdp/current/hadoop-mapreduce-client/sbin/mr-jobhistory-daemon.sh --config {} stop historyserver'".\
         format(os.environ['MAPRED_USER'], hadoopConfDir))
     subprocess.call(cmd)
 
@@ -182,47 +249,49 @@
     stop_jobhistory()
     start_jobhistory()
 
-''' Start Namenode Service
-    hdfsUser: System user for Hadoop Distributed Filesystem
-'''
+
+def hdfs_service(service, action, username):
+    daemon = '/usr/hdp/current/hadoop-hdfs-{}/../hadoop/sbin/hadoop-daemon.sh'
+    log("==> {} {} for user={}".format(action, service, username), "INFO")
+    subprocess.check_call(shlex.split(
+        "su {user} -c '{daemon} --config {conf_dir} {action} {service}'".format(
+            user=username,
+            daemon=daemon.format(service),
+            service=service,
+            action=action,
+            conf_dir=os.environ["HADOOP_CONF_DIR"])))
+
+
 def start_namenode(hdfsUser):
-    log("==> start namenode for user={}".format(hdfsUser), "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} start namenode'".\
-        format(hdfsUser, hadoopConfDir))
-    subprocess.check_call(cmd)
+    '''
+    Start Namenode Service
+    hdfsUser: System user for Hadoop Distributed Filesystem
+    '''
+    hdfs_service('namenode', 'start', hdfsUser)
 
-'''Stop Name Node server
-    hdfsUser: System user for Hadoop Distributed Filesystem
-'''
+
 def stop_namenode(hdfsUser):
-    log("==> stop namenode for user={}".format(hdfsUser), "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop namenode'".\
-        format(hdfsUser, hadoopConfDir))
-    subprocess.call(cmd)
+    '''
+    Stop Name Node server
+    hdfsUser: System user for Hadoop Distributed Filesystem
+    '''
+    hdfs_service('namenode', 'stop', hdfsUser)
 
-'''
-    Start Data Node
-    hdfsUser: System user for Hadoop Distributed Filesystem
-'''
+
 def start_datanode(hdfsUser):
-    log("==> start datanode for user={}".format(hdfsUser), "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} start datanode'".\
-        format(hdfsUser, hadoopConfDir))
-    subprocess.check_call(cmd)
+    '''
+    Start Data Node
+    hdfsUser: System user for Hadoop Distributed Filesystem
+    '''
+    hdfs_service('datanode', 'start', hdfsUser)
 
-'''
-    Stop Data Node
-    hdfsUser: System user for Hadoop Distributed Filesystem
-'''
+
 def stop_datanode(hdfsUser):
-    log("==> stop datanode for user={}".format(hdfsUser), "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop datanode'".\
-        format(hdfsUser, hadoopConfDir))
-    subprocess.call(cmd)
-# candidate for BD charm helper
+    '''
+    Stop Data Node
+    hdfsUser: System user for Hadoop Distributed Filesystem
+    '''
+    hdfs_service('datanode', 'stop', hdfsUser)
+
 
 '''
@@ -238,52 +307,49 @@
     setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.resource-tracker.address", RMhostname+":8025")
     setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.scheduler.address", RMhostname+":8030")
     setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.address", RMhostname+":8050")
+    setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.hostname", RMhostname)
     setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.admin.address", RMhostname+":8141")
-    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.local-dirs", config_get('yarn_local_dir'))
-    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.log-dirs", config_get('yarn_local_log_dir'))
+    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.local-dirs", hookenv.config('yarn_local_dir'))
+    setHadoopConfigXML(yarnConfPath, "yarn.nodemanager.log-dirs", hookenv.config('yarn_local_log_dir'))
     setHadoopConfigXML(yarnConfPath, "yarn.log.server.url","http://"+RMhostname+":19888")
     setHadoopConfigXML(yarnConfPath, "yarn.resourcemanager.webapp.address", RMhostname+":8088")
     #jobhistory server
     setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.webapp.address", RMhostname+":19888")
     setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.address", RMhostname+":10020")
 
-''' Start Resource Manager server '''
+
+def yarn_service(service, action, username):
+    daemon = '/usr/hdp/current/hadoop-yarn-client/sbin/yarn-daemon.sh'
+    log("==> {} {} for user={}".format(action, service, username), "INFO")
+    os.environ["HADOOP_LIBEXEC_DIR"] = "/usr/hdp/current/hadoop-client/libexec"
+    subprocess.check_call(shlex.split(
+        "su {user} -c '{daemon} --config {conf_dir} {action} {service}'".format(
+            user=username,
+            daemon=daemon.format(service),
+            service=service,
+            action=action,
+            conf_dir=os.environ["HADOOP_CONF_DIR"])))
+
+
 def start_resourcemanager(yarnUser):
-    log("==> start resourcemanager", "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} start resourcemanager'".\
-        format(yarnUser, hadoopConfDir))
-    subprocess.call(cmd)
-
-''' Stop Resource Manager server '''
+    ''' Start Resource Manager server '''
+    yarn_service('resourcemanager', 'start', yarnUser)
+
+
 def stop_resourcemanager(yarnUser):
-    log("==> stop resourcemanager", "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop resourcemanager'".\
-        format(yarnUser, hadoopConfDir))
-    subprocess.check_call(cmd)
-
-''' Start Node Manager daemon on each compute node '''
+    ''' Stop Resource Manager server '''
+    yarn_service('resourcemanager', 'stop', yarnUser)
+
+
 def start_nodemanager(yarnUser):
-    log("==> start nodemanager", "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} start nodemanager'".\
-        format(yarnUser, hadoopConfDir))
-    subprocess.check_call(cmd)
-
-
-''' Stop Node Manager daemon on each compute node '''
+    ''' Start Node Manager daemon on each compute node '''
+    yarn_service('nodemanager', 'start', yarnUser)
+
 
 def stop_nodemanager(yarnUser):
-    log("==> stop nodemanager", "INFO")
-    hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
-    os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
-    cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop nodemanager'".\
-        format(yarnUser, hadoopConfDir))
-    subprocess.call(cmd)
+    ''' Stop Node Manager daemon on each compute node '''
+    yarn_service('nodemanager', 'stop', yarnUser)
+
 
 '''Stop all running hadoop services
     NOTE: Order is important - DO NOT CHANGE
@@ -336,6 +402,8 @@
             break
 
 def configureHDFS(hostname):
+    cmd = "/usr/bin/hdp-select set all 2.2.0.0-2041"
+    subprocess.check_call(shlex.split(cmd))
     hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml')
     coreConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'core-site.xml')
     setHadoopConfigXML(coreConfPath, "fs.defaultFS", "hdfs://"+hostname+":8020")
@@ -361,11 +429,10 @@
                 'liblzo2-2',
                 'liblzo2-dev',
                 'libhdfs0',
-                'libhdfs0-dev',
                 'hadoop-lzo']
     install_base_pkg(packages)
     config_hadoop_nodes()
-    fileSetKV(hosts_path, unit_get('private-address')+' ', get_unit_hostname())
+    fileSetKV(hosts_path, unit_get('private-address')+' ', socket.gethostname())
 
 
 @hooks.hook('resourcemanager-relation-joined')
@@ -373,17 +440,23 @@
     log ("==> resourcemanager-relation-joined","INFO")
     if is_jvm_service_active("ResourceManager"):
         relation_set(resourceManagerReady=True)
-        relation_set(resourceManager_hostname=get_unit_hostname())
+        relation_set(resourceManager_hostname=socket.gethostname())
         return
+    # waiting for namenode, however this will not work on a distributed hdfs system.
     if not is_jvm_service_active("NameNode"):
         sys.exit(0)
+    shutil.copy(os.path.join(os.path.sep, os.environ['CHARM_DIR'],\
+        'files', 'scripts', "terasort.sh"), home)
     setHadoopEnvVar()
     relation_set(resourceManager_ip=unit_get('private-address'))
-    relation_set(resourceManager_hostname=get_unit_hostname())
+    relation_set(resourceManager_hostname=socket.gethostname())
     configureYarn(unit_get('private-address'))
     start_resourcemanager(os.environ["YARN_USER"])
     start_jobhistory()
     open_port(8025)
+    open_port(8050)
+    open_port(8020)
+    open_port(8042)
     open_port(8030)
     open_port(8050)
     open_port(8141)
@@ -414,7 +487,9 @@
     open_port(19888)
     open_port(8088)
     open_port(10020)
-    relation_set(nodemanager_hostname=get_unit_hostname())
+    open_port(8042)
+
+    relation_set(nodemanager_hostname=socket.gethostname())
 
 
 @hooks.hook('resourcemanager-relation-broken')
@@ -437,25 +512,32 @@
 
     if is_jvm_service_active("NameNode"):
         relation_set(nameNodeReady=True)
-        relation_set(namenode_hostname=get_unit_hostname())
+        relation_set(namenode_hostname=socket.gethostname())
         return
     setHadoopEnvVar()
     setDirPermission(os.environ['DFS_NAME_DIR'], os.environ['HDFS_USER'], os.environ['HADOOP_GROUP'], 0755)
-    relation_set(namenode_hostname=get_unit_hostname())
+    relation_set(namenode_hostname=socket.gethostname())
     configureHDFS(unit_get('private-address'))
     format_namenode(os.environ["HDFS_USER"])
     start_namenode(os.environ["HDFS_USER"])
+    # Namenode might not be ready yet
+    sleep(15)
+    # Create hdfs (superuser) and ubunut (default user)
+    HDFS_command("dfs -mkdir -p /user/hdfs")
+    HDFS_command("dfs -mkdir -p /user/ubuntu")
+    HDFS_command("dfs -chown ubuntu /user/ubuntu")
+    HDFS_command("dfs -chmod -R 755 /user/ubuntu")
     start_jobhistory()
     open_port(8020)
     open_port(8010)
     open_port(50070)
     open_port(50075)
     open_port(8480)
-    open_port(50470)
+    open_port(50070)
+    open_port(45454)
     if not is_jvm_service_active("NameNode"):
         log("error ==> NameNode failed to start")
         sys.exit(1)
-    sleep(5)
     relation_set(nameNodeReady=True)
 
 
@@ -480,7 +562,7 @@
     open_port(8480)
     open_port(50010)
     open_port(50075)
-    relation_set(datanode_hostname = get_unit_hostname())
+    relation_set(datanode_hostname = socket.gethostname())
 
 
 @hooks.hook('config-changed')
@@ -499,6 +581,15 @@
         sys.exit(0)
     log("Configuring namenode - changed phase", "INFO")
     fileSetKV(hosts_path, relation_get('private-address')+' ', datanode_host)
+
+    # Upload the MapReduce tarball to HDFS.
+    if not fconfigured("mapred_init"):
+        HDFS_command("dfs -mkdir -p /hdp/apps/2.2.0.0-2041/mapreduce/")
+        HDFS_command("dfs -put /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.2.0.0-2041/mapreduce/")
+        HDFS_command("dfs -chown -R hdfs:hadoop /hdp")
+        HDFS_command("dfs -chmod -R 555 /hdp/apps/2.2.0.0-2041/mapreduce")
+        HDFS_command("dfs -chmod -R 444 /hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz")
+
 
 @hooks.hook('start')
 def start():
@@ -525,14 +616,14 @@
 
 @hooks.hook('compute-nodes-relation-joined')
 def compute_nodes_relation_joined():
-    log("==> compute_nodes_relation_joined {}".format(get_unit_hostname()),"INFO")
-    relation_set(hostname=get_unit_hostname())
+    log("==> compute_nodes_relation_joined {}".format(socket.gethostname()),"INFO")
+    relation_set(hostname=socket.gethostname())
 
 
 @hooks.hook('hadoop-nodes-relation-joined')
 def hadoop_nodes_relation_joined():
-    log("==> hadoop_nodes_relation_joined {}".format(get_unit_hostname()),"INFO")
-    relation_set(hostname=get_unit_hostname())
+    log("==> hadoop_nodes_relation_joined {}".format(socket.gethostname()),"INFO")
+    relation_set(hostname=socket.gethostname())
 
 
 
@@ -555,7 +646,5 @@
 resourceManagerReady = False
 
 
-
-
 if __name__ == "__main__":
     hooks.execute(sys.argv)
 
=== modified file 'hooks/hdputils.py'
--- hooks/hdputils.py 2014-11-21 18:37:23 +0000
+++ hooks/hdputils.py 2015-02-13 00:38:49 +0000
@@ -17,24 +17,27 @@
 
 def install_base_pkg(packages):
     log ("==> install_base_pkg", "INFO")
-    wgetPkg("http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.1.3.0/hdp.list -O /etc/apt/sources.list.d/hdp.list","")
-    cmd =gpg_script 
+    wgetPkg("http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/GA/2.2.0.0/hdp.list",
+            "/etc/apt/sources.list.d/hdp.list")
+    cmd = gpg_script
     subprocess.call(cmd)
     apt_update()
     apt_install(packages)
     if not os.path.isdir(os.path.join(os.path.sep,'usr','lib', 'hadoop')):
         log("Error, apt-get install Hadoop failed", "ERROR")
         sys.exit(1)
-    os.chdir(home);
-    wgetPkg("http://public-repo-1.hortonworks.com/HDP/tools/2.1.1.0/hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz","")
-    if tarfile.is_tarfile(tarfilename):
-        tball = tarfile.open(tarfilename)
+    os.chdir(home)
+    wgetPkg("http://public-repo-1.hortonworks.com/HDP/tools/2.2.0.0/hdp_manual_install_rpm_helper_files-2.2.0.0.2041.tar.gz",
+            tarfilename)
+    if not tarfile.is_tarfile(tarfilename):
+        log.error('Unable to extract Hadoop Tarball')
+        sys.exit(1)
+    with tarfile.open(tarfilename) as tball:
+        extracted_dirname = tball.next().name
         tball.extractall(home)
-    else:
-        log ("Unable to extract Hadoop Tarball ", "ERROR")
     if os.path.isdir(hdpScript):
         shutil.rmtree(hdpScript)
-    os.rename(tarfilenamePre, hdpScript)
+    os.rename(extracted_dirname, hdpScript)
     log("<== install_base_pkg", "INFO")
 
 def uninstall_base_pkg(packages):
@@ -116,10 +119,9 @@
 ################ Global values #########################
 home = os.path.join(os.path.sep, "home", "ubuntu")
 javaPath = "/usr/lib/jvm/java-1.7.0-openjdk-amd64"
-tarfilename="hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz"
-tarfilenamePre="hdp_manual_install_rpm_helper_files-2.1.1.385"
+tarfilename="/tmp/hdp-helpers.tar.gz"
 HDP_PGP_SCRIPT = 'gpg_ubuntu.sh'
-gpg_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'], os.path.sep, os.environ['CHARM_DIR'], 'files', 'scripts',HDP_PGP_SCRIPT)
+gpg_script = os.path.join(os.environ['CHARM_DIR'], 'files', 'scripts', HDP_PGP_SCRIPT)
 hdpScript = "hdp_scripts"
 hdpScriptPath = os.path.join(os.path.sep,home, hdpScript,'scripts')
 usersAndGroupsScript = os.path.join(os.path.sep, hdpScriptPath, "usersAndGroups.sh")
 
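The refactored extraction flow in `install_base_pkg` above (validate the archive, read the top-level member name, extract, then rename into place) can be sketched as a standalone helper. This is a minimal sketch with a hypothetical name (`install_helper_scripts`) and local paths; the charm first fetches the archive with `wgetPkg`, which is omitted here:

```python
import os
import shutil
import tarfile


def install_helper_scripts(tarball_path, dest_dir, link_name):
    """Extract a helper tarball and rename its top-level directory.

    Mirrors the pattern in install_base_pkg: validate the archive,
    read the top-level member name from the first entry, extract
    everything, then rename the extracted directory to a stable name.
    """
    if not tarfile.is_tarfile(tarball_path):
        raise RuntimeError('Unable to extract Hadoop Tarball')
    with tarfile.open(tarball_path) as tball:
        # the first member of the archive is its top-level directory
        extracted_dirname = tball.next().name
        tball.extractall(dest_dir)
    target = os.path.join(dest_dir, link_name)
    if os.path.isdir(target):
        shutil.rmtree(target)  # replace any stale copy
    os.rename(os.path.join(dest_dir, extracted_dirname), target)
    return target
```

Reading the directory name from the archive itself is what lets the charm drop the hard-coded `tarfilenamePre` global: the helper works regardless of the version embedded in the tarball's directory name.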
=== added file 'resources.yaml'
--- resources.yaml 1970-01-01 00:00:00 +0000
+++ resources.yaml 2015-02-13 00:38:49 +0000
@@ -0,0 +1,11 @@
+resources:
+  pathlib:
+    pypi: path.py>=7.0
+  pyaml:
+    pypi: pyaml
+  six:
+    pypi: six
+  charmhelpers:
+    pypi: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150212185722-5i0c57g7725yw71r/charmhelpers0.2.2.ta-20150203190143-yrb1xsbp2bffbyp5-1/charmhelpers-0.2.2.tar.gz
+    hash: 4a6daacc352f7381b38e3efffe760f6c43a15452bdaf66e50f234ea425b91873
+    hash_type: sha256
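The new `resources.yaml` pins the charmhelpers tarball to a sha256 digest, so the fetched file can be verified before it is installed. A minimal sketch of that check (the function name `verify_resource` is illustrative, not part of the charm or its resource-fetching library):

```python
import hashlib


def verify_resource(path, expected_hash, hash_type='sha256'):
    """Check a fetched file against its pinned digest, as declared
    by the hash/hash_type keys in resources.yaml."""
    h = hashlib.new(hash_type)
    with open(path, 'rb') as f:
        # hash in chunks so large tarballs don't need to fit in memory
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest() == expected_hash
```

Pinning a digest alongside the URL means a re-uploaded or corrupted tarball at the same location is rejected rather than silently installed.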
=== modified file 'tests/01-hadoop-cluster-deployment-1.py'
--- tests/01-hadoop-cluster-deployment-1.py 2014-11-21 18:37:23 +0000
+++ tests/01-hadoop-cluster-deployment-1.py 2015-02-13 00:38:49 +0000
@@ -1,141 +1,85 @@
 #!/usr/bin/python3
 
+import unittest
 import amulet
 import yaml
 import os
-class TestDeployment(object):
-    def __init__(self):
-        self.d = amulet.Deployment(series='trusty')
-        bpath = os.path.join(os.path.dirname( __file__), "hadoop_cluster.yaml")
+
+
+class TestDeployment(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        cls.d = amulet.Deployment(series='trusty')
+        bpath = os.path.join(os.path.dirname(__file__), "hadoop_cluster.yaml")
         with open(bpath) as f:
             bun = f.read()
-        self.d.load(yaml.safe_load(bun))
+        cls.d.load(yaml.safe_load(bun))
         try:
-            self.d.setup(timeout=9000)
-            self.d.sentry.wait()
+            cls.d.setup(timeout=9000)
+            cls.d.sentry.wait()
         except amulet.helpers.TimeoutError:
             amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
         except:
             raise
-        self.master_unit = self.d.sentry.unit['yarn-hdfs-master/0']
-        self.compute_unit = self.d.sentry.unit['compute-node/0']
+        cls.master_unit = cls.d.sentry.unit['yarn-hdfs-master/0']
+        cls.compute_unit = cls.d.sentry.unit['compute-node/0']
 
-    def run(self):
-        for test in dir(self):
-            if test.startswith('test_'):
-                getattr(self, test)()
-############################################################
-# Validate hadoop services on master node have been started
-############################################################
     def test_hadoop_master_service_status(self):
-        o,c= self.master_unit.run("jps | awk '{print $2}'")
-        if o.find('ResourceManager') == -1:
-            amulet.raise_status(amulet.FAIL, msg="ResourceManager not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="ResourceManager started")
-        if o.find('JobHistoryServer') == -1:
-            amulet.raise_status(amulet.FAIL, msg="JobHistoryServer not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="JobHistoryServer started")
-        if o.find('NameNode') == -1:
-            amulet.raise_status(amulet.FAIL, msg="NameNode not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="NameNode started")
+        """
+        Validate hadoop services on master node have been started
+        """
+        output, retcode = self.master_unit.run("pgrep -a java")
+        assert 'ResourceManager' in output, "ResourceManager not started"
+        assert 'JobHistoryServer' in output, "JobHistoryServer not started"
+        assert 'NameNode' in output, "NameNode not started"
 
-############################################################
-# Validate hadoop services on compute node have been started
-############################################################
     def test_hadoop_compute_service_status(self):
-        o,c= self.compute_unit.run("jps | awk '{print $2}'")
-        if o.find('NodeManager') == -1:
-            amulet.raise_status(amulet.FAIL, msg="NodeManager not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="NodeManager started")
-        if o.find('DataNode') == -1:
-            amulet.raise_status(amulet.FAIL, msg="DataServer not started")
-        else:
-            amulet.raise_status(amulet.PASS, msg="DataServer started")
-###########################################################################
-# Validate admin few hadoop activities on HDFS cluster.
-###########################################################################
-# 1) This test validates mkdir on hdfs cluster
-###########################################################################
-    def test_hdfs_mkdir(self):
-        o,c= slef.master_unit.run("su hdfs -c 'hdfs dfs -mkdir -p /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully created a user directory on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" Created a user directory on hdfs FAILED")
-###########################################################################
-# 2) This test validates change hdfs dir owner on the cluster
-###########################################################################
-    def test_hdfs_dir_owner(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -chown ubuntu:ubuntu /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully assigned an owner to directory on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" assigning an owner to hdfs directory FAILED")
-
-###########################################################################
-# 3) This test validates setting hdfs directory access permission on the cluster
-###########################################################################
-    def test_hdfs_dir_permission(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -chmod -R 755 /user/ubuntu'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfully set directory permission on hdfs")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" seting directory permission on hdfs FAILED")
-
-###########################################################################
-# Validate yarn mapreduce operation
-# 1) validate mapreduce execution - writing to hdfs
-# 2) validate successful mapreduce operation after the execution
-###########################################################################
-    def test_yarn_mapreduce_exe1(self):
-        o,c= self.master_unit.run("su ubuntu -c 'hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-*.jar teragen 10000 /user/ubuntu/teragenout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfull execution of teragen mapreduce app")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" teragen FAILED")
-###########################################################################
-# 2) validate successful mapreduce operation after the execution
-###########################################################################
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -ls /user/ubuntu/teragenout/_SUCCESS'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce operation was success")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce operation was not a success")
-
-###########################################################################
-# Validate yarn mapreduce operation
-# 1) validate mapreduce execution - reading and writing to hdfs
-# 2) validate successful mapreduce operation after the execution
-###########################################################################
-    def test_yarn_mapreduce_exe2(self):
-        o,c= self.master_unit.run("su ubuntu -c 'hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-*.jar terasort /user/ubuntu/teragenout /user/ubuntu/terasortout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg=" Successfull execution of terasort mapreduce app")
-        else:
-            amulet.raise_status(amulet.FAIL, msg=" terasort FAILED")
-###########################################################################
-# 2) validate a successful mapreduce operation after the execution
-###########################################################################
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -ls /user/ubuntu/terasortout/_SUCCESS'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce operation was success")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce operation was not a success")
-###########################################################################
-# validate a successful deletion of mapreduce operation result from hdfs
-###########################################################################
-    def test_hdfs_delete_dir(self):
-        o,c= self.master_unit.run("su hdfs -c 'hdfs dfs -rm -r /user/ubuntu/teragenout'")
-        if c == 0:
-            amulet.raise_status(amulet.PASS, msg="mapreduce result deleted from hdfs cluster")
-        else:
-            amulet.raise_status(amulet.FAIL, msg="mapreduce was not deleted from hdfs cluster")
+        """
+        Validate hadoop services on compute node have been started
+        """
+        output, retcode = self.compute_unit.run("pgrep -a java")
+        assert 'NodeManager' in output, "NodeManager not started"
+        assert 'DataNode' in output, "DataNode not started"
+
+    def test_hdfs_dir(self):
+        """
+        Validate a few admin hadoop activities on the HDFS cluster:
+        1) This test validates mkdir on the hdfs cluster
+        2) This test validates changing an hdfs dir owner on the cluster
+        3) This test validates setting hdfs directory access permissions on the cluster
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -mkdir -p /user/ubuntu'")
+        assert retcode == 0, "Creating a user directory on hdfs FAILED:\n{}".format(output)
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -chown ubuntu:ubuntu /user/ubuntu'")
+        assert retcode == 0, "Assigning an owner to an hdfs directory FAILED:\n{}".format(output)
+        output, retcode = self.master_unit.run("su hdfs -c 'hdfs dfs -chmod -R 755 /user/ubuntu'")
+        assert retcode == 0, "Setting directory permissions on hdfs FAILED:\n{}".format(output)
+
+    def test_yarn_mapreduce_exe(self):
+        """
+        Validate yarn mapreduce operations:
+        1) validate mapreduce execution - writing to hdfs
+        2) validate successful mapreduce operation after the execution
+        3) validate mapreduce execution - reading and writing to hdfs
+        4) validate successful mapreduce operation after the execution
+        5) validate successful deletion of mapreduce operation result from hdfs
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        jar_file = '/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar'
+        test_steps = [
+            ('teragen', "su ubuntu -c 'hadoop jar {} teragen 10000 /user/ubuntu/teragenout'".format(jar_file)),
+            ('mapreduce #1', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/teragenout/_SUCCESS'"),
+            ('terasort', "su ubuntu -c 'hadoop jar {} terasort /user/ubuntu/teragenout /user/ubuntu/terasortout'".format(jar_file)),
+            ('mapreduce #2', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/terasortout/_SUCCESS'"),
+            ('cleanup', "su hdfs -c 'hdfs dfs -rm -r /user/ubuntu/teragenout'"),
+        ]
+        for name, step in test_steps:
+            output, retcode = self.master_unit.run(step)
+            assert retcode == 0, "{} FAILED:\n{}".format(name, output)
 
 
 if __name__ == '__main__':
-    runner = TestDeployment()
-    runner.run()
+    unittest.main()
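The rewritten `test_yarn_mapreduce_exe` drives a list of named, order-dependent shell steps through one loop, failing fast with the step's output. The same pattern, decoupled from amulet for illustration (a `run` helper based on subprocess stands in for `unit.run`, and the commands below are harmless local stand-ins rather than the hadoop jobs):

```python
import subprocess


def run(cmd):
    """Stand-in for amulet's unit.run(): returns (output, retcode)."""
    proc = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
    return proc.stdout.decode(), proc.returncode


def run_steps(steps, runner=run):
    """Execute named shell steps in order; on failure, report which
    step broke and include its captured output."""
    for name, cmd in steps:
        output, retcode = runner(cmd)
        assert retcode == 0, "{} FAILED:\n{}".format(name, output)
```

Compared with the old one-raise_status-per-check style, this avoids the bug where `amulet.raise_status(amulet.PASS, ...)` exited the test run after the first passing check, and it keeps the step list readable in one place.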

Subscribers

People subscribed via source and target branches