Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk into lp:charms/trusty/apache-hadoop-hdfs-master

Proposed by Kevin W Monroe
Status: Merged
Merged at revision: 89
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk
Merge into: lp:charms/trusty/apache-hadoop-hdfs-master
Diff against target: 399 lines (+277/-38) (has conflicts)
9 files modified
README.md (+79/-29)
config.yaml (+10/-0)
hooks/callbacks.py (+42/-5)
hooks/common.py (+21/-2)
hooks/ganglia-relation-broken (+26/-0)
hooks/ganglia-relation-changed (+26/-0)
metadata.yaml (+2/-0)
templates/hadoop-metrics2.properties.j2 (+69/-0)
tests/01-basic-deployment.py (+2/-2)
Text conflict in README.md
Text conflict in hooks/callbacks.py
Text conflict in hooks/common.py
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk
Reviewer: Kevin W Monroe (status: Approve)
Review via email: mp+271160@code.launchpad.net
Revision history for this message
Kevin W Monroe (kwmonroe) :
review: Approve

Preview Diff

1=== modified file 'README.md'
2--- README.md 2015-08-24 23:05:09 +0000
3+++ README.md 2015-09-15 17:32:10 +0000
4@@ -27,35 +27,85 @@
5 hadoop jar my-job.jar
6
7
8-## Status and Smoke Test
9-
10-The services provide extended status reporting to indicate when they are ready:
11-
12- juju status --format=tabular
13-
14-This is particularly useful when combined with `watch` to track the on-going
15-progress of the deployment:
16-
17- watch -n 0.5 juju status --format=tabular
18-
19-The message for each unit will provide information about that unit's state.
20-Once they all indicate that they are ready, you can perform a "smoke test"
21-to verify that HDFS is working as expected using the built-in `smoke-test`
22-action:
23-
24- juju action do smoke-test
25-
26-After a few seconds or so, you can check the results of the smoke test:
27-
28- juju action status
29-
30-You will see `status: completed` if the smoke test was successful, or
31-`status: failed` if it was not. You can get more information on why it failed
32-via:
33-
34- juju action fetch <action-id>
35-
36-
37+<<<<<<< TREE
38+## Status and Smoke Test
39+
40+The services provide extended status reporting to indicate when they are ready:
41+
42+ juju status --format=tabular
43+
44+This is particularly useful when combined with `watch` to track the on-going
45+progress of the deployment:
46+
47+ watch -n 0.5 juju status --format=tabular
48+
49+The message for each unit will provide information about that unit's state.
50+Once they all indicate that they are ready, you can perform a "smoke test"
51+to verify that HDFS is working as expected using the built-in `smoke-test`
52+action:
53+
54+ juju action do smoke-test
55+
56+After a few seconds or so, you can check the results of the smoke test:
57+
58+ juju action status
59+
60+You will see `status: completed` if the smoke test was successful, or
61+`status: failed` if it was not. You can get more information on why it failed
62+via:
63+
64+ juju action fetch <action-id>
65+
66+
67+=======
68+## Status and Smoke Test
69+
70+The services provide extended status reporting to indicate when they are ready:
71+
72+ juju status --format=tabular
73+
74+This is particularly useful when combined with `watch` to track the on-going
75+progress of the deployment:
76+
77+ watch -n 0.5 juju status --format=tabular
78+
79+The message for each unit will provide information about that unit's state.
80+Once they all indicate that they are ready, you can perform a "smoke test"
81+to verify that HDFS is working as expected using the built-in `smoke-test`
82+action:
83+
84+ juju action do smoke-test
85+
86+After a few seconds or so, you can check the results of the smoke test:
87+
88+ juju action status
89+
90+You will see `status: completed` if the smoke test was successful, or
91+`status: failed` if it was not. You can get more information on why it failed
92+via:
93+
94+ juju action fetch <action-id>
95+
96+
97+## Monitoring
98+
99+This charm supports monitoring via Ganglia. To enable monitoring, you must
100+do **both** of the following (the order does not matter):
101+
102+ * Add a relation to the [Ganglia charm][] via the `:master` relation
103+ * Enable the `ganglia_metrics` config option
104+
105+For example:
106+
107+ juju add-relation hdfs-master ganglia:master
108+ juju set hdfs-master ganglia_metrics=true
109+
110+Enabling monitoring will restart the NameNode and all DataNode components
111+on all of the related compute-slaves. Take care to ensure that there are no
112+running jobs when enabling monitoring.
113+
114+
115+>>>>>>> MERGE-SOURCE
116 ## Deploying in Network-Restricted Environments
117
118 The Apache Hadoop charms can be deployed in environments with limited network
119
120=== modified file 'config.yaml'
121--- config.yaml 2015-04-03 16:49:17 +0000
122+++ config.yaml 2015-09-15 17:32:10 +0000
123@@ -17,3 +17,13 @@
124 default: ''
125 description: |
126 URL from which to fetch resources (e.g., Hadoop binaries) instead of Launchpad.
127+ ganglia_metrics:
128+ type: boolean
129+ default: false
130+ description: |
131+ Enable metrics using Ganglia. Note that enabling this option will
132+ have no effect if the service is not related to a ganglia service
133+ via the ganglia:master relation. Enabling this option while the
134+ relation is established will restart the NameNode and all DataNode
135+ components on all related compute-slaves.
136+ See the README for more information.
137
138=== modified file 'hooks/callbacks.py'
139--- hooks/callbacks.py 2015-08-10 22:59:27 +0000
140+++ hooks/callbacks.py 2015-09-15 17:32:10 +0000
141@@ -18,7 +18,10 @@
142
143 from charmhelpers.core import hookenv
144 from charmhelpers.core import unitdata
145-from jujubigdata.relations import DataNode
146+from jujubigdata.relations import DataNode, Ganglia
147+from charmhelpers.core.templating import render
148+from functools import partial
149+from subprocess import check_call
150
151
152 def update_working_status():
153@@ -37,7 +40,41 @@
154 hookenv.status_set('waiting', 'Waiting for compute slaves to provide DataNodes')
155 else:
156 hookenv.status_set('blocked', 'Waiting for relation to compute slaves')
157-
158-
159-def clear_active_flag():
160- unitdata.kv().set('charm.active', False)
161+<<<<<<< TREE
162+
163+
164+def clear_active_flag():
165+ unitdata.kv().set('charm.active', False)
166+=======
167+
168+
169+def clear_active_flag():
170+ unitdata.kv().set('charm.active', False)
171+
172+
173+def conf_ganglia_metrics(purgeConf=False):
174+ """
175+ Send hadoop specific metrics to a ganglia server
176+ """
177+ config = hookenv.config()
178+ ganglia_metrics = config['ganglia_metrics'] and not purgeConf
179+ ganglia_metrics_changed = ganglia_metrics != unitdata.kv().get('ganglia_metrics', False)
180+ unitdata.kv().set('ganglia_metrics', ganglia_metrics)
181+ comment = '#' if not ganglia_metrics else ''
182+ ganglia_host = 'UNSET_BY_JUJU' if not ganglia_metrics else Ganglia().host()
183+ ganglia_sink_str = comment + '*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31'
184+ hookenv.log("Configuring ganglia sink in /etc/hadoop/conf/hadoop-metrics2.properties", level=None)
185+ render(
186+ source='hadoop-metrics2.properties.j2',
187+ target='/etc/hadoop/conf/hadoop-metrics2.properties',
188+ context={
189+ 'ganglia_host': ganglia_host,
190+ 'ganglia_sink_str': ganglia_sink_str,
191+ },
192+ )
193+ if ganglia_metrics_changed:
194+ check_call(['actions/restart-hdfs'])
195+
196+
197+purge_ganglia_metrics = partial(conf_ganglia_metrics, purgeConf=True)
198+>>>>>>> MERGE-SOURCE
199
200=== modified file 'hooks/common.py' (properties changed: +x to -x)
201--- hooks/common.py 2015-08-10 22:59:27 +0000
202+++ hooks/common.py 2015-09-15 17:32:10 +0000
203@@ -103,8 +103,27 @@
204 callbacks.clear_active_flag,
205 charmframework.helpers.close_ports(dist_config.exposed_ports('hdfs-master')),
206 hdfs.stop_namenode,
207- callbacks.update_active_status,
208- ],
209+<<<<<<< TREE
210+ callbacks.update_active_status,
211+ ],
212+=======
213+ callbacks.update_active_status,
214+ ],
215+ },
216+ {
217+ 'name': 'ganglia',
218+ 'requires': [
219+ hadoop.is_installed,
220+ jujubigdata.relations.Ganglia,
221+ ],
222+ 'callbacks': [
223+ callbacks.conf_ganglia_metrics,
224+ ],
225+ 'cleanup': [
226+ callbacks.purge_ganglia_metrics
227+ ],
228+
229+>>>>>>> MERGE-SOURCE
230 },
231 ])
232 manager.manage()
233
234=== added file 'hooks/ganglia-relation-broken'
235--- hooks/ganglia-relation-broken 1970-01-01 00:00:00 +0000
236+++ hooks/ganglia-relation-broken 2015-09-15 17:32:10 +0000
237@@ -0,0 +1,26 @@
238+#!/usr/bin/env python
239+# Licensed under the Apache License, Version 2.0 (the "License");
240+# you may not use this file except in compliance with the License.
241+# You may obtain a copy of the License at
242+#
243+# http://www.apache.org/licenses/LICENSE-2.0
244+#
245+# Unless required by applicable law or agreed to in writing, software
246+# distributed under the License is distributed on an "AS IS" BASIS,
247+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
248+# See the License for the specific language governing permissions and
249+# limitations under the License.
250+
251+"""
252+All hooks in this charm are managed by the Charm Framework.
253+The framework helps manage dependencies and preconditions to ensure that
254+steps are only executed when they can be successful. As such, no additional
255+code should be added to this hook; instead, please integrate new functionality
256+into the 'callbacks' list in hooks/common.py. New callbacks can be placed
257+in hooks/callbacks.py, if necessary.
258+
259+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
260+for more information.
261+"""
262+import common
263+common.manage()
264
265=== added file 'hooks/ganglia-relation-changed'
266--- hooks/ganglia-relation-changed 1970-01-01 00:00:00 +0000
267+++ hooks/ganglia-relation-changed 2015-09-15 17:32:10 +0000
268@@ -0,0 +1,26 @@
269+#!/usr/bin/env python
270+# Licensed under the Apache License, Version 2.0 (the "License");
271+# you may not use this file except in compliance with the License.
272+# You may obtain a copy of the License at
273+#
274+# http://www.apache.org/licenses/LICENSE-2.0
275+#
276+# Unless required by applicable law or agreed to in writing, software
277+# distributed under the License is distributed on an "AS IS" BASIS,
278+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
279+# See the License for the specific language governing permissions and
280+# limitations under the License.
281+
282+"""
283+All hooks in this charm are managed by the Charm Framework.
284+The framework helps manage dependencies and preconditions to ensure that
285+steps are only executed when they can be successful. As such, no additional
286+code should be added to this hook; instead, please integrate new functionality
287+into the 'callbacks' list in hooks/common.py. New callbacks can be placed
288+in hooks/callbacks.py, if necessary.
289+
290+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
291+for more information.
292+"""
293+import common
294+common.manage()
295
296=== modified file 'metadata.yaml'
297--- metadata.yaml 2015-05-12 21:56:03 +0000
298+++ metadata.yaml 2015-09-15 17:32:10 +0000
299@@ -10,6 +10,8 @@
300 provides:
301 namenode:
302 interface: dfs
303+ ganglia:
304+ interface: monitor
305 requires:
306 datanode:
307 interface: dfs-slave
308
309=== added file 'resources/python/jujuresources-0.2.11.tar.gz'
310Binary files resources/python/jujuresources-0.2.11.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.11.tar.gz 2015-09-15 17:32:10 +0000 differ
311=== added directory 'templates'
312=== added file 'templates/hadoop-metrics2.properties.j2'
313--- templates/hadoop-metrics2.properties.j2 1970-01-01 00:00:00 +0000
314+++ templates/hadoop-metrics2.properties.j2 2015-09-15 17:32:10 +0000
315@@ -0,0 +1,69 @@
316+#
317+# Licensed to the Apache Software Foundation (ASF) under one or more
318+# contributor license agreements. See the NOTICE file distributed with
319+# this work for additional information regarding copyright ownership.
320+# The ASF licenses this file to You under the Apache License, Version 2.0
321+# (the "License"); you may not use this file except in compliance with
322+# the License. You may obtain a copy of the License at
323+#
324+# http://www.apache.org/licenses/LICENSE-2.0
325+#
326+# Unless required by applicable law or agreed to in writing, software
327+# distributed under the License is distributed on an "AS IS" BASIS,
328+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
329+# See the License for the specific language governing permissions and
330+# limitations under the License.
331+#
332+
333+# syntax: [prefix].[source|sink].[instance].[options]
334+# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details
335+
336+*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
337+# default sampling period, in seconds
338+*.period=10
339+
340+# Defining sink for Ganglia 3.1
341+{{ ganglia_sink_str }}
342+
343+# Default polling period for GangliaSink
344+*.sink.ganglia.period=10
345+
346+# default for supportsparse is false
347+*.sink.ganglia.supportsparse=true
348+
349+# Directing output to ganglia servers
350+
351+*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
352+*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
353+
354+namenode.sink.ganglia.servers={{ ganglia_host }}:8649
355+datanode.sink.ganglia.servers={{ ganglia_host }}:8649
356+jobtracker.sink.ganglia.servers={{ ganglia_host }}:8649
357+tasktracker.sink.ganglia.servers={{ ganglia_host }}:8649
358+maptask.sink.ganglia.servers={{ ganglia_host }}:8649
359+reducetask.sink.ganglia.servers={{ ganglia_host }}:8649
360+resourcemanager.sink.ganglia.servers={{ ganglia_host }}:8649
361+nodemanager.sink.ganglia.servers={{ ganglia_host }}:8649
362+historyserver.sink.ganglia.servers={{ ganglia_host }}:8649
363+journalnode.sink.ganglia.servers={{ ganglia_host }}:8649
364+resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue
365+
366+# The namenode-metrics.out file will contain metrics from all contexts
367+#namenode.sink.file.filename=namenode-metrics.out
368+# Specifying a special sampling period for namenode:
369+#namenode.sink.*.period=8
370+
371+#datanode.sink.file.filename=datanode-metrics.out
372+
373+# the following example splits metrics of different
374+# contexts to different sinks (in this case files)
375+#jobtracker.sink.file_jvm.context=jvm
376+#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
377+#jobtracker.sink.file_mapred.context=mapred
378+#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
379+
380+#tasktracker.sink.file.filename=tasktracker-metrics.out
381+
382+#maptask.sink.file.filename=maptask-metrics.out
383+
384+#reducetask.sink.file.filename=reducetask-metrics.out
385
386=== modified file 'tests/01-basic-deployment.py'
387--- tests/01-basic-deployment.py 2015-03-04 00:20:08 +0000
388+++ tests/01-basic-deployment.py 2015-09-15 17:32:10 +0000
389@@ -17,8 +17,8 @@
390 def setUpClass(cls):
391 cls.d = amulet.Deployment(series='trusty')
392 cls.d.add('apache-hadoop-hdfs-master')
393- cls.d.setup(timeout=9000)
394- cls.d.sentry.wait()
395+ cls.d.setup(timeout=900)
396+ cls.d.sentry.wait(timeout=1800)
397 cls.unit = cls.d.sentry.unit['apache-hadoop-hdfs-master/0']
398
399 def test_deploy(self):
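
The `conf_ganglia_metrics` / `purge_ganglia_metrics` pair in hooks/callbacks.py rests on two small patterns: the last-applied flag is persisted in the unit's key/value store so the expensive `actions/restart-hdfs` is only issued when the effective value actually changes, and the purge variant is derived with `functools.partial` rather than duplicated. A minimal standalone sketch of that pattern (illustrative only: `KV` is a stand-in for charmhelpers' `unitdata.kv()`, and restarts are recorded in a list instead of shelling out):

```python
from functools import partial

class KV:
    """Stand-in for charmhelpers' unitdata.kv() key/value store."""
    def __init__(self):
        self._data = {}
    def get(self, key, default=None):
        return self._data.get(key, default)
    def set(self, key, value):
        self._data[key] = value

restarts = []  # records where the charm would run check_call(['actions/restart-hdfs'])

def conf_metrics(kv, config, purge_conf=False):
    # Effective state: the config flag, forced off when purging (relation broken).
    enabled = config['ganglia_metrics'] and not purge_conf
    # Only act when the effective state differs from what was last applied.
    changed = enabled != kv.get('ganglia_metrics', False)
    kv.set('ganglia_metrics', enabled)
    if changed:
        restarts.append('restart-hdfs')
    return enabled

# Mirrors: purge_ganglia_metrics = partial(conf_ganglia_metrics, purgeConf=True)
purge_metrics = partial(conf_metrics, purge_conf=True)

kv = KV()
conf_metrics(kv, {'ganglia_metrics': True})   # enable: state changes, restart issued
conf_metrics(kv, {'ganglia_metrics': True})   # repeat: no change, no restart
purge_metrics(kv, {'ganglia_metrics': True})  # purge: forces off, restart issued
```

The repeated enable is a no-op because the stored flag already matches, which is what keeps repeated config-changed hooks from bouncing the NameNode needlessly.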
