Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk into lp:charms/trusty/apache-hadoop-hdfs-master

Proposed by Kevin W Monroe
Status: Merged
Merged at revision: 88
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk
Merge into: lp:charms/trusty/apache-hadoop-hdfs-master
Diff against target: 341 lines (+183/-38) (has conflicts)
11 files modified
DEV-README.md (+7/-0)
README.md (+36/-5)
actions.yaml (+2/-0)
actions/smoke-test (+53/-0)
dist.yaml (+0/-32)
hooks/callbacks.py (+4/-0)
hooks/common.py (+2/-0)
hooks/datanode-relation-departed (+26/-0)
hooks/namenode-relation-departed (+26/-0)
hooks/secondary-relation-departed (+26/-0)
resources.yaml (+1/-1)
Text conflict in DEV-README.md
Conflict adding file resources/python/jujuresources-0.2.9.tar.gz.  Moved existing file to resources/python/jujuresources-0.2.9.tar.gz.moved.
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk
Reviewer: Kevin W Monroe
Review status: Approve
Review via email: mp+268668@code.launchpad.net
101. By Cory Johns

Fixed permissions on test_dist_config.py

Revision history for this message
Kevin W Monroe (kwmonroe) wrote :

Realtime syslog analytics bundle test looked good. Merged.

review: Approve

Preview Diff

=== modified file 'DEV-README.md'
--- DEV-README.md 2015-06-18 17:07:45 +0000
+++ DEV-README.md 2015-08-21 21:51:24 +0000
@@ -61,10 +61,17 @@
 
 ## Manual Deployment
 
+<<<<<<< TREE
 The easiest way to deploy the core Apache Hadoop platform is to use one of
 the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 However, to manually deploy the base Apache Hadoop platform without using one
 of the bundles, you can use the following:
+=======
+The easiest way to deploy an Apache Hadoop platform is to use one of
+the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
+However, to manually deploy the base Apache Hadoop platform without using one
+of the bundles, you can use the following:
+>>>>>>> MERGE-SOURCE
 
     juju deploy apache-hadoop-hdfs-master hdfs-master
     juju deploy apache-hadoop-hdfs-secondary secondary-namenode

=== modified file 'README.md'
--- README.md 2015-06-29 14:19:09 +0000
+++ README.md 2015-08-21 21:51:24 +0000
@@ -27,6 +27,35 @@
     hadoop jar my-job.jar
 
 
+## Status and Smoke Test
+
+The services provide extended status reporting to indicate when they are ready:
+
+    juju status --format=tabular
+
+This is particularly useful when combined with `watch` to track the on-going
+progress of the deployment:
+
+    watch -n 0.5 juju status --format=tabular
+
+The message for each unit will provide information about that unit's state.
+Once they all indicate that they are ready, you can perform a "smoke test"
+to verify that HDFS is working as expected using the built-in `smoke-test`
+action:
+
+    juju action do smoke-test
+
+After a few seconds or so, you can check the results of the smoke test:
+
+    juju action status
+
+You will see `status: completed` if the smoke test was successful, or
+`status: failed` if it was not. You can get more information on why it failed
+via:
+
+    juju action fetch <action-id>
+
+
 ## Deploying in Network-Restricted Environments
 
 The Apache Hadoop charms can be deployed in environments with limited network
@@ -49,17 +78,19 @@
 of these resources:
 
     sudo pip install jujuresources
-    juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
-    juju resources serve -d /tmp/resources
+    juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
+    juju-resources serve -d /tmp/resources
 
 This will fetch all of the resources needed by this charm and serve them via a
-simple HTTP server. You can then set the `resources_mirror` config option to
-have the charm use this server for retrieving resources.
+simple HTTP server. The output from `juju-resources serve` will give you a
+URL that you can set as the `resources_mirror` config option for this charm.
+Setting this option will cause all resources required by this charm to be
+downloaded from the configured URL.
 
 You can fetch the resources for all of the Apache Hadoop charms
 (`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
 `apache-hadoop-hdfs-secondary`, `apache-hadoop-plugin`, etc) into a single
-directory and serve them all with a single `juju resources serve` instance.
+directory and serve them all with a single `juju-resources serve` instance.
 
 
 ## Contact Information

=== modified file 'actions.yaml'
--- actions.yaml 2015-06-23 17:15:23 +0000
+++ actions.yaml 2015-08-21 21:51:24 +0000
@@ -4,3 +4,5 @@
   description: All of the HDFS processes can be stopped with this Juju action.
 restart-hdfs:
   description: All of the HDFS processes can be restarted with this Juju action.
+smoke-test:
+  description: Verify that HDFS is working by creating and removing a small file.

=== added file 'actions/smoke-test'
--- actions/smoke-test 1970-01-01 00:00:00 +0000
+++ actions/smoke-test 2015-08-21 21:51:24 +0000
@@ -0,0 +1,53 @@
+#!/usr/bin/env python
+
+import sys
+
+try:
+    from charmhelpers.core import hookenv
+    from charmhelpers.core import unitdata
+    from jujubigdata.utils import run_as
+    charm_ready = unitdata.kv().get('charm.active', False)
+except ImportError:
+    charm_ready = False
+
+if not charm_ready:
+    # might not have hookenv.action_fail available yet
+    from subprocess import call
+    call(['action-fail', 'HDFS service not yet ready'])
+
+
+# verify the hdfs-test directory does not already exist
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+    output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+    if 'hdfs-test' in output:
+        hookenv.action_fail('Unable to remove existing hdfs-test directory')
+        sys.exit()
+
+# create the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-mkdir', '-p', '/tmp/hdfs-test')
+run_as('ubuntu', 'hdfs', 'dfs', '-chmod', '-R', '777', '/tmp/hdfs-test')
+
+# verify the newly created hdfs-test subdirectory exists
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+for line in output.split('\n'):
+    if '/tmp/hdfs-test' in line:
+        if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
+            hookenv.action_fail('Permissions incorrect for hdfs-test directory')
+            sys.exit()
+        break
+else:
+    hookenv.action_fail('Unable to create hdfs-test directory')
+    sys.exit()
+
+# remove the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+
+# verify the hdfs-test subdirectory has been removed
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    hookenv.action_fail('Unable to remove hdfs-test directory')
+    sys.exit()
+
+hookenv.action_set({'outcome': 'success'})

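The permission check in the smoke-test script above hinges on parsing the output of `hdfs dfs -ls /tmp` line by line. As an illustrative sketch (the helper name `check_hdfs_test_dir` and the sample `ls` lines are hypothetical, not part of the charm), the same logic can be exercised against canned output without a live cluster:

```python
def check_hdfs_test_dir(ls_output):
    """Classify /tmp/hdfs-test from `hdfs dfs -ls /tmp` output.

    Returns 'ok', 'bad-perms', or 'missing', mirroring the checks
    made by the smoke-test action.
    """
    for line in ls_output.split('\n'):
        if '/tmp/hdfs-test' in line:
            # The action requires ubuntu ownership and mode 777.
            if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
                return 'bad-perms'
            return 'ok'
    return 'missing'


# Hypothetical sample lines in the usual `hdfs dfs -ls` format.
good = 'drwxrwxrwx   - ubuntu supergroup          0 2015-08-21 21:51 /tmp/hdfs-test'
bad = 'drwxr-xr-x   - hdfs   supergroup          0 2015-08-21 21:51 /tmp/hdfs-test'

print(check_hdfs_test_dir(good))             # -> ok
print(check_hdfs_test_dir(bad))              # -> bad-perms
print(check_hdfs_test_dir('Found 0 items'))  # -> missing
```

Factoring the check this way also shows why the action treats a present-but-misowned directory differently from a missing one: only the former indicates a permissions problem rather than a failed `mkdir`.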
=== modified file 'dist.yaml'
--- dist.yaml 2015-04-16 15:46:35 +0000
+++ dist.yaml 2015-08-21 21:51:24 +0000
@@ -73,44 +73,12 @@
   # Only expose ports serving a UI or external API (i.e., namenode and
   # resourcemanager). Communication among units within the cluster does
   # not need ports to be explicitly opened.
-  # If adding a port here, you will need to update
-  # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
-  # to ensure that it is supported.
   namenode:
     port: 8020
-    exposed_on: 'hdfs-master'
   nn_webapp_http:
     port: 50070
     exposed_on: 'hdfs-master'
-  dn_webapp_http:
-    port: 50075
-    exposed_on: 'compute-slave'
-  resourcemanager:
-    port: 8032
-    exposed_on: 'yarn-master'
-  rm_webapp_http:
-    port: 8088
-    exposed_on: 'yarn-master'
-  rm_log:
-    port: 19888
-  nm_webapp_http:
-    port: 8042
-    exposed_on: 'compute-slave'
-  jobhistory:
-    port: 10020
-  jh_webapp_http:
-    port: 19888
-    exposed_on: 'yarn-master'
   # TODO: support SSL
   #nn_webapp_https:
   #  port: 50470
   #  exposed_on: 'hdfs-master'
-  #dn_webapp_https:
-  #  port: 50475
-  #  exposed_on: 'compute-slave'
-  #rm_webapp_https:
-  #  port: 8090
-  #  exposed_on: 'yarn-master'
-  #nm_webapp_https:
-  #  port: 8044
-  #  exposed_on: 'compute-slave'

=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py 2015-06-25 15:38:21 +0000
+++ hooks/callbacks.py 2015-08-21 21:51:24 +0000
@@ -37,3 +37,7 @@
         hookenv.status_set('waiting', 'Waiting for compute slaves to provide DataNodes')
     else:
         hookenv.status_set('blocked', 'Waiting for relation to compute slaves')
+
+
+def clear_active_flag():
+    unitdata.kv().set('charm.active', False)

=== modified file 'hooks/common.py'
--- hooks/common.py 2015-06-25 15:39:10 +0000
+++ hooks/common.py 2015-08-21 21:51:24 +0000
@@ -100,8 +100,10 @@
                 callbacks.update_active_status,
             ],
            'cleanup': [
+                callbacks.clear_active_flag,
                 charmframework.helpers.close_ports(dist_config.exposed_ports('hdfs-master')),
                 hdfs.stop_namenode,
+                callbacks.update_active_status,
             ],
         },
     ])

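The ordering of the `cleanup` list in hooks/common.py matters: the active flag is cleared before the NameNode is stopped, and status is recomputed afterwards, so the unit never reports itself as ready mid-teardown. A minimal sketch of that callback-list pattern, using a plain dict in place of `unitdata.kv()` (all names here are illustrative, not the actual charmframework API):

```python
# Stand-ins for unitdata.kv() and the charm's real callbacks.
kv = {'charm.active': True}
log = []

def clear_active_flag():
    # Mirrors callbacks.clear_active_flag: mark the charm inactive first.
    kv['charm.active'] = False

def stop_namenode():
    log.append('namenode stopped')

def update_active_status():
    # Status reporting consults the flag; since it was cleared before
    # teardown began, this reports 'inactive' rather than 'active'.
    log.append('active' if kv['charm.active'] else 'inactive')

# Run the cleanup phase in the same order as hooks/common.py.
cleanup = [clear_active_flag, stop_namenode, update_active_status]
for callback in cleanup:
    callback()

print(log)  # -> ['namenode stopped', 'inactive']
```

If `clear_active_flag` ran last instead of first, a status query during the stop could still see `charm.active` as true, which is exactly the window the merge closes.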
=== added file 'hooks/datanode-relation-departed'
--- hooks/datanode-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/datanode-relation-departed 2015-08-21 21:51:24 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful. As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py. New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== added file 'hooks/namenode-relation-departed'
--- hooks/namenode-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/namenode-relation-departed 2015-08-21 21:51:24 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful. As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py. New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== added file 'hooks/secondary-relation-departed'
--- hooks/secondary-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/secondary-relation-departed 2015-08-21 21:51:24 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful. As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py. New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== modified file 'resources.yaml'
--- resources.yaml 2015-07-24 15:26:06 +0000
+++ resources.yaml 2015-08-21 21:51:24 +0000
@@ -4,7 +4,7 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
-    pypi: jujubigdata>=2.0.2,<3.0.0
+    pypi: jujubigdata>=4.0.0,<5.0.0
   java-installer:
     # This points to a script which manages installing Java.
     # If replaced with an alternate implementation, it must output *only* two

=== added file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.9.tar.gz 2015-08-21 21:51:24 +0000 differ
=== renamed file 'resources/python/jujuresources-0.2.9.tar.gz' => 'resources/python/jujuresources-0.2.9.tar.gz.moved'
=== modified file 'tests/remote/test_dist_config.py' (properties changed: -x to +x)
