Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk into lp:charms/trusty/apache-hadoop-compute-slave

Proposed by Kevin W Monroe
Status: Merged
Merged at revision: 87
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Merge into: lp:charms/trusty/apache-hadoop-compute-slave
Diff against target: 308 lines (+109/-55) (has conflicts)
8 files modified
DEV-README.md (+7/-0)
README.md (+7/-5)
dist.yaml (+0/-28)
hooks/callbacks.py (+28/-12)
hooks/common.py (+31/-10)
hooks/datanode-relation-departed (+16/-0)
hooks/nodemanager-relation-departed (+16/-0)
resources.yaml (+4/-0)
Text conflict in DEV-README.md
Text conflict in resources.yaml
Conflict adding file resources/python/jujuresources-0.2.9.tar.gz.  Moved existing file to resources/python/jujuresources-0.2.9.tar.gz.moved.
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Reviewer: Kevin W Monroe (status: Approve)
Review via email: mp+268667@code.launchpad.net
100. By Cory Johns

Fixed permissions on test_dist_config.py

Kevin W Monroe (kwmonroe) wrote:

Realtime syslog analytics bundle test looked good. Merged.

review: Approve

Preview Diff

1=== modified file 'DEV-README.md'
2--- DEV-README.md 2015-06-29 14:15:27 +0000
3+++ DEV-README.md 2015-08-21 21:51:19 +0000
4@@ -49,10 +49,17 @@
5
6 ## Manual Deployment
7
8+<<<<<<< TREE
9 The easiest way to deploy the core Apache Hadoop platform is to use one of
10 the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
11 However, to manually deploy the base Apache Hadoop platform without using one
12 of the bundles, you can use the following:
13+=======
14+The easiest way to deploy an Apache Hadoop platform is to use one of
15+the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
16+However, to manually deploy the base Apache Hadoop platform without using one
17+of the bundles, you can use the following:
18+>>>>>>> MERGE-SOURCE
19
20 juju deploy apache-hadoop-hdfs-master hdfs-master
21 juju deploy apache-hadoop-hdfs-secondary secondary-namenode
22
23=== modified file 'README.md'
24--- README.md 2015-06-29 14:15:27 +0000
25+++ README.md 2015-08-21 21:51:19 +0000
26@@ -59,17 +59,19 @@
27 of these resources:
28
29 sudo pip install jujuresources
30- juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
31- juju resources serve -d /tmp/resources
32+ juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
33+ juju-resources serve -d /tmp/resources
34
35 This will fetch all of the resources needed by this charm and serve them via a
36-simple HTTP server. You can then set the `resources_mirror` config option to
37-have the charm use this server for retrieving resources.
38+simple HTTP server. The output from `juju-resources serve` will give you a
39+URL that you can set as the `resources_mirror` config option for this charm.
40+Setting this option will cause all resources required by this charm to be
41+downloaded from the configured URL.
42
43 You can fetch the resources for all of the Apache Hadoop charms
44 (`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
45 `apache-hadoop-hdfs-secondary`, `apache-hadoop-plugin`, etc) into a single
46-directory and serve them all with a single `juju resources serve` instance.
47+directory and serve them all with a single `juju-resources serve` instance.
48
49
50 ## Contact Information
51
52=== modified file 'dist.yaml'
53--- dist.yaml 2015-04-17 22:24:50 +0000
54+++ dist.yaml 2015-08-21 21:51:19 +0000
55@@ -73,44 +73,16 @@
56 # Only expose ports serving a UI or external API (i.e., namenode and
57 # resourcemanager). Communication among units within the cluster does
58 # not need ports to be explicitly opened.
59- # If adding a port here, you will need to update
60- # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
61- # to ensure that it is supported.
62- namenode:
63- port: 8020
64- exposed_on: 'hdfs-master'
65- nn_webapp_http:
66- port: 50070
67- exposed_on: 'hdfs-master'
68 dn_webapp_http:
69 port: 50075
70 exposed_on: 'compute-slave-hdfs'
71- resourcemanager:
72- port: 8032
73- exposed_on: 'yarn-master'
74- rm_webapp_http:
75- port: 8088
76- exposed_on: 'yarn-master'
77- rm_log:
78- port: 19888
79 nm_webapp_http:
80 port: 8042
81 exposed_on: 'compute-slave-yarn'
82- jobhistory:
83- port: 10020
84- jh_webapp_http:
85- port: 19888
86- exposed_on: 'yarn-master'
87 # TODO: support SSL
88- #nn_webapp_https:
89- # port: 50470
90- # exposed_on: 'hdfs-master'
91 #dn_webapp_https:
92 # port: 50475
93 # exposed_on: 'compute-slave-hdfs'
94- #rm_webapp_https:
95- # port: 8090
96- # exposed_on: 'yarn-master'
97 #nm_webapp_https:
98 # port: 8044
99 # exposed_on: 'compute-slave-yarn'
100
101=== modified file 'hooks/callbacks.py'
102--- hooks/callbacks.py 2015-06-24 22:12:57 +0000
103+++ hooks/callbacks.py 2015-08-21 21:51:19 +0000
104@@ -24,37 +24,53 @@
105 def update_blocked_status():
106 if unitdata.kv().get('charm.active', False):
107 return
108- rels = (
109- ('Yarn', 'ResourceManager', ResourceManagerMaster()),
110+ rels = [
111 ('HDFS', 'NameNode', NameNodeMaster()),
112- )
113+ ]
114 missing_rel = [rel for rel, res, impl in rels if not impl.connected_units()]
115- missing_hosts = [rel for rel, res, impl in rels if not impl.am_i_registered()]
116- not_ready = [(rel, res) for rel, res, impl in rels if not impl.is_ready()]
117+ rels.append(('Yarn', 'ResourceManager', ResourceManagerMaster()))
118+ not_ready = [(rel, res) for rel, res, impl in rels if impl.connected_units() and not impl.is_ready()]
119+ missing_hosts = [rel for rel, res, impl in rels if impl.connected_units() and not impl.am_i_registered()]
120 if missing_rel:
121 hookenv.status_set('blocked', 'Waiting for relation to %s master%s' % (
122 ' and '.join(missing_rel),
123 's' if len(missing_rel) > 1 else '',
124 )),
125- elif missing_hosts:
126- hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
127- ' and '.join(missing_hosts),
128- ))
129 elif not_ready:
130 unready_rels, unready_ress = zip(*not_ready)
131 hookenv.status_set('waiting', 'Waiting for %s to provide %s' % (
132 ' and '.join(unready_rels),
133 ' and '.join(unready_ress),
134 ))
135+ elif missing_hosts:
136+ hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
137+ ' and '.join(missing_hosts),
138+ ))
139
140
141 def update_working_status():
142 if unitdata.kv().get('charm.active', False):
143 hookenv.status_set('maintenance', 'Updating configuration')
144 return
145- hookenv.status_set('maintenance', 'Setting up NodeManager and DataNode')
146+ yarn_connected = ResourceManagerMaster().connected_units()
147+ hookenv.status_set('maintenance', 'Setting up DataNode%s' % (
148+ ' and NodeManager' if yarn_connected else '',
149+ ))
150
151
152 def update_active_status():
153- unitdata.kv().set('charm.active', True)
154- hookenv.status_set('active', 'Ready')
155+ hdfs_ready = NameNodeMaster().is_ready()
156+ yarn_connected = ResourceManagerMaster().connected_units()
157+ yarn_ready = ResourceManagerMaster().is_ready()
158+ if hdfs_ready and (not yarn_connected or yarn_ready):
159+ unitdata.kv().set('charm.active', True)
160+ hookenv.status_set('active', 'Ready%s' % (
161+ '' if yarn_ready else ' (HDFS only)'
162+ ))
163+ else:
164+ clear_active_flag()
165+ update_blocked_status()
166+
167+
168+def clear_active_flag():
169+ unitdata.kv().set('charm.active', False)
170
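For context, the net effect of the callbacks.py changes above: the HDFS (NameNode) relation is now required, the Yarn (ResourceManager) relation is optional, and a unit with only HDFS connected reports "Ready (HDFS only)". A minimal sketch of that decision flow, with plain booleans standing in for the real NameNodeMaster/ResourceManagerMaster relation checks (the function and argument names below are illustrative, not the charm's API):

    # Sketch only: hdfs_ready, yarn_connected and yarn_ready stand in for
    # NameNodeMaster().is_ready(), ResourceManagerMaster().connected_units()
    # and ResourceManagerMaster().is_ready().
    def slave_status(hdfs_ready, yarn_connected, yarn_ready):
        if hdfs_ready and (not yarn_connected or yarn_ready):
            # Usable: mark active, noting when the unit serves HDFS only.
            return ('active', 'Ready' if yarn_ready else 'Ready (HDFS only)')
        # Otherwise the unit falls back to a blocked/waiting status.
        return ('waiting', 'waiting on HDFS and/or Yarn')

    assert slave_status(True, False, False) == ('active', 'Ready (HDFS only)')
    assert slave_status(True, True, True) == ('active', 'Ready')
    assert slave_status(True, True, False)[0] == 'waiting'
    assert slave_status(False, False, False)[0] == 'waiting'
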
171=== modified file 'hooks/common.py'
172--- hooks/common.py 2015-06-24 22:12:57 +0000
173+++ hooks/common.py 2015-08-21 21:51:19 +0000
174@@ -71,40 +71,61 @@
175 ],
176 },
177 {
178- 'name': 'compute-slave',
179+ 'name': 'datanode',
180 'provides': [
181 jujubigdata.relations.DataNode(),
182+ ],
183+ 'requires': [
184+ hadoop.is_installed,
185+ hdfs_relation,
186+ hdfs_relation.am_i_registered,
187+ ],
188+ 'callbacks': [
189+ callbacks.update_working_status,
190+ hdfs_relation.register_provided_hosts,
191+ jujubigdata.utils.manage_etc_hosts,
192+ hdfs_relation.install_ssh_keys,
193+ hdfs.configure_datanode,
194+ hdfs.start_datanode,
195+ charmframework.helpers.open_ports(
196+ dist_config.exposed_ports('compute-slave-hdfs')),
197+ callbacks.update_active_status,
198+ ],
199+ 'cleanup': [
200+ callbacks.clear_active_flag,
201+ charmframework.helpers.close_ports(
202+ dist_config.exposed_ports('compute-slave-hdfs')),
203+ hdfs.stop_datanode,
204+ callbacks.update_blocked_status,
205+ ],
206+ },
207+ {
208+ 'name': 'nodemanager',
209+ 'provides': [
210 jujubigdata.relations.NodeManager(),
211 ],
212 'requires': [
213 hadoop.is_installed,
214- hdfs_relation,
215 yarn_relation,
216- hdfs_relation.am_i_registered,
217 yarn_relation.am_i_registered,
218 ],
219 'callbacks': [
220 callbacks.update_working_status,
221- hdfs_relation.register_provided_hosts,
222 yarn_relation.register_provided_hosts,
223 jujubigdata.utils.manage_etc_hosts,
224- hdfs_relation.install_ssh_keys,
225 yarn_relation.install_ssh_keys,
226- hdfs.configure_datanode,
227 yarn.configure_nodemanager,
228- hdfs.start_datanode,
229 yarn.start_nodemanager,
230 charmframework.helpers.open_ports(
231- dist_config.exposed_ports('compute-slave-hdfs') +
232 dist_config.exposed_ports('compute-slave-yarn')),
233 callbacks.update_active_status,
234 ],
235 'cleanup': [
236+ callbacks.clear_active_flag,
237 charmframework.helpers.close_ports(
238- dist_config.exposed_ports('compute-slave-hdfs') +
239 dist_config.exposed_ports('compute-slave-yarn')),
240- hdfs.stop_datanode,
241 yarn.stop_nodemanager,
242+ callbacks.update_active_status, # might still be active if HDFS-only
243 ],
244 },
245 ])
246
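For context, common.py now defines independent 'datanode' and 'nodemanager' services instead of a single 'compute-slave' service, so the DataNode can start (and keep running) without a Yarn relation, and each side is torn down separately when its relation departs. A rough, self-contained sketch of that dispatch pattern; this is not the charmframework API, and requirements and callbacks are reduced to plain callables for illustration:

    # Sketch: each service runs its callbacks when all of its requirements are
    # met, otherwise its cleanup runs. The real charm delegates this to
    # charmframework via common.manage().
    def manage(services):
        for service in services:
            if all(req() for req in service['requires']):
                for callback in service['callbacks']:
                    callback()
            else:
                for cleanup in service['cleanup']:
                    cleanup()

    actions = []
    services = [
        {'name': 'datanode',
         'requires': [lambda: True],   # pretend the HDFS relation is present
         'callbacks': [lambda: actions.append('start datanode')],
         'cleanup': [lambda: actions.append('stop datanode')]},
        {'name': 'nodemanager',
         'requires': [lambda: False],  # pretend the Yarn relation is absent
         'callbacks': [lambda: actions.append('start nodemanager')],
         'cleanup': [lambda: actions.append('stop nodemanager')]},
    ]

    manage(services)
    assert actions == ['start datanode', 'stop nodemanager']
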
247=== added file 'hooks/datanode-relation-departed'
248--- hooks/datanode-relation-departed 1970-01-01 00:00:00 +0000
249+++ hooks/datanode-relation-departed 2015-08-21 21:51:19 +0000
250@@ -0,0 +1,16 @@
251+#!/usr/bin/env python
252+# Licensed under the Apache License, Version 2.0 (the "License");
253+# you may not use this file except in compliance with the License.
254+# You may obtain a copy of the License at
255+#
256+# http://www.apache.org/licenses/LICENSE-2.0
257+#
258+# Unless required by applicable law or agreed to in writing, software
259+# distributed under the License is distributed on an "AS IS" BASIS,
260+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
261+# See the License for the specific language governing permissions and
262+# limitations under the License.
263+
264+import common
265+
266+common.manage()
267
268=== added file 'hooks/nodemanager-relation-departed'
269--- hooks/nodemanager-relation-departed 1970-01-01 00:00:00 +0000
270+++ hooks/nodemanager-relation-departed 2015-08-21 21:51:19 +0000
271@@ -0,0 +1,16 @@
272+#!/usr/bin/env python
273+# Licensed under the Apache License, Version 2.0 (the "License");
274+# you may not use this file except in compliance with the License.
275+# You may obtain a copy of the License at
276+#
277+# http://www.apache.org/licenses/LICENSE-2.0
278+#
279+# Unless required by applicable law or agreed to in writing, software
280+# distributed under the License is distributed on an "AS IS" BASIS,
281+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
282+# See the License for the specific language governing permissions and
283+# limitations under the License.
284+
285+import common
286+
287+common.manage()
288
289=== modified file 'resources.yaml'
290--- resources.yaml 2015-07-24 15:25:29 +0000
291+++ resources.yaml 2015-08-21 21:51:19 +0000
292@@ -4,7 +4,11 @@
293 pathlib:
294 pypi: path.py>=7.0
295 jujubigdata:
296+<<<<<<< TREE
297 pypi: jujubigdata>=2.0.2,<3.0.0
298+=======
299+ pypi: jujubigdata>=4.0.0,<5.0.0
300+>>>>>>> MERGE-SOURCE
301 java-installer:
302 # This points to a script which manages installing Java.
303 # If replaced with an alternate implementation, it must output *only* two
304
305=== added file 'resources/python/jujuresources-0.2.9.tar.gz'
306Binary files resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.9.tar.gz 2015-08-21 21:51:19 +0000 differ
307=== renamed file 'resources/python/jujuresources-0.2.9.tar.gz' => 'resources/python/jujuresources-0.2.9.tar.gz.moved'
308=== modified file 'tests/remote/test_dist_config.py' (properties changed: -x to +x)
