Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk into lp:charms/trusty/apache-hadoop-compute-slave
Proposed by: Kevin W Monroe
Status: Merged
Merged at revision: 87
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Merge into: lp:charms/trusty/apache-hadoop-compute-slave
Diff against target: 308 lines (+109/-55) (has conflicts), 8 files modified:
- DEV-README.md (+7/-0)
- README.md (+7/-5)
- dist.yaml (+0/-28)
- hooks/callbacks.py (+28/-12)
- hooks/common.py (+31/-10)
- hooks/datanode-relation-departed (+16/-0)
- hooks/nodemanager-relation-departed (+16/-0)
- resources.yaml (+4/-0)

Conflicts:
- Text conflict in DEV-README.md
- Text conflict in resources.yaml
- Conflict adding file resources/python/jujuresources-0.2.9.tar.gz; moved existing file to resources/python/jujuresources-0.2.9.tar.gz.moved

To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Related bugs: none
Reviewer: Kevin W Monroe (Approve)
Review via email: mp+268667@code.launchpad.net
- 100. By Cory Johns: Fixed permissions on test_dist_config.py
Preview Diff
=== modified file 'DEV-README.md'
--- DEV-README.md 2015-06-29 14:15:27 +0000
+++ DEV-README.md 2015-08-21 21:51:19 +0000
@@ -49,10 +49,17 @@
 
 ## Manual Deployment
 
+<<<<<<< TREE
 The easiest way to deploy the core Apache Hadoop platform is to use one of
 the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 However, to manually deploy the base Apache Hadoop platform without using one
 of the bundles, you can use the following:
+=======
+The easiest way to deploy an Apache Hadoop platform is to use one of
+the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
+However, to manually deploy the base Apache Hadoop platform without using one
+of the bundles, you can use the following:
+>>>>>>> MERGE-SOURCE
 
     juju deploy apache-hadoop-hdfs-master hdfs-master
     juju deploy apache-hadoop-hdfs-secondary secondary-namenode
 
=== modified file 'README.md'
--- README.md 2015-06-29 14:15:27 +0000
+++ README.md 2015-08-21 21:51:19 +0000
@@ -59,17 +59,19 @@
 of these resources:
 
     sudo pip install jujuresources
-    juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
-    juju resources serve -d /tmp/resources
+    juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
+    juju-resources serve -d /tmp/resources
 
 This will fetch all of the resources needed by this charm and serve them via a
-simple HTTP server. You can then set the `resources_mirror` config option to
-have the charm use this server for retrieving resources.
+simple HTTP server. The output from `juju-resources serve` will give you a
+URL that you can set as the `resources_mirror` config option for this charm.
+Setting this option will cause all resources required by this charm to be
+downloaded from the configured URL.
 
 You can fetch the resources for all of the Apache Hadoop charms
 (`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
 `apache-hadoop-hdfs-secondary`, `apache-hadoop-plugin`, etc) into a single
-directory and serve them all with a single `juju resources serve` instance.
+directory and serve them all with a single `juju-resources serve` instance.
 
 
 ## Contact Information
 
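The README change above documents the renamed `juju-resources` CLI and the new `resources_mirror` behavior. As a quick illustration, the same mirror workflow could be scripted as follows; this is a sketch only, assuming the `juju-resources` commands shown in the README are on PATH, and the `juju set` syntax in the comment is the Juju 1.x config command of that era:

```python
# Sketch: stand up a local resources mirror, then point the charm at it.
# Assumes the juju-resources CLI (from `pip install jujuresources`) is
# installed, per the README text above.
import subprocess

# Fetch everything listed in the charm's resources.yaml into /tmp/resources.
subprocess.check_call(['juju-resources', 'fetch', '--all',
                       '/path/to/resources.yaml', '-d', '/tmp/resources'])

# `juju-resources serve` blocks and prints the mirror URL, so run it in
# the background; then set that URL as the charm's config, e.g.:
#   juju set apache-hadoop-compute-slave resources_mirror=<printed URL>
subprocess.Popen(['juju-resources', 'serve', '-d', '/tmp/resources'])
```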
=== modified file 'dist.yaml'
--- dist.yaml 2015-04-17 22:24:50 +0000
+++ dist.yaml 2015-08-21 21:51:19 +0000
@@ -73,44 +73,16 @@
     # Only expose ports serving a UI or external API (i.e., namenode and
     # resourcemanager). Communication among units within the cluster does
     # not need ports to be explicitly opened.
-    # If adding a port here, you will need to update
-    # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
-    # to ensure that it is supported.
-    namenode:
-        port: 8020
-        exposed_on: 'hdfs-master'
-    nn_webapp_http:
-        port: 50070
-        exposed_on: 'hdfs-master'
     dn_webapp_http:
         port: 50075
         exposed_on: 'compute-slave-hdfs'
-    resourcemanager:
-        port: 8032
-        exposed_on: 'yarn-master'
-    rm_webapp_http:
-        port: 8088
-        exposed_on: 'yarn-master'
-    rm_log:
-        port: 19888
     nm_webapp_http:
         port: 8042
         exposed_on: 'compute-slave-yarn'
-    jobhistory:
-        port: 10020
-    jh_webapp_http:
-        port: 19888
-        exposed_on: 'yarn-master'
     # TODO: support SSL
-    #nn_webapp_https:
-    #    port: 50470
-    #    exposed_on: 'hdfs-master'
     #dn_webapp_https:
     #    port: 50475
     #    exposed_on: 'compute-slave-hdfs'
-    #rm_webapp_https:
-    #    port: 8090
-    #    exposed_on: 'yarn-master'
     #nm_webapp_https:
     #    port: 8044
     #    exposed_on: 'compute-slave-yarn'
 
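The dist.yaml change prunes port definitions that belong to other charms (namenode, resourcemanager, jobhistory), leaving only the DataNode and NodeManager web ports this charm actually exposes. For context, `dist_config.exposed_ports(...)` (used in hooks/common.py below) plausibly selects ports by their `exposed_on` tag. A minimal sketch of that selection, assuming the entries live under a top-level `ports:` key; the real implementation is jujubigdata's DistConfig:

```python
# Sketch of exposed_on-based port selection (illustration only; the
# 'ports' key name is an assumption based on the structure shown above).
import yaml

def exposed_ports(dist_yaml_path, service):
    with open(dist_yaml_path) as f:
        ports = yaml.safe_load(f).get('ports', {})
    # Keep only ports tagged for the requested service.
    return sorted(cfg['port'] for cfg in ports.values()
                  if cfg.get('exposed_on') == service)

# With the trimmed dist.yaml above:
#   exposed_ports('dist.yaml', 'compute-slave-hdfs') -> [50075]
#   exposed_ports('dist.yaml', 'compute-slave-yarn') -> [8042]
```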
=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py 2015-06-24 22:12:57 +0000
+++ hooks/callbacks.py 2015-08-21 21:51:19 +0000
@@ -24,37 +24,53 @@
 def update_blocked_status():
     if unitdata.kv().get('charm.active', False):
         return
-    rels = (
-        ('Yarn', 'ResourceManager', ResourceManagerMaster()),
+    rels = [
         ('HDFS', 'NameNode', NameNodeMaster()),
-    )
+    ]
     missing_rel = [rel for rel, res, impl in rels if not impl.connected_units()]
-    missing_hosts = [rel for rel, res, impl in rels if not impl.am_i_registered()]
-    not_ready = [(rel, res) for rel, res, impl in rels if not impl.is_ready()]
+    rels.append(('Yarn', 'ResourceManager', ResourceManagerMaster()))
+    not_ready = [(rel, res) for rel, res, impl in rels if impl.connected_units() and not impl.is_ready()]
+    missing_hosts = [rel for rel, res, impl in rels if impl.connected_units() and not impl.am_i_registered()]
     if missing_rel:
         hookenv.status_set('blocked', 'Waiting for relation to %s master%s' % (
             ' and '.join(missing_rel),
             's' if len(missing_rel) > 1 else '',
         )),
-    elif missing_hosts:
-        hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
-            ' and '.join(missing_hosts),
-        ))
     elif not_ready:
         unready_rels, unready_ress = zip(*not_ready)
         hookenv.status_set('waiting', 'Waiting for %s to provide %s' % (
             ' and '.join(unready_rels),
             ' and '.join(unready_ress),
         ))
+    elif missing_hosts:
+        hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
+            ' and '.join(missing_hosts),
+        ))
 
 
 def update_working_status():
     if unitdata.kv().get('charm.active', False):
         hookenv.status_set('maintenance', 'Updating configuration')
         return
-    hookenv.status_set('maintenance', 'Setting up NodeManager and DataNode')
+    yarn_connected = ResourceManagerMaster().connected_units()
+    hookenv.status_set('maintenance', 'Setting up DataNode%s' % (
+        ' and NodeManager' if yarn_connected else '',
+    ))
 
 
 def update_active_status():
-    unitdata.kv().set('charm.active', True)
-    hookenv.status_set('active', 'Ready')
+    hdfs_ready = NameNodeMaster().is_ready()
+    yarn_connected = ResourceManagerMaster().connected_units()
+    yarn_ready = ResourceManagerMaster().is_ready()
+    if hdfs_ready and (not yarn_connected or yarn_ready):
+        unitdata.kv().set('charm.active', True)
+        hookenv.status_set('active', 'Ready%s' % (
+            '' if yarn_ready else ' (HDFS only)'
+        ))
+    else:
+        clear_active_flag()
+        update_blocked_status()
+
+
+def clear_active_flag():
+    unitdata.kv().set('charm.active', False)
 
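The net effect of the callbacks.py changes is that YARN becomes optional: HDFS must be related and ready, while YARN only gates readiness if a ResourceManager is actually connected. Distilled into a standalone sketch (the function and its names are illustrative, not charm code):

```python
# Distilled gating logic from update_active_status above (sketch only).
def unit_status(hdfs_ready, yarn_connected, yarn_ready):
    if hdfs_ready and (not yarn_connected or yarn_ready):
        # Active either way; the message flags HDFS-only operation.
        return 'active', 'Ready' if yarn_ready else 'Ready (HDFS only)'
    # Otherwise the charm clears its active flag and falls through to
    # update_blocked_status() to report what is missing.
    return 'not-active', None

assert unit_status(True, False, False) == ('active', 'Ready (HDFS only)')
assert unit_status(True, True, False) == ('not-active', None)
assert unit_status(True, True, True) == ('active', 'Ready')
```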
=== modified file 'hooks/common.py'
--- hooks/common.py 2015-06-24 22:12:57 +0000
+++ hooks/common.py 2015-08-21 21:51:19 +0000
@@ -71,40 +71,61 @@
         ],
     },
     {
-        'name': 'compute-slave',
+        'name': 'datanode',
         'provides': [
             jujubigdata.relations.DataNode(),
+        ],
+        'requires': [
+            hadoop.is_installed,
+            hdfs_relation,
+            hdfs_relation.am_i_registered,
+        ],
+        'callbacks': [
+            callbacks.update_working_status,
+            hdfs_relation.register_provided_hosts,
+            jujubigdata.utils.manage_etc_hosts,
+            hdfs_relation.install_ssh_keys,
+            hdfs.configure_datanode,
+            hdfs.start_datanode,
+            charmframework.helpers.open_ports(
+                dist_config.exposed_ports('compute-slave-hdfs')),
+            callbacks.update_active_status,
+        ],
+        'cleanup': [
+            callbacks.clear_active_flag,
+            charmframework.helpers.close_ports(
+                dist_config.exposed_ports('compute-slave-hdfs')),
+            hdfs.stop_datanode,
+            callbacks.update_blocked_status,
+        ],
+    },
+    {
+        'name': 'nodemanager',
+        'provides': [
             jujubigdata.relations.NodeManager(),
         ],
         'requires': [
             hadoop.is_installed,
-            hdfs_relation,
             yarn_relation,
-            hdfs_relation.am_i_registered,
             yarn_relation.am_i_registered,
         ],
         'callbacks': [
             callbacks.update_working_status,
-            hdfs_relation.register_provided_hosts,
             yarn_relation.register_provided_hosts,
             jujubigdata.utils.manage_etc_hosts,
-            hdfs_relation.install_ssh_keys,
             yarn_relation.install_ssh_keys,
-            hdfs.configure_datanode,
             yarn.configure_nodemanager,
-            hdfs.start_datanode,
             yarn.start_nodemanager,
             charmframework.helpers.open_ports(
-                dist_config.exposed_ports('compute-slave-hdfs') +
                 dist_config.exposed_ports('compute-slave-yarn')),
             callbacks.update_active_status,
         ],
         'cleanup': [
+            callbacks.clear_active_flag,
             charmframework.helpers.close_ports(
-                dist_config.exposed_ports('compute-slave-hdfs') +
                 dist_config.exposed_ports('compute-slave-yarn')),
-            hdfs.stop_datanode,
             yarn.stop_nodemanager,
+            callbacks.update_active_status,  # might still be active if HDFS-only
         ],
     },
 ])
 
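Splitting the single 'compute-slave' service into separate 'datanode' and 'nodemanager' definitions lets each lifecycle be driven independently: the DataNode needs only the HDFS relation, while the NodeManager needs YARN. A rough sketch of how a manager might drive such definitions, hypothetical and simplified; the real logic lives in charmframework, and requirements are treated here as plain callables:

```python
# Hypothetical driver loop for the service definitions above: when a
# service's requirements all hold, run its callbacks in order; otherwise
# run its cleanup. (Sketch only; charmframework's semantics may differ.)
def manage_services(services):
    for svc in services:
        if all(req() for req in svc['requires']):
            for callback in svc['callbacks']:
                callback()
        else:
            for cleanup in svc['cleanup']:
                cleanup()
```

Note the new `callbacks.update_active_status` at the end of the nodemanager cleanup: after YARN departs, it re-runs the gating logic from callbacks.py above and can leave the unit active in the 'Ready (HDFS only)' state.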
=== added file 'hooks/datanode-relation-departed'
--- hooks/datanode-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/datanode-relation-departed 2015-08-21 21:51:19 +0000
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+
+common.manage()
=== added file 'hooks/nodemanager-relation-departed'
--- hooks/nodemanager-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/nodemanager-relation-departed 2015-08-21 21:51:19 +0000
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+
+common.manage()
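Both new `-relation-departed` hooks follow the same three-line pattern as the charm's other hooks: Juju invokes the hook file named after the event, and the file simply delegates to `common.manage()`, which re-evaluates every service definition idempotently. If a hook ever needed to know which event fired, charmhelpers exposes it; a sketch, not part of this diff:

```python
# Sketch: all hook entry points can share one body because manage() is
# idempotent; the triggering event is still available if needed.
from charmhelpers.core import hookenv

import common

hookenv.log('hook %s fired; re-evaluating services' % hookenv.hook_name())
common.manage()
```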
=== modified file 'resources.yaml'
--- resources.yaml 2015-07-24 15:25:29 +0000
+++ resources.yaml 2015-08-21 21:51:19 +0000
@@ -4,7 +4,11 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
+<<<<<<< TREE
     pypi: jujubigdata>=2.0.2,<3.0.0
+=======
+    pypi: jujubigdata>=4.0.0,<5.0.0
+>>>>>>> MERGE-SOURCE
   java-installer:
     # This points to a script which manages installing Java.
     # If replaced with an alternate implementation, it must output *only* two
 
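The resources.yaml conflict is a version-pin disagreement: the target branch pins `jujubigdata>=2.0.2,<3.0.0` while the merge source wants `>=4.0.0,<5.0.0`, presumably the series the reworked hooks expect. As a quick illustration of what such pypi specifiers admit, using the `packaging` library (illustration only, not charm code):

```python
# Which versions satisfy each side of the conflict.
from packaging.specifiers import SpecifierSet

tree = SpecifierSet('>=2.0.2,<3.0.0')    # target branch pin
source = SpecifierSet('>=4.0.0,<5.0.0')  # merge-source pin

for version in ['2.0.2', '2.9.9', '3.0.0', '4.0.0', '4.5.1', '5.0.0']:
    print(version, version in tree, version in source)
# 2.x versions satisfy only the tree pin; 4.x only the source pin.
```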
=== added file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.9.tar.gz 2015-08-21 21:51:19 +0000 differ
=== renamed file 'resources/python/jujuresources-0.2.9.tar.gz' => 'resources/python/jujuresources-0.2.9.tar.gz.moved'
=== modified file 'tests/remote/test_dist_config.py' (properties changed: -x to +x)
Realtime syslog analytics bundle test looked good. Merged.