Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk into lp:charms/trusty/apache-hadoop-compute-slave

Proposed by Kevin W Monroe
Status: Merged
Merged at revision: 87
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Merge into: lp:charms/trusty/apache-hadoop-compute-slave
Diff against target: 308 lines (+109/-55) (has conflicts)
8 files modified
DEV-README.md (+7/-0)
README.md (+7/-5)
dist.yaml (+0/-28)
hooks/callbacks.py (+28/-12)
hooks/common.py (+31/-10)
hooks/datanode-relation-departed (+16/-0)
hooks/nodemanager-relation-departed (+16/-0)
resources.yaml (+4/-0)
Text conflict in DEV-README.md
Text conflict in resources.yaml
Conflict adding file resources/python/jujuresources-0.2.9.tar.gz.  Moved existing file to resources/python/jujuresources-0.2.9.tar.gz.moved.
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Reviewer: Kevin W Monroe
Status: Approve
Review via email: mp+268667@code.launchpad.net
100. By Cory Johns

Fixed permissions on test_dist_config.py

Revision history for this message
Kevin W Monroe (kwmonroe) wrote :

Realtime syslog analytics bundle test looked good. Merged.

review: Approve

Preview Diff

=== modified file 'DEV-README.md'
--- DEV-README.md 2015-06-29 14:15:27 +0000
+++ DEV-README.md 2015-08-21 21:51:19 +0000
@@ -49,10 +49,17 @@
 
 ## Manual Deployment
 
+<<<<<<< TREE
 The easiest way to deploy the core Apache Hadoop platform is to use one of
 the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 However, to manually deploy the base Apache Hadoop platform without using one
 of the bundles, you can use the following:
+=======
+The easiest way to deploy an Apache Hadoop platform is to use one of
+the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
+However, to manually deploy the base Apache Hadoop platform without using one
+of the bundles, you can use the following:
+>>>>>>> MERGE-SOURCE
 
     juju deploy apache-hadoop-hdfs-master hdfs-master
     juju deploy apache-hadoop-hdfs-secondary secondary-namenode
 
=== modified file 'README.md'
--- README.md 2015-06-29 14:15:27 +0000
+++ README.md 2015-08-21 21:51:19 +0000
@@ -59,17 +59,19 @@
 of these resources:
 
     sudo pip install jujuresources
-    juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
-    juju resources serve -d /tmp/resources
+    juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
+    juju-resources serve -d /tmp/resources
 
 This will fetch all of the resources needed by this charm and serve them via a
-simple HTTP server. You can then set the `resources_mirror` config option to
-have the charm use this server for retrieving resources.
+simple HTTP server. The output from `juju-resources serve` will give you a
+URL that you can set as the `resources_mirror` config option for this charm.
+Setting this option will cause all resources required by this charm to be
+downloaded from the configured URL.
 
 You can fetch the resources for all of the Apache Hadoop charms
 (`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
 `apache-hadoop-hdfs-secondary`, `apache-hadoop-plugin`, etc) into a single
-directory and serve them all with a single `juju resources serve` instance.
+directory and serve them all with a single `juju-resources serve` instance.
 
 
 ## Contact Information
 
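The `juju-resources serve` step described above is, in essence, a static HTTP server over the directory of fetched resources; the URL it prints is what `resources_mirror` gets set to. As a rough standard-library sketch of that idea (the function name and directory layout here are illustrative, not part of jujuresources):

```python
# Minimal sketch of a static resources mirror: expose a directory of
# fetched resources over plain HTTP, which is roughly what
# `juju-resources serve` provides.
import http.server
import socketserver
import threading
from functools import partial


def serve_resources(directory, port=0):
    """Serve `directory` over HTTP; returns (server, base_url).

    port=0 lets the OS pick a free port, convenient for a quick test.
    """
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, "http://127.0.0.1:%d/" % server.server_address[1]
```

The base URL returned here is the kind of value you would hand to the charm's `resources_mirror` config option.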
=== modified file 'dist.yaml'
--- dist.yaml 2015-04-17 22:24:50 +0000
+++ dist.yaml 2015-08-21 21:51:19 +0000
@@ -73,44 +73,16 @@
     # Only expose ports serving a UI or external API (i.e., namenode and
     # resourcemanager). Communication among units within the cluster does
     # not need ports to be explicitly opened.
-    # If adding a port here, you will need to update
-    # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
-    # to ensure that it is supported.
-    namenode:
-        port: 8020
-        exposed_on: 'hdfs-master'
-    nn_webapp_http:
-        port: 50070
-        exposed_on: 'hdfs-master'
     dn_webapp_http:
         port: 50075
         exposed_on: 'compute-slave-hdfs'
-    resourcemanager:
-        port: 8032
-        exposed_on: 'yarn-master'
-    rm_webapp_http:
-        port: 8088
-        exposed_on: 'yarn-master'
-    rm_log:
-        port: 19888
     nm_webapp_http:
         port: 8042
         exposed_on: 'compute-slave-yarn'
-    jobhistory:
-        port: 10020
-    jh_webapp_http:
-        port: 19888
-        exposed_on: 'yarn-master'
     # TODO: support SSL
-    #nn_webapp_https:
-    #    port: 50470
-    #    exposed_on: 'hdfs-master'
     #dn_webapp_https:
     #    port: 50475
     #    exposed_on: 'compute-slave-hdfs'
-    #rm_webapp_https:
-    #    port: 8090
-    #    exposed_on: 'yarn-master'
     #nm_webapp_https:
     #    port: 8044
     #    exposed_on: 'compute-slave-yarn'
 
=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py 2015-06-24 22:12:57 +0000
+++ hooks/callbacks.py 2015-08-21 21:51:19 +0000
@@ -24,37 +24,53 @@
 def update_blocked_status():
     if unitdata.kv().get('charm.active', False):
         return
-    rels = (
-        ('Yarn', 'ResourceManager', ResourceManagerMaster()),
+    rels = [
         ('HDFS', 'NameNode', NameNodeMaster()),
-    )
+    ]
     missing_rel = [rel for rel, res, impl in rels if not impl.connected_units()]
-    missing_hosts = [rel for rel, res, impl in rels if not impl.am_i_registered()]
-    not_ready = [(rel, res) for rel, res, impl in rels if not impl.is_ready()]
+    rels.append(('Yarn', 'ResourceManager', ResourceManagerMaster()))
+    not_ready = [(rel, res) for rel, res, impl in rels if impl.connected_units() and not impl.is_ready()]
+    missing_hosts = [rel for rel, res, impl in rels if impl.connected_units() and not impl.am_i_registered()]
     if missing_rel:
         hookenv.status_set('blocked', 'Waiting for relation to %s master%s' % (
             ' and '.join(missing_rel),
             's' if len(missing_rel) > 1 else '',
         )),
-    elif missing_hosts:
-        hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
-            ' and '.join(missing_hosts),
-        ))
     elif not_ready:
         unready_rels, unready_ress = zip(*not_ready)
         hookenv.status_set('waiting', 'Waiting for %s to provide %s' % (
             ' and '.join(unready_rels),
             ' and '.join(unready_ress),
         ))
+    elif missing_hosts:
+        hookenv.status_set('waiting', 'Waiting for /etc/hosts registration on %s' % (
+            ' and '.join(missing_hosts),
+        ))
 
 
 def update_working_status():
     if unitdata.kv().get('charm.active', False):
         hookenv.status_set('maintenance', 'Updating configuration')
         return
-    hookenv.status_set('maintenance', 'Setting up NodeManager and DataNode')
+    yarn_connected = ResourceManagerMaster().connected_units()
+    hookenv.status_set('maintenance', 'Setting up DataNode%s' % (
+        ' and NodeManager' if yarn_connected else '',
+    ))
 
 
 def update_active_status():
-    unitdata.kv().set('charm.active', True)
-    hookenv.status_set('active', 'Ready')
+    hdfs_ready = NameNodeMaster().is_ready()
+    yarn_connected = ResourceManagerMaster().connected_units()
+    yarn_ready = ResourceManagerMaster().is_ready()
+    if hdfs_ready and (not yarn_connected or yarn_ready):
+        unitdata.kv().set('charm.active', True)
+        hookenv.status_set('active', 'Ready%s' % (
+            '' if yarn_ready else ' (HDFS only)'
+        ))
+    else:
+        clear_active_flag()
+        update_blocked_status()
+
+
+def clear_active_flag():
+    unitdata.kv().set('charm.active', False)
 
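The reordering in `update_blocked_status` is what makes Yarn optional: only the HDFS relation is in `rels` when `missing_rel` is computed, and the new `connected_units()` guards keep an absent Yarn relation from ever producing a "waiting" status. A condensed, standalone sketch of that priority logic (the `Relation` stub and `compute_status` helper stand in for the real jujubigdata relation classes and `hookenv.status_set`):

```python
# Sketch of the status-priority logic: HDFS is required, Yarn is
# optional, and "blocked" (missing relation) outranks the "waiting"
# states. Stubbed classes; not the charm's actual implementation.
class Relation:
    def __init__(self, connected, registered=True, ready=True):
        self._connected, self._registered, self._ready = connected, registered, ready
    def connected_units(self): return self._connected
    def am_i_registered(self): return self._registered
    def is_ready(self): return self._ready


def compute_status(hdfs, yarn):
    rels = [('HDFS', 'NameNode', hdfs)]
    # Only HDFS can block; Yarn is appended afterwards, so it is optional.
    missing_rel = [rel for rel, res, impl in rels if not impl.connected_units()]
    rels.append(('Yarn', 'ResourceManager', yarn))
    not_ready = [(rel, res) for rel, res, impl in rels
                 if impl.connected_units() and not impl.is_ready()]
    missing_hosts = [rel for rel, res, impl in rels
                     if impl.connected_units() and not impl.am_i_registered()]
    if missing_rel:
        return ('blocked', 'Waiting for relation to %s master' % ' and '.join(missing_rel))
    elif not_ready:
        unready_rels, unready_ress = zip(*not_ready)
        return ('waiting', 'Waiting for %s to provide %s' % (
            ' and '.join(unready_rels), ' and '.join(unready_ress)))
    elif missing_hosts:
        return ('waiting', 'Waiting for /etc/hosts registration on %s' % ' and '.join(missing_hosts))
    return ('active', 'Ready')
```

With no relations at all this yields "blocked"; with HDFS connected but Yarn absent it can still reach "active", which is the HDFS-only behavior the diff is after.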
=== modified file 'hooks/common.py'
--- hooks/common.py 2015-06-24 22:12:57 +0000
+++ hooks/common.py 2015-08-21 21:51:19 +0000
@@ -71,40 +71,61 @@
         ],
     },
     {
-        'name': 'compute-slave',
+        'name': 'datanode',
         'provides': [
             jujubigdata.relations.DataNode(),
+        ],
+        'requires': [
+            hadoop.is_installed,
+            hdfs_relation,
+            hdfs_relation.am_i_registered,
+        ],
+        'callbacks': [
+            callbacks.update_working_status,
+            hdfs_relation.register_provided_hosts,
+            jujubigdata.utils.manage_etc_hosts,
+            hdfs_relation.install_ssh_keys,
+            hdfs.configure_datanode,
+            hdfs.start_datanode,
+            charmframework.helpers.open_ports(
+                dist_config.exposed_ports('compute-slave-hdfs')),
+            callbacks.update_active_status,
+        ],
+        'cleanup': [
+            callbacks.clear_active_flag,
+            charmframework.helpers.close_ports(
+                dist_config.exposed_ports('compute-slave-hdfs')),
+            hdfs.stop_datanode,
+            callbacks.update_blocked_status,
+        ],
+    },
+    {
+        'name': 'nodemanager',
+        'provides': [
             jujubigdata.relations.NodeManager(),
         ],
         'requires': [
             hadoop.is_installed,
-            hdfs_relation,
             yarn_relation,
-            hdfs_relation.am_i_registered,
             yarn_relation.am_i_registered,
         ],
         'callbacks': [
             callbacks.update_working_status,
-            hdfs_relation.register_provided_hosts,
             yarn_relation.register_provided_hosts,
             jujubigdata.utils.manage_etc_hosts,
-            hdfs_relation.install_ssh_keys,
             yarn_relation.install_ssh_keys,
-            hdfs.configure_datanode,
             yarn.configure_nodemanager,
-            hdfs.start_datanode,
             yarn.start_nodemanager,
             charmframework.helpers.open_ports(
-                dist_config.exposed_ports('compute-slave-hdfs') +
                 dist_config.exposed_ports('compute-slave-yarn')),
             callbacks.update_active_status,
         ],
         'cleanup': [
+            callbacks.clear_active_flag,
             charmframework.helpers.close_ports(
-                dist_config.exposed_ports('compute-slave-hdfs') +
                 dist_config.exposed_ports('compute-slave-yarn')),
-            hdfs.stop_datanode,
             yarn.stop_nodemanager,
+            callbacks.update_active_status,  # might still be active if HDFS-only
         ],
     },
 ])
 
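The common.py change splits the single combined `compute-slave` service into two independent entries, so the DataNode lifecycle depends only on the HDFS relation and the NodeManager lifecycle only on the Yarn relation. A structural sketch of that split (names mirror the diff; the readiness helper is illustrative, not the charmframework API):

```python
# Sketch of the service split: two entries with disjoint relation
# requirements, so DataNode can run without Yarn. Requirements are
# modeled as plain string sets for illustration.
SERVICES = [
    {'name': 'datanode',    'requires': {'hadoop', 'hdfs_relation'}},
    {'name': 'nodemanager', 'requires': {'hadoop', 'yarn_relation'}},
]


def startable(available):
    """Return names of services whose requirements are all satisfied."""
    return [s['name'] for s in SERVICES if s['requires'] <= set(available)]
```

With only the HDFS relation available, `startable` yields just `datanode`, which is exactly the HDFS-only deployment the two new `cleanup` and `relation-departed` pieces support.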
=== added file 'hooks/datanode-relation-departed'
--- hooks/datanode-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/datanode-relation-departed 2015-08-21 21:51:19 +0000
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+
+common.manage()
=== added file 'hooks/nodemanager-relation-departed'
--- hooks/nodemanager-relation-departed 1970-01-01 00:00:00 +0000
+++ hooks/nodemanager-relation-departed 2015-08-21 21:51:19 +0000
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+
+common.manage()
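Both new hook files are identical thin shims: Juju invokes the hook file named after the event, and each file simply defers to the shared `common.manage()` entry point, which dispatches on the hook's filename. A minimal sketch of that dispatch-by-basename pattern (`manage` and the `handlers` mapping here are illustrative, not the charm's real implementation):

```python
# Sketch of the shim-hook pattern: Juju runs hooks by path, e.g.
# hooks/datanode-relation-departed, so the invoked file's basename
# names the event being handled.
import os
import sys


def current_hook(argv0=None):
    """Name of the running hook, from the invoked file's basename."""
    return os.path.basename(argv0 if argv0 is not None else sys.argv[0])


def manage(handlers, argv0=None):
    """Run the handler registered for the current hook, if any."""
    handler = handlers.get(current_hook(argv0))
    return handler() if handler is not None else None
```

This is why a new relation event only needs a new one-line hook file: the shared dispatcher picks the behavior from the filename.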
=== modified file 'resources.yaml'
--- resources.yaml 2015-07-24 15:25:29 +0000
+++ resources.yaml 2015-08-21 21:51:19 +0000
@@ -4,7 +4,11 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
+<<<<<<< TREE
     pypi: jujubigdata>=2.0.2,<3.0.0
+=======
+    pypi: jujubigdata>=4.0.0,<5.0.0
+>>>>>>> MERGE-SOURCE
   java-installer:
     # This points to a script which manages installing Java.
     # If replaced with an alternate implementation, it must output *only* two
 
=== added file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.9.tar.gz 2015-08-21 21:51:19 +0000 differ
=== renamed file 'resources/python/jujuresources-0.2.9.tar.gz' => 'resources/python/jujuresources-0.2.9.tar.gz.moved'
=== modified file 'tests/remote/test_dist_config.py' (properties changed: -x to +x)
