Merge lp:~asanjar/charms/trusty/hadoop/hadoop-elk into lp:charms/hadoop

Proposed by amir sanjar
Status: Merged
Merged at revision: 4
Proposed branch: lp:~asanjar/charms/trusty/hadoop/hadoop-elk
Merge into: lp:charms/hadoop
Diff against target: 537 lines (+151/-90)
5 files modified
README.md (+10/-0)
config.yaml (+14/-1)
files/upstart/defaults (+11/-11)
hooks/hadoop-common (+114/-78)
metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~asanjar/charms/trusty/hadoop/hadoop-elk
Reviewer                   Review Type   Date Requested   Status
Charles Butler             community                      Approve
charmers                                                  Pending
Review via email: mp+224143@code.launchpad.net

Description of the change

Adds Elasticsearch support to the Hadoop charm.
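
A minimal usage sketch, based on the README changes in this proposal (the hadoop-master service name is taken from the scaled-out example in the existing README):

    # deploy Elasticsearch and relate it to the Hadoop master
    juju deploy elasticsearch elasticsearch
    juju add-relation hadoop-master:elasticsearch elasticsearch:client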

Revision history for this message
Charles Butler (lazypower) wrote :

Amir,

I haven't run a deploy test yet, but I've noted the one thing that stood out to me. A possible note here about interface names: since this just relates to Elasticsearch and is not hyper-specific to the E(lasticsearch) L(ogstash) K(ibana) stack, it may make more sense to name the relationship appropriately, as this relationship would work with any data river associated with Elasticsearch.

Otherwise, looks good so far. I'll run a deployment test once you've pushed the modifications.

review: Needs Fixing
6. By amir sanjar

shutdown & elasticsearch changes

Revision history for this message
Charles Butler (lazypower) wrote :

+1 LGTM

review: Approve

Preview Diff

1=== modified file 'README.md'
2--- README.md 2014-06-16 17:12:24 +0000
3+++ README.md 2014-07-07 14:22:54 +0000
4@@ -63,6 +63,7 @@
5 juju add-relation hadoop-master:namenode hadoop-slavecluster:datanode
6 juju add-relation hadoop-master:resourcemanager hadoop-slavecluster:nodemanager
7
8+
9 ### Scale Out Usage: Separate HDFS and MapReduce
10
11 In this configuration the HDFS and YARN deployments operate on
12@@ -85,6 +86,15 @@
13 to be deployed onto machines with more processing power and hdfs services
14 to be deployed onto machines with larger storage.
15
16+### To deploy a Hadoop service with the Elasticsearch service:
17+ # deploy ElasticSearch locally:
18+ juju deploy elasticsearch elasticsearch
19+ # the elasticsearch-hadoop.jar file will be added to the LIBJARS path
20+ # it is recommended to use the hadoop -libjars option to include the elasticsearch-hadoop jar file
21+ juju add-unit elasticsearch
22+ # deploy the hadoop service using any of the scenarios mentioned above
23+ # associate Hadoop with elasticsearch
24+ juju add-relation hadoop-master:elasticsearch elasticsearch:client
25 ## Known Limitations and Issues
26
27 Note that removing the relation between namenode and datanode is destructive!
28
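
The LIBJARS note above refers to the elasticsearch-hadoop jar that the charm downloads and unpacks under /home/ubuntu (see install_elasticsearch_hadoop later in this diff). A hypothetical job submission, assuming a user-supplied my-job.jar whose MyJob driver uses ToolRunner/GenericOptionsParser, would pass the jar through the generic -libjars option:

    # LIBJARS is exported in /home/ubuntu/.bashrc by the hook
    hadoop jar my-job.jar MyJob -libjars $LIBJARS <input-dir> <output-dir>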
29=== modified file 'config.yaml'
30--- config.yaml 2014-05-13 19:22:12 +0000
31+++ config.yaml 2014-07-07 14:22:54 +0000
32@@ -152,7 +152,7 @@
33 map output fetching.
34 hadoop_dir_base:
35 type: string
36- default: /home/ubuntu/hadoop/data
37+ default: /usr/local/hadoop/data
38 description: |
39 The directory under which all other hadoop data is stored. Use this
40 to take advantage of extra storage that might be avaliable.
41@@ -170,3 +170,16 @@
42 default: org.apache.hadoop.mapred.ShuffleHandler
43 description: |
44 Shuffle service that needs to be set for Map Reduce applications.
45+ dfs_heartbeat_interval:
46+ type: int
47+ default: 3
48+ description: |
49+ Determines datanode heartbeat interval in seconds.
50+ dfs_namenode_heartbeat_recheck_interval:
51+ type: int
52+ default: 300000
53+ description: |
54+ Determines the datanode heartbeat recheck interval in milliseconds.
55+ It is used to calculate the final timeout value for the namenode. The calculation is
56+ as follows: 10 minutes 30 seconds = 2 x (dfs.namenode.heartbeat.recheck-interval=5*60*1000)
57+ + 10 * 1000 * (dfs.heartbeat.interval=3)
58
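
As a quick sanity check of the formula in the new description, the default timeout works out as follows (plain shell arithmetic, not charm code):

    # 2 * dfs.namenode.heartbeat.recheck-interval + 10 * 1000 * dfs.heartbeat.interval
    echo $(( 2 * 300000 + 10 * 1000 * 3 ))    # 630000 ms = 10 minutes 30 seconds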
59=== modified file 'files/upstart/defaults'
60--- files/upstart/defaults 2014-05-13 19:22:12 +0000
61+++ files/upstart/defaults 2014-07-07 14:22:54 +0000
62@@ -1,17 +1,17 @@
63 #!/bin/bash
64
65
66-echo "export JAVA_HOME=$1" > /etc/profile.d/hadoopenv.sh
67-echo "export HADOOP_INSTALL=$2" >> /etc/profile.d/hadoopenv.sh
68-echo "export HADOOP_HOME=$2" >> /etc/profile.d/hadoopenv.sh
69-echo "export HADOOP_COMMON_HOME=$2" >> /etc/profile.d/hadoopenv.sh
70-echo "export HADOOP_HDFS_HOME=$2" >> /etc/profile.d/hadoopenv.sh
71-echo "export HADOOP_MAPRED_HOME=$2" >> /etc/profile.d/hadoopenv.sh
72-echo "export HADOOP_YARN_HOME=$2" >> /etc/profile.d/hadoopenv.sh
73-echo "export PATH=$1/bin:$PATH:$2/bin:$2/sbin" >> /etc/profile.d/hadoopenv.sh
74-echo "export YARN_HOME=$2" >> /etc/profile.d/hadoopenv.sh
75-echo "export HADOOP_CONF_DIR=$4" >> /etc/profile.d/hadoopenv.sh
76-echo "export YARN_CONF_DIR=$4" >> /etc/profile.d/hadoopenv.sh
77+#echo "export JAVA_HOME=$1" > /etc/profile.d/hadoopenv.sh
78+echo "export HADOOP_INSTALL=$1" >> /etc/profile.d/hadoopenv.sh
79+echo "export HADOOP_HOME=$1" >> /etc/profile.d/hadoopenv.sh
80+echo "export HADOOP_COMMON_HOME=$1" >> /etc/profile.d/hadoopenv.sh
81+echo "export HADOOP_HDFS_HOME=$1" >> /etc/profile.d/hadoopenv.sh
82+echo "export HADOOP_MAPRED_HOME=$1" >> /etc/profile.d/hadoopenv.sh
83+echo "export HADOOP_YARN_HOME=$1" >> /etc/profile.d/hadoopenv.sh
84+echo "export PATH=$PATH:$1/bin:$1/sbin" >> /etc/profile.d/hadoopenv.sh
85+echo "export YARN_HOME=$1" >> /etc/profile.d/hadoopenv.sh
86+echo "export HADOOP_CONF_DIR=$2" >> /etc/profile.d/hadoopenv.sh
87+echo "export YARN_CONF_DIR=$2" >> /etc/profile.d/hadoopenv.sh
88 chmod +x /etc/profile.d/hadoopenv.sh
89
90
91
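
With this change the defaults script takes only two positional arguments (the Hadoop install directory and the configuration directory); configure_systemenv in hooks/hadoop-common below now sources it as:

    . /etc/default/hadoop $HADOOP_INSTALLED $HADOOP_CONF_DIR
    . /etc/profile.d/hadoopenv.sh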
92=== modified file 'hooks/hadoop-common'
93--- hooks/hadoop-common 2014-05-13 19:22:12 +0000
94+++ hooks/hadoop-common 2014-07-07 14:22:54 +0000
95@@ -19,31 +19,32 @@
96 sed -i -e "s/^127.0.1.1\(.*$hostname.*\)/$private_address\1/" /etc/hosts
97
98 }
99+JAVA_VENDOR="openjdk"
100+JAVA_VERSION="7"
101+HADOOP_VERSION="hadoop-2.2.0"
102+PLATFORM_ARCH="amd64"
103+HADOOP_DIR="/usr/local/hadoop"
104+HADOOP_CONF_DIR="/etc/hadoop/conf.juju"
105+HADOOP_INSTALLED=$HADOOP_DIR/$HADOOP_VERSION
106+HADOOP_TMP_DIR=$HADOOP_DIR/tmp
107
108-set_hadoopenv () {
109- JAVA_VENDOR="openjdk"
110- JAVA_VERSION="7"
111- HADOOP_VERSION="hadoop-2.2.0"
112- PLATFORM_ARCH="amd64"
113- HOME_DIR="/home/ubuntu"
114- HADOOP_DIR="/home/ubuntu/hadoop"
115- HADOOP_CONF_DIR="/etc/hadoop/conf.juju"
116- HADOOP_INSTALLED=$HADOOP_DIR/$HADOOP_VERSION
117- HADOOP_TMP_DIR=$HADOOP_DIR/tmp
118-}
119
120 configure_systemenv(){
121 juju-log "Configuring hadoop system environment"
122- JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
123+ #JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
124 if [ ! -f /etc/default/hadoop ]
125 then
126 juju-log "Configuring hadoop system environment - setting up hadoopenv.sh "
127 install -o root -g root -m 0644 files/upstart/defaults /etc/default/hadoop
128- . /etc/default/hadoop $JAVA_HOME_PATH $HADOOP_INSTALLED $HOME_DIR $HADOOP_CONF_DIR
129+ . /etc/default/hadoop $HADOOP_INSTALLED $HADOOP_CONF_DIR
130 fi
131 juju-log "Configuring hadoop system environment - execute hadoopenv.sh "
132 . /etc/profile.d/hadoopenv.sh
133 }
134+
135+configure_systemenv
136+
137+
138 # Helpers to support conditional restarts based
139 # on specific files changing
140
141@@ -68,7 +69,7 @@
142 configure_sources () {
143 source=`config-get source`
144 juju-log "Configuring hadoop using the Hadoop Ubuntu Team PPA..."
145- # add-apt-repository ppa:hadoop-ubuntu/$source
146+ # add-apt-repository ppa:hadoop-ubuntu/$source
147 # apt-get update -qqy
148 }
149
150@@ -94,7 +95,9 @@
151 mkdir -p $HADOOP_DIR
152 mkdir -p $HADOOP_TMP_DIR
153 apt-get install -qqy openjdk-7-jdk
154+
155 tar -xzf files/archives/$HADOOP_VERSION.tar.gz -C $HADOOP_DIR
156+ chown -R ubuntu:hadoop $HADOOP_DIR
157
158
159 }
160@@ -194,8 +197,10 @@
161 if [ ! -d /etc/hadoop/conf.juju ]
162 then
163 mkdir -p /etc/hadoop/conf.juju
164+
165 cp $HADOOP_INSTALLED/etc/hadoop/* /etc/hadoop/conf.juju
166 #cp -r $HADOOP_INSTALLED/etc/hadoop /etc/hadoop/conf.juju
167+ rm /etc/hadoop/conf.juju/slaves
168 update-alternatives --install /etc/hadoop/conf hadoop-conf \
169 /etc/hadoop/conf.juju 50
170 cp /etc/hadoop/conf.juju/mapred-site.xml.template \
171@@ -209,9 +214,10 @@
172 dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
173 dotdee --setup /etc/hadoop/conf.juju/hadoop-env.sh
174 dotdee --setup /etc/hadoop/conf.juju/yarn-site.xml
175-
176+ chown -R ubuntu:hadoop /etc/hadoop/conf.juju
177 fi
178 # add JAVA_HOME to hadoop-env.sh
179+ JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
180 sed -ir 's|export JAVA_HOME=.*|export JAVA_HOME='$JAVA_HOME_PATH'|' $HADOOP_CONF_DIR/hadoop-env.sh
181 # Configure Heap Size
182 heap=`config-get heap`
183@@ -228,9 +234,6 @@
184 config_element "dfs.namenode.handler.count" \
185 "`config-get dfs_namenode_handler_count`" > \
186 $dir/11-dfs.namenode.handler.count
187- # config_element "dfs.blocksize" \
188- # "`config-get dfs_block_size`" > \
189- # $dir/12-dfs.blocksize
190 config_element "dfs.datanode.max.transfer.threads" \
191 "`config-get dfs_datanode_max_xcievers`" > \
192 $dir/13-dfs.datanode.max.transfer.threads
193@@ -246,6 +249,15 @@
194 config_element "dfs.replication" \
195 "`config-get dfs_replication`" > \
196 $dir/17-dfs.replication
197+ config_element "dfs.heartbeat.interval"\
198+ "`config-get dfs_heartbeat_interval`" > \
199+ $dir/18-dfs.heartbeat.interval
200+ config_element "dfs.datanode.data.dir" \
201+ "`config-get hadoop_dir_base`/cache/hadoop/dfs/name" > \
202+ $dir/19-dfs.datanode.data.dir
203+ config_element "dfs.namenode.heartbeat.recheck-interval"\
204+ "`config-get dfs_namenode_heartbeat_recheck_interval`" > \
205+ $dir/20-dfs.namenode.heartbeat.recheck-interval
206 dotdee --update /etc/hadoop/conf.juju/hdfs-site.xml || true
207
208 # Configure Hadoop YARN
209@@ -290,7 +302,7 @@
210 mkdir -p $dir/cache/hadoop
211 # We don't want to do this recursively since we may be reinstalling, in which case
212 # users have their own cache/<username> directories which shouldn't be stolen
213- groupadd -f hadoop
214+ #groupadd -f hadoop
215 chown ubuntu:hadoop $dir $dir/cache $dir/cache/hadoop
216 # Ensure group write on this directory or we can start namenode/datanode
217 chmod 775 $dir/cache/hadoop
218@@ -302,6 +314,7 @@
219 datanode|secondarynamenode)
220 sshkey=`relation-get namenode_sshkey`
221 echo $sshkey >> /home/ubuntu/.ssh/authorized_keys
222+ chown -R ubuntu:hadoop /home/ubuntu/.ssh
223 ;;
224 namenode)
225 relation-set namenode_sshkey="$(cat /home/ubuntu/.ssh/id_rsa.pub)"
226@@ -311,15 +324,13 @@
227 juju-log "Configuring service unit relation $1..."
228 case $1 in
229 datanode|secondarynamenode|mapred-namenode)
230- # sshkey=`relation-get namenode_sshkey`
231- # echo $sshkey >> /home/ubuntu/.ssh/authorized_keys
232+
233 namenode_address=`relation-get private-address`
234 config_element "fs.default.name" "hdfs://$namenode_address:9000" > \
235 $dir/50-fs.default.name
236 ;;
237 namenode)
238 private_address=`unit-get private-address`
239- # relation-set namenode_sshkey="$(cat /home/ubuntu/.ssh/id_rsa.pub)"
240 config_element "fs.default.name" "hdfs://$private_address:9000" > \
241 $dir/50-fs.default.name
242 ;;
243@@ -384,7 +395,7 @@
244
245 format_namenode () {
246 juju-log "Formatting namenode filesystem"
247- hdfs namenode -format
248+ su ubuntu -c "$HADOOP_INSTALLED/bin/hdfs namenode -format"
249 chown -R ubuntu:hadoop $HADOOP_DIR
250 }
251
252@@ -397,16 +408,18 @@
253 juju-log "Restarting hdfs: $hdfs_role"
254 case $hdfs_role in
255 namenode)
256- hadoop-daemon.sh stop namenode
257- hadoop-daemon.sh start namenode
258+ #hadoop-daemon.sh stop namenode
259+ #hadoop-daemon.sh start namenode
260+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode"
261+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode"
262 ;;
263 datanode)
264- su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemons.sh stop datanode"
265- su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemons.sh start datanode"
266+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode"
267+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode"
268 ;;
269 secondarynamenode)
270- hadoop-daemons.sh stop secondarynamenode
271- hadoop-daemons.sh start secondarynamenode
271+ # will be replaced with HA; hadoop-daemons.sh stop secondarynamenode
273+ # hadoop-daemons.sh start secondarynamenode
274 ;;
275 esac
276 }
277@@ -414,12 +427,14 @@
278 juju-log "Restarting yarn: $mapred_role"
279 case $mapred_role in
280 resourcemanager)
281- yarn-daemon.sh stop resourcemanager
282- yarn-daemon.sh start resourcemanager
283+ #yarn-daemon.sh stop resourcemanager
284+ #yarn-daemon.sh start resourcemanager
285+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager"
286+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager"
287 ;;
288 nodemanager)
289- su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemons.sh stop nodemanager"
290- su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemons.sh start nodemanager"
291+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager"
292+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager"
293 ;;
294 esac
295 }
296@@ -427,10 +442,7 @@
297
298 # Hadoop Service Control Commands
299 restart_hadoop () {
300- # [ "$hdfs_role" != "unconfigured" ] && \
301- # _restart_ hadoop-$hdfs_role || :
302- # [ "$mapred_role" != "unconfigured" ] && \
303- # _restart_ hadoop-$mapred_role || :
304+
305 juju-log "restart_hadoop hdfs_role=$hdfs_role and yarn_role=$mapred_role"
306 [ "$hdfs_role" != "unconfigured" ] && \
307 _restart_hdfs || :
308@@ -438,35 +450,32 @@
309 [ "$mapred_role" != "unconfigured" ] && \
310 _restart_yarn || :
311 }
312+
313 stop_hadoop () {
314- # [ "$hdfs_role" != "unconfigured" ] && \
315- # _stop_ hadoop-$hdfs_role || :
316- # [ "$mapred_role" != "unconfigured" ] && \
317- # _stop_ hadoop-$mapred_role || :
318+
319 juju-log "Stop_hadoop hdfs_role=$hdfs_role and yarn_role=$mapred_role"
320 case $hdfs_role in
321 namenode)
322- hadoop-daemon.sh stop namenode
323+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode"
324 ;;
325 datanode)
326- hadoop-daemons.sh stop datanode
327+ su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode"
328 ;;
329 esac
330+
331 case $mapred_role in
332 resourcemanager)
333- yarn-daemon.sh stop resourcemanager
334+ #yarn-daemon.sh stop resourcemanager
335+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager"
336 ;;
337 nodemanager)
338- yarn-daemons.sh stop nodemanager
339+ su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager"
340 ;;
341 esac
342+
343+
344 }
345-#start_hadoop () {
346-# [ "$hdfs_role" != "unconfigured" ] && \
347-# _start_ hadoop-$hdfs_role || :
348-# [ "$mapred_role" != "unconfigured" ] && \
349-# _start_ hadoop-$mapred_role || :
350-#}
351+
352
353 # Restart services only if configuration files
354 # have actually changed
355@@ -484,22 +493,39 @@
356 then
357 # Just restart HDFS role
358 [ "$hdfs_role" != "unconfigured" ] && \
359-# _restart_ hadoop-$hdfs_role || :
360 _restart_hdfs || :
361 fi
362 if file_changed /etc/hadoop/conf.juju/yarn-site.xml
363 then
364 # Just restart mapreduce role
365 [ "$mapred_role" != "unconfigured" ] && \
366-# _restart_ hadoop-$mapred_role || :
367 _restart_yarn || :
368 fi
369 }
370+install_elasticsearch_hadoop () {
371+ if [ ! -f "/home/ubuntu/elasticsearch-hadoop-2.0.0/dist/elasticsearch-hadoop-2.0.0.jar" ] ;
372+ then
373+ apt-get install unzip -y
374+ cd /home/ubuntu
375+
376+ wget http://download.elasticsearch.org/hadoop/elasticsearch-hadoop-2.0.0{.zip,.zip.sha1.txt}
377+ sh1=$(openssl sha1 *.zip | awk '{print $NF}')
378+ sh2=$(cat *.sha1.txt | awk '{print $1;}')
379+ if [ $sh1 != $sh2 ] ;
380+ then
381+ juju-log "invalid checksum"
382+ exit 1
383+ fi
384+
385+ unzip elasticsearch-hadoop-2.0.0.zip
386+ elk_jar_path="export LIBJARS=\$LIBJARS:/home/ubuntu/elasticsearch-hadoop-2.0.0/dist/elasticsearch-hadoop-2.0.0.jar"
387+ echo $elk_jar_path >> /home/ubuntu/.bashrc
388+
389+ fi
390+}
391
392 install_job () {
393- #HADOOP_HOME=/usr/lib/hadoop
394- set_hadoopenv
395- configure_systemenv
396+
397 juju-log "installing terasort script"
398 cp scripts/terasort.sh $HADOOP_DIR
399 chown ubuntu.hadoop $HADOOP_DIR/terasort.sh
400@@ -535,31 +561,27 @@
401 install)
402 configure_hosts
403 configure_sources
404- set_hadoopenv
405+
406 install_base_packages
407- configure_systemenv
408+
409 install_optional_packages
410 configure_hadoop
411 configure_tmp_dir_perms
412 ;;
413+ stop)
414+ stop_hadoop
415+ ;;
416 resourcemanager-relation-joined)
417 case $mapred_role in
418 unconfigured)
419 juju-log "Configuring this unit as a resourcemanager"
420- mapred_role="resourcemanager"
421- # this is a hack, should be removed with .dep package install
422- #mkdir -p /usr/share/doc/hadoop-resourcemanager
423- # end of hack
424+ mapred_role="resourcemanager"
425 configure_role_relation $mapred_role
426- install_packages $mapred_role
427- set_hadoopenv
428- configure_systemenv
429+ install_packages $mapred_role
430 _restart_yarn
431 open_ports
432 install_job
433- # Some hadoop processes take a bit of time to start
434- # we need to let them get to a point where they are
435- # ready to accept connections
436+
437 sleep 10 && relation-set ready="true"
438 ;;
439 resourcemanager)
440@@ -592,8 +614,7 @@
441 # end of hack
442 configure_role_relation $mapred_role
443 install_packages $mapred_role
444- set_hadoopenv
445- configure_systemenv
446+
447 _restart_yarn
448 open_ports
449 fi
450@@ -632,8 +653,7 @@
451 # this is a hack, should be removed with .dep package install
452 #mkdir -p /usr/share/doc/hadoop-namenode
453 # end of hack
454- set_hadoopenv
455- configure_systemenv
456+
457 configure_role_relation $hdfs_role
458 install_packages $hdfs_role
459 stop_hadoop
460@@ -675,8 +695,7 @@
461 # end of hack
462 configure_role_relation $hdfs_role
463 install_packages $hdfs_role
464- set_hadoopenv
465- configure_systemenv
466+
467 _restart_hdfs
468 open_ports
469 fi
470@@ -703,14 +722,14 @@
471 juju-log "Namenode not yet ready"
472 exit 0
473 else
474+ relation-set slave_IP=`unit-get private-address`
475 juju-log "Configuring this unit as a datanode"
476 hdfs_role="datanode"
477 # this is a hack, should be removed with .dep package install
478 # mkdir -p /usr/share/doc/hadoop-datanode
479 # end of hack
480 configure_role_relation $hdfs_role
481- set_hadoopenv
482- configure_systemenv
483+
484 install_packages $hdfs_role
485 _restart_hdfs
486 open_ports
487@@ -729,6 +748,11 @@
488 ;;
489 esac
490 ;;
491+ elasticsearch-relation-joined)
492+ juju-log "Reconfiguring this resourcemanager to communicate with elasticsearch cluster"
493+ [ "$mapred_role" = "resourcemanager" ] && install_elasticsearch_hadoop || :
494+
495+ ;;
496 ganglia-relation-changed)
497 # Call generic ganglia install and configure script
498 # TODO supercede when subordinates land.
499@@ -741,14 +765,26 @@
500 conditional_restart # only restart if pertinent config has changed!
501 ;;
502 upgrade-charm|config-changed)
503- set_hadoopenv
504+
505 install_optional_packages
506- configure_systemenv
507 configure_hadoop
508 configure_tmp_dir_perms
509 conditional_restart # only restart if pertinent config has changed!
510 open_ports
511 ;;
512+ namenode-relation-changed)
513+ juju-log "Configuring namenode - Changed phase"
514+ slave_IP=`relation-get slave_IP`
515+ if [ -z "$slave_IP" ]
516+ then
517+ juju-log "Configuring namenode changed - Changed phase - NO slave ip"
518+ exit 0
519+ else
520+ juju-log "Configuring namenode - Changed phase - got slave_ip= $slave_IP"
521+ echo $slave_IP>>/etc/hadoop/conf.juju/slaves
522+ chown -R ubuntu:hadoop /etc/hadoop/conf.juju
523+ fi
524+ ;;
525 *)
526 juju-log "Command not recognised"
527 ;;
528
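
After the daemon restarts above, one generic way to confirm the expected processes on a unit (not part of the charm; assumes jps from the installed openjdk-7-jdk is on the PATH) is:

    juju ssh hadoop-master/0 jps            # expect NameNode and/or ResourceManager
    juju ssh hadoop-slavecluster/0 jps      # expect DataNode and/or NodeManager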
529=== modified file 'metadata.yaml'
530--- metadata.yaml 2014-06-13 22:40:33 +0000
531+++ metadata.yaml 2014-07-07 14:22:54 +0000
532@@ -21,3 +21,5 @@
533 interface: mapred
534 mapred-namenode:
535 interface: dfs
536+ elasticsearch:
537+ interface: elasticsearch
