Merge lp:~asanjar/charms/trusty/hadoop/hadoop-elk into lp:charms/hadoop

Proposed by amir sanjar
Status: Merged
Merged at revision: 4
Proposed branch: lp:~asanjar/charms/trusty/hadoop/hadoop-elk
Merge into: lp:charms/hadoop
Diff against target: 537 lines (+151/-90)
5 files modified
README.md (+10/-0)
config.yaml (+14/-1)
files/upstart/defaults (+11/-11)
hooks/hadoop-common (+114/-78)
metadata.yaml (+2/-0)
To merge this branch: bzr merge lp:~asanjar/charms/trusty/hadoop/hadoop-elk
Reviewer Review Type Date Requested Status
Charles Butler (community) Approve
charmers Pending
Review via email: mp+224143@code.launchpad.net

Description of the change

Adds Elasticsearch support to the Hadoop charm.
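
In practice, per the README changes in this diff, the feature is used by deploying an Elasticsearch service next to the Hadoop master and relating the two (service names below are illustrative):

    juju deploy hadoop hadoop-master
    juju deploy elasticsearch elasticsearch
    juju add-relation hadoop-master:elasticsearch elasticsearch:client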

Revision history for this message
Charles Butler (lazypower) wrote :

Amir,

I haven't run a deploy test yet. I've noted the one thing that stood out to me: a possible note here about interface names. Since this just relates to Elasticsearch and is not hyper-specific to the E.lasticsearch L.ogstash K.ibana stack, it may make more sense to name the relationship appropriately, as this relationship would work with any data river associated with Elasticsearch.

Otherwise, looks good so far. I'll run a deployment test once you've pushed the modifications.

review: Needs Fixing
6. By amir sanjar

shutdown & elasticsearch changes

Revision history for this message
Charles Butler (lazypower) wrote :

+1 LGTM

review: Approve

Preview Diff

=== modified file 'README.md'
--- README.md 2014-06-16 17:12:24 +0000
+++ README.md 2014-07-07 14:22:54 +0000
@@ -63,6 +63,7 @@
     juju add-relation hadoop-master:namenode hadoop-slavecluster:datanode
     juju add-relation hadoop-master:resourcemanager hadoop-slavecluster:nodemanager
 
+
 ### Scale Out Usage: Separate HDFS and MapReduce
 
 In this configuration the HDFS and YARN deployments operate on
@@ -85,6 +86,15 @@
 to be deployed onto machines with more processing power and hdfs services
 to be deployed onto machines with larger storage.
 
+### TO deploy a Hadoop service with elasticsearch service::
+    # deploy ElasticSearch locally:
+    juju deploy elasticsearch elasticsearch
+    # elasticsearch-hadoop.jar file will be added to LIBJARS path
+    # Recommanded to use hadoop -libjars option to included elk jar file
+    juju add-unit -n elasticsearch
+    # deploy hive service by any senarios mentioned above
+    # associate Hive with elasticsearch
+    juju add-relation hadoop-master:elasticsearch elasticsearch:client
 ## Known Limitations and Issues
 
 Note that removing the relation between namenode and datanode is destructive!
 
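
The -libjars recommendation above assumes jobs are submitted with Hadoop's generic options so that the elasticsearch-hadoop jar is shipped with the job; a rough sketch (the job jar, main class, and es.nodes value are placeholders, not something this charm provides):

    # LIBJARS is exported via /home/ubuntu/.bashrc by the hook change below
    hadoop jar my-job.jar org.example.MyEsJob -libjars $LIBJARS -D es.nodes=<elasticsearch-host> <input> <output>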
=== modified file 'config.yaml'
--- config.yaml 2014-05-13 19:22:12 +0000
+++ config.yaml 2014-07-07 14:22:54 +0000
@@ -152,7 +152,7 @@
       map output fetching.
   hadoop_dir_base:
     type: string
-    default: /home/ubuntu/hadoop/data
+    default: /usr/local/hadoop/data
     description: |
       The directory under which all other hadoop data is stored. Use this
       to take advantage of extra storage that might be avaliable.
@@ -170,3 +170,16 @@
     default: org.apache.hadoop.mapred.ShuffleHandler
     description: |
       Shuffle service that needs to be set for Map Reduce applications.
+  dfs_heartbeat_interval:
+    type: int
+    default: 3
+    description: |
+      Determines datanode heartbeat interval in seconds.
+  dfs_namenode_heartbeat_recheck_interval:
+    type: int
+    default: 300000
+    description: |
+      Determines datanode recheck heartbeat interval in milliseconds
+      It is used to calculate the final tineout value for namenode. Calcultion process is
+      as follow: 10.30 minutes = 2 x (dfs.namenode.heartbeat.recheck-interval=5*60*1000)
+      + 10 * 1000 * (dfs.heartbeat.interval=3)
 
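
For reference, with the two defaults above the namenode's dead-datanode timeout works out to:

    2 x dfs.namenode.heartbeat.recheck-interval + 10 x 1000 x dfs.heartbeat.interval
      = 2 x 300000 ms + 10 x 1000 x 3 ms
      = 630000 ms, i.e. 10 minutes 30 seconds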
=== modified file 'files/upstart/defaults'
--- files/upstart/defaults 2014-05-13 19:22:12 +0000
+++ files/upstart/defaults 2014-07-07 14:22:54 +0000
@@ -1,17 +1,17 @@
 #!/bin/bash
 
 
-echo "export JAVA_HOME=$1" > /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_INSTALL=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_COMMON_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_HDFS_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_MAPRED_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_YARN_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export PATH=$1/bin:$PATH:$2/bin:$2/sbin" >> /etc/profile.d/hadoopenv.sh
-echo "export YARN_HOME=$2" >> /etc/profile.d/hadoopenv.sh
-echo "export HADOOP_CONF_DIR=$4" >> /etc/profile.d/hadoopenv.sh
-echo "export YARN_CONF_DIR=$4" >> /etc/profile.d/hadoopenv.sh
+#echo "export JAVA_HOME=$1" > /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_INSTALL=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_COMMON_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_HDFS_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_MAPRED_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_YARN_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export PATH=$PATH:$1/bin:$1/sbin" >> /etc/profile.d/hadoopenv.sh
+echo "export YARN_HOME=$1" >> /etc/profile.d/hadoopenv.sh
+echo "export HADOOP_CONF_DIR=$2" >> /etc/profile.d/hadoopenv.sh
+echo "export YARN_CONF_DIR=$2" >> /etc/profile.d/hadoopenv.sh
 chmod +x /etc/profile.d/hadoopenv.sh
 
 
 
=== modified file 'hooks/hadoop-common'
--- hooks/hadoop-common 2014-05-13 19:22:12 +0000
+++ hooks/hadoop-common 2014-07-07 14:22:54 +0000
@@ -19,31 +19,32 @@
     sed -i -e "s/^127.0.1.1\(.*$hostname.*\)/$private_address\1/" /etc/hosts
 
 }
+JAVA_VENDOR="openjdk"
+JAVA_VERSION="7"
+HADOOP_VERSION="hadoop-2.2.0"
+PLATFORM_ARCH="amd64"
+HADOOP_DIR="/usr/local/hadoop"
+HADOOP_CONF_DIR="/etc/hadoop/conf.juju"
+HADOOP_INSTALLED=$HADOOP_DIR/$HADOOP_VERSION
+HADOOP_TMP_DIR=$HADOOP_DIR/tmp
 
-set_hadoopenv () {
-    JAVA_VENDOR="openjdk"
-    JAVA_VERSION="7"
-    HADOOP_VERSION="hadoop-2.2.0"
-    PLATFORM_ARCH="amd64"
-    HOME_DIR="/home/ubuntu"
-    HADOOP_DIR="/home/ubuntu/hadoop"
-    HADOOP_CONF_DIR="/etc/hadoop/conf.juju"
-    HADOOP_INSTALLED=$HADOOP_DIR/$HADOOP_VERSION
-    HADOOP_TMP_DIR=$HADOOP_DIR/tmp
-}
 
 configure_systemenv(){
 juju-log "Configuring hadoop system environment"
-    JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
+    #JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
     if [ ! -f /etc/default/hadoop ]
     then
         juju-log "Configuring hadoop system environment - setting up hadoopenv.sh "
         install -o root -g root -m 0644 files/upstart/defaults /etc/default/hadoop
-        . /etc/default/hadoop $JAVA_HOME_PATH $HADOOP_INSTALLED $HOME_DIR $HADOOP_CONF_DIR
+        . /etc/default/hadoop $HADOOP_INSTALLED $HADOOP_CONF_DIR
     fi
     juju-log "Configuring hadoop system environment - execute hadoopenv.sh "
     . /etc/profile.d/hadoopenv.sh
 }
+
+configure_systemenv
+
+
 # Helpers to support conditional restarts based
 # on specific files changing
 
@@ -68,7 +69,7 @@
 configure_sources () {
     source=`config-get source`
     juju-log "Configuring hadoop using the Hadoop Ubuntu Team PPA..."
     # add-apt-repository ppa:hadoop-ubuntu/$source
     # apt-get update -qqy
 }
 
@@ -94,7 +95,9 @@
     mkdir -p $HADOOP_DIR
     mkdir -p $HADOOP_TMP_DIR
     apt-get install -qqy openjdk-7-jdk
+
     tar -xzf files/archives/$HADOOP_VERSION.tar.gz -C $HADOOP_DIR
+    chown -R ubuntu:hadoop $HADOOP_DIR
 
 
 }
@@ -194,8 +197,10 @@
     if [ ! -d /etc/hadoop/conf.juju ]
     then
         mkdir -p /etc/hadoop/conf.juju
+
         cp $HADOOP_INSTALLED/etc/hadoop/* /etc/hadoop/conf.juju
         #cp -r $HADOOP_INSTALLED/etc/hadoop /etc/hadoop/conf.juju
+        rm /etc/hadoop/conf.juju/slaves
         update-alternatives --install /etc/hadoop/conf hadoop-conf \
             /etc/hadoop/conf.juju 50
         cp /etc/hadoop/conf.juju/mapred-site.xml.template \
@@ -209,9 +214,10 @@
         dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
         dotdee --setup /etc/hadoop/conf.juju/hadoop-env.sh
         dotdee --setup /etc/hadoop/conf.juju/yarn-site.xml
-
+        chown -R ubuntu:hadoop /etc/hadoop/conf.juju
     fi
     # add JAVA_HOME to hadoop-env.sh
+    JAVA_HOME_PATH=$(sudo find /usr/ -name java-7-openjdk-*)
     sed -ir 's|export JAVA_HOME=.*|export JAVA_HOME='$JAVA_HOME_PATH'|' $HADOOP_CONF_DIR/hadoop-env.sh
 # Configure Heap Size
     heap=`config-get heap`
@@ -228,9 +234,6 @@
     config_element "dfs.namenode.handler.count" \
         "`config-get dfs_namenode_handler_count`" > \
         $dir/11-dfs.namenode.handler.count
-    # config_element "dfs.blocksize" \
-    #     "`config-get dfs_block_size`" > \
-    #     $dir/12-dfs.blocksize
     config_element "dfs.datanode.max.transfer.threads" \
         "`config-get dfs_datanode_max_xcievers`" > \
         $dir/13-dfs.datanode.max.transfer.threads
@@ -246,6 +249,15 @@
     config_element "dfs.replication" \
         "`config-get dfs_replication`" > \
         $dir/17-dfs.replication
+    config_element "dfs.heartbeat.interval"\
+        "`config-get dfs_heartbeat_interval`" > \
+        $dir/18-dfs.heartbeat.interval
+    config_element "dfs.datanode.data.dir" \
+        "`config-get hadoop_dir_base`/cache/hadoop/dfs/name" > \
+        $dir/19-dfs.datanode.data.dir
+    config_element "dfs.namenode.heartbeat.recheck-interval"\
+        "`config-get dfs_namenode_heartbeat_recheck_interval`" > \
+        $dir/20-dfs.namenode.heartbeat.recheck-interval
     dotdee --update /etc/hadoop/conf.juju/hdfs-site.xml || true
 
     # Configure Hadoop YARN
@@ -290,7 +302,7 @@
     mkdir -p $dir/cache/hadoop
     # We don't want to do this recursively since we may be reinstalling, in which case
     # users have their own cache/<username> directories which shouldn't be stolen
-    groupadd -f hadoop
+    #groupadd -f hadoop
     chown ubuntu:hadoop $dir $dir/cache $dir/cache/hadoop
     # Ensure group write on this directory or we can start namenode/datanode
     chmod 775 $dir/cache/hadoop
@@ -302,6 +314,7 @@
         datanode|secondarynamenode)
             sshkey=`relation-get namenode_sshkey`
             echo $sshkey >> /home/ubuntu/.ssh/authorized_keys
+            chown -R ubuntu:hadoop /home/ubuntu/.ssh
             ;;
         namenode)
             relation-set namenode_sshkey="$(cat /home/ubuntu/.ssh/id_rsa.pub)"
@@ -311,15 +324,13 @@
     juju-log "Configuring service unit relation $1..."
     case $1 in
         datanode|secondarynamenode|mapred-namenode)
-            # sshkey=`relation-get namenode_sshkey`
-            # echo $sshkey >> /home/ubuntu/.ssh/authorized_keys
+
             namenode_address=`relation-get private-address`
             config_element "fs.default.name" "hdfs://$namenode_address:9000" > \
                 $dir/50-fs.default.name
             ;;
         namenode)
             private_address=`unit-get private-address`
-            # relation-set namenode_sshkey="$(cat /home/ubuntu/.ssh/id_rsa.pub)"
             config_element "fs.default.name" "hdfs://$private_address:9000" > \
                 $dir/50-fs.default.name
             ;;
@@ -384,7 +395,7 @@
 
 format_namenode () {
     juju-log "Formatting namenode filesystem"
-    hdfs namenode -format
+    su ubuntu -c "$HADOOP_INSTALLED/bin/hdfs namenode -format"
     chown -R ubuntu:hadoop $HADOOP_DIR
 }
 
@@ -397,16 +408,18 @@
     juju-log "Restarting hdfs: $hdfs_role"
     case $hdfs_role in
         namenode)
-            hadoop-daemon.sh stop namenode
-            hadoop-daemon.sh start namenode
+            #hadoop-daemon.sh stop namenode
+            #hadoop-daemon.sh start namenode
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode"
             ;;
         datanode)
-            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemons.sh stop datanode"
-            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemons.sh start datanode"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode"
             ;;
         secondarynamenode)
-            hadoop-daemons.sh stop secondarynamenode
-            hadoop-daemons.sh start secondarynamenode
+            # will be replace with HA hadoop-daemons.sh stop secondarynamenode
+            # hadoop-daemons.sh start secondarynamenode
             ;;
     esac
 }
@@ -414,12 +427,14 @@
     juju-log "Restarting yarn: $mapred_role"
     case $mapred_role in
         resourcemanager)
-            yarn-daemon.sh stop resourcemanager
-            yarn-daemon.sh start resourcemanager
+            #yarn-daemon.sh stop resourcemanager
+            #yarn-daemon.sh start resourcemanager
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager"
             ;;
         nodemanager)
-            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemons.sh stop nodemanager"
-            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemons.sh start nodemanager"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager"
+            su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager"
             ;;
     esac
 }
@@ -427,10 +442,7 @@
 
 # Hadoop Service Control Commands
 restart_hadoop () {
-    # [ "$hdfs_role" != "unconfigured" ] && \
-    # _restart_ hadoop-$hdfs_role || :
-    # [ "$mapred_role" != "unconfigured" ] && \
-    # _restart_ hadoop-$mapred_role || :
+
     juju-log "restart_hadoop hdfs_role=$hdfs_role and yarn_role=$mapred_role"
     [ "$hdfs_role" != "unconfigured" ] && \
         _restart_hdfs || :
@@ -438,35 +450,32 @@
438 [ "$mapred_role" != "unconfigured" ] && \450 [ "$mapred_role" != "unconfigured" ] && \
439 _restart_yarn || :451 _restart_yarn || :
440}452}
453
441stop_hadoop () {454stop_hadoop () {
442 # [ "$hdfs_role" != "unconfigured" ] && \455
443 # _stop_ hadoop-$hdfs_role || :
444 # [ "$mapred_role" != "unconfigured" ] && \
445 # _stop_ hadoop-$mapred_role || :
446 juju-log "Stop_hadoop hdfs_role=$hdfs_role and yarn_role=$mapred_role"456 juju-log "Stop_hadoop hdfs_role=$hdfs_role and yarn_role=$mapred_role"
447 case $hdfs_role in457 case $hdfs_role in
448 namenode)458 namenode)
449 hadoop-daemon.sh stop namenode 459 su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode"
450 ;;460 ;;
451 datanode)461 datanode)
452 hadoop-daemons.sh stop datanode462 su ubuntu -c "$HADOOP_INSTALLED/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop datanode"
453 ;;463 ;;
454 esac464 esac
465
455 case $mapred_role in466 case $mapred_role in
456 resourcemanager)467 resourcemanager)
457 yarn-daemon.sh stop resourcemanager468 #yarn-daemon.sh stop resourcemanager
469 su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager"
458 ;;470 ;;
459 nodemanager)471 nodemanager)
460 yarn-daemons.sh stop nodemanager472 su ubuntu -c "$HADOOP_INSTALLED/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager"
461 ;;473 ;;
462 esac474 esac
475
476
463}477}
464#start_hadoop () {478
465# [ "$hdfs_role" != "unconfigured" ] && \
466# _start_ hadoop-$hdfs_role || :
467# [ "$mapred_role" != "unconfigured" ] && \
468# _start_ hadoop-$mapred_role || :
469#}
470479
471# Restart services only if configuration files480# Restart services only if configuration files
472# have actually changed481# have actually changed
@@ -484,22 +493,39 @@
     then
         # Just restart HDFS role
         [ "$hdfs_role" != "unconfigured" ] && \
-# _restart_ hadoop-$hdfs_role || :
             _restart_hdfs || :
     fi
     if file_changed /etc/hadoop/conf.juju/yarn-site.xml
     then
         # Just restart mapreduce role
         [ "$mapred_role" != "unconfigured" ] && \
-# _restart_ hadoop-$mapred_role || :
             _restart_yarn || :
     fi
 }
+install_elasticsearch_hadoop () {
+    if [ ! -f "/home/ubuntu/elasticsearch-hadoop-2.0.0/dist/elasticsearch-hadoop-2.0.0.jar" ] ;
+    then
+        apt-get install unzip -y
+        cd /home/ubuntu
+
+        wget http://download.elasticsearch.org/hadoop/elasticsearch-hadoop-2.0.0{.zip,.zip.sha1.txt}
+        sh1=$(openssl sha1 *.zip | awk '{print $NF}')
+        sh2=$(cat *.sha1.txt | awk '{print $1;}')
+        if [ $sh1 != $sh2 ] ;
+        then
+            juju-log "invalid checksum"
+            exit 1
+        fi
+
+        unzip elasticsearch-hadoop-2.0.0.zip
+        elk_jar_path="export LIBJARS=\$LIBJARS:/home/ubuntu/elasticsearch-hadoop-2.0.0/dist/elasticsearch-hadoop-2.0.0.jar"
+        echo $elk_jar_path >> /home/ubuntu/.bashrc
+
+    fi
+}
 
 install_job () {
-    #HADOOP_HOME=/usr/lib/hadoop
-    set_hadoopenv
-    configure_systemenv
+
     juju-log "installing terasort script"
     cp scripts/terasort.sh $HADOOP_DIR
     chown ubuntu.hadoop $HADOOP_DIR/terasort.sh
@@ -535,31 +561,27 @@
     install)
         configure_hosts
         configure_sources
-        set_hadoopenv
+
         install_base_packages
-        configure_systemenv
+
         install_optional_packages
         configure_hadoop
         configure_tmp_dir_perms
         ;;
+    stop)
+        stop_hadoop
+        ;;
     resourcemanager-relation-joined)
         case $mapred_role in
             unconfigured)
                 juju-log "Configuring this unit as a resourcemanager"
                 mapred_role="resourcemanager"
-                # this is a hack, should be removed with .dep package install
-                #mkdir -p /usr/share/doc/hadoop-resourcemanager
-                # end of hack
                 configure_role_relation $mapred_role
                 install_packages $mapred_role
-                set_hadoopenv
-                configure_systemenv
                 _restart_yarn
                 open_ports
                 install_job
-                # Some hadoop processes take a bit of time to start
-                # we need to let them get to a point where they are
-                # ready to accept connections
+
                 sleep 10 && relation-set ready="true"
                 ;;
             resourcemanager)
@@ -592,8 +614,7 @@
             # end of hack
             configure_role_relation $mapred_role
             install_packages $mapred_role
-            set_hadoopenv
-            configure_systemenv
+
             _restart_yarn
             open_ports
         fi
@@ -632,8 +653,7 @@
                 # this is a hack, should be removed with .dep package install
                 #mkdir -p /usr/share/doc/hadoop-namenode
                 # end of hack
-                set_hadoopenv
-                configure_systemenv
+
                 configure_role_relation $hdfs_role
                 install_packages $hdfs_role
                 stop_hadoop
@@ -675,8 +695,7 @@
             # end of hack
             configure_role_relation $hdfs_role
             install_packages $hdfs_role
-            set_hadoopenv
-            configure_systemenv
+
             _restart_hdfs
             open_ports
         fi
@@ -703,14 +722,14 @@
             juju-log "Namenode not yet ready"
             exit 0
         else
+            relation-set slave_IP=`unit-get private-address`
             juju-log "Configuring this unit as a datanode"
             hdfs_role="datanode"
             # this is a hack, should be removed with .dep package install
             # mkdir -p /usr/share/doc/hadoop-datanode
             # end of hack
             configure_role_relation $hdfs_role
-            set_hadoopenv
-            configure_systemenv
+
             install_packages $hdfs_role
             _restart_hdfs
             open_ports
@@ -729,6 +748,11 @@
             ;;
         esac
         ;;
+    elasticsearch-relation-joined)
+        juju-log "Reconfiguring this resourcemanager to communicate with elasticsearch cluster"
+        [ mapred_role="resourcemanager" ] && install_elasticsearch_hadoop || :
+
+        ;;
     ganglia-relation-changed)
         # Call generic ganglia install and configure script
         # TODO supercede when subordinates land.
@@ -741,14 +765,26 @@
         conditional_restart # only restart if pertinent config has changed!
         ;;
     upgrade-charm|config-changed)
-        set_hadoopenv
+
         install_optional_packages
-        configure_systemenv
         configure_hadoop
         configure_tmp_dir_perms
         conditional_restart # only restart if pertinent config has changed!
         open_ports
         ;;
+    namenode-relation-changed)
+        juju-log "Configuring namenode - Changed phase"
+        slave_IP=`relation-get slave_IP`
+        if [ -z "$slave_IP" ]
+        then
+            juju-log "Configuring namenode changed - Changed phase - NO slave ip"
+            exit 0
+        else
+            juju-log "Configuring namenode - Changed phase - got slave_ip= $slave_IP"
+            echo $slave_IP>>/etc/hadoop/conf.juju/slaves
+            chown -R ubuntu:hadoop /etc/hadoop/conf.juju
+        fi
+        ;;
     *)
         juju-log "Command not recognised"
         ;;
 
=== modified file 'metadata.yaml'
--- metadata.yaml 2014-06-13 22:40:33 +0000
+++ metadata.yaml 2014-07-07 14:22:54 +0000
@@ -21,3 +21,5 @@
     interface: mapred
   mapred-namenode:
     interface: dfs
+  elasticsearch:
+    interface: elasticsearch
