Merge lp:~hazmat/charms/precise/hadoop/trunk into lp:~charmers/charms/precise/hadoop/trunk

Proposed by Gary Poster
Status: Merged
Merged at revision: 31
Proposed branch: lp:~hazmat/charms/precise/hadoop/trunk
Merge into: lp:~charmers/charms/precise/hadoop/trunk
Diff against target: 255 lines (+38/-39)
2 files modified
config.yaml (+13/-13)
hooks/hadoop-common (+25/-26)
To merge this branch: bzr merge lp:~hazmat/charms/precise/hadoop/trunk
Reviewer: Marco Ceppi (community), status: Approve
Review via email: mp+191278@code.launchpad.net

Commit message

Switch config option names from dots to underscores. This makes the charm work with Juju Core and the GUI.

Description of the change

This branch from Kapil includes Jeff Pihach's fix for the hadoop charm.

The hadoop charm uses dots in its config option names. This breaks the CLI (juju set will fail) and the GUI (deployment will fail). Rejecting dotted option names may be a conscious decision in the CLI; whether or not that is so, as a simple practical solution we are treating dots as inappropriate for config option names, until/unless changing this becomes a high enough priority for Juju Core and the GUI.
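For example, with the renamed options a config change through the CLI would look something like this (juju-core 1.x syntax; "hadoop" stands in for whatever service name you deployed the charm under):

    juju set hadoop dfs_namenode_handler_count=20

whereas the old dotted name (dfs.namenode.handler.count) is rejected by juju set.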

The fix here is to convert the dots in the option names to underscores, and to map them back to the dotted Hadoop property names inside the hook.
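Concretely, config.yaml now declares the options with underscores, and hooks/hadoop-common converts them back to Hadoop's dotted property names when it writes the XML configuration. A minimal sketch of the pattern used in the diff below (config_element is the charm's existing helper for emitting a single Hadoop property):

    # Option names as exposed to Juju use underscores...
    MAPRED_CONFIG="mapred_reduce_parallel_copies
    mapred_child_java_opts"

    for element in $MAPRED_CONFIG
    do
        # ...and ${element//_/.} turns e.g. mapred_child_java_opts back into
        # mapred.child.java.opts before the property is written out.
        config_element "${element//_/.}" "`config-get $element`"
    done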

Thanks!

Gary

Marco Ceppi (marcoceppi) wrote:

I'm strongly opposed to changes that break backwards compatibility. However, I'm letting this one through, since you'd need juju < 0.7 to successfully deploy and change config for this charm; given that those releases are all deprecated, it's recommended that people move forward. Since you can't change config options for this charm in juju-core, anyone upgrading won't be affected.

That being said, I'd like to state that all updates to charms must maintain backwards compatibility. Shortcomings, bugs, and differences between "pyjuju" and "gojuju" shouldn't be addressed by changing (and breaking) charms.

review: Approve

Preview Diff

=== modified file 'config.yaml'
--- config.yaml 2012-06-07 13:03:51 +0000
+++ config.yaml 2013-10-15 19:16:27 +0000
@@ -66,29 +66,29 @@
       .
       If you are also mixing MapReduce and DFS roles on the same units you need to
       take this into account as well (see README for more details).
 # Expert options See http://wiki.apache.org/hadoop/FAQ#How_well_does_Hadoop_scale.3F
 # for more details
-  dfs.namenode.handler.count:
+  dfs_namenode_handler_count:
     type: int
     default: 10
     description: |
       The number of server threads for the namenode. Increase this in larger
       deployments to ensure the namenode can cope with the number of datanodes
       that it has to deal with.
-  dfs.block.size:
+  dfs_block_size:
     type: int
     default: 67108864
     description: |
       The default block size for new files (default to 64MB). Increase this in
       larger deployments for better large data set performance.
-  io.file.buffer.size:
+  io_file_buffer_size:
     type: int
     default: 4096
     description: |
       The size of buffer for use in sequence files. The size of this buffer should
       probably be a multiple of hardware page size (4096 on Intel x86), and it
       determines how much data is buffered during read and write operations.
-  dfs.datanode.max.xcievers:
+  dfs_datanode_max_xcievers:
     type: int
     default: 4096
     description: |
@@ -97,13 +97,13 @@
       An Hadoop HDFS datanode has an upper bound on the number of files that it
       will serve at any one time. This defaults to 256 (which is low) in hadoop
       1.x - however this charm increases that to 4096.
-  mapred.reduce.parallel.copies:
+  mapred_reduce_parallel_copies:
     type: int
     default: 5
     description: |
       The default number of parallel transfers run by reduce during the
       copy(shuffle) phase.
-  mapred.child.java.opts:
+  mapred_child_java_opts:
     type: string
     default: -Xmx200m
     description: |
@@ -117,32 +117,32 @@
       .
       The configuration variable mapred.child.ulimit can be used to control
       the maximum virtual memory of the child processes.
-  io.sort.factor:
+  io_sort_factor:
     type: int
     default: 10
     description: |
       The number of streams to merge at once while sorting files. This
       determines the number of open file handles.
-  io.sort.mb:
+  io_sort_mb:
     type: int
     default: 100
     description: |
       The total amount of buffer memory to use while sorting files, in
       megabytes. By default, gives each merge stream 1MB, which should minimize
       seeks.
-  mapred.job.tracker.handler.count:
+  mapred_job_tracker_handler_count:
     type: int
     default: 10
     description: |
       The number of server threads for the JobTracker. This should be roughly
       4% of the number of tasktracker nodes.
-  tasktracker.http.threads:
+  tasktracker_http_threads:
    type: int
    default: 40
    description: |
      The number of worker threads that for the http server. This is used for
      map output fetching.
-  hadoop.dir.base:
+  hadoop_dir_base:
    type: string
    default: /var/lib/hadoop
    description: |
 
=== modified file 'hooks/hadoop-common'
--- hooks/hadoop-common 2012-05-03 22:12:08 +0000
+++ hooks/hadoop-common 2013-10-15 19:16:27 +0000
@@ -4,7 +4,7 @@
 
 configure_hosts () {
   private_address=`unit-get private-address`
-  # This is a horrible hack to ensure that 
+  # This is a horrible hack to ensure that
   # Java can resolve the hostname of the server to its
   # real IP address.
 
@@ -76,7 +76,7 @@
   case $hdfs_role in
     namenode)
       open-port 8020
-      open-port 50070 
+      open-port 50070
      ;;
    datanode)
      open-port 50010
@@ -98,13 +98,12 @@
   esac
 }
 
-MAPRED_CONFIG="mapred.reduce.parallel.copies
-mapred.child.java.opts
-io.sort.factor
-io.sort.mb
-mapred.job.tracker.handler.count
-tasktracker.http.threads
-"
+MAPRED_CONFIG="mapred_reduce_parallel_copies
+mapred_child_java_opts
+io_sort_factor
+io_sort_mb
+mapred_job_tracker_handler_count
+tasktracker_http_threads"
 
 CONFIG_FILES="/etc/hadoop/conf.juju/hdfs-site.xml
 /etc/hadoop/conf.juju/core-site.xml
@@ -143,9 +142,9 @@
     cp /dev/null /etc/hadoop/conf.juju/hdfs-site.xml
     cp /dev/null /etc/hadoop/conf.juju/core-site.xml
     cp /dev/null /etc/hadoop/conf.juju/mapred-site.xml
-    dotdee --setup /etc/hadoop/conf.juju/hdfs-site.xml 
+    dotdee --setup /etc/hadoop/conf.juju/hdfs-site.xml
     dotdee --setup /etc/hadoop/conf.juju/core-site.xml
-    dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml 
+    dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
     dotdee --setup /etc/hadoop/conf.juju/hadoop-env.sh
   fi
   # Configure Heap Size
@@ -159,16 +158,16 @@
   # Purge existing configuration
   rm -f $dir/1*-dfs.*
   config_element "dfs.name.dir" \
-    "`config-get hadoop.dir.base`/cache/hadoop/dfs/name" > \
+    "`config-get hadoop_dir_base`/cache/hadoop/dfs/name" > \
     $dir/10-dfs.name.dir
   config_element "dfs.namenode.handler.count" \
-    "`config-get dfs.namenode.handler.count`" > \
+    "`config-get dfs_namenode_handler_count`" > \
     $dir/11-dfs.namenode.handler.count
   config_element "dfs.block.size" i\
-    "`config-get dfs.block.size`" > \
+    "`config-get dfs_block_size`" > \
     $dir/12-dfs.block.size
   config_element "dfs.datanode.max.xcievers" \
-    "`config-get dfs.datanode.max.xcievers`" > \
+    "`config-get dfs_datanode_max_xcievers`" > \
     $dir/13-dfs.datanode.max.xcievers
   [ "`config-get hbase`" = "True" ] && \
     config_element "dfs.support.append" "true" > \
@@ -187,7 +186,7 @@
   counter=10
   for element in $MAPRED_CONFIG
   do
-    config_element "$element" "`config-get $element`" > \
+    config_element "${element//_/.}" "`config-get $element`" > \
       $dir/20-$counter-$element
     counter=`expr $counter + 1`
   done
@@ -196,15 +195,15 @@
   dir=`dotdee --dir /etc/hadoop/conf.juju/core-site.xml`
   config_basic $dir
   rm -f $dir/1*-*
-  config_element "hadoop.tmp.dir" "`config-get hadoop.dir.base`/cache/\${user.name}" > \
+  config_element "hadoop.tmp.dir" "`config-get hadoop_dir_base`/cache/\${user.name}" > \
     $dir/10-hadoop.tmp.dir
-  config_element "io.file.buffer.size" "`config-get io.file.buffer.size`" > \
+  config_element "io.file.buffer.size" "`config-get io_file_buffer_size`" > \
     $dir/11-io.file.buffer.size
   dotdee --update /etc/hadoop/conf.juju/core-site.xml || true
 }
 
 configure_tmp_dir_perms() {
-  dir=`config-get hadoop.dir.base`
+  dir=`config-get hadoop_dir_base`
   # Make sure the directory exists
   mkdir -p $dir/cache/hadoop
   # We don't want to do this recursively since we may be reinstalling, in which case
@@ -261,7 +260,7 @@
 
 install_packages () {
   case $1 in
-    namenode|datanode|secondarynamenode|jobtracker|tasktracker) 
+    namenode|datanode|secondarynamenode|jobtracker|tasktracker)
       juju-log "Installing extra packages for $1"
       apt-get -y install hadoop-$1
       ;;
@@ -295,19 +294,19 @@
 }
 
 # Hadoop Service Control Commands
-restart_hadoop () { 
+restart_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _restart_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
     _restart_ hadoop-$mapred_role || :
 }
-stop_hadoop () { 
+stop_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _stop_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
     _stop_ hadoop-$mapred_role || :
 }
-start_hadoop () { 
+start_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _start_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
@@ -380,7 +379,7 @@
     install_base_packages
     install_optional_packages
     configure_hadoop
-    configure_tmp_dir_perms 
+    configure_tmp_dir_perms
     ;;
   jobtracker-relation-joined)
     case $mapred_role in
@@ -454,7 +453,7 @@
     ;;
   namenode-relation-joined)
     case $hdfs_role in
-      unconfigured) 
+      unconfigured)
        juju-log "Configuring this unit as a namenode"
        hdfs_role="namenode"
        configure_role_relation $hdfs_role
@@ -513,7 +512,7 @@
     ;;
   datanode-relation-changed)
     case $hdfs_role in
-      unconfigured) 
+      unconfigured)
        ready=`relation-get ready`
        if [ -z "$ready" ]
        then
