Merge lp:~hazmat/charms/precise/hadoop/trunk into lp:~charmers/charms/precise/hadoop/trunk

Proposed by Gary Poster
Status: Merged
Merged at revision: 31
Proposed branch: lp:~hazmat/charms/precise/hadoop/trunk
Merge into: lp:~charmers/charms/precise/hadoop/trunk
Diff against target: 255 lines (+38/-39)
2 files modified
config.yaml (+13/-13)
hooks/hadoop-common (+25/-26)
To merge this branch: bzr merge lp:~hazmat/charms/precise/hadoop/trunk
Reviewer: Marco Ceppi (community)
Status: Approve
Review via email: mp+191278@code.launchpad.net

Commit message

Switch config option names from dots to underscores. This makes the charm work with Juju Core and the GUI.

Description of the change

This branch from Kapil includes Jeff Pihach's fix for the hadoop charm.

The hadoop charm uses dots in its config option names. This breaks the CLI (juju set will fail) and the GUI (deployment will fail). Rejecting option names that contain dots may be a conscious decision in the CLI; whether or not that is so, as a simple practical solution we are declaring dots inappropriate for config option names, until/unless changing this becomes a high enough priority for Juju Core and the GUI.
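
For example (illustrative only; the option name is from this charm and the value is arbitrary):

    juju set hadoop dfs.block.size=134217728    # rejected: the dotted option name breaks juju set
    juju set hadoop dfs_block_size=134217728    # accepted once the option is renamed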

The fix here is to convert the dots in the charm's config option names to underscores.
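
The hooks still write the dotted Hadoop property names into the generated XML; for the mapred options this is done by mapping the underscores back to dots with a bash substitution. A minimal sketch of the idea (not the actual hook code, which goes through config_element and dotdee):

    # Illustration only: charm option names use underscores,
    # while the generated Hadoop properties keep their dots.
    for element in io_sort_mb mapred_child_java_opts
    do
        name="${element//_/.}"             # io_sort_mb becomes io.sort.mb
        value=`config-get $element`        # config-get is only available inside a hook
        echo "<property><name>$name</name><value>$value</value></property>"
    done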

Thanks!

Gary

Marco Ceppi (marcoceppi) wrote :

I'm strongly opposed to changes that break backwards compatibility. However, I'm letting this one go through, since you'd need juju < 0.7 to successfully deploy and change config for this charm, and given that those releases are all deprecated, it's recommended that people move forward. Since you can't change config options for this charm in juju-core, anyone upgrading won't be affected.

That being said, I'd like to state that all updates to charms must maintain backwards compatibility. Shortcomings, bugs, and differences between "pyjuju" and "gojuju" shouldn't be addressed by changing (and breaking) charms.

review: Approve

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2012-06-07 13:03:51 +0000
3+++ config.yaml 2013-10-15 19:16:27 +0000
4@@ -66,29 +66,29 @@
5 .
6 If you are also mixing MapReduce and DFS roles on the same units you need to
7 take this into account as well (see README for more details).
8-# Expert options See http://wiki.apache.org/hadoop/FAQ#How_well_does_Hadoop_scale.3F
9+# Expert options See http://wiki.apache.org/hadoop/FAQ#How_well_does_Hadoop_scale.3F
10 # for more details
11- dfs.namenode.handler.count:
12+ dfs_namenode_handler_count:
13 type: int
14 default: 10
15 description: |
16 The number of server threads for the namenode. Increase this in larger
17 deployments to ensure the namenode can cope with the number of datanodes
18 that it has to deal with.
19- dfs.block.size:
20+ dfs_block_size:
21 type: int
22 default: 67108864
23 description: |
24- The default block size for new files (default to 64MB). Increase this in
25+ The default block size for new files (default to 64MB). Increase this in
26 larger deployments for better large data set performance.
27- io.file.buffer.size:
28+ io_file_buffer_size:
29 type: int
30 default: 4096
31 description: |
32 The size of buffer for use in sequence files. The size of this buffer should
33 probably be a multiple of hardware page size (4096 on Intel x86), and it
34 determines how much data is buffered during read and write operations.
35- dfs.datanode.max.xcievers:
36+ dfs_datanode_max_xcievers:
37 type: int
38 default: 4096
39 description: |
40@@ -97,13 +97,13 @@
41 An Hadoop HDFS datanode has an upper bound on the number of files that it
42 will serve at any one time. This defaults to 256 (which is low) in hadoop
43 1.x - however this charm increases that to 4096.
44- mapred.reduce.parallel.copies:
45+ mapred_reduce_parallel_copies:
46 type: int
47 default: 5
48 description: |
49 The default number of parallel transfers run by reduce during the
50 copy(shuffle) phase.
51- mapred.child.java.opts:
52+ mapred_child_java_opts:
53 type: string
54 default: -Xmx200m
55 description: |
56@@ -117,32 +117,32 @@
57 .
58 The configuration variable mapred.child.ulimit can be used to control
59 the maximum virtual memory of the child processes.
60- io.sort.factor:
61+ io_sort_factor:
62 type: int
63 default: 10
64 description: |
65 The number of streams to merge at once while sorting files. This
66 determines the number of open file handles.
67- io.sort.mb:
68+ io_sort_mb:
69 type: int
70 default: 100
71 description: |
72 The total amount of buffer memory to use while sorting files, in
73 megabytes. By default, gives each merge stream 1MB, which should minimize
74 seeks.
75- mapred.job.tracker.handler.count:
76+ mapred_job_tracker_handler_count:
77 type: int
78 default: 10
79 description: |
80 The number of server threads for the JobTracker. This should be roughly
81 4% of the number of tasktracker nodes.
82- tasktracker.http.threads:
83+ tasktracker_http_threads:
84 type: int
85 default: 40
86 description: |
87 The number of worker threads that for the http server. This is used for
88 map output fetching.
89- hadoop.dir.base:
90+ hadoop_dir_base:
91 type: string
92 default: /var/lib/hadoop
93 description: |
94
95=== modified file 'hooks/hadoop-common'
96--- hooks/hadoop-common 2012-05-03 22:12:08 +0000
97+++ hooks/hadoop-common 2013-10-15 19:16:27 +0000
98@@ -4,7 +4,7 @@
99
100 configure_hosts () {
101 private_address=`unit-get private-address`
102- # This is a horrible hack to ensure that
103+ # This is a horrible hack to ensure that
104 # Java can resolve the hostname of the server to its
105 # real IP address.
106
107@@ -76,7 +76,7 @@
108 case $hdfs_role in
109 namenode)
110 open-port 8020
111- open-port 50070
112+ open-port 50070
113 ;;
114 datanode)
115 open-port 50010
116@@ -98,13 +98,12 @@
117 esac
118 }
119
120-MAPRED_CONFIG="mapred.reduce.parallel.copies
121-mapred.child.java.opts
122-io.sort.factor
123-io.sort.mb
124-mapred.job.tracker.handler.count
125-tasktracker.http.threads
126-"
127+MAPRED_CONFIG="mapred_reduce_parallel_copies
128+mapred_child_java_opts
129+io_sort_factor
130+io_sort_mb
131+mapred_job_tracker_handler_count
132+tasktracker_http_threads"
133
134 CONFIG_FILES="/etc/hadoop/conf.juju/hdfs-site.xml
135 /etc/hadoop/conf.juju/core-site.xml
136@@ -143,9 +142,9 @@
137 cp /dev/null /etc/hadoop/conf.juju/hdfs-site.xml
138 cp /dev/null /etc/hadoop/conf.juju/core-site.xml
139 cp /dev/null /etc/hadoop/conf.juju/mapred-site.xml
140- dotdee --setup /etc/hadoop/conf.juju/hdfs-site.xml
141+ dotdee --setup /etc/hadoop/conf.juju/hdfs-site.xml
142 dotdee --setup /etc/hadoop/conf.juju/core-site.xml
143- dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
144+ dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
145 dotdee --setup /etc/hadoop/conf.juju/hadoop-env.sh
146 fi
147 # Configure Heap Size
148@@ -159,16 +158,16 @@
149 # Purge existing configuration
150 rm -f $dir/1*-dfs.*
151 config_element "dfs.name.dir" \
152- "`config-get hadoop.dir.base`/cache/hadoop/dfs/name" > \
153+ "`config-get hadoop_dir_base`/cache/hadoop/dfs/name" > \
154 $dir/10-dfs.name.dir
155 config_element "dfs.namenode.handler.count" \
156- "`config-get dfs.namenode.handler.count`" > \
157+ "`config-get dfs_namenode_handler_count`" > \
158 $dir/11-dfs.namenode.handler.count
159 config_element "dfs.block.size" i\
160- "`config-get dfs.block.size`" > \
161+ "`config-get dfs_block_size`" > \
162 $dir/12-dfs.block.size
163 config_element "dfs.datanode.max.xcievers" \
164- "`config-get dfs.datanode.max.xcievers`" > \
165+ "`config-get dfs_datanode_max_xcievers`" > \
166 $dir/13-dfs.datanode.max.xcievers
167 [ "`config-get hbase`" = "True" ] && \
168 config_element "dfs.support.append" "true" > \
169@@ -187,7 +186,7 @@
170 counter=10
171 for element in $MAPRED_CONFIG
172 do
173- config_element "$element" "`config-get $element`" > \
174+ config_element "${element//_/.}" "`config-get $element`" > \
175 $dir/20-$counter-$element
176 counter=`expr $counter + 1`
177 done
178@@ -196,15 +195,15 @@
179 dir=`dotdee --dir /etc/hadoop/conf.juju/core-site.xml`
180 config_basic $dir
181 rm -f $dir/1*-*
182- config_element "hadoop.tmp.dir" "`config-get hadoop.dir.base`/cache/\${user.name}" > \
183+ config_element "hadoop.tmp.dir" "`config-get hadoop_dir_base`/cache/\${user.name}" > \
184 $dir/10-hadoop.tmp.dir
185- config_element "io.file.buffer.size" "`config-get io.file.buffer.size`" > \
186+ config_element "io.file.buffer.size" "`config-get io_file_buffer_size`" > \
187 $dir/11-io.file.buffer.size
188 dotdee --update /etc/hadoop/conf.juju/core-site.xml || true
189 }
190
191 configure_tmp_dir_perms() {
192- dir=`config-get hadoop.dir.base`
193+ dir=`config-get hadoop_dir_base`
194 # Make sure the directory exists
195 mkdir -p $dir/cache/hadoop
196 # We don't want to do this recursively since we may be reinstalling, in which case
197@@ -261,7 +260,7 @@
198
199 install_packages () {
200 case $1 in
201- namenode|datanode|secondarynamenode|jobtracker|tasktracker)
202+ namenode|datanode|secondarynamenode|jobtracker|tasktracker)
203 juju-log "Installing extra packages for $1"
204 apt-get -y install hadoop-$1
205 ;;
206@@ -295,19 +294,19 @@
207 }
208
209 # Hadoop Service Control Commands
210-restart_hadoop () {
211+restart_hadoop () {
212 [ "$hdfs_role" != "unconfigured" ] && \
213 _restart_ hadoop-$hdfs_role || :
214 [ "$mapred_role" != "unconfigured" ] && \
215 _restart_ hadoop-$mapred_role || :
216 }
217-stop_hadoop () {
218+stop_hadoop () {
219 [ "$hdfs_role" != "unconfigured" ] && \
220 _stop_ hadoop-$hdfs_role || :
221 [ "$mapred_role" != "unconfigured" ] && \
222 _stop_ hadoop-$mapred_role || :
223 }
224-start_hadoop () {
225+start_hadoop () {
226 [ "$hdfs_role" != "unconfigured" ] && \
227 _start_ hadoop-$hdfs_role || :
228 [ "$mapred_role" != "unconfigured" ] && \
229@@ -380,7 +379,7 @@
230 install_base_packages
231 install_optional_packages
232 configure_hadoop
233- configure_tmp_dir_perms
234+ configure_tmp_dir_perms
235 ;;
236 jobtracker-relation-joined)
237 case $mapred_role in
238@@ -454,7 +453,7 @@
239 ;;
240 namenode-relation-joined)
241 case $hdfs_role in
242- unconfigured)
243+ unconfigured)
244 juju-log "Configuring this unit as a namenode"
245 hdfs_role="namenode"
246 configure_role_relation $hdfs_role
247@@ -513,7 +512,7 @@
248 ;;
249 datanode-relation-changed)
250 case $hdfs_role in
251- unconfigured)
252+ unconfigured)
253 ready=`relation-get ready`
254 if [ -z "$ready" ]
255 then
