Merge lp:~hazmat/charms/precise/hadoop/trunk into lp:~charmers/charms/precise/hadoop/trunk
Proposed by Gary Poster
Status: Merged
Merged at revision: 31
Proposed branch: lp:~hazmat/charms/precise/hadoop/trunk
Merge into: lp:~charmers/charms/precise/hadoop/trunk
Diff against target: 255 lines (+38/-39), 2 files modified
  config.yaml (+13/-13)
  hooks/hadoop-common (+25/-26)
To merge this branch: bzr merge lp:~hazmat/charms/precise/hadoop/trunk
Related bugs: none
Reviewer: Marco Ceppi (community)
Review status: Approve
Review via email: mp+191278@code.launchpad.net
Commit message
Switch config option names from dots to underscores. This makes the charm work with Juju Core and the GUI.
Description of the change
This branch from Kapil includes Jeff Pihach's fix for the hadoop charm.
The hadoop charm uses dots in its config option names. This breaks the CLI (juju set fails) and the GUI (deployment fails). Rejecting option names that contain dots may be a conscious decision in the CLI; whether or not it is, as a simple practical solution we are declaring dots inappropriate for config option names, until/unless changing this becomes a high enough priority for Juju Core and the GUI.
The fix here is to convert the dots in the option names to underscores, and to convert them back to Hadoop's dotted property names inside the hooks.
Thanks!
Gary
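For illustration, the renaming scheme is purely mechanical, which is what makes it safe. A minimal bash sketch of the mapping in both directions (the to_option and to_property helper names are hypothetical, not part of the charm, and the ${var//pattern/replacement} substitution assumes bash):

    #!/bin/bash
    # Hypothetical helpers, not charm code: charm option names cannot
    # contain dots, so a Hadoop property name such as "dfs.block.size"
    # is exposed to Juju as "dfs_block_size" and mapped back later.
    to_option ()   { echo "${1//./_}"; }   # dots -> underscores
    to_property () { echo "${1//_/.}"; }   # underscores -> dots

    to_option "dfs.block.size"     # prints: dfs_block_size
    to_property "dfs_block_size"   # prints: dfs.block.size

The reverse mapping is only unambiguous because none of the affected Hadoop property names contain underscores themselves; that is what lets the hooks recover the dotted names.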
Preview Diff
=== modified file 'config.yaml'
--- config.yaml 2012-06-07 13:03:51 +0000
+++ config.yaml 2013-10-15 19:16:27 +0000
@@ -66,29 +66,29 @@
       .
       If you are also mixing MapReduce and DFS roles on the same units you need to
       take this into account as well (see README for more details).
   # Expert options See http://wiki.apache.org/hadoop/FAQ#How_well_does_Hadoop_scale.3F
   # for more details
-  dfs.namenode.handler.count:
+  dfs_namenode_handler_count:
     type: int
     default: 10
     description: |
       The number of server threads for the namenode. Increase this in larger
       deployments to ensure the namenode can cope with the number of datanodes
       that it has to deal with.
-  dfs.block.size:
+  dfs_block_size:
     type: int
     default: 67108864
     description: |
       The default block size for new files (default to 64MB). Increase this in
       larger deployments for better large data set performance.
-  io.file.buffer.size:
+  io_file_buffer_size:
     type: int
     default: 4096
     description: |
       The size of buffer for use in sequence files. The size of this buffer should
       probably be a multiple of hardware page size (4096 on Intel x86), and it
       determines how much data is buffered during read and write operations.
-  dfs.datanode.max.xcievers:
+  dfs_datanode_max_xcievers:
     type: int
     default: 4096
     description: |
@@ -97,13 +97,13 @@
       An Hadoop HDFS datanode has an upper bound on the number of files that it
       will serve at any one time. This defaults to 256 (which is low) in hadoop
       1.x - however this charm increases that to 4096.
-  mapred.reduce.parallel.copies:
+  mapred_reduce_parallel_copies:
     type: int
     default: 5
     description: |
       The default number of parallel transfers run by reduce during the
       copy(shuffle) phase.
-  mapred.child.java.opts:
+  mapred_child_java_opts:
     type: string
     default: -Xmx200m
     description: |
@@ -117,32 +117,32 @@
       .
       The configuration variable mapred.child.ulimit can be used to control
       the maximum virtual memory of the child processes.
-  io.sort.factor:
+  io_sort_factor:
     type: int
     default: 10
     description: |
       The number of streams to merge at once while sorting files. This
       determines the number of open file handles.
-  io.sort.mb:
+  io_sort_mb:
     type: int
     default: 100
     description: |
       The total amount of buffer memory to use while sorting files, in
       megabytes. By default, gives each merge stream 1MB, which should minimize
       seeks.
-  mapred.job.tracker.handler.count:
+  mapred_job_tracker_handler_count:
     type: int
     default: 10
     description: |
       The number of server threads for the JobTracker. This should be roughly
       4% of the number of tasktracker nodes.
-  tasktracker.http.threads:
+  tasktracker_http_threads:
     type: int
     default: 40
     description: |
       The number of worker threads that for the http server. This is used for
       map output fetching.
-  hadoop.dir.base:
+  hadoop_dir_base:
     type: string
     default: /var/lib/hadoop
     description: |

=== modified file 'hooks/hadoop-common'
--- hooks/hadoop-common 2012-05-03 22:12:08 +0000
+++ hooks/hadoop-common 2013-10-15 19:16:27 +0000
@@ -4,7 +4,7 @@

 configure_hosts () {
   private_address=`unit-get private-address`
   # This is a horrible hack to ensure that
   # Java can resolve the hostname of the server to its
   # real IP address.

@@ -76,7 +76,7 @@
   case $hdfs_role in
     namenode)
       open-port 8020
       open-port 50070
       ;;
     datanode)
       open-port 50010
@@ -98,13 +98,12 @@
   esac
 }

-MAPRED_CONFIG="mapred.reduce.parallel.copies
-mapred.child.java.opts
-io.sort.factor
-io.sort.mb
-mapred.job.tracker.handler.count
-tasktracker.http.threads
-"
+MAPRED_CONFIG="mapred_reduce_parallel_copies
+mapred_child_java_opts
+io_sort_factor
+io_sort_mb
+mapred_job_tracker_handler_count
+tasktracker_http_threads"

 CONFIG_FILES="/etc/hadoop/conf.juju/hdfs-site.xml
 /etc/hadoop/conf.juju/core-site.xml
@@ -143,9 +142,9 @@
   cp /dev/null /etc/hadoop/conf.juju/hdfs-site.xml
   cp /dev/null /etc/hadoop/conf.juju/core-site.xml
   cp /dev/null /etc/hadoop/conf.juju/mapred-site.xml
   dotdee --setup /etc/hadoop/conf.juju/hdfs-site.xml
   dotdee --setup /etc/hadoop/conf.juju/core-site.xml
   dotdee --setup /etc/hadoop/conf.juju/mapred-site.xml
   dotdee --setup /etc/hadoop/conf.juju/hadoop-env.sh
 fi
 # Configure Heap Size
@@ -159,16 +158,16 @@
   # Purge existing configuration
   rm -f $dir/1*-dfs.*
   config_element "dfs.name.dir" \
-    "`config-get hadoop.dir.base`/cache/hadoop/dfs/name" > \
+    "`config-get hadoop_dir_base`/cache/hadoop/dfs/name" > \
     $dir/10-dfs.name.dir
   config_element "dfs.namenode.handler.count" \
-    "`config-get dfs.namenode.handler.count`" > \
+    "`config-get dfs_namenode_handler_count`" > \
     $dir/11-dfs.namenode.handler.count
   config_element "dfs.block.size" i\
-    "`config-get dfs.block.size`" > \
+    "`config-get dfs_block_size`" > \
     $dir/12-dfs.block.size
   config_element "dfs.datanode.max.xcievers" \
-    "`config-get dfs.datanode.max.xcievers`" > \
+    "`config-get dfs_datanode_max_xcievers`" > \
     $dir/13-dfs.datanode.max.xcievers
   [ "`config-get hbase`" = "True" ] && \
     config_element "dfs.support.append" "true" > \
@@ -187,7 +186,7 @@
   counter=10
   for element in $MAPRED_CONFIG
   do
-    config_element "$element" "`config-get $element`" > \
+    config_element "${element//_/.}" "`config-get $element`" > \
       $dir/20-$counter-$element
     counter=`expr $counter + 1`
   done
@@ -196,15 +195,15 @@
   dir=`dotdee --dir /etc/hadoop/conf.juju/core-site.xml`
   config_basic $dir
   rm -f $dir/1*-*
-  config_element "hadoop.tmp.dir" "`config-get hadoop.dir.base`/cache/\${user.name}" > \
+  config_element "hadoop.tmp.dir" "`config-get hadoop_dir_base`/cache/\${user.name}" > \
     $dir/10-hadoop.tmp.dir
-  config_element "io.file.buffer.size" "`config-get io.file.buffer.size`" > \
+  config_element "io.file.buffer.size" "`config-get io_file_buffer_size`" > \
     $dir/11-io.file.buffer.size
   dotdee --update /etc/hadoop/conf.juju/core-site.xml || true
 }

 configure_tmp_dir_perms() {
-  dir=`config-get hadoop.dir.base`
+  dir=`config-get hadoop_dir_base`
   # Make sure the directory exists
   mkdir -p $dir/cache/hadoop
   # We don't want to do this recursively since we may be reinstalling, in which case
@@ -261,7 +260,7 @@

 install_packages () {
   case $1 in
     namenode|datanode|secondarynamenode|jobtracker|tasktracker)
       juju-log "Installing extra packages for $1"
       apt-get -y install hadoop-$1
       ;;
@@ -295,19 +294,19 @@
 }

 # Hadoop Service Control Commands
 restart_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _restart_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
     _restart_ hadoop-$mapred_role || :
 }
 stop_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _stop_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
     _stop_ hadoop-$mapred_role || :
 }
 start_hadoop () {
   [ "$hdfs_role" != "unconfigured" ] && \
     _start_ hadoop-$hdfs_role || :
   [ "$mapred_role" != "unconfigured" ] && \
@@ -380,7 +379,7 @@
       install_base_packages
       install_optional_packages
       configure_hadoop
       configure_tmp_dir_perms
       ;;
     jobtracker-relation-joined)
       case $mapred_role in
@@ -454,7 +453,7 @@
       ;;
     namenode-relation-joined)
       case $hdfs_role in
         unconfigured)
           juju-log "Configuring this unit as a namenode"
           hdfs_role="namenode"
           configure_role_relation $hdfs_role
@@ -513,7 +512,7 @@
       ;;
     datanode-relation-changed)
       case $hdfs_role in
         unconfigured)
           ready=`relation-get ready`
           if [ -z "$ready" ]
           then
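The one behavioral subtlety above is the mapred-site.xml loop: MAPRED_CONFIG now lists the underscore option names, and the bash substitution ${element//_/.} restores the dotted property name that Hadoop expects before the element is written out. A standalone sketch of that loop; config-get is a Juju hook tool that only exists inside a hook environment, so it is stubbed here, and this config_element is a simplified stand-in for the charm's helper:

    #!/bin/bash
    # Stubs so the sketch runs outside a Juju hook environment.
    config-get () { echo "value-of-$1"; }
    config_element () {
        printf '<property><name>%s</name><value>%s</value></property>\n' "$1" "$2"
    }

    MAPRED_CONFIG="mapred_reduce_parallel_copies
    io_sort_factor
    io_sort_mb"

    for element in $MAPRED_CONFIG
    do
        # ${element//_/.} maps the option name back to Hadoop's dotted
        # form, e.g. io_sort_factor -> io.sort.factor.
        config_element "${element//_/.}" "`config-get $element`"
    done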
I'm strongly opposed to changes that break backwards compatibility. However, I'm letting this one go through, since you'd need juju < 0.7 to successfully deploy and change config for this charm, and those releases are all deprecated, so it's recommended that people move forward. Since you can't change config options for this charm in juju-core, anyone upgrading won't be affected.
That being said, I'd like to state that all updates to charms must maintain backwards compatibility. Shortcomings, bugs, and differences between "pyjuju" and "gojuju" shouldn't be addressed by changing (and breaking) charms.