Merge lp:~admcleod/charms/trusty/apache-hadoop-hdfs-master/hadoop-upgrade into lp:charms/trusty/apache-hadoop-hdfs-master
Proposed by: Andrew McLeod
Status: Needs review
Proposed branch: lp:~admcleod/charms/trusty/apache-hadoop-hdfs-master/hadoop-upgrade
Merge into: lp:charms/trusty/apache-hadoop-hdfs-master
Diff against target: 518 lines (+477/-0), 6 files modified: README.md (+135/-0), actions.yaml (+31/-0), actions/hadoop-post-upgrade (+87/-0), actions/hadoop-pre-upgrade (+138/-0), actions/hadoop-upgrade (+78/-0), resources.yaml (+8/-0)
To merge this branch: bzr merge lp:~admcleod/charms/trusty/apache-hadoop-hdfs-master/hadoop-upgrade
Related bugs:

Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
charmers | Pending | | |

Review via email: mp+275420@code.launchpad.net
Commit message
Description of the change
Added hadoop-pre-upgrade, hadoop-upgrade and hadoop-post-upgrade actions; modified resources.yaml.
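A minimal sketch of how these actions are meant to be driven, using the unit names from the example in the README change below (which assumes the apache core batch processing bundle):

    juju action do yarn-master/0 stop-yarn
    juju action do hdfs-master/0 hadoop-pre-upgrade version=2.7.1
    juju action do compute-slave/0 hadoop-upgrade version=2.7.1 rollback=false   # repeat on every unit running hadoop
    juju action do hdfs-master/0 hadoop-post-upgrade version=2.7.1 rollback=false
    juju action do yarn-master/0 start-yarn

See the new "Upgrading" section in README.md for the full procedure, including monitoring via 'juju action fetch', rollback, and finalizing the upgrade.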
Unmerged revisions
- 96. By Andrew McLeod: removed some unnecessary comments
- 95. By Andrew McLeod: updated README.md
- 94. By Andrew McLeod: wait for hdfs to come out of safemode before deleting checkpoint file
- 93. By Andrew McLeod: trivial text modifications
- 92. By Andrew McLeod: hadoop upgrade actions additions etc.
- 91. By Andrew McLeod: dist.yaml 2.7.1 - unstable (untested) hadoop
Preview Diff
1 | === modified file 'README.md' |
2 | --- README.md 2015-10-07 20:35:26 +0000 |
3 | +++ README.md 2015-10-22 16:54:30 +0000 |
4 | @@ -74,6 +74,141 @@ |
5 | running jobs when enabling monitoring. |
6 | |
7 | |
8 | +## Upgrading |
9 | + |
10 | +This charm supports upgrading a non-HA hadoop cluster from 2.4.1 to 2.7.1 and |
11 | +back, using the hdfs -upgrade flag via juju actions. Every unit that runs a |
12 | +hadoop service must be upgraded - if the hadoop version on a particular unit |
13 | +differs from the version on a unit it is related to, the relation between |
14 | +them will fail. |
15 | + |
16 | +The process to upgrade hadoop is: |
17 | + |
18 | + 1. stop yarn |
19 | + 2. stop all datanodes (and secondary namenode if necessary) |
20 | + 3. upgrade the hadoop software on all related nodes |
21 | + 4. upgrade the namenode |
22 | + 5. restart the namenode, secondary namenode and datanodes. |
23 | + |
24 | +Some checks have been built into the actions, but you should review the actions |
25 | +and validate each step if you plan to run this against a mission critical |
26 | +production system, especially the backup and rollback options. |
27 | + |
28 | +**Warning:** These actions do not copy or merge configuration for the new |
29 | +hadoop version; they leave the existing configuration in place. It is up to |
30 | +the hadoop administrator to make any necessary configuration changes. |
31 | + |
32 | +### Upgrade Actions |
33 | + |
34 | +#### hadoop-pre-upgrade |
35 | + |
36 | + juju action do namenode/0 hadoop-pre-upgrade version=2.7.1 |
37 | + |
38 | + * Should only be run on the namenode |
39 | + * Version param must be supplied - version=2.4.1 or version=2.7.1 |
40 | + * Performs HDFS checks twice and checks there are no differences / no jobs |
41 | + still running |
42 | + * Performs fsck on HDFS |
43 | + * Attempts to back up edit log and fs image from /tmp/hadoop-hdfs |
44 | + * Cycles namenode dfs daemons to update and create new checkpoint |
45 | + * Stops namenode dfs daemons |
46 | + |
47 | + |
48 | +#### hadoop-upgrade |
49 | + |
50 | + juju action do othernode/0 hadoop-upgrade version=2.7.1 rollback=false |
51 | + |
52 | + * Run on any unit with hadoop software installed *AFTER* hadoop-pre-upgrade on |
53 | + the namenode so that any running daemons are stopped |
54 | + * Services with different versions of hadoop will not work together |
55 | + * Version param must be supplied - version=2.4.1 or version=2.7.1 |
56 | + * Rollback param must be supplied - rollback=[true|false]. If rollback is true, |
57 | + symlink is recreated, no new files are created. |
58 | + |
59 | + |
60 | +#### hadoop-post-upgrade |
61 | + |
62 | + juju action do namenode/0 hadoop-post-upgrade version=2.7.1 rollback=false finalize=false |
63 | + |
64 | + * Should only be run on a namenode |
65 | + * Rollback param must be supplied - rollback=[true|false]. If rollback is true, |
66 | + symlink is recreated, no new files are created. |
67 | + * Starts HDFS daemon with -upgrade or -rollback switch to convert fs to new |
68 | + version |
69 | + * Performs fsck on HDFS |
70 | + * Creates namespace log and compares to pre-upgrade logs |
71 | + * Creates dfsadmin -report log and compares to pre-upgrade logs |
72 | + * Takes optional param finalize=[true|false] - if false, does nothing. If true, |
73 | + finalizes HDFS upgrade (removes backup) - ***if HDFS is not finalized, deleted |
74 | + files remain (hidden) on the fs and do not free up space - rollback is no |
75 | + longer possible after finalize*** |
76 | + |
77 | + |
78 | +#### Example upgrade steps |
79 | + |
80 | +***Do not copy & paste*** the following unless you are using the apache core |
81 | +batch processing bundle - it is intended as an example of how to upgrade a |
82 | +running hadoop cluster. All of these upgrade actions provide action responses |
83 | +(visible with 'juju action fetch') as well as juju extended status updates. |
84 | + |
85 | + juju action do yarn-master/0 stop-yarn |
86 | + |
87 | + juju action fetch --wait 0 action-id |
88 | + |
89 | + juju action do hdfs-master/0 hadoop-pre-upgrade version=2.7.1 |
90 | + |
91 | + watch 'juju status --format=tabular' |
92 | + |
93 | + juju action do secondary-namenode/0 hadoop-upgrade version=2.7.1 rollback=false& |
94 | + juju action do plugin/0 hadoop-upgrade version=2.7.1 rollback=false& |
95 | + juju action do compute-slave/0 hadoop-upgrade version=2.7.1 rollback=false& |
96 | + juju action do compute-slave/1 hadoop-upgrade version=2.7.1 rollback=false& |
97 | + juju action do compute-slave/2 hadoop-upgrade version=2.7.1 rollback=false& |
98 | + juju action do yarn-master/0 hadoop-upgrade version=2.7.1 rollback=false& |
99 | + juju action do hdfs-master/0 hadoop-upgrade version=2.7.1 rollback=false& |
100 | + |
101 | + watch 'juju status --format=tabular' |
102 | + |
103 | + juju action do hdfs-master/0 hadoop-post-upgrade rollback=false version=2.7.1 |
104 | + |
105 | + watch 'juju status --format=tabular' |
106 | + |
107 | + juju action do yarn-master/0 start-yarn |
108 | + |
109 | + Test cluster functionality: |
110 | + |
111 | + juju action do plugin/0 terasort |
112 | + |
113 | +*Once you have tested your upgrade, you must choose whether to finalize or |
114 | +rollback HDFS.* |
115 | + |
116 | +#### Finalize instructions |
117 | + |
118 | +NOTE: If you do not finalize HDFS after an upgrade, files deleted from HDFS |
119 | +will remain hidden on the file system, and you will not reclaim the space they |
120 | +used. Once you have decided you are happy with the upgraded version, you must |
121 | +either finalize or roll back to the previous version. |
122 | + |
123 | +After completing the upgrade procedure: |
124 | + |
125 | + juju action do namenode/0 hadoop-post-upgrade version=2.7.1 rollback=false |
126 | + finalize=true |
127 | + |
128 | +#### Rollback instructions |
129 | + |
130 | +WARNING: Rollback removes any files created since the upgrade - this means |
131 | +reverting to the previous system state, not just a previous version of software. |
132 | + |
133 | +* Follow the same procedure you used to upgrade, but make sure |
134 | +version='previously installed version' (this will not work for an older version |
135 | +which was not previously installed) |
136 | +* For hadoop-upgrade, the rollback flag is optional, and specifies whether to |
137 | +use the existing 'previous ver' directory, or unpack the source again |
138 | +* For hadoop-post-upgrade, the rollback flag is mandatory as it forces HDFS to |
139 | +roll back to the previous version and image |
140 | + |
141 | + juju action do namenode/0 hadoop-post-upgrade version=2.4.1 rollback=true |
142 | + |
143 | ## Deploying in Network-Restricted Environments |
144 | |
145 | The Apache Hadoop charms can be deployed in environments with limited network |
146 | |
147 | === modified file 'actions.yaml' |
148 | --- actions.yaml 2015-08-17 18:12:14 +0000 |
149 | +++ actions.yaml 2015-10-22 16:54:30 +0000 |
150 | @@ -6,3 +6,34 @@ |
151 | description: All of the HDFS processes can be restarted with this Juju action. |
152 | smoke-test: |
153 | description: Verify that HDFS is working by creating and removing a small file. |
154 | +hadoop-pre-upgrade: |
155 | + description: prepare hadoop cluster for upgrade |
156 | + params: |
157 | + version: |
158 | + type: string |
159 | +      description: destination hadoop version X.X.X |
160 | + required: [version] |
161 | +hadoop-upgrade: |
162 | + description: upgrade (or roll back) hadoop to specified version |
163 | + params: |
164 | + version: |
165 | + type: string |
166 | + description: destination hadoop version X.X.X |
167 | + rollback: |
168 | + type: boolean |
169 | + description: true or false - defaults to false |
170 | + required: [version, rollback] |
171 | +hadoop-post-upgrade: |
172 | +  description: complete upgrade on hdfs-master node |
173 | + params: |
174 | + version: |
175 | + type: string |
176 | + description: destination hadoop version X.X.X |
177 | + finalize: |
178 | + type: boolean |
179 | + default: false |
180 | + description: finalizes hadoop upgrade - wipes backup image and allows real file deletion |
181 | + rollback: |
182 | + type: boolean |
183 | + description: true or false |
184 | + required: [version, rollback] |
185 | |
186 | === added file 'actions/hadoop-post-upgrade' |
187 | --- actions/hadoop-post-upgrade 1970-01-01 00:00:00 +0000 |
188 | +++ actions/hadoop-post-upgrade 2015-10-22 16:54:30 +0000 |
189 | @@ -0,0 +1,87 @@ |
190 | +#!/bin/bash |
191 | +export SAVEPATH=$PATH |
192 | +. /etc/environment |
193 | +export PATH=$PATH:$SAVEPATH |
194 | +export JAVA_HOME |
195 | + |
196 | +current_hadoop_ver=`/usr/lib/hadoop/bin/hadoop version|head -n1|awk '{print $2}'` |
197 | +new_hadoop_ver=`action-get version` |
198 | +rollback=`action-get rollback` |
199 | +finalize=`action-get finalize` |
200 | + |
201 | +# Following condition may seem unintuitive - but at this point, current_hadoop_ver should match new_hadoop_ver |
202 | +# as current_hadoop_ver should be the new version of hadoop software |
203 | +if [ "$new_hadoop_ver" != "$current_hadoop_ver" ] ; then |
204 | + action-set result="Version specified is not installed on this system, aborting" |
205 | + action-fail "Version specified is not installed on this system, aborting" |
206 | + exit 1 |
207 | +fi |
208 | + |
209 | +if [ "${finalize}" == "True" ] ; then |
210 | + cd ${HADOOP_HOME} |
211 | + su hdfs -c "bin/hdfs dfsadmin -finalizeUpgrade" |
212 | + action-set finalized="true" |
213 | + status-set active "Ready - upgrade finalized, rollback unavailable" |
214 | + exit 0 |
215 | +fi |
216 | + |
217 | +if [ "${rollback}" == "True" ] ; then |
218 | + cd ${HADOOP_HOME} |
219 | + status-set maintenance "Performing namenode rollback..." |
220 | + |
221 | +    # following workaround for rollback from 2.7.1 to 2.4.1: |
222 | + # echo y to prompt, rather than using start-dfs.sh -rollback due to |
223 | + # https://issues.apache.org/jira/browse/HDFS-8226 |
224 | + su hdfs -c "echo y|bin/hdfs namenode -rollback" |
225 | + |
226 | + status-set maintenance "Performing datanode rollback..." |
227 | + su hdfs -c "sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script 'hdfs' start datanode -rollback" |
228 | + su hdfs -c "sbin/start-dfs.sh" |
229 | + procedure="rollback" |
230 | +fi |
231 | + |
232 | +if [ "${rollback}" == "False" ] ; then |
233 | + cd ${HADOOP_HOME} |
234 | + status-set maintenance "starting dfs in upgrade mode" |
235 | + su hdfs -c "sbin/start-dfs.sh -upgrade" |
236 | + procedure="upgrade" |
237 | +fi |
238 | + |
239 | +cd ${HADOOP_HOME} |
240 | + |
241 | +status-set maintenance "creating dfsadmin report" |
242 | +su hdfs -c "bin/hdfs dfsadmin -report | grep -v DFS|grep -v contact|grep -v Capacity > /tmp/dfs-v-new-report-1.log " |
243 | +diff /tmp/dfs-v-old-report-2.log /tmp/dfs-v-new-report-1.log |
244 | +if [ ! $? -eq 0 ] ; then |
245 | + action-set newhadoop.diff-report="failed diff - check /tmp/dfs-v-*-report*" |
246 | +fi |
247 | + |
248 | +status-set maintenance "creating fs log" |
249 | +su hdfs -c "bin/hdfs dfs -ls -R / > /tmp/dfs-v-new-lsr-0.log" |
250 | +diff /tmp/dfs-v-new-lsr-0.log /tmp/dfs-v-old-lsr-2.log |
251 | +if [ ! $? -eq 0 ] ; then |
252 | + action-set newhadoop.diff-lsr="fail" |
253 | +fi |
254 | + |
255 | +status-set maintenance "checking fs" |
256 | +su hdfs -c "bin/hdfs fsck / -files -blocks -locations > /tmp/dfs-v-new-fsck-1.log" |
257 | +diff /tmp/dfs-v-old-fsck-1.log /tmp/dfs-v-new-fsck-1.log |
258 | +if [ ! $? -eq 0 ] ; then |
259 | + action-set newhadoop.diff-fsck="fail" |
260 | +fi |
261 | + |
262 | +hadoop_version=`/usr/lib/hadoop/bin/hadoop version|head -n1|awk '{print $2}'` |
263 | + |
264 | +status-set maintenance "looking in hdfs for ${procedure} checkpoint files..." |
265 | +if ! su hdfs -c "bin/hdfs dfs -test -e hdfs:///user/ubuntu/checkpoint_hadoop" ; then |
266 | + action-set result="${procedure} checkpoint file hdfs:///user/ubuntu/checkpoint_hadoop not found - check fs manually" |
267 | + status-set active "Unknown - see action results and check fs manually" |
268 | + action-set hadoop.version="${hadoop_version}" |
269 | +else |
270 | + status-set maintenance "deleting checkpoint file" |
271 | + sleep 30 |
272 | + su hdfs -c "bin/hdfs dfs -rm hdfs:///user/ubuntu/checkpoint_hadoop" |
273 | + action-set newhadoop.running="true" |
274 | + action-set hadoop.version="${hadoop_version}" |
275 | + status-set active "Ready - hadoop version ${hadoop_version} ${procedure} complete" |
276 | +fi |
277 | |
278 | === added file 'actions/hadoop-pre-upgrade' |
279 | --- actions/hadoop-pre-upgrade 1970-01-01 00:00:00 +0000 |
280 | +++ actions/hadoop-pre-upgrade 2015-10-22 16:54:30 +0000 |
281 | @@ -0,0 +1,138 @@ |
282 | +#!/bin/bash |
283 | +export JUJU_PATH=$PATH |
284 | +. /etc/environment |
285 | +export PATH=$PATH:$JUJU_PATH |
286 | +export JAVA_HOME |
287 | + |
288 | +rm -rf /tmp/dfs-v* |
289 | +cd "$HADOOP_HOME" |
290 | + |
291 | +current_hadoop_ver=`/usr/lib/hadoop/bin/hadoop version|head -n1|awk '{print $2}'` |
292 | +new_hadoop_ver=`action-get version` |
293 | + |
294 | +function clean_checkpoint { |
295 | + su hdfs -c "hdfs dfs -rm hdfs:///user/ubuntu/checkpoint_hadoop" |
296 | +} |
297 | + |
298 | +status-set maintenance "creating upgrade checkpoint files in hdfs:///user/ubuntu/" |
299 | +su hdfs -c "date > /tmp/checkpoint_hadoop" |
300 | +if ! su hdfs -c "bin/hdfs dfs -put /tmp/checkpoint_hadoop hdfs:///user/ubuntu/" ; then |
301 | + action-fail "checkpoint file(s) could not be created, hdfs not running? Aborting upgrade" |
302 | + exit 1 |
303 | +fi |
304 | + |
305 | +if [ "$new_hadoop_ver" == "$current_hadoop_ver" ] ; then |
306 | + action-set result="Same version already installed, aborting" |
307 | + action-fail "Same version already installed" |
308 | + clean_checkpoint |
309 | + exit 1 |
310 | +fi |
311 | + |
312 | +status-set maintenance "checking hdfs root..." |
313 | +su hdfs -c "bin/hdfs fsck / -files -blocks -locations > /tmp/dfs-v-old-fsck-1.log" |
314 | +exitcode=$? |
315 | +action-set exitcode.fsck-hdfs-1=$exitcode |
316 | +if [ ! $exitcode -eq 0 ] ; then |
317 | + action-fail "fsck exitcode not zero - check manually" |
318 | + clean_checkpoint |
319 | + exit 1 |
320 | +fi |
321 | + |
322 | +status-set maintenance "logging hdfs namespace..." |
323 | +su hdfs -c "bin/hdfs dfs -ls -R / > /tmp/dfs-v-old-lsr-1.log" |
324 | +exitcode=$? |
325 | +action-set exitcode.dfsls-1=$exitcode |
326 | +if [ ! $exitcode -eq 0 ] ; then |
327 | + action-fail "could not ls -R dfs - check manually" |
328 | + clean_checkpoint |
329 | + exit 1 |
330 | +fi |
331 | + |
332 | +status-set maintenance "creating dfsadmin report..." |
333 | +su hdfs -c "bin/hdfs dfsadmin -report | grep -v DFS|grep -v contact|grep -v Capacity > /tmp/dfs-v-old-report-1.log" |
334 | +exitcode=$? |
335 | +action-set exitcode.dfsadmin-report=$exitcode |
336 | +if [ ! $exitcode -eq 0 ] ; then |
337 | + action-fail "Error running dfsadmin -report - check manually" |
338 | + clean_checkpoint |
339 | + exit 1 |
340 | +fi |
341 | + |
342 | +status-set maintenance "checking hdfs namespace log..." |
343 | +su hdfs -c "bin/hdfs dfs -ls -R / > /tmp/dfs-v-old-lsr-2.log" |
344 | +diff /tmp/dfs-v-old-lsr-1.log /tmp/dfs-v-old-lsr-2.log > /dev/null |
345 | +if [ ! $? -eq 0 ] ; then |
346 | + action-fail "dfs ls -R - differences found (/tmp/dfs-v-old-lsr-?.log), aborting pre-upgrade" |
347 | + clean_checkpoint |
348 | + exit 1 |
349 | + status-set active "dfs ls -R - differences found - aborting pre-upgrade" |
350 | +fi |
351 | +action-set dfsls.diff="No differences found between 1st and 2nd dfs ls -R runs" |
352 | + |
353 | +status-set maintenance "checking dfsadmin report log..." |
354 | +su hdfs -c "bin/hdfs dfsadmin -report | grep -v DFS|grep -v contact|grep -v Capacity > /tmp/dfs-v-old-report-2.log" |
355 | +diff /tmp/dfs-v-old-report-1.log /tmp/dfs-v-old-report-2.log > /dev/null |
356 | +if [ ! $? -eq 0 ] ; then |
357 | + status-set active "dfsadmin report - differences found (/tmp/dfs-v-old-report-?.log) aborting pre-upgrade" |
358 | + clean_checkpoint |
359 | + exit 1 |
360 | +fi |
361 | +action-set dfsadminreport.diff="No differences found between 1st and 2nd dfsadmin -report runs" |
362 | + |
363 | +status-set maintenance "backing up edit and fsimage files to /home/hdfs/backup" |
364 | +if [ ! -d /home/hdfs/backup/ ] ; then |
365 | + mkdir /home/hdfs/backup/ |
366 | +fi |
367 | + |
368 | +if [ -d /tmp/hadoop-hdfs/dfs/namesecondary/current/ ] ; then |
369 | + cp /tmp/hadoop-hdfs/dfs/namesecondary/current/edits* /home/hdfs/backup/ |
370 | + exitcode=$? |
371 | + action-set exitcode.backup-edits=$exitcode |
372 | + if [ ! $exitcode -eq 0 ] ; then |
373 | + action-fail "Could not cp /tmp/hadoop-hdfs/dfs/namesecondary/current/edits* to /home/hdfs/backup/" |
374 | + clean_checkpoint |
375 | + exit 1 |
376 | + fi |
377 | + |
378 | + cp /tmp/hadoop-hdfs/dfs/namesecondary/current/fsimage* /home/hdfs/backup/ |
379 | + exitcode=$? |
380 | + action-set exitcode.backup-fsimage=$exitcode |
381 | + if [ ! $exitcode -eq 0 ] ; then |
382 | + action-fail "Could not cp /tmp/hadoop-hdfs/dfs/namesecondary/current/fsimage* to /home/hdfs/backup/" |
383 | + clean_checkpoint |
384 | + exit 1 |
385 | + fi |
386 | +fi |
387 | + |
388 | +status-set maintenance "cycling dfs to create namespace checkpoint for rollback" |
389 | +su hdfs -c "sbin/stop-dfs.sh" |
390 | +exitcode=$? |
391 | +action-set exitcode.cycledfs-1=$exitcode |
392 | +if [ ! $exitcode -eq 0 ] ; then |
393 | + action-fail "Could not stop dfs - pre-upgrade aborted" |
394 | + clean_checkpoint |
395 | + exit 1 |
396 | +fi |
397 | + |
398 | +su hdfs -c "sbin/start-dfs.sh" |
399 | +exitcode=$? |
400 | +action-set exitcode.cycledfs-2=$exitcode |
401 | +if [ ! $exitcode -eq 0 ] ; then |
402 | + action-fail "Could not start dfs - pre-upgrade aborted" |
403 | + clean_checkpoint |
404 | + exit 1 |
405 | +fi |
406 | + |
407 | +status-set maintenance "bringing DFS daemons down" |
408 | +su hdfs -c "sbin/stop-dfs.sh" |
409 | +exitcode=$? |
410 | +action-set exitcode.stop-dfs=$exitcode |
411 | +if [ ! $exitcode -eq 0 ] ; then |
412 | + action-fail "Could not stop dfs - pre-upgrade aborted" |
413 | + clean_checkpoint |
414 | + exit 1 |
415 | +fi |
416 | + |
417 | +action-set pre-upgrade="ready for upgrade" |
418 | +action-set next-step="run hadoop-upgrade on all other hadoop nodes before hdfs-master - see hdfs-master README.md" |
419 | +status-set maintenance "hadoop-pre-upgrade complete - run hadoop-upgrade on all hadoop nodes" |
420 | |
421 | === added file 'actions/hadoop-upgrade' |
422 | --- actions/hadoop-upgrade 1970-01-01 00:00:00 +0000 |
423 | +++ actions/hadoop-upgrade 2015-10-22 16:54:30 +0000 |
424 | @@ -0,0 +1,78 @@ |
425 | +#!/bin/bash |
426 | +export SAVEPATH=$PATH |
427 | +. /etc/environment |
428 | +export PATH=$PATH:$SAVEPATH |
429 | +export JAVA_HOME |
430 | + |
431 | +current_hadoop_ver=`/usr/lib/hadoop/bin/hadoop version|head -n1|awk '{print $2}'` |
432 | +new_hadoop_ver=`action-get version` |
433 | +cpu_arch=`lscpu|grep -i arch|awk '{print $2}'` |
434 | +rollback=`action-get rollback` |
435 | + |
436 | + |
437 | +if pgrep -f Dproc_namenode ; then |
438 | + action-set result="namenode process detected, upgrade aborted" |
439 | + status-set active "namenode process detected, upgrade aborted" |
440 | + exit 1 |
441 | +fi |
442 | + |
443 | +if [ "$new_hadoop_ver" == "$current_hadoop_ver" ] ; then |
444 | + action-set result="Same version already installed, aborting" |
445 | + action-fail "Same version already installed" |
446 | + exit 1 |
447 | +fi |
448 | + |
449 | +if [ "${rollback}" == "True" ] ; then |
450 | + if [ -d /usr/lib/hadoop-${new_hadoop_ver} ] ; then |
451 | + rm /usr/lib/hadoop |
452 | + ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop |
453 | + if [ -d /usr/lib/hadoop-${current_hadoop_ver}/logs ] ; then |
454 | + mv /usr/lib/hadoop-${current_hadoop_ver}/logs /usr/lib/hadoop/ |
455 | + fi |
456 | + action-set newhadoop.rollback="successfully rolled back" |
457 | + status-set active "Ready - rollback to ${new_hadoop_ver} complete - ready for hadoop-post-upgrade" |
458 | + exit 0 |
459 | + else |
460 | + action-set newhadoop.rollback="previous version not found, unpacking..." |
461 | + status-set active "previous version not found, unpacking..." |
462 | + fi |
463 | +fi |
464 | + |
465 | +status-set maintenance "Fetching hadoop-${new_hadoop_ver}-${cpu_arch}" |
466 | +juju-resources fetch hadoop-${new_hadoop_ver}-${cpu_arch} |
467 | +if [ ! $? -eq 0 ] ; then |
468 | + action-set newhadoop.fetch="fail" |
469 | + exit 1 |
470 | +fi |
471 | +action-set newhadoop.fetch="success" |
472 | + |
473 | +status-set maintenance "Verifying hadoop-${new_hadoop_ver}-${cpu_arch}" |
474 | +juju-resources verify hadoop-${new_hadoop_ver}-${cpu_arch} |
475 | +if [ ! $? -eq 0 ] ; then |
476 | + action-set newhadoop.verify="fail" |
477 | + exit 1 |
478 | +fi |
479 | +action-set newhadoop.verify="success" |
480 | + |
481 | +new_hadoop_path=`juju-resources resource_path hadoop-${new_hadoop_ver}-${cpu_arch}` |
482 | +if [ -h /usr/lib/hadoop ] ; then |
483 | + rm /usr/lib/hadoop |
484 | +fi |
485 | + |
486 | +mv /usr/lib/hadoop/ /usr/lib/hadoop-${current_hadoop_ver} |
487 | +ln -s /usr/lib/hadoop-${current_hadoop_ver}/ /usr/lib/hadoop |
488 | +current_hadoop_path=hadoop-${current_hadoop_ver} |
489 | + |
490 | +status-set maintenance "Extracting hadoop-${new_hadoop_ver}-${cpu_arch}" |
491 | +tar -zxvf ${new_hadoop_path} -C /usr/lib/ |
492 | +if [ $? -eq 0 ] ; then |
493 | + if [ -h /usr/lib/hadoop ] ; then |
494 | + rm /usr/lib/hadoop |
495 | + fi |
496 | + ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop |
497 | +fi |
498 | +if [ -d ${current_hadoop_path}/logs ] ; then |
499 | + mv ${current_hadoop_path}/logs ${new_hadoop_path}/ |
500 | +fi |
501 | +action-set result="complete" |
502 | +status-set maintenance "hadoop version ${new_hadoop_ver} installed - ready for hadoop-post-upgrade" |
503 | |
504 | === modified file 'resources.yaml' |
505 | --- resources.yaml 2015-10-07 20:35:26 +0000 |
506 | +++ resources.yaml 2015-10-22 16:54:30 +0000 |
507 | @@ -26,3 +26,11 @@ |
508 | url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz |
509 | hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672 |
510 | hash_type: sha256 |
511 | + hadoop-2.4.1-x86_64: |
512 | + url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz |
513 | + hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672 |
514 | + hash_type: sha256 |
515 | + hadoop-2.7.1-x86_64: |
516 | + url: http://mirrors.ukfast.co.uk/sites/ftp.apache.org/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz |
517 | + hash: 991dc34ea42a80b236ca46ff5d207107bcc844174df0441777248fdb6d8c9aa0 |
518 | + hash_type: sha256 |