Merge lp:~bigdata-dev/charms/trusty/apache-hadoop-client/minimal into lp:~bigdata-dev/charms/trusty/apache-hadoop-client/trunk

Proposed by Cory Johns
Status: Merged
Merged at revision: 95
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-hadoop-client/minimal
Merge into: lp:~bigdata-dev/charms/trusty/apache-hadoop-client/trunk
Diff against target: 690 lines (+24/-543)
19 files modified
README.md (+6/-42)
config.yaml (+1/-6)
dist.yaml (+0/-116)
hooks/callbacks.py (+0/-36)
hooks/common.py (+0/-95)
hooks/config-changed (+0/-15)
hooks/hadoop-plugin-relation-changed (+4/-0)
hooks/hadoop-plugin-relation-joined (+2/-0)
hooks/install (+2/-17)
hooks/namenode-relation-changed (+0/-16)
hooks/resourcemanager-relation-changed (+0/-16)
hooks/setup.py (+0/-35)
hooks/start (+0/-15)
hooks/status-set (+5/-0)
hooks/stop (+0/-15)
metadata.yaml (+4/-5)
resources.yaml (+0/-36)
tests/01-basic-deployment.py (+0/-6)
tests/remote/test_dist_config.py (+0/-72)
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-hadoop-client/minimal
Reviewer: amir sanjar (community), status: Approve
Review via email: mp+258939@code.launchpad.net

Description of the change

Refactored to use the apache-hadoop-plugin subordinate charm instead of duplicating all of the Hadoop setup code in this charm.
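For reference, the updated README (in the diff below) describes the new deployment workflow, which relates this charm to the plugin provided by the core bundle; the service names follow the README example and may differ per deployment:

    juju quickstart apache-core-batch-processing
    juju deploy apache-hadoop-client client
    juju add-relation client plugin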

99. By Cory Johns

Added missing scope: container to subordinate relation
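The resulting relation declaration in metadata.yaml, as shown in the diff below, marks the hadoop-plugin relation as container-scoped so the subordinate plugin charm is colocated with the client unit:

    provides:
      hadoop-plugin:
        interface: hadoop-plugin
        scope: container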

100. By Cory Johns

Merged upstream

Revision history for this message
amir sanjar (asanjar) wrote:

looks good

review: Approve
Revision history for this message
amir sanjar (asanjar):
review: Approve

Preview Diff

1=== modified file 'README.md'
2--- README.md 2015-05-06 17:04:07 +0000
3+++ README.md 2015-05-12 23:32:24 +0000
4@@ -10,11 +10,12 @@
5
6 ## Usage
7
8-This charm is intended to be deployed via one of the
9-[bundles](https://jujucharms.com/q/bigdata-dev/apache?type=bundle).
10-For example:
11+This charm is intended to be connected to the
12+[core bundle](https://jujucharms.com/u/bigdata-dev/apache-core-batch-processing/):
13
14- juju quickstart u/bigdata-dev/apache-core-batch-processing
15+ juju quickstart apache-core-batch-processing
16+ juju deploy apache-hadoop-client client
17+ juju add-relation client plugin
18
19 This will deploy the Apache Hadoop platform with a single client unit.
20 From there, you can manually load and run map-reduce jobs:
21@@ -24,46 +25,9 @@
22 hadoop jar my-job.jar
23
24
25-## Deploying in Network-Restricted Environments
26-
27-The Apache Hadoop charms can be deployed in environments with limited network
28-access. To deploy in this environment, you will need a local mirror to serve
29-the packages and resources required by these charms.
30-
31-
32-### Mirroring Packages
33-
34-You can setup a local mirror for apt packages using squid-deb-proxy.
35-For instructions on configuring juju to use this, see the
36-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
37-
38-
39-### Mirroring Resources
40-
41-In addition to apt packages, the Apache Hadoop charms require a few binary
42-resources, which are normally hosted on Launchpad. If access to Launchpad
43-is not available, the `jujuresources` library makes it easy to create a mirror
44-of these resources:
45-
46- sudo pip install jujuresources
47- juju resources fetch --all apache-hadoop-client/resources.yaml -d /tmp/resources
48- juju resources serve -d /tmp/resources
49-
50-This will fetch all of the resources needed by this charm and serve them via a
51-simple HTTP server. You can then set the `resources_mirror` config option to
52-have the charm use this server for retrieving resources.
53-
54-You can fetch the resources for all of the Apache Hadoop charms
55-(`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
56-`apache-hadoop-compute-slave`, `apache-hadoop-client`, etc) into a single
57-directory and serve them all with a single `juju resources serve` instance.
58-
59-
60 ## Contact Information
61
62-* Amir Sanjar <amir.sanjar@canonical.com>
63-* Cory Johns <cory.johns@canonical.com>
64-* Kevin Monroe <kevin.monroe@canonical.com>
65+[bigdata-dev@canonical.com](mailto:bigdata-dev@canonical.com)
66
67
68 ## Hadoop
69
70=== modified file 'config.yaml'
71--- config.yaml 2015-04-03 16:49:17 +0000
72+++ config.yaml 2015-05-12 23:32:24 +0000
73@@ -1,6 +1,1 @@
74-options:
75- resources_mirror:
76- type: string
77- default: ''
78- description: |
79- URL from which to fetch resources (e.g., Hadoop binaries) instead of Launchpad.
80+options: {}
81
82=== removed file 'dist.yaml'
83--- dist.yaml 2015-04-16 15:45:57 +0000
84+++ dist.yaml 1970-01-01 00:00:00 +0000
85@@ -1,116 +0,0 @@
86-# This file contains values that are likely to change per distribution.
87-# The aim is to make it easier to update / extend the charms with
88-# minimal changes to the shared code in charmhelpers.
89-vendor: 'apache'
90-hadoop_version: '2.4.1'
91-packages:
92- - 'libsnappy1'
93- - 'libsnappy-dev'
94- - 'openssl'
95- - 'liblzo2-2'
96-groups:
97- - 'hadoop'
98- - 'mapred'
99- - 'supergroup'
100-users:
101- ubuntu:
102- groups: ['hadoop', 'mapred', 'supergroup']
103- hdfs:
104- groups: ['hadoop']
105- mapred:
106- groups: ['hadoop', 'mapred']
107- yarn:
108- groups: ['hadoop']
109-dirs:
110- hadoop:
111- path: '/usr/lib/hadoop'
112- perms: 0777
113- hadoop_conf:
114- path: '/etc/hadoop/conf'
115- hadoop_tmp:
116- path: '/tmp/hadoop'
117- perms: 0777
118- mapred_log:
119- path: '/var/log/hadoop/mapred'
120- owner: 'mapred'
121- group: 'hadoop'
122- perms: 0755
123- mapred_run:
124- path: '/var/run/hadoop/mapred'
125- owner: 'mapred'
126- group: 'hadoop'
127- perms: 0755
128- yarn_tmp:
129- path: '/tmp/hadoop-yarn'
130- perms: 0777
131- yarn_log_dir:
132- path: '/var/log/hadoop/yarn'
133- owner: 'yarn'
134- group: 'hadoop'
135- perms: 0755
136- hdfs_log_dir:
137- path: '/var/log/hadoop/hdfs'
138- owner: 'hdfs'
139- group: 'hadoop'
140- perms: 0755
141- hdfs_dir_base:
142- path: '/usr/local/hadoop/data'
143- owner: 'hdfs'
144- group: 'hadoop'
145- perms: 0755
146- cache_base:
147- path: '{dirs[hdfs_dir_base]}/cache'
148- owner: 'hdfs'
149- group: 'hadoop'
150- perms: 01775
151- cache_dir:
152- path: '{dirs[hdfs_dir_base]}/cache/hadoop'
153- owner: 'hdfs'
154- group: 'hadoop'
155- perms: 0775
156-ports:
157- # Ports that need to be exposed, overridden, or manually specified.
158- # Only expose ports serving a UI or external API (i.e., namenode and
159- # resourcemanager). Communication among units within the cluster does
160- # not need ports to be explicitly opened.
161- # If adding a port here, you will need to update
162- # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
163- # to ensure that it is supported.
164- namenode:
165- port: 8020
166- exposed_on: 'hdfs-master'
167- nn_webapp_http:
168- port: 50070
169- exposed_on: 'hdfs-master'
170- dn_webapp_http:
171- port: 50075
172- exposed_on: 'compute-slave'
173- resourcemanager:
174- port: 8032
175- exposed_on: 'yarn-master'
176- rm_webapp_http:
177- port: 8088
178- exposed_on: 'yarn-master'
179- rm_log:
180- port: 19888
181- nm_webapp_http:
182- port: 8042
183- exposed_on: 'compute-slave'
184- jobhistory:
185- port: 10020
186- jh_webapp_http:
187- port: 19888
188- exposed_on: 'yarn-master'
189- # TODO: support SSL
190- #nn_webapp_https:
191- # port: 50470
192- # exposed_on: 'hdfs-master'
193- #dn_webapp_https:
194- # port: 50475
195- # exposed_on: 'compute-slave'
196- #rm_webapp_https:
197- # port: 8090
198- # exposed_on: 'yarn-master'
199- #nm_webapp_https:
200- # port: 8044
201- # exposed_on: 'compute-slave'
202
203=== removed file 'hooks/callbacks.py'
204--- hooks/callbacks.py 2015-05-08 17:26:18 +0000
205+++ hooks/callbacks.py 1970-01-01 00:00:00 +0000
206@@ -1,36 +0,0 @@
207-# Licensed under the Apache License, Version 2.0 (the "License");
208-# you may not use this file except in compliance with the License.
209-# You may obtain a copy of the License at
210-#
211-# http://www.apache.org/licenses/LICENSE-2.0
212-#
213-# Unless required by applicable law or agreed to in writing, software
214-# distributed under the License is distributed on an "AS IS" BASIS,
215-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
216-# See the License for the specific language governing permissions and
217-# limitations under the License.
218-"""
219-Callbacks for additional setup tasks.
220-
221-Add any additional tasks / setup here. If a callback is used by mutliple
222-charms, consider refactoring it up to the charmhelpers library.
223-"""
224-from charmhelpers.contrib.bigdata import utils
225-from charmhelpers.core import hookenv
226-
227-
228-def update_etc_hosts():
229- if hookenv.in_relation_hook():
230- # send our hostname on the relation
231- local_host = hookenv.local_unit().replace('/', '-')
232- hookenv.relation_set(hostname=local_host)
233-
234- # get /etc/hosts entries from the master
235- master_hosts = hookenv.relation_get('etc_hosts')
236-
237- # update /etc/hosts on the local unit if we have master hostname data
238- if master_hosts:
239- hookenv.log('Updating /etc/hosts from %s' % hookenv.remote_unit())
240- utils.update_etc_hosts(master_hosts)
241- else:
242- hookenv.log('No /etc/hosts updates from %s' % hookenv.remote_unit())
243
244=== removed file 'hooks/common.py'
245--- hooks/common.py 2015-05-12 23:29:29 +0000
246+++ hooks/common.py 1970-01-01 00:00:00 +0000
247@@ -1,95 +0,0 @@
248-#!/usr/bin/env python
249-# Licensed under the Apache License, Version 2.0 (the "License");
250-# you may not use this file except in compliance with the License.
251-# You may obtain a copy of the License at
252-#
253-# http://www.apache.org/licenses/LICENSE-2.0
254-#
255-# Unless required by applicable law or agreed to in writing, software
256-# distributed under the License is distributed on an "AS IS" BASIS,
257-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
258-# See the License for the specific language governing permissions and
259-# limitations under the License.
260-"""
261-Common implementation for all hooks.
262-"""
263-
264-import jujuresources
265-
266-
267-def bootstrap_resources():
268- """
269- Install required resources defined in resources.yaml
270- """
271- mirror_url = jujuresources.config_get('resources_mirror')
272- if not jujuresources.fetch(mirror_url=mirror_url):
273- jujuresources.juju_log('Resources unavailable; manual intervention required', 'ERROR')
274- return False
275- jujuresources.install(['pathlib', 'pyaml', 'six', 'charmhelpers', 'jujubigdata'])
276- return True
277-
278-
279-def manage():
280- if not bootstrap_resources():
281- # defer until resources are available, since charmhelpers, and thus
282- # the framework, are required (will require manual intervention)
283- return
284-
285- from charmhelpers.core import charmframework
286- import jujubigdata
287- import callbacks # noqa (ignore when linting)
288-
289- # list of keys required to be in the dist.yaml
290- client_reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
291- 'dirs', 'ports']
292- dist_config = jujubigdata.utils.DistConfig(filename='dist.yaml',
293- required_keys=client_reqs)
294- hadoop = jujubigdata.handlers.HadoopBase(dist_config)
295- hdfs = jujubigdata.handlers.HDFS(hadoop)
296- yarn = jujubigdata.handlers.YARN(hadoop)
297- namenode = jujubigdata.relations.NameNode(spec=hadoop.client_spec)
298- resourcemanager = jujubigdata.relations.ResourceManager(spec=hadoop.client_spec)
299- manager = charmframework.Manager([
300- {
301- 'name': 'hadoop-base',
302- 'requires': [
303- hadoop.verify_conditional_resources,
304- ],
305- 'callbacks': [
306- hadoop.install,
307- ],
308- },
309- {
310- 'name': 'client-hdfs',
311- 'provides': [
312- ],
313- 'requires': [
314- hadoop.is_installed,
315- namenode,
316- ],
317- 'callbacks': [
318- namenode.update_etc_hosts,
319- hdfs.configure_client,
320- ],
321- },
322- {
323- 'name': 'client-yarn',
324- 'provides': [
325- ],
326- 'requires': [
327- hadoop.is_installed,
328- resourcemanager,
329- ],
330- 'callbacks': [
331- resourcemanager.update_etc_hosts,
332- yarn.install_demo,
333- yarn.configure_client,
334- ],
335- 'cleanup': [],
336- },
337- ])
338- manager.manage()
339-
340-
341-if __name__ == '__main__':
342- manage()
343
344=== removed file 'hooks/config-changed'
345--- hooks/config-changed 2015-02-09 18:13:28 +0000
346+++ hooks/config-changed 1970-01-01 00:00:00 +0000
347@@ -1,15 +0,0 @@
348-#!/usr/bin/env python
349-# Licensed under the Apache License, Version 2.0 (the "License");
350-# you may not use this file except in compliance with the License.
351-# You may obtain a copy of the License at
352-#
353-# http://www.apache.org/licenses/LICENSE-2.0
354-#
355-# Unless required by applicable law or agreed to in writing, software
356-# distributed under the License is distributed on an "AS IS" BASIS,
357-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
358-# See the License for the specific language governing permissions and
359-# limitations under the License.
360-
361-import common
362-common.manage()
363
364=== added file 'hooks/hadoop-plugin-relation-changed'
365--- hooks/hadoop-plugin-relation-changed 1970-01-01 00:00:00 +0000
366+++ hooks/hadoop-plugin-relation-changed 2015-05-12 23:32:24 +0000
367@@ -0,0 +1,4 @@
368+#!/bin/bash
369+if [[ "$(relation-get hdfs-ready)" == "True" ]]; then
370+ hooks/status-set active "Ready to run mapreduce jobs"
371+fi
372
373=== added file 'hooks/hadoop-plugin-relation-joined'
374--- hooks/hadoop-plugin-relation-joined 1970-01-01 00:00:00 +0000
375+++ hooks/hadoop-plugin-relation-joined 2015-05-12 23:32:24 +0000
376@@ -0,0 +1,2 @@
377+#!/bin/bash
378+hooks/status-set waiting "Waiting for Hadoop to be ready"
379
380=== modified file 'hooks/install'
381--- hooks/install 2015-02-09 18:13:28 +0000
382+++ hooks/install 2015-05-12 23:32:24 +0000
383@@ -1,17 +1,2 @@
384-#!/usr/bin/python
385-# Licensed under the Apache License, Version 2.0 (the "License");
386-# you may not use this file except in compliance with the License.
387-# You may obtain a copy of the License at
388-#
389-# http://www.apache.org/licenses/LICENSE-2.0
390-#
391-# Unless required by applicable law or agreed to in writing, software
392-# distributed under the License is distributed on an "AS IS" BASIS,
393-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
394-# See the License for the specific language governing permissions and
395-# limitations under the License.
396-import setup
397-setup.pre_install()
398-
399-import common
400-common.manage()
401+#!/bin/bash
402+hooks/status-set blocked "Please add relation to apache-hadoop-plugin"
403
404=== removed file 'hooks/namenode-relation-changed'
405--- hooks/namenode-relation-changed 2015-05-07 15:08:59 +0000
406+++ hooks/namenode-relation-changed 1970-01-01 00:00:00 +0000
407@@ -1,16 +0,0 @@
408-#!/usr/bin/env python
409-# Licensed under the Apache License, Version 2.0 (the "License");
410-# you may not use this file except in compliance with the License.
411-# You may obtain a copy of the License at
412-#
413-# http://www.apache.org/licenses/LICENSE-2.0
414-#
415-# Unless required by applicable law or agreed to in writing, software
416-# distributed under the License is distributed on an "AS IS" BASIS,
417-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
418-# See the License for the specific language governing permissions and
419-# limitations under the License.
420-
421-import common
422-
423-common.manage()
424
425=== removed file 'hooks/resourcemanager-relation-changed'
426--- hooks/resourcemanager-relation-changed 2015-05-07 15:08:59 +0000
427+++ hooks/resourcemanager-relation-changed 1970-01-01 00:00:00 +0000
428@@ -1,16 +0,0 @@
429-#!/usr/bin/env python
430-# Licensed under the Apache License, Version 2.0 (the "License");
431-# you may not use this file except in compliance with the License.
432-# You may obtain a copy of the License at
433-#
434-# http://www.apache.org/licenses/LICENSE-2.0
435-#
436-# Unless required by applicable law or agreed to in writing, software
437-# distributed under the License is distributed on an "AS IS" BASIS,
438-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
439-# See the License for the specific language governing permissions and
440-# limitations under the License.
441-
442-import common
443-
444-common.manage()
445
446=== removed file 'hooks/setup.py'
447--- hooks/setup.py 2015-03-03 21:29:00 +0000
448+++ hooks/setup.py 1970-01-01 00:00:00 +0000
449@@ -1,35 +0,0 @@
450-# Licensed under the Apache License, Version 2.0 (the "License");
451-# you may not use this file except in compliance with the License.
452-# You may obtain a copy of the License at
453-#
454-# http://www.apache.org/licenses/LICENSE-2.0
455-#
456-# Unless required by applicable law or agreed to in writing, software
457-# distributed under the License is distributed on an "AS IS" BASIS,
458-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
459-# See the License for the specific language governing permissions and
460-# limitations under the License.
461-import subprocess
462-from glob import glob
463-
464-def pre_install():
465- """
466- Do any setup required before the install hook.
467- """
468- install_pip()
469- install_jujuresources()
470-
471-
472-def install_pip():
473- subprocess.check_call(['apt-get', 'install', '-yq', 'python-pip', 'bzr'])
474-
475-
476-def install_jujuresources():
477- """
478- Install the bundled jujuresources library, if not present.
479- """
480- try:
481- import jujuresources # noqa
482- except ImportError:
483- jr_archive = glob('resources/jujuresources-*.tar.gz')[0]
484- subprocess.check_call(['pip', 'install', jr_archive])
485
486=== removed file 'hooks/start'
487--- hooks/start 2015-02-09 18:13:28 +0000
488+++ hooks/start 1970-01-01 00:00:00 +0000
489@@ -1,15 +0,0 @@
490-#!/usr/bin/env python
491-# Licensed under the Apache License, Version 2.0 (the "License");
492-# you may not use this file except in compliance with the License.
493-# You may obtain a copy of the License at
494-#
495-# http://www.apache.org/licenses/LICENSE-2.0
496-#
497-# Unless required by applicable law or agreed to in writing, software
498-# distributed under the License is distributed on an "AS IS" BASIS,
499-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
500-# See the License for the specific language governing permissions and
501-# limitations under the License.
502-
503-import common
504-common.manage()
505
506=== added file 'hooks/status-set'
507--- hooks/status-set 1970-01-01 00:00:00 +0000
508+++ hooks/status-set 2015-05-12 23:32:24 +0000
509@@ -0,0 +1,5 @@
510+#!/bin/bash
511+# Wrapper around status-set for use with older versions of Juju
512+if which status-set > /dev/null; then
513+ status-set "$@"
514+fi
515
516=== removed file 'hooks/stop'
517--- hooks/stop 2015-02-09 18:13:28 +0000
518+++ hooks/stop 1970-01-01 00:00:00 +0000
519@@ -1,15 +0,0 @@
520-#!/usr/bin/env python
521-# Licensed under the Apache License, Version 2.0 (the "License");
522-# you may not use this file except in compliance with the License.
523-# You may obtain a copy of the License at
524-#
525-# http://www.apache.org/licenses/LICENSE-2.0
526-#
527-# Unless required by applicable law or agreed to in writing, software
528-# distributed under the License is distributed on an "AS IS" BASIS,
529-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
530-# See the License for the specific language governing permissions and
531-# limitations under the License.
532-
533-import common
534-common.manage()
535
536=== modified file 'metadata.yaml'
537--- metadata.yaml 2015-05-06 17:04:07 +0000
538+++ metadata.yaml 2015-05-12 23:32:24 +0000
539@@ -8,8 +8,7 @@
540 This charm manages a dedicated client node as a place to
541 run mapreduce jobs.
542 tags: ["applications", "bigdata", "hadoop", "apache"]
543-requires:
544- namenode:
545- interface: dfs
546- resourcemanager:
547- interface: mapred
548+provides:
549+ hadoop-plugin:
550+ interface: hadoop-plugin
551+ scope: container
552
553=== removed directory 'resources'
554=== removed file 'resources.yaml'
555--- resources.yaml 2015-05-12 23:29:29 +0000
556+++ resources.yaml 1970-01-01 00:00:00 +0000
557@@ -1,36 +0,0 @@
558-options:
559- output_dir: /home/ubuntu/resources
560-resources:
561- pathlib:
562- pypi: path.py>=7.0
563- pyaml:
564- pypi: pyaml
565- six:
566- pypi: six
567- jujubigdata:
568- pypi: jujubigdata>=1.2.5,<2.0.0
569- charmhelpers:
570- pypi: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150511173636-5rblzf5r2o1zcv2p/charmhelpers0.2.3.ta-20150417221203-zg62z8c220egc3ch-1/charmhelpers-0.2.3.tar.gz
571- hash: 44340a6fd6f192bcc9d390c0d9c3901d4fc190166485b107047bc1c6ba102a2f
572- hash_type: sha256
573- java-installer:
574- # This points to a script which manages installing Java.
575- # If replaced with an alternate implementation, it must output *only* two
576- # lines containing the JAVA_HOME path, and the Java version, respectively,
577- # on stdout. Upon error, it must exit with a non-zero exit code.
578- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150312205309-2ji1etk44gep01w1/javainstaller.sh-20150311213053-4vq7369jhlvc6qy8-1/java-installer.sh
579- hash: 130984f1dc3bc624d4245234d0fca22f529d234d0eaa1241c5e9f701319bdea9
580- hash_type: sha256
581-optional_resources:
582- hadoop-aarch64:
583- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150303192631-swhrf8f7q82si75t/hadoop2.4.1.tar.gz-20150303192554-7gqslr4m8ahkwiax-2/hadoop-2.4.1.tar.gz
584- hash: 03ad135835bfe413f85fe176259237a8
585- hash_type: md5
586- hadoop-ppc64le:
587- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150130165209-nuz1myezjpdx7eus/hadoop2.4.1ppc64le.t-20150130165148-s8i19s002ht88gio-2/hadoop-2.4.1-ppc64le.tar.gz
588- hash: 09942b168a3db0d183b281477d3dae9deb7b7bc4b5783ba5cda3965b62e71bd5
589- hash_type: sha256
590- hadoop-x86_64:
591- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150116154822-x5osw3zfhw6e03b1/hadoop2.4.1.tar.gz-20150116154748-yfa2j12rr5m53xd3-1/hadoop-2.4.1.tar.gz
592- hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
593- hash_type: sha256
594
595=== removed file 'resources/jujuresources-0.2.5.tar.gz'
596Binary files resources/jujuresources-0.2.5.tar.gz 2015-03-03 19:56:49 +0000 and resources/jujuresources-0.2.5.tar.gz 1970-01-01 00:00:00 +0000 differ
597=== modified file 'tests/01-basic-deployment.py'
598--- tests/01-basic-deployment.py 2015-03-04 00:07:45 +0000
599+++ tests/01-basic-deployment.py 2015-05-12 23:32:24 +0000
600@@ -29,12 +29,6 @@
601 assert 'SecondaryNameNode' not in output, "SecondaryNameNode should not be started"
602 assert 'DataNode' not in output, "DataServer should not be started"
603
604- def test_dist_config(self):
605- # test_dist_config.py is run on the deployed unit because it
606- # requires the Juju context to properly validate dist.yaml
607- output, retcode = self.unit.run("tests/remote/test_dist_config.py")
608- self.assertEqual(retcode, 0, 'Remote dist config test failed:\n{}'.format(output))
609-
610
611 if __name__ == '__main__':
612 unittest.main()
613
614=== removed directory 'tests/remote'
615=== removed file 'tests/remote/test_dist_config.py'
616--- tests/remote/test_dist_config.py 2015-03-19 20:09:17 +0000
617+++ tests/remote/test_dist_config.py 1970-01-01 00:00:00 +0000
618@@ -1,72 +0,0 @@
619-#!/usr/bin/env python
620-
621-import grp
622-import os
623-import pwd
624-import unittest
625-
626-from charmhelpers.contrib import bigdata
627-
628-
629-class TestDistConfig(unittest.TestCase):
630- """
631- Test that the ``dist.yaml`` settings were applied properly, such as users, groups, and dirs.
632-
633- This is done as a remote test on the deployed unit rather than a regular
634- test under ``tests/`` because filling in the ``dist.yaml`` requires Juju
635- context (e.g., config).
636- """
637- @classmethod
638- def setUpClass(cls):
639- config = None
640- config_dir = os.environ['JUJU_CHARM_DIR']
641- config_file = 'dist.yaml'
642- if os.path.isfile(os.path.join(config_dir, config_file)):
643- config = os.path.join(config_dir, config_file)
644- if not config:
645- raise IOError('Could not find {} in {}'.format(config_file, config_dir))
646- reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
647- 'dirs']
648- cls.dist_config = bigdata.utils.DistConfig(config, reqs)
649-
650- def test_groups(self):
651- for name in self.dist_config.groups:
652- try:
653- grp.getgrnam(name)
654- except KeyError:
655- self.fail('Group {} is missing'.format(name))
656-
657- def test_users(self):
658- for username, details in self.dist_config.users.items():
659- try:
660- user = pwd.getpwnam(username)
661- except KeyError:
662- self.fail('User {} is missing'.format(username))
663- for groupname in details['groups']:
664- try:
665- group = grp.getgrnam(groupname)
666- except KeyError:
667- self.fail('Group {} referenced by user {} does not exist'.format(
668- groupname, username))
669- if group.gr_gid != user.pw_gid:
670- self.assertIn(username, group.gr_mem, 'User {} not in group {}'.format(
671- username, groupname))
672-
673- def test_dirs(self):
674- for name, details in self.dist_config.dirs.items():
675- dirpath = self.dist_config.path(name)
676- self.assertTrue(dirpath.isdir(), 'Dir {} is missing'.format(name))
677- stat = dirpath.stat()
678- owner = pwd.getpwuid(stat.st_uid).pw_name
679- group = grp.getgrgid(stat.st_gid).gr_name
680- perms = stat.st_mode & ~0o40000
681- self.assertEqual(owner, details.get('owner', 'root'),
682- 'Dir {} ({}) has wrong owner: {}'.format(name, dirpath, owner))
683- self.assertEqual(group, details.get('group', 'root'),
684- 'Dir {} ({}) has wrong group: {}'.format(name, dirpath, group))
685- self.assertEqual(perms, details.get('perms', 0o755),
686- 'Dir {} ({}) has wrong perms: 0o{:o}'.format(name, dirpath, perms))
687-
688-
689-if __name__ == '__main__':
690- unittest.main()
