Merge lp:~bigdata-dev/charms/bundles/apache-analytics-pig/trunk into lp:~charmers/charms/bundles/apache-analytics-pig/bundle

Proposed by Cory Johns
Status: Merged
Merged at revision: 13
Proposed branch: lp:~bigdata-dev/charms/bundles/apache-analytics-pig/trunk
Merge into: lp:~charmers/charms/bundles/apache-analytics-pig/bundle
Diff against target: 310 lines (+234/-7)
6 files modified
README.md (+1/-1)
bundle-dev.yaml (+50/-0)
bundle-local.yaml (+50/-0)
bundle.yaml (+6/-6)
tests/01-bundle.py (+124/-0)
tests/tests.yaml (+3/-0)
To merge this branch: bzr merge lp:~bigdata-dev/charms/bundles/apache-analytics-pig/trunk
Reviewer          Review Type    Date Requested    Status
Kevin W Monroe                                     Approve
Review via email: mp+273283@code.launchpad.net

Description of the change

Tests for CWR (Cloud Weather Report)

15. By Cory Johns

Added tests.yaml and removed no longer needed 00-setup

16. By Cory Johns

Added -dev and -local bundle files

17. By Cory Johns

Fixed test perms

Revision history for this message
Kevin W Monroe (kwmonroe) wrote :

LGTM, +1

review: Approve
18. By Kevin W Monroe

update charm revnos with latest promulgated versions

Preview Diff

=== modified file 'README.md'
--- README.md	2015-05-29 21:03:16 +0000
+++ README.md	2015-10-07 21:27:49 +0000
@@ -14,7 +14,7 @@
 ## Usage
 Deploy this bundle using juju-quickstart:

-    juju quickstart u/bigdata-dev/apache-analytics-pig
+    juju quickstart apache-analytics-pig

 See `juju quickstart --help` for deployment options, including machine
 constraints and how to deploy a locally modified version of the

=== added file 'bundle-dev.yaml'
--- bundle-dev.yaml	1970-01-01 00:00:00 +0000
+++ bundle-dev.yaml	2015-10-07 21:27:49 +0000
@@ -0,0 +1,50 @@
+services:
+  compute-slave:
+    charm: cs:~bigdata-dev/trusty/apache-hadoop-compute-slave
+    num_units: 3
+    annotations:
+      gui-x: "300"
+      gui-y: "200"
+    constraints: mem=3G
+  hdfs-master:
+    charm: cs:~bigdata-dev/trusty/apache-hadoop-hdfs-master
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "350"
+    constraints: mem=7G
+  pig:
+    charm: cs:~bigdata-dev/trusty/apache-pig
+    num_units: 1
+    annotations:
+      gui-x: "1200"
+      gui-y: "200"
+    constraints: mem=3G
+  plugin:
+    charm: cs:~bigdata-dev/trusty/apache-hadoop-plugin
+    annotations:
+      gui-x: "900"
+      gui-y: "200"
+  secondary-namenode:
+    charm: cs:~bigdata-dev/trusty/apache-hadoop-hdfs-secondary
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "600"
+    constraints: mem=7G
+  yarn-master:
+    charm: cs:~bigdata-dev/trusty/apache-hadoop-yarn-master
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "100"
+    constraints: mem=7G
+series: trusty
+relations:
+  - [yarn-master, hdfs-master]
+  - [secondary-namenode, hdfs-master]
+  - [compute-slave, yarn-master]
+  - [compute-slave, hdfs-master]
+  - [plugin, yarn-master]
+  - [plugin, hdfs-master]
+  - [pig, plugin]

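The -dev bundle above defines six services and seven relations. As a quick structural sanity check (a standalone sketch, not part of this merge; the bundle is inlined as a Python dict mirroring the YAML so no YAML parser is needed), one can verify that every relation endpoint names a declared service and every service participates in at least one relation:

```python
# Sketch: structural sanity check for the bundle above.
# The dict mirrors bundle-dev.yaml, trimmed to the keys the check uses.
bundle = {
    'services': {
        'compute-slave': {'num_units': 3},
        'hdfs-master': {'num_units': 1},
        'pig': {'num_units': 1},
        'plugin': {},  # no num_units in the bundle
        'secondary-namenode': {'num_units': 1},
        'yarn-master': {'num_units': 1},
    },
    'relations': [
        ['yarn-master', 'hdfs-master'],
        ['secondary-namenode', 'hdfs-master'],
        ['compute-slave', 'yarn-master'],
        ['compute-slave', 'hdfs-master'],
        ['plugin', 'yarn-master'],
        ['plugin', 'hdfs-master'],
        ['pig', 'plugin'],
    ],
}

def check_bundle(bundle):
    """Return (undeclared, unrelated) service-name sets."""
    declared = set(bundle['services'])
    related = {end for rel in bundle['relations'] for end in rel}
    undeclared = related - declared   # relation endpoints with no service block
    unrelated = declared - related    # services wired to nothing
    return undeclared, unrelated

undeclared, unrelated = check_bundle(bundle)
print(undeclared, unrelated)  # set() set() -- this bundle is consistent
```

The same check applies unchanged to bundle-local.yaml and bundle.yaml, since all three share the service and relation names.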
=== added file 'bundle-local.yaml'
--- bundle-local.yaml	1970-01-01 00:00:00 +0000
+++ bundle-local.yaml	2015-10-07 21:27:49 +0000
@@ -0,0 +1,50 @@
+services:
+  compute-slave:
+    charm: apache-hadoop-compute-slave
+    num_units: 3
+    annotations:
+      gui-x: "300"
+      gui-y: "200"
+    constraints: mem=3G
+  hdfs-master:
+    charm: apache-hadoop-hdfs-master
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "350"
+    constraints: mem=7G
+  pig:
+    charm: apache-pig
+    num_units: 1
+    annotations:
+      gui-x: "1200"
+      gui-y: "200"
+    constraints: mem=3G
+  plugin:
+    charm: apache-hadoop-plugin
+    annotations:
+      gui-x: "900"
+      gui-y: "200"
+  secondary-namenode:
+    charm: apache-hadoop-hdfs-secondary
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "600"
+    constraints: mem=7G
+  yarn-master:
+    charm: apache-hadoop-yarn-master
+    num_units: 1
+    annotations:
+      gui-x: "600"
+      gui-y: "100"
+    constraints: mem=7G
+series: trusty
+relations:
+  - [yarn-master, hdfs-master]
+  - [secondary-namenode, hdfs-master]
+  - [compute-slave, yarn-master]
+  - [compute-slave, hdfs-master]
+  - [plugin, yarn-master]
+  - [plugin, hdfs-master]
+  - [pig, plugin]

=== modified file 'bundle.yaml'
--- bundle.yaml	2015-09-16 22:07:20 +0000
+++ bundle.yaml	2015-10-07 21:27:49 +0000
@@ -1,39 +1,39 @@
 services:
   compute-slave:
-    charm: cs:trusty/apache-hadoop-compute-slave-8
+    charm: cs:trusty/apache-hadoop-compute-slave-9
     num_units: 3
     annotations:
       gui-x: "300"
       gui-y: "200"
     constraints: mem=3G
   hdfs-master:
-    charm: cs:trusty/apache-hadoop-hdfs-master-8
+    charm: cs:trusty/apache-hadoop-hdfs-master-9
     num_units: 1
     annotations:
       gui-x: "600"
       gui-y: "350"
     constraints: mem=7G
   pig:
-    charm: cs:trusty/apache-pig-6
+    charm: cs:trusty/apache-pig-7
     num_units: 1
     annotations:
       gui-x: "1200"
       gui-y: "200"
     constraints: mem=3G
   plugin:
-    charm: cs:trusty/apache-hadoop-plugin-7
+    charm: cs:trusty/apache-hadoop-plugin-8
     annotations:
       gui-x: "900"
       gui-y: "200"
   secondary-namenode:
-    charm: cs:trusty/apache-hadoop-hdfs-secondary-6
+    charm: cs:trusty/apache-hadoop-hdfs-secondary-7
     num_units: 1
     annotations:
       gui-x: "600"
       gui-y: "600"
     constraints: mem=7G
   yarn-master:
-    charm: cs:trusty/apache-hadoop-yarn-master-6
+    charm: cs:trusty/apache-hadoop-yarn-master-7
     num_units: 1
     annotations:
       gui-x: "600"

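Revision 18 in the history above bumps each pinned charm revision in bundle.yaml to the latest promulgated version. A hedged sketch of that chore (a hypothetical regex-based helper, not the process actually used for this merge; `latest_revnos` stands in for whatever source of promulgated revisions one consults):

```python
import re

# Hypothetical map of charm name -> latest promulgated revision.
latest_revnos = {
    'apache-hadoop-compute-slave': 9,
    'apache-pig': 7,
}

def update_revnos(text, latest):
    """Rewrite cs:trusty/<name>-<rev> references to the given revisions.

    Charms not present in `latest` keep their pinned revision.
    """
    def repl(match):
        name, old = match.group(1), int(match.group(2))
        return 'cs:trusty/{}-{}'.format(name, latest.get(name, old))
    # Lazy name match so the trailing -<digits> is taken as the revision.
    return re.sub(r'cs:trusty/([a-z0-9-]+?)-(\d+)', repl, text)

snippet = ("charm: cs:trusty/apache-hadoop-compute-slave-8\n"
           "charm: cs:trusty/apache-pig-6")
print(update_revnos(snippet, latest_revnos))
```

Running this on the two-line snippet yields the `-9` and `-7` revisions seen in the diff above.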
=== added directory 'tests'
=== added file 'tests/01-bundle.py'
--- tests/01-bundle.py	1970-01-01 00:00:00 +0000
+++ tests/01-bundle.py	2015-10-07 21:27:49 +0000
@@ -0,0 +1,124 @@
+#!/usr/bin/env python3
+
+import os
+import unittest
+
+import yaml
+import amulet
+
+
+class TestBundle(unittest.TestCase):
+    bundle_file = os.path.join(os.path.dirname(__file__), '..', 'bundle.yaml')
+
+    @classmethod
+    def setUpClass(cls):
+        cls.d = amulet.Deployment(series='trusty')
+        with open(cls.bundle_file) as f:
+            bun = f.read()
+        bundle = yaml.safe_load(bun)
+        cls.d.load(bundle)
+        cls.d.setup(timeout=1800)
+        cls.d.sentry.wait_for_messages({'pig': 'Ready'}, timeout=1800)
+        cls.hdfs = cls.d.sentry['hdfs-master'][0]
+        cls.yarn = cls.d.sentry['yarn-master'][0]
+        cls.slave = cls.d.sentry['compute-slave'][0]
+        cls.secondary = cls.d.sentry['secondary-namenode'][0]
+        cls.pig = cls.d.sentry['pig'][0]
+
+    def test_components(self):
+        """
+        Confirm that all of the required components are up and running.
+        """
+        hdfs, retcode = self.hdfs.run("pgrep -a java")
+        yarn, retcode = self.yarn.run("pgrep -a java")
+        slave, retcode = self.slave.run("pgrep -a java")
+        secondary, retcode = self.secondary.run("pgrep -a java")
+        pig, retcode = self.pig.run("pgrep -a java")
+
+        # .NameNode needs the . to differentiate it from SecondaryNameNode
+        assert '.NameNode' in hdfs, "NameNode not started"
+        assert '.NameNode' not in yarn, "NameNode should not be running on yarn-master"
+        assert '.NameNode' not in slave, "NameNode should not be running on compute-slave"
+        assert '.NameNode' not in secondary, "NameNode should not be running on secondary-namenode"
+        assert '.NameNode' not in pig, "NameNode should not be running on pig"
+
+        assert 'ResourceManager' in yarn, "ResourceManager not started"
+        assert 'ResourceManager' not in hdfs, "ResourceManager should not be running on hdfs-master"
+        assert 'ResourceManager' not in slave, "ResourceManager should not be running on compute-slave"
+        assert 'ResourceManager' not in secondary, "ResourceManager should not be running on secondary-namenode"
+        assert 'ResourceManager' not in pig, "ResourceManager should not be running on pig"
+
+        assert 'JobHistoryServer' in yarn, "JobHistoryServer not started"
+        assert 'JobHistoryServer' not in hdfs, "JobHistoryServer should not be running on hdfs-master"
+        assert 'JobHistoryServer' not in slave, "JobHistoryServer should not be running on compute-slave"
+        assert 'JobHistoryServer' not in secondary, "JobHistoryServer should not be running on secondary-namenode"
+        assert 'JobHistoryServer' not in pig, "JobHistoryServer should not be running on pig"
+
+        assert 'NodeManager' in slave, "NodeManager not started"
+        assert 'NodeManager' not in yarn, "NodeManager should not be running on yarn-master"
+        assert 'NodeManager' not in hdfs, "NodeManager should not be running on hdfs-master"
+        assert 'NodeManager' not in secondary, "NodeManager should not be running on secondary-namenode"
+        assert 'NodeManager' not in pig, "NodeManager should not be running on pig"
+
+        assert 'DataNode' in slave, "DataNode not started"
+        assert 'DataNode' not in yarn, "DataNode should not be running on yarn-master"
+        assert 'DataNode' not in hdfs, "DataNode should not be running on hdfs-master"
+        assert 'DataNode' not in secondary, "DataNode should not be running on secondary-namenode"
+        assert 'DataNode' not in pig, "DataNode should not be running on pig"
+
+        assert 'SecondaryNameNode' in secondary, "SecondaryNameNode not started"
+        assert 'SecondaryNameNode' not in yarn, "SecondaryNameNode should not be running on yarn-master"
+        assert 'SecondaryNameNode' not in hdfs, "SecondaryNameNode should not be running on hdfs-master"
+        assert 'SecondaryNameNode' not in slave, "SecondaryNameNode should not be running on compute-slave"
+        assert 'SecondaryNameNode' not in pig, "SecondaryNameNode should not be running on pig"
+
+    def test_hdfs_dir(self):
+        """
+        Validate a few admin activities on the HDFS cluster:
+        1) create a directory on the HDFS cluster
+        2) change the owner of an HDFS directory
+        3) set access permissions on an HDFS directory
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        output, retcode = self.pig.run("su hdfs -c 'hdfs dfs -mkdir -p /user/ubuntu'")
+        assert retcode == 0, "Creating a user directory on hdfs FAILED:\n{}".format(output)
+        output, retcode = self.pig.run("su hdfs -c 'hdfs dfs -chown ubuntu:ubuntu /user/ubuntu'")
+        assert retcode == 0, "Assigning an owner to hdfs directory FAILED:\n{}".format(output)
+        output, retcode = self.pig.run("su hdfs -c 'hdfs dfs -chmod -R 755 /user/ubuntu'")
+        assert retcode == 0, "Setting directory permissions on hdfs FAILED:\n{}".format(output)
+
+    def test_yarn_mapreduce_exe(self):
+        """
+        Validate YARN mapreduce operations:
+        1) validate mapreduce execution - writing to hdfs
+        2) validate successful mapreduce operation after the execution
+        3) validate mapreduce execution - reading and writing to hdfs
+        4) validate successful mapreduce operation after the execution
+        5) validate successful deletion of mapreduce operation results from hdfs
+
+        NB: These are order-dependent, so must be done as part of a single test case.
+        """
+        jar_file = '/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar'
+        test_steps = [
+            ('teragen', "su ubuntu -c 'hadoop jar {} teragen 10000 /user/ubuntu/teragenout'".format(jar_file)),
+            ('mapreduce #1', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/teragenout/_SUCCESS'"),
+            ('terasort', "su ubuntu -c 'hadoop jar {} terasort /user/ubuntu/teragenout /user/ubuntu/terasortout'".
+                format(jar_file)),
+            ('mapreduce #2', "su hdfs -c 'hdfs dfs -ls /user/ubuntu/terasortout/_SUCCESS'"),
+            ('cleanup', "su hdfs -c 'hdfs dfs -rm -r /user/ubuntu/teragenout'"),
+        ]
+        for name, step in test_steps:
+            output, retcode = self.pig.run(step)
+            assert retcode == 0, "{} FAILED:\n{}".format(name, output)
+
+    def test_pig(self):
+        o, c = self.pig.run("su ubuntu -c 'pig -i -x local'")
+        assert "Apache" in o, "Pig local mode failed: %s" % o
+
+        o, c = self.pig.run("su ubuntu -c 'pig -i'")
+        assert "Apache" in o, "Pig MapReduce mode failed: %s" % o
+
+
+if __name__ == '__main__':
+    unittest.main()

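The placement assertions in test_components above reduce to a daemon-to-service map: each Hadoop daemon must appear in the `pgrep -a java` output of exactly one unit and nowhere else. That logic can be distilled into a standalone sketch (the sample listings below are illustrative, not captured from a live deployment):

```python
# Sketch of the placement check in test_components: each daemon must
# run on its expected unit and on no other unit.
EXPECTED = {
    '.NameNode': 'hdfs-master',   # leading dot excludes SecondaryNameNode
    'ResourceManager': 'yarn-master',
    'JobHistoryServer': 'yarn-master',
    'NodeManager': 'compute-slave',
    'DataNode': 'compute-slave',
    'SecondaryNameNode': 'secondary-namenode',
}

def placement_errors(pgrep_by_unit):
    """Compare per-unit java process listings against EXPECTED placement."""
    errors = []
    for daemon, home in EXPECTED.items():
        for unit, listing in pgrep_by_unit.items():
            present = daemon in listing
            if unit == home and not present:
                errors.append('{} not started on {}'.format(daemon, unit))
            elif unit != home and present:
                errors.append('{} should not run on {}'.format(daemon, unit))
    return errors

# Illustrative pgrep -a java fragments (hypothetical):
sample = {
    'hdfs-master': '1234 java ... hdfs.server.namenode.NameNode',
    'yarn-master': '2345 java ... ResourceManager\n2346 java ... JobHistoryServer',
    'compute-slave': '3456 java ... NodeManager\n3457 java ... DataNode',
    'secondary-namenode': '4567 java ... SecondaryNameNode',
    'pig': '',
}
print(placement_errors(sample))  # []
```

The table form makes the intent of the thirty-odd assertions easier to audit: adding a daemon or a unit is one line in `EXPECTED` rather than five new assert statements.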
=== added file 'tests/tests.yaml'
--- tests/tests.yaml	1970-01-01 00:00:00 +0000
+++ tests/tests.yaml	2015-10-07 21:27:49 +0000
@@ -0,0 +1,3 @@
+reset: false
+packages:
+  - amulet
