Merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk into lp:charms/trusty/hdp-hadoop

Proposed by Charles Butler
Status: Merged
Merged at revision: 25
Proposed branch: lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Merge into: lp:charms/trusty/hdp-hadoop
Diff against target: 488 lines (+164/-70)
6 files modified
hadoop_cluster.yaml (+0/-20)
hooks/hdp-hadoop-common.py (+134/-39)
hooks/hdputils.py (+2/-0)
metadata.yaml (+6/-9)
tests/01-hadoop-cluster-deployment-1.py (+2/-2)
tests/hadoop_cluster.yaml (+20/-0)
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/hdp-hadoop/trunk
Reviewer: David Britton (community)
Status: Needs Fixing
Review via email: mp+242414@code.launchpad.net

Description of the change

A cleanup and reimplementation of Amir's branch, adding the following features:

1) OpenStack enablement for multi-node compute nodes
2) Letting external big data applications query Hadoop compute nodes' IP and hostname for task discovery (see the sketch after this list)
3) General clean-up
4) An updated Amulet test case
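
A minimal sketch of what feature 2 enables for a charm related over the new hadoop-nodes interface (the collect_compute_nodes helper is hypothetical; relation_get, related_units, and the hostname relation key match the hooks added in this diff):

from charmhelpers.core.hookenv import relation_get, related_units

def collect_compute_nodes():
    """Return (ip, hostname) pairs for every related compute node."""
    nodes = []
    for unit in related_units():
        ip = relation_get('private-address', unit=unit)  # the unit's address
        hostname = relation_get('hostname', unit=unit)   # set by the *-relation-joined hooks
        if ip and hostname:
            nodes.append((ip, hostname))
    return nodes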

25. By Charles Butler

Re-add env to shebang
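(i.e., restoring #!/usr/bin/env python as the first line of the hook script, so the interpreter is resolved via the environment)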

Revision history for this message
David Britton (dpb) wrote :

[0] Please convert all non-standard function comment headers to python docstrings.
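
For example, a banner header like the ones added in this diff:

###########################
# Start Job History server
###########################
def start_jobhistory():
    ...

could instead become a docstring:

def start_jobhistory():
    """Start the Job History server."""
    ...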

Revision history for this message
David Britton (dpb) wrote :

A couple of diff comments; I'll add more as I find them. I'm submitting as I go since this was time-sensitive.

Revision history for this message
David Britton (dpb) wrote :

Another round of diff comments added.

Revision history for this message
David Britton (dpb) wrote :

Please move the following code:

#################################### Global Data ################################

hdp_hellper_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'],'files', 'scripts', "HDPHelperScripts.sh")
if not os.path.isfile(hdp_hellper_script):
    log ("Erro ==> {} not found".format(hdp_hellper_script), "ERROR")

createDadoopConfDir = os.path.join(os.path.sep, os.environ['CHARM_DIR'],'files', 'scripts', "createHadoopConfDir.sh")
hdpScript = "hdp_scripts"
hdpScriptPath = os.path.join(os.path.sep,home, hdpScript,'scripts')
usersAndGroupsScript = os.path.join(os.path.sep, hdpScriptPath, "usersAndGroups.sh")
directoriesScript = os.path.join(os.path.sep, hdpScriptPath, "directories.sh")
tarfilename="hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz"
tarfilenamePre="hdp_manual_install_rpm_helper_files-2.1.1.385"
bashrc = os.path.join(os.path.sep, home, '.bashrc')
hadoopMemoryOptimizationData = os.path.join(os.path.sep, hdpScriptPath, "hdpMemOpt.txt");
hosts_path = os.path.join(os.path.sep, 'etc', 'hosts')
resourceManagerReady = False
##########################################################################################

# Start the Hook Logic Block
hooks = Hooks()

to right above:

if __name__ == "__main__":
    hooks.execute(sys.argv)

So all the non-method-definition code will be together?
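
A sketch of that final layout (one caveat: hooks = Hooks() must stay above the @hooks.hook decorators that reference it, so only the data definitions can move; module-level globals remain visible to the hook bodies at call time):

# ... imports, helper functions, hooks = Hooks(), and the @hooks.hook definitions ...

#################################### Global Data ################################
hdp_hellper_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'], 'files', 'scripts', "HDPHelperScripts.sh")
# ... remaining globals from the block quoted above, unchanged ...
resourceManagerReady = False
##########################################################################################

if __name__ == "__main__":
    hooks.execute(sys.argv)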

Revision history for this message
David Britton (dpb) wrote :

Last round; marking as Needs Fixing.

review: Needs Fixing
Revision history for this message
amir sanjar (asanjar) wrote :

David, I understand your concerns regarding the abbreviations (i.e. dn, nn, ...), but these names were chosen to match the abbreviations Hortonworks itself uses, both internally and externally. For example, Hortonworks uses:
 "/grid/hadoop/hdfs/nn"
    description: "Space separated list of directories where NameNode will store file system image."
or
    default: "/grid/hadoop/hdfs/dn"
    description: "Space separated list of directories where DataNode will store file system image."
or
    default: "/grid/hadoop/hdfs/snn"

Hortonworks users/developers would not see these as a problem, but as a common abbreviation convention.

Revision history for this message
amir sanjar (asanjar) wrote :

BTW David, anyone who is going to touch this charm has to understand big data and how the Hortonworks stack is installed and configured. This charm is not for common developers; it is targeted at Big Data developers.

Preview Diff

1=== removed file 'hadoop_cluster.yaml'
2--- hadoop_cluster.yaml 2014-08-25 16:33:37 +0000
3+++ hadoop_cluster.yaml 1970-01-01 00:00:00 +0000
4@@ -1,20 +0,0 @@
5-hdp-hadoop-cluster:
6- services:
7- "compute-node":
8- charm: "cs:~asanjar/trusty/hdp-hadoop"
9- num_units: 1
10- annotations:
11- "gui-x": "525.7681500094167"
12- "gui-y": "608.4847070634866"
13- "yarn-hdfs-master":
14- charm: "cs:~asanjar/trusty/hdp-hadoop"
15- num_units: 1
16- annotations:
17- "gui-x": "532"
18- "gui-y": "236.51529293651342"
19- relations:
20- - - "yarn-hdfs-master:namenode"
21- - "compute-node:datanode"
22- - - "yarn-hdfs-master:resourcemanager"
23- - "compute-node:nodemanager"
24- series: trusty
25
26=== added symlink 'hooks/compute-nodes-relation-changed'
27=== target is u'hdp-hadoop-common.py'
28=== added symlink 'hooks/compute-nodes-relation-joined'
29=== target is u'hdp-hadoop-common.py'
30=== added symlink 'hooks/compute-relation-joined'
31=== target is u'hdp-hadoop-common.py'
32=== added symlink 'hooks/hadoop-nodes-relation-joined'
33=== target is u'hdp-hadoop-common.py'
34=== modified file 'hooks/hdp-hadoop-common.py'
35--- hooks/hdp-hadoop-common.py 2014-11-03 17:56:47 +0000
36+++ hooks/hdp-hadoop-common.py 2014-11-21 15:37:12 +0000
37@@ -6,6 +6,7 @@
38 import shlex
39 import shutil
40 import inspect
41+import time
42
43 from hdputils import install_base_pkg, updateHDPDirectoryScript, config_all_nodes, \
44 setHadoopEnvVar, home, hdpScript, configureJAVA, config_all_nodes
45@@ -13,7 +14,7 @@
46
47 from charmhelpers.lib.utils import config_get, get_unit_hostname
48 from shutil import rmtree, copyfile
49-from charmhelpers.core.hookenv import log, Hooks, relation_get, relation_set, unit_get, open_port
50+from charmhelpers.core.hookenv import log, Hooks, relation_get, relation_set, unit_get, open_port, local_unit, related_units
51 from charmhelpers.core.host import service_start, service_stop, add_user_to_group
52 from time import sleep
53
54@@ -88,7 +89,6 @@
55 subprocess.call(createDadoopConfDir)
56
57
58-
59 def uninstall_base_pkg():
60 log("==> {}".format(inspect.stack()[0][3]),"INFO")
61 packages = ['ntp',
62@@ -144,9 +144,11 @@
63 def callHDFS_fs(command):
64 cmd = shlex.split("su hdfs -c '/usr/lib/hadoop/bin/hadoop fs {}'".format(command))
65 subprocess.call(cmd)
66-
67-def startJobHistory():
68- log("==> startJobHistory")
69+###########################
70+# Start Job History server
71+###########################
72+def start_jobhistory():
73+ log("==> start_jh")
74 path = os.path.join(os.path.sep, 'usr', 'lib', 'hadoop-yarn', 'bin', 'container-executor')
75 chownRecursive(path, 'root', 'hadoop')
76 os.chmod(path, 650)
77@@ -164,15 +166,28 @@
78 format(os.environ['MAPRED_USER'], hadoopConfDir))
79 subprocess.call(cmd)
80
81-def stopJobHistory():
82+###########################
83+# Stop Job History server
84+###########################
85+def stop_jobhistory():
86 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
87 os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
88- cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop resourcemanager'".\
89+ cmd = shlex.split("su {} -c '/usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config {} stop historyserver'".\
90 format(os.environ['MAPRED_USER'], hadoopConfDir))
91 subprocess.call(cmd)
92
93-# candidate for BD charm helper
94+############################################################################################
95+# restart Job History server - Must be done everytime yarn-site.xml and
96+# mpared-site.xml are modfied
97+############################################################################################
98+def restart_jobhistory():
99+ if is_jvm_service_active("JobHistoryServer"):
100+ stop_jobhistory()
101+ start_jobhistory()
102
103+##########################
104+# Start NameNode server
105+###########################
106 def start_namenode(hdfsUser):
107 log("==> start namenode for user={}".format(hdfsUser), "INFO")
108 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
109@@ -180,28 +195,40 @@
110 format(hdfsUser, hadoopConfDir))
111 subprocess.check_call(cmd)
112
113+###########################
114+# Stop Name Node server
115+###########################
116 def stop_namenode(hdfsUser):
117- log("==> start namenode for user={}".format(hdfsUser), "INFO")
118+ log("==> stop namenode for user={}".format(hdfsUser), "INFO")
119 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
120 cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop namenode'".\
121 format(hdfsUser, hadoopConfDir))
122 subprocess.call(cmd)
123-# candidate for BD charm helper
124
125+###########################
126+# Start Data Node
127+###########################
128 def start_datanode(hdfsUser):
129- log("==> start namenode for user={}".format(hdfsUser), "INFO")
130+ log("==> start datanode for user={}".format(hdfsUser), "INFO")
131 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
132 cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} start datanode'".\
133 format(hdfsUser, hadoopConfDir))
134 subprocess.check_call(cmd)
135
136+###########################
137+# Stop Data Node
138+###########################
139 def stop_datanode(hdfsUser):
140- log("==> start namenode for user={}".format(hdfsUser), "INFO")
141+ log("==> stop datanode for user={}".format(hdfsUser), "INFO")
142 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
143 cmd = shlex.split("su {} -c '/usr/lib/hadoop/sbin/hadoop-daemon.sh --config {} stop datanode'".\
144 format(hdfsUser, hadoopConfDir))
145 subprocess.call(cmd)
146 # candidate for BD charm helper
147+
148+#############################################
149+# Configure YARN-SITE.XML and MAPRED-SITE.XML
150+#############################################
151 def configureYarn(RMhostname):
152 yarnConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],"yarn-site.xml")
153 mapConfDir = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],"mapred-site.xml")
154@@ -220,8 +247,10 @@
155 setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.webapp.address", RMhostname+":19888")
156 setHadoopConfigXML(mapConfDir, "mapreduce.jobhistory.address", RMhostname+":10020")
157
158-# candidate for BD charm helper
159-def start_RM(yarnUser):
160+################################
161+# Start Resource Manager server
162+################################
163+def start_resourcemanager(yarnUser):
164 log("==> start resourcemanager", "INFO")
165 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
166 os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
167@@ -229,23 +258,32 @@
168 format(yarnUser, hadoopConfDir))
169 subprocess.call(cmd)
170
171-def stop_RM(yarnUser):
172+################################
173+# Stop Resource Manager server
174+################################
175+def stop_resourcemanager(yarnUser):
176 log("==> stop resourcemanager", "INFO")
177 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
178 os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
179 cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} stop resourcemanager'".\
180 format(yarnUser, hadoopConfDir))
181 subprocess.check_call(cmd)
182-# candidate for BD charm helper
183-def start_NM(yarnUser):
184+
185+################################################
186+# Start Node Manager daemon on each compute node
187+################################################
188+def start_nodemanager(yarnUser):
189 log("==> start nodemanager", "INFO")
190 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
191 os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
192 cmd = shlex.split("su {} -c '/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config {} start nodemanager'".\
193 format(yarnUser, hadoopConfDir))
194- subprocess.call(cmd)
195+ subprocess.check_call(cmd)
196
197-def stop_NM(yarnUser):
198+################################################
199+# Stop Node Manager daemon on each compute node
200+################################################
201+def stop_nodemanager(yarnUser):
202 log("==> stop nodemanager", "INFO")
203 hadoopConfDir = os.environ["HADOOP_CONF_DIR"]
204 os.environ["HADOOP_LIBEXEC_DIR"]="/usr/lib/hadoop/libexec"
205@@ -253,29 +291,52 @@
206 format(yarnUser, hadoopConfDir))
207 subprocess.call(cmd)
208
209+################################################
210+# Stop all running hadoop services
211+# NOTE: Order is important - DO NOT CHANGE
212+################################################
213 def stop_hadoop_services():
214 if is_jvm_service_active("ResourceManager"):
215- stop_RM(os.environ['YARN_USER'])
216+ stop_resourcemanager(os.environ['YARN_USER'])
217 if is_jvm_service_active("NodeManager"):
218- stop_NM(os.environ['YARN_USER'])
219+ stop_nodemanager(os.environ['YARN_USER'])
220 if is_jvm_service_active("NameNode"):
221 stop_namenode(os.environ['HDFS_USER'])
222 if is_jvm_service_active("DataNode"):
223 stop_datanode(os.environ['HDFS_USER'])
224+ if is_jvm_service_active("JobHistoryServer"):
225+ stop_jobhistory()
226
227+################################################
228+# restart all running hadoop services
229+# NOTE: Order is important - DO NOT CHENAGE
230+################################################
231 def restart_hadoop_services():
232- if is_jvm_service_active("ResourceManager"):
233- stop_RM(os.environ['YARN_USER'])
234- start_RM(os.environ['YARN_USER'])
235- if is_jvm_service_active("NodeManager"):
236- stop_NM(os.environ['YARN_USER'])
237- start_NM(os.environ['YARN_USER'])
238 if is_jvm_service_active("NameNode"):
239 stop_namenode(os.environ['HDFS_USER'])
240 start_namenode(os.environ['HDFS_USER'])
241+ if is_jvm_service_active("ResourceManager"):
242+ stop_resourcemanager(os.environ['YARN_USER'])
243+ start_resourcemanager(os.environ['YARN_USER'])
244+ if is_jvm_service_active("NodeManager"):
245+ stop_nodemanager(os.environ['YARN_USER'])
246+ start_nodemanager(os.environ['YARN_USER'])
247 if is_jvm_service_active("DataNode"):
248 stop_datanode(os.environ['HDFS_USER'])
249 start_datanode(os.environ['HDFS_USER'])
250+ restart_jobhistory()
251+
252+def wait_for_hadoop_service(service):
253+ ticks = time.time()
254+ while True:
255+ if (time.time() - ticks) > 200:
256+ log("Error ==> Reached timeout value for hadoop service {}..".format(service), "ERROR")
257+ sys.exit(1)
258+ if not is_jvm_service_active(service):
259+ time.sleep(2)
260+ log("Waiting.. ==> {} not ready..".format(service),"INFO")
261+ continue
262+ break
263
264 def configureHDFS(hostname):
265 hdfsConfPath = os.path.join(os.path.sep, os.environ['HADOOP_CONF_DIR'],'hdfs-site.xml')
266@@ -301,7 +362,6 @@
267 bashrc = os.path.join(os.path.sep, home, '.bashrc')
268 hadoopMemoryOptimizationData = os.path.join(os.path.sep, hdpScriptPath, "hdpMemOpt.txt");
269 hosts_path = os.path.join(os.path.sep, 'etc', 'hosts')
270-nameNodeReady = False
271 resourceManagerReady = False
272 ##########################################################################################
273
274@@ -332,6 +392,7 @@
275 config_hadoop_nodes()
276 fileSetKV(hosts_path, unit_get('private-address')+' ', get_unit_hostname())
277
278+
279 @hooks.hook('resourcemanager-relation-joined')
280 def resourcemanager_relation_joined():
281 log ("==> resourcemanager-relation-joined","INFO")
282@@ -339,12 +400,14 @@
283 relation_set(resourceManagerReady=True)
284 relation_set(resourceManager_hostname=get_unit_hostname())
285 return
286+ if not is_jvm_service_active("NameNode"):
287+ sys.exit(0)
288 setHadoopEnvVar()
289 relation_set(resourceManager_ip=unit_get('private-address'))
290 relation_set(resourceManager_hostname=get_unit_hostname())
291 configureYarn(unit_get('private-address'))
292- start_RM(os.environ["YARN_USER"])
293- startJobHistory()
294+ start_resourcemanager(os.environ["YARN_USER"])
295+ start_jobhistory()
296 open_port(8025)
297 open_port(8030)
298 open_port(8050)
299@@ -365,8 +428,10 @@
300 rm_ip = relation_get('private-address')
301 configureYarn(rm_ip)
302 fileSetKV(hosts_path, rm_ip+' ', relation_get('resourceManager_hostname'))
303- start_NM(os.environ["YARN_USER"])
304-
305+ # nodemanager requires data node daemon
306+ if not is_jvm_service_active("DataNode"):
307+ start_datanode(os.environ['HDFS_USER'])
308+ start_nodemanager(os.environ["YARN_USER"])
309 open_port(8025)
310 open_port(8030)
311 open_port(8050)
312@@ -376,6 +441,7 @@
313 open_port(10020)
314 relation_set(nodemanager_hostname=get_unit_hostname())
315
316+
317 @hooks.hook('resourcemanager-relation-broken')
318 def resourcemanager_relation_broken():
319 log ("Configuring resourcemanager - broken phase","INFO")
320@@ -404,13 +470,16 @@
321 configureHDFS(unit_get('private-address'))
322 format_namenode(os.environ["HDFS_USER"])
323 start_namenode(os.environ["HDFS_USER"])
324-
325+ start_jobhistory()
326 open_port(8020)
327 open_port(8010)
328 open_port(50070)
329 open_port(50075)
330 open_port(8480)
331 open_port(50470)
332+ if not is_jvm_service_active("NameNode"):
333+ log("error ==> NameNode failed to start")
334+ sys.exit(1)
335 sleep(5)
336 relation_set(nameNodeReady=True)
337
338@@ -423,13 +492,15 @@
339 if not nameNodeReady:
340 sys.exit(0)
341 setHadoopEnvVar()
342- nodeType="namenode"
343- namenode_hostname = relation_get("namenode_hostname")
344- namenode_ip = relation_get('private-address')
345- fileSetKV(hosts_path, namenode_ip+' ', namenode_hostname)
346- configureHDFS(namenode_hostname)
347+ nn_hostname = relation_get("namenode_hostname")
348+ nn_ip = relation_get('private-address')
349+ fileSetKV(hosts_path, nn_ip+' ', nn_hostname)
350+ configureHDFS(nn_ip)
351 setDirPermission(os.environ['DFS_DATA_DIR'], os.environ['HDFS_USER'], os.environ['HADOOP_GROUP'], 0750)
352 start_datanode(os.environ["HDFS_USER"])
353+ if not is_jvm_service_active("DataNode"):
354+ log("error ==> DataNode failed to start")
355+ sys.exit(1)
356 open_port(8010)
357 open_port(8480)
358 open_port(50010)
359@@ -448,7 +519,7 @@
360
361 @hooks.hook('namenode-relation-changed')
362 def namenode_relation_changed():
363- dn_host = relation_get('dn-hostname')
364+ dn_host = relation_get('dn_hostname')
365 if dn_host == None:
366 sys.exit(0)
367 log("Configuring namenode - changed phase", "INFO")
368@@ -464,6 +535,30 @@
369 setHadoopEnvVar()
370 stop_hadoop_services()
371
372+@hooks.hook('compute-nodes-relation-changed')
373+def compute_nodes_relation_changed():
374+ log("==> Configuring compute-nodes, local ID = {} - Changed phase".format(local_unit()),"INFO")
375+ nodeList = related_units()
376+ for n in nodeList:
377+ dn_ip = relation_get('private-address', unit=n)
378+ hostname = relation_get('hostname', unit=n)
379+ log("==> Configuring compute-nodes {}={}".format(dn_ip, hostname),"INFO")
380+ if hostname != None:
381+ fileSetKV(hosts_path, dn_ip+' ', hostname)
382+ else:
383+ sys.exit(0)
384+
385+@hooks.hook('compute-nodes-relation-joined')
386+def compute_nodes_relation_joined():
387+ log("==> compute_nodes_relation_joined {}".format(get_unit_hostname()),"INFO")
388+ relation_set(hostname=get_unit_hostname())
389+
390+
391+@hooks.hook('hadoop-nodes-relation-joined')
392+def hadoop_nodes_relation_joined():
393+ log("==> hadoop_nodes_relation_joined {}".format(get_unit_hostname()),"INFO")
394+ relation_set(hostname=get_unit_hostname())
395+
396
397
398
399
400=== modified file 'hooks/hdputils.py'
401--- hooks/hdputils.py 2014-08-11 03:35:08 +0000
402+++ hooks/hdputils.py 2014-11-21 15:37:12 +0000
403@@ -22,6 +22,8 @@
404 subprocess.call(cmd)
405 apt_update()
406 apt_install(packages)
407+ if not os.path.isdir(os.path.join(os.path.sep,'usr','lib', 'hadoop')):
408+ log("Error, apt-get install Hadoop failed", "ERROR")
409 os.chdir(home);
410 wgetPkg("http://public-repo-1.hortonworks.com/HDP/tools/2.1.1.0/hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz","")
411 if tarfile.is_tarfile(tarfilename):
412
413=== modified file 'metadata.yaml'
414--- metadata.yaml 2014-07-23 12:16:51 +0000
415+++ metadata.yaml 2014-11-21 15:37:12 +0000
416@@ -4,22 +4,19 @@
417 description: |
418 Hadoop is a software platform that lets one easily write and
419 run applications that process vast amounts of data.
420-categories: ["applications"]
421+tags: ["applications"]
422 provides:
423 namenode:
424 interface: dfs
425 resourcemanager:
426 interface: mapred
427- ganglia:
428- interface: monitor
429+ hadoop-nodes:
430+ interface: mapred
431 requires:
432 datanode:
433 interface: dfs
434- secondarynamenode:
435- interface: dfs
436 nodemanager:
437 interface: mapred
438- mapred-namenode:
439- interface: dfs
440- elasticsearch:
441- interface: elasticsearch
442+peers:
443+ compute-nodes:
444+ interface: mapred
445
446=== modified file 'tests/01-hadoop-cluster-deployment-1.py'
447--- tests/01-hadoop-cluster-deployment-1.py 2014-09-17 15:27:28 +0000
448+++ tests/01-hadoop-cluster-deployment-1.py 2014-11-21 15:37:12 +0000
449@@ -6,12 +6,12 @@
450 class TestDeployment(object):
451 def __init__(self):
452 self.d = amulet.Deployment(series='trusty')
453- bpath = os.path.join(os.path.dirname( __file__), "../hadoop_cluster.yaml")
454+ bpath = os.path.join(os.path.dirname( __file__), "hadoop_cluster.yaml")
455 f = open(bpath)
456 bun = f.read()
457 self.d.load(yaml.safe_load(bun))
458 try:
459- self.d.setup(timeout=9000)
460+ self.d.setup(timeout=900000)
461 self.d.sentry.wait()
462 except amulet.helpers.TimeoutError:
463 amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
464
465=== added file 'tests/hadoop_cluster.yaml'
466--- tests/hadoop_cluster.yaml 1970-01-01 00:00:00 +0000
467+++ tests/hadoop_cluster.yaml 2014-11-21 15:37:12 +0000
468@@ -0,0 +1,20 @@
469+hdp-hadoop-cluster:
470+ services:
471+ "compute-node":
472+ charm: "cs:~asanjar/trusty/hdp-hadoop"
473+ num_units: 4
474+ annotations:
475+ "gui-x": "525.7681500094167"
476+ "gui-y": "608.4847070634866"
477+ "yarn-hdfs-master":
478+ charm: "cs:~asanjar/trusty/hdp-hadoop"
479+ num_units: 1
480+ annotations:
481+ "gui-x": "532"
482+ "gui-y": "236.51529293651342"
483+ relations:
484+ - - "yarn-hdfs-master:namenode"
485+ - "compute-node:datanode"
486+ - - "yarn-hdfs-master:resourcemanager"
487+ - "compute-node:nodemanager"
488+ series: trusty
