Merge lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark

Proposed by Kevin W Monroe
Status: Merged
Merged at revision: 35
Proposed branch: lp:~bigdata-dev/charms/trusty/apache-spark/trunk
Merge into: lp:charms/trusty/apache-spark
Diff against target: 94 lines (+17/-14)
5 files modified
README.md (+1/-1)
metadata.yaml (+1/-1)
resources.yaml (+2/-2)
tests/00-setup (+6/-3)
tests/100-deploy-spark-hdfs-yarn (+7/-7)
To merge this branch: bzr merge lp:~bigdata-dev/charms/trusty/apache-spark/trunk
Reviewer Review Type Date Requested Status
Kevin W Monroe Approve
Review via email: mp+271382@code.launchpad.net
Revision history for this message
Kevin W Monroe (kwmonroe) :
review: Approve

Preview Diff

=== modified file 'README.md'
--- README.md 2015-08-20 22:04:25 +0000
+++ README.md 2015-09-16 21:18:54 +0000
@@ -1,6 +1,6 @@
 ## Overview

-### Spark 1.3.x cluster for YARN & HDFS
+### Spark 1.4.x cluster for YARN & HDFS
 Apache Spark™ is a fast and general purpose engine for large-scale data
 processing. Key features:


=== modified file 'metadata.yaml'
--- metadata.yaml 2015-06-04 22:00:39 +0000
+++ metadata.yaml 2015-09-16 21:18:54 +0000
@@ -1,5 +1,5 @@
 name: apache-spark
-summary: Data warehouse infrastructure built on top of Hadoop
+summary: Apache Spark is a fast engine for large-scale data processing
 maintainer: Amir Sanjar <amir.sanjar@canonical.com>
 description: |
  Apache Spark is a fast and general engine for large-scale data processing.

=== modified file 'resources.yaml'
--- resources.yaml 2015-08-24 23:28:33 +0000
+++ resources.yaml 2015-09-16 21:18:54 +0000
@@ -11,6 +11,6 @@
   hash: 204fc101f3c336692feeb1401225407c575a90d42af5d7ab83937b59c42a6af9
   hash_type: sha256
  spark-x86_64:
-  url: http://www.apache.org/dist/spark/spark-1.3.0/spark-1.3.0-bin-hadoop2.4.tgz
-  hash: 094b5116231b6fec8d3991492c06683126ce66a17f910d82780a1a9106b41547
+  url: http://www.apache.org/dist/spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.4.tgz
+  hash: bc8c79188db9a2b6104da21b3b380838edf1e40acbc79c84ef2ed2ad82ecdbc3
   hash_type: sha256

=== added file 'resources/python/jujuresources-0.2.11.tar.gz'
Binary files resources/python/jujuresources-0.2.11.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.11.tar.gz 2015-09-16 21:18:54 +0000 differ
=== removed file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz 2015-06-29 20:49:03 +0000 and resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 differ
=== modified file 'tests/00-setup'
--- tests/00-setup 2015-04-22 15:27:27 +0000
+++ tests/00-setup 2015-09-16 21:18:54 +0000
@@ -1,5 +1,8 @@
 #!/bin/bash

-sudo add-apt-repository ppa:juju/stable -y
-sudo apt-get update
-sudo apt-get install python3 amulet -y
+if ! dpkg -s amulet &> /dev/null; then
+    echo Installing Amulet...
+    sudo add-apt-repository -y ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get -y install amulet
+fi

=== modified file 'tests/100-deploy-spark-hdfs-yarn'
--- tests/100-deploy-spark-hdfs-yarn 2015-08-24 23:28:33 +0000
+++ tests/100-deploy-spark-hdfs-yarn 2015-09-16 21:18:54 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3

 import unittest
 import amulet
@@ -14,10 +14,10 @@
     def setUpClass(cls):
         cls.d = amulet.Deployment(series='trusty')
         # Deploy a hadoop cluster
-        cls.d.add('yarn-master', charm='cs:~bigdata-dev/trusty/apache-hadoop-yarn-master')
-        cls.d.add('hdfs-master', charm='cs:~bigdata-dev/trusty/apache-hadoop-hdfs-master')
-        cls.d.add('compute-slave', charm='cs:~bigdata-dev/trusty/apache-hadoop-compute-slave', units=3)
-        cls.d.add('plugin', charm='cs:~bigdata-dev/trusty/apache-hadoop-plugin')
+        cls.d.add('yarn-master', charm='cs:trusty/apache-hadoop-yarn-master')
+        cls.d.add('hdfs-master', charm='cs:trusty/apache-hadoop-hdfs-master')
+        cls.d.add('compute-slave', charm='cs:trusty/apache-hadoop-compute-slave', units=3)
+        cls.d.add('plugin', charm='cs:trusty/apache-hadoop-plugin')
         cls.d.relate('yarn-master:namenode', 'hdfs-master:namenode')
         cls.d.relate('compute-slave:nodemanager', 'yarn-master:nodemanager')
         cls.d.relate('compute-slave:datanode', 'hdfs-master:datanode')
@@ -25,11 +25,11 @@
         cls.d.relate('plugin:namenode', 'hdfs-master:namenode')

         # Add Spark Service
-        cls.d.add('spark', charm='cs:~bigdata-dev/trusty/apache-spark')
+        cls.d.add('spark', charm='cs:trusty/apache-spark')
         cls.d.relate('spark:hadoop-plugin', 'plugin:hadoop-plugin')

         cls.d.setup(timeout=3600)
-        cls.d.sentry.wait()
+        cls.d.sentry.wait(timeout=3600)
         cls.unit = cls.d.sentry.unit['spark/0']

 ###########################################################################
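The resources.yaml hunk above swaps both the download URL and its sha256 checksum, and jujuresources rejects a fetched archive whose digest does not match the declared hash. A minimal sketch of that kind of check (the sample file and digest below are illustrative stand-ins, not the real Spark tarball):

```shell
#!/bin/bash
# Sketch of a sha256 verification like the one applied to entries in
# resources.yaml (hash_type: sha256). File and digest are examples only.
verify_sha256() {
    local file="$1" expected="$2"
    local actual
    actual=$(sha256sum "$file" | cut -d' ' -f1)
    [ "$actual" = "$expected" ]
}

# Demonstrate with a locally created file whose digest is known:
printf 'hello\n' > /tmp/sample.txt
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
if verify_sha256 /tmp/sample.txt "$expected"; then
    echo "hash ok"
else
    echo "hash mismatch" >&2
fi
```

Pinning the checksum alongside the URL is what makes the version bump in this branch atomic: an old cached 1.3.0 tarball fails the new hash and is re-fetched.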
