Merge lp:~chris.macnaughton/openstack-mojo-specs/ceph-upgrade into lp:~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs-1709

Proposed by Chris MacNaughton
Status: Merged
Approved by: Ryan Beisner
Approved revision: 341
Merged at revision: 336
Proposed branch: lp:~chris.macnaughton/openstack-mojo-specs/ceph-upgrade
Merge into: lp:~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs-1709
Diff against target: 710 lines (+454/-14)
21 files modified
helper/bundles/ceph-charm-migration.yaml (+92/-0)
helper/bundles/charm-ceph.yaml (+88/-0)
helper/collect/collect-ceph-default (+3/-2)
helper/setup/delete_application.py (+21/-0)
helper/tests/test_ceph_store.py (+29/-12)
helper/utils/kiki.py (+12/-0)
helper/utils/mojo_utils.py (+6/-0)
specs/storage/ceph/charm_migration/icehouse/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/icehouse/manifest (+25/-0)
specs/storage/ceph/charm_migration/kilo/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/kilo/manifest (+25/-0)
specs/storage/ceph/charm_migration/liberty/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/liberty/manifest (+25/-0)
specs/storage/ceph/charm_migration/mitaka/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/mitaka/manifest (+25/-0)
specs/storage/ceph/charm_migration/newton/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/newton/manifest (+25/-0)
specs/storage/ceph/charm_migration/ocata/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/ocata/manifest (+25/-0)
specs/storage/ceph/charm_migration/pike/SPEC_INFO.txt (+4/-0)
specs/storage/ceph/charm_migration/pike/manifest (+25/-0)
To merge this branch: bzr merge lp:~chris.macnaughton/openstack-mojo-specs/ceph-upgrade
Reviewer Review Type Date Requested Status
Ryan Beisner Needs Fixing
Review via email: mp+331146@code.launchpad.net
336. By Chris MacNaughton

spec info updates and lint fixes

337. By Chris MacNaughton

syntax check for the test

338. By Chris MacNaughton

migrate ceph charms to charmstore

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

At the end of this spec, we have ceph-mon + ceph-osd:

Model Controller Cloud/Region Version SLA
icey icey-serverstack serverstack/serverstack 2.2.2 unsupported

App Version Status Scale Charm Store Rev OS Notes
ceph-mon 10.2.7 active 3 ceph-mon local 0 ubuntu
ceph-osd 10.2.7 active 6 ceph-osd local 15 ubuntu
ntp 4.2.8p4+dfsg active 6 ntp local 0 ubuntu

Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* active idle 6 10.5.0.22 Unit is ready and clustered
ceph-mon/1 active idle 7 10.5.0.6 Unit is ready and clustered
ceph-mon/2 active idle 8 10.5.0.24 Unit is ready and clustered
ceph-osd/0* active idle 0 10.5.0.10 Unit is ready (1 OSD)
  ntp/1 active idle 10.5.0.10 123/udp Unit is ready
ceph-osd/1 active idle 1 10.5.0.9 Unit is ready (1 OSD)
  ntp/0* active executing 10.5.0.9 123/udp (update-status) Unit is ready
ceph-osd/2 active idle 2 10.5.0.28 Unit is ready (1 OSD)
  ntp/2 active idle 10.5.0.28 123/udp Unit is ready
ceph-osd/3 active idle 9 10.5.0.46 Unit is ready (1 OSD)
  ntp/3 active idle 10.5.0.46 123/udp Unit is ready
ceph-osd/4 active idle 10 10.5.0.49 Unit is ready (1 OSD)
  ntp/4 active idle 10.5.0.49 123/udp Unit is ready
ceph-osd/5 active idle 11 10.5.0.11 Unit is ready (1 OSD)
  ntp/5 active idle 10.5.0.11 123/udp Unit is ready

Machine State DNS Inst id Series AZ Message
0 started 10.5.0.10 050027ef-ba50-4974-90c5-eeeed901f742 xenial nova ACTIVE
1 started 10.5.0.9 2c86fed0-43ce-45ab-8fb7-c422a3453927 xenial nova ACTIVE
2 started 10.5.0.28 efd90b75-f39a-4a09-9a9b-5f5b4842413e xenial nova ACTIVE
6 started 10.5.0.22 085b71e1-fbd0-4b57-b199-acf1c12e095f xenial nova ACTIVE
7 started 10.5.0.6 e0769d30-0b85-4dd0-bf51-c09a5cd25b2b xenial nova ACTIVE
8 started 10.5.0.24 4ed91ac1-3e93-4a65-9bed-287f5208ea32 xenial nova ACTIVE
9 started 10.5.0.46 aa1d3f9d-25bf-46a1-92be-25720371ab59 xenial nova ACTIVE
10 started 10.5.0.49 17aaa991-cc1b-4040-8fd7-0bea88e3843d xenial nova ACTIVE
11 started 10.5.0.11 00d80a51-1c56-4d98-baf1-fe2a771164ff xenial nova ACTIVE

Relation provider Requirer Interface Type
ceph-mon:mon ceph-mon:mon ceph peer
ceph-mon:osd ceph-osd:mon ceph-osd regular
ceph-osd:juju-info ntp:juju-info juju-info subordinate
ntp:ntp-peers ntp:ntp-peers ntp peer

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Some output from the mojo run: https://pastebin.ubuntu.com/25615994/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

helper/setup/delete_application.py:19:1: E305 expected 2 blank lines after class or function definition, found 1
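For context (this is an illustration, not the actual file), E305 is the pycodestyle rule that module-level code following a function or class definition must be preceded by two blank lines:

```python
def main(argv):
    """Placeholder main; stands in for the real script body."""
    return 0


# With only ONE blank line above this statement, flake8 reports:
#   E305 expected 2 blank lines after class or function definition, found 1
# The two blank lines used here satisfy the check.
if __name__ == "__main__":
    main([])
```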

review: Needs Fixing
339. By Chris MacNaughton

style fix

Revision history for this message
Ryan Beisner (1chb1n) wrote :

Let's make some changes in naming for clarity. This is not a ceph upgrade. It is a charm migration.

So, instead of "specs/storage/ceph/upgrade" perhaps:

specs/storage/ceph/charm-migrate

Also update bundle names and other assets to suit. Thank you.

review: Needs Fixing
340. By Chris MacNaughton

rename from upgrade to migration

341. By Chris MacNaughton

replace hyphen with underscore

Preview Diff

=== added file 'helper/bundles/ceph-charm-migration.yaml'
--- helper/bundles/ceph-charm-migration.yaml 1970-01-01 00:00:00 +0000
+++ helper/bundles/ceph-charm-migration.yaml 2017-09-25 19:22:27 +0000
@@ -0,0 +1,92 @@
+# This bundle is intended to closely resemble the ceph-base bundle from openstack-bundles.
+base:
+  relations:
+    - - ceph-osd:mon
+      - ceph-mon:osd
+    - - ceph:bootstrap-source
+      - ceph-mon:bootstrap-source
+  services:
+    ceph:
+      annotations:
+        gui-x: '750'
+        gui-y: '500'
+      charm: ceph
+      num_units: 3
+      options:
+        ephemeral-unmount: /mnt
+        osd-devices: /dev/vdb /dev/sdb /dev/xvdb
+        osd-reformat: 'no'
+        fsid: 5a791d94-980b-11e4-b6f6-3c970e8b1cf7
+        monitor-secret: AQAi5a9UeJXUExAA+By9u+GPhl8/XiUQ4nwI3A==
+    ceph-mon:
+      annotations:
+        gui-x: '750'
+        gui-y: '500'
+      charm: ceph-mon
+      num_units: 3
+      options:
+        expected-osd-count: 6
+        no-bootstrap: True
+    ceph-osd:
+      annotations:
+        gui-x: '1000'
+        gui-y: '500'
+      charm: ceph-osd
+      num_units: 6
+      options:
+        ephemeral-unmount: /mnt
+        osd-devices: /dev/vdb /dev/sdb /dev/xvdb
+        osd-reformat: 'no'
+      to: ceph
+# icehouse
+trusty-icehouse:
+  inherits: base
+  series: trusty
+# kilo
+trusty-kilo:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-kilo
+# liberty
+trusty-liberty:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-liberty
+# mitaka
+trusty-mitaka:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-mitaka
+xenial-mitaka:
+  inherits: base
+  series: xenial
+# newton
+xenial-newton:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-newton
+yakkety-newton:
+  inherits: base
+  series: yakkety
+# ocata
+xenial-ocata:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-ocata
+zesty-ocata:
+  inherits: base
+  series: zesty
+# pike
+xenial-pike:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-pike
+artful-pike:
+  inherits: base
+  series: artful
=== added file 'helper/bundles/charm-ceph.yaml'
--- helper/bundles/charm-ceph.yaml 1970-01-01 00:00:00 +0000
+++ helper/bundles/charm-ceph.yaml 2017-09-25 19:22:27 +0000
@@ -0,0 +1,88 @@
+# This bundle is intended to closely resemble the ceph-base bundle from openstack-bundles.
+base:
+  relations:
+    - - ceph-osd:mon
+      - ceph:osd
+    - - ntp:juju-info
+      - ceph-osd:juju-info
+  services:
+    ceph:
+      annotations:
+        gui-x: '750'
+        gui-y: '500'
+      charm: ceph
+      num_units: 3
+      options:
+        ephemeral-unmount: /mnt
+        osd-devices: /dev/vdb /dev/sdb /dev/xvdb
+        osd-reformat: 'yes'
+        fsid: 5a791d94-980b-11e4-b6f6-3c970e8b1cf7
+        monitor-secret: AQAi5a9UeJXUExAA+By9u+GPhl8/XiUQ4nwI3A==
+    ceph-osd:
+      annotations:
+        gui-x: '1000'
+        gui-y: '500'
+      charm: ceph-osd
+      num_units: 3
+      options:
+        ephemeral-unmount: /mnt
+        osd-devices: /dev/vdb /dev/sdb /dev/xvdb
+        osd-reformat: 'yes'
+    ntp:
+      annotations:
+        gui-x: '1000'
+        gui-y: '0'
+      charm: ntp
+      num_units: 0
+# icehouse
+trusty-icehouse:
+  inherits: base
+  series: trusty
+# kilo
+trusty-kilo:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-kilo
+# liberty
+trusty-liberty:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-liberty
+# mitaka
+trusty-mitaka:
+  inherits: base
+  series: trusty
+  overrides:
+    source: cloud:trusty-mitaka
+xenial-mitaka:
+  inherits: base
+  series: xenial
+# newton
+xenial-newton:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-newton
+yakkety-newton:
+  inherits: base
+  series: yakkety
+# ocata
+xenial-ocata:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-ocata
+zesty-ocata:
+  inherits: base
+  series: zesty
+# pike
+xenial-pike:
+  inherits: base
+  series: xenial
+  overrides:
+    source: cloud:xenial-pike
+artful-pike:
+  inherits: base
+  series: artful
=== modified file 'helper/collect/collect-ceph-default'
--- helper/collect/collect-ceph-default 2017-09-06 17:15:11 +0000
+++ helper/collect/collect-ceph-default 2017-09-25 19:22:27 +0000
@@ -1,3 +1,4 @@
-ceph-mon git://github.com/openstack/charm-ceph-mon
-ceph-osd git://github.com/openstack/charm-ceph-osd
+ceph cs:~openstack-charmers-next/ceph
+ceph-mon cs:~openstack-charmers-next/ceph-mon
+ceph-osd cs:~openstack-charmers-next/ceph-osd
 ntp git://git.launchpad.net/~thedac/ntp-charm;revno=zesty
=== added file 'helper/setup/delete_application.py'
--- helper/setup/delete_application.py 1970-01-01 00:00:00 +0000
+++ helper/setup/delete_application.py 2017-09-25 19:22:27 +0000
@@ -0,0 +1,21 @@
+#!/usr/bin/env python
+import sys
+import utils.mojo_utils as mojo_utils
+import logging
+import argparse
+
+
+def main(argv):
+    logging.basicConfig(level=logging.INFO)
+    parser = argparse.ArgumentParser()
+    parser.add_argument("application", nargs="*")
+    options = parser.parse_args()
+    unit_args = mojo_utils.parse_mojo_arg(
+        options,
+        'application', multiargs=True)
+    for application in unit_args:
+        mojo_utils.delete_application(application)
+
+
+if __name__ == "__main__":
+    sys.exit(main(sys.argv))
=== modified file 'helper/tests/test_ceph_store.py'
--- helper/tests/test_ceph_store.py 2017-09-06 05:54:23 +0000
+++ helper/tests/test_ceph_store.py 2017-09-25 19:22:27 +0000
@@ -3,23 +3,40 @@
 import sys
 
 import utils.mojo_utils as mojo_utils
+import argparse
 
 
 def main(argv):
-    mojo_utils.remote_run(
-        'ceph-mon/0', 'ceph osd pool create rbd 128')
-    # Check
-    mojo_utils.remote_run('ceph-mon/0', 'echo 123456789 > /tmp/input.txt')
-    mojo_utils.remote_run(
-        'ceph-mon/0', 'rados put -p rbd test_input /tmp/input.txt')
-
-    # Check
-    mojo_utils.remote_run(
-        'ceph-mon/1', 'rados get -p rbd test_input /tmp/input.txt')
-    output = mojo_utils.remote_run('ceph-mon/1', 'cat /tmp/input.txt')
+    parser = argparse.ArgumentParser()
+    parser.add_argument("application", default="ceph-mon", nargs="*")
+    parser.add_argument("units", default=[0, 1], nargs="*")
+    options = parser.parse_args()
+    application = mojo_utils.parse_mojo_arg(options,
+                                            'application', multiargs=False)
+    units = mojo_utils.parse_mojo_arg(options, 'units', multiargs=True)
+
+    mojo_utils.remote_run(
+        '{}/{}'.format(application, units[0]), 'ceph osd pool create rbd 128')
+    # Check
+    mojo_utils.remote_run(
+        '{}/{}'.format(application, units[0]),
+        'echo 123456789 > /tmp/input.txt')
+    mojo_utils.remote_run(
+        '{}/{}'.format(application, units[0]),
+        'rados put -p rbd test_input /tmp/input.txt')
+
+    # Check
+    mojo_utils.remote_run(
+        '{}/{}'.format(application, units[-1]),
+        'rados get -p rbd test_input /tmp/input.txt')
+    output = mojo_utils.remote_run(
+        '{}/{}'.format(application, units[-1]),
+        'cat /tmp/input.txt')
 
     # Cleanup
-    mojo_utils.remote_run('ceph-mon/2', 'rados rm -p rbd test_input')
+    mojo_utils.remote_run(
+        '{}/{}'.format(application, units[-1]),
+        'rados rm -p rbd test_input')
     if output[0].strip() != "123456789":
         sys.exit(1)
 
 
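The parameterisation above reduces to formatting unit names from the APPLICATION and UNITS arguments: the pool object is written via the first unit and read back via the last, so the check exercises the cluster rather than a single daemon. A standalone sketch of that selection (values shown are the manifest defaults):

```python
# Unit-name selection as test_ceph_store.py now builds it; the
# application/units values mirror APPLICATION=ceph-mon UNITS="0 1".
application = 'ceph-mon'
units = [0, 1]

writer = '{}/{}'.format(application, units[0])   # unit that writes the object
reader = '{}/{}'.format(application, units[-1])  # unit that reads it back

print(writer)  # ceph-mon/0
print(reader)  # ceph-mon/1
```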
=== modified file 'helper/utils/kiki.py'
--- helper/utils/kiki.py 2017-09-05 22:27:05 +0000
+++ helper/utils/kiki.py 2017-09-25 19:22:27 +0000
@@ -251,6 +251,18 @@
251251
252252
253@cached253@cached
254def remove_application():
255 """Translate argument for remove-unit
256
257 @returns string Juju argument for remove-unit
258 """
259 if min_version('2.1'):
260 return "remove-application"
261 else:
262 return "remove-service"
263
264
265@cached
254def juju_state():266def juju_state():
255 """Translate identifier for juju-state267 """Translate identifier for juju-state
256268
257269
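This follows kiki's usual pattern: translate a logical command name into the concrete Juju subcommand for the installed client version. A self-contained sketch, with `min_version` simplified to plain version-tuple comparison (the real helper inspects the installed Juju, so the signature here is an assumption):

```python
def min_version(installed, required):
    """True if installed >= required, comparing dotted 'X.Y' version strings."""
    def to_tuple(v):
        return tuple(int(part) for part in v.split('.'))
    return to_tuple(installed) >= to_tuple(required)


def remove_application(juju_version):
    """Juju 2.1+ renamed 'remove-service' to 'remove-application'."""
    if min_version(juju_version, '2.1'):
        return "remove-application"
    return "remove-service"


print(remove_application('2.2'))   # remove-application
print(remove_application('1.25'))  # remove-service
```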
=== modified file 'helper/utils/mojo_utils.py'
--- helper/utils/mojo_utils.py 2017-09-25 18:51:44 +0000
+++ helper/utils/mojo_utils.py 2017-09-25 19:22:27 +0000
@@ -181,6 +181,12 @@
     delete_unit_provider(unit)
 
 
+def delete_application(application, wait=True):
+    logging.info('Removing application ' + application)
+    cmd = [kiki.cmd(), kiki.remove_application(), application]
+    subprocess.check_call(cmd)
+
+
 def delete_oldest(service, method='juju'):
     units = unit_sorted(get_juju_units(service=service))
     delete_unit(units[0], method='juju')
 
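The new helper simply shells out to `juju remove-application <name>` (or `remove-service` on older clients, via kiki). A standalone sketch with the kiki lookups replaced by hypothetical plain defaults:

```python
import logging
import subprocess


def delete_application(application, juju_cmd='juju',
                       remove_subcmd='remove-application'):
    # juju_cmd / remove_subcmd stand in for kiki.cmd() and
    # kiki.remove_application(); check_call raises CalledProcessError
    # on a non-zero exit status.
    logging.info('Removing application ' + application)
    subprocess.check_call([juju_cmd, remove_subcmd, application])


# Dry run using echo in place of the juju binary:
delete_application('ceph', juju_cmd='echo')
```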
=== added directory 'specs/storage/ceph/charm_migration'
=== added directory 'specs/storage/ceph/charm_migration/icehouse'
=== added file 'specs/storage/ceph/charm_migration/icehouse/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/icehouse/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/icehouse/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/icehouse/manifest'
--- specs/storage/ceph/charm_migration/icehouse/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/icehouse/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-icehouse
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-icehouse
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/icehouse/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/kilo'
=== added file 'specs/storage/ceph/charm_migration/kilo/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/kilo/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/kilo/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/kilo/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/kilo/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/kilo/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/kilo/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/kilo/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/kilo/manifest'
--- specs/storage/ceph/charm_migration/kilo/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/kilo/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-kilo
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-kilo
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/kilo/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/kilo/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/liberty'
=== added file 'specs/storage/ceph/charm_migration/liberty/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/liberty/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/liberty/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/liberty/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/liberty/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/liberty/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/liberty/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/liberty/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/liberty/manifest'
--- specs/storage/ceph/charm_migration/liberty/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/liberty/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-liberty
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-liberty
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/liberty/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/liberty/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/mitaka'
=== added file 'specs/storage/ceph/charm_migration/mitaka/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/mitaka/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/mitaka/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/mitaka/manifest'
--- specs/storage/ceph/charm_migration/mitaka/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/mitaka/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-mitaka
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-mitaka
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/mitaka/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/newton'
=== added file 'specs/storage/ceph/charm_migration/newton/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/newton/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/newton/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/newton/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/newton/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/newton/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/newton/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/newton/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/newton/manifest'
--- specs/storage/ceph/charm_migration/newton/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/newton/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-newton
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-newton
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/newton/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/newton/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/ocata'
=== added file 'specs/storage/ceph/charm_migration/ocata/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/ocata/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/ocata/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/ocata/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/ocata/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/ocata/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/ocata/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/ocata/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/ocata/manifest'
--- specs/storage/ceph/charm_migration/ocata/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/ocata/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-ocata
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-ocata
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/ocata/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/ocata/utils'
=== target is u'../../../../../helper/utils'
=== added directory 'specs/storage/ceph/charm_migration/pike'
=== added file 'specs/storage/ceph/charm_migration/pike/SPEC_INFO.txt'
--- specs/storage/ceph/charm_migration/pike/SPEC_INFO.txt 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/pike/SPEC_INFO.txt 2017-09-25 19:22:27 +0000
@@ -0,0 +1,4 @@
+This spec deploys a 3 monitor and 3 node OSD cluster. It verifies that
+RADOS can write to and read from the cluster. It deploys ceph-mon in addition
+to ceph, and then tears down ceph to verify that the charm migration
+works.
\ No newline at end of file
=== added symlink 'specs/storage/ceph/charm_migration/pike/ceph-charm-migration.yaml'
=== target is u'../../../../../helper/bundles/ceph-charm-migration.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/pike/charm-ceph.yaml'
=== target is u'../../../../../helper/bundles/charm-ceph.yaml'
=== added symlink 'specs/storage/ceph/charm_migration/pike/check_juju.py'
=== target is u'../../../../../helper/tests/check_juju.py'
=== added symlink 'specs/storage/ceph/charm_migration/pike/collect-ceph-default'
=== target is u'../../../../../helper/collect/collect-ceph-default'
=== added symlink 'specs/storage/ceph/charm_migration/pike/delete_application.py'
=== target is u'../../../../../helper/setup/delete_application.py'
=== added file 'specs/storage/ceph/charm_migration/pike/manifest'
--- specs/storage/ceph/charm_migration/pike/manifest 1970-01-01 00:00:00 +0000
+++ specs/storage/ceph/charm_migration/pike/manifest 2017-09-25 19:22:27 +0000
@@ -0,0 +1,25 @@
+# Collect the charm branches from Launchpad
+collect config=collect-ceph-default
+
+# Use juju deployer with charm-ceph.yaml bundle
+deploy config=charm-ceph.yaml delay=0 wait=False target=${MOJO_SERIES}-pike
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph UNITS="0 1"
+
+# Use juju deployer with ceph-charm-migration.yaml bundle
+deploy config=ceph-charm-migration.yaml delay=0 wait=False target=${MOJO_SERIES}-pike
+
+# Check juju statuses are green and that hooks have finished
+verify config=check_juju.py
+
+# Remove charm-ceph from the deployment
+verify config=delete_application.py APPLICATION=ceph
+
+# Test obj store by sending and receiving files
+verify config=test_ceph_store.py APPLICATION=ceph-mon UNITS="0 1"
+
+# Success
=== added symlink 'specs/storage/ceph/charm_migration/pike/test_ceph_store.py'
=== target is u'../../../../../helper/tests/test_ceph_store.py'
=== added symlink 'specs/storage/ceph/charm_migration/pike/utils'
=== target is u'../../../../../helper/utils'
