Merge ~arif-ali/stsstack-bundles:add_script_tools into stsstack-bundles:master

Proposed by Arif Ali
Status: Rejected
Rejected by: Edward Hope-Morley
Proposed branch: ~arif-ali/stsstack-bundles:add_script_tools
Merge into: stsstack-bundles:master
Diff against target: 59 lines (+53/-0)
1 file modified
openstack/tools/add_volumes_to_nova_compute.sh (+53/-0)
Reviewer: Edward Hope-Morley (review status: Needs Information)
Review via email: mp+390103@code.launchpad.net

Commit message

Add script to add ceph-osd to nova-compute

In some scenarios ceph-osd is co-located on nova-compute,
and it is useful for reproducers to be able to recreate this scenario.

The script is something that worked for me, but may require more
work for the wider audience
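
For context, a minimal usage sketch, assuming the base OpenStack bundle with nova-compute has already been deployed (the two-phase flow discussed in the review below); the path is the one added by this diff:

# Phase 1: deploy the base OpenStack bundle as usual (not shown here).
# Phase 2: once the nova-compute units are up, run the new script to create
# the volumes and co-locate ceph-osd on the compute machines.
./openstack/tools/add_volumes_to_nova_compute.sh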

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Hi Arif, I wonder if there is a way that we could do this without it having to be a two-phase deployment. Also, I think this could be simplified by taking the existing ceph overlay [1] and making the number of osd devices configurable, that way Juju will automatically create the volumes and attach them to the node that ceph-osd is running on. Thoughts?

[1] https://git.launchpad.net/stsstack-bundles/tree/overlays/ceph.yaml#n25
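
A rough sketch of the direction suggested above, assuming the overlay approach drives Juju storage so the cloud creates and attaches the volumes itself; the pool name, size and count here are illustrative rather than taken from the referenced overlay:

# Illustrative only: ask Juju for three 10G volumes per ceph-osd unit via a
# storage constraint ("cinder,10G,3") instead of creating and attaching
# them by hand with the openstack CLI.
juju deploy cs:ceph-osd ceph-osd -n 3 --storage osd-devices=cinder,10G,3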

Revision history for this message
Arif Ali (arif-ali) wrote :

Yeah, that would be the ultimate solution, as the cleanup process would be a lot easier too

I will need to have a play and test to see how we can do this. I think we'll probably need a new option in generate_bundles.sh to enable this, so that the overlays can change dynamically

Again, this was a quick solution to a problem I was trying to solve
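
Purely as a sketch of the kind of switch being described here (the flag name and overlay file are hypothetical and do not exist in the repo):

# Hypothetical fragment for generate_bundles.sh: a flag that pulls in an
# extra overlay which co-locates ceph-osd on the nova-compute machines.
overlays=()
for arg in "$@"; do
    case "$arg" in
        --ceph-on-compute)
            overlays+=( "overlays/ceph-on-compute.yaml" )
            ;;
    esac
done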

Revision history for this message
Edward Hope-Morley (hopem) wrote :

@arif-ali FYI there is now a --num-ceph-osds option that allows you to modify the number of OSDs per ceph-osd unit at deploy time. Hopefully that helps you a bit.
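
An illustrative invocation only; the script name follows the naming used earlier in this thread and the surrounding syntax is not verified, only --num-ceph-osds comes from the comment above:

# Ask for three OSD devices per ceph-osd unit when generating the bundle.
./generate_bundles.sh --num-ceph-osds 3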

review: Needs Information
Revision history for this message
Edward Hope-Morley (hopem) wrote :

The repository has moved to https://github.com/canonical/stsstack-bundles/. Please resubmit there if you are still working on this patch.

Unmerged commits

3ff7758... by Arif Ali

Add script to add ceph-osd to nova-compute

In some scenarios ceph-osd is co-located on nova-compute,
and it is useful for reproducers to be able to recreate this scenario.

The script is something that worked for me, but may require more
work for the wider audience

Preview Diff

diff --git a/openstack/tools/add_volumes_to_nova_compute.sh b/openstack/tools/add_volumes_to_nova_compute.sh
new file mode 100755
index 0000000..dce0ff5
--- /dev/null
+++ b/openstack/tools/add_volumes_to_nova_compute.sh
@@ -0,0 +1,53 @@
+#!/bin/bash
+
+# This script will create three 10G volumes for the purpose of adding ceph-osd to nova-compute units
+#
+# This is more in line with how some customers would do it. This should automate everything
+# based on this scenario. This will also use fewer instances and hence less RAM
+
+# TODO: need to parameterise this
+series=bionic
+ceph_mon_deploy="cs:ceph-mon"
+ceph_osd_deploy="cs:ceph-osd"
+cinder_ceph_deploy="cs:cinder-ceph"
+
+. ~/novarc
+
+# let's deploy ceph-mon while the volumes are being created
+juju deploy ${ceph_mon_deploy} ceph-mon --constraints mem=1G -n 3 --series ${series}
+
+juju deploy ${cinder_ceph_deploy} cinder-ceph --series ${series}
+
+# The machines that are allocated to nova-compute
+machines=$(juju status nova-compute --format=json | jq '.["applications"]["nova-compute"]["units"][]["machine"]' | sed s/\"//g | xargs)
+
+# Go through all the machines, create the three volumes and attach them to the instance
+for machine in $machines
+do
+    instance=$(juju status nova-compute --format=json | jq '.["machines"]["'${machine}'"]["instance-id"]' | sed s/\"//g)
+    for i in `seq 1 3`
+    do
+        vol=vol${machine}${i}
+        openstack volume create --size 10 ${vol}
+        openstack server add volume ${instance} ${vol}
+    done
+done
+
+# Now deploy ceph-osd to all of the machines
+juju deploy ${ceph_osd_deploy} ceph-osd --config osd-devices="/dev/vdb /dev/vdc /dev/vdd" -n $(echo ${machines} | wc -w) --to $(echo ${machines} | tr ' ' ',') --series ${series}
+
+# Add all relations
+juju add-relation ceph-osd:mon ceph-mon:osd
+juju add-relation ceph-mon:client nova-compute:ceph-access
+juju add-relation ceph-mon:client glance:ceph
+
+juju add-relation cinder-ceph:ceph ceph-mon:client
+juju add-relation cinder-ceph:storage-backend cinder:storage-backend
+juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
+
+# TODO: echo back the ids of the volumes that may need to be deleted.
+# Probably better: list the commands that would do it, and maybe create a
+# clean_volumes script to make that available
+#
+# Note that the volumes won't delete if they are still attached, so this
+# should be safe to do in most circumstances
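
The TODO at the end of the diff mentions a possible clean_volumes helper; a minimal sketch of what that could look like, assuming the vol${machine}${i} naming used by the script above (this helper is hypothetical and not part of the proposal):

#!/bin/bash
# Hypothetical cleanup counterpart to add_volumes_to_nova_compute.sh:
# detach and then delete the volumes it created.
machines=$(juju status nova-compute --format=json | jq -r '.applications["nova-compute"].units[].machine')
for machine in $machines; do
    instance=$(juju status nova-compute --format=json | jq -r '.machines["'${machine}'"]["instance-id"]')
    for i in 1 2 3; do
        vol=vol${machine}${i}
        openstack server remove volume ${instance} ${vol}
        openstack volume delete ${vol}
    done
done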
