Merge lp:~1chb1n/charm-helpers/amulet-ceph-cinder-updates into lp:charm-helpers

Proposed by Ryan Beisner on 2015-06-15
Status: Merged
Merged at revision: 400
Proposed branch: lp:~1chb1n/charm-helpers/amulet-ceph-cinder-updates
Merge into: lp:charm-helpers
Diff against target: 606 lines (+404/-55)
3 files modified
charmhelpers/contrib/amulet/utils.py (+128/-3)
charmhelpers/contrib/openstack/amulet/deployment.py (+36/-3)
charmhelpers/contrib/openstack/amulet/utils.py (+240/-49)
To merge this branch: bzr merge lp:~1chb1n/charm-helpers/amulet-ceph-cinder-updates
Reviewer: Corey Bryant (community) - requested 2015-06-15, approved 2015-06-29
Reviewer: James Page - requested 2015-07-02, pending
Review via email: mp+262013@code.launchpad.net

Description of the Change

# updates for ceph/cinder amulet tests:
Update validate_config_data - enable comparison against bools, ints, and
validation functions (such as not_null or valid_ip), as _validate_dict_data
already allows.
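
For example, an expected dict can now mix explicit values with validation
functions (a minimal sketch; `u` is assumed to be an AmuletUtils instance,
`sentry` a deployed amulet sentry unit, and the option names illustrative):

    import amulet

    expected = {
        'use_syslog': 'False',    # explicit value, compared literally
        'auth_host': u.valid_ip,  # function pointer, passes if it returns True
    }
    ret = u.validate_config_data(sentry, '/etc/cinder/cinder.conf',
                                 'DEFAULT', expected)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)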

Add check_commands_on_units - run lists of commands on lists of units and
check that each exits zero.
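
For instance (sketch only; the sentry attributes are illustrative members of
an amulet test class):

    import amulet

    # Every command must exit 0 on every listed unit.
    commands = ['sudo ceph health',
                'sudo ceph osd tree']
    sentry_units = [self.ceph0_sentry, self.ceph1_sentry]
    ret = u.check_commands_on_units(commands, sentry_units)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)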

Add authenticate_cinder_admin

Add create_cinder_volume
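
Together these support end-to-end cinder checks, e.g. (a minimal sketch; the
keystone sentry and credentials are illustrative):

    # Authenticate against cinder, then create a 1 GB volume.
    # create_cinder_volume waits for 'available' status and validates
    # the new volume's attributes before returning it.
    cinder = u.authenticate_cinder_admin(self.keystone_sentry,
                                         username='admin',
                                         password='openstack',
                                         tenant='admin')
    vol = u.create_cinder_volume(cinder, vol_name='demo-vol', vol_size=1)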

Update create_cirros_image to use the generic resource status methods and
add validation.
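
For example (sketch; `glance` is an authenticated glance client and `cinder`
is authenticated as above):

    # Upload a cirros image, then create a bootable volume from it.
    image = u.create_cirros_image(glance, 'cirros-image-1')
    vol = u.create_cinder_volume(cinder, vol_name='demo-vol-boot',
                                 vol_size=1, img_id=image.id)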

Simplify delete_image to use the generic delete_resource method.

Simplify delete_instance to use the generic delete_resource method.
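
Callers can also use the generic helper directly (sketch; `image` and
`instance` are existing resources):

    # delete_resource deletes the resource, then waits for its count
    # to drop before returning True/False.
    u.delete_resource(glance.images, image, msg='glance image')
    u.delete_resource(nova.servers, instance, msg='nova instance')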

Also add authenticate_swift_user, get_ceph_expected_pools, process ID
helpers (get_process_id_list, get_unit_process_ids,
validate_unit_process_ids), ceph pool helpers (get_ceph_osd_id_cmd,
get_ceph_pools, get_ceph_df, get_ceph_pool_sample,
validate_ceph_pool_samples), and validate_list_of_identical_dicts.

These changes preserve existing behavior while adding new capabilities.
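
A minimal sketch of the process ID checks (unit names, process names, and
expected counts are illustrative):

    import amulet

    # Sample PIDs per unit and per process name, then compare observed
    # PID counts against expected counts.
    unit_procs = {self.ceph0_sentry: ['ceph-mon', 'ceph-osd']}
    actual_pids = u.get_unit_process_ids(unit_procs)
    expected_pids = {self.ceph0_sentry: {'ceph-mon': 1, 'ceph-osd': 2}}
    ret = u.validate_unit_process_ids(expected_pids, actual_pids)
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)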

# related / dependent charm MPs:
https://code.launchpad.net/~1chb1n/charms/trusty/ceph/next-amulet-update/+merge/262016

https://code.launchpad.net/~1chb1n/charms/trusty/ceph-osd/next-amulet-update/+merge/262249

https://code.launchpad.net/~1chb1n/charms/trusty/ceph-radosgw/next-amulet-update/+merge/262599

https://code.launchpad.net/~1chb1n/charms/trusty/cinder/next-amulet-updates/+merge/262772

https://code.launchpad.net/~1chb1n/charms/trusty/cinder-ceph/next-amulet-updates/+merge/262910

https://code.launchpad.net/~1chb1n/charms/trusty/ceilometer-agent/next-amulet-add-initial/+merge/263040

https://code.launchpad.net/~1chb1n/charms/trusty/ceilometer/next-amulet-add-subordinate-checks/+merge/263147

https://code.launchpad.net/~1chb1n/charms/trusty/keystone/next-amulet-update/+merge/263460

https://code.launchpad.net/~1chb1n/charms/trusty/mysql/trunk-amulet-update-vivid/+merge/263428

https://code.launchpad.net/~1chb1n/charms/trusty/neutron-api/next-amulet-update/+merge/263594

https://code.launchpad.net/~1chb1n/charms/trusty/glance/next-amulet-update/+merge/263413

https://code.launchpad.net/~1chb1n/charms/trusty/neutron-gateway/next-amulet-update/+merge/263658

Corey Bryant (corey.bryant) wrote:

Reviewed with comments below.

review: Needs Fixing
Ryan Beisner (1chb1n) wrote:

Good input, appreciate it! See comments inline.

Ryan Beisner (1chb1n) wrote:

Updated comments inline.

review: Approve

Preview Diff

=== modified file 'charmhelpers/contrib/amulet/utils.py'
--- charmhelpers/contrib/amulet/utils.py 2015-06-12 14:38:38 +0000
+++ charmhelpers/contrib/amulet/utils.py 2015-06-29 13:19:53 +0000
@@ -14,6 +14,7 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
+import amulet
 import ConfigParser
 import distro_info
 import io
@@ -173,6 +174,11 @@
 
         Verify that the specified section of the config file contains
         the expected option key:value pairs.
+
+        Compare expected dictionary data vs actual dictionary data.
+        The values in the 'expected' dictionary can be strings, bools, ints,
+        longs, or can be a function that evaluates a variable and returns a
+        bool.
         """
         self.log.debug('Validating config file data ({} in {} on {})'
                        '...'.format(section, config_file,
@@ -185,9 +191,20 @@
         for k in expected.keys():
             if not config.has_option(section, k):
                 return "section [{}] is missing option {}".format(section, k)
-            if config.get(section, k) != expected[k]:
+
+            actual = config.get(section, k)
+            v = expected[k]
+            if (isinstance(v, six.string_types) or
+                    isinstance(v, bool) or
+                    isinstance(v, six.integer_types)):
+                # handle explicit values
+                if actual != v:
+                    return "section [{}] {}:{} != expected {}:{}".format(
+                        section, k, actual, k, expected[k])
+            # handle function pointers, such as not_null or valid_ip
+            elif not v(actual):
                 return "section [{}] {}:{} != expected {}:{}".format(
-                    section, k, config.get(section, k), k, expected[k])
+                    section, k, actual, k, expected[k])
         return None
 
     def _validate_dict_data(self, expected, actual):
@@ -195,7 +212,7 @@
 
         Compare expected dictionary data vs actual dictionary data.
         The values in the 'expected' dictionary can be strings, bools, ints,
-        longs, or can be a function that evaluate a variable and returns a
+        longs, or can be a function that evaluates a variable and returns a
         bool.
         """
         self.log.debug('actual: {}'.format(repr(actual)))
@@ -206,8 +223,10 @@
                 if (isinstance(v, six.string_types) or
                         isinstance(v, bool) or
                         isinstance(v, six.integer_types)):
+                    # handle explicit values
                     if v != actual[k]:
                         return "{}:{}".format(k, actual[k])
+                # handle function pointers, such as not_null or valid_ip
                 elif not v(actual[k]):
                     return "{}:{}".format(k, actual[k])
             else:
@@ -406,3 +425,109 @@
         """Convert a relative file path to a file URL."""
         _abs_path = os.path.abspath(file_rel_path)
         return urlparse.urlparse(_abs_path, scheme='file').geturl()
+
+    def check_commands_on_units(self, commands, sentry_units):
+        """Check that all commands in a list exit zero on all
+        sentry units in a list.
+
+        :param commands: list of bash commands
+        :param sentry_units: list of sentry unit pointers
+        :returns: None if successful; Failure message otherwise
+        """
+        self.log.debug('Checking exit codes for {} commands on {} '
+                       'sentry units...'.format(len(commands),
+                                                len(sentry_units)))
+        for sentry_unit in sentry_units:
+            for cmd in commands:
+                output, code = sentry_unit.run(cmd)
+                if code == 0:
+                    self.log.debug('{} `{}` returned {} '
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
+                                                 cmd, code))
+                else:
+                    return ('{} `{}` returned {} '
+                            '{}'.format(sentry_unit.info['unit_name'],
+                                        cmd, code, output))
+        return None
+
+    def get_process_id_list(self, sentry_unit, process_name):
+        """Get a list of process ID(s) from a single sentry juju unit
+        for a single process name.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param process_name: Process name
+        :returns: List of process IDs
+        """
+        cmd = 'pidof {}'.format(process_name)
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return str(output).split()
+
+    def get_unit_process_ids(self, unit_processes):
+        """Construct a dict containing unit sentries, process names, and
+        process IDs."""
+        pid_dict = {}
+        for sentry_unit, process_list in unit_processes.iteritems():
+            pid_dict[sentry_unit] = {}
+            for process in process_list:
+                pids = self.get_process_id_list(sentry_unit, process)
+                pid_dict[sentry_unit].update({process: pids})
+        return pid_dict
+
+    def validate_unit_process_ids(self, expected, actual):
+        """Validate process id quantities for services on units."""
+        self.log.debug('Checking units for running processes...')
+        self.log.debug('Expected PIDs: {}'.format(expected))
+        self.log.debug('Actual PIDs: {}'.format(actual))
+
+        if len(actual) != len(expected):
+            return ('Unit count mismatch. expected, actual: {}, '
+                    '{} '.format(len(expected), len(actual)))
+
+        for (e_sentry, e_proc_names) in expected.iteritems():
+            e_sentry_name = e_sentry.info['unit_name']
+            if e_sentry in actual.keys():
+                a_proc_names = actual[e_sentry]
+            else:
+                return ('Expected sentry ({}) not found in actual dict data.'
+                        '{}'.format(e_sentry_name, e_sentry))
+
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
+                return ('Process name count mismatch. expected, actual: {}, '
+                        '{}'.format(len(expected), len(actual)))
+
+            for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
+                    zip(e_proc_names.items(), a_proc_names.items()):
+                if e_proc_name != a_proc_name:
+                    return ('Process name mismatch. expected, actual: {}, '
+                            '{}'.format(e_proc_name, a_proc_name))
+
+                a_pids_length = len(a_pids)
+                if e_pids_length != a_pids_length:
+                    return ('PID count mismatch. {} ({}) expected, actual: '
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
+                                                 e_pids_length, a_pids_length,
+                                                 a_pids))
+                else:
+                    self.log.debug('PID check OK: {} {} {}: '
+                                   '{}'.format(e_sentry_name, e_proc_name,
+                                               e_pids_length, a_pids))
+        return None
+
+    def validate_list_of_identical_dicts(self, list_of_dicts):
+        """Check that all dicts within a list are identical."""
+        hashes = []
+        for _dict in list_of_dicts:
+            hashes.append(hash(frozenset(_dict.items())))
+
+        self.log.debug('Hashes: {}'.format(hashes))
+        if len(set(hashes)) == 1:
+            self.log.debug('Dicts within list are identical')
+        else:
+            return 'Dicts within list are not identical'
+
+        return None
 
=== modified file 'charmhelpers/contrib/openstack/amulet/deployment.py'
--- charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-11 19:46:51 +0000
+++ charmhelpers/contrib/openstack/amulet/deployment.py 2015-06-29 13:19:53 +0000
@@ -79,9 +79,9 @@
         services.append(this_service)
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
-        # Openstack subordinate charms do not expose an origin option as that
-        # is controlled by the principle
-        ignore = ['neutron-openvswitch']
+        # Most OpenStack subordinate charms do not expose an origin option
+        # as that is controlled by the principle.
+        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
 
         if self.openstack:
             for svc in services:
@@ -148,3 +148,36 @@
             return os_origin.split('%s-' % self.series)[1].split('/')[0]
         else:
             return releases[self.series]
+
+    def get_ceph_expected_pools(self, radosgw=False):
+        """Return a list of expected ceph pools in a ceph + cinder + glance
+        test scenario, based on OpenStack release and whether ceph radosgw
+        is flagged as present or not."""
+
+        if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
+            pools = [
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+        else:
+            # Juno or earlier
+            pools = [
+                'data',
+                'metadata',
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+
+        if radosgw:
+            pools.extend([
+                '.rgw.root',
+                '.rgw.control',
+                '.rgw',
+                '.rgw.gc',
+                '.users.uid'
+            ])
+
+        return pools
 
=== modified file 'charmhelpers/contrib/openstack/amulet/utils.py'
--- charmhelpers/contrib/openstack/amulet/utils.py 2015-06-11 14:30:25 +0000
+++ charmhelpers/contrib/openstack/amulet/utils.py 2015-06-29 13:19:53 +0000
@@ -14,16 +14,20 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
 
+import amulet
+import json
 import logging
 import os
 import six
 import time
 import urllib
 
+import cinderclient.v1.client as cinder_client
 import glanceclient.v1.client as glance_client
 import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
+import swiftclient
 
 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
@@ -171,6 +175,16 @@
         self.log.debug('Checking if tenant exists ({})...'.format(tenant))
         return tenant in [t.name for t in keystone.tenants.list()]
 
+    def authenticate_cinder_admin(self, keystone_sentry, username,
+                                  password, tenant):
+        """Authenticates admin user with cinder."""
+        # NOTE(beisner): cinder python client doesn't accept tokens.
+        service_ip = \
+            keystone_sentry.relation('shared-db',
+                                     'mysql:shared-db')['private-address']
+        ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
+        return cinder_client.Client(username, password, tenant, ept)
+
     def authenticate_keystone_admin(self, keystone_sentry, user, password,
                                     tenant):
         """Authenticates admin user with the keystone admin endpoint."""
@@ -212,9 +226,29 @@
         return nova_client.Client(username=user, api_key=password,
                                   project_id=tenant, auth_url=ep)
 
+    def authenticate_swift_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with swift api."""
+        self.log.debug('Authenticating swift user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return swiftclient.Connection(authurl=ep,
+                                      user=user,
+                                      key=password,
+                                      tenant_name=tenant,
+                                      auth_version='2.0')
+
     def create_cirros_image(self, glance, image_name):
-        """Download the latest cirros image and upload it to glance."""
-        self.log.debug('Creating glance image ({})...'.format(image_name))
+        """Download the latest cirros image and upload it to glance,
+        validate and return a resource pointer.
+
+        :param glance: pointer to authenticated glance connection
+        :param image_name: display name for new image
+        :returns: glance image pointer
+        """
+        self.log.debug('Creating glance cirros image '
+                       '({})...'.format(image_name))
+
+        # Download cirros image
         http_proxy = os.getenv('AMULET_HTTP_PROXY')
         self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
         if http_proxy:
@@ -223,33 +257,51 @@
         else:
             opener = urllib.FancyURLopener()
 
-        f = opener.open("http://download.cirros-cloud.net/version/released")
+        f = opener.open('http://download.cirros-cloud.net/version/released')
         version = f.read().strip()
-        cirros_img = "cirros-{}-x86_64-disk.img".format(version)
+        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
         local_path = os.path.join('tests', cirros_img)
 
         if not os.path.exists(local_path):
-            cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
+            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
                                                   version, cirros_img)
             opener.retrieve(cirros_url, local_path)
         f.close()
 
+        # Create glance image
         with open(local_path) as f:
             image = glance.images.create(name=image_name, is_public=True,
                                          disk_format='qcow2',
                                          container_format='bare', data=f)
-        count = 1
-        status = image.status
-        while status != 'active' and count < 10:
-            time.sleep(3)
-            image = glance.images.get(image.id)
-            status = image.status
-            self.log.debug('image status: {}'.format(status))
-            count += 1
-
-        if status != 'active':
-            self.log.error('image creation timed out')
-            return None
+
+        # Wait for image to reach active status
+        img_id = image.id
+        ret = self.resource_reaches_status(glance.images, img_id,
+                                           expected_stat='active',
+                                           msg='Image status wait')
+        if not ret:
+            msg = 'Glance image failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new image
+        self.log.debug('Validating image attributes...')
+        val_img_name = glance.images.get(img_id).name
+        val_img_stat = glance.images.get(img_id).status
+        val_img_pub = glance.images.get(img_id).is_public
+        val_img_cfmt = glance.images.get(img_id).container_format
+        val_img_dfmt = glance.images.get(img_id).disk_format
+        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
+                    'container fmt:{} disk fmt:{}'.format(
+                        val_img_name, val_img_pub, img_id,
+                        val_img_stat, val_img_cfmt, val_img_dfmt))
+
+        if val_img_name == image_name and val_img_stat == 'active' \
+                and val_img_pub is True and val_img_cfmt == 'bare' \
+                and val_img_dfmt == 'qcow2':
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
 
         return image
 
@@ -260,22 +312,7 @@
         self.log.warn('/!\\ DEPRECATION WARNING: use '
                       'delete_resource instead of delete_image.')
         self.log.debug('Deleting glance image ({})...'.format(image))
-        num_before = len(list(glance.images.list()))
-        glance.images.delete(image)
-
-        count = 1
-        num_after = len(list(glance.images.list()))
-        while num_after != (num_before - 1) and count < 10:
-            time.sleep(3)
-            num_after = len(list(glance.images.list()))
-            self.log.debug('number of images: {}'.format(num_after))
-            count += 1
-
-        if num_after != (num_before - 1):
-            self.log.error('image deletion timed out')
-            return False
-
-        return True
+        return self.delete_resource(glance.images, image, msg='glance image')
 
     def create_instance(self, nova, image_name, instance_name, flavor):
         """Create the specified instance."""
@@ -308,22 +345,8 @@
         self.log.warn('/!\\ DEPRECATION WARNING: use '
                       'delete_resource instead of delete_instance.')
         self.log.debug('Deleting instance ({})...'.format(instance))
-        num_before = len(list(nova.servers.list()))
-        nova.servers.delete(instance)
-
-        count = 1
-        num_after = len(list(nova.servers.list()))
-        while num_after != (num_before - 1) and count < 10:
-            time.sleep(3)
-            num_after = len(list(nova.servers.list()))
-            self.log.debug('number of instances: {}'.format(num_after))
-            count += 1
-
-        if num_after != (num_before - 1):
-            self.log.error('instance deletion timed out')
-            return False
-
-        return True
+        return self.delete_resource(nova.servers, instance,
+                                    msg='nova instance')
 
     def create_or_get_keypair(self, nova, keypair_name="testkey"):
         """Create a new keypair, or return pointer if it already exists."""
@@ -339,6 +362,88 @@
             _keypair = nova.keypairs.create(name=keypair_name)
         return _keypair
 
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
+                             img_id=None, src_vol_id=None, snap_id=None):
+        """Create cinder volume, optionally from a glance image, OR
+        optionally as a clone of an existing volume, OR optionally
+        from a snapshot. Wait for the new volume status to reach
+        the expected status, validate and return a resource pointer.
+
+        :param vol_name: cinder volume display name
+        :param vol_size: size in gigabytes
+        :param img_id: optional glance image id
+        :param src_vol_id: optional source volume id to clone
+        :param snap_id: optional snapshot id to use
+        :returns: cinder volume pointer
+        """
+        # Handle parameter input and avoid impossible combinations
+        if img_id and not src_vol_id and not snap_id:
+            # Create volume from image
+            self.log.debug('Creating cinder volume from glance image...')
+            bootable = 'true'
+        elif src_vol_id and not img_id and not snap_id:
+            # Clone an existing volume
+            self.log.debug('Cloning cinder volume...')
+            bootable = cinder.volumes.get(src_vol_id).bootable
+        elif snap_id and not src_vol_id and not img_id:
+            # Create volume from snapshot
+            self.log.debug('Creating cinder volume from snapshot...')
+            snap = cinder.volume_snapshots.find(id=snap_id)
+            vol_size = snap.size
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
+            bootable = cinder.volumes.get(snap_vol_id).bootable
+        elif not img_id and not src_vol_id and not snap_id:
+            # Create volume
+            self.log.debug('Creating cinder volume...')
+            bootable = 'false'
+        else:
+            # Impossible combination of parameters
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
+                                                     img_id, src_vol_id,
+                                                     snap_id))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Create new volume
+        try:
+            vol_new = cinder.volumes.create(display_name=vol_name,
+                                            imageRef=img_id,
+                                            size=vol_size,
+                                            source_volid=src_vol_id,
+                                            snapshot_id=snap_id)
+            vol_id = vol_new.id
+        except Exception as e:
+            msg = 'Failed to create volume: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Wait for volume to reach available status
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
+                                           expected_stat="available",
+                                           msg="Volume status wait")
+        if not ret:
+            msg = 'Cinder volume failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = cinder.volumes.get(vol_id).display_name
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
     def delete_resource(self, resource, resource_id,
                         msg="resource", max_wait=120):
         """Delete one openstack resource, such as one instance, keypair,
@@ -350,6 +455,8 @@
         :param max_wait: maximum wait time in seconds
         :returns: True if successful, otherwise False
         """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
         num_before = len(list(resource.list()))
         resource.delete(resource_id)
 
@@ -411,3 +518,87 @@
         self.log.debug('{} never reached expected status: '
                        '{}'.format(resource_id, expected_stat))
         return False
+
+    def get_ceph_osd_id_cmd(self, index):
+        """Produce a shell command that will return a ceph-osd id."""
+        return ("`initctl list | grep 'ceph-osd ' | "
+                "awk 'NR=={} {{ print $2 }}' | "
+                "grep -o '[0-9]*'`".format(index + 1))
+
+    def get_ceph_pools(self, sentry_unit):
+        """Return a dict of ceph pools from a single ceph unit, with
+        pool name as keys, pool id as vals."""
+        pools = {}
+        cmd = 'sudo ceph osd lspools'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
+        for pool in str(output).split(','):
+            pool_id_name = pool.split(' ')
+            if len(pool_id_name) == 2:
+                pool_id = pool_id_name[0]
+                pool_name = pool_id_name[1]
+                pools[pool_name] = int(pool_id)
+
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
+                                                pools))
+        return pools
+
+    def get_ceph_df(self, sentry_unit):
+        """Return dict of ceph df json output, including ceph pool state.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :returns: Dict of ceph df output
+        """
+        cmd = 'sudo ceph df --format=json'
+        output, code = sentry_unit.run(cmd)
+        if code != 0:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, code, output))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+        return json.loads(output)
+
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
+        """Take a sample of attributes of a ceph pool, returning ceph
+        pool name, object count and disk space used for the specified
+        pool ID number.
+
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
+        :param pool_id: Ceph pool ID
+        :returns: List of pool name, object count, kb disk space used
+        """
+        df = self.get_ceph_df(sentry_unit)
+        pool_name = df['pools'][pool_id]['name']
+        obj_count = df['pools'][pool_id]['stats']['objects']
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
+                       '{} kb used'.format(pool_name, pool_id,
+                                           obj_count, kb_used))
+        return pool_name, obj_count, kb_used
+
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
+        """Validate ceph pool samples taken over time, such as pool
+        object counts or pool kb used, before adding, after adding, and
+        after deleting items which affect those pool attributes. The
+        2nd element is expected to be greater than the 1st; 3rd is expected
+        to be less than the 2nd.
+
+        :param samples: List containing 3 data samples
+        :param sample_type: String for logging and usage context
+        :returns: None if successful, Failure message otherwise
+        """
+        original, created, deleted = range(3)
+        if samples[created] <= samples[original] or \
+                samples[deleted] >= samples[created]:
+            return ('Ceph {} samples ({}) '
+                    'unexpected.'.format(sample_type, samples))
+        else:
+            self.log.debug('Ceph {} samples (OK): '
+                           '{}'.format(sample_type, samples))
+            return None
