Merge lp:~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade into lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next

Proposed by Chris Holcombe on 2016-02-26
Status: Needs review
Proposed branch: lp:~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade
Merge into: lp:~openstack-charmers-archive/charms/trusty/ceph-osd/next
Diff against target: 600 lines (+387/-18)
5 files modified
.bzrignore (+1/-0)
hooks/ceph.py (+155/-9)
hooks/ceph_hooks.py (+199/-5)
hooks/utils.py (+31/-3)
templates/ceph.conf (+1/-1)
To merge this branch: bzr merge lp:~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade
Reviewer Review Type Date Requested Status
James Page Needs Fixing on 2016-02-29
Chris MacNaughton 2016-02-26 Pending
Review via email: mp+287376@code.launchpad.net

Description of the change

This patch allows the OSD units in a ceph-osd cluster to upgrade themselves one at a time, using the ceph monitor cluster as a locking mechanism. There are most likely edge cases with this method that I haven't thought of, so consider this code lightly tested. It worked fine on EC2.
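
Roughly, the scheme works like this (a condensed sketch of the helpers in hooks/ceph.py and hooks/ceph_hooks.py from the diff below, not an exact excerpt; the ceph user is hard-coded here where the charm calls ceph_user()):

    import subprocess
    import time

    def monitor_key_set(key, value):
        # The monitors' config-key store is shared by every OSD unit and
        # acts as the lock table for the rolling upgrade.
        subprocess.check_output(
            ['sudo', '-u', 'ceph', 'ceph', 'config-key', 'put',
             str(key), str(value)])

    def monitor_key_exists(key):
        # 'ceph config-key exists' exits non-zero when the key is absent.
        return subprocess.call(
            ['sudo', '-u', 'ceph',
             'ceph', 'config-key', 'exists', str(key)]) == 0

    def upgrade_osd():
        # Placeholder: the real implementation in this branch adds the new
        # package source, stops ceph-osd-all, upgrades the packages and
        # restarts the daemons.
        pass

    def lock_and_roll(my_hash):
        # Record when this unit started, upgrade the local OSDs, then record
        # completion so the next unit in line can see it is safe to proceed.
        monitor_key_set("{}_start".format(my_hash), time.time())
        upgrade_osd()
        monitor_key_set("{}_done".format(my_hash), time.time())

Each unit only calls lock_and_roll once the unit ahead of it has published its "_done" key (or appears to have been stuck for more than ten minutes), which is what serialises the restarts across the cluster.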

Chris Holcombe (xfactor973) wrote :

Note: I put the helpers up for review on charmhelpers: https://code.launchpad.net/~xfactor973/charm-helpers/ceph-keystore/+merge/287205

charm_lint_check #1557 ceph-osd-next for xfactor973 mp287376
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/15210766/
Build: http://10.245.162.36:8080/job/charm_lint_check/1557/

charm_unit_test #1305 ceph-osd-next for xfactor973 mp287376
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/1305/

charm_amulet_test #554 ceph-osd-next for xfactor973 mp287376
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15210862/
Build: http://10.245.162.36:8080/job/charm_amulet_test/554/

charm_unit_test #1308 ceph-osd-next for xfactor973 mp287376
    UNIT OK: passed

Build: http://10.245.162.36:8080/job/charm_unit_test/1308/

charm_lint_check #1562 ceph-osd-next for xfactor973 mp287376
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: http://paste.ubuntu.com/15210966/
Build: http://10.245.162.36:8080/job/charm_lint_check/1562/

charm_amulet_test #558 ceph-osd-next for xfactor973 mp287376
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/15211124/
Build: http://10.245.162.36:8080/job/charm_amulet_test/558/

James Page (james-page) wrote :

I think most of my comments on the ceph-mon proposal also apply here.

review: Needs Fixing

Unmerged revisions

70. By Chris Holcombe on 2016-03-01

Add back in monitor pieces. Will separate out into another MP

69. By Chris Holcombe on 2016-03-01

Hash the hostname instead of the IP address; that is more portable. Works now on lxc and also on ec2 (a short sketch of the resulting ordering follows this revision list).

68. By Chris Holcombe on 2016-02-26

Merge upstream

67. By Chris Holcombe on 2016-02-26

It rolls! This now upgrades and rolls the ceph osd cluster one by one.
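
For reference, the upgrade ordering introduced in revision 69 boils down to the following (a sketch distilled from roll_osd_cluster in the diff below; the helper name upgrade_position is mine, not from the branch):

    import hashlib

    def upgrade_position(my_hostname, osd_host_names):
        # Every unit hashes its own hostname and sorts the hashes of all OSD
        # hosts reported by 'ceph osd tree'; its index in that sorted list
        # is its turn in the rolling upgrade.
        my_hash = hashlib.sha224(my_hostname.encode('utf-8')).hexdigest()
        ordering = sorted(hashlib.sha224(name.encode('utf-8')).hexdigest()
                          for name in osd_host_names)
        return ordering.index(my_hash)

Hostnames are stable and available identically on lxc and ec2, which is why hashing the hostname is more portable than hashing the unit's IP address.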

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2015-10-30 02:23:36 +0000
3+++ .bzrignore 2016-03-01 16:57:01 +0000
4@@ -3,3 +3,4 @@
5 .tox
6 .testrepository
7 bin
8+.idea
9
10=== modified file 'hooks/ceph.py'
11--- hooks/ceph.py 2016-01-29 07:31:13 +0000
12+++ hooks/ceph.py 2016-03-01 16:57:01 +0000
13@@ -1,4 +1,3 @@
14-
15 #
16 # Copyright 2012 Canonical Ltd.
17 #
18@@ -6,20 +5,19 @@
19 # James Page <james.page@canonical.com>
20 # Paul Collins <paul.collins@canonical.com>
21 #
22-
23 import json
24 import subprocess
25 import time
26 import os
27 import re
28 import sys
29+import errno
30 from charmhelpers.core.host import (
31 mkdir,
32 chownr,
33- service_restart,
34 cmp_pkgrevno,
35- lsb_release
36-)
37+ lsb_release,
38+ service_restart)
39 from charmhelpers.core.hookenv import (
40 log,
41 ERROR,
42@@ -54,6 +52,137 @@
43 return "root"
44
45
46+class CrushLocation(object):
47+ def __init__(self, name, identifier, host, rack, row, datacenter, chassis, root):
48+ self.name = name
49+ self.identifier = identifier
50+ self.host = host
51+ self.rack = rack
52+ self.row = row
53+ self.datacenter = datacenter
54+ self.chassis = chassis
55+ self.root = root
56+
57+
58+"""
59+{"nodes":[{"id":-1,"name":"default","type":"root","type_id":10,"children":[-4,-3,-2]},{"id":-2,"name":"ip-172-31-10-122","type":"host","type_id":1,"children":[0]},{"id":0,"name":"osd.0","exists":1,"type":"osd","type_id":0,"status":"up","reweight":"1.000000","crush_weight":"1.000000","depth":2},{"id":-3,"name":"ip-172-31-25-187","type":"host","type_id":1,"children":[1]},{"id":1,"name":"osd.1","exists":1,"type":"osd","type_id":0,"status":"up","reweight":"1.000000","crush_weight":"1.000000","depth":2},{"id":-4,"name":"ip-172-31-38-24","type":"host","type_id":1,"children":[2]},{"id":2,"name":"osd.2","exists":1,"type":"osd","type_id":0,"status":"up","reweight":"1.000000","crush_weight":"1.000000","depth":2}],"stray":[]}
60+"""
61+
62+
63+def get_osd_tree():
64+ """
65+ Returns the current osd map in JSON.
66+ :return: JSON String. :raise: ValueError if the monmap fails to parse.
67+ Also raises CalledProcessError if our ceph command fails
68+ """
69+ try:
70+ tree = subprocess.check_output(
71+ ['sudo', '-u', ceph_user(),
72+ 'ceph', 'osd', 'tree', '--format=json'])
73+ try:
74+ json_tree = json.loads(tree)
75+ crush_list = []
76+ # Make sure children are present in the json
77+ if not json_tree['nodes']:
78+ return None
79+ child_ids = json_tree['nodes'][0]['children']
80+ for child in json_tree['nodes']:
81+ if child['id'] in child_ids:
82+ crush_list.append(
83+ CrushLocation(
84+ name=child.get('name'),
85+ identifier=child['id'],
86+ host=child.get('host'),
87+ rack=child.get('rack'),
88+ row=child.get('row'),
89+ datacenter=child.get('datacenter'),
90+ chassis=child.get('chassis'),
91+ root=child.get('root')
92+ )
93+ )
94+ return crush_list
95+ except ValueError as v:
96+ log("Unable to parse ceph tree json: {}. Error: {}".format(
97+ tree, v.message))
98+ raise
99+ except subprocess.CalledProcessError as e:
100+ log("ceph osd tree command failed with message: {}".format(
101+ e.message))
102+ raise
103+
104+
105+def monitor_key_delete(key):
106+ """
107+ Deletes a key value pair on the monitor cluster.
108+ :param key: String. The key to delete.
109+ """
110+ try:
111+ subprocess.check_output(
112+ ['sudo', '-u', ceph_user(),
113+ 'ceph', 'config-key', 'del', str(key)])
114+ except subprocess.CalledProcessError as e:
115+ log("Monitor config-key put failed with message: {}".format(
116+ e.message))
117+ raise
118+
119+
120+def monitor_key_set(key, value):
121+ """
122+ Sets a key value pair on the monitor cluster.
123+ :param key: String. The key to set.
124+ :param value: The value to set. This will be converted to a string
125+ before setting
126+ """
127+ try:
128+ subprocess.check_output(
129+ ['sudo', '-u', ceph_user(),
130+ 'ceph', 'config-key', 'put', str(key), str(value)])
131+ except subprocess.CalledProcessError as e:
132+ log("Monitor config-key put failed with message: {}".format(
133+ e.message))
134+ raise
135+
136+
137+def monitor_key_get(key):
138+ """
139+ Gets the value of an existing key in the monitor cluster.
140+ :param key: String. The key to search for.
141+ :return: Returns the value of that key or None if not found.
142+ """
143+ try:
144+ output = subprocess.check_output(
145+ ['sudo', '-u', ceph_user(),
146+ 'ceph', 'config-key', 'get', str(key)])
147+ return output
148+ except subprocess.CalledProcessError as e:
149+ log("Monitor config-key get failed with message: {}".format(
150+ e.message))
151+ return None
152+
153+
154+def monitor_key_exists(key):
155+ """
156+ Searches for the existence of a key in the monitor cluster.
157+ :param key: String. The key to search for
158+ :return: Returns True if the key exists, False if not and raises an
159+ exception if an unknown error occurs.
160+ """
161+ try:
162+ subprocess.check_call(
163+ ['sudo', '-u', ceph_user(),
164+ 'ceph', 'config-key', 'exists', str(key)])
165+ # I can return true here regardless because Ceph returns
166+ # ENOENT if the key wasn't found
167+ return True
168+ except subprocess.CalledProcessError as e:
169+ if e.returncode == errno.ENOENT:
170+ return False
171+ else:
172+ log("Unknown error from ceph config-get exists: {} {}".format(
173+ e.returncode, e.message))
174+ raise
175+
176+
177 def get_version():
178 '''Derive Ceph release from an installed package.'''
179 import apt_pkg as apt
180@@ -64,7 +193,7 @@
181 pkg = cache[package]
182 except:
183 # the package is unknown to the current apt cache.
184- e = 'Could not determine version of package with no installation '\
185+ e = 'Could not determine version of package with no installation ' \
186 'candidate: %s' % package
187 error_out(e)
188
189@@ -165,6 +294,7 @@
190 # Ignore any errors for this call
191 subprocess.call(cmd)
192
193+
194 DISK_FORMATS = [
195 'xfs',
196 'ext4',
197@@ -211,6 +341,7 @@
198
199
200 _bootstrap_keyring = "/var/lib/ceph/bootstrap-osd/ceph.keyring"
201+_upgrade_keyring = "/etc/ceph/ceph.client.admin.keyring"
202
203
204 def is_bootstrapped():
205@@ -236,6 +367,21 @@
206 ]
207 subprocess.check_call(cmd)
208
209+
210+def import_osd_upgrade_key(key):
211+ if not os.path.exists(_upgrade_keyring):
212+ cmd = [
213+ "sudo",
214+ "-u",
215+ ceph_user(),
216+ 'ceph-authtool',
217+ _upgrade_keyring,
218+ '--create-keyring',
219+ '--name=client.admin',
220+ '--add-key={}'.format(key)
221+ ]
222+ subprocess.check_call(cmd)
223+
224 # OSD caps taken from ceph-create-keys
225 _osd_bootstrap_caps = {
226 'mon': [
227@@ -402,7 +548,7 @@
228
229
230 def maybe_zap_journal(journal_dev):
231- if (is_osd_disk(journal_dev)):
232+ if is_osd_disk(journal_dev):
233 log('Looks like {} is already an OSD data'
234 ' or journal, skipping.'.format(journal_dev))
235 return
236@@ -445,7 +591,7 @@
237 log('Path {} is not a block device - bailing'.format(dev))
238 return
239
240- if (is_osd_disk(dev) and not reformat_osd):
241+ if is_osd_disk(dev) and not reformat_osd:
242 log('Looks like {} is already an'
243 ' OSD data or journal, skipping.'.format(dev))
244 return
245@@ -512,7 +658,7 @@
246
247
248 def get_running_osds():
249- '''Returns a list of the pids of the current running OSD daemons'''
250+ """Returns a list of the pids of the current running OSD daemons"""
251 cmd = ['pgrep', 'ceph-osd']
252 try:
253 result = subprocess.check_output(cmd)
254
255=== modified file 'hooks/ceph_hooks.py'
256--- hooks/ceph_hooks.py 2016-02-25 15:48:22 +0000
257+++ hooks/ceph_hooks.py 2016-03-01 16:57:01 +0000
258@@ -8,12 +8,17 @@
259 #
260
261 import glob
262+import hashlib
263 import os
264+import random
265 import shutil
266+import subprocess
267 import sys
268 import tempfile
269+import time
270
271 import ceph
272+from charmhelpers.core import hookenv
273 from charmhelpers.core.hookenv import (
274 log,
275 ERROR,
276@@ -39,13 +44,13 @@
277 filter_installed_packages,
278 )
279 from charmhelpers.core.sysctl import create as create_sysctl
280+from charmhelpers.core import host
281
282 from utils import (
283 get_host_ip,
284 get_networks,
285 assert_charm_supports_ipv6,
286- render_template,
287-)
288+ render_template)
289
290 from charmhelpers.contrib.openstack.alternatives import install_alternative
291 from charmhelpers.contrib.network.ip import (
292@@ -57,6 +62,188 @@
293
294 hooks = Hooks()
295
296+# A dict of valid ceph upgrade paths. Mapping is old -> new
297+upgrade_paths = {
298+ 'cloud:trusty-juno': 'cloud:trusty-kilo',
299+ 'cloud:trusty-kilo': 'cloud:trusty-liberty',
300+ 'cloud:trusty-liberty': None,
301+}
302+
303+
304+def pretty_print_upgrade_paths():
305+ lines = []
306+ for key, value in upgrade_paths.iteritems():
307+ lines.append("{} -> {}".format(key, value))
308+ return lines
309+
310+
311+def check_for_upgrade():
312+ c = hookenv.config()
313+ old_version = c.previous('source')
314+ log('old_version: {}'.format(old_version))
315+ # Strip all whitespace
316+ new_version = config('source')
317+ if new_version:
318+ # replace all whitespace
319+ new_version = new_version.replace(' ', '')
320+ log('new_version: {}'.format(new_version))
321+
322+ if old_version in upgrade_paths:
323+ if new_version == upgrade_paths[old_version]:
324+ log("{} to {} is a valid upgrade path. Proceeding.".format(
325+ old_version, new_version))
326+ roll_osd_cluster(new_version)
327+ else:
328+ # Log a helpful error message
329+ log("Invalid upgrade path from {} to {}. "
330+ "Valid paths are: {}".format(old_version,
331+ new_version,
332+ pretty_print_upgrade_paths()))
333+
334+
335+def lock_and_roll(my_hash):
336+ start_timestamp = time.time()
337+
338+ ceph.monitor_key_set("{}_start".format(my_hash), start_timestamp)
339+ log("Rolling")
340+ # This should be quick
341+ upgrade_osd()
342+ log("Done")
343+
344+ stop_timestamp = time.time()
345+ # Set a key to inform others I am finished
346+ ceph.monitor_key_set("{}_done".format(my_hash), stop_timestamp)
347+
348+
349+def get_hostname():
350+ try:
351+ with open('/etc/hostname', 'r') as host_file:
352+ host_lines = host_file.readlines()
353+ if host_lines:
354+ return host_lines[0].strip()
355+ except IOError:
356+ raise
357+
358+
359+# TODO: Timeout busted nodes and keep moving forward
360+# Edge cases:
361+# 1. Previous node dies on upgrade, can we retry?
362+# 2. This assumes that the osd failure domain is not set to osd.
363+# It rolls an entire server at a time.
364+def roll_osd_cluster(new_version):
365+ """
366+ This is tricky to get right so here's what we're going to do.
367+ There's 2 possible cases: Either I'm first in line or not.
368+ If I'm not first in line I'll wait a random time between 5-30 seconds
369+ and test to see if the previous osd is upgraded yet.
370+
371+ TODO: If you're not in the same failure domain it's safe to upgrade
372+ 1. Examine all pools and adopt the most strict failure domain policy
373+ Example: Pool 1: Failure domain = rack
374+ Pool 2: Failure domain = host
375+ Pool 3: Failure domain = row
376+
377+ outcome: Failure domain = host
378+ """
379+ log('roll_osd_cluster called with {}'.format(new_version))
380+ my_hostname = None
381+ try:
382+ my_hostname = get_hostname()
383+ except IOError as err:
384+ log("Failed to read /etc/hostname. Error: {}".format(err.message))
385+ status_set('blocked', 'failed to upgrade monitor')
386+
387+ my_hash = hashlib.sha224(my_hostname).hexdigest()
388+ # A sorted list of hashed osd names
389+ osd_hashed_dict = {}
390+ osd_tree_list = ceph.get_osd_tree()
391+ osd_hashed_list = sorted([hashlib.sha224(
392+ i.name.encode('utf-8')).hexdigest() for i in osd_tree_list])
393+ # Save a hash : name mapping so we can show the user which
394+ # unit name we're waiting on
395+ for i in osd_tree_list:
396+ osd_hashed_dict[
397+ hashlib.sha224(
398+ i.name.encode('utf-8')).hexdigest()
399+ ] = i.name
400+ log("osd_hashed_list: {}".format(osd_hashed_list))
401+ try:
402+ position = osd_hashed_list.index(my_hash)
403+ log("upgrade position: {}".format(position))
404+ if position == 0:
405+ # I'm first! Roll
406+ # First set a key to inform others I'm about to roll
407+ lock_and_roll(my_hash=my_hash)
408+ else:
409+ # Check if the previous node has finished
410+ status_set('blocked',
411+ 'Waiting on {} to finish upgrading'.format(
412+ osd_hashed_dict[
413+ osd_hashed_list[position - 1]]
414+ ))
415+ previous_node_finished = ceph.monitor_key_exists(
416+ "{}_done".format(osd_hashed_list[position - 1]))
417+
418+ # Block and wait on the previous nodes to finish
419+ while previous_node_finished is False:
420+ log("previous is not finished. Waiting")
421+ # Has this node been trying to upgrade for longer than 10 minutes?
422+ # If so then move on and consider that node dead.
423+
424+ # NOTE: This assumes the clusters clocks are somewhat accurate
425+ current_timestamp = time.time()
426+ previous_node_start_time = ceph.monitor_key_get(
427+ "{}_start".format(osd_hashed_list[position - 1]))
428+ if (current_timestamp - (10 * 60)) > previous_node_start_time:
429+ # Previous node is probably dead. Lets move on
430+ if previous_node_start_time is not None:
431+ log("Previous node {} appears dead. {} > {} Moving on".format(
432+ osd_hashed_dict[osd_hashed_list[position - 1]],
433+ (current_timestamp - (10 * 60)),
434+ previous_node_start_time
435+ ))
436+ lock_and_roll(my_hash=my_hash)
437+ else:
438+ # ??
439+ pass
440+ else:
441+ # I have to wait. Sleep a random amount of time and then
442+ # check if I can lock,upgrade and roll.
443+ time.sleep(random.randrange(5, 30))
444+ previous_node_finished = ceph.monitor_key_exists(
445+ "{}_done".format(osd_hashed_list[position - 1]))
446+ lock_and_roll(my_hash=my_hash)
447+ except ValueError:
448+ log("Unable to find ceph monitor hash in list")
449+ status_set('blocked', 'failed to upgrade monitor')
450+
451+
452+def upgrade_osd():
453+ add_source(config('source'), config('key'))
454+
455+ current_version = ceph.get_version()
456+ status_set("maintenance", "Upgrading osd")
457+ log("Current ceph version is {}".format(current_version))
458+ new_version = config('release-version')
459+ log("Upgrading to: {}".format(new_version))
460+
461+ try:
462+ add_source(config('source'), config('key'))
463+ apt_update(fatal=True)
464+ except subprocess.CalledProcessError as err:
465+ log("Adding the ceph source failed with message: {}".format(
466+ err.message))
467+ status_set("blocked", "Upgrade to {} failed".format(new_version))
468+ try:
469+ host.service_stop('ceph-osd-all')
470+ apt_install(packages=ceph.PACKAGES, fatal=True)
471+ host.service_start('ceph-osd-all')
472+ status_set("active", "")
473+ except subprocess.CalledProcessError as err:
474+ log("Stopping ceph and upgrading packages failed "
475+ "with message: {}".format(err.message))
476+ status_set("blocked", "Upgrade to {} failed".format(new_version))
477+
478
479 def install_upstart_scripts():
480 # Only install upstart configurations for older versions
481@@ -113,6 +300,7 @@
482 install_alternative('ceph.conf', '/etc/ceph/ceph.conf',
483 charm_ceph_conf, 90)
484
485+
486 JOURNAL_ZAPPED = '/var/lib/ceph/journal_zapped'
487
488
489@@ -147,6 +335,9 @@
490
491 @hooks.hook('config-changed')
492 def config_changed():
493+ # Check if an upgrade was requested
494+ check_for_upgrade()
495+
496 # Pre-flight checks
497 if config('osd-format') not in ceph.DISK_FORMATS:
498 log('Invalid OSD disk format configuration specified', level=ERROR)
499@@ -160,7 +351,7 @@
500 create_sysctl(sysctl_dict, '/etc/sysctl.d/50-ceph-osd-charm.conf')
501
502 e_mountpoint = config('ephemeral-unmount')
503- if (e_mountpoint and ceph.filesystem_mounted(e_mountpoint)):
504+ if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
505 umount(e_mountpoint)
506 prepare_disks_and_activate()
507
508@@ -189,8 +380,9 @@
509 hosts = []
510 for relid in relation_ids('mon'):
511 for unit in related_units(relid):
512- addr = relation_get('ceph-public-address', unit, relid) or \
513- get_host_ip(relation_get('private-address', unit, relid))
514+ addr = relation_get(
515+ 'ceph-public-address', unit, relid) or \
516+ get_host_ip(relation_get('private-address', unit, relid))
517
518 if addr:
519 hosts.append('{}:6789'.format(format_ipv6_addr(addr) or addr))
520@@ -246,10 +438,12 @@
521 'mon-relation-departed')
522 def mon_relation():
523 bootstrap_key = relation_get('osd_bootstrap_key')
524+ upgrade_key = relation_get('osd_upgrade_key')
525 if get_fsid() and get_auth() and bootstrap_key:
526 log('mon has provided conf- scanning disks')
527 emit_cephconf()
528 ceph.import_osd_bootstrap_key(bootstrap_key)
529+ ceph.import_osd_upgrade_key(upgrade_key)
530 prepare_disks_and_activate()
531 else:
532 log('mon cluster has not yet provided conf')
533
534=== modified file 'hooks/utils.py'
535--- hooks/utils.py 2016-02-18 17:10:53 +0000
536+++ hooks/utils.py 2016-03-01 16:57:01 +0000
537@@ -1,4 +1,3 @@
538-
539 #
540 # Copyright 2012 Canonical Ltd.
541 #
542@@ -12,8 +11,8 @@
543 from charmhelpers.core.hookenv import (
544 unit_get,
545 cached,
546- config
547-)
548+ config,
549+ status_set)
550 from charmhelpers.fetch import (
551 apt_install,
552 filter_installed_packages
553@@ -87,6 +86,35 @@
554 return answers[0].address
555
556
557+def get_public_addr():
558+ return get_network_addrs('ceph-public-network')[0]
559+
560+
561+def get_network_addrs(config_opt):
562+ """Get all configured public networks addresses.
563+
564+ If public network(s) are provided, go through them and return the
565+ addresses we have configured on any of those networks.
566+ """
567+ addrs = []
568+ networks = config(config_opt)
569+ if networks:
570+ networks = networks.split()
571+ addrs = [get_address_in_network(n) for n in networks]
572+ addrs = [a for a in addrs if a]
573+
574+ if not addrs:
575+ if networks:
576+ msg = ("Could not find an address on any of '%s' - resolve this "
577+ "error to retry" % networks)
578+ status_set('blocked', msg)
579+ raise Exception(msg)
580+ else:
581+ return [get_host_ip()]
582+
583+ return addrs
584+
585+
586 def get_networks(config_opt='ceph-public-network'):
587 """Get all configured networks from provided config option.
588
589
590=== modified file 'templates/ceph.conf'
591--- templates/ceph.conf 2016-01-18 16:42:36 +0000
592+++ templates/ceph.conf 2016-03-01 16:57:01 +0000
593@@ -6,7 +6,7 @@
594 auth service required = {{ auth_supported }}
595 auth client required = {{ auth_supported }}
596 {% endif %}
597-keyring = /etc/ceph/$cluster.$name.keyring
598+keyring = /etc/ceph/ceph.client.admin.keyring
599 mon host = {{ mon_hosts }}
600 fsid = {{ fsid }}
601
