Merge lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv into lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next

Proposed by Jorge Niedbalski
Status: Superseded
Proposed branch: lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next
Diff against target: 1959 lines (+592/-367)
12 files modified
hooks/charmhelpers/contrib/network/ip.py (+50/-48)
hooks/charmhelpers/contrib/openstack/context.py (+303/-215)
hooks/charmhelpers/contrib/openstack/neutron.py (+18/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+24/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+81/-97)
hooks/charmhelpers/core/hookenv.py (+6/-0)
hooks/charmhelpers/core/host.py (+8/-2)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/fetch/__init__.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+44/-0)
templates/icehouse/cisco_plugins.ini (+43/-0)
templates/icehouse/nova.conf (+8/-0)
To merge this branch: bzr merge lp:~springfield-team/charms/trusty/nova-cloud-controller/n1kv
Reviewer Review Type Date Requested Status
OpenStack Charmers Pending
Edward Hope-Morley Pending
Review via email: mp+242499@code.launchpad.net

This proposal has been superseded by a proposal from 2014-12-12.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1169 nova-cloud-controller-next for niedbalski mp242499
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-admin-network has no default value
  I: config.yaml: option haproxy-client-timeout has no default value
  I: config.yaml: option ssl_cert has no default value
  I: config.yaml: option nvp-l3-uuid has no default value
  I: config.yaml: option os-internal-network has no default value

Full lint test output: http://paste.ubuntu.com/9151040/
Build: http://10.98.191.181:8080/job/charm_lint_check/1169/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1003 nova-cloud-controller-next for niedbalski mp242499
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/nova_cc_hooks 442 145 67% 132-135, 148-149, 173, 184-185, 189, 221, 226-231, 242, 322-325, 333-336, 342-345, 355-371, 380-382, 392-406, 410-419, 505, 515, 519-520, 573, 579-589, 594-605, 615-625, 630-669, 679-694, 702-706, 731-740, 764, 769-777, 803, 853-856
  hooks/nova_cc_utils 445 112 75% 296-301, 312-315, 325-326, 382, 384, 430-432, 436, 450-458, 465-470, 474-488, 544, 593-595, 600-603, 608, 612, 636-637, 651-653, 674-675, 681-704, 708-714, 718-724, 730, 736, 743, 754-758, 843, 903-909, 913-915, 919-922, 926-938
  TOTAL 1038 366 65%
  Ran 96 tests in 8.821s
  OK

Full unit test output: http://paste.ubuntu.com/9151047/
Build: http://10.98.191.181:8080/job/charm_unit_test/1003/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #511 nova-cloud-controller-next for niedbalski mp242499
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  ERROR waited for 10m0s without being able to connect: ssh: connect to host 10.217.3.230 port 22: No route to host
  juju-test.conductor WARNING : Could not bootstrap osci-sv07, got Bootstrap returned with an exit > 0. Skipping
  juju-test INFO : Results: 0 passed, 0 failed, 3 errored
  ERROR subprocess encountered error code 124
  make: *** [test] Error 124

Full amulet test output: http://paste.ubuntu.com/9151571/
Build: http://10.98.191.181:8080/job/charm_amulet_test/511/

129. By Shiv Prasad Rao

Changes for n1kv

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #93 nova-cloud-controller-next for niedbalski mp242499
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-admin-network has no default value
  I: config.yaml: option haproxy-client-timeout has no default value
  I: config.yaml: option ssl_cert has no default value
  I: config.yaml: option nvp-l3-uuid has no default value
  I: config.yaml: option os-internal-network has no default value

Full lint test output: pastebin not avail., cmd error
Build: http://10.98.191.181:8080/job/charm_lint_check/93/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #93 nova-cloud-controller-next for niedbalski mp242499
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/nova_cc_hooks 442 145 67% 132-135, 148-149, 173, 184-185, 189, 221, 226-231, 242, 322-325, 333-336, 342-345, 355-371, 380-382, 392-406, 410-419, 505, 515, 519-520, 573, 579-589, 594-605, 615-625, 630-669, 679-694, 702-706, 731-740, 764, 769-777, 803, 853-856
  hooks/nova_cc_utils 445 112 75% 296-301, 312-315, 325-326, 382, 384, 430-432, 436, 450-458, 465-470, 474-488, 544, 593-595, 600-603, 608, 612, 636-637, 651-653, 674-675, 681-704, 708-714, 718-724, 730, 736, 743, 754-758, 843, 903-909, 913-915, 919-922, 926-938
  TOTAL 1038 366 65%
  Ran 96 tests in 9.319s
  OK

Full unit test output: pastebin not avail., cmd error
Build: http://10.98.191.181:8080/job/charm_unit_test/93/

130. By Edward Hope-Morley

[hopem] synced /next

Unmerged revisions

130. By Edward Hope-Morley

[hopem] synced /next

129. By Shiv Prasad Rao

Changes for n1kv

128. By Jorge Niedbalski

[all] resync with /next && charm helpers "make sync"

127. By Shiv Prasad Rao

Additional changes for n1kv

126. By Shiv Prasad Rao

Cisco Nexus 1000V changes

Preview Diff

1=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2--- hooks/charmhelpers/contrib/network/ip.py 2014-10-09 10:31:45 +0000
3+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-08 02:03:09 +0000
4@@ -1,15 +1,12 @@
5 import glob
6 import re
7 import subprocess
8-import sys
9
10 from functools import partial
11
12 from charmhelpers.core.hookenv import unit_get
13 from charmhelpers.fetch import apt_install
14 from charmhelpers.core.hookenv import (
15- WARNING,
16- ERROR,
17 log
18 )
19
20@@ -34,31 +31,28 @@
21 network)
22
23
24+def no_ip_found_error_out(network):
25+ errmsg = ("No IP address found in network: %s" % network)
26+ raise ValueError(errmsg)
27+
28+
29 def get_address_in_network(network, fallback=None, fatal=False):
30- """
31- Get an IPv4 or IPv6 address within the network from the host.
32+ """Get an IPv4 or IPv6 address within the network from the host.
33
34 :param network (str): CIDR presentation format. For example,
35 '192.168.1.0/24'.
36 :param fallback (str): If no address is found, return fallback.
37 :param fatal (boolean): If no address is found, fallback is not
38 set and fatal is True then exit(1).
39-
40 """
41-
42- def not_found_error_out():
43- log("No IP address found in network: %s" % network,
44- level=ERROR)
45- sys.exit(1)
46-
47 if network is None:
48 if fallback is not None:
49 return fallback
50+
51+ if fatal:
52+ no_ip_found_error_out(network)
53 else:
54- if fatal:
55- not_found_error_out()
56- else:
57- return None
58+ return None
59
60 _validate_cidr(network)
61 network = netaddr.IPNetwork(network)
62@@ -70,6 +64,7 @@
63 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
64 if cidr in network:
65 return str(cidr.ip)
66+
67 if network.version == 6 and netifaces.AF_INET6 in addresses:
68 for addr in addresses[netifaces.AF_INET6]:
69 if not addr['addr'].startswith('fe80'):
70@@ -82,20 +77,20 @@
71 return fallback
72
73 if fatal:
74- not_found_error_out()
75+ no_ip_found_error_out(network)
76
77 return None
78
79
80 def is_ipv6(address):
81- '''Determine whether provided address is IPv6 or not'''
82+ """Determine whether provided address is IPv6 or not."""
83 try:
84 address = netaddr.IPAddress(address)
85 except netaddr.AddrFormatError:
86 # probably a hostname - so not an address at all!
87 return False
88- else:
89- return address.version == 6
90+
91+ return address.version == 6
92
93
94 def is_address_in_network(network, address):
95@@ -113,11 +108,13 @@
96 except (netaddr.core.AddrFormatError, ValueError):
97 raise ValueError("Network (%s) is not in CIDR presentation format" %
98 network)
99+
100 try:
101 address = netaddr.IPAddress(address)
102 except (netaddr.core.AddrFormatError, ValueError):
103 raise ValueError("Address (%s) is not in correct presentation format" %
104 address)
105+
106 if address in network:
107 return True
108 else:
109@@ -147,6 +144,7 @@
110 return iface
111 else:
112 return addresses[netifaces.AF_INET][0][key]
113+
114 if address.version == 6 and netifaces.AF_INET6 in addresses:
115 for addr in addresses[netifaces.AF_INET6]:
116 if not addr['addr'].startswith('fe80'):
117@@ -160,41 +158,42 @@
118 return str(cidr).split('/')[1]
119 else:
120 return addr[key]
121+
122 return None
123
124
125 get_iface_for_address = partial(_get_for_address, key='iface')
126
127+
128 get_netmask_for_address = partial(_get_for_address, key='netmask')
129
130
131 def format_ipv6_addr(address):
132- """
133- IPv6 needs to be wrapped with [] in url link to parse correctly.
134+ """If address is IPv6, wrap it in '[]' otherwise return None.
135+
136+ This is required by most configuration files when specifying IPv6
137+ addresses.
138 """
139 if is_ipv6(address):
140- address = "[%s]" % address
141- else:
142- log("Not a valid ipv6 address: %s" % address, level=WARNING)
143- address = None
144+ return "[%s]" % address
145
146- return address
147+ return None
148
149
150 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
151 fatal=True, exc_list=None):
152- """
153- Return the assigned IP address for a given interface, if any, or [].
154- """
155+ """Return the assigned IP address for a given interface, if any."""
156 # Extract nic if passed /dev/ethX
157 if '/' in iface:
158 iface = iface.split('/')[-1]
159+
160 if not exc_list:
161 exc_list = []
162+
163 try:
164 inet_num = getattr(netifaces, inet_type)
165 except AttributeError:
166- raise Exception('Unknown inet type ' + str(inet_type))
167+ raise Exception("Unknown inet type '%s'" % str(inet_type))
168
169 interfaces = netifaces.interfaces()
170 if inc_aliases:
171@@ -202,15 +201,18 @@
172 for _iface in interfaces:
173 if iface == _iface or _iface.split(':')[0] == iface:
174 ifaces.append(_iface)
175+
176 if fatal and not ifaces:
177 raise Exception("Invalid interface '%s'" % iface)
178+
179 ifaces.sort()
180 else:
181 if iface not in interfaces:
182 if fatal:
183- raise Exception("%s not found " % (iface))
184+ raise Exception("Interface '%s' not found " % (iface))
185 else:
186 return []
187+
188 else:
189 ifaces = [iface]
190
191@@ -221,11 +223,14 @@
192 for entry in net_info[inet_num]:
193 if 'addr' in entry and entry['addr'] not in exc_list:
194 addresses.append(entry['addr'])
195+
196 if fatal and not addresses:
197 raise Exception("Interface '%s' doesn't have any %s addresses." %
198 (iface, inet_type))
199+
200 return addresses
201
202+
203 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
204
205
206@@ -241,6 +246,7 @@
207 raw = re.match(ll_key, _addr)
208 if raw:
209 _addr = raw.group(1)
210+
211 if _addr == addr:
212 log("Address '%s' is configured on iface '%s'" %
213 (addr, iface))
214@@ -251,8 +257,9 @@
215
216
217 def sniff_iface(f):
218- """If no iface provided, inject net iface inferred from unit private
219- address.
220+ """Ensure decorated function is called with a value for iface.
221+
222+ If no iface provided, inject net iface inferred from unit private address.
223 """
224 def iface_sniffer(*args, **kwargs):
225 if not kwargs.get('iface', None):
226@@ -317,33 +324,28 @@
227 return addrs
228
229 if fatal:
230- raise Exception("Interface '%s' doesn't have a scope global "
231+ raise Exception("Interface '%s' does not have a scope global "
232 "non-temporary ipv6 address." % iface)
233
234 return []
235
236
237 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
238- """
239- Return a list of bridges on the system or []
240- """
241- b_rgex = vnic_dir + '/*/bridge'
242- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
243+ """Return a list of bridges on the system."""
244+ b_regex = "%s/*/bridge" % vnic_dir
245+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
246
247
248 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
249- """
250- Return a list of nics comprising a given bridge on the system or []
251- """
252- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
253- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
254+ """Return a list of nics comprising a given bridge on the system."""
255+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
256+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
257
258
259 def is_bridge_member(nic):
260- """
261- Check if a given nic is a member of a bridge
262- """
263+ """Check if a given nic is a member of a bridge."""
264 for bridge in get_bridges():
265 if nic in get_bridge_nics(bridge):
266 return True
267+
268 return False
269
270=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
271--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-13 16:18:58 +0000
272+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-08 02:03:09 +0000
273@@ -3,18 +3,15 @@
274 import time
275
276 from base64 import b64decode
277-
278-from subprocess import (
279- check_call
280-)
281+from subprocess import check_call
282
283 from charmhelpers.fetch import (
284 apt_install,
285 filter_installed_packages,
286 )
287-
288 from charmhelpers.core.hookenv import (
289 config,
290+ is_relation_made,
291 local_unit,
292 log,
293 relation_get,
294@@ -23,43 +20,40 @@
295 relation_set,
296 unit_get,
297 unit_private_ip,
298+ DEBUG,
299+ INFO,
300+ WARNING,
301 ERROR,
302- INFO
303 )
304-
305 from charmhelpers.core.host import (
306 mkdir,
307- write_file
308+ write_file,
309 )
310-
311 from charmhelpers.contrib.hahelpers.cluster import (
312 determine_apache_port,
313 determine_api_port,
314 https,
315- is_clustered
316+ is_clustered,
317 )
318-
319 from charmhelpers.contrib.hahelpers.apache import (
320 get_cert,
321 get_ca_cert,
322 install_ca_cert,
323 )
324-
325 from charmhelpers.contrib.openstack.neutron import (
326 neutron_plugin_attribute,
327 )
328-
329 from charmhelpers.contrib.network.ip import (
330 get_address_in_network,
331 get_ipv6_addr,
332 get_netmask_for_address,
333 format_ipv6_addr,
334- is_address_in_network
335+ is_address_in_network,
336 )
337-
338 from charmhelpers.contrib.openstack.utils import get_host_ip
339
340 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
341+ADDRESS_TYPES = ['admin', 'internal', 'public']
342
343
344 class OSContextError(Exception):
345@@ -67,7 +61,7 @@
346
347
348 def ensure_packages(packages):
349- '''Install but do not upgrade required plugin packages'''
350+ """Install but do not upgrade required plugin packages."""
351 required = filter_installed_packages(packages)
352 if required:
353 apt_install(required, fatal=True)
354@@ -78,17 +72,24 @@
355 for k, v in ctxt.iteritems():
356 if v is None or v == '':
357 _missing.append(k)
358+
359 if _missing:
360- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
361+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
362 return False
363+
364 return True
365
366
367 def config_flags_parser(config_flags):
368+ """Parses config flags string into dict.
369+
370+ The provided config_flags string may be a list of comma-separated values
371+ which themselves may be comma-separated list of values.
372+ """
373 if config_flags.find('==') >= 0:
374- log("config_flags is not in expected format (key=value)",
375- level=ERROR)
376+ log("config_flags is not in expected format (key=value)", level=ERROR)
377 raise OSContextError
378+
379 # strip the following from each value.
380 post_strippers = ' ,'
381 # we strip any leading/trailing '=' or ' ' from the string then
382@@ -111,17 +112,18 @@
383 # if this not the first entry, expect an embedded key.
384 index = current.rfind(',')
385 if index < 0:
386- log("invalid config value(s) at index %s" % (i),
387- level=ERROR)
388+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
389 raise OSContextError
390 key = current[index + 1:]
391
392 # Add to collection.
393 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
394+
395 return flags
396
397
398 class OSContextGenerator(object):
399+ """Base class for all context generators."""
400 interfaces = []
401
402 def __call__(self):
403@@ -133,11 +135,11 @@
404
405 def __init__(self,
406 database=None, user=None, relation_prefix=None, ssl_dir=None):
407- '''
408- Allows inspecting relation for settings prefixed with relation_prefix.
409- This is useful for parsing access for multiple databases returned via
410- the shared-db interface (eg, nova_password, quantum_password)
411- '''
412+ """Allows inspecting relation for settings prefixed with
413+ relation_prefix. This is useful for parsing access for multiple
414+ databases returned via the shared-db interface (eg, nova_password,
415+ quantum_password)
416+ """
417 self.relation_prefix = relation_prefix
418 self.database = database
419 self.user = user
420@@ -147,9 +149,8 @@
421 self.database = self.database or config('database')
422 self.user = self.user or config('database-user')
423 if None in [self.database, self.user]:
424- log('Could not generate shared_db context. '
425- 'Missing required charm config options. '
426- '(database name and user)')
427+ log("Could not generate shared_db context. Missing required charm "
428+ "config options. (database name and user)", level=ERROR)
429 raise OSContextError
430
431 ctxt = {}
432@@ -202,23 +203,24 @@
433 def __call__(self):
434 self.database = self.database or config('database')
435 if self.database is None:
436- log('Could not generate postgresql_db context. '
437- 'Missing required charm config options. '
438- '(database name)')
439+ log('Could not generate postgresql_db context. Missing required '
440+ 'charm config options. (database name)', level=ERROR)
441 raise OSContextError
442+
443 ctxt = {}
444-
445 for rid in relation_ids(self.interfaces[0]):
446 for unit in related_units(rid):
447- ctxt = {
448- 'database_host': relation_get('host', rid=rid, unit=unit),
449- 'database': self.database,
450- 'database_user': relation_get('user', rid=rid, unit=unit),
451- 'database_password': relation_get('password', rid=rid, unit=unit),
452- 'database_type': 'postgresql',
453- }
454+ rel_host = relation_get('host', rid=rid, unit=unit)
455+ rel_user = relation_get('user', rid=rid, unit=unit)
456+ rel_passwd = relation_get('password', rid=rid, unit=unit)
457+ ctxt = {'database_host': rel_host,
458+ 'database': self.database,
459+ 'database_user': rel_user,
460+ 'database_password': rel_passwd,
461+ 'database_type': 'postgresql'}
462 if context_complete(ctxt):
463 return ctxt
464+
465 return {}
466
467
468@@ -227,23 +229,29 @@
469 ca_path = os.path.join(ssl_dir, 'db-client.ca')
470 with open(ca_path, 'w') as fh:
471 fh.write(b64decode(rdata['ssl_ca']))
472+
473 ctxt['database_ssl_ca'] = ca_path
474 elif 'ssl_ca' in rdata:
475- log("Charm not setup for ssl support but ssl ca found")
476+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
477 return ctxt
478+
479 if 'ssl_cert' in rdata:
480 cert_path = os.path.join(
481 ssl_dir, 'db-client.cert')
482 if not os.path.exists(cert_path):
483- log("Waiting 1m for ssl client cert validity")
484+ log("Waiting 1m for ssl client cert validity", level=INFO)
485 time.sleep(60)
486+
487 with open(cert_path, 'w') as fh:
488 fh.write(b64decode(rdata['ssl_cert']))
489+
490 ctxt['database_ssl_cert'] = cert_path
491 key_path = os.path.join(ssl_dir, 'db-client.key')
492 with open(key_path, 'w') as fh:
493 fh.write(b64decode(rdata['ssl_key']))
494+
495 ctxt['database_ssl_key'] = key_path
496+
497 return ctxt
498
499
500@@ -251,9 +259,8 @@
501 interfaces = ['identity-service']
502
503 def __call__(self):
504- log('Generating template context for identity-service')
505+ log('Generating template context for identity-service', level=DEBUG)
506 ctxt = {}
507-
508 for rid in relation_ids('identity-service'):
509 for unit in related_units(rid):
510 rdata = relation_get(rid=rid, unit=unit)
511@@ -261,26 +268,24 @@
512 serv_host = format_ipv6_addr(serv_host) or serv_host
513 auth_host = rdata.get('auth_host')
514 auth_host = format_ipv6_addr(auth_host) or auth_host
515-
516- ctxt = {
517- 'service_port': rdata.get('service_port'),
518- 'service_host': serv_host,
519- 'auth_host': auth_host,
520- 'auth_port': rdata.get('auth_port'),
521- 'admin_tenant_name': rdata.get('service_tenant'),
522- 'admin_user': rdata.get('service_username'),
523- 'admin_password': rdata.get('service_password'),
524- 'service_protocol':
525- rdata.get('service_protocol') or 'http',
526- 'auth_protocol':
527- rdata.get('auth_protocol') or 'http',
528- }
529+ svc_protocol = rdata.get('service_protocol') or 'http'
530+ auth_protocol = rdata.get('auth_protocol') or 'http'
531+ ctxt = {'service_port': rdata.get('service_port'),
532+ 'service_host': serv_host,
533+ 'auth_host': auth_host,
534+ 'auth_port': rdata.get('auth_port'),
535+ 'admin_tenant_name': rdata.get('service_tenant'),
536+ 'admin_user': rdata.get('service_username'),
537+ 'admin_password': rdata.get('service_password'),
538+ 'service_protocol': svc_protocol,
539+ 'auth_protocol': auth_protocol}
540 if context_complete(ctxt):
541 # NOTE(jamespage) this is required for >= icehouse
542 # so a missing value just indicates keystone needs
543 # upgrading
544 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
545 return ctxt
546+
547 return {}
548
549
550@@ -293,21 +298,23 @@
551 self.interfaces = [rel_name]
552
553 def __call__(self):
554- log('Generating template context for amqp')
555+ log('Generating template context for amqp', level=DEBUG)
556 conf = config()
557- user_setting = 'rabbit-user'
558- vhost_setting = 'rabbit-vhost'
559 if self.relation_prefix:
560- user_setting = self.relation_prefix + '-rabbit-user'
561- vhost_setting = self.relation_prefix + '-rabbit-vhost'
562+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
563+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
564+ else:
565+ user_setting = 'rabbit-user'
566+ vhost_setting = 'rabbit-vhost'
567
568 try:
569 username = conf[user_setting]
570 vhost = conf[vhost_setting]
571 except KeyError as e:
572- log('Could not generate shared_db context. '
573- 'Missing required charm config options: %s.' % e)
574+ log('Could not generate shared_db context. Missing required charm '
575+ 'config options: %s.' % e, level=ERROR)
576 raise OSContextError
577+
578 ctxt = {}
579 for rid in relation_ids(self.rel_name):
580 ha_vip_only = False
581@@ -321,6 +328,7 @@
582 host = relation_get('private-address', rid=rid, unit=unit)
583 host = format_ipv6_addr(host) or host
584 ctxt['rabbitmq_host'] = host
585+
586 ctxt.update({
587 'rabbitmq_user': username,
588 'rabbitmq_password': relation_get('password', rid=rid,
589@@ -331,6 +339,7 @@
590 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
591 if ssl_port:
592 ctxt['rabbit_ssl_port'] = ssl_port
593+
594 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
595 if ssl_ca:
596 ctxt['rabbit_ssl_ca'] = ssl_ca
597@@ -344,41 +353,45 @@
598 if context_complete(ctxt):
599 if 'rabbit_ssl_ca' in ctxt:
600 if not self.ssl_dir:
601- log(("Charm not setup for ssl support "
602- "but ssl ca found"))
603+ log("Charm not setup for ssl support but ssl ca "
604+ "found", level=INFO)
605 break
606+
607 ca_path = os.path.join(
608 self.ssl_dir, 'rabbit-client-ca.pem')
609 with open(ca_path, 'w') as fh:
610 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
611 ctxt['rabbit_ssl_ca'] = ca_path
612+
613 # Sufficient information found = break out!
614 break
615+
616 # Used for active/active rabbitmq >= grizzly
617- if ('clustered' not in ctxt or ha_vip_only) \
618- and len(related_units(rid)) > 1:
619+ if (('clustered' not in ctxt or ha_vip_only) and
620+ len(related_units(rid)) > 1):
621 rabbitmq_hosts = []
622 for unit in related_units(rid):
623 host = relation_get('private-address', rid=rid, unit=unit)
624 host = format_ipv6_addr(host) or host
625 rabbitmq_hosts.append(host)
626+
627 ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
628+
629 if not context_complete(ctxt):
630 return {}
631- else:
632- return ctxt
633+
634+ return ctxt
635
636
637 class CephContext(OSContextGenerator):
638+ """Generates context for /etc/ceph/ceph.conf templates."""
639 interfaces = ['ceph']
640
641 def __call__(self):
642- '''This generates context for /etc/ceph/ceph.conf templates'''
643 if not relation_ids('ceph'):
644 return {}
645
646- log('Generating template context for ceph')
647-
648+ log('Generating template context for ceph', level=DEBUG)
649 mon_hosts = []
650 auth = None
651 key = None
652@@ -387,18 +400,18 @@
653 for unit in related_units(rid):
654 auth = relation_get('auth', rid=rid, unit=unit)
655 key = relation_get('key', rid=rid, unit=unit)
656- ceph_addr = \
657- relation_get('ceph-public-address', rid=rid, unit=unit) or \
658- relation_get('private-address', rid=rid, unit=unit)
659+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
660+ unit=unit)
661+ unit_priv_addr = relation_get('private-address', rid=rid,
662+ unit=unit)
663+ ceph_addr = ceph_pub_addr or unit_priv_addr
664 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
665 mon_hosts.append(ceph_addr)
666
667- ctxt = {
668- 'mon_hosts': ' '.join(mon_hosts),
669- 'auth': auth,
670- 'key': key,
671- 'use_syslog': use_syslog
672- }
673+ ctxt = {'mon_hosts': ' '.join(mon_hosts),
674+ 'auth': auth,
675+ 'key': key,
676+ 'use_syslog': use_syslog}
677
678 if not os.path.isdir('/etc/ceph'):
679 os.mkdir('/etc/ceph')
680@@ -407,79 +420,65 @@
681 return {}
682
683 ensure_packages(['ceph-common'])
684-
685 return ctxt
686
687
688-ADDRESS_TYPES = ['admin', 'internal', 'public']
689-
690-
691 class HAProxyContext(OSContextGenerator):
692+ """Provides half a context for the haproxy template, which describes
693+ all peers to be included in the cluster. Each charm needs to include
694+ its own context generator that describes the port mapping.
695+ """
696 interfaces = ['cluster']
697
698 def __call__(self):
699- '''
700- Builds half a context for the haproxy template, which describes
701- all peers to be included in the cluster. Each charm needs to include
702- its own context generator that describes the port mapping.
703- '''
704 if not relation_ids('cluster'):
705 return {}
706
707- l_unit = local_unit().replace('/', '-')
708-
709 if config('prefer-ipv6'):
710 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
711 else:
712 addr = get_host_ip(unit_get('private-address'))
713
714+ l_unit = local_unit().replace('/', '-')
715 cluster_hosts = {}
716
717 # NOTE(jamespage): build out map of configured network endpoints
718 # and associated backends
719 for addr_type in ADDRESS_TYPES:
720- laddr = get_address_in_network(
721- config('os-{}-network'.format(addr_type)))
722+ cfg_opt = 'os-{}-network'.format(addr_type)
723+ laddr = get_address_in_network(config(cfg_opt))
724 if laddr:
725- cluster_hosts[laddr] = {}
726- cluster_hosts[laddr]['network'] = "{}/{}".format(
727- laddr,
728- get_netmask_for_address(laddr)
729- )
730- cluster_hosts[laddr]['backends'] = {}
731- cluster_hosts[laddr]['backends'][l_unit] = laddr
732+ netmask = get_netmask_for_address(laddr)
733+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
734+ netmask),
735+ 'backends': {l_unit: laddr}}
736 for rid in relation_ids('cluster'):
737 for unit in related_units(rid):
738- _unit = unit.replace('/', '-')
739 _laddr = relation_get('{}-address'.format(addr_type),
740 rid=rid, unit=unit)
741 if _laddr:
742+ _unit = unit.replace('/', '-')
743 cluster_hosts[laddr]['backends'][_unit] = _laddr
744
745 # NOTE(jamespage) no split configurations found, just use
746 # private addresses
747 if not cluster_hosts:
748- cluster_hosts[addr] = {}
749- cluster_hosts[addr]['network'] = "{}/{}".format(
750- addr,
751- get_netmask_for_address(addr)
752- )
753- cluster_hosts[addr]['backends'] = {}
754- cluster_hosts[addr]['backends'][l_unit] = addr
755+ netmask = get_netmask_for_address(addr)
756+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
757+ 'backends': {l_unit: addr}}
758 for rid in relation_ids('cluster'):
759 for unit in related_units(rid):
760- _unit = unit.replace('/', '-')
761 _laddr = relation_get('private-address',
762 rid=rid, unit=unit)
763 if _laddr:
764+ _unit = unit.replace('/', '-')
765 cluster_hosts[addr]['backends'][_unit] = _laddr
766
767- ctxt = {
768- 'frontends': cluster_hosts,
769- }
770+ ctxt = {'frontends': cluster_hosts}
771
772 if config('haproxy-server-timeout'):
773 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
774+
775 if config('haproxy-client-timeout'):
776 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
777
778@@ -495,11 +494,15 @@
779 for frontend in cluster_hosts:
780 if len(cluster_hosts[frontend]['backends']) > 1:
781 # Enable haproxy when we have enough peers.
782- log('Ensuring haproxy enabled in /etc/default/haproxy.')
783+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
784+ level=DEBUG)
785 with open('/etc/default/haproxy', 'w') as out:
786 out.write('ENABLED=1\n')
787+
788 return ctxt
789- log('HAProxy context is incomplete, this unit has no peers.')
790+
791+ log('HAProxy context is incomplete, this unit has no peers.',
792+ level=INFO)
793 return {}
794
795
796@@ -507,29 +510,28 @@
797 interfaces = ['image-service']
798
799 def __call__(self):
800- '''
801- Obtains the glance API server from the image-service relation. Useful
802- in nova and cinder (currently).
803- '''
804- log('Generating template context for image-service.')
805+ """Obtains the glance API server from the image-service relation.
806+ Useful in nova and cinder (currently).
807+ """
808+ log('Generating template context for image-service.', level=DEBUG)
809 rids = relation_ids('image-service')
810 if not rids:
811 return {}
812+
813 for rid in rids:
814 for unit in related_units(rid):
815 api_server = relation_get('glance-api-server',
816 rid=rid, unit=unit)
817 if api_server:
818 return {'glance_api_servers': api_server}
819- log('ImageService context is incomplete. '
820- 'Missing required relation data.')
821+
822+ log("ImageService context is incomplete. Missing required relation "
823+ "data.", level=INFO)
824 return {}
825
826
827 class ApacheSSLContext(OSContextGenerator):
828-
829- """
830- Generates a context for an apache vhost configuration that configures
831+ """Generates a context for an apache vhost configuration that configures
832 HTTPS reverse proxying for one or many endpoints. Generated context
833 looks something like::
834
835@@ -563,6 +565,7 @@
836 else:
837 cert_filename = 'cert'
838 key_filename = 'key'
839+
840 write_file(path=os.path.join(ssl_dir, cert_filename),
841 content=b64decode(cert))
842 write_file(path=os.path.join(ssl_dir, key_filename),
843@@ -574,7 +577,8 @@
844 install_ca_cert(b64decode(ca_cert))
845
846 def canonical_names(self):
847- '''Figure out which canonical names clients will access this service'''
848+ """Figure out which canonical names clients will access this service.
849+ """
850 cns = []
851 for r_id in relation_ids('identity-service'):
852 for unit in related_units(r_id):
853@@ -582,47 +586,71 @@
854 for k in rdata:
855 if k.startswith('ssl_key_'):
856 cns.append(k.lstrip('ssl_key_'))
857+
858 return list(set(cns))
859
860+ def get_network_addresses(self):
861+ """For each network configured, return corresponding address and vip
862+ (if available).
863+
864+ Returns a list of tuples of the form:
865+
866+ [(address_in_net_a, vip_in_net_a),
867+ (address_in_net_b, vip_in_net_b),
868+ ...]
869+
870+ or, if no vip(s) available:
871+
872+ [(address_in_net_a, address_in_net_a),
873+ (address_in_net_b, address_in_net_b),
874+ ...]
875+ """
876+ addresses = []
877+ if config('vip'):
878+ vips = config('vip').split()
879+ else:
880+ vips = []
881+
882+ for net_type in ['os-internal-network', 'os-admin-network',
883+ 'os-public-network']:
884+ addr = get_address_in_network(config(net_type),
885+ unit_get('private-address'))
886+ if len(vips) > 1 and is_clustered():
887+ if not config(net_type):
888+ log("Multiple networks configured but %s is not "
889+ "set, skipping." % net_type, level=WARNING)
890+ continue
891+
892+ for vip in vips:
893+ if is_address_in_network(config(net_type), vip):
894+ addresses.append((addr, vip))
895+ break
896+
897+ elif is_clustered() and config('vip'):
898+ addresses.append((addr, config('vip')))
899+ else:
900+ addresses.append((addr, addr))
901+
902+ return addresses
903+
904 def __call__(self):
905 if isinstance(self.external_ports, basestring):
906 self.external_ports = [self.external_ports]
907- if (not self.external_ports or not https()):
908+
909+ if not self.external_ports or not https():
910 return {}
911
912 self.configure_ca()
913 self.enable_modules()
914
915- ctxt = {
916- 'namespace': self.service_namespace,
917- 'endpoints': [],
918- 'ext_ports': []
919- }
920+ ctxt = {'namespace': self.service_namespace,
921+ 'endpoints': [],
922+ 'ext_ports': []}
923
924 for cn in self.canonical_names():
925 self.configure_cert(cn)
926
927- addresses = []
928- vips = []
929- if config('vip'):
930- vips = config('vip').split()
931-
932- for network_type in ['os-internal-network',
933- 'os-admin-network',
934- 'os-public-network']:
935- address = get_address_in_network(config(network_type),
936- unit_get('private-address'))
937- if len(vips) > 0 and is_clustered():
938- for vip in vips:
939- if is_address_in_network(config(network_type),
940- vip):
941- addresses.append((address, vip))
942- break
943- elif is_clustered():
944- addresses.append((address, config('vip')))
945- else:
946- addresses.append((address, address))
947-
948+ addresses = self.get_network_addresses()
949 for address, endpoint in set(addresses):
950 for api_port in self.external_ports:
951 ext_port = determine_apache_port(api_port)
952@@ -630,6 +658,7 @@
953 portmap = (address, endpoint, int(ext_port), int(int_port))
954 ctxt['endpoints'].append(portmap)
955 ctxt['ext_ports'].append(int(ext_port))
956+
957 ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
958 return ctxt
959
960@@ -647,21 +676,23 @@
961
962 @property
963 def packages(self):
964- return neutron_plugin_attribute(
965- self.plugin, 'packages', self.network_manager)
966+ return neutron_plugin_attribute(self.plugin, 'packages',
967+ self.network_manager)
968
969 @property
970 def neutron_security_groups(self):
971 return None
972
973 def _ensure_packages(self):
974- [ensure_packages(pkgs) for pkgs in self.packages]
975+ for pkgs in self.packages:
976+ ensure_packages(pkgs)
977
978 def _save_flag_file(self):
979 if self.network_manager == 'quantum':
980 _file = '/etc/nova/quantum_plugin.conf'
981 else:
982 _file = '/etc/nova/neutron_plugin.conf'
983+
984 with open(_file, 'wb') as out:
985 out.write(self.plugin + '\n')
986
987@@ -670,13 +701,11 @@
988 self.network_manager)
989 config = neutron_plugin_attribute(self.plugin, 'config',
990 self.network_manager)
991- ovs_ctxt = {
992- 'core_plugin': driver,
993- 'neutron_plugin': 'ovs',
994- 'neutron_security_groups': self.neutron_security_groups,
995- 'local_ip': unit_private_ip(),
996- 'config': config
997- }
998+ ovs_ctxt = {'core_plugin': driver,
999+ 'neutron_plugin': 'ovs',
1000+ 'neutron_security_groups': self.neutron_security_groups,
1001+ 'local_ip': unit_private_ip(),
1002+ 'config': config}
1003
1004 return ovs_ctxt
1005
1006@@ -685,13 +714,11 @@
1007 self.network_manager)
1008 config = neutron_plugin_attribute(self.plugin, 'config',
1009 self.network_manager)
1010- nvp_ctxt = {
1011- 'core_plugin': driver,
1012- 'neutron_plugin': 'nvp',
1013- 'neutron_security_groups': self.neutron_security_groups,
1014- 'local_ip': unit_private_ip(),
1015- 'config': config
1016- }
1017+ nvp_ctxt = {'core_plugin': driver,
1018+ 'neutron_plugin': 'nvp',
1019+ 'neutron_security_groups': self.neutron_security_groups,
1020+ 'local_ip': unit_private_ip(),
1021+ 'config': config}
1022
1023 return nvp_ctxt
1024
1025@@ -700,35 +727,50 @@
1026 self.network_manager)
1027 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1028 self.network_manager)
1029- n1kv_ctxt = {
1030- 'core_plugin': driver,
1031- 'neutron_plugin': 'n1kv',
1032- 'neutron_security_groups': self.neutron_security_groups,
1033- 'local_ip': unit_private_ip(),
1034- 'config': n1kv_config,
1035- 'vsm_ip': config('n1kv-vsm-ip'),
1036- 'vsm_username': config('n1kv-vsm-username'),
1037- 'vsm_password': config('n1kv-vsm-password'),
1038- 'restrict_policy_profiles': config(
1039- 'n1kv_restrict_policy_profiles'),
1040- }
1041+ n1kv_user_config_flags = config('n1kv-config-flags')
1042+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1043+ n1kv_ctxt = {'core_plugin': driver,
1044+ 'neutron_plugin': 'n1kv',
1045+ 'neutron_security_groups': self.neutron_security_groups,
1046+ 'local_ip': unit_private_ip(),
1047+ 'config': n1kv_config,
1048+ 'vsm_ip': config('n1kv-vsm-ip'),
1049+ 'vsm_username': config('n1kv-vsm-username'),
1050+ 'vsm_password': config('n1kv-vsm-password'),
1051+ 'restrict_policy_profiles': restrict_policy_profiles}
1052+
1053+ if n1kv_user_config_flags:
1054+ flags = config_flags_parser(n1kv_user_config_flags)
1055+ n1kv_ctxt['user_config_flags'] = flags
1056
1057 return n1kv_ctxt
1058
1059+ def calico_ctxt(self):
1060+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1061+ self.network_manager)
1062+ config = neutron_plugin_attribute(self.plugin, 'config',
1063+ self.network_manager)
1064+ calico_ctxt = {'core_plugin': driver,
1065+ 'neutron_plugin': 'Calico',
1066+ 'neutron_security_groups': self.neutron_security_groups,
1067+ 'local_ip': unit_private_ip(),
1068+ 'config': config}
1069+
1070+ return calico_ctxt
1071+
1072 def neutron_ctxt(self):
1073 if https():
1074 proto = 'https'
1075 else:
1076 proto = 'http'
1077+
1078 if is_clustered():
1079 host = config('vip')
1080 else:
1081 host = unit_get('private-address')
1082- url = '%s://%s:%s' % (proto, host, '9696')
1083- ctxt = {
1084- 'network_manager': self.network_manager,
1085- 'neutron_url': url,
1086- }
1087+
1088+ ctxt = {'network_manager': self.network_manager,
1089+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1090 return ctxt
1091
1092 def __call__(self):
1093@@ -748,6 +790,8 @@
1094 ctxt.update(self.nvp_ctxt())
1095 elif self.plugin == 'n1kv':
1096 ctxt.update(self.n1kv_ctxt())
1097+ elif self.plugin == 'Calico':
1098+ ctxt.update(self.calico_ctxt())
1099
1100 alchemy_flags = config('neutron-alchemy-flags')
1101 if alchemy_flags:
1102@@ -759,23 +803,40 @@
1103
1104
1105 class OSConfigFlagContext(OSContextGenerator):
1106-
1107- """
1108- Responsible for adding user-defined config-flags in charm config to a
1109- template context.
1110+ """Provides support for user-defined config flags.
1111+
1112+ Users can define a comma-separated list of key=value pairs
1113+ in the charm configuration and apply them at any point in
1114+ any file by using a template flag.
1115+
1116+ Sometimes users might want config flags inserted within a
1117+ specific section so this class allows users to specify the
1118+ template flag name, allowing for multiple template flags
1119+ (sections) within the same context.
1120
1121 NOTE: the value of config-flags may be a comma-separated list of
1122 key=value pairs and some Openstack config files support
1123 comma-separated lists as values.
1124 """
1125
1126+ def __init__(self, charm_flag='config-flags',
1127+ template_flag='user_config_flags'):
1128+ """
1129+ :param charm_flag: config flags in charm configuration.
1130+ :param template_flag: insert point for user-defined flags in template
1131+ file.
1132+ """
1133+ super(OSConfigFlagContext, self).__init__()
1134+ self._charm_flag = charm_flag
1135+ self._template_flag = template_flag
1136+
1137 def __call__(self):
1138- config_flags = config('config-flags')
1139+ config_flags = config(self._charm_flag)
1140 if not config_flags:
1141 return {}
1142
1143- flags = config_flags_parser(config_flags)
1144- return {'user_config_flags': flags}
1145+ return {self._template_flag:
1146+ config_flags_parser(config_flags)}
1147
1148
1149 class SubordinateConfigContext(OSContextGenerator):
1150@@ -819,7 +880,6 @@
1151 },
1152 }
1153 }
1154-
1155 """
1156
1157 def __init__(self, service, config_file, interface):
1158@@ -849,26 +909,28 @@
1159
1160 if self.service not in sub_config:
1161 log('Found subordinate_config on %s but it contained'
1162- 'nothing for %s service' % (rid, self.service))
1163+ ' nothing for %s service' % (rid, self.service),
1164+ level=INFO)
1165 continue
1166
1167 sub_config = sub_config[self.service]
1168 if self.config_file not in sub_config:
1169 log('Found subordinate_config on %s but it contained'
1170- 'nothing for %s' % (rid, self.config_file))
1171+ ' nothing for %s' % (rid, self.config_file),
1172+ level=INFO)
1173 continue
1174
1175 sub_config = sub_config[self.config_file]
1176 for k, v in sub_config.iteritems():
1177 if k == 'sections':
1178 for section, config_dict in v.iteritems():
1179- log("adding section '%s'" % (section))
1180+ log("Adding section '%s'" % (section),
1181+ level=DEBUG)
1182 ctxt[k][section] = config_dict
1183 else:
1184 ctxt[k] = v
1185
1186- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1187-
1188+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1189 return ctxt
1190
1191
1192@@ -880,15 +942,14 @@
1193 False if config('debug') is None else config('debug')
1194 ctxt['verbose'] = \
1195 False if config('verbose') is None else config('verbose')
1196+
1197 return ctxt
1198
1199
1200 class SyslogContext(OSContextGenerator):
1201
1202 def __call__(self):
1203- ctxt = {
1204- 'use_syslog': config('use-syslog')
1205- }
1206+ ctxt = {'use_syslog': config('use-syslog')}
1207 return ctxt
1208
1209
1210@@ -896,13 +957,9 @@
1211
1212 def __call__(self):
1213 if config('prefer-ipv6'):
1214- return {
1215- 'bind_host': '::'
1216- }
1217+ return {'bind_host': '::'}
1218 else:
1219- return {
1220- 'bind_host': '0.0.0.0'
1221- }
1222+ return {'bind_host': '0.0.0.0'}
1223
1224
1225 class WorkerConfigContext(OSContextGenerator):
1226@@ -914,11 +971,42 @@
1227 except ImportError:
1228 apt_install('python-psutil', fatal=True)
1229 from psutil import NUM_CPUS
1230+
1231 return NUM_CPUS
1232
1233 def __call__(self):
1234- multiplier = config('worker-multiplier') or 1
1235- ctxt = {
1236- "workers": self.num_cpus * multiplier
1237- }
1238+ multiplier = config('worker-multiplier') or 0
1239+ ctxt = {"workers": self.num_cpus * multiplier}
1240+ return ctxt
1241+
1242+
1243+class ZeroMQContext(OSContextGenerator):
1244+ interfaces = ['zeromq-configuration']
1245+
1246+ def __call__(self):
1247+ ctxt = {}
1248+ if is_relation_made('zeromq-configuration', 'host'):
1249+ for rid in relation_ids('zeromq-configuration'):
1250+ for unit in related_units(rid):
1251+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1252+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1253+
1254+ return ctxt
1255+
1256+
1257+class NotificationDriverContext(OSContextGenerator):
1258+
1259+ def __init__(self, zmq_relation='zeromq-configuration',
1260+ amqp_relation='amqp'):
1261+ """
1262+ :param zmq_relation: Name of Zeromq relation to check
1263+ """
1264+ self.zmq_relation = zmq_relation
1265+ self.amqp_relation = amqp_relation
1266+
1267+ def __call__(self):
1268+ ctxt = {'notifications': 'False'}
1269+ if is_relation_made(self.amqp_relation):
1270+ ctxt['notifications'] = "True"
1271+
1272 return ctxt
1273
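The reworked `OSConfigFlagContext` above lets a charm expose several independent config-flag options (e.g. `n1kv-config-flags`) that each render under their own template flag. A standalone sketch of the mechanism — the `parse_config_flags` helper here is a simplified stand-in for charmhelpers' `config_flags_parser` (the real parser also copes with values that themselves contain commas), and `context_for` takes a plain dict instead of reading charm config:

```python
def parse_config_flags(config_flags):
    """Simplified stand-in for config_flags_parser: split a
    'k1=v1,k2=v2' string into a dict. Values containing commas,
    which the real parser supports, are not handled here."""
    flags = {}
    for pair in config_flags.split(','):
        key, _, value = pair.partition('=')
        flags[key.strip()] = value.strip()
    return flags


class OSConfigFlagContext(object):
    """Sketch showing how charm_flag/template_flag pair up."""

    def __init__(self, charm_flag='config-flags',
                 template_flag='user_config_flags'):
        self._charm_flag = charm_flag        # charm config option to read
        self._template_flag = template_flag  # key emitted into the context

    def context_for(self, charm_config):
        config_flags = charm_config.get(self._charm_flag)
        if not config_flags:
            return {}
        return {self._template_flag: parse_config_flags(config_flags)}
```

With `charm_flag='n1kv-config-flags'` and `template_flag='n1kv_user_config_flags'` this produces exactly the shape the new `cisco_plugins.ini` template iterates over.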
1274=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1275--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:41:41 +0000
1276+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-08 02:03:09 +0000
1277@@ -138,10 +138,25 @@
1278 relation_prefix='neutron',
1279 ssl_dir=NEUTRON_CONF_DIR)],
1280 'services': [],
1281- 'packages': [['neutron-plugin-cisco']],
1282+ 'packages': [[headers_package()] + determine_dkms_package(),
1283+ ['neutron-plugin-cisco']],
1284 'server_packages': ['neutron-server',
1285 'neutron-plugin-cisco'],
1286 'server_services': ['neutron-server']
1287+ },
1288+ 'Calico': {
1289+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1290+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1291+ 'contexts': [
1292+ context.SharedDBContext(user=config('neutron-database-user'),
1293+ database=config('neutron-database'),
1294+ relation_prefix='neutron',
1295+ ssl_dir=NEUTRON_CONF_DIR)],
1296+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1297+ 'packages': [[headers_package()] + determine_dkms_package(),
1298+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1299+ 'server_packages': ['neutron-server', 'calico-control'],
1300+ 'server_services': ['neutron-server']
1301 }
1302 }
1303 if release >= 'icehouse':
1304@@ -162,7 +177,8 @@
1305 elif manager == 'neutron':
1306 plugins = neutron_plugins()
1307 else:
1308- log('Error: Network manager does not support plugins.')
1309+ log("Network manager '%s' does not support plugins." % (manager),
1310+ level=ERROR)
1311 raise Exception
1312
1313 try:
1314
1315=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1316--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:03:50 +0000
1317+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-08 02:03:09 +0000
1318@@ -2,6 +2,7 @@
1319
1320 # Common python helper functions used for OpenStack charms.
1321 from collections import OrderedDict
1322+from functools import wraps
1323
1324 import subprocess
1325 import json
1326@@ -468,6 +469,14 @@
1327 return result.split('.')[0]
1328
1329
1330+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1331+ mm_map = {}
1332+ if os.path.isfile(mm_file):
1333+ with open(mm_file, 'r') as f:
1334+ mm_map = json.load(f)
1335+ return mm_map
1336+
1337+
1338 def sync_db_with_multi_ipv6_addresses(database, database_user,
1339 relation_prefix=None):
1340 hosts = get_ipv6_addr(dynamic_only=False)
1341@@ -484,3 +493,18 @@
1342
1343 for rid in relation_ids('shared-db'):
1344 relation_set(relation_id=rid, **kwargs)
1345+
1346+
1347+def os_requires_version(ostack_release, pkg):
1348+ """
1349+ Decorator for hook to specify minimum supported release
1350+ """
1351+ def wrap(f):
1352+ @wraps(f)
1353+ def wrapped_f(*args):
1354+ if os_release(pkg) < ostack_release:
1355+ raise Exception("This hook is not supported on releases"
1356+ " before %s" % ostack_release)
1357+ f(*args)
1358+ return wrapped_f
1359+ return wrap
1360
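The `os_requires_version` decorator added to utils.py relies on plain string comparison, which works because OpenStack release names are alphabetically ordered (essex < folsom < grizzly < havana < icehouse < juno). A standalone sketch of the same pattern — the `os_release` stub, the release values and the hook name are illustrative, not the charmhelpers implementation:

```python
from functools import wraps


def os_release(pkg):
    """Stub standing in for charmhelpers' os_release(); illustrative only."""
    return 'havana'


def os_requires_version(ostack_release, pkg):
    """Decorator: abort a hook on releases older than ostack_release."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            # Lexical comparison is safe: release names are alphabetical.
            if os_release(pkg) < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            f(*args)
        return wrapped_f
    return wrap


@os_requires_version('icehouse', 'nova-common')
def config_changed():
    pass
```

Here calling `config_changed()` raises, since the stubbed release 'havana' sorts before 'icehouse'.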
1361=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1362--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:41:41 +0000
1363+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-08 02:03:09 +0000
1364@@ -16,19 +16,18 @@
1365 from subprocess import (
1366 check_call,
1367 check_output,
1368- CalledProcessError
1369+ CalledProcessError,
1370 )
1371-
1372 from charmhelpers.core.hookenv import (
1373 relation_get,
1374 relation_ids,
1375 related_units,
1376 log,
1377+ DEBUG,
1378 INFO,
1379 WARNING,
1380- ERROR
1381+ ERROR,
1382 )
1383-
1384 from charmhelpers.core.host import (
1385 mount,
1386 mounts,
1387@@ -37,7 +36,6 @@
1388 service_running,
1389 umount,
1390 )
1391-
1392 from charmhelpers.fetch import (
1393 apt_install,
1394 )
1395@@ -56,69 +54,60 @@
1396
1397
1398 def install():
1399- ''' Basic Ceph client installation '''
1400+ """Basic Ceph client installation."""
1401 ceph_dir = "/etc/ceph"
1402 if not os.path.exists(ceph_dir):
1403 os.mkdir(ceph_dir)
1404+
1405 apt_install('ceph-common', fatal=True)
1406
1407
1408 def rbd_exists(service, pool, rbd_img):
1409- ''' Check to see if a RADOS block device exists '''
1410+ """Check to see if a RADOS block device exists."""
1411 try:
1412- out = check_output(['rbd', 'list', '--id', service,
1413- '--pool', pool])
1414+ out = check_output(['rbd', 'list', '--id', service, '--pool', pool])
1415 except CalledProcessError:
1416 return False
1417- else:
1418- return rbd_img in out
1419+
1420+ return rbd_img in out
1421
1422
1423 def create_rbd_image(service, pool, image, sizemb):
1424- ''' Create a new RADOS block device '''
1425- cmd = [
1426- 'rbd',
1427- 'create',
1428- image,
1429- '--size',
1430- str(sizemb),
1431- '--id',
1432- service,
1433- '--pool',
1434- pool
1435- ]
1436+ """Create a new RADOS block device."""
1437+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
1438+ '--pool', pool]
1439 check_call(cmd)
1440
1441
1442 def pool_exists(service, name):
1443- ''' Check to see if a RADOS pool already exists '''
1444+ """Check to see if a RADOS pool already exists."""
1445 try:
1446 out = check_output(['rados', '--id', service, 'lspools'])
1447 except CalledProcessError:
1448 return False
1449- else:
1450- return name in out
1451+
1452+ return name in out
1453
1454
1455 def get_osds(service):
1456- '''
1457- Return a list of all Ceph Object Storage Daemons
1458- currently in the cluster
1459- '''
1460+ """Return a list of all Ceph Object Storage Daemons currently in the
1461+ cluster.
1462+ """
1463 version = ceph_version()
1464 if version and version >= '0.56':
1465 return json.loads(check_output(['ceph', '--id', service,
1466 'osd', 'ls', '--format=json']))
1467- else:
1468- return None
1469-
1470-
1471-def create_pool(service, name, replicas=2):
1472- ''' Create a new RADOS pool '''
1473+
1474+ return None
1475+
1476+
1477+def create_pool(service, name, replicas=3):
1478+ """Create a new RADOS pool."""
1479 if pool_exists(service, name):
1480 log("Ceph pool {} already exists, skipping creation".format(name),
1481 level=WARNING)
1482 return
1483+
1484 # Calculate the number of placement groups based
1485 # on upstream recommended best practices.
1486 osds = get_osds(service)
1487@@ -128,27 +117,19 @@
1488 # NOTE(james-page): Default to 200 for older ceph versions
1489 # which don't support OSD query from cli
1490 pgnum = 200
1491- cmd = [
1492- 'ceph', '--id', service,
1493- 'osd', 'pool', 'create',
1494- name, str(pgnum)
1495- ]
1496+
1497+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1498 check_call(cmd)
1499- cmd = [
1500- 'ceph', '--id', service,
1501- 'osd', 'pool', 'set', name,
1502- 'size', str(replicas)
1503- ]
1504+
1505+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1506+ str(replicas)]
1507 check_call(cmd)
1508
1509
1510 def delete_pool(service, name):
1511- ''' Delete a RADOS pool from ceph '''
1512- cmd = [
1513- 'ceph', '--id', service,
1514- 'osd', 'pool', 'delete',
1515- name, '--yes-i-really-really-mean-it'
1516- ]
1517+ """Delete a RADOS pool from ceph."""
1518+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
1519+ '--yes-i-really-really-mean-it']
1520 check_call(cmd)
1521
1522
1523@@ -161,44 +142,43 @@
1524
1525
1526 def create_keyring(service, key):
1527- ''' Create a new Ceph keyring containing key'''
1528+ """Create a new Ceph keyring containing key."""
1529 keyring = _keyring_path(service)
1530 if os.path.exists(keyring):
1531- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
1532+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
1533 return
1534- cmd = [
1535- 'ceph-authtool',
1536- keyring,
1537- '--create-keyring',
1538- '--name=client.{}'.format(service),
1539- '--add-key={}'.format(key)
1540- ]
1541+
1542+ cmd = ['ceph-authtool', keyring, '--create-keyring',
1543+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
1544 check_call(cmd)
1545- log('ceph: Created new ring at %s.' % keyring, level=INFO)
1546+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
1547
1548
1549 def create_key_file(service, key):
1550- ''' Create a file containing key '''
1551+ """Create a file containing key."""
1552 keyfile = _keyfile_path(service)
1553 if os.path.exists(keyfile):
1554- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
1555+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
1556 return
1557+
1558 with open(keyfile, 'w') as fd:
1559 fd.write(key)
1560- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
1561+
1562+ log('Created new keyfile at %s.' % keyfile, level=INFO)
1563
1564
1565 def get_ceph_nodes():
1566- ''' Query named relation 'ceph' to detemine current nodes '''
1567+ """Query named relation 'ceph' to determine current nodes."""
1568 hosts = []
1569 for r_id in relation_ids('ceph'):
1570 for unit in related_units(r_id):
1571 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
1572+
1573 return hosts
1574
1575
1576 def configure(service, key, auth, use_syslog):
1577- ''' Perform basic configuration of Ceph '''
1578+ """Perform basic configuration of Ceph."""
1579 create_keyring(service, key)
1580 create_key_file(service, key)
1581 hosts = get_ceph_nodes()
1582@@ -211,17 +191,17 @@
1583
1584
1585 def image_mapped(name):
1586- ''' Determine whether a RADOS block device is mapped locally '''
1587+ """Determine whether a RADOS block device is mapped locally."""
1588 try:
1589 out = check_output(['rbd', 'showmapped'])
1590 except CalledProcessError:
1591 return False
1592- else:
1593- return name in out
1594+
1595+ return name in out
1596
1597
1598 def map_block_storage(service, pool, image):
1599- ''' Map a RADOS block device for local use '''
1600+ """Map a RADOS block device for local use."""
1601 cmd = [
1602 'rbd',
1603 'map',
1604@@ -235,31 +215,32 @@
1605
1606
1607 def filesystem_mounted(fs):
1608- ''' Determine whether a filesytems is already mounted '''
1609+ """Determine whether a filesystem is already mounted."""
1610 return fs in [f for f, m in mounts()]
1611
1612
1613 def make_filesystem(blk_device, fstype='ext4', timeout=10):
1614- ''' Make a new filesystem on the specified block device '''
1615+ """Make a new filesystem on the specified block device."""
1616 count = 0
1617 e_noent = os.errno.ENOENT
1618 while not os.path.exists(blk_device):
1619 if count >= timeout:
1620- log('ceph: gave up waiting on block device %s' % blk_device,
1621+ log('Gave up waiting on block device %s' % blk_device,
1622 level=ERROR)
1623 raise IOError(e_noent, os.strerror(e_noent), blk_device)
1624- log('ceph: waiting for block device %s to appear' % blk_device,
1625- level=INFO)
1626+
1627+ log('Waiting for block device %s to appear' % blk_device,
1628+ level=DEBUG)
1629 count += 1
1630 time.sleep(1)
1631 else:
1632- log('ceph: Formatting block device %s as filesystem %s.' %
1633+ log('Formatting block device %s as filesystem %s.' %
1634 (blk_device, fstype), level=INFO)
1635 check_call(['mkfs', '-t', fstype, blk_device])
1636
1637
1638 def place_data_on_block_device(blk_device, data_src_dst):
1639- ''' Migrate data in data_src_dst to blk_device and then remount '''
1640+ """Migrate data in data_src_dst to blk_device and then remount."""
1641 # mount block device into /mnt
1642 mount(blk_device, '/mnt')
1643 # copy data to /mnt
1644@@ -279,8 +260,8 @@
1645
1646 # TODO: re-use
1647 def modprobe(module):
1648- ''' Load a kernel module and configure for auto-load on reboot '''
1649- log('ceph: Loading kernel module', level=INFO)
1650+ """Load a kernel module and configure for auto-load on reboot."""
1651+ log('Loading kernel module', level=INFO)
1652 cmd = ['modprobe', module]
1653 check_call(cmd)
1654 with open('/etc/modules', 'r+') as modules:
1655@@ -289,7 +270,7 @@
1656
1657
1658 def copy_files(src, dst, symlinks=False, ignore=None):
1659- ''' Copy files from src to dst '''
1660+ """Copy files from src to dst."""
1661 for item in os.listdir(src):
1662 s = os.path.join(src, item)
1663 d = os.path.join(dst, item)
1664@@ -300,9 +281,9 @@
1665
1666
1667 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
1668- blk_device, fstype, system_services=[]):
1669- """
1670- NOTE: This function must only be called from a single service unit for
1671+ blk_device, fstype, system_services=[],
1672+ replicas=3):
1673+ """NOTE: This function must only be called from a single service unit for
1674 the same rbd_img otherwise data loss will occur.
1675
1676 Ensures given pool and RBD image exists, is mapped to a block device,
1677@@ -316,15 +297,16 @@
1678 """
1679 # Ensure pool, RBD image, RBD mappings are in place.
1680 if not pool_exists(service, pool):
1681- log('ceph: Creating new pool {}.'.format(pool))
1682- create_pool(service, pool)
1683+ log('Creating new pool {}.'.format(pool), level=INFO)
1684+ create_pool(service, pool, replicas=replicas)
1685
1686 if not rbd_exists(service, pool, rbd_img):
1687- log('ceph: Creating RBD image ({}).'.format(rbd_img))
1688+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
1689 create_rbd_image(service, pool, rbd_img, sizemb)
1690
1691 if not image_mapped(rbd_img):
1692- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
1693+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
1694+ level=INFO)
1695 map_block_storage(service, pool, rbd_img)
1696
1697 # make file system
1698@@ -339,42 +321,44 @@
1699
1700 for svc in system_services:
1701 if service_running(svc):
1702- log('ceph: Stopping services {} prior to migrating data.'
1703- .format(svc))
1704+ log('Stopping services {} prior to migrating data.'
1705+ .format(svc), level=DEBUG)
1706 service_stop(svc)
1707
1708 place_data_on_block_device(blk_device, mount_point)
1709
1710 for svc in system_services:
1711- log('ceph: Starting service {} after migrating data.'
1712- .format(svc))
1713+ log('Starting service {} after migrating data.'
1714+ .format(svc), level=DEBUG)
1715 service_start(svc)
1716
1717
1718 def ensure_ceph_keyring(service, user=None, group=None):
1719- '''
1720- Ensures a ceph keyring is created for a named service
1721- and optionally ensures user and group ownership.
1722+ """Ensures a ceph keyring is created for a named service and optionally
1723+ ensures user and group ownership.
1724
1725 Returns False if no ceph key is available in relation state.
1726- '''
1727+ """
1728 key = None
1729 for rid in relation_ids('ceph'):
1730 for unit in related_units(rid):
1731 key = relation_get('key', rid=rid, unit=unit)
1732 if key:
1733 break
1734+
1735 if not key:
1736 return False
1737+
1738 create_keyring(service=service, key=key)
1739 keyring = _keyring_path(service)
1740 if user and group:
1741 check_call(['chown', '%s.%s' % (user, group), keyring])
1742+
1743 return True
1744
1745
1746 def ceph_version():
1747- ''' Retrieve the local version of ceph '''
1748+ """Retrieve the local version of ceph."""
1749 if os.path.exists('/usr/bin/ceph'):
1750 cmd = ['ceph', '-v']
1751 output = check_output(cmd)
1752
1753=== modified file 'hooks/charmhelpers/core/hookenv.py'
1754--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:03:50 +0000
1755+++ hooks/charmhelpers/core/hookenv.py 2014-12-08 02:03:09 +0000
1756@@ -214,6 +214,12 @@
1757 except KeyError:
1758 return (self._prev_dict or {})[key]
1759
1760+ def keys(self):
1761+ prev_keys = []
1762+ if self._prev_dict is not None:
1763+ prev_keys = self._prev_dict.keys()
1764+ return list(set(prev_keys + dict.keys(self)))
1765+
1766 def load_previous(self, path=None):
1767 """Load previous copy of config from disk.
1768
1769
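The new `Config.keys()` override merges the keys of the previously saved config with the current ones, so iteration still sees options present in the last hook run. A minimal stand-in showing the behaviour (simplified from the charmhelpers class, and written to run under Python 3 for illustration while the charm itself targets Python 2; `_prev_dict` is normally populated by `load_previous()`):

```python
class Config(dict):
    """Simplified stand-in for charmhelpers.core.hookenv.Config."""

    def __init__(self, *args, **kw):
        super(Config, self).__init__(*args, **kw)
        # In the real class this is filled in by load_previous().
        self._prev_dict = None

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            prev_keys = list(self._prev_dict.keys())
        # Union of previous and current keys, deduplicated.
        return list(set(prev_keys + list(dict.keys(self))))


cfg = Config({'vip': '10.0.0.1'})
cfg._prev_dict = {'vip': '10.0.0.1', 'old-flag': 'x'}
```

`cfg.keys()` now includes 'old-flag' even though it is absent from the current dict.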
1770=== modified file 'hooks/charmhelpers/core/host.py'
1771--- hooks/charmhelpers/core/host.py 2014-10-06 21:03:50 +0000
1772+++ hooks/charmhelpers/core/host.py 2014-12-08 02:03:09 +0000
1773@@ -6,13 +6,13 @@
1774 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
1775
1776 import os
1777+import re
1778 import pwd
1779 import grp
1780 import random
1781 import string
1782 import subprocess
1783 import hashlib
1784-import shutil
1785 from contextlib import contextmanager
1786
1787 from collections import OrderedDict
1788@@ -317,7 +317,13 @@
1789 ip_output = (line for line in ip_output if line)
1790 for line in ip_output:
1791 if line.split()[1].startswith(int_type):
1792- interfaces.append(line.split()[1].replace(":", ""))
1793+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
1794+ if matched:
1795+ interface = matched.groups()[0]
1796+ else:
1797+ interface = line.split()[1].replace(":", "")
1798+ interfaces.append(interface)
1799+
1800 return interfaces
1801
1802
1803
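The host.py change above handles VLAN sub-interfaces on bonds, which `ip -o link` renders as e.g. `bond0.100@bond0:`; without the regex the returned name would keep the `@bond0` suffix. A standalone sketch of the matching logic — the sample output lines are illustrative, not captured from a real host:

```python
import re

# Sample lines in the style of `ip -o link show` output (illustrative).
ip_output = [
    "2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state UP",
    "3: bond0.100@bond0: <BROADCAST,MULTICAST,UP> mtu 1500 state UP",
]


def list_nics(int_type):
    interfaces = []
    for line in ip_output:
        if line.split()[1].startswith(int_type):
            # Strip the '@parent' suffix from bond VLAN sub-interfaces.
            matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
            if matched:
                interface = matched.groups()[0]
            else:
                interface = line.split()[1].replace(":", "")
            interfaces.append(interface)
    return interfaces
```

So `list_nics('bond')` yields `bond0.100` rather than `bond0.100@bond0`.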
1804=== modified file 'hooks/charmhelpers/core/services/__init__.py'
1805--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:02 +0000
1806+++ hooks/charmhelpers/core/services/__init__.py 2014-12-08 02:03:09 +0000
1807@@ -1,2 +1,2 @@
1808-from .base import *
1809-from .helpers import *
1810+from .base import * # NOQA
1811+from .helpers import * # NOQA
1812
1813=== modified file 'hooks/charmhelpers/fetch/__init__.py'
1814--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:03:50 +0000
1815+++ hooks/charmhelpers/fetch/__init__.py 2014-12-08 02:03:09 +0000
1816@@ -72,6 +72,7 @@
1817 FETCH_HANDLERS = (
1818 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
1819 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
1820+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
1821 )
1822
1823 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
1824@@ -218,6 +219,7 @@
1825 pocket for the release.
1826 'cloud:' may be used to activate official cloud archive pockets,
1827 such as 'cloud:icehouse'
1828+ 'distro' may be used as a noop
1829
1830 @param key: A key to be added to the system's APT keyring and used
1831 to verify the signatures on packages. Ideally, this should be an
1832@@ -251,8 +253,10 @@
1833 release = lsb_release()['DISTRIB_CODENAME']
1834 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1835 apt.write(PROPOSED_POCKET.format(release))
1836+ elif source == 'distro':
1837+ pass
1838 else:
1839- raise SourceConfigError("Unknown source: {!r}".format(source))
1840+ log("Unknown source: {!r}".format(source))
1841
1842 if key:
1843 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
1844
1845=== added file 'hooks/charmhelpers/fetch/giturl.py'
1846--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
1847+++ hooks/charmhelpers/fetch/giturl.py 2014-12-08 02:03:09 +0000
1848@@ -0,0 +1,44 @@
1849+import os
1850+from charmhelpers.fetch import (
1851+ BaseFetchHandler,
1852+ UnhandledSource
1853+)
1854+from charmhelpers.core.host import mkdir
1855+
1856+try:
1857+ from git import Repo
1858+except ImportError:
1859+ from charmhelpers.fetch import apt_install
1860+ apt_install("python-git")
1861+ from git import Repo
1862+
1863+
1864+class GitUrlFetchHandler(BaseFetchHandler):
1865+ """Handler for git branches via generic and github URLs"""
1866+ def can_handle(self, source):
1867+ url_parts = self.parse_url(source)
1868+ # TODO(mattyw): no support for ssh git@ yet
1869+ if url_parts.scheme not in ('http', 'https', 'git'):
1870+ return False
1871+ else:
1872+ return True
1873+
1874+ def clone(self, source, dest, branch):
1875+ if not self.can_handle(source):
1876+ raise UnhandledSource("Cannot handle {}".format(source))
1877+
1878+ repo = Repo.clone_from(source, dest)
1879+ repo.git.checkout(branch)
1880+
1881+ def install(self, source, branch="master"):
1882+ url_parts = self.parse_url(source)
1883+ branch_name = url_parts.path.strip("/").split("/")[-1]
1884+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1885+ branch_name)
1886+ if not os.path.exists(dest_dir):
1887+ mkdir(dest_dir, perms=0755)
1888+ try:
1889+ self.clone(source, dest_dir, branch)
1890+ except OSError as e:
1891+ raise UnhandledSource(e.strerror)
1892+ return dest_dir
1893
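The new `GitUrlFetchHandler.install()` names its destination after the last path segment of the source URL, under `$CHARM_DIR/fetched`. A standalone sketch of just that derivation — the `dest_dir_for` helper and its default `charm_dir` are illustrative, not part of the handler's API:

```python
import os

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2, as used by the charm


def dest_dir_for(source, charm_dir='/var/lib/juju/charm'):
    """Mirror how install() picks a clone destination: the last URL
    path segment becomes the directory name under <charm_dir>/fetched."""
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip("/").split("/")[-1]
    return os.path.join(charm_dir, "fetched", branch_name)
```

Note this means two sources ending in the same segment would clone into the same directory, which is why `install()` only creates the directory when it does not already exist.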
1894=== added file 'templates/icehouse/cisco_plugins.ini'
1895--- templates/icehouse/cisco_plugins.ini 1970-01-01 00:00:00 +0000
1896+++ templates/icehouse/cisco_plugins.ini 2014-12-08 02:03:09 +0000
1897@@ -0,0 +1,43 @@
1898+###############################################################################
1899+# [ WARNING ]
1900+# Configuration file maintained by Juju. Local changes may be overwritten.
1901+###############################################################################
1902+[cisco_plugins]
1903+
1904+[cisco]
1905+
1906+[cisco_n1k]
1907+integration_bridge = br-int
1908+default_policy_profile = default-pp
1909+network_node_policy_profile = default-pp
1910+{% if openstack_release != 'havana' -%}
1911+http_timeout = 120
1912+# (BoolOpt) Specify whether plugin should attempt to synchronize with the VSM
1913+# when neutron is started.
1914+# Default value: False, indicating no full sync will be performed.
1915+#
1916+enable_sync_on_start = False
1917+{% endif -%}
1918+restrict_policy_profiles = {{ restrict_policy_profiles }}
1919+{% if n1kv_user_config_flags -%}
1920+{% for key, value in n1kv_user_config_flags.iteritems() -%}
1921+{{ key }} = {{ value }}
1922+{% endfor -%}
1923+{% endif -%}
1924+
1925+[CISCO_PLUGINS]
1926+vswitch_plugin = neutron.plugins.cisco.n1kv.n1kv_neutron_plugin.N1kvNeutronPluginV2
1927+
1928+[N1KV:{{ vsm_ip }}]
1929+password = {{ vsm_password }}
1930+username = {{ vsm_username }}
1931+
1932+{% include "parts/section-database" %}
1933+
1934+[securitygroup]
1935+{% if neutron_security_groups -%}
1936+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
1937+enable_security_group = True
1938+{% else -%}
1939+firewall_driver = neutron.agent.firewall.NoopFirewallDriver
1940+{% endif -%}
1941
1942=== modified file 'templates/icehouse/nova.conf'
1943--- templates/icehouse/nova.conf 2014-12-03 23:31:19 +0000
1944+++ templates/icehouse/nova.conf 2014-12-08 02:03:09 +0000
1945@@ -76,6 +76,14 @@
1946 {% endif -%}
1947 {% endif -%}
1948
1949+{% if neutron_plugin and neutron_plugin == 'n1kv' -%}
1950+libvirt_use_virtio_for_bridges = True
1951+firewall_driver = nova.virt.firewall.NoopFirewallDriver
1952+{% if external_network -%}
1953+default_floating_pool = {{ external_network }}
1954+{% endif -%}
1955+{% endif -%}
1956+
1957 {% if network_manager_config -%}
1958 {% for key, value in network_manager_config.iteritems() -%}
1959 {{ key }} = {{ value }}
