Merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk into lp:charms/trusty/plumgrid-director

Proposed by Bilal Baqar
Status: Superseded
Proposed branch: lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Merge into: lp:charms/trusty/plumgrid-director
Diff against target: 1249 lines (+433/-277)
15 files modified
Makefile (+1/-1)
config.yaml (+36/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+2/-2)
hooks/pg_dir_context.py (+36/-26)
hooks/pg_dir_hooks.py (+83/-23)
hooks/pg_dir_utils.py (+225/-49)
templates/kilo/00-pg.conf (+2/-0)
templates/kilo/hosts (+1/-1)
templates/kilo/ifcs.conf (+1/-1)
templates/kilo/nginx.conf (+17/-0)
tests/files/plumgrid-director-dense.yaml (+0/-133)
tests/test.yaml (+0/-2)
unit_tests/test_pg_dir_context.py (+22/-14)
unit_tests/test_pg_dir_hooks.py (+4/-24)
unit_tests/test_pg_dir_utils.py (+3/-1)
To merge this branch: bzr merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Reviewer Review Type Date Requested Status
Charles Butler (community) Needs Fixing
Review Queue (community) automated testing Needs Fixing
Review via email: mp+282353@code.launchpad.net

This proposal has been superseded by a proposal from 2016-05-18.

24. By Bilal Baqar

Merge: configuration files are now written in the install hook so that the config-changed hook can run to completion

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2161/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/2141/

review: Needs Fixing (automated testing)
25. By Bilal Baqar

Merge: set the LXC container's hostname to the hostname of the node

Revision history for this message
Charles Butler (lazypower) wrote :

Greetings Bilal,

This branch doesn't appear to apply cleanly. Can you take a look and resolve the merge conflicts?

review: Needs Fixing
Revision history for this message
Bilal Baqar (bbaqar) wrote :

Hey Charles,

Thanks for taking the time to review the merge proposal. I'll deal with the conflicts.

26. By Bilal Baqar

[Merge] - Fixing lint

27. By Bilal Baqar

Made the following changes:
1. Reordered file and module imports
2. Sorted director IPs
3. Added the unit FQDN to /etc/hosts of plumgrid-lxc
4. Load PLUMgrid-specific iptables rules on install
5. Added a temporary upgrade hook to load the iptables rules
6. restart_pg() now uses stop_pg()
7. Made the iptables rules persistent

28. By Bilal Baqar

Improved the config-changed hook to perform only the steps required by the config options that actually changed

29. By Bilal Baqar

OPSVM Changes - Ticket: [SOL-830]
- Get the OPSVM IP from the charm config
- Make OPSVM-specific changes
- Set the OPSVM IP in the relation with edge/gateway
- Cleaned up code in various functions
- Added a restart_on_change decorator that restarts the plumgrid service only when a configuration file has actually changed
- Removed the restart of the plumgrid service when only two directors are available
- Fixed unit tests accordingly

30. By Bilal Baqar

Adding status messages in the charm - Ticket: [SOL-949]

31. By Bilal Baqar

Merge: Liberty/Mitaka support

32. By Bilal Baqar

Merge - Mitaka changes
- Created a new relation with neutron-api-plumgrid
- Get PLUMgrid credentials from the charm config
- nginx.conf changes for the middleware

33. By Bilal Baqar

5.1 changes
- Added configure-pg-sources
- Updated templates

34. By Junaid Ali

L3 fabric changes
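
For context, the fabric-interfaces option added to config.yaml in this branch is a JSON object passed as a string, keyed by node hostname with an optional DEFAULT entry. A hedged illustration follows; the hostnames are placeholders, and the parsing mirrors get_fabric_interface() in hooks/pg_dir_utils.py:

    import json

    # Illustrative only: hostnames are placeholders, not taken from this branch.
    fabric_interfaces = '{"director-1": "eth1", "director-2": "eth1", "DEFAULT": "eth2"}'
    mapping = json.loads(fabric_interfaces)
    # A node uses its own entry if present, otherwise the DEFAULT interface.
    node_interface = mapping.get('director-1', mapping.get('DEFAULT'))
    print(node_interface)  # -> eth1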

35. By Junaid Ali

Changes:
    - Removed the default value for the management interface, as
      'juju-br0' has been changed to 'br-eth0' in Juju 2.0
    - Updated the get_mgmt_interface method
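
A hedged sketch of the updated get_mgmt_interface described above (revision 35 is unmerged, so its code does not appear in the diff below); it assumes that, with the 'juju-br0' default gone, the method falls back to the interface carrying the unit's private-address, reusing the interface_exists helper added in this branch:

    from charmhelpers.core.hookenv import config, log, unit_get
    from charmhelpers.contrib.network.ip import get_iface_from_addr
    from pg_dir_utils import interface_exists  # helper added in this branch

    def get_mgmt_interface():
        # Assumed behaviour: honour mgmt-interface when it is set and exists,
        # otherwise fall back to the interface of the unit's private-address.
        mgmt_interface = config('mgmt-interface')
        if mgmt_interface and interface_exists(mgmt_interface):
            return mgmt_interface
        if mgmt_interface:
            log('Configured mgmt-interface %s does not exist' % mgmt_interface)
        return get_iface_from_addr(unit_get('private-address'))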

Unmerged revisions

35. By Junaid Ali

Changes:
    - Removed the default value for the management interface, as
      'juju-br0' has been changed to 'br-eth0' in Juju 2.0
    - Updated the get_mgmt_interface method

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-08-24 16:18:48 +0000
3+++ Makefile 2016-04-08 08:30:24 +0000
4@@ -7,7 +7,7 @@
5 netaddr jinja2
6
7 lint: virtualenv
8- .venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests
9+ .venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests --ignore E402
10 @charm proof
11
12 unit_test: virtualenv
13
14=== modified file 'config.yaml'
15--- config.yaml 2015-08-16 19:04:53 +0000
16+++ config.yaml 2016-04-08 08:30:24 +0000
17@@ -7,6 +7,28 @@
18 default: 'null'
19 type: string
20 description: Public SSH key of PLUMgrid LCM which is running PG-Tools.
21+ mgmt-interface:
22+ type: string
23+ default: 'juju-br0'
24+ description: The interface connected to PLUMgrid Managment network.
25+ os-data-network:
26+ type: string
27+ default:
28+ description: |
29+ The IP address and netmask of the OpenStack Data network (e.g.,
30+ 192.168.0.0/24)
31+ .
32+ This network will be used for tenant network traffic in overlay
33+ networks.
34+ fabric-interfaces:
35+ default: 'MANAGEMENT'
36+ type: string
37+ description: |
38+ Interfaces that will provide fabric connectivity on the director nodes.
39+ Provided in form of json in a string. These interfaces have to be connected
40+ to the os-data-network specified in the config. Default value is MANAGEMENT which
41+ will configure the management interface as the fabric interface on each
42+ director.
43 network-device-mtu:
44 type: string
45 default: '1580'
46@@ -19,7 +41,21 @@
47 default: null
48 type: string
49 description: Provide the respective keys of the install sources.
50+ plumgrid-build:
51+ default: 'latest'
52+ type: string
53+ description: |
54+ Provide the build version of PLUMgrid packages that needs to be installed
55+ iovisor-build:
56+ default: 'latest'
57+ type: string
58+ description: |
59+ Provide the build version of iovisor package that needs to be installed
60 plumgrid-license-key:
61 default: null
62 type: string
63 description: Provide the PLUMgrid ONS License key.
64+ opsvm-ip:
65+ default: 127.0.0.1
66+ type: string
67+ description: IP address of the PLUMgrid Operations VM Management interface.
68
69=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
70--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-07-29 18:07:31 +0000
71+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-04-08 08:30:24 +0000
72@@ -204,8 +204,8 @@
73 database=config('database'),
74 ssl_dir=NEUTRON_CONF_DIR)],
75 'services': [],
76- 'packages': [['plumgrid-lxc'],
77- ['iovisor-dkms']],
78+ 'packages': ['plumgrid-lxc',
79+ 'iovisor-dkms'],
80 'server_packages': ['neutron-server',
81 'neutron-plugin-plumgrid'],
82 'server_services': ['neutron-server']
83
84=== modified file 'hooks/pg_dir_context.py'
85--- hooks/pg_dir_context.py 2015-08-24 16:18:48 +0000
86+++ hooks/pg_dir_context.py 2016-04-08 08:30:24 +0000
87@@ -3,6 +3,9 @@
88 # This file contains the class that generates context
89 # for PLUMgrid template files.
90
91+import re
92+from charmhelpers.contrib.openstack import context
93+from charmhelpers.contrib.openstack.utils import get_host_ip
94 from charmhelpers.core.hookenv import (
95 config,
96 unit_get,
97@@ -12,12 +15,15 @@
98 related_units,
99 relation_get,
100 )
101-from charmhelpers.contrib.openstack import context
102-from charmhelpers.contrib.openstack.utils import get_host_ip
103-from charmhelpers.contrib.network.ip import get_address_in_network
104+from charmhelpers.contrib.network.ip import (
105+ is_ip,
106+ get_address_in_network,
107+)
108
109-import re
110-from socket import gethostname as get_unit_hostname
111+from socket import (
112+ gethostname,
113+ getfqdn
114+)
115
116
117 def _pg_dir_ips():
118@@ -25,12 +31,11 @@
119 Inspects plumgrid-director peer relation and returns the
120 ips of the peer directors
121 '''
122- pg_dir_ips = []
123- for rid in relation_ids('director'):
124- for unit in related_units(rid):
125- rdata = relation_get(rid=rid, unit=unit)
126- pg_dir_ips.append(rdata['private-address'])
127- return pg_dir_ips
128+ return [get_host_ip(rdata['private-address'])
129+ for rid in relation_ids("director")
130+ for rdata in
131+ (relation_get(rid=rid, unit=unit) for unit in related_units(rid))
132+ if rdata]
133
134
135 class PGDirContext(context.NeutronContext):
136@@ -71,24 +76,29 @@
137 pg_dir_ips = _pg_dir_ips()
138 pg_dir_ips.append(str(get_address_in_network(network=None,
139 fallback=get_host_ip(unit_get('private-address')))))
140+ pg_dir_ips = sorted(pg_dir_ips)
141 pg_ctxt['director_ips'] = pg_dir_ips
142- pg_dir_ips_string = ''
143- single_ip = True
144- for ip in pg_dir_ips:
145- if single_ip:
146- pg_dir_ips_string = str(ip)
147- single_ip = False
148- else:
149- pg_dir_ips_string = pg_dir_ips_string + ',' + str(ip)
150- pg_ctxt['director_ips_string'] = pg_dir_ips_string
151- pg_ctxt['virtual_ip'] = conf['plumgrid-virtual-ip']
152- pg_ctxt['pg_hostname'] = "pg-director"
153- from pg_dir_utils import check_interface_type
154- interface_type = check_interface_type()
155- pg_ctxt['interface'] = interface_type
156- pg_ctxt['label'] = get_unit_hostname()
157+ dir_count = len(pg_dir_ips)
158+ pg_ctxt['director_ips_string'] = (str(pg_dir_ips[0]) + ',' +
159+ str(pg_dir_ips[1]) + ',' +
160+ str(pg_dir_ips[2])
161+ if dir_count == 3 else
162+ str(pg_dir_ips[0]))
163+ PG_VIP = conf['plumgrid-virtual-ip']
164+ if is_ip(PG_VIP):
165+ pg_ctxt['virtual_ip'] = PG_VIP
166+ else:
167+ raise ValueError('Invalid PLUMgrid Virtual IP Provided')
168+ unit_hostname = gethostname()
169+ pg_ctxt['pg_hostname'] = unit_hostname
170+ pg_ctxt['pg_fqdn'] = getfqdn()
171+ from pg_dir_utils import get_mgmt_interface, get_fabric_interface
172+ pg_ctxt['interface'] = get_mgmt_interface()
173+ pg_ctxt['fabric_interface'] = get_fabric_interface()
174+ pg_ctxt['label'] = unit_hostname
175 pg_ctxt['fabric_mode'] = 'host'
176 virtual_ip_array = re.split('\.', conf['plumgrid-virtual-ip'])
177 pg_ctxt['virtual_router_id'] = virtual_ip_array[3]
178+ pg_ctxt['opsvm_ip'] = conf['opsvm-ip']
179
180 return pg_ctxt
181
182=== modified file 'hooks/pg_dir_hooks.py'
183--- hooks/pg_dir_hooks.py 2015-08-24 16:18:48 +0000
184+++ hooks/pg_dir_hooks.py 2016-04-08 08:30:24 +0000
185@@ -7,22 +7,28 @@
186
187 import sys
188 import time
189+from charmhelpers.core.host import service_running
190+from charmhelpers.contrib.network.ip import is_ip
191+
192 from charmhelpers.core.hookenv import (
193 Hooks,
194 UnregisteredHookError,
195 log,
196 config,
197+ relation_set,
198+ relation_ids,
199+ status_set
200 )
201
202 from charmhelpers.fetch import (
203 apt_install,
204- apt_purge,
205 configure_sources,
206 )
207
208 from pg_dir_utils import (
209 register_configs,
210 restart_pg,
211+ restart_map,
212 stop_pg,
213 determine_packages,
214 load_iovisor,
215@@ -30,6 +36,10 @@
216 ensure_mtu,
217 add_lcm_key,
218 post_pg_license,
219+ fabric_interface_changed,
220+ load_iptables,
221+ restart_on_change,
222+ director_cluster_ready
223 )
224
225 hooks = Hooks()
226@@ -41,22 +51,39 @@
227 '''
228 Install hook is run when the charm is first deployed on a node.
229 '''
230+ status_set('maintenance', 'Executing pre-install')
231+ load_iptables()
232 configure_sources(update=True)
233+ status_set('maintenance', 'Installing apt packages')
234 pkgs = determine_packages()
235 for pkg in pkgs:
236 apt_install(pkg, options=['--force-yes'], fatal=True)
237 load_iovisor()
238 ensure_mtu()
239- add_lcm_key()
240+ CONFIGS.write_all()
241
242
243 @hooks.hook('director-relation-joined')
244+@restart_on_change(restart_map())
245 def dir_joined():
246 '''
247 This hook is run when a unit of director is added.
248 '''
249- CONFIGS.write_all()
250- restart_pg()
251+ if director_cluster_ready():
252+ ensure_mtu()
253+ CONFIGS.write_all()
254+
255+
256+@hooks.hook('plumgrid-relation-joined')
257+def plumgrid_joined(relation_id=None):
258+ '''
259+ This hook is run when relation with edge or gateway is created.
260+ '''
261+ opsvm_ip = config('opsvm-ip')
262+ if not is_ip(opsvm_ip):
263+ raise ValueError('Incorrect OPSVM IP specified')
264+ else:
265+ relation_set(relation_id=relation_id, opsvm_ip=opsvm_ip)
266
267
268 @hooks.hook('config-changed')
269@@ -65,22 +92,41 @@
270 This hook is run when a config parameter is changed.
271 It also runs on node reboot.
272 '''
273- if post_pg_license():
274- log("PLUMgrid License Posted")
275- return 1
276- if add_lcm_key():
277- log("PLUMgrid LCM Key added")
278- return 1
279- stop_pg()
280- configure_sources(update=True)
281- pkgs = determine_packages()
282- for pkg in pkgs:
283- apt_install(pkg, options=['--force-yes'], fatal=True)
284- load_iovisor()
285+ charm_config = config()
286+ if charm_config.changed('lcm-ssh-key'):
287+ if add_lcm_key():
288+ log("PLUMgrid LCM Key added")
289+ if charm_config.changed('plumgrid-license-key'):
290+ if post_pg_license():
291+ log("PLUMgrid License Posted")
292+ if charm_config.changed('fabric-interfaces'):
293+ if not fabric_interface_changed():
294+ log("Fabric interface already set")
295+ else:
296+ stop_pg()
297+ if charm_config.changed('plumgrid-virtual-ip'):
298+ CONFIGS.write_all()
299+ stop_pg()
300+ if (charm_config.changed('install_sources') or
301+ charm_config.changed('plumgrid-build') or
302+ charm_config.changed('install_keys') or
303+ charm_config.changed('iovisor-build')):
304+ status_set('maintenance', 'Upgrading apt packages')
305+ stop_pg()
306+ configure_sources(update=True)
307+ pkgs = determine_packages()
308+ for pkg in pkgs:
309+ apt_install(pkg, options=['--force-yes'], fatal=True)
310+ remove_iovisor()
311+ load_iovisor()
312+ if charm_config.changed('opsvm-ip'):
313+ for rid in relation_ids('plumgrid'):
314+ plumgrid_joined(rid)
315+ stop_pg()
316 ensure_mtu()
317- add_lcm_key()
318 CONFIGS.write_all()
319- restart_pg()
320+ if not service_running('plumgrid'):
321+ restart_pg()
322
323
324 @hooks.hook('start')
325@@ -93,20 +139,34 @@
326 while (count < 10):
327 if post_pg_license():
328 break
329- count = count + 1
330+ count += 1
331 time.sleep(15)
332
333
334+@hooks.hook('upgrade-charm')
335+@restart_on_change(restart_map())
336+def upgrade_charm():
337+ '''
338+ This hook is run when the charm is upgraded
339+ '''
340+ ensure_mtu()
341+ CONFIGS.write_all()
342+
343+
344 @hooks.hook('stop')
345 def stop():
346 '''
347 This hook is run when the charm is destroyed.
348 '''
349 stop_pg()
350- remove_iovisor()
351- pkgs = determine_packages()
352- for pkg in pkgs:
353- apt_purge(pkg, fatal=False)
354+
355+
356+@hooks.hook('update-status')
357+def update_status():
358+ if service_running('plumgrid'):
359+ status_set('active', 'Unit is ready')
360+ else:
361+ status_set('blocked', 'plumgrid service not running')
362
363
364 def main():
365
366=== modified file 'hooks/pg_dir_utils.py'
367--- hooks/pg_dir_utils.py 2015-08-24 16:18:48 +0000
368+++ hooks/pg_dir_utils.py 2016-04-08 08:30:24 +0000
369@@ -2,45 +2,60 @@
370
371 # This file contains functions used by the hooks to deploy PLUMgrid Director.
372
373+import pg_dir_context
374+import subprocess
375+import time
376+import os
377+import json
378+from collections import OrderedDict
379+from socket import gethostname as get_unit_hostname
380+from copy import deepcopy
381 from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute
382-from copy import deepcopy
383+from charmhelpers.contrib.openstack import templating
384+from charmhelpers.contrib.storage.linux.ceph import modprobe
385 from charmhelpers.core.hookenv import (
386 log,
387 config,
388+ unit_get,
389+ status_set
390 )
391-from charmhelpers.contrib.openstack import templating
392-from charmhelpers.core.host import set_nic_mtu
393-from collections import OrderedDict
394-from charmhelpers.contrib.storage.linux.ceph import modprobe
395-from charmhelpers.contrib.openstack.utils import (
396- os_release,
397+from charmhelpers.contrib.network.ip import (
398+ get_iface_from_addr,
399+ get_bridges,
400+ get_bridge_nics,
401+ is_ip,
402+ is_address_in_network,
403+ get_iface_addr
404 )
405 from charmhelpers.core.host import (
406 service_start,
407 service_stop,
408-)
409-import pg_dir_context
410-import subprocess
411-import time
412-import os
413-import re
414-import json
415+ service_running,
416+ path_hash,
417+ set_nic_mtu
418+)
419+from charmhelpers.fetch import (
420+ apt_cache,
421+ apt_install
422+)
423+from charmhelpers.contrib.openstack.utils import (
424+ os_release,
425+)
426
427 LXC_CONF = '/etc/libvirt/lxc.conf'
428 TEMPLATES = 'templates/'
429 PG_LXC_DATA_PATH = '/var/lib/libvirt/filesystems/plumgrid-data'
430 PG_LXC_PATH = '/var/lib/libvirt/filesystems/plumgrid'
431-
432 PG_CONF = '%s/conf/pg/plumgrid.conf' % PG_LXC_DATA_PATH
433 PG_KA_CONF = '%s/conf/etc/keepalived.conf' % PG_LXC_DATA_PATH
434 PG_DEF_CONF = '%s/conf/pg/nginx.conf' % PG_LXC_DATA_PATH
435 PG_HN_CONF = '%s/conf/etc/hostname' % PG_LXC_DATA_PATH
436 PG_HS_CONF = '%s/conf/etc/hosts' % PG_LXC_DATA_PATH
437 PG_IFCS_CONF = '%s/conf/pg/ifcs.conf' % PG_LXC_DATA_PATH
438+OPS_CONF = '%s/conf/etc/00-pg.conf' % PG_LXC_DATA_PATH
439 AUTH_KEY_PATH = '%s/root/.ssh/authorized_keys' % PG_LXC_DATA_PATH
440 TEMP_LICENSE_FILE = '/tmp/license'
441
442-
443 BASE_RESOURCE_MAP = OrderedDict([
444 (PG_KA_CONF, {
445 'services': ['plumgrid'],
446@@ -62,6 +77,10 @@
447 'services': ['plumgrid'],
448 'contexts': [pg_dir_context.PGDirContext()],
449 }),
450+ (OPS_CONF, {
451+ 'services': ['plumgrid'],
452+ 'contexts': [pg_dir_context.PGDirContext()],
453+ }),
454 (PG_IFCS_CONF, {
455 'services': [],
456 'contexts': [pg_dir_context.PGDirContext()],
457@@ -74,7 +93,25 @@
458 Returns list of packages required by PLUMgrid director as specified
459 in the neutron_plugins dictionary in charmhelpers.
460 '''
461- return neutron_plugin_attribute('plumgrid', 'packages', 'neutron')
462+ pkgs = []
463+ tag = 'latest'
464+ for pkg in neutron_plugin_attribute('plumgrid', 'packages', 'neutron'):
465+ if 'plumgrid' in pkg:
466+ tag = config('plumgrid-build')
467+ elif pkg == 'iovisor-dkms':
468+ tag = config('iovisor-build')
469+
470+ if tag == 'latest':
471+ pkgs.append(pkg)
472+ else:
473+ if tag in [i.ver_str for i in apt_cache()[pkg].version_list]:
474+ pkgs.append('%s=%s' % (pkg, tag))
475+ else:
476+ error_msg = \
477+ "Build version '%s' for package '%s' not available" \
478+ % (tag, pkg)
479+ raise ValueError(error_msg)
480+ return pkgs
481
482
483 def register_configs(release=None):
484@@ -111,11 +148,20 @@
485 '''
486 Stops and Starts PLUMgrid service after flushing iptables.
487 '''
488- service_stop('plumgrid')
489- time.sleep(2)
490- _exec_cmd(cmd=['iptables', '-F'])
491+ stop_pg()
492 service_start('plumgrid')
493- time.sleep(5)
494+ time.sleep(3)
495+ if not service_running('plumgrid'):
496+ if service_running('libvirt-bin'):
497+ raise ValueError("plumgrid service couldn't be started")
498+ else:
499+ if service_start('libvirt-bin'):
500+ time.sleep(3)
501+ if not service_running('plumgrid'):
502+ raise ValueError("plumgrid service couldn't be started")
503+ else:
504+ raise ValueError("libvirt-bin service couldn't be started")
505+ status_set('active', 'Unit is ready')
506
507
508 def stop_pg():
509@@ -138,39 +184,95 @@
510 Removes iovisor kernel module.
511 '''
512 _exec_cmd(cmd=['rmmod', 'iovisor'],
513- error_msg='Error Loading IOVisor Kernel Module')
514-
515-
516-def check_interface_type():
517- '''
518- Checks the interface. Support added for AWS deployments. There are 2
519- possible interfaces "juju-br0" and "eth0". The default being juju-br0
520- '''
521- log("Checking Interface Type")
522- default_interface = "juju-br0"
523- AWS_interface = "eth0"
524- shell_output = subprocess.check_output(['brctl', 'show', 'juju-br0'])
525- output = re.split(' |\n|\t', shell_output)
526- if output[10] == '':
527- return AWS_interface
528- else:
529- return default_interface
530+ error_msg='Error Removing IOVisor Kernel Module')
531+ time.sleep(1)
532+
533+
534+def interface_exists(interface):
535+ '''
536+ Checks if interface exists on node.
537+ '''
538+ try:
539+ subprocess.check_call(['ip', 'link', 'show', interface],
540+ stdout=open(os.devnull, 'w'),
541+ stderr=subprocess.STDOUT)
542+ except subprocess.CalledProcessError:
543+ return False
544+ return True
545+
546+
547+def get_mgmt_interface():
548+ '''
549+ Returns the managment interface.
550+ '''
551+ mgmt_interface = config('mgmt-interface')
552+ if interface_exists(mgmt_interface):
553+ return mgmt_interface
554+ else:
555+ log('Provided managment interface %s does not exist'
556+ % mgmt_interface)
557+ return get_iface_from_addr(unit_get('private-address'))
558+
559+
560+def fabric_interface_changed():
561+ '''
562+ Returns true if interface for node changed.
563+ '''
564+ fabric_interface = get_fabric_interface()
565+ try:
566+ with open(PG_IFCS_CONF, 'r') as ifcs:
567+ for line in ifcs:
568+ if 'fabric_core' in line:
569+ if line.split()[0] == fabric_interface:
570+ return False
571+ except IOError:
572+ return True
573+ return True
574+
575+
576+def get_fabric_interface():
577+ '''
578+ Returns the fabric interface.
579+ '''
580+ fabric_interfaces = config('fabric-interfaces')
581+ if fabric_interfaces == 'MANAGEMENT':
582+ return get_mgmt_interface()
583+ else:
584+ try:
585+ all_fabric_interfaces = json.loads(fabric_interfaces)
586+ except ValueError:
587+ raise ValueError('Invalid json provided for fabric interfaces')
588+ hostname = get_unit_hostname()
589+ if hostname in all_fabric_interfaces:
590+ node_fabric_interface = all_fabric_interfaces[hostname]
591+ elif 'DEFAULT' in all_fabric_interfaces:
592+ node_fabric_interface = all_fabric_interfaces['DEFAULT']
593+ else:
594+ raise ValueError('No fabric interface provided for node')
595+ if interface_exists(node_fabric_interface):
596+ if is_address_in_network(config('os-data-network'),
597+ get_iface_addr(node_fabric_interface)[0]):
598+ return node_fabric_interface
599+ else:
600+ raise ValueError('Fabric interface not in fabric network')
601+ else:
602+ log('Provided fabric interface %s does not exist'
603+ % node_fabric_interface)
604+ raise ValueError('Provided fabric interface does not exist')
605+ return node_fabric_interface
606
607
608 def ensure_mtu():
609 '''
610 Ensures required MTU of the underlying networking of the node.
611 '''
612- log("Changing MTU of juju-br0 and all attached interfaces")
613 interface_mtu = config('network-device-mtu')
614- interface_type = check_interface_type()
615- if interface_type == "juju-br0":
616- cmd = subprocess.check_output(["brctl", "show", interface_type])
617- words = cmd.split()
618- for word in words:
619- if 'eth' in word:
620- set_nic_mtu(word, interface_mtu)
621- set_nic_mtu(interface_type, interface_mtu)
622+ fabric_interface = get_fabric_interface()
623+ if fabric_interface in get_bridges():
624+ attached_interfaces = get_bridge_nics(fabric_interface)
625+ for interface in attached_interfaces:
626+ set_nic_mtu(interface, interface_mtu)
627+ set_nic_mtu(fabric_interface, interface_mtu)
628
629
630 def _exec_cmd(cmd=None, error_msg='Command exited with ERRORs', fatal=False):
631@@ -229,6 +331,8 @@
632 log('PLUMgrid License Key not specified')
633 return 0
634 PG_VIP = config('plumgrid-virtual-ip')
635+ if not is_ip(PG_VIP):
636+ raise ValueError('Invalid IP Provided')
637 LICENSE_POST_PATH = 'https://%s/0/tenant_manager/license_key' % PG_VIP
638 LICENSE_GET_PATH = 'https://%s/0/tenant_manager/licenses' % PG_VIP
639 PG_CURL = '%s/opt/pg/scripts/pg_curl.sh' % PG_LXC_PATH
640@@ -239,8 +343,7 @@
641 'plumgrid:plumgrid',
642 LICENSE_POST_PATH,
643 '-d',
644- json.dumps(license)
645- ]
646+ json.dumps(license)]
647 licence_get_cmd = [PG_CURL, '-u', 'plumgrid:plumgrid', LICENSE_GET_PATH]
648 try:
649 old_license = subprocess.check_output(licence_get_cmd)
650@@ -254,3 +357,76 @@
651 log('No change in PLUMgrid License')
652 return 0
653 return 1
654+
655+
656+def load_iptables():
657+ '''
658+ Loads iptables rules to allow all PLUMgrid communication.
659+ '''
660+ network = get_cidr_from_iface(get_mgmt_interface())
661+ if network:
662+ _exec_cmd(['sudo', 'iptables', '-A', 'INPUT', '-p', 'tcp',
663+ '-j', 'ACCEPT', '-s', network, '-d',
664+ network, '-m', 'state', '--state', 'NEW'])
665+ _exec_cmd(['sudo', 'iptables', '-A', 'INPUT', '-p', 'udp', '-j',
666+ 'ACCEPT', '-s', network, '-d', network,
667+ '-m', 'state', '--state', 'NEW'])
668+ _exec_cmd(['sudo', 'iptables', '-I', 'INPUT', '-s', network,
669+ '-d', '224.0.0.18/32', '-j', 'ACCEPT'])
670+ _exec_cmd(['sudo', 'iptables', '-I', 'INPUT', '-p', 'vrrp', '-j',
671+ 'ACCEPT'])
672+ _exec_cmd(['sudo', 'iptables', '-A', 'INPUT', '-p', 'tcp', '-j',
673+ 'ACCEPT', '-d', config('plumgrid-virtual-ip'), '-m',
674+ 'state', '--state', 'NEW'])
675+ apt_install('iptables-persistent')
676+
677+
678+def get_cidr_from_iface(interface):
679+ '''
680+ Determines Network CIDR from interface.
681+ '''
682+ if not interface:
683+ return None
684+ apt_install('ohai')
685+ try:
686+ os_info = subprocess.check_output(['ohai', '-l', 'fatal'])
687+ except OSError:
688+ log('Unable to get operating system information')
689+ return None
690+ try:
691+ os_info_json = json.loads(os_info)
692+ except ValueError:
693+ log('Unable to determine network')
694+ return None
695+ device = os_info_json['network']['interfaces'].get(interface)
696+ if device is not None:
697+ if device.get('routes'):
698+ routes = device['routes']
699+ for net in routes:
700+ if 'scope' in net:
701+ return net.get('destination')
702+ else:
703+ return None
704+ else:
705+ return None
706+
707+
708+def director_cluster_ready():
709+ dirs_count = len(pg_dir_context._pg_dir_ips())
710+ return True if dirs_count == 2 else False
711+
712+
713+def restart_on_change(restart_map):
714+ """
715+ Restart services based on configuration files changing
716+ """
717+ def wrap(f):
718+ def wrapped_f(*args, **kwargs):
719+ checksums = {path: path_hash(path) for path in restart_map}
720+ f(*args, **kwargs)
721+ for path in restart_map:
722+ if path_hash(path) != checksums[path]:
723+ restart_pg()
724+ break
725+ return wrapped_f
726+ return wrap
727
728=== added symlink 'hooks/plumgrid-relation-joined'
729=== target is u'pg_dir_hooks.py'
730=== added symlink 'hooks/update-status'
731=== target is u'pg_dir_hooks.py'
732=== added symlink 'hooks/upgrade-charm'
733=== target is u'pg_dir_hooks.py'
734=== added file 'templates/kilo/00-pg.conf'
735--- templates/kilo/00-pg.conf 1970-01-01 00:00:00 +0000
736+++ templates/kilo/00-pg.conf 2016-04-08 08:30:24 +0000
737@@ -0,0 +1,2 @@
738+$template ls_json,"{{'{'}}{{'%'}}timestamp:::date-rfc3339,jsonf:@timestamp%,%source:::jsonf:@source_host%,%msg:::json%}"
739+:syslogtag,isequal,"pg:" @{{ opsvm_ip }}:6000;ls_json
740
741=== modified file 'templates/kilo/hosts'
742--- templates/kilo/hosts 2015-07-29 18:11:14 +0000
743+++ templates/kilo/hosts 2016-04-08 08:30:24 +0000
744@@ -1,5 +1,5 @@
745 127.0.0.1 localhost
746-127.0.1.1 {{ pg_hostname }}
747+127.0.1.1 {{ pg_fqdn }} {{ pg_hostname }}
748
749 # The following lines are desirable for IPv6 capable hosts
750 ::1 ip6-localhost ip6-loopback
751
752=== modified file 'templates/kilo/ifcs.conf'
753--- templates/kilo/ifcs.conf 2015-07-29 18:11:14 +0000
754+++ templates/kilo/ifcs.conf 2016-04-08 08:30:24 +0000
755@@ -1,2 +1,2 @@
756-{{ interface }} = fabric_core host
757+{{ fabric_interface }} = fabric_core host
758
759
760=== modified file 'templates/kilo/nginx.conf'
761--- templates/kilo/nginx.conf 2015-08-10 10:06:33 +0000
762+++ templates/kilo/nginx.conf 2016-04-08 08:30:24 +0000
763@@ -12,6 +12,10 @@
764 server {{ virtual_ip }}:3000;
765 }
766
767+upstream pgMW {
768+ server {{ opsvm_ip }}:4000;
769+}
770+
771 map $http_upgrade $connection_upgrade {
772 default upgrade;
773 '' close;
774@@ -58,6 +62,19 @@
775 proxy_set_header Host $host;
776 }
777
778+ location /mwv0 {
779+ proxy_pass http://pgMW;
780+ proxy_redirect off;
781+ proxy_http_version 1.1;
782+ proxy_set_header Upgrade $http_upgrade;
783+ proxy_set_header Connection "upgrade";
784+ proxy_set_header Host $host;
785+ }
786+
787+ location /cloudApex/ {
788+ index index.html;
789+ }
790+
791 location /vtap/ {
792 alias /opt/pg/vtap;
793 }
794
795=== added file 'tests/files/plumgrid-director-dense.yaml'
796--- tests/files/plumgrid-director-dense.yaml 1970-01-01 00:00:00 +0000
797+++ tests/files/plumgrid-director-dense.yaml 2016-04-08 08:30:24 +0000
798@@ -0,0 +1,133 @@
799+test:
800+ series: 'trusty'
801+ relations:
802+ - - mysql
803+ - keystone
804+ - - nova-cloud-controller
805+ - mysql
806+ - - nova-cloud-controller
807+ - rabbitmq-server
808+ - - nova-cloud-controller
809+ - glance
810+ - - nova-cloud-controller
811+ - keystone
812+ - - nova-compute
813+ - nova-cloud-controller
814+ - - nova-compute
815+ - mysql
816+ - - nova-compute
817+ - rabbitmq-server
818+ - - nova-compute
819+ - glance
820+ - - glance
821+ - mysql
822+ - - glance
823+ - keystone
824+ - - glance
825+ - cinder
826+ - - mysql
827+ - cinder
828+ - - cinder
829+ - rabbitmq-server
830+ - - cinder
831+ - nova-cloud-controller
832+ - - cinder
833+ - keystone
834+ - - openstack-dashboard
835+ - keystone
836+ - - neutron-api
837+ - mysql
838+ - - neutron-api
839+ - keystone
840+ - - neutron-api
841+ - rabbitmq-server
842+ - - neutron-api
843+ - nova-cloud-controller
844+ - - neutron-api
845+ - neutron-api-plumgrid
846+ - - neutron-api-plumgrid
847+ - plumgrid-edge
848+ - - plumgrid-director
849+ - plumgrid-edge
850+ - - nova-compute
851+ - plumgrid-edge
852+ - - plumgrid-director
853+ - plumgrid-gateway
854+ services:
855+ mysql:
856+ charm: cs:trusty/mysql
857+ num_units: 1
858+ to: 'lxc:plumgrid-director=0'
859+ rabbitmq-server:
860+ charm: cs:trusty/rabbitmq-server
861+ num_units: 1
862+ to: 'lxc:plumgrid-director=0'
863+ keystone:
864+ charm: cs:trusty/keystone
865+ num_units: 1
866+ options:
867+ admin-password: plumgrid
868+ openstack-origin: cloud:trusty-kilo
869+ to: 'lxc:plumgrid-director=0'
870+ nova-cloud-controller:
871+ charm: cs:trusty/nova-cloud-controller
872+ num_units: 1
873+ options:
874+ console-access-protocol: novnc
875+ network-manager: Neutron
876+ openstack-origin: cloud:trusty-kilo
877+ quantum-security-groups: 'yes'
878+ to: 'lxc:plumgrid-director=0'
879+ glance:
880+ charm: cs:trusty/glance
881+ num_units: 1
882+ options:
883+ openstack-origin: cloud:trusty-kilo
884+ to: 'lxc:plumgrid-director=0'
885+ openstack-dashboard:
886+ charm: cs:trusty/openstack-dashboard
887+ num_units: 1
888+ options:
889+ openstack-origin: cloud:trusty-kilo
890+ to: 'lxc:plumgrid-director=0'
891+ cinder:
892+ charm: cs:trusty/cinder
893+ num_units: 1
894+ options:
895+ openstack-origin: cloud:trusty-kilo
896+ to: 'lxc:plumgrid-director=0'
897+ neutron-api:
898+ charm: cs:~plumgrid-team/trusty/neutron-api
899+ num_units: 1
900+ options:
901+ neutron-plugin: plumgrid
902+ neutron-security-groups: false
903+ openstack-origin: cloud:trusty-kilo
904+ plumgrid-password: plumgrid
905+ plumgrid-username: plumgrid
906+ plumgrid-virtual-ip: 192.168.100.250
907+ to: 'lxc:plumgrid-director=0'
908+ neutron-api-plumgrid:
909+ charm: cs:~plumgrid-team/trusty/neutron-api-plumgrid
910+ options:
911+ enable-metadata: True
912+ plumgrid-director:
913+ charm: cs:~plumgrid-team/trusty/plumgrid-director
914+ num_units: 1
915+ constraints: "root-disk=30G mem=8G cpu-cores=8"
916+ options:
917+ plumgrid-virtual-ip: 192.168.100.250
918+ nova-compute:
919+ charm: cs:~plumgrid-team/trusty/nova-compute
920+ num_units: 1
921+ options:
922+ enable-live-migration: true
923+ enable-resize: true
924+ migration-auth-type: ssh
925+ openstack-origin: cloud:trusty-kilo
926+ to: '0'
927+ plumgrid-edge:
928+ charm: cs:~plumgrid-team/trusty/plumgrid-edge
929+ plumgrid-gateway:
930+ charm: cs:~plumgrid-team/trusty/plumgrid-gateway
931+ num_units: 1
932
933=== removed file 'tests/files/plumgrid-director-dense.yaml'
934--- tests/files/plumgrid-director-dense.yaml 2015-09-01 17:31:26 +0000
935+++ tests/files/plumgrid-director-dense.yaml 1970-01-01 00:00:00 +0000
936@@ -1,133 +0,0 @@
937-test:
938- series: 'trusty'
939- relations:
940- - - mysql
941- - keystone
942- - - nova-cloud-controller
943- - mysql
944- - - nova-cloud-controller
945- - rabbitmq-server
946- - - nova-cloud-controller
947- - glance
948- - - nova-cloud-controller
949- - keystone
950- - - nova-compute
951- - nova-cloud-controller
952- - - nova-compute
953- - mysql
954- - - nova-compute
955- - rabbitmq-server
956- - - nova-compute
957- - glance
958- - - glance
959- - mysql
960- - - glance
961- - keystone
962- - - glance
963- - cinder
964- - - mysql
965- - cinder
966- - - cinder
967- - rabbitmq-server
968- - - cinder
969- - nova-cloud-controller
970- - - cinder
971- - keystone
972- - - openstack-dashboard
973- - keystone
974- - - neutron-api
975- - mysql
976- - - neutron-api
977- - keystone
978- - - neutron-api
979- - rabbitmq-server
980- - - neutron-api
981- - nova-cloud-controller
982- - - neutron-api
983- - neutron-api-plumgrid
984- - - neutron-api-plumgrid
985- - plumgrid-edge
986- - - plumgrid-director
987- - plumgrid-edge
988- - - nova-compute
989- - plumgrid-edge
990- - - plumgrid-director
991- - plumgrid-gateway
992- services:
993- mysql:
994- charm: cs:trusty/mysql
995- num_units: 1
996- to: 'lxc:plumgrid-director=0'
997- rabbitmq-server:
998- charm: cs:trusty/rabbitmq-server
999- num_units: 1
1000- to: 'lxc:plumgrid-director=0'
1001- keystone:
1002- charm: cs:trusty/keystone
1003- num_units: 1
1004- options:
1005- admin-password: plumgrid
1006- openstack-origin: cloud:trusty-kilo
1007- to: 'lxc:plumgrid-director=0'
1008- nova-cloud-controller:
1009- charm: cs:trusty/nova-cloud-controller
1010- num_units: 1
1011- options:
1012- console-access-protocol: novnc
1013- network-manager: Neutron
1014- openstack-origin: cloud:trusty-kilo
1015- quantum-security-groups: 'yes'
1016- to: 'lxc:plumgrid-director=0'
1017- glance:
1018- charm: cs:trusty/glance
1019- num_units: 1
1020- options:
1021- openstack-origin: cloud:trusty-kilo
1022- to: 'lxc:plumgrid-director=0'
1023- openstack-dashboard:
1024- charm: cs:trusty/openstack-dashboard
1025- num_units: 1
1026- options:
1027- openstack-origin: cloud:trusty-kilo
1028- to: 'lxc:plumgrid-director=0'
1029- cinder:
1030- charm: cs:trusty/cinder
1031- num_units: 1
1032- options:
1033- openstack-origin: cloud:trusty-kilo
1034- to: 'lxc:plumgrid-director=0'
1035- neutron-api:
1036- charm: cs:~plumgrid-team/trusty/neutron-api
1037- num_units: 1
1038- options:
1039- neutron-plugin: plumgrid
1040- neutron-security-groups: false
1041- openstack-origin: cloud:trusty-kilo
1042- plumgrid-password: plumgrid
1043- plumgrid-username: plumgrid
1044- plumgrid-virtual-ip: 192.168.100.250
1045- to: 'lxc:plumgrid-director=0'
1046- neutron-api-plumgrid:
1047- charm: cs:~plumgrid-team/trusty/neutron-api-plumgrid
1048- options:
1049- enable-metadata: True
1050- plumgrid-director:
1051- charm: cs:~plumgrid-team/trusty/plumgrid-director
1052- num_units: 1
1053- constraints: "root-disk=30G mem=8G cpu-cores=8"
1054- options:
1055- plumgrid-virtual-ip: 192.168.100.250
1056- nova-compute:
1057- charm: cs:~plumgrid-team/trusty/nova-compute
1058- num_units: 1
1059- options:
1060- enable-live-migration: true
1061- enable-resize: true
1062- migration-auth-type: ssh
1063- openstack-origin: cloud:trusty-kilo
1064- to: '0'
1065- plumgrid-edge:
1066- charm: cs:~plumgrid-team/trusty/plumgrid-edge
1067- plumgrid-gateway:
1068- charm: cs:~plumgrid-team/trusty/plumgrid-gateway
1069- num_units: 1
1070
1071=== added file 'tests/test.yaml'
1072--- tests/test.yaml 1970-01-01 00:00:00 +0000
1073+++ tests/test.yaml 2016-04-08 08:30:24 +0000
1074@@ -0,0 +1,2 @@
1075+makefile:
1076+ - lint
1077
1078=== removed file 'tests/test.yaml'
1079--- tests/test.yaml 2015-09-01 17:31:26 +0000
1080+++ tests/test.yaml 1970-01-01 00:00:00 +0000
1081@@ -1,2 +0,0 @@
1082-makefile:
1083- - lint
1084
1085=== modified file 'unit_tests/test_pg_dir_context.py'
1086--- unit_tests/test_pg_dir_context.py 2015-08-24 16:18:48 +0000
1087+++ unit_tests/test_pg_dir_context.py 2016-04-08 08:30:24 +0000
1088@@ -8,7 +8,8 @@
1089 'config',
1090 'unit_get',
1091 'get_host_ip',
1092- 'get_unit_hostname',
1093+ 'gethostname',
1094+ 'getfqdn'
1095 ]
1096
1097
1098@@ -45,19 +46,21 @@
1099 'neutron_plugin_attribute')
1100 @patch.object(charmhelpers.contrib.openstack.context, 'unit_private_ip')
1101 @patch.object(context, '_pg_dir_ips')
1102- @patch.object(utils, 'check_interface_type')
1103- def test_neutroncc_context_api_rel(self, _int_type, _pg_dir_ips,
1104- _unit_priv_ip, _npa, _ens_pkgs,
1105- _save_ff, _https, _is_clus,
1106- _unit_get, _config, _runits, _rids,
1107- _rget):
1108+ @patch.object(utils, 'get_mgmt_interface')
1109+ @patch.object(utils, 'get_fabric_interface')
1110+ def test_neutroncc_context_api_rel(self, _fabric_int, _mgmt_int,
1111+ _pg_dir_ips, _unit_priv_ip, _npa,
1112+ _ens_pkgs, _save_ff, _https,
1113+ _is_clus, _unit_get, _config,
1114+ _runits, _rids, _rget):
1115 def mock_npa(plugin, section, manager):
1116 if section == "driver":
1117 return "neutron.randomdriver"
1118 if section == "config":
1119 return "neutron.randomconfig"
1120
1121- config = {'plumgrid-virtual-ip': "192.168.100.250"}
1122+ config = {'plumgrid-virtual-ip': "192.168.100.250",
1123+ 'opsvm-ip': '127.0.0.1'}
1124
1125 def mock_config(key=None):
1126 if key:
1127@@ -70,10 +73,12 @@
1128 _npa.side_effect = mock_npa
1129 _unit_get.return_value = '192.168.100.201'
1130 _unit_priv_ip.return_value = '192.168.100.201'
1131- self.get_unit_hostname.return_value = 'node0'
1132+ self.gethostname.return_value = 'node0'
1133+ self.getfqdn.return_value = 'node0.maas'
1134 self.get_host_ip.return_value = '192.168.100.201'
1135 _pg_dir_ips.return_value = ['192.168.100.202', '192.168.100.203']
1136- _int_type.return_value = 'juju-br0'
1137+ _mgmt_int.return_value = 'juju-br0'
1138+ _fabric_int.return_value = 'juju-br0'
1139 napi_ctxt = context.PGDirContext()
1140 expect = {
1141 'config': 'neutron.randomconfig',
1142@@ -84,14 +89,17 @@
1143 'neutron_security_groups': None,
1144 'neutron_url': 'https://None:9696',
1145 'virtual_ip': '192.168.100.250',
1146- 'pg_hostname': 'pg-director',
1147+ 'pg_hostname': 'node0',
1148+ 'pg_fqdn': 'node0.maas',
1149 'interface': 'juju-br0',
1150+ 'fabric_interface': 'juju-br0',
1151 'label': 'node0',
1152 'fabric_mode': 'host',
1153 'virtual_router_id': '250',
1154- 'director_ips': ['192.168.100.202', '192.168.100.203',
1155- '192.168.100.201'],
1156+ 'director_ips': ['192.168.100.201', '192.168.100.202',
1157+ '192.168.100.203'],
1158 'director_ips_string':
1159- '192.168.100.202,192.168.100.203,192.168.100.201',
1160+ '192.168.100.201,192.168.100.202,192.168.100.203',
1161+ 'opsvm_ip': '127.0.0.1',
1162 }
1163 self.assertEquals(expect, napi_ctxt())
1164
1165=== modified file 'unit_tests/test_pg_dir_hooks.py'
1166--- unit_tests/test_pg_dir_hooks.py 2015-08-24 16:18:48 +0000
1167+++ unit_tests/test_pg_dir_hooks.py 2016-04-08 08:30:24 +0000
1168@@ -1,5 +1,7 @@
1169 from mock import MagicMock, patch, call
1170+
1171 from test_utils import CharmTestCase
1172+
1173 with patch('charmhelpers.core.hookenv.config') as config:
1174 config.return_value = 'neutron'
1175 import pg_dir_utils as utils
1176@@ -18,7 +20,6 @@
1177 TO_PATCH = [
1178 'remove_iovisor',
1179 'apt_install',
1180- 'apt_purge',
1181 'CONFIGS',
1182 'log',
1183 'configure_sources',
1184@@ -29,7 +30,8 @@
1185 'add_lcm_key',
1186 'determine_packages',
1187 'post_pg_license',
1188- 'config'
1189+ 'config',
1190+ 'load_iptables'
1191 ]
1192 NEUTRON_CONF_DIR = "/etc/neutron"
1193
1194@@ -58,33 +60,11 @@
1195 ])
1196 self.load_iovisor.assert_called_with()
1197 self.ensure_mtu.assert_called_with()
1198- self.add_lcm_key.assert_called_with()
1199-
1200- def test_config_changed_hook(self):
1201- _pkgs = ['plumgrid-lxc', 'iovisor-dkms']
1202- self.add_lcm_key.return_value = 0
1203- self.post_pg_license.return_value = 0
1204- self.determine_packages.return_value = [_pkgs]
1205- self._call_hook('config-changed')
1206- self.stop_pg.assert_called_with()
1207- self.configure_sources.assert_called_with(update=True)
1208- self.apt_install.assert_has_calls([
1209- call(_pkgs, fatal=True,
1210- options=['--force-yes']),
1211- ])
1212- self.load_iovisor.assert_called_with()
1213- self.ensure_mtu.assert_called_with()
1214-
1215- self.CONFIGS.write_all.assert_called_with()
1216- self.restart_pg.assert_called_with()
1217
1218 def test_start(self):
1219 self._call_hook('start')
1220 self.test_config.set('plumgrid-license-key', None)
1221
1222 def test_stop(self):
1223- _pkgs = ['plumgrid-lxc', 'iovisor-dkms']
1224 self._call_hook('stop')
1225 self.stop_pg.assert_called_with()
1226- self.remove_iovisor.assert_called_with()
1227- self.determine_packages.return_value = _pkgs
1228
1229=== modified file 'unit_tests/test_pg_dir_utils.py'
1230--- unit_tests/test_pg_dir_utils.py 2015-08-24 16:18:48 +0000
1231+++ unit_tests/test_pg_dir_utils.py 2016-04-08 08:30:24 +0000
1232@@ -55,7 +55,8 @@
1233 nutils.PG_DEF_CONF,
1234 nutils.PG_HN_CONF,
1235 nutils.PG_HS_CONF,
1236- nutils.PG_IFCS_CONF]
1237+ nutils.PG_IFCS_CONF,
1238+ nutils.OPS_CONF]
1239 self.assertItemsEqual(_regconfs.configs, confs)
1240
1241 def test_resource_map(self):
1242@@ -73,6 +74,7 @@
1243 (nutils.PG_DEF_CONF, ['plumgrid']),
1244 (nutils.PG_HN_CONF, ['plumgrid']),
1245 (nutils.PG_HS_CONF, ['plumgrid']),
1246+ (nutils.OPS_CONF, ['plumgrid']),
1247 (nutils.PG_IFCS_CONF, []),
1248 ])
1249 self.assertEqual(expect, _restart_map)
