Merge lp:~freyes/charms/trusty/nova-cloud-controller/single-nova-consoleauth into lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next

Proposed by Felipe Reyes
Status: Superseded
Proposed branch: lp:~freyes/charms/trusty/nova-cloud-controller/single-nova-consoleauth
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next
Diff against target: 276 lines (+195/-28)
3 files modified
config.yaml (+11/-0)
hooks/nova_cc_hooks.py (+60/-2)
unit_tests/test_nova_cc_hooks.py (+124/-26)
To merge this branch: bzr merge lp:~freyes/charms/trusty/nova-cloud-controller/single-nova-consoleauth
Reviewer Review Type Date Requested Status
Edward Hope-Morley Needs Fixing
OpenStack Charmers Pending
Jorge Niedbalski Pending
Billy Olsen Pending
Review via email: mp+253309@code.launchpad.net

This proposal supersedes a proposal from 2015-03-06.

This proposal has been superseded by a proposal from 2015-03-25.

Description of the change

Dear OpenStack Charmers,

This patch configures a new pacemaker resource that allows a single instance of the nova-consoleauth service to run when console-access-protocol is defined.

Specifically, nova-consoleauth is configured to run on the node where the VIP is allocated; with this patch this becomes the default behaviour. This is needed for environments where memcached cannot (or should not) be related to nova-cloud-controller to share the authorization tokens across all running instances.
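
For reference, when single-nova-consoleauth is enabled (the new config option, default true) and console-access-protocol is set, ha_joined() adds the following entries to the data sent over the ha relation (illustrative excerpt taken from the diff below):

    # Pin a single nova-consoleauth instance to the node holding the VIP group
    resources['res_nova_consoleauth'] = 'ocf:openstack:nova-consoleauth'
    resource_params['res_nova_consoleauth'] = 'op monitor interval="5s"'
    init_services['res_nova_consoleauth'] = 'nova-consoleauth'
    colocations['vip_consoleauth'] = 'inf: res_nova_consoleauth grp_nova_vips'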

Example output of 'sudo crm_mon -1' with this patch applied:

Last updated: Mon Mar 2 14:46:43 2015
Last change: Mon Mar 2 14:44:53 2015 via crm_resource on juju-freyes-machine-18
Stack: corosync
Current DC: juju-freyes-machine-18 (168103083) - partition with quorum
Version: 1.1.10-42f2063
3 Nodes configured
5 Resources configured

Online: [ juju-freyes-machine-16 juju-freyes-machine-17 juju-freyes-machine-18 ]

 Resource Group: grp_nova_vips
     res_nova_eth0_vip (ocf::heartbeat:IPaddr2): Started juju-freyes-machine-16
 Clone Set: cl_nova_haproxy [res_nova_haproxy]
     Started: [ juju-freyes-machine-16 juju-freyes-machine-17 juju-freyes-machine-18 ]
 res_nova_consoleauth (upstart:nova-consoleauth): Started juju-freyes-machine-16

$ python juju_rrun.py --service nova-cloud-controller 'sudo ps aux|grep nova-consoleauth'
---------- 16 nova-cloud-controller/0
---------- stdout ----------
nova 20773 1.2 7.0 312604 71312 ? Ss 14:44 0:02 /usr/bin/python /usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf
ubuntu 21188 0.0 0.6 134316 6836 ? Sl 14:47 0:00 juju-run nova-cloud-controller/0 sudo ps aux|grep nova-consoleauth
root 21195 0.0 0.0 8860 648 ? S 14:47 0:00 grep nova-consoleauth

---------- 17 nova-cloud-controller/1
---------- stdout ----------
ubuntu 20997 0.0 0.6 134316 6840 ? Sl 14:47 0:00 juju-run nova-cloud-controller/1 sudo ps aux|grep nova-consoleauth
root 21004 0.0 0.0 8860 648 ? S 14:47 0:00 grep nova-consoleauth

---------- 18 nova-cloud-controller/2
---------- stdout ----------
ubuntu 21325 0.0 0.8 134316 8888 ? Sl 14:47 0:00 juju-run nova-cloud-controller/2 sudo ps aux|grep nova-consoleauth
root 21332 0.0 0.0 8860 644 ? S 14:47 0:00 grep nova-consoleauth

Thanks,

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_lint_check #2119 nova-cloud-controller-next for freyes mp250339
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/2119/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_unit_test #1909 nova-cloud-controller-next for freyes mp250339
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/1909/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_amulet_test #2028 nova-cloud-controller-next for freyes mp250339
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/2028/

Revision history for this message
Billy Olsen (billy-olsen) wrote : Posted in a previous version of this proposal

Felipe,

Thanks for the submission. I haven't deployed this yet to try it out, but my concerns are around the upstart resources. Since vivid uses systemd while previous releases use upstart, I think we should be using an OCF resource agent instead. There's already one that exists in https://github.com/madkiss/openstack-resource-agents/tree/master/ocf. It looks like it's been a while since it was touched, so it's either stable or no longer supported. I think the openstack-resource-agents package also includes this, but I'm not sure as to its status (I believe it's in universe).
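
In other words (illustrative only; the upstart form is what the crm_mon output above shows, the OCF form is what an openstack-resource-agents based resource would look like):

    # what the current revision registers (upstart job, see crm_mon output above):
    resources['res_nova_consoleauth'] = 'upstart:nova-consoleauth'
    # what an OCF resource agent from openstack-resource-agents would look like:
    resources['res_nova_consoleauth'] = 'ocf:openstack:nova-consoleauth'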

An amulet test is always nice :-) We don't have anything exercising the hacluster relation in our amulet tests at this point, so I don't think it's strictly a requirement (i.e. I wouldn't hold up the MP for it), but improvements are always welcome.

review: Needs Fixing
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_lint_check #2392 nova-cloud-controller-next for freyes mp251456
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
  unit_tests/test_nova_cc_hooks.py:727:51: E502 the backslash is redundant between brackets
  make: *** [lint] Error 1

Full lint test output: http://paste.ubuntu.com/10520210/
Build: http://10.245.162.77:8080/job/charm_lint_check/2392/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_unit_test #2182 nova-cloud-controller-next for freyes mp251456
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/2182/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_amulet_test #2312 nova-cloud-controller-next for freyes mp251456
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/2312/

Revision history for this message
Billy Olsen (billy-olsen) wrote : Posted in a previous version of this proposal

Felipe, thanks for the submission! Things look good. Approved.

review: Approve
Revision history for this message
Billy Olsen (billy-olsen) wrote : Posted in a previous version of this proposal

Okay, I spoke too soon. Everything looks good for enabling single-nova-consoleauth, and it works as expected while it's running. However, if a user wishes to change from single-nova-consoleauth back to multi-consoleauth, this patch doesn't actually:

1. remove res_nova_consoleauth from the corosync/pacemaker configuration.
2. restart the services on the nodes where the service is not running, restoring the multi-consoleauth scenario.

Apologies for not chasing this scenario down sooner.

review: Needs Fixing
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_lint_check #2493 nova-cloud-controller-next for freyes mp252138
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/2493/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_unit_test #2283 nova-cloud-controller-next for freyes mp252138
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/2283/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

charm_amulet_test #2365 nova-cloud-controller-next for freyes mp252138
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/10553839/
Build: http://10.245.162.77:8080/job/charm_amulet_test/2365/

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

The amulet failure is:

  juju-test.conductor.14-basic-precise-icehouse DEBUG : Running 14-basic-precise-icehouse (tests/14-basic-precise-icehouse)
  2015-03-07 01:22:16 Starting deployment of osci-sv00
  2015-03-07 01:22:49 Deploying services...
[...]
  2015-03-07 01:23:13 Deploying service rabbitmq-server using local:precise/rabbitmq-server
  2015-03-07 01:34:08 The following units had errors:
  unit: rabbitmq-server/0: machine: 7 agent-state: error details: hook failed: "config-changed"

This isn't related to my change.

Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

> 1. remove the res_nova_consoleauth from the corosync/pacemaker configuration.
> 2. restart the services on nodes which the service is not running restoring
> the multi-consoleauth scenario.

Makes sense, I'll update the patch.
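
The rough plan for the disable path (sketch only; the final version lives in config_changed() in the diff below): ask hacluster to delete the resources and drop them from the relation data so they aren't recreated:

    for item in ['vip_consoleauth', 'res_nova_consoleauth']:
        if item not in data['delete_resources']:
            data['delete_resources'].append(item)
    # remove them from the relation data so they aren't recreated
    data['colocations'].pop('vip_consoleauth', None)
    data['init_services'].pop('res_nova_consoleauth', None)
    data['resources'].pop('res_nova_consoleauth', None)
    data['resource_params'].pop('res_nova_consoleauth', None)
    for rid in relation_ids('ha'):
        relation_set(rid, **data)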

Thanks,

Revision history for this message
Felipe Reyes (freyes) wrote :

@Billy, I pushed a new version of the patch to address your feedback. This can't be merged until bug 1433377 is fixed, but I'd appreciate it if you could take a quick look to see whether things look good.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #2747 nova-cloud-controller-next for freyes mp253309
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/2747/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #2538 nova-cloud-controller-next for freyes mp253309
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/2538/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #2586 nova-cloud-controller-next for freyes mp253309
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/10630161/
Build: http://10.245.162.77:8080/job/charm_amulet_test/2586/

Revision history for this message
Felipe Reyes (freyes) wrote :

A couple of deployer configurations for testing this change were pushed to lp:~openstack-charm-testers/+junk/percona-mysql-agent

One of them is a complete OpenStack cloud (based on next.yaml from the openstack-charm-testing repo) and the other is a simple 3-node percona cluster plus keystone.

Revision history for this message
Edward Hope-Morley (hopem) wrote :

Felipe, I'm deploying this as we speak. The code looks fine, although I have a few comments about how it could be slightly neater; see inline. One approach might be to pull the relation_settings, traverse the settings dict, apply any default/missing entries, and then process the results.

review: Needs Fixing
Revision history for this message
Felipe Reyes (freyes) wrote :

Ed, the bundle to test this MP is located at https://code.launchpad.net/~openstack-charm-testers/+junk/single-nova-consoleauth , sorry for the confusion.

Revision history for this message
Felipe Reyes (freyes) wrote :

Ed, I'll push another MP that addresses your feedback, except for one item; please see my reply.

Revision history for this message
Edward Hope-Morley (hopem) wrote :

So I should have noticed this before, but your code lives in config_changed() and relies on the ha relation existing, so it will never take effect unless the ha relation exists prior to config_changed() being run... which will not be the case unless you modify the charm config after hacluster has completed installation and the relations have been added. So I suggest you move that code into the ha_joined() or ha_changed() hooks and call it from config_changed(), assuming that is safe to do.
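
Something along these lines might work (illustrative sketch only; the helper name is made up):

    def update_nova_consoleauth_config():
        # Decide whether nova-consoleauth should be pinned to the VIP node and
        # add/remove the pacemaker resources on any existing ha relation.
        ...

    @hooks.hook('config-changed')
    def config_changed():
        # existing config-changed logic ...
        update_nova_consoleauth_config()

    @hooks.hook('ha-relation-changed')
    def ha_changed():
        # existing ha-changed logic ...
        update_nova_consoleauth_config()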

review: Needs Fixing
154. By Felipe Reyes

Commit after merge

155. By Felipe Reyes

Move code into its own function and it's called from config-changed, ha-relation-changed and upgrade-charm

156. By Felipe Reyes

fix lint warning

157. By Felipe Reyes

Commit after merge

158. By Felipe Reyes

nova-consoleauth set to manual only if there is a ha relation

Unmerged revisions

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2015-02-19 04:30:13 +0000
3+++ config.yaml 2015-03-25 23:24:02 +0000
4@@ -326,3 +326,14 @@
5 description: |
6 A comma-separated list of nagios servicegroups.
7 If left empty, the nagios_context will be used as the servicegroup
8+ single-nova-consoleauth:
9+ type: boolean
10+ default: true
11+ description: |
12+ When this configuration is set to True, a single instance of
13+ nova-consoleauth service will be running, this allows users to always
14+ authenticate against the same instance and avoid authentications issues
15+ when the token used was stored in a different instance.
16+
17+ If memcached is being used to store the tokens, then it's recommended to
18+ change this configuration to False.
19
20=== modified file 'hooks/nova_cc_hooks.py'
21--- hooks/nova_cc_hooks.py 2015-02-19 04:29:22 +0000
22+++ hooks/nova_cc_hooks.py 2015-03-25 23:24:02 +0000
23@@ -120,6 +120,9 @@
24
25 hooks = Hooks()
26 CONFIGS = register_configs()
27+COLO_CONSOLEAUTH = 'inf: res_nova_consoleauth grp_nova_vips'
28+AGENT_CONSOLEAUTH = 'ocf:openstack:nova-consoleauth'
29+AGENT_CA_PARAMS = 'op monitor interval="5s"'
30
31
32 @hooks.hook()
33@@ -171,6 +174,52 @@
34 [cluster_joined(rid) for rid in relation_ids('cluster')]
35 update_nrpe_config()
36
37+ relids = relation_ids('ha')
38+ data = {}
39+ if len(relids) != 1:
40+ log('Related to {} ha services'.format(len(relids)), level='DEBUG')
41+ ha_relid = None
42+ else:
43+ ha_relid = relids[0]
44+ data = relation_get(rid=ha_relid) or {}
45+
46+ for key in ['delete_resources', 'colocations', 'init_services',
47+ 'resources', 'resource_params']:
48+ if key not in data:
49+ if key != 'delete_resources':
50+ data[key] = {}
51+ else:
52+ data[key] = []
53+
54+ if config('single-nova-consoleauth') and console_attributes('protocol'):
55+ for item in ['vip_consoleauth', 'res_nova_consoleauth']:
56+ if item in data['delete_resources']:
57+ data['delete_resources'].remove(item)
58+
59+ # the new pcmkr resources have to be added to the existing ones
60+ data['colocations']['vip_consoleauth'] = COLO_CONSOLEAUTH
61+ data['init_services']['res_nova_consoleauth'] = 'nova-consoleauth'
62+ data['resources']['res_nova_consoleauth'] = AGENT_CONSOLEAUTH
63+ data['resource_params']['res_nova_consoleauth'] = AGENT_CA_PARAMS
64+
65+ for rid in relation_ids('ha'):
66+ relation_set(rid, **data)
67+
68+ elif (not config('single-nova-consoleauth')
69+ and console_attributes('protocol')):
70+ for item in ['vip_consoleauth', 'res_nova_consoleauth']:
71+ if item not in data['delete_resources']:
72+ data['delete_resources'].append(item)
73+
74+ # remove them from the rel, so they aren't recreated
75+ data['colocations'].pop('vip_consoleauth', None)
76+ data['init_services'].pop('res_nova_consoleauth', None)
77+ data['resources'].pop('res_nova_consoleauth', None)
78+ data['resource_params'].pop('res_nova_consoleauth', None)
79+
80+ for rid in relation_ids('ha'):
81+ relation_set(rid, **data)
82+
83
84 @hooks.hook('amqp-relation-joined')
85 def amqp_joined(relation_id=None):
86@@ -636,7 +685,7 @@
87 'res_nova_haproxy': 'lsb:haproxy',
88 }
89 resource_params = {
90- 'res_nova_haproxy': 'op monitor interval="5s"'
91+ 'res_nova_haproxy': 'op monitor interval="5s"',
92 }
93
94 vip_group = []
95@@ -674,12 +723,21 @@
96 clones = {
97 'cl_nova_haproxy': 'res_nova_haproxy'
98 }
99+ colocations = {}
100+
101+ if config('single-nova-consoleauth') and console_attributes('protocol'):
102+ colocations['vip_consoleauth'] = COLO_CONSOLEAUTH
103+ init_services['res_nova_consoleauth'] = 'nova-consoleauth'
104+ resources['res_nova_consoleauth'] = AGENT_CONSOLEAUTH
105+ resource_params['res_nova_consoleauth'] = AGENT_CA_PARAMS
106+
107 relation_set(init_services=init_services,
108 corosync_bindiface=cluster_config['ha-bindiface'],
109 corosync_mcastport=cluster_config['ha-mcastport'],
110 resources=resources,
111 resource_params=resource_params,
112- clones=clones)
113+ clones=clones,
114+ colocations=colocations)
115
116
117 @hooks.hook('ha-relation-changed')
118
119=== modified file 'unit_tests/test_nova_cc_hooks.py'
120--- unit_tests/test_nova_cc_hooks.py 2015-01-13 14:44:54 +0000
121+++ unit_tests/test_nova_cc_hooks.py 2015-03-25 23:24:02 +0000
122@@ -611,30 +611,128 @@
123 'by the neutron-server process.'
124 )
125
126- def test_ha_relation_joined_no_bound_ip(self):
127- self.get_hacluster_config.return_value = {
128- 'ha-bindiface': 'em0',
129- 'ha-mcastport': '8080',
130- 'vip': '10.10.10.10',
131- }
132- self.test_config.set('vip_iface', 'eth120')
133- self.test_config.set('vip_cidr', '21')
134- self.get_iface_for_address.return_value = None
135- self.get_netmask_for_address.return_value = None
136- hooks.ha_joined()
137- args = {
138- 'corosync_bindiface': 'em0',
139- 'corosync_mcastport': '8080',
140- 'init_services': {'res_nova_haproxy': 'haproxy'},
141- 'resources': {'res_nova_eth120_vip': 'ocf:heartbeat:IPaddr2',
142- 'res_nova_haproxy': 'lsb:haproxy'},
143- 'resource_params': {
144- 'res_nova_eth120_vip': 'params ip="10.10.10.10"'
145- ' cidr_netmask="21" nic="eth120"',
146- 'res_nova_haproxy': 'op monitor interval="5s"'},
147- 'clones': {'cl_nova_haproxy': 'res_nova_haproxy'}
148- }
149- self.relation_set.assert_has_calls([
150- call(groups={'grp_nova_vips': 'res_nova_eth120_vip'}),
151- call(**args),
152+ @patch('nova_cc_utils.config')
153+ def test_ha_relation_joined_no_bound_ip(self, config):
154+ self.get_hacluster_config.return_value = {
155+ 'ha-bindiface': 'em0',
156+ 'ha-mcastport': '8080',
157+ 'vip': '10.10.10.10',
158+ }
159+ self.test_config.set('vip_iface', 'eth120')
160+ self.test_config.set('vip_cidr', '21')
161+ config.return_value = None
162+ self.get_iface_for_address.return_value = None
163+ self.get_netmask_for_address.return_value = None
164+ hooks.ha_joined()
165+ args = {
166+ 'corosync_bindiface': 'em0',
167+ 'corosync_mcastport': '8080',
168+ 'init_services': {'res_nova_haproxy': 'haproxy'},
169+ 'resources': {'res_nova_eth120_vip': 'ocf:heartbeat:IPaddr2',
170+ 'res_nova_haproxy': 'lsb:haproxy'},
171+ 'resource_params': {
172+ 'res_nova_eth120_vip': 'params ip="10.10.10.10"'
173+ ' cidr_netmask="21" nic="eth120"',
174+ 'res_nova_haproxy': 'op monitor interval="5s"'},
175+ 'colocations': {},
176+ 'clones': {'cl_nova_haproxy': 'res_nova_haproxy'}
177+ }
178+ self.relation_set.assert_has_calls([
179+ call(groups={'grp_nova_vips': 'res_nova_eth120_vip'}),
180+ call(**args),
181+ ])
182+
183+ @patch('nova_cc_utils.config')
184+ def test_ha_relation_multi_consoleauth(self, config):
185+ self.get_hacluster_config.return_value = {
186+ 'ha-bindiface': 'em0',
187+ 'ha-mcastport': '8080',
188+ 'vip': '10.10.10.10',
189+ }
190+ self.test_config.set('vip_iface', 'eth120')
191+ self.test_config.set('vip_cidr', '21')
192+ self.test_config.set('single-nova-consoleauth', False)
193+ config.return_value = 'novnc'
194+ self.get_iface_for_address.return_value = None
195+ self.get_netmask_for_address.return_value = None
196+ hooks.ha_joined()
197+ args = {
198+ 'corosync_bindiface': 'em0',
199+ 'corosync_mcastport': '8080',
200+ 'init_services': {'res_nova_haproxy': 'haproxy'},
201+ 'resources': {'res_nova_eth120_vip': 'ocf:heartbeat:IPaddr2',
202+ 'res_nova_haproxy': 'lsb:haproxy'},
203+ 'resource_params': {
204+ 'res_nova_eth120_vip': 'params ip="10.10.10.10"'
205+ ' cidr_netmask="21" nic="eth120"',
206+ 'res_nova_haproxy': 'op monitor interval="5s"'},
207+ 'colocations': {},
208+ 'clones': {'cl_nova_haproxy': 'res_nova_haproxy'}
209+ }
210+ self.relation_set.assert_has_calls([
211+ call(groups={'grp_nova_vips': 'res_nova_eth120_vip'}),
212+ call(**args),
213+ ])
214+
215+ @patch('nova_cc_utils.config')
216+ def test_ha_relation_single_consoleauth(self, config):
217+ self.get_hacluster_config.return_value = {
218+ 'ha-bindiface': 'em0',
219+ 'ha-mcastport': '8080',
220+ 'vip': '10.10.10.10',
221+ }
222+ self.test_config.set('vip_iface', 'eth120')
223+ self.test_config.set('vip_cidr', '21')
224+ config.return_value = 'novnc'
225+ self.get_iface_for_address.return_value = None
226+ self.get_netmask_for_address.return_value = None
227+ hooks.ha_joined()
228+ args = {
229+ 'corosync_bindiface': 'em0',
230+ 'corosync_mcastport': '8080',
231+ 'init_services': {'res_nova_haproxy': 'haproxy',
232+ 'res_nova_consoleauth': 'nova-consoleauth'},
233+ 'resources': {'res_nova_eth120_vip': 'ocf:heartbeat:IPaddr2',
234+ 'res_nova_haproxy': 'lsb:haproxy',
235+ 'res_nova_consoleauth':
236+ 'ocf:openstack:nova-consoleauth'},
237+ 'resource_params': {
238+ 'res_nova_eth120_vip': 'params ip="10.10.10.10"'
239+ ' cidr_netmask="21" nic="eth120"',
240+ 'res_nova_haproxy': 'op monitor interval="5s"',
241+ 'res_nova_consoleauth': 'op monitor interval="5s"'},
242+ 'colocations': {
243+ 'vip_consoleauth': 'inf: res_nova_consoleauth grp_nova_vips'
244+ },
245+ 'clones': {'cl_nova_haproxy': 'res_nova_haproxy'}
246+ }
247+ self.relation_set.assert_has_calls([
248+ call(groups={'grp_nova_vips': 'res_nova_eth120_vip'}),
249+ call(**args),
250+ ])
251+
252+ @patch('nova_cc_hooks.configure_https')
253+ @patch('nova_cc_utils.config')
254+ def test_config_changed_single_consoleauth(self, config, *args):
255+ config.return_value = 'novnc'
256+ rids = {'ha': ['ha:1']}
257+
258+ def f(r):
259+ return rids.get(r, [])
260+
261+ self.relation_ids.side_effect = f
262+ hooks.config_changed()
263+ args = {
264+ 'delete_resources': [],
265+ 'init_services': {'res_nova_consoleauth': 'nova-consoleauth'},
266+ 'resources': {'res_nova_consoleauth':
267+ 'ocf:openstack:nova-consoleauth'},
268+ 'resource_params': {
269+ 'res_nova_consoleauth': 'op monitor interval="5s"'},
270+ 'colocations': {
271+ 'vip_consoleauth': 'inf: res_nova_consoleauth grp_nova_vips'
272+ }
273+ }
274+ self.relation_set.assert_has_calls([
275+ call(v, **args) for v in rids['ha']
276 ])
