Merge lp:~cbehrens/nova/lp803165 into lp:~hudson-openstack/nova/trunk

Proposed by Chris Behrens
Status: Merged
Approved by: Josh Kearney
Approved revision: 1226
Merged at revision: 1228
Proposed branch: lp:~cbehrens/nova/lp803165
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 15 lines (+5/-0)
1 file modified
nova/rpc.py (+5/-0)
To merge this branch: bzr merge lp:~cbehrens/nova/lp803165
Reviewer Review Type Date Requested Status
Josh Kearney (community) Approve
Brian Lamar (community) Needs Information
Matt Dietz (community) Approve
Ed Leafe (community) Approve
Review via email: mp+66355@code.launchpad.net

Description of the change

Sets 'exclusive=True' on fanout AMQP queues. We create the queues with UUIDs, so each consumer should have exclusive access, and the queues should get removed when the consumer is done (on service stop). 'exclusive' implies 'auto_delete'. Fixes lp:803165
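For readers unfamiliar with the flags involved, here is a minimal sketch (not nova's actual rpc code) of the queue options a fanout consumer would end up with under this change. The helper name is hypothetical; the point is that each consumer gets a uniquely named, non-durable queue, and that in AMQP an exclusive queue is deleted when its owning connection closes, which is why a separate auto_delete flag is unnecessary:

```python
import uuid


def fanout_queue_opts(topic):
    """Hypothetical helper: queue options for one fanout consumer.

    Each consumer declares its own uniquely named queue. exclusive=True
    means only the declaring connection may consume from it, and the
    broker deletes the queue when that connection closes, so the queue
    is not left around after a service stop or restart.
    """
    unique = uuid.uuid4().hex
    return {
        'queue': '%s_fanout_%s' % (topic, unique),
        'durable': False,
        'exclusive': True,  # implies auto_delete behavior on the broker
    }


opts = fanout_queue_opts('scheduler')
```

Because the queue name embeds a fresh UUID, no two consumers (or two runs of the same service) ever contend for the same queue.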

Revision history for this message
Chris Behrens (cbehrens) wrote :

I meant to note: this change requires rabbit to be restarted. Because we're changing the queues to exclusive (and thus auto_delete), the exchange has to be deleted, and restarting rabbit is the easy way to do that.

Revision history for this message
Ed Leafe (ed-leafe) wrote :

lgtm

review: Approve
Revision history for this message
Matt Dietz (cerberus) wrote :

Talked to Chris about this. He means you only need to restart rabbit once after the change. Killing and restarting the scheduler after the rabbit restart will successfully clean up the queues.

lgtm.

review: Approve
Revision history for this message
Brian Lamar (blamar) wrote :

This looks pretty straightforward, and it's not something we can currently unit test. I'm going to run some tests against this branch, but I have one question:

"Exclusive queues may only be consumed from by the current connection"

Is this going to be hindered by using connection pools or reconnecting after a rabbitmq service interruption? The wording is just specific enough to cause me to think about some disaster scenarios. Like, are we going to have to restart every service if rabbitmq goes down to establish new exclusive queues?

review: Needs Information
Revision history for this message
Chris Behrens (cbehrens) wrote :

Ya, AMQP gets confusing. No, you shouldn't have to restart every service, because publishers publish to a routing key/exchange, not directly to a queue. The scheduler will create a new unique queue on startup that binds to the same routing key/exchange as before (scheduler_fanout).
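Chris's point can be illustrated with a toy in-memory model of a fanout exchange (this is an illustration only, not nova's rpc code or a real AMQP client): publishers only ever address the exchange, so when a consumer's exclusive queue disappears and a new uniquely named one binds in its place, publishing keeps working unchanged.

```python
import uuid


class FanoutExchange:
    """Toy in-memory stand-in for an AMQP fanout exchange."""

    def __init__(self, name):
        self.name = name
        self.queues = {}  # queue name -> list of delivered messages

    def bind(self, queue_name):
        self.queues[queue_name] = []

    def unbind(self, queue_name):
        # models the broker auto-deleting an exclusive queue when its
        # owning connection closes
        del self.queues[queue_name]

    def publish(self, message):
        # publishers address the exchange, never a specific queue:
        # every currently bound queue gets a copy
        for messages in self.queues.values():
            messages.append(message)


ex = FanoutExchange('scheduler_fanout')

# scheduler starts: binds a unique exclusive queue
q1 = 'scheduler_fanout_%s' % uuid.uuid4().hex
ex.bind(q1)
ex.publish('ping')

# scheduler restarts: old exclusive queue is auto-deleted, and a fresh
# unique queue binds to the same exchange; publishers are unaffected
ex.unbind(q1)
q2 = 'scheduler_fanout_%s' % uuid.uuid4().hex
ex.bind(q2)
ex.publish('ping2')
```

After the restart, messages published to the exchange land in the new queue, which is why no other service needs restarting.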

Revision history for this message
Josh Kearney (jk0) wrote :

LGTM

review: Approve

Preview Diff

=== modified file 'nova/rpc.py'
--- nova/rpc.py	2011-06-03 20:32:42 +0000
+++ nova/rpc.py	2011-06-29 17:00:45 +0000
@@ -275,6 +275,11 @@
         unique = uuid.uuid4().hex
         self.queue = '%s_fanout_%s' % (topic, unique)
         self.durable = False
+        # Fanout creates unique queue names, so we should auto-remove
+        # them when done, so they're not left around on restart.
+        # Also, we're the only one that should be consuming.  exclusive
+        # implies auto_delete, so we'll just set that..
+        self.exclusive = True
         LOG.info(_('Created "%(exchange)s" fanout exchange '
                    'with "%(key)s" routing key'),
                  dict(exchange=self.exchange, key=self.routing_key))