Merge lp:~zedshaw/nova/generic-msg-queue-layer into lp:~hudson-openstack/nova/trunk

Proposed by Zed A. Shaw
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1333
Merged at revision: 1346
Proposed branch: lp:~zedshaw/nova/generic-msg-queue-layer
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 689 lines (+198/-305)
12 files modified
Authors (+1/-0)
nova/rpc/__init__.py (+66/-0)
nova/rpc/amqp.py (+1/-22)
nova/rpc/common.py (+23/-0)
nova/service.py (+12/-16)
nova/test.py (+0/-16)
nova/tests/test_adminapi.py (+1/-1)
nova/tests/test_cloud.py (+5/-27)
nova/tests/test_rpc.py (+12/-49)
nova/tests/test_rpc_amqp.py (+68/-0)
nova/tests/test_service.py (+0/-170)
nova/tests/test_test.py (+9/-4)
To merge this branch: bzr merge lp:~zedshaw/nova/generic-msg-queue-layer
Reviewer Review Type Date Requested Status
Trey Morris (community) Approve
Jason Kölker (community) Approve
Vish Ishaya (community) Needs Information
Lorin Hochstein (community) Abstain
Review via email: mp+69712@code.launchpad.net

Description of the change

This change creates a minimalist API abstraction over the nova/rpc.py code so that it's possible to use queue mechanisms other than Rabbit and/or AMQP, and even to use other AMQP drivers besides Rabbit. The change is designed to interfere as little as possible with the rest of the code, fixes several bugs in the tests, and works against current trunk. I also have a small demo driver+server for 0MQ which I'll submit after this patch is merged.
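
A minimal usage sketch of the resulting API, mirroring what the reworked test_rpc.py in this branch does (it assumes a reachable broker just like the test suite, and EchoProxy is only an illustrative stand-in for a real service proxy):

from nova import context
from nova import rpc


class EchoProxy(object):
    """Illustrative proxy exposing one method the consumer can dispatch to."""

    def echo(self, context, value):
        return value


admin_ctxt = context.get_admin_context()

# Callers go through the backend-neutral helpers instead of AMQP classes.
conn = rpc.create_connection(new=True)
consumer = rpc.create_consumer(conn, 'test', EchoProxy(), fanout=False)
consumer.attach_to_eventlet()

# call/cast/etc. forward to RPCIMPL, the module named by --rpc_backend
# (which defaults to the AMQP implementation in this branch).
result = rpc.call(admin_ctxt, 'test', {'method': 'echo', 'args': {'value': 42}})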

Brian Waldon (bcwaldon) wrote :

Looks like you may have requested the wrong Vish to review this. You probably want https://launchpad.net/~vishvananda

Lorin Hochstein (lorinh) wrote :

It looks like there are some artifacts in the source files left over from resolving merge conflicts:

nova/tests/test_adminapi.py

266: +<<<<<<< TREE
274: +=======
281: +>>>>>>> MERGE-SOURCE

review: Needs Fixing
Vish Ishaya (vishvananda) wrote :

Looks pretty good. Something weird happened in the merge of test_cloud. I think you accidentally deleted a test or two. Is there any possible replacement for the deleted tests in test_service, or were they just too dependent on amqp to be useful?

review: Needs Fixing
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 12:36:34AM -0000, Lorin Hochstein wrote:
> Review: Needs Fixing
> It looks like there are some artifacts in the source files left over from resolving merge conflicts:

No, that's not in my source; it's from the bzr merge resolution. Since
it takes people days to review things, there end up being conflicts by
the time it's ready. I'll try it again.

--
Zed A. Shaw
http://zedshaw.com/

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 01:58:50AM -0000, Vish Ishaya wrote:
> Review: Needs Fixing
> Looks pretty good. Something weird happened in the merge of test_cloud. I think you accidentally deleted a test or two. Is there any possible replacement for the deleted tests in test_service, or were they just too dependent on amqp to be useful?

We found the tests didn't actually test anything in service; they
exercised a few things in amqp, but only through mock objects, so they
weren't really testing anything.

--
Zed A. Shaw
http://zedshaw.com/

Lorin Hochstein (lorinh) wrote :

> On Fri, Jul 29, 2011 at 12:36:34AM -0000, Lorin Hochstein wrote:
> > Review: Needs Fixing
> > It looks like there are some artifacts in the source files left over from
> resolving merge conflicts:
>
> No, that's not in my source; it's from the bzr merge resolution. Since
> it takes people days to review things, there end up being conflicts by
> the time it's ready. I'll try it again.
>

Whoops, sorry about that.

review: Abstain
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 03:16:28AM -0000, Lorin Hochstein wrote:
> Whoops, sorry about that.

No problem; all the tools do a bad job of showing what's really going
on. I'm doing the merge and retest now and should have it committed soon.

--
Zed A. Shaw
http://zedshaw.com/

Vish Ishaya (vishvananda) wrote :

nice zed!

Looking forward to the alternative implementation. Just one minor question. Why use load_module instead of using the existing import_object? It looks like it does the same thing (with the exception that import_object will also let you use a class instead of a module for your backend)

review: Needs Information
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 06:43:24AM -0000, Vish Ishaya wrote:
> Review: Needs Information nice zed!
>
> Looking forward to the alternative implementation. Just one minor
> question. Why use load_module instead of using the existing
> import_object? It looks like it does the same thing (with the
> exception that import_object will also let you use a class instead of
> a module for your backend)

I needed just a module loader, not an object-in-a-module loader. The
fact that nova.rpc is used only at the module level in most of the code
(rpc.call, rpc.cast, etc.) meant I could refactor it into an OOP style
without changing tons of code everywhere. It could be
changed to be generic (since python sort of doesn't care), and use an
object or module, but right now that'd be too much change for one patch.

--
Zed A. Shaw
http://zedshaw.com/

Vish Ishaya (vishvananda) wrote :

> It could be
> changed to be generic (since python sort of doesn't care), and use an
> object or module, but right now that'd be too much change for one patch.

That is exactly what utils.import_object does. It attempts to import the path as a module or object; if that fails, it attempts to import it as a callable and call it.

network/manager.py uses a module:
97:flags.DEFINE_string('network_driver', 'nova.network.linux_net'
314: self.driver = utils.import_object(network_driver)

compute/manager.py uses a method:
65:flags.DEFINE_string('compute_driver', 'nova.virt.connection.get_connection'
134: utils.import_object(compute_driver),

volume/manager.py uses a class:
59:flags.DEFINE_string('volume_driver', 'nova.volume.driver.ISCSIDriver',
71: self.driver = utils.import_object(volume_driver)

Gotta love python! I think you can just replace your load_module with import_object with no issues.
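
For reference, a loader with the behavior described above looks roughly like this (an illustrative paraphrase, not the exact code in nova/utils.py):

import sys


def import_object(import_str):
    """Import a dotted path as a module; failing that, treat the last
    segment as a class or callable in the parent module and call it."""
    try:
        __import__(import_str)
        return sys.modules[import_str]          # plain module backend
    except ImportError:
        mod_str, _sep, obj_str = import_str.rpartition('.')
        __import__(mod_str)
        obj = getattr(sys.modules[mod_str], obj_str)
        return obj()                            # class or factory callable


# import_object('nova.rpc.amqp')                     -> the amqp module itself
# import_object('nova.volume.driver.ISCSIDriver')    -> an ISCSIDriver instance
# import_object('nova.virt.connection.get_connection') -> get_connection()'s result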

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 07:07:25AM -0000, Vish Ishaya wrote:
> > It could be
> > changed to be generic (since python sort of doesn't care), and use an
> > object or module, but right now that'd be too much change for one patch.
>
> that is exactly what utils.import_object does. It attempts to import it as a module or object, if that fails, it attempts to import it as a callable and call it.

So then it's not really import_object, it's more "import stuff
magically"? :-)

--
Zed A. Shaw
http://zedshaw.com/

Vish Ishaya (vishvananda) wrote :

> So then it's not really import_object, it's more "import stuff
> magically"? :-)

I never claimed it had a good name. :p

Jason Kölker (jason-koelker) wrote :

Any reason not to rename rpc_backends to rpc and move rpc.py into __init__.py or rpc/api.py and then import it in __init__.py to keep the layout the same as the rest of the codebase?

Other than that, is bueno. Tests pass and I was able to boot an instance. I'm glad you squared this away; it's been annoying me that it wasn't pluggable.

review: Needs Information
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 03:27:51PM -0000, Jason Kölker wrote:
> Review: Needs Information
> Any reason not to rename rpc_backends to rpc and move rpc.py into __init__.py or rpc/api.py and then import it in __init__.py to keep the layout the same as the rest of the codebase?

I believe I tried that, but the imports didn't work right without the
abstraction code being in an explicit nova/rpc.py module.

--
Zed A. Shaw
http://zedshaw.com/

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 03:01:51PM -0000, Vish Ishaya wrote:
>
> > So then it's not really import_object, it's more "import stuff
> > magically"? :-)
>
> I never claimed it had a good name. :p

OK, this is fixed; it now just uses nova.utils.import_object.

FYI, the tests are broken if you have to make a new virtualenv.

--
Zed A. Shaw
http://zedshaw.com/

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 03:27:51PM -0000, Jason Kölker wrote:
> Review: Needs Information
> Any reason not to rename rpc_backends to rpc and move rpc.py into __init__.py or rpc/api.py and then import it in __init__.py to keep the layout the same as the rest of the codebase?

I just fixed the latest thing Vish brought up, and after looking at how
I'd have to change the code to fit this request, I think it would make
things more complicated.

I'm going to ask that, if the reviewers want this change, they
vote on it so that I'm not chasing my tail implementing something that's
not really necessary.

After this patch is in, it'll be easier to refactor.

--
Zed A. Shaw
http://zedshaw.com/

Trey Morris (tr3buchet) wrote :

"...keep the layout the same as the rest of the codebase?"
+1

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 06:07:26PM -0000, Trey Morris wrote:
> "...keep the layout the same as the rest of the codebase?"
> +1

If that's the case, point me to another part of the codebase I should
model this after. Specific files that do this same design of "loadable
modules are inside a directory with __init__.py as the abstraction."

--
Zed A. Shaw
http://zedshaw.com/

Lorin Hochstein (lorinh) wrote :

> On Fri, Jul 29, 2011 at 03:01:51PM -0000, Vish Ishaya wrote:
> FYI, the tests are broken if you have to make a new virtualenv.
>

Logged this as https://bugs.launchpad.net/nova/+bug/818210 (assuming it's the same problem you're encountering).

Jason Kölker (jason-koelker) wrote :

> On Fri, Jul 29, 2011 at 06:07:26PM -0000, Trey Morris wrote:
> > "...keep the layout the same as the rest of the codebase?"
> > +1
>
> If that's the case, point me to another part of the codebase I should
> model this after. Specific files that do this same design of "loadable
> modules are inside a directory with __init__.py as the abstraction."

The ipv6 module is the simplest, but the db uses the same pattern as well. It's a trivial change:

$ bzr diff
=== renamed directory 'nova/rpc_backends' => 'nova/rpc'
=== modified file 'nova/rpc/__init__.py'
--- nova/rpc_backends/__init__.py 2011-07-26 23:29:50 +0000
+++ nova/rpc/__init__.py 2011-07-29 18:58:38 +0000
@@ -0,0 +1,2 @@
+from nova.rpc.api import *
+from nova.rpc.common import RemoteError, LOG

=== modified file 'nova/rpc/amqp.py'
--- nova/rpc_backends/amqp.py 2011-07-26 23:29:50 +0000
+++ nova/rpc/amqp.py 2011-07-29 18:57:42 +0000
@@ -44,7 +44,7 @@
 from nova import flags
 from nova import log as logging
 from nova import utils
-from nova.rpc_backends.common import RemoteError, LOG
+from nova.rpc.common import RemoteError, LOG

 FLAGS = flags.FLAGS

=== renamed file 'nova/rpc.py' => 'nova/rpc/api.py'
--- nova/rpc.py 2011-07-29 17:33:58 +0000
+++ nova/rpc/api.py 2011-07-29 18:58:11 +0000
@@ -18,12 +18,11 @@

 from nova.utils import import_object
-from nova.rpc_backends.common import RemoteError, LOG
 from nova import flags

 FLAGS = flags.FLAGS
 flags.DEFINE_string('rpc_backend',
- 'nova.rpc_backends.amqp',
+ 'nova.rpc.amqp',
                     "The messaging module to use, defaults to AMQP.")

 RPCIMPL = import_object(FLAGS.rpc_backend)

=== modified file 'nova/tests/test_rpc_amqp.py'
--- nova/tests/test_rpc_amqp.py 2011-07-26 23:29:50 +0000
+++ nova/tests/test_rpc_amqp.py 2011-07-29 19:02:57 +0000
@@ -2,7 +2,7 @@
 from nova import flags
 from nova import log as logging
 from nova import rpc
-from nova.rpc_backends import amqp
+from nova.rpc import amqp
 from nova import test

Jason Kölker (jason-koelker) wrote :

> > On Fri, Jul 29, 2011 at 06:07:26PM -0000, Trey Morris wrote:
> > > "...keep the layout the same as the rest of the codebase?"
> > > +1
> >
> > If that's the case, point me to another part of the codebase I should
> > model this after. Specific files that do this same design of "loadable
> > modules are inside a directory with __init__.py as the abstraction."
>
> The ipv6 module is the simplest, but the db uses the same pattern as well. It's
> a trivial change:
>

Ah, I see you already did that, excellent!

review: Approve
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 06:16:02PM -0000, Zed A. Shaw wrote:
> On Fri, Jul 29, 2011 at 06:07:26PM -0000, Trey Morris wrote:
> > "...keep the layout the same as the rest of the codebase?"
> > +1

Alright, it's now matching the "rest of the codebase", based on the
description, since I can't find other modules doing this with a similar
layout.

It's also remerged, and a new review request has been put in.

--
Zed A. Shaw
http://zedshaw.com/

Trey Morris (tr3buchet) wrote :

koelker: haha way to pay attention!

zed: i see you just added yourself to Authors. welcome to the party!

review: Approve
Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 07:22:35PM -0000, Jason Kölker wrote:
> > The ipv6 module is the simplest, but the db uses the same pattern as well. It's
> > a trivial change:
> >
>
> Ah, I see you already did that, excellent!

Yep, did it already. FYI, it's not trivial to move these modules around
because people are trying to edit them *and* bzr tends to have problems
with moved files. If there's much more moving required, I'll ask that it
be done in subsequent changes to the code rather than waiting around for
perfection on this patch.

--
Zed A. Shaw
http://zedshaw.com/

Zed A. Shaw (zedshaw) wrote :

On Fri, Jul 29, 2011 at 07:25:58PM -0000, Trey Morris wrote:
> Review: Approve
> koelker: haha way to pay attention!
>
> zed: i see you just added yourself to Authors. welcome to the party!

Thanks, although it was mostly to satisfy a unit test.

--
Zed A. Shaw
http://zedshaw.com/

Preview Diff

1=== modified file 'Authors'
2--- Authors 2011-07-26 00:49:36 +0000
3+++ Authors 2011-07-29 19:17:40 +0000
4@@ -105,3 +105,4 @@
5 Youcef Laribi <Youcef.Laribi@eu.citrix.com>
6 Yuriy Taraday <yorik.sar@gmail.com>
7 Zhixue Wu <Zhixue.Wu@citrix.com>
8+Zed Shaw <zedshaw@zedshaw.com>
9
10=== added directory 'nova/rpc'
11=== added file 'nova/rpc/__init__.py'
12--- nova/rpc/__init__.py 1970-01-01 00:00:00 +0000
13+++ nova/rpc/__init__.py 2011-07-29 19:17:40 +0000
14@@ -0,0 +1,66 @@
15+# vim: tabstop=4 shiftwidth=4 softtabstop=4
16+
17+# Copyright 2010 United States Government as represented by the
18+# Administrator of the National Aeronautics and Space Administration.
19+# All Rights Reserved.
20+#
21+# Licensed under the Apache License, Version 2.0 (the "License"); you may
22+# not use this file except in compliance with the License. You may obtain
23+# a copy of the License at
24+#
25+# http://www.apache.org/licenses/LICENSE-2.0
26+#
27+# Unless required by applicable law or agreed to in writing, software
28+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
29+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
30+# License for the specific language governing permissions and limitations
31+# under the License.
32+
33+
34+from nova.utils import import_object
35+from nova.rpc.common import RemoteError, LOG
36+from nova import flags
37+
38+FLAGS = flags.FLAGS
39+flags.DEFINE_string('rpc_backend',
40+ 'nova.rpc.amqp',
41+ "The messaging module to use, defaults to AMQP.")
42+
43+RPCIMPL = import_object(FLAGS.rpc_backend)
44+
45+
46+def create_connection(new=True):
47+ return RPCIMPL.Connection.instance(new=True)
48+
49+
50+def create_consumer(conn, topic, proxy, fanout=False):
51+ if fanout:
52+ return RPCIMPL.FanoutAdapterConsumer(
53+ connection=conn,
54+ topic=topic,
55+ proxy=proxy)
56+ else:
57+ return RPCIMPL.TopicAdapterConsumer(
58+ connection=conn,
59+ topic=topic,
60+ proxy=proxy)
61+
62+
63+def create_consumer_set(conn, consumers):
64+ return RPCIMPL.ConsumerSet(connection=conn, consumer_list=consumers)
65+
66+
67+def call(context, topic, msg):
68+ return RPCIMPL.call(context, topic, msg)
69+
70+
71+def cast(context, topic, msg):
72+ return RPCIMPL.cast(context, topic, msg)
73+
74+
75+def fanout_cast(context, topic, msg):
76+ return RPCIMPL.fanout_cast(context, topic, msg)
77+
78+
79+def multicall(context, topic, msg):
80+ return RPCIMPL.multicall(context, topic, msg)
81
82=== renamed file 'nova/rpc.py' => 'nova/rpc/amqp.py'
83--- nova/rpc.py 2011-07-01 14:53:20 +0000
84+++ nova/rpc/amqp.py 2011-07-29 19:17:40 +0000
85@@ -44,9 +44,7 @@
86 from nova import flags
87 from nova import log as logging
88 from nova import utils
89-
90-
91-LOG = logging.getLogger('nova.rpc')
92+from nova.rpc.common import RemoteError, LOG
93
94
95 FLAGS = flags.FLAGS
96@@ -418,25 +416,6 @@
97 publisher.close()
98
99
100-class RemoteError(exception.Error):
101- """Signifies that a remote class has raised an exception.
102-
103- Containes a string representation of the type of the original exception,
104- the value of the original exception, and the traceback. These are
105- sent to the parent as a joined string so printing the exception
106- contains all of the relevent info.
107-
108- """
109-
110- def __init__(self, exc_type, value, traceback):
111- self.exc_type = exc_type
112- self.value = value
113- self.traceback = traceback
114- super(RemoteError, self).__init__('%s %s\n%s' % (exc_type,
115- value,
116- traceback))
117-
118-
119 def _unpack_context(msg):
120 """Unpack context from msg."""
121 context_dict = {}
122
123=== added file 'nova/rpc/common.py'
124--- nova/rpc/common.py 1970-01-01 00:00:00 +0000
125+++ nova/rpc/common.py 2011-07-29 19:17:40 +0000
126@@ -0,0 +1,23 @@
127+from nova import exception
128+from nova import log as logging
129+
130+LOG = logging.getLogger('nova.rpc')
131+
132+
133+class RemoteError(exception.Error):
134+ """Signifies that a remote class has raised an exception.
135+
136+ Containes a string representation of the type of the original exception,
137+ the value of the original exception, and the traceback. These are
138+ sent to the parent as a joined string so printing the exception
139+ contains all of the relevent info.
140+
141+ """
142+
143+ def __init__(self, exc_type, value, traceback):
144+ self.exc_type = exc_type
145+ self.value = value
146+ self.traceback = traceback
147+ super(RemoteError, self).__init__('%s %s\n%s' % (exc_type,
148+ value,
149+ traceback))
150
151=== modified file 'nova/service.py'
152--- nova/service.py 2011-06-24 01:31:00 +0000
153+++ nova/service.py 2011-07-29 19:17:40 +0000
154@@ -149,26 +149,22 @@
155 if 'nova-compute' == self.binary:
156 self.manager.update_available_resource(ctxt)
157
158- self.conn = rpc.Connection.instance(new=True)
159+ self.conn = rpc.create_connection(new=True)
160 logging.debug("Creating Consumer connection for Service %s" %
161 self.topic)
162
163 # Share this same connection for these Consumers
164- consumer_all = rpc.TopicAdapterConsumer(
165- connection=self.conn,
166- topic=self.topic,
167- proxy=self)
168- consumer_node = rpc.TopicAdapterConsumer(
169- connection=self.conn,
170- topic='%s.%s' % (self.topic, self.host),
171- proxy=self)
172- fanout = rpc.FanoutAdapterConsumer(
173- connection=self.conn,
174- topic=self.topic,
175- proxy=self)
176- consumer_set = rpc.ConsumerSet(
177- connection=self.conn,
178- consumer_list=[consumer_all, consumer_node, fanout])
179+ consumer_all = rpc.create_consumer(self.conn, self.topic, self,
180+ fanout=False)
181+
182+ node_topic = '%s.%s' % (self.topic, self.host)
183+ consumer_node = rpc.create_consumer(self.conn, node_topic, self,
184+ fanout=False)
185+
186+ fanout = rpc.create_consumer(self.conn, self.topic, self, fanout=True)
187+
188+ consumers = [consumer_all, consumer_node, fanout]
189+ consumer_set = rpc.create_consumer_set(self.conn, consumers)
190
191 # Wait forever, processing these consumers
192 def _wait():
193
194=== modified file 'nova/test.py'
195--- nova/test.py 2011-07-08 09:35:01 +0000
196+++ nova/test.py 2011-07-29 19:17:40 +0000
197@@ -99,9 +99,7 @@
198 self.flag_overrides = {}
199 self.injected = []
200 self._services = []
201- self._monkey_patch_attach()
202 self._original_flags = FLAGS.FlagValuesDict()
203- rpc.ConnectionPool = rpc.Pool(max_size=FLAGS.rpc_conn_pool_size)
204
205 def tearDown(self):
206 """Runs after each test method to tear down test environment."""
207@@ -126,9 +124,6 @@
208 # Reset any overriden flags
209 self.reset_flags()
210
211- # Reset our monkey-patches
212- rpc.Consumer.attach_to_eventlet = self.original_attach
213-
214 # Stop any timers
215 for x in self.injected:
216 try:
217@@ -172,17 +167,6 @@
218 self._services.append(svc)
219 return svc
220
221- def _monkey_patch_attach(self):
222- self.original_attach = rpc.Consumer.attach_to_eventlet
223-
224- def _wrapped(inner_self):
225- rv = self.original_attach(inner_self)
226- self.injected.append(rv)
227- return rv
228-
229- _wrapped.func_name = self.original_attach.func_name
230- rpc.Consumer.attach_to_eventlet = _wrapped
231-
232 # Useful assertions
233 def assertDictMatch(self, d1, d2, approx_equal=False, tolerance=0.001):
234 """Assert two dicts are equivalent.
235
236=== modified file 'nova/tests/test_adminapi.py'
237--- nova/tests/test_adminapi.py 2011-06-27 21:48:03 +0000
238+++ nova/tests/test_adminapi.py 2011-07-29 19:17:40 +0000
239@@ -39,7 +39,7 @@
240 super(AdminApiTestCase, self).setUp()
241 self.flags(connection_type='fake')
242
243- self.conn = rpc.Connection.instance()
244+ self.conn = rpc.create_connection()
245
246 # set up our cloud
247 self.api = admin.AdminController()
248
249=== modified file 'nova/tests/test_cloud.py'
250--- nova/tests/test_cloud.py 2011-07-27 21:39:27 +0000
251+++ nova/tests/test_cloud.py 2011-07-29 19:17:40 +0000
252@@ -50,7 +50,7 @@
253 self.flags(connection_type='fake',
254 stub_network=True)
255
256- self.conn = rpc.Connection.instance()
257+ self.conn = rpc.create_connection()
258
259 # set up our cloud
260 self.cloud = cloud.CloudController()
261@@ -326,23 +326,16 @@
262 revoke = self.cloud.revoke_security_group_ingress
263 self.assertTrue(revoke(self.context, group_name=sec['name'], **kwargs))
264
265- def test_revoke_security_group_ingress_by_id(self):
266- kwargs = {'project_id': self.context.project_id, 'name': 'test'}
267- sec = db.security_group_create(self.context, kwargs)
268+ def test_authorize_revoke_security_group_ingress_by_id(self):
269+ sec = db.security_group_create(self.context,
270+ {'project_id': self.context.project_id,
271+ 'name': 'test'})
272 authz = self.cloud.authorize_security_group_ingress
273 kwargs = {'to_port': '999', 'from_port': '999', 'ip_protocol': 'tcp'}
274 authz(self.context, group_id=sec['id'], **kwargs)
275 revoke = self.cloud.revoke_security_group_ingress
276 self.assertTrue(revoke(self.context, group_id=sec['id'], **kwargs))
277
278- def test_authorize_security_group_ingress_by_id(self):
279- sec = db.security_group_create(self.context,
280- {'project_id': self.context.project_id,
281- 'name': 'test'})
282- authz = self.cloud.authorize_security_group_ingress
283- kwargs = {'to_port': '999', 'from_port': '999', 'ip_protocol': 'tcp'}
284- self.assertTrue(authz(self.context, group_id=sec['id'], **kwargs))
285-
286 def test_authorize_security_group_ingress_missing_protocol_params(self):
287 sec = db.security_group_create(self.context,
288 {'project_id': self.context.project_id,
289@@ -961,21 +954,6 @@
290 self._wait_for_running(ec2_instance_id)
291 return ec2_instance_id
292
293- def test_rescue_unrescue_instance(self):
294- instance_id = self._run_instance(
295- image_id='ami-1',
296- instance_type=FLAGS.default_instance_type,
297- max_count=1)
298- self.cloud.rescue_instance(context=self.context,
299- instance_id=instance_id)
300- # NOTE(vish): This currently does no validation, it simply makes sure
301- # that the code path doesn't throw an exception.
302- self.cloud.unrescue_instance(context=self.context,
303- instance_id=instance_id)
304- # TODO(soren): We need this until we can stop polling in the rpc code
305- # for unit tests.
306- self.cloud.terminate_instances(self.context, [instance_id])
307-
308 def test_console_output(self):
309 instance_id = self._run_instance(
310 image_id='ami-1',
311
312=== modified file 'nova/tests/test_rpc.py'
313--- nova/tests/test_rpc.py 2011-05-26 22:08:53 +0000
314+++ nova/tests/test_rpc.py 2011-07-29 19:17:40 +0000
315@@ -33,11 +33,12 @@
316 class RpcTestCase(test.TestCase):
317 def setUp(self):
318 super(RpcTestCase, self).setUp()
319- self.conn = rpc.Connection.instance(True)
320+ self.conn = rpc.create_connection(True)
321 self.receiver = TestReceiver()
322- self.consumer = rpc.TopicAdapterConsumer(connection=self.conn,
323- topic='test',
324- proxy=self.receiver)
325+ self.consumer = rpc.create_consumer(self.conn,
326+ 'test',
327+ self.receiver,
328+ False)
329 self.consumer.attach_to_eventlet()
330 self.context = context.get_admin_context()
331
332@@ -129,6 +130,8 @@
333 """Calls echo in the passed queue"""
334 LOG.debug(_("Nested received %(queue)s, %(value)s")
335 % locals())
336+ # TODO: so, it will replay the context and use the same REQID?
337+ # that's bizarre.
338 ret = rpc.call(context,
339 queue,
340 {"method": "echo",
341@@ -137,10 +140,11 @@
342 return value
343
344 nested = Nested()
345- conn = rpc.Connection.instance(True)
346- consumer = rpc.TopicAdapterConsumer(connection=conn,
347- topic='nested',
348- proxy=nested)
349+ conn = rpc.create_connection(True)
350+ consumer = rpc.create_consumer(conn,
351+ 'nested',
352+ nested,
353+ False)
354 consumer.attach_to_eventlet()
355 value = 42
356 result = rpc.call(self.context,
357@@ -149,47 +153,6 @@
358 "value": value}})
359 self.assertEqual(value, result)
360
361- def test_connectionpool_single(self):
362- """Test that ConnectionPool recycles a single connection."""
363- conn1 = rpc.ConnectionPool.get()
364- rpc.ConnectionPool.put(conn1)
365- conn2 = rpc.ConnectionPool.get()
366- rpc.ConnectionPool.put(conn2)
367- self.assertEqual(conn1, conn2)
368-
369- def test_connectionpool_double(self):
370- """Test that ConnectionPool returns and reuses separate connections.
371-
372- When called consecutively we should get separate connections and upon
373- returning them those connections should be reused for future calls
374- before generating a new connection.
375-
376- """
377- conn1 = rpc.ConnectionPool.get()
378- conn2 = rpc.ConnectionPool.get()
379-
380- self.assertNotEqual(conn1, conn2)
381- rpc.ConnectionPool.put(conn1)
382- rpc.ConnectionPool.put(conn2)
383-
384- conn3 = rpc.ConnectionPool.get()
385- conn4 = rpc.ConnectionPool.get()
386- self.assertEqual(conn1, conn3)
387- self.assertEqual(conn2, conn4)
388-
389- def test_connectionpool_limit(self):
390- """Test connection pool limit and connection uniqueness."""
391- max_size = FLAGS.rpc_conn_pool_size
392- conns = []
393-
394- for i in xrange(max_size):
395- conns.append(rpc.ConnectionPool.get())
396-
397- self.assertFalse(rpc.ConnectionPool.free_items)
398- self.assertEqual(rpc.ConnectionPool.current_size,
399- rpc.ConnectionPool.max_size)
400- self.assertEqual(len(set(conns)), max_size)
401-
402
403 class TestReceiver(object):
404 """Simple Proxy class so the consumer has methods to call.
405
406=== added file 'nova/tests/test_rpc_amqp.py'
407--- nova/tests/test_rpc_amqp.py 1970-01-01 00:00:00 +0000
408+++ nova/tests/test_rpc_amqp.py 2011-07-29 19:17:40 +0000
409@@ -0,0 +1,68 @@
410+from nova import context
411+from nova import flags
412+from nova import log as logging
413+from nova import rpc
414+from nova.rpc import amqp
415+from nova import test
416+
417+
418+FLAGS = flags.FLAGS
419+LOG = logging.getLogger('nova.tests.rpc')
420+
421+
422+class RpcAMQPTestCase(test.TestCase):
423+ def setUp(self):
424+ super(RpcAMQPTestCase, self).setUp()
425+ self.conn = rpc.create_connection(True)
426+ self.receiver = TestReceiver()
427+ self.consumer = rpc.create_consumer(self.conn,
428+ 'test',
429+ self.receiver,
430+ False)
431+ self.consumer.attach_to_eventlet()
432+ self.context = context.get_admin_context()
433+
434+ def test_connectionpool_single(self):
435+ """Test that ConnectionPool recycles a single connection."""
436+ conn1 = amqp.ConnectionPool.get()
437+ amqp.ConnectionPool.put(conn1)
438+ conn2 = amqp.ConnectionPool.get()
439+ amqp.ConnectionPool.put(conn2)
440+ self.assertEqual(conn1, conn2)
441+
442+
443+class TestReceiver(object):
444+ """Simple Proxy class so the consumer has methods to call.
445+
446+ Uses static methods because we aren't actually storing any state.
447+
448+ """
449+
450+ @staticmethod
451+ def echo(context, value):
452+ """Simply returns whatever value is sent in."""
453+ LOG.debug(_("Received %s"), value)
454+ return value
455+
456+ @staticmethod
457+ def context(context, value):
458+ """Returns dictionary version of context."""
459+ LOG.debug(_("Received %s"), context)
460+ return context.to_dict()
461+
462+ @staticmethod
463+ def echo_three_times(context, value):
464+ context.reply(value)
465+ context.reply(value + 1)
466+ context.reply(value + 2)
467+
468+ @staticmethod
469+ def echo_three_times_yield(context, value):
470+ yield value
471+ yield value + 1
472+ yield value + 2
473+
474+ @staticmethod
475+ def fail(context, value):
476+ """Raises an exception with the value sent in."""
477+ raise Exception(value)
478
479=== modified file 'nova/tests/test_service.py'
480--- nova/tests/test_service.py 2011-06-28 14:39:04 +0000
481+++ nova/tests/test_service.py 2011-07-29 19:17:40 +0000
482@@ -109,103 +109,8 @@
483 # the looping calls are created in StartService.
484 app = service.Service.create(host=host, binary=binary, topic=topic)
485
486- self.mox.StubOutWithMock(service.rpc.Connection, 'instance')
487- service.rpc.Connection.instance(new=mox.IgnoreArg())
488-
489- self.mox.StubOutWithMock(rpc,
490- 'TopicAdapterConsumer',
491- use_mock_anything=True)
492- self.mox.StubOutWithMock(rpc,
493- 'FanoutAdapterConsumer',
494- use_mock_anything=True)
495-
496- self.mox.StubOutWithMock(rpc,
497- 'ConsumerSet',
498- use_mock_anything=True)
499-
500- rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
501- topic=topic,
502- proxy=mox.IsA(service.Service)).AndReturn(
503- rpc.TopicAdapterConsumer)
504-
505- rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
506- topic='%s.%s' % (topic, host),
507- proxy=mox.IsA(service.Service)).AndReturn(
508- rpc.TopicAdapterConsumer)
509-
510- rpc.FanoutAdapterConsumer(connection=mox.IgnoreArg(),
511- topic=topic,
512- proxy=mox.IsA(service.Service)).AndReturn(
513- rpc.FanoutAdapterConsumer)
514-
515- def wait_func(self, limit=None):
516- return None
517-
518- mock_cset = self.mox.CreateMock(rpc.ConsumerSet,
519- {'wait': wait_func})
520- rpc.ConsumerSet(connection=mox.IgnoreArg(),
521- consumer_list=mox.IsA(list)).AndReturn(mock_cset)
522- wait_func(mox.IgnoreArg())
523-
524- service_create = {'host': host,
525- 'binary': binary,
526- 'topic': topic,
527- 'report_count': 0,
528- 'availability_zone': 'nova'}
529- service_ref = {'host': host,
530- 'binary': binary,
531- 'report_count': 0,
532- 'id': 1}
533-
534- service.db.service_get_by_args(mox.IgnoreArg(),
535- host,
536- binary).AndRaise(exception.NotFound())
537- service.db.service_create(mox.IgnoreArg(),
538- service_create).AndReturn(service_ref)
539- self.mox.ReplayAll()
540-
541- app.start()
542- app.stop()
543 self.assert_(app)
544
545- # We're testing sort of weird behavior in how report_state decides
546- # whether it is disconnected, it looks for a variable on itself called
547- # 'model_disconnected' and report_state doesn't really do much so this
548- # these are mostly just for coverage
549- def test_report_state_no_service(self):
550- host = 'foo'
551- binary = 'bar'
552- topic = 'test'
553- service_create = {'host': host,
554- 'binary': binary,
555- 'topic': topic,
556- 'report_count': 0,
557- 'availability_zone': 'nova'}
558- service_ref = {'host': host,
559- 'binary': binary,
560- 'topic': topic,
561- 'report_count': 0,
562- 'availability_zone': 'nova',
563- 'id': 1}
564-
565- service.db.service_get_by_args(mox.IgnoreArg(),
566- host,
567- binary).AndRaise(exception.NotFound())
568- service.db.service_create(mox.IgnoreArg(),
569- service_create).AndReturn(service_ref)
570- service.db.service_get(mox.IgnoreArg(),
571- service_ref['id']).AndReturn(service_ref)
572- service.db.service_update(mox.IgnoreArg(), service_ref['id'],
573- mox.ContainsKeyValue('report_count', 1))
574-
575- self.mox.ReplayAll()
576- serv = service.Service(host,
577- binary,
578- topic,
579- 'nova.tests.test_service.FakeManager')
580- serv.start()
581- serv.report_state()
582-
583 def test_report_state_newly_disconnected(self):
584 host = 'foo'
585 binary = 'bar'
586@@ -276,81 +181,6 @@
587
588 self.assert_(not serv.model_disconnected)
589
590- def test_compute_can_update_available_resource(self):
591- """Confirm compute updates their record of compute-service table."""
592- host = 'foo'
593- binary = 'nova-compute'
594- topic = 'compute'
595-
596- # Any mocks are not working without UnsetStubs() here.
597- self.mox.UnsetStubs()
598- ctxt = context.get_admin_context()
599- service_ref = db.service_create(ctxt, {'host': host,
600- 'binary': binary,
601- 'topic': topic})
602- serv = service.Service(host,
603- binary,
604- topic,
605- 'nova.compute.manager.ComputeManager')
606-
607- # This testcase want to test calling update_available_resource.
608- # No need to call periodic call, then below variable must be set 0.
609- serv.report_interval = 0
610- serv.periodic_interval = 0
611-
612- # Creating mocks
613- self.mox.StubOutWithMock(service.rpc.Connection, 'instance')
614- service.rpc.Connection.instance(new=mox.IgnoreArg())
615-
616- self.mox.StubOutWithMock(rpc,
617- 'TopicAdapterConsumer',
618- use_mock_anything=True)
619- self.mox.StubOutWithMock(rpc,
620- 'FanoutAdapterConsumer',
621- use_mock_anything=True)
622-
623- self.mox.StubOutWithMock(rpc,
624- 'ConsumerSet',
625- use_mock_anything=True)
626-
627- rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
628- topic=topic,
629- proxy=mox.IsA(service.Service)).AndReturn(
630- rpc.TopicAdapterConsumer)
631-
632- rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
633- topic='%s.%s' % (topic, host),
634- proxy=mox.IsA(service.Service)).AndReturn(
635- rpc.TopicAdapterConsumer)
636-
637- rpc.FanoutAdapterConsumer(connection=mox.IgnoreArg(),
638- topic=topic,
639- proxy=mox.IsA(service.Service)).AndReturn(
640- rpc.FanoutAdapterConsumer)
641-
642- def wait_func(self, limit=None):
643- return None
644-
645- mock_cset = self.mox.CreateMock(rpc.ConsumerSet,
646- {'wait': wait_func})
647- rpc.ConsumerSet(connection=mox.IgnoreArg(),
648- consumer_list=mox.IsA(list)).AndReturn(mock_cset)
649- wait_func(mox.IgnoreArg())
650-
651- self.mox.StubOutWithMock(serv.manager.driver,
652- 'update_available_resource')
653- serv.manager.driver.update_available_resource(mox.IgnoreArg(), host)
654-
655- # Just doing start()-stop(), not confirm new db record is created,
656- # because update_available_resource() works only in
657- # libvirt environment. This testcase confirms
658- # update_available_resource() is called. Otherwise, mox complains.
659- self.mox.ReplayAll()
660- serv.start()
661- serv.stop()
662-
663- db.service_destroy(ctxt, service_ref['id'])
664-
665
666 class TestWSGIService(test.TestCase):
667
668
669=== modified file 'nova/tests/test_test.py'
670--- nova/tests/test_test.py 2011-02-24 01:58:32 +0000
671+++ nova/tests/test_test.py 2011-07-29 19:17:40 +0000
672@@ -33,8 +33,13 @@
673 self.start_service('compute')
674
675 def test_rpc_consumer_isolation(self):
676- connection = rpc.Connection.instance(new=True)
677- consumer = rpc.TopicAdapterConsumer(connection, topic='compute')
678- consumer.register_callback(
679- lambda x, y: self.fail('I should never be called'))
680+ class NeverCalled(object):
681+
682+ def __getattribute__(*args):
683+ assert False, "I should never get called."
684+
685+ connection = rpc.create_connection(new=True)
686+ proxy = NeverCalled()
687+ consumer = rpc.create_consumer(connection, 'compute',
688+ proxy, fanout=False)
689 consumer.attach_to_eventlet()