Merge lp:~openstack-gd/nova/nova-avail-zones into lp:~hudson-openstack/nova/trunk
Status: | Merged |
---|---|
Approved by: | Soren Hansen |
Approved revision: | 490 |
Merged at revision: | 551 |
Proposed branch: | lp:~openstack-gd/nova/nova-avail-zones |
Merge into: | lp:~hudson-openstack/nova/trunk |
Diff against target: |
487 lines (+235/-21) 12 files modified
Authors (+1/-0)
nova/api/ec2/cloud.py (+41/-4)
nova/compute/api.py (+2/-1)
nova/db/api.py (+9/-4)
nova/db/sqlalchemy/api.py (+13/-4)
nova/db/sqlalchemy/models.py (+1/-0)
nova/flags.py (+0/-1)
nova/scheduler/zone.py (+56/-0)
nova/service.py (+3/-1)
nova/tests/test_cloud.py (+44/-2)
nova/tests/test_scheduler.py (+54/-0)
nova/tests/test_service.py (+11/-4)
To merge this branch: | bzr merge lp:~openstack-gd/nova/nova-avail-zones |
Related bugs: | |
Related blueprints: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Devin Carlen (community) | Approve | ||
Ilya Alekseyev (community) | Approve | ||
Vish Ishaya (community) | Approve | ||
Vish (community) | Abstain | ||
Nova Core security contacts | Pending | ||
Todd Willey | Pending | ||
Eric Day | Pending | ||
Soren Hansen | Pending | ||
Review via email: mp+44878@code.launchpad.net |
Commit message
Description of the change
Added support for availability zones for compute.
models.Service got an additional field, availability_zone, and a ZoneScheduler was created that makes scheduling decisions based on this field.
Also replaced the fake 'nova' zone in the EC2 cloud API.
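The zone reporting this branch adds to the EC2 API can be sketched as follows. This is a minimal, self-contained approximation of the `_describe_availability_zones` logic in the diff, not Nova's actual code: enabled services define the "available" zones, and zones backed only by disabled services are reported as "not available".

```python
def describe_availability_zones(services):
    """Group services into EC2-style zone info.

    services: list of dicts with 'availability_zone' and 'disabled' keys.
    """
    # Zones with at least one enabled service are available.
    available = []
    for svc in services:
        zone = svc['availability_zone']
        if not svc['disabled'] and zone not in available:
            available.append(zone)
    # Zones that only have disabled services are not available.
    not_available = []
    for svc in services:
        zone = svc['availability_zone']
        if (svc['disabled'] and zone not in available
                and zone not in not_available):
            not_available.append(zone)
    result = [{'zoneName': z, 'zoneState': 'available'} for z in available]
    result += [{'zoneName': z, 'zoneState': 'not available'}
               for z in not_available]
    return {'availabilityZoneInfo': result}
```

Note that a zone with both enabled and disabled services counts as available, which matches the filtering in the diff below.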
Vish Ishaya (vishvananda) wrote:
This is looking pretty good. It appears that volumes are not handled. Did you plan on doing those in another patch? I'm also wondering if this could actually be a subclass of Simple Scheduler, so we at least get the added functionality of going to the least loaded host. Another possibility is just to ignore the zone if it is None, and add it to simple scheduler.
Eldar Nugaev (reldan) wrote:
> This is looking pretty good. It appears that volumes are not handled. Did
> you plan on doing those in another patch? I'm also wondering if this could
> actually be a subclass of Simple Scheduler, so we at least get the added
> functionality of going to the least loaded host. Another possibility is just
> to ignore the zone if it is None, and add it to simple scheduler.
Thank you for the review. We haven't handled block storage devices yet. This is the easiest path that allows running compute nodes on different physical hosts. If this patch is OK, we will think about adding availability zones for volumes.
Should we add zone handling to the Simple Scheduler in this patch, or can we add it in the volume zones patch?
Vish Ishaya (vishvananda) wrote:
I'd like to see this as an addition to simple scheduler, but you can put it in when you do volume availability zones.
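The suggestion above could be folded into a least-loaded scheduler roughly as sketched here. This is a hedged illustration, not Nova's actual SimpleScheduler API: the class name and the plain-dict data layout are invented for the example, but it shows the key idea of ignoring the zone filter when no zone was requested.

```python
class ZoneAwareSimpleScheduler:
    """Illustrative least-loaded scheduler with an optional zone filter."""

    def __init__(self, loads, zones):
        self.loads = loads   # host -> number of running instances
        self.zones = zones   # host -> availability zone

    def schedule(self, availability_zone=None):
        # When no zone is requested, fall back to plain
        # least-loaded behaviour over all hosts.
        hosts = [h for h in self.loads
                 if availability_zone is None
                 or self.zones.get(h) == availability_zone]
        if not hosts:
            raise RuntimeError('No hosts found')
        # The least loaded host within the (possibly filtered) set wins.
        return min(hosts, key=lambda h: self.loads[h])
```

For example, `schedule()` picks the globally least loaded host, while `schedule('zone1')` picks the least loaded host among those registered in zone1.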
Eldar Nugaev (reldan) wrote:
I have a problem. I need to merge my changes with trunk, but I can't do it the right way. I suppose I see a partial commit http://
api/ec2/cloud.py
def _describe_
rv = {'availabilityZ
services = db.service_
now = db.get_time()
But db doesn't have service_
Soren Hansen (soren) wrote:
Good catch. I've reported https:/
Ilya Alekseyev (ilyaalekseyev) wrote:
Looks good.
Ilya Alekseyev (ilyaalekseyev) wrote:
> Good catch. I've reported https:/
Fine. We took the changes and merged them into our branch. Could you please review our code again?
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~openstack-gd/nova/nova-avail-zones into lp:nova failed. Below is the output from the failed tests.
nova/service.
Limit all lines to a maximum of 79 characters.
There are still many devices around that are limited to 80 character
lines; plus, limiting windows to 80 characters makes it possible to have
several windows side-by-side. The default wrapping on such devices looks
ugly. Therefore, please limit all lines to a maximum of 79 characters.
For flowing long blocks of text (docstrings or comments), limiting the
length to 72 characters is recommended.
nova/api/
Limit all lines to a maximum of 79 characters.
nova/api/
for zone in [service.
Limit all lines to a maximum of 79 characters.
nova/api/
Limit all lines to a maximum of 79 characters.
nova/api/
Limi...
Thierry Carrez (ttx) wrote:
@Ilya: You need to sign the CLA and add your reference to http://
Hudson/Tarmac should reject the change, if my CLA verifier plugin works...
Ilya Alekseyev (ilyaalekseyev) wrote:
> @Ilya: You need to sign CLA and add your reference to
> http://
> http://
>
> Hudson/Tarmac should reject the change, if my CLA verifier plugin works...
Fixed. Sorry, I forgot to add myself to the wiki page.
- 490. By Eldar Nugaev: remove extra whitespace
Preview Diff
1 | === modified file 'Authors' |
2 | --- Authors 2011-01-10 17:37:06 +0000 |
3 | +++ Authors 2011-01-11 23:22:10 +0000 |
4 | @@ -15,6 +15,7 @@ |
5 | Eric Day <eday@oddments.org> |
6 | Ewan Mellor <ewan.mellor@citrix.com> |
7 | Hisaki Ohara <hisaki.ohara@intel.com> |
8 | +Ilya Alekseyev <ialekseev@griddynamics.com> |
9 | Jay Pipes <jaypipes@gmail.com> |
10 | Jesse Andrews <anotherjesse@gmail.com> |
11 | Joe Heck <heckj@mac.com> |
12 | |
13 | === modified file 'nova/api/ec2/cloud.py' |
14 | --- nova/api/ec2/cloud.py 2011-01-10 20:29:42 +0000 |
15 | +++ nova/api/ec2/cloud.py 2011-01-11 23:22:10 +0000 |
16 | @@ -132,6 +132,21 @@ |
17 | result[key] = [line] |
18 | return result |
19 | |
20 | + def _trigger_refresh_security_group(self, context, security_group): |
21 | + nodes = set([instance['host'] for instance in security_group.instances |
22 | + if instance['host'] is not None]) |
23 | + for node in nodes: |
24 | + rpc.cast(context, |
25 | + '%s.%s' % (FLAGS.compute_topic, node), |
26 | + {"method": "refresh_security_group", |
27 | + "args": {"security_group_id": security_group.id}}) |
28 | + |
29 | + def _get_availability_zone_by_host(self, context, host): |
30 | + services = db.service_get_all_by_host(context, host) |
31 | + if len(services) > 0: |
32 | + return services[0]['availability_zone'] |
33 | + return 'unknown zone' |
34 | + |
35 | def get_metadata(self, address): |
36 | ctxt = context.get_admin_context() |
37 | instance_ref = self.compute_api.get_all(ctxt, fixed_ip=address) |
38 | @@ -144,6 +159,8 @@ |
39 | else: |
40 | keys = '' |
41 | hostname = instance_ref['hostname'] |
42 | + host = instance_ref['host'] |
43 | + availability_zone = self._get_availability_zone_by_host(ctxt, host) |
44 | floating_ip = db.instance_get_floating_address(ctxt, |
45 | instance_ref['id']) |
46 | ec2_id = id_to_ec2_id(instance_ref['id']) |
47 | @@ -166,8 +183,7 @@ |
48 | 'local-hostname': hostname, |
49 | 'local-ipv4': address, |
50 | 'kernel-id': instance_ref['kernel_id'], |
51 | - # TODO(vish): real zone |
52 | - 'placement': {'availability-zone': 'nova'}, |
53 | + 'placement': {'availability-zone': availability_zone}, |
54 | 'public-hostname': hostname, |
55 | 'public-ipv4': floating_ip or '', |
56 | 'public-keys': keys, |
57 | @@ -191,8 +207,26 @@ |
58 | return self._describe_availability_zones(context, **kwargs) |
59 | |
60 | def _describe_availability_zones(self, context, **kwargs): |
61 | - return {'availabilityZoneInfo': [{'zoneName': 'nova', |
62 | - 'zoneState': 'available'}]} |
63 | + enabled_services = db.service_get_all(context) |
64 | + disabled_services = db.service_get_all(context, True) |
65 | + available_zones = [] |
66 | + for zone in [service.availability_zone for service |
67 | + in enabled_services]: |
68 | + if not zone in available_zones: |
69 | + available_zones.append(zone) |
70 | + not_available_zones = [] |
71 | + for zone in [service.availability_zone for service in disabled_services |
72 | + if not service['availability_zone'] in available_zones]: |
73 | + if not zone in not_available_zones: |
74 | + not_available_zones.append(zone) |
75 | + result = [] |
76 | + for zone in available_zones: |
77 | + result.append({'zoneName': zone, |
78 | + 'zoneState': "available"}) |
79 | + for zone in not_available_zones: |
80 | + result.append({'zoneName': zone, |
81 | + 'zoneState': "not available"}) |
82 | + return {'availabilityZoneInfo': result} |
83 | |
84 | def _describe_availability_zones_verbose(self, context, **kwargs): |
85 | rv = {'availabilityZoneInfo': [{'zoneName': 'nova', |
86 | @@ -646,6 +680,9 @@ |
87 | i['amiLaunchIndex'] = instance['launch_index'] |
88 | i['displayName'] = instance['display_name'] |
89 | i['displayDescription'] = instance['display_description'] |
90 | + host = instance['host'] |
91 | + zone = self._get_availability_zone_by_host(context, host) |
92 | + i['placement'] = {'availabilityZone': zone} |
93 | if instance['reservation_id'] not in reservations: |
94 | r = {} |
95 | r['reservationId'] = instance['reservation_id'] |
96 | |
97 | === modified file 'nova/compute/api.py' |
98 | --- nova/compute/api.py 2011-01-11 06:47:35 +0000 |
99 | +++ nova/compute/api.py 2011-01-11 23:22:10 +0000 |
100 | @@ -186,7 +186,8 @@ |
101 | FLAGS.scheduler_topic, |
102 | {"method": "run_instance", |
103 | "args": {"topic": FLAGS.compute_topic, |
104 | - "instance_id": instance_id}}) |
105 | + "instance_id": instance_id, |
106 | + "availability_zone": availability_zone}}) |
107 | |
108 | for group_id in security_groups: |
109 | self.trigger_security_group_members_refresh(elevated, group_id) |
110 | |
111 | === modified file 'nova/db/api.py' |
112 | --- nova/db/api.py 2011-01-11 12:24:58 +0000 |
113 | +++ nova/db/api.py 2011-01-11 23:22:10 +0000 |
114 | @@ -81,16 +81,21 @@ |
115 | return IMPL.service_get(context, service_id) |
116 | |
117 | |
118 | -def service_get_all(context): |
119 | - """Get a list of all services on any machine on any topic of any type""" |
120 | - return IMPL.service_get_all(context) |
121 | +def service_get_all(context, disabled=False): |
122 | + """Get all service.""" |
123 | + return IMPL.service_get_all(context, None, disabled) |
124 | |
125 | |
126 | def service_get_all_by_topic(context, topic): |
127 | - """Get all compute services for a given topic.""" |
128 | + """Get all services for a given topic.""" |
129 | return IMPL.service_get_all_by_topic(context, topic) |
130 | |
131 | |
132 | +def service_get_all_by_host(context, host): |
133 | + """Get all services for a given host.""" |
134 | + return IMPL.service_get_all_by_host(context, host) |
135 | + |
136 | + |
137 | def service_get_all_compute_sorted(context): |
138 | """Get all compute services sorted by instance count. |
139 | |
140 | |
141 | === modified file 'nova/db/sqlalchemy/api.py' |
142 | --- nova/db/sqlalchemy/api.py 2011-01-11 12:24:58 +0000 |
143 | +++ nova/db/sqlalchemy/api.py 2011-01-11 23:22:10 +0000 |
144 | @@ -135,14 +135,14 @@ |
145 | |
146 | |
147 | @require_admin_context |
148 | -def service_get_all(context, session=None): |
149 | +def service_get_all(context, session=None, disabled=False): |
150 | if not session: |
151 | session = get_session() |
152 | |
153 | result = session.query(models.Service).\ |
154 | - filter_by(deleted=can_read_deleted(context)).\ |
155 | - all() |
156 | - |
157 | + filter_by(deleted=can_read_deleted(context)).\ |
158 | + filter_by(disabled=disabled).\ |
159 | + all() |
160 | return result |
161 | |
162 | |
163 | @@ -157,6 +157,15 @@ |
164 | |
165 | |
166 | @require_admin_context |
167 | +def service_get_all_by_host(context, host): |
168 | + session = get_session() |
169 | + return session.query(models.Service).\ |
170 | + filter_by(deleted=False).\ |
171 | + filter_by(host=host).\ |
172 | + all() |
173 | + |
174 | + |
175 | +@require_admin_context |
176 | def _service_get_all_topic_subquery(context, session, topic, subq, label): |
177 | sort_value = getattr(subq.c, label) |
178 | return session.query(models.Service, func.coalesce(sort_value, 0)).\ |
179 | |
180 | === modified file 'nova/db/sqlalchemy/models.py' |
181 | --- nova/db/sqlalchemy/models.py 2011-01-10 17:37:06 +0000 |
182 | +++ nova/db/sqlalchemy/models.py 2011-01-11 23:22:10 +0000 |
183 | @@ -149,6 +149,7 @@ |
184 | topic = Column(String(255)) |
185 | report_count = Column(Integer, nullable=False, default=0) |
186 | disabled = Column(Boolean, default=False) |
187 | + availability_zone = Column(String(255), default='nova') |
188 | |
189 | |
190 | class Certificate(BASE, NovaBase): |
191 | |
192 | === modified file 'nova/flags.py' |
193 | --- nova/flags.py 2011-01-11 06:47:35 +0000 |
194 | +++ nova/flags.py 2011-01-11 23:22:10 +0000 |
195 | @@ -301,6 +301,5 @@ |
196 | DEFINE_string('host', socket.gethostname(), |
197 | 'name of this node') |
198 | |
199 | -# UNUSED |
200 | DEFINE_string('node_availability_zone', 'nova', |
201 | 'availability zone of this node') |
202 | |
203 | === added file 'nova/scheduler/zone.py' |
204 | --- nova/scheduler/zone.py 1970-01-01 00:00:00 +0000 |
205 | +++ nova/scheduler/zone.py 2011-01-11 23:22:10 +0000 |
206 | @@ -0,0 +1,56 @@ |
207 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
208 | + |
209 | +# Copyright (c) 2010 Openstack, LLC. |
210 | +# Copyright 2010 United States Government as represented by the |
211 | +# Administrator of the National Aeronautics and Space Administration. |
212 | +# All Rights Reserved. |
213 | +# |
214 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
215 | +# not use this file except in compliance with the License. You may obtain |
216 | +# a copy of the License at |
217 | +# |
218 | +# http://www.apache.org/licenses/LICENSE-2.0 |
219 | +# |
220 | +# Unless required by applicable law or agreed to in writing, software |
221 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
222 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
223 | +# License for the specific language governing permissions and limitations |
224 | +# under the License. |
225 | + |
226 | +""" |
227 | +Availability Zone Scheduler implementation |
228 | +""" |
229 | + |
230 | +import random |
231 | + |
232 | +from nova.scheduler import driver |
233 | +from nova import db |
234 | + |
235 | + |
236 | +class ZoneScheduler(driver.Scheduler): |
237 | + """Implements Scheduler as a random node selector.""" |
238 | + |
239 | + def hosts_up_with_zone(self, context, topic, zone): |
240 | + """Return the list of hosts that have a running service |
241 | + for topic and availability zone (if defined). |
242 | + """ |
243 | + |
244 | + if zone is None: |
245 | + return self.hosts_up(context, topic) |
246 | + |
247 | + services = db.service_get_all_by_topic(context, topic) |
248 | + return [service.host |
249 | + for service in services |
250 | + if self.service_is_up(service) |
251 | + and service.availability_zone == zone] |
252 | + |
253 | + def schedule(self, context, topic, *_args, **_kwargs): |
254 | + """Picks a host that is up at random in selected |
255 | + availability zone (if defined). |
256 | + """ |
257 | + |
258 | + zone = _kwargs.get('availability_zone') |
259 | + hosts = self.hosts_up_with_zone(context, topic, zone) |
260 | + if not hosts: |
261 | + raise driver.NoValidHost(_("No hosts found")) |
262 | + return hosts[int(random.random() * len(hosts))] |
263 | |
264 | === modified file 'nova/service.py' |
265 | --- nova/service.py 2011-01-09 00:39:12 +0000 |
266 | +++ nova/service.py 2011-01-11 23:22:10 +0000 |
267 | @@ -113,11 +113,13 @@ |
268 | self.timers.append(periodic) |
269 | |
270 | def _create_service_ref(self, context): |
271 | + zone = FLAGS.node_availability_zone |
272 | service_ref = db.service_create(context, |
273 | {'host': self.host, |
274 | 'binary': self.binary, |
275 | 'topic': self.topic, |
276 | - 'report_count': 0}) |
277 | + 'report_count': 0, |
278 | + 'availability_zone': zone}) |
279 | self.service_id = service_ref['id'] |
280 | |
281 | def __getattr__(self, key): |
282 | |
283 | === modified file 'nova/tests/test_cloud.py' |
284 | --- nova/tests/test_cloud.py 2011-01-10 07:01:10 +0000 |
285 | +++ nova/tests/test_cloud.py 2011-01-11 23:22:10 +0000 |
286 | @@ -133,10 +133,35 @@ |
287 | db.volume_destroy(self.context, vol1['id']) |
288 | db.volume_destroy(self.context, vol2['id']) |
289 | |
290 | + def test_describe_availability_zones(self): |
291 | + """Makes sure describe_availability_zones works and filters results.""" |
292 | + service1 = db.service_create(self.context, {'host': 'host1_zones', |
293 | + 'binary': "nova-compute", |
294 | + 'topic': 'compute', |
295 | + 'report_count': 0, |
296 | + 'availability_zone': "zone1"}) |
297 | + service2 = db.service_create(self.context, {'host': 'host2_zones', |
298 | + 'binary': "nova-compute", |
299 | + 'topic': 'compute', |
300 | + 'report_count': 0, |
301 | + 'availability_zone': "zone2"}) |
302 | + result = self.cloud.describe_availability_zones(self.context) |
303 | + self.assertEqual(len(result['availabilityZoneInfo']), 3) |
304 | + db.service_destroy(self.context, service1['id']) |
305 | + db.service_destroy(self.context, service2['id']) |
306 | + |
307 | def test_describe_instances(self): |
308 | """Makes sure describe_instances works and filters results.""" |
309 | - inst1 = db.instance_create(self.context, {'reservation_id': 'a'}) |
310 | - inst2 = db.instance_create(self.context, {'reservation_id': 'a'}) |
311 | + inst1 = db.instance_create(self.context, {'reservation_id': 'a', |
312 | + 'host': 'host1'}) |
313 | + inst2 = db.instance_create(self.context, {'reservation_id': 'a', |
314 | + 'host': 'host2'}) |
315 | + comp1 = db.service_create(self.context, {'host': 'host1', |
316 | + 'availability_zone': 'zone1', |
317 | + 'topic': "compute"}) |
318 | + comp2 = db.service_create(self.context, {'host': 'host2', |
319 | + 'availability_zone': 'zone2', |
320 | + 'topic': "compute"}) |
321 | result = self.cloud.describe_instances(self.context) |
322 | result = result['reservationSet'][0] |
323 | self.assertEqual(len(result['instancesSet']), 2) |
324 | @@ -147,8 +172,12 @@ |
325 | self.assertEqual(len(result['instancesSet']), 1) |
326 | self.assertEqual(result['instancesSet'][0]['instanceId'], |
327 | instance_id) |
328 | + self.assertEqual(result['instancesSet'][0] |
329 | + ['placement']['availabilityZone'], 'zone2') |
330 | db.instance_destroy(self.context, inst1['id']) |
331 | db.instance_destroy(self.context, inst2['id']) |
332 | + db.service_destroy(self.context, comp1['id']) |
333 | + db.service_destroy(self.context, comp2['id']) |
334 | |
335 | def test_console_output(self): |
336 | image_id = FLAGS.default_image |
337 | @@ -228,6 +257,19 @@ |
338 | LOG.debug(_("Terminating instance %s"), instance_id) |
339 | rv = self.compute.terminate_instance(instance_id) |
340 | |
341 | + def test_describe_instances(self): |
342 | + """Makes sure describe_instances works.""" |
343 | + instance1 = db.instance_create(self.context, {'host': 'host2'}) |
344 | + comp1 = db.service_create(self.context, {'host': 'host2', |
345 | + 'availability_zone': 'zone1', |
346 | + 'topic': "compute"}) |
347 | + result = self.cloud.describe_instances(self.context) |
348 | + self.assertEqual(result['reservationSet'][0] |
349 | + ['instancesSet'][0] |
350 | + ['placement']['availabilityZone'], 'zone1') |
351 | + db.instance_destroy(self.context, instance1['id']) |
352 | + db.service_destroy(self.context, comp1['id']) |
353 | + |
354 | def test_instance_update_state(self): |
355 | def instance(num): |
356 | return { |
357 | |
358 | === modified file 'nova/tests/test_scheduler.py' |
359 | --- nova/tests/test_scheduler.py 2011-01-03 19:29:39 +0000 |
360 | +++ nova/tests/test_scheduler.py 2011-01-11 23:22:10 +0000 |
361 | @@ -21,6 +21,7 @@ |
362 | |
363 | import datetime |
364 | |
365 | +from mox import IgnoreArg |
366 | from nova import context |
367 | from nova import db |
368 | from nova import flags |
369 | @@ -76,6 +77,59 @@ |
370 | scheduler.named_method(ctxt, 'topic', num=7) |
371 | |
372 | |
373 | +class ZoneSchedulerTestCase(test.TestCase): |
374 | + """Test case for zone scheduler""" |
375 | + def setUp(self): |
376 | + super(ZoneSchedulerTestCase, self).setUp() |
377 | + self.flags(scheduler_driver='nova.scheduler.zone.ZoneScheduler') |
378 | + |
379 | + def _create_service_model(self, **kwargs): |
380 | + service = db.sqlalchemy.models.Service() |
381 | + service.host = kwargs['host'] |
382 | + service.disabled = False |
383 | + service.deleted = False |
384 | + service.report_count = 0 |
385 | + service.binary = 'nova-compute' |
386 | + service.topic = 'compute' |
387 | + service.id = kwargs['id'] |
388 | + service.availability_zone = kwargs['zone'] |
389 | + service.created_at = datetime.datetime.utcnow() |
390 | + return service |
391 | + |
392 | + def test_with_two_zones(self): |
393 | + scheduler = manager.SchedulerManager() |
394 | + ctxt = context.get_admin_context() |
395 | + service_list = [self._create_service_model(id=1, |
396 | + host='host1', |
397 | + zone='zone1'), |
398 | + self._create_service_model(id=2, |
399 | + host='host2', |
400 | + zone='zone2'), |
401 | + self._create_service_model(id=3, |
402 | + host='host3', |
403 | + zone='zone2'), |
404 | + self._create_service_model(id=4, |
405 | + host='host4', |
406 | + zone='zone2'), |
407 | + self._create_service_model(id=5, |
408 | + host='host5', |
409 | + zone='zone2')] |
410 | + self.mox.StubOutWithMock(db, 'service_get_all_by_topic') |
411 | + arg = IgnoreArg() |
412 | + db.service_get_all_by_topic(arg, arg).AndReturn(service_list) |
413 | + self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) |
414 | + rpc.cast(ctxt, |
415 | + 'compute.host1', |
416 | + {'method': 'run_instance', |
417 | + 'args': {'instance_id': 'i-ffffffff', |
418 | + 'availability_zone': 'zone1'}}) |
419 | + self.mox.ReplayAll() |
420 | + scheduler.run_instance(ctxt, |
421 | + 'compute', |
422 | + instance_id='i-ffffffff', |
423 | + availability_zone='zone1') |
424 | + |
425 | + |
426 | class SimpleDriverTestCase(test.TestCase): |
427 | """Test case for simple driver""" |
428 | def setUp(self): |
429 | |
430 | === modified file 'nova/tests/test_service.py' |
431 | --- nova/tests/test_service.py 2011-01-03 19:29:39 +0000 |
432 | +++ nova/tests/test_service.py 2011-01-11 23:22:10 +0000 |
433 | @@ -133,7 +133,8 @@ |
434 | service_create = {'host': host, |
435 | 'binary': binary, |
436 | 'topic': topic, |
437 | - 'report_count': 0} |
438 | + 'report_count': 0, |
439 | + 'availability_zone': 'nova'} |
440 | service_ref = {'host': host, |
441 | 'binary': binary, |
442 | 'report_count': 0, |
443 | @@ -161,11 +162,13 @@ |
444 | service_create = {'host': host, |
445 | 'binary': binary, |
446 | 'topic': topic, |
447 | - 'report_count': 0} |
448 | + 'report_count': 0, |
449 | + 'availability_zone': 'nova'} |
450 | service_ref = {'host': host, |
451 | 'binary': binary, |
452 | 'topic': topic, |
453 | 'report_count': 0, |
454 | + 'availability_zone': 'nova', |
455 | 'id': 1} |
456 | |
457 | service.db.service_get_by_args(mox.IgnoreArg(), |
458 | @@ -193,11 +196,13 @@ |
459 | service_create = {'host': host, |
460 | 'binary': binary, |
461 | 'topic': topic, |
462 | - 'report_count': 0} |
463 | + 'report_count': 0, |
464 | + 'availability_zone': 'nova'} |
465 | service_ref = {'host': host, |
466 | 'binary': binary, |
467 | 'topic': topic, |
468 | 'report_count': 0, |
469 | + 'availability_zone': 'nova', |
470 | 'id': 1} |
471 | |
472 | service.db.service_get_by_args(mox.IgnoreArg(), |
473 | @@ -224,11 +229,13 @@ |
474 | service_create = {'host': host, |
475 | 'binary': binary, |
476 | 'topic': topic, |
477 | - 'report_count': 0} |
478 | + 'report_count': 0, |
479 | + 'availability_zone': 'nova'} |
480 | service_ref = {'host': host, |
481 | 'binary': binary, |
482 | 'topic': topic, |
483 | 'report_count': 0, |
484 | + 'availability_zone': 'nova', |
485 | 'id': 1} |
486 | |
487 | service.db.service_get_by_args(mox.IgnoreArg(), |