Merge lp:~sandy-walsh/nova/zones4 into lp:~hudson-openstack/nova/trunk
Status: Superseded
Proposed branch: lp:~sandy-walsh/nova/zones4
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~sandy-walsh/nova/zones3
Diff against target: 1345 lines (+675/-86), 21 files modified:
  nova/api/openstack/servers.py (+20/-1)
  nova/api/openstack/zones.py (+19/-14)
  nova/compute/api.py (+16/-1)
  nova/compute/manager.py (+3/-2)
  nova/db/api.py (+1/-0)
  nova/flags.py (+3/-2)
  nova/manager.py (+27/-1)
  nova/network/manager.py (+3/-2)
  nova/rpc.py (+59/-18)
  nova/scheduler/api.py (+214/-22)
  nova/scheduler/driver.py (+7/-0)
  nova/scheduler/manager.py (+13/-1)
  nova/scheduler/zone_manager.py (+36/-3)
  nova/service.py (+8/-2)
  nova/tests/api/openstack/test_zones.py (+26/-3)
  nova/tests/test_rpc.py (+2/-2)
  nova/tests/test_scheduler.py (+161/-0)
  nova/tests/test_service.py (+19/-9)
  nova/tests/test_test.py (+1/-1)
  nova/tests/test_zones.py (+34/-0)
  nova/volume/manager.py (+3/-2)
To merge this branch: bzr merge lp:~sandy-walsh/nova/zones4
Related bugs: (none)
Related blueprints: (none)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Rick Harris | community | | Approve |
| Matt Dietz | community | | Approve |

Review via email: mp+53726@code.launchpad.net
This proposal has been superseded by a proposal from 2011-03-24.
Commit message
In this branch we are forwarding incoming requests to child zones when the requested resource is not found in the current zone.
Description of the change
In this branch we are forwarding incoming requests to child zones when the requested resource is not found in the current zone.
For example: If 'nova pause 123' is issued against Zone 1, but instance 123 does not live in Zone 1, the call will be forwarded to all child zones hoping someone can deal with it.
NOTE: This currently only works with OpenStack API requests, and routing checks are only done against Compute instance_id lookups.
Specifically:
* servers.
* servers.create is pending for distributed scheduler
* servers.get_all will get added early in Diablo.
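The forwarding idiom described above can be sketched in a few lines. This is a hypothetical reduction, not the actual Nova code: the names `reroute_on_not_found`, `ask_child_zones`, and the dict-based context are illustrative stand-ins for the real decorator and RPC plumbing.

```python
# Sketch of the zone-forwarding idiom: run the call locally first, and
# only an InstanceNotFound triggers a fan-out to child zones.
# All names here are illustrative, not the actual Nova API.

class InstanceNotFound(Exception):
    pass


def reroute_on_not_found(forward):
    """Decorator factory: try locally, forward to child zones on a miss."""
    def decorator(method):
        def wrapper(context, instance_id, *args, **kwargs):
            try:
                return method(context, instance_id, *args, **kwargs)
            except InstanceNotFound:
                # Fan out to every child zone; any zone that knows the
                # instance handles the call.
                return forward(context, instance_id)
        return wrapper
    return decorator


def ask_child_zones(context, instance_id):
    # Stand-in for re-issuing the request against each child zone's API.
    return "forwarded %s to child zones" % instance_id


@reroute_on_not_found(ask_child_zones)
def pause(context, instance_id):
    if instance_id not in context["local_instances"]:
        raise InstanceNotFound()
    return "paused %s locally" % instance_id


ctx = {"local_instances": {42}}
print(pause(ctx, 42))    # handled in this zone
print(pause(ctx, 123))   # not found here, so forwarded
```

The key point is that the caller never sees the redirect: the decorator swallows the local miss and returns whatever the child zones produce.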
What I've been doing for testing:
1. Set up a Nova deployment in a VM (Zone0)
2. Clone the VM and set --zone_name=zone1 (and change all the IP addresses to the new address in nova.conf, glance.conf and novarc)
3. Set --enable_
4. Use the --connection_
5. Add Zone1 as a child of Zone0 (nova zone-add)
(make sure the instance IDs are different in each zone)
Example of calls being sent to child zones:
http://
Eric Day (eday) wrote:

Sandy Walsh (sandy-walsh) wrote:
Thanks Eric,
That was the way I was initially intending to go.
The closer I got, though, the more I saw that all calls would need to be re-marshaled before being sent to the children. We'd need a new client library abstraction layer (since forwarded requests may be OS API or EC2 API). I was hoping to avoid all that by simply forwarding the already-marshaled message.
This client library abstraction is likely to get a little unwieldy since it needs to support OS/EC2 and whomever else comes along (actually, sounds like another endorsement for DirectAPI.)
Also, with this approach I was assuming an idiom of "API checks parameters and bails if they're wrong. Work is done in the services." ... but I see the counter-argument.
Let me think more about the client library abstraction. Since we have big API changes coming soon, it might be timely to consider this.
Sandy Walsh (sandy-walsh) wrote:
Thinking a little more about it, I've got a concern about that approach. Not for technical reasons, but for schedule/political reasons: since the API is such a touchy subject and still under development, this could turn into a major blocker against getting anything practical done with Zones for a long time.
Eric Day (eday) wrote:
FWIW, I think making the call to novaclient (and re-marshal) from nova.compute.* is the correct thing to do. I would move forward with the OpenStack API for this as well, as that seems to be where we are committed at the moment. If we need to change this in the future, it will be a simple HTTP formatting issue inside novaclient (the API into novaclient shouldn't change).
Sandy Walsh (sandy-walsh) wrote:
Yup ... that's the direction I'll take. Thanks again Eric.
Sandy Walsh (sandy-walsh) wrote:
OK, it's all refactored out of middleware now. The <service>/api.py does the checking, and the external API just bails out after a redirect. It's all handled in decorators.
Rick Harris (rconradharris) wrote:
Nice work, Sandy.
Some really minor nits/suggestions:
> 402 + try:
> 403 + manager = getattr(nova, collection)
The getattr can be outside of the try-block (AFAICT).
> 404 + if isinstance(item_id, int) or item_id.isdigit():
> 405 + result = manager.
> 406 + else:
> 407 + result = manager.
One common way of handling this is to just go ahead and cast to int and
handle the possible ValueError (EAFP), like:
    try:
        result = manager.
    except ValueError:
        result = manager.
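Rick's EAFP suggestion, made concrete with a stand-in manager. The `get_by_id`/`get_by_name` method names are hypothetical: the real method names were elided in the quote above.

```python
# EAFP vs LBYL for a "is this id numeric?" dispatch.
# FakeManager and its method names are illustrative stand-ins.

class FakeManager:
    def get_by_id(self, item_id):
        return "instance #%d" % item_id

    def get_by_name(self, name):
        return "instance named %r" % name


def lookup(manager, item_id):
    # EAFP: just attempt the int cast and recover from ValueError,
    # instead of probing with isinstance()/isdigit() beforehand (LBYL).
    try:
        return manager.get_by_id(int(item_id))
    except ValueError:
        return manager.get_by_name(item_id)


print(lookup(FakeManager(), "123"))   # instance #123
print(lookup(FakeManager(), "web1"))  # instance named 'web1'
```

The EAFP form also handles ints and numeric strings with one code path, which is exactly the case the original isinstance/isdigit check was juggling.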
> 371 +def _wrap_method(
> 372 + """Wrap method to supply self."""
> 373 + def _wrap(*args, **kwargs):
> 374 + return function(self, *args, **kwargs)
> 375 + return _wrap
For the newly added decorators, it might be worth using @functools.wraps so
that the outer-func inherits the inner-funcs attributes. Can make debugging a
little easier.
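For reference, here is what @functools.wraps buys over the bare wrapper pattern quoted above. A minimal sketch, not the Nova code:

```python
import functools


def plain_wrap(function):
    # Bare wrapper: the outer function's metadata shadows the inner one's.
    def _wrap(*args, **kwargs):
        return function(*args, **kwargs)
    return _wrap


def wraps_wrap(function):
    # functools.wraps copies __name__, __doc__, etc. onto the wrapper.
    @functools.wraps(function)
    def _wrap(*args, **kwargs):
        return function(*args, **kwargs)
    return _wrap


@plain_wrap
def pause_a():
    """Pause the instance."""


@wraps_wrap
def pause_b():
    """Pause the instance."""


print(pause_a.__name__)  # _wrap   -- unhelpful in tracebacks/logs
print(pause_b.__name__)  # pause_b -- metadata preserved
print(pause_b.__doc__)   # Pause the instance.
```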
> 397 +def _issue_
> 398 + item_id):
Trailing '\' isn't needed.
Could line up item_id with the opening paren.
> 264 + @scheduler_
> 265 def get_diagnostics
Should that be 'get_diagnostics'? I ask because it's the only one that differs
in that regard.
Matt Dietz (cerberus) wrote:
The exception-raising idea is really interesting. Hard to follow at first, but the tests make it clear IMO. I think this looks pretty promising.
Sandy Walsh (sandy-walsh) wrote:
Thanks, Rick ... great feedback. I've implemented your changes.
re: get_diagnostics, the name in the decorator is the name of the method in novaclient, so it may not map 1:1 to the method being wrapped (as in this case).
Basically the decorator is saying, "if this method can't find the instance, check with the child zones by calling <method> in novaclient"
Sandy Walsh (sandy-walsh) wrote:
(oh, push pending, will change WIP to let you know)
Rick Harris (rconradharris) wrote:
> 788 +def redirect_
> 648 +def _wrap_method(
Didn't like the idea of using @functools.wraps to propagate the inner func's metadata?
Sandy Walsh (sandy-walsh) wrote:
I do. Good suggestion. But I'll fix that up on the next branch. Haven't spent enough time with it and don't want to miss the window today.
Rick Harris (rconradharris) wrote:
Good deal, lgtm. Thanks!
OpenStack Infra (hudson-openstack) wrote:
No proposals found for merge of lp:~sandy-walsh/nova/zones3 into lp:nova.
Unmerged revisions

720. By Sandy Walsh: trunk merge
Preview Diff
```diff
=== modified file 'nova/api/openstack/servers.py'
--- nova/api/openstack/servers.py   2011-03-24 05:08:17 +0000
+++ nova/api/openstack/servers.py   2011-03-24 12:14:14 +0000
@@ -38,6 +38,7 @@
 from nova.compute import power_state
 from nova.quota import QuotaError
 import nova.api.openstack
+from nova.scheduler import api as scheduler_api
 
 
 LOG = logging.getLogger('server')
@@ -88,15 +89,18 @@
                    for inst in limited_list]
         return dict(servers=servers)
 
+    @scheduler_api.redirect_handler
     def show(self, req, id):
         """ Returns server details by server id """
         try:
-            instance = self.compute_api.get(req.environ['nova.context'], id)
+            instance = self.compute_api.routing_get(
+                req.environ['nova.context'], id)
             builder = self._get_view_builder(req)
             return builder.build(instance, is_detail=True)
         except exception.NotFound:
             return faults.Fault(exc.HTTPNotFound())
 
+    @scheduler_api.redirect_handler
     def delete(self, req, id):
         """ Destroys a server """
         try:
@@ -227,6 +231,7 @@
             # if the original error is okay, just reraise it
             raise error
 
+    @scheduler_api.redirect_handler
     def update(self, req, id):
         """ Updates the server name or password """
         if len(req.body) == 0:
@@ -252,6 +257,7 @@
             return faults.Fault(exc.HTTPNotFound())
         return exc.HTTPNoContent()
 
+    @scheduler_api.redirect_handler
     def action(self, req, id):
         """Multi-purpose method used to reboot, rebuild, or
         resize a server"""
@@ -317,6 +323,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def lock(self, req, id):
         """
         lock the instance with id
@@ -332,6 +339,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def unlock(self, req, id):
         """
         unlock the instance with id
@@ -347,6 +355,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def get_lock(self, req, id):
         """
         return the boolean state of (instance with id)'s lock
@@ -361,6 +370,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def reset_network(self, req, id):
         """
         Reset networking on an instance (admin only).
@@ -375,6 +385,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def inject_network_info(self, req, id):
         """
         Inject network info for an instance (admin only).
@@ -389,6 +400,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def pause(self, req, id):
         """ Permit Admins to Pause the server. """
         ctxt = req.environ['nova.context']
@@ -400,6 +412,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def unpause(self, req, id):
         """ Permit Admins to Unpause the server. """
         ctxt = req.environ['nova.context']
@@ -411,6 +424,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def suspend(self, req, id):
         """permit admins to suspend the server"""
         context = req.environ['nova.context']
@@ -422,6 +436,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def resume(self, req, id):
         """permit admins to resume the server from suspend"""
         context = req.environ['nova.context']
@@ -433,6 +448,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def rescue(self, req, id):
         """Permit users to rescue the server."""
         context = req.environ["nova.context"]
@@ -444,6 +460,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def unrescue(self, req, id):
         """Permit users to unrescue the server."""
         context = req.environ["nova.context"]
@@ -455,6 +472,7 @@
             return faults.Fault(exc.HTTPUnprocessableEntity())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def get_ajax_console(self, req, id):
         """ Returns a url to an instance's ajaxterm console. """
         try:
@@ -464,6 +482,7 @@
             return faults.Fault(exc.HTTPNotFound())
         return exc.HTTPAccepted()
 
+    @scheduler_api.redirect_handler
     def diagnostics(self, req, id):
         """Permit Admins to retrieve server diagnostics."""
         ctxt = req.environ["nova.context"]
```
```diff
=== modified file 'nova/api/openstack/zones.py'
--- nova/api/openstack/zones.py   2011-03-11 19:49:32 +0000
+++ nova/api/openstack/zones.py   2011-03-24 12:14:14 +0000
@@ -16,8 +16,8 @@
 import common
 
 from nova import flags
+from nova import log as logging
 from nova import wsgi
-from nova import db
 from nova.scheduler import api
 
 
@@ -38,7 +38,8 @@
 
 
 def _scrub_zone(zone):
-    return _filter_keys(zone, ('id', 'api_url'))
+    return _exclude_keys(zone, ('username', 'password', 'created_at',
+                                'deleted', 'deleted_at', 'updated_at'))
 
 
 class Controller(wsgi.Controller):
@@ -52,13 +53,9 @@
         """Return all zones in brief"""
         # Ask the ZoneManager in the Scheduler for most recent data,
         # or fall-back to the database ...
-        items = api.API().get_zone_list(req.environ['nova.context'])
-        if not items:
-            items = db.zone_get_all(req.environ['nova.context'])
-
+        items = api.get_zone_list(req.environ['nova.context'])
         items = common.limited(items, req)
-        items = [_exclude_keys(item, ['username', 'password'])
-                 for item in items]
+        items = [_scrub_zone(item) for item in items]
         return dict(zones=items)
 
     def detail(self, req):
@@ -67,29 +64,37 @@
 
     def info(self, req):
         """Return name and capabilities for this zone."""
-        return dict(zone=dict(name=FLAGS.zone_name,
-                    capabilities=FLAGS.zone_capabilities))
+        items = api.get_zone_capabilities(req.environ['nova.context'])
+
+        zone = dict(name=FLAGS.zone_name)
+        caps = FLAGS.zone_capabilities
+        for cap in caps:
+            key_values = cap.split('=')
+            zone[key_values[0]] = key_values[1]
+        for item, (min_value, max_value) in items.iteritems():
+            zone[item] = "%s,%s" % (min_value, max_value)
+        return dict(zone=zone)
 
     def show(self, req, id):
         """Return data about the given zone id"""
         zone_id = int(id)
-        zone = db.zone_get(req.environ['nova.context'], zone_id)
+        zone = api.zone_get(req.environ['nova.context'], zone_id)
         return dict(zone=_scrub_zone(zone))
 
     def delete(self, req, id):
         zone_id = int(id)
-        db.zone_delete(req.environ['nova.context'], zone_id)
+        api.zone_delete(req.environ['nova.context'], zone_id)
         return {}
 
     def create(self, req):
         context = req.environ['nova.context']
         env = self._deserialize(req.body, req.get_content_type())
-        zone = db.zone_create(context, env["zone"])
+        zone = api.zone_create(context, env["zone"])
         return dict(zone=_scrub_zone(zone))
 
     def update(self, req, id):
         context = req.environ['nova.context']
         env = self._deserialize(req.body, req.get_content_type())
         zone_id = int(id)
-        zone = db.zone_update(context, zone_id, env["zone"])
+        zone = api.zone_update(context, zone_id, env["zone"])
         return dict(zone=_scrub_zone(zone))
```
```diff
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py   2011-03-23 21:04:42 +0000
+++ nova/compute/api.py   2011-03-24 12:14:14 +0000
@@ -34,6 +34,7 @@
 from nova import utils
 from nova import volume
 from nova.compute import instance_types
+from nova.scheduler import api as scheduler_api
 from nova.db import base
 
 FLAGS = flags.FLAGS
@@ -352,6 +353,7 @@
         rv = self.db.instance_update(context, instance_id, kwargs)
         return dict(rv.iteritems())
 
+    @scheduler_api.reroute_compute("delete")
     def delete(self, context, instance_id):
         LOG.debug(_("Going to try to terminate %s"), instance_id)
         try:
@@ -384,6 +386,13 @@
         rv = self.db.instance_get(context, instance_id)
         return dict(rv.iteritems())
 
+    @scheduler_api.reroute_compute("get")
+    def routing_get(self, context, instance_id):
+        """Use this method instead of get() if this is the only
+        operation you intend to to. It will route to novaclient.get
+        if the instance is not found."""
+        return self.get(context, instance_id)
+
     def get_all(self, context, project_id=None, reservation_id=None,
                 fixed_ip=None):
         """Get all instances, possibly filtered by one of the
@@ -527,14 +536,17 @@
                 "instance_id": instance_id,
                 "flavor_id": flavor_id}})
 
+    @scheduler_api.reroute_compute("pause")
     def pause(self, context, instance_id):
         """Pause the given instance."""
         self._cast_compute_message('pause_instance', context, instance_id)
 
+    @scheduler_api.reroute_compute("unpause")
     def unpause(self, context, instance_id):
         """Unpause the given instance."""
         self._cast_compute_message('unpause_instance', context, instance_id)
 
+    @scheduler_api.reroute_compute("diagnostics")
     def get_diagnostics(self, context, instance_id):
         """Retrieve diagnostics for the given instance."""
         return self._call_compute_message(
@@ -546,18 +558,22 @@
         """Retrieve actions for the given instance."""
         return self.db.instance_get_actions(context, instance_id)
 
+    @scheduler_api.reroute_compute("suspend")
     def suspend(self, context, instance_id):
         """suspend the instance with instance_id"""
         self._cast_compute_message('suspend_instance', context, instance_id)
 
+    @scheduler_api.reroute_compute("resume")
     def resume(self, context, instance_id):
         """resume the instance with instance_id"""
         self._cast_compute_message('resume_instance', context, instance_id)
 
+    @scheduler_api.reroute_compute("rescue")
     def rescue(self, context, instance_id):
         """Rescue the given instance."""
         self._cast_compute_message('rescue_instance', context, instance_id)
 
+    @scheduler_api.reroute_compute("unrescue")
     def unrescue(self, context, instance_id):
         """Unrescue the given instance."""
         self._cast_compute_message('unrescue_instance', context, instance_id)
@@ -573,7 +589,6 @@
 
     def get_ajax_console(self, context, instance_id):
         """Get a url to an AJAX Console"""
-        instance = self.get(context, instance_id)
         output = self._call_compute_message('get_ajax_console',
                                             context,
                                             instance_id)
```
```diff
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py   2011-03-24 10:01:22 +0000
+++ nova/compute/manager.py   2011-03-24 12:14:14 +0000
@@ -111,7 +111,7 @@
     return decorated_function
 
 
-class ComputeManager(manager.Manager):
+class ComputeManager(manager.SchedulerDependentManager):
 
     """Manages the running instances from creation to destruction."""
 
@@ -132,7 +132,8 @@
 
         self.network_manager = utils.import_object(FLAGS.network_manager)
         self.volume_manager = utils.import_object(FLAGS.volume_manager)
-        super(ComputeManager, self).__init__(*args, **kwargs)
+        super(ComputeManager, self).__init__(service_name="compute",
+                                             *args, **kwargs)
 
     def init_host(self):
         """Do any initialization that needs to be run if this is a
```
```diff
=== modified file 'nova/db/api.py'
--- nova/db/api.py   2011-03-23 04:31:50 +0000
+++ nova/db/api.py   2011-03-24 12:14:14 +0000
@@ -71,6 +71,7 @@
     """No more available blades"""
     pass
 
+
 ###################
 
 
```
```diff
=== modified file 'nova/flags.py'
--- nova/flags.py   2011-03-18 11:35:00 +0000
+++ nova/flags.py   2011-03-24 12:14:14 +0000
@@ -358,5 +358,6 @@
               'availability zone of this node')
 
 DEFINE_string('zone_name', 'nova', 'name of this zone')
-DEFINE_string('zone_capabilities', 'kypervisor:xenserver;os:linux',
-              'Key/Value tags which represent capabilities of this zone')
+DEFINE_list('zone_capabilities',
+            ['hypervisor=xenserver;kvm', 'os=linux;windows'],
+            'Key/Multi-value list representng capabilities of this zone')
 
```
=== modified file 'nova/manager.py'
--- nova/manager.py	2010-12-15 00:05:39 +0000
+++ nova/manager.py	2011-03-24 12:14:14 +0000
@@ -53,8 +53,9 @@
 
 from nova import utils
 from nova import flags
+from nova import log as logging
 from nova.db import base
-
+from nova.scheduler import api
 
 FLAGS = flags.FLAGS
 
@@ -74,3 +75,28 @@
         """Do any initialization that needs to be run if this is a standalone
         service. Child classes should override this method."""
         pass
+
+
+class SchedulerDependentManager(Manager):
+    """Periodically send capability updates to the Scheduler services.
+    Services that need to update the Scheduler of their capabilities
+    should derive from this class. Otherwise they can derive from
+    manager.Manager directly. Updates are only sent after
+    update_service_capabilities is called with non-None values."""
+    def __init__(self, host=None, db_driver=None, service_name="undefined"):
+        self.last_capabilities = None
+        self.service_name = service_name
+        super(SchedulerDependentManager, self).__init__(host, db_driver)
+
+    def update_service_capabilities(self, capabilities):
+        """Remember these capabilities to send on next periodic update."""
+        self.last_capabilities = capabilities
+
+    def periodic_tasks(self, context=None):
+        """Pass data back to the scheduler at a periodic interval"""
+        if self.last_capabilities:
+            logging.debug(_("Notifying Schedulers of capabilities ..."))
+            api.update_service_capabilities(context, self.service_name,
+                self.host, self.last_capabilities)
+
+        super(SchedulerDependentManager, self).periodic_tasks(context)
 
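The pattern introduced by `SchedulerDependentManager` is simple: remember the last reported capabilities, and only push them out on the periodic tick once something non-None has been set. A minimal standalone sketch of that behavior, with a fake scheduler API standing in for the real RPC fanout (all names here are illustrative, not nova's):

```python
class FakeSchedulerAPI:
    """Stand-in for nova.scheduler.api; just records updates."""
    def __init__(self):
        self.received = []

    def update_service_capabilities(self, service_name, host, capabilities):
        self.received.append((service_name, host, capabilities))


class CapabilityReporter:
    """Mimics SchedulerDependentManager: nothing is sent until
    update_service_capabilities() has stored a non-None value."""
    def __init__(self, api, service_name, host):
        self.api = api
        self.service_name = service_name
        self.host = host
        self.last_capabilities = None

    def update_service_capabilities(self, capabilities):
        # Remember for the next periodic tick rather than sending now.
        self.last_capabilities = capabilities

    def periodic_tasks(self):
        if self.last_capabilities:
            self.api.update_service_capabilities(
                self.service_name, self.host, self.last_capabilities)


api = FakeSchedulerAPI()
reporter = CapabilityReporter(api, 'compute', 'host1')
reporter.periodic_tasks()                       # nothing set yet, no update sent
reporter.update_service_capabilities({'ram_mb': 2048})
reporter.periodic_tasks()                       # now one update goes out
print(api.received)
```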
=== modified file 'nova/network/manager.py'
--- nova/network/manager.py	2011-03-23 05:29:32 +0000
+++ nova/network/manager.py	2011-03-24 12:14:14 +0000
@@ -105,7 +105,7 @@
     pass
 
 
-class NetworkManager(manager.Manager):
+class NetworkManager(manager.SchedulerDependentManager):
     """Implements common network manager functionality.
 
     This class must be subclassed to support specific topologies.
@@ -116,7 +116,8 @@
         if not network_driver:
             network_driver = FLAGS.network_driver
         self.driver = utils.import_object(network_driver)
-        super(NetworkManager, self).__init__(*args, **kwargs)
+        super(NetworkManager, self).__init__(service_name='network',
+                                             *args, **kwargs)
 
     def init_host(self):
         """Do any initialization that needs to be run if this is a
 
=== modified file 'nova/rpc.py'
--- nova/rpc.py	2011-03-18 13:56:05 +0000
+++ nova/rpc.py	2011-03-24 12:14:14 +0000
@@ -137,24 +137,7 @@
         return timer
 
 
-class Publisher(messaging.Publisher):
-    """Publisher base class"""
-    pass
-
-
-class TopicConsumer(Consumer):
-    """Consumes messages on a specific topic"""
-    exchange_type = "topic"
-
-    def __init__(self, connection=None, topic="broadcast"):
-        self.queue = topic
-        self.routing_key = topic
-        self.exchange = FLAGS.control_exchange
-        self.durable = False
-        super(TopicConsumer, self).__init__(connection=connection)
-
-
-class AdapterConsumer(TopicConsumer):
+class AdapterConsumer(Consumer):
     """Calls methods on a proxy object based on method and args"""
     def __init__(self, connection=None, topic="broadcast", proxy=None):
         LOG.debug(_('Initing the Adapter Consumer for %s') % topic)
@@ -207,6 +190,41 @@
         return
 
 
+class Publisher(messaging.Publisher):
+    """Publisher base class"""
+    pass
+
+
+class TopicAdapterConsumer(AdapterConsumer):
+    """Consumes messages on a specific topic"""
+    exchange_type = "topic"
+
+    def __init__(self, connection=None, topic="broadcast", proxy=None):
+        self.queue = topic
+        self.routing_key = topic
+        self.exchange = FLAGS.control_exchange
+        self.durable = False
+        super(TopicAdapterConsumer, self).__init__(connection=connection,
+                                        topic=topic, proxy=proxy)
+
+
+class FanoutAdapterConsumer(AdapterConsumer):
+    """Consumes messages from a fanout exchange"""
+    exchange_type = "fanout"
+
+    def __init__(self, connection=None, topic="broadcast", proxy=None):
+        self.exchange = "%s_fanout" % topic
+        self.routing_key = topic
+        unique = uuid.uuid4().hex
+        self.queue = "%s_fanout_%s" % (topic, unique)
+        self.durable = False
+        LOG.info(_("Created '%(exchange)s' fanout exchange "
+                   "with '%(key)s' routing key"),
+                 dict(exchange=self.exchange, key=self.routing_key))
+        super(FanoutAdapterConsumer, self).__init__(connection=connection,
+                                        topic=topic, proxy=proxy)
+
+
 class TopicPublisher(Publisher):
     """Publishes messages on a specific topic"""
     exchange_type = "topic"
@@ -218,6 +236,19 @@
         super(TopicPublisher, self).__init__(connection=connection)
 
 
+class FanoutPublisher(Publisher):
+    """Publishes messages to a fanout exchange."""
+    exchange_type = "fanout"
+
+    def __init__(self, topic, connection=None):
+        self.exchange = "%s_fanout" % topic
+        self.queue = "%s_fanout" % topic
+        self.durable = False
+        LOG.info(_("Creating '%(exchange)s' fanout exchange"),
+                 dict(exchange=self.exchange))
+        super(FanoutPublisher, self).__init__(connection=connection)
+
+
 class DirectConsumer(Consumer):
     """Consumes messages directly on a channel specified by msg_id"""
     exchange_type = "direct"
@@ -360,6 +391,16 @@
     publisher.close()
 
 
+def fanout_cast(context, topic, msg):
+    """Sends a message on a fanout exchange without waiting for a response"""
+    LOG.debug(_("Making asynchronous fanout cast..."))
+    _pack_context(msg, context)
+    conn = Connection.instance()
+    publisher = FanoutPublisher(topic, connection=conn)
+    publisher.send(msg)
+    publisher.close()
+
+
 def generic_response(message_data, message):
     """Logs a result and exits"""
     LOG.debug(_('response %s'), message_data)
 
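The key distinction this rpc.py change adds is delivery semantics: a topic exchange routes each message to one shared queue per topic, while the new fanout exchange delivers a copy to every consumer, each of which binds its own uniquely named queue (`"%s_fanout_%s" % (topic, uuid)`). A toy in-memory sketch of that difference, with no AMQP involved (`InMemoryBroker` and its methods are invented for illustration):

```python
import uuid

class InMemoryBroker:
    """Toy stand-in for the broker: topic_cast appends to one shared
    queue; fanout_cast copies the message into every consumer's
    private '<topic>_fanout_<uuid>' queue."""
    def __init__(self):
        self.queues = {}

    def declare(self, queue):
        return self.queues.setdefault(queue, [])

    def topic_cast(self, topic, msg):
        self.declare(topic).append(msg)

    def fanout_cast(self, topic, msg):
        for name, queue in self.queues.items():
            if name.startswith('%s_fanout_' % topic):
                queue.append(msg)


broker = InMemoryBroker()
# Two scheduler workers, each with a unique fanout queue, mirroring
# FanoutAdapterConsumer's queue naming.
q1 = 'scheduler_fanout_%s' % uuid.uuid4().hex
q2 = 'scheduler_fanout_%s' % uuid.uuid4().hex
broker.declare(q1)
broker.declare(q2)
broker.fanout_cast('scheduler', {'method': 'update_service_capabilities'})
print(len(broker.queues[q1]), len(broker.queues[q2]))  # both got a copy
```

This is why capability updates use `fanout_cast`: every running scheduler must see every service's update, not just whichever one happens to pull from the shared topic queue.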
=== modified file 'nova/scheduler/api.py'
--- nova/scheduler/api.py	2011-02-25 21:40:15 +0000
+++ nova/scheduler/api.py	2011-03-24 12:14:14 +0000
@@ -17,33 +17,225 @@
 Handles all requests relating to schedulers.
 """
 
+import novaclient
+
+from nova import db
+from nova import exception
 from nova import flags
 from nova import log as logging
 from nova import rpc
 
+from eventlet import greenpool
+
 FLAGS = flags.FLAGS
+flags.DEFINE_bool('enable_zone_routing',
+    False,
+    'When True, routing to child zones will occur.')
+
 LOG = logging.getLogger('nova.scheduler.api')
 
 
-class API(object):
-    """API for interacting with the scheduler."""
-
-    def _call_scheduler(self, method, context, params=None):
-        """Generic handler for RPC calls to the scheduler.
-
-        :param params: Optional dictionary of arguments to be passed to the
-                       scheduler worker
-
-        :retval: Result returned by scheduler worker
-        """
-        if not params:
-            params = {}
-        queue = FLAGS.scheduler_topic
-        kwargs = {'method': method, 'args': params}
-        return rpc.call(context, queue, kwargs)
-
-    def get_zone_list(self, context):
-        items = self._call_scheduler('get_zone_list', context)
-        for item in items:
-            item['api_url'] = item['api_url'].replace('\\/', '/')
-        return items
+def _call_scheduler(method, context, params=None):
+    """Generic handler for RPC calls to the scheduler.
+
+    :param params: Optional dictionary of arguments to be passed to the
+                   scheduler worker
+
+    :retval: Result returned by scheduler worker
+    """
+    if not params:
+        params = {}
+    queue = FLAGS.scheduler_topic
+    kwargs = {'method': method, 'args': params}
+    return rpc.call(context, queue, kwargs)
+
+
+def get_zone_list(context):
+    """Return a list of zones assoicated with this zone."""
+    items = _call_scheduler('get_zone_list', context)
+    for item in items:
+        item['api_url'] = item['api_url'].replace('\\/', '/')
+    if not items:
+        items = db.zone_get_all(context)
+    return items
+
+
+def zone_get(context, zone_id):
+    return db.zone_get(context, zone_id)
+
+
+def zone_delete(context, zone_id):
+    return db.zone_delete(context, zone_id)
+
+
+def zone_create(context, data):
+    return db.zone_create(context, data)
+
+
+def zone_update(context, zone_id, data):
+    return db.zone_update(context, zone_id, data)
+
+
+def get_zone_capabilities(context, service=None):
+    """Returns a dict of key, value capabilities for this zone,
+    or for a particular class of services running in this zone."""
+    return _call_scheduler('get_zone_capabilities', context=context,
+                           params=dict(service=service))
+
+
+def update_service_capabilities(context, service_name, host, capabilities):
+    """Send an update to all the scheduler services informing them
+    of the capabilities of this service."""
+    kwargs = dict(method='update_service_capabilities',
+                  args=dict(service_name=service_name, host=host,
+                            capabilities=capabilities))
+    return rpc.fanout_cast(context, 'scheduler', kwargs)
+
+
+def _wrap_method(function, self):
+    """Wrap method to supply self."""
+    def _wrap(*args, **kwargs):
+        return function(self, *args, **kwargs)
+    return _wrap
+
+
+def _process(func, zone):
+    """Worker stub for green thread pool. Give the worker
+    an authenticated nova client and zone info."""
+    nova = novaclient.OpenStack(zone.username, zone.password, zone.api_url)
+    nova.authenticate()
+    return func(nova, zone)
+
+
+def child_zone_helper(zone_list, func):
+    """Fire off a command to each zone in the list.
+    The return is [novaclient return objects] from each child zone.
+    For example, if you are calling server.pause(), the list will
+    be whatever the response from server.pause() is. One entry
+    per child zone called."""
+    green_pool = greenpool.GreenPool()
+    return [result for result in green_pool.imap(
+                    _wrap_method(_process, func), zone_list)]
+
+
+def _issue_novaclient_command(nova, zone, collection, method_name, item_id):
+    """Use novaclient to issue command to a single child zone.
+    One of these will be run in parallel for each child zone."""
+    manager = getattr(nova, collection)
+    result = None
+    try:
+        try:
+            result = manager.get(int(item_id))
+        except ValueError, e:
+            result = manager.find(name=item_id)
+    except novaclient.NotFound:
+        url = zone.api_url
+        LOG.debug(_("%(collection)s '%(item_id)s' not found on '%(url)s'" %
+                                                locals()))
+        return None
+
+    if method_name.lower() not in ['get', 'find']:
+        result = getattr(result, method_name)()
+    return result
+
+
+def wrap_novaclient_function(f, collection, method_name, item_id):
+    """Appends collection, method_name and item_id to the incoming
+    (nova, zone) call from child_zone_helper."""
+    def inner(nova, zone):
+        return f(nova, zone, collection, method_name, item_id)
+
+    return inner
+
+
+class RedirectResult(exception.Error):
+    """Used to the HTTP API know that these results are pre-cooked
+    and they can be returned to the caller directly."""
+    def __init__(self, results):
+        self.results = results
+        super(RedirectResult, self).__init__(
+                message=_("Uncaught Zone redirection exception"))
+
+
+class reroute_compute(object):
+    """Decorator used to indicate that the method should
+    delegate the call the child zones if the db query
+    can't find anything."""
+    def __init__(self, method_name):
+        self.method_name = method_name
+
+    def __call__(self, f):
+        def wrapped_f(*args, **kwargs):
+            collection, context, item_id = \
+                            self.get_collection_context_and_id(args, kwargs)
+            try:
+                # Call the original function ...
+                return f(*args, **kwargs)
+            except exception.InstanceNotFound, e:
+                LOG.debug(_("Instance %(item_id)s not found "
+                                    "locally: '%(e)s'" % locals()))
+
+                if not FLAGS.enable_zone_routing:
+                    raise
+
+                zones = db.zone_get_all(context)
+                if not zones:
+                    raise
+
+                # Ask the children to provide an answer ...
+                LOG.debug(_("Asking child zones ..."))
+                result = self._call_child_zones(zones,
+                            wrap_novaclient_function(_issue_novaclient_command,
+                                   collection, self.method_name, item_id))
+                # Scrub the results and raise another exception
+                # so the API layers can bail out gracefully ...
+                raise RedirectResult(self.unmarshall_result(result))
+        return wrapped_f
+
+    def _call_child_zones(self, zones, function):
+        """Ask the child zones to perform this operation.
+        Broken out for testing."""
+        return child_zone_helper(zones, function)
+
+    def get_collection_context_and_id(self, args, kwargs):
+        """Returns a tuple of (novaclient collection name, security
+        context and resource id. Derived class should override this."""
+        context = kwargs.get('context', None)
+        instance_id = kwargs.get('instance_id', None)
+        if len(args) > 0 and not context:
+            context = args[1]
+        if len(args) > 1 and not instance_id:
+            instance_id = args[2]
+        return ("servers", context, instance_id)
+
+    def unmarshall_result(self, zone_responses):
+        """Result is a list of responses from each child zone.
+        Each decorator derivation is responsible to turning this
+        into a format expected by the calling method. For
+        example, this one is expected to return a single Server
+        dict {'server':{k:v}}. Others may return a list of them, like
+        {'servers':[{k,v}]}"""
+        reduced_response = []
+        for zone_response in zone_responses:
+            if not zone_response:
+                continue
+
+            server = zone_response.__dict__
+
+            for k in server.keys():
+                if k[0] == '_' or k == 'manager':
+                    del server[k]
+
+            reduced_response.append(dict(server=server))
+        if reduced_response:
+            return reduced_response[0]  # first for now.
+        return {}
+
+
+def redirect_handler(f):
+    def new_f(*args, **kwargs):
+        try:
+            return f(*args, **kwargs)
+        except RedirectResult, e:
+            return e.results
+    return new_f
 
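The `reroute_compute` / `RedirectResult` / `redirect_handler` trio above is the heart of this branch: try the call locally, and on an `InstanceNotFound` ask the child zones and smuggle their answer up through an exception that the API layer catches. A minimal, self-contained sketch of that control flow (the names `NotFound`, `reroute_on_miss`, `lookup`, and the toy dicts are invented for illustration; the real code goes through novaclient and green threads):

```python
class NotFound(Exception):
    pass

class RedirectResult(Exception):
    """Carries a pre-cooked child-zone answer up to the caller."""
    def __init__(self, results):
        self.results = results

def reroute_on_miss(query_children):
    """Decorator sketch of reroute_compute: run the wrapped lookup,
    and on a local miss ask the child zones, then raise the answer
    as RedirectResult so upper layers can return it directly."""
    def decorator(f):
        def wrapped(item_id):
            try:
                return f(item_id)
            except NotFound:
                raise RedirectResult(query_children(item_id))
        return wrapped
    return decorator

local_db = {1: 'instance-1'}
child_zone = {123: 'instance-123'}   # pretend this lives in a child zone

@reroute_on_miss(lambda item_id: child_zone.get(item_id))
def lookup(item_id):
    if item_id not in local_db:
        raise NotFound()
    return local_db[item_id]

print(lookup(1))        # served locally
try:
    lookup(123)
except RedirectResult as e:
    print(e.results)    # served by the child zone
```

Raising (rather than returning) the redirected result is what lets the decorator sit on existing compute API methods without changing their return contracts; only the outermost `redirect_handler` needs to know about it.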
=== modified file 'nova/scheduler/driver.py'
--- nova/scheduler/driver.py	2011-03-10 04:30:52 +0000
+++ nova/scheduler/driver.py	2011-03-24 12:14:14 +0000
@@ -49,6 +49,13 @@
 class Scheduler(object):
     """The base class that all Scheduler clases should inherit from."""
 
+    def __init__(self):
+        self.zone_manager = None
+
+    def set_zone_manager(self, zone_manager):
+        """Called by the Scheduler Service to supply a ZoneManager."""
+        self.zone_manager = zone_manager
+
     @staticmethod
     def service_is_up(service):
         """Check whether a service is up based on last heartbeat."""
 
=== modified file 'nova/scheduler/manager.py'
--- nova/scheduler/manager.py	2011-03-14 17:59:41 +0000
+++ nova/scheduler/manager.py	2011-03-24 12:14:14 +0000
@@ -41,10 +41,11 @@
 class SchedulerManager(manager.Manager):
     """Chooses a host to run instances on."""
     def __init__(self, scheduler_driver=None, *args, **kwargs):
+        self.zone_manager = zone_manager.ZoneManager()
         if not scheduler_driver:
             scheduler_driver = FLAGS.scheduler_driver
         self.driver = utils.import_object(scheduler_driver)
-        self.zone_manager = zone_manager.ZoneManager()
+        self.driver.set_zone_manager(self.zone_manager)
         super(SchedulerManager, self).__init__(*args, **kwargs)
 
     def __getattr__(self, key):
@@ -59,6 +60,17 @@
         """Get a list of zones from the ZoneManager."""
         return self.zone_manager.get_zone_list()
 
+    def get_zone_capabilities(self, context=None, service=None):
+        """Get the normalized set of capabilites for this zone,
+        or for a particular service."""
+        return self.zone_manager.get_zone_capabilities(context, service)
+
+    def update_service_capabilities(self, context=None, service_name=None,
+                                    host=None, capabilities={}):
+        """Process a capability update from a service node."""
+        self.zone_manager.update_service_capabilities(service_name,
+                                                      host, capabilities)
+
     def _schedule(self, method, context, topic, *args, **kwargs):
         """Tries to call schedule_* method on the driver to retrieve host.
 
=== modified file 'nova/scheduler/zone_manager.py'
--- nova/scheduler/zone_manager.py	2011-03-03 14:55:02 +0000
+++ nova/scheduler/zone_manager.py	2011-03-24 12:14:14 +0000
@@ -58,8 +58,9 @@
         child zone."""
         self.last_seen = datetime.now()
         self.attempt = 0
-        self.name = zone_metadata["name"]
-        self.capabilities = zone_metadata["capabilities"]
+        self.name = zone_metadata.get("name", "n/a")
+        self.capabilities = ", ".join(["%s=%s" % (k, v)
+                for k, v in zone_metadata.iteritems() if k != 'name'])
         self.is_active = True
 
     def to_dict(self):
@@ -104,13 +105,37 @@
     """Keeps the zone states updated."""
     def __init__(self):
         self.last_zone_db_check = datetime.min
-        self.zone_states = {}
+        self.zone_states = {}  # { <zone_id> : ZoneState }
+        self.service_states = {}  # { <service> : { <host> : { cap k : v }}}
         self.green_pool = greenpool.GreenPool()
 
     def get_zone_list(self):
         """Return the list of zones we know about."""
         return [zone.to_dict() for zone in self.zone_states.values()]
 
+    def get_zone_capabilities(self, context, service=None):
+        """Roll up all the individual host info to generic 'service'
+        capabilities. Each capability is aggregated into
+        <cap>_min and <cap>_max values."""
+        service_dict = self.service_states
+        if service:
+            service_dict = {service: self.service_states.get(service, {})}
+
+        # TODO(sandy) - be smarter about fabricating this structure.
+        # But it's likely to change once we understand what the Best-Match
+        # code will need better.
+        combined = {}  # { <service>_<cap> : (min, max), ... }
+        for service_name, host_dict in service_dict.iteritems():
+            for host, caps_dict in host_dict.iteritems():
+                for cap, value in caps_dict.iteritems():
+                    key = "%s_%s" % (service_name, cap)
+                    min_value, max_value = combined.get(key, (value, value))
+                    min_value = min(min_value, value)
+                    max_value = max(max_value, value)
+                    combined[key] = (min_value, max_value)
+
+        return combined
+
     def _refresh_from_db(self, context):
         """Make our zone state map match the db."""
         # Add/update existing zones ...
@@ -141,3 +166,11 @@
         self.last_zone_db_check = datetime.now()
         self._refresh_from_db(context)
         self._poll_zones(context)
+
+    def update_service_capabilities(self, service_name, host, capabilities):
+        """Update the per-service capabilities based on this notification."""
+        logging.debug(_("Received %(service_name)s service update from "
+                        "%(host)s: %(capabilities)s") % locals())
+        service_caps = self.service_states.get(service_name, {})
+        service_caps[host] = capabilities
+        self.service_states[service_name] = service_caps
 
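The aggregation in `ZoneManager.get_zone_capabilities` is easy to verify in isolation: per-host capability values are rolled up into one `<service>_<cap>` key holding the `(min, max)` pair across all hosts. A Python 3 rendering of just that loop (standalone function instead of a method, same logic as the patch):

```python
def get_zone_capabilities(service_states):
    """Roll { service: { host: { cap: value }}} up into
    { '<service>_<cap>': (min_value, max_value) }."""
    combined = {}
    for service_name, host_dict in service_states.items():
        for host, caps_dict in host_dict.items():
            for cap, value in caps_dict.items():
                key = '%s_%s' % (service_name, cap)
                min_value, max_value = combined.get(key, (value, value))
                combined[key] = (min(min_value, value), max(max_value, value))
    return combined

states = {'compute': {'host1': {'ram_mb': 1024},
                      'host2': {'ram_mb': 4096}}}
print(get_zone_capabilities(states))
```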
=== modified file 'nova/service.py'
--- nova/service.py	2011-03-18 13:56:05 +0000
+++ nova/service.py	2011-03-24 12:14:14 +0000
@@ -97,18 +97,24 @@
 
         conn1 = rpc.Connection.instance(new=True)
         conn2 = rpc.Connection.instance(new=True)
+        conn3 = rpc.Connection.instance(new=True)
         if self.report_interval:
-            consumer_all = rpc.AdapterConsumer(
+            consumer_all = rpc.TopicAdapterConsumer(
                     connection=conn1,
                     topic=self.topic,
                     proxy=self)
-            consumer_node = rpc.AdapterConsumer(
+            consumer_node = rpc.TopicAdapterConsumer(
                     connection=conn2,
                     topic='%s.%s' % (self.topic, self.host),
                     proxy=self)
+            fanout = rpc.FanoutAdapterConsumer(
+                    connection=conn3,
+                    topic=self.topic,
+                    proxy=self)
 
             self.timers.append(consumer_all.attach_to_eventlet())
             self.timers.append(consumer_node.attach_to_eventlet())
+            self.timers.append(fanout.attach_to_eventlet())
 
             pulse = utils.LoopingCall(self.report_state)
             pulse.start(interval=self.report_interval, now=False)
 
=== modified file 'nova/tests/api/openstack/test_zones.py'
--- nova/tests/api/openstack/test_zones.py	2011-03-11 19:49:32 +0000
+++ nova/tests/api/openstack/test_zones.py	2011-03-24 12:14:14 +0000
@@ -75,6 +75,10 @@
 ]
 
 
+def zone_capabilities(method, context, params):
+    return dict()
+
+
 class ZonesTest(test.TestCase):
     def setUp(self):
         super(ZonesTest, self).setUp()
@@ -93,13 +97,18 @@
         self.stubs.Set(nova.db, 'zone_create', zone_create)
         self.stubs.Set(nova.db, 'zone_delete', zone_delete)
 
+        self.old_zone_name = FLAGS.zone_name
+        self.old_zone_capabilities = FLAGS.zone_capabilities
+
     def tearDown(self):
         self.stubs.UnsetAll()
         FLAGS.allow_admin_api = self.allow_admin
+        FLAGS.zone_name = self.old_zone_name
+        FLAGS.zone_capabilities = self.old_zone_capabilities
         super(ZonesTest, self).tearDown()
 
     def test_get_zone_list_scheduler(self):
-        self.stubs.Set(api.API, '_call_scheduler', zone_get_all_scheduler)
+        self.stubs.Set(api, '_call_scheduler', zone_get_all_scheduler)
         req = webob.Request.blank('/v1.0/zones')
         res = req.get_response(fakes.wsgi_app())
         res_dict = json.loads(res.body)
@@ -108,8 +117,7 @@
         self.assertEqual(len(res_dict['zones']), 2)
 
     def test_get_zone_list_db(self):
-        self.stubs.Set(api.API, '_call_scheduler',
-                       zone_get_all_scheduler_empty)
+        self.stubs.Set(api, '_call_scheduler', zone_get_all_scheduler_empty)
         self.stubs.Set(nova.db, 'zone_get_all', zone_get_all_db)
         req = webob.Request.blank('/v1.0/zones')
         req.headers["Content-Type"] = "application/json"
@@ -167,3 +175,18 @@
         self.assertEqual(res_dict['zone']['id'], 1)
         self.assertEqual(res_dict['zone']['api_url'], 'http://example.com')
         self.assertFalse('username' in res_dict['zone'])
+
+    def test_zone_info(self):
+        FLAGS.zone_name = 'darksecret'
+        FLAGS.zone_capabilities = ['cap1=a;b', 'cap2=c;d']
+        self.stubs.Set(api, '_call_scheduler', zone_capabilities)
+
+        body = dict(zone=dict(username='zeb', password='sneaky'))
+        req = webob.Request.blank('/v1.0/zones/info')
+
+        res = req.get_response(fakes.wsgi_app())
+        res_dict = json.loads(res.body)
+        self.assertEqual(res.status_int, 200)
+        self.assertEqual(res_dict['zone']['name'], 'darksecret')
+        self.assertEqual(res_dict['zone']['cap1'], 'a;b')
+        self.assertEqual(res_dict['zone']['cap2'], 'c;d')
 
=== modified file 'nova/tests/test_rpc.py'
--- nova/tests/test_rpc.py	2011-01-19 20:26:09 +0000
+++ nova/tests/test_rpc.py	2011-03-24 12:14:14 +0000
@@ -36,7 +36,7 @@
         super(RpcTestCase, self).setUp()
         self.conn = rpc.Connection.instance(True)
         self.receiver = TestReceiver()
-        self.consumer = rpc.AdapterConsumer(connection=self.conn,
+        self.consumer = rpc.TopicAdapterConsumer(connection=self.conn,
                                             topic='test',
                                             proxy=self.receiver)
         self.consumer.attach_to_eventlet()
@@ -97,7 +97,7 @@
 
         nested = Nested()
         conn = rpc.Connection.instance(True)
-        consumer = rpc.AdapterConsumer(connection=conn,
+        consumer = rpc.TopicAdapterConsumer(connection=conn,
                                        topic='nested',
                                        proxy=nested)
         consumer.attach_to_eventlet()
 
=== modified file 'nova/tests/test_scheduler.py'
--- nova/tests/test_scheduler.py	2011-03-10 06:23:13 +0000
+++ nova/tests/test_scheduler.py	2011-03-24 12:14:14 +0000
@@ -21,6 +21,9 @@
 
 import datetime
 import mox
+import novaclient.exceptions
+import stubout
+import webob
 
 from mox import IgnoreArg
 from nova import context
@@ -32,6 +35,7 @@
 from nova import rpc
 from nova import utils
 from nova.auth import manager as auth_manager
+from nova.scheduler import api
 from nova.scheduler import manager
 from nova.scheduler import driver
 from nova.compute import power_state
@@ -937,3 +941,160 @@
         db.instance_destroy(self.context, instance_id)
         db.service_destroy(self.context, s_ref['id'])
         db.service_destroy(self.context, s_ref2['id'])
+
+
+class FakeZone(object):
+    def __init__(self, api_url, username, password):
+        self.api_url = api_url
+        self.username = username
+        self.password = password
+
+
+def zone_get_all(context):
+    return [
+        FakeZone('http://example.com', 'bob', 'xxx'),
+    ]
+
+
+class FakeRerouteCompute(api.reroute_compute):
+    def _call_child_zones(self, zones, function):
+        return []
+
+    def get_collection_context_and_id(self, args, kwargs):
+        return ("servers", None, 1)
+
+    def unmarshall_result(self, zone_responses):
+        return dict(magic="found me")
+
+
+def go_boom(self, context, instance):
+    raise exception.InstanceNotFound("boom message", instance)
+
+
+def found_instance(self, context, instance):
+    return dict(name='myserver')
+
+
+class FakeResource(object):
+    def __init__(self, attribute_dict):
+        for k, v in attribute_dict.iteritems():
+            setattr(self, k, v)
+
+    def pause(self):
+        pass
+
+
+class ZoneRedirectTest(test.TestCase):
+    def setUp(self):
+        super(ZoneRedirectTest, self).setUp()
+        self.stubs = stubout.StubOutForTesting()
+
+        self.stubs.Set(db, 'zone_get_all', zone_get_all)
+
+        self.enable_zone_routing = FLAGS.enable_zone_routing
+        FLAGS.enable_zone_routing = True
+
+    def tearDown(self):
+        self.stubs.UnsetAll()
+        FLAGS.enable_zone_routing = self.enable_zone_routing
+        super(ZoneRedirectTest, self).tearDown()
+
+    def test_trap_found_locally(self):
+        decorator = FakeRerouteCompute("foo")
+        try:
+            result = decorator(found_instance)(None, None, 1)
+        except api.RedirectResult, e:
+            self.fail(_("Successful database hit should succeed"))
+
+    def test_trap_not_found_locally(self):
+        decorator = FakeRerouteCompute("foo")
+        try:
+            result = decorator(go_boom)(None, None, 1)
+            self.assertFail(_("Should have rerouted."))
+        except api.RedirectResult, e:
+            self.assertEquals(e.results['magic'], 'found me')
+
+    def test_routing_flags(self):
+        FLAGS.enable_zone_routing = False
+        decorator = FakeRerouteCompute("foo")
+        try:
+            result = decorator(go_boom)(None, None, 1)
+            self.assertFail(_("Should have thrown exception."))
+        except exception.InstanceNotFound, e:
+            self.assertEquals(e.message, 'boom message')
+
+    def test_get_collection_context_and_id(self):
+        decorator = api.reroute_compute("foo")
+        self.assertEquals(decorator.get_collection_context_and_id(
+            (None, 10, 20), {}), ("servers", 10, 20))
+        self.assertEquals(decorator.get_collection_context_and_id(
+            (None, 11,), dict(instance_id=21)), ("servers", 11, 21))
+        self.assertEquals(decorator.get_collection_context_and_id(
+            (None,), dict(context=12, instance_id=22)), ("servers", 12, 22))
+
+    def test_unmarshal_single_server(self):
+        decorator = api.reroute_compute("foo")
+        self.assertEquals(decorator.unmarshall_result([]), {})
+        self.assertEquals(decorator.unmarshall_result(
+            [FakeResource(dict(a=1, b=2)), ]),
+            dict(server=dict(a=1, b=2)))
+        self.assertEquals(decorator.unmarshall_result(
+            [FakeResource(dict(a=1, _b=2)), ]),
+            dict(server=dict(a=1,)))
+        self.assertEquals(decorator.unmarshall_result(
+            [FakeResource(dict(a=1, manager=2)), ]),
+            dict(server=dict(a=1,)))
+        self.assertEquals(decorator.unmarshall_result(
+            [FakeResource(dict(_a=1, manager=2)), ]),
+            dict(server={}))
+
+
+class FakeServerCollection(object):
+    def get(self, instance_id):
+        return FakeResource(dict(a=10, b=20))
+
+    def find(self, name):
+        return FakeResource(dict(a=11, b=22))
+
+
+class FakeEmptyServerCollection(object):
+    def get(self, f):
+        raise novaclient.NotFound(1)
+
+    def find(self, name):
+        raise novaclient.NotFound(2)
+
+
+class FakeNovaClient(object):
+    def __init__(self, collection):
+        self.servers = collection
+
+
+class DynamicNovaClientTest(test.TestCase):
+    def test_issue_novaclient_command_found(self):
+        zone = FakeZone('http://example.com', 'bob', 'xxx')
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeServerCollection()),
+            zone, "servers", "get", 100).a, 10)
+
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeServerCollection()),
+            zone, "servers", "find", "name").b, 22)
+
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeServerCollection()),
+            zone, "servers", "pause", 100), None)
+
+    def test_issue_novaclient_command_not_found(self):
+        zone = FakeZone('http://example.com', 'bob', 'xxx')
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeEmptyServerCollection()),
+            zone, "servers", "get", 100), None)
+
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeEmptyServerCollection()),
+            zone, "servers", "find", "name"), None)
+
+        self.assertEquals(api._issue_novaclient_command(
+            FakeNovaClient(FakeEmptyServerCollection()),
+            zone, "servers", "any", "name"), None)
 
=== modified file 'nova/tests/test_service.py'
--- nova/tests/test_service.py	2011-03-10 06:16:03 +0000
+++ nova/tests/test_service.py	2011-03-24 12:14:14 +0000
@@ -109,20 +109,29 @@
         app = service.Service.create(host=host, binary=binary)
 
         self.mox.StubOutWithMock(rpc,
-                                 'AdapterConsumer',
+                                 'TopicAdapterConsumer',
                                  use_mock_anything=True)
-        rpc.AdapterConsumer(connection=mox.IgnoreArg(),
+        self.mox.StubOutWithMock(rpc,
+                                 'FanoutAdapterConsumer',
+                                 use_mock_anything=True)
+        rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
                             topic=topic,
                             proxy=mox.IsA(service.Service)).AndReturn(
-                                    rpc.AdapterConsumer)
+                                    rpc.TopicAdapterConsumer)
 
-        rpc.AdapterConsumer(connection=mox.IgnoreArg(),
+        rpc.TopicAdapterConsumer(connection=mox.IgnoreArg(),
                             topic='%s.%s' % (topic, host),
                             proxy=mox.IsA(service.Service)).AndReturn(
-                                    rpc.AdapterConsumer)
+                                    rpc.TopicAdapterConsumer)
 
-        rpc.AdapterConsumer.attach_to_eventlet()
-        rpc.AdapterConsumer.attach_to_eventlet()
+        rpc.FanoutAdapterConsumer(connection=mox.IgnoreArg(),
+                                  topic=topic,
+                                  proxy=mox.IsA(service.Service)).AndReturn(
+                                      rpc.FanoutAdapterConsumer)
+
+        rpc.TopicAdapterConsumer.attach_to_eventlet()
+        rpc.TopicAdapterConsumer.attach_to_eventlet()
+        rpc.FanoutAdapterConsumer.attach_to_eventlet()
 
         service_create = {'host': host,
                           'binary': binary,
@@ -279,6 +288,7 @@
         self.mox.StubOutWithMock(service.rpc.Connection, 'instance')
         service.rpc.Connection.instance(new=mox.IgnoreArg())
         service.rpc.Connection.instance(new=mox.IgnoreArg())
+        service.rpc.Connection.instance(new=mox.IgnoreArg())
         self.mox.StubOutWithMock(serv.manager.driver,
                                  'update_available_resource')
         serv.manager.driver.update_available_resource(mox.IgnoreArg(), host)
 
=== modified file 'nova/tests/test_test.py'
--- nova/tests/test_test.py	2011-02-21 22:55:06 +0000
+++ nova/tests/test_test.py	2011-03-24 12:14:14 +0000
@@ -34,7 +34,7 @@
 
     def test_rpc_consumer_isolation(self):
         connection = rpc.Connection.instance(new=True)
-        consumer = rpc.TopicConsumer(connection, topic='compute')
+        consumer = rpc.TopicAdapterConsumer(connection, topic='compute')
         consumer.register_callback(
             lambda x, y: self.fail('I should never be called'))
         consumer.attach_to_eventlet()
 
=== modified file 'nova/tests/test_zones.py'
--- nova/tests/test_zones.py	2011-03-03 14:55:02 +0000
+++ nova/tests/test_zones.py	2011-03-24 12:14:14 +0000
@@ -76,6 +76,40 @@
         self.assertEquals(len(zm.zone_states), 1)
         self.assertEquals(zm.zone_states[1].username, 'user1')
 
+    def test_service_capabilities(self):
+        zm = zone_manager.ZoneManager()
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, {})
+
+        zm.update_service_capabilities("svc1", "host1", dict(a=1, b=2))
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, dict(svc1_a=(1, 1), svc1_b=(2, 2)))
+
+        zm.update_service_capabilities("svc1", "host1", dict(a=2, b=3))
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, dict(svc1_a=(2, 2), svc1_b=(3, 3)))
+
+        zm.update_service_capabilities("svc1", "host2", dict(a=20, b=30))
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, dict(svc1_a=(2, 20), svc1_b=(3, 30)))
+
+        zm.update_service_capabilities("svc10", "host1", dict(a=99, b=99))
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, dict(svc1_a=(2, 20), svc1_b=(3, 30),
+                                     svc10_a=(99, 99), svc10_b=(99, 99)))
+
+        zm.update_service_capabilities("svc1", "host3", dict(c=5))
+        caps = zm.get_zone_capabilities(self, None)
+        self.assertEquals(caps, dict(svc1_a=(2, 20), svc1_b=(3, 30),
+                                     svc1_c=(5, 5), svc10_a=(99, 99),
+                                     svc10_b=(99, 99)))
+
+        caps = zm.get_zone_capabilities(self, 'svc1')
+        self.assertEquals(caps, dict(svc1_a=(2, 20), svc1_b=(3, 30),
+                                     svc1_c=(5, 5)))
+        caps = zm.get_zone_capabilities(self, 'svc10')
+        self.assertEquals(caps, dict(svc10_a=(99, 99), svc10_b=(99, 99)))
+
     def test_refresh_from_db_replace_existing(self):
         zm = zone_manager.ZoneManager()
         zone_state = zone_manager.ZoneState()
 
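The (min, max) aggregation that test_service_capabilities checks above can be sketched as follows. This is a simplified, illustrative stand-in (class name ZoneCapabilities is invented here), not the actual nova.scheduler.zone_manager code: each (service, host) pair reports a dict of capability values, and the zone summary keeps a (min, max) pair per "<service>_<capability>" key across all reporting hosts.

```python
class ZoneCapabilities(object):
    def __init__(self):
        # (service, host) -> {capability: value}, latest report wins
        self.service_states = {}

    def update_service_capabilities(self, service, host, caps):
        self.service_states[(service, host)] = caps

    def get_zone_capabilities(self, service=None):
        """Summarize capabilities as {"svc_cap": (min, max)} across hosts."""
        combined = {}
        for (svc, host), caps in self.service_states.items():
            if service and service != svc:
                continue
            for cap, value in caps.items():
                key = "%s_%s" % (svc, cap)
                lo, hi = combined.get(key, (value, value))
                combined[key] = (min(lo, value), max(hi, value))
        return combined


zc = ZoneCapabilities()
zc.update_service_capabilities("svc1", "host1", dict(a=2, b=3))
zc.update_service_capabilities("svc1", "host2", dict(a=20, b=30))
print(zc.get_zone_capabilities())  # {'svc1_a': (2, 20), 'svc1_b': (3, 30)}
```

Re-reporting from the same host replaces that host's previous values, which is why the test sees (2, 2) after host1 updates a=1 to a=2.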
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py	2011-03-03 13:54:11 +0000
+++ nova/volume/manager.py	2011-03-24 12:14:14 +0000
@@ -64,14 +64,15 @@
                     'if True, will not discover local volumes')
 
 
-class VolumeManager(manager.Manager):
+class VolumeManager(manager.SchedulerDependentManager):
     """Manages attachable block storage devices."""
     def __init__(self, volume_driver=None, *args, **kwargs):
        """Load the driver from the one specified in args, or from flags."""
         if not volume_driver:
             volume_driver = FLAGS.volume_driver
         self.driver = utils.import_object(volume_driver)
-        super(VolumeManager, self).__init__(*args, **kwargs)
+        super(VolumeManager, self).__init__(service_name='volume',
+                                            *args, **kwargs)
         # NOTE(vish): Implementation specific db handling is done
         # by the driver.
         self.driver.db = self.db
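The zone-forwarding behavior the ZoneRedirectTest cases exercise can be sketched roughly as below. The names here (ChildZone, reroute_on_not_found) are illustrative stand-ins, not the actual nova.scheduler.api.reroute_compute implementation: a decorator wraps a compute call, and if the local lookup raises a not-found error, the request fans out to the child zones and the first remote hit is returned.

```python
class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""
    pass


class ChildZone(object):
    """Stand-in for a child zone reachable via novaclient."""
    def __init__(self, name, instances):
        self.name = name
        self.instances = instances  # instance_id -> attribute dict

    def get(self, instance_id):
        return self.instances.get(instance_id)


class reroute_on_not_found(object):
    """Decorator: try the call locally, then fan out to child zones."""
    def __init__(self, zones):
        self.zones = zones

    def __call__(self, func):
        def wrapped(context, instance_id):
            try:
                return func(context, instance_id)
            except InstanceNotFound:
                for zone in self.zones:
                    result = zone.get(instance_id)
                    if result is not None:
                        # Marshall the remote answer like a local response.
                        return dict(server=result)
                raise  # no zone could resolve it; let the error propagate
        return wrapped


zones = [ChildZone('zone2', {123: dict(name='myserver')})]


@reroute_on_not_found(zones)
def show(context, instance_id):
    # Pretend the local database lookup missed.
    raise InstanceNotFound()


result = show(None, 123)
print(result)  # {'server': {'name': 'myserver'}}
```

In the real branch the forwarding is raised as a RedirectResult exception and caught by a WSGI fault handler rather than returned directly, which is the design point the review comment below takes issue with.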
Hi Sandy,
I don't think WSGI middleware is the correct place to handle zone forwarding. What if we write a tool directly against nova.compute.api, or if some other component (like a compute or network worker) issues a nova.compute.api call? I think all logic and routing should instead be handled inside nova.compute and/or nova.scheduler. Nothing in nova.api should ever be aware of what is happening.