Merge lp:~sandy-walsh/nova/dist-sched-2a into lp:~hudson-openstack/nova/trunk
Status: Superseded

Proposed branch: lp:~sandy-walsh/nova/dist-sched-2a
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~rconradharris/nova/dist-sched-1
To merge this branch: bzr merge lp:~sandy-walsh/nova/dist-sched-2a
Review via email: mp+61245@code.launchpad.net

Diff against target: 706 lines (+221/-132), 8 files modified:
- nova/compute/api.py (+6/-10)
- nova/exception.py (+2/-3)
- nova/scheduler/host_filter.py (+70/-35)
- nova/scheduler/manager.py (+6/-1)
- nova/scheduler/zone_aware_scheduler.py (+62/-31)
- nova/tests/test_host_filter.py (+49/-51)
- nova/tests/test_zone_aware_scheduler.py (+3/-1)
- nova/virt/fake.py (+23/-0)

Reviewer | Review Type | Status
---|---|---
Ed Leafe | community | Approve
Rick Harris | community | Approve
This proposal has been superseded by a proposal from 2011-05-31.
Commit message
Basic hook-up to HostFilter and fixed up the passing of InstanceType spec to the scheduler.
Description of the change
Basic hook-up to HostFilter and fixed up the passing of InstanceType spec to the scheduler.
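For concreteness, the shape of the request_spec now handed from nova.compute.api to the scheduler, and the allow-list lookup mirroring choose_host_filter() in the diff below, can be sketched like this (a hedged sketch, not the branch's code verbatim; the instance_type values are illustrative):

```python
# Sketch of the request_spec that compute.api now sends to the scheduler.
request_spec = {
    'instance_type': {            # the InstanceType record, as a dict
        'name': 'tiny',
        'memory_mb': 50,
        'local_gb': 500,
    },
    # Fully-qualified name of the HostFilter class the scheduler should use.
    'filter': 'nova.scheduler.host_filter.InstanceTypeFilter',
}

# The scheduler resolves the name against an authoritative list of
# permissible filters, mirroring choose_host_filter() in this branch.
KNOWN_FILTERS = [
    'nova.scheduler.host_filter.AllHostsFilter',
    'nova.scheduler.host_filter.InstanceTypeFilter',
    'nova.scheduler.host_filter.JsonFilter',
]

def choose_host_filter(filter_name):
    """Return the filter name if it is permissible, else raise."""
    if filter_name in KNOWN_FILTERS:
        return filter_name
    raise LookupError('Scheduler Host Filter %s could not be found.'
                      % filter_name)

print(choose_host_filter(request_spec['filter']))
```

The allow-list matters because the filter name arrives over the message queue; only a predefined set of classes may be instantiated from it.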
Sandy Walsh (sandy-walsh) wrote:
Todd Willey (xtoddx) wrote:
Can you merge trunk so the diff gets updated?
Sandy Walsh (sandy-walsh) wrote:
All done ... thanks for the reminder, Todd.
Jay Pipes (jaypipes) wrote:
Hi guys,
Couple things, mostly around trying to get the terminology a little more concrete. I'm hoping that you would be amenable to changing the term "driver" to simply be "filter" in the scheduler code. In some places, filters are called filters (in the class name and the docstrings). In other places, they're called drivers or filter drivers.
For instance:
36 +class InstanceTypeFilter(HostFilter):
37 +    """HostFilter driver hard-coded to work with InstanceType records."""
I'd like to see that docstring say:
"""Type of filter that inspects/filters on attributes of an InstanceType"""
Similarly, here:
11 +    'filter_driver':
12 +        'nova.scheduler.host_filter.'
13 +        'InstanceTypeFilter'
Would just be:
'filter': 'nova.scheduler.host_filter.InstanceTypeFilter'
The reason I ask for this change is that the term "driver" is already overloaded in the Nova code, along with terms like "manager".
Secondly, filters typically support "chaining". In other words, a filter merely removes elements of a set and returns the filtered set to the next filter in the chain. It would be good to be able to add multiple filters to the request.
Something like:
'filters': [
{'filter': 'nova.scheduler
'on': instance_type},
{'filter': 'nova.scheduler
'on': network_ref},
]
Just some thoughts to chew on. :)
Cheers,
jay
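Jay's chaining idea can be sketched as follows (an illustrative sketch only: the filter functions and host data are made up, and only the capability keys mirror names used elsewhere in this branch):

```python
# Each filter takes the host set produced by the previous filter and
# returns a (possibly smaller) set, so filters compose into a chain.

def ram_filter(hosts, spec):
    """Keep hosts with enough free memory for the instance type."""
    return [(host, caps) for host, caps in hosts
            if caps['host_memory_free'] >= spec['memory_mb']]

def disk_filter(hosts, spec):
    """Keep hosts with enough available disk for the instance type."""
    return [(host, caps) for host, caps in hosts
            if caps['disk_available'] >= spec['local_gb']]

def run_filter_chain(hosts, spec, chain):
    """Apply each filter in turn, feeding its output to the next one."""
    for host_filter in chain:
        hosts = host_filter(hosts, spec)
    return hosts

hosts = [
    ('host01', {'host_memory_free': 100, 'disk_available': 1000}),
    ('host02', {'host_memory_free': 10,  'disk_available': 1000}),
    ('host03', {'host_memory_free': 100, 'disk_available': 10}),
]
spec = {'memory_mb': 50, 'local_gb': 500}

survivors = run_filter_chain(hosts, spec, [ram_filter, disk_filter])
print([host for host, caps in survivors])  # only host01 passes both filters
```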
Sandy Walsh (sandy-walsh) wrote:
Good points. I'll clear up the "driver" thing. I agree fully.
Lemme stew on the chaining thing. Interesting idea. Perhaps that's something we can work into a coming merge?
Jay Pipes (jaypipes) wrote:
> Good points. I'll clear up the "driver" thing. I agree fully.
sweet :)
> Lemme stew on the chaining thing. Interesting idea. Perhaps that's something
> we can work into a coming merge?
Absolutely.
-jay
Jay Pipes (jaypipes) wrote:
11 + 'filter_driver':
Can we just make that 'filter'?
-jay
Sandy Walsh (sandy-walsh) wrote:
doh! Missed one :(
Ed Leafe (ed-leafe) wrote:
Several places: the standard for multiline docstrings is to put the closing tripquotes on their own line.
Starting on line 169: the comparison expression is long and cumbersome; it would read much better if it were broken into separate lines:
for filter_class in FILTERS:
    filter_match = "%s.%s" % (filter_class.__module__,
                              filter_class.__name__)
    if filter_match == filter_name:
        return filter_class()
Diff line 290: you have a string format operator, but no format placeholders in the string.
Other than those relatively minor things, looks good!
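For context, the pattern Ed is flagging at diff line 290 typically looks like this (a minimal illustration; the string and variable here are made up, not the actual diff line):

```python
# A '%' format operator applied to a string that contains no format
# placeholders. With a mapping on the right-hand side the operator is a
# silent no-op, so it should simply be dropped.
host = 'host01'

msg = "No hosts were available" % locals()  # '%' does nothing here
assert msg == "No hosts were available"

msg = "No hosts were available"  # equivalent, without the stray operator
```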
Sandy Walsh (sandy-walsh) wrote:
Thanks Ed ... good stuff. I'll get right on it.
Ed Leafe (ed-leafe) wrote:
Looks like you misunderstood my docstring comment. Single line docstrings should be:
"""This is a simple docstring."""
Multi-line docstrings should be:
"""This is a more complex explanation.
It requires several lines to more fully explain
the details of this method.
"""
My comment was only about the *closing* triple quote. The opening tripquote does not get its own line.
Jay Pipes (jaypipes) wrote:
tiny weeny nit:
400 +from nova import rpc
401 from nova import log as logging
l comes before r :)
-jay
Sandy Walsh (sandy-walsh) wrote:
phooey ... I'll fix the docstrings (again). heh.
(and thanks again Jay, will fix)
Brian Lamar (blamar) wrote:
@ed
I seriously *hate* to be that guy, because I think the multi-line docstring style you described is more aesthetically pleasing, but according to the HACKING file a multi-line docstring has a newline at the very end. I've had a number of merge proposals held up while I added newlines :)
Ed Leafe (ed-leafe) wrote:
That is the most vile standard. A tripquote all by itself is essentially a blank line. But I guess I'm preaching to the choir here. FWIW, I *never* follow that, and have not gotten dinged. (Please don't take that as a challenge with my next merge prop!) ;-)
Jay Pipes (jaypipes) wrote:
I'd like to propose we move to this standard:
"""T
he first character of the first line end then
.
After all, docstrings
are more intere
sting when it's difficult to read what is
in it.
"""
Joshua McKenty (joshua-mckenty) wrote:
As long as we use \n\r for the first line break, and \n for the others.
Joshua McKenty
Piston Cloud Computing, Inc.
(650) 283-6846
<email address hidden>
On 2011-05-27, at 9:50 AM, Jay Pipes wrote:
> I'd like to propose we move to this standard:
>
> """T
> he first character of the first line end then
> .
>
> After all, docstrings
> are more intere
> sting when it's difficult to read what is
> in it.
>
>
>
>
>
> """
Ed Leafe (ed-leafe) wrote:
> I'd like to propose we move to this standard:
>
> """T
> he first character of the first line end then
> .
>
> After all, docstrings
> are more intere
> sting when it's difficult to read what is
> in it.
>
>
>
>
>
> """
I'd like to see a mandatory blank line between every line with text in it. After all, since whitespace is a good thing, more of it is even better!
Sandy Walsh (sandy-walsh) wrote:
All docstrings should be formal haiku (with seasonal references).
(while this may not be getting me closer to approval on this branch ... it sure is fun)
-S
Rick Harris (rconradharris) wrote:
Looks good. Let's get this merged so we can power through the rest of the dist-scheduler work :)
OpenStack Infra (hudson-openstack) wrote:
No proposals found for merge of lp:~rconradharris/nova/dist-sched-1 into lp:nova.
Unmerged revisions
Preview Diff
1 | === modified file 'nova/compute/api.py' |
2 | --- nova/compute/api.py 2011-05-25 20:58:40 +0000 |
3 | +++ nova/compute/api.py 2011-05-31 17:43:25 +0000 |
4 | @@ -91,7 +91,6 @@ |
5 | """Enforce quota limits on injected files. |
6 | |
7 | Raises a QuotaError if any limit is exceeded. |
8 | - |
9 | """ |
10 | if injected_files is None: |
11 | return |
12 | @@ -140,7 +139,6 @@ |
13 | """Create the number and type of instances requested. |
14 | |
15 | Verifies that quota and other arguments are valid. |
16 | - |
17 | """ |
18 | if not instance_type: |
19 | instance_type = instance_types.get_default_instance_type() |
20 | @@ -268,7 +266,12 @@ |
21 | {"method": "run_instance", |
22 | "args": {"topic": FLAGS.compute_topic, |
23 | "instance_id": instance_id, |
24 | - "instance_type": instance_type, |
25 | + "request_spec": { |
26 | + 'instance_type': instance_type, |
27 | + 'filter': |
28 | + 'nova.scheduler.host_filter.' |
29 | + 'InstanceTypeFilter' |
30 | + }, |
31 | "availability_zone": availability_zone, |
32 | "injected_files": injected_files, |
33 | "admin_password": admin_password}}) |
34 | @@ -294,7 +297,6 @@ |
35 | already exist. |
36 | |
37 | :param context: the security context |
38 | - |
39 | """ |
40 | try: |
41 | db.security_group_get_by_name(context, context.project_id, |
42 | @@ -327,7 +329,6 @@ |
43 | |
44 | Sends an update request to each compute node for whom this is |
45 | relevant. |
46 | - |
47 | """ |
48 | # First, we get the security group rules that reference this group as |
49 | # the grantee.. |
50 | @@ -374,7 +375,6 @@ |
51 | updated |
52 | |
53 | :returns: None |
54 | - |
55 | """ |
56 | rv = self.db.instance_update(context, instance_id, kwargs) |
57 | return dict(rv.iteritems()) |
58 | @@ -424,7 +424,6 @@ |
59 | Use this method instead of get() if this is the only operation you |
60 | intend to to. It will route to novaclient.get if the instance is not |
61 | found. |
62 | - |
63 | """ |
64 | return self.get(context, instance_id) |
65 | |
66 | @@ -434,7 +433,6 @@ |
67 | |
68 | If there is no filter and the context is an admin, it will retreive |
69 | all instances in the system. |
70 | - |
71 | """ |
72 | if reservation_id is not None: |
73 | return self.db.instance_get_all_by_reservation( |
74 | @@ -464,7 +462,6 @@ |
75 | compute worker |
76 | |
77 | :returns: None |
78 | - |
79 | """ |
80 | if not params: |
81 | params = {} |
82 | @@ -514,7 +511,6 @@ |
83 | """Snapshot the given instance. |
84 | |
85 | :returns: A dict containing image metadata |
86 | - |
87 | """ |
88 | properties = {'instance_id': str(instance_id), |
89 | 'user_id': str(context.user_id)} |
90 | |
91 | === modified file 'nova/exception.py' |
92 | --- nova/exception.py 2011-05-27 04:37:39 +0000 |
93 | +++ nova/exception.py 2011-05-31 17:43:25 +0000 |
94 | @@ -473,9 +473,8 @@ |
95 | message = _("Zone %(zone_id)s could not be found.") |
96 | |
97 | |
98 | -class SchedulerHostFilterDriverNotFound(NotFound): |
99 | - message = _("Scheduler Host Filter Driver %(driver_name)s could" |
100 | - " not be found.") |
101 | +class SchedulerHostFilterNotFound(NotFound): |
102 | + message = _("Scheduler Host Filter %(filter_name)s could not be found.") |
103 | |
104 | |
105 | class InstanceMetadataNotFound(NotFound): |
106 | |
107 | === modified file 'nova/scheduler/host_filter.py' |
108 | --- nova/scheduler/host_filter.py 2011-05-09 19:57:56 +0000 |
109 | +++ nova/scheduler/host_filter.py 2011-05-31 17:43:25 +0000 |
110 | @@ -14,8 +14,8 @@ |
111 | # under the License. |
112 | |
113 | """ |
114 | -Host Filter is a driver mechanism for requesting instance resources. |
115 | -Three drivers are included: AllHosts, Flavor & JSON. AllHosts just |
116 | +Host Filter is a mechanism for requesting instance resources. |
117 | +Three filters are included: AllHosts, Flavor & JSON. AllHosts just |
118 | returns the full, unfiltered list of hosts. Flavor is a hard coded |
119 | matching mechanism based on flavor criteria and JSON is an ad-hoc |
120 | filter grammar. |
121 | @@ -42,17 +42,18 @@ |
122 | from nova import flags |
123 | from nova import log as logging |
124 | from nova import utils |
125 | +from nova.scheduler import zone_aware_scheduler |
126 | |
127 | LOG = logging.getLogger('nova.scheduler.host_filter') |
128 | |
129 | FLAGS = flags.FLAGS |
130 | -flags.DEFINE_string('default_host_filter_driver', |
131 | +flags.DEFINE_string('default_host_filter', |
132 | 'nova.scheduler.host_filter.AllHostsFilter', |
133 | - 'Which driver to use for filtering hosts.') |
134 | + 'Which filter to use for filtering hosts.') |
135 | |
136 | |
137 | class HostFilter(object): |
138 | - """Base class for host filter drivers.""" |
139 | + """Base class for host filters.""" |
140 | |
141 | def instance_type_to_filter(self, instance_type): |
142 | """Convert instance_type into a filter for most common use-case.""" |
143 | @@ -63,14 +64,15 @@ |
144 | raise NotImplementedError() |
145 | |
146 | def _full_name(self): |
147 | - """module.classname of the filter driver""" |
148 | + """module.classname of the filter.""" |
149 | return "%s.%s" % (self.__module__, self.__class__.__name__) |
150 | |
151 | |
152 | class AllHostsFilter(HostFilter): |
153 | - """NOP host filter driver. Returns all hosts in ZoneManager. |
154 | + """ NOP host filter. Returns all hosts in ZoneManager. |
155 | This essentially does what the old Scheduler+Chance used |
156 | - to give us.""" |
157 | + to give us. |
158 | + """ |
159 | |
160 | def instance_type_to_filter(self, instance_type): |
161 | """Return anything to prevent base-class from raising |
162 | @@ -83,8 +85,8 @@ |
163 | for host, services in zone_manager.service_states.iteritems()] |
164 | |
165 | |
166 | -class FlavorFilter(HostFilter): |
167 | - """HostFilter driver hard-coded to work with flavors.""" |
168 | +class InstanceTypeFilter(HostFilter): |
169 | + """HostFilter hard-coded to work with InstanceType records.""" |
170 | |
171 | def instance_type_to_filter(self, instance_type): |
172 | """Use instance_type to filter hosts.""" |
173 | @@ -98,9 +100,10 @@ |
174 | capabilities = services.get('compute', {}) |
175 | host_ram_mb = capabilities['host_memory_free'] |
176 | disk_bytes = capabilities['disk_available'] |
177 | - if host_ram_mb >= instance_type['memory_mb'] and \ |
178 | - disk_bytes >= instance_type['local_gb']: |
179 | - selected_hosts.append((host, capabilities)) |
180 | + spec_ram = instance_type['memory_mb'] |
181 | + spec_disk = instance_type['local_gb'] |
182 | + if host_ram_mb >= spec_ram and disk_bytes >= spec_disk: |
183 | + selected_hosts.append((host, capabilities)) |
184 | return selected_hosts |
185 | |
186 | #host entries (currently) are like: |
187 | @@ -109,15 +112,15 @@ |
188 | # 'host_memory_total': 8244539392, |
189 | # 'host_memory_overhead': 184225792, |
190 | # 'host_memory_free': 3868327936, |
191 | -# 'host_memory_free_computed': 3840843776}, |
192 | -# 'host_other-config': {}, |
193 | +# 'host_memory_free_computed': 3840843776, |
194 | +# 'host_other_config': {}, |
195 | # 'host_ip_address': '192.168.1.109', |
196 | # 'host_cpu_info': {}, |
197 | # 'disk_available': 32954957824, |
198 | # 'disk_total': 50394562560, |
199 | -# 'disk_used': 17439604736}, |
200 | +# 'disk_used': 17439604736, |
201 | # 'host_uuid': 'cedb9b39-9388-41df-8891-c5c9a0c0fe5f', |
202 | -# 'host_name-label': 'xs-mini'} |
203 | +# 'host_name_label': 'xs-mini'} |
204 | |
205 | # instance_type table has: |
206 | #name = Column(String(255), unique=True) |
207 | @@ -131,8 +134,9 @@ |
208 | |
209 | |
210 | class JsonFilter(HostFilter): |
211 | - """Host Filter driver to allow simple JSON-based grammar for |
212 | - selecting hosts.""" |
213 | + """Host Filter to allow simple JSON-based grammar for |
214 | + selecting hosts. |
215 | + """ |
216 | |
217 | def _equals(self, args): |
218 | """First term is == all the other terms.""" |
219 | @@ -228,7 +232,8 @@ |
220 | |
221 | def _parse_string(self, string, host, services): |
222 | """Strings prefixed with $ are capability lookups in the |
223 | - form '$service.capability[.subcap*]'""" |
224 | + form '$service.capability[.subcap*]' |
225 | + """ |
226 | if not string: |
227 | return None |
228 | if string[0] != '$': |
229 | @@ -271,18 +276,48 @@ |
230 | return hosts |
231 | |
232 | |
233 | -DRIVERS = [AllHostsFilter, FlavorFilter, JsonFilter] |
234 | - |
235 | - |
236 | -def choose_driver(driver_name=None): |
237 | - """Since the caller may specify which driver to use we need |
238 | - to have an authoritative list of what is permissible. This |
239 | - function checks the driver name against a predefined set |
240 | - of acceptable drivers.""" |
241 | - |
242 | - if not driver_name: |
243 | - driver_name = FLAGS.default_host_filter_driver |
244 | - for driver in DRIVERS: |
245 | - if "%s.%s" % (driver.__module__, driver.__name__) == driver_name: |
246 | - return driver() |
247 | - raise exception.SchedulerHostFilterDriverNotFound(driver_name=driver_name) |
248 | +FILTERS = [AllHostsFilter, InstanceTypeFilter, JsonFilter] |
249 | + |
250 | + |
251 | +def choose_host_filter(filter_name=None): |
252 | + """Since the caller may specify which filter to use we need |
253 | + to have an authoritative list of what is permissible. This |
254 | + function checks the filter name against a predefined set |
255 | + of acceptable filters. |
256 | + """ |
257 | + |
258 | + if not filter_name: |
259 | + filter_name = FLAGS.default_host_filter |
260 | + for filter_class in FILTERS: |
261 | + host_match = "%s.%s" % (filter_class.__module__, filter_class.__name__) |
262 | + if host_match == filter_name: |
263 | + return filter_class() |
264 | + raise exception.SchedulerHostFilterNotFound(filter_name=filter_name) |
265 | + |
266 | + |
267 | +class HostFilterScheduler(zone_aware_scheduler.ZoneAwareScheduler): |
268 | + """The HostFilterScheduler uses the HostFilter to filter |
269 | + hosts for weighing. The particular filter used may be passed in |
270 | + as an argument or the default will be used. |
271 | + |
272 | + request_spec = {'filter': <Filter name>, |
273 | + 'instance_type': <InstanceType dict>} |
274 | + """ |
275 | + |
276 | + def filter_hosts(self, num, request_spec): |
277 | + """Filter the full host list (from the ZoneManager)""" |
278 | + filter_name = request_spec.get('filter', None) |
279 | + host_filter = choose_host_filter(filter_name) |
280 | + |
281 | + # TODO(sandy): We're only using InstanceType-based specs |
282 | + # currently. Later we'll need to snoop for more detailed |
283 | + # host filter requests. |
284 | + instance_type = request_spec['instance_type'] |
285 | + name, query = host_filter.instance_type_to_filter(instance_type) |
286 | + return host_filter.filter_hosts(self.zone_manager, query) |
287 | + |
288 | + def weigh_hosts(self, num, request_spec, hosts): |
289 | + """Derived classes must override this method and return |
290 | + a lists of hosts in [{weight, hostname}] format. |
291 | + """ |
292 | + return [dict(weight=1, hostname=host) for host, caps in hosts] |
293 | |
294 | === modified file 'nova/scheduler/manager.py' |
295 | --- nova/scheduler/manager.py 2011-05-05 14:35:44 +0000 |
296 | +++ nova/scheduler/manager.py 2011-05-31 17:43:25 +0000 |
297 | @@ -83,11 +83,16 @@ |
298 | except AttributeError: |
299 | host = self.driver.schedule(elevated, topic, *args, **kwargs) |
300 | |
301 | + if not host: |
302 | + LOG.debug(_("%(topic)s %(method)s handled in Scheduler") |
303 | + % locals()) |
304 | + return |
305 | + |
306 | rpc.cast(context, |
307 | db.queue_get_for(context, topic, host), |
308 | {"method": method, |
309 | "args": kwargs}) |
310 | - LOG.debug(_("Casting to %(topic)s %(host)s for %(method)s") % locals()) |
311 | + LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals()) |
312 | |
313 | # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin. |
314 | # Based on bexar design summit discussion, |
315 | |
316 | === modified file 'nova/scheduler/zone_aware_scheduler.py' |
317 | --- nova/scheduler/zone_aware_scheduler.py 2011-05-13 13:12:18 +0000 |
318 | +++ nova/scheduler/zone_aware_scheduler.py 2011-05-31 17:43:25 +0000 |
319 | @@ -22,7 +22,9 @@ |
320 | |
321 | import operator |
322 | |
323 | +from nova import db |
324 | from nova import log as logging |
325 | +from nova import rpc |
326 | from nova.scheduler import api |
327 | from nova.scheduler import driver |
328 | |
329 | @@ -36,7 +38,7 @@ |
330 | """Call novaclient zone method. Broken out for testing.""" |
331 | return api.call_zone_method(context, method, specs=specs) |
332 | |
333 | - def schedule_run_instance(self, context, topic='compute', specs={}, |
334 | + def schedule_run_instance(self, context, instance_id, request_spec, |
335 | *args, **kwargs): |
336 | """This method is called from nova.compute.api to provision |
337 | an instance. However we need to look at the parameters being |
338 | @@ -44,56 +46,83 @@ |
339 | 1. Create a Build Plan and then provision, or |
340 | 2. Use the Build Plan information in the request parameters |
341 | to simply create the instance (either in this zone or |
342 | - a child zone).""" |
343 | - |
344 | - if 'blob' in specs: |
345 | - return self.provision_instance(context, topic, specs) |
346 | + a child zone). |
347 | + """ |
348 | + |
349 | + # TODO(sandy): We'll have to look for richer specs at some point. |
350 | + |
351 | + if 'blob' in request_spec: |
352 | + self.provision_resource(context, request_spec, instance_id, kwargs) |
353 | + return None |
354 | |
355 | # Create build plan and provision ... |
356 | - build_plan = self.select(context, specs) |
357 | + build_plan = self.select(context, request_spec) |
358 | + if not build_plan: |
359 | + raise driver.NoValidHost(_('No hosts were available')) |
360 | + |
361 | for item in build_plan: |
362 | - self.provision_instance(context, topic, item) |
363 | - |
364 | - def provision_instance(context, topic, item): |
365 | - """Create the requested instance in this Zone or a child zone.""" |
366 | - pass |
367 | - |
368 | - def select(self, context, *args, **kwargs): |
369 | + self.provision_resource(context, item, instance_id, kwargs) |
370 | + |
371 | + # Returning None short-circuits the routing to Compute (since |
372 | + # we've already done it here) |
373 | + return None |
374 | + |
375 | + def provision_resource(self, context, item, instance_id, kwargs): |
376 | + """Create the requested resource in this Zone or a child zone.""" |
377 | + if "hostname" in item: |
378 | + host = item['hostname'] |
379 | + kwargs['instance_id'] = instance_id |
380 | + rpc.cast(context, |
381 | + db.queue_get_for(context, "compute", host), |
382 | + {"method": "run_instance", |
383 | + "args": kwargs}) |
384 | + LOG.debug(_("Casted to compute %(host)s for run_instance") |
385 | + % locals()) |
386 | + else: |
387 | + # TODO(sandy) Provision in child zone ... |
388 | + LOG.warning(_("Provision to Child Zone not supported (yet)")) |
389 | + pass |
390 | + |
391 | + def select(self, context, request_spec, *args, **kwargs): |
392 | """Select returns a list of weights and zone/host information |
393 | corresponding to the best hosts to service the request. Any |
394 | child zone information has been encrypted so as not to reveal |
395 | - anything about the children.""" |
396 | - return self._schedule(context, "compute", *args, **kwargs) |
397 | + anything about the children. |
398 | + """ |
399 | + return self._schedule(context, "compute", request_spec, |
400 | + *args, **kwargs) |
401 | |
402 | - def schedule(self, context, topic, *args, **kwargs): |
403 | + # TODO(sandy): We're only focused on compute instances right now, |
404 | + # so we don't implement the default "schedule()" method required |
405 | + # of Schedulers. |
406 | + def schedule(self, context, topic, request_spec, *args, **kwargs): |
407 | """The schedule() contract requires we return the one |
408 | best-suited host for this request. |
409 | """ |
410 | - res = self._schedule(context, topic, *args, **kwargs) |
411 | - # TODO(sirp): should this be a host object rather than a weight-dict? |
412 | - if not res: |
413 | - raise driver.NoValidHost(_('No hosts were available')) |
414 | - return res[0] |
415 | + raise driver.NoValidHost(_('No hosts were available')) |
416 | |
417 | - def _schedule(self, context, topic, *args, **kwargs): |
418 | + def _schedule(self, context, topic, request_spec, *args, **kwargs): |
419 | """Returns a list of hosts that meet the required specs, |
420 | ordered by their fitness. |
421 | """ |
422 | |
423 | - #TODO(sandy): extract these from args. |
424 | + if topic != "compute": |
425 | + raise NotImplemented(_("Zone Aware Scheduler only understands " |
426 | + "Compute nodes (for now)")) |
427 | + |
428 | + #TODO(sandy): how to infer this from OS API params? |
429 | num_instances = 1 |
430 | - specs = {} |
431 | |
432 | # Filter local hosts based on requirements ... |
433 | - host_list = self.filter_hosts(num_instances, specs) |
434 | + host_list = self.filter_hosts(num_instances, request_spec) |
435 | |
436 | # then weigh the selected hosts. |
437 | # weighted = [{weight=weight, name=hostname}, ...] |
438 | - weighted = self.weigh_hosts(num_instances, specs, host_list) |
439 | + weighted = self.weigh_hosts(num_instances, request_spec, host_list) |
440 | |
441 | # Next, tack on the best weights from the child zones ... |
442 | child_results = self._call_zone_method(context, "select", |
443 | - specs=specs) |
444 | + specs=request_spec) |
445 | for child_zone, result in child_results: |
446 | for weighting in result: |
447 | # Remember the child_zone so we can get back to |
448 | @@ -108,12 +137,14 @@ |
449 | weighted.sort(key=operator.itemgetter('weight')) |
450 | return weighted |
451 | |
452 | - def filter_hosts(self, num, specs): |
453 | + def filter_hosts(self, num, request_spec): |
454 | """Derived classes must override this method and return |
455 | - a list of hosts in [(hostname, capability_dict)] format.""" |
456 | + a list of hosts in [(hostname, capability_dict)] format. |
457 | + """ |
458 | raise NotImplemented() |
459 | |
460 | - def weigh_hosts(self, num, specs, hosts): |
461 | + def weigh_hosts(self, num, request_spec, hosts): |
462 | """Derived classes must override this method and return |
463 | - a lists of hosts in [{weight, hostname}] format.""" |
464 | + a lists of hosts in [{weight, hostname}] format. |
465 | + """ |
466 | raise NotImplemented() |
467 | |
468 | === modified file 'nova/tests/test_host_filter.py' |
469 | --- nova/tests/test_host_filter.py 2011-05-09 16:08:56 +0000 |
470 | +++ nova/tests/test_host_filter.py 2011-05-31 17:43:25 +0000 |
471 | @@ -13,7 +13,7 @@ |
472 | # License for the specific language governing permissions and limitations |
473 | # under the License. |
474 | """ |
475 | -Tests For Scheduler Host Filter Drivers. |
476 | +Tests For Scheduler Host Filters. |
477 | """ |
478 | |
479 | import json |
480 | @@ -31,7 +31,7 @@ |
481 | |
482 | |
483 | class HostFilterTestCase(test.TestCase): |
484 | - """Test case for host filter drivers.""" |
485 | + """Test case for host filters.""" |
486 | |
487 | def _host_caps(self, multiplier): |
488 | # Returns host capabilities in the following way: |
489 | @@ -57,8 +57,8 @@ |
490 | 'host_name-label': 'xs-%s' % multiplier} |
491 | |
492 | def setUp(self): |
493 | - self.old_flag = FLAGS.default_host_filter_driver |
494 | - FLAGS.default_host_filter_driver = \ |
495 | + self.old_flag = FLAGS.default_host_filter |
496 | + FLAGS.default_host_filter = \ |
497 | 'nova.scheduler.host_filter.AllHostsFilter' |
498 | self.instance_type = dict(name='tiny', |
499 | memory_mb=50, |
500 | @@ -76,51 +76,52 @@ |
501 | self.zone_manager.service_states = states |
502 | |
503 | def tearDown(self): |
504 | - FLAGS.default_host_filter_driver = self.old_flag |
505 | + FLAGS.default_host_filter = self.old_flag |
506 | |
507 | - def test_choose_driver(self): |
508 | - # Test default driver ... |
509 | - driver = host_filter.choose_driver() |
510 | - self.assertEquals(driver._full_name(), |
511 | + def test_choose_filter(self): |
512 | + # Test default filter ... |
513 | + hf = host_filter.choose_host_filter() |
514 | + self.assertEquals(hf._full_name(), |
515 | 'nova.scheduler.host_filter.AllHostsFilter') |
516 | - # Test valid driver ... |
517 | - driver = host_filter.choose_driver( |
518 | - 'nova.scheduler.host_filter.FlavorFilter') |
519 | - self.assertEquals(driver._full_name(), |
520 | - 'nova.scheduler.host_filter.FlavorFilter') |
521 | - # Test invalid driver ... |
522 | + # Test valid filter ... |
523 | + hf = host_filter.choose_host_filter( |
524 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
525 | + self.assertEquals(hf._full_name(), |
526 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
527 | + # Test invalid filter ... |
528 | try: |
529 | - host_filter.choose_driver('does not exist') |
530 | - self.fail("Should not find driver") |
531 | - except exception.SchedulerHostFilterDriverNotFound: |
532 | + host_filter.choose_host_filter('does not exist') |
533 | + self.fail("Should not find host filter.") |
534 | + except exception.SchedulerHostFilterNotFound: |
535 | pass |
536 | |
537 | - def test_all_host_driver(self): |
538 | - driver = host_filter.AllHostsFilter() |
539 | - cooked = driver.instance_type_to_filter(self.instance_type) |
540 | - hosts = driver.filter_hosts(self.zone_manager, cooked) |
541 | + def test_all_host_filter(self): |
542 | + hf = host_filter.AllHostsFilter() |
543 | + cooked = hf.instance_type_to_filter(self.instance_type) |
544 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
545 | self.assertEquals(10, len(hosts)) |
546 | for host, capabilities in hosts: |
547 | self.assertTrue(host.startswith('host')) |
548 | |
549 | - def test_flavor_driver(self): |
550 | - driver = host_filter.FlavorFilter() |
551 | + def test_instance_type_filter(self): |
552 | + hf = host_filter.InstanceTypeFilter() |
553 | # filter all hosts that can support 50 ram and 500 disk |
554 | - name, cooked = driver.instance_type_to_filter(self.instance_type) |
555 | - self.assertEquals('nova.scheduler.host_filter.FlavorFilter', name) |
556 | - hosts = driver.filter_hosts(self.zone_manager, cooked) |
557 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
558 | + self.assertEquals('nova.scheduler.host_filter.InstanceTypeFilter', |
559 | + name) |
+        hosts = hf.filter_hosts(self.zone_manager, cooked)
         self.assertEquals(6, len(hosts))
         just_hosts = [host for host, caps in hosts]
         just_hosts.sort()
         self.assertEquals('host05', just_hosts[0])
         self.assertEquals('host10', just_hosts[5])

-    def test_json_driver(self):
-        driver = host_filter.JsonFilter()
+    def test_json_filter(self):
+        hf = host_filter.JsonFilter()
         # filter all hosts that can support 50 ram and 500 disk
-        name, cooked = driver.instance_type_to_filter(self.instance_type)
+        name, cooked = hf.instance_type_to_filter(self.instance_type)
         self.assertEquals('nova.scheduler.host_filter.JsonFilter', name)
-        hosts = driver.filter_hosts(self.zone_manager, cooked)
+        hosts = hf.filter_hosts(self.zone_manager, cooked)
         self.assertEquals(6, len(hosts))
         just_hosts = [host for host, caps in hosts]
         just_hosts.sort()
@@ -140,7 +141,7 @@
             ]
         ]
         cooked = json.dumps(raw)
-        hosts = driver.filter_hosts(self.zone_manager, cooked)
+        hosts = hf.filter_hosts(self.zone_manager, cooked)

         self.assertEquals(5, len(hosts))
         just_hosts = [host for host, caps in hosts]
@@ -152,7 +153,7 @@
             ['=', '$compute.host_memory_free', 30],
         ]
         cooked = json.dumps(raw)
-        hosts = driver.filter_hosts(self.zone_manager, cooked)
+        hosts = hf.filter_hosts(self.zone_manager, cooked)

         self.assertEquals(9, len(hosts))
         just_hosts = [host for host, caps in hosts]
@@ -162,7 +163,7 @@

         raw = ['in', '$compute.host_memory_free', 20, 40, 60, 80, 100]
         cooked = json.dumps(raw)
-        hosts = driver.filter_hosts(self.zone_manager, cooked)
+        hosts = hf.filter_hosts(self.zone_manager, cooked)

         self.assertEquals(5, len(hosts))
         just_hosts = [host for host, caps in hosts]
@@ -174,35 +175,32 @@
         raw = ['unknown command', ]
         cooked = json.dumps(raw)
         try:
-            driver.filter_hosts(self.zone_manager, cooked)
+            hf.filter_hosts(self.zone_manager, cooked)
             self.fail("Should give KeyError")
         except KeyError, e:
             pass

-        self.assertTrue(driver.filter_hosts(self.zone_manager, json.dumps([])))
-        self.assertTrue(driver.filter_hosts(self.zone_manager, json.dumps({})))
-        self.assertTrue(driver.filter_hosts(self.zone_manager, json.dumps(
+        self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([])))
+        self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({})))
+        self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps(
             ['not', True, False, True, False]
         )))

         try:
-            driver.filter_hosts(self.zone_manager, json.dumps(
+            hf.filter_hosts(self.zone_manager, json.dumps(
                 'not', True, False, True, False
             ))
             self.fail("Should give KeyError")
         except KeyError, e:
             pass

-        self.assertFalse(driver.filter_hosts(self.zone_manager, json.dumps(
-            ['=', '$foo', 100]
-        )))
-        self.assertFalse(driver.filter_hosts(self.zone_manager, json.dumps(
-            ['=', '$.....', 100]
-        )))
-        self.assertFalse(driver.filter_hosts(self.zone_manager, json.dumps(
-            ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]
-        )))
+        self.assertFalse(hf.filter_hosts(self.zone_manager,
+                json.dumps(['=', '$foo', 100])))
+        self.assertFalse(hf.filter_hosts(self.zone_manager,
+                json.dumps(['=', '$.....', 100])))
+        self.assertFalse(hf.filter_hosts(self.zone_manager,
+                json.dumps(
+                ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]])))

-        self.assertFalse(driver.filter_hosts(self.zone_manager, json.dumps(
-            ['=', {}, ['>', '$missing....foo']]
-        )))
+        self.assertFalse(hf.filter_hosts(self.zone_manager,
+                json.dumps(['=', {}, ['>', '$missing....foo']])))

=== modified file 'nova/tests/test_zone_aware_scheduler.py'
--- nova/tests/test_zone_aware_scheduler.py	2011-05-13 01:44:22 +0000
+++ nova/tests/test_zone_aware_scheduler.py	2011-05-31 17:43:25 +0000
@@ -116,4 +116,6 @@
         sched.set_zone_manager(zm)

         fake_context = {}
-        self.assertRaises(driver.NoValidHost, sched.schedule, fake_context, {})
+        self.assertRaises(driver.NoValidHost, sched.schedule_run_instance,
+                          fake_context, 1,
+                          dict(host_filter=None, instance_type={}))

=== modified file 'nova/virt/fake.py'
--- nova/virt/fake.py	2011-05-05 03:53:04 +0000
+++ nova/virt/fake.py	2011-05-31 17:43:25 +0000
@@ -82,6 +82,21 @@

     def __init__(self):
         self.instances = {}
+        self.host_status = {
+            'host_name-description': 'Fake Host',
+            'host_hostname': 'fake-mini',
+            'host_memory_total': 8000000000,
+            'host_memory_overhead': 10000000,
+            'host_memory_free': 7900000000,
+            'host_memory_free_computed': 7900000000,
+            'host_other_config': {},
+            'host_ip_address': '192.168.1.109',
+            'host_cpu_info': {},
+            'disk_available': 500000000000,
+            'disk_total': 600000000000,
+            'disk_used': 100000000000,
+            'host_uuid': 'cedb9b39-9388-41df-8891-c5c9a0c0fe5f',
+            'host_name_label': 'fake-mini'}

     @classmethod
     def instance(cls):
@@ -456,3 +471,11 @@
     def test_remove_vm(self, instance_name):
         """ Removes the named VM, as if it crashed. For testing"""
         self.instances.pop(instance_name)
+
+    def update_host_status(self):
+        """Return fake Host Status of ram, disk, network."""
+        return self.host_status
+
+    def get_host_stats(self, refresh=False):
+        """Return fake Host Status of ram, disk, network."""
+        return self.host_status
Note that this merge is dependent on dist-sched-1.
This implies the diff below includes all of dist-sched-1 as well. Sorry for the clutter :/
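The `host_status` dict added to `nova/virt/fake.py` is the kind of capability report the host filters consume. A hypothetical helper sketches the basic "does this host fit the flavor" check; `fits_instance_type` is not part of nova, and the `memory_mb`/`local_gb` keys are borrowed from nova's InstanceType records for illustration.

```python
MB = 1024 * 1024
GB = 1024 * MB

# Capability values mirror the FakeConnection.host_status dict above.
host_status = {
    'host_memory_free': 7900000000,
    'disk_available': 500000000000,
}


def fits_instance_type(status, instance_type):
    """True if the host reports enough free RAM and disk for the flavor."""
    return (status['host_memory_free'] >= instance_type['memory_mb'] * MB and
            status['disk_available'] >= instance_type['local_gb'] * GB)


# A flavor asking for 512 MB RAM and 20 GB disk fits the fake host:
print(fits_instance_type(host_status, {'memory_mb': 512, 'local_gb': 20}))  # True
```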