Merge lp:~rconradharris/nova/dist-sched-2b into lp:~hudson-openstack/nova/trunk
- dist-sched-2b
- Merge into trunk
| Status: | Merged |
|---|---|
| Approved by: | Sandy Walsh |
| Approved revision: | 1083 |
| Merged at revision: | 1145 |
| Proposed branch: | lp:~rconradharris/nova/dist-sched-2b |
| Merge into: | lp:~hudson-openstack/nova/trunk |
| Prerequisite: | lp:~sandy-walsh/nova/dist-sched-2a |
| Diff against target: | 1123 lines (+577/-215), 10 files modified |
| To merge this branch: | bzr merge lp:~rconradharris/nova/dist-sched-2b |
| Related bugs: | |
| Related blueprints: | |

Files modified:

* nova/exception.py (+9/-0)
* nova/scheduler/host_filter.py (+1/-0)
* nova/scheduler/least_cost.py (+156/-0)
* nova/scheduler/zone_aware_scheduler.py (+13/-6)
* nova/test.py (+15/-3)
* nova/tests/scheduler/test_host_filter.py (+206/-0)
* nova/tests/scheduler/test_least_cost_scheduler.py (+144/-0)
* nova/tests/scheduler/test_scheduler.py (+2/-1)
* nova/tests/scheduler/test_zone_aware_scheduler.py (+31/-0)
* nova/tests/test_host_filter.py (+0/-205)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Brian Waldon (community) | | | Approve |
| Ed Leafe (community) | | | Needs Fixing |
| Sandy Walsh (community) | | | Approve |

Review via email: mp+61352@code.launchpad.net
Commit message
Description of the change
Adds LeastCostScheduler which uses a series of cost functions and associated weights to determine which host to provision to.
Left for future work:
* Handle scheduling of many instances (currently assumes n=1)
* Handle scheduling of arbitrary resources (currently weigh_hosts only handles instances)
* Add more cost functions (currently just noop and fill-first)
* Simulator so we can determine sensible values for cost-function-weights
NOTE: This patch depends on Sandy's dist-scheduler-2a patch.
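The selection mechanism described above can be sketched in a few lines. This is a standalone illustration, not Nova's actual scheduler API; the `(hostname, caps)` shape and function names below are simplified assumptions:

```python
# Minimal sketch of least-cost scheduling: each cost function scores every
# candidate host, scores are normalized and weighted, the cheapest host wins.

def normalize(scores):
    """Scale scores into [0, 1] so weights are comparable across functions."""
    top = max(scores)
    return [s / top for s in scores] if top > 0 else scores

def least_cost_host(hosts, weighted_fns):
    """hosts: list of (hostname, caps); weighted_fns: list of (weight, fn)."""
    totals = [0.0] * len(hosts)
    for weight, fn in weighted_fns:
        for i, score in enumerate(normalize([fn(h) for h in hosts])):
            totals[i] += weight * score
    # Pick the host with the smallest weighted cost.
    return min(zip(totals, (name for name, _ in hosts)))[1]

hosts = [("host01", {"free_ram": 512}), ("host02", {"free_ram": 256})]
# Fill-first: free RAM is a *cost*, so the fuller host02 wins.
fns = [(1, lambda h: h[1]["free_ram"])]
print(least_cost_host(hosts, fns))  # host02
```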
Rick Harris (rconradharris) wrote:
Sandy Walsh (sandy-walsh) wrote:
Good branch Rick. Look forward to playing with it for realz.
Perhaps after this goes through you could update /doc/source/
Brian Waldon (bcwaldon) wrote:
Great work, Rick. I'm definitely looking forward to this new functionality. Thanks for using the exception module correctly :) Here's what I think:
28: This is a duplicate line
71: I don't really like seeing method/class names in flags. Would it make sense to switch to individual flags for each cost function, using an alias instead of a path to the function name?
There seems to be a bit of magic going on with these *_cost_fn_weight flags and actual cost functions. I don't see the need to make the code surrounding cost functions so automatic. You must have given a lot of thought to this, so I would love to hear your reasoning.
87: Should this be grouped up at the top with the other flags? I know we do it that way in other modules.
91: Should this function be called something different? The name doesn't seem to actually represent what it does.
Ed Leafe (ed-leafe) wrote:
The code works fine when run in a single-zone deployment. However, due to a disconnect between the different forks of novaclient, the call to nested zones for select() uses POST, but the controller code is expecting a GET with a query string. This problem was actually introduced in the merge-2a branch, is fixed in the upcoming merge-3 branch, and only affects code that is deployed across nested zones. I've put together a patch that prevents this from crashing when running in nested zones: http://
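A hypothetical sketch of the method mismatch being described; the request shapes below are invented for illustration and are not novaclient's actual objects:

```python
# The caller sends the spec in a POST body, but the controller only reads
# the query string, so the spec silently comes back empty.

def controller_select(request):
    # Expects GET: /select?specs=... (query string only).
    return request.get("query", {}).get("specs")

get_request = {"method": "GET", "query": {"specs": "ram>=512"}}
post_request = {"method": "POST", "body": {"specs": "ram>=512"}}

print(controller_select(get_request))   # ram>=512
print(controller_select(post_request))  # None -> downstream crash
```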
Rick Harris (rconradharris) wrote:
Brian, thanks for the feedback.
> 28: This is a duplicate line
Good catch. Fixed when I remerged trunk.
> Would it make sense to switch to individual flags for each cost function, using an alias instead of a path to the function name?
I'd be a little skeptical of using an alias. One thing I love about using the fully-qualified module or class name is that you know exactly where to go look for the implementation. An alias would add one more level of indirection for (I'm not sure what) gain.
Besides, there's already quite a bit of precedent in the Nova code base for using this pattern:
353:DEFINE_
355:DEFINE_
357:DEFINE_
359:DEFINE_
361:DEFINE_
365:DEFINE_
If we'd like to move away from this approach, perhaps we can strike up an ML conversation and reach a consensus?
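For context, the dotted-path flags under discussion get resolved to callables at runtime. A standalone, stdlib-only equivalent of that lookup (not Nova's actual `utils.import_class`, which may differ) might look like:

```python
import importlib

def import_callable(dotted_path):
    """Resolve 'package.module.name' to the callable it refers to."""
    module_name, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# A flag value like 'nova.scheduler.least_cost.noop_cost_fn' would resolve
# the same way; math.sqrt stands in here so the sketch is self-contained.
fn = import_callable("math.sqrt")
print(fn(9.0))  # 3.0
```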
> There seems to be a bit of magic going on with these *_cost_fn_weight flags and actual cost
> functions. I don't see the need to make the code surrounding cost functions so automatic. You must
> have given a lot of thought to this, so I would love to hear your reasoning.
I wouldn't say I gave this a *lot* of thought :) But, the basic idea wasn't to save keystrokes or anything like that; it was to enforce sensible naming of the weight-flags: each weight flag would be required to be named exactly the same as its corresponding cost function. This would make the code easier to follow (I thought), and enhance grep-ability.
I'm totally with you on the idea that "magic" is usually a bad idea. That said, I think there are cases where some cleverness can be used judiciously to enforce some best-practices. This seemed to me like a decent case for that, but, I could be persuaded otherwise :)
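A minimal standalone sketch of the naming convention being defended here; the `Flags` stand-in and function names below are illustrative, not Nova's real flag objects:

```python
# Each cost function's weight lives in a flag named exactly
# '<function name>_weight', so the weight can be looked up mechanically
# instead of being wired by hand -- and grep finds both at once.

class Flags(object):  # stand-in for Nova's FLAGS object
    noop_cost_fn_weight = 1
    fill_first_cost_fn_weight = 2

FLAGS = Flags()

def noop_cost_fn(host):
    return 1

def weight_for(cost_fn):
    flag_name = "%s_weight" % cost_fn.__name__
    try:
        return getattr(FLAGS, flag_name)
    except AttributeError:
        raise LookupError("no weight flag named %s" % flag_name)

print(weight_for(noop_cost_fn))  # 1
```

The convention trades a little "magic" for a guarantee that every cost function's weight flag is named predictably.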
> 87: Should this be grouped up at the top with the other flags? i know we do it that way in other modules.
This is a case where I purposefully differed from convention. I felt it was better to keep the weight-flag and its corresponding cost-function close together. They conceptually make up a single unit (the weighed cost), so I thought keeping them close would reinforce that idea. Plus, it made reorganizing/
> 91: Should this function be called something different? The name doesn't seem to actually represent what it does.
Could you expand on this?
The thought with the naming was:
* If you allocate to servers with the most available ram, you're performing a 'load-leveling' type of distribution.
* In contrast, if you allocate to servers with the least available ram, then you are preferring to fill utilized servers up before moving on to free machines (aka fill-first).
Would a NOTE(sirp): comment help here, or does the naming not make sense at all?
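To illustrate the distinction being drawn (hypothetical standalone code, not the patch's implementation):

```python
# With free RAM as the cost, the scheduler prefers the *fullest* viable host
# (fill-first); flipping the preference gives load-leveling instead.

hosts = {"host01": 512, "host02": 256, "host03": 1024}  # free RAM in MB

def fill_first(free_ram_by_host):
    # Least free RAM wins: pack busy hosts before touching idle ones.
    return min(free_ram_by_host, key=free_ram_by_host.get)

def load_level(free_ram_by_host):
    # Most free RAM wins: spread instances evenly across hosts.
    return max(free_ram_by_host, key=free_ram_by_host.get)

print(fill_first(hosts))  # host02
print(load_level(hosts))  # host03
```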
Brian Waldon (bcwaldon) wrote:
> > Would it make sense to switch to individual flags for each cost function,
> using an alias instead of a path to the function name?
>
> I'd be a little skeptical of using an alias. One thing I love about using the
> fully-qualified module or class name is, you know exactly where to go look for
> the implementation. An alias would add one more level of indirection for (I'm
> not sure what) gain.
>
> Besides, there's already quite a bit of precedent in the Nova code base
> already for using this pattern:
>
> 353:DEFINE_
> 355:DEFINE_
> 'nova.console.
> 357:DEFINE_
> 359:DEFINE_
> 361:DEFINE_
> 'nova.scheduler
> 365:DEFINE_
>
> If we'd like to move away from this approach, perhaps we can strike up an ML
> conversation and reach a consensus?
Ok, we'll talk about this later. It's okay for now.
> > There seems to be a bit of magic going on with these *_cost_fn_weight flags
> and actual cost
> > functions. I don't see the need to make the code surrounding cost functions
> so automatic. You must have given a lot of thought to this, so I would love
> to hear your reasoning.
>
> I wouldn't say I gave this a *lot* of thought :) But, the basic idea wasn't to
> save keystrokes or anything like that; it was to enforce sensible naming of
> the weight-flags: each weight flag would be required to be named exactly the
> same as its corresponding cost function. This would make the code easier to
> follow (I thought), and enhance grep-ability.
>
> I'm totally with you on the idea that "magic" is usually a bad idea. That
> said, I think there are cases where some cleverness can be used judiciously to
> enforce some best-practices. This seemed to me like a decent case for that,
> but, I could be persuaded otherwise :)
I guess the flag is adding very low-level configurability that wouldn't be used too often. Being very explicit at that level isn't a bad thing.
> > 87: Should this be grouped up at the top with the other flags? i know we do
> it that way in other modules.
>
> This is a case where I purposefully differed from convention. I felt it was
> better to keep the weight-flag and it's corresponding cost-function close
> together. They conceptually make up a single unit (the weighed cost), so I
> thought keep them close would reinforce that idea. Plus, it made
> reorganizing/
> deleted as a single group.
Ok, just wanted to make sure there was a reason.
> > 91: Should this function be called something different? The name doesn't
> seem to actually represent what it does.
>
> Could you expand on this?
>
> The thought with the naming was:
>
> * If you allocate to servers with the most available ram, you're performing
> a 'load-leveling' type of distribution.
>
> * In contrast, if you allocate to servers with the least availa...
Brian Waldon (bcwaldon):
Ed Leafe (ed-leafe) wrote:
> The code works fine when run in a single zone deployment. However, due to a
> disconnect between the different forks of novaclient, the call to nested zones
> for select() uses POST, but the controller code is expecting a GET with a
> query string. This problem was actually introduced in the merge-2a branch, is
> fixed in the upcoming merge-3 branch, and only affects code that is deployed
> across nested zones, so I've put together a patch that will serve to prevent
> this from crashing when running in nested zones:
> http://
Actually, a better solution would be to simply grab the select() method from nova/api/
Preview Diff
1 | === modified file 'nova/exception.py' |
2 | --- nova/exception.py 2011-05-31 21:16:41 +0000 |
3 | +++ nova/exception.py 2011-06-02 19:11:23 +0000 |
4 | @@ -477,6 +477,15 @@ |
5 | message = _("Scheduler Host Filter %(filter_name)s could not be found.") |
6 | |
7 | |
8 | +class SchedulerCostFunctionNotFound(NotFound): |
9 | + message = _("Scheduler cost function %(cost_fn_str)s could" |
10 | + " not be found.") |
11 | + |
12 | + |
13 | +class SchedulerWeightFlagNotFound(NotFound): |
14 | + message = _("Scheduler weight flag not found: %(flag_name)s") |
15 | + |
16 | + |
17 | class InstanceMetadataNotFound(NotFound): |
18 | message = _("Instance %(instance_id)s has no metadata with " |
19 | "key %(metadata_key)s.") |
20 | |
21 | === modified file 'nova/scheduler/host_filter.py' |
22 | --- nova/scheduler/host_filter.py 2011-06-01 14:58:17 +0000 |
23 | +++ nova/scheduler/host_filter.py 2011-06-02 19:11:23 +0000 |
24 | @@ -41,6 +41,7 @@ |
25 | from nova import exception |
26 | from nova import flags |
27 | from nova import log as logging |
28 | +from nova.scheduler import zone_aware_scheduler |
29 | from nova import utils |
30 | from nova.scheduler import zone_aware_scheduler |
31 | |
32 | |
33 | === added file 'nova/scheduler/least_cost.py' |
34 | --- nova/scheduler/least_cost.py 1970-01-01 00:00:00 +0000 |
35 | +++ nova/scheduler/least_cost.py 2011-06-02 19:11:23 +0000 |
36 | @@ -0,0 +1,156 @@ |
37 | +# Copyright (c) 2011 Openstack, LLC. |
38 | +# All Rights Reserved. |
39 | +# |
40 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
41 | +# not use this file except in compliance with the License. You may obtain |
42 | +# a copy of the License at |
43 | +# |
44 | +# http://www.apache.org/licenses/LICENSE-2.0 |
45 | +# |
46 | +# Unless required by applicable law or agreed to in writing, software |
47 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
48 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
49 | +# License for the specific language governing permissions and limitations |
50 | +# under the License. |
51 | +""" |
52 | +Least Cost Scheduler is a mechanism for choosing which host machines to |
53 | +provision a set of resources to. The input of the least-cost-scheduler is a |
54 | +set of objective-functions, called the 'cost-functions', a weight for each |
55 | +cost-function, and a list of candidate hosts (gathered via FilterHosts). |
56 | + |
57 | +The cost-function and weights are tabulated, and the host with the least cost |
58 | +is then selected for provisioning. |
59 | +""" |
60 | + |
61 | +import collections |
62 | + |
63 | +from nova import exception |
64 | +from nova import flags |
64 | +from nova import log as logging |
65 | +from nova.scheduler import zone_aware_scheduler |
66 | +from nova import utils |
67 | + |
68 | +LOG = logging.getLogger('nova.scheduler.least_cost') |
69 | + |
70 | +FLAGS = flags.FLAGS |
71 | +flags.DEFINE_list('least_cost_scheduler_cost_functions', |
72 | + ['nova.scheduler.least_cost.noop_cost_fn'], |
73 | + 'Which cost functions the LeastCostScheduler should use.') |
74 | + |
75 | + |
76 | +# TODO(sirp): Once we have enough of these rules, we can break them out into a |
77 | +# cost_functions.py file (perhaps in a least_cost_scheduler directory) |
78 | +flags.DEFINE_integer('noop_cost_fn_weight', 1, |
79 | + 'How much weight to give the noop cost function') |
80 | + |
81 | + |
82 | +def noop_cost_fn(host): |
83 | + """Return a pre-weight cost of 1 for each host""" |
84 | + return 1 |
85 | + |
86 | + |
87 | +flags.DEFINE_integer('fill_first_cost_fn_weight', 1, |
88 | + 'How much weight to give the fill-first cost function') |
89 | + |
90 | + |
91 | +def fill_first_cost_fn(host): |
92 | + """Prefer hosts that have less ram available, filter_hosts will exclude |
93 | + hosts that don't have enough ram""" |
94 | + hostname, caps = host |
95 | + free_mem = caps['compute']['host_memory_free'] |
96 | + return free_mem |
97 | + |
98 | + |
99 | +class LeastCostScheduler(zone_aware_scheduler.ZoneAwareScheduler): |
100 | + def get_cost_fns(self): |
101 | + """Returns a list of tuples containing weights and cost functions to |
102 | + use for weighing hosts |
103 | + """ |
104 | + cost_fns = [] |
105 | + for cost_fn_str in FLAGS.least_cost_scheduler_cost_functions: |
106 | + |
107 | + try: |
108 | + # NOTE(sirp): import_class is somewhat misnamed since it can |
109 | + # import any callable from a module |
110 | + cost_fn = utils.import_class(cost_fn_str) |
111 | + except exception.ClassNotFound: |
112 | + raise exception.SchedulerCostFunctionNotFound( |
113 | + cost_fn_str=cost_fn_str) |
114 | + |
115 | + try: |
116 | + weight = getattr(FLAGS, "%s_weight" % cost_fn.__name__) |
117 | + except AttributeError: |
118 | + raise exception.SchedulerWeightFlagNotFound( |
119 | + flag_name="%s_weight" % cost_fn.__name__) |
120 | + |
121 | + cost_fns.append((weight, cost_fn)) |
122 | + |
123 | + return cost_fns |
124 | + |
125 | + def weigh_hosts(self, num, request_spec, hosts): |
126 | + """Returns a list of dictionaries of form: |
127 | + [ {weight: weight, hostname: hostname} ]""" |
128 | + |
129 | + # FIXME(sirp): weigh_hosts should handle more than just instances |
130 | + hostnames = [hostname for hostname, caps in hosts] |
131 | + |
132 | + cost_fns = self.get_cost_fns() |
133 | + costs = weighted_sum(domain=hosts, weighted_fns=cost_fns) |
134 | + |
135 | + weighted = [] |
136 | + weight_log = [] |
137 | + for cost, hostname in zip(costs, hostnames): |
138 | + weight_log.append("%s: %s" % (hostname, "%.2f" % cost)) |
139 | + weight_dict = dict(weight=cost, hostname=hostname) |
140 | + weighted.append(weight_dict) |
141 | + |
142 | + LOG.debug(_("Weighted Costs => %s") % weight_log) |
143 | + return weighted |
144 | + |
145 | + |
146 | +def normalize_list(L): |
147 | + """Normalize an array of numbers such that each element satisfies: |
148 | + 0 <= e <= 1""" |
149 | + if not L: |
150 | + return L |
151 | + max_ = max(L) |
152 | + if max_ > 0: |
153 | + return [(float(e) / max_) for e in L] |
154 | + return L |
155 | + |
156 | + |
157 | +def weighted_sum(domain, weighted_fns, normalize=True): |
158 | + """Use the weighted-sum method to compute a score for an array of objects. |
159 | + Normalize the results of the objective-functions so that the weights are |
160 | + meaningful regardless of objective-function's range. |
161 | + |
162 | + domain - input to be scored |
163 | + weighted_fns - list of weights and functions like: |
164 | + [(weight, objective-functions)] |
165 | + |
166 | + Returns an unsorted list of scores. To pair with hosts do: zip(scores, hosts) |
167 | + """ |
168 | + # Table of form: |
169 | + # { domain1: [score1, score2, ..., scoreM] |
170 | + # ... |
171 | + # domainN: [score1, score2, ..., scoreM] } |
172 | + score_table = collections.defaultdict(list) |
173 | + for weight, fn in weighted_fns: |
174 | + scores = [fn(elem) for elem in domain] |
175 | + |
176 | + if normalize: |
177 | + norm_scores = normalize_list(scores) |
178 | + else: |
179 | + norm_scores = scores |
180 | + |
181 | + for idx, score in enumerate(norm_scores): |
182 | + weighted_score = score * weight |
183 | + score_table[idx].append(weighted_score) |
184 | + |
185 | + # Sum rows in table to compute score for each element in domain |
186 | + domain_scores = [] |
187 | + for idx in sorted(score_table): |
188 | + elem_score = sum(score_table[idx]) |
189 | + elem = domain[idx] |
190 | + domain_scores.append(elem_score) |
191 | + |
192 | + return domain_scores |
193 | |
194 | === modified file 'nova/scheduler/zone_aware_scheduler.py' |
195 | --- nova/scheduler/zone_aware_scheduler.py 2011-05-27 14:24:02 +0000 |
196 | +++ nova/scheduler/zone_aware_scheduler.py 2011-06-02 19:11:23 +0000 |
197 | @@ -39,7 +39,7 @@ |
198 | return api.call_zone_method(context, method, specs=specs) |
199 | |
200 | def schedule_run_instance(self, context, instance_id, request_spec, |
201 | - *args, **kwargs): |
202 | + *args, **kwargs): |
203 | """This method is called from nova.compute.api to provision |
204 | an instance. However we need to look at the parameters being |
205 | passed in to see if this is a request to: |
206 | @@ -116,6 +116,9 @@ |
207 | # Filter local hosts based on requirements ... |
208 | host_list = self.filter_hosts(num_instances, request_spec) |
209 | |
210 | + # TODO(sirp): weigh_hosts should also be a function of 'topic' or |
211 | + # resources, so that we can apply different objective functions to it |
212 | + |
213 | # then weigh the selected hosts. |
214 | # weighted = [{weight=weight, name=hostname}, ...] |
215 | weighted = self.weigh_hosts(num_instances, request_spec, host_list) |
216 | @@ -139,12 +142,16 @@ |
217 | |
218 | def filter_hosts(self, num, request_spec): |
219 | """Derived classes must override this method and return |
220 | - a list of hosts in [(hostname, capability_dict)] format. |
221 | + a list of hosts in [(hostname, capability_dict)] format. |
222 | """ |
223 | - raise NotImplemented() |
224 | + # NOTE(sirp): The default logic is the equivalent to AllHostsFilter |
225 | + service_states = self.zone_manager.service_states |
226 | + return [(host, services) |
227 | + for host, services in service_states.iteritems()] |
228 | |
229 | def weigh_hosts(self, num, request_spec, hosts): |
230 | - """Derived classes must override this method and return |
231 | - a lists of hosts in [{weight, hostname}] format. |
232 | + """Derived classes may override this to provide more sophisticated |
233 | + scheduling objectives |
234 | """ |
235 | - raise NotImplemented() |
236 | + # NOTE(sirp): The default logic is the same as the NoopCostFunction |
237 | + return [dict(weight=1, hostname=host) for host, caps in hosts] |
238 | |
239 | === modified file 'nova/test.py' |
240 | --- nova/test.py 2011-05-25 22:42:49 +0000 |
241 | +++ nova/test.py 2011-06-02 19:11:23 +0000 |
242 | @@ -184,7 +184,7 @@ |
243 | wsgi.Server.start = _wrapped_start |
244 | |
245 | # Useful assertions |
246 | - def assertDictMatch(self, d1, d2): |
247 | + def assertDictMatch(self, d1, d2, approx_equal=False, tolerance=0.001): |
248 | """Assert two dicts are equivalent. |
249 | |
250 | This is a 'deep' match in the sense that it handles nested |
251 | @@ -215,15 +215,26 @@ |
252 | for key in d1keys: |
253 | d1value = d1[key] |
254 | d2value = d2[key] |
255 | + try: |
256 | + error = abs(float(d1value) - float(d2value)) |
257 | + within_tolerance = error <= tolerance |
258 | + except (ValueError, TypeError): |
259 | + # If both values aren't convertable to float, just ignore |
260 | + # ValueError if arg is a str, TypeError if it's something else |
261 | + # (like None) |
262 | + within_tolerance = False |
263 | + |
264 | if hasattr(d1value, 'keys') and hasattr(d2value, 'keys'): |
265 | self.assertDictMatch(d1value, d2value) |
266 | elif 'DONTCARE' in (d1value, d2value): |
267 | continue |
268 | + elif approx_equal and within_tolerance: |
269 | + continue |
270 | elif d1value != d2value: |
271 | raise_assertion("d1['%(key)s']=%(d1value)s != " |
272 | "d2['%(key)s']=%(d2value)s" % locals()) |
273 | |
274 | - def assertDictListMatch(self, L1, L2): |
275 | + def assertDictListMatch(self, L1, L2, approx_equal=False, tolerance=0.001): |
276 | """Assert a list of dicts are equivalent.""" |
277 | def raise_assertion(msg): |
278 | L1str = str(L1) |
279 | @@ -239,4 +250,5 @@ |
280 | 'len(L2)=%(L2count)d' % locals()) |
281 | |
282 | for d1, d2 in zip(L1, L2): |
283 | - self.assertDictMatch(d1, d2) |
284 | + self.assertDictMatch(d1, d2, approx_equal=approx_equal, |
285 | + tolerance=tolerance) |
286 | |
287 | === added directory 'nova/tests/scheduler' |
288 | === added file 'nova/tests/scheduler/__init__.py' |
289 | === added file 'nova/tests/scheduler/test_host_filter.py' |
290 | --- nova/tests/scheduler/test_host_filter.py 1970-01-01 00:00:00 +0000 |
291 | +++ nova/tests/scheduler/test_host_filter.py 2011-06-02 19:11:23 +0000 |
292 | @@ -0,0 +1,206 @@ |
293 | +# Copyright 2011 OpenStack LLC. |
294 | +# All Rights Reserved. |
295 | +# |
296 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
297 | +# not use this file except in compliance with the License. You may obtain |
298 | +# a copy of the License at |
299 | +# |
300 | +# http://www.apache.org/licenses/LICENSE-2.0 |
301 | +# |
302 | +# Unless required by applicable law or agreed to in writing, software |
303 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
304 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
305 | +# License for the specific language governing permissions and limitations |
306 | +# under the License. |
307 | +""" |
308 | +Tests For Scheduler Host Filters. |
309 | +""" |
310 | + |
311 | +import json |
312 | + |
313 | +from nova import exception |
314 | +from nova import flags |
315 | +from nova import test |
316 | +from nova.scheduler import host_filter |
317 | + |
318 | +FLAGS = flags.FLAGS |
319 | + |
320 | + |
321 | +class FakeZoneManager: |
322 | + pass |
323 | + |
324 | + |
325 | +class HostFilterTestCase(test.TestCase): |
326 | + """Test case for host filters.""" |
327 | + |
328 | + def _host_caps(self, multiplier): |
329 | + # Returns host capabilities in the following way: |
330 | + # host1 = memory:free 10 (100max) |
331 | + # disk:available 100 (1000max) |
332 | + # hostN = memory:free 10 + 10N |
333 | + # disk:available 100 + 100N |
334 | + # in other words: hostN has more resources than host0 |
335 | + # which means ... don't go above 10 hosts. |
336 | + return {'host_name-description': 'XenServer %s' % multiplier, |
337 | + 'host_hostname': 'xs-%s' % multiplier, |
338 | + 'host_memory_total': 100, |
339 | + 'host_memory_overhead': 10, |
340 | + 'host_memory_free': 10 + multiplier * 10, |
341 | + 'host_memory_free-computed': 10 + multiplier * 10, |
342 | + 'host_other-config': {}, |
343 | + 'host_ip_address': '192.168.1.%d' % (100 + multiplier), |
344 | + 'host_cpu_info': {}, |
345 | + 'disk_available': 100 + multiplier * 100, |
346 | + 'disk_total': 1000, |
347 | + 'disk_used': 0, |
348 | + 'host_uuid': 'xxx-%d' % multiplier, |
349 | + 'host_name-label': 'xs-%s' % multiplier} |
350 | + |
351 | + def setUp(self): |
352 | + self.old_flag = FLAGS.default_host_filter |
353 | + FLAGS.default_host_filter = \ |
354 | + 'nova.scheduler.host_filter.AllHostsFilter' |
355 | + self.instance_type = dict(name='tiny', |
356 | + memory_mb=50, |
357 | + vcpus=10, |
358 | + local_gb=500, |
359 | + flavorid=1, |
360 | + swap=500, |
361 | + rxtx_quota=30000, |
362 | + rxtx_cap=200) |
363 | + |
364 | + self.zone_manager = FakeZoneManager() |
365 | + states = {} |
366 | + for x in xrange(10): |
367 | + states['host%02d' % (x + 1)] = {'compute': self._host_caps(x)} |
368 | + self.zone_manager.service_states = states |
369 | + |
370 | + def tearDown(self): |
371 | + FLAGS.default_host_filter = self.old_flag |
372 | + |
373 | + def test_choose_filter(self): |
374 | + # Test default filter ... |
375 | + hf = host_filter.choose_host_filter() |
376 | + self.assertEquals(hf._full_name(), |
377 | + 'nova.scheduler.host_filter.AllHostsFilter') |
378 | + # Test valid filter ... |
379 | + hf = host_filter.choose_host_filter( |
380 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
381 | + self.assertEquals(hf._full_name(), |
382 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
383 | + # Test invalid filter ... |
384 | + try: |
385 | + host_filter.choose_host_filter('does not exist') |
386 | + self.fail("Should not find host filter.") |
387 | + except exception.SchedulerHostFilterNotFound: |
388 | + pass |
389 | + |
390 | + def test_all_host_filter(self): |
391 | + hf = host_filter.AllHostsFilter() |
392 | + cooked = hf.instance_type_to_filter(self.instance_type) |
393 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
394 | + self.assertEquals(10, len(hosts)) |
395 | + for host, capabilities in hosts: |
396 | + self.assertTrue(host.startswith('host')) |
397 | + |
398 | + def test_instance_type_filter(self): |
399 | + hf = host_filter.InstanceTypeFilter() |
400 | + # filter all hosts that can support 50 ram and 500 disk |
401 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
402 | + self.assertEquals('nova.scheduler.host_filter.InstanceTypeFilter', |
403 | + name) |
404 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
405 | + self.assertEquals(6, len(hosts)) |
406 | + just_hosts = [host for host, caps in hosts] |
407 | + just_hosts.sort() |
408 | + self.assertEquals('host05', just_hosts[0]) |
409 | + self.assertEquals('host10', just_hosts[5]) |
410 | + |
411 | + def test_json_filter(self): |
412 | + hf = host_filter.JsonFilter() |
413 | + # filter all hosts that can support 50 ram and 500 disk |
414 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
415 | + self.assertEquals('nova.scheduler.host_filter.JsonFilter', name) |
416 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
417 | + self.assertEquals(6, len(hosts)) |
418 | + just_hosts = [host for host, caps in hosts] |
419 | + just_hosts.sort() |
420 | + self.assertEquals('host05', just_hosts[0]) |
421 | + self.assertEquals('host10', just_hosts[5]) |
422 | + |
423 | + # Try some custom queries |
424 | + |
425 | + raw = ['or', |
426 | + ['and', |
427 | + ['<', '$compute.host_memory_free', 30], |
428 | + ['<', '$compute.disk_available', 300] |
429 | + ], |
430 | + ['and', |
431 | + ['>', '$compute.host_memory_free', 70], |
432 | + ['>', '$compute.disk_available', 700] |
433 | + ] |
434 | + ] |
435 | + cooked = json.dumps(raw) |
436 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
437 | + |
438 | + self.assertEquals(5, len(hosts)) |
439 | + just_hosts = [host for host, caps in hosts] |
440 | + just_hosts.sort() |
441 | + for index, host in zip([1, 2, 8, 9, 10], just_hosts): |
442 | + self.assertEquals('host%02d' % index, host) |
443 | + |
444 | + raw = ['not', |
445 | + ['=', '$compute.host_memory_free', 30], |
446 | + ] |
447 | + cooked = json.dumps(raw) |
448 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
449 | + |
450 | + self.assertEquals(9, len(hosts)) |
451 | + just_hosts = [host for host, caps in hosts] |
452 | + just_hosts.sort() |
453 | + for index, host in zip([1, 2, 4, 5, 6, 7, 8, 9, 10], just_hosts): |
454 | + self.assertEquals('host%02d' % index, host) |
455 | + |
456 | + raw = ['in', '$compute.host_memory_free', 20, 40, 60, 80, 100] |
457 | + cooked = json.dumps(raw) |
458 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
459 | + |
460 | + self.assertEquals(5, len(hosts)) |
461 | + just_hosts = [host for host, caps in hosts] |
462 | + just_hosts.sort() |
463 | + for index, host in zip([2, 4, 6, 8, 10], just_hosts): |
464 | + self.assertEquals('host%02d' % index, host) |
465 | + |
466 | + # Try some bogus input ... |
467 | + raw = ['unknown command', ] |
468 | + cooked = json.dumps(raw) |
469 | + try: |
470 | + hf.filter_hosts(self.zone_manager, cooked) |
471 | + self.fail("Should give KeyError") |
472 | + except KeyError, e: |
473 | + pass |
474 | + |
475 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([]))) |
476 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({}))) |
477 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps( |
478 | + ['not', True, False, True, False] |
479 | + ))) |
480 | + |
481 | + try: |
482 | + hf.filter_hosts(self.zone_manager, json.dumps( |
483 | + 'not', True, False, True, False |
484 | + )) |
485 | + self.fail("Should give KeyError") |
486 | + except KeyError, e: |
487 | + pass |
488 | + |
489 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
490 | + json.dumps(['=', '$foo', 100]))) |
491 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
492 | + json.dumps(['=', '$.....', 100]))) |
493 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
494 | + json.dumps( |
495 | + ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]))) |
496 | + |
497 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
498 | + json.dumps(['=', {}, ['>', '$missing....foo']]))) |
499 | |
500 | === added file 'nova/tests/scheduler/test_least_cost_scheduler.py' |
501 | --- nova/tests/scheduler/test_least_cost_scheduler.py 1970-01-01 00:00:00 +0000 |
502 | +++ nova/tests/scheduler/test_least_cost_scheduler.py 2011-06-02 19:11:23 +0000 |
503 | @@ -0,0 +1,144 @@ |
504 | +# Copyright 2011 OpenStack LLC. |
505 | +# All Rights Reserved. |
506 | +# |
507 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
508 | +# not use this file except in compliance with the License. You may obtain |
509 | +# a copy of the License at |
510 | +# |
511 | +# http://www.apache.org/licenses/LICENSE-2.0 |
512 | +# |
513 | +# Unless required by applicable law or agreed to in writing, software |
514 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
515 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
516 | +# License for the specific language governing permissions and limitations |
517 | +# under the License. |
518 | +""" |
519 | +Tests For Least Cost Scheduler |
520 | +""" |
521 | + |
522 | +from nova import flags |
523 | +from nova import test |
524 | +from nova.scheduler import least_cost |
525 | +from nova.tests.scheduler import test_zone_aware_scheduler |
526 | + |
527 | +MB = 1024 * 1024 |
528 | +FLAGS = flags.FLAGS |
529 | + |
530 | + |
531 | +class FakeHost(object): |
532 | + def __init__(self, host_id, free_ram, io): |
533 | + self.id = host_id |
534 | + self.free_ram = free_ram |
535 | + self.io = io |
536 | + |
537 | + |
538 | +class WeightedSumTestCase(test.TestCase): |
539 | + def test_empty_domain(self): |
540 | + domain = [] |
541 | + weighted_fns = [] |
542 | + result = least_cost.weighted_sum(domain, weighted_fns) |
543 | + expected = [] |
544 | + self.assertEqual(expected, result) |
545 | + |
546 | + def test_basic_costing(self): |
547 | + hosts = [ |
548 | + FakeHost(1, 512 * MB, 100), |
549 | + FakeHost(2, 256 * MB, 400), |
550 | + FakeHost(3, 512 * MB, 100) |
551 | + ] |
552 | + |
553 | + weighted_fns = [ |
554 | + (1, lambda h: h.free_ram), # Fill-first, free_ram is a *cost* |
555 | + (2, lambda h: h.io), # Avoid high I/O |
556 | + ] |
557 | + |
558 | + costs = least_cost.weighted_sum( |
559 | + domain=hosts, weighted_fns=weighted_fns) |
560 | + |
561 | + # Each 256 MB unit of free-ram contributes 0.5 points by way of: |
562 | + # cost = weight * (score/max_score) = 1 * (256/512) = 0.5 |
563 | + # Each 100 iops of IO adds 0.5 points by way of: |
564 | + # cost = 2 * (100/400) = 2 * 0.25 = 0.5 |
565 | + expected = [1.5, 2.5, 1.5] |
566 | + self.assertEqual(expected, costs) |
567 | + |
568 | + |
569 | +class LeastCostSchedulerTestCase(test.TestCase): |
570 | + def setUp(self): |
571 | + super(LeastCostSchedulerTestCase, self).setUp() |
572 | + |
573 | + class FakeZoneManager: |
574 | + pass |
575 | + |
576 | + zone_manager = FakeZoneManager() |
577 | + |
578 | + states = test_zone_aware_scheduler.fake_zone_manager_service_states( |
579 | + num_hosts=10) |
580 | + zone_manager.service_states = states |
581 | + |
582 | + self.sched = least_cost.LeastCostScheduler() |
583 | + self.sched.zone_manager = zone_manager |
584 | + |
585 | + def tearDown(self): |
586 | + super(LeastCostSchedulerTestCase, self).tearDown() |
587 | + |
588 | + def assertWeights(self, expected, num, request_spec, hosts): |
589 | + weighted = self.sched.weigh_hosts(num, request_spec, hosts) |
590 | + self.assertDictListMatch(weighted, expected, approx_equal=True) |
591 | + |
592 | + def test_no_hosts(self): |
593 | + num = 1 |
594 | + request_spec = {} |
595 | + hosts = [] |
596 | + |
597 | + expected = [] |
598 | + self.assertWeights(expected, num, request_spec, hosts) |
599 | + |
600 | + def test_noop_cost_fn(self): |
601 | + FLAGS.least_cost_scheduler_cost_functions = [ |
602 | + 'nova.scheduler.least_cost.noop_cost_fn' |
603 | + ] |
604 | + FLAGS.noop_cost_fn_weight = 1 |
605 | + |
606 | + num = 1 |
607 | + request_spec = {} |
608 | + hosts = self.sched.filter_hosts(num, request_spec) |
609 | + |
610 | + expected = [dict(weight=1, hostname=hostname) |
611 | + for hostname, caps in hosts] |
612 | + self.assertWeights(expected, num, request_spec, hosts) |
613 | + |
614 | + def test_cost_fn_weights(self): |
615 | + FLAGS.least_cost_scheduler_cost_functions = [ |
616 | + 'nova.scheduler.least_cost.noop_cost_fn' |
617 | + ] |
618 | + FLAGS.noop_cost_fn_weight = 2 |
619 | + |
620 | + num = 1 |
621 | + request_spec = {} |
622 | + hosts = self.sched.filter_hosts(num, request_spec) |
623 | + |
624 | + expected = [dict(weight=2, hostname=hostname) |
625 | + for hostname, caps in hosts] |
626 | + self.assertWeights(expected, num, request_spec, hosts) |
627 | + |
628 | + def test_fill_first_cost_fn(self): |
629 | + FLAGS.least_cost_scheduler_cost_functions = [ |
630 | + 'nova.scheduler.least_cost.fill_first_cost_fn' |
631 | + ] |
632 | + FLAGS.fill_first_cost_fn_weight = 1 |
633 | + |
634 | + num = 1 |
635 | + request_spec = {} |
636 | + hosts = self.sched.filter_hosts(num, request_spec) |
637 | + |
638 | + expected = [] |
639 | + for idx, (hostname, caps) in enumerate(hosts): |
640 | + # Costs are normalized so over 10 hosts, each host with increasing |
641 | + # free ram will cost 1/N more. Since the lowest cost host has some |
642 | + # free ram, we add in the 1/N for the base_cost |
643 | + weight = 0.1 + (0.1 * idx) |
644 | + weight_dict = dict(weight=weight, hostname=hostname) |
645 | + expected.append(weight_dict) |
646 | + |
647 | + self.assertWeights(expected, num, request_spec, hosts) |
648 | |
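The expected costs in `WeightedSumTestCase` can be reproduced with a minimal sketch of the formula the test's comments describe (`cost = weight * (score/max_score)`, summed per host). This is a simplified reconstruction for illustration only, not the actual `nova.scheduler.least_cost.weighted_sum`:

```python
# Sketch of weighted-sum costing per the test's comments; the real
# nova.scheduler.least_cost implementation may differ in details.

class FakeHost(object):
    def __init__(self, host_id, free_ram, io):
        self.id = host_id
        self.free_ram = free_ram
        self.io = io

def weighted_sum(domain, weighted_fns):
    costs = [0.0] * len(domain)
    for weight, fn in weighted_fns:
        scores = [fn(elem) for elem in domain]
        max_score = float(max(scores))
        for i, score in enumerate(scores):
            # cost = weight * (score / max_score), as in the test comments
            costs[i] += weight * (score / max_score)
    return costs

MB = 1024 * 1024
hosts = [FakeHost(1, 512 * MB, 100),
         FakeHost(2, 256 * MB, 400),
         FakeHost(3, 512 * MB, 100)]
weighted_fns = [(1, lambda h: h.free_ram),  # fill-first: free ram is a cost
                (2, lambda h: h.io)]        # avoid high I/O
print(weighted_sum(hosts, weighted_fns))  # [1.5, 2.5, 1.5]
```

Host 2 scores highest (worst) because it has the least free ram yet still costs the most: its I/O is the per-function maximum, so the weight-2 I/O term contributes a full 2.0.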
649 | === renamed file 'nova/tests/test_scheduler.py' => 'nova/tests/scheduler/test_scheduler.py' |
650 | --- nova/tests/test_scheduler.py 2011-05-13 01:07:54 +0000 |
651 | +++ nova/tests/scheduler/test_scheduler.py 2011-06-02 19:11:23 +0000 |
652 | @@ -61,7 +61,8 @@ |
653 | """Test case for scheduler""" |
654 | def setUp(self): |
655 | super(SchedulerTestCase, self).setUp() |
656 | - self.flags(scheduler_driver='nova.tests.test_scheduler.TestDriver') |
657 | + driver = 'nova.tests.scheduler.test_scheduler.TestDriver' |
658 | + self.flags(scheduler_driver=driver) |
659 | |
660 | def _create_compute_service(self): |
661 | """Create compute-manager(ComputeNode and Service record).""" |
662 | |
663 | === renamed file 'nova/tests/test_zone_aware_scheduler.py' => 'nova/tests/scheduler/test_zone_aware_scheduler.py' |
664 | --- nova/tests/test_zone_aware_scheduler.py 2011-06-01 14:58:17 +0000 |
665 | +++ nova/tests/scheduler/test_zone_aware_scheduler.py 2011-06-02 19:11:23 +0000 |
666 | @@ -22,6 +22,37 @@ |
667 | from nova.scheduler import zone_manager |
668 | |
669 | |
670 | +def _host_caps(multiplier): |
671 | + # Returns host capabilities in the following way: |
672 | + # host1 = memory:free 10 (100max) |
673 | + # disk:available 100 (1000max) |
674 | + # hostN = memory:free 10 + 10N |
675 | + # disk:available 100 + 100N |
676 | + # in other words: hostN has more resources than host0 |
677 | + # which means ... don't go above 10 hosts. |
678 | + return {'host_name-description': 'XenServer %s' % multiplier, |
679 | + 'host_hostname': 'xs-%s' % multiplier, |
680 | + 'host_memory_total': 100, |
681 | + 'host_memory_overhead': 10, |
682 | + 'host_memory_free': 10 + multiplier * 10, |
683 | + 'host_memory_free-computed': 10 + multiplier * 10, |
684 | + 'host_other-config': {}, |
685 | + 'host_ip_address': '192.168.1.%d' % (100 + multiplier), |
686 | + 'host_cpu_info': {}, |
687 | + 'disk_available': 100 + multiplier * 100, |
688 | + 'disk_total': 1000, |
689 | + 'disk_used': 0, |
690 | + 'host_uuid': 'xxx-%d' % multiplier, |
691 | + 'host_name-label': 'xs-%s' % multiplier} |
692 | + |
693 | + |
694 | +def fake_zone_manager_service_states(num_hosts): |
695 | + states = {} |
696 | + for x in xrange(num_hosts): |
697 | + states['host%02d' % (x + 1)] = {'compute': _host_caps(x)} |
698 | + return states |
699 | + |
700 | + |
701 | class FakeZoneAwareScheduler(zone_aware_scheduler.ZoneAwareScheduler): |
702 | def filter_hosts(self, num, specs): |
703 | # NOTE(sirp): this is returning [(hostname, services)] |
704 | |
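The fill-first expectations in `test_fill_first_cost_fn` follow directly from the `_host_caps` helper above: host N's free ram is `10 + 10 * N`, so normalizing by the maximum (100, at N = 9) gives `weight = 0.1 + 0.1 * N`. A quick back-of-the-envelope check (hypothetical, outside nova):

```python
# Hypothetical recomputation (not nova code) of the weights asserted in
# test_fill_first_cost_fn, from the fake capabilities in _host_caps.
num_hosts = 10
free_ram = [10 + 10 * idx for idx in range(num_hosts)]  # 10, 20, ..., 100
max_ram = float(max(free_ram))
weights = [ram / max_ram for ram in free_ram]
print(weights[0], weights[-1])  # 0.1 1.0
```

This is the "base_cost" remark in the test: even the emptiest host has some free ram, so the cheapest weight is 0.1, not 0.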
705 | === added file 'nova/tests/test_host_filter.py' |
706 | --- nova/tests/test_host_filter.py 1970-01-01 00:00:00 +0000 |
707 | +++ nova/tests/test_host_filter.py 2011-06-02 19:11:23 +0000 |
708 | @@ -0,0 +1,205 @@ |
709 | +# Copyright 2011 OpenStack LLC. |
710 | +# All Rights Reserved. |
711 | +# |
712 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
713 | +# not use this file except in compliance with the License. You may obtain |
714 | +# a copy of the License at |
715 | +# |
716 | +# http://www.apache.org/licenses/LICENSE-2.0 |
717 | +# |
718 | +# Unless required by applicable law or agreed to in writing, software |
719 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
720 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
721 | +# License for the specific language governing permissions and limitations |
722 | +# under the License. |
723 | +""" |
724 | +Tests For Scheduler Host Filters. |
725 | +""" |
726 | + |
727 | +import json |
728 | + |
729 | +from nova import exception |
730 | +from nova import flags |
731 | +from nova import test |
732 | +from nova.scheduler import host_filter |
733 | + |
734 | +FLAGS = flags.FLAGS |
735 | + |
736 | + |
737 | +class FakeZoneManager: |
738 | + pass |
739 | + |
740 | + |
741 | +class HostFilterTestCase(test.TestCase): |
742 | + """Test case for host filters.""" |
743 | + |
744 | + def _host_caps(self, multiplier): |
745 | + # Returns host capabilities in the following way: |
746 | + # host1 = memory:free 10 (100max) |
747 | + # disk:available 100 (1000max) |
748 | + # hostN = memory:free 10 + 10N |
749 | + # disk:available 100 + 100N |
750 | + # in other words: hostN has more resources than host0 |
751 | + # which means ... don't go above 10 hosts. |
752 | + return {'host_name-description': 'XenServer %s' % multiplier, |
753 | + 'host_hostname': 'xs-%s' % multiplier, |
754 | + 'host_memory_total': 100, |
755 | + 'host_memory_overhead': 10, |
756 | + 'host_memory_free': 10 + multiplier * 10, |
757 | + 'host_memory_free-computed': 10 + multiplier * 10, |
758 | + 'host_other-config': {}, |
759 | + 'host_ip_address': '192.168.1.%d' % (100 + multiplier), |
760 | + 'host_cpu_info': {}, |
761 | + 'disk_available': 100 + multiplier * 100, |
762 | + 'disk_total': 1000, |
763 | + 'disk_used': 0, |
764 | + 'host_uuid': 'xxx-%d' % multiplier, |
765 | + 'host_name-label': 'xs-%s' % multiplier} |
766 | + |
767 | + def setUp(self): |
768 | + self.old_flag = FLAGS.default_host_filter |
769 | + FLAGS.default_host_filter = \ |
770 | + 'nova.scheduler.host_filter.AllHostsFilter' |
771 | + self.instance_type = dict(name='tiny', |
772 | + memory_mb=50, |
773 | + vcpus=10, |
774 | + local_gb=500, |
775 | + flavorid=1, |
776 | + swap=500, |
777 | + rxtx_quota=30000, |
778 | + rxtx_cap=200) |
779 | + |
780 | + self.zone_manager = FakeZoneManager() |
781 | + states = {} |
782 | + for x in xrange(10): |
783 | + states['host%02d' % (x + 1)] = {'compute': self._host_caps(x)} |
784 | + self.zone_manager.service_states = states |
785 | + |
786 | + def tearDown(self): |
787 | + FLAGS.default_host_filter = self.old_flag |
788 | + |
789 | + def test_choose_filter(self): |
790 | + # Test default filter ... |
791 | + hf = host_filter.choose_host_filter() |
792 | + self.assertEquals(hf._full_name(), |
793 | + 'nova.scheduler.host_filter.AllHostsFilter') |
794 | + # Test valid filter ... |
795 | + hf = host_filter.choose_host_filter( |
796 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
797 | + self.assertEquals(hf._full_name(), |
798 | + 'nova.scheduler.host_filter.InstanceTypeFilter') |
799 | + # Test invalid filter ... |
800 | + try: |
801 | + host_filter.choose_host_filter('does not exist') |
802 | + self.fail("Should not find host filter.") |
803 | + except exception.SchedulerHostFilterNotFound: |
804 | + pass |
805 | + |
806 | + def test_all_host_filter(self): |
807 | + hf = host_filter.AllHostsFilter() |
808 | + cooked = hf.instance_type_to_filter(self.instance_type) |
809 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
810 | + self.assertEquals(10, len(hosts)) |
811 | + for host, capabilities in hosts: |
812 | + self.assertTrue(host.startswith('host')) |
813 | + |
814 | + def test_instance_type_filter(self): |
815 | + hf = host_filter.InstanceTypeFilter() |
816 | + # filter all hosts that can support 50 ram and 500 disk |
817 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
818 | + self.assertEquals('nova.scheduler.host_filter.InstanceTypeFilter', |
819 | + name) |
820 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
821 | + self.assertEquals(6, len(hosts)) |
822 | + just_hosts = [host for host, caps in hosts] |
823 | + just_hosts.sort() |
824 | + self.assertEquals('host05', just_hosts[0]) |
825 | + self.assertEquals('host10', just_hosts[5]) |
826 | + |
827 | + def test_json_filter(self): |
828 | + hf = host_filter.JsonFilter() |
829 | + # filter all hosts that can support 50 ram and 500 disk |
830 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
831 | + self.assertEquals('nova.scheduler.host_filter.JsonFilter', name) |
832 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
833 | + self.assertEquals(6, len(hosts)) |
834 | + just_hosts = [host for host, caps in hosts] |
835 | + just_hosts.sort() |
836 | + self.assertEquals('host05', just_hosts[0]) |
837 | + self.assertEquals('host10', just_hosts[5]) |
838 | + |
839 | + # Try some custom queries |
840 | + |
841 | + raw = ['or', |
842 | + ['and', |
843 | + ['<', '$compute.host_memory_free', 30], |
844 | + ['<', '$compute.disk_available', 300], |
845 | + ], |
846 | + ['and', |
847 | + ['>', '$compute.host_memory_free', 70], |
848 | + ['>', '$compute.disk_available', 700], |
849 | + ], |
850 | + ] |
851 | + |
852 | + cooked = json.dumps(raw) |
853 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
854 | + |
855 | + self.assertEquals(5, len(hosts)) |
856 | + just_hosts = [host for host, caps in hosts] |
857 | + just_hosts.sort() |
858 | + for index, host in zip([1, 2, 8, 9, 10], just_hosts): |
859 | + self.assertEquals('host%02d' % index, host) |
860 | + |
861 | + raw = ['not', |
862 | + ['=', '$compute.host_memory_free', 30], |
863 | + ] |
864 | + cooked = json.dumps(raw) |
865 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
866 | + |
867 | + self.assertEquals(9, len(hosts)) |
868 | + just_hosts = [host for host, caps in hosts] |
869 | + just_hosts.sort() |
870 | + for index, host in zip([1, 2, 4, 5, 6, 7, 8, 9, 10], just_hosts): |
871 | + self.assertEquals('host%02d' % index, host) |
872 | + |
873 | + raw = ['in', '$compute.host_memory_free', 20, 40, 60, 80, 100] |
874 | + cooked = json.dumps(raw) |
875 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
876 | + |
877 | + self.assertEquals(5, len(hosts)) |
878 | + just_hosts = [host for host, caps in hosts] |
879 | + just_hosts.sort() |
880 | + for index, host in zip([2, 4, 6, 8, 10], just_hosts): |
881 | + self.assertEquals('host%02d' % index, host) |
882 | + |
883 | + # Try some bogus input ... |
884 | + raw = ['unknown command', ] |
885 | + cooked = json.dumps(raw) |
886 | + try: |
887 | + hf.filter_hosts(self.zone_manager, cooked) |
888 | + self.fail("Should give KeyError") |
889 | + except KeyError, e: |
890 | + pass |
891 | + |
892 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([]))) |
893 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({}))) |
894 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps( |
895 | + ['not', True, False, True, False]))) |
896 | + |
897 | + try: |
898 | + hf.filter_hosts(self.zone_manager, json.dumps( |
899 | + 'not', True, False, True, False)) |
900 | + self.fail("Should give KeyError") |
901 | + except KeyError, e: |
902 | + pass |
903 | + |
904 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
905 | + json.dumps(['=', '$foo', 100]))) |
906 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
907 | + json.dumps(['=', '$.....', 100]))) |
908 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
909 | + json.dumps( |
910 | + ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]))) |
911 | + |
912 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
913 | + json.dumps(['=', {}, ['>', '$missing....foo']]))) |
914 | |
915 | === removed file 'nova/tests/test_host_filter.py' |
916 | --- nova/tests/test_host_filter.py 2011-06-01 14:58:17 +0000 |
917 | +++ nova/tests/test_host_filter.py 1970-01-01 00:00:00 +0000 |
918 | @@ -1,205 +0,0 @@ |
919 | -# Copyright 2011 OpenStack LLC. |
920 | -# All Rights Reserved. |
921 | -# |
922 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
923 | -# not use this file except in compliance with the License. You may obtain |
924 | -# a copy of the License at |
925 | -# |
926 | -# http://www.apache.org/licenses/LICENSE-2.0 |
927 | -# |
928 | -# Unless required by applicable law or agreed to in writing, software |
929 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
930 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
931 | -# License for the specific language governing permissions and limitations |
932 | -# under the License. |
933 | -""" |
934 | -Tests For Scheduler Host Filters. |
935 | -""" |
936 | - |
937 | -import json |
938 | - |
939 | -from nova import exception |
940 | -from nova import flags |
941 | -from nova import test |
942 | -from nova.scheduler import host_filter |
943 | - |
944 | -FLAGS = flags.FLAGS |
945 | - |
946 | - |
947 | -class FakeZoneManager: |
948 | - pass |
949 | - |
950 | - |
951 | -class HostFilterTestCase(test.TestCase): |
952 | - """Test case for host filters.""" |
953 | - |
954 | - def _host_caps(self, multiplier): |
955 | - # Returns host capabilities in the following way: |
956 | - # host1 = memory:free 10 (100max) |
957 | - # disk:available 100 (1000max) |
958 | - # hostN = memory:free 10 + 10N |
959 | - # disk:available 100 + 100N |
960 | - # in other words: hostN has more resources than host0 |
961 | - # which means ... don't go above 10 hosts. |
962 | - return {'host_name-description': 'XenServer %s' % multiplier, |
963 | - 'host_hostname': 'xs-%s' % multiplier, |
964 | - 'host_memory_total': 100, |
965 | - 'host_memory_overhead': 10, |
966 | - 'host_memory_free': 10 + multiplier * 10, |
967 | - 'host_memory_free-computed': 10 + multiplier * 10, |
968 | - 'host_other-config': {}, |
969 | - 'host_ip_address': '192.168.1.%d' % (100 + multiplier), |
970 | - 'host_cpu_info': {}, |
971 | - 'disk_available': 100 + multiplier * 100, |
972 | - 'disk_total': 1000, |
973 | - 'disk_used': 0, |
974 | - 'host_uuid': 'xxx-%d' % multiplier, |
975 | - 'host_name-label': 'xs-%s' % multiplier} |
976 | - |
977 | - def setUp(self): |
978 | - self.old_flag = FLAGS.default_host_filter |
979 | - FLAGS.default_host_filter = \ |
980 | - 'nova.scheduler.host_filter.AllHostsFilter' |
981 | - self.instance_type = dict(name='tiny', |
982 | - memory_mb=50, |
983 | - vcpus=10, |
984 | - local_gb=500, |
985 | - flavorid=1, |
986 | - swap=500, |
987 | - rxtx_quota=30000, |
988 | - rxtx_cap=200) |
989 | - |
990 | - self.zone_manager = FakeZoneManager() |
991 | - states = {} |
992 | - for x in xrange(10): |
993 | - states['host%02d' % (x + 1)] = {'compute': self._host_caps(x)} |
994 | - self.zone_manager.service_states = states |
995 | - |
996 | - def tearDown(self): |
997 | - FLAGS.default_host_filter = self.old_flag |
998 | - |
999 | - def test_choose_filter(self): |
1000 | - # Test default filter ... |
1001 | - hf = host_filter.choose_host_filter() |
1002 | - self.assertEquals(hf._full_name(), |
1003 | - 'nova.scheduler.host_filter.AllHostsFilter') |
1004 | - # Test valid filter ... |
1005 | - hf = host_filter.choose_host_filter( |
1006 | - 'nova.scheduler.host_filter.InstanceTypeFilter') |
1007 | - self.assertEquals(hf._full_name(), |
1008 | - 'nova.scheduler.host_filter.InstanceTypeFilter') |
1009 | - # Test invalid filter ... |
1010 | - try: |
1011 | - host_filter.choose_host_filter('does not exist') |
1012 | - self.fail("Should not find host filter.") |
1013 | - except exception.SchedulerHostFilterNotFound: |
1014 | - pass |
1015 | - |
1016 | - def test_all_host_filter(self): |
1017 | - hf = host_filter.AllHostsFilter() |
1018 | - cooked = hf.instance_type_to_filter(self.instance_type) |
1019 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1020 | - self.assertEquals(10, len(hosts)) |
1021 | - for host, capabilities in hosts: |
1022 | - self.assertTrue(host.startswith('host')) |
1023 | - |
1024 | - def test_instance_type_filter(self): |
1025 | - hf = host_filter.InstanceTypeFilter() |
1026 | - # filter all hosts that can support 50 ram and 500 disk |
1027 | - name, cooked = hf.instance_type_to_filter(self.instance_type) |
1028 | - self.assertEquals('nova.scheduler.host_filter.InstanceTypeFilter', |
1029 | - name) |
1030 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1031 | - self.assertEquals(6, len(hosts)) |
1032 | - just_hosts = [host for host, caps in hosts] |
1033 | - just_hosts.sort() |
1034 | - self.assertEquals('host05', just_hosts[0]) |
1035 | - self.assertEquals('host10', just_hosts[5]) |
1036 | - |
1037 | - def test_json_filter(self): |
1038 | - hf = host_filter.JsonFilter() |
1039 | - # filter all hosts that can support 50 ram and 500 disk |
1040 | - name, cooked = hf.instance_type_to_filter(self.instance_type) |
1041 | - self.assertEquals('nova.scheduler.host_filter.JsonFilter', name) |
1042 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1043 | - self.assertEquals(6, len(hosts)) |
1044 | - just_hosts = [host for host, caps in hosts] |
1045 | - just_hosts.sort() |
1046 | - self.assertEquals('host05', just_hosts[0]) |
1047 | - self.assertEquals('host10', just_hosts[5]) |
1048 | - |
1049 | - # Try some custom queries |
1050 | - |
1051 | - raw = ['or', |
1052 | - ['and', |
1053 | - ['<', '$compute.host_memory_free', 30], |
1054 | - ['<', '$compute.disk_available', 300], |
1055 | - ], |
1056 | - ['and', |
1057 | - ['>', '$compute.host_memory_free', 70], |
1058 | - ['>', '$compute.disk_available', 700], |
1059 | - ], |
1060 | - ] |
1061 | - |
1062 | - cooked = json.dumps(raw) |
1063 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1064 | - |
1065 | - self.assertEquals(5, len(hosts)) |
1066 | - just_hosts = [host for host, caps in hosts] |
1067 | - just_hosts.sort() |
1068 | - for index, host in zip([1, 2, 8, 9, 10], just_hosts): |
1069 | - self.assertEquals('host%02d' % index, host) |
1070 | - |
1071 | - raw = ['not', |
1072 | - ['=', '$compute.host_memory_free', 30], |
1073 | - ] |
1074 | - cooked = json.dumps(raw) |
1075 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1076 | - |
1077 | - self.assertEquals(9, len(hosts)) |
1078 | - just_hosts = [host for host, caps in hosts] |
1079 | - just_hosts.sort() |
1080 | - for index, host in zip([1, 2, 4, 5, 6, 7, 8, 9, 10], just_hosts): |
1081 | - self.assertEquals('host%02d' % index, host) |
1082 | - |
1083 | - raw = ['in', '$compute.host_memory_free', 20, 40, 60, 80, 100] |
1084 | - cooked = json.dumps(raw) |
1085 | - hosts = hf.filter_hosts(self.zone_manager, cooked) |
1086 | - |
1087 | - self.assertEquals(5, len(hosts)) |
1088 | - just_hosts = [host for host, caps in hosts] |
1089 | - just_hosts.sort() |
1090 | - for index, host in zip([2, 4, 6, 8, 10], just_hosts): |
1091 | - self.assertEquals('host%02d' % index, host) |
1092 | - |
1093 | - # Try some bogus input ... |
1094 | - raw = ['unknown command', ] |
1095 | - cooked = json.dumps(raw) |
1096 | - try: |
1097 | - hf.filter_hosts(self.zone_manager, cooked) |
1098 | - self.fail("Should give KeyError") |
1099 | - except KeyError, e: |
1100 | - pass |
1101 | - |
1102 | - self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([]))) |
1103 | - self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({}))) |
1104 | - self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps( |
1105 | - ['not', True, False, True, False]))) |
1106 | - |
1107 | - try: |
1108 | - hf.filter_hosts(self.zone_manager, json.dumps( |
1109 | - 'not', True, False, True, False)) |
1110 | - self.fail("Should give KeyError") |
1111 | - except KeyError, e: |
1112 | - pass |
1113 | - |
1114 | - self.assertFalse(hf.filter_hosts(self.zone_manager, |
1115 | - json.dumps(['=', '$foo', 100]))) |
1116 | - self.assertFalse(hf.filter_hosts(self.zone_manager, |
1117 | - json.dumps(['=', '$.....', 100]))) |
1118 | - self.assertFalse(hf.filter_hosts(self.zone_manager, |
1119 | - json.dumps( |
1120 | - ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]))) |
1121 | - |
1122 | - self.assertFalse(hf.filter_hosts(self.zone_manager, |
1123 | - json.dumps(['=', {}, ['>', '$missing....foo']]))) |
Re-merged trunk and fixed tests; should be ready for review again.