Merge lp:~justin-fathomdb/nova/constraint-scheduler into lp:~hudson-openstack/nova/trunk

Proposed by justinsb
Status: Work in progress
Proposed branch: lp:~justin-fathomdb/nova/constraint-scheduler
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1136 lines (+898/-85)
8 files modified
nova/scheduler/constraint.py (+226/-0)
nova/scheduler/constraint_lib.py (+254/-0)
nova/scheduler/datastore.py (+148/-0)
nova/scheduler/driver.py (+13/-3)
nova/scheduler/simple.py (+24/-35)
nova/scheduler/zone.py (+1/-2)
nova/tests/test_constraintlib.py (+170/-0)
nova/tests/test_scheduler.py (+62/-45)
To merge this branch: bzr merge lp:~justin-fathomdb/nova/constraint-scheduler
Reviewer Review Type Date Requested Status
Nachi Ueno (community) Needs Fixing
Nova Core security contacts Pending
Review via email: mp+51857@code.launchpad.net

Description of the change

Implementation of a constraint-based scheduler.
Created a simple constraint solver and a scheduler that uses it; created constraints that mirror the existing selection criteria; and refactored the DB access code to avoid duplication and to support future constraints (working towards 'proximity' allocation or 'specific zone' allocation).
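
In outline, a scheduler built on the solver composes like this (condensed
from ConstraintScheduler.schedule_run_instance in the diff below; the real
method also records the placement decision in the datastore):

    solver = constraint_lib.Solver({'request_context': request_context,
                                    'request': instance_ref,
                                    'scheduler': self})
    solver.add_constraint(InstanceConstraintFavorLeastLoaded())

    for service in solver.get_solutions_iterator():
        if self.service_is_up(service.model):
            return service.model['host']
    raise driver.NoValidHost(_("No suitable hosts found"))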

Revision history for this message
Soren Hansen (soren) wrote :

I question the usefulness of best-match algorithms. I don't believe that the best match is really interesting at all. Finding best matches typically involves (and indeed that seems to be what happens in this implementation) looking at the entire set of candidates and ordering them according to some criteria. I don't believe this approach scales, and I don't believe it's necessary. It's not unlikely that the top XX% are all just fine candidates, so finding the very best offers no real advantage. Furthermore, the host that is the best match right now might be significantly worse two minutes from now.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Justin, nice work on a very interesting branch.

We're currently working on a similar approach in our Zones and Distributed Scheduler blueprints.

https://blueprints.launchpad.net/nova/+spec/multi-cluster-in-a-region
https://blueprints.launchpad.net/nova/+spec/bexar-distributed-scheduler

Ed Leafe (dabo) is working on a similar body of work to this, based on the current Rackspace/Slicehost implementation of Server Best Match.

Soren has a point that the search doesn't have to be exhaustive. We recently changed our approach from server-side to nearly fully db-side and saw massive performance improvements. Also, limiting the result set to the top XX% as Soren mentioned became possible once the problem was pushed down to the database.

I think we need to review your branch in some depth before giving a pass/fail. I'm sure there are things we can leverage to bring it in line with the Distributed Scheduler.

Cheers! ... and stay tuned.

Revision history for this message
justinsb (justin-fathomdb) wrote :

Soren: It's true that an exhaustive search can be expensive, which is why I
didn't code an exhaustive search (other than in the unit tests) :-) There's
a framework for 'Criteria', which the 'Solver' tries to satisfy as best it
can. You can plug in whatever Solver technique you think best (which might
actually be exhaustive for small sets), but I believe the Solver I've got
here is likely to be reasonable in practice. The approach is that it
identifies the most selective criterion and then steps through its results
in order, looking for one where none of the other criteria are any more
unhappy (minimax). I'll document the magic better! I haven't coded
heuristic algorithms yet because, frankly, we're nowhere near the scales
where they become necessary, we have no way to take advantage of heuristics
when we're sourcing data from a relational DB, and there are difficult
questions around behavior when heuristics fail to find a good solution, or
any solution at all.
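
To sketch the idea (an illustrative simplification, not the branch's actual
code; the real Solver returns an iterator, caches items and breaks ties):

    def solve(driving_iter, other_constraints):
        """Walk the most selective constraint's candidates best-first; stop
        once no remaining candidate can beat the best minimax score found,
        since the driving constraint's scores only decrease."""
        best_score, best_item = None, None
        for item, score in driving_iter:  # (item, score), best-first
            if best_score is not None and score <= best_score:
                break  # every later candidate scores <= this one
            min_score = min([score] + [c(item) for c in other_constraints])
            if best_score is None or min_score > best_score:
                best_score, best_item = min_score, item
        return best_item, best_score

    hosts = {'a': {'cpu': 0.9, 'disk': 0.2}, 'b': {'cpu': 0.6, 'disk': 0.7}}
    by_cpu = sorted(hosts, key=lambda n: hosts[n]['cpu'], reverse=True)
    driving = ((n, hosts[n]['cpu']) for n in by_cpu)
    print(solve(driving, [lambda n: hosts[n]['disk']]))  # ('b', 0.6)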

Sandy: I think the work is non-overlapping with the distributed and
multi-cluster scheduler blueprints, but I'll check them out in more detail.
My goal is to support more constraints in the scheduler (in particular,
co-placement of volumes and servers), and I'm going to work on that
constraint to help motivate this patch.

I'm going to check out the other branches, and code up a co-placement
Criteria so this isn't just work in the abstract!

Revision history for this message
justinsb (justin-fathomdb) wrote :

I've got a branch up which makes use of the constraint scheduler:
lp:~justin-fathomdb/nova/schedule-compute-near-volume

It's now super-easy to add constraints, even conflicting constraints, and (hopefully) this will still yield reasonable decisions.

As the number of constraints grows, I don't believe that trying to satisfy the constraints manually will scale.

The dependent branch isn't yet ready for merge (I doubt it actually works); I need to write tests for it and that is at the end of a really long chain of merge requests!!
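
For a flavor of what adding a constraint looks like against the Constraint
API in this branch (a hypothetical example; the rack attribute and host
list are made up, and the real co-placement constraint lives in the
dependent branch):

    from nova.scheduler import constraint_lib

    class FavorSameRackConstraint(constraint_lib.Constraint):
        """Hypothetical: prefer hosts in the same rack as some volume.
        Assumes host objects carrying .id and .rack attributes."""
        def __init__(self, target_rack, hosts):
            super(FavorSameRackConstraint, self).__init__()
            self.target_rack = target_rack
            self.hosts = hosts

        def score_item(self, candidate):
            # Scores must be in (0, 1]: 1.0 for the same rack
            return 1.0 if candidate.rack == self.target_rack else 0.5

        def get_candidate_iterator(self):
            # Yield candidates best-first, as the Solver expects
            for host in sorted(self.hosts, key=self.score_item, reverse=True):
                self.cache_item(host)
                yield constraint_lib.Candidate(host.id, self.score_item(host))

        def selectivity(self):
            return 0.5  # expect roughly half the hosts to score well

    # Usage: solver.add_constraint(FavorSameRackConstraint('rack1', hosts))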

Revision history for this message
Nachi Ueno (nati-ueno) wrote :

As you pointed out, I think your code needs more tests before it is merged.
In addition, the unit tests for ConstraintLib fail:

======================================================================
ERROR: test_big (nova.tests.test_constraintlib.ConstraintLibTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/nati/workspace/constraint-scheduler/nova/tests/test_constraintlib.py", line 106, in test_big
    self._do_random_test(rnd, item_count, constraints_count)
  File "/home/nati/workspace/constraint-scheduler/nova/tests/test_constraintlib.py", line 122, in _do_random_test
    solver = constraint_lib.Solver(None, None)
TypeError: __init__() takes exactly 2 arguments (3 given)

======================================================================
ERROR: test_five_five__five (nova.tests.test_constraintlib.ConstraintLibTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/nati/workspace/constraint-scheduler/nova/tests/test_constraintlib.py", line 97, in test_five_five__five
    self._do_random_test(rnd, 5, 5)
  File "/home/nati/workspace/constraint-scheduler/nova/tests/test_constraintlib.py", line 122, in _do_random_test
    solver = constraint_lib.Solver(None, None)
TypeError: __init__() takes exactly 2 arguments (3 given)

review: Needs Fixing
Revision history for this message
justinsb (justin-fathomdb) wrote :

Fixed up the unit tests. (I was working in a derived branch and tried to keep this one up to date - probably more trouble than it's worth with bazaar.)

This branch has good test coverage, and the dependent one now does too.

Revision history for this message
Nachi Ueno (nati-ueno) wrote :

Thank you for your fix.
Would you add more test cases for ConstraintDriverTestCase?

Revision history for this message
justinsb (justin-fathomdb) wrote :

It's easy to miss, but the ConstraintDriverTestCase derives from
_SchedulerBaseTestCase, so it inherits basically the same code coverage as
the SimpleScheduler tests (although the SimpleScheduler supports some
'directed placement' in a slightly hacky way, which I don't support in the
ConstraintScheduler). The ConstraintScheduler (which is never used unless
someone specifically requests it) behaves the same way as the
SimpleScheduler under the basic tests.

In addition, there are unit tests for the constraint solving library.

I believe that's reasonable test coverage. There can always be more tests,
but I think this is a reasonable start for something that is only used when
explicitly selected. As we add use-cases, I'm sure we'll find issues and add
tests, both for bugs and for places where extra behaviour is needed. I did
find a bug when developing the derived directed-location branch, and I
back-ported that fix (which is how I broke the unit tests in the first
place!). The bug was that when the min-scores were tied, the solver didn't
fall back to considering the secondary criteria. The constraint solving
library tests didn't hit this because they were using randomized data, so
they were very unlikely to produce ties.

Is that OK?

Revision history for this message
Nachi Ueno (nati-ueno) wrote :

I'm sorry for the late reply, due to the earthquake in Japan.
As you said, it is hard to write fully sufficient test code.
However, would it be possible to add scheduler test code that corresponds to the constraint solving library tests? Test code can be used as documentation, since it shows how to use the ConstraintScheduler.

> It's easy to miss, but the ConstraintDriverTestCase derives from
> _SchedulerBaseTestCase, so it inherits basically the same code coverage as
> the SimpleScheduler tests (although the SimpleScheduler supports some
> 'directed placement' in a slightly hacky way, which I don't support in the
> ConstraintScheduler). The ConstraintScheduler (which is never used unless
> someone specifically requests it) behaves the same way as the
> SimpleScheduler under the basic tests.
>
> In addition, there are unit tests for the constraint solving library.
>
> I believe that's reasonable test coverage. There can always be more tests,
> but I think this is a reasonable start for something that is only used when
> explicitly selected. As we add use-cases, I'm sure we'll find issues and add
> tests, both for bugs and for places where extra behaviour is needed. I did
> find a bug when developing the derived directed-location branch, and I
> back-ported that fix (which is how I broke the unit tests in the first
> place!). The bug was that when the min-scores were tied, the solver didn't
> fall back to considering the secondary criteria. The constraint solving
> library tests didn't hit this because they were using randomized data, so
> they were very unlikely to produce ties.
>
> Is that OK?

Revision history for this message
justinsb (justin-fathomdb) wrote :

Nachi: There are already tests for the constraint scheduler, but the only constraints implemented in this branch are ones that assign to the least-loaded machine. If you look in the derived branch, there are examples of more advanced constraints and better tests.

In the derived branch, I add a constraint that favors putting a compute node as close as possible to a specified location, based on a topology:
https://code.launchpad.net/~justin-fathomdb/nova/schedule-compute-near-volume/+merge/52520
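
The shape of that proximity scoring is roughly as follows (an illustrative
sketch only; the path-like location tuples are an assumption here, and the
real code is in the linked merge proposal):

    def proximity_score(host_location, target_location):
        """Score in (0, 1]: 1.0 at the same location, decaying with the
        number of topology levels separating host and target."""
        shared = 0
        for ours, theirs in zip(host_location, target_location):
            if ours != theirs:
                break
            shared += 1
        hops = (len(host_location) - shared) + (len(target_location) - shared)
        return 1.0 / (1.0 + hops)

    print(proximity_score(('dc1', 'rack3'), ('dc1', 'rack3')))  # 1.0
    print(proximity_score(('dc1', 'rack3'), ('dc1', 'rack7')))  # ~0.33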

Revision history for this message
justinsb (justin-fathomdb) wrote :

Moving to WIP - we're going to discuss this at the Design Summit

Unmerged revisions

725. By justinsb

Back-ported fixes from derived branch

724. By justinsb

Added (derived) unit test for constraint scheduler

723. By justinsb

Fixed pep8, missing copyrights

722. By justinsb

Use datastore in simple scheduler for retrieval in non-forced case

721. By justinsb

Remove DB update code from schedulers; they deal with the datastore now

720. By justinsb

Refactoring data store so that we're not using DB objects
(Different constraints will likely need different queries, and the DB binding will be problematic)

719. By justinsb

Optimization for when we know we're not the worst constraint

718. By justinsb

Small cleanup of tests & pep8

717. By justinsb

Fix logical error, use a priority queue in the constraint solver

716. By justinsb

A few fixes (passes most simple scheduler tests)

Preview Diff

=== added file 'nova/scheduler/constraint.py'
--- nova/scheduler/constraint.py 1970-01-01 00:00:00 +0000
+++ nova/scheduler/constraint.py 2011-03-10 20:18:18 +0000
@@ -0,0 +1,226 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Justin Santa Barbara
4#
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20Constraint-Based Scheduler
21"""
22
23from nova import db
24from nova import flags
25from nova import log as logging
26from nova.scheduler import driver
27from nova.scheduler import constraint_lib
28
29
30LOG = logging.getLogger('nova.scheduler.constraint')
31FLAGS = flags.FLAGS
32
33
34class SchedulerConstraint(constraint_lib.Constraint):
35 def request(self):
36 return self.solver.context['request']
37
38 def request_context(self):
39 return self.solver.context['request_context']
40
41 def scheduler(self):
42 return self.solver.context['scheduler']
43
44 def datastore(self):
45 return self.scheduler().datastore()
46
47 def selectivity(self):
48 # We default to an intermediate value here
49 return 0.75
50
51 def __str__(self):
52 return self.__class__.__name__
53
54 def __repr__(self):
55 return self.__str__()
56
57
58class InstanceConstraintFavorLeastLoaded(SchedulerConstraint):
59 def __init__(self):
60 super(InstanceConstraintFavorLeastLoaded, self).__init__()
61 self.db_results = None
62
63 def _get_database_items(self):
64 if not self.db_results:
65 self.trace(_("get_instance_hosts_sorted"))
66 results = self.datastore().get_instance_hosts_sorted(
67 self.request_context())
68
69 for result in results:
70 self.cache_item(result)
71
72 self.db_results = results
73
74 return self.db_results
75
76 def score_item(self, candidate):
77 score = candidate.utilization_score()
78 self.trace_score(candidate, score)
79 return score
80
81 def get_candidate_iterator(self):
82 """Returns an iterator over all items in best-to-least-good order"""
83 requested_vcpus = self.request()['vcpus']
84
85 for host in self._get_database_items():
86 if host.used + requested_vcpus <= host.capacity:
87 score = self.score_item(host)
88 yield constraint_lib.Candidate(host.id, score)
89 else:
90 self.trace(_("skip %s: capacity") % host)
91
92
93class VolumeConstraintFavorLeastGigabytes(SchedulerConstraint):
94 def __init__(self):
95 super(VolumeConstraintFavorLeastGigabytes, self).__init__()
96 self.db_results = None
97
98 def _get_database_items(self):
99 if not self.db_results:
100 self.trace(_("get_volume_hosts_sorted"))
101 results = self.datastore().get_volume_hosts_sorted(
102 self.request_context())
103
104 for result in results:
105 self.cache_item(result)
106
107 self.db_results = results
108
109 return self.db_results
110
111 def score_item(self, candidate):
112 score = candidate.utilization_score()
113 self.trace_score(candidate, score)
114 return score
115
116 def get_candidate_iterator(self):
117 """Returns an iterator over all items in best-to-least-good order"""
118 requested_size = self.request()['size']
119
120 for host in self._get_database_items():
121 if host.used + requested_size <= host.capacity:
122 score = self.score_item(host)
123 yield constraint_lib.Candidate(host.id, score)
124 else:
125 self.trace(_("skip %s: capacity") % host)
126
127
128class NetworkConstraintFavorLeastNetworks(SchedulerConstraint):
129 def __init__(self):
130 super(NetworkConstraintFavorLeastNetworks, self).__init__()
131 self.db_results = None
132
133 def _get_database_items(self):
134 if not self.db_results:
135 self.trace(_("get_network_hosts_sorted"))
136 results = self.datastore().get_network_hosts_sorted(
137 self.request_context())
138
139 for result in results:
140 self.cache_item(result)
141
142 self.db_results = results
143
144 return self.db_results
145
146 def score_item(self, candidate):
147 score = candidate.utilization_score()
148 self.trace_score(candidate, score)
149 return score
150
151 def get_candidate_iterator(self):
152 """Returns an iterator over all items in best-to-least-good order"""
153 requested_count = 1
154
155 for host in self._get_database_items():
156 if host.used_networks + requested_count <= host.capacity_networks:
157 score = self.score_item(host)
158 yield constraint_lib.Candidate(host.id, score)
159 else:
160 self.trace(_("skip %s: capacity") % host)
161
162
163class ConstraintScheduler(driver.Scheduler):
164 """Implements constraint-based scheduler"""
165 def __init__(self):
166 super(ConstraintScheduler, self).__init__()
167
168 def schedule(self, context, topic, *_args, **_kwargs):
169 raise NotImplementedError(_("Must implement a %s scheduler") % topic)
170
171 def schedule_run_instance(self, request_context, instance_id,
172 *_args, **_kwargs):
173 """Picks a host that is up and has the fewest running instances."""
174 instance_ref = db.instance_get(request_context, instance_id)
175
176 solver = constraint_lib.Solver({'request_context': request_context,
177 'request': instance_ref,
178 'scheduler': self})
179 solver.add_constraint(InstanceConstraintFavorLeastLoaded())
180
181 for service in solver.get_solutions_iterator():
182 if self.service_is_up(service.model):
183 host = service.model['host']
184 self.datastore().record_instance_scheduled(request_context,
185 instance_id,
186 host)
187 return host
188
189 raise driver.NoValidHost(_("No suitable hosts found"))
190
191 def schedule_create_volume(self, request_context, volume_id,
192 *_args, **_kwargs):
193 """Picks a host that is up and has the fewest volumes."""
194 volume_ref = db.volume_get(request_context, volume_id)
195
196 solver = constraint_lib.Solver({'request_context': request_context,
197 'request': volume_ref,
198 'scheduler': self})
199 solver.add_constraint(VolumeConstraintFavorLeastGigabytes())
200
201 for service in solver.get_solutions_iterator():
202 if self.service_is_up(service.model):
203 host = service.model['host']
204 self.datastore().record_volume_scheduled(request_context,
205 volume_id,
206 host)
207 return host
208
209 raise driver.NoValidHost(_("No suitable hosts found"))
210
211 def schedule_set_network_host(self, request_context, *_args, **_kwargs):
212 """Picks a host that is up and has the fewest networks."""
213
214 solver = constraint_lib.Solver({'request_context': request_context,
215 'request': {},
216 'scheduler': self})
217 solver.add_constraint(NetworkConstraintFavorLeastNetworks())
218
219 for service in solver.get_solutions_iterator():
220 if self.service_is_up(service.model):
221 host = service.model['host']
222 self.datastore().record_network_scheduled(request_context,
223 host)
224 return host
225
226 raise driver.NoValidHost(_("No suitable hosts found"))
=== added file 'nova/scheduler/constraint_lib.py'
--- nova/scheduler/constraint_lib.py 1970-01-01 00:00:00 +0000
+++ nova/scheduler/constraint_lib.py 2011-03-10 20:18:18 +0000
@@ -0,0 +1,254 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Justin Santa Barbara
4# All Rights Reserved.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""
19Simple Constraint Solver Library
20"""
21
22import heapq
23
24from nova import flags
25from nova import log as logging
26
27
28LOG = logging.getLogger('nova.scheduler.constraint')
29FLAGS = flags.FLAGS
30
31
32class Candidate(object):
33 def __init__(self, id, score):
34 self.id = id
35 self.score = score
36
37 def __str__(self):
38 return "%s:%s" % (self.id, self.score)
39
40
41class Constraint(object):
42 def __init__(self):
43 self.solver = None
44
45 def __str__(self):
46 return "%s" % self.__class__.__name__
47
48 def score_item(self, candidate):
49 """Scores the 'goodness' of a candidate.
50
51 For acceptable values, the score must be > 0 and <= 1.
52 If score <= 0, then the decision is unacceptable"""
53 raise NotImplementedError("Must implement score_item")
54
55 def get_candidate_iterator(self):
56 """Returns an iterator over all items in best-to-least-good order"""
57 raise NotImplementedError("Must implement get_candidate_iterator")
58
59 def selectivity(self):
60 """Returns the selectivity of the constraint in (0,1]
61
62 A smaller value means the constraint is likely to match fewer results.
63 If the criteria only matches half the items, use 0.5.
64 If it only matches one in 10, use 0.1 etc"""
65 raise NotImplementedError("Must implement selectivity")
66
67 def can_beat_score(self, score):
68 """Checks if we know that we can match/beat the specified score.
69
70 This is an optimization; a typical implementation will check the items
71 in the context and check if at least one of them has a score that is
72 >= the passed score. If so, we're not going to be the worst criteria,
73 so we don't need to get the list of values. See the implementation of
74 the Solver for the details of why!
75 """
76 for item in self.solver.all_cached_items():
77 if self.score_item(item) > score:
78 return True
79 return False
80
81 def set_solver(self, solver):
82 if self.solver:
83 raise Exception("Tried to double-set solver")
84 self.solver = solver
85
86 def trace(self, message):
87 self.solver.trace(message)
88
89 def trace_score(self, candidate, score):
90 self.trace("%s scored %s => %s" % (self, candidate, score))
91
92 def cache_item(self, item):
93 item_id = item.id
94 self.solver.cache_item(item, item_id)
95
96
97def get_selectivity(constraint):
98 return constraint.selectivity()
99
100
101class ConstraintIteratorState(object):
102 """Holds the state of iteration over the candidates for a constraint"""
103 def __init__(self, constraint, query_values=True):
104 self.constraint = constraint
105
106 self.is_done = False
107 self.current = None
108
109 if query_values:
110 self.iterator = constraint.get_candidate_iterator()
111 self.advance()
112 else:
113 self.iterator = None
114
115 def advance(self):
116 next_item = next(self.iterator, None)
117 if next_item:
118 self.current = next_item
119 else:
120 self.current = None
121 self.is_done = True
122
123 def __str__(self):
124 return "%s %s" % (self.constraint, self.current)
125
126
127class Solver(object):
128 """Finds the minimax solution to the constraints
129
130 Formally, we're looking for max(min(score(candidate, constraint))
131 where max is over all candidates
132 and min is over all constraints.
133
134 NOTE(justinsb): Anyone know how to get TeX into here? :-)"""
135
136 def __init__(self, context):
137 self.cached_items = {}
138 self.context = context
139 self.constraints = []
140
141 def add_constraint(self, constraint):
142 self.constraints.append(constraint)
143 constraint.set_solver(self)
144
145 def lookup_item(self, item_id):
146 item = self.cached_items.get(item_id)
147 if not item:
148 # This shouldn't be possible
149 raise Exception(_("Item not in cache: %s") % item_id)
150 # if self.lookup_function:
151 # self.trace(_("lookup on %s") % (item_id))
152 # item = self.lookup_function(item_id)
153 # self.cache(item, item_id)
154 # else:
155 # raise Error()
156 return item
157
158 def cache_item(self, item, item_id):
159 self.cached_items[item_id] = item
160
161 def all_cached_items(self):
162 return self.cached_items.values()
163
164 def trace(self, message):
165 """Records diagnostic information on scheduling decisions.
166
167 Constraint scheduling is not exactly simple. We may in future want to
168 provide (admin) API calls that expose the scheduling reasoning
169 """
170 LOG.debug(message)
171
172 def get_solutions_iterator(self):
173 if not self.constraints:
174 raise Exception("No constraints provided")
175
176 queue = []
177
178 # We sort so that the most selective criteria is done first
179 # This then lets less selective criteria inspect the candidates that
180 # the more selective candidates have chosen first.
181 # This is just a (potential) optimization
182 constraints = sorted(self.constraints, key=get_selectivity)
183
184 # Find the criteria which is the lowest scoring (worst)
185 # Observe that the worst doesn't actually change... the score must get
186 # lower as we iterate through
187 worst = None
188 states = []
189 for constraint in constraints:
190 query_values = True
191
192 if worst and constraint.can_beat_score(worst.current.score):
193 # This is a sneaky optimization. If we know we're not going
194 # to be the worst, we don't need to query (see code below)
195 # If we'd be querying a REST service or DB, this is important
196 self.trace(_("Won't query constraint - not the worst: %s") %
197 (constraint))
198 query_values = False
199
200 state = ConstraintIteratorState(constraint,
201 query_values=query_values)
202 states.append(state)
203
204 if query_values:
205 if not worst or worst.current.score > state.current.score:
206 worst = state
207
208 self.trace(_("Using lowest-scoring criteria: %s") %
209 (worst.constraint))
210
211 while not worst.is_done:
212 self.trace(_("Candidate: %s") % (worst))
213
214 # Loop over other constraints to get the score for the candidate
215 # We break ties by choosing the candidate with the sum of scores
216 min_score = worst.current.score
217 tie_break_score = 0
218
219 item_id = worst.current.id
220 item = self.lookup_item(item_id)
221 for constraint_state in states:
222 score = constraint_state.constraint.score_item(item)
223 min_score = min(min_score, score)
224 tie_break_score += score
225
226 # Heapify is really nice and will look at tie_break_score
227 # if min_score is the same
228 heapq.heappush(queue, (-min_score, -tie_break_score, item))
229
230 # Every future item will have a score <= worst.current.score
231 while True:
232 (head_score, tie_break_score, head_item) = queue[0]
233 head_score = -head_score
234 #tie_break_score = -tie_break_score
235 if head_score <= worst.current.score:
236 break
237
238 heapq.heappop(queue)
239 self.trace(_("Yielding: %s") % (head_item))
240 yield head_item
241
242 # Advance the iterator
243 worst.advance()
244 if not worst.is_done:
245 self.trace(_("Advanced: %s") % (worst))
246
247 self.trace(_("Reached end of: %s") % (worst.constraint))
248
249 while queue:
250 (head_score, tie_break_score, head_item) = heapq.heappop(queue)
251 #head_score = -head_score
252 #tie_break_score = -tie_break_score
253 self.trace(_("Yielding: %s") % (head_item))
254 yield head_item
=== added file 'nova/scheduler/datastore.py'
--- nova/scheduler/datastore.py 1970-01-01 00:00:00 +0000
+++ nova/scheduler/datastore.py 2011-03-10 20:18:18 +0000
@@ -0,0 +1,148 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Justin Santa Barbara
4#
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20DataStore for the scheduler.
21
22Currently backed by the DB.
23"""
24import datetime
25
26from nova import db
27from nova import flags
28from nova import log as logging
29
30
31LOG = logging.getLogger('nova.scheduler.datastore')
32FLAGS = flags.FLAGS
33
34
35class SchedulerAbstractHostModel(object):
36 def __init__(self, id, model, used, capacity):
37 self.id = id
38 self.model = model
39 self.used = used
40 self.capacity = capacity
41
42 def to_s(self):
43 return '%s %s/%s' % (self.id, self.used, self.capacity)
44
45 def utilization_score(self):
46 # The best machine is one that is unused
47 utilization = float(self.used) / float(self.capacity)
48 if utilization > 1:
49 utilization = 1
50 score = 1 - utilization
51 return score
52
53
54class SchedulerInstanceHostModel(SchedulerAbstractHostModel):
55 def __init__(self, id, model, used_cores, capacity_cores):
56 super(SchedulerInstanceHostModel, self).__init__(id, model,
57 used_cores,
58 capacity_cores)
59
60
61class SchedulerVolumeHostModel(SchedulerAbstractHostModel):
62 def __init__(self, id, model, used_gigabytes, capacity_gigabytes):
63 super(SchedulerVolumeHostModel, self).__init__(id, model,
64 used_gigabytes,
65 capacity_gigabytes)
66
67
68class SchedulerNetworkHostModel(SchedulerAbstractHostModel):
69 def __init__(self, id, model, used_networks, capacity_networks):
70 super(SchedulerNetworkHostModel, self).__init__(id, model,
71 used_networks,
72 capacity_networks)
73
74
75class SchedulerDataStore(object):
76 def get_instance_hosts_sorted(self, context):
77 results = []
78 db_results = db.service_get_all_compute_sorted(context)
79 for db_result in db_results:
80 (model, used_cores) = db_result
81 id = model['id']
82
83 # TODO(justinsb): We need a way to know the true capacity
84 capacity_cores = FLAGS.max_cores
85
86 item = SchedulerInstanceHostModel(id,
87 model,
88 used_cores,
89 capacity_cores)
90 results.append(item)
91 # NOTE(justinsb): This is only sorted by score because max is fixed
92 return results
93
94 def service_get_all_by_topic(self, context, topic):
95 return db.service_get_all_by_topic(context, topic)
96
97 def get_volume_hosts_sorted(self, context):
98 results = []
99 db_results = db.service_get_all_volume_sorted(context)
100 for db_result in db_results:
101 (model, used_gigabytes) = db_result
102 id = model['id']
103
104 # TODO(justinsb): We need a way to know the true capacity
105 capacity_gigabytes = FLAGS.max_gigabytes
106
107 item = SchedulerVolumeHostModel(id,
108 model,
109 used_gigabytes,
110 capacity_gigabytes)
111 results.append(item)
112 # NOTE(justinsb): This is only sorted by score because max is fixed
113 return results
114
115 def get_network_hosts_sorted(self, context):
116 results = []
117 db_results = db.service_get_all_network_sorted(context)
118 for db_result in db_results:
119 (model, used_networks) = db_result
120 id = model['id']
121
122 # TODO(justinsb): We need a way to know the true capacity
123 capacity_networks = FLAGS.max_networks
124
125 item = SchedulerNetworkHostModel(id,
126 model,
127 used_networks,
128 capacity_networks)
129 results.append(item)
130 # NOTE(justinsb): This is only sorted by score because max is fixed
131 return results
132
133 def record_instance_scheduled(self, context, instance_id, host):
134 now = datetime.datetime.utcnow()
135 db.instance_update(context,
136 instance_id,
137 {'host': host,
138 'scheduled_at': now})
139
140 def record_volume_scheduled(self, context, volume_id, host):
141 now = datetime.datetime.utcnow()
142 db.volume_update(context,
143 volume_id,
144 {'host': host,
145 'scheduled_at': now})
146
147 def record_network_scheduled(self, context, host):
148 pass
=== modified file 'nova/scheduler/driver.py'
--- nova/scheduler/driver.py 2011-01-18 19:01:16 +0000
+++ nova/scheduler/driver.py 2011-03-10 20:18:18 +0000
@@ -3,6 +3,7 @@
 # Copyright (c) 2010 Openstack, LLC.
 # Copyright 2010 United States Government as represented by the
 # Administrator of the National Aeronautics and Space Administration.
+# Copyright 2011 Justin Santa Barbara
 # All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -23,9 +24,10 @@
 
 import datetime
 
-from nova import db
 from nova import exception
 from nova import flags
+from nova.scheduler import datastore
+
 
 FLAGS = flags.FLAGS
 flags.DEFINE_integer('service_down_time', 60,
@@ -44,6 +46,9 @@
 
 class Scheduler(object):
     """The base class that all Scheduler clases should inherit from."""
+    def __init__(self):
+        super(Scheduler, self).__init__()
+        self._datastore = None
 
     @staticmethod
     def service_is_up(service):
@@ -55,8 +60,7 @@
 
     def hosts_up(self, context, topic):
         """Return the list of hosts that have a running service for topic."""
-
-        services = db.service_get_all_by_topic(context, topic)
+        services = self.datastore().service_get_all_by_topic(context, topic)
         return [service.host
                 for service in services
                 if self.service_is_up(service)]
@@ -64,3 +68,9 @@
     def schedule(self, context, topic, *_args, **_kwargs):
         """Must override at least this method for scheduler to work."""
         raise NotImplementedError(_("Must implement a fallback schedule"))
+
+    def datastore(self):
+        """Returns the associated SchedulerDataStore"""
+        if not self._datastore:
+            self._datastore = datastore.SchedulerDataStore()
+        return self._datastore
=== modified file 'nova/scheduler/simple.py'
--- nova/scheduler/simple.py 2011-01-27 22:14:10 +0000
+++ nova/scheduler/simple.py 2011-03-10 20:18:18 +0000
@@ -3,6 +3,7 @@
 # Copyright (c) 2010 Openstack, LLC.
 # Copyright 2010 United States Government as represented by the
 # Administrator of the National Aeronautics and Space Administration.
+# Copyright 2011 Justin Santa Barbara
 # All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -21,8 +22,6 @@
 Simple Scheduler
 """
 
-import datetime
-
 from nova import db
 from nova import flags
 from nova.scheduler import driver
@@ -52,25 +51,19 @@
             if not self.service_is_up(service):
                 raise driver.WillNotSchedule(_("Host %s is not alive") % host)
 
-            # TODO(vish): this probably belongs in the manager, if we
-            #             can generalize this somehow
-            now = datetime.datetime.utcnow()
-            db.instance_update(context, instance_id, {'host': host,
-                                                      'scheduled_at': now})
+            self.datastore().record_instance_scheduled(context,
+                                                       instance_id,
+                                                       host)
             return host
-        results = db.service_get_all_compute_sorted(context)
+        results = self.datastore().get_instance_hosts_sorted(context)
         for result in results:
-            (service, instance_cores) = result
-            if instance_cores + instance_ref['vcpus'] > FLAGS.max_cores:
+            service = result.model
+            if result.used + instance_ref['vcpus'] > result.capacity:
                 raise driver.NoValidHost(_("All hosts have too many cores"))
             if self.service_is_up(service):
-                # NOTE(vish): this probably belongs in the manager, if we
-                #             can generalize this somehow
-                now = datetime.datetime.utcnow()
-                db.instance_update(context,
-                                   instance_id,
-                                   {'host': service['host'],
-                                    'scheduled_at': now})
+                self.datastore().record_instance_scheduled(context,
+                                                           instance_id,
+                                                           service['host'])
                 return service['host']
         raise driver.NoValidHost(_("No hosts found"))
 
@@ -86,37 +79,33 @@
             if not self.service_is_up(service):
                 raise driver.WillNotSchedule(_("Host %s not available") % host)
 
-            # TODO(vish): this probably belongs in the manager, if we
-            #             can generalize this somehow
-            now = datetime.datetime.utcnow()
-            db.volume_update(context, volume_id, {'host': host,
-                                                  'scheduled_at': now})
+            self.datastore().record_volume_scheduled(context,
+                                                     volume_id,
+                                                     host)
             return host
-        results = db.service_get_all_volume_sorted(context)
+        results = self.datastore().get_volume_hosts_sorted(context)
         for result in results:
-            (service, volume_gigabytes) = result
-            if volume_gigabytes + volume_ref['size'] > FLAGS.max_gigabytes:
+            service = result.model
+            if result.used + volume_ref['size'] > result.capacity:
                 raise driver.NoValidHost(_("All hosts have too many "
                                            "gigabytes"))
             if self.service_is_up(service):
-                # NOTE(vish): this probably belongs in the manager, if we
-                #             can generalize this somehow
-                now = datetime.datetime.utcnow()
-                db.volume_update(context,
-                                 volume_id,
-                                 {'host': service['host'],
-                                  'scheduled_at': now})
+                self.datastore().record_volume_scheduled(context,
+                                                         volume_id,
+                                                         service['host'])
                 return service['host']
         raise driver.NoValidHost(_("No hosts found"))
 
     def schedule_set_network_host(self, context, *_args, **_kwargs):
         """Picks a host that is up and has the fewest networks."""
 
-        results = db.service_get_all_network_sorted(context)
+        results = self.datastore().get_network_hosts_sorted(context)
        for result in results:
-            (service, instance_count) = result
-            if instance_count >= FLAGS.max_networks:
+            service = result.model
+            if result.used + 1 > result.capacity:
                 raise driver.NoValidHost(_("All hosts have too many networks"))
             if self.service_is_up(service):
+                self.datastore().record_network_scheduled(context,
+                                                          service['host'])
                 return service['host']
         raise driver.NoValidHost(_("No hosts found"))
=== modified file 'nova/scheduler/zone.py'
--- nova/scheduler/zone.py 2011-01-11 22:27:36 +0000
+++ nova/scheduler/zone.py 2011-03-10 20:18:18 +0000
@@ -24,7 +24,6 @@
 import random
 
 from nova.scheduler import driver
-from nova import db
 
 
 class ZoneScheduler(driver.Scheduler):
@@ -38,7 +37,7 @@
         if zone is None:
             return self.hosts_up(context, topic)
 
-        services = db.service_get_all_by_topic(context, topic)
+        services = self.datastore().service_get_all_by_topic(context, topic)
         return [service.host
                 for service in services
                 if self.service_is_up(service)
=== added file 'nova/tests/test_constraintlib.py'
--- nova/tests/test_constraintlib.py 1970-01-01 00:00:00 +0000
+++ nova/tests/test_constraintlib.py 2011-03-10 20:18:18 +0000
@@ -0,0 +1,170 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2011 Justin Santa Barbara
4# All Rights Reserved.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17"""
18Tests For Constraint Library
19"""
20
21import random
22
23from nova import flags
24from nova import log as logging
25from nova import test
26from nova.scheduler import constraint_lib
27
28
29LOG = logging.getLogger('nova.tests.constraintlib')
30
31FLAGS = flags.FLAGS
32FLAGS.verbose = True
33
34
35def brute_compute_score(item, constraints):
36 min_score = None
37 for constraint in constraints:
38 score = constraint.score_item(item)
39 if min_score and min_score <= score:
40 continue
41 min_score = score
42
43 return min_score
44
45
46def brute_force(all_items, constraints):
47 max_min_score = None
48 max_min_item = None
49
50 for item in all_items:
51 min_score = brute_compute_score(item, constraints)
52
53 if max_min_score and max_min_score >= min_score:
54 continue
55
56 max_min_score = min_score
57 max_min_item = item
58
59 return (max_min_item, max_min_score)
60
61
62class ParameterBoundConstraint(constraint_lib.Constraint):
63 def __init__(self, all_items, key, selectivity_value):
64 super(ParameterBoundConstraint, self).__init__()
65 self.all_items = all_items
66 self.key = key
67 self.selectivity_value = selectivity_value
68
69 def to_s(self):
70 return "ParameterBoundConstraint:%s" % self.key
71
72 def score_item(self, candidate):
73 return candidate[self.key]
74
75 def get_candidate_iterator(self):
76 """Returns an iterator over all items in best-to-least-good order"""
77 all_items_ordered = sorted(self.all_items,
78 key=self.score_item,
79 reverse=True)
80 for item in all_items_ordered:
81 score = self.score_item(item)
82 item_id = item['id']
83 yield constraint_lib.Candidate(item_id, score)
84
85 def selectivity(self):
86 return self.selectivity_value
87
88
89class ConstraintLibTestCase(test.TestCase):
90 """Test case for constraint library"""
91
92 def test_five_five__five(self):
93 rnd = random.Random()
94 rnd.seed(555)
95
96 for _rep in range(5):
97 self._do_random_test(rnd, 5, 5)
98
99 def test_big(self):
100 rnd = random.Random()
101 rnd.seed(1234)
102
103 item_count = 1000
104 constraints_count = 100
105
106 self._do_random_test(rnd, item_count, constraints_count)
107
108 def _test_random_100(self):
109 """This is hidden because it's slow, but it's a nice torture test"""
110 rnd = random.Random()
111 rnd.seed(1234)
112
113 for _rep in range(100):
114 item_count = rnd.randint(100, 1000)
115 constraints_count = rnd.randint(5, 100)
116
117 self._do_random_test(rnd, item_count, constraints_count)
118
119 def _do_random_test(self, rnd, item_count, constraints_count):
120 all_items = []
121
122 context = {}
123 solver = constraint_lib.Solver(context)
124
125 for j in range(item_count):
126 item = {}
127 for i in range(constraints_count):
128 key = 'c%s' % i
129 item[key] = rnd.random()
130 item['id'] = j
131 all_items.append(item)
132 solver.cache_item(item, item['id'])
133
134 constraints = []
135 for i in range(constraints_count):
136 key = 'c%s' % i
137 selectivity_value = rnd.random()
138 constraint = ParameterBoundConstraint(all_items,
139 key,
140 selectivity_value)
141 solver.add_constraint(constraint)
142 constraints.append(constraint)
143
144 solved_item = None
145
146 previous_item = None
147 for item in solver.get_solutions_iterator():
148 if not solved_item:
149 solved_item = item
150 if previous_item:
151 previous_score = brute_compute_score(previous_item,
152 constraints)
153 item_score = brute_compute_score(item, constraints)
154
155 if previous_score < item_score:
156 LOG.warn("PREVIOUS: %s %s" % (previous_item,
157 previous_score))
158 LOG.warn("THIS: %s %s" % (item,
159 item_score))
160 self.assertFalse("Items not returned in order")
161
162 previous_item = item
163
164 (brute_item, brute_score) = brute_force(all_items, constraints)
165
166 self.assertEquals(brute_score,
167 brute_compute_score(brute_item, constraints))
168 self.assertEquals(brute_score,
169 brute_compute_score(solved_item, constraints))
170 self.assertEquals(brute_item, solved_item)
=== modified file 'nova/tests/test_scheduler.py'
--- nova/tests/test_scheduler.py 2011-03-07 01:25:01 +0000
+++ nova/tests/test_scheduler.py 2011-03-10 20:18:18 +0000
@@ -130,17 +130,20 @@
                                    availability_zone='zone1')
 
 
-class SimpleDriverTestCase(test.TestCase):
-    """Test case for simple driver"""
+class _SchedulerBaseTestCase(test.TestCase):
+    """Base test case for scheduler drivers"""
+    def __init__(self, *args, **kwargs):
+        super(_SchedulerBaseTestCase, self).__init__(*args, **kwargs)
+
     def setUp(self):
-        super(SimpleDriverTestCase, self).setUp()
+        super(_SchedulerBaseTestCase, self).setUp()
         self.flags(connection_type='fake',
                    stub_network=True,
                    max_cores=4,
                    max_gigabytes=4,
                    network_manager='nova.network.manager.FlatManager',
                    volume_driver='nova.volume.driver.FakeISCSIDriver',
-                   scheduler_driver='nova.scheduler.simple.SimpleScheduler')
+                   scheduler_driver=self.scheduler_driver)
         self.scheduler = manager.SchedulerManager()
         self.manager = auth_manager.AuthManager()
         self.user = self.manager.create_user('fake', 'fake', 'fake')
@@ -210,47 +213,6 @@
         compute1.kill()
         compute2.kill()
 
-    def test_specific_host_gets_instance(self):
-        """Ensures if you set availability_zone it launches on that zone"""
-        compute1 = self.start_service('compute', host='host1')
-        compute2 = self.start_service('compute', host='host2')
-        instance_id1 = self._create_instance()
-        compute1.run_instance(self.context, instance_id1)
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        compute1.terminate_instance(self.context, instance_id1)
-        db.instance_destroy(self.context, instance_id2)
-        compute1.kill()
-        compute2.kill()
-
-    def test_wont_sechedule_if_specified_host_is_down(self):
-        compute1 = self.start_service('compute', host='host1')
-        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
-        now = datetime.datetime.utcnow()
-        delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
-        past = now - delta
-        db.service_update(self.context, s1['id'], {'updated_at': past})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        self.assertRaises(driver.WillNotSchedule,
-                          self.scheduler.driver.schedule_run_instance,
-                          self.context,
-                          instance_id2)
-        db.instance_destroy(self.context, instance_id2)
-        compute1.kill()
-
-    def test_will_schedule_on_disabled_host_if_specified(self):
-        compute1 = self.start_service('compute', host='host1')
-        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
-        db.service_update(self.context, s1['id'], {'disabled': True})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        db.instance_destroy(self.context, instance_id2)
-        compute1.kill()
-
     def test_too_many_cores(self):
         """Ensures we don't go over max cores"""
         compute1 = self.start_service('compute', host='host1')
@@ -316,3 +278,58 @@
         volume2.delete_volume(self.context, volume_id)
         volume1.kill()
         volume2.kill()
+
+
+class SimpleDriverTestCase(_SchedulerBaseTestCase):
+    """Test case for simple driver"""
+    def __init__(self, *args, **kwargs):
+        self.scheduler_driver = 'nova.scheduler.simple.SimpleScheduler'
+        super(SimpleDriverTestCase, self).__init__(*args, **kwargs)
+
+    def test_specific_host_gets_instance(self):
+        """Ensures if you set availability_zone it launches on that zone"""
+        compute1 = self.start_service('compute', host='host1')
+        compute2 = self.start_service('compute', host='host2')
+        instance_id1 = self._create_instance()
+        compute1.run_instance(self.context, instance_id1)
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        host = self.scheduler.driver.schedule_run_instance(self.context,
+                                                           instance_id2)
+        self.assertEqual('host1', host)
+        compute1.terminate_instance(self.context, instance_id1)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+        compute2.kill()
+
+    def test_wont_sechedule_if_specified_host_is_down(self):
+        compute1 = self.start_service('compute', host='host1')
+        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
+        now = datetime.datetime.utcnow()
+        delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
+        past = now - delta
+        db.service_update(self.context, s1['id'], {'updated_at': past})
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        self.assertRaises(driver.WillNotSchedule,
+                          self.scheduler.driver.schedule_run_instance,
+                          self.context,
+                          instance_id2)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+
+    def test_will_schedule_on_disabled_host_if_specified(self):
+        compute1 = self.start_service('compute', host='host1')
+        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
+        db.service_update(self.context, s1['id'], {'disabled': True})
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        host = self.scheduler.driver.schedule_run_instance(self.context,
+                                                           instance_id2)
+        self.assertEqual('host1', host)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+
+
+class ConstraintDriverTestCase(_SchedulerBaseTestCase):
+    """Test case for constraint driver"""
+    def __init__(self, *args, **kwargs):
+        self.scheduler_driver = 'nova.scheduler.constraint.ConstraintScheduler'
+        super(ConstraintDriverTestCase, self).__init__(*args, **kwargs)