Merge lp:~fred-yang/nova/TrustedComputingPools into lp:~hudson-openstack/nova/trunk
- TrustedComputingPools
- Merge into trunk
Status: Needs review
Proposed branch: lp:~fred-yang/nova/TrustedComputingPools
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1441 lines (+1393/-0), 9 files modified:
- nova/scheduler/attestation/__init__.py (+24/-0)
- nova/scheduler/attestation/attestation.py (+124/-0)
- nova/scheduler/attestation/client.py (+223/-0)
- nova/scheduler/attestation/service.py (+204/-0)
- nova/scheduler/filters/__init__.py (+1/-0)
- nova/scheduler/filters/json_filter_integrity.py (+124/-0)
- nova/scheduler/manager_integrity.py (+240/-0)
- nova/tests/scheduler/test_json_filter_integrity.py (+251/-0)
- nova/tests/scheduler/test_manager_integrity.py (+202/-0)
To merge this branch: bzr merge lp:~fred-yang/nova/TrustedComputingPools
Related bugs: (none)
Related blueprints: Trusted Computing pools (Low)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Paul Voccio | community | | Needs Fixing
Sandy Walsh | community | | Needs Fixing
Chris Behrens | community | | Needs Fixing

Review via email: mp+76134@code.launchpad.net
Commit message
Description of the change
Enable Trusted Computing Pools in the OpenStack scheduler by adding a new filter driver.
Spec/Design - http://
Blueprint discussed at Diablo Summit - https:/
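Queries to the new filter reuse JsonFilter's grammar. As a minimal sketch of what a caller might send (the `$trust_state.trust_lvl` key and the `=`/`>=` operators come from the proposed json_filter_integrity.py; the helper function itself is illustrative, not part of the branch):

```python
import json

# Operators and trust levels accepted by the proposed filter
# (see json_filter_integrity.py in the preview diff).
OPS = ("=", ">=")
LEVELS = ("trusted", "untrusted")


def build_trust_query(level="trusted", op="="):
    """Build a scheduler query restricting hosts by attested trust level.

    Illustrative helper: returns the JSON string that filter_hosts()
    would json.loads() back into its nested-list query form.
    """
    if op not in OPS:
        raise ValueError("unsupported operator: %s" % op)
    if level not in LEVELS:
        raise ValueError("unsupported trust level: %s" % level)
    return json.dumps([op, "$trust_state.trust_lvl", level])


query = build_trust_query("trusted", ">=")
# → '[">=", "$trust_state.trust_lvl", "trusted"]'
```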
Sandy Walsh (sandy-walsh) wrote:
Thanks Fred ... perhaps we can start with a pep8 cleanup and ensure it complies with the HACKING document. Also if you could remove all the debug/commented out code?
Then we can get into the nitty gritty of it.
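For reference, a cleanup pass along those lines might look like this (the file paths are this branch's new modules; pep8 is the standard upstream checker, installable via pip — the exact invocation is a sketch, not a project script):

```shell
# Run the PEP 8 style checker over the modules added by this branch
pep8 nova/scheduler/attestation/ \
     nova/scheduler/filters/json_filter_integrity.py \
     nova/scheduler/manager_integrity.py \
     nova/tests/scheduler/

# Flag commented-out code (excluding license headers) for manual removal
grep -rn "^\s*#" nova/scheduler/attestation/ | grep -v "Copyright\|License\|vim:" | head
```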
Johannes Erdfelt (johannes.erdfelt) wrote:
I don't know enough about the scheduler to comment on the functionality of the patch, but I do have a style comment. It doesn't seem necessary to split tlvl_string and tlvl_value as separate lists. Especially since the code combines them into a dict at runtime. It's probably cleaner to create a dict literal instead.
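Concretely, the two parallel class attributes in service.py could collapse into one literal mapping; a sketch of the suggested refactor (the names and values mirror the diff's tlvl_string/tlvl_value):

```python
# Instead of parallel lists zipped at runtime:
#   tlvl_string = ['trusted', 'untrusted', 'unknown']
#   tlvl_value = ['lvl10', 'lvl01', 'lvl01']
#   cvt = dict(zip(self.tlvl_string, self.tlvl_value))
# declare the mapping once as a dict literal:
TRUST_LEVELS = {
    'trusted': 'lvl10',
    'untrusted': 'lvl01',
    'unknown': 'lvl01',
}


def to_internal_level(state):
    """Map an attestation-server trust string to the internal level token.

    Mirrors the diff's cvt.get(state, {}) lookup, which falls back to an
    empty dict for unrecognized states.
    """
    return TRUST_LEVELS.get(state, {})
```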
Paul Voccio (pvo) wrote:
Wanted to echo removing comments and some of the formatting the others
mentioned here.
I got 3 failures with the S3APITestCase. Not sure yet whether this is related.
>
Please also add yourself to the Authors file. This failed on the unit tests
for me.
>
> =======
> FAIL: test_authors_
> -------
> Traceback (most recent call last):
> File "/root/
> test_authors_
> '%r not listed in Authors' % missing)
> AssertionError: set([u'<email address hidden>']) not listed in Authors
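For context, the failing check works roughly like this — a simplified sketch of an Authors-file test, not Nova's actual test code: it gathers committer e-mail addresses from the VCS metadata and asserts each one appears in the Authors file.

```python
def check_authors(committer_emails, authors_text):
    """Return the set of committer e-mails missing from the Authors file.

    Simplified sketch: the real test reads addresses from bzr metadata
    and fails with "'%r not listed in Authors'" when this set is
    non-empty.
    """
    listed = set()
    for line in authors_text.splitlines():
        if '<' in line and '>' in line:
            # Extract the address between angle brackets
            listed.add(line[line.index('<') + 1:line.index('>')])
    return set(committer_emails) - listed


missing = check_authors(
    ['fred@example.com'],
    "Jane Doe <jane@example.com>\n")
# 'fred@example.com' is not listed, so the test would fail until the
# contributor adds a matching line to Authors.
```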
fred yang (fred-yang) wrote:
Woop! Learning the process. Should I submit the Authors file now, or can I do another clean-up and submit then?
Thanks,
-Fred
> Wanted to echo removing comments and some of the formatting the others
> mentioned here.
>
> I got some 3 failures with the S3APITestCase. Not sure if this is related or
> not yet.
> >
> Please also add yourself to the Authors file. This failed on the unit tests
> for me.
>
> >
> > =======
> > FAIL: test_authors_
> > -------
> > Traceback (most recent call last):
> > File "/root/
> in
> > test_authors_
> > '%r not listed in Authors' % missing)
> > AssertionError: set([u'<email address hidden>']) not listed in Authors
fred yang (fred-yang) wrote:
Will start with the pep8 tool to clean up ...
Thanks,
-Fred
> Thanks Fred ... perhaps we can start with a pep8 cleanup and ensure it
> complies with the HACKING document. Also if you could remove all the
> debug/commented out code?
>
> Then we can get into the nitty gritty of it.
fred yang (fred-yang) wrote:
Combining string & value into a dict is a good suggestion.
Thanks,
-Fred
> I don't know enough about the scheduler to comment on the functionality of the
> patch, but I do have a style comment. It doesn't seem necessary to split
> tlvl_string and tlvl_value as separate lists. Especially since the code
> combines them into a dict at runtime. It's probably cleaner to create a dict
> literal instead.
Preview Diff
1 | === added directory 'nova/scheduler/attestation' |
2 | === added file 'nova/scheduler/attestation/__init__.py' |
3 | --- nova/scheduler/attestation/__init__.py 1970-01-01 00:00:00 +0000 |
4 | +++ nova/scheduler/attestation/__init__.py 2011-09-20 01:13:28 +0000 |
5 | @@ -0,0 +1,24 @@ |
6 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
7 | + |
8 | +# Copyright (c) 2010 Openstack, LLC. |
9 | +# |
10 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
11 | +# not use this file except in compliance with the License. You may obtain |
12 | +# a copy of the License at |
13 | +# |
14 | +# http://www.apache.org/licenses/LICENSE-2.0 |
15 | +# |
16 | +# Unless required by applicable law or agreed to in writing, software |
17 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
18 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
19 | +# License for the specific language governing permissions and limitations |
20 | +# under the License. |
21 | + |
22 | +""" |
23 | +:mod:`nova.scheduler.attestation` -- API service to attestation server |
24 | +===================================================== |
25 | + |
26 | +.. automodule:: nova.scheduler.attestation |
27 | + :platform: Unix |
28 | + :synopsis: Module that provide https client to attestation server |
29 | +""" |
30 | |
31 | === added file 'nova/scheduler/attestation/attestation.py' |
32 | --- nova/scheduler/attestation/attestation.py 1970-01-01 00:00:00 +0000 |
33 | +++ nova/scheduler/attestation/attestation.py 2011-09-20 01:13:28 +0000 |
34 | @@ -0,0 +1,124 @@ |
35 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
36 | + |
37 | +# Copyright 2010-2011 OpenStack, LLC |
38 | +# All Rights Reserved. |
39 | +# |
40 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
41 | +# not use this file except in compliance with the License. You may obtain |
42 | +# a copy of the License at |
43 | +# |
44 | +# http://www.apache.org/licenses/LICENSE-2.0 |
45 | +# |
46 | +# Unless required by applicable law or agreed to in writing, software |
47 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
48 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
49 | +# License for the specific language governing permissions and limitations |
50 | +# under the License. |
51 | + |
52 | +""" |
53 | +Client classes for callers to attestation server |
54 | +""" |
55 | + |
56 | +import json |
57 | +from nova import log as logging |
58 | + |
59 | +from nova.scheduler.attestation import client |
60 | +LOG = logging.getLogger('nova.scheduler.attestation') |
61 | + |
62 | +class v10Client(client.BaseClient): |
63 | + |
64 | + """Main client class for accessing Attestation server""" |
65 | + |
66 | + DEFAULT_PORT = 4443 |
67 | + |
68 | + def __init__(self, host, port=None, use_ssl=False, doc_root="/v1", |
69 | + auth_user=None, auth_passwd=None, |
70 | + key_file=None, cert_file=None, ca_file=None): |
71 | + """ |
72 | + Creates a new client to an Attestation API service. |
73 | + |
74 | + :param host: The host where Attestation resides |
75 | + :param port: The port where Attestation resides (defaults to 4443) |
76 | + :param use_ssl: Should we use HTTPS? (defaults to False) |
77 | + :param doc_root: Prefix for all URLs we request from host |
78 | + :param auth_user: The user name to authenticate to the server |
79 | + """ |
80 | + port = port or self.DEFAULT_PORT |
81 | + self.doc_root = doc_root |
82 | + super(v10Client, self).__init__(host, port, use_ssl, |
83 | + auth_user, auth_passwd, |
84 | + key_file, cert_file, ca_file) |
85 | + |
86 | + def do_request(self, method, action, body=None, headers=None, params=None): |
87 | + action = "%s/%s" % (self.doc_root, action) |
88 | + return super(v10Client, self).do_request(method, action, body, |
89 | + headers, params) |
90 | + |
91 | + def post_hosts(self, hosts, PcrMask=None): |
92 | + """ |
93 | + Post a list of hosts to attestation server to get trustworthiness |
94 | + as well as PCR index back if specified - |
95 | + Return RequestId |
96 | + |
97 | + :hosts: hosts list to be posted |
98 | + :PcrMask: PCR index to be retrieved |
99 | + :TODO: limit the hosts count to be less than 500 |
100 | + """ |
101 | + body = {} |
102 | + body['count'] = len(hosts) |
103 | + if PcrMask: |
104 | + body['PCRmask'] = PcrMask |
105 | + #body['hosts'] = '+'.join(hosts) |
106 | + body['hosts'] = hosts |
107 | + cooked = json.dumps(body) |
108 | + headers = {} |
109 | + headers['Content-Type'] = 'application/json' |
110 | + headers['Accept'] = 'application/json' |
111 | + res = self.do_request("POST", "PostHosts", cooked, headers) |
112 | + rheader = res.getheaders() |
113 | + LOG.debug(_("post_hosts got return headers %(rheader)s " % locals())) |
114 | + |
115 | + raw = json.loads(res.read()) |
116 | + RequestId = raw['RequestId'] |
117 | + LOG.debug(_("post_hosts got %(raw)s %(RequestId)s" % locals())) |
118 | + return RequestId |
119 | + |
120 | + def get_hosts(self, rid): |
121 | + """ |
122 | + Get trustworthiness from previously posted hosts |
123 | + """ |
124 | + |
125 | + #TODO: perform mark and limit read to pull all data |
126 | + body = {} |
127 | + body['RequestId'] = rid |
128 | + cooked = json.dumps(body) |
129 | + headers = {} |
130 | + headers['Content-Type'] = 'application/json' |
131 | + headers['Accept'] = 'application/json' |
132 | + res = self.do_request("GET", "RequestId", cooked, headers) |
133 | + rheader = res.getheaders() |
134 | + LOG.debug(_("get_hosts got return headers %(rheader)s " % locals())) |
135 | + raw = json.loads(res.read()) |
136 | + data = raw['RequestId'] |
137 | + return data |
138 | + |
139 | + def poll_hosts(self, hosts, PcrMask=None): |
140 | + """ |
141 | + same as post_hosts, but sync & wait for operation done to return data |
142 | + """ |
143 | + body = {} |
144 | + body['count'] = len(hosts) |
145 | + if PcrMask: |
146 | + body['PCRmask'] = PcrMask |
147 | + #body['hosts'] = '+'.join(hosts) |
148 | + body['hosts'] = hosts |
149 | + headers = {} |
150 | + headers['Content-Type'] = 'application/json' |
151 | + headers['Accept'] = 'application/json' |
152 | + body = json.dumps(body) |
153 | + res = self.do_request("POST", "PollHosts", body, headers) |
154 | + rheader = res.getheaders() |
155 | + LOG.debug(_("Poll_hosts headers %(rheader)s " % locals())) |
156 | + raw = json.loads(res.read()) |
157 | + data = raw['PollHosts'] |
158 | + return data |
159 | |
160 | === added file 'nova/scheduler/attestation/client.py' |
161 | --- nova/scheduler/attestation/client.py 1970-01-01 00:00:00 +0000 |
162 | +++ nova/scheduler/attestation/client.py 2011-09-20 01:13:28 +0000 |
163 | @@ -0,0 +1,223 @@ |
164 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
165 | + |
166 | +# Copyright (c) 2010 Openstack, LLC. |
167 | +# Copyright 2010 United States Government as represented by the |
168 | +# Administrator of the National Aeronautics and Space Administration. |
169 | +# All Rights Reserved. |
170 | +# |
171 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
172 | +# not use this file except in compliance with the License. You may obtain |
173 | +# a copy of the License at |
174 | +# |
175 | +# http://www.apache.org/licenses/LICENSE-2.0 |
176 | +# |
177 | +# Unless required by applicable law or agreed to in writing, software |
178 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
179 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
180 | +# License for the specific language governing permissions and limitations |
181 | +# under the License. |
182 | + |
183 | +""" |
184 | +Attestation Client service |
185 | +""" |
186 | + |
187 | +import httplib |
188 | +import logging |
189 | +import socket |
190 | +import urllib |
191 | +import ssl |
192 | + |
193 | +from nova import log as logging |
194 | +from nova import exception |
195 | +LOG = logging.getLogger('nova.scheduler.attestation.client') |
196 | + |
197 | +class HTTPSClientAuthConnection(httplib.HTTPSConnection): |
198 | + """ Class to make a HTTPS connection, with support for full client-based SSL Authentication""" |
199 | + |
200 | + def __init__(self, host, port, key_file, cert_file, ca_file, timeout=None): |
201 | + httplib.HTTPSConnection.__init__(self, host, port, |
202 | + key_file=key_file, cert_file=cert_file) |
203 | + self.host = host |
204 | + self.port = port |
205 | + self.key_file = key_file |
206 | + self.cert_file = cert_file |
207 | + self.ca_file = ca_file |
208 | + self.timeout = timeout |
209 | + |
210 | + def connect(self): |
211 | + """ Connect to a host on a given (SSL) port. |
212 | + Use ca_file pointing somewhere, use it to check Server Certificate. |
213 | + Redefined/copied and extended from httplib.py:1105 (Python 2.6.x). |
214 | + """ |
215 | + host = self.host |
216 | + port = self.port |
217 | + sock = socket.create_connection((self.host, self.port), self.timeout) |
218 | + if self._tunnel_host: |
219 | + self.sock = sock |
220 | + self._tunnel() |
221 | + # If there's no CA File, don't force Server Certificate Check |
222 | + if self.ca_file: |
223 | + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, |
224 | + ssl_version=ssl.PROTOCOL_SSLv23, |
225 | + ca_certs=self.ca_file, |
226 | + cert_reqs=ssl.CERT_REQUIRED) |
227 | + LOG.debug(_("Https server certified " % locals())) |
228 | + s = self.sock |
229 | + cert = s.getpeercert() |
230 | + else: |
231 | + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, |
232 | + cert_reqs=ssl.CERT_NONE) |
233 | + |
234 | + |
235 | +class BaseClient(object): |
236 | + |
237 | + """A base client class""" |
238 | + |
239 | + CHUNKSIZE = 65536 |
240 | + |
241 | + def __init__(self, host, port, use_ssl, auth_user, auth_passwd, |
242 | + key_file=None, cert_file=None, ca_file=None): |
243 | + """ |
244 | + Creates a new client to some service. |
245 | + |
246 | + :param host: The host where service resides |
247 | + :param port: The port where service resides |
248 | + :param use_ssl: Should we use HTTPS? |
249 | + :param auth_user: user name to connect with attestation server |
250 | + :param auth_passwd: password to get attestation server authentication |
251 | + """ |
252 | + self.host = host |
253 | + self.port = port |
254 | + self.use_ssl = use_ssl |
255 | + self.auth_user = auth_user |
256 | + self.auth_passwd = auth_passwd |
257 | + self.connection = None |
258 | + self.key_file = key_file |
259 | + self.cert_file = cert_file |
260 | + self.ca_file = ca_file |
261 | + |
262 | + def set_auth_blob(self, auth_user, auth_passwd): |
263 | + """ |
264 | + Updates the authentication token for this client connection. |
265 | + """ |
266 | + self.auth_user = auth_user |
267 | + self.auth_passwd = auth_passwd |
268 | + |
269 | + def get_connection(self): |
270 | + """ |
271 | + Returns the proper connection type |
272 | + """ |
273 | + host = self.host |
274 | + port = self.port |
275 | + if self.use_ssl: |
276 | + return HTTPSClientAuthConnection(self.host, self.port, |
277 | + key_file=self.key_file, |
278 | + cert_file=self.cert_file, |
279 | + ca_file=self.ca_file) |
280 | + else: |
281 | + return httplib.HTTPConnection(self.host, self.port) |
282 | + |
283 | + def do_request(self, method, action, body=None, headers=None, params=None): |
284 | + """ |
285 | + Connects to the server and issues a request. Handles converting |
286 | + any returned HTTP error status codes to OpenStack/Glance exceptions |
287 | + and closing the server connection. Returns the result data, or |
288 | + raises an appropriate exception. |
289 | + |
290 | + """ |
291 | + if type(params) is dict: |
292 | + |
293 | + # remove any params that are None |
294 | + for (key, value) in params.items(): |
295 | + if value is None: |
296 | + del params[key] |
297 | + |
298 | + action += '?' + urllib.urlencode(params) |
299 | + |
300 | + try: |
301 | + c = self.get_connection() |
302 | + headers = headers or {} |
303 | + if 'x-auth-user' not in headers and self.auth_user: |
304 | + headers['x-auth-user'] = self.auth_user |
305 | + if 'x-auth-passwd' not in headers and self.auth_passwd: |
306 | + headers['x-auth-passwd'] = self.auth_passwd |
307 | + |
308 | + # Do a simple request or a chunked request, depending |
309 | + # on whether the body param is a file-like object and |
310 | + # the method is PUT or POST |
311 | + #if hasattr(body, 'read') and method.lower() in ('post', 'put'): |
312 | + if body: |
313 | + # query is in body |
314 | + c.putrequest(method, action) |
315 | + |
316 | + for header, value in headers.items(): |
317 | + c.putheader(header, value) |
318 | + #c.putheader('Content-type', 'text/json') |
319 | + c.putheader('Content-length', len(body)) |
320 | + c.putheader('Transfer-Encoding', 'chunked') |
321 | + c.endheaders() |
322 | + |
323 | + #chunk = body.read(self.CHUNKSIZE) |
324 | + c.send('%s' % (body)) |
325 | + c.send('0\r\n\r\n') |
326 | + else: |
327 | + # Simple request.through uri |
328 | + c.request(method, action, body, headers) |
329 | + res = c.getresponse() |
330 | + #status_code = self.get_status_code(res) |
331 | + status_code = res.status |
332 | + if status_code in (httplib.OK, |
333 | + httplib.CREATED, |
334 | + httplib.ACCEPTED, |
335 | + httplib.NO_CONTENT): |
336 | + return res |
337 | + elif status_code == httplib.UNAUTHORIZED: |
338 | + raise exception.NotAuthorized |
339 | + elif status_code == httplib.FORBIDDEN: |
340 | + raise exception.NotAuthorized |
341 | + elif status_code == httplib.NOT_FOUND: |
342 | + raise exception.NotFound |
343 | + elif status_code == httplib.CONFLICT: |
344 | + raise exception.Duplicate(res.read()) |
345 | + elif status_code == httplib.BAD_REQUEST: |
346 | + raise exception.Invalid(res.read()) |
347 | + elif status_code == httplib.INTERNAL_SERVER_ERROR: |
348 | + raise Exception("Internal Server error: %s" % res.read()) |
349 | + else: |
350 | + raise Exception("Unknown error occurred! %s" % res.read()) |
351 | + |
352 | + except (socket.error, IOError), e: |
353 | + raise Exception("Unable to connect to server. Got error: %s" % e) |
354 | + |
355 | + def get_status_code(self, response): |
356 | + """ |
357 | + Returns the integer status code from the response, which |
358 | + can be either a Webob.Response (used in testing) or httplib.Response |
359 | + """ |
360 | + if hasattr(response, 'status_int'): |
361 | + return response.status_int |
362 | + else: |
363 | + return response.status |
364 | + |
365 | + def _extract_params(self, actual_params, allowed_params): |
366 | + """ |
367 | + Extract a subset of keys from a dictionary. The filters key |
368 | + will also be extracted, and each of its values will be returned |
369 | + as an individual param. |
370 | + |
371 | + :param actual_params: dict of keys to filter |
372 | + :param allowed_params: list of keys that 'actual_params' will be |
373 | + reduced to |
374 | + :retval subset of 'params' dict |
375 | + """ |
376 | + try: |
377 | + # expect 'filters' param to be a dict here |
378 | + result = dict(actual_params.get('filters')) |
379 | + except TypeError: |
380 | + result = {} |
381 | + |
382 | + for allowed_param in allowed_params: |
383 | + if allowed_param in actual_params: |
384 | + result[allowed_param] = actual_params[allowed_param] |
385 | + |
386 | + return result |
387 | |
388 | === added file 'nova/scheduler/attestation/service.py' |
389 | --- nova/scheduler/attestation/service.py 1970-01-01 00:00:00 +0000 |
390 | +++ nova/scheduler/attestation/service.py 2011-09-20 01:13:28 +0000 |
391 | @@ -0,0 +1,204 @@ |
392 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
393 | + |
394 | +# Copyright (c) 2010 Openstack, LLC. |
395 | +# Copyright 2010 United States Government as represented by the |
396 | +# Administrator of the National Aeronautics and Space Administration. |
397 | +# All Rights Reserved. |
398 | +# |
399 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
400 | +# not use this file except in compliance with the License. You may obtain |
401 | +# a copy of the License at |
402 | +# |
403 | +# http://www.apache.org/licenses/LICENSE-2.0 |
404 | +# |
405 | +# Unless required by applicable law or agreed to in writing, software |
406 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
407 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
408 | +# License for the specific language governing permissions and limitations |
409 | +# under the License. |
410 | + |
411 | +""" |
412 | +Attestation Service |
413 | +""" |
414 | + |
415 | +import datetime |
416 | + |
417 | +from nova import flags |
418 | +from nova import log as logging |
419 | +from nova import utils |
420 | + |
421 | +from nova.scheduler.attestation import attestation |
422 | + |
423 | + |
424 | +LOG = logging.getLogger('nova.scheduler.attestation.service') |
425 | +FLAGS = flags.FLAGS |
426 | +flags.DEFINE_integer('attestation_geo_required', 0, |
427 | + 'require to use PCR22 as geo indicator') |
428 | +flags.DEFINE_integer('attestation_max_hosts', 300, |
429 | + 'max number of hosts per request') |
430 | + |
431 | +flags.DEFINE_string('attestation_server', 'localhost', |
432 | + 'attestation server http') |
433 | +flags.DEFINE_string('attestation_server_ca_file', None, |
434 | + 'attestation server Cert file for Identity verification') |
435 | +flags.DEFINE_string('attestation_port', '4443', |
436 | + 'attestation server port') |
437 | +flags.DEFINE_string('attestation_auth_user', 'MustSetupThisUser', |
438 | + 'attestation access user - must change') |
439 | +flags.DEFINE_string('attestation_auth_passwd', 'MustChangeThisPasswd', |
440 | + 'attestation access passwd - must change') |
441 | + |
442 | +class AttestationService(object): |
443 | + def __init__(self): |
444 | + self.host = FLAGS.attestation_server |
445 | + self.port = FLAGS.attestation_port |
446 | + self.auth_user = FLAGS.attestation_auth_user |
447 | + self.auth_passwd = FLAGS.attestation_auth_passwd |
448 | + self.ca_file = FLAGS.attestation_server_ca_file |
449 | + self.use_ssl = True |
450 | + self.doc = "v1.0" |
451 | + self.attestation = attestation.v10Client(self.host, self.port, |
452 | + self.use_ssl, self.doc, self.auth_user, self.auth_passwd, |
453 | + ca_file = self.ca_file) |
454 | + self.req_count = FLAGS.attestation_max_hosts or 300 |
455 | + # max number of hosts per request |
456 | + # TODO: retrieve from server |
457 | + self.requests = {} # tracks request ids with host list |
458 | + self.req_id = [] |
459 | + |
460 | + tlvl_string = ['trusted', 'untrusted', 'unknown'] |
461 | + tlvl_value = ['lvl10', 'lvl01', 'lvl01'] |
462 | + |
463 | + def _post_hosts(self, hosts, PcrMask=None): |
464 | + total = len(hosts) |
465 | + max = self.req_count |
466 | + n = (total + max - 1) // max |
467 | + count_per_loop = total // n |
468 | + start = 0 |
469 | + if FLAGS.attestation_geo_required: |
470 | + PcrMask = '400000' # also read PCR22 in |
471 | + #TODO: need error handling |
472 | + while total: |
473 | + if total < count_per_loop: |
474 | + count_per_loop = total |
475 | + list = hosts[start:start + count_per_loop] |
476 | + rid = self.attestation.post_hosts(list, PcrMask) |
477 | + self.req_id.append(rid) |
478 | + total -= count_per_loop |
479 | + start += count_per_loop |
480 | + return {} |
481 | + |
482 | + def _get_hosts(self, hosts): |
483 | + """ |
484 | + get result from previously posted rid |
485 | + """ |
486 | + if not len(self.req_id): |
487 | + return {} |
488 | + data = [] |
489 | + done = [] |
490 | + index = 0 |
491 | + for rid in self.req_id: |
492 | + res = self.attestation.get_hosts(rid) |
493 | + hosts = self._process_packet(res, rid) |
494 | + if isinstance(hosts, dict): |
495 | + # got result |
496 | + data.append(hosts) |
497 | + done.append(rid) |
498 | + else: |
499 | + # TODO: something wrong, need clean-up |
500 | + # return hosts to get caller to issue command again |
501 | + return 'BAD_DATA' |
502 | + index += 1 |
503 | + |
504 | + report = self._pack_host_data(data) |
505 | + for rid in done: |
506 | + self.req_id.remove(rid) |
507 | + return report |
508 | + |
509 | + def _poll_hosts(self, hosts, PcrMask=None): |
510 | + total = len(hosts) |
511 | + max = self.req_count |
512 | + n = (total + max - 1) // max |
513 | + count_per_loop = total // n |
514 | + start = 0 |
515 | + if FLAGS.attestation_geo_required: |
516 | + PcrMask = '400000' # also read PCR22 in |
517 | + #TODO: need error handling |
518 | + res = [] |
519 | + while total: |
520 | + if total < count_per_loop: |
521 | + count_per_loop = total |
522 | + list = hosts[start:start + count_per_loop] |
523 | + data = self.attestation.poll_hosts(list, PcrMask) |
524 | + if isinstance(data, dict): |
525 | + res.append(data) |
526 | + total -= count_per_loop |
527 | + start += count_per_loop |
528 | + report = self._pack_host_data(res) |
529 | + return report |
530 | + |
531 | + commands = { |
532 | + 'post_hosts': _post_hosts, |
533 | + 'get_hosts': _get_hosts, |
534 | + 'poll_hosts': _poll_hosts, |
535 | + } |
536 | + |
537 | + def do_attestation(self, cmd, hosts=None): |
538 | + """ |
539 | + cmd: [post_hosts, get_hosts, poll_hosts] |
540 | + """ |
541 | + method = self.commands[cmd.lower()] # Let exception fly. |
542 | + report = method(self, hosts) |
543 | + return report |
544 | + |
545 | + def _pack_host_data(self, data): |
546 | + report = {} |
547 | + cvt = dict(zip(self.tlvl_string, self.tlvl_value)) |
548 | + for item in data: |
549 | + count = item['count'] |
550 | + hosts = item['hosts'] |
551 | + for host, state in hosts.iteritems(): |
552 | + entry = {} |
553 | + entry['trust_lvl'] = cvt.get(state['trust_lvl'], {}) |
554 | + entry['vtime'] = state['timestamp'] |
555 | + if FLAGS.attestation_geo_required: |
556 | + entry['geo'] = state['PCR22'] |
557 | + report[host] = entry |
558 | + count -= 1 |
559 | + if count: |
560 | + LOG.warn(_("Get hosts missed count %(count)s" % locals())) |
561 | + return report |
562 | + |
563 | + |
564 | + def _process_packet(self, packet, rid): |
565 | + """ |
566 | + Expected input format |
567 | + { 'jobid' : rid, |
568 | + 'jobstatus' : 0/1/2 (processing/success/failed) |
569 | + 'jobcode' : 0 or others |
570 | + 'jobresult' : "error information" |
571 | + (continue hosts information if success) |
572 | + 'count' : 2 |
573 | + 'hosts' : [ |
574 | + {host1 : { |
575 | + 'trust_lvl': trusted|untrusted|unknown |
576 | + 'timestamp': UTC |
577 | + 'PCRxx' : xxx}, |
578 | + host2 : { |
579 | + 'trust_lvl': trusted|untrusted|unknown |
580 | + 'timestamp': UTC |
581 | + 'PCRxx' : xxx}, |
582 | + }] |
583 | + } |
584 | + """ |
585 | + if not packet['jobid'] == rid: |
586 | + LOG.warn(_("Incoming packet has wrong RequestId" % locals())) |
587 | + return 2 |
588 | + status = packet['jobstatus'] |
589 | + if status == 2: |
590 | + result = packet['jobresult'] |
591 | + LOG.warn(_("Incoming packet error %(result)s " % locals())) |
592 | + return 2 |
593 | + if status == 1: |
594 | + return packet |
595 | + return {} |
596 | |
597 | === modified file 'nova/scheduler/filters/__init__.py' |
598 | --- nova/scheduler/filters/__init__.py 2011-08-15 22:09:39 +0000 |
599 | +++ nova/scheduler/filters/__init__.py 2011-09-20 01:13:28 +0000 |
600 | @@ -34,3 +34,4 @@ |
601 | from all_hosts_filter import AllHostsFilter |
602 | from instance_type_filter import InstanceTypeFilter |
603 | from json_filter import JsonFilter |
604 | +from json_filter_integrity import JsonFilterIntegrity |
605 | |
606 | === added file 'nova/scheduler/filters/json_filter_integrity.py' |
607 | --- nova/scheduler/filters/json_filter_integrity.py 1970-01-01 00:00:00 +0000 |
608 | +++ nova/scheduler/filters/json_filter_integrity.py 2011-09-20 01:13:28 +0000 |
609 | @@ -0,0 +1,124 @@ |
610 | +# Copyright (c) 2011 Openstack, LLC. |
611 | +# All Rights Reserved. |
612 | +# |
613 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
614 | +# not use this file except in compliance with the License. You may obtain |
615 | +# a copy of the License at |
616 | +# |
617 | +# http://www.apache.org/licenses/LICENSE-2.0 |
618 | +# |
619 | +# Unless required by applicable law or agreed to in writing, software |
620 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
621 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
622 | +# License for the specific language governing permissions and limitations |
623 | +# under the License. |
624 | + |
625 | +""" |
626 | +Supporting trusted compute pool host filtering for Json |
627 | +""" |
628 | + |
629 | +import json |
630 | + |
631 | +from nova import exception |
632 | +from nova import flags |
633 | +from nova import log as logging |
634 | +from nova import utils |
635 | +from nova.scheduler.filters import json_filter |
636 | + |
637 | +LOG = logging.getLogger('nova.scheduler.filter.json_filter_integrity') |
638 | + |
639 | +FLAGS = flags.FLAGS |
640 | +flags.DEFINE_integer('trust_pool_enforced', 1, |
641 | + 'enforce compute nodes to be trusted or non-trusted') |
642 | + |
643 | +#currently supported commands |
644 | +trust_value = ['trusted'] |
645 | +untrust_value = ['Untrusted'] |
646 | +basic_cmd = ['='] |
647 | +extra_cmd = ['>='] |
648 | +tlvl_string = ['trusted', 'untrusted'] |
649 | +tlvl_value = ['lvl10', 'lvl01'] |
650 | + |
651 | +class JsonFilterIntegrity(json_filter.JsonFilter): |
652 | + """Host Filter to allow simple JSON-based grammar for |
653 | + selecting hosts. |
654 | + """ |
655 | + |
656 | + def _trust_pool_reconfirm(self, zone_manager, hosts): |
657 | + return zone_manager.integrity_service.reconfirm(hosts) |
658 | + |
659 | + |
660 | + def _convert_trust_query(self, cmd, arg): |
661 | + """pre-scan query to sanitize trust_lvl request""" |
662 | + if len(arg) != 2: |
663 | + raise exception.Error(_("Bad $trust_state combination")) |
664 | + |
665 | + # convert to internal value |
666 | + cvt = dict(zip(tlvl_string, tlvl_value)) |
667 | + value = cvt.get(arg[1].lower(), {}) |
668 | + if value: |
669 | + tmp = [arg[0], value] |
670 | + else: |
671 | + tmp = arg |
672 | + v = arg[1] |
673 | + if cmd in extra_cmd and v in trust_value: |
674 | + return tmp |
675 | + if cmd in basic_cmd and (v in trust_value or v in untrust_value): |
676 | + return tmp |
677 | + raise exception.Error(_("Bad $trust_state combination")) |
678 | + |
679 | + def _process_tfilter(self, zone_manager, query, host, services, c): |
680 | + """Recursively parse the query structure.""" |
681 | + if len(query) == 0: |
682 | + return c, True |
683 | + cmd = query[0] |
684 | + method = self.commands[cmd] # Let exception fly. |
685 | + cooked_args = [] |
686 | + for arg in query[1:]: |
687 | + if isinstance(arg, list): |
688 | + c, arg = self._process_tfilter( |
689 | + zone_manager, arg, host, services, c) |
690 | + elif isinstance(arg, basestring): |
691 | + arg = self._parse_string(arg, host, services) |
692 | + if arg != None: |
693 | + cooked_args.append(arg) |
694 | + |
695 | + if cooked_args and isinstance(cooked_args[0], str): |
696 | + if len(cooked_args[0]) > 3 and cooked_args[0][0:3] == 'lvl': |
697 | + cooked_args = self._convert_trust_query(cmd, cooked_args) |
698 | + c += 1 |
699 | + |
700 | + result = method(self, cooked_args) |
701 | + return c, result |
702 | + |
703 | + def filter_hosts(self, zone_manager, query): |
704 | + """Return a list of hosts that can fulfill filter.""" |
705 | + expanded = json.loads(query) |
706 | + filtered_hosts = [] |
707 | + for host, services in zone_manager.service_states.iteritems(): |
708 | + c = 0 |
709 | + c, r = self._process_tfilter( |
710 | + zone_manager, expanded, host, services, c) |
711 | + if isinstance(r, list): |
712 | + r = True in r |
713 | + if r: |
714 | + filtered_hosts.append((host, services)) |
715 | + |
716 | + if not c and FLAGS.trust_pool_enforced: |
717 | + lists = [] |
718 | + # The query carries no trust_lvl clause, so force all such |
719 | + # requests onto untrusted nodes only |
720 | + cvt = dict(zip(tlvl_string, tlvl_value)) |
721 | + untrusted = cvt.get('untrusted', {}) |
722 | + query = ["=", "$trust_state.trust_lvl", untrusted] |
723 | + for host, services in filtered_hosts: |
724 | + r = self._process_filter(zone_manager, query, host, services) |
725 | + if isinstance(r, list): |
726 | + r = True in r |
727 | + if r: |
728 | + lists.append((host, services)) |
729 | + filtered_hosts = lists |
730 | + filtered_hosts = self._trust_pool_reconfirm( |
731 | + zone_manager, filtered_hosts) |
732 | + LOG.debug(_("Found hosts %(filtered_hosts)s") % locals()) |
733 | + return filtered_hosts |
734 | |
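Picking up Johannes's style note from the review: the filter rebuilds the string-to-internal mapping at every call via `dict(zip(tlvl_string, tlvl_value))`. A minimal sketch of the same conversion written with a dict literal instead — the `TRUST_LEVELS` name and the standalone `convert_trust_query` helper are illustrative, not part of the patch:

```python
# Internal trust-level values, per the comments in the patch
# (tlvl_string = ['trusted', 'untrusted'], tlvl_value = ['lvl10', 'lvl01']),
# written once as a dict literal rather than dict(zip(...)) at call time.
TRUST_LEVELS = {
    'trusted': 'lvl10',
    'untrusted': 'lvl01',
}


def convert_trust_query(query):
    """Rewrite an ['op', '$var', level] clause so the human-readable
    trust level becomes the internal value the cache stores."""
    op, var, level = query
    return [op, var, TRUST_LEVELS.get(level.lower(), level)]


print(convert_trust_query(['=', '$trust_state.trust_lvl', 'trusted']))
# → ['=', '$trust_state.trust_lvl', 'lvl10']
```

Unknown levels pass through unchanged, matching the patch's fallback of keeping the original argument when the lookup misses.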
735 | === added file 'nova/scheduler/manager_integrity.py' |
736 | --- nova/scheduler/manager_integrity.py 1970-01-01 00:00:00 +0000 |
737 | +++ nova/scheduler/manager_integrity.py 2011-09-20 01:13:28 +0000 |
738 | @@ -0,0 +1,240 @@ |
739 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
740 | + |
741 | +# Copyright (c) 2010 Openstack, LLC. |
742 | +# Copyright 2010 United States Government as represented by the |
743 | +# Administrator of the National Aeronautics and Space Administration. |
744 | +# All Rights Reserved. |
745 | +# |
746 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
747 | +# not use this file except in compliance with the License. You may obtain |
748 | +# a copy of the License at |
749 | +# |
750 | +# http://www.apache.org/licenses/LICENSE-2.0 |
751 | +# |
752 | +# Unless required by applicable law or agreed to in writing, software |
753 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
754 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
755 | +# License for the specific language governing permissions and limitations |
756 | +# under the License. |
757 | + |
758 | +""" |
759 | +Scheduler Service |
760 | +""" |
761 | + |
762 | +import functools |
763 | +import datetime |
764 | + |
765 | +from nova import context |
766 | +from nova import db |
767 | +from nova import flags |
768 | +from nova import log as logging |
769 | +from nova import manager |
770 | +from nova import rpc |
771 | +from nova import utils |
772 | +from nova.scheduler import manager |
773 | +from nova.scheduler.attestation import service |
774 | + |
775 | +LOG = logging.getLogger('nova.scheduler.manager_integrity') |
776 | +FLAGS = flags.FLAGS |
777 | +flags.DEFINE_bool('integrity_reconfirm', False, |
778 | + 'real-time reconfirm integrity') |
779 | +flags.DEFINE_bool('trusted_pool_debug', False, |
780 | + 'work-around before attestation server is ready') |
781 | + |
782 | +class IntegrityService(object): |
783 | + """ |
784 | + Trusted computing pool service - filters hosts and caches |
785 | + compute nodes' trustworthiness obtained through attestation |
786 | + """ |
787 | + |
788 | + def __init__(self, zone_manager): |
789 | + self.zone_manager = zone_manager |
790 | + self.zone_manager.inflight_hosts = {}  # {host: utc (for testing)} |
791 | + self.zone_manager.tick_snapshots = {}  # {host: {count, updated_at}} |
792 | + self.zone_manager.trust_cache = {}  # {host: trust_lvl} |
793 | + self.zone_manager.attestation_service = AttestationService() |
794 | + |
795 | + def snapshot_update(self, host, service): |
796 | + """ |
797 | + Update the latest compute node report status into the snapshots: |
798 | + tick_snapshots = {host:{'count':report_count,'updated_at':updated_at}} |
799 | + tick_snapshots{} tells whether a node rebooted since the last snapshot |
800 | + """ |
801 | + updated_at = service['updated_at'] or service['created_at'] |
802 | + self.zone_manager.tick_snapshots[host] = {'updated_at': updated_at, |
803 | + 'report_count': service['report_count']} |
804 | + |
805 | + def _do_attestation(self, method, hosts): |
806 | + """expect integrity_report {'host':{trust_lvl:lvl, vtime:UTC}}""" |
807 | + return self.zone_manager.attestation_service.do_attestation( |
808 | + method, hosts) |
809 | + |
810 | + def cache_states_update(self, integrity_report): |
811 | + """called right after do_attestation""" |
812 | + for host, state in integrity_report.iteritems(): |
813 | + self.zone_manager.trust_cache[host] = state['trust_lvl'] |
814 | + self.zone_manager.update_service_capabilities( |
815 | + 'trust_state', host, state) |
816 | + # reconfirm may report hosts that were never in-flight |
816 | + self.zone_manager.inflight_hosts.pop(host, None) |
817 | + |
818 | + def post_hosts(self, hosts): |
819 | + """POST a list of hosts to the attestation server""" |
820 | + lists = [] |
821 | + now = utils.utcnow() |
822 | + for host in hosts: |
823 | + self.zone_manager.inflight_hosts[host] = now |
824 | + lists.append(host) |
825 | + if lists: |
826 | + self._do_attestation('POST_HOSTS', lists) |
828 | + |
829 | + def get_hosts(self, context=None): |
830 | + """GET previously posted attestation results. |
831 | + |
831 | + TODO: break hosts into controllable-size batches for GET. |
831 | + """ |
832 | + hosts = [] |
833 | + for host, time in self.zone_manager.inflight_hosts.iteritems(): |
834 | + hosts.append(host) |
835 | + # expect integrity_report {'host':{Trust_lvl:lvl, vtime:UTC}} |
836 | + if hosts: |
837 | + integrity_report = self._do_attestation('GET_HOSTS', hosts) |
838 | + self.cache_states_update(integrity_report) |
839 | + |
840 | + def post_get_host(self, host, context=None): |
841 | + """POST and GET for a host's integrity report""" |
842 | + # expect integrity_report {'host':{Trust_lvl:lvl, vtime:UTC}} |
843 | + integrity_report = self._do_attestation('POLL_HOSTS', host) |
844 | + self.cache_states_update(integrity_report) |
845 | + |
846 | + def service_is_down(self, service): |
847 | + """Check if a compute node has missed a heartbeat.""" |
848 | + heartbeat = service['updated_at'] or service['created_at'] |
850 | + elapsed = utils.utcnow() - heartbeat |
851 | + return elapsed >= datetime.timedelta(seconds=FLAGS.service_down_time) |
852 | + |
853 | + def host_is_rebooted(self, service, snapshot): |
854 | + """ |
855 | + Check if a host was rebooted: report_count became discontinuous. |
856 | + This should only be called after service_is_down() was verified. |
857 | + """ |
858 | + e_tick = service['report_count'] - snapshot['report_count'] |
859 | + e_time = service['updated_at'] - snapshot['updated_at'] |
860 | + unit = datetime.timedelta(seconds=FLAGS.report_interval) |
862 | + return not e_tick * unit <= e_time <= (e_tick + 1) * unit |
863 | + |
864 | + def cached_data_removal(self, host, service): |
865 | + if host in self.zone_manager.inflight_hosts: |
866 | + del self.zone_manager.inflight_hosts[host] |
868 | + self.zone_manager.update_service_capabilities('trust_state', host, {}) |
869 | + |
870 | + def collect_new_hosts(self, context): |
871 | + """Locate hosts that are new or rebooted in FLAGS.report_interval.""" |
872 | + services = db.service_get_all_by_topic(context, 'compute') |
873 | + hosts = [] |
874 | + for service in services: |
875 | + snapshot = self.zone_manager.tick_snapshots.get(service.host, {}) |
876 | + if self.service_is_down(service): |
877 | + if snapshot: |
878 | + del self.zone_manager.tick_snapshots[service.host] |
879 | + self.cached_data_removal(service.host, service) |
880 | + else: |
881 | + if not snapshot or self.host_is_rebooted(service, snapshot): |
882 | + self.cached_data_removal(service.host, service) |
883 | + hosts.append(service.host) |
884 | + self.snapshot_update(service.host, service) |
885 | + return hosts |
886 | + |
887 | + def cache_setup(self, context=None): |
888 | + """ |
889 | + Populate service_states{} with cached integrity data |
890 | + runs as periodic task from scheduler for every FLAG.report_interval |
891 | + """ |
892 | + self.get_hosts(context) |
893 | + hosts = self.collect_new_hosts(context) |
894 | + self.post_hosts(hosts) |
895 | + |
896 | + def reconfirm(self, lists): |
897 | + """ |
898 | + Decide whether to do a last-minute verification: check if any |
899 | + candidate host was rebooted since the last snapshot |
900 | + """ |
901 | + if FLAGS.integrity_reconfirm: |
902 | + hosts = [] |
903 | + for host, service in lists: |
904 | + hosts.append(host) |
905 | + integrity_report = self._do_attestation('POLL_HOSTS', hosts) |
906 | + self.cache_states_update(integrity_report) |
907 | + hosts = [] |
908 | + for host, service in lists: |
909 | + lvl = service['trust_state'].get('trust_lvl', {}) |
910 | + if integrity_report[host].get('trust_lvl', {}) == lvl: |
911 | + hosts.append((host, service)) |
912 | + else: |
913 | + LOG.warn(_("mis-matched %(host)s trust_lvl %(lvl)s") |
914 | + % locals()) |
915 | + return hosts |
916 | + |
917 | + # TODO: need a better mechanism |
918 | + # collect all good hosts |
919 | + ctx = context.get_admin_context() |
920 | + services = db.service_get_all_by_topic(ctx, 'compute') |
921 | + hosts_good = {} |
922 | + for service in services: |
923 | + if not self.service_is_down(service): |
924 | + snapshot = self.zone_manager.tick_snapshots.get(service.host, {}) |
925 | + if snapshot and not self.host_is_rebooted(service, snapshot): |
926 | + hosts_good[service.host] = 'v' |
927 | + |
928 | + hosts = [] |
929 | + for host, service in lists: |
930 | + if hosts_good.get(host, None): |
931 | + lvl = service['trust_state'].get('trust_lvl', {}) |
932 | + if self.zone_manager.trust_cache[host] == lvl: |
933 | + hosts.append((host, service)) |
934 | + return hosts |
935 | + |
936 | + |
937 | +class AttestationService(object): |
938 | + def __init__(self): |
939 | + if FLAGS.trusted_pool_debug: |
941 | + # only used for unit tests |
943 | + utc = utils.utcnow() |
944 | + self.tlvl_db = {} |
945 | + # tlvl_string = ['trusted', 'untrusted'] |
946 | + # tlvl_value = ['lvl10', 'lvl01'] |
947 | + # must match with host_filter_integrity.py |
948 | + self.tlvl_db['storky'] = {'trust_lvl': 'lvl10', 'vtime': utc} |
949 | + self.tlvl_db['stocky'] = {'trust_lvl': 'lvl01', 'vtime': utc} |
950 | + else: |
951 | + self.attestation = service.AttestationService() |
952 | + |
953 | + def do_attestation(self, cmd, hosts): |
954 | + # TODO: final code must do tlvl_string to tlvl_value conversion |
955 | + if FLAGS.trusted_pool_debug: |
956 | + report = {} |
957 | + utc = utils.utcnow() |
958 | + for host in hosts: |
959 | + lvl = self.tlvl_db[host].get('trust_lvl', {}) |
960 | + report[host] = {'trust_lvl': lvl, 'vtime': utc} |
961 | + return report |
962 | + else: |
963 | + return self.attestation.do_attestation(cmd, hosts) |
964 | + |
965 | +class SchedulerManager(manager.SchedulerManager): |
966 | + """Inject a periodic task into the scheduler to build the cache.""" |
967 | + def __init__(self, scheduler_driver=None, *args, **kwargs): |
968 | + super(SchedulerManager, self).__init__(scheduler_driver, *args, **kwargs) |
969 | + self.zone_manager.integrity_service = IntegrityService(self.zone_manager) |
970 | + |
971 | + def setup_test_attestation(self, module): |
972 | + """For testing purposes only.""" |
973 | + self.zone_manager.integrity_service._do_attestation = module |
974 | + |
975 | + def periodic_tasks(self, context=None): |
976 | + """Run parent tasks, then refresh the integrity cache for this zone.""" |
977 | + super(SchedulerManager, self).periodic_tasks(context) |
978 | + self.zone_manager.integrity_service.cache_setup(context) |
979 | |
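The reboot check in `host_is_rebooted` above is the subtle part of the cache: a compute node bumps `report_count` once per `report_interval`, so if the elapsed wall-clock time since the last snapshot no longer brackets the elapsed tick count, the counter must have been reset by a reboot. A self-contained sketch of that arithmetic — the `REPORT_INTERVAL` constant and the dict shapes are illustrative; the real code reads `FLAGS.report_interval` and service records:

```python
from datetime import datetime, timedelta

REPORT_INTERVAL = 10  # seconds; stands in for FLAGS.report_interval


def host_is_rebooted(service, snapshot):
    """True when the heartbeat ticks since the snapshot do not account
    for the elapsed wall-clock time, i.e. report_count was reset."""
    e_tick = service['report_count'] - snapshot['report_count']
    e_time = service['updated_at'] - snapshot['updated_at']
    unit = timedelta(seconds=REPORT_INTERVAL)
    return not e_tick * unit <= e_time <= (e_tick + 1) * unit


t0 = datetime(2011, 9, 20)
snapshot = {'report_count': 5, 'updated_at': t0}
# 3 ticks in 35s with a 10s interval: consistent, no reboot
healthy = {'report_count': 8, 'updated_at': t0 + timedelta(seconds=35)}
# counter went backwards: must have been reset by a reboot
rebooted = {'report_count': 1, 'updated_at': t0 + timedelta(seconds=35)}
print(host_is_rebooted(healthy, snapshot), host_is_rebooted(rebooted, snapshot))
# → False True
```

The `(e_tick + 1)` upper bound gives one interval of slack, since a heartbeat may land anywhere inside the current reporting window.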
980 | === added file 'nova/tests/scheduler/test_json_filter_integrity.py' |
981 | --- nova/tests/scheduler/test_json_filter_integrity.py 1970-01-01 00:00:00 +0000 |
982 | +++ nova/tests/scheduler/test_json_filter_integrity.py 2011-09-20 01:13:28 +0000 |
983 | @@ -0,0 +1,251 @@ |
984 | +# Copyright 2011 OpenStack LLC. |
985 | +# All Rights Reserved. |
986 | +# |
987 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
988 | +# not use this file except in compliance with the License. You may obtain |
989 | +# a copy of the License at |
990 | +# |
991 | +# http://www.apache.org/licenses/LICENSE-2.0 |
992 | +# |
993 | +# Unless required by applicable law or agreed to in writing, software |
994 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
995 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
996 | +# License for the specific language governing permissions and limitations |
997 | +# under the License. |
998 | +""" |
999 | +Tests For Scheduler Host Filters. |
1000 | +""" |
1001 | + |
1002 | +import json |
1003 | + |
1004 | +import time |
1005 | +import datetime |
1006 | +from nova import utils |
1007 | +from nova import exception |
1008 | +from nova import flags |
1009 | +from nova import test |
1010 | +from nova.scheduler.filters import json_filter |
1011 | +from nova.scheduler.filters import json_filter_integrity |
1012 | +from nova import log as logging |
1013 | + |
1014 | +FLAGS = flags.FLAGS |
1015 | + |
1016 | + |
1017 | +class FakeZoneManager: |
1018 | + pass |
1019 | + |
1020 | + |
1021 | +class HostFilterTestCase(test.TestCase): |
1022 | + """Test case for host filters.""" |
1023 | + |
1024 | + def _host_caps(self, multiplier): |
1025 | + # Returns host capabilities in the following way: |
1026 | + # host1 = memory:free 10 (100max) |
1027 | + # disk:available 100 (1000max) |
1028 | + # hostN = memory:free 10 + 10N |
1029 | + # disk:available 100 + 100N |
1030 | + # in other words: hostN has more resources than host0 |
1031 | + # which means ... don't go above 10 hosts. |
1032 | + return {'host_name-description': 'XenServer %s' % multiplier, |
1033 | + 'host_hostname': 'xs-%s' % multiplier, |
1034 | + 'host_memory_total': 100, |
1035 | + 'host_memory_overhead': 10, |
1036 | + 'host_memory_free': 10 + multiplier * 10, |
1037 | + 'host_memory_free-computed': 10 + multiplier * 10, |
1038 | + 'host_other-config': {}, |
1039 | + 'host_ip_address': '192.168.1.%d' % (100 + multiplier), |
1040 | + 'host_cpu_info': {}, |
1041 | + 'disk_available': 100 + multiplier * 100, |
1042 | + 'disk_total': 1000, |
1043 | + 'disk_used': 0, |
1044 | + 'host_uuid': 'xxx-%d' % multiplier, |
1045 | + 'host_name-label': 'xs-%s' % multiplier} |
1046 | + |
1047 | + def setUp(self): |
1048 | + self.old_flag = FLAGS.default_host_filter |
1049 | + FLAGS.default_host_filter = \ |
1050 | + 'nova.scheduler.filters.json_filter_integrity.AllHostsFilter' |
1051 | + self.instance_type = dict(name='tiny', |
1052 | + memory_mb=50, |
1053 | + vcpus=10, |
1054 | + local_gb=500, |
1055 | + flavorid=1, |
1056 | + swap=500, |
1057 | + rxtx_quota=30000, |
1058 | + rxtx_cap=200, |
1059 | + extra_specs={}) |
1060 | + |
1061 | + self.zone_manager = FakeZoneManager() |
1062 | + states = {} |
1063 | + now = utils.utcnow() |
1064 | + for x in xrange(10): |
1065 | + state = {} |
1066 | + if x in [0, 2, 4, 6, 9]: |
1067 | + lvl = 'lvl10' |
1068 | + state = {'compute': self._host_caps(x), |
1069 | + 'trust_state': {'trust_lvl': lvl, 'vtime': now}} |
1070 | + elif x in [1, 3, 5, 8]: |
1071 | + lvl = 'lvl01' |
1072 | + state = {'compute': self._host_caps(x), |
1073 | + 'trust_state': {'trust_lvl': lvl, 'vtime': now}} |
1074 | + else: |
1075 | + # no trust_state yet: attestation still in flight |
1076 | + state = {'compute': self._host_caps(x)} |
1077 | + states['host%02d' % (x)] = state |
1078 | + self.zone_manager.service_states = states |
1079 | + |
1080 | + def tearDown(self): |
1081 | + FLAGS.default_host_filter = self.old_flag |
1082 | + |
1083 | + def dummy_reconfirm(self, zone_manager, hosts): |
1084 | + return hosts |
1085 | + |
1086 | + def test_json_filter_integrity(self): |
1087 | + hf = json_filter_integrity.JsonFilterIntegrity() |
1089 | + hf._trust_pool_reconfirm = self.dummy_reconfirm |
1090 | + # filter all hosts that can support 50 ram and 500 disk |
1091 | + |
1092 | + name, cooked = hf.instance_type_to_filter(self.instance_type) |
1093 | + self.assertEquals('nova.scheduler.filters.json_filter_integrity.JsonFilterIntegrity', |
1094 | + name) |
1095 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1096 | + self.assertEquals(2, len(hosts)) |
1097 | + just_hosts = [host for host, caps in hosts] |
1098 | + just_hosts.sort() |
1099 | + self.assertEquals('host05', just_hosts[0]) |
1100 | + self.assertEquals('host08', just_hosts[1]) |
1101 | + |
1102 | + raw = [] |
1103 | + cooked = json.dumps(raw) |
1104 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1105 | + self.assertEquals(4, len(hosts)) |
1106 | + |
1107 | + raw = ['>=', '$compute.host_memory_free', 20] |
1108 | + cooked = json.dumps(raw) |
1109 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1110 | + self.assertEquals(4, len(hosts)) |
1111 | + |
1112 | + raw = ['>=', '$trust_state.trust_lvl', 'trusted'] |
1113 | + cooked = json.dumps(raw) |
1114 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1115 | + self.assertEquals(5, len(hosts)) |
1116 | + |
1117 | + raw = ['<', '$trust_state.trust_lvl', 'untrusted'] |
1118 | + cooked = json.dumps(raw) |
1119 | + try: |
1120 | + hf.filter_hosts(self.zone_manager, cooked) |
1121 | + self.fail("Should raise exception.Error") |
1122 | + except exception.Error, e: |
1123 | + pass |
1124 | + # Try some custom queries |
1125 | + |
1126 | + raw = [ 'or', |
1127 | + ['and', |
1128 | + ['<', '$compute.host_memory_free', 30], |
1129 | + ['<', '$compute.disk_available', 300], |
1130 | + ], |
1131 | + ['and', |
1132 | + ['>', '$compute.host_memory_free', 70], |
1133 | + ['>', '$compute.disk_available', 700], |
1134 | + ] |
1135 | + ] |
1136 | + cooked = json.dumps(raw) |
1137 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1138 | + self.assertEquals(2, len(hosts)) |
1139 | + |
1140 | + raw = [ 'or', |
1141 | + ['and', |
1142 | + ['<', '$compute.host_memory_free', 30], |
1143 | + ['<', '$compute.disk_available', 300], |
1144 | + ], |
1145 | + ['and', |
1146 | + ['>', '$compute.host_memory_free', 60], |
1147 | + ['>', '$compute.disk_available', 600], |
1148 | + ['>=', '$trust_state.trust_lvl', 'trusted'] |
1149 | + ] |
1150 | + ] |
1151 | + cooked = json.dumps(raw) |
1152 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1153 | + self.assertEquals(4, len(hosts)) |
1154 | + |
1155 | + raw = ['=', '$trust_state.trust_lvl', 'trusted'] |
1156 | + |
1157 | + cooked = json.dumps(raw) |
1158 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1159 | + |
1160 | + self.assertEquals(5, len(hosts)) |
1161 | + just_hosts = [host for host, caps in hosts] |
1162 | + just_hosts.sort() |
1163 | + for index, host in zip([0, 2, 4, 6, 9], just_hosts): |
1164 | + self.assertEquals('host%02d' % index, host) |
1165 | + |
1166 | + raw = ['and', |
1167 | + ['>=', '$compute.host_memory_free', 30], |
1168 | + ] |
1169 | + cooked = json.dumps(raw) |
1170 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1171 | + |
1172 | + self.assertEquals(3, len(hosts)) |
1173 | + just_hosts = [host for host, caps in hosts] |
1174 | + just_hosts.sort() |
1175 | + for index, host in zip([3, 5, 8], just_hosts): |
1176 | + self.assertEquals('host%02d' % index, host) |
1177 | + |
1178 | + raw = ['and', |
1179 | + ['>=', '$compute.host_memory_free', 30], |
1180 | + ['=', '$trust_state.trust_lvl', 'Untrusted'] |
1181 | + ] |
1182 | + cooked = json.dumps(raw) |
1183 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1184 | + |
1185 | + self.assertEquals(3, len(hosts)) |
1186 | + just_hosts = [host for host, caps in hosts] |
1187 | + just_hosts.sort() |
1188 | + for index, host in zip([3, 5, 8], just_hosts): |
1189 | + self.assertEquals('host%02d' % index, host) |
1190 | + |
1191 | + raw = ['and', |
1192 | + ['>=', '$trust_state.trust_lvl', 'trusted'], |
1193 | + ['>=', '$compute.host_memory_free', 30] |
1194 | + ] |
1195 | + cooked = json.dumps(raw) |
1196 | + hosts = hf.filter_hosts(self.zone_manager, cooked) |
1197 | + |
1198 | + self.assertEquals(4, len(hosts)) |
1199 | + just_hosts = [host for host, caps in hosts] |
1200 | + just_hosts.sort() |
1201 | + for index, host in zip([2, 4, 6, 9], just_hosts): |
1202 | + self.assertEquals('host%02d' % index, host) |
1203 | + |
1204 | + # Try some bogus input ... |
1205 | + raw = ['unknown command', ] |
1206 | + cooked = json.dumps(raw) |
1207 | + try: |
1208 | + hf.filter_hosts(self.zone_manager, cooked) |
1209 | + self.fail("Should give KeyError") |
1210 | + except KeyError, e: |
1211 | + pass |
1212 | + |
1213 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([]))) |
1214 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({}))) |
1215 | + self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps( |
1216 | + ['not', True, False, True, False]))) |
1217 | + |
1218 | + try: |
1219 | + hf.filter_hosts(self.zone_manager, json.dumps( |
1220 | + 'not', True, False, True, False)) |
1221 | + self.fail("Should give KeyError") |
1222 | + except KeyError, e: |
1223 | + pass |
1224 | + |
1225 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
1226 | + json.dumps(['=', '$foo', 100]))) |
1227 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
1228 | + json.dumps(['=', '$.....', 100]))) |
1229 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
1230 | + json.dumps( |
1231 | + ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]))) |
1232 | + |
1233 | + self.assertFalse(hf.filter_hosts(self.zone_manager, |
1234 | + json.dumps(['=', {}, ['>', '$missing....foo']]))) |
1235 | |
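For readers new to the JSON filter grammar these tests exercise: comparison clauses reference host capabilities through the `'$section.key'` syntax, and `'and'`/`'or'` combine sub-clauses. A simplified, self-contained evaluator for one host — the real filter dispatches through a `commands` table and supports more operators (`in`, `not`, unit suffixes); this sketch only covers what the sample queries above use:

```python
# Minimal stand-in for the JSON host-filter evaluation against a single
# host's capability dict. Not the real implementation: just enough to
# show how a nested query resolves.
OPS = {
    '=': lambda a, b: a == b,
    '<': lambda a, b: a < b,
    '>': lambda a, b: a > b,
    '<=': lambda a, b: a <= b,
    '>=': lambda a, b: a >= b,
}


def evaluate(query, services):
    if not query:
        return True  # an empty query matches every host
    op, args = query[0], query[1:]
    vals = []
    for arg in args:
        if isinstance(arg, list):
            vals.append(evaluate(arg, services))      # recurse on sub-clause
        elif isinstance(arg, str) and arg.startswith('$'):
            section, key = arg[1:].split('.', 1)      # '$compute.disk_total'
            vals.append(services.get(section, {}).get(key))
        else:
            vals.append(arg)                          # literal operand
    if op == 'and':
        return all(vals)
    if op == 'or':
        return any(vals)
    return OPS[op](vals[0], vals[1])


host = {'compute': {'host_memory_free': 40},
        'trust_state': {'trust_lvl': 'lvl10'}}
query = ['and',
         ['>=', '$compute.host_memory_free', 30],
         ['=', '$trust_state.trust_lvl', 'lvl10']]
print(evaluate(query, host))
# → True
```

`filter_hosts` in the patch runs essentially this evaluation once per host in `zone_manager.service_states`, keeping the hosts for which it returns true.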
1236 | === added file 'nova/tests/scheduler/test_manager_integrity.py' |
1237 | --- nova/tests/scheduler/test_manager_integrity.py 1970-01-01 00:00:00 +0000 |
1238 | +++ nova/tests/scheduler/test_manager_integrity.py 2011-09-20 01:13:28 +0000 |
1239 | @@ -0,0 +1,202 @@ |
1240 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1241 | + |
1242 | +# Copyright 2010 United States Government as represented by the |
1243 | +# Administrator of the National Aeronautics and Space Administration. |
1244 | +# All Rights Reserved. |
1245 | +# |
1246 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1247 | +# not use this file except in compliance with the License. You may obtain |
1248 | +# a copy of the License at |
1249 | +# |
1250 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1251 | +# |
1252 | +# Unless required by applicable law or agreed to in writing, software |
1253 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1254 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1255 | +# License for the specific language governing permissions and limitations |
1256 | +# under the License. |
1257 | +""" |
1258 | +Tests For Scheduler |
1259 | +""" |
1260 | + |
1261 | +import time |
1262 | +import datetime |
1263 | +import mox |
1264 | +import novaclient.exceptions |
1265 | +import stubout |
1266 | +import webob |
1267 | + |
1268 | +from mox import IgnoreArg |
1269 | +from nova import context |
1270 | +from nova import db |
1271 | +from nova import exception |
1272 | +from nova import flags |
1273 | +from nova import service |
1274 | +from nova import test |
1275 | +from nova import rpc |
1276 | +from nova import utils |
1277 | +from nova.auth import manager as auth_manager |
1278 | +from nova.scheduler import api |
1279 | +from nova.scheduler import manager |
1280 | +from nova.scheduler import manager_integrity |
1281 | +from nova.scheduler import driver |
1282 | +from nova.compute import power_state |
1283 | +from nova.db.sqlalchemy import models |
1284 | +from nova import log as logging |
1285 | + |
1286 | + |
1287 | +FLAGS = flags.FLAGS |
1288 | + |
1289 | +class TestDriver(driver.Scheduler): |
1290 | + """Scheduler Driver for Tests""" |
1291 | + def schedule(context, topic, *args, **kwargs): |
1292 | + return 'fallback_host' |
1293 | + |
1294 | + def schedule_named_method(context, topic, num): |
1295 | + return 'named_host' |
1296 | + |
1297 | + |
1298 | +class SchedulerTestCase(test.TestCase): |
1299 | + """Test case for scheduler""" |
1300 | + def setUp(self): |
1301 | + super(SchedulerTestCase, self).setUp() |
1302 | + driver = 'nova.tests.scheduler.test_scheduler.TestDriver' |
1303 | + self.flags(scheduler_driver=driver) |
1304 | + |
1305 | + self.old_interval_flag = FLAGS.report_interval |
1306 | + FLAGS.report_interval = 2 |
1307 | + self.old_down_flag = FLAGS.service_down_time |
1308 | + FLAGS.service_down_time = 4 |
1309 | + |
1310 | + |
1311 | + def tearDown(self): |
1312 | + self.stubs.UnsetAll() |
1313 | + FLAGS.report_interval = self.old_interval_flag |
1314 | + FLAGS.service_down_time = self.old_down_flag |
1315 | + super(SchedulerTestCase, self).tearDown() |
1316 | + |
1317 | + def _create_ncompute_service(self, n): |
1318 | + """Create compute-manager(ComputeNode and Service record).""" |
1319 | + self.hosts = n |
1320 | + ctxt = context.get_admin_context() |
1321 | + now = utils.utcnow() |
1322 | + dic1 = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute', |
1323 | + 'report_count': 0, 'updated_at':0, 'created_at':0, |
1324 | + 'availability_zone': 'dummyzone'} |
1325 | + |
1326 | + dic2 = {'service_id': 0, |
1327 | + 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100, |
1328 | + 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10, |
1329 | + 'hypervisor_type': 'qemu', 'hypervisor_version': 12003, |
1330 | + 'cpu_info': ''} |
1331 | + |
1332 | + for idx in range(n): |
1333 | + dic1['host'] = 'host%02d' % idx |
1334 | + dic1['report_count'] = idx * 2 |
1335 | + dic1['updated_at'] = now |
1336 | + dic1['created_at'] = now |
1337 | + s_ref = db.service_create(ctxt, dic1) |
1338 | + dic2['service_id'] = s_ref['id'] |
1339 | + db.compute_node_create(ctxt, dic2) |
1340 | + |
1341 | + def _create_integrity_db(self, n): |
1343 | + utc = utils.utcnow() |
1344 | + db = {} |
1345 | + for idx in range(n): |
1346 | + host = 'host%02d' % idx |
1347 | + if idx % 2: |
1348 | + lvl = 'lvl10' |
1349 | + else: |
1350 | + lvl = 'lvl00' |
1351 | + db[host] = {'trust_lvl':lvl, 'vTime':utc} |
1352 | + self.integrity_db = db |
1353 | + |
1354 | + def _create_compute_service(self): |
1355 | + """Create compute-manager(ComputeNode and Service record).""" |
1356 | + ctxt = context.get_admin_context() |
1357 | + dic = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute', |
1358 | + 'report_count': 0, 'availability_zone': 'dummyzone'} |
1359 | + s_ref = db.service_create(ctxt, dic) |
1360 | + |
1361 | + dic = {'service_id': s_ref['id'], |
1362 | + 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100, |
1363 | + 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10, |
1364 | + 'hypervisor_type': 'qemu', 'hypervisor_version': 12003, |
1365 | + 'cpu_info': ''} |
1366 | + db.compute_node_create(ctxt, dic) |
1367 | + |
1368 | + return db.service_get(ctxt, s_ref['id']) |
1369 | + |
1370 | + def _create_db(self, ctxt, scheduler, n): |
1371 | + self._create_ncompute_service(n) |
1380 | + |
1381 | + def _do_attestation(self, cmd, hosts): |
1382 | + utc = utils.utcnow() |
1383 | + self.query = hosts |
1384 | + db = self.integrity_db |
1385 | + report = {} |
1386 | + for host in hosts: |
1387 | + lvl = db[host].get('trust_lvl', {}) |
1388 | + txt = {'trust_lvl':lvl, 'vtime':utc} |
1389 | + report[host] = txt |
1390 | + return report |
1391 | + |
1392 | + def _update_ticks(self, ctx, skips, count, dec, down=None): |
1393 | + now = utils.utcnow() |
1394 | + for idx in range(self.hosts): |
1395 | + if idx % skips: |
1396 | + d = 0 |
1397 | + else: |
1398 | + if down: |
1399 | + continue |
1400 | + d = dec |
1401 | + h = 'host%02d' % idx |
1402 | + ref = db.service_get_by_args(ctx, h, 'nova-compute') |
1403 | + ref['updated_at'] = now |
1404 | + ref['report_count'] += count - d |
1405 | + db.service_update(ctx, ref['id'], ref) |
1406 | + |
1407 | + def test_trust(self): |
1408 | + hosts = 10 |
1409 | + scheduler = manager_integrity.SchedulerManager() |
1410 | + scheduler.setup_test_attestation(self._do_attestation) |
1411 | + ctx = context.get_admin_context() |
1412 | + self._create_db(ctx, scheduler, hosts) |
1413 | + self._create_integrity_db(hosts) |
1414 | + scheduler.periodic_tasks(ctx) |
1415 | + self.assertEqual(10, len(self.query)) |
1416 | + time.sleep(FLAGS.report_interval) |
1417 | + |
1418 | + self._update_ticks(ctx, 1, 1, 0) |
1419 | + scheduler.periodic_tasks(ctx) |
1420 | + |
1421 | + time.sleep(FLAGS.service_down_time) |
1422 | + self._update_ticks(ctx, 3, 2, 2) |
1423 | + scheduler.periodic_tasks(ctx) |
1424 | + self.assertEqual(4, len(self.query)) |
1425 | + time.sleep(FLAGS.service_down_time * 2) |
1426 | + down = 1 |
1427 | + self._update_ticks(ctx, 3, 4, 4, down) |
1428 | + scheduler.periodic_tasks(ctx) |
1429 | + for idx in [0, 3, 6, 9]: |
1430 | + txt = scheduler.zone_manager.service_states.get("host%02d" % idx, {}) |
1431 | + self.assertEqual(0, len(txt)) |
1432 | + for idx in [1, 2, 4, 5, 7, 8]: |
1433 | + host = "host%02d" % idx |
1434 | + service = scheduler.zone_manager.service_states[host] |
1435 | + txt = service.get('trust_state', {}) |
1436 | + lvl = txt.get('trust_lvl') |
1437 | + if idx % 2: |
1438 | + self.assertEqual(lvl, 'lvl10') |
1439 | + else: |
1440 | + self.assertEqual(lvl, 'lvl00') |
1441 | + |
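One behavior reviewers should note: with `trust_pool_enforced` set, `filter_hosts` appends an untrusted-only clause whenever a request carries no `trust_lvl` constraint (the patch's `c` counter stays zero), so unlabeled instances never consume trusted capacity. A hedged sketch of just that decision — the `fallback_clause` helper is illustrative, and `'lvl01'` is the internal "untrusted" value named in the patch's comments:

```python
def fallback_clause(trust_clauses_seen):
    """Return the extra JSON-filter clause applied under enforcement,
    or None when the request already constrained trust_lvl itself.
    trust_clauses_seen plays the role of the patch's `c` counter."""
    if trust_clauses_seen:
        return None
    # internal value for 'untrusted', per the tlvl lists in the diff
    return ['=', '$trust_state.trust_lvl', 'lvl01']


print(fallback_clause(0))
# → ['=', '$trust_state.trust_lvl', 'lvl01']
```

Without enforcement, a query with no trust clause simply matches hosts regardless of attestation state, which is the behavior the 4-host assertions in the filter tests rely on.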
This looks really interesting! I haven't done a thorough review yet, but it looks like around line 276, there's extreme indenting. Maybe there are tabs instead of spaces?