Merge lp:~fred-yang/nova/TrustedComputingPools into lp:~hudson-openstack/nova/trunk
- TrustedComputingPools
- Merge into trunk
Status: | Needs review |
---|---|
Proposed branch: | lp:~fred-yang/nova/TrustedComputingPools |
Merge into: | lp:~hudson-openstack/nova/trunk |
To merge this branch: | bzr merge lp:~fred-yang/nova/TrustedComputingPools |
Related bugs: | (none) |
Related blueprints: | Trusted Computing pools (Low) |

Diff against target: 1441 lines (+1393/-0), 9 files modified

- nova/scheduler/attestation/__init__.py (+24/-0)
- nova/scheduler/attestation/attestation.py (+124/-0)
- nova/scheduler/attestation/client.py (+223/-0)
- nova/scheduler/attestation/service.py (+204/-0)
- nova/scheduler/filters/__init__.py (+1/-0)
- nova/scheduler/filters/json_filter_integrity.py (+124/-0)
- nova/scheduler/manager_integrity.py (+240/-0)
- nova/tests/scheduler/test_json_filter_integrity.py (+251/-0)
- nova/tests/scheduler/test_manager_integrity.py (+202/-0)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Paul Voccio | community | | Needs Fixing
Sandy Walsh | community | | Needs Fixing
Chris Behrens | community | | Needs Fixing

Review via email: mp+76134@code.launchpad.net
Commit message
Description of the change
Enable Trusted Computing Pools in the OpenStack scheduler by adding a new filter driver.
Spec/Design - http://
Blueprint discussed at Diablo Summit - https:/
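For reviewers new to the scheduler's filter drivers: a host filter receives the candidate hosts and returns the subset eligible to run the instance. The sketch below is illustrative only — the class name, method signature, and the `trust_lvl` capability key are assumptions for the purpose of the example, not this branch's actual code:

```python
# Illustrative sketch of a scheduler host filter (names are assumptions,
# not the code proposed in this branch).

class TrustedHostsFilter(object):
    """Keep only hosts whose attestation state matches the request."""

    def filter_hosts(self, hosts, required_trust='trusted'):
        # 'hosts' is assumed to be a list of (hostname, capabilities) pairs,
        # where capabilities is a dict carrying a 'trust_lvl' key populated
        # from the attestation service.
        return [(name, caps) for name, caps in hosts
                if caps.get('trust_lvl') == required_trust]


hosts = [('node1', {'trust_lvl': 'trusted'}),
         ('node2', {'trust_lvl': 'untrusted'}),
         ('node3', {})]
print([name for name, _ in TrustedHostsFilter().filter_hosts(hosts)])
# → ['node1']
```

A real filter would plug into the scheduler's filter chain alongside the existing `JsonFilter`; only the selection predicate differs.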
Sandy Walsh (sandy-walsh) wrote:
Thanks Fred ... perhaps we can start with a pep8 cleanup and ensure it complies with the HACKING document. Also if you could remove all the debug/commented out code?
Then we can get into the nitty gritty of it.
Johannes Erdfelt (johannes.erdfelt) wrote:
I don't know enough about the scheduler to comment on the functionality of the patch, but I do have a style comment. It doesn't seem necessary to split tlvl_string and tlvl_value as separate lists. Especially since the code combines them into a dict at runtime. It's probably cleaner to create a dict literal instead.
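Concretely, the suggestion is to collapse the two parallel lists into a single dict literal, since the code only ever uses their zipped form. A sketch (the list names mirror the branch; `TRUST_LEVELS` is an assumed name for the replacement):

```python
# Before: two parallel lists, zipped into a dict at runtime.
tlvl_string = ['trusted', 'untrusted', 'unknown']
tlvl_value = ['lvl10', 'lvl01', 'lvl01']
cvt = dict(zip(tlvl_string, tlvl_value))

# After: one dict literal states the mapping directly and cannot
# drift out of sync the way two lists can.
TRUST_LEVELS = {
    'trusted': 'lvl10',
    'untrusted': 'lvl01',
    'unknown': 'lvl01',
}

assert cvt == TRUST_LEVELS
```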
Paul Voccio (pvo) wrote:
Wanted to echo removing comments and some of the formatting the others
mentioned here.
I got 3 failures with the S3APITestCase. Not sure yet if this is related or
not.
>
Please also add yourself to the Authors file. This failed on the unit tests
for me.
>
> =======
> FAIL: test_authors_
> -------
> Traceback (most recent call last):
> File "/root/
> test_authors_
> '%r not listed in Authors' % missing)
> AssertionError: set([u'<email address hidden>']) not listed in Authors
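The failing check compares committer e-mail addresses against the Authors file. Roughly, it works like this — a simplified sketch, not the actual test in the nova tree, and the e-mail addresses are hypothetical:

```python
# Simplified sketch of an Authors-file check (not the actual nova test).

def missing_authors(committer_emails, authors_text):
    """Return committer e-mails that do not appear in the Authors file."""
    return set(e for e in committer_emails if e not in authors_text)

authors_text = "Fred Yang <fred.yang@example.com>\n"
missing = missing_authors({'fred.yang@example.com', 'new.dev@example.com'},
                          authors_text)
# The real test then asserts: '%r not listed in Authors' % missing
print(sorted(missing))
# → ['new.dev@example.com']
```

Adding the new committer's line to the Authors file makes `missing` empty and the test pass.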
fred yang (fred-yang) wrote:
Woop! Learning the process. Should I submit the Authors file now, or can I include it in another clean-up submission?
Thanks,
-Fred
> Wanted to echo removing comments and some of the formatting the others
> mentioned here.
>
> I got some 3 failures with the S3APITestCase. Not sure if this is related or
> not yet.
> >
> Please also add yourself to the Authors file. This failed on the unit tests
> for me.
>
> >
> > =======
> > FAIL: test_authors_
> > -------
> > Traceback (most recent call last):
> > File "/root/
> in
> > test_authors_
> > '%r not listed in Authors' % missing)
> > AssertionError: set([u'<email address hidden>']) not listed in Authors
fred yang (fred-yang) wrote:
Will start with the pep8 tool to clean up ...
Thanks,
-Fred
> Thanks Fred ... perhaps we can start with a pep8 cleanup and ensure it
> complies with the HACKING document. Also if you could remove all the
> debug/commented out code?
>
> Then we can get into the nitty gritty of it.
fred yang (fred-yang) wrote:
Combining string & value into a dict is a good suggestion.
Thanks,
-Fred
> I don't know enough about the scheduler to comment on the functionality of the
> patch, but I do have a style comment. It doesn't seem necessary to split
> tlvl_string and tlvl_value as separate lists. Especially since the code
> combines them into a dict at runtime. It's probably cleaner to create a dict
> literal instead.
Preview Diff
1 | === added directory 'nova/scheduler/attestation' | |||
2 | === added file 'nova/scheduler/attestation/__init__.py' | |||
3 | --- nova/scheduler/attestation/__init__.py 1970-01-01 00:00:00 +0000 | |||
4 | +++ nova/scheduler/attestation/__init__.py 2011-09-20 01:13:28 +0000 | |||
5 | @@ -0,0 +1,24 @@ | |||
6 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
7 | 2 | |||
8 | 3 | # Copyright (c) 2010 Openstack, LLC. | ||
9 | 4 | # | ||
10 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
11 | 6 | # not use this file except in compliance with the License. You may obtain | ||
12 | 7 | # a copy of the License at | ||
13 | 8 | # | ||
14 | 9 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
15 | 10 | # | ||
16 | 11 | # Unless required by applicable law or agreed to in writing, software | ||
17 | 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
18 | 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
19 | 14 | # License for the specific language governing permissions and limitations | ||
20 | 15 | # under the License. | ||
21 | 16 | |||
22 | 17 | """ | ||
23 | 18 | :mod:`nova.scheduler.attestation` -- API service to attestation server | ||
24 | 19 | ===================================================== | ||
25 | 20 | |||
26 | 21 | .. automodule:: nova.scheduler.attestation | ||
27 | 22 | :platform: Unix | ||
28 | 23 | :synopsis: Module that provide https client to attestation server | ||
29 | 24 | """ | ||
30 | 0 | 25 | ||
31 | === added file 'nova/scheduler/attestation/attestation.py' | |||
32 | --- nova/scheduler/attestation/attestation.py 1970-01-01 00:00:00 +0000 | |||
33 | +++ nova/scheduler/attestation/attestation.py 2011-09-20 01:13:28 +0000 | |||
34 | @@ -0,0 +1,124 @@ | |||
35 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
36 | 2 | |||
37 | 3 | # Copyright 2010-2011 OpenStack, LLC | ||
38 | 4 | # All Rights Reserved. | ||
39 | 5 | # | ||
40 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
41 | 7 | # not use this file except in compliance with the License. You may obtain | ||
42 | 8 | # a copy of the License at | ||
43 | 9 | # | ||
44 | 10 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
45 | 11 | # | ||
46 | 12 | # Unless required by applicable law or agreed to in writing, software | ||
47 | 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
48 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
49 | 15 | # License for the specific language governing permissions and limitations | ||
50 | 16 | # under the License. | ||
51 | 17 | |||
52 | 18 | """ | ||
53 | 19 | Client classes for callers to attestation server | ||
54 | 20 | """ | ||
55 | 21 | |||
56 | 22 | import json | ||
57 | 23 | from nova import log as logging | ||
58 | 24 | |||
59 | 25 | from nova.scheduler.attestation import client | ||
60 | 26 | LOG = logging.getLogger('nova.scheduler.attestation') | ||
61 | 27 | |||
62 | 28 | class v10Client(client.BaseClient): | ||
63 | 29 | |||
64 | 30 | """Main client class for accessing Attestation server""" | ||
65 | 31 | |||
66 | 32 | DEFAULT_PORT = 4443 | ||
67 | 33 | |||
68 | 34 | def __init__(self, host, port=None, use_ssl=False, doc_root="/v1", | ||
69 | 35 | auth_user=None, auth_passwd=None, | ||
70 | 36 | key_file=None, cert_file=None, ca_file=None): | ||
71 | 37 | """ | ||
72 | 38 | Creates a new client to an Attestation API service. | ||
73 | 39 | |||
74 | 40 | :param host: The host where Attestation resides | ||
75 | 41 | :param port: The port where Attestation resides (defaults to 4443) | ||
76 | 42 | :param use_ssl: Should we use HTTPS? (defaults to False) | ||
77 | 43 | :param doc_root: Prefix for all URLs we request from host | ||
78 | 44 | :param auth_tok: The auth token to pass to the server | ||
79 | 45 | """ | ||
80 | 46 | port = port or self.DEFAULT_PORT | ||
81 | 47 | self.doc_root = doc_root | ||
82 | 48 | super(v10Client, self).__init__(host, port, use_ssl, | ||
83 | 49 | auth_user, auth_passwd, | ||
84 | 50 | key_file, cert_file, ca_file) | ||
85 | 51 | |||
86 | 52 | def do_request(self, method, action, body=None, headers=None, params=None): | ||
87 | 53 | action = "%s/%s" % (self.doc_root, action) | ||
88 | 54 | return super(v10Client, self).do_request(method, action, body, | ||
89 | 55 | headers, params) | ||
90 | 56 | |||
91 | 57 | def post_hosts(self, hosts, PcrMask=None): | ||
92 | 58 | """ | ||
93 | 59 | Post a list of hosts to attestation server to get trustworthiness | ||
94 | 60 | as well as PCR index back if specified - | ||
95 | 61 | Return RequestId | ||
96 | 62 | |||
97 | 63 | :hosts: hosts list to be posted | ||
98 | 64 | :PcrMask: PCR index to be retrieved | ||
99 | 65 | :TODO: limit the hosts count to be less than 500 | ||
100 | 66 | """ | ||
101 | 67 | body = {} | ||
102 | 68 | body['count'] = len(hosts) | ||
103 | 69 | if PcrMask: | ||
104 | 70 | body['PCRmask'] = PcrMask | ||
105 | 71 | #body['hosts'] = '+'.join(hosts) | ||
106 | 72 | body['hosts'] = hosts | ||
107 | 73 | cooked = json.dumps(body) | ||
108 | 74 | headers = {} | ||
109 | 75 | headers['Content-Type'] = 'application/json' | ||
110 | 76 | headers['Accept'] = 'application/json' | ||
111 | 77 | res = self.do_request("POST", "PostHosts", cooked, headers) | ||
112 | 78 | rheader = res.getheaders() | ||
113 | 79 | LOG.debug(_("post_hosts got return headers %(rheader)s " % locals())) | ||
114 | 80 | |||
115 | 81 | raw = json.loads(res.read()) | ||
116 | 82 | RequestId = raw['RequestId'] | ||
117 | 83 | LOG.debug(_("post_hosts got %(raw)s %(RequestId)s" % locals())) | ||
118 | 84 | return RequestId | ||
119 | 85 | |||
120 | 86 | def get_hosts(self, rid): | ||
121 | 87 | """ | ||
122 | 88 | Get trustworthiness from previously posted hosts | ||
123 | 89 | """ | ||
124 | 90 | |||
125 | 91 | #TODO: perform mark and limit read to pull all data | ||
126 | 92 | body = {} | ||
127 | 93 | body['RequestId'] = rid | ||
128 | 94 | cooked = json.dumps(body) | ||
129 | 95 | headers = {} | ||
130 | 96 | headers['Content-Type'] = 'application/json' | ||
131 | 97 | headers['Accept'] = 'application/json' | ||
132 | 98 | res = self.do_request("GET", "RequestId", cooked, headers) | ||
133 | 99 | rheader = res.getheaders() | ||
134 | 100 | LOG.debug(_("get_hosts got return headers %(rheader)s " % locals())) | ||
135 | 101 | raw = json.loads(res.read()) | ||
136 | 102 | data = raw['RequestId'] | ||
137 | 103 | return data | ||
138 | 104 | |||
139 | 105 | def poll_hosts(self, hosts, PcrMask=None): | ||
140 | 106 | """ | ||
141 | 107 | same as post_hosts, but sync & wait for operation done to return data | ||
142 | 108 | """ | ||
143 | 109 | body = {} | ||
144 | 110 | body['count'] = len(hosts) | ||
145 | 111 | if PcrMask: | ||
146 | 112 | body['PCRmask'] = PcrMask | ||
147 | 113 | #body['hosts'] = '+'.join(hosts) | ||
148 | 114 | body['hosts'] = hosts | ||
149 | 115 | headers = {} | ||
150 | 116 | headers['Content-Type'] = 'application/json' | ||
151 | 117 | headers['Accept'] = 'application/json' | ||
152 | 118 | body = json.dumps(body) | ||
153 | 119 | res = self.do_request("POST", "PollHosts", body, headers) | ||
154 | 120 | rheader = res.getheaders() | ||
155 | 121 | LOG.debug(_("Poll_hosts headers %(rheader)s " % locals())) | ||
156 | 122 | raw = json.loads(res.read()) | ||
157 | 123 | data = raw['PollHosts'] | ||
158 | 124 | return data | ||
159 | 0 | 125 | ||
160 | === added file 'nova/scheduler/attestation/client.py' | |||
161 | --- nova/scheduler/attestation/client.py 1970-01-01 00:00:00 +0000 | |||
162 | +++ nova/scheduler/attestation/client.py 2011-09-20 01:13:28 +0000 | |||
163 | @@ -0,0 +1,223 @@ | |||
164 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
165 | 2 | |||
166 | 3 | # Copyright (c) 2010 Openstack, LLC. | ||
167 | 4 | # Copyright 2010 United States Government as represented by the | ||
168 | 5 | # Administrator of the National Aeronautics and Space Administration. | ||
169 | 6 | # All Rights Reserved. | ||
170 | 7 | # | ||
171 | 8 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
172 | 9 | # not use this file except in compliance with the License. You may obtain | ||
173 | 10 | # a copy of the License at | ||
174 | 11 | # | ||
175 | 12 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
176 | 13 | # | ||
177 | 14 | # Unless required by applicable law or agreed to in writing, software | ||
178 | 15 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
179 | 16 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
180 | 17 | # License for the specific language governing permissions and limitations | ||
181 | 18 | # under the License. | ||
182 | 19 | |||
183 | 20 | """ | ||
184 | 21 | Attestation Client service | ||
185 | 22 | """ | ||
186 | 23 | |||
187 | 24 | import httplib | ||
188 | 25 | import logging | ||
189 | 26 | import socket | ||
190 | 27 | import urllib | ||
191 | 28 | import ssl | ||
192 | 29 | |||
193 | 30 | from nova import log as logging | ||
194 | 31 | from nova import exception | ||
195 | 32 | LOG = logging.getLogger('nova.scheduler.attestation.client') | ||
196 | 33 | |||
197 | 34 | class HTTPSClientAuthConnection(httplib.HTTPSConnection): | ||
198 | 35 | """ Class to make an HTTPS connection, with support for full client-based SSL authentication""" | ||
199 | 36 | |||
200 | 37 | def __init__(self, host, port, key_file, cert_file, ca_file, timeout=None): | ||
201 | 38 | httplib.HTTPSConnection.__init__(self, host, port, | ||
202 | 39 | key_file=key_file, cert_file=cert_file) | ||
203 | 40 | self.host = host | ||
204 | 41 | self.port = port | ||
205 | 42 | self.key_file = key_file | ||
206 | 43 | self.cert_file = cert_file | ||
207 | 44 | self.ca_file = ca_file | ||
208 | 45 | self.timeout = timeout | ||
209 | 46 | |||
210 | 47 | def connect(self): | ||
211 | 48 | """ Connect to a host on a given (SSL) port. | ||
212 | 49 | Use ca_file pointing somewhere, use it to check Server Certificate. | ||
213 | 50 | Redefined/copied and extended from httplib.py:1105 (Python 2.6.x). | ||
214 | 51 | """ | ||
215 | 52 | host = self.host | ||
216 | 53 | port = self.port | ||
217 | 54 | sock = socket.create_connection((self.host, self.port), self.timeout) | ||
218 | 55 | if self._tunnel_host: | ||
219 | 56 | self.sock = sock | ||
220 | 57 | self._tunnel() | ||
221 | 58 | # If there's no CA File, don't force Server Certificate Check | ||
222 | 59 | if self.ca_file: | ||
223 | 60 | self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, | ||
224 | 61 | ssl_version=ssl.PROTOCOL_SSLv23, | ||
225 | 62 | ca_certs=self.ca_file, | ||
226 | 63 | cert_reqs=ssl.CERT_REQUIRED) | ||
227 | 64 | LOG.debug(_("Https server certified " % locals())) | ||
228 | 65 | s = self.sock | ||
229 | 66 | cert = s.getpeercert() | ||
230 | 67 | else: | ||
231 | 68 | self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, | ||
232 | 69 | cert_reqs=ssl.CERT_NONE) | ||
233 | 70 | |||
234 | 71 | |||
235 | 72 | class BaseClient(object): | ||
236 | 73 | |||
237 | 74 | """A base client class""" | ||
238 | 75 | |||
239 | 76 | CHUNKSIZE = 65536 | ||
240 | 77 | |||
241 | 78 | def __init__(self, host, port, use_ssl, auth_user, auth_passwd, | ||
242 | 79 | key_file=None, cert_file=None, ca_file=None): | ||
243 | 80 | """ | ||
244 | 81 | Creates a new client to some service. | ||
245 | 82 | |||
246 | 83 | :param host: The host where service resides | ||
247 | 84 | :param port: The port where service resides | ||
248 | 85 | :param use_ssl: Should we use HTTPS? | ||
249 | 86 | :param auth_user: user name to connect with attestation server | ||
250 | 87 | :param auth_passwd: password to get attestation server authentication | ||
251 | 88 | """ | ||
252 | 89 | self.host = host | ||
253 | 90 | self.port = port | ||
254 | 91 | self.use_ssl = use_ssl | ||
255 | 92 | self.auth_user = auth_user | ||
256 | 93 | self.auth_passwd = auth_passwd | ||
257 | 94 | self.connection = None | ||
258 | 95 | self.key_file = key_file | ||
259 | 96 | self.cert_file = cert_file | ||
260 | 97 | self.ca_file = ca_file | ||
261 | 98 | |||
262 | 99 | def set_auth_blob(self, auth_user, auth_passwd): | ||
263 | 100 | """ | ||
264 | 101 | Updates the authentication token for this client connection. | ||
265 | 102 | """ | ||
266 | 103 | self.auth_user = auth_user | ||
267 | 104 | self.auth_passwd = auth_passwd | ||
268 | 105 | |||
269 | 106 | def get_connection(self): | ||
270 | 107 | """ | ||
271 | 108 | Returns the proper connection type | ||
272 | 109 | """ | ||
273 | 110 | host = self.host | ||
274 | 111 | port = self.port | ||
275 | 112 | if self.use_ssl: | ||
276 | 113 | return HTTPSClientAuthConnection(self.host, self.port, | ||
277 | 114 | key_file=self.key_file, | ||
278 | 115 | cert_file=self.cert_file, | ||
279 | 116 | ca_file=self.ca_file) | ||
280 | 117 | else: | ||
281 | 118 | return httplib.HTTPConnection(self.host, self.port) | ||
282 | 119 | |||
283 | 120 | def do_request(self, method, action, body=None, headers=None, params=None): | ||
284 | 121 | """ | ||
285 | 122 | Connects to the server and issues a request. Handles converting | ||
286 | 123 | any returned HTTP error status codes to OpenStack/Glance exceptions | ||
287 | 124 | and closing the server connection. Returns the result data, or | ||
288 | 125 | raises an appropriate exception. | ||
289 | 126 | |||
290 | 127 | """ | ||
291 | 128 | if type(params) is dict: | ||
292 | 129 | |||
293 | 130 | # remove any params that are None | ||
294 | 131 | for (key, value) in params.items(): | ||
295 | 132 | if value is None: | ||
296 | 133 | del params[key] | ||
297 | 134 | |||
298 | 135 | action += '?' + urllib.urlencode(params) | ||
299 | 136 | |||
300 | 137 | try: | ||
301 | 138 | c = self.get_connection() | ||
302 | 139 | headers = headers or {} | ||
303 | 140 | if 'x-auth-user' not in headers and self.auth_user: | ||
304 | 141 | headers['x-auth-user'] = self.auth_user | ||
305 | 142 | if 'x-auth-passwd' not in headers and self.auth_passwd: | ||
306 | 143 | headers['x-auth-passwd'] = self.auth_passwd | ||
307 | 144 | |||
308 | 145 | # Do a simple request or a chunked request, depending | ||
309 | 146 | # on whether the body param is a file-like object and | ||
310 | 147 | # the method is PUT or POST | ||
311 | 148 | #if hasattr(body, 'read') and method.lower() in ('post', 'put'): | ||
312 | 149 | if body: | ||
313 | 150 | # query is in body | ||
314 | 151 | c.putrequest(method, action) | ||
315 | 152 | |||
316 | 153 | for header, value in headers.items(): | ||
317 | 154 | c.putheader(header, value) | ||
318 | 155 | #c.putheader('Content-type', 'text/json') | ||
319 | 156 | c.putheader('Content-length', len(body)) | ||
320 | 157 | c.putheader('Transfer-Encoding', 'chunked') | ||
321 | 158 | c.endheaders() | ||
322 | 159 | |||
323 | 160 | #chunk = body.read(self.CHUNKSIZE) | ||
324 | 161 | c.send('%s' % (body)) | ||
325 | 162 | c.send('0\r\n\r\n') | ||
326 | 163 | else: | ||
327 | 164 | # Simple request.through uri | ||
328 | 165 | c.request(method, action, body, headers) | ||
329 | 166 | res = c.getresponse() | ||
330 | 167 | #status_code = self.get_status_code(res) | ||
331 | 168 | status_code = res.status | ||
332 | 169 | if status_code in (httplib.OK, | ||
333 | 170 | httplib.CREATED, | ||
334 | 171 | httplib.ACCEPTED, | ||
335 | 172 | httplib.NO_CONTENT): | ||
336 | 173 | return res | ||
337 | 174 | elif status_code == httplib.UNAUTHORIZED: | ||
338 | 175 | raise exception.NotAuthorized | ||
339 | 176 | elif status_code == httplib.FORBIDDEN: | ||
340 | 177 | raise exception.NotAuthorized | ||
341 | 178 | elif status_code == httplib.NOT_FOUND: | ||
342 | 179 | raise exception.NotFound | ||
343 | 180 | elif status_code == httplib.CONFLICT: | ||
344 | 181 | raise exception.Duplicate(res.read()) | ||
345 | 182 | elif status_code == httplib.BAD_REQUEST: | ||
346 | 183 | raise exception.Invalid(res.read()) | ||
347 | 184 | elif status_code == httplib.INTERNAL_SERVER_ERROR: | ||
348 | 185 | raise Exception("Internal Server error: %s" % res.read()) | ||
349 | 186 | else: | ||
350 | 187 | raise Exception("Unknown error occurred! %s" % res.read()) | ||
351 | 188 | |||
352 | 189 | except (socket.error, IOError), e: | ||
353 | 190 | raise Exception("Unable to connect to server. Got error: %s" % e) | ||
354 | 191 | |||
355 | 192 | def get_status_code(self, response): | ||
356 | 193 | """ | ||
357 | 194 | Returns the integer status code from the response, which | ||
358 | 195 | can be either a Webob.Response (used in testing) or httplib.Response | ||
359 | 196 | """ | ||
360 | 197 | if hasattr(response, 'status_int'): | ||
361 | 198 | return response.status_int | ||
362 | 199 | else: | ||
363 | 200 | return response.status | ||
364 | 201 | |||
365 | 202 | def _extract_params(self, actual_params, allowed_params): | ||
366 | 203 | """ | ||
367 | 204 | Extract a subset of keys from a dictionary. The filters key | ||
368 | 205 | will also be extracted, and each of its values will be returned | ||
369 | 206 | as an individual param. | ||
370 | 207 | |||
371 | 208 | :param actual_params: dict of keys to filter | ||
372 | 209 | :param allowed_params: list of keys that 'actual_params' will be | ||
373 | 210 | reduced to | ||
374 | 211 | :retval subset of 'params' dict | ||
375 | 212 | """ | ||
376 | 213 | try: | ||
377 | 214 | # expect 'filters' param to be a dict here | ||
378 | 215 | result = dict(actual_params.get('filters')) | ||
379 | 216 | except TypeError: | ||
380 | 217 | result = {} | ||
381 | 218 | |||
382 | 219 | for allowed_param in allowed_params: | ||
383 | 220 | if allowed_param in actual_params: | ||
384 | 221 | result[allowed_param] = actual_params[allowed_param] | ||
385 | 222 | |||
386 | 223 | return result | ||
387 | 0 | 224 | ||
388 | === added file 'nova/scheduler/attestation/service.py' | |||
389 | --- nova/scheduler/attestation/service.py 1970-01-01 00:00:00 +0000 | |||
390 | +++ nova/scheduler/attestation/service.py 2011-09-20 01:13:28 +0000 | |||
391 | @@ -0,0 +1,204 @@ | |||
392 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
393 | 2 | |||
394 | 3 | # Copyright (c) 2010 Openstack, LLC. | ||
395 | 4 | # Copyright 2010 United States Government as represented by the | ||
396 | 5 | # Administrator of the National Aeronautics and Space Administration. | ||
397 | 6 | # All Rights Reserved. | ||
398 | 7 | # | ||
399 | 8 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
400 | 9 | # not use this file except in compliance with the License. You may obtain | ||
401 | 10 | # a copy of the License at | ||
402 | 11 | # | ||
403 | 12 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
404 | 13 | # | ||
405 | 14 | # Unless required by applicable law or agreed to in writing, software | ||
406 | 15 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
407 | 16 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
408 | 17 | # License for the specific language governing permissions and limitations | ||
409 | 18 | # under the License. | ||
410 | 19 | |||
411 | 20 | """ | ||
412 | 21 | Attestation Service | ||
413 | 22 | """ | ||
414 | 23 | |||
415 | 24 | import datetime | ||
416 | 25 | |||
417 | 26 | from nova import flags | ||
418 | 27 | from nova import log as logging | ||
419 | 28 | from nova import utils | ||
420 | 29 | |||
421 | 30 | from nova.scheduler.attestation import attestation | ||
422 | 31 | |||
423 | 32 | |||
424 | 33 | LOG = logging.getLogger('nova.scheduler.attestation.service') | ||
425 | 34 | FLAGS = flags.FLAGS | ||
426 | 35 | flags.DEFINE_integer('attestation_geo_required', 0, | ||
427 | 36 | 'require to use PCR22 as geo indicator') | ||
428 | 37 | flags.DEFINE_integer('attestation_max_hosts', 300, | ||
429 | 38 | 'max number of hosts per request') | ||
430 | 39 | |||
431 | 40 | flags.DEFINE_string('attestation_server', 'localhost', | ||
432 | 41 | 'attestation server http') | ||
433 | 42 | flags.DEFINE_string('attestation_server_ca_file', None, | ||
434 | 43 | 'attestation server Cert file for Identity verification') | ||
435 | 44 | flags.DEFINE_string('attestation_port', '4443', | ||
436 | 45 | 'attestation server port') | ||
437 | 46 | flags.DEFINE_string('attestation_auth_user', 'MustSetupThisUser', | ||
438 | 47 | 'attestation access user - must change') | ||
439 | 48 | flags.DEFINE_string('attestation_auth_passwd', 'MustChangeThisPasswd', | ||
440 | 49 | 'attestation access passwd - must change') | ||
441 | 50 | |||
442 | 51 | class AttestationService(object): | ||
443 | 52 | def __init__(self): | ||
444 | 53 | self.host = FLAGS.attestation_server | ||
445 | 54 | self.port = FLAGS.attestation_port | ||
446 | 55 | self.auth_user = FLAGS.attestation_auth_user | ||
447 | 56 | self.auth_passwd = FLAGS.attestation_auth_passwd | ||
448 | 57 | self.ca_file = FLAGS.attestation_server_ca_file | ||
449 | 58 | self.use_ssl = True | ||
450 | 59 | self.doc = "v1.0" | ||
451 | 60 | self.attestation = attestation.v10Client(self.host, self.port, | ||
452 | 61 | self.use_ssl, self.doc, self.auth_user, self.auth_passwd, | ||
453 | 62 | ca_file = self.ca_file) | ||
454 | 63 | self.req_count = FLAGS.attestation_max_hosts or 300 | ||
455 | 64 | # max number of hosts per request | ||
456 | 65 | # TODO: retrive from server | ||
457 | 66 | self.requests = {} # tracks request ids with host list | ||
458 | 67 | self.req_id = [] | ||
459 | 68 | |||
460 | 69 | tlvl_string = ['trusted', 'untrusted', 'unknown'] | ||
461 | 70 | tlvl_value = ['lvl10', 'lvl01', 'lvl01'] | ||
462 | 71 | |||
463 | 72 | def _post_hosts(self, hosts, PcrMask=None): | ||
464 | 73 | total = len(hosts) | ||
465 | 74 | max = self.req_count | ||
466 | 75 | n = (total + max - 1) // max | ||
467 | 76 | count_per_loop = total // n | ||
468 | 77 | start = 0 | ||
469 | 78 | if FLAGS.attestation_geo_required: | ||
470 | 79 | PcrMask = '400000' # also read PCR22 in | ||
471 | 80 | #TODO: need error handling | ||
472 | 81 | while total: | ||
473 | 82 | if total < count_per_loop: | ||
474 | 83 | count_per_loop = total | ||
475 | 84 | list = hosts[start:count_per_loop] | ||
476 | 85 | rid = self.attestation.post_hosts(list, PcrMask) | ||
477 | 86 | self.req_id.append(rid) | ||
478 | 87 | total -= count_per_loop | ||
479 | 88 | start += count_per_loop | ||
480 | 89 | return {} | ||
481 | 90 | |||
482 | 91 | def _get_hosts(self, hosts): | ||
483 | 92 | """ | ||
484 | 93 | get result from previously posted rid | ||
485 | 94 | """ | ||
486 | 95 | if not len(self.req_id): | ||
487 | 96 | return {} | ||
488 | 97 | data = [] | ||
489 | 98 | done = [] | ||
490 | 99 | index = 0 | ||
491 | 100 | for rid in self.req_id: | ||
492 | 101 | res = self.attestation.get_hosts(rid) | ||
493 | 102 | hosts = self._process_packet(res, rid) | ||
494 | 103 | if isinstance(hosts, dict): | ||
495 | 104 | # got result | ||
496 | 105 | data.append(hosts) | ||
497 | 106 | done.append(rid) | ||
498 | 107 | else: | ||
499 | 108 | # TODO: something wrong, need clean-up | ||
500 | 109 | # return hosts to get caller to issue command again | ||
501 | 110 | return 'BAD_DATA' | ||
502 | 111 | index += 1 | ||
503 | 112 | |||
504 | 113 | report = self._pack_host_data(data) | ||
505 | 114 | for rid in done: | ||
506 | 115 | self.req_id.remove(rid) | ||
507 | 116 | return report | ||
508 | 117 | |||
509 | 118 | def _poll_hosts(self, hosts, PcrMask=None): | ||
510 | 119 | total = len(hosts) | ||
511 | 120 | max = self.req_count | ||
512 | 121 | n = (total + max - 1) // max | ||
513 | 122 | count_per_loop = total // n | ||
514 | 123 | start = 0 | ||
515 | 124 | if FLAGS.attestation_geo_required: | ||
516 | 125 | PcrMask = '400000' # also read PCR22 in | ||
517 | 126 | #TODO: need error handling | ||
518 | 127 | res = [] | ||
519 | 128 | while total: | ||
520 | 129 | if total < count_per_loop: | ||
521 | 130 | count_per_loop = total | ||
522 | 131 | list = hosts[start:count_per_loop] | ||
523 | 132 | data = self.attestation.poll_hosts(list, PcrMask) | ||
524 | 133 | if isinstance(data, dict): | ||
525 | 134 | res.append(data) | ||
526 | 135 | total -= count_per_loop | ||
527 | 136 | start += count_per_loop | ||
528 | 137 | report = self._pack_host_data(res) | ||
529 | 138 | return report | ||
530 | 139 | |||
531 | 140 | commands = { | ||
532 | 141 | 'post_hosts': _post_hosts, | ||
533 | 142 | 'get_hosts': _get_hosts, | ||
534 | 143 | 'poll_hosts': _poll_hosts, | ||
535 | 144 | } | ||
536 | 145 | |||
537 | 146 | def do_attestation(self, cmd, hosts=None): | ||
538 | 147 | """ | ||
539 | 148 | cmd: [post_hosts, get_hosts, poll_hosts] | ||
540 | 149 | """ | ||
541 | 150 | method = self.commands[cmd.lower()] # Let exception fly. | ||
542 | 151 | report = method(self, hosts) | ||
543 | 152 | return report | ||
544 | 153 | |||
545 | 154 | def _pack_host_data(self, data): | ||
546 | 155 | report = {} | ||
547 | 156 | cvt = dict(zip(self.tlvl_string, self.tlvl_value)) | ||
548 | 157 | for item in data: | ||
549 | 158 | count = item['count'] | ||
550 | 159 | hosts = item['hosts'] | ||
551 | 160 | for host, state in hosts.iteritems(): | ||
552 | 161 | entry = {} | ||
553 | 162 | entry['trust_lvl'] = cvt.get(state['trust_lvl'], {}) | ||
554 | 163 | entry['vtime'] = state['timestamp'] | ||
555 | 164 | if FLAGS.attestation_geo_required: | ||
556 | 165 | entry['geo'] = state['PCR22'] | ||
557 | 166 | report[host] = entry | ||
558 | 167 | count -= 1 | ||
559 | 168 | if count: | ||
560 | 169 | LOG.warn(_("Get hosts missed count %(count)s" % locals())) | ||
561 | 170 | return report | ||
562 | 171 | |||
563 | 172 | |||
564 | 173 | def _process_packet(self, packet, rid): | ||
565 | 174 | """ | ||
566 | 175 | Expected input format | ||
567 | 176 | { 'jobid' : rid, | ||
568 | 177 | 'jobstatus' : 0/1/2 (processing/success/failed) | ||
569 | 178 | 'jobcode' : 0 or others | ||
570 | 179 | 'jobresult' : "error information" | ||
571 | 180 | (continue hosts information if success) | ||
572 | 181 | 'count' : 2 | ||
573 | 182 | 'hosts' : [ | ||
574 | 183 | {host1 : { | ||
575 | 184 | 'trust_lvl': trusted|untrusted|unknown | ||
576 | 185 | 'timestamp': UTC | ||
577 | 186 | 'PCRxx' : xxx}, | ||
578 | 187 | host2 : { | ||
579 | 188 | 'trust_lvl': trusted|untrusted|unknown | ||
580 | 189 | 'timestamp': UTC | ||
581 | 190 | 'PCRxx' : xxx}, | ||
582 | 191 | }] | ||
583 | 192 | } | ||
584 | 193 | """ | ||
585 | 194 | if not packet['jobid'] == rid: | ||
586 | 195 | LOG.warn(_("Incoming packet has wrong RequestId" % locals())) | ||
587 | 196 | return 2 | ||
588 | 197 | status = packet['jobstatus'] | ||
589 | 198 | if status == 2: | ||
590 | 199 | result = packet['jobresult'] | ||
591 | 200 | LOG.warn(_("Incoming packet error %(result)s " % locals())) | ||
592 | 201 | return 2 | ||
593 | 202 | if status == 1: | ||
594 | 203 | return packet | ||
595 | 204 | return {} | ||
596 | 0 | 205 | ||
597 | === modified file 'nova/scheduler/filters/__init__.py' | |||
598 | --- nova/scheduler/filters/__init__.py 2011-08-15 22:09:39 +0000 | |||
599 | +++ nova/scheduler/filters/__init__.py 2011-09-20 01:13:28 +0000 | |||
600 | @@ -34,3 +34,4 @@ | |||
601 | 34 | from all_hosts_filter import AllHostsFilter | 34 | from all_hosts_filter import AllHostsFilter |
602 | 35 | from instance_type_filter import InstanceTypeFilter | 35 | from instance_type_filter import InstanceTypeFilter |
603 | 36 | from json_filter import JsonFilter | 36 | from json_filter import JsonFilter |
604 | 37 | from json_filter_integrity import JsonFilterIntegrity | ||
605 | 37 | 38 | ||
606 | === added file 'nova/scheduler/filters/json_filter_integrity.py' | |||
607 | --- nova/scheduler/filters/json_filter_integrity.py 1970-01-01 00:00:00 +0000 | |||
608 | +++ nova/scheduler/filters/json_filter_integrity.py 2011-09-20 01:13:28 +0000 | |||
609 | @@ -0,0 +1,124 @@ | |||
610 | 1 | # Copyright (c) 2011 Openstack, LLC. | ||
611 | 2 | # All Rights Reserved. | ||
612 | 3 | # | ||
613 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
614 | 5 | # not use this file except in compliance with the License. You may obtain | ||
615 | 6 | # a copy of the License at | ||
616 | 7 | # | ||
617 | 8 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
618 | 9 | # | ||
619 | 10 | # Unless required by applicable law or agreed to in writing, software | ||
620 | 11 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
621 | 12 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
622 | 13 | # License for the specific language governing permissions and limitations | ||
623 | 14 | # under the License. | ||
624 | 15 | |||
625 | 16 | """ | ||
626 | 17 | Supporting trusted compute pool host filtering for Json | ||
627 | 18 | """ | ||
628 | 19 | |||
629 | 20 | import json | ||
630 | 21 | |||
631 | 22 | from nova import exception | ||
632 | 23 | from nova import flags | ||
633 | 24 | from nova import log as logging | ||
634 | 25 | from nova import utils | ||
635 | 26 | from nova.scheduler.filters import json_filter | ||
636 | 27 | |||
637 | 28 | LOG = logging.getLogger('nova.scheduler.filter.json_filter_integrity') | ||
638 | 29 | |||
639 | 30 | FLAGS = flags.FLAGS | ||
640 | 31 | flags.DEFINE_integer('trust_pool_enforced', 1, | ||
641 | 32 | 'enforce compute nodes to be trusted or non-trusted') | ||
642 | 33 | |||
643 | 34 | #currently supported commands | ||
644 | 35 | trust_value = ['trusted'] | ||
645 | 36 | untrust_value = ['Untrusted'] | ||
646 | 37 | basic_cmd = ['='] | ||
647 | 38 | extra_cmd = ['>='] | ||
648 | 39 | tlvl_string = ['trusted', 'untrusted'] | ||
649 | 40 | tlvl_value = ['lvl10', 'lvl01'] | ||
650 | 41 | |||
651 | 42 | class JsonFilterIntegrity(json_filter.JsonFilter): | ||
652 | 43 | """Host Filter to allow simple JSON-based grammar for | ||
653 | 44 | selecting hosts. | ||
654 | 45 | """ | ||
655 | 46 | |||
656 | 47 | def _trust_pool_reconfirm(self, zone_manager, hosts): | ||
657 | 48 | return zone_manager.integrity_service.reconfirm(hosts) | ||
658 | 49 | |||
659 | 50 | |||
660 | 51 | def _convert_trust_query(self, cmd, arg): | ||
661 | 52 | """pre-scan query to sanitize trust_lvl request""" | ||
662 | 53 | if len(arg) != 2: | ||
663 | 54 | raise exception.Error(_("Bad $trust_state combination")) | ||
664 | 55 | |||
665 | 56 | # convert to internal value | ||
666 | 57 | cvt = dict(zip(tlvl_string, tlvl_value)) | ||
667 | 58 | value = cvt.get(arg[1].lower(), {}) | ||
668 | 59 | if value: | ||
669 | 60 | tmp = [arg[0], value] | ||
670 | 61 | else: | ||
671 | 62 | tmp = arg | ||
672 | 63 | v = arg[1] | ||
673 | 64 | if cmd in extra_cmd and v in trust_value: | ||
674 | 65 | return tmp | ||
675 | 66 | if cmd in basic_cmd and (v in trust_value or v in untrust_value): | ||
676 | 67 | return tmp | ||
677 | 68 | raise exception.Error(_("Bad $trust_state combination")) | ||
678 | 69 | |||
679 | 70 | def _process_tfilter(self, zone_manager, query, host, services, c): | ||
680 | 71 | """Recursively parse the query structure.""" | ||
681 | 72 | if len(query) == 0: | ||
682 | 73 | return c, True | ||
683 | 74 | cmd = query[0] | ||
684 | 75 | method = self.commands[cmd] # Let exception fly. | ||
685 | 76 | cooked_args = [] | ||
686 | 77 | for arg in query[1:]: | ||
687 | 78 | if isinstance(arg, list): | ||
688 | 79 | c, arg = self._process_tfilter( | ||
689 | 80 | zone_manager, arg, host, services, c) | ||
690 | 81 | elif isinstance(arg, basestring): | ||
691 | 82 | arg = self._parse_string(arg, host, services) | ||
692 | 83 | if arg is not None: | ||
693 | 84 | cooked_args.append(arg) | ||
694 | 85 | |||
695 | 86 | if cooked_args and isinstance(cooked_args[0], basestring): | ||
696 | 87 | if len(cooked_args[0]) > 3 and cooked_args[0].startswith('lvl'): | ||
697 | 88 | cooked_args = self._convert_trust_query(cmd, cooked_args) | ||
698 | 89 | c += 1 | ||
699 | 90 | |||
700 | 91 | result = method(self, cooked_args) | ||
701 | 92 | return c, result | ||
702 | 93 | |||
703 | 94 | def filter_hosts(self, zone_manager, query): | ||
704 | 95 | """Return a list of hosts that can fulfill filter.""" | ||
705 | 96 | expanded = json.loads(query) | ||
706 | 97 | filtered_hosts = [] | ||
707 | 98 | for host, services in zone_manager.service_states.iteritems(): | ||
708 | 99 | c = 0 | ||
709 | 100 | c, r = self._process_tfilter( | ||
710 | 101 | zone_manager, expanded, host, services, c) | ||
711 | 102 | if isinstance(r, list): | ||
712 | 103 | r = True in r | ||
713 | 104 | if r: | ||
714 | 105 | filtered_hosts.append((host, services)) | ||
715 | 106 | |||
716 | 107 | if not c and FLAGS.trust_pool_enforced: | ||
717 | 108 | lists = [] | ||
718 | 109 | # There is no trust_lvl from query, so force all request | ||
719 | 110 | # to only untrusted nodes | ||
720 | 111 | cvt = dict(zip(tlvl_string, tlvl_value)) | ||
721 | 112 | untrusted = cvt.get('untrusted', {}) | ||
722 | 113 | query = ["=", "$trust_state.trust_lvl", untrusted] | ||
723 | 114 | for host, services in filtered_hosts: | ||
724 | 115 | r = self._process_filter(zone_manager, query, host, services) | ||
725 | 116 | if isinstance(r, list): | ||
726 | 117 | r = True in r | ||
727 | 118 | if r: | ||
728 | 119 | lists.append((host, services)) | ||
729 | 120 | filtered_hosts = lists | ||
730 | 121 | filtered_hosts = self._trust_pool_reconfirm( | ||
731 | 122 | zone_manager, filtered_hosts) | ||
732 | 123 | LOG.debug(_("Found hosts %(filtered_hosts)s" % locals())) | ||
733 | 124 | return filtered_hosts | ||
734 | 0 | 125 | ||
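The trust-level grammar accepted by JsonFilterIntegrity can be illustrated with a standalone sketch of the translation done in `_convert_trust_query`: the user-facing strings 'trusted'/'untrusted' become the internal values 'lvl10'/'lvl01', '=' accepts either level, and '>=' accepts only 'trusted'. The names `TLVL_MAP` and `convert_trust_query` here are illustrative; the real code keeps parallel `tlvl_string`/`tlvl_value` lists:

```python
import json

# Internal trust-level encoding mirrored from the filter module.
TLVL_MAP = {'trusted': 'lvl10', 'untrusted': 'lvl01'}

def convert_trust_query(cmd, args):
    """Sanitize a [$trust_state key, level] pair for one operator."""
    key, value = args
    internal = TLVL_MAP.get(value.lower(), value)
    if cmd == '>=' and value.lower() == 'trusted':
        return [key, internal]
    if cmd == '=' and value.lower() in TLVL_MAP:
        return [key, internal]
    raise ValueError("Bad $trust_state combination")

# Queries arrive as JSON, e.g. from the scheduler request:
cmd, key, value = json.loads('["=", "$trust_state.trust_lvl", "trusted"]')
print(convert_trust_query(cmd, [key, value]))  # ['$trust_state.trust_lvl', 'lvl10']
```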
735 | === added file 'nova/scheduler/manager_integrity.py' | |||
736 | --- nova/scheduler/manager_integrity.py 1970-01-01 00:00:00 +0000 | |||
737 | +++ nova/scheduler/manager_integrity.py 2011-09-20 01:13:28 +0000 | |||
738 | @@ -0,0 +1,240 @@ | |||
739 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
740 | 2 | |||
741 | 3 | # Copyright (c) 2010 Openstack, LLC. | ||
742 | 4 | # Copyright 2010 United States Government as represented by the | ||
743 | 5 | # Administrator of the National Aeronautics and Space Administration. | ||
744 | 6 | # All Rights Reserved. | ||
745 | 7 | # | ||
746 | 8 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
747 | 9 | # not use this file except in compliance with the License. You may obtain | ||
748 | 10 | # a copy of the License at | ||
749 | 11 | # | ||
750 | 12 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
751 | 13 | # | ||
752 | 14 | # Unless required by applicable law or agreed to in writing, software | ||
753 | 15 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
754 | 16 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
755 | 17 | # License for the specific language governing permissions and limitations | ||
756 | 18 | # under the License. | ||
757 | 19 | |||
758 | 20 | """ | ||
759 | 21 | Scheduler Service | ||
760 | 22 | """ | ||
761 | 23 | |||
762 | 24 | import functools | ||
763 | 25 | import datetime | ||
764 | 26 | |||
765 | 27 | from nova import context | ||
766 | 28 | from nova import db | ||
767 | 29 | from nova import flags | ||
768 | 30 | from nova import log as logging | ||
769 | 31 | from nova import manager | ||
770 | 32 | from nova import rpc | ||
771 | 33 | from nova import utils | ||
772 | 34 | from nova.scheduler import manager | ||
773 | 35 | from nova.scheduler.attestation import service | ||
774 | 36 | |||
775 | 37 | LOG = logging.getLogger('nova.scheduler.manager_integrity') | ||
776 | 38 | FLAGS = flags.FLAGS | ||
777 | 39 | flags.DEFINE_bool('integrity_reconfirm', False, | ||
778 | 40 | 'real-time reconfirm integrity') | ||
779 | 41 | flags.DEFINE_bool('trusted_pool_debug', False, | ||
780 | 42 | 'work-around before attestation server is ready') | ||
781 | 43 | |||
782 | 44 | class IntegrityService(object): | ||
783 | 45 | """ | ||
784 | 46 | Trusted computing pool service - filters and provide cache for | ||
785 | 47 | compute nodes' trustworthiness through attestation | ||
786 | 48 | """ | ||
787 | 49 | |||
788 | 50 | def __init__(self, zone_manager): | ||
789 | 51 | self.zone_manager = zone_manager | ||
790 | 52 | self.zone_manager.inflight_hosts = {} #{host:utc (for testing)} | ||
791 | 53 | self.zone_manager.tick_snapshots = {} #{host:{count:tick, update:UTC}} | ||
792 | 54 | self.zone_manager.trust_cache = {} #{host:trust_lvl} | ||
793 | 55 | self.zone_manager.attestation_service = AttestationService() | ||
794 | 56 | |||
795 | 57 | def snapshot_update(self, host, service): | ||
796 | 58 | """ | ||
797 | 59 | update the latest compute node report status into the last snapshots | ||
798 | 60 | tick_snapshots = {host:{'count':report_count,'updated_at':updated_at}} | ||
799 | 61 | tick_snapshot{} is used for checking if a node has been rebooted since | ||
800 | 62 | """ | ||
801 | 63 | updated_at = service['updated_at'] or service['created_at'] | ||
802 | 64 | self.zone_manager.tick_snapshots[host] = {'updated_at': updated_at, | ||
803 | 65 | 'report_count': service['report_count']} | ||
804 | 66 | |||
805 | 67 | def _do_attestation(self, method, hosts): | ||
806 | 68 | """expect integrity_report {'host':{trust_lvl:lvl, vtime:UTC}}""" | ||
807 | 69 | return self.zone_manager.attestation_service.do_attestation( | ||
808 | 70 | method, hosts) | ||
809 | 71 | |||
810 | 72 | def cache_states_update(self, integrity_report): | ||
811 | 73 | """called right after do_attestation""" | ||
812 | 74 | for host, state in integrity_report.iteritems(): | ||
813 | 75 | self.zone_manager.trust_cache[host] = state['trust_lvl'] | ||
814 | 76 | self.zone_manager.update_service_capabilities( | ||
815 | 77 | 'trust_state', host, state) | ||
816 | 78 | del self.zone_manager.inflight_hosts[host] | ||
817 | 79 | |||
818 | 80 | def post_hosts(self, hosts): | ||
819 | 81 | """POST a list of hosts to the attestation server""" | ||
820 | 82 | lists = [] | ||
821 | 83 | now = utils.utcnow() | ||
822 | 84 | for host in hosts: | ||
823 | 85 | self.zone_manager.inflight_hosts[host] = now | ||
824 | 86 | lists.append(host) | ||
825 | 87 | if lists: | ||
826 | 88 | self._do_attestation('POST_HOSTS', lists) | ||
828 | 90 | |||
829 | 91 | def get_hosts(self, context=None): | ||
830 | 92 | """GET previously posted attestation results. | ||
831 | 93 | TODO: break hosts into controllable-sized batches for GET""" | ||
832 | 94 | hosts = [] | ||
833 | 95 | for host, time in self.zone_manager.inflight_hosts.iteritems(): | ||
834 | 96 | hosts.append(host) | ||
835 | 97 | # expect integrity_report {'host':{Trust_lvl:lvl, vtime:UTC}} | ||
836 | 98 | if hosts: | ||
837 | 99 | integrity_report = self._do_attestation('GET_HOSTS', hosts) | ||
838 | 100 | self.cache_states_update(integrity_report) | ||
839 | 101 | |||
840 | 102 | def post_get_host(self, host, context=None): | ||
841 | 103 | """POST and GET for a host's integrity report""" | ||
842 | 104 | # expect integrity_report {'host':{Trust_lvl:lvl, vtime:UTC}} | ||
843 | 105 | integrity_report = self._do_attestation('POLL_HOSTS', host) | ||
844 | 106 | self.cache_states_update(integrity_report) | ||
845 | 107 | |||
846 | 108 | def service_is_down(self, service): | ||
847 | 109 | """check if a compute node has missed a heartbeat since its last update""" | ||
848 | 110 | heartbeat = service['updated_at'] or service['created_at'] | ||
849 | 111 | now = utils.utcnow() | ||
850 | 112 | elapsed = now - heartbeat | ||
851 | 113 | return elapsed >= datetime.timedelta(seconds=FLAGS.service_down_time) | ||
852 | 114 | |||
853 | 115 | def host_is_rebooted(self, service, snapshot): | ||
854 | 116 | """ | ||
855 | 117 | check if a host was rebooted (report_count is discontinuous); | ||
856 | 118 | this should only be called after service_is_down() has been checked | ||
857 | 119 | """ | ||
858 | 120 | e_tick = service['report_count'] - snapshot['report_count'] | ||
859 | 121 | e_time = service['updated_at'] - snapshot['updated_at'] | ||
860 | 122 | unit = datetime.timedelta(seconds=FLAGS.report_interval) | ||
862 | 124 | return not e_tick * unit <= e_time <= (e_tick + 1) * unit | ||
863 | 125 | |||
864 | 126 | def cached_data_removal(self, host, service): | ||
865 | 127 | if host in self.zone_manager.inflight_hosts: | ||
866 | 128 | del self.zone_manager.inflight_hosts[host] | ||
868 | 130 | self.zone_manager.update_service_capabilities('trust_state', host, {}) | ||
869 | 131 | |||
870 | 132 | def collect_new_hosts(self, context): | ||
871 | 133 | """locate hosts that were rebooted in the last FLAGS.report_interval""" | ||
872 | 134 | services = db.service_get_all_by_topic(context, 'compute') | ||
873 | 135 | hosts = [] | ||
874 | 136 | for service in services: | ||
875 | 137 | snapshot = self.zone_manager.tick_snapshots.get(service.host, {}) | ||
876 | 138 | if self.service_is_down(service): | ||
877 | 139 | if snapshot: | ||
878 | 140 | del self.zone_manager.tick_snapshots[service.host] | ||
879 | 141 | self.cached_data_removal(service.host, service) | ||
880 | 142 | else: | ||
881 | 143 | if not snapshot or self.host_is_rebooted(service, snapshot): | ||
882 | 144 | self.cached_data_removal(service.host, service) | ||
883 | 145 | hosts.append(service.host) | ||
884 | 146 | self.snapshot_update(service.host, service) | ||
885 | 147 | return hosts | ||
886 | 148 | |||
887 | 149 | def cache_setup(self, context=None): | ||
888 | 150 | """ | ||
889 | 151 | Populate service_states{} with cached integrity data | ||
890 | 152 | runs as a periodic task from the scheduler every FLAGS.report_interval | ||
891 | 153 | """ | ||
892 | 154 | self.get_hosts(context) | ||
893 | 155 | hosts = self.collect_new_hosts(context) | ||
894 | 156 | self.post_hosts(hosts) | ||
895 | 157 | |||
896 | 158 | def reconfirm(self, lists): | ||
897 | 159 | """ | ||
898 | 160 | decide whether a last-minute verification is needed: | ||
899 | 161 | check if any candidate host was rebooted since the last snapshot | ||
900 | 162 | """ | ||
901 | 163 | if FLAGS.integrity_reconfirm: | ||
902 | 164 | hosts = [] | ||
903 | 165 | for host, service in lists: | ||
904 | 166 | hosts.append(host) | ||
905 | 167 | integrity_report = self._do_attestation('POLL_HOSTS', hosts) | ||
906 | 168 | self.cache_states_update(integrity_report) | ||
907 | 169 | hosts = [] | ||
908 | 170 | for host, service in lists: | ||
909 | 171 | lvl = service['trust_state'].get('trust_lvl', {}) | ||
910 | 172 | if integrity_report[host].get('trust_lvl', {}) == lvl: | ||
911 | 173 | hosts.append((host, service)) | ||
912 | 174 | else: | ||
913 | 175 | LOG.warn(_("mis-matched %(host)s trust_lvl %(lvl)s" | ||
914 | 176 | % locals())) | ||
915 | 177 | return hosts | ||
916 | 178 | |||
917 | 179 | # TODO: need a better mechanism | ||
918 | 180 | # collect all good hosts | ||
919 | 181 | ctx = context.get_admin_context() | ||
920 | 182 | services = db.service_get_all_by_topic(ctx, 'compute') | ||
921 | 183 | hosts_good = {} | ||
922 | 184 | for service in services: | ||
923 | 185 | if not self.service_is_down(service): | ||
924 | 186 | snapshot = self.zone_manager.tick_snapshots.get(service.host, {}) | ||
925 | 187 | if snapshot and not self.host_is_rebooted(service, snapshot): | ||
926 | 188 | hosts_good[service.host] = 'v' | ||
927 | 189 | |||
928 | 190 | hosts = [] | ||
929 | 191 | for host, service in lists: | ||
930 | 192 | if hosts_good.get(host, None): | ||
931 | 193 | lvl = service['trust_state'].get('trust_lvl', {}) | ||
932 | 194 | if self.zone_manager.trust_cache[host] == lvl: | ||
933 | 195 | hosts.append((host, service)) | ||
934 | 196 | return hosts | ||
935 | 197 | |||
936 | 198 | |||
937 | 199 | class AttestationService(object): | ||
938 | 200 | def __init__(self): | ||
939 | 201 | if FLAGS.trusted_pool_debug: | ||
940 | 202 | # This branch is only used by unit tests: it fakes | ||
941 | 203 | # the attestation server's trust database locally | ||
942 | 204 | # instead of contacting a real server. | ||
943 | 205 | utc = utils.utcnow() - datetime.timedelta(0) | ||
944 | 206 | self.tlvl_db = {} | ||
945 | 207 | # tlvl_string = ['trusted', 'untrusted'] | ||
946 | 208 | # tlvl_value = ['lvl10', 'lvl01'] | ||
947 | 209 | # must match with host_filter_integrity.py | ||
948 | 210 | self.tlvl_db['storky'] = {'trust_lvl': 'lvl10', 'vtime': utc} | ||
949 | 211 | self.tlvl_db['stocky'] = {'trust_lvl': 'lvl01', 'vtime': utc} | ||
950 | 212 | else: | ||
951 | 213 | self.attestation = service.AttestationService() | ||
952 | 214 | |||
953 | 215 | def do_attestation(self, cmd, hosts): | ||
954 | 216 | # TODO: final code must do tlvl_string to tlvl_value conversion | ||
955 | 217 | if FLAGS.trusted_pool_debug is True: | ||
956 | 218 | report = {} | ||
957 | 219 | utc = utils.utcnow() - datetime.timedelta(0) | ||
958 | 220 | for host in hosts: | ||
959 | 221 | lvl = self.tlvl_db[host].get('trust_lvl', {}) | ||
960 | 222 | report[host] = {'trust_lvl': lvl, 'vtime': utc} | ||
961 | 223 | return report | ||
962 | 224 | else: | ||
963 | 225 | return self.attestation.do_attestation(cmd, hosts) | ||
964 | 226 | |||
965 | 227 | class SchedulerManager(manager.SchedulerManager): | ||
966 | 228 | """inject a periodic task into the scheduler for building the integrity cache""" | ||
967 | 229 | def __init__(self, scheduler_driver=None, *args, **kwargs): | ||
968 | 230 | super(SchedulerManager, self).__init__(scheduler_driver, *args, **kwargs) | ||
969 | 231 | self.zone_manager.integrity_service = IntegrityService(self.zone_manager) | ||
970 | 232 | |||
971 | 233 | def setup_test_attestation(self, module): | ||
972 | 234 | """For testing purposes only.""" | ||
973 | 235 | self.zone_manager.integrity_service._do_attestation = module | ||
974 | 236 | |||
975 | 237 | def periodic_tasks(self, context=None): | ||
976 | 238 | """Run base periodic tasks, then set up the integrity cache for this zone.""" | ||
977 | 239 | super(SchedulerManager, self).periodic_tasks(context) | ||
978 | 240 | self.zone_manager.integrity_service.cache_setup(context) | ||
979 | 0 | 241 | ||
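The reboot-detection heuristic in `host_is_rebooted` above merits a worked example: a node is considered rebooted when the elapsed wall time since the snapshot no longer brackets the number of heartbeats it has reported, which happens when `report_count` resets. This is a self-contained sketch, where `REPORT_INTERVAL` stands in for `FLAGS.report_interval` and plain dicts stand in for the DB service rows:

```python
import datetime

REPORT_INTERVAL = 10  # seconds; stands in for FLAGS.report_interval

def host_is_rebooted(service, snapshot):
    """True if the heartbeat counter is inconsistent with elapsed time."""
    e_tick = service['report_count'] - snapshot['report_count']
    e_time = service['updated_at'] - snapshot['updated_at']
    unit = datetime.timedelta(seconds=REPORT_INTERVAL)
    # e_tick heartbeats should have taken between e_tick and
    # e_tick + 1 report intervals of wall time.
    return not e_tick * unit <= e_time <= (e_tick + 1) * unit

now = datetime.datetime(2011, 9, 20, 12, 0, 0)
snap = {'report_count': 100, 'updated_at': now}
ok = {'report_count': 103, 'updated_at': now + datetime.timedelta(seconds=31)}
rebooted = {'report_count': 1, 'updated_at': now + datetime.timedelta(seconds=31)}
print(host_is_rebooted(ok, snap))        # False: 3 ticks fit 31 seconds
print(host_is_rebooted(rebooted, snap))  # True: the counter reset
```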
980 | === added file 'nova/tests/scheduler/test_json_filter_integrity.py' | |||
981 | --- nova/tests/scheduler/test_json_filter_integrity.py 1970-01-01 00:00:00 +0000 | |||
982 | +++ nova/tests/scheduler/test_json_filter_integrity.py 2011-09-20 01:13:28 +0000 | |||
983 | @@ -0,0 +1,251 @@ | |||
984 | 1 | # Copyright 2011 OpenStack LLC. | ||
985 | 2 | # All Rights Reserved. | ||
986 | 3 | # | ||
987 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
988 | 5 | # not use this file except in compliance with the License. You may obtain | ||
989 | 6 | # a copy of the License at | ||
990 | 7 | # | ||
991 | 8 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
992 | 9 | # | ||
993 | 10 | # Unless required by applicable law or agreed to in writing, software | ||
994 | 11 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
995 | 12 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
996 | 13 | # License for the specific language governing permissions and limitations | ||
997 | 14 | # under the License. | ||
998 | 15 | """ | ||
999 | 16 | Tests For Scheduler Host Filters. | ||
1000 | 17 | """ | ||
1001 | 18 | |||
1002 | 19 | import json | ||
1003 | 20 | |||
1004 | 21 | import time | ||
1005 | 22 | import datetime | ||
1006 | 23 | from nova import utils | ||
1007 | 24 | from nova import exception | ||
1008 | 25 | from nova import flags | ||
1009 | 26 | from nova import test | ||
1010 | 27 | from nova.scheduler.filters import json_filter | ||
1011 | 28 | from nova.scheduler.filters import json_filter_integrity | ||
1012 | 29 | from nova import log as logging | ||
1013 | 30 | |||
1014 | 31 | FLAGS = flags.FLAGS | ||
1015 | 32 | |||
1016 | 33 | |||
1017 | 34 | class FakeZoneManager: | ||
1018 | 35 | pass | ||
1019 | 36 | |||
1020 | 37 | |||
1021 | 38 | class HostFilterTestCase(test.TestCase): | ||
1022 | 39 | """Test case for host filters.""" | ||
1023 | 40 | |||
1024 | 41 | def _host_caps(self, multiplier): | ||
1025 | 42 | # Returns host capabilities in the following way: | ||
1026 | 43 | # host1 = memory:free 10 (100max) | ||
1027 | 44 | # disk:available 100 (1000max) | ||
1028 | 45 | # hostN = memory:free 10 + 10N | ||
1029 | 46 | # disk:available 100 + 100N | ||
1030 | 47 | # in other words: hostN has more resources than host0 | ||
1031 | 48 | # which means ... don't go above 10 hosts. | ||
1032 | 49 | return {'host_name-description': 'XenServer %s' % multiplier, | ||
1033 | 50 | 'host_hostname': 'xs-%s' % multiplier, | ||
1034 | 51 | 'host_memory_total': 100, | ||
1035 | 52 | 'host_memory_overhead': 10, | ||
1036 | 53 | 'host_memory_free': 10 + multiplier * 10, | ||
1037 | 54 | 'host_memory_free-computed': 10 + multiplier * 10, | ||
1038 | 55 | 'host_other-config': {}, | ||
1039 | 56 | 'host_ip_address': '192.168.1.%d' % (100 + multiplier), | ||
1040 | 57 | 'host_cpu_info': {}, | ||
1041 | 58 | 'disk_available': 100 + multiplier * 100, | ||
1042 | 59 | 'disk_total': 1000, | ||
1043 | 60 | 'disk_used': 0, | ||
1044 | 61 | 'host_uuid': 'xxx-%d' % multiplier, | ||
1045 | 62 | 'host_name-label': 'xs-%s' % multiplier} | ||
1046 | 63 | |||
1047 | 64 | def setUp(self): | ||
1048 | 65 | self.old_flag = FLAGS.default_host_filter | ||
1049 | 66 | FLAGS.default_host_filter = \ | ||
1050 | 67 | 'nova.scheduler.filters.json_filter_integrity.JsonFilterIntegrity' | ||
1051 | 68 | self.instance_type = dict(name='tiny', | ||
1052 | 69 | memory_mb=50, | ||
1053 | 70 | vcpus=10, | ||
1054 | 71 | local_gb=500, | ||
1055 | 72 | flavorid=1, | ||
1056 | 73 | swap=500, | ||
1057 | 74 | rxtx_quota=30000, | ||
1058 | 75 | rxtx_cap=200, | ||
1059 | 76 | extra_specs={}) | ||
1060 | 77 | |||
1061 | 78 | self.zone_manager = FakeZoneManager() | ||
1062 | 79 | states = {} | ||
1063 | 80 | now = utils.utcnow() - datetime.timedelta(0) | ||
1064 | 81 | for x in xrange(10): | ||
1065 | 82 | state = {} | ||
1066 | 83 | if x in [0, 2, 4, 6, 9]: | ||
1067 | 84 | lvl = 'lvl10' | ||
1068 | 85 | state = {'compute': self._host_caps(x), | ||
1069 | 86 | 'trust_state': {'trust_lvl': lvl, 'vtime': now}} | ||
1070 | 87 | elif x in [1, 3, 5, 8]: | ||
1071 | 88 | lvl = 'lvl01' | ||
1072 | 89 | state = {'compute': self._host_caps(x), | ||
1073 | 90 | 'trust_state': {'trust_lvl': lvl, 'vtime': now}} | ||
1074 | 91 | else: | ||
1075 | 92 | lvl = 'inflight' | ||
1076 | 93 | state = {'compute': self._host_caps(x)} | ||
1077 | 94 | states['host%02d' % (x)] = state | ||
1078 | 95 | self.zone_manager.service_states = states | ||
1079 | 96 | |||
1080 | 97 | def tearDown(self): | ||
1081 | 98 | FLAGS.default_host_filter = self.old_flag | ||
1082 | 99 | |||
1083 | 100 | def dummy_reconfirm(self, zone_manager, hosts): | ||
1084 | 101 | return hosts | ||
1085 | 102 | |||
1086 | 103 | def test_json_filter_integrity(self): | ||
1087 | 104 | hf = json_filter_integrity.JsonFilterIntegrity() | ||
1089 | 106 | hf._trust_pool_reconfirm = self.dummy_reconfirm | ||
1090 | 107 | # filter all hosts that can support 50 ram and 500 disk | ||
1091 | 108 | |||
1092 | 109 | name, cooked = hf.instance_type_to_filter(self.instance_type) | ||
1093 | 110 | self.assertEquals('nova.scheduler.filters.json_filter_integrity.JsonFilterIntegrity', | ||
1094 | 111 | name) | ||
1095 | 112 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1096 | 113 | self.assertEquals(2, len(hosts)) | ||
1097 | 114 | just_hosts = [host for host, caps in hosts] | ||
1098 | 115 | just_hosts.sort() | ||
1099 | 116 | self.assertEquals('host05', just_hosts[0]) | ||
1100 | 117 | self.assertEquals('host08', just_hosts[1]) | ||
1101 | 118 | |||
1102 | 119 | raw = [] | ||
1103 | 120 | cooked = json.dumps(raw) | ||
1104 | 121 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1105 | 122 | self.assertEquals(4, len(hosts)) | ||
1106 | 123 | |||
1107 | 124 | raw = ['>=', '$compute.host_memory_free', 20] | ||
1108 | 125 | cooked = json.dumps(raw) | ||
1109 | 126 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1110 | 127 | self.assertEquals(4, len(hosts)) | ||
1111 | 128 | |||
1112 | 129 | raw = ['>=', '$trust_state.trust_lvl', 'trusted'] | ||
1113 | 130 | cooked = json.dumps(raw) | ||
1114 | 131 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1115 | 132 | self.assertEquals(5, len(hosts)) | ||
1116 | 133 | |||
1117 | 134 | raw = ['<', '$trust_state.trust_lvl', 'untrusted'] | ||
1118 | 135 | cooked = json.dumps(raw) | ||
1119 | 136 | try: | ||
1120 | 137 | hf.filter_hosts(self.zone_manager, cooked) | ||
1121 | 138 | self.fail("UsageError") | ||
1122 | 139 | except exception.Error, e: | ||
1123 | 140 | pass | ||
1124 | 141 | # Try some custom queries | ||
1125 | 142 | |||
1126 | 143 | raw = ['or', | ||
1127 | 144 | ['and', | ||
1128 | 145 | ['<', '$compute.host_memory_free', 30], | ||
1129 | 146 | ['<', '$compute.disk_available', 300], | ||
1130 | 147 | ], | ||
1131 | 148 | ['and', | ||
1132 | 149 | ['>', '$compute.host_memory_free', 70], | ||
1133 | 150 | ['>', '$compute.disk_available', 700], | ||
1134 | 151 | ] | ||
1135 | 152 | ] | ||
1136 | 153 | cooked = json.dumps(raw) | ||
1137 | 154 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1138 | 155 | self.assertEquals(2, len(hosts)) | ||
1139 | 156 | |||
1140 | 157 | raw = ['or', | ||
1141 | 158 | ['and', | ||
1142 | 159 | ['<', '$compute.host_memory_free', 30], | ||
1143 | 160 | ['<', '$compute.disk_available', 300], | ||
1144 | 161 | ], | ||
1145 | 162 | ['and', | ||
1146 | 163 | ['>', '$compute.host_memory_free', 60], | ||
1147 | 164 | ['>', '$compute.disk_available', 600], | ||
1148 | 165 | ['>=', '$trust_state.trust_lvl', 'trusted'] | ||
1149 | 166 | ] | ||
1150 | 167 | ] | ||
1151 | 168 | cooked = json.dumps(raw) | ||
1152 | 169 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1153 | 170 | self.assertEquals(4, len(hosts)) | ||
1154 | 171 | |||
1155 | 172 | raw = ['=', '$trust_state.trust_lvl', 'trusted'] | ||
1156 | 173 | |||
1157 | 174 | cooked = json.dumps(raw) | ||
1158 | 175 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1159 | 176 | |||
1160 | 177 | self.assertEquals(5, len(hosts)) | ||
1161 | 178 | just_hosts = [host for host, caps in hosts] | ||
1162 | 179 | just_hosts.sort() | ||
1163 | 180 | for index, host in zip([0, 2, 4, 6, 9], just_hosts): | ||
1164 | 181 | self.assertEquals('host%02d' % index, host) | ||
1165 | 182 | |||
1166 | 183 | raw = ['and', | ||
1167 | 184 | ['>=', '$compute.host_memory_free', 30], | ||
1168 | 185 | ] | ||
1169 | 186 | cooked = json.dumps(raw) | ||
1170 | 187 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1171 | 188 | |||
1172 | 189 | self.assertEquals(3, len(hosts)) | ||
1173 | 190 | just_hosts = [host for host, caps in hosts] | ||
1174 | 191 | just_hosts.sort() | ||
1175 | 192 | for index, host in zip([3, 5, 8], just_hosts): | ||
1176 | 193 | self.assertEquals('host%02d' % index, host) | ||
1177 | 194 | |||
1178 | 195 | raw = ['and', | ||
1179 | 196 | ['>=', '$compute.host_memory_free', 30], | ||
1180 | 197 | ['=', '$trust_state.trust_lvl', 'Untrusted'] | ||
1181 | 198 | ] | ||
1182 | 199 | cooked = json.dumps(raw) | ||
1183 | 200 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1184 | 201 | |||
1185 | 202 | self.assertEquals(3, len(hosts)) | ||
1186 | 203 | just_hosts = [host for host, caps in hosts] | ||
1187 | 204 | just_hosts.sort() | ||
1188 | 205 | for index, host in zip([3, 5, 8], just_hosts): | ||
1189 | 206 | self.assertEquals('host%02d' % index, host) | ||
1190 | 207 | |||
1191 | 208 | raw = ['and', | ||
1192 | 209 | ['>=', '$trust_state.trust_lvl', 'trusted'], | ||
1193 | 210 | ['>=', '$compute.host_memory_free', 30] | ||
1194 | 211 | ] | ||
1195 | 212 | cooked = json.dumps(raw) | ||
1196 | 213 | hosts = hf.filter_hosts(self.zone_manager, cooked) | ||
1197 | 214 | |||
1198 | 215 | self.assertEquals(4, len(hosts)) | ||
1199 | 216 | just_hosts = [host for host, caps in hosts] | ||
1200 | 217 | just_hosts.sort() | ||
1201 | 218 | for index, host in zip([2, 4, 6, 9], just_hosts): | ||
1202 | 219 | self.assertEquals('host%02d' % index, host) | ||
1203 | 220 | |||
1204 | 221 | # Try some bogus input ... | ||
1205 | 222 | raw = ['unknown command', ] | ||
1206 | 223 | cooked = json.dumps(raw) | ||
1207 | 224 | try: | ||
1208 | 225 | hf.filter_hosts(self.zone_manager, cooked) | ||
1209 | 226 | self.fail("Should give KeyError") | ||
1210 | 227 | except KeyError, e: | ||
1211 | 228 | pass | ||
1212 | 229 | |||
1213 | 230 | self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps([]))) | ||
1214 | 231 | self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps({}))) | ||
1215 | 232 | self.assertTrue(hf.filter_hosts(self.zone_manager, json.dumps( | ||
1216 | 233 | ['not', True, False, True, False]))) | ||
1217 | 234 | |||
1218 | 235 | try: | ||
1219 | 236 | hf.filter_hosts(self.zone_manager, json.dumps( | ||
1220 | 237 | 'not', True, False, True, False)) | ||
1221 | 238 | self.fail("Should give KeyError") | ||
1222 | 239 | except KeyError, e: | ||
1223 | 240 | pass | ||
1224 | 241 | |||
1225 | 242 | self.assertFalse(hf.filter_hosts(self.zone_manager, | ||
1226 | 243 | json.dumps(['=', '$foo', 100]))) | ||
1227 | 244 | self.assertFalse(hf.filter_hosts(self.zone_manager, | ||
1228 | 245 | json.dumps(['=', '$.....', 100]))) | ||
1229 | 246 | self.assertFalse(hf.filter_hosts(self.zone_manager, | ||
1230 | 247 | json.dumps( | ||
1231 | 248 | ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]]))) | ||
1232 | 249 | |||
1233 | 250 | self.assertFalse(hf.filter_hosts(self.zone_manager, | ||
1234 | 251 | json.dumps(['=', {}, ['>', '$missing....foo']]))) | ||
1235 | 0 | 252 | ||
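As Johannes Erdfelt's review notes, the parallel `tlvl_string`/`tlvl_value` lists are only ever combined into a dict at runtime with `dict(zip(...))`, so a dict literal would be cleaner. A sketch of that refactor (the `TRUST_LEVELS` name is an assumption, not the proposed code):

```python
# A single dict literal replaces the parallel lists
# tlvl_string = ['trusted', 'untrusted'] and
# tlvl_value = ['lvl10', 'lvl01'].
TRUST_LEVELS = {
    'trusted': 'lvl10',
    'untrusted': 'lvl01',
}

# Equivalent to the runtime dict(zip(tlvl_string, tlvl_value)).
assert TRUST_LEVELS == dict(zip(['trusted', 'untrusted'],
                                ['lvl10', 'lvl01']))
print(TRUST_LEVELS.get('untrusted'))  # lvl01
```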
1236 | === added file 'nova/tests/scheduler/test_manager_integrity.py' | |||
1237 | --- nova/tests/scheduler/test_manager_integrity.py 1970-01-01 00:00:00 +0000 | |||
1238 | +++ nova/tests/scheduler/test_manager_integrity.py 2011-09-20 01:13:28 +0000 | |||
1239 | @@ -0,0 +1,202 @@ | |||
1240 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
1241 | 2 | |||
1242 | 3 | # Copyright 2010 United States Government as represented by the | ||
1243 | 4 | # Administrator of the National Aeronautics and Space Administration. | ||
1244 | 5 | # All Rights Reserved. | ||
1245 | 6 | # | ||
1246 | 7 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
1247 | 8 | # not use this file except in compliance with the License. You may obtain | ||
1248 | 9 | # a copy of the License at | ||
1249 | 10 | # | ||
1250 | 11 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
1251 | 12 | # | ||
1252 | 13 | # Unless required by applicable law or agreed to in writing, software | ||
1253 | 14 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
1254 | 15 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
1255 | 16 | # License for the specific language governing permissions and limitations | ||
1256 | 17 | # under the License. | ||
1257 | 18 | """ | ||
1258 | 19 | Tests For Scheduler | ||
1259 | 20 | """ | ||
1260 | 21 |
1261 | 22 | import time
1262 | 23 | import datetime
1263 | 24 | import mox
1264 | 25 | import novaclient.exceptions
1265 | 26 | import stubout
1266 | 27 | import webob
1267 | 28 |
1268 | 29 | from mox import IgnoreArg
1269 | 30 | from nova import context
1270 | 31 | from nova import db
1271 | 32 | from nova import exception
1272 | 33 | from nova import flags
1273 | 34 | from nova import service
1274 | 35 | from nova import test
1275 | 36 | from nova import rpc
1276 | 37 | from nova import utils
1277 | 38 | from nova.auth import manager as auth_manager
1278 | 39 | from nova.scheduler import api
1279 | 40 | from nova.scheduler import manager
1280 | 41 | from nova.scheduler import manager_integrity
1281 | 42 | from nova.scheduler import driver
1282 | 43 | from nova.compute import power_state
1283 | 44 | from nova.db.sqlalchemy import models
1284 | 45 | from nova import log as logging
1285 | 46 |
1286 | 47 |
1287 | 48 | FLAGS = flags.FLAGS
1288 | 49 |
1289 | 50 | class TestDriver(driver.Scheduler):
1290 | 51 |     """Scheduler driver for tests."""
1291 | 52 |     def schedule(self, context, topic, *args, **kwargs):
1292 | 53 |         return 'fallback_host'
1293 | 54 |
1294 | 55 |     def schedule_named_method(self, context, topic, num):
1295 | 56 |         return 'named_host'
1296 | 57 |
1297 | 58 |
1298 | 59 | class SchedulerTestCase(test.TestCase):
1299 | 60 |     """Test case for the trusted-pool scheduler."""
1300 | 61 |     def setUp(self):
1301 | 62 |         super(SchedulerTestCase, self).setUp()
1302 | 63 |         driver = 'nova.tests.scheduler.test_manager_integrity.TestDriver'
1303 | 64 |         self.flags(scheduler_driver=driver)
1304 | 65 |
1305 | 66 |         self.old_interval_flag = FLAGS.report_interval
1306 | 67 |         FLAGS.report_interval = 2
1307 | 68 |         self.old_down_flag = FLAGS.service_down_time
1308 | 69 |         FLAGS.service_down_time = 4
1309 | 70 |
1310 | 71 |
1311 | 72 |     def tearDown(self):
1312 | 73 |         self.stubs.UnsetAll()
1313 | 74 |         FLAGS.report_interval = self.old_interval_flag
1314 | 75 |         FLAGS.service_down_time = self.old_down_flag
1315 | 76 |         super(SchedulerTestCase, self).tearDown()
1316 | 77 |
1317 | 78 |     def _create_ncompute_service(self, n):
1318 | 79 |         """Create n compute managers (ComputeNode and Service records)."""
1319 | 80 |         self.hosts = n
1320 | 81 |         ctxt = context.get_admin_context()
1321 | 82 |         now = utils.utcnow()
1322 | 83 |         dic1 = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute',
1323 | 84 |                 'report_count': 0, 'updated_at': 0, 'created_at': 0,
1324 | 85 |                 'availability_zone': 'dummyzone'}
1325 | 86 |
1326 | 87 |         dic2 = {'service_id': 0,
1327 | 88 |                 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100,
1328 | 89 |                 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10,
1329 | 90 |                 'hypervisor_type': 'qemu', 'hypervisor_version': 12003,
1330 | 91 |                 'cpu_info': ''}
1331 | 92 |
1332 | 93 |         for idx in range(n):
1333 | 94 |             dic1['host'] = 'host%02d' % idx
1334 | 95 |             dic1['report_count'] = idx * 2
1335 | 96 |             dic1['updated_at'] = now
1336 | 97 |             dic1['created_at'] = now
1337 | 98 |             s_ref = db.service_create(ctxt, dic1)
1338 | 99 |             dic2['service_id'] = s_ref['id']
1339 | 100 |             db.compute_node_create(ctxt, dic2)
1340 | 101 |
1341 | 102 |     def _create_integrity_db(self, n):
1342 | 103 |         """Seed fake attestation data: odd hosts lvl10, even hosts lvl00."""
1343 | 104 |         utc = utils.utcnow()
1344 | 105 |         states = {}
1345 | 106 |         for idx in range(n):
1346 | 107 |             host = 'host%02d' % idx
1347 | 108 |             if idx % 2:
1348 | 109 |                 lvl = 'lvl10'
1349 | 110 |             else:
1350 | 111 |                 lvl = 'lvl00'
1351 | 112 |             states[host] = {'trust_lvl': lvl, 'vTime': utc}
1352 | 113 |         self.integrity_db = states
1353 | 114 |
1354 | 115 |     def _create_compute_service(self):
1355 | 116 |         """Create compute-manager (ComputeNode and Service record)."""
1356 | 117 |         ctxt = context.get_admin_context()
1357 | 118 |         dic = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute',
1358 | 119 |                'report_count': 0, 'availability_zone': 'dummyzone'}
1359 | 120 |         s_ref = db.service_create(ctxt, dic)
1360 | 121 |
1361 | 122 |         dic = {'service_id': s_ref['id'],
1362 | 123 |                'vcpus': 16, 'memory_mb': 32, 'local_gb': 100,
1363 | 124 |                'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10,
1364 | 125 |                'hypervisor_type': 'qemu', 'hypervisor_version': 12003,
1365 | 126 |                'cpu_info': ''}
1366 | 127 |         db.compute_node_create(ctxt, dic)
1367 | 128 |
1368 | 129 |         return db.service_get(ctxt, s_ref['id'])
1369 | 130 |
1369 | 130 | |||
1370 | 131 |     def _create_db(self, ctxt, scheduler, n):
1371 | 132 |         self._create_ncompute_service(n)
1372 | 133 |
1373 | 134 |         # Debug helper, disabled:
1374 | 135 |         # services = db.service_get_all_by_topic(ctxt, 'compute')
1375 | 136 |         # for service in services:
1376 | 137 |         #     logging.debug(_("Host %s"), service.host)
1377 | 138 |         #     logging.debug(_("Services %s"), service.report_count)
1378 | 139 |         #     logging.debug(_("Updated %s"), service.updated_at)
1379 | 140 |
1380 | 141 |
1381 | 142 |     def _do_attestation(self, cmd, hosts):
1382 | 143 |         utc = utils.utcnow()
1383 | 144 |         self.query = hosts
1384 | 145 |         states = self.integrity_db
1385 | 146 |         report = {}
1386 | 147 |         for host in hosts:
1387 | 148 |             lvl = states[host].get('trust_lvl')
1388 | 149 |             txt = {'trust_lvl': lvl, 'vtime': utc}
1389 | 150 |             report[host] = txt
1390 | 151 |         return report
1391 | 152 | |||
1392 | 153 |     def _update_ticks(self, ctx, skips, count, dec, down=None):
1393 | 154 |         now = utils.utcnow()
1394 | 155 |         for idx in range(self.hosts):
1395 | 156 |             if idx % skips:
1396 | 157 |                 d = 0
1397 | 158 |             else:
1398 | 159 |                 if down:
1399 | 160 |                     continue
1400 | 161 |                 d = dec
1401 | 162 |             h = 'host%02d' % idx
1402 | 163 |             ref = db.service_get_by_args(ctx, h, 'nova-compute')
1403 | 164 |             ref['updated_at'] = now
1404 | 165 |             ref['report_count'] += count - d
1405 | 166 |             db.service_update(ctx, ref['id'], ref)
1406 | 167 |
1406 | 167 | |||
1407 | 168 |     def test_trust(self):
1408 | 169 |         hosts = 10
1409 | 170 |         scheduler = manager_integrity.SchedulerManager()
1410 | 171 |         scheduler.setup_test_attestation(self._do_attestation)
1411 | 172 |         ctx = context.get_admin_context()
1412 | 173 |         self._create_db(ctx, scheduler, hosts)
1413 | 174 |         self._create_integrity_db(hosts)
1414 | 175 |         scheduler.periodic_tasks(ctx)
1415 | 176 |         self.assertEqual(10, len(self.query))
1416 | 177 |         time.sleep(FLAGS.report_interval)
1417 | 178 |
1418 | 179 |         self._update_ticks(ctx, 1, 1, 0)
1419 | 180 |         scheduler.periodic_tasks(ctx)
1420 | 181 |
1421 | 182 |         time.sleep(FLAGS.service_down_time)
1422 | 183 |         self._update_ticks(ctx, 3, 2, 2)
1423 | 184 |         scheduler.periodic_tasks(ctx)
1424 | 185 |         self.assertEqual(4, len(self.query))
1425 | 186 |         time.sleep(FLAGS.service_down_time * 2)
1426 | 187 |         down = 1
1427 | 188 |         self._update_ticks(ctx, 3, 4, 4, down)
1428 | 189 |         scheduler.periodic_tasks(ctx)
1429 | 190 |         for idx in [0, 3, 6, 9]:
1430 | 191 |             txt = scheduler.zone_manager.service_states.get('host%02d' % idx, {})
1431 | 192 |             self.assertEqual(0, len(txt))
1432 | 193 |         for idx in [1, 2, 4, 5, 7, 8]:
1433 | 194 |             host = 'host%02d' % idx
1434 | 195 |             service = scheduler.zone_manager.service_states[host]
1435 | 196 |             txt = service.get('trust_state', {})
1436 | 197 |             lvl = txt.get('trust_lvl')
1437 | 198 |             if idx % 2:
1438 | 199 |                 self.assertEqual(lvl, 'lvl10')
1439 | 200 |             else:
1440 | 201 |                 self.assertEqual(lvl, 'lvl00')
1441 | 202 |
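The core pattern in this test file is stub injection: `setup_test_attestation()` swaps the real attestation client for the test's `_do_attestation` callback, so `periodic_tasks()` can populate per-host trust state without any TPM or attestation server. The sketch below illustrates that pattern in isolation; `FakeScheduler` and `fake_attestation` are hypothetical stand-ins, not the actual `manager_integrity.SchedulerManager` API.

```python
# Minimal, self-contained sketch of the stub-injection pattern used by
# test_manager_integrity.py. Names here are illustrative only.
import datetime


class FakeScheduler(object):
    """Stand-in for the scheduler manager under test (assumption)."""

    def __init__(self):
        self.service_states = {}
        self._attest = None

    def setup_test_attestation(self, func):
        # Mirrors scheduler.setup_test_attestation(self._do_attestation):
        # the injected callback replaces the real attestation client.
        self._attest = func

    def periodic_tasks(self, hosts):
        # Poll the (fake) attestation service and cache trust state per host.
        report = self._attest('query', hosts)
        for host, state in report.items():
            self.service_states[host] = {'trust_state': state}


def fake_attestation(cmd, hosts):
    # Mirrors _do_attestation: odd-indexed hosts report 'lvl10',
    # even-indexed hosts report 'lvl00'.
    now = datetime.datetime.utcnow()
    report = {}
    for idx, host in enumerate(hosts):
        lvl = 'lvl10' if idx % 2 else 'lvl00'
        report[host] = {'trust_lvl': lvl, 'vtime': now}
    return report


scheduler = FakeScheduler()
scheduler.setup_test_attestation(fake_attestation)
scheduler.periodic_tasks(['host00', 'host01'])
```

After the poll, `scheduler.service_states['host01']['trust_state']['trust_lvl']` holds `'lvl10'`, which is the same shape the real test asserts against `zone_manager.service_states`.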
This looks really interesting! I haven't done a thorough review yet, but it looks like around line 276, there's extreme indenting. Maybe there are tabs instead of spaces?
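The `time.sleep()` calls against `FLAGS.report_interval` and `FLAGS.service_down_time` in `test_trust` exercise a heartbeat liveness rule: a compute service whose `updated_at` is too stale drops out of the attestation query set. A minimal sketch of that rule follows; the names and the exact comparison are assumptions for illustration, not Nova's actual `utils` implementation.

```python
# Hypothetical liveness check: a service counts as "up" (and worth
# re-attesting) only if its last heartbeat is recent enough.
import datetime

SERVICE_DOWN_TIME = 4  # seconds; mirrors FLAGS.service_down_time in the test


def service_is_up(updated_at, now=None):
    """Return True if the last heartbeat is within SERVICE_DOWN_TIME."""
    now = now or datetime.datetime.utcnow()
    return (now - updated_at).total_seconds() <= SERVICE_DOWN_TIME


now = datetime.datetime.utcnow()
fresh = now - datetime.timedelta(seconds=1)   # heartbeat 1s ago -> up
stale = now - datetime.timedelta(seconds=10)  # heartbeat 10s ago -> down
```

Under this rule `service_is_up(fresh, now)` is True and `service_is_up(stale, now)` is False, which matches how the test's skipped hosts (those never ticked in `_update_ticks`) end up with empty trust state.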