Merge lp:lava-dashboard/multinode into lp:lava-dashboard
Status: Merged
Approved by: Neil Williams
Approved revision: 411
Merged at revision: 415
Proposed branch: lp:lava-dashboard/multinode
Merge into: lp:lava-dashboard
Diff against target: 246 lines (+192/-4), 2 files modified:
  dashboard_app/templates/dashboard_app/_test_run_list_table.html (+6/-0)
  dashboard_app/xmlrpc.py (+186/-4)
To merge this branch: bzr merge lp:lava-dashboard/multinode
Related bugs:
Reviewer: Neil Williams (Approve)
Review via email: mp+181101@code.launchpad.net
Commit message
Description of the change
Landing MultiNode.
Handles the aggregation of MultiNode result bundles after the XMLRPC calls which are coordinated by LAVA Coordinator.
This branch applies without conflicts.
lava-dashboard will be the first shared MultiNode branch to be merged as these changes are not dependent on changes in the other shared branches.
Antonio Terceiro (terceiro) wrote:
Neil Williams (codehelp) wrote:
On Thu, 22 Aug 2013 01:27:18 -0000
Antonio Terceiro <email address hidden> wrote:
> > +    def put_pending(self, content, group_name):
> > +        try:
> > +            # add this to a list which put_group can use.
> > +            sha1 = hashlib.sha1()
> > +            sha1.update(content)
> > +            hexdigest = sha1.hexdigest()
> > +            groupfile = "/tmp/%s" % group_name
> > +            with open(groupfile, "a+") as grp_file:
> > +                grp_file.write("%s\n" % content)
> > +            return hexdigest
> > +        except Exception as e:
> > +            logging.debug("Dashboard pending submission caused an exception: %s" % e)
>
> Is there a race condition here?
Possibly, but only within that one group. The Coordinator already ensures
that put_group waits for a fixed period to allow the last put_pending
for the group to complete, but large groups could have a problem here.
(There is a reboot into the master image between the last possible sync
operation and the XMLRPC call too.)
> It's fine for two or more processes
> to append to the same file, but it's possible that depending on the
> size of the bundles and on line buffering issues the contents of
> different bundles might get intermingled. Maybe we should write each
> bundle to its own separate file, then read them all on put_group.
> Also, I miss some sort of authentication to avoid the risk of having
> attackers submitting random crap into bundle streams for multinode
> groups. I guess put_group already handles authentication because it
> uses the underlying bundle stream access control?
put_group uses authentication, yes.
I'll have a look at some of the checks in _put - what we don't want is
to use the full _put function which causes the creation and
deserialization of the pending bundle.
--
Neil Williams
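Antonio's suggested alternative (write each bundle to its own file, then read them all back in put_group) could be sketched as below. This is an illustrative sketch only, not code from the branch: the spool directory, function names, and the use of a temporary directory instead of /tmp are all assumptions made for the example.

```python
import hashlib
import json
import os
import tempfile

# Stand-in for a spool directory; the branch itself uses /tmp/<group_name>.
GROUP_ROOT = tempfile.mkdtemp()

def put_pending_sketch(content, group_name):
    # One file per bundle, named by its content SHA1, so concurrent
    # submissions can never interleave within a single file.
    sha1 = hashlib.sha1(content.encode()).hexdigest()
    group_dir = os.path.join(GROUP_ROOT, group_name)
    os.makedirs(group_dir, exist_ok=True)
    with open(os.path.join(group_dir, sha1), "w") as f:
        f.write(content)
    return sha1

def gather_pending_sketch(group_name):
    # put_group would read every per-bundle file back, one JSON document
    # each. Note the order is by filename (SHA1), not submission order.
    group_dir = os.path.join(GROUP_ROOT, group_name)
    bundles = []
    for name in sorted(os.listdir(group_dir)):
        with open(os.path.join(group_dir, name)) as f:
            bundles.append(json.load(f))
    return bundles

d1 = put_pending_sketch(json.dumps({"test_runs": [1]}), "grp")
d2 = put_pending_sketch(json.dumps({"test_runs": [2]}), "grp")
bundles = gather_pending_sketch("grp")
```

Because each writer owns its file, no locking or write-size reasoning is needed; the trade-off is that put_group must clean up a directory rather than a single file.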
411. By Neil Williams, 2013-08-23:

    Add support for checking authentication of the pending bundles by
    porting the stream check code from _put without allowing the pending
    bundle into the database before aggregation.
Neil Williams (codehelp) wrote:
Approved with update for review comments.
Preview Diff
=== modified file 'dashboard_app/templates/dashboard_app/_test_run_list_table.html'
--- dashboard_app/templates/dashboard_app/_test_run_list_table.html	2013-02-25 09:25:30 +0000
+++ dashboard_app/templates/dashboard_app/_test_run_list_table.html	2013-08-24 08:11:25 +0000
@@ -2,6 +2,7 @@
 <table class="demo_jui display" id="test_runs">
   <thead>
     <tr>
+      <th>{% trans "Device" %}</th>
       <th>{% trans "Test Run" %}</th>
       <th>{% trans "Test" %}</th>
       <th>{% trans "Passes" %}</th>
@@ -13,6 +14,11 @@
   <tbody>
   {% for test_run in test_run_list %}
     <tr>
+      {% for attribute in test_run.attributes.all %}
+        {% if attribute.name == "target" %}
+          <td>{{ attribute.value }}</td>
+        {% endif %}
+      {% endfor %}
       <td><a href="{{ test_run.get_absolute_url }}"><code>{{ test_run.test }} results<code/></a></td>
       <td>{{ test_run.test }}</td>
       <td>{{ test_run.get_summary_results.pass }}</td>
 
=== modified file 'dashboard_app/xmlrpc.py'
--- dashboard_app/xmlrpc.py	2013-05-02 10:35:29 +0000
+++ dashboard_app/xmlrpc.py	2013-08-24 08:11:25 +0000
@@ -2,7 +2,7 @@
 #
 # Author: Zygmunt Krynicki <zygmunt.krynicki@linaro.org>
 #
-# This file is part of Launch Control.
+# This file is part of LAVA Dashboard
 #
 # Launch Control is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License version 3
@@ -25,7 +25,9 @@
 import logging
 import re
 import xmlrpclib
-
+import hashlib
+import json
+import os
 from django.contrib.auth.models import User, Group
 from django.core.urlresolvers import reverse
 from django.db import IntegrityError, DatabaseError
@@ -105,9 +107,9 @@
             logging.debug("Getting bundle stream")
             bundle_stream = BundleStream.objects.accessible_by_principal(self.user).get(pathname=pathname)
         except BundleStream.DoesNotExist:
-            logging.debug("Bundle stream does not exists, aborting")
+            logging.debug("Bundle stream does not exist, aborting")
             raise xmlrpclib.Fault(errors.NOT_FOUND,
                 "Bundle stream not found")
         if not bundle_stream.can_upload(self.user):
             raise xmlrpclib.Fault(
                 errors.FORBIDDEN, "You cannot upload to this stream")
@@ -243,6 +245,186 @@
             'dashboard_app.views.redirect_to_bundle',
             kwargs={'content_sha1':bundle.content_sha1}))
 
+    def put_pending(self, content, pathname, group_name):
+        """
+        Name
+        ----
+        `put_pending` (`content`, `pathname`, `group_name`)
+
+        Description
+        -----------
+        MultiNode internal call.
+
+        Stores the bundle until the coordinator allows the complete
+        bundle list to be aggregated from the list and submitted by put_group
+
+        Arguments
+        ---------
+        `content`: string
+            Full text of the bundle. This *MUST* be a valid JSON
+            document and it *SHOULD* match the "Dashboard Bundle Format
+            1.0" schema. The SHA1 of the content *MUST* be unique or a
+            ``Fault(409, "...")`` is raised. This is used to protect
+            from simple duplicate submissions.
+        `pathname`: string
+            Pathname of the bundle stream where a new bundle should
+            be created and stored. This argument *MUST* designate a
+            pre-existing bundle stream or a ``Fault(404, "...")`` exception
+            is raised. In addition the user *MUST* have access
+            permission to upload bundles there or a ``Fault(403, "...")``
+            exception is raised. See below for access rules.
+        `group_name`: string
+            Unique ID of the MultiNode group. Other pending bundles will
+            be aggregated into a single result bundle for this group.
+
+        Return value
+        ------------
+        If all goes well this function returns the SHA1 of the content.
+
+        Exceptions raised
+        -----------------
+        404
+            Either:
+
+                - Bundle stream not found
+                - Uploading to specified stream is not permitted
+        409
+            Duplicate bundle content
+
+        Rules for bundle stream access
+        ------------------------------
+        The following rules govern bundle stream upload access rights:
+            - all anonymous streams are accessible
+            - personal streams are accessible to owners
+            - team streams are accessible to team members
+
+        """
+        try:
+            logging.debug("Getting bundle stream")
+            bundle_stream = BundleStream.objects.accessible_by_principal(self.user).get(pathname=pathname)
+        except BundleStream.DoesNotExist:
+            logging.debug("Bundle stream does not exist, aborting")
+            raise xmlrpclib.Fault(errors.NOT_FOUND,
+                "Bundle stream not found")
+        if not bundle_stream.can_upload(self.user):
+            raise xmlrpclib.Fault(
+                errors.FORBIDDEN, "You cannot upload to this stream")
+        try:
+            # add this to a list which put_group can use.
+            sha1 = hashlib.sha1()
+            sha1.update(content)
+            hexdigest = sha1.hexdigest()
+            groupfile = "/tmp/%s" % group_name
+            with open(groupfile, "a+") as grp_file:
+                grp_file.write("%s\n" % content)
+            return hexdigest
+        except Exception as e:
+            logging.debug("Dashboard pending submission caused an exception: %s" % e)
+
+    def put_group(self, content, content_filename, pathname, group_name):
+        """
+        Name
+        ----
+        `put_group` (`content`, `content_filename`, `pathname`, `group_name`)
+
+        Description
+        -----------
+        MultiNode internal call.
+
+        Adds the final bundle to the list, aggregates the list
+        into a single group bundle and submits the group bundle.
+
+        Arguments
+        ---------
+        `content`: string
+            Full text of the bundle. This *MUST* be a valid JSON
+            document and it *SHOULD* match the "Dashboard Bundle Format
+            1.0" schema. The SHA1 of the content *MUST* be unique or a
+            ``Fault(409, "...")`` is raised. This is used to protect
+            from simple duplicate submissions.
+        `content_filename`: string
+            Name of the file that contained the text of the bundle. The
+            `content_filename` can be an arbitrary string and will be
+            stored along with the content for reference.
+        `pathname`: string
+            Pathname of the bundle stream where a new bundle should
+            be created and stored. This argument *MUST* designate a
+            pre-existing bundle stream or a ``Fault(404, "...")`` exception
+            is raised. In addition the user *MUST* have access
+            permission to upload bundles there or a ``Fault(403, "...")``
+            exception is raised. See below for access rules.
+        `group_name`: string
+            Unique ID of the MultiNode group. Other pending bundles will
+            be aggregated into a single result bundle for this group. At
+            least one other bundle must have already been submitted as
+            pending for the specified MultiNode group. LAVA Coordinator
+            causes the parent job to wait until all nodes have been marked
+            as having pending bundles, even if some bundles are empty.
+
+        Return value
+        ------------
+        If all goes well this function returns the full URL of the bundle.
+
+        Exceptions raised
+        -----------------
+        ValueError:
+            One or more bundles could not be converted to JSON prior
+            to aggregation.
+        404
+            Either:
+
+                - Bundle stream not found
+                - Uploading to specified stream is not permitted
+        409
+            Duplicate bundle content
+
+        Rules for bundle stream access
+        ------------------------------
+        The following rules govern bundle stream upload access rights:
+            - all anonymous streams are accessible
+            - personal streams are accessible to owners
+            - team streams are accessible to team members
+
+        """
+        grp_file = "/tmp/%s" % group_name
+        bundle_set = {}
+        bundle_set[group_name] = []
+        if os.path.isfile(grp_file):
+            with open(grp_file, "r") as grp_data:
+                grp_list = grp_data.readlines()
+                for testrun in grp_list:
+                    bundle_set[group_name].append(json.loads(testrun))
+            # Note: now that we have the data from the group, the group data file could be re-used
+            # as an error log which is simpler than debugging through XMLRPC.
+        else:
+            raise ValueError("Aggregation failure for %s - check coordinator rpc_delay?" % group_name)
+        group_tests = []
+        try:
+            json_data = json.loads(content)
+        except ValueError:
+            logging.debug("Invalid JSON content within the sub_id zero bundle")
+            json_data = None
+        try:
+            bundle_set[group_name].append(json_data)
+        except Exception as e:
+            logging.debug("appending JSON caused exception %s" % e)
+        try:
+            for bundle_list in bundle_set[group_name]:
+                for test_run in bundle_list['test_runs']:
+                    group_tests.append(test_run)
+        except Exception as e:
+            logging.debug("aggregating bundles caused exception %s" % e)
+        group_content = json.dumps({"test_runs": group_tests, "format": json_data['format']})
+        bundle = self._put(group_content, content_filename, pathname)
+        logging.debug("Returning permalink to aggregated bundle for %s" % group_name)
+        permalink = self._context.request.build_absolute_uri(
+            reverse('dashboard_app.views.redirect_to_bundle',
+                    kwargs={'content_sha1': bundle.content_sha1}))
+        # only delete the group file when things go well.
+        if os.path.isfile(grp_file):
+            os.remove(grp_file)
+        return permalink
+
     def get(self, content_sha1):
         """
         Name
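The aggregation step of put_group can be exercised locally. The sketch below mimics its logic under stated assumptions: a temporary file stands in for /tmp/<group_name>, the bundle contents are invented for the example, and the database submission via _put is omitted.

```python
import json
import tempfile

# Two pending bundles as put_pending would have appended them,
# one JSON document per line of the group file.
pending = [
    json.dumps({"format": "Dashboard Bundle Format 1.0",
                "test_runs": [{"test_id": "smoke", "result": "pass"}]}),
    json.dumps({"format": "Dashboard Bundle Format 1.0",
                "test_runs": [{"test_id": "network", "result": "pass"}]}),
]
with tempfile.NamedTemporaryFile("w+", delete=False) as grp_file:
    for bundle in pending:
        grp_file.write("%s\n" % bundle)
    grp_path = grp_file.name

# The final (sub_id zero) bundle arrives as the `content` argument.
content = json.dumps({"format": "Dashboard Bundle Format 1.0",
                      "test_runs": [{"test_id": "boot", "result": "pass"}]})

# Aggregate as put_group does: parse each pending line, append the
# final bundle, then merge all test_runs into one group bundle.
bundle_set = [json.loads(line) for line in open(grp_path)]
bundle_set.append(json.loads(content))
group_tests = [run for bundle in bundle_set for run in bundle["test_runs"]]
group_content = json.dumps({"test_runs": group_tests,
                            "format": bundle_set[-1]["format"]})
```

The resulting group_content is what put_group would hand to _put for normal submission and deserialization into the bundle stream.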
> === modified file 'dashboard_app/xmlrpc.py'
> --- dashboard_app/xmlrpc.py 2013-05-02 10:35:29 +0000
> +++ dashboard_app/xmlrpc.py 2013-08-20 16:56:49 +0000
> @@ -2,7 +2,7 @@
>  #
>  # Author: Zygmunt Krynicki <email address hidden>
>  #
> -# This file is part of Launch Control.
> +# This file is part of LAVA Dashboard
>  #
>  # Launch Control is free software: you can redistribute it and/or modify
>  # it under the terms of the GNU Affero General Public License version 3
> @@ -25,7 +25,9 @@
>  import logging
>  import re
>  import xmlrpclib
> -
> +import hashlib
> +import json
> +import os
>  from django.contrib.auth.models import User, Group
>  from django.core.urlresolvers import reverse
>  from django.db import IntegrityError, DatabaseError
> @@ -243,6 +245,152 @@
>              'dashboard_app.views.redirect_to_bundle',
>              kwargs={'content_sha1':bundle.content_sha1}))
>
> +    def put_pending(self, content, group_name):
> +        """
[...]
> +        """
> +        try:
> +            # add this to a list which put_group can use.
> +            sha1 = hashlib.sha1()
> +            sha1.update(content)
> +            hexdigest = sha1.hexdigest()
> +            groupfile = "/tmp/%s" % group_name
> +            with open(groupfile, "a+") as grp_file:
> +                grp_file.write("%s\n" % content)
> +            return hexdigest
> +        except Exception as e:
> +            logging.debug("Dashboard pending submission caused an exception: %s" % e)
Is there a race condition here? It's fine for two or more processes to append
to the same file, but it's possible that depending on the size of the bundles
and on line buffering issues the contents of different bundles might get
intermingled. Maybe we should write each bundle to its own separate file, then
read them all on put_group.
Also, I miss some sort of authentication to avoid the risk of having attackers
submitting random crap into bundle streams for multinode groups. I guess
put_group already handles authentication because it uses the underlying bundle
stream access control?
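One mitigation short of the per-file scheme Antonio suggests, sketched here as an assumption rather than anything in the branch: issue each bundle as a single write() on a descriptor opened with O_APPEND, bypassing stdio buffering. On POSIX systems the end-of-file positioning and the write then happen as one step, so concurrent appenders do not overwrite each other's offsets, though whether one very large write can still be split is implementation-dependent.

```python
import json
import os
import tempfile

# Sketch (not from the branch): append each bundle with one os.write()
# on an O_APPEND descriptor, avoiding the line-buffering interleaving
# risk raised in the review. Very large bundles may still need the
# one-file-per-bundle approach instead.
path = tempfile.NamedTemporaryFile(delete=False).name

def append_bundle(path, content):
    data = ("%s\n" % content).encode()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)  # single syscall, no stdio buffering
    finally:
        os.close(fd)

append_bundle(path, json.dumps({"test_runs": ["a"]}))
append_bundle(path, json.dumps({"test_runs": ["b"]}))
lines = open(path).read().splitlines()
```

put_group's line-by-line json.loads() would work unchanged on a file written this way.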