Merge lp:lava-dashboard/multinode into lp:lava-dashboard

Proposed by Neil Williams
Status: Merged
Approved by: Neil Williams
Approved revision: 411
Merged at revision: 415
Proposed branch: lp:lava-dashboard/multinode
Merge into: lp:lava-dashboard
Diff against target: 246 lines (+192/-4)
2 files modified
dashboard_app/templates/dashboard_app/_test_run_list_table.html (+6/-0)
dashboard_app/xmlrpc.py (+186/-4)
To merge this branch: bzr merge lp:lava-dashboard/multinode
Reviewer: Neil Williams (Approve)
Review via email: mp+181101@code.launchpad.net

Description of the change

Landing MultiNode.

Handles the aggregation of MultiNode result bundles after the XML-RPC calls, which are coordinated by the LAVA Coordinator.

This branch applies without conflicts.

lava-dashboard will be the first of the shared MultiNode branches to be merged, as these changes do not depend on changes in the other shared branches.
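
For context, a minimal sketch of how a node-side client might drive the two new calls over XML-RPC. The server URL, stream pathname, group name and the dashboard.* method prefix are illustrative assumptions rather than anything defined by this branch; only the put_pending and put_group signatures come from the preview diff below (as landed in the final revision).

import xmlrpclib

# Assumed dashboard endpoint and stream pathname - adjust for a real instance.
server = xmlrpclib.ServerProxy("http://dashboard.example.org/RPC2/")
pathname = "/anonymous/multinode-results/"   # pre-existing bundle stream
group_name = "multinode-group-1234"          # unique MultiNode group ID

# Every node except the last submits its result bundle as pending.
with open("node-bundle.json") as handle:
    node_bundle = handle.read()
sha1 = server.dashboard.put_pending(node_bundle, pathname, group_name)

# The node holding sub_id zero submits its own bundle and triggers
# aggregation of all pending bundles into a single group bundle.
with open("final-bundle.json") as handle:
    final_bundle = handle.read()
permalink = server.dashboard.put_group(
    final_bundle, "final-bundle.json", pathname, group_name)
print permalink

The Coordinator-side timing (when the sub_id zero node is allowed to call put_group) is outside this branch; see the review discussion below.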

Antonio Terceiro (terceiro) wrote :

> === modified file 'dashboard_app/xmlrpc.py'
> --- dashboard_app/xmlrpc.py 2013-05-02 10:35:29 +0000
> +++ dashboard_app/xmlrpc.py 2013-08-20 16:56:49 +0000
> @@ -2,7 +2,7 @@
> #
> # Author: Zygmunt Krynicki <email address hidden>
> #
> -# This file is part of Launch Control.
> +# This file is part of LAVA Dashboard
> #
> # Launch Control is free software: you can redistribute it and/or modify
> # it under the terms of the GNU Affero General Public License version 3
> @@ -25,7 +25,9 @@
> import logging
> import re
> import xmlrpclib
> -
> +import hashlib
> +import json
> +import os
> from django.contrib.auth.models import User, Group
> from django.core.urlresolvers import reverse
> from django.db import IntegrityError, DatabaseError
> @@ -243,6 +245,152 @@
> 'dashboard_app.views.redirect_to_bundle',
> kwargs={'content_sha1':bundle.content_sha1}))
>
> + def put_pending(self, content, group_name):
> + """
[...]
> + """
> + try:
> + # add this to a list which put_group can use.
> + sha1 = hashlib.sha1()
> + sha1.update(content)
> + hexdigest = sha1.hexdigest()
> + groupfile = "/tmp/%s" % group_name
> + with open(groupfile, "a+") as grp_file:
> + grp_file.write("%s\n" % content)
> + return hexdigest
> + except Exception as e:
> + logging.debug("Dashboard pending submission caused an exception: %s" % e)

Is there a race condition here? It's fine for two or more processes to append
to the same file, but depending on the size of the bundles and on line
buffering, the contents of different bundles might get intermingled. Maybe we
should write each bundle to its own separate file, then read them all back in
put_group (see the sketch after this comment).

Also, I'd expect some sort of authentication, to avoid the risk of attackers
submitting random crap into bundle streams for multinode groups. I guess
put_group already handles authentication because it uses the underlying
bundle stream access control?
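
A minimal sketch of the per-file approach suggested above, assuming a per-group directory under the system temp dir and hypothetical helper names (store_pending, collect_pending); this is not code from the branch, just an illustration of the idea.

import glob
import hashlib
import json
import os
import tempfile


def _group_dir(group_name):
    # One directory per MultiNode group, e.g. /tmp/lava-pending-<group>.
    return os.path.join(tempfile.gettempdir(), "lava-pending-%s" % group_name)


def store_pending(content, group_name):
    """Write one pending bundle to its own file; no shared appends."""
    sha1 = hashlib.sha1()
    sha1.update(content)
    hexdigest = sha1.hexdigest()
    group_dir = _group_dir(group_name)
    try:
        os.makedirs(group_dir)
    except OSError:
        pass  # directory already exists
    with open(os.path.join(group_dir, "%s.json" % hexdigest), "w") as bundle_file:
        bundle_file.write(content)
    return hexdigest


def collect_pending(group_name):
    """Read every pending bundle back for aggregation in put_group."""
    bundles = []
    for filename in sorted(glob.glob(os.path.join(_group_dir(group_name), "*.json"))):
        with open(filename) as bundle_file:
            bundles.append(json.loads(bundle_file.read()))
    return bundles

Naming each file after the bundle's SHA1 also gives simple duplicate protection for free, since a resubmitted identical bundle just overwrites the same file.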

Neil Williams (codehelp) wrote :

On Thu, 22 Aug 2013 01:27:18 -0000
Antonio Terceiro <email address hidden> wrote:

> > + def put_pending(self, content, group_name):
> > + try:
> > + # add this to a list which put_group can use.
> > + sha1 = hashlib.sha1()
> > + sha1.update(content)
> > + hexdigest = sha1.hexdigest()
> > + groupfile = "/tmp/%s" % group_name
> > + with open(groupfile, "a+") as grp_file:
> > + grp_file.write("%s\n" % content)
> > + return hexdigest
> > + except Exception as e:
> > + logging.debug("Dashboard pending submission caused an
> > exception: %s" % e)
>
> Is there a race condition here?

Possibly, only within that one group. The Coordinator already ensures
that put_group waits for a fixed period to allow the last put_pending
for this group to complete, but large groups could have a problem here.
(There is also a reboot into the master image between the last possible
sync operation and the XMLRPC call.)

> It's fine for two or more processes
> to append to the same file, but it's possible that depending on the
> size of the bundles and on line buffering issues the contents of
> different bundles might get intermingled. Maybe we should write each
> bundle to its own separate file, then read them all on put_group.

> Also, I miss some sort of authentication to avoid the risk of having
> attackers submitting random crap into bundle streams for multinode
> groups. I guess put_group already handles authentication because it
> uses the underlying bundle stream access control?

put_group uses authentication, yes.

I'll have a look at some of the checks in _put - what we don't want is
to use the full _put function, which causes the creation and
deserialization of the pending bundle.

--

Neil Williams
=============
http://www.linux.codehelp.co.uk/

lp:lava-dashboard/multinode updated
411. By Neil Williams

Neil Williams 2013-08-23 Add support for checking authentication of the
 pending bundles by porting the stream check code from _put without
 allowing the pending bundle into the database before aggregation.

Neil Williams (codehelp) wrote :

Approved with the update addressing the review comments.

review: Approve

Preview Diff

1=== modified file 'dashboard_app/templates/dashboard_app/_test_run_list_table.html'
2--- dashboard_app/templates/dashboard_app/_test_run_list_table.html 2013-02-25 09:25:30 +0000
3+++ dashboard_app/templates/dashboard_app/_test_run_list_table.html 2013-08-24 08:11:25 +0000
4@@ -2,6 +2,7 @@
5 <table class="demo_jui display" id="test_runs">
6 <thead>
7 <tr>
8+ <th>{% trans "Device" %}</th>
9 <th>{% trans "Test Run" %}</th>
10 <th>{% trans "Test" %}</th>
11 <th>{% trans "Passes" %}</th>
12@@ -13,6 +14,11 @@
13 <tbody>
14 {% for test_run in test_run_list %}
15 <tr>
16+ {% for attribute in test_run.attributes.all %}
17+ {% if attribute.name == "target" %}
18+ <td>{{ attribute.value }}</td>
19+ {% endif %}
20+ {% endfor %}
21 <td><a href="{{ test_run.get_absolute_url }}"><code>{{ test_run.test }} results<code/></a></td>
22 <td>{{ test_run.test }}</td>
23 <td>{{ test_run.get_summary_results.pass }}</td>
24
25=== modified file 'dashboard_app/xmlrpc.py'
26--- dashboard_app/xmlrpc.py 2013-05-02 10:35:29 +0000
27+++ dashboard_app/xmlrpc.py 2013-08-24 08:11:25 +0000
28@@ -2,7 +2,7 @@
29 #
30 # Author: Zygmunt Krynicki <zygmunt.krynicki@linaro.org>
31 #
32-# This file is part of Launch Control.
33+# This file is part of LAVA Dashboard
34 #
35 # Launch Control is free software: you can redistribute it and/or modify
36 # it under the terms of the GNU Affero General Public License version 3
37@@ -25,7 +25,9 @@
38 import logging
39 import re
40 import xmlrpclib
41-
42+import hashlib
43+import json
44+import os
45 from django.contrib.auth.models import User, Group
46 from django.core.urlresolvers import reverse
47 from django.db import IntegrityError, DatabaseError
48@@ -105,9 +107,9 @@
49 logging.debug("Getting bundle stream")
50 bundle_stream = BundleStream.objects.accessible_by_principal(self.user).get(pathname=pathname)
51 except BundleStream.DoesNotExist:
52- logging.debug("Bundle stream does not exists, aborting")
53+ logging.debug("Bundle stream does not exist, aborting")
54 raise xmlrpclib.Fault(errors.NOT_FOUND,
55- "Bundle stream not found")
56+ "Bundle stream not found")
57 if not bundle_stream.can_upload(self.user):
58 raise xmlrpclib.Fault(
59 errors.FORBIDDEN, "You cannot upload to this stream")
60@@ -243,6 +245,186 @@
61 'dashboard_app.views.redirect_to_bundle',
62 kwargs={'content_sha1':bundle.content_sha1}))
63
64+ def put_pending(self, content, pathname, group_name):
65+ """
66+ Name
67+ ----
68+ `put_pending` (`content`, `pathname`, `group_name`)
69+
70+ Description
71+ -----------
72+ MultiNode internal call.
73+
74+ Stores the bundle until the coordinator allows the complete
75+ bundle list to be aggregated from the list and submitted by put_group
76+
77+ Arguments
78+ ---------
79+ `content`: string
80+ Full text of the bundle. This *MUST* be a valid JSON
81+ document and it *SHOULD* match the "Dashboard Bundle Format
82+ 1.0" schema. The SHA1 of the content *MUST* be unique or a
83+ ``Fault(409, "...")`` is raised. This is used to protect
84+ from simple duplicate submissions.
85+ `pathname`: string
86+ Pathname of the bundle stream where a new bundle should
87+ be created and stored. This argument *MUST* designate a
88+ pre-existing bundle stream or a ``Fault(404, "...")`` exception
89+ is raised. In addition the user *MUST* have access
90+ permission to upload bundles there or a ``Fault(403, "...")``
91+ exception is raised. See below for access rules.
92+ `group_name`: string
93+ Unique ID of the MultiNode group. Other pending bundles will
94+ be aggregated into a single result bundle for this group.
95+
96+ Return value
97+ ------------
98+ If all goes well this function returns the SHA1 of the content.
99+
100+ Exceptions raised
101+ -----------------
102+ 404
103+ Either:
104+
105+ - Bundle stream not found
106+ - Uploading to specified stream is not permitted
107+ 409
108+ Duplicate bundle content
109+
110+ Rules for bundle stream access
111+ ------------------------------
112+ The following rules govern bundle stream upload access rights:
113+ - all anonymous streams are accessible
114+ - personal streams are accessible to owners
115+ - team streams are accessible to team members
116+
117+ """
118+ try:
119+ logging.debug("Getting bundle stream")
120+ bundle_stream = BundleStream.objects.accessible_by_principal(self.user).get(pathname=pathname)
121+ except BundleStream.DoesNotExist:
122+ logging.debug("Bundle stream does not exist, aborting")
123+ raise xmlrpclib.Fault(errors.NOT_FOUND,
124+ "Bundle stream not found")
125+ if not bundle_stream.can_upload(self.user):
126+ raise xmlrpclib.Fault(
127+ errors.FORBIDDEN, "You cannot upload to this stream")
128+ try:
129+ # add this to a list which put_group can use.
130+ sha1 = hashlib.sha1()
131+ sha1.update(content)
132+ hexdigest = sha1.hexdigest()
133+ groupfile = "/tmp/%s" % group_name
134+ with open(groupfile, "a+") as grp_file:
135+ grp_file.write("%s\n" % content)
136+ return hexdigest
137+ except Exception as e:
138+ logging.debug("Dashboard pending submission caused an exception: %s" % e)
139+
140+ def put_group(self, content, content_filename, pathname, group_name):
141+ """
142+ Name
143+ ----
144+ `put_group` (`content`, `content_filename`, `pathname`, `group_name`)
145+
146+ Description
147+ -----------
148+ MultiNode internal call.
149+
150+ Adds the final bundle to the list, aggregates the list
151+ into a single group bundle and submits the group bundle.
152+
153+ Arguments
154+ ---------
155+ `content`: string
156+ Full text of the bundle. This *MUST* be a valid JSON
157+ document and it *SHOULD* match the "Dashboard Bundle Format
158+ 1.0" schema. The SHA1 of the content *MUST* be unique or a
159+ ``Fault(409, "...")`` is raised. This is used to protect
160+ from simple duplicate submissions.
161+ `content_filename`: string
162+ Name of the file that contained the text of the bundle. The
163+ `content_filename` can be an arbitrary string and will be
164+ stored along with the content for reference.
165+ `pathname`: string
166+ Pathname of the bundle stream where a new bundle should
167+ be created and stored. This argument *MUST* designate a
168+ pre-existing bundle stream or a ``Fault(404, "...")`` exception
169+ is raised. In addition the user *MUST* have access
170+ permission to upload bundles there or a ``Fault(403, "...")``
171+ exception is raised. See below for access rules.
172+ `group_name`: string
173+ Unique ID of the MultiNode group. Other pending bundles will
174+ be aggregated into a single result bundle for this group. At
175+ least one other bundle must have already been submitted as
176+ pending for the specified MultiNode group. LAVA Coordinator
177+ causes the parent job to wait until all nodes have been marked
178+ as having pending bundles, even if some bundles are empty.
179+
180+ Return value
181+ ------------
182+ If all goes well this function returns the full URL of the bundle.
183+
184+ Exceptions raised
185+ -----------------
186+ ValueError:
187+ One or more bundles could not be converted to JSON prior
188+ to aggregation.
189+ 404
190+ Either:
191+
192+ - Bundle stream not found
193+ - Uploading to specified stream is not permitted
194+ 409
195+ Duplicate bundle content
196+
197+ Rules for bundle stream access
198+ ------------------------------
199+ The following rules govern bundle stream upload access rights:
200+ - all anonymous streams are accessible
201+ - personal streams are accessible to owners
202+ - team streams are accessible to team members
203+
204+ """
205+ grp_file = "/tmp/%s" % group_name
206+ bundle_set = {}
207+ bundle_set[group_name] = []
208+ if os.path.isfile(grp_file):
209+ with open(grp_file, "r") as grp_data:
210+ grp_list = grp_data.readlines()
211+ for testrun in grp_list:
212+ bundle_set[group_name].append(json.loads(testrun))
213+ # Note: now that we have the data from the group, the group data file could be re-used
214+ # as an error log which is simpler than debugging through XMLRPC.
215+ else:
216+ raise ValueError("Aggregation failure for %s - check coordinator rpc_delay?" % group_name)
217+ group_tests = []
218+ try:
219+ json_data = json.loads(content)
220+ except ValueError:
221+ logging.debug("Invalid JSON content within the sub_id zero bundle")
222+ json_data = None
223+ try:
224+ bundle_set[group_name].append(json_data)
225+ except Exception as e:
226+ logging.debug("appending JSON caused exception %s" % e)
227+ try:
228+ for bundle_list in bundle_set[group_name]:
229+ for test_run in bundle_list['test_runs']:
230+ group_tests.append(test_run)
231+ except Exception as e:
232+ logging.debug("aggregating bundles caused exception %s" % e)
233+ group_content = json.dumps({"test_runs": group_tests, "format": json_data['format']})
234+ bundle = self._put(group_content, content_filename, pathname)
235+ logging.debug("Returning permalink to aggregated bundle for %s" % group_name)
236+ permalink = self._context.request.build_absolute_uri(
237+ reverse('dashboard_app.views.redirect_to_bundle',
238+ kwargs={'content_sha1': bundle.content_sha1}))
239+ # only delete the group file when things go well.
240+ if os.path.isfile(grp_file):
241+ os.remove(grp_file)
242+ return permalink
243+
244 def get(self, content_sha1):
245 """
246 Name
