Merge lp:lava-scheduler/multinode into lp:lava-scheduler

Proposed by Neil Williams
Status: Merged
Approved by: Neil Williams
Approved revision: 285
Merged at revision: 255
Proposed branch: lp:lava-scheduler/multinode
Merge into: lp:lava-scheduler
Diff against target: 1750 lines (+1225/-137) (has conflicts)
17 files modified
lava_scheduler_app/api.py (+19/-3)
lava_scheduler_app/management/commands/scheduler.py (+2/-2)
lava_scheduler_app/management/commands/schedulermonitor.py (+1/-1)
lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py (+165/-0)
lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py (+162/-0)
lava_scheduler_app/models.py (+140/-11)
lava_scheduler_app/templates/lava_scheduler_app/job_definition.html (+1/-1)
lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html (+16/-0)
lava_scheduler_app/templates/lava_scheduler_app/job_submit.html (+10/-0)
lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html (+21/-0)
lava_scheduler_app/urls.py (+13/-2)
lava_scheduler_app/utils.py (+117/-0)
lava_scheduler_app/views.py (+102/-10)
lava_scheduler_daemon/dbjobsource.py (+134/-64)
lava_scheduler_daemon/job.py (+281/-0)
lava_scheduler_daemon/service.py (+40/-42)
lava_scheduler_daemon/tests/test_board.py (+1/-1)
Text conflict in lava_scheduler_app/models.py
Text conflict in lava_scheduler_app/urls.py
Text conflict in lava_scheduler_app/views.py
Contents conflict in lava_scheduler_daemon/board.py
To merge this branch: bzr merge lp:lava-scheduler/multinode
Reviewers:
Neil Williams: Approve
Antonio Terceiro: Needs Fixing
Review via email: mp+181103@code.launchpad.net

Description of the change

Landing MultiNode.

Handles the scheduling of MultiNode jobs.

This branch applies with conflicts. The conflicts are proposed to be resolved as in this temporary branch: lp:~codehelp/lava-scheduler/multinode-merge

lava-scheduler will be the final shared branch to be merged, as it requires the dispatcher changes to be applied before MultiNode jobs can be submitted.

lp:lava-scheduler/multinode updated
282. By Neil Williams

Fu Wei 2013-08-23 Fix the hard-coded 'logging_level': change
 'DEBUG' back to the value taken from the multinode JSON file

283. By Neil Williams

Senthil Kumaran 2013-08-22 List all subjobs of a multinode job.

Revision history for this message
Antonio Terceiro (terceiro) wrote :

Hello guys,

My comments on the current state of the code follow.

> === added file 'lava_scheduler_app/utils.py'
> --- lava_scheduler_app/utils.py 1970-01-01 00:00:00 +0000
> +++ lava_scheduler_app/utils.py 2013-08-24 08:58:27 +0000
> @@ -0,0 +1,116 @@
> +# Copyright (C) 2013 Linaro Limited
> +#
> +# Author: Neil Williams <email address hidden>
> +# Senthil Kumaran <email address hidden>
> +#
> +# This file is part of LAVA Scheduler.
> +#
> +# LAVA Scheduler is free software: you can redistribute it and/or modify it
> +# under the terms of the GNU Affero General Public License version 3 as
> +# published by the Free Software Foundation
> +#
> +# LAVA Scheduler is distributed in the hope that it will be useful, but
> +# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
> +# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> +# more details.
> +#
> +# You should have received a copy of the GNU Affero General Public License
> +# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>.
> +
> +import re
> +import copy
> +import socket
> +import urlparse
> +import simplejson
> +
> +
> +def rewrite_hostname(result_url):
> +    """If URL has hostname value as localhost/127.0.0.*, change it to the
> +    actual server FQDN.
> +
> +    Returns the RESULT_URL (string) re-written with hostname.
> +
> +    See https://cards.linaro.org/browse/LAVA-611
> +    """
> +    host = urlparse.urlparse(result_url).netloc
> +    if host == "localhost":
> +        result_url = result_url.replace("localhost", socket.getfqdn())
> +    elif host.startswith("127.0.0"):
> +        ip_pat = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
> +        result_url = re.sub(ip_pat, socket.getfqdn(), result_url)
> +    return result_url

IMO we should deprecate the "server" parameter for submit_results, and always
submit to the master LAVA server somehow. We probably do not need to do this right
now, so I filed a bug for doing it in the future:

https://bugs.launchpad.net/lava-dispatcher/+bug/1217061
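
For context, a rough usage sketch of the rewrite_hostname() helper quoted above; "lava.example.org" is only a placeholder for whatever socket.getfqdn() returns on the real server:

    # Illustration only: "lava.example.org" stands in for the real FQDN.
    from lava_scheduler_app.utils import rewrite_hostname

    print(rewrite_hostname("http://localhost/RPC2/"))
    # -> "http://lava.example.org/RPC2/"
    print(rewrite_hostname("http://127.0.0.1/RPC2/"))
    # -> "http://lava.example.org/RPC2/"
    print(rewrite_hostname("http://validation.linaro.org/RPC2/"))
    # -> unchanged, since the host is neither localhost nor 127.0.0.*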

> +def split_multi_job(json_jobdata, target_group):
> +    node_json = {}
> +    all_nodes = {}
> +    node_actions = {}
> +    if "device_group" in json_jobdata:

this conditional is redundant since split_multi_job is already called in
the context where the same condition was already tested.

> +        # get all the roles and create node action list for each role.
> +        for group in json_jobdata["device_group"]:
> +            node_actions[group["role"]] = []
> +
> +        # Take each action and assign it to proper roles. If roles are not
> +        # specified for a specific action, then assign it to all the roles.
> +        all_actions = json_jobdata["actions"]
> +        for role in node_actions.keys():
> +            for action in all_actions:
> +                new_action = copy.deepcopy(action)
> +                if 'parameters' in new_action \
> +                        and 'role' in new_action["parameters"]:
> +                    if new_action["parameters"]["role"] == role:
> +                        new_action["parameters"].pop('role', None)
> +                        node_actions[role].appe...

review: Needs Fixing
Revision history for this message
Neil Williams (codehelp) wrote :

On Mon, 26 Aug 2013 23:33:20 -0000
Antonio Terceiro <email address hidden> wrote:

> > +def split_multi_job(json_jobdata, target_group):
> > + node_json = {}
> > + all_nodes = {}
> > + node_actions = {}
> > + if "device_group" in json_jobdata:
>
> this conditional is redundant since split_multi_job is already called
> in the context where the same condition was already tested.

It's worth retaining the check in case other functions end up calling
this routine, but it should probably be a no-op or an error return instead,
which would simplify the indentation a bit.
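
A minimal sketch of that suggestion (the landed split_multi_job() in the preview diff below uses an equivalent guard):

    # Sketch only: guard at the top and hand non-multinode data back untouched,
    # so the rest of the function body loses one level of indentation.
    def split_multi_job(json_jobdata, target_group):
        node_actions = {}

        # Not a multinode job: behave as a no-op and return the data as-is.
        if "device_group" not in json_jobdata or not target_group:
            return json_jobdata

        # get all the roles and create a node action list for each role
        for group in json_jobdata["device_group"]:
            node_actions[group["role"]] = []
        # ... the per-role action splitting quoted above continues from here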

> > +def requested_device_count(json_data):
> > +    """Utility function check the requested number of devices for each
> > +    device_type in a multinode job.
> > +
> > +    JSON_DATA is the job definition string.
> > +
> > +    Returns requested_device which is a dictionary of the following
> > +    format:
> > +
> > +    {'kvm': 1, 'qemu': 3, 'panda': 1}
> > +
> > +    If the job is not a multinode job, then return None.
> > +    """
> > +    job_data = simplejson.loads(json_data)
> > +    if 'device_group' in job_data:
> > +        requested_devices = {}
> > +        for device_group in job_data['device_group']:
> > +            device_type = device_group['device_type']
> > +            count = device_group['count']
> > +            requested_devices[device_type] = count
> > +        return requested_devices
> > +    else:
> > +        # TODO: Put logic to check whether we have requested devices
> > +        # attached to this lava-server, even if it is a single node job?
> > +        return None
>
> There is already code that does almost the same thing (checking
> amount of available devices for all device types) in
> lava_scheduler_app/api.py ... we should probably not duplicate that
> logic here.

I believe the intention here is that, as MultiNode requires a different
job selection process, it may be worth porting that code into the new
methods and using only a single device selection algorithm.

Senthil? Is this TODO redundant?

> > @@ -764,7 +818,13 @@
> > def job_cancel(request, pk):
> > job = get_restricted_job(request.user, pk)
> > if job.can_cancel(request.user):
> > - job.cancel()
> > + if job.target_group:
>
>
> I see this pattern repeated all over the code, but I don't think
> there's much we can do about it. But IMO we can make it little better
> by creating a new property in the job class so that you can test for
> `job.is_multinode` instead of `job.target_group`, which is not
> obviously related to multinode unless you understand the
> implementation details.

The processing depends on whether a group is in use. As far as the
code is concerned, it is the presence of a group which is relevant, not
the "MultiNode" label we use for it. I think target_group is more flexible.
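
For reference, the models.py hunk in the preview diff below does end up adding such a property, keyed on the presence of target_group:

    @property
    def is_multinode(self):
        # A MultiNode job is simply one that carries a target_group.
        if self.target_group:
            return True
        else:
            return False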

> > === modified file 'lava_scheduler_daemon/board.py'
> > --- lava_scheduler_daemon/board.py 2013-07-17 12:20:25 +0000
> > +++ lava_scheduler_daemon/board.py 2013-08-24 08:58:27 +0000
> > @@ -339,7 +339,7 @@
> > self.logger.info("starting job %r", job_data)
> > self.running_job = self.job_cls(
> > job_data, self...


lp:lava-scheduler/multinode updated
284. By Neil Williams

Neil Williams 2013-08-27 Move from an overly general Exception to the specific
 exceptions that can be raised directly by the called function.

Revision history for this message
Senthil Kumaran S (stylesen) wrote :

Hi,

The new MP is here -
https://code.launchpad.net/~stylesen/lava-scheduler/review-take-1/+merge/182635

On Tuesday 27 August 2013 04:00 PM, Neil Williams wrote:
> On Mon, 26 Aug 2013 23:33:20 -0000
> Antonio Terceiro <email address hidden> wrote:
>
>>> +def split_multi_job(json_jobdata, target_group):
>>> + node_json = {}
>>> + all_nodes = {}
>>> + node_actions = {}
>>> + if "device_group" in json_jobdata:
>>
>> this conditional is redundant since split_multi_job is already called
>> in the context where the same condition was already tested.
>
> It's worth retaining the check should other functions end up calling
> this routine but it should probably be a no-op or error return instead
> which would simplify the indentation a bit.

I am +1 for Neil's suggestion here. Fixed it accordingly.

>>> +def requested_device_count(json_data):
>>> +    """Utility function check the requested number of devices for each
>>> +    device_type in a multinode job.
>>> +
>>> +    JSON_DATA is the job definition string.
>>> +
>>> +    Returns requested_device which is a dictionary of the following
>>> +    format:
>>> +
>>> +    {'kvm': 1, 'qemu': 3, 'panda': 1}
>>> +
>>> +    If the job is not a multinode job, then return None.
>>> +    """
>>> +    job_data = simplejson.loads(json_data)
>>> +    if 'device_group' in job_data:
>>> +        requested_devices = {}
>>> +        for device_group in job_data['device_group']:
>>> +            device_type = device_group['device_type']
>>> +            count = device_group['count']
>>> +            requested_devices[device_type] = count
>>> +        return requested_devices
>>> +    else:
>>> +        # TODO: Put logic to check whether we have requested devices
>>> +        # attached to this lava-server, even if it is a single node job?
>>> +        return None
>>
>> There is already code that does almost the same thing (checking
>> amount of available devices for all device types) in
>> lava_scheduler_app/api.py ... we should probably not duplicate that
>> logic here.
>
> I believe the intention here is that as MultiNode requires a different
> Job selection process, it may be worth porting that code into the new
> methods and using only a single device selection algorithm.

The function available in api.py is different from what we have in utils.py.

In api.py, the "all_device_types" function returns the device types
available in the lava-server after checking against the database.

On the other hand, the "requested_device_count" function in utils.py takes
the job data as input and operates on it alone; it does not go to the
database to check for the available device types. It is a handy function
for retrieving how many devices of each type are requested in the job file.
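
A small illustration of the difference (the device type names and counts here are invented):

    # Illustration only: a multinode job definition asking for two device types.
    import simplejson
    from lava_scheduler_app import utils

    job_json = simplejson.dumps({
        "device_group": [
            {"role": "server", "device_type": "kvm", "count": 1},
            {"role": "client", "device_type": "panda", "count": 2},
        ],
    })

    # requested_device_count() only parses the submitted JSON ...
    print(utils.requested_device_count(job_json))
    # -> {'kvm': 1, 'panda': 2}

    # ... whereas check_device_availability() in models.py is what queries the
    # database to confirm that many boards of each type actually exist.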

> Senthil? Is this TODO redundant?

The TODO is misleading. It is left over from earlier code, when this
function had different logic. I have removed it in the latest merge
proposal.

>>> @@ -764,7 +818,13 @@
>>> def job_cancel(request, pk):
>>> job = get_restricted_job(request.user, pk)
>>> if job.can_cancel(request.user):
>>> - job.cancel()
>>> + if job.target...


lp:lava-scheduler/multinode updated
285. By Neil Williams

Senthil Kumaran 2013-08-28 Remove all legacy code for Board based scheduler.
Senthil Kumaran 2013-08-28 Fix test case for using celery.
Senthil Kumaran 2013-08-28 Address review comments from Antonio.

Revision history for this message
Neil Williams (codehelp) wrote :

Final update reviewed: Approve.

review: Approve

Preview Diff

=== modified file 'lava_scheduler_app/api.py'
--- lava_scheduler_app/api.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_app/api.py 2013-08-28 13:47:23 +0000
@@ -2,10 +2,12 @@
 from simplejson import JSONDecodeError
 from django.db.models import Count
 from linaro_django_xmlrpc.models import ExposedAPI
+from lava_scheduler_app import utils
 from lava_scheduler_app.models import (
     Device,
     DeviceType,
     JSONDataError,
+    DevicesUnavailableException,
     TestJob,
 )
 from lava_scheduler_app.views import (
@@ -35,14 +37,22 @@
             raise xmlrpclib.Fault(404, "Specified device not found.")
         except DeviceType.DoesNotExist:
             raise xmlrpclib.Fault(404, "Specified device type not found.")
-        return job.id
+        except DevicesUnavailableException as e:
+            raise xmlrpclib.Fault(400, str(e))
+        if isinstance(job, type(list())):
+            return job
+        else:
+            return job.id

     def resubmit_job(self, job_id):
         try:
             job = TestJob.objects.accessible_by_principal(self.user).get(pk=job_id)
         except TestJob.DoesNotExist:
             raise xmlrpclib.Fault(404, "Specified job not found.")
-        return self.submit_job(job.definition)
+        if job.is_multinode:
+            return self.submit_job(job.multinode_definition)
+        else:
+            return self.submit_job(job.definition)

     def cancel_job(self, job_id):
         if not self.user:
@@ -50,7 +60,13 @@
         job = TestJob.objects.get(pk=job_id)
         if not job.can_cancel(self.user):
             raise xmlrpclib.Fault(403, "Permission denied.")
-        job.cancel()
+        if job.is_multinode:
+            multinode_jobs = TestJob.objects.all().filter(
+                target_group=job.target_group)
+            for multinode_job in multinode_jobs:
+                multinode_job.cancel()
+        else:
+            job.cancel()
         return True

     def job_output(self, job_id):

=== modified file 'lava_scheduler_app/management/commands/scheduler.py'
--- lava_scheduler_app/management/commands/scheduler.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_app/management/commands/scheduler.py 2013-08-28 13:47:23 +0000
@@ -43,7 +43,7 @@

         from twisted.internet import reactor

-        from lava_scheduler_daemon.service import BoardSet
+        from lava_scheduler_daemon.service import JobQueue
         from lava_scheduler_daemon.dbjobsource import DatabaseJobSource

         daemon_options = self._configure(options)
@@ -58,7 +58,7 @@
                 'fake-dispatcher')
         else:
             dispatcher = options['dispatcher']
-        service = BoardSet(
+        service = JobQueue(
             source, dispatcher, reactor, daemon_options=daemon_options)
         reactor.callWhenRunning(service.startService)
         reactor.run()

=== modified file 'lava_scheduler_app/management/commands/schedulermonitor.py'
--- lava_scheduler_app/management/commands/schedulermonitor.py 2012-12-03 05:03:38 +0000
+++ lava_scheduler_app/management/commands/schedulermonitor.py 2013-08-28 13:47:23 +0000
@@ -31,7 +31,7 @@

     def handle(self, *args, **options):
         from twisted.internet import reactor
-        from lava_scheduler_daemon.board import Job
+        from lava_scheduler_daemon.job import Job
         daemon_options = self._configure(options)
         source = DatabaseJobSource()
         dispatcher, board_name, json_file = args

=== added file 'lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py'
--- lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py 1970-01-01 00:00:00 +0000
+++ lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py 2013-08-28 13:47:23 +0000
@@ -0,0 +1,165 @@
1# -*- coding: utf-8 -*-
2from south.db import db
3from south.v2 import SchemaMigration
4
5
6class Migration(SchemaMigration):
7
8 def forwards(self, orm):
9 # Adding field 'TestJob.sub_id'
10 db.add_column('lava_scheduler_app_testjob', 'sub_id',
11 self.gf('django.db.models.fields.CharField')(default='', max_length=200, blank=True),
12 keep_default=False)
13
14 # Adding field 'TestJob.target_group'
15 db.add_column('lava_scheduler_app_testjob', 'target_group',
16 self.gf('django.db.models.fields.CharField')(default=None, max_length=64, null=True, blank=True),
17 keep_default=False)
18
19 def backwards(self, orm):
20 # Deleting field 'TestJob.sub_id'
21 db.delete_column('lava_scheduler_app_testjob', 'sub_id')
22
23 # Deleting field 'TestJob.target_group'
24 db.delete_column('lava_scheduler_app_testjob', 'target_group')
25
26 models = {
27 'auth.group': {
28 'Meta': {'object_name': 'Group'},
29 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
30 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
31 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
32 },
33 'auth.permission': {
34 'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
35 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
36 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
37 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
38 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
39 },
40 'auth.user': {
41 'Meta': {'object_name': 'User'},
42 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
43 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
44 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
45 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
46 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
47 'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
48 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
49 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
50 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
51 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
52 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
53 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
54 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
55 },
56 'contenttypes.contenttype': {
57 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
58 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
59 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
60 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
61 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
62 },
63 'dashboard_app.bundle': {
64 'Meta': {'ordering': "['-uploaded_on']", 'object_name': 'Bundle'},
65 '_gz_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'gz_content'"}),
66 '_raw_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'content'"}),
67 'bundle_stream': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'bundles'", 'to': "orm['dashboard_app.BundleStream']"}),
68 'content_filename': ('django.db.models.fields.CharField', [], {'max_length': '256'}),
69 'content_sha1': ('django.db.models.fields.CharField', [], {'max_length': '40', 'unique': 'True', 'null': 'True'}),
70 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
71 'is_deserialized': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
72 'uploaded_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'uploaded_bundles'", 'null': 'True', 'to': "orm['auth.User']"}),
73 'uploaded_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'})
74 },
75 'dashboard_app.bundlestream': {
76 'Meta': {'object_name': 'BundleStream'},
77 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
78 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
79 'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
80 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
81 'name': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
82 'pathname': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}),
83 'slug': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
84 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
85 },
86 'lava_scheduler_app.device': {
87 'Meta': {'object_name': 'Device'},
88 'current_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
89 'device_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.DeviceType']"}),
90 'device_version': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
91 'health_status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
92 'hostname': ('django.db.models.fields.CharField', [], {'max_length': '200', 'primary_key': 'True'}),
93 'last_health_report_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
94 'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
95 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'})
96 },
97 'lava_scheduler_app.devicestatetransition': {
98 'Meta': {'object_name': 'DeviceStateTransition'},
99 'created_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
100 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
101 'device': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'transitions'", 'to': "orm['lava_scheduler_app.Device']"}),
102 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
103 'job': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.TestJob']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
104 'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
105 'new_state': ('django.db.models.fields.IntegerField', [], {}),
106 'old_state': ('django.db.models.fields.IntegerField', [], {})
107 },
108 'lava_scheduler_app.devicetype': {
109 'Meta': {'object_name': 'DeviceType'},
110 'display': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
111 'health_check_job': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
112 'name': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'})
113 },
114 'lava_scheduler_app.jobfailuretag': {
115 'Meta': {'object_name': 'JobFailureTag'},
116 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
117 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
118 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '256'})
119 },
120 'lava_scheduler_app.tag': {
121 'Meta': {'object_name': 'Tag'},
122 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
123 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
124 'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'})
125 },
126 'lava_scheduler_app.testjob': {
127 'Meta': {'object_name': 'TestJob'},
128 '_results_bundle': ('django.db.models.fields.related.OneToOneField', [], {'null': 'True', 'db_column': "'results_bundle_id'", 'on_delete': 'models.SET_NULL', 'to': "orm['dashboard_app.Bundle']", 'blank': 'True', 'unique': 'True'}),
129 '_results_link': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '400', 'null': 'True', 'db_column': "'results_link'", 'blank': 'True'}),
130 'actual_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
131 'definition': ('django.db.models.fields.TextField', [], {}),
132 'description': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
133 'end_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
134 'failure_comment': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
135 'failure_tags': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'failure_tags'", 'blank': 'True', 'to': "orm['lava_scheduler_app.JobFailureTag']"}),
136 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
137 'health_check': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
138 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
139 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
140 'log_file': ('django.db.models.fields.files.FileField', [], {'default': 'None', 'max_length': '100', 'null': 'True', 'blank': 'True'}),
141 'priority': ('django.db.models.fields.IntegerField', [], {'default': '50'}),
142 'requested_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
143 'requested_device_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.DeviceType']"}),
144 'start_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
145 'status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
146 'sub_id': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
147 'submit_time': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
148 'submit_token': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['linaro_django_xmlrpc.AuthToken']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
149 'submitter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'to': "orm['auth.User']"}),
150 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}),
151 'target_group': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '64', 'null': 'True', 'blank': 'True'}),
152 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
153 },
154 'linaro_django_xmlrpc.authtoken': {
155 'Meta': {'object_name': 'AuthToken'},
156 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
157 'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}),
158 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
159 'last_used_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
160 'secret': ('django.db.models.fields.CharField', [], {'default': "'7rf4239t35kqjrcixn4srgw00r61ncuq51jna0d6xbwpg2ur2annw5y1gkr9yt6ys9gh06b3wtcum4j0f2pdn5crul72mu1e1tw4at9jfgwk18asogkgoqcbc20ftylx'", 'unique': 'True', 'max_length': '128'}),
161 'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_tokens'", 'to': "orm['auth.User']"})
162 }
163 }
164
165 complete_apps = ['lava_scheduler_app']
0166
=== added file 'lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py'
--- lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py 1970-01-01 00:00:00 +0000
+++ lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py 2013-08-28 13:47:23 +0000
@@ -0,0 +1,162 @@
1# -*- coding: utf-8 -*-
2import datetime
3from south.db import db
4from south.v2 import SchemaMigration
5from django.db import models
6
7
8class Migration(SchemaMigration):
9
10 def forwards(self, orm):
11 # Adding field 'TestJob.multinode_definition'
12 db.add_column('lava_scheduler_app_testjob', 'multinode_definition',
13 self.gf('django.db.models.fields.TextField')(default='', blank=True),
14 keep_default=False)
15
16
17 def backwards(self, orm):
18 # Deleting field 'TestJob.multinode_definition'
19 db.delete_column('lava_scheduler_app_testjob', 'multinode_definition')
20
21
22 models = {
23 'auth.group': {
24 'Meta': {'object_name': 'Group'},
25 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
26 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
27 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
28 },
29 'auth.permission': {
30 'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
31 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
32 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
33 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
34 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
35 },
36 'auth.user': {
37 'Meta': {'object_name': 'User'},
38 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
39 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
40 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
41 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
42 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
43 'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
44 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
45 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
46 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
47 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
48 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
49 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
50 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
51 },
52 'contenttypes.contenttype': {
53 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
54 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
55 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
56 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
57 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
58 },
59 'dashboard_app.bundle': {
60 'Meta': {'ordering': "['-uploaded_on']", 'object_name': 'Bundle'},
61 '_gz_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'gz_content'"}),
62 '_raw_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'content'"}),
63 'bundle_stream': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'bundles'", 'to': "orm['dashboard_app.BundleStream']"}),
64 'content_filename': ('django.db.models.fields.CharField', [], {'max_length': '256'}),
65 'content_sha1': ('django.db.models.fields.CharField', [], {'max_length': '40', 'unique': 'True', 'null': 'True'}),
66 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
67 'is_deserialized': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
68 'uploaded_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'uploaded_bundles'", 'null': 'True', 'to': "orm['auth.User']"}),
69 'uploaded_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'})
70 },
71 'dashboard_app.bundlestream': {
72 'Meta': {'object_name': 'BundleStream'},
73 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
74 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
75 'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
76 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
77 'name': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
78 'pathname': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}),
79 'slug': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}),
80 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
81 },
82 'lava_scheduler_app.device': {
83 'Meta': {'object_name': 'Device'},
84 'current_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
85 'device_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.DeviceType']"}),
86 'device_version': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
87 'health_status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
88 'hostname': ('django.db.models.fields.CharField', [], {'max_length': '200', 'primary_key': 'True'}),
89 'last_health_report_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}),
90 'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}),
91 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'})
92 },
93 'lava_scheduler_app.devicestatetransition': {
94 'Meta': {'object_name': 'DeviceStateTransition'},
95 'created_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
96 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
97 'device': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'transitions'", 'to': "orm['lava_scheduler_app.Device']"}),
98 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
99 'job': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.TestJob']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
100 'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
101 'new_state': ('django.db.models.fields.IntegerField', [], {}),
102 'old_state': ('django.db.models.fields.IntegerField', [], {})
103 },
104 'lava_scheduler_app.devicetype': {
105 'Meta': {'object_name': 'DeviceType'},
106 'display': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
107 'health_check_job': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
108 'name': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'})
109 },
110 'lava_scheduler_app.jobfailuretag': {
111 'Meta': {'object_name': 'JobFailureTag'},
112 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
113 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
114 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '256'})
115 },
116 'lava_scheduler_app.tag': {
117 'Meta': {'object_name': 'Tag'},
118 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
119 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
120 'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'})
121 },
122 'lava_scheduler_app.testjob': {
123 'Meta': {'object_name': 'TestJob'},
124 '_results_bundle': ('django.db.models.fields.related.OneToOneField', [], {'null': 'True', 'db_column': "'results_bundle_id'", 'on_delete': 'models.SET_NULL', 'to': "orm['dashboard_app.Bundle']", 'blank': 'True', 'unique': 'True'}),
125 '_results_link': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '400', 'null': 'True', 'db_column': "'results_link'", 'blank': 'True'}),
126 'actual_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
127 'definition': ('django.db.models.fields.TextField', [], {}),
128 'description': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}),
129 'end_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
130 'failure_comment': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
131 'failure_tags': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'failure_tags'", 'blank': 'True', 'to': "orm['lava_scheduler_app.JobFailureTag']"}),
132 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}),
133 'health_check': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
134 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
135 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
136 'log_file': ('django.db.models.fields.files.FileField', [], {'default': 'None', 'max_length': '100', 'null': 'True', 'blank': 'True'}),
137 'multinode_definition': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
138 'priority': ('django.db.models.fields.IntegerField', [], {'default': '50'}),
139 'requested_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}),
140 'requested_device_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.DeviceType']"}),
141 'start_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
142 'status': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
143 'sub_id': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
144 'submit_time': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
145 'submit_token': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['linaro_django_xmlrpc.AuthToken']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
146 'submitter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'to': "orm['auth.User']"}),
147 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}),
148 'target_group': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '64', 'null': 'True', 'blank': 'True'}),
149 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'})
150 },
151 'linaro_django_xmlrpc.authtoken': {
152 'Meta': {'object_name': 'AuthToken'},
153 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
154 'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}),
155 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
156 'last_used_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
157 'secret': ('django.db.models.fields.CharField', [], {'default': "'g4fgt7t5qdghq3qo3t3h5dhbj6fes2zh8n6lkncc0u0rcqxy0kaez7aacw05nc0oxjc3060pj0f1fsunjpo1btk6urfpt8xfmgefcatgmh1e7kj0ams90ikni05sd5qk'", 'unique': 'True', 'max_length': '128'}),
158 'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_tokens'", 'to': "orm['auth.User']"})
159 }
160 }
161
162 complete_apps = ['lava_scheduler_app']
0\ No newline at end of file163\ No newline at end of file
1164
=== modified file 'lava_scheduler_app/models.py'
--- lava_scheduler_app/models.py 2013-08-19 11:41:55 +0000
+++ lava_scheduler_app/models.py 2013-08-28 13:47:23 +0000
@@ -1,5 +1,6 @@
 import logging
 import os
+import uuid
 import simplejson
 import urlparse

@@ -18,6 +19,7 @@
 from dashboard_app.models import Bundle, BundleStream

 from lava_dispatcher.job import validate_job_data
+from lava_scheduler_app import utils

 from linaro_django_xmlrpc.models import AuthToken

@@ -26,6 +28,10 @@
     """Error raised when JSON is syntactically valid but ill-formed."""


+class DevicesUnavailableException(UserWarning):
+    """Error raised when required number of devices are unavailable."""
+
+
 class Tag(models.Model):

     name = models.SlugField(unique=True)
@@ -44,6 +50,38 @@
         raise ValidationError(e)


+def check_device_availability(requested_devices):
+    """Checks whether the number of devices requested is available.
+
+    See utils.requested_device_count() for details of REQUESTED_DEVICES
+    dictionary format.
+
+    Returns True if the requested number of devices are available, else
+    raises DevicesUnavailableException.
+    """
+    device_types = DeviceType.objects.values_list('name').filter(
+        models.Q(device__status=Device.IDLE) | \
+        models.Q(device__status=Device.RUNNING)
+    ).annotate(
+        num_count=models.Count('name')
+    ).order_by('name')
+
+    if requested_devices:
+        all_devices = {}
+        for dt in device_types:
+            # dt[0] -> device type name
+            # dt[1] -> device type count
+            all_devices[dt[0]] = dt[1]
+
+        for board, count in requested_devices.iteritems():
+            if all_devices.get(board, None) and count <= all_devices[board]:
+                continue
+            else:
+                raise DevicesUnavailableException(
+                    "Required number of device(s) unavailable.")
+    return True
+
+
 class DeviceType(models.Model):
     """
     A class of device, for example a pandaboard or a snowball.
@@ -245,6 +283,20 @@

     id = models.AutoField(primary_key=True)

+    sub_id = models.CharField(
+        verbose_name=_(u"Sub ID"),
+        blank=True,
+        max_length=200
+    )
+
+    target_group = models.CharField(
+        verbose_name=_(u"Target Group"),
+        blank=True,
+        max_length=64,
+        null=True,
+        default=None
+    )
+
     submitter = models.ForeignKey(
         User,
         verbose_name=_(u"Submitter"),
@@ -317,7 +369,16 @@
     )

     definition = models.TextField(
-        editable=False,
+<<<<<<< TREE
+        editable=False,
+=======
+        editable=False,
+    )
+
+    multinode_definition = models.TextField(
+        editable=False,
+        blank=True
+>>>>>>> MERGE-SOURCE
     )

     log_file = models.FileField(
@@ -386,17 +447,34 @@

     @classmethod
     def from_json_and_user(cls, json_data, user, health_check=False):
+        requested_devices = utils.requested_device_count(json_data)
+        check_device_availability(requested_devices)
         job_data = simplejson.loads(json_data)
         validate_job_data(job_data)
+
+        # Validate job, for parameters, specific to multinode that has been
+        # input by the user. These parameters are reserved by LAVA and
+        # generated during job submissions.
+        reserved_job_params = ["group_size", "role", "sub_id", "target_group"]
+        reserved_params_found = set(reserved_job_params).intersection(
+            set(job_data.keys()))
+        if reserved_params_found:
+            raise JSONDataError("Reserved parameters found in job data %s" %
+                                str([x for x in reserved_params_found]))
+
         if 'target' in job_data:
             target = Device.objects.get(hostname=job_data['target'])
             device_type = None
         elif 'device_type' in job_data:
             target = None
             device_type = DeviceType.objects.get(name=job_data['device_type'])
+        elif 'device_group' in job_data:
+            target = None
+            device_type = None
         else:
             raise JSONDataError(
-                "Neither 'target' nor 'device_type' found in job data.")
+                "No 'target' or 'device_type' or 'device_group' are found "
+                "in job data.")

         priorities = dict([(j.upper(), i) for i, j in cls.PRIORITY_CHOICES])
         priority = cls.MEDIUM
@@ -449,6 +527,7 @@
                             bundle_stream.is_public)
                 server = action['parameters']['server']
                 parsed_server = urlparse.urlsplit(server)
+                action["parameters"]["server"] = utils.rewrite_hostname(server)
                 if parsed_server.hostname is None:
                     raise ValueError("invalid server: %s" % server)

@@ -458,15 +537,49 @@
                 tags.append(Tag.objects.get(name=tag_name))
             except Tag.DoesNotExist:
                 raise JSONDataError("tag %r does not exist" % tag_name)
-        job = TestJob(
-            definition=json_data, submitter=submitter,
-            requested_device=target, requested_device_type=device_type,
-            description=job_name, health_check=health_check, user=user,
-            group=group, is_public=is_public, priority=priority)
-        job.save()
-        for tag in tags:
-            job.tags.add(tag)
-        return job
+
+        if 'device_group' in job_data:
+            target_group = str(uuid.uuid4())
+            node_json = utils.split_multi_job(job_data, target_group)
+            job_list = []
+            try:
+                parent_id = (TestJob.objects.latest('id')).id + 1
+            except:
+                parent_id = 1
+            child_id = 0
+
+            for role in node_json:
+                role_count = len(node_json[role])
+                for c in range(0, role_count):
+                    device_type = DeviceType.objects.get(
+                        name=node_json[role][c]["device_type"])
+                    sub_id = '.'.join([str(parent_id), str(child_id)])
+
+                    # Add sub_id to the generated job dictionary.
+                    node_json[role][c]["sub_id"] = sub_id
+
+                    job = TestJob(
+                        sub_id=sub_id, submitter=submitter,
+                        requested_device=target, description=job_name,
+                        requested_device_type=device_type,
+                        definition=simplejson.dumps(node_json[role][c]),
+                        multinode_definition=json_data,
+                        health_check=health_check, user=user, group=group,
+                        is_public=is_public, priority=priority,
+                        target_group=target_group)
+                    job.save()
+                    job_list.append(sub_id)
+                    child_id += 1
+            return job_list
+
+        else:
+            job = TestJob(
+                definition=simplejson.dumps(job_data), submitter=submitter,
+                requested_device=target, requested_device_type=device_type,
+                description=job_name, health_check=health_check, user=user,
+                group=group, is_public=is_public, priority=priority)
+            job.save()
+            return job

     def _can_admin(self, user):
         """ used to check for things like if the user can cancel or annotate
@@ -529,6 +642,22 @@
             "LAVA job notification: " + description, mail,
             settings.SERVER_EMAIL, recipients)

+    @property
+    def sub_jobs_list(self):
+        if self.is_multinode:
+            jobs = TestJob.objects.filter(
+                target_group=self.target_group).order_by('id')
+            return jobs
+        else:
+            return None
+
+    @property
+    def is_multinode(self):
+        if self.target_group:
+            return True
+        else:
+            return False
+

 class DeviceStateTransition(models.Model):
     created_on = models.DateTimeField(auto_now_add=True)

=== modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_definition.html'
--- lava_scheduler_app/templates/lava_scheduler_app/job_definition.html 2011-12-09 03:55:33 +0000
+++ lava_scheduler_app/templates/lava_scheduler_app/job_definition.html 2013-08-28 13:47:23 +0000
@@ -10,7 +10,7 @@
 {% endblock %}

 {% block content %}
-<h2>Job defintion file - {{ job.id }} </h2>
+<h2>Job definition file - {{ job.id }} </h2>
 <a href="{% url lava.scheduler.job.definition.plain job.pk %}">Download as text file</a>
 <pre class="brush: js">{{ job.definition }}</pre>


=== modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html'
--- lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html 2013-01-15 17:44:36 +0000
+++ lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html 2013-08-28 13:47:23 +0000
@@ -62,6 +62,17 @@

   <dt>Finished at:</dt>
   <dd>{{ job.end_time|default:"not finished" }}</dd>
+
+  {% if job.is_multinode %}
+  <dt>Sub Jobs:</dt>
+  {% for subjob in job.sub_jobs_list %}
+  <dd>
+    <a href="{% url lava.scheduler.job.detail subjob.pk %}">
+    {{ subjob.sub_id }}</a>
+  </dd>
+  {% endfor %}
+  {% endif %}
+
 </dl>
 <h2>Views</h2>
 <ul>
@@ -76,6 +87,11 @@
   <li>
     <a href="{% url lava.scheduler.job.definition job.pk %}">Definition</a>
   </li>
+  {% if job.is_multinode %}
+  <li>
+    <a href="{% url lava.scheduler.job.multinode_definition job.pk %}"> Multinode Definition</a>
+  </li>
+  {% endif %}
   {% if job.results_link %}
   <li>
     <a href="{{ job.results_link }}">Results Bundle</a>

=== modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_submit.html'
--- lava_scheduler_app/templates/lava_scheduler_app/job_submit.html 2013-06-28 09:43:44 +0000
+++ lava_scheduler_app/templates/lava_scheduler_app/job_submit.html 2013-08-28 13:47:23 +0000
@@ -31,6 +31,16 @@
 To view the full job list click <a href="{{ list_url }}">here</a>.
 </div>

+{% elif job_list %}
+{% url lava.scheduler.job.list as list_url %}
+<div id="job-success">Multinode Job submission successfull!
+<br>
+<br>
+Jobs with ID {{ job_list }}</a> has been created.
+<br>
+To view the full job list click <a href="{{ list_url }}">here</a>.
+</div>
+
 {% else %}

 {% if error %}

=== added file 'lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html'
--- lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html 1970-01-01 00:00:00 +0000
+++ lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html 2013-08-28 13:47:23 +0000
@@ -0,0 +1,21 @@
+{% extends "lava_scheduler_app/job_sidebar.html" %}
+
+{% block extrahead %}
+{{ block.super }}
+<script type="text/javascript" src="{{ STATIC_URL }}lava_scheduler_app/js/shCore.js"></script>
+<script type="text/javascript" src="{{ STATIC_URL }}lava_scheduler_app/js/shBrushJScript.js"></script>
+
+<link href="{{ STATIC_URL }}lava_scheduler_app/css/shCore.css" rel="stylesheet" type="text/css" />
+<link href="{{ STATIC_URL }}lava_scheduler_app/css/shThemeDefault.css" rel="stylesheet" type="text/css" />
+{% endblock %}
+
+{% block content %}
+<h2>Multinode Job definition file - {{ job.sub_id }} </h2>
+<a href="{% url lava.scheduler.job.multinode_definition.plain job.pk %}">Download as text file</a>
+<pre class="brush: js">{{ job.multinode_definition }}</pre>
+
+<script type="text/javascript">
+  SyntaxHighlighter.all()
+</script>
+
+{% endblock %}
=== modified file 'lava_scheduler_app/urls.py'
--- lava_scheduler_app/urls.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_app/urls.py 2013-08-28 13:47:23 +0000
@@ -79,8 +79,19 @@
         'job_definition',
         name='lava.scheduler.job.definition'),
     url(r'^job/(?P<pk>[0-9]+)/definition/plain$',
-        'job_definition_plain',
-        name='lava.scheduler.job.definition.plain'),
+<<<<<<< TREE
+        'job_definition_plain',
+        name='lava.scheduler.job.definition.plain'),
+=======
+        'job_definition_plain',
+        name='lava.scheduler.job.definition.plain'),
+    url(r'^job/(?P<pk>[0-9]+)/multinode_definition$',
+        'multinode_job_definition',
+        name='lava.scheduler.job.multinode_definition'),
+    url(r'^job/(?P<pk>[0-9]+)/multinode_definition/plain$',
+        'multinode_job_definition_plain',
+        name='lava.scheduler.job.multinode_definition.plain'),
+>>>>>>> MERGE-SOURCE
     url(r'^job/(?P<pk>[0-9]+)/log_file$',
         'job_log_file',
         name='lava.scheduler.job.log_file'),
 
=== added file 'lava_scheduler_app/utils.py'
--- lava_scheduler_app/utils.py 1970-01-01 00:00:00 +0000
+++ lava_scheduler_app/utils.py 2013-08-28 13:47:23 +0000
@@ -0,0 +1,117 @@
1# Copyright (C) 2013 Linaro Limited
2#
3# Author: Neil Williams <neil.williams@linaro.org>
4# Senthil Kumaran <senthil.kumaran@linaro.org>
5#
6# This file is part of LAVA Scheduler.
7#
8# LAVA Scheduler is free software: you can redistribute it and/or modify it
9# under the terms of the GNU Affero General Public License version 3 as
10# published by the Free Software Foundation
11#
12# LAVA Scheduler is distributed in the hope that it will be useful, but
13# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
14# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
15# more details.
16#
17# You should have received a copy of the GNU Affero General Public License
18# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>.
19
20import re
21import copy
22import socket
23import urlparse
24import simplejson
25
26
27def rewrite_hostname(result_url):
28 """If URL has hostname value as localhost/127.0.0.*, change it to the
29 actual server FQDN.
30
31 Returns the RESULT_URL (string) re-written with hostname.
32
33 See https://cards.linaro.org/browse/LAVA-611
34 """
35 host = urlparse.urlparse(result_url).netloc
36 if host == "localhost":
37 result_url = result_url.replace("localhost", socket.getfqdn())
38 elif host.startswith("127.0.0"):
39 ip_pat = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
40 result_url = re.sub(ip_pat, socket.getfqdn(), result_url)
41 return result_url
42
43
44def split_multi_job(json_jobdata, target_group):
45 node_json = {}
46 all_nodes = {}
47 node_actions = {}
48
49 # Check if we are operating on multinode job data. Else return the job
50 # data as it is.
51 if "device_group" in json_jobdata and target_group:
52 pass
53 else:
54 return json_jobdata
55
56 # get all the roles and create node action list for each role.
57 for group in json_jobdata["device_group"]:
58 node_actions[group["role"]] = []
59
60 # Take each action and assign it to proper roles. If roles are not
61 # specified for a specific action, then assign it to all the roles.
62 all_actions = json_jobdata["actions"]
63 for role in node_actions.keys():
64 for action in all_actions:
65 new_action = copy.deepcopy(action)
66 if 'parameters' in new_action \
67 and 'role' in new_action["parameters"]:
68 if new_action["parameters"]["role"] == role:
69 new_action["parameters"].pop('role', None)
70 node_actions[role].append(new_action)
71 else:
72 node_actions[role].append(new_action)
73
74 group_count = 0
75 for clients in json_jobdata["device_group"]:
76 group_count += int(clients["count"])
77 for clients in json_jobdata["device_group"]:
78 role = str(clients["role"])
79 count = int(clients["count"])
80 node_json[role] = []
81 for c in range(0, count):
82 node_json[role].append({})
83 node_json[role][c]["timeout"] = json_jobdata["timeout"]
84 node_json[role][c]["job_name"] = json_jobdata["job_name"]
85 node_json[role][c]["tags"] = clients["tags"]
86 node_json[role][c]["group_size"] = group_count
87 node_json[role][c]["target_group"] = target_group
88 node_json[role][c]["actions"] = node_actions[role]
89
90 node_json[role][c]["role"] = role
91 # multinode node stage 2
92 node_json[role][c]["logging_level"] = json_jobdata["logging_level"]
93 node_json[role][c]["device_type"] = clients["device_type"]
94
95 return node_json
96
97
98def requested_device_count(json_data):
99 """Utility function check the requested number of devices for each
100 device_type in a multinode job.
101
102 JSON_DATA is the job definition string.
103
104 Returns requested_device which is a dictionary of the following format:
105
106 {'kvm': 1, 'qemu': 3, 'panda': 1}
107
108 If the job is not a multinode job, then return an empty dictionary.
109 """
110 job_data = simplejson.loads(json_data)
111 requested_devices = {}
112 if 'device_group' in job_data:
113 for device_group in job_data['device_group']:
114 device_type = device_group['device_type']
115 count = device_group['count']
116 requested_devices[device_type] = count
117 return requested_devices
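
The two helpers above do the heavy lifting for MultiNode submissions: requested_device_count() tells the submission path how many devices of each type a job asks for, and split_multi_job() explodes one MultiNode definition into per-role node definitions that share a target_group. A minimal sketch of how they behave; the job definition, role names and group string below are invented for illustration and are not taken from the branch:

import simplejson

from lava_scheduler_app.utils import requested_device_count, split_multi_job

# Hypothetical two-role MultiNode submission, for illustration only.
MULTINODE_JSON = """
{
    "timeout": 900,
    "job_name": "multinode-smoke-test",
    "logging_level": "INFO",
    "device_group": [
        {"role": "server", "count": 1, "device_type": "kvm", "tags": []},
        {"role": "client", "count": 2, "device_type": "panda", "tags": []}
    ],
    "actions": [
        {"command": "deploy_linaro_image",
         "parameters": {"image": "file:///tmp/beagle.img"}},
        {"command": "lava_test_shell",
         "parameters": {"role": "client", "testdef": "smoke.yaml"}}
    ]
}
"""

# Expected: {'kvm': 1, 'panda': 2}; a plain single-node job returns {}.
print requested_device_count(MULTINODE_JSON)

# One node definition per requested device, keyed by role; every node
# carries group_size (3 here) and the shared target_group identifier.
nodes = split_multi_job(simplejson.loads(MULTINODE_JSON), "example-target-group")
for role, node_list in nodes.items():
    print role, len(node_list), [node["device_type"] for node in node_list]

With this input the server role ends up with only the deploy action, while the client nodes also receive the lava_test_shell action with its role parameter stripped.
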
=== modified file 'lava_scheduler_app/views.py'
--- lava_scheduler_app/views.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_app/views.py 2013-08-28 13:47:23 +0000
@@ -50,7 +50,12 @@
     Device,
     DeviceType,
     DeviceStateTransition,
-    TestJob,
+<<<<<<< TREE
+    TestJob,
+=======
+    TestJob,
+    JSONDataError,
+>>>>>>> MERGE-SOURCE
     validate_job_json,
 )
 
@@ -74,10 +79,16 @@
 
 
 def pklink(record):
+    job_id = record.pk
+    try:
+        if record.sub_id:
+            job_id = record.sub_id
+    except:
+        pass
     return mark_safe(
         '<a href="%s">%s</a>' % (
             record.get_absolute_url(),
-            escape(record.pk)))
+            escape(job_id)))
 
 
 class IDLinkColumn(Column):
@@ -100,11 +111,19 @@
 
 
 def all_jobs_with_device_sort():
+<<<<<<< TREE
     return TestJob.objects.select_related(
         "actual_device", "requested_device", "requested_device_type",
         "submitter", "user", "group")\
         .extra(select={'device_sort': 'coalesce(actual_device_id, requested_device_id, '
                        'requested_device_type_id)'}).all()
+=======
+    jobs = TestJob.objects.select_related("actual_device", "requested_device",
+        "requested_device_type", "submitter", "user", "group")\
+        .extra(select={'device_sort': 'coalesce(actual_device_id, '
+        'requested_device_id, requested_device_type_id)'}).all()
+    return jobs.order_by('submit_time')
+>>>>>>> MERGE-SOURCE
 
 
 class JobTable(DataTablesTable):
@@ -124,7 +143,7 @@
         else:
             return ''
 
-    id = RestrictedIDLinkColumn()
+    sub_id = RestrictedIDLinkColumn()
     status = Column()
     priority = Column()
     device = Column(accessor='device_sort')
@@ -135,9 +154,15 @@
     duration = Column()
 
     datatable_opts = {
+<<<<<<< TREE
         'aaSorting': [[0, 'desc']],
     }
     searchable_columns = ['description']
+=======
+        'aaSorting': [[6, 'desc']],
+    }
+    searchable_columns = ['description']
+>>>>>>> MERGE-SOURCE
 
 
 class IndexJobTable(JobTable):
@@ -296,6 +321,10 @@
     class Meta:
         exclude = ('status', 'submitter', 'end_time', 'priority', 'description')
 
+    datatable_opts = {
+        'aaSorting': [[2, 'desc']],
+    }
+
 
 def failed_jobs_json(request):
     return FailedJobTable.json(request, params=(request,))
@@ -499,6 +528,10 @@
     class Meta:
         exclude = ('description', 'device')
 
+    datatable_opts = {
+        'aaSorting': [[4, 'desc']],
+    }
+
 
 def health_jobs_json(request, pk):
     device = get_object_or_404(Device, pk=pk)
@@ -582,12 +615,15 @@
             job = TestJob.from_json_and_user(
                 request.POST.get("json-input"), request.user)
 
-            response_data["job_id"] = job.id
+            if isinstance(job, type(list())):
+                response_data["job_list"] = job
+            else:
+                response_data["job_id"] = job.id
             return render_to_response(
                 "lava_scheduler_app/job_submit.html",
                 response_data, RequestContext(request))
 
-        except Exception as e:
+        except (JSONDataError, ValueError) as e:
             response_data["error"] = str(e)
             response_data["json_input"] = request.POST.get("json-input")
             return render_to_response(
@@ -660,7 +696,30 @@
 def job_definition_plain(request, pk):
     job = get_restricted_job(request.user, pk)
     response = HttpResponse(job.definition, mimetype='text/plain')
-    response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id
+<<<<<<< TREE
+    response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id
+=======
+    response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id
+    return response
+
+
+def multinode_job_definition(request, pk):
+    job = get_restricted_job(request.user, pk)
+    log_file = job.output_file()
+    return render_to_response(
+        "lava_scheduler_app/multinode_job_definition.html",
+        {
+            'job': job,
+            'job_file_present': bool(log_file),
+        },
+        RequestContext(request))
+
+
+def multinode_job_definition_plain(request, pk):
+    job = get_restricted_job(request.user, pk)
+    response = HttpResponse(job.multinode_definition, mimetype='text/plain')
+    response['Content-Disposition'] = "attachment; filename=multinode_job_%d.json" % job.id
+>>>>>>> MERGE-SOURCE
     return response
 
 
@@ -764,7 +823,13 @@
 def job_cancel(request, pk):
     job = get_restricted_job(request.user, pk)
     if job.can_cancel(request.user):
-        job.cancel()
+        if job.is_multinode:
+            multinode_jobs = TestJob.objects.all().filter(
+                target_group=job.target_group)
+            for multinode_job in multinode_jobs:
+                multinode_job.cancel()
+        else:
+            job.cancel()
         return redirect(job)
     else:
         return HttpResponseForbidden(
@@ -773,11 +838,38 @@
 
 @post_only
 def job_resubmit(request, pk):
+
+    response_data = {
+        'is_authorized': False,
+        'bread_crumb_trail': BreadCrumbTrail.leading_to(job_list),
+    }
+
     job = get_restricted_job(request.user, pk)
     if job.can_resubmit(request.user):
-        definition = job.definition
-        job = TestJob.from_json_and_user(definition, request.user)
-        return redirect(job)
+        response_data["is_authorized"] = True
+
+        if job.is_multinode:
+            definition = job.multinode_definition
+        else:
+            definition = job.definition
+
+        try:
+            job = TestJob.from_json_and_user(definition, request.user)
+
+            if isinstance(job, type(list())):
+                response_data["job_list"] = job
+                return render_to_response(
+                    "lava_scheduler_app/job_submit.html",
+                    response_data, RequestContext(request))
+            else:
+                return redirect(job)
+        except Exception as e:
+            response_data["error"] = str(e)
+            response_data["json_input"] = definition
+            return render_to_response(
+                "lava_scheduler_app/job_submit.html",
+                response_data, RequestContext(request))
+
     else:
         return HttpResponseForbidden(
             "you cannot re-submit this job", content_type="text/plain")
 
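
Both the submit and resubmit paths above rely on the same convention: TestJob.from_json_and_user() returns a single TestJob for an ordinary submission but a list of sub-jobs for a MultiNode one. A compact sketch of that branching outside the Django view plumbing; only the return-type convention from this diff is assumed, the function name is invented:

def submission_context(job_or_list):
    """Build the context job_submit.html expects.

    A list means a MultiNode submission: the template lists every
    sub-job id.  A single TestJob keeps the existing job_id behaviour.
    """
    context = {}
    if isinstance(job_or_list, list):
        context['job_list'] = job_or_list
    else:
        context['job_id'] = job_or_list.id
    return context
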
=== renamed file 'lava_scheduler_daemon/board.py' => 'lava_scheduler_daemon/board.py.THIS'
=== modified file 'lava_scheduler_daemon/dbjobsource.py'
--- lava_scheduler_daemon/dbjobsource.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_daemon/dbjobsource.py 2013-08-28 13:47:23 +0000
@@ -3,6 +3,7 @@
 import os
 import shutil
 import urlparse
+import copy
 
 from dashboard_app.models import Bundle
 
@@ -92,21 +93,135 @@
         transaction.leave_transaction_management()
         return self.deferToThread(wrapper, *args, **kw)
 
-    def getBoardList_impl(self):
+    def _get_health_check_jobs(self):
+        """Gets the list of configured boards and checks which are the boards
+        that require health check.
+
+        Returns JOB_LIST which is a set of health check jobs. If no health
+        check jobs are available returns an empty set.
+        """
+        job_list = set()
         configured_boards = [
             x.hostname for x in dispatcher_config.get_devices()]
         boards = []
         for d in Device.objects.all():
             if d.hostname in configured_boards:
-                boards.append({'hostname': d.hostname})
-        return boards
-
-    def getBoardList(self):
-        return self.deferForDB(self.getBoardList_impl)
+                boards.append(d)
+
+        for device in boards:
+            if device.status != Device.IDLE:
+                continue
+            if not device.device_type.health_check_job:
+                run_health_check = False
+            elif device.health_status == Device.HEALTH_UNKNOWN:
+                run_health_check = True
+            elif device.health_status == Device.HEALTH_LOOPING:
+                run_health_check = True
+            elif not device.last_health_report_job:
+                run_health_check = True
+            else:
+                run_health_check = device.last_health_report_job.end_time < \
+                    datetime.datetime.now() - datetime.timedelta(days=1)
+            if run_health_check:
+                job_list.add(self._getHealthCheckJobForBoard(device))
+        return job_list
+
+    def _fix_device(self, device, job):
+        """Associate an available/idle DEVICE to the given JOB.
+
+        Returns the job with actual_device set to DEVICE.
+
+        If we are unable to grab the DEVICE then we return None.
+        """
+        DeviceStateTransition.objects.create(
+            created_by=None, device=device, old_state=device.status,
+            new_state=Device.RUNNING, message=None, job=job).save()
+        device.status = Device.RUNNING
+        device.current_job = job
+        try:
+            # The unique constraint on current_job may cause this to
+            # fail in the case of concurrent requests for different
+            # boards grabbing the same job. If there are concurrent
+            # requests for the *same* board they may both return the
+            # same job -- this is an application level bug though.
+            device.save()
+        except IntegrityError:
+            self.logger.info(
+                "job %s has been assigned to another board -- rolling back",
+                job.id)
+            transaction.rollback()
+            return None
+        else:
+            job.actual_device = device
+            job.log_file.save(
+                'job-%s.log' % job.id, ContentFile(''), save=False)
+            job.submit_token = AuthToken.objects.create(user=job.submitter)
+            job.definition = simplejson.dumps(self._get_json_data(job),
+                                              sort_keys=True,
+                                              indent=4 * ' ')
+            job.save()
+            transaction.commit()
+            return job
+
+    def getJobList_impl(self):
+        jobs = TestJob.objects.all().filter(
+            status=TestJob.SUBMITTED).order_by('-priority', 'submit_time')
+        job_list = self._get_health_check_jobs()
+        devices = None
+        configured_boards = [
+            x.hostname for x in dispatcher_config.get_devices()]
+        self.logger.debug("Number of configured_devices: %d" % len(configured_boards))
+        for job in jobs:
+            if job.actual_device:
+                job_list.add(job)
+            elif job.requested_device:
+                self.logger.debug("Checking Requested Device")
+                devices = Device.objects.all().filter(
+                    hostname=job.requested_device.hostname,
+                    status=Device.IDLE)
+            elif job.requested_device_type:
+                self.logger.debug("Checking Requested Device Type")
+                devices = Device.objects.all().filter(
+                    device_type=job.requested_device_type,
+                    status=Device.IDLE)
+            else:
+                continue
+            if devices:
+                for d in devices:
+                    self.logger.debug("Checking %s" % d.hostname)
+                    if d.hostname in configured_boards:
+                        if job:
+                            job = self._fix_device(d, job)
+                        if job:
+                            job_list.add(job)
+
+        # Remove scheduling multinode jobs until all the jobs in the
+        # target_group are assigned devices.
+        final_job_list = copy.deepcopy(job_list)
+        for job in job_list:
+            if job.is_multinode:
+                multinode_jobs = TestJob.objects.all().filter(
+                    target_group=job.target_group)
+
+                jobs_with_device = 0
+                for multinode_job in multinode_jobs:
+                    if multinode_job.actual_device:
+                        jobs_with_device += 1
+
+                if len(multinode_jobs) != jobs_with_device:
+                    final_job_list.difference_update(set(multinode_jobs))
+
+        return final_job_list
+
+    def getJobList(self):
+        return self.deferForDB(self.getJobList_impl)
 
     def _get_json_data(self, job):
         json_data = simplejson.loads(job.definition)
-        json_data['target'] = job.actual_device.hostname
+        if job.actual_device:
+            json_data['target'] = job.actual_device.hostname
+        elif job.requested_device:
+            json_data['target'] = job.requested_device.hostname
         for action in json_data['actions']:
             if not action['command'].startswith('submit_results'):
                 continue
@@ -171,64 +286,19 @@
         else:
             return None
 
-    def getJobForBoard_impl(self, board_name):
-        while True:
-            device = Device.objects.get(hostname=board_name)
-            if device.status != Device.IDLE:
-                return None
-            if not device.device_type.health_check_job:
-                run_health_check = False
-            elif device.health_status == Device.HEALTH_UNKNOWN:
-                run_health_check = True
-            elif device.health_status == Device.HEALTH_LOOPING:
-                run_health_check = True
-            elif not device.last_health_report_job:
-                run_health_check = True
-            else:
-                run_health_check = device.last_health_report_job.end_time < datetime.datetime.now() - datetime.timedelta(days=1)
-            if run_health_check:
-                job = self._getHealthCheckJobForBoard(device)
-            else:
-                job = self._getJobFromQueue(device)
-            if job:
-                DeviceStateTransition.objects.create(
-                    created_by=None, device=device, old_state=device.status,
-                    new_state=Device.RUNNING, message=None, job=job).save()
-                job.status = TestJob.RUNNING
-                job.start_time = datetime.datetime.utcnow()
-                job.actual_device = device
-                device.status = Device.RUNNING
-                shutil.rmtree(job.output_dir, ignore_errors=True)
-                device.current_job = job
-                try:
-                    # The unique constraint on current_job may cause this to
-                    # fail in the case of concurrent requests for different
-                    # boards grabbing the same job. If there are concurrent
-                    # requests for the *same* board they may both return the
-                    # same job -- this is an application level bug though.
-                    device.save()
-                except IntegrityError:
-                    self.logger.info(
-                        "job %s has been assigned to another board -- "
-                        "rolling back", job.id)
-                    transaction.rollback()
-                    continue
-                else:
-                    job.log_file.save(
-                        'job-%s.log' % job.id, ContentFile(''), save=False)
-                    job.submit_token = AuthToken.objects.create(user=job.submitter)
-                    job.save()
-                    json_data = self._get_json_data(job)
-                    transaction.commit()
-                    return json_data
-            else:
-                # _getHealthCheckJobForBoard can offline the board, so commit
-                # in this branch too.
-                transaction.commit()
-                return None
+    def getJobDetails_impl(self, job):
+        job.status = TestJob.RUNNING
+        job.start_time = datetime.datetime.utcnow()
+        shutil.rmtree(job.output_dir, ignore_errors=True)
+        job.log_file.save('job-%s.log' % job.id, ContentFile(''), save=False)
+        job.submit_token = AuthToken.objects.create(user=job.submitter)
+        job.save()
+        json_data = self._get_json_data(job)
+        transaction.commit()
+        return json_data
 
-    def getJobForBoard(self, board_name):
-        return self.deferForDB(self.getJobForBoard_impl, board_name)
+    def getJobDetails(self, job):
+        return self.deferForDB(self.getJobDetails_impl, job)
 
     def getOutputDirForJobOnBoard_impl(self, board_name):
         device = Device.objects.get(hostname=board_name)
 
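
The scheduling rule at the end of getJobList_impl() is the heart of MultiNode support: a sub-job that already has a device is still held back until every job in its target_group has one, so a group never starts partially. A standalone sketch of that filter with plain objects instead of TestJob rows; the namedtuple and function name are invented for illustration:

from collections import namedtuple

FakeJob = namedtuple('FakeJob', 'id is_multinode target_group actual_device')


def release_ready_groups(candidates, all_jobs):
    """Return the candidates that may start now.

    Mirrors the difference_update() step above: if any sub-job sharing a
    target_group still has no actual_device, the whole group is held back.
    """
    ready = set(candidates)
    for job in candidates:
        if not job.is_multinode:
            continue
        group = [j for j in all_jobs if j.target_group == job.target_group]
        if any(j.actual_device is None for j in group):
            ready.difference_update(group)
    return ready


jobs = [
    FakeJob(1, True, 'group-a', 'panda01'),
    FakeJob(2, True, 'group-a', None),       # still waiting for a device
    FakeJob(3, False, None, 'kvm01'),
]
# Only the single-node job (id 3) is released; group-a waits for job 2.
print [job.id for job in release_ready_groups([jobs[0], jobs[2]], jobs)]
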
=== added file 'lava_scheduler_daemon/job.py'
--- lava_scheduler_daemon/job.py 1970-01-01 00:00:00 +0000
+++ lava_scheduler_daemon/job.py 2013-08-28 13:47:23 +0000
@@ -0,0 +1,281 @@
1# Copyright (C) 2013 Linaro Limited
2#
3# Author: Senthil Kumaran <senthil.kumaran@linaro.org>
4#
5# This file is part of LAVA Scheduler.
6#
7# LAVA Scheduler is free software: you can redistribute it and/or modify it
8# under the terms of the GNU Affero General Public License version 3 as
9# published by the Free Software Foundation
10#
11# LAVA Scheduler is distributed in the hope that it will be useful, but
12# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
13# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
14# more details.
15#
16# You should have received a copy of the GNU Affero General Public License
17# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>.
18
19import json
20import os
21import signal
22import tempfile
23import logging
24
25from twisted.internet.error import ProcessDone, ProcessExitedAlready
26from twisted.internet.protocol import ProcessProtocol
27from twisted.internet import defer, task
28
29
30def catchall_errback(logger):
31 def eb(failure):
32 logger.error(
33 '%s: %s\n%s', failure.type.__name__, failure.value,
34 failure.getTraceback())
35 return eb
36
37
38class DispatcherProcessProtocol(ProcessProtocol):
39
40 def __init__(self, deferred, job):
41 self.logger = logging.getLogger(__name__ + '.DispatcherProcessProtocol')
42 self.deferred = deferred
43 self.log_size = 0
44 self.job = job
45
46 def childDataReceived(self, childFD, data):
47 self.log_size += len(data)
48 if self.log_size > self.job.daemon_options['LOG_FILE_SIZE_LIMIT']:
49 if not self.job._killing:
50 self.job.cancel("exceeded log size limit")
51
52 def childConnectionLost(self, childFD):
53 self.logger.info("childConnectionLost for %s: %s",
54 self.job.board_name, childFD)
55
56 def processExited(self, reason):
57 self.logger.info("processExited for %s: %s",
58 self.job.board_name, reason.value)
59
60 def processEnded(self, reason):
61 self.logger.info("processEnded for %s: %s",
62 self.job.board_name, reason.value)
63 self.deferred.callback(reason.value.exitCode)
64
65
66class Job(object):
67
68 def __init__(self, job_data, dispatcher, source, board_name, reactor,
69 daemon_options):
70 self.job_data = job_data
71 self.dispatcher = dispatcher
72 self.source = source
73 self.board_name = board_name
74 self.logger = logging.getLogger(__name__ + '.Job.' + board_name)
75 self.reactor = reactor
76 self.daemon_options = daemon_options
77 self._json_file = None
78 self._source_lock = defer.DeferredLock()
79 self._checkCancel_call = task.LoopingCall(self._checkCancel)
80 self._signals = ['SIGINT', 'SIGINT', 'SIGTERM', 'SIGTERM', 'SIGKILL']
81 self._time_limit_call = None
82 self._killing = False
83 self._kill_reason = ''
84
85 def _checkCancel(self):
86 if self._killing:
87 self.cancel()
88 else:
89 return self._source_lock.run(
90 self.source.jobCheckForCancellation,
91 self.board_name).addCallback(self._maybeCancel)
92
93 def cancel(self, reason=None):
94 if not self._killing:
95 if reason is None:
96 reason = "killing job for unknown reason"
97 self._kill_reason = reason
98 self.logger.info(reason)
99 self._killing = True
100 if self._signals:
101 signame = self._signals.pop(0)
102 else:
103 self.logger.warning("self._signals is empty!")
104 signame = 'SIGKILL'
105 self.logger.info(
106 'attempting to kill job with signal %s' % signame)
107 try:
108 self._protocol.transport.signalProcess(getattr(signal, signame))
109 except ProcessExitedAlready:
110 pass
111
112 def _maybeCancel(self, cancel):
113 if cancel:
114 self.cancel("killing job by user request")
115 else:
116 logging.debug('not cancelling')
117
118 def _time_limit_exceeded(self):
119 self._time_limit_call = None
120 self.cancel("killing job for exceeding timeout")
121
122 def run(self):
123 d = self.source.getOutputDirForJobOnBoard(self.board_name)
124 return d.addCallback(self._run).addErrback(
125 catchall_errback(self.logger))
126
127 def _run(self, output_dir):
128 d = defer.Deferred()
129 json_data = self.job_data
130 fd, self._json_file = tempfile.mkstemp()
131 with os.fdopen(fd, 'wb') as f:
132 json.dump(json_data, f)
133 self._protocol = DispatcherProcessProtocol(d, self)
134 self.reactor.spawnProcess(
135 self._protocol, self.dispatcher, args=[
136 self.dispatcher, self._json_file, '--output-dir', output_dir],
137 childFDs={0: 0, 1: 'r', 2: 'r'}, env=None)
138 self._checkCancel_call.start(10)
139 timeout = max(
140 json_data['timeout'], self.daemon_options['MIN_JOB_TIMEOUT'])
141 self._time_limit_call = self.reactor.callLater(
142 timeout, self._time_limit_exceeded)
143 d.addBoth(self._exited)
144 return d
145
146 def _exited(self, exit_code):
147 self.logger.info("job finished on %s", self.job_data['target'])
148 if self._json_file is not None:
149 os.unlink(self._json_file)
150 self.logger.info("reporting job completed")
151 if self._time_limit_call is not None:
152 self._time_limit_call.cancel()
153 self._checkCancel_call.stop()
154 return self._source_lock.run(
155 self.source.jobCompleted,
156 self.board_name,
157 exit_code,
158 self._killing).addCallback(
159 lambda r: exit_code)
160
161
162class SchedulerMonitorPP(ProcessProtocol):
163
164 def __init__(self, d, board_name):
165 self.d = d
166 self.board_name = board_name
167 self.logger = logging.getLogger(__name__ + '.SchedulerMonitorPP')
168
169 def childDataReceived(self, childFD, data):
170 self.logger.warning(
171 "scheduler monitor for %s produced output: %r on fd %s",
172 self.board_name, data, childFD)
173
174 def processEnded(self, reason):
175 if not reason.check(ProcessDone):
176 self.logger.error(
177 "scheduler monitor for %s crashed: %s",
178 self.board_name, reason)
179 self.d.callback(None)
180
181
182class MonitorJob(object):
183
184 def __init__(self, job_data, dispatcher, source, board_name, reactor,
185 daemon_options):
186 self.logger = logging.getLogger(__name__ + '.MonitorJob')
187 self.job_data = job_data
188 self.dispatcher = dispatcher
189 self.source = source
190 self.board_name = board_name
191 self.reactor = reactor
192 self.daemon_options = daemon_options
193 self._json_file = None
194
195 def run(self):
196 d = defer.Deferred()
197 json_data = self.job_data
198 fd, self._json_file = tempfile.mkstemp()
199 with os.fdopen(fd, 'wb') as f:
200 json.dump(json_data, f)
201
202 childFDs = {0: 0, 1: 1, 2: 2}
203 args = [
204 'setsid', 'lava-server', 'manage', 'schedulermonitor',
205 self.dispatcher, str(self.board_name), self._json_file,
206 '-l', self.daemon_options['LOG_LEVEL']]
207 if self.daemon_options['LOG_FILE_PATH']:
208 args.extend(['-f', self.daemon_options['LOG_FILE_PATH']])
209 childFDs = None
210 self.logger.info('executing "%s"', ' '.join(args))
211 self.reactor.spawnProcess(
212 SchedulerMonitorPP(d, self.board_name), 'setsid',
213 childFDs=childFDs, env=None, args=args)
214 d.addBoth(self._exited)
215 return d
216
217 def _exited(self, result):
218 if self._json_file is not None:
219 os.unlink(self._json_file)
220 return result
221
222
223class JobRunner(object):
224 job_cls = MonitorJob
225
226 def __init__(self, source, job, dispatcher, reactor, daemon_options,
227 job_cls=None):
228 self.source = source
229 self.dispatcher = dispatcher
230 self.reactor = reactor
231 self.daemon_options = daemon_options
232 self.job = job
233 if job.actual_device:
234 self.board_name = job.actual_device.hostname
235 elif job.requested_device:
236 self.board_name = job.requested_device.hostname
237 if job_cls is not None:
238 self.job_cls = job_cls
239 self.running_job = None
240 self.logger = logging.getLogger(__name__ + '.JobRunner.' + str(job.id))
241
242 def start(self):
243 self.logger.debug("processing job")
244 if self.job is None:
245 self.logger.debug("no job found for processing")
246 return
247 self.source.getJobDetails(self.job).addCallbacks(
248 self._startJob, self._ebStartJob)
249
250 def _startJob(self, job_data):
251 if job_data is None:
252 self.logger.debug("no job found")
253 return
254 self.logger.info("starting job %r", job_data)
255
256 self.running_job = self.job_cls(
257 job_data, self.dispatcher, self.source, self.board_name,
258 self.reactor, self.daemon_options)
259 d = self.running_job.run()
260 d.addCallbacks(self._cbJobFinished, self._ebJobFinished)
261
262 def _ebStartJob(self, result):
263 self.logger.error(
264 '%s: %s\n%s', result.type.__name__, result.value,
265 result.getTraceback())
266 return
267
268 def stop(self):
269 self.logger.debug("stopping")
270
271 if self.running_job is not None:
272 self.logger.debug("job running; deferring stop")
273 else:
274 self.logger.debug("stopping immediately")
275 return defer.succeed(None)
276
277 def _ebJobFinished(self, result):
278 self.logger.exception(result.value)
279
280 def _cbJobFinished(self, result):
281 self.running_job = None
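
Job.cancel() above escalates through a fixed list of signals, one per cancellation attempt driven by the 10-second _checkCancel loop, so a dispatcher that shrugs off SIGINT is eventually SIGKILLed. The same escalation in miniature, without Twisted; plain subprocess is used here and only the signal order is taken from the diff:

import signal
import subprocess
import time

# Same order as Job._signals: two interrupts, two terminates, then kill.
ESCALATION = ['SIGINT', 'SIGINT', 'SIGTERM', 'SIGTERM', 'SIGKILL']


def cancel_with_escalation(process, wait=1.0):
    """Send escalating signals until PROCESS exits."""
    for name in ESCALATION:
        if process.poll() is not None:
            break  # process already exited
        process.send_signal(getattr(signal, name))
        time.sleep(wait)  # the daemon waits 10s between attempts; shorter here


if __name__ == '__main__':
    proc = subprocess.Popen(['sleep', '60'])
    cancel_with_escalation(proc)
    print proc.wait()  # negative exit status: terminated by a signal
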
=== modified file 'lava_scheduler_daemon/service.py'
--- lava_scheduler_daemon/service.py 2012-12-03 05:03:38 +0000
+++ lava_scheduler_daemon/service.py 2013-08-28 13:47:23 +0000
@@ -1,58 +1,56 @@
+# Copyright (C) 2013 Linaro Limited
+#
+# Author: Senthil Kumaran <senthil.kumaran@linaro.org>
+#
+# This file is part of LAVA Scheduler.
+#
+# LAVA Scheduler is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3 as
+# published by the Free Software Foundation
+#
+# LAVA Scheduler is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>.
+
 import logging
 
 from twisted.application.service import Service
 from twisted.internet import defer
 from twisted.internet.task import LoopingCall
 
-from lava_scheduler_daemon.board import Board, catchall_errback
+from lava_scheduler_daemon.job import JobRunner, catchall_errback
 
 
-class BoardSet(Service):
+class JobQueue(Service):
 
     def __init__(self, source, dispatcher, reactor, daemon_options):
-        self.logger = logging.getLogger(__name__ + '.BoardSet')
+        self.logger = logging.getLogger(__name__ + '.JobQueue')
         self.source = source
-        self.boards = {}
         self.dispatcher = dispatcher
         self.reactor = reactor
         self.daemon_options = daemon_options
-        self._update_boards_call = LoopingCall(self._updateBoards)
-        self._update_boards_call.clock = reactor
+        self._check_job_call = LoopingCall(self._checkJobs)
+        self._check_job_call.clock = reactor
 
-    def _updateBoards(self):
-        self.logger.debug("Refreshing board list")
-        return self.source.getBoardList().addCallback(
-            self._cbUpdateBoards).addErrback(catchall_errback(self.logger))
+    def _checkJobs(self):
+        self.logger.debug("Refreshing jobs")
+        return self.source.getJobList().addCallback(
+            self._cbCheckJobs).addErrback(catchall_errback(self.logger))
 
-    def _cbUpdateBoards(self, board_cfgs):
-        '''board_cfgs is an array of dicts {hostname=name} '''
-        new_boards = {}
-        for board_cfg in board_cfgs:
-            board_name = board_cfg['hostname']
-
-            if board_cfg['hostname'] in self.boards:
-                board = self.boards.pop(board_name)
-                new_boards[board_name] = board
-            else:
-                self.logger.info("Adding board: %s" % board_name)
-                new_boards[board_name] = Board(
-                    self.source, board_name, self.dispatcher, self.reactor,
-                    self.daemon_options)
-                new_boards[board_name].start()
-        for board in self.boards.values():
-            self.logger.info("Removing board: %s" % board.board_name)
-            board.stop()
-        self.boards = new_boards
+    def _cbCheckJobs(self, job_list):
+        for job in job_list:
+            new_job = JobRunner(self.source, job, self.dispatcher,
+                                self.reactor, self.daemon_options)
+            self.logger.info("Starting Job: %d " % job.id)
+            new_job.start()
 
     def startService(self):
-        self._update_boards_call.start(20)
+        self._check_job_call.start(20)
 
     def stopService(self):
-        self._update_boards_call.stop()
-        ds = []
-        dead_boards = []
-        for board in self.boards.itervalues():
-            ds.append(board.stop().addCallback(dead_boards.append))
-        self.logger.info(
-            "waiting for %s boards", len(self.boards) - len(dead_boards))
-        return defer.gatherResults(ds)
+        self._check_job_call.stop()
+        return None
 
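
JobQueue replaces the per-board polling of BoardSet with one LoopingCall that asks the database for schedulable jobs every 20 seconds and hands each one to a JobRunner. The shape of that loop in a self-contained Twisted sketch; FakeSource stands in for DatabaseJobSource, and only LoopingCall, Deferred and the 20-second interval are taken from the diff:

from twisted.internet import defer, reactor
from twisted.internet.task import LoopingCall


class FakeSource(object):
    """Stand-in for DatabaseJobSource.getJobList()."""

    def getJobList(self):
        return defer.succeed(['job-1', 'job-2'])


def start_runner(job):
    print 'starting runner for %s' % job


def check_jobs(source):
    # Same shape as JobQueue._checkJobs(): fetch the list, start one
    # runner per job, and log errors without killing the LoopingCall.
    d = source.getJobList()
    d.addCallback(lambda jobs: [start_runner(job) for job in jobs])
    d.addErrback(lambda failure: failure.printTraceback())
    return d


if __name__ == '__main__':
    poll = LoopingCall(check_jobs, FakeSource())
    poll.start(20)                       # the daemon polls on the same interval
    reactor.callLater(1, reactor.stop)   # stop quickly, just for the demo
    reactor.run()
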
=== modified file 'lava_scheduler_daemon/tests/test_board.py'
--- lava_scheduler_daemon/tests/test_board.py 2013-07-17 12:20:25 +0000
+++ lava_scheduler_daemon/tests/test_board.py 2013-08-28 13:47:23 +0000
@@ -38,7 +38,7 @@
 
 class TestJob(object):
 
-    def __init__(self, job_data, dispatcher, source, board_name, reactor, options, use_celery):
+    def __init__(self, job_data, dispatcher, source, board_name, reactor, options):
         self.json_data = job_data
         self.dispatcher = dispatcher
         self.reactor = reactor
