Merge lp:lava-scheduler/multinode into lp:lava-scheduler
Status: Merged
Approved by: Neil Williams
Approved revision: 285
Merged at revision: 255
Proposed branch: lp:lava-scheduler/multinode
Merge into: lp:lava-scheduler
Diff against target: 1750 lines (+1225/-137) (has conflicts), 17 files modified:
  lava_scheduler_app/api.py (+19/-3)
  lava_scheduler_app/management/commands/scheduler.py (+2/-2)
  lava_scheduler_app/management/commands/schedulermonitor.py (+1/-1)
  lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py (+165/-0)
  lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py (+162/-0)
  lava_scheduler_app/models.py (+140/-11)
  lava_scheduler_app/templates/lava_scheduler_app/job_definition.html (+1/-1)
  lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html (+16/-0)
  lava_scheduler_app/templates/lava_scheduler_app/job_submit.html (+10/-0)
  lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html (+21/-0)
  lava_scheduler_app/urls.py (+13/-2)
  lava_scheduler_app/utils.py (+117/-0)
  lava_scheduler_app/views.py (+102/-10)
  lava_scheduler_daemon/dbjobsource.py (+134/-64)
  lava_scheduler_daemon/job.py (+281/-0)
  lava_scheduler_daemon/service.py (+40/-42)
  lava_scheduler_daemon/tests/test_board.py (+1/-1)
Conflicts:
  Text conflict in lava_scheduler_app/models.py
  Text conflict in lava_scheduler_app/urls.py
  Text conflict in lava_scheduler_app/views.py
  Contents conflict in lava_scheduler_daemon/board.py
To merge this branch: bzr merge lp:lava-scheduler/multinode
Related bugs:
Reviewers:
  Neil Williams: Approve
  Antonio Terceiro: Needs Fixing
Review via email: mp+181103@code.launchpad.net
Commit message
Description of the change
Landing MultiNode.
Handles the scheduling of MultiNode jobs.
This branch applies with conflicts. The conflicts are proposed to be resolved as in this temporary branch: lp:~codehelp/lava-scheduler/multinode-merge
lava-scheduler will be the final shared branch to be merged, as it requires changes in the dispatcher to be applied before MultiNode jobs can be submitted.
282. By Neil Williams
    Fu Wei 2013-08-23 Fix the hard-coding of 'logging_level': change 'DEBUG' back to the value from the multinode JSON file.
283. By Neil Williams
    Senthil Kumaran 2013-08-22 List all subjobs of a multinode job.
Neil Williams (codehelp) wrote:
On Mon, 26 Aug 2013 23:33:20 -0000
Antonio Terceiro <email address hidden> wrote:
> > +def split_multi_
> > + node_json = {}
> > + all_nodes = {}
> > + node_actions = {}
> > + if "device_group" in json_jobdata:
>
> this conditional is redundant since split_multi_job is already called
> in the context where the same condition was already tested.
It's worth retaining the check in case other functions end up calling
this routine, but it should probably be a no-op or an error return
instead, which would simplify the indentation a bit.
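The early-return style being suggested can be sketched as follows; since the quoted snippet is truncated, the function body and the "role" field are hypothetical stand-ins, not code from the branch:

```python
# Sketch of the suggested guard style: return early instead of nesting
# the whole body under the "device_group" check, so callers that skip
# the outer test are still protected and the indentation stays flat.
def split_multi_job(json_jobdata):
    if "device_group" not in json_jobdata:
        # No-op for single-node jobs instead of a deeply nested body.
        return {}

    node_json = {}
    for group in json_jobdata["device_group"]:
        # One sub-job definition per role (illustrative structure).
        node_json[group["role"]] = dict(json_jobdata)
    return node_json


single = split_multi_job({"job_name": "boot-test"})
multi = split_multi_job(
    {"job_name": "net-test",
     "device_group": [{"role": "server"}, {"role": "client"}]})
```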
> > +def requested_
> > + """Utility function check the requested number of devices for
> > each
> > + device_type in a multinode job.
> > +
> > + JSON_DATA is the job definition string.
> > +
> > + Returns requested_device which is a dictionary of the
> > following format: +
> > + {'kvm': 1, 'qemu': 3, 'panda': 1}
> > +
> > + If the job is not a multinode job, then return None.
> > + """
> > + job_data = simplejson.
> > + if 'device_group' in job_data:
> > + requested_devices = {}
> > + for device_group in job_data[
> > + device_type = device_
> > + count = device_
> > + requested_
> > + return requested_devices
> > + else:
> > + # TODO: Put logic to check whether we have requested
> > devices attached
> > + # to this lava-server, even if it is a single node
> > job?
> > + return None
>
> There is already code that does almost the same thing (checking
> amount of available devices for all device types) in
> lava_scheduler_
> logic here.
I believe the intention here is that as MultiNode requires a different
Job selection process, it may be worth porting that code into the new
methods and using only a single device selection algorithm.
Senthil? Is this TODO redundant?
> > @@ -764,7 +818,13 @@
> > def job_cancel(request, pk):
> > job = get_restricted_
> > if job.can_
> > - job.cancel()
> > + if job.target_group:
>
>
> I see this pattern repeated all over the code, but I don't think
> there's much we can do about it. IMO, though, we can make it a little
> better by creating a new property in the job class so that you can
> test for `job.is_multinode` instead of `job.target_group`, which is
> not obviously related to multinode unless you understand the
> implementation details.
The processing depends on whether a group is in use. As far as the
code is concerned, it is the presence of a group that is relevant, not
the "MultiNode" label we attach to it. I think target_group is more flexible.
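For reference, the property Antonio suggests (which the preview diff below does use as `job.is_multinode`) would be a thin wrapper over target_group; a minimal sketch, using a plain class as a stand-in for the Django TestJob model:

```python
# Minimal sketch of the suggested property: job.is_multinode is just a
# readable alias for "job.target_group is set". The plain class here
# stands in for the Django TestJob model.
class TestJob(object):
    def __init__(self, target_group=None):
        self.target_group = target_group

    @property
    def is_multinode(self):
        # True only when the job belongs to a MultiNode target group.
        return bool(self.target_group)


single = TestJob()
grouped = TestJob(target_group="arbitrary-group-uuid")
```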
> > === modified file 'lava_scheduler
> > --- lava_scheduler_
> > +++ lava_scheduler_
> > @@ -339,7 +339,7 @@
> > self.logger.
> > self.running_job = self.job_cls(
> > job_data, self...
284. By Neil Williams
    Neil Williams 2013-08-27 Move from an overly general Exception to the specific exceptions possible directly from the called function.
Senthil Kumaran S (stylesen) wrote:
Hi,
The new MP is here -
https:/
On Tuesday 27 August 2013 04:00 PM, Neil Williams wrote:
> On Mon, 26 Aug 2013 23:33:20 -0000
> Antonio Terceiro <email address hidden> wrote:
>
>>> +def split_multi_
>>> + node_json = {}
>>> + all_nodes = {}
>>> + node_actions = {}
>>> + if "device_group" in json_jobdata:
>>
>> this conditional is redundant since split_multi_job is already called
>> in the context where the same condition was already tested.
>
> It's worth retaining the check should other functions end up calling
> this routine but it should probably be a no-op or error return instead
> which would simplify the indentation a bit.
I am +1 for Neil's suggestion here. Fixed it accordingly.
>>> +def requested_
>>> + """Utility function check the requested number of devices for
>>> each
>>> + device_type in a multinode job.
>>> +
>>> + JSON_DATA is the job definition string.
>>> +
>>> + Returns requested_device which is a dictionary of the
>>> following format: +
>>> + {'kvm': 1, 'qemu': 3, 'panda': 1}
>>> +
>>> + If the job is not a multinode job, then return None.
>>> + """
>>> + job_data = simplejson.
>>> + if 'device_group' in job_data:
>>> + requested_devices = {}
>>> + for device_group in job_data[
>>> + device_type = device_
>>> + count = device_
>>> + requested_
>>> + return requested_devices
>>> + else:
>>> + # TODO: Put logic to check whether we have requested
>>> devices attached
>>> + # to this lava-server, even if it is a single node
>>> job?
>>> + return None
>>
>> There is already code that does almost the same thing (checking
>> amount of available devices for all device types) in
>> lava_scheduler_
>> logic here.
>
> I believe the intention here is that as MultiNode requires a different
> Job selection process, it may be worth porting that code into the new
> methods and using only a single device selection algorithm.
The function available in api.py is different from what we have in utils.py.
In api.py, the "all_device_types" function returns the device types
available in the lava-server after checking against the database.
On the other hand, in utils.py "requested_" takes
the job data as input and operates on it. It does not go to the database
to check for the available device types. It is a handy function used
to retrieve how many devices of each type are requested in the
job file.
> Senthil? Is this TODO redundant?
The TODO is misleading. It was left over from previous code, when this
function had a different logic. I have removed it in
the latest merge proposal.
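The behaviour Senthil describes can be sketched as below; the full function name and the "device_type"/"count" field names are assumptions reconstructed from the quoted (truncated) docstring, not copied from the branch:

```python
import json  # the branch uses simplejson; stdlib json behaves the same here


def requested_device_count(json_data):
    """Count the devices requested per device_type in a MultiNode job.

    Operates purely on the job definition string, without touching the
    database. Returns e.g. {'kvm': 1, 'qemu': 3}; returns None for a
    single-node job.
    """
    job_data = json.loads(json_data)
    if "device_group" not in job_data:
        return None
    requested_devices = {}
    for device_group in job_data["device_group"]:
        device_type = device_group["device_type"]
        count = int(device_group["count"])
        requested_devices[device_type] = (
            requested_devices.get(device_type, 0) + count)
    return requested_devices


multi = requested_device_count(json.dumps({"device_group": [
    {"device_type": "kvm", "count": 1},
    {"device_type": "qemu", "count": 3},
]}))
single = requested_device_count(json.dumps({"job_name": "single-node"}))
```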
>>> @@ -764,7 +818,13 @@
>>> def job_cancel(request, pk):
>>> job = get_restricted_
>>> if job.can_
>>> - job.cancel()
>>> + if job.target...
285. By Neil Williams
    Senthil Kumaran 2013-08-28 Remove all legacy code for the Board-based scheduler.
    Senthil Kumaran 2013-08-28 Fix test case for using celery.
    Senthil Kumaran 2013-08-28 Address review comments from Antonio.
Neil Williams (codehelp) wrote:
Final update reviewed: Approve.
Preview Diff
1 | === modified file 'lava_scheduler_app/api.py' |
2 | --- lava_scheduler_app/api.py 2013-07-17 12:20:25 +0000 |
3 | +++ lava_scheduler_app/api.py 2013-08-28 13:47:23 +0000 |
4 | @@ -2,10 +2,12 @@ |
5 | from simplejson import JSONDecodeError |
6 | from django.db.models import Count |
7 | from linaro_django_xmlrpc.models import ExposedAPI |
8 | +from lava_scheduler_app import utils |
9 | from lava_scheduler_app.models import ( |
10 | Device, |
11 | DeviceType, |
12 | JSONDataError, |
13 | + DevicesUnavailableException, |
14 | TestJob, |
15 | ) |
16 | from lava_scheduler_app.views import ( |
17 | @@ -35,14 +37,22 @@ |
18 | raise xmlrpclib.Fault(404, "Specified device not found.") |
19 | except DeviceType.DoesNotExist: |
20 | raise xmlrpclib.Fault(404, "Specified device type not found.") |
21 | - return job.id |
22 | + except DevicesUnavailableException as e: |
23 | + raise xmlrpclib.Fault(400, str(e)) |
24 | + if isinstance(job, type(list())): |
25 | + return job |
26 | + else: |
27 | + return job.id |
28 | |
29 | def resubmit_job(self, job_id): |
30 | try: |
31 | job = TestJob.objects.accessible_by_principal(self.user).get(pk=job_id) |
32 | except TestJob.DoesNotExist: |
33 | raise xmlrpclib.Fault(404, "Specified job not found.") |
34 | - return self.submit_job(job.definition) |
35 | + if job.is_multinode: |
36 | + return self.submit_job(job.multinode_definition) |
37 | + else: |
38 | + return self.submit_job(job.definition) |
39 | |
40 | def cancel_job(self, job_id): |
41 | if not self.user: |
42 | @@ -50,7 +60,13 @@ |
43 | job = TestJob.objects.get(pk=job_id) |
44 | if not job.can_cancel(self.user): |
45 | raise xmlrpclib.Fault(403, "Permission denied.") |
46 | - job.cancel() |
47 | + if job.is_multinode: |
48 | + multinode_jobs = TestJob.objects.all().filter( |
49 | + target_group=job.target_group) |
50 | + for multinode_job in multinode_jobs: |
51 | + multinode_job.cancel() |
52 | + else: |
53 | + job.cancel() |
54 | return True |
55 | |
56 | def job_output(self, job_id): |
57 | |
58 | === modified file 'lava_scheduler_app/management/commands/scheduler.py' |
59 | --- lava_scheduler_app/management/commands/scheduler.py 2013-07-17 12:20:25 +0000 |
60 | +++ lava_scheduler_app/management/commands/scheduler.py 2013-08-28 13:47:23 +0000 |
61 | @@ -43,7 +43,7 @@ |
62 | |
63 | from twisted.internet import reactor |
64 | |
65 | - from lava_scheduler_daemon.service import BoardSet |
66 | + from lava_scheduler_daemon.service import JobQueue |
67 | from lava_scheduler_daemon.dbjobsource import DatabaseJobSource |
68 | |
69 | daemon_options = self._configure(options) |
70 | @@ -58,7 +58,7 @@ |
71 | 'fake-dispatcher') |
72 | else: |
73 | dispatcher = options['dispatcher'] |
74 | - service = BoardSet( |
75 | + service = JobQueue( |
76 | source, dispatcher, reactor, daemon_options=daemon_options) |
77 | reactor.callWhenRunning(service.startService) |
78 | reactor.run() |
79 | |
80 | === modified file 'lava_scheduler_app/management/commands/schedulermonitor.py' |
81 | --- lava_scheduler_app/management/commands/schedulermonitor.py 2012-12-03 05:03:38 +0000 |
82 | +++ lava_scheduler_app/management/commands/schedulermonitor.py 2013-08-28 13:47:23 +0000 |
83 | @@ -31,7 +31,7 @@ |
84 | |
85 | def handle(self, *args, **options): |
86 | from twisted.internet import reactor |
87 | - from lava_scheduler_daemon.board import Job |
88 | + from lava_scheduler_daemon.job import Job |
89 | daemon_options = self._configure(options) |
90 | source = DatabaseJobSource() |
91 | dispatcher, board_name, json_file = args |
92 | |
93 | === added file 'lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py' |
94 | --- lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py 1970-01-01 00:00:00 +0000 |
95 | +++ lava_scheduler_app/migrations/0030_auto__add_field_testjob_sub_id__add_field_testjob_target_group.py 2013-08-28 13:47:23 +0000 |
96 | @@ -0,0 +1,165 @@ |
97 | +# -*- coding: utf-8 -*- |
98 | +from south.db import db |
99 | +from south.v2 import SchemaMigration |
100 | + |
101 | + |
102 | +class Migration(SchemaMigration): |
103 | + |
104 | + def forwards(self, orm): |
105 | + # Adding field 'TestJob.sub_id' |
106 | + db.add_column('lava_scheduler_app_testjob', 'sub_id', |
107 | + self.gf('django.db.models.fields.CharField')(default='', max_length=200, blank=True), |
108 | + keep_default=False) |
109 | + |
110 | + # Adding field 'TestJob.target_group' |
111 | + db.add_column('lava_scheduler_app_testjob', 'target_group', |
112 | + self.gf('django.db.models.fields.CharField')(default=None, max_length=64, null=True, blank=True), |
113 | + keep_default=False) |
114 | + |
115 | + def backwards(self, orm): |
116 | + # Deleting field 'TestJob.sub_id' |
117 | + db.delete_column('lava_scheduler_app_testjob', 'sub_id') |
118 | + |
119 | + # Deleting field 'TestJob.target_group' |
120 | + db.delete_column('lava_scheduler_app_testjob', 'target_group') |
121 | + |
122 | + models = { |
123 | + 'auth.group': { |
124 | + 'Meta': {'object_name': 'Group'}, |
125 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
126 | + 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}), |
127 | + 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}) |
128 | + }, |
129 | + 'auth.permission': { |
130 | + 'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'}, |
131 | + 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
132 | + 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}), |
133 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
134 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}) |
135 | + }, |
136 | + 'auth.user': { |
137 | + 'Meta': {'object_name': 'User'}, |
138 | + 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), |
139 | + 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}), |
140 | + 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), |
141 | + 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}), |
142 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
143 | + 'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), |
144 | + 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
145 | + 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
146 | + 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), |
147 | + 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), |
148 | + 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}), |
149 | + 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}), |
150 | + 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}) |
151 | + }, |
152 | + 'contenttypes.contenttype': { |
153 | + 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"}, |
154 | + 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
155 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
156 | + 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
157 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}) |
158 | + }, |
159 | + 'dashboard_app.bundle': { |
160 | + 'Meta': {'ordering': "['-uploaded_on']", 'object_name': 'Bundle'}, |
161 | + '_gz_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'gz_content'"}), |
162 | + '_raw_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'content'"}), |
163 | + 'bundle_stream': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'bundles'", 'to': "orm['dashboard_app.BundleStream']"}), |
164 | + 'content_filename': ('django.db.models.fields.CharField', [], {'max_length': '256'}), |
165 | + 'content_sha1': ('django.db.models.fields.CharField', [], {'max_length': '40', 'unique': 'True', 'null': 'True'}), |
166 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
167 | + 'is_deserialized': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
168 | + 'uploaded_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'uploaded_bundles'", 'null': 'True', 'to': "orm['auth.User']"}), |
169 | + 'uploaded_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}) |
170 | + }, |
171 | + 'dashboard_app.bundlestream': { |
172 | + 'Meta': {'object_name': 'BundleStream'}, |
173 | + 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}), |
174 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
175 | + 'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
176 | + 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
177 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}), |
178 | + 'pathname': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}), |
179 | + 'slug': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}), |
180 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'}) |
181 | + }, |
182 | + 'lava_scheduler_app.device': { |
183 | + 'Meta': {'object_name': 'Device'}, |
184 | + 'current_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}), |
185 | + 'device_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.DeviceType']"}), |
186 | + 'device_version': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}), |
187 | + 'health_status': ('django.db.models.fields.IntegerField', [], {'default': '0'}), |
188 | + 'hostname': ('django.db.models.fields.CharField', [], {'max_length': '200', 'primary_key': 'True'}), |
189 | + 'last_health_report_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}), |
190 | + 'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}), |
191 | + 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}) |
192 | + }, |
193 | + 'lava_scheduler_app.devicestatetransition': { |
194 | + 'Meta': {'object_name': 'DeviceStateTransition'}, |
195 | + 'created_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
196 | + 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}), |
197 | + 'device': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'transitions'", 'to': "orm['lava_scheduler_app.Device']"}), |
198 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
199 | + 'job': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.TestJob']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
200 | + 'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
201 | + 'new_state': ('django.db.models.fields.IntegerField', [], {}), |
202 | + 'old_state': ('django.db.models.fields.IntegerField', [], {}) |
203 | + }, |
204 | + 'lava_scheduler_app.devicetype': { |
205 | + 'Meta': {'object_name': 'DeviceType'}, |
206 | + 'display': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), |
207 | + 'health_check_job': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}), |
208 | + 'name': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'}) |
209 | + }, |
210 | + 'lava_scheduler_app.jobfailuretag': { |
211 | + 'Meta': {'object_name': 'JobFailureTag'}, |
212 | + 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
213 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
214 | + 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '256'}) |
215 | + }, |
216 | + 'lava_scheduler_app.tag': { |
217 | + 'Meta': {'object_name': 'Tag'}, |
218 | + 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
219 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
220 | + 'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'}) |
221 | + }, |
222 | + 'lava_scheduler_app.testjob': { |
223 | + 'Meta': {'object_name': 'TestJob'}, |
224 | + '_results_bundle': ('django.db.models.fields.related.OneToOneField', [], {'null': 'True', 'db_column': "'results_bundle_id'", 'on_delete': 'models.SET_NULL', 'to': "orm['dashboard_app.Bundle']", 'blank': 'True', 'unique': 'True'}), |
225 | + '_results_link': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '400', 'null': 'True', 'db_column': "'results_link'", 'blank': 'True'}), |
226 | + 'actual_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}), |
227 | + 'definition': ('django.db.models.fields.TextField', [], {}), |
228 | + 'description': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}), |
229 | + 'end_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}), |
230 | + 'failure_comment': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
231 | + 'failure_tags': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'failure_tags'", 'blank': 'True', 'to': "orm['lava_scheduler_app.JobFailureTag']"}), |
232 | + 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}), |
233 | + 'health_check': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
234 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
235 | + 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
236 | + 'log_file': ('django.db.models.fields.files.FileField', [], {'default': 'None', 'max_length': '100', 'null': 'True', 'blank': 'True'}), |
237 | + 'priority': ('django.db.models.fields.IntegerField', [], {'default': '50'}), |
238 | + 'requested_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}), |
239 | + 'requested_device_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.DeviceType']"}), |
240 | + 'start_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}), |
241 | + 'status': ('django.db.models.fields.IntegerField', [], {'default': '0'}), |
242 | + 'sub_id': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}), |
243 | + 'submit_time': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}), |
244 | + 'submit_token': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['linaro_django_xmlrpc.AuthToken']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
245 | + 'submitter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'to': "orm['auth.User']"}), |
246 | + 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}), |
247 | + 'target_group': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '64', 'null': 'True', 'blank': 'True'}), |
248 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'}) |
249 | + }, |
250 | + 'linaro_django_xmlrpc.authtoken': { |
251 | + 'Meta': {'object_name': 'AuthToken'}, |
252 | + 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}), |
253 | + 'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}), |
254 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
255 | + 'last_used_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}), |
256 | + 'secret': ('django.db.models.fields.CharField', [], {'default': "'7rf4239t35kqjrcixn4srgw00r61ncuq51jna0d6xbwpg2ur2annw5y1gkr9yt6ys9gh06b3wtcum4j0f2pdn5crul72mu1e1tw4at9jfgwk18asogkgoqcbc20ftylx'", 'unique': 'True', 'max_length': '128'}), |
257 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_tokens'", 'to': "orm['auth.User']"}) |
258 | + } |
259 | + } |
260 | + |
261 | + complete_apps = ['lava_scheduler_app'] |
262 | |
263 | === added file 'lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py' |
264 | --- lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py 1970-01-01 00:00:00 +0000 |
265 | +++ lava_scheduler_app/migrations/0031_auto__add_field_testjob_multinode_definition.py 2013-08-28 13:47:23 +0000 |
266 | @@ -0,0 +1,162 @@ |
267 | +# -*- coding: utf-8 -*- |
268 | +import datetime |
269 | +from south.db import db |
270 | +from south.v2 import SchemaMigration |
271 | +from django.db import models |
272 | + |
273 | + |
274 | +class Migration(SchemaMigration): |
275 | + |
276 | + def forwards(self, orm): |
277 | + # Adding field 'TestJob.multinode_definition' |
278 | + db.add_column('lava_scheduler_app_testjob', 'multinode_definition', |
279 | + self.gf('django.db.models.fields.TextField')(default='', blank=True), |
280 | + keep_default=False) |
281 | + |
282 | + |
283 | + def backwards(self, orm): |
284 | + # Deleting field 'TestJob.multinode_definition' |
285 | + db.delete_column('lava_scheduler_app_testjob', 'multinode_definition') |
286 | + |
287 | + |
288 | + models = { |
289 | + 'auth.group': { |
290 | + 'Meta': {'object_name': 'Group'}, |
291 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
292 | + 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}), |
293 | + 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}) |
294 | + }, |
295 | + 'auth.permission': { |
296 | + 'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'}, |
297 | + 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
298 | + 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}), |
299 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
300 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}) |
301 | + }, |
302 | + 'auth.user': { |
303 | + 'Meta': {'object_name': 'User'}, |
304 | + 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), |
305 | + 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}), |
306 | + 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), |
307 | + 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}), |
308 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
309 | + 'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), |
310 | + 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
311 | + 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
312 | + 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), |
313 | + 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), |
314 | + 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}), |
315 | + 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}), |
316 | + 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}) |
317 | + }, |
318 | + 'contenttypes.contenttype': { |
319 | + 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"}, |
320 | + 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
321 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
322 | + 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}), |
323 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}) |
324 | + }, |
325 | + 'dashboard_app.bundle': { |
326 | + 'Meta': {'ordering': "['-uploaded_on']", 'object_name': 'Bundle'}, |
327 | + '_gz_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'gz_content'"}), |
328 | + '_raw_content': ('django.db.models.fields.files.FileField', [], {'max_length': '100', 'null': 'True', 'db_column': "'content'"}), |
329 | + 'bundle_stream': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'bundles'", 'to': "orm['dashboard_app.BundleStream']"}), |
330 | + 'content_filename': ('django.db.models.fields.CharField', [], {'max_length': '256'}), |
331 | + 'content_sha1': ('django.db.models.fields.CharField', [], {'max_length': '40', 'unique': 'True', 'null': 'True'}), |
332 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
333 | + 'is_deserialized': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
334 | + 'uploaded_by': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'uploaded_bundles'", 'null': 'True', 'to': "orm['auth.User']"}), |
335 | + 'uploaded_on': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.utcnow'}) |
336 | + }, |
337 | + 'dashboard_app.bundlestream': { |
338 | + 'Meta': {'object_name': 'BundleStream'}, |
339 | + 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}), |
340 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
341 | + 'is_anonymous': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
342 | + 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
343 | + 'name': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}), |
344 | + 'pathname': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}), |
345 | + 'slug': ('django.db.models.fields.CharField', [], {'max_length': '64', 'blank': 'True'}), |
346 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'}) |
347 | + }, |
348 | + 'lava_scheduler_app.device': { |
349 | + 'Meta': {'object_name': 'Device'}, |
350 | + 'current_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}), |
351 | + 'device_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.DeviceType']"}), |
352 | + 'device_version': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}), |
353 | + 'health_status': ('django.db.models.fields.IntegerField', [], {'default': '0'}), |
354 | + 'hostname': ('django.db.models.fields.CharField', [], {'max_length': '200', 'primary_key': 'True'}), |
355 | + 'last_health_report_job': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': "orm['lava_scheduler_app.TestJob']", 'blank': 'True', 'unique': 'True'}), |
356 | + 'status': ('django.db.models.fields.IntegerField', [], {'default': '1'}), |
357 | + 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}) |
358 | + }, |
359 | + 'lava_scheduler_app.devicestatetransition': { |
360 | + 'Meta': {'object_name': 'DeviceStateTransition'}, |
361 | + 'created_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
362 | + 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}), |
363 | + 'device': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'transitions'", 'to': "orm['lava_scheduler_app.Device']"}), |
364 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
365 | + 'job': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['lava_scheduler_app.TestJob']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
366 | + 'message': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
367 | + 'new_state': ('django.db.models.fields.IntegerField', [], {}), |
368 | + 'old_state': ('django.db.models.fields.IntegerField', [], {}) |
369 | + }, |
370 | + 'lava_scheduler_app.devicetype': { |
371 | + 'Meta': {'object_name': 'DeviceType'}, |
372 | + 'display': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), |
373 | + 'health_check_job': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}), |
374 | + 'name': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'}) |
375 | + }, |
376 | + 'lava_scheduler_app.jobfailuretag': { |
377 | + 'Meta': {'object_name': 'JobFailureTag'}, |
378 | + 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
379 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
380 | + 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '256'}) |
381 | + }, |
382 | + 'lava_scheduler_app.tag': { |
383 | + 'Meta': {'object_name': 'Tag'}, |
384 | + 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
385 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
386 | + 'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'}) |
387 | + }, |
388 | + 'lava_scheduler_app.testjob': { |
389 | + 'Meta': {'object_name': 'TestJob'}, |
390 | + '_results_bundle': ('django.db.models.fields.related.OneToOneField', [], {'null': 'True', 'db_column': "'results_bundle_id'", 'on_delete': 'models.SET_NULL', 'to': "orm['dashboard_app.Bundle']", 'blank': 'True', 'unique': 'True'}), |
391 | + '_results_link': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '400', 'null': 'True', 'db_column': "'results_link'", 'blank': 'True'}), |
392 | + 'actual_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}), |
393 | + 'definition': ('django.db.models.fields.TextField', [], {}), |
394 | + 'description': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True', 'blank': 'True'}), |
395 | + 'end_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}), |
396 | + 'failure_comment': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), |
397 | + 'failure_tags': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'failure_tags'", 'blank': 'True', 'to': "orm['lava_scheduler_app.JobFailureTag']"}), |
398 | + 'group': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.Group']", 'null': 'True', 'blank': 'True'}), |
399 | + 'health_check': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
400 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
401 | + 'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), |
402 | + 'log_file': ('django.db.models.fields.files.FileField', [], {'default': 'None', 'max_length': '100', 'null': 'True', 'blank': 'True'}), |
403 | + 'multinode_definition': ('django.db.models.fields.TextField', [], {'blank': 'True'}), |
404 | + 'priority': ('django.db.models.fields.IntegerField', [], {'default': '50'}), |
405 | + 'requested_device': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.Device']"}), |
406 | + 'requested_device_type': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'+'", 'null': 'True', 'blank': 'True', 'to': "orm['lava_scheduler_app.DeviceType']"}), |
407 | + 'start_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}), |
408 | + 'status': ('django.db.models.fields.IntegerField', [], {'default': '0'}), |
409 | + 'sub_id': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}), |
410 | + 'submit_time': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}), |
411 | + 'submit_token': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['linaro_django_xmlrpc.AuthToken']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}), |
412 | + 'submitter': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'+'", 'to': "orm['auth.User']"}), |
413 | + 'tags': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['lava_scheduler_app.Tag']", 'symmetrical': 'False', 'blank': 'True'}), |
414 | + 'target_group': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '64', 'null': 'True', 'blank': 'True'}), |
415 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'blank': 'True'}) |
416 | + }, |
417 | + 'linaro_django_xmlrpc.authtoken': { |
418 | + 'Meta': {'object_name': 'AuthToken'}, |
419 | + 'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}), |
420 | + 'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}), |
421 | + 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), |
422 | + 'last_used_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}), |
423 | + 'secret': ('django.db.models.fields.CharField', [], {'default': "'g4fgt7t5qdghq3qo3t3h5dhbj6fes2zh8n6lkncc0u0rcqxy0kaez7aacw05nc0oxjc3060pj0f1fsunjpo1btk6urfpt8xfmgefcatgmh1e7kj0ams90ikni05sd5qk'", 'unique': 'True', 'max_length': '128'}), |
424 | + 'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'auth_tokens'", 'to': "orm['auth.User']"}) |
425 | + } |
426 | + } |
427 | + |
428 | + complete_apps = ['lava_scheduler_app'] |
429 | \ No newline at end of file |
430 | |
431 | === modified file 'lava_scheduler_app/models.py' |
432 | --- lava_scheduler_app/models.py 2013-08-19 11:41:55 +0000 |
433 | +++ lava_scheduler_app/models.py 2013-08-28 13:47:23 +0000 |
434 | @@ -1,5 +1,6 @@ |
435 | import logging |
436 | import os |
437 | +import uuid |
438 | import simplejson |
439 | import urlparse |
440 | |
441 | @@ -18,6 +19,7 @@ |
442 | from dashboard_app.models import Bundle, BundleStream |
443 | |
444 | from lava_dispatcher.job import validate_job_data |
445 | +from lava_scheduler_app import utils |
446 | |
447 | from linaro_django_xmlrpc.models import AuthToken |
448 | |
449 | @@ -26,6 +28,10 @@ |
450 | """Error raised when JSON is syntactically valid but ill-formed.""" |
451 | |
452 | |
453 | +class DevicesUnavailableException(UserWarning): |
454 | + """Error raised when required number of devices are unavailable.""" |
455 | + |
456 | + |
457 | class Tag(models.Model): |
458 | |
459 | name = models.SlugField(unique=True) |
460 | @@ -44,6 +50,38 @@ |
461 | raise ValidationError(e) |
462 | |
463 | |
464 | +def check_device_availability(requested_devices): |
465 | + """Checks whether the number of devices requested is available. |
466 | + |
467 | + See utils.requested_device_count() for details of REQUESTED_DEVICES |
468 | + dictionary format. |
469 | + |
470 | + Returns True if the requested number of devices is available, else |
471 | + raises DevicesUnavailableException. |
472 | + """ |
473 | + device_types = DeviceType.objects.values_list('name').filter( |
474 | + models.Q(device__status=Device.IDLE) | \ |
475 | + models.Q(device__status=Device.RUNNING) |
476 | + ).annotate( |
477 | + num_count=models.Count('name') |
478 | + ).order_by('name') |
479 | + |
480 | + if requested_devices: |
481 | + all_devices = {} |
482 | + for dt in device_types: |
483 | + # dt[0] -> device type name |
484 | + # dt[1] -> device type count |
485 | + all_devices[dt[0]] = dt[1] |
486 | + |
487 | + for board, count in requested_devices.iteritems(): |
488 | + if all_devices.get(board, None) and count <= all_devices[board]: |
489 | + continue |
490 | + else: |
491 | + raise DevicesUnavailableException( |
492 | + "Required number of device(s) unavailable.") |
493 | + return True |
494 | + |
495 | + |
496 | class DeviceType(models.Model): |
497 | """ |
498 | A class of device, for example a pandaboard or a snowball. |
499 | @@ -245,6 +283,20 @@ |
500 | |
501 | id = models.AutoField(primary_key=True) |
502 | |
503 | + sub_id = models.CharField( |
504 | + verbose_name=_(u"Sub ID"), |
505 | + blank=True, |
506 | + max_length=200 |
507 | + ) |
508 | + |
509 | + target_group = models.CharField( |
510 | + verbose_name=_(u"Target Group"), |
511 | + blank=True, |
512 | + max_length=64, |
513 | + null=True, |
514 | + default=None |
515 | + ) |
516 | + |
517 | submitter = models.ForeignKey( |
518 | User, |
519 | verbose_name=_(u"Submitter"), |
520 | @@ -317,7 +369,16 @@ |
521 | ) |
522 | |
523 | definition = models.TextField( |
524 | - editable=False, |
525 | +<<<<<<< TREE |
526 | + editable=False, |
527 | +======= |
528 | + editable=False, |
529 | + ) |
530 | + |
531 | + multinode_definition = models.TextField( |
532 | + editable=False, |
533 | + blank=True |
534 | +>>>>>>> MERGE-SOURCE |
535 | ) |
536 | |
537 | log_file = models.FileField( |
538 | @@ -386,17 +447,34 @@ |
539 | |
540 | @classmethod |
541 | def from_json_and_user(cls, json_data, user, health_check=False): |
542 | + requested_devices = utils.requested_device_count(json_data) |
543 | + check_device_availability(requested_devices) |
544 | job_data = simplejson.loads(json_data) |
545 | validate_job_data(job_data) |
546 | + |
547 | + # Reject multinode-specific parameters supplied by the user: |
548 | + # these are reserved by LAVA and generated automatically |
549 | + # during job submission. |
550 | + reserved_job_params = ["group_size", "role", "sub_id", "target_group"] |
551 | + reserved_params_found = set(reserved_job_params).intersection( |
552 | + set(job_data.keys())) |
553 | + if reserved_params_found: |
554 | + raise JSONDataError("Reserved parameters found in job data %s" % |
555 | + str([x for x in reserved_params_found])) |
556 | + |
557 | if 'target' in job_data: |
558 | target = Device.objects.get(hostname=job_data['target']) |
559 | device_type = None |
560 | elif 'device_type' in job_data: |
561 | target = None |
562 | device_type = DeviceType.objects.get(name=job_data['device_type']) |
563 | + elif 'device_group' in job_data: |
564 | + target = None |
565 | + device_type = None |
566 | else: |
567 | raise JSONDataError( |
568 | - "Neither 'target' nor 'device_type' found in job data.") |
569 | + "No 'target' or 'device_type' or 'device_group' are found " |
570 | + "in job data.") |
571 | |
572 | priorities = dict([(j.upper(), i) for i, j in cls.PRIORITY_CHOICES]) |
573 | priority = cls.MEDIUM |
574 | @@ -449,6 +527,7 @@ |
575 | bundle_stream.is_public) |
576 | server = action['parameters']['server'] |
577 | parsed_server = urlparse.urlsplit(server) |
578 | + action["parameters"]["server"] = utils.rewrite_hostname(server) |
579 | if parsed_server.hostname is None: |
580 | raise ValueError("invalid server: %s" % server) |
581 | |
582 | @@ -458,15 +537,49 @@ |
583 | tags.append(Tag.objects.get(name=tag_name)) |
584 | except Tag.DoesNotExist: |
585 | raise JSONDataError("tag %r does not exist" % tag_name) |
586 | - job = TestJob( |
587 | - definition=json_data, submitter=submitter, |
588 | - requested_device=target, requested_device_type=device_type, |
589 | - description=job_name, health_check=health_check, user=user, |
590 | - group=group, is_public=is_public, priority=priority) |
591 | - job.save() |
592 | - for tag in tags: |
593 | - job.tags.add(tag) |
594 | - return job |
595 | + |
596 | + if 'device_group' in job_data: |
597 | + target_group = str(uuid.uuid4()) |
598 | + node_json = utils.split_multi_job(job_data, target_group) |
599 | + job_list = [] |
600 | + try: |
601 | + parent_id = (TestJob.objects.latest('id')).id + 1 |
602 | + except TestJob.DoesNotExist: |
603 | + parent_id = 1 |
604 | + child_id = 0 |
605 | + |
606 | + for role in node_json: |
607 | + role_count = len(node_json[role]) |
608 | + for c in range(0, role_count): |
609 | + device_type = DeviceType.objects.get( |
610 | + name=node_json[role][c]["device_type"]) |
611 | + sub_id = '.'.join([str(parent_id), str(child_id)]) |
612 | + |
613 | + # Add sub_id to the generated job dictionary. |
614 | + node_json[role][c]["sub_id"] = sub_id |
615 | + |
616 | + job = TestJob( |
617 | + sub_id=sub_id, submitter=submitter, |
618 | + requested_device=target, description=job_name, |
619 | + requested_device_type=device_type, |
620 | + definition=simplejson.dumps(node_json[role][c]), |
621 | + multinode_definition=json_data, |
622 | + health_check=health_check, user=user, group=group, |
623 | + is_public=is_public, priority=priority, |
624 | + target_group=target_group) |
625 | + job.save() |
626 | + job_list.append(sub_id) |
627 | + child_id += 1 |
628 | + return job_list |
629 | + |
630 | + else: |
631 | + job = TestJob( |
632 | + definition=simplejson.dumps(job_data), submitter=submitter, |
633 | + requested_device=target, requested_device_type=device_type, |
634 | + description=job_name, health_check=health_check, user=user, |
635 | + group=group, is_public=is_public, priority=priority) |
636 | + job.save() |
637 | + return job |
638 | |
639 | def _can_admin(self, user): |
640 | """ used to check for things like if the user can cancel or annotate |
641 | @@ -529,6 +642,22 @@ |
642 | "LAVA job notification: " + description, mail, |
643 | settings.SERVER_EMAIL, recipients) |
644 | |
645 | + @property |
646 | + def sub_jobs_list(self): |
647 | + if self.is_multinode: |
648 | + jobs = TestJob.objects.filter( |
649 | + target_group=self.target_group).order_by('id') |
650 | + return jobs |
651 | + else: |
652 | + return None |
653 | + |
654 | + @property |
655 | + def is_multinode(self): |
656 | + if self.target_group: |
657 | + return True |
658 | + else: |
659 | + return False |
660 | + |
661 | |
662 | class DeviceStateTransition(models.Model): |
663 | created_on = models.DateTimeField(auto_now_add=True) |
664 | |
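The multinode split in `from_json_and_user` above assigns each generated job a `sub_id` of the form `parent_id.child_id`, one per device in the group. A standalone sketch of that numbering follows; `allocate_sub_ids` and its inputs are illustrative names for this annotation, not part of the branch (the real logic runs inline inside `TestJob.from_json_and_user`):

```python
# Illustrative sketch of the sub_id numbering used above: every job in a
# multinode group shares a parent id and gets an incrementing child
# suffix, e.g. "42.0", "42.1", "42.2".

def allocate_sub_ids(parent_id, node_json):
    """Return the sub_id list for a split multinode job.

    node_json maps each role to a list of per-device job dictionaries,
    mirroring the structure produced by utils.split_multi_job().
    Each dictionary is updated in place with its sub_id, as the
    scheduler does before saving the generated definition.
    """
    sub_ids = []
    child_id = 0
    for role in node_json:
        for node in node_json[role]:
            sub_id = f"{parent_id}.{child_id}"
            node["sub_id"] = sub_id
            sub_ids.append(sub_id)
            child_id += 1
    return sub_ids
```

Cancelling or viewing any one of these jobs can then recover its siblings via the shared `target_group`, which is why `sub_id` alone does not need to be unique across groups.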
665 | === modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_definition.html' |
666 | --- lava_scheduler_app/templates/lava_scheduler_app/job_definition.html 2011-12-09 03:55:33 +0000 |
667 | +++ lava_scheduler_app/templates/lava_scheduler_app/job_definition.html 2013-08-28 13:47:23 +0000 |
668 | @@ -10,7 +10,7 @@ |
669 | {% endblock %} |
670 | |
671 | {% block content %} |
672 | -<h2>Job defintion file - {{ job.id }} </h2> |
673 | +<h2>Job definition file - {{ job.id }} </h2> |
674 | <a href="{% url lava.scheduler.job.definition.plain job.pk %}">Download as text file</a> |
675 | <pre class="brush: js">{{ job.definition }}</pre> |
676 | |
677 | |
678 | === modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html' |
679 | --- lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html 2013-01-15 17:44:36 +0000 |
680 | +++ lava_scheduler_app/templates/lava_scheduler_app/job_sidebar.html 2013-08-28 13:47:23 +0000 |
681 | @@ -62,6 +62,17 @@ |
682 | |
683 | <dt>Finished at:</dt> |
684 | <dd>{{ job.end_time|default:"not finished" }}</dd> |
685 | + |
686 | + {% if job.is_multinode %} |
687 | + <dt>Sub Jobs:</dt> |
688 | + {% for subjob in job.sub_jobs_list %} |
689 | + <dd> |
690 | + <a href="{% url lava.scheduler.job.detail subjob.pk %}"> |
691 | + {{ subjob.sub_id }}</a> |
692 | + </dd> |
693 | + {% endfor %} |
694 | + {% endif %} |
695 | + |
696 | </dl> |
697 | <h2>Views</h2> |
698 | <ul> |
699 | @@ -76,6 +87,11 @@ |
700 | <li> |
701 | <a href="{% url lava.scheduler.job.definition job.pk %}">Definition</a> |
702 | </li> |
703 | + {% if job.is_multinode %} |
704 | + <li> |
705 | + <a href="{% url lava.scheduler.job.multinode_definition job.pk %}"> Multinode Definition</a> |
706 | + </li> |
707 | + {% endif %} |
708 | {% if job.results_link %} |
709 | <li> |
710 | <a href="{{ job.results_link }}">Results Bundle</a> |
711 | |
712 | === modified file 'lava_scheduler_app/templates/lava_scheduler_app/job_submit.html' |
713 | --- lava_scheduler_app/templates/lava_scheduler_app/job_submit.html 2013-06-28 09:43:44 +0000 |
714 | +++ lava_scheduler_app/templates/lava_scheduler_app/job_submit.html 2013-08-28 13:47:23 +0000 |
715 | @@ -31,6 +31,16 @@ |
716 | To view the full job list click <a href="{{ list_url }}">here</a>. |
717 | </div> |
718 | |
719 | +{% elif job_list %} |
720 | +{% url lava.scheduler.job.list as list_url %} |
721 | +<div id="job-success">Multinode Job submission successfull! |
722 | +<br> |
723 | +<br> |
724 | +Jobs with ID {{ job_list }}</a> has been created. |
725 | +<br> |
726 | +To view the full job list click <a href="{{ list_url }}">here</a>. |
727 | +</div> |
728 | + |
729 | {% else %} |
730 | |
731 | {% if error %} |
732 | |
733 | === added file 'lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html' |
734 | --- lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html 1970-01-01 00:00:00 +0000 |
735 | +++ lava_scheduler_app/templates/lava_scheduler_app/multinode_job_definition.html 2013-08-28 13:47:23 +0000 |
736 | @@ -0,0 +1,21 @@ |
737 | +{% extends "lava_scheduler_app/job_sidebar.html" %} |
738 | + |
739 | +{% block extrahead %} |
740 | +{{ block.super }} |
741 | +<script type="text/javascript" src="{{ STATIC_URL }}lava_scheduler_app/js/shCore.js"></script> |
742 | +<script type="text/javascript" src="{{ STATIC_URL }}lava_scheduler_app/js/shBrushJScript.js"></script> |
743 | + |
744 | +<link href="{{ STATIC_URL }}lava_scheduler_app/css/shCore.css" rel="stylesheet" type="text/css" /> |
745 | +<link href="{{ STATIC_URL }}lava_scheduler_app/css/shThemeDefault.css" rel="stylesheet" type="text/css" /> |
746 | +{% endblock %} |
747 | + |
748 | +{% block content %} |
749 | +<h2>Multinode Job definition file - {{ job.sub_id }} </h2> |
750 | +<a href="{% url lava.scheduler.job.multinode_definition.plain job.pk %}">Download as text file</a> |
751 | +<pre class="brush: js">{{ job.multinode_definition }}</pre> |
752 | + |
753 | +<script type="text/javascript"> |
754 | + SyntaxHighlighter.all() |
755 | +</script> |
756 | + |
757 | +{% endblock %} |
758 | |
759 | === modified file 'lava_scheduler_app/urls.py' |
760 | --- lava_scheduler_app/urls.py 2013-07-17 12:20:25 +0000 |
761 | +++ lava_scheduler_app/urls.py 2013-08-28 13:47:23 +0000 |
762 | @@ -79,8 +79,19 @@ |
763 | 'job_definition', |
764 | name='lava.scheduler.job.definition'), |
765 | url(r'^job/(?P<pk>[0-9]+)/definition/plain$', |
766 | - 'job_definition_plain', |
767 | - name='lava.scheduler.job.definition.plain'), |
768 | +<<<<<<< TREE |
769 | + 'job_definition_plain', |
770 | + name='lava.scheduler.job.definition.plain'), |
771 | +======= |
772 | + 'job_definition_plain', |
773 | + name='lava.scheduler.job.definition.plain'), |
774 | + url(r'^job/(?P<pk>[0-9]+)/multinode_definition$', |
775 | + 'multinode_job_definition', |
776 | + name='lava.scheduler.job.multinode_definition'), |
777 | + url(r'^job/(?P<pk>[0-9]+)/multinode_definition/plain$', |
778 | + 'multinode_job_definition_plain', |
779 | + name='lava.scheduler.job.multinode_definition.plain'), |
780 | +>>>>>>> MERGE-SOURCE |
781 | url(r'^job/(?P<pk>[0-9]+)/log_file$', |
782 | 'job_log_file', |
783 | name='lava.scheduler.job.log_file'), |
784 | |
785 | === added file 'lava_scheduler_app/utils.py' |
786 | --- lava_scheduler_app/utils.py 1970-01-01 00:00:00 +0000 |
787 | +++ lava_scheduler_app/utils.py 2013-08-28 13:47:23 +0000 |
788 | @@ -0,0 +1,117 @@ |
789 | +# Copyright (C) 2013 Linaro Limited |
790 | +# |
791 | +# Author: Neil Williams <neil.williams@linaro.org> |
792 | +# Senthil Kumaran <senthil.kumaran@linaro.org> |
793 | +# |
794 | +# This file is part of LAVA Scheduler. |
795 | +# |
796 | +# LAVA Scheduler is free software: you can redistribute it and/or modify it |
797 | +# under the terms of the GNU Affero General Public License version 3 as |
798 | +# published by the Free Software Foundation |
799 | +# |
800 | +# LAVA Scheduler is distributed in the hope that it will be useful, but |
801 | +# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY |
802 | +# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for |
803 | +# more details. |
804 | +# |
805 | +# You should have received a copy of the GNU Affero General Public License |
806 | +# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>. |
807 | + |
808 | +import re |
809 | +import copy |
810 | +import socket |
811 | +import urlparse |
812 | +import simplejson |
813 | + |
814 | + |
815 | +def rewrite_hostname(result_url): |
816 | + """If URL has hostname value as localhost/127.0.0.*, change it to the |
817 | + actual server FQDN. |
818 | + |
819 | + Returns the RESULT_URL (string) re-written with hostname. |
820 | + |
821 | + See https://cards.linaro.org/browse/LAVA-611 |
822 | + """ |
823 | + host = urlparse.urlparse(result_url).netloc |
824 | + if host == "localhost": |
825 | + result_url = result_url.replace("localhost", socket.getfqdn()) |
826 | + elif host.startswith("127.0.0"): |
827 | + ip_pat = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' |
828 | + result_url = re.sub(ip_pat, socket.getfqdn(), result_url) |
829 | + return result_url |
830 | + |
831 | + |
832 | +def split_multi_job(json_jobdata, target_group): |
833 | + node_json = {} |
834 | + all_nodes = {} |
835 | + node_actions = {} |
836 | + |
837 | + # Check if we are operating on multinode job data. Else return the job |
838 | + # data as it is. |
839 | + if "device_group" in json_jobdata and target_group: |
840 | + pass |
841 | + else: |
842 | + return json_jobdata |
843 | + |
844 | + # get all the roles and create node action list for each role. |
845 | + for group in json_jobdata["device_group"]: |
846 | + node_actions[group["role"]] = [] |
847 | + |
848 | + # Take each action and assign it to proper roles. If roles are not |
849 | + # specified for a specific action, then assign it to all the roles. |
850 | + all_actions = json_jobdata["actions"] |
851 | + for role in node_actions.keys(): |
852 | + for action in all_actions: |
853 | + new_action = copy.deepcopy(action) |
854 | + if 'parameters' in new_action \ |
855 | + and 'role' in new_action["parameters"]: |
856 | + if new_action["parameters"]["role"] == role: |
857 | + new_action["parameters"].pop('role', None) |
858 | + node_actions[role].append(new_action) |
859 | + else: |
860 | + node_actions[role].append(new_action) |
861 | + |
862 | + group_count = 0 |
863 | + for clients in json_jobdata["device_group"]: |
864 | + group_count += int(clients["count"]) |
865 | + for clients in json_jobdata["device_group"]: |
866 | + role = str(clients["role"]) |
867 | + count = int(clients["count"]) |
868 | + node_json[role] = [] |
869 | + for c in range(0, count): |
870 | + node_json[role].append({}) |
871 | + node_json[role][c]["timeout"] = json_jobdata["timeout"] |
872 | + node_json[role][c]["job_name"] = json_jobdata["job_name"] |
873 | + node_json[role][c]["tags"] = clients["tags"] |
874 | + node_json[role][c]["group_size"] = group_count |
875 | + node_json[role][c]["target_group"] = target_group |
876 | + node_json[role][c]["actions"] = node_actions[role] |
877 | + |
878 | + node_json[role][c]["role"] = role |
879 | + # multinode node stage 2 |
880 | + node_json[role][c]["logging_level"] = json_jobdata["logging_level"] |
881 | + node_json[role][c]["device_type"] = clients["device_type"] |
882 | + |
883 | + return node_json |
884 | + |
885 | + |
886 | +def requested_device_count(json_data): |
887 | + """Utility function check the requested number of devices for each |
888 | + device_type in a multinode job. |
889 | + |
890 | + JSON_DATA is the job definition string. |
891 | + |
892 | + Returns requested_devices, a dictionary of the following format: |
893 | + |
894 | + {'kvm': 1, 'qemu': 3, 'panda': 1} |
895 | + |
896 | + If the job is not a multinode job, then return an empty dictionary. |
897 | + """ |
898 | + job_data = simplejson.loads(json_data) |
899 | + requested_devices = {} |
900 | + if 'device_group' in job_data: |
901 | + for device_group in job_data['device_group']: |
902 | + device_type = device_group['device_type'] |
903 | + count = device_group['count'] |
904 | + requested_devices[device_type] = count |
905 | + return requested_devices |
906 | |
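`requested_device_count()` is pure JSON-in, dict-out, so it is easy to exercise outside Django. The sketch below restates the function from this branch using the stdlib `json` module in place of `simplejson`; the sample job definition is invented for illustration:

```python
import json

def requested_device_count(json_data):
    """Per-device_type request counts for a multinode job definition.

    Returns e.g. {'kvm': 1, 'panda': 2}; an empty dict for a
    single-node job. Body mirrors lava_scheduler_app/utils.py from
    this branch, with stdlib json substituted for simplejson.
    """
    job_data = json.loads(json_data)
    requested_devices = {}
    if 'device_group' in job_data:
        for device_group in job_data['device_group']:
            requested_devices[device_group['device_type']] = \
                device_group['count']
    return requested_devices

# Invented two-role multinode definition for illustration.
multinode = json.dumps({
    "device_group": [
        {"role": "client", "device_type": "kvm", "count": 1},
        {"role": "server", "device_type": "panda", "count": 2},
    ]
})
print(requested_device_count(multinode))   # {'kvm': 1, 'panda': 2}
print(requested_device_count('{"device_type": "kvm"}'))   # {}
```

The resulting dictionary feeds `check_device_availability()` in models.py, which compares each requested count against the idle/running devices of that type before the job is accepted.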
907 | === modified file 'lava_scheduler_app/views.py' |
908 | --- lava_scheduler_app/views.py 2013-07-17 12:20:25 +0000 |
909 | +++ lava_scheduler_app/views.py 2013-08-28 13:47:23 +0000 |
910 | @@ -50,7 +50,12 @@ |
911 | Device, |
912 | DeviceType, |
913 | DeviceStateTransition, |
914 | - TestJob, |
915 | +<<<<<<< TREE |
916 | + TestJob, |
917 | +======= |
918 | + TestJob, |
919 | + JSONDataError, |
920 | +>>>>>>> MERGE-SOURCE |
921 | validate_job_json, |
922 | ) |
923 | |
924 | @@ -74,10 +79,16 @@ |
925 | |
926 | |
927 | def pklink(record): |
928 | + job_id = record.pk |
929 | + try: |
930 | + if record.sub_id: |
931 | + job_id = record.sub_id |
932 | + except AttributeError: |
933 | + pass |
934 | return mark_safe( |
935 | '<a href="%s">%s</a>' % ( |
936 | record.get_absolute_url(), |
937 | - escape(record.pk))) |
938 | + escape(job_id))) |
939 | |
940 | |
941 | class IDLinkColumn(Column): |
942 | @@ -100,11 +111,19 @@ |
943 | |
944 | |
945 | def all_jobs_with_device_sort(): |
946 | +<<<<<<< TREE |
947 | return TestJob.objects.select_related( |
948 | "actual_device", "requested_device", "requested_device_type", |
949 | "submitter", "user", "group")\ |
950 | .extra(select={'device_sort': 'coalesce(actual_device_id, requested_device_id, ' |
951 | 'requested_device_type_id)'}).all() |
952 | +======= |
953 | + jobs = TestJob.objects.select_related("actual_device", "requested_device", |
954 | + "requested_device_type", "submitter", "user", "group")\ |
955 | + .extra(select={'device_sort': 'coalesce(actual_device_id, ' |
956 | + 'requested_device_id, requested_device_type_id)'}).all() |
957 | + return jobs.order_by('submit_time') |
958 | +>>>>>>> MERGE-SOURCE |
959 | |
960 | |
961 | class JobTable(DataTablesTable): |
962 | @@ -124,7 +143,7 @@ |
963 | else: |
964 | return '' |
965 | |
966 | - id = RestrictedIDLinkColumn() |
967 | + sub_id = RestrictedIDLinkColumn() |
968 | status = Column() |
969 | priority = Column() |
970 | device = Column(accessor='device_sort') |
971 | @@ -135,9 +154,15 @@ |
972 | duration = Column() |
973 | |
974 | datatable_opts = { |
975 | +<<<<<<< TREE |
976 | 'aaSorting': [[0, 'desc']], |
977 | } |
978 | searchable_columns = ['description'] |
979 | +======= |
980 | + 'aaSorting': [[6, 'desc']], |
981 | + } |
982 | + searchable_columns = ['description'] |
983 | +>>>>>>> MERGE-SOURCE |
984 | |
985 | |
986 | class IndexJobTable(JobTable): |
987 | @@ -296,6 +321,10 @@ |
988 | class Meta: |
989 | exclude = ('status', 'submitter', 'end_time', 'priority', 'description') |
990 | |
991 | + datatable_opts = { |
992 | + 'aaSorting': [[2, 'desc']], |
993 | + } |
994 | + |
995 | |
996 | def failed_jobs_json(request): |
997 | return FailedJobTable.json(request, params=(request,)) |
998 | @@ -499,6 +528,10 @@ |
999 | class Meta: |
1000 | exclude = ('description', 'device') |
1001 | |
1002 | + datatable_opts = { |
1003 | + 'aaSorting': [[4, 'desc']], |
1004 | + } |
1005 | + |
1006 | |
1007 | def health_jobs_json(request, pk): |
1008 | device = get_object_or_404(Device, pk=pk) |
1009 | @@ -582,12 +615,15 @@ |
1010 | job = TestJob.from_json_and_user( |
1011 | request.POST.get("json-input"), request.user) |
1012 | |
1013 | - response_data["job_id"] = job.id |
1014 | + if isinstance(job, list): |
1015 | + response_data["job_list"] = job |
1016 | + else: |
1017 | + response_data["job_id"] = job.id |
1018 | return render_to_response( |
1019 | "lava_scheduler_app/job_submit.html", |
1020 | response_data, RequestContext(request)) |
1021 | |
1022 | - except Exception as e: |
1023 | + except (JSONDataError, ValueError) as e: |
1024 | response_data["error"] = str(e) |
1025 | response_data["json_input"] = request.POST.get("json-input") |
1026 | return render_to_response( |
1027 | @@ -660,7 +696,30 @@ |
1028 | def job_definition_plain(request, pk): |
1029 | job = get_restricted_job(request.user, pk) |
1030 | response = HttpResponse(job.definition, mimetype='text/plain') |
1031 | - response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id |
1032 | +<<<<<<< TREE |
1033 | + response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id |
1034 | +======= |
1035 | + response['Content-Disposition'] = "attachment; filename=job_%d.json" % job.id |
1036 | + return response |
1037 | + |
1038 | + |
1039 | +def multinode_job_definition(request, pk): |
1040 | + job = get_restricted_job(request.user, pk) |
1041 | + log_file = job.output_file() |
1042 | + return render_to_response( |
1043 | + "lava_scheduler_app/multinode_job_definition.html", |
1044 | + { |
1045 | + 'job': job, |
1046 | + 'job_file_present': bool(log_file), |
1047 | + }, |
1048 | + RequestContext(request)) |
1049 | + |
1050 | + |
1051 | +def multinode_job_definition_plain(request, pk): |
1052 | + job = get_restricted_job(request.user, pk) |
1053 | + response = HttpResponse(job.multinode_definition, mimetype='text/plain') |
1054 | + response['Content-Disposition'] = "attachment; filename=multinode_job_%d.json" % job.id |
1055 | +>>>>>>> MERGE-SOURCE |
1056 | return response |
1057 | |
1058 | |
1059 | @@ -764,7 +823,13 @@ |
1060 | def job_cancel(request, pk): |
1061 | job = get_restricted_job(request.user, pk) |
1062 | if job.can_cancel(request.user): |
1063 | - job.cancel() |
1064 | + if job.is_multinode: |
1065 | + multinode_jobs = TestJob.objects.all().filter( |
1066 | + target_group=job.target_group) |
1067 | + for multinode_job in multinode_jobs: |
1068 | + multinode_job.cancel() |
1069 | + else: |
1070 | + job.cancel() |
1071 | return redirect(job) |
1072 | else: |
1073 | return HttpResponseForbidden( |
1074 | @@ -773,11 +838,38 @@ |
1075 | |
1076 | @post_only |
1077 | def job_resubmit(request, pk): |
1078 | + |
1079 | + response_data = { |
1080 | + 'is_authorized': False, |
1081 | + 'bread_crumb_trail': BreadCrumbTrail.leading_to(job_list), |
1082 | + } |
1083 | + |
1084 | job = get_restricted_job(request.user, pk) |
1085 | if job.can_resubmit(request.user): |
1086 | - definition = job.definition |
1087 | - job = TestJob.from_json_and_user(definition, request.user) |
1088 | - return redirect(job) |
1089 | + response_data["is_authorized"] = True |
1090 | + |
1091 | + if job.is_multinode: |
1092 | + definition = job.multinode_definition |
1093 | + else: |
1094 | + definition = job.definition |
1095 | + |
1096 | + try: |
1097 | + job = TestJob.from_json_and_user(definition, request.user) |
1098 | + |
1099 | + if isinstance(job, type(list())): |
1100 | + response_data["job_list"] = job |
1101 | + return render_to_response( |
1102 | + "lava_scheduler_app/job_submit.html", |
1103 | + response_data, RequestContext(request)) |
1104 | + else: |
1105 | + return redirect(job) |
1106 | + except Exception as e: |
1107 | + response_data["error"] = str(e) |
1108 | + response_data["json_input"] = definition |
1109 | + return render_to_response( |
1110 | + "lava_scheduler_app/job_submit.html", |
1111 | + response_data, RequestContext(request)) |
1112 | + |
1113 | else: |
1114 | return HttpResponseForbidden( |
1115 | "you cannot re-submit this job", content_type="text/plain") |
1116 | |
1117 | === renamed file 'lava_scheduler_daemon/board.py' => 'lava_scheduler_daemon/board.py.THIS' |
1118 | === modified file 'lava_scheduler_daemon/dbjobsource.py' |
1119 | --- lava_scheduler_daemon/dbjobsource.py 2013-07-17 12:20:25 +0000 |
1120 | +++ lava_scheduler_daemon/dbjobsource.py 2013-08-28 13:47:23 +0000 |
1121 | @@ -3,6 +3,7 @@ |
1122 | import os |
1123 | import shutil |
1124 | import urlparse |
1125 | +import copy |
1126 | |
1127 | from dashboard_app.models import Bundle |
1128 | |
1129 | @@ -92,21 +93,135 @@ |
1130 | transaction.leave_transaction_management() |
1131 | return self.deferToThread(wrapper, *args, **kw) |
1132 | |
1133 | - def getBoardList_impl(self): |
1134 | + def _get_health_check_jobs(self): |
1135 | + """Gets the list of configured boards and checks which are the boards |
1136 | + that require health check. |
1137 | + |
1138 | + Returns JOB_LIST which is a set of health check jobs. If no health |
1139 | + check jobs are available returns an empty set. |
1140 | + """ |
1141 | + job_list = set() |
1142 | configured_boards = [ |
1143 | x.hostname for x in dispatcher_config.get_devices()] |
1144 | boards = [] |
1145 | for d in Device.objects.all(): |
1146 | if d.hostname in configured_boards: |
1147 | - boards.append({'hostname': d.hostname}) |
1148 | - return boards |
1149 | - |
1150 | - def getBoardList(self): |
1151 | - return self.deferForDB(self.getBoardList_impl) |
1152 | + boards.append(d) |
1153 | + |
1154 | + for device in boards: |
1155 | + if device.status != Device.IDLE: |
1156 | + continue |
1157 | + if not device.device_type.health_check_job: |
1158 | + run_health_check = False |
1159 | + elif device.health_status == Device.HEALTH_UNKNOWN: |
1160 | + run_health_check = True |
1161 | + elif device.health_status == Device.HEALTH_LOOPING: |
1162 | + run_health_check = True |
1163 | + elif not device.last_health_report_job: |
1164 | + run_health_check = True |
1165 | + else: |
1166 | + run_health_check = device.last_health_report_job.end_time < \ |
1167 | + datetime.datetime.now() - datetime.timedelta(days=1) |
1168 | + if run_health_check: |
1169 | + job_list.add(self._getHealthCheckJobForBoard(device)) |
1170 | + return job_list |
1171 | + |
1172 | + def _fix_device(self, device, job): |
1173 | + """Associate an available/idle DEVICE to the given JOB. |
1174 | + |
1175 | + Returns the job with actual_device set to DEVICE. |
1176 | + |
1177 | + If we are unable to grab the DEVICE then we return None. |
1178 | + """ |
1179 | + DeviceStateTransition.objects.create( |
1180 | + created_by=None, device=device, old_state=device.status, |
1181 | + new_state=Device.RUNNING, message=None, job=job).save() |
1182 | + device.status = Device.RUNNING |
1183 | + device.current_job = job |
1184 | + try: |
1185 | + # The unique constraint on current_job may cause this to |
1186 | + # fail in the case of concurrent requests for different |
1187 | + # boards grabbing the same job. If there are concurrent |
1188 | + # requests for the *same* board they may both return the |
1189 | + # same job -- this is an application level bug though. |
1190 | + device.save() |
1191 | + except IntegrityError: |
1192 | + self.logger.info( |
1193 | + "job %s has been assigned to another board -- rolling back", |
1194 | + job.id) |
1195 | + transaction.rollback() |
1196 | + return None |
1197 | + else: |
1198 | + job.actual_device = device |
1199 | + job.log_file.save( |
1200 | + 'job-%s.log' % job.id, ContentFile(''), save=False) |
1201 | + job.submit_token = AuthToken.objects.create(user=job.submitter) |
1202 | + job.definition = simplejson.dumps(self._get_json_data(job), |
1203 | + sort_keys=True, |
1204 | + indent=4 * ' ') |
1205 | + job.save() |
1206 | + transaction.commit() |
1207 | + return job |
1208 | + |
1209 | + def getJobList_impl(self): |
1210 | + jobs = TestJob.objects.all().filter( |
1211 | + status=TestJob.SUBMITTED).order_by('-priority', 'submit_time') |
1212 | + job_list = self._get_health_check_jobs() |
1213 | + devices = None |
1214 | + configured_boards = [ |
1215 | + x.hostname for x in dispatcher_config.get_devices()] |
1216 | + self.logger.debug("Number of configured_devices: %d" % len(configured_boards)) |
1217 | + for job in jobs: |
1218 | + if job.actual_device: |
1219 | + job_list.add(job) |
1220 | + elif job.requested_device: |
1221 | + self.logger.debug("Checking Requested Device") |
1222 | + devices = Device.objects.all().filter( |
1223 | + hostname=job.requested_device.hostname, |
1224 | + status=Device.IDLE) |
1225 | + elif job.requested_device_type: |
1226 | + self.logger.debug("Checking Requested Device Type") |
1227 | + devices = Device.objects.all().filter( |
1228 | + device_type=job.requested_device_type, |
1229 | + status=Device.IDLE) |
1230 | + else: |
1231 | + continue |
1232 | + if devices: |
1233 | + for d in devices: |
1234 | + self.logger.debug("Checking %s" % d.hostname) |
1235 | + if d.hostname in configured_boards: |
1236 | + if job: |
1237 | + job = self._fix_device(d, job) |
1238 | + if job: |
1239 | + job_list.add(job) |
1240 | + |
1241 | + # Remove scheduling multinode jobs until all the jobs in the |
1242 | + # target_group are assigned devices. |
1243 | + final_job_list = copy.deepcopy(job_list) |
1244 | + for job in job_list: |
1245 | + if job.is_multinode: |
1246 | + multinode_jobs = TestJob.objects.all().filter( |
1247 | + target_group=job.target_group) |
1248 | + |
1249 | + jobs_with_device = 0 |
1250 | + for multinode_job in multinode_jobs: |
1251 | + if multinode_job.actual_device: |
1252 | + jobs_with_device += 1 |
1253 | + |
1254 | + if len(multinode_jobs) != jobs_with_device: |
1255 | + final_job_list.difference_update(set(multinode_jobs)) |
1256 | + |
1257 | + return final_job_list |
1258 | + |
1259 | + def getJobList(self): |
1260 | + return self.deferForDB(self.getJobList_impl) |
1261 | |
1262 | def _get_json_data(self, job): |
1263 | json_data = simplejson.loads(job.definition) |
1264 | - json_data['target'] = job.actual_device.hostname |
1265 | + if job.actual_device: |
1266 | + json_data['target'] = job.actual_device.hostname |
1267 | + elif job.requested_device: |
1268 | + json_data['target'] = job.requested_device.hostname |
1269 | for action in json_data['actions']: |
1270 | if not action['command'].startswith('submit_results'): |
1271 | continue |
1272 | @@ -171,64 +286,19 @@ |
1273 | else: |
1274 | return None |
1275 | |
1276 | - def getJobForBoard_impl(self, board_name): |
1277 | - while True: |
1278 | - device = Device.objects.get(hostname=board_name) |
1279 | - if device.status != Device.IDLE: |
1280 | - return None |
1281 | - if not device.device_type.health_check_job: |
1282 | - run_health_check = False |
1283 | - elif device.health_status == Device.HEALTH_UNKNOWN: |
1284 | - run_health_check = True |
1285 | - elif device.health_status == Device.HEALTH_LOOPING: |
1286 | - run_health_check = True |
1287 | - elif not device.last_health_report_job: |
1288 | - run_health_check = True |
1289 | - else: |
1290 | - run_health_check = device.last_health_report_job.end_time < datetime.datetime.now() - datetime.timedelta(days=1) |
1291 | - if run_health_check: |
1292 | - job = self._getHealthCheckJobForBoard(device) |
1293 | - else: |
1294 | - job = self._getJobFromQueue(device) |
1295 | - if job: |
1296 | - DeviceStateTransition.objects.create( |
1297 | - created_by=None, device=device, old_state=device.status, |
1298 | - new_state=Device.RUNNING, message=None, job=job).save() |
1299 | - job.status = TestJob.RUNNING |
1300 | - job.start_time = datetime.datetime.utcnow() |
1301 | - job.actual_device = device |
1302 | - device.status = Device.RUNNING |
1303 | - shutil.rmtree(job.output_dir, ignore_errors=True) |
1304 | - device.current_job = job |
1305 | - try: |
1306 | - # The unique constraint on current_job may cause this to |
1307 | - # fail in the case of concurrent requests for different |
1308 | - # boards grabbing the same job. If there are concurrent |
1309 | - # requests for the *same* board they may both return the |
1310 | - # same job -- this is an application level bug though. |
1311 | - device.save() |
1312 | - except IntegrityError: |
1313 | - self.logger.info( |
1314 | - "job %s has been assigned to another board -- " |
1315 | - "rolling back", job.id) |
1316 | - transaction.rollback() |
1317 | - continue |
1318 | - else: |
1319 | - job.log_file.save( |
1320 | - 'job-%s.log' % job.id, ContentFile(''), save=False) |
1321 | - job.submit_token = AuthToken.objects.create(user=job.submitter) |
1322 | - job.save() |
1323 | - json_data = self._get_json_data(job) |
1324 | - transaction.commit() |
1325 | - return json_data |
1326 | - else: |
1327 | - # _getHealthCheckJobForBoard can offline the board, so commit |
1328 | - # in this branch too. |
1329 | - transaction.commit() |
1330 | - return None |
1331 | + def getJobDetails_impl(self, job): |
1332 | + job.status = TestJob.RUNNING |
1333 | + job.start_time = datetime.datetime.utcnow() |
1334 | + shutil.rmtree(job.output_dir, ignore_errors=True) |
1335 | + job.log_file.save('job-%s.log' % job.id, ContentFile(''), save=False) |
1336 | + job.submit_token = AuthToken.objects.create(user=job.submitter) |
1337 | + job.save() |
1338 | + json_data = self._get_json_data(job) |
1339 | + transaction.commit() |
1340 | + return json_data |
1341 | |
1342 | - def getJobForBoard(self, board_name): |
1343 | - return self.deferForDB(self.getJobForBoard_impl, board_name) |
1344 | + def getJobDetails(self, job): |
1345 | + return self.deferForDB(self.getJobDetails_impl, job) |
1346 | |
1347 | def getOutputDirForJobOnBoard_impl(self, board_name): |
1348 | device = Device.objects.get(hostname=board_name) |
1349 | |
1350 | === added file 'lava_scheduler_daemon/job.py' |
1351 | --- lava_scheduler_daemon/job.py 1970-01-01 00:00:00 +0000 |
1352 | +++ lava_scheduler_daemon/job.py 2013-08-28 13:47:23 +0000 |
1353 | @@ -0,0 +1,281 @@ |
1354 | +# Copyright (C) 2013 Linaro Limited |
1355 | +# |
1356 | +# Author: Senthil Kumaran <senthil.kumaran@linaro.org> |
1357 | +# |
1358 | +# This file is part of LAVA Scheduler. |
1359 | +# |
1360 | +# LAVA Scheduler is free software: you can redistribute it and/or modify it |
1361 | +# under the terms of the GNU Affero General Public License version 3 as |
1362 | +# published by the Free Software Foundation |
1363 | +# |
1364 | +# LAVA Scheduler is distributed in the hope that it will be useful, but |
1365 | +# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY |
1366 | +# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for |
1367 | +# more details. |
1368 | +# |
1369 | +# You should have received a copy of the GNU Affero General Public License |
1370 | +# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>. |
1371 | + |
1372 | +import json |
1373 | +import os |
1374 | +import signal |
1375 | +import tempfile |
1376 | +import logging |
1377 | + |
1378 | +from twisted.internet.error import ProcessDone, ProcessExitedAlready |
1379 | +from twisted.internet.protocol import ProcessProtocol |
1380 | +from twisted.internet import defer, task |
1381 | + |
1382 | + |
1383 | +def catchall_errback(logger): |
1384 | + def eb(failure): |
1385 | + logger.error( |
1386 | + '%s: %s\n%s', failure.type.__name__, failure.value, |
1387 | + failure.getTraceback()) |
1388 | + return eb |
1389 | + |
1390 | + |
1391 | +class DispatcherProcessProtocol(ProcessProtocol): |
1392 | + |
1393 | + def __init__(self, deferred, job): |
1394 | + self.logger = logging.getLogger(__name__ + '.DispatcherProcessProtocol') |
1395 | + self.deferred = deferred |
1396 | + self.log_size = 0 |
1397 | + self.job = job |
1398 | + |
1399 | + def childDataReceived(self, childFD, data): |
1400 | + self.log_size += len(data) |
1401 | + if self.log_size > self.job.daemon_options['LOG_FILE_SIZE_LIMIT']: |
1402 | + if not self.job._killing: |
1403 | + self.job.cancel("exceeded log size limit") |
1404 | + |
1405 | + def childConnectionLost(self, childFD): |
1406 | + self.logger.info("childConnectionLost for %s: %s", |
1407 | + self.job.board_name, childFD) |
1408 | + |
1409 | + def processExited(self, reason): |
1410 | + self.logger.info("processExited for %s: %s", |
1411 | + self.job.board_name, reason.value) |
1412 | + |
1413 | + def processEnded(self, reason): |
1414 | + self.logger.info("processEnded for %s: %s", |
1415 | + self.job.board_name, reason.value) |
1416 | + self.deferred.callback(reason.value.exitCode) |
1417 | + |
1418 | + |
1419 | +class Job(object): |
1420 | + |
1421 | + def __init__(self, job_data, dispatcher, source, board_name, reactor, |
1422 | + daemon_options): |
1423 | + self.job_data = job_data |
1424 | + self.dispatcher = dispatcher |
1425 | + self.source = source |
1426 | + self.board_name = board_name |
1427 | + self.logger = logging.getLogger(__name__ + '.Job.' + board_name) |
1428 | + self.reactor = reactor |
1429 | + self.daemon_options = daemon_options |
1430 | + self._json_file = None |
1431 | + self._source_lock = defer.DeferredLock() |
1432 | + self._checkCancel_call = task.LoopingCall(self._checkCancel) |
1433 | + self._signals = ['SIGINT', 'SIGINT', 'SIGTERM', 'SIGTERM', 'SIGKILL'] |
1434 | + self._time_limit_call = None |
1435 | + self._killing = False |
1436 | + self._kill_reason = '' |
1437 | + |
1438 | + def _checkCancel(self): |
1439 | + if self._killing: |
1440 | + self.cancel() |
1441 | + else: |
1442 | + return self._source_lock.run( |
1443 | + self.source.jobCheckForCancellation, |
1444 | + self.board_name).addCallback(self._maybeCancel) |
1445 | + |
1446 | + def cancel(self, reason=None): |
1447 | + if not self._killing: |
1448 | + if reason is None: |
1449 | + reason = "killing job for unknown reason" |
1450 | + self._kill_reason = reason |
1451 | + self.logger.info(reason) |
1452 | + self._killing = True |
1453 | + if self._signals: |
1454 | + signame = self._signals.pop(0) |
1455 | + else: |
1456 | + self.logger.warning("self._signals is empty!") |
1457 | + signame = 'SIGKILL' |
1458 | + self.logger.info( |
1459 | + 'attempting to kill job with signal %s' % signame) |
1460 | + try: |
1461 | + self._protocol.transport.signalProcess(getattr(signal, signame)) |
1462 | + except ProcessExitedAlready: |
1463 | + pass |
1464 | + |
1465 | + def _maybeCancel(self, cancel): |
1466 | + if cancel: |
1467 | + self.cancel("killing job by user request") |
1468 | + else: |
1469 | + logging.debug('not cancelling') |
1470 | + |
1471 | + def _time_limit_exceeded(self): |
1472 | + self._time_limit_call = None |
1473 | + self.cancel("killing job for exceeding timeout") |
1474 | + |
1475 | + def run(self): |
1476 | + d = self.source.getOutputDirForJobOnBoard(self.board_name) |
1477 | + return d.addCallback(self._run).addErrback( |
1478 | + catchall_errback(self.logger)) |
1479 | + |
1480 | + def _run(self, output_dir): |
1481 | + d = defer.Deferred() |
1482 | + json_data = self.job_data |
1483 | + fd, self._json_file = tempfile.mkstemp() |
1484 | + with os.fdopen(fd, 'wb') as f: |
1485 | + json.dump(json_data, f) |
1486 | + self._protocol = DispatcherProcessProtocol(d, self) |
1487 | + self.reactor.spawnProcess( |
1488 | + self._protocol, self.dispatcher, args=[ |
1489 | + self.dispatcher, self._json_file, '--output-dir', output_dir], |
1490 | + childFDs={0: 0, 1: 'r', 2: 'r'}, env=None) |
1491 | + self._checkCancel_call.start(10) |
1492 | + timeout = max( |
1493 | + json_data['timeout'], self.daemon_options['MIN_JOB_TIMEOUT']) |
1494 | + self._time_limit_call = self.reactor.callLater( |
1495 | + timeout, self._time_limit_exceeded) |
1496 | + d.addBoth(self._exited) |
1497 | + return d |
1498 | + |
1499 | + def _exited(self, exit_code): |
1500 | + self.logger.info("job finished on %s", self.job_data['target']) |
1501 | + if self._json_file is not None: |
1502 | + os.unlink(self._json_file) |
1503 | + self.logger.info("reporting job completed") |
1504 | + if self._time_limit_call is not None: |
1505 | + self._time_limit_call.cancel() |
1506 | + self._checkCancel_call.stop() |
1507 | + return self._source_lock.run( |
1508 | + self.source.jobCompleted, |
1509 | + self.board_name, |
1510 | + exit_code, |
1511 | + self._killing).addCallback( |
1512 | + lambda r: exit_code) |
1513 | + |
1514 | + |
1515 | +class SchedulerMonitorPP(ProcessProtocol): |
1516 | + |
1517 | + def __init__(self, d, board_name): |
1518 | + self.d = d |
1519 | + self.board_name = board_name |
1520 | + self.logger = logging.getLogger(__name__ + '.SchedulerMonitorPP') |
1521 | + |
1522 | + def childDataReceived(self, childFD, data): |
1523 | + self.logger.warning( |
1524 | + "scheduler monitor for %s produced output: %r on fd %s", |
1525 | + self.board_name, data, childFD) |
1526 | + |
1527 | + def processEnded(self, reason): |
1528 | + if not reason.check(ProcessDone): |
1529 | + self.logger.error( |
1530 | + "scheduler monitor for %s crashed: %s", |
1531 | + self.board_name, reason) |
1532 | + self.d.callback(None) |
1533 | + |
1534 | + |
1535 | +class MonitorJob(object): |
1536 | + |
1537 | + def __init__(self, job_data, dispatcher, source, board_name, reactor, |
1538 | + daemon_options): |
1539 | + self.logger = logging.getLogger(__name__ + '.MonitorJob') |
1540 | + self.job_data = job_data |
1541 | + self.dispatcher = dispatcher |
1542 | + self.source = source |
1543 | + self.board_name = board_name |
1544 | + self.reactor = reactor |
1545 | + self.daemon_options = daemon_options |
1546 | + self._json_file = None |
1547 | + |
1548 | + def run(self): |
1549 | + d = defer.Deferred() |
1550 | + json_data = self.job_data |
1551 | + fd, self._json_file = tempfile.mkstemp() |
1552 | + with os.fdopen(fd, 'wb') as f: |
1553 | + json.dump(json_data, f) |
1554 | + |
1555 | + childFDs = {0: 0, 1: 1, 2: 2} |
1556 | + args = [ |
1557 | + 'setsid', 'lava-server', 'manage', 'schedulermonitor', |
1558 | + self.dispatcher, str(self.board_name), self._json_file, |
1559 | + '-l', self.daemon_options['LOG_LEVEL']] |
1560 | + if self.daemon_options['LOG_FILE_PATH']: |
1561 | + args.extend(['-f', self.daemon_options['LOG_FILE_PATH']]) |
1562 | + childFDs = None |
1563 | + self.logger.info('executing "%s"', ' '.join(args)) |
1564 | + self.reactor.spawnProcess( |
1565 | + SchedulerMonitorPP(d, self.board_name), 'setsid', |
1566 | + childFDs=childFDs, env=None, args=args) |
1567 | + d.addBoth(self._exited) |
1568 | + return d |
1569 | + |
1570 | + def _exited(self, result): |
1571 | + if self._json_file is not None: |
1572 | + os.unlink(self._json_file) |
1573 | + return result |
1574 | + |
1575 | + |
1576 | +class JobRunner(object): |
1577 | + job_cls = MonitorJob |
1578 | + |
1579 | + def __init__(self, source, job, dispatcher, reactor, daemon_options, |
1580 | + job_cls=None): |
1581 | + self.source = source |
1582 | + self.dispatcher = dispatcher |
1583 | + self.reactor = reactor |
1584 | + self.daemon_options = daemon_options |
1585 | + self.job = job |
1586 | + if job.actual_device: |
1587 | + self.board_name = job.actual_device.hostname |
1588 | + elif job.requested_device: |
1589 | + self.board_name = job.requested_device.hostname |
1590 | + if job_cls is not None: |
1591 | + self.job_cls = job_cls |
1592 | + self.running_job = None |
1593 | + self.logger = logging.getLogger(__name__ + '.JobRunner.' + str(job.id)) |
1594 | + |
1595 | + def start(self): |
1596 | + self.logger.debug("processing job") |
1597 | + if self.job is None: |
1598 | + self.logger.debug("no job found for processing") |
1599 | + return |
1600 | + self.source.getJobDetails(self.job).addCallbacks( |
1601 | + self._startJob, self._ebStartJob) |
1602 | + |
1603 | + def _startJob(self, job_data): |
1604 | + if job_data is None: |
1605 | + self.logger.debug("no job found") |
1606 | + return |
1607 | + self.logger.info("starting job %r", job_data) |
1608 | + |
1609 | + self.running_job = self.job_cls( |
1610 | + job_data, self.dispatcher, self.source, self.board_name, |
1611 | + self.reactor, self.daemon_options) |
1612 | + d = self.running_job.run() |
1613 | + d.addCallbacks(self._cbJobFinished, self._ebJobFinished) |
1614 | + |
1615 | + def _ebStartJob(self, result): |
1616 | + self.logger.error( |
1617 | + '%s: %s\n%s', result.type.__name__, result.value, |
1618 | + result.getTraceback()) |
1619 | + return |
1620 | + |
1621 | + def stop(self): |
1622 | + self.logger.debug("stopping") |
1623 | + |
1624 | + if self.running_job is not None: |
1625 | + self.logger.debug("job running; deferring stop") |
1626 | + else: |
1627 | + self.logger.debug("stopping immediately") |
1628 | + return defer.succeed(None) |
1629 | + |
1630 | + def _ebJobFinished(self, result): |
1631 | + self.logger.exception(result.value) |
1632 | + |
1633 | + def _cbJobFinished(self, result): |
1634 | + self.running_job = None |
1635 | |
1636 | === modified file 'lava_scheduler_daemon/service.py' |
1637 | --- lava_scheduler_daemon/service.py 2012-12-03 05:03:38 +0000 |
1638 | +++ lava_scheduler_daemon/service.py 2013-08-28 13:47:23 +0000 |
1639 | @@ -1,58 +1,56 @@ |
1640 | +# Copyright (C) 2013 Linaro Limited |
1641 | +# |
1642 | +# Author: Senthil Kumaran <senthil.kumaran@linaro.org> |
1643 | +# |
1644 | +# This file is part of LAVA Scheduler. |
1645 | +# |
1646 | +# LAVA Scheduler is free software: you can redistribute it and/or modify it |
1647 | +# under the terms of the GNU Affero General Public License version 3 as |
1648 | +# published by the Free Software Foundation |
1649 | +# |
1650 | +# LAVA Scheduler is distributed in the hope that it will be useful, but |
1651 | +# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY |
1652 | +# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for |
1653 | +# more details. |
1654 | +# |
1655 | +# You should have received a copy of the GNU Affero General Public License |
1656 | +# along with LAVA Scheduler. If not, see <http://www.gnu.org/licenses/>. |
1657 | + |
1658 | import logging |
1659 | |
1660 | from twisted.application.service import Service |
1661 | from twisted.internet import defer |
1662 | from twisted.internet.task import LoopingCall |
1663 | |
1664 | -from lava_scheduler_daemon.board import Board, catchall_errback |
1665 | - |
1666 | - |
1667 | -class BoardSet(Service): |
1668 | +from lava_scheduler_daemon.job import JobRunner, catchall_errback |
1669 | + |
1670 | + |
1671 | +class JobQueue(Service): |
1672 | |
1673 | def __init__(self, source, dispatcher, reactor, daemon_options): |
1674 | - self.logger = logging.getLogger(__name__ + '.BoardSet') |
1675 | + self.logger = logging.getLogger(__name__ + '.JobQueue') |
1676 | self.source = source |
1677 | - self.boards = {} |
1678 | self.dispatcher = dispatcher |
1679 | self.reactor = reactor |
1680 | self.daemon_options = daemon_options |
1681 | - self._update_boards_call = LoopingCall(self._updateBoards) |
1682 | - self._update_boards_call.clock = reactor |
1683 | - |
1684 | - def _updateBoards(self): |
1685 | - self.logger.debug("Refreshing board list") |
1686 | - return self.source.getBoardList().addCallback( |
1687 | - self._cbUpdateBoards).addErrback(catchall_errback(self.logger)) |
1688 | - |
1689 | - def _cbUpdateBoards(self, board_cfgs): |
1690 | - '''board_cfgs is an array of dicts {hostname=name} ''' |
1691 | - new_boards = {} |
1692 | - for board_cfg in board_cfgs: |
1693 | - board_name = board_cfg['hostname'] |
1694 | - |
1695 | - if board_cfg['hostname'] in self.boards: |
1696 | - board = self.boards.pop(board_name) |
1697 | - new_boards[board_name] = board |
1698 | - else: |
1699 | - self.logger.info("Adding board: %s" % board_name) |
1700 | - new_boards[board_name] = Board( |
1701 | - self.source, board_name, self.dispatcher, self.reactor, |
1702 | - self.daemon_options) |
1703 | - new_boards[board_name].start() |
1704 | - for board in self.boards.values(): |
1705 | - self.logger.info("Removing board: %s" % board.board_name) |
1706 | - board.stop() |
1707 | - self.boards = new_boards |
1708 | + self._check_job_call = LoopingCall(self._checkJobs) |
1709 | + self._check_job_call.clock = reactor |
1710 | + |
1711 | + def _checkJobs(self): |
1712 | + self.logger.debug("Refreshing jobs") |
1713 | + return self.source.getJobList().addCallback( |
1714 | + self._cbCheckJobs).addErrback(catchall_errback(self.logger)) |
1715 | + |
1716 | + def _cbCheckJobs(self, job_list): |
1717 | + for job in job_list: |
1718 | + new_job = JobRunner(self.source, job, self.dispatcher, |
1719 | + self.reactor, self.daemon_options) |
1720 | + self.logger.info("Starting Job: %d " % job.id) |
1721 | + new_job.start() |
1722 | |
1723 | def startService(self): |
1724 | - self._update_boards_call.start(20) |
1725 | + self._check_job_call.start(20) |
1726 | |
1727 | def stopService(self): |
1728 | - self._update_boards_call.stop() |
1729 | - ds = [] |
1730 | - dead_boards = [] |
1731 | - for board in self.boards.itervalues(): |
1732 | - ds.append(board.stop().addCallback(dead_boards.append)) |
1733 | - self.logger.info( |
1734 | - "waiting for %s boards", len(self.boards) - len(dead_boards)) |
1735 | - return defer.gatherResults(ds) |
1736 | + self._check_job_call.stop() |
1737 | + return None |
1738 | |
1739 | === modified file 'lava_scheduler_daemon/tests/test_board.py' |
1740 | --- lava_scheduler_daemon/tests/test_board.py 2013-07-17 12:20:25 +0000 |
1741 | +++ lava_scheduler_daemon/tests/test_board.py 2013-08-28 13:47:23 +0000 |
1742 | @@ -38,7 +38,7 @@ |
1743 | |
1744 | class TestJob(object): |
1745 | |
1746 | - def __init__(self, job_data, dispatcher, source, board_name, reactor, options, use_celery): |
1747 | + def __init__(self, job_data, dispatcher, source, board_name, reactor, options): |
1748 | self.json_data = job_data |
1749 | self.dispatcher = dispatcher |
1750 | self.reactor = reactor |
Hello guys,
Here are my comments on the current state of the code.
> === added file 'lava_scheduler_app/utils.py'
> --- lava_scheduler_app/utils.py	1970-01-01 00:00:00 +0000
> +++ lava_scheduler_app/utils.py	2013-08-24 08:58:27 +0000
> @@ -0,0 +1,116 @@
> +# Copyright (C) 2013 Linaro Limited
> +#
> +# Author: Neil Williams <email address hidden>
> +#         Senthil Kumaran <email address hidden>
> +#
> +# This file is part of LAVA Scheduler.
> +#
> +# LAVA Scheduler is free software: you can redistribute it and/or modify it
> +# under the terms of the GNU Affero General Public License version 3 as
> +# published by the Free Software Foundation
> +#
> +# LAVA Scheduler is distributed in the hope that it will be useful, but
> +# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
> +# or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> +# more details.
> +#
> +# You should have received a copy of the GNU Affero General Public License
> +# along with LAVA Scheduler.  If not, see <http://www.gnu.org/licenses/>.
> +
> +import re
> +import copy
> +import socket
> +import urlparse
> +import simplejson
> +
> +
> +def rewrite_hostname(result_url):
> +    """If URL has hostname value as localhost/127.0.0.*, change it to the
> +    actual server FQDN.
> +
> +    Returns the RESULT_URL (string) re-written with hostname.
> +
> +    See https://cards.linaro.org/browse/LAVA-611
> +    """
> +    host = urlparse.urlparse(result_url).netloc
> +    if host == "localhost":
> +        result_url = result_url.replace("localhost", socket.getfqdn())
> +    elif host.startswith("127.0.0"):
> +        ip_pat = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
> +        result_url = re.sub(ip_pat, socket.getfqdn(), result_url)
> +    return result_url
IMO we should deprecate the "server" parameter for submit_results, and always
submit to the master LAVA server somehow. We probably do not need to do this
right now, so I filed a bug for doing it in the future:
https://bugs.launchpad.net/lava-dispatcher/+bug/1217061
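For reference, the localhost/127.0.0.* rewrite quoted above can be exercised in isolation. This sketch passes the FQDN in as a parameter (the branch calls socket.getfqdn()) so the behaviour is deterministic, and uses the Python 3 spelling of the urlparse import; it is an illustration, not code from the branch:

```python
import re
from urllib.parse import urlparse  # Python 3 name for the urlparse module used in the branch


def rewrite_hostname(result_url, fqdn):
    # Replace a localhost or 127.0.0.* netloc with the given FQDN.
    host = urlparse(result_url).netloc
    if host == "localhost":
        result_url = result_url.replace("localhost", fqdn)
    elif host.startswith("127.0.0"):
        ip_pat = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
        result_url = re.sub(ip_pat, fqdn, result_url)
    return result_url


print(rewrite_hostname("http://localhost/RPC2/", "lava.example.org"))
print(rewrite_hostname("http://127.0.0.1/RPC2/", "lava.example.org"))
```

Worth noting in review: the str.replace branch would also rewrite a literal "localhost" appearing later in the URL, and a netloc carrying a port (e.g. "localhost:8080") does not match either branch.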
> +def split_multi_job(json_jobdata, target_group):
> +    node_json = {}
> +    all_nodes = {}
> +    node_actions = {}
> +    if "device_group" in json_jobdata:
this conditional is redundant since split_multi_job is already called in
the context where the same condition was already tested.
> +        # get all the roles and create node action list for each role.
> +        for group in json_jobdata["device_group"]:
> +            node_actions[group["role"]] = []
> +
> +        # Take each action and assign it to proper roles. If roles are not
> +        # specified for a specific action, then assign it to all the roles.
> +        all_actions = json_jobdata["actions"]
> +        for role in node_actions.keys():
> +            for action in all_actions:
> +                new_action = copy.deepcopy(action)
> +                if 'parameters' in new_action \
> +                        and 'role' in new_action["parameters"]:
> +                    if new_action["parameters"]["role"] == role:
> +                        new_action["parameters"].pop('role', None)
> +                        node_actions[role].appe...
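For context, the role-assignment loop quoted above (truncated here) amounts to the following sketch: role-tagged actions go only to the matching role (with the 'role' key stripped from their parameters), and untagged actions go to every role. The helper name and the sample job definition are illustrative, not from the branch:

```python
import copy


def split_actions_by_role(json_jobdata):
    # One action list per role declared in the device_group.
    node_actions = {g["role"]: [] for g in json_jobdata["device_group"]}
    for role in node_actions:
        for action in json_jobdata["actions"]:
            new_action = copy.deepcopy(action)
            params = new_action.get("parameters", {})
            if "role" in params:
                # Role-specific action: keep it only for the matching role,
                # dropping the 'role' key from its parameters.
                if params.pop("role") == role:
                    node_actions[role].append(new_action)
            else:
                # Actions without an explicit role run on every role.
                node_actions[role].append(new_action)
    return node_actions


job = {
    "device_group": [{"role": "server"}, {"role": "client"}],
    "actions": [
        {"command": "deploy_linaro_image"},
        {"command": "lava_test_shell", "parameters": {"role": "client"}},
    ],
}
print(split_actions_by_role(job))
```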