Merge lp:~al-maisan/launchpad/delays-by-jobs-ahead-504086 into lp:launchpad/db-devel
Status: Merged
Approved by: Gavin Panella
Approved revision: not available
Merged at revision: not available
Proposed branch: lp:~al-maisan/launchpad/delays-by-jobs-ahead-504086
Merge into: lp:launchpad/db-devel
Diff against target: 765 lines (+405/-118), 7 files modified
  lib/lp/buildmaster/interfaces/buildfarmjob.py (+0/-49)
  lib/lp/buildmaster/model/buildfarmjob.py (+3/-2)
  lib/lp/soyuz/model/buildpackagejob.py (+1/-33)
  lib/lp/soyuz/model/buildqueue.py (+169/-2)
  lib/lp/soyuz/tests/test_buildpackagejob.py (+0/-7)
  lib/lp/soyuz/tests/test_buildqueue.py (+216/-16)
  lib/lp/testing/factory.py (+16/-9)
To merge this branch: bzr merge lp:~al-maisan/launchpad/delays-by-jobs-ahead-504086
Related bugs:
Reviewer: Gavin Panella (community), status: Approve
Review via email: mp+18140@code.launchpad.net
Commit message
Description of the change
Muharem Hrnjadovic (al-maisan) wrote:
Muharem Hrnjadovic wrote:
> Muharem Hrnjadovic has proposed merging lp:~al-maisan/launchpad/delays-by-jobs-ahead-504086 into lp:launchpad.
>
> Requested reviews:
> Canonical Launchpad Engineering (launchpad)
> Related bugs:
> #504086 Job dispatch time estimation: what are the delays caused by the jobs ahead?
> https:/
>
>
> Hello there!
>
> This is the second last branch required to provide a generalized build farm
> job dispatch time estimation (irrespective of job type).
>
> Given a build farm job of interest (JOI) for which the user would like to
> receive a dispatch time estimation we need to calculate the dispatch delay
> caused by other jobs that
>
> - are ahead of JOI in the queue and
> - compete with it for builder resources
>
> Some of the ideas implemented by the algorithm in this branch are:
>
> - a processor-
> virtualization setting.
> - the delays caused by the jobs ahead of the JOI need to be weighted (based
> on the size of the pool of builders that can run these jobs).
> - In addition to the estimated delay value _estimateJobDelay() returns the
> platform of the head job (the first of the competing jobs to be dispatched
> to a builder).
> The head job platform will be fed to _estimateTimeTo
> to estimate how long it takes until the head job gets dispatched to a
> builder.
> - Total delay = "delay caused by jobs ahead" + "time to next builder"
>
> Tests to run:
>
> bin/test -vv -t test_buildqueue
>
> I had various pre-implementation talks with Julian and Michael N.
>
> No (relevant) "make lint" errors or warnings.
>
>
>
Hello Gavin,
just letting you know that I have addressed some of your preliminary
review comments (http://
The SQL query is now more precise and only yields the jobs that are
queued ahead of the job of interest (JOI), making both the 'break'
statement and the 'ORDER BY' clause superfluous.
Please see the attached incremental diff for details.
Best regards
--
Muharem Hrnjadovic <email address hidden>
Public key id : B2BBFCFC
Key fingerprint : A5A3 CC67 2B87 D641 103F 5602 219F 6B60 B2BB FCFC
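The weighting rule the proposal describes (sum the estimated durations of competing jobs per (processor, virtualized) platform, then divide by the smaller of the job count and the number of builders able to service that platform) can be sketched in a few lines of Python. The function name and dict-based inputs here are illustrative assumptions, not Launchpad's actual interfaces:

```python
def weighted_delay(delays, job_counts, builder_stats):
    """Return the total weighted dispatch delay in seconds.

    delays        -- {platform: summed estimated durations in seconds}
    job_counts    -- {platform: number of pending jobs on that platform}
    builder_stats -- {platform: number of builders that can run them}
    """
    total = 0
    for platform, duration in delays.items():
        builders = builder_stats.get(platform, 0)
        if builders == 0:
            # No builder can run these jobs; they cause no delay.
            continue
        # Average over jobs when there are fewer jobs than builders,
        # otherwise over the builders that share the load.
        denominator = min(job_counts[platform], builders)
        if denominator > 1:
            duration = int(duration / float(denominator))
        total += duration
    return total

# The '386' figure from the tests in this proposal: four native x86
# builders working off jobs totalling (10+12+14+16) minutes.
print(weighted_delay(
    {('386', False): (10 + 12 + 14 + 16) * 60},
    {('386', False): 4},
    {('386', False): 4}))  # 780
```

This mirrors the jobs-vs-builders averaging step of the patch below; the SQL side of the real implementation additionally filters out jobs that are not ahead of the job of interest.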
=== modified file 'lib/lp/soyuz/model/buildqueue.py'
--- lib/lp/soyuz/model/buildqueue.py 2010-01-27 16:24:48 +0000
+++ lib/lp/soyuz/model/buildqueue.py 2010-01-28 05:28:11 +0000
@@ -332,13 +332,6 @@
 my_platform = (
 getattr(self.processor, 'id', None),
 normalize_virtualization(self.virtualized))
- processor_clause = """
- AND (
- -- The processor values either match or the candidate
- -- job is processor-independent.
- buildqueue.processor = %s OR
- buildqueue.processor IS NULL)
- """ % sqlvalues(self.processor)
 query = """
 SELECT
 BuildQueue.job,
@@ -351,7 +344,12 @@
 WHERE
 BuildQueue.job = Job.id
 AND Job.status = %s
- AND BuildQueue.lastscore >= %s
+ AND (
+ -- The score must be either above my score or the
+ -- job must be older than me in cases where the
+ -- score is equal.
+ BuildQueue.lastscore > %s OR
+ (BuildQueue.lastscore = %s AND Job.id < %s))
 AND (
 -- The virtualized values either match or the job
 -- does not care about virtualization and the job
@@ -361,17 +359,21 @@
 buildqueue.virtualized = %s OR
 (buildqueue.virtualized IS NULL AND %s = TRUE))
 """ % sqlvalues(
- JobStatus.WAITING, self.lastscore, self.virtualized,
- self.virtualized)
+ JobStatus.WAITING, self.lastscore, self.lastscore, self.job,
+ self.virtualized, self.virtualized)
+ processor_clause = """
+ AND (
+ -- The processor values either match or the candidate
+ -- job is processor-independent.
+ buildqueue.processor = %s OR
+ buildqueue.processor IS NULL)
+ """ % sqlvalues(self.processor)

 # We don't care about processors if the estimation is for a
 # processor-independent job.
 if self.processor is not None:
 query += processor_clause

- query += """
- ORDER BY lastscore DESC, job
- """
 job_queue = store.execute(query).get_all()

 sum_of_delays = 0
@@ -383,6 +385,7 @@

 # Which platform is the job at the head of the queue requiring?
 head_job_platform = None
+ head_job_score = 0

 # Apply weights to the estimated duration of the jobs as follows:
 # - if a job is tied to a processor TP then divide the estimated
@@ -397,7 +400,7 @@
 # For the purpose of estimating the delay for dispatching the job
 # at the head of the queue to a builder we need to capture the
 # platform it targets.
- if head_job_platform is None:
+ if head_job_platform is None or score > head_job_score:
 if self.processor is None:
 # The JOI is platform-independent i.e. the highest scored
 # job will be the head job.
@@ -409,13 +412,6 @@
 if (my_platform == platform or processor is None):
 head_job_platform = platform

- if job == self.job.id:
- # We have seen all jobs that are ahead of us in the queue
- # and can stop now.
- # This is guaranteed by the "ORDER BY lastscore DESC.."
- # clause above.
- break
-
 builder_count = builder_stats.get((processor, virtualized), 0)
 if builder_count == 0:
 # There is no builder that can run this job, ignore it
@@ -444,6 +440,10 @@

 sum_of_delays += duration

+ # No jobs ahead of me. I am the head job.
+ if head_job_platform is None:
+ head_job_platform = my_platform
+
 return (sum_of_delays, head_job_platform)



=== modified file 'lib/lp/soyuz/tests/test_buildqueue.py'
--- lib/lp/soyuz/tests/test_buildqueue.py 2010-01-27 16:47:22 +0000
+++ lib/lp/soyuz/tests/test_buildqueue.py 2010-01-28 05:17:37 +0000
@@ -35,15 +35,9 @@
 """Find build and queue instance for the given source and processor."""
 def processor_matches(bq):
 if processor is None:
- if bq.processor is None:
- return True
- else:
- return False
+ return (True if bq.processor is None else False)
 else:
- if processor == bq.processor.name:
- return True
- else:
- return False
+ return (True if processor == bq.processor.name else False)

 for build in test.builds:
 bq = build.buildqueue_record
@@ -82,11 +76,7 @@
 def print_build_setup(builds):
 """Show the build set-up for a particular test."""
 def processor_name(bq):
- proc = getattr(bq, 'processor', None)
- if proc is None:
- return 'None'
- else:
- return proc.name
+ return ('None' if bq.processor is None else bq.processor.name)

 print ""
 for build in builds:
@@ -120,10 +110,7 @@

 This used to spurious failures in time based tests.
 """
- if abs(a - b) <= deviation:
- return True
- else:
- return False
+ return (True if abs(a - b) <= deviation else False)


 def set_remaining_time_for_running_job(bq, remainder):
@@ -246,14 +233,18 @@
 builder.manual = False

 # Native builders irrespective of processor.
- self.builders[(None, False)] = self.builders[(x86_proc.id, False)]
+ self.builders[(None, False)] = []
+ self.builders[(None, False)].extend(
+ self.builders[(x86_proc.id, False)])
 self.builders[(None, False)].extend(
 self.builders[(amd_proc.id, False)])
 self.builders[(None, False)].extend(
 self.builders[(hppa_proc.id, False)])

 # Virtual builders irrespective of processor.
- self.builders[(None, True)] = self.builders[(x86_proc.id, True)]
+ self.builders[(None, True)] = []
+ self.builders[(None, True)].extend(
+ self.builders[(x86_proc.id, True)])
 self.builders[(None, True)].extend(
 self.builders[(amd_proc.id, True)])
 self.builders[(None, True)].extend(
Muharem Hrnjadovic (al-maisan) wrote:
Hello Gavin,
your review comments got me thinking :) and I realized that the main
function in the branch (_estimateJobDe
using SQL aggregate functions as opposed to iterating in the Python domain.
Also, _estimateJobDelay() now just calculates delays and is no longer
concerned with the head job platform, which makes the code quite a bit
clearer.
Anyway, the resulting incremental diff (see attachment) since the review
began is 509 lines long.
Maybe it's best to resubmit a merge proposal for the enhanced branch
and start the review from scratch?
Please let me know what you think.
Best regards
--
Muharem Hrnjadovic <email address hidden>
Public key id : B2BBFCFC
Key fingerprint : A5A3 CC67 2B87 D641 103F 5602 219F 6B60 B2BB FCFC
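The rework this comment describes, replacing the Python-side iteration with SQL aggregate functions, boils down to a single GROUP BY that returns per-platform job counts and summed durations, ready for the weighting step. A toy sqlite3 sketch of the idea (the table and column names here are a simplified stand-in, not the real Launchpad schema):

```python
import sqlite3

# Toy queue: (processor, virtualized, estimated_duration in seconds).
# A NULL processor stands for a processor-independent job.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE buildqueue (processor TEXT, virtualized INTEGER,"
    " estimated_duration INTEGER)")
conn.executemany(
    "INSERT INTO buildqueue VALUES (?, ?, ?)",
    [("386", 0, 600), ("386", 0, 720), ("hppa", 0, 540), (None, 0, 222)])

# One GROUP BY replaces the Python-side loop: each row is a platform
# with its pending job count and total estimated duration.
rows = conn.execute(
    "SELECT processor, virtualized, COUNT(*), SUM(estimated_duration)"
    " FROM buildqueue GROUP BY processor, virtualized"
    " ORDER BY processor").fetchall()
for processor, virtualized, job_count, total_duration in rows:
    print(processor, virtualized, job_count, total_duration)
```

Each resulting row can then be divided by the matching builder-pool size, which is exactly the shape of the reworked _estimateJobDelay() in the diff below.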
=== modified file 'lib/lp/buildmaster/interfaces/buildfarmjob.py'
--- lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-01-26 17:25:33 +0000
+++ lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-01-28 08:16:45 +0000
@@ -109,7 +109,7 @@

 class IBuildFarmCandidateJobSelection(Interface):
 """Operations for refining candidate job selection (optional).
-
+
 Job type classes that do *not* need to refine candidate job selection may
 be derived from `BuildFarmJob` which provides a base implementation of
 this interface.
@@ -129,7 +129,7 @@
 SELECT TRUE
 FROM Archive, Build, BuildPackageJob, DistroArchSeries
 WHERE
- BuildPackageJob.job = Job.id AND
+ BuildPackageJob.job = Job.id AND
 ..

 :param processor: the type of processor that the candidate jobs are
@@ -143,7 +143,7 @@
 def postprocessCandidate(job, logger):
 """True if the candidate job is fine and should be dispatched
 to a builder, False otherwise.
-
+
 :param job: The `BuildQueue` instance to be scrutinized.
 :param logger: The logger to use.


=== modified file 'lib/lp/soyuz/model/buildpackagejob.py'
--- lib/lp/soyuz/model/buildpackagejob.py 2010-01-27 07:52:40 +0000
+++ lib/lp/soyuz/model/buildpackagejob.py 2010-01-28 08:17:10 +0000
@@ -181,8 +181,8 @@
 sub_query = """
 SELECT TRUE FROM Archive, Build, BuildPackageJob, DistroArchSeries
 WHERE
- BuildPackageJob.job = Job.id AND
- BuildPackageJob.build = Build.id AND
+ BuildPackageJob.job = Job.id AND
+ BuildPackageJob.build = Build.id AND
 Build.distroarchseries = DistroArchSeries.id AND
 Build.archive = Archive.id AND
 ((Archive.private IS TRUE AND

=== modified file 'lib/lp/soyuz/model/buildqueue.py'
--- lib/lp/soyuz/model/buildqueue.py 2010-01-27 14:33:26 +0000
+++ lib/lp/soyuz/model/buildqueue.py 2010-01-28 08:21:00 +0000
@@ -211,7 +211,7 @@
 def _estimateTimeToNextBuilder(
 self, head_job_processor, head_job_virtualized):
 """Estimate time until next builder becomes available.
-
+
 For the purpose of estimating the dispatch time of the job of interest
 (JOI) we need to know how long it will take until the job at the head
 of JOI's queue is dispatched.
@@ -253,7 +253,7 @@

 delay_query = """
 SELECT MIN(
- CASE WHEN
+ CASE WHEN
 EXTRACT(EPOCH FROM
 (BuildQueue.estimated_duration -
 (((now() AT TIME ZONE 'UTC') - Job.date_started)))) >= 0
@@ -291,7 +291,7 @@

 def _estimateJobDelay(self, builder_stats):
 """Sum of estimated durations for *pending* jobs ahead in queue.
-
+
 For the purpose of estimating the dispatch time of the job of
 interest (JOI) we need to know the delay caused by all the pending
 jobs that are ahead of the JOI in the queue and that compete with it
@@ -301,18 +301,13 @@
 key is a (processor, virtualized) combination (aka "platform") and
 the value is the number of builders that can take on jobs
 requiring that combination.
- :return: A (sum_of_delays, head_job_platform) tuple where
- * 'sum_of_delays' is the estimated delay in seconds and
- * 'head_job_platform' is the platform ((processor, virtualized)
- combination) required by the job at the head of the queue.
+ :return: The sum of delays caused by the jobs that are ahead of and
+ competing with the JOI.
 """
 def normalize_virtualization(virtualized):
 """Jobs with NULL virtualization settings should be treated the
 same way as virtualized jobs."""
- if virtualized is None or virtualized == True:
- return True
- else:
- return False
+ return virtualized is None or virtualized
 def jobs_compete_for_builders(a, b):
 """True if the two jobs compete for builders."""
 a_processor, a_virtualized = a
@@ -332,26 +327,25 @@
 my_platform = (
 getattr(self.processor, 'id', None),
 normalize_virtualization(self.virtualized))
- processor_clause = """
- AND (
- -- The processor values either match or the candidate
- -- job is processor-independent.
- buildqueue.processor = %s OR
- buildqueue.processor IS NULL)
- """ % sqlvalues(self.processor)
 query = """
 SELECT
- BuildQueue.job,
- BuildQueue.lastscore,
- BuildQueue.estimated_duration,
 BuildQueue.processor,
- BuildQueue.virtualized
+ BuildQueue.virtualized,
+ COUNT(BuildQueue.job),
+ CAST(EXTRACT(
+ EPOCH FROM
+ SUM(BuildQueue.estimated_duration)) AS INTEGER)
 FROM
 BuildQueue, Job
 WHERE
 BuildQueue.job = Job.id
 AND Job.status = %s
- AND BuildQueue.lastscore >= %s
+ AND (
+ -- The score must be either above my score or the
+ -- job must be older than me in cases where the
+ -- score is equal.
+ BuildQueue.lastscore > %s OR
+ (BuildQueue.lastscore = %s AND Job.id < %s))
 AND (
 -- The virtualized values either match or the job
 -- does not care about virtualization and the job
@@ -361,61 +355,41 @@
 buildqueue.virtualized = %s OR
 (buildqueue.virtualized IS NULL AND %s = TRUE))
 """ % sqlvalues(
- JobStatus.WAITING, self.lastscore, self.virtualized,
- self.virtualized)
-
+ JobStatus.WAITING, self.lastscore, self.lastscore, self.job,
+ self.virtualized, self.virtualized)
+ processor_clause = """
+ AND (
+ -- The processor values either match or the candidate
+ -- job is processor-independent.
+ buildqueue.processor = %s OR
+ buildqueue.processor IS NULL)
+ """ % sqlvalues(self.processor)
 # We don't care about processors if the estimation is for a
 # processor-independent job.
 if self.processor is not None:
 query += processor_clause

 query += """
- ORDER BY lastscore DESC, job
+ GROUP BY BuildQueue.processor, BuildQueue.virtualized
 """
- job_queue = store.execute(query).get_all()

- sum_of_delays = 0
+ delays_by_platform = store.execute(query).get_all()

 # This will be used to capture per-platform delay totals.
 delays = dict()
 # This will be used to capture per-platform job counts.
 job_counts = dict()

- # Which platform is the job at the head of the queue requiring?
- head_job_platform = None
-
 # Apply weights to the estimated duration of the jobs as follows:
 # - if a job is tied to a processor TP then divide the estimated
 # duration of that job by the number of builders that target TP
 # since only these can build the job.
 # - if the job is processor-independent then divide its estimated
- # duration by the total number of builders because any one of
- # them may run it.
- for job, score, duration, processor, virtualized in job_queue:
+ # duration by the total number of builders with the same
+ # virtualization setting because any one of them may run it.
+ for processor, virtualized, job_count, delay in delays_by_platform:
 virtualized = normalize_virtualization(virtualized)
 platform = (processor, virtualized)
- # For the purpose of estimating the delay for dispatching the job
- # at the head of the queue to a builder we need to capture the
- # platform it targets.
- if head_job_platform is None:
- if self.processor is None:
- # The JOI is platform-independent i.e. the highest scored
- # job will be the head job.
- head_job_platform = platform
- else:
- # The JOI targets a specific platform. The head job is
- # thus the highest scored job that either targets the
- # same platform or is platform-independent.
- if (my_platform == platform or processor is None):
- head_job_platform = platform
-
- if job == self.job.id:
- # We have seen all jobs that are ahead of us in the queue
- # and can stop now.
- # This is guaranteed by the "ORDER BY lastscore DESC.."
- # clause above.
- break
-
 builder_count = builder_stats.get((processor, virtualized), 0)
 if builder_count == 0:
 # There is no builder that can run this job, ignore it
@@ -423,28 +397,25 @@
 continue

 if jobs_compete_for_builders(my_platform, platform):
- # Accumulate the delays, and count the number of jobs causing
- # them on a (processor, virtualized) basis.
- duration = duration.seconds
- delays[platform] = delays.setdefault(platform, 0) + duration
- job_counts[platform] = job_counts.setdefault(platform, 0) + 1
+ # The jobs that target the platform at hand compete with
+ # the JOI for builders, add their delays.
+ delays[platform] = delay
+ job_counts[platform] = job_count

+ sum_of_delays = 0
 # Now weight/average the delays based on a jobs/builders comparison.
 for platform, duration in delays.iteritems():
 jobs = job_counts[platform]
 builders = builder_stats[platform]
- if jobs < builders:
- # There are less jobs than builders that can take them on,
- # the delays should be averaged/divided by the number of jobs.
- denominator = jobs
- else:
- denominator = builders
+ # If there are less jobs than builders that can take them on,
+ # the delays should be averaged/divided by the number of jobs.
+ denominator = (jobs if jobs < builders else builders)
 if denominator > 1:
 duration = int(duration/float(denominator))

 sum_of_delays += duration
-
- return (sum_of_delays, head_job_platform)
+
+ return sum_of_delays


 class BuildQueueSet(object):

=== modified file 'lib/lp/soyuz/tests/test_buildqueue.py'
--- lib/lp/soyuz/tests/test_buildqueue.py 2010-01-27 14:33:26 +0000
+++ lib/lp/soyuz/tests/test_buildqueue.py 2010-01-28 08:02:56 +0000
@@ -35,15 +35,9 @@
 """Find build and queue instance for the given source and processor."""
 def processor_matches(bq):
 if processor is None:
- if bq.processor is None:
- return True
- else:
- return False
+ return (True if bq.processor is None else False)
 else:
- if processor == bq.processor.name:
- return True
- else:
- return False
+ return (True if processor == bq.processor.name else False)

 for build in test.builds:
 bq = build.buildqueue_record
@@ -82,14 +76,17 @@
 def print_build_setup(builds):
 """Show the build set-up for a particular test."""
 def processor_name(bq):
- proc = getattr(bq, 'processor', None)
- if proc is None:
- return 'None'
+ return ('None' if bq.processor is None else bq.processor.name)
+ def higher_scored_or_older(a,b):
+ a_queue_entry = a.buildqueue_record
+ b_queue_entry = b.buildqueue_record
+ if a_queue_entry.lastscore != b_queue_entry.lastscore:
+ return cmp(a_queue_entry.lastscore, b_queue_entry.lastscore)
 else:
- return proc.name
+ return cmp(a_queue_entry.job, b_queue_entry.job)

 print ""
- for build in builds:
+ for build in sorted(builds, higher_scored_or_older):
 bq = build.buildqueue_record
 source = None
 for attr in ('sourcepackagerelease', 'sourcepackagename'):
@@ -120,10 +117,7 @@

 This used to spurious failures in time based tests.
 """
- if abs(a - b) <= deviation:
- return True
- else:
- return False
+ return (True if abs(a - b) <= deviation else False)


 def set_remaining_time_for_running_job(bq, remainder):
@@ -133,14 +127,12 @@
 datetime.utcnow().replace(tzinfo=utc) - timedelta(seconds=offset))


-def check_delay_for_job(test, the_job, delay, head_job_platform):
+def check_delay_for_job(test, the_job, delay):
 # Obtain the builder statistics pertaining to this job.
 builder_data = the_job._getBuilderData()
 builders_in_total, builders_for_job, builder_stats = builder_data
 estimated_delay = the_job._estimateJobDelay(builder_stats)
- test.assertEqual(
- estimated_delay,
- (delay, head_job_platform))
+ test.assertEqual(estimated_delay, delay)


 class TestBuildQueueSet(TestCaseWithFactory):
@@ -246,14 +238,18 @@
 builder.manual = False

 # Native builders irrespective of processor.
- self.builders[(None, False)] = self.builders[(x86_proc.id, False)]
+ self.builders[(None, False)] = []
+ self.builders[(None, False)].extend(
+ self.builders[(x86_proc.id, False)])
 self.builders[(None, False)].extend(
 self.builders[(amd_proc.id, False)])
 self.builders[(None, False)].extend(
 self.builders[(hppa_proc.id, False)])

 # Virtual builders irrespective of processor.
- self.builders[(None, True)] = self.builders[(x86_proc.id, True)]
+ self.builders[(None, True)] = []
+ self.builders[(None, True)].extend(
+ self.builders[(x86_proc.id, True)])
 self.builders[(None, True)].extend(
 self.builders[(amd_proc.id, True)])
 self.builders[(None, True)].extend(
@@ -520,7 +516,7 @@
 assign_to_builder(self, 'gcc', 4)
 # Now that no builder is immediately available, the shortest
 # remaing build time (based on the estimated duration) is returned:
- # 300 seconds
+ # 300 seconds
 # This is equivalent to the 'gcc' job's estimated duration.
 check_mintime_to_builder(self, apg_job, x86_proc, False, 300)

@@ -672,7 +668,7 @@
 assign_to_builder(self, 'bison', 3, 'hppa')
 # Now that no builder is immediately available, the shortest
 # remaing build time (based on the estimated duration) is returned:
- # 660 seconds
+ # 660 seconds
 # This is equivalent to the 'bison' job's estimated duration.
 check_mintime_to_builder(self, apg_job, hppa_proc, False, 660)

@@ -880,19 +876,21 @@
 """Test estimated job delays with various processors."""
 score_increment = 2
 def setUp(self):
- """Set up a fake 'BRANCHBUILD' build farm job class.
+ """Add 2 'build source package from recipe' builds to the mix.

 The two platform-independent jobs will have a score of 1025 and 1053
 respectively.
+ In case of jobs with equal scores the one with the lesser 'job' value
+ (i.e. the older one wins).

 3, gedit, p: hppa, v:False e:0:01:00 *** s: 1003
 4, gedit, p: 386, v:False e:0:02:00 *** s: 1006
 5, firefox, p: hppa, v:False e:0:03:00 *** s: 1009
 6, firefox, p: 386, v:False e:0:04:00 *** s: 1012
 7, apg, p: hppa, v:False e:0:05:00 *** s: 1015
- 8, apg, p: 386, v:False e:0:06:00 *** s: 1018
 9, vim, p: hppa, v:False e:0:07:00 *** s: 1021
 10, vim, p: 386, v:False e:0:08:00 *** s: 1024
+ 8, apg, p: 386, v:False e:0:06:00 *** s: 1024
 --> 19, xx-recipe-bash, p: None, v:False e:0:00:22 *** s: 1025
 11, gcc, p: hppa, v:False e:0:09:00 *** s: 1027
 12, gcc, p: 386, v:False e:0:10:00 *** s: 1030
@@ -910,14 +908,18 @@

 job = self.factory.makeSourcePackageRecipeBuildJob(
 virtualized=False, estimated_duration=22,
- sourcename='xx-recipe-bash')
- job.lastscore = 1025
+ sourcename='xx-recipe-bash', score=1025)
 self.builds.append(job.specific_job.build)
 job = self.factory.makeSourcePackageRecipeBuildJob(
 virtualized=False, estimated_duration=222,
- sourcename='xx-recipe-zsh')
- job.lastscore = 1053
+ sourcename='xx-recipe-zsh', score=1053)
 self.builds.append(job.specific_job.build)
+
+ # Assign the same score to the '386' vim and apg build jobs.
+ processor_fam = ProcessorFamilySet().getByName('x86')
+ x86_proc = processor_fam.processors[0]
+ _apg_build, apg_job = find_job(self, 'apg', '386')
+ apg_job.lastscore = 1024
 # print_build_setup(self.builds)

 def test_job_delay_for_binary_builds(self):
@@ -932,16 +934,15 @@
 builder_data = flex_job._getBuilderData()
 builders_in_total, builders_for_job, builder_stats = builder_data

- # The delay will be 900 (= 15*60) + 222 seconds, the head job is
- # platform-independent.
- check_delay_for_job(self, flex_job, 1122, (None, False))
+ # The delay will be 900 (= 15*60) + 222 seconds
+ check_delay_for_job(self, flex_job, 1122)

 # Assign the postgres job to a builder.
 assign_to_builder(self, 'postgres', 1, 'hppa')
 # The 'postgres' job is not pending any more. Now only the 222
 # seconds (the estimated duration of the platform-independent job)
 # should be returned.
- check_delay_for_job(self, flex_job, 222, (None, False))
+ check_delay_for_job(self, flex_job, 222)

 # How about some estimates for x86 builds?
 processor_fam = ProcessorFamilySet().getByName('x86')
@@ -949,27 +950,28 @@

 _bison_build, bison_job = find_job(self, 'bison', '386')
 check_mintime_to_builder(self, bison_job, x86_proc, False, 0)
- # The delay will be 900 (= (14+16)*60/2) + 222 seconds, the head job
- # is platform-independent.
- check_delay_for_job(self, bison_job, 1122, (None, False))
+ # The delay will be 900 (= (14+16)*60/2) + 222 seconds.
+ check_delay_for_job(self, bison_job, 1122)

 # The 2 tests that follow exercise the estimation in conjunction with
 # longer pending job queues. Please note that the sum of estimates for
 # the '386' jobs is divided by 4 which is the number of native '386'
 # builders.

- _apg_build, apg_job = find_job(self, 'apg', '386')
- check_mintime_to_builder(self, apg_job, x86_proc, False, 0)
- # The delay will be 900 (= (8+10+12+14+16)*60/4) + 122 (= (222+22)/2)
- # seconds, the head job is platform-independent.
- check_delay_for_job(self, apg_job, 1022, (None, False))
+ # Also, this tests that jobs with equal score but a lower 'job' value
+ # (i.e. older jobs) are queued ahead of the job of interest (JOI).
+ _vim_build, vim_job = find_job(self, 'vim', '386')
+ check_mintime_to_builder(self, vim_job, x86_proc, False, 0)
+ # The delay will be 870 (= (6+10+12+14+16)*60/4) + 122 (= (222+22)/2)
+ # seconds.
+ check_delay_for_job(self, vim_job, 992)

 _gedit_build, gedit_job = find_job(self, 'gedit', '386')
 check_mintime_to_builder(self, gedit_job, x86_proc, False, 0)
 # The delay will be
 # 1080 (= (4+6+8+10+12+14+16)*60/4) + 122 (= (222+22)/2)
- # seconds, the head job is platform-independent.
- check_delay_for_job(self, gedit_job, 1172, (None, False))
+ # seconds.
+ check_delay_for_job(self, gedit_job, 1172)

 def test_job_delay_for_recipe_builds(self):
 # One of the 9 builders for the 'bash' build is immediately available.
@@ -983,7 +985,7 @@
 # The delay will be 960 + 780 + 222 = 1962, where
 # hppa job delays: 960 = (9+11+13+15)*60/3
 # 386 job delays: 780 = (10+12+14+16)*60/4
- check_delay_for_job(self, bash_job, 1962, (None, False))
+ check_delay_for_job(self, bash_job, 1962)

 # One of the 9 builders for the 'zsh' build is immediately available.
 zsh_build, zsh_job = find_job(self, 'xx-recipe-zsh', None)
@@ -994,24 +996,22 @@
 builders_in_total, builders_for_job, builder_stats = builder_data

 # The delay will be 0 since this is the head job.
- check_delay_for_job(self, zsh_job, 0, (None, False))
+ check_delay_for_job(self, zsh_job, 0)

 # Assign the zsh job to a builder.
 assign_to_builder(self, 'xx-recipe-zsh', 1, None)

 # Now that the highest-scored job is out of the way, the estimation
- # for the 'bash' recipe build is 222 seconds shorter and the new head
- # job platform is (1, False) (due to the fact that the native '386'
- # postgres job (with id 18) is the new head job).
+ # for the 'bash' recipe build is 222 seconds shorter.

 # The delay will be 960 + 780 = 1740, where
 # hppa job delays: 960 = (9+11+13+15)*60/3
 # 386 job delays: 780 = (10+12+14+16)*60/4
- check_delay_for_job(self, bash_job, 1740, (1, False))
+ check_delay_for_job(self, bash_job, 1740)

 processor_fam = ProcessorFamilySet().getByName('x86')
 x86_proc = processor_fam.processors[0]

 _postgres_build, postgres_job = find_job(self, 'postgres', '386')
 # The delay will be 0 since this is the head job now.
- check_delay_for_job(self, postgres_job, 0, (1, False))
508 | + check_delay_for_job(self, postgres_job, 0) |
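The delay figures asserted in the test diff above follow the weighting scheme of `_estimateJobDelay`: durations of the pending jobs ahead are summed per platform, and each platform's total is divided by the smaller of its job count and its builder count. A minimal standalone sketch of that arithmetic (the `estimate_delay` helper and the sample data are illustrative, not part of the branch):

```python
def estimate_delay(jobs_ahead, builder_stats):
    """Sum per-platform durations of the jobs ahead, then weight each
    platform's total by min(number of jobs, number of builders)."""
    delays = {}
    job_counts = {}
    for platform, duration in jobs_ahead:
        delays[platform] = delays.get(platform, 0) + duration
        job_counts[platform] = job_counts.get(platform, 0) + 1
    total = 0
    for platform, duration in delays.items():
        denominator = min(job_counts[platform], builder_stats[platform])
        if denominator > 1:
            duration = int(duration / float(denominator))
        total += duration
    return total

# The 'vim' case above: five native '386' jobs of 6..16 minutes on 4
# builders, plus two platform-independent jobs (222s and 22s).
jobs = [(('386', False), m * 60) for m in (6, 10, 12, 14, 16)]
jobs += [((None, True), 222), ((None, True), 22)]
stats = {('386', False): 4, (None, True): 6}
```

With these inputs the helper yields 870 + 122 = 992 seconds, matching `check_delay_for_job(self, vim_job, 992)`; the builder count of 6 for the platform-independent entry is an assumed value, since only the job count (2) matters there.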
Gavin Panella (allenap) wrote:
Hi Muharem, I'll review the incremental diff. Before moving on, I'll finish off reviewing the original diff quickly so that I'm familiar with where it's coming from.
Gavin Panella (allenap) wrote:
This is a review of your branch at about revision 8647. I haven't added much since last night's preview review :) I'll look at the incremental diff now.
> === modified file 'lib/lp/
> --- lib/lp/
> +++ lib/lp/
> @@ -10,7 +10,6 @@
> __all__ = [
> 'IBuildFarmJob',
> 'IBuildFarmCand
> - 'IBuildFarmJobD
> 'ISpecificBuild
> 'BuildFarmJobType',
> ]
> @@ -108,54 +107,6 @@
> """
>
>
> -class IBuildFarmJobDi
> - """Operations needed for job dipatch time estimation."""
> -
> - def composePendingJ
> - """String SELECT query yielding pending jobs with given minimum score.
> -
> - This will be used for the purpose of job dispatch time estimation
> - for a build job of interest (JOI).
> - In order to estimate the dispatch time for the JOI we need to
> - calculate the sum of the estimated durations of the *pending* jobs
> - ahead of JOI.
> -
> - Depending on the build farm job type the JOI may or may not be tied
> - to a particular processor type.
> - Binary builds for example are always built for a specific processor
> - whereas "create a source package from recipe" type jobs do not care
> - about processor types or virtualization.
> -
> - When implementing this method for processor independent build farm job
> - types (e.g. recipe build) you may safely ignore the `processor` and
> - `virtualized` parameters.
> -
> - The SELECT query to be returned needs to select the following data
> -
> - 1 - BuildQueue.job
> - 2 - BuildQueue.
> - 3 - BuildQueue.
> - 4 - Processor.id [optional]
> - 5 - virtualized [optional]
> -
> - Please do *not* order the result set since it will be UNIONed and
> - ordered only then.
> -
> - Job types that are processor independent or do not care about
> - virtualization should return NULL for the optional data in the result
> - set.
> -
> - :param min_score: the pending jobs selected by the returned
> - query should have score >= min_score.
> - :param processor: the type of processor that the jobs are expected
> - to run on.
> - :param virtualized: whether the jobs are expected to run on the
> - `processor` natively or inside a virtual machine.
> - :return: a string SELECT clause that can be used to find
> - the pending jobs of the appropriate type.
> - """
> -
> -
> class IBuildFarmCandi
> """Operations for refining candidate job selection (optional).
>
>
> === modified file 'lib/lp/
> --- lib/lp/
> +++ lib/lp/
> @@ -1...
Gavin Panella (allenap) wrote:
This is a review of the incremental changes. Although I've suggested a lot of changes here and in the previous review, they're all minor tweaks; the branch is good already, so r=me :)
> === modified file 'lib/lp/
> --- lib/lp/
> +++ lib/lp/
> @@ -109,7 +109,7 @@
>
> class IBuildFarmCandi
> """Operations for refining candidate job selection (optional).
> -
> +
> Job type classes that do *not* need to refine candidate job selection may
> be derived from `BuildFarmJob` which provides a base implementation of
> this interface.
> @@ -129,7 +129,7 @@
> SELECT TRUE
> FROM Archive, Build, BuildPackageJob, DistroArchSeries
> WHERE
> - BuildPackageJob.job = Job.id AND
> + BuildPackageJob.job = Job.id AND
> ..
>
> :param processor: the type of processor that the candidate jobs are
> @@ -143,7 +143,7 @@
> def postprocessCand
> """True if the candidate job is fine and should be dispatched
> to a builder, False otherwise.
> -
> +
> :param job: The `BuildQueue` instance to be scrutinized.
> :param logger: The logger to use.
>
>
> === modified file 'lib/lp/
> --- lib/lp/
> +++ lib/lp/
> @@ -181,8 +181,8 @@
> sub_query = """
> SELECT TRUE FROM Archive, Build, BuildPackageJob, DistroArchSeries
> WHERE
> - BuildPackageJob.job = Job.id AND
> - BuildPackageJob
> + BuildPackageJob.job = Job.id AND
> + BuildPackageJob
> Build.distroarc
> Build.archive = Archive.id AND
> ((Archive.private IS TRUE AND
>
> === modified file 'lib/lp/
> --- lib/lp/
> +++ lib/lp/
> @@ -211,7 +211,7 @@
> def _estimateTimeTo
> self, head_job_processor, head_job_
> """Estimate time until next builder becomes available.
> -
> +
> For the purpose of estimating the dispatch time of the job of interest
> (JOI) we need to know how long it will take until the job at the head
> of JOI's queue is dispatched.
> @@ -253,7 +253,7 @@
>
> delay_query = """
> SELECT MIN(
> - CASE WHEN
> + CASE WHEN
> EXTRACT(EPOCH FROM
> (BuildQueue.
> (((now() AT TIME ZONE 'UTC') - Job.date_
> @@ -291,7 +291,7 @@
>
> def _estimateJobDel
> """Sum of estimated durations for *pending* jobs ahead in queue.
> -
> +
> For the purpose of estimating the dispatch...
Muharem Hrnjadovic (al-maisan) wrote:
Gavin Panella wrote:
> Review: Approve
> This is a review of the incremental changes. Although I've suggested a
> lot of changes here and in the previous review, they're all minor
> tweaks; the branch is good already, so r=me :)
Hello Gavin,
thank you for putting up with a branch that was changing while you were
reviewing it.
I have accommodated all your suggestions. Please see the enclosed diff
as well as the replies below.
1 | === modified file 'lib/lp/soyuz/model/buildqueue.py' |
2 | --- lib/lp/soyuz/model/buildqueue.py 2010-01-28 08:21:27 +0000 |
3 | +++ lib/lp/soyuz/model/buildqueue.py 2010-01-28 10:55:53 +0000 |
4 | @@ -301,8 +301,8 @@ |
5 | key is a (processor, virtualized) combination (aka "platform") and |
6 | the value is the number of builders that can take on jobs |
7 | requiring that combination. |
8 | - :return: The sum of delays caused by the jobs that are ahead of and |
9 | - competing with the JOI. |
10 | + :return: An integer value holding the sum of delays (in seconds) |
11 | + caused by the jobs that are ahead of and competing with the JOI. |
12 | """ |
13 | def normalize_virtualization(virtualized): |
14 | """Jobs with NULL virtualization settings should be treated the |
15 | |
16 | === modified file 'lib/lp/soyuz/tests/test_buildqueue.py' |
17 | --- lib/lp/soyuz/tests/test_buildqueue.py 2010-01-28 08:16:05 +0000 |
18 | +++ lib/lp/soyuz/tests/test_buildqueue.py 2010-01-28 11:42:18 +0000 |
19 | @@ -35,9 +35,9 @@ |
20 | """Find build and queue instance for the given source and processor.""" |
21 | def processor_matches(bq): |
22 | if processor is None: |
23 | - return (True if bq.processor is None else False) |
24 | + return (bq.processor is None) |
25 | else: |
26 | - return (True if processor == bq.processor.name else False) |
27 | + return (processor == bq.processor.name) |
28 | |
29 | for build in test.builds: |
30 | bq = build.buildqueue_record |
31 | @@ -77,25 +77,22 @@ |
32 | """Show the build set-up for a particular test.""" |
33 | def processor_name(bq): |
34 | return ('None' if bq.processor is None else bq.processor.name) |
35 | - def higher_scored_or_older(a,b): |
36 | - a_queue_entry = a.buildqueue_record |
37 | - b_queue_entry = b.buildqueue_record |
38 | - if a_queue_entry.lastscore != b_queue_entry.lastscore: |
39 | - return cmp(a_queue_entry.lastscore, b_queue_entry.lastscore) |
40 | - else: |
41 | - return cmp(a_queue_entry.job, b_queue_entry.job) |
42 | |
43 | print "" |
44 | - for build in sorted(builds, higher_scored_or_older): |
45 | - bq = build.buildqueue_record |
46 | + queue_entries = [build.buildqueue_record for build in builds] |
47 | + queue_entries = sorted( |
48 | + queue_entries, key=lambda qe: qe.job.id, reverse=True) |
49 | + queue_entries = sorted(queue_entries, key=lambda qe: qe.lastscore) |
50 | + for queue_entry in queue_entries: |
51 | source = None |
52 | for attr in ('sourcepackagerelease', 'sourcepackagename'): |
53 | - source = getattr(build, attr, None) |
54 | + source = getattr(queue_entry.specific_job.build, attr, None) |
55 | if source is not None: |
56 | break |
57 | print "%5s, %18s, p:%5s, v:%5s e:%s *** s:%5s" % ( |
58 | - bq.id, source.name, processor_name(bq), |
59 | - bq.virtualized, bq.estimated_duration, bq.lastscore) |
60 | + queue_entry.id, source.name, processor_name(queue_entry), |
61 | + queue_entry.virtualized, queue_entry.estimated_duration, |
62 | + queue_entry.lastscore) |
63 | |
64 | |
65 | def check_mintime_to_builder( |
66 | @@ -117,7 +114,7 @@ |
67 | |
68 | This used to cause spurious failures in time-based tests. |
69 | """ |
70 | - return (True if abs(a - b) <= deviation else False) |
71 | + return (abs(a - b) <= deviation) |
72 | |
73 | |
74 | def set_remaining_time_for_running_job(bq, remainder): |
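The diff above replaces the `cmp`-based `higher_scored_or_older` comparator with two stable `sorted()` passes. That idiom relies on Python's sort stability: apply the secondary key first and the primary key second, and the combined ordering falls out. A small sketch with made-up `(lastscore, job_id)` pairs:

```python
# Order queue entries by ascending score; within equal scores, higher
# job ids come first, mirroring the two-pass sort in the diff.
entries = [(10, 1), (5, 2), (10, 3)]  # (lastscore, job_id) pairs

# Secondary key first: descending job id.
entries = sorted(entries, key=lambda e: e[1], reverse=True)
# Primary key second: ascending score. Stability preserves the
# descending job-id order among entries with equal scores.
entries = sorted(entries, key=lambda e: e[0])
```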
Preview Diff
1 | === modified file 'lib/lp/buildmaster/interfaces/buildfarmjob.py' |
2 | --- lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-01-18 22:01:19 +0000 |
3 | +++ lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-01-27 16:27:17 +0000 |
4 | @@ -10,7 +10,6 @@ |
5 | __all__ = [ |
6 | 'IBuildFarmJob', |
7 | 'IBuildFarmCandidateJobSelection', |
8 | - 'IBuildFarmJobDispatchEstimation', |
9 | 'ISpecificBuildFarmJobClass', |
10 | 'BuildFarmJobType', |
11 | ] |
12 | @@ -108,54 +107,6 @@ |
13 | """ |
14 | |
15 | |
16 | -class IBuildFarmJobDispatchEstimation(Interface): |
17 | - """Operations needed for job dipatch time estimation.""" |
18 | - |
19 | - def composePendingJobsQuery(min_score, processor, virtualized): |
20 | - """String SELECT query yielding pending jobs with given minimum score. |
21 | - |
22 | - This will be used for the purpose of job dispatch time estimation |
23 | - for a build job of interest (JOI). |
24 | - In order to estimate the dispatch time for the JOI we need to |
25 | - calculate the sum of the estimated durations of the *pending* jobs |
26 | - ahead of JOI. |
27 | - |
28 | - Depending on the build farm job type the JOI may or may not be tied |
29 | - to a particular processor type. |
30 | - Binary builds for example are always built for a specific processor |
31 | - whereas "create a source package from recipe" type jobs do not care |
32 | - about processor types or virtualization. |
33 | - |
34 | - When implementing this method for processor independent build farm job |
35 | - types (e.g. recipe build) you may safely ignore the `processor` and |
36 | - `virtualized` parameters. |
37 | - |
38 | - The SELECT query to be returned needs to select the following data |
39 | - |
40 | - 1 - BuildQueue.job |
41 | - 2 - BuildQueue.lastscore |
42 | - 3 - BuildQueue.estimated_duration |
43 | - 4 - Processor.id [optional] |
44 | - 5 - virtualized [optional] |
45 | - |
46 | - Please do *not* order the result set since it will be UNIONed and |
47 | - ordered only then. |
48 | - |
49 | - Job types that are processor independent or do not care about |
50 | - virtualization should return NULL for the optional data in the result |
51 | - set. |
52 | - |
53 | - :param min_score: the pending jobs selected by the returned |
54 | - query should have score >= min_score. |
55 | - :param processor: the type of processor that the jobs are expected |
56 | - to run on. |
57 | - :param virtualized: whether the jobs are expected to run on the |
58 | - `processor` natively or inside a virtual machine. |
59 | - :return: a string SELECT clause that can be used to find |
60 | - the pending jobs of the appropriate type. |
61 | - """ |
62 | - |
63 | - |
64 | class IBuildFarmCandidateJobSelection(Interface): |
65 | """Operations for refining candidate job selection (optional). |
66 | |
67 | |
68 | === modified file 'lib/lp/buildmaster/model/buildfarmjob.py' |
69 | --- lib/lp/buildmaster/model/buildfarmjob.py 2010-01-20 04:04:15 +0000 |
70 | +++ lib/lp/buildmaster/model/buildfarmjob.py 2010-01-27 16:27:17 +0000 |
71 | @@ -10,7 +10,6 @@ |
72 | |
73 | from canonical.launchpad.webapp.interfaces import ( |
74 | DEFAULT_FLAVOR, IStoreSelector, MAIN_STORE) |
75 | - |
76 | from lp.buildmaster.interfaces.buildfarmjob import ( |
77 | IBuildFarmJob, IBuildFarmCandidateJobSelection, |
78 | ISpecificBuildFarmJobClass) |
79 | @@ -19,7 +18,8 @@ |
80 | class BuildFarmJob: |
81 | """Mix-in class for `IBuildFarmJob` implementations.""" |
82 | implements(IBuildFarmJob) |
83 | - classProvides(IBuildFarmCandidateJobSelection, ISpecificBuildFarmJobClass) |
84 | + classProvides( |
85 | + IBuildFarmCandidateJobSelection, ISpecificBuildFarmJobClass) |
86 | |
87 | def score(self): |
88 | """See `IBuildFarmJob`.""" |
89 | @@ -77,3 +77,4 @@ |
90 | def postprocessCandidate(job, logger): |
91 | """See `IBuildFarmCandidateJobSelection`.""" |
92 | return True |
93 | + |
94 | |
95 | === modified file 'lib/lp/soyuz/model/buildpackagejob.py' |
96 | --- lib/lp/soyuz/model/buildpackagejob.py 2010-01-20 04:14:23 +0000 |
97 | +++ lib/lp/soyuz/model/buildpackagejob.py 2010-01-27 16:27:17 +0000 |
98 | @@ -12,17 +12,14 @@ |
99 | |
100 | from storm.locals import Int, Reference, Storm |
101 | |
102 | -from zope.interface import classProvides, implements |
103 | +from zope.interface import implements |
104 | from zope.component import getUtility |
105 | |
106 | from canonical.database.sqlbase import sqlvalues |
107 | |
108 | -from lp.buildmaster.interfaces.buildfarmjob import ( |
109 | - BuildFarmJobType, IBuildFarmJobDispatchEstimation) |
110 | from lp.buildmaster.model.packagebuildfarmjob import PackageBuildFarmJob |
111 | from lp.registry.interfaces.sourcepackage import SourcePackageUrgency |
112 | from lp.registry.interfaces.pocket import PackagePublishingPocket |
113 | -from lp.services.job.interfaces.job import JobStatus |
114 | from lp.soyuz.interfaces.archive import ArchivePurpose |
115 | from lp.soyuz.interfaces.build import BuildStatus, IBuildSet |
116 | from lp.soyuz.interfaces.buildpackagejob import IBuildPackageJob |
117 | @@ -32,7 +29,6 @@ |
118 | class BuildPackageJob(PackageBuildFarmJob, Storm): |
119 | """See `IBuildPackageJob`.""" |
120 | implements(IBuildPackageJob) |
121 | - classProvides(IBuildFarmJobDispatchEstimation) |
122 | |
123 | __storm_table__ = 'buildpackagejob' |
124 | id = Int(primary=True) |
125 | @@ -161,34 +157,6 @@ |
126 | """See `IBuildPackageJob`.""" |
127 | return self.build.title |
128 | |
129 | - @staticmethod |
130 | - def composePendingJobsQuery(min_score, processor, virtualized): |
131 | - """See `IBuildFarmJob`.""" |
132 | - return """ |
133 | - SELECT |
134 | - BuildQueue.job, |
135 | - BuildQueue.lastscore, |
136 | - BuildQueue.estimated_duration, |
137 | - Build.processor AS processor, |
138 | - Archive.require_virtualized AS virtualized |
139 | - FROM |
140 | - BuildQueue, Build, BuildPackageJob, Archive, Job |
141 | - WHERE |
142 | - BuildQueue.job_type = %s |
143 | - AND BuildPackageJob.job = BuildQueue.job |
144 | - AND BuildPackageJob.job = Job.id |
145 | - AND Job.status = %s |
146 | - AND BuildPackageJob.build = Build.id |
147 | - AND Build.buildstate = %s |
148 | - AND Build.archive = Archive.id |
149 | - AND Archive.enabled = TRUE |
150 | - AND BuildQueue.lastscore >= %s |
151 | - AND Build.processor = %s |
152 | - AND Archive.require_virtualized = %s |
153 | - """ % sqlvalues( |
154 | - BuildFarmJobType.PACKAGEBUILD, JobStatus.WAITING, |
155 | - BuildStatus.NEEDSBUILD, min_score, processor, virtualized) |
156 | - |
157 | @property |
158 | def processor(self): |
159 | """See `IBuildFarmJob`.""" |
160 | |
161 | === modified file 'lib/lp/soyuz/model/buildqueue.py' |
162 | --- lib/lp/soyuz/model/buildqueue.py 2010-01-18 09:36:46 +0000 |
163 | +++ lib/lp/soyuz/model/buildqueue.py 2010-01-27 16:27:17 +0000 |
164 | @@ -160,10 +160,17 @@ |
165 | GROUP BY processor, virtualized; |
166 | """ |
167 | results = store.execute(builder_data).get_all() |
168 | + builders_in_total = builders_for_job = 0 |
169 | + virtualized_total = 0 |
170 | + native_total = 0 |
171 | |
172 | builder_stats = dict() |
173 | - builders_in_total = builders_for_job = 0 |
174 | for processor, virtualized, count in results: |
175 | + builders_in_total += count |
176 | + if virtualized: |
177 | + virtualized_total += count |
178 | + else: |
179 | + native_total += count |
180 | if my_processor is not None: |
181 | if (my_processor.id == processor and |
182 | my_virtualized == virtualized): |
183 | @@ -171,12 +178,15 @@ |
184 | # particular processor/virtualization combination and |
185 | # this is how many of these we have. |
186 | builders_for_job = count |
187 | - builders_in_total += count |
188 | builder_stats[(processor, virtualized)] = count |
189 | if my_processor is None: |
190 | # The job of interest (JOI) is processor independent. |
191 | builders_for_job = builders_in_total |
192 | |
193 | + builder_stats[(None, None)] = builders_in_total |
194 | + builder_stats[(None, True)] = virtualized_total |
195 | + builder_stats[(None, False)] = native_total |
196 | + |
197 | return (builders_in_total, builders_for_job, builder_stats) |
198 | |
199 | def _freeBuildersCount(self, processor, virtualized): |
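The hunk above extends the builder statistics with aggregate entries keyed on a `None` processor, so platform-independent jobs can look up builder counts directly. A standalone sketch of that bookkeeping (the `tally_builders` name and the input rows are invented for illustration):

```python
def tally_builders(rows):
    """rows: (processor_id, virtualized, count) tuples, as produced by
    the builder_data query in the diff."""
    builder_stats = {}
    virtualized_total = 0
    native_total = 0
    for processor, virtualized, count in rows:
        builder_stats[(processor, virtualized)] = count
        if virtualized:
            virtualized_total += count
        else:
            native_total += count
    # Aggregate keys for platform-independent lookups.
    builder_stats[(None, None)] = virtualized_total + native_total
    builder_stats[(None, True)] = virtualized_total
    builder_stats[(None, False)] = native_total
    return builder_stats
```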
200 | @@ -279,6 +289,163 @@ |
201 | else: |
202 | return int(head_job_delay) |
203 | |
204 | + def _estimateJobDelay(self, builder_stats): |
205 | + """Sum of estimated durations for *pending* jobs ahead in queue. |
206 | + |
207 | + For the purpose of estimating the dispatch time of the job of |
208 | + interest (JOI) we need to know the delay caused by all the pending |
209 | + jobs that are ahead of the JOI in the queue and that compete with it |
210 | + for builders. |
211 | + |
212 | + :param builder_stats: A dictionary with builder counts where the |
213 | + key is a (processor, virtualized) combination (aka "platform") and |
214 | + the value is the number of builders that can take on jobs |
215 | + requiring that combination. |
216 | + :return: A (sum_of_delays, head_job_platform) tuple where |
217 | + * 'sum_of_delays' is the estimated delay in seconds and |
218 | + * 'head_job_platform' is the platform ((processor, virtualized) |
219 | + combination) required by the job at the head of the queue. |
220 | + """ |
221 | + def normalize_virtualization(virtualized): |
222 | + """Jobs with NULL virtualization settings should be treated the |
223 | + same way as virtualized jobs.""" |
224 | + if virtualized is None or virtualized == True: |
225 | + return True |
226 | + else: |
227 | + return False |
228 | + def jobs_compete_for_builders(a, b): |
229 | + """True if the two jobs compete for builders.""" |
230 | + a_processor, a_virtualized = a |
231 | + b_processor, b_virtualized = b |
232 | + if a_processor is None or b_processor is None: |
233 | + # If either of the jobs is platform-independent then the two |
234 | + # jobs compete for the same builders if the virtualization |
235 | + # settings match. |
236 | + if a_virtualized == b_virtualized: |
237 | + return True |
238 | + else: |
239 | + # Neither job is platform-independent, match processor and |
240 | + # virtualization settings. |
241 | + return a == b |
242 | + |
243 | + store = getUtility(IStoreSelector).get(MAIN_STORE, DEFAULT_FLAVOR) |
244 | + my_platform = ( |
245 | + getattr(self.processor, 'id', None), |
246 | + normalize_virtualization(self.virtualized)) |
247 | + processor_clause = """ |
248 | + AND ( |
249 | + -- The processor values either match or the candidate |
250 | + -- job is processor-independent. |
251 | + buildqueue.processor = %s OR |
252 | + buildqueue.processor IS NULL) |
253 | + """ % sqlvalues(self.processor) |
254 | + query = """ |
255 | + SELECT |
256 | + BuildQueue.job, |
257 | + BuildQueue.lastscore, |
258 | + BuildQueue.estimated_duration, |
259 | + BuildQueue.processor, |
260 | + BuildQueue.virtualized |
261 | + FROM |
262 | + BuildQueue, Job |
263 | + WHERE |
264 | + BuildQueue.job = Job.id |
265 | + AND Job.status = %s |
266 | + AND BuildQueue.lastscore >= %s |
267 | + AND ( |
268 | + -- The virtualized values either match or the job |
269 | + -- does not care about virtualization and the job |
270 | + -- of interest (JOI) is to be run on a virtual builder |
271 | + -- (we want to prevent the execution of untrusted code |
272 | + -- on native builders). |
273 | + buildqueue.virtualized = %s OR |
274 | + (buildqueue.virtualized IS NULL AND %s = TRUE)) |
275 | + """ % sqlvalues( |
276 | + JobStatus.WAITING, self.lastscore, self.virtualized, |
277 | + self.virtualized) |
278 | + |
279 | + # We don't care about processors if the estimation is for a |
280 | + # processor-independent job. |
281 | + if self.processor is not None: |
282 | + query += processor_clause |
283 | + |
284 | + query += """ |
285 | + ORDER BY lastscore DESC, job |
286 | + """ |
287 | + job_queue = store.execute(query).get_all() |
288 | + |
289 | + sum_of_delays = 0 |
290 | + |
291 | + # This will be used to capture per-platform delay totals. |
292 | + delays = dict() |
293 | + # This will be used to capture per-platform job counts. |
294 | + job_counts = dict() |
295 | + |
296 | + # Which platform is the job at the head of the queue requiring? |
297 | + head_job_platform = None |
298 | + |
299 | + # Apply weights to the estimated duration of the jobs as follows: |
300 | + # - if a job is tied to a processor TP then divide the estimated |
301 | + # duration of that job by the number of builders that target TP |
302 | + # since only these can build the job. |
303 | + # - if the job is processor-independent then divide its estimated |
304 | + # duration by the total number of builders with the same |
305 | + # virtualization setting because any one of them may run it. |
306 | + for job, score, duration, processor, virtualized in job_queue: |
307 | + virtualized = normalize_virtualization(virtualized) |
308 | + platform = (processor, virtualized) |
309 | + # For the purpose of estimating the delay for dispatching the job |
310 | + # at the head of the queue to a builder we need to capture the |
311 | + # platform it targets. |
312 | + if head_job_platform is None: |
313 | + if self.processor is None: |
314 | + # The JOI is platform-independent i.e. the highest scored |
315 | + # job will be the head job. |
316 | + head_job_platform = platform |
317 | + else: |
318 | + # The JOI targets a specific platform. The head job is |
319 | + # thus the highest scored job that either targets the |
320 | + # same platform or is platform-independent. |
321 | + if (my_platform == platform or processor is None): |
322 | + head_job_platform = platform |
323 | + |
324 | + if job == self.job.id: |
325 | + # We have seen all jobs that are ahead of us in the queue |
326 | + # and can stop now. |
327 | + # This is guaranteed by the "ORDER BY lastscore DESC.." |
328 | + # clause above. |
329 | + break |
330 | + |
331 | + builder_count = builder_stats.get((processor, virtualized), 0) |
332 | + if builder_count == 0: |
333 | + # There is no builder that can run this job, ignore it |
334 | + # for the purpose of dispatch time estimation. |
335 | + continue |
336 | + |
337 | + if jobs_compete_for_builders(my_platform, platform): |
338 | + # Accumulate the delays, and count the number of jobs causing |
339 | + # them on a (processor, virtualized) basis. |
340 | + duration = duration.seconds |
341 | + delays[platform] = delays.setdefault(platform, 0) + duration |
342 | + job_counts[platform] = job_counts.setdefault(platform, 0) + 1 |
343 | + |
344 | + # Now weight/average the delays based on a jobs/builders comparison. |
345 | + for platform, duration in delays.iteritems(): |
346 | + jobs = job_counts[platform] |
347 | + builders = builder_stats[platform] |
348 | + if jobs < builders: |
349 | + # There are fewer jobs than builders that can take them on, |
350 | + # the delays should be averaged/divided by the number of jobs. |
351 | + denominator = jobs |
352 | + else: |
353 | + denominator = builders |
354 | + if denominator > 1: |
355 | + duration = int(duration/float(denominator)) |
356 | + |
357 | + sum_of_delays += duration |
358 | + |
359 | + return (sum_of_delays, head_job_platform) |
360 | + |
361 | |
362 | class BuildQueueSet(object): |
363 | """Utility to deal with BuildQueue content class.""" |
364 | |
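The two helper predicates inside `_estimateJobDelay` can be read in isolation. The sketch below restates them with explicit boolean returns (note that the branch's `jobs_compete_for_builders` implicitly returns `None` when a platform-independent job's virtualization setting differs; this version is tidied to return `False` in that case):

```python
def normalize_virtualization(virtualized):
    # Jobs with NULL (None) virtualization settings are treated the
    # same way as virtualized jobs.
    return virtualized is None or bool(virtualized)

def jobs_compete_for_builders(a, b):
    """True if two (processor, virtualized) platforms compete for the
    same builders."""
    a_processor, a_virtualized = a
    b_processor, b_virtualized = b
    if a_processor is None or b_processor is None:
        # A platform-independent job competes with any job that has a
        # matching virtualization setting.
        return a_virtualized == b_virtualized
    # Both jobs target a processor: platforms must match exactly.
    return a == b
```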
365 | === modified file 'lib/lp/soyuz/tests/test_buildpackagejob.py' |
366 | --- lib/lp/soyuz/tests/test_buildpackagejob.py 2010-01-18 08:55:03 +0000 |
367 | +++ lib/lp/soyuz/tests/test_buildpackagejob.py 2010-01-27 16:27:17 +0000 |
368 | @@ -9,17 +9,13 @@ |
369 | |
370 | from canonical.launchpad.webapp.interfaces import ( |
371 | IStoreSelector, MAIN_STORE, DEFAULT_FLAVOR) |
372 | -from canonical.launchpad.webapp.testing import verifyObject |
373 | from canonical.testing import LaunchpadZopelessLayer |
374 | |
375 | from lp.buildmaster.interfaces.builder import IBuilderSet |
376 | -from lp.buildmaster.interfaces.buildfarmjob import ( |
377 | - IBuildFarmJobDispatchEstimation) |
378 | from lp.soyuz.interfaces.archive import ArchivePurpose |
379 | from lp.soyuz.interfaces.build import BuildStatus |
380 | from lp.soyuz.interfaces.publishing import PackagePublishingStatus |
381 | from lp.soyuz.model.build import Build |
382 | -from lp.soyuz.model.buildpackagejob import BuildPackageJob |
383 | from lp.soyuz.model.processor import ProcessorFamilySet |
384 | from lp.soyuz.tests.test_publishing import SoyuzTestPublisher |
385 | from lp.testing import TestCaseWithFactory |
386 | @@ -382,9 +378,6 @@ |
387 | bpj = bq.specific_job |
388 | self.assertEqual(bpj.virtualized, False) |
389 | |
390 | - def test_provides_dispatch_estimation_interface(self): |
391 | - verifyObject(IBuildFarmJobDispatchEstimation, BuildPackageJob) |
392 | - |
393 | def test_getTitle(self): |
394 | # Test that BuildPackageJob returns the title of the build. |
395 | build, bq = find_job(self, 'gcc', '386') |
396 | |
397 | === modified file 'lib/lp/soyuz/tests/test_buildqueue.py' |
398 | --- lib/lp/soyuz/tests/test_buildqueue.py 2010-01-16 05:54:16 +0000 |
399 | +++ lib/lp/soyuz/tests/test_buildqueue.py 2010-01-27 16:27:17 +0000 |
400 | @@ -15,7 +15,8 @@ |
401 | from canonical.testing import LaunchpadZopelessLayer |
402 | |
403 | from lp.buildmaster.interfaces.builder import IBuilderSet |
404 | -from lp.buildmaster.interfaces.buildfarmjob import BuildFarmJobType |
405 | +from lp.buildmaster.interfaces.buildfarmjob import ( |
406 | + BuildFarmJobType) |
407 | from lp.buildmaster.model.builder import specific_job_classes |
408 | from lp.buildmaster.model.buildfarmjob import BuildFarmJob |
409 | from lp.services.job.model.job import Job |
410 | @@ -32,17 +33,38 @@ |
411 | |
412 | def find_job(test, name, processor='386'): |
413 | """Find build and queue instance for the given source and processor.""" |
414 | + def processor_matches(bq): |
415 | + if processor is None: |
416 | + if bq.processor is None: |
417 | + return True |
418 | + else: |
419 | + return False |
420 | + else: |
421 | + if processor == bq.processor.name: |
422 | + return True |
423 | + else: |
424 | + return False |
425 | + |
426 | for build in test.builds: |
427 | - if (build.sourcepackagerelease.name == name |
428 | - and build.processor.name == processor): |
429 | - return (build, build.buildqueue_record) |
430 | + bq = build.buildqueue_record |
431 | + source = None |
432 | + for attr in ('sourcepackagerelease', 'sourcepackagename'): |
433 | + source = getattr(build, attr, None) |
434 | + if source is not None: |
435 | + break |
436 | + if (source.name == name and processor_matches(bq)): |
437 | + return (build, bq) |
438 | return (None, None) |
439 | |
440 | |
441 | -def nth_builder(test, build, n): |
442 | +def nth_builder(test, bq, n): |
443 | """Find nth builder that can execute the given build.""" |
444 | + def builder_key(job): |
445 | + """Access key for builders capable of running the given job.""" |
446 | + return (getattr(job.processor, 'id', None), job.virtualized) |
447 | + |
448 | builder = None |
449 | - builders = test.builders.get(builder_key(build), []) |
450 | + builders = test.builders.get(builder_key(bq), []) |
451 | try: |
452 | builder = builders[n-1] |
453 | except IndexError: |
@@ -53,23 +75,30 @@
 def assign_to_builder(test, job_name, builder_number, processor='386'):
     """Simulate assigning a build to a builder."""
     build, bq = find_job(test, job_name, processor)
-    builder = nth_builder(test, build, builder_number)
+    builder = nth_builder(test, bq, builder_number)
     bq.markAsBuilding(builder)
 
 
 def print_build_setup(builds):
     """Show the build set-up for a particular test."""
+    def processor_name(bq):
+        proc = getattr(bq, 'processor', None)
+        if proc is None:
+            return 'None'
+        else:
+            return proc.name
+
+    print ""
     for build in builds:
         bq = build.buildqueue_record
-        spr = build.sourcepackagerelease
-        print "%12s, p:%5s, v:%5s e:%s *** s:%5s" % (
-            spr.name, build.processor.name, build.is_virtualized,
-            bq.estimated_duration, bq.lastscore)
-
-
-def builder_key(job):
-    """Access key for builders capable of running the given job."""
-    return (job.processor.id, job.is_virtualized)
+        source = None
+        for attr in ('sourcepackagerelease', 'sourcepackagename'):
+            source = getattr(build, attr, None)
+            if source is not None:
+                break
+        print "%5s, %18s, p:%5s, v:%5s e:%s *** s:%5s" % (
+            bq.id, source.name, processor_name(bq),
+            bq.virtualized, bq.estimated_duration, bq.lastscore)
 
 
 def check_mintime_to_builder(
@@ -104,6 +133,16 @@
         datetime.utcnow().replace(tzinfo=utc) - timedelta(seconds=offset))
 
 
+def check_delay_for_job(test, the_job, delay, head_job_platform):
+    # Obtain the builder statistics pertaining to this job.
+    builder_data = the_job._getBuilderData()
+    builders_in_total, builders_for_job, builder_stats = builder_data
+    estimated_delay = the_job._estimateJobDelay(builder_stats)
+    test.assertEqual(
+        estimated_delay,
+        (delay, head_job_platform))
+
+
 class TestBuildQueueSet(TestCaseWithFactory):
     """Test for `BuildQueueSet`."""
 
@@ -206,6 +245,20 @@
             builder.builderok = True
             builder.manual = False
 
+        # Native builders irrespective of processor.
+        self.builders[(None, False)] = self.builders[(x86_proc.id, False)]
+        self.builders[(None, False)].extend(
+            self.builders[(amd_proc.id, False)])
+        self.builders[(None, False)].extend(
+            self.builders[(hppa_proc.id, False)])
+
+        # Virtual builders irrespective of processor.
+        self.builders[(None, True)] = self.builders[(x86_proc.id, True)]
+        self.builders[(None, True)].extend(
+            self.builders[(amd_proc.id, True)])
+        self.builders[(None, True)].extend(
+            self.builders[(hppa_proc.id, True)])
+
         # Disable the sample data builders.
         getUtility(IBuilderSet)['bob'].builderok = False
         getUtility(IBuilderSet)['frog'].builderok = False
@@ -329,6 +382,12 @@
         self.assertEqual(
             builder_stats[(hppa_proc.id, True)], 4,
             "The number of virtual hppa builders is wrong")
+        self.assertEqual(
+            builder_stats[(None, False)], 9,
+            "The number of *native* builders across all processors is wrong")
+        self.assertEqual(
+            builder_stats[(None, True)], 12,
+            "The number of *virtual* builders across all processors is wrong")
         # Disable the native x86 builders.
         for builder in self.builders[(x86_proc.id, False)]:
             builder.builderok = False
@@ -580,6 +639,7 @@
         score = 1000
         duration = 0
         for build in self.builds:
+            score += getattr(self, 'score_increment', 1)
             score += 1
             duration += 60
             bq = build.buildqueue_record
@@ -815,3 +875,143 @@
             bq.virtualized, build.is_virtualized,
             "The 'virtualized' property deviates.")
 
+
+class TestMultiArchJobDelayEstimation(MultiArchBuildsBase):
+    """Test estimated job delays with various processors."""
+    score_increment = 2
+    def setUp(self):
+        """Set up a fake 'BRANCHBUILD' build farm job class.
+
+        The two platform-independent jobs will have a score of 1025 and 1053
+        respectively.
+
+               3,           gedit, p: hppa, v:False e:0:01:00 *** s: 1003
+               4,           gedit, p:  386, v:False e:0:02:00 *** s: 1006
+               5,         firefox, p: hppa, v:False e:0:03:00 *** s: 1009
+               6,         firefox, p:  386, v:False e:0:04:00 *** s: 1012
+               7,             apg, p: hppa, v:False e:0:05:00 *** s: 1015
+               8,             apg, p:  386, v:False e:0:06:00 *** s: 1018
+               9,             vim, p: hppa, v:False e:0:07:00 *** s: 1021
+              10,             vim, p:  386, v:False e:0:08:00 *** s: 1024
+        -->   19,  xx-recipe-bash, p: None, v:False e:0:00:22 *** s: 1025
+              11,             gcc, p: hppa, v:False e:0:09:00 *** s: 1027
+              12,             gcc, p:  386, v:False e:0:10:00 *** s: 1030
+              13,           bison, p: hppa, v:False e:0:11:00 *** s: 1033
+              14,           bison, p:  386, v:False e:0:12:00 *** s: 1036
+              15,            flex, p: hppa, v:False e:0:13:00 *** s: 1039
+              16,            flex, p:  386, v:False e:0:14:00 *** s: 1042
+              17,        postgres, p: hppa, v:False e:0:15:00 *** s: 1045
+              18,        postgres, p:  386, v:False e:0:16:00 *** s: 1048
+        -->   20,   xx-recipe-zsh, p: None, v:False e:0:03:42 *** s: 1053
+
+        p=processor, v=virtualized, e=estimated_duration, s=score
+        """
+        super(TestMultiArchJobDelayEstimation, self).setUp()
+
+        job = self.factory.makeSourcePackageRecipeBuildJob(
+            virtualized=False, estimated_duration=22,
+            sourcename='xx-recipe-bash')
+        job.lastscore = 1025
+        self.builds.append(job.specific_job.build)
+        job = self.factory.makeSourcePackageRecipeBuildJob(
+            virtualized=False, estimated_duration=222,
+            sourcename='xx-recipe-zsh')
+        job.lastscore = 1053
+        self.builds.append(job.specific_job.build)
+        # print_build_setup(self.builds)
+
+    def test_job_delay_for_binary_builds(self):
+        processor_fam = ProcessorFamilySet().getByName('hppa')
+        hppa_proc = processor_fam.processors[0]
+
+        # One of four builders for the 'flex' build is immediately available.
+        flex_build, flex_job = find_job(self, 'flex', 'hppa')
+        check_mintime_to_builder(self, flex_job, hppa_proc, False, 0)
+
+        # Obtain the builder statistics pertaining to this job.
+        builder_data = flex_job._getBuilderData()
+        builders_in_total, builders_for_job, builder_stats = builder_data
+
+        # The delay will be 900 (= 15*60) + 222 seconds, the head job is
+        # platform-independent.
+        check_delay_for_job(self, flex_job, 1122, (None, False))
+
+        # Assign the postgres job to a builder.
+        assign_to_builder(self, 'postgres', 1, 'hppa')
+        # The 'postgres' job is not pending any more. Now only the 222
+        # seconds (the estimated duration of the platform-independent job)
+        # should be returned.
+        check_delay_for_job(self, flex_job, 222, (None, False))
+
+        # How about some estimates for x86 builds?
+        processor_fam = ProcessorFamilySet().getByName('x86')
+        x86_proc = processor_fam.processors[0]
+
+        _bison_build, bison_job = find_job(self, 'bison', '386')
+        check_mintime_to_builder(self, bison_job, x86_proc, False, 0)
+        # The delay will be 900 (= (14+16)*60/2) + 222 seconds, the head job
+        # is platform-independent.
+        check_delay_for_job(self, bison_job, 1122, (None, False))
+
+        # The 2 tests that follow exercise the estimation in conjunction with
+        # longer pending job queues. Please note that the sum of estimates for
+        # the '386' jobs is divided by 4 which is the number of native '386'
+        # builders.
+
+        _apg_build, apg_job = find_job(self, 'apg', '386')
+        check_mintime_to_builder(self, apg_job, x86_proc, False, 0)
+        # The delay will be 900 (= (8+10+12+14+16)*60/4) + 122 (= (222+22)/2)
+        # seconds, the head job is platform-independent.
+        check_delay_for_job(self, apg_job, 1022, (None, False))
+
+        _gedit_build, gedit_job = find_job(self, 'gedit', '386')
+        check_mintime_to_builder(self, gedit_job, x86_proc, False, 0)
+        # The delay will be
+        #   1080 (= (4+6+8+10+12+14+16)*60/4) + 122 (= (222+22)/2)
+        # seconds, the head job is platform-independent.
+        check_delay_for_job(self, gedit_job, 1172, (None, False))
+
+    def test_job_delay_for_recipe_builds(self):
+        # One of the 9 builders for the 'bash' build is immediately available.
+        bash_build, bash_job = find_job(self, 'xx-recipe-bash', None)
+        check_mintime_to_builder(self, bash_job, None, False, 0)
+
+        # Obtain the builder statistics pertaining to this job.
+        builder_data = bash_job._getBuilderData()
+        builders_in_total, builders_for_job, builder_stats = builder_data
+
+        # The delay will be 960 + 780 + 222 = 1962, where
+        #   hppa job delays: 960 = (9+11+13+15)*60/3
+        #   386 job delays:  780 = (10+12+14+16)*60/4
+        check_delay_for_job(self, bash_job, 1962, (None, False))
+
+        # One of the 9 builders for the 'zsh' build is immediately available.
+        zsh_build, zsh_job = find_job(self, 'xx-recipe-zsh', None)
+        check_mintime_to_builder(self, zsh_job, None, False, 0)
+
+        # Obtain the builder statistics pertaining to this job.
+        builder_data = zsh_job._getBuilderData()
+        builders_in_total, builders_for_job, builder_stats = builder_data
+
+        # The delay will be 0 since this is the head job.
+        check_delay_for_job(self, zsh_job, 0, (None, False))
+
+        # Assign the zsh job to a builder.
+        assign_to_builder(self, 'xx-recipe-zsh', 1, None)
+
+        # Now that the highest-scored job is out of the way, the estimation
+        # for the 'bash' recipe build is 222 seconds shorter and the new head
+        # job platform is (1, False) (due to the fact that the native '386'
+        # postgres job (with id 18) is the new head job).
+
+        # The delay will be 960 + 780 = 1740, where
+        #   hppa job delays: 960 = (9+11+13+15)*60/3
+        #   386 job delays:  780 = (10+12+14+16)*60/4
+        check_delay_for_job(self, bash_job, 1740, (1, False))
+
+        processor_fam = ProcessorFamilySet().getByName('x86')
+        x86_proc = processor_fam.processors[0]
+
+        _postgres_build, postgres_job = find_job(self, 'postgres', '386')
+        # The delay will be 0 since this is the head job now.
+        check_delay_for_job(self, postgres_job, 0, (1, False))
 
=== modified file 'lib/lp/testing/factory.py'
--- lib/lp/testing/factory.py	2010-01-21 11:07:52 +0000
+++ lib/lp/testing/factory.py	2010-01-27 16:27:17 +0000
@@ -1242,8 +1242,8 @@
         passed in, but defaults to a Subversion import from an arbitrary
         unique URL.
         """
-        if (svn_branch_url is cvs_root is cvs_module is git_repo_url is hg_repo_url
-            is None):
+        if (svn_branch_url is cvs_root is cvs_module is git_repo_url is
+            hg_repo_url is None):
             svn_branch_url = self.getUniqueURL()
 
         if product is None:
@@ -1611,10 +1611,11 @@
             registrant, owner, distroseries, sourcepackagename, name, recipe)
 
     def makeSourcePackageRecipeBuild(self, sourcepackage=None, recipe=None,
-                                     requester=None, archive=None):
+                                     requester=None, archive=None,
+                                     sourcename=None):
         """Make a new SourcePackageRecipeBuild."""
         if sourcepackage is None:
-            sourcepackage = self.makeSourcePackage()
+            sourcepackage = self.makeSourcePackage(sourcename=sourcename)
         if recipe is None:
             recipe = self.makeSourcePackageRecipe()
         if requester is None:
@@ -1627,16 +1628,21 @@
             archive=archive,
             requester=requester)
 
-    def makeSourcePackageRecipeBuildJob(self, score=9876):
+    def makeSourcePackageRecipeBuildJob(
+        self, score=9876, virtualized=True, estimated_duration=64,
+        sourcename=None):
         """Create a `SourcePackageRecipeBuildJob` and a `BuildQueue` for
         testing."""
-        recipe_build = self.makeSourcePackageRecipeBuild()
+        recipe_build = self.makeSourcePackageRecipeBuild(
+            sourcename=sourcename)
         recipe_build_job = recipe_build.makeJob()
 
         store = getUtility(IStoreSelector).get(MAIN_STORE, DEFAULT_FLAVOR)
         bq = BuildQueue(
             job=recipe_build_job.job, lastscore=score,
-            job_type=BuildFarmJobType.RECIPEBRANCHBUILD)
+            job_type=BuildFarmJobType.RECIPEBRANCHBUILD,
+            estimated_duration=timedelta(seconds=estimated_duration),
+            virtualized=virtualized)
         store.add(bq)
         return bq
 
@@ -1850,10 +1856,11 @@
             return self.makeSourcePackageName()
         return getUtility(ISourcePackageNameSet).getOrCreateByName(name)
 
-    def makeSourcePackage(self, sourcepackagename=None, distroseries=None):
+    def makeSourcePackage(
+        self, sourcepackagename=None, distroseries=None, sourcename=None):
         """Make an `ISourcePackage`."""
         if sourcepackagename is None:
-            sourcepackagename = self.makeSourcePackageName()
+            sourcepackagename = self.makeSourcePackageName(sourcename)
         if distroseries is None:
             distroseries = self.makeDistroRelease()
         return distroseries.getSourcePackage(sourcepackagename)
Hello there!
This is the second-to-last branch required to provide generalized build farm
job dispatch time estimation (irrespective of job type).
Given a build farm job of interest (JOI) for which the user would like to
receive a dispatch time estimate, we need to calculate the dispatch delay
caused by other jobs that
- are ahead of JOI in the queue and
- compete with it for builder resources
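The "compete for builder resources" rule can be sketched as a predicate over (processor, virtualized) platforms. This is my own illustration with hypothetical dict-based job records, not the code from this branch:

```python
def competes_with(job_a, job_b):
    """Whether two pending jobs compete for the same builder pool.

    Jobs here are hypothetical dicts with 'processor' and 'virtualized'
    keys; a processor of None marks a processor-independent job.
    """
    # Jobs never share builders across the virtualization boundary.
    if job_a['virtualized'] != job_b['virtualized']:
        return False
    # A processor-independent job competes with every job that has the
    # same virtualization setting.
    if job_a['processor'] is None or job_b['processor'] is None:
        return True
    # Otherwise both jobs must target the same processor.
    return job_a['processor'] == job_b['processor']

recipe = {'processor': None, 'virtualized': False}
hppa = {'processor': 'hppa', 'virtualized': False}
x86_virt = {'processor': '386', 'virtualized': True}
print(competes_with(recipe, hppa))    # True
print(competes_with(hppa, x86_virt))  # False
```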
Some of the ideas implemented by the algorithm in this branch are:
- a processor-independent job competes with all other jobs with the same
  virtualization setting.
- the delays caused by the jobs ahead of the JOI need to be weighted (based
on the size of the pool of builders that can run these jobs).
- In addition to the estimated delay value, _estimateJobDelay() returns the
  platform of the head job (the first of the competing jobs to be dispatched
  to a builder).
  The head job platform will be fed to _estimateTimeToNextBuilder() in order
  to estimate how long it takes until the head job gets dispatched to a
  builder.
- Total delay = "delay caused by jobs ahead" + "time to next builder"
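The weighting idea above can be sketched as follows: group the estimated durations of the competing jobs ahead by platform, and divide each group's total by the size of the builder pool that can serve it. This is a minimal sketch under my own assumptions (hypothetical function and data shapes), not the implementation in this branch:

```python
from collections import defaultdict

def estimate_delay_from_jobs_ahead(jobs_ahead, builder_counts):
    """Sum per-platform workloads, weighted by builder pool size.

    jobs_ahead: (platform, estimated_duration_in_seconds) tuples for
        pending jobs that outscore the job of interest.
    builder_counts: platform -> number of builders able to run it.
    Platforms are (processor, virtualized) tuples.
    """
    totals = defaultdict(int)
    for platform, duration in jobs_ahead:
        totals[platform] += duration
    delay = 0
    for platform, total in totals.items():
        # A workload spread over N builders drains roughly N times as
        # fast, hence the division by the pool size.
        delay += total // builder_counts.get(platform, 1)
    return delay

# Four hppa jobs ahead (9, 11, 13 and 15 minutes) and three builders:
jobs = [(('hppa', False), minutes * 60) for minutes in (9, 11, 13, 15)]
print(estimate_delay_from_jobs_ahead(jobs, {('hppa', False): 3}))  # 960
```

The printed 960 matches the "hppa job delays: 960 = (9+11+13+15)*60/3" figure used in the new tests.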
Tests to run:
bin/test -vv -t test_buildqueue
I had various pre-implementation talks with Julian and Michael N.
No (relevant) "make lint" errors or warnings.