Merge lp:~mabac/svammel/auto-close-retried into lp:svammel
Status: Merged
Approved by: Mattias Backman
Approved revision: 99
Merged at revision: 85
Proposed branch: lp:~mabac/svammel/auto-close-retried
Merge into: lp:svammel
Prerequisite: lp:~mabac/svammel/get-success-logs
Diff against target: 733 lines (+217/-108), 6 files modified
  bug_logging.py (+11/-0)
  bug_reporting.py (+14/-0)
  data_parsing.py (+65/-43)
  file-failures.py (+40/-10)
  tests/test_bug_logging.py (+41/-7)
  tests/test_fail_filer.py (+46/-48)
To merge this branch: bzr merge lp:~mabac/svammel/auto-close-retried
Related bugs:
Reviewer | Review Type | Status
---|---|---
Steve Langasek | community | Needs Fixing
James Westby | community | Approve

Review via email: mp+54713@code.launchpad.net
Commit message
Description of the change
Hi,
This branch adds a script for closing bugs. It's supposed to cover the requested feature of being able to close a bug which was filed due to transient archive inconsistency.
From an email from Steve L:
"[]...sometimes a build fails due
to transient archive breakage, particularly in the main archive, and a
developer will request a retry - that build gets the same build id as the
failure, but spits out a successful build. [...]"
The script actually lets you close any logged bug when a successful build of the same or a higher version of the package is found. This should cover the above case and also supports the following cases where you might want to close bugs.
* The user forgets to add an archive to cross check for successful armel builds. The user can then do 'close-bugs --archive <archive with successful builds> --logfile filed_bugs.log' to close the wrongly filed bugs.
* The user completely messes up and wants to close all the bugs. Just add '--forceclose' to the command line to close all bugs found in the log file.
I feel that this is a somewhat different use case than running the file-failures script to file bugs, which is why I put this functionality in a separate script. I can make it part of the main flow if that's better.
By "closing" I mean setting the status of all connected bug_tasks to "Invalid". There may well be a better implementation of closing bugs, please let me know.
Thanks,
Mattias
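In launchpadlib terms, the closing step described above comes down to a loop like the one below. It mirrors the `close_bug()` helper this branch adds (visible in the preview diff), but as a standalone sketch: `bug` stands in for any object exposing `bug_tasks` whose tasks have `title`, `status`, and `lp_save()`, the way launchpadlib bug tasks do.

```python
# Sketch of what "closing" means in this branch's first iteration:
# set every task on the bug to Invalid and save each one.
# The real code loads the bug via lp.load(url); here the bug object
# is assumed to be already in hand.

BUG_CLOSED_STATUS = 'Invalid'


def close_bug_tasks(bug, close_status=BUG_CLOSED_STATUS):
    """Set every task on the bug to close_status and persist it."""
    for bug_task in bug.bug_tasks:
        print(" Changing status for %s [%s ==> %s]"
              % (bug_task.title, bug_task.status, close_status))
        bug_task.status = close_status
        bug_task.lp_save()  # launchpadlib pushes the change to Launchpad
```

As the review thread goes on to point out, this first cut is too broad: later revisions close only the task the script itself filed, and pick the status based on the version that eventually built.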
James Westby (james-w) wrote:
Mattias Backman (mabac) wrote:
> 24 + for bug_task in bug.bug_tasks:
> 25 + print_message(" Changing status for %s [%s ==> %s]" % \
> 26 + (bug_task.title, bug_task.status, BUG_CLOSED_STATUS))
>
> Perhaps we should have the --dry-run support here too?
>
> Ah, I see now it is done at a higher level, but that message
> doesn't say what bug would be closed. Could we have it print
> the ".web_link" of each task that would be closed?
Sure, I'll add that.
>
> I have no problems with the code, it looks good to me.
>
> I do wonder about whether this should be a separate script (--force-close
> can be), but I wonder if just having the script in cron clean up
> automatically would be better. Or would you imagine having both scripts
> in cron?
Well, this is where I'd like some input from Steve I guess. I don't really know if this is going to be running in cron, or if a developer would run the script with carefully selected archives and never analyze the same combination of archives more than once.
>
> Also, closing bugs doesn't alter the log file, will that lead to
> missing bugs if it goes fail->success-
Yes, it will. I imagined that seeing a successful build once for a particular version in a rebuild archive would rule out that it is a porting issue. When you run the bug filer on a new rebuild archive I figured that you'd just point it to a blank log file and start from scratch. Or that any new failure would be in a higher version which would be filed.
Again, I'd like some input as to how svammel users would like to use it. There are more behaviors that I expect will have to be tweaked since I've made my own assumptions.
Thanks,
Mattias
James Westby (james-w) wrote:
On Thu, 24 Mar 2011 17:10:04 -0000, Mattias Backman <email address hidden> wrote:
> Well, this is where I'd like some input from Steve I guess. I don't
> really know if this is going to be running in cron, or if a developer
> would run the script with carefully selected archives and never
> analyze the same combination of archives more than once.
I imagine it would be done multiple times for a collection of
archives. This is because it can take a week or longer for a full
rebuild test, and we would want to start filing bugs for the early
failures before then.
> Yes, it will. I imagined that seeing a successful build once for a
> particular version in a rebuild archive would rule out that it is a
> porting issue. When you run the bug filer on a new rebuild archive I
> figured that you'd just point it to a blank log file and start from
> scratch. Or that any new failure would be in a higher version which
> would be filed.
Yeah, those are good points, so I agree that it's not a big issue.
> Again, I'd like some input as to how svammel users would like to use
> it. There are more behaviors that I expect will have to be tweaked
> since I've made my own assumptions.
There is a rebuild test due soon. Let's ensure that we have something
that Steve can use for filing bugs from that, and we will get good
specific feedback then if there are still issues.
Thanks,
James
Mattias Backman (mabac) wrote:
> I do wonder about whether this should be a separate script (--force-close
> can be), but I wonder if just having the script in cron clean up
> automatically would be better. Or would you imagine having both scripts
> in cron?
I've made this a part of the file-failures script now so it runs a clean-up pass after filing any new bugs. I'll leave the close-bugs script in there for now, will remove it if there seems to be no use for it.
Thanks,
Mattias
91. By Mattias Backman: Add bugtask weblink to dryrun message.
92. By Mattias Backman: Add clean up pass to file-failures.
93. By Mattias Backman: Fix import.
94. By Mattias Backman: Moved clean up out of loop.
James Westby (james-w) wrote:
This looks good now, I think this will provide the best user experience
for Steve.
Thanks,
James
Steve Langasek (vorlon) wrote:
On Thu, Mar 24, 2011 at 05:10:05PM -0000, Mattias Backman wrote:
> > I have no problems with the code, it looks good to me.
> > I do wonder about whether this should be a separate script (--force-close
> > can be), but I wonder if just having the script in cron clean up
> > automatically would be better. Or would you imagine having both scripts
> > in cron?
> Well, this is where I'd like some input from Steve I guess. I don't really
> know if this is going to be running in cron, or if a developer would run
> the script with carefully selected archives and never analyze the same
> combination of archives more than once.
I have no immediate plans to run the script from a cron job, but we will
definitely have occasion to run the script repeatedly against the same set
of archives - both to report new build failures over the course of a
rebuild, as James points out, and also because we would run this repeatedly
against the main archive. (The latter use might eventually be cronned.)
> > Also, closing bugs doesn't alter the log file, will that lead to
> > missing bugs if it goes fail->success-
> Yes, it will. I imagined that seeing a successful build once for a
> particular version in a rebuild archive would rule out that it is a
> porting issue. When you run the bug filer on a new rebuild archive I
> figured that you'd just point it to a blank log file and start from
> scratch. Or that any new failure would be in a higher version which would
> be filed.
This makes sense to me. As long as a build failure of a *new* version is
tracked, or it's clear how to invoke the tool to get a blank log file when
pointing at a new rebuild archive, that's fine.
On Fri, Mar 25, 2011 at 07:50:55AM -0000, Mattias Backman wrote:
> > I do wonder about whether this should be a separate script (--force-close
> > can be), but I wonder if just having the script in cron clean up
> > automatically would be better. Or would you imagine having both scripts
> > in cron?
> I've made this a part of the file-failures script now so it runs a
> clean-up pass after filing any new bugs. I'll leave the close-bugs script
> in there for now, will remove it if there seems to be no use for it.
Thanks, yes, I prefer to have this done as part of a single command rather
than having to invoke close-bugs separately.
--
Steve Langasek, Debian Developer, Ubuntu Developer
"Give me a lever long enough and a Free OS to set it on, and I can move the world."
http://
<email address hidden>
Steve Langasek (vorlon) wrote:
> By "closing" I mean setting the status of all connected
> bug_tasks to "Invalid". There may well be a better
> implementation of closing bugs, please let me know.
Optimally, a bug that's being closed because the package has subsequently built successfully in the same package version would be closed as "Invalid", and one closed in a later package version would be "Fix Released"; but that's nitpicking on my part.
More importantly, we should only be closing bug tasks corresponding to the package the bug was originally reported on. If a bug report gets addressed in the current package by applying a workaround, but a new task has been opened on, for example, gcc-linaro to flag that the compiler needs a fix for the underlying issue, we shouldn't touch that task.
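The branch ends up implementing exactly this split (see `BUG_INVALID_STATUS` and `BUG_FIXED_STATUS` in the preview diff). A condensed sketch of the decision, using the same string-equality version check the diff itself uses rather than full Debian version ordering; the function name is illustrative, not from the branch:

```python
# Same version later built fine -> the report was spurious: Invalid.
# A higher version built fine -> the failure was fixed: Fix Released.

BUG_INVALID_STATUS = 'Invalid'
BUG_FIXED_STATUS = 'Fix Released'


def choose_close_status(filed_version, successful_version):
    """Pick the closing status given the version the bug was filed
    against and the version that subsequently built successfully."""
    if successful_version == filed_version:
        return BUG_INVALID_STATUS  # transient failure of the same version
    return BUG_FIXED_STATUS        # a later version builds: fixed
```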
Mattias Backman (mabac) wrote:
> > By "closing" I mean setting the status of all connected
> > bug_tasks to "Invalid". There may well be a better
> > implementation of closing bugs, please let me know.
>
> Optimally, a bug that's being closed because the package has subsequently
> built successfully in the same package version would be closed as "invalid"
> and one closed in a later package version would be "Fix released"; but that's
> nitpicking on my part.
I'll have a look at that and probably can change it easily.
>
> More importantly, we should only be closing bug tasks corresponding to the
> package the bug was originally reported on. If a bug report gets addressed in
> the current package by applying a workaround, but a new task has been opened
> on, for example, gcc-linaro to flag that the compiler needs a fix for the
> underlying issue, we shouldn't touch that task.
Thanks for clarifying, I'll fix that.
Closing bugs is only done when a later successful build on armel is found. I do not check for later failures on the other archs. This means that if we run the script on partially built archives where the armel build has failed and the builds are not yet started on the other archs there will be a bug that's not going to be closed even if the other builds fail. Will this be a problem?
Thanks,
Mattias
95. By Mattias Backman: Only close the bug_task we created, not tasks added for other packages.
96. By Mattias Backman: Add different statuses for closing bugs.
97. By Mattias Backman: pep8 style compliance fixes.
98. By Mattias Backman: Merged changes from trunk.
Steve Langasek (vorlon) wrote:
On Tue, Mar 29, 2011 at 08:58:29AM -0000, Mattias Backman wrote:
> > More importantly, we should only be closing bug tasks corresponding to the
> > package the bug was originally reported on. If a bug report gets addressed in
> > the current package by applying a workaround, but a new task has been opened
> > on, for example, gcc-linaro to flag that the compiler needs a fix for the
> > underlying issue, we shouldn't touch that task.
> Thanks for clarifying, I'll fix that.
> Closing bugs is only done when a later successful build on armel is found.
> I do not check for later failures on the other archs. This means that if
> we run the script on partially built archives where the armel build has
> failed and the builds are not yet started on the other archs there will be
> a bug that's not going to be closed even if the other builds fail. Will
> this be a problem?
No, I think that's fine as the x86 rebuild is obviously going to run much
faster than the armel one in the general case, and we would want to avoid
filing bugs where we know we don't have enough information yet to say if
they're cross-platform. But please document this limitation.
Thanks,
--
Steve Langasek, Debian Developer, Ubuntu Developer
"Give me a lever long enough and a Free OS to set it on, and I can move the world."
http://
<email address hidden>
99. By Mattias Backman: Merge James' fix to avoid shadowing the stdlib logging module.
Preview Diff
1 | === modified file 'bug_logging.py' |
2 | --- bug_logging.py 2011-03-23 11:12:12 +0000 |
3 | +++ bug_logging.py 2011-03-30 07:41:31 +0000 |
4 | @@ -72,6 +72,14 @@ |
5 | self.log_file.close() |
6 | raise |
7 | |
8 | + def get_arch(self, arch): |
9 | + entries = [] |
10 | + for entry in self.log_entries: |
11 | + if entry.is_arch(arch): |
12 | + entries.append(entry) |
13 | + return entries |
14 | + |
15 | + |
16 | class LogEntry(object): |
17 | def __init__(self, package_name, package_version, build_arch, bugtask_url): |
18 | self.package_name = package_name |
19 | @@ -88,3 +96,6 @@ |
20 | return (self.package_name == package_name and |
21 | self.package_version == package_version and |
22 | self.build_arch == build_arch) |
23 | + |
24 | + def is_arch(self, build_arch): |
25 | + return self.build_arch == build_arch |
26 | |
27 | === modified file 'bug_reporting.py' |
28 | --- bug_reporting.py 2011-03-22 10:17:01 +0000 |
29 | +++ bug_reporting.py 2011-03-30 07:41:31 +0000 |
30 | @@ -13,6 +13,10 @@ |
31 | ) |
32 | |
33 | |
34 | +BUG_INVALID_STATUS = 'Invalid' |
35 | +BUG_FIXED_STATUS = 'Fix Released' |
36 | + |
37 | + |
38 | def get_project_url(package_name, lp): |
39 | """ Return the link to the project with the supplied name. """ |
40 | |
41 | @@ -63,3 +67,13 @@ |
42 | dupe_bugs = project.searchTasks(tags=dupe_tags, tags_combinator='All') |
43 | |
44 | return len(dupe_bugs[:1]) > 0 |
45 | + |
46 | + |
47 | +def close_bug(url, close_status, lp): |
48 | + """ Close the bugtask by setting status to Invalid.""" |
49 | + |
50 | + bug_task = lp.load(url) |
51 | + print_message(" Changing status for %s [%s ==> %s]" % \ |
52 | + (bug_task.title, bug_task.status, close_status)) |
53 | + bug_task.status = close_status |
54 | + bug_task.lp_save() |
55 | |
56 | === modified file 'data_parsing.py' |
57 | --- data_parsing.py 2011-03-30 07:41:31 +0000 |
58 | +++ data_parsing.py 2011-03-30 07:41:31 +0000 |
59 | @@ -55,10 +55,10 @@ |
60 | |
61 | |
62 | class SPPH(object): |
63 | - def __init__(self, spph): |
64 | + def __init__(self, source_package_name, source_package_version): |
65 | self.records = dict() |
66 | - self.version = spph.source_package_version |
67 | - self.package_name = spph.source_package_name |
68 | + self.version = source_package_version |
69 | + self.package_name = source_package_name |
70 | self.url = get_base_api_url() + '+source/%s' % self.package_name |
71 | |
72 | def add_build_record(self, build_record): |
73 | @@ -136,7 +136,8 @@ |
74 | print_message(" Found build '%s'" % build.title) |
75 | |
76 | stored_spph = all_spph.get(csp.source_package_name) |
77 | - new_spph = SPPH(csp) |
78 | + new_spph = SPPH(csp.source_package_name, |
79 | + csp.source_package_version) |
80 | if new_spph.is_higher_version_than(stored_spph): |
81 | all_spph[csp.source_package_name] = new_spph |
82 | new_spph.add_build_record(build) |
83 | @@ -146,6 +147,30 @@ |
84 | return all_spph |
85 | |
86 | |
87 | +def exists_successful_later_build(spph, archives, fail_arch): |
88 | + for archive, archive_name in archives: |
89 | + # Get list of successful builds for the same package |
90 | + success_list = get_build_list(archive, SUCCESS_BUILD_STATE, |
91 | + spph.package_name) |
92 | + success_spph_list = [] |
93 | + for success_build in success_list: |
94 | + csp = success_build.current_source_publication |
95 | + success_spph = SPPH(csp.source_package_name, |
96 | + csp.source_package_version) |
97 | + if success_build.arch_tag == fail_arch: |
98 | + success_spph_list.append(success_spph) |
99 | + |
100 | + candidate_success_spph = get_highest_version(success_spph_list) |
101 | + if spph.is_higher_version_than(candidate_success_spph): |
102 | + # Only care about the failed build if it is of a higher |
103 | + # version than any successful build of the same package. |
104 | + pass |
105 | + else: |
106 | + return candidate_success_spph.version |
107 | + |
108 | + return None |
109 | + |
110 | + |
111 | def filter_spph(all_spph_list, fail_arch, ok_archs, archives): |
112 | """Filter out the build failures that are unique to the target arch. |
113 | |
114 | @@ -159,68 +184,48 @@ |
115 | A subset of all_spph_list which represents the interesting packages. |
116 | """ |
117 | filtered_list = [] |
118 | + print_message('Determining which build failures to file bugs for.') |
119 | for spph in all_spph_list.values(): |
120 | if not spph.is_ftbfs(fail_arch): |
121 | - print_message("Ignoring package '%s' version '%s' since it did " \ |
122 | - "not fail on architecture '%s'." % \ |
123 | + print_message(" Ignoring package '%s' version '%s' since it " \ |
124 | + "did not fail on architecture '%s'." % \ |
125 | (spph.package_name, spph.version, fail_arch)) |
126 | continue |
127 | also_failed = next((arch for arch in ok_archs |
128 | if spph.is_ftbfs(arch)), None) |
129 | if also_failed is not None: |
130 | - print_message("Ignoring package '%s' version '%s' since it also " \ |
131 | - "failed to build on architecture '%s'." % \ |
132 | + print_message(" Ignoring package '%s' version '%s' since it " \ |
133 | + "also failed to build on architecture '%s'." % \ |
134 | (spph.package_name, spph.version, also_failed)) |
135 | continue |
136 | |
137 | - for archive, archive_name in archives: |
138 | - # Get list of successful builds for the same package |
139 | - success_list = get_build_list(archive, SUCCESS_BUILD_STATE, |
140 | - spph.package_name) |
141 | - success_spph_list = [] |
142 | - for success_build in success_list: |
143 | - success_spph = SPPH(success_build.current_source_publication) |
144 | - if success_build.arch_tag == fail_arch: |
145 | - success_spph_list.append(success_spph) |
146 | - |
147 | - candidate_success_spph = get_highest_version(success_spph_list) |
148 | - if spph.is_higher_version_than(candidate_success_spph): |
149 | - # Only care about the failed build if it is of a higher |
150 | - # version than any successful build of the same package. |
151 | - pass |
152 | - else: |
153 | - print_message("Failed build for package '%s' version '%s' " \ |
154 | - "overridden by successful build of " \ |
155 | - "version '%s' in archive '%s'." % \ |
156 | - (spph.package_name, spph.version, |
157 | - candidate_success_spph.version, |
158 | - archive_name)) |
159 | - spph = None |
160 | - break |
161 | - |
162 | - if spph is not None: |
163 | - filtered_list.append(spph) |
164 | + success_version = exists_successful_later_build(spph, archives, |
165 | + fail_arch) |
166 | + if success_version is not None: |
167 | + print_message(" Ignoring package '%s' version '%s' since build " \ |
168 | + "of version '%s' succeded." % (spph.package_name, |
169 | + spph.version, |
170 | + success_version)) |
171 | + continue |
172 | + |
173 | + filtered_list.append(spph) |
174 | |
175 | return filtered_list |
176 | |
177 | |
178 | -def get_build_status(archive_names, series_name, fail_platform, ok_platforms): |
179 | - """Download live data from lp. |
180 | +def get_archives(archive_names, series_name): |
181 | + """Get list of archive and series. |
182 | |
183 | Parameters: |
184 | archive_names - List of archives. Eg ['primary', 'rebuild-x'] |
185 | series_name - Name of series. Eg 'natty' |
186 | - fail_platform - Name of target platform. |
187 | - ok_platforms - List of names of platforms which shouldn't have failed. |
188 | |
189 | Returns: |
190 | - A list of SPPH objects representing source packages to file bugs against. |
191 | + A list of (archive, archive_display_name) pairs. |
192 | """ |
193 | - |
194 | - ubuntu = get_launchpad().distributions['ubuntu'] |
195 | - |
196 | archives = [] |
197 | series = None |
198 | + ubuntu = get_launchpad().distributions['ubuntu'] |
199 | for archive_name in archive_names: |
200 | try: |
201 | archive = None |
202 | @@ -242,6 +247,23 @@ |
203 | except HTTPError: |
204 | print 'Error: %s is not a valid archive.' % archive_name |
205 | raise |
206 | + return archives |
207 | + |
208 | + |
209 | +def get_build_status(archive_names, series_name, fail_platform, ok_platforms): |
210 | + """Download live data from lp. |
211 | + |
212 | + Parameters: |
213 | + archive_names - List of archives. Eg ['primary', 'rebuild-x'] |
214 | + series_name - Name of series. Eg 'natty' |
215 | + fail_platform - Name of target platform. |
216 | + ok_platforms - List of names of platforms which shouldn't have failed. |
217 | + |
218 | + Returns: |
219 | + A list of SPPH objects representing source packages to file bugs against. |
220 | + """ |
221 | + |
222 | + archives = get_archives(archive_names, series_name) |
223 | |
224 | print_message("Will be getting builds from archives " + ", ".join( |
225 | name for (archive, name) in archives)) |
226 | |
227 | === modified file 'file-failures.py' |
228 | --- file-failures.py 2011-03-30 07:41:31 +0000 |
229 | +++ file-failures.py 2011-03-30 07:41:31 +0000 |
230 | @@ -13,10 +13,16 @@ |
231 | from data_parsing import ( |
232 | get_tags_for_fail, |
233 | get_build_status, |
234 | + get_archives, |
235 | + exists_successful_later_build, |
236 | + SPPH, |
237 | ) |
238 | from bug_reporting import ( |
239 | file_bug_by_url, |
240 | bug_already_filed_by_url, |
241 | + close_bug, |
242 | + BUG_INVALID_STATUS, |
243 | + BUG_FIXED_STATUS, |
244 | ) |
245 | from config import ( |
246 | init, |
247 | @@ -43,21 +49,23 @@ |
248 | return repr(self.value) |
249 | |
250 | |
251 | +LP_APP_NAME = 'Svammel bug filer' |
252 | + |
253 | + |
254 | parser = get_args_parser() |
255 | args = parser.parse_args() |
256 | set_verbose_messages(args.verbose or args.dryrun) |
257 | |
258 | -init(args.serviceroot, 'testing', '~/.launchpadlib/cache/') |
259 | +init(args.serviceroot, LP_APP_NAME, '~/.launchpadlib/cache/') |
260 | |
261 | # We are interested in build failures for this platform: |
262 | -fail_platform = 'armel' |
263 | +FAIL_PLATFORM = 'armel' |
264 | # unless there's a failed build for any of these: |
265 | -ok_platforms = ['i386', 'amd64'] |
266 | +OK_PLATFORMS = ['i386', 'amd64'] |
267 | |
268 | try: |
269 | interesting_spph = get_build_status(args.archive, args.series, |
270 | - fail_platform, ok_platforms) |
271 | - |
272 | + FAIL_PLATFORM, OK_PLATFORMS) |
273 | except: |
274 | print 'Error: Could not get build status' |
275 | raise |
276 | @@ -71,18 +79,18 @@ |
277 | continue |
278 | |
279 | if bug_log.get_entry(spph.package_name, spph.version, |
280 | - fail_platform) is not None: |
281 | + FAIL_PLATFORM) is not None: |
282 | print_message("Bug already reported for %s version %s. Perhaps it " \ |
283 | "has been closed or retargeted." % \ |
284 | (spph.package_name, spph.version)) |
285 | continue |
286 | |
287 | - build_record = spph.get_arch(fail_platform) |
288 | + build_record = spph.get_arch(FAIL_PLATFORM) |
289 | if build_record is not None: |
290 | build_log = get_file_contents(build_record.build_log_url) |
291 | else: |
292 | print "%s build record not available for %s version %s." % \ |
293 | - (fail_platform, spph.package_name, spph.version) |
294 | + (FAIL_PLATFORM, spph.package_name, spph.version) |
295 | build_log = '' |
296 | bug_tags = get_tags_for_fail(build_log) |
297 | interesting_log_part = get_interesting_part(build_log) |
298 | @@ -105,7 +113,29 @@ |
299 | bug_tags, get_launchpad()) |
300 | if bug is not None and args.logfile is not None: |
301 | bug_log.write_entry( |
302 | - LogEntry(spph.package_name, spph.version, fail_platform, |
303 | - bug.self_link).to_string()) |
304 | + LogEntry(spph.package_name, spph.version, FAIL_PLATFORM, |
305 | + bug.bug_tasks[0].self_link).to_string()) |
306 | else: |
307 | print_message(" ... but didn't since dryrun is specified.") |
308 | + |
309 | +archives = get_archives(args.archive, args.series) |
310 | +for log_entry in bug_log.get_arch(FAIL_PLATFORM): |
311 | + print_message("Checking for successful builds for package " \ |
312 | + "%s version >= %s" % \ |
313 | + (log_entry.package_name, log_entry.package_version)) |
314 | + success_version = exists_successful_later_build( |
315 | + SPPH(log_entry.package_name, log_entry.package_version), |
316 | + archives, FAIL_PLATFORM) |
317 | + if success_version is not None: |
318 | + if not args.dryrun: |
319 | + if success_version == log_entry.package_version: |
320 | + new_status = BUG_INVALID_STATUS |
321 | + else: |
322 | + new_status = BUG_FIXED_STATUS |
323 | + close_bug(log_entry.bugtask_url, new_status, get_launchpad()) |
324 | + else: |
325 | + bug = get_launchpad().load(log_entry.bugtask_url) |
326 | + for bug_task in bug.bug_tasks: |
327 | + print_message(" Had reason to close the bugtask %s but " \ |
328 | + "didn't since dryrun is specified." % \ |
329 | + bug_task.web_link) |
330 | |
331 | === modified file 'tests/test_bug_logging.py' |
332 | --- tests/test_bug_logging.py 2011-03-30 07:41:31 +0000 |
333 | +++ tests/test_bug_logging.py 2011-03-30 07:41:31 +0000 |
334 | @@ -22,6 +22,9 @@ |
335 | ) |
336 | |
337 | |
338 | +TEST_FAIL_ARCH = 'armel' |
339 | + |
340 | + |
341 | class TestLogEntry(TestCase): |
342 | |
343 | def setUp(self): |
344 | @@ -30,7 +33,7 @@ |
345 | def test_log_line_format(self): |
346 | package_name = 'package' |
347 | package_version = '1.0.1' |
348 | - build_arch = 'armel' |
349 | + build_arch = TEST_FAIL_ARCH |
350 | bugtask_url = 'http://dummy' |
351 | log_entry = LogEntry(package_name, package_version, build_arch, |
352 | bugtask_url) |
353 | @@ -42,7 +45,7 @@ |
354 | def test_log_line_matches(self): |
355 | package_name = 'package' |
356 | package_version = '1.0.1' |
357 | - build_arch = 'armel' |
358 | + build_arch = TEST_FAIL_ARCH |
359 | bugtask_url = 'http://dummy' |
360 | log_entry = LogEntry(package_name, package_version, build_arch, |
361 | bugtask_url) |
362 | @@ -53,7 +56,7 @@ |
363 | package_name = 'package' |
364 | package_version_logged = '1.0.1' |
365 | package_version_to_match = '2' |
366 | - build_arch = 'armel' |
367 | + build_arch = TEST_FAIL_ARCH |
368 | bugtask_url = 'http://dummy' |
369 | log_entry = LogEntry(package_name, package_version_logged, build_arch, |
370 | bugtask_url) |
371 | @@ -70,7 +73,7 @@ |
372 | def test_add_entry_does_so(self): |
373 | package_name = 'package' |
374 | package_version = '1.0.1' |
375 | - build_arch = 'armel' |
376 | + build_arch = TEST_FAIL_ARCH |
377 | bugtask_url = 'http://dummy' |
378 | log_entry = LogEntry(package_name, package_version, build_arch, |
379 | bugtask_url) |
380 | @@ -85,7 +88,7 @@ |
381 | def test_stored_log_is_found(self): |
382 | package_name = 'package' |
383 | package_version = '1.0.1' |
384 | - build_arch = 'armel' |
385 | + build_arch = TEST_FAIL_ARCH |
386 | bugtask_url = 'http://dummy' |
387 | log_entry = LogEntry(package_name, package_version, build_arch, |
388 | bugtask_url) |
389 | @@ -107,7 +110,7 @@ |
390 | def test_read_log_entry_from_file(self): |
391 | package_name = 'packagename' |
392 | package_version = '1.1.2' |
393 | - build_arch = 'armel' |
394 | + build_arch = TEST_FAIL_ARCH |
395 | bugtask_url = 'http://dummy' |
396 | log_entry = LogEntry(package_name, package_version, build_arch, |
397 | bugtask_url) |
398 | @@ -125,7 +128,7 @@ |
399 | def test_write_log_entry_to_file(self): |
400 | package_name = 'packagename' |
401 | package_version = '1.2.3-1' |
402 | - build_arch = 'armel' |
403 | + build_arch = TEST_FAIL_ARCH |
404 | bugtask_url = 'http://dummy' |
405 | log_entry = LogEntry(package_name, package_version, build_arch, |
406 | bugtask_url) |
407 | @@ -148,3 +151,34 @@ |
408 | __builtin__, 'open', mock_open)) |
409 | log_file_name = 'logfile' |
410 | self.assertRaises(IOError, create_log, log_file_name) |
411 | + |
412 | + def test_get_arch_returns_empty_list(self): |
413 | + log = Log(None) |
414 | + self.assertEquals([], log.get_arch(TEST_FAIL_ARCH)) |
415 | + |
416 | + def test_get_arch_returns_arch(self): |
417 | + log = Log(None) |
418 | + package_name = 'packagename' |
419 | + package_version = '1.2.3-1' |
420 | + build_arch = TEST_FAIL_ARCH |
421 | + bugtask_url = 'http://dummy' |
422 | + log_entry = LogEntry(package_name, package_version, build_arch, |
423 | + bugtask_url) |
424 | + log.add_entry(log_entry) |
425 | + self.assertEquals([log_entry], log.get_arch(TEST_FAIL_ARCH)) |
426 | + |
427 | + def test_get_arch_returns_several_archs(self): |
428 | + log = Log(None) |
429 | + package_name = 'packagename' |
430 | + package_version = '1.2.3-1' |
431 | + build_arch = TEST_FAIL_ARCH |
432 | + bugtask_url = 'http://dummy' |
433 | + log_entry1 = LogEntry(package_name, package_version, build_arch, |
434 | + bugtask_url) |
435 | + log_entry2 = LogEntry(package_name + 'new', package_version + 'later', |
436 | + build_arch, bugtask_url + '/not/same') |
437 | + log.add_entry(log_entry1) |
438 | + log.add_entry(log_entry2) |
439 | + logs = log.get_arch(TEST_FAIL_ARCH) |
440 | + self.assertIn(log_entry1, logs) |
441 | + self.assertIn(log_entry2, logs) |
442 | |
443 | === modified file 'tests/test_fail_filer.py' |
444 | --- tests/test_fail_filer.py 2011-03-30 07:41:31 +0000 |
445 | +++ tests/test_fail_filer.py 2011-03-30 07:41:31 +0000 |
446 | @@ -74,11 +74,6 @@ |
447 | |
448 | class TestFilter(TestCaseWithFixtures): |
449 | |
450 | - class MockPackage(object): |
- def __init__(self, name, version):
- self.source_package_version = version
- self.source_package_name = name
-
class MockRecord(object):
class CSP:
def __init__(self, name, version):
@@ -113,7 +108,7 @@

def test_filter_fail_record_not_filtered(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packagefail', '1'))
+ spph = SPPH('packagefail', '1')
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
FAILED_BUILD_STATE)
spph.add_build_record(failrecord)
@@ -125,7 +120,7 @@

def test_filter_success_record_filtered(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packagesuccess', '1'))
+ spph = SPPH('packagesuccess', '1')
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE)
spph.add_build_record(failrecord)
@@ -137,7 +132,7 @@

def test_filter_only_other_arch_filtered(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packageotherarch', '1'))
+ spph = SPPH('packageotherarch', '1')
failrecord = self.MockRecord(TEST_OK_ARCHS[0], datetime.now(),
FAILED_BUILD_STATE)
spph.add_build_record(failrecord)
@@ -149,7 +144,7 @@

def test_filter_no_arch_filtered(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packagenobuilds', '1'))
+ spph = SPPH('packagenobuilds', '1')
spph_list = dict()
spph_list[spph.package_name] = spph
filtered_list = filter_spph(spph_list, TEST_FAIL_ARCH, TEST_OK_ARCHS,
@@ -174,7 +169,7 @@
self.init_dummy_config()
packagename = 'packagefailandok'
version = '1'
- fail_spph = SPPH(self.MockPackage(packagename, version))
+ fail_spph = SPPH(packagename, version)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
FAILED_BUILD_STATE, packagename, version)
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
@@ -192,7 +187,7 @@
self.init_dummy_config()
packagename = 'packagename'
version = '1'
- fail_spph = SPPH(self.MockPackage(packagename, version))
+ fail_spph = SPPH(packagename, version)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
FAILED_BUILD_STATE, packagename, version)
successrecord = self.MockRecord(TEST_OK_ARCHS[0], datetime.now(),
@@ -211,9 +206,10 @@
packagename = 'packagename'
low_version = '1'
high_version = '1.2.3'
- fail_spph = SPPH(self.MockPackage(packagename, low_version))
+ fail_spph = SPPH(packagename, low_version)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
- FAILED_BUILD_STATE, packagename, low_version)
+ FAILED_BUILD_STATE, packagename,
+ low_version)
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE, packagename,
high_version)
@@ -230,9 +226,10 @@
packagename = 'packagename'
low_version = '1'
high_version = '1.2.3'
- fail_spph = SPPH(self.MockPackage(packagename, high_version))
+ fail_spph = SPPH(packagename, high_version)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
- FAILED_BUILD_STATE, packagename, high_version)
+ FAILED_BUILD_STATE, packagename,
+ high_version)
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE, packagename,
low_version)
@@ -250,9 +247,10 @@
low_version = '1'
mid_version = '2.1.3-1'
high_version = '3.2.3'
- fail_spph = SPPH(self.MockPackage(packagename, mid_version))
+ fail_spph = SPPH(packagename, mid_version)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
- FAILED_BUILD_STATE, packagename, mid_version)
+ FAILED_BUILD_STATE, packagename,
+ mid_version)
low_successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE, packagename,
low_version)
@@ -298,29 +296,29 @@

def test_create_spph_sets_name(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packagename', '1'))
+ spph = SPPH('packagename', '1')
self.assertEquals(spph.package_name, 'packagename')

def test_create_spph_sets_version(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1.1.2-3'))
+ spph = SPPH('dummy', '1.1.2-3')
self.assertEquals(spph.version, '1.1.2-3')

def test_new_spph_has_no_records(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1.1.2-3'))
+ spph = SPPH('dummy', '1.1.2-3')
self.assertEqual({}, spph.records)

def test_add_build_record_does_so(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
record = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
spph.add_build_record(record)
self.assertEquals(spph.records[TEST_FAIL_ARCH], record)

def test_get_arch_does_so(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
record = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
spph.add_build_record(record)
self.assertEquals(spph.get_arch(TEST_FAIL_ARCH), record)
@@ -328,7 +326,7 @@

def test_add_newer_build_record_replaces(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
oldrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
newrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
spph.add_build_record(oldrecord)
@@ -337,7 +335,7 @@

def test_add_older_build_record_dont_replace(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
oldrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
newrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
spph.add_build_record(newrecord)
@@ -346,7 +344,7 @@

def test_add_other_arch_build_record_dont_replace(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
armrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now())
i386record = self.MockRecord(TEST_OK_ARCHS[0], datetime.now())
spph.add_build_record(armrecord)
@@ -356,12 +354,12 @@

def test_is_not_ftbfs_when_no_record(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
self.assertFalse(spph.is_ftbfs(TEST_FAIL_ARCH))

def test_is_ftbfs_when_failed_record(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
FAILED_BUILD_STATE)
spph.add_build_record(failrecord)
@@ -369,7 +367,7 @@

def test_is_not_ftbfs_when_success_record(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE)
spph.add_build_record(successrecord)
@@ -377,7 +375,7 @@

def test_is_not_ftbfs_when_replaced_record(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
FAILED_BUILD_STATE)
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
@@ -388,7 +386,7 @@

def test_is_ftbfs_when_replaced_record(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('dummy', '1'))
+ spph = SPPH('dummy', '1')
successrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
SUCCESS_BUILD_STATE)
failrecord = self.MockRecord(TEST_FAIL_ARCH, datetime.now(),
@@ -399,61 +397,61 @@

def test_spph_is_higher_version(self):
self.init_dummy_config()
- oldspph = SPPH(self.MockPackage('packageversion', '1'))
- newspph = SPPH(self.MockPackage('packageversion', '2'))
+ oldspph = SPPH('packageversion', '1')
+ newspph = SPPH('packageversion', '2')
self.assertTrue(newspph.is_higher_version_than(oldspph))
self.assertFalse(oldspph.is_higher_version_than(newspph))

def test_spph_equal_version_is_not_higher(self):
self.init_dummy_config()
- oldspph = SPPH(self.MockPackage('packageversion', '1'))
- newspph = SPPH(self.MockPackage('packageversion', '1'))
+ oldspph = SPPH('packageversion', '1')
+ newspph = SPPH('packageversion', '1')
self.assertFalse(newspph.is_higher_version_than(oldspph))

def test_spph_any_version_is_higher_than_none(self):
self.init_dummy_config()
- newspph = SPPH(self.MockPackage('packageversion', '1'))
+ newspph = SPPH('packageversion', '1')
self.assertTrue(newspph.is_higher_version_than(None))

def test_spph_real_life_versions(self):
self.init_dummy_config()
- oldspph = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu1'))
- newspph = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
+ oldspph = SPPH('packageversion', '0.5.32-0ubuntu1')
+ newspph = SPPH('packageversion', '0.5.32-0ubuntu2')
self.assertTrue(newspph.is_higher_version_than(oldspph))

def test_get_highest_version_finds_single_version(self):
self.init_dummy_config()
- spph = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
+ spph = SPPH('packageversion', '0.5.32-0ubuntu2')
spphlist = [spph]
self.assertEquals(spph, get_highest_version(spphlist))

def test_get_highest_version_returns_first_of_same_version(self):
self.init_dummy_config()
- spph_good = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
- spph_bad = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
+ spph_good = SPPH('packageversion', '0.5.32-0ubuntu2')
+ spph_bad = SPPH('packageversion', '0.5.32-0ubuntu2')
spphlist = [spph_good, spph_bad]
self.assertEquals(spph_good, get_highest_version(spphlist))

def test_get_highest_version_returns_highest_version_in_order(self):
self.init_dummy_config()
- spph_good = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu3'))
- spph_bad = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
+ spph_good = SPPH('packageversion', '0.5.32-0ubuntu3')
+ spph_bad = SPPH('packageversion', '0.5.32-0ubuntu2')
spphlist = [spph_good, spph_bad]
self.assertEquals(spph_good, get_highest_version(spphlist))

def test_get_highest_version_returns_highest_version_out_of_order(self):
self.init_dummy_config()
- spph_good = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu3'))
- spph_bad = SPPH(self.MockPackage('packageversion', '0.5.32-0ubuntu2'))
+ spph_good = SPPH('packageversion', '0.5.32-0ubuntu3')
+ spph_bad = SPPH('packageversion', '0.5.32-0ubuntu2')
spphlist = [spph_bad, spph_good]
self.assertEquals(spph_good, get_highest_version(spphlist))

def test_get_highest_version_returns_highest_version_mixed_order(self):
self.init_dummy_config()
- spph_good = SPPH(self.MockPackage('packageversion', '99'))
- spph_bad1 = SPPH(self.MockPackage('packageversion', '1'))
- spph_bad2 = SPPH(self.MockPackage('packageversion', '2'))
- spph_bad3 = SPPH(self.MockPackage('packageversion', '3'))
+ spph_good = SPPH('packageversion', '99')
+ spph_bad1 = SPPH('packageversion', '1')
+ spph_bad2 = SPPH('packageversion', '2')
+ spph_bad3 = SPPH('packageversion', '3')
spphlist = [spph_bad1, spph_bad2, spph_bad1, spph_good, spph_bad3]
self.assertEquals(spph_good, get_highest_version(spphlist))
24 + for bug_task in bug.bug_tasks:
25 + print_message(" Changing status for %s [%s ==> %s]" % \
26 + (bug_task.title, bug_task.status, BUG_CLOSED_STATUS))
Perhaps we should have the --dry-run support here too?
Ah, I see now it is done at a higher level, but that message
doesn't say what bug would be closed. Could we have it print
the ".web_link" of each task that would be closed?
I have no problems with the code, it looks good to me.
I do wonder about whether this should be a separate script (--force-close
can be), but I wonder if just having the script in cron clean up
automatically would be better. Or would you imagine having both scripts
in cron?
Also, closing bugs doesn't alter the log file; will that lead to
missing bugs if it goes fail->success->fail?
Thanks,
James