Merge ~pwlars/checkbox-ng:get-test-outcome-stats into checkbox-ng:master

Proposed by Paul Larson
Status: Merged
Approved by: Paul Larson
Approved revision: b26dca3e411eafbf18383a221a074c9c5969322a
Merged at revision: 917b76e1e5095b147a77cd4055818495c25c63b0
Proposed branch: ~pwlars/checkbox-ng:get-test-outcome-stats
Merge into: checkbox-ng:master
Diff against target: 65 lines (+27/-10)
2 files modified
plainbox/impl/providers/exporters/data/junit.xml (+7/-10)
plainbox/impl/session/state.py (+20/-0)
Reviewer              Status
Sylvain Pineau        Needs Information
Maciej Kisielewski    Approve
Review via email: mp+344338@code.launchpad.net

Description of the change

It seems that the stats we generate for junit don't match the stats we generate for the c3 submission, because the junit stats do not filter out resource and attachment jobs. This change restricts the junit output to actual test jobs only, and adds a new helper method in state.py to make it easy to get the right set of stats for actual tests.
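The filtering logic at the heart of the change can be sketched in isolation. This is a simplified model only: it uses plain dicts in place of the real SessionState/JobState objects, and the job ids are made up for illustration.

```python
import collections

# Simplified stand-ins for plainbox job state entries: each record carries the
# job's plugin type and its result outcome (None means the job never ran).
job_state_map = {
    "test-pass": {"plugin": "shell", "outcome": "pass"},
    "test-fail": {"plugin": "shell", "outcome": "fail"},
    "attach-log": {"plugin": "attachment", "outcome": "pass"},
    "resource-info": {"plugin": "resource", "outcome": "pass"},
    "never-ran": {"plugin": "shell", "outcome": None},
}

def get_test_outcome_stats(job_state_map):
    """Count outcomes, skipping resource/attachment jobs and jobs without a result."""
    stats = collections.defaultdict(int)
    for job_id, state in job_state_map.items():
        if not state["outcome"] or state["plugin"] in ("resource", "attachment"):
            continue
        stats[state["outcome"]] += 1
    return stats

print(dict(get_test_outcome_stats(job_state_map)))  # {'pass': 1, 'fail': 1}
```

With the filter in place, the attachment and resource jobs no longer inflate the totals, which is what brings the junit numbers in line with the c3 submission.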

Revision history for this message
Paul Larson (pwlars) wrote :

I almost forgot to mention... I tested this by creating a small dummy provider with 2 simple tests (one pass, one fail) and 2 attachment jobs (1 pass, 1 fail). Without the patch, I see 4 tests with 2 failures. With the patch, I only see the 2 actual tests, with 1 failure.
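A dummy provider like the one described above could look roughly like this (a hypothetical units.pxu sketch; the job ids and commands are made up for illustration):

```
unit: job
id: dummy/pass
plugin: shell
command: true

unit: job
id: dummy/fail
plugin: shell
command: false

unit: job
id: dummy/attach-pass
plugin: attachment
command: cat /proc/version

unit: job
id: dummy/attach-fail
plugin: attachment
command: cat /nonexistent
```

Running such a provider before and after the patch makes the change in the junit totals easy to see.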

Revision history for this message
Maciej Kisielewski (kissiel) wrote :

Very nice! +1

review: Approve
Revision history for this message
Sylvain Pineau (sylvain-pineau) wrote :

I'd keep resources and attachments in the big picture, as they can fail too (generating fewer jobs, for example, or leading to other tricky side effects). This patchset ignores/filters them, and I don't think that's a good idea if we are basing automated reviews exclusively on "real" jobs.

Where should we tweak the reports to add the missing resource/attachment results while keeping the existing junit.xml?

review: Needs Information
Revision history for this message
Paul Larson (pwlars) wrote :

The problem isn't with the results summary: that uses the submission.json, and results for attachment/resource jobs already don't show up in the submission.json. The problem I was trying to solve is that when you look at c3 results and at Jenkins results for the same test run, they often differ greatly in the number of tests executed, failed, etc. The goal is to make those numbers match so that it's less confusing. I don't think Jenkins (nor junit) gives us a good way to set up another class of tests that gets filtered from the main results.

Preview Diff

diff --git a/plainbox/impl/providers/exporters/data/junit.xml b/plainbox/impl/providers/exporters/data/junit.xml
index 2a7e64a..2ac1115 100644
--- a/plainbox/impl/providers/exporters/data/junit.xml
+++ b/plainbox/impl/providers/exporters/data/junit.xml
@@ -1,15 +1,13 @@
 {%- set state = manager.default_device_context.state -%}
 {%- set job_state_map = state.job_state_map -%}
-{%- set passes = manager.state.get_outcome_stats()["pass"] -%}
-{%- set fails = manager.state.get_outcome_stats()["fail"] -%}
-{%- set skips = manager.state.get_outcome_stats()["skip"] +
-    manager.state.get_outcome_stats()["not-supported"] -%}
-{%- set errors = manager.state.get_outcome_stats()["crash"] -%}
+{%- set passes = manager.state.get_test_outcome_stats()["pass"] -%}
+{%- set fails = manager.state.get_test_outcome_stats()["fail"] -%}
+{%- set skips = manager.state.get_test_outcome_stats()["skip"] +
+    manager.state.get_test_outcome_stats()["not-supported"] -%}
+{%- set errors = manager.state.get_test_outcome_stats()["crash"] -%}
 <?xml version="1.0" encoding="UTF-8"?>
-  <testsuites failures="{{ fails }}" name="" tests="{{ passes }}" skipped="{{ skips }}" errors="{{ errors }}">
-  {%- for job_id in job_state_map|sort %}
-  {%- set job_state = job_state_map[job_id] %}
-  {%- if job_state.result.outcome %}
+  <testsuites failures="{{ fails }}" name="" tests="{{ passes+fails+skips+errors }}" skipped="{{ skips }}" errors="{{ errors }}">
+  {%- for job_id, job_state in job_state_map|dictsort if job_state.result.outcome != None and job_state.job.plugin not in ("resource", "attachment") %}
     <testcase classname="{{ job_state.job.id }}" name="{{ job_state.job.id }}" time="{{ job_state.result.execution_duration }}">
     {%- if job_state.result.outcome == 'skip' or job_state.result.outcome == 'not-supported' -%}
       <skipped />
@@ -23,6 +21,5 @@
       </error>
     {%- endif %}
   </testcase>
-  {%- endif -%}
   {%- endfor %}
 </testsuites>
diff --git a/plainbox/impl/session/state.py b/plainbox/impl/session/state.py
index 9993689..63238e4 100644
--- a/plainbox/impl/session/state.py
+++ b/plainbox/impl/session/state.py
@@ -1297,6 +1297,26 @@ class SessionState:
             stats[job_state.result.outcome] += 1
         return stats
 
+    def get_test_outcome_stats(self):
+        """
+        Process the JobState map to get stats about the job outcomes
+        excluding attachment and resource jobs.
+
+        :returns:
+            a mapping of "outcome": "total" key/value pairs
+
+        .. note::
+            Only the outcomes seen during this session are reported, not all
+            possible values (such as crash, not implemented, ...).
+        """
+        stats = collections.defaultdict(int)
+        for job_id, job_state in self.job_state_map.items():
+            if (not job_state.result.outcome or
+                    job_state.job.plugin in ("resource", "attachment")):
+                continue
+            stats[job_state.result.outcome] += 1
+        return stats
+
     @property
     def category_map(self):
         """Map from category id to their corresponding translated names."""
