Merge lp:~gl-az/percona-xtrabackup/2.0-testrun-summary into lp:percona-xtrabackup/2.0

Proposed by George Ormond Lorch III
Status: Superseded
Proposed branch: lp:~gl-az/percona-xtrabackup/2.0-testrun-summary
Merge into: lp:percona-xtrabackup/2.0
Diff against target: 187 lines (+62/-14)
2 files modified
test/testrun.c (+49/-11)
test/testrun.sh (+13/-3)
To merge this branch: bzr merge lp:~gl-az/percona-xtrabackup/2.0-testrun-summary
Reviewer Review Type Date Requested Status
Laurynas Biveinis (community) Approve
Registry Administrators Pending
Review via email: mp+167355@code.launchpad.net

This proposal supersedes a proposal from 2013-06-03.

This proposal has been superseded by a proposal from 2013-06-05.

Description of the change

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what failed at a glance, similar to MTR.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote : Posted in a previous version of this proposal

Doesn't this report the skipped testcases (exit code == 200) as the failed ones?

If yes, that wouldn't be entirely correct; I'd report failed and skipped testcases separately.
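
For illustration, the distinction could be drawn on the exit code itself. A minimal sketch in testrun.sh terms, assuming the suite's convention that $SKIPPED_EXIT_CODE is 200 (the snippet is only illustrative, not the branch code):

rc=$?                                      # exit status of the testcase just run
if [ $rc -eq 0 ]; then
    echo "[passed] $t"
elif [ $rc -eq $SKIPPED_EXIT_CODE ]; then
    echo "[skipped] $t"                    # skipped, not a real failure
else
    echo "[failed] $t with status $rc"
fi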

Typo on diff line 32.

review: Needs Fixing
Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

Good catch, Laurynas; I didn't think about the skipped cases in this context. Something else worth considering is making the testrun behave more like the suite and MTR, in that it would stop on the first failure and return that error code (overridable with -f), and maybe add some retry logic.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote : Posted in a previous version of this proposal

It looks like the skipped cases will now be listed twice: both as skipped (exit == 200) and as failed (exit != 0).
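
In other words, when the two checks are independent, a skipped case satisfies both. A minimal sketch of the pitfall and the fix, expressed with testrun.sh-style counters (the branch does the equivalent in testrun.c's report_status()):

# Buggy: rc == 200 matches both conditions, so the case is counted twice.
#   [ $rc -ne 0 ] && failed_count=$((failed_count+1))
#   [ $rc -eq $SKIPPED_EXIT_CODE ] && skipped_count=$((skipped_count+1))

# Fixed: classify each case exactly once.
if [ $rc -eq $SKIPPED_EXIT_CODE ]; then
    skipped_count=$((skipped_count+1))
elif [ $rc -ne 0 ]; then
    failed_count=$((failed_count+1))
fi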

review: Needs Fixing
Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

Oy, Doh! That's what I get for trying to do n+1 things at once where 'n'
is my max capacity for parallelism...

On 6/3/2013 9:34 PM, Laurynas Biveinis wrote:
> Review: Needs Fixing
>
> It looks that now the skipped cases will be listed twice: both as skipped (exit == 200), and as failed (exit != 0).
>

--
George O. Lorch III
Software Engineer, Percona
+1-888-401-3401 x542 US/Arizona (GMT -7)
skype: george.ormond.lorch.iii

Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

OK Laurynas, as we discussed on IRC I did some more testing:
Created a new 'suite' called 'u' which exists as a sibling directory to the current 't' (test/u).
Added 3 simple tests:

$ cat u/test_pass.sh
. inc/common.sh
sleep 5
exit 0

$ cat u/test_fail.sh
. inc/common.sh
sleep 5
exit 1

$ cat u/test_skip.sh
. inc/common.sh
sleep 5
exit $SKIPPED_EXIT_CODE

Then ran the suite:
$ ./testrun -j 1 -s u

The first thing I ran into was that all tests were failing with 255. Why? Well, testrun.c doesn't handle alternate suites correctly. It finds the tests in the correct location, but when it goes to execute them, it tries to execute them out of ./t/<testname> instead of ./<suitename>/<testname>. This was a simple fix, so I made it and updated the branch with it.

With that working, I ran again and realized that testrun.c launches all tests individually through testrun.sh. testrun.sh only returns success (0) or failure (-1/255), nothing else, because it may have run multiple tests, so it can only report 'all succeeded or skipped' or 'something failed'. Given that, in order to report skipped tests, we will need to modify testrun.sh to pass the result from the test directly out if only one test was specified, and otherwise use the old method.

So, that being said, we have two choices here:
1) Go back to my original work, where we were really only interested in reporting true failures, and accept skipped tests as passed.
2) Change testrun.sh to return the true result when only one test was specified (sketched below).

Opinions?
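
For concreteness, option 2 could end up looking roughly like this at the end of testrun.sh, assuming a new $skipped_count counter alongside the existing $result, $total_count and $SKIPPED_EXIT_CODE variables (the testrun.sh hunk in the Preview Diff below is the authoritative version):

if [ $result -ne 0 ]; then
    if [ $total_count -gt 1 ]; then
        exit -1                        # batch run: only 'something failed'
    else
        exit $result                   # single test: propagate its real status
    fi
elif [ $total_count -eq 1 ] && [ $skipped_count -eq 1 ]; then
    exit $SKIPPED_EXIT_CODE            # single skipped test: propagate the skip
fi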

561. By George Ormond Lorch III

Fixed issue in testrun.c where specifying a testsuite would find the tests in the correct directory but would still try to execute them out of ./t

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what passed, skipped and failed at a glance, similar to MTR.

Changed testrun.sh slightly to return the same code as the test that was executed when only a single test was run.

Revision history for this message
George Ormond Lorch III (gl-az) wrote :

So I went ahead and took a peek into testrun.sh to see about returning the true test return status when only one test is being run, and it turned out to take only a few, somewhat ugly, lines of code, but it works now. I also found and fixed the logic error that I think Laurynas was getting at during the IRC conversation. This branch now seems to work and detect all cases correctly:

$ ./testrun -j 1 -s u
./testrun running with: -s u -j 1 -t 600

Found 3 testcases
[0] LAUNCHING - test_fail.sh
[0] completed test_fail.sh status 1
[0] LAUNCHING - test_pass.sh

nrchildren= 1, 1 tests remaining
Running: test_pass.sh
[0] completed test_pass.sh status 0
[0] LAUNCHING - test_skip.sh

nrchildren= 1, 0 tests remaining
Running: test_skip.sh
[0] completed test_skip.sh status 200

nrchildren= 0, 0 tests remaining
Running:
SUMMARY: 3 run, 1 successful, 1 skipped, 1 failed
Skipped cases: test_skip.sh
Failing cases: test_fail.sh

If this all looks acceptable, I will go ahead and merge forward to 2.1 as well and resubmit the MP there.

Revision history for this message
Stewart Smith (stewart) wrote :

George Ormond Lorch III <email address hidden> writes:
> SUMMARY: 3 run, 1 successful, 1 skipped, 1 failed
> Skipped cases: test_skip.sh
> Failing cases: test_fail.sh

I like the summary. While we could no doubt do the same thing by parsing
the subunit output, I don't think that's needed (and is much more
complex, or involves having a dependency on some random python utility).

--
Stewart Smith

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) :
review: Approve

Unmerged revisions

561. By George Ormond Lorch III

Fixed issue in testrun.c where specifying a testsuite would find the tests in the correct directory but would still try to execute them out of ./t

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what passed, skipped and failed at a glance, similar to MTR.

Changed testrun.sh slightly to return the same code as the test that was executed when only a single test was run.

Preview Diff

1=== modified file 'test/testrun.c'
2--- test/testrun.c 2012-11-30 04:28:06 +0000
3+++ test/testrun.c 2013-06-05 21:04:26 +0000
4@@ -41,6 +41,7 @@
5 char *buf;
6 size_t end_of_buf;
7 size_t bufsz;
8+ int status;
9 };
10
11 pid_t *childpid;
12@@ -96,6 +97,7 @@
13 (*cases)[i].buf= NULL;
14 (*cases)[i].end_of_buf= 0;
15 (*cases)[i].bufsz= 0;
16+ (*cases)[i].status= 0;
17 free(namelist[i]);
18 }
19
20@@ -113,7 +115,41 @@
21 free(cases);
22 }
23
24-static int run_testcase_in_child(int nr, struct testcase *t, pid_t *cpid, const char* xbtarget)
25+static void report_status(struct testcase *cases, int n)
26+{
27+ int i;
28+ int failed=0;
29+ int skipped=0;
30+ for (i=0; i<n; i++)
31+ if (WEXITSTATUS(cases[i].status) != 0)
32+ if (WEXITSTATUS(cases[i].status) == 200)
33+ skipped++;
34+ else
35+ failed++;
36+
37+ printf("SUMMARY: %d run, %d successful, %d skipped, %d failed\n", n, n - failed - skipped, skipped, failed);
38+ if (skipped)
39+ {
40+ printf("Skipped cases:");
41+ for (i=0; i<n; i++)
42+ if (WEXITSTATUS(cases[i].status) == 200)
43+ printf(" %s", cases[i].name);
44+ printf("\n");
45+ }
46+
47+ if (failed)
48+ {
49+ printf("Failing cases:");
50+ for (i=0; i<n; i++)
51+ if (WEXITSTATUS(cases[i].status) != 0 && WEXITSTATUS(cases[i].status) != 200)
52+ printf(" %s", cases[i].name);
53+ printf("\n");
54+ }
55+}
56+
57+static int run_testcase_in_child(const char* suitedir, int nr,
58+ struct testcase *t, pid_t *cpid,
59+ const char* xbtarget)
60 {
61 int fd[2];
62
63@@ -132,7 +168,7 @@
64 close(fd[0]);
65
66 char tname[500];
67- snprintf(tname, sizeof(tname), "t/%s",t->name);
68+ snprintf(tname, sizeof(tname), "%s/%s",suitedir, t->name);
69
70 char basedir[PATH_MAX];
71 char cwd[PATH_MAX];
72@@ -195,14 +231,14 @@
73 free(buf);
74 }
75
76-static void run_testcases(struct testcase *testcases, int nrcases,
77- int njobs, int timeout, const char* xbtarget)
78+static void run_testcases(const char* suitedir, struct testcase *testcases,
79+ int nrcases, int njobs, int timeout,
80+ const char* xbtarget)
81 {
82 int childfd[njobs];
83 int nfds= 0;
84 int retval;
85 pid_t chpid[njobs];
86- int status;
87 int next_testcase= 0;
88 int i;
89 fd_set rfds;
90@@ -223,7 +259,7 @@
91 for(i=0; i<njobs; i++)
92 {
93 childtest[i]=next_testcase++;
94- childfd[i]= run_testcase_in_child(i, &testcases[childtest[i]], &childpid[i], xbtarget);
95+ childfd[i]= run_testcase_in_child(suitedir, i, &testcases[childtest[i]], &childpid[i], xbtarget);
96 }
97
98 fflush(stdout);
99@@ -277,8 +313,8 @@
100 t->end_of_buf+= r;
101 } while(r>0);
102
103- pid_t waited= waitpid(childpid[i], &status, WNOHANG);
104- if (!(WIFEXITED(status) || WIFSIGNALED(status)))
105+ pid_t waited= waitpid(childpid[i], &testcases[childtest[i]].status, WNOHANG);
106+ if (!(WIFEXITED(testcases[childtest[i]].status) || WIFSIGNALED(testcases[childtest[i]].status)))
107 continue;
108
109 if (waited != childpid[i])
110@@ -289,14 +325,14 @@
111
112 close(childfd[i]);
113 printf("[%d] completed %s status %d\n",
114- i, testcases[childtest[i]].name, WEXITSTATUS(status));
115+ i, testcases[childtest[i]].name, WEXITSTATUS(testcases[childtest[i]].status));
116 childfd[i]=-1;
117 nchildren--;
118
119 if (next_testcase < nrcases)
120 {
121 childtest[i]=next_testcase++;
122- childfd[i]= run_testcase_in_child(i, &testcases[childtest[i]], &childpid[i], xbtarget);
123+ childfd[i]= run_testcase_in_child(suitedir, i, &testcases[childtest[i]], &childpid[i], xbtarget);
124 nfds= (childfd[i] > nfds)? childfd[i] : nfds;
125 nchildren++;
126 }
127@@ -397,7 +433,9 @@
128
129 printf("Found %d testcases\n", nrcases);
130
131- run_testcases(testcases, nrcases, njobs, timeout, xbtarget);
132+ run_testcases(suitedir, testcases, nrcases, njobs, timeout, xbtarget);
133+
134+ report_status(testcases, nrcases);
135
136 free_testcases(testcases, nrcases);
137
138
139=== modified file 'test/testrun.sh'
140--- test/testrun.sh 2013-05-07 07:53:25 +0000
141+++ test/testrun.sh 2013-06-05 21:04:26 +0000
142@@ -284,6 +284,7 @@
143
144 failed_count=0
145 failed_tests=
146+skipped_count=0
147 total_count=0
148
149 export OUTFILE="$PWD/results/setup"
150@@ -337,6 +338,7 @@
151 test -r $SKIPPED_REASON && sreason=`cat $SKIPPED_REASON`
152 echo "[skipped] $sreason"
153 subunit_skip_test $t >> $SUBUNIT_OUT
154+ skipped_count=`expr $skipped_count + 1`
155 else
156 echo "[failed]"
157
158@@ -348,7 +350,7 @@
159
160 failed_count=$((failed_count+1))
161 failed_tests="$failed_tests $t"
162- result=1
163+ result=$rc
164 if [ -z "$force" ]
165 then
166 break;
167@@ -359,10 +361,18 @@
168 echo "========================================================================"
169 echo
170
171-if [ $result -eq 1 ]
172+if [ $result -ne 0 ]
173 then
174 echo
175 echo "$failed_count/$total_count tests have failed: $failed_tests"
176 echo "See results/ for detailed output"
177- exit -1
178+ if [ $total_count -gt 1 ]
179+ then
180+ exit -1
181+ else
182+ exit $result
183+ fi
184+elif [ $total_count -eq 1 ] && [ $skipped_count -eq 1 ]
185+then
186+ exit $SKIPPED_EXIT_CODE
187 fi
