Merge lp:~gl-az/percona-xtrabackup/2.0-testrun-summary into lp:percona-xtrabackup/2.0

Proposed by George Ormond Lorch III
Status: Superseded
Proposed branch: lp:~gl-az/percona-xtrabackup/2.0-testrun-summary
Merge into: lp:percona-xtrabackup/2.0
Diff against target: 187 lines (+62/-14)
2 files modified
test/testrun.c (+49/-11)
test/testrun.sh (+13/-3)
To merge this branch: bzr merge lp:~gl-az/percona-xtrabackup/2.0-testrun-summary
Reviewer Review Type Date Requested Status
Laurynas Biveinis (community) Approve
Registry Administrators Pending
Review via email: mp+167355@code.launchpad.net

This proposal supersedes a proposal from 2013-06-03.

This proposal has been superseded by a proposal from 2013-06-05.

Description of the change

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what failed at a glance, similar to MTR.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote : Posted in a previous version of this proposal

Doesn't this report the skipped testcases (exit code == 200) as the failed ones?

If yes, that wouldn't be entirely correct; I'd report failed and skipped testcases separately.

Typo on diff line 32.

review: Needs Fixing
Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

Good catch, Laurynas, I didn't think about the skipped cases in this context. Something else worth considering is making the testrun behave more like the suite and MTR in that it would stop on the first failure and return that error code, which could be overridden by a -f flag, and maybe add some retry logic.

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote : Posted in a previous version of this proposal

It looks that now the skipped cases will be listed twice: both as skipped (exit == 200), and as failed (exit != 0).

review: Needs Fixing
Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

Oy, Doh! That's what I get for trying to do n+1 things at once where 'n'
is my max capacity for parallelism...

On 6/3/2013 9:34 PM, Laurynas Biveinis wrote:
> Review: Needs Fixing
>
> It looks that now the skipped cases will be listed twice: both as skipped (exit == 200), and as failed (exit != 0).
>

--
George O. Lorch III
Software Engineer, Percona
+1-888-401-3401 x542 US/Arizona (GMT -7)
skype: george.ormond.lorch.iii

Revision history for this message
George Ormond Lorch III (gl-az) wrote : Posted in a previous version of this proposal

OK Laurynas, as we discussed on IRC I did some more testing:
Created a new 'suite' called 'u' which exists as a sibling directory to the current 't' (test/u).
Added 3 simple tests:

$ cat u/test_pass.sh
. inc/common.sh
sleep 5
exit 0

$ cat u/test_fail.sh
. inc/common.sh
sleep 5
exit 1

$ cat u/test_skip.sh
. inc/common.sh
sleep 5
exit $SKIPPED_EXIT_CODE

Then ran the suite:
$ ./testrun -j 1 -s u

The first thing I ran into was that all tests were failing with 255. Why? Well, testrun.c doesn't handle alternate suites correctly. It will find the tests in the correct location, but when it goes to execute them, it tries to execute them out of ./t/<testname> instead of ./<suitename>/<testname>. This was a simple fix, so I made it and updated the branch with this fix.

With that working, I ran again and realized that testrun.c launches all tests individually through testrun.sh. testrun.sh only returns success (0) or failure (-1/255), nothing else, because it may have run multiple tests, so it can only report 'all succeeded or skipped' or 'something failed'. Given this, in order to report skipped tests, we will need to modify testrun.sh to pass the result from the test directly out if only one test was specified; otherwise, use the old method.

So, that being said, we have two choices here:
1) We just go back to my original work where we were really only interested in reporting true failures and accepting skipped tests as passed.
2) Change testrun.sh to return the true result when only one test was specified.

Opinions?

561. By George Ormond Lorch III

Fixed an issue in testrun.c where specifying a test suite would find the tests in the correct directory but would still try to execute them out of ./t

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what passed, skipped and failed at a glance, similar to MTR.

Changed testrun.sh slightly to return the same exit code as the test that was executed when only a single test was run.

Revision history for this message
George Ormond Lorch III (gl-az) wrote :

So I went ahead and took a peek into testrun.sh to see about returning the true test return status when only one test was being run, and it turned out to be only a few, somewhat ugly, lines of code, but it works now. I also found and fixed the logic error that I think Laurynas was getting at during our IRC conversation. This branch now seems to work and detects all cases correctly:

$ ./testrun -j 1 -s u
./testrun running with: -s u -j 1 -t 600

Found 3 testcases
[0] LAUNCHING - test_fail.sh
[0] completed test_fail.sh status 1
[0] LAUNCHING - test_pass.sh

nrchildren= 1, 1 tests remaining
Running: test_pass.sh
[0] completed test_pass.sh status 0
[0] LAUNCHING - test_skip.sh

nrchildren= 1, 0 tests remaining
Running: test_skip.sh
[0] completed test_skip.sh status 200

nrchildren= 0, 0 tests remaining
Running:
SUMMARY: 3 run, 1 successful, 1 skipped, 1 failed
Skipped cases: test_skip.sh
Failing cases: test_fail.sh

If this all looks acceptable, I will go ahead and merge forward to 2.1 as well and resubmit the MP there.

Revision history for this message
Stewart Smith (stewart) wrote :

George Ormond Lorch III <email address hidden> writes:
> SUMMARY: 3 run, 1 successful, 1 skipped, 1 failed
> Skipped cases: test_skip.sh
> Failing cases: test_fail.sh

I like the summary. While we could no doubt do the same thing by parsing
the subunit output, I don't think that's needed (and is much more
complex, or involves having a dependency on some random python utility).

--
Stewart Smith

Revision history for this message
Laurynas Biveinis (laurynas-biveinis) :
review: Approve

Unmerged revisions

561. By George Ormond Lorch III

Fixed an issue in testrun.c where specifying a test suite would find the tests in the correct directory but would still try to execute them out of ./t

Added test status tracking and a summary report at the end of the test run cycle to make it easy to see what passed, skipped and failed at a glance, similar to MTR.

Changed testrun.sh slightly to return the same exit code as the test that was executed when only a single test was run.

Preview Diff

=== modified file 'test/testrun.c'
--- test/testrun.c	2012-11-30 04:28:06 +0000
+++ test/testrun.c	2013-06-05 21:04:26 +0000
@@ -41,6 +41,7 @@
 	char *buf;
 	size_t end_of_buf;
 	size_t bufsz;
+	int status;
 };
 
 pid_t *childpid;
@@ -96,6 +97,7 @@
 		(*cases)[i].buf= NULL;
 		(*cases)[i].end_of_buf= 0;
 		(*cases)[i].bufsz= 0;
+		(*cases)[i].status= 0;
 		free(namelist[i]);
 	}
 
@@ -113,7 +115,41 @@
 	free(cases);
 }
 
-static int run_testcase_in_child(int nr, struct testcase *t, pid_t *cpid, const char* xbtarget)
+static void report_status(struct testcase *cases, int n)
+{
+	int i;
+	int failed=0;
+	int skipped=0;
+	for (i=0; i<n; i++)
+		if (WEXITSTATUS(cases[i].status) != 0)
+			if (WEXITSTATUS(cases[i].status) == 200)
+				skipped++;
+			else
+				failed++;
+
+	printf("SUMMARY: %d run, %d successful, %d skipped, %d failed\n", n, n - failed - skipped, skipped, failed);
+	if (skipped)
+	{
+		printf("Skipped cases:");
+		for (i=0; i<n; i++)
+			if (WEXITSTATUS(cases[i].status) == 200)
+				printf(" %s", cases[i].name);
+		printf("\n");
+	}
+
+	if (failed)
+	{
+		printf("Failing cases:");
+		for (i=0; i<n; i++)
+			if (WEXITSTATUS(cases[i].status) != 0 && WEXITSTATUS(cases[i].status) != 200)
+				printf(" %s", cases[i].name);
+		printf("\n");
+	}
+}
+
+static int run_testcase_in_child(const char* suitedir, int nr,
+				 struct testcase *t, pid_t *cpid,
+				 const char* xbtarget)
 {
 	int fd[2];
 
@@ -132,7 +168,7 @@
 	close(fd[0]);
 
 	char tname[500];
-	snprintf(tname, sizeof(tname), "t/%s",t->name);
+	snprintf(tname, sizeof(tname), "%s/%s",suitedir, t->name);
 
 	char basedir[PATH_MAX];
 	char cwd[PATH_MAX];
@@ -195,14 +231,14 @@
 	free(buf);
 }
 
-static void run_testcases(struct testcase *testcases, int nrcases,
-			  int njobs, int timeout, const char* xbtarget)
+static void run_testcases(const char* suitedir, struct testcase *testcases,
+			  int nrcases, int njobs, int timeout,
+			  const char* xbtarget)
 {
 	int childfd[njobs];
 	int nfds= 0;
 	int retval;
 	pid_t chpid[njobs];
-	int status;
 	int next_testcase= 0;
 	int i;
 	fd_set rfds;
@@ -223,7 +259,7 @@
 	for(i=0; i<njobs; i++)
 	{
 		childtest[i]=next_testcase++;
-		childfd[i]= run_testcase_in_child(i, &testcases[childtest[i]], &childpid[i], xbtarget);
+		childfd[i]= run_testcase_in_child(suitedir, i, &testcases[childtest[i]], &childpid[i], xbtarget);
 	}
 
 	fflush(stdout);
@@ -277,8 +313,8 @@
 			t->end_of_buf+= r;
 		} while(r>0);
 
-		pid_t waited= waitpid(childpid[i], &status, WNOHANG);
-		if (!(WIFEXITED(status) || WIFSIGNALED(status)))
+		pid_t waited= waitpid(childpid[i], &testcases[childtest[i]].status, WNOHANG);
+		if (!(WIFEXITED(testcases[childtest[i]].status) || WIFSIGNALED(testcases[childtest[i]].status)))
 			continue;
 
 		if (waited != childpid[i])
@@ -289,14 +325,14 @@
 
 		close(childfd[i]);
 		printf("[%d] completed %s status %d\n",
-		       i, testcases[childtest[i]].name, WEXITSTATUS(status));
+		       i, testcases[childtest[i]].name, WEXITSTATUS(testcases[childtest[i]].status));
 		childfd[i]=-1;
 		nchildren--;
 
 		if (next_testcase < nrcases)
 		{
 			childtest[i]=next_testcase++;
-			childfd[i]= run_testcase_in_child(i, &testcases[childtest[i]], &childpid[i], xbtarget);
+			childfd[i]= run_testcase_in_child(suitedir, i, &testcases[childtest[i]], &childpid[i], xbtarget);
 			nfds= (childfd[i] > nfds)? childfd[i] : nfds;
 			nchildren++;
 		}
@@ -397,7 +433,9 @@
 
 	printf("Found %d testcases\n", nrcases);
 
-	run_testcases(testcases, nrcases, njobs, timeout, xbtarget);
+	run_testcases(suitedir, testcases, nrcases, njobs, timeout, xbtarget);
+
+	report_status(testcases, nrcases);
 
 	free_testcases(testcases, nrcases);
 
=== modified file 'test/testrun.sh'
--- test/testrun.sh	2013-05-07 07:53:25 +0000
+++ test/testrun.sh	2013-06-05 21:04:26 +0000
@@ -284,6 +284,7 @@
 
 failed_count=0
 failed_tests=
+skipped_count=0
 total_count=0
 
 export OUTFILE="$PWD/results/setup"
@@ -337,6 +338,7 @@
 	test -r $SKIPPED_REASON && sreason=`cat $SKIPPED_REASON`
 	echo "[skipped] $sreason"
 	subunit_skip_test $t >> $SUBUNIT_OUT
+	skipped_count=`expr $skipped_count + 1`
     else
 	echo "[failed]"
 
@@ -348,7 +350,7 @@
 
 	failed_count=$((failed_count+1))
 	failed_tests="$failed_tests $t"
-	result=1
+	result=$rc
 	if [ -z "$force" ]
 	then
 	    break;
@@ -359,10 +361,18 @@
 echo "========================================================================"
 echo
 
-if [ $result -eq 1 ]
+if [ $result -ne 0 ]
 then
     echo
     echo "$failed_count/$total_count tests have failed: $failed_tests"
     echo "See results/ for detailed output"
-    exit -1
+    if [ $total_count -gt 1 ]
+    then
+	exit -1
+    else
+	exit $result
+    fi
+elif [ $total_count -eq 1 ] && [ $skipped_count -eq 1 ]
+then
+    exit $SKIPPED_EXIT_CODE
 fi

Subscribers

People subscribed via source and target branches