Merge lp:~afrantzis/lava-test/alf-testdefs into lp:lava-test/0.0

Proposed by Alexandros Frantzis
Status: Rejected
Rejected by: Neil Williams
Proposed branch: lp:~afrantzis/lava-test/alf-testdefs
Merge into: lp:lava-test/0.0
Diff against target: 375 lines (+319/-1)
10 files modified
abrek/test_definitions/average_parser.py (+73/-0)
abrek/test_definitions/clutter-eglx-es20.py (+33/-0)
abrek/test_definitions/es2gears.py (+13/-0)
abrek/test_definitions/glmark2-es2.py (+40/-0)
abrek/test_definitions/glmemperf.py (+13/-0)
abrek/test_definitions/gtkperf.py (+3/-1)
abrek/test_definitions/qgears.py (+21/-0)
abrek/test_definitions/render-bench.py (+33/-0)
abrek/test_definitions/timed_test_runner.py (+51/-0)
abrek/test_definitions/x11perf.py (+39/-0)
To merge this branch: bzr merge lp:~afrantzis/lava-test/alf-testdefs
Reviewers:
  Zygmunt Krynicki (community): Needs Information
  Paul Larson: Pending
Review via email: mp+36990@code.launchpad.net

Description of the change

Test definitions for Linaro User Platforms graphics-related benchmarks.

Revision history for this message
Paul Larson (pwlars) wrote :

I haven't had time to fully look at all of these, but here are a few observations on what I've seen so far:
First off, the clutter tests that had been segfaulting for me previously are no longer doing that. Something, somewhere, got fixed already. Yay!
It would also be much easier to review these separately, if at all possible; that way, some of the simpler ones can get in quickly while we deal with issues in the others.

1 === added file 'abrek/test_definitions/average_parser.py'
We need to find a different place for these. Anything under test_definitions is assumed to be a test. If it's only used by a single test, then you can have a directory with the test name. If it's something shared among more than one test, then we really ought to put it in the common code somewhere.

277 === added file 'abrek/test_definitions/timed_test_runner.py'
same here

301 + def _runsteps(self, resultsdir, quiet=False):
302 + outputlog = os.path.join(resultsdir, 'testoutput.log')
303 + for (cmd, runtime, info) in zip(self.steps, self.runtime, self.info):
304 + # Use a pty to make sure output from the tests
305 + # is immediately available (line-buffering)
I've been looking at doing something like this for everything, so I want to take a look at moving this out as well.
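For reference, the core of the trick (a minimal sketch only, with a hypothetical command as a stand-in, not the shared helper itself) is forking the test on a pty so its stdout is a tty and therefore line-buffered, then draining the master side from the parent:

    import os
    import pty
    import subprocess
    import sys

    pid, ptyfd = pty.fork()
    if pid == 0:
        # Child: stdout is now a tty, so libc line-buffers it and output
        # appears immediately instead of being block-buffered into a pipe.
        subprocess.call("some_benchmark", shell=True)  # hypothetical command
        sys.exit(0)
    else:
        # Parent: drain the pty master until the child side is closed.
        while True:
            try:
                chunk = os.read(ptyfd, 1024)
            except OSError:
                break  # on Linux, reading a closed pty raises EIO
            if not chunk:
                break
            sys.stdout.write(chunk)
        os.close(ptyfd)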

Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

Agreed, I will break this up into separate merge proposals.

About the shared runner/parser code, perhaps it is worth having a testdef utilities directory where we can put commonly used parser/runner classes, so that the main code remains clean.
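For example, the layout could look something like this (directory and module names purely hypothetical at this point):

    abrek/testdef_utils/
        __init__.py
        average_parser.py     # AverageParser, MultiAverageParser
        timed_test_runner.py  # TimedTestRunner

    # then, in a test definition:
    from abrek.testdef_utils.average_parser import AverageParser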

Revision history for this message
Paul Larson (pwlars) wrote :

> Agreed, I will break this up into separate merge proposals.
>
> About the shared runner/parser code, perhaps it is worth having a testdef
> utilities directory where we can put commonly used parser/runner classes, so
> that the main code remains clean.
Yes, that sounds good

Revision history for this message
Zygmunt Krynicki (zyga) wrote :

Alf: is this still valid? I'm inclined to reject it, as you are splitting these up into separate branches.

review: Needs Information

Unmerged revisions

41. By Alexandros Frantzis

Download qgears2 package from personal apt repo.

40. By Alexandros Frantzis

Update clutter-eglx-es20 test definition for new abrek API.

39. By Alexandros Frantzis

Update x11perf test definition for new abrek API.

38. By Alexandros Frantzis

Really append 'units' and 'result' fields to all gtkperf test cases.

37. By Alexandros Frantzis

Update render-bench test definition for new abrek API.

36. By Alexandros Frantzis

Update qgears test definitions for new abrek API.

35. By Alexandros Frantzis

Update glmemperf for new abrek API.

34. By Alexandros Frantzis

Update glmark2-es2 test for new abrek API.

33. By Alexandros Frantzis

Update es2gears test for new abrek API.

32. By Alexandros Frantzis

Update average_parser and timed_test_parser for new abrek API.

Preview Diff

=== added file 'abrek/test_definitions/average_parser.py'
--- abrek/test_definitions/average_parser.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/average_parser.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,73 @@
+import re
+import abrek.testdef
+import pickle
+
+class AverageParser(abrek.testdef.AbrekTestParser):
+    """Single test average result parser
+
+    Parses a series of results from the same test and computes the
+    average.
+    """
+    def parse(self):
+        filename = "testoutput.log"
+        pat = re.compile(self.pattern)
+        total = 0.0
+        count = 0
+        with open(filename, 'r') as fd:
+            for line in fd:
+                match = pat.search(line)
+                if match:
+                    d = match.groupdict()
+                    total = total + float(d['measurement'])
+                    count = count + 1
+
+        if count > 0:
+            avg_result = total/count
+        else:
+            avg_result = 0
+
+        avg = {'measurement':avg_result}
+        self.results['test_results'].append(avg)
+        if self.fixupdict:
+            self.fixresults(self.fixupdict)
+        if self.appendall:
+            self.appendtoall(self.appendall)
+
+class MultiAverageParser(abrek.testdef.AbrekTestParser):
+    """Multiple test average result parser
+
+    Parses a series of results from multiple tests and computes the
+    average for each test.
+    """
+    def parse(self):
+        filename = "testoutput.log"
+        pat = re.compile(self.pattern)
+        pat_end = re.compile("^\$End AbrekTest\$ (?P<info>.*)")
+        total = 0.0
+        count = 0
+        with open(filename, 'r') as fd:
+            for line in fd:
+                match = pat.search(line)
+                if match:
+                    d = match.groupdict()
+                    total = total + float(d['measurement'])
+                    count = count + 1
+                else:
+                    match = pat_end.search(line)
+                    if match:
+                        d = match.groupdict()
+                        if count > 0:
+                            avg_result = total/count
+                        else:
+                            avg_result = 0
+                        avg = pickle.loads(d['info'].decode('string-escape')[1:])
+                        avg['measurement'] = avg_result
+                        self.results['test_results'].append(avg)
+                        total = 0.0
+                        count = 0
+
+        if self.fixupdict:
+            self.fixresults(self.fixupdict)
+        if self.appendall:
+            self.appendtoall(self.appendall)
=== added file 'abrek/test_definitions/clutter-eglx-es20.py'
--- abrek/test_definitions/clutter-eglx-es20.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/clutter-eglx-es20.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,33 @@
+import abrek.testdef
+import pickle
+from average_parser import MultiAverageParser
+from timed_test_runner import TimedTestRunner
+
+clutter_tests = [
+    ("test-interactive", "test-rotate", 10),
+    ("test-interactive", "test-actors", 10),
+    ("test-interactive", "test-fbo", 10),
+    ("test-interactive", "test-animator", 10),
+    ("test-interactive", "test-random-text", 10),
+]
+cmd_fmt = "CLUTTER_SHOW_FPS=1 /usr/lib/clutter-1.0/tests/eglx-es20/%s %s"
+
+def create_test_cmds(tests):
+    return [cmd_fmt % args[:2] for args in tests]
+
+def create_test_runtimes(tests):
+    return [args[2] for args in tests]
+
+def create_test_info(tests):
+    return ["%r" % pickle.dumps({'test_case_id':'%s.%s' % args[:2], 'units':'fps'}) for args in tests]
+
+
+parse = MultiAverageParser(pattern="\*\*\* FPS: (?P<measurement>\d+) \*\*\*",
+                           appendall={'result':'pass'})
+inst = abrek.testdef.AbrekTestInstaller(deps=["clutter-eglx-es20-1.0-tests"])
+run = TimedTestRunner(create_test_cmds(clutter_tests),
+                      create_test_runtimes(clutter_tests),
+                      create_test_info(clutter_tests))
+
+testobj = abrek.testdef.AbrekTest(testname="clutter-eglx-es20", installer=inst,
+                                  runner=run, parser=parse)
=== added file 'abrek/test_definitions/es2gears.py'
--- abrek/test_definitions/es2gears.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/es2gears.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,13 @@
+import re
+import abrek.testdef
+from average_parser import AverageParser
+from timed_test_runner import TimedTestRunner
+
+parse = AverageParser(pattern="=\W+(?P<measurement>\d+\.\d+) FPS",
+                      appendall={'test_case_id':'es2gears', 'units':'fps',
+                                 'result':'pass'})
+inst = abrek.testdef.AbrekTestInstaller(deps=["es2gears"])
+run = TimedTestRunner(["es2gears"], [16], [""])
+
+testobj = abrek.testdef.AbrekTest(testname="es2gears", installer=inst,
+                                  runner=run, parser=parse)
=== added file 'abrek/test_definitions/glmark2-es2.py'
--- abrek/test_definitions/glmark2-es2.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/glmark2-es2.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,40 @@
+import re
+import abrek.testdef
+
+RUNSTEPS = ["glmark2-es2"]
+
+class Glmark2Parser(abrek.testdef.AbrekTestParser):
+    def parse(self):
+        PAT1 = "^\W+(?P<subtest>.*?)\W+FPS:\W+(?P<measurement>\d+)"
+        filename = "testoutput.log"
+        pat1 = re.compile(PAT1)
+        in_results = False
+        cur_test = ""
+        with open(filename, 'r') as fd:
+            for line in fd.readlines():
+                if line.find("Precompilation") != -1:
+                    in_results = True
+                if in_results == True:
+                    match = pat1.search(line)
+                    if match:
+                        d = match.groupdict()
+                        d['test_case_id'] = "%s.%s" % (cur_test, d['subtest'])
+                        d.pop('subtest')
+                        self.results['test_results'].append(d)
+                    else:
+                        if line.startswith("==="):
+                            in_results = False
+                        else:
+                            cur_test = line.strip()
+
+        if self.fixupdict:
+            self.fixresults(self.fixupdict)
+        if self.appendall:
+            self.appendtoall(self.appendall)
+
+parse = Glmark2Parser(appendall={'units':'fps', 'result':'pass'})
+inst = abrek.testdef.AbrekTestInstaller(deps=["glmark2-es2"])
+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
+
+testobj = abrek.testdef.AbrekTest(testname="glmark2-es2", installer=inst,
+                                  runner=run, parser=parse)
=== added file 'abrek/test_definitions/glmemperf.py'
--- abrek/test_definitions/glmemperf.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/glmemperf.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,13 @@
+import abrek.testdef
+
+RUNSTEPS = ["glmemperf"]
+PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+) fps"
+
+inst = abrek.testdef.AbrekTestInstaller(deps=["glmemperf"])
+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
+parse = abrek.testdef.AbrekTestParser(PATTERN,
+                                      appendall={'units':'fps',
+                                                 'result':'pass'})
+
+testobj = abrek.testdef.AbrekTest(testname="glmemperf", installer=inst,
+                                  runner=run, parser=parse)
=== modified file 'abrek/test_definitions/gtkperf.py'
--- abrek/test_definitions/gtkperf.py 2010-09-22 17:55:02 +0000
+++ abrek/test_definitions/gtkperf.py 2010-09-29 14:13:40 +0000
@@ -27,7 +27,9 @@
                 if match:
                     self.results['test_results'].append(match.groupdict())
 
-parse = GtkTestParser(appendall={'units':'seconds', 'result':'pass'})
+        self.appendtoall({'units':'seconds', 'result':'pass'})
+
+parse = GtkTestParser()
 inst = abrek.testdef.AbrekTestInstaller(deps=["gtkperf"])
 run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
 
=== added file 'abrek/test_definitions/qgears.py'
--- abrek/test_definitions/qgears.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/qgears.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,21 @@
+import re
+import abrek.testdef
+from average_parser import AverageParser
+from timed_test_runner import TimedTestRunner
+
+INSTALL_STEPS = [
+    'sudo add-apt-repository "deb http://people.canonical.com/~afrantzis/packages/ ./"',
+    'sudo apt-get update',
+    'sudo apt-get install -y --force-yes qgears2'
+    ]
+
+parse = AverageParser(pattern="=\W+(?P<measurement>\d+\.\d+) FPS",
+                      appendall={'test_case_id':'qgears', 'units':'fps',
+                                 'result':'pass'})
+
+inst = abrek.testdef.AbrekTestInstaller(INSTALL_STEPS,
+                                        deps=["python-software-properties"])
+run = TimedTestRunner(["qgears"], [10], [""])
+
+testobj = abrek.testdef.AbrekTest(testname="qgears", installer=inst,
+                                  runner=run, parser=parse)
=== added file 'abrek/test_definitions/render-bench.py'
--- abrek/test_definitions/render-bench.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/render-bench.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,33 @@
+import re
+import abrek.testdef
+
+class RenderBenchParser(abrek.testdef.AbrekTestParser):
+    def parse(self):
+        PAT1 = "^Test: (?P<test_case_id>.*)"
+        PAT2 = "^Time: (?P<measurement>\d+\.\d+)"
+        filename = "testoutput.log"
+        pat1 = re.compile(PAT1)
+        pat2 = re.compile(PAT2)
+        cur_test = None
+        with open(filename, 'r') as fd:
+            for line in fd:
+                match = pat1.search(line)
+                if match:
+                    cur_test = match.groupdict()['test_case_id']
+                else:
+                    match = pat2.search(line)
+                    if match:
+                        d = match.groupdict()
+                        d['test_case_id'] = cur_test
+                        self.results['test_results'].append(d)
+
+        self.appendtoall({'units':'seconds', 'result':'pass'})
+
+RUNSTEPS = ["render_bench"]
+
+inst = abrek.testdef.AbrekTestInstaller(deps=["render-bench"])
+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
+parse = RenderBenchParser()
+
+testobj = abrek.testdef.AbrekTest(testname="render-bench", installer=inst,
+                                  runner=run, parser=parse)
=== added file 'abrek/test_definitions/timed_test_runner.py'
--- abrek/test_definitions/timed_test_runner.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/timed_test_runner.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,51 @@
+import abrek.testdef
+import pty
+import time
+import os
+import subprocess
+import signal
+import sys
+
+class TimedTestRunner(abrek.testdef.AbrekTestRunner):
+    """Test runner class for running tests for specific amounts of time
+
+    steps - list of steps to be executed in a shell
+    runtime - list of runtimes for each step
+    info - informational string to append to start/end markers
+    """
+    def __init__(self, steps=[], runtime=[], info=[]):
+        super(TimedTestRunner, self).__init__(steps)
+        self.runtime = runtime
+        self.info = info
+
+    def _runsteps(self, resultsdir, quiet=False):
+        outputlog = os.path.join(resultsdir, 'testoutput.log')
+        for (cmd, runtime, info) in zip(self.steps, self.runtime, self.info):
+            # Use a pty to make sure output from the tests
+            # is immediately available (line-buffering)
+            (pid, ptyfd) = pty.fork()
+            if pid == 0:
+                try:
+                    print "$Start AbrekTest$ %s" % info
+                    subprocess.call(cmd, shell=True)
+                except:
+                    pass
+
+                print "$End AbrekTest$ %s" % info
+                sys.exit()
+            else:
+                time.sleep(runtime)
+                os.kill(pid, signal.SIGINT)
+                with open(outputlog, 'a') as fd:
+                    out = os.read(ptyfd, 1024)
+                    while out != "":
+                        fd.write(out)
+                        out = ""
+                        try:
+                            out = os.read(ptyfd, 1024)
+                        except:
+                            pass
+
+                os.close(ptyfd)
=== added file 'abrek/test_definitions/x11perf.py'
--- abrek/test_definitions/x11perf.py 1970-01-01 00:00:00 +0000
+++ abrek/test_definitions/x11perf.py 2010-09-29 14:13:40 +0000
@@ -0,0 +1,39 @@
+import re
+import abrek.testdef
+
+x11perf_options = "-repeat 3"
+
+x11perf_tests = [
+    # Antialiased text (using XFT)
+    "-aa10text",
+    "-aa24text",
+
+    # Antialiased drawing (using XRENDER)
+    "-aatrapezoid300",
+    "-aatrap2x300",
+
+    # Normal blitting
+    "-copypixwin500",
+    "-copypixpix500",
+
+    # Composited blitting
+    "-comppixwin500",
+
+    # SHM put image
+    "-shmput500",
+    "-shmputxy500",
+
+    "-scroll500",
+    ]
+
+RUNSTEPS = ["x11perf %s %s" % (x11perf_options, " ".join(x11perf_tests))]
+PATTERN = "trep @.*\(\W*(?P<measurement>\d+.\d+)/sec\):\W+(?P<test_case_id>.+)"
+
+inst = abrek.testdef.AbrekTestInstaller(deps=["x11-apps"])
+run = abrek.testdef.AbrekTestRunner(RUNSTEPS)
+parse = abrek.testdef.AbrekTestParser(PATTERN,
+                                      appendall={'units':'reps/s',
+                                                 'result':'pass'})
+
+testobj = abrek.testdef.AbrekTest(testname="x11perf", installer=inst,
+                                  runner=run, parser=parse)
