Merge lp:~josharenson/mir/install_glmark2 into lp:mir
Status: Superseded
Proposed branch: lp:~josharenson/mir/install_glmark2
Merge into: lp:mir
Diff against target: 157 lines (+119/-0), 4 files modified
  debian/mir-test-tools.install (+1/-0)
  tests/CMakeLists.txt (+5/-0)
  tests/performance-tests/CMakeLists.txt (+28/-0)
  tests/performance-tests/test_glmark2-es2-mir.cpp (+85/-0)
To merge this branch: bzr merge lp:~josharenson/mir/install_glmark2
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Alan Griffiths | | | Needs Fixing |
| PS Jenkins bot (community) | continuous-integration | | Needs Fixing |
| Alberto Aguirre (community) | | | Needs Fixing |
| Robert Carr | | | Pending |
| Alexandros Frantzis | | | Pending |
| Mir development team | | | Pending |

Review via email: mp+218334@code.launchpad.net
This proposal supersedes a proposal from 2014-04-30.
This proposal has been superseded by a proposal from 2014-05-08.
Commit message
* Adds a 'performance test' category.
* Runs glmark2-es2-mir, storing the results to a local file for post-analysis
Description of the change
Robert Carr (robertcarr) wrote : Posted in a previous version of this proposal | # |
We don't need to link mir_performance_tests against so much right?
Josh Arenson (josharenson) wrote : Posted in a previous version of this proposal | # |
> We don't need to link mir_performance
Absolutely.
Alan Griffiths (alan-griffiths) wrote : Posted in a previous version of this proposal | # |
Where should we source glmark2-es2-mir?
$ apt-cache search glmark2-es2-mir
$
Josh Arenson (josharenson) wrote : Posted in a previous version of this proposal | # |
> Where should we source glmark2-es2-mir?
>
> $ apt-cache search glmark2-es2-mir
> $
glmark2-es2-mir (the version that actually runs) is currently located in ppa:mir-
Robert Carr (robertcarr) wrote : Posted in a previous version of this proposal | # |
Why doesn't Jenkins even say hello to this branch?
Josh Arenson (josharenson) wrote : Posted in a previous version of this proposal | # |
> Why doesn't Jenkins even say hello to this branch?
According to fginther, there is an outage.
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal | # |
FAILED: Continuous integration, rev:1572
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https:/
http://
Executed test runs:
FAILURE: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alexandros Frantzis (afrantzis) wrote : Posted in a previous version of this proposal | # |
46 +find_program(
47 +
48 +if(NOT GLMARK2_
49 + message("warning: glmark2-es2-mir not found, performance tests _may_ fail.")
50 +endif()
Since we are not running the program automatically when building, I am not sure how useful it is to print this message at build time.
77 +#include <stdlib.h>
We also need <stdio.h> for popen, getline etc (we shouldn't depend on other headers bringing it in).
Some style issues (see http://
85 +class GLMark2Test : public ::testing::Test
86 +{
...
Mir uses 4 space indentations.
88 + enum ResultFileType {RAW, JSON};
Lowercase identifiers for enum values.
103 + while((read = getline(&line, &len, in)) != -1)
105 + if(file_type == RAW)
107 + else if(file_type == JSON)
109 + if(boost:
Spaces between keyword (if,while) and opening parenthesis.
105 + if(file_type == RAW)
106 + glmark2_output << line;
Braces around statement, since the 'else' clause also has braces.
94 + const char *cmd
93 + FILE *in;
...
'*' goes with type in our C++ code and const goes after the type. So: "FILE* in" and "char const* cmd"
98 + std::ofstream glmark2_output;
No need to define all variables in the beginning of the code block if it is more convenient (or necessary for RAII purposes) not to do so. So, for example it is allowed to have "std::ofstream glmark2_
123 + glmark2_
No need to manually close the file, it's closed on std::ofstream destruction.
Daniel van Vugt (vanvugt) wrote : Posted in a previous version of this proposal | # |
I have no opinion, but Jenkins won't ever land this till you add a commit message up the top.
Josh Arenson (josharenson) wrote : Posted in a previous version of this proposal | # |
> I have no opinion, but Jenkins won't ever land this till you add a commit
> message up the top.
Thanks for the heads up. I'm working on a refactor and will add the message when done.
Alberto Aguirre (albaguirre) wrote : Posted in a previous version of this proposal | # |
> > I have no opinion, but Jenkins won't ever land this till you add a commit
> > message up the top.
> Thanks for the heads up. I'm working on a refactor and will add the message
> when done.
You should mark this as "work in progress" so that others don't try to review code that is about to change.
Daniel van Vugt (vanvugt) : Posted in a previous version of this proposal | # |
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal | # |
FAILED: Continuous integration, rev:1577
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https:/
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alexandros Frantzis (afrantzis) wrote : Posted in a previous version of this proposal | # |
119 + /*HACK to wait for the server to start*/
You can override the mir::DefaultSer
/*HACK to stop the mir server*/
You can get the main loop object from mir::DefaultSer
Alan Griffiths (alan-griffiths) wrote : Posted in a previous version of this proposal | # |
> 119 + /*HACK to wait for the server to start*/
>
> You can override the
> mir::DefaultSer
> own implementation to wait for server start.
>
> /*HACK to stop the mir server*/
>
> You can get the main loop object from
> mir::DefaultSer
Even simpler: just use the pre-existing InProcessServer test fixture to start the server before entering the test body and close it afterwards.
Alexandros Frantzis (afrantzis) wrote : Posted in a previous version of this proposal | # |
> > 119 + /*HACK to wait for the server to start*/
> >
> > You can override the
> > mir::DefaultSer
> you
> > own implementation to wait for server start.
> >
> > /*HACK to stop the mir server*/
> >
> > You can get the main loop object from
> > mir::DefaultSer
>
> Even simpler: just use the pre-existing InProcessServer test fixture to start
> the server before entering the test body and close it afterwards.
Even better, +1
Alberto Aguirre (albaguirre) wrote : Posted in a previous version of this proposal | # |
~~~
111 + FILE* in;
112 + char* line = NULL;
113 + size_t len = 0;
114 + ssize_t read;
123 + ASSERT_TRUE((in = popen(cmd, "r")));
126 + while ((read = getline(&line, &len, in)) != -1)
151 + free(line);
152 + fclose(in);
~~~
I'd like to see that in a separate class and put under mir_test as it could be useful if we need to run other
benchmark programs OR use pstreams library which already does that for you.
~~~
virtual void RunGLMark2(char const* output_filename, ResultFileType file_type)
~~~
RunGLMark2 not in the style described at http://
~~~
106 + enum ResultFileType {raw, json};
~~~
Use enum class.
Alberto Aguirre (albaguirre) wrote : Posted in a previous version of this proposal | # |
Maybe this will be useful:
https:/
Popen p{"glmark2-es2-mir --fullscreen"};
std::string line;
while (p.get_line(line))
{
<do something with line>
}
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal | # |
FAILED: Continuous integration, rev:1579
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
FAILURE: http://
Click here to trigger a rebuild:
http://
PS Jenkins bot (ps-jenkins) wrote : Posted in a previous version of this proposal | # |
FAILED: Continuous integration, rev:1580
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
Click here to trigger a rebuild:
http://
Alan Griffiths (alan-griffiths) wrote : Posted in a previous version of this proposal | # |
My earlier comment hasn't been addressed:
Even simpler: just use the pre-existing InProcessServer test fixture to start the server before entering the test body and close it afterwards.
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1585
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1586
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alan Griffiths (alan-griffiths) wrote : | # |
The test should be supplying the connection string required by glmark by calling InProcessServer
I'm not sure if there's a glmark command-line parameter you can use, or if it needs:
setenv(
In the case of using the setenv() option note that it is advisable to restore the original state of any environment variables changed in tests.
Josh Arenson (josharenson) wrote : | # |
> The test should be supplying the connection string required by glmark by
> calling InProcessServer
>
> I'm not sure if there's a glmark command-line parameter you can use, or if it
> needs:
>
> setenv(
>
> In the case of using the setenv() option note that it is advisable to restore
> the original state of any environment variables changed in tests.
I agree that the variable should be restored, but GLMark2 is also calling mir_connect_sync so I'm not sure the above method will work. This becomes more of an issue if we decide to automate closed-source benchmarks.
The other option is to override the SetUp method, and unset the variable there. This requires no changes to InProcessServer, but should the point in a test where an environment variable is set really affect the outcome?
Alan Griffiths (alan-griffiths) wrote : | # |
> > The test should be supplying the connection string required by glmark by
> > calling InProcessServer
> >
> > I'm not sure if there's a glmark command-line parameter you can use, or if
> it
> > needs:
> >
> > setenv(
> >
> > In the case of using the setenv() option note that it is advisable to
> restore
> > the original state of any environment variables changed in tests.
>
>
> I agree that the variable should be restored, but GLMark2 is also calling
> mir_connect_sync so I'm not sure the above method will work. This becomes more
> of an issue if we decide to automate closed-source benchmarks.
Sorry, I'm still being unclear.
1. I should have written: setenv(
2. GLMark2 should ideally be calling mir_connect_sync() with the URI returned by new_connection() - passed through a command line parameter.
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1587
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alan Griffiths (alan-griffiths) wrote : | # |
148 + char const* origenv = getenv(
149 + unsetenv(
...
153 + setenv(
All these lines should go. There is no need to do *anything* with $MIR_SERVER_
Instead, as glmark2-es2-mir doesn't have a "mir connection" parameter you should use $MIR_SERVER to specify the Mir server connection. Specifically, this works without requiring an endpoint on the filesystem:
auto const cmd = "MIR_SERVER=" + new_connection() + " glmark2-es2-mir --fullscreen";
~~~~
109 + virtual void RunGLMark2(char const* output_filename, ResultFileType file_type)
The Mir style guideline names functions in lower_case
Alan Griffiths (alan-griffiths) wrote : | # |
(Not a blocker for this MP)
93 +char const* dummy[] = {0};
94 +int argc = 0;
95 +char const** argv = dummy;
This is a bit unsatisfactory as it means that configuring the Mir server can't be done on the command line as is possible with other tests. I've raised lp:1316987 to record the problem.
Josh Arenson (josharenson) wrote : | # |
> (Not a blocker for this MP)
>
> 93 +char const* dummy[] = {0};
> 94 +int argc = 0;
> 95 +char const** argv = dummy;
>
> This is a bit unsatisfactory as it means that configuring the Mir server can't
> be done on the command line as is possible with other tests. I've raised
> lp:1316987 to record the problem.
Added a FIXME to the code, and took ownership of the bug.
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1588
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alan Griffiths (alan-griffiths) wrote : | # |
[ RUN ] DemoInProces
/tmp/buildd/
Failed
bind: No such file or directory
/tmp/buildd/
Value of: display_server
Actual: false
Expected: true
/tmp/buildd/
Value of: display_server
Actual: false
Expected: true
Did you override SetUp() and forget to call mtf::InProcessS
[ FAILED ] DemoInProces
Looks worrying but unrelated: triggering a rebuild.
Alan Griffiths (alan-griffiths) wrote : | # |
Actually, this "test" doesn't assert anything. AFAICS it will pass even if glmark crashes and produces no stats.
Are we planning to introduce some test requirement at some stage (when we've benchmarked on some test kit)?
Alan Griffiths (alan-griffiths) wrote : | # |
85 +#include <iostream>
You don't use this. (But gtest pulls it in anyway so removing this line doesn't help much.)
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1588
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alberto Aguirre (albaguirre) wrote : | # |
You need to rebase the branch to not include
lp:~josharenson/mir/unset_server_nofile
and remove it as pre-requisite for this MP.
Josh Arenson (josharenson) wrote : | # |
> Actually, this "test" doesn't assert anything. AFAICS it will pass even if
> glmark crashes and produces no stats.
>
> Are we planning to introduce some test requirement at some stage (when we've
> benchmarked on some test kit)?
Yes, I am currently profiling devices and trying to determine an appropriate threshold.
Alan Griffiths (alan-griffiths) wrote : | # |
> > Actually, this "test" doesn't assert anything. AFAICS it will pass even if
> > glmark crashes and produces no stats.
> >
> > Are we planning to introduce some test requirement at some stage (when we've
> > benchmarked on some test kit)?
>
> Yes, I am currently profiling devices and trying to determine an appropriate
> threshold.
That's OK for now.
As Alberto says, you need to rid this MP of the unnecessary prerequisite (and the code of its changes).
PS Jenkins bot (ps-jenkins) wrote : | # |
FAILED: Continuous integration, rev:1590
http://
Executed test runs:
SUCCESS: http://
FAILURE: http://
FAILURE: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alan Griffiths (alan-griffiths) wrote : | # |
> /usr/src/
> GTEST_IMPL_
> ~~~~~~~
> /usr/src/
> if (val1 op val2) {\
> ^
> /mir/tests/
> ASSERT_
clang is a bit more sensitive to signed/unsigned comparisons than g++ and std::size_t is unsigned. It works for both if you write this as:
~~~~
As you'll be editing anyway:
145 +
...
150 +
This whitespace doesn't aid readability.
Preview Diff
1 | === modified file 'debian/mir-test-tools.install' |
2 | --- debian/mir-test-tools.install 2014-03-06 06:05:17 +0000 |
3 | +++ debian/mir-test-tools.install 2014-05-08 17:51:28 +0000 |
4 | @@ -2,5 +2,6 @@ |
5 | usr/bin/mir_unit_tests |
6 | usr/bin/mir_acceptance_tests |
7 | usr/bin/mir_integration_tests |
8 | +usr/bin/mir_performance_tests |
9 | usr/lib/*/mir/tools/libmirclientlttng.so |
10 | usr/lib/*/mir/tools/libmirserverlttng.so |
11 | |
12 | === modified file 'tests/CMakeLists.txt' |
13 | --- tests/CMakeLists.txt 2014-03-26 05:48:59 +0000 |
14 | +++ tests/CMakeLists.txt 2014-05-08 17:51:28 +0000 |
15 | @@ -26,6 +26,7 @@ |
16 | |
17 | option(MIR_BUILD_ACCEPTANCE_TESTS "Build acceptance tests" ON) |
18 | option(MIR_BUILD_INTEGRATION_TESTS "Build integration tests" ON) |
19 | +option(MIR_BUILD_PERFORMANCE_TESTS "Build performance tests" ON) |
20 | option(MIR_BUILD_UNIT_TESTS "Build unit tests" ON) |
21 | |
22 | if (MIR_BUILD_ACCEPTANCE_TESTS) |
23 | @@ -36,6 +37,10 @@ |
24 | add_subdirectory(integration-tests/) |
25 | endif (MIR_BUILD_INTEGRATION_TESTS) |
26 | |
27 | +if (MIR_BUILD_PERFORMANCE_TESTS) |
28 | + add_subdirectory(performance-tests/) |
29 | +endif(MIR_BUILD_PERFORMANCE_TESTS) |
30 | + |
31 | if (MIR_BUILD_UNIT_TESTS) |
32 | add_subdirectory(unit-tests/) |
33 | endif (MIR_BUILD_UNIT_TESTS) |
34 | |
35 | === added directory 'tests/performance-tests' |
36 | === added file 'tests/performance-tests/CMakeLists.txt' |
37 | --- tests/performance-tests/CMakeLists.txt 1970-01-01 00:00:00 +0000 |
38 | +++ tests/performance-tests/CMakeLists.txt 2014-05-08 17:51:28 +0000 |
39 | @@ -0,0 +1,28 @@ |
40 | +set( |
41 | + PERFORMANCE_TESTS_SOURCES |
42 | + |
43 | + test_glmark2-es2-mir.cpp |
44 | +) |
45 | + |
46 | +add_executable( |
47 | + mir_performance_tests |
48 | + |
49 | + ${PERFORMANCE_TESTS_SOURCES} |
50 | +) |
51 | + |
52 | +target_link_libraries( |
53 | + mir_performance_tests |
54 | + |
55 | + mir-test |
56 | + mir-test-framework |
57 | + |
58 | + ${Boost_LIBRARIES} |
59 | + ${GTEST_BOTH_LIBRARIES} |
60 | + ${GMOCK_LIBRARY} |
61 | + ${GMOCK_MAIN_LIBRARY} |
62 | +) |
63 | + |
64 | +install( |
65 | + TARGETS mir_performance_tests |
66 | + RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} |
67 | +) |
68 | |
69 | === added file 'tests/performance-tests/test_glmark2-es2-mir.cpp' |
70 | --- tests/performance-tests/test_glmark2-es2-mir.cpp 1970-01-01 00:00:00 +0000 |
71 | +++ tests/performance-tests/test_glmark2-es2-mir.cpp 2014-05-08 17:51:28 +0000 |
72 | @@ -0,0 +1,85 @@ |
73 | +#include "mir/default_server_configuration.h" |
74 | +#include "mir/options/default_configuration.h" |
75 | + |
76 | +#include "mir_test/popen.h" |
77 | +#include "mir_test_framework/in_process_server.h" |
78 | + |
79 | +#include <gtest/gtest.h> |
80 | + |
81 | +#include <stdlib.h> |
82 | +#include <stdio.h> |
83 | + |
84 | +#include <boost/regex.hpp> |
85 | +#include <fstream> |
86 | +#include <string> |
87 | + |
88 | +namespace |
89 | +{ |
90 | +// FIXME remove this when support for passing cli options to |
91 | +// DefaultServerConfiguration is implemented |
92 | +char const* dummy[] = {0}; |
93 | +int argc = 0; |
94 | +char const** argv = dummy; |
95 | + |
96 | +class GLMark2Test : public mir_test_framework::InProcessServer |
97 | +{ |
98 | +public: |
99 | + GLMark2Test() |
100 | + : config(argc, argv) |
101 | + { |
102 | + } |
103 | + |
104 | + mir::DefaultServerConfiguration& server_config() override {return config;}; |
105 | + |
106 | +protected: |
107 | + enum ResultFileType {raw, json}; |
108 | + virtual void run_glmark2(char const* output_filename, ResultFileType file_type) |
109 | + { |
110 | + auto const cmd = "MIR_SERVER=" + new_connection() + " glmark2-es2-mir --fullscreen"; |
111 | + mir::test::Popen p(cmd); |
112 | + |
113 | + boost::cmatch matches; |
114 | + boost::regex re_glmark2_score(".*glmark2\\s+Score:\\s+(\\d+).*"); |
115 | + std::string line; |
116 | + std::ofstream glmark2_output; |
117 | + std::string score = ""; |
118 | + glmark2_output.open(output_filename); |
119 | + while (p.get_line(line)) |
120 | + { |
121 | + if(boost::regex_match(line.c_str(), matches, re_glmark2_score) |
122 | + && score.length() == 0) |
123 | + { |
124 | + score = matches[1]; |
125 | + } |
126 | + |
127 | + if (file_type == raw) |
128 | + { |
129 | + glmark2_output << line; |
130 | + } |
131 | + } |
132 | + |
133 | + ASSERT_GT(score.length(), 0); |
134 | + EXPECT_GT(atoi(score.c_str()), 52); |
135 | + |
136 | + if (file_type == json) |
137 | + { |
138 | + std::string json = "{"; |
139 | + json += "'benchmark_name':'glmark2-es2-mir'"; |
140 | + json += ","; |
141 | + json += "'score':'" + score + "'"; |
142 | + json += "}"; |
143 | + glmark2_output << json; |
144 | + } |
145 | + |
146 | + } |
147 | + |
148 | +private: |
149 | + mir::DefaultServerConfiguration config; |
150 | + |
151 | +}; |
152 | + |
153 | +TEST_F(GLMark2Test, benchmark_fullscreen_default) |
154 | +{ |
155 | + run_glmark2("glmark2_fullscreen_default.json", json); |
156 | +} |
157 | +} |