Mir

Merge lp:~alan-griffiths/mir/fix-1317370 into lp:mir

Proposed by Alan Griffiths
Status: Merged
Approved by: Alexandros Frantzis
Approved revision: no longer in the source branch.
Merged at revision: 1686
Proposed branch: lp:~alan-griffiths/mir/fix-1317370
Merge into: lp:mir
Diff against target: 44 lines (+12/-2)
1 file modified
tests/unit-tests/compositor/test_buffer_queue.cpp (+12/-2)
To merge this branch: bzr merge lp:~alan-griffiths/mir/fix-1317370
Reviewer                          Review Type               Status
PS Jenkins bot (community)        continuous-integration    Approve
Alexandros Frantzis (community)                             Approve
Kevin DuBois (community)                                    Approve
Alberto Aguirre (community)                                 Approve
Review via email: mp+222157@code.launchpad.net

Commit message

tests: BufferQueueTest.compositor_never_owns_client_buffers - allow a moment for clients to acquire a buffer

Description of the change

tests: BufferQueueTest.compositor_never_owns_client_buffers - allow a moment for clients to acquire a buffer

The previous code led to the "compositor" thread looping millions of times while the client thread only managed to try to acquire the client_buffer_guard mutex a couple of thousand times. By introducing a condition variable, the "compositor" now waits briefly for the client to acquire a buffer (so that there is something to test).
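For illustration, here is a minimal, self-contained sketch of the pattern this change introduces (the names mirror the test but the snippet, including the 1ms timeout, is illustrative only; see the actual diff below):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

int main()
{
    std::mutex client_buffer_guard;
    std::condition_variable client_buffer_cv;
    int* client_buffer = nullptr;   // stands in for mg::Buffer*
    int some_buffer = 42;

    std::thread compositor([&]
    {
        std::unique_lock<std::mutex> lock(client_buffer_guard);

        // Wait briefly for the client to publish a buffer; the predicate
        // guards against spurious wakeups. wait_for() returns true only if
        // the predicate became true within the timeout.
        if (client_buffer_cv.wait_for(
            lock,
            std::chrono::milliseconds{1},
            [&]{ return client_buffer != nullptr; }))
        {
            // client_buffer is valid here; the real test asserts that the
            // compositor's buffer id differs from the client's.
        }
    });

    {
        std::lock_guard<std::mutex> lock(client_buffer_guard);
        client_buffer = &some_buffer;   // the "client" acquires a buffer
        client_buffer_cv.notify_one();
    }

    compositor.join();
}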

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Alberto Aguirre (albaguirre) wrote :

Yes, this cuts the time in half on my desktop (from 90-100ms to 43-50ms).

review: Approve
Revision history for this message
Kevin DuBois (kdub) wrote :

okay

review: Approve
Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

This doesn't change the situation on my desktop (which is good to begin with, runtime < 1s). However, I noticed that the unit test execution time under valgrind increased (vs previous and subsequent runs) for amd64 (https://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-amd64-ci/340/consoleFull), but decreased significantly for armhf (https://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-armhf-ci/338/consoleFull). So, overall, a net win.

I retriggered the CI just to get extra data points for the timings.

review: Approve
Revision history for this message
Alan Griffiths (alan-griffiths) wrote :

> This doesn't change the situation on my desktop (which is good to begin with,
> runtime < 1s),

I'm not sure why Daniel and I see this and others do not. (And I can get good timings too if I prevent the test from using multiple CPUs.)

The bug mechanism is clear though: the "compositor" thread releases and re-acquires the lock thousands of times before the waiting "client" thread can acquire it. This is valid, but unfortunate, thread scheduling; the client thread "getting in" as soon as the compositor releases the lock is equally valid. Different kit exhibits different behavior.
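To make the mechanism concrete, a minimal sketch of the pathological shape (illustrative names only, not the test code):

#include <atomic>
#include <mutex>
#include <thread>

int main()
{
    std::mutex guard;
    std::atomic<bool> done{false};

    // "Compositor": releases the mutex at the end of each iteration and may
    // immediately re-acquire it on the next. std::mutex makes no fairness
    // guarantee, so on some hardware this thread wins the lock millions of
    // times in a row before the blocked thread below ever gets it.
    std::thread compositor([&]
    {
        while (!done)
        {
            std::lock_guard<std::mutex> lock(guard);
            // inspect shared state ...
        }
    });

    // "Client": blocks here for as long as the scheduler lets the
    // compositor keep re-acquiring the lock.
    {
        std::lock_guard<std::mutex> lock(guard);
        // publish a buffer ...
        done = true;
    }

    compositor.join();
}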

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)

Preview Diff

=== modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp'
--- tests/unit-tests/compositor/test_buffer_queue.cpp 2014-06-03 02:13:59 +0000
+++ tests/unit-tests/compositor/test_buffer_queue.cpp 2014-06-05 10:46:20 +0000
@@ -1269,11 +1269,14 @@
 
 TEST_F(BufferQueueTest, compositor_never_owns_client_buffers)
 {
+    static std::chrono::nanoseconds const time_for_client_to_acquire{1};
+
     for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)
     {
         mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
         std::mutex client_buffer_guard;
+        std::condition_variable client_buffer_cv;
         mg::Buffer* client_buffer = nullptr;
         std::atomic<bool> done(false);
 
@@ -1285,9 +1288,15 @@
             auto buffer = q.compositor_acquire(this);
 
             {
-                std::lock_guard<std::mutex> lock(client_buffer_guard);
-                if (client_buffer)
+                std::unique_lock<std::mutex> lock(client_buffer_guard);
+
+                if (client_buffer_cv.wait_for(
+                    lock,
+                    time_for_client_to_acquire,
+                    [&]()->bool{ return client_buffer; }))
+                {
                     ASSERT_THAT(buffer->id(), Ne(client_buffer->id()));
+                }
             }
 
             std::this_thread::yield();
@@ -1304,6 +1313,7 @@
             {
                 std::lock_guard<std::mutex> lock(client_buffer_guard);
                 client_buffer = handle->buffer();
+                client_buffer_cv.notify_one();
             }
 
             std::this_thread::yield();
