Mir

Merge lp:~alan-griffiths/mir/fix-1317370 into lp:mir

Proposed by Alan Griffiths
Status: Merged
Approved by: Alexandros Frantzis
Approved revision: no longer in the source branch.
Merged at revision: 1686
Proposed branch: lp:~alan-griffiths/mir/fix-1317370
Merge into: lp:mir
Diff against target: 44 lines (+12/-2)
1 file modified
tests/unit-tests/compositor/test_buffer_queue.cpp (+12/-2)
To merge this branch: bzr merge lp:~alan-griffiths/mir/fix-1317370
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration Approve
Alexandros Frantzis (community) Approve
Kevin DuBois (community) Approve
Alberto Aguirre (community) Approve
Review via email: mp+222157@code.launchpad.net

Commit message

tests: BufferQueueTest.compositor_never_owns_client_buffers - allow a moment for clients to acquire a buffer

Description of the change

tests: BufferQueueTest.compositor_never_owns_client_buffers - allow a moment for clients to acquire a buffer

The previous code led to the "compositor" thread looping millions of times whilst the client tried (a couple of thousand times) to acquire the client_buffer_guard mutex. By introducing a condition variable, the "compositor" waits briefly for the client to acquire a buffer (so that there is something to test).

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Alberto Aguirre (albaguirre) wrote :

Yes, this cuts the time in half on my desktop (from 90-100ms to 43-50ms).

review: Approve
Revision history for this message
Kevin DuBois (kdub) wrote :

okay

review: Approve
Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

This doesn't change the situation on my desktop (which is good to begin with, runtime < 1s). However, I noticed that the unit test execution time under valgrind increased (vs previous and subsequent runs) for amd64 (https://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-amd64-ci/340/consoleFull), but decreased significantly for armhf (https://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-armhf-ci/338/consoleFull). So overall a net win.

I retriggered the CI just to get extra data points for the timings.

review: Approve
Revision history for this message
Alan Griffiths (alan-griffiths) wrote :

> This doesn't change the situation on my desktop (which is good to begin with,
> runtime < 1s),

I'm not sure why Daniel and I see this and others do not. (And I can get good timings too if I prevent the test using multiple CPUs.)

The bug mechanism is clear though: the "composite" thread releases and re-acquires the lock thousands of times before the waiting "client" thread can acquire it. This is clearly valid, but unfortunate, thread scheduling. The client thread "getting in" as soon as the lock is released by the compositor is also valid. Different kit exhibits different behavior.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)

Preview Diff

=== modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp'
--- tests/unit-tests/compositor/test_buffer_queue.cpp	2014-06-03 02:13:59 +0000
+++ tests/unit-tests/compositor/test_buffer_queue.cpp	2014-06-05 10:46:20 +0000
@@ -1269,11 +1269,14 @@

 TEST_F(BufferQueueTest, compositor_never_owns_client_buffers)
 {
+    static std::chrono::nanoseconds const time_for_client_to_acquire{1};
+
     for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)
     {
         mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);

         std::mutex client_buffer_guard;
+        std::condition_variable client_buffer_cv;
         mg::Buffer* client_buffer = nullptr;
         std::atomic<bool> done(false);

@@ -1285,9 +1288,15 @@
             auto buffer = q.compositor_acquire(this);

             {
-                std::lock_guard<std::mutex> lock(client_buffer_guard);
-                if (client_buffer)
+                std::unique_lock<std::mutex> lock(client_buffer_guard);
+
+                if (client_buffer_cv.wait_for(
+                    lock,
+                    time_for_client_to_acquire,
+                    [&]()->bool{ return client_buffer; }))
+                {
                     ASSERT_THAT(buffer->id(), Ne(client_buffer->id()));
+                }
             }

             std::this_thread::yield();
@@ -1304,6 +1313,7 @@
             {
                 std::lock_guard<std::mutex> lock(client_buffer_guard);
                 client_buffer = handle->buffer();
+                client_buffer_cv.notify_one();
             }

             std::this_thread::yield();
