Mir

Merge lp:~vanvugt/mir/fix-1509291 into lp:mir

Proposed by Daniel van Vugt
Status: Merged
Approved by: Alberto Aguirre
Approved revision: no longer in the source branch.
Merged at revision: 3192
Proposed branch: lp:~vanvugt/mir/fix-1509291
Merge into: lp:mir
Diff against target: 98 lines (+28/-13)
1 file modified
tests/acceptance-tests/test_latency.cpp (+28/-13)
To merge this branch: bzr merge lp:~vanvugt/mir/fix-1509291
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration Approve
Cemil Azizoglu (community) Approve
Alberto Aguirre (community) Approve
Review via email: mp+280100@code.launchpad.net

Commit message

Make ClientLatency tests less prone to spurious failures (LP: #1509291)
  * Widened the error margin; and
  * Replaced sleeps with precise waits.

Description of the change

It even works under valgrind for me now (see LP: #1522031)
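The "precise waits" in the second commit-message bullet follow the standard condition-variable pattern: the test thread blocks until the expected number of display posts has been recorded, instead of sleeping a fixed 500ms and hoping the compositor has caught up. A minimal sketch of that pattern (class and member names here are illustrative stand-ins, not Mir's actual `Stats` class; this sketch uses the predicate overload of `wait_for`, which is equivalent to the explicit `wait_until` loop in the preview diff):

```cpp
// Illustrative sketch of the "precise wait" pattern, not Mir code.
#include <chrono>
#include <condition_variable>
#include <mutex>

class PostCounter
{
public:
    // Called from the compositor thread for each frame posted.
    void record_post()
    {
        std::lock_guard<std::mutex> lock{mutex};
        ++post_count;
        posted.notify_one();
    }

    // Returns true if `count` posts arrive before `timeout` expires.
    // The predicate overload of wait_for handles spurious wakeups
    // internally, so no explicit loop is needed.
    bool wait_for_posts(unsigned int count, std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock{mutex};
        return posted.wait_for(lock, timeout,
                               [&]{ return post_count >= count; });
    }

private:
    std::mutex mutex;
    std::condition_variable posted;
    unsigned int post_count{0};
};
```

The payoff is that a generous timeout (the branch uses 60 seconds) costs nothing in the common case: the wait returns as soon as the last post arrives, while the old fixed sleep always paid its full 500ms and could still be too short under load or valgrind.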

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Alberto Aguirre (albaguirre) wrote :

LGTM.

review: Approve
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Cemil Azizoglu (cemil-azizoglu) wrote :

Good

review: Approve
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Alberto Aguirre (albaguirre) wrote :

^--CI network hiccup. Let's try again.

Revision history for this message
PS Jenkins bot (ps-jenkins) :
review: Approve (continuous-integration)

Preview Diff

=== modified file 'tests/acceptance-tests/test_latency.cpp'
--- tests/acceptance-tests/test_latency.cpp 2015-10-20 03:30:00 +0000
+++ tests/acceptance-tests/test_latency.cpp 2015-12-11 07:19:47 +0000
@@ -30,6 +30,9 @@
 #include <gmock/gmock.h>
 #include <deque>
 
+#include <mutex>
+#include <condition_variable>
+
 namespace mtf = mir_test_framework;
 namespace mtd = mir::test::doubles;
 namespace mt = mir::test;
@@ -47,6 +50,7 @@
     {
         std::lock_guard<std::mutex> lock{mutex};
         post_count++;
+        posted.notify_one();
     }
 
     void record_submission(uint32_t submission_id)
@@ -74,8 +78,21 @@
         return latency;
     }
 
+    bool wait_for_posts(unsigned int count, std::chrono::milliseconds timeout)
+    {
+        std::unique_lock<std::mutex> lock(mutex);
+        auto const deadline = std::chrono::system_clock::now() + timeout;
+        while (post_count < count)
+        {
+            if (posted.wait_until(lock, deadline) == std::cv_status::timeout)
+                return false;
+        }
+        return true;
+    }
+
 private:
     std::mutex mutex;
+    std::condition_variable posted;
     unsigned int post_count{0};
 
     // Note that a buffer_id may appear twice in the list as the client is
@@ -190,6 +207,12 @@
     Stats stats;
     TimeTrackingDisplay display{stats};
     unsigned int test_submissions{100};
+
+    // We still have a margin for error here. The client and server will
+    // be scheduled somewhat unpredictably which affects results. Also
+    // affecting results will be the first few frames before the buffer
+    // queue is full (during which there will be no buffer latency).
+    float const error_margin = 0.4f;
 };
 }
 
@@ -204,9 +227,8 @@
         mir_buffer_stream_swap_buffers_sync(stream);
     }
 
-    // Wait for the compositor to finish rendering all those frames,
-    // or else we'll be missing some samples and get a spurious average.
-    std::this_thread::sleep_for(std::chrono::milliseconds(500));
+    ASSERT_TRUE(stats.wait_for_posts(test_submissions,
+                                     std::chrono::seconds(60)));
 
     unsigned int const expected_client_buffers = 3;
 
@@ -219,12 +241,6 @@
 
     auto observed_latency = display.group.average_latency();
 
-    // We still have a margin for error here. The client and server will
-    // be scheduled somewhat unpredictably which affects results. Also
-    // affecting results will be the first few frames before the buffer
-    // quere is full (during which there will be no buffer latency).
-    float const error_margin = 0.1f;
-
     EXPECT_THAT(observed_latency, Gt(expected_min_latency-error_margin));
     EXPECT_THAT(observed_latency, Lt(expected_max_latency+error_margin));
 }
@@ -248,13 +264,12 @@
         mir_buffer_stream_swap_buffers_sync(stream);
     }
 
-    // Wait for the compositor to finish rendering all those frames,
-    // or else we'll be missing some samples and get a spurious average.
-    std::this_thread::sleep_for(std::chrono::milliseconds(500));
+    ASSERT_TRUE(stats.wait_for_posts(test_submissions,
+                                     std::chrono::seconds(60)));
 
     // As the client is producing frames slower than the compositor consumes
     // them, the buffer queue never fills. So latency is low;
     float const observed_latency = display.group.average_latency();
     EXPECT_THAT(observed_latency, Ge(0.0f));
-    EXPECT_THAT(observed_latency, Lt(1.1f));
+    EXPECT_THAT(observed_latency, Le(1.0f + error_margin));
 }
