Mir

Merge lp:~vanvugt/mir/enable-dynamic-queue-scaling into lp:mir

Proposed by Daniel van Vugt on 2016-02-04
Status: Work in progress
Proposed branch: lp:~vanvugt/mir/enable-dynamic-queue-scaling
Merge into: lp:mir
Prerequisite: lp:~vanvugt/mir/fix-1480164-tests-scaling
Diff against target: 94 lines (+52/-7)
2 files modified
src/server/compositor/buffer_queue.cpp (+1/-1)
tests/acceptance-tests/test_latency.cpp (+51/-6)
To merge this branch: bzr merge lp:~vanvugt/mir/enable-dynamic-queue-scaling
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration Needs Fixing on 2016-02-15
Mir CI Bot continuous-integration Approve on 2016-02-15
Daniel van Vugt Needs Information on 2016-02-15
Kevin DuBois (community) 2016-02-04 Abstain on 2016-02-04
Review via email: mp+285031@code.launchpad.net

Commit message

Enable dynamic buffer queue scaling to minimise latency, and tighten
the acceptance tests to show it's working. (LP: #1240909)

The relevant unit and integration tests are already present.

This only just became feasible after LP: #1476201 was fixed in Mir 0.19.0.
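For context, the code change below only flips frame_deadlines_threshold from -1 (scaling disabled) to 30 consecutive met deadlines (roughly half a second at 60Hz); the scaling machinery itself already exists in buffer_queue.cpp. A rough, hypothetical sketch of the heuristic such a threshold implies (frame_deadlines_threshold and frame_deadlines_met are real members visible in the diff; everything else here is illustrative and the actual BufferQueue logic may differ):

// Hypothetical sketch only -- not the actual BufferQueue implementation.
struct QueueScalingSketch
{
    int frame_deadlines_threshold = 30;  // ~0.5s of on-time frames at 60Hz
    int frame_deadlines_met = 0;
    bool allow_spare_buffer = true;      // stand-in for the real queue's
                                         // handling of its extra buffer

    void on_client_frame_posted(bool deadline_met)
    {
        if (frame_deadlines_threshold < 0)
            return;  // -1 keeps scaling disabled (the old default)

        if (deadline_met)
        {
            // Client has kept up for long enough: drop the spare buffer so
            // latency falls to double-buffered levels.
            if (++frame_deadlines_met >= frame_deadlines_threshold)
                allow_spare_buffer = false;
        }
        else
        {
            // A missed deadline resets the count and regrows the queue to
            // protect the client's frame rate.
            frame_deadlines_met = 0;
            allow_spare_buffer = true;
        }
    }
};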

Daniel van Vugt (vanvugt) wrote :

I need to do full stack testing on the phone again, but have run out of time today.

review: Needs Information
Mir CI Bot (mir-ci-bot) wrote :

PASSED: Continuous integration, rev:3300
https://mir-jenkins.ubuntu.com/job/mir-ci/228/
Executed test runs:
    None: https://mir-jenkins.ubuntu.com/job/generic-update-mp/228/console

Click here to trigger a rebuild:
https://mir-jenkins.ubuntu.com/job/mir-ci/228/rebuild

review: Approve (continuous-integration)
Kevin DuBois (kdub) wrote :

In the new model we don't so much scale the number of buffers as let the spare/3rd buffer go unused, so I don't think much has to be done to the new system to get a similar sort of feature. Given that something equivalent is already around, and this has been in flight for a while: +1 (once it gets a testing sign-off, I suppose; abstain for now).

More coordination than a code review really, but is this the last bit of the exchange-based features we'll be testing? I suppose I'd rather the full stack testing go into the new system (aiming for 0.20/0.21 being the switchover release)

review: Abstain
Daniel van Vugt (vanvugt) wrote :

I've already invested much of 2015 in this approach and the dependent pieces. So although you aim to retire it at some point, it is somewhat mature and was only turned off last year as a result of full stack testing on the phone (LP: #1476201).

As you can see in the acceptance test, there is precedent for us intentionally increasing latency when we were trying to boost frame rates (because you do need to do that first). So it's not like NBS will be held to the same low-latency threshold; it's just a nice goal to have. My planned frame notification work will actually reduce latency of the full nested system even more than this, so even if NBS did have higher per-queue latency it wouldn't matter in the end.

And yeah, I think this is the last bit. As you can see all the work was done last year, so it's not terribly time consuming to try and turn on the optimizations now.

Daniel van Vugt (vanvugt) wrote :

Cool, Jenkins is failing for the right reasons. GLMark2 can't maintain a high enough frame rate on mobile. I'll need to fix that or this branch is a non-starter.

Daniel van Vugt (vanvugt) wrote :

Still needs more manual testing. There's a chance that slightly slow clients like the dash only ever keep up with 60Hz if you give them triple buffers. So theoretically such a pathological case could be made worse by this branch (skipping one in every 30 frames, i.e. two skips per second). That would not be good, so this needs some eyes on manual testing.
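To put rough numbers on that pathological case (the 60Hz display and the one-miss-in-30-frames rate are assumptions, not measurements):

// Back-of-envelope arithmetic for the stutter case described above.
constexpr int refresh_hz = 60;
constexpr int frames_per_skip = 30;                            // assumed miss rate
constexpr int skips_per_second = refresh_hz / frames_per_skip; // == 2
constexpr int effective_fps = refresh_hz - skips_per_second;   // roughly 58 fps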

If there's any doubt, this effort will need to be trashed, and that's OK too: whatever the outcome, my next task after this will benefit all clients regardless of their performance profile.

review: Needs Information
Daniel van Vugt (vanvugt) wrote :

Alternatively, this work may just end up depending on us fixing this first:
   https://trello.com/c/puXQXSYw

Mir CI Bot (mir-ci-bot) wrote :

PASSED: Continuous integration, rev:3310
https://mir-jenkins.ubuntu.com/job/mir-ci/302/
Executed test runs:
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build/90
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-0-fetch/96
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-1-sourcepkg/release=vivid+overlay/92
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-1-sourcepkg/release=xenial/92
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=amd64,compiler=gcc,platform=mesa,release=xenial/92
        deb: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=amd64,compiler=gcc,platform=mesa,release=xenial/92/artifact/output/*zip*/output.zip
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=cross-armhf,compiler=gcc,platform=android,release=vivid+overlay/92
        deb: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=cross-armhf,compiler=gcc,platform=android,release=vivid+overlay/92/artifact/output/*zip*/output.zip
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=i386,compiler=gcc,platform=android,release=vivid+overlay/92
        deb: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=i386,compiler=gcc,platform=android,release=vivid+overlay/92/artifact/output/*zip*/output.zip
    SUCCESS: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=i386,compiler=gcc,platform=mesa,release=xenial/92
        deb: https://mir-jenkins.ubuntu.com/job/build-2-binpkg-advanced/arch=i386,compiler=gcc,platform=mesa,release=xenial/92/artifact/output/*zip*/output.zip

Click here to trigger a rebuild:
https://mir-jenkins.ubuntu.com/job/mir-ci/302/rebuild

review: Approve (continuous-integration)

Unmerged revisions

3310. By Daniel van Vugt on 2016-02-15

Shrink the diff and add another handy test

3309. By Daniel van Vugt on 2016-02-15

Fix failing test (which expected high latency but didn't find it any more).
Add another test specifically to verify latency stays low after a period
of sufficiently fast rendering.

3308. By Daniel van Vugt on 2016-02-15

Re-enable (very conservatively) queue scaling again.

3307. By Daniel van Vugt on 2016-02-15

Merge latest trunk

3306. By Daniel van Vugt on 2016-02-12

Merge latest trunk

3305. By Daniel van Vugt on 2016-02-11

Merge latest trunk

3304. By Daniel van Vugt on 2016-02-10

Merge latest trunk

3303. By Daniel van Vugt on 2016-02-09

Merge latest trunk

3302. By Daniel van Vugt on 2016-02-08

Merge latest trunk

Preview Diff

=== modified file 'src/server/compositor/buffer_queue.cpp'
--- src/server/compositor/buffer_queue.cpp 2016-02-12 04:02:11 +0000
+++ src/server/compositor/buffer_queue.cpp 2016-02-15 08:26:32 +0000
@@ -137,7 +137,7 @@
     graphics::BufferProperties const& props,
     mc::FrameDroppingPolicyFactory const& policy_provider)
     : nbuffers{nbuffers},
-      frame_deadlines_threshold{-1}, // Disable scaling by default
+      frame_deadlines_threshold{30}, // around half a second to be sure
       frame_deadlines_met{0},
       scheduled_extra_frames{0},
       frame_dropping_enabled{false},

=== modified file 'tests/acceptance-tests/test_latency.cpp'
--- tests/acceptance-tests/test_latency.cpp 2016-02-10 04:27:20 +0000
+++ tests/acceptance-tests/test_latency.cpp 2016-02-15 08:26:32 +0000
@@ -190,6 +190,15 @@
         return max;
     }
 
+    unsigned int max_latency_from_frame(unsigned int index) const
+    {
+        unsigned int max = 0;
+        for (auto i = index; i < latency_list.size(); ++i)
+            if (latency_list[i] > max)
+                max = latency_list[i];
+        return max;
+    }
+
 private:
     std::mutex mutex;
     std::vector<unsigned int> latency_list;
@@ -246,12 +255,10 @@
     ASSERT_TRUE(stats.wait_for_posts(test_submissions,
                                      std::chrono::seconds(60)));
 
-    // Note: Using the "early release" optimization without dynamic queue
-    //       scaling enabled makes the expected latency possibly up to
-    //       nbuffers instead of nbuffers-1. After dynamic queue scaling is
-    //       enabled, the average will be lower than this.
-    float const expected_max_latency = expected_client_buffers;
-    float const expected_min_latency = expected_client_buffers - 1;
+    // This is to verify the server is performing as expected:
+    float const expected_max_latency = expected_client_buffers - 1;
+    // and this is just a sanity check that the test itself isn't broken:
+    float const expected_min_latency = 1;
 
     auto observed_latency = display.group.average_latency();
 
@@ -259,6 +266,44 @@
     EXPECT_THAT(observed_latency, Lt(expected_max_latency+error_margin));
 }
 
+TEST_F(ClientLatency, latency_scales_down_for_a_smooth_client)
+{
+    using namespace testing;
+
+    auto stream = mir_surface_get_buffer_stream(surface);
+    for(auto i = 0u; i < test_submissions; i++) {
+        auto submission_id = mir_debug_surface_current_buffer_id(surface);
+        stats.record_submission(submission_id);
+        mir_buffer_stream_swap_buffers_sync(stream);
+    }
+
+    ASSERT_TRUE(stats.wait_for_posts(test_submissions,
+                                     std::chrono::seconds(60)));
+
+    EXPECT_THAT(display.group.max_latency_from_frame(50), Le(1));
+}
+
+TEST_F(ClientLatency, latency_doesnt_scale_down_for_a_stuttery_client)
+{
+    using namespace testing;
+
+    auto stream = mir_surface_get_buffer_stream(surface);
+    for(auto i = 0u; i < test_submissions; i++) {
+        auto submission_id = mir_debug_surface_current_buffer_id(surface);
+        stats.record_submission(submission_id);
+        mir_buffer_stream_swap_buffers_sync(stream);
+
+        // Stutter
+        if (i % 10 == 0)
+            std::this_thread::sleep_for(vblank_interval * 2);
+    }
+
+    ASSERT_TRUE(stats.wait_for_posts(test_submissions,
+                                     std::chrono::seconds(60)));
+
+    EXPECT_THAT(display.group.max_latency_from_frame(50), Ge(2));
+}
+
 TEST_F(ClientLatency, latency_is_limited_to_nbuffers)
 {
     using namespace testing;
