Merge lp:~vanvugt/mir/double into lp:mir
Status: | Work in progress |
---|---|
Proposed branch: | lp:~vanvugt/mir/double |
Merge into: | lp:mir |
Diff against target: | 850 lines (+446/-53), 4 files modified: src/server/compositor/buffer_queue.cpp (+122/-22), src/server/compositor/buffer_queue.h (+7/-2), tests/integration-tests/surface_composition.cpp (+8/-1), tests/unit-tests/compositor/test_buffer_queue.cpp (+309/-28) |
To merge this branch: | bzr merge lp:~vanvugt/mir/double |
Related bugs: | |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
PS Jenkins bot (community) | continuous-integration | | Approve |
Daniel van Vugt | | | Needs Fixing |
Alexandros Frantzis (community) | | | Needs Fixing |
Alberto Aguirre | | | Pending |
Review via email: mp+227701@code.launchpad.net
Commit message
Reintroduce double buffering! (LP: #1240909)
Default to double buffering where possible, to minimize visible lag and
resource usage. The queue will be expanded automatically to triple buffers
if you enable framedropping, or if it is observed that the client is not
keeping up with the compositor.
Description of the change
PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1725
http://
Executed test runs:
FAILURE: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
- 1726. By Daniel van Vugt
-
Merge latest development-branch
PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1726
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Daniel van Vugt (vanvugt) wrote:
The failure is bug 1348095. Unrelated to this proposal.
- 1727. By Daniel van Vugt
-
Merge latest development-branch
- 1728. By Daniel van Vugt
-
Merge latest development-branch
- 1729. By Daniel van Vugt
-
Merge latest development-branch and fix conflicts.
- 1730. By Daniel van Vugt
-
Merge latest development-branch
- 1731. By Daniel van Vugt
-
Fix failing test case, merged from devel:
gives_compositor_the_newest_buffer_after_dropping_old_buffers
- 1732. By Daniel van Vugt
-
Make tests which require 3 buffers more consistent
- 1733. By Daniel van Vugt
-
Fix failing test: TEST_F(StaleFrames, are_dropped_when_restarting_compositor)
It was designed to assume three unique buffers available by default. Remove
that assumption.
- 1734. By Daniel van Vugt
-
Lower the resize delay to 100 frames so slow clients only take ~1.5s to
be detected as slow and change to triple buffers.
PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1734
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alexandros Frantzis (afrantzis) wrote:
Not a thorough review yet, looks good overall on a first pass.
86 + if (client_behind && missed_frames < queue_resize_delay_frames)
87 + {
88 + ++missed_frames;
89 + if (missed_frames >= queue_resize_delay_frames)
90 + extra_buffers = 1;
91 + }
92 +
93 + if (!client_behind && missed_frames > 0)
102 + if (missed_frames < queue_resize_delay_frames)
103 + --missed_frames;
It would be nice to have tests for the expected behavior of this piece of code.
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1734
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Daniel van Vugt (vanvugt) wrote:
There are already tests for that portion of logic:
TEST_
TEST_
TEST_
Although it's always possible to add more.
Alexandros Frantzis (afrantzis) wrote:
> Although it's always possible to add more.
One test that is missing (at least not tested explicitly) is to verify that: "If the ceiling is hit, keep it there with the extra buffer allocated so we don't shrink again and cause yet more missed frames."
- 1735. By Daniel van Vugt
-
Merge latest development-branch
- 1736. By Daniel van Vugt
-
Merge latest development-branch
- 1737. By Daniel van Vugt
-
Add unit test: BufferQueueTest.switch_to_triple_buffers_is_permanent
This verifies we don't accidentally ping-pong between 2 and 3 buffers.
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1737
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Alexandros Frantzis (afrantzis) wrote:
> * A client that's keeping up will be in-phase with composition. That
> * means it will stay, or quickly equalize at a point where there are
> * no client buffers still held when composition finishes.
This criterion only works well when the client is actually trying to keep up with the compositor (i.e. 60Hz). Clients that refresh less often on purpose or only when reacting to user input will become triple-buffered if the compositor is triggered by another source and they are not updating fast enough.
This happens, for example, if we run both mir_demo_
We probably need additional logic/heuristics to make this work properly (I haven't thought through what that logic might be, though).
- 1738. By Daniel van Vugt
-
Merge latest development-branch
- 1739. By Daniel van Vugt
-
Add a regression test for the idle-vs-slow client problem Alexandros
described. Presently failing.
- 1740. By Daniel van Vugt
-
Move the "missed frames" decision to a point where it won't be skewed by
multiple compositors (multi-monitor).
- 1741. By Daniel van Vugt
-
Prototype a detection of "idle" clients. Other questionable tests still
failing.
- 1742. By Daniel van Vugt
-
Simplify
- 1743. By Daniel van Vugt
-
More readable.
- 1744. By Daniel van Vugt
-
Fix failing test: BufferQueueTest.queue_size_scales_for_slow_clients
It needed some enhancement to not be detected as an "idle" client by the
new logic.
- 1745. By Daniel van Vugt
-
Fix test case: BufferQueueTest.switch_to_triple_buffers_is_permanent
The new idle detection logic was not seeing any client activity and so
deemed the client not to be trying to keep up. So added some "slow client" activity.
- 1746. By Daniel van Vugt
-
Simplified idle detection
- 1747. By Daniel van Vugt
-
Fix comment typo
- 1748. By Daniel van Vugt
-
Add another regression test which shows premature expansion for really
slow clients.
- 1749. By Daniel van Vugt
-
Fixed: Mis-detection of a really slow client (low self-determined frame rate)
as failing to keep up. Regression test now passes.
- 1750. By Daniel van Vugt
-
Remove debug code
Daniel van Vugt (vanvugt) wrote:
Excellent point Alexandros. That's now fixed with extra tests for the scenarios you describe.
Unfortunately I just realized that there's another situation that probably needs fixing: A system with a single slow-ticking client (like a clock) won't ever know that the compositor is capable of running any faster than the client is, and so will classify the client as slow --> triple buffers prematurely.
Daniel van Vugt (vanvugt) wrote:
I'm now tempted to divide this proposal in two:
1. Basic test case fixes to support future double buffering; and
2. Heuristic switching between double and triple buffering (work in progress).
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1750
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
- 1751. By Daniel van Vugt
-
Merge latest development-branch
- 1752. By Daniel van Vugt
-
Merge latest development-branch
- 1753. By Daniel van Vugt
-
Slightly simpler and more straightforward detection of slow vs idle clients.
- 1754. By Daniel van Vugt
-
Shrink the diff
- 1755. By Daniel van Vugt
-
More comments
- 1756. By Daniel van Vugt
-
Add a regression test for the slow-ticking-client-mostly-idle-desktop case.
- 1757. By Daniel van Vugt
-
test_buffer_queue.cpp: De-duplicate buffer counting logic
-
Fix indentation in new tests
- 1759. By Daniel van Vugt
-
test_buffer_queue.cpp: Major simplification - reuse a common function to
measure unique buffers.
- 1760. By Daniel van Vugt
-
Tidy up tests
- 1761. By Daniel van Vugt
-
Remove unnecessary sleeps from new tests
- 1762. By Daniel van Vugt
-
Go back to the old algorithm, keep the new tests.
- 1763. By Daniel van Vugt
-
A slightly more elegant measure of client_lag
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1762
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
- 1764. By Daniel van Vugt
-
Merge latest development-branch
- 1765. By Daniel van Vugt
-
Merge latest development-branch
- 1766. By Daniel van Vugt
-
Merge latest development-branch
- 1767. By Daniel van Vugt
-
Merge latest development-branch
- 1768. By Daniel van Vugt
-
Merge latest development-branch
- 1769. By Daniel van Vugt
-
Merge latest development-branch
- 1770. By Daniel van Vugt
-
Merge latest development-branch
- 1771. By Daniel van Vugt
-
Merge latest development-branch and fix conflicts.
- 1772. By Daniel van Vugt
-
Merge latest development-branch
- 1773. By Daniel van Vugt
-
Merge latest development-branch
- 1774. By Daniel van Vugt
-
Merge latest development-branch
- 1775. By Daniel van Vugt
-
Merge latest development-branch
- 1776. By Daniel van Vugt
-
Merge latest development-branch
- 1777. By Daniel van Vugt
-
Merge latest development-branch
- 1778. By Daniel van Vugt
-
Merge (rebase on) 'double-integration-tests' to shrink the diff of
this branch to its new parent.
- 1779. By Daniel van Vugt
-
Merge latest development-branch
- 1780. By Daniel van Vugt
-
Merge latest development-branch and fix conflicts.
- 1781. By Daniel van Vugt
-
Merge latest development-branch
- 1782. By Daniel van Vugt
-
Merge latest development-branch
- 1783. By Daniel van Vugt
-
Merge latest development-branch
- 1784. By Daniel van Vugt
-
Merge latest development-branch
- 1785. By Daniel van Vugt
-
Merge latest development-branch and fix a conflict.
- 1786. By Daniel van Vugt
-
Merge latest development-branch
- 1787. By Daniel van Vugt
-
Merge latest development-branch
- 1788. By Daniel van Vugt
-
Merge latest development-branch
- 1789. By Daniel van Vugt
-
Merge latest trunk
- 1790. By Daniel van Vugt
-
Merge latest trunk
- 1791. By Daniel van Vugt
-
Merge latest trunk
- 1792. By Daniel van Vugt
-
Merge latest trunk
- 1793. By Daniel van Vugt
-
Merge latest trunk
- 1794. By Daniel van Vugt
-
Merge latest trunk
- 1795. By Daniel van Vugt
-
Merge latest trunk
- 1796. By Daniel van Vugt
-
Merge latest trunk
- 1797. By Daniel van Vugt
-
Prototype fix for the crashing integration test
- 1798. By Daniel van Vugt
-
A simpler fix for the crashing test -- don't try to release the same
old_buffer multiple times.
- 1799. By Daniel van Vugt
-
Even simpler
Unmerged revisions
- 1799. By Daniel van Vugt
-
Even simpler
- 1798. By Daniel van Vugt
-
A simpler fix for the crashing test -- don't try to release the same
old_buffer multiple times.
- 1797. By Daniel van Vugt
-
Prototype fix for the crashing integration test
- 1796. By Daniel van Vugt
-
Merge latest trunk
- 1795. By Daniel van Vugt
-
Merge latest trunk
- 1794. By Daniel van Vugt
-
Merge latest trunk
- 1793. By Daniel van Vugt
-
Merge latest trunk
- 1792. By Daniel van Vugt
-
Merge latest trunk
- 1791. By Daniel van Vugt
-
Merge latest trunk
- 1790. By Daniel van Vugt
-
Merge latest trunk
Preview Diff
1 | === modified file 'src/server/compositor/buffer_queue.cpp' | |||
2 | --- src/server/compositor/buffer_queue.cpp 2014-10-29 06:21:10 +0000 | |||
3 | +++ src/server/compositor/buffer_queue.cpp 2014-11-23 08:19:33 +0000 | |||
4 | @@ -92,18 +92,23 @@ | |||
5 | 92 | } | 92 | } |
6 | 93 | 93 | ||
7 | 94 | mc::BufferQueue::BufferQueue( | 94 | mc::BufferQueue::BufferQueue( |
9 | 95 | int nbuffers, | 95 | int max_buffers, |
10 | 96 | std::shared_ptr<graphics::GraphicBufferAllocator> const& gralloc, | 96 | std::shared_ptr<graphics::GraphicBufferAllocator> const& gralloc, |
11 | 97 | graphics::BufferProperties const& props, | 97 | graphics::BufferProperties const& props, |
12 | 98 | mc::FrameDroppingPolicyFactory const& policy_provider) | 98 | mc::FrameDroppingPolicyFactory const& policy_provider) |
14 | 99 | : nbuffers{nbuffers}, | 99 | : min_buffers{std::min(2, max_buffers)}, // TODO: Configurable in future |
15 | 100 | max_buffers{max_buffers}, | ||
16 | 101 | missed_frames{0}, | ||
17 | 102 | queue_resize_delay_frames{100}, | ||
18 | 103 | extra_buffers{0}, | ||
19 | 104 | client_lag{0}, | ||
20 | 100 | frame_dropping_enabled{false}, | 105 | frame_dropping_enabled{false}, |
21 | 101 | the_properties{props}, | 106 | the_properties{props}, |
22 | 102 | force_new_compositor_buffer{false}, | 107 | force_new_compositor_buffer{false}, |
23 | 103 | callbacks_allowed{true}, | 108 | callbacks_allowed{true}, |
24 | 104 | gralloc{gralloc} | 109 | gralloc{gralloc} |
25 | 105 | { | 110 | { |
27 | 106 | if (nbuffers < 1) | 111 | if (max_buffers < 1) |
28 | 107 | { | 112 | { |
29 | 108 | BOOST_THROW_EXCEPTION( | 113 | BOOST_THROW_EXCEPTION( |
30 | 109 | std::logic_error("invalid number of buffers for BufferQueue")); | 114 | std::logic_error("invalid number of buffers for BufferQueue")); |
31 | @@ -111,9 +116,9 @@ | |||
32 | 111 | 116 | ||
33 | 112 | /* By default not all buffers are allocated. | 117 | /* By default not all buffers are allocated. |
34 | 113 | * If there is increased pressure by the client to acquire | 118 | * If there is increased pressure by the client to acquire |
36 | 114 | * more buffers, more will be allocated at that time (up to nbuffers) | 119 | * more buffers, more will be allocated at that time (up to max_buffers) |
37 | 115 | */ | 120 | */ |
39 | 116 | for(int i = 0; i < std::min(nbuffers, 2); i++) | 121 | for (int i = 0; i < min_buffers; ++i) |
40 | 117 | { | 122 | { |
41 | 118 | buffers.push_back(gralloc->alloc_buffer(the_properties)); | 123 | buffers.push_back(gralloc->alloc_buffer(the_properties)); |
42 | 119 | } | 124 | } |
43 | @@ -129,7 +134,7 @@ | |||
44 | 129 | /* Special case: with one buffer both clients and compositors | 134 | /* Special case: with one buffer both clients and compositors |
45 | 130 | * need to share the same buffer | 135 | * need to share the same buffer |
46 | 131 | */ | 136 | */ |
48 | 132 | if (nbuffers == 1) | 137 | if (max_buffers == 1) |
49 | 133 | free_buffers.push_back(current_compositor_buffer); | 138 | free_buffers.push_back(current_compositor_buffer); |
50 | 134 | 139 | ||
51 | 135 | framedrop_policy = policy_provider.create_policy([this] | 140 | framedrop_policy = policy_provider.create_policy([this] |
52 | @@ -173,6 +178,7 @@ | |||
53 | 173 | void mc::BufferQueue::client_acquire(mc::BufferQueue::Callback complete) | 178 | void mc::BufferQueue::client_acquire(mc::BufferQueue::Callback complete) |
54 | 174 | { | 179 | { |
55 | 175 | std::unique_lock<decltype(guard)> lock(guard); | 180 | std::unique_lock<decltype(guard)> lock(guard); |
56 | 181 | // fprintf(stderr, "%s\n", __FUNCTION__); | ||
57 | 176 | 182 | ||
58 | 177 | pending_client_notifications.push_back(std::move(complete)); | 183 | pending_client_notifications.push_back(std::move(complete)); |
59 | 178 | 184 | ||
60 | @@ -184,12 +190,8 @@ | |||
61 | 184 | return; | 190 | return; |
62 | 185 | } | 191 | } |
63 | 186 | 192 | ||
64 | 187 | /* No empty buffers, attempt allocating more | ||
65 | 188 | * TODO: need a good heuristic to switch | ||
66 | 189 | * between double-buffering to n-buffering | ||
67 | 190 | */ | ||
68 | 191 | int const allocated_buffers = buffers.size(); | 193 | int const allocated_buffers = buffers.size(); |
70 | 192 | if (allocated_buffers < nbuffers) | 194 | if (allocated_buffers < ideal_buffers()) |
71 | 193 | { | 195 | { |
72 | 194 | auto const& buffer = gralloc->alloc_buffer(the_properties); | 196 | auto const& buffer = gralloc->alloc_buffer(the_properties); |
73 | 195 | buffers.push_back(buffer); | 197 | buffers.push_back(buffer); |
74 | @@ -215,6 +217,8 @@ | |||
75 | 215 | { | 217 | { |
76 | 216 | std::lock_guard<decltype(guard)> lock(guard); | 218 | std::lock_guard<decltype(guard)> lock(guard); |
77 | 217 | 219 | ||
78 | 220 | // fprintf(stderr, "%s\n", __FUNCTION__); | ||
79 | 221 | |||
80 | 218 | if (buffers_owned_by_client.empty()) | 222 | if (buffers_owned_by_client.empty()) |
81 | 219 | { | 223 | { |
82 | 220 | BOOST_THROW_EXCEPTION( | 224 | BOOST_THROW_EXCEPTION( |
83 | @@ -235,6 +239,7 @@ | |||
84 | 235 | mc::BufferQueue::compositor_acquire(void const* user_id) | 239 | mc::BufferQueue::compositor_acquire(void const* user_id) |
85 | 236 | { | 240 | { |
86 | 237 | std::unique_lock<decltype(guard)> lock(guard); | 241 | std::unique_lock<decltype(guard)> lock(guard); |
87 | 242 | // fprintf(stderr, "%s\n", __FUNCTION__); | ||
88 | 238 | 243 | ||
89 | 239 | bool use_current_buffer = false; | 244 | bool use_current_buffer = false; |
90 | 240 | if (!current_buffer_users.empty() && !is_a_current_buffer_user(user_id)) | 245 | if (!current_buffer_users.empty() && !is_a_current_buffer_user(user_id)) |
91 | @@ -280,7 +285,10 @@ | |||
92 | 280 | buffer_for(current_compositor_buffer, buffers); | 285 | buffer_for(current_compositor_buffer, buffers); |
93 | 281 | 286 | ||
94 | 282 | if (buffer_to_release) | 287 | if (buffer_to_release) |
95 | 288 | { | ||
96 | 289 | client_lag = 1; | ||
97 | 283 | release(buffer_to_release, std::move(lock)); | 290 | release(buffer_to_release, std::move(lock)); |
98 | 291 | } | ||
99 | 284 | 292 | ||
100 | 285 | return acquired_buffer; | 293 | return acquired_buffer; |
101 | 286 | } | 294 | } |
102 | @@ -288,6 +296,7 @@ | |||
103 | 288 | void mc::BufferQueue::compositor_release(std::shared_ptr<graphics::Buffer> const& buffer) | 296 | void mc::BufferQueue::compositor_release(std::shared_ptr<graphics::Buffer> const& buffer) |
104 | 289 | { | 297 | { |
105 | 290 | std::unique_lock<decltype(guard)> lock(guard); | 298 | std::unique_lock<decltype(guard)> lock(guard); |
106 | 299 | // fprintf(stderr, "%s\n", __FUNCTION__); | ||
107 | 291 | 300 | ||
108 | 292 | if (!remove(buffer.get(), buffers_sent_to_compositor)) | 301 | if (!remove(buffer.get(), buffers_sent_to_compositor)) |
109 | 293 | { | 302 | { |
110 | @@ -299,7 +308,52 @@ | |||
111 | 299 | if (contains(buffer.get(), buffers_sent_to_compositor)) | 308 | if (contains(buffer.get(), buffers_sent_to_compositor)) |
112 | 300 | return; | 309 | return; |
113 | 301 | 310 | ||
115 | 302 | if (nbuffers <= 1) | 311 | /* |
116 | 312 | * Calculate if we need extra buffers in the queue to account for a slow | ||
117 | 313 | * client that can't keep up with composition. | ||
118 | 314 | */ | ||
119 | 315 | if (frame_dropping_enabled) | ||
120 | 316 | { | ||
121 | 317 | missed_frames = 0; | ||
122 | 318 | extra_buffers = 0; | ||
123 | 319 | } | ||
124 | 320 | else | ||
125 | 321 | { | ||
126 | 322 | /* | ||
127 | 323 | * A client that's keeping up will be in-phase with composition. That | ||
128 | 324 | * means it will stay, or quickly equalize at a point where there are | ||
129 | 325 | * no client buffers still held when composition finishes. | ||
130 | 326 | */ | ||
131 | 327 | if (client_lag == 1 && missed_frames < queue_resize_delay_frames) | ||
132 | 328 | { | ||
133 | 329 | ++missed_frames; | ||
134 | 330 | if (missed_frames >= queue_resize_delay_frames) | ||
135 | 331 | extra_buffers = 1; | ||
136 | 332 | } | ||
137 | 333 | |||
138 | 334 | if (client_lag != 1 && missed_frames > 0) | ||
139 | 335 | { | ||
140 | 336 | /* | ||
141 | 337 | * Allow missed_frames to recover back down to zero, so long as | ||
142 | 338 | * the ceiling is never hit (meaning you're keeping up most of the | ||
143 | 339 | * time). If the ceiling is hit, keep it there with the extra | ||
144 | 340 | * buffer allocated so we don't shrink again and cause yet more | ||
145 | 341 | * missed frames. | ||
146 | 342 | */ | ||
147 | 343 | if (missed_frames < queue_resize_delay_frames) | ||
148 | 344 | --missed_frames; | ||
149 | 345 | } | ||
150 | 346 | |||
151 | 347 | // fprintf(stderr, "missed_frames %d, extra %d\n", | ||
152 | 348 | // missed_frames, extra_buffers); | ||
153 | 349 | } | ||
154 | 350 | |||
155 | 351 | // Let client_lag go above 1 to represent an idle client (one that's | ||
156 | 352 | // sleeping and not trying to keep up with the compositor) | ||
157 | 353 | if (client_lag) | ||
158 | 354 | ++client_lag; | ||
159 | 355 | |||
160 | 356 | if (max_buffers <= 1) | ||
161 | 303 | return; | 357 | return; |
162 | 304 | 358 | ||
163 | 305 | /* | 359 | /* |
164 | @@ -322,7 +376,10 @@ | |||
165 | 322 | } | 376 | } |
166 | 323 | 377 | ||
167 | 324 | if (current_compositor_buffer != buffer.get()) | 378 | if (current_compositor_buffer != buffer.get()) |
168 | 379 | { | ||
169 | 380 | client_lag = 0; | ||
170 | 325 | release(buffer.get(), std::move(lock)); | 381 | release(buffer.get(), std::move(lock)); |
171 | 382 | } | ||
172 | 326 | } | 383 | } |
173 | 327 | 384 | ||
174 | 328 | std::shared_ptr<mg::Buffer> mc::BufferQueue::snapshot_acquire() | 385 | std::shared_ptr<mg::Buffer> mc::BufferQueue::snapshot_acquire() |
175 | @@ -354,6 +411,14 @@ | |||
176 | 354 | { | 411 | { |
177 | 355 | std::lock_guard<decltype(guard)> lock(guard); | 412 | std::lock_guard<decltype(guard)> lock(guard); |
178 | 356 | frame_dropping_enabled = flag; | 413 | frame_dropping_enabled = flag; |
179 | 414 | |||
180 | 415 | while (static_cast<int>(buffers.size()) > ideal_buffers() && | ||
181 | 416 | !free_buffers.empty()) | ||
182 | 417 | { | ||
183 | 418 | auto surplus = free_buffers.back(); | ||
184 | 419 | free_buffers.pop_back(); | ||
185 | 420 | drop_buffer(surplus); | ||
186 | 421 | } | ||
187 | 357 | } | 422 | } |
188 | 358 | 423 | ||
189 | 359 | bool mc::BufferQueue::framedropping_allowed() const | 424 | bool mc::BufferQueue::framedropping_allowed() const |
190 | @@ -397,20 +462,14 @@ | |||
191 | 397 | int mc::BufferQueue::buffers_free_for_client() const | 462 | int mc::BufferQueue::buffers_free_for_client() const |
192 | 398 | { | 463 | { |
193 | 399 | std::lock_guard<decltype(guard)> lock(guard); | 464 | std::lock_guard<decltype(guard)> lock(guard); |
202 | 400 | int ret = 1; | 465 | return max_buffers > 1 ? free_buffers.size() : 1; |
195 | 401 | if (nbuffers > 1) | ||
196 | 402 | { | ||
197 | 403 | int nfree = free_buffers.size(); | ||
198 | 404 | int future_growth = nbuffers - buffers.size(); | ||
199 | 405 | ret = nfree + future_growth; | ||
200 | 406 | } | ||
201 | 407 | return ret; | ||
203 | 408 | } | 466 | } |
204 | 409 | 467 | ||
205 | 410 | void mc::BufferQueue::give_buffer_to_client( | 468 | void mc::BufferQueue::give_buffer_to_client( |
206 | 411 | mg::Buffer* buffer, | 469 | mg::Buffer* buffer, |
207 | 412 | std::unique_lock<std::mutex> lock) | 470 | std::unique_lock<std::mutex> lock) |
208 | 413 | { | 471 | { |
209 | 472 | // fprintf(stderr, "%s\n", __FUNCTION__); | ||
210 | 414 | /* Clears callback */ | 473 | /* Clears callback */ |
211 | 415 | auto give_to_client_cb = std::move(pending_client_notifications.front()); | 474 | auto give_to_client_cb = std::move(pending_client_notifications.front()); |
212 | 416 | pending_client_notifications.pop_front(); | 475 | pending_client_notifications.pop_front(); |
213 | @@ -424,7 +483,7 @@ | |||
214 | 424 | /* Special case: the current compositor buffer also needs to be | 483 | /* Special case: the current compositor buffer also needs to be |
215 | 425 | * replaced as it's shared with the client | 484 | * replaced as it's shared with the client |
216 | 426 | */ | 485 | */ |
218 | 427 | if (nbuffers == 1) | 486 | if (max_buffers == 1) |
219 | 428 | current_compositor_buffer = buffer; | 487 | current_compositor_buffer = buffer; |
220 | 429 | } | 488 | } |
221 | 430 | 489 | ||
222 | @@ -463,7 +522,13 @@ | |||
223 | 463 | mg::Buffer* buffer, | 522 | mg::Buffer* buffer, |
224 | 464 | std::unique_lock<std::mutex> lock) | 523 | std::unique_lock<std::mutex> lock) |
225 | 465 | { | 524 | { |
227 | 466 | if (!pending_client_notifications.empty()) | 525 | int used_buffers = buffers.size() - free_buffers.size(); |
228 | 526 | |||
229 | 527 | if (used_buffers > ideal_buffers() && max_buffers > 1) | ||
230 | 528 | { | ||
231 | 529 | drop_buffer(buffer); | ||
232 | 530 | } | ||
233 | 531 | else if (!pending_client_notifications.empty()) | ||
234 | 467 | { | 532 | { |
235 | 468 | framedrop_policy->swap_unblocked(); | 533 | framedrop_policy->swap_unblocked(); |
236 | 469 | give_buffer_to_client(buffer, std::move(lock)); | 534 | give_buffer_to_client(buffer, std::move(lock)); |
237 | @@ -472,6 +537,41 @@ | |||
238 | 472 | free_buffers.push_back(buffer); | 537 | free_buffers.push_back(buffer); |
239 | 473 | } | 538 | } |
240 | 474 | 539 | ||
241 | 540 | int mc::BufferQueue::ideal_buffers() const | ||
242 | 541 | { | ||
243 | 542 | int result = frame_dropping_enabled ? 3 : 2; | ||
244 | 543 | |||
245 | 544 | // Add extra buffers if we can see the client's not keeping up with | ||
246 | 545 | // composition. | ||
247 | 546 | result += extra_buffers; | ||
248 | 547 | |||
249 | 548 | if (result < min_buffers) | ||
250 | 549 | result = min_buffers; | ||
251 | 550 | if (result > max_buffers) | ||
252 | 551 | result = max_buffers; | ||
253 | 552 | |||
254 | 553 | return result; | ||
255 | 554 | } | ||
256 | 555 | |||
257 | 556 | void mc::BufferQueue::set_resize_delay(int nframes) | ||
258 | 557 | { | ||
259 | 558 | queue_resize_delay_frames = nframes; | ||
260 | 559 | if (nframes == 0) | ||
261 | 560 | extra_buffers = 1; | ||
262 | 561 | } | ||
263 | 562 | |||
264 | 563 | void mc::BufferQueue::drop_buffer(graphics::Buffer* buffer) | ||
265 | 564 | { | ||
266 | 565 | for (auto i = buffers.begin(); i != buffers.end(); ++i) | ||
267 | 566 | { | ||
268 | 567 | if (i->get() == buffer) | ||
269 | 568 | { | ||
270 | 569 | buffers.erase(i); | ||
271 | 570 | break; | ||
272 | 571 | } | ||
273 | 572 | } | ||
274 | 573 | } | ||
275 | 574 | |||
276 | 475 | void mc::BufferQueue::drop_frame(std::unique_lock<std::mutex> lock) | 575 | void mc::BufferQueue::drop_frame(std::unique_lock<std::mutex> lock) |
277 | 476 | { | 576 | { |
278 | 477 | auto buffer_to_give = pop(ready_to_composite_queue); | 577 | auto buffer_to_give = pop(ready_to_composite_queue); |
279 | 478 | 578 | ||
280 | === modified file 'src/server/compositor/buffer_queue.h' | |||
281 | --- src/server/compositor/buffer_queue.h 2014-10-29 06:21:10 +0000 | |||
282 | +++ src/server/compositor/buffer_queue.h 2014-11-23 08:19:33 +0000 | |||
283 | @@ -42,7 +42,7 @@ | |||
284 | 42 | public: | 42 | public: |
285 | 43 | typedef std::function<void(graphics::Buffer* buffer)> Callback; | 43 | typedef std::function<void(graphics::Buffer* buffer)> Callback; |
286 | 44 | 44 | ||
288 | 45 | BufferQueue(int nbuffers, | 45 | BufferQueue(int max_buffers, |
289 | 46 | std::shared_ptr<graphics::GraphicBufferAllocator> const& alloc, | 46 | std::shared_ptr<graphics::GraphicBufferAllocator> const& alloc, |
290 | 47 | graphics::BufferProperties const& props, | 47 | graphics::BufferProperties const& props, |
291 | 48 | FrameDroppingPolicyFactory const& policy_provider); | 48 | FrameDroppingPolicyFactory const& policy_provider); |
292 | @@ -64,6 +64,7 @@ | |||
293 | 64 | bool is_a_current_buffer_user(void const* user_id) const; | 64 | bool is_a_current_buffer_user(void const* user_id) const; |
294 | 65 | void drop_old_buffers() override; | 65 | void drop_old_buffers() override; |
295 | 66 | void drop_client_requests() override; | 66 | void drop_client_requests() override; |
296 | 67 | void set_resize_delay(int nframes); ///< negative means never resize | ||
297 | 67 | 68 | ||
298 | 68 | private: | 69 | private: |
299 | 69 | void give_buffer_to_client(graphics::Buffer* buffer, | 70 | void give_buffer_to_client(graphics::Buffer* buffer, |
300 | @@ -71,6 +72,8 @@ | |||
301 | 71 | void release(graphics::Buffer* buffer, | 72 | void release(graphics::Buffer* buffer, |
302 | 72 | std::unique_lock<std::mutex> lock); | 73 | std::unique_lock<std::mutex> lock); |
303 | 73 | void drop_frame(std::unique_lock<std::mutex> lock); | 74 | void drop_frame(std::unique_lock<std::mutex> lock); |
304 | 75 | int ideal_buffers() const; | ||
305 | 76 | void drop_buffer(graphics::Buffer* buffer); | ||
306 | 74 | 77 | ||
307 | 75 | mutable std::mutex guard; | 78 | mutable std::mutex guard; |
308 | 76 | 79 | ||
309 | @@ -86,7 +89,9 @@ | |||
310 | 86 | 89 | ||
311 | 87 | std::deque<Callback> pending_client_notifications; | 90 | std::deque<Callback> pending_client_notifications; |
312 | 88 | 91 | ||
314 | 89 | int nbuffers; | 92 | int const min_buffers, max_buffers; |
315 | 93 | int missed_frames, queue_resize_delay_frames, extra_buffers; | ||
316 | 94 | int client_lag; | ||
317 | 90 | bool frame_dropping_enabled; | 95 | bool frame_dropping_enabled; |
318 | 91 | graphics::BufferProperties the_properties; | 96 | graphics::BufferProperties the_properties; |
319 | 92 | bool force_new_compositor_buffer; | 97 | bool force_new_compositor_buffer; |
320 | 93 | 98 | ||
321 | === modified file 'tests/integration-tests/surface_composition.cpp' | |||
322 | --- tests/integration-tests/surface_composition.cpp 2014-10-02 11:56:12 +0000 | |||
323 | +++ tests/integration-tests/surface_composition.cpp 2014-11-23 08:19:33 +0000 | |||
324 | @@ -108,16 +108,23 @@ | |||
325 | 108 | 108 | ||
326 | 109 | mg::Buffer* old_buffer{nullptr}; | 109 | mg::Buffer* old_buffer{nullptr}; |
327 | 110 | 110 | ||
328 | 111 | bool called_back = true; | ||
329 | 111 | auto const callback = [&] (mg::Buffer* new_buffer) | 112 | auto const callback = [&] (mg::Buffer* new_buffer) |
330 | 112 | { | 113 | { |
331 | 113 | // If surface is dead then callback is not expected | 114 | // If surface is dead then callback is not expected |
332 | 114 | EXPECT_THAT(surface.get(), NotNull()); | 115 | EXPECT_THAT(surface.get(), NotNull()); |
333 | 115 | old_buffer = new_buffer; | 116 | old_buffer = new_buffer; |
334 | 117 | called_back = true; | ||
335 | 116 | }; | 118 | }; |
336 | 117 | 119 | ||
337 | 118 | // Exhaust the buffers to ensure we have a pending swap to complete | 120 | // Exhaust the buffers to ensure we have a pending swap to complete |
339 | 119 | for (auto i = 0; i != number_of_buffers; ++i) | 121 | // But also be careful to not pass a formerly released non-null old_buffer |
340 | 122 | // in to swap_buffers... | ||
341 | 123 | while (called_back) | ||
342 | 124 | { | ||
343 | 125 | called_back = false; | ||
344 | 120 | surface->swap_buffers(old_buffer, callback); | 126 | surface->swap_buffers(old_buffer, callback); |
345 | 127 | } | ||
346 | 121 | 128 | ||
347 | 122 | auto const renderable = surface->compositor_snapshot(this); | 129 | auto const renderable = surface->compositor_snapshot(this); |
348 | 123 | 130 | ||
349 | 124 | 131 | ||
=== modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp'
--- tests/unit-tests/compositor/test_buffer_queue.cpp	2014-10-29 06:21:10 +0000
+++ tests/unit-tests/compositor/test_buffer_queue.cpp	2014-11-23 08:19:33 +0000
@@ -304,6 +304,7 @@
     int const nbuffers = 3;
     mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
+    q.set_resize_delay(0); // Force enabling of the third buffer
     int const prefill = q.buffers_free_for_client();
     ASSERT_THAT(prefill, Gt(0));
     for (int i = 0; i < prefill; ++i)
@@ -314,10 +315,7 @@
     }
 
     auto handle1 = client_acquire_async(q);
-    ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
-
     auto handle2 = client_acquire_async(q);
-    ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
 
     for (int i = 0; i < nbuffers + 1; ++i)
         q.compositor_release(q.compositor_acquire(this));
@@ -386,7 +384,6 @@
     handle->release_buffer();
 
     handle = client_acquire_async(q);
-    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
     handle->release_buffer();
 
     auto comp_buffer = q.compositor_acquire(this);
@@ -400,6 +397,8 @@
     {
         mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
+        q.set_resize_delay(0); // Force enabling of the third buffer
+
        auto handle1 = client_acquire_async(q);
        ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
 
@@ -426,7 +425,7 @@
             compositor_thread, std::ref(q), std::ref(done));
 
         std::unordered_set<uint32_t> ids_acquired;
-        int const max_ownable_buffers = nbuffers - 1;
+        int const max_ownable_buffers = q.buffers_free_for_client();
         for (int i = 0; i < max_ownable_buffers*2; ++i)
         {
             std::vector<mg::Buffer *> client_buffers;
@@ -445,7 +444,9 @@
             }
         }
 
-        EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
+        // Expect only two since we're now double-buffered and are not
+        // allowing frame dropping.
+        EXPECT_THAT(ids_acquired.size(), Eq(2));
     }
 }
 
@@ -499,7 +500,7 @@
     {
         std::deque<mg::BufferID> client_release_sequence;
         std::vector<mg::Buffer *> buffers;
-        int const max_ownable_buffers = nbuffers - 1;
+        int const max_ownable_buffers = q.buffers_free_for_client();
         for (int i = 0; i < max_ownable_buffers; ++i)
         {
             auto handle = client_acquire_async(q);
@@ -632,13 +633,13 @@
     handle->release_buffer();
 
     handle = client_acquire_async(q);
-    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
 
     // in the original bug, compositor would be given the wrong buffer here
     auto compositor_buffer = q.compositor_acquire(this);
 
     EXPECT_THAT(compositor_buffer->id(), Eq(first_ready_buffer_id));
 
+    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
     handle->release_buffer();
     q.compositor_release(compositor_buffer);
 }
@@ -806,7 +807,14 @@
             handle->release_buffer();
         }
 
-        EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
+        /*
+         * Dynamic queue scaling sensibly limits the framedropping client to
+         * two non-overlapping buffers, before overwriting old ones. Allowing
+         * any more would just waste space (buffers). So this means a
+         * frame-dropping client won't ever see more than 3 unique buffers
+         */
+        int const max_ownable_buffers = std::min(nbuffers, 3);
+        EXPECT_THAT(ids_acquired.size(), Eq(max_ownable_buffers));
     }
 }
 
@@ -902,6 +910,7 @@
     auto const frame_time = std::chrono::milliseconds(16);
 
     q.allow_framedropping(false);
+    q.set_resize_delay(1);
 
     std::atomic<bool> done(false);
     std::mutex sync;
@@ -977,20 +986,13 @@
     }
 }
 
-namespace
-{
-int max_ownable_buffers(int nbuffers)
-{
-    return (nbuffers == 1) ? 1 : nbuffers - 1;
-}
-}
-
 TEST_F(BufferQueueTest, compositor_acquires_resized_frames)
 {
     for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers)
     {
         mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
         std::vector<mg::BufferID> history;
+        std::shared_ptr<AcquireWaitHandle> producing[5];
 
         const int width0 = 123;
         const int height0 = 456;
@@ -1001,8 +1003,7 @@
         int const nbuffers_to_use = q.buffers_free_for_client();
         ASSERT_THAT(nbuffers_to_use, Gt(0));
 
-        int max_buffers{max_ownable_buffers(nbuffers)};
-        for (int produce = 0; produce < max_buffers; ++produce)
+        for (int produce = 0; produce < nbuffers_to_use; ++produce)
         {
             geom::Size new_size{width, height};
             width += dx;
@@ -1012,16 +1013,23 @@
             auto handle = client_acquire_async(q);
             ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
             history.emplace_back(handle->id());
+            producing[produce] = handle;
             auto buffer = handle->buffer();
             ASSERT_THAT(buffer->size(), Eq(new_size));
-            handle->release_buffer();
+        }
+
+        // Overlap all the client_acquires asynchronously. It's the only way
+        // the new dynamic queue scaling will let a client hold that many...
+        for (int complete = 0; complete < nbuffers_to_use; ++complete)
+        {
+            producing[complete]->release_buffer();
         }
 
         width = width0;
         height = height0;
 
-        ASSERT_THAT(history.size(), Eq(max_buffers));
-        for (int consume = 0; consume < max_buffers; ++consume)
+        ASSERT_THAT(history.size(), Eq(nbuffers_to_use));
+        for (int consume = 0; consume < nbuffers_to_use; ++consume)
         {
             geom::Size expect_size{width, height};
             width += dx;
@@ -1062,7 +1070,8 @@
             basic_properties,
             policy_factory);
 
-        for (int i = 0; i < max_ownable_buffers(nbuffers); i++)
+        auto max_ownable_buffers = q.buffers_free_for_client();
+        for (int i = 0; i < max_ownable_buffers; i++)
         {
             auto client = client_acquire_sync(q);
             q.client_release(client);
@@ -1089,7 +1098,8 @@
             basic_properties,
             policy_factory);
 
-        for (int i = 0; i < max_ownable_buffers(nbuffers); i++)
+        int const max_ownable_buffers = q.buffers_free_for_client();
+        for (int i = 0; i < max_ownable_buffers; i++)
        {
             auto client = client_acquire_sync(q);
             q.client_release(client);
@@ -1379,7 +1389,7 @@
             compositor_thread, std::ref(q), std::ref(done));
 
         std::unordered_set<mg::Buffer *> unique_buffers_acquired;
-        int const max_ownable_buffers = nbuffers - 1;
+        int const max_ownable_buffers = q.buffers_free_for_client();
         for (int frame = 0; frame < max_ownable_buffers*2; frame++)
         {
             std::vector<mg::Buffer *> client_buffers;
@@ -1398,13 +1408,14 @@
             }
         }
 
-        EXPECT_THAT(unique_buffers_acquired.size(), Eq(nbuffers));
+        // Expect one more than max_ownable_buffers, to include the one that
+        // is silently reserved for compositing.
+        EXPECT_THAT(unique_buffers_acquired.size(), Eq(max_ownable_buffers+1));
 
     }
 }
 
-/* FIXME (enabling this optimization breaks timing tests) */
-TEST_F(BufferQueueTest, DISABLED_synchronous_clients_only_get_two_real_buffers)
+TEST_F(BufferQueueTest, synchronous_clients_only_get_two_real_buffers)
 {
     for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)
     {
@@ -1413,6 +1424,13 @@
 
         std::atomic<bool> done(false);
         auto unblock = [&done] { done = true; };
+
+        // With an unthrottled compositor_thread it will look like the client
+        // isn't keeping up and the buffer queue would normally auto-expand.
+        // Increase the auto-expansion threshold to ensure that doesn't happen
+        // during this test...
+        q.set_resize_delay(-1);
+
         mt::AutoUnblockThread compositor(unblock,
             compositor_thread, std::ref(q), std::ref(done));
 
@@ -1495,6 +1513,9 @@
     int const nbuffers = 3;
     mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
+    // Ensure this test gets 3 real buffers immediately.
+    q.set_resize_delay(0);
+
     auto handle1 = client_acquire_async(q);
     ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
     handle1->release_buffer();
@@ -1537,3 +1558,263 @@
     auto comp2 = q.compositor_acquire(new_compositor_id);
     ASSERT_THAT(comp2->id(), Eq(handle2->id()));
 }
+
+namespace
+{
+int unique_buffers(mc::BufferQueue& q)
+{
+    std::atomic<bool> done(false);
+
+    auto unblock = [&done] { done = true; };
+    mt::AutoUnblockThread compositor(unblock,
+        compositor_thread, std::ref(q), std::ref(done));
+
+    std::unordered_set<mg::Buffer*> buffers_acquired;
+    for (int frame = 0; frame < 100; frame++)
+    {
+        auto handle = client_acquire_async(q);
+        handle->wait_for(std::chrono::seconds(1));
+        if (!handle->has_acquired_buffer())
+            return -1;
+        buffers_acquired.insert(handle->buffer());
+        handle->release_buffer();
+    }
+
+    return static_cast<int>(buffers_acquired.size());
+}
+
+int unique_synchronous_buffers(mc::BufferQueue& q)
+{
+    if (q.framedropping_allowed())
+        return 0;
+
+    int const max = 10;
+
+    // Flush the queue
+    for (int f = 0; f < max; ++f)
+        q.compositor_release(q.compositor_acquire(0));
+
+    auto compositor = q.compositor_acquire(0);
+    int count = 1;  // ^ count the compositor buffer
+
+    std::shared_ptr<AcquireWaitHandle> client;
+    while (count < max)
+    {
+        client = client_acquire_async(q);
+        if (!client->has_acquired_buffer())
+            break;
+        ++count;
+        client->release_buffer();
+    }
+
+    q.compositor_release(compositor);
+    EXPECT_TRUE(client->has_acquired_buffer());
+    client->release_buffer();
+
+    // Flush the queue
+    for (int f = 0; f < max; ++f)
+        q.compositor_release(q.compositor_acquire(0));
+
+    return count;
+}
+} // namespace
+
+TEST_F(BufferQueueTest, queue_size_scales_instantly_on_framedropping)
+{
+    for (int max_buffers = 1; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.set_resize_delay(-1);
+
+        // Default: No frame dropping; expect double buffering
+        q.allow_framedropping(false);
+        EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
+
+        // Enable frame dropping; expect triple buffering immediately
+        q.allow_framedropping(true);
+        EXPECT_EQ(std::min(max_buffers, 3), unique_buffers(q));
+
+        // Revert back to no frame dropping; expect double buffering
+        q.allow_framedropping(false);
+        EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
+    }
+}
+
+TEST_F(BufferQueueTest, queue_size_scales_for_slow_clients)
+{
+    for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.allow_framedropping(false);
+
+        int const delay = 10;
+        q.set_resize_delay(delay);
+
+        // Verify we're starting with double buffering
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+
+        // Simulate a slow client. Not an idle one, but one trying to keep up
+        // and repeatedly missing the frame deadline.
+        for (int f = 0; f < delay*2; ++f)
+        {
+            auto client = client_acquire_async(q);
+            q.compositor_release(q.compositor_acquire(this));
+            ASSERT_TRUE(client->has_acquired_buffer());
+            client->release_buffer();
+        }
+
+        // Verify we now have triple buffers
+        EXPECT_EQ(3, unique_synchronous_buffers(q));
+    }
+}
+
+TEST_F(BufferQueueTest, switch_to_triple_buffers_is_permanent)
+{
+    for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.allow_framedropping(false);
+
+        int const delay = 10;
+        q.set_resize_delay(delay);
+
+        // First, verify we only have 2 real buffers
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+
+        // Simulate a slow client. Not an idle one, but one trying to keep up
+        // and repeatedly missing the frame deadline.
+        for (int f = 0; f < delay*2; ++f)
+        {
+            auto client = client_acquire_async(q);
+            q.compositor_release(q.compositor_acquire(this));
+            ASSERT_TRUE(client->has_acquired_buffer());
+            client->release_buffer();
+        }
+
+        EXPECT_EQ(3, unique_synchronous_buffers(q));
+
+        // Now let the client behave "well" and keep up.
+        for (int f = 0; f < delay*10; ++f)
+        {
+            q.client_release(client_acquire_sync(q));
+            q.compositor_release(q.compositor_acquire(this));
+        }
+
+        // Make sure the queue has stayed expanded. Do the original test
+        // again and verify clients get one more than before...
+        EXPECT_EQ(3, unique_synchronous_buffers(q));
+    }
+}
+
+TEST_F(BufferQueueTest, idle_clients_dont_get_expanded_buffers)
+{
+    for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.allow_framedropping(false);
+
+        int const delay = 10;
+        q.set_resize_delay(delay);
+
+        // First, verify we only have 2 real buffers
+        int const contracted_nbuffers = 2;
+        for (int f = 0; f < contracted_nbuffers-1; ++f)
+        {
+            auto client1 = client_acquire_async(q);
+            ASSERT_TRUE(client1->has_acquired_buffer());
+            client1->release_buffer();
+        }
+        auto client2 = client_acquire_async(q);
+        ASSERT_FALSE(client2->has_acquired_buffer());
+        q.compositor_release(q.compositor_acquire(this));
+        ASSERT_TRUE(client2->has_acquired_buffer());
+
+        // Now hold client2's buffer for a little too long...
+        for (int f = 0; f < delay*2; ++f)
+            q.compositor_release(q.compositor_acquire(this));
+        // this would naively look like grounds for expanding the queue.
+
+        client2->release_buffer();
+
+        // Verify we still only have double buffering. An idle client should
+        // not be a reason to expand to triple.
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+    }
+}
+
+TEST_F(BufferQueueTest, really_slow_clients_dont_get_expanded_buffers)
+{
+    for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.allow_framedropping(false);
+
+        int const delay = 10;
+        q.set_resize_delay(delay);
+
+        // First, verify we only have 2 real buffers
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+
+        // Simulate a really slow client (one third frame rate) that can
+        // never be helped by triple buffering.
+        for (int f = 0; f < delay*2; ++f)
+        {
+            auto client = client_acquire_async(q);
+
+            q.compositor_release(q.compositor_acquire(this));
+            q.compositor_release(q.compositor_acquire(this));
+            q.compositor_release(q.compositor_acquire(this));
+
+            ASSERT_TRUE(client->has_acquired_buffer());
+            client->release_buffer();
+        }
+
+        // Verify we still only have double buffering. A client this slow
+        // can't be helped by a third buffer, so it should not be a reason
+        // to expand to triple.
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+    }
+}
+
+TEST_F(BufferQueueTest, synchronous_clients_dont_get_expanded_buffers)
+{
+    for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
+    {
+        mc::BufferQueue q(max_buffers, allocator, basic_properties,
+                          policy_factory);
+
+        q.allow_framedropping(false);
+
+        int const delay = 3;
+        q.set_resize_delay(delay);
+
+        // First, verify we only have 2 real buffers
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+
+        // Simulate an idle shell with a single really slow client (like
+        // a clock -- not something that's trying to keep up with the
+        // refresh rate)
+        for (int f = 0; f < delay*2; ++f)
+        {
+            auto client = client_acquire_async(q);
+            ASSERT_TRUE(client->has_acquired_buffer());
+            client->release_buffer();
+
+            q.compositor_release(q.compositor_acquire(this));
+        }
+
+        // Verify we still only have double buffering. An idle desktop driven
+        // by a single slow client should not be a reason to expand to triple.
+        EXPECT_EQ(2, unique_synchronous_buffers(q));
+    }
+}
Fun fact: Adding the heuristic for detecting slow clients fixes BufferQueueTest.slow_client_framerate_matches_compositor without any time-fudging required any more. Turns out it was a very good test.
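The tests above all probe the same scaling policy: start double-buffered, expand to triple buffers permanently when framedropping is enabled or the client keeps missing deadlines, and never expand for idle or hopelessly slow clients. A minimal standalone sketch of that policy follows; the class and method names (ScalingQueue, note_frame_missed, note_frame_on_time) are invented for illustration and are not Mir's actual BufferQueue API.

```cpp
#include <algorithm>

// Hypothetical sketch of the dynamic queue-scaling heuristic; not Mir's
// real implementation, just the policy the tests describe.
class ScalingQueue
{
public:
    explicit ScalingQueue(int max_buffers)
        : max_buffers{max_buffers},
          nbuffers{std::min(max_buffers, 2)}  // start double-buffered
    {
    }

    // delay < 0 disables auto-expansion (like set_resize_delay(-1) in the
    // tests); 0 expands on the first missed frame.
    void set_resize_delay(int delay) { resize_delay = delay; }

    void allow_framedropping(bool allow)
    {
        framedropping = allow;
        if (allow)
            expand();  // framedropping needs a spare buffer immediately
        else
            nbuffers = std::min(max_buffers, 2);  // contract again
    }

    // The compositor consumed a frame while the client still had a swap
    // pending: the client appears to be lagging behind the compositor.
    void note_frame_missed()
    {
        if (resize_delay >= 0 && ++consecutive_misses > resize_delay)
            expand();  // expansion persists until framedropping is toggled
    }

    // The client met the deadline (or is simply idle): reset the count so
    // idle or hopelessly slow clients never accumulate enough misses.
    void note_frame_on_time() { consecutive_misses = 0; }

    int buffer_count() const { return nbuffers; }

private:
    void expand() { nbuffers = std::min(max_buffers, 3); }

    int const max_buffers;
    int nbuffers;
    int resize_delay = 10;
    int consecutive_misses = 0;
    bool framedropping = false;
};
```

Requiring *consecutive* misses is what makes the "fun fact" above work: a client that is merely idle, or so slow it skips whole frames, resets or never accumulates the counter, so only clients genuinely straining against the deadline earn the third buffer.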