Mir

Merge lp:~vanvugt/mir/double into lp:mir

Proposed by Daniel van Vugt
Status: Work in progress
Proposed branch: lp:~vanvugt/mir/double
Merge into: lp:mir
Diff against target: 850 lines (+446/-53)
4 files modified
src/server/compositor/buffer_queue.cpp (+122/-22)
src/server/compositor/buffer_queue.h (+7/-2)
tests/integration-tests/surface_composition.cpp (+8/-1)
tests/unit-tests/compositor/test_buffer_queue.cpp (+309/-28)
To merge this branch: bzr merge lp:~vanvugt/mir/double
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration Approve
Daniel van Vugt Needs Fixing
Alexandros Frantzis (community) Needs Fixing
Alberto Aguirre Pending
Review via email: mp+227701@code.launchpad.net

Commit message

Reintroduce double buffering! (LP: #1240909)

Default to double buffering where possible, to minimize visible lag and
resource usage. The queue will be expanded automatically to triple buffers
if you enable framedropping, or if it is observed that the client is not
keeping up with the compositor.

Revision history for this message
Daniel van Vugt (vanvugt) wrote :

Fun fact: Adding the heuristic for detecting slow clients fixes:
  BufferQueueTest.slow_client_framerate_matches_compositor
without requiring any time-fudging any more. It turns out it was a very good test.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
lp:~vanvugt/mir/double updated
1726. By Daniel van Vugt

Merge latest development-branch

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Daniel van Vugt (vanvugt) wrote :

The failure is bug 1348095. Unrelated to this proposal.

lp:~vanvugt/mir/double updated
1727. By Daniel van Vugt

Merge latest development-branch

1728. By Daniel van Vugt

Merge latest development-branch

1729. By Daniel van Vugt

Merge latest development-branch and fix conflicts.

1730. By Daniel van Vugt

Merge latest development-branch

1731. By Daniel van Vugt

Fix failing test case, merged from devel:
gives_compositor_the_newest_buffer_after_dropping_old_buffers

1732. By Daniel van Vugt

Make tests which require 3 buffers more consistent

1733. By Daniel van Vugt

Fix failing test: TEST_F(StaleFrames, are_dropped_when_restarting_compositor)
It was designed to assume three unique buffers available by default. Remove
that assumption.

1734. By Daniel van Vugt

Lower the resize delay to 100 frames so slow clients only take ~1.5s to
be detected as slow and change to triple buffers.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

Not a thorough review yet, looks good overall on a first pass.

86 +    if (client_behind && missed_frames < queue_resize_delay_frames)
87 +    {
88 +        ++missed_frames;
89 +        if (missed_frames >= queue_resize_delay_frames)
90 +            extra_buffers = 1;
91 +    }
92 +
93 +    if (!client_behind && missed_frames > 0)
102 +        if (missed_frames < queue_resize_delay_frames)
103 +            --missed_frames;

It would be nice to have tests for the expected behavior of this piece of code.

review: Needs Fixing
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Daniel van Vugt (vanvugt) wrote :

There are already tests for that portion of logic:

  TEST_F(BufferQueueTest, slow_client_framerate_matches_compositor)
  TEST_F(BufferQueueTest, queue_size_scales_instantly_on_framedropping)
  TEST_F(BufferQueueTest, queue_size_scales_for_slow_clients)

Although it's always possible to add more.

Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

> Although it's always possible to add more.

One test that is missing (at least not tested explicitly) is to verify that: "If the ceiling is hit, keep it there with the extra buffer allocated so we don't shrink again and cause yet more missed frames."

lp:~vanvugt/mir/double updated
1735. By Daniel van Vugt

Merge latest development-branch

1736. By Daniel van Vugt

Merge latest development-branch

1737. By Daniel van Vugt

Add unit test: BufferQueueTest.switch_to_triple_buffers_is_permanent

This verifies we don't accidentally ping-pong between 2 and 3 buffers.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

> * A client that's keeping up will be in-phase with composition. That
> * means it will stay, or quickly equalize at a point where there are
> * no client buffers still held when composition finishes.

This criterion only works well when the client is actually trying to keep up with the compositor (i.e. 60Hz). Clients that refresh less often on purpose or only when reacting to user input will become triple-buffered if the compositor is triggered by another source and they are not updating fast enough.

This happens, for example, if we run both mir_demo_client_egltriangle and mir_demo_client_fingerpaint. The frames from egltriangle trigger the compositor and cause mir_demo_client_fingerpaint to become triple-buffered after a short while. The same happens if we just have mir_demo_client_fingerpaint and move its surface around in the demo shell.

We probably need additional logic/heuristics to make this work properly (haven't thought through what that logic might be, though).

review: Needs Fixing
lp:~vanvugt/mir/double updated
1738. By Daniel van Vugt

Merge latest development-branch

1739. By Daniel van Vugt

Add a regression test for the idle-vs-slow client problem Alexandros
described. Presently failing.

1740. By Daniel van Vugt

Move the "missed frames" decision to a point where it won't be skewed by
multiple compositors (multi-monitor).

1741. By Daniel van Vugt

Prototype a detection of "idle" clients. Other questionable tests still
failing.

1742. By Daniel van Vugt

Simplify

1743. By Daniel van Vugt

More readable.

1744. By Daniel van Vugt

Fix failing test: BufferQueueTest.queue_size_scales_for_slow_clients
It needed some enhancement to not be detected as an "idle" client by the
new logic.

1745. By Daniel van Vugt

Fix test case: BufferQueueTest.switch_to_triple_buffers_is_permanent
The new idle detection logic was not seeing any client activity and so
deemed the client as not trying to keep up. Added some "slow client"
activity to compensate.

1746. By Daniel van Vugt

Simplified idle detection

1747. By Daniel van Vugt

Fix comment typo

1748. By Daniel van Vugt

Add another regression test which shows premature expansion for really
slow clients.

1749. By Daniel van Vugt

Fixed: Mis-detection of a really slow client (low self-determined frame rate)
as failing to keep up. Regression test now passes.

1750. By Daniel van Vugt

Remove debug code

Revision history for this message
Daniel van Vugt (vanvugt) wrote :

Excellent point Alexandros. That's now fixed with extra tests for the scenarios you describe.

Unfortunately I just realized that there's another situation that probably needs fixing: A system with a single slow-ticking client (like a clock) won't ever know that the compositor is capable of running any faster than the client is, and so will prematurely classify the client as slow and switch to triple buffers.

review: Needs Fixing
Revision history for this message
Daniel van Vugt (vanvugt) wrote :

I'm now tempted to divide this proposal in two:
  1. Basic test case fixes to support future double buffering; and
  2. Heuristic switching between double and triple buffering (work in progress).

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
lp:~vanvugt/mir/double updated
1751. By Daniel van Vugt

Merge latest development-branch

1752. By Daniel van Vugt

Merge latest development-branch

1753. By Daniel van Vugt

Slightly simpler and more straightforward detection of slow vs idle clients.

1754. By Daniel van Vugt

Shrink the diff

1755. By Daniel van Vugt

More comments

1756. By Daniel van Vugt

Add a regression test for the slow-ticking-client-mostly-idle-desktop case.

1757. By Daniel van Vugt

test_buffer_queue.cpp: De-duplicate buffer counting logic

1758. By Daniel van Vugt

Fix indentation in new tests

1759. By Daniel van Vugt

test_buffer_queue.cpp: Major simplification - reuse a common function to
measure unique buffers.

1760. By Daniel van Vugt

Tidy up tests

1761. By Daniel van Vugt

Remove unnecessary sleeps from new tests

1762. By Daniel van Vugt

Go back to the old algorithm, keep the new tests.

1763. By Daniel van Vugt

A slightly more elegant measure of client_lag

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
lp:~vanvugt/mir/double updated
1764. By Daniel van Vugt

Merge latest development-branch

1765. By Daniel van Vugt

Merge latest development-branch

1766. By Daniel van Vugt

Merge latest development-branch

1767. By Daniel van Vugt

Merge latest development-branch

1768. By Daniel van Vugt

Merge latest development-branch

1769. By Daniel van Vugt

Merge latest development-branch

1770. By Daniel van Vugt

Merge latest development-branch

1771. By Daniel van Vugt

Merge latest development-branch and fix conflicts.

1772. By Daniel van Vugt

Merge latest development-branch

1773. By Daniel van Vugt

Merge latest development-branch

1774. By Daniel van Vugt

Merge latest development-branch

1775. By Daniel van Vugt

Merge latest development-branch

1776. By Daniel van Vugt

Merge latest development-branch

1777. By Daniel van Vugt

Merge latest development-branch

1778. By Daniel van Vugt

Merge (rebase on) 'double-integration-tests' to shrink the diff of
this branch to its new parent.

1779. By Daniel van Vugt

Merge latest development-branch

1780. By Daniel van Vugt

Merge latest development-branch and fix conflicts.

1781. By Daniel van Vugt

Merge latest development-branch

1782. By Daniel van Vugt

Merge latest development-branch

1783. By Daniel van Vugt

Merge latest development-branch

1784. By Daniel van Vugt

Merge latest development-branch

1785. By Daniel van Vugt

Merge latest development-branch and fix a conflict.

1786. By Daniel van Vugt

Merge latest development-branch

1787. By Daniel van Vugt

Merge latest development-branch

1788. By Daniel van Vugt

Merge latest development-branch

1789. By Daniel van Vugt

Merge latest trunk

1790. By Daniel van Vugt

Merge latest trunk

1791. By Daniel van Vugt

Merge latest trunk

1792. By Daniel van Vugt

Merge latest trunk

1793. By Daniel van Vugt

Merge latest trunk

1794. By Daniel van Vugt

Merge latest trunk

1795. By Daniel van Vugt

Merge latest trunk

1796. By Daniel van Vugt

Merge latest trunk

1797. By Daniel van Vugt

Prototype fix for the crashing integration test

1798. By Daniel van Vugt

A simpler fix for the crashing test -- don't try to release the same
old_buffer multiple times.

1799. By Daniel van Vugt

Even simpler

Unmerged revisions

1799. By Daniel van Vugt

Even simpler

1798. By Daniel van Vugt

A simpler fix for the crashing test -- don't try to release the same
old_buffer multiple times.

1797. By Daniel van Vugt

Prototype fix for the crashing integration test

1796. By Daniel van Vugt

Merge latest trunk

1795. By Daniel van Vugt

Merge latest trunk

1794. By Daniel van Vugt

Merge latest trunk

1793. By Daniel van Vugt

Merge latest trunk

1792. By Daniel van Vugt

Merge latest trunk

1791. By Daniel van Vugt

Merge latest trunk

1790. By Daniel van Vugt

Merge latest trunk

Preview Diff

=== modified file 'src/server/compositor/buffer_queue.cpp'
--- src/server/compositor/buffer_queue.cpp 2014-10-29 06:21:10 +0000
+++ src/server/compositor/buffer_queue.cpp 2014-11-23 08:19:33 +0000
@@ -92,18 +92,23 @@
 }
 
 mc::BufferQueue::BufferQueue(
-    int nbuffers,
+    int max_buffers,
     std::shared_ptr<graphics::GraphicBufferAllocator> const& gralloc,
     graphics::BufferProperties const& props,
     mc::FrameDroppingPolicyFactory const& policy_provider)
-    : nbuffers{nbuffers},
+    : min_buffers{std::min(2, max_buffers)}, // TODO: Configurable in future
+      max_buffers{max_buffers},
+      missed_frames{0},
+      queue_resize_delay_frames{100},
+      extra_buffers{0},
+      client_lag{0},
       frame_dropping_enabled{false},
       the_properties{props},
       force_new_compositor_buffer{false},
       callbacks_allowed{true},
       gralloc{gralloc}
 {
-    if (nbuffers < 1)
+    if (max_buffers < 1)
     {
         BOOST_THROW_EXCEPTION(
             std::logic_error("invalid number of buffers for BufferQueue"));
@@ -111,9 +116,9 @@
 
     /* By default not all buffers are allocated.
      * If there is increased pressure by the client to acquire
-     * more buffers, more will be allocated at that time (up to nbuffers)
+     * more buffers, more will be allocated at that time (up to max_buffers)
      */
-    for(int i = 0; i < std::min(nbuffers, 2); i++)
+    for (int i = 0; i < min_buffers; ++i)
     {
         buffers.push_back(gralloc->alloc_buffer(the_properties));
     }
@@ -129,7 +134,7 @@
     /* Special case: with one buffer both clients and compositors
      * need to share the same buffer
      */
-    if (nbuffers == 1)
+    if (max_buffers == 1)
         free_buffers.push_back(current_compositor_buffer);
 
     framedrop_policy = policy_provider.create_policy([this]
@@ -173,6 +178,7 @@
 void mc::BufferQueue::client_acquire(mc::BufferQueue::Callback complete)
 {
     std::unique_lock<decltype(guard)> lock(guard);
+//  fprintf(stderr, "%s\n", __FUNCTION__);
 
     pending_client_notifications.push_back(std::move(complete));
 
@@ -184,12 +190,8 @@
         return;
     }
 
-    /* No empty buffers, attempt allocating more
-     * TODO: need a good heuristic to switch
-     * between double-buffering to n-buffering
-     */
     int const allocated_buffers = buffers.size();
-    if (allocated_buffers < nbuffers)
+    if (allocated_buffers < ideal_buffers())
     {
         auto const& buffer = gralloc->alloc_buffer(the_properties);
         buffers.push_back(buffer);
@@ -215,6 +217,8 @@
 {
     std::lock_guard<decltype(guard)> lock(guard);
 
+//  fprintf(stderr, "%s\n", __FUNCTION__);
+
     if (buffers_owned_by_client.empty())
     {
         BOOST_THROW_EXCEPTION(
@@ -235,6 +239,7 @@
 mc::BufferQueue::compositor_acquire(void const* user_id)
 {
     std::unique_lock<decltype(guard)> lock(guard);
+//  fprintf(stderr, "%s\n", __FUNCTION__);
 
     bool use_current_buffer = false;
     if (!current_buffer_users.empty() && !is_a_current_buffer_user(user_id))
@@ -280,7 +285,10 @@
         buffer_for(current_compositor_buffer, buffers);
 
     if (buffer_to_release)
+    {
+        client_lag = 1;
         release(buffer_to_release, std::move(lock));
+    }
 
     return acquired_buffer;
 }
@@ -288,6 +296,7 @@
 void mc::BufferQueue::compositor_release(std::shared_ptr<graphics::Buffer> const& buffer)
 {
     std::unique_lock<decltype(guard)> lock(guard);
+//  fprintf(stderr, "%s\n", __FUNCTION__);
 
     if (!remove(buffer.get(), buffers_sent_to_compositor))
     {
@@ -299,7 +308,52 @@
     if (contains(buffer.get(), buffers_sent_to_compositor))
         return;
 
+    /*
+     * Calculate if we need extra buffers in the queue to account for a slow
+     * client that can't keep up with composition.
+     */
+    if (frame_dropping_enabled)
+    {
+        missed_frames = 0;
+        extra_buffers = 0;
+    }
+    else
+    {
+        /*
+         * A client that's keeping up will be in-phase with composition. That
+         * means it will stay, or quickly equalize at a point where there are
+         * no client buffers still held when composition finishes.
+         */
+        if (client_lag == 1 && missed_frames < queue_resize_delay_frames)
+        {
+            ++missed_frames;
+            if (missed_frames >= queue_resize_delay_frames)
+                extra_buffers = 1;
+        }
+
+        if (client_lag != 1 && missed_frames > 0)
+        {
+            /*
+             * Allow missed_frames to recover back down to zero, so long as
+             * the ceiling is never hit (meaning you're keeping up most of the
+             * time). If the ceiling is hit, keep it there with the extra
+             * buffer allocated so we don't shrink again and cause yet more
+             * missed frames.
+             */
+            if (missed_frames < queue_resize_delay_frames)
+                --missed_frames;
+        }
+
+//      fprintf(stderr, "missed_frames %d, extra %d\n",
+//              missed_frames, extra_buffers);
+    }
+
+    // Let client_lag go above 1 to represent an idle client (one that's
+    // sleeping and not trying to keep up with the compositor)
+    if (client_lag)
+        ++client_lag;
+
-    if (nbuffers <= 1)
+    if (max_buffers <= 1)
         return;
 
     /*
@@ -322,7 +376,10 @@
     }
 
     if (current_compositor_buffer != buffer.get())
+    {
+        client_lag = 0;
         release(buffer.get(), std::move(lock));
+    }
 }
 
 std::shared_ptr<mg::Buffer> mc::BufferQueue::snapshot_acquire()
@@ -354,6 +411,14 @@
 {
     std::lock_guard<decltype(guard)> lock(guard);
     frame_dropping_enabled = flag;
+
+    while (static_cast<int>(buffers.size()) > ideal_buffers() &&
+           !free_buffers.empty())
+    {
+        auto surplus = free_buffers.back();
+        free_buffers.pop_back();
+        drop_buffer(surplus);
+    }
 }
 
 bool mc::BufferQueue::framedropping_allowed() const
@@ -397,20 +462,14 @@
 int mc::BufferQueue::buffers_free_for_client() const
 {
     std::lock_guard<decltype(guard)> lock(guard);
-    int ret = 1;
-    if (nbuffers > 1)
-    {
-        int nfree = free_buffers.size();
-        int future_growth = nbuffers - buffers.size();
-        ret = nfree + future_growth;
-    }
-    return ret;
+    return max_buffers > 1 ? free_buffers.size() : 1;
 }
 
 void mc::BufferQueue::give_buffer_to_client(
     mg::Buffer* buffer,
     std::unique_lock<std::mutex> lock)
 {
+//  fprintf(stderr, "%s\n", __FUNCTION__);
     /* Clears callback */
     auto give_to_client_cb = std::move(pending_client_notifications.front());
     pending_client_notifications.pop_front();
@@ -424,7 +483,7 @@
     /* Special case: the current compositor buffer also needs to be
      * replaced as it's shared with the client
      */
-    if (nbuffers == 1)
+    if (max_buffers == 1)
         current_compositor_buffer = buffer;
     }
 
@@ -463,7 +522,13 @@
     mg::Buffer* buffer,
     std::unique_lock<std::mutex> lock)
 {
-    if (!pending_client_notifications.empty())
+    int used_buffers = buffers.size() - free_buffers.size();
+
+    if (used_buffers > ideal_buffers() && max_buffers > 1)
+    {
+        drop_buffer(buffer);
+    }
+    else if (!pending_client_notifications.empty())
     {
         framedrop_policy->swap_unblocked();
         give_buffer_to_client(buffer, std::move(lock));
@@ -472,6 +537,41 @@
     free_buffers.push_back(buffer);
 }
 
+int mc::BufferQueue::ideal_buffers() const
+{
+    int result = frame_dropping_enabled ? 3 : 2;
+
+    // Add extra buffers if we can see the client's not keeping up with
+    // composition.
+    result += extra_buffers;
+
+    if (result < min_buffers)
+        result = min_buffers;
+    if (result > max_buffers)
+        result = max_buffers;
+
+    return result;
+}
+
+void mc::BufferQueue::set_resize_delay(int nframes)
+{
+    queue_resize_delay_frames = nframes;
+    if (nframes == 0)
+        extra_buffers = 1;
+}
+
+void mc::BufferQueue::drop_buffer(graphics::Buffer* buffer)
+{
+    for (auto i = buffers.begin(); i != buffers.end(); ++i)
+    {
+        if (i->get() == buffer)
+        {
+            buffers.erase(i);
+            break;
+        }
+    }
+}
+
 void mc::BufferQueue::drop_frame(std::unique_lock<std::mutex> lock)
 {
     auto buffer_to_give = pop(ready_to_composite_queue);
 
=== modified file 'src/server/compositor/buffer_queue.h'
--- src/server/compositor/buffer_queue.h 2014-10-29 06:21:10 +0000
+++ src/server/compositor/buffer_queue.h 2014-11-23 08:19:33 +0000
@@ -42,7 +42,7 @@
 public:
     typedef std::function<void(graphics::Buffer* buffer)> Callback;
 
-    BufferQueue(int nbuffers,
+    BufferQueue(int max_buffers,
                 std::shared_ptr<graphics::GraphicBufferAllocator> const& alloc,
                 graphics::BufferProperties const& props,
                 FrameDroppingPolicyFactory const& policy_provider);
@@ -64,6 +64,7 @@
     bool is_a_current_buffer_user(void const* user_id) const;
     void drop_old_buffers() override;
     void drop_client_requests() override;
+    void set_resize_delay(int nframes); ///< negative means never resize
 
 private:
     void give_buffer_to_client(graphics::Buffer* buffer,
@@ -71,6 +72,8 @@
     void release(graphics::Buffer* buffer,
                  std::unique_lock<std::mutex> lock);
     void drop_frame(std::unique_lock<std::mutex> lock);
+    int ideal_buffers() const;
+    void drop_buffer(graphics::Buffer* buffer);
 
     mutable std::mutex guard;
 
@@ -86,7 +89,9 @@
 
     std::deque<Callback> pending_client_notifications;
 
-    int nbuffers;
+    int const min_buffers, max_buffers;
+    int missed_frames, queue_resize_delay_frames, extra_buffers;
+    int client_lag;
     bool frame_dropping_enabled;
     graphics::BufferProperties the_properties;
     bool force_new_compositor_buffer;
 
=== modified file 'tests/integration-tests/surface_composition.cpp'
--- tests/integration-tests/surface_composition.cpp 2014-10-02 11:56:12 +0000
+++ tests/integration-tests/surface_composition.cpp 2014-11-23 08:19:33 +0000
@@ -108,16 +108,23 @@
 
     mg::Buffer* old_buffer{nullptr};
 
+    bool called_back = true;
     auto const callback = [&] (mg::Buffer* new_buffer)
     {
         // If surface is dead then callback is not expected
         EXPECT_THAT(surface.get(), NotNull());
         old_buffer = new_buffer;
+        called_back = true;
     };
 
     // Exhaust the buffers to ensure we have a pending swap to complete
-    for (auto i = 0; i != number_of_buffers; ++i)
+    // But also be careful to not pass a formerly released non-null old_buffer
+    // in to swap_buffers...
+    while (called_back)
+    {
+        called_back = false;
         surface->swap_buffers(old_buffer, callback);
+    }
 
     auto const renderable = surface->compositor_snapshot(this);
 
 
=== modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp'
--- tests/unit-tests/compositor/test_buffer_queue.cpp 2014-10-29 06:21:10 +0000
+++ tests/unit-tests/compositor/test_buffer_queue.cpp 2014-11-23 08:19:33 +0000
@@ -304,6 +304,7 @@
     int const nbuffers = 3;
     mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
+    q.set_resize_delay(0); // Force enabling of the third buffer
     int const prefill = q.buffers_free_for_client();
     ASSERT_THAT(prefill, Gt(0));
     for (int i = 0; i < prefill; ++i)
@@ -314,10 +315,7 @@
     }
 
     auto handle1 = client_acquire_async(q);
-    ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
-
     auto handle2 = client_acquire_async(q);
-    ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
 
     for (int i = 0; i < nbuffers + 1; ++i)
         q.compositor_release(q.compositor_acquire(this));
@@ -386,7 +384,6 @@
     handle->release_buffer();
 
     handle = client_acquire_async(q);
-    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
     handle->release_buffer();
 
     auto comp_buffer = q.compositor_acquire(this);
@@ -400,6 +397,8 @@
 {
     mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
 
+    q.set_resize_delay(0); // Force enabling of the third buffer
+
     auto handle1 = client_acquire_async(q);
     ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
 
@@ -426,7 +425,7 @@
             compositor_thread, std::ref(q), std::ref(done));
 
         std::unordered_set<uint32_t> ids_acquired;
-        int const max_ownable_buffers = nbuffers - 1;
+        int const max_ownable_buffers = q.buffers_free_for_client();
         for (int i = 0; i < max_ownable_buffers*2; ++i)
         {
            std::vector<mg::Buffer *> client_buffers;
@@ -445,7 +444,9 @@
            }
        }
 
-        EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
+        // Expect only two since we're now double-buffered and are not
+        // allowing frame dropping.
+        EXPECT_THAT(ids_acquired.size(), Eq(2));
    }
 }
 
@@ -499,7 +500,7 @@
    {
        std::deque<mg::BufferID> client_release_sequence;
        std::vector<mg::Buffer *> buffers;
-        int const max_ownable_buffers = nbuffers - 1;
+        int const max_ownable_buffers = q.buffers_free_for_client();
        for (int i = 0; i < max_ownable_buffers; ++i)
        {
            auto handle = client_acquire_async(q);
@@ -632,13 +633,13 @@
    handle->release_buffer();
 
    handle = client_acquire_async(q);
-    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
 
    // in the original bug, compositor would be given the wrong buffer here
    auto compositor_buffer = q.compositor_acquire(this);
 
    EXPECT_THAT(compositor_buffer->id(), Eq(first_ready_buffer_id));
 
+    ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
    handle->release_buffer();
    q.compositor_release(compositor_buffer);
 }
@@ -806,7 +807,14 @@
            handle->release_buffer();
        }
 
-        EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
+        /*
+         * Dynamic queue scaling sensibly limits the framedropping client to
+         * two non-overlapping buffers, before overwriting old ones. Allowing
+         * any more would just waste space (buffers). So this means a
+         * frame-dropping client won't ever see more than 3 unique buffers
+         */
+        int const max_ownable_buffers = std::min(nbuffers, 3);
+        EXPECT_THAT(ids_acquired.size(), Eq(max_ownable_buffers));
    }
 }
 
@@ -902,6 +910,7 @@
    auto const frame_time = std::chrono::milliseconds(16);
 
    q.allow_framedropping(false);
+    q.set_resize_delay(1);
 
    std::atomic<bool> done(false);
    std::mutex sync;
@@ -977,20 +986,13 @@
    }
 }
 
-namespace
-{
-int max_ownable_buffers(int nbuffers)
-{
-    return (nbuffers == 1) ? 1 : nbuffers - 1;
-}
-}
-
 TEST_F(BufferQueueTest, compositor_acquires_resized_frames)
 {
    for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers)
    {
        mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
        std::vector<mg::BufferID> history;
+        std::shared_ptr<AcquireWaitHandle> producing[5];
 
        const int width0 = 123;
        const int height0 = 456;
@@ -1001,8 +1003,7 @@
        int const nbuffers_to_use = q.buffers_free_for_client();
        ASSERT_THAT(nbuffers_to_use, Gt(0));
 
-        int max_buffers{max_ownable_buffers(nbuffers)};
-        for (int produce = 0; produce < max_buffers; ++produce)
+        for (int produce = 0; produce < nbuffers_to_use; ++produce)
        {
@@ -1012,16 +1013,23 @@
            auto handle = client_acquire_async(q);
            ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
            history.emplace_back(handle->id());
+            producing[produce] = handle;
            auto buffer = handle->buffer();
            ASSERT_THAT(buffer->size(), Eq(new_size));
-            handle->release_buffer();
+        }
+
+        // Overlap all the client_acquires asyncronously. It's the only way
+        // the new dynamic queue scaling will let a client hold that many...
1023 for (int complete = 0; complete < nbuffers_to_use; ++complete)
1024 {
1025 producing[complete]->release_buffer();
1018 }1026 }
10191027
1020 width = width0;1028 width = width0;
1021 height = height0;1029 height = height0;
10221030
1023 ASSERT_THAT(history.size(), Eq(max_buffers));1031 ASSERT_THAT(history.size(), Eq(nbuffers_to_use));
1024 for (int consume = 0; consume < max_buffers; ++consume)1032 for (int consume = 0; consume < nbuffers_to_use; ++consume)
1025 {1033 {
1026 geom::Size expect_size{width, height};1034 geom::Size expect_size{width, height};
1027 width += dx;1035 width += dx;
@@ -1062,7 +1070,8 @@
1062 basic_properties,1070 basic_properties,
1063 policy_factory);1071 policy_factory);
10641072
1065 for (int i = 0; i < max_ownable_buffers(nbuffers); i++)1073 auto max_ownable_buffers = q.buffers_free_for_client();
1074 for (int i = 0; i < max_ownable_buffers; i++)
1066 {1075 {
1067 auto client = client_acquire_sync(q);1076 auto client = client_acquire_sync(q);
1068 q.client_release(client);1077 q.client_release(client);
@@ -1089,7 +1098,8 @@
1089 basic_properties,1098 basic_properties,
1090 policy_factory);1099 policy_factory);
10911100
1092 for (int i = 0; i < max_ownable_buffers(nbuffers); i++)1101 int const max_ownable_buffers = q.buffers_free_for_client();
1102 for (int i = 0; i < max_ownable_buffers; i++)
1093 {1103 {
1094 auto client = client_acquire_sync(q);1104 auto client = client_acquire_sync(q);
1095 q.client_release(client);1105 q.client_release(client);
@@ -1379,7 +1389,7 @@
1379 compositor_thread, std::ref(q), std::ref(done));1389 compositor_thread, std::ref(q), std::ref(done));
13801390
1381 std::unordered_set<mg::Buffer *> unique_buffers_acquired;1391 std::unordered_set<mg::Buffer *> unique_buffers_acquired;
1382 int const max_ownable_buffers = nbuffers - 1;1392 int const max_ownable_buffers = q.buffers_free_for_client();
1383 for (int frame = 0; frame < max_ownable_buffers*2; frame++)1393 for (int frame = 0; frame < max_ownable_buffers*2; frame++)
1384 {1394 {
1385 std::vector<mg::Buffer *> client_buffers;1395 std::vector<mg::Buffer *> client_buffers;
@@ -1398,13 +1408,14 @@
1398 }1408 }
1399 }1409 }
14001410
1401 EXPECT_THAT(unique_buffers_acquired.size(), Eq(nbuffers));1411 // Expect one more than max_ownable_buffers, to include the one that
1412 // is silently reserved for compositing.
1413 EXPECT_THAT(unique_buffers_acquired.size(), Eq(max_ownable_buffers+1));
14021414
1403 }1415 }
1404}1416}
14051417
1406/* FIXME (enabling this optimization breaks timing tests) */1418TEST_F(BufferQueueTest, synchronous_clients_only_get_two_real_buffers)
1407TEST_F(BufferQueueTest, DISABLED_synchronous_clients_only_get_two_real_buffers)
1408{1419{
1409 for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)1420 for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)
1410 {1421 {
@@ -1413,6 +1424,13 @@
14131424
1414 std::atomic<bool> done(false);1425 std::atomic<bool> done(false);
1415 auto unblock = [&done] { done = true; };1426 auto unblock = [&done] { done = true; };
1427
1428 // With an unthrottled compositor_thread it will look like the client
1429 // isn't keeping up and the buffer queue would normally auto-expand.
1430 // Increase the auto-expansion threshold to ensure that doesn't happen
1431 // during this test...
1432 q.set_resize_delay(-1);
1433
1416 mt::AutoUnblockThread compositor(unblock,1434 mt::AutoUnblockThread compositor(unblock,
1417 compositor_thread, std::ref(q), std::ref(done));1435 compositor_thread, std::ref(q), std::ref(done));
14181436
@@ -1495,6 +1513,9 @@
1495 int const nbuffers = 3;1513 int const nbuffers = 3;
1496 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);1514 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
14971515
1516 // Ensure this test gets 3 real buffers immediately.
1517 q.set_resize_delay(0);
1518
1498 auto handle1 = client_acquire_async(q);1519 auto handle1 = client_acquire_async(q);
1499 ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));1520 ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
1500 handle1->release_buffer();1521 handle1->release_buffer();
@@ -1537,3 +1558,263 @@
1537 auto comp2 = q.compositor_acquire(new_compositor_id);1558 auto comp2 = q.compositor_acquire(new_compositor_id);
1538 ASSERT_THAT(comp2->id(), Eq(handle2->id()));1559 ASSERT_THAT(comp2->id(), Eq(handle2->id()));
1539}1560}
1561
1562namespace
1563{
1564 int unique_buffers(mc::BufferQueue& q)
1565 {
1566 std::atomic<bool> done(false);
1567
1568 auto unblock = [&done] { done = true; };
1569 mt::AutoUnblockThread compositor(unblock,
1570 compositor_thread, std::ref(q), std::ref(done));
1571
1572 std::unordered_set<mg::Buffer*> buffers_acquired;
1573 for (int frame = 0; frame < 100; frame++)
1574 {
1575 auto handle = client_acquire_async(q);
1576 handle->wait_for(std::chrono::seconds(1));
1577 if (!handle->has_acquired_buffer())
1578 return -1;
1579 buffers_acquired.insert(handle->buffer());
1580 handle->release_buffer();
1581 }
1582
1583 return static_cast<int>(buffers_acquired.size());
1584 }
1585
1586 int unique_synchronous_buffers(mc::BufferQueue& q)
1587 {
1588 if (q.framedropping_allowed())
1589 return 0;
1590
1591 int const max = 10;
1592
1593 // Flush the queue
1594 for (int f = 0; f < max; ++f)
1595 q.compositor_release(q.compositor_acquire(0));
1596
1597 auto compositor = q.compositor_acquire(0);
1598 int count = 1; // ^ count the compositor buffer
1599
1600 std::shared_ptr<AcquireWaitHandle> client;
1601 while (count < max)
1602 {
1603 client = client_acquire_async(q);
1604 if (!client->has_acquired_buffer())
1605 break;
1606 ++count;
1607 client->release_buffer();
1608 }
1609
1610 q.compositor_release(compositor);
1611 EXPECT_TRUE(client->has_acquired_buffer());
1612 client->release_buffer();
1613
1614 // Flush the queue
1615 for (int f = 0; f < max; ++f)
1616 q.compositor_release(q.compositor_acquire(0));
1617
1618 return count;
1619 }
1620} // namespace
1621
1622TEST_F(BufferQueueTest, queue_size_scales_instantly_on_framedropping)
1623{
1624 for (int max_buffers = 1; max_buffers < max_nbuffers_to_test; ++max_buffers)
1625 {
1626 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1627 policy_factory);
1628
1629 q.set_resize_delay(-1);
1630
1631 // Default: No frame dropping; expect double buffering
1632 q.allow_framedropping(false);
1633 EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
1634
1635 // Enable frame dropping; expect triple buffering immediately
1636 q.allow_framedropping(true);
1637 EXPECT_EQ(std::min(max_buffers, 3), unique_buffers(q));
1638
1639 // Revert back to no frame dropping; expect double buffering
1640 q.allow_framedropping(false);
1641 EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
1642 }
1643}
1644
1645TEST_F(BufferQueueTest, queue_size_scales_for_slow_clients)
1646{
1647 for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
1648 {
1649 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1650 policy_factory);
1651
1652 q.allow_framedropping(false);
1653
1654 int const delay = 10;
1655 q.set_resize_delay(delay);
1656
1657 // Verify we're starting with double buffering
1658 EXPECT_EQ(2, unique_synchronous_buffers(q));
1659
1660 // Simulate a slow client. Not an idle one, but one trying to keep up
1661 // and repeatedly failing to miss the frame deadline.
1662 for (int f = 0; f < delay*2; ++f)
1663 {
1664 auto client = client_acquire_async(q);
1665 q.compositor_release(q.compositor_acquire(this));
1666 ASSERT_TRUE(client->has_acquired_buffer());
1667 client->release_buffer();
1668 }
1669
1670 // Verify we now have triple buffers
1671 EXPECT_EQ(3, unique_synchronous_buffers(q));
1672 }
1673}
1674
1675TEST_F(BufferQueueTest, switch_to_triple_buffers_is_permanent)
1676{
1677 for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
1678 {
1679 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1680 policy_factory);
1681
1682 q.allow_framedropping(false);
1683
1684 int const delay = 10;
1685 q.set_resize_delay(delay);
1686
1687 // First, verify we only have 2 real buffers
1688 EXPECT_EQ(2, unique_synchronous_buffers(q));
1689
1690 // Simulate a slow client. Not an idle one, but one trying to keep up
1691 // and repeatedly failing to miss the frame deadline.
1692 for (int f = 0; f < delay*2; ++f)
1693 {
1694 auto client = client_acquire_async(q);
1695 q.compositor_release(q.compositor_acquire(this));
1696 ASSERT_TRUE(client->has_acquired_buffer());
1697 client->release_buffer();
1698 }
1699
1700 EXPECT_EQ(3, unique_synchronous_buffers(q));
1701
1702 // Now let the client behave "well" and keep up.
1703 for (int f = 0; f < delay*10; ++f)
1704 {
1705 q.client_release(client_acquire_sync(q));
1706 q.compositor_release(q.compositor_acquire(this));
1707 }
1708
1709 // Make sure the queue has stayed expanded. Do the original test
1710 // again and verify clients get one more than before...
1711 EXPECT_EQ(3, unique_synchronous_buffers(q));
1712 }
1713}
1714
1715TEST_F(BufferQueueTest, idle_clients_dont_get_expanded_buffers)
1716{
1717 for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
1718 {
1719 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1720 policy_factory);
1721
1722 q.allow_framedropping(false);
1723
1724 int const delay = 10;
1725 q.set_resize_delay(delay);
1726
1727 // First, verify we only have 2 real buffers
1728 int const contracted_nbuffers = 2;
1729 for (int f = 0; f < contracted_nbuffers-1; ++f)
1730 {
1731 auto client1 = client_acquire_async(q);
1732 ASSERT_TRUE(client1->has_acquired_buffer());
1733 client1->release_buffer();
1734 }
1735 auto client2 = client_acquire_async(q);
1736 ASSERT_FALSE(client2->has_acquired_buffer());
1737 q.compositor_release(q.compositor_acquire(this));
1738 ASSERT_TRUE(client2->has_acquired_buffer());
1739
1740 // Now hold client2 buffer for a little too long...
1741 for (int f = 0; f < delay*2; ++f)
1742 q.compositor_release(q.compositor_acquire(this));
1743 // this should have resulted in the queue expanding.
1744
1745 client2->release_buffer();
1746
1747 // Verify we still only have double buffering. An idle client should
1748 // not be a reason to expand to triple.
1749 EXPECT_EQ(2, unique_synchronous_buffers(q));
1750 }
1751}
1752
1753TEST_F(BufferQueueTest, really_slow_clients_dont_get_expanded_buffers)
1754{
1755 for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
1756 {
1757 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1758 policy_factory);
1759
1760 q.allow_framedropping(false);
1761
1762 int const delay = 10;
1763 q.set_resize_delay(delay);
1764
1765 // First, verify we only have 2 real buffers
1766 EXPECT_EQ(2, unique_synchronous_buffers(q));
1767
1768 // Simulate a really slow client (one third frame rate) that can
1769 // never be helped by triple buffering.
1770 for (int f = 0; f < delay*2; ++f)
1771 {
1772 auto client = client_acquire_async(q);
1773
1774 q.compositor_release(q.compositor_acquire(this));
1775 q.compositor_release(q.compositor_acquire(this));
1776 q.compositor_release(q.compositor_acquire(this));
1777
1778 ASSERT_TRUE(client->has_acquired_buffer());
1779 client->release_buffer();
1780 }
1781
1782 // Verify we still only have double buffering. An idle client should
1783 // not be a reason to expand to triple.
1784 EXPECT_EQ(2, unique_synchronous_buffers(q));
1785 }
1786}
1787
1788TEST_F(BufferQueueTest, synchronous_clients_dont_get_expanded_buffers)
1789{
1790 for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
1791 {
1792 mc::BufferQueue q(max_buffers, allocator, basic_properties,
1793 policy_factory);
1794
1795 q.allow_framedropping(false);
1796
1797 int const delay = 3;
1798 q.set_resize_delay(delay);
1799
1800 // First, verify we only have 2 real buffers
1801 EXPECT_EQ(2, unique_synchronous_buffers(q));
1802
1803 // Simulate an idle shell with a single really slow client (like
1804 // a clock -- not something that's trying to keep up with the
1805 // refresh rate)
1806 for (int f = 0; f < delay*2; ++f)
1807 {
1808 auto client = client_acquire_async(q);
1809 ASSERT_TRUE(client->has_acquired_buffer());
1810 client->release_buffer();
1811
1812 q.compositor_release(q.compositor_acquire(this));
1813 }
1814
1815 // Verify we still only have double buffering. An idle desktop driven
1816 // by a single slow client should not be a reason to expand to triple.
1817 EXPECT_EQ(2, unique_synchronous_buffers(q));
1818 }
1819}
1820
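The behaviour these tests pin down — start every surface double-buffered, expand permanently to triple buffers once the client has been observed waiting on the compositor for longer than the resize delay, and never expand for idle or hopelessly slow clients — can be modelled with a small standalone sketch. Note this is an illustrative model only: `SchedulingSketch`, `on_frame` and `frames_behind_` are invented names, not part of Mir's `BufferQueue` API, and the real heuristic in buffer_queue.cpp is more involved.

```cpp
#include <algorithm>

// Minimal model of the auto-expansion heuristic described in the commit
// message. A queue starts with two buffers and grows (permanently) to
// three once the client has missed more than resize_delay consecutive
// frames. A negative resize_delay disables expansion, as used by the
// tests above via q.set_resize_delay(-1).
class SchedulingSketch
{
public:
    SchedulingSketch(int max_buffers, int resize_delay)
        : max_buffers_{max_buffers},
          resize_delay_{resize_delay},
          allocated_{std::min(max_buffers, 2)},
          frames_behind_{0}
    {
    }

    // Called once per composited frame. client_waiting means the client
    // had requested a buffer but none was free (it missed the deadline).
    void on_frame(bool client_waiting)
    {
        if (client_waiting)
            ++frames_behind_;
        else
            frames_behind_ = 0;  // an idle client resets the count

        if (resize_delay_ >= 0 && frames_behind_ > resize_delay_)
            allocated_ = std::min(max_buffers_, 3);  // permanent expansion
    }

    int allocated() const { return allocated_; }

private:
    int const max_buffers_;
    int const resize_delay_;
    int allocated_;
    int frames_behind_;
};
```

Because `allocated_` only ever grows, the model also reproduces the "switch to triple buffers is permanent" behaviour: once expanded, a subsequently well-behaved client keeps three buffers.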
