Mir

Merge lp:~vanvugt/mir/double into lp:mir

Proposed by Daniel van Vugt on 2014-07-22
Status: Work in progress
Proposed branch: lp:~vanvugt/mir/double
Merge into: lp:mir
Diff against target: 850 lines (+446/-53)
4 files modified
src/server/compositor/buffer_queue.cpp (+122/-22)
src/server/compositor/buffer_queue.h (+7/-2)
tests/integration-tests/surface_composition.cpp (+8/-1)
tests/unit-tests/compositor/test_buffer_queue.cpp (+309/-28)
To merge this branch: bzr merge lp:~vanvugt/mir/double
Reviewer Review Type Date Requested Status
PS Jenkins bot (community) continuous-integration 2014-07-22 Approve on 2014-08-01
Daniel van Vugt Needs Fixing on 2014-07-30
Alexandros Frantzis (community) 2014-07-22 Needs Fixing on 2014-07-29
Alberto Aguirre 2014-07-22 Pending
Review via email: mp+227701@code.launchpad.net

Commit message

Reintroduce double buffering! (LP: #1240909)

Default to double buffering where possible, to minimize visible lag and
resource usage. The queue will be expanded automatically to triple buffers
if you enable framedropping, or if it is observed that the client is not
keeping up with the compositor.
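The sizing rule described here can be restated as a standalone sketch (mirroring ideal_buffers() in the diff below; the constants 2, 3 and the clamping are the ones hard-coded there):

```cpp
#include <algorithm>

// Double buffering by default; triple when frame dropping is enabled or
// the client has been observed falling behind; clamped to the configured
// [min_buffers, max_buffers] range.
int ideal_buffers(bool frame_dropping_enabled, int extra_buffers,
                  int min_buffers, int max_buffers)
{
    int result = frame_dropping_enabled ? 3 : 2;
    result += extra_buffers;  // becomes 1 after enough missed frames
    return std::min(std::max(result, min_buffers), max_buffers);
}
```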

Daniel van Vugt (vanvugt) wrote :

Fun fact: adding the heuristic for detecting slow clients fixes:
  BufferQueueTest.slow_client_framerate_matches_compositor
without requiring any time-fudging any more. Turns out it was a very good test.

PS Jenkins bot (ps-jenkins) wrote :
review: Needs Fixing (continuous-integration)
lp:~vanvugt/mir/double updated on 2014-07-23
1726. By Daniel van Vugt on 2014-07-23

Merge latest development-branch

Daniel van Vugt (vanvugt) wrote :

The failure is bug 1348095. Unrelated to this proposal.

lp:~vanvugt/mir/double updated on 2014-07-28
1727. By Daniel van Vugt on 2014-07-24

Merge latest development-branch

1728. By Daniel van Vugt on 2014-07-25

Merge latest development-branch

1729. By Daniel van Vugt on 2014-07-28

Merge latest development-branch and fix conflicts.

1730. By Daniel van Vugt on 2014-07-28

Merge latest development-branch

1731. By Daniel van Vugt on 2014-07-28

Fix failing test case, merged from devel:
gives_compositor_the_newest_buffer_after_dropping_old_buffers

1732. By Daniel van Vugt on 2014-07-28

Make tests which require 3 buffers more consistent

1733. By Daniel van Vugt on 2014-07-28

Fix failing test: TEST_F(StaleFrames, are_dropped_when_restarting_compositor)
It was designed to assume three unique buffers available by default. Remove
that assumption.

1734. By Daniel van Vugt on 2014-07-28

Lower the resize delay to 100 frames so slow clients take only ~1.5s to
be detected as slow and switch to triple buffering.

Alexandros Frantzis (afrantzis) wrote :

Not a thorough review yet, looks good overall on a first pass.

86 + if (client_behind && missed_frames < queue_resize_delay_frames)
87 + {
88 + ++missed_frames;
89 + if (missed_frames >= queue_resize_delay_frames)
90 + extra_buffers = 1;
91 + }
92 +
93 + if (!client_behind && missed_frames > 0)
102 + if (missed_frames < queue_resize_delay_frames)
103 + --missed_frames;

It would be nice to have tests for the expected behavior of this piece of code.

review: Needs Fixing
Daniel van Vugt (vanvugt) wrote :

There are already tests for that portion of logic:

  TEST_F(BufferQueueTest, slow_client_framerate_matches_compositor)
  TEST_F(BufferQueueTest, queue_size_scales_instantly_on_framedropping)
  TEST_F(BufferQueueTest, queue_size_scales_for_slow_clients)

Although it's always possible to add more.

Alexandros Frantzis (afrantzis) wrote :

> Although it's always possible to add more.

One test that is missing (at least not tested explicitly) is to verify that: "If the ceiling is hit, keep it there with the extra buffer allocated so we don't shrink again and cause yet more missed frames."
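The counter under discussion can be sketched on its own (names follow the quoted hunk; the ceiling latch is exactly the behaviour the requested test should pin down): missed_frames climbs while the client is behind, decays while it keeps up, but once the ceiling is reached it stays there so the queue never shrinks back and causes more missed frames.

```cpp
// Standalone sketch of the missed-frames counter from the quoted hunk.
struct MissedFrameCounter
{
    int missed_frames = 0;
    int extra_buffers = 0;
    int const queue_resize_delay_frames = 100;

    void composite_done(bool client_behind)
    {
        if (client_behind && missed_frames < queue_resize_delay_frames)
        {
            ++missed_frames;
            if (missed_frames >= queue_resize_delay_frames)
                extra_buffers = 1;  // expand to triple buffering
        }
        if (!client_behind && missed_frames > 0)
        {
            if (missed_frames < queue_resize_delay_frames)  // latched at top
                --missed_frames;
        }
    }
};
```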

lp:~vanvugt/mir/double updated on 2014-07-29
1735. By Daniel van Vugt on 2014-07-29

Merge latest development-branch

1736. By Daniel van Vugt on 2014-07-29

Merge latest development-branch

1737. By Daniel van Vugt on 2014-07-29

Add unit test: BufferQueueTest.switch_to_triple_buffers_is_permanent

This verifies we don't accidentally ping-pong between 2 and 3 buffers.

Alexandros Frantzis (afrantzis) wrote :

> * A client that's keeping up will be in-phase with composition. That
> * means it will stay, or quickly equalize at a point where there are
> * no client buffers still held when composition finishes.

This criterion only works well when the client is actually trying to keep up with the compositor (i.e. 60Hz). Clients that refresh less often on purpose or only when reacting to user input will become triple-buffered if the compositor is triggered by another source and they are not updating fast enough.

This happens, for example, if we run both mir_demo_client_egltriangle and mir_demo_client_fingerpaint. The frames from egltriangle trigger the compositor and cause mir_demo_client_fingerpaint to become triple-buffered after a short while. The same happens if we just have mir_demo_client_fingerpaint and move its surface around in the demo shell.

We probably need additional logic/heuristics to make this work properly (I haven't thought through what that logic might be, though).

review: Needs Fixing
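The distinction the later revisions adopt can be sketched roughly as follows (a hedged restatement of the client_lag idea in the diff below, not the branch's exact code): a slow client is exactly one frame behind at every composite, while an idle client — fingerpaint waiting for input — simply stops submitting, so its lag keeps growing and must not count as "failing to keep up".

```cpp
// Sketch of the slow-vs-idle distinction (simplified from client_lag).
struct LagTracker
{
    int client_lag = 0;

    void compositor_took_new_frame() { client_lag = 1; }
    void compositor_finished()
    {
        if (client_lag)
            ++client_lag;  // grows past 1 while the client stays quiet
    }
    void client_submitted() { client_lag = 0; }

    // Only a client actively racing composition counts as slow.
    bool client_behind() const { return client_lag == 1; }
};
```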
lp:~vanvugt/mir/double updated on 2014-07-30
1738. By Daniel van Vugt on 2014-07-30

Merge latest development-branch

1739. By Daniel van Vugt on 2014-07-30

Add a regression test for the idle-vs-slow client problem Alexandros
described. Presently failing.

1740. By Daniel van Vugt on 2014-07-30

Move the "missed frames" decision to a point where it won't be skewed by
multiple compositors (multi-monitor).

1741. By Daniel van Vugt on 2014-07-30

Prototype a detection of "idle" clients. Other questionable tests still
failing.

1742. By Daniel van Vugt on 2014-07-30

Simplify

1743. By Daniel van Vugt on 2014-07-30

More readable.

1744. By Daniel van Vugt on 2014-07-30

Fix failing test: BufferQueueTest.queue_size_scales_for_slow_clients
It needed some enhancement to not be detected as an "idle" client by the
new logic.

1745. By Daniel van Vugt on 2014-07-30

Fix test case: BufferQueueTest.switch_to_triple_buffers_is_permanent
The new idle detection logic was not seeing any client activity and so
deemed the client as not trying to keep up, so some "slow client" activity
was added.

1746. By Daniel van Vugt on 2014-07-30

Simplified idle detection

1747. By Daniel van Vugt on 2014-07-30

Fix comment typo

1748. By Daniel van Vugt on 2014-07-30

Add another regression test which shows premature expansion for really
slow clients.

1749. By Daniel van Vugt on 2014-07-30

Fixed: Mis-detection of a really slow client (low self-determined frame rate)
as failing to keep up. Regression test now passes.

1750. By Daniel van Vugt on 2014-07-30

Remove debug code

Daniel van Vugt (vanvugt) wrote :

Excellent point Alexandros. That's now fixed with extra tests for the scenarios you describe.

Unfortunately I just realized there's another situation that probably needs fixing: a system with a single slow-ticking client (like a clock) won't ever know that the compositor is capable of running any faster than the client is, and so will classify the client as slow and switch to triple buffers prematurely.

review: Needs Fixing
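The worry can be illustrated with a toy simulation (entirely hypothetical, not code from this branch): on a busy display the clock's lag quickly exceeds one between its own frames, so the missed-frames count decays back to zero; but when the clock is the only thing triggering composition, every composite sees it exactly one frame behind, and a purely phase-based heuristic drifts towards triple buffers anyway.

```cpp
// Toy model: each client frame is followed by some number of composites.
// The first composite sees the client one frame behind ("slow"); any
// further composites before the next client frame see it idle.
int missed_frames_after(int composites_per_client_frame, int client_frames,
                        int ceiling)
{
    int missed_frames = 0;
    for (int f = 0; f < client_frames; ++f)
    {
        int client_lag = 0;  // client just submitted a frame
        for (int c = 0; c < composites_per_client_frame; ++c)
        {
            ++client_lag;  // compositor consumes, then holds, the frame
            if (client_lag == 1 && missed_frames < ceiling)
                ++missed_frames;   // looked slow on this composite
            else if (client_lag > 1 && missed_frames > 0)
                --missed_frames;   // looked idle: recover
        }
    }
    return missed_frames;
}
```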
Daniel van Vugt (vanvugt) wrote :

I'm now tempted to divide this proposal into two:
  1. Basic test case fixes to support future double buffering; and
  2. Heuristic switching between double and triple buffering (work in progress).

lp:~vanvugt/mir/double updated on 2014-08-01
1751. By Daniel van Vugt on 2014-07-31

Merge latest development-branch

1752. By Daniel van Vugt on 2014-08-01

Merge latest development-branch

1753. By Daniel van Vugt on 2014-08-01

Slightly simpler and more straightforward detection of slow vs idle clients.

1754. By Daniel van Vugt on 2014-08-01

Shrink the diff

1755. By Daniel van Vugt on 2014-08-01

More comments

1756. By Daniel van Vugt on 2014-08-01

Add a regression test for the slow-ticking-client-mostly-idle-desktop case.

1757. By Daniel van Vugt on 2014-08-01

test_buffer_queue.cpp: De-duplicate buffer counting logic

1758. By Daniel van Vugt on 2014-08-01

Fix indentation in new tests

1759. By Daniel van Vugt on 2014-08-01

test_buffer_queue.cpp: Major simplification - reuse a common function to
measure unique buffers.

1760. By Daniel van Vugt on 2014-08-01

Tidy up tests

1761. By Daniel van Vugt on 2014-08-01

Remove unnecessary sleeps from new tests

1762. By Daniel van Vugt on 2014-08-01

Go back to the old algorithm, keep the new tests.

1763. By Daniel van Vugt on 2014-08-01

A slightly more elegant measure of client_lag

lp:~vanvugt/mir/double updated on 2014-11-23
1764. By Daniel van Vugt on 2014-08-04

Merge latest development-branch

1765. By Daniel van Vugt on 2014-08-05

Merge latest development-branch

1766. By Daniel van Vugt on 2014-08-06

Merge latest development-branch

1767. By Daniel van Vugt on 2014-08-07

Merge latest development-branch

1768. By Daniel van Vugt on 2014-08-08

Merge latest development-branch

1769. By Daniel van Vugt on 2014-08-12

Merge latest development-branch

1770. By Daniel van Vugt on 2014-08-18

Merge latest development-branch

1771. By Daniel van Vugt on 2014-09-02

Merge latest development-branch and fix conflicts.

1772. By Daniel van Vugt on 2014-09-22

Merge latest development-branch

1773. By Daniel van Vugt on 2014-09-25

Merge latest development-branch

1774. By Daniel van Vugt on 2014-10-01

Merge latest development-branch

1775. By Daniel van Vugt on 2014-10-01

Merge latest development-branch

1776. By Daniel van Vugt on 2014-10-02

Merge latest development-branch

1777. By Daniel van Vugt on 2014-10-03

Merge latest development-branch

1778. By Daniel van Vugt on 2014-10-03

Merge (rebase on) 'double-integration-tests' to shrink the diff of
this branch to its new parent.

1779. By Daniel van Vugt on 2014-10-06

Merge latest development-branch

1780. By Daniel van Vugt on 2014-10-13

Merge latest development-branch and fix conflicts.

1781. By Daniel van Vugt on 2014-10-14

Merge latest development-branch

1782. By Daniel van Vugt on 2014-10-15

Merge latest development-branch

1783. By Daniel van Vugt on 2014-10-21

Merge latest development-branch

1784. By Daniel van Vugt on 2014-10-22

Merge latest development-branch

1785. By Daniel van Vugt on 2014-10-29

Merge latest development-branch and fix a conflict.

1786. By Daniel van Vugt on 2014-11-03

Merge latest development-branch

1787. By Daniel van Vugt on 2014-11-04

Merge latest development-branch

1788. By Daniel van Vugt on 2014-11-05

Merge latest development-branch

1789. By Daniel van Vugt on 2014-11-06

Merge latest trunk

1790. By Daniel van Vugt on 2014-11-07

Merge latest trunk

1791. By Daniel van Vugt on 2014-11-10

Merge latest trunk

1792. By Daniel van Vugt on 2014-11-11

Merge latest trunk

1793. By Daniel van Vugt on 2014-11-12

Merge latest trunk

1794. By Daniel van Vugt on 2014-11-13

Merge latest trunk

1795. By Daniel van Vugt on 2014-11-14

Merge latest trunk

1796. By Daniel van Vugt on 2014-11-23

Merge latest trunk

1797. By Daniel van Vugt on 2014-11-23

Prototype fix for the crashing integration test

1798. By Daniel van Vugt on 2014-11-23

A simpler fix for the crashing test -- don't try to release the same
old_buffer multiple times.

1799. By Daniel van Vugt on 2014-11-23

Even simpler


Preview Diff

1=== modified file 'src/server/compositor/buffer_queue.cpp'
2--- src/server/compositor/buffer_queue.cpp 2014-10-29 06:21:10 +0000
3+++ src/server/compositor/buffer_queue.cpp 2014-11-23 08:19:33 +0000
4@@ -92,18 +92,23 @@
5 }
6
7 mc::BufferQueue::BufferQueue(
8- int nbuffers,
9+ int max_buffers,
10 std::shared_ptr<graphics::GraphicBufferAllocator> const& gralloc,
11 graphics::BufferProperties const& props,
12 mc::FrameDroppingPolicyFactory const& policy_provider)
13- : nbuffers{nbuffers},
14+ : min_buffers{std::min(2, max_buffers)}, // TODO: Configurable in future
15+ max_buffers{max_buffers},
16+ missed_frames{0},
17+ queue_resize_delay_frames{100},
18+ extra_buffers{0},
19+ client_lag{0},
20 frame_dropping_enabled{false},
21 the_properties{props},
22 force_new_compositor_buffer{false},
23 callbacks_allowed{true},
24 gralloc{gralloc}
25 {
26- if (nbuffers < 1)
27+ if (max_buffers < 1)
28 {
29 BOOST_THROW_EXCEPTION(
30 std::logic_error("invalid number of buffers for BufferQueue"));
31@@ -111,9 +116,9 @@
32
33 /* By default not all buffers are allocated.
34 * If there is increased pressure by the client to acquire
35- * more buffers, more will be allocated at that time (up to nbuffers)
36+ * more buffers, more will be allocated at that time (up to max_buffers)
37 */
38- for(int i = 0; i < std::min(nbuffers, 2); i++)
39+ for (int i = 0; i < min_buffers; ++i)
40 {
41 buffers.push_back(gralloc->alloc_buffer(the_properties));
42 }
43@@ -129,7 +134,7 @@
44 /* Special case: with one buffer both clients and compositors
45 * need to share the same buffer
46 */
47- if (nbuffers == 1)
48+ if (max_buffers == 1)
49 free_buffers.push_back(current_compositor_buffer);
50
51 framedrop_policy = policy_provider.create_policy([this]
52@@ -173,6 +178,7 @@
53 void mc::BufferQueue::client_acquire(mc::BufferQueue::Callback complete)
54 {
55 std::unique_lock<decltype(guard)> lock(guard);
56+// fprintf(stderr, "%s\n", __FUNCTION__);
57
58 pending_client_notifications.push_back(std::move(complete));
59
60@@ -184,12 +190,8 @@
61 return;
62 }
63
64- /* No empty buffers, attempt allocating more
65- * TODO: need a good heuristic to switch
66- * between double-buffering to n-buffering
67- */
68 int const allocated_buffers = buffers.size();
69- if (allocated_buffers < nbuffers)
70+ if (allocated_buffers < ideal_buffers())
71 {
72 auto const& buffer = gralloc->alloc_buffer(the_properties);
73 buffers.push_back(buffer);
74@@ -215,6 +217,8 @@
75 {
76 std::lock_guard<decltype(guard)> lock(guard);
77
78+// fprintf(stderr, "%s\n", __FUNCTION__);
79+
80 if (buffers_owned_by_client.empty())
81 {
82 BOOST_THROW_EXCEPTION(
83@@ -235,6 +239,7 @@
84 mc::BufferQueue::compositor_acquire(void const* user_id)
85 {
86 std::unique_lock<decltype(guard)> lock(guard);
87+// fprintf(stderr, "%s\n", __FUNCTION__);
88
89 bool use_current_buffer = false;
90 if (!current_buffer_users.empty() && !is_a_current_buffer_user(user_id))
91@@ -280,7 +285,10 @@
92 buffer_for(current_compositor_buffer, buffers);
93
94 if (buffer_to_release)
95+ {
96+ client_lag = 1;
97 release(buffer_to_release, std::move(lock));
98+ }
99
100 return acquired_buffer;
101 }
102@@ -288,6 +296,7 @@
103 void mc::BufferQueue::compositor_release(std::shared_ptr<graphics::Buffer> const& buffer)
104 {
105 std::unique_lock<decltype(guard)> lock(guard);
106+// fprintf(stderr, "%s\n", __FUNCTION__);
107
108 if (!remove(buffer.get(), buffers_sent_to_compositor))
109 {
110@@ -299,7 +308,52 @@
111 if (contains(buffer.get(), buffers_sent_to_compositor))
112 return;
113
114- if (nbuffers <= 1)
115+ /*
116+ * Calculate if we need extra buffers in the queue to account for a slow
117+ * client that can't keep up with composition.
118+ */
119+ if (frame_dropping_enabled)
120+ {
121+ missed_frames = 0;
122+ extra_buffers = 0;
123+ }
124+ else
125+ {
126+ /*
127+ * A client that's keeping up will be in-phase with composition. That
128+ * means it will stay, or quickly equalize at a point where there are
129+ * no client buffers still held when composition finishes.
130+ */
131+ if (client_lag == 1 && missed_frames < queue_resize_delay_frames)
132+ {
133+ ++missed_frames;
134+ if (missed_frames >= queue_resize_delay_frames)
135+ extra_buffers = 1;
136+ }
137+
138+ if (client_lag != 1 && missed_frames > 0)
139+ {
140+ /*
141+ * Allow missed_frames to recover back down to zero, so long as
142+ * the ceiling is never hit (meaning you're keeping up most of the
143+ * time). If the ceiling is hit, keep it there with the extra
144+ * buffer allocated so we don't shrink again and cause yet more
145+ * missed frames.
146+ */
147+ if (missed_frames < queue_resize_delay_frames)
148+ --missed_frames;
149+ }
150+
151+// fprintf(stderr, "missed_frames %d, extra %d\n",
152+// missed_frames, extra_buffers);
153+ }
154+
155+ // Let client_lag go above 1 to represent an idle client (one that's
156+ // sleeping and not trying to keep up with the compositor)
157+ if (client_lag)
158+ ++client_lag;
159+
160+ if (max_buffers <= 1)
161 return;
162
163 /*
164@@ -322,7 +376,10 @@
165 }
166
167 if (current_compositor_buffer != buffer.get())
168+ {
169+ client_lag = 0;
170 release(buffer.get(), std::move(lock));
171+ }
172 }
173
174 std::shared_ptr<mg::Buffer> mc::BufferQueue::snapshot_acquire()
175@@ -354,6 +411,14 @@
176 {
177 std::lock_guard<decltype(guard)> lock(guard);
178 frame_dropping_enabled = flag;
179+
180+ while (static_cast<int>(buffers.size()) > ideal_buffers() &&
181+ !free_buffers.empty())
182+ {
183+ auto surplus = free_buffers.back();
184+ free_buffers.pop_back();
185+ drop_buffer(surplus);
186+ }
187 }
188
189 bool mc::BufferQueue::framedropping_allowed() const
190@@ -397,20 +462,14 @@
191 int mc::BufferQueue::buffers_free_for_client() const
192 {
193 std::lock_guard<decltype(guard)> lock(guard);
194- int ret = 1;
195- if (nbuffers > 1)
196- {
197- int nfree = free_buffers.size();
198- int future_growth = nbuffers - buffers.size();
199- ret = nfree + future_growth;
200- }
201- return ret;
202+ return max_buffers > 1 ? free_buffers.size() : 1;
203 }
204
205 void mc::BufferQueue::give_buffer_to_client(
206 mg::Buffer* buffer,
207 std::unique_lock<std::mutex> lock)
208 {
209+// fprintf(stderr, "%s\n", __FUNCTION__);
210 /* Clears callback */
211 auto give_to_client_cb = std::move(pending_client_notifications.front());
212 pending_client_notifications.pop_front();
213@@ -424,7 +483,7 @@
214 /* Special case: the current compositor buffer also needs to be
215 * replaced as it's shared with the client
216 */
217- if (nbuffers == 1)
218+ if (max_buffers == 1)
219 current_compositor_buffer = buffer;
220 }
221
222@@ -463,7 +522,13 @@
223 mg::Buffer* buffer,
224 std::unique_lock<std::mutex> lock)
225 {
226- if (!pending_client_notifications.empty())
227+ int used_buffers = buffers.size() - free_buffers.size();
228+
229+ if (used_buffers > ideal_buffers() && max_buffers > 1)
230+ {
231+ drop_buffer(buffer);
232+ }
233+ else if (!pending_client_notifications.empty())
234 {
235 framedrop_policy->swap_unblocked();
236 give_buffer_to_client(buffer, std::move(lock));
237@@ -472,6 +537,41 @@
238 free_buffers.push_back(buffer);
239 }
240
241+int mc::BufferQueue::ideal_buffers() const
242+{
243+ int result = frame_dropping_enabled ? 3 : 2;
244+
245+ // Add extra buffers if we can see the client's not keeping up with
246+ // composition.
247+ result += extra_buffers;
248+
249+ if (result < min_buffers)
250+ result = min_buffers;
251+ if (result > max_buffers)
252+ result = max_buffers;
253+
254+ return result;
255+}
256+
257+void mc::BufferQueue::set_resize_delay(int nframes)
258+{
259+ queue_resize_delay_frames = nframes;
260+ if (nframes == 0)
261+ extra_buffers = 1;
262+}
263+
264+void mc::BufferQueue::drop_buffer(graphics::Buffer* buffer)
265+{
266+ for (auto i = buffers.begin(); i != buffers.end(); ++i)
267+ {
268+ if (i->get() == buffer)
269+ {
270+ buffers.erase(i);
271+ break;
272+ }
273+ }
274+}
275+
276 void mc::BufferQueue::drop_frame(std::unique_lock<std::mutex> lock)
277 {
278 auto buffer_to_give = pop(ready_to_composite_queue);
279
280=== modified file 'src/server/compositor/buffer_queue.h'
281--- src/server/compositor/buffer_queue.h 2014-10-29 06:21:10 +0000
282+++ src/server/compositor/buffer_queue.h 2014-11-23 08:19:33 +0000
283@@ -42,7 +42,7 @@
284 public:
285 typedef std::function<void(graphics::Buffer* buffer)> Callback;
286
287- BufferQueue(int nbuffers,
288+ BufferQueue(int max_buffers,
289 std::shared_ptr<graphics::GraphicBufferAllocator> const& alloc,
290 graphics::BufferProperties const& props,
291 FrameDroppingPolicyFactory const& policy_provider);
292@@ -64,6 +64,7 @@
293 bool is_a_current_buffer_user(void const* user_id) const;
294 void drop_old_buffers() override;
295 void drop_client_requests() override;
296+ void set_resize_delay(int nframes); ///< negative means never resize
297
298 private:
299 void give_buffer_to_client(graphics::Buffer* buffer,
300@@ -71,6 +72,8 @@
301 void release(graphics::Buffer* buffer,
302 std::unique_lock<std::mutex> lock);
303 void drop_frame(std::unique_lock<std::mutex> lock);
304+ int ideal_buffers() const;
305+ void drop_buffer(graphics::Buffer* buffer);
306
307 mutable std::mutex guard;
308
309@@ -86,7 +89,9 @@
310
311 std::deque<Callback> pending_client_notifications;
312
313- int nbuffers;
314+ int const min_buffers, max_buffers;
315+ int missed_frames, queue_resize_delay_frames, extra_buffers;
316+ int client_lag;
317 bool frame_dropping_enabled;
318 graphics::BufferProperties the_properties;
319 bool force_new_compositor_buffer;
320
321=== modified file 'tests/integration-tests/surface_composition.cpp'
322--- tests/integration-tests/surface_composition.cpp 2014-10-02 11:56:12 +0000
323+++ tests/integration-tests/surface_composition.cpp 2014-11-23 08:19:33 +0000
324@@ -108,16 +108,23 @@
325
326 mg::Buffer* old_buffer{nullptr};
327
328+ bool called_back = true;
329 auto const callback = [&] (mg::Buffer* new_buffer)
330 {
331 // If surface is dead then callback is not expected
332 EXPECT_THAT(surface.get(), NotNull());
333 old_buffer = new_buffer;
334+ called_back = true;
335 };
336
337 // Exhaust the buffers to ensure we have a pending swap to complete
338- for (auto i = 0; i != number_of_buffers; ++i)
339+ // But also be careful to not pass a formerly released non-null old_buffer
340+ // in to swap_buffers...
341+ while (called_back)
342+ {
343+ called_back = false;
344 surface->swap_buffers(old_buffer, callback);
345+ }
346
347 auto const renderable = surface->compositor_snapshot(this);
348
349
350=== modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp'
351--- tests/unit-tests/compositor/test_buffer_queue.cpp 2014-10-29 06:21:10 +0000
352+++ tests/unit-tests/compositor/test_buffer_queue.cpp 2014-11-23 08:19:33 +0000
353@@ -304,6 +304,7 @@
354 int const nbuffers = 3;
355 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
356
357+ q.set_resize_delay(0); // Force enabling of the third buffer
358 int const prefill = q.buffers_free_for_client();
359 ASSERT_THAT(prefill, Gt(0));
360 for (int i = 0; i < prefill; ++i)
361@@ -314,10 +315,7 @@
362 }
363
364 auto handle1 = client_acquire_async(q);
365- ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
366-
367 auto handle2 = client_acquire_async(q);
368- ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false));
369
370 for (int i = 0; i < nbuffers + 1; ++i)
371 q.compositor_release(q.compositor_acquire(this));
372@@ -386,7 +384,6 @@
373 handle->release_buffer();
374
375 handle = client_acquire_async(q);
376- ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
377 handle->release_buffer();
378
379 auto comp_buffer = q.compositor_acquire(this);
380@@ -400,6 +397,8 @@
381 {
382 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
383
384+ q.set_resize_delay(0); // Force enabling of the third buffer
385+
386 auto handle1 = client_acquire_async(q);
387 ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
388
389@@ -426,7 +425,7 @@
390 compositor_thread, std::ref(q), std::ref(done));
391
392 std::unordered_set<uint32_t> ids_acquired;
393- int const max_ownable_buffers = nbuffers - 1;
394+ int const max_ownable_buffers = q.buffers_free_for_client();
395 for (int i = 0; i < max_ownable_buffers*2; ++i)
396 {
397 std::vector<mg::Buffer *> client_buffers;
398@@ -445,7 +444,9 @@
399 }
400 }
401
402- EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
403+ // Expect only two since we're now double-buffered and are not
404+ // allowing frame dropping.
405+ EXPECT_THAT(ids_acquired.size(), Eq(2));
406 }
407 }
408
409@@ -499,7 +500,7 @@
410 {
411 std::deque<mg::BufferID> client_release_sequence;
412 std::vector<mg::Buffer *> buffers;
413- int const max_ownable_buffers = nbuffers - 1;
414+ int const max_ownable_buffers = q.buffers_free_for_client();
415 for (int i = 0; i < max_ownable_buffers; ++i)
416 {
417 auto handle = client_acquire_async(q);
418@@ -632,13 +633,13 @@
419 handle->release_buffer();
420
421 handle = client_acquire_async(q);
422- ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
423
424 // in the original bug, compositor would be given the wrong buffer here
425 auto compositor_buffer = q.compositor_acquire(this);
426
427 EXPECT_THAT(compositor_buffer->id(), Eq(first_ready_buffer_id));
428
429+ ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
430 handle->release_buffer();
431 q.compositor_release(compositor_buffer);
432 }
433@@ -806,7 +807,14 @@
434 handle->release_buffer();
435 }
436
437- EXPECT_THAT(ids_acquired.size(), Eq(nbuffers));
438+ /*
439+ * Dynamic queue scaling sensibly limits the framedropping client to
440+ * two non-overlapping buffers, before overwriting old ones. Allowing
441+ * any more would just waste space (buffers). So this means a
442+ * frame-dropping client won't ever see more than 3 unique buffers
443+ */
444+ int const max_ownable_buffers = std::min(nbuffers, 3);
445+ EXPECT_THAT(ids_acquired.size(), Eq(max_ownable_buffers));
446 }
447 }
448
449@@ -902,6 +910,7 @@
450 auto const frame_time = std::chrono::milliseconds(16);
451
452 q.allow_framedropping(false);
453+ q.set_resize_delay(1);
454
455 std::atomic<bool> done(false);
456 std::mutex sync;
457@@ -977,20 +986,13 @@
458 }
459 }
460
461-namespace
462-{
463-int max_ownable_buffers(int nbuffers)
464-{
465- return (nbuffers == 1) ? 1 : nbuffers - 1;
466-}
467-}
468-
469 TEST_F(BufferQueueTest, compositor_acquires_resized_frames)
470 {
471 for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers)
472 {
473 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
474 std::vector<mg::BufferID> history;
475+ std::shared_ptr<AcquireWaitHandle> producing[5];
476
477 const int width0 = 123;
478 const int height0 = 456;
479@@ -1001,8 +1003,7 @@
480 int const nbuffers_to_use = q.buffers_free_for_client();
481 ASSERT_THAT(nbuffers_to_use, Gt(0));
482
483- int max_buffers{max_ownable_buffers(nbuffers)};
484- for (int produce = 0; produce < max_buffers; ++produce)
485+ for (int produce = 0; produce < nbuffers_to_use; ++produce)
486 {
487 geom::Size new_size{width, height};
488 width += dx;
489@@ -1012,16 +1013,23 @@
490 auto handle = client_acquire_async(q);
491 ASSERT_THAT(handle->has_acquired_buffer(), Eq(true));
492 history.emplace_back(handle->id());
493+ producing[produce] = handle;
494 auto buffer = handle->buffer();
495 ASSERT_THAT(buffer->size(), Eq(new_size));
496- handle->release_buffer();
497+ }
498+
99+ // Overlap all the client_acquires asynchronously. It's the only way
500+ // the new dynamic queue scaling will let a client hold that many...
501+ for (int complete = 0; complete < nbuffers_to_use; ++complete)
502+ {
503+ producing[complete]->release_buffer();
504 }
505
506 width = width0;
507 height = height0;
508
509- ASSERT_THAT(history.size(), Eq(max_buffers));
510- for (int consume = 0; consume < max_buffers; ++consume)
511+ ASSERT_THAT(history.size(), Eq(nbuffers_to_use));
512+ for (int consume = 0; consume < nbuffers_to_use; ++consume)
513 {
514 geom::Size expect_size{width, height};
515 width += dx;
516@@ -1062,7 +1070,8 @@
517 basic_properties,
518 policy_factory);
519
520- for (int i = 0; i < max_ownable_buffers(nbuffers); i++)
521+ auto max_ownable_buffers = q.buffers_free_for_client();
522+ for (int i = 0; i < max_ownable_buffers; i++)
523 {
524 auto client = client_acquire_sync(q);
525 q.client_release(client);
526@@ -1089,7 +1098,8 @@
527 basic_properties,
528 policy_factory);
529
530- for (int i = 0; i < max_ownable_buffers(nbuffers); i++)
531+ int const max_ownable_buffers = q.buffers_free_for_client();
532+ for (int i = 0; i < max_ownable_buffers; i++)
533 {
534 auto client = client_acquire_sync(q);
535 q.client_release(client);
536@@ -1379,7 +1389,7 @@
537 compositor_thread, std::ref(q), std::ref(done));
538
539 std::unordered_set<mg::Buffer *> unique_buffers_acquired;
540- int const max_ownable_buffers = nbuffers - 1;
541+ int const max_ownable_buffers = q.buffers_free_for_client();
542 for (int frame = 0; frame < max_ownable_buffers*2; frame++)
543 {
544 std::vector<mg::Buffer *> client_buffers;
545@@ -1398,13 +1408,14 @@
546 }
547 }
548
549- EXPECT_THAT(unique_buffers_acquired.size(), Eq(nbuffers));
550+ // Expect one more than max_ownable_buffers, to include the one that
551+ // is silently reserved for compositing.
552+ EXPECT_THAT(unique_buffers_acquired.size(), Eq(max_ownable_buffers+1));
553
554 }
555 }
556
557-/* FIXME (enabling this optimization breaks timing tests) */
558-TEST_F(BufferQueueTest, DISABLED_synchronous_clients_only_get_two_real_buffers)
559+TEST_F(BufferQueueTest, synchronous_clients_only_get_two_real_buffers)
560 {
561 for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers)
562 {
563@@ -1413,6 +1424,13 @@
564
565 std::atomic<bool> done(false);
566 auto unblock = [&done] { done = true; };
567+
568+ // With an unthrottled compositor_thread it will look like the client
569+ // isn't keeping up and the buffer queue would normally auto-expand.
570+ // Increase the auto-expansion threshold to ensure that doesn't happen
571+ // during this test...
572+ q.set_resize_delay(-1);
573+
574 mt::AutoUnblockThread compositor(unblock,
575 compositor_thread, std::ref(q), std::ref(done));
576
577@@ -1495,6 +1513,9 @@
578 int const nbuffers = 3;
579 mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory);
580
581+ // Ensure this test gets 3 real buffers immediately.
582+ q.set_resize_delay(0);
583+
584 auto handle1 = client_acquire_async(q);
585 ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true));
586 handle1->release_buffer();
587@@ -1537,3 +1558,263 @@
588 auto comp2 = q.compositor_acquire(new_compositor_id);
589 ASSERT_THAT(comp2->id(), Eq(handle2->id()));
590 }
591+
592+namespace
593+{
594+ int unique_buffers(mc::BufferQueue& q)
595+ {
596+ std::atomic<bool> done(false);
597+
598+ auto unblock = [&done] { done = true; };
599+ mt::AutoUnblockThread compositor(unblock,
600+ compositor_thread, std::ref(q), std::ref(done));
601+
602+ std::unordered_set<mg::Buffer*> buffers_acquired;
603+ for (int frame = 0; frame < 100; frame++)
604+ {
605+ auto handle = client_acquire_async(q);
606+ handle->wait_for(std::chrono::seconds(1));
607+ if (!handle->has_acquired_buffer())
608+ return -1;
609+ buffers_acquired.insert(handle->buffer());
610+ handle->release_buffer();
611+ }
612+
613+ return static_cast<int>(buffers_acquired.size());
614+ }
615+
616+ int unique_synchronous_buffers(mc::BufferQueue& q)
617+ {
618+ if (q.framedropping_allowed())
619+ return 0;
620+
621+ int const max = 10;
622+
623+ // Flush the queue
624+ for (int f = 0; f < max; ++f)
625+ q.compositor_release(q.compositor_acquire(0));
626+
627+ auto compositor = q.compositor_acquire(0);
628+ int count = 1; // ^ count the compositor buffer
629+
630+ std::shared_ptr<AcquireWaitHandle> client;
631+ while (count < max)
632+ {
633+ client = client_acquire_async(q);
634+ if (!client->has_acquired_buffer())
635+ break;
636+ ++count;
637+ client->release_buffer();
638+ }
639+
640+ q.compositor_release(compositor);
641+ EXPECT_TRUE(client->has_acquired_buffer());
642+ client->release_buffer();
643+
644+ // Flush the queue
645+ for (int f = 0; f < max; ++f)
646+ q.compositor_release(q.compositor_acquire(0));
647+
648+ return count;
649+ }
650+} // namespace
651+
652+TEST_F(BufferQueueTest, queue_size_scales_instantly_on_framedropping)
653+{
654+ for (int max_buffers = 1; max_buffers < max_nbuffers_to_test; ++max_buffers)
655+ {
656+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
657+ policy_factory);
658+
659+ q.set_resize_delay(-1);
660+
661+ // Default: No frame dropping; expect double buffering
662+ q.allow_framedropping(false);
663+ EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
664+
665+ // Enable frame dropping; expect triple buffering immediately
666+ q.allow_framedropping(true);
667+ EXPECT_EQ(std::min(max_buffers, 3), unique_buffers(q));
668+
669+ // Revert back to no frame dropping; expect double buffering
670+ q.allow_framedropping(false);
671+ EXPECT_EQ(std::min(max_buffers, 2), unique_buffers(q));
672+ }
673+}
674+
675+TEST_F(BufferQueueTest, queue_size_scales_for_slow_clients)
676+{
677+ for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
678+ {
679+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
680+ policy_factory);
681+
682+ q.allow_framedropping(false);
683+
684+ int const delay = 10;
685+ q.set_resize_delay(delay);
686+
687+ // Verify we're starting with double buffering
688+ EXPECT_EQ(2, unique_synchronous_buffers(q));
689+
690+ // Simulate a slow client. Not an idle one, but one trying to keep up
691+ // and repeatedly missing the frame deadline.
692+ for (int f = 0; f < delay*2; ++f)
693+ {
694+ auto client = client_acquire_async(q);
695+ q.compositor_release(q.compositor_acquire(this));
696+ ASSERT_TRUE(client->has_acquired_buffer());
697+ client->release_buffer();
698+ }
699+
700+ // Verify we now have triple buffers
701+ EXPECT_EQ(3, unique_synchronous_buffers(q));
702+ }
703+}
704+
705+TEST_F(BufferQueueTest, switch_to_triple_buffers_is_permanent)
706+{
707+ for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
708+ {
709+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
710+ policy_factory);
711+
712+ q.allow_framedropping(false);
713+
714+ int const delay = 10;
715+ q.set_resize_delay(delay);
716+
717+ // First, verify we only have 2 real buffers
718+ EXPECT_EQ(2, unique_synchronous_buffers(q));
719+
720+ // Simulate a slow client. Not an idle one, but one trying to keep up
721+ // and repeatedly missing the frame deadline.
722+ for (int f = 0; f < delay*2; ++f)
723+ {
724+ auto client = client_acquire_async(q);
725+ q.compositor_release(q.compositor_acquire(this));
726+ ASSERT_TRUE(client->has_acquired_buffer());
727+ client->release_buffer();
728+ }
729+
730+ EXPECT_EQ(3, unique_synchronous_buffers(q));
731+
732+ // Now let the client behave "well" and keep up.
733+ for (int f = 0; f < delay*10; ++f)
734+ {
735+ q.client_release(client_acquire_sync(q));
736+ q.compositor_release(q.compositor_acquire(this));
737+ }
738+
739+ // Make sure the queue has stayed expanded: repeat the earlier
740+ // check and verify we still get triple buffers...
741+ EXPECT_EQ(3, unique_synchronous_buffers(q));
742+ }
743+}
744+
745+TEST_F(BufferQueueTest, idle_clients_dont_get_expanded_buffers)
746+{
747+ for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
748+ {
749+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
750+ policy_factory);
751+
752+ q.allow_framedropping(false);
753+
754+ int const delay = 10;
755+ q.set_resize_delay(delay);
756+
757+ // First, verify we only have 2 real buffers
758+ int const contracted_nbuffers = 2;
759+ for (int f = 0; f < contracted_nbuffers-1; ++f)
760+ {
761+ auto client1 = client_acquire_async(q);
762+ ASSERT_TRUE(client1->has_acquired_buffer());
763+ client1->release_buffer();
764+ }
765+ auto client2 = client_acquire_async(q);
766+ ASSERT_FALSE(client2->has_acquired_buffer());
767+ q.compositor_release(q.compositor_acquire(this));
768+ ASSERT_TRUE(client2->has_acquired_buffer());
769+
770+ // Now hold client2's buffer for a little too long...
771+ for (int f = 0; f < delay*2; ++f)
772+ q.compositor_release(q.compositor_acquire(this));
773+ // If idle clients counted toward expansion, the queue would have grown.
774+
775+ client2->release_buffer();
776+
777+ // Verify we still only have double buffering. An idle client should
778+ // not be a reason to expand to triple.
779+ EXPECT_EQ(2, unique_synchronous_buffers(q));
780+ }
781+}
782+
783+TEST_F(BufferQueueTest, really_slow_clients_dont_get_expanded_buffers)
784+{
785+ for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
786+ {
787+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
788+ policy_factory);
789+
790+ q.allow_framedropping(false);
791+
792+ int const delay = 10;
793+ q.set_resize_delay(delay);
794+
795+ // First, verify we only have 2 real buffers
796+ EXPECT_EQ(2, unique_synchronous_buffers(q));
797+
798+ // Simulate a really slow client (one third frame rate) that can
799+ // never be helped by triple buffering.
800+ for (int f = 0; f < delay*2; ++f)
801+ {
802+ auto client = client_acquire_async(q);
803+
804+ q.compositor_release(q.compositor_acquire(this));
805+ q.compositor_release(q.compositor_acquire(this));
806+ q.compositor_release(q.compositor_acquire(this));
807+
808+ ASSERT_TRUE(client->has_acquired_buffer());
809+ client->release_buffer();
810+ }
811+
812+ // Verify we still only have double buffering. A client this slow
813+ // can never benefit from triple buffering, so don't expand.
814+ EXPECT_EQ(2, unique_synchronous_buffers(q));
815+ }
816+}
817+
818+TEST_F(BufferQueueTest, synchronous_clients_dont_get_expanded_buffers)
819+{
820+ for (int max_buffers = 3; max_buffers < max_nbuffers_to_test; ++max_buffers)
821+ {
822+ mc::BufferQueue q(max_buffers, allocator, basic_properties,
823+ policy_factory);
824+
825+ q.allow_framedropping(false);
826+
827+ int const delay = 3;
828+ q.set_resize_delay(delay);
829+
830+ // First, verify we only have 2 real buffers
831+ EXPECT_EQ(2, unique_synchronous_buffers(q));
832+
833+ // Simulate an idle shell with a single really slow client (like
834+ // a clock -- not something that's trying to keep up with the
835+ // refresh rate)
836+ for (int f = 0; f < delay*2; ++f)
837+ {
838+ auto client = client_acquire_async(q);
839+ ASSERT_TRUE(client->has_acquired_buffer());
840+ client->release_buffer();
841+
842+ q.compositor_release(q.compositor_acquire(this));
843+ }
844+
845+ // Verify we still only have double buffering. An idle desktop driven
846+ // by a single slow client should not be a reason to expand to triple.
847+ EXPECT_EQ(2, unique_synchronous_buffers(q));
848+ }
849+}
850+
