Merge lp:~mir-team/mir/enable-dynamic-buffer-queue into lp:mir
- enable-dynamic-buffer-queue
- Merge into development-branch
Status: Work in progress
Proposed branch: lp:~mir-team/mir/enable-dynamic-buffer-queue
Merge into: lp:mir
Diff against target: 1537 lines (+400/-361), 10 files modified
- include/test/mir_test/signal.h (+1/-0)
- src/server/compositor/buffer_queue.cpp (+85/-33)
- src/server/compositor/buffer_queue.h (+12/-3)
- src/server/compositor/buffer_stream_factory.cpp (+2/-3)
- tests/integration-tests/compositor/test_buffer_stream.cpp (+77/-63)
- tests/integration-tests/compositor/test_swapping_swappers.cpp (+1/-1)
- tests/integration-tests/graphics/android/test_buffer_integration.cpp (+2/-1)
- tests/integration-tests/test_surface_first_frame_sync.cpp (+1/-1)
- tests/mir_test/signal.cpp (+6/-0)
- tests/unit-tests/compositor/test_buffer_queue.cpp (+213/-256)
To merge this branch: bzr merge lp:~mir-team/mir/enable-dynamic-buffer-queue
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Kevin DuBois (community) | | | Needs Fixing
Daniel van Vugt | | | Needs Fixing
Alexandros Frantzis (community) | | | Needs Fixing
Alan Griffiths | | | Abstain
PS Jenkins bot (community) | continuous-integration | | Approve
Review via email:
Commit message
Enable dynamic reallocation in BufferQueue
Description of the change
Default to double buffering and enable dynamic reallocation in BufferQueue
This continues the work by Daniel from https:/
Change to double buffering by default; the queue will allocate more buffers for async clients (async clients are not possible with the current client API) or in special cases when frame dropping is enabled (multiple compositors, or a compositor in bypass).
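The growth heuristic can be seen in dynamic_min_buffers() in the preview diff below; as a standalone sketch (parameter names taken from the diff, the free-function form is ours for illustration):

```cpp
#include <algorithm>
#include <cassert>

// Standalone sketch of the demand heuristic from dynamic_min_buffers()
// in the preview diff: one buffer for the compositor, at least one for
// the client, plus one extra only under frame-dropping pressure
// (multiple compositor buffers outstanding), capped at max_buffers.
int dynamic_min_buffers(int static_min_buffers, int max_buffers,
                        int client_requests, bool increase_demand)
{
    if (max_buffers == 1)
        return 1;                   // single-buffer special case
    if (static_min_buffers == max_buffers)
        return static_min_buffers;  // fixed-size queue: nothing dynamic

    int const frame_drop_demand = increase_demand ? 1 : 0;
    int const client_demand = std::max(1, client_requests);
    int const compositor_demand = 1;
    return std::min(max_buffers,
                    compositor_demand + client_demand + frame_drop_demand);
}
```

With the default (2, 3) configuration this yields 2 in the steady state and only grows to 3 when frame dropping or multiple compositors demand it.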

PS Jenkins bot (ps-jenkins) wrote:

Alan Griffiths (alan-griffiths) wrote:
[ FAILED ] BufferQueueTest
- 1679. By Alberto Aguirre: Fix failing test
- 1680. By Alberto Aguirre: merge lp:mir/devel

PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1680
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://

Alan Griffiths (alan-griffiths) wrote:
In TEST_F(
Maybe we just need a better name for the test?
- 1681. By Alberto Aguirre: ASSERT upper bound in framerate matching tests.

Alberto Aguirre (albaguirre) wrote:
> In TEST_F(
> removing the expectation ASSERT_
> - which means we're no longer validating that the "client framerate matches
> compositor".
>
> Maybe we just need a better name for the test?
I've added back the upper bound assertion.

PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1681
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
Click here to trigger a rebuild:
http://
- 1682. By Alberto Aguirre: Increase test granularity

PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1682
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
FAILURE: http://
FAILURE: http://
Click here to trigger a rebuild:
http://
- 1683. By Alberto Aguirre: test slow_client_framerate_matches_compositor: give a bit more time difference to account for thread switching on the ARM platform

PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1683
http://
Executed test runs:
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://

Alberto Aguirre (albaguirre) wrote:
^-- "Start 53: mir_integration
Build timed out (after 120 minutes). Marking the build as failed."
Seems like https:/
- 1684. By Alberto Aguirre: empty commit to kick CI

PS Jenkins bot (ps-jenkins) wrote:
FAILED: Continuous integration, rev:1684
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
FAILURE: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
FAILURE: http://
Click here to trigger a rebuild:
http://
- 1685. By Alberto Aguirre: merge lp:mir/devel

Daniel van Vugt (vanvugt) wrote:
(1) A useful FIXME comment is missing from mc::BufferQueue
// FIXME: Sometimes required_buffers > nbuffers (LP: #1317403)
(2) An important test has been deleted:
1061 -TEST_F(
It needs to stay, and to pass(!). At least bypass needed 3+ buffers at some peak times last I checked. Otherwise it will intermittently deadlock. Although I forget the full reasoning right now.
(3) You have fudged slow_client_
Not sure how safe the fudge is or if it risks regression. I would be more sure if I knew how to reintroduce the bug it protects against :)

PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:1685
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://

Alan Griffiths (alan-griffiths) wrote:
Still haven't entirely got my head around this - so changing to Abstain.
~~~~
1211 + auto const compositor_
Any reason for writing in this verbose style? C.f.

Alberto Aguirre (albaguirre) wrote:
> (1) A useful FIXME comment is missing from mc::BufferQueue
> // FIXME: Sometimes required_buffers > nbuffers (LP: #1317403)
>
> (2) An important test has been deleted:
> 1061 -TEST_F(
> It needs to stay, and to pass(!). At least bypass needed 3+ buffers at some
> peak times last I checked. Otherwise it will intermittently deadlock. Although
> I forget the full reasoning right now.
>
> (3) You have fudged slow_client_
> cheating as mentioned in
> https:/
> Not sure how safe the fudge is or if it risks regression. I would be more sure
> if I knew how to reintroduce the bug it protects against :)
(1) is not applicable anymore
(2) It's no longer an important test. Bypass does not need more than 2 buffers. If you find a deadlock scenario, let's fix the deadlock not increase the number of buffers.
(3) No cheating...you have to account for the thread/context switch time since there's no pipelining - which under valgrind running on armhf seems to be significant.

Daniel van Vugt (vanvugt) wrote:
(1) Bug 1317403 was really a consequence of bypass needing triple buffers, so read on... Also if you find a bug invalid, the first place it should be mentioned is in the bug.
(2) That's a bit surprising. For most of the year or so since I implemented bypass, 3 buffers was the minimum. Less than that and the client/server would be waiting for each other indefinitely (hence visibly freeze). And a month ago I could still reproduce some deadlocks, sometimes. But I can't today. It looks like possibly:
(a) Alan's newish async client buffer swapping solved the primary problem, as there is now significant time during swap_buffers where the client owns zero buffers. So that leaves the minimum 2 available for the server to bypass with, and then release one back to the client. Also...
(b) Bug 1319765 was unsolved around the time I was seeing deadlocks most recently so that could have been all of the remaining deadlocks, now solved.
Although I'm not yet 100% confident, it does look like bypass can now work with only two buffers. A win for performance!
(3) It's cheating because I do remember I could only reproduce the original performance bug at 16ms. It was right on the edge but I guess too brittle to rely on such timing. But I can't see either bug 1241371 or bug 1241369 occurring any more so I guess that is the important thing.
(4) BufferAllocBeha
(5) BufferAllocatio
This is just a high level review. I haven't started to read the code really but it mostly looks like my double branch.

Daniel van Vugt (vanvugt) wrote:
(6) Why delete this regression test?
973 -/* Regression test for LP#1270964 */
974 -TEST_F(
I had it passing in the double branch.

Daniel van Vugt (vanvugt) wrote:
(7) It might be slightly unnecessary but I think nice to have the extra coverage starting with 2 buffers:
1498 -/* FIXME (enabling this optimization breaks timing tests) */
1499 -TEST_F(
1500 +TEST_F(
1501 {
1502 - for (int nbuffers = 2; nbuffers <= max_nbuffers_
1503 + for (int nbuffers = 3; nbuffers <= max_nbuffers_
That test case is the most important one of this whole branch. And the best indication that it's working is if you can enable the test case without modifying it at all.

Gerry Boland (gerboland) wrote:
Just a probably stupid question from me:
> (2) That's a bit surprising. For most of the year or so since I implemented
> bypass, 3 buffers was the minimum. Less than that and the client/server would
> be waiting for each other indefinitely (hence visibly freeze). And a month ago
> I could still reproduce some deadlocks, sometimes. But I can't today. It looks
> like possibly:
> (a) Alan's newish async client buffer swapping solved the primary problem,
> as there is now significant time during swap_buffers where the client owns
> zero buffers. So that leaves the minimum 2 available for the server to bypass
> with, and then release one back to the client. Also...
> (b) Bug 1319765 was unsolved around the time I was seeing deadlocks most
> recently so that could have been all of the remaining deadlocks, now solved.
>
> Although I'm not yet 100% confident, it does look like bypass can now work
> with only two buffers. A win for performance!
If the client is being blocked from having any buffer to render into, doesn't that reduce the amount of time it has a buffer to render into? So for a client which takes just under 16ms to render each frame, and the server holds onto 2 buffers for >1ms, then the client will keep missing vsync?

Alexandros Frantzis (afrantzis) wrote:
Some suggestions from a first pass (a weak Needs Fixing).
61 + for(int i = 0; i < nalloc; i++)
Space between for and '('.
75 + if (alloc_
76 + allocated_buffers < min_buffers())
If alloc_behavior != on_demand, isn't "allocated_buffers < min_buffers()" always false (we have pre-allocated the maximum number of buffers)? If so, we could just write "if (allocated_buffers < min_buffers())".
85 + bool use_current_buffer = force_use_
How about replacing the current logic (which has become a bit complex) with something more straightforward (IMO) like:
bool current_
{
return !current_
}
bool have_new_
{
return !ready_
}
compositor_
{
bool const use_current_buffer =
if (!use_current_
{
...
}
else if (!is_a_
{
}
}
150 + int const alloced_buffers = buffers.size();
"allocated_

Alexandros Frantzis (afrantzis) wrote:
> 85 + bool use_current_buffer = force_use_
>
> How about replacing the current logic (which has become a bit complex) with
> something more straightforward (IMO) like:
Actually, what I proposed above probably doesn't work, but the point is that we should try to simplify the paths in that function since it has become too complex.

Alexandros Frantzis (afrantzis) wrote:
> > 85 + bool use_current_buffer = force_use_
> >
> > How about replacing the current logic (which has become a bit complex) with
> > something more straightforward (IMO) like:
>
> Actually, what I proposed above probably doesn't work, but the point is that
> we should try to simplify the paths in that function since it has become too
> complex.
But this version seems to work (at least it passes all the tests):
bool const use_current_buffer =
force_
current_
!have_

Daniel van Vugt (vanvugt) wrote:
> Just a probably stupid question from me:
> ...
> If the client is being blocked from having any buffer to render into, doesn't
> that reduce the amount of time it has a buffer to render into? So for a client
> which takes just under 16ms to render each frame, and the server holds onto 2
> buffers for >1ms, then the client will keep missing vsync?
That's a good point actually. Having 2 buffers increases the risk of missing vsync compared to 3, however the increase in risk is still negligible I think. Mir is now much better at meeting frame deadlines than ever and it's regularly improving still. I don't think the slight delay of 2 vs 3 buffers is likely to ever cause skipped frames on the systems Mir supports. What's more important (and certain) is that the latency is now halved, and that's always going to be true.
It's also worth noting that double buffering probably only impacts the client's render time window for a fullscreen bypassed surface (where the server briefly requires ownership of two buffers during the flip). For general compositing, the server only ever briefly holds one (and mostly zero) surface buffers, and the client is never delayed in getting one.
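The "latency is now halved" claim above can be checked with back-of-envelope arithmetic (the 60Hz vsync period and the (N-1)-periods-of-queueing model are simplifying assumptions, not measurements from this branch):

```cpp
#include <cassert>

// Simplified model: with N buffers in the queue, a frame painted by the
// client can wait behind up to N-1 older frames, so worst-case display
// latency is (N-1) vsync periods. Going from 3 to 2 buffers halves it.
constexpr double vsync_ms = 1000.0 / 60.0;  // assumed 60Hz display

constexpr double worst_case_latency_ms(int nbuffers)
{
    return (nbuffers - 1) * vsync_ms;
}
```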

Daniel van Vugt (vanvugt) wrote:
Correction: Since SwitchingBundle became BufferQueue, the server always holds a minimum of one surface buffer for compositing. Never zero.

Daniel van Vugt (vanvugt) wrote:
(8) I just added some printfs and the code is actually still triple buffering on bypass (mostly but not always). If you're not seeing that behaviour then try:
mir_
and Ctrl+Alt+arrows to rotate a few times (force compositing) before going back to bypass (Ctrl+Alt+Up).
So either:
(a) that old behaviour is still correct and you need to reinstate tests for it like bypass_
(b) it shouldn't be triple buffering bypass clients and it's a bug; or
(c) it's just bad timing in the wrong phase and sometimes really does need 3 buffers to keep up.
It looks like the bypass detection code is still present and that's what's pushing it up to 3:
increase_demand = multiple_compositor_buffers || current_buffer_users.size() > 1;

Daniel van Vugt (vanvugt) wrote:
(9) This test needs to be retained, and ported per the double branch...
1102 -TEST_F(
Because there's no value in allowing framedropping clients to have more than 2 buffers while the (slower) compositor uses a third. One of the two client buffers is the newest (and will eventually be composited) and the other can be safely discarded and overwritten by the client. This was working well in May, but it seems some recent changes around framedropping policy have broken it. It certainly needs fixing, because the new behaviour only wastes resources without any possible gain in performance.
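The argument above can be sketched as a toy model (the struct and names are hypothetical illustrations, not Mir's API): with frame dropping on, a client needs at most two buffers, because releasing a new frame can simply recycle the older ready one instead of growing the queue.

```cpp
#include <cassert>
#include <deque>

// Toy model (not Mir's API): only the newest ready frame is kept; the
// older one is handed straight back to the client for overwriting, so a
// framedropping client never needs a third buffer.
struct FrameDropQueue
{
    std::deque<int> ready;  // buffer ids ready for compositing, oldest first

    // Client releases a freshly drawn buffer; returns the id recycled
    // back to the client, or -1 if nothing can be recycled yet.
    int client_release(int id)
    {
        ready.push_back(id);
        if (ready.size() > 1)
        {
            int const dropped = ready.front();  // stale frame: discard it
            ready.pop_front();
            return dropped;
        }
        return -1;
    }
};
```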

Kevin DuBois (kdub) wrote:
BufferAllocatio
240 + BufferAllocatio
251 + int const shrink_treshold;
typo
273 + int allocated_buffers() const;
This is public and is only used in test code, maybe BufferQueueTest could monitor how many times the GraphicBufferAl
- 1686. By Daniel van Vugt: Merge latest development-branch
- 1687. By Daniel van Vugt: Start simplifying; replaced Policy and Behaviour classes with a simple integer range.
- 1688. By Daniel van Vugt: Shrink the diff (remove whitespace changes)

Daniel van Vugt (vanvugt) wrote:
Work in progress... I'll carry on with this tomorrow. Or fix the original branch. Not sure.
Unmerged revisions
- 1688. By Daniel van Vugt: Shrink the diff (remove whitespace changes)
- 1687. By Daniel van Vugt: Start simplifying; replaced Policy and Behaviour classes with a simple integer range.
- 1686. By Daniel van Vugt: Merge latest development-branch
- 1685. By Alberto Aguirre: merge lp:mir/devel
- 1684. By Alberto Aguirre: empty commit to kick CI
- 1683. By Alberto Aguirre: test slow_client_framerate_matches_compositor: give a bit more time difference to account for thread switching on the ARM platform
- 1682. By Alberto Aguirre: Increase test granularity
- 1681. By Alberto Aguirre: ASSERT upper bound in framerate matching tests.
- 1680. By Alberto Aguirre: merge lp:mir/devel
- 1679. By Alberto Aguirre: Fix failing test
Preview Diff
1 | === modified file 'include/test/mir_test/signal.h' |
2 | --- include/test/mir_test/signal.h 2014-05-22 20:48:20 +0000 |
3 | +++ include/test/mir_test/signal.h 2014-07-08 10:11:16 +0000 |
4 | @@ -37,6 +37,7 @@ |
5 | |
6 | void raise(); |
7 | bool raised(); |
8 | + void reset(); |
9 | |
10 | void wait(); |
11 | template<typename rep, typename period> |
12 | |
13 | === modified file 'src/server/compositor/buffer_queue.cpp' |
14 | --- src/server/compositor/buffer_queue.cpp 2014-06-12 15:35:08 +0000 |
15 | +++ src/server/compositor/buffer_queue.cpp 2014-07-08 10:11:16 +0000 |
16 | @@ -37,12 +37,14 @@ |
17 | return buffer; |
18 | } |
19 | |
20 | -bool remove(mg::Buffer const* item, std::vector<mg::Buffer*>& list) |
21 | +template<typename Type> |
22 | +bool remove(mg::Buffer const* item, std::vector<Type>& list) |
23 | { |
24 | int const size = list.size(); |
25 | for (int i = 0; i < size; ++i) |
26 | { |
27 | - if (list[i] == item) |
28 | + //The &* is so that it works with both shared_ptrs and raw pointers |
29 | + if (&*list[i] == item) |
30 | { |
31 | list.erase(list.begin() + i); |
32 | return true; |
33 | @@ -92,26 +94,28 @@ |
34 | } |
35 | |
36 | mc::BufferQueue::BufferQueue( |
37 | - int nbuffers, |
38 | + int static_min_buffers, int max_buffers, |
39 | std::shared_ptr<graphics::GraphicBufferAllocator> const& gralloc, |
40 | graphics::BufferProperties const& props, |
41 | - mc::FrameDroppingPolicyFactory const& policy_provider) |
42 | - : nbuffers{nbuffers}, |
43 | + mc::FrameDroppingPolicyFactory const& policy_provider, |
44 | + int shrink_threshold) |
45 | + : static_min_buffers{static_min_buffers}, |
46 | + max_buffers{max_buffers}, |
47 | + shrink_threshold{shrink_threshold}, |
48 | + queue_size_excess{0}, |
49 | frame_dropping_enabled{false}, |
50 | + force_use_current_buffer{false}, |
51 | + multiple_compositor_buffers{false}, |
52 | the_properties{props}, |
53 | gralloc{gralloc} |
54 | { |
55 | - if (nbuffers < 1) |
56 | + if (static_min_buffers < 0 || max_buffers < 1 || static_min_buffers > max_buffers) |
57 | { |
58 | BOOST_THROW_EXCEPTION( |
59 | std::logic_error("invalid number of buffers for BufferQueue")); |
60 | } |
61 | |
62 | - /* By default not all buffers are allocated. |
63 | - * If there is increased pressure by the client to acquire |
64 | - * more buffers, more will be allocated at that time (up to nbuffers) |
65 | - */ |
66 | - for(int i = 0; i < std::min(nbuffers, 2); i++) |
67 | + for (int i = 0; i < static_min_buffers; i++) |
68 | { |
69 | buffers.push_back(gralloc->alloc_buffer(the_properties)); |
70 | } |
71 | @@ -127,7 +131,7 @@ |
72 | /* Special case: with one buffer both clients and compositors |
73 | * need to share the same buffer |
74 | */ |
75 | - if (nbuffers == 1) |
76 | + if (max_buffers == 1) |
77 | free_buffers.push_back(current_compositor_buffer); |
78 | |
79 | framedrop_policy = policy_provider.create_policy([this] |
80 | @@ -169,12 +173,8 @@ |
81 | return; |
82 | } |
83 | |
84 | - /* No empty buffers, attempt allocating more |
85 | - * TODO: need a good heuristic to switch |
86 | - * between double-buffering to n-buffering |
87 | - */ |
88 | int const allocated_buffers = buffers.size(); |
89 | - if (allocated_buffers < nbuffers) |
90 | + if (allocated_buffers < dynamic_min_buffers()) |
91 | { |
92 | auto const& buffer = gralloc->alloc_buffer(the_properties); |
93 | buffers.push_back(buffer); |
94 | @@ -221,7 +221,7 @@ |
95 | { |
96 | std::unique_lock<decltype(guard)> lock(guard); |
97 | |
98 | - bool use_current_buffer = false; |
99 | + bool use_current_buffer = force_use_current_buffer; |
100 | if (!current_buffer_users.empty() && !is_a_current_buffer_user(user_id)) |
101 | { |
102 | use_current_buffer = true; |
103 | @@ -245,9 +245,17 @@ |
104 | current_buffer_users.push_back(user_id); |
105 | current_compositor_buffer = pop(ready_to_composite_queue); |
106 | } |
107 | + else if (current_buffer_users.empty()) |
108 | + { // current_buffer_users and ready_to_composite_queue both empty |
109 | + current_buffer_users.push_back(user_id); |
110 | + } |
111 | |
112 | + force_use_current_buffer = false; |
113 | buffers_sent_to_compositor.push_back(current_compositor_buffer); |
114 | |
115 | + multiple_compositor_buffers = |
116 | + buffers_sent_to_compositor.size() > current_buffer_users.size(); |
117 | + |
118 | std::shared_ptr<mg::Buffer> const acquired_buffer = |
119 | buffer_for(current_compositor_buffer, buffers); |
120 | |
121 | @@ -271,7 +279,7 @@ |
122 | if (contains(buffer.get(), buffers_sent_to_compositor)) |
123 | return; |
124 | |
125 | - if (nbuffers <= 1) |
126 | + if (max_buffers <= 1) |
127 | return; |
128 | |
129 | /* |
130 | @@ -289,8 +297,7 @@ |
131 | // Ensure current_compositor_buffer gets reused by the next |
132 | // compositor_acquire: |
133 | current_buffer_users.clear(); |
134 | - void const* const impossible_user_id = this; |
135 | - current_buffer_users.push_back(impossible_user_id); |
136 | + force_use_current_buffer = true; |
137 | } |
138 | |
139 | if (current_compositor_buffer != buffer.get()) |
140 | @@ -334,6 +341,11 @@ |
141 | return frame_dropping_enabled; |
142 | } |
143 | |
144 | +int mc::BufferQueue::allocated_buffers() const |
145 | +{ |
146 | + return buffers.size(); |
147 | +} |
148 | + |
149 | void mc::BufferQueue::force_requests_to_complete() |
150 | { |
151 | std::unique_lock<std::mutex> lock(guard); |
152 | @@ -369,14 +381,7 @@ |
153 | int mc::BufferQueue::buffers_free_for_client() const |
154 | { |
155 | std::lock_guard<decltype(guard)> lock(guard); |
156 | - int ret = 1; |
157 | - if (nbuffers > 1) |
158 | - { |
159 | - int nfree = free_buffers.size(); |
160 | - int future_growth = nbuffers - buffers.size(); |
161 | - ret = nfree + future_growth; |
162 | - } |
163 | - return ret; |
164 | + return max_buffers > 1 ? free_buffers.size() : 1; |
165 | } |
166 | |
167 | void mc::BufferQueue::give_buffer_to_client( |
168 | @@ -396,7 +401,7 @@ |
169 | /* Special case: the current compositor buffer also needs to be |
170 | * replaced as it's shared with the client |
171 | */ |
172 | - if (nbuffers == 1) |
173 | + if (max_buffers == 1) |
174 | current_compositor_buffer = buffer; |
175 | } |
176 | |
177 | @@ -432,13 +437,61 @@ |
178 | mg::Buffer* buffer, |
179 | std::unique_lock<std::mutex> lock) |
180 | { |
181 | - if (!pending_client_notifications.empty()) |
182 | + int const alloced_buffers = buffers.size(); |
183 | + |
184 | + // To avoid reallocating buffers too often (which may be slow), only drop |
185 | + // a buffer after it's continually been in excess for a long time... |
186 | + if (alloced_buffers > dynamic_min_buffers()) |
187 | + ++queue_size_excess; |
188 | + else |
189 | + queue_size_excess = 0; |
190 | + |
191 | + // If too many frames have had excess buffers then start dropping them now |
192 | + if (queue_size_excess > shrink_threshold && |
193 | + alloced_buffers > static_min_buffers) |
194 | + { |
195 | + remove(buffer, buffers); |
196 | + queue_size_excess = 0; |
197 | + } |
198 | + else if (!pending_client_notifications.empty()) |
199 | { |
200 | framedrop_policy->swap_unblocked(); |
201 | give_buffer_to_client(buffer, std::move(lock)); |
202 | } |
203 | else |
204 | + { |
205 | free_buffers.push_back(buffer); |
206 | + } |
207 | +} |
208 | + |
209 | +/** |
210 | + * Measure the actual number of buffers we presently need concurrently |
211 | + * to avoid starving any compositors of fresh frames or starving clients of |
212 | + * any buffers at all. |
213 | + */ |
214 | +int mc::BufferQueue::dynamic_min_buffers() const |
215 | +{ |
216 | + if (max_buffers == 1) |
217 | + return 1; |
218 | + |
219 | + if (static_min_buffers == max_buffers) |
220 | + return static_min_buffers; |
221 | + |
222 | + /* When framedropping is enabled and there are multiple compositors |
223 | + * they can be out of phase enough that the compositor buffer is |
224 | + * practically always owned by one of the compositors, which prevents it |
225 | + * from being updated to the latest client content during frame-dropping |
226 | + */ |
227 | + bool const increase_demand = multiple_compositor_buffers || |
228 | + current_buffer_users.size() > 1; |
229 | + int const frame_drop_demand = increase_demand ? 1 : 0; |
230 | + int const client_requests = buffers_owned_by_client.size() + |
231 | + pending_client_notifications.size(); |
232 | + int const client_demand = std::max(1, client_requests); |
233 | + int const compositor_demand = 1; |
234 | + int const required_buffers = compositor_demand + client_demand + frame_drop_demand; |
235 | + |
236 | + return std::min(max_buffers, required_buffers); |
237 | } |
238 | |
239 | void mc::BufferQueue::drop_frame(std::unique_lock<std::mutex> lock) |
240 | @@ -450,8 +503,7 @@ |
241 | if (!contains(current_compositor_buffer, buffers_sent_to_compositor)) |
242 | { |
243 | current_buffer_users.clear(); |
244 | - void const* const impossible_user_id = this; |
245 | - current_buffer_users.push_back(impossible_user_id); |
246 | + force_use_current_buffer = true; |
247 | std::swap(buffer_to_give, current_compositor_buffer); |
248 | } |
249 | give_buffer_to_client(buffer_to_give, std::move(lock)); |
250 | |
251 | === modified file 'src/server/compositor/buffer_queue.h' |
252 | --- src/server/compositor/buffer_queue.h 2014-06-12 15:35:08 +0000 |
253 | +++ src/server/compositor/buffer_queue.h 2014-07-08 10:11:16 +0000 |
254 | @@ -42,10 +42,12 @@ |
255 | public: |
256 | typedef std::function<void(graphics::Buffer* buffer)> Callback; |
257 | |
258 | - BufferQueue(int nbuffers, |
259 | + BufferQueue(int static_min_buffers, |
260 | + int max_buffers, |
261 | std::shared_ptr<graphics::GraphicBufferAllocator> const& alloc, |
262 | graphics::BufferProperties const& props, |
263 | - FrameDroppingPolicyFactory const& policy_provider); |
264 | + FrameDroppingPolicyFactory const& policy_provider, |
265 | + int shrink_threshold = 300); |
266 | |
267 | void client_acquire(Callback complete) override; |
268 | void client_release(graphics::Buffer* buffer) override; |
269 | @@ -61,6 +63,7 @@ |
270 | int buffers_ready_for_compositor() const override; |
271 | int buffers_free_for_client() const override; |
272 | bool framedropping_allowed() const; |
273 | + int allocated_buffers() const; |
274 | |
275 | private: |
276 | void give_buffer_to_client(graphics::Buffer* buffer, |
277 | @@ -69,6 +72,7 @@ |
278 | void release(graphics::Buffer* buffer, |
279 | std::unique_lock<std::mutex> lock); |
280 | void drop_frame(std::unique_lock<std::mutex> lock); |
281 | + int dynamic_min_buffers() const; |
282 | |
283 | mutable std::mutex guard; |
284 | |
285 | @@ -84,8 +88,13 @@ |
286 | |
287 | std::deque<Callback> pending_client_notifications; |
288 | |
289 | - int nbuffers; |
290 | + int const static_min_buffers; |
291 | + int const max_buffers; |
292 | + int const shrink_threshold; |
293 | + int queue_size_excess; |
294 | bool frame_dropping_enabled; |
295 | + bool force_use_current_buffer; |
296 | + bool multiple_compositor_buffers; |
297 | std::unique_ptr<FrameDroppingPolicy> framedrop_policy; |
298 | graphics::BufferProperties the_properties; |
299 | |
300 | |
301 | === modified file 'src/server/compositor/buffer_stream_factory.cpp' |
302 | --- src/server/compositor/buffer_stream_factory.cpp 2014-06-09 17:16:32 +0000 |
303 | +++ src/server/compositor/buffer_stream_factory.cpp 2014-07-08 10:11:16 +0000 |
304 | @@ -47,7 +47,6 @@ |
305 | std::shared_ptr<mc::BufferStream> mc::BufferStreamFactory::create_buffer_stream( |
306 | mg::BufferProperties const& buffer_properties) |
307 | { |
308 | - // Note: Framedropping and bypass both require a minimum 3 buffers |
309 | - auto switching_bundle = std::make_shared<mc::BufferQueue>(3, gralloc, buffer_properties, *policy_factory); |
310 | - return std::make_shared<mc::BufferStreamSurfaces>(switching_bundle); |
311 | + auto buffer_q = std::make_shared<mc::BufferQueue>(2, 3, gralloc, buffer_properties, *policy_factory); |
312 | + return std::make_shared<mc::BufferStreamSurfaces>(buffer_q); |
313 | } |
314 | |
315 | === modified file 'tests/integration-tests/compositor/test_buffer_stream.cpp' |
316 | --- tests/integration-tests/compositor/test_buffer_stream.cpp 2014-06-09 17:16:32 +0000 |
317 | +++ tests/integration-tests/compositor/test_buffer_stream.cpp 2014-07-08 10:11:16 +0000 |
318 | @@ -49,25 +49,33 @@ |
319 | { |
320 | using mc::BufferStreamSurfaces::BufferStreamSurfaces; |
321 | |
322 | - // Convenient function to allow tests to be written in linear style |
323 | + void acquire_client_buffer_async(mg::Buffer*& buffer, |
324 | + std::shared_ptr<mt::Signal> const& signal) |
325 | + { |
326 | + acquire_client_buffer( |
327 | + [signal, &buffer](mg::Buffer* new_buffer) |
328 | + { |
329 | + buffer = new_buffer; |
330 | + signal->raise(); |
331 | + }); |
332 | + } |
333 | + |
334 | + // Convenient functions to allow tests to be written in linear style |
335 | + mg::Buffer* acquire_client_buffer_blocking() |
336 | + { |
337 | + mg::Buffer* buffer = nullptr; |
338 | + auto signal = std::make_shared<mt::Signal>(); |
339 | + acquire_client_buffer_async(buffer, signal); |
340 | + signal->wait(); |
341 | + return buffer; |
342 | + } |
343 | + |
344 | void swap_client_buffers_blocking(mg::Buffer*& buffer) |
345 | { |
346 | - swap_client_buffers_cancellable(buffer, std::make_shared<mt::Signal>()); |
347 | - } |
348 | - |
349 | - void swap_client_buffers_cancellable(mg::Buffer*& buffer, std::shared_ptr<mt::Signal> const& signal) |
350 | - { |
351 | if (buffer) |
352 | release_client_buffer(buffer); |
353 | |
354 | - acquire_client_buffer( |
355 | - [signal, &buffer](mg::Buffer* new_buffer) |
356 | - { |
357 | - buffer = new_buffer; |
358 | - signal->raise(); |
359 | - }); |
360 | - |
361 | - signal->wait(); |
362 | + buffer = acquire_client_buffer_blocking(); |
363 | } |
364 | }; |
365 | |
366 | @@ -95,8 +103,7 @@ |
367 | mg::BufferUsage::hardware}; |
368 | mc::TimeoutFrameDroppingPolicyFactory policy_factory{timer, |
369 | frame_drop_timeout}; |
370 | - |
371 | - buffer_queue = std::make_shared<mc::BufferQueue>(nbuffers, |
372 | + buffer_queue = std::make_shared<mc::BufferQueue>(2, nbuffers, |
373 | allocator, |
374 | properties, |
375 | policy_factory); |
376 | @@ -116,12 +123,9 @@ |
377 | |
378 | TEST_F(BufferStreamTest, gives_same_back_buffer_until_more_available) |
379 | { |
380 | - ASSERT_THAT(buffers_free_for_client(), Ge(2)); // else we will hang |
381 | - |
382 | - mg::Buffer* client1{nullptr}; |
383 | - buffer_stream.swap_client_buffers_blocking(client1); |
384 | + mg::Buffer* client1 = buffer_stream.acquire_client_buffer_blocking(); |
385 | auto client1_id = client1->id(); |
386 | - buffer_stream.swap_client_buffers_blocking(client1); |
387 | + buffer_stream.release_client_buffer(client1); |
388 | |
389 | auto comp1 = buffer_stream.lock_compositor_buffer(nullptr); |
390 | auto comp2 = buffer_stream.lock_compositor_buffer(nullptr); |
391 | @@ -131,10 +135,14 @@ |
392 | |
393 | comp1.reset(); |
394 | |
395 | - buffer_stream.swap_client_buffers_blocking(client1); |
396 | + mg::Buffer* client2 = buffer_stream.acquire_client_buffer_blocking(); |
397 | + auto client2_id = client2->id(); |
398 | + buffer_stream.release_client_buffer(client2); |
399 | + |
400 | auto comp3 = buffer_stream.lock_compositor_buffer(nullptr); |
401 | |
402 | EXPECT_NE(client1_id, comp3->id()); |
403 | + EXPECT_EQ(client2_id, comp3->id()); |
404 | |
405 | comp2.reset(); |
406 | auto comp3_id = comp3->id(); |
407 | @@ -165,16 +173,10 @@ |
408 | |
409 | TEST_F(BufferStreamTest, gives_different_back_buffer_asap) |
410 | { |
411 | - ASSERT_THAT(buffers_free_for_client(), Ge(2)); // else we will hang |
412 | - |
413 | - mg::Buffer* client_buffer{nullptr}; |
414 | - buffer_stream.swap_client_buffers_blocking(client_buffer); |
415 | - |
416 | - ASSERT_THAT(nbuffers, Gt(1)); |
417 | - buffer_stream.swap_client_buffers_blocking(client_buffer); |
418 | + buffer_stream.release_client_buffer(buffer_stream.acquire_client_buffer_blocking()); |
419 | auto comp1 = buffer_stream.lock_compositor_buffer(nullptr); |
420 | |
421 | - buffer_stream.swap_client_buffers_blocking(client_buffer); |
422 | + buffer_stream.release_client_buffer(buffer_stream.acquire_client_buffer_blocking()); |
423 | auto comp2 = buffer_stream.lock_compositor_buffer(nullptr); |
424 | |
425 | EXPECT_NE(comp1->id(), comp2->id()); |
426 | @@ -185,13 +187,13 @@ |
427 | |
428 | TEST_F(BufferStreamTest, resize_affects_client_buffers_immediately) |
429 | { |
430 | - ASSERT_THAT(buffers_free_for_client(), Ge(2)); // else we will hang |
431 | - |
432 | auto old_size = buffer_stream.stream_size(); |
433 | |
434 | - mg::Buffer* client{nullptr}; |
435 | - buffer_stream.swap_client_buffers_blocking(client); |
436 | + mg::Buffer* client = buffer_stream.acquire_client_buffer_blocking(); |
437 | EXPECT_EQ(old_size, client->size()); |
438 | + buffer_stream.release_client_buffer(client); |
439 | + |
440 | + buffer_stream.lock_compositor_buffer(this); |
441 | |
442 | geom::Size const new_size |
443 | { |
444 | @@ -201,28 +203,28 @@ |
445 | buffer_stream.resize(new_size); |
446 | EXPECT_EQ(new_size, buffer_stream.stream_size()); |
447 | |
448 | - buffer_stream.swap_client_buffers_blocking(client); |
449 | + client = buffer_stream.acquire_client_buffer_blocking(); |
450 | EXPECT_EQ(new_size, client->size()); |
451 | + buffer_stream.release_client_buffer(client); |
452 | + |
453 | + buffer_stream.lock_compositor_buffer(this); |
454 | |
455 | buffer_stream.resize(old_size); |
456 | EXPECT_EQ(old_size, buffer_stream.stream_size()); |
457 | |
458 | - /* Release a buffer so client can acquire another */ |
459 | - auto comp = buffer_stream.lock_compositor_buffer(nullptr); |
460 | - comp.reset(); |
461 | + buffer_stream.lock_compositor_buffer(this); |
462 | |
463 | - buffer_stream.swap_client_buffers_blocking(client); |
464 | + client = buffer_stream.acquire_client_buffer_blocking(); |
465 | EXPECT_EQ(old_size, client->size()); |
466 | + buffer_stream.release_client_buffer(client); |
467 | } |
468 | |
469 | TEST_F(BufferStreamTest, compositor_gets_resized_buffers) |
470 | { |
471 | - ASSERT_THAT(buffers_free_for_client(), Ge(2)); // else we will hang |
472 | - |
473 | auto old_size = buffer_stream.stream_size(); |
474 | |
475 | - mg::Buffer* client{nullptr}; |
476 | - buffer_stream.swap_client_buffers_blocking(client); |
477 | + mg::Buffer* client = buffer_stream.acquire_client_buffer_blocking(); |
478 | + buffer_stream.release_client_buffer(client); |
479 | |
480 | geom::Size const new_size |
481 | { |
482 | @@ -232,47 +234,54 @@ |
483 | buffer_stream.resize(new_size); |
484 | EXPECT_EQ(new_size, buffer_stream.stream_size()); |
485 | |
486 | - buffer_stream.swap_client_buffers_blocking(client); |
487 | - |
488 | auto comp1 = buffer_stream.lock_compositor_buffer(nullptr); |
489 | + |
490 | + client = buffer_stream.acquire_client_buffer_blocking(); |
491 | + buffer_stream.release_client_buffer(client); |
492 | + |
493 | EXPECT_EQ(old_size, comp1->size()); |
494 | comp1.reset(); |
495 | |
496 | - buffer_stream.swap_client_buffers_blocking(client); |
497 | - |
498 | auto comp2 = buffer_stream.lock_compositor_buffer(nullptr); |
499 | + |
500 | + client = buffer_stream.acquire_client_buffer_blocking(); |
501 | + buffer_stream.release_client_buffer(client); |
502 | + |
503 | EXPECT_EQ(new_size, comp2->size()); |
504 | comp2.reset(); |
505 | |
506 | - buffer_stream.swap_client_buffers_blocking(client); |
507 | - |
508 | auto comp3 = buffer_stream.lock_compositor_buffer(nullptr); |
509 | + |
510 | + client = buffer_stream.acquire_client_buffer_blocking(); |
511 | + buffer_stream.release_client_buffer(client); |
512 | + |
513 | EXPECT_EQ(new_size, comp3->size()); |
514 | comp3.reset(); |
515 | |
516 | buffer_stream.resize(old_size); |
517 | EXPECT_EQ(old_size, buffer_stream.stream_size()); |
518 | |
519 | + auto comp4 = buffer_stream.lock_compositor_buffer(nullptr); |
520 | + |
521 | // No client frames "drawn" since resize(old_size), so compositor gets new_size |
522 | - buffer_stream.swap_client_buffers_blocking(client); |
523 | - auto comp4 = buffer_stream.lock_compositor_buffer(nullptr); |
524 | + client = buffer_stream.acquire_client_buffer_blocking(); |
525 | + buffer_stream.release_client_buffer(client); |
526 | EXPECT_EQ(new_size, comp4->size()); |
527 | comp4.reset(); |
528 | |
529 | + auto comp5 = buffer_stream.lock_compositor_buffer(nullptr); |
530 | + |
531 | // Generate a new frame, which should be back to old_size now |
532 | - buffer_stream.swap_client_buffers_blocking(client); |
533 | - auto comp5 = buffer_stream.lock_compositor_buffer(nullptr); |
534 | + client = buffer_stream.acquire_client_buffer_blocking(); |
535 | + buffer_stream.release_client_buffer(client); |
536 | EXPECT_EQ(old_size, comp5->size()); |
537 | comp5.reset(); |
538 | } |
539 | |
540 | TEST_F(BufferStreamTest, can_get_partly_released_back_buffer) |
541 | { |
542 | - ASSERT_THAT(buffers_free_for_client(), Ge(2)); // else we will hang |
543 | - |
544 | - mg::Buffer* client{nullptr}; |
545 | - buffer_stream.swap_client_buffers_blocking(client); |
546 | - buffer_stream.swap_client_buffers_blocking(client); |
547 | + mg::Buffer* client = buffer_stream.acquire_client_buffer_blocking(); |
548 | + buffer_stream.release_client_buffer(client); |
549 | |
550 | int a, b, c; |
551 | auto comp1 = buffer_stream.lock_compositor_buffer(&a); |
552 | @@ -375,12 +384,17 @@ |
553 | { |
554 | using namespace testing; |
555 | |
556 | - mg::Buffer* placeholder{nullptr}; |
557 | + mg::Buffer* buffer{nullptr}; |
558 | + auto signal = std::make_shared<mt::Signal>(); |
559 | |
560 | - // Grab all the buffers... |
561 | - // TODO: the magic “nbuffers - 1” number should be removed |
562 | - for (int i = 0; i < nbuffers - 1; ++i) |
563 | - buffer_stream.swap_client_buffers_blocking(placeholder); |
564 | + // acquire all possible buffers until blocked |
565 | + do |
566 | + { |
567 | + signal->reset(); |
568 | + buffer_stream.acquire_client_buffer_async(buffer, signal); |
569 | + if (signal->raised()) |
570 | + buffer_stream.release_client_buffer(buffer); |
571 | + } while (signal->raised()); |
572 | |
573 | auto swap_completed = std::make_shared<mt::Signal>(); |
574 | buffer_stream.acquire_client_buffer([swap_completed](mg::Buffer*) {swap_completed->raise();}); |
575 | |
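The do/while loop added above drains the stream by issuing async acquires until one no longer completes synchronously. The same idiom can be sketched with a toy pool (`Pool`, `acquire_async`, `release` are illustrative names, not Mir's API): an acquire runs its completion immediately while free buffers remain, otherwise the completion is parked until a release.

```cpp
#include <functional>
#include <queue>
#include <utility>

// Toy stand-in for the "acquire until blocked" test idiom above.
class Pool
{
public:
    explicit Pool(int nbuffers) : free_buffers(nbuffers) {}

    // Runs the completion synchronously if a buffer is free;
    // otherwise parks it until the next release().
    void acquire_async(std::function<void()> completion)
    {
        if (free_buffers > 0)
        {
            --free_buffers;
            completion();
        }
        else
        {
            pending.push(std::move(completion));
        }
    }

    void release()
    {
        if (!pending.empty())
        {
            auto completion = std::move(pending.front());
            pending.pop();
            completion();  // buffer handed straight to the waiter
        }
        else
        {
            ++free_buffers;
        }
    }

    int free_count() const { return free_buffers; }

private:
    int free_buffers;
    std::queue<std::function<void()>> pending;
};
```

A drain loop against this pool terminates exactly when the pool is exhausted, which is the condition the test relies on before registering the blocking completion.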
576 | === modified file 'tests/integration-tests/compositor/test_swapping_swappers.cpp' |
577 | --- tests/integration-tests/compositor/test_swapping_swappers.cpp 2014-06-09 17:16:32 +0000 |
578 | +++ tests/integration-tests/compositor/test_swapping_swappers.cpp 2014-07-08 10:11:16 +0000 |
579 | @@ -49,7 +49,7 @@ |
580 | mir_pixel_format_abgr_8888, |
581 | mg::BufferUsage::hardware}; |
582 | mtd::StubFrameDroppingPolicyFactory policy_factory; |
583 | - switching_bundle = std::make_shared<mc::BufferQueue>(3, allocator, properties, policy_factory); |
584 | + switching_bundle = std::make_shared<mc::BufferQueue>(2, 3, allocator, properties, policy_factory); |
585 | } |
586 | |
587 | std::shared_ptr<mc::BufferQueue> switching_bundle; |
588 | |
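The constructor change above passes `(min, max) = (2, 3)`: the queue starts double-buffered and may grow toward the maximum only when demand requires it. A rough sketch of that allocation scheme, assuming only the (min, max) behaviour described in the proposal (`GrowingPool` and its methods are illustrative, not Mir code):

```cpp
#include <vector>

// Sketch of min/max buffer allocation: start with `min` slots and
// grow lazily up to `max` when no slot is free.
class GrowingPool
{
public:
    GrowingPool(int min_buffers, int max_buffers)
        : max_buffers(max_buffers), in_use(min_buffers, false)
    {}

    // Returns a free slot, allocating a new one on demand while the
    // total stays under max_buffers; -1 means the caller must wait.
    int acquire()
    {
        for (int i = 0; i < static_cast<int>(in_use.size()); ++i)
        {
            if (!in_use[i])
            {
                in_use[i] = true;
                return i;
            }
        }
        if (static_cast<int>(in_use.size()) < max_buffers)
        {
            in_use.push_back(true);  // the dynamic reallocation step
            return static_cast<int>(in_use.size()) - 1;
        }
        return -1;  // at max: a synchronous client blocks here
    }

    void release(int slot) { in_use[slot] = false; }

    int allocated() const { return static_cast<int>(in_use.size()); }

private:
    int const max_buffers;
    std::vector<bool> in_use;
};
```

With (2, 3) a well-behaved synchronous client never triggers the growth path, which is what lets the branch default to double buffering without starving frame-dropping or multi-compositor scenarios.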
589 | === modified file 'tests/integration-tests/graphics/android/test_buffer_integration.cpp' |
590 | --- tests/integration-tests/graphics/android/test_buffer_integration.cpp 2014-06-09 17:16:32 +0000 |
591 | +++ tests/integration-tests/graphics/android/test_buffer_integration.cpp 2014-07-08 10:11:16 +0000 |
592 | @@ -57,6 +57,7 @@ |
593 | mg::BufferProperties buffer_properties; |
594 | std::shared_ptr<mtd::GraphicsRegionFactory> graphics_region_factory; |
595 | mir::test::doubles::StubFrameDroppingPolicyFactory policy_factory; |
596 | + mc::BufferAllocationPolicy alloc_policy; |
597 | }; |
598 | |
599 | auto client_acquire_blocking(mc::BufferQueue& switching_bundle) |
600 | @@ -119,7 +120,7 @@ |
601 | |
602 | auto allocator = std::make_shared<mga::AndroidGraphicBufferAllocator>(null_buffer_initializer); |
603 | |
604 | - mc::BufferQueue swapper(2, allocator, buffer_properties, policy_factory); |
605 | + mc::BufferQueue swapper(2, allocator, buffer_properties, policy_factory, alloc_policy); |
606 | |
607 | auto returned_buffer = client_acquire_blocking(swapper); |
608 | |
609 | |
610 | === modified file 'tests/integration-tests/test_surface_first_frame_sync.cpp' |
611 | --- tests/integration-tests/test_surface_first_frame_sync.cpp 2014-06-17 11:21:31 +0000 |
612 | +++ tests/integration-tests/test_surface_first_frame_sync.cpp 2014-07-08 10:11:16 +0000 |
613 | @@ -113,7 +113,7 @@ |
614 | { |
615 | for (auto const& r : renderables) |
616 | { |
617 | - (void)r; |
618 | + r->buffer(); // We need to consume a buffer to unblock client tests |
619 | while (write(render_operations_fd, "a", 1) != 1) continue; |
620 | } |
621 | } |
622 | |
623 | === modified file 'tests/mir_test/signal.cpp' |
624 | --- tests/mir_test/signal.cpp 2014-05-22 20:48:20 +0000 |
625 | +++ tests/mir_test/signal.cpp 2014-07-08 10:11:16 +0000 |
626 | @@ -39,6 +39,12 @@ |
627 | return signalled; |
628 | } |
629 | |
630 | +void mt::Signal::reset() |
631 | +{ |
632 | + std::lock_guard<decltype(mutex)> lock(mutex); |
633 | + signalled = false; |
634 | +} |
635 | + |
636 | void mt::Signal::wait() |
637 | { |
638 | std::unique_lock<decltype(mutex)> lock(mutex); |
639 | |
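The `reset()` added above lets one `mt::Signal` be re-armed between loop iterations instead of allocating a fresh signal each time. A minimal standalone stand-in mirroring those semantics (the real class lives in `include/test/mir_test/signal.h`; this sketch only assumes the raise/raised/reset/wait behaviour visible in the diff):

```cpp
#include <condition_variable>
#include <mutex>

// Standalone sketch of mt::Signal's raise/raised/reset/wait semantics.
class Signal
{
public:
    void raise()
    {
        std::lock_guard<std::mutex> lock(mutex);
        signalled = true;
        cv.notify_all();
    }

    bool raised()
    {
        std::lock_guard<std::mutex> lock(mutex);
        return signalled;
    }

    // The addition under review: re-arm the signal for reuse.
    void reset()
    {
        std::lock_guard<std::mutex> lock(mutex);
        signalled = false;
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(mutex);
        cv.wait(lock, [this]{ return signalled; });
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    bool signalled{false};
};
```

Taking the lock in `reset()` (as the diff does) keeps it safe to call concurrently with `raise()` and `raised()` from the compositor and client threads the tests spin up.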
640 | === modified file 'tests/unit-tests/compositor/test_buffer_queue.cpp' |
641 | --- tests/unit-tests/compositor/test_buffer_queue.cpp 2014-07-07 09:28:00 +0000 |
642 | +++ tests/unit-tests/compositor/test_buffer_queue.cpp 2014-07-08 10:11:16 +0000 |
643 | @@ -48,6 +48,25 @@ |
644 | namespace |
645 | { |
646 | |
647 | +class TestBufferQueue : public mc::BufferQueue |
648 | + |
649 | +{ |
650 | +public: |
651 | + static int const default_shrink_threshold = 5; |
652 | + TestBufferQueue(int nbuffers) |
653 | + : TestBufferQueue(nbuffers, mtd::StubFrameDroppingPolicyFactory{}) |
654 | + {} |
655 | + |
656 | + TestBufferQueue(int nbuffers, mc::FrameDroppingPolicyFactory const& policy_provider) |
657 | + : mc::BufferQueue((nbuffers >= 2) ? 2 : 1, nbuffers, |
658 | + std::make_shared<mtd::StubBufferAllocator>(), |
659 | + { geom::Size{3, 4}, mir_pixel_format_abgr_8888, mg::BufferUsage::hardware }, |
660 | + policy_provider, |
661 | + default_shrink_threshold) |
662 | + {} |
663 | + |
664 | +}; |
665 | + |
666 | class BufferQueueTest : public ::testing::Test |
667 | { |
668 | public: |
669 | @@ -55,21 +74,8 @@ |
670 | : max_nbuffers_to_test{5} |
671 | {}; |
672 | |
673 | - void SetUp() |
674 | - { |
675 | - allocator = std::make_shared<mtd::StubBufferAllocator>(); |
676 | - basic_properties = |
677 | - { |
678 | - geom::Size{3, 4}, |
679 | - mir_pixel_format_abgr_8888, |
680 | - mg::BufferUsage::hardware |
681 | - }; |
682 | - } |
683 | protected: |
684 | - int max_nbuffers_to_test; |
685 | - std::shared_ptr<mtd::StubBufferAllocator> allocator; |
686 | - mg::BufferProperties basic_properties; |
687 | - mtd::StubFrameDroppingPolicyFactory policy_factory; |
688 | + int const max_nbuffers_to_test; |
689 | }; |
690 | |
691 | class AcquireWaitHandle |
692 | @@ -196,10 +202,8 @@ |
693 | |
694 | TEST_F(BufferQueueTest, buffer_queue_of_one_is_supported) |
695 | { |
696 | - std::unique_ptr<mc::BufferQueue> q; |
697 | - ASSERT_NO_THROW(q = std::move( |
698 | - std::unique_ptr<mc::BufferQueue>( |
699 | - new mc::BufferQueue(1, allocator, basic_properties, policy_factory)))); |
700 | + std::unique_ptr<TestBufferQueue> q; |
701 | + ASSERT_NO_THROW(q = std::move(std::unique_ptr<TestBufferQueue>(new TestBufferQueue(1)))); |
702 | ASSERT_THAT(q, Ne(nullptr)); |
703 | |
704 | auto handle = client_acquire_async(*q); |
705 | @@ -234,7 +238,7 @@ |
706 | |
707 | TEST_F(BufferQueueTest, buffer_queue_of_one_supports_resizing) |
708 | { |
709 | - mc::BufferQueue q(1, allocator, basic_properties, policy_factory); |
710 | + TestBufferQueue q(1); |
711 | |
712 | const geom::Size expect_size{10, 20}; |
713 | q.resize(expect_size); |
714 | @@ -259,22 +263,22 @@ |
715 | |
716 | TEST_F(BufferQueueTest, framedropping_is_disabled_by_default) |
717 | { |
718 | - mc::BufferQueue bundle(2, allocator, basic_properties, policy_factory); |
719 | - EXPECT_THAT(bundle.framedropping_allowed(), Eq(false)); |
720 | + TestBufferQueue q(2); |
721 | + EXPECT_THAT(q.framedropping_allowed(), Eq(false)); |
722 | } |
723 | |
724 | TEST_F(BufferQueueTest, throws_when_creating_with_invalid_num_buffers) |
725 | { |
726 | - EXPECT_THROW(mc::BufferQueue a(0, allocator, basic_properties, policy_factory), std::logic_error); |
727 | - EXPECT_THROW(mc::BufferQueue a(-1, allocator, basic_properties, policy_factory), std::logic_error); |
728 | - EXPECT_THROW(mc::BufferQueue a(-10, allocator, basic_properties, policy_factory), std::logic_error); |
729 | + EXPECT_THROW(TestBufferQueue a(0), std::logic_error); |
730 | + EXPECT_THROW(TestBufferQueue a(-1), std::logic_error); |
731 | + EXPECT_THROW(TestBufferQueue a(-10), std::logic_error); |
732 | } |
733 | |
734 | TEST_F(BufferQueueTest, client_can_acquire_and_release_buffer) |
735 | { |
736 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
737 | { |
738 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
739 | + TestBufferQueue q(nbuffers); |
740 | |
741 | auto handle = client_acquire_async(q); |
742 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
743 | @@ -286,7 +290,7 @@ |
744 | { |
745 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
746 | { |
747 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
748 | + TestBufferQueue q(nbuffers); |
749 | int const max_ownable_buffers = q.buffers_free_for_client(); |
750 | ASSERT_THAT(max_ownable_buffers, Gt(0)); |
751 | |
752 | @@ -302,7 +306,7 @@ |
753 | TEST_F(BufferQueueTest, clients_can_have_multiple_pending_completions) |
754 | { |
755 | int const nbuffers = 3; |
756 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
757 | + TestBufferQueue q(nbuffers); |
758 | |
759 | int const prefill = q.buffers_free_for_client(); |
760 | ASSERT_THAT(prefill, Gt(0)); |
761 | @@ -317,7 +321,7 @@ |
762 | ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false)); |
763 | |
764 | auto handle2 = client_acquire_async(q); |
765 | - ASSERT_THAT(handle1->has_acquired_buffer(), Eq(false)); |
766 | + ASSERT_THAT(handle2->has_acquired_buffer(), Eq(false)); |
767 | |
768 | for (int i = 0; i < nbuffers + 1; ++i) |
769 | q.compositor_release(q.compositor_acquire(this)); |
770 | @@ -334,7 +338,7 @@ |
771 | { |
772 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
773 | { |
774 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
775 | + TestBufferQueue q(nbuffers); |
776 | ASSERT_THAT(q.framedropping_allowed(), Eq(false)); |
777 | |
778 | void const* main_compositor = reinterpret_cast<void const*>(0); |
779 | @@ -363,7 +367,7 @@ |
780 | { |
781 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
782 | { |
783 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
784 | + TestBufferQueue q(nbuffers); |
785 | q.allow_framedropping(true); |
786 | |
787 | for (int i = 0; i < 1000; i++) |
788 | @@ -375,30 +379,37 @@ |
789 | } |
790 | } |
791 | |
792 | -/* Regression test for LP: #1210042 */ |
793 | -TEST_F(BufferQueueTest, clients_dont_recycle_startup_buffer) |
794 | +/* Regression tests for LP: #1210042 and LP: #1270964 */ |
795 | +TEST_F(BufferQueueTest, client_and_compositor_requests_interleaved) |
796 | { |
797 | - mc::BufferQueue q(3, allocator, basic_properties, policy_factory); |
798 | - |
799 | - auto handle = client_acquire_async(q); |
800 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
801 | - auto client_id = handle->id(); |
802 | - handle->release_buffer(); |
803 | - |
804 | - handle = client_acquire_async(q); |
805 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
806 | - handle->release_buffer(); |
807 | - |
808 | - auto comp_buffer = q.compositor_acquire(this); |
809 | - EXPECT_THAT(client_id, Eq(comp_buffer->id())); |
810 | - q.compositor_release(comp_buffer); |
811 | + int const nbuffers = 3; |
812 | + TestBufferQueue q(nbuffers); |
813 | + |
814 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
815 | + for (int i = 0; i < nbuffers; i++) |
816 | + { |
817 | + handles.push_back(client_acquire_async(q)); |
818 | + } |
819 | + |
820 | + for (int i = 0; i < nbuffers; i++) |
821 | + { |
822 | + auto& handle = handles[i]; |
823 | + ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
824 | + |
825 | + auto client_id = handle->id(); |
826 | + handle->release_buffer(); |
827 | + |
828 | + auto comp_buffer = q.compositor_acquire(this); |
829 | + EXPECT_THAT(client_id, Eq(comp_buffer->id())); |
830 | + q.compositor_release(comp_buffer); |
831 | + } |
832 | } |
833 | |
834 | TEST_F(BufferQueueTest, throws_on_out_of_order_client_release) |
835 | { |
836 | for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
837 | { |
838 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
839 | + TestBufferQueue q(nbuffers); |
840 | |
841 | auto handle1 = client_acquire_async(q); |
842 | ASSERT_THAT(handle1->has_acquired_buffer(), Eq(true)); |
843 | @@ -416,36 +427,30 @@ |
844 | |
845 | TEST_F(BufferQueueTest, async_client_cycles_through_all_buffers) |
846 | { |
847 | - for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
848 | + for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
849 | { |
850 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
851 | - |
852 | - std::atomic<bool> done(false); |
853 | - auto unblock = [&done] { done = true; }; |
854 | - mt::AutoUnblockThread compositor(unblock, |
855 | - compositor_thread, std::ref(q), std::ref(done)); |
856 | - |
857 | - std::unordered_set<uint32_t> ids_acquired; |
858 | - int const max_ownable_buffers = nbuffers - 1; |
859 | - for (int i = 0; i < max_ownable_buffers*2; ++i) |
860 | - { |
861 | - std::vector<mg::Buffer *> client_buffers; |
862 | - for (int acquires = 0; acquires < max_ownable_buffers; ++acquires) |
863 | - { |
864 | - auto handle = client_acquire_async(q); |
865 | - handle->wait_for(std::chrono::seconds(1)); |
866 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
867 | - ids_acquired.insert(handle->id().as_uint32_t()); |
868 | - client_buffers.push_back(handle->buffer()); |
869 | - } |
870 | - |
871 | - for (auto const& buffer : client_buffers) |
872 | - { |
873 | - q.client_release(buffer); |
874 | - } |
875 | - } |
876 | - |
877 | - EXPECT_THAT(ids_acquired.size(), Eq(nbuffers)); |
878 | + TestBufferQueue q(nbuffers); |
879 | + |
880 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
881 | + for (int i = 0; i < nbuffers; i++) |
882 | + { |
883 | + handles.push_back(client_acquire_async(q)); |
884 | + } |
885 | + |
886 | + std::unordered_set<mg::Buffer *> buffers_acquired; |
887 | + for (int i = 0; i < nbuffers; i++) |
888 | + { |
889 | + auto& handle = handles[i]; |
890 | + ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
891 | + |
892 | + buffers_acquired.insert(handle->buffer()); |
893 | + handle->release_buffer(); |
894 | + |
895 | + auto comp_buffer = q.compositor_acquire(this); |
896 | + q.compositor_release(comp_buffer); |
897 | + } |
898 | + |
899 | + EXPECT_THAT(buffers_acquired.size(), Eq(nbuffers)); |
900 | } |
901 | } |
902 | |
903 | @@ -453,7 +458,7 @@ |
904 | { |
905 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
906 | { |
907 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
908 | + TestBufferQueue q(nbuffers); |
909 | |
910 | auto handle = client_acquire_async(q); |
911 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
912 | @@ -471,7 +476,7 @@ |
913 | { |
914 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
915 | { |
916 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
917 | + TestBufferQueue q(nbuffers); |
918 | |
919 | auto handle = client_acquire_async(q); |
920 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
921 | @@ -493,7 +498,7 @@ |
922 | { |
923 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
924 | { |
925 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
926 | + TestBufferQueue q(nbuffers); |
927 | |
928 | for (int i = 0; i < 10; ++i) |
929 | { |
930 | @@ -527,7 +532,7 @@ |
931 | { |
932 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
933 | { |
934 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
935 | + TestBufferQueue q(nbuffers); |
936 | |
937 | for (int i = 0; i < 100; i++) |
938 | { |
939 | @@ -541,7 +546,7 @@ |
940 | { |
941 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
942 | { |
943 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
944 | + TestBufferQueue q(nbuffers); |
945 | q.allow_framedropping(false); |
946 | |
947 | std::atomic<bool> done(false); |
948 | @@ -579,7 +584,7 @@ |
949 | { |
950 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
951 | { |
952 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
953 | + TestBufferQueue q(nbuffers); |
954 | |
955 | mg::BufferID client_id; |
956 | |
957 | @@ -608,7 +613,7 @@ |
958 | { |
959 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
960 | { |
961 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
962 | + TestBufferQueue q(nbuffers); |
963 | |
964 | auto handle = client_acquire_async(q); |
965 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
966 | @@ -620,35 +625,12 @@ |
967 | } |
968 | } |
969 | |
970 | -/* Regression test for LP#1270964 */ |
971 | -TEST_F(BufferQueueTest, compositor_client_interleaved) |
972 | -{ |
973 | - mc::BufferQueue q(3, allocator, basic_properties, policy_factory); |
974 | - |
975 | - auto handle = client_acquire_async(q); |
976 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
977 | - |
978 | - auto first_ready_buffer_id = handle->id(); |
979 | - handle->release_buffer(); |
980 | - |
981 | - handle = client_acquire_async(q); |
982 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
983 | - |
984 | - // in the original bug, compositor would be given the wrong buffer here |
985 | - auto compositor_buffer = q.compositor_acquire(this); |
986 | - |
987 | - EXPECT_THAT(compositor_buffer->id(), Eq(first_ready_buffer_id)); |
988 | - |
989 | - handle->release_buffer(); |
990 | - q.compositor_release(compositor_buffer); |
991 | -} |
992 | - |
993 | TEST_F(BufferQueueTest, overlapping_compositors_get_different_frames) |
994 | { |
995 | // This test simulates bypass behaviour |
996 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
997 | { |
998 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
999 | + TestBufferQueue q(nbuffers); |
1000 | |
1001 | std::shared_ptr<mg::Buffer> compositor[2]; |
1002 | |
1003 | @@ -685,7 +667,7 @@ |
1004 | { |
1005 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1006 | { |
1007 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1008 | + TestBufferQueue q(nbuffers); |
1009 | |
1010 | auto comp_buffer = q.compositor_acquire(this); |
1011 | auto snapshot = q.snapshot_acquire(); |
1012 | @@ -699,7 +681,7 @@ |
1013 | { |
1014 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1015 | { |
1016 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1017 | + TestBufferQueue q(nbuffers); |
1018 | int const num_snapshots = 100; |
1019 | |
1020 | std::shared_ptr<mg::Buffer> buf[num_snapshots]; |
1021 | @@ -715,7 +697,7 @@ |
1022 | { |
1023 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1024 | { |
1025 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1026 | + TestBufferQueue q(nbuffers); |
1027 | |
1028 | auto handle = client_acquire_async(q); |
1029 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1030 | @@ -739,7 +721,7 @@ |
1031 | { |
1032 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1033 | { |
1034 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1035 | + TestBufferQueue q(nbuffers); |
1036 | |
1037 | std::atomic<bool> done(false); |
1038 | |
1039 | @@ -756,85 +738,23 @@ |
1040 | std::ref(done)); |
1041 | |
1042 | q.allow_framedropping(false); |
1043 | - mt::AutoJoinThread client1(client_thread, std::ref(q), 1000); |
1044 | + mt::AutoJoinThread client1(client_thread, std::ref(q), 100); |
1045 | client1.stop(); |
1046 | |
1047 | q.allow_framedropping(true); |
1048 | - mt::AutoJoinThread client2(client_thread, std::ref(q), 1000); |
1049 | + mt::AutoJoinThread client2(client_thread, std::ref(q), 100); |
1050 | client2.stop(); |
1051 | |
1052 | - mt::AutoJoinThread client3(switching_client_thread, std::ref(q), 1000); |
1053 | + mt::AutoJoinThread client3(switching_client_thread, std::ref(q), 100); |
1054 | client3.stop(); |
1055 | } |
1056 | } |
1057 | |
1058 | -TEST_F(BufferQueueTest, bypass_clients_get_more_than_two_buffers) |
1059 | -{ |
1060 | - for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1061 | - { |
1062 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1063 | - |
1064 | - std::shared_ptr<mg::Buffer> compositor[2]; |
1065 | - |
1066 | - auto handle = client_acquire_async(q); |
1067 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1068 | - handle->release_buffer(); |
1069 | - compositor[0] = q.compositor_acquire(this); |
1070 | - |
1071 | - handle = client_acquire_async(q); |
1072 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1073 | - handle->release_buffer(); |
1074 | - compositor[1] = q.compositor_acquire(this); |
1075 | - |
1076 | - for (int i = 0; i < 20; i++) |
1077 | - { |
1078 | - // Two compositors acquired, and they're always different... |
1079 | - ASSERT_THAT(compositor[0]->id(), Ne(compositor[1]->id())); |
1080 | - |
1081 | - handle = client_acquire_async(q); |
1082 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1083 | - ASSERT_THAT(compositor[0]->id(), Ne(handle->id())); |
1084 | - ASSERT_THAT(compositor[1]->id(), Ne(handle->id())); |
1085 | - handle->release_buffer(); |
1086 | - |
1087 | - // One of the compositors (the oldest one) gets a new buffer... |
1088 | - int oldest = i & 1; |
1089 | - q.compositor_release(compositor[oldest]); |
1090 | - |
1091 | - compositor[oldest] = q.compositor_acquire(this); |
1092 | - } |
1093 | - |
1094 | - q.compositor_release(compositor[0]); |
1095 | - q.compositor_release(compositor[1]); |
1096 | - } |
1097 | -} |
1098 | - |
1099 | -TEST_F(BufferQueueTest, framedropping_clients_get_all_buffers) |
1100 | -{ |
1101 | - for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1102 | - { |
1103 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1104 | - q.allow_framedropping(true); |
1105 | - |
1106 | - int const nframes = 100; |
1107 | - std::unordered_set<uint32_t> ids_acquired; |
1108 | - for (int i = 0; i < nframes; ++i) |
1109 | - { |
1110 | - auto handle = client_acquire_async(q); |
1111 | - ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1112 | - ids_acquired.insert(handle->id().as_uint32_t()); |
1113 | - handle->release_buffer(); |
1114 | - } |
1115 | - |
1116 | - EXPECT_THAT(ids_acquired.size(), Eq(nbuffers)); |
1117 | - } |
1118 | -} |
1119 | - |
1120 | TEST_F(BufferQueueTest, waiting_clients_unblock_on_shutdown) |
1121 | { |
1122 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1123 | { |
1124 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1125 | + TestBufferQueue q(nbuffers); |
1126 | q.allow_framedropping(false); |
1127 | |
1128 | int const max_ownable_buffers = q.buffers_free_for_client(); |
1129 | @@ -858,31 +778,26 @@ |
1130 | |
1131 | TEST_F(BufferQueueTest, client_framerate_matches_compositor) |
1132 | { |
1133 | - for (int nbuffers = 2; nbuffers <= 3; nbuffers++) |
1134 | + for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; nbuffers++) |
1135 | { |
1136 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1137 | + TestBufferQueue q(nbuffers); |
1138 | unsigned long client_frames = 0; |
1139 | const unsigned long compose_frames = 20; |
1140 | - |
1141 | - q.allow_framedropping(false); |
1142 | + auto const frame_time = std::chrono::milliseconds{1}; |
1143 | |
1144 | std::atomic<bool> done(false); |
1145 | |
1146 | - mt::AutoJoinThread monitor1([&] |
1147 | + mt::AutoJoinThread compositor_thread([&] |
1148 | { |
1149 | - for (unsigned long frame = 0; frame != compose_frames+3; frame++) |
1150 | + for (unsigned long frame = 0; frame != compose_frames + 1; frame++) |
1151 | { |
1152 | - std::this_thread::sleep_for(std::chrono::milliseconds(10)); |
1153 | - |
1154 | auto buf = q.compositor_acquire(this); |
1155 | + std::this_thread::sleep_for(frame_time); |
1156 | q.compositor_release(buf); |
1157 | |
1158 | if (frame == compose_frames) |
1159 | { |
1160 | - // Tell the "client" to stop after compose_frames, but |
1161 | - // don't stop rendering immediately to avoid blocking |
1162 | - // if we rendered any twice |
1163 | - done.store(true); |
1164 | + done = true; |
1165 | } |
1166 | } |
1167 | }); |
1168 | @@ -891,7 +806,7 @@ |
1169 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1170 | handle->release_buffer(); |
1171 | |
1172 | - while (!done.load()) |
1173 | + while (!done) |
1174 | { |
1175 | auto handle = client_acquire_async(q); |
1176 | handle->wait_for(std::chrono::seconds(1)); |
1177 | @@ -900,48 +815,43 @@ |
1178 | client_frames++; |
1179 | } |
1180 | |
1181 | - monitor1.stop(); |
1182 | + compositor_thread.stop(); |
1183 | |
1184 | - // Roughly compose_frames == client_frames within 50% |
1185 | - ASSERT_THAT(client_frames, Gt(compose_frames / 2)); |
1186 | - ASSERT_THAT(client_frames, Lt(compose_frames * 3 / 2)); |
1187 | + // Roughly compose_frames == client_frames within 10% |
1188 | + ASSERT_THAT(client_frames, Ge(compose_frames * 0.90f)); |
1189 | + ASSERT_THAT(client_frames, Le(compose_frames * 1.1f)); |
1190 | } |
1191 | } |
1192 | |
1193 | /* Regression test LP: #1241369 / LP: #1241371 */ |
1194 | TEST_F(BufferQueueTest, slow_client_framerate_matches_compositor) |
1195 | { |
1196 | - /* BufferQueue can only satify this for nbuffers >= 3 |
1197 | - * since a client can only own up to nbuffers - 1 at any one time |
1198 | - */ |
1199 | - for (int nbuffers = 3; nbuffers <= 3; nbuffers++) |
1200 | + for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; nbuffers++) |
1201 | { |
1202 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1203 | + TestBufferQueue q(nbuffers); |
1204 | unsigned long client_frames = 0; |
1205 | - unsigned long const compose_frames = 100; |
1206 | - auto const frame_time = std::chrono::milliseconds(16); |
1207 | + unsigned long const compose_frames = 20; |
1208 | + auto const compositor_holding_time = std::chrono::milliseconds{16}; |
1209 | + //simulate a client rendering time very close to, but shorter than, |
1210 | + //the deadline set by the compositor holding time; an equal or |
1211 | + //longer rendering time would imply the client always misses its |
1212 | + //window to the next vsync, halving its framerate vs the compositor |
1213 | + auto const client_rendering_time = std::chrono::milliseconds{12}; |
1214 | |
1215 | q.allow_framedropping(false); |
1216 | |
1217 | std::atomic<bool> done(false); |
1218 | - std::mutex sync; |
1219 | |
1220 | - mt::AutoJoinThread monitor1([&] |
1221 | + mt::AutoJoinThread compositor_thread([&] |
1222 | { |
1223 | - for (unsigned long frame = 0; frame != compose_frames+3; frame++) |
1224 | + for (unsigned long frame = 0; frame != compose_frames + 1; frame++) |
1225 | { |
1226 | - std::this_thread::sleep_for(frame_time); |
1227 | - sync.lock(); |
1228 | auto buf = q.compositor_acquire(this); |
1229 | + std::this_thread::sleep_for(compositor_holding_time); |
1230 | q.compositor_release(buf); |
1231 | - sync.unlock(); |
1232 | - |
1233 | if (frame == compose_frames) |
1234 | { |
1235 | - // Tell the "client" to stop after compose_frames, but |
1236 | - // don't stop rendering immediately to avoid blocking |
1237 | - // if we rendered any twice |
1238 | - done.store(true); |
1239 | + done = true; |
1240 | } |
1241 | } |
1242 | }); |
1243 | @@ -950,23 +860,21 @@ |
1244 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1245 | handle->release_buffer(); |
1246 | |
1247 | - while (!done.load()) |
1248 | + while (!done) |
1249 | { |
1250 | - sync.lock(); |
1251 | - sync.unlock(); |
1252 | auto handle = client_acquire_async(q); |
1253 | handle->wait_for(std::chrono::seconds(1)); |
1254 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1255 | - std::this_thread::sleep_for(frame_time); |
1256 | + std::this_thread::sleep_for(client_rendering_time); |
1257 | handle->release_buffer(); |
1258 | client_frames++; |
1259 | } |
1260 | |
1261 | - monitor1.stop(); |
1262 | + compositor_thread.stop(); |
1263 | |
1264 | // Roughly compose_frames == client_frames within 20% |
1265 | - ASSERT_THAT(client_frames, Gt(compose_frames * 0.8f)); |
1266 | - ASSERT_THAT(client_frames, Lt(compose_frames * 1.2f)); |
1267 | + ASSERT_THAT(client_frames, Ge(compose_frames * 0.8f)); |
1268 | + ASSERT_THAT(client_frames, Le(compose_frames * 1.2f)); |
1269 | } |
1270 | } |
1271 | |
1272 | @@ -974,7 +882,7 @@ |
1273 | { |
1274 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1275 | { |
1276 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1277 | + TestBufferQueue q(nbuffers); |
1278 | |
1279 | for (int width = 1; width < 100; ++width) |
1280 | { |
1281 | @@ -1009,8 +917,9 @@ |
1282 | { |
1283 | for (int nbuffers = 1; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1284 | { |
1285 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1286 | - mg::BufferID history[5]; |
1287 | + TestBufferQueue q(nbuffers); |
1288 | + std::vector<mg::BufferID> history; |
1289 | + std::vector<std::shared_ptr<AcquireWaitHandle>> producing; |
1290 | |
1291 | const int width0 = 123; |
1292 | const int height0 = 456; |
1293 | @@ -1018,7 +927,7 @@ |
1294 | const int dy = -3; |
1295 | int width = width0; |
1296 | int height = height0; |
1297 | - int const nbuffers_to_use = q.buffers_free_for_client(); |
1298 | + int const nbuffers_to_use = max_ownable_buffers(nbuffers); |
1299 | ASSERT_THAT(nbuffers_to_use, Gt(0)); |
1300 | |
1301 | for (int produce = 0; produce < max_ownable_buffers(nbuffers); ++produce) |
1302 | @@ -1030,10 +939,17 @@ |
1303 | q.resize(new_size); |
1304 | auto handle = client_acquire_async(q); |
1305 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1306 | - history[produce] = handle->id(); |
1307 | + history.push_back(handle->id()); |
1308 | + producing.push_back(handle); |
1309 | auto buffer = handle->buffer(); |
1310 | ASSERT_THAT(buffer->size(), Eq(new_size)); |
1311 | - handle->release_buffer(); |
1312 | + } |
1313 | + |
1314 | +    // Overlap all the client_acquires asynchronously. It's the only way |
1315 | + // the new dynamic queue scaling will let a client hold that many... |
1316 | + for (int complete = 0; complete < nbuffers_to_use; ++complete) |
1317 | + { |
1318 | + producing[complete]->release_buffer(); |
1319 | } |
1320 | |
1321 | width = width0; |
1322 | @@ -1075,20 +991,22 @@ |
1323 | nbuffers++) |
1324 | { |
1325 | mtd::MockFrameDroppingPolicyFactory policy_factory; |
1326 | - mc::BufferQueue q(nbuffers, |
1327 | - allocator, |
1328 | - basic_properties, |
1329 | - policy_factory); |
1330 | + TestBufferQueue q(nbuffers, policy_factory); |
1331 | + |
1332 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
1333 | + for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1334 | + handles.push_back(client_acquire_async(q)); |
1335 | |
1336 | for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1337 | { |
1338 | - auto client = client_acquire_sync(q); |
1339 | - q.client_release(client); |
1340 | + auto& handle = handles[i]; |
1341 | + ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1342 | + handle->release_buffer(); |
1343 | } |
1344 | |
1345 | auto handle = client_acquire_async(q); |
1346 | |
1347 | - EXPECT_FALSE(handle->has_acquired_buffer()); |
1348 | + ASSERT_FALSE(handle->has_acquired_buffer()); |
1349 | |
1350 | policy_factory.trigger_policies(); |
1351 | EXPECT_TRUE(handle->has_acquired_buffer()); |
1352 | @@ -1102,15 +1020,17 @@ |
1353 | nbuffers++) |
1354 | { |
1355 | mtd::MockFrameDroppingPolicyFactory policy_factory; |
1356 | - mc::BufferQueue q(nbuffers, |
1357 | - allocator, |
1358 | - basic_properties, |
1359 | - policy_factory); |
1360 | + TestBufferQueue q(nbuffers, policy_factory); |
1361 | + |
1362 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
1363 | + for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1364 | + handles.push_back(client_acquire_async(q)); |
1365 | |
1366 | for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1367 | { |
1368 | - auto client = client_acquire_sync(q); |
1369 | - q.client_release(client); |
1370 | + auto& handle = handles[i]; |
1371 | + ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1372 | + handle->release_buffer(); |
1373 | } |
1374 | |
1375 | /* Queue up two pending swaps */ |
1376 | @@ -1140,7 +1060,7 @@ |
1377 | int const nbuffers{1}; |
1378 | geom::Size const new_size{123,456}; |
1379 | |
1380 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1381 | + TestBufferQueue q(nbuffers); |
1382 | |
1383 | q.client_release(client_acquire_sync(q)); |
1384 | q.resize(new_size); |
1385 | @@ -1160,7 +1080,7 @@ |
1386 | { // Regression test for LP: #1319765 |
1387 | using namespace testing; |
1388 | |
1389 | - mc::BufferQueue q{2, allocator, basic_properties, policy_factory}; |
1390 | + TestBufferQueue q{2}; |
1391 | |
1392 | q.client_release(client_acquire_sync(q)); |
1393 | auto a = q.compositor_acquire(this); |
1394 | @@ -1183,7 +1103,7 @@ |
1395 | { // Extended regression test for LP: #1319765 |
1396 | using namespace testing; |
1397 | |
1398 | - mc::BufferQueue q{2, allocator, basic_properties, policy_factory}; |
1399 | + TestBufferQueue q{2}; |
1400 | |
1401 | for (int i = 0; i < 100; ++i) |
1402 | { |
1403 | @@ -1225,7 +1145,7 @@ |
1404 | |
1405 | int const nbuffers{3}; |
1406 | |
1407 | - mc::BufferQueue q{nbuffers, allocator, basic_properties, policy_factory}; |
1408 | + TestBufferQueue q(nbuffers); |
1409 | q.allow_framedropping(true); |
1410 | |
1411 | std::vector<std::shared_ptr<mg::Buffer>> buffers; |
1412 | @@ -1237,9 +1157,14 @@ |
1413 | /* Let client release all possible buffers so they go into |
1414 | * the ready queue |
1415 | */ |
1416 | - for (int i = 0; i < nbuffers; ++i) |
1417 | - { |
1418 | - auto handle = client_acquire_async(q); |
1419 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
1420 | + for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1421 | + { |
1422 | + handles.push_back(client_acquire_async(q)); |
1423 | + } |
1424 | + for (int i = 0; i < max_ownable_buffers(nbuffers); ++i) |
1425 | + { |
1426 | + auto& handle = handles[i]; |
1427 | ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1428 | /* Check the client never got the compositor buffer acquired above */ |
1429 | ASSERT_THAT(handle->id(), Ne(comp_buffer->id())); |
1430 | @@ -1247,7 +1172,7 @@ |
1431 | } |
1432 | |
1433 | /* Let the compositor acquire all ready buffers */ |
1434 | - for (int i = 0; i < nbuffers; ++i) |
1435 | + for (int i = 0; i < max_ownable_buffers(nbuffers); ++i) |
1436 | { |
1437 | buffers.push_back(q.compositor_acquire(this)); |
1438 | } |
1439 | @@ -1273,7 +1198,7 @@ |
1440 | |
1441 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1442 | { |
1443 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1444 | + TestBufferQueue q(nbuffers); |
1445 | |
1446 | std::mutex client_buffer_guard; |
1447 | std::condition_variable client_buffer_cv; |
1448 | @@ -1329,7 +1254,7 @@ |
1449 | { |
1450 | for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1451 | { |
1452 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1453 | + TestBufferQueue q(nbuffers); |
1454 | for (int i = 0; i < 100; ++i) |
1455 | { |
1456 | auto handle = client_acquire_async(q); |
1457 | @@ -1365,7 +1290,7 @@ |
1458 | { |
1459 | for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1460 | { |
1461 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1462 | + TestBufferQueue q(nbuffers); |
1463 | |
1464 | void const* main_compositor = reinterpret_cast<void const*>(0); |
1465 | void const* second_compositor = reinterpret_cast<void const*>(1); |
1466 | @@ -1416,18 +1341,16 @@ |
1467 | } |
1468 | } |
1469 | |
1470 | - EXPECT_THAT(unique_buffers_acquired.size(), Eq(nbuffers)); |
1471 | + EXPECT_THAT(unique_buffers_acquired.size(), Ge(nbuffers)); |
1472 | |
1473 | } |
1474 | } |
1475 | |
1476 | -/* FIXME (enabling this optimization breaks timing tests) */ |
1477 | -TEST_F(BufferQueueTest, DISABLED_synchronous_clients_only_get_two_real_buffers) |
1478 | +TEST_F(BufferQueueTest, synchronous_clients_only_get_two_real_buffers) |
1479 | { |
1480 | - for (int nbuffers = 2; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1481 | + for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1482 | { |
1483 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1484 | - q.allow_framedropping(false); |
1485 | + TestBufferQueue q(nbuffers); |
1486 | |
1487 | std::atomic<bool> done(false); |
1488 | auto unblock = [&done] { done = true; }; |
1489 | @@ -1450,6 +1373,40 @@ |
1490 | } |
1491 | } |
1492 | |
1493 | +TEST_F(BufferQueueTest, queue_shrinks_after_expanding) |
1494 | +{ |
1495 | + for (int nbuffers = 3; nbuffers <= max_nbuffers_to_test; ++nbuffers) |
1496 | + { |
1497 | + TestBufferQueue q(nbuffers); |
1498 | + |
1499 | + std::vector<std::shared_ptr<AcquireWaitHandle>> handles; |
1500 | + for (int i = 0; i < max_ownable_buffers(nbuffers); i++) |
1501 | + { |
1502 | + handles.push_back(client_acquire_async(q)); |
1503 | + } |
1504 | + |
1505 | + //Make sure the queue expanded |
1506 | + ASSERT_THAT(q.allocated_buffers(), Eq(nbuffers)); |
1507 | + |
1508 | + for (auto& handle : handles) |
1509 | + { |
1510 | + handle->release_buffer(); |
1511 | + } |
1512 | + |
1513 | + for (int i = 0; i <= q.default_shrink_threshold; i++) |
1514 | + { |
1515 | + auto comp_buffer = q.compositor_acquire(this); |
1516 | + q.compositor_release(comp_buffer); |
1517 | + |
1518 | + auto handle = client_acquire_async(q); |
1519 | + ASSERT_THAT(handle->has_acquired_buffer(), Eq(true)); |
1520 | + handle->release_buffer(); |
1521 | + } |
1522 | + |
1523 | + EXPECT_THAT(q.allocated_buffers(), Lt(nbuffers)); |
1524 | + } |
1525 | +} |
1526 | + |
1527 | /* |
1528 | * This is a regression test for bug lp:1317801. This bug is a race and |
1529 | * very difficult to reproduce with pristine code. By carefully placing |
1530 | @@ -1467,7 +1424,7 @@ |
1531 | TEST_F(BufferQueueTest, DISABLED_lp_1317801_regression_test) |
1532 | { |
1533 | int const nbuffers = 3; |
1534 | - mc::BufferQueue q(nbuffers, allocator, basic_properties, policy_factory); |
1535 | + TestBufferQueue q(nbuffers); |
1536 | |
1537 | q.client_release(client_acquire_sync(q)); |
1538 |
FAILED: Continuous integration, rev:1678
http://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-ci/2015/
Executed test runs:
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-android-utopic-i386-build/723
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-clang-utopic-amd64-build/729
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-utopic-touch/723/console
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-amd64-ci/536/console
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-team-mir-development-branch-utopic-armhf-ci/534/console
FAILURE: http://jenkins.qa.ubuntu.com/job/generic-mediumtests-builder-utopic-armhf/2166/console
Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/mir-team-mir-development-branch-ci/2015/rebuild