Merge lp:~kdub/mir/bstream-integration-test2 into lp:mir
Status: Merged
Approved by: Daniel van Vugt
Approved revision: no longer in the source branch.
Merged at revision: 2733
Proposed branch: lp:~kdub/mir/bstream-integration-test2
Merge into: lp:mir
Diff against target: 623 lines (+519/-11), 1 file modified
  tests/integration-tests/test_buffer_scheduling.cpp (+519/-11)
To merge this branch: bzr merge lp:~kdub/mir/bstream-integration-test2
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Alexandros Frantzis (community) | | | Approve
Chris Halse Rogers | | | Approve
PS Jenkins bot (community) | continuous-integration | | Approve

Review via email: mp+263706@code.launchpad.net
Commit message
new buffer semantics: add more tests (inspired by the BufferQueue tests) to the BufferStream integration test.
Description of the change
new buffer semantics: add more tests (inspired by the BufferQueue tests) to the BufferStream integration test.
Note: I've been tracking the translations in a gdoc:
https:/
Still quite a few to go.
PS Jenkins bot (ps-jenkins) wrote:
Chris Halse Rogers (raof) wrote:
Quite a lot of these tests don't seem to make sense in my conception of the future allocate/submit model.
Hm. I kinda feel that these tests are going to be more a test of the emulate-current-semantics test code that we'll need to write just for these tests when we switch to allocate/submit.
Kevin DuBois (kdub) wrote:
> Quite a lot of these tests don't seem to make sense in my conception of the
> future allocate/submit model.
>
> Hm. I kinda feel that these tests are going to be more a test of the emulate-
> current-semantics test code that we'll need to write just for these tests when
> we switch to allocate/submit.
If BufferQueue and the "new semantics" were the two bubbles of a classic Venn diagram, this test suite aims only to capture the stuff in the middle. BufferQueue has some testing around its callback-based interfaces in its unit test, and the "new semantics" will need more testing than this suite for its multiple-buffers approach.
The strategy here is to put the client-to-server (and vice versa) "connections" in the stubs, and the endpoints in the tests. The connections between client and server will have to differ between the two models; but with the stub glue, the overarching behavior observed at the production and consumption endpoints should hopefully be the same for the behaviors in the middle of the Venn diagram. That overlap is what we can guard against regressions, and it lets us move the code while keeping it under test coverage.
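As a rough illustration of that split (hypothetical names, not the actual Mir test code): the tests only ever compare the production/consumption logs at the endpoints, while the "connection" in the middle is a swappable stub, so either model can sit behind the same assertions.

```cpp
#include <cassert>
#include <vector>

// Toy version of the endpoint abstraction described above. All names here
// are illustrative; the real suite uses ProducerSystem/ConsumerSystem over
// a BufferQueue-backed stream.
struct Entry
{
    int id;
    unsigned age;
    bool operator==(Entry const& e) const { return id == e.id && age == e.age; }
};

// The swappable "connection": today a BufferQueue, later an allocate/submit stub.
struct LoopbackStream
{
    std::vector<Entry> queue;
};

struct StubProducer
{
    explicit StubProducer(LoopbackStream& s) : s(s) {}
    void produce()
    {
        Entry e{0, ++age};       // single buffer id, monotonically aging content
        s.queue.push_back(e);
        log.push_back(e);
    }
    std::vector<Entry> production_log() const { return log; }
    LoopbackStream& s;
    unsigned age = 0;
    std::vector<Entry> log;
};

struct StubConsumer
{
    explicit StubConsumer(LoopbackStream& s) : s(s) {}
    void consume()
    {
        if (s.queue.empty()) return;
        log.push_back(s.queue.front());
        s.queue.erase(s.queue.begin());
    }
    std::vector<Entry> consumption_log() const { return log; }
    LoopbackStream& s;
    std::vector<Entry> log;
};
```

Under allocate/submit only the stub internals would change; the log comparison at the endpoints stays identical.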
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:2708
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Chris Halse Rogers (raof) wrote:
I think I understand what the purpose of the tests is; I just disagree that they're all in the intersection of the Venn diagram :)
Particularly:
+TEST_P(
What is this supposed to test in the acquire/submit framework? Of course the client sees all the available buffers; it's allocated them explicitly.
+TEST_P(
What is this trying to test in the acquire/submit framework? The client always gets as many buffers as they allocate.
+TEST_P(
This is kinda testable, but is only going to test the test code in an acquire/submit model. The client gets a resized buffer exactly when the client allocates a freshly sized buffer.
+TEST_P(
Single buffered clients remain nonsensical in a composited environment :). But again, as above, a client will acquire a resized buffer exactly when it allocates one.
+TEST_P(
+TEST_P(
+TEST_P(
Under acquire/submit we never block a client. As such, these tests are going to heavily depend on the test-code emulating the current semantics where we *do* block. They might still be worthwhile tests to ensure that sensibly-written clients get sensible behaviour, though?
I think the buffers_ready tests are also candidates for not-in-the-intersection.
Other issues:
+TEST_P(
This makes sense, but there should be a produce first, otherwise there's no buffer for the compositor.
Kevin DuBois (kdub) wrote:
> I think I understand what the purpose of the tests are, I just disagree that
> they're all in the intersection of the Venn diagram :)
> Particularly:
> +TEST_P(
> What is this supposed to test in the acquire/submit framework? Of course the
> client sees all the available buffers; it's allocated them explicitly.
It's not checking that all the buffers are seen by the client, but that the buffers are cycled through. In the current version the test expectations had gotten a little weak, so I strengthened them to make more sense.
> +TEST_P(
> What is this trying to test in the acquire/submit framework? The client always
> gets as many buffers as they allocate.
Ack, that was a bit unclear. The original test was mostly checking that it cycles through the buffers and doesn't block the client when framedropping is on; I've improved the test name and code to make that more obvious.
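To illustrate the framedropping expectation (a toy model, not BufferQueue itself, and all names here are hypothetical): with framedropping on, the producer never waits; when every buffer is already queued, the oldest unconsumed frame is dropped to free one up, so the producer cycles through all the buffers.

```cpp
#include <cassert>
#include <deque>
#include <set>

// Toy sketch of framedropping semantics: acquire() never blocks. If the
// free list is empty, the oldest queued frame is dropped back to the free
// list before handing out a buffer.
struct FramedroppingQueue
{
    explicit FramedroppingQueue(int n)
    {
        for (int i = 0; i < n; ++i)
            free_list.push_back(i);
    }

    int acquire()
    {
        if (free_list.empty())
        {
            // Framedropping: reclaim the oldest queued frame instead of blocking.
            free_list.push_back(ready.front());
            ready.pop_front();
        }
        int b = free_list.front();
        free_list.pop_front();
        return b;
    }

    void submit(int b) { ready.push_back(b); }

    std::deque<int> ready;      // frames queued for the compositor
    std::deque<int> free_list;  // buffers available to the producer
};
```

A producer hammering this queue without any consumer still never blocks, and every buffer id eventually shows up in its log.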
> +TEST_P(
> This is kinda testable, but is only going to test the test code in an
> acquire/submit model. The client gets a resized buffer exactly when the client
> allocates a freshly sized buffer.
I'm okay to have tests in the suite that have to have some more rigging for the new system. Maybe in working on the resize code, I can figure out a way at that point to make it more applicable to both systems.
> +TEST_P(
> with_single_
> Single buffered clients remain nonsensical in a composited environment :). But
> again, as above, a client will acquire a resized buffer exactly when it
> allocates one.
It also makes less sense in a fencing system :) And yes, it was redundant. Removed.
> +TEST_P(
> nonframedroppin
> +TEST_P(
> +TEST_P(
> Under acquire/submit we never block a client. As such, these tests are going
> to heavily depend on the test-code emulating the current semantics where we
> *do* block. They might still be worthwhile tests to ensure that sensibly-
> written clients get sensible behaviour, though?
>
> I think the buffers_ready tests are also candidates for not-in-the-
> intersection.
We can still hamper a client to the point that it blocks if we don't send buffers back when they're ready. It's a bit tricky to see how the test rigging will evolve, but it seems useful to keep these around, at least until the new code becomes a bit more concrete.
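A toy model of that point (hypothetical names, not the actual test rigging): even under allocate/submit, a client that has submitted all of its allocated buffers cannot produce again until the server sends one back, so "blocked" remains an observable state at the production endpoint.

```cpp
#include <cassert>
#include <deque>

// Sketch of allocate/submit client-side buffer ownership. The client
// explicitly allocates nbuffers up front; submitting transfers ownership
// to the server until the server returns the buffer.
struct AllocateSubmitClient
{
    explicit AllocateSubmitClient(int nbuffers)
    {
        for (int i = 0; i < nbuffers; ++i)
            owned.push_back(i);
    }

    bool can_produce() const { return !owned.empty(); }

    void submit()
    {
        submitted.push_back(owned.front());
        owned.pop_front();
    }

    void server_returns_one()
    {
        if (submitted.empty()) return;
        owned.push_back(submitted.front());
        submitted.pop_front();
    }

    std::deque<int> owned;      // buffers the client currently holds
    std::deque<int> submitted;  // buffers in flight to the server
};
```

If the server withholds returns, `can_produce()` stays false, which is the blocking behavior these tests observe.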
> Other issues:
> +TEST_P(
> This makes sense, but there should be a produce first, otherwise there's no
> buffer for the compositor.
fixed.
Also, thanks for reviewing! (probably a pain to review, hoping that the spreadsheet is/was somewhat helpful)
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:2712
http://
Executed test runs:
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
deb: http://
SUCCESS: http://
SUCCESS: http://
Click here to trigger a rebuild:
http://
Chris Halse Rogers (raof) wrote:
Yeah. I still suspect that a bunch of these will end up being not worthwhile tests, but maybe I'll be wrong, and it's harmless to keep them in here until the new code is here to test :)
Alexandros Frantzis (afrantzis) wrote:
Looks good.
Like Chris I am not sure if all of these will make sense eventually, but I guess it's better to have them and remove them later if we don't need them, than to regress because we missed a test.
Preview Diff
1 | === modified file 'tests/integration-tests/test_buffer_scheduling.cpp' |
2 | --- tests/integration-tests/test_buffer_scheduling.cpp 2015-06-26 18:19:01 +0000 |
3 | +++ tests/integration-tests/test_buffer_scheduling.cpp 2015-07-07 11:42:17 +0000 |
4 | @@ -22,6 +22,7 @@ |
5 | #include "mir/test/doubles/mock_client_buffer_factory.h" |
6 | #include "mir/test/doubles/stub_buffer_allocator.h" |
7 | #include "mir/test/doubles/stub_frame_dropping_policy_factory.h" |
8 | +#include "mir/test/doubles/mock_frame_dropping_policy_factory.h" |
9 | #include "mir/test/fake_shared.h" |
10 | #include <gtest/gtest.h> |
11 | |
12 | @@ -35,18 +36,26 @@ |
13 | |
14 | namespace |
15 | { |
16 | +enum class Access |
17 | +{ |
18 | + blocked, |
19 | + unblocked |
20 | +}; |
21 | + |
22 | struct BufferEntry |
23 | { |
24 | mg::BufferID id; |
25 | unsigned int age; |
26 | + Access blockage; |
27 | bool operator==(BufferEntry const& b) const |
28 | { |
29 | - return ((id == b.id) && (age == b.age)); |
30 | + return ((blockage == b.blockage) && (id == b.id) && (age == b.age)); |
31 | } |
32 | }; |
33 | |
34 | struct ProducerSystem |
35 | { |
36 | + virtual bool can_produce() = 0; |
37 | virtual void produce() = 0; |
38 | virtual std::vector<BufferEntry> production_log() = 0; |
39 | virtual ~ProducerSystem() = default; |
40 | @@ -75,18 +84,32 @@ |
41 | std::bind(&BufferQueueProducer::buffer_ready, this, std::placeholders::_1)); |
42 | } |
43 | |
44 | + bool can_produce() |
45 | + { |
46 | + std::unique_lock<decltype(mutex)> lk(mutex); |
47 | + return buffer; |
48 | + } |
49 | + |
50 | void produce() |
51 | { |
52 | mg::Buffer* b = nullptr; |
53 | - { |
54 | - std::unique_lock<decltype(mutex)> lk(mutex); |
55 | - b = buffer; |
56 | - age++; |
57 | - entries.emplace_back(BufferEntry{b->id(), age}); |
58 | - buffer->write(reinterpret_cast<unsigned char const*>(&age), sizeof(age)); |
59 | - } |
60 | - stream.swap_buffers(b, |
61 | - std::bind(&BufferQueueProducer::buffer_ready, this, std::placeholders::_1)); |
62 | + if (can_produce()) |
63 | + { |
64 | + { |
65 | + std::unique_lock<decltype(mutex)> lk(mutex); |
66 | + b = buffer; |
67 | + buffer = nullptr; |
68 | + age++; |
69 | + entries.emplace_back(BufferEntry{b->id(), age, Access::unblocked}); |
70 | + b->write(reinterpret_cast<unsigned char const*>(&age), sizeof(age)); |
71 | + } |
72 | + stream.swap_buffers(b, |
73 | + std::bind(&BufferQueueProducer::buffer_ready, this, std::placeholders::_1)); |
74 | + } |
75 | + else |
76 | + { |
77 | + entries.emplace_back(BufferEntry{mg::BufferID{2}, 0u, Access::blocked}); |
78 | + } |
79 | } |
80 | |
81 | std::vector<BufferEntry> production_log() |
82 | @@ -95,6 +118,12 @@ |
83 | return entries; |
84 | } |
85 | |
86 | + geom::Size last_size() |
87 | + { |
88 | + if (buffer) |
89 | + return buffer->size(); |
90 | + return geom::Size{}; |
91 | + } |
92 | private: |
93 | mc::BufferStream& stream; |
94 | void buffer_ready(mg::Buffer* b) |
95 | @@ -118,8 +147,9 @@ |
96 | void consume() |
97 | { |
98 | auto b = stream.lock_compositor_buffer(this); |
99 | + last_size_ = b->size(); |
100 | b->read([this, b](unsigned char const* p) { |
101 | - entries.emplace_back(BufferEntry{b->id(), *reinterpret_cast<unsigned int const*>(p)}); |
102 | + entries.emplace_back(BufferEntry{b->id(), *reinterpret_cast<unsigned int const*>(p), Access::unblocked}); |
103 | }); |
104 | } |
105 | |
106 | @@ -127,8 +157,15 @@ |
107 | { |
108 | return entries; |
109 | } |
110 | + |
111 | + geom::Size last_size() |
112 | + { |
113 | + return last_size_; |
114 | + } |
115 | + |
116 | mc::BufferStream& stream; |
117 | std::vector<BufferEntry> entries; |
118 | + geom::Size last_size_; |
119 | }; |
120 | |
121 | //schedule helpers |
122 | @@ -161,6 +198,30 @@ |
123 | } |
124 | } |
125 | |
126 | +void repeat_system_until( |
127 | + std::vector<ScheduleEntry>& schedule, |
128 | + std::function<bool()> const& predicate) |
129 | +{ |
130 | + std::sort(schedule.begin(), schedule.end(), |
131 | + [](ScheduleEntry& a, ScheduleEntry& b) |
132 | + { |
133 | + return a.timestamp < b.timestamp; |
134 | + }); |
135 | + |
136 | + auto entry_it = schedule.begin(); |
137 | + if (entry_it == schedule.end()) return; |
138 | + while(predicate()) |
139 | + { |
140 | + for(auto const& p : entry_it->producers) |
141 | + p->produce(); |
142 | + for(auto const& c : entry_it->consumers) |
143 | + c->consume(); |
144 | + entry_it++; |
145 | + if (entry_it == schedule.end()) entry_it = schedule.begin(); |
146 | + } |
147 | +} |
148 | + |
149 | + |
150 | //test infrastructure |
151 | struct BufferScheduling : public Test, ::testing::WithParamInterface<int> |
152 | { |
153 | @@ -174,12 +235,17 @@ |
154 | mc::BufferStreamSurfaces stream{mt::fake_shared(queue)}; |
155 | BufferQueueProducer producer{stream}; |
156 | BufferQueueConsumer consumer{stream}; |
157 | + BufferQueueConsumer second_consumer{stream}; |
158 | }; |
159 | struct WithAnyNumberOfBuffers : BufferScheduling {}; |
160 | struct WithTwoOrMoreBuffers : BufferScheduling {}; |
161 | struct WithThreeOrMoreBuffers : BufferScheduling {}; |
162 | +struct WithOneBuffer : BufferScheduling {}; |
163 | +struct WithTwoBuffers : BufferScheduling {}; |
164 | +struct WithThreeBuffers : BufferScheduling {}; |
165 | } |
166 | |
167 | +/* Regression test for LP#1270964 */ |
168 | TEST_P(WithAnyNumberOfBuffers, all_buffers_consumed_in_interleaving_pattern) |
169 | { |
170 | std::vector<ScheduleEntry> schedule = { |
171 | @@ -241,6 +307,436 @@ |
172 | EXPECT_THAT(consumption_log, ContainerEq(production_log)); |
173 | } |
174 | |
175 | + |
176 | +MATCHER_P(EachBufferIdIs, value, "") |
177 | +{ |
178 | + auto id_matcher = [](BufferEntry const& a, BufferEntry const& b){ return a.id == b.id; }; |
179 | + return std::search_n(arg.begin(), arg.end(), arg.size(), value, id_matcher) != std::end(arg); |
180 | +} |
181 | + |
182 | +MATCHER(HasIncreasingAge, "") |
183 | +{ |
184 | + return std::is_sorted(arg.begin(), arg.end(), |
185 | + [](BufferEntry const& a, BufferEntry const& b) { |
186 | + return a.age < b.age; |
187 | + }); |
188 | +} |
189 | + |
190 | +TEST_P(WithOneBuffer, client_and_server_get_concurrent_access) |
191 | +{ |
192 | + std::vector<ScheduleEntry> schedule = { |
193 | + {1_t, {&producer}, {&consumer}}, |
194 | + {2_t, {&producer}, {&consumer}}, |
195 | + {3_t, {&producer}, {}}, |
196 | + {4_t, {}, {&consumer}}, |
197 | + }; |
198 | + run_system(schedule); |
199 | + |
200 | + auto production_log = producer.production_log(); |
201 | + auto consumption_log = consumer.consumption_log(); |
202 | + EXPECT_THAT(production_log, Not(IsEmpty())); |
203 | + EXPECT_THAT(consumption_log, Not(IsEmpty())); |
204 | + EXPECT_THAT(consumption_log, ContainerEq(production_log)); |
205 | + |
206 | + EXPECT_THAT(consumption_log, EachBufferIdIs(consumption_log[0])); |
207 | + EXPECT_THAT(production_log, EachBufferIdIs(consumption_log[0])); |
208 | + EXPECT_THAT(consumption_log, HasIncreasingAge()); |
209 | +} |
210 | + |
211 | + |
212 | +/* Regression test for LP: #1210042 */ |
213 | +TEST_P(WithThreeOrMoreBuffers, consumers_dont_recycle_startup_buffer ) |
214 | +{ |
215 | + std::vector<ScheduleEntry> schedule = { |
216 | + {1_t, {&producer}, {}}, |
217 | + {2_t, {&producer}, {}}, |
218 | + {3_t, {}, {&consumer}}, |
219 | + }; |
220 | + run_system(schedule); |
221 | + |
222 | + auto production_log = producer.production_log(); |
223 | + auto consumption_log = consumer.consumption_log(); |
224 | + ASSERT_THAT(production_log, SizeIs(2)); |
225 | + ASSERT_THAT(consumption_log, SizeIs(1)); |
226 | + EXPECT_THAT(consumption_log[0], Eq(production_log[0])); |
227 | +} |
228 | + |
229 | +TEST_P(WithTwoOrMoreBuffers, consumer_cycles_through_all_available_buffers) |
230 | +{ |
231 | + auto tick = 0_t; |
232 | + std::vector<ScheduleEntry> schedule; |
233 | + for(auto i = 0; i < nbuffers; i++) |
234 | + schedule.emplace_back(ScheduleEntry{tick++, {&producer}, {&consumer}}); |
235 | + run_system(schedule); |
236 | + |
237 | + auto production_log = producer.production_log(); |
238 | + std::sort(production_log.begin(), production_log.end(), |
239 | + [](BufferEntry const& a, BufferEntry const& b) { return a.id.as_value() > b.id.as_value(); }); |
240 | + auto it = std::unique(production_log.begin(), production_log.end()); |
241 | + production_log.erase(it, production_log.end()); |
242 | + EXPECT_THAT(production_log, SizeIs(nbuffers)); |
243 | +} |
244 | + |
245 | +TEST_P(WithAnyNumberOfBuffers, compositor_can_always_get_a_buffer) |
246 | +{ |
247 | + std::vector<ScheduleEntry> schedule = { |
248 | + {0_t, {&producer}, {}}, |
249 | + {1_t, {}, {&consumer}}, |
250 | + {2_t, {}, {&consumer}}, |
251 | + {3_t, {}, {&consumer}}, |
252 | + {5_t, {}, {&consumer}}, |
253 | + {6_t, {}, {&consumer}}, |
254 | + }; |
255 | + run_system(schedule); |
256 | + |
257 | + auto consumption_log = consumer.consumption_log(); |
258 | + ASSERT_THAT(consumption_log, SizeIs(5)); |
259 | +} |
260 | + |
261 | +TEST_P(WithTwoOrMoreBuffers, compositor_doesnt_starve_from_slow_client) |
262 | +{ |
263 | + std::vector<ScheduleEntry> schedule = { |
264 | + {1_t, {}, {&consumer}}, |
265 | + {60_t, {}, {&consumer}}, |
266 | + {120_t, {}, {&consumer}}, |
267 | + {150_t, {&producer}, {}}, |
268 | + {180_t, {}, {&consumer}}, |
269 | + {240_t, {}, {&consumer}}, |
270 | + {270_t, {&producer}, {}}, |
271 | + {300_t, {}, {&consumer}}, |
272 | + {360_t, {}, {&consumer}}, |
273 | + }; |
274 | + run_system(schedule); |
275 | + |
276 | + auto consumption_log = consumer.consumption_log(); |
277 | + ASSERT_THAT(consumption_log, SizeIs(7)); |
278 | + EXPECT_THAT(std::count(consumption_log.begin(), consumption_log.end(), consumption_log[0]), Eq(3)); |
279 | + EXPECT_THAT(std::count(consumption_log.begin(), consumption_log.end(), consumption_log[3]), Eq(2)); |
280 | + EXPECT_THAT(std::count(consumption_log.begin(), consumption_log.end(), consumption_log[5]), Eq(2)); |
281 | +} |
282 | + |
283 | +TEST_P(WithTwoOrMoreBuffers, multiple_consumers_are_in_sync) |
284 | +{ |
285 | + std::vector<ScheduleEntry> schedule = { |
286 | + {0_t, {&producer}, {}}, |
287 | + {1_t, {}, {&consumer}}, |
288 | + {60_t, {}, {&consumer, &second_consumer}}, |
289 | + {119_t, {}, {&consumer}}, |
290 | + {120_t, {}, {&second_consumer}}, |
291 | + {130_t, {&producer}, {}}, |
292 | + {178_t, {}, {&consumer}}, |
293 | + {180_t, {}, {&second_consumer}}, |
294 | + {237_t, {}, {&consumer}}, |
295 | + {240_t, {}, {&second_consumer}}, |
296 | + }; |
297 | + run_system(schedule); |
298 | + |
299 | + auto production_log = producer.production_log(); |
300 | + auto consumption_log_1 = consumer.consumption_log(); |
301 | + auto consumption_log_2 = second_consumer.consumption_log(); |
302 | + ASSERT_THAT(consumption_log_1, SizeIs(5)); |
303 | + ASSERT_THAT(consumption_log_2, SizeIs(4)); |
304 | + ASSERT_THAT(production_log, SizeIs(2)); |
305 | + |
306 | + std::for_each(consumption_log_1.begin(), consumption_log_1.begin() + 3, |
307 | + [&](BufferEntry const& entry){ EXPECT_THAT(entry, Eq(production_log[0])); }); |
308 | + std::for_each(consumption_log_1.begin() + 3, consumption_log_1.end(), |
309 | + [&](BufferEntry const& entry){ EXPECT_THAT(entry, Eq(production_log[1])); }); |
310 | + std::for_each(consumption_log_2.begin(), consumption_log_2.begin() + 2, |
311 | + [&](BufferEntry const& entry){ EXPECT_THAT(entry, Eq(production_log[0])); }); |
312 | + std::for_each(consumption_log_2.begin() + 2, consumption_log_2.end(), |
313 | + [&](BufferEntry const& entry){ EXPECT_THAT(entry, Eq(production_log[1])); }); |
314 | +} |
315 | + |
316 | +TEST_P(WithThreeOrMoreBuffers, multiple_fast_compositors_are_in_sync) |
317 | +{ |
318 | + std::vector<ScheduleEntry> schedule = { |
319 | + {0_t, {&producer}, {}}, |
320 | + {1_t, {&producer}, {}}, |
321 | + {60_t, {}, {&consumer, &second_consumer}}, |
322 | + {61_t, {}, {&consumer, &second_consumer}}, |
323 | + }; |
324 | + run_system(schedule); |
325 | + |
326 | + auto production_log = producer.production_log(); |
327 | + auto consumption_log_1 = consumer.consumption_log(); |
328 | + auto consumption_log_2 = second_consumer.consumption_log(); |
329 | + EXPECT_THAT(consumption_log_1, Eq(production_log)); |
330 | + EXPECT_THAT(consumption_log_2, Eq(production_log)); |
331 | +} |
332 | + |
333 | +TEST_P(WithTwoOrMoreBuffers, framedropping_clients_get_all_buffers_and_dont_block) |
334 | +{ |
335 | + queue.allow_framedropping(true); |
336 | + std::vector<ScheduleEntry> schedule; |
337 | + for (auto i = 0; i < nbuffers * 3; i++) |
338 | + schedule.emplace_back(ScheduleEntry{1_t, {&producer}, {}}); |
339 | + run_system(schedule); |
340 | + |
341 | + auto production_log = producer.production_log(); |
342 | + std::sort(production_log.begin(), production_log.end(), |
343 | + [](BufferEntry const& a, BufferEntry const& b) { return a.id.as_value() > b.id.as_value(); }); |
344 | + auto last = std::unique(production_log.begin(), production_log.end(), |
345 | + [](BufferEntry const& a, BufferEntry const& b) { return a.id == b.id; }); |
346 | + production_log.erase(last, production_log.end()); |
347 | + |
348 | + EXPECT_THAT(production_log.size(), Ge(nbuffers)); //Ge is to accommodate overallocation |
349 | +} |
350 | + |
351 | +TEST_P(WithTwoOrMoreBuffers, nonframedropping_client_throttles_to_compositor_rate) |
352 | +{ |
353 | + unsigned int reps = 50; |
354 | + auto const expected_blocks = reps - nbuffers; |
355 | + std::vector<ScheduleEntry> schedule = { |
356 | + {1_t, {&producer, &producer}, {&consumer}}, |
357 | + }; |
358 | + repeat_system_until(schedule, [&reps]{ return --reps != 0; }); |
359 | + |
360 | + auto log = producer.production_log(); |
361 | + auto block_count = std::count_if(log.begin(), log.end(), |
362 | + [](BufferEntry const& e) { return e.blockage == Access::blocked; }); |
363 | + EXPECT_THAT(block_count, Ge(expected_blocks)); |
364 | +} |
365 | + |
366 | +TEST_P(WithAnyNumberOfBuffers, resize_affects_client_acquires_immediately) |
367 | +{ |
368 | + unsigned int const sizes_to_test{4}; |
369 | + geom::Size new_size = properties.size; |
370 | + for(auto i = 0u; i < sizes_to_test; i++) |
371 | + { |
372 | + new_size = new_size * 2; |
373 | + queue.resize(new_size); |
374 | + |
375 | + std::vector<ScheduleEntry> schedule = {{1_t, {&producer}, {&consumer}}}; |
376 | + run_system(schedule); |
377 | + EXPECT_THAT(producer.last_size(), Eq(new_size)); |
378 | + } |
379 | +} |
380 | + |
381 | +TEST_P(WithAnyNumberOfBuffers, compositor_acquires_resized_frames) |
382 | +{ |
383 | + unsigned int const sizes_to_test{4}; |
384 | + int const attempt_limit{100}; |
385 | + geom::Size new_size = properties.size; |
386 | + for(auto i = 0u; i < sizes_to_test; i++) |
387 | + { |
388 | + new_size = new_size * 2; |
389 | + queue.resize(new_size); |
390 | + |
391 | + std::vector<ScheduleEntry> schedule = { |
392 | + {1_t, {&producer}, {}}, |
393 | + {2_t, {}, {&consumer}}, |
394 | + {3_t, {&producer}, {}}, |
395 | + }; |
396 | + run_system(schedule); |
397 | + |
398 | + int attempt_count = 0; |
399 | + schedule = {{2_t, {}, {&consumer}}}; |
400 | + repeat_system_until(schedule, [&] { |
401 | + return (consumer.last_size() != new_size) && (attempt_count++ < attempt_limit); }); |
402 | + |
403 | + ASSERT_THAT(attempt_count, Lt(attempt_limit)) << "consumer never got the new size"; |
404 | + } |
405 | +} |
406 | + |
407 | +// Regression test for LP: #1396006 |
408 | +TEST_P(WithTwoOrMoreBuffers, framedropping_policy_never_drops_newest_frame) |
409 | +{ |
410 | + mtd::MockFrameDroppingPolicyFactory policy_factory; |
411 | + mc::BufferQueue queue{nbuffers, mt::fake_shared(server_buffer_factory), properties, policy_factory}; |
412 | + mc::BufferStreamSurfaces stream{mt::fake_shared(queue)}; |
413 | + BufferQueueProducer producer(stream); |
414 | + |
415 | + for(auto i = 0; i < nbuffers; i++) |
416 | + producer.produce(); |
417 | + policy_factory.trigger_policies(); |
418 | + producer.produce(); |
419 | + |
420 | + auto production_log = producer.production_log(); |
421 | + ASSERT_THAT(production_log, SizeIs(nbuffers + 1)); |
422 | + EXPECT_THAT(production_log[nbuffers], Not(Eq(production_log[nbuffers - 1]))); |
423 | +} |
424 | + |
425 | +TEST_P(WithTwoOrMoreBuffers, uncomposited_client_swaps_when_policy_triggered) |
426 | +{ |
427 | + mtd::MockFrameDroppingPolicyFactory policy_factory; |
428 | + mc::BufferQueue queue{nbuffers, mt::fake_shared(server_buffer_factory), properties, policy_factory}; |
429 | + mc::BufferStreamSurfaces stream{mt::fake_shared(queue)}; |
430 | + BufferQueueProducer producer(stream); |
431 | + |
432 | + for(auto i = 0; i < nbuffers; i++) |
433 | + producer.produce(); |
434 | + policy_factory.trigger_policies(); |
435 | + producer.produce(); |
436 | + |
437 | + auto production_log = producer.production_log(); |
438 | + ASSERT_THAT(production_log, SizeIs(nbuffers + 1)); |
439 | + EXPECT_THAT(production_log[nbuffers - 1].blockage, Eq(Access::blocked)); |
440 | + EXPECT_THAT(production_log[nbuffers].blockage, Eq(Access::unblocked)); |
441 | +} |
442 | + |
443 | +// Regression test for LP: #1319765 |
444 | +TEST_P(WithTwoBuffers, client_is_not_blocked_prematurely) |
445 | +{ |
446 | + producer.produce(); |
447 | + auto a = queue.compositor_acquire(this); |
448 | + producer.produce(); |
449 | + auto b = queue.compositor_acquire(this); |
450 | + |
451 | + ASSERT_NE(a.get(), b.get()); |
452 | + |
453 | + queue.compositor_release(a); |
454 | + producer.produce(); |
455 | + queue.compositor_release(b); |
456 | + |
457 | + /* |
458 | + * Update to the original test case; This additional compositor acquire |
459 | + * represents the fixing of LP: #1395581 in the compositor logic. |
460 | + */ |
461 | + if (queue.buffers_ready_for_compositor(this)) |
462 | + queue.compositor_release(queue.compositor_acquire(this)); |
463 | + |
464 | + // With the fix, a buffer will be available instantaneously: |
465 | + EXPECT_TRUE(producer.can_produce()); |
466 | +} |
467 | + |
468 | +// Extended regression test for LP: #1319765 |
469 | +TEST_P(WithTwoBuffers, composite_on_demand_never_deadlocks) |
470 | +{ |
471 | + for (int i = 0; i < 100; ++i) |
472 | + { |
473 | + producer.produce(); |
474 | + auto a = queue.compositor_acquire(this); |
475 | + producer.produce(); |
476 | + auto b = queue.compositor_acquire(this); |
477 | + |
478 | + ASSERT_NE(a.get(), b.get()); |
479 | + |
480 | + queue.compositor_release(a); |
481 | + producer.produce(); |
482 | + queue.compositor_release(b); |
483 | + |
484 | + /* |
485 | + * Update to the original test case; This additional compositor acquire |
486 | + * represents the fixing of LP: #1395581 in the compositor logic. |
487 | + */ |
488 | + if (queue.buffers_ready_for_compositor(this)) |
489 | + queue.compositor_release(queue.compositor_acquire(this)); |
490 | + |
491 | + EXPECT_TRUE(producer.can_produce()); |
492 | + |
493 | + consumer.consume(); |
494 | + consumer.consume(); |
495 | + } |
496 | +} |
497 | + |
498 | +// Regression test for LP: #1395581 |
499 | +TEST_P(WithTwoOrMoreBuffers, buffers_ready_is_not_underestimated) |
500 | +{ |
501 | + // Produce frame 1 |
502 | + producer.produce(); |
503 | + // Acquire frame 1 |
504 | + auto a = queue.compositor_acquire(this); |
505 | + |
506 | + // Produce frame 2 |
507 | + producer.produce(); |
508 | + // Acquire frame 2 |
509 | + auto b = queue.compositor_acquire(this); |
510 | + |
511 | + // Release frame 1 |
512 | + queue.compositor_release(a); |
513 | + // Produce frame 3 |
514 | + producer.produce(); |
515 | + // Release frame 2 |
516 | + queue.compositor_release(b); |
517 | + |
518 | + // Verify frame 3 is ready for the first compositor |
519 | + EXPECT_THAT(queue.buffers_ready_for_compositor(this), Ge(1)); |
520 | + auto c = queue.compositor_acquire(this); |
521 | + |
522 | + // Verify frame 3 is ready for a second compositor |
523 | + int const other_compositor_id = 0; |
524 | + ASSERT_THAT(queue.buffers_ready_for_compositor(&other_compositor_id), Ge(1)); |
525 | + |
526 | + queue.compositor_release(c); |
527 | +} |
528 | + |
529 | +TEST_P(WithTwoOrMoreBuffers, buffers_ready_eventually_reaches_zero) |
530 | +{ |
531 | + const int nmonitors = 3; |
532 | + std::array<std::shared_ptr<BufferQueueConsumer>, nmonitors> consumers { { |
533 | + std::make_shared<BufferQueueConsumer>(stream), |
534 | + std::make_shared<BufferQueueConsumer>(stream), |
535 | + std::make_shared<BufferQueueConsumer>(stream) |
536 | + } }; |
537 | + |
538 | + for (auto const& consumer : consumers) |
539 | + EXPECT_EQ(0, queue.buffers_ready_for_compositor(consumer.get())); |
540 | + |
541 | + producer.produce(); |
542 | + |
543 | + for (auto const& consumer : consumers) |
544 | + { |
545 | + ASSERT_NE(0, queue.buffers_ready_for_compositor(consumer.get())); |
546 | + |
547 | + // Double consume to account for the +1 that |
548 | + // buffers_ready_for_compositor adds to do dynamic performance |
549 | + // detection. |
550 | + consumer->consume(); |
551 | + consumer->consume(); |
552 | + |
553 | + ASSERT_EQ(0, queue.buffers_ready_for_compositor(consumer.get())); |
554 | + } |
555 | +} |
556 | + |
557 | +TEST_P(WithAnyNumberOfBuffers, compositor_inflates_ready_count_for_slow_clients) |
558 | +{ |
559 | + for (int frame = 0; frame < 10; frame++) |
560 | + { |
561 | + ASSERT_EQ(0, queue.buffers_ready_for_compositor(&consumer)); |
562 | + producer.produce(); |
563 | + |
564 | + // Detecting a slow client requires scheduling at least one extra |
565 | + // frame... |
566 | + int nready = queue.buffers_ready_for_compositor(&consumer); |
567 | + ASSERT_EQ(2, nready); |
568 | + for (int i = 0; i < nready; ++i) |
569 | + consumer.consume(); |
570 | + } |
571 | +} |
572 | + |
573 | +TEST_P(WithAnyNumberOfBuffers, first_user_is_recorded) |
574 | +{ |
575 | + consumer.consume(); |
576 | + EXPECT_TRUE(queue.is_a_current_buffer_user(&consumer)); |
577 | +} |
578 | + |
579 | +TEST_P(WithThreeBuffers, gives_compositor_a_valid_buffer_after_dropping_old_buffers_without_clients) |
580 | +{ |
581 | + queue.drop_old_buffers(); |
582 | + consumer.consume(); |
583 | + EXPECT_THAT(consumer.consumption_log(), SizeIs(1)); |
584 | +} |
585 | + |
586 | +TEST_P(WithThreeBuffers, gives_new_compositor_the_newest_buffer_after_dropping_old_buffers) |
587 | +{ |
588 | + producer.produce(); |
589 | + consumer.consume(); |
590 | + producer.produce(); |
591 | + queue.drop_old_buffers(); |
592 | + |
593 | + void const* const new_compositor_id{&nbuffers}; |
594 | + auto comp2 = queue.compositor_acquire(new_compositor_id); |
595 | + |
596 | + auto production_log = producer.production_log(); |
597 | + auto consumption_log = consumer.consumption_log(); |
598 | + ASSERT_THAT(production_log, SizeIs(2)); |
599 | + ASSERT_THAT(consumption_log, SizeIs(1)); |
600 | + |
601 | + EXPECT_THAT(production_log[0], Eq(consumption_log[0])); |
602 | + EXPECT_THAT(production_log[1].id, Eq(comp2->id())); |
603 | +} |
604 | + |
605 | int const max_buffers_to_test{5}; |
606 | INSTANTIATE_TEST_CASE_P( |
607 | BufferScheduling, |
608 | @@ -254,3 +750,15 @@ |
609 | BufferScheduling, |
610 | WithThreeOrMoreBuffers, |
611 | Range(3, max_buffers_to_test)); |
612 | +INSTANTIATE_TEST_CASE_P( |
613 | + BufferScheduling, |
614 | + WithOneBuffer, |
615 | + Values(1)); |
616 | +INSTANTIATE_TEST_CASE_P( |
617 | + BufferScheduling, |
618 | + WithTwoBuffers, |
619 | + Values(2)); |
620 | +INSTANTIATE_TEST_CASE_P( |
621 | + BufferScheduling, |
622 | + WithThreeBuffers, |
623 | + Values(3)); |
FAILED: Continuous integration, rev:2707
http://jenkins.qa.ubuntu.com/job/mir-ci/4253/
Executed test runs:
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-android-vivid-i386-build/3077
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-clang-wily-amd64-build/598/console
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-vivid-touch/3025
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-wily-amd64-ci/409
deb: http://jenkins.qa.ubuntu.com/job/mir-wily-amd64-ci/409/artifact/work/output/*zip*/output.zip
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-builder-vivid-armhf/3025
deb: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-builder-vivid-armhf/3025/artifact/work/output/*zip*/output.zip
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-runner-mako/5835
SUCCESS: http://s-jenkins.ubuntu-ci:8080/job/touch-flash-device/21638
Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/mir-ci/4253/rebuild