Merge lp:~vanvugt/mir/predictive-bypass-2 into lp:mir
Status: Merged
Merged at revision: 2719
Proposed branch: lp:~vanvugt/mir/predictive-bypass-2
Merge into: lp:mir
Diff against target: 377 lines (+190/-0), 9 files modified
  src/platforms/android/server/hwc_device.cpp (+45/-0)
  src/platforms/mesa/server/kms/display_buffer.cpp (+62/-0)
  src/platforms/mesa/server/kms/kms_output.h (+8/-0)
  src/platforms/mesa/server/kms/real_kms_output.cpp (+8/-0)
  src/platforms/mesa/server/kms/real_kms_output.h (+1/-0)
  tests/mir_test_doubles/mock_drm.cpp (+5/-0)
  tests/unit-tests/graphics/android/test_hwc_device.cpp (+23/-0)
  tests/unit-tests/graphics/mesa/kms/mock_kms_output.h (+1/-0)
  tests/unit-tests/graphics/mesa/kms/test_display_buffer.cpp (+37/-0)
To merge this branch: bzr merge lp:~vanvugt/mir/predictive-bypass-2
Related bugs: (none listed)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Alan Griffiths | | | Needs Information |
| Andreas Pokorny (community) | | | Needs Information |
| PS Jenkins bot (community) | continuous-integration | | Approve |

Review via email: mp+261941@code.launchpad.net
Commit message
Introducing "predictive bypass"; this provides a constant ~10ms reduction
in latency when fully bypassed/overlayed. This benefit is in addition to
any lag reductions provided by other branches.
Additional unexpected benefits (free!):
* In some cases even smoothness/frame rate is improved by this branch
(LP: #1447896).
* Software cursors/touchspots appear to "stick" to the client app better
with this branch, because the underlying client surface has the
additional time it needs to update for the new cursor/touch position
before the frame is posted.
Description of the change
Background research where this algorithm was first proposed (in a slightly different form):
https:/
Possible variations on the algorithm:
* The same technique can be applied to GL compositing too, but has
explicitly NOT been enabled for compositing right now. This is
because you can't trust most hardware (especially mobile) to be fast
enough to GL render and still have time to sleep without missing a
frame. Only the bypass/overlay code path is safe enough because we
know there is no GL compositing required.
* Detect render times and missed frames intelligently... That's what
the first version of this optimization did. Unfortunately it was not
reliable and prone to false positives which resulted in the
optimization backing off and losing benefit. That approach is also
not very portable to Android (where we need it most).
If you're interested in manual testing:
Seeing 10ms improvement requires very careful attention to detail. If
you're trying to see it with your own eyes, these techniques help:
* Always start from the lowest latency configuration, so that any
improvement will be relatively more significant. This means do not
test with nesting, and do add --nbuffers=2 to your server. Your test
clients also should be run in lowest latency mode to most easily
see the improvement. This means using the -n option for framedropping
(on desktop, but don't use it on Android due to bugs; see below).
* Configure your input devices for minimal latency. USB mice for example
default to about 8ms latency (125Hz). You can reduce that to 1ms
(1000Hz) using the kernel parameter: usbhid.mousepoll=1
* Android warning! There are a bunch of Android performance bugs you need
to be aware of and are likely to run into, making testing difficult and
confusing, so familiarise yourself:
https:/
* Remember only the bypass/overlay path gets the optimization so on
desktop at least that means only fullscreen GL clients benefit.
Of course, we aim for greater than 10ms improvement in future. This
branch is just one step on that journey.
PS Jenkins bot (ps-jenkins) wrote:
Daniel van Vugt (vanvugt) wrote:
Argh, vivid is not implementing C++14 but wily is?
Daniel van Vugt (vanvugt) wrote:
Finally got this tested on the slowest machine I have: an Atom N270. This machine struggles with Unity7, and with Unity8 it struggles even more. However, running Mir demos I can get its end-to-end latency down to approximately 6ms with this branch [1ms input latency, 1ms render time, 4ms grace time to page flip]. And a full 60 FPS.
So even a machine that's arguably too slow for Unity can easily keep up with predictive bypass.
PS Jenkins bot (ps-jenkins) wrote:
PASSED: Continuous integration, rev:2653
Alan Griffiths (alan-griffiths) wrote:
*Needs Discussion*
We really need some decent way of testing end-to-end performance improvements like this claim.
It is impractical to rely on manual testing to detect regressions. And a bunch of unit tests for the throttling logic does not adequately prove the real world behaviour of the system.
Daniel van Vugt (vanvugt) wrote:
It's good to have some scepticism. And it's reasonable to not expect most people to do exhaustive manual testing to convince themselves this works.
Before anyone says it, I think sleeping sounds like a terrible hack. I have questioned myself for months over this. However, the gain of 10ms less lag consistently across all devices is a big win worth living with some ugliness for. You can see the improvement on a phone in something as simple as swiping across the screen.
To help with people's mental models, consider this (times in milliseconds for illustration):
Present behaviour:
t0: Compositor snapshots scene and decides it can bypass/overlay it
t0: post() begins
t16: post() returns (waited for vblank and page flip completed)
Result: What's on screen is at least 16ms old (time since the snapshot)
Result: Only the first surface that woke up the compositor loop is likely to have a fresh frame displayed, because the time between the frame_posted signal and the scene snapshot is negligibly small. Other surfaces won't have a fresh frame ready and appear to stutter instead (LP: #1447896).
New behaviour in this branch:
t0: Still sleeping ~10ms from last frame
t10: Compositor snapshots scene and decides it can bypass/overlay it
t10: post() begins
t16: post() returns (waited for vblank and page flip completed)
Result: What's on screen is at least 6ms old, but that's an improvement of 10ms.
Result: Many surfaces have time to provide a fresh frame, composited into the new frame (bug 1447896 fixed).
Andreas Pokorny (andreas-pokorny) wrote:
I agree that we should schedule the frame posting as late as possible.
But why sleep at the end of post() instead of prior to it?
Instead I would have thought:
while (running)
{
    auto sleepTime = estimateTimeNeeded();
    sleepIfPossible(sleepTime);
    TimeSpan span;
    compositeFrame();
    post();
    updateEstimator(span);
}
Alberto Aguirre (albaguirre) wrote:
I suppose this is why newer versions of android schedule on vsync ticks instead.
Kevin DuBois (kdub) wrote:
> Before anyone says it, I think sleeping sounds like a terrible hack. I have
> questioned myself for months over this. However, the gain of 10ms less lag
> consistently across all devices is a big win worth living with some ugliness
> for. You can see the improvement on a phone in something as simple as swiping
> across the screen.
Eh, I think it's tricky to improve our latency past the vsync period amount, and given the limitations of the vsync system, sleeping is okay to get us a fresher frame. The vsync period is the minimum we can hope to guarantee, but with some guessing about the GPU load and sleeping based on that, we can push the latency down lower than the vsync period. Now, this means we have to make some good guesses as to how much to sleep based on the GPU load, though, as bad guesses can also increase latency.
In the meantime, why is this done on the mg{a,m}
Also, can we use the mir::options instead of getenv?
I'm still chewing on if this will play nice with the android drivers. (which my initial intuition is that it should, but worth thinking through)
Kevin DuBois (kdub) wrote:
@Alan's needs discussion
I think that measurements are becoming a more and more pressing need as we optimize the system further. We shouldn't increase latency once we have landed latency-reducing features, and we don't have a well-known way to measure this.
Also, we're getting to the tricky part where our latency optimizations are dynamic... they're good in some cases, and not so good in others. A tricky problem like optimizing latency really needs a tool to measure with a variety of interesting loads. This lets us see what tradeoffs we're making as we change the code, and guard against true regressions, where we accidentally increase latency across the board.
Kevin DuBois (kdub) wrote:
Hah, no sooner had I said that than I noticed Alexandros's https:/
Daniel van Vugt (vanvugt) wrote:
Those are reasonable questions but are issues I have already encountered and dealt with, so need some explaining...
"Why sleep at the end of post() instead of prior to it?"
I did sleep prior to it, in the first prototype. It was unfortunately complex and only ever prototyped for Mesa. It was disliked by kdub, and I also realised it failed to address the issue of snapshotting the scene late (buffer contents were younger but attributes like window position were not). Sleeping at the end of the frame guarantees scene snapshotting (surface attributes and buffer contents) is never fixed too early.
"The vsync period is the minimum we can hope to guarantee"
False. Proof 1: This branch. Proof 2: Try it with a framedropping client like "mir_demo_
"Now, this means we have to make some good guesses as to how much to sleep, based on the GPU load though, as bad guesses can also increase latency."
Only partly true. I have explicitly only applied the optimization to bypass/overlays so that there is no GPU GL render load to contend with. And to make sure the timing is right I have spent two weeks testing all our devices, and the slowest netbooks I can find.
"Also, we're getting to the tricky part where our latency optimizations are dynamic"
It's not dynamic on Android at least. I discussed that in the description.
Daniel van Vugt (vanvugt) wrote:
"In the meantime, why is this done on the mg{a,m}
Again, to ensure safety and that we are never guessing about render times, the optimization is only applied when there is no render time -- bypass/overlay mode. This is good for Unity8 because it's always bypassed/overlayed.
Daniel van Vugt (vanvugt) wrote:
Also remember "6ms latency" means from the scene snapshot to display. For swap-interval-1 clients that are double buffered you need to add another 16ms to that (or 33ms for triple). But you can avoid the 16-32ms addition by using swap interval zero, which is why I've recommended trying "mir_demo_
This branch only shortens the top layer (system compositor) in my original diagram. While using a swap interval zero shortens the middle/bottom layer:
https:/
The eye and the brain can easily perceive latency lower than 16ms and you quickly get used to it (and _want_ it lower than 16ms). Low enough latency and you start to imagine a fixed line or bar between your mouse and the screen. And then you never want it any other way...
Daniel van Vugt (vanvugt) wrote:
Forgot to mention: On desktop it's essentially the hardware cursor we're trying to catch up to here so that when moving things on screen, those things update fast enough to stick to the hardware cursor (which is sampled from the kernel super-late for minimal latency).
This branch represents the first time we've actually been able to display frames fast enough to stick to the hardware cursor (with a slight margin of error of a few milliseconds).
Andreas Pokorny (andreas-pokorny) wrote:
> "In the meantime, why is this done on the mg{a,m}
> Again, to ensure safety and that we are never guessing about render times, the
> optimization is only applied when there is no render time -- bypass/overlay
> mode. This is good for Unity8 because it's always bypassed/overlayed.
Ok, I complained about the "at the end of post()" part mostly because the decision is made inside DisplayBuffer instead of inside the rendering loop, just as kdub stated. I think this behavior also makes sense if you have to render and the render time is low enough. Of course then one has to deal with nesting. For a nested-bypass to make sense it should submit its updates as soon as possible (or as late as possible while ensuring that the host compositor still has enough time to redraw or bypass/overlay).
Kevin DuBois (kdub) wrote:
>
> "The vsync period is the minimum we can hope to guarantee"
> False. Proof 1: This branch. Proof 2: Try it with a framedropping client like
> "mir_demo_
> and we can visibly do better than one frame latency using this branch.
>
I think we're on the same page here... In a vsync system, we can /guarantee/ that the minimum is one vsync period, but by delaying like in this branch we can reliably get smaller-
> "Now, this means we have to make some good guesses as to how much to sleep,
> based on the GPU load though, as bad guesses can also increase latency."
> Only partly true. I have explicitly only applied the optimization to
> bypass/overlays so that there is no GPU GL render load to contend with. And to
> make sure the timing is right I have spent two weeks testing all our devices,
> and the slowest netbooks I can find.
We can't make sure that there's no GPU render load, as the rest of the system could be drawing and waiting on Buffers. An example is unity8/usc... USC knows it hasn't drawn, but doesn't know if U8 or U8-clients have used the gpu.
>
> "Also, we're getting to the tricky part where our latency optimizations are
> dynamic"
> It's not dynamic on Android at least. I discussed that in the description.
Right, they aren't in this branch. Can't sleeping eat up time that the clients (or clients-of-nested) would want to draw in? So USC doesn't miss the deadline, but the clients might start to. Having a feedback system that adjusts the amount of sleep seems like it could be helpful.
Daniel van Vugt (vanvugt) wrote:
On desktop (which I understand better than Android) GPU render load is irrelevant. Because the display hardware we're doing page flipping with does not use a GPU. I was doing this stuff years before GPUs existed. They are functionally separate even if they exist in the same silicon these days.
On Android I suspect the driver (which is tightly woven with OpenGL) might be affected. But I have spent weeks testing many devices making sure it is not affected.
"Can't sleeping eat up time that the clients (or clients-of-nested) would want to draw in?"
Absolutely not. Clients are separate processes that are unaffected by the server sleeping in a place where it's not holding any shared resources. Only GL compositing (the server) would be affected but I've made sure that's impossible by only implementing the sleep on bypass/overlay code paths where there is no GL compositing. This is mentioned a few times in this proposal as well as the original Google doc.
"Having a feedback system that adjusts the amount of sleep seems like it could be helpful."
I thought so too and the first prototype of this used feedback, for a couple of months. Unfortunately it was unreliable. Once you notice a hiccup and back off you can't (shouldn't) un-back-off. And so if that hiccup was just a hiccup (e.g. bug 1452579) then you've lost all benefit, effectively turning the optimization off. A feedback system is also deeply platform-specific and I only ever got it going on Mesa (lp:~vanvugt/mir/old-predictive-bypass), but it was unacceptably unreliable. This proposal however avoids the detection unreliability problem and brings the optimization to Android too.
Kevin DuBois (kdub) wrote:
@android
The part that I'm worried about is that on android with surfaceflinger, the driver schedules its own compositions, and tinkering with that timing (as we've done to adapt the android code to the way the mir compositors work) can expose some bugs in the driver that we have to fix (case in point: krillin). I guess though that's just a past scar that might not be around in the future.
Also, for what it's worth, most android drivers will look like mesa and just post to the display controller, although your intuition is right that they can activate different parts of the GPU sometimes (like, to scale or crop if the display controller can't handle this).
@sleeping
I'll think a bit more about this... It seems like it can't truly be unaffected though, as we are holding the resources a bit differently and shifting the timing of the system. It is getting difficult to tell whether a tradeoff is good or bad, as the system is complex enough that we need tools to inspect the data that different scenarios create. I trust that in your manual study of the effects of the change, things that were tested improved. In situations where changing how long we're sleeping can change the system quite a bit, introspection with tools would help us figure out what tradeoffs are being made.
@feedback system
If we have a new device to support, it seems strange to have to manually check and adjust the amount we're sleeping in the loop. A feedback system would alleviate this, but if that was too unreliable, then it seems like we're still missing enough data to make a good decision.
Daniel van Vugt (vanvugt) wrote:
If it makes people feel better I can propose the Mesa implementation first (which is safer in theory) and Android second. They're completely independent. But in my mind both are finished.
As for sleeping and safety, I added the environment variables late in the game as a safety net (and also as a testing tool) so you can evaluate the benefits in production. Especially if you suspect that turning off predictive bypass will help you; you can test that theory with MIR_DRIVER_
Yes, it's possible future devices will require additional tweaks to avoid skipping frames, but unlikely. I find it difficult to believe we'll encounter any device that is slower than arale (which seems to require 6ms just to post a frame without any GL rendering).
Daniel van Vugt (vanvugt) wrote:
Note: If you are thinking about doing any testing on desktop, remember to add:
usbhid.
to your kernel command line.
That will reduce your mouse latency from 8ms to 1ms, which becomes quite noticeable when you're on the bleeding edge of performance. Having your mouse configured for lower latency also means it's easier to see improvements in Mir's performance (like this branch), without what you see being clouded by high-latency kernel events.
Alan Griffiths (alan-griffiths) wrote:
*Needs Discussion*
Like Kevin I think this is a special case of "schedule compositing as late as possible (but no later)" which logically belongs in the compositor. Detecting a special case in the drivers and delaying the return from post() (as in this MP) provides this effect from outside the compositor.
I think it would be better preparation for handling further cases to provide timing feedback to the compositor and allow the scheduling decision to be made there. The compositor also has the opportunity to detect scene changes (e.g. surfaces switching to/from fullscreen) that affect the applicability of this logic.
In any case, we really need some automated benchmarking scenarios to track the effect of changes like this.
Kevin DuBois (kdub) wrote:
> I think it would be better preparation for handling further cases to provide
> timing feedback to the compositor and allowed the scheduling decision to be
> made there. The compositor also has the opportunity to detect scene changes
> (e.g. surfaces switching to/from fullscreen) that affect the applicability of
> this logic.
Having a role in the scheduling is how the android platform natively works (surfaceflinger). The display hardware starts the event loop that does the composition and posting. So having the platform inform the MTC about appropriate times would be good to solve this problem of how to drive sub-vsync-period latencies. It would also avoid problems like what we saw on krillin bringup, because the driver is operating along a more well-tested path.
Kevin DuBois (kdub) wrote:
@tweaking: 6ms seems reasonable, but if a new device bringup goes wrong, it'll be painful to have to explain to non-Mir bringup folks what is being tweaked, why this number has to be tuned, and how to tune it manually (and it has to be recompiled to be changed, which bringup teams often aren't set up to do).
So, it seems best to tie this to the MTC via an active scheduling request, or a set of parameters that the MTC queries from the driver. Failing that, we have to have the options plumbed up properly (ie, the standard mir::options stuff), so that it's changeable without recompiling.
Daniel van Vugt (vanvugt) wrote:
I did consider and wanted to do it in the compositor too. However two major factors make that impractical:
1. In a single algorithm you need to (a) know that bypass/overlays are in use and (b) sleep at the _end_ of the post function. That used to be easy but the introduction of posting via DisplayGroup outside of DisplayBufferCo
2. Ideally the algorithm should vary timing based on the refresh rate of the display, and also disable itself in clone mode. I do all that for Mesa already, but it would be impossible to do in the compositor logic because you have know knowledge of the output devices there. Only in the DisplayBuffer.
Obviously a portable solution in the compositor would be nicer (and less effort). But there are solid reasons why it hasn't been done that way.
Daniel van Vugt (vanvugt) wrote:
* it would be impossible to do in the compositor logic because you have "no" knowledge of the output devices there
Alan Griffiths (alan-griffiths) wrote:
> I did consider and wanted to do it in the compositor too. However two major
> factors make that impractical:
It sounds like all three of us think the right place is the compositor.
At a minimum we ought to think about what change is needed to make that possible. (You're ahead of me here as I've not worked it through in any detail.)
> 1. In a single algorithm you need to (a) know that bypass/overlays are in
> use and (b) sleep at the _end_ of the post function. That used to be easy but
> the introduction of posting via DisplayGroup outside of
> DisplayBufferCo
You keep saying that we need to sleep *at the end of post()*; am I missing something? Or is that just a proxy for delaying the start of the next composite cycle for that output?
The reason I ask is that compositing is already scheduled based on triggers from scene changes/buffer posts and that logic seems the right place to incorporate any delay.
> 2. Ideally the algorithm should vary timing based on the refresh rate of the
> display, and also disable itself in clone mode. I do all that for Mesa
> already, but it would be impossible to do in the compositor logic because you
> have know knowledge of the output devices there. Only in the DisplayBuffer.
Yes, we've discussed that some timing information would need to be fed into compositor scheduling. But why can't that be provided by the DisplayBuffer?
Daniel van Vugt (vanvugt) wrote:
Moving things around:
I have tried. Undoing or modifying the DisplayGroup work is a very large effort, as it was a very large effort when introduced. On a smaller scale I have proposed a branch that simply ignored the DisplayGroup approach for Mesa where it is irrelevant but kdub didn't like it because it made the architecture inconsistent with Android. Although they are already quite different for platform-specific reasons anyway.
End of the post function:
Keep in mind the "prediction" is all about predicting that the next frame will be bypassed. That's the core concept, and the prediction is extremely reliable because we don't switch in/out of bypass/overlays that often compared to the total number of frames displayed. Yes, you could emit information from DisplayBuffer about how the previous frame was rendered and what the refresh rate is and whether we are cloning, but I feel that's much less clean than what's proposed here. Because all that would require new interfaces and using them to expose information to the caller that we otherwise don't need to expose.
Although... one could distil all that information into a smaller piece of information like a "recommended sleep duration" passed from DisplayBuffer to the Compositor. I could try that but again, it's just the same algorithm implemented using more lines of code and less information hiding than what's proposed here. Although sleeping, rather than reporting a recommended sleep, is arguably less ideal.
Using the existing compositor with info from DisplayBuffer:
See above.
Daniel van Vugt (vanvugt) wrote:
Now working on a predictive-
Daniel van Vugt (vanvugt) wrote:
You may prefer this branch instead:
https:/
Kevin DuBois (kdub) wrote:
/me prefers other branch
Preview Diff
=== modified file 'src/platforms/android/server/hwc_device.cpp'
--- src/platforms/android/server/hwc_device.cpp 2015-05-19 17:17:39 +0000
+++ src/platforms/android/server/hwc_device.cpp 2015-06-16 07:34:11 +0000
@@ -27,6 +27,9 @@
#include "mir/raii.h"
#include <limits>
#include <algorithm>
+#include <chrono>
+#include <thread>
+#include <mutex>

namespace mg = mir::graphics;
namespace mga=mir::graphics::android;
@@ -42,6 +45,20 @@
};
return (renderable.alpha() < 1.0f - tolerance);
}
+
+std::once_flag checked_environment;
+std::chrono::milliseconds force_bypass_sleep(-1);
+
+void check_environment()
+{
+ std::call_once(checked_environment, []()
+ {
+ char const* env = getenv("MIR_DRIVER_FORCE_BYPASS_SLEEP");
+ if (env != NULL)
+ force_bypass_sleep = std::chrono::milliseconds(atoi(env));
+ });
+}
+
}

bool mga::HwcDevice::compatible_renderlist(RenderableList const& list)
@@ -65,6 +82,7 @@
mga::HwcDevice::HwcDevice(std::shared_ptr<HwcWrapper> const& hwc_wrapper) :
hwc_wrapper(hwc_wrapper)
{
+ check_environment();
}

bool mga::HwcDevice::buffer_is_onscreen(mg::Buffer const& buffer) const
@@ -97,6 +115,8 @@

hwc_wrapper->prepare(lists);

+ bool purely_overlays = true;
+
for (auto& content : contents)
{
if (content.list.needs_swapbuffers())
@@ -111,6 +131,7 @@
content.compositor.render(std::move(rejected_renderables), content.list_offset, content.context);
content.list.setup_fb(content.context.last_rendered_buffer());
content.list.swap_occurred();
+ purely_overlays = false;
}

//setup overlays
@@ -136,6 +157,30 @@

mir::Fd retire_fd(content.list.retirement_fence());
}
+
+ if (purely_overlays)
+ {
+ /*
+ * "Predictive bypass": If we're just displaying pure overlays on this
+ * frame then it's very likely the same will be true on the next frame.
+ * In that case we don't need to spare any time for GL rendering.
+ * Instead just sleep for a significant portion of the frame. This will
+ * ensure the scene snapshot that goes into the next frame is
+ * younger when it hits the screen, and so has visibly lower latency.
+ * This has extra benefit for the overlays of software cursors and
+ * touchpoints as the client surface underneath is then given enough
+ * time to update itself and better match (ie. "stick to") the cursor
+ * above it.
+ */
+
+ // Test results (how long can we sleep for without missing a frame?):
+ // arale: 10ms (TODO: Find out why arale is so slow)
+ // mako: 15ms
+ // krillin: 11ms (to be fair, the display is 67Hz)
+ using namespace std;
+ auto delay = force_bypass_sleep >= 0ms ? force_bypass_sleep : 10ms;
+ std::this_thread::sleep_for(delay);
+ }
}

void mga::HwcDevice::content_cleared()

=== modified file 'src/platforms/mesa/server/kms/display_buffer.cpp'
--- src/platforms/mesa/server/kms/display_buffer.cpp 2015-06-03 12:07:08 +0000
+++ src/platforms/mesa/server/kms/display_buffer.cpp 2015-06-16 07:34:11 +0000
@@ -28,6 +28,9 @@
#include <GLES2/gl2.h>

#include <stdexcept>
+#include <chrono>
+#include <thread>
+#include <mutex>

namespace mgm = mir::graphics::mesa;
namespace geom = mir::geometry;
@@ -85,6 +88,19 @@
BOOST_THROW_EXCEPTION(std::runtime_error("GLES2 implementation doesn't support GL_OES_EGL_image extension"));
}

+std::once_flag checked_environment;
+std::chrono::milliseconds force_bypass_sleep(-1);
+
+void check_environment()
+{
+ std::call_once(checked_environment, []()
+ {
+ char const* env = getenv("MIR_DRIVER_FORCE_BYPASS_SLEEP");
+ if (env != NULL)
+ force_bypass_sleep = std::chrono::milliseconds(atoi(env));
+ });
+}
+
}

mgm::DisplayBuffer::DisplayBuffer(
@@ -109,6 +125,8 @@
needs_set_crtc{false},
page_flips_pending{false}
{
+ check_environment();
+
uint32_t area_width = area.size.width.as_uint32_t();
uint32_t area_height = area.size.height.as_uint32_t();
if (rotation == mir_orientation_left || rotation == mir_orientation_right)
@@ -282,6 +300,11 @@
needs_set_crtc = false;
}

+ using namespace std; // For operator""ms()
+
+ // Predicted worst case render time for the next frame...
+ auto predicted_render_time = 50ms;
+
if (bypass_buf)
{
/*
@@ -295,6 +318,10 @@
*/
scheduled_bypass_frame = bypass_buf;
wait_for_page_flip();
+
+ // It's very likely the next frame will be bypassed like this one so
+ // we only need time for kernel page flip scheduling...
+ predicted_render_time = 5ms;
}
else
{
@@ -306,11 +333,46 @@
scheduled_composite_frame = bufobj;
if (outputs.size() == 1)
wait_for_page_flip();
+
+ /*
+ * TODO: If you're optimistic about your GPU performance and/or
+ * measure it carefully you may wish to set predicted_render_time
+ * to a lower value here for lower latency.
+ *
+ *predicted_render_time = 9ms; // e.g. about the same as Weston
+ */
}

// Buffer lifetimes are managed exclusively by scheduled*/visible* now
bypass_buf = nullptr;
bypass_bufobj = nullptr;
+
+ /*
+ * Introducing "predictive bypass":
+ * If the current frame is bypassed then there is an extremely high
+ * likelihood the next one will be too. If it is then we can reduce
+ * the latency of that next frame (make the compositor sample the
+ * scene later) by almost a whole frame. Because we don't need to
+ * spare any time for rendering. Just milliseconds at most for the
+ * kernel to get around to scheduling a pageflip. Note: this prediction
+ * only works for non-clone modes as the full set of outputs must be
+ * perfectly in phase and we only know how to guarantee that with one.
+ */
+ if (outputs.size() == 1)
+ {
+ if (force_bypass_sleep >= 0ms)
+ {
+ std::this_thread::sleep_for(force_bypass_sleep);
+ }
+ else
+ {
+ auto const& output = outputs.front();
+ auto const min_frame_interval = 1000ms / output->max_refresh_rate();
+ auto const delay = min_frame_interval - predicted_render_time;
+ if (delay > 0ms)
+ std::this_thread::sleep_for(delay);
+ }
+ }
}

mgm::BufferObject* mgm::DisplayBuffer::get_front_buffer_object()

=== modified file 'src/platforms/mesa/server/kms/kms_output.h'
--- src/platforms/mesa/server/kms/kms_output.h 2015-05-28 21:16:37 +0000
+++ src/platforms/mesa/server/kms/kms_output.h 2015-06-16 07:34:11 +0000
@@ -43,6 +43,14 @@
virtual void configure(geometry::Displacement fb_offset, size_t kms_mode_index) = 0;
virtual geometry::Size size() const = 0;

+ /**
+ * Approximate maximum refresh rate of this output to within 1Hz.
+ * Typically the rate is fixed (e.g. 60Hz) but it may also be variable as
+ * in Nvidia G-Sync/AMD FreeSync/VESA Adaptive Sync. So this function
+ * returns the maximum rate to expect.
+ */
+ virtual int max_refresh_rate() const = 0;
+
virtual bool set_crtc(uint32_t fb_id) = 0;
virtual void clear_crtc() = 0;
virtual bool schedule_page_flip(uint32_t fb_id) = 0;

=== modified file 'src/platforms/mesa/server/kms/real_kms_output.cpp'
--- src/platforms/mesa/server/kms/real_kms_output.cpp 2015-05-28 21:16:37 +0000
+++ src/platforms/mesa/server/kms/real_kms_output.cpp 2015-06-16 07:34:11 +0000
@@ -188,6 +188,14 @@
return {mode.hdisplay, mode.vdisplay};
}

+int mgm::RealKMSOutput::max_refresh_rate() const
+{
+ // TODO: In future when DRM exposes FreeSync/Adaptive Sync/G-Sync info
234 | + // this value may be calculated differently. |
235 | + drmModeModeInfo const& current_mode = connector->modes[mode_index]; |
236 | + return current_mode.vrefresh; |
237 | +} |
238 | + |
239 | void mgm::RealKMSOutput::configure(geom::Displacement offset, size_t kms_mode_index) |
240 | { |
241 | fb_offset = offset; |
242 | |
243 | === modified file 'src/platforms/mesa/server/kms/real_kms_output.h' |
244 | --- src/platforms/mesa/server/kms/real_kms_output.h 2015-05-28 21:16:37 +0000 |
245 | +++ src/platforms/mesa/server/kms/real_kms_output.h 2015-06-16 07:34:11 +0000 |
246 | @@ -44,6 +44,7 @@ |
247 | void reset(); |
248 | void configure(geometry::Displacement fb_offset, size_t kms_mode_index); |
249 | geometry::Size size() const; |
250 | + int max_refresh_rate() const; |
251 | |
252 | bool set_crtc(uint32_t fb_id); |
253 | void clear_crtc(); |
254 | |
255 | === modified file 'tests/mir_test_doubles/mock_drm.cpp' |
256 | --- tests/mir_test_doubles/mock_drm.cpp 2015-01-21 07:34:50 +0000 |
257 | +++ tests/mir_test_doubles/mock_drm.cpp 2015-06-16 07:34:11 +0000 |
258 | @@ -216,6 +216,11 @@ |
259 | mode.clock = clock; |
260 | mode.htotal = htotal; |
261 | mode.vtotal = vtotal; |
262 | + |
263 | + uint32_t total = htotal; |
264 | + total *= vtotal; // extend to 32 bits |
265 | + mode.vrefresh = clock * 1000UL / total; |
266 | + |
267 | if (preferred) |
268 | mode.type |= DRM_MODE_TYPE_PREFERRED; |
269 | |
270 | |
271 | === modified file 'tests/unit-tests/graphics/android/test_hwc_device.cpp' |
272 | --- tests/unit-tests/graphics/android/test_hwc_device.cpp 2015-05-29 17:53:49 +0000 |
273 | +++ tests/unit-tests/graphics/android/test_hwc_device.cpp 2015-06-16 07:34:11 +0000 |
274 | @@ -439,6 +439,29 @@ |
275 | EXPECT_THAT(stub_buffer1.use_count(), Eq(use_count_before)); |
276 | } |
277 | |
278 | +TEST_F(HwcDevice, overlays_are_throttled_per_predictive_bypass) |
279 | +{ |
280 | + using namespace testing; |
281 | + EXPECT_CALL(*mock_device, prepare(_)) |
282 | + .WillRepeatedly(Invoke(set_all_layers_to_overlay)); |
283 | + |
284 | + mga::HwcDevice device(mock_device); |
285 | + |
286 | + mga::LayerList list(layer_adapter, {stub_renderable1}, {0,0}); |
287 | + mga::DisplayContents content{primary, list, offset, stub_context, |
288 | + stub_compositor}; |
289 | + |
290 | + for (int frame = 0; frame < 5; ++frame) |
291 | + { |
292 | + using namespace std::chrono; |
293 | + auto start = system_clock::now(); |
294 | + device.commit({content}); |
295 | + auto duration = system_clock::now() - start; |
296 | + // Duration cast to a simple type so that test failures are readable |
297 | + ASSERT_THAT(duration_cast<milliseconds>(duration).count(), Ge(8)); |
298 | + } |
299 | +} |
300 | + |
301 | TEST_F(HwcDevice, does_not_set_acquirefences_when_it_has_set_them_previously_without_update) |
302 | { |
303 | using namespace testing; |
304 | |
305 | === modified file 'tests/unit-tests/graphics/mesa/kms/mock_kms_output.h' |
306 | --- tests/unit-tests/graphics/mesa/kms/mock_kms_output.h 2015-06-03 15:57:44 +0000 |
307 | +++ tests/unit-tests/graphics/mesa/kms/mock_kms_output.h 2015-06-16 07:34:11 +0000 |
308 | @@ -32,6 +32,7 @@ |
309 | MOCK_METHOD0(reset, void()); |
310 | MOCK_METHOD2(configure, void(geometry::Displacement, size_t)); |
311 | MOCK_CONST_METHOD0(size, geometry::Size()); |
312 | + MOCK_CONST_METHOD0(max_refresh_rate, int()); |
313 | |
314 | MOCK_METHOD1(set_crtc, bool(uint32_t)); |
315 | MOCK_METHOD0(clear_crtc, void()); |
316 | |
317 | === modified file 'tests/unit-tests/graphics/mesa/kms/test_display_buffer.cpp' |
318 | --- tests/unit-tests/graphics/mesa/kms/test_display_buffer.cpp 2015-06-04 23:48:38 +0000 |
319 | +++ tests/unit-tests/graphics/mesa/kms/test_display_buffer.cpp 2015-06-16 07:34:11 +0000 |
320 | @@ -46,6 +46,8 @@ |
321 | class MesaDisplayBufferTest : public Test |
322 | { |
323 | public: |
324 | + int const mock_refresh_rate = 60; |
325 | + |
326 | MesaDisplayBufferTest() |
327 | : mock_bypassable_buffer{std::make_shared<NiceMock<MockBuffer>>()} |
328 | , fake_bypassable_renderable{ |
329 | @@ -78,6 +80,8 @@ |
330 | .WillByDefault(Return(true)); |
331 | ON_CALL(*mock_kms_output, schedule_page_flip(_)) |
332 | .WillByDefault(Return(true)); |
333 | + ON_CALL(*mock_kms_output, max_refresh_rate()) |
334 | + .WillByDefault(Return(mock_refresh_rate)); |
335 | |
336 | ON_CALL(*mock_bypassable_buffer, size()) |
337 | .WillByDefault(Return(display_area.size)); |
338 | @@ -154,6 +158,39 @@ |
339 | EXPECT_EQ(original_count, mock_bypassable_buffer.use_count()); |
340 | } |
341 | |
342 | +TEST_F(MesaDisplayBufferTest, predictive_bypass_is_throttled) |
343 | +{ |
344 | + graphics::mesa::DisplayBuffer db( |
345 | + create_platform(), |
346 | + null_display_report(), |
347 | + {mock_kms_output}, |
348 | + nullptr, |
349 | + display_area, |
350 | + mir_orientation_normal, |
351 | + gl_config, |
352 | + mock_egl.fake_egl_context); |
353 | + |
354 | + /* |
355 | + * Test that predictive bypass keeps post() from returning for at least |
356 | + * half the frame time. This is a reliable test regardless of system load. |
357 | + * We would also like to test the converse, but that would be unreliable... |
358 | + */ |
359 | + for (int frame = 0; frame < 5; ++frame) |
360 | + { |
361 | + ASSERT_TRUE(db.post_renderables_if_optimizable(bypassable_list)); |
362 | + |
363 | + using namespace std::chrono; |
364 | + auto start = system_clock::now(); |
365 | + db.post(); |
366 | + auto duration = system_clock::now() - start; |
367 | + |
368 | + // Duration cast to a simple type so that test failures are readable |
369 | + int milliseconds_per_frame = 1000 / mock_refresh_rate; |
370 | + ASSERT_THAT(duration_cast<milliseconds>(duration).count(), |
371 | + Ge(milliseconds_per_frame/2)); |
372 | + } |
373 | +} |
374 | + |
375 | TEST_F(MesaDisplayBufferTest, bypass_buffer_only_referenced_once_by_db) |
376 | { |
377 | graphics::mesa::DisplayBuffer db( |
FAILED: Continuous integration, rev:2651
http://jenkins.qa.ubuntu.com/job/mir-ci/4094/
Executed test runs:
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-android-vivid-i386-build/2840/console
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-clang-wily-amd64-build/353
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-vivid-touch/2788/console
SUCCESS: http://jenkins.qa.ubuntu.com/job/mir-wily-amd64-ci/250
deb: http://jenkins.qa.ubuntu.com/job/mir-wily-amd64-ci/250/artifact/work/output/*zip*/output.zip
FAILURE: http://jenkins.qa.ubuntu.com/job/mir-mediumtests-builder-vivid-armhf/2788/console
Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/mir-ci/4094/rebuild