Mir

Merge lp:~vanvugt/mir/reverse-scene-traversal into lp:mir

Proposed by Daniel van Vugt
Status: Merged
Approved by: Alexandros Frantzis
Approved revision: no longer in the source branch.
Merged at revision: 1138
Proposed branch: lp:~vanvugt/mir/reverse-scene-traversal
Merge into: lp:mir
Diff against target: 187 lines (+113/-0)
6 files modified
include/server/mir/compositor/scene.h (+5/-0)
include/server/mir/surfaces/surface_stack.h (+2/-0)
src/server/surfaces/surface_stack.cpp (+18/-0)
tests/unit-tests/compositor/test_default_display_buffer_compositor.cpp (+4/-0)
tests/unit-tests/compositor/test_multi_threaded_compositor.cpp (+2/-0)
tests/unit-tests/surfaces/test_surface_stack.cpp (+82/-0)
To merge this branch: bzr merge lp:~vanvugt/mir/reverse-scene-traversal
Reviewer                          Review Type               Status
Alexandros Frantzis (community)                             Approve
Alan Griffiths                                              Abstain
Kevin DuBois (community)                                    Approve
PS Jenkins bot (community)        continuous-integration    Approve
Review via email: mp+190315@code.launchpad.net

Commit message

Add support for traversing the Scene from front surface to back.
This is required for occlusion detection at least (coming soon).

Description of the change

Prerequisite for fixing bug 1227739.

Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

43 + for ( auto layer = layers_by_depth.rbegin()
44 + ; layer != layers_by_depth.rend()
45 + ; ++layer
46 + )

Although I don't find anything in our coding standard about this, we tend to break lines after punctuation marks.

Use std::for_each() instead?

49 + for (auto it = surfaces.rbegin(); it != surfaces.rend(); ++it)
50 + {

Use std::for_each() instead?

review: Needs Fixing
Revision history for this message
Daniel van Vugt (vanvugt) wrote :

Fixed semicolons.

I disagree with for_each() here because it would force me to either add more functions or use lambdas. Both would reduce readability (and efficiency) a bit.
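
For reference, the std::for_each() version of the inner loop would look roughly like this (an illustrative sketch only, assuming the element type is std::shared_ptr<Surface> as in the surrounding code, with <algorithm> included):

    std::for_each(surfaces.rbegin(), surfaces.rend(),
        [&](std::shared_ptr<Surface> const& surface)
        {
            mc::CompositingCriteria& info = *surface->compositing_criteria();
            ms::BufferStream& stream = *surface->buffer_stream();
            if (filter(info)) op(info, stream);
        });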

Revision history for this message
Alan Griffiths (alan-griffiths) wrote :

*Needs discussion*

I think we need a serious rethink of the interface Scene provides. It is evolving into a bad case of coupling.

lock()/unlock() are a problem because they imply locking the whole of the data model, not just the part relevant to the current operation.

Locking the whole data model is problematic if, for example, you want to composite different screens on different threads (which we attempt).

This MP adds forward and reverse traversals and expects their results to be mutually consistent, which requires holding a lock across the connected traversals. That will make it difficult to reduce the scope of locking to just the affected parts of the data.

I know at present there's no way to lock just a subset of the current model - but this interface makes that hard to change. The lock()/unlock() were originally a hack around the lack of a post-traversal call on OperatorForScene.

I think we want something like:

  auto surfaces = scene->select(filter);

  for (auto x : surfaces) ...

and/or

  for (auto it = surfaces.rbegin(); it != surfaces.rend(); ++it)

With "~surfaces" releasing the lock.
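
A rough sketch of the kind of thing I mean (all names here are hypothetical, not an existing Mir interface):

  // Hypothetical sketch: a selection that keeps the scene locked for its
  // lifetime, so connected forward and reverse traversals stay consistent.
  // Needs <memory>, <mutex>, <utility> and <vector>.
  class SceneSelection
  {
  public:
      typedef std::vector<std::shared_ptr<Surface>> Surfaces;

      SceneSelection(std::recursive_mutex& scene_guard, Surfaces selected)
          : lock(scene_guard), surfaces(std::move(selected)) {}

      Surfaces::const_iterator begin() const { return surfaces.begin(); }
      Surfaces::const_iterator end() const { return surfaces.end(); }
      Surfaces::const_reverse_iterator rbegin() const { return surfaces.rbegin(); }
      Surfaces::const_reverse_iterator rend() const { return surfaces.rend(); }

  private:
      std::unique_lock<std::recursive_mutex> lock;  // "~surfaces" releases this
      Surfaces surfaces;
  };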

review: Needs Information
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :
review: Approve (continuous-integration)
Revision history for this message
Daniel van Vugt (vanvugt) wrote :

Alan,

As it happens, I too decided just this week I would like to see:
    auto surfaces = scene->select(filter);
or
    auto surfaces = scene->snapshot();

However, I am sticking with the current design for now, in an effort to make small incremental steps that are still potentially saucy-compatible. My immediate goal is to resolve bug 1227739 in, or soon after, the saucy release, and that means using existing interfaces.

Revision history for this message
Daniel van Vugt (vanvugt) wrote :

That said, "filtering" can be complex, requiring multiple passes in varying directions, so I don't endorse:
    auto surfaces = scene->select(filter);
That would break down for what I have planned. I would however endorse something like:
    auto renderables = scene->snapshot();
With a snapshot, the locking becomes atomic and non-blocking, and it allows arbitrary filtering algorithms, including culling of non-visible renderables before the list is sent to the renderer.
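
Roughly what I have in mind, as a hypothetical sketch only (not the actual implementation, and returning surfaces here rather than a dedicated renderable type):

    // Hypothetical: copy the list under a short-lived lock, so nothing
    // stays locked while the compositor works on the returned copy.
    std::vector<std::shared_ptr<ms::Surface>> ms::SurfaceStack::snapshot()
    {
        std::lock_guard<std::recursive_mutex> lg(guard);
        std::vector<std::shared_ptr<ms::Surface>> copy;
        for (auto const& layer : layers_by_depth)
            for (auto const& surface : layer.second)
                copy.push_back(surface);
        return copy;  // callers may filter, cull or reverse this freely
    }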

Revision history for this message
Kevin DuBois (kdub) wrote :

So we always traverse layers forward, and then have a forward/reverse for surfaces? I agree with Alan's "Needs Information", if we have enough time. If this needs to get in before the FFE, though, I guess my review is an approve. Hope we can reinvent the scenegraph soon though.

review: Approve
Revision history for this message
Alan Griffiths (alan-griffiths) wrote :

OK, we all seem to agree that this adds to the technical debt we'll need to sort out (and that we have some tentative agreement on the right approach to doing that).

review: Abstain
Revision history for this message
Alexandros Frantzis (afrantzis) wrote :

> If this needs to get in before the FFE, though, I guess my review is an approve.
> Hope we can reinvent the scenegraph soon though.

+1

review: Approve

Preview Diff

1=== modified file 'include/server/mir/compositor/scene.h'
2--- include/server/mir/compositor/scene.h 2013-08-28 03:41:48 +0000
3+++ include/server/mir/compositor/scene.h 2013-10-10 09:44:21 +0000
4@@ -66,8 +66,13 @@
5 public:
6 virtual ~Scene() {}
7
8+ // Back to front; normal rendering order
9 virtual void for_each_if(FilterForScene& filter, OperatorForScene& op) = 0;
10
11+ // Front to back; as used when scanning for occlusions
12+ virtual void reverse_for_each_if(FilterForScene& filter,
13+ OperatorForScene& op) = 0;
14+
15 /**
16 * Sets a callback to be called whenever the state of the
17 * Scene changes.
18
19=== modified file 'include/server/mir/surfaces/surface_stack.h'
20--- include/server/mir/surfaces/surface_stack.h 2013-08-28 03:41:48 +0000
21+++ include/server/mir/surfaces/surface_stack.h 2013-10-10 09:44:21 +0000
22@@ -67,6 +67,8 @@
23
24 // From Scene
25 virtual void for_each_if(compositor::FilterForScene &filter, compositor::OperatorForScene &op);
26+ virtual void reverse_for_each_if(compositor::FilterForScene& filter,
27+ compositor::OperatorForScene& op);
28 virtual void set_change_callback(std::function<void()> const& f);
29
30 // From InputTargets
31
32=== modified file 'src/server/surfaces/surface_stack.cpp'
33--- src/server/surfaces/surface_stack.cpp 2013-10-03 06:01:11 +0000
34+++ src/server/surfaces/surface_stack.cpp 2013-10-10 09:44:21 +0000
35@@ -66,6 +66,24 @@
36 }
37 }
38
39+void ms::SurfaceStack::reverse_for_each_if(mc::FilterForScene& filter,
40+ mc::OperatorForScene& op)
41+{
42+ std::lock_guard<std::recursive_mutex> lg(guard);
43+ for (auto layer = layers_by_depth.rbegin();
44+ layer != layers_by_depth.rend();
45+ ++layer)
46+ {
47+ auto surfaces = layer->second;
48+ for (auto it = surfaces.rbegin(); it != surfaces.rend(); ++it)
49+ {
50+ mc::CompositingCriteria& info = *((*it)->compositing_criteria());
51+ ms::BufferStream& stream = *((*it)->buffer_stream());
52+ if (filter(info)) op(info, stream);
53+ }
54+ }
55+}
56+
57 void ms::SurfaceStack::set_change_callback(std::function<void()> const& f)
58 {
59 std::lock_guard<std::mutex> lg{notify_change_mutex};
60
61=== modified file 'tests/unit-tests/compositor/test_default_display_buffer_compositor.cpp'
62--- tests/unit-tests/compositor/test_default_display_buffer_compositor.cpp 2013-08-28 03:41:48 +0000
63+++ tests/unit-tests/compositor/test_default_display_buffer_compositor.cpp 2013-10-10 09:44:21 +0000
64@@ -47,6 +47,8 @@
65 struct MockScene : mc::Scene
66 {
67 MOCK_METHOD2(for_each_if, void(mc::FilterForScene&, mc::OperatorForScene&));
68+ MOCK_METHOD2(reverse_for_each_if, void(mc::FilterForScene&,
69+ mc::OperatorForScene&));
70 MOCK_METHOD1(set_change_callback, void(std::function<void()> const&));
71 MOCK_METHOD0(lock, void());
72 MOCK_METHOD0(unlock, void());
73@@ -76,6 +78,8 @@
74 }
75 }
76
77+ void reverse_for_each_if(mc::FilterForScene&, mc::OperatorForScene&) {}
78+
79 void set_change_callback(std::function<void()> const&) {}
80
81 void change(const std::vector<mc::CompositingCriteria*> &surfs)
82
83=== modified file 'tests/unit-tests/compositor/test_multi_threaded_compositor.cpp'
84--- tests/unit-tests/compositor/test_multi_threaded_compositor.cpp 2013-08-28 03:41:48 +0000
85+++ tests/unit-tests/compositor/test_multi_threaded_compositor.cpp 2013-10-10 09:44:21 +0000
86@@ -86,6 +86,8 @@
87 {
88 }
89
90+ void reverse_for_each_if(mc::FilterForScene&, mc::OperatorForScene&) {}
91+
92 void set_change_callback(std::function<void()> const& f)
93 {
94 std::lock_guard<std::mutex> lock{callback_mutex};
95
96=== modified file 'tests/unit-tests/surfaces/test_surface_stack.cpp'
97--- tests/unit-tests/surfaces/test_surface_stack.cpp 2013-08-28 03:41:48 +0000
98+++ tests/unit-tests/surfaces/test_surface_stack.cpp 2013-10-10 09:44:21 +0000
99@@ -256,6 +256,88 @@
100 stack.for_each_if(filter, renderable_operator);
101 }
102
103+TEST_F(SurfaceStack, skips_surface_that_is_filtered_out_reverse)
104+{
105+ using namespace ::testing;
106+
107+ EXPECT_CALL(mock_surface_allocator, create_surface(_,_))
108+ .WillOnce(Return(stub_surface1))
109+ .WillOnce(Return(stub_surface2))
110+ .WillOnce(Return(stub_surface3));
111+
112+ ms::SurfaceStack stack(mt::fake_shared(mock_surface_allocator),
113+ mt::fake_shared(input_registrar));
114+ auto s1 = stack.create_surface(default_params);
115+ auto criteria1 = s1.lock()->compositing_criteria();
116+ auto stream1 = s1.lock()->buffer_stream();
117+ auto s2 = stack.create_surface(default_params);
118+ auto criteria2 = s2.lock()->compositing_criteria();
119+ auto stream2 = s2.lock()->buffer_stream();
120+ auto s3 = stack.create_surface(default_params);
121+ auto criteria3 = s3.lock()->compositing_criteria();
122+ auto stream3 = s3.lock()->buffer_stream();
123+
124+ MockFilterForScene filter;
125+ MockOperatorForScene renderable_operator;
126+
127+ Sequence seq1, seq2;
128+ EXPECT_CALL(filter, filter(Ref(*criteria3)))
129+ .InSequence(seq1)
130+ .WillOnce(Return(true));
131+ EXPECT_CALL(filter, filter(Ref(*criteria2)))
132+ .InSequence(seq1)
133+ .WillOnce(Return(false));
134+ EXPECT_CALL(filter, filter(Ref(*criteria1)))
135+ .InSequence(seq1)
136+ .WillOnce(Return(true));
137+ EXPECT_CALL(renderable_operator,
138+ renderable_operator(Ref(*criteria3), Ref(*stream3)))
139+ .InSequence(seq2);
140+ EXPECT_CALL(renderable_operator,
141+ renderable_operator(Ref(*criteria1), Ref(*stream1)))
142+ .InSequence(seq2);
143+
144+ stack.reverse_for_each_if(filter, renderable_operator);
145+}
146+
147+TEST_F(SurfaceStack, stacking_order_reverse)
148+{
149+ using namespace ::testing;
150+
151+ EXPECT_CALL(mock_surface_allocator, create_surface(_,_))
152+ .WillOnce(Return(stub_surface1))
153+ .WillOnce(Return(stub_surface2))
154+ .WillOnce(Return(stub_surface3));
155+
156+ ms::SurfaceStack stack(mt::fake_shared(mock_surface_allocator),
157+ mt::fake_shared(input_registrar));
158+
159+ auto s1 = stack.create_surface(default_params);
160+ auto criteria1 = s1.lock()->compositing_criteria();
161+ auto stream1 = s1.lock()->buffer_stream();
162+ auto s2 = stack.create_surface(default_params);
163+ auto criteria2 = s2.lock()->compositing_criteria();
164+ auto stream2 = s2.lock()->buffer_stream();
165+ auto s3 = stack.create_surface(default_params);
166+ auto criteria3 = s3.lock()->compositing_criteria();
167+ auto stream3 = s3.lock()->buffer_stream();
168+
169+ StubFilterForScene filter;
170+ MockOperatorForScene renderable_operator;
171+ Sequence seq;
172+ EXPECT_CALL(renderable_operator,
173+ renderable_operator(Ref(*criteria3), Ref(*stream3)))
174+ .InSequence(seq);
175+ EXPECT_CALL(renderable_operator,
176+ renderable_operator(Ref(*criteria2), Ref(*stream2)))
177+ .InSequence(seq);
178+ EXPECT_CALL(renderable_operator,
179+ renderable_operator(Ref(*criteria1), Ref(*stream1)))
180+ .InSequence(seq);
181+
182+ stack.reverse_for_each_if(filter, renderable_operator);
183+}
184+
185 TEST_F(SurfaceStack, stacking_order)
186 {
187 using namespace ::testing;
