Merge lp:~bcsaller/pyjuju/subordinate-removes into lp:pyjuju
- subordinate-removes
- Merge into trunk
Status: Work in progress
Proposed branch: lp:~bcsaller/pyjuju/subordinate-removes
Merge into: lp:pyjuju
Diff against target: 550 lines (+264/-43), 11 files modified
  juju/control/destroy_service.py (+2/-2)
  juju/control/remove_relation.py (+5/-3)
  juju/control/tests/test_destroy_service.py (+18/-4)
  juju/control/tests/test_remove_relation.py (+2/-2)
  juju/hooks/tests/test_invoker.py (+2/-0)
  juju/lib/testing.py (+3/-3)
  juju/state/hook.py (+9/-3)
  juju/state/tests/test_service.py (+78/-22)
  juju/unit/lifecycle.py (+64/-0)
  juju/unit/tests/test_lifecycle.py (+76/-2)
  juju/unit/workflow.py (+5/-2)
To merge this branch: bzr merge lp:~bcsaller/pyjuju/subordinate-removes
Related bugs:
Reviewer: Juju Engineering (Pending)
Review via email: mp+102427@code.launchpad.net
Commit message
Description of the change
Destroying a subordinate shouldn't result in tracebacks

Destroying a subordinate service after destroying its principal service results in a traceback. This patch corrects that. The associated test also previously failed to exercise all the behavior needed to demonstrate the problem; that is corrected as well.
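The failure mode and its fix can be sketched in miniature, mirroring the guard added in the juju/state/hook.py hunk of the preview diff: once the principal service is destroyed, looking up its relations raises, and the fix treats a missing service as having no relations left. The names below are illustrative stand-ins, not the actual pyjuju API.

```python
class ServiceStateNotFound(Exception):
    """Stand-in for juju.state.errors.ServiceStateNotFound."""


def get_relations(topology, service_name):
    """Look up a service's relations; raises if the service is gone."""
    if service_name not in topology:
        raise ServiceStateNotFound(service_name)
    return topology[service_name].get("relations", [])


def get_relations_safe(topology, service_name):
    """A removed service simply has no relations left to process,
    so swallow the lookup error instead of letting it traceback."""
    try:
        return get_relations(topology, service_name)
    except ServiceStateNotFound:
        return []
```

With the principal already destroyed (an empty topology), `get_relations_safe({}, "logging")` returns an empty list instead of surfacing a traceback to the user.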
Benjamin Saller (bcsaller) wrote:
Reviewers: mp+102427
Message:
Please take a look.
Please review this at https:/
Affected files:
A [revision details]
M juju/control/
M juju/control/
M juju/control/
M juju/control/
M juju/hooks/
M juju/lib/testing.py
M juju/state/
M juju/state/
M juju/unit/
M juju/unit/
M juju/unit/
Kapil Thangavelu (hazmat) wrote:
Needs Fixing
There seem to be two separate implementations of unrelated issues here
that would ideally be in separate branches: one for open/close port, and
one for subordinate suicide on removal. I'm focusing this review on the
subordinate suicide aspect.
File juju/control/
juju/control/
this should default to false, right?
juju/control/
pls document here why the subordinate comes first in the ep pair.
It's easy to gloss over this as just an error-message formatting detail.
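The ordering the reviewer asks to have documented can be sketched as follows. This is a hypothetical helper, not the actual remove_relation.py code: the subordinate is inserted at the front of the pair so the UnsupportedSubordinateServiceRemoval error names the subordinate and the principal in the right order.

```python
def order_service_pair(services, is_subordinate):
    """Order endpoint services as (subordinate, principal).

    The subordinate is inserted at index 0 regardless of the order the
    endpoints arrive in, so that error formatting can rely on
    pair[0] being the subordinate and pair[1] the principal.
    """
    pair = []
    for service in services:
        if is_subordinate(service):
            pair.insert(0, service)  # subordinate always ends up first
        else:
            pair.append(service)
    return pair
```

Either input order yields the same (sub, principal) pair, which is exactly the invariant the inline comment in the patch is meant to explain.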
File juju/control/
juju/control/
traceback
This doesn't actually test anything related to the branch afaics. The
actual problem is asynchronously triggered.
File juju/hooks/
juju/hooks/
run and changes are immediate."""
does this change belong in this branch?
File juju/state/
juju/state/
ports triggers the appropriate firewall
does this change belong in this branch?
File juju/unit/
juju/unit/
The method name is a bit ambiguous; it's always a suicide op, i.e.
_undeploy_self.
juju/unit/
agents deployment.
s/agents/
juju/unit/
Are you sure about that? I don't see anything obvious there; if so, it's a
bug in the deployer. start would imply deploying again, which is
bogus.
File juju/unit/
juju/unit/
How does this help? What if the node is already gone when the
retry_change fires? Also untested.
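One way to address the "node already gone" race the reviewer raises is to treat a missing node as success, since absence is the goal state of a removal. This is an illustrative sketch under that assumption, not the branch's actual retry_change implementation, and the exception name is a stand-in for the ZooKeeper no-node error.

```python
class NoNodeException(Exception):
    """Stand-in for the ZooKeeper 'no node exists' error."""


def remove_if_present(delete, path):
    """Delete a node, treating 'already gone' as success.

    Whether our delete ran or a concurrent removal beat us to it,
    the goal state (node absent) has been reached either way.
    """
    try:
        delete(path)
    except NoNodeException:
        pass  # concurrent removal won the race; nothing left to do
    return True
```

This makes the removal idempotent, so a retry firing after the node has vanished no longer produces a traceback.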
- 537. By Benjamin Saller: typo
- 538. By Benjamin Saller: merge trunk
- 539. By Benjamin Saller: move ports handling to another branch
- 540. By Benjamin Saller: verify unit state removal on subordinate suicide
- 541. By Benjamin Saller: catch another async failure location
- 542. By Benjamin Saller: missing import
- 543. By Benjamin Saller: missing import
Kapil Thangavelu (hazmat) wrote:
So post discussions, I believe we've said this branch/approach is dead in favor of the stop protocol spec.
Unmerged revisions
- 543. By Benjamin Saller: missing import
- 542. By Benjamin Saller: missing import
- 541. By Benjamin Saller: catch another async failure location
- 540. By Benjamin Saller: verify unit state removal on subordinate suicide
- 539. By Benjamin Saller: move ports handling to another branch
- 538. By Benjamin Saller: merge trunk
- 537. By Benjamin Saller: typo
- 536. By Benjamin Saller: remove print
- 535. By Benjamin Saller: add back in subordinate removal; this will have to change to honor the normal flow control model later. When stop hooks are properly added this will also need another review
- 534. By Benjamin Saller: change arg. order for error message
Preview Diff
=== modified file 'juju/control/destroy_service.py'
--- juju/control/destroy_service.py	2012-04-11 17:48:26 +0000
+++ juju/control/destroy_service.py	2012-06-04 23:05:24 +0000
@@ -47,14 +47,14 @@
 
     if relations:
         principal_service = None
-        # if we have a container we can destroy the subordinate
+        # if we have a container we can't destroy the subordinate
         # (revisit in the future)
         for relation in relations:
             if relation.relation_scope != "container":
                 continue
             services = yield relation.get_service_states()
             remote_service = [s for s in services if s.service_name !=
                               service_state.service_name][0]
             if not (yield remote_service.is_subordinate()):
                 principal_service = remote_service
                 break
 
=== modified file 'juju/control/remove_relation.py'
--- juju/control/remove_relation.py	2012-04-12 09:23:50 +0000
+++ juju/control/remove_relation.py	2012-06-04 23:05:24 +0000
@@ -69,17 +69,19 @@
     service_pair = []  # ordered such that sub, principal
 
     is_container = False
-    has_principal = True
+    has_principal = False
+    # Sort the endpoints so we know which is the sub and which is the
+    # principal in the event we need to raise an error below
     for ep in endpoints:
         if ep.relation_scope == "container":
             is_container = True
         service = yield service_state_manager.get_service_state(
             ep.service_name)
         if (yield service.is_subordinate()):
-            service_pair.append(service)
+            service_pair.insert(0, service)
             has_principal = True
         else:
-            service_pair.insert(0, service)
+            service_pair.append(service)
     if is_container and len(service_pair) == 2 and has_principal:
         sub, principal = service_pair
         raise UnsupportedSubordinateServiceRemoval(sub.service_name,
 
=== modified file 'juju/control/tests/test_destroy_service.py'
--- juju/control/tests/test_destroy_service.py	2012-04-10 10:26:54 +0000
+++ juju/control/tests/test_destroy_service.py	2012-06-04 23:05:24 +0000
@@ -94,13 +94,15 @@
     @inlineCallbacks
     def test_destroy_principal_with_subordinates(self):
         log_service = yield self.add_service_from_charm("logging")
-        lu1 = yield log_service.add_unit_state()
 
         yield self.add_relation(
             "juju-info",
+            "container",
             (self.service_state1, "juju-info", "server"),
             (log_service, "juju-info", "client"))
 
+        lu1 = yield log_service.add_unit_state(self.service1_unit)
+
         finished = self.setup_cli_reactor()
         topology = yield self.get_topology()
         service_id = topology.find_service_with_name("mysql")
@@ -111,9 +113,10 @@
 
         self.setup_exit(0)
         self.mocker.replay()
-
         main(["destroy-service", "mysql"])
+
         yield finished
+
         service_id = topology.find_service_with_name("mysql")
 
         topology = yield self.get_topology()
@@ -128,9 +131,21 @@
         # Zookeeper. see test_destroy_subordinate_without_relations.
         exists = yield self.client.exists("/services/%s" % logging_id)
         self.assertTrue(exists)
-
         self.assertIn("Service 'mysql' destroyed.", self.output.getvalue())
 
+        # At this point trigger a destroy on the subordinate
+        # this deals with issue #982353 where partial state
+        # resulted in a traceback
+        # Assert that methods related to destruction fire w/o exception
+        self.assertEquals(True, (yield log_service.is_subordinate()))
+        self.assertEquals(None, (yield lu1.get_container()))
+        rels = yield self.relation_state_manager.get_relations_for_service(
+            log_service)
+        self.assertEquals([], rels)
+        yield self.service_state_manager.remove_service_state(log_service)
+        exists = yield self.client.exists("/services/%s" % logging_id)
+        self.assertFalse(exists)
+
     @inlineCallbacks
     def test_destroy_subordinate_without_relations(self):
         """Verify we can remove a subordinate w/o relations."""
@@ -146,7 +161,6 @@
 
         main(["destroy-service", "logging"])
         yield finished
-
         topology = yield self.get_topology()
         self.assertFalse(topology.has_service(logging_id))
         exists = yield self.client.exists("/services/%s" % logging_id)
 
=== modified file 'juju/control/tests/test_remove_relation.py'
--- juju/control/tests/test_remove_relation.py	2012-04-10 22:26:36 +0000
+++ juju/control/tests/test_remove_relation.py	2012-06-04 23:05:24 +0000
@@ -210,6 +210,6 @@
         main(["remove-relation", "logging", "wordpress"])
         yield wait_on_reactor_stopped
         self.assertIn("Unsupported attempt to destroy "
-                      "subordinate service 'wordpress' while "
-                      "principal service 'logging' is related.",
+                      "subordinate service 'logging' while "
+                      "principal service 'wordpress' is related.",
                       self.output.getvalue())
 
=== modified file 'juju/hooks/tests/test_invoker.py'
--- juju/hooks/tests/test_invoker.py	2012-05-04 03:36:36 +0000
+++ juju/hooks/tests/test_invoker.py	2012-06-04 23:05:24 +0000
@@ -1467,6 +1467,8 @@
 
         mu1, mu2 = my_units
         lu1, lu2 = log_units
+        self.mysql_units = my_units
+        self.log_units = log_units
 
         mystate = pick_attr(service_states, relation_role="server")
         logstate = pick_attr(service_states, relation_role="client")
 
=== modified file 'juju/lib/testing.py'
--- juju/lib/testing.py	2012-04-06 14:42:06 +0000
+++ juju/lib/testing.py	2012-06-04 23:05:24 +0000
@@ -170,19 +170,19 @@
         data, stat = yield client.get(path)
         name = path.rsplit('/', 1)[1]
 
-        d['contents'] = _decode_fmt(data, yaml.load)
+        d["contents"] = _decode_fmt(data, yaml.load)
 
         children = yield client.get_children(path)
         for name in children:
             if path == "/" and name == "zookeeper":
                 continue
 
-            cd = yield export_tree(path + '/' + name, indent)
+            cd = yield export_tree(path + "/" + name, indent)
             d[name] = cd
 
         returnValue(d)
 
-    output[path.rsplit('/', 1)[1]] = yield export_tree(path, '')
+    output[path.rsplit("/", 1)[1]] = yield export_tree(path, "")
     returnValue(output)
 
 @inlineCallbacks
=== modified file 'juju/state/hook.py'
--- juju/state/hook.py	2012-04-09 19:40:57 +0000
+++ juju/state/hook.py	2012-06-04 23:05:24 +0000
@@ -4,8 +4,9 @@
 
 from juju.state.base import StateBase
 from juju.state.errors import (
-    UnitRelationStateNotFound, StateNotFound, RelationBrokenContextError,
-    RelationStateNotFound, InvalidRelationIdentity, StateChanged)
+    ServiceStateNotFound, UnitRelationStateNotFound, StateNotFound,
+    RelationBrokenContextError, RelationStateNotFound,
+    InvalidRelationIdentity, StateChanged)
 from juju.state.relation import (
     RelationStateManager, ServiceRelationState, UnitRelationState)
 from juju.state.service import ServiceStateManager, parse_service_name
@@ -102,7 +103,12 @@
         relations = []
         if self._topology is None:
             self._topology = yield self._read_topology()
-        service = yield self.get_local_service()
+        try:
+            service = yield self.get_local_service()
+        except ServiceStateNotFound:
+            # The service has been removed
+            returnValue(relations)
+
         internal_service_id = service.internal_id
         for info in self._topology.get_relations_for_service(
                 internal_service_id):
 
=== modified file 'juju/state/tests/test_service.py'
--- juju/state/tests/test_service.py	2012-04-11 19:38:41 +0000
+++ juju/state/tests/test_service.py	2012-06-04 23:05:24 +0000
@@ -159,6 +159,43 @@
         unit_state = yield service_state.add_unit_state()
         returnValue(unit_state)
 
+    @inlineCallbacks
+    def setup_subordinate_defaults(self):
+        mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info",
+                                    "server", "global")
+        logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
+                                      "client", "container")
+
+        logging_charm_state = yield self.get_subordinate_charm()
+        self.assertTrue(logging_charm_state.is_subordinate())
+        log_state = yield self.service_state_manager.add_service_state(
+            "logging", logging_charm_state, dummy_constraints)
+        mysql_state = yield self.service_state_manager.add_service_state(
+            "mysql", self.charm_state, dummy_constraints)
+
+        m1 = yield self.machine_state_manager.add_machine_state(
+            series_constraints)
+        m2 = yield self.machine_state_manager.add_machine_state(
+            series_constraints)
+
+        relation_state, service_states = (yield
+            self.relation_state_manager.add_relation_state(
+                mysql_ep, logging_ep))
+
+        unit_state1 = yield mysql_state.add_unit_state()
+        yield unit_state1.assign_to_machine(m1)
+        unit_state0 = yield log_state.add_unit_state(
+            container=unit_state1)
+
+        unit_state3 = yield mysql_state.add_unit_state()
+        yield unit_state3.assign_to_machine(m2)
+        unit_state2 = yield log_state.add_unit_state(
+            container=unit_state3)
+
+        returnValue([service_states, relation_state,
+                     [(unit_state0, unit_state1),
+                      (unit_state2, unit_state3)]])
+
 
 class ServiceStateManagerTest(ServiceStateManagerTestBase):
 
@@ -455,27 +492,9 @@
         """
         Validate adding units with containers specified and recovering that.
         """
-        mysql_ep = RelationEndpoint("mysql", "juju-info", "juju-info",
-                                    "server", "global")
-        logging_ep = RelationEndpoint("logging", "juju-info", "juju-info",
-                                      "client", "container")
-
-        logging_charm_state = yield self.get_subordinate_charm()
-        self.assertTrue(logging_charm_state.is_subordinate())
-        log_state = yield self.service_state_manager.add_service_state(
-            "logging", logging_charm_state, dummy_constraints)
-        mysql_state = yield self.service_state_manager.add_service_state(
-            "mysql", self.charm_state, dummy_constraints)
-
-        relation_state, service_states = (yield
-            self.relation_state_manager.add_relation_state(
-                mysql_ep, logging_ep))
-
-        unit_state1 = yield mysql_state.add_unit_state()
-        unit_state0 = yield log_state.add_unit_state(container=unit_state1)
-
-        unit_state3 = yield mysql_state.add_unit_state()
-        unit_state2 = yield log_state.add_unit_state(container=unit_state3)
+        service_states, relation_state, unit_states = (
+            yield self.setup_subordinate_defaults())
+        (unit_state0, unit_state1), (unit_state2, unit_state3) = unit_states
 
         self.assertEquals((yield unit_state1.get_container()), None)
         self.assertEquals((yield unit_state0.get_container()), unit_state1)
@@ -498,7 +517,7 @@
 
         @inlineCallbacks
         def verify_container(relation_state, service_relation_state,
-                            unit_state, container):
+                             unit_state, container):
            presence_path = self.get_presence_path(
                relation_state,
                service_relation_state.relation_role,
@@ -533,6 +552,11 @@
             # we verify the content elsewhere
             self.assertTrue(settings_info["private-address"])
 
+            mid = yield unit_state.get_assigned_machine_id()
+            cmid = yield container.get_assigned_machine_id()
+            if container:
+                self.assertEqual(mid, cmid)
+
         # verify all the units are constructed as expected
         # first the client roles with another container
         yield verify_container(relation_state, logstate,
@@ -2540,6 +2564,38 @@
         self.assertEqual(exposed_flag, False)
 
     @inlineCallbacks
+    def test_set_and_clear_exposed_flag_subordinate(self):
+        """An exposed flag can be set on a subordinate service."""
+        service_states, relation_state, unit_states = (
+            yield self.setup_subordinate_defaults())
+        (u0, u1), (u2, u3) = unit_states
+        service_state = yield u1.get_service_state()
+
+        # Defaults to false
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+        # Can be set
+        yield service_state.set_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, True)
+
+        # Can be set multiple times
+        yield service_state.set_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, True)
+
+        # Can be cleared
+        yield service_state.clear_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+        # Can be cleared multiple times
+        yield service_state.clear_exposed_flag()
+        exposed_flag = yield service_state.get_exposed_flag()
+        self.assertEqual(exposed_flag, False)
+
+    @inlineCallbacks
     def test_watch_exposed_flag(self):
         """An exposed watch is setup on a permanent basis."""
         service_state = yield self.add_service("wordpress")
 
=== modified file 'juju/unit/lifecycle.py'
--- juju/unit/lifecycle.py	2012-05-01 00:26:25 +0000
+++ juju/unit/lifecycle.py	2012-06-04 23:05:24 +0000
@@ -106,6 +106,7 @@
         self._state_dir = state_dir
         self._relations = None
         self._running = False
+        self._remote_service = {}
         self._watching_relation_memberships = False
         self._watching_relation_resolved = False
         self._run_lock = DeferredLock()
@@ -404,6 +405,14 @@
                 yield workflow.transition_state("departed")
             self._store_relations()
 
+        if not is_principal:
+            service_relation = old_relations[relation_id]
+            removed = yield self._undeploy_subordinate(service_relation)
+            if removed:
+                # we don't even need to process adds at this point
+                # (that would be a sick, sad world)
+                returnValue(None)
+
         # Process new relations.
         for relation_id in added:
             service_relation = new_relations[relation_id]
@@ -434,10 +443,65 @@
 
         self._relations[service_relation.internal_relation_id] = workflow
 
+        # Cache the remote unit's service (to ease subordinate undeploy)
+        service_states = yield service_relation.get_service_states()
+        remote_service = [s for s in service_states if
+                          s.service_name != self._unit.service_name][0]
+
+        self._remote_service[
+            service_relation.internal_relation_id] = remote_service
+
         with (yield workflow.lock()):
             yield workflow.synchronize()
 
     @inlineCallbacks
+    def _undeploy_subordinate(self, service_relation):
+        """Conditionally stop and remove _this_ subordinate unit.
+
+        When a relationship is removed we check if its with this subordinate's
+        principal service. If it is we need to stop the agent and remove the
+        on disk deployment.
+
+        returns boolean indicating if we have self-terminated or not.
+        """
+        # determine if the remote end of this service relation is the
+        # container of this unit (it could be a scope:container relation under
+        # the same principal).
+        container = yield self._unit.get_container()
+        container_service = yield container.get_service_state()
+        # We cannot use service_relation.get_service_states() as we're
+        # reacting to relation removal and its no longer in the topology.
+        # (forcing us to manage this cache)
+        remote_service = self._remote_service[
+            service_relation.internal_relation_id]
+
+        if container is None or (
+                remote_service.internal_id == container_service.internal_id):
+            # The container has already been removed we can safely proceed
+            # OR
+            # The remote side of the relationship being removed is the
+            # principal.
+            # At this point we will being shutting down this unit,
+            # remove our unit state
+            self._log.debug("Removing unit state for %s", self._unit)
+            yield self._service.remove_unit_state(self._unit)
+            # the final problem
+            yield self._do_unit_kill(self._unit.unit_name, "",
+                                     self._unit_dir)
+            returnValue(True)
+
+        returnValue(False)
+
+    @inlineCallbacks
+    def _do_unit_kill(self, unit_name, machine_id, charm_dir):
+        unit_deployer = UnitDeployer(self._client, machine_id, charm_dir)
+        # start the unit deployer (this is needed to bootstrap the factory)
+        # kill ourself
+        yield unit_deployer.start("subordinate")
+        yield unit_deployer.kill_service_unit(unit_name)
+
+    @inlineCallbacks
     def _do_unit_deploy(self, unit_name, machine_id, charm_dir):
         # this method exists to aid testing rather than being an
        # inline
 
=== modified file 'juju/unit/tests/test_lifecycle.py'
--- juju/unit/tests/test_lifecycle.py	2012-05-04 03:39:27 +0000
+++ juju/unit/tests/test_lifecycle.py	2012-06-04 23:05:24 +0000
@@ -23,7 +23,8 @@
 from juju.machine.tests.test_constraints import series_constraints
 
 from juju.state.endpoint import RelationEndpoint
-from juju.state.errors import UnitRelationStateNotFound
+from juju.state.errors import (UnitRelationStateNotFound,
+                               ServiceUnitStateNotFound)
 from juju.state.machine import MachineStateManager
 from juju.state.relation import ClientServerUnitWatcher
 from juju.state.service import NO_HOOKS
@@ -34,7 +35,7 @@
 from juju.unit.tests.test_charm import CharmPublisherTestBase
 
 from juju.lib.testing import TestCase
-from juju.lib.mocker import MATCH
+from juju.lib.mocker import MATCH, ANY, Mocker
 
 
 class UnwriteablePath(object):
@@ -1441,3 +1442,76 @@
         yield lifecycle.start()
         yield test_deferred
         # mocker has already verified the call signature, we're happy
+
+    @inlineCallbacks
+    def test_subordinate_lifecycle_stop_principal(self):
+        state = yield self.setup_default_subordinate_relation()
+        mu1, my_srv = state["principal"]
+        lu1, log_srv = state["subordinate"]
+
+        # Principal setup
+        principal_lifecycle = UnitLifecycle(
+            self.client, mu1, my_srv,
+            self.unit_directory, self.state_directory, self.executor)
+
+        test_deferred = Deferred()
+        principal_results = []
+
+        def test_deploy(unit_name, machine_id, charm_dir):
+            principal_results.append([unit_name, machine_id, charm_dir])
+            test_deferred.callback(True)
+
+        mock_unit_deploy = self.mocker.patch(principal_lifecycle)
+        mock_unit_deploy._do_unit_deploy("logging/0", 0, ANY)
+        self.mocker.call(test_deploy)
+
+        # Subordinate setup
+        test_sub_deferred = Deferred()
+        sub_results = []
+
+        def test_undeploy(unit_name, machine_id, charm_dir):
+            sub_results.append([unit_name, machine_id, charm_dir])
+            test_sub_deferred.callback(True)
+
+        # Actual test
+        self.mocker.replay()
+        yield principal_lifecycle.start()
+        yield test_deferred
+
+        # The principal is running and did a deploy (mocked out);
+        # at this point we should be able to collect the relation
+        # and the unit and inject them into the subordinate (which
+        # would have them in its own process normally)
+        log = state["subordinate"][1]
+        log_units = yield log.get_all_unit_states()
+        lu1 = log_units[0]
+
+        # A second instance of mocker around the subordinate
+        # avoids issues with patch/replace confusing the single
+        # instance
+        mocker2 = Mocker()
+        subordinate_lifecycle = UnitLifecycle(
+            self.client, lu1, log_srv,
+            self.unit_directory, self.state_directory, self.executor)
+
+        mock_subunit_deploy = mocker2.patch(subordinate_lifecycle)
+        mock_subunit_deploy._do_unit_kill(
+            "logging/0", "", ANY)
+        mocker2.call(test_undeploy)
+
+        with mocker2:
+            yield subordinate_lifecycle.start()
+
+            # break relation to principal
+            yield self.relation_manager.remove_relation_state(
+                state["relation_state"])
+
+            # verify that we see the departed event and the
+            # subordinate's attempt to self-terminate
+            yield test_sub_deferred
+            # again, mocker verified the call on _do_unit_kill;
+            # this verifies that the unit attempted to self-destruct
+
+        # verify the subordinate unit state was removed
+        failure = self.service_manager.get_unit_state("logging/0")
+        yield self.assertFailure(failure, ServiceUnitStateNotFound)
 
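The new test's interception pattern — patch `_do_unit_kill` on the lifecycle, capture its arguments, then assert once the departed event fires — can be sketched without mocker or Twisted using the stdlib. The `principal_departed` trigger below is an illustrative name, not the project's actual hook:

```python
from unittest import mock

# Toy lifecycle; the method names mirror the diff, but this is not pyjuju code.
class UnitLifecycle:
    def _do_unit_kill(self, unit_name, machine_id, charm_dir):
        raise RuntimeError("real termination path, never reached in the test")

    def principal_departed(self, unit_name, machine_id, charm_dir):
        # illustrative trigger: fired when the relation to the principal departs
        self._do_unit_kill(unit_name, machine_id, charm_dir)


calls = []
lifecycle = UnitLifecycle()
# Replace the kill hook with a recorder, just as mocker2.patch does above.
with mock.patch.object(UnitLifecycle, "_do_unit_kill",
                       side_effect=lambda *args: calls.append(args)):
    lifecycle.principal_departed("logging/0", "", "/tmp/charm")

# The call signature reached the (mocked) termination path intact.
assert calls == [("logging/0", "", "/tmp/charm")]
```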
=== modified file 'juju/unit/workflow.py'
--- juju/unit/workflow.py	2012-03-29 07:52:41 +0000
+++ juju/unit/workflow.py	2012-06-04 23:05:24 +0000
@@ -1,7 +1,7 @@
-import yaml
 import csv
+import logging
 import os
-import logging
+import yaml
 
 from zookeeper import NoNodeException
 from twisted.internet.defer import inlineCallbacks, returnValue
@@ -226,6 +226,9 @@
         state_serialized = yaml.safe_dump(state_dict)
 
         def update_state(content, stat):
+            if not stat:
+                # State has gone away
+                return
             unit_data = yaml.load(content)
             if not unit_data:
                 unit_data = {}
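The `if not stat` guard above is what stops the traceback: a retry callback can fire after the znode is gone, in which case `stat` is empty. A minimal synchronous sketch of the guard follows — plain dicts stand in for the YAML payload, and this `update_state` is a toy, not the patched method:

```python
def update_state(content, stat, new_state):
    # Guard added by the patch: a missing stat means the backing node was
    # deleted concurrently, so the update becomes a no-op instead of an error.
    if not stat:
        # State has gone away
        return None
    unit_data = content or {}
    unit_data.update(new_state)
    return unit_data


# Node deleted concurrently: the guard makes the update a no-op.
assert update_state({"state": "up"}, None, {"state": "stopped"}) is None
# Node still present: the update merges normally.
assert update_state({}, {"version": 1}, {"state": "stopped"}) == {"state": "stopped"}
```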