Merge lp:~bcsaller/pyjuju/statusd into lp:pyjuju

Proposed by Benjamin Saller
Status: Work in progress
Proposed branch: lp:~bcsaller/pyjuju/statusd
Merge into: lp:pyjuju
Diff against target: 1330 lines (+738/-317)
10 files modified
juju/agents/provision.py (+15/-2)
juju/agents/tests/test_provision.py (+51/-2)
juju/control/status.py (+30/-129)
juju/control/tests/test_status.py (+23/-179)
juju/environment/config.py (+1/-1)
juju/environment/tests/test_config.py (+0/-1)
juju/providers/common/tests/test_utils.py (+34/-3)
juju/providers/common/utils.py (+16/-0)
juju/state/status.py (+181/-0)
juju/state/tests/test_status.py (+387/-0)
To merge this branch: bzr merge lp:~bcsaller/pyjuju/statusd
Reviewer Review Type Date Requested Status
Kapil Thangavelu (community) Needs Fixing
Jim Baker (community) Needs Fixing
Gustavo Niemeyer Needs Fixing
Review via email: mp+74336@code.launchpad.net

Description of the change

Statusd -- deployment side status computation

This branch includes changes designed to allow for computing status on topology change and writing the output to a YAML node in ZK. This node will always represent the current view, and the client simply queries this value and does its normal filtering as post-processing.
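
In practice the client-side collect becomes a thin read-and-filter pass over that precomputed node. A condensed sketch of the new path (argument order follows the existing collect call sites in the tests; the scope filtering is elided here):

from twisted.internet.defer import inlineCallbacks, returnValue

from juju.state.status import StatusManager


@inlineCallbacks
def collect(scope, machine_provider, client, log):
    # The deployment side keeps /status up to date on topology change;
    # the client only reads that precomputed view...
    status_manager = StatusManager(client)
    state = yield status_manager.get_status()
    # ...and applies the usual service/unit scope filtering as
    # post-processing (globbing elided in this sketch).
    returnValue(state)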

Revision history for this message
Jim Baker (jimbaker) wrote :

Needs fixing. There is no explicit testing that ensemble status depends on the actions of the StatusManager. Instead it's taking advantage of the fact that get_status will work even if the /status znode is not set in ZK. So things work regardless.

Maybe the better approach in testing is as follows:

1. Explicitly instantiate a StatusManager in the control tests. Verify that it properly writes to the /status znode by running ensemble status before and after the state of the system changes (e.g. with build_topology).

2. Add some minimal testing in ensemble.agents.tests.test_provision to verify that the StatusManager is properly instantiated and running. There's nothing there now to verify that.

3. Verify in ensemble.state.tests.test_status that the proper caching behavior is seen. Otherwise I think the tests look fine for that.

Some additional comments:

[1]

+ self.mocker.count(min=0, max=None)
+
+ r = self.mocker.replace("ensemble.providers.common.utils.get_active_machine_provider")
+ r(ANY)
+ self.mocker.result(self.provider)
+ self.mocker.count(min=0, max=None)

I'm a bit leery of mocks that are so open. Nothing is really being verified here. Maybe it's called, maybe it isn't. Is it possible to better constrain them, or check their args?
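
For instance, the expectation could be pinned to a single call with the concrete client argument, something along these lines using the same mocker calls:

# Expect exactly one provider lookup, with the test's client passed in,
# instead of an open-ended "maybe called, maybe not" expectation.
r = self.mocker.replace(
    "ensemble.providers.common.utils.get_active_machine_provider")
r(self.client)
self.mocker.result(self.provider)
self.mocker.count(1)
self.mocker.replay()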

[2]

- This is handy for a default initialization.
+ This is handy for a default initialisation.

Are we committing to British spelling? ;) Works for me either way.

[3]

+@inlineCallbacks
+def get_active_machine_provider(client):
+ """Return the machine environment for the running environment.
+
+ This is used on the deployment side to get a machine provider
+ which can be used to pull machine specific values like DNS names.
+ """
+ env_manager = EnvironmentStateManager(client)
+ env_config = yield env_manager.get_config()
+ env = env_config.get_default()
+ machine_provider = env.get_machine_provider()
+ returnValue(machine_provider)

No tests for this code.
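
A minimal test might push a dummy environment config into ZK and assert on the provider type, roughly as the branch later does in the diff below (a sketch, assuming the same test bases the branch uses):

import yaml
from twisted.internet.defer import inlineCallbacks

from juju.environment.tests.test_config import EnvironmentsConfigTestBase
from juju.providers.common.utils import get_active_machine_provider
from juju.state.environment import EnvironmentStateManager
from juju.state.tests.test_service import ServiceStateManagerTestBase


class GetActiveMachineProviderTest(
        ServiceStateManagerTestBase, EnvironmentsConfigTestBase):

    @inlineCallbacks
    def test_get_active_machine_provider(self):
        # Store a dummy environment config in ZK so the deployment side
        # can resolve a machine provider from it.
        config = {
            "environments": {
                "firstenv": {"type": "dummy", "admin-secret": "homer"}}}
        self.write_config(yaml.dump(config))
        self.config.load()
        env_manager = EnvironmentStateManager(self.client)
        yield env_manager.set_config_state(self.config, "firstenv")

        provider = yield get_active_machine_provider(self.client)
        self.assertEqual(provider.provider_type, "dummy")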

[4]

+# Version of the status serialisation
+VERSION = 1

Good idea. I wonder if it should also be reported to the user instead of being deleted by client code here, so that code using the status info can determine its backwards-compatibility issues, if any.

[5]

+class StatusManager(StateBase):
+ """Watch deployments for status state changes."""
+ def __init__(self, client):
+ super(StatusManager, self).__init__(client)
+ self._status = None
+
+ @inlineCallbacks
+ def get_status(self):
+ """Get current status information."""
+ if self._status is None:
+ status = yield self._load_status()
+ if status is None:
+ status = yield self.collect()
+
+ self._status = status
+ returnValue(self._status)

From what I can tell, the caching behavior with _load_status is only exercised indirectly (by ensemble.control.tests.test_status.StatusTest.test_collect_filtering).

[6]

+ @inlineCallbacks
+ def build_topology(self, base=None):
+ """Build a simulated topology with a default machine configuration.
+
+ This method returns a dict that can be used to ...

Read more...

review: Needs Fixing
Revision history for this message
Gustavo Niemeyer (niemeyer) wrote :

Great work Ben. Looking forward to having that in.

Some comments:

[1]

The hook up on the provisioning agent seems to be completely untested.
Someone could mistakenly remove the line and everything would be
apparently fine.

[2]

+def get_active_machine_provider(client):

This would benefit from some testing too.

[3]

+ exists = yield self._client.exists("/status")
+ if not exists:
+ returnValue(None)
+
+ ystatus, _ = yield self._client.get("/status")
+ status = yaml.load(ystatus)

Race condition and unnecessary roundtrip here. Get and test for errors,
rather than testing and then getting.

[4]

+ if status is not None and "version" not in status or status["version"] != VERSION:
+     raise EnsembleError("Version mismatch in status")

Crashing like that means we won't ever be able to change the version
number once a first version was used. Instead of crashing, pretend
it didn't exist if the version was wrong. Please test this.
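
Taking [3] and [4] together, the load path can ask forgiveness and treat an unknown version as missing, something along these lines (a sketch of the StatusManager._load_status method; zookeeper, yaml, VERSION and the Twisted helpers are the imports status.py already has):

    @inlineCallbacks
    def _load_status(self):
        # Single round trip: get the node and handle its absence,
        # instead of exists() followed by get().
        try:
            ystatus, _ = yield self._client.get("/status")
        except zookeeper.NoNodeException:
            returnValue(None)

        status = yaml.load(ystatus)
        # Treat a missing or mismatched version as "no cached status",
        # forcing a fresh collect rather than crashing.
        if status is None or status.get("version") != VERSION:
            returnValue(None)
        returnValue(status)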

[5]

+ del status["version"]

Doesn't seem necessary. Hiding the version should be done in the
command itself, if desired.

[6]

+ versioned_state = dict(state)

Same thing. No need to copy.. just let the version show up and
hide it in the command itself.

review: Needs Fixing
Revision history for this message
Benjamin Saller (bcsaller) wrote :

> [1]
>
> The hook up on the provisioning agent seems to be completely untested.
> Someone could mistakenly remove the line and everything would be
> apparently fine.

Added some tests to verify the watch fires, w/o paying much attention
to the results, as we don't build a proper environment to run status
against in those tests. Also had to mock StatusManager out for the
expose tests so as not to interfere with its watch tests.

>
> [2]
>
> +def get_active_machine_provider(client):
>
> This would benefit from some testing too.

Added. Verify we can get a dummy provider back when configured for such.

>
> [3]
>
> +        exists = yield self._client.exists("/status")
> +        if not exists:
> +            returnValue(None)
> +
> +        ystatus, _ = yield self._client.get("/status")
> +        status = yaml.load(ystatus)
>
> Race condition and unnecessary roundtrip here. Get and test for errors,
> rather than testing and then getting.
>

Thanks, based on this point I cleaned up the get and set paths and am
using retry_change in the sets. The gets ask forgiveness rather than
permission (conceptually speaking).

> [4]
>
> +        if status is not None and "version" not in status or status["version"] != VERSION:
> +            raise EnsembleError("Version mismatch in status")
>
> Crashing like that means we won't ever be able to change the version
> number once a first version was used. Instead of crashing, pretend
> it didn't exist if the version was wrong. Please test this.
>

[4,5,6]

Any error in recovering state or parsing version should now force a
new status collection phase.

lp:~bcsaller/pyjuju/statusd updated
350. By Benjamin Saller

merge trunk

Revision history for this message
Jim Baker (jimbaker) wrote :

+1, needs fixing.

Still waiting on the response to my earlier review, but there's an important design ramification here, as I outlined on Friday over IRC:

[7]

This code depends on watching /topology to ensure that it always gets every change that can affect the result of running the status command. However, this is not generally true. Service exposing and open ports depend on the nodes /services/<internal service id>/exposed and /units/<internal unit id>/ports respectively, so these also must be watched. In addition, lp:~fwereade/juju/dynamic-unit-state introduces further additions to status based on the agent's presence, which is tracked with an ephemeral node /agents/unit/<internal unit id>.

Given that the StatusManager resides in the ProvisioningAgent, it should be feasible for the SM to use the firewall management's support for observing any logical-level changes. Specifically this should be done by using the open_close_ports_observer, since it would make little sense to duplicate the existing complex watch setup for this. It may be best to await the refactoring of the firewall management support (to make it a plugin manager similar to StatusManager, in fact). Presence simply needs to watch the children of /agents/unit for changes (see the sketch below). I guess here it depends on whether William's work lands first or not.
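
A rough sketch of what that presence piece could look like (a hypothetical helper on StatusManager, assuming txzookeeper's get_children_and_watch API; this is not in the branch):

    @inlineCallbacks
    def _watch_unit_agent_presence(self):
        # Hypothetical: re-collect status whenever the set of unit agent
        # ephemeral nodes under /agents/unit changes.
        children_d, watch_d = self._client.get_children_and_watch(
            "/agents/unit")
        yield children_d
        yield watch_d  # wait for a membership change
        yield self.collect()
        # ZK watches are one-shot, so re-arm for the next change.
        self._watch_unit_agent_presence()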

Some other problems I see in the code:

[8]

+ def __init__(self, client):
+ super(StatusManager, self).__init__(client)
+ self._status = None
+

This should do an immediate collect, to take account of any changes that have happened between /status being set and a bounce of the provisioning agent, along with corresponding testing.

[9]

+ def watch_status(self, callback=None):
+ """Set a permanent watch to refresh status on change."""
+
+ @inlineCallbacks
+ def watch_topology(old_topology, new_topology):
+ state = yield self.collect()
+ if callback:
+ callback(state)
+ returnValue(state)
+
+ return self._watch_topology(watch_topology)

This does not ensure the watch is re-established in the case of a transient error in collect. Indeed, the error handling might also need to be somewhat more complex, such as deleting the /status node to ensure it's no longer in use.
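
One way to keep the watch alive across a transient collect failure is to trap and log inside the topology callback, something like (a sketch, not what the branch does):

    def watch_status(self, callback=None):
        """Set a permanent watch to refresh status on change."""

        @inlineCallbacks
        def watch_topology(old_topology, new_topology):
            try:
                state = yield self.collect()
            except Exception:
                # A transient failure in collect should not kill the
                # permanent topology watch; log it and wait for the
                # next change.
                log.exception("status collection failed")
                returnValue(None)
            if callback:
                callback(state)
            returnValue(state)

        return self._watch_topology(watch_topology)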

review: Needs Fixing
Revision history for this message
Kapil Thangavelu (hazmat) wrote :

This will be very cool to have in.

[10] The control points on the status watch protocol are a bit weak.
It would be nice if it had explicit start/stop control points, i.e.
on start, collect and compute status and start the watch. On stop,
it should not touch ZK in response to a watch event. (A sketch of
what that could look like follows below.)

As is, it's still performing background activity even if the intent is to
stop, because it computes and writes the tree before invoking the callback.

The callback becomes superfluous at that point since the usage here
is self-contained.
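
Something along these lines (hypothetical start/stop methods, not in the branch; imports as in status.py):

class StatusManager(StateBase):

    def __init__(self, client):
        super(StatusManager, self).__init__(client)
        self._status = None
        self._running = False

    @inlineCallbacks
    def start(self):
        # On start: compute and write status once, then watch topology.
        self._running = True
        yield self.collect()
        yield self._watch_topology(self._on_topology_change)

    def stop(self):
        # On stop: the next watch event becomes a no-op; ZK is not
        # touched again.
        self._running = False

    @inlineCallbacks
    def _on_topology_change(self, old_topology, new_topology):
        if not self._running:
            return
        yield self.collect()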

[11] The if __main__ stuff for status should get yanked out of the
provisioning agent module.

[12] PEP 8 syntax: several lines over 80 characters in status.py.

[13]
+ @inlineCallbacks
+ def test_status_manager_watch(self):

+ @inlineCallbacks
+ def test_watch_status(self):

+ @inlineCallbacks
+ def test_watch_status_with_callback(self):

...
+ yield self.sleep(0.1)
+ yield self.sleep(1)
+ yield self.sleep(1)
+ yield self.sleep(1)
+ yield self.sleep(1)

These could be simplified and restructured: just have a test helper API to
watch the /status node, which would remove the mocking and sleeps (see the
sketch below).
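
For example, a small helper on the test base could wait on the /status node itself instead of sleeping (a sketch, assuming txzookeeper's exists_and_watch API):

    @inlineCallbacks
    def wait_on_status_change(self):
        # Block the test until the next write to /status rather than
        # sleeping for an arbitrary interval.
        exists_d, watch_d = self.client.exists_and_watch("/status")
        yield exists_d
        yield watch_d
        data, _ = yield self.client.get("/status")
        returnValue(yaml.load(data))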

[14]
- This is handy for a default initialization.
+ This is handy for a default initialisation.

ew.. british vs american spelling wars.

[15]

+ @inlineCallbacks
+ def test_load_status(self):
+ # even though this is a private method we test its logic
+ # directly

+ @inlineCallbacks
+ def test_store_status(self):
+ # even though this is a private method we test its logic
+ # directly

One nice practice from Landscape was to label such tests with a
test_wb prefix (i.e. whitebox testing) as they have implementation
knowledge.

Still, it seems like these could directly read/write /status and
compare against get_status, i.e. there is no need to invoke private
methods on the class.

[16]

+ def mock_environment(self):
+ mock_environment = self.mocker.patch(Environment)
+ mock_environment.get_machine_provider()
+ self.mocker.result(self.provider)

Test-specific mocks like this shouldn't have to be found by crawling
down several layers of inheritance. At a minimum the name should more
exactly specify what it's doing if it's going to live down an inheritance
chain, e.g. setup_environment_provider.

[17]
+ @inlineCallbacks
+ def add_relation_with_relation_units(
+ self,
+ source_endpoint, source_units, source_states,
+ dest_endpoint, dest_units, dest_states):

This should be reformatted for easier reading; also, methods with
indented params should have a docstring following, to separate the code
from the params.

review: Needs Fixing
Revision history for this message
Kapil Thangavelu (hazmat) wrote :

has this been abandoned?

Revision history for this message
Gustavo Niemeyer (niemeyer) wrote :

In case this is brought back to life, let's please agree on how the public-facing bits are changing. I'm wondering mostly about how the zookeeper state has changed, and at which moments it changes.

Unmerged revisions

350. By Benjamin Saller

merge trunk

349. By Benjamin Saller

load_state returns None on error, forcing retry on invalid cached state; store_state using retry_change

348. By Benjamin Saller

status callback on watch, not init, refactor status test base to not include tests in state/

347. By Benjamin Saller

test around getting active machine provider server side

346. By Benjamin Saller

status watch callback to terminate watch when the agent is stopped and tests

345. By Benjamin Saller

status watch callback to terminate watch when the agent is stopped

344. By Benjamin Saller

add callback support to StatusManager

343. By Benjamin Saller

properly map units to host names for testing, compat with trunk

342. By Benjamin Saller

public address support in state/status

341. By Benjamin Saller

merge trunk

Preview Diff

1=== modified file 'juju/agents/provision.py'
2--- juju/agents/provision.py 2011-10-12 18:14:01 +0000
3+++ juju/agents/provision.py 2011-10-13 16:27:26 +0000
4@@ -11,6 +11,7 @@
5 StateChanged, StopWatcher)
6 from juju.state.machine import MachineStateManager
7 from juju.state.service import ServiceStateManager
8+from juju.state.status import StatusManager
9
10 from .base import BaseAgent
11
12@@ -44,6 +45,7 @@
13 self.provider = self.environment.get_machine_provider()
14 self.machine_state_manager = MachineStateManager(self.client)
15 self.service_state_manager = ServiceStateManager(self.client)
16+ self.status_manager = StatusManager(self.client)
17 self._open_close_ports_observer = None
18 self._open_close_ports_on_machine_observer = None
19 self._running = True
20@@ -61,6 +63,9 @@
21 self.watch_machine_changes)
22 self.service_state_manager.watch_service_states(
23 self.watch_service_changes)
24+ self.status_manager.watch_status(
25+ self.watch_status_changes)
26+
27 from twisted.internet import reactor
28 reactor.callLater(
29 self.machine_check_period, self.periodic_machine_check)
30@@ -475,6 +480,14 @@
31 if self._open_close_ports_on_machine_observer is not None:
32 yield self._open_close_ports_on_machine_observer(machine_id)
33
34-
35-if __name__ == '__main__':
36+ def watch_status_changes(self, state):
37+ """Status changes to the global topology are returned to this watch.
38+
39+ :param:state: A status dictionary.
40+ """
41+ if not self._running:
42+ raise StopWatcher()
43+ return state
44+
45+if __name__ == "__main__":
46 ProvisioningAgent().run()
47
48=== modified file 'juju/agents/tests/test_provision.py'
49--- juju/agents/tests/test_provision.py 2011-10-12 18:14:01 +0000
50+++ juju/agents/tests/test_provision.py 2011-10-13 16:27:26 +0000
51@@ -15,10 +15,11 @@
52 from juju.environment.tests.test_config import SAMPLE_ENV
53
54 from juju.errors import ProviderInteractionError
55-from juju.lib.mocker import MATCH
56+from juju.lib.mocker import MATCH, ANY
57 from juju.providers.dummy import DummyMachine
58 from juju.state.errors import StopWatcher
59 from juju.state.machine import MachineStateManager
60+from juju.state.status import StatusManager
61 from juju.state.tests.test_service import ServiceStateManagerTestBase
62
63 from .common import AgentTestBase
64@@ -126,6 +127,8 @@
65 yield self.agent.startService()
66 self.output = self.capture_logging("juju.agents.provision",
67 logging.DEBUG)
68+ self.status_output = self.capture_logging("juju.state.status",
69+ logging.DEBUG)
70
71 def test_get_agent_name(self):
72 self.assertEqual(self.agent.get_agent_name(), "provision:dummy")
73@@ -444,6 +447,7 @@
74
75 @inlineCallbacks
76 def test_start_agent_with_watch(self):
77+ self.capture_logging("juju.state.status", logging.DEBUG)
78 mock_reactor = self.mocker.patch(reactor)
79 mock_reactor.callLater(
80 self.agent.machine_check_period,
81@@ -462,6 +466,45 @@
82 instance_id = yield machine_state0.get_instance_id()
83 self.assertEqual(instance_id, 0)
84
85+ @inlineCallbacks
86+ def test_status_manager_watch(self):
87+ status_state = {}
88+
89+ def check_status(state):
90+ status_state.clear()
91+ status_state.update(state)
92+
93+ mock_reactor = self.mocker.patch(reactor)
94+ mock_reactor.callLater(
95+ self.agent.machine_check_period,
96+ self.agent.periodic_machine_check)
97+
98+ mock_agent = self.mocker.patch(self.agent)
99+ mock_agent.watch_status_changes(MATCH(lambda x: isinstance(x, dict)))
100+ self.mocker.passthrough(check_status)
101+ self.mocker.count(min=1, max=None)
102+ self.mocker.replay()
103+
104+ self.agent.set_watch_enabled(True)
105+ yield self.agent.start()
106+
107+ # trigger a status change
108+ manager = MachineStateManager(self.client)
109+ yield manager.add_machine_state()
110+
111+ # verify that the agent is installed
112+ yield self.sleep(0.1)
113+ # verify that we have a machine 0 in the status
114+ self.assertIn("machines", status_state)
115+ self.assertIn(0, status_state["machines"])
116+ yield self.agent.stop()
117+
118+ # The test doesn't fully configure the runtime and is really
119+ # just testing that the watch is enabled so we expect an
120+ # error about this from status.
121+ self.assertIn("Machine provider information missing: machine 0",
122+ self.status_output.getvalue())
123+
124
125 class ExposeTestBase(
126 ProvisioningTestBase, ServiceStateManagerTestBase):
127@@ -482,7 +525,13 @@
128 mock_reactor.callLater(
129 self.agent.machine_check_period,
130 self.agent.periodic_machine_check)
131- self.mocker.replay()
132+
133+ # ignore any status related watches
134+ status_manager = self.mocker.patch(StatusManager)
135+ status_manager.watch_status(ANY)
136+ self.mocker.count(min=0, max=None)
137+
138+ self.mocker.replay()
139 self.agent.set_watch_enabled(enabled)
140 yield self.agent.start()
141
142
143=== modified file 'juju/control/status.py'
144--- juju/control/status.py 2011-09-30 03:18:49 +0000
145+++ juju/control/status.py 2011-10-13 16:27:26 +0000
146@@ -7,15 +7,8 @@
147 from twisted.internet.defer import inlineCallbacks, returnValue
148 import yaml
149
150-from juju.errors import ProviderError
151 from juju.environment.errors import EnvironmentsConfigError
152-from juju.state.errors import UnitRelationStateNotFound
153-from juju.state.charm import CharmStateManager
154-from juju.state.machine import MachineStateManager
155-from juju.state.service import ServiceStateManager
156-from juju.state.relation import RelationStateManager
157-from juju.unit.workflow import WorkflowStateClient
158-
159+from juju.state.status import StatusManager
160
161 # a minimal registry for renderers
162 # maps from format name to callable
163@@ -137,143 +130,51 @@
164
165 `log`: a Python stdlib logger.
166 """
167- service_manager = ServiceStateManager(client)
168- relation_manager = RelationStateManager(client)
169- machine_manager = MachineStateManager(client)
170- charm_manager = CharmStateManager(client)
171-
172- service_data = {}
173- machine_data = {}
174- state = dict(services=service_data, machines=machine_data)
175+ status_manager = StatusManager(client)
176+ state = yield status_manager.get_status()
177
178 seen_machines = set()
179 filter_services, filter_units = digest_scope(scope)
180
181- services = yield service_manager.get_all_service_states()
182- for service in services:
183+ services = state["services"]
184+ for service_name, service_data in services.items():
185 if len(filter_services):
186 found = False
187 for filter_service in filter_services:
188- if fnmatch(service.service_name, filter_service):
189+ if not fnmatch(service_name, filter_service):
190+ del services[service_name]
191+ continue
192+ else:
193 found = True
194- break
195 if not found:
196 continue
197
198- unit_data = {}
199- relation_data = {}
200-
201- charm_id = yield service.get_charm_id()
202- charm = yield charm_manager.get_charm_state(charm_id)
203-
204- service_data[service.service_name] = dict(units=unit_data,
205- charm=charm.id,
206- relations=relation_data)
207- exposed = yield service.get_exposed_flag()
208- if exposed:
209- service_data[service.service_name].update(exposed=exposed)
210-
211- units = yield service.get_all_unit_states()
212- unit_matched = False
213-
214- relations = yield relation_manager.get_relations_for_service(service)
215-
216- for unit in units:
217+ units = service_data["units"]
218+
219+ for unit_name, unit_data in units.items():
220+ machine_id = unit_data["machine"]
221 if len(filter_units):
222 found = False
223 for filter_unit in filter_units:
224- if fnmatch(unit.unit_name, filter_unit):
225+ if fnmatch(unit_name, filter_unit):
226 found = True
227- break
228+ seen_machines.add(machine_id)
229+
230 if not found:
231- continue
232-
233- u = unit_data[unit.unit_name] = dict()
234- machine_id = yield unit.get_assigned_machine_id()
235- u["machine"] = machine_id
236- unit_workflow_client = WorkflowStateClient(client, unit)
237- u["state"] = yield unit_workflow_client.get_state()
238- if exposed:
239- open_ports = yield unit.get_open_ports()
240- u["open-ports"] = ["{port}/{proto}".format(**port_info)
241- for port_info in open_ports]
242-
243- u["public-address"] = yield unit.get_public_address()
244-
245- # indicate we should include information about this
246- # machine later
247- seen_machines.add(machine_id)
248- unit_matched = True
249-
250- # collect info on each relation for the service unit
251- relation_status = {}
252- for relation in relations:
253- try:
254- relation_unit = yield relation.get_unit_state(unit)
255- except UnitRelationStateNotFound:
256- # This exception will occur when relations are
257- # established between services without service
258- # units, and therefore never have any
259- # corresponding service relation units. This
260- # scenario does not occur in actual deployments,
261- # but can happen in test circumstances. In
262- # particular, it will happen with a misconfigured
263- # provider, which exercises this codepath.
264- continue # should not occur, but status should not fail
265- relation_workflow_client = WorkflowStateClient(
266- client, relation_unit)
267- relation_workflow_state = \
268- yield relation_workflow_client.get_state()
269- relation_status[relation.relation_name] = dict(
270- state=relation_workflow_state)
271- u["relations"] = relation_status
272-
273- # after filtering units check if any matched or remove the
274- # service from the output
275- if filter_units and not unit_matched:
276- del service_data[service.service_name]
277- continue
278-
279- for relation in relations:
280- rel_services = yield relation.get_service_states()
281-
282- # A single related service implies a peer relation. More
283- # imply a bi-directional provides/requires relationship.
284- # In the later case we omit the local side of the relation
285- # when reporting.
286- if len(rel_services) > 1:
287- # Filter out self from multi-service relations.
288- rel_services = [
289- rsn for rsn in rel_services if rsn.service_name !=
290- service.service_name]
291-
292- if len(rel_services) > 1:
293- raise ValueError("Unexpected relationship with more "
294- "than 2 endpoints")
295-
296- rel_service = rel_services[0]
297- relation_data[relation.relation_name] = rel_service.service_name
298-
299- machines = yield machine_manager.get_all_machine_states()
300- for machine_state in machines:
301- if (filter_services or filter_units) and \
302- machine_state.id not in seen_machines:
303- continue
304-
305- instance_id = yield machine_state.get_instance_id()
306- m = {"instance-id": instance_id \
307- if instance_id is not None else "pending"}
308- if instance_id is not None:
309- try:
310- pm = yield machine_provider.get_machine(instance_id)
311- m["dns-name"] = pm.dns_name
312- except ProviderError:
313- # The provider doesn't have machine information
314- log.error(
315- "Machine provider information missing: machine %s" % (
316- machine_state.id))
317-
318- machine_data[machine_state.id] = m
319+ del units[unit_name]
320+
321+ elif filter_services:
322+ seen_machines.add(machine_id)
323+
324+ if (filter_services or filter_units) and \
325+ not service_data["units"]:
326+ del services[service_name]
327+
328+ machines = state["machines"]
329+ for machine_id, machine_data in machines.items():
330+ if (filter_services or filter_units) and \
331+ machine_id not in seen_machines:
332+ del machines[machine_id]
333
334 returnValue(state)
335
336
337=== modified file 'juju/control/tests/test_status.py'
338--- juju/control/tests/test_status.py 2011-09-30 03:18:49 +0000
339+++ juju/control/tests/test_status.py 2011-10-13 16:27:26 +0000
340@@ -5,16 +5,15 @@
341 import os
342
343 import yaml
344-from twisted.internet.defer import inlineCallbacks, returnValue
345+from twisted.internet.defer import inlineCallbacks
346
347-from .common import ControlToolTest
348-from juju.environment.environment import Environment
349 from juju.control import status
350 from juju.control import tests
351 from juju.state.endpoint import RelationEndpoint
352-from juju.state.tests.test_service import ServiceStateManagerTestBase
353+from juju.state.tests.test_status import StatusTestBase as StatusStateTestBase
354 from juju.unit.workflow import ZookeeperWorkflowState
355
356+
357 tests_path = os.path.dirname(inspect.getabsfile(tests))
358 sample_path = os.path.join(tests_path, "sample_cluster.yaml")
359 sample_cluster = yaml.load(open(sample_path, "r"))
360@@ -27,184 +26,12 @@
361 fp.close()
362
363
364-class StatusTestBase(ServiceStateManagerTestBase, ControlToolTest):
365-
366+class StatusTestBase(StatusStateTestBase):
367 @inlineCallbacks
368 def setUp(self):
369 yield super(StatusTestBase, self).setUp()
370- self.log = self.capture_logging()
371-
372- config = {
373- "environments": {
374- "firstenv": {
375- "type": "dummy",
376- "admin-secret": "homer"}}}
377- self.write_config(yaml.dump(config))
378- self.config.load()
379-
380- self.environment = self.config.get_default()
381- self.provider = self.environment.get_machine_provider()
382-
383 self.output = StringIO()
384
385- @inlineCallbacks
386- def set_unit_state(self, unit_state, state, port_protos=()):
387- unit_state.set_public_address(
388- "%s.example.com" % unit_state.unit_name.replace("/", "-"))
389- workflow_client = ZookeeperWorkflowState(self.client, unit_state)
390- yield workflow_client.set_state(state)
391- for port_proto in port_protos:
392- yield unit_state.open_port(*port_proto)
393-
394- @inlineCallbacks
395- def add_relation_unit_states(self, relation_state, unit_states, states):
396- for unit_state, state in zip(unit_states, states):
397- relation_unit_state = yield relation_state.add_unit_state(
398- unit_state)
399- workflow_client = ZookeeperWorkflowState(
400- self.client, relation_unit_state)
401- yield workflow_client.set_state(state)
402-
403- @inlineCallbacks
404- def add_relation_with_relation_units(
405- self,
406- source_endpoint, source_units, source_states,
407- dest_endpoint, dest_units, dest_states):
408- relation_state, service_relation_states = \
409- yield self.relation_state_manager.add_relation_state(
410- *[source_endpoint, dest_endpoint])
411- source_relation_state, dest_relation_state = service_relation_states
412- yield self.add_relation_unit_states(
413- source_relation_state, source_units, source_states)
414- yield self.add_relation_unit_states(
415- dest_relation_state, dest_units, dest_states)
416-
417- @inlineCallbacks
418- def build_topology(self, base=None):
419- """Build a simulated topology with a default machine configuration.
420-
421- This method returns a dict that can be used to get handles to
422- the constructed objects.
423- """
424- state = {}
425-
426- # build out the topology using the state managers
427- m1 = yield self.machine_state_manager.add_machine_state()
428- m2 = yield self.machine_state_manager.add_machine_state()
429- m3 = yield self.machine_state_manager.add_machine_state()
430- m4 = yield self.machine_state_manager.add_machine_state()
431- m5 = yield self.machine_state_manager.add_machine_state()
432- m6 = yield self.machine_state_manager.add_machine_state()
433- m7 = yield self.machine_state_manager.add_machine_state()
434-
435- # inform the provider about the machine
436- yield self.provider.start_machine({"machine-id": 0,
437- "dns-name": "steamcloud-1.com"})
438- yield self.provider.start_machine({"machine-id": 1,
439- "dns-name": "steamcloud-2.com"})
440- yield self.provider.start_machine({"machine-id": 2,
441- "dns-name": "steamcloud-3.com"})
442- yield self.provider.start_machine({"machine-id": 3,
443- "dns-name": "steamcloud-4.com"})
444- yield self.provider.start_machine({"machine-id": 4,
445- "dns-name": "steamcloud-5.com"})
446- yield self.provider.start_machine({"machine-id": 5,
447- "dns-name": "steamcloud-6.com"})
448- yield self.provider.start_machine({"machine-id": 6,
449- "dns-name": "steamcloud-7.com"})
450-
451- yield m1.set_instance_id(0)
452- yield m2.set_instance_id(1)
453- yield m3.set_instance_id(2)
454- yield m4.set_instance_id(3)
455- yield m5.set_instance_id(4)
456- yield m6.set_instance_id(5)
457- yield m7.set_instance_id(6)
458-
459- state["machines"] = [m1, m2, m3, m4, m5, m6, m7]
460-
461- # "Deploy" services
462- wordpress = yield self.add_service_from_charm("wordpress")
463- mysql = yield self.add_service_from_charm("mysql")
464- yield mysql.set_exposed_flag() # but w/ no open ports
465-
466- varnish = yield self.add_service_from_charm("varnish")
467- yield varnish.set_exposed_flag()
468- # w/o additional metadata
469- memcache = yield self.add_service("memcache")
470-
471- state["services"] = dict(wordpress=wordpress, mysql=mysql,
472- varnish=varnish, memcache=memcache)
473-
474- # add unit states to services and assign to machines
475- wpu = yield wordpress.add_unit_state()
476- yield wpu.assign_to_machine(m1)
477-
478- myu1 = yield mysql.add_unit_state()
479- myu2 = yield mysql.add_unit_state()
480- yield myu1.assign_to_machine(m2)
481- yield myu2.assign_to_machine(m3)
482-
483- vu1 = yield varnish.add_unit_state()
484- vu2 = yield varnish.add_unit_state()
485- yield vu1.assign_to_machine(m4)
486- yield vu2.assign_to_machine(m5)
487-
488- mc1 = yield memcache.add_unit_state()
489- mc2 = yield memcache.add_unit_state()
490- yield mc1.assign_to_machine(m6)
491- yield mc2.assign_to_machine(m7)
492-
493- # Set the lifecycle state and open ports, if any, for each unit state.
494- yield self.set_unit_state(wpu, "started", [(80, "tcp"), (443, "tcp")])
495- yield self.set_unit_state(myu1, "started")
496- yield self.set_unit_state(myu2, "stopped")
497- yield self.set_unit_state(vu1, "started", [(80, "tcp")])
498- yield self.set_unit_state(vu2, "started", [(80, "tcp")])
499- yield self.set_unit_state(mc1, "started")
500- yield self.set_unit_state(mc2, "installed")
501-
502- # Wordpress integrates with each of the following
503- # services. Each relation endpoint is used to define the
504- # specific relation to be established.
505- mysql_ep = RelationEndpoint(
506- "mysql", "client-server", "db", "server")
507- memcache_ep = RelationEndpoint(
508- "memcache", "client-server", "cache", "server")
509- varnish_ep = RelationEndpoint(
510- "varnish", "client-server", "proxy", "client")
511-
512- wordpress_db_ep = RelationEndpoint(
513- "wordpress", "client-server", "db", "client")
514- wordpress_cache_ep = RelationEndpoint(
515- "wordpress", "client-server", "cache", "client")
516- wordpress_proxy_ep = RelationEndpoint(
517- "wordpress", "client-server", "proxy", "server")
518-
519- # Create relation service units for each of these relations
520- yield self.add_relation_with_relation_units(
521- mysql_ep, [myu1, myu2], ["up", "departed"],
522- wordpress_db_ep, [wpu], ["up"])
523- yield self.add_relation_with_relation_units(
524- memcache_ep, [mc1, mc2], ["up", "down"],
525- wordpress_cache_ep, [wpu], ["up"])
526- yield self.add_relation_with_relation_units(
527- varnish_ep, [vu1, vu2], ["up", "up"],
528- wordpress_proxy_ep, [wpu], ["up"])
529-
530- state["relations"] = dict(
531- wordpress=[wpu],
532- mysql=[myu1, myu2],
533- varnish=[vu1, vu2],
534- memcache=[mc1, mc2]
535- )
536- returnValue(state)
537-
538- def mock_environment(self):
539- mock_environment = self.mocker.patch(Environment)
540- mock_environment.get_machine_provider()
541- self.mocker.result(self.provider)
542-
543
544 class StatusTest(StatusTestBase):
545
546@@ -212,6 +39,9 @@
547 def test_peer_relation(self):
548 """Verify status works with peer relations.
549 """
550+ self.mock_environment()
551+ self.mocker.replay()
552+
553 m1 = yield self.machine_state_manager.add_machine_state()
554 m2 = yield self.machine_state_manager.add_machine_state()
555 yield self.provider.start_machine({"machine-id": 0,
556@@ -256,6 +86,8 @@
557 @inlineCallbacks
558 def test_collect(self):
559 yield self.build_topology()
560+ yield self.mock_environment()
561+ self.mocker.replay()
562
563 # collect everything
564 state = yield status.collect(None, self.provider, self.client, None)
565@@ -320,6 +152,8 @@
566 @inlineCallbacks
567 def test_collect_filtering(self):
568 yield self.build_topology()
569+ yield self.mock_environment()
570+ self.mocker.replay()
571
572 # collect by service name
573 state = yield status.collect(
574@@ -388,6 +222,9 @@
575 @inlineCallbacks
576 def test_collect_with_unassigned_machines(self):
577 yield self.build_topology()
578+ yield self.mock_environment()
579+ self.mocker.replay()
580+
581 # get a service's units and unassign one of them
582 wordpress = yield self.service_state_manager.get_service_state(
583 "wordpress")
584@@ -399,6 +236,7 @@
585 yield unit.set_public_address(None)
586 # test that the machine is in state information w/o assignment
587 state = yield status.collect(None, self.provider, self.client, None)
588+
589 # verify that the unassigned machine appears in the state
590 self.assertEqual(state["machines"][machine_id],
591 {"dns-name": "steamcloud-1.com",
592@@ -422,6 +260,9 @@
593 @inlineCallbacks
594 def test_collect_with_removed_unit(self):
595 yield self.build_topology()
596+ yield self.mock_environment()
597+ self.mocker.replay()
598+
599 # get a service's units and unassign one of them
600 wordpress = yield self.service_state_manager.get_service_state(
601 "wordpress")
602@@ -432,7 +273,8 @@
603 yield wordpress.remove_unit_state(unit)
604
605 # test that wordpress has no assigned service units
606- state = yield status.collect(None, self.provider, self.client, None)
607+ state = yield status.collect(None, self.provider, self.client,
608+ None)
609 self.assertEqual(
610 state["services"]["wordpress"],
611 {"charm": "local:series/wordpress-3",
612@@ -454,6 +296,8 @@
613 # verify that we get some error reporting if the provider
614 # doesn't have proper machine info
615 yield self.build_topology()
616+ yield self.mock_environment()
617+ self.mocker.replay()
618
619 # add a new machine to the topology (but not the provider)
620 # and status it
621@@ -654,7 +498,7 @@
622 self.output,
623 None)
624
625- #dump_stringio(self.output, "/tmp/ens.svg")
626+ #dump_stringio(self.output, "/tmp/juju.svg")
627
628 # look for a hint the process completed.
629 self.assertIn("</svg>", self.output.getvalue())
630
631=== modified file 'juju/environment/config.py'
632--- juju/environment/config.py 2011-10-06 22:24:21 +0000
633+++ juju/environment/config.py 2011-10-13 16:27:26 +0000
634@@ -204,7 +204,7 @@
635 default location, and if it doesn't work, it will write down a
636 sample configuration there.
637
638- This is handy for a default initialization.
639+ This is handy for a default initialisation.
640 """
641 try:
642 self.load()
643
644=== modified file 'juju/environment/tests/test_config.py'
645--- juju/environment/tests/test_config.py 2011-10-06 22:24:21 +0000
646+++ juju/environment/tests/test_config.py 2011-10-13 16:27:26 +0000
647@@ -499,7 +499,6 @@
648 with open(self.default_path) as file:
649 config = yaml.load(file.read())
650 config["environments"]["sample"]["region"] = "ap-southeast-2"
651-
652 self.write_config(yaml.dump(config), other_path=True)
653
654 e = self.assertRaises(EnvironmentsConfigError,
655
656=== modified file 'juju/providers/common/tests/test_utils.py'
657--- juju/providers/common/tests/test_utils.py 2011-09-22 13:23:00 +0000
658+++ juju/providers/common/tests/test_utils.py 2011-10-13 16:27:26 +0000
659@@ -1,13 +1,18 @@
660 import os
661-from yaml import load
662+
663+import yaml
664
665 from twisted.python.failure import Failure
666+from twisted.internet.defer import inlineCallbacks
667
668+from juju.state.environment import EnvironmentStateManager
669 from juju.environment.tests.test_config import EnvironmentsConfigTestBase
670+from juju.state.tests.test_service import ServiceStateManagerTestBase
671 from juju.errors import ProviderInteractionError, JujuError
672 from juju.lib.testing import TestCase
673 from juju.providers.common.utils import (
674- convert_unknown_error, format_cloud_init, get_user_authorized_keys)
675+ convert_unknown_error, format_cloud_init,
676+ get_user_authorized_keys, get_active_machine_provider)
677
678
679 class CommonProviderTests(EnvironmentsConfigTestBase):
680@@ -107,7 +112,7 @@
681
682 lines = output.split("\n")
683 self.assertEqual(lines.pop(0), "#cloud-config")
684- config = load("\n".join(lines))
685+ config = yaml.load("\n".join(lines))
686 self.assertEqual(config["ssh_authorized_keys"], ["zebra"])
687 self.assertTrue(config["apt-update"])
688 self.assertTrue(config["apt-upgrade"])
689@@ -115,3 +120,29 @@
690 self.assertEqual(config["apt_sources"], formatted_repos)
691 self.assertEqual(config["runcmd"], scripts)
692 self.assertEqual(config["machine-data"]["magic"], [1, 2, 3])
693+
694+
695+class TestActiveMachineProvider(ServiceStateManagerTestBase, EnvironmentsConfigTestBase):
696+
697+ @inlineCallbacks
698+ def setUp(self):
699+ yield super(TestActiveMachineProvider, self).setUp()
700+ config = {
701+ "juju": "environments",
702+ "environments": {
703+ "firstenv": {
704+ "type": "dummy",
705+ "admin-secret": "homer"}}}
706+
707+ self.write_config(yaml.dump(config))
708+ self.config.load()
709+
710+ env_manager = EnvironmentStateManager(self.client)
711+ yield env_manager.set_config_state(self.config, "firstenv")
712+
713+ @inlineCallbacks
714+ def test_get_active_machine_provider(self):
715+ provider = yield get_active_machine_provider(self.client)
716+ self.assertEqual(provider.provider_type, "dummy")
717+
718+
719
720=== modified file 'juju/providers/common/utils.py'
721--- juju/providers/common/utils.py 2011-09-23 20:35:02 +0000
722+++ juju/providers/common/utils.py 2011-10-13 16:27:26 +0000
723@@ -2,9 +2,11 @@
724 import os
725 from yaml import safe_dump
726
727+from twisted.internet.defer import inlineCallbacks, returnValue
728 from twisted.python.failure import Failure
729
730 from juju.errors import JujuError, ProviderInteractionError
731+from juju.state.environment import EnvironmentStateManager
732
733 log = logging.getLogger("juju.common")
734
735@@ -126,3 +128,17 @@
736 output = safe_dump(cloud_config)
737 output = "#cloud-config\n%s" % (output)
738 return output
739+
740+
741+@inlineCallbacks
742+def get_active_machine_provider(client):
743+ """Return the machine environment for the running environment.
744+
745+ This is used on the deployment side to get a machine provider
746+ which can be used to pull machine specific values like DNS names.
747+ """
748+ env_manager = EnvironmentStateManager(client)
749+ env_config = yield env_manager.get_config()
750+ env = env_config.get_default()
751+ machine_provider = env.get_machine_provider()
752+ returnValue(machine_provider)
753
754=== added file 'juju/state/status.py'
755--- juju/state/status.py 1970-01-01 00:00:00 +0000
756+++ juju/state/status.py 2011-10-13 16:27:26 +0000
757@@ -0,0 +1,181 @@
758+import logging
759+
760+from twisted.internet.defer import inlineCallbacks, returnValue
761+from txzookeeper.utils import retry_change
762+import yaml
763+import zookeeper
764+
765+from juju.errors import ProviderError
766+from juju.providers.common.utils import get_active_machine_provider
767+from juju.state.errors import UnitRelationStateNotFound
768+from juju.state.charm import CharmStateManager
769+from juju.state.machine import MachineStateManager
770+from juju.state.relation import RelationStateManager
771+from juju.state.service import ServiceStateManager
772+from juju.state.base import StateBase
773+from juju.unit.workflow import WorkflowStateClient
774+
775+# Version of the status serialisation
776+VERSION = 1
777+
778+log = logging.getLogger("juju.state.status")
779+
780+
781+class StatusManager(StateBase):
782+ """Watch deployments for status state changes."""
783+ def __init__(self, client):
784+ super(StatusManager, self).__init__(client)
785+ self._status = None
786+
787+ @inlineCallbacks
788+ def get_status(self):
789+ """Get current status information."""
790+ if self._status is None:
791+ status = yield self._load_status()
792+ if status is None:
793+ status = yield self.collect()
794+
795+ self._status = status
796+ returnValue(self._status)
797+
798+ @inlineCallbacks
799+ def collect(self):
800+ """Collect and cache all status state.
801+
802+ This method typically is called on topology change and creates
803+ a version of the status which is held in this manager and returned
804+ until the next change.
805+ """
806+ charm_manager = CharmStateManager(self._client)
807+ machine_manager = MachineStateManager(self._client)
808+ relation_manager = RelationStateManager(self._client)
809+ service_manager = ServiceStateManager(self._client)
810+
811+ machine_provider = yield get_active_machine_provider(self._client)
812+
813+ service_data = {}
814+ machine_data = {}
815+
816+ state = dict(services=service_data, machines=machine_data)
817+ services = yield service_manager.get_all_service_states()
818+ for service in services:
819+ unit_data = {}
820+ relation_data = {}
821+ charm_id = yield service.get_charm_id()
822+ charm = yield charm_manager.get_charm_state(charm_id)
823+
824+ service_data[service.service_name] = dict(units=unit_data,
825+ charm=charm.id,
826+ relations=relation_data)
827+ exposed = yield service.get_exposed_flag()
828+ if exposed:
829+ service_data[service.service_name].update(exposed=exposed)
830+
831+ units = yield service.get_all_unit_states()
832+ relations = yield relation_manager.get_relations_for_service(service)
833+
834+ for unit in units:
835+ u = unit_data[unit.unit_name] = dict()
836+ u["public-address"] = yield unit.get_public_address()
837+ machine_id = yield unit.get_assigned_machine_id()
838+ u["machine"] = machine_id
839+
840+ unit_workflow_client = WorkflowStateClient(self._client, unit)
841+ u["state"] = yield unit_workflow_client.get_state()
842+ if exposed:
843+ open_ports = yield unit.get_open_ports()
844+ u["open-ports"] = ["{port}/{proto}".format(**port_info)
845+ for port_info in open_ports]
846+
847+ # collect info on each relation for the service unit
848+ relation_status = {}
849+ for relation in relations:
850+ try:
851+ relation_unit = yield relation.get_unit_state(unit)
852+ except UnitRelationStateNotFound:
853+ # This exception will occur when relations are
854+ # established between services without service
855+ # units, and therefore never have any
856+ # corresponding service relation units. This
857+ # scenario does not occur in actual deployments,
858+ # but can happen in test circumstances. In
859+ # particular, it will happen with a misconfigured
860+ # provider, which exercises this codepath.
861+ continue # should not occur, but status should not fail
862+ relation_workflow_client = WorkflowStateClient(
863+ self._client, relation_unit)
864+ relation_workflow_state = \
865+ yield relation_workflow_client.get_state()
866+ relation_status[relation.relation_name] = dict(
867+ state=relation_workflow_state)
868+ u["relations"] = relation_status
869+
870+ for relation in relations:
871+ rel_services = yield relation.get_service_states()
872+
873+ # A single related service implies a peer relation. More
874+ # imply a bi-directional provides/requires relationship.
875+ # In the later case we omit the local side of the relation
876+ # when reporting.
877+ if len(rel_services) > 1:
878+ # Filter out self from multi-service relations.
879+ rel_services = [
880+ rsn for rsn in rel_services if rsn.service_name !=
881+ service.service_name]
882+
883+ if len(rel_services) > 1:
884+ raise ValueError("Unexpected relationship with more "
885+ "than 2 endpoints")
886+
887+ rel_service = rel_services[0]
888+ relation_data[relation.relation_name] = rel_service.service_name
889+
890+ machines = yield machine_manager.get_all_machine_states()
891+ for machine_state in machines:
892+ instance_id = yield machine_state.get_instance_id()
893+ m = {"instance-id": instance_id if instance_id is not None else "pending"}
894+ if instance_id is not None:
895+ try:
896+ pm = yield machine_provider.get_machine(instance_id)
897+ m["dns-name"] = pm.dns_name
898+ except ProviderError:
899+ # The provider doesn't have machine information
900+ log.error("Machine provider information missing: machine %s" % (
901+ machine_state.id))
902+
903+ machine_data[machine_state.id] = m
904+
905+ yield self._store_status(state)
906+ returnValue(state)
907+
908+ @inlineCallbacks
909+ def _load_status(self):
910+ try:
911+ ystatus, _ = yield self._client.get("/status")
912+ except zookeeper.NoNodeException:
913+ returnValue(None)
914+
915+ status = yaml.load(ystatus)
916+ if status is None or "version" not in status or status["version"] != VERSION:
917+ returnValue(None)
918+
919+ returnValue(status)
920+
921+ @inlineCallbacks
922+ def _store_status(self, versioned_state):
923+ versioned_state["version"] = VERSION
924+ ys = yaml.dump(versioned_state)
925+ yield retry_change(self._client, "/status", lambda *a: ys)
926+ self._status = versioned_state
927+
928+ def watch_status(self, callback=None):
929+ """Set a permanent watch to refresh status on change."""
930+
931+ @inlineCallbacks
932+ def watch_topology(old_topology, new_topology):
933+ state = yield self.collect()
934+ if callback:
935+ callback(state)
936+ returnValue(state)
937+
938+ return self._watch_topology(watch_topology)
939
940=== added file 'juju/state/tests/test_status.py'
941--- juju/state/tests/test_status.py 1970-01-01 00:00:00 +0000
942+++ juju/state/tests/test_status.py 2011-10-13 16:27:26 +0000
943@@ -0,0 +1,387 @@
944+from twisted.internet.defer import inlineCallbacks, returnValue
945+import yaml
946+
947+from juju.errors import JujuError
948+from juju.environment.environment import Environment
949+from juju.environment.tests.test_config import EnvironmentsConfigTestBase
950+from juju.state.endpoint import RelationEndpoint
951+from juju.state.environment import EnvironmentStateManager
952+from juju.state.status import StatusManager
953+from juju.state.tests.test_service import ServiceStateManagerTestBase
954+from juju.unit.workflow import ZookeeperWorkflowState
955+
956+
957+class StatusTestBase(ServiceStateManagerTestBase, EnvironmentsConfigTestBase):
958+
959+ @inlineCallbacks
960+ def setUp(self):
961+ yield super(StatusTestBase, self).setUp()
962+ config = {
963+ "juju": "environments",
964+ "environments": {
965+ "firstenv": {
966+ "type": "dummy",
967+ "admin-secret": "homer"}}}
968+ self.write_config(yaml.dump(config))
969+ self.config.load()
970+
971+ self.environment = self.config.get_default()
972+ env_manager = EnvironmentStateManager(self.client)
973+ yield env_manager.set_config_state(self.config, "firstenv")
974+ self.provider = yield self.environment.get_machine_provider()
975+
976+ def mock_environment(self):
977+ mock_environment = self.mocker.patch(Environment)
978+ mock_environment.get_machine_provider()
979+ self.mocker.result(self.provider)
980+ self.mocker.count(min=0, max=None)
981+
982+ @inlineCallbacks
983+ def set_unit_state(self, unit_state, state, port_protos=()):
984+ yield unit_state.set_public_address(
985+ "%s.example.com" % unit_state.unit_name.replace("/", "-"))
986+ workflow_client = ZookeeperWorkflowState(self.client, unit_state)
987+ yield workflow_client.set_state(state)
988+ for port_proto in port_protos:
989+ yield unit_state.open_port(*port_proto)
990+
991+ @inlineCallbacks
992+ def add_relation_unit_states(self, relation_state, unit_states, states):
993+ for unit_state, state in zip(unit_states, states):
994+ relation_unit_state = yield relation_state.add_unit_state(
995+ unit_state)
996+ workflow_client = ZookeeperWorkflowState(
997+ self.client, relation_unit_state)
998+ yield workflow_client.set_state(state)
999+
1000+ @inlineCallbacks
1001+ def add_relation_with_relation_units(
1002+ self,
1003+ source_endpoint, source_units, source_states,
1004+ dest_endpoint, dest_units, dest_states):
1005+ relation_state, service_relation_states = \
1006+ yield self.relation_state_manager.add_relation_state(
1007+ *[source_endpoint, dest_endpoint])
1008+ source_relation_state, dest_relation_state = service_relation_states
1009+ yield self.add_relation_unit_states(
1010+ source_relation_state, source_units, source_states)
1011+ yield self.add_relation_unit_states(
1012+ dest_relation_state, dest_units, dest_states)
1013+
1014+ @inlineCallbacks
1015+ def build_topology(self, base=None):
1016+ """Build a simulated topology with a default machine configuration.
1017+
1018+ This method returns a dict that can be used to get handles to
1019+ the constructed objects.
1020+ """
1021+ state = {}
1022+
1023+ # build out the topology using the state managers
1024+
1025+ # inform the provider about the machine
1026+ state["machines"] = machine_list = []
1027+ for i in range(7):
1028+ m = yield self.machine_state_manager.add_machine_state()
1029+ machine_list.append(m)
1030+ public_address = "steamcloud-%d.com" % (i + 1)
1031+ yield self.provider.start_machine({"machine-id": i,
1032+ "dns-name": public_address})
1033+
1034+ yield m.set_instance_id(i)
1035+
1036+ # "Deploy" services
1037+ wordpress = yield self.add_service_from_charm("wordpress")
1038+ mysql = yield self.add_service_from_charm("mysql")
1039+ yield mysql.set_exposed_flag() # but w/ no open ports
1040+
1041+ varnish = yield self.add_service_from_charm("varnish")
1042+ yield varnish.set_exposed_flag()
1043+ # w/o additional metadata
1044+ memcache = yield self.add_service("memcache")
1045+
1046+ state["services"] = dict(wordpress=wordpress, mysql=mysql,
1047+ varnish=varnish, memcache=memcache)
1048+
1049+ # add unit states to services and assign to machines
1050+ wpu = yield wordpress.add_unit_state()
1051+ yield wpu.assign_to_machine(machine_list[0])
1052+
1053+ myu1 = yield mysql.add_unit_state()
1054+ myu2 = yield mysql.add_unit_state()
1055+ yield myu1.assign_to_machine(machine_list[1])
1056+ yield myu2.assign_to_machine(machine_list[2])
1057+
1058+ vu1 = yield varnish.add_unit_state()
1059+ vu2 = yield varnish.add_unit_state()
1060+ yield vu1.assign_to_machine(machine_list[3])
1061+ yield vu2.assign_to_machine(machine_list[4])
1062+
1063+ mc1 = yield memcache.add_unit_state()
1064+ mc2 = yield memcache.add_unit_state()
1065+ yield mc1.assign_to_machine(machine_list[5])
1066+ yield mc2.assign_to_machine(machine_list[6])
1067+
1068+ # Set the lifecycle state and open ports, if any, for each unit state.
1069+ yield self.set_unit_state(wpu, "started", [(80, "tcp"), (443, "tcp")])
1070+ yield self.set_unit_state(myu1, "started")
1071+ yield self.set_unit_state(myu2, "stopped")
1072+ yield self.set_unit_state(vu1, "started", [(80, "tcp")])
1073+ yield self.set_unit_state(vu2, "started", [(80, "tcp")])
1074+ yield self.set_unit_state(mc1, "started")
1075+ yield self.set_unit_state(mc2, "installed")
1076+
1077+ # Wordpress integrates with each of the following
1078+ # services. Each relation endpoint is used to define the
1079+ # specific relation to be established.
1080+ mysql_ep = RelationEndpoint(
1081+ "mysql", "client-server", "db", "server")
1082+ memcache_ep = RelationEndpoint(
1083+ "memcache", "client-server", "cache", "server")
1084+ varnish_ep = RelationEndpoint(
1085+ "varnish", "client-server", "proxy", "client")
1086+
1087+ wordpress_db_ep = RelationEndpoint(
1088+ "wordpress", "client-server", "db", "client")
1089+ wordpress_cache_ep = RelationEndpoint(
1090+ "wordpress", "client-server", "cache", "client")
1091+ wordpress_proxy_ep = RelationEndpoint(
1092+ "wordpress", "client-server", "proxy", "server")
1093+
1094+ # Create relation service units for each of these relations
1095+ yield self.add_relation_with_relation_units(
1096+ mysql_ep, [myu1, myu2], ["up", "departed"],
1097+ wordpress_db_ep, [wpu], ["up"])
1098+ yield self.add_relation_with_relation_units(
1099+ memcache_ep, [mc1, mc2], ["up", "down"],
1100+ wordpress_cache_ep, [wpu], ["up"])
1101+ yield self.add_relation_with_relation_units(
1102+ varnish_ep, [vu1, vu2], ["up", "up"],
1103+ wordpress_proxy_ep, [wpu], ["up"])
1104+
1105+ state["relations"] = dict(
1106+ wordpress=[wpu],
1107+ mysql=[myu1, myu2],
1108+ varnish=[vu1, vu2],
1109+ memcache=[mc1, mc2]
1110+ )
1111+ returnValue(state)
1112+
1113+ def validate_topology(self, state):
1114+ services = state["services"]
1115+ self.assertIn("wordpress", services)
1116+ self.assertIn("varnish", services)
1117+ self.assertIn("mysql", services)
1118+
1119+ # and verify the specifics of a single service
1120+ self.assertTrue("mysql" in services)
1121+ units = list(services["mysql"]["units"])
1122+ self.assertEqual(len(units), 2)
1123+
1124+ self.assertEqual(state["machines"][0],
1125+ {"instance-id": 0,
1126+ "dns-name": "steamcloud-1.com"})
1127+
1128+ self.assertEqual(services["mysql"]["relations"],
1129+ {"db": "wordpress"})
1130+
1131+ self.assertEqual(services["wordpress"]["relations"],
1132+ {"cache": "memcache",
1133+ "db": "mysql",
1134+ "proxy": "varnish"})
1135+
1136+ self.assertEqual(
1137+ services["varnish"],
1138+ {"units":
1139+ {"varnish/1": {
1140+ "machine": 4,
1141+ "state": "started",
1142+ "public-address": "varnish-1.example.com",
1143+ "open-ports": ["80/tcp"],
1144+ "relations": {"proxy": {"state": "up"}}},
1145+ "varnish/0": {
1146+ "machine": 3,
1147+ "state": "started",
1148+ "public-address": "varnish-0.example.com",
1149+ "open-ports": ["80/tcp"],
1150+ "relations": {"proxy": {"state": "up"}}}},
1151+ "exposed": True,
1152+ "charm": "local:series/varnish-1",
1153+ "relations": {"proxy": "wordpress"}})
1154+
1155+ self.assertEqual(
1156+ services["wordpress"],
1157+ {"charm": "local:series/wordpress-3",
1158+ "relations": {
1159+ "cache": "memcache",
1160+ "db": "mysql",
1161+ "proxy": "varnish"},
1162+ "units": {
1163+ "wordpress/0": {
1164+ "machine": 0,
1165+ "public-address": "wordpress-0.example.com",
1166+ "relations": {
1167+ "cache": {"state": "up"},
1168+ "db": {"state": "up"},
1169+ "proxy": {"state": "up"}},
1170+ "state": "started"}}})
1171+
1172+
1173+
1174+class TestStatusStateManager(StatusTestBase):
1175+ @inlineCallbacks
1176+ def test_collect(self):
1177+ yield self.build_topology()
1178+
1179+ # we need the provider that knows about our dummy machines
1180+ self.mock_environment()
1181+ self.mocker.replay()
1182+
1183+ # collect everything
1184+ sm = StatusManager(self.client)
1185+ state = yield sm.collect()
1186+ self.validate_topology(state)
1187+
1188+ @inlineCallbacks
1189+ def test_get_status(self):
1190+ yield self.build_topology()
1191+
1192+ # we need the provider that knows about our dummy machines
1193+ self.mock_environment()
1194+ self.mocker.replay()
1195+
1196+ # collect everything
1197+ sm = StatusManager(self.client)
1198+ state = yield sm.get_status()
1199+ self.validate_topology(state)
1200+
1201+ @inlineCallbacks
1202+ def test_watch_status(self):
1203+ yield self.build_topology()
1204+
1205+ # we need the provider that knows about our dummy machines
1206+ self.mock_environment()
1207+ self.mocker.replay()
1208+
1209+ # collect everything
1210+ sm = StatusManager(self.client)
1211+
1212+ yield sm.watch_status()
1213+
1214+ state = yield sm.get_status()
1215+ self.validate_topology(state)
1216+
1217+ # now make a change, and see if the manager has it
1218+ wp_service = yield self.service_state_manager.get_service_state("wordpress")
1219+ self.assertEqual(len(state["services"]["wordpress"]["units"]), 1)
1220+
1221+ # create a new unit
1222+ yield wp_service.add_unit_state()
1223+ yield self.sleep(1)
1224+ # validate we see the change
1225+ state2 = yield sm.get_status()
1226+ self.assertTrue(state2 is not state)
1227+
1228+ wordpress = state2["services"]["wordpress"]
1229+ self.assertEqual(len(wordpress["units"]), 2)
1230+
1231+ # and a final unit
1232+ yield wp_service.add_unit_state()
1233+ yield self.sleep(1)
1234+ state3 = yield sm.get_status()
1235+ wordpress = state3["services"]["wordpress"]
1236+ self.assertEqual(len(wordpress["units"]), 3)
1237+
1238+ @inlineCallbacks
1239+ def test_watch_status_with_callback(self):
1240+ yield self.build_topology()
1241+
1242+ # we need the provider that knows about our dummy machines
1243+ self.mock_environment()
1244+ self.mocker.replay()
1245+
1246+ # a callback for status changes
1247+ status_state = {}
1248+
1249+ def status_change(state):
1250+ status_state.clear()
1251+ status_state.update(state)
1252+
1253+ # collect everything
1254+ sm = StatusManager(self.client)
1255+
1256+ yield sm.watch_status(status_change)
1257+
1258+ state = yield sm.get_status()
1259+ self.validate_topology(state)
1260+ # validate that the callback set our local to the proper state
1261+ self.assertEqual(status_state, state)
1262+
1263+ # now make a change, and see if the manager has it
1264+ wp_service = yield self.service_state_manager.get_service_state("wordpress")
1265+ self.assertEqual(len(state["services"]["wordpress"]["units"]), 1)
1266+
1267+ # create a new unit
1268+ yield wp_service.add_unit_state()
1269+ yield self.sleep(1)
1270+ # validate we see the change
1271+ state2 = yield sm.get_status()
1272+ self.assertTrue(state2 is not state)
1273+
1274+ wordpress = state2["services"]["wordpress"]
1275+ self.assertEqual(len(wordpress["units"]), 2)
1276+
1277+ # and a final unit
1278+ yield wp_service.add_unit_state()
1279+ yield self.sleep(1)
1280+ state3 = yield sm.get_status()
1281+ wordpress = state3["services"]["wordpress"]
1282+ self.assertEqual(len(wordpress["units"]), 3)
1283+
1284+ self.assertEqual(status_state, state3)
1285+
1286+ @inlineCallbacks
1287+ def test_load_status(self):
1288+ # even though this is a private method we test its logic
1289+ # directly
1290+ self.mock_environment()
1291+ self.mocker.replay()
1292+
1293+ sm = StatusManager(self.client)
1294+ result = yield sm._load_status()
1295+ self.assertEqual(result, None)
1296+
1297+ # Store an empty dict
1298+ yield sm._store_status({})
1299+ result = yield sm._load_status()
1300+ self.assertEqual(result, {"version": 1})
1301+
1302+ # now write the status node directly w/o version
1303+ yield self.client.set("/status", yaml.dump({}))
1304+ error = yield sm._load_status()
1305+ # invalid state, no version (force retry at higher level call)
1306+ self.assertEqual(error, None)
1307+
1308+ # likewise writing an invalid version produces an error
1309+ yield self.client.set("/status", yaml.dump({"version": "ZZZ"}))
1310+ error = yield sm._load_status()
1311+ # invalid state, bad/unknown version (force retry at higher level call)
1312+ self.assertEqual(error, None)
1313+
1314+ @inlineCallbacks
1315+ def test_store_status(self):
1316+ # even though this is a private method we test its logic
1317+ # directly
1318+ from juju.state.status import VERSION
1319+ self.mock_environment()
1320+ self.mocker.replay()
1321+
1322+ sm = StatusManager(self.client)
1323+ yield sm._store_status({})
1324+
1325+ # verify that we can find the encoded version inside the yaml
1326+ # node
1327+ data, _ = yield self.client.get("/status")
1328+ state = yaml.load(data)
1329+ self.assertEqual(state, {"version": VERSION})
1330+
