Merge lp:~chad.smith/landscape-client/ha-manager-skeleton into lp:~landscape/landscape-client/trunk

Proposed by Chad Smith
Status: Merged
Approved by: Chad Smith
Approved revision: 636
Merged at revision: 628
Proposed branch: lp:~chad.smith/landscape-client/ha-manager-skeleton
Merge into: lp:~landscape/landscape-client/trunk
Diff against target: 597 lines (+547/-3)
5 files modified
landscape/manager/config.py (+1/-1)
landscape/manager/haservice.py (+205/-0)
landscape/manager/tests/test_config.py (+2/-1)
landscape/manager/tests/test_haservice.py (+331/-0)
landscape/message_schemas.py (+8/-1)
To merge this branch: bzr merge lp:~chad.smith/landscape-client/ha-manager-skeleton
Reviewer Review Type Date Requested Status
Jerry Seutter (community) Approve
Christopher Armstrong (community) Approve
Review via email: mp+148593@code.launchpad.net

Commit message

Initial HA service manager plugin for landscape-client to better enable OpenStack live upgrades. This manager expects generic HA enablement and health scripts (add_to_cluster, remove_from_cluster and health_checks.d) delivered by a charm at /var/lib/juju/units/<charm_name>/charm/.

This plugin only activates upon receipt of a change-ha-service message from landscape-server. It will only take action with haproxy-configured charms delivering the above-mentioned scripts. Without scripts or a health_checks.d dir, this plugin will log success and continue with any package maintenance or updates.

Description of the change

Initial HA service manager plugin for landscape-client. Please take a good look over the deferreds and callbacks I'm using to make sure I'm not overusing callbacks.

This is round 1, a skeleton that depends on charm-delivered scripts which allow for local service health checks and setting the ha_cluster online or standby. There will likely be iterations on this as the server team (HA charm writers) add functionality to the OpenStack HA charms.

The manager will allow Landscape server to send change-ha-service messages requesting a service-state of "online" or "standby".
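For illustration, here is a hypothetical change-ha-service payload with the fields this branch's handler reads (service-name, unit-name, service-state, operation-id); the values are made up:

```python
# Hypothetical example of a change-ha-service message as the client handler
# expects it. Field names come from this branch; values are illustrative only.
message = {
    "type": "change-ha-service",
    "service-name": "keystone",      # the juju service
    "unit-name": "keystone-0",       # the specific unit on this computer
    "service-state": "standby",      # "online" or "standby"
    "operation-id": 1,               # echoed back in the operation-result
}

# The handler rejects any service-state other than the two valid states.
valid_states = (u"online", u"standby")
assert message["service-state"] in valid_states
```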

When change-ha-service requests the "standby" state:
  The manager will run the charm's remove_from_cluster script and validate its exit code, returning an operation-result message with a SUCCEEDED or FAILED status. If the charm doesn't deliver a remove_from_cluster script, a SUCCEEDED result is returned.

When change-ha-service requests the "online" state:
  The manager will first run and validate any charm health check scripts delivered at /var/lib/juju/units/<charm_name>/health_checks.d/*. Health checks are run via the run-parts command-line utility. Upon successful health check runs (or the absence of charm health checks) the manager proceeds to run the charm's add_to_cluster script. A bad exit code from any present script results in a FAILED operation-result message returned to the server, with the offending script listed in the optional result-text.
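The "online" flow above can be sketched synchronously with stdlib subprocess calls (the branch itself does this asynchronously with Twisted's getProcessOutputAndValue/getProcessValue; the helper names here are hypothetical):

```python
import os
import subprocess
import tempfile

def run_health_checks(unit_dir):
    """Run charm health checks via run-parts; succeed when none exist."""
    health_dir = os.path.join(unit_dir, "health_checks.d")
    if not os.path.isdir(health_dir) or not os.listdir(health_dir):
        # No scripts, no problem: the manager treats this as success.
        return "Skipping juju charm health checks. No scripts at %s." % health_dir
    result = subprocess.run(["run-parts", health_dir],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("Failed charm script: %s." % result.stderr.strip())
    return "All health checks succeeded."

def add_to_cluster(unit_dir):
    """Run the charm's add_to_cluster script if it exists."""
    script = os.path.join(unit_dir, "add_to_cluster")
    if not os.path.exists(script):
        # Missing script also counts as success for this plugin.
        return "No cluster scripts delivered; nothing to change."
    code = subprocess.run([script], env=os.environ).returncode
    if code != 0:
        raise RuntimeError(
            "Failed charm script: %s exited with return code %d." % (script, code))
    return "%s succeeded." % script

# Demo with an empty charm dir: both steps report success without running
# any external process.
demo_dir = tempfile.mkdtemp()  # stands in for /var/lib/juju/units/<unit>/charm
health_message = run_health_checks(demo_dir)
cluster_message = add_to_cluster(demo_dir)
```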

To test:
 In my local copy of landscape/trunk I hacked the RebootComputer message to instead send a static change-ha-service message so that I could test from a local client. I'll attach the trunk patch I was using, which might better enable integration testing.

Revision history for this message
Chad Smith (chad.smith) wrote :

hmmm no attach functionality to a merge proposal. Let's try a pastebin (which will probably need updating)

https://pastebin.canonical.com/84692/

629. By Chad Smith

Reintroduce CharmScriptError and RunPartsError to return them as Twisted deferred failures.

630. By Chad Smith

Fixed config unit test by adding HAService to the ALL_PLUGINS test.

Revision history for this message
Christopher Armstrong (radix) wrote :

[1] + def run_parts(script_dir):

You have an inner run_parts method that doesn't need to exist; you can just inline the code.

[2] + def _respond_failure(self, failure, opid):

You should use landscape.lib.log.log_failure here to log the failure.

[3] _format_exception and the following code at the end of handle_change_ha_service is unnecessary:

+ except Exception, e:
+ self._respond_failure(self._format_exception(e), opid)

Instead, do

except:
    self._respond_failure(Failure(), opid)

Failure(), when constructed with no arguments, automatically grabs the "current" exception and traceback.
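A rough pure-Python analogue (not Twisted itself) of that behavior, using sys.exc_info to grab the interpreter's current exception state inside a bare except block:

```python
import sys

# Mimics what twisted.python.failure.Failure() does with no arguments:
# pull the "current" exception type, value, and traceback from the
# interpreter state set by the active except block.
def capture_current_failure():
    exc_type, exc_value, exc_tb = sys.exc_info()
    return exc_type, exc_value, exc_tb

try:
    raise ValueError("charm script exploded")
except:
    captured_type, captured_value, captured_tb = capture_current_failure()
```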

[4] I recommend separating _respond_failure into two different functions, one for handling failure instances and another for handling string messages.

[5] In the places where you invoke getProcessValue, I think you'll still need to provide an environment in case the script relies on basic things like PATH, etc. It should be reasonable to just pass through os.environ like you do for the getProcessOutputAndValue call.
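A synchronous stand-in demonstrating the point: a child process only sees the environment it is given, so passing env=os.environ preserves things like PATH for charm scripts (the branch does the equivalent with Twisted's getProcessValue/getProcessOutputAndValue):

```python
import os
import subprocess
import sys

# The child reports whether PATH is present in its environment.
probe = "import os; print('PATH' in os.environ)"

# With env={}, the child sees no PATH at all.
empty_env = subprocess.run([sys.executable, "-c", probe],
                           env={}, capture_output=True, text=True)
# With env=os.environ, the parent's PATH (if any) is passed through.
full_env = subprocess.run([sys.executable, "-c", probe],
                          env=os.environ, capture_output=True, text=True)

assert empty_env.stdout.strip() == "False"
assert full_env.stdout.strip() == str("PATH" in os.environ)
```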

[6]

+ def validate_exit_code(code, script):
+ if code != 0:
+ return fail(CharmScriptError(script, code))
+ else:
+ return succeed("%s succeeded." % script)

This could be rewritten a bit nicer as

if code != 0:
    raise CharmScriptError(script, code)
else:
    return "%s succeeded." % script

[7] Same for parse_output.

review: Needs Fixing
631. By Chad Smith

per review comments:
 - drop unneeded run_parts method in favor of inline
 - use log_failure instead of logging.error
 - drop format_exception and use Failure() instead
 - on success, instead of return succeed("some string") just return "some string"
 - raise CharmScriptError instead of return fail(CharmScriptError)

632. By Chad Smith

- add _respond_failure_string to handle failure strings using logging.error.
- _respond_failure handles any raised exceptions using log_failure

Revision history for this message
Chad Smith (chad.smith) wrote :

thanks Chris for the input. I worked in those changes you suggested.

Revision history for this message
Christopher Armstrong (radix) wrote :

[8]

+ failure_string = "%s" % (failure.value)

You should probably just use failure.getErrorMessage(). (at first I was going to suggest str(failure.value) instead of "%s" %, but then I realized failure has a method for that already)

Looks good!

review: Approve
633. By Chad Smith

use failure.getErrorMessage() instead of str(failure.value)

Revision history for this message
Jerry Seutter (jseutter) wrote :

+1 looks good

Regarding "opid" - it looks like operator process id to me. I have been told to stop over-abbreviating variable names in the past. It would probably be best to use operation_id instead.

review: Approve
634. By Chad Smith

charm directory needs to tack on the 'charm' subdir for housing all charm deliverables. An example: /var/lib/juju/units/keystone-2/charm/

635. By Chad Smith

opid -> operation_id

636. By Chad Smith

lint fixes

Preview Diff

=== modified file 'landscape/manager/config.py'
--- landscape/manager/config.py 2013-01-24 20:15:36 +0000
+++ landscape/manager/config.py 2013-02-22 00:26:21 +0000
@@ -6,7 +6,7 @@
 
 ALL_PLUGINS = ["ProcessKiller", "PackageManager", "UserManager",
                "ShutdownManager", "Eucalyptus", "AptSources", "HardwareInfo",
-               "CephUsage", "KeystoneToken"]
+               "CephUsage", "KeystoneToken", "HAService"]
 
 
 class ManagerConfiguration(Configuration):
 
=== added file 'landscape/manager/haservice.py'
--- landscape/manager/haservice.py 1970-01-01 00:00:00 +0000
+++ landscape/manager/haservice.py 2013-02-22 00:26:21 +0000
@@ -0,0 +1,205 @@
+import logging
+import os
+
+from twisted.python.failure import Failure
+from twisted.internet.utils import getProcessValue, getProcessOutputAndValue
+from twisted.internet.defer import succeed
+
+from landscape.lib.log import log_failure
+from landscape.manager.plugin import ManagerPlugin, SUCCEEDED, FAILED
+
+
+class CharmScriptError(Exception):
+    """
+    Raised when a charm-provided script fails with a non-zero exit code.
+
+    @ivar script: the name of the failed script
+    @ivar code: the exit code of the failed script
+    """
+
+    def __init__(self, script, code):
+        self.script = script
+        self.code = code
+        Exception.__init__(self, self._get_message())
+
+    def _get_message(self):
+        return ("Failed charm script: %s exited with return code %d." %
+                (self.script, self.code))
+
+
+class RunPartsError(Exception):
+    """
+    Raised when a charm-provided health script run-parts directory contains
+    a health script that fails with a non-zero exit code.
+
+    @ivar stderr: the stderr from the failed run-parts command
+    """
+
+    def __init__(self, stderr):
+        self.message = ("%s" % stderr.split(":")[1].strip())
+        Exception.__init__(self, self._get_message())
+
+    def _get_message(self):
+        return "Failed charm script: %s." % self.message
+
+
+class HAService(ManagerPlugin):
+    """
+    Plugin to manage this computer's active participation in a
+    high-availability cluster. It depends on charms delivering both health
+    scripts and cluster_add cluster_remove scripts to function.
+    """
+
+    JUJU_UNITS_BASE = "/var/lib/juju/units"
+    CLUSTER_ONLINE = "add_to_cluster"
+    CLUSTER_STANDBY = "remove_from_cluster"
+    HEALTH_SCRIPTS_DIR = "health_checks.d"
+    STATE_STANDBY = u"standby"
+    STATE_ONLINE = u"online"
+
+    def register(self, registry):
+        super(HAService, self).register(registry)
+        registry.register_message("change-ha-service",
+                                  self.handle_change_ha_service)
+
+    def _respond(self, status, data, operation_id):
+        message = {"type": "operation-result",
+                   "status": status,
+                   "operation-id": operation_id}
+        if data:
+            if not isinstance(data, unicode):
+                # Let's decode result-text, replacing non-printable
+                # characters
+                message["result-text"] = data.decode("utf-8", "replace")
+            else:
+                message["result-text"] = data.decode("utf-8", "replace")
+        return self.registry.broker.send_message(message, True)
+
+    def _respond_success(self, data, message, operation_id):
+        logging.info(message)
+        return self._respond(SUCCEEDED, data, operation_id)
+
+    def _respond_failure(self, failure, operation_id):
+        """Handle exception failures."""
+        log_failure(failure)
+        return self._respond(FAILED, failure.getErrorMessage(), operation_id)
+
+    def _respond_failure_string(self, failure_string, operation_id):
+        """Only handle string failures."""
+        logging.error(failure_string)
+        return self._respond(FAILED, failure_string, operation_id)
+
+    def _run_health_checks(self, unit_name):
+        """
+        Exercise any discovered health check scripts, will return a deferred
+        success or fail.
+        """
+        health_dir = "%s/%s/charm/%s" % (
+            self.JUJU_UNITS_BASE, unit_name, self.HEALTH_SCRIPTS_DIR)
+        if not os.path.exists(health_dir) or len(os.listdir(health_dir)) == 0:
+            # No scripts, no problem
+            message = (
+                "Skipping juju charm health checks. No scripts at %s." %
+                health_dir)
+            logging.info(message)
+            return succeed(message)
+
+        def parse_output((stdout_data, stderr_data, status)):
+            if status != 0:
+                raise RunPartsError(stderr_data)
+            else:
+                return "All health checks succeeded."
+
+        result = getProcessOutputAndValue(
+            "run-parts", [health_dir], env=os.environ)
+        return result.addCallback(parse_output)
+
+    def _change_cluster_participation(self, _, unit_name, service_state):
+        """
+        Enables or disables a unit's participation in a cluster based on
+        running charm-delivered CLUSTER_ONLINE and CLUSTER_STANDBY scripts
+        if they exist. If the charm doesn't deliver scripts, return succeed().
+        """
+
+        unit_dir = "%s/%s/charm/" % (self.JUJU_UNITS_BASE, unit_name)
+        if service_state == u"online":
+            script = unit_dir + self.CLUSTER_ONLINE
+        else:
+            script = unit_dir + self.CLUSTER_STANDBY
+
+        if not os.path.exists(script):
+            logging.info("Ignoring juju charm cluster state change to '%s'. "
+                         "Charm script does not exist at %s." %
+                         (service_state, script))
+            return succeed(
+                "This computer is always a participant in its high-availabilty"
+                " cluster. No juju charm cluster settings changed.")
+
+        def run_script(script):
+            result = getProcessValue(script, env=os.environ)
+
+            def validate_exit_code(code, script):
+                if code != 0:
+                    raise CharmScriptError(script, code)
+                else:
+                    return "%s succeeded." % script
+            return result.addCallback(validate_exit_code, script)
+
+        return run_script(script)
+
+    def _perform_state_change(self, unit_name, service_state, operation_id):
+        """
+        Handle specific state change requests through calls to available
+        charm scripts like C{CLUSTER_ONLINE}, C{CLUSTER_STANDBY} and any
+        health check scripts. Assume success in any case where no scripts
+        exist for a given task.
+        """
+        d = succeed(None)
+        if service_state == self.STATE_ONLINE:
+            # Validate health of local service before we bring it online
+            # in the HAcluster
+            d = self._run_health_checks(unit_name)
+        d.addCallback(
+            self._change_cluster_participation, unit_name, service_state)
+        return d
+
+    def handle_change_ha_service(self, message):
+        """Parse incoming change-ha-service messages"""
+        operation_id = message["operation-id"]
+        try:
+            error_message = u""
+
+            service_name = message["service-name"]    # keystone
+            unit_name = message["unit-name"]          # keystone-0
+            service_state = message["service-state"]  # "online" | "standby"
+            change_message = (
+                "%s high-availability service set to %s" %
+                (service_name, service_state))
+
+            if service_state not in [self.STATE_STANDBY, self.STATE_ONLINE]:
+                error_message = (
+                    u"Invalid cluster participation state requested %s." %
+                    service_state)
+
+            unit_dir = "%s/%s/charm" % (self.JUJU_UNITS_BASE, unit_name)
+            if not os.path.exists(self.JUJU_UNITS_BASE):
+                error_message = (
+                    u"This computer is not deployed with juju. "
+                    u"Changing high-availability service not supported.")
+            elif not os.path.exists(unit_dir):
+                error_message = (
+                    u"This computer is not juju unit %s. Unable to "
+                    u"modify high-availability services." % unit_name)
+
+            if error_message:
+                return self._respond_failure_string(
+                    error_message, operation_id)
+
+            d = self._perform_state_change(
+                unit_name, service_state, operation_id)
+            d.addCallback(self._respond_success, change_message, operation_id)
+            d.addErrback(self._respond_failure, operation_id)
+            return d
+        except:
+            self._respond_failure(Failure(), operation_id)
+            return d
=== modified file 'landscape/manager/tests/test_config.py'
--- landscape/manager/tests/test_config.py 2013-01-24 18:22:32 +0000
+++ landscape/manager/tests/test_config.py 2013-02-22 00:26:21 +0000
@@ -13,7 +13,8 @@
         """By default all plugins are enabled."""
         self.assertEqual(["ProcessKiller", "PackageManager", "UserManager",
                           "ShutdownManager", "Eucalyptus", "AptSources",
-                          "HardwareInfo", "CephUsage", "KeystoneToken"],
+                          "HardwareInfo", "CephUsage", "KeystoneToken",
+                          "HAService"],
                          ALL_PLUGINS)
         self.assertEqual(ALL_PLUGINS, self.config.plugin_factories)
 
 
=== added file 'landscape/manager/tests/test_haservice.py'
--- landscape/manager/tests/test_haservice.py 1970-01-01 00:00:00 +0000
+++ landscape/manager/tests/test_haservice.py 2013-02-22 00:26:21 +0000
@@ -0,0 +1,331 @@
+import os
+
+from twisted.internet.defer import Deferred
+
+
+from landscape.manager.haservice import HAService
+from landscape.manager.plugin import SUCCEEDED, FAILED
+from landscape.tests.helpers import LandscapeTest, ManagerHelper
+from landscape.tests.mocker import ANY
+
+
+class HAServiceTests(LandscapeTest):
+    helpers = [ManagerHelper]
+
+    def setUp(self):
+        super(HAServiceTests, self).setUp()
+        self.ha_service = HAService()
+        self.ha_service.JUJU_UNITS_BASE = self.makeDir()
+        self.unit_name = "my-service-9"
+
+        self.health_check_d = os.path.join(
+            self.ha_service.JUJU_UNITS_BASE, self.unit_name, "charm",
+            self.ha_service.HEALTH_SCRIPTS_DIR)
+        # create entire dir path
+        os.makedirs(self.health_check_d)
+
+        self.manager.add(self.ha_service)
+
+        unit_dir = "%s/%s/charm" % (
+            self.ha_service.JUJU_UNITS_BASE, self.unit_name)
+        cluster_online = file(
+            "%s/add_to_cluster" % unit_dir, "w")
+        cluster_online.write("#!/bin/bash\nexit 0")
+        cluster_online.close()
+        cluster_standby = file(
+            "%s/remove_from_cluster" % unit_dir, "w")
+        cluster_standby.write("#!/bin/bash\nexit 0")
+        cluster_standby.close()
+
+        os.chmod(
+            "%s/add_to_cluster" % unit_dir, 0755)
+        os.chmod(
+            "%s/remove_from_cluster" % unit_dir, 0755)
+
+        service = self.broker_service
+        service.message_store.set_accepted_types(["operation-result"])
+
+    def test_invalid_server_service_state_request(self):
+        """
+        When the landscape server requests a C{service-state} other than
+        'online' or 'standby' the client responds with the appropriate error.
+        """
+        logging_mock = self.mocker.replace("logging.error")
+        logging_mock("Invalid cluster participation state requested BOGUS.")
+        self.mocker.replay()
+
+        self.manager.dispatch_message(
+            {"type": "change-ha-service", "service-name": "my-service",
+             "unit-name": self.unit_name, "service-state": "BOGUS",
+             "operation-id": 1})
+
+        service = self.broker_service
+        self.assertMessages(
+            service.message_store.get_pending_messages(),
+            [{"type": "operation-result", "result-text":
+              u"Invalid cluster participation state requested BOGUS.",
+              "status": FAILED, "operation-id": 1}])
+
+    def test_not_a_juju_computer(self):
+        """
+        When not a juju charmed computer, L{HAService} responds with an error
+        due to missing JUJU_UNITS_BASE dir.
+        """
+        self.ha_service.JUJU_UNITS_BASE = "/I/don't/exist"
+
+        logging_mock = self.mocker.replace("logging.error")
+        logging_mock("This computer is not deployed with juju. "
+                     "Changing high-availability service not supported.")
+        self.mocker.replay()
+
+        self.manager.dispatch_message(
+            {"type": "change-ha-service", "service-name": "my-service",
+             "unit-name": self.unit_name,
+             "service-state": self.ha_service.STATE_STANDBY,
+             "operation-id": 1})
+
+        service = self.broker_service
+        self.assertMessages(
+            service.message_store.get_pending_messages(),
+            [{"type": "operation-result", "result-text":
+              u"This computer is not deployed with juju. Changing "
+              u"high-availability service not supported.",
+              "status": FAILED, "operation-id": 1}])
+
+    def test_incorrect_juju_unit(self):
+        """
+        When not the specific juju charmed computer, L{HAService} responds
+        with an error due to missing the JUJU_UNITS_BASE/$JUJU_UNIT dir.
+        """
+        logging_mock = self.mocker.replace("logging.error")
+        logging_mock("This computer is not juju unit some-other-service-0. "
+                     "Unable to modify high-availability services.")
+        self.mocker.replay()
+
+        self.manager.dispatch_message(
+            {"type": "change-ha-service", "service-name": "some-other-service",
+             "unit-name": "some-other-service-0", "service-state": "standby",
+             "operation-id": 1})
+
+        service = self.broker_service
+        self.assertMessages(
+            service.message_store.get_pending_messages(),
+            [{"type": "operation-result", "result-text":
+              u"This computer is not juju unit some-other-service-0. "
+              u"Unable to modify high-availability services.",
+              "status": FAILED, "operation-id": 1}])
+
+    def test_wb_no_health_check_directory(self):
+        """
+        When unable to find a valid C{HEALTH_CHECK_DIR}, L{HAService} will
+        succeed but log an informational message.
+        """
+        self.ha_service.HEALTH_SCRIPTS_DIR = "I/don't/exist"
+
+        def should_not_be_called(result):
+            self.fail(
+                "_run_health_checks failed on absent health check directory.")
+
+        def check_success_result(result):
+            self.assertEqual(
+                result,
+                "Skipping juju charm health checks. No scripts at "
+                "%s/%s/charm/I/don't/exist." %
+                (self.ha_service.JUJU_UNITS_BASE, self.unit_name))
+
+        result = self.ha_service._run_health_checks(self.unit_name)
+        result.addCallbacks(check_success_result, should_not_be_called)
+
+    def test_wb_no_health_check_scripts(self):
+        """
+        When C{HEALTH_CHECK_DIR} exists but no scripts exist, L{HAService}
+        will log an informational message, but succeed.
+        """
+        # In setup we created a health check directory but placed no health
+        # scripts in it.
+        def should_not_be_called(result):
+            self.fail(
+                "_run_health_checks failed on empty health check directory.")
+
+        def check_success_result(result):
+            self.assertEqual(
+                result,
+                "Skipping juju charm health checks. No scripts at "
+                "%s/%s/charm/%s." %
+                (self.ha_service.JUJU_UNITS_BASE, self.unit_name,
+                 self.ha_service.HEALTH_SCRIPTS_DIR))
+
+        result = self.ha_service._run_health_checks(self.unit_name)
+        result.addCallbacks(check_success_result, should_not_be_called)
+
+    def test_wb_failed_health_script(self):
+        """
+        L{HAService} runs all health check scripts found in the
+        C{HEALTH_CHECK_DIR}. If any script fails, L{HAService} will return a
+        deferred L{fail}.
+        """
+
+        def expected_failure(result):
+            self.assertEqual(
+                str(result.value),
+                "Failed charm script: %s/%s/charm/%s/my-health-script-2 "
+                "exited with return code 1." %
+                (self.ha_service.JUJU_UNITS_BASE, self.unit_name,
+                 self.ha_service.HEALTH_SCRIPTS_DIR))
+
+        def check_success_result(result):
+            self.fail(
+                "_run_health_checks succeeded despite a failed health script.")
+
+        for number in [1, 2, 3]:
+            script_path = (
+                "%s/my-health-script-%d" % (self.health_check_d, number))
+            health_script = file(script_path, "w")
+            if number == 2:
+                health_script.write("#!/bin/bash\nexit 1")
+            else:
+                health_script.write("#!/bin/bash\nexit 0")
+            health_script.close()
+            os.chmod(script_path, 0755)
+
+        result = self.ha_service._run_health_checks(self.unit_name)
+        result.addCallbacks(check_success_result, expected_failure)
+        return result
+
+    def test_missing_cluster_standby_or_cluster_online_scripts(self):
+        """
+        When no cluster status change scripts are delivered by the charm,
+        L{HAService} will still return a L{succeeded}.
+        """
+
+        def should_not_be_called(result):
+            self.fail(
+                "_change_cluster_participation failed on absent charm script.")
+
+        def check_success_result(result):
+            self.assertEqual(
+                result,
+                "This computer is always a participant in its high-availabilty"
+                " cluster. No juju charm cluster settings changed.")
+
+        self.ha_service.CLUSTER_ONLINE = "I/don't/exist"
+        self.ha_service.CLUSTER_STANDBY = "I/don't/exist"
+
+        result = self.ha_service._change_cluster_participation(
+            None, self.unit_name, self.ha_service.STATE_ONLINE)
+        result.addCallbacks(check_success_result, should_not_be_called)
+
+        # Now test the cluster standby script
+        result = self.ha_service._change_cluster_participation(
+            None, self.unit_name, self.ha_service.STATE_STANDBY)
+        result.addCallbacks(check_success_result, should_not_be_called)
+        return result
+
+    def test_failed_cluster_standby_or_cluster_online_scripts(self):
+        def expected_failure(result, script_path):
+            self.assertEqual(
+                str(result.value),
+                "Failed charm script: %s exited with return code 2." %
+                (script_path))
+
+        def check_success_result(result):
+            self.fail(
+                "_change_cluster_participation ignored charm script failure.")
+
+        # Rewrite both cluster scripts as failures
+        unit_dir = "%s/%s/charm" % (
+            self.ha_service.JUJU_UNITS_BASE, self.unit_name)
+        for script_name in [
+                self.ha_service.CLUSTER_ONLINE,
+                self.ha_service.CLUSTER_STANDBY]:
+
+            cluster_online = file("%s/%s" % (unit_dir, script_name), "w")
+            cluster_online.write("#!/bin/bash\nexit 2")
+            cluster_online.close()
+
+        result = self.ha_service._change_cluster_participation(
+            None, self.unit_name, self.ha_service.STATE_ONLINE)
+        result.addCallback(check_success_result)
+        script_path = ("%s/%s" % (unit_dir, self.ha_service.CLUSTER_ONLINE))
+        result.addErrback(expected_failure, script_path)
+
+        # Now test the cluster standby script
+        result = self.ha_service._change_cluster_participation(
+            None, self.unit_name, self.ha_service.STATE_STANDBY)
+        result.addCallback(check_success_result)
+        script_path = ("%s/%s" % (unit_dir, self.ha_service.CLUSTER_STANDBY))
+        result.addErrback(expected_failure, script_path)
+        return result
+
+    def test_run_success_cluster_standby(self):
+        """
+        When receives a C{change-ha-service message} with C{STATE_STANDBY}
+        requested the manager runs the C{CLUSTER_STANDBY} script and returns
+        a successful operation-result to the server.
+        """
+        message = ({"type": "change-ha-service", "service-name": "my-service",
+                    "unit-name": self.unit_name,
+                    "service-state": self.ha_service.STATE_STANDBY,
+                    "operation-id": 1})
+        deferred = Deferred()
+
+        def validate_messages(value):
+            cluster_script = "%s/%s/charm/%s" % (
+                self.ha_service.JUJU_UNITS_BASE, self.unit_name,
+                self.ha_service.CLUSTER_STANDBY)
+            service = self.broker_service
+            self.assertMessages(
+                service.message_store.get_pending_messages(),
+                [{"type": "operation-result",
+                  "result-text": u"%s succeeded." % cluster_script,
+                  "status": SUCCEEDED, "operation-id": 1}])
+
+        def handle_has_run(handle_result_deferred):
+            handle_result_deferred.chainDeferred(deferred)
+            return deferred.addCallback(validate_messages)
+
+        ha_service_mock = self.mocker.patch(self.ha_service)
+        ha_service_mock.handle_change_ha_service(ANY)
+        self.mocker.passthrough(handle_has_run)
+        self.mocker.replay()
+        self.manager.add(self.ha_service)
+        self.manager.dispatch_message(message)
+
+        return deferred
+
+    def test_run_success_cluster_online(self):
+        """
+        When receives a C{change-ha-service message} with C{STATE_ONLINE}
+        requested the manager runs the C{CLUSTER_ONLINE} script and returns
+        a successful operation-result to the server.
+        """
+        message = ({"type": "change-ha-service", "service-name": "my-service",
+                    "unit-name": self.unit_name,
+                    "service-state": self.ha_service.STATE_ONLINE,
+                    "operation-id": 1})
+        deferred = Deferred()
+
+        def validate_messages(value):
+            cluster_script = "%s/%s/charm/%s" % (
+                self.ha_service.JUJU_UNITS_BASE, self.unit_name,
+                self.ha_service.CLUSTER_ONLINE)
+            service = self.broker_service
+            self.assertMessages(
+                service.message_store.get_pending_messages(),
+                [{"type": "operation-result",
+                  "result-text": u"%s succeeded." % cluster_script,
+                  "status": SUCCEEDED, "operation-id": 1}])
+
+        def handle_has_run(handle_result_deferred):
+            handle_result_deferred.chainDeferred(deferred)
+            return deferred.addCallback(validate_messages)
+
+        ha_service_mock = self.mocker.patch(self.ha_service)
+        ha_service_mock.handle_change_ha_service(ANY)
+        self.mocker.passthrough(handle_has_run)
+        self.mocker.replay()
+        self.manager.add(self.ha_service)
+        self.manager.dispatch_message(message)
+
+        return deferred
=== modified file 'landscape/message_schemas.py'
--- landscape/message_schemas.py 2013-02-21 13:35:54 +0000
+++ landscape/message_schemas.py 2013-02-22 00:26:21 +0000
@@ -125,6 +125,12 @@
     "data": Any(String(), Constant(None))
     })
 
+CHANGE_HA_SERVICE = Message(
+    "change-ha-service",
+    {"service-name": String(),  # keystone
+     "unit-name": String(),     # keystone-9
+     "state": String()})        # online or standby
+
 MEMORY_INFO = Message("memory-info", {
     "memory-info": List(Tuple(Float(), Int(), Int())),
     })
@@ -445,5 +451,6 @@
     CUSTOM_GRAPH, REBOOT_REQUIRED, APT_PREFERENCES, EUCALYPTUS_INFO,
     EUCALYPTUS_INFO_ERROR, NETWORK_DEVICE, NETWORK_ACTIVITY,
     REBOOT_REQUIRED_INFO, UPDATE_MANAGER_INFO, CPU_USAGE,
-    CEPH_USAGE, SWIFT_DEVICE_INFO, KEYSTONE_TOKEN]:
+    CEPH_USAGE, SWIFT_DEVICE_INFO, KEYSTONE_TOKEN,
+    CHANGE_HA_SERVICE]:
     message_schemas[schema.type] = schema
