Merge lp:~chad.smith/charms/precise/keystone/ha-support into lp:~openstack-charmers/charms/precise/keystone/ha-support
Status: Merged
Merged at revision: 55
Proposed branch: lp:~chad.smith/charms/precise/keystone/ha-support
Merge into: lp:~openstack-charmers/charms/precise/keystone/ha-support
Diff against target: 91 lines (+51/-0), 6 files modified
  hooks/keystone-hooks (+5/-0)
  hooks/lib/openstack_common.py (+16/-0)
  scripts/add_to_cluster (+2/-0)
  scripts/health_checks.d/service_ports_live (+13/-0)
  scripts/health_checks.d/service_running (+13/-0)
  scripts/remove_from_cluster (+2/-0)
To merge this branch: | bzr merge lp:~chad.smith/charms/precise/keystone/ha-support |
Related bugs: (none)
Reviewer: Adam Gandelman | Status: Pending
Review via email: mp+150137@code.launchpad.net |
This proposal supersedes a proposal from 2013-02-21.
Description of the change
This is a merge proposal to establish a potential template for the delivery of health_script.d, add_to_cluster and remove_from_cluster within the HA-supported OpenStack charms.
This is intended as a talking point and guidance for what landscape-client will expect of OpenStack charms supporting HA services. If anything here doesn't look acceptable, I'm game for a change, but I wanted to pull a template together to seed discussion; any suggestions for better charm-oriented approaches are welcome.
These script locations will be hard-coded within landscape-client and will be run during rolling OpenStack upgrades. So, prior to ODS, we'd like to make sure this process makes sense for most HA-clustered services.
The example of landscape-client's interaction with charm scripts is our HAServiceManager plugin at https:/
1. Before any packages are upgraded, landscape-client will only call /var/lib/
- remove_from_cluster will place the node in crm standby state, thereby migrating any active HA resources off of the current node.
2. landscape-client will upgrade the packages, and any additional package dependencies will be installed.
3. Upon successful upgrade (maybe involving a reboot), landscape-client will run all run-parts health scripts from /var/lib/
4. If all health scripts succeed, landscape-client will run /var/lib/
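A minimal sketch of how step 3's run-parts style health checking could be driven. The function name and directory handling here are illustrative only, not landscape-client's actual implementation; the real check directory lives under the unit's path in /var/lib/juju.

```shell
# Hypothetical sketch: run every executable health script in a
# health_checks.d-style directory, stopping at the first failure.
run_health_checks() {
    check_dir="$1"
    for script in "$check_dir"/*; do
        # Skip non-executables (and the unexpanded glob on an empty dir).
        [ -x "$script" ] || continue
        if ! "$script"; then
            echo "health check failed: $script" >&2
            return 1
        fi
    done
    return 0
}
```

Under this model a charm adds a check simply by dropping another executable into the directory; no client-side changes are needed.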
As James mentioned in email, the remove_from_cluster script might need to take more action to cleanly remove an API service from OpenStack schedulers or configs. If a charm's remove_from_cluster does more than just migrate HA resources away from the node, then we'll need to discuss how to decompose add_to_cluster functionality into separate scripts: one script that ensures the local OpenStack API is running and can be "health checked", and a separate add_to_cluster script that only finalizes and activates configuration in the HA service.
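Given the +2/-0 diff stats, the two cluster scripts are presumably thin wrappers around the crm shell; the following is a hypothetical sketch, not the actual branch contents. The CRM variable is introduced here only so the wrappers can be exercised with a stub.

```shell
# Hypothetical sketch of the two cluster scripts, assuming they wrap
# Pacemaker's crm shell. CRM defaults to the real binary but can be
# pointed at a stub for testing.
CRM=${CRM:-crm}

remove_from_cluster() {
    # Put this node in standby so Pacemaker migrates its HA resources away.
    "$CRM" node standby
}

add_to_cluster() {
    # Bring the node back online so it can host HA resources again.
    "$CRM" node online
}
```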
Hey Chad-
You targeted this merge into the upstream charm @ lp:charms/keystone, can you retarget toward our WIP HA branch @ lp:~openstack-charmers/charms/precise/keystone/ha-support ?
The general approach here LGTM. But I'd love for things to be made a bit more generic so we can just drop this work into other charms unmodified:
- Perhaps save_script_rc() can be moved to lib/openstack_common.py without the hard-coded KEYSTONE_SERVICE_NAME, and the caller can add that to the kwargs instead?
- I imagine there are some tests that would be common across all charms (service_running, service_ports_live). It would be great if those could be synced to other charms unmodified as well. E.g., perhaps have the ports check look for *_OPENSTACK_PORT in the environment and ensure each one? The services check could determine what it should check based on *_OPENSTACK_SERVICE_NAME? Not sure if these common tests should live in some directory other than the tests specific to each service: /var/lib/juju/units/$unit/healthchecks/common.d/ & /var/lib/juju/units/$unit/healthchecks/$unit.d/ ??
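A hypothetical sketch of the generic ports check suggested above, scanning the environment for *_OPENSTACK_PORT variables. The probe helper and its use of nc are assumptions for illustration, not existing charm code.

```shell
# Returns 0 if something is listening on the given local port.
# (nc -z is an assumed probe; a charm could substitute its own.)
probe() {
    nc -z 127.0.0.1 "$1"
}

# Check every *_OPENSTACK_PORT environment variable and fail if any
# advertised port has nothing listening on it.
check_openstack_ports() {
    failed=0
    for var in $(env | grep '_OPENSTACK_PORT=' | cut -d= -f1); do
        eval port=\"\$$var\"
        if ! probe "$port"; then
            echo "$var: nothing listening on port $port" >&2
            failed=1
        fi
    done
    return $failed
}
```

Because the script discovers ports from the environment, the same file could be synced to every charm unmodified, with each charm exporting its own *_OPENSTACK_PORT variables via save_script_rc().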
End goal for me is to have the common framework for the health checking live with the other common code we maintain, and we can just sync it to other charms when we get it right in one. Of course this is complicated by the combination of bash + python charms we currently maintain, but still doable I think.