Merge lp:~stub/charms/precise/postgresql/integration into lp:charms/postgresql
Status: | Merged |
---|---|
Merged at revision: | 105 |
Proposed branch: | lp:~stub/charms/precise/postgresql/integration |
Merge into: | lp:charms/postgresql |
Diff against target: |
2291 lines (+1428/-149) 24 files modified
.bzrignore (+1/-0)
Makefile (+11/-2)
README.md (+39/-0)
charm-helpers.yaml (+1/-2)
config.yaml (+71/-11)
hooks/charmhelpers/core/fstab.py (+3/-1)
hooks/charmhelpers/core/hookenv.py (+35/-20)
hooks/charmhelpers/core/host.py (+44/-9)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+125/-0)
hooks/charmhelpers/core/templating.py (+51/-0)
hooks/charmhelpers/fetch/__init__.py (+51/-12)
hooks/hooks.py (+273/-58)
lib/test-client-charm/config.yaml (+13/-0)
lib/test-client-charm/hooks/hooks.py (+165/-0)
lib/test-client-charm/hooks/start (+3/-0)
lib/test-client-charm/hooks/stop (+3/-0)
lib/test-client-charm/metadata.yaml (+13/-0)
templates/postgres.cron.tmpl (+14/-12)
templates/postgresql.conf.tmpl (+3/-0)
templates/swiftwal.conf.tmpl (+5/-5)
test.py (+162/-12)
testing/jujufixture.py (+27/-5)
To merge this branch: | bzr merge lp:~stub/charms/precise/postgresql/integration |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Charles Butler (community) | deploy + code | | Approve |
Review Queue (community) | automated testing | | Needs Fixing |
Cory Johns (community) | | | Needs Fixing |
Adam Israel (community) | | | Needs Fixing |
Review via email: mp+233666@code.launchpad.net |
Commit message
Various pending bug fixes and feature work.
Description of the change
Integration branch containing all of stub's PostgreSQL charm merge proposals currently in the review queue.
Separate, smaller merge proposals also exist, with each branch in the sequence dependent on the previous. This branch is here in case anyone wants to review everything at once.
Adam Israel (aisrael) wrote : | # |
Hi Stuart,
Thanks for all your work on the Postgresql charm! I've spent some time reviewing this cumulative MP against postgresql.
I should note that the above comment from review-queue is incorrect. The results did not pass automated testing.
There were a handful of errors running make lint. I fixed those and pushed the change to my personal namespace[1].
Moving on to the integration tests[2], I was able to get 19 tests to pass, with 14 failures.
I notice that a couple of the failed tests are attempting to execute `juju run --unit postgresql/1 relation-list -r replication:168`; however, relation-list only seems to be valid within the context of hook execution.
I had many more failures initially, with deploy failing. I set JUJU_REPOSITORY and those tests were able to run.
If you think it would be useful in debugging the failed tests, I can also include any juju/machine logs.
[1] lp:~aisrael/charms/precise/postgresql/fix-lint
[2] http://
Stuart Bishop (stub) wrote : | # |
'make lint' is clean when run on my trusty box. I did merge in your branch, but it introduces lint failures:
stub@belch:
Lint check (flake8)
directory hooks
checking hooks/helpers.py
checking hooks/hooks.py
hooks/hooks.
hooks/hooks.
checking hooks/test_hooks.py
directory testing
checking testing/__init__.py
checking testing/
directory tests
checking test.py
make: *** [lint] Error 1
I suspect you are running from Precise, with older versions of the lint checkers? Or a more modern version? Possibly more modern, since one of the comments that changed predates my maintainership. Apart from the trivially fixed long lines, your other changes look benign, so I've merged it in anyway.
relation-list does work with 'juju run', because 'juju run' runs the command in a hook context (which is why it is functionally different from 'juju ssh'). I do notice you are running juju 1.18, while I'm running the test suite with juju 1.20 (latest stable). I am not aware of any 1.20 features or bug fixes that I'm relying on, so 1.18 could be fine.
JUJU_REPOSITORY needs to be set, yes. I think we are stuck with this until I can port the tests to Amulet. Or when juju lets me deploy the cwd as a charm rather than requiring this baroque structure.
The tests are flaky, and will alas remain flaky until Bug #1200267 is addressed. I can increase the various sleeps scattered around the code base to make things better with slower providers, which will of course make the test suite even slower. I have increased what I think is the relevant timeout, and am now ignoring all failures from 'juju run' in this section, rather than just errcode==2 (which is what we usually see when juju is caught with its pants down). I think there was a race here where I might attempt to list units in a relation before the relation has been joined and an error code returned (the juju run failures you see).
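The "increase the timeout and ignore transient failures" approach described above can be sketched as a small retry helper. This is a hypothetical illustration (`juju_run_retry` is not part of the charm's test suite):

```python
import subprocess
import time


def juju_run_retry(cmd, attempts=5, delay=10, run=None):
    """Run a shell command, retrying on any failure.

    Transient failures (for example, 'juju run' racing a relation
    that has not been joined yet) are swallowed and retried until
    the attempts are exhausted, at which point the last error is
    re-raised.
    """
    run = run or (lambda c: subprocess.check_output(c, shell=True))
    for attempt in range(1, attempts + 1):
        try:
            return run(cmd)
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(delay)
```

Note the trade-off mentioned above: longer delays make slow providers pass more reliably, at the cost of a slower test suite overall.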
The test_failover failure is interesting. I think for you it was failing just due to not waiting long enough. For me, I think that a race condition has been introduced with juju 1.20. I need to investigate this further.
Adam Israel (aisrael) wrote : | # |
Hey Stuart,
Interesting. I'm not sure why make lint is producing different results. My juju version is 1.20.7-
Perhaps it's an issue of running inside a Vagrant VM.
I'll re-run the tests natively and see if there are any differences. It's entirely possible that something in the vagrant cloud image is different.
Adam Israel (aisrael) wrote : | # |
Confirmed -- my Vagrant VM has a slightly newer version of flake8, which I suspect was installed via pip.
Stuart Bishop (stub) wrote : | # |
I think I've addressed everything from Adam's review now.
Cory Johns (johnsca) wrote : | # |
Stuart,
I still had test failures, which are all errors in the replication-
* Test run output: http://
* Hook error from log: http://
Also, while definitely not related to this change set, the tests for this charm do have some significant hurdles to passing in the automated test environment. Specifically:
* The `make test` target doesn't force the `make testdep` target, causing the `make test` target to fail if run first (which is done by the automated test runner). Once the runner manually discovers and runs 00_setup.py, the rest of the tests will fail, but `make test` will already have failed by that point. (This might be best fixed by having the runner discover and run the testdep target, though I do think it would be more intuitive to have the test target depend on the testdep target.)
* As you noted in your previous comment, the tests rely on the JUJU_REPOSITORY and SERIES (if being tested against precise) environment variables. The former may be set by the test runner, but it is unlikely that the latter will be.
* The tests for this charm require a local checkout of the postgresql-psql charm, with no fallback to installing from the store (which is what's causing this failure: http://
Even though these weren't introduced with this changeset, we really need to address them to get the postgresql charm passing in the automated test environment.
Stuart Bishop (stub) wrote : | # |
The exception traceback you pastebinned is very interesting. The "user
= rel['user']" line does not exist in this branch, so I think an old
version of the charm is being deployed somehow. The actual exception
looks familiar, and I think it is one that is fixed in this branch.
Hopefully it is something simple like the trusty charm is being run
from JUJU_REPOSITORY, whereas you were hoping for the precise branch
(the test run appears to be using trusty as the series). I beat my
head against a similar wall the other day, never finding out how juju
was managing to deploy a charm that didn't exist, until I totally
rebuild my JUJU_REPOSITORY.
I can add testdep as a requirement to the makefile easily enough. Is
'charm test' no longer the proposed method of running the tests?
(done)
I don't know why SERIES is required. The makefile does "SERIES :=
$(juju get-environment default-series)" at the top, so unless it is
overridden it will be using the default series of the environment.
(unchanged - charm version issue?)
I will fix things to use the charmstore version of postgresql-psql if
the series is Precise. There is not a version in the charm store for
Trusty. Ideally, there will never be as I'd like to just include this
charm inside the PostgreSQL charm somehow (in much the same way that
Amulet launches its sentinel charms). I haven't looked at this yet, as
I've been putting it off until I have time to switch things to Amulet.
Perhaps there is some syntax that will always use the precise branch
of the postgresql-psql charm no matter if I'm deploying to trusty or
precise hosts? (done for precise)
I'd be interested in hearing thoughts on the code changes and style btw.
On 2 October 2014 00:05, Cory Johns <email address hidden> wrote:
> Even though these weren't introduced with this changeset, we really need to address them to get the postgresql charm passing in the automated test environment.
Does this block landing? Let me know (and how to go about checking),
as I've been landing other branches to this charm from other
contributors without such scrutiny.
The current branch will also fail on the automated test environment,
and it doesn't seem helpful to block on this (and it means nobody’s
changes to the charm can land, since I doubt anyone else will be
attempting to fix this). I can simply disable most of the tests if it
is a requirement (and still have more tests than most charms).
--
Stuart Bishop <email address hidden>
Cory Johns (johnsca) wrote : | # |
> The "user = rel['user']" line does not exist in this branch,
> so I think an old version of the charm is being deployed somehow.
I was testing by merging this branch into trunk, and the line is there. It looks like it was added upstream in this commit: http://
> Is 'charm test' no longer the proposed method of running the tests?
The automated test runner uses bundletester, which does the equivalent of `charm test` but also does some additional discovery, which includes running `make test` before doing `charm test`. Also, it was just less intuitive for me to check for a testdep target before running `make test`; I was all ready to nack you for missing dependencies. :p
Tim Van Steenburgh is working on a blog post with detailed info about the automated test runner and recommended way for running tests locally to ensure they pass for the automated runner, which we hope to have out soon. In the meantime, the bundletester tool can be `pip install`ed from https:/
> I don't know why SERIES is required.
You're absolutely right. I checked the charms out under my local precise namespace, but was running with the default series set to trusty and was confused by why it was trying to deploy the charms as trusty, so that's my bad.
> Perhaps there is some syntax that will always use the precise branch
> of the postgresql-psql charm no matter if I'm deploying to trusty or
> precise hosts?
Just removing the conditional and hard-coding PSQL_CHARM to "cs:precise/
> Does this block landing?
No, not really. I certainly won't push to block this MP based on upstream test failures, but that doesn't mean I won't push for you to fix them if you're willing. :)
Although, failing tests from upstream *do* make it more difficult to review the new changes, especially on a complex charm like this one.
Also, our goal is to gate reviews on the automated tests, to improve quality and help us manage the queue of reviews more effectively. One of our milestones on that goal is to get all of the failing charms on http://
Cory Johns (johnsca) wrote : | # |
For reference, here is the blog post I mentioned: http://
Stuart, I will take a look at your updates shortly to see if they resolve the issue I was seeing. Thanks.
Stuart Bishop (stub) wrote : | # |
This is where I'm at at the moment:
PostgreSQL integration tests, all non-beta versions,
trial test.PG91Tests
test
PG91Tests
test_
test_basic ... [OK]
test_
test_
test_failover ... [ERROR]
test_
test_
test_
test_
test_
test_syslog ... [OK]
test_
test_
Stuart Bishop (stub) wrote : | # |
Looks sorted now. Bugs were on trunk, so trunk is busted until this branch lands.
- 114. By Stuart Bishop
-
Merge trunk
Review Queue (review-queue) wrote : | # |
This item has failed automated testing! Results available here http://
- 115. By Stuart Bishop
-
Make hooks.py executable
- 116. By Stuart Bishop
-
Embed test client charm and remove JUJU_REPOSITORY requirement
- 117. By Stuart Bishop
-
Allow charm store charms, fixing rsyslog test
- 118. By Stuart Bishop
-
Ignore magic synced charmhelpers
Whit Morriss (whitmo) wrote : | # |
Hey Stuart,
I'm running into issues getting bundletester to run the tests. It gets through lint and proof, then hangs, and exits after printing "Alarm clock" to stdout. psql and postgres seem to have deployed ok.
This may be an issue with my environment. still investigating.
-w
Stuart Bishop (stub) wrote : | # |
Yeah, I've got another MP up disabling the integration tests by default as having them is not being helpful. If you work out what assumptions bundletester is making (or vice versa), we can fix that in a separate branch. My prime concern with this branch is actually landing it, as trunk is broken and fixes have been stuck here for months.
Charles Butler (lazypower) wrote : | # |
Stub,
I've taken some time to re-review this branch with a mindset of "Focus on the code and verify the charm works" - and indeed, this branch of postgres fixes the upstream brokenness that I've seen.
I'm going to tentatively approve this branch on the good faith promise that there will be an update correcting the testing story - as the PG91 tests are what's failing consistently when I test - but given the severity and longevity of this response cycle, I see value in what's here, with additional work scheduled elsewhere to correct our infra problems re: testing.
Thanks again for the continued effort and work to keep the postgres charm a high quality member of the charm store.
If you have any questions/
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2014-02-25 19:47:03 +0000 |
3 | +++ .bzrignore 2014-10-14 10:13:22 +0000 |
4 | @@ -1,3 +1,4 @@ |
5 | _trial_temp |
6 | hooks/_trial_temp |
7 | hooks/local_state.pickle |
8 | +lib/test-client-charm/hooks/charmhelpers |
9 | |
10 | === modified file 'Makefile' |
11 | --- Makefile 2014-06-27 10:04:40 +0000 |
12 | +++ Makefile 2014-10-14 10:13:22 +0000 |
13 | @@ -14,7 +14,7 @@ |
14 | @echo " make integration_test_93" |
15 | @echo " make integration_test_94" |
16 | |
17 | -test: lint unit_test integration_test |
18 | +test: testdep lint unit_test integration_test |
19 | |
20 | testdep: |
21 | tests/00_setup.test |
22 | @@ -25,7 +25,9 @@ |
23 | |
24 | integration_test: |
25 | @echo "PostgreSQL integration tests, all non-beta versions, ${SERIES}" |
26 | - trial test.PG91Tests test.PG92Tests test.PG93Tests |
27 | + trial test.PG91Tests |
28 | + trial test.PG92Tests |
29 | + trial test.PG93Tests |
30 | |
31 | integration_test_91: |
32 | @echo "PostgreSQL 9.1 integration tests, ${SERIES}" |
33 | @@ -48,3 +50,10 @@ |
34 | @flake8 -v \ |
35 | --exclude hooks/charmhelpers,hooks/_trial_temp \ |
36 | hooks testing tests test.py |
37 | + |
38 | +sync: |
39 | + @bzr cat \ |
40 | + lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \ |
41 | + > .charm_helpers_sync.py |
42 | + @python .charm_helpers_sync.py -c charm-helpers.yaml |
43 | + @rm .charm_helpers_sync.py |
44 | |
45 | === modified file 'README.md' |
46 | --- README.md 2014-06-27 05:27:12 +0000 |
47 | +++ README.md 2014-10-14 10:13:22 +0000 |
48 | @@ -255,6 +255,44 @@ |
49 | 6. juju add-relation postgresql storage |
50 | |
51 | |
52 | +# Point In Time Recovery |
53 | + |
54 | +The PostgreSQL charm has experimental support for log shipping and point |
55 | +in time recovery. This feature uses the wal-e[2] tool, and requires access |
56 | +to Amazon S3, Microsoft Azure Block Storage or Swift. This feature is |
57 | +flagged as experimental because it has only been tested with Swift, and |
58 | +not yet been tested under load. It also may require some API changes, |
59 | +particularly on how authentication credentials are accessed when a standard |
60 | +emerges. The charm can be configured to perform regular filesystem backups |
61 | +and ship WAL files to the object store. Hot standbys will make use of the |
62 | +archived WAL files, allowing them to resync after extended netsplits or |
63 | +even let you turn off streaming replication entirely. |
64 | + |
65 | +With a base backup and the WAL archive you can perform point in time |
66 | +recovery, but this is still a manual process and the charm does not |
67 | +yet help you do it. The simplest approach would be to create a new |
68 | +PostgreSQL service containing a single unit, 'juju ssh' in and use |
69 | +wal-e to replace the database after shutting it down, create a |
70 | +recovery.conf to replay the archived WAL files using wal-e, restart the |
71 | +database and wait for it to recover. Once recovered, new hot standby |
72 | +units can be added and client services related to the new database |
73 | +service. |
74 | + |
75 | +To enable the experimental wal-e support with Swift, you will need to |
76 | +use Ubuntu 14.04 (Trusty), and set the service configuration settings |
77 | +similar to the following:: |
78 | + |
79 | + postgresql: |
80 | + wal_e_storage_uri: swift://mycontainer |
81 | + os_username: my_swift_username |
82 | + os_password: my_swift_password |
83 | + os_auth_url: https://keystone.auth.url.example.com:8080/v2/ |
84 | + os_tenant_name: my_tenant_name |
85 | + install_sources: | |
86 | + - ppa:stub/pgcharm |
87 | + - cloud:icehouse |
88 | + |
89 | + |
90 | # Contact Information |
91 | |
92 | ## PostgreSQL |
93 | @@ -265,3 +303,4 @@ |
94 | - [PostgreSQL Mailing List](http://www.postgresql.org/list/) |
95 | |
96 | [1]: https://bugs.launchpad.net/charms/+source/postgresql/+bug/1258485 |
97 | + [2]: https://github.com/wal-e/wal-e |
98 | |
99 | === modified file 'charm-helpers.yaml' |
100 | --- charm-helpers.yaml 2014-06-05 11:04:11 +0000 |
101 | +++ charm-helpers.yaml 2014-10-14 10:13:22 +0000 |
102 | @@ -1,6 +1,5 @@ |
103 | destination: hooks/charmhelpers |
104 | -#branch: lp:charm-helpers |
105 | -branch: lp:~stub/charm-helpers/fix-configure_sources |
106 | +branch: lp:charm-helpers |
107 | include: |
108 | - core |
109 | - fetch |
110 | |
111 | === modified file 'config.yaml' |
112 | --- config.yaml 2014-06-03 09:24:14 +0000 |
113 | +++ config.yaml 2014-10-14 10:13:22 +0000 |
114 | @@ -89,6 +89,14 @@ |
115 | default: False |
116 | type: boolean |
117 | description: Log disconnections |
118 | + log_temp_files: |
119 | + default: "-1" |
120 | + type: string |
121 | + description: | |
122 | + Log creation of temporary files larger than the threshold. |
123 | + -1 disables the feature, 0 logs all temporary files, or specify |
124 | + the threshold size with an optional unit (eg. "512KB", default |
125 | + unit is kilobytes). |
126 | log_line_prefix: |
127 | default: "%t [%p]: [%l-1] db=%d,user=%u " |
128 | type: string |
129 | @@ -164,9 +172,10 @@ |
130 | default: False |
131 | type: boolean |
132 | description: | |
133 | + DEPRECATED. |
134 | Hot standby or warm standby. When True, queries can be run against |
135 | the database when in recovery or standby mode (ie. replicated). |
136 | - Overridden by juju when master/slave relations are used. |
137 | + Overridden when service contains multiple units. |
138 | hot_standby_feedback: |
139 | default: False |
140 | type: boolean |
141 | @@ -181,7 +190,7 @@ |
142 | 'minimal', 'archive' or 'hot_standby'. Defines how much information |
143 | is written to the WAL. Set to 'minimal' for stand alone databases |
144 | and 'hot_standby' for replicated setups. Overridden by juju when |
145 | - replication s used. |
146 | + replication is used. |
147 | max_wal_senders: |
148 | default: 0 |
149 | type: int |
150 | @@ -380,14 +389,16 @@ |
151 | # Swift backups and PITR via SwiftWAL |
152 | swiftwal_container_prefix: |
153 | type: string |
154 | - default: "" |
155 | + default: null |
156 | description: | |
157 | EXPERIMENTAL. |
158 | Swift container prefix for SwiftWAL to use. Must be set if any |
159 | - SwiftWAL features are enabled. |
160 | + SwiftWAL features are enabled. This will become a simple |
161 | + swiftwal_container config item when proper leader election is |
162 | + implemented in juju. |
163 | swiftwal_backup_schedule: |
164 | type: string |
165 | - default: "" |
166 | + default: null |
167 | description: | |
168 | EXPERIMENTAL. |
169 | Cron-formatted schedule for SwiftWAL database backups. |
170 | @@ -405,34 +416,83 @@ |
171 | description: | |
172 | EXPERIMENTAL. |
173 | Archive WAL files into Swift. If swiftwal_backup_schedule is set, |
174 | - this allows point-in-time recovery and WAL files are removed |
175 | + allows point-in-time recovery and WAL files are removed |
176 | automatically with old backups. If swiftwal_backup_schedule is not set |
177 | then WAL files are never removed. Enabling this option will override |
178 | the archive_mode and archive_command settings. |
179 | + wal_e_storage_uri: |
180 | + type: string |
181 | + default: null |
182 | + description: | |
183 | + EXPERIMENTAL. |
184 | + Specify storage to be used by WAL-E. Every PostgreSQL service must use |
185 | + a unique URI. Backups will be unrecoverable if it is not unique. The |
186 | + URI's scheme must be one of 'swift' (OpenStack Swift), 's3' (Amazon AWS) |
187 | + or 'wabs' (Windows Azure). For example: |
188 | + 'swift://some-container/directory/or/whatever' |
189 | + 's3://some-bucket/directory/or/whatever' |
190 | + 'wabs://some-bucket/directory/or/whatever' |
191 | + Setting the wal_e_storage_uri enables regular WAL-E filesystem level |
192 | + backups (per wal_e_backup_schedule), and log shipping to the configured |
193 | + storage. Point-in-time recovery becomes possible, as is disabling the |
194 | + streaming_replication configuration item and relying solely on |
195 | + log shipping for replication. |
196 | + wal_e_backup_schedule: |
197 | + type: string |
198 | + default: "13 0 * * *" |
199 | + description: | |
200 | + EXPERIMENTAL. |
201 | + Cron-formatted schedule for WAL-E database backups. If |
202 | + wal_e_backup_schedule is unset, WAL files will never be removed from |
203 | + WAL-E storage. |
204 | + wal_e_backup_retention: |
205 | + type: int |
206 | + default: 2 |
207 | + description: | |
208 | + EXPERIMENTAL. |
209 | + Number of recent base backups and WAL files to retain. |
210 | + You need enough space for this many backups plus one more, as |
211 | + an old backup will only be removed after a new one has been |
212 | + successfully made to replace it. |
213 | streaming_replication: |
214 | type: boolean |
215 | default: true |
216 | description: | |
217 | - EXPERIMENTAL. |
218 | Enable streaming replication. Normally, streaming replication is |
219 | always used, and any log shipping configured is used as a fallback. |
220 | Turning this off without configuring log shipping is an error. |
221 | os_username: |
222 | type: string |
223 | - default: "" |
224 | + default: null |
225 | description: EXPERIMENTAL. OpenStack Swift username. |
226 | os_password: |
227 | type: string |
228 | - default: "" |
229 | + default: null |
230 | description: EXPERIMENTAL. OpenStack Swift password. |
231 | os_auth_url: |
232 | type: string |
233 | - default: "" |
234 | + default: null |
235 | description: EXPERIMENTAL. OpenStack Swift authentication URL. |
236 | os_tenant_name: |
237 | type: string |
238 | - default: "" |
239 | + default: null |
240 | description: EXPERIMENTAL. OpenStack Swift tenant name. |
241 | + aws_access_key_id: |
242 | + type: string |
243 | + default: null |
244 | + description: EXPERIMENTAL. Amazon AWS access key id. |
245 | + aws_secret_access_key: |
246 | + type: string |
247 | + default: null |
248 | + description: EXPERIMENTAL. Amazon AWS secret access key. |
249 | + wabs_account_name: |
250 | + type: string |
251 | + default: null |
252 | + description: EXPERIMENTAL. Windows Azure account name. |
253 | + wabs_access_key: |
254 | + type: string |
255 | + default: null |
256 | + description: EXPERIMENTAL. Windows Azure access key. |
257 | package_status: |
258 | default: "install" |
259 | type: string |
260 | |
261 | === modified file 'hooks/charmhelpers/core/fstab.py' |
262 | --- hooks/charmhelpers/core/fstab.py 2014-06-05 11:21:55 +0000 |
263 | +++ hooks/charmhelpers/core/fstab.py 2014-10-14 10:13:22 +0000 |
264 | @@ -48,9 +48,11 @@ |
265 | file.__init__(self, self._path, 'r+') |
266 | |
267 | def _hydrate_entry(self, line): |
268 | + # NOTE: use split with no arguments to split on any |
269 | + # whitespace including tabs |
270 | return Fstab.Entry(*filter( |
271 | lambda x: x not in ('', None), |
272 | - line.strip("\n").split(" "))) |
273 | + line.strip("\n").split())) |
274 | |
275 | @property |
276 | def entries(self): |
277 | |
278 | === modified file 'hooks/charmhelpers/core/hookenv.py' |
279 | --- hooks/charmhelpers/core/hookenv.py 2014-05-26 07:10:14 +0000 |
280 | +++ hooks/charmhelpers/core/hookenv.py 2014-10-14 10:13:22 +0000 |
281 | @@ -25,7 +25,7 @@ |
282 | def cached(func): |
283 | """Cache return values for multiple executions of func + args |
284 | |
285 | - For example: |
286 | + For example:: |
287 | |
288 | @cached |
289 | def unit_get(attribute): |
290 | @@ -156,12 +156,15 @@ |
291 | |
292 | |
293 | class Config(dict): |
294 | - """A Juju charm config dictionary that can write itself to |
295 | - disk (as json) and track which values have changed since |
296 | - the previous hook invocation. |
297 | - |
298 | - Do not instantiate this object directly - instead call |
299 | - ``hookenv.config()`` |
300 | + """A dictionary representation of the charm's config.yaml, with some |
301 | + extra features: |
302 | + |
303 | + - See which values in the dictionary have changed since the previous hook. |
304 | + - For values that have changed, see what the previous value was. |
305 | + - Store arbitrary data for use in a later hook. |
306 | + |
307 | + NOTE: Do not instantiate this object directly - instead call |
308 | + ``hookenv.config()``, which will return an instance of :class:`Config`. |
309 | |
310 | Example usage:: |
311 | |
312 | @@ -170,8 +173,8 @@ |
313 | >>> config = hookenv.config() |
314 | >>> config['foo'] |
315 | 'bar' |
316 | + >>> # store a new key/value for later use |
317 | >>> config['mykey'] = 'myval' |
318 | - >>> config.save() |
319 | |
320 | |
321 | >>> # user runs `juju set mycharm foo=baz` |
322 | @@ -188,22 +191,23 @@ |
323 | >>> # keys/values that we add are preserved across hooks |
324 | >>> config['mykey'] |
325 | 'myval' |
326 | - >>> # don't forget to save at the end of hook! |
327 | - >>> config.save() |
328 | |
329 | """ |
330 | CONFIG_FILE_NAME = '.juju-persistent-config' |
331 | |
332 | def __init__(self, *args, **kw): |
333 | super(Config, self).__init__(*args, **kw) |
334 | + self.implicit_save = True |
335 | self._prev_dict = None |
336 | self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME) |
337 | if os.path.exists(self.path): |
338 | self.load_previous() |
339 | |
340 | def load_previous(self, path=None): |
341 | - """Load previous copy of config from disk so that current values |
342 | - can be compared to previous values. |
343 | + """Load previous copy of config from disk. |
344 | + |
345 | + In normal usage you don't need to call this method directly - it |
346 | + is called automatically at object initialization. |
347 | |
348 | :param path: |
349 | |
350 | @@ -218,8 +222,8 @@ |
351 | self._prev_dict = json.load(f) |
352 | |
353 | def changed(self, key): |
354 | - """Return true if the value for this key has changed since |
355 | - the last save. |
356 | + """Return True if the current value for this key is different from |
357 | + the previous value. |
358 | |
359 | """ |
360 | if self._prev_dict is None: |
361 | @@ -228,7 +232,7 @@ |
362 | |
363 | def previous(self, key): |
364 | """Return previous value for this key, or None if there |
365 | - is no "previous" value. |
366 | + is no previous value. |
367 | |
368 | """ |
369 | if self._prev_dict: |
370 | @@ -238,7 +242,13 @@ |
371 | def save(self): |
372 | """Save this config to disk. |
373 | |
374 | - Preserves items in _prev_dict that do not exist in self. |
375 | + If the charm is using the :mod:`Services Framework <services.base>` |
376 | + or :meth:'@hook <Hooks.hook>' decorator, this |
377 | + is called automatically at the end of successful hook execution. |
378 | + Otherwise, it should be called directly by user code. |
379 | + |
380 | + To disable automatic saves, set ``implicit_save=False`` on this |
381 | + instance. |
382 | |
383 | """ |
384 | if self._prev_dict: |
385 | @@ -285,8 +295,9 @@ |
386 | raise |
387 | |
388 | |
389 | -def relation_set(relation_id=None, relation_settings={}, **kwargs): |
390 | +def relation_set(relation_id=None, relation_settings=None, **kwargs): |
391 | """Set relation information for the current unit""" |
392 | + relation_settings = relation_settings if relation_settings else {} |
393 | relation_cmd_line = ['relation-set'] |
394 | if relation_id is not None: |
395 | relation_cmd_line.extend(('-r', relation_id)) |
396 | @@ -445,18 +456,19 @@ |
397 | class Hooks(object): |
398 | """A convenient handler for hook functions. |
399 | |
400 | - Example: |
401 | + Example:: |
402 | + |
403 | hooks = Hooks() |
404 | |
405 | # register a hook, taking its name from the function name |
406 | @hooks.hook() |
407 | def install(): |
408 | - ... |
409 | + pass # your code here |
410 | |
411 | # register a hook, providing a custom hook name |
412 | @hooks.hook("config-changed") |
413 | def config_changed(): |
414 | - ... |
415 | + pass # your code here |
416 | |
417 | if __name__ == "__main__": |
418 | # execute a hook based on the name the program is called by |
419 | @@ -476,6 +488,9 @@ |
420 | hook_name = os.path.basename(args[0]) |
421 | if hook_name in self._hooks: |
422 | self._hooks[hook_name]() |
423 | + cfg = config() |
424 | + if cfg.implicit_save: |
425 | + cfg.save() |
426 | else: |
427 | raise UnregisteredHookError(hook_name) |
428 | |
429 | |
430 | === modified file 'hooks/charmhelpers/core/host.py' |
431 | --- hooks/charmhelpers/core/host.py 2014-06-05 11:04:11 +0000 |
432 | +++ hooks/charmhelpers/core/host.py 2014-10-14 10:13:22 +0000 |
433 | @@ -12,7 +12,8 @@ |
434 | import string |
435 | import subprocess |
436 | import hashlib |
437 | -import apt_pkg |
438 | +import shutil |
439 | +from contextlib import contextmanager |
440 | |
441 | from collections import OrderedDict |
442 | |
443 | @@ -53,7 +54,7 @@ |
444 | def service_running(service): |
445 | """Determine whether a system service is running""" |
446 | try: |
447 | - output = subprocess.check_output(['service', service, 'status']) |
448 | + output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT) |
449 | except subprocess.CalledProcessError: |
450 | return False |
451 | else: |
452 | @@ -63,6 +64,16 @@ |
453 | return False |
454 | |
455 | |
456 | +def service_available(service_name): |
457 | + """Determine whether a system service is available""" |
458 | + try: |
459 | + subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT) |
460 | + except subprocess.CalledProcessError: |
461 | + return False |
462 | + else: |
463 | + return True |
464 | + |
465 | + |
466 | def adduser(username, password=None, shell='/bin/bash', system_user=False): |
467 | """Add a user to the system""" |
468 | try: |
469 | @@ -212,13 +223,13 @@ |
470 | def restart_on_change(restart_map, stopstart=False): |
471 | """Restart services based on configuration files changing |
472 | |
473 | - This function is used a decorator, for example |
474 | + This function is used as a decorator, for example:: |
475 | |
476 | @restart_on_change({ |
477 | '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ] |
478 | }) |
479 | def ceph_client_changed(): |
480 | - ... |
481 | + pass # your code here |
482 | |
483 | In this example, the cinder-api and cinder-volume services |
484 | would be restarted if /etc/ceph/ceph.conf is changed by the |
485 | @@ -314,12 +325,36 @@ |
486 | |
487 | def cmp_pkgrevno(package, revno, pkgcache=None): |
488 | '''Compare supplied revno with the revno of the installed package |
489 | - 1 => Installed revno is greater than supplied arg |
490 | - 0 => Installed revno is the same as supplied arg |
491 | - -1 => Installed revno is less than supplied arg |
492 | + |
493 | + * 1 => Installed revno is greater than supplied arg |
494 | + * 0 => Installed revno is the same as supplied arg |
495 | + * -1 => Installed revno is less than supplied arg |
496 | + |
497 | ''' |
498 | + import apt_pkg |
499 | + from charmhelpers.fetch import apt_cache |
500 | if not pkgcache: |
501 | - apt_pkg.init() |
502 | - pkgcache = apt_pkg.Cache() |
503 | + pkgcache = apt_cache() |
504 | pkg = pkgcache[package] |
505 | return apt_pkg.version_compare(pkg.current_ver.ver_str, revno) |
506 | + |
507 | + |
508 | +@contextmanager |
509 | +def chdir(d): |
510 | + cur = os.getcwd() |
511 | + try: |
512 | + yield os.chdir(d) |
513 | + finally: |
514 | + os.chdir(cur) |
515 | + |
516 | + |
517 | +def chownr(path, owner, group): |
518 | + uid = pwd.getpwnam(owner).pw_uid |
519 | + gid = grp.getgrnam(group).gr_gid |
520 | + |
521 | + for root, dirs, files in os.walk(path): |
522 | + for name in dirs + files: |
523 | + full = os.path.join(root, name) |
524 | + broken_symlink = os.path.lexists(full) and not os.path.exists(full) |
525 | + if not broken_symlink: |
526 | + os.chown(full, uid, gid) |
527 | |
528 | === added directory 'hooks/charmhelpers/core/services' |
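The broken-symlink guard added to `chownr` above can be sketched in isolation. This is a minimal stand-in (the helper name `walk_chown_targets` is hypothetical, not part of charm-helpers) showing why `os.path.lexists` is paired with `os.path.exists`: a dangling symlink satisfies the former but not the latter, and chowning through it would raise `OSError`.

```python
import os
import tempfile


def walk_chown_targets(path):
    """Collect the paths a recursive chown (as in chownr above) would
    touch, skipping broken symlinks so os.chown cannot fail on a
    dangling link target."""
    targets = []
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            full = os.path.join(root, name)
            # lexists() is True for the link itself; exists() follows it,
            # so a dangling symlink is lexists-but-not-exists.
            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
            if not broken_symlink:
                targets.append(full)
    return targets
```

A regular file is collected while a symlink pointing at a missing target is skipped, matching the behaviour of the diff's `chownr`.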
529 | === added file 'hooks/charmhelpers/core/services/__init__.py' |
530 | --- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000 |
531 | +++ hooks/charmhelpers/core/services/__init__.py 2014-10-14 10:13:22 +0000 |
532 | @@ -0,0 +1,2 @@ |
533 | +from .base import * |
534 | +from .helpers import * |
535 | |
536 | === added file 'hooks/charmhelpers/core/services/base.py' |
537 | --- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000 |
538 | +++ hooks/charmhelpers/core/services/base.py 2014-10-14 10:13:22 +0000 |
539 | @@ -0,0 +1,313 @@ |
540 | +import os |
541 | +import re |
542 | +import json |
543 | +from collections import Iterable |
544 | + |
545 | +from charmhelpers.core import host |
546 | +from charmhelpers.core import hookenv |
547 | + |
548 | + |
549 | +__all__ = ['ServiceManager', 'ManagerCallback', |
550 | + 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports', |
551 | + 'service_restart', 'service_stop'] |
552 | + |
553 | + |
554 | +class ServiceManager(object): |
555 | + def __init__(self, services=None): |
556 | + """ |
557 | + Register a list of services, given their definitions. |
558 | + |
559 | + Service definitions are dicts in the following formats (all keys except |
560 | + 'service' are optional):: |
561 | + |
562 | + { |
563 | + "service": <service name>, |
564 | + "required_data": <list of required data contexts>, |
565 | + "provided_data": <list of provided data contexts>, |
566 | + "data_ready": <one or more callbacks>, |
567 | + "data_lost": <one or more callbacks>, |
568 | + "start": <one or more callbacks>, |
569 | + "stop": <one or more callbacks>, |
570 | + "ports": <list of ports to manage>, |
571 | + } |
572 | + |
573 | + The 'required_data' list should contain dicts of required data (or |
574 | + dependency managers that act like dicts and know how to collect the data). |
575 | + Only when all items in the 'required_data' list are populated are the list |
576 | + of 'data_ready' and 'start' callbacks executed. See `is_ready()` for more |
577 | + information. |
578 | + |
579 | + The 'provided_data' list should contain relation data providers, most likely |
580 | + a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`, |
581 | + that will indicate a set of data to set on a given relation. |
582 | + |
583 | + The 'data_ready' value should be either a single callback, or a list of |
584 | + callbacks, to be called when all items in 'required_data' pass `is_ready()`. |
585 | + Each callback will be called with the service name as the only parameter. |
586 | + After all of the 'data_ready' callbacks are called, the 'start' callbacks |
587 | + are fired. |
588 | + |
589 | + The 'data_lost' value should be either a single callback, or a list of |
590 | + callbacks, to be called when a 'required_data' item no longer passes |
591 | + `is_ready()`. Each callback will be called with the service name as the |
592 | + only parameter. After all of the 'data_lost' callbacks are called, |
593 | + the 'stop' callbacks are fired. |
594 | + |
595 | + The 'start' value should be either a single callback, or a list of |
596 | + callbacks, to be called when starting the service, after the 'data_ready' |
597 | + callbacks are complete. Each callback will be called with the service |
598 | + name as the only parameter. This defaults to |
599 | + `[host.service_start, services.open_ports]`. |
600 | + |
601 | + The 'stop' value should be either a single callback, or a list of |
602 | + callbacks, to be called when stopping the service. If the service is |
603 | + being stopped because it no longer has all of its 'required_data', this |
604 | + will be called after all of the 'data_lost' callbacks are complete. |
605 | + Each callback will be called with the service name as the only parameter. |
606 | + This defaults to `[services.close_ports, host.service_stop]`. |
607 | + |
608 | + The 'ports' value should be a list of ports to manage. The default |
609 | + 'start' handler will open the ports after the service is started, |
610 | + and the default 'stop' handler will close the ports prior to stopping |
611 | + the service. |
612 | + |
613 | + |
614 | + Examples: |
615 | + |
616 | + The following registers an Upstart service called bingod that depends on |
617 | + a mongodb relation and which runs a custom `db_migrate` function prior to |
618 | + restarting the service, and a Runit service called spadesd:: |
619 | + |
620 | + manager = services.ServiceManager([ |
621 | + { |
622 | + 'service': 'bingod', |
623 | + 'ports': [80, 443], |
624 | + 'required_data': [MongoRelation(), config(), {'my': 'data'}], |
625 | + 'data_ready': [ |
626 | + services.template(source='bingod.conf'), |
627 | + services.template(source='bingod.ini', |
628 | + target='/etc/bingod.ini', |
629 | + owner='bingo', perms=0400), |
630 | + ], |
631 | + }, |
632 | + { |
633 | + 'service': 'spadesd', |
634 | + 'data_ready': services.template(source='spadesd_run.j2', |
635 | + target='/etc/sv/spadesd/run', |
636 | + perms=0555), |
637 | + 'start': runit_start, |
638 | + 'stop': runit_stop, |
639 | + }, |
640 | + ]) |
641 | + manager.manage() |
642 | + """ |
643 | + self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json') |
644 | + self._ready = None |
645 | + self.services = {} |
646 | + for service in services or []: |
647 | + service_name = service['service'] |
648 | + self.services[service_name] = service |
649 | + |
650 | + def manage(self): |
651 | + """ |
652 | + Handle the current hook by doing The Right Thing with the registered services. |
653 | + """ |
654 | + hook_name = hookenv.hook_name() |
655 | + if hook_name == 'stop': |
656 | + self.stop_services() |
657 | + else: |
658 | + self.provide_data() |
659 | + self.reconfigure_services() |
660 | + cfg = hookenv.config() |
661 | + if cfg.implicit_save: |
662 | + cfg.save() |
663 | + |
664 | + def provide_data(self): |
665 | + """ |
666 | + Set the relation data for each provider in the ``provided_data`` list. |
667 | + |
668 | + A provider must have a `name` attribute, which indicates which relation |
669 | + to set data on, and a `provide_data()` method, which returns a dict of |
670 | + data to set. |
671 | + """ |
672 | + hook_name = hookenv.hook_name() |
673 | + for service in self.services.values(): |
674 | + for provider in service.get('provided_data', []): |
675 | + if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name): |
676 | + data = provider.provide_data() |
677 | + _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data |
678 | + if _ready: |
679 | + hookenv.relation_set(None, data) |
680 | + |
681 | + def reconfigure_services(self, *service_names): |
682 | + """ |
683 | + Update all files for one or more registered services, and, |
684 | + if ready, optionally restart them. |
685 | + |
686 | + If no service names are given, reconfigures all registered services. |
687 | + """ |
688 | + for service_name in service_names or self.services.keys(): |
689 | + if self.is_ready(service_name): |
690 | + self.fire_event('data_ready', service_name) |
691 | + self.fire_event('start', service_name, default=[ |
692 | + service_restart, |
693 | + manage_ports]) |
694 | + self.save_ready(service_name) |
695 | + else: |
696 | + if self.was_ready(service_name): |
697 | + self.fire_event('data_lost', service_name) |
698 | + self.fire_event('stop', service_name, default=[ |
699 | + manage_ports, |
700 | + service_stop]) |
701 | + self.save_lost(service_name) |
702 | + |
703 | + def stop_services(self, *service_names): |
704 | + """ |
705 | + Stop one or more registered services, by name. |
706 | + |
707 | + If no service names are given, stops all registered services. |
708 | + """ |
709 | + for service_name in service_names or self.services.keys(): |
710 | + self.fire_event('stop', service_name, default=[ |
711 | + manage_ports, |
712 | + service_stop]) |
713 | + |
714 | + def get_service(self, service_name): |
715 | + """ |
716 | + Given the name of a registered service, return its service definition. |
717 | + """ |
718 | + service = self.services.get(service_name) |
719 | + if not service: |
720 | + raise KeyError('Service not registered: %s' % service_name) |
721 | + return service |
722 | + |
723 | + def fire_event(self, event_name, service_name, default=None): |
724 | + """ |
725 | + Fire a data_ready, data_lost, start, or stop event on a given service. |
726 | + """ |
727 | + service = self.get_service(service_name) |
728 | + callbacks = service.get(event_name, default) |
729 | + if not callbacks: |
730 | + return |
731 | + if not isinstance(callbacks, Iterable): |
732 | + callbacks = [callbacks] |
733 | + for callback in callbacks: |
734 | + if isinstance(callback, ManagerCallback): |
735 | + callback(self, service_name, event_name) |
736 | + else: |
737 | + callback(service_name) |
738 | + |
739 | + def is_ready(self, service_name): |
740 | + """ |
741 | + Determine if a registered service is ready, by checking its 'required_data'. |
742 | + |
743 | + A 'required_data' item can be any mapping type, and is considered ready |
744 | + if `bool(item)` evaluates as True. |
745 | + """ |
746 | + service = self.get_service(service_name) |
747 | + reqs = service.get('required_data', []) |
748 | + return all(bool(req) for req in reqs) |
749 | + |
750 | + def _load_ready_file(self): |
751 | + if self._ready is not None: |
752 | + return |
753 | + if os.path.exists(self._ready_file): |
754 | + with open(self._ready_file) as fp: |
755 | + self._ready = set(json.load(fp)) |
756 | + else: |
757 | + self._ready = set() |
758 | + |
759 | + def _save_ready_file(self): |
760 | + if self._ready is None: |
761 | + return |
762 | + with open(self._ready_file, 'w') as fp: |
763 | + json.dump(list(self._ready), fp) |
764 | + |
765 | + def save_ready(self, service_name): |
766 | + """ |
767 | + Save an indicator that the given service is now data_ready. |
768 | + """ |
769 | + self._load_ready_file() |
770 | + self._ready.add(service_name) |
771 | + self._save_ready_file() |
772 | + |
773 | + def save_lost(self, service_name): |
774 | + """ |
775 | + Save an indicator that the given service is no longer data_ready. |
776 | + """ |
777 | + self._load_ready_file() |
778 | + self._ready.discard(service_name) |
779 | + self._save_ready_file() |
780 | + |
781 | + def was_ready(self, service_name): |
782 | + """ |
783 | + Determine if the given service was previously data_ready. |
784 | + """ |
785 | + self._load_ready_file() |
786 | + return service_name in self._ready |
787 | + |
788 | + |
789 | +class ManagerCallback(object): |
790 | + """ |
791 | + Special case of a callback that takes the `ServiceManager` instance |
792 | + in addition to the service name. |
793 | + |
794 | + Subclasses should implement `__call__` which should accept three parameters: |
795 | + |
796 | + * `manager` The `ServiceManager` instance |
797 | + * `service_name` The name of the service it's being triggered for |
798 | + * `event_name` The name of the event that this callback is handling |
799 | + """ |
800 | + def __call__(self, manager, service_name, event_name): |
801 | + raise NotImplementedError() |
802 | + |
803 | + |
804 | +class PortManagerCallback(ManagerCallback): |
805 | + """ |
806 | + Callback class that will open or close ports, for use as either |
807 | + a start or stop action. |
808 | + """ |
809 | + def __call__(self, manager, service_name, event_name): |
810 | + service = manager.get_service(service_name) |
811 | + new_ports = service.get('ports', []) |
812 | + port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name)) |
813 | + if os.path.exists(port_file): |
814 | + with open(port_file) as fp: |
815 | + old_ports = fp.read().split(',') |
816 | + for old_port in old_ports: |
817 | + if bool(old_port): |
818 | + old_port = int(old_port) |
819 | + if old_port not in new_ports: |
820 | + hookenv.close_port(old_port) |
821 | + with open(port_file, 'w') as fp: |
822 | + fp.write(','.join(str(port) for port in new_ports)) |
823 | + for port in new_ports: |
824 | + if event_name == 'start': |
825 | + hookenv.open_port(port) |
826 | + elif event_name == 'stop': |
827 | + hookenv.close_port(port) |
828 | + |
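The port bookkeeping in `PortManagerCallback` can be illustrated with a dependency-free sketch: the previous hook's ports live comma-separated in a dotfile, stale ports get closed, and the file is rewritten. The function name `diff_ports` is hypothetical; it only models the parse/compare step, not the `hookenv.open_port`/`close_port` calls.

```python
def diff_ports(port_file_text, new_ports):
    """Sketch of PortManagerCallback's bookkeeping: compare the
    comma-separated ports recorded by the previous hook against the
    currently wanted ports. Returns (ports_to_close, new_file_text)."""
    to_close = []
    for old in port_file_text.split(','):
        # An empty file splits to [''], which the truthiness check skips,
        # mirroring the `if bool(old_port)` guard in the diff.
        if old and int(old) not in new_ports:
            to_close.append(int(old))
    return to_close, ','.join(str(p) for p in new_ports)
```

For example, going from `'80,443'` to `[443, 8080]` closes port 80 and rewrites the file as `'443,8080'`.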
829 | + |
830 | +def service_stop(service_name): |
831 | + """ |
832 | + Wrapper around host.service_stop to prevent spurious "unknown service" |
833 | + messages in the logs. |
834 | + """ |
835 | + if host.service_running(service_name): |
836 | + host.service_stop(service_name) |
837 | + |
838 | + |
839 | +def service_restart(service_name): |
840 | + """ |
841 | + Wrapper around host.service_restart to prevent spurious "unknown service" |
842 | + messages in the logs. |
843 | + """ |
844 | + if host.service_available(service_name): |
845 | + if host.service_running(service_name): |
846 | + host.service_restart(service_name) |
847 | + else: |
848 | + host.service_start(service_name) |
849 | + |
850 | + |
851 | +# Convenience aliases |
852 | +open_ports = close_ports = manage_ports = PortManagerCallback() |
853 | |
854 | === added file 'hooks/charmhelpers/core/services/helpers.py' |
855 | --- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000 |
856 | +++ hooks/charmhelpers/core/services/helpers.py 2014-10-14 10:13:22 +0000 |
857 | @@ -0,0 +1,125 @@ |
858 | +from charmhelpers.core import hookenv |
859 | +from charmhelpers.core import templating |
860 | + |
861 | +from charmhelpers.core.services.base import ManagerCallback |
862 | + |
863 | + |
864 | +__all__ = ['RelationContext', 'TemplateCallback', |
865 | + 'render_template', 'template'] |
866 | + |
867 | + |
868 | +class RelationContext(dict): |
869 | + """ |
870 | + Base class for a context generator that gets relation data from juju. |
871 | + |
872 | + Subclasses must provide the attributes `name`, which is the name of the |
873 | + interface of interest, `interface`, which is the type of the interface of |
874 | + interest, and `required_keys`, which is the set of keys required for the |
875 | + relation to be considered complete. The data for all interfaces matching |
876 | + the `name` attribute that are complete will be used to populate the dictionary |
877 | + values (see `get_data`, below). |
878 | + |
879 | + The generated context will be namespaced under the interface type, to prevent |
880 | + potential naming conflicts. |
881 | + """ |
882 | + name = None |
883 | + interface = None |
884 | + required_keys = [] |
885 | + |
886 | + def __init__(self, *args, **kwargs): |
887 | + super(RelationContext, self).__init__(*args, **kwargs) |
888 | + self.get_data() |
889 | + |
890 | + def __bool__(self): |
891 | + """ |
892 | + Returns True if all of the required_keys are available. |
893 | + """ |
894 | + return self.is_ready() |
895 | + |
896 | + __nonzero__ = __bool__ |
897 | + |
898 | + def __repr__(self): |
899 | + return super(RelationContext, self).__repr__() |
900 | + |
901 | + def is_ready(self): |
902 | + """ |
903 | + Returns True if all of the `required_keys` are available from any units. |
904 | + """ |
905 | + ready = len(self.get(self.name, [])) > 0 |
906 | + if not ready: |
907 | + hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG) |
908 | + return ready |
909 | + |
910 | + def _is_ready(self, unit_data): |
911 | + """ |
912 | + Helper method that tests a set of relation data and returns True if |
913 | + all of the `required_keys` are present. |
914 | + """ |
915 | + return set(unit_data.keys()).issuperset(set(self.required_keys)) |
916 | + |
917 | + def get_data(self): |
918 | + """ |
919 | + Retrieve the relation data for each unit involved in a relation and, |
920 | + if complete, store it in a list under `self[self.name]`. This |
921 | + is automatically called when the RelationContext is instantiated. |
922 | + |
923 | + The units are sorted lexicographically first by the service ID, then by |
924 | + the unit ID. Thus, if an interface has two other services, 'db:1' |
925 | + and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1', |
926 | + and 'db:2' having one unit, 'mediawiki/0', all of which have a complete |
927 | + set of data, the relation data for the units will be stored in the |
928 | + order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'. |
929 | + |
930 | + If you only care about a single unit on the relation, you can just |
931 | + access it as `{{ interface[0]['key'] }}`. However, if you can at all |
932 | + support multiple units on a relation, you should iterate over the list, |
933 | + like:: |
934 | + |
935 | + {% for unit in interface -%} |
936 | + {{ unit['key'] }}{% if not loop.last %},{% endif %} |
937 | + {%- endfor %} |
938 | + |
939 | + Note that since all sets of relation data from all related services and |
940 | + units are in a single list, if you need to know which service or unit a |
941 | + set of data came from, you'll need to extend this class to preserve |
942 | + that information. |
943 | + """ |
944 | + if not hookenv.relation_ids(self.name): |
945 | + return |
946 | + |
947 | + ns = self.setdefault(self.name, []) |
948 | + for rid in sorted(hookenv.relation_ids(self.name)): |
949 | + for unit in sorted(hookenv.related_units(rid)): |
950 | + reldata = hookenv.relation_get(rid=rid, unit=unit) |
951 | + if self._is_ready(reldata): |
952 | + ns.append(reldata) |
953 | + |
954 | + def provide_data(self): |
955 | + """ |
956 | + Return data to be relation_set for this interface. |
957 | + """ |
958 | + return {} |
959 | + |
960 | + |
961 | +class TemplateCallback(ManagerCallback): |
962 | + """ |
963 | + Callback class that will render a template, for use as a ready action. |
964 | + """ |
965 | + def __init__(self, source, target, owner='root', group='root', perms=0444): |
966 | + self.source = source |
967 | + self.target = target |
968 | + self.owner = owner |
969 | + self.group = group |
970 | + self.perms = perms |
971 | + |
972 | + def __call__(self, manager, service_name, event_name): |
973 | + service = manager.get_service(service_name) |
974 | + context = {} |
975 | + for ctx in service.get('required_data', []): |
976 | + context.update(ctx) |
977 | + templating.render(self.source, self.target, context, |
978 | + self.owner, self.group, self.perms) |
979 | + |
980 | + |
981 | +# Convenience aliases for templates |
982 | +render_template = template = TemplateCallback |
983 | |
984 | === added file 'hooks/charmhelpers/core/templating.py' |
985 | --- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000 |
986 | +++ hooks/charmhelpers/core/templating.py 2014-10-14 10:13:22 +0000 |
987 | @@ -0,0 +1,51 @@ |
988 | +import os |
989 | + |
990 | +from charmhelpers.core import host |
991 | +from charmhelpers.core import hookenv |
992 | + |
993 | + |
994 | +def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None): |
995 | + """ |
996 | + Render a template. |
997 | + |
998 | + The `source` path, if not absolute, is relative to the `templates_dir`. |
999 | + |
1000 | + The `target` path should be absolute. |
1001 | + |
1002 | + The context should be a dict containing the values to be replaced in the |
1003 | + template. |
1004 | + |
1005 | + The `owner`, `group`, and `perms` options will be passed to `write_file`. |
1006 | + |
1007 | + If omitted, `templates_dir` defaults to the `templates` folder in the charm. |
1008 | + |
1009 | + Note: Using this requires python-jinja2; if it is not installed, calling |
1010 | + this will attempt to use charmhelpers.fetch.apt_install to install it. |
1011 | + """ |
1012 | + try: |
1013 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1014 | + except ImportError: |
1015 | + try: |
1016 | + from charmhelpers.fetch import apt_install |
1017 | + except ImportError: |
1018 | + hookenv.log('Could not import jinja2, and could not import ' |
1019 | + 'charmhelpers.fetch to install it', |
1020 | + level=hookenv.ERROR) |
1021 | + raise |
1022 | + apt_install('python-jinja2', fatal=True) |
1023 | + from jinja2 import FileSystemLoader, Environment, exceptions |
1024 | + |
1025 | + if templates_dir is None: |
1026 | + templates_dir = os.path.join(hookenv.charm_dir(), 'templates') |
1027 | + loader = Environment(loader=FileSystemLoader(templates_dir)) |
1028 | + try: |
1029 | + source = source |
1030 | + template = loader.get_template(source) |
1031 | + except exceptions.TemplateNotFound as e: |
1032 | + hookenv.log('Could not load template %s from %s.' % |
1033 | + (source, templates_dir), |
1034 | + level=hookenv.ERROR) |
1035 | + raise e |
1036 | + content = template.render(context) |
1037 | + host.mkdir(os.path.dirname(target)) |
1038 | + host.write_file(target, content, owner, group, perms) |
1039 | |
1040 | === modified file 'hooks/charmhelpers/fetch/__init__.py' |
1041 | --- hooks/charmhelpers/fetch/__init__.py 2014-06-05 11:04:11 +0000 |
1042 | +++ hooks/charmhelpers/fetch/__init__.py 2014-10-14 10:13:22 +0000 |
1043 | @@ -1,4 +1,5 @@ |
1044 | import importlib |
1045 | +from tempfile import NamedTemporaryFile |
1046 | import time |
1047 | from yaml import safe_load |
1048 | from charmhelpers.core.host import ( |
1049 | @@ -13,7 +14,6 @@ |
1050 | config, |
1051 | log, |
1052 | ) |
1053 | -import apt_pkg |
1054 | import os |
1055 | |
1056 | |
1057 | @@ -117,13 +117,7 @@ |
1058 | |
1059 | def filter_installed_packages(packages): |
1060 | """Returns a list of packages that require installation""" |
1061 | - apt_pkg.init() |
1062 | - |
1063 | - # Tell apt to build an in-memory cache to prevent race conditions (if |
1064 | - # another process is already building the cache). |
1065 | - apt_pkg.config.set("Dir::Cache::pkgcache", "") |
1066 | - |
1067 | - cache = apt_pkg.Cache() |
1068 | + cache = apt_cache() |
1069 | _pkgs = [] |
1070 | for package in packages: |
1071 | try: |
1072 | @@ -136,6 +130,16 @@ |
1073 | return _pkgs |
1074 | |
1075 | |
1076 | +def apt_cache(in_memory=True): |
1077 | + """Build and return an apt cache""" |
1078 | + import apt_pkg |
1079 | + apt_pkg.init() |
1080 | + if in_memory: |
1081 | + apt_pkg.config.set("Dir::Cache::pkgcache", "") |
1082 | + apt_pkg.config.set("Dir::Cache::srcpkgcache", "") |
1083 | + return apt_pkg.Cache() |
1084 | + |
1085 | + |
1086 | def apt_install(packages, options=None, fatal=False): |
1087 | """Install one or more packages""" |
1088 | if options is None: |
1089 | @@ -201,6 +205,27 @@ |
1090 | |
1091 | |
1092 | def add_source(source, key=None): |
1093 | + """Add a package source to this system. |
1094 | + |
1095 | + @param source: a URL or sources.list entry, as supported by |
1096 | + add-apt-repository(1). Examples: |
1097 | + ppa:charmers/example |
1098 | + deb https://stub:key@private.example.com/ubuntu trusty main |
1099 | + |
1100 | + In addition: |
1101 | + 'proposed:' may be used to enable the standard 'proposed' |
1102 | + pocket for the release. |
1103 | + 'cloud:' may be used to activate official cloud archive pockets, |
1104 | + such as 'cloud:icehouse' |
1105 | + |
1106 | + @param key: A key to be added to the system's APT keyring and used |
1107 | + to verify the signatures on packages. Ideally, this should be an |
1108 | + ASCII format GPG public key including the block headers. A GPG key |
1109 | + id may also be used, but be aware that only insecure protocols are |
1110 | + available to retrieve the actual public key from a public keyserver |
1111 | + placing your Juju environment at risk. ppa and cloud archive keys |
1112 | + are securely added automtically, so sould not be provided. |
1113 | + """ |
1114 | if source is None: |
1115 | log('Source is not present. Skipping') |
1116 | return |
1117 | @@ -225,10 +250,23 @@ |
1118 | release = lsb_release()['DISTRIB_CODENAME'] |
1119 | with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt: |
1120 | apt.write(PROPOSED_POCKET.format(release)) |
1121 | + else: |
1122 | + raise SourceConfigError("Unknown source: {!r}".format(source)) |
1123 | + |
1124 | if key: |
1125 | - subprocess.check_call(['apt-key', 'adv', '--keyserver', |
1126 | - 'hkp://keyserver.ubuntu.com:80', '--recv', |
1127 | - key]) |
1128 | + if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key: |
1129 | + with NamedTemporaryFile() as key_file: |
1130 | + key_file.write(key) |
1131 | + key_file.flush() |
1132 | + key_file.seek(0) |
1133 | + subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file) |
1134 | + else: |
1135 | + # Note that hkp: is in no way a secure protocol. Using a |
1136 | + # GPG key id is pointless from a security POV unless you |
1137 | + # absolutely trust your network and DNS. |
1138 | + subprocess.check_call(['apt-key', 'adv', '--keyserver', |
1139 | + 'hkp://keyserver.ubuntu.com:80', '--recv', |
1140 | + key]) |
1141 | |
1142 | |
1143 | def configure_sources(update=False, |
1144 | @@ -238,7 +276,8 @@ |
1145 | Configure multiple sources from charm configuration. |
1146 | |
1147 | The lists are encoded as yaml fragments in the configuration. |
1148 | - The frament needs to be included as a string. |
1149 | + The fragment needs to be included as a string. Sources and their |
1150 | + corresponding keys are of the types supported by add_source(). |
1151 | |
1152 | Example config: |
1153 | install_sources: | |
1154 | |
1155 | === modified file 'hooks/hooks.py' |
1156 | --- hooks/hooks.py 2014-10-08 07:39:16 +0000 |
1157 | +++ hooks/hooks.py 2014-10-14 10:13:22 +0000 |
1158 | @@ -4,6 +4,7 @@ |
1159 | from contextlib import contextmanager |
1160 | import commands |
1161 | import cPickle as pickle |
1162 | +from distutils.version import StrictVersion |
1163 | import glob |
1164 | from grp import getgrnam |
1165 | import os.path |
1166 | @@ -16,6 +17,7 @@ |
1167 | from tempfile import NamedTemporaryFile |
1168 | from textwrap import dedent |
1169 | import time |
1170 | +import urlparse |
1171 | |
1172 | from charmhelpers import fetch |
1173 | from charmhelpers.core import hookenv, host |
1174 | @@ -144,6 +146,7 @@ |
1175 | for relid in hookenv.relation_ids('replication'): |
1176 | hookenv.relation_set(relid, replication_state) |
1177 | |
1178 | + log('saving local state', DEBUG) |
1179 | self.save() |
1180 | |
1181 | |
1182 | @@ -339,8 +342,17 @@ |
1183 | "-e", hookenv.config('encoding')] |
1184 | if hookenv.config('listen_port'): |
1185 | create_cmd.extend(["-p", str(hookenv.config('listen_port'))]) |
1186 | - create_cmd.append(pg_version()) |
1187 | + version = pg_version() |
1188 | + create_cmd.append(version) |
1189 | create_cmd.append(hookenv.config('cluster_name')) |
1190 | + |
1191 | + # With 9.3+, we make an opinionated decision to always enable |
1192 | + # data checksums. This seems to be best practice. We could |
1193 | + # turn this into a configuration item if there is need. There |
1194 | + # is no way to enable this option on existing clusters. |
1195 | + if StrictVersion(version) >= StrictVersion('9.3'): |
1196 | + create_cmd.extend(['--', '--data-checksums']) |
1197 | + |
1198 | run(create_cmd) |
1199 | # Ensure SSL certificates exist, as we enable SSL by default. |
1200 | create_ssl_cert(os.path.join( |
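The data-checksums decision in the hunk above can be sketched as a command builder. This is a simplified, hypothetical reconstruction (`build_createcluster_cmd` is not a function in the charm, and a version tuple stands in for `StrictVersion`); it shows the one behavioural point the diff makes: `--data-checksums` is an initdb-time option, only valid on 9.3+, and cannot be enabled on an existing cluster.

```python
def build_createcluster_cmd(version, cluster_name, port=None):
    """Sketch of the pg_createcluster invocation assembled in
    hooks.py: append '--data-checksums' after the '--' separator
    (arguments after '--' are passed through to initdb) only for
    PostgreSQL 9.3 and later."""
    cmd = ['pg_createcluster', '--locale', 'C', '-e', 'UTF-8']
    if port:
        cmd.extend(['-p', str(port)])
    cmd.append(version)
    cmd.append(cluster_name)
    if tuple(int(p) for p in version.split('.')) >= (9, 3):
        cmd.extend(['--', '--data-checksums'])
    return cmd
```

For a 9.1 cluster the command ends at the cluster name; for 9.3+ the initdb pass-through is appended.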
1201 | @@ -396,11 +408,26 @@ |
1202 | log('Ensuring minimal replication settings') |
1203 | config_data['hot_standby'] = True |
1204 | config_data['wal_level'] = 'hot_standby' |
1205 | - config_data['max_wal_senders'] = max( |
1206 | - num_slaves, config_data['max_wal_senders']) |
1207 | config_data['wal_keep_segments'] = max( |
1208 | config_data['wal_keep_segments'], |
1209 | config_data['replicated_wal_keep_segments']) |
1210 | + # We need this set even if config_data['streaming_replication'] |
1211 | + # is False, because the replication connection is still needed |
1212 | + # by pg_basebackup to build a hot standby. |
1213 | + config_data['max_wal_senders'] = max( |
1214 | + num_slaves, config_data['max_wal_senders']) |
1215 | + |
1216 | + # Log shipping to Swift using SwiftWAL. This could be for |
1217 | + # non-streaming replication, or for PITR. |
1218 | + if config_data.get('swiftwal_log_shipping', None): |
1219 | + config_data['archive_mode'] = True |
1220 | + config_data['wal_level'] = 'hot_standby' |
1221 | + config_data['archive_command'] = swiftwal_archive_command() |
1222 | + |
1223 | + if config_data.get('wal_e_storage_uri', None): |
1224 | + config_data['archive_mode'] = True |
1225 | + config_data['wal_level'] = 'hot_standby' |
1226 | + config_data['archive_command'] = wal_e_archive_command() |
1227 | |
1228 | # Send config data to the template |
1229 | # Return it as pg_config |
1230 | @@ -419,7 +446,7 @@ |
1231 | |
1232 | tune_postgresql_config(config_file) |
1233 | |
1234 | - local_state['saved_config'] = config_data |
1235 | + local_state['saved_config'] = dict(config_data) |
1236 | local_state.save() |
1237 | |
1238 | |
1239 | @@ -597,15 +624,16 @@ |
1240 | def install_postgresql_crontab(output_file): |
1241 | '''Create the postgres user's crontab''' |
1242 | config_data = hookenv.config() |
1243 | - crontab_data = { |
1244 | - 'backup_schedule': config_data["backup_schedule"], |
1245 | - 'scripts_dir': postgresql_scripts_dir, |
1246 | - 'backup_days': config_data["backup_retention_count"], |
1247 | - } |
1248 | + config_data['scripts_dir'] = postgresql_scripts_dir |
1249 | + config_data['swiftwal_backup_command'] = swiftwal_backup_command() |
1250 | + config_data['swiftwal_prune_command'] = swiftwal_prune_command() |
1251 | + config_data['wal_e_backup_command'] = wal_e_backup_command() |
1252 | + config_data['wal_e_prune_command'] = wal_e_prune_command() |
1253 | + |
1254 | charm_dir = hookenv.charm_dir() |
1255 | template_file = "{}/templates/postgres.cron.tmpl".format(charm_dir) |
1256 | crontab_template = Template( |
1257 | - open(template_file).read()).render(crontab_data) |
1258 | + open(template_file).read()).render(config_data) |
1259 | host.write_file(output_file, crontab_template, perms=0600) |
1260 | |
1261 | |
1262 | @@ -624,11 +652,15 @@ |
1263 | charm_dir = hookenv.charm_dir() |
1264 | streaming_replication = hookenv.config('streaming_replication') |
1265 | template_file = "{}/templates/recovery.conf.tmpl".format(charm_dir) |
1266 | - recovery_conf = Template(open(template_file).read()).render({ |
1267 | - 'host': master_host, |
1268 | - 'port': master_port, |
1269 | - 'password': local_state['replication_password'], |
1270 | - 'streaming_replication': streaming_replication}) |
1271 | + params = dict( |
1272 | + host=master_host, port=master_port, |
1273 | + password=local_state['replication_password'], |
1274 | + streaming_replication=streaming_replication) |
1275 | + if hookenv.config('wal_e_storage_uri'): |
1276 | + params['restore_command'] = wal_e_restore_command() |
1277 | + elif hookenv.config('swiftwal_log_shipping'): |
1278 | + params['restore_command'] = swiftwal_restore_command() |
1279 | + recovery_conf = Template(open(template_file).read()).render(params) |
1280 | log(recovery_conf, DEBUG) |
1281 | host.write_file( |
1282 | os.path.join(postgresql_cluster_dir, 'recovery.conf'), |
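The hunk above gives WAL-E precedence over SwiftWAL log shipping when both are configured, and leaves `restore_command` unset when neither is. A standalone sketch of that precedence rule (the command strings and paths below are illustrative assumptions, not the charm's exact `wal_e_restore_command()`/`swiftwal_restore_command()` output):

```python
def pick_restore_command(config):
    """Choose recovery.conf's restore_command the way the hunk above
    does: wal_e_storage_uri wins over swiftwal_log_shipping; with
    neither set, restore_command stays unset (None)."""
    # Hypothetical command strings for illustration only.
    if config.get('wal_e_storage_uri'):
        return 'envdir /etc/postgresql/wal-e.env wal-e wal-fetch "%f" "%p"'
    if config.get('swiftwal_log_shipping'):
        return 'swiftwal --config=/etc/postgresql/swiftwal.conf restore-wal %f %p'
    return None
```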
1283 | @@ -639,17 +671,164 @@ |
1284 | postgresql_restart() |
1285 | |
1286 | |
1287 | -# ------------------------------------------------------------------------------ |
1288 | -# load_postgresql_config: Convenience function that loads (as a string) the |
1289 | -# current postgresql configuration file. |
1290 | -# Returns a string containing the postgresql config or |
1291 | -# None |
1292 | -# ------------------------------------------------------------------------------ |
1293 | -def load_postgresql_config(config_file): |
1294 | - if os.path.isfile(config_file): |
1295 | - return(open(config_file).read()) |
1296 | +def ensure_swift_container(container): |
1297 | + from swiftclient import client as swiftclient |
1298 | + config = hookenv.config() |
1299 | + con = swiftclient.Connection( |
1300 | + authurl=config.get('os_auth_url', ''), |
1301 | + user=config.get('os_username', ''), |
1302 | + key=config.get('os_password', ''), |
1303 | + tenant_name=config.get('os_tenant_name', ''), |
1304 | + auth_version='2.0', |
1305 | + retries=0) |
1306 | + try: |
1307 | + con.head_container(container) |
1308 | + except swiftclient.ClientException: |
1309 | + con.put_container(container) |
1310 | + |
1311 | + |
1312 | +def wal_e_envdir(): |
1313 | + '''The envdir(1) environment location used to drive WAL-E.''' |
1314 | + return os.path.join(_get_postgresql_config_dir(), 'wal-e.env') |
1315 | + |
1316 | + |
1317 | +def create_wal_e_envdir(): |
1318 | + '''Regenerate the envdir(1) environment used to drive WAL-E.''' |
1319 | + config = hookenv.config() |
1320 | + env = dict( |
1321 | + SWIFT_AUTHURL=config.get('os_auth_url', ''), |
1322 | + SWIFT_TENANT=config.get('os_tenant_name', ''), |
1323 | + SWIFT_USER=config.get('os_username', ''), |
1324 | + SWIFT_PASSWORD=config.get('os_password', ''), |
1325 | + AWS_ACCESS_KEY_ID=config.get('aws_access_key_id', ''), |
1326 | + AWS_SECRET_ACCESS_KEY=config.get('aws_secret_access_key', ''), |
1327 | + WABS_ACCOUNT_NAME=config.get('wabs_account_name', ''), |
1328 | + WABS_ACCESS_KEY=config.get('wabs_access_key', ''), |
1329 | + WALE_SWIFT_PREFIX='', |
1330 | + WALE_S3_PREFIX='', |
1331 | + WALE_WABS_PREFIX='') |
1332 | + |
1333 | + uri = config.get('wal_e_storage_uri', None) |
1334 | + |
1335 | + if uri: |
1336 | + # Until juju provides us with proper leader election, we have a |
1337 | + # state where units do not know if they are alone or part of a |
1338 | + # cluster. To avoid units stomping on each other's WAL and backups, |
1339 | + # we use a unique container for each unit when they are not |
1340 | + # part of the peer relation. Once they are part of the peer |
1341 | + # relation, they share a container. |
1342 | + if local_state.get('state', 'standalone') == 'standalone': |
1343 | + if not uri.endswith('/'): |
1344 | + uri += '/' |
1345 | + uri += hookenv.local_unit().split('/')[-1] |
1346 | + |
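Until proper leader election lands, a standalone unit appends its unit number to the storage URI so that units never share WAL storage. The suffixing logic in isolation (unit names like `postgresql/3` are Juju's format; the function name is invented for illustration):

```python
def unit_storage_uri(uri, unit_name, state='standalone'):
    """Append the unit number (e.g. '3' from 'postgresql/3') to the
    storage URI when the unit is standalone, mirroring the code above.
    Units in the peer relation share the unsuffixed URI."""
    if state != 'standalone':
        return uri
    if not uri.endswith('/'):
        uri += '/'
    return uri + unit_name.split('/')[-1]
```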
1347 | + parsed_uri = urlparse.urlparse(uri) |
1348 | + |
1349 | + required_env = [] |
1350 | + if parsed_uri.scheme == 'swift': |
1351 | + env['WALE_SWIFT_PREFIX'] = uri |
1352 | + required_env = ['SWIFT_AUTHURL', 'SWIFT_TENANT', |
1353 | + 'SWIFT_USER', 'SWIFT_PASSWORD'] |
1354 | + ensure_swift_container(parsed_uri.netloc) |
1355 | + elif parsed_uri.scheme == 's3': |
1356 | + env['WALE_S3_PREFIX'] = uri |
1357 | + required_env = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'] |
1358 | + elif parsed_uri.scheme == 'wabs': |
1359 | + env['WALE_WABS_PREFIX'] = uri |
1360 | + required_env = ['WABS_ACCOUNT_NAME', 'WABS_ACCESS_KEY'] |
1361 | + else: |
1362 | + log('Invalid wal_e_storage_uri {}'.format(uri), ERROR) |
1363 | + |
1364 | + for env_key in required_env: |
1365 | + if not env[env_key].strip(): |
1366 | + log('Missing {}'.format(env_key), ERROR) |
1367 | + |
1368 | + # Regenerate the envdir(1) environment recommended by WAL-E. |
1369 | + # All possible keys are rewritten to ensure we remove old secrets. |
1370 | + host.mkdir(wal_e_envdir(), 'postgres', 'postgres', 0o750) |
1371 | + for k, v in env.items(): |
1372 | + host.write_file( |
1373 | + os.path.join(wal_e_envdir(), k), v.strip(), |
1374 | + 'postgres', 'postgres', 0o640) |
1375 | + |
1376 | + |
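Each storage scheme needs a different subset of the credentials written into the envdir. The scheme-to-credentials validation can be sketched on its own (written for Python 3, although the hooks themselves target Python 2; the helper name is invented):

```python
from urllib.parse import urlparse

# Which credential keys each wal_e_storage_uri scheme requires,
# mirroring the required_env lists in the hunk above.
REQUIRED_ENV = {
    'swift': ['SWIFT_AUTHURL', 'SWIFT_TENANT', 'SWIFT_USER', 'SWIFT_PASSWORD'],
    's3': ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'],
    'wabs': ['WABS_ACCOUNT_NAME', 'WABS_ACCESS_KEY'],
}

def missing_credentials(uri, env):
    """Return the credential keys the given URI's scheme needs but
    which are blank or absent in env."""
    required = REQUIRED_ENV.get(urlparse(uri).scheme)
    if required is None:
        raise ValueError('Invalid wal_e_storage_uri {}'.format(uri))
    return [k for k in required if not env.get(k, '').strip()]
```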
1377 | +def wal_e_archive_command(): |
1378 | + '''Return the archive_command needed in postgresql.conf.''' |
1379 | + return 'envdir {} wal-e wal-push %p'.format(wal_e_envdir()) |
1380 | + |
1381 | + |
1382 | +def wal_e_restore_command(): |
1383 | + return 'envdir {} wal-e wal-fetch "%f" "%p"'.format(wal_e_envdir()) |
1384 | + |
1385 | + |
1386 | +def wal_e_backup_command(): |
1387 | + postgresql_cluster_dir = os.path.join( |
1388 | + postgresql_data_dir, pg_version(), hookenv.config('cluster_name')) |
1389 | + return 'envdir {} wal-e backup-push {}'.format( |
1390 | + wal_e_envdir(), postgresql_cluster_dir) |
1391 | + |
1392 | + |
1393 | +def wal_e_prune_command(): |
1394 | + return 'envdir {} wal-e delete --confirm retain {}'.format( |
1395 | + wal_e_envdir(), hookenv.config('wal_e_backup_retention')) |
1396 | + |
1397 | + |
1398 | +def swiftwal_config(): |
1399 | + postgresql_config_dir = _get_postgresql_config_dir() |
1400 | + return os.path.join(postgresql_config_dir, "swiftwal.conf") |
1401 | + |
1402 | + |
1403 | +def create_swiftwal_config(): |
1404 | + if not hookenv.config('swiftwal_container_prefix'): |
1405 | + return |
1406 | + |
1407 | + # Until juju provides us with proper leader election, we have a |
1408 | + # state where units do not know if they are alone or part of a |
1409 | + # cluster. To avoid units stomping on each other's WAL and backups, |
1410 | + # we use a unique Swift container for each unit when they are not |
1411 | + # part of the peer relation. Once they are part of the peer |
1412 | + # relation, they share a container. |
1413 | + if local_state.get('state', 'standalone') == 'standalone': |
1414 | + container = '{}_{}'.format(hookenv.config('swiftwal_container_prefix'), |
1415 | + hookenv.local_unit().split('/')[-1]) |
1416 | else: |
1417 | - return(None) |
1418 | + container = hookenv.config('swiftwal_container_prefix') |
1419 | + |
1420 | + template_file = os.path.join(hookenv.charm_dir(), |
1421 | + 'templates', 'swiftwal.conf.tmpl') |
1422 | + params = dict(hookenv.config()) |
1423 | + params['swiftwal_container'] = container |
1424 | + content = Template(open(template_file).read()).render(params) |
1425 | + host.write_file(swiftwal_config(), content, "postgres", "postgres", 0o600) |
1426 | + |
1427 | + |
1428 | +def swiftwal_archive_command(): |
1429 | + '''Return the archive_command needed in postgresql.conf''' |
1430 | + return 'swiftwal --config={} archive-wal %p'.format(swiftwal_config()) |
1431 | + |
1432 | + |
1433 | +def swiftwal_restore_command(): |
1434 | + '''Return the restore_command needed in recovery.conf''' |
1435 | + return 'swiftwal --config={} restore-wal %f %p'.format(swiftwal_config()) |
1436 | + |
1437 | + |
1438 | +def swiftwal_backup_command(): |
1439 | + '''Return the backup command needed in postgres' crontab''' |
1440 | + cmd = 'swiftwal --config={} backup --port={}'.format(swiftwal_config(), |
1441 | + get_service_port()) |
1442 | + if not hookenv.config('swiftwal_log_shipping'): |
1443 | + cmd += ' --xlog' |
1444 | + return cmd |
1445 | + |
1446 | + |
1447 | +def swiftwal_prune_command(): |
1448 | + '''Return the backup & wal pruning command needed in postgres' crontab''' |
1449 | + config = hookenv.config() |
1450 | + args = '--keep-backups={} --keep-wals={}'.format( |
1451 | + config.get('swiftwal_backup_retention', 0), |
1452 | + max(config['wal_keep_segments'], |
1453 | + config['replicated_wal_keep_segments'])) |
1454 | + return 'swiftwal --config={} prune {}'.format(swiftwal_config(), args) |
1455 | |
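The prune command keeps enough WAL segments for whichever of the two `wal_keep_segments` settings is larger, so a standby catching up via either path can still find its segments. The argument building in isolation (config keys as in `config.yaml`; the function name is illustrative):

```python
def swiftwal_prune_args(config):
    """Build the prune arguments as swiftwal_prune_command() does:
    backups retained per swiftwal_backup_retention, WALs retained per
    the larger of the two keep-segments settings."""
    return '--keep-backups={} --keep-wals={}'.format(
        config.get('swiftwal_backup_retention', 0),
        max(config['wal_keep_segments'],
            config['replicated_wal_keep_segments']))
```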
1456 | |
1457 | def update_service_port(): |
1458 | @@ -810,14 +989,14 @@ |
1459 | dpkg.communicate(input=selections) |
1460 | |
1461 | |
1462 | -# ------------------------------------------------------------------------------ |
1463 | +# ----------------------------------------------------------------------------- |
1464 | # Core logic for permanent storage changes: |
1465 | # NOTE the only 2 "True" return points: |
1466 | # 1) symlink already pointing to existing storage (no-op) |
1467 | # 2) new storage properly initialized: |
1468 | # - if fresh new storage dir: rsync existing data |
1469 | # - manipulate /var/lib/postgresql/VERSION/CLUSTER symlink |
1470 | -# ------------------------------------------------------------------------------ |
1471 | +# ----------------------------------------------------------------------------- |
1472 | def config_changed_volume_apply(mount_point): |
1473 | version = pg_version() |
1474 | cluster_name = hookenv.config('cluster_name') |
1475 | @@ -900,13 +1079,6 @@ |
1476 | return False |
1477 | |
1478 | |
1479 | -def token_sql_safe(value): |
1480 | - # Only allow alphanumeric + underscore in database identifiers |
1481 | - if re.search('[^A-Za-z0-9_]', value): |
1482 | - return False |
1483 | - return True |
1484 | - |
1485 | - |
1486 | @hooks.hook() |
1487 | def config_changed(force_restart=False, mount_point=None): |
1488 | validate_config() |
1489 | @@ -939,28 +1111,45 @@ |
1490 | generate_postgresql_hba(postgresql_hba) |
1491 | create_ssl_cert(os.path.join( |
1492 | postgresql_data_dir, pg_version(), config_data['cluster_name'])) |
1493 | + create_swiftwal_config() |
1494 | + create_wal_e_envdir() |
1495 | update_service_port() |
1496 | update_nrpe_checks() |
1497 | |
1498 | - # Ensure client credentials match in the case that an external mountpoint |
1499 | - # has been mounted with an existing DB. |
1500 | - client_relids = (hookenv.relation_ids('db') |
1501 | - + hookenv.relation_ids('db-admin')) |
1502 | - for relid in client_relids: |
1503 | - rel = hookenv.relation_get(rid=relid, unit=hookenv.local_unit()) |
1504 | - |
1505 | - database = rel.get('database') |
1506 | - roles = filter(None, (rel.get('roles') or '').split(",")) |
1507 | - user = rel['user'] |
1508 | - password = create_user(user) |
1509 | - reset_user_roles(user, roles) |
1510 | - schema_user = rel['schema_user'] |
1511 | - schema_password = create_user(schema_user) |
1512 | - hookenv.relation_set(relid, |
1513 | - password=password, |
1514 | - schema_password=schema_password) |
1515 | - if not (database is None or database == 'all'): |
1516 | - ensure_database(user, schema_user, database) |
1517 | + # If an external mountpoint has caused an old, existing DB to be |
1518 | + # mounted, we need to ensure that all the users, databases, roles |
1519 | + # etc. exist with known passwords. |
1520 | + if local_state['state'] in ('standalone', 'master'): |
1521 | + client_relids = ( |
1522 | + hookenv.relation_ids('db') + hookenv.relation_ids('db-admin')) |
1523 | + for relid in client_relids: |
1524 | + rel = hookenv.relation_get(rid=relid, unit=hookenv.local_unit()) |
1525 | + client_rel = None |
1526 | + for unit in hookenv.related_units(relid): |
1527 | + client_rel = hookenv.relation_get(unit=unit, rid=relid) |
1528 | + if not client_rel: |
1529 | + continue # No client units - in between departed and broken? |
1530 | + |
1531 | + database = rel.get('database') |
1532 | + if database is None: |
1533 | + continue # The relation exists, but we haven't joined it yet. |
1534 | + |
1535 | + roles = filter(None, (client_rel.get('roles') or '').split(",")) |
1536 | + user = rel.get('user') |
1537 | + if user: |
1538 | + admin = relid.startswith('db-admin') |
1539 | + password = create_user(user, admin=admin) |
1540 | + reset_user_roles(user, roles) |
1541 | + hookenv.relation_set(relid, password=password) |
1542 | + |
1543 | + schema_user = rel.get('schema_user') |
1544 | + if schema_user: |
1545 | + schema_password = create_user(schema_user) |
1546 | + hookenv.relation_set(relid, schema_password=schema_password) |
1547 | + |
1548 | + if user and schema_user and not ( |
1549 | + database is None or database == 'all'): |
1550 | + ensure_database(user, schema_user, database) |
1551 | |
1552 | if force_restart: |
1553 | postgresql_restart() |
1554 | @@ -984,11 +1173,13 @@ |
1555 | config_data = hookenv.config() |
1556 | update_repos_and_packages() |
1557 | if 'state' not in local_state: |
1558 | + log('state not in {}'.format(local_state.keys()), DEBUG) |
1559 | # Fresh installation. Because this function is invoked by both |
1560 | # the install hook and the upgrade-charm hook, we need to guard |
1561 | # any non-idempotent setup. We should probably fix this; it |
1562 | # seems rather fragile. |
1563 | local_state.setdefault('state', 'standalone') |
1564 | + log(repr(local_state.keys()), DEBUG) |
1565 | |
1566 | # Drop the cluster created when the postgresql package was |
1567 | # installed, and rebuild it with the requested locale and encoding. |
1568 | @@ -1008,6 +1199,7 @@ |
1569 | 'allocated port {!r} != {!r}'.format( |
1570 | get_service_port(), config_data['listen_port'])) |
1571 | local_state['port'] = get_service_port() |
1572 | + log('publishing state', DEBUG) |
1573 | local_state.publish() |
1574 | |
1575 | postgresql_backups_dir = ( |
1576 | @@ -1598,12 +1790,21 @@ |
1577 | "postgresql-contrib-{}".format(version), |
1578 | "postgresql-plpython-{}".format(version), |
1579 | "python-jinja2", "python-psycopg2"] |
1580 | + |
1581 | # PGDG currently doesn't have debversion for 9.3 & 9.4. Put this back |
1582 | # when it does. |
1583 | if not (hookenv.config('pgdg') and version in ('9.3', '9.4')): |
1584 | packages.append("postgresql-{}-debversion".format(version)) |
1585 | + |
1586 | if hookenv.config('performance_tuning').lower() != 'manual': |
1587 | packages.append('pgtune') |
1588 | + |
1589 | + if hookenv.config('swiftwal_container_prefix'): |
1590 | + packages.append('swiftwal') |
1591 | + |
1592 | + if hookenv.config('wal_e_storage_uri'): |
1593 | + packages.extend(['wal-e', 'daemontools']) |
1594 | + |
1595 | packages.extend((hookenv.config('extra-packages') or '').split()) |
1596 | packages = fetch.filter_installed_packages(packages) |
1597 | # Set package state for main postgresql package if installed |
1598 | @@ -1644,9 +1845,10 @@ |
1599 | |
1600 | def authorized_by(unit): |
1601 | '''Return True if the peer has authorized our database connections.''' |
1602 | - relation = hookenv.relation_get(unit=unit) |
1603 | - authorized = relation.get('authorized', '').split() |
1604 | - return hookenv.local_unit() in authorized |
1605 | + for relid in hookenv.relation_ids('replication'): |
1606 | + relation = hookenv.relation_get(unit=unit, rid=relid) |
1607 | + authorized = relation.get('authorized', '').split() |
1608 | + return hookenv.local_unit() in authorized |
1609 | |
1610 | |
1611 | def promote_database(): |
1612 | @@ -1867,6 +2069,12 @@ |
1613 | _get_postgresql_config_dir(), "pg_hba.conf") |
1614 | generate_postgresql_hba(postgresql_hba) |
1615 | |
1616 | + # Swift container name may have changed, so regenerate the SwiftWAL |
1617 | + # config. This can go away when we have real leader election and can |
1618 | + # safely share a single container. |
1619 | + create_swiftwal_config() |
1620 | + create_wal_e_envdir() |
1621 | + |
1622 | local_state.publish() |
1623 | |
1624 | |
1625 | @@ -1884,6 +2092,13 @@ |
1626 | the master and hot standby joined the client relation. |
1627 | ''' |
1628 | master = local_state['following'] |
1629 | + if not master: |
1630 | + log("I will be a hot standby, but no master yet") |
1631 | + return |
1632 | + |
1633 | + if not authorized_by(master): |
1634 | + log("Master {} has not yet authorized us".format(master)) |
1635 | + return |
1636 | |
1637 | client_relations = hookenv.relation_get( |
1638 | 'client_relations', master, hookenv.relation_ids('replication')[0]) |
1639 | @@ -1926,7 +2141,7 @@ |
1640 | # Block until users and database has replicated, so we know the |
1641 | # connection details we publish are actually valid. This will |
1642 | # normally be pretty much instantaneous. |
1643 | - timeout = 900 |
1644 | + timeout = 60 |
1645 | start = time.time() |
1646 | while time.time() < start + timeout: |
1647 | cur = db_cursor(autocommit=True) |
1648 | @@ -2352,7 +2567,6 @@ |
1649 | postgresql_sysctl = "/etc/sysctl.d/50-postgresql.conf" |
1650 | postgresql_crontab = "/etc/cron.d/postgresql" |
1651 | postgresql_service_config_dir = "/var/run/postgresql" |
1652 | -replication_relation_types = ['master', 'slave', 'replication'] |
1653 | local_state = State('local_state.pickle') |
1654 | hook_name = os.path.basename(sys.argv[0]) |
1655 | juju_log_dir = "/var/log/juju" |
1656 | @@ -2369,3 +2583,4 @@ |
1657 | log("Relation {} with {}".format( |
1658 | hookenv.relation_id(), hookenv.remote_unit())) |
1659 | hooks.execute(sys.argv) |
1660 | + log("Completed {} hook".format(hook_name)) |
1661 | |
1662 | === added directory 'lib/test-client-charm' |
1663 | === added file 'lib/test-client-charm/config.yaml' |
1664 | --- lib/test-client-charm/config.yaml 1970-01-01 00:00:00 +0000 |
1665 | +++ lib/test-client-charm/config.yaml 2014-10-14 10:13:22 +0000 |
1666 | @@ -0,0 +1,13 @@ |
1667 | +options: |
1668 | + database: |
1669 | + default: "" |
1670 | + type: string |
1671 | + description: | |
1672 | + Database to connect 'db' relationships to, overriding the |
1673 | + generated default database name. |
1674 | + roles: |
1675 | + default: "" |
1676 | + type: string |
1677 | + description: | |
1678 | + Comma separated list of roles for PostgreSQL to grant to the database |
1679 | + user. |
1680 | |
1681 | === added directory 'lib/test-client-charm/hooks' |
1682 | === added symlink 'lib/test-client-charm/hooks/config-changed' |
1683 | === target is u'hooks.py' |
1684 | === added symlink 'lib/test-client-charm/hooks/db-admin-relation-broken' |
1685 | === target is u'./hooks.py' |
1686 | === added symlink 'lib/test-client-charm/hooks/db-admin-relation-changed' |
1687 | === target is u'./hooks.py' |
1688 | === added symlink 'lib/test-client-charm/hooks/db-admin-relation-joined' |
1689 | === target is u'./hooks.py' |
1690 | === added symlink 'lib/test-client-charm/hooks/db-relation-broken' |
1691 | === target is u'./hooks.py' |
1692 | === added symlink 'lib/test-client-charm/hooks/db-relation-changed' |
1693 | === target is u'./hooks.py' |
1694 | === added symlink 'lib/test-client-charm/hooks/db-relation-joined' |
1695 | === target is u'./hooks.py' |
1696 | === added file 'lib/test-client-charm/hooks/hooks.py' |
1697 | --- lib/test-client-charm/hooks/hooks.py 1970-01-01 00:00:00 +0000 |
1698 | +++ lib/test-client-charm/hooks/hooks.py 2014-10-14 10:13:22 +0000 |
1699 | @@ -0,0 +1,165 @@ |
1700 | +#!/usr/bin/env python |
1701 | + |
1702 | +import os.path |
1703 | +import re |
1704 | +import shutil |
1705 | +import sys |
1706 | +from textwrap import dedent |
1707 | + |
1708 | +from charmhelpers.core import hookenv, host |
1709 | +from charmhelpers.core.hookenv import log, DEBUG, INFO |
1710 | +from charmhelpers import fetch |
1711 | + |
1712 | + |
1713 | +CLIENT_RELATION_TYPES = frozenset(['db', 'db-admin']) |
1714 | + |
1715 | +DATA_DIR = os.path.join( |
1716 | + '/var/lib/units', hookenv.local_unit().replace('/', '-')) |
1717 | +SCRIPT_DIR = os.path.join(DATA_DIR, 'bin') |
1718 | +PGPASS_DIR = os.path.join(DATA_DIR, 'pgpass') |
1719 | + |
1720 | + |
1721 | +def update_system_path(): |
1722 | + org_lines = open('/etc/environment', 'rb').readlines() |
1723 | + env_lines = [] |
1724 | + |
1725 | + for line in org_lines: |
1726 | + if line.startswith('PATH=') and SCRIPT_DIR not in line: |
1727 | + line = re.sub( |
1728 | + """(['"]?)$""", |
1729 | + ":{}\\1".format(SCRIPT_DIR), |
1730 | + line, 1) |
1731 | + env_lines.append(line) |
1732 | + |
1733 | + if org_lines != env_lines: |
1734 | + content = ''.join(env_lines)  # readlines() keeps each newline |
1735 | + host.write_file('/etc/environment', content, perms=0o644) |
1736 | + |
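`update_system_path()` splices `SCRIPT_DIR` into /etc/environment's PATH line by substituting just before an optional closing quote. The regex behaviour on a single line, with sample values (the helper name here is invented for illustration):

```python
import re

def append_to_path_line(line, script_dir):
    """Insert ':<script_dir>' before the closing quote (if any) of a
    PATH= line, as update_system_path() does; already-present dirs
    and non-PATH lines pass through unchanged."""
    if line.startswith('PATH=') and script_dir not in line:
        line = re.sub("""(['"]?)$""",
                      ":{}\\1".format(script_dir), line, count=1)
    return line
```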
1737 | + |
1738 | +def all_relations(relation_types=CLIENT_RELATION_TYPES): |
1739 | + for reltype in relation_types: |
1740 | + for relid in hookenv.relation_ids(reltype): |
1741 | + for unit in hookenv.related_units(relid): |
1742 | + yield reltype, relid, unit, hookenv.relation_get( |
1743 | + unit=unit, rid=relid) |
1744 | + |
1745 | + |
1746 | +def rebuild_all_relations(): |
1747 | + config = hookenv.config() |
1748 | + |
1749 | + # Clear out old scripts and pgpass files |
1750 | + if os.path.exists(SCRIPT_DIR): |
1751 | + shutil.rmtree(SCRIPT_DIR) |
1752 | + if os.path.exists(PGPASS_DIR): |
1753 | + shutil.rmtree(PGPASS_DIR) |
1754 | + host.mkdir(DATA_DIR, perms=0o755) |
1755 | + host.mkdir(SCRIPT_DIR, perms=0o755) |
1756 | + host.mkdir(PGPASS_DIR, group='ubuntu', perms=0o750) |
1757 | + |
1758 | + for _, relid, unit, relation in all_relations(relation_types=['db']): |
1759 | + log("{} {} {!r}".format(relid, unit, relation), DEBUG) |
1760 | + |
1761 | + def_str = '<DEFAULT>' |
1762 | + if config['database'] != relation.get('database', ''): |
1763 | + log("Switching from database {} to {}".format( |
1764 | + relation.get('database', '') or def_str, |
1765 | + config['database'] or def_str), INFO) |
1766 | + |
1767 | + if config['roles'] != relation.get('roles', ''): |
1768 | + log("Updating granted roles from {} to {}".format( |
1769 | + relation.get('roles', '') or def_str, |
1770 | + config['roles'] or def_str)) |
1771 | + |
1772 | + hookenv.relation_set( |
1773 | + relid, database=config['database'], roles=config['roles']) |
1774 | + |
1775 | + if 'user' in relation: |
1776 | + rebuild_relation(relid, unit, relation) |
1777 | + |
1778 | + for _, relid, unit, relation in all_relations(relation_types=['db-admin']): |
1779 | + log("{} {} {!r}".format(relid, unit, relation), DEBUG) |
1780 | + if 'user' in relation: |
1781 | + rebuild_relation(relid, unit, relation) |
1782 | + |
1783 | + |
1784 | +def rebuild_relation(relid, unit, relation): |
1785 | + relname = relid.split(':')[0] |
1786 | + unitname = unit.replace('/', '-') |
1787 | + this_unit = hookenv.local_unit() |
1788 | + |
1789 | + allowed_units = relation.get('allowed-units', '') |
1790 | + if this_unit not in allowed_units.split(): |
1791 | + log("Not yet authorized on {}".format(relid), INFO) |
1792 | + return |
1793 | + |
1794 | + script_name = 'psql-{}-{}'.format(relname, unitname) |
1795 | + build_script(script_name, relation) |
1796 | + state = relation.get('state', None) |
1797 | + if state in ('master', 'hot standby'): |
1798 | + script_name = 'psql-{}-{}'.format(relname, state.replace(' ', '-')) |
1799 | + build_script(script_name, relation) |
1800 | + |
1801 | + |
1802 | +def build_script(script_name, relation): |
1803 | + # Install a wrapper to psql that connects it to the desired database |
1804 | + # by default. One wrapper per unit per relation. |
1805 | + script_path = os.path.abspath(os.path.join(SCRIPT_DIR, script_name)) |
1806 | + pgpass_path = os.path.abspath(os.path.join(PGPASS_DIR, script_name)) |
1807 | + script = dedent("""\ |
1808 | + #!/bin/sh |
1809 | + exec env \\ |
1810 | + PGHOST={host} PGPORT={port} PGDATABASE={database} \\ |
1811 | + PGUSER={user} PGPASSFILE={pgpass} \\ |
1812 | + psql "$@" |
1813 | + """).format( |
1814 | + host=relation['host'], |
1815 | + port=relation['port'], |
1816 | + database=relation.get('database', ''), # db-admin has no database |
1817 | + user=relation['user'], |
1818 | + pgpass=pgpass_path) |
1819 | + log("Generating wrapper {}".format(script_path), INFO) |
1820 | + host.write_file( |
1821 | + script_path, script, owner="ubuntu", group="ubuntu", perms=0o700) |
1822 | + |
1823 | + # The wrapper requires access to the password, stored in a .pgpass |
1824 | + # file so it isn't exposed in an environment variable or on the |
1825 | + # command line. |
1826 | + pgpass = "*:*:*:{user}:{password}".format( |
1827 | + user=relation['user'], password=relation['password']) |
1828 | + host.write_file( |
1829 | + pgpass_path, pgpass, owner="ubuntu", group="ubuntu", perms=0o400) |
1830 | + |
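Keeping the password in a .pgpass file referenced via PGPASSFILE keeps it off the command line and out of `ps` output; a wildcard host/port/database entry scoped to one user is sufficient here. The entry follows libpq's `host:port:database:user:password` format:

```python
def pgpass_entry(user, password):
    """Build a .pgpass line matching any host, port and database for
    a single user, as the wrapper above writes."""
    return '*:*:*:{user}:{password}'.format(user=user, password=password)
```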
1831 | + |
1832 | +hooks = hookenv.Hooks() |
1833 | + |
1834 | + |
1835 | +@hooks.hook() |
1836 | +def install(): |
1837 | + fetch.apt_install( |
1838 | + ['language-pack-en', 'postgresql-client', 'python-psycopg2'], |
1839 | + fatal=True) |
1840 | + update_system_path() |
1841 | + |
1842 | + |
1843 | +@hooks.hook() |
1844 | +def upgrade_charm(): |
1845 | + # Per Bug #1205286, we can't store scripts and passwords in the |
1846 | + # charm directory. |
1847 | + if os.path.exists('bin'): |
1848 | + shutil.rmtree('bin') |
1849 | + if os.path.exists('pgpass'): |
1850 | + shutil.rmtree('pgpass') |
1851 | + update_system_path() |
1852 | + return rebuild_all_relations() |
1853 | + |
1854 | + |
1855 | +@hooks.hook( |
1856 | + 'config-changed', 'db-admin-relation-broken', |
1857 | + 'db-admin-relation-changed', 'db-admin-relation-joined', |
1858 | + 'db-relation-broken', 'db-relation-changed', 'db-relation-joined') |
1859 | +def rebuild_hook(): |
1860 | + return rebuild_all_relations() |
1861 | + |
1862 | + |
1863 | +if __name__ == '__main__': |
1864 | + hooks.execute(sys.argv) |
1865 | |
1866 | === added symlink 'lib/test-client-charm/hooks/install' |
1867 | === target is u'./hooks.py' |
1868 | === added file 'lib/test-client-charm/hooks/start' |
1869 | --- lib/test-client-charm/hooks/start 1970-01-01 00:00:00 +0000 |
1870 | +++ lib/test-client-charm/hooks/start 2014-10-14 10:13:22 +0000 |
1871 | @@ -0,0 +1,3 @@ |
1872 | +#!/bin/sh |
1873 | +# Do nothing. |
1874 | +exit 0 |
1875 | |
1876 | === added file 'lib/test-client-charm/hooks/stop' |
1877 | --- lib/test-client-charm/hooks/stop 1970-01-01 00:00:00 +0000 |
1878 | +++ lib/test-client-charm/hooks/stop 2014-10-14 10:13:22 +0000 |
1879 | @@ -0,0 +1,3 @@ |
1880 | +#!/bin/sh |
1881 | +# Do nothing. |
1882 | +exit 0 |
1883 | |
1884 | === added symlink 'lib/test-client-charm/hooks/upgrade-charm' |
1885 | === target is u'hooks.py' |
1886 | === added file 'lib/test-client-charm/metadata.yaml' |
1887 | --- lib/test-client-charm/metadata.yaml 1970-01-01 00:00:00 +0000 |
1888 | +++ lib/test-client-charm/metadata.yaml 2014-10-14 10:13:22 +0000 |
1889 | @@ -0,0 +1,13 @@ |
1890 | +name: postgresql-psql |
1891 | +summary: psql command line access to PostgreSQL services. |
1892 | +maintainer: stuart.bishop@canonical.com |
1893 | +description: | |
1894 | + Access to related PostgreSQL services via the |
1895 | + standard psql command line utility. |
1896 | +categories: |
1897 | + - databases |
1898 | +requires: |
1899 | + db: |
1900 | + interface: pgsql |
1901 | + db-admin: |
1902 | + interface: pgsql |
1903 | |
1904 | === modified file 'templates/postgres.cron.tmpl' |
1905 | --- templates/postgres.cron.tmpl 2013-11-19 11:38:45 +0000 |
1906 | +++ templates/postgres.cron.tmpl 2014-10-14 10:13:22 +0000 |
1907 | @@ -1,14 +1,16 @@ |
1908 | -{{backup_schedule}} postgres {{scripts_dir}}/pg_backup_job {{backup_days}} |
1909 | -{% if swiftwal_container -%} |
1910 | +# Maintained by juju |
1911 | +# |
1912 | +{% if backup_schedule -%} |
1913 | +{{backup_schedule}} postgres \ |
1914 | + {{scripts_dir}}/pg_backup_job {{backup_retention_count}} |
1915 | +{% endif -%} |
1916 | + |
1917 | {% if swiftwal_backup_schedule -%} |
1918 | -{% if swiftwal_log_shipping -%} |
1919 | -{{swiftwal_backup_schedule}} postgres \ |
1920 | - swiftwal --config={{swiftwal_config}} backup && \ |
1921 | - swiftwal --config={{swiftwal_config}} prune -n {{swiftwal_backup_retention}} |
1922 | -{% else -%} |
1923 | -{{swiftwal_backup_schedule}} postgres \ |
1924 | - swiftwal --config={{swiftwal_config}} backup --xlog && \ |
1925 | - swiftwal --config={{swiftwal_config}} prune -n {{swiftwal_backup_retention}} |
1926 | -{% endif -%} |
1927 | -{% endif -%} |
1928 | +{{swiftwal_backup_schedule}} postgres \ |
1929 | + {{swiftwal_backup_command}} && {{swiftwal_prune_command}} |
1930 | +{% endif -%} |
1931 | + |
1932 | +{% if wal_e_backup_schedule -%} |
1933 | +{{wal_e_backup_schedule}} postgres \ |
1934 | + {{wal_e_backup_command}} && {{wal_e_prune_command}} |
1935 | {% endif -%} |
1936 | |
1937 | === modified file 'templates/postgresql.conf.tmpl' |
1938 | --- templates/postgresql.conf.tmpl 2014-05-13 21:05:40 +0000 |
1939 | +++ templates/postgresql.conf.tmpl 2014-10-14 10:13:22 +0000 |
1940 | @@ -120,6 +120,9 @@ |
1941 | {% if log_lock_waits != "" -%} |
1942 | log_lock_waits = {{log_lock_waits}} |
1943 | {% endif -%} |
1944 | +{% if log_temp_files != "" -%} |
1945 | +log_temp_files = {{log_temp_files}} |
1946 | +{% endif -%} |
1947 | |
1948 | log_timezone = UTC |
1949 | |
1950 | |
1951 | === modified file 'templates/swiftwal.conf.tmpl' |
1952 | --- templates/swiftwal.conf.tmpl 2013-11-22 13:41:01 +0000 |
1953 | +++ templates/swiftwal.conf.tmpl 2014-10-14 10:13:22 +0000 |
1954 | @@ -1,6 +1,6 @@ |
1955 | # Generated and maintained by juju |
1956 | -OS_USERNAME: {{config.os_username}} |
1957 | -OS_TENANT_NAME: {{config.os_tenant_name}} |
1958 | -OS_PASSWORD: {{config.os_password}} |
1959 | -OS_AUTH_URL: {{config.os_auth_url}} |
1960 | -CONTAINER: {{vars.container}} |
1961 | +OS_USERNAME: {{os_username}} |
1962 | +OS_TENANT_NAME: {{os_tenant_name}} |
1963 | +OS_PASSWORD: {{os_password}} |
1964 | +OS_AUTH_URL: {{os_auth_url}} |
1965 | +CONTAINER: {{swiftwal_container}} |
1966 | |
1967 | === modified file 'test.py' (properties changed: +x to -x) |
1968 | --- test.py 2014-10-08 07:39:16 +0000 |
1969 | +++ test.py 2014-10-14 10:13:22 +0000 |
1970 | @@ -9,7 +9,9 @@ |
1971 | juju destroy-environment |
1972 | """ |
1973 | |
1974 | +from datetime import datetime |
1975 | import os.path |
1976 | +import shutil |
1977 | import signal |
1978 | import socket |
1979 | import subprocess |
1980 | @@ -24,15 +26,40 @@ |
1981 | from testing.jujufixture import JujuFixture, run |
1982 | |
1983 | |
1984 | -SERIES = os.environ.get('SERIES', 'precise').strip() |
1985 | -TEST_CHARM = 'local:{}/postgresql'.format(SERIES) |
1986 | -PSQL_CHARM = 'local:{}/postgresql-psql'.format(SERIES) |
1987 | +SERIES = os.environ.get('SERIES', 'trusty').strip() |
1988 | +TEST_CHARM = os.path.dirname(__file__) |
1989 | +PSQL_CHARM = os.path.join(TEST_CHARM, 'lib', 'test-client-charm') |
1990 | |
1991 | |
1992 | class NotReady(Exception): |
1993 | pass |
1994 | |
1995 | |
1996 | +def skip_if_swift_is_unavailable(): |
1997 | + os_keys = set(['OS_TENANT_NAME', 'OS_AUTH_URL', |
1998 | + 'OS_USERNAME', 'OS_PASSWORD']) |
1999 | + for os_key in os_keys: |
2000 | + if os_key not in os.environ: |
2001 | + return unittest.skip('Swift is unavailable') |
2002 | + return lambda x: x |
2003 | + |
2004 | + |
2005 | +def skip_if_s3_is_unavailable(): |
2006 | + os_keys = set(['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY']) |
2007 | + for os_key in os_keys: |
2008 | + if os_key not in os.environ: |
2009 | + return unittest.skip('S3 is unavailable') |
2010 | + return lambda x: x |
2011 | + |
2012 | + |
2013 | +def skip_if_wabs_is_unavailable(): |
2014 | + os_keys = set(['WABS_ACCOUNT_NAME', 'WABS_ACCESS_KEY']) |
2015 | + for os_key in os_keys: |
2016 | + if os_key not in os.environ: |
2017 | + return unittest.skip('WABS is unavailable') |
2018 | + return lambda x: x |
2019 | + |
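The three skip helpers repeat one pattern: skip the test unless a set of credential variables is present in the environment. A generalized sketch (the helper name is invented; behaviour matches the helpers above):

```python
import os
import unittest

def skip_unless_env(keys, reason):
    """Return unittest.skip(reason) unless every variable in keys is
    set in the environment, otherwise a pass-through decorator."""
    if all(k in os.environ for k in keys):
        return lambda x: x  # credentials present: run the test
    return unittest.skip(reason)
```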
2020 | + |
2021 | class PostgreSQLCharmBaseTestCase(object): |
2022 | |
2023 | # Override these in subclasses to run these tests multiple times |
2024 | @@ -55,12 +82,22 @@ |
2025 | # Tests may add or change options. |
2026 | self.pg_config = dict(version=self.VERSION, pgdg=self.PGDG) |
2027 | |
2028 | + # Mirror charmhelpers into our support charms, since charms |
2029 | + # can't symlink out of their subtree. |
2030 | + here = os.path.abspath(os.path.dirname(__file__)) |
2031 | + main_charmhelpers = os.path.join(here, 'hooks', 'charmhelpers') |
2032 | + psql_charmhelpers = os.path.join(here, 'lib', 'test-client-charm', |
2033 | + 'hooks', 'charmhelpers') |
2034 | + if os.path.exists(psql_charmhelpers): |
2035 | + shutil.rmtree(psql_charmhelpers) |
2036 | + shutil.copytree(main_charmhelpers, psql_charmhelpers) |
2037 | + |
2038 | self.juju = self.useFixture(JujuFixture( |
2039 | series=SERIES, reuse_machines=True, |
2040 | do_teardown='TEST_DONT_TEARDOWN_JUJU' not in os.environ)) |
2041 | |
2042 | # If the charms fail, we don't want tests to hang indefinitely. |
2043 | - timeout = int(os.environ.get('TEST_TIMEOUT', 900)) |
2044 | + timeout = int(os.environ.get('TEST_TIMEOUT', 1200)) |
2045 | if timeout > 0: |
2046 | self.useFixture(fixtures.Timeout(timeout, gentle=True)) |
2047 | |
2048 | @@ -72,7 +109,7 @@ |
2049 | # is at this particular instant in the expected state, hoping |
2050 | # that the system is stable enough to continue testing. |
2051 | |
2052 | - timeout = time.time() + 180 |
2053 | + timeout = time.time() + 600 |
2054 | pg_units = frozenset(pg_units) |
2055 | |
2056 | # The list of PG units we expect to be related to the psql unit. |
2057 | @@ -104,7 +141,7 @@ |
2058 | except NotReady: |
2059 | if time.time() > timeout: |
2060 | raise |
2061 | - time.sleep(3) |
2062 | + time.sleep(10) |
2063 | |
2064 | def confirm_psql_unit_ready(self, psql_unit, pg_units): |
2065 | # Confirm the db and db-admin relations are all in a useful |
2066 | @@ -276,8 +313,9 @@ |
2067 | if postgres_unit in full_rel_info[rel_name][rel_id]: |
2068 | rel_info = full_rel_info[rel_name][rel_id][postgres_unit] |
2069 | break |
2070 | - assert rel_info is not None, 'Unable to find pg rel info {!r}'.format( |
2071 | - full_rel_info[rel_name]) |
2072 | + assert rel_info is not None, ( |
2073 | + 'Unable to find pg rel info {} {!r}'.format( |
2074 | + postgres_unit, full_rel_info[rel_name])) |
2075 | |
2076 | if dbname is None: |
2077 | dbname = rel_info['database'] |
2078 | @@ -295,11 +333,10 @@ |
2079 | tunnel_cmd = [ |
2080 | 'juju', 'ssh', psql_unit, '-N', '-L', |
2081 | '{}:{}:{}'.format(local_port, rel_info['host'], rel_info['port'])] |
2082 | + # Don't disable stdout, so we can see when there are SSH |
2083 | + # failures like bad host keys. |
2084 | tunnel_proc = subprocess.Popen( |
2085 | tunnel_cmd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp) |
2086 | - # Don't disable stdout, so we can see when there are SSH |
2087 | - # failures like bad host keys. |
2088 | - # stdout=open('/dev/null', 'ab'), stderr=subprocess.STDOUT) |
2089 | tunnel_proc.stdin.close() |
2090 | |
2091 | try: |
2092 | @@ -371,6 +408,99 @@ |
2093 | |
2094 | self.assertEqual(num_slaves, 1, 'Slave not connected') |
2095 | |
2096 | + @skip_if_swift_is_unavailable() |
2097 | + def test_swiftwal_logshipping_replication(self): |
2098 | + os_keys = set(['OS_TENANT_NAME', 'OS_AUTH_URL', |
2099 | + 'OS_USERNAME', 'OS_PASSWORD']) |
2100 | + for os_key in os_keys: |
2101 | + self.pg_config[os_key.lower()] = os.environ[os_key] |
2102 | + self.pg_config['streaming_replication'] = False |
2103 | + self.pg_config['swiftwal_log_shipping'] = True |
2104 | + self.pg_config['swiftwal_container_prefix'] = '{}_{}'.format( |
2105 | + '_juju_pg_tests', datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')) |
2106 | + self.pg_config['install_sources'] = 'ppa:stub/pgcharm' |
2107 | + |
2108 | + def swift_cleanup(): |
2109 | + prefix = self.pg_config['swiftwal_container_prefix'] |
2110 | + for container in [prefix, prefix + '_1', prefix + '_2']: |
2111 | + # Ignore errors and output |
2112 | + subprocess.call(['swift', 'delete', container], |
2113 | + stdout=subprocess.PIPE, |
2114 | + stderr=subprocess.STDOUT) |
2115 | + self.addCleanup(swift_cleanup) |
2116 | + |
2117 | + self.juju.deploy( |
2118 | + TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
2119 | + self.juju.deploy(PSQL_CHARM, 'psql') |
2120 | + self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
2121 | + self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
2122 | + |
2123 | + # Confirm that the slave has not opened a streaming |
2124 | + # replication connection. |
2125 | + num_slaves = self.sql('SELECT COUNT(*) FROM pg_stat_replication', |
2126 | + 'master', dbname='postgres')[0][0] |
2127 | + self.assertEqual(num_slaves, 0, 'Streaming connection found') |
2128 | + |
2129 | + # Confirm that replication is actually happening. |
2130 | + # Create a table and force a WAL change. |
2131 | + self.sql('CREATE TABLE foo AS SELECT generate_series(0,100)', |
2132 | + 'master', dbname='postgres') |
2133 | + self.sql('SELECT pg_switch_xlog()', |
2134 | + 'master', dbname='postgres') |
2135 | + timeout = time.time() + 120 |
2136 | + table_found = False |
2137 | + while time.time() < timeout and not table_found: |
2138 | + time.sleep(1) |
2139 | + if self.sql("SELECT TRUE from pg_class WHERE relname='foo'", |
2140 | + 'hot standby', dbname='postgres'): |
2141 | + table_found = True |
2142 | + self.assertTrue(table_found, "Replication not replicating") |
2143 | + |
2144 | + @skip_if_swift_is_unavailable() |
2145 | + def test_wal_e_swift_logshipping(self): |
2146 | + os_keys = set(['OS_TENANT_NAME', 'OS_AUTH_URL', |
2147 | + 'OS_USERNAME', 'OS_PASSWORD']) |
2148 | + container = '_juju_pg_tests' |
2149 | + for os_key in os_keys: |
2150 | + self.pg_config[os_key.lower()] = os.environ[os_key] |
2151 | + self.pg_config['streaming_replication'] = False |
2152 | + self.pg_config['wal_e_storage_uri'] = 'swift://{}/{}'.format( |
2153 | + container, datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')) |
2154 | + self.pg_config['install_sources'] = 'ppa:stub/pgcharm' |
2155 | + |
2156 | + def swift_cleanup(): |
2157 | + subprocess.call(['swift', 'delete', container], |
2158 | + stdout=open(os.devnull, 'wb'), |
2159 | + stderr=subprocess.STDOUT) |
2160 | + self.addCleanup(swift_cleanup) |
2161 | + |
2162 | + self.juju.deploy( |
2163 | + TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
2164 | + self.juju.deploy(PSQL_CHARM, 'psql') |
2165 | + self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
2166 | + self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
2167 | + |
2168 | + # Confirm that the slave has not opened a streaming |
2169 | + # replication connection. |
2170 | + num_slaves = self.sql('SELECT COUNT(*) FROM pg_stat_replication', |
2171 | + 'master', dbname='postgres')[0][0] |
2172 | + self.assertEqual(num_slaves, 0, 'Streaming connection found') |
2173 | + |
2174 | + # Confirm that replication is actually happening. |
2175 | + # Create a table and force a WAL change. |
2176 | + self.sql('CREATE TABLE foo AS SELECT generate_series(0,100)', |
2177 | + 'master', dbname='postgres') |
2178 | + self.sql('SELECT pg_switch_xlog()', |
2179 | + 'master', dbname='postgres') |
2180 | + timeout = time.time() + 120 |
2181 | + table_found = False |
2182 | + while time.time() < timeout and not table_found: |
2183 | + time.sleep(1) |
2184 | + if self.sql("SELECT TRUE from pg_class WHERE relname='foo'", |
2185 | + 'hot standby', dbname='postgres'): |
2186 | + table_found = True |
2187 | + self.assertTrue(table_found, "Replication not replicating") |
2188 | + |
2189 | def test_basic_admin(self): |
2190 | '''Connect to a single unit service via the db-admin relationship.''' |
2191 | self.juju.deploy(TEST_CHARM, 'postgresql', config=self.pg_config) |
2192 | @@ -546,7 +676,6 @@ |
2193 | self.assertIs(False, self.is_master(standby_unit_1, 'postgres')) |
2194 | |
2195 | def test_admin_addresses(self): |
2196 | - |
2197 | # This test also tests explicit port assignment. We need |
2198 | # a different port for each PostgreSQL version we might be |
2199 | # testing, because clusters from previous tests of different |
2200 | @@ -746,6 +875,27 @@ |
2201 | 'rsyslog' not in status['services'], 'rsyslog failed to die') |
2202 | self.wait_until_ready(pg_units) |
2203 | |
2204 | + def test_upgrade_charm(self): |
2205 | + self.juju.deploy( |
2206 | + TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
2207 | + self.juju.deploy(PSQL_CHARM, 'psql') |
2208 | + self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
2209 | + pg_units = ['postgresql/0', 'postgresql/1'] |
2210 | + self.wait_until_ready(pg_units) |
2211 | + |
2212 | + # Create something |
2213 | + self.sql("CREATE TABLE Foo AS SELECT TRUE", 'master') |
2214 | + |
2215 | + # Kick off the upgrade-charm hook |
2216 | + self.juju.do(['upgrade-charm', 'postgresql']) |
2217 | + self.wait_until_ready(pg_units) |
2218 | + |
2219 | + # Ensure that our data has persisted. |
2220 | + master_data = self.sql('SELECT * FROM Foo', 'master')[0][0] |
2221 | + self.assertTrue(master_data) |
2222 | + standby_data = self.sql('SELECT * FROM Foo', 'hot standby')[0][0] |
2223 | + self.assertTrue(standby_data) |
2224 | + |
2225 | |
2226 | class PG91Tests( |
2227 | PostgreSQLCharmBaseTestCase, |
2228 | |
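Aside on the three `skip_if_*_is_unavailable` helpers added at the top of `test.py`: they share one shape (skip unless a set of environment variables is present), so they could be collapsed into a single parameterized factory. A minimal sketch — `skip_unless_env` and its argument names are illustrative, not part of this branch:

```python
import os
import unittest


def skip_unless_env(reason, *required_keys):
    """Return unittest.skip(reason) unless all required env vars are set.

    Mirrors the pattern of skip_if_swift_is_unavailable() et al.: if any
    key is missing, tests decorated with the result are skipped;
    otherwise the decorator is the identity function.
    """
    missing = [key for key in required_keys if key not in os.environ]
    if missing:
        return unittest.skip('{} ({} unset)'.format(reason, ', '.join(missing)))
    return lambda test: test


# Hypothetical usage, equivalent to @skip_if_swift_is_unavailable():
# @skip_unless_env('Swift is unavailable', 'OS_TENANT_NAME',
#                  'OS_AUTH_URL', 'OS_USERNAME', 'OS_PASSWORD')
# def test_swiftwal_logshipping_replication(self):
#     ...
```

One factory keeps the Swift, S3 and WABS key lists in the test decorators themselves rather than duplicated across three near-identical functions.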
2229 | === modified file 'testing/jujufixture.py' |
2230 | --- testing/jujufixture.py 2014-06-10 11:36:17 +0000 |
2231 | +++ testing/jujufixture.py 2014-10-14 10:13:22 +0000 |
2232 | @@ -21,6 +21,7 @@ |
2233 | super(JujuFixture, self).__init__() |
2234 | |
2235 | self.series = series |
2236 | + assert series, 'series not set' |
2237 | |
2238 | self.reuse_machines = reuse_machines |
2239 | |
2240 | @@ -47,6 +48,9 @@ |
2241 | def deploy(self, charm, name=None, num_units=1, config=None): |
2242 | cmd = ['deploy'] |
2243 | |
2244 | + if not charm.startswith('cs:'): |
2245 | + charm = self.charm_uri(charm) |
2246 | + |
2247 | if config: |
2248 | config_path = os.path.join( |
2249 | self.useFixture(fixtures.TempDir()).path, 'config.yaml') |
2250 | @@ -143,11 +147,8 @@ |
2251 | rel_id, rel_unit)]) |
2252 | res[rel_name][rel_id][rel_unit] = json.loads( |
2253 | json_rel_info) |
2254 | - except subprocess.CalledProcessError as x: |
2255 | - if x.returncode == 2: |
2256 | - res[rel_name][rel_id][rel_unit] = None |
2257 | - else: |
2258 | - raise |
2259 | + except subprocess.CalledProcessError: |
2260 | + res[rel_name][rel_id][rel_unit] = None |
2261 | return res |
2262 | |
2263 | def wait_until_ready(self, extra=60): |
2264 | @@ -185,6 +186,27 @@ |
2265 | if self.do_teardown: |
2266 | self.addCleanup(self.reset) |
2267 | |
2268 | + # Setup a temporary repository with the magic to find our charms. |
2269 | + self.repo_dir = self.useFixture(fixtures.TempDir()).path |
2270 | + self.useFixture(fixtures.EnvironmentVariable('JUJU_REPOSITORY', |
2271 | + self.repo_dir)) |
2272 | + self.repo_series_dir = os.path.join(self.repo_dir, self.series) |
2273 | + os.mkdir(self.repo_series_dir) |
2274 | + |
2275 | + def charm_uri(self, charm): |
2276 | + meta = yaml.safe_load(open(os.path.join(charm, 'metadata.yaml'), 'rb')) |
2277 | + name = meta.get('name') |
2278 | + assert name, 'charm {} has no name in metadata.yaml'.format(charm) |
2279 | + charm_link = os.path.join(self.repo_series_dir, name) |
2280 | + |
2281 | + # Recreate the charm link, which might have changed from the |
2282 | + # last deploy. |
2283 | + if os.path.exists(charm_link): |
2284 | + os.remove(charm_link) |
2285 | + os.symlink(charm, os.path.join(self.repo_series_dir, name)) |
2286 | + |
2287 | + return 'local:{}/{}'.format(self.series, name) |
2288 | + |
2289 | def reset(self): |
2290 | # Tear down any services left running that we know we spawned. |
2291 | while True: |
The results (PASS) are in and available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-960-results