Merge lp:~stub/charms/precise/postgresql/enable-integration-tests into lp:charms/postgresql
Status: Work in progress
Proposed branch: lp:~stub/charms/precise/postgresql/enable-integration-tests
Merge into: lp:charms/postgresql
Diff against target: 2264 lines (+509/-1039), 17 files modified
.bzrignore (+1/-0), Makefile (+51/-34), config.yaml (+65/-63), hooks/helpers.py (+0/-199), hooks/hooks.py (+15/-15), hooks/test_hooks.py (+0/-1), lib/juju-deployer-wrapper.py (+15/-0), templates/postgresql.conf.tmpl (+10/-0), test.py (+149/-354), testing/README (+0/-36), testing/amuletfixture.py (+200/-0), testing/jujufixture.py (+0/-297), tests/00-setup.sh (+0/-15), tests/01-lint.sh (+0/-3), tests/02-unit-tests.sh (+0/-3), tests/03-basic-amulet.py (+0/-19), tests/tests.yaml (+3/-0)
To merge this branch: bzr merge lp:~stub/charms/precise/postgresql/enable-integration-tests
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Review Queue (community) | automated testing | | Needs Fixing
Adam Israel (community) | | | Needs Fixing
charmers | | | Pending

Review via email: mp+238283@code.launchpad.net
Commit message
Description of the change
Switch to Amulet infrastructure.
Reenable integration tests by default.
Review Queue (review-queue) wrote:
Adam Israel (aisrael) wrote:
Hi Stuart,
I had the opportunity to review this merge proposal today.
The Makefile target `testdep` isn't being called by `make test`, so my initial bundletester run failed due to missing dependencies. I manually ran that step and was able to get further into the integration tests. Unfortunately, the test eventually hung (after 3 hours of running).
I've posted the full bundletester logs here:
http://
There were a couple of integration tests that failed (test_basic_admin, test_roles_
Bundletester should tear down machines allocated between tests, but that didn't happen in this case. I wonder if it's related at all to the monkeypatched version of Amulet you're using.
How does this line up with your own testing? Anything I should do differently to test? Feel free to ping me in irc or elsewhere if there's more I can do to facilitate getting these tests landed.
I have to NAK this for now, pending further instructions on running tests and/or the resolution of the above issues.
Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here http://
Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here http://
Stuart Bishop (stub) wrote:
I was relying on the newer Juju behavior (or local provider behavior?), where machines are destroyed by default when the service is destroyed. I'll put back the machine harvesting code I thought might not be needed.
test_basic_admin, test_roles_granted and test_upgrade_charm usually pass, and I suspect their failures were due to the provisioning issues. The more problematic tests are the failover ones, as they sometimes trigger a race condition I still need to work out how to fix (if you fail over too often too quickly, you might promote a unit to master that has not reached a recoverable state, and things die).
bundletester is irrelevant here, as it is just the launcher. As far as bundletester is concerned there is just one huge test.
I've upgraded the Makefile rules to install the test dependencies the first time they are needed.
Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here http://
Review Queue (review-queue) wrote:
This item has failed automated testing! Results available here http://
Stuart Bishop (stub) wrote:
Tests are failing because the mocker library cannot be found, despite it being installed with 'apt-get install python-mocker'. This is likely because the Python interpreter in play (the charmguardian VM?) isn't searching the system modules.
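A quick way to confirm that kind of path problem is to ask the interpreter directly what it can see. This is a hypothetical diagnostic sketch, not part of the charm; the `can_import` helper is illustrative:

```python
# Hypothetical diagnostic, not part of the charm: check whether the
# interpreter running the tests can see system-wide packages such as
# the one installed by python-mocker.
import importlib
import sys

def can_import(name):
    """Return True if this interpreter can import the named module."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print(sys.executable)  # which python is actually in play
# System package dirs on sys.path (empty in an isolated interpreter):
print([p for p in sys.path if "dist-packages" in p])
print(can_import("mocker"))  # False if system modules aren't searched
```

If the `dist-packages` list is empty, the test runner's interpreter is isolated from the apt-installed modules.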
Adam Israel (aisrael) wrote:
Hey stub,
Thanks for pointing that out. I've been working on a review of these integration tests and am tracking a bug in juju core/juju deployer (lp:1421315) that's been causing some of the tests to fail.
Out of curiosity, what version of juju are you running?
Stuart Bishop (stub) wrote:
I stick to ppa:juju/stable for writing charms.
Adam Israel (aisrael) wrote:
Excellent, thanks for confirming.
I have some initial notes for you.
I started testing using bundletester but ran into some intermittent test failures due to a bug in bundletester (now fixed).
I've continued testing manually, i.e., running nosetests -sv test:PG91Tests
There's a bug in juju core (lp:1421315) and juju-deployer. I've patched my local version of juju-deployer to work around the issue (a merge for that is pending).
With the above issues resolved, I'm running into another error that maybe you have some insight about.
test.PG91Tests.
['juju', 'run', '--timeout=21600s', '--unit', 'postgresql/0', 'sudo tail -1 /var/log/
ERROR subprocess encountered error code 1
I'm investigating why that error is being thrown. I wonder if juju run is executing before the unit is stood up. Any thoughts?
Stuart Bishop (stub) wrote:
This particular error is bubbling up from the juju-wait plugin (lp:juju-wait). It will only call juju run against units that are alive (do not have life: set to dying or dead in the status) and ready (report an agent-state: of started). The only slightly unusual thing that could be happening here is that it executes the juju run commands in parallel. The quickest way to check whether this is the culprit is to hack /usr/bin/juju-wait and temporarily change max_workers=6 to max_workers=1.
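The parallel dispatch in question can be sketched roughly like this. This is a hedged sketch, not the actual juju-wait source; `run_on_unit` and `poll_units` are illustrative names, and `echo` stands in for the real `juju run` invocation so the example runs without a Juju environment:

```python
# Sketch of juju-wait-style parallel polling; NOT the real juju-wait code.
# Dropping max_workers from 6 to 1 serializes the per-unit commands.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on_unit(unit):
    # Stand-in for ['juju', 'run', '--unit', unit, '...']; echo keeps
    # this sketch runnable anywhere.
    out = subprocess.check_output(["echo", unit])
    return unit, out.strip()

def poll_units(units, max_workers=6):
    # juju-wait only queries units that are alive and started; here we
    # just fan the per-unit commands out across a worker pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_on_unit, units))

# max_workers=1 is the serialized variant suggested above.
results = poll_units(["postgresql/0", "postgresql/1"], max_workers=1)
```

With max_workers=1 each unit is polled in turn, trading speed for ruling out contention between concurrent juju run invocations.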
Adam Israel (aisrael) wrote:
Ack, thanks, I'll check juju-wait.
Adam Israel (aisrael) wrote:
Ok, I've finished a comprehensive review of the integration tests.
There were several tests failing that I've determined to be a failure in the test infrastructure. I've landed a patch to bundletester that was running tests more than once. I'll be sending patches to amulet, juju-deployer, and juju-wait (the change from max_workers=6 to max_workers=1 fixed the lock timeout error).
With the above patches applied locally, I've been able to run the 9.1 integration test with only one failure, which looks to be spurious:
test_syslog initially failed on the deploy command. This line:
self.
should read:
self.
"cs:rsyslog" returns a 404 - see https:/
With that fixed, the test runs but fails its assert checks:
_StringException: Traceback (most recent call last):
File "/charms/
self.
File "/usr/lib/
return original_
File "/usr/lib/
raise self.failureExc
AssertionError: False is not true
I think the failure is wrong, though. I ssh'd into each rsyslog unit after the failure and `tail -n 100 /var/log/syslog | grep`ped for each string and found it.
While running test_explicit_
The stack trace of the failure:
2015-03-26 15:17:01 INFO unit.postgresql
2015-03-26 15:1...
Stuart Bishop (stub) wrote:
On 7 March 2015 at 06:36, Adam Israel <email address hidden> wrote:
> There were several tests failing that I've determined to be a failure in the test infrastructure. I've landed a patch to bundletester that was running tests more than once. I'll be sending patches to amulet, juju-deployer, and juju-wait (the change from max_workers=6 to max_workers=1 fixed the lock timeout error).
Ok. The multiple workers are of course there to speed things up, but
correctness is better. I'll sort out juju-wait shortly and kick off the
package build.
> With the above patches implemented locally, I've been able to run the 9.1 integration test with only one failure, that looks like it's ultimately erroneous:
>
> test_syslog initially failed on the deploy command. This line:
>
> self.deployment
>
> should read:
>
> self.deployment
>
> "cs:rsyslog" returns a 404 - see https:/
Aha... when to use prefixes or not has always confused me. I think
this has worked in the past.
> With that fixed, the test runs but fails it's assert checks:
> _StringException: Traceback (most recent call last):
> File "/charms/
> self.failUnless
> File "/usr/lib/
> return original_
> File "/usr/lib/
> raise self.failureExc
> AssertionError: False is not true
>
> I think the failure is wrong, though. I ssh'd into each rsyslog unit after the failure and `tail -n 100 /var/log/syslog | grep`ped for each string and found it.
>
> While running test_explicit_
I consider this a bug, which is why the force flag isn't set.
> The stack trace of the failure:
> 2015-03-26 15:17:01 INFO unit.postgresql
Adam Israel (aisrael) wrote:
Sorry, it was the complete PG91Tests test suite that took an hour, not an individual test!
Stuart Bishop (stub) wrote:
This branch is ready for another test run.
Tim Van Steenburgh (tvansteenburgh) wrote:
Started tests.
Unmerged revisions
- 123. By Stuart Bishop: Missing dependency
- 122. By Stuart Bishop: test updates
- 121. By Stuart Bishop: Install test dependencies if required
- 120. By Stuart Bishop: Remove dead code
- 119. By Stuart Bishop: silence upgrade-charm test
- 118. By Stuart Bishop: Newer wait() implementation
- 117. By Stuart Bishop: missing config
- 116. By Stuart Bishop: tidy
- 115. By Stuart Bishop: delint
- 114. By Stuart Bishop: quiet teardown
Preview Diff
1 | === modified file '.bzrignore' |
2 | --- .bzrignore 2014-10-14 10:12:50 +0000 |
3 | +++ .bzrignore 2015-03-12 09:42:09 +0000 |
4 | @@ -2,3 +2,4 @@ |
5 | hooks/_trial_temp |
6 | hooks/local_state.pickle |
7 | lib/test-client-charm/hooks/charmhelpers |
8 | +.stamp-* |
9 | |
10 | === modified file 'Makefile' |
11 | --- Makefile 2014-10-24 06:31:15 +0000 |
12 | +++ Makefile 2015-03-12 09:42:09 +0000 |
13 | @@ -3,58 +3,75 @@ |
14 | SERIES := $(juju get-environment default-series) |
15 | |
16 | default: |
17 | - @echo "One of:" |
18 | + @echo "Install test dependencies:" |
19 | @echo " make testdep" |
20 | + @echo |
21 | + @echo "Individual tests, one of:" |
22 | @echo " make lint" |
23 | - @echo " make unit_test" |
24 | - @echo " make integration_test" |
25 | + @echo " make unittest" |
26 | @echo " make integration_test_91" |
27 | @echo " make integration_test_92" |
28 | @echo " make integration_test_93" |
29 | @echo " make integration_test_94" |
30 | @echo |
31 | - @echo "There is no 'make test'" |
32 | - |
33 | -test_bot_tests: |
34 | - @echo "Installing dependencies and running automatic-testrunner tests" |
35 | - tests/00-setup.sh |
36 | - tests/01-lint.sh |
37 | - tests/02-unit-tests.sh |
38 | - tests/03-basic-amulet.py |
39 | - |
40 | -testdep: |
41 | - tests/00-setup.sh |
42 | - |
43 | -unit_test: |
44 | + @echo "For full tests:" |
45 | + @echo " make test" |
46 | + |
47 | +clean: |
48 | + rm -f .stamp-* |
49 | + |
50 | +test: testdep lint unittest integration_test |
51 | + @echo "Done" |
52 | + |
53 | +testdep: .stamp-testdep |
54 | +.stamp-testdep: |
55 | + sudo add-apt-repository -y ppa:juju/stable |
56 | + sudo add-apt-repository -y ppa:stub/juju |
57 | + sudo apt-get update |
58 | + sudo apt-get install -y \ |
59 | + amulet \ |
60 | + python-flake8 \ |
61 | + python-fixtures \ |
62 | + python-jinja2 \ |
63 | + python-mocker \ |
64 | + python-psycopg2 \ |
65 | + python-nose \ |
66 | + python-testtools \ |
67 | + python-mocker \ |
68 | + python-yaml \ |
69 | + juju-wait \ |
70 | + pgtune |
71 | + touch .stamp-testdep |
72 | + |
73 | +unittest: testdep |
74 | @echo "Unit tests of hooks" |
75 | - cd hooks && trial test_hooks.py |
76 | + cd hooks && nosetests -sv test_hooks.py |
77 | |
78 | -integration_test: |
79 | +integration_test: testdep |
80 | @echo "PostgreSQL integration tests, all non-beta versions, ${SERIES}" |
81 | - trial test.PG91Tests |
82 | - trial test.PG92Tests |
83 | - trial test.PG93Tests |
84 | + nosetests -sv \ |
85 | + test:PG91Tests test:PG92Tests test:PG93Tests test:PG94Tests |
86 | |
87 | -integration_test_91: |
88 | +integration_test_91: testdep |
89 | @echo "PostgreSQL 9.1 integration tests, ${SERIES}" |
90 | - trial test.PG91Tests |
91 | + nosetests -sv test:PG91Tests |
92 | |
93 | -integration_test_92: |
94 | +integration_test_92: testdep |
95 | @echo "PostgreSQL 9.2 integration tests, ${SERIES}" |
96 | - trial test.PG92Tests |
97 | + nosetests -sv test:PG92Tests |
98 | |
99 | -integration_test_93: |
100 | +integration_test_93: testdep |
101 | @echo "PostgreSQL 9.3 integration tests, ${SERIES}" |
102 | - trial test.PG93Tests |
103 | - |
104 | -integration_test_94: |
105 | - @echo "PostgreSQL 9.4 (beta) integration tests, ${SERIES}" |
106 | - trial test.PG94Tests |
107 | - |
108 | -lint: |
109 | + nosetests -sv test:PG93Tests |
110 | + |
111 | +integration_test_94: testdep |
112 | + @echo "PostgreSQL 9.4 integration tests, ${SERIES}" |
113 | + nosetests -sv test:PG94Tests |
114 | + |
115 | +lint: testdep |
116 | @echo "Lint check (flake8)" |
117 | @flake8 -v \ |
118 | - --exclude hooks/charmhelpers,hooks/_trial_temp \ |
119 | + --exclude hooks/charmhelpers \ |
120 | hooks testing tests test.py |
121 | |
122 | sync: |
123 | |
124 | === modified file 'config.yaml' |
125 | --- config.yaml 2015-02-11 21:05:42 +0000 |
126 | +++ config.yaml 2015-03-12 09:42:09 +0000 |
127 | @@ -2,7 +2,7 @@ |
128 | admin_addresses: |
129 | default: "" |
130 | type: string |
131 | - description: | |
132 | + description: > |
133 | A comma-separated list of IP Addresses (or single IP) admin tools like |
134 | pgAdmin3 will connect from, this is most useful for developers running |
135 | juju in local mode who need to connect tools like pgAdmin to a postgres. |
136 | @@ -12,14 +12,14 @@ |
137 | locale: |
138 | default: "C" |
139 | type: string |
140 | - description: | |
141 | + description: > |
142 | Locale of service, defining language, default collation order, |
143 | and default formatting of numbers, currency, dates & times. Can only be |
144 | set when deploying the first unit of a service. |
145 | encoding: |
146 | default: "UTF-8" |
147 | type: string |
148 | - description: | |
149 | + description: > |
150 | Default encoding used to store text in this service. Can only be |
151 | set when deploying the first unit of a service. |
152 | extra-packages: |
153 | @@ -29,23 +29,12 @@ |
154 | dumpfile_location: |
155 | default: "None" |
156 | type: string |
157 | - description: | |
158 | + description: > |
159 | Path to a dumpfile to load into DB when service is initiated. |
160 | - config_change_command: |
161 | - default: "reload" |
162 | - type: string |
163 | - description: | |
164 | - The command to run whenever config has changed. Accepted values |
165 | - are "reload" or "restart" - any other value will mean neither is |
166 | - executed after a config change (which may be desired, if you're |
167 | - running a production server and would rather handle these out of |
168 | - band). Note that postgresql will still need to be reloaded |
169 | - whenever authentication and access details are updated, so |
170 | - disabling either doesn't mean PostgreSQL will never be reloaded. |
171 | version: |
172 | default: null |
173 | type: string |
174 | - description: | |
175 | + description: > |
176 | Version of PostgreSQL that we want to install. Supported versions |
177 | are "9.1", "9.2", "9.3". The default version for the deployed Ubuntu |
178 | release is used when the version is not specified. |
179 | @@ -65,6 +54,13 @@ |
180 | default: 100 |
181 | type: int |
182 | description: Maximum number of connections to allow to the PG database |
183 | + max_prepared_transactions: |
184 | + default: 0 |
185 | + type: int |
186 | + description: > |
187 | + Maximum number of prepared two phase commit transactions, waiting |
188 | + to be committed. Defaults to 0. as using two phase commit without |
189 | + a process to monitor and resolve lost transactions is dangerous. |
190 | ssl: |
191 | default: "True" |
192 | type: string |
193 | @@ -72,7 +68,7 @@ |
194 | log_min_duration_statement: |
195 | default: -1 |
196 | type: int |
197 | - description: | |
198 | + description: > |
199 | -1 is disabled, 0 logs all statements |
200 | and their durations, > 0 logs only |
201 | statements running at least this number |
202 | @@ -92,7 +88,7 @@ |
203 | log_temp_files: |
204 | default: "-1" |
205 | type: string |
206 | - description: | |
207 | + description: > |
208 | Log creation of temporary files larger than the threshold. |
209 | -1 disables the feature, 0 logs all temporary files, or specify |
210 | the threshold size with an optional unit (eg. "512KB", default |
211 | @@ -131,13 +127,13 @@ |
212 | autovacuum: |
213 | default: True |
214 | type: boolean |
215 | - description: | |
216 | + description: > |
217 | Autovacuum should almost always be running. If you want to turn this |
218 | off, you are probably following out of date documentation. |
219 | log_autovacuum_min_duration: |
220 | default: -1 |
221 | type: int |
222 | - description: | |
223 | + description: > |
224 | -1 disables, 0 logs all actions and their durations, > 0 logs only |
225 | actions running at least this number of milliseconds. |
226 | autovacuum_analyze_threshold: |
227 | @@ -155,13 +151,13 @@ |
228 | autovacuum_vacuum_cost_delay: |
229 | default: "20ms" |
230 | type: string |
231 | - description: | |
232 | + description: > |
233 | Default vacuum cost delay for autovacuum, in milliseconds; |
234 | -1 means use vacuum_cost_delay |
235 | search_path: |
236 | default: "\"$user\",public" |
237 | type: string |
238 | - description: | |
239 | + description: > |
240 | Comma separated list of schema names for |
241 | the default SQL search path. |
242 | standard_conforming_strings: |
243 | @@ -171,22 +167,21 @@ |
244 | hot_standby: |
245 | default: False |
246 | type: boolean |
247 | - description: | |
248 | - DEPRECATED. |
249 | + description: > |
250 | Hot standby or warm standby. When True, queries can be run against |
251 | the database when in recovery or standby mode (ie. replicated). |
252 | Overridden when service contains multiple units. |
253 | hot_standby_feedback: |
254 | default: False |
255 | type: boolean |
256 | - description: | |
257 | + description: > |
258 | Hot standby feedback, informing a master about in progress |
259 | transactions on a streaming hot standby and allowing the master to |
260 | defer cleanup and avoid query cancelations on the hot standby. |
261 | wal_level: |
262 | default: minimal |
263 | type: string |
264 | - description: | |
265 | + description: > |
266 | 'minimal', 'archive' or 'hot_standby'. Defines how much information |
267 | is written to the WAL. Set to 'minimal' for stand alone databases |
268 | and 'hot_standby' for replicated setups. Overridden by juju when |
269 | @@ -194,7 +189,7 @@ |
270 | max_wal_senders: |
271 | default: 0 |
272 | type: int |
273 | - description: | |
274 | + description: > |
275 | Maximum number of hot standbys that can connect using |
276 | streaming replication. Set this to the expected maximum number of |
277 | hot standby units to avoid unnecessary blocking and database restarts. |
278 | @@ -202,7 +197,7 @@ |
279 | wal_keep_segments: |
280 | default: 0 |
281 | type: int |
282 | - description: | |
283 | + description: > |
284 | Number of old WAL files to keep, providing a larger buffer for |
285 | streaming hot standbys to catch up from when lagged. Each WAL file |
286 | is 16MB in size. The WAL files are the buffer of how far a |
287 | @@ -212,7 +207,7 @@ |
288 | replicated_wal_keep_segments: |
289 | default: 5000 |
290 | type: int |
291 | - description: | |
292 | + description: > |
293 | Value of wal_keep_segments used when this service is replicated. |
294 | This setting only exists to provide a sane default when replication |
295 | is requested (so it doesn't fail) and nobody bothered to change the |
296 | @@ -220,7 +215,7 @@ |
297 | archive_mode: |
298 | default: False |
299 | type: boolean |
300 | - description: | |
301 | + description: > |
302 | Enable archiving of WAL files using the command specified by |
303 | archive_command. If archive_mode is enabled and archive_command not |
304 | set, then archiving is deferred until archive_command is set and the |
305 | @@ -228,25 +223,25 @@ |
306 | archive_command: |
307 | default: "" |
308 | type: string |
309 | - description: | |
310 | + description: > |
311 | Command used to archive WAL files when archive_mode is set and |
312 | wal_level > minimal. |
313 | work_mem: |
314 | default: "1MB" |
315 | type: string |
316 | - description: | |
317 | + description: > |
318 | Working Memory. |
319 | Ignored unless 'performance_tuning' is set to 'manual'. |
320 | maintenance_work_mem: |
321 | default: "1MB" |
322 | type: string |
323 | - description: | |
324 | + description: > |
325 | Maintenance working memory. |
326 | Ignored unless 'performance_tuning' is set to 'manual'. |
327 | performance_tuning: |
328 | default: "Mixed" |
329 | type: string |
330 | - description: | |
331 | + description: > |
332 | Possible values here are "manual", "DW" (data warehouse), |
333 | "OLTP" (online transaction processing), "Web" (web application), |
334 | "Desktop" or "Mixed". When this is set to a value other than |
335 | @@ -267,14 +262,14 @@ |
336 | shared_buffers: |
337 | default: "" |
338 | type: string |
339 | - description: | |
340 | + description: > |
341 | The amount of memory the database server uses for shared memory |
342 | buffers. This string should be of the format '###MB'. |
343 | Ignored unless 'performance_tuning' is set to 'manual'. |
344 | effective_cache_size: |
345 | default: "" |
346 | type: string |
347 | - description: | |
348 | + description: > |
349 | Effective cache size is an estimate of how much memory is available for |
350 | disk caching within the database. (50% to 75% of system memory). This |
351 | string should be of the format '###MB'. Ignored unless |
352 | @@ -282,52 +277,59 @@ |
353 | default_statistics_target: |
354 | default: -1 |
355 | type: int |
356 | - description: | |
357 | + description: > |
358 | Sets the default statistics target for table columns without a |
359 | column-specific target set via ALTER TABLE SET STATISTICS. |
360 | Leave unchanged to use the server default, which in recent |
361 | releases is 100. Ignored unless 'performance_tuning' is 'manual'. |
362 | Larger values increase the time needed to do ANALYZE, but |
363 | might improve the quality of the planner's estimates. |
364 | + collapse_limit: |
365 | + default: -1 |
366 | + type: int |
367 | + description: > |
368 | + Sets the from_collapse_limit and join_collapse_limit query planner |
369 | + options, controlling the maximum number of tables that can be joined |
370 | + before the planner turns off the table collapse query optimization. |
371 | temp_buffers: |
372 | default: "1MB" |
373 | type: string |
374 | - description: | |
375 | + description: > |
376 | The maximum number of temporary buffers used by each database session. |
377 | wal_buffers: |
378 | default: "-1" |
379 | type: string |
380 | - description: | |
381 | + description: > |
382 | min 32kB, -1 sets based on shared_buffers (change requires restart). |
383 | Ignored unless 'performance_tuning' is set to 'manual'. |
384 | checkpoint_segments: |
385 | default: 3 |
386 | type: int |
387 | - description: | |
388 | + description: > |
389 | in logfile segments, min 1, 16MB each. |
390 | Ignored unless 'performance_tuning' is set to 'manual'. |
391 | checkpoint_timeout: |
392 | default: "" |
393 | type: string |
394 | - description: | |
395 | + description: > |
396 | Maximum time between automatic WAL checkpoints. range '30s-1h'. |
397 | If left empty, the default postgresql value will be used. |
398 | fsync: |
399 | type: boolean |
400 | default: True |
401 | - description: | |
402 | + description: > |
403 | Turns forced synchronization on/off. If fsync is turned off, database |
404 | failures are likely to involve database corruption and require |
405 | recreating the unit |
406 | synchronous_commit: |
407 | type: boolean |
408 | default: True |
409 | - description: | |
410 | + description: > |
411 | Immediate fsync after commit. |
412 | full_page_writes: |
413 | type: boolean |
414 | default: True |
415 | - description: | |
416 | + description: > |
417 | Recover from partial page writes. |
418 | random_page_cost: |
419 | default: 4.0 |
420 | @@ -336,7 +338,7 @@ |
421 | extra_pg_auth: |
422 | type: string |
423 | default: "" |
424 | - description: | |
425 | + description: > |
426 | A comma separated extra pg_hba.conf auth rules. |
427 | This will be written to the pg_hba.conf file, one line per rule. |
428 | Note that this should not be needed as db relations already create |
429 | @@ -348,7 +350,7 @@ |
430 | manual_replication: |
431 | type: boolean |
432 | default: False |
433 | - description: | |
434 | + description: > |
435 | Enable or disable charm managed replication. When manual_replication |
436 | is True, the operator is responsible for maintaining recovery.conf |
437 | and performing any necessary database mirroring. The charm will |
438 | @@ -372,7 +374,7 @@ |
439 | nagios_context: |
440 | default: "juju" |
441 | type: string |
442 | - description: | |
443 | + description: > |
444 | Used by the nrpe-external-master subordinate charm. |
445 | A string that will be prepended to instance name to set the host name |
446 | in nagios. So for instance the hostname would be something like: |
447 | @@ -382,14 +384,14 @@ |
448 | nagios_additional_servicegroups: |
449 | default: "" |
450 | type: string |
451 | - description: | |
452 | + description: > |
453 | Used by the nrpe-external-master subordinate charm. |
454 | A comma-separated list of servicegroups to include along with |
455 | nagios_context when generating nagios service check configs. |
456 | This is useful for nagios installations where servicegroups |
457 | are used to apply special treatment to particular checks. |
458 | pgdg: |
459 | - description: | |
460 | + description: > |
461 | Enable the PostgreSQL Global Development Group APT repository |
462 | (https://wiki.postgresql.org/wiki/Apt). This package source provides |
463 | official PostgreSQL packages for Ubuntu LTS releases beyond those |
464 | @@ -397,13 +399,13 @@ |
465 | type: boolean |
466 | default: false |
467 | install_sources: |
468 | - description: | |
469 | + description: > |
470 | List of extra package sources, per charm-helpers standard. |
471 | YAML format. |
472 | type: string |
473 | default: null |
474 | install_keys: |
475 | - description: | |
476 | + description: > |
477 | List of signing keys for install_sources package sources, per |
478 | charmhelpers standard. YAML format. |
479 | type: string |
480 | @@ -411,12 +413,12 @@ |
481 | extra_archives: |
482 | default: "" |
483 | type: string |
484 | - description: | |
485 | + description: > |
486 | DEPRECATED & IGNORED. Use install_sources and install_keys. |
487 | advisory_lock_restart_key: |
488 | default: 765 |
489 | type: int |
490 | - description: | |
491 | + description: > |
492 | An advisory lock key used internally by the charm. You do not need |
493 | to change it unless it happens to conflict with an advisory lock key |
494 | being used by your applications. |
495 | @@ -424,7 +426,7 @@ |
496 | swiftwal_container_prefix: |
497 | type: string |
498 | default: null |
499 | - description: | |
500 | + description: > |
501 | EXPERIMENTAL. |
502 | Swift container prefix for SwiftWAL to use. Must be set if any |
503 | SwiftWAL features are enabled. This will become a simple |
504 | @@ -433,13 +435,13 @@ |
505 | swiftwal_backup_schedule: |
506 | type: string |
507 | default: null |
508 | - description: | |
509 | + description: > |
510 | EXPERIMENTAL. |
511 | Cron-formatted schedule for SwiftWAL database backups. |
512 | swiftwal_backup_retention: |
513 | type: int |
514 | default: 2 |
515 | - description: | |
516 | + description: > |
517 | EXPERIMENTAL. |
518 | Number of recent base backups to retain. You need enough space in |
519 | Swift for this many backups plus one more, as an old backup will only |
520 | @@ -447,7 +449,7 @@ |
521 | swiftwal_log_shipping: |
522 | type: boolean |
523 | default: false |
524 | - description: | |
525 | + description: > |
526 | EXPERIMENTAL. |
527 | Archive WAL files into Swift. If swiftwal_backup_schedule is set, |
528 | allows point-in-time recovery and WAL files are removed |
529 | @@ -474,7 +476,7 @@ |
530 | wal_e_backup_schedule: |
531 | type: string |
532 | default: "13 0 * * *" |
533 | - description: | |
534 | + description: > |
535 | EXPERIMENTAL. |
536 | Cron-formatted schedule for WAL-E database backups. If |
537 | wal_e_backup_schedule is unset, WAL files will never be removed from |
538 | @@ -482,7 +484,7 @@ |
539 | wal_e_backup_retention: |
540 | type: int |
541 | default: 2 |
542 | - description: | |
543 | + description: > |
544 | EXPERIMENTAL. |
545 | Number of recent base backups and WAL files to retain. |
546 | You need enough space for this many backups plus one more, as |
547 | @@ -491,7 +493,7 @@ |
548 | streaming_replication: |
549 | type: boolean |
550 | default: true |
551 | - description: | |
552 | + description: > |
553 | Enable streaming replication. Normally, streaming replication is |
554 | always used, and any log shipping configured is used as a fallback. |
555 | Turning this off without configuring log shipping is an error. |
556 | @@ -530,7 +532,7 @@ |
557 | package_status: |
558 | default: "install" |
559 | type: string |
560 | - description: | |
561 | + description: > |
562 | The status of service-affecting packages will be set to this |
563 | value in the dpkg database. Useful valid values are "install" |
564 | and "hold". |
565 | @@ -538,13 +540,13 @@ |
566 | metrics_target: |
567 | default: "" |
568 | type: string |
569 | - description: | |
570 | + description: > |
571 | Destination for statsd-format metrics, format "host:port". If |
572 | not present and valid, metrics disabled. |
573 | metrics_prefix: |
574 | default: "dev.$UNIT.postgresql" |
575 | type: string |
576 | - description: | |
577 | + description: > |
578 | Prefix for metrics. Special value $UNIT can be used to include the |
579 | name of the unit in the prefix. |
580 | metrics_sample_interval: |
581 | |
582 | === removed file 'hooks/helpers.py' |
583 | --- hooks/helpers.py 2012-10-01 14:10:14 +0000 |
584 | +++ hooks/helpers.py 1970-01-01 00:00:00 +0000 |
585 | @@ -1,199 +0,0 @@ |
586 | -# Copyright 2012 Canonical Ltd. This software is licensed under the |
587 | -# GNU Affero General Public License version 3 (see the file LICENSE). |
588 | - |
589 | -"""Helper functions for writing hooks in python.""" |
590 | - |
591 | -__metaclass__ = type |
592 | -__all__ = [ |
593 | - 'get_config', |
594 | - 'juju_status', |
595 | - 'log', |
596 | - 'log_entry', |
597 | - 'log_exit', |
598 | - 'make_charm_config_file', |
599 | - 'relation_get', |
600 | - 'relation_set', |
601 | - 'unit_info', |
602 | - 'wait_for_machine', |
603 | - 'wait_for_page_contents', |
604 | - 'wait_for_relation', |
605 | - 'wait_for_unit', |
606 | - ] |
607 | - |
608 | -from contextlib import contextmanager |
609 | -import json |
610 | -import operator |
611 | -from shelltoolbox import ( |
612 | - command, |
613 | - run, |
614 | - script_name, |
615 | - ) |
616 | -import os |
617 | -import tempfile |
618 | -import time |
619 | -import urllib2 |
620 | -import yaml |
621 | - |
622 | - |
623 | -log = command('juju-log') |
624 | - |
625 | - |
626 | -def log_entry(): |
627 | - log("--> Entering {}".format(script_name())) |
628 | - |
629 | - |
630 | -def log_exit(): |
631 | - log("<-- Exiting {}".format(script_name())) |
632 | - |
633 | - |
634 | -def get_config(): |
635 | - config_get = command('config-get', '--format=json') |
636 | - return json.loads(config_get()) |
637 | - |
638 | - |
639 | -def relation_get(*args): |
640 | - cmd = command('relation-get') |
641 | - return cmd(*args).strip() |
642 | - |
643 | - |
644 | -def relation_set(**kwargs): |
645 | - cmd = command('relation-set') |
646 | - args = ['{}={}'.format(k, v) for k, v in kwargs.items()] |
647 | - return cmd(*args) |
648 | - |
649 | - |
650 | -def make_charm_config_file(charm_config): |
651 | - charm_config_file = tempfile.NamedTemporaryFile() |
652 | - charm_config_file.write(yaml.dump(charm_config)) |
653 | - charm_config_file.flush() |
654 | - # The NamedTemporaryFile instance is returned instead of just the name |
655 | - # because we want to take advantage of garbage collection-triggered |
656 | - # deletion of the temp file when it goes out of scope in the caller. |
657 | - return charm_config_file |
658 | - |
659 | - |
660 | -def juju_status(key): |
661 | - return yaml.safe_load(run('juju', 'status'))[key] |
662 | - |
663 | - |
664 | -def get_charm_revision(service_name): |
665 | - service = juju_status('services')[service_name] |
666 | - return int(service['charm'].split('-')[-1]) |
667 | - |
668 | - |
669 | -def unit_info(service_name, item_name, data=None): |
670 | - services = juju_status('services') if data is None else data['services'] |
671 | - service = services.get(service_name) |
672 | - if service is None: |
673 | - # XXX 2012-02-08 gmb: |
674 | - # This allows us to cope with the race condition that we |
675 | - # have between deploying a service and having it come up in |
676 | - # `juju status`. We could probably do with cleaning it up so |
677 | - # that it fails a bit more noisily after a while. |
678 | - return '' |
679 | - units = service['units'] |
680 | - item = units.items()[0][1][item_name] |
681 | - return item |
682 | - |
683 | - |
684 | -@contextmanager |
685 | -def maintain_charm_revision(path=None): |
686 | - if path is None: |
687 | - path = os.path.join(os.path.dirname(__file__), '..', 'revision') |
688 | - revision = open(path).read() |
689 | - try: |
690 | - yield revision |
691 | - finally: |
692 | - with open(path, 'w') as f: |
693 | - f.write(revision) |
694 | - |
695 | - |
696 | -def upgrade_charm(service_name, timeout=120): |
697 | - next_revision = get_charm_revision(service_name) + 1 |
698 | - start_time = time.time() |
699 | - run('juju', 'upgrade-charm', service_name) |
700 | - while get_charm_revision(service_name) != next_revision: |
701 | - if time.time() - start_time >= timeout: |
702 | - raise RuntimeError('timeout waiting for charm to be upgraded') |
703 | - time.sleep(0.1) |
704 | - return next_revision |
705 | - |
706 | - |
707 | -def wait_for_machine(num_machines=1, timeout=300): |
708 | - """Wait `timeout` seconds for `num_machines` machines to come up. |
709 | - |
710 | - This wait_for... function can be called by other wait_for functions |
711 | - whose timeouts might be too short in situations where only a bare |
712 | - Juju setup has been bootstrapped. |
713 | - """ |
714 | - # You may think this is a hack, and you'd be right. The easiest way |
715 | - # to tell what environment we're working in (LXC vs EC2) is to check |
716 | - # the dns-name of the first machine. If it's localhost we're in LXC |
717 | - # and we can just return here. |
718 | - if juju_status('machines')[0]['dns-name'] == 'localhost': |
719 | - return |
720 | - start_time = time.time() |
721 | - while True: |
722 | - # Drop the first machine, since it's the Zookeeper and that's |
723 | - # not a machine that we need to wait for. This will only work |
724 | - # for EC2 environments, which is why we return early above if |
725 | - # we're in LXC. |
726 | - machine_data = juju_status('machines') |
727 | - non_zookeeper_machines = [ |
728 | - machine_data[key] for key in machine_data.keys()[1:]] |
729 | - if len(non_zookeeper_machines) >= num_machines: |
730 | - all_machines_running = True |
731 | - for machine in non_zookeeper_machines: |
732 | - if machine['instance-state'] != 'running': |
733 | - all_machines_running = False |
734 | - break |
735 | - if all_machines_running: |
736 | - break |
737 | - if time.time() - start_time >= timeout: |
738 | - raise RuntimeError('timeout waiting for service to start') |
739 | - time.sleep(0.1) |
740 | - |
741 | - |
742 | -def wait_for_unit(service_name, timeout=480): |
743 | - """Wait `timeout` seconds for a given service name to come up.""" |
744 | - wait_for_machine(num_machines=1) |
745 | - start_time = time.time() |
746 | - while True: |
747 | - state = unit_info(service_name, 'state') |
748 | - if 'error' in state or state == 'started': |
749 | - break |
750 | - if time.time() - start_time >= timeout: |
751 | - raise RuntimeError('timeout waiting for service to start') |
752 | - time.sleep(0.1) |
753 | - if state != 'started': |
754 | - raise RuntimeError('unit did not start, state: ' + state) |
755 | - |
756 | - |
757 | -def wait_for_relation(service_name, relation_name, timeout=120): |
758 | - """Wait `timeout` seconds for a given relation to come up.""" |
759 | - start_time = time.time() |
760 | - while True: |
761 | - relation = unit_info(service_name, 'relations').get(relation_name) |
762 | - if relation is not None and relation['state'] == 'up': |
763 | - break |
764 | - if time.time() - start_time >= timeout: |
765 | - raise RuntimeError('timeout waiting for relation to be up') |
766 | - time.sleep(0.1) |
767 | - |
768 | - |
769 | -def wait_for_page_contents(url, contents, timeout=120, validate=None): |
770 | - if validate is None: |
771 | - validate = operator.contains |
772 | - start_time = time.time() |
773 | - while True: |
774 | - try: |
775 | - stream = urllib2.urlopen(url) |
776 | - except (urllib2.HTTPError, urllib2.URLError): |
777 | - pass |
778 | - else: |
779 | - page = stream.read() |
780 | - if validate(page, contents): |
781 | - return page |
782 | - if time.time() - start_time >= timeout: |
783 | - raise RuntimeError('timeout waiting for contents of ' + url) |
784 | - time.sleep(0.1) |
785 | |
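The removed `hooks/helpers.py` functions (`wait_for_machine`, `wait_for_unit`, `wait_for_relation`, `wait_for_page_contents`) all instantiate the same poll-with-timeout pattern that Amulet's sentry waiting now replaces. A minimal generic sketch of that pattern, with a hypothetical `wait_for` name not taken from the charm:

```python
import time


def wait_for(predicate, timeout=120, interval=0.1):
    """Poll predicate() until it returns truthy, or raise on timeout.

    Generic form of the removed wait_for_* helpers: each one looped,
    checked a condition derived from `juju status`, and raised
    RuntimeError once the deadline passed.
    """
    start = time.time()
    while True:
        if predicate():
            return True
        if time.time() - start >= timeout:
            raise RuntimeError('timeout waiting for condition')
        time.sleep(interval)


# Example: a condition that becomes true after a few polls.
state = {'count': 0}


def ready():
    state['count'] += 1
    return state['count'] >= 3


assert wait_for(ready, timeout=5, interval=0)
```

Each deleted helper is this loop specialised with a different predicate (machine state, unit state, relation state, page contents).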
786 | === modified file 'hooks/hooks.py' |
787 | --- hooks/hooks.py 2015-03-08 23:51:59 +0000 |
788 | +++ hooks/hooks.py 2015-03-12 09:42:09 +0000 |
789 | @@ -1769,15 +1769,16 @@ |
790 | run("apt-key add lib/{}.asc".format(pgdg_key)) |
791 | open(pgdg_list, 'w').write('deb {} {}-pgdg main'.format( |
792 | 'http://apt.postgresql.org/pub/repos/apt/', distro_codename())) |
793 | - if version == '9.4': |
794 | - pgdg_94_list = '/etc/apt/sources.list.d/pgdg_94_{}.list'.format( |
795 | - sanitize(hookenv.local_unit())) |
796 | - if not os.path.exists(pgdg_94_list): |
797 | - need_upgrade = True |
798 | - open(pgdg_94_list, 'w').write( |
799 | - 'deb {} {}-pgdg main 9.4'.format( |
800 | - 'http://apt.postgresql.org/pub/repos/apt/', |
801 | - distro_codename())) |
802 | + # if version == '9.5': |
803 | + # pgdg_beta_list = ( |
804 | + # '/etc/apt/sources.list.d/pgdg_beta_{}.list'.format( |
805 | + # sanitize(hookenv.local_unit()))) |
806 | + # if not os.path.exists(pgdg_beta_list): |
807 | + # need_upgrade = True |
808 | + # open(pgdg_beta_list, 'w').write( |
809 | + # 'deb {} {}-pgdg main 9.5'.format( |
810 | + # 'http://apt.postgresql.org/pub/repos/apt/', |
811 | + # distro_codename())) |
812 | |
813 | elif os.path.exists(pgdg_list): |
814 | log( |
815 | @@ -2513,16 +2514,15 @@ |
816 | if len(relations) == 1 and 'nagios_hostname' in relations[0]: |
817 | nagios_hostname = relations[0]['nagios_hostname'] |
818 | log("update_nrpe_checks: Obtained nagios_hostname ({}) " |
819 | - "from nrpe-external-master relation.".format( |
820 | - nagios_hostname)) |
821 | + "from nrpe-external-master relation.".format(nagios_hostname)) |
822 | else: |
823 | unit = hookenv.local_unit() |
824 | unit_name = unit.replace('/', '-') |
825 | nagios_hostname = "%s-%s" % (config_data['nagios_context'], unit_name) |
826 | - log("update_nrpe_checks: Deduced nagios_hostname ({}) from charm config " |
827 | - "(nagios_hostname not found in nrpe-external-master relation, or " |
828 | - "wrong number of relations found)".format( |
829 | - nagios_hostname)) |
830 | + log("update_nrpe_checks: Deduced nagios_hostname ({}) from charm " |
831 | + "config (nagios_hostname not found in nrpe-external-master " |
832 | + "relation, or wrong number of relations " |
833 | + "found)".format(nagios_hostname)) |
834 | |
835 | nrpe_service_file = \ |
836 | '/var/lib/nagios/export/service__{}_check_pgsql.cfg'.format( |
837 | |
838 | === modified file 'hooks/test_hooks.py' |
839 | --- hooks/test_hooks.py 2015-03-09 10:30:33 +0000 |
840 | +++ hooks/test_hooks.py 2015-03-12 09:42:09 +0000 |
841 | @@ -68,7 +68,6 @@ |
842 | "encoding": "UTF-8", |
843 | "extra_packages": "", |
844 | "dumpfile_location": "None", |
845 | - "config_change_command": "reload", |
846 | "version": "9.1", |
847 | "cluster_name": "main", |
848 | "listen_ip": "*", |
849 | |
850 | === added file 'lib/juju-deployer-wrapper.py' |
851 | --- lib/juju-deployer-wrapper.py 1970-01-01 00:00:00 +0000 |
852 | +++ lib/juju-deployer-wrapper.py 2015-03-12 09:42:09 +0000 |
853 | @@ -0,0 +1,15 @@ |
854 | +#!/usr/bin/python |
855 | + |
856 | +import subprocess |
857 | +import sys |
858 | + |
859 | +# Strip the -W option, as its noise messes with test output. |
860 | +args = list(sys.argv[1:]) |
861 | +if '-W' in args: |
862 | + args.remove('-W') |
863 | +cmd = ['juju-deployer'] + args |
864 | +try: |
865 | + subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
866 | +except subprocess.CalledProcessError as x: |
867 | + sys.stderr.write(x.output) |
868 | + sys.exit(x.returncode) |
869 | |
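The added `lib/juju-deployer-wrapper.py` captures juju-deployer's combined output and only replays it to stderr when the command fails, preserving the exit code. The same capture-then-replay-on-error pattern can be exercised standalone; `run_quiet` is a hypothetical name, and the examples invoke the Python interpreter instead of juju-deployer so they run anywhere:

```python
import subprocess
import sys


def run_quiet(cmd):
    """Run cmd, discarding output on success but replaying the
    combined stdout/stderr to our stderr on failure, and returning
    the exit code -- the behaviour of lib/juju-deployer-wrapper.py.
    """
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return 0
    except subprocess.CalledProcessError as x:
        out = x.output
        if isinstance(out, bytes):  # Python 3 returns bytes
            out = out.decode()
        sys.stderr.write(out)
        return x.returncode


# Silent on success; output and exit code surface only on failure.
assert run_quiet([sys.executable, '-c', 'pass']) == 0
assert run_quiet(
    [sys.executable, '-c',
     'import sys; sys.stderr.write("boom\\n"); sys.exit(3)']) == 3
```

Keeping output suppressed on success is what stops deployer chatter from polluting the test runner's output, while failures still show the full log.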
870 | === modified file 'templates/postgresql.conf.tmpl' |
871 | --- templates/postgresql.conf.tmpl 2015-02-11 21:05:42 +0000 |
872 | +++ templates/postgresql.conf.tmpl 2015-03-12 09:42:09 +0000 |
873 | @@ -41,6 +41,9 @@ |
874 | max_connections = {{max_connections}} |
875 | {% endif -%} |
876 | |
877 | +# Broken under all versions of Ubuntu, per Bug #1018307 |
878 | +ssl_renegotiation_limit=0 |
879 | + |
880 | |
881 | #------------------------------------------------------------------------------ |
882 | # RESOURCE USAGE (except WAL) |
883 | @@ -58,6 +61,9 @@ |
884 | {% if maintenance_work_mem != "" -%} |
885 | maintenance_work_mem = {{maintenance_work_mem}} |
886 | {% endif -%} |
887 | +{% if max_prepared_transactions != "" -%} |
888 | +max_prepared_transactions = {{max_prepared_transactions}} |
889 | +{% endif -%} |
890 | |
891 | |
892 | #------------------------------------------------------------------------------ |
893 | @@ -97,6 +103,10 @@ |
894 | {% if default_statistics_target != -1 -%} |
895 | default_statistics_target = {{default_statistics_target}} |
896 | {% endif -%} |
897 | +{% if collapse_limit != -1 -%} |
898 | +from_collapse_limit = {{collapse_limit}} |
899 | +join_collapse_limit = {{collapse_limit}} |
900 | +{% endif -%} |
901 | |
902 | |
903 | #------------------------------------------------------------------------------ |
904 | |
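The `postgresql.conf.tmpl` additions follow the template's existing pattern: a setting is emitted only when its config value differs from a sentinel (`""` for memory settings, `-1` for the planner limits, which drive two settings at once). Stripped of the Jinja syntax, the guard logic amounts to the following sketch (`render_setting` is a hypothetical helper, not part of the charm):

```python
def render_setting(name, value, sentinel=""):
    """Return a postgresql.conf line for `name`, or '' when the value
    is still the sentinel -- mirroring the {% if x != "" -%} and
    {% if x != -1 -%} guards in templates/postgresql.conf.tmpl."""
    if value == sentinel:
        return ""
    return "{} = {}\n".format(name, value)


# Unset values produce no config line at all.
assert render_setting("max_prepared_transactions", "") == ""
assert (render_setting("max_prepared_transactions", "5")
        == "max_prepared_transactions = 5\n")

# collapse_limit uses -1 as its sentinel and, when set, is written
# out as both from_collapse_limit and join_collapse_limit.
assert render_setting("join_collapse_limit", -1, sentinel=-1) == ""
assert (render_setting("from_collapse_limit", 10, sentinel=-1)
        == "from_collapse_limit = 10\n")
```

Leaving a setting out entirely, rather than writing a default, lets PostgreSQL fall back to its own compiled-in defaults for that version.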
905 | === modified file 'test.py' |
906 | --- test.py 2014-12-04 17:49:22 +0000 |
907 | +++ test.py 2015-03-12 09:42:09 +0000 |
908 | @@ -23,7 +23,7 @@ |
909 | import psycopg2 |
910 | import testtools |
911 | |
912 | -from testing.jujufixture import JujuFixture, run |
913 | +from testing.amuletfixture import AmuletFixture |
914 | |
915 | |
916 | SERIES = os.environ.get('SERIES', 'trusty').strip() |
917 | @@ -92,189 +92,14 @@ |
918 | shutil.rmtree(psql_charmhelpers) |
919 | shutil.copytree(main_charmhelpers, psql_charmhelpers) |
920 | |
921 | - self.juju = self.useFixture(JujuFixture( |
922 | - series=SERIES, reuse_machines=True, |
923 | - do_teardown='TEST_DONT_TEARDOWN_JUJU' not in os.environ)) |
924 | + self.deployment = AmuletFixture(series=SERIES) |
925 | + self.deployment.setUp() |
926 | + self.addCleanup(self.deployment.tearDown) |
927 | |
928 | # If the charms fail, we don't want tests to hang indefinitely. |
929 | - timeout = int(os.environ.get('TEST_TIMEOUT', 1200)) |
930 | - if timeout > 0: |
931 | - self.useFixture(fixtures.Timeout(timeout, gentle=True)) |
932 | - |
933 | - def wait_until_ready(self, pg_units, relation=True): |
934 | - |
935 | - # Per Bug #1200267, it is impossible to know when a juju |
936 | - # environment is actually ready for testing. Instead, we do the |
937 | - # best we can by inspecting all the relation state, and, if it |
938 | - # is at this particular instant in the expected state, hoping |
939 | - # that the system is stable enough to continue testing. |
940 | - |
941 | - timeout = time.time() + 600 |
942 | - pg_units = frozenset(pg_units) |
943 | - |
944 | - # The list of PG units we expect to be related to the psql unit. |
945 | - if relation: |
946 | - rel_pg_units = frozenset(pg_units) |
947 | - else: |
948 | - rel_pg_units = frozenset() |
949 | - |
950 | - while True: |
951 | - try: |
952 | - self.juju.wait_until_ready(0) # Also refreshes status |
953 | - |
954 | - status_pg_units = set(self.juju.status[ |
955 | - 'services']['postgresql']['units'].keys()) |
956 | - |
957 | - if pg_units != status_pg_units: |
958 | - # Confirm the PG units reported by 'juju status' |
959 | - # match the list we expect. |
960 | - raise NotReady('units not yet added/removed') |
961 | - |
962 | - for psql_unit in self.juju.status['services']['psql']['units']: |
963 | - self.confirm_psql_unit_ready(psql_unit, rel_pg_units) |
964 | - |
965 | - for pg_unit in pg_units: |
966 | - peers = [u for u in pg_units if u != pg_unit] |
967 | - self.confirm_postgresql_unit_ready(pg_unit, peers) |
968 | - |
969 | - return |
970 | - except NotReady: |
971 | - if time.time() > timeout: |
972 | - raise |
973 | - time.sleep(10) |
974 | - |
975 | - def confirm_psql_unit_ready(self, psql_unit, pg_units): |
976 | - # Confirm the db and db-admin relations are all in a useful |
977 | - # state. |
978 | - psql_rel_info = self.juju.relation_info(psql_unit) |
979 | - if pg_units and not psql_rel_info: |
980 | - raise NotReady('{} waiting for relations'.format(psql_unit)) |
981 | - elif not pg_units and psql_rel_info: |
982 | - raise NotReady('{} waiting to drop relations'.format(psql_unit)) |
983 | - elif not pg_units and not psql_rel_info: |
984 | - return |
985 | - |
986 | - psql_service = psql_unit.split('/', 1)[0] |
987 | - |
988 | - # The set of PostgreSQL units related to the psql unit. They |
989 | - # might be related via several db or db-admin relations. |
990 | - all_rel_pg_units = set() |
991 | - |
992 | - for rel_name in psql_rel_info: |
993 | - for rel_id, rel_info in psql_rel_info[rel_name].items(): |
994 | - |
995 | - # The database this relation has requested to use, if any. |
996 | - requested_db = rel_info[psql_unit].get('database', None) |
997 | - |
998 | - rel_pg_units = ( |
999 | - [u for u in rel_info if not u.startswith(psql_service)]) |
1000 | - all_rel_pg_units = all_rel_pg_units.union(rel_pg_units) |
1001 | - |
1002 | - num_masters = 0 |
1003 | - |
1004 | - for unit in rel_pg_units: |
1005 | - unit_rel_info = rel_info[unit] |
1006 | - |
1007 | - # PG unit must be presenting the correct database. |
1008 | - if 'database' not in unit_rel_info: |
1009 | - raise NotReady( |
1010 | - '{} has no database'.format(unit)) |
1011 | - if requested_db and ( |
1012 | - unit_rel_info['database'] != requested_db): |
1013 | - raise NotReady( |
1014 | - '{} not using requested db {}'.format( |
1015 | - unit, requested_db)) |
1016 | - |
1017 | - # PG unit must be in a valid state. |
1018 | - state = unit_rel_info.get('state', None) |
1019 | - if not state: |
1020 | - raise NotReady( |
1021 | - '{} has no state'.format(unit)) |
1022 | - elif state == 'standalone': |
1023 | - if len(rel_pg_units) > 1: |
1024 | - raise NotReady( |
1025 | - '{} is standalone'.format(unit)) |
1026 | - elif state == 'master': |
1027 | - num_masters += 1 |
1028 | - elif state != 'hot standby': |
1029 | - # Failover state or totally broken. |
1030 | - raise NotReady( |
1031 | - '{} in {} state'.format(unit, state)) |
1032 | - |
1033 | - # PG unit must have authorized this psql client. |
1034 | - allowed_units = unit_rel_info.get( |
1035 | - 'allowed-units', '').split() |
1036 | - if psql_unit not in allowed_units: |
1037 | - raise NotReady( |
1038 | - '{} not yet authorized by {} ({})'.format( |
1039 | - psql_unit, unit, allowed_units)) |
1040 | - |
1041 | - # Alas, this is no longer true with manual replication. |
1042 | - # We must not have multiple masters in this relation. |
1043 | - # if len(rel_pg_units) > 1 and num_masters != 1: |
1044 | - # raise NotReady( |
1045 | - # '{} masters'.format(num_masters)) |
1046 | - |
1047 | - if pg_units != all_rel_pg_units: |
1048 | - raise NotReady( |
1049 | - 'Expected PG units {} != related units {}'.format( |
1050 | - pg_units, all_rel_pg_units)) |
1051 | - |
1052 | - def confirm_postgresql_unit_ready(self, pg_unit, peers=()): |
1053 | - pg_rel_info = self.juju.relation_info(pg_unit) |
1054 | - if not pg_rel_info: |
1055 | - raise NotReady('{} has no relations'.format(pg_unit)) |
1056 | - |
1057 | - try: |
1058 | - rep_rel_id = pg_rel_info['replication'].keys()[0] |
1059 | - actual_peers = set([ |
1060 | - u for u in pg_rel_info['replication'][rep_rel_id].keys() |
1061 | - if u != pg_unit]) |
1062 | - except (IndexError, KeyError): |
1063 | - if peers: |
1064 | - raise NotReady('Peer relation does not exist') |
1065 | - rep_rel_id = None |
1066 | - actual_peers = set() |
1067 | - |
1068 | - if actual_peers != set(peers): |
1069 | - raise NotReady('Expecting {} peers, found {}'.format( |
1070 | - peers, actual_peers)) |
1071 | - |
1072 | - if not peers: |
1073 | - return |
1074 | - |
1075 | - pg_rep_rel_info = pg_rel_info['replication'][rep_rel_id].get( |
1076 | - pg_unit, None) |
1077 | - if not pg_rep_rel_info: |
1078 | - raise NotReady('{} has not yet joined the peer relation'.format( |
1079 | - pg_unit)) |
1080 | - |
1081 | - state = pg_rep_rel_info.get('state', None) |
1082 | - |
1083 | - if not state: |
1084 | - raise NotReady('{} has no state'.format(pg_unit)) |
1085 | - |
1086 | - if state == 'standalone' and peers: |
1087 | - raise NotReady('{} is standalone but has peers'.format(pg_unit)) |
1088 | - |
1089 | - if state not in ('standalone', 'master', 'hot standby'): |
1090 | - raise NotReady('{} reports failover in progress'.format(pg_unit)) |
1091 | - |
1092 | - num_masters = 1 if state in ('master', 'standalone') else 0 |
1093 | - |
1094 | - for peer in peers: |
1095 | - peer_rel_info = pg_rel_info['replication'][rep_rel_id][peer] |
1096 | - peer_state = peer_rel_info.get('state', None) |
1097 | - if not peer_state: |
1098 | - raise NotReady('{} has no peer state'.format(peer)) |
1099 | - if peer_state == 'master': |
1100 | - num_masters += 1 |
1101 | - elif peer_state != 'hot standby': |
1102 | - raise NotReady('Peer {} in state {}'.format(peer, peer_state)) |
1103 | - |
1104 | - # No longer true with manual replication. |
1105 | - # if num_masters < 1: |
1106 | - # raise NotReady('No masters seen from {}'.format(pg_unit)) |
1107 | + self.timeout = int(os.environ.get('TEST_TIMEOUT', 1200)) |
1108 | + if self.timeout > 0: |
1109 | + self.useFixture(fixtures.Timeout(self.timeout, gentle=True)) |
1110 | |
1111 | def sql(self, sql, postgres_unit=None, psql_unit=None, dbname=None): |
1112 | '''Run some SQL on postgres_unit from psql_unit. |
1113 | @@ -289,38 +114,34 @@ |
1114 | ''' |
1115 | # Which psql unit we are going to query from. |
1116 | if psql_unit is None: |
1117 | - psql_unit = ( |
1118 | - self.juju.status['services']['psql']['units'].keys()[0]) |
1119 | - |
1120 | - full_rel_info = self.juju.relation_info(psql_unit) |
1121 | + psql_sentry = self.deployment.sentry['psql'][0] |
1122 | + psql_unit = psql_sentry.info['unit_name'] |
1123 | + else: |
1124 | + psql_sentry = self.deployment.sentry[psql_unit] |
1125 | |
1126 | # 'db' or 'db-admin' relation? |
1127 | rel_name = 'db-admin' if dbname else 'db' |
1128 | + to_rel = 'psql:{}'.format(rel_name) |
1129 | |
1130 | # Which PostgreSQL unit we want to talk to. |
1131 | if postgres_unit is None: |
1132 | - postgres_unit = ( |
1133 | - self.juju.status['services']['postgresql']['units'].keys()[0]) |
1134 | + postgres_sentry = self.deployment.sentry['postgresql'][0] |
1135 | + relinfo = postgres_sentry.relation(rel_name, to_rel) |
1136 | elif postgres_unit in ('master', 'hot standby'): |
1137 | - for rel_id, rel_info in full_rel_info[rel_name].items(): |
1138 | - for rel_unit, rel_unit_info in rel_info.items(): |
1139 | - if rel_unit_info.get('state') == postgres_unit: |
1140 | - postgres_unit = rel_unit |
1141 | - assert postgres_unit not in (None, 'master', 'hot standby'), ( |
1142 | - 'Unable to determine postgresql unit to use') |
1143 | - |
1144 | - # PostgreSQL unit relation info |
1145 | - rel_info = None |
1146 | - for rel_id in full_rel_info[rel_name]: |
1147 | - if postgres_unit in full_rel_info[rel_name][rel_id]: |
1148 | - rel_info = full_rel_info[rel_name][rel_id][postgres_unit] |
1149 | - break |
1150 | - assert rel_info is not None, ( |
1151 | - 'Unable to find pg rel info {} {!r}'.format( |
1152 | - postgres_unit, full_rel_info[rel_name])) |
1153 | + postgres_sentry = None |
1154 | + for postgres_sentry in self.deployment.sentry['postgresql']: |
1155 | + relinfo = postgres_sentry.relation(rel_name, to_rel) |
1156 | + if relinfo.get('state') == postgres_unit: |
1157 | + break |
1158 | + assert postgres_sentry is not None, ( |
1159 | + 'Unable to determine postgresql unit to use') |
1160 | + relinfo = postgres_sentry.relation(rel_name, to_rel) |
1161 | + else: |
1162 | + postgres_sentry = self.deployment.sentry[postgres_unit] |
1163 | + relinfo = postgres_sentry.relation(rel_name, to_rel) |
1164 | |
1165 | if dbname is None: |
1166 | - dbname = rel_info['database'] |
1167 | + dbname = relinfo['database'] |
1168 | |
1169 | # Choose a local port for our tunnel. |
1170 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
1171 | @@ -333,8 +154,8 @@ |
1172 | # tunnels, as simply killing the 'juju ssh' process doesn't seem |
1173 | # to be enough. |
1174 | tunnel_cmd = [ |
1175 | - 'juju', 'ssh', psql_unit, '-N', '-L', |
1176 | - '{}:{}:{}'.format(local_port, rel_info['host'], rel_info['port'])] |
1177 | + 'juju', 'ssh', psql_unit, '-q', '-N', '-L', |
1178 | + '{}:{}:{}'.format(local_port, relinfo['host'], relinfo['port'])] |
1179 | # Don't disable stdout, so we can see when there are SSH |
1180 | # failures like bad host keys. |
1181 | tunnel_proc = subprocess.Popen( |
1182 | @@ -360,7 +181,7 @@ |
1183 | # Execute the query |
1184 | con = psycopg2.connect( |
1185 | database=dbname, port=local_port, host='localhost', |
1186 | - user=rel_info['user'], password=rel_info['password']) |
1187 | + user=relinfo['user'], password=relinfo['password']) |
1188 | cur = con.cursor() |
1189 | cur.execute(sql) |
1190 | if cur.description is None: |
1191 | @@ -379,28 +200,25 @@ |
1192 | cmd = [ |
1193 | 'juju', 'run', '--unit', unit, |
1194 | 'sudo pg_ctlcluster 9.1 main -force {}'.format(command)] |
1195 | - run(self, cmd) |
1196 | + subprocess.check_output(cmd) |
1197 | |
1198 | def test_basic(self): |
1199 | '''Connect to a a single unit service via the db relationship.''' |
1200 | - self.juju.deploy(TEST_CHARM, 'postgresql', config=self.pg_config) |
1201 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1202 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1203 | - self.wait_until_ready(['postgresql/0']) |
1204 | + self.deployment.add('psql', PSQL_CHARM) |
1205 | + self.deployment.add('postgresql', TEST_CHARM) |
1206 | + self.deployment.configure('postgresql', self.pg_config) |
1207 | + self.deployment.relate('postgresql:db', 'psql:db') |
1208 | + self.deployment.deploy(timeout=self.timeout) |
1209 | |
1210 | result = self.sql('SELECT TRUE') |
1211 | self.assertEqual(result, [(True,)]) |
1212 | |
1213 | - # Confirm that the relation tears down without errors. |
1214 | - self.juju.do(['destroy-relation', 'postgresql:db', 'psql:db']) |
1215 | - self.wait_until_ready(['postgresql/0'], relation=False) |
1216 | - |
1217 | def test_streaming_replication(self): |
1218 | - self.juju.deploy( |
1219 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1220 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1221 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1222 | - self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
1223 | + self.deployment.add('psql', PSQL_CHARM) |
1224 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1225 | + self.deployment.configure('postgresql', self.pg_config) |
1226 | + self.deployment.relate('postgresql:db', 'psql:db') |
1227 | + self.deployment.deploy(timeout=self.timeout) |
1228 | |
1229 | # Confirm that the slave has successfully opened a streaming |
1230 | # replication connection. |
1231 | @@ -431,11 +249,11 @@ |
1232 | stderr=subprocess.STDOUT) |
1233 | self.addCleanup(swift_cleanup) |
1234 | |
1235 | - self.juju.deploy( |
1236 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1237 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1238 | - self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1239 | - self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
1240 | + self.deployment.add('psql', PSQL_CHARM) |
1241 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1242 | + self.deployment.configure('postgresql', self.pg_config) |
1243 | + self.deployment.relate('postgresql:db-admin', 'psql:db-admin') |
1244 | + self.deployment.deploy(timeout=self.timeout) |
1245 | |
1246 | # Confirm that the slave has not opened a streaming |
1247 | # replication connection. |
1248 | @@ -476,11 +294,11 @@ |
1249 | stderr=subprocess.STDOUT) |
1250 | self.addCleanup(swift_cleanup) |
1251 | |
1252 | - self.juju.deploy( |
1253 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1254 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1255 | - self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1256 | - self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
1257 | + self.deployment.add('psql', PSQL_CHARM) |
1258 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1259 | + self.deployment.configure('postgresql', self.pg_config) |
1260 | + self.deployment.relate('postgresql:db-admin', 'psql:db-admin') |
1261 | + self.deployment.deploy(timeout=self.timeout) |
1262 | |
1263 | # Confirm that the slave has not opened a streaming |
1264 | # replication connection. |
1265 | @@ -505,20 +323,16 @@ |
1266 | |
1267 | def test_basic_admin(self): |
1268 | '''Connect to a single unit service via the db-admin relationship.''' |
1269 | - self.juju.deploy(TEST_CHARM, 'postgresql', config=self.pg_config) |
1270 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1271 | - self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1272 | - self.juju.do(['expose', 'postgresql']) |
1273 | - self.wait_until_ready(['postgresql/0']) |
1274 | + self.deployment.add('psql', PSQL_CHARM) |
1275 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1276 | + self.deployment.configure('postgresql', self.pg_config) |
1277 | + self.deployment.relate('postgresql:db-admin', 'psql:db-admin') |
1278 | + self.deployment.expose('postgresql') |
1279 | + self.deployment.deploy(timeout=self.timeout) |
1280 | |
1281 | result = self.sql('SELECT TRUE', dbname='postgres') |
1282 | self.assertEqual(result, [(True,)]) |
1283 | |
1284 | - # Confirm that the relation tears down without errors. |
1285 | - self.juju.do([ |
1286 | - 'destroy-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1287 | - self.wait_until_ready(['postgresql/0'], relation=False) |
1288 | - |
1289 | def is_master(self, postgres_unit, dbname=None): |
1290 | is_master = self.sql( |
1291 | 'SELECT NOT pg_is_in_recovery()', |
1292 | @@ -530,20 +344,20 @@ |
1293 | # Per Bug #1258485, creating a 3 unit service will often fail. |
1294 | # Instead, create a 2 unit service, wait for it to be ready, |
1295 | # then add a third unit. |
1296 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1297 | - self.juju.deploy( |
1298 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1299 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1300 | - self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
1301 | - self.juju.add_unit('postgresql') |
1302 | - self.wait_until_ready(['postgresql/0', 'postgresql/1', 'postgresql/2']) |
1303 | + self.deployment.add('psql', PSQL_CHARM) |
1304 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1305 | + self.deployment.configure('postgresql', self.pg_config) |
1306 | + self.deployment.relate('postgresql:db', 'psql:db') |
1307 | + self.deployment.deploy(timeout=self.timeout) |
1308 | + self.deployment.add_unit('postgresql') |
1309 | + self.deployment.wait() |
1310 | |
1311 | # Even on a freshly setup service, we have no idea which unit |
1312 | # will become the master as we have no control over which two |
1313 | # units join the peer relation first. |
1314 | - units = sorted( |
1315 | - (self.is_master(unit), unit) for unit in |
1316 | - self.juju.status['services']['postgresql']['units'].keys()) |
1317 | + units = sorted((self.is_master(unit), unit) for unit in |
1318 | + [sentry.info['unit_name'] |
1319 | + for sentry in self.deployment.sentry['postgresql']]) |
1320 | self.assertFalse(units[0][0]) |
1321 | self.assertFalse(units[1][0]) |
1322 | self.assertTrue(units[2][0]) |
1323 | @@ -580,8 +394,8 @@ |
1324 | self.assertIs(True, token_received(standby_unit_2)) |
1325 | |
1326 | # Remove the master unit. |
1327 | - self.juju.do(['remove-unit', master_unit]) |
1328 | - self.wait_until_ready([standby_unit_1, standby_unit_2]) |
1329 | + self.deployment.remove_unit(master_unit) |
1330 | + self.deployment.wait() |
1331 | |
1332 | # When we failover, the unit that has received the most WAL |
1333 | # information from the old master (most in sync) is elected the |
1334 | @@ -603,8 +417,8 @@ |
1335 | self.assertIs(True, token_received(standby_unit)) |
1336 | |
1337 | # Remove the master again, leaving a single unit. |
1338 | - self.juju.do(['remove-unit', master_unit]) |
1339 | - self.wait_until_ready([standby_unit]) |
1340 | + self.deployment.remove_unit(master_unit) |
1341 | + self.deployment.wait() |
1342 | |
1343 | # Last unit is a working, standalone database. |
1344 | self.is_master(standby_unit) |
1345 | @@ -612,35 +426,29 @@ |
1346 | |
1347 | # Confirm that it is actually reporting as 'standalone' rather |
1348 | # than 'master' |
1349 | - full_relation_info = self.juju.relation_info('psql/0') |
1350 | - for rel_info in full_relation_info['db'].values(): |
1351 | - for unit, unit_rel_info in rel_info.items(): |
1352 | - if unit == 'psql/0': |
1353 | - pass |
1354 | - elif unit == standby_unit: |
1355 | - self.assertEqual(unit_rel_info['state'], 'standalone') |
1356 | - else: |
1357 | - raise RuntimeError('Unknown unit {}'.format(unit)) |
1358 | + relinfo = self.deployment.sentry[standby_unit].relation('db', |
1359 | + 'psql:db') |
1360 | + self.assertEqual(relinfo.get('state'), 'standalone') |
1361 | |
1362 | def test_failover_election(self): |
1363 | """Ensure master elected in a failover is the best choice""" |
1364 | # Per Bug #1258485, creating a 3 unit service will often fail. |
1365 | # Instead, create a 2 unit service, wait for it to be ready, |
1366 | # then add a third unit. |
1367 | - self.juju.deploy( |
1368 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1369 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1370 | - self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1371 | - self.wait_until_ready(['postgresql/0', 'postgresql/1']) |
1372 | - self.juju.add_unit('postgresql') |
1373 | - self.wait_until_ready(['postgresql/0', 'postgresql/1', 'postgresql/2']) |
1374 | + self.deployment.add('psql', PSQL_CHARM) |
1375 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1376 | + self.deployment.configure('postgresql', self.pg_config) |
1377 | + self.deployment.relate('postgresql:db-admin', 'psql:db-admin') |
1378 | + self.deployment.deploy(timeout=self.timeout) |
1379 | + self.deployment.add_unit('postgresql') |
1380 | + self.deployment.wait() |
1381 | |
1382 | # Even on a freshly setup service, we have no idea which unit |
1383 | # will become the master as we have no control over which two |
1384 | # units join the peer relation first. |
1385 | - units = sorted( |
1386 | - (self.is_master(unit, 'postgres'), unit) for unit in |
1387 | - self.juju.status['services']['postgresql']['units'].keys()) |
1388 | + units = sorted((self.is_master(unit, 'postgres'), unit) for unit in |
1389 | + [sentry.info['unit_name'] |
1390 | + for sentry in self.deployment.sentry['postgresql']]) |
1391 | self.assertFalse(units[0][0]) |
1392 | self.assertFalse(units[1][0]) |
1393 | self.assertTrue(units[2][0]) |
1394 | @@ -665,8 +473,8 @@ |
1395 | self.pg_ctlcluster(standby_unit_1, 'start') |
1396 | |
1397 | # Failover. |
1398 | - self.juju.do(['remove-unit', master_unit]) |
1399 | - self.wait_until_ready([standby_unit_1, standby_unit_2]) |
1400 | + self.deployment.remove_unit(master_unit) |
1401 | + self.deployment.wait() |
1402 | |
1403 | # Fix replication. |
1404 | self.sql( |
1405 | @@ -685,15 +493,18 @@ |
1406 | port = 7400 + int((self.VERSION or '66').replace('.', '')) |
1407 | self.pg_config['listen_port'] = port |
1408 | |
1409 | - self.juju.deploy(TEST_CHARM, 'postgresql', config=self.pg_config) |
1410 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1411 | - self.juju.do(['add-relation', 'postgresql:db-admin', 'psql:db-admin']) |
1412 | - self.wait_until_ready(['postgresql/0']) |
1413 | + self.deployment.add('psql', PSQL_CHARM) |
1414 | + self.deployment.add('postgresql', TEST_CHARM) |
1415 | + self.deployment.configure('postgresql', self.pg_config) |
1416 | + self.deployment.expose('postgresql') # This test requires exposure. |
1417 | + self.deployment.relate('postgresql:db-admin', 'psql:db-admin') |
1418 | + self.deployment.deploy(timeout=self.timeout) |
1419 | |
1420 | # Determine the IP address that the unit will see. |
1421 | - unit = self.juju.status['services']['postgresql']['units'].keys()[0] |
1422 | - unit_ip = self.juju.status['services']['postgresql']['units'][ |
1423 | - unit]['public-address'] |
1424 | + status = self.deployment.get_status() |
1425 | + unit = status['services']['postgresql']['units'].keys()[0] |
1426 | + unit_ip = (status['services']['postgresql']['units'][unit] |
1427 | + ['public-address']) |
1428 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) |
1429 | s.connect((unit_ip, port)) |
1430 | my_ip = s.getsockname()[0] |
1431 | @@ -713,8 +524,8 @@ |
1432 | psycopg2.OperationalError, psycopg2.connect, conn_str) |
1433 | |
1434 | # Connections should work after setting the admin-addresses. |
1435 | - self.juju.do([ |
1436 | - 'set', 'postgresql', 'admin_addresses={}'.format(my_ip)]) |
1437 | + self.deployment.configure('postgresql', dict(admin_addresses=my_ip)) |
1438 | + self.deployment.wait() |
1439 | timeout = time.time() + 30 |
1440 | while True: |
1441 | try: |
1442 | @@ -731,14 +542,14 @@ |
1443 | def test_explicit_database(self): |
1444 | # Two units to ensure both masters and hot standbys |
1445 | # present the correct credentials. |
1446 | - self.juju.deploy( |
1447 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1448 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1449 | - self.juju.do(['set', 'psql', 'database=explicit']) |
1450 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1451 | + self.deployment.add('psql', PSQL_CHARM) |
1452 | + self.deployment.configure('psql', dict(database='explicit')) |
1453 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1454 | + self.deployment.configure('postgresql', self.pg_config) |
1455 | + self.deployment.relate('postgresql:db', 'psql:db') |
1456 | + self.deployment.deploy(timeout=self.timeout) |
1457 | |
1458 | pg_units = ['postgresql/0', 'postgresql/1'] |
1459 | - self.wait_until_ready(pg_units) |
1460 | |
1461 | for unit in pg_units: |
1462 | result = self.sql('SELECT current_database()', unit)[0][0] |
1463 | @@ -749,20 +560,20 @@ |
1464 | def test_roles_granted(self): |
1465 | # We use two units to confirm that there is no attempt to |
1466 | # grant roles on the hot standby. |
1467 | - self.juju.deploy( |
1468 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1469 | - self.juju.deploy(PSQL_CHARM, 'psql', config={'roles': 'role_a'}) |
1470 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1471 | - pg_units = ['postgresql/0', 'postgresql/1'] |
1472 | - self.wait_until_ready(pg_units) |
1473 | + self.deployment.add('psql', PSQL_CHARM) |
1474 | + self.deployment.configure('psql', dict(roles='role_a')) |
1475 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1476 | + self.deployment.configure('postgresql', self.pg_config) |
1477 | + self.deployment.relate('postgresql:db', 'psql:db') |
1478 | + self.deployment.deploy(timeout=self.timeout) |
1479 | |
1480 | has_role_a = self.sql(''' |
1481 | SELECT pg_has_role(current_user, 'role_a', 'MEMBER') |
1482 | ''')[0][0] |
1483 | self.assertTrue(has_role_a) |
1484 | |
1485 | - self.juju.do(['set', 'psql', 'roles=role_a,role_b']) |
1486 | - self.wait_until_ready(pg_units) |
1487 | + self.deployment.configure('psql', dict(roles='role_a,role_b')) |
1488 | + self.deployment.wait() |
1489 | |
1490 | # Retry this for a while. Per Bug #1200267, we can't tell when |
1491 | # the hooks have finished running and the role has been granted. |
1492 | @@ -788,12 +599,12 @@ |
1493 | def test_roles_revoked(self): |
1494 | # We use two units to confirm that there is no attempt to |
1495 | # grant roles on the hot standby. |
1496 | - self.juju.deploy( |
1497 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1498 | - self.juju.deploy(PSQL_CHARM, 'psql', config={'roles': 'role_a,role_b'}) |
1499 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1500 | - pg_units = ['postgresql/0', 'postgresql/1'] |
1501 | - self.wait_until_ready(pg_units) |
1502 | + self.deployment.add('psql', PSQL_CHARM) |
1503 | + self.deployment.configure('psql', dict(roles='role_a,role_b')) |
1504 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1505 | + self.deployment.configure('postgresql', self.pg_config) |
1506 | + self.deployment.relate('postgresql:db', 'psql:db') |
1507 | + self.deployment.deploy(timeout=self.timeout) |
1508 | |
1509 | has_role_a, has_role_b = self.sql(''' |
1510 | SELECT |
1511 | @@ -803,8 +614,8 @@ |
1512 | self.assertTrue(has_role_a) |
1513 | self.assertTrue(has_role_b) |
1514 | |
1515 | - self.juju.do(['set', 'psql', 'roles=role_c']) |
1516 | - self.wait_until_ready(pg_units) |
1517 | + self.deployment.configure('psql', dict(roles='role_c')) |
1518 | + self.deployment.wait() |
1519 | |
1520 | # Per Bug #1200267, we have to retry a while here and hope. |
1521 | # We have no way of knowing when the pending role changes have |
1522 | @@ -823,8 +634,8 @@ |
1523 | self.assertFalse(has_role_b) |
1524 | self.assertTrue(has_role_c) |
1525 | |
1526 | - self.juju.do(['unset', 'psql', 'roles']) |
1527 | - self.wait_until_ready(pg_units) |
1528 | + self.deployment.configure('psql', dict(roles=None)) |
1529 | + self.deployment.wait() |
1530 | |
1531 | timeout = time.time() + 60 |
1532 | while True: |
1533 | @@ -844,15 +655,14 @@ |
1534 | # Deploy 2 PostgreSQL units and 2 rsyslog units to ensure that |
1535 | # log messages from every source reach every sink. |
1536 | self.pg_config['log_min_duration_statement'] = 0 # Log all statements |
1537 | - self.juju.deploy( |
1538 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1539 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1540 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1541 | - self.juju.deploy('cs:rsyslog', 'rsyslog', num_units=2) |
1542 | - self.juju.do([ |
1543 | - 'add-relation', 'postgresql:syslog', 'rsyslog:aggregator']) |
1544 | - pg_units = ['postgresql/0', 'postgresql/1'] |
1545 | - self.wait_until_ready(pg_units) |
1546 | + |
1547 | + self.deployment.add('psql', PSQL_CHARM) |
1548 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1549 | + self.deployment.configure('postgresql', self.pg_config) |
1550 | + self.deployment.relate('postgresql:db', 'psql:db') |
1551 | + self.deployment.add('rsyslog', 'cs:rsyslog', 2) |
1552 | + self.deployment.relate('postgresql:syslog', 'rsyslog:aggregator') |
1553 | + self.deployment.deploy(timeout=self.timeout) |
1554 | |
1555 | token = str(uuid.uuid1()) |
1556 | |
1557 | @@ -861,36 +671,24 @@ |
1558 | time.sleep(2) |
1559 | |
1560 | for runit in ['rsyslog/0', 'rsyslog/1']: |
1561 | - cmd = ['juju', 'run', '--unit', runit, 'tail -100 /var/log/syslog'] |
1562 | - out = run(self, cmd) |
1563 | + sentry = self.deployment.sentry[runit] |
1564 | + out = sentry.run('tail -100 /var/log/syslog') |
1565 | self.failUnless('master {}'.format(token) in out) |
1566 | self.failUnless('hot standby {}'.format(token) in out) |
1567 | |
1568 | - # Confirm that the relation tears down correctly. |
1569 | - self.juju.do(['destroy-service', 'rsyslog']) |
1570 | - timeout = time.time() + 120 |
1571 | - while time.time() < timeout: |
1572 | - status = self.juju.refresh_status() |
1573 | - if 'rsyslog' not in status['services']: |
1574 | - break |
1575 | - self.assert_( |
1576 | - 'rsyslog' not in status['services'], 'rsyslog failed to die') |
1577 | - self.wait_until_ready(pg_units) |
1578 | - |
1579 | def test_upgrade_charm(self): |
1580 | - self.juju.deploy( |
1581 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1582 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1583 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1584 | - pg_units = ['postgresql/0', 'postgresql/1'] |
1585 | - self.wait_until_ready(pg_units) |
1586 | + self.deployment.add('psql', PSQL_CHARM) |
1587 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1588 | + self.deployment.configure('postgresql', self.pg_config) |
1589 | + self.deployment.relate('postgresql:db', 'psql:db') |
1590 | + self.deployment.deploy(timeout=self.timeout) |
1591 | |
1592 | # Create something |
1593 | self.sql("CREATE TABLE Foo AS SELECT TRUE", 'master') |
1594 | |
1595 | # Kick off the upgrade-charm hook |
1596 | - self.juju.do(['upgrade-charm', 'postgresql']) |
1597 | - self.wait_until_ready(pg_units) |
1598 | + subprocess.check_output(['juju', 'upgrade-charm', 'postgresql']) |
1599 | + self.deployment.wait() |
1600 | |
1601 | # Ensure that our data has persisted. |
1602 | master_data = self.sql('SELECT * FROM Foo', 'master')[0][0] |
1603 | @@ -900,12 +698,12 @@ |
1604 | |
1605 | def test_manual_replication(self): |
1606 | self.pg_config['manual_replication'] = True |
1607 | - self.juju.deploy( |
1608 | - TEST_CHARM, 'postgresql', num_units=2, config=self.pg_config) |
1609 | - self.juju.deploy(PSQL_CHARM, 'psql') |
1610 | - self.juju.do(['add-relation', 'postgresql:db', 'psql:db']) |
1611 | - pg_units = ['postgresql/0', 'postgresql/1'] |
1612 | - self.wait_until_ready(pg_units) |
1613 | + |
1614 | + self.deployment.add('psql', PSQL_CHARM) |
1615 | + self.deployment.add('postgresql', TEST_CHARM, 2) |
1616 | + self.deployment.configure('postgresql', self.pg_config) |
1617 | + self.deployment.relate('postgresql:db', 'psql:db') |
1618 | + self.deployment.deploy(timeout=self.timeout) |
1619 | |
1620 | # Because we have enabled manual_replication mode, and not |
1621 | # actually manually configured replication, we will have |
1622 | @@ -914,12 +712,10 @@ |
1623 | self.assertTrue(self.is_master('postgresql/1')) |
1624 | |
1625 | # Both units are advertised as master on the relation. |
1626 | - rel_info = self.juju.relation_info('psql/0') |
1627 | - for relname in rel_info: |
1628 | - for relid in rel_info[relname]: |
1629 | - for unit in pg_units: |
1630 | - self.assertEqual(rel_info[relname][relid][unit]['state'], |
1631 | - 'master') |
1632 | + for unit in ['postgresql/0', 'postgresql/1']: |
1633 | + sentry = self.deployment.sentry[unit] |
1634 | + relinfo = sentry.relation('db', 'psql:db') |
1635 | + self.assertEqual(relinfo.get('state'), 'master') |
1636 | |
1637 | |
1638 | class PG91Tests( |
1639 | @@ -948,8 +744,7 @@ |
1640 | class PG94Tests( |
1641 | PostgreSQLCharmBaseTestCase, |
1642 | testtools.TestCase, fixtures.TestWithFixtures): |
1643 | - # 9.4 is still in beta, with packages only available in the PGDG |
1644 | - # archive. |
1645 | + # 9.4 is released, but packages are only available in the PGDG archive. |
1646 | VERSION = '9.4' |
1647 | PGDG = True |
1648 | |
1649 | |
1650 | === removed file 'testing/README' |
1651 | --- testing/README 2013-10-10 17:41:02 +0000 |
1652 | +++ testing/README 1970-01-01 00:00:00 +0000 |
1653 | @@ -1,36 +0,0 @@ |
1654 | -This package provides a Juju test fixture to help you write tests for |
1655 | -your charms as a Python test suite. It requires you to already have a |
1656 | -bootstrapped environment. |
1657 | - |
1658 | -Below is a simple example using the fixture. You can find a more extensive |
1659 | -example in the PostgreSQL charm. |
1660 | - |
1661 | -#!/usr/bin/python |
1662 | -import os |
1663 | -import fixtures |
1664 | -import testtools |
1665 | -import unittest |
1666 | -from charmhelpers.testing.jujufixture import JujuFixture |
1667 | - |
1668 | - |
1669 | -class CharmTestCase(testtools.TestCase, fixtures.TestWithFixtures): |
1670 | - |
1671 | - def setUp(self): |
1672 | - super(CharmTestCase, self).setUp() |
1673 | - |
1674 | - self.juju = self.useFixture(JujuFixture(reuse_machines=True)) |
1675 | - |
1676 | - # If the charms fail, we don't want tests to hang indefinitely. |
1677 | - timeout = int(os.environ.get('TEST_TIMEOUT', 900)) |
1678 | - self.useFixture(fixtures.Timeout(timeout, gentle=True)) |
1679 | - |
1680 | - def test_basic(self): |
1681 | - self.juju.deploy('local:mycharm', 'test-mycharm', num_units=1) |
1682 | - self.juju.add_unit('local:mycharm', 'test-mycharm', num_units=2) |
1683 | - self.juju.deploy('local:othercharm') |
1684 | - self.juju.do(['add-relation', 'test-mycharm:rel', 'othercharm:rel']) |
1685 | - self.juju.wait_until_ready() |
1686 | - [ ... test the deployment ... ] |
1687 | - |
1688 | -if __name__ == '__main__': |
1689 | - unittest.main() |
1690 | |
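The removed README example above maps onto the Amulet-based flow now used throughout test.py. Below is a minimal sketch of that call sequence; `RecordingDeployment` is a stub standing in for `amulet.Deployment` so the flow can be shown without a bootstrapped Juju environment, the method names mirror those in the diff, and the charm URIs and config values are illustrative only:

```python
# Stub that records the Amulet-style deployment calls instead of
# invoking juju-deployer; only the methods used in test.py are mocked.
class RecordingDeployment:
    def __init__(self):
        self.calls = []

    def add(self, name, charm, units=1):
        self.calls.append(('add', name, charm, units))

    def configure(self, name, config):
        self.calls.append(('configure', name, config))

    def relate(self, *endpoints):
        self.calls.append(('relate', endpoints))

    def deploy(self, timeout=None):
        self.calls.append(('deploy', timeout))

# The same shape as the new tests: declare services and relations,
# then deploy once and wait for a stable environment.
deployment = RecordingDeployment()
deployment.add('psql', 'cs:postgresql-psql')
deployment.add('postgresql', 'local:postgresql', 2)
deployment.configure('postgresql', {'version': '9.4'})
deployment.relate('postgresql:db', 'psql:db')
deployment.deploy(timeout=900)
```

Compared with the removed JujuFixture style, the declaration phase is separated from the single `deploy()` call, which is what lets AmuletFixture hand the whole topology to juju-deployer in one shot.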
1691 | === added file 'testing/amuletfixture.py' |
1692 | --- testing/amuletfixture.py 1970-01-01 00:00:00 +0000 |
1693 | +++ testing/amuletfixture.py 2015-03-12 09:42:09 +0000 |
1694 | @@ -0,0 +1,200 @@ |
1695 | +# Copyright 2015 Canonical Ltd. |
1696 | +# |
1697 | +# This file is part of the PostgreSQL Charm for Juju. |
1698 | +# |
1699 | +# This program is free software: you can redistribute it and/or modify |
1700 | +# it under the terms of the GNU General Public License version 3, as |
1701 | +# published by the Free Software Foundation. |
1702 | +# |
1703 | +# This program is distributed in the hope that it will be useful, but |
1704 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1705 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1706 | +# PURPOSE. See the GNU General Public License for more details. |
1707 | +# |
1708 | +# You should have received a copy of the GNU General Public License |
1709 | +# along with this program. If not, see <http://www.gnu.org/licenses/>. |
1710 | + |
1711 | +from functools import wraps |
1712 | +import json |
1713 | +import os |
1714 | +import shutil |
1715 | +import subprocess |
1716 | +import tempfile |
1717 | +import time |
1718 | + |
1719 | +import amulet |
1720 | +import yaml |
1721 | + |
1722 | + |
1723 | +class AmuletFixture(amulet.Deployment): |
1724 | + def __init__(self, series='precise'): |
1725 | + # We use a wrapper around juju-deployer so we can fix how it is |
1726 | + # invoked. In particular, turn off all the noise so we can |
1727 | + # actually read our test output. |
1728 | + juju_deployer = os.path.abspath(os.path.join( |
1729 | + os.path.dirname(__file__), os.pardir, 'lib', |
1730 | + 'juju-deployer-wrapper.py')) |
1731 | + super(AmuletFixture, self).__init__(series=series, |
1732 | + juju_deployer=juju_deployer) |
1733 | + |
1734 | + def setUp(self): |
1735 | + self._temp_dirs = [] |
1736 | + |
1737 | + self.reset_environment(force=True) |
1738 | + |
1739 | + # Repackage our charm to a temporary directory, allowing us |
1740 | + # to strip our virtualenv symlinks that would otherwise cause |
1741 | + # juju to abort. We also strip the .bzr directory, working |
1742 | + # around Bug #1394078. |
1743 | + self.repackage_charm() |
1744 | + |
1745 | + # Fix amulet.Deployment so it doesn't depend on environment |
1746 | + # variables or the current working directory, but rather the |
1747 | + # environment we have introspected. |
1748 | + with open(os.path.join(self.charm_dir, 'metadata.yaml'), 'r') as s: |
1749 | + self.charm_name = yaml.safe_load(s)['name'] |
1750 | + self.charm_cache.test_charm = None |
1751 | + self.charm_cache.fetch(self.charm_name, self.charm_dir, self.series) |
1752 | + |
1753 | + # Explicitly reset $JUJU_REPOSITORY to ensure amulet and |
1754 | + # juju-deployer does not mess with the real one, per Bug #1393792 |
1755 | + self.org_repo = os.environ.get('JUJU_REPOSITORY', None) |
1756 | + temp_repo = tempfile.mkdtemp(suffix='.repo') |
1757 | + self._temp_dirs.append(temp_repo) |
1758 | + os.environ['JUJU_REPOSITORY'] = temp_repo |
1759 | + os.makedirs(os.path.join(temp_repo, self.series), mode=0o700) |
1760 | + |
1761 | + def tearDown(self, reset_environment=True): |
1762 | + if reset_environment: |
1763 | + self.reset_environment() |
1764 | + if self.org_repo is None: |
1765 | + del os.environ['JUJU_REPOSITORY'] |
1766 | + else: |
1767 | + os.environ['JUJU_REPOSITORY'] = self.org_repo |
1768 | + |
1769 | + def deploy(self, timeout=None): |
1770 | + '''Deploying or updating the configured system. |
1771 | + |
1772 | + Invokes amulet.Deployer.setup with a nicer name and standard |
1773 | + timeout handling. |
1774 | + ''' |
1775 | + if timeout is None: |
1776 | + timeout = int(os.environ.get('AMULET_TIMEOUT', 900)) |
1777 | + |
1778 | + # If setUp fails, tearDown is never called, leaving the |
1779 | + # environment set up. This is useful for debugging. |
1780 | + self.setup(timeout=timeout) |
1781 | + self.wait(timeout=timeout) |
1782 | + |
1783 | + def __del__(self): |
1784 | + for temp_dir in self._temp_dirs: |
1785 | + if os.path.exists(temp_dir): |
1786 | + shutil.rmtree(temp_dir, ignore_errors=True) |
1787 | + |
1788 | + def get_status(self): |
1789 | + raw = subprocess.check_output(['juju', 'status', '--format=json'], |
1790 | + universal_newlines=True) |
1791 | + if raw: |
1792 | + return json.loads(raw) |
1793 | + return None |
1794 | + |
1795 | + def wait(self, timeout=None): |
1796 | + '''Wait until the environment has reached a stable state.''' |
1797 | + cmd = ['juju', 'wait'] |
1798 | + if timeout: |
1799 | + cmd = ['timeout', str(timeout)] + cmd |
1800 | + subprocess.check_output(cmd) |
1801 | + |
1802 | + def reset_environment(self, force=False): |
1803 | + fails = dict() |
1804 | + while True: |
1805 | + status = self.get_status() |
1806 | + service_items = status.get('services', {}).items() |
1807 | + if not service_items: |
1808 | + break |
1809 | + for service_name, service in service_items: |
1810 | + if service.get('life', '') not in ('dying', 'dead'): |
1811 | + subprocess.check_output(['juju', 'destroy-service', |
1812 | + service_name], |
1813 | + stderr=subprocess.STDOUT) |
1814 | + for unit_name, unit in service.get('units', {}).items(): |
1815 | + if unit.get('agent-state', None) == 'error': |
1816 | + if force: |
1817 | + # If any units have failed hooks, unstick them. |
1818 | + try: |
1819 | + subprocess.check_output( |
1820 | + ['juju', 'resolved', unit_name], |
1821 | + stderr=subprocess.STDOUT) |
1822 | + except subprocess.CalledProcessError: |
1823 | + # A previous 'resolved' call may cause a |
1824 | + # subsequent one to fail if it is still |
1825 | + # being processed. However, we need to keep |
1826 | + # retrying because after a successful |
1827 | + # resolution a subsequent hook may cause an |
1828 | + # error state. |
1829 | + pass |
1830 | + else: |
1831 | + fails[unit_name] = unit |
1832 | + time.sleep(1) |
1833 | + |
1834 | + harvest_machines = [] |
1835 | + for machine, state in status.get('machines', {}).items(): |
1836 | + if machine != "0" and state.get('life') not in ('dying', 'dead'): |
1837 | + harvest_machines.append(machine) |
1838 | + |
1839 | + if harvest_machines: |
1840 | + cmd = ['juju', 'remove-machine', '--force'] + harvest_machines |
1841 | + subprocess.check_output(cmd, stderr=subprocess.STDOUT) |
1842 | + |
1843 | + if fails: |
1844 | + raise Exception("Teardown failed", fails) |
1845 | + |
1846 | + def repackage_charm(self): |
1847 | + """Mirror the charm into a staging area. |
1848 | + |
1849 | + We do this to work around issues with Amulet, juju-deployer |
1850 | + and juju. In particular: |
1851 | + - symlinks in the Python virtual env pointing outside of the |
1852 | + charm directory. |
1853 | + - odd bzr interactions, such as tests being run on the committed |
1854 | + version of the charm, rather than the working tree. |
1855 | + |
1856 | + Returns the test charm directory. |
1857 | + """ |
1858 | + # Find the charm_dir we are testing |
1859 | + src_charm_dir = os.path.dirname(__file__) |
1860 | + while True: |
1861 | + if os.path.exists(os.path.join(src_charm_dir, |
1862 | + 'metadata.yaml')): |
1863 | + break |
1864 | + assert src_charm_dir != os.sep, 'metadata.yaml not found' |
1865 | + src_charm_dir = os.path.abspath(os.path.join(src_charm_dir, |
1866 | + os.pardir)) |
1867 | + |
1868 | + with open(os.path.join(src_charm_dir, 'metadata.yaml'), 'r') as s: |
1869 | + self.charm_name = yaml.safe_load(s)['name'] |
1870 | + |
1871 | + repack_root = tempfile.mkdtemp(suffix='.charm') |
1872 | + self._temp_dirs.append(repack_root) |
1873 | + |
1874 | + self.charm_dir = os.path.join(repack_root, self.charm_name) |
1875 | + |
1876 | + # Ignore .bzr to work around weird bzr interactions with |
1877 | + # juju-deployer, per Bug #1394078, and ignore .venv |
1878 | + # due to a) it containing symlinks juju will reject and b) to avoid |
1879 | + # infinite recursion. |
1880 | + shutil.copytree(src_charm_dir, self.charm_dir, symlinks=True, |
1881 | + ignore=shutil.ignore_patterns('.venv?', '.bzr')) |
1882 | + |
1883 | + |
1884 | +# Bug #1417097 means we need to monkey patch Amulet for now. |
1885 | +real_juju = amulet.helpers.juju |
1886 | + |
1887 | + |
1888 | +@wraps(real_juju) |
1889 | +def patched_juju(args, env=None): |
1890 | + args = [str(a) for a in args] |
1891 | + return real_juju(args, env) |
1892 | + |
1893 | +amulet.helpers.juju = patched_juju |
1894 | +amulet.deployer.juju = patched_juju |
1895 | |
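The monkey patch at the end of amuletfixture.py works around Bug #1417097: amulet's `juju()` helper fails when handed non-string arguments (such as integer port numbers), so the wrapper coerces every argument to `str` before delegating. A self-contained sketch of the pattern, with `fake_juju` standing in for `amulet.helpers.juju` so it runs without amulet installed:

```python
from functools import wraps

def fake_juju(args, env=None):
    # The real helper invokes the juju CLI; here we just echo the args
    # so the coercion is observable.
    return args

@wraps(fake_juju)  # preserve the wrapped helper's name and docstring
def patched_juju(args, env=None):
    args = [str(a) for a in args]  # coerce ints, etc. to str
    return fake_juju(args, env)

result = patched_juju(['status', 42])  # → ['status', '42']
```

Because `@wraps` copies the original function's metadata, the patched helper is a drop-in replacement for both `amulet.helpers.juju` and `amulet.deployer.juju`, as done above.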
1896 | === removed file 'testing/jujufixture.py' |
1897 | --- testing/jujufixture.py 2014-10-14 10:12:02 +0000 |
1898 | +++ testing/jujufixture.py 1970-01-01 00:00:00 +0000 |
1899 | @@ -1,297 +0,0 @@ |
1900 | -import json |
1901 | -import os.path |
1902 | -import subprocess |
1903 | -import time |
1904 | - |
1905 | -import fixtures |
1906 | -from testtools.content import text_content |
1907 | -import yaml |
1908 | - |
1909 | - |
1910 | -__all__ = ['JujuFixture', 'run'] |
1911 | - |
1912 | - |
1913 | -class JujuFixture(fixtures.Fixture): |
1914 | - """Interact with juju. |
1915 | - |
1916 | - Assumes juju environment is bootstrapped. |
1917 | - """ |
1918 | - |
1919 | - def __init__(self, series, reuse_machines=False, do_teardown=True): |
1920 | - super(JujuFixture, self).__init__() |
1921 | - |
1922 | - self.series = series |
1923 | - assert series, 'series not set' |
1924 | - |
1925 | - self.reuse_machines = reuse_machines |
1926 | - |
1927 | - # Optionally, don't teardown services and machines after running |
1928 | - # a test. If a subsequent test is run, they will be torn down at |
1929 | - # that point. This option is only useful when running a single |
1930 | - # test, or when the test harness is set to abort after the first |
1931 | - # failed test. |
1932 | - self.do_teardown = do_teardown |
1933 | - |
1934 | - self._deployed_services = set() |
1935 | - |
1936 | - def do(self, cmd): |
1937 | - cmd = ['juju'] + cmd |
1938 | - run(self, cmd) |
1939 | - |
1940 | - def get_result(self, cmd): |
1941 | - cmd = ['juju'] + cmd + ['--format=json'] |
1942 | - out = run(self, cmd) |
1943 | - if out: |
1944 | - return json.loads(out) |
1945 | - return None |
1946 | - |
1947 | - def deploy(self, charm, name=None, num_units=1, config=None): |
1948 | - cmd = ['deploy'] |
1949 | - |
1950 | - if not charm.startswith('cs:'): |
1951 | - charm = self.charm_uri(charm) |
1952 | - |
1953 | - if config: |
1954 | - config_path = os.path.join( |
1955 | - self.useFixture(fixtures.TempDir()).path, 'config.yaml') |
1956 | - cmd.append('--config={}'.format(config_path)) |
1957 | - config = yaml.safe_dump({name: config}, default_flow_style=False) |
1958 | - open(config_path, 'w').write(config) |
1959 | - self.addDetail('pgconfig', text_content(config)) |
1960 | - |
1961 | - cmd.append(charm) |
1962 | - |
1963 | - if name is None: |
1964 | - name = charm.split(':', 1)[-1] |
1965 | - |
1966 | - cmd.append(name) |
1967 | - self._deployed_services.add(name) |
1968 | - |
1969 | - if self.reuse_machines and self._free_machines: |
1970 | - cmd.extend(['--to', str(self._free_machines.pop())]) |
1971 | - self.do(cmd) |
1972 | - if num_units > 1: |
1973 | - self.add_unit(name, num_units - 1) |
1974 | - else: |
1975 | - cmd.extend(['-n', str(num_units)]) |
1976 | - self.do(cmd) |
1977 | - |
1978 | - def add_unit(self, name, num_units=1): |
1979 | - num_units_spawned = 0 |
1980 | - while self.reuse_machines and self._free_machines: |
1981 | - cmd = ['add-unit', '--to', str(self._free_machines.pop()), name] |
1982 | - self.do(cmd) |
1983 | - num_units_spawned += 1 |
1984 | - if num_units_spawned == num_units: |
1985 | - return |
1986 | - |
1987 | - cmd = ['add-unit', '-n', str(num_units - num_units_spawned), name] |
1988 | - self.do(cmd) |
1989 | - |
1990 | - # The most recent environment status, updated by refresh_status() |
1991 | - status = None |
1992 | - |
1993 | - def refresh_status(self): |
1994 | - self.status = self.get_result(['status']) |
1995 | - |
1996 | - self._free_machines = set( |
1997 | - int(k) for k, m in self.status['machines'].items() |
1998 | - if k != '0' |
1999 | - and m.get('life', None) not in ('dead', 'dying') |
2000 | - and m.get('series', None) == self.series |
2001 | - and m.get('agent-state', 'pending') in ('started', 'ready')) |
2002 | - for service in self.status.get('services', {}).values(): |
2003 | - for unit in service.get('units', {}).values(): |
2004 | - if 'machine' in unit: |
2005 | - self._free_machines.discard(int(unit['machine'])) |
2006 | - |
2007 | - return self.status |
2008 | - |
2009 | - def relation_info(self, unit): |
2010 | - '''Return all the relation information accessible from a unit. |
2011 | - |
2012 | - relation_info('foo/0')[relation_name][relation_id][unit][key] |
2013 | - ''' |
2014 | - # Get the possible relation names heuristically, per Bug #1298819 |
2015 | - relation_names = [] |
2016 | - for service_name, service_info in self.status['services'].items(): |
2017 | - if service_name == unit.split('/')[0]: |
2018 | - relation_names = service_info.get('relations', {}).keys() |
2019 | - break |
2020 | - |
2021 | - res = {} |
2022 | - juju_run_cmd = ['juju', 'run', '--unit', unit] |
2023 | - for rel_name in relation_names: |
2024 | - try: |
2025 | - relation_ids = run( |
2026 | - self, juju_run_cmd + [ |
2027 | - 'relation-ids {}'.format(rel_name)]).split() |
2028 | - except subprocess.CalledProcessError: |
2029 | - # Per Bug #1298819, we can't ask the unit which relation |
2030 | - # names are active so we need to use the relation names |
2031 | - # reported by 'juju status'. This may cause us to |
2032 | - # request relation information that the unit is not yet |
2033 | - # aware of. |
2034 | - continue |
2035 | - res[rel_name] = {} |
2036 | - for rel_id in relation_ids: |
2037 | - res[rel_name][rel_id] = {} |
2038 | - relation_units = [unit] + run( |
2039 | - self, juju_run_cmd + [ |
2040 | - 'relation-list -r {}'.format(rel_id)]).split() |
2041 | - for rel_unit in relation_units: |
2042 | - try: |
2043 | - json_rel_info = run( |
2044 | - self, juju_run_cmd + [ |
2045 | - 'relation-get --format=json -r {} - {}'.format( |
2046 | - rel_id, rel_unit)]) |
2047 | - res[rel_name][rel_id][rel_unit] = json.loads( |
2048 | - json_rel_info) |
2049 | - except subprocess.CalledProcessError: |
2050 | - res[rel_name][rel_id][rel_unit] = None |
2051 | - return res |
2052 | - |
2053 | - def wait_until_ready(self, extra=60): |
2054 | - ready = False |
2055 | - while not ready: |
2056 | - self.refresh_status() |
2057 | - ready = True |
2058 | - for service in self.status['services']: |
2059 | - if self.status['services'][service].get('life', '') == 'dying': |
2060 | - ready = False |
2061 | - units = self.status['services'][service].get('units', {}) |
2062 | - for unit in units.keys(): |
2063 | - agent_state = units[unit].get('agent-state', '') |
2064 | - if agent_state == 'error': |
2065 | - raise RuntimeError('{} error: {}'.format( |
2066 | - unit, units[unit].get('agent-state-info', ''))) |
2067 | - if agent_state != 'started': |
2068 | - ready = False |
2069 | - time.sleep(1) |
2070 | - # Unfortunately, there is no way to tell when a system is |
2071 | - # actually ready for us to test. Juju only tells us that a |
2072 | - # relation has started being setup, and that no errors have been |
2073 | - # encountered yet. It utterly fails to inform us when the |
2074 | - # cascade of hooks this triggers has finished and the |
2075 | - # environment is in a stable and actually testable state. |
2076 | - # So as a work around for Bug #1200267, we need to sleep long |
2077 | - # enough that our system is probably stable. This means we have |
2078 | - # extremely slow and flaky tests, but that is possibly better |
2079 | - # than no tests. |
2080 | - time.sleep(extra) |
2081 | - |
2082 | - def setUp(self): |
2083 | - super(JujuFixture, self).setUp() |
2084 | - self.reset() |
2085 | - if self.do_teardown: |
2086 | - self.addCleanup(self.reset) |
2087 | - |
2088 | - # Setup a temporary repository with the magic to find our charms. |
2089 | - self.repo_dir = self.useFixture(fixtures.TempDir()).path |
2090 | - self.useFixture(fixtures.EnvironmentVariable('JUJU_REPOSITORY', |
2091 | - self.repo_dir)) |
2092 | - self.repo_series_dir = os.path.join(self.repo_dir, self.series) |
2093 | - os.mkdir(self.repo_series_dir) |
2094 | - |
2095 | - def charm_uri(self, charm): |
2096 | - meta = yaml.safe_load(open(os.path.join(charm, 'metadata.yaml'), 'rb')) |
2097 | - name = meta.get('name') |
2098 | - assert name, 'charm {} has no name in metadata.yaml'.format(charm) |
2099 | - charm_link = os.path.join(self.repo_series_dir, name) |
2100 | - |
2101 | - # Recreate the charm link, which might have changed from the |
2102 | - # last deploy. |
2103 | - if os.path.exists(charm_link): |
2104 | - os.remove(charm_link) |
2105 | - os.symlink(charm, os.path.join(self.repo_series_dir, name)) |
2106 | - |
2107 | - return 'local:{}/{}'.format(self.series, name) |
2108 | - |
2109 | - def reset(self): |
2110 | - # Tear down any services left running that we know we spawned. |
2111 | - while True: |
2112 | - found_services = False |
2113 | - self.refresh_status() |
2114 | - |
2115 | - # Kill any services started by the deploy() method. |
2116 | - for service_name, service in self.status.get( |
2117 | - 'services', {}).items(): |
2118 | - if service_name in self._deployed_services: |
2119 | - found_services = True |
2120 | - if service.get('life', '') not in ('dying', 'dead'): |
2121 | - self.do(['destroy-service', service_name]) |
2122 | - # If any units have failed hooks, unstick them. |
2123 | - for unit_name, unit in service.get('units', {}).items(): |
2124 | - if unit.get('agent-state', None) == 'error': |
2125 | - try: |
2126 | - self.do(['resolved', unit_name]) |
2127 | - except subprocess.CalledProcessError: |
2128 | - # More race conditions in juju. A |
2129 | - # previous 'resolved' call may cause a |
2130 | - # subsequent one to fail if it is still |
2131 | - # being processed. However, we need to |
2132 | - # keep retrying because after a |
2133 | - # successful resolution a subsequent hook |
2134 | - # may cause an error state. |
2135 | - pass |
2136 | - if not found_services: |
2137 | - break |
2138 | - time.sleep(1) |
2139 | - |
2140 | - self._deployed_services = set() |
2141 | - |
2142 | - # We shouldn't reuse machines, as we have no guarantee they are |
2143 | - # still in a usable state, so tear them down too. Per |
2144 | - # Bug #1190492 (INVALID), in the future this will be much nicer |
2145 | - # when we can use containers for isolation and can happily reuse |
2146 | - # machines. |
2147 | - if self.reuse_machines: |
2148 | - # If we are reusing machines, wait until pending machines |
2149 | - # are ready and dying machines are dead. |
2150 | - while True: |
2151 | - for k, machine in self.status['machines'].items(): |
2152 | - if (k != 0 and machine.get('agent-state', 'pending') |
2153 | - not in ('ready', 'started')): |
2154 | - time.sleep(1) |
2155 | - self.refresh_status() |
2156 | - continue |
2157 | - break |
2158 | - else: |
2159 | - self.do(['terminate-machine'] + list(self._free_machines)) |
2160 | - |
2161 | - |
2162 | -_run_seq = 0 |
2163 | - |
2164 | - |
2165 | -def run(detail_collector, cmd, input=''): |
2166 | - global _run_seq |
2167 | - _run_seq = _run_seq + 1 |
2168 | - |
2169 | - # This helper primarily exists to capture the subprocess details, |
2170 | - # but this is failing. Details are being captured, but those added |
2171 | - # inside the actual test (not setup) are not being reported. |
2172 | - |
2173 | - out, err, returncode = None, None, None |
2174 | - try: |
2175 | - proc = subprocess.Popen( |
2176 | - cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, |
2177 | - stderr=subprocess.PIPE) |
2178 | - (out, err) = proc.communicate(input) |
2179 | - returncode = proc.returncode |
2180 | - if returncode != 0: |
2181 | - raise subprocess.CalledProcessError(returncode, cmd, err) |
2182 | - return out |
2183 | - except subprocess.CalledProcessError as x: |
2184 | - returncode = x.returncode |
2185 | - raise |
2186 | - finally: |
2187 | - if detail_collector is not None: |
2188 | - m = { |
2189 | - 'cmd': ' '.join(cmd), |
2190 | - 'rc': returncode, |
2191 | - 'stdout': out, |
2192 | - 'stderr': err, |
2193 | - } |
2194 | - detail_collector.addDetail( |
2195 | - 'run_{}'.format(_run_seq), |
2196 | - text_content(yaml.safe_dump(m, default_flow_style=False))) |
2197 | |
2198 | === removed file 'tests/00-setup.sh' |
2199 | --- tests/00-setup.sh 2014-10-14 11:25:24 +0000 |
2200 | +++ tests/00-setup.sh 1970-01-01 00:00:00 +0000 |
2201 | @@ -1,15 +0,0 @@ |
2202 | -#!/bin/sh |
2203 | - |
2204 | -sudo add-apt-repository -y ppa:juju/stable |
2205 | -sudo apt-get update |
2206 | -sudo apt-get install -y \ |
2207 | - amulet \ |
2208 | - python-flake8 \ |
2209 | - python-fixtures \ |
2210 | - python-jinja2 \ |
2211 | - python-mocker \ |
2212 | - python-psycopg2 \ |
2213 | - python-testtools \ |
2214 | - python-twisted-core \ |
2215 | - python-yaml \ |
2216 | - pgtune |
2217 | |
2218 | === removed file 'tests/01-lint.sh' |
2219 | --- tests/01-lint.sh 2014-10-14 11:25:24 +0000 |
2220 | +++ tests/01-lint.sh 1970-01-01 00:00:00 +0000 |
2221 | @@ -1,3 +0,0 @@ |
2222 | -#!/bin/sh |
2223 | - |
2224 | -make -C $(dirname $0)/.. lint |
2225 | |
2226 | === removed file 'tests/02-unit-tests.sh' |
2227 | --- tests/02-unit-tests.sh 2014-10-14 11:25:24 +0000 |
2228 | +++ tests/02-unit-tests.sh 1970-01-01 00:00:00 +0000 |
2229 | @@ -1,3 +0,0 @@ |
2230 | -#!/bin/bash |
2231 | - |
2232 | -make -C $(dirname $0)/.. unit_test |
2233 | |
2234 | === removed file 'tests/03-basic-amulet.py' |
2235 | --- tests/03-basic-amulet.py 2014-10-14 11:25:24 +0000 |
2236 | +++ tests/03-basic-amulet.py 1970-01-01 00:00:00 +0000 |
2237 | @@ -1,19 +0,0 @@ |
2238 | -#!/usr/bin/python3 |
2239 | - |
2240 | -import amulet |
2241 | -import os.path |
2242 | - |
2243 | -d = amulet.Deployment() |
2244 | - |
2245 | -d.add('postgresql', os.path.join(os.path.dirname(__file__), os.pardir)) |
2246 | -d.expose('postgresql') |
2247 | - |
2248 | -try: |
2249 | - d.setup(timeout=900) |
2250 | - d.sentry.wait() |
2251 | -except amulet.helpers.TimeoutError: |
2252 | - amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time") |
2253 | -except: |
2254 | - raise |
2255 | - |
2256 | -amulet.raise_status(amulet.PASS) |
2257 | |
2258 | === added file 'tests/tests.yaml' |
2259 | --- tests/tests.yaml 1970-01-01 00:00:00 +0000 |
2260 | +++ tests/tests.yaml 2015-03-12 09:42:09 +0000 |
2261 | @@ -0,0 +1,3 @@ |
2262 | +virtualenv: false |
2263 | +makefile: |
2264 | + - test |
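
The three-line tests.yaml added above is plain YAML: it tells the test runner to skip virtualenv creation and drive the `test` Makefile target. A minimal sketch of how such a file parses (assuming PyYAML, and using the exact content shown in the hunk) is:

```python
import yaml

# Parse a tests.yaml with the same content as the added hunk above.
config = yaml.safe_load(
    "virtualenv: false\n"
    "makefile:\n"
    "  - test\n"
)

# The runner would see virtualenv disabled and a single Makefile target.
print(config["virtualenv"])  # False
print(config["makefile"])    # ['test']
```

Because `makefile` is a YAML sequence, additional targets (e.g. lint or unit-test targets) could be listed as further `- target` entries.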
This item has failed automated testing! Results available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-11095-results