Merge lp:~stub/charms/trusty/cassandra/noppa into lp:charms/trusty/cassandra

Proposed by Stuart Bishop
Status: Rejected
Rejected by: Stuart Bishop
Proposed branch: lp:~stub/charms/trusty/cassandra/noppa
Merge into: lp:charms/trusty/cassandra
Diff against target: 689 lines (+281/-46)
11 files modified
Makefile (+23/-3)
README.md (+1/-1)
config.yaml (+10/-1)
hooks/actions.py (+51/-3)
hooks/definitions.py (+5/-1)
hooks/helpers.py (+48/-21)
hooks/hooks.py (+1/-2)
tests/test_actions.py (+1/-1)
tests/test_helpers.py (+97/-6)
tests/test_integration.py (+42/-7)
tests/tests.yaml (+2/-0)
To merge this branch: bzr merge lp:~stub/charms/trusty/cassandra/noppa
Reviewer                   Review Type        Date Requested   Status
Cory Johns (community)     -                  -                Needs Information
Review Queue (community)   automated testing  -                Needs Fixing
Review via email: mp+278097@code.launchpad.net

Description of the change

Move the hardcoded PPA into configuration, so it may be overridden.

Ideally, I'll get the python-cassandra package into Xenial and we can drop this altogether.
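
For example, once the PPA is just another entry in install_sources, a site that mirrors the driver packages elsewhere can override it at deploy time. A minimal sketch (the PPA name and file name below are illustrative, not real archives):

    # Sketch: replace the default ppa:stub/cassandra entry with an internal
    # source for python3-cassandra, keeping the Apache repository entry.
    cat > cassandra-config.yaml <<'EOF'
    cassandra:
      install_sources: |
        - ppa:my-team/cassandra-drivers
        - deb http://www.apache.org/dist/cassandra/debian 21x main
      install_keys: |
        - null
        - null
    EOF
    juju deploy --config cassandra-config.yaml cassandra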

433. By Stuart Bishop

link bug

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1483/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1467/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1574/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1557/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1614/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1597/

review: Needs Fixing (automated testing)
434. By Stuart Bishop

Include required ppa: in Cassandra 2.0 deployment test

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1639/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1621/

review: Needs Fixing (automated testing)
435. By Stuart Bishop

Fix Cassandra 2.0 tests betterer

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1651/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1633/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1647/

review: Approve (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1667/

review: Needs Fixing (automated testing)
436. By Stuart Bishop

Merge ensure-thrift

437. By Stuart Bishop

Merge ensure-thrift for Cassandra 2.2

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1717/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1695/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1718/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1696/

review: Needs Fixing (automated testing)
Revision history for this message
Cory Johns (johnsca) wrote :

Stuart,

This contains quite a bit more than the PPA change, and the auth tests time out for me. Can you give more information on what the other changes are for, and any recommendations for getting through the tests in less than 6 hours?

review: Needs Information
Revision history for this message
Stuart Bishop (stub) wrote :

Yeah, this branch grew a bit (the changes landing here were needed for a customer site, and some other dependencies may have crept in). I was trying to keep the changes in discrete branches but failed :-(

I've already rolled everything up into the misnamed ensure-thrift branch, so I will kill this one.

Locally the tests run in well under 6 hours, though I do see them take about 7 hours on some providers. Almost all of that is provisioning and deployment time:

DEBUG:runner:Sat Dec 19 14:14:04 UTC 2015
DEBUG:runner:Dec 19 14:30:32 test_basics_client_admin_relation (tests.test_integration.Test1UnitDeployment) ... ok

[...]

DEBUG:runner:Sat Dec 19 15:09:07 UTC 2015
DEBUG:runner:Dec 19 15:44:40 test_basics_client_admin_relation (tests.test_integration.Test3UnitDeployment) ... ok

The above times are from an AWS run at http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1864/console. The gap between the initial timestamp and the first test result shows it took about 16 minutes to deploy a single-unit service and about 35 minutes to deploy a three-unit service. That is roughly 5 times longer than on my laptop.
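
The deltas can be recomputed straight from the quoted log timestamps (GNU date; times taken from the run above):

    # 1-unit deploy: 14:14:04 -> 14:30:32 UTC
    echo $(( ($(date -ud '2015-12-19 14:30:32' +%s) - $(date -ud '2015-12-19 14:14:04' +%s)) / 60 ))  # ~16 minutes
    # 3-unit deploy: 15:09:07 -> 15:44:40 UTC
    echo $(( ($(date -ud '2015-12-19 15:44:40' +%s) - $(date -ud '2015-12-19 15:09:07' +%s)) / 60 ))  # ~35 minutes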

I'm not sure what can be done to improve this, beyond getting better clouds. Amulet doesn't let me provision all the necessary machines (~25?) upfront, since as far as I can tell juju-deployer won't use them.


Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2015-08-19 09:57:40 +0000
3+++ Makefile 2015-12-08 10:27:18 +0000
4@@ -21,9 +21,16 @@
5 @echo 'Usage: make [ lint | unittest | test | clean | sync ]'
6 env
7
8-# Only trusty supported, but wily expected soon.
9+# Only trusty supported, but xenial expected soon.
10 SERIES := $(shell juju get-environment default-series)
11
12+
13+# /!\ Ensure that errors early in pipes cause failures, rather than
14+# overridden by the last stage of the pipe. cf. 'test.py | ts'
15+SHELL := /bin/bash
16+export SHELLOPTS:=errexit:pipefail
17+
18+
19 # Calculate the CHARM_DIR (the directory this Makefile is in)
20 THIS_MAKEFILE_PATH:=$(word $(words $(MAKEFILE_LIST)),$(MAKEFILE_LIST))
21 CHARM_DIR:=$(shell cd $(dir $(THIS_MAKEFILE_PATH));pwd)
22@@ -70,18 +77,31 @@
23 AMULET_TIMEOUT=5400 \
24 $(NOSETESTS) tests.test_integration:Test1UnitDeployment 2>&1 | ts
25
26-20test: unittest Test21Deployment
27+20test: unittest Test20Deployment
28 Test20Deployment: deps
29 date
30 AMULET_TIMEOUT=5400 \
31 $(NOSETESTS) tests.test_integration:Test20Deployment 2>&1 | ts
32-
33+
34+22test: unittest Test22Deployment
35+Test22Deployment: deps
36+ date
37+ AMULET_TIMEOUT=5400 \
38+ $(NOSETESTS) tests.test_integration:Test22Deployment 2>&1 | ts
39+
40 3test: unittest Test3UnitDeployment
41 Test3UnitDeployment: deps
42 date
43 AMULET_TIMEOUT=7200 \
44 $(NOSETESTS) tests.test_integration:Test3UnitDeployment 2>&1 | ts
45
46+authtest: unittest TestAllowAllAuthenticatorDeployment
47+TestAllowAllAuthenticatorDeployment: deps
48+ date
49+ AMULET_TIMEOUT=7200 \
50+ $(NOSETESTS) \
51+ tests.test_integration:TestAllowAllAuthenticatorDeployment 2>&1 | ts
52+
53 # Place a copy of the Oracle Java SE 7 Server Runtime tarball in ./lib
54 # to run these tests.
55 jretest: unittest
56
57=== modified file 'README.md'
58--- README.md 2015-10-20 09:41:37 +0000
59+++ README.md 2015-12-08 10:27:18 +0000
60@@ -13,7 +13,7 @@
61
62 # Editions
63
64-This charm supports Apache Cassandra 2.0, Apache Cassandra 2.1, and
65+This charm supports Apache Cassandra 2.0, 2.1 & 2.2, and
66 Datastax Enterprise 4.8. The default is Apache Cassandra 2.1.
67
68 To use Apache Cassandra 2.0, specify the Apache Cassandra 2.0 archive source
69
70=== modified file 'config.yaml'
71--- config.yaml 2015-06-23 11:32:12 +0000
72+++ config.yaml 2015-12-08 10:27:18 +0000
73@@ -35,6 +35,7 @@
74 If you are using Datastax Enterprise, you will need to
75 override one defaults with your own username and password.
76 default: |
77+ - ppa:stub/cassandra # Required for python3-cassandra package
78 - deb http://www.apache.org/dist/cassandra/debian 21x main
79 # - deb http://debian.datastax.com/community stable main
80 # DSE requires you to register and add your username/password here.
81@@ -45,6 +46,7 @@
82 charm-helpers standard listing of package install source
83 signing keys, corresponding to install_sources.
84 default: |
85+ - null # ppa:stub/cassandra signing key added automatically.
86 - null # Apache package signing key added automatically.
87 # - null # DataStack package signing key added automatically.
88 http_proxy:
89@@ -242,7 +244,14 @@
90 default: 7001
91 description: >
92 Cluster secure communication port. TODO: Unused. configure SSL.
93-
94+ authenticator:
95+ type: string
96+ default: PasswordAuthenticator
97+ description: >
98+ Authentication backend. Only PasswordAuthenticator and
99+ AllowAllAuthenticator are supported. You should only
100+ use AllowAllAuthenticator for legacy applications that
101+ cannot provide authentication credentials.
102 authorizer:
103 type: string
104 default: AllowAllAuthorizer
105
106=== modified file 'hooks/actions.py'
107--- hooks/actions.py 2015-10-22 04:50:10 +0000
108+++ hooks/actions.py 2015-12-08 10:27:18 +0000
109@@ -59,6 +59,7 @@
110 'num_tokens',
111 'max_heap_size',
112 'heap_newsize',
113+ 'authenticator',
114 'authorizer',
115 'compaction_throughput_mb_per_sec',
116 'stream_throughput_outbound_megabits_per_sec',
117@@ -115,6 +116,21 @@
118 return wrapper
119
120
121+def authentication(func):
122+ '''Decorated function is skipped if authentication is disabled.'''
123+ @wraps(func)
124+ def wrapper(*args, **kw):
125+ auth = hookenv.config()['authenticator']
126+ if auth == 'PasswordAuthenticator':
127+ return func(*args, **kw)
128+ elif auth == 'AllowAllAuthenticator':
129+ hookenv.log('Skipped. Authentication disabled.', DEBUG)
130+ return None
131+ helpers.status_set('blocked', 'Unknown authenticator {}'.format(auth))
132+ raise SystemExit(0)
133+ return wrapper
134+
135+
136 @action
137 def set_proxy():
138 config = hookenv.config()
139@@ -231,6 +247,34 @@
140
141
142 @action
143+def reset_limits():
144+ '''Set /etc/security/limits.d correctly for Ubuntu, so the
145+ startup scripts don't emit a spurious warning.
146+
147+ Per Cassandra documentation, Ubuntu needs some extra
148+ twiddling in /etc/security/limits.d. I have no idea why
149+ the packages don't do this, since they are already
150+ setting limits for the cassandra user correctly. The real
151+ bug is that the limits of the user running the startup script
152+ are being checked, rather than the limits of the user that will
153+ actually run the process.
154+ '''
155+ contents = dedent('''\
156+ # Maintained by Juju
157+ root - memlock unlimited
158+ root - nofile 100000
159+ root - nproc 32768
160+ root - as unlimited
161+ ubuntu - memlock unlimited
162+ ubuntu - nofile 100000
163+ ubuntu - nproc 32768
164+ ubuntu - as unlimited
165+ ''')
166+ host.write_file('/etc/security/limits.d/cassandra-charm.conf',
167+ contents.encode('US-ASCII'))
168+
169+
170+@action
171 def install_cassandra_packages():
172 helpers.install_packages(helpers.get_cassandra_packages())
173 if helpers.get_jre() != 'oracle':
174@@ -404,6 +448,7 @@
175
176 @leader_only
177 @action
178+@authentication
179 @coordinator.require('repair', needs_reset_auth_keyspace_replication)
180 def reset_auth_keyspace_replication():
181 # Cassandra requires you to manually set the replication factor of
182@@ -454,8 +499,8 @@
183
184 # If our IP address has changed, we need to restart.
185 if config.changed('unit_private_ip'):
186- hookenv.log('waiting',
187- 'IP address changed. Waiting for restart permission.')
188+ helpers.status_set('waiting', 'IP address changed. '
189+ 'Waiting for restart permission.')
190 return True
191
192 # If the directory paths have changed, we need to migrate data
193@@ -546,6 +591,7 @@
194
195 @leader_only
196 @action
197+@authentication
198 def create_unit_superusers():
199 # The leader creates and updates accounts for nodes, using the
200 # encrypted password they provide in relations.PeerRelation. We
201@@ -674,7 +720,7 @@
202 @action
203 def emit_cluster_info():
204 helpers.emit_describe_cluster()
205- helpers.emit_auth_keyspace_status()
206+ helpers.emit_status()
207 helpers.emit_netstats()
208
209
210@@ -824,6 +870,7 @@
211
212 @leader_only
213 @action
214+@authentication
215 def reset_default_password():
216 if hookenv.leader_get('default_admin_password_changed'):
217 hookenv.log('Default admin password already changed')
218@@ -883,6 +930,7 @@
219
220
221 @action
222+@authentication
223 def request_unit_superuser():
224 relid = helpers.peer_relid()
225 if relid is None:
226
227=== modified file 'hooks/definitions.py'
228--- hooks/definitions.py 2015-09-04 11:56:16 +0000
229+++ hooks/definitions.py 2015-12-08 10:27:18 +0000
230@@ -44,6 +44,7 @@
231 actions.configure_sources,
232 actions.swapoff,
233 actions.reset_sysctl,
234+ actions.reset_limits,
235 actions.install_oracle_jre,
236 actions.install_cassandra_packages,
237 actions.emit_java_version,
238@@ -108,7 +109,10 @@
239
240 if helpers.is_cassandra_running():
241 hookenv.log('Cassandra is running')
242- if hookenv.local_unit() in helpers.get_unit_superusers():
243+ auth = hookenv.config()['authenticator']
244+ if auth == 'AllowAllAuthenticator':
245+ return True
246+ elif hookenv.local_unit() in helpers.get_unit_superusers():
247 hookenv.log('Credentials created')
248 return True
249 else:
250
251=== modified file 'hooks/helpers.py'
252--- hooks/helpers.py 2015-09-11 05:07:47 +0000
253+++ hooks/helpers.py 2015-12-08 10:27:18 +0000
254@@ -16,6 +16,7 @@
255 import configparser
256 from contextlib import contextmanager
257 from datetime import timedelta
258+from distutils.version import LooseVersion
259 import errno
260 from functools import wraps
261 import io
262@@ -303,10 +304,20 @@
263
264 def get_cassandra_version():
265 if get_cassandra_edition() == 'dse':
266- return '2.1' if get_package_version('dse-full') else None
267+ dse_ver = get_package_version('dse-full')
268+ if not dse_ver:
269+ return None
270+ elif LooseVersion(dse_ver) >= LooseVersion('4.7'):
271+ return '2.1'
272+ else:
273+ return '2.0'
274 return get_package_version('cassandra')
275
276
277+def has_cassandra_version(minimum_ver):
278+ return LooseVersion(get_cassandra_version()) >= LooseVersion(minimum_ver)
279+
280+
281 def get_cassandra_config_dir():
282 if get_cassandra_edition() == 'dse':
283 return '/etc/dse/cassandra'
284@@ -459,8 +470,12 @@
285 if username is None or password is None:
286 username, password = superuser_credentials()
287
288- auth_provider = cassandra.auth.PlainTextAuthProvider(username=username,
289- password=password)
290+ auth = hookenv.config()['authenticator']
291+ if auth == 'AllowAllAuthenticator':
292+ auth_provider = None
293+ else:
294+ auth_provider = cassandra.auth.PlainTextAuthProvider(username=username,
295+ password=password)
296
297 # Although we specify a reconnection_policy, it does not apply to
298 # the initial connection so we retry in a loop.
299@@ -519,17 +534,29 @@
300 @logged
301 def ensure_user(session, username, encrypted_password, superuser=False):
302 '''Create the DB user if it doesn't already exist & reset the password.'''
303+ auth = hookenv.config()['authenticator']
304+ if auth == 'AllowAllAuthenticator':
305+ return # No authentication means we cannot create users
306+
307 if superuser:
308 hookenv.log('Creating SUPERUSER {}'.format(username))
309 else:
310 hookenv.log('Creating user {}'.format(username))
311- query(session,
312- 'INSERT INTO system_auth.users (name, super) VALUES (%s, %s)',
313- ConsistencyLevel.ALL, (username, superuser))
314- query(session,
315- 'INSERT INTO system_auth.credentials (username, salted_hash) '
316- 'VALUES (%s, %s)',
317- ConsistencyLevel.ALL, (username, encrypted_password))
318+ if has_cassandra_version('2.2'):
319+ query(session,
320+ 'INSERT INTO system_auth.roles '
321+ '(role, can_login, is_superuser, salted_hash) '
322+ 'VALUES (%s, TRUE, %s, %s)',
323+ ConsistencyLevel.ALL,
324+ (username, superuser, encrypted_password))
325+ else:
326+ query(session,
327+ 'INSERT INTO system_auth.users (name, super) VALUES (%s, %s)',
328+ ConsistencyLevel.ALL, (username, superuser))
329+ query(session,
330+ 'INSERT INTO system_auth.credentials (username, salted_hash) '
331+ 'VALUES (%s, %s)',
332+ ConsistencyLevel.ALL, (username, encrypted_password))
333
334
335 @logged
336@@ -683,7 +710,7 @@
337 # Using the same name is preferred to match the actual Cassandra
338 # documentation.
339 simple_config_keys = ['cluster_name', 'num_tokens',
340- 'partitioner', 'authorizer',
341+ 'partitioner', 'authorizer', 'authenticator',
342 'compaction_throughput_mb_per_sec',
343 'stream_throughput_outbound_megabits_per_sec',
344 'tombstone_warn_threshold',
345@@ -704,16 +731,16 @@
346 dirs = get_all_database_directories()
347 cassandra_yaml.update(dirs)
348
349- # The charm only supports password authentication. In the future we
350- # may also support AllowAllAuthenticator. I'm not sure if others
351- # such as Kerboros can be supported or are useful.
352- cassandra_yaml['authenticator'] = 'PasswordAuthenticator'
353-
354 # GossipingPropertyFileSnitch is the only snitch recommended for
355 # production. It we allow others, we need to consider how to deal
356 # with the system_auth keyspace replication settings.
357 cassandra_yaml['endpoint_snitch'] = 'GossipingPropertyFileSnitch'
358
359+ # Per Bug #1523546 and CASSANDRA-9319, Thrift is disabled by default in
360+ # Cassandra 2.2. Ensure it is enabled if rpc_port is non-zero.
361+ if int(config['rpc_port']) > 0:
362+ cassandra_yaml['start_rpc'] = True
363+
364 cassandra_yaml.update(overrides)
365
366 write_cassandra_yaml(cassandra_yaml)
367@@ -745,7 +772,7 @@
368 # is not running.
369 os.kill(pid, 0)
370
371- if subprocess.call(["nodetool", "status", "system_auth"],
372+ if subprocess.call(["nodetool", "status"],
373 stdout=subprocess.DEVNULL,
374 stderr=subprocess.DEVNULL) == 0:
375 hookenv.log(
376@@ -870,9 +897,9 @@
377
378
379 @logged
380-def emit_auth_keyspace_status():
381- '''Run 'nodetool status system_auth' for the logs.'''
382- nodetool('status', 'system_auth') # Implicit emit
383+def emit_status():
384+ '''Run 'nodetool status' for the logs.'''
385+ nodetool('status') # Implicit emit
386
387
388 @logged
389@@ -883,7 +910,7 @@
390
391 def emit_cluster_info():
392 emit_describe_cluster()
393- emit_auth_keyspace_status()
394+ emit_status()
395 emit_netstats()
396
397
398
399=== modified file 'hooks/hooks.py'
400--- hooks/hooks.py 2015-06-18 15:47:26 +0000
401+++ hooks/hooks.py 2015-12-08 10:27:18 +0000
402@@ -24,8 +24,7 @@
403 import cassandra # NOQA: flake8
404 except ImportError:
405 packages = ['python3-bcrypt', 'python3-cassandra']
406- fetch.add_source('ppa:stub/cassandra')
407- fetch.apt_update(fatal=True)
408+ fetch.configure_sources(update=True)
409 fetch.apt_install(packages, fatal=True)
410 import bcrypt # NOQA: flake8
411 import cassandra # NOQA: flake8
412
413=== modified file 'tests/test_actions.py'
414--- tests/test_actions.py 2015-09-10 07:22:22 +0000
415+++ tests/test_actions.py 2015-12-08 10:27:18 +0000
416@@ -833,7 +833,7 @@
417 self.assertIn(expected, contents)
418
419 @patch('helpers.emit_netstats')
420- @patch('helpers.emit_auth_keyspace_status')
421+ @patch('helpers.emit_status')
422 @patch('helpers.emit_describe_cluster')
423 def test_emit_cluster_info(self, emit_desc, emit_status, emit_netstats):
424 actions.emit_cluster_info('')
425
426=== modified file 'tests/test_helpers.py'
427--- tests/test_helpers.py 2015-09-04 11:12:27 +0000
428+++ tests/test_helpers.py 2015-12-08 10:27:18 +0000
429@@ -731,8 +731,10 @@
430 sentinel.consistency, sentinel.args)
431 self.assertEqual(session.execute.call_count, 4)
432
433+ @patch('helpers.get_cassandra_version')
434 @patch('helpers.query')
435- def test_ensure_user(self, query):
436+ def test_ensure_user(self, query, ver):
437+ ver.return_value = '2.1'
438 helpers.ensure_user(sentinel.session,
439 sentinel.username, sentinel.pwhash,
440 superuser=sentinel.supflag)
441@@ -746,6 +748,21 @@
442 ConsistencyLevel.ALL,
443 (sentinel.username, sentinel.pwhash))])
444
445+ @patch('helpers.get_cassandra_version')
446+ @patch('helpers.query')
447+ def test_ensure_user_22(self, query, ver):
448+ ver.return_value = '2.2'
449+ helpers.ensure_user(sentinel.session,
450+ sentinel.username, sentinel.pwhash,
451+ superuser=sentinel.supflag)
452+ query.assert_called_once_with(sentinel.session,
453+ 'INSERT INTO system_auth.roles (role, '
454+ 'can_login, is_superuser, salted_hash) '
455+ 'VALUES (%s, TRUE, %s, %s)',
456+ ConsistencyLevel.ALL,
457+ (sentinel.username, sentinel.supflag,
458+ sentinel.pwhash))
459+
460 @patch('helpers.ensure_user')
461 @patch('helpers.encrypt_password')
462 @patch('helpers.nodetool')
463@@ -890,7 +907,7 @@
464 is_running.return_value = True
465 backoff.return_value = repeat(True)
466 check_output.side_effect = iter(['ONE Error: stuff', 'TWO OK'])
467- self.assertEqual(helpers.nodetool('status', 'system_auth'), 'TWO OK')
468+ self.assertEqual(helpers.nodetool('status'), 'TWO OK')
469
470 # The output was emitted.
471 helpers.emit.assert_called_once_with('TWO OK')
472@@ -908,7 +925,7 @@
473 subprocess.CalledProcessError([], 1, 'fail 4'),
474 subprocess.CalledProcessError([], 1, 'fail 5'),
475 'OK'])
476- self.assertEqual(helpers.nodetool('status', 'system_auth'), 'OK')
477+ self.assertEqual(helpers.nodetool('status'), 'OK')
478
479 # Later fails and final output was emitted.
480 helpers.emit.assert_has_calls([call('fail 5'), call('OK')])
481@@ -992,6 +1009,79 @@
482 stream_throughput_outbound_megabits_per_sec: 200
483 tombstone_warn_threshold: 1000
484 tombstone_failure_threshold: 100000
485+ start_rpc: true
486+ ''')
487+ self.maxDiff = None
488+ self.assertEqual(yaml.safe_load(new_config),
489+ yaml.safe_load(expected_config))
490+
491+ # Confirm we can use an explicit cluster_name too.
492+ write_file.reset_mock()
493+ hookenv.config()['cluster_name'] = 'fubar'
494+ helpers.configure_cassandra_yaml()
495+ new_config = write_file.call_args[0][1]
496+ self.assertEqual(yaml.safe_load(new_config)['cluster_name'],
497+ 'fubar')
498+
499+ @patch('helpers.get_cassandra_version')
500+ @patch('helpers.get_cassandra_yaml_file')
501+ @patch('helpers.get_seed_ips')
502+ @patch('charmhelpers.core.host.write_file')
503+ def test_configure_cassandra_yaml_22(self, write_file, seed_ips, yaml_file,
504+ get_cassandra_version):
505+ get_cassandra_version.return_value = '2.0'
506+ hookenv.config().update(dict(num_tokens=128,
507+ cluster_name='test_cluster_name',
508+ partitioner='test_partitioner'))
509+
510+ seed_ips.return_value = ['10.20.0.1', '10.20.0.2', '10.20.0.3']
511+
512+ existing_config = '''
513+ seed_provider:
514+ - class_name: blah.SimpleSeedProvider
515+ parameters:
516+ - seeds: 127.0.0.1 # Comma separated list.
517+ start_rpc: false # Defaults to False starting 2.2
518+ '''
519+
520+ with tempfile.TemporaryDirectory() as tmpdir:
521+ yaml_config = os.path.join(tmpdir, 'c.yaml')
522+ yaml_file.return_value = yaml_config
523+ with open(yaml_config, 'w', encoding='UTF-8') as f:
524+ f.write(existing_config)
525+
526+ helpers.configure_cassandra_yaml()
527+
528+ self.assertEqual(write_file.call_count, 2)
529+ new_config = write_file.call_args[0][1]
530+
531+ expected_config = dedent('''\
532+ start_rpc: true
533+ cluster_name: test_cluster_name
534+ authenticator: PasswordAuthenticator
535+ num_tokens: 128
536+ partitioner: test_partitioner
537+ listen_address: 10.20.0.1
538+ rpc_address: 0.0.0.0
539+ rpc_port: 9160
540+ native_transport_port: 9042
541+ storage_port: 7000
542+ ssl_storage_port: 7001
543+ authorizer: AllowAllAuthorizer
544+ seed_provider:
545+ - class_name: blah.SimpleSeedProvider
546+ parameters:
547+ # No whitespace in seeds is important.
548+ - seeds: '10.20.0.1,10.20.0.2,10.20.0.3'
549+ endpoint_snitch: GossipingPropertyFileSnitch
550+ data_file_directories:
551+ - /var/lib/cassandra/data
552+ commitlog_directory: /var/lib/cassandra/commitlog
553+ saved_caches_directory: /var/lib/cassandra/saved_caches
554+ compaction_throughput_mb_per_sec: 16
555+ stream_throughput_outbound_megabits_per_sec: 200
556+ tombstone_warn_threshold: 1000
557+ tombstone_failure_threshold: 100000
558 ''')
559 self.maxDiff = None
560 self.assertEqual(yaml.safe_load(new_config),
561@@ -1044,6 +1134,7 @@
562 listen_address: 10.20.0.1
563 rpc_address: 0.0.0.0
564 broadcast_rpc_address: 10.30.0.1
565+ start_rpc: true
566 rpc_port: 9160
567 native_transport_port: 9042
568 storage_port: 7000
569@@ -1261,9 +1352,9 @@
570 nodetool.assert_called_once_with('describecluster')
571
572 @patch('helpers.nodetool')
573- def test_emit_auth_keyspace_status(self, nodetool):
574- helpers.emit_auth_keyspace_status()
575- nodetool.assert_called_once_with('status', 'system_auth')
576+ def test_emit_status(self, nodetool):
577+ helpers.emit_status()
578+ nodetool.assert_called_once_with('status')
579
580 @patch('helpers.nodetool')
581 def test_emit_netstats(self, nodetool):
582
583=== modified file 'tests/test_integration.py'
584--- tests/test_integration.py 2015-08-19 09:57:40 +0000
585+++ tests/test_integration.py 2015-12-08 10:27:18 +0000
586@@ -187,7 +187,7 @@
587
588 def get_client_relinfo(self, relname):
589 # We only need one unit, even if rf > 1
590- s = self.deployment.sentry['cassandra/0']
591+ s = self.deployment.sentry['cassandra'][0]
592 relinfo = s.relation(relname, 'client:{}'.format(relname))
593 return relinfo
594
595@@ -287,11 +287,11 @@
596 # and data migrated. Instead, keep checking until our condition
597 # is met, or a timeout reached.
598 timeout = time.time() + 300
599- for unit_num in range(0, self.rf):
600- unit = 'cassandra/{}'.format(unit_num)
601+ for s in self.deployment.sentry['cassandra']:
602+ unit = s.info['unit_name']
603+ unit_num = s.info['unit']
604 with self.subTest(unit=unit):
605 while True:
606- s = self.deployment.sentry[unit]
607 # Attempting to diagnose Amulet failures. I suspect
608 # SSH host keys again, per Bug #802117
609 try:
610@@ -314,7 +314,6 @@
611 found = set(contents['directories'])
612 self.assertIn(keyspace, found)
613 self.assertIn('system', found)
614- self.assertIn('system_auth', found)
615 break
616 except Exception:
617 if time.time() > timeout:
618@@ -354,7 +353,8 @@
619 self.assertIsInstance(fail, AuthenticationFailed)
620
621 def test_cqlsh(self):
622- subprocess.check_output(['juju', 'ssh', 'cassandra/0',
623+ unit = self.deployment.sentry['cassandra'][0].info['unit_name']
624+ subprocess.check_output(['juju', 'ssh', unit,
625 'sudo -H cqlsh -e exit'],
626 stderr=subprocess.STDOUT)
627
628@@ -544,6 +544,28 @@
629 super(TestDSEDeployment, cls).setUpClass()
630
631
632+class TestAllowAllAuthenticatorDeployment(Test3UnitDeployment):
633+ test_config = dict(authenticator='AllowAllAuthenticator')
634+
635+ def cluster(self, username=None, password=None, hosts=None, port=9042):
636+ '''A cluster using invalid credentials.'''
637+ return super(TestAllowAllAuthenticatorDeployment,
638+ self).cluster(username='wat', password='eva')
639+
640+ def client_session(self, relname):
641+ '''A session using invalid credentials.'''
642+ relinfo = self.get_client_relinfo(relname)
643+ self.assertIn('host', relinfo.keys())
644+ cluster = self.cluster('random', 'nonsense',
645+ [relinfo['host']],
646+ int(relinfo['native_transport_port']))
647+ session = cluster.connect()
648+ self.addCleanup(session.shutdown)
649+ return session
650+
651+ test_default_superuser_account_closed = None
652+
653+
654 class Test20Deployment(Test1UnitDeployment):
655 """Tests run on a single node Apache Cassandra 2.0 cluster.
656 """
657@@ -551,8 +573,21 @@
658 test_config = dict(
659 edition='community',
660 install_sources=yaml.safe_dump([
661+ 'ppa:stub/cassandra',
662 'deb http://www.apache.org/dist/cassandra/debian 20x main']),
663- install_keys=yaml.safe_dump([None]))
664+ install_keys=yaml.safe_dump([None, None]))
665+
666+
667+class Test22Deployment(Test1UnitDeployment):
668+ """Tests run on a single node Apache Cassandra 2.2 cluster.
669+ """
670+ rf = 1
671+ test_config = dict(
672+ edition='community',
673+ install_sources=yaml.safe_dump([
674+ 'ppa:stub/cassandra',
675+ 'deb http://www.apache.org/dist/cassandra/debian 22x main']),
676+ install_keys=yaml.safe_dump([None, None]))
677
678
679 # Bug #1417097 means we need to monkey patch Amulet for now.
680
681=== modified file 'tests/tests.yaml'
682--- tests/tests.yaml 2015-06-29 10:09:24 +0000
683+++ tests/tests.yaml 2015-12-08 10:27:18 +0000
684@@ -10,3 +10,5 @@
685 - Test1UnitDeployment
686 - Test3UnitDeployment
687 - Test20Deployment
688+ - Test22Deployment
689+ - TestAllowAllAuthenticatorDeployment
