Merge lp:~stub/charms/trusty/cassandra/ensure-thrift into lp:charms/trusty/cassandra

Proposed by Stuart Bishop on 2015-12-08
Status: Merged
Merge reported by: Cory Johns
Merged at revision: not available
Proposed branch: lp:~stub/charms/trusty/cassandra/ensure-thrift
Merge into: lp:charms/trusty/cassandra
Diff against target: 639 lines (+267/-48)
10 files modified
Makefile (+14/-2)
README.md (+6/-5)
config.yaml (+6/-2)
hooks/actions.py (+2/-1)
hooks/helpers.py (+61/-21)
hooks/hooks.py (+3/-4)
tests/test_actions.py (+17/-6)
tests/test_helpers.py (+124/-4)
tests/test_integration.py (+32/-3)
tests/tests.yaml (+2/-0)
To merge this branch: bzr merge lp:~stub/charms/trusty/cassandra/ensure-thrift
Reviewer Review Type Date Requested Status
Cory Johns 2016-01-20 Needs Fixing on 2016-02-25
Review Queue (community) automated testing Approve on 2016-02-01
Adam Israel 2015-12-19 Needs Fixing on 2016-01-18
Stuart Bishop Abstain on 2015-12-19
Review via email: mp+279869@code.launchpad.net

Description of the Change

Fix Bug #1523546 by supporting Cassandra 2.2.

Note that Cassandra 2.1 is still the release series recommended for production use.
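The heart of the change is small (see the preview diff below): Cassandra 2.2 ships with Thrift disabled by default per CASSANDRA-9319, so configure_cassandra_yaml() now re-enables it whenever a non-zero rpc_port is configured. A minimal standalone sketch of that logic (the ensure_thrift name is illustrative; the charm does this inline):

    def ensure_thrift(cassandra_yaml, rpc_port):
        """Re-enable the Thrift RPC server on Cassandra >= 2.2.

        start_rpc defaults to false from 2.2 onwards, breaking
        clients of the legacy protocol (Bug #1523546).
        """
        if int(rpc_port) > 0:
            cassandra_yaml['start_rpc'] = True
        return cassandra_yaml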

Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1719/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1697/

review: Needs Fixing (automated testing)
Stuart Bishop (stub) wrote :

Cassandra 2.2 is now the default, as it is the release series upstream now recommends.

Also added 3.0 support for the brave (it is not recommended for production yet).

Also rolled up all the outstanding feature and bug fixes still up for review, as the fixes are needed here and conflicts need to be resolved somewhere. It is up to reviewers whether they want to digest this in small chunks or just deal with this rollup branch.

Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1750/

review: Approve (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1727/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1755/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1756/

review: Approve (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1732/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1747/

review: Approve (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1816/

review: Approve (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1838/

review: Needs Fixing (automated testing)
Stuart Bishop (stub) :
review: Abstain
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1884/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/1864/

review: Approve (automated testing)
Adam Israel (aisrael) wrote :

I see we've had recent tests pass and fail. I attempted to run the tests myself, using both the local and amazon providers. In both cases, I ran into fairly consistent timeouts that caused the tests to fail.

In one test run, I saw the test fail while the units were stuck in either "Waiting for permission to bootstrap" or "Waiting for agent initialization to finish". In another, the test runner sat for hours while all units were deployed and idle. I wonder if the current timeout logic is prone to error or race conditions?

Have you seen similar timeouts in your own testing? This inconsistency is something I'd really like to see fixed, especially as we're preparing to roll out the cloud weather report.

Here's the log from bundletester:

http://pastebin.ubuntu.com/14569933/

Here's the output of juju status immediately after the failure:

http://pastebin.ubuntu.com/14569948/

I also have the full (~5M) output of debug-log, if that'd be useful.

review: Needs Fixing
Stuart Bishop (stub) wrote :

I'm mainly interested in a code review for this branch. Tests are passing locally and at http://review.juju.solutions/review/2412. Going over the diff, there is nothing in this branch that will affect the reliability of the test runs apart from there being more tests, and any issues will be shared with trunk.

Looking at your bundletester log, the test suite ran juju-deployer to deploy a single unit and gave up waiting an hour and a half later. The environment was not in an error state, or we wouldn't have gotten as far as timing out (the 124 return code is from the timeout(1) command that wraps juju-wait). My assumption is that one of the two machines needed for these tests never completed provisioning, which is an issue I see regularly with juju and the local provider (lxc sets up a unit fine, but it gets stuck in juju in the 'provisioning' state, either without an agent state or with an agent state but no IP address). If provisioning doesn't complete locally within a couple of minutes, it is never going to complete. The timeout is only set to 1.5 hours to handle other clouds, and I'd appreciate input on whether this timeout should be reduced (what is the worst case you see when provisioning 4 VMs?). The alternative explanation is that one of the hooks somehow ended up in an infinite loop, but I haven't seen that happen on my recent branches, as loops like 'wait for Cassandra to start responding' have timeouts. This would be obvious from the debug-log, as you would see a hook repeatedly spamming messages in a loop for most of the hour and a half before Jan 18 18:56:32.
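
To illustrate, the wait amounts to something like the following sketch (the Python wrapper is hypothetical; the real harness shells out to timeout(1) and juju-wait, and 5400 seconds is the AMULET_TIMEOUT used in the Makefile):

    import subprocess

    def wait_for_steady_state(timeout_secs=5400):
        # timeout(1) exits with 124 when the deadline passes before
        # juju-wait returns -- the return code seen in the log above.
        rc = subprocess.call(['timeout', str(timeout_secs), 'juju-wait'])
        if rc == 124:
            raise RuntimeError('gave up waiting for units to settle '
                               '(suspected provisioning hang)')
        if rc != 0:
            raise RuntimeError('environment entered an error state')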

The output of juju status is for the next set of tests, which deploys three units, and shows the units happily executing their hooks and setting things up. 'waiting for agent initialization to finish' is a Juju message, where the subordinates are waiting for hooks on their primaries to complete so they can be set up. 'waiting for permission to bootstrap' is a message from the charm and is normal: the unit is waiting for permission from the leader to continue Cassandra setup (we need to coordinate adding units to the Cassandra cluster, one node at a time with a two minute delay between each one).
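
As a sketch of that coordination (names invented for illustration; the charm implements this over its peer relation):

    import time

    BOOTSTRAP_DELAY = 120  # two minute delay between node additions

    def grant_bootstrap_permissions(leader, pending_units):
        # Only the leader hands out permission, one unit at a time,
        # so nodes join the Cassandra cluster serially.
        for unit in pending_units:
            leader.grant(unit)  # e.g. set a flag on the peer relation
            time.sleep(BOOTSTRAP_DELAY)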

About the only thing I can think of to do here is catch provisioning failures and loudly blame others if appropriate (or perhaps skipping the tests is better?). I normally Ctrl-C, rebootstrap and try again, but I think catching the failure makes sense for CI runs.

Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2202/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2206/

review: Approve (automated testing)
Review Queue (review-queue) wrote :

The results (PASS) are in and available here: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/2183/

review: Approve (automated testing)
Cory Johns (johnsca) wrote :

I merged this and was then notified that it broke someone's bundle. It turns out that the two new PPAs in the install_sources option are absolutely required, so anyone who was using that option previously ends up with a broken install if they don't add them: https://paste.ubuntu.com/15198649/

I think the mandatory PPAs should not be part of that config option, and instead should be added by the code.
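
Something along these lines, reusing the fetch helpers the charm already calls (a sketch of the suggestion, assuming for its sake that the PPAs really are mandatory; this is not the charm's current behaviour):

    from charmhelpers import fetch

    REQUIRED_SOURCES = ['ppa:openjdk-r/ppa', 'ppa:stub/cassandra']

    def add_mandatory_sources():
        # Add the required PPAs unconditionally, then apply whatever
        # the user listed in install_sources/install_keys.
        for source in REQUIRED_SOURCES:
            fetch.add_source(source)
        fetch.configure_sources(update=True)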

This has been reverted for the time being.

review: Needs Fixing
Cory Johns (johnsca) wrote :

Note, a work-around is to ensure that the PPAs are always included in the config value (e.g. http://pastebin.ubuntu.com/15199178/) but I think that is onerous for the user.
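
The shape of that work-around, mirroring what the charm's own integration tests now do (the exact archive line depends on the Cassandra series being deployed):

    import yaml

    # Keep the required PPAs in install_sources alongside the archive,
    # with a matching null entry per source in install_keys.
    config = dict(
        install_sources=yaml.safe_dump([
            'ppa:stub/cassandra',
            'ppa:openjdk-r/ppa',
            'deb http://www.apache.org/dist/cassandra/debian 22x main']),
        install_keys=yaml.safe_dump([None, None, None]))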

Stuart Bishop (stub) wrote :

This works as intended, per Bug #1517803. The PPA can't be hard coded, as that would mean the charm requires network access to ppa.launchpad.net to install.

Stuart Bishop (stub) wrote :

To clarify, the PPAs are not mandatory. In our internal and customer deployments the necessary packages are placed on internal archives.
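
For example, an offline deployment might configure something like the following (the mirror URL is hypothetical):

    import yaml

    # A single internal mirror carrying the Cassandra packages,
    # OpenJDK 8 and the Python driver; nothing from ppa.launchpad.net.
    config = dict(
        install_sources=yaml.safe_dump([
            'deb http://archive.internal/cassandra trusty main']),
        install_keys=yaml.safe_dump([None]))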

Antoni Segura Puimedon (celebdor) wrote :

I understand the need for the change; I just wish that charms had major and minor releases so that I could pin to major instead of minor in my bundles and still get non-breaking changes.

@johnsca: I'll add the options to the bundle tomorrow and test it. I would appreciate it if you could delay the re-merge until then.

Cory Johns (johnsca) wrote :

This has been merged again.

Preview Diff

=== modified file 'Makefile'
--- Makefile 2015-11-12 05:40:51 +0000
+++ Makefile 2016-01-15 08:41:58 +0000
@@ -77,12 +77,24 @@
 AMULET_TIMEOUT=5400 \
 $(NOSETESTS) tests.test_integration:Test1UnitDeployment 2>&1 | ts

-20test: unittest Test21Deployment
+20test: unittest Test20Deployment
 Test20Deployment: deps
 date
 AMULET_TIMEOUT=5400 \
 $(NOSETESTS) tests.test_integration:Test20Deployment 2>&1 | ts
-
+
+21test: unittest Test21Deployment
+Test21Deployment: deps
+ date
+ AMULET_TIMEOUT=5400 \
+ $(NOSETESTS) tests.test_integration:Test21Deployment 2>&1 | ts
+
+30test: unittest Test30Deployment
+Test30Deployment: deps
+ date
+ AMULET_TIMEOUT=5400 \
+ $(NOSETESTS) tests.test_integration:Test30Deployment 2>&1 | ts
+
 3test: unittest Test3UnitDeployment
 Test3UnitDeployment: deps
 date

=== modified file 'README.md'
--- README.md 2015-12-08 20:16:53 +0000
+++ README.md 2016-01-15 08:41:58 +0000
@@ -13,8 +13,8 @@

 # Editions

-This charm supports Apache Cassandra 2.0, Apache Cassandra 2.1, and
-Datastax Enterprise 4.8. The default is Apache Cassandra 2.1.
+This charm supports Apache Cassandra 2.0, 2.1, 2.2 & 3.0, and
+Datastax Enterprise 4.7 & 4.8. The default is Apache Cassandra 2.2.

 To use Apache Cassandra 2.0, specify the Apache Cassandra 2.0 archive source
 in the `install_sources` config setting when deploying.
@@ -60,14 +60,15 @@
 ## Planning

 - Do not attempt to store too much data per node. If you need more space,
- add more nodes. Most workloads work best with a capacity under 500GB
+ add more nodes. Most workloads work best with a capacity under 1TB
 per node.

 - You need to keep 50% of your disk space free for Cassandra maintenance
 operations. If you expect your nodes to hold 500GB of data each, you
- will need a 1TB partition.
+ will need a 1TB partition. Using non-default compaction such as
+ LeveledCompactionStrategy can lower this waste.

-- Much more information can be found in the [Cassandra 2.1 documentation](http://www.datastax.com/documentation/cassandra/2.1/cassandra/planning/architecturePlanningAbout_c.html)
+- Much more information can be found in the [Cassandra 2.2 documentation](http://docs.datastax.com/en/cassandra/2.2/cassandra/planning/planPlanningAbout.html)


 ## Network Access

=== modified file 'config.yaml'
--- config.yaml 2015-10-27 13:39:42 +0000
+++ config.yaml 2016-01-15 08:41:58 +0000
@@ -35,7 +35,9 @@
 If you are using Datastax Enterprise, you will need to
 override one defaults with your own username and password.
 default: |
- - deb http://www.apache.org/dist/cassandra/debian 21x main
+ - deb http://www.apache.org/dist/cassandra/debian 22x main
+ - ppa:openjdk-r/ppa # For OpenJDK 8
+ - ppa:stub/cassandra # For Python driver
 # - deb http://debian.datastax.com/community stable main
 # DSE requires you to register and add your username/password here.
 # - deb http://un:pw@debian.datastax.com/enterprise stable main
@@ -45,7 +47,9 @@
 charm-helpers standard listing of package install source
 signing keys, corresponding to install_sources.
 default: |
+ - null # ppa:stub/cassandra signing key added automatically.
 - null # Apache package signing key added automatically.
+ - null # PPA package signing key added automatically.
 # - null # DataStack package signing key added automatically.
 http_proxy:
 type: string
@@ -65,7 +69,7 @@
 default: ""
 description: >
 URL for the private jre tar file. DSE requires
- Oracle Java SE 7 Server JRE (eg. server-jre-7u76-linux-x64.tar.gz).
+ Oracle Java SE 8 Server JRE (eg. server-jre-8u60-linux-x64.tar.gz).
 edition:
 type: string
 default: community

=== modified file 'hooks/actions.py'
--- hooks/actions.py 2015-10-27 15:21:44 +0000
+++ hooks/actions.py 2016-01-15 08:41:58 +0000
@@ -280,7 +280,7 @@
 if helpers.get_jre() != 'oracle':
 subprocess.check_call(['update-java-alternatives',
 '--jre-headless',
- '--set', 'java-1.7.0-openjdk-amd64'])
+ '--set', 'java-1.8.0-openjdk-amd64'])


 @action
@@ -739,6 +739,7 @@
 # to control access.
 ufw.service('ssh', 'open')
 ufw.service('nrpe', 'open') # Also NRPE for nagios checks.
+ ufw.service('rsync', 'open') # Also rsync for data transfer and backups.

 # Clients need client access. These protocols are configured to
 # require authentication.

=== modified file 'hooks/helpers.py'
--- hooks/helpers.py 2015-11-20 14:21:56 +0000
+++ hooks/helpers.py 2016-01-15 08:41:58 +0000
@@ -16,6 +16,7 @@
 import configparser
 from contextlib import contextmanager
 from datetime import timedelta
+from distutils.version import LooseVersion
 import errno
 from functools import wraps
 import io
@@ -177,7 +178,7 @@

 def get_all_database_directories():
 config = hookenv.config()
- return dict(
+ dirs = dict(
 data_file_directories=[get_database_directory(d)
 for d in (config['data_file_directories'] or
 'data').split()],
@@ -185,6 +186,10 @@
 config['commitlog_directory'] or 'commitlog'),
 saved_caches_directory=get_database_directory(
 config['saved_caches_directory'] or 'saved_caches'))
+ if has_cassandra_version('3.0'):
+ # Not yet configurable. Make configurable with Juju native storage.
+ dirs['hints_directory'] = get_database_directory('hints')
+ return dirs


 # FOR CHARMHELPERS
@@ -303,10 +308,22 @@

 def get_cassandra_version():
 if get_cassandra_edition() == 'dse':
- return '2.1' if get_package_version('dse-full') else None
+ dse_ver = get_package_version('dse-full')
+ if not dse_ver:
+ return None
+ elif LooseVersion(dse_ver) >= LooseVersion('4.7'):
+ return '2.1'
+ else:
+ return '2.0'
 return get_package_version('cassandra')


+def has_cassandra_version(minimum_ver):
+ cassandra_version = get_cassandra_version()
+ assert cassandra_version is not None, 'Cassandra package not yet installed'
+ return LooseVersion(cassandra_version) >= LooseVersion(minimum_ver)
+
+
 def get_cassandra_config_dir():
 if get_cassandra_edition() == 'dse':
 return '/etc/dse/cassandra'
@@ -354,8 +371,9 @@
 # agreement.
 pass
 else:
- # NB. OpenJDK 8 not available in trusty.
- packages.add('openjdk-7-jre-headless')
+ # NB. OpenJDK 8 not available in trusty. This needs to come
+ # from a PPA or some other configured source.
+ packages.add('openjdk-8-jre-headless')

 return packages

@@ -434,10 +452,11 @@
 # harmless, it causes shutil.chown() to fail.
 assert not is_cassandra_running()
 db_dirs = get_all_database_directories()
- unpacked_db_dirs = (db_dirs['data_file_directories'] +
- [db_dirs['commitlog_directory']] +
- [db_dirs['saved_caches_directory']])
- for db_dir in unpacked_db_dirs:
+ ensure_database_directory(db_dirs['commitlog_directory'])
+ ensure_database_directory(db_dirs['saved_caches_directory'])
+ if 'hints_directory' in db_dirs:
+ ensure_database_directory(db_dirs['hints_directory'])
+ for db_dir in db_dirs['data_file_directories']:
 ensure_database_directory(db_dir)


@@ -531,13 +550,21 @@
 hookenv.log('Creating SUPERUSER {}'.format(username))
 else:
 hookenv.log('Creating user {}'.format(username))
- query(session,
- 'INSERT INTO system_auth.users (name, super) VALUES (%s, %s)',
- ConsistencyLevel.ALL, (username, superuser))
- query(session,
- 'INSERT INTO system_auth.credentials (username, salted_hash) '
- 'VALUES (%s, %s)',
- ConsistencyLevel.ALL, (username, encrypted_password))
+ if has_cassandra_version('2.2'):
+ query(session,
+ 'INSERT INTO system_auth.roles '
+ '(role, can_login, is_superuser, salted_hash) '
+ 'VALUES (%s, TRUE, %s, %s)',
+ ConsistencyLevel.ALL,
+ (username, superuser, encrypted_password))
+ else:
+ query(session,
+ 'INSERT INTO system_auth.users (name, super) VALUES (%s, %s)',
+ ConsistencyLevel.ALL, (username, superuser))
+ query(session,
+ 'INSERT INTO system_auth.credentials (username, salted_hash) '
+ 'VALUES (%s, %s)',
+ ConsistencyLevel.ALL, (username, encrypted_password))


 @logged
@@ -721,6 +748,11 @@
 # with the system_auth keyspace replication settings.
 cassandra_yaml['endpoint_snitch'] = 'GossipingPropertyFileSnitch'

+ # Per Bug #1523546 and CASSANDRA-9319, Thrift is disabled by default in
+ # Cassandra 2.2. Ensure it is enabled if rpc_port is non-zero.
+ if int(config['rpc_port']) > 0:
+ cassandra_yaml['start_rpc'] = True
+
 cassandra_yaml.update(overrides)

 write_cassandra_yaml(cassandra_yaml)
@@ -773,12 +805,20 @@


 def get_auth_keyspace_replication(session):
- statement = dedent('''\
- SELECT strategy_options FROM system.schema_keyspaces
- WHERE keyspace_name='system_auth'
- ''')
- r = query(session, statement, ConsistencyLevel.QUORUM)
- return json.loads(r[0][0])
+ if has_cassandra_version('3.0'):
+ statement = dedent('''\
+ SELECT replication FROM system_schema.keyspaces
+ WHERE keyspace_name='system_auth'
+ ''')
+ r = query(session, statement, ConsistencyLevel.QUORUM)
+ return dict(r[0][0])
+ else:
+ statement = dedent('''\
+ SELECT strategy_options FROM system.schema_keyspaces
+ WHERE keyspace_name='system_auth'
+ ''')
+ r = query(session, statement, ConsistencyLevel.QUORUM)
+ return json.loads(r[0][0])


 @logged

=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2015-06-18 15:47:26 +0000
+++ hooks/hooks.py 2016-01-15 08:41:58 +0000
@@ -24,8 +24,7 @@
 import cassandra # NOQA: flake8
 except ImportError:
 packages = ['python3-bcrypt', 'python3-cassandra']
- fetch.add_source('ppa:stub/cassandra')
- fetch.apt_update(fatal=True)
+ fetch.configure_sources(update=True)
 fetch.apt_install(packages, fatal=True)
 import bcrypt # NOQA: flake8
 import cassandra # NOQA: flake8
@@ -43,8 +42,8 @@

 # Only useful for debugging, or perhaps have this enabled with a config
 # option?
- ## from loglog import loglog
- ## loglog('/var/log/cassandra/system.log', prefix='C*: ')
+ # from loglog import loglog
+ # loglog('/var/log/cassandra/system.log', prefix='C*: ')

 hookenv.log('*** {} Hook Start'.format(hookenv.hook_name()))
 sm = definitions.get_service_manager()

=== modified file 'tests/test_actions.py'
--- tests/test_actions.py 2015-10-27 15:21:44 +0000
+++ tests/test_actions.py 2016-01-15 08:41:58 +0000
@@ -294,7 +294,7 @@
 install_packages.assert_called_once_with(sentinel.cassandra_packages)
 check_call.assert_called_once_with(['update-java-alternatives',
 '--jre-headless', '--set',
- 'java-1.7.0-openjdk-amd64'])
+ 'java-1.8.0-openjdk-amd64'])

 @patch('subprocess.check_call')
 @patch('helpers.get_jre')
@@ -857,6 +857,7 @@
 # SSH and the client protocol ports are always fully open.
 ufw.service.assert_has_calls([call('ssh', 'open'),
 call('nrpe', 'open'),
+ call('rsync', 'open'),
 call(9042, 'open'),
 call(9160, 'open')])

@@ -905,11 +906,13 @@
 call('1.1.0.2', 'any', 7002)],
 any_order=True)

+ @patch('helpers.get_cassandra_version')
 @patch('charmhelpers.core.host.write_file')
 @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
 @patch('helpers.local_plugins_dir')
 def test_nrpe_external_master_relation(self, local_plugins_dir, nrpe,
- write_file):
+ write_file, cassandra_version):
+ cassandra_version.return_value = '2.2'
 # The fake charm_dir() needs populating.
 plugin_src_dir = os.path.join(os.path.dirname(__file__),
 os.pardir, 'files')
@@ -947,11 +950,13 @@

 nrpe().write.assert_called_once_with()

+ @patch('helpers.get_cassandra_version')
 @patch('charmhelpers.core.host.write_file')
 @patch('os.path.exists')
 @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
- def test_nrpe_external_master_relation_no_local(self, nrpe,
- exists, write_file):
+ def test_nrpe_external_master_relation_no_local(self, nrpe, exists,
+ write_file, ver):
+ ver.return_value = '2.2'
 # If the local plugins directory doesn't exist, we don't attempt
 # to write files to it. Wait until the subordinate has set it
 # up.
@@ -959,9 +964,12 @@
 actions.nrpe_external_master_relation('')
 self.assertFalse(write_file.called)

+ @patch('helpers.get_cassandra_version')
 @patch('os.path.exists')
 @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
- def test_nrpe_external_master_relation_disable_heapchk(self, nrpe, exists):
+ def test_nrpe_external_master_relation_disable_heapchk(self, nrpe,
+ exists, ver):
+ ver.return_value = '2.2'
 exists.return_value = False

 # Disable our checks
@@ -980,9 +988,12 @@
 call(shortname='cassandra_disk_var_lib_cassandra_commitlog',
 description=ANY, check_cmd=ANY)], any_order=True)

+ @patch('helpers.get_cassandra_version')
 @patch('os.path.exists')
 @patch('charmhelpers.contrib.charmsupport.nrpe.NRPE')
- def test_nrpe_external_master_relation_disable_diskchk(self, nrpe, exists):
+ def test_nrpe_external_master_relation_disable_diskchk(self, nrpe,
+ exists, ver):
+ ver.return_value = '2.2'
 exists.return_value = False

 # Disable our checks

=== modified file 'tests/test_helpers.py'
--- tests/test_helpers.py 2015-11-20 14:21:56 +0000
+++ tests/test_helpers.py 2016-01-15 08:41:58 +0000
@@ -220,8 +220,10 @@
 # Absolute paths are absolute and passed through unmolested.
 self.assertEqual(helpers.get_database_directory('/bar'), '/bar')

+ @patch('helpers.get_cassandra_version')
 @patch('relations.StorageRelation')
- def test_get_all_database_directories(self, storage_relation):
+ def test_get_all_database_directories(self, storage_relation, ver):
+ ver.return_value = '2.2'
 storage_relation().mountpoint = '/s'
 self.assertDictEqual(
 helpers.get_all_database_directories(),
@@ -229,6 +231,18 @@
 commitlog_directory='/s/cassandra/commitlog',
 saved_caches_directory='/s/cassandra/saved_caches'))

+ @patch('helpers.get_cassandra_version')
+ @patch('relations.StorageRelation')
+ def test_get_all_database_directories_30(self, storage_relation, ver):
+ ver.return_value = '3.0'
+ storage_relation().mountpoint = '/s'
+ self.assertDictEqual(
+ helpers.get_all_database_directories(),
+ dict(data_file_directories=['/s/cassandra/data'],
+ commitlog_directory='/s/cassandra/commitlog',
+ saved_caches_directory='/s/cassandra/saved_caches',
+ hints_directory='/s/cassandra/hints'))
+
 @patch('helpers.recursive_chown')
 @patch('charmhelpers.core.host.mkdir')
 @patch('helpers.get_database_directory')
@@ -458,7 +472,7 @@
 # Default
 self.assertSetEqual(helpers.get_cassandra_packages(),
 set(['cassandra', 'ntp', 'run-one',
- 'netcat', 'openjdk-7-jre-headless']))
+ 'netcat', 'openjdk-8-jre-headless']))

 def test_get_cassandra_packages_oracle_jre(self):
 # Oracle JRE
@@ -731,8 +745,10 @@
 sentinel.consistency, sentinel.args)
 self.assertEqual(session.execute.call_count, 4)

+ @patch('helpers.get_cassandra_version')
 @patch('helpers.query')
- def test_ensure_user(self, query):
+ def test_ensure_user(self, query, ver):
+ ver.return_value = '2.1'
 helpers.ensure_user(sentinel.session,
 sentinel.username, sentinel.pwhash,
 superuser=sentinel.supflag)
@@ -746,6 +762,21 @@
 ConsistencyLevel.ALL,
 (sentinel.username, sentinel.pwhash))])

+ @patch('helpers.get_cassandra_version')
+ @patch('helpers.query')
+ def test_ensure_user_22(self, query, ver):
+ ver.return_value = '2.2'
+ helpers.ensure_user(sentinel.session,
+ sentinel.username, sentinel.pwhash,
+ superuser=sentinel.supflag)
+ query.assert_called_once_with(sentinel.session,
+ 'INSERT INTO system_auth.roles (role, '
+ 'can_login, is_superuser, salted_hash) '
+ 'VALUES (%s, TRUE, %s, %s)',
+ ConsistencyLevel.ALL,
+ (sentinel.username, sentinel.supflag,
+ sentinel.pwhash))
+
 @patch('helpers.ensure_user')
 @patch('helpers.encrypt_password')
 @patch('helpers.nodetool')
@@ -991,6 +1022,79 @@
 stream_throughput_outbound_megabits_per_sec: 200
 tombstone_warn_threshold: 1000
 tombstone_failure_threshold: 100000
+ start_rpc: true
+ ''')
+ self.maxDiff = None
+ self.assertEqual(yaml.safe_load(new_config),
+ yaml.safe_load(expected_config))
+
+ # Confirm we can use an explicit cluster_name too.
+ write_file.reset_mock()
+ hookenv.config()['cluster_name'] = 'fubar'
+ helpers.configure_cassandra_yaml()
+ new_config = write_file.call_args[0][1]
+ self.assertEqual(yaml.safe_load(new_config)['cluster_name'],
+ 'fubar')
+
+ @patch('helpers.get_cassandra_version')
+ @patch('helpers.get_cassandra_yaml_file')
+ @patch('helpers.get_seed_ips')
+ @patch('charmhelpers.core.host.write_file')
+ def test_configure_cassandra_yaml_22(self, write_file, seed_ips, yaml_file,
+ get_cassandra_version):
+ get_cassandra_version.return_value = '2.0'
+ hookenv.config().update(dict(num_tokens=128,
+ cluster_name='test_cluster_name',
+ partitioner='test_partitioner'))
+
+ seed_ips.return_value = ['10.20.0.1', '10.20.0.2', '10.20.0.3']
+
+ existing_config = '''
+ seed_provider:
+ - class_name: blah.SimpleSeedProvider
+ parameters:
+ - seeds: 127.0.0.1 # Comma separated list.
+ start_rpc: false # Defaults to False starting 2.2
+ '''
+
+ with tempfile.TemporaryDirectory() as tmpdir:
+ yaml_config = os.path.join(tmpdir, 'c.yaml')
+ yaml_file.return_value = yaml_config
+ with open(yaml_config, 'w', encoding='UTF-8') as f:
+ f.write(existing_config)
+
+ helpers.configure_cassandra_yaml()
+
+ self.assertEqual(write_file.call_count, 2)
+ new_config = write_file.call_args[0][1]
+
+ expected_config = dedent('''\
+ start_rpc: true
+ cluster_name: test_cluster_name
+ authenticator: PasswordAuthenticator
+ num_tokens: 128
+ partitioner: test_partitioner
+ listen_address: 10.20.0.1
+ rpc_address: 0.0.0.0
+ rpc_port: 9160
+ native_transport_port: 9042
+ storage_port: 7000
+ ssl_storage_port: 7001
+ authorizer: AllowAllAuthorizer
+ seed_provider:
+ - class_name: blah.SimpleSeedProvider
+ parameters:
+ # No whitespace in seeds is important.
+ - seeds: '10.20.0.1,10.20.0.2,10.20.0.3'
+ endpoint_snitch: GossipingPropertyFileSnitch
+ data_file_directories:
+ - /var/lib/cassandra/data
+ commitlog_directory: /var/lib/cassandra/commitlog
+ saved_caches_directory: /var/lib/cassandra/saved_caches
+ compaction_throughput_mb_per_sec: 16
+ stream_throughput_outbound_megabits_per_sec: 200
+ tombstone_warn_threshold: 1000
+ tombstone_failure_threshold: 100000
 ''')
 self.maxDiff = None
 self.assertEqual(yaml.safe_load(new_config),
@@ -1043,6 +1147,7 @@
 listen_address: 10.20.0.1
 rpc_address: 0.0.0.0
 broadcast_rpc_address: 10.30.0.1
+ start_rpc: true
 rpc_port: 9160
 native_transport_port: 9042
 storage_port: 7000
@@ -1203,8 +1308,10 @@
 # Weird errors are reraised.
 self.assertRaises(RuntimeError, helpers.is_cassandra_running)

+ @patch('helpers.get_cassandra_version')
 @patch('helpers.query')
- def test_get_auth_keyspace_replication(self, query):
+ def test_get_auth_keyspace_replication(self, query, ver):
+ ver.return_value = '2.2'
 query.return_value = [('{"json": true}',)]
 settings = helpers.get_auth_keyspace_replication(sentinel.session)
 self.assertDictEqual(settings, dict(json=True))
@@ -1214,6 +1321,19 @@
 WHERE keyspace_name='system_auth'
 '''), ConsistencyLevel.QUORUM)

+ @patch('helpers.get_cassandra_version')
+ @patch('helpers.query')
+ def test_get_auth_keyspace_replication_30(self, query, ver):
+ ver.return_value = '3.0'
+ query.return_value = [({"json": True},)] # Decoded under 3.0
+ settings = helpers.get_auth_keyspace_replication(sentinel.session)
+ self.assertDictEqual(settings, dict(json=True))
+ query.assert_called_once_with(
+ sentinel.session, dedent('''\
+ SELECT replication FROM system_schema.keyspaces
+ WHERE keyspace_name='system_auth'
+ '''), ConsistencyLevel.QUORUM)
+
 @patch('helpers.status_set')
 @patch('charmhelpers.core.hookenv.status_get')
 @patch('helpers.query')

=== modified file 'tests/test_integration.py'
--- tests/test_integration.py 2015-11-13 11:40:06 +0000
+++ tests/test_integration.py 2016-01-15 08:41:58 +0000
@@ -532,8 +532,9 @@
 rf = 1
 test_config = dict(
 edition='DSE', # Forces Oracle JRE
- install_sources=yaml.safe_dump([os.environ.get('DSE_SOURCE')]),
- install_keys=yaml.safe_dump([None]),
+ install_sources=yaml.safe_dump([os.environ.get('DSE_SOURCE'),
+ 'ppa:stub/cassandra']),
+ install_keys=yaml.safe_dump([None, None]),
 private_jre_url=get_jre_url())

 @classmethod
@@ -573,8 +574,36 @@
 test_config = dict(
 edition='community',
 install_sources=yaml.safe_dump([
+ 'ppa:stub/cassandra',
+ 'ppa:openjdk-r/ppa',
 'deb http://www.apache.org/dist/cassandra/debian 20x main']),
- install_keys=yaml.safe_dump([None]))
+ install_keys=yaml.safe_dump([None, None, None]))
+
+
+class Test21Deployment(Test1UnitDeployment):
+ """Tests run on a single node Apache Cassandra 2.1 cluster.
+ """
+ rf = 1
+ test_config = dict(
+ edition='community',
+ install_sources=yaml.safe_dump([
+ 'ppa:stub/cassandra',
+ 'ppa:openjdk-r/ppa',
+ 'deb http://www.apache.org/dist/cassandra/debian 21x main']),
+ install_keys=yaml.safe_dump([None, None, None]))
+
+
+class Test30Deployment(Test1UnitDeployment):
+ """Tests run on a single node Apache Cassandra 3.0 cluster.
+ """
+ rf = 1
+ test_config = dict(
+ edition='community',
+ install_sources=yaml.safe_dump([
+ 'ppa:stub/cassandra',
+ 'ppa:openjdk-r/ppa',
+ 'deb http://www.apache.org/dist/cassandra/debian 30x main']),
+ install_keys=yaml.safe_dump([None, None, None]))


 # Bug #1417097 means we need to monkey patch Amulet for now.

=== modified file 'tests/tests.yaml'
--- tests/tests.yaml 2015-10-27 15:21:44 +0000
+++ tests/tests.yaml 2016-01-15 08:41:58 +0000
@@ -10,4 +10,6 @@
 - Test1UnitDeployment
 - Test3UnitDeployment
 - Test20Deployment
+ - Test21Deployment
+ - Test30Deployment
 - TestAllowAllAuthenticatorDeployment
