Merge lp:~niedbalski/charms/trusty/mongodb/make-cleanup into lp:charms/trusty/mongodb

Proposed by Jorge Niedbalski
Status: Merged
Merged at revision: 60
Proposed branch: lp:~niedbalski/charms/trusty/mongodb/make-cleanup
Merge into: lp:charms/trusty/mongodb
Diff against target: 1066 lines (+334/-334)
12 files modified
Makefile (+26/-7)
config.yaml (+1/-1)
hooks/hooks.py (+88/-94)
test_requirements.txt (+5/-0)
tests/00-setup (+0/-11)
tests/00_setup.sh (+9/-0)
tests/01_deploy_test.py (+127/-0)
tests/01_test_write_log_rotate_config.py (+0/-41)
tests/02_relate_ceilometer_test.py (+43/-0)
tests/200_deploy.test (+0/-127)
tests/200_relate_ceilometer.test (+0/-53)
unit_tests/test_write_log_rotate_config.py (+35/-0)
To merge this branch: bzr merge lp:~niedbalski/charms/trusty/mongodb/make-cleanup
Reviewer: Tim Van Steenburgh (community), status: Approve
Review via email: mp+244156@code.launchpad.net

Description of the change

- Cleanup of the Makefile and test directories.
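
For reference, the reorganized Makefile (full diff below) leaves the charm with the following interface. A quick sketch of typical invocations, with target names taken from this proposal:

    make lint             # flake8 (run from the .venv) plus charm proof
    make test             # unit tests only: nosetests with coverage over unit_tests/
    make functional_test  # amulet deployment tests via juju test
    make clean            # remove .coverage, *.pyc files and the .venv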

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #104 trusty-mongodb for niedbalski mp244156
    LINT FAIL: lint-test failed

LINT Results (max last 5 lines):
  hooks/hooks.py:1344:39: E251 unexpected spaces around keyword / parameter equals
  tests/01_deploy_test.py:77:80: E501 line too long (81 > 79 characters)
  tests/02_relate_ceilometer_test.py:5:1: E302 expected 2 blank lines, found 1
  make: *** [lint] Error 1
ERROR:root:Make target returned non-zero.

Full lint test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_lint_check/104/
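
For context, E251 flags spaces around a keyword/parameter equals sign, E501 flags a line longer than 79 characters, and E302 expects two blank lines before a top-level definition. The failing run can be reproduced locally with the new lint target (a sketch based on the Makefile in this MP):

    make lint
    # roughly equivalent to:
    .venv/bin/flake8 --exclude hooks/charmhelpers hooks tests unit_tests
    charm proof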

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #105 trusty-mongodb for niedbalski mp244156
    UNIT OK: passed

UNIT Results (max last 5 lines):
  yaml.serializer 85 85 0% 2-110
  yaml.tokens 76 76 0% 2-103
  TOTAL 6969 6706 4%
  Ran 1 test in 0.102s
  OK

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/105/

62. By Jorge Niedbalski

tests/00-setup executable

63. By Jorge Niedbalski

Moved amulet tests to the functional_test target, leaving "make test" to run just the unit tests, as per @tvansteenburgh's recommendation

64. By Jorge Niedbalski

Added the test_requirements.txt file, and modified the test target to create the virtualenv prior to running
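
For reference, the .venv bootstrap this commit introduces, as it appears in the Makefile diff below:

    .venv:
        sudo apt-get install -y gcc python-dev python-virtualenv python-apt
        virtualenv .venv --system-site-packages
        .venv/bin/pip install -I -r test_requirements.txt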

65. By Jorge Niedbalski

Don't exclude hooks/tests from flake8's eye

66. By Jorge Niedbalski

Typo corrections

67. By Jorge Niedbalski

Don't exclude hooks/tests from flake8's eye

68. By Jorge Niedbalski

Don't exclude hooks/tests from flake8's eye

69. By Jorge Niedbalski

Removed unneeded code from clean target

70. By Jorge Niedbalski

[make lint] OK

71. By Jorge Niedbalski

- "make lint" target now uses the venv.
- lint cleanup

72. By Jorge Niedbalski

Added rm -f .coverage

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

+1 LGTM, thanks Jorge!

review: Approve
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #105 trusty-mongodb for niedbalski mp244156
    LINT FAIL: lint-test missing

LINT Results (max last 5 lines):
INFO:root:Workspace dir: /var/lib/jenkins/workspace/charm_lint_check
INFO:root:Reading file: Makefile
INFO:root:Searching for: ['@flake8']
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full lint test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_lint_check/105/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #106 trusty-mongodb for niedbalski mp244156
    UNIT FAIL: unit-test failed

UNIT Results (max last 5 lines):
INFO:root:Reading file: Makefile
INFO:root:Searching for: ['nosetest', 'unit.test']
INFO:root:command: make -f Makefile test: .ven
ERROR:root:Make target returned non-zero.
  make: *** empty string invalid as file name. Stop.

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/106/

Revision history for this message
Ryan Beisner (1chb1n) wrote :

This merge may have broken something. Bug: https://bugs.launchpad.net/charms/+source/mongodb/+bug/1400908

Confirmed by rolling back to rev59 and re-deploying with the stable and next openstack charm sets (both succeeded).

Since our test rigs are in flight at the moment, I'm looking for a reproducer. Holler with any questions. Thanks!

Revision history for this message
Ryan Beisner (1chb1n) wrote :

In-line comment added.

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2014-11-04 18:22:31 +0000
3+++ Makefile 2014-12-09 18:51:40 +0000
4@@ -15,15 +15,34 @@
5
6 PYTHON := /usr/bin/env python
7
8-unittest:
9- tests/01_test_write_log_rotate_config.py
10+clean:
11+ rm -f .coverage
12+ find . -name '*.pyc' -delete
13+ rm -rf .venv
14+ (which dh_clean && dh_clean) || true
15+
16+.venv:
17+ sudo apt-get install -y gcc python-dev python-virtualenv python-apt
18+ virtualenv .venv --system-site-packages
19+ .venv/bin/pip install -I -r test_requirements.txt
20+
21+lint: .venv
22+ .venv/bin/flake8 --exclude hooks/charmhelpers hooks tests unit_tests
23+ @charm proof
24+
25+test: .venv
26+ @echo Starting unit tests...
27+ .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) unit_tests/
28+
29+functional_test:
30+ @echo Starting amulet tests...
31+ @juju test -v -p AMULET_HTTP_PROXY --timeout 900
32
33 sync:
34 @mkdir -p bin
35- @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py > bin/charm_helpers_sync.py
36+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py > bin/charm_helpers_sync.py
37 @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-sync.yaml
38
39-clean:
40- @find . -name \*.pyc -delete
41- @find . -name '*.bak' -delete
42-
43+publish: lint unit_test
44+ bzr push lp:charms/mongodb
45+ bzr push lp:charms/trusty/mongodb
46
47=== modified file 'config.yaml'
48--- config.yaml 2014-07-30 17:48:09 +0000
49+++ config.yaml 2014-12-09 18:51:40 +0000
50@@ -211,7 +211,7 @@
51 option.
52 key:
53 type: string
54- default:
55+ default: None
56 description: >
57 Key ID to import to the apt keyring to support use with arbitary source
58 configuration from outside of Launchpad archives or PPA's.
59
60=== modified file 'hooks/hooks.py'
61--- hooks/hooks.py 2014-11-18 21:36:44 +0000
62+++ hooks/hooks.py 2014-12-09 18:51:40 +0000
63@@ -81,16 +81,14 @@
64 return(True)
65 except Exception, e:
66 juju_log("port_check: Unable to connect to %s:%s/%s." %
67- (host, port, protocol))
68+ (host, port, protocol))
69 juju_log("port_check: Exception: %s" % str(e))
70 return(False)
71
72
73-#------------------------------------------------------------------------------
74 # update_service_ports: Convenience function that evaluate the old and new
75 # service ports to decide which ports need to be
76 # opened and which to close
77-#------------------------------------------------------------------------------
78 def update_service_ports(old_service_ports=None, new_service_ports=None):
79 juju_log("update_service_ports")
80 if old_service_ports is None or new_service_ports is None:
81@@ -138,7 +136,7 @@
82 try:
83 if pid is not None:
84 cmd_line = subprocess.check_output('ps -p %d -o cmd h' %
85- int(pid), shell=True)
86+ int(pid), shell=True)
87 retVal = (pid, cmd_line)
88 else:
89 juju_log("process_check: pid not defined.")
90@@ -148,7 +146,7 @@
91 retVal = (None, None)
92 finally:
93 juju_log("process_check returs pid: %s and cmd_line: %s" %
94- retVal)
95+ retVal)
96 return(retVal)
97
98
99@@ -160,8 +158,6 @@
100 return((None, None))
101
102
103-
104-
105 ###############################################################################
106 # Charm support functions
107 ###############################################################################
108@@ -185,11 +181,13 @@
109 # logpath
110 # Create the directory if not there already
111 subprocess.call(['mkdir',
112- '-p',
113- '%s' % os.path.dirname(config_data['logpath'])])
114+ '-p',
115+ '%s' % os.path.dirname(config_data['logpath'])])
116 subprocess.call(['chown',
117- '-R',
118- 'mongodb:mongodb', os.path.dirname(config_data['logpath'])])
119+ '-R',
120+ 'mongodb:mongodb',
121+ os.path.dirname(config_data['logpath'])])
122+
123 config.append("logpath=%s" % config_data['logpath'])
124 config.append("")
125
126@@ -302,7 +300,7 @@
127
128 # arbiter
129 if config_data['arbiter'] != "disabled" and \
130- config_data['arbiter'] != "enabled":
131+ config_data['arbiter'] != "enabled":
132 config.append("arbiter = %s" % config_data['arbiter'])
133 config.append("")
134
135@@ -352,7 +350,7 @@
136
137 def join_replset(master_node=None, host=None):
138 juju_log("join_replset: master_node: %s, host: %s" %
139- (master_node, host))
140+ (master_node, host))
141 if master_node is None or host is None:
142 retVal = False
143 else:
144@@ -369,8 +367,9 @@
145 if re.search(' --replSet %s ' % replicaset_name,
146 mongodb_init_config, re.MULTILINE) is None:
147 mongodb_init_config = regex_sub([(' -- ',
148- ' -- --replSet %s ' % replicaset_name)],
149- mongodb_init_config)
150+ ' -- --replSet %s ' %
151+ replicaset_name)],
152+ mongodb_init_config)
153 retVal = update_file(default_mongodb_init_config, mongodb_init_config)
154 except Exception, e:
155 juju_log(str(e))
156@@ -384,11 +383,12 @@
157 pat_replace = []
158 if daemon_options is None or daemon_options == "none":
159 pat_replace.append(
160- (' --config /etc/mongodb.conf.*', ' --config /etc/mongodb.conf; fi'))
161+ (' --config /etc/mongodb.conf.*',
162+ ' --config /etc/mongodb.conf; fi'))
163 else:
164 pat_replace.append(
165- (' --config /etc/mongodb.conf.*',
166- ' --config /etc/mongodb.conf %s; fi' % daemon_options))
167+ (' --config /etc/mongodb.conf.*',
168+ ' --config /etc/mongodb.conf %s; fi' % daemon_options))
169 regex_sub(pat_replace, mongodb_init_config)
170 return(update_file(default_mongodb_init_config, mongodb_init_config))
171
172@@ -402,8 +402,7 @@
173 mongodb_init_config, re.MULTILINE) is not None:
174 mongodb_init_config = regex_sub([
175 (' --replSet %s ' % replicaset_name, ' ')
176- ],
177- mongodb_init_config)
178+ ], mongodb_init_config)
179 retVal = update_file(default_mongodb_init_config, mongodb_init_config)
180 except Exception, e:
181 juju_log(str(e))
182@@ -420,7 +419,7 @@
183 mongodb_init_config = open(default_mongodb_init_config).read()
184 if re.search(' --rest ', mongodb_init_config, re.MULTILINE) is None:
185 mongodb_init_config = regex_sub([(' -- ', ' -- --rest ')],
186- mongodb_init_config)
187+ mongodb_init_config)
188 retVal = update_file(default_mongodb_init_config, mongodb_init_config)
189 except Exception, e:
190 juju_log(str(e))
191@@ -438,10 +437,10 @@
192 try:
193 mongodb_init_config = open(default_mongodb_init_config).read()
194 if re.search(' --rest ',
195- mongodb_init_config,
196- re.MULTILINE) is not None:
197+ mongodb_init_config,
198+ re.MULTILINE) is not None:
199 mongodb_init_config = regex_sub([(' --rest ', ' ')],
200- mongodb_init_config)
201+ mongodb_init_config)
202 retVal = update_file(default_mongodb_init_config, mongodb_init_config)
203 except Exception, e:
204 juju_log(str(e))
205@@ -454,7 +453,7 @@
206
207 def enable_arbiter(master_node=None, host=None):
208 juju_log("enable_arbiter: master_node: %s, host: %s" %
209- (master_node, host))
210+ (master_node, host))
211 if master_node is None or host is None:
212 retVal = False
213 else:
214@@ -466,17 +465,19 @@
215 def configsvr_status(wait_for=default_wait_for, max_tries=default_max_tries):
216 config_data = config()
217 current_try = 0
218+
219 while (process_check_pidfile('/var/run/mongodb/configsvr.pid') !=
220- (None, None)) and not port_check(
221- unit_get('private-address'),
222- config_data['config_server_port']) and current_try < max_tries:
223+ (None, None)) and not port_check(
224+ unit_get('private-address'),
225+ config_data['config_server_port']) and current_try < max_tries:
226+
227 juju_log("configsvr_status: Waiting for Config Server to be ready ...")
228 time.sleep(wait_for)
229 current_try += 1
230 retVal = (
231 process_check_pidfile('/var/run/mongodb/configsvr.pid') != (None, None)
232 ) == port_check(unit_get('private-address'),
233- config_data['config_server_port']) is True
234+ config_data['config_server_port']) is True
235 if retVal:
236 return(process_check_pidfile('/var/run/mongodb/configsvr.pid'))
237 else:
238@@ -508,7 +509,7 @@
239
240
241 def enable_configsvr(config_data, wait_for=default_wait_for,
242-max_tries=default_max_tries):
243+ max_tries=default_max_tries):
244 if config_data is None:
245 juju_log("enable_configsvr: config_data not defined.")
246 return(False)
247@@ -557,16 +558,19 @@
248 config_data = config()
249 current_try = 0
250 while (process_check_pidfile('/var/run/mongodb/mongos.pid') !=
251- (None, None)) and not port_check(
252- unit_get('private-address'),
253- config_data['mongos_port']) and current_try < max_tries:
254+ (None, None)) and not port_check(
255+ unit_get('private-address'),
256+ config_data['mongos_port']) and current_try < max_tries:
257+
258 juju_log("mongos_status: Waiting for Mongo shell to be ready ...")
259 time.sleep(wait_for)
260 current_try += 1
261 retVal = \
262 (process_check_pidfile('/var/run/mongodb/mongos.pid') !=
263- (None, None)) == port_check(unit_get('private-address'),
264+ (None, None)) == port_check(
265+ unit_get('private-address'),
266 config_data['mongos_port']) is True
267+
268 if retVal:
269 return(process_check_pidfile('/var/run/mongodb/mongos.pid'))
270 else:
271@@ -597,7 +601,7 @@
272
273
274 def enable_mongos(config_data=None, config_servers=None,
275- wait_for=default_wait_for, max_tries=default_max_tries):
276+ wait_for=default_wait_for, max_tries=default_max_tries):
277 juju_log("enable_mongos")
278 if config_data is None or config_servers is None:
279 juju_log("enable_mongos: config_data and config_servers are mandatory")
280@@ -625,8 +629,6 @@
281 if len(config_servers) > 0:
282 if len(config_servers) >= 3:
283 cmd_line += ' --configdb %s' % ','.join(config_servers[0:3])
284-# else:
285-# cmd_line += ' --configdb %s' % config_servers[0]
286 juju_log("enable_mongos: cmd_line: %s" % cmd_line)
287 subprocess.call(cmd_line, shell=True)
288 retVal = mongos_ready(wait_for, max_tries)
289@@ -661,13 +663,13 @@
290 current_try < max_tries):
291 juju_log(
292 "restart_mongod: Waiting for MongoDB to be ready ({}/{})".format(
293- current_try, max_tries))
294+ current_try, max_tries))
295 time.sleep(wait_for)
296 current_try += 1
297
298 return(
299- (service('status', 'mongodb') == port_check(my_hostname, my_port))
300- is True)
301+ (service('status', 'mongodb') == port_check(my_hostname,
302+ my_port)) is True)
303
304
305 def backup_cronjob(disable=False):
306@@ -742,7 +744,7 @@
307 # Trigger volume initialization logic for permanent storage
308 volid = volume_get_volume_id()
309 if not volid:
310- ## Invalid configuration (whether ephemeral, or permanent)
311+ # Invalid configuration (whether ephemeral, or permanent)
312 stop_hook()
313 mounts = volume_get_all_mounted()
314 if mounts:
315@@ -753,8 +755,8 @@
316 "'volume-ephemeral-storage' and 'volume-map'")
317 sys.exit(1)
318 if volume_is_permanent(volid):
319- ## config_changed_volume_apply will stop the service if it finds
320- ## it necessary, ie: new volume setup
321+ # config_changed_volume_apply will stop the service if it finds
322+ # it necessary, ie: new volume setup
323 if config_changed_volume_apply():
324 start_hook()
325 else:
326@@ -769,8 +771,9 @@
327
328 # current ports
329 current_mongodb_port = re.search('^#*port\s+=\s+(\w+)',
330- mongodb_config,
331- re.MULTILINE).group(1)
332+ mongodb_config,
333+ re.MULTILINE).group(1)
334+
335 current_web_admin_ui_port = int(current_mongodb_port) + 1000
336 new_web_admin_ui_port = int(config_data['port']) + 1000
337
338@@ -816,13 +819,14 @@
339 # arbiter
340 if config_data['replicaset_master'] != 'auto':
341 if config_data['arbiter'] != "disabled" and\
342- config_data['replicaset_master'] != "auto":
343+ config_data['replicaset_master'] != "auto":
344 if config_data['arbiter'] == 'enable':
345 enable_arbiter(config_data['replicaset_master'],
346- "%s:%s" % (private_address, config_data['port']))
347+ "%s:%s" % (private_address,
348+ config_data['port']))
349 else:
350 enable_arbiter(config_data['replicaset_master'],
351- config_data['arbiter'])
352+ config_data['arbiter'])
353
354 # expose necessary ports
355 update_service_ports([current_mongodb_port], [config_data['port']])
356@@ -883,7 +887,7 @@
357 try:
358 retVal = service('stop', 'mongodb')
359 os.remove('/var/lib/mongodb/mongod.lock')
360- #FIXME Need to check if this is still needed
361+ # FIXME Need to check if this is still needed
362 except Exception, e:
363 juju_log(str(e))
364 retVal = False
365@@ -901,13 +905,13 @@
366 juju_log("my_hostname: %s" % my_hostname)
367 juju_log("my_port: %s" % my_port)
368 juju_log("my_replset: %s" % my_replset)
369- return(relation_set(relation_id(),
370- {
371- 'hostname': my_hostname,
372- 'port': my_port,
373- 'replset': my_replset,
374- 'type': 'database',
375- }))
376+
377+ return(relation_set(relation_id(), {
378+ 'hostname': my_hostname,
379+ 'port': my_port,
380+ 'replset': my_replset,
381+ 'type': 'database',
382+ }))
383
384
385 @hooks.hook('replicaset-relation-joined')
386@@ -1014,19 +1018,19 @@
387 juju_log("data_relation_departed")
388 return(config_changed())
389
390+
391 @hooks.hook('configsvr-relation-joined')
392 def configsvr_relation_joined():
393 juju_log("configsvr_relation_joined")
394 my_hostname = unit_get('private-address')
395 my_port = config('config_server_port')
396 my_install_order = os.environ['JUJU_UNIT_NAME'].split('/')[1]
397- relation_set(relation_id(),
398- {
399- 'hostname': my_hostname,
400- 'port': my_port,
401- 'install-order': my_install_order,
402- 'type': 'configsvr',
403- })
404+ relation_set(relation_id(), {
405+ 'hostname': my_hostname,
406+ 'port': my_port,
407+ 'install-order': my_install_order,
408+ 'type': 'configsvr',
409+ })
410
411
412 @hooks.hook('configsvr-relation-changed')
413@@ -1044,13 +1048,12 @@
414 my_hostname = unit_get('private-address')
415 my_port = config('mongos_port')
416 my_install_order = os.environ['JUJU_UNIT_NAME'].split('/')[1]
417- relation_set(relation_id(),
418- {
419- 'hostname': my_hostname,
420- 'port': my_port,
421- 'install-order': my_install_order,
422- 'type': 'mongos'
423- })
424+ relation_set(relation_id(), {
425+ 'hostname': my_hostname,
426+ 'port': my_port,
427+ 'install-order': my_install_order,
428+ 'type': 'mongos'
429+ })
430
431
432 @hooks.hook('mongos-cfg-relation-changed')
433@@ -1070,10 +1073,11 @@
434 config_servers = load_config_servers(default_mongos_list)
435 print "Adding config server: %s:%s" % (hostname, port)
436 if hostname is not None and \
437- port is not None and \
438- hostname != '' and \
439- port != '' and \
440- "%s:%s" % (hostname, port) not in config_servers:
441+ port is not None and \
442+ hostname != '' and \
443+ port != '' and \
444+ "%s:%s" % (hostname,
445+ port) not in config_servers:
446 config_servers.append("%s:%s" % (hostname, port))
447 disable_mongos(config_data['mongos_port'])
448 retVal = enable_mongos(config_data, config_servers)
449@@ -1088,12 +1092,12 @@
450 mongo_client(mongos_host, shard_command1)
451 replicaset = relation_get('replset')
452 shard_command2 = "sh.addShard(\"%s/%s:%s\")" % \
453- (replicaset, hostname, port)
454+ (replicaset, hostname, port)
455 mongo_client(mongos_host, shard_command2)
456
457 else:
458 print("mongos_relation_change: undefined rel_type: %s" %
459- rel_type)
460+ rel_type)
461 return
462
463 print("mongos_relation_changed returns: %s" % retVal)
464@@ -1128,12 +1132,11 @@
465 ###############################################################################
466 # Volume managment
467 ###############################################################################
468-#------------------------------
469+#
470 # Get volume-id from juju config "volume-map" dictionary as
471 # volume-map[JUJU_UNIT_NAME]
472 # @return volid
473 #
474-#------------------------------
475 def volume_get_volid_from_volume_map():
476 config_data = config()
477 volume_map = {}
478@@ -1158,48 +1161,40 @@
479 return False
480
481
482-#------------------------------
483 # Returns a mount point from passed vol-id, e.g. /srv/juju/vol-000012345
484 #
485 # @param volid volume id (as e.g. EBS volid)
486 # @return mntpoint_path eg /srv/juju/vol-000012345
487-#------------------------------
488 def volume_mount_point_from_volid(volid):
489 if volid and volume_is_permanent(volid):
490 return "/srv/juju/%s" % volid
491 return None
492
493
494-#------------------------------
495 # Returns a stub volume id based on the mountpoint from the storage
496 # subordinate relation, if present.
497 #
498 # @return volid eg vol-000012345
499-#------------------------------
500 def volume_get_id_for_storage_subordinate():
501 # storage charm is a subordinate so we should only ever have one
502 # relation_id for the data relation
503 ids = relation_ids('data')
504 if len(ids) > 0:
505 mountpoint = relation_get('mountpoint',
506- os.environ['JUJU_UNIT_NAME'],
507- ids[0])
508+ os.environ['JUJU_UNIT_NAME'],
509+ ids[0])
510
511 juju_log('mountpoint: %s' % (mountpoint,))
512 if mountpoint and os.path.exists(mountpoint):
513 return mountpoint.split('/')[-1]
514
515
516-
517 # Do we have a valid storage state?
518 # @returns volid
519 # None config state is invalid - we should not serve
520 def volume_get_volume_id():
521
522 config_data = config()
523-
524-
525-
526 volid = volume_get_id_for_storage_subordinate()
527 if volid:
528 return volid
529@@ -1244,7 +1239,7 @@
530 return output
531
532
533-#------------------------------------------------------------------------------
534+#
535 # Core logic for permanent storage changes:
536 # NOTE the only 2 "True" return points:
537 # 1) symlink already pointing to existing storage (no-op)
538@@ -1253,7 +1248,7 @@
539 # mounts it to e.g.: /srv/juju/vol-000012345
540 # - if fresh new storage dir: rsync existing data
541 # - manipulate /var/lib/mongodb/VERSION/CLUSTER symlink
542-#------------------------------------------------------------------------------
543+#
544 def config_changed_volume_apply():
545 config_data = config()
546 data_directory_path = config_data["dbpath"]
547@@ -1283,7 +1278,8 @@
548 if os.path.islink(data_directory_path):
549 juju_log(
550 "mongodb data dir '%s' already points "
551- "to %s, skipping storage changes." % (data_directory_path, new_mongo_dir))
552+ "to %s, skipping storage changes." % (data_directory_path,
553+ new_mongo_dir))
554 juju_log(
555 "existing-symlink: to fix/avoid UID changes from "
556 "previous units, doing: "
557@@ -1337,11 +1333,9 @@
558 return False
559
560
561-#------------------------------------------------------------------------------
562 # Write mongodb-server logrotate configuration
563-#------------------------------------------------------------------------------
564 def write_logrotate_config(config_data,
565- conf_file = '/etc/logrotate.d/mongodb-server'):
566+ conf_file='/etc/logrotate.d/mongodb-server'):
567
568 juju_log('Writing {}.'.format(conf_file))
569 contents = dedent("""
570
571=== added file 'test_requirements.txt'
572--- test_requirements.txt 1970-01-01 00:00:00 +0000
573+++ test_requirements.txt 2014-12-09 18:51:40 +0000
574@@ -0,0 +1,5 @@
575+coverage>=3.6
576+mock>=1.0.1
577+nose>=1.3.1
578+flake8
579+
580
581=== removed file 'tests/00-setup'
582--- tests/00-setup 2014-11-20 14:55:49 +0000
583+++ tests/00-setup 1970-01-01 00:00:00 +0000
584@@ -1,11 +0,0 @@
585-#!/bin/bash
586-
587-set -e
588-
589-sudo apt-get install python-setuptools -y
590-sudo add-apt-repository ppa:juju/stable -y
591-
592-sudo apt-get update
593-
594-
595-sudo apt-get install amulet python3 python3-requests python3-pymongo python-mock juju-core charm-tools -y
596
597=== added file 'tests/00_setup.sh'
598--- tests/00_setup.sh 1970-01-01 00:00:00 +0000
599+++ tests/00_setup.sh 2014-12-09 18:51:40 +0000
600@@ -0,0 +1,9 @@
601+#!/bin/bash
602+
603+set -e
604+
605+sudo apt-get install python-setuptools -y
606+sudo add-apt-repository ppa:juju/stable -y
607+
608+sudo apt-get update
609+sudo apt-get install amulet python3 python3-requests python3-pymongo juju-core charm-tools python-mock python-pymongo -y
610
611=== added file 'tests/01_deploy_test.py'
612--- tests/01_deploy_test.py 1970-01-01 00:00:00 +0000
613+++ tests/01_deploy_test.py 2014-12-09 18:51:40 +0000
614@@ -0,0 +1,127 @@
615+#!/usr/bin/env python3
616+
617+import amulet
618+import requests
619+from pymongo import MongoClient
620+
621+#########################################################
622+# Test Quick Config
623+#########################################################
624+scale = 1
625+seconds = 1400
626+
627+#########################################################
628+# 3shard cluster configuration
629+#########################################################
630+d = amulet.Deployment(series='trusty')
631+
632+d.add('configsvr', charm='mongodb', units=scale)
633+d.add('mongos', charm='mongodb', units=scale)
634+d.add('shard1', charm='mongodb', units=scale)
635+d.add('shard2', charm='mongodb', units=scale)
636+
637+# Setup the config svr
638+d.configure('configsvr', {'replicaset': 'configsvr'})
639+
640+# define each shardset
641+d.configure('shard1', {'replicaset': 'shard1'})
642+d.configure('shard2', {'replicaset': 'shard2'})
643+
644+d.configure('mongos', {})
645+
646+# Connect the config servers to mongo shell
647+d.relate('configsvr:configsvr', 'mongos:mongos-cfg')
648+
649+# connect each shard to the mongo shell
650+d.relate('mongos:mongos', 'shard1:database')
651+d.relate('mongos:mongos', 'shard2:database')
652+d.expose('configsvr')
653+d.expose('mongos')
654+
655+# Perform the setup for the deployment.
656+try:
657+ d.setup(seconds)
658+ d.sentry.wait(seconds)
659+except amulet.helpers.TimeoutError:
660+ message = 'The environment did not setup in %d seconds.', seconds
661+ amulet.raise_status(amulet.SKIP, msg=message)
662+except:
663+ raise
664+
665+sentry_dict = {
666+ 'config-sentry': d.sentry.unit['configsvr/0'],
667+ 'mongos-sentry': d.sentry.unit['mongos/0'],
668+ 'shard1-sentry': d.sentry.unit['shard1/0'],
669+ 'shard2-sentry': d.sentry.unit['shard2/0']
670+}
671+
672+
673+#############################################################
674+# Check presence of MongoDB GUI HEALTH Status
675+#############################################################
676+def validate_status_interface():
677+ r = requests.get("http://{}:28017".format(
678+ sentry_dict['config-sentry'].info['public-address']),
679+ verify=False)
680+ r.raise_for_status
681+
682+
683+#############################################################
684+# Validate that each unit has an active mongo service
685+#############################################################
686+def validate_running_services():
687+ for service in sentry_dict:
688+ output = sentry_dict[service].run('service mongodb status')
689+ service_active = str(output).find('mongodb start/running')
690+ if service_active == -1:
691+ message = "Failed to find running MongoDB on host {}".format(
692+ service)
693+ amulet.raise_status(amulet.SKIP, msg=message)
694+
695+
696+#############################################################
697+# Validate connectivity from $WORLD
698+#############################################################
699+def validate_world_connectivity():
700+ client = MongoClient(d.sentry.unit['mongos/0'].info['public-address'])
701+
702+ db = client['test']
703+ # Can we successfully insert?
704+ insert_id = db.amulet.insert({'assert': True})
705+ if insert_id is None:
706+ amulet.raise_status(amulet.FAIL, msg="Failed to insert test data")
707+ # Can we delete from a shard using the Mongos hub?
708+ result = db.amulet.remove(insert_id)
709+ if result['err'] is not None:
710+ amulet.raise_status(amulet.FAIL, msg="Failed to remove test data")
711+
712+
713+#############################################################
714+# Validate relationships
715+#############################################################
716+# broken pending 1273312
717+def validate_relationships():
718+ d.sentry.unit['configsvr/0'].relation('configsvr', 'mongos:mongos-cfg')
719+ d.sentry.unit['shard1/0'].relation('database', 'mongos:mongos')
720+ d.sentry.unit['shard2/0'].relation('database', 'mongos:mongos')
721+ print(d.sentry.unit['shard1/0'].relation('database', 'mongos:mongos'))
722+
723+
724+def validate_manual_connection():
725+ output, code = d.sentry.unit['shard1/0'].run("mongo {}".format(
726+ d.sentry.unit['mongos/0'].info['public-address']))
727+ if code != 0:
728+ message = "Manual Connection failed for unit shard1"
729+ amulet.raise_status(amulet.SKIP, msg=message)
730+
731+ output, code = d.sentry.unit['shard2/0'].run("mongo {}".format(
732+ d.sentry.unit['mongos/0'].info['public-address']))
733+ if code != 0:
734+ message = "Manual Connection failed for unit shard2"
735+ amulet.raise_status(amulet.SKIP, msg=message)
736+
737+
738+validate_status_interface()
739+validate_running_services()
740+validate_manual_connection()
741+validate_world_connectivity()
742
743=== removed file 'tests/01_test_write_log_rotate_config.py'
744--- tests/01_test_write_log_rotate_config.py 2014-11-04 18:23:06 +0000
745+++ tests/01_test_write_log_rotate_config.py 1970-01-01 00:00:00 +0000
746@@ -1,41 +0,0 @@
747-#!/usr/bin/env python
748-
749-import mock
750-import os
751-import unittest
752-import tempfile
753-import sys
754-sys.path.append('hooks')
755-import hooks
756-
757-
758-class TestWriteLogrotateConfigFile(unittest.TestCase):
759-
760- def test_success(self):
761- logpath = '/tmp/foo/foo.log'
762- config_data = {
763- 'logpath': logpath,
764- 'logrotate-frequency': 'daily',
765- 'logrotate-maxsize': '5G',
766- 'logrotate-rotate': 5,
767- }
768- fd, temp_fn = tempfile.mkstemp()
769- os.close(fd)
770- with mock.patch('hooks.juju_log') as mock_juju_log:
771- with mock.patch('hooks.open', create=True) as mock_open:
772- mock_open.return_value = mock.MagicMock(spec=file)
773- hooks.write_logrotate_config(config_data, temp_fn)
774- os.unlink(temp_fn)
775- mock_juju_log.assert_called_once_with('Writing {}.'.format(temp_fn))
776- mock_open.assert_called_once_with(temp_fn, 'w')
777- mock_file = mock_open().__enter__()
778- call_args = mock_file.write.call_args[0][0]
779- self.assertTrue(mock_file.write.called)
780- self.assertIn(logpath, call_args)
781- self.assertIn('daily', call_args)
782- self.assertIn('maxsize 5G', call_args)
783- self.assertIn('rotate 5', call_args)
784-
785-
786-if __name__ == '__main__':
787- unittest.main()
788
789=== added file 'tests/02_relate_ceilometer_test.py'
790--- tests/02_relate_ceilometer_test.py 1970-01-01 00:00:00 +0000
791+++ tests/02_relate_ceilometer_test.py 2014-12-09 18:51:40 +0000
792@@ -0,0 +1,43 @@
793+#!/usr/bin/env python3
794+
795+import amulet
796+
797+
798+class TestDeploy(object):
799+
800+ def __init__(self, time=2500):
801+ # Attempt to load the deployment topology from a bundle.
802+ self.deploy = amulet.Deployment(series="trusty")
803+
804+ # If something errored out, attempt to continue by
805+ # manually specifying a standalone deployment
806+ self.deploy.add('mongodb')
807+ self.deploy.add('ceilometer', 'cs:trusty/ceilometer')
808+ # send blank configs to finalize the objects in the deployment map
809+ self.deploy.configure('mongodb', {})
810+ self.deploy.configure('ceilometer', {})
811+
812+ self.deploy.relate('mongodb:database', 'ceilometer:shared-db')
813+
814+ try:
815+ self.deploy.setup(time)
816+ self.deploy.sentry.wait(time)
817+ except:
818+ amulet.raise_status(amulet.FAIL, msg="Environment standup timeout")
819+ # sentry = self.deploy.sentry
820+
821+ def run(self):
822+ for test in dir(self):
823+ if test.startswith('test_'):
824+ getattr(self, test)()
825+
826+ def test_mongo_relation(self):
827+ unit = self.deploy.sentry.unit['ceilometer/0']
828+ mongo = self.deploy.sentry.unit['mongodb/0'].info['public-address']
829+ cont = unit.file_contents('/etc/ceilometer/ceilometer.conf')
830+ if mongo not in cont:
831+ amulet.raise_status(amulet.FAIL, "Unable to verify ceilometer cfg")
832+
833+if __name__ == '__main__':
834+ runner = TestDeploy()
835+ runner.run()
836
837=== removed file 'tests/200_deploy.test'
838--- tests/200_deploy.test 2014-11-04 22:25:26 +0000
839+++ tests/200_deploy.test 1970-01-01 00:00:00 +0000
840@@ -1,127 +0,0 @@
841-#!/usr/bin/env python3
842-
843-import amulet
844-import requests
845-from pymongo import MongoClient
846-
847-#########################################################
848-# Test Quick Config
849-#########################################################
850-scale = 1
851-seconds = 2500
852-
853-#########################################################
854-# 3shard cluster configuration
855-#########################################################
856-d = amulet.Deployment(series='trusty')
857-
858-d.add('configsvr', charm='mongodb', units=scale)
859-d.add('mongos', charm='mongodb', units=scale)
860-d.add('shard1', charm='mongodb', units=scale)
861-d.add('shard2', charm='mongodb', units=scale)
862-
863-#Setup the config svr
864-d.configure('configsvr', {'replicaset': 'configsvr'})
865-
866-# define each shardset
867-d.configure('shard1', {'replicaset': 'shard1'})
868-d.configure('shard2', {'replicaset': 'shard2'})
869-
870-d.configure('mongos', {})
871-
872-#Connect the config servers to mongo shell
873-d.relate('configsvr:configsvr', 'mongos:mongos-cfg')
874-
875-#connect each shard to the mongo shell
876-d.relate('mongos:mongos', 'shard1:database')
877-d.relate('mongos:mongos', 'shard2:database')
878-d.expose('configsvr')
879-d.expose('mongos')
880-
881-# Perform the setup for the deployment.
882-try:
883- d.setup(seconds)
884- d.sentry.wait(seconds)
885-except amulet.helpers.TimeoutError:
886- message = 'The environment did not setup in %d seconds.', seconds
887- amulet.raise_status(amulet.SKIP, msg=message)
888-except:
889- raise
890-
891-sentry_dict = {
892- 'config-sentry': d.sentry.unit['configsvr/0'],
893- 'mongos-sentry': d.sentry.unit['mongos/0'],
894- 'shard1-sentry': d.sentry.unit['shard1/0'],
895- 'shard2-sentry': d.sentry.unit['shard2/0']
896-}
897-
898-
899-#############################################################
900-# Check presence of MongoDB GUI HEALTH Status
901-#############################################################
902-def validate_status_interface():
903- r = requests.get("http://{}:28017".format(
904- sentry_dict['config-sentry'].info['public-address']),
905- verify=False)
906- r.raise_for_status
907-
908-
909-#############################################################
910-# Validate that each unit has an active mongo service
911-#############################################################
912-def validate_running_services():
913- for service in sentry_dict:
914- output = sentry_dict[service].run('service mongodb status')
915- service_active = str(output).find('mongodb start/running')
916- if service_active == -1:
917- message = "Failed to find running MongoDB on host {}".format(service)
918- amulet.raise_status(amulet.SKIP, msg=message)
919-
920-
921-#############################################################
922-# Validate connectivity from $WORLD
923-#############################################################
924-def validate_world_connectivity():
925- client = MongoClient(d.sentry.unit['mongos/0'].info['public-address'])
926-
927- db = client['test']
928- #Can we successfully insert?
929- insert_id = db.amulet.insert({'assert': True})
930- if insert_id is None:
931- amulet.raise_status(amulet.FAIL, msg="Failed to insert test data")
932- #Can we delete from a shard using the Mongos hub?
933- result = db.amulet.remove(insert_id)
934- if result['err'] is not None:
935- amulet.raise_status(amulet.FAIL, msg="Failed to remove test data")
936-
937-
938-#############################################################
939-# Validate relationships
940-#############################################################
941-#broken pending 1273312
942-def validate_relationships():
943- d.sentry.unit['configsvr/0'].relation('configsvr', 'mongos:mongos-cfg')
944- d.sentry.unit['shard1/0'].relation('database', 'mongos:mongos')
945- d.sentry.unit['shard2/0'].relation('database', 'mongos:mongos')
946- print(d.sentry.unit['shard1/0'].relation('database', 'mongos:mongos'))
947-
948-
949-def validate_manual_connection():
950- output, code = d.sentry.unit['shard1/0'].run("mongo {}".format(
951- d.sentry.unit['mongos/0'].info['public-address']))
952- if code != 0:
953- message = "Manual Connection failed for unit shard1"
954- amulet.raise_status(amulet.SKIP, msg=message)
955-
956- output, code = d.sentry.unit['shard2/0'].run("mongo {}".format(
957- d.sentry.unit['mongos/0'].info['public-address']))
958- if code != 0:
959- message = "Manual Connection failed for unit shard2"
960- amulet.raise_status(amulet.SKIP, msg=message)
961-
962-
963-validate_status_interface()
964-validate_running_services()
965-#validate_relationships()
966-validate_manual_connection()
967-validate_world_connectivity()
968
969=== removed file 'tests/200_relate_ceilometer.test'
970--- tests/200_relate_ceilometer.test 2014-11-04 22:07:36 +0000
971+++ tests/200_relate_ceilometer.test 1970-01-01 00:00:00 +0000
972@@ -1,53 +0,0 @@
973-#!/usr/bin/env python3
974-
975-import unittest
976-import subprocess
977-import amulet
978-
979-
980-class TestDeploy(unittest.TestCase):
981-
982- @classmethod
983- def setUpClass(cls):
984- time = 2500
985-
986- d = amulet.Deployment(series="trusty")
987- d.add('mongodb')
988- d.add('ceilometer', 'cs:trusty/ceilometer')
989- d.relate('mongodb:database', 'ceilometer:shared-db')
990- d.expose('mongodb')
991-
992- try:
993- d.setup(time)
994- d.sentry.wait(time)
995- except amulet.helpers.TimeoutError:
996- amulet.raise_status(amulet.FAIL, msg="Environment standup timeout")
997- except:
998- raise
999-
1000- cls.mongo = d.sentry['mongodb/0']
1001- cls.ceilometer = d.sentry['ceilometer/0']
1002- cls.deployment = d
1003-
1004- def test_mongo_relation(self):
1005- """Test that ceilometer config contains mongo host address"""
1006- mongo_private_address, _ = self.mongo.run('unit-get private-address')
1007- conf = self.ceilometer.file_contents('/etc/ceilometer/ceilometer.conf')
1008- self.assertTrue(mongo_private_address in conf)
1009-
1010- def test_port_change(self):
1011- """Test that mongo can be reached on its configured port"""
1012- host = self.mongo.info['public-address']
1013- netcat = 'nc {} {} </dev/null'
1014-
1015- def check_port(port):
1016- self.deployment.configure('mongodb', {'port': port})
1017- self.deployment.sentry.wait()
1018- return subprocess.call(netcat.format(host, port), shell=True)
1019-
1020- self.assertEqual(0, check_port(55555))
1021- self.assertEqual(0, check_port(27017))
1022-
1023-
1024-if __name__ == '__main__':
1025- unittest.main()
1026
1027=== added directory 'unit_tests'
1028=== added file 'unit_tests/test_write_log_rotate_config.py'
1029--- unit_tests/test_write_log_rotate_config.py 1970-01-01 00:00:00 +0000
1030+++ unit_tests/test_write_log_rotate_config.py 2014-12-09 18:51:40 +0000
1031@@ -0,0 +1,35 @@
1032+import mock
1033+import os
1034+import unittest
1035+import tempfile
1036+import sys
1037+sys.path.append('hooks')
1038+import hooks
1039+
1040+
1041+class TestWriteLogrotateConfigFile(unittest.TestCase):
1042+
1043+ def test_success(self):
1044+ logpath = '/tmp/foo/foo.log'
1045+ config_data = {
1046+ 'logpath': logpath,
1047+ 'logrotate-frequency': 'daily',
1048+ 'logrotate-maxsize': '5G',
1049+ 'logrotate-rotate': 5,
1050+ }
1051+ fd, temp_fn = tempfile.mkstemp()
1052+ os.close(fd)
1053+ with mock.patch('hooks.juju_log') as mock_juju_log:
1054+ with mock.patch('hooks.open', create=True) as mock_open:
1055+ mock_open.return_value = mock.MagicMock(spec=file)
1056+ hooks.write_logrotate_config(config_data, temp_fn)
1057+ os.unlink(temp_fn)
1058+ mock_juju_log.assert_called_once_with('Writing {}.'.format(temp_fn))
1059+ mock_open.assert_called_once_with(temp_fn, 'w')
1060+ mock_file = mock_open().__enter__()
1061+ call_args = mock_file.write.call_args[0][0]
1062+ self.assertTrue(mock_file.write.called)
1063+ self.assertIn(logpath, call_args)
1064+ self.assertIn('daily', call_args)
1065+ self.assertIn('maxsize 5G', call_args)
1066+ self.assertIn('rotate 5', call_args)
