Merge lp:~egoist-dev/charms/precise/mongodb/replica-set into lp:charms/mongodb
Proposed by: Bartek Zurawski

Status: Rejected
Rejected by: Charles Butler
Proposed branch: lp:~egoist-dev/charms/precise/mongodb/replica-set
Merge into: lp:charms/mongodb
Diff against target: 789 lines (+410/-102) (has conflicts); 3 files modified: config.yaml (+12/-4), hooks/hooks.py (+396/-97), hooks/install (+2/-1); text conflict in hooks/hooks.py
To merge this branch: bzr merge lp:~egoist-dev/charms/precise/mongodb/replica-set
Related bugs: none

Reviewers:
- Charles Butler (community): Needs Resubmitting
- Stuart Bishop (community): Needs Fixing

Review via email: mp+228490@code.launchpad.net
Commit message

Description of the change

Fixes the use of MongoDB replica sets by this charm: the replica set is now created by passing an explicit configuration to rs.initiate(...) rather than calling rs.initiate() with no arguments.
Revision history for this message

Charles Butler (lazypower) wrote:

Does not look good to me. Please fix this branch and resubmit.

review: Needs Resubmitting
Unmerged revisions
48. By Bartek Zurawski

Fixing the replica-set hooks: use rs.initiate(configuration) instead of rs.initiate(), use rs.conf() and rs.reconfig() instead of rs.add(), and fix problems with adding and removing members from an existing replica set.
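The rs.initiate() document described in this revision can be built with json.dumps rather than the string concatenation the branch uses, which avoids quoting mistakes. A minimal sketch of the idea (the helper name is illustrative, not from the branch):

```python
import json

def build_initiate_config(repl_set_name, hosts):
    """Build the configuration document for rs.initiate() from 'host:port' strings."""
    return {
        "_id": repl_set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

config = build_initiate_config("myset", ["10.0.0.1:27017", "10.0.0.2:27017"])
command = "rs.initiate(%s)" % json.dumps(config)
```

The resulting command string can then be handed to the charm's existing mongo_client() helper.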
Preview Diff
1 | === modified file 'config.yaml' |
2 | --- config.yaml 2014-04-11 21:00:49 +0000 |
3 | +++ config.yaml 2014-07-28 14:22:50 +0000 |
4 | @@ -108,9 +108,17 @@ |
5 | type: string |
6 | description: Size limit for in-memory storage of op ids |
7 | replicaset: |
8 | - default: myset |
9 | + default: "myset" |
10 | type: string |
11 | description: Name of the replica set |
12 | + replicaset_server_dbpath: |
13 | + default: "/mnt/var/lib/database/" |
14 | + type: string |
15 | + description: Path to Database in replica set |
16 | + replicaset_server_logpath: |
17 | + default: "/var/log/mongodb/replica.log" |
18 | + type: string |
19 | + description: Path to logfile in replica set |
20 | web_admin_ui: |
21 | default: True |
22 | type: boolean |
23 | @@ -132,7 +140,7 @@ |
24 | type: string |
25 | description: The path where the config server data files will be kept. |
26 | config_server_logpath: |
27 | - default: "/mnt/var/log/mongodb/configsvr.log" |
28 | + default: "/var/log/mongodb/configsvr.log" |
29 | type: string |
30 | description: The path where to send config server log data. |
31 | arbiter: |
32 | @@ -140,11 +148,11 @@ |
33 | type: string |
34 | description: Enable arbiter mode. Possible values are 'disabled' for no arbiter, 'enable' to become an arbiter or 'host:port' to declare another host as an arbiter. replicaset_master must be set for this option to work. |
35 | mongos_logpath: |
36 | - default: "/mnt/var/log/mongodb/mongos.log" |
37 | + default: "/var/log/mongodb/mongos.log" |
38 | type: string |
39 | description: The path where to send log data from the mongo router. |
40 | mongos_port: |
41 | - default: 27021 |
42 | + default: 27017 |
43 | type: int |
44 | description: Port number to use for the mongo router |
45 | extra_config_options: |
46 | |
47 | === modified symlink 'hooks/config-changed' |
48 | === target changed u'./hooks.py' => u'hooks.py' |
49 | === modified symlink 'hooks/database-relation-joined' |
50 | === target changed u'./hooks.py' => u'hooks.py' |
51 | === modified file 'hooks/hooks.py' |
52 | --- hooks/hooks.py 2014-07-21 19:41:19 +0000 |
53 | +++ hooks/hooks.py 2014-07-28 14:22:50 +0000 |
54 | @@ -16,6 +16,7 @@ |
55 | import time |
56 | import yaml |
57 | import argparse |
58 | +import pymongo |
59 | |
60 | from os import chmod |
61 | from os import remove |
62 | @@ -23,6 +24,8 @@ |
63 | from string import Template |
64 | from textwrap import dedent |
65 | from yaml.constructor import ConstructorError |
66 | +from random import randint |
67 | +from pymongo import * |
68 | |
69 | from charmhelpers.fetch import ( |
70 | add_source, |
71 | @@ -40,8 +43,8 @@ |
72 | default_mongodb_config = "/etc/mongodb.conf" |
73 | default_mongodb_init_config = "/etc/init/mongodb.conf" |
74 | default_mongos_list = "/etc/mongos.list" |
75 | -default_wait_for = 20 |
76 | -default_max_tries = 20 |
77 | +default_wait_for = 4 |
78 | +default_max_tries = 5 |
79 | |
80 | ############################################################################### |
81 | # Supporting functions |
82 | @@ -129,6 +132,7 @@ |
83 | # to the specified unit |
84 | # relation_id: specify relation id for out of context usage. |
85 | #------------------------------------------------------------------------------ |
86 | + |
87 | def relation_get(scope=None, unit_name=None, relation_id=None, |
88 | wait_for=default_wait_for, max_tries=default_max_tries): |
89 | juju_log("relation_get: scope: %s, unit_name: %s, relation_id: %s" % |
90 | @@ -146,10 +150,9 @@ |
91 | relation_cmd_line.append(unit_name) |
92 | relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
93 | |
94 | -# while relation_data is None and current_try < max_tries: |
95 | -# time.sleep(wait_for) |
96 | -# relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
97 | -# current_try += 1 |
98 | + while relation_data is None and current_try < max_tries: |
99 | + time.sleep(wait_for) |
100 | + relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
101 | |
102 | except Exception, e: |
103 | juju_log(str(e)) |
104 | @@ -158,6 +161,28 @@ |
105 | juju_log("relation_get returns: %s" % relation_data) |
106 | return(relation_data) |
107 | |
108 | +""" |
109 | +def relation_get(attribute=None, unit=None, rid=None): |
110 | + _args = ['relation-get', '--format=json'] |
111 | + if rid: |
112 | + _args.append('-r') |
113 | + _args.append(rid) |
114 | + _args.append(attribute or '-') |
115 | + if unit: |
116 | + _args.append(unit) |
117 | + if attribute: |
118 | + _args.append(attribute) |
119 | + try: |
120 | + return json.loads(subprocess.check_output(_args)) |
121 | + except ValueError: |
122 | + return None |
123 | + except CalledProcessError, e: |
124 | + if e.returncode == 2: |
125 | + return None |
126 | + raise |
127 | + |
128 | +""" |
129 | + |
130 | |
131 | #------------------------------------------------------------------------------ |
132 | # relation_set: Convenience function wrapping the juju command relation-set |
133 | @@ -168,6 +193,7 @@ |
134 | # relation_id: The relation id to use |
135 | # Returns: True on success or False on failure |
136 | #------------------------------------------------------------------------------ |
137 | + |
138 | def relation_set(key_value_pairs=None, relation_id=None): |
139 | juju_log("relation_set: kv: %s, relation_id: %s" % |
140 | (key_value_pairs, relation_id)) |
141 | @@ -188,7 +214,22 @@ |
142 | juju_log("relation_set returns: %s" % retVal) |
143 | return(retVal) |
144 | |
145 | - |
146 | +""" |
147 | +def relation_set(relation_settings={}, relation_id=None): |
148 | + juju_log("Relation Set...") |
149 | + relation_cmd_line = ['relation-set'] |
150 | + if relation_id is not None: |
151 | + relation_cmd_line.extend(('-r', relation_id)) |
152 | + for (k, v) in relation_settings.items(): |
153 | + if v is None: |
154 | + relation_cmd_line.append('{}='.format(k)) |
155 | + else: |
156 | + relation_cmd_line.append('{}={}'.format(k, v)) |
157 | + try: |
158 | + subprocess.call(relation_cmd_line) |
159 | + except Exception, e: |
160 | + juju_log(str(e)) |
161 | +""" |
162 | def relation_list(relation_id=None, wait_for=default_wait_for, |
163 | max_tries=default_max_tries): |
164 | juju_log("relation_list: relation_id: %s" % relation_id) |
165 | @@ -199,10 +240,11 @@ |
166 | relation_cmd_line.append('-r %s' % relation_id) |
167 | relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
168 | |
169 | -# while relation_data is None and current_try < max_tries: |
170 | -# time.sleep(wait_for) |
171 | -# relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
172 | -# current_try += 1 |
173 | + while relation_data is None and current_try < max_tries: |
174 | + time.sleep(wait_for) |
175 | + juju_log("Sleeping...") |
176 | + relation_data = json.loads(subprocess.check_output(relation_cmd_line)) |
177 | + current_try += 1 |
178 | |
179 | except Exception, e: |
180 | juju_log(str(e)) |
181 | @@ -255,9 +297,9 @@ |
182 | return(False) |
183 | try: |
184 | s.connect((host, int(port))) |
185 | - s.shutdown(socket.SHUT_RDWR) |
186 | - juju_log("port_check: %s:%s/%s is open" % (host, port, protocol)) |
187 | - return(True) |
188 | + s.shutdown(socket.SHUT_RDWR) |
189 | + juju_log("port_check: %s:%s/%s is open" % (host, port, protocol)) |
190 | + return(True) |
191 | except Exception, e: |
192 | juju_log("port_check: Unable to connect to %s:%s/%s." % |
193 | (host, port, protocol)) |
194 | @@ -519,13 +561,78 @@ |
195 | return(subprocess.call(cmd_line, shell=True) == 0) |
196 | |
197 | |
198 | -def init_replset(master_node=None): |
199 | + |
200 | +def init_replset(master_node=None, hosts=None, replSet=None, wait_for=2, max_tries=default_max_tries): |
201 | if master_node is None: |
202 | - juju_log("init_replset: master_node must be defined.") |
203 | - retVal = False |
204 | - else: |
205 | - retVal = mongo_client(master_node, 'rs.initiate()') |
206 | - juju_log("init_replset returns: %s" % retVal) |
207 | + juju_log("init_replset: master_node is not defined") |
208 | + retVal = False |
209 | + if hosts is None: |
210 | + juju_log("init_replset: hosts list is not defined") |
211 | + retVal = False |
212 | + if replSet is None: |
213 | + juju_log("init_replset: replica set name is not defined") |
214 | + retVal = False |
215 | + command = None |
216 | + current_state = 0 |
217 | + for ip in range(len(hosts)): |
218 | + index = hosts.index(hosts[ip]) |
219 | + address = hosts[index] |
220 | + address_split = address.split(':') |
221 | + while (not port_check(address_split[0], address_split[1]) and current_state < max_tries): |
222 | + current_state += 1 |
223 | + master_host = master_node.split(':')[0] |
224 | + master_port = master_node.split(':')[1] |
225 | + juju_log("check replica set -> is ok?") |
226 | + |
227 | + if is_replica_set(master_host, master_port): |
228 | + juju_log("replica set -> existed") |
229 | + c = pymongo.connection.Connection('%s:%s' % (master_host, master_port)) |
230 | + mongo = c['admin'] |
231 | + local = c['local']['system.replset'] |
232 | + cfg = local.find()[0] |
233 | + max_id = 0 |
234 | + replica_id = str(cfg['_id']) |
235 | + |
236 | + new_cfg = { |
237 | + '_id': replica_id, |
238 | + 'members': [], |
239 | + } |
240 | + |
241 | + for member in cfg['members']: |
242 | + if max_id < member['_id']: |
243 | + max_id = member['_id'] |
244 | + |
245 | + for member in cfg['members']: |
246 | + new_cfg['members'].append({'_id': int(member['_id']), 'host': str(member['host'])}) |
247 | + for member in hosts: |
248 | + member_host = member.split(':')[0] |
249 | + member_port = member.split(':')[1] |
250 | + if is_replica_set(member_host, member_port) is False: |
251 | + max_id = max_id + 1 |
252 | + new_cfg['members'].append({'_id': max_id, 'host': str(member)}) |
253 | + new_cfg['version'] = int(cfg['version']) + 1 |
254 | + |
255 | + try: |
256 | + juju_log("New replica configuration: %s" % new_cfg) |
257 | + mongo.command({'replSetReconfig': new_cfg}) |
258 | + except Exception, e: |
259 | + juju_log(str(e)) |
260 | + |
261 | + else: |
262 | + config = '{"_id":"%s","members":[' % replSet |
263 | + for ip in range(len(hosts)): |
264 | + index = hosts.index(hosts[ip]) |
265 | + config += '{"_id":%s,"host":"%s"}' % (index,hosts[index]) |
266 | + if ip+1 != len(hosts): |
267 | + config += ',' |
268 | + config += ']}' |
269 | + command = 'rs.initiate(%s)' % config |
270 | + |
271 | + if command: |
272 | + retVal = mongo_client(master_node, command) |
273 | + else: |
274 | + retVal = True |
275 | + |
276 | return(retVal) |
277 | |
278 | |
279 | @@ -548,7 +655,7 @@ |
280 | if re.search(' --replSet %s ' % replicaset_name, |
281 | mongodb_init_config, re.MULTILINE) is None: |
282 | mongodb_init_config = regex_sub([(' -- ', |
283 | - ' -- --replSet %s ' % replicaset_name)], |
284 | + ' --replSet %s ' % replicaset_name)], |
285 | mongodb_init_config) |
286 | retVal = update_file(default_mongodb_init_config, mongodb_init_config) |
287 | except Exception, e: |
288 | @@ -557,6 +664,68 @@ |
289 | finally: |
290 | return(retVal) |
291 | |
292 | +def is_replica_set(host=None, port=None): |
293 | + try: |
294 | + connection = MongoClient('%s' % host, int(port)) |
295 | + data = connection.admin.command("replSetGetStatus") |
296 | + if data['ok'] == 1.0: |
297 | + value = True |
298 | + else: |
299 | + value = False |
300 | + except Exception, e: |
301 | + juju_log(str(e)) |
302 | + value = False |
303 | + finally: |
304 | + return(value) |
305 | + |
306 | +def get_replica_set_master(host=None, port=None): |
307 | + replica_master = None |
308 | + try: |
309 | + connection = MongoClient('%s' % host,int(port)) |
310 | + data = connection.admin.command("isMaster") |
311 | + if data['ismaster'] is True: |
312 | + replica_master = data['me'] |
313 | + elif data['ismaster'] is False and data['setName']: |
314 | + replica_master = data['primary'] |
315 | + else: |
316 | + replica_master = None |
317 | + except Exception, e: |
318 | + juju_log(str(e)) |
319 | + replica_master = None |
320 | + finally: |
321 | + return(replica_master) |
322 | + |
323 | +def get_replica_set_name(host=None, port=None): |
324 | + replica_name = None |
325 | + connection = MongoClient('%s' % host, int(port)) |
326 | + try: |
327 | + data = connection.admin.command("isMaster") |
328 | + replica_name = data['setName'] |
329 | + except Exception, e: |
330 | + juju_log(str(e)) |
331 | + replica_name = False |
332 | + finally: |
333 | + return(replica_name) |
334 | + |
335 | +def is_shard(mongos_host=None, mongos_port=None, client_host=None): |
336 | + connection = MongoClient('%s' % mongos_host,int(mongos_port)) |
337 | + shards = connection.admin.command("listshards")['shards'] |
338 | + if shards: |
339 | + list_of_shards = shards[0]['host'].split(',') |
340 | + if is_replica_set(client_host,mongos_port) is True: |
341 | + replica_master = get_replica_set_master(client_host, mongos_port) |
342 | + replica_name = get_replica_set_name(client_host, mongos_port) |
343 | + replica_uri = replica_name + '/' + replica_master |
344 | + for member in list_of_shards: |
345 | + if (member == replica_uri or member == replica_master): |
346 | + return True |
347 | + elif is_replica_set(client_host,mongos_port) is False: |
348 | + for member in list_of_shards: |
349 | + client_host2 = client_host + ':' + str(mongos_port) |
350 | + if member == client_host2: |
351 | + return True |
352 | + else: |
353 | + return False |
354 | |
355 | def update_daemon_options(daemon_options=None): |
356 | mongodb_init_config = open(default_mongodb_init_config).read() |
357 | @@ -705,9 +874,8 @@ |
358 | ) |
359 | subprocess.call( |
360 | [ |
361 | - 'mkdir', |
362 | - '-p', |
363 | - '%s' % os.path.dirname(config_data['config_server_logpath']) |
364 | + 'touch', |
365 | + '%s' % config_data['config_server_logpath'] |
366 | ] |
367 | ) |
368 | |
369 | @@ -721,7 +889,8 @@ |
370 | cmd_line += " --pidfilepath /var/run/mongodb/configsvr.pid" |
371 | cmd_line += " --fork" |
372 | subprocess.call(cmd_line, shell=True) |
373 | - |
374 | + juju_log("Wait for port listening...") |
375 | + time.sleep(wait_for) |
376 | retVal = configsvr_ready(wait_for, max_tries) |
377 | if retVal: |
378 | open_port(config_data['config_server_port']) |
379 | @@ -788,26 +957,31 @@ |
380 | juju_log("enable_mongos: Not enough config servers yet...") |
381 | return(True) |
382 | disable_mongos() |
383 | - # Make sure logpath exist |
384 | - subprocess.call( |
385 | - [ |
386 | - 'mkdir', |
387 | - '-p', |
388 | - '%s' % os.path.dirname(config_data['mongos_logpath']) |
389 | - ] |
390 | - ) |
391 | - cmd_line = "mongos" |
392 | - cmd_line += " --logpath %s" % config_data['mongos_logpath'] |
393 | - cmd_line += " --pidfilepath /var/run/mongodb/mongos.pid" |
394 | - cmd_line += " --port %d" % config_data['mongos_port'] |
395 | - cmd_line += " --fork" |
396 | - if len(config_servers) > 0: |
397 | - if len(config_servers) >= 3: |
398 | - cmd_line += ' --configdb %s' % ','.join(config_servers[0:3]) |
399 | -# else: |
400 | -# cmd_line += ' --configdb %s' % config_servers[0] |
401 | - juju_log("enable_mongos: cmd_line: %s" % cmd_line) |
402 | - subprocess.call(cmd_line, shell=True) |
403 | + # Stopping mongo to free port 27017 |
404 | + if len(config_servers) >=3: |
405 | + service('mongodb', 'stop') |
406 | + juju_log("Stopping mongo to free port 27017") |
407 | + # Make sure logpath exist |
408 | + subprocess.call( |
409 | + [ |
410 | + 'touch', |
411 | + '%s' % config_data['mongos_logpath'] |
412 | + ] |
413 | + ) |
414 | + cmd_line = "mongos" |
415 | + cmd_line += " --logpath %s" % config_data['mongos_logpath'] |
416 | + cmd_line += " --pidfilepath /var/run/mongodb/mongos.pid" |
417 | + cmd_line += " --port %d" % config_data['mongos_port'] |
418 | + cmd_line += " --fork" |
419 | + if len(config_servers) > 0: |
420 | + if len(config_servers) >= 3: |
421 | + cmd_line += ' --configdb %s' % ','.join(config_servers[0:3]) |
422 | +# else: |
423 | +# cmd_line += ' --configdb %s' % config_servers[0] |
424 | + juju_log("enable_mongos: cmd_line: %s" % cmd_line) |
425 | + juju_log("Waiting for configsvr") |
426 | + time.sleep(10) |
427 | + subprocess.call(cmd_line, shell=True) |
428 | retVal = mongos_ready(wait_for, max_tries) |
429 | if retVal: |
430 | open_port(config_data['mongos_port']) |
431 | @@ -848,6 +1022,47 @@ |
432 | (service('mongodb', 'status') == port_check(my_hostname, my_port)) |
433 | is True) |
434 | |
435 | +def restart_mongo_replica_config(wait_for=default_wait_for, max_tries=default_max_tries): |
436 | + my_hostname = unit_get('public-address') |
437 | + my_port = config_get('port') |
438 | + current_try = 0 |
439 | + |
440 | + service('mongodb', 'stop') |
441 | + if os.path.exists('/var/lib/mongodb/mongod.lock'): |
442 | + os.remove('/var/lib/mongodb/mongod.lock') |
443 | + |
444 | + # Make sure dbpath and logpath exist |
445 | + subprocess.call( |
446 | + [ |
447 | + 'mkdir', |
448 | + '-p', |
449 | + '%s' % config_get('replicaset_server_dbpath') |
450 | + ] |
451 | + ) |
452 | + subprocess.call( |
453 | + [ |
454 | + 'touch', |
455 | + '%s' % config_get('replicaset_server_logpath') |
456 | + ] |
457 | + ) |
458 | + |
459 | + |
460 | + cmd_line = "mongod" |
461 | + cmd_line += " --replSet %s " % config_get('replicaset') |
462 | + cmd_line += " --dbpath %s " % config_get('replicaset_server_dbpath') |
463 | + cmd_line += " --logpath %s " % config_get('replicaset_server_logpath') |
464 | + cmd_line += " --port 27017 " |
465 | + cmd_line += " --fork " |
466 | + cmd_line += " --pidfilepath /var/run/mongodb/replica.pid" |
467 | + |
468 | + subprocess.call(cmd_line, shell=True) |
469 | + |
470 | + juju_log("Replica waits for config final") |
471 | + while (not port_check(my_hostname, my_port)): |
472 | + time.sleep(wait_for) |
473 | + |
474 | + return (port_check(my_hostname, my_port) is True) |
475 | + |
476 | |
477 | def backup_cronjob(disable=False): |
478 | """Generate the cronjob to backup with mongodbump.""" |
479 | @@ -1054,9 +1269,13 @@ |
480 | def stop_hook(): |
481 | juju_log("stop_hook") |
482 | try: |
483 | - retVal = service('mongodb', 'stop') |
484 | - os.remove('/var/lib/mongodb/mongod.lock') |
485 | - #FIXME Need to check if this is still needed |
486 | + proc = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE) |
487 | + out, err = proc.communicate() |
488 | + for line in out.splitlines(): |
489 | + if 'mongod' in line: |
490 | + pid = int(line.split(None, 1)[0]) |
491 | + os.kill(pid, signal.SIGKILL) |
492 | + retVal = True |
493 | except Exception, e: |
494 | juju_log(str(e)) |
495 | retVal = False |
496 | @@ -1067,17 +1286,23 @@ |
497 | |
498 | def database_relation_joined(): |
499 | juju_log("database_relation_joined") |
500 | - my_hostname = unit_get('public-address') |
501 | - my_port = config_get('port') |
502 | - my_replset = config_get('replicaset') |
503 | - juju_log("my_hostname: %s" % my_hostname) |
504 | - juju_log("my_port: %s" % my_port) |
505 | - juju_log("my_replset: %s" % my_replset) |
506 | + hostname = unit_get('public-address') |
507 | + port = config_get('port') |
508 | + replset = config_get('replicaset') |
509 | + install_order = os.environ['JUJU_UNIT_NAME'].split('/')[1] |
510 | + if is_replica_set(hostname, port): |
511 | + replica_master = get_replica_set_master(hostname, port) |
512 | + hostname = replica_master.split(':')[0] |
513 | + port = replica_master.split(':')[1] |
514 | + juju_log("hostname: %s" % hostname) |
515 | + juju_log("port: %s" % port) |
516 | + juju_log("replset: %s" % replset) |
517 | return(relation_set( |
518 | { |
519 | - 'hostname': my_hostname, |
520 | - 'port': my_port, |
521 | - 'replset': my_replset, |
522 | + 'hostname': hostname, |
523 | + 'port': port, |
524 | + 'replset': replset, |
525 | + 'install-order': install_order, |
526 | 'type': 'database', |
527 | })) |
528 | |
529 | @@ -1093,7 +1318,7 @@ |
530 | juju_log("my_replset: %s" % my_replset) |
531 | juju_log("my_install_order: %s" % my_install_order) |
532 | return(enable_replset(my_replset) == |
533 | - restart_mongod() == |
534 | + restart_mongo_replica_config() == |
535 | relation_set( |
536 | { |
537 | 'hostname': my_hostname, |
538 | @@ -1103,61 +1328,104 @@ |
539 | 'type': 'replset', |
540 | })) |
541 | |
542 | - |
543 | def replica_set_relation_changed(): |
544 | juju_log("replica_set_relation_changed") |
545 | my_hostname = unit_get('public-address') |
546 | my_port = config_get('port') |
547 | my_install_order = os.environ['JUJU_UNIT_NAME'].split('/')[1] |
548 | + my_replicaset = config_get('replicaset') |
549 | my_replicaset_master = config_get('replicaset_master') |
550 | |
551 | # If we are joining an existing replicaset cluster, just join and leave. |
552 | - if my_replicaset_master != "auto": |
553 | - return(join_replset(my_replicaset_master, my_hostname)) |
554 | +# if my_replicaset_master != "auto": |
555 | +# return(join_replset(my_replicaset_master, my_hostname)) |
556 | |
557 | # Default to this node being the master |
558 | master_hostname = my_hostname |
559 | master_port = my_port |
560 | master_install_order = my_install_order |
561 | - |
562 | + master_replicaset = my_replicaset |
563 | + |
564 | + |
565 | + relList = relation_list() |
566 | + while (len(relList) != 1): |
567 | # Check the nodes in the relation to find the master |
568 | - for member in relation_list(): |
569 | - juju_log("replica_set_relation_changed: member: %s" % member) |
570 | - hostname = relation_get('hostname', member) |
571 | - port = relation_get('port', member) |
572 | - install_order = relation_get('install-order', member) |
573 | - juju_log("replica_set_relation_changed: install_order: %s" % install_order) |
574 | - if install_order is None: |
575 | - juju_log("replica_set_relation_changed: install_order is None. relation is not ready") |
576 | - break |
577 | - if int(install_order) < int(master_install_order): |
578 | - master_hostname = hostname |
579 | - master_port = port |
580 | - master_install_order = install_order |
581 | + for member in relation_list(): |
582 | + juju_log("replica_set_relation_changed: member: %s" % member) |
583 | + hostname = relation_get('hostname', member) |
584 | + port = relation_get('port', member) |
585 | + install_order = relation_get('install-order', member) |
586 | + juju_log("replica_set_relation_changed: install_order: %s" % install_order) |
587 | + if install_order is None: |
588 | + juju_log("replica_set_relation_changed: install_order is None. relation is not ready") |
589 | + break |
590 | + if int(install_order) < int(master_install_order): |
591 | + juju_log("in IF") |
592 | + master_hostname = hostname |
593 | + master_port = port |
594 | + master_install_order = install_order |
595 | + |
596 | + hosts = [] |
597 | + hosts.append("%s:%s" % (my_hostname, my_port)) |
598 | + |
599 | + for member in relation_list(): |
600 | + juju_log("Getting ip for replica config") |
601 | + hostname = relation_get('hostname', member) |
602 | + port = relation_get('port', member) |
603 | + hosts.append("%s:%s" % (hostname,port)) |
604 | |
605 | # Initiate the replset |
606 | - init_replset("%s:%s" % (master_hostname, master_port)) |
607 | + if (init_replset("%s:%s" % (master_hostname, master_port), hosts, my_replicaset) is True): |
608 | + break |
609 | |
610 | # Add the rest of the nodes to the replset |
611 | - for member in relation_list(): |
612 | - hostname = relation_get('hostname', member) |
613 | - port = relation_get('port', member) |
614 | - if master_hostname != hostname: |
615 | - if hostname == my_hostname: |
616 | - subprocess.call(['mongo', |
617 | - '--eval', |
618 | - "rs.add(\"%s\")" % hostname]) |
619 | - else: |
620 | - join_replset("%s:%s" % (master_hostname, master_port), |
621 | - "%s:%s" % (hostname, port)) |
622 | - |
623 | +# for member in relation_list(): |
624 | +# hostname = relation_get('hostname', member) |
625 | +# port = relation_get('port', member) |
626 | +# if master_hostname != hostname: |
627 | +# if hostname == my_hostname: |
628 | +# subprocess.call(['mongo', |
629 | +# '--eval', |
630 | +# "rs.add(\"%s\")" % hostname]) |
631 | +# else: |
632 | +# join_replset("%s:%s" % (master_hostname, master_port), |
633 | +# "%s:%s" % (hostname, port)) |
634 | +# |
635 | # Add this node to the replset ( if needed ) |
636 | - if master_hostname != my_hostname: |
637 | - join_replset("%s:%s" % (master_hostname, master_port), |
638 | - "%s:%s" % (my_hostname, my_port)) |
639 | +# if master_hostname != my_hostname: |
640 | +# join_replset("%s:%s" % (master_hostname, master_port), |
641 | +# "%s:%s" % (my_hostname, my_port)) |
642 | |
643 | return(True) |
644 | |
645 | +def replica_set_relation_departed(): |
646 | + host = unit_get('public-address') |
647 | + port = config_get('port') |
648 | + replica_member_host = '%s:%s' % (host,port) |
649 | + replica_master = get_replica_set_master(host, port) |
650 | + if replica_master: |
651 | + c = pymongo.connection.Connection('%s' % replica_master) |
652 | + mongo = c['admin'] |
653 | + local = c['local']['system.replset'] |
654 | + cfg = local.find()[0] |
655 | + for i in range(0, len(cfg['members'])): |
656 | + juju_log("In for statement") |
657 | + if str(cfg['members'][i]['host']) == str(replica_member_host): |
658 | + juju_log('in if statement') |
659 | + del cfg['members'][i] |
660 | + new_cfg = { |
661 | + '_id': str(cfg['_id']), |
662 | + 'members': [], |
663 | + } |
664 | + for member in cfg['members']: |
665 | + new_cfg['members'].append({'_id': int(member['_id']), 'host': str(member['host'])}) |
666 | + new_cfg['version'] = int(cfg['version']) + 1 |
667 | + try: |
668 | + juju_log("New replica config: %s" % new_cfg) |
669 | + mongo.command({'replSetReconfig': new_cfg}) |
670 | + except Exception, e: |
671 | + juju_log(str(e)) |
672 | + return(True) |
673 | |
674 | def configsvr_relation_joined(): |
675 | juju_log("configsvr_relation_joined") |
676 | @@ -1169,7 +1437,7 @@ |
677 | 'hostname': my_hostname, |
678 | 'port': my_port, |
679 | 'install-order': my_install_order, |
680 | - 'type': 'configsvr', |
681 | + 'type': 'configsvr' |
682 | })) |
683 | |
684 | |
685 | @@ -1201,13 +1469,25 @@ |
686 | juju_log("mongos_relation_changed") |
687 | config_data = config_get() |
688 | retVal = False |
689 | + juju_log("Waiting for instance") |
690 | + time.sleep(5) |
691 | for member in relation_list(): |
692 | hostname = relation_get('hostname', member) |
693 | port = relation_get('port', member) |
694 | rel_type = relation_get('type', member) |
695 | - if hostname is None or port is None or rel_type is None: |
696 | - juju_log("mongos_relation_changed: relation data not ready.") |
697 | - break |
698 | + install_order = relation_get('install-order') |
699 | +# if hostname is None or port is None or rel_type is None: |
700 | +# juju_log("mongos_relation_changed: relation data not ready.") |
701 | +# break |
702 | + while hostname is None or port is None or rel_type is None: |
703 | + juju_log("Waiting for relation data") |
704 | + time.sleep(5) |
705 | + hostname = relation_get('hostname', member) |
706 | + juju_log("Hostname: %s" % hostname) |
707 | + port = relation_get('port', member) |
708 | + juju_log("Port: %s" % port) |
709 | + rel_type = relation_get('type', member) |
710 | + juju_log("Relation type: %s" % rel_type) |
711 | if rel_type == 'configsvr': |
712 | config_servers = load_config_servers(default_mongos_list) |
713 | print "Adding config server: %s:%s" % (hostname, port) |
714 | @@ -1227,6 +1507,7 @@ |
715 | mongos_host = "%s:%s" % ( |
716 | unit_get('public-address'), |
717 | config_get('mongos_port')) |
718 | +<<<<<<< TREE |
719 | shard_command1 = "sh.addShard(\"%s:%s\")" % (hostname, port) |
720 | retVal1 = mongo_client(mongos_host, shard_command1) |
721 | replicaset = relation_get('replset', member) |
722 | @@ -1237,9 +1518,25 @@ |
723 | else: |
724 | juju_log("Not enough config server for mongos yet.") |
725 | retVal = True |
726 | +======= |
727 | + if is_shard(unit_get('public-address'),config_get('mongos_port'),hostname) is False: |
728 | + juju_log("Host not in shard...") |
729 | + if is_replica_set(hostname, port) is False: |
730 | + shard_command1 = "sh.addShard(\"%s:%s\")" % (hostname, port) |
731 | + value = mongo_client(mongos_host, shard_command1) |
732 | + else: |
733 | + replicaset = get_replica_set_name(hostname, port) |
734 | + replicaset_master = get_replica_set_master(hostname, port) |
735 | + shard_command2 = "sh.addShard(\"%s/%s\")" % \ |
736 | + (replicaset, replicaset_master) |
737 | + value = mongo_client(mongos_host, shard_command2) |
738 | + retVal = value is True |
739 | + else: |
740 | + juju_log("Host in shard...") |
741 | + retVal = True |
742 | +>>>>>>> MERGE-SOURCE |
743 | else: |
744 | - juju_log("mongos_relation_change: undefined rel_type: %s" % |
745 | - rel_type) |
746 | + juju_log("mongos_relation_change: undefined rel_type: %s" % rel_type) |
747 | return(False) |
748 | juju_log("mongos_relation_changed returns: %s" % retVal) |
749 | return(retVal) |
750 | @@ -1507,6 +1804,8 @@ |
751 | retVal = replica_set_relation_joined() |
752 | elif hook_name == "replica-set-relation-changed": |
753 | retVal = replica_set_relation_changed() |
754 | + elif hook_name == "replica-set-relation-departed": |
755 | + retVal = replica_set_relation_departed() |
756 | elif hook_name == "configsvr-relation-joined": |
757 | retVal = configsvr_relation_joined() |
758 | elif hook_name == "configsvr-relation-changed": |
759 | |
760 | === modified file 'hooks/install' |
761 | --- hooks/install 2013-11-25 19:48:00 +0000 |
762 | +++ hooks/install 2014-07-28 14:22:50 +0000 |
763 | @@ -1,5 +1,6 @@ |
764 | #!/bin/bash |
765 | |
766 | -sudo apt-get install "python-yaml" |
767 | +sudo apt-get install python-pip -y |
768 | +sudo pip install pymongo |
769 | |
770 | hooks/hooks.py -H install |
771 | |
772 | === modified symlink 'hooks/mongos-cfg-relation-broken' |
773 | === target changed u'./hooks.py' => u'hooks.py' |
774 | === modified symlink 'hooks/mongos-relation-broken' |
775 | === target changed u'./hooks.py' => u'hooks.py' |
776 | === modified symlink 'hooks/mongos-relation-changed' |
777 | === target changed u'./hooks.py' => u'hooks.py' |
778 | === modified symlink 'hooks/mongos-relation-joined' |
779 | === target changed u'./hooks.py' => u'hooks.py' |
780 | === modified symlink 'hooks/replica-set-relation-changed' |
781 | === target changed u'./hooks.py' => u'hooks.py' |
782 | === added symlink 'hooks/replica-set-relation-departed' |
783 | === target is u'hooks.py' |
784 | === modified symlink 'hooks/replica-set-relation-joined' |
785 | === target changed u'./hooks.py' => u'hooks.py' |
786 | === modified symlink 'hooks/start' |
787 | === target changed u'./hooks.py' => u'hooks.py' |
788 | === modified symlink 'hooks/stop' |
789 | === target changed u'./hooks.py' => u'hooks.py' |
Thanks for this work. It looks like the mongo charm certainly needs a polish.
This branch contains conflict markers, so it certainly needs repair; the Python will not compile as it stands.
Under what circumstances is it sane to change replicaset_server_dbpath or replicaset_server_logpath? If there are no genuine reasons, the charm should be opinionated and the paths hardcoded, rather than having them as configuration items (which really requires tests to confirm things work with non-default paths).
Is there a reason pymongo needs to be installed via pip, or can we use the package? Many installations will not have network connectivity to pypi, so a package is preferred. Otherwise, at a minimum a configurable URL will need to be added to config.yaml, specifying where the tarball can be downloaded and installed from.
There are some mixed tabs and spaces in hooks.py that should be fixed. Quite a lot of the changes appear to be whitespace, replacing spaces with tabs, so I suspect editor settings have caused problems. For example:
-        s.shutdown(socket.SHUT_RDWR)
-        juju_log("port_check: %s:%s/%s is open" % (host, port, protocol))
-        return(True)
+	s.shutdown(socket.SHUT_RDWR)
+	juju_log("port_check: %s:%s/%s is open" % (host, port, protocol))
+	return(True)
At a minimum, it needs to be consistent. I'm seeing all three acceptable forms of indentation being used in the one file, which is a timebomb (4 space only, mixed 4 space + tab, tab only).
A lot of the helpers here could instead be pulled from charm-helpers, in particular relation_get and relation_set. I suspect they would disappear from this review if the whitespace returned to normal.
This code is broken. relation-get will return the same information consistently throughout the hook (and if it doesn't, you have found a Juju bug).
+	while relation_data is None and current_try < max_tries:
+		time.sleep(wait_for)
+		relation_data = json.loads(subprocess.check_output(relation_cmd_line))
I worry about there being race conditions adding and removing the database from the replica set. It seems that the current replica set configuration is being updated and the entire modified configuration resubmitted. If multiple peer relation-changed or relation-departed hooks are running simultaneously, they will race and we will end up with a broken replication configuration. I don't think this can be fixed without juju leadership.
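The departed hook in the diff also appears to mutate cfg['members'] while indexing over it. Whatever happens with leadership, the reconfig document should at least be derived in a single pass from a freshly read config, with the version bump causing mongod to reject a stale reconfig rather than accept a corrupted one. A minimal sketch (function name illustrative):

```python
def remove_member(cfg, departing_host):
    """Build a replSetReconfig document with departing_host removed.

    Bumping the version means mongod should reject this reconfig if
    another hook changed the config after cfg was read -- the race
    becomes a detectable failure instead of a silently broken set.
    """
    members = [m for m in cfg['members'] if str(m['host']) != departing_host]
    return {
        '_id': str(cfg['_id']),
        'members': [{'_id': int(m['_id']), 'host': str(m['host'])} for m in members],
        'version': int(cfg['version']) + 1,
    }
```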