Merge lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration into lp:charms/trusty/mongodb

Proposed by Guillermo Gonzalez
Status: Merged
Merged at revision: 72
Proposed branch: lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration
Merge into: lp:charms/trusty/mongodb
Diff against target: 126 lines (+92/-2)
2 files modified
hooks/hooks.py (+5/-2)
tests/04_deploy_with_storage.py (+87/-0)
To merge this branch: bzr merge lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration
Reviewer Review Type Date Requested Status
Tim Van Steenburgh (community) Approve
Adam Israel (community) Needs Fixing
Review via email: mp+260290@code.launchpad.net

Commit message

Add a flag to signal when new_mongo_dir was just created, and migrate the existing data directory.

Description of the change

Add a flag to signal when new_mongo_dir was just created, and migrate the existing data directory.

The current code in trunk will never migrate the data into new_mongo_dir, because the conditional skips that code path.

In order to reproduce this, you need a non-local provider. I used this juju-deployer config: http://paste.ubuntu.com/11392167/
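
For context, the affected flow in config_changed_volume_apply looks roughly like the sketch below. Names follow hooks/hooks.py, but the function boundary, the placement of the permission copy, and the copy step itself are simplified stand-ins; see the full diff at the bottom of this page for the actual change.

# Simplified sketch of the data directory migration in config_changed_volume_apply.
# The copy loop is a stand-in; the charm has its own migration logic.
import os
import shutil


def apply_new_data_directory(data_directory_path, new_mongo_dir):
    curr_dir_stat = os.stat(data_directory_path)

    # Create the directory below the new mount point, remembering whether this
    # run created it (the flag added by this branch).
    new_mongo_dir_just_created = False
    if not os.path.isdir(new_mongo_dir):
        os.mkdir(new_mongo_dir)
        new_mongo_dir_just_created = True
        # copy ownership/permissions from the current data directory
        # (needs sufficient privileges, as in the hook)
        os.chown(new_mongo_dir, curr_dir_stat.st_uid, curr_dir_stat.st_gid)
        os.chmod(new_mongo_dir, curr_dir_stat.st_mode)

    # Trunk gated the migration only on "new_mongo_dir does not exist", but the
    # directory was just created above, so that branch never ran and the data
    # was never migrated. Also checking the flag restores the migration path.
    if not os.path.exists(new_mongo_dir) or new_mongo_dir_just_created:
        for name in os.listdir(data_directory_path):
            shutil.move(os.path.join(data_directory_path, name),
                        os.path.join(new_mongo_dir, name))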

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4844 mongodb for verterok mp260290
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/4844/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4524 mongodb for verterok mp260290
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4524/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4335 mongodb for verterok mp260290
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4335/

Revision history for this message
Adam Israel (aisrael) wrote :

Hi Guillermo,

Thanks for your work on improving the mongodb charm! I had a chance to review this merge proposal today and wanted to provide you with some feedback.

The changes you've made look to be straightforward, and I suspect that they work as intended, but I'd prefer to see a simple amulet test added to verify the migration works as expected. With that in place, I'd have no problem giving this my approval.

Thanks!

review: Needs Fixing
Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

Thanks for the prompt reply.
I tried to write an amulet test for this, but I can't find a way to test it without building a really complex test case, e.g. adding two more charms (storage and block-storage-provider) plus handling credentials for the cloud used to run the tests.

I also tried to use the local provider (for the storage charm), but found a new bug in the mongodb charm: it fails when the provider is local (just a directory) because it tries to find a block device... probably a mix of the old volume-map support code and the data relation handling.

Is there a simpler way to test this with amulet?

As an alternative, I could try to write a unit test for the fix in the config_changed_volume_apply function.
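
Such a test could look roughly like the sketch below. It assumes the migration decision is factored out into a small helper (should_migrate is hypothetical and does not exist in hooks.py today); testing config_changed_volume_apply directly would need much heavier mocking of the volume helpers.

# Rough sketch only. should_migrate is a hypothetical helper mirroring the
# fixed condition in config_changed_volume_apply; it is not part of hooks.py.
import unittest


def should_migrate(new_dir_exists, new_dir_just_created):
    """Mirror of the fixed condition: migrate if the new dir is missing or was just created."""
    return not new_dir_exists or new_dir_just_created


class MigrationDecisionTestCase(unittest.TestCase):
    def test_migrates_when_new_dir_was_just_created(self):
        # This is the case trunk got wrong: the directory exists (we just made
        # it), so the old "not exists" check skipped the migration.
        self.assertTrue(should_migrate(new_dir_exists=True,
                                       new_dir_just_created=True))

    def test_skips_migration_for_pre_existing_data_dir(self):
        self.assertFalse(should_migrate(new_dir_exists=True,
                                        new_dir_just_created=False))


if __name__ == '__main__':
    unittest.main()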

Cheers

72. By Guillermo Gonzalez

fix local storage support

73. By Guillermo Gonzalez

add amulet test

Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

Fixed the local provider issue and added the amulet test.

Cheers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4622 mongodb for verterok mp260290
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4622/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4942 mongodb for verterok mp260290
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/4942/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4393 mongodb for verterok mp260290
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4393/

Revision history for this message
Adam Israel (aisrael) wrote :

Hey Guillermo,

I'm running into an error while testing against both the local and amazon providers. It looks like this might be the root cause:

ERROR: volume_init_and_mount: coult not find an unused device for regexp: /dev/vd[b-z]

Any ideas?

$ ./04_deploy_with_storage.py
2015-06-04 17:25:37 Starting deployment of amazon
2015-06-04 17:25:39 Deploying services...
2015-06-04 17:25:41 Deploying service mongodb using cs:trusty/mongodb-21
2015-06-04 17:26:00 Deploying service storage using cs:~chris-gondolin/trusty/storage-5
2015-06-04 17:26:19 Config specifies num units for subordinate: storage
2015-06-04 17:31:42 Adding relations...
2015-06-04 17:31:43 Exposing service 'mongodb'
2015-06-04 17:31:43 Deployment complete in 366.42 seconds
Adding storage relation, and sleeping for 2 min.
Traceback (most recent call last):
  File "./04_deploy_with_storage.py", line 86, in <module>
    validate_status()
  File "./04_deploy_with_storage.py", line 52, in validate_status
    d.sentry.wait_for_status(d.juju_env, ['mongodb'])
  File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 260, in wait_for_status
    unit, unit_dict.get('agent-state-info')))
Exception: Error on unit mongodb/0: hook failed: "data-relation-changed" for storage:data

nged" for storage:data
(venv)vagrant@vagrant-ubuntu-trusty-64:/charms/trusty/mongodb/tests$ juju pprint
- mongodb/0: 52.1.122.247 (error) 27017/tcp, 27019/tcp, 27021/tcp, 28017/tcp
  - storage/0: 52.1.122.247 (started)
- mongodb/1: 54.152.170.105 (error) 27017/tcp, 27019/tcp, 27021/tcp, 28017/tcp
  - storage/1: 54.152.170.105 (started)

Here are pastebins of the logs:

mongodb/0:
http://paste.ubuntu.com/11571075/

storage/0:
http://paste.ubuntu.com/11571095/

mongodb/1:
http://paste.ubuntu.com/11571093/

storage/1:
http://paste.ubuntu.com/11571098/

review: Needs Fixing
Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

It looks like it's using the current trunk of the charm instead of the local one.
That's the data-relation-changed bug I had to fix in order to use the storage subordinate with the local provider in the amulet test.

Please check whether the charm code deployed on the units matches this branch or trunk.

Thanks.

Revision history for this message
Ryan Beisner (1chb1n) wrote :
Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

I'm +1 on this as it passes the OSCI tests.

review: Approve

Preview Diff

=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2015-04-22 13:24:09 +0000
+++ hooks/hooks.py 2015-06-02 05:33:28 +0000
@@ -1547,7 +1547,8 @@
     assert(data_directory_path)
     volid = volume_get_volume_id()
     if volid:
-        if volume_is_permanent(volid):
+        volid_from_subordinate = volume_get_id_for_storage_subordinate()
+        if volume_is_permanent(volid) and not volid_from_subordinate:
             if not volume_init_and_mount(volid):
                 juju_log(
                     "volume_init_and_mount failed, not applying changes")
@@ -1581,9 +1582,11 @@

     # Create a directory structure below "new" mount_point
     curr_dir_stat = os.stat(data_directory_path)
+    new_mongo_dir_just_created = False
     if not os.path.isdir(new_mongo_dir):
         juju_log("mkdir %s" % new_mongo_dir)
         os.mkdir(new_mongo_dir)
+        new_mongo_dir_just_created = True
         # copy permissions from current data_directory_path
         os.chown(new_mongo_dir, curr_dir_stat.st_uid, curr_dir_stat.st_gid)
         os.chmod(new_mongo_dir, curr_dir_stat.st_mode)
@@ -1595,7 +1598,7 @@
     if not stop_hook():
         juju_log("stop_hook() failed - can't migrate data.")
         return False
-    if not os.path.exists(new_mongo_dir):
+    if not os.path.exists(new_mongo_dir) or new_mongo_dir_just_created:
         juju_log("migrating mongo data {}/ -> {}/".format(
             data_directory_path, new_mongo_dir))
         # void copying PID file to perm storage (shouldn't be any...)

=== added file 'tests/04_deploy_with_storage.py'
--- tests/04_deploy_with_storage.py 1970-01-01 00:00:00 +0000
+++ tests/04_deploy_with_storage.py 2015-06-02 05:33:28 +0000
@@ -0,0 +1,87 @@
+#!/usr/bin/env python3
+
+import amulet
+import time
+from pymongo import MongoClient
+from collections import Counter
+
+
+#########################################################
+# Test Quick Config
+#########################################################
+scale = 2
+seconds = 1800
+
+# amount of time to wait before testing for replicaset
+# status
+wait_for_replicaset = 30
+# amount of time to wait for the data relation
+wait_for_relation = 60*2
+
+#########################################################
+# 3shard cluster configuration
+#########################################################
+d = amulet.Deployment(series='trusty')
+
+d.add('mongodb', units=scale, series='trusty')
+d.add('storage', charm='cs:~chris-gondolin/trusty/storage-5', series='trusty')
+d.configure('storage', {'provider': 'local'})
+
+d.expose('mongodb')
+
+# Perform the setup for the deployment.
+try:
+    d.setup(seconds)
+    d.sentry.wait(seconds)
+except amulet.helpers.TimeoutError:
+    message = 'The environment did not setup in %d seconds.', seconds
+    amulet.raise_status(amulet.SKIP, msg=message)
+except:
+    raise
+
+sentry_dict = {
+    'mongodb0-sentry': d.sentry.unit['mongodb/0'],
+    'mongodb1-sentry': d.sentry.unit['mongodb/1'],
+}
+
+
+#############################################################
+# Check agent status
+#############################################################
+def validate_status():
+    d.sentry.wait_for_status(d.juju_env, ['mongodb'])
+
+
+#############################################################
+# Validate proper replicaset setup
+#############################################################
+def validate_replicaset_setup():
+
+    time.sleep(wait_for_replicaset)
+
+    unit_status = []
+
+    for service in sentry_dict:
+        client = MongoClient(sentry_dict[service].info['public-address'])
+        r = client.admin.command('replSetGetStatus')
+        unit_status.append(r['myState'])
+        client.close()
+
+    primaries = Counter(unit_status)[1]
+    if primaries != 1:
+        message = "Only one PRIMARY unit allowed! Found: %s" % (primaries)
+        amulet.raise_status(amulet.FAIL, message)
+
+    secondrs = Counter(unit_status)[2]
+    if secondrs != 1:
+        message = "Only one SECONDARY unit allowed! (Found %s)" % (secondrs)
+        amulet.raise_status(amulet.FAIL, message)
+
+
+validate_status()
+validate_replicaset_setup()
+print("Adding storage relation, and sleeping for 2 min.")
+d.relate('mongodb:data', 'storage:data')
+time.sleep(wait_for_relation)
+validate_status()
+validate_replicaset_setup()
