Merge lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration into lp:charms/trusty/mongodb

Proposed by Guillermo Gonzalez
Status: Merged
Merged at revision: 72
Proposed branch: lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration
Merge into: lp:charms/trusty/mongodb
Diff against target: 126 lines (+92/-2)
2 files modified
hooks/hooks.py (+5/-2)
tests/04_deploy_with_storage.py (+87/-0)
To merge this branch: bzr merge lp:~verterok/charms/trusty/mongodb/fix-data_directory-migration
Reviewer Review Type Date Requested Status
Tim Van Steenburgh (community) Approve
Adam Israel (community) Needs Fixing
Review via email: mp+260290@code.launchpad.net

Commit message

Add a flag to signal when new_mongo_dir was just created, and migrate the existing data directory.

Description of the change

Add a flag to signal when new_mongo_dir was just created, and migrate the existing data directory.

The current code in trunk will never migrate the data into new_mongo_dir, as the conditional skips that code path.

In order to reproduce this, you need a non-local provider. I used this juju-deployer config: http://paste.ubuntu.com/11392167/
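
To make the failure mode concrete, here is a minimal sketch of the flawed control flow, condensed from config_changed_volume_apply (the wrapper function and the print are illustrative, not the charm's actual code):

    import os

    def apply_volume_change(data_directory_path, new_mongo_dir):
        new_mongo_dir_just_created = False
        if not os.path.isdir(new_mongo_dir):
            os.mkdir(new_mongo_dir)  # the directory exists from here on
            new_mongo_dir_just_created = True

        # Trunk's guard: never true, because the mkdir above just ran,
        # so the migration branch is always skipped.
        if not os.path.exists(new_mongo_dir):
            pass  # migration code path

        # Fixed guard: also migrate when the directory was just created.
        if not os.path.exists(new_mongo_dir) or new_mongo_dir_just_created:
            print("migrating mongo data {}/ -> {}/".format(
                data_directory_path, new_mongo_dir))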

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4844 mongodb for verterok mp260290
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/4844/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4524 mongodb for verterok mp260290
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4524/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4335 mongodb for verterok mp260290
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4335/

Revision history for this message
Adam Israel (aisrael) wrote :

Hi Guillermo,

Thanks for your work on improving the mongodb charm! I had a chance to review this merge proposal today and wanted to provide you with some feedback.

The changes you've made look straightforward, and I suspect they work as intended, but I'd prefer to see a simple amulet test added to verify that the migration works as expected. With that in place, I'd have no problem giving this my approval.

Thanks!

review: Needs Fixing
Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

Thanks for the prompt reply.
I tried to write an amulet test for this, but I can't find a way to test it without building a really complex test case, e.g. adding two more charms (storage and block-storage-provider) plus handling credentials for the cloud that's used to run the tests.

I also tried to use the local provider (for the storage charm), but found a new bug in the mongodb charm: it fails when the provider is local (just a directory), as it tries to find a block device. That's probably a mix of old volume-map support code and the data relation handling.

Is there a simpler way to test this with amulet?

As an alternative, I could try to write a unit test for the fix in the config_changed_volume_apply function.

Cheers
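
For reference, a unit test along those lines could exercise just the guard condition in isolation. The sketch below assumes a hypothetical should_migrate() extraction of that conditional; nothing with that name exists in hooks.py:

    import os
    import tempfile
    import unittest


    def should_migrate(new_mongo_dir, just_created):
        # Mirrors the fixed guard in config_changed_volume_apply.
        return not os.path.exists(new_mongo_dir) or just_created


    class MigrationGuardTest(unittest.TestCase):

        def test_migrates_into_freshly_created_dir(self):
            with tempfile.TemporaryDirectory() as new_mongo_dir:
                # Trunk's guard (no flag) is False here; that is the bug.
                self.assertTrue(should_migrate(new_mongo_dir, just_created=True))

        def test_skips_existing_dir(self):
            with tempfile.TemporaryDirectory() as new_mongo_dir:
                self.assertFalse(should_migrate(new_mongo_dir, just_created=False))


    if __name__ == '__main__':
        unittest.main()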

72. By Guillermo Gonzalez

fix local storage support

73. By Guillermo Gonzalez

add amulet test

Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

Fixed the local provider issue and added the amulet test.

Cheers

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #4622 mongodb for verterok mp260290
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/4622/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #4942 mongodb for verterok mp260290
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/4942/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #4393 mongodb for verterok mp260290
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/4393/

Revision history for this message
Adam Israel (aisrael) wrote :

Hey Guillermo,

I'm running into an error while testing against both the local and amazon providers. It looks like this might be the root cause:

ERROR: volume_init_and_mount: coult not find an unused device for regexp: /dev/vd[b-z]

Any ideas?

$ ./04_deploy_with_storage.py
2015-06-04 17:25:37 Starting deployment of amazon
2015-06-04 17:25:39 Deploying services...
2015-06-04 17:25:41 Deploying service mongodb using cs:trusty/mongodb-21
2015-06-04 17:26:00 Deploying service storage using cs:~chris-gondolin/trusty/storage-5
2015-06-04 17:26:19 Config specifies num units for subordinate: storage
2015-06-04 17:31:42 Adding relations...
2015-06-04 17:31:43 Exposing service 'mongodb'
2015-06-04 17:31:43 Deployment complete in 366.42 seconds
Adding storage relation, and sleeping for 2 min.
Traceback (most recent call last):
  File "./04_deploy_with_storage.py", line 86, in <module>
    validate_status()
  File "./04_deploy_with_storage.py", line 52, in validate_status
    d.sentry.wait_for_status(d.juju_env, ['mongodb'])
  File "/usr/lib/python3/dist-packages/amulet/sentry.py", line 260, in wait_for_status
    unit, unit_dict.get('agent-state-info')))
Exception: Error on unit mongodb/0: hook failed: "data-relation-changed" for storage:data

(venv)vagrant@vagrant-ubuntu-trusty-64:/charms/trusty/mongodb/tests$ juju pprint
- mongodb/0: 52.1.122.247 (error) 27017/tcp, 27019/tcp, 27021/tcp, 28017/tcp
  - storage/0: 52.1.122.247 (started)
- mongodb/1: 54.152.170.105 (error) 27017/tcp, 27019/tcp, 27021/tcp, 28017/tcp
  - storage/1: 54.152.170.105 (started)

Here are pastebins of the logs:

mongodb/0:
http://paste.ubuntu.com/11571075/

storage/0:
http://paste.ubuntu.com/11571095/

mongodb/1:
http://paste.ubuntu.com/11571093/

storage/1:
http://paste.ubuntu.com/11571098/

review: Needs Fixing
Revision history for this message
Guillermo Gonzalez (verterok) wrote :

Hi Adam,

It looks like it's using the current trunk of the charm instead of the local one.
That's the data-relation-changed bug I had to fix in order to use the storage subordinate with the local provider in the amulet test.

Please check whether the charm code on the units matches the branch or trunk.

Thanks.

Revision history for this message
Tim Van Steenburgh (tvansteenburgh) wrote :

I'm +1 on this as it passes the OSCI tests.

review: Approve

Preview Diff

=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2015-04-22 13:24:09 +0000
+++ hooks/hooks.py 2015-06-02 05:33:28 +0000
@@ -1547,7 +1547,8 @@
     assert(data_directory_path)
     volid = volume_get_volume_id()
     if volid:
-        if volume_is_permanent(volid):
+        volid_from_subordinate = volume_get_id_for_storage_subordinate()
+        if volume_is_permanent(volid) and not volid_from_subordinate:
             if not volume_init_and_mount(volid):
                 juju_log(
                     "volume_init_and_mount failed, not applying changes")
@@ -1581,9 +1582,11 @@
 
         # Create a directory structure below "new" mount_point
         curr_dir_stat = os.stat(data_directory_path)
+        new_mongo_dir_just_created = False
         if not os.path.isdir(new_mongo_dir):
             juju_log("mkdir %s" % new_mongo_dir)
             os.mkdir(new_mongo_dir)
+            new_mongo_dir_just_created = True
         # copy permissions from current data_directory_path
         os.chown(new_mongo_dir, curr_dir_stat.st_uid, curr_dir_stat.st_gid)
         os.chmod(new_mongo_dir, curr_dir_stat.st_mode)
@@ -1595,7 +1598,7 @@
         if not stop_hook():
             juju_log("stop_hook() failed - can't migrate data.")
             return False
-        if not os.path.exists(new_mongo_dir):
+        if not os.path.exists(new_mongo_dir) or new_mongo_dir_just_created:
             juju_log("migrating mongo data {}/ -> {}/".format(
                 data_directory_path, new_mongo_dir))
             # void copying PID file to perm storage (shouldn't be any...)
 
=== added file 'tests/04_deploy_with_storage.py'
--- tests/04_deploy_with_storage.py 1970-01-01 00:00:00 +0000
+++ tests/04_deploy_with_storage.py 2015-06-02 05:33:28 +0000
@@ -0,0 +1,87 @@
+#!/usr/bin/env python3
+
+import amulet
+import time
+from pymongo import MongoClient
+from collections import Counter
+
+
+#########################################################
+# Test Quick Config
+#########################################################
+scale = 2
+seconds = 1800
+
+# amount of time to wait before testing for replicaset
+# status
+wait_for_replicaset = 30
+# amount of time to wait for the data relation
+wait_for_relation = 60*2
+
+#########################################################
+# 3shard cluster configuration
+#########################################################
+d = amulet.Deployment(series='trusty')
+
+d.add('mongodb', units=scale, series='trusty')
+d.add('storage', charm='cs:~chris-gondolin/trusty/storage-5', series='trusty')
+d.configure('storage', {'provider': 'local'})
+
+d.expose('mongodb')
+
+# Perform the setup for the deployment.
+try:
+    d.setup(seconds)
+    d.sentry.wait(seconds)
+except amulet.helpers.TimeoutError:
+    message = 'The environment did not setup in %d seconds.', seconds
+    amulet.raise_status(amulet.SKIP, msg=message)
+except:
+    raise
+
+sentry_dict = {
+    'mongodb0-sentry': d.sentry.unit['mongodb/0'],
+    'mongodb1-sentry': d.sentry.unit['mongodb/1'],
+}
+
+
+#############################################################
+# Check agent status
+#############################################################
+def validate_status():
+    d.sentry.wait_for_status(d.juju_env, ['mongodb'])
+
+
+#############################################################
+# Validate proper replicaset setup
+#############################################################
+def validate_replicaset_setup():
+
+    time.sleep(wait_for_replicaset)
+
+    unit_status = []
+
+    for service in sentry_dict:
+        client = MongoClient(sentry_dict[service].info['public-address'])
+        r = client.admin.command('replSetGetStatus')
+        unit_status.append(r['myState'])
+        client.close()
+
+    primaries = Counter(unit_status)[1]
+    if primaries != 1:
+        message = "Only one PRIMARY unit allowed! Found: %s" % (primaries)
+        amulet.raise_status(amulet.FAIL, message)
+
+    secondrs = Counter(unit_status)[2]
+    if secondrs != 1:
+        message = "Only one SECONDARY unit allowed! (Found %s)" % (secondrs)
+        amulet.raise_status(amulet.FAIL, message)
+
+
+validate_status()
+validate_replicaset_setup()
+print("Adding storage relation, and sleeping for 2 min.")
+d.relate('mongodb:data', 'storage:data')
+time.sleep(wait_for_relation)
+validate_status()
+validate_replicaset_setup()
