Merge lp:~vishvananda/nova/no-db-messaging into lp:~hudson-openstack/nova/trunk

Proposed by Vish Ishaya
Status: Work in progress
Proposed branch: lp:~vishvananda/nova/no-db-messaging
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~termie/nova/rpc_multicall
Diff against target: 658 lines (+220/-143)
9 files modified
nova/context.py (+3/-3)
nova/db/sqlalchemy/api.py (+1/-1)
nova/db/sqlalchemy/models.py (+18/-5)
nova/scheduler/manager.py (+7/-5)
nova/tests/scheduler/test_scheduler.py (+32/-27)
nova/tests/test_volume.py (+34/-18)
nova/utils.py (+14/-1)
nova/volume/api.py (+58/-23)
nova/volume/manager.py (+53/-60)
To merge this branch: bzr merge lp:~vishvananda/nova/no-db-messaging
Reviewer Review Type Date Requested Status
Matt Dietz (community) Needs Information
Rick Harris (community) Needs Fixing
Dan Prince (community) Abstain
Review via email: mp+61687@code.launchpad.net

Description of the change

This is an initial proposal for feedback. This branch is an attempt to start on this blueprint:
https://blueprints.launchpad.net/nova/+spec/no-db-messaging
which will allow for the implementation of this blueprint:
https://blueprints.launchpad.net/nova/+spec/separate-code-for-services
And will ultimately make it easy to replace our various services with external projects.

This prototype changes volume_create and volume_delete to pass data through the queue instead of writing information to the database and reading it on the other end. It attempts to make minimal changes. It includes:
 * a small change to model code to allow for conversion into dicts
 * the scheduler uses multicall instead of cast
 * volume.api's create and delete use multicall to communicate with volume.manager
 * volume.manager returns updates instead of modifying the database directly
A rough sketch of this pattern appears just below.
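
As a toy illustration of that pattern (not nova code; manager_create_volume and api_create_volume are invented for the example): the manager becomes a generator that yields successive updates instead of writing them to the database itself, and the api side applies each update as it arrives.

    def manager_create_volume(volume_ref):
        """Worker side: a generator that yields updates over the 'queue'."""
        volume_ref['host'] = 'volume-host-1'
        yield dict(volume_ref)                # update 1: host assignment
        volume_ref['status'] = 'available'
        yield dict(volume_ref)                # update 2: volume is ready

    def api_create_volume(volume_ref, db):
        """API side: stands in for iterating rpc.multicall results."""
        for update in manager_create_volume(volume_ref):
            db[update['id']] = update         # stands in for db.volume_update()

    db = {}
    api_create_volume({'id': 1, 'status': 'creating'}, db)
    print db[1]['status']                     # prints: available

On the real branch the api side iterates rpc.multicall results and applies each one with db.volume_update, as the diff below shows.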

Please note that this is an initial proposal. It is based on some changes made by termie that allow the driver to return multiple times for a single call. Creating volumes works, but I haven't yet modified the tests to pass.

To Do:
 * change the driver code so it doesn't store its extra data about volumes in the global db.
 * fix the tests
 * modify attach and detach to use the same methodology

I'm open to feedback about this approach. I tried a few other versions and this seems like the simplest change set to get what we want. If this looks good, I will modify the other volume commands to work the same way and propose a similar set of changes for compute.

To post a comment you must log in.
Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Vish,

I'm not able to boot any instances via the OS API w/ this branch. Looks to be related to the set_admin_password functionality which now polls for the compute host to be assigned:

(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 501, in _set_admin_password
(nova.api.openstack): TRACE: host = self._find_host(context, instance_id)
(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 497, in _find_host
(nova.api.openstack): TRACE: % instance_id)
(nova.api.openstack): TRACE: Error: Unable to find host for Instance 1
(nova.api.openstack): TRACE:

review: Needs Fixing
Revision history for this message
Vish Ishaya (vishvananda) wrote :

Thanks Dan,

Turns out it was an error where, if you cast to a generator (multicall uses generators), the generator wouldn't actually execute on the other end. I think casting should work now.
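
For illustration, the Python behavior behind that bug in a few lines: calling a generator function only creates the generator object; nothing in its body runs until something iterates it.

    def do_work():
        print "working"      # never printed by the bare call below
        yield "done"

    gen = do_work()          # a plain call builds the generator; no body runs
    list(gen)                # iteration is what actually executes the body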

Vish

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The prerequisite lp:~termie/nova/rpc_multicall has not yet been merged into lp:nova.

Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Vish,

So your latest fix resolves the RPC issues. Thanks!

I'm still hitting an issue with things like instance metadata (a.k.a. model attributes that aren't columns).

The following change to NovaBase.update resolves it for me.

            if key == 'name' and key in columns:
                setattr(self, key, value)
            else:
                setattr(self, key, value)

review: Needs Fixing
Revision history for this message
Dan Prince (dan-prince) wrote :

Oops. This one should work:

            if key == 'name' and key in columns:
                setattr(self, key, value)
            elif key != 'name':
                setattr(self, key, value)

Revision history for this message
Mark Washenberger (markwash) wrote :

I'm having a little trouble grasping the multicall approach as shown here so I might be a bit confused.

It looks like the way this works is: when we schedule a volume to be created, we create an eventlet-based volume listener that updates the database as it hears back from the volume manager.

Maybe I'm mistaken about eventlet, but this implies that if for some reason the process terminates before the multicall has finished its last return, then the database won't be updated. Then, when the process is restarted, there will be no calling context remaining to handle the updates.

Would it be a more robust approach to create a permanently running VolumeListener that handles all volume update events?
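
(For illustration only, a persistent listener along those lines might look roughly like the sketch below; VolumeListener is hypothetical, not code from this branch.)

    # Hypothetical sketch only; nothing like this exists in the branch yet.
    # A long-lived worker owns volume-update handling, so updates are applied
    # even if the original API process that issued the call has restarted.

    class VolumeListener(object):
        def __init__(self, db):
            self.db = db

        def handle_update(self, context, volume_id, update):
            """Apply one update event published by a volume manager."""
            self.db.volume_update(context, volume_id, update)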

Thanks!

lp:~vishvananda/nova/no-db-messaging updated
1113. By Vish Ishaya

merged rpc_multicall

1114. By Vish Ishaya

keep the database on the receiving end as well

1115. By Vish Ishaya

fix tests

1116. By Vish Ishaya

use strtime for passing datetimes back and forth through the queue

1117. By Vish Ishaya

lost some changes from rpc branch, bring them in manually

1118. By Vish Ishaya

merged trunk and removed conflicts

1119. By Vish Ishaya

make sure to handle VolumeIsBusy

1120. By Vish Ishaya

return not yield in scheduler shortcut

1121. By Vish Ishaya

fix snapshot test

Revision history for this message
Dan Prince (dan-prince) wrote :

Sorry for holding this up. I'm no multicall expert but I should probably remove my 'needs fixing' at least so more people check it out. The latest fix resolves my issue with model attributes that aren't columns.

review: Abstain
Revision history for this message
Rick Harris (rconradharris) wrote :

Ran into a test failure:

======================================================================
FAIL: test_create_and_delete_volume (nova.tests.integrated.test_volumes.VolumesTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rick/openstack/nova/no-db-messaging/nova/tests/integrated/test_volumes.py", line 122, in test_create_and_delete_volume
    self.assertEquals(1, len(create_actions))
AssertionError: 1 != 2
-------------------- >> begin captured logging << --------------------

Other than that, I think this looks good.

review: Needs Fixing
Revision history for this message
Dan Prince (dan-prince) wrote :

Couple of conflicts now w/ a trunk merge too:

Text conflict in nova/volume/api.py
Text conflict in nova/volume/manager.py
2 conflicts encountered.

Revision history for this message
Matt Dietz (cerberus) wrote :

I really like this approach, but I have the same reservations that Mark does.

We could go with the Listener approach, but I will say that I've seen issues with bottlenecking in our current architecture trying to do something very similar.

It seems like this would make updating a running nova installation prohibitively difficult. If there were a way to recall the calling context/msg_id even after a worker bounce, I'd feel a lot better.
We may need some kind of state-dump mechanism for the workers. Perhaps we could pickle some pertinent data when a worker receives a HUP/KILL/whatever.
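
(A purely hypothetical sketch of that state-dump idea; the PENDING table and dump path are invented for the example.)

    # Hypothetical sketch only: pickle in-flight call state on shutdown so
    # the calling context/msg_id could be recovered after a worker bounce.
    import pickle
    import signal

    PENDING = {}   # hypothetical: msg_id -> calling context for in-flight calls


    def dump_state(signum, frame):
        with open('/tmp/worker_state.pkl', 'wb') as f:
            pickle.dump(PENDING, f)

    # SIGKILL cannot be caught, so SIGTERM stands in for "KILL/whatever".
    signal.signal(signal.SIGHUP, dump_state)
    signal.signal(signal.SIGTERM, dump_state)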

review: Needs Information
lp:~vishvananda/nova/no-db-messaging updated
1122. By Vish Ishaya

remove merge error calling failing test

1123. By Vish Ishaya

merged trunk

Unmerged revisions

1123. By Vish Ishaya

merged trunk

1122. By Vish Ishaya

remove merge error calling failing test

1121. By Vish Ishaya

fix snapshot test

1120. By Vish Ishaya

return not yield in scheduler shortcut

1119. By Vish Ishaya

make sure to handle VolumeIsBusy

1118. By Vish Ishaya

merged trunk and removed conflicts

1117. By Vish Ishaya

lost some changes from rpc branch, bring them in manually

1116. By Vish Ishaya

use strtime for passing datetimes back and forth through the queue

1115. By Vish Ishaya

fix tests

1114. By Vish Ishaya

keep the database on the receiving end as well

Preview Diff

=== modified file 'nova/context.py'
--- nova/context.py 2011-06-02 21:23:05 +0000
+++ nova/context.py 2011-06-22 17:01:37 +0000
@@ -56,8 +56,8 @@
         self.remote_address = remote_address
         if not timestamp:
             timestamp = utils.utcnow()
-        if isinstance(timestamp, str) or isinstance(timestamp, unicode):
-            timestamp = utils.parse_isotime(timestamp)
+        if isinstance(timestamp, basestring):
+            timestamp = utils.parse_strtime(timestamp)
         self.timestamp = timestamp
         if not request_id:
             chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-'
@@ -95,7 +95,7 @@
                 'is_admin': self.is_admin,
                 'read_deleted': self.read_deleted,
                 'remote_address': self.remote_address,
-                'timestamp': utils.isotime(self.timestamp),
+                'timestamp': utils.strtime(self.timestamp),
                 'request_id': self.request_id}

     @classmethod

=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-06-20 20:55:16 +0000
+++ nova/db/sqlalchemy/api.py 2011-06-22 17:01:37 +0000
@@ -1689,7 +1689,7 @@
     return (result[0] or 0, result[1] or 0)


-@require_admin_context
+@require_context
 def volume_destroy(context, volume_id):
     session = get_session()
     with session.begin():

=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-06-20 20:55:16 +0000
+++ nova/db/sqlalchemy/models.py 2011-06-22 17:01:37 +0000
@@ -25,6 +25,7 @@
 from sqlalchemy.exc import IntegrityError
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import ForeignKeyConstraint
+from sqlalchemy.types import DateTime as DTType

 from nova.db.sqlalchemy.session import get_session

@@ -77,17 +78,29 @@
         return getattr(self, key, default)

     def __iter__(self):
-        self._i = iter(object_mapper(self).columns)
+        # NOTE(vish): include name property in the iterator
+        columns = dict(object_mapper(self).columns).keys()
+        name = self.get('name')
+        if name:
+            columns.append('name')
+        self._i = iter(columns)
         return self

     def next(self):
-        n = self._i.next().name
+        n = self._i.next()
         return n, getattr(self, n)

     def update(self, values):
-        """Make the model object behave like a dict"""
-        for k, v in values.iteritems():
-            setattr(self, k, v)
+        """Make the model object behave like a dict and convert datetimes."""
+        columns = object_mapper(self).columns
+        for key, value in values.iteritems():
+            # NOTE(vish): don't update the 'name' property
+            if key != 'name' or key in columns:
+                if (key in columns and
+                    isinstance(value, basestring) and
+                    isinstance(columns[key].type, DTType)):
+                    value = utils.parse_strtime(value)
+                setattr(self, key, value)

     def iteritems(self):
         """Make the model object behave like a dict.

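
The effect of the models.py changes above, in miniature: a model can be flattened to a plain dict on one side of the queue and merged back with update() on the other. A toy version with the SQLAlchemy specifics stripped out (FakeModel is invented for the example; the real code also converts datetime strings back via utils.parse_strtime):

    class FakeModel(object):
        columns = ['id', 'status']        # stands in for object_mapper columns

        def __init__(self, **kw):
            for k, v in kw.iteritems():
                setattr(self, k, v)

        def iteritems(self):
            return ((k, getattr(self, k)) for k in self.columns)

        def update(self, values):
            for key, value in values.iteritems():
                if key in self.columns:   # skip non-column keys like 'name'
                    setattr(self, key, value)

    sent = dict(FakeModel(id=1, status='creating').iteritems())  # over the queue
    received = FakeModel(id=1, status='unknown')
    received.update(sent)
    print received.status                 # prints: creating
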
=== modified file 'nova/scheduler/manager.py'
--- nova/scheduler/manager.py 2011-06-09 23:16:55 +0000
+++ nova/scheduler/manager.py 2011-06-22 17:01:37 +0000
@@ -98,11 +98,13 @@
                       % locals())
             return

-        rpc.cast(context,
-                 db.queue_get_for(context, topic, host),
-                 {"method": method,
-                  "args": kwargs})
-        LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals())
+        LOG.debug(_("Multicall %(topic)s %(host)s for %(method)s") % locals())
+        rvs = rpc.multicall(context,
+                            db.queue_get_for(context, topic, host),
+                            {"method": method,
+                             "args": kwargs})
+        for rv in rvs:
+            yield rv

     # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
     #                    Based on bexar design summit discussion,

=== modified file 'nova/tests/scheduler/test_scheduler.py'
--- nova/tests/scheduler/test_scheduler.py 2011-06-17 23:53:30 +0000
+++ nova/tests/scheduler/test_scheduler.py 2011-06-22 17:01:37 +0000
@@ -98,7 +98,7 @@

     def test_fallback(self):
         scheduler = manager.SchedulerManager()
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
         ctxt = context.get_admin_context()
         rpc.cast(ctxt,
                  'topic.fallback_host',
@@ -109,7 +109,7 @@

     def test_named_method(self):
         scheduler = manager.SchedulerManager()
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
         ctxt = context.get_admin_context()
         rpc.cast(ctxt,
                  'topic.named_host',
@@ -225,17 +225,17 @@
         self.mox.StubOutWithMock(db, 'service_get_all_by_topic')
         arg = IgnoreArg()
         db.service_get_all_by_topic(arg, arg).AndReturn(service_list)
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
-        rpc.cast(ctxt,
-                 'compute.host1',
-                 {'method': 'run_instance',
-                  'args': {'instance_id': 'i-ffffffff',
-                           'availability_zone': 'zone1'}})
+        self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
+        rpc.multicall(ctxt,
+                      'compute.host1',
+                      {'method': 'run_instance',
+                       'args': {'instance_id': 'i-ffffffff',
+                                'availability_zone': 'zone1'}}).AndReturn([])
         self.mox.ReplayAll()
-        scheduler.run_instance(ctxt,
-                               'compute',
-                               instance_id='i-ffffffff',
-                               availability_zone='zone1')
+        list(scheduler.run_instance(ctxt,
+                                    'compute',
+                                    instance_id='i-ffffffff',
+                                    availability_zone='zone1'))


 class SimpleDriverTestCase(test.TestCase):
@@ -601,12 +601,13 @@
         volume1 = self.start_service('volume', host='host1')
         volume2 = self.start_service('volume', host='host2')
         volume_id1 = self._create_volume()
-        volume1.create_volume(self.context, volume_id1)
+        host1 = self.scheduler.driver.schedule_create_volume(self.context,
+                                                             volume_id1)
         volume_id2 = self._create_volume()
-        host = self.scheduler.driver.schedule_create_volume(self.context,
-                                                            volume_id2)
-        self.assertEqual(host, 'host2')
-        volume1.delete_volume(self.context, volume_id1)
+        host2 = self.scheduler.driver.schedule_create_volume(self.context,
+                                                             volume_id2)
+        self.assertNotEqual(host1, host2)
+        db.volume_destroy(self.context, volume_id1)
         db.volume_destroy(self.context, volume_id2)
         volume1.kill()
         volume2.kill()
@@ -619,10 +620,12 @@
         volume_ids2 = []
         for index in xrange(FLAGS.max_gigabytes):
             volume_id = self._create_volume()
-            volume1.create_volume(self.context, volume_id)
+            self.scheduler.driver.schedule_create_volume(self.context,
+                                                         volume_id)
             volume_ids1.append(volume_id)
             volume_id = self._create_volume()
-            volume2.create_volume(self.context, volume_id)
+            self.scheduler.driver.schedule_create_volume(self.context,
+                                                         volume_id)
             volume_ids2.append(volume_id)
         volume_id = self._create_volume()
         self.assertRaises(driver.NoValidHost,
@@ -658,16 +661,18 @@
         driver_i._live_migration_src_check(nocare, nocare)
         driver_i._live_migration_dest_check(nocare, nocare, i_ref['host'])
         driver_i._live_migration_common_check(nocare, nocare, i_ref['host'])
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
         kwargs = {'instance_id': instance_id, 'dest': i_ref['host']}
-        rpc.cast(self.context,
-                 db.queue_get_for(nocare, FLAGS.compute_topic, i_ref['host']),
-                 {"method": 'live_migration', "args": kwargs})
-
+        rpc.multicall(self.context,
+                      db.queue_get_for(nocare,
+                                       FLAGS.compute_topic,
+                                       i_ref['host']),
+                      {"method": 'live_migration',
+                       "args": kwargs}).AndReturn([])
         self.mox.ReplayAll()
-        self.scheduler.live_migration(self.context, FLAGS.compute_topic,
-                                      instance_id=instance_id,
-                                      dest=i_ref['host'])
+        list(self.scheduler.live_migration(self.context, FLAGS.compute_topic,
+                                           instance_id=instance_id,
+                                           dest=i_ref['host']))

         i_ref = db.instance_get(self.context, instance_id)
         self.assertTrue(i_ref['state_description'] == 'migrating')

=== modified file 'nova/tests/test_volume.py'
--- nova/tests/test_volume.py 2011-05-27 05:13:17 +0000
+++ nova/tests/test_volume.py 2011-06-22 17:01:37 +0000
@@ -57,14 +57,24 @@
         vol['attach_status'] = "detached"
         return db.volume_create(context.get_admin_context(), vol)['id']

+    def _id_create_volume(self, context, volume_id):
+        """Version of create volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.create_volume(context, volume_ref))[-1]['id']
+
+    def _id_delete_volume(self, context, volume_id):
+        """Version of delete volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.delete_volume(context, volume_ref))[-1]
+
     def test_create_delete_volume(self):
         """Test volume can be created and deleted."""
         volume_id = self._create_volume()
-        self.volume.create_volume(self.context, volume_id)
+        self._id_create_volume(self.context, volume_id)
         self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),
                                                   volume_id).id)

-        self.volume.delete_volume(self.context, volume_id)
+        self._id_delete_volume(self.context, volume_id)
         self.assertRaises(exception.NotFound,
                           db.volume_get,
                           self.context,
@@ -77,7 +87,7 @@
         snapshot_id = self._create_snapshot(volume_src_id)
         self.volume.create_snapshot(self.context, volume_src_id, snapshot_id)
         volume_dst_id = self._create_volume(0, snapshot_id)
-        self.volume.create_volume(self.context, volume_dst_id, snapshot_id)
+        self._id_create_volume(self.context, volume_dst_id)
         self.assertEqual(volume_dst_id, db.volume_get(
                 context.get_admin_context(),
                 volume_dst_id).id)
@@ -96,7 +106,7 @@
             return True
         try:
             volume_id = self._create_volume('1001')
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)
             self.fail("Should have thrown TypeError")
         except TypeError:
             pass
@@ -107,16 +117,16 @@
         total_slots = FLAGS.iscsi_num_targets
         for _index in xrange(total_slots):
             volume_id = self._create_volume()
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)
             vols.append(volume_id)
         volume_id = self._create_volume()
         self.assertRaises(db.NoMoreTargets,
-                          self.volume.create_volume,
+                          self._id_create_volume,
                           self.context,
                           volume_id)
         db.volume_destroy(context.get_admin_context(), volume_id)
         for volume_id in vols:
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)

     def test_run_attach_detach_volume(self):
         """Make sure volume can be attached and detached from instance."""
@@ -132,7 +142,7 @@
         instance_id = db.instance_create(self.context, inst)['id']
         mountpoint = "/dev/sdf"
         volume_id = self._create_volume()
-        self.volume.create_volume(self.context, volume_id)
+        self._id_create_volume(self.context, volume_id)
         if FLAGS.fake_tests:
             db.volume_attached(self.context, volume_id, instance_id,
                                mountpoint)
@@ -148,10 +158,6 @@
         instance_ref = db.volume_get_instance(self.context, volume_id)
         self.assertEqual(instance_ref['id'], instance_id)

-        self.assertRaises(exception.Error,
-                          self.volume.delete_volume,
-                          self.context,
-                          volume_id)
         if FLAGS.fake_tests:
             db.volume_detached(self.context, volume_id)
         else:
@@ -161,7 +167,7 @@
         vol = db.volume_get(self.context, volume_id)
         self.assertEqual(vol['status'], "available")

-        self.volume.delete_volume(self.context, volume_id)
+        self._id_delete_volume(self.context, volume_id)
         self.assertRaises(exception.VolumeNotFound,
                           db.volume_get,
                           self.context,
@@ -185,10 +191,10 @@
         total_slots = FLAGS.iscsi_num_targets
         for _index in xrange(total_slots):
             volume_id = self._create_volume()
-            d = self.volume.create_volume(self.context, volume_id)
+            d = self._id_create_volume(self.context, volume_id)
             _check(d)
         for volume_id in volume_ids:
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)

     def test_multi_node(self):
         # TODO(termie): Figure out how to test with two nodes,
@@ -253,6 +259,16 @@
     def tearDown(self):
         super(DriverTestCase, self).tearDown()

+    def _id_create_volume(self, context, volume_id):
+        """Version of create volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.create_volume(context, volume_ref))[-1]['id']
+
+    def _id_delete_volume(self, context, volume_id):
+        """Version of delete volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.delete_volume(context, volume_ref))[-1]
+
     def _attach_volume(self):
         """Attach volumes to an instance. This function also sets
         a fake log message."""
@@ -262,7 +278,7 @@
         """Detach volumes from an instance."""
         for volume_id in volume_id_list:
             db.volume_detached(self.context, volume_id)
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)


 class AOETestCase(DriverTestCase):
@@ -284,7 +300,7 @@
             vol['size'] = 0
             volume_id = db.volume_create(self.context,
                                          vol)['id']
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)

             # each volume has a different mountpoint
             mountpoint = "/dev/sd" + chr((ord('b') + index))
@@ -360,7 +376,7 @@
         vol = {}
         vol['size'] = 0
         vol_ref = db.volume_create(self.context, vol)
-        self.volume.create_volume(self.context, vol_ref['id'])
+        self._id_create_volume(self.context, vol_ref['id'])
         vol_ref = db.volume_get(self.context, vol_ref['id'])

         # each volume has a different mountpoint

=== modified file 'nova/utils.py'
--- nova/utils.py 2011-06-18 00:12:44 +0000
+++ nova/utils.py 2011-06-22 17:01:37 +0000
@@ -50,6 +50,7 @@

 LOG = logging.getLogger("nova.utils")
 TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
+PERFECT_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"
 FLAGS = flags.FLAGS


@@ -362,6 +363,18 @@
     return datetime.datetime.strptime(timestr, TIME_FORMAT)


+def strtime(at=None):
+    """Returns iso formatted utcnow."""
+    if not at:
+        at = utcnow()
+    return at.strftime(PERFECT_TIME_FORMAT)
+
+
+def parse_strtime(timestr):
+    """Turn an iso formatted time back into a datetime."""
+    return datetime.datetime.strptime(timestr, PERFECT_TIME_FORMAT)
+
+
 def parse_mailmap(mailmap='.mailmap'):
     mapping = {}
     if os.path.exists(mailmap):
@@ -505,7 +518,7 @@
             o[k] = to_primitive(v)
         return o
     elif isinstance(value, datetime.datetime):
-        return str(value)
+        return strtime(value)
     elif hasattr(value, 'iteritems'):
         return to_primitive(dict(value.iteritems()))
     elif hasattr(value, '__iter__'):

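
The motivation for PERFECT_TIME_FORMAT: to_primitive() previously serialized datetimes with str(), which produces a form like '2011-06-22 17:01:37.123456' that parse_isotime()'s TIME_FORMAT cannot parse back, and TIME_FORMAT itself drops microseconds. A quick check that the new pair round-trips exactly:

    import datetime

    PERFECT_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"

    now = datetime.datetime(2011, 6, 22, 17, 1, 37, 123456)
    s = now.strftime(PERFECT_TIME_FORMAT)    # '2011-06-22T17:01:37.123456'
    back = datetime.datetime.strptime(s, PERFECT_TIME_FORMAT)
    assert back == now                       # microseconds survive the queue
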
=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-06-15 16:46:24 +0000
+++ nova/volume/api.py 2011-06-22 17:01:37 +0000
@@ -20,10 +20,8 @@
 Handles all requests relating to volumes.
 """

-
-from eventlet import greenthread
-
-from nova import db
+import eventlet
+
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -68,14 +66,27 @@
                    'display_name': name,
                    'display_description': description}

-        volume = self.db.volume_create(context, options)
-        rpc.cast(context,
-                 FLAGS.scheduler_topic,
-                 {"method": "create_volume",
-                  "args": {"topic": FLAGS.volume_topic,
-                           "volume_id": volume['id'],
-                           "snapshot_id": snapshot_id}})
-        return volume
+        volume_ref = self.db.volume_create(context, options)
+        volume_ref = utils.to_primitive(volume_ref)
+
+        def delayed_create(volume_ref):
+            vid = volume_ref['id']
+            try:
+                rvs = rpc.multicall(context,
+                                    FLAGS.scheduler_topic,
+                                    {"method": "create_volume",
+                                     "args": {"topic": FLAGS.volume_topic,
+                                              "volume_ref": volume_ref}})
+                for volume_ref in rvs:
+                    self.db.volume_update(context, vid, volume_ref)
+                volume_ref['launched_at'] = utils.utcnow()
+                self.db.volume_update(context, vid, volume_ref)
+
+            except rpc.RemoteError:
+                self.db.volume_update(context, vid, {'status': 'error'})
+
+        eventlet.spawn_n(delayed_create, volume_ref)
+        return volume_ref

     # TODO(yamahata): eliminate dumb polling
     def wait_creation(self, context, volume_id):
@@ -83,20 +94,44 @@
             volume = self.get(context, volume_id)
             if volume['status'] != 'creating':
                 return
-            greenthread.sleep(1)
+            eventlet.greenthread.sleep(1)

     def delete(self, context, volume_id):
-        volume = self.get(context, volume_id)
-        if volume['status'] != "available":
+        volume_ref = self.get(context, volume_id)
+        if volume_ref['status'] != "available":
             raise exception.ApiError(_("Volume status must be available"))
-        now = utils.utcnow()
-        self.db.volume_update(context, volume_id, {'status': 'deleting',
-                                                   'terminated_at': now})
-        host = volume['host']
-        rpc.cast(context,
-                 self.db.queue_get_for(context, FLAGS.volume_topic, host),
-                 {"method": "delete_volume",
-                  "args": {"volume_id": volume_id}})
+        if volume_ref['attach_status'] == "attached":
+            raise exception.Error(_("Volume is still attached"))
+
+        volume_ref['status'] = 'deleting'
+        volume_ref['terminated_at'] = utils.utcnow()
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        volume_ref = utils.to_primitive(volume_ref)
+
+        def delayed_delete(volume_ref):
+            vid = volume_ref['id']
+            try:
+                topic = self.db.queue_get_for(context,
+                                              FLAGS.volume_topic,
+                                              volume_ref['host'])
+                rvs = rpc.multicall(context,
+                                    topic,
+                                    {"method": "delete_volume",
+                                     "args": {"volume_ref": volume_ref}})
+                for volume_ref in rvs:
+                    self.db.volume_update(context, vid, volume_ref)
+
+                # NOTE(vish): See TODO in manager.py. This can be removed
+                #             if change to a better method for handling
+                #             deletes
+                if volume_ref['status'] != 'available':
+                    self.db.volume_destroy(context, vid)
+
+            except rpc.RemoteError:
+                self.db.volume_update(context, vid, {'status': 'err_delete'})
+
+        eventlet.spawn_n(delayed_delete, volume_ref)
+        return True

     def update(self, context, volume_id, fields):
         self.db.volume_update(context, volume_id, fields)

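
The delayed_create/delayed_delete helpers above lean on eventlet green threads: spawn_n schedules the function and returns immediately, so the API call returns while the updates stream in. A minimal standalone demonstration of that primitive:

    import eventlet

    def delayed(label):
        # Runs in a green thread once the main flow yields to the hub.
        print "%s finished" % label

    eventlet.spawn_n(delayed, "delayed_create")
    print "api call returned"   # printed first: spawn_n returns immediately
    eventlet.sleep(0)           # yield to the hub so the green thread runs
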
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py 2011-06-02 21:23:05 +0000
+++ nova/volume/manager.py 2011-06-22 17:01:37 +0000
@@ -88,79 +88,72 @@
         else:
             LOG.info(_("volume %s: skipping export"), volume['name'])

-    def create_volume(self, context, volume_id, snapshot_id=None):
+    def create_volume(self, context, volume_ref):
         """Creates and exports the volume."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
         LOG.info(_("volume %s: creating"), volume_ref['name'])

-        self.db.volume_update(context,
-                              volume_id,
-                              {'host': self.host})
-        # NOTE(vish): so we don't have to get volume from db again
-        #             before passing it to the driver.
+        @utils.synchronized(volume_ref['name'])
+        def safe_create(volume_ref):
+            try:
+                volume_ref = self.db.volume_get(context, volume_ref['id'])
+            except exception.VolumeNotFound:
+                volume_ref = self.db.volume_create(context, volume_ref)
+            return volume_ref
+
+        volume_ref = safe_create(volume_ref)
         volume_ref['host'] = self.host
-
-        try:
-            vol_name = volume_ref['name']
-            vol_size = volume_ref['size']
-            LOG.debug(_("volume %(vol_name)s: creating lv of"
-                        " size %(vol_size)sG") % locals())
-            if snapshot_id == None:
-                model_update = self.driver.create_volume(volume_ref)
-            else:
-                snapshot_ref = self.db.snapshot_get(context, snapshot_id)
-                model_update = self.driver.create_volume_from_snapshot(
-                    volume_ref,
-                    snapshot_ref)
-            if model_update:
-                self.db.volume_update(context, volume_ref['id'], model_update)
-
-            LOG.debug(_("volume %s: creating export"), volume_ref['name'])
-            model_update = self.driver.create_export(context, volume_ref)
-            if model_update:
-                self.db.volume_update(context, volume_ref['id'], model_update)
-        except Exception:
-            self.db.volume_update(context,
-                                  volume_ref['id'], {'status': 'error'})
-            raise
-
-        now = utils.utcnow()
-        self.db.volume_update(context,
-                              volume_ref['id'], {'status': 'available',
-                                                 'launched_at': now})
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        yield volume_ref
+
+        vol_name = volume_ref['name']
+        vol_size = volume_ref['size']
+        LOG.debug(_("volume %(vol_name)s: creating lv of"
+                    " size %(vol_size)sG") % locals())
+        snapshot_id = volume_ref['snapshot_id']
+        if snapshot_id is None:
+            model_update = self.driver.create_volume(volume_ref)
+        else:
+            snapshot_ref = self.db.snapshot_get(context, snapshot_id)
+            model_update = self.driver.create_volume_from_snapshot(
+                volume_ref,
+                snapshot_ref)
+        if model_update:
+            volume_ref.update(model_update)
+            self.db.volume_update(context, volume_ref['id'], model_update)
+        yield volume_ref
+
+        LOG.debug(_("volume %s: creating export"), volume_ref['name'])
+        model_update = self.driver.create_export(context, volume_ref)
+        if model_update:
+            volume_ref.update(model_update)
+            self.db.volume_update(context, volume_ref['id'], model_update)
+        yield volume_ref
+
         LOG.debug(_("volume %s: created successfully"), volume_ref['name'])
-        return volume_id
+        volume_ref['status'] = 'available'
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        yield volume_ref

-    def delete_volume(self, context, volume_id):
+    def delete_volume(self, context, volume_ref):
         """Deletes and unexports volume."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
-        if volume_ref['attach_status'] == "attached":
-            raise exception.Error(_("Volume is still attached"))
-        if volume_ref['host'] != self.host:
-            raise exception.Error(_("Volume is not local to this node"))
-
+        LOG.debug(_("volume %s: removing export"), volume_ref['name'])
+        self.driver.remove_export(context, volume_ref)
         try:
-            LOG.debug(_("volume %s: removing export"), volume_ref['name'])
-            self.driver.remove_export(context, volume_ref)
             LOG.debug(_("volume %s: deleting"), volume_ref['name'])
             self.driver.delete_volume(volume_ref)
-        except exception.VolumeIsBusy, e:
+        # TODO(vish): This may not be the best way to handle a busy delete
+        #             but I'm leaving it because this is the current way
+        #             it is handled.
+        except exception.VolumeIsBusy:
             LOG.debug(_("volume %s: volume is busy"), volume_ref['name'])
             self.driver.ensure_export(context, volume_ref)
-            self.db.volume_update(context, volume_ref['id'],
-                                  {'status': 'available'})
-            return True
-        except Exception:
-            self.db.volume_update(context,
-                                  volume_ref['id'],
-                                  {'status': 'error_deleting'})
-            raise
-
-        self.db.volume_destroy(context, volume_id)
+            volume_ref['status'] = 'available'
+            self.db.volume_update(context, volume_ref['id'], volume_ref)
+            yield volume_ref
+            return
         LOG.debug(_("volume %s: deleted successfully"), volume_ref['name'])
-        return True
+        self.db.volume_destroy(context, volume_ref['id'])
+        yield volume_ref

     def create_snapshot(self, context, volume_id, snapshot_id):
         """Creates and exports the snapshot."""