Merge lp:~vishvananda/nova/no-db-messaging into lp:~hudson-openstack/nova/trunk

Proposed by Vish Ishaya
Status: Work in progress
Proposed branch: lp:~vishvananda/nova/no-db-messaging
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~termie/nova/rpc_multicall
Diff against target: 658 lines (+220/-143)
9 files modified
nova/context.py (+3/-3)
nova/db/sqlalchemy/api.py (+1/-1)
nova/db/sqlalchemy/models.py (+18/-5)
nova/scheduler/manager.py (+7/-5)
nova/tests/scheduler/test_scheduler.py (+32/-27)
nova/tests/test_volume.py (+34/-18)
nova/utils.py (+14/-1)
nova/volume/api.py (+58/-23)
nova/volume/manager.py (+53/-60)
To merge this branch: bzr merge lp:~vishvananda/nova/no-db-messaging
Reviewer Review Type Date Requested Status
Matt Dietz (community) Needs Information
Rick Harris (community) Needs Fixing
Dan Prince (community) Abstain
Review via email: mp+61687@code.launchpad.net

Description of the change

This is an initial proposal for feedback. This branch is an attempt to start on this blueprint:
https://blueprints.launchpad.net/nova/+spec/no-db-messaging
which will allow for the implementation of this blueprint:
https://blueprints.launchpad.net/nova/+spec/separate-code-for-services
and will ultimately make it easy to replace our various services with external projects.

This prototype changes volume_create and volume_delete to pass data through the queue instead of writing information to the database and reading it on the other end. It attempts to make minimal changes. It includes:
 * a small change to model code to allow for conversion into dicts
 * the scheduler uses multicall instead of cast
 * volume.api.create_volume and delete_volume use multicall to communicate with volume.manager
 * volume.manager returns updates instead of modifying the database directly (a toy sketch of this pattern follows below)
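
To make the shape of the change concrete, here is a toy version of the yield-updates pattern (plain Python, no rpc or database; the names and values here are made up, the real code is in volume/manager.py and volume/api.py in the diff below):

    def create_volume(volume):
        # Worker side: yield a dict of updates after each step instead of
        # writing them to the database directly.
        volume = dict(volume, host='host1')
        yield volume                            # caller records the chosen host
        volume = dict(volume, target='tgt-1')   # stands in for driver updates
        yield volume
        volume = dict(volume, status='available')
        yield volume                            # caller records the final status

    # Caller side (the api/scheduler end of the queue): apply each update.
    fake_db = {}
    for update in create_volume({'id': 1, 'status': 'creating'}):
        fake_db[update['id']] = update          # stands in for db.volume_update()
    print(fake_db[1]['status'])                 # -> 'available'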

Please note that this is an initial proposal. It is based on some changes made by termie that allow the driver to return multiple times for a single call. Creating volumes works, but I haven't modified the tests to pass yet.

To Do:
 * change the driver code so it doesn't store its extra data about volumes in the global db.
 * fix the tests
 * modify attach and detach to use the same methodology

I'm open to feedback about this approach. I tried a few other versions and this seems like the simplest change set to get what we want. If this looks good, I will modify the other volume commands to work the same way and propose a similar set of changes for compute.

Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Vish,

I'm not able to boot any instances via the OS API with this branch. It looks to be related to the set_admin_password functionality, which now polls for the compute host to be assigned:

(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 501, in _set_admin_password
(nova.api.openstack): TRACE: host = self._find_host(context, instance_id)
(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 497, in _find_host
(nova.api.openstack): TRACE: % instance_id)
(nova.api.openstack): TRACE: Error: Unable to find host for Instance 1
(nova.api.openstack): TRACE:

review: Needs Fixing
Revision history for this message
Vish Ishaya (vishvananda) wrote :

Thanks Dan,

It turns out the error was that if you cast to a generator (multicall uses generators), the generator wouldn't actually be executed on the other end. I think casting should work now.
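
For illustration, this is just plain Python generator semantics (not the actual rpc code): calling a generator function only creates a generator object, and nothing in its body runs until something iterates it, so the cast path has to consume the result explicitly.

    def work():
        # Nothing below runs until the generator is iterated.
        print("step one")
        yield 1
        print("step two")
        yield 2

    gen = work()          # creates the generator; prints nothing
    results = list(gen)   # only now do both steps execute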

Vish

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The prerequisite lp:~termie/nova/rpc_multicall has not yet been merged into lp:nova.

Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Vish,

So your latest fix resolves the RPC issues. Thanks!

I'm still hitting an issue with things like instance metadata (i.e., model attributes that aren't columns).

The following change to NovaBase.update resolves it for me.

            if key == 'name' and key in columns:
                setattr(self, key, value)
            else:
                setattr(self, key, value)

review: Needs Fixing
Revision history for this message
Dan Prince (dan-prince) wrote :

Oops. This one should work:

            if key == 'name' and key in columns:
                setattr(self, key, value)
            elif key != 'name':
                setattr(self, key, value)

Revision history for this message
Mark Washenberger (markwash) wrote :

I'm having a little trouble grasping the multicall approach as shown here, so I might be a bit confused.

It looks like the way this works is that when we schedule a volume to be created, we spawn an eventlet-based volume listener which updates the database as it hears back from the volume manager.

Maybe I'm mistaken about eventlet, but this implies that if for some reason the process terminates before the multicall has finished its last return, then the database won't be updated. Then, when the process is restarted, there will be no calling context remaining to handle the updates.
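
For concreteness, here is the pattern I'm describing, condensed from the delayed_create helper in volume/api.py in this branch (start_create is just a stand-in name for the api method, and error handling is omitted):

    import eventlet

    from nova import db
    from nova import flags
    from nova import rpc
    from nova import utils

    FLAGS = flags.FLAGS

    def start_create(context, options):
        volume_ref = utils.to_primitive(db.volume_create(context, options))

        def delayed_create(volume_ref):
            # Runs as a greenthread inside the API process; if that process
            # dies mid-iteration, the remaining updates (including the final
            # 'available' status) are lost along with the calling context.
            updates = rpc.multicall(context, FLAGS.scheduler_topic,
                                    {"method": "create_volume",
                                     "args": {"topic": FLAGS.volume_topic,
                                              "volume_ref": volume_ref}})
            for update in updates:
                db.volume_update(context, volume_ref['id'], update)

        eventlet.spawn_n(delayed_create, volume_ref)
        return volume_ref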

Would it be a more robust approach to create a permanently running VolumeListener that handles all volume update events?

Thanks!

lp:~vishvananda/nova/no-db-messaging updated
1113. By Vish Ishaya

merged rpc_multicall

1114. By Vish Ishaya

keep the database on the receiving end as well

1115. By Vish Ishaya

fix tests

1116. By Vish Ishaya

use strtime for passing datetimes back and forth through the queue

1117. By Vish Ishaya

lost some changes from rpc branch, bring them in manually

1118. By Vish Ishaya

merged trunk and removed conflicts

1119. By Vish Ishaya

make sure to handle VolumeIsBusy

1120. By Vish Ishaya

return not yield in scheduler shortcut

1121. By Vish Ishaya

fix snapshot test

Revision history for this message
Dan Prince (dan-prince) wrote :

Sorry for holding this up. I'm no multicall expert, but I should at least remove my 'needs fixing' so more people check it out. The latest fix resolves my issue with model attributes that aren't columns.

review: Abstain
Revision history for this message
Rick Harris (rconradharris) wrote :

Ran into a test failure:

======================================================================
FAIL: test_create_and_delete_volume (nova.tests.integrated.test_volumes.VolumesTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/rick/openstack/nova/no-db-messaging/nova/tests/integrated/test_volumes.py", line 122, in test_create_and_delete_volume
    self.assertEquals(1, len(create_actions))
AssertionError: 1 != 2
-------------------- >> begin captured logging << --------------------

Other than that, I think this looks good.

review: Needs Fixing
Revision history for this message
Dan Prince (dan-prince) wrote :

A couple of conflicts now with a trunk merge, too:

Text conflict in nova/volume/api.py
Text conflict in nova/volume/manager.py
2 conflicts encountered.

Revision history for this message
Matt Dietz (cerberus) wrote :

I really like this approach, but I have the same reservations that Mark does.

We could go with the Listener approach, but I will say that I've seen bottlenecking issues in our current architecture when trying to do something very similar.

It seems like this would make updating a running nova installation prohibitively difficult. If there were a way to recall the calling context/msg_id even after a worker bounce, then I'd feel a lot better.
We may need some kind of state-dump mechanism for the workers. Perhaps we could pickle some pertinent data when a worker receives a HUP/KILL/whatever.
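
Just to sketch what I mean (completely hypothetical, nothing like this exists in the branch; the names and dump path are made up):

    import pickle
    import signal

    # msg_id -> in-flight update state, maintained by the worker as it goes.
    PENDING_CALLS = {}

    def _dump_state(signum, frame):
        # On HUP/TERM, persist whatever calls are still in flight so a
        # restarted worker could try to resume or reconcile them.
        with open('/var/lib/nova/worker_state.pkl', 'wb') as f:
            pickle.dump(PENDING_CALLS, f)

    signal.signal(signal.SIGHUP, _dump_state)
    signal.signal(signal.SIGTERM, _dump_state)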

review: Needs Information
lp:~vishvananda/nova/no-db-messaging updated
1122. By Vish Ishaya

remove merge error calling failing test

1123. By Vish Ishaya

merged trunk

Unmerged revisions

1123. By Vish Ishaya

merged trunk

1122. By Vish Ishaya

remove merge error calling failing test

1121. By Vish Ishaya

fix snapshot test

1120. By Vish Ishaya

return not yield in scheduler shortcut

1119. By Vish Ishaya

make sure to handle VolumeIsBusy

1118. By Vish Ishaya

merged trunk and removed conflicts

1117. By Vish Ishaya

lost some changes from rpc branch, bring them in manually

1116. By Vish Ishaya

use strtime for passing datetimes back and forth through the queue

1115. By Vish Ishaya

fix tests

1114. By Vish Ishaya

keep the database on the receiving end as well

Preview Diff

1=== modified file 'nova/context.py'
2--- nova/context.py 2011-06-02 21:23:05 +0000
3+++ nova/context.py 2011-06-22 17:01:37 +0000
4@@ -56,8 +56,8 @@
5 self.remote_address = remote_address
6 if not timestamp:
7 timestamp = utils.utcnow()
8- if isinstance(timestamp, str) or isinstance(timestamp, unicode):
9- timestamp = utils.parse_isotime(timestamp)
10+ if isinstance(timestamp, basestring):
11+ timestamp = utils.parse_strtime(timestamp)
12 self.timestamp = timestamp
13 if not request_id:
14 chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-'
15@@ -95,7 +95,7 @@
16 'is_admin': self.is_admin,
17 'read_deleted': self.read_deleted,
18 'remote_address': self.remote_address,
19- 'timestamp': utils.isotime(self.timestamp),
20+ 'timestamp': utils.strtime(self.timestamp),
21 'request_id': self.request_id}
22
23 @classmethod
24
25=== modified file 'nova/db/sqlalchemy/api.py'
26--- nova/db/sqlalchemy/api.py 2011-06-20 20:55:16 +0000
27+++ nova/db/sqlalchemy/api.py 2011-06-22 17:01:37 +0000
28@@ -1689,7 +1689,7 @@
29 return (result[0] or 0, result[1] or 0)
30
31
32-@require_admin_context
33+@require_context
34 def volume_destroy(context, volume_id):
35 session = get_session()
36 with session.begin():
37
38=== modified file 'nova/db/sqlalchemy/models.py'
39--- nova/db/sqlalchemy/models.py 2011-06-20 20:55:16 +0000
40+++ nova/db/sqlalchemy/models.py 2011-06-22 17:01:37 +0000
41@@ -25,6 +25,7 @@
42 from sqlalchemy.exc import IntegrityError
43 from sqlalchemy.ext.declarative import declarative_base
44 from sqlalchemy.schema import ForeignKeyConstraint
45+from sqlalchemy.types import DateTime as DTType
46
47 from nova.db.sqlalchemy.session import get_session
48
49@@ -77,17 +78,29 @@
50 return getattr(self, key, default)
51
52 def __iter__(self):
53- self._i = iter(object_mapper(self).columns)
54+ # NOTE(vish): include name property in the iterator
55+ columns = dict(object_mapper(self).columns).keys()
56+ name = self.get('name')
57+ if name:
58+ columns.append('name')
59+ self._i = iter(columns)
60 return self
61
62 def next(self):
63- n = self._i.next().name
64+ n = self._i.next()
65 return n, getattr(self, n)
66
67 def update(self, values):
68- """Make the model object behave like a dict"""
69- for k, v in values.iteritems():
70- setattr(self, k, v)
71+ """Make the model object behave like a dict and convert datetimes."""
72+ columns = object_mapper(self).columns
73+ for key, value in values.iteritems():
74+ # NOTE(vish): don't update the 'name' property
75+ if key != 'name' or key in columns:
76+ if (key in columns and
77+ isinstance(value, basestring) and
78+ isinstance(columns[key].type, DTType)):
79+ value = utils.parse_strtime(value)
80+ setattr(self, key, value)
81
82 def iteritems(self):
83 """Make the model object behave like a dict.
84
85=== modified file 'nova/scheduler/manager.py'
86--- nova/scheduler/manager.py 2011-06-09 23:16:55 +0000
87+++ nova/scheduler/manager.py 2011-06-22 17:01:37 +0000
88@@ -98,11 +98,13 @@
89 % locals())
90 return
91
92- rpc.cast(context,
93- db.queue_get_for(context, topic, host),
94- {"method": method,
95- "args": kwargs})
96- LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals())
97+ LOG.debug(_("Multicall %(topic)s %(host)s for %(method)s") % locals())
98+ rvs = rpc.multicall(context,
99+ db.queue_get_for(context, topic, host),
100+ {"method": method,
101+ "args": kwargs})
102+ for rv in rvs:
103+ yield rv
104
105 # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
106 # Based on bexar design summit discussion,
107
108=== modified file 'nova/tests/scheduler/test_scheduler.py'
109--- nova/tests/scheduler/test_scheduler.py 2011-06-17 23:53:30 +0000
110+++ nova/tests/scheduler/test_scheduler.py 2011-06-22 17:01:37 +0000
111@@ -98,7 +98,7 @@
112
113 def test_fallback(self):
114 scheduler = manager.SchedulerManager()
115- self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
116+ self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
117 ctxt = context.get_admin_context()
118 rpc.cast(ctxt,
119 'topic.fallback_host',
120@@ -109,7 +109,7 @@
121
122 def test_named_method(self):
123 scheduler = manager.SchedulerManager()
124- self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
125+ self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
126 ctxt = context.get_admin_context()
127 rpc.cast(ctxt,
128 'topic.named_host',
129@@ -225,17 +225,17 @@
130 self.mox.StubOutWithMock(db, 'service_get_all_by_topic')
131 arg = IgnoreArg()
132 db.service_get_all_by_topic(arg, arg).AndReturn(service_list)
133- self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
134- rpc.cast(ctxt,
135- 'compute.host1',
136- {'method': 'run_instance',
137- 'args': {'instance_id': 'i-ffffffff',
138- 'availability_zone': 'zone1'}})
139+ self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
140+ rpc.multicall(ctxt,
141+ 'compute.host1',
142+ {'method': 'run_instance',
143+ 'args': {'instance_id': 'i-ffffffff',
144+ 'availability_zone': 'zone1'}}).AndReturn([])
145 self.mox.ReplayAll()
146- scheduler.run_instance(ctxt,
147- 'compute',
148- instance_id='i-ffffffff',
149- availability_zone='zone1')
150+ list(scheduler.run_instance(ctxt,
151+ 'compute',
152+ instance_id='i-ffffffff',
153+ availability_zone='zone1'))
154
155
156 class SimpleDriverTestCase(test.TestCase):
157@@ -601,12 +601,13 @@
158 volume1 = self.start_service('volume', host='host1')
159 volume2 = self.start_service('volume', host='host2')
160 volume_id1 = self._create_volume()
161- volume1.create_volume(self.context, volume_id1)
162+ host1 = self.scheduler.driver.schedule_create_volume(self.context,
163+ volume_id1)
164 volume_id2 = self._create_volume()
165- host = self.scheduler.driver.schedule_create_volume(self.context,
166- volume_id2)
167- self.assertEqual(host, 'host2')
168- volume1.delete_volume(self.context, volume_id1)
169+ host2 = self.scheduler.driver.schedule_create_volume(self.context,
170+ volume_id2)
171+ self.assertNotEqual(host1, host2)
172+ db.volume_destroy(self.context, volume_id1)
173 db.volume_destroy(self.context, volume_id2)
174 volume1.kill()
175 volume2.kill()
176@@ -619,10 +620,12 @@
177 volume_ids2 = []
178 for index in xrange(FLAGS.max_gigabytes):
179 volume_id = self._create_volume()
180- volume1.create_volume(self.context, volume_id)
181+ self.scheduler.driver.schedule_create_volume(self.context,
182+ volume_id)
183 volume_ids1.append(volume_id)
184 volume_id = self._create_volume()
185- volume2.create_volume(self.context, volume_id)
186+ self.scheduler.driver.schedule_create_volume(self.context,
187+ volume_id)
188 volume_ids2.append(volume_id)
189 volume_id = self._create_volume()
190 self.assertRaises(driver.NoValidHost,
191@@ -658,16 +661,18 @@
192 driver_i._live_migration_src_check(nocare, nocare)
193 driver_i._live_migration_dest_check(nocare, nocare, i_ref['host'])
194 driver_i._live_migration_common_check(nocare, nocare, i_ref['host'])
195- self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
196+ self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
197 kwargs = {'instance_id': instance_id, 'dest': i_ref['host']}
198- rpc.cast(self.context,
199- db.queue_get_for(nocare, FLAGS.compute_topic, i_ref['host']),
200- {"method": 'live_migration', "args": kwargs})
201-
202+ rpc.multicall(self.context,
203+ db.queue_get_for(nocare,
204+ FLAGS.compute_topic,
205+ i_ref['host']),
206+ {"method": 'live_migration',
207+ "args": kwargs}).AndReturn([])
208 self.mox.ReplayAll()
209- self.scheduler.live_migration(self.context, FLAGS.compute_topic,
210- instance_id=instance_id,
211- dest=i_ref['host'])
212+ list(self.scheduler.live_migration(self.context, FLAGS.compute_topic,
213+ instance_id=instance_id,
214+ dest=i_ref['host']))
215
216 i_ref = db.instance_get(self.context, instance_id)
217 self.assertTrue(i_ref['state_description'] == 'migrating')
218
219=== modified file 'nova/tests/test_volume.py'
220--- nova/tests/test_volume.py 2011-05-27 05:13:17 +0000
221+++ nova/tests/test_volume.py 2011-06-22 17:01:37 +0000
222@@ -57,14 +57,24 @@
223 vol['attach_status'] = "detached"
224 return db.volume_create(context.get_admin_context(), vol)['id']
225
226+ def _id_create_volume(self, context, volume_id):
227+ """Version of create volume that uses id"""
228+ volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
229+ return list(self.volume.create_volume(context, volume_ref))[-1]['id']
230+
231+ def _id_delete_volume(self, context, volume_id):
232+ """Version of delete volume that uses id"""
233+ volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
234+ return list(self.volume.delete_volume(context, volume_ref))[-1]
235+
236 def test_create_delete_volume(self):
237 """Test volume can be created and deleted."""
238 volume_id = self._create_volume()
239- self.volume.create_volume(self.context, volume_id)
240+ self._id_create_volume(self.context, volume_id)
241 self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),
242 volume_id).id)
243
244- self.volume.delete_volume(self.context, volume_id)
245+ self._id_delete_volume(self.context, volume_id)
246 self.assertRaises(exception.NotFound,
247 db.volume_get,
248 self.context,
249@@ -77,7 +87,7 @@
250 snapshot_id = self._create_snapshot(volume_src_id)
251 self.volume.create_snapshot(self.context, volume_src_id, snapshot_id)
252 volume_dst_id = self._create_volume(0, snapshot_id)
253- self.volume.create_volume(self.context, volume_dst_id, snapshot_id)
254+ self._id_create_volume(self.context, volume_dst_id)
255 self.assertEqual(volume_dst_id, db.volume_get(
256 context.get_admin_context(),
257 volume_dst_id).id)
258@@ -96,7 +106,7 @@
259 return True
260 try:
261 volume_id = self._create_volume('1001')
262- self.volume.create_volume(self.context, volume_id)
263+ self._id_create_volume(self.context, volume_id)
264 self.fail("Should have thrown TypeError")
265 except TypeError:
266 pass
267@@ -107,16 +117,16 @@
268 total_slots = FLAGS.iscsi_num_targets
269 for _index in xrange(total_slots):
270 volume_id = self._create_volume()
271- self.volume.create_volume(self.context, volume_id)
272+ self._id_create_volume(self.context, volume_id)
273 vols.append(volume_id)
274 volume_id = self._create_volume()
275 self.assertRaises(db.NoMoreTargets,
276- self.volume.create_volume,
277+ self._id_create_volume,
278 self.context,
279 volume_id)
280 db.volume_destroy(context.get_admin_context(), volume_id)
281 for volume_id in vols:
282- self.volume.delete_volume(self.context, volume_id)
283+ self._id_delete_volume(self.context, volume_id)
284
285 def test_run_attach_detach_volume(self):
286 """Make sure volume can be attached and detached from instance."""
287@@ -132,7 +142,7 @@
288 instance_id = db.instance_create(self.context, inst)['id']
289 mountpoint = "/dev/sdf"
290 volume_id = self._create_volume()
291- self.volume.create_volume(self.context, volume_id)
292+ self._id_create_volume(self.context, volume_id)
293 if FLAGS.fake_tests:
294 db.volume_attached(self.context, volume_id, instance_id,
295 mountpoint)
296@@ -148,10 +158,6 @@
297 instance_ref = db.volume_get_instance(self.context, volume_id)
298 self.assertEqual(instance_ref['id'], instance_id)
299
300- self.assertRaises(exception.Error,
301- self.volume.delete_volume,
302- self.context,
303- volume_id)
304 if FLAGS.fake_tests:
305 db.volume_detached(self.context, volume_id)
306 else:
307@@ -161,7 +167,7 @@
308 vol = db.volume_get(self.context, volume_id)
309 self.assertEqual(vol['status'], "available")
310
311- self.volume.delete_volume(self.context, volume_id)
312+ self._id_delete_volume(self.context, volume_id)
313 self.assertRaises(exception.VolumeNotFound,
314 db.volume_get,
315 self.context,
316@@ -185,10 +191,10 @@
317 total_slots = FLAGS.iscsi_num_targets
318 for _index in xrange(total_slots):
319 volume_id = self._create_volume()
320- d = self.volume.create_volume(self.context, volume_id)
321+ d = self._id_create_volume(self.context, volume_id)
322 _check(d)
323 for volume_id in volume_ids:
324- self.volume.delete_volume(self.context, volume_id)
325+ self._id_delete_volume(self.context, volume_id)
326
327 def test_multi_node(self):
328 # TODO(termie): Figure out how to test with two nodes,
329@@ -253,6 +259,16 @@
330 def tearDown(self):
331 super(DriverTestCase, self).tearDown()
332
333+ def _id_create_volume(self, context, volume_id):
334+ """Version of create volume that uses id"""
335+ volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
336+ return list(self.volume.create_volume(context, volume_ref))[-1]['id']
337+
338+ def _id_delete_volume(self, context, volume_id):
339+ """Version of delete volume that uses id"""
340+ volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
341+ return list(self.volume.delete_volume(context, volume_ref))[-1]
342+
343 def _attach_volume(self):
344 """Attach volumes to an instance. This function also sets
345 a fake log message."""
346@@ -262,7 +278,7 @@
347 """Detach volumes from an instance."""
348 for volume_id in volume_id_list:
349 db.volume_detached(self.context, volume_id)
350- self.volume.delete_volume(self.context, volume_id)
351+ self._id_delete_volume(self.context, volume_id)
352
353
354 class AOETestCase(DriverTestCase):
355@@ -284,7 +300,7 @@
356 vol['size'] = 0
357 volume_id = db.volume_create(self.context,
358 vol)['id']
359- self.volume.create_volume(self.context, volume_id)
360+ self._id_create_volume(self.context, volume_id)
361
362 # each volume has a different mountpoint
363 mountpoint = "/dev/sd" + chr((ord('b') + index))
364@@ -360,7 +376,7 @@
365 vol = {}
366 vol['size'] = 0
367 vol_ref = db.volume_create(self.context, vol)
368- self.volume.create_volume(self.context, vol_ref['id'])
369+ self._id_create_volume(self.context, vol_ref['id'])
370 vol_ref = db.volume_get(self.context, vol_ref['id'])
371
372 # each volume has a different mountpoint
373
374=== modified file 'nova/utils.py'
375--- nova/utils.py 2011-06-18 00:12:44 +0000
376+++ nova/utils.py 2011-06-22 17:01:37 +0000
377@@ -50,6 +50,7 @@
378
379 LOG = logging.getLogger("nova.utils")
380 TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
381+PERFECT_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"
382 FLAGS = flags.FLAGS
383
384
385@@ -362,6 +363,18 @@
386 return datetime.datetime.strptime(timestr, TIME_FORMAT)
387
388
389+def strtime(at=None):
390+ """Returns iso formatted utcnow."""
391+ if not at:
392+ at = utcnow()
393+ return at.strftime(PERFECT_TIME_FORMAT)
394+
395+
396+def parse_strtime(timestr):
397+ """Turn an iso formatted time back into a datetime."""
398+ return datetime.datetime.strptime(timestr, PERFECT_TIME_FORMAT)
399+
400+
401 def parse_mailmap(mailmap='.mailmap'):
402 mapping = {}
403 if os.path.exists(mailmap):
404@@ -505,7 +518,7 @@
405 o[k] = to_primitive(v)
406 return o
407 elif isinstance(value, datetime.datetime):
408- return str(value)
409+ return strtime(value)
410 elif hasattr(value, 'iteritems'):
411 return to_primitive(dict(value.iteritems()))
412 elif hasattr(value, '__iter__'):
413
414=== modified file 'nova/volume/api.py'
415--- nova/volume/api.py 2011-06-15 16:46:24 +0000
416+++ nova/volume/api.py 2011-06-22 17:01:37 +0000
417@@ -20,10 +20,8 @@
418 Handles all requests relating to volumes.
419 """
420
421-
422-from eventlet import greenthread
423-
424-from nova import db
425+import eventlet
426+
427 from nova import exception
428 from nova import flags
429 from nova import log as logging
430@@ -68,14 +66,27 @@
431 'display_name': name,
432 'display_description': description}
433
434- volume = self.db.volume_create(context, options)
435- rpc.cast(context,
436- FLAGS.scheduler_topic,
437- {"method": "create_volume",
438- "args": {"topic": FLAGS.volume_topic,
439- "volume_id": volume['id'],
440- "snapshot_id": snapshot_id}})
441- return volume
442+ volume_ref = self.db.volume_create(context, options)
443+ volume_ref = utils.to_primitive(volume_ref)
444+
445+ def delayed_create(volume_ref):
446+ vid = volume_ref['id']
447+ try:
448+ rvs = rpc.multicall(context,
449+ FLAGS.scheduler_topic,
450+ {"method": "create_volume",
451+ "args": {"topic": FLAGS.volume_topic,
452+ "volume_ref": volume_ref}})
453+ for volume_ref in rvs:
454+ self.db.volume_update(context, vid, volume_ref)
455+ volume_ref['launched_at'] = utils.utcnow()
456+ self.db.volume_update(context, vid, volume_ref)
457+
458+ except rpc.RemoteError:
459+ self.db.volume_update(context, vid, {'status': 'error'})
460+
461+ eventlet.spawn_n(delayed_create, volume_ref)
462+ return volume_ref
463
464 # TODO(yamahata): eliminate dumb polling
465 def wait_creation(self, context, volume_id):
466@@ -83,20 +94,44 @@
467 volume = self.get(context, volume_id)
468 if volume['status'] != 'creating':
469 return
470- greenthread.sleep(1)
471+ eventlet.greenthread.sleep(1)
472
473 def delete(self, context, volume_id):
474- volume = self.get(context, volume_id)
475- if volume['status'] != "available":
476+ volume_ref = self.get(context, volume_id)
477+ if volume_ref['status'] != "available":
478 raise exception.ApiError(_("Volume status must be available"))
479- now = utils.utcnow()
480- self.db.volume_update(context, volume_id, {'status': 'deleting',
481- 'terminated_at': now})
482- host = volume['host']
483- rpc.cast(context,
484- self.db.queue_get_for(context, FLAGS.volume_topic, host),
485- {"method": "delete_volume",
486- "args": {"volume_id": volume_id}})
487+ if volume_ref['attach_status'] == "attached":
488+ raise exception.Error(_("Volume is still attached"))
489+
490+ volume_ref['status'] = 'deleting'
491+ volume_ref['terminated_at'] = utils.utcnow()
492+ self.db.volume_update(context, volume_ref['id'], volume_ref)
493+ volume_ref = utils.to_primitive(volume_ref)
494+
495+ def delayed_delete(volume_ref):
496+ vid = volume_ref['id']
497+ try:
498+ topic = self.db.queue_get_for(context,
499+ FLAGS.volume_topic,
500+ volume_ref['host'])
501+ rvs = rpc.multicall(context,
502+ topic,
503+ {"method": "delete_volume",
504+ "args": {"volume_ref": volume_ref}})
505+ for volume_ref in rvs:
506+ self.db.volume_update(context, vid, volume_ref)
507+
508+ # NOTE(vish): See TODO in manager.py. This can be removed
509+ # if change to a better method for handling
510+ # deletes
511+ if volume_ref['status'] != 'available':
512+ self.db.volume_destroy(context, vid)
513+
514+ except rpc.RemoteError:
515+ self.db.volume_update(context, vid, {'status': 'err_delete'})
516+
517+ eventlet.spawn_n(delayed_delete, volume_ref)
518+ return True
519
520 def update(self, context, volume_id, fields):
521 self.db.volume_update(context, volume_id, fields)
522
523=== modified file 'nova/volume/manager.py'
524--- nova/volume/manager.py 2011-06-02 21:23:05 +0000
525+++ nova/volume/manager.py 2011-06-22 17:01:37 +0000
526@@ -88,79 +88,72 @@
527 else:
528 LOG.info(_("volume %s: skipping export"), volume['name'])
529
530- def create_volume(self, context, volume_id, snapshot_id=None):
531+ def create_volume(self, context, volume_ref):
532 """Creates and exports the volume."""
533- context = context.elevated()
534- volume_ref = self.db.volume_get(context, volume_id)
535 LOG.info(_("volume %s: creating"), volume_ref['name'])
536
537- self.db.volume_update(context,
538- volume_id,
539- {'host': self.host})
540- # NOTE(vish): so we don't have to get volume from db again
541- # before passing it to the driver.
542+ @utils.synchronized(volume_ref['name'])
543+ def safe_create(volume_ref):
544+ try:
545+ volume_ref = self.db.volume_get(context, volume_ref['id'])
546+ except exception.VolumeNotFound:
547+ volume_ref = self.db.volume_create(context, volume_ref)
548+ return volume_ref
549+
550+ volume_ref = safe_create(volume_ref)
551 volume_ref['host'] = self.host
552-
553- try:
554- vol_name = volume_ref['name']
555- vol_size = volume_ref['size']
556- LOG.debug(_("volume %(vol_name)s: creating lv of"
557- " size %(vol_size)sG") % locals())
558- if snapshot_id == None:
559- model_update = self.driver.create_volume(volume_ref)
560- else:
561- snapshot_ref = self.db.snapshot_get(context, snapshot_id)
562- model_update = self.driver.create_volume_from_snapshot(
563- volume_ref,
564- snapshot_ref)
565- if model_update:
566- self.db.volume_update(context, volume_ref['id'], model_update)
567-
568- LOG.debug(_("volume %s: creating export"), volume_ref['name'])
569- model_update = self.driver.create_export(context, volume_ref)
570- if model_update:
571- self.db.volume_update(context, volume_ref['id'], model_update)
572- except Exception:
573- self.db.volume_update(context,
574- volume_ref['id'], {'status': 'error'})
575- raise
576-
577- now = utils.utcnow()
578- self.db.volume_update(context,
579- volume_ref['id'], {'status': 'available',
580- 'launched_at': now})
581+ self.db.volume_update(context, volume_ref['id'], volume_ref)
582+ yield volume_ref
583+
584+ vol_name = volume_ref['name']
585+ vol_size = volume_ref['size']
586+ LOG.debug(_("volume %(vol_name)s: creating lv of"
587+ " size %(vol_size)sG") % locals())
588+ snapshot_id = volume_ref['snapshot_id']
589+ if snapshot_id is None:
590+ model_update = self.driver.create_volume(volume_ref)
591+ else:
592+ snapshot_ref = self.db.snapshot_get(context, snapshot_id)
593+ model_update = self.driver.create_volume_from_snapshot(
594+ volume_ref,
595+ snapshot_ref)
596+ if model_update:
597+ volume_ref.update(model_update)
598+ self.db.volume_update(context, volume_ref['id'], model_update)
599+ yield volume_ref
600+
601+ LOG.debug(_("volume %s: creating export"), volume_ref['name'])
602+ model_update = self.driver.create_export(context, volume_ref)
603+ if model_update:
604+ volume_ref.update(model_update)
605+ self.db.volume_update(context, volume_ref['id'], model_update)
606+ yield volume_ref
607+
608 LOG.debug(_("volume %s: created successfully"), volume_ref['name'])
609- return volume_id
610+ volume_ref['status'] = 'available'
611+ self.db.volume_update(context, volume_ref['id'], volume_ref)
612+ yield volume_ref
613
614- def delete_volume(self, context, volume_id):
615+ def delete_volume(self, context, volume_ref):
616 """Deletes and unexports volume."""
617- context = context.elevated()
618- volume_ref = self.db.volume_get(context, volume_id)
619- if volume_ref['attach_status'] == "attached":
620- raise exception.Error(_("Volume is still attached"))
621- if volume_ref['host'] != self.host:
622- raise exception.Error(_("Volume is not local to this node"))
623-
624+ LOG.debug(_("volume %s: removing export"), volume_ref['name'])
625+ self.driver.remove_export(context, volume_ref)
626 try:
627- LOG.debug(_("volume %s: removing export"), volume_ref['name'])
628- self.driver.remove_export(context, volume_ref)
629 LOG.debug(_("volume %s: deleting"), volume_ref['name'])
630 self.driver.delete_volume(volume_ref)
631- except exception.VolumeIsBusy, e:
632+ # TODO(vish): This may not be the best way to handle a busy delete
633+ # but I'm leaving it because this is the current way
634+ # it is handled.
635+ except exception.VolumeIsBusy:
636 LOG.debug(_("volume %s: volume is busy"), volume_ref['name'])
637 self.driver.ensure_export(context, volume_ref)
638- self.db.volume_update(context, volume_ref['id'],
639- {'status': 'available'})
640- return True
641- except Exception:
642- self.db.volume_update(context,
643- volume_ref['id'],
644- {'status': 'error_deleting'})
645- raise
646-
647- self.db.volume_destroy(context, volume_id)
648+ volume_ref['status'] = 'available'
649+ self.db.volume_update(context, volume_ref['id'], volume_ref)
650+ yield volume_ref
651+ return
652 LOG.debug(_("volume %s: deleted successfully"), volume_ref['name'])
653- return True
654+ self.db.volume_destroy(context, volume_ref['id'])
655+ yield volume_ref
656
657 def create_snapshot(self, context, volume_id, snapshot_id):
658 """Creates and exports the snapshot."""