Merge lp:~vishvananda/nova/no-db-messaging into lp:~hudson-openstack/nova/trunk
Status: Work in progress
Proposed branch: lp:~vishvananda/nova/no-db-messaging
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~termie/nova/rpc_multicall
Diff against target: 658 lines (+220/-143), 9 files modified
  nova/context.py (+3/-3)
  nova/db/sqlalchemy/api.py (+1/-1)
  nova/db/sqlalchemy/models.py (+18/-5)
  nova/scheduler/manager.py (+7/-5)
  nova/tests/scheduler/test_scheduler.py (+32/-27)
  nova/tests/test_volume.py (+34/-18)
  nova/utils.py (+14/-1)
  nova/volume/api.py (+58/-23)
  nova/volume/manager.py (+53/-60)
To merge this branch: bzr merge lp:~vishvananda/nova/no-db-messaging
Related bugs: (none)
Related blueprints: No DB Messaging (High)
Reviewer | Review Type | Status
Matt Dietz | community | Needs Information
Rick Harris | community | Needs Fixing
Dan Prince | community | Abstain
Review via email: mp+61687@code.launchpad.net
Commit message
Description of the change
This is an initial proposal for feedback. This branch is an attempt to start on this blueprint:
https:/
which will allow for the implementation of this blueprint:
https:/
Ultimately, this will make it easy to replace our various services with external projects.
This prototype changes volume_create and volume_delete to pass data through the queue instead of writing information to the database and reading it on the other end. It attempts to make minimal changes. It includes:
* a small change to model code to allow for conversion into dicts
* scheduler uses multicall instead of cast
* volume.api updates the database with the data returned through the queue
* volume.manager returns updates instead of modifying the database directly
Please note that this is an initial proposal. It is based on some changes made by termie to allow the driver to return multiple times for a single call. It works to create volumes, but I haven't modified the tests to pass yet.
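In rough terms, the pattern looks like this self-contained toy (names and data simplified; the real flow goes through rpc.multicall and the scheduler, as the preview diff below shows):

    # Receiving side: the manager method becomes a generator that yields
    # successive versions of the volume dict instead of writing them to
    # the database itself.
    def create_volume(volume_ref, host):
        volume_ref['host'] = host
        yield dict(volume_ref)                     # scheduler/host assignment
        volume_ref['export_device'] = '/dev/fake'  # stand-in for driver data
        yield dict(volume_ref)
        volume_ref['status'] = 'available'
        yield dict(volume_ref)

    # Sending side: volume.api consumes each yielded update and applies
    # it to the database (a plain dict stands in for db.volume_update).
    def delayed_create(db, volume_ref):
        for update in create_volume(volume_ref, 'host1'):
            db[volume_ref['id']] = update

    db = {}
    delayed_create(db, {'id': 1, 'status': 'creating'})
    assert db[1]['status'] == 'available'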
To Do:
* change the driver code so it doesn't store its extra data about volumes in the global db.
* fix the tests
* modify attach and detach to use the same methodology
I'm open to feedback about this approach. I tried a few other versions and this seems like the simplest change set to get what we want. If this looks good, I will modify the other volume commands to work the same way and propose a similar set of changes for compute.
Vish Ishaya (vishvananda) wrote:
Thanks Dan,
It turns out there was an error where, if you cast to a generator (multicall uses generators), the generator wouldn't actually be executed on the other end. I think casting should work now.
Vish
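(For anyone following the generator detail: a Python generator function executes none of its body until it is iterated, so a bare cast that merely invokes it appears to do nothing; the receiving end presumably has to iterate the result:

    def worker():
        print("doing work")   # not printed yet
        yield "done"

    gen = worker()   # creates the generator; runs no code
    list(gen)        # iteration finally executes the body
)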
OpenStack Infra (hudson-openstack) wrote:
The prerequisite lp:~termie/nova/rpc_multicall has not yet been merged into lp:nova.
Dan Prince (dan-prince) wrote:
Hey Vish,
So your latest fix resolves the RPC issues. Thanks!
I'm still hitting an issue with things like instance metadata (i.e., model attributes that aren't columns).
The following change to NovaBase.update resolves it for me:

    if key == 'name' and key in columns:
        setattr(self, key, value)
    else:
        setattr(self, key, value)
Dan Prince (dan-prince) wrote:
Oops. This one should work:

    if key == 'name' and key in columns:
        setattr(self, key, value)
    elif key != 'name':
        setattr(self, key, value)
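The version that ultimately lands in the branch (see the models.py hunk in the preview diff below) folds the same guard into the datetime conversion:

    def update(self, values):
        """Make the model object behave like a dict and convert datetimes."""
        columns = object_mapper(self).columns
        for key, value in values.iteritems():
            # NOTE(vish): don't update the 'name' property
            if key != 'name' or key in columns:
                if (key in columns and
                    isinstance(value, basestring) and
                    isinstance(columns[key].type, DTType)):
                    value = utils.parse_strtime(value)
                setattr(self, key, value)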
Mark Washenberger (markwash) wrote:
I'm having a little trouble grasping the multicall approach as shown here so I might be a bit confused.
It looks like the way this works is, when we schedule a volume to be created, we create an eventlet-based volume listener which will update the database as it hears back from the volume manager.
Maybe I'm mistaken about eventlet, but this implies that if for some reason the process terminates before the multicall has finished its last return, then the database won't be updated. Then, when the process is restarted, there will be no calling context remaining to handle the updates.
Would it be a more robust approach to create a permanently running VolumeListener that handles all volume update events?
Thanks!
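A purely illustrative sketch of that suggestion (none of these names exist in this branch; consume_update stands in for whatever queue-consuming helper it would need):

    import eventlet

    class VolumeListener(object):
        """Long-running consumer that applies volume updates to the db."""
        def __init__(self, db, consume_update):
            self.db = db
            self.consume_update = consume_update  # hypothetical helper

        def start(self, context):
            eventlet.spawn_n(self._run, context)

        def _run(self, context):
            while True:
                # block until some worker publishes a volume update
                update = self.consume_update('volume_updates')
                self.db.volume_update(context, update['id'], update)

Because the listener outlives any single API call, an API process restart would no longer orphan in-flight updates.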
- 1113. By Vish Ishaya: merged rpc_multicall
- 1114. By Vish Ishaya: keep the database on the receiving end as well
- 1115. By Vish Ishaya: fix tests
- 1116. By Vish Ishaya: use strtime for passing datetimes back and forth through the queue
- 1117. By Vish Ishaya: lost some changes from rpc branch, bring them in manually
- 1118. By Vish Ishaya: merged trunk and removed conflicts
- 1119. By Vish Ishaya: make sure to handle VolumeIsBusy
- 1120. By Vish Ishaya: return not yield in scheduler shortcut
- 1121. By Vish Ishaya: fix snapshot test
Dan Prince (dan-prince) wrote:
Sorry for holding this up. I'm no multicall expert but I should probably remove my 'needs fixing' at least so more people check it out. The latest fix resolves my issue with model attributes that aren't columns.
Rick Harris (rconradharris) wrote:
Ran into a test failure:
=======
FAIL: test_create_
-------
Traceback (most recent call last):
File "/home/
self.
AssertionError: 1 != 2
-------
Other than that, I think this looks good.
Dan Prince (dan-prince) wrote:
Couple of conflicts now w/ a trunk merge too:
Text conflict in nova/volume/api.py
Text conflict in nova/volume/
2 conflicts encountered.
Matt Dietz (cerberus) wrote:
I really like this approach, but I have the same reservations that Mark does.
We could go with the Listener approach, but I will say that I've seen issues with bottlenecking in our current architecture trying to do something very similar.
It seems like this would make updating a running nova installation prohibitively difficult. If there were a way to recall the calling context/msg_id even after a worker bounce, I'd feel a lot better.
We may need some kind of state dump mechanism for the workers. Perhaps we could pickle some pertinent data upon a worker receiving a HUP/KILL/whatever.
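A rough sketch of that idea (entirely hypothetical, nothing like this exists in the branch): on SIGTERM, pickle whatever is needed to re-attach to in-flight multicalls:

    import pickle
    import signal

    pending_calls = {}  # e.g. msg_id -> volume id being updated

    def dump_state(signum, frame):
        # persist enough state for a restarted worker to resume consuming
        with open('worker_state.pkl', 'wb') as f:
            pickle.dump(pending_calls, f)

    signal.signal(signal.SIGTERM, dump_state)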
- 1122. By Vish Ishaya: remove merge error calling failing test
- 1123. By Vish Ishaya: merged trunk
Unmerged revisions
- 1123. By Vish Ishaya: merged trunk
- 1122. By Vish Ishaya: remove merge error calling failing test
- 1121. By Vish Ishaya: fix snapshot test
- 1120. By Vish Ishaya: return not yield in scheduler shortcut
- 1119. By Vish Ishaya: make sure to handle VolumeIsBusy
- 1118. By Vish Ishaya: merged trunk and removed conflicts
- 1117. By Vish Ishaya: lost some changes from rpc branch, bring them in manually
- 1116. By Vish Ishaya: use strtime for passing datetimes back and forth through the queue
- 1115. By Vish Ishaya: fix tests
- 1114. By Vish Ishaya: keep the database on the receiving end as well
Preview Diff
=== modified file 'nova/context.py'
--- nova/context.py 2011-06-02 21:23:05 +0000
+++ nova/context.py 2011-06-22 17:01:37 +0000
@@ -56,8 +56,8 @@
         self.remote_address = remote_address
         if not timestamp:
             timestamp = utils.utcnow()
-        if isinstance(timestamp, str) or isinstance(timestamp, unicode):
-            timestamp = utils.parse_isotime(timestamp)
+        if isinstance(timestamp, basestring):
+            timestamp = utils.parse_strtime(timestamp)
         self.timestamp = timestamp
         if not request_id:
             chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-'
@@ -95,7 +95,7 @@
                 'is_admin': self.is_admin,
                 'read_deleted': self.read_deleted,
                 'remote_address': self.remote_address,
-                'timestamp': utils.isotime(self.timestamp),
+                'timestamp': utils.strtime(self.timestamp),
                 'request_id': self.request_id}

     @classmethod
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-06-20 20:55:16 +0000
+++ nova/db/sqlalchemy/api.py 2011-06-22 17:01:37 +0000
@@ -1689,7 +1689,7 @@
     return (result[0] or 0, result[1] or 0)


-@require_admin_context
+@require_context
 def volume_destroy(context, volume_id):
     session = get_session()
     with session.begin():
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-06-20 20:55:16 +0000
+++ nova/db/sqlalchemy/models.py 2011-06-22 17:01:37 +0000
@@ -25,6 +25,7 @@
 from sqlalchemy.exc import IntegrityError
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import ForeignKeyConstraint
+from sqlalchemy.types import DateTime as DTType

 from nova.db.sqlalchemy.session import get_session

@@ -77,17 +78,29 @@
         return getattr(self, key, default)

     def __iter__(self):
-        self._i = iter(object_mapper(self).columns)
+        # NOTE(vish): include name property in the iterator
+        columns = dict(object_mapper(self).columns).keys()
+        name = self.get('name')
+        if name:
+            columns.append('name')
+        self._i = iter(columns)
         return self

     def next(self):
-        n = self._i.next().name
+        n = self._i.next()
         return n, getattr(self, n)

     def update(self, values):
-        """Make the model object behave like a dict"""
-        for k, v in values.iteritems():
-            setattr(self, k, v)
+        """Make the model object behave like a dict and convert datetimes."""
+        columns = object_mapper(self).columns
+        for key, value in values.iteritems():
+            # NOTE(vish): don't update the 'name' property
+            if key != 'name' or key in columns:
+                if (key in columns and
+                    isinstance(value, basestring) and
+                    isinstance(columns[key].type, DTType)):
+                    value = utils.parse_strtime(value)
+                setattr(self, key, value)

     def iteritems(self):
         """Make the model object behave like a dict.
=== modified file 'nova/scheduler/manager.py'
--- nova/scheduler/manager.py 2011-06-09 23:16:55 +0000
+++ nova/scheduler/manager.py 2011-06-22 17:01:37 +0000
@@ -98,11 +98,13 @@
                       % locals())
             return

-        rpc.cast(context,
-                 db.queue_get_for(context, topic, host),
-                 {"method": method,
-                  "args": kwargs})
-        LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals())
+        LOG.debug(_("Multicall %(topic)s %(host)s for %(method)s") % locals())
+        rvs = rpc.multicall(context,
+                            db.queue_get_for(context, topic, host),
+                            {"method": method,
+                             "args": kwargs})
+        for rv in rvs:
+            yield rv

     # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
     #                    Based on bexar design summit discussion,
=== modified file 'nova/tests/scheduler/test_scheduler.py'
--- nova/tests/scheduler/test_scheduler.py 2011-06-17 23:53:30 +0000
+++ nova/tests/scheduler/test_scheduler.py 2011-06-22 17:01:37 +0000
@@ -98,7 +98,7 @@

     def test_fallback(self):
         scheduler = manager.SchedulerManager()
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
         ctxt = context.get_admin_context()
         rpc.cast(ctxt,
                  'topic.fallback_host',
@@ -109,7 +109,7 @@

     def test_named_method(self):
         scheduler = manager.SchedulerManager()
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
         ctxt = context.get_admin_context()
         rpc.cast(ctxt,
                  'topic.named_host',
@@ -225,17 +225,17 @@
         self.mox.StubOutWithMock(db, 'service_get_all_by_topic')
         arg = IgnoreArg()
         db.service_get_all_by_topic(arg, arg).AndReturn(service_list)
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
-        rpc.cast(ctxt,
-                 'compute.host1',
-                 {'method': 'run_instance',
-                  'args': {'instance_id': 'i-ffffffff',
-                           'availability_zone': 'zone1'}})
+        self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
+        rpc.multicall(ctxt,
+                      'compute.host1',
+                      {'method': 'run_instance',
+                       'args': {'instance_id': 'i-ffffffff',
+                                'availability_zone': 'zone1'}}).AndReturn([])
         self.mox.ReplayAll()
-        scheduler.run_instance(ctxt,
-                               'compute',
-                               instance_id='i-ffffffff',
-                               availability_zone='zone1')
+        list(scheduler.run_instance(ctxt,
+                                    'compute',
+                                    instance_id='i-ffffffff',
+                                    availability_zone='zone1'))


 class SimpleDriverTestCase(test.TestCase):
@@ -601,12 +601,13 @@
         volume1 = self.start_service('volume', host='host1')
         volume2 = self.start_service('volume', host='host2')
         volume_id1 = self._create_volume()
-        volume1.create_volume(self.context, volume_id1)
+        host1 = self.scheduler.driver.schedule_create_volume(self.context,
+                                                             volume_id1)
         volume_id2 = self._create_volume()
-        host = self.scheduler.driver.schedule_create_volume(self.context,
-                                                            volume_id2)
-        self.assertEqual(host, 'host2')
-        volume1.delete_volume(self.context, volume_id1)
+        host2 = self.scheduler.driver.schedule_create_volume(self.context,
+                                                             volume_id2)
+        self.assertNotEqual(host1, host2)
+        db.volume_destroy(self.context, volume_id1)
         db.volume_destroy(self.context, volume_id2)
         volume1.kill()
         volume2.kill()
@@ -619,10 +620,12 @@
         volume_ids2 = []
         for index in xrange(FLAGS.max_gigabytes):
             volume_id = self._create_volume()
-            volume1.create_volume(self.context, volume_id)
+            self.scheduler.driver.schedule_create_volume(self.context,
+                                                         volume_id)
             volume_ids1.append(volume_id)
             volume_id = self._create_volume()
-            volume2.create_volume(self.context, volume_id)
+            self.scheduler.driver.schedule_create_volume(self.context,
+                                                         volume_id)
             volume_ids2.append(volume_id)
         volume_id = self._create_volume()
         self.assertRaises(driver.NoValidHost,
@@ -658,16 +661,18 @@
         driver_i._live_migration_src_check(nocare, nocare)
         driver_i._live_migration_dest_check(nocare, nocare, i_ref['host'])
         driver_i._live_migration_common_check(nocare, nocare, i_ref['host'])
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        self.mox.StubOutWithMock(rpc, 'multicall', use_mock_anything=True)
         kwargs = {'instance_id': instance_id, 'dest': i_ref['host']}
-        rpc.cast(self.context,
-                 db.queue_get_for(nocare, FLAGS.compute_topic, i_ref['host']),
-                 {"method": 'live_migration', "args": kwargs})
-
+        rpc.multicall(self.context,
+                      db.queue_get_for(nocare,
+                                       FLAGS.compute_topic,
+                                       i_ref['host']),
+                      {"method": 'live_migration',
+                       "args": kwargs}).AndReturn([])
         self.mox.ReplayAll()
-        self.scheduler.live_migration(self.context, FLAGS.compute_topic,
-                                      instance_id=instance_id,
-                                      dest=i_ref['host'])
+        list(self.scheduler.live_migration(self.context, FLAGS.compute_topic,
+                                           instance_id=instance_id,
+                                           dest=i_ref['host']))

         i_ref = db.instance_get(self.context, instance_id)
         self.assertTrue(i_ref['state_description'] == 'migrating')
=== modified file 'nova/tests/test_volume.py'
--- nova/tests/test_volume.py 2011-05-27 05:13:17 +0000
+++ nova/tests/test_volume.py 2011-06-22 17:01:37 +0000
@@ -57,14 +57,24 @@
         vol['attach_status'] = "detached"
         return db.volume_create(context.get_admin_context(), vol)['id']

+    def _id_create_volume(self, context, volume_id):
+        """Version of create volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.create_volume(context, volume_ref))[-1]['id']
+
+    def _id_delete_volume(self, context, volume_id):
+        """Version of delete volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.delete_volume(context, volume_ref))[-1]
+
     def test_create_delete_volume(self):
         """Test volume can be created and deleted."""
         volume_id = self._create_volume()
-        self.volume.create_volume(self.context, volume_id)
+        self._id_create_volume(self.context, volume_id)
         self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),
                          volume_id).id)

-        self.volume.delete_volume(self.context, volume_id)
+        self._id_delete_volume(self.context, volume_id)
         self.assertRaises(exception.NotFound,
                           db.volume_get,
                           self.context,
@@ -77,7 +87,7 @@
         snapshot_id = self._create_snapshot(volume_src_id)
         self.volume.create_snapshot(self.context, volume_src_id, snapshot_id)
         volume_dst_id = self._create_volume(0, snapshot_id)
-        self.volume.create_volume(self.context, volume_dst_id, snapshot_id)
+        self._id_create_volume(self.context, volume_dst_id)
         self.assertEqual(volume_dst_id, db.volume_get(
                 context.get_admin_context(),
                 volume_dst_id).id)
@@ -96,7 +106,7 @@
             return True
         try:
             volume_id = self._create_volume('1001')
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)
             self.fail("Should have thrown TypeError")
         except TypeError:
             pass
@@ -107,16 +117,16 @@
         total_slots = FLAGS.iscsi_num_targets
         for _index in xrange(total_slots):
             volume_id = self._create_volume()
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)
             vols.append(volume_id)
         volume_id = self._create_volume()
         self.assertRaises(db.NoMoreTargets,
-                          self.volume.create_volume,
+                          self._id_create_volume,
                           self.context,
                           volume_id)
         db.volume_destroy(context.get_admin_context(), volume_id)
         for volume_id in vols:
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)

     def test_run_attach_detach_volume(self):
         """Make sure volume can be attached and detached from instance."""
@@ -132,7 +142,7 @@
         instance_id = db.instance_create(self.context, inst)['id']
         mountpoint = "/dev/sdf"
         volume_id = self._create_volume()
-        self.volume.create_volume(self.context, volume_id)
+        self._id_create_volume(self.context, volume_id)
         if FLAGS.fake_tests:
             db.volume_attached(self.context, volume_id, instance_id,
                                mountpoint)
@@ -148,10 +158,6 @@
         instance_ref = db.volume_get_instance(self.context, volume_id)
         self.assertEqual(instance_ref['id'], instance_id)

-        self.assertRaises(exception.Error,
-                          self.volume.delete_volume,
-                          self.context,
-                          volume_id)
         if FLAGS.fake_tests:
             db.volume_detached(self.context, volume_id)
         else:
@@ -161,7 +167,7 @@
         vol = db.volume_get(self.context, volume_id)
         self.assertEqual(vol['status'], "available")

-        self.volume.delete_volume(self.context, volume_id)
+        self._id_delete_volume(self.context, volume_id)
         self.assertRaises(exception.VolumeNotFound,
                           db.volume_get,
                           self.context,
@@ -185,10 +191,10 @@
         total_slots = FLAGS.iscsi_num_targets
         for _index in xrange(total_slots):
             volume_id = self._create_volume()
-            d = self.volume.create_volume(self.context, volume_id)
+            d = self._id_create_volume(self.context, volume_id)
             _check(d)
         for volume_id in volume_ids:
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)

     def test_multi_node(self):
         # TODO(termie): Figure out how to test with two nodes,
@@ -253,6 +259,16 @@
     def tearDown(self):
         super(DriverTestCase, self).tearDown()

+    def _id_create_volume(self, context, volume_id):
+        """Version of create volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.create_volume(context, volume_ref))[-1]['id']
+
+    def _id_delete_volume(self, context, volume_id):
+        """Version of delete volume that uses id"""
+        volume_ref = utils.to_primitive(db.volume_get(context, volume_id))
+        return list(self.volume.delete_volume(context, volume_ref))[-1]
+
     def _attach_volume(self):
         """Attach volumes to an instance. This function also sets
         a fake log message."""
@@ -262,7 +278,7 @@
         """Detach volumes from an instance."""
         for volume_id in volume_id_list:
             db.volume_detached(self.context, volume_id)
-            self.volume.delete_volume(self.context, volume_id)
+            self._id_delete_volume(self.context, volume_id)


 class AOETestCase(DriverTestCase):
@@ -284,7 +300,7 @@
             vol['size'] = 0
             volume_id = db.volume_create(self.context,
                                          vol)['id']
-            self.volume.create_volume(self.context, volume_id)
+            self._id_create_volume(self.context, volume_id)

             # each volume has a different mountpoint
             mountpoint = "/dev/sd" + chr((ord('b') + index))
@@ -360,7 +376,7 @@
             vol = {}
             vol['size'] = 0
             vol_ref = db.volume_create(self.context, vol)
-            self.volume.create_volume(self.context, vol_ref['id'])
+            self._id_create_volume(self.context, vol_ref['id'])
             vol_ref = db.volume_get(self.context, vol_ref['id'])

             # each volume has a different mountpoint
=== modified file 'nova/utils.py'
--- nova/utils.py 2011-06-18 00:12:44 +0000
+++ nova/utils.py 2011-06-22 17:01:37 +0000
@@ -50,6 +50,7 @@

 LOG = logging.getLogger("nova.utils")
 TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"
+PERFECT_TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"
 FLAGS = flags.FLAGS


@@ -362,6 +363,18 @@
     return datetime.datetime.strptime(timestr, TIME_FORMAT)


+def strtime(at=None):
+    """Returns iso formatted utcnow."""
+    if not at:
+        at = utcnow()
+    return at.strftime(PERFECT_TIME_FORMAT)
+
+
+def parse_strtime(timestr):
+    """Turn an iso formatted time back into a datetime."""
+    return datetime.datetime.strptime(timestr, PERFECT_TIME_FORMAT)
+
+
 def parse_mailmap(mailmap='.mailmap'):
     mapping = {}
     if os.path.exists(mailmap):
@@ -505,7 +518,7 @@
             o[k] = to_primitive(v)
         return o
     elif isinstance(value, datetime.datetime):
-        return str(value)
+        return strtime(value)
     elif hasattr(value, 'iteritems'):
         return to_primitive(dict(value.iteritems()))
     elif hasattr(value, '__iter__'):
=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-06-15 16:46:24 +0000
+++ nova/volume/api.py 2011-06-22 17:01:37 +0000
@@ -20,10 +20,8 @@
 Handles all requests relating to volumes.
 """

-
-from eventlet import greenthread
-
-from nova import db
+import eventlet
+
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -68,14 +66,27 @@
                    'display_name': name,
                    'display_description': description}

-        volume = self.db.volume_create(context, options)
-        rpc.cast(context,
-                 FLAGS.scheduler_topic,
-                 {"method": "create_volume",
-                  "args": {"topic": FLAGS.volume_topic,
-                           "volume_id": volume['id'],
-                           "snapshot_id": snapshot_id}})
-        return volume
+        volume_ref = self.db.volume_create(context, options)
+        volume_ref = utils.to_primitive(volume_ref)
+
+        def delayed_create(volume_ref):
+            vid = volume_ref['id']
+            try:
+                rvs = rpc.multicall(context,
+                                    FLAGS.scheduler_topic,
+                                    {"method": "create_volume",
+                                     "args": {"topic": FLAGS.volume_topic,
+                                              "volume_ref": volume_ref}})
+                for volume_ref in rvs:
+                    self.db.volume_update(context, vid, volume_ref)
+                volume_ref['launched_at'] = utils.utcnow()
+                self.db.volume_update(context, vid, volume_ref)
+
+            except rpc.RemoteError:
+                self.db.volume_update(context, vid, {'status': 'error'})
+
+        eventlet.spawn_n(delayed_create, volume_ref)
+        return volume_ref

     # TODO(yamahata): eliminate dumb polling
     def wait_creation(self, context, volume_id):
@@ -83,20 +94,44 @@
             volume = self.get(context, volume_id)
             if volume['status'] != 'creating':
                 return
-            greenthread.sleep(1)
+            eventlet.greenthread.sleep(1)

     def delete(self, context, volume_id):
-        volume = self.get(context, volume_id)
-        if volume['status'] != "available":
+        volume_ref = self.get(context, volume_id)
+        if volume_ref['status'] != "available":
             raise exception.ApiError(_("Volume status must be available"))
-        now = utils.utcnow()
-        self.db.volume_update(context, volume_id, {'status': 'deleting',
-                                                   'terminated_at': now})
-        host = volume['host']
-        rpc.cast(context,
-                 self.db.queue_get_for(context, FLAGS.volume_topic, host),
-                 {"method": "delete_volume",
-                  "args": {"volume_id": volume_id}})
+        if volume_ref['attach_status'] == "attached":
+            raise exception.Error(_("Volume is still attached"))
+
+        volume_ref['status'] = 'deleting'
+        volume_ref['terminated_at'] = utils.utcnow()
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        volume_ref = utils.to_primitive(volume_ref)
+
+        def delayed_delete(volume_ref):
+            vid = volume_ref['id']
+            try:
+                topic = self.db.queue_get_for(context,
+                                              FLAGS.volume_topic,
+                                              volume_ref['host'])
+                rvs = rpc.multicall(context,
+                                    topic,
+                                    {"method": "delete_volume",
+                                     "args": {"volume_ref": volume_ref}})
+                for volume_ref in rvs:
+                    self.db.volume_update(context, vid, volume_ref)
+
+                # NOTE(vish): See TODO in manager.py. This can be removed
+                #             if change to a better method for handling
+                #             deletes
+                if volume_ref['status'] != 'available':
+                    self.db.volume_destroy(context, vid)
+
+            except rpc.RemoteError:
+                self.db.volume_update(context, vid, {'status': 'err_delete'})
+
+        eventlet.spawn_n(delayed_delete, volume_ref)
+        return True

     def update(self, context, volume_id, fields):
         self.db.volume_update(context, volume_id, fields)
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py 2011-06-02 21:23:05 +0000
+++ nova/volume/manager.py 2011-06-22 17:01:37 +0000
@@ -88,79 +88,72 @@
         else:
             LOG.info(_("volume %s: skipping export"), volume['name'])

-    def create_volume(self, context, volume_id, snapshot_id=None):
+    def create_volume(self, context, volume_ref):
         """Creates and exports the volume."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
         LOG.info(_("volume %s: creating"), volume_ref['name'])

-        self.db.volume_update(context,
-                              volume_id,
-                              {'host': self.host})
-        # NOTE(vish): so we don't have to get volume from db again
-        #             before passing it to the driver.
+        @utils.synchronized(volume_ref['name'])
+        def safe_create(volume_ref):
+            try:
+                volume_ref = self.db.volume_get(context, volume_ref['id'])
+            except exception.VolumeNotFound:
+                volume_ref = self.db.volume_create(context, volume_ref)
+            return volume_ref
+
+        volume_ref = safe_create(volume_ref)
         volume_ref['host'] = self.host
-
-        try:
-            vol_name = volume_ref['name']
-            vol_size = volume_ref['size']
-            LOG.debug(_("volume %(vol_name)s: creating lv of"
-                        " size %(vol_size)sG") % locals())
-            if snapshot_id == None:
-                model_update = self.driver.create_volume(volume_ref)
-            else:
-                snapshot_ref = self.db.snapshot_get(context, snapshot_id)
-                model_update = self.driver.create_volume_from_snapshot(
-                    volume_ref,
-                    snapshot_ref)
-            if model_update:
-                self.db.volume_update(context, volume_ref['id'], model_update)
-
-            LOG.debug(_("volume %s: creating export"), volume_ref['name'])
-            model_update = self.driver.create_export(context, volume_ref)
-            if model_update:
-                self.db.volume_update(context, volume_ref['id'], model_update)
-        except Exception:
-            self.db.volume_update(context,
-                                  volume_ref['id'], {'status': 'error'})
-            raise
-
-        now = utils.utcnow()
-        self.db.volume_update(context,
-                              volume_ref['id'], {'status': 'available',
-                                                 'launched_at': now})
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        yield volume_ref
+
+        vol_name = volume_ref['name']
+        vol_size = volume_ref['size']
+        LOG.debug(_("volume %(vol_name)s: creating lv of"
+                    " size %(vol_size)sG") % locals())
+        snapshot_id = volume_ref['snapshot_id']
+        if snapshot_id is None:
+            model_update = self.driver.create_volume(volume_ref)
+        else:
+            snapshot_ref = self.db.snapshot_get(context, snapshot_id)
+            model_update = self.driver.create_volume_from_snapshot(
+                volume_ref,
+                snapshot_ref)
+        if model_update:
+            volume_ref.update(model_update)
+            self.db.volume_update(context, volume_ref['id'], model_update)
+        yield volume_ref
+
+        LOG.debug(_("volume %s: creating export"), volume_ref['name'])
+        model_update = self.driver.create_export(context, volume_ref)
+        if model_update:
+            volume_ref.update(model_update)
+            self.db.volume_update(context, volume_ref['id'], model_update)
+        yield volume_ref
+
         LOG.debug(_("volume %s: created successfully"), volume_ref['name'])
-        return volume_id
+        volume_ref['status'] = 'available'
+        self.db.volume_update(context, volume_ref['id'], volume_ref)
+        yield volume_ref

-    def delete_volume(self, context, volume_id):
+    def delete_volume(self, context, volume_ref):
         """Deletes and unexports volume."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
-        if volume_ref['attach_status'] == "attached":
-            raise exception.Error(_("Volume is still attached"))
-        if volume_ref['host'] != self.host:
-            raise exception.Error(_("Volume is not local to this node"))
-
+        LOG.debug(_("volume %s: removing export"), volume_ref['name'])
+        self.driver.remove_export(context, volume_ref)
         try:
-            LOG.debug(_("volume %s: removing export"), volume_ref['name'])
-            self.driver.remove_export(context, volume_ref)
             LOG.debug(_("volume %s: deleting"), volume_ref['name'])
             self.driver.delete_volume(volume_ref)
-        except exception.VolumeIsBusy, e:
+        # TODO(vish): This may not be the best way to handle a busy delete
+        #             but I'm leaving it because this is the current way
+        #             it is handled.
+        except exception.VolumeIsBusy:
             LOG.debug(_("volume %s: volume is busy"), volume_ref['name'])
             self.driver.ensure_export(context, volume_ref)
-            self.db.volume_update(context, volume_ref['id'],
-                                  {'status': 'available'})
-            return True
-        except Exception:
-            self.db.volume_update(context,
-                                  volume_ref['id'],
-                                  {'status': 'error_deleting'})
-            raise
-
-        self.db.volume_destroy(context, volume_id)
+            volume_ref['status'] = 'available'
+            self.db.volume_update(context, volume_ref['id'], volume_ref)
+            yield volume_ref
+            return
         LOG.debug(_("volume %s: deleted successfully"), volume_ref['name'])
-        return True
+        self.db.volume_destroy(context, volume_ref['id'])
+        yield volume_ref

     def create_snapshot(self, context, volume_id, snapshot_id):
         """Creates and exports the snapshot."""
Hey Vish,
I'm not able to boot any instances via the OS API w/ this branch. Looks to be related to the set_admin_password functionality which now polls for the compute host to be assigned:
(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 501, in _set_admin_password
(nova.api.openstack): TRACE:   host = self._find_host(context, instance_id)
(nova.api.openstack): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/api.py", line 497, in _find_host
(nova.api.openstack): TRACE:   % instance_id)
(nova.api.openstack): TRACE: Error: Unable to find host for Instance 1
(nova.api.openstack): TRACE: