Merge lp:~yamahata/nova/boot-from-volume-0 into lp:~hudson-openstack/nova/trunk
- boot-from-volume-0
- Merge into trunk
Status: | Merged |
---|---|
Merged at revision: | 1198 |
Proposed branch: | lp:~yamahata/nova/boot-from-volume-0 |
Merge into: | lp:~hudson-openstack/nova/trunk |
Prerequisite: | lp:~morita-kazutaka/nova/clone-volume |
Diff against target: |
1797 lines (+1068/-149) 23 files modified
nova/api/ec2/apirequest.py (+3/-75) nova/api/ec2/cloud.py (+47/-11) nova/api/ec2/ec2utils.py (+94/-0) nova/compute/api.py (+87/-17) nova/compute/manager.py (+136/-10) nova/compute/utils.py (+29/-0) nova/db/api.py (+35/-0) nova/db/sqlalchemy/api.py (+79/-0) nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py (+87/-0) nova/db/sqlalchemy/models.py (+39/-0) nova/scheduler/simple.py (+7/-1) nova/tests/test_api.py (+1/-1) nova/tests/test_cloud.py (+312/-10) nova/tests/test_compute.py (+15/-0) nova/virt/driver.py (+1/-1) nova/virt/fake.py (+5/-1) nova/virt/hyperv.py (+1/-1) nova/virt/libvirt.xml.template (+9/-0) nova/virt/libvirt/connection.py (+58/-18) nova/virt/vmwareapi_conn.py (+1/-1) nova/virt/xenapi_conn.py (+1/-1) nova/volume/api.py (+13/-1) nova/volume/driver.py (+8/-0) |
To merge this branch: | bzr merge lp:~yamahata/nova/boot-from-volume-0 |
Related bugs: | |
Related blueprints: | Boot From Volume (High) |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Rick Harris (community) | Approve | ||
Matt Dietz (community) | Approve | ||
Devin Carlen (community) | Needs Information | ||
Review via email: mp+62419@code.launchpad.net |
Commit message
Implements a portion of EC2 EBS boot.
What's implemented:
- block_device_mapping (ephemeral device and no device aren't supported yet)
- stop/start instance
TODO:
- ephemeral device/no device
- machine image
Description of the change
This branch implements boot from volume.
What can be done with this branch:
- --block-device-mapping: a volume id or snapshot id can be specified
(ephemeral device/no device isn't supported yet)
- stop/start instance
Several things are left; they will be done as a next step:
- machine image pointing to a volume
- ephemeral device/no device
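The block device mappings described above ultimately reach the compute API as a list of dicts (see the `create_db_entry_for_new_instance` hunk in the preview diff). A minimal sketch of the expected shape — the field names come from the diff, the concrete values are hypothetical:

```python
# Hypothetical example of the block_device_mapping argument accepted by
# nova.compute.api.API.create() in this branch. Field names come from the
# diff; the values are made up for illustration.
block_device_mapping = [
    {
        'device_name': '/dev/vdb',      # where the guest sees the volume
        'snapshot_id': 42,              # create a new volume from snapshot 42
        'volume_size': 10,              # size (GB) for the new volume
        'delete_on_termination': True,  # clean up when the instance dies
    },
    {
        'device_name': '/dev/vdc',
        'volume_id': 7,                 # attach existing volume 7 directly
        'delete_on_termination': False,
    },
]

# Every mapping must name a device; the other keys are optional.
assert all('device_name' in bdm for bdm in block_device_mapping)
```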
Brian Waldon (bcwaldon) wrote : | # |
Isaku Yamahata (yamahata) wrote : | # |
Thank you. Now I fixed it.
On Thu, May 26, 2011 at 02:21:04PM -0000, Brian Waldon wrote:
> There are a couple of conflicts you might want to look into.
> --
> https:/
> You are the owner of lp:~yamahata/nova/boot-from-volume-0.
>
--
yamahata
Devin Carlen (devcamcar) wrote : | # |
Can you please add some comments to this code block? It's unclear what this change is for.
37 + if len(parts) > 1:
38 +     d = args.get(key, {})
39 +     args[key] = d
40 +     for k in parts[1:-1]:
41 +         k = _camelcase_to_underscore(k)
42 +         v = d.get(k, {})
43 +         d[k] = v
44 +         d = v
45 +     d[_camelcase_to_underscore(parts[-1])] = value
46 + else:
47 +     args[key] = value
Please add a note header to this comment block, as in:
# NOTE(your username):
This helps in tracking down subject matter experts in this large codebase.
60 + # BlockDevicedMapping.&lt;N&gt;.DeviceName
61 + # BlockDevicedMapping.&lt;N&gt;.Ebs.SnapshotId
62 + # BlockDevicedMapping.&lt;N&gt;.Ebs.VolumeSize
63 + # BlockDevicedMapping.&lt;N&gt;.Ebs.DeleteOnTermination
64 + # BlockDevicedMapping.&lt;N&gt;.VirtualName
65 + # => remove .Ebs and allow volume id in SnapshotId
And same thing here:
188 + # tell vm driver to attach volume at boot time by updating
189 + # BlockDeviceMapping
Isaku Yamahata (yamahata) wrote : | # |
On Fri, May 27, 2011 at 09:06:09PM -0000, Devin Carlen wrote:
> Review: Needs Information
> Can you please add some comments to this code block? It's unclear what this change is for.
I added comments to the code.
That hunk teaches the EC2 argument parser about multi-dotted arguments.
So far only a single dot was allowed, but EC2 block device mapping
uses multi-dot-separated arguments like
BlockDeviceMapping.1.Ebs.SnapshotId=snap-id
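The parsing described above can be sketched as a standalone function — a simplified re-implementation of the branch's `ec2utils.dict_from_dotted_str`, leaving out the CamelCase conversion and type coercion the real code does:

```python
def dict_from_dotted_str(items):
    """Parse dot-separated keys into nested dicts.

    [('BlockDeviceMapping.1.Ebs.SnapshotId', 'snap-1')]
    becomes
    {'BlockDeviceMapping': {'1': {'Ebs': {'SnapshotId': 'snap-1'}}}}
    """
    args = {}
    for key, value in items:
        parts = key.split('.')
        if len(parts) > 1:
            # walk/create the chain of intermediate dicts
            d = args.setdefault(parts[0], {})
            for k in parts[1:-1]:
                d = d.setdefault(k, {})
            d[parts[-1]] = value
        else:
            args[key] = value
    return args

result = dict_from_dotted_str([
    ('BlockDeviceMapping.1.Ebs.SnapshotId', 'snap-1'),
    ('InstanceType', 'm1.small'),
])
print(result)
```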
> Please add a note header to this comment block, as in:
> # NOTE(your username):
Okay, I added it to them.
Is this a required convention for nova?
"bzr annotate" provides what you want; a version control system
is exactly for that purpose. And sprinkling usernames makes the code
ugly.
--
yamahata
Isaku Yamahata (yamahata) wrote : | # |
Ping?
What can I do to make progress?
Rick Harris (rconradharris) wrote : | # |
Very impressive work! Just a few small nits:
Received a test failure:
=====
FAIL: test_stop_
-----
Traceback (most recent call last):
File "/home/
self.
File "/home/
self.
AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'
> 72 + if len(parts) > 1:
> 73 +     d = args.get(key, {})
> 74 +     args[key] = d
> 75 +     for k in parts[1:-1]:
> 76 +         k = _camelcase_to_underscore(k)
> 77 +         v = d.get(k, {})
> 78 +         d[k] = v
> 79 +         d = v
> 80 +     d[_camelcase_to_underscore(parts[-1])] = value
> 81 + else:
> 82 +     args[key] = value
Might be worth breaking this code out into a utility method, something like:
`dict_from_dotted_str`
> 68 + # EBS boot uses multi dot-separeted arguments like
Typofix. s/separeted/separated/
> 315 + block_device_mapping=[]):
Usually not a good idea to use a list as a default argument. This is because
the list-object is created at /function definition/ time and the same list
object will be re-used on each invocation.
Instead, it's better to default to None and initialize a new list in the
function's body:
block_device_mapping=None):
block_device_mapping = block_device_mapping or []
OR....
if not block_device_mapping:
    block_device_mapping = []
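Rick's point about mutable defaults can be demonstrated in a few lines — the default list is created once, at definition time, and shared across calls:

```python
def buggy_append(item, items=[]):
    # the SAME list object is reused on every call made without an
    # explicit `items` argument
    items.append(item)
    return items

def safe_append(item, items=None):
    items = items or []  # fresh list on each call (the fix suggested above)
    items.append(item)
    return items

print(buggy_append(1))  # [1]
print(buggy_append(2))  # [1, 2]  <- surprise: previous call leaked in
print(safe_append(1))   # [1]
print(safe_append(2))   # [2]
```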
> 393 + if not _is_able_to_shutdown(instance, instance_id):
> 394 + return
Should we log here that we weren't able to shutdown, something like:
LOG.warn(_("Unable to shutdown server...."))
> 975 === added file 'nova/db/
Looks like you'll have to renumber these since trunk has already advanced
migration numbers.
Matt Dietz (cerberus) wrote : | # |
First of all, great work on this!
I see a few failing tests:
=======
ERROR: test_compute_
-------
Traceback (most recent call last):
File "/Users/
{'wait': wait_func})
TypeError: CreateMock() takes exactly 2 arguments (3 given)
=======
ERROR: test_compute_
-------
Traceback (most recent call last):
File "/Users/
self.
File "/Users/
mock_
File "/Users/
raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
0. __call__
=======
ERROR: test_create (nova.tests.
-------
Traceback (most recent call last):
File "/Users/
{'wait': wait_func})
TypeError: CreateMock() takes exactly 2 arguments (3 given)
=======
ERROR: test_create (nova.tests.
-------
Traceback (most recent call last):
File "/Users/
self.
File "/Users/
mock_
File "/Users/
raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
0. __call__
-------
455 + try:
456 + bdms = self.db.block_device_mapping_get_all_by_instance(
457 + context, instance_id)
458 + except exception.NotFound:
459 + pass
I don't really like throwing away exceptions. It's an explicit exception, sure, but it could mask something important. Seems like this should at least log the error.
476 + assert ((bdm['snapshot_id'] is None) or
477 + (bdm['volume_id'] is not None))
Isaku Yamahata (yamahata) wrote : | # |
On Wed, Jun 08, 2011 at 06:27:26PM -0000, Rick Harris wrote:
> Review: Needs Fixing
> Very impressive work! Just a few small nits:
>
>
> Received a test failure:
>
> =======
> FAIL: test_stop_
> -------
> Traceback (most recent call last):
> File "/home/
> self._assert_
> File "/home/
> self.assertEqua
> AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'
Hmm, the test passes for me. I'm using sqlite3 for unittest.
I can't find the code to escape '/' into '\\/'. MySQL?
> > 315 + block_device_mapping=[]):
>
> Usually not a good idea to use a list as a default argument. This is because
> the list-object is created at /function definition/ time and the same list
> object will be re-used on each invocation.
>
> Instead, it's better to default to None and initialize a new list in the
> function's body:
>
> block_device_mapping=None):
>
> block_device_mapping = block_device_mapping or []
>
> OR....
>
> if not block_device_mapping:
>     block_device_mapping = []
Okay, fixed.
During the fixes, I found other suspicious code.
Since I'm not sure at a glance whether it is intentional,
please review the attached patch.
> > 393 + if not _is_able_to_shutdown(instance, instance_id):
> > 394 + return
>
> Should we log here that we weren't able to shutdown, something like:
>
> LOG.warn(_("Unable to shutdown server...."))
Yes, _is_able_to_shutdown() already logs a warning with the reason.
=== modified file 'nova/objectstore/handler.py'
--- nova/objectstore/handler.py
+++ nova/objectstore/handler.py
@@ -155,7 +155,8 @@ class BaseRequestHandler
- def _render_parts(self, value, parts=[]):
+ def _render_parts(self, value, parts=None):
+ parts = parts or []
if isinstance(value, basestring):
elif isinstance(value, int) or isinstance(value, long):
=== modified file 'tools/ajaxterm/qweb.py'
--- tools/ajaxterm/qweb.py
+++ tools/ajaxterm/qweb.py
@@ -726,7 +726,7 @@ class QWebHtml(QWebXml):
#-----
# QWeb Simple Controller
#-----
-def qweb_control(
+def qweb_control(
""" qweb_control(
A simple function to handle the controler part of your application. It
dispatch the control to the jump argument, while ensuring th...
Isaku Yamahata (yamahata) wrote : | # |
On Wed, Jun 08, 2011 at 09:07:25PM -0000, Matt Dietz wrote:
> I see a few failing tests:
I think you're using an old mox version.
Since revno 1117, mox 0.5.3 is required instead of 0.5.0.
Updating your repo and reinstalling the venv will fix it.
> 455 + try:
> 456 + bdms = self.db.block_device_mapping_get_all_by_instance(
> 457 + context, instance_id)
> 458 + except exception.NotFound:
> 459 + pass
>
> I don't really like throwing away exceptions. It's an explicit exception, sure, but it could mask something important. Seems like this should at least log the error.
I see. Given that all the callers catch and ignore it, I changed it to return an
empty list instead of raising NotFound. Thus I eliminated the except ...: pass.
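The pattern settled on here — having the DB accessor return an empty list rather than raising — looks roughly like this (a sketch with a stand-in dict for the DB; the real implementation lives in nova/db/sqlalchemy/api.py):

```python
class NotFound(Exception):
    pass

# Before: callers had to wrap every lookup in try/except and ignore the error.
def get_mappings_raising(db, instance_id):
    rows = db.get(instance_id)
    if not rows:
        raise NotFound(instance_id)
    return rows

# After: "no mappings" is a normal, empty result, so callers just iterate.
def get_mappings(db, instance_id):
    return db.get(instance_id) or []

db = {1: [{'device_name': '/dev/vdb'}]}
for bdm in get_mappings(db, 2):  # instance 2 has no mappings: loop is a no-op
    print(bdm)
```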
> 476 + assert ((bdm['snapshot_id'] is None) or
> 477 + (bdm['volume_id'] is not None))
>
> It seems like it would be better to raise an explicit exception here, with a message describing exactly why this is a bad state.
Okay.
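Replacing the bare assert with an explicit exception, as suggested, might look like the following sketch (the branch itself raises exception.ApiError with a similar message, as the preview diff shows):

```python
class ApiError(Exception):
    pass

def check_bdm_state(bdm):
    """A consistent mapping either has no snapshot, or has already been
    resolved to a concrete volume. Anything else is a corrupted record."""
    if not (bdm.get('snapshot_id') is None or
            bdm.get('volume_id') is not None):
        raise ApiError('broken block device mapping %s: snapshot %s was '
                       'never resolved to a volume'
                       % (bdm.get('id'), bdm.get('snapshot_id')))

check_bdm_state({'id': 1, 'snapshot_id': None, 'volume_id': None})  # ok
check_bdm_state({'id': 2, 'snapshot_id': 5, 'volume_id': 9})        # ok
```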
> 239 + """Stop each instance in instace_id"""
> 245 + """Start each instance in instace_id"""
>
> instace_id should be instance_id
>
> Also, given that instance_id is singular, it should probably say something like:
>
> "Start the instance denoted by instance_id"
Unfortunately instance_id is a list of ec2 instance ids, as with
terminate_instances().
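The plural-but-singular naming follows the EC2 API shape: the handler receives a list of ec2 ids under the parameter name instance_id and fans out per instance, as the branch's _do_instances helper does. A rough sketch (ec2_id_to_id is re-derived here from the 'i-%08x' template in ec2utils):

```python
def ec2_id_to_id(ec2_id):
    # 'i-0000002a' -> 42; the inverse of the id_to_ec2_id 'i-%08x' template
    return int(ec2_id.split('-')[-1], 16)

def _do_instances(action, context, instance_id):
    # instance_id is a LIST of ec2 ids, matching the EC2 API parameter name
    for ec2_id in instance_id:
        action(context, instance_id=ec2_id_to_id(ec2_id))

stopped = []
_do_instances(lambda ctx, instance_id: stopped.append(instance_id),
              context=None, instance_id=['i-00000001', 'i-0000002a'])
print(stopped)  # [1, 42]
```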
--
yamahata
Matt Dietz (cerberus) wrote : | # |
Thanks for the changes Yamahata!
Rick Harris (rconradharris) wrote : | # |
> =======
> FAIL: test_stop_
> -------
> Traceback (most recent call last):
> File "/home/
> test_stop_
> self._assert_
> File "/home/
> self.assertEqua
> AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'
Turns out this was caused by running an older version of carrot (0.10.3) which didn't handle escaping properly.
Upgrading to 0.10.5 fixed it.
Patch looks great, nice job.
Rick Harris (rconradharris) wrote : | # |
Looks like Devin's concerns were addressed, setting to Approved.
OpenStack Infra (hudson-openstack) wrote : | # |
The attempt to merge lp:~yamahata/nova/boot-from-volume-0 into lp:nova failed. Below is the output from the failed tests.
(Test listing truncated in the original; legible fragments follow.)
AccountsTest, AdminAPITest, APITest, TestFunctional, TestLimiter,
LimiterTest, PaginationParamTest, ActionExtensionTest
test_bad_token OK 0.03
test_no_user OK 0.03
test_no_params OK 0.00
Isaku Yamahata (yamahata) wrote : | # |
Fixed two pep8 errors.
--
yamahata
OpenStack Infra (hudson-openstack) wrote : | # |
No proposals found for merge of lp:~morita-kazutaka/nova/clone-volume into lp:nova.
Vish Ishaya (vishvananda) wrote : | # |
Marking as merged. Our dependency checks set it back to needs-review if the prerequisite branch has already merged, even though the merge was successful.
Preview Diff
1 | === modified file 'nova/api/ec2/apirequest.py' |
2 | --- nova/api/ec2/apirequest.py 2011-04-18 20:53:09 +0000 |
3 | +++ nova/api/ec2/apirequest.py 2011-06-17 23:35:54 +0000 |
4 | @@ -21,22 +21,15 @@ |
5 | """ |
6 | |
7 | import datetime |
8 | -import re |
9 | # TODO(termie): replace minidom with etree |
10 | from xml.dom import minidom |
11 | |
12 | from nova import log as logging |
13 | +from nova.api.ec2 import ec2utils |
14 | |
15 | LOG = logging.getLogger("nova.api.request") |
16 | |
17 | |
18 | -_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))') |
19 | - |
20 | - |
21 | -def _camelcase_to_underscore(str): |
22 | - return _c2u.sub(r'_\1', str).lower().strip('_') |
23 | - |
24 | - |
25 | def _underscore_to_camelcase(str): |
26 | return ''.join([x[:1].upper() + x[1:] for x in str.split('_')]) |
27 | |
28 | @@ -51,59 +44,6 @@ |
29 | return datetimeobj.strftime("%Y-%m-%dT%H:%M:%SZ") |
30 | |
31 | |
32 | -def _try_convert(value): |
33 | - """Return a non-string from a string or unicode, if possible. |
34 | - |
35 | - ============= ===================================================== |
36 | - When value is returns |
37 | - ============= ===================================================== |
38 | - zero-length '' |
39 | - 'None' None |
40 | - 'True' True |
41 | - 'False' False |
42 | - '0', '-0' 0 |
43 | - 0xN, -0xN int from hex (postitive) (N is any number) |
44 | - 0bN, -0bN int from binary (positive) (N is any number) |
45 | - * try conversion to int, float, complex, fallback value |
46 | - |
47 | - """ |
48 | - if len(value) == 0: |
49 | - return '' |
50 | - if value == 'None': |
51 | - return None |
52 | - if value == 'True': |
53 | - return True |
54 | - if value == 'False': |
55 | - return False |
56 | - valueneg = value[1:] if value[0] == '-' else value |
57 | - if valueneg == '0': |
58 | - return 0 |
59 | - if valueneg == '': |
60 | - return value |
61 | - if valueneg[0] == '0': |
62 | - if valueneg[1] in 'xX': |
63 | - return int(value, 16) |
64 | - elif valueneg[1] in 'bB': |
65 | - return int(value, 2) |
66 | - else: |
67 | - try: |
68 | - return int(value, 8) |
69 | - except ValueError: |
70 | - pass |
71 | - try: |
72 | - return int(value) |
73 | - except ValueError: |
74 | - pass |
75 | - try: |
76 | - return float(value) |
77 | - except ValueError: |
78 | - pass |
79 | - try: |
80 | - return complex(value) |
81 | - except ValueError: |
82 | - return value |
83 | - |
84 | - |
85 | class APIRequest(object): |
86 | def __init__(self, controller, action, version, args): |
87 | self.controller = controller |
88 | @@ -114,7 +54,7 @@ |
89 | def invoke(self, context): |
90 | try: |
91 | method = getattr(self.controller, |
92 | - _camelcase_to_underscore(self.action)) |
93 | + ec2utils.camelcase_to_underscore(self.action)) |
94 | except AttributeError: |
95 | controller = self.controller |
96 | action = self.action |
97 | @@ -125,19 +65,7 @@ |
98 | # and reraise as 400 error. |
99 | raise Exception(_error) |
100 | |
101 | - args = {} |
102 | - for key, value in self.args.items(): |
103 | - parts = key.split(".") |
104 | - key = _camelcase_to_underscore(parts[0]) |
105 | - if isinstance(value, str) or isinstance(value, unicode): |
106 | - # NOTE(vish): Automatically convert strings back |
107 | - # into their respective values |
108 | - value = _try_convert(value) |
109 | - if len(parts) > 1: |
110 | - d = args.get(key, {}) |
111 | - d[parts[1]] = value |
112 | - value = d |
113 | - args[key] = value |
114 | + args = ec2utils.dict_from_dotted_str(self.args.items()) |
115 | |
116 | for key in args.keys(): |
117 | # NOTE(vish): Turn numeric dict keys into lists |
118 | |
119 | === modified file 'nova/api/ec2/cloud.py' |
120 | --- nova/api/ec2/cloud.py 2011-06-17 20:47:23 +0000 |
121 | +++ nova/api/ec2/cloud.py 2011-06-17 23:35:54 +0000 |
122 | @@ -909,6 +909,25 @@ |
123 | if kwargs.get('ramdisk_id'): |
124 | ramdisk = self._get_image(context, kwargs['ramdisk_id']) |
125 | kwargs['ramdisk_id'] = ramdisk['id'] |
126 | + for bdm in kwargs.get('block_device_mapping', []): |
127 | + # NOTE(yamahata) |
128 | + # BlockDevicedMapping.<N>.DeviceName |
129 | + # BlockDevicedMapping.<N>.Ebs.SnapshotId |
130 | + # BlockDevicedMapping.<N>.Ebs.VolumeSize |
131 | + # BlockDevicedMapping.<N>.Ebs.DeleteOnTermination |
132 | + # BlockDevicedMapping.<N>.VirtualName |
133 | + # => remove .Ebs and allow volume id in SnapshotId |
134 | + ebs = bdm.pop('ebs', None) |
135 | + if ebs: |
136 | + ec2_id = ebs.pop('snapshot_id') |
137 | + id = ec2utils.ec2_id_to_id(ec2_id) |
138 | + if ec2_id.startswith('snap-'): |
139 | + bdm['snapshot_id'] = id |
140 | + elif ec2_id.startswith('vol-'): |
141 | + bdm['volume_id'] = id |
142 | + ebs.setdefault('delete_on_termination', True) |
143 | + bdm.update(ebs) |
144 | + |
145 | image = self._get_image(context, kwargs['image_id']) |
146 | |
147 | if image: |
148 | @@ -933,37 +952,54 @@ |
149 | user_data=kwargs.get('user_data'), |
150 | security_group=kwargs.get('security_group'), |
151 | availability_zone=kwargs.get('placement', {}).get( |
152 | - 'AvailabilityZone')) |
153 | + 'AvailabilityZone'), |
154 | + block_device_mapping=kwargs.get('block_device_mapping', {})) |
155 | return self._format_run_instances(context, |
156 | instances[0]['reservation_id']) |
157 | |
158 | + def _do_instance(self, action, context, ec2_id): |
159 | + instance_id = ec2utils.ec2_id_to_id(ec2_id) |
160 | + action(context, instance_id=instance_id) |
161 | + |
162 | + def _do_instances(self, action, context, instance_id): |
163 | + for ec2_id in instance_id: |
164 | + self._do_instance(action, context, ec2_id) |
165 | + |
166 | def terminate_instances(self, context, instance_id, **kwargs): |
167 | """Terminate each instance in instance_id, which is a list of ec2 ids. |
168 | instance_id is a kwarg so its name cannot be modified.""" |
169 | LOG.debug(_("Going to start terminating instances")) |
170 | - for ec2_id in instance_id: |
171 | - instance_id = ec2utils.ec2_id_to_id(ec2_id) |
172 | - self.compute_api.delete(context, instance_id=instance_id) |
173 | + self._do_instances(self.compute_api.delete, context, instance_id) |
174 | return True |
175 | |
176 | def reboot_instances(self, context, instance_id, **kwargs): |
177 | """instance_id is a list of instance ids""" |
178 | LOG.audit(_("Reboot instance %r"), instance_id, context=context) |
179 | - for ec2_id in instance_id: |
180 | - instance_id = ec2utils.ec2_id_to_id(ec2_id) |
181 | - self.compute_api.reboot(context, instance_id=instance_id) |
182 | + self._do_instances(self.compute_api.reboot, context, instance_id) |
183 | + return True |
184 | + |
185 | + def stop_instances(self, context, instance_id, **kwargs): |
186 | + """Stop each instance in instance_id. |
187 | + Here instance_id is a list of instance ids""" |
188 | + LOG.debug(_("Going to stop instances")) |
189 | + self._do_instances(self.compute_api.stop, context, instance_id) |
190 | + return True |
191 | + |
192 | + def start_instances(self, context, instance_id, **kwargs): |
193 | + """Start each instance in instance_id. |
194 | + Here instance_id is a list of instance ids""" |
195 | + LOG.debug(_("Going to start instances")) |
196 | + self._do_instances(self.compute_api.start, context, instance_id) |
197 | return True |
198 | |
199 | def rescue_instance(self, context, instance_id, **kwargs): |
200 | """This is an extension to the normal ec2_api""" |
201 | - instance_id = ec2utils.ec2_id_to_id(instance_id) |
202 | - self.compute_api.rescue(context, instance_id=instance_id) |
203 | + self._do_instance(self.compute_api.rescue, context, instance_id) |
204 | return True |
205 | |
206 | def unrescue_instance(self, context, instance_id, **kwargs): |
207 | """This is an extension to the normal ec2_api""" |
208 | - instance_id = ec2utils.ec2_id_to_id(instance_id) |
209 | - self.compute_api.unrescue(context, instance_id=instance_id) |
210 | + self._do_instance(self.compute_api.unrescue, context, instance_id) |
211 | return True |
212 | |
213 | def update_instance(self, context, instance_id, **kwargs): |
214 | |
215 | === modified file 'nova/api/ec2/ec2utils.py' |
216 | --- nova/api/ec2/ec2utils.py 2011-05-11 18:02:01 +0000 |
217 | +++ nova/api/ec2/ec2utils.py 2011-06-17 23:35:54 +0000 |
218 | @@ -16,6 +16,8 @@ |
219 | # License for the specific language governing permissions and limitations |
220 | # under the License. |
221 | |
222 | +import re |
223 | + |
224 | from nova import exception |
225 | |
226 | |
227 | @@ -30,3 +32,95 @@ |
228 | def id_to_ec2_id(instance_id, template='i-%08x'): |
229 | """Convert an instance ID (int) to an ec2 ID (i-[base 16 number])""" |
230 | return template % instance_id |
231 | + |
232 | + |
233 | +_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))') |
234 | + |
235 | + |
236 | +def camelcase_to_underscore(str): |
237 | + return _c2u.sub(r'_\1', str).lower().strip('_') |
238 | + |
239 | + |
240 | +def _try_convert(value): |
241 | + """Return a non-string from a string or unicode, if possible. |
242 | + |
243 | + ============= ===================================================== |
244 | + When value is returns |
245 | + ============= ===================================================== |
246 | + zero-length '' |
247 | + 'None' None |
248 | + 'True' True case insensitive |
249 | + 'False' False case insensitive |
250 | + '0', '-0' 0 |
251 | + 0xN, -0xN int from hex (positive) (N is any number) |
252 | + 0bN, -0bN int from binary (positive) (N is any number) |
253 | + * try conversion to int, float, complex, fallback value |
254 | + |
255 | + """ |
256 | + if len(value) == 0: |
257 | + return '' |
258 | + if value == 'None': |
259 | + return None |
260 | + lowered_value = value.lower() |
261 | + if lowered_value == 'true': |
262 | + return True |
263 | + if lowered_value == 'false': |
264 | + return False |
265 | + valueneg = value[1:] if value[0] == '-' else value |
266 | + if valueneg == '0': |
267 | + return 0 |
268 | + if valueneg == '': |
269 | + return value |
270 | + if valueneg[0] == '0': |
271 | + if valueneg[1] in 'xX': |
272 | + return int(value, 16) |
273 | + elif valueneg[1] in 'bB': |
274 | + return int(value, 2) |
275 | + else: |
276 | + try: |
277 | + return int(value, 8) |
278 | + except ValueError: |
279 | + pass |
280 | + try: |
281 | + return int(value) |
282 | + except ValueError: |
283 | + pass |
284 | + try: |
285 | + return float(value) |
286 | + except ValueError: |
287 | + pass |
288 | + try: |
289 | + return complex(value) |
290 | + except ValueError: |
291 | + return value |
292 | + |
293 | + |
294 | +def dict_from_dotted_str(items): |
295 | + """parse multi dot-separated argument into dict. |
296 | + EBS boot uses multi dot-separated arguments like |
297 | + BlockDeviceMapping.1.DeviceName=snap-id |
298 | + Convert the above into |
299 | + {'block_device_mapping': {'1': {'device_name': snap-id}}} |
300 | + """ |
301 | + args = {} |
302 | + for key, value in items: |
303 | + parts = key.split(".") |
304 | + key = camelcase_to_underscore(parts[0]) |
305 | + if isinstance(value, str) or isinstance(value, unicode): |
306 | + # NOTE(vish): Automatically convert strings back |
307 | + # into their respective values |
308 | + value = _try_convert(value) |
309 | + |
310 | + if len(parts) > 1: |
311 | + d = args.get(key, {}) |
312 | + args[key] = d |
313 | + for k in parts[1:-1]: |
314 | + k = camelcase_to_underscore(k) |
315 | + v = d.get(k, {}) |
316 | + d[k] = v |
317 | + d = v |
318 | + d[camelcase_to_underscore(parts[-1])] = value |
319 | + else: |
320 | + args[key] = value |
321 | + |
322 | + return args |
323 | |
324 | === modified file 'nova/compute/api.py' |
325 | --- nova/compute/api.py 2011-06-17 15:25:23 +0000 |
326 | +++ nova/compute/api.py 2011-06-17 23:35:54 +0000 |
327 | @@ -34,6 +34,7 @@ |
328 | from nova import volume |
329 | from nova.compute import instance_types |
330 | from nova.compute import power_state |
331 | +from nova.compute.utils import terminate_volumes |
332 | from nova.scheduler import api as scheduler_api |
333 | from nova.db import base |
334 | |
335 | @@ -52,6 +53,18 @@ |
336 | return str(instance_id) |
337 | |
338 | |
339 | +def _is_able_to_shutdown(instance, instance_id): |
340 | + states = {'terminating': "Instance %s is already being terminated", |
341 | + 'migrating': "Instance %s is being migrated", |
342 | + 'stopping': "Instance %s is being stopped"} |
343 | + msg = states.get(instance['state_description']) |
344 | + if msg: |
345 | + LOG.warning(_(msg), instance_id) |
346 | + return False |
347 | + |
348 | + return True |
349 | + |
350 | + |
351 | class API(base.Base): |
352 | """API for interacting with the compute manager.""" |
353 | |
354 | @@ -235,7 +248,7 @@ |
355 | return (num_instances, base_options, security_groups) |
356 | |
357 | def create_db_entry_for_new_instance(self, context, base_options, |
358 | - security_groups, num=1): |
359 | + security_groups, block_device_mapping, num=1): |
360 | """Create an entry in the DB for this new instance, |
361 | including any related table updates (such as security |
362 | groups, MAC address, etc). This will called by create() |
363 | @@ -255,6 +268,23 @@ |
364 | instance_id, |
365 | security_group_id) |
366 | |
367 | + # NOTE(yamahata) |
368 | + # tell vm driver to attach volume at boot time by updating |
369 | + # BlockDeviceMapping |
370 | + for bdm in block_device_mapping: |
371 | + LOG.debug(_('bdm %s'), bdm) |
372 | + assert 'device_name' in bdm |
373 | + values = { |
374 | + 'instance_id': instance_id, |
375 | + 'device_name': bdm['device_name'], |
376 | + 'delete_on_termination': bdm.get('delete_on_termination'), |
377 | + 'virtual_name': bdm.get('virtual_name'), |
378 | + 'snapshot_id': bdm.get('snapshot_id'), |
379 | + 'volume_id': bdm.get('volume_id'), |
380 | + 'volume_size': bdm.get('volume_size'), |
381 | + 'no_device': bdm.get('no_device')} |
382 | + self.db.block_device_mapping_create(elevated, values) |
383 | + |
384 | # Set sane defaults if not specified |
385 | updates = dict(hostname=self.hostname_factory(instance_id)) |
386 | if (not hasattr(instance, 'display_name') or |
387 | @@ -339,7 +369,7 @@ |
388 | key_name=None, key_data=None, security_group='default', |
389 | availability_zone=None, user_data=None, metadata={}, |
390 | injected_files=None, admin_password=None, zone_blob=None, |
391 | - reservation_id=None): |
392 | + reservation_id=None, block_device_mapping=None): |
393 | """ |
394 | Provision the instances by sending off a series of single |
395 | instance requests to the Schedulers. This is fine for trival |
396 | @@ -360,11 +390,13 @@ |
397 | injected_files, admin_password, zone_blob, |
398 | reservation_id) |
399 | |
400 | + block_device_mapping = block_device_mapping or [] |
401 | instances = [] |
402 | LOG.debug(_("Going to run %s instances..."), num_instances) |
403 | for num in range(num_instances): |
404 | instance = self.create_db_entry_for_new_instance(context, |
405 | - base_options, security_groups, num=num) |
406 | + base_options, security_groups, |
407 | + block_device_mapping, num=num) |
408 | instances.append(instance) |
409 | instance_id = instance['id'] |
410 | |
411 | @@ -474,24 +506,22 @@ |
412 | rv = self.db.instance_update(context, instance_id, kwargs) |
413 | return dict(rv.iteritems()) |
414 | |
415 | + def _get_instance(self, context, instance_id, action_str): |
416 | + try: |
417 | + return self.get(context, instance_id) |
418 | + except exception.NotFound: |
419 | + LOG.warning(_("Instance %(instance_id)s was not found during " |
420 | + "%(action_str)s") % |
421 | + {'instance_id': instance_id, 'action_str': action_str}) |
422 | + raise |
423 | + |
424 | @scheduler_api.reroute_compute("delete") |
425 | def delete(self, context, instance_id): |
426 | """Terminate an instance.""" |
427 | LOG.debug(_("Going to try to terminate %s"), instance_id) |
428 | - try: |
429 | - instance = self.get(context, instance_id) |
430 | - except exception.NotFound: |
431 | - LOG.warning(_("Instance %s was not found during terminate"), |
432 | - instance_id) |
433 | - raise |
434 | - |
435 | - if instance['state_description'] == 'terminating': |
436 | - LOG.warning(_("Instance %s is already being terminated"), |
437 | - instance_id) |
438 | - return |
439 | - |
440 | - if instance['state_description'] == 'migrating': |
441 | - LOG.warning(_("Instance %s is being migrated"), instance_id) |
442 | + instance = self._get_instance(context, instance_id, 'terminating') |
443 | + |
444 | + if not _is_able_to_shutdown(instance, instance_id): |
445 | return |
446 | |
447 | self.update(context, |
448 | @@ -505,8 +535,48 @@ |
449 | self._cast_compute_message('terminate_instance', context, |
450 | instance_id, host) |
451 | else: |
452 | + terminate_volumes(self.db, context, instance_id) |
453 | self.db.instance_destroy(context, instance_id) |
454 | |
455 | + @scheduler_api.reroute_compute("stop") |
456 | + def stop(self, context, instance_id): |
457 | + """Stop an instance.""" |
458 | + LOG.debug(_("Going to try to stop %s"), instance_id) |
459 | + |
460 | + instance = self._get_instance(context, instance_id, 'stopping') |
461 | + if not _is_able_to_shutdown(instance, instance_id): |
462 | + return |
463 | + |
464 | + self.update(context, |
465 | + instance['id'], |
466 | + state_description='stopping', |
467 | + state=power_state.NOSTATE, |
468 | + terminated_at=utils.utcnow()) |
469 | + |
470 | + host = instance['host'] |
471 | + if host: |
472 | + self._cast_compute_message('stop_instance', context, |
473 | + instance_id, host) |
474 | + |
475 | + def start(self, context, instance_id): |
476 | + """Start an instance.""" |
477 | + LOG.debug(_("Going to try to start %s"), instance_id) |
478 | + instance = self._get_instance(context, instance_id, 'starting') |
479 | + if instance['state_description'] != 'stopped': |
480 | + _state_description = instance['state_description'] |
481 | + LOG.warning(_("Instance %(instance_id)s is not " |
482 | + "stopped(%(_state_description)s)") % locals()) |
483 | + return |
484 | + |
485 | + # TODO(yamahata): injected_files isn't supported right now. |
486 | + # It is used only for osapi. not for ec2 api. |
487 | + # availability_zone isn't used by run_instance. |
488 | + rpc.cast(context, |
489 | + FLAGS.scheduler_topic, |
490 | + {"method": "start_instance", |
491 | + "args": {"topic": FLAGS.compute_topic, |
492 | + "instance_id": instance_id}}) |
493 | + |
494 | def get(self, context, instance_id): |
495 | """Get a single instance with the given instance_id.""" |
496 | rv = self.db.instance_get(context, instance_id) |
497 | |
498 | === modified file 'nova/compute/manager.py' |
499 | --- nova/compute/manager.py 2011-06-03 15:11:01 +0000 |
500 | +++ nova/compute/manager.py 2011-06-17 23:35:54 +0000 |
501 | @@ -53,6 +53,7 @@ |
502 | from nova import utils |
503 | from nova import volume |
504 | from nova.compute import power_state |
505 | +from nova.compute.utils import terminate_volumes |
506 | from nova.virt import driver |
507 | |
508 | |
509 | @@ -214,8 +215,63 @@ |
510 | """ |
511 | return self.driver.refresh_security_group_members(security_group_id) |
512 | |
513 | - @exception.wrap_exception |
514 | - def run_instance(self, context, instance_id, **kwargs): |
515 | + def _setup_block_device_mapping(self, context, instance_id): |
516 | + """setup volumes for block device mapping""" |
517 | + self.db.instance_set_state(context, |
518 | + instance_id, |
519 | + power_state.NOSTATE, |
520 | + 'block_device_mapping') |
521 | + |
522 | + volume_api = volume.API() |
523 | + block_device_mapping = [] |
524 | + for bdm in self.db.block_device_mapping_get_all_by_instance( |
525 | + context, instance_id): |
526 | + LOG.debug(_("setting up bdm %s"), bdm) |
527 | + if ((bdm['snapshot_id'] is not None) and |
528 | + (bdm['volume_id'] is None)): |
529 | + # TODO(yamahata): default name and description |
530 | + vol = volume_api.create(context, bdm['volume_size'], |
531 | + bdm['snapshot_id'], '', '') |
532 | + # TODO(yamahata): creating volume simultaneously |
533 | + # reduces creation time? |
534 | + volume_api.wait_creation(context, vol['id']) |
535 | + self.db.block_device_mapping_update( |
536 | + context, bdm['id'], {'volume_id': vol['id']}) |
537 | + bdm['volume_id'] = vol['id'] |
538 | + |
539 | + if not ((bdm['snapshot_id'] is None) or |
540 | + (bdm['volume_id'] is not None)): |
541 | + LOG.error(_('corrupted state of block device mapping ' |
542 | + 'id: %(id)s ' |
543 | + 'snapshot: %(snapshot_id)s volume: %(volume_id)s') % |
544 | + {'id': bdm['id'], |
545 | + 'snapshot_id': bdm['snapshot_id'], |
546 | + 'volume_id': bdm['volume_id']}) |
547 | + raise exception.ApiError(_('broken block device mapping %d') % |
548 | + bdm['id']) |
549 | + |
550 | + if bdm['volume_id'] is not None: |
551 | + volume_api.check_attach(context, |
552 | + volume_id=bdm['volume_id']) |
553 | + dev_path = self._attach_volume_boot(context, instance_id, |
554 | + bdm['volume_id'], |
555 | + bdm['device_name']) |
556 | + block_device_mapping.append({'device_path': dev_path, |
557 | + 'mount_device': |
558 | + bdm['device_name']}) |
559 | + elif bdm['virtual_name'] is not None: |
560 | + # TODO(yamahata): ephemeral/swap device support |
561 | + LOG.debug(_('block_device_mapping: ' |
562 | + 'ephemeral device is not supported yet')) |
563 | + else: |
564 | + # TODO(yamahata): NoDevice support |
565 | + assert bdm['no_device'] |
566 | + LOG.debug(_('block_device_mapping: ' |
567 | + 'no device is not supported yet')) |
568 | + |
569 | + return block_device_mapping |
570 | + |
571 | + def _run_instance(self, context, instance_id, **kwargs): |
572 | """Launch a new instance with specified options.""" |
573 | context = context.elevated() |
574 | instance_ref = self.db.instance_get(context, instance_id) |
575 | @@ -249,11 +305,15 @@ |
576 | self.network_manager.setup_compute_network(context, |
577 | instance_id) |
578 | |
579 | + block_device_mapping = self._setup_block_device_mapping(context, |
580 | + instance_id) |
581 | + |
582 | # TODO(vish) check to make sure the availability zone matches |
583 | self._update_state(context, instance_id, power_state.BUILDING) |
584 | |
585 | try: |
586 | - self.driver.spawn(instance_ref) |
587 | + self.driver.spawn(instance_ref, |
588 | + block_device_mapping=block_device_mapping) |
589 | except Exception as ex: # pylint: disable=W0702 |
590 | msg = _("Instance '%(instance_id)s' failed to spawn. Is " |
591 | "virtualization enabled in the BIOS? Details: " |
592 | @@ -277,12 +337,24 @@ |
593 | self._update_state(context, instance_id) |
594 | |
595 | @exception.wrap_exception |
596 | + def run_instance(self, context, instance_id, **kwargs): |
597 | + self._run_instance(context, instance_id, **kwargs) |
598 | + |
599 | + @exception.wrap_exception |
600 | @checks_instance_lock |
601 | - def terminate_instance(self, context, instance_id): |
602 | - """Terminate an instance on this host.""" |
603 | + def start_instance(self, context, instance_id): |
604 | + """Start an instance on this host.""" |
605 | + # TODO(yamahata): injected_files isn't supported. |
606 | + # Anyway OSAPI doesn't support stop/start yet |
607 | + self._run_instance(context, instance_id) |
608 | + |
609 | + def _shutdown_instance(self, context, instance_id, action_str): |
610 | + """Shutdown an instance on this host.""" |
611 | context = context.elevated() |
612 | instance_ref = self.db.instance_get(context, instance_id) |
613 | - LOG.audit(_("Terminating instance %s"), instance_id, context=context) |
614 | + LOG.audit(_("%(action_str)s instance %(instance_id)s") % |
615 | + {'action_str': action_str, 'instance_id': instance_id}, |
616 | + context=context) |
617 | |
618 | fixed_ip = instance_ref.get('fixed_ip') |
619 | if not FLAGS.stub_network and fixed_ip: |
620 | @@ -318,18 +390,36 @@ |
621 | |
622 | volumes = instance_ref.get('volumes') or [] |
623 | for volume in volumes: |
624 | - self.detach_volume(context, instance_id, volume['id']) |
625 | - if instance_ref['state'] == power_state.SHUTOFF: |
626 | + self._detach_volume(context, instance_id, volume['id'], False) |
627 | + |
628 | + if (instance_ref['state'] == power_state.SHUTOFF and |
629 | + instance_ref['state_description'] != 'stopped'): |
630 | self.db.instance_destroy(context, instance_id) |
631 | raise exception.Error(_('trying to destroy already destroyed' |
632 | ' instance: %s') % instance_id) |
633 | self.driver.destroy(instance_ref) |
634 | |
635 | + if action_str == 'Terminating': |
636 | + terminate_volumes(self.db, context, instance_id) |
637 | + |
638 | + @exception.wrap_exception |
639 | + @checks_instance_lock |
640 | + def terminate_instance(self, context, instance_id): |
641 | + """Terminate an instance on this host.""" |
642 | + self._shutdown_instance(context, instance_id, 'Terminating') |
643 | + |
644 | # TODO(ja): should we keep it in a terminated state for a bit? |
645 | self.db.instance_destroy(context, instance_id) |
646 | |
647 | @exception.wrap_exception |
648 | @checks_instance_lock |
649 | + def stop_instance(self, context, instance_id): |
650 | + """Stop an instance on this host.""" |
651 | + self._shutdown_instance(context, instance_id, 'Stopping') |
652 | + # instance state will be updated to stopped by _poll_instance_states() |
653 | + |
654 | + @exception.wrap_exception |
655 | + @checks_instance_lock |
656 | def rebuild_instance(self, context, instance_id, **kwargs): |
657 | """Destroy and re-make this instance. |
658 | |
659 | @@ -800,6 +890,22 @@ |
660 | instance_ref = self.db.instance_get(context, instance_id) |
661 | return self.driver.get_vnc_console(instance_ref) |
662 | |
663 | + def _attach_volume_boot(self, context, instance_id, volume_id, mountpoint): |
664 | + """Attach a volume to an instance at boot time; the actual |
665 | + attach is done during instance creation.""" |
666 | + |
667 | + # TODO(yamahata): |
668 | + # should move check_attach to volume manager? |
669 | + volume.API().check_attach(context, volume_id) |
670 | + |
671 | + context = context.elevated() |
672 | + LOG.audit(_("instance %(instance_id)s: booting with " |
673 | + "volume %(volume_id)s at %(mountpoint)s") % |
674 | + locals(), context=context) |
675 | + dev_path = self.volume_manager.setup_compute_volume(context, volume_id) |
676 | + self.db.volume_attached(context, volume_id, instance_id, mountpoint) |
677 | + return dev_path |
678 | + |
679 | @checks_instance_lock |
680 | def attach_volume(self, context, instance_id, volume_id, mountpoint): |
681 | """Attach a volume to an instance.""" |
682 | @@ -817,6 +923,16 @@ |
683 | volume_id, |
684 | instance_id, |
685 | mountpoint) |
686 | + values = { |
687 | + 'instance_id': instance_id, |
688 | + 'device_name': mountpoint, |
689 | + 'delete_on_termination': False, |
690 | + 'virtual_name': None, |
691 | + 'snapshot_id': None, |
692 | + 'volume_id': volume_id, |
693 | + 'volume_size': None, |
694 | + 'no_device': None} |
695 | + self.db.block_device_mapping_create(context, values) |
696 | except Exception as exc: # pylint: disable=W0702 |
697 | # NOTE(vish): The inline callback eats the exception info so we |
698 | # log the traceback here and reraise the same |
699 | @@ -831,7 +947,7 @@ |
700 | |
701 | @exception.wrap_exception |
702 | @checks_instance_lock |
703 | - def detach_volume(self, context, instance_id, volume_id): |
704 | + def _detach_volume(self, context, instance_id, volume_id, destroy_bdm): |
705 | """Detach a volume from an instance.""" |
706 | context = context.elevated() |
707 | instance_ref = self.db.instance_get(context, instance_id) |
708 | @@ -847,8 +963,15 @@ |
709 | volume_ref['mountpoint']) |
710 | self.volume_manager.remove_compute_volume(context, volume_id) |
711 | self.db.volume_detached(context, volume_id) |
712 | + if destroy_bdm: |
713 | + self.db.block_device_mapping_destroy_by_instance_and_volume( |
714 | + context, instance_id, volume_id) |
715 | return True |
716 | |
717 | + def detach_volume(self, context, instance_id, volume_id): |
718 | + """Detach a volume from an instance.""" |
719 | + return self._detach_volume(context, instance_id, volume_id, True) |
720 | + |
721 | def remove_volume(self, context, volume_id): |
722 | """Remove volume on compute host. |
723 | |
724 | @@ -1174,11 +1297,14 @@ |
725 | "State=%(db_state)s, so setting state to " |
726 | "shutoff.") % locals()) |
727 | vm_state = power_state.SHUTOFF |
728 | + if db_instance['state_description'] == 'stopping': |
729 | + self.db.instance_stop(context, db_instance['id']) |
730 | + continue |
731 | else: |
732 | vm_state = vm_instance.state |
733 | vms_not_found_in_db.remove(name) |
734 | |
735 | - if db_instance['state_description'] == 'migrating': |
736 | + if (db_instance['state_description'] in ['migrating', 'stopping']): |
737 | # A situation which db record exists, but no instance" |
738 | # sometimes occurs while live-migration at src compute, |
739 | # this case should be ignored. |
740 | |
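The `_setup_block_device_mapping` flow above (a snapshot without a volume is first materialized as a new volume, then every volume is attached at its device name) can be sketched outside Nova. `FakeVolumeAPI` and `setup_block_device_mapping` below are purely illustrative stand-ins, not Nova APIs:

```python
# Standalone sketch of the block-device-mapping setup flow.
# FakeVolumeAPI is a hypothetical stand-in for nova.volume.API().

class FakeVolumeAPI:
    def __init__(self):
        self._next_id = 1
        self.volumes = {}

    def create(self, size, snapshot_id):
        # Create a volume, optionally cloned from a snapshot.
        vol = {'id': self._next_id, 'size': size, 'snapshot_id': snapshot_id}
        self.volumes[vol['id']] = vol
        self._next_id += 1
        return vol


def setup_block_device_mapping(volume_api, bdms):
    """Turn bdm records into device/volume attachments: a snapshot
    without a volume is first materialized as a new volume."""
    mapping = []
    for bdm in bdms:
        if bdm['snapshot_id'] is not None and bdm['volume_id'] is None:
            vol = volume_api.create(bdm['volume_size'], bdm['snapshot_id'])
            bdm['volume_id'] = vol['id']  # persist the created volume id
        if bdm['volume_id'] is not None:
            mapping.append({'mount_device': bdm['device_name'],
                            'volume_id': bdm['volume_id']})
    return mapping


api = FakeVolumeAPI()
bdms = [{'device_name': '/dev/vdb', 'snapshot_id': 5, 'volume_id': None,
         'volume_size': 1},
        {'device_name': '/dev/vdc', 'snapshot_id': None, 'volume_id': 42,
         'volume_size': None}]
mapping = setup_block_device_mapping(api, bdms)
```

The first entry creates volume 1 from snapshot 5 before attaching; the second attaches the pre-existing volume 42 directly.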
741 | === added file 'nova/compute/utils.py' |
742 | --- nova/compute/utils.py 1970-01-01 00:00:00 +0000 |
743 | +++ nova/compute/utils.py 2011-06-17 23:35:54 +0000 |
744 | @@ -0,0 +1,29 @@ |
745 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
746 | + |
747 | +# Copyright (c) 2011 VA Linux Systems Japan K.K |
748 | +# Copyright (c) 2011 Isaku Yamahata |
749 | +# |
750 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
751 | +# not use this file except in compliance with the License. You may obtain |
752 | +# a copy of the License at |
753 | +# |
754 | +# http://www.apache.org/licenses/LICENSE-2.0 |
755 | +# |
756 | +# Unless required by applicable law or agreed to in writing, software |
757 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
758 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
759 | +# License for the specific language governing permissions and limitations |
760 | +# under the License. |
761 | + |
762 | +from nova import volume |
763 | + |
764 | + |
765 | +def terminate_volumes(db, context, instance_id): |
766 | + """Delete volumes flagged delete_on_termination in the block device mapping.""" |
767 | + volume_api = volume.API() |
768 | + for bdm in db.block_device_mapping_get_all_by_instance(context, |
769 | + instance_id): |
770 | + #LOG.debug(_("terminating bdm %s") % bdm) |
771 | + if bdm['volume_id'] and bdm['delete_on_termination']: |
772 | + volume_api.delete(context, bdm['volume_id']) |
773 | + db.block_device_mapping_destroy(context, bdm['id']) |
774 | |
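The contract of `terminate_volumes` is narrow: only mappings that both reference a volume and carry `delete_on_termination=True` have their volume deleted. A minimal sketch of that filter (names here are illustrative, not Nova's):

```python
# Sketch of the delete_on_termination contract used by terminate_volumes.

def volumes_to_delete(bdms):
    """Return the volume ids that should be deleted on instance
    termination: mapped to a real volume AND flagged for deletion."""
    return [bdm['volume_id'] for bdm in bdms
            if bdm['volume_id'] and bdm['delete_on_termination']]


bdms = [{'volume_id': 1, 'delete_on_termination': False},   # kept
        {'volume_id': 2, 'delete_on_termination': True},    # deleted
        {'volume_id': None, 'delete_on_termination': True}]  # no volume
result = volumes_to_delete(bdms)
```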
775 | === modified file 'nova/db/api.py' |
776 | --- nova/db/api.py 2011-05-27 04:50:20 +0000 |
777 | +++ nova/db/api.py 2011-06-17 23:35:54 +0000 |
778 | @@ -414,6 +414,11 @@ |
779 | return IMPL.instance_destroy(context, instance_id) |
780 | |
781 | |
782 | +def instance_stop(context, instance_id): |
783 | + """Stop the instance or raise if it does not exist.""" |
784 | + return IMPL.instance_stop(context, instance_id) |
785 | + |
786 | + |
787 | def instance_get(context, instance_id): |
788 | """Get an instance or raise if it does not exist.""" |
789 | return IMPL.instance_get(context, instance_id) |
790 | @@ -920,6 +925,36 @@ |
791 | #################### |
792 | |
793 | |
794 | +def block_device_mapping_create(context, values): |
795 | + """Create an entry of block device mapping""" |
796 | + return IMPL.block_device_mapping_create(context, values) |
797 | + |
798 | + |
799 | +def block_device_mapping_update(context, bdm_id, values): |
800 | + """Update an entry of block device mapping""" |
801 | + return IMPL.block_device_mapping_update(context, bdm_id, values) |
802 | + |
803 | + |
804 | +def block_device_mapping_get_all_by_instance(context, instance_id): |
805 | + """Get all block device mappings belonging to an instance""" |
806 | + return IMPL.block_device_mapping_get_all_by_instance(context, instance_id) |
807 | + |
808 | + |
809 | +def block_device_mapping_destroy(context, bdm_id): |
810 | + """Destroy the block device mapping.""" |
811 | + return IMPL.block_device_mapping_destroy(context, bdm_id) |
812 | + |
813 | + |
814 | +def block_device_mapping_destroy_by_instance_and_volume(context, instance_id, |
815 | + volume_id): |
816 | + """Destroy the block device mapping or raise if it does not exist.""" |
817 | + return IMPL.block_device_mapping_destroy_by_instance_and_volume( |
818 | + context, instance_id, volume_id) |
819 | + |
820 | + |
821 | +#################### |
822 | + |
823 | + |
824 | def security_group_get_all(context): |
825 | """Get all security groups.""" |
826 | return IMPL.security_group_get_all(context) |
827 | |
828 | === modified file 'nova/db/sqlalchemy/api.py' |
829 | --- nova/db/sqlalchemy/api.py 2011-06-15 21:35:31 +0000 |
830 | +++ nova/db/sqlalchemy/api.py 2011-06-17 23:35:54 +0000 |
831 | @@ -840,6 +840,25 @@ |
832 | |
833 | |
834 | @require_context |
835 | +def instance_stop(context, instance_id): |
836 | + session = get_session() |
837 | + with session.begin(): |
838 | + from nova.compute import power_state |
839 | + session.query(models.Instance).\ |
840 | + filter_by(id=instance_id).\ |
841 | + update({'host': None, |
842 | + 'state': power_state.SHUTOFF, |
843 | + 'state_description': 'stopped', |
844 | + 'updated_at': literal_column('updated_at')}) |
845 | + session.query(models.SecurityGroupInstanceAssociation).\ |
846 | + filter_by(instance_id=instance_id).\ |
847 | + update({'updated_at': literal_column('updated_at')}) |
848 | + session.query(models.InstanceMetadata).\ |
849 | + filter_by(instance_id=instance_id).\ |
850 | + update({'updated_at': literal_column('updated_at')}) |
851 | + |
852 | + |
853 | +@require_context |
854 | def instance_get(context, instance_id, session=None): |
855 | if not session: |
856 | session = get_session() |
857 | @@ -1883,6 +1902,66 @@ |
858 | |
859 | |
860 | @require_context |
861 | +def block_device_mapping_create(context, values): |
862 | + bdm_ref = models.BlockDeviceMapping() |
863 | + bdm_ref.update(values) |
864 | + |
865 | + session = get_session() |
866 | + with session.begin(): |
867 | + bdm_ref.save(session=session) |
868 | + |
869 | + |
870 | +@require_context |
871 | +def block_device_mapping_update(context, bdm_id, values): |
872 | + session = get_session() |
873 | + with session.begin(): |
874 | + session.query(models.BlockDeviceMapping).\ |
875 | + filter_by(id=bdm_id).\ |
876 | + filter_by(deleted=False).\ |
877 | + update(values) |
878 | + |
879 | + |
880 | +@require_context |
881 | +def block_device_mapping_get_all_by_instance(context, instance_id): |
882 | + session = get_session() |
883 | + result = session.query(models.BlockDeviceMapping).\ |
884 | + filter_by(instance_id=instance_id).\ |
885 | + filter_by(deleted=False).\ |
886 | + all() |
887 | + if not result: |
888 | + return [] |
889 | + return result |
890 | + |
891 | + |
892 | +@require_context |
893 | +def block_device_mapping_destroy(context, bdm_id): |
894 | + session = get_session() |
895 | + with session.begin(): |
896 | + session.query(models.BlockDeviceMapping).\ |
897 | + filter_by(id=bdm_id).\ |
898 | + update({'deleted': True, |
899 | + 'deleted_at': utils.utcnow(), |
900 | + 'updated_at': literal_column('updated_at')}) |
901 | + |
902 | + |
903 | +@require_context |
904 | +def block_device_mapping_destroy_by_instance_and_volume(context, instance_id, |
905 | + volume_id): |
906 | + session = get_session() |
907 | + with session.begin(): |
908 | + session.query(models.BlockDeviceMapping).\ |
909 | + filter_by(instance_id=instance_id).\ |
910 | + filter_by(volume_id=volume_id).\ |
911 | + filter_by(deleted=False).\ |
912 | + update({'deleted': True, |
913 | + 'deleted_at': utils.utcnow(), |
914 | + 'updated_at': literal_column('updated_at')}) |
915 | + |
916 | + |
917 | +################### |
918 | + |
919 | + |
920 | +@require_context |
921 | def security_group_get_all(context): |
922 | session = get_session() |
923 | return session.query(models.SecurityGroup).\ |
924 | |
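The destroy functions above follow Nova's soft-delete convention: rows are never removed, only marked `deleted` with a `deleted_at` timestamp, and every read filters on `deleted=False`. The same pattern in plain SQL, using stdlib `sqlite3` (the table is a simplified version of the one in the migration):

```python
import sqlite3

# Soft-delete sketch: destroy marks the row deleted, reads filter it out.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE block_device_mapping '
             '(id INTEGER PRIMARY KEY, instance_id INTEGER, '
             'volume_id INTEGER, deleted INTEGER DEFAULT 0, deleted_at TEXT)')
conn.execute('INSERT INTO block_device_mapping '
             '(instance_id, volume_id) VALUES (1, 10)')
conn.execute('INSERT INTO block_device_mapping '
             '(instance_id, volume_id) VALUES (1, 11)')

# block_device_mapping_destroy_by_instance_and_volume: soft-delete one row
conn.execute("UPDATE block_device_mapping "
             "SET deleted = 1, deleted_at = datetime('now') "
             "WHERE instance_id = 1 AND volume_id = 10 AND deleted = 0")

# block_device_mapping_get_all_by_instance: live rows only
rows = conn.execute('SELECT volume_id FROM block_device_mapping '
                    'WHERE instance_id = 1 AND deleted = 0').fetchall()
```

After the soft delete, only volume 11's mapping remains visible, while the row for volume 10 is still in the table for auditing.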
925 | === added file 'nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py' |
926 | --- nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py 1970-01-01 00:00:00 +0000 |
927 | +++ nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py 2011-06-17 23:35:54 +0000 |
928 | @@ -0,0 +1,87 @@ |
929 | +# Copyright 2011 OpenStack LLC. |
930 | +# Copyright 2011 Isaku Yamahata |
931 | +# |
932 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
933 | +# not use this file except in compliance with the License. You may obtain |
934 | +# a copy of the License at |
935 | +# |
936 | +# http://www.apache.org/licenses/LICENSE-2.0 |
937 | +# |
938 | +# Unless required by applicable law or agreed to in writing, software |
939 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
940 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
941 | +# License for the specific language governing permissions and limitations |
942 | +# under the License. |
943 | + |
944 | +from sqlalchemy import MetaData, Table, Column |
945 | +from sqlalchemy import DateTime, Boolean, Integer, String |
946 | +from sqlalchemy import ForeignKey |
947 | +from nova import log as logging |
948 | + |
949 | +meta = MetaData() |
950 | + |
951 | +# Just for the ForeignKey and column creation to succeed, these are not the |
952 | +# actual definitions of the instances, volumes, or snapshots tables. |
953 | +instances = Table('instances', meta, |
954 | + Column('id', Integer(), primary_key=True, nullable=False), |
955 | + ) |
956 | + |
957 | +volumes = Table('volumes', meta, |
958 | + Column('id', Integer(), primary_key=True, nullable=False), |
959 | + ) |
960 | + |
961 | +snapshots = Table('snapshots', meta, |
962 | + Column('id', Integer(), primary_key=True, nullable=False), |
963 | + ) |
964 | + |
965 | + |
966 | +block_device_mapping = Table('block_device_mapping', meta, |
967 | + Column('created_at', DateTime(timezone=False)), |
968 | + Column('updated_at', DateTime(timezone=False)), |
969 | + Column('deleted_at', DateTime(timezone=False)), |
970 | + Column('deleted', Boolean(create_constraint=True, name=None)), |
971 | + Column('id', Integer(), primary_key=True, autoincrement=True), |
972 | + Column('instance_id', |
973 | + Integer(), |
974 | + ForeignKey('instances.id'), |
975 | + nullable=False), |
976 | + Column('device_name', |
977 | + String(length=255, convert_unicode=False, assert_unicode=None, |
978 | + unicode_error=None, _warn_on_bytestring=False), |
979 | + nullable=False), |
980 | + Column('delete_on_termination', |
981 | + Boolean(create_constraint=True, name=None), |
982 | + default=False), |
983 | + Column('virtual_name', |
984 | + String(length=255, convert_unicode=False, assert_unicode=None, |
985 | + unicode_error=None, _warn_on_bytestring=False), |
986 | + nullable=True), |
987 | + Column('snapshot_id', |
988 | + Integer(), |
989 | + ForeignKey('snapshots.id'), |
990 | + nullable=True), |
991 | + Column('volume_id', Integer(), ForeignKey('volumes.id'), |
992 | + nullable=True), |
993 | + Column('volume_size', Integer(), nullable=True), |
994 | + Column('no_device', |
995 | + Boolean(create_constraint=True, name=None), |
996 | + nullable=True), |
997 | + ) |
998 | + |
999 | + |
1000 | +def upgrade(migrate_engine): |
1001 | + # Upgrade operations go here. Don't create your own engine; |
1002 | + # bind migrate_engine to your metadata |
1003 | + meta.bind = migrate_engine |
1004 | + try: |
1005 | + block_device_mapping.create() |
1006 | + except Exception: |
1007 | + logging.info(repr(block_device_mapping)) |
1008 | + logging.exception('Exception while creating table') |
1009 | + meta.drop_all(tables=[block_device_mapping]) |
1010 | + raise |
1011 | + |
1012 | + |
1013 | +def downgrade(migrate_engine): |
1014 | + # Operations to reverse the above upgrade go here. |
1015 | + block_device_mapping.drop() |
1016 | |
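The migration's upgrade/downgrade pair reduces to creating and dropping one table. A hypothetical equivalent against plain `sqlite3` (so it runs without sqlalchemy-migrate), with the same columns as the migration above:

```python
import sqlite3

# Hypothetical upgrade/downgrade mirroring 024_add_block_device_mapping,
# written for plain sqlite3 rather than sqlalchemy-migrate.

def upgrade(conn):
    conn.execute('CREATE TABLE block_device_mapping '
                 '(id INTEGER PRIMARY KEY AUTOINCREMENT, '
                 'instance_id INTEGER NOT NULL, '
                 'device_name VARCHAR(255) NOT NULL, '
                 'delete_on_termination BOOLEAN DEFAULT 0, '
                 'virtual_name VARCHAR(255), snapshot_id INTEGER, '
                 'volume_id INTEGER, volume_size INTEGER, no_device BOOLEAN)')


def downgrade(conn):
    conn.execute('DROP TABLE block_device_mapping')


conn = sqlite3.connect(':memory:')
upgrade(conn)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type='table' AND name NOT LIKE 'sqlite_%'")]
```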
1017 | === modified file 'nova/db/sqlalchemy/models.py' |
1018 | --- nova/db/sqlalchemy/models.py 2011-06-08 15:45:23 +0000 |
1019 | +++ nova/db/sqlalchemy/models.py 2011-06-17 23:35:54 +0000 |
1020 | @@ -357,6 +357,45 @@ |
1021 | display_description = Column(String(255)) |
1022 | |
1023 | |
1024 | +class BlockDeviceMapping(BASE, NovaBase): |
1025 | + """Represents a block device mapping as defined by EC2.""" |
1026 | + __tablename__ = "block_device_mapping" |
1027 | + id = Column(Integer, primary_key=True, autoincrement=True) |
1028 | + |
1029 | + instance_id = Column(Integer, ForeignKey('instances.id'), nullable=False) |
1030 | + instance = relationship(Instance, |
1031 | + backref=backref('block_device_mapping'), |
1032 | + foreign_keys=instance_id, |
1033 | + primaryjoin='and_(BlockDeviceMapping.instance_id==' |
1034 | + 'Instance.id,' |
1035 | + 'BlockDeviceMapping.deleted==' |
1036 | + 'False)') |
1037 | + device_name = Column(String(255), nullable=False) |
1038 | + |
1039 | + # default=False for compatibility with the existing code. |
1040 | + # With the EC2 API, the default is True for a device |
1041 | + # specified by the AMI, and False for devices created |
1042 | + # at any other time. |
1043 | + delete_on_termination = Column(Boolean, default=False) |
1044 | + |
1045 | + # for ephemeral device |
1046 | + virtual_name = Column(String(255), nullable=True) |
1047 | + |
1048 | + # for snapshot or volume |
1049 | + snapshot_id = Column(Integer, ForeignKey('snapshots.id'), nullable=True) |
1050 | + # outer join |
1051 | + snapshot = relationship(Snapshot, |
1052 | + foreign_keys=snapshot_id) |
1053 | + |
1054 | + volume_id = Column(Integer, ForeignKey('volumes.id'), nullable=True) |
1055 | + volume = relationship(Volume, |
1056 | + foreign_keys=volume_id) |
1057 | + volume_size = Column(Integer, nullable=True) |
1058 | + |
1059 | + # no_device is used to suppress a device. |
1060 | + no_device = Column(Boolean, nullable=True) |
1061 | + |
1062 | + |
1063 | class ExportDevice(BASE, NovaBase): |
1064 | """Represates a shelf and blade that a volume can be exported on.""" |
1065 | __tablename__ = 'export_devices' |
1066 | |
1067 | === modified file 'nova/scheduler/simple.py' |
1068 | --- nova/scheduler/simple.py 2011-06-02 21:23:05 +0000 |
1069 | +++ nova/scheduler/simple.py 2011-06-17 23:35:54 +0000 |
1070 | @@ -39,7 +39,7 @@ |
1071 | class SimpleScheduler(chance.ChanceScheduler): |
1072 | """Implements Naive Scheduler that tries to find least loaded host.""" |
1073 | |
1074 | - def schedule_run_instance(self, context, instance_id, *_args, **_kwargs): |
1075 | + def _schedule_instance(self, context, instance_id, *_args, **_kwargs): |
1076 | """Picks a host that is up and has the fewest running instances.""" |
1077 | instance_ref = db.instance_get(context, instance_id) |
1078 | if (instance_ref['availability_zone'] |
1079 | @@ -75,6 +75,12 @@ |
1080 | " for this request. Is the appropriate" |
1081 | " service running?")) |
1082 | |
1083 | + def schedule_run_instance(self, context, instance_id, *_args, **_kwargs): |
1084 | + return self._schedule_instance(context, instance_id, *_args, **_kwargs) |
1085 | + |
1086 | + def schedule_start_instance(self, context, instance_id, *_args, **_kwargs): |
1087 | + return self._schedule_instance(context, instance_id, *_args, **_kwargs) |
1088 | + |
1089 | def schedule_create_volume(self, context, volume_id, *_args, **_kwargs): |
1090 | """Picks a host that is up and has the fewest volumes.""" |
1091 | volume_ref = db.volume_get(context, volume_id) |
1092 | |
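The scheduler change is a delegation refactor: both RPC entry points share one placement routine. The pattern in isolation, with made-up hosts and loads (not Nova's actual host state):

```python
class SimpleSchedulerSketch:
    """Toy version of the delegation above: schedule_run_instance and
    schedule_start_instance share one least-loaded-host routine."""

    def __init__(self, loads):
        self.loads = loads  # host name -> running instance count

    def _schedule_instance(self):
        # Pick the host with the fewest running instances,
        # as SimpleScheduler does.
        return min(self.loads, key=self.loads.get)

    def schedule_run_instance(self):
        return self._schedule_instance()

    def schedule_start_instance(self):
        return self._schedule_instance()


s = SimpleSchedulerSketch({'host1': 3, 'host2': 1})
```

Both entry points place the instance on `host2`, the least-loaded host.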
1093 | === modified file 'nova/tests/test_api.py' |
1094 | --- nova/tests/test_api.py 2011-05-23 21:15:10 +0000 |
1095 | +++ nova/tests/test_api.py 2011-06-17 23:35:54 +0000 |
1096 | @@ -89,7 +89,7 @@ |
1097 | class XmlConversionTestCase(test.TestCase): |
1098 | """Unit test api xml conversion""" |
1099 | def test_number_conversion(self): |
1100 | - conv = apirequest._try_convert |
1101 | + conv = ec2utils._try_convert |
1102 | self.assertEqual(conv('None'), None) |
1103 | self.assertEqual(conv('True'), True) |
1104 | self.assertEqual(conv('False'), False) |
1105 | |
1106 | === modified file 'nova/tests/test_cloud.py' |
1107 | --- nova/tests/test_cloud.py 2011-06-17 20:47:23 +0000 |
1108 | +++ nova/tests/test_cloud.py 2011-06-17 23:35:54 +0000 |
1109 | @@ -56,6 +56,7 @@ |
1110 | self.compute = self.start_service('compute') |
1111 | self.scheduter = self.start_service('scheduler') |
1112 | self.network = self.start_service('network') |
1113 | + self.volume = self.start_service('volume') |
1114 | self.image_service = utils.import_object(FLAGS.image_service) |
1115 | |
1116 | self.manager = manager.AuthManager() |
1117 | @@ -373,14 +374,21 @@ |
1118 | self.assertRaises(exception.ImageNotFound, deregister_image, |
1119 | self.context, 'ami-bad001') |
1120 | |
1121 | + def _run_instance(self, **kwargs): |
1122 | + rv = self.cloud.run_instances(self.context, **kwargs) |
1123 | + instance_id = rv['instancesSet'][0]['instanceId'] |
1124 | + return instance_id |
1125 | + |
1126 | + def _run_instance_wait(self, **kwargs): |
1127 | + ec2_instance_id = self._run_instance(**kwargs) |
1128 | + self._wait_for_running(ec2_instance_id) |
1129 | + return ec2_instance_id |
1130 | + |
1131 | def test_console_output(self): |
1132 | - instance_type = FLAGS.default_instance_type |
1133 | - max_count = 1 |
1134 | - kwargs = {'image_id': 'ami-1', |
1135 | - 'instance_type': instance_type, |
1136 | - 'max_count': max_count} |
1137 | - rv = self.cloud.run_instances(self.context, **kwargs) |
1138 | - instance_id = rv['instancesSet'][0]['instanceId'] |
1139 | + instance_id = self._run_instance( |
1140 | + image_id='ami-1', |
1141 | + instance_type=FLAGS.default_instance_type, |
1142 | + max_count=1) |
1143 | output = self.cloud.get_console_output(context=self.context, |
1144 | instance_id=[instance_id]) |
1145 | self.assertEquals(b64decode(output['output']), 'FAKE CONSOLE?OUTPUT') |
1146 | @@ -389,9 +397,7 @@ |
1147 | rv = self.cloud.terminate_instances(self.context, [instance_id]) |
1148 | |
1149 | def test_ajax_console(self): |
1150 | - kwargs = {'image_id': 'ami-1'} |
1151 | - rv = self.cloud.run_instances(self.context, **kwargs) |
1152 | - instance_id = rv['instancesSet'][0]['instanceId'] |
1153 | + instance_id = self._run_instance(image_id='ami-1') |
1154 | output = self.cloud.get_ajax_console(context=self.context, |
1155 | instance_id=[instance_id]) |
1156 | self.assertEquals(output['url'], |
1157 | @@ -569,3 +575,299 @@ |
1158 | vol = db.volume_get(self.context, vol['id']) |
1159 | self.assertEqual(None, vol['mountpoint']) |
1160 | db.volume_destroy(self.context, vol['id']) |
1161 | + |
1162 | + def _restart_compute_service(self, periodic_interval=None): |
1163 | + """Restart compute service. NOTE: fake driver forgets all instances.""" |
1164 | + self.compute.kill() |
1165 | + if periodic_interval: |
1166 | + self.compute = self.start_service( |
1167 | + 'compute', periodic_interval=periodic_interval) |
1168 | + else: |
1169 | + self.compute = self.start_service('compute') |
1170 | + |
1171 | + def _wait_for_state(self, ctxt, instance_id, predicate): |
1172 | + """Wait for an instance to reach a given state""" |
1173 | + id = ec2utils.ec2_id_to_id(instance_id) |
1174 | + while True: |
1175 | + info = self.cloud.compute_api.get(context=ctxt, instance_id=id) |
1176 | + LOG.debug(info) |
1177 | + if predicate(info): |
1178 | + break |
1179 | + greenthread.sleep(1) |
1180 | + |
1181 | + def _wait_for_running(self, instance_id): |
1182 | + def is_running(info): |
1183 | + return info['state_description'] == 'running' |
1184 | + self._wait_for_state(self.context, instance_id, is_running) |
1185 | + |
1186 | + def _wait_for_stopped(self, instance_id): |
1187 | + def is_stopped(info): |
1188 | + return info['state_description'] == 'stopped' |
1189 | + self._wait_for_state(self.context, instance_id, is_stopped) |
1190 | + |
1191 | + def _wait_for_terminate(self, instance_id): |
1192 | + def is_deleted(info): |
1193 | + return info['deleted'] |
1194 | + elevated = self.context.elevated(read_deleted=True) |
1195 | + self._wait_for_state(elevated, instance_id, is_deleted) |
1196 | + |
1197 | + def test_stop_start_instance(self): |
1198 | + """Makes sure stop/start instance works""" |
1199 | + # run periodic tasks at a short interval to avoid a 60s wait. |
1200 | + self._restart_compute_service(periodic_interval=0.3) |
1201 | + |
1202 | + kwargs = {'image_id': 'ami-1', |
1203 | + 'instance_type': FLAGS.default_instance_type, |
1204 | + 'max_count': 1, } |
1205 | + instance_id = self._run_instance_wait(**kwargs) |
1206 | + |
1207 | + # a running instance can't be started. It is just ignored. |
1208 | + result = self.cloud.start_instances(self.context, [instance_id]) |
1209 | + greenthread.sleep(0.3) |
1210 | + self.assertTrue(result) |
1211 | + |
1212 | + result = self.cloud.stop_instances(self.context, [instance_id]) |
1213 | + greenthread.sleep(0.3) |
1214 | + self.assertTrue(result) |
1215 | + self._wait_for_stopped(instance_id) |
1216 | + |
1217 | + result = self.cloud.start_instances(self.context, [instance_id]) |
1218 | + greenthread.sleep(0.3) |
1219 | + self.assertTrue(result) |
1220 | + self._wait_for_running(instance_id) |
1221 | + |
1222 | + result = self.cloud.stop_instances(self.context, [instance_id]) |
1223 | + greenthread.sleep(0.3) |
1224 | + self.assertTrue(result) |
1225 | + self._wait_for_stopped(instance_id) |
1226 | + |
1227 | + result = self.cloud.terminate_instances(self.context, [instance_id]) |
1228 | + greenthread.sleep(0.3) |
1229 | + self.assertTrue(result) |
1230 | + |
1231 | + self._restart_compute_service() |
1232 | + |
1233 | + def _volume_create(self): |
1234 | + kwargs = {'status': 'available', |
1235 | + 'host': self.volume.host, |
1236 | + 'size': 1, |
1237 | + 'attach_status': 'detached', } |
1238 | + return db.volume_create(self.context, kwargs) |
1239 | + |
1240 | + def _assert_volume_attached(self, vol, instance_id, mountpoint): |
1241 | + self.assertEqual(vol['instance_id'], instance_id) |
1242 | + self.assertEqual(vol['mountpoint'], mountpoint) |
1243 | + self.assertEqual(vol['status'], "in-use") |
1244 | + self.assertEqual(vol['attach_status'], "attached") |
1245 | + |
1246 | + def _assert_volume_detached(self, vol): |
1247 | + self.assertEqual(vol['instance_id'], None) |
1248 | + self.assertEqual(vol['mountpoint'], None) |
1249 | + self.assertEqual(vol['status'], "available") |
1250 | + self.assertEqual(vol['attach_status'], "detached") |
1251 | + |
1252 | + def test_stop_start_with_volume(self): |
1253 | + """Make sure run instance with block device mapping works""" |
1254 | + |
1255 | + # run periodic tasks at a short interval to avoid a 60s wait. |
1256 | + self._restart_compute_service(periodic_interval=0.3) |
1257 | + |
1258 | + vol1 = self._volume_create() |
1259 | + vol2 = self._volume_create() |
1260 | + kwargs = {'image_id': 'ami-1', |
1261 | + 'instance_type': FLAGS.default_instance_type, |
1262 | + 'max_count': 1, |
1263 | + 'block_device_mapping': [{'device_name': '/dev/vdb', |
1264 | + 'volume_id': vol1['id'], |
1265 | + 'delete_on_termination': False, }, |
1266 | + {'device_name': '/dev/vdc', |
1267 | + 'volume_id': vol2['id'], |
1268 | + 'delete_on_termination': True, }, |
1269 | + ]} |
1270 | + ec2_instance_id = self._run_instance_wait(**kwargs) |
1271 | + instance_id = ec2utils.ec2_id_to_id(ec2_instance_id) |
1272 | + |
1273 | + vols = db.volume_get_all_by_instance(self.context, instance_id) |
1274 | + self.assertEqual(len(vols), 2) |
1275 | + for vol in vols: |
1276 | + self.assertTrue(vol['id'] == vol1['id'] or vol['id'] == vol2['id']) |
1277 | + |
1278 | + vol = db.volume_get(self.context, vol1['id']) |
1279 | + self._assert_volume_attached(vol, instance_id, '/dev/vdb') |
1280 | + |
1281 | + vol = db.volume_get(self.context, vol2['id']) |
1282 | + self._assert_volume_attached(vol, instance_id, '/dev/vdc') |
1283 | + |
1284 | + result = self.cloud.stop_instances(self.context, [ec2_instance_id]) |
1285 | + self.assertTrue(result) |
1286 | + self._wait_for_stopped(ec2_instance_id) |
1287 | + |
1288 | + vol = db.volume_get(self.context, vol1['id']) |
1289 | + self._assert_volume_detached(vol) |
1290 | + vol = db.volume_get(self.context, vol2['id']) |
1291 | + self._assert_volume_detached(vol) |
1292 | + |
1293 | + self.cloud.start_instances(self.context, [ec2_instance_id]) |
1294 | + self._wait_for_running(ec2_instance_id) |
1295 | + vols = db.volume_get_all_by_instance(self.context, instance_id) |
1296 | + self.assertEqual(len(vols), 2) |
1297 | + for vol in vols: |
1298 | + self.assertTrue(vol['id'] == vol1['id'] or vol['id'] == vol2['id']) |
1299 | + self.assertTrue(vol['mountpoint'] == '/dev/vdb' or |
1300 | + vol['mountpoint'] == '/dev/vdc') |
1301 | + self.assertEqual(vol['instance_id'], instance_id) |
1302 | + self.assertEqual(vol['status'], "in-use") |
1303 | + self.assertEqual(vol['attach_status'], "attached") |
1304 | + |
1305 | + self.cloud.terminate_instances(self.context, [ec2_instance_id]) |
1306 | + greenthread.sleep(0.3) |
1307 | + |
1308 | + admin_ctxt = context.get_admin_context(read_deleted=False) |
1309 | + vol = db.volume_get(admin_ctxt, vol1['id']) |
1310 | + self.assertFalse(vol['deleted']) |
1311 | + db.volume_destroy(self.context, vol1['id']) |
1312 | + |
1313 | + greenthread.sleep(0.3) |
1314 | + admin_ctxt = context.get_admin_context(read_deleted=True) |
1315 | + vol = db.volume_get(admin_ctxt, vol2['id']) |
1316 | + self.assertTrue(vol['deleted']) |
1317 | + |
1318 | + self._restart_compute_service() |
1319 | + |
1320 | + def test_stop_with_attached_volume(self): |
1321 | + """Make sure attach info is reflected in the block device mapping""" |
1322 | + # run periodic tasks at a short interval to avoid waiting the default 60s. |
1323 | + self._restart_compute_service(periodic_interval=0.3) |
1324 | + |
1325 | + vol1 = self._volume_create() |
1326 | + vol2 = self._volume_create() |
1327 | + kwargs = {'image_id': 'ami-1', |
1328 | + 'instance_type': FLAGS.default_instance_type, |
1329 | + 'max_count': 1, |
1330 | + 'block_device_mapping': [{'device_name': '/dev/vdb', |
1331 | + 'volume_id': vol1['id'], |
1332 | + 'delete_on_termination': True}]} |
1333 | + ec2_instance_id = self._run_instance_wait(**kwargs) |
1334 | + instance_id = ec2utils.ec2_id_to_id(ec2_instance_id) |
1335 | + |
1336 | + vols = db.volume_get_all_by_instance(self.context, instance_id) |
1337 | + self.assertEqual(len(vols), 1) |
1338 | + for vol in vols: |
1339 | + self.assertEqual(vol['id'], vol1['id']) |
1340 | + self._assert_volume_attached(vol, instance_id, '/dev/vdb') |
1341 | + |
1342 | + vol = db.volume_get(self.context, vol2['id']) |
1343 | + self._assert_volume_detached(vol) |
1344 | + |
1345 | + self.cloud.compute_api.attach_volume(self.context, |
1346 | + instance_id=instance_id, |
1347 | + volume_id=vol2['id'], |
1348 | + device='/dev/vdc') |
1349 | + greenthread.sleep(0.3) |
1350 | + vol = db.volume_get(self.context, vol2['id']) |
1351 | + self._assert_volume_attached(vol, instance_id, '/dev/vdc') |
1352 | + |
1353 | + self.cloud.compute_api.detach_volume(self.context, |
1354 | + volume_id=vol1['id']) |
1355 | + greenthread.sleep(0.3) |
1356 | + vol = db.volume_get(self.context, vol1['id']) |
1357 | + self._assert_volume_detached(vol) |
1358 | + |
1359 | + result = self.cloud.stop_instances(self.context, [ec2_instance_id]) |
1360 | + self.assertTrue(result) |
1361 | + self._wait_for_stopped(ec2_instance_id) |
1362 | + |
1363 | + for vol_id in (vol1['id'], vol2['id']): |
1364 | + vol = db.volume_get(self.context, vol_id) |
1365 | + self._assert_volume_detached(vol) |
1366 | + |
1367 | + self.cloud.start_instances(self.context, [ec2_instance_id]) |
1368 | + self._wait_for_running(ec2_instance_id) |
1369 | + vols = db.volume_get_all_by_instance(self.context, instance_id) |
1370 | + self.assertEqual(len(vols), 1) |
1371 | + for vol in vols: |
1372 | + self.assertEqual(vol['id'], vol2['id']) |
1373 | + self._assert_volume_attached(vol, instance_id, '/dev/vdc') |
1374 | + |
1375 | + vol = db.volume_get(self.context, vol1['id']) |
1376 | + self._assert_volume_detached(vol) |
1377 | + |
1378 | + self.cloud.terminate_instances(self.context, [ec2_instance_id]) |
1379 | + greenthread.sleep(0.3) |
1380 | + |
1381 | + for vol_id in (vol1['id'], vol2['id']): |
1382 | + vol = db.volume_get(self.context, vol_id) |
1383 | + self.assertEqual(vol['id'], vol_id) |
1384 | + self._assert_volume_detached(vol) |
1385 | + db.volume_destroy(self.context, vol_id) |
1386 | + |
1387 | + self._restart_compute_service() |
1388 | + |
1389 | + def _create_snapshot(self, ec2_volume_id): |
1390 | + result = self.cloud.create_snapshot(self.context, |
1391 | + volume_id=ec2_volume_id) |
1392 | + greenthread.sleep(0.3) |
1393 | + return result['snapshotId'] |
1394 | + |
1395 | + def test_run_with_snapshot(self): |
1396 | + """Make sure run/stop/start of an instance created from snapshots works.""" |
1397 | + vol = self._volume_create() |
1398 | + ec2_volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x') |
1399 | + |
1400 | + ec2_snapshot1_id = self._create_snapshot(ec2_volume_id) |
1401 | + snapshot1_id = ec2utils.ec2_id_to_id(ec2_snapshot1_id) |
1402 | + ec2_snapshot2_id = self._create_snapshot(ec2_volume_id) |
1403 | + snapshot2_id = ec2utils.ec2_id_to_id(ec2_snapshot2_id) |
1404 | + |
1405 | + kwargs = {'image_id': 'ami-1', |
1406 | + 'instance_type': FLAGS.default_instance_type, |
1407 | + 'max_count': 1, |
1408 | + 'block_device_mapping': [{'device_name': '/dev/vdb', |
1409 | + 'snapshot_id': snapshot1_id, |
1410 | + 'delete_on_termination': False, }, |
1411 | + {'device_name': '/dev/vdc', |
1412 | + 'snapshot_id': snapshot2_id, |
1413 | + 'delete_on_termination': True}]} |
1414 | + ec2_instance_id = self._run_instance_wait(**kwargs) |
1415 | + instance_id = ec2utils.ec2_id_to_id(ec2_instance_id) |
1416 | + |
1417 | + vols = db.volume_get_all_by_instance(self.context, instance_id) |
1418 | + self.assertEqual(len(vols), 2) |
1419 | + vol1_id = None |
1420 | + vol2_id = None |
1421 | + for vol in vols: |
1422 | + snapshot_id = vol['snapshot_id'] |
1423 | + if snapshot_id == snapshot1_id: |
1424 | + vol1_id = vol['id'] |
1425 | + mountpoint = '/dev/vdb' |
1426 | + elif snapshot_id == snapshot2_id: |
1427 | + vol2_id = vol['id'] |
1428 | + mountpoint = '/dev/vdc' |
1429 | + else: |
1430 | + self.fail() |
1431 | + |
1432 | + self._assert_volume_attached(vol, instance_id, mountpoint) |
1433 | + |
1434 | + self.assertTrue(vol1_id) |
1435 | + self.assertTrue(vol2_id) |
1436 | + |
1437 | + self.cloud.terminate_instances(self.context, [ec2_instance_id]) |
1438 | + greenthread.sleep(0.3) |
1439 | + self._wait_for_terminate(ec2_instance_id) |
1440 | + |
1441 | + greenthread.sleep(0.3) |
1442 | + admin_ctxt = context.get_admin_context(read_deleted=False) |
1443 | + vol = db.volume_get(admin_ctxt, vol1_id) |
1444 | + self._assert_volume_detached(vol) |
1445 | + self.assertFalse(vol['deleted']) |
1446 | + db.volume_destroy(self.context, vol1_id) |
1447 | + |
1448 | + greenthread.sleep(0.3) |
1449 | + admin_ctxt = context.get_admin_context(read_deleted=True) |
1450 | + vol = db.volume_get(admin_ctxt, vol2_id) |
1451 | + self.assertTrue(vol['deleted']) |
1452 | + |
1453 | + for snapshot_id in (ec2_snapshot1_id, ec2_snapshot2_id): |
1454 | + self.cloud.delete_snapshot(self.context, snapshot_id) |
1455 | + greenthread.sleep(0.3) |
1456 | + db.volume_destroy(self.context, vol['id']) |
1457 | |
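For reference, the `block_device_mapping` entries exercised by the tests above name a guest device and back it with either an existing volume or a snapshot (from which a new volume is cloned). A minimal sketch of that shape, with a hypothetical validity check (the key names come from the test code; the helper itself is illustrative, not part of the branch):

```python
def validate_bdm(entry):
    """An entry needs a device name and exactly one backing source."""
    has_volume = 'volume_id' in entry
    has_snapshot = 'snapshot_id' in entry
    return 'device_name' in entry and (has_volume != has_snapshot)

mapping = [
    {'device_name': '/dev/vdb', 'volume_id': 1,
     'delete_on_termination': False},
    {'device_name': '/dev/vdc', 'snapshot_id': 2,
     'delete_on_termination': True},
]

assert all(validate_bdm(e) for e in mapping)
```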
1458 | === modified file 'nova/tests/test_compute.py' |
1459 | --- nova/tests/test_compute.py 2011-06-07 17:32:06 +0000 |
1460 | +++ nova/tests/test_compute.py 2011-06-17 23:35:54 +0000 |
1461 | @@ -228,6 +228,21 @@ |
1462 | self.assert_(instance_ref['launched_at'] < terminate) |
1463 | self.assert_(instance_ref['deleted_at'] > terminate) |
1464 | |
1465 | + def test_stop(self): |
1466 | + """Ensure instance can be stopped""" |
1467 | + instance_id = self._create_instance() |
1468 | + self.compute.run_instance(self.context, instance_id) |
1469 | + self.compute.stop_instance(self.context, instance_id) |
1470 | + self.compute.terminate_instance(self.context, instance_id) |
1471 | + |
1472 | + def test_start(self): |
1473 | + """Ensure instance can be started""" |
1474 | + instance_id = self._create_instance() |
1475 | + self.compute.run_instance(self.context, instance_id) |
1476 | + self.compute.stop_instance(self.context, instance_id) |
1477 | + self.compute.start_instance(self.context, instance_id) |
1478 | + self.compute.terminate_instance(self.context, instance_id) |
1479 | + |
1480 | def test_pause(self): |
1481 | """Ensure instance can be paused""" |
1482 | instance_id = self._create_instance() |
1483 | |
1484 | === modified file 'nova/virt/driver.py' |
1485 | --- nova/virt/driver.py 2011-03-30 00:35:24 +0000 |
1486 | +++ nova/virt/driver.py 2011-06-17 23:35:54 +0000 |
1487 | @@ -61,7 +61,7 @@ |
1488 | """Return a list of InstanceInfo for all registered VMs""" |
1489 | raise NotImplementedError() |
1490 | |
1491 | - def spawn(self, instance, network_info=None): |
1492 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1493 | """Launch a VM for the specified instance""" |
1494 | raise NotImplementedError() |
1495 | |
1496 | |
1497 | === modified file 'nova/virt/fake.py' |
1498 | --- nova/virt/fake.py 2011-05-17 14:49:12 +0000 |
1499 | +++ nova/virt/fake.py 2011-06-17 23:35:54 +0000 |
1500 | @@ -129,7 +129,7 @@ |
1501 | info_list.append(self._map_to_instance_info(instance)) |
1502 | return info_list |
1503 | |
1504 | - def spawn(self, instance): |
1505 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1506 | """ |
1507 | Create a new instance/VM/domain on the virtualization platform. |
1508 | |
1509 | @@ -237,6 +237,10 @@ |
1510 | """ |
1511 | pass |
1512 | |
1513 | + def poll_rescued_instances(self, timeout): |
1514 | + """Poll for rescued instances""" |
1515 | + pass |
1516 | + |
1517 | def migrate_disk_and_power_off(self, instance, dest): |
1518 | """ |
1519 | Transfers the disk of a running instance in multiple phases, turning |
1520 | |
1521 | === modified file 'nova/virt/hyperv.py' |
1522 | --- nova/virt/hyperv.py 2011-05-28 11:49:31 +0000 |
1523 | +++ nova/virt/hyperv.py 2011-06-17 23:35:54 +0000 |
1524 | @@ -139,7 +139,7 @@ |
1525 | |
1526 | return instance_infos |
1527 | |
1528 | - def spawn(self, instance): |
1529 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1530 | """ Create a new VM and start it.""" |
1531 | vm = self._lookup(instance.name) |
1532 | if vm is not None: |
1533 | |
1534 | === modified file 'nova/virt/libvirt.xml.template' |
1535 | --- nova/virt/libvirt.xml.template 2011-05-31 17:45:26 +0000 |
1536 | +++ nova/virt/libvirt.xml.template 2011-06-17 23:35:54 +0000 |
1537 | @@ -67,11 +67,13 @@ |
1538 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> |
1539 | </disk> |
1540 | #else |
1541 | + #if not ($getVar('ebs_root', False)) |
1542 | <disk type='file'> |
1543 | <driver type='${driver_type}'/> |
1544 | <source file='${basepath}/disk'/> |
1545 | <target dev='${disk_prefix}a' bus='${disk_bus}'/> |
1546 | </disk> |
1547 | + #end if |
1548 | #if $getVar('local', False) |
1549 | <disk type='file'> |
1550 | <driver type='${driver_type}'/> |
1551 | @@ -79,6 +81,13 @@ |
1552 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> |
1553 | </disk> |
1554 | #end if |
1555 | + #for $vol in $volumes |
1556 | + <disk type='block'> |
1557 | + <driver type='raw'/> |
1558 | + <source dev='${vol.device_path}'/> |
1559 | + <target dev='${vol.mount_device}' bus='${disk_bus}'/> |
1560 | + </disk> |
1561 | + #end for |
1562 | #end if |
1563 | #end if |
1564 | |
1565 | |
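The template change above suppresses the file-backed root disk when `ebs_root` is set and emits one `<disk type='block'>` element per entry in `$volumes`. A Python sketch of the equivalent rendering (variable names mirror the template; this stands in for the Cheetah template, it is not the template itself):

```python
def render_volume_disks(volumes, disk_bus='virtio'):
    """Render one libvirt <disk type='block'> element per attached volume."""
    disks = []
    for vol in volumes:
        disks.append(
            "    <disk type='block'>\n"
            "      <driver type='raw'/>\n"
            "      <source dev='%(device_path)s'/>\n"
            "      <target dev='%(mount_device)s' bus='%(bus)s'/>\n"
            "    </disk>"
            % {'device_path': vol['device_path'],
               'mount_device': vol['mount_device'],
               'bus': disk_bus})
    return '\n'.join(disks)

fragment = render_volume_disks(
    [{'device_path': '/dev/disk/by-path/volume-id-1',
      'mount_device': 'vdb'}])
```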
1566 | === modified file 'nova/virt/libvirt/connection.py' |
1567 | --- nova/virt/libvirt/connection.py 2011-06-06 15:54:11 +0000 |
1568 | +++ nova/virt/libvirt/connection.py 2011-06-17 23:35:54 +0000 |
1569 | @@ -40,6 +40,7 @@ |
1570 | import multiprocessing |
1571 | import os |
1572 | import random |
1573 | +import re |
1574 | import shutil |
1575 | import subprocess |
1576 | import sys |
1577 | @@ -148,6 +149,10 @@ |
1578 | Template = t.Template |
1579 | |
1580 | |
1581 | +def _strip_dev(mount_path): |
1582 | + return re.sub(r'^/dev/', '', mount_path) |
1583 | + |
1584 | + |
1585 | class LibvirtConnection(driver.ComputeDriver): |
1586 | |
1587 | def __init__(self, read_only): |
1588 | @@ -575,11 +580,14 @@ |
1589 | # NOTE(ilyaalekseyev): Implementation like in multinics |
1590 | # for xenapi(tr3buchet) |
1591 | @exception.wrap_exception |
1592 | - def spawn(self, instance, network_info=None): |
1593 | - xml = self.to_xml(instance, False, network_info) |
1594 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1595 | + xml = self.to_xml(instance, False, network_info=network_info, |
1596 | + block_device_mapping=block_device_mapping) |
1597 | + block_device_mapping = block_device_mapping or [] |
1598 | self.firewall_driver.setup_basic_filtering(instance, network_info) |
1599 | self.firewall_driver.prepare_instance_filter(instance, network_info) |
1600 | - self._create_image(instance, xml, network_info=network_info) |
1601 | + self._create_image(instance, xml, network_info=network_info, |
1602 | + block_device_mapping=block_device_mapping) |
1603 | domain = self._create_new_domain(xml) |
1604 | LOG.debug(_("instance %s: is running"), instance['name']) |
1605 | self.firewall_driver.apply_instance_filter(instance) |
1606 | @@ -761,7 +769,8 @@ |
1607 | # TODO(vish): should we format disk by default? |
1608 | |
1609 | def _create_image(self, inst, libvirt_xml, suffix='', disk_images=None, |
1610 | - network_info=None): |
1611 | + network_info=None, block_device_mapping=None): |
1612 | + block_device_mapping = block_device_mapping or [] |
1613 | if not network_info: |
1614 | network_info = netutils.get_network_info(inst) |
1615 | |
1616 | @@ -824,16 +833,19 @@ |
1617 | size = None |
1618 | root_fname += "_sm" |
1619 | |
1620 | - self._cache_image(fn=self._fetch_image, |
1621 | - target=basepath('disk'), |
1622 | - fname=root_fname, |
1623 | - cow=FLAGS.use_cow_images, |
1624 | - image_id=disk_images['image_id'], |
1625 | - user=user, |
1626 | - project=project, |
1627 | - size=size) |
1628 | + if not self._volume_in_mapping(self.root_mount_device, |
1629 | + block_device_mapping): |
1630 | + self._cache_image(fn=self._fetch_image, |
1631 | + target=basepath('disk'), |
1632 | + fname=root_fname, |
1633 | + cow=FLAGS.use_cow_images, |
1634 | + image_id=disk_images['image_id'], |
1635 | + user=user, |
1636 | + project=project, |
1637 | + size=size) |
1638 | |
1639 | - if inst_type['local_gb']: |
1640 | + if inst_type['local_gb'] and not self._volume_in_mapping( |
1641 | + self.local_mount_device, block_device_mapping): |
1642 | self._cache_image(fn=self._create_local, |
1643 | target=basepath('disk.local'), |
1644 | fname="local_%s" % inst_type['local_gb'], |
1645 | @@ -948,7 +960,20 @@ |
1646 | |
1647 | return result |
1648 | |
1649 | - def _prepare_xml_info(self, instance, rescue=False, network_info=None): |
1650 | + root_mount_device = 'vda' # FIXME: hard-coded for now. |
1651 | + local_mount_device = 'vdb' # FIXME: hard-coded for now. |
1652 | + |
1653 | + def _volume_in_mapping(self, mount_device, block_device_mapping): |
1654 | + mount_device_ = _strip_dev(mount_device) |
1655 | + for vol in block_device_mapping: |
1656 | + vol_mount_device = _strip_dev(vol['mount_device']) |
1657 | + if vol_mount_device == mount_device_: |
1658 | + return True |
1659 | + return False |
1660 | + |
1661 | + def _prepare_xml_info(self, instance, rescue=False, network_info=None, |
1662 | + block_device_mapping=None): |
1663 | + block_device_mapping = block_device_mapping or [] |
1664 | # TODO(adiantum) remove network_info creation code |
1665 | # when multinics will be completed |
1666 | if not network_info: |
1667 | @@ -966,6 +991,16 @@ |
1668 | else: |
1669 | driver_type = 'raw' |
1670 | |
1671 | + for vol in block_device_mapping: |
1672 | + vol['mount_device'] = _strip_dev(vol['mount_device']) |
1673 | + ebs_root = self._volume_in_mapping(self.root_mount_device, |
1674 | + block_device_mapping) |
1675 | + if self._volume_in_mapping(self.local_mount_device, |
1676 | + block_device_mapping): |
1677 | + local_gb = False |
1678 | + else: |
1679 | + local_gb = inst_type['local_gb'] |
1680 | + |
1681 | xml_info = {'type': FLAGS.libvirt_type, |
1682 | 'name': instance['name'], |
1683 | 'basepath': os.path.join(FLAGS.instances_path, |
1684 | @@ -973,9 +1008,11 @@ |
1685 | 'memory_kb': inst_type['memory_mb'] * 1024, |
1686 | 'vcpus': inst_type['vcpus'], |
1687 | 'rescue': rescue, |
1688 | - 'local': inst_type['local_gb'], |
1689 | + 'local': local_gb, |
1690 | 'driver_type': driver_type, |
1691 | - 'nics': nics} |
1692 | + 'nics': nics, |
1693 | + 'ebs_root': ebs_root, |
1694 | + 'volumes': block_device_mapping} |
1695 | |
1696 | if FLAGS.vnc_enabled: |
1697 | if FLAGS.libvirt_type != 'lxc': |
1698 | @@ -991,10 +1028,13 @@ |
1699 | xml_info['disk'] = xml_info['basepath'] + "/disk" |
1700 | return xml_info |
1701 | |
1702 | - def to_xml(self, instance, rescue=False, network_info=None): |
1703 | + def to_xml(self, instance, rescue=False, network_info=None, |
1704 | + block_device_mapping=None): |
1705 | + block_device_mapping = block_device_mapping or [] |
1706 | # TODO(termie): cache? |
1707 | LOG.debug(_('instance %s: starting toXML method'), instance['name']) |
1708 | - xml_info = self._prepare_xml_info(instance, rescue, network_info) |
1709 | + xml_info = self._prepare_xml_info(instance, rescue, network_info, |
1710 | + block_device_mapping) |
1711 | xml = str(Template(self.libvirt_xml, searchList=[xml_info])) |
1712 | LOG.debug(_('instance %s: finished toXML method'), instance['name']) |
1713 | return xml |
1714 | |
1715 | === modified file 'nova/virt/vmwareapi_conn.py' |
1716 | --- nova/virt/vmwareapi_conn.py 2011-04-12 21:43:07 +0000 |
1717 | +++ nova/virt/vmwareapi_conn.py 2011-06-17 23:35:54 +0000 |
1718 | @@ -124,7 +124,7 @@ |
1719 | """List VM instances.""" |
1720 | return self._vmops.list_instances() |
1721 | |
1722 | - def spawn(self, instance): |
1723 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1724 | """Create VM instance.""" |
1725 | self._vmops.spawn(instance) |
1726 | |
1727 | |
1728 | === modified file 'nova/virt/xenapi_conn.py' |
1729 | --- nova/virt/xenapi_conn.py 2011-05-18 16:27:39 +0000 |
1730 | +++ nova/virt/xenapi_conn.py 2011-06-17 23:35:54 +0000 |
1731 | @@ -194,7 +194,7 @@ |
1732 | def list_instances_detail(self): |
1733 | return self._vmops.list_instances_detail() |
1734 | |
1735 | - def spawn(self, instance): |
1736 | + def spawn(self, instance, network_info=None, block_device_mapping=None): |
1737 | """Create VM instance""" |
1738 | self._vmops.spawn(instance) |
1739 | |
1740 | |
1741 | === modified file 'nova/volume/api.py' |
1742 | --- nova/volume/api.py 2011-06-02 21:23:05 +0000 |
1743 | +++ nova/volume/api.py 2011-06-17 23:35:54 +0000 |
1744 | @@ -21,6 +21,9 @@ |
1745 | """ |
1746 | |
1747 | |
1748 | +from eventlet import greenthread |
1749 | + |
1750 | +from nova import db |
1751 | from nova import exception |
1752 | from nova import flags |
1753 | from nova import log as logging |
1754 | @@ -44,7 +47,8 @@ |
1755 | if snapshot['status'] != "available": |
1756 | raise exception.ApiError( |
1757 | _("Snapshot status must be available")) |
1758 | - size = snapshot['volume_size'] |
1759 | + if not size: |
1760 | + size = snapshot['volume_size'] |
1761 | |
1762 | if quota.allowed_volumes(context, 1, size) < 1: |
1763 | pid = context.project_id |
1764 | @@ -73,6 +77,14 @@ |
1765 | "snapshot_id": snapshot_id}}) |
1766 | return volume |
1767 | |
1768 | + # TODO(yamahata): eliminate dumb polling |
1769 | + def wait_creation(self, context, volume_id): |
1770 | + while True: |
1771 | + volume = self.get(context, volume_id) |
1772 | + if volume['status'] != 'creating': |
1773 | + return |
1774 | + greenthread.sleep(1) |
1775 | + |
1776 | def delete(self, context, volume_id): |
1777 | volume = self.get(context, volume_id) |
1778 | if volume['status'] != "available": |
1779 | |
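The `wait_creation` method added to `nova/volume/api.py` above is a simple polling loop, and the TODO notes it should eventually be replaced by something event-driven. A standalone sketch of the same pattern, with a bail-out cap added for illustration (the `get_status` callable and `max_polls` parameter are assumptions standing in for `VolumeAPI.get()`, not part of the branch):

```python
import time


def wait_creation(get_status, poll_interval=1.0, max_polls=None):
    """Poll until the volume leaves the 'creating' state.

    get_status: callable returning the volume's current status string.
    max_polls: optional cap so a stuck volume cannot hang the caller.
    """
    polls = 0
    while True:
        if get_status() != 'creating':
            return
        polls += 1
        if max_polls is not None and polls >= max_polls:
            raise RuntimeError('volume still creating after %d polls' % polls)
        time.sleep(poll_interval)
```

The branch's version sleeps with `greenthread.sleep(1)` so other green threads keep running while the API waits.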
1780 | === modified file 'nova/volume/driver.py' |
1781 | --- nova/volume/driver.py 2011-05-27 05:13:17 +0000 |
1782 | +++ nova/volume/driver.py 2011-06-17 23:35:54 +0000 |
1783 | @@ -582,6 +582,14 @@ |
1784 | """No setup necessary in fake mode.""" |
1785 | pass |
1786 | |
1787 | + def discover_volume(self, context, volume): |
1788 | + """Discover volume on a remote host.""" |
1789 | + return "/dev/disk/by-path/volume-id-%d" % volume['id'] |
1790 | + |
1791 | + def undiscover_volume(self, volume): |
1792 | + """Undiscover volume on a remote host.""" |
1793 | + pass |
1794 | + |
1795 | @staticmethod |
1796 | def fake_execute(cmd, *_args, **_kwargs): |
1797 | """Execute that simply logs the command.""" |
There are a couple of conflicts you might want to look into.