Merge lp:~yamahata/nova/boot-from-volume-0 into lp:~hudson-openstack/nova/trunk

Proposed by Isaku Yamahata
Status: Merged
Merged at revision: 1198
Proposed branch: lp:~yamahata/nova/boot-from-volume-0
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~morita-kazutaka/nova/clone-volume
Diff against target: 1797 lines (+1068/-149)
23 files modified
nova/api/ec2/apirequest.py (+3/-75)
nova/api/ec2/cloud.py (+47/-11)
nova/api/ec2/ec2utils.py (+94/-0)
nova/compute/api.py (+87/-17)
nova/compute/manager.py (+136/-10)
nova/compute/utils.py (+29/-0)
nova/db/api.py (+35/-0)
nova/db/sqlalchemy/api.py (+79/-0)
nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py (+87/-0)
nova/db/sqlalchemy/models.py (+39/-0)
nova/scheduler/simple.py (+7/-1)
nova/tests/test_api.py (+1/-1)
nova/tests/test_cloud.py (+312/-10)
nova/tests/test_compute.py (+15/-0)
nova/virt/driver.py (+1/-1)
nova/virt/fake.py (+5/-1)
nova/virt/hyperv.py (+1/-1)
nova/virt/libvirt.xml.template (+9/-0)
nova/virt/libvirt/connection.py (+58/-18)
nova/virt/vmwareapi_conn.py (+1/-1)
nova/virt/xenapi_conn.py (+1/-1)
nova/volume/api.py (+13/-1)
nova/volume/driver.py (+8/-0)
To merge this branch: bzr merge lp:~yamahata/nova/boot-from-volume-0
Reviewer Review Type Date Requested Status
Rick Harris (community) Approve
Matt Dietz (community) Approve
Devin Carlen (community) Needs Information
Review via email: mp+62419@code.launchpad.net

Commit message

Implements a portion of ec2 EBS boot.
What's implemented:
- block_device_mapping option for run instance with a volume
  (ephemeral device and no device aren't supported yet)
- stop/start instance

TODO:
- ephemeral device/no device
- machine image

Description of the change

This branch implements boot from volume.
What can be done with this branch:
- --block-device-mapping for the run instance command.
  A volume id or snapshot id can be specified.
  Ephemeral device/no device isn't supported yet.
- stop/start instance

Several things are left; they will be done as a next step:
- machine image pointing to a volume
- ephemeral device/no device

Revision history for this message
Brian Waldon (bcwaldon) wrote :

There are a couple of conflicts you might want to look into.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

Thank you. Now I fixed it.

On Thu, May 26, 2011 at 02:21:04PM -0000, Brian Waldon wrote:
> There are a couple of conflicts you might want to look into.
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-0/+merge/62419
> You are the owner of lp:~yamahata/nova/boot-from-volume-0.
>

--
yamahata

Revision history for this message
Devin Carlen (devcamcar) wrote :

Can you please add some comments to this code block? It's unclear what this change is for.

37 +        if len(parts) > 1:
38 +            d = args.get(key, {})
39 +            args[key] = d
40 +            for k in parts[1:-1]:
41 +                k = _camelcase_to_underscore(k)
42 +                v = d.get(k, {})
43 +                d[k] = v
44 +                d = v
45 +            d[_camelcase_to_underscore(parts[-1])] = value
46 +        else:
47 +            args[key] = value

Please add a note header to this comment block, as in:

# NOTE(your username):

This helps in tracking down subject matter experts in this large codebase.

60 +            # BlockDevicedMapping.<N>.DeviceName
61 +            # BlockDevicedMapping.<N>.Ebs.SnapshotId
62 +            # BlockDevicedMapping.<N>.Ebs.VolumeSize
63 +            # BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
64 +            # BlockDevicedMapping.<N>.VirtualName
65 +            # => remove .Ebs and allow volume id in SnapshotId

And same thing here:

188 + # tell vm driver to attach volume at boot time by updating
189 + # BlockDeviceMapping

review: Needs Information
Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Fri, May 27, 2011 at 09:06:09PM -0000, Devin Carlen wrote:
> Review: Needs Information
> Can you please add some comments to this code block? It's unclear what this change is for.

I added the comment to the code.
That hunk teaches the ec2 argument parser about multi-dotted arguments.
So far only a single dot is allowed, but ec2 block device mapping
uses multi-dot-separated arguments like
BlockDeviceMapping.1.DeviceName=snap-id.
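The nested-dict conversion described above can be sketched as follows (a simplified, illustrative re-implementation; the real helper, `ec2utils.dict_from_dotted_str`, also coerces value strings via `_try_convert`):

```python
import re

# Same camel-case splitter the branch moves into ec2utils.
_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')


def camelcase_to_underscore(s):
    # 'BlockDeviceMapping' -> 'block_device_mapping'
    return _c2u.sub(r'_\1', s).lower().strip('_')


def dict_from_dotted_str(items):
    """Fold dotted keys into nested dicts, one level per dot."""
    args = {}
    for key, value in items:
        parts = key.split('.')
        key = camelcase_to_underscore(parts[0])
        if len(parts) > 1:
            # Walk/create one nested dict per intermediate component.
            d = args.setdefault(key, {})
            for k in parts[1:-1]:
                d = d.setdefault(camelcase_to_underscore(k), {})
            d[camelcase_to_underscore(parts[-1])] = value
        else:
            args[key] = value
    return args
```

So `[('BlockDeviceMapping.1.DeviceName', '/dev/vdb')]` folds into `{'block_device_mapping': {'1': {'device_name': '/dev/vdb'}}}`.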

> Please add a note header to this comment block, as in:
> # NOTE(your username):

Okay, I added it to them.
Is this a required custom for nova?
"bzr annotate" provides what you want; that is exactly what a version
control system is for, and sprinkling usernames makes the code ugly.
--
yamahata

Revision history for this message
Isaku Yamahata (yamahata) wrote :

Ping?
What can I do to make progress?

Revision history for this message
Rick Harris (rconradharris) wrote :

Very impressive work! Just a few small nits:

Received a test failure:

  ======================================================================
  FAIL: test_stop_with_attached_volume (nova.tests.test_cloud.CloudTestCase)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 691, in test_stop_with_attached_volume
      self._assert_volume_attached(vol, instance_id, '/dev/vdc')
    File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 582, in _assert_volume_attached
      self.assertEqual(vol['mountpoint'], mountpoint)
  AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'

> 72 +        if len(parts) > 1:
> 73 +            d = args.get(key, {})
> 74 +            args[key] = d
> 75 +            for k in parts[1:-1]:
> 76 +                k = _camelcase_to_underscore(k)
> 77 +                v = d.get(k, {})
> 78 +                d[k] = v
> 79 +                d = v
> 80 +            d[_camelcase_to_underscore(parts[-1])] = value
> 81 +        else:
> 82 +            args[key] = value

Might be worth breaking this code out into a utility method, something like:
`dict_from_dotted_str`.

> 68 + # EBS boot uses multi dot-separeted arguments like

Typofix. s/separeted/separated/

> 315 + block_device_mapping=[]):

Usually not a good idea to use a list as a default argument. This is because
the list-object is created at /function definition/ time and the same list
object will be re-used on each invocation--probably not what you wanted.

Instead, it's better to default to None and initialize a new list in the
function's body:

  block_device_mapping=None):

    block_device_mapping = block_device_mapping or []

OR....

    if not block_device_mapping:
      block_device_mapping = []
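Rick's point about mutable defaults can be demonstrated in isolation (a toy example, not code from the patch):

```python
def append_buggy(item, bucket=[]):
    # The default list is created once, at function definition time,
    # so every call that omits `bucket` shares the same object.
    bucket.append(item)
    return bucket


def append_fixed(item, bucket=None):
    # A fresh list is created per call when none is passed.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

Note that `if bucket is None` also preserves an explicitly passed empty list, whereas the shorter `bucket = bucket or []` idiom would silently replace it.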

> 393 + if not _is_able_to_shutdown(instance, instance_id):
> 394 + return

Should we log here that we weren't able to shutdown, something like:

  LOG.warn(_("Unable to shutdown server...."))

> 975 === added file 'nova/db/sqlalchemy/migrate_repo/versions/019_add_volume_snapshot_support.py'

Looks like you'll have to renumber these since trunk has already advanced
migration numbers.

review: Needs Fixing
Revision history for this message
Matt Dietz (cerberus) wrote :

First of all, great work on this!

I see a few failing tests:

======================================================================
ERROR: test_compute_can_update_available_resource (nova.tests.test_service.ServiceTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/nova/tests/test_service.py", line 334, in test_compute_can_update_available_resource
    {'wait': wait_func})
TypeError: CreateMock() takes exactly 2 arguments (3 given)

======================================================================
ERROR: test_compute_can_update_available_resource (nova.tests.test_service.ServiceTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/nova/test.py", line 94, in tearDown
    self.mox.VerifyAll()
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/.nova-venv/lib/python2.6/site-packages/mox.py", line 197, in VerifyAll
    mock_obj._Verify()
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/.nova-venv/lib/python2.6/site-packages/mox.py", line 344, in _Verify
    raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
  0. __call__(new=<IgnoreArg>) -> None

======================================================================
ERROR: test_create (nova.tests.test_service.ServiceTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/nova/tests/test_service.py", line 144, in test_create
    {'wait': wait_func})
TypeError: CreateMock() takes exactly 2 arguments (3 given)

======================================================================
ERROR: test_create (nova.tests.test_service.ServiceTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/nova/test.py", line 94, in tearDown
    self.mox.VerifyAll()
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/.nova-venv/lib/python2.6/site-packages/mox.py", line 197, in VerifyAll
    mock_obj._Verify()
  File "/Users/cerberus/code/python/nova/boot-from-volume-0/.nova-venv/lib/python2.6/site-packages/mox.py", line 344, in _Verify
    raise ExpectedMethodCallsError(self._expected_calls_queue)
ExpectedMethodCallsError: Verify: Expected methods never called:
  0. __call__(new=<IgnoreArg>) -> None

----------------------------------------------------------------------

455 +        try:
456 +            bdms = self.db.block_device_mapping_get_all_by_instance(
457 +                context, instance_id)
458 +        except exception.NotFound:
459 +            pass

I don't really like throwing away exceptions. It's an explicit exception, sure, but it could mask something important. Seems like this should at least log the error.

476 +        assert ((bdm['snapshot_id'] is None) or
477 +                (bdm['volume_id'] ...


review: Needs Fixing
Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Wed, Jun 08, 2011 at 06:27:26PM -0000, Rick Harris wrote:
> Review: Needs Fixing
> Very impressive work! Just a few small nits:
>
>
> Received a test failure:
>
> ======================================================================
> FAIL: test_stop_with_attached_volume (nova.tests.test_cloud.CloudTestCase)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 691, in test_stop_with_attached_volume
> self._assert_volume_attached(vol, instance_id, '/dev/vdc')
> File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 582, in _assert_volume_attached
> self.assertEqual(vol['mountpoint'], mountpoint)
> AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'

Hmm, the test passes for me. I'm using sqlite3 for the unittests.
I can't find the code that escapes '/' into '\\/'. MySQL?

> > 315 + block_device_mapping=[]):
>
> Usually not a good idea to use a list as a default argument. This is because
> the list-object is created at /function definition/ time and the same list
> object will be re-used on each invocation--probably not what you wanted.
>
> Instead, it's better to default to None and initialize a new list in the
> function's body:
>
> block_device_mapping=None):
>
> block_device_mapping = block_device_mapping or []
>
> OR....
>
> if not block_device_mapping:
> block_device_mapping = []

Okay, fixed.
During the fixes, I found other suspicious code.
Since I'm not sure whether it is intentional,
please review the attached patch.

> > 393 + if not _is_able_to_shutdown(instance, instance_id):
> > 394 + return
>
> Should we log here that we weren't able to shutdown, something like:
>
> LOG.warn(_("Unable to shutdown server...."))

Yes, _is_able_to_shutdown() itself does.

=== modified file 'nova/objectstore/s3server.py'
--- nova/objectstore/s3server.py 2011-03-24 23:38:31 +0000
+++ nova/objectstore/s3server.py 2011-06-15 05:54:21 +0000
@@ -155,7 +155,8 @@ class BaseRequestHandler(wsgi.Controller
         self.finish('<?xml version="1.0" encoding="UTF-8"?>\n' +
                     ''.join(parts))

-    def _render_parts(self, value, parts=[]):
+    def _render_parts(self, value, parts=None):
+        parts = parts or []
         if isinstance(value, basestring):
             parts.append(utils.xhtml_escape(value))
         elif isinstance(value, int) or isinstance(value, long):

=== modified file 'tools/ajaxterm/qweb.py'
--- tools/ajaxterm/qweb.py 2010-09-18 02:08:22 +0000
+++ tools/ajaxterm/qweb.py 2011-06-15 05:57:36 +0000
@@ -726,7 +726,7 @@ class QWebHtml(QWebXml):
 #----------------------------------------------------------
 # QWeb Simple Controller
 #----------------------------------------------------------
-def qweb_control(self,jump='main',p=[]):
+def qweb_control(self,jump='main',p=None):
     """ qweb_control(self,jump='main',p=[]):
     A simple function to handle the controler part of your application. It
     dispatch the control to the jump argument, while ensuring th...


Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Wed, Jun 08, 2011 at 09:07:25PM -0000, Matt Dietz wrote:
> I see a few failing tests:

I think you're using an old mox version.
As of revno 1117, mox 0.5.3 is required instead of 0.5.0.
Updating your repo and reinstalling the venv will fix it.

> 455 +        try:
> 456 +            bdms = self.db.block_device_mapping_get_all_by_instance(
> 457 +                context, instance_id)
> 458 +        except exception.NotFound:
> 459 +            pass
>
> I don't really like throwing away exceptions. It's an explicit exception, sure, but it could mask something important. Seems like this should at least log the error.

I see. Given that all the callers catch and ignore it, I changed it to
return an empty list instead of raising NotFound, which eliminates the
except ...: pass.
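The shape of that change, sketched against a dict-backed stand-in for the DB layer (illustrative only; the real method queries the database through SQLAlchemy):

```python
def block_device_mapping_get_all_by_instance(mappings, instance_id):
    # Returning [] for an instance with no mappings lets callers
    # iterate unconditionally, instead of wrapping every call site
    # in try/except NotFound and silently passing.
    return mappings.get(instance_id, [])
```

Callers then reduce to a plain `for bdm in bdms:` loop with no exception handling.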

> 476 +        assert ((bdm['snapshot_id'] is None) or
> 477 +                (bdm['volume_id'] is not None))
>
> It seems like it would be better to raise an explicit exception here, with a message describing exactly why this is a bad state.

Okay.
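Replacing the assert with an explicit exception might look like this (names illustrative; the actual nova exception class used may differ):

```python
class InvalidBlockDeviceMapping(Exception):
    """Raised when a mapping references a snapshot without a volume."""


def check_bdm(bdm):
    # Unlike assert, this survives `python -O` (which strips asserts)
    # and carries a message describing exactly why the state is bad.
    if bdm.get('snapshot_id') is not None and bdm.get('volume_id') is None:
        raise InvalidBlockDeviceMapping(
            'snapshot_id %r was given but no volume_id was resolved'
            % bdm['snapshot_id'])
```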

> 239 + """Stop each instance in instace_id"""
> 245 + """Start each instance in instace_id"""
>
> instace_id should be instance_id
>
> Also, given that instance_id is singular, it should probably say something like:
>
> "Start the instance denoted by instance_id"

Unfortunately instance_id is a list of ec2 instance ids, as in
terminate_instances. Anyway, I fixed the typo.
--
yamahata

Revision history for this message
Matt Dietz (cerberus) wrote :

Thanks for the changes Yamahata!

review: Approve
Revision history for this message
Rick Harris (rconradharris) wrote :

> ======================================================================
> FAIL: test_stop_with_attached_volume (nova.tests.test_cloud.CloudTestCase)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 691, in
> test_stop_with_attached_volume
> self._assert_volume_attached(vol, instance_id, '/dev/vdc')
> File "/home/rick/openstack/nova/boot-from-volume-0/nova/tests/test_cloud.py", line 582, in >_assert_volume_attached
> self.assertEqual(vol['mountpoint'], mountpoint)
> AssertionError: u'\\/dev\\/vdc' != '/dev/vdc'

Turns out this was caused by running an older version of carrot (0.10.3) which didn't handle escaping properly.

Upgrading to 0.10.5 fixed it.

Patch looks great, nice job.

review: Approve
Revision history for this message
Rick Harris (rconradharris) wrote :

Looks like Devin's concerns were addressed, setting to Approved.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/nova/boot-from-volume-0 into lp:nova failed. Below is the output from the failed tests.

AccountsTest
    test_account_create OK 0.16
    test_account_delete OK 0.27
    test_account_update OK 0.16
    test_get_account OK 0.16
AdminAPITest
    test_admin_disabled OK 0.11
    test_admin_enabled OK 0.16
APITest
    test_exceptions_are_converted_to_faults OK 0.02
    test_malformed_json OK 0.04
    test_malformed_xml OK 0.04
Test
    test_authorize_project OK 0.21
    test_authorize_token OK 0.05
    test_authorize_user OK 0.03
    test_bad_project OK 0.05
    test_bad_token OK 0.03
    test_bad_user_bad_key OK 0.03
    test_bad_user_good_key OK 0.03
    test_no_user OK 0.03
    test_not_existing_project OK 0.05
    test_token_expiry OK 0.03
TestFunctional
    test_token_doesnotexist OK 0.04
    test_token_expiry OK 0.06
TestLimiter
    test_authorize_token OK 0.05
LimiterTest
    test_limiter_custom_max_limit OK 0.00
    test_limiter_limit_and_offset OK 0.00
    test_limiter_limit_medium OK 0.00
    test_limiter_limit_over_max OK 0.00
    test_limiter_limit_zero OK 0.00
    test_limiter_negative_limit OK 0.00
    test_limiter_negative_offset OK 0.00
    test_limiter_nothing OK 0.00
    test_limiter_offset_bad OK 0.00
    test_limiter_offset_blank OK 0.00
    test_limiter_offset_medium OK 0.00
    test_limiter_offset_over_max OK 0.00
    test_limiter_offset_zero OK 0.00
PaginationParamsTest
    test_invalid_limit OK 0.00
    test_invalid_marker OK 0.00
    test_no_params OK 0.00
    test_valid_limit OK 0.00
    test_valid_marker OK 0.00
ActionExtensionTest
    test_extended_action ...

Revision history for this message
Isaku Yamahata (yamahata) wrote :

Fixed two pep8 errors.

--
yamahata

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

No proposals found for merge of lp:~morita-kazutaka/nova/clone-volume into lp:nova.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Marking as merged. Our dependency checks set it back to needs review if the prereq branch has already merged, even though the merge was successful.

Preview Diff

=== modified file 'nova/api/ec2/apirequest.py'
--- nova/api/ec2/apirequest.py  2011-04-18 20:53:09 +0000
+++ nova/api/ec2/apirequest.py  2011-06-17 23:35:54 +0000
@@ -21,22 +21,15 @@
 """
 
 import datetime
-import re
 # TODO(termie): replace minidom with etree
 from xml.dom import minidom
 
 from nova import log as logging
+from nova.api.ec2 import ec2utils
 
 LOG = logging.getLogger("nova.api.request")
 
 
-_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')
-
-
-def _camelcase_to_underscore(str):
-    return _c2u.sub(r'_\1', str).lower().strip('_')
-
-
 def _underscore_to_camelcase(str):
     return ''.join([x[:1].upper() + x[1:] for x in str.split('_')])
 
@@ -51,59 +44,6 @@
     return datetimeobj.strftime("%Y-%m-%dT%H:%M:%SZ")
 
 
-def _try_convert(value):
-    """Return a non-string from a string or unicode, if possible.
-
-    ============= =====================================================
-    When value is returns
-    ============= =====================================================
-    zero-length   ''
-    'None'        None
-    'True'        True
-    'False'       False
-    '0', '-0'     0
-    0xN, -0xN     int from hex (postitive) (N is any number)
-    0bN, -0bN     int from binary (positive) (N is any number)
-    *             try conversion to int, float, complex, fallback value
-
-    """
-    if len(value) == 0:
-        return ''
-    if value == 'None':
-        return None
-    if value == 'True':
-        return True
-    if value == 'False':
-        return False
-    valueneg = value[1:] if value[0] == '-' else value
-    if valueneg == '0':
-        return 0
-    if valueneg == '':
-        return value
-    if valueneg[0] == '0':
-        if valueneg[1] in 'xX':
-            return int(value, 16)
-        elif valueneg[1] in 'bB':
-            return int(value, 2)
-        else:
-            try:
-                return int(value, 8)
-            except ValueError:
-                pass
-    try:
-        return int(value)
-    except ValueError:
-        pass
-    try:
-        return float(value)
-    except ValueError:
-        pass
-    try:
-        return complex(value)
-    except ValueError:
-        return value
-
-
 class APIRequest(object):
     def __init__(self, controller, action, version, args):
         self.controller = controller
@@ -114,7 +54,7 @@
     def invoke(self, context):
         try:
             method = getattr(self.controller,
-                             _camelcase_to_underscore(self.action))
+                             ec2utils.camelcase_to_underscore(self.action))
         except AttributeError:
             controller = self.controller
             action = self.action
@@ -125,19 +65,7 @@
             # and reraise as 400 error.
             raise Exception(_error)
 
-        args = {}
-        for key, value in self.args.items():
-            parts = key.split(".")
-            key = _camelcase_to_underscore(parts[0])
-            if isinstance(value, str) or isinstance(value, unicode):
-                # NOTE(vish): Automatically convert strings back
-                #             into their respective values
-                value = _try_convert(value)
-            if len(parts) > 1:
-                d = args.get(key, {})
-                d[parts[1]] = value
-                value = d
-            args[key] = value
+        args = ec2utils.dict_from_dotted_str(self.args.items())
 
         for key in args.keys():
             # NOTE(vish): Turn numeric dict keys into lists
 
=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py  2011-06-17 20:47:23 +0000
+++ nova/api/ec2/cloud.py  2011-06-17 23:35:54 +0000
@@ -909,6 +909,25 @@
         if kwargs.get('ramdisk_id'):
             ramdisk = self._get_image(context, kwargs['ramdisk_id'])
             kwargs['ramdisk_id'] = ramdisk['id']
+        for bdm in kwargs.get('block_device_mapping', []):
+            # NOTE(yamahata)
+            # BlockDevicedMapping.<N>.DeviceName
+            # BlockDevicedMapping.<N>.Ebs.SnapshotId
+            # BlockDevicedMapping.<N>.Ebs.VolumeSize
+            # BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
+            # BlockDevicedMapping.<N>.VirtualName
+            # => remove .Ebs and allow volume id in SnapshotId
+            ebs = bdm.pop('ebs', None)
+            if ebs:
+                ec2_id = ebs.pop('snapshot_id')
+                id = ec2utils.ec2_id_to_id(ec2_id)
+                if ec2_id.startswith('snap-'):
+                    bdm['snapshot_id'] = id
+                elif ec2_id.startswith('vol-'):
+                    bdm['volume_id'] = id
+                ebs.setdefault('delete_on_termination', True)
+                bdm.update(ebs)
+
         image = self._get_image(context, kwargs['image_id'])
 
         if image:
@@ -933,37 +952,54 @@
             user_data=kwargs.get('user_data'),
             security_group=kwargs.get('security_group'),
             availability_zone=kwargs.get('placement', {}).get(
-                'AvailabilityZone'))
+                'AvailabilityZone'),
+            block_device_mapping=kwargs.get('block_device_mapping', {}))
         return self._format_run_instances(context,
                                           instances[0]['reservation_id'])
 
+    def _do_instance(self, action, context, ec2_id):
+        instance_id = ec2utils.ec2_id_to_id(ec2_id)
+        action(context, instance_id=instance_id)
+
+    def _do_instances(self, action, context, instance_id):
+        for ec2_id in instance_id:
+            self._do_instance(action, context, ec2_id)
+
     def terminate_instances(self, context, instance_id, **kwargs):
         """Terminate each instance in instance_id, which is a list of ec2 ids.
         instance_id is a kwarg so its name cannot be modified."""
         LOG.debug(_("Going to start terminating instances"))
-        for ec2_id in instance_id:
-            instance_id = ec2utils.ec2_id_to_id(ec2_id)
-            self.compute_api.delete(context, instance_id=instance_id)
+        self._do_instances(self.compute_api.delete, context, instance_id)
         return True
 
     def reboot_instances(self, context, instance_id, **kwargs):
         """instance_id is a list of instance ids"""
         LOG.audit(_("Reboot instance %r"), instance_id, context=context)
-        for ec2_id in instance_id:
-            instance_id = ec2utils.ec2_id_to_id(ec2_id)
-            self.compute_api.reboot(context, instance_id=instance_id)
+        self._do_instances(self.compute_api.reboot, context, instance_id)
+        return True
+
+    def stop_instances(self, context, instance_id, **kwargs):
+        """Stop each instances in instance_id.
+        Here instance_id is a list of instance ids"""
+        LOG.debug(_("Going to stop instances"))
+        self._do_instances(self.compute_api.stop, context, instance_id)
+        return True
+
+    def start_instances(self, context, instance_id, **kwargs):
+        """Start each instances in instance_id.
+        Here instance_id is a list of instance ids"""
+        LOG.debug(_("Going to start instances"))
+        self._do_instances(self.compute_api.start, context, instance_id)
         return True
 
     def rescue_instance(self, context, instance_id, **kwargs):
         """This is an extension to the normal ec2_api"""
-        instance_id = ec2utils.ec2_id_to_id(instance_id)
-        self.compute_api.rescue(context, instance_id=instance_id)
+        self._do_instance(self.compute_api.rescue, contect, instnace_id)
        return True
 
     def unrescue_instance(self, context, instance_id, **kwargs):
         """This is an extension to the normal ec2_api"""
-        instance_id = ec2utils.ec2_id_to_id(instance_id)
-        self.compute_api.unrescue(context, instance_id=instance_id)
+        self._do_instance(self.compute_api.unrescue, context, instance_id)
         return True
 
     def update_instance(self, context, instance_id, **kwargs):
 
=== modified file 'nova/api/ec2/ec2utils.py'
--- nova/api/ec2/ec2utils.py  2011-05-11 18:02:01 +0000
+++ nova/api/ec2/ec2utils.py  2011-06-17 23:35:54 +0000
@@ -16,6 +16,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import re
+
 from nova import exception
 
 
@@ -30,3 +32,95 @@
 def id_to_ec2_id(instance_id, template='i-%08x'):
     """Convert an instance ID (int) to an ec2 ID (i-[base 16 number])"""
     return template % instance_id
+
+
+_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')
+
+
+def camelcase_to_underscore(str):
+    return _c2u.sub(r'_\1', str).lower().strip('_')
+
+
+def _try_convert(value):
+    """Return a non-string from a string or unicode, if possible.
+
+    ============= =====================================================
+    When value is returns
+    ============= =====================================================
+    zero-length   ''
+    'None'        None
+    'True'        True case insensitive
+    'False'       False case insensitive
+    '0', '-0'     0
+    0xN, -0xN     int from hex (postitive) (N is any number)
+    0bN, -0bN     int from binary (positive) (N is any number)
+    *             try conversion to int, float, complex, fallback value
+
+    """
+    if len(value) == 0:
+        return ''
+    if value == 'None':
+        return None
+    lowered_value = value.lower()
+    if lowered_value == 'true':
+        return True
+    if lowered_value == 'false':
+        return False
+    valueneg = value[1:] if value[0] == '-' else value
+    if valueneg == '0':
+        return 0
+    if valueneg == '':
+        return value
+    if valueneg[0] == '0':
+        if valueneg[1] in 'xX':
+            return int(value, 16)
+        elif valueneg[1] in 'bB':
+            return int(value, 2)
+        else:
+            try:
+                return int(value, 8)
+            except ValueError:
+                pass
+    try:
+        return int(value)
+    except ValueError:
+        pass
+    try:
+        return float(value)
+    except ValueError:
+        pass
+    try:
+        return complex(value)
+    except ValueError:
+        return value
+
+
+def dict_from_dotted_str(items):
+    """parse multi dot-separated argument into dict.
+    EBS boot uses multi dot-separeted arguments like
+    BlockDeviceMapping.1.DeviceName=snap-id
+    Convert the above into
+    {'block_device_mapping': {'1': {'device_name': snap-id}}}
+    """
+    args = {}
+    for key, value in items:
+        parts = key.split(".")
+        key = camelcase_to_underscore(parts[0])
+        if isinstance(value, str) or isinstance(value, unicode):
+            # NOTE(vish): Automatically convert strings back
+            #             into their respective values
+            value = _try_convert(value)
+
+        if len(parts) > 1:
+            d = args.get(key, {})
+            args[key] = d
+            for k in parts[1:-1]:
+                k = camelcase_to_underscore(k)
+                v = d.get(k, {})
+                d[k] = v
+                d = v
+            d[camelcase_to_underscore(parts[-1])] = value
+        else:
+            args[key] = value
+
+    return args
 
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-06-17 15:25:23 +0000
+++ nova/compute/api.py 2011-06-17 23:35:54 +0000
@@ -34,6 +34,7 @@
34from nova import volume34from nova import volume
35from nova.compute import instance_types35from nova.compute import instance_types
36from nova.compute import power_state36from nova.compute import power_state
37from nova.compute.utils import terminate_volumes
37from nova.scheduler import api as scheduler_api38from nova.scheduler import api as scheduler_api
38from nova.db import base39from nova.db import base
3940
@@ -52,6 +53,18 @@
52 return str(instance_id)53 return str(instance_id)
5354
5455
56def _is_able_to_shutdown(instance, instance_id):
57 states = {'terminating': "Instance %s is already being terminated",
58 'migrating': "Instance %s is being migrated",
59 'stopping': "Instance %s is being stopped"}
60 msg = states.get(instance['state_description'])
61 if msg:
62 LOG.warning(_(msg), instance_id)
63 return False
64
65 return True
66
67
55class API(base.Base):68class API(base.Base):
56 """API for interacting with the compute manager."""69 """API for interacting with the compute manager."""
5770
@@ -235,7 +248,7 @@
235 return (num_instances, base_options, security_groups)248 return (num_instances, base_options, security_groups)
236249
237 def create_db_entry_for_new_instance(self, context, base_options,250 def create_db_entry_for_new_instance(self, context, base_options,
238 security_groups, num=1):251 security_groups, block_device_mapping, num=1):
239 """Create an entry in the DB for this new instance,252 """Create an entry in the DB for this new instance,
240 including any related table updates (such as security253 including any related table updates (such as security
241 groups, MAC address, etc). This will called by create()254 groups, MAC address, etc). This will called by create()
@@ -255,6 +268,23 @@
                                                  instance_id,
                                                  security_group_id)
 
+        # NOTE(yamahata)
+        # tell vm driver to attach volume at boot time by updating
+        # BlockDeviceMapping
+        for bdm in block_device_mapping:
+            LOG.debug(_('bdm %s'), bdm)
+            assert 'device_name' in bdm
+            values = {
+                'instance_id': instance_id,
+                'device_name': bdm['device_name'],
+                'delete_on_termination': bdm.get('delete_on_termination'),
+                'virtual_name': bdm.get('virtual_name'),
+                'snapshot_id': bdm.get('snapshot_id'),
+                'volume_id': bdm.get('volume_id'),
+                'volume_size': bdm.get('volume_size'),
+                'no_device': bdm.get('no_device')}
+            self.db.block_device_mapping_create(elevated, values)
+
         # Set sane defaults if not specified
         updates = dict(hostname=self.hostname_factory(instance_id))
         if (not hasattr(instance, 'display_name') or
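For reference, a minimal sketch of the shape one `block_device_mapping` entry takes by the time it reaches `create_db_entry_for_new_instance()` above. The device name and id values here are made up for illustration; per the code, only `'device_name'` is asserted to be present, and every other key defaults to `None` via `.get()`:

```python
# Hypothetical block_device_mapping entry, mirroring the keys the loop
# above copies into the BlockDeviceMapping row (values are illustrative):
example_bdm = {
    'device_name': '/dev/sdb',        # required; asserted by the code
    'delete_on_termination': True,    # delete the volume with the instance
    'virtual_name': None,             # would name an ephemeral device
    'snapshot_id': None,              # set to create a volume from a snapshot
    'volume_id': 42,                  # set to attach an existing volume
    'volume_size': None,
    'no_device': None,
}
```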
@@ -339,7 +369,7 @@
                key_name=None, key_data=None, security_group='default',
                availability_zone=None, user_data=None, metadata={},
                injected_files=None, admin_password=None, zone_blob=None,
-               reservation_id=None):
+               reservation_id=None, block_device_mapping=None):
         """
         Provision the instances by sending off a series of single
         instance requests to the Schedulers. This is fine for trivial
@@ -360,11 +390,13 @@
                                      injected_files, admin_password, zone_blob,
                                      reservation_id)
 
+        block_device_mapping = block_device_mapping or []
         instances = []
         LOG.debug(_("Going to run %s instances..."), num_instances)
         for num in range(num_instances):
-            instance = self.create_db_entry_for_new_instance(context,
-                                base_options, security_groups, num=num)
+            instance = self.create_db_entry_for_new_instance(context,
+                                base_options, security_groups,
+                                block_device_mapping, num=num)
             instances.append(instance)
             instance_id = instance['id']
 
@@ -474,24 +506,22 @@
         rv = self.db.instance_update(context, instance_id, kwargs)
         return dict(rv.iteritems())
 
+    def _get_instance(self, context, instance_id, action_str):
+        try:
+            return self.get(context, instance_id)
+        except exception.NotFound:
+            LOG.warning(_("Instance %(instance_id)s was not found during "
+                          "%(action_str)s") %
+                        {'instance_id': instance_id,
+                         'action_str': action_str})
+            raise
+
     @scheduler_api.reroute_compute("delete")
     def delete(self, context, instance_id):
         """Terminate an instance."""
         LOG.debug(_("Going to try to terminate %s"), instance_id)
-        try:
-            instance = self.get(context, instance_id)
-        except exception.NotFound:
-            LOG.warning(_("Instance %s was not found during terminate"),
-                        instance_id)
-            raise
-
-        if instance['state_description'] == 'terminating':
-            LOG.warning(_("Instance %s is already being terminated"),
-                        instance_id)
-            return
-
-        if instance['state_description'] == 'migrating':
-            LOG.warning(_("Instance %s is being migrated"), instance_id)
+        instance = self._get_instance(context, instance_id, 'terminating')
+
+        if not _is_able_to_shutdown(instance, instance_id):
             return
 
         self.update(context,
@@ -505,8 +535,48 @@
             self._cast_compute_message('terminate_instance', context,
                                        instance_id, host)
         else:
+            terminate_volumes(self.db, context, instance_id)
             self.db.instance_destroy(context, instance_id)
 
+    @scheduler_api.reroute_compute("stop")
+    def stop(self, context, instance_id):
+        """Stop an instance."""
+        LOG.debug(_("Going to try to stop %s"), instance_id)
+
+        instance = self._get_instance(context, instance_id, 'stopping')
+        if not _is_able_to_shutdown(instance, instance_id):
+            return
+
+        self.update(context,
+                    instance['id'],
+                    state_description='stopping',
+                    state=power_state.NOSTATE,
+                    terminated_at=utils.utcnow())
+
+        host = instance['host']
+        if host:
+            self._cast_compute_message('stop_instance', context,
+                                       instance_id, host)
+
+    def start(self, context, instance_id):
+        """Start an instance."""
+        LOG.debug(_("Going to try to start %s"), instance_id)
+        instance = self._get_instance(context, instance_id, 'starting')
+        if instance['state_description'] != 'stopped':
+            _state_description = instance['state_description']
+            LOG.warning(_("Instance %(instance_id)s is not "
+                          "stopped (%(_state_description)s)") % locals())
+            return
+
+        # TODO(yamahata): injected_files isn't supported right now.
+        #                 It is used only for osapi, not for the ec2 api.
+        #                 availability_zone isn't used by run_instance.
+        rpc.cast(context,
+                 FLAGS.scheduler_topic,
+                 {"method": "start_instance",
+                  "args": {"topic": FLAGS.compute_topic,
+                           "instance_id": instance_id}})
+
     def get(self, context, instance_id):
         """Get a single instance with the given instance_id."""
         rv = self.db.instance_get(context, instance_id)
 
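The `delete()` and `stop()` paths above share the same gate: an instance already mid-transition must not be shut down again. A standalone sketch of that logic, with plain dicts standing in for Nova instance records and the log call replaced by a returned reason (an illustration, not Nova's actual helper):

```python
# States during which a second shutdown request is refused, mirroring
# _is_able_to_shutdown() in the diff above.
TRANSITIONAL = {
    'terminating': 'already being terminated',
    'migrating': 'being migrated',
    'stopping': 'being stopped',
}


def able_to_shutdown(instance):
    """Return (ok, reason): ok is False while a transition is in flight."""
    reason = TRANSITIONAL.get(instance['state_description'])
    return (reason is None, reason)
```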
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-06-03 15:11:01 +0000
+++ nova/compute/manager.py 2011-06-17 23:35:54 +0000
@@ -53,6 +53,7 @@
 from nova import utils
 from nova import volume
 from nova.compute import power_state
+from nova.compute.utils import terminate_volumes
 from nova.virt import driver
 
 
@@ -214,8 +215,63 @@
         """
         return self.driver.refresh_security_group_members(security_group_id)
 
-    @exception.wrap_exception
-    def run_instance(self, context, instance_id, **kwargs):
+    def _setup_block_device_mapping(self, context, instance_id):
+        """Set up volumes for block device mapping."""
+        self.db.instance_set_state(context,
+                                   instance_id,
+                                   power_state.NOSTATE,
+                                   'block_device_mapping')
+
+        volume_api = volume.API()
+        block_device_mapping = []
+        for bdm in self.db.block_device_mapping_get_all_by_instance(
+                context, instance_id):
+            LOG.debug(_("setting up bdm %s"), bdm)
+            if ((bdm['snapshot_id'] is not None) and
+                (bdm['volume_id'] is None)):
+                # TODO(yamahata): default name and description
+                vol = volume_api.create(context, bdm['volume_size'],
+                                        bdm['snapshot_id'], '', '')
+                # TODO(yamahata): would creating volumes simultaneously
+                #                 reduce creation time?
+                volume_api.wait_creation(context, vol['id'])
+                self.db.block_device_mapping_update(
+                    context, bdm['id'], {'volume_id': vol['id']})
+                bdm['volume_id'] = vol['id']
+
+            if not ((bdm['snapshot_id'] is None) or
+                    (bdm['volume_id'] is not None)):
+                LOG.error(_('corrupted state of block device mapping '
+                            'id: %(id)s '
+                            'snapshot: %(snapshot_id)s '
+                            'volume: %(volume_id)s') %
+                          {'id': bdm['id'],
+                           'snapshot_id': bdm['snapshot_id'],
+                           'volume_id': bdm['volume_id']})
+                raise exception.ApiError(_('broken block device mapping %d') %
+                                         bdm['id'])
+
+            if bdm['volume_id'] is not None:
+                volume_api.check_attach(context,
+                                        volume_id=bdm['volume_id'])
+                dev_path = self._attach_volume_boot(context, instance_id,
+                                                    bdm['volume_id'],
+                                                    bdm['device_name'])
+                block_device_mapping.append({'device_path': dev_path,
+                                             'mount_device':
+                                             bdm['device_name']})
+            elif bdm['virtual_name'] is not None:
+                # TODO(yamahata): ephemeral/swap device support
+                LOG.debug(_('block_device_mapping: '
+                            'ephemeral device is not supported yet'))
+            else:
+                # TODO(yamahata): NoDevice support
+                assert bdm['no_device']
+                LOG.debug(_('block_device_mapping: '
+                            'no device is not supported yet'))
+
+        return block_device_mapping
+
+    def _run_instance(self, context, instance_id, **kwargs):
         """Launch a new instance with specified options."""
         context = context.elevated()
         instance_ref = self.db.instance_get(context, instance_id)
@@ -249,11 +305,15 @@
             self.network_manager.setup_compute_network(context,
                                                        instance_id)
 
+        block_device_mapping = self._setup_block_device_mapping(context,
+                                                                instance_id)
+
         # TODO(vish) check to make sure the availability zone matches
         self._update_state(context, instance_id, power_state.BUILDING)
 
         try:
-            self.driver.spawn(instance_ref)
+            self.driver.spawn(instance_ref,
+                              block_device_mapping=block_device_mapping)
         except Exception as ex:  # pylint: disable=W0702
             msg = _("Instance '%(instance_id)s' failed to spawn. Is "
                     "virtualization enabled in the BIOS? Details: "
@@ -277,12 +337,24 @@
         self._update_state(context, instance_id)
 
     @exception.wrap_exception
+    def run_instance(self, context, instance_id, **kwargs):
+        self._run_instance(context, instance_id, **kwargs)
+
+    @exception.wrap_exception
     @checks_instance_lock
-    def terminate_instance(self, context, instance_id):
-        """Terminate an instance on this host."""
+    def start_instance(self, context, instance_id):
+        """Start an instance on this host."""
+        # TODO(yamahata): injected_files isn't supported.
+        #                 Anyway OSAPI doesn't support stop/start yet
+        self._run_instance(context, instance_id)
+
+    def _shutdown_instance(self, context, instance_id, action_str):
+        """Shutdown an instance on this host."""
         context = context.elevated()
         instance_ref = self.db.instance_get(context, instance_id)
-        LOG.audit(_("Terminating instance %s"), instance_id, context=context)
+        LOG.audit(_("%(action_str)s instance %(instance_id)s") %
+                  {'action_str': action_str, 'instance_id': instance_id},
+                  context=context)
 
         fixed_ip = instance_ref.get('fixed_ip')
         if not FLAGS.stub_network and fixed_ip:
@@ -318,18 +390,36 @@
 
         volumes = instance_ref.get('volumes') or []
         for volume in volumes:
-            self.detach_volume(context, instance_id, volume['id'])
-        if instance_ref['state'] == power_state.SHUTOFF:
+            self._detach_volume(context, instance_id, volume['id'], False)
+
+        if (instance_ref['state'] == power_state.SHUTOFF and
+            instance_ref['state_description'] != 'stopped'):
             self.db.instance_destroy(context, instance_id)
             raise exception.Error(_('trying to destroy already destroyed'
                                     ' instance: %s') % instance_id)
         self.driver.destroy(instance_ref)
 
+        if action_str == 'Terminating':
+            terminate_volumes(self.db, context, instance_id)
+
+    @exception.wrap_exception
+    @checks_instance_lock
+    def terminate_instance(self, context, instance_id):
+        """Terminate an instance on this host."""
+        self._shutdown_instance(context, instance_id, 'Terminating')
+
         # TODO(ja): should we keep it in a terminated state for a bit?
         self.db.instance_destroy(context, instance_id)
 
     @exception.wrap_exception
     @checks_instance_lock
+    def stop_instance(self, context, instance_id):
+        """Stop an instance on this host."""
+        self._shutdown_instance(context, instance_id, 'Stopping')
+        # instance state will be updated to stopped by _poll_instance_states()
+
+    @exception.wrap_exception
+    @checks_instance_lock
     def rebuild_instance(self, context, instance_id, **kwargs):
         """Destroy and re-make this instance.
 
@@ -800,6 +890,22 @@
         instance_ref = self.db.instance_get(context, instance_id)
         return self.driver.get_vnc_console(instance_ref)
 
+    def _attach_volume_boot(self, context, instance_id, volume_id, mountpoint):
+        """Attach a volume to an instance at boot time, so the actual attach
+        is done by instance creation."""
+
+        # TODO(yamahata):
+        #     should move check_attach to volume manager?
+        volume.API().check_attach(context, volume_id)
+
+        context = context.elevated()
+        LOG.audit(_("instance %(instance_id)s: booting with "
+                    "volume %(volume_id)s at %(mountpoint)s") %
+                  locals(), context=context)
+        dev_path = self.volume_manager.setup_compute_volume(context, volume_id)
+        self.db.volume_attached(context, volume_id, instance_id, mountpoint)
+        return dev_path
+
     @checks_instance_lock
     def attach_volume(self, context, instance_id, volume_id, mountpoint):
         """Attach a volume to an instance."""
@@ -817,6 +923,16 @@
                                       volume_id,
                                       instance_id,
                                       mountpoint)
+            values = {
+                'instance_id': instance_id,
+                'device_name': mountpoint,
+                'delete_on_termination': False,
+                'virtual_name': None,
+                'snapshot_id': None,
+                'volume_id': volume_id,
+                'volume_size': None,
+                'no_device': None}
+            self.db.block_device_mapping_create(context, values)
         except Exception as exc:  # pylint: disable=W0702
             # NOTE(vish): The inline callback eats the exception info so we
             #             log the traceback here and reraise the same
@@ -831,7 +947,7 @@
 
     @exception.wrap_exception
     @checks_instance_lock
-    def detach_volume(self, context, instance_id, volume_id):
+    def _detach_volume(self, context, instance_id, volume_id, destroy_bdm):
         """Detach a volume from an instance."""
         context = context.elevated()
         instance_ref = self.db.instance_get(context, instance_id)
@@ -847,8 +963,15 @@
                                           volume_ref['mountpoint'])
         self.volume_manager.remove_compute_volume(context, volume_id)
         self.db.volume_detached(context, volume_id)
+        if destroy_bdm:
+            self.db.block_device_mapping_destroy_by_instance_and_volume(
+                context, instance_id, volume_id)
         return True
 
+    def detach_volume(self, context, instance_id, volume_id):
+        """Detach a volume from an instance."""
+        return self._detach_volume(context, instance_id, volume_id, True)
+
     def remove_volume(self, context, volume_id):
         """Remove volume on compute host.
 
@@ -1174,11 +1297,14 @@
                              "State=%(db_state)s, so setting state to "
                              "shutoff.") % locals())
                     vm_state = power_state.SHUTOFF
+                    if db_instance['state_description'] == 'stopping':
+                        self.db.instance_stop(context, db_instance['id'])
+                        continue
                 else:
                     vm_state = vm_instance.state
                     vms_not_found_in_db.remove(name)
 
-                if db_instance['state_description'] == 'migrating':
+                if (db_instance['state_description'] in
+                        ['migrating', 'stopping']):
                     # A situation in which the db record exists but no
                     # instance sometimes occurs during live-migration at
                     # the src compute; this case should be ignored.
 
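The per-mapping branch in `_setup_block_device_mapping()` above takes one of three paths: create-then-attach a volume (when a snapshot is given), attach an existing volume, or fall through to the not-yet-supported ephemeral/no-device cases. A simplified standalone sketch of that decision order, with dicts standing in for DB rows (an illustration, not Nova code):

```python
def bdm_actions(bdm):
    """List the actions the setup loop would take for one mapping."""
    actions = []
    if bdm.get('snapshot_id') is not None and bdm.get('volume_id') is None:
        # A fresh volume is created from the snapshot and its id recorded,
        # so the mapping then falls into the attach branch below.
        actions.append('create volume from snapshot')
        bdm = dict(bdm, volume_id='new-volume')
    if bdm.get('volume_id') is not None:
        actions.append('check_attach + attach at boot')
    elif bdm.get('virtual_name') is not None:
        actions.append('ephemeral (not supported yet)')
    else:
        actions.append('no device (not supported yet)')
    return actions
```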
=== added file 'nova/compute/utils.py'
--- nova/compute/utils.py 1970-01-01 00:00:00 +0000
+++ nova/compute/utils.py 2011-06-17 23:35:54 +0000
@@ -0,0 +1,29 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 VA Linux Systems Japan K.K
4# Copyright (c) 2011 Isaku Yamahata
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18from nova import volume
19
20
21def terminate_volumes(db, context, instance_id):
22 """delete volumes of delete_on_termination=True in block device mapping"""
23 volume_api = volume.API()
24 for bdm in db.block_device_mapping_get_all_by_instance(context,
25 instance_id):
26 #LOG.debug(_("terminating bdm %s") % bdm)
27 if bdm['volume_id'] and bdm['delete_on_termination']:
28 volume_api.delete(context, bdm['volume_id'])
29 db.block_device_mapping_destroy(context, bdm['id'])
030
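The selection rule in `terminate_volumes()` above is worth calling out: every mapping row is destroyed on termination, but a volume itself is deleted only when the mapping both references a volume and carries the `delete_on_termination` flag. A minimal sketch of that filter (dicts stand in for mapping rows):

```python
def volumes_to_delete(bdms):
    """Pick the volume ids terminate_volumes() would delete: only mappings
    backed by a volume *and* flagged delete_on_termination qualify."""
    return [bdm['volume_id'] for bdm in bdms
            if bdm['volume_id'] and bdm['delete_on_termination']]
```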
=== modified file 'nova/db/api.py'
--- nova/db/api.py 2011-05-27 04:50:20 +0000
+++ nova/db/api.py 2011-06-17 23:35:54 +0000
@@ -414,6 +414,11 @@
     return IMPL.instance_destroy(context, instance_id)
 
 
+def instance_stop(context, instance_id):
+    """Stop the instance or raise if it does not exist."""
+    return IMPL.instance_stop(context, instance_id)
+
+
 def instance_get(context, instance_id):
     """Get an instance or raise if it does not exist."""
     return IMPL.instance_get(context, instance_id)
@@ -920,6 +925,36 @@
 ####################
 
 
+def block_device_mapping_create(context, values):
+    """Create an entry of block device mapping."""
+    return IMPL.block_device_mapping_create(context, values)
+
+
+def block_device_mapping_update(context, bdm_id, values):
+    """Update an entry of block device mapping."""
+    return IMPL.block_device_mapping_update(context, bdm_id, values)
+
+
+def block_device_mapping_get_all_by_instance(context, instance_id):
+    """Get all block device mappings belonging to an instance."""
+    return IMPL.block_device_mapping_get_all_by_instance(context, instance_id)
+
+
+def block_device_mapping_destroy(context, bdm_id):
+    """Destroy the block device mapping."""
+    return IMPL.block_device_mapping_destroy(context, bdm_id)
+
+
+def block_device_mapping_destroy_by_instance_and_volume(context, instance_id,
+                                                        volume_id):
+    """Destroy the block device mapping or raise if it does not exist."""
+    return IMPL.block_device_mapping_destroy_by_instance_and_volume(
+        context, instance_id, volume_id)
+
+
+####################
+
+
 def security_group_get_all(context):
     """Get all security groups."""
     return IMPL.security_group_get_all(context)
 
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-06-15 21:35:31 +0000
+++ nova/db/sqlalchemy/api.py 2011-06-17 23:35:54 +0000
@@ -840,6 +840,25 @@
 
 
 @require_context
+def instance_stop(context, instance_id):
+    session = get_session()
+    with session.begin():
+        from nova.compute import power_state
+        session.query(models.Instance).\
+                filter_by(id=instance_id).\
+                update({'host': None,
+                        'state': power_state.SHUTOFF,
+                        'state_description': 'stopped',
+                        'updated_at': literal_column('updated_at')})
+        session.query(models.SecurityGroupInstanceAssociation).\
+                filter_by(instance_id=instance_id).\
+                update({'updated_at': literal_column('updated_at')})
+        session.query(models.InstanceMetadata).\
+                filter_by(instance_id=instance_id).\
+                update({'updated_at': literal_column('updated_at')})
+
+
+@require_context
 def instance_get(context, instance_id, session=None):
     if not session:
         session = get_session()
@@ -1883,6 +1902,66 @@
 
 
 @require_context
+def block_device_mapping_create(context, values):
+    bdm_ref = models.BlockDeviceMapping()
+    bdm_ref.update(values)
+
+    session = get_session()
+    with session.begin():
+        bdm_ref.save(session=session)
+
+
+@require_context
+def block_device_mapping_update(context, bdm_id, values):
+    session = get_session()
+    with session.begin():
+        session.query(models.BlockDeviceMapping).\
+                filter_by(id=bdm_id).\
+                filter_by(deleted=False).\
+                update(values)
+
+
+@require_context
+def block_device_mapping_get_all_by_instance(context, instance_id):
+    session = get_session()
+    result = session.query(models.BlockDeviceMapping).\
+             filter_by(instance_id=instance_id).\
+             filter_by(deleted=False).\
+             all()
+    if not result:
+        return []
+    return result
+
+
+@require_context
+def block_device_mapping_destroy(context, bdm_id):
+    session = get_session()
+    with session.begin():
+        session.query(models.BlockDeviceMapping).\
+                filter_by(id=bdm_id).\
+                update({'deleted': True,
+                        'deleted_at': utils.utcnow(),
+                        'updated_at': literal_column('updated_at')})
+
+
+@require_context
+def block_device_mapping_destroy_by_instance_and_volume(context, instance_id,
+                                                        volume_id):
+    session = get_session()
+    with session.begin():
+        session.query(models.BlockDeviceMapping).\
+                filter_by(instance_id=instance_id).\
+                filter_by(volume_id=volume_id).\
+                filter_by(deleted=False).\
+                update({'deleted': True,
+                        'deleted_at': utils.utcnow(),
+                        'updated_at': literal_column('updated_at')})
+
+
+###################
+
+
+@require_context
 def security_group_get_all(context):
     session = get_session()
     return session.query(models.SecurityGroup).\
 
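Note that the `block_device_mapping_destroy*` functions above never issue a SQL `DELETE`: they soft-delete by flipping `deleted` and stamping `deleted_at`, while `literal_column('updated_at')` pins the update timestamp. A minimal in-memory analogue of that pattern (dicts stand in for SQLAlchemy rows; the `updated_at` pinning is omitted):

```python
import datetime


def soft_delete(rows, **filters):
    """Mark matching, not-yet-deleted rows as deleted; return the count."""
    now = datetime.datetime.utcnow()
    count = 0
    for row in rows:
        if row.get('deleted'):
            continue  # already soft-deleted rows are filtered out
        if all(row.get(k) == v for k, v in filters.items()):
            row.update(deleted=True, deleted_at=now)
            count += 1
    return count
```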
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py'
--- nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/024_add_block_device_mapping.py 2011-06-17 23:35:54 +0000
@@ -0,0 +1,87 @@
+# Copyright 2011 OpenStack LLC.
+# Copyright 2011 Isaku Yamahata
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from sqlalchemy import MetaData, Table, Column
+from sqlalchemy import DateTime, Boolean, Integer, String
+from sqlalchemy import ForeignKey
+from nova import log as logging
+
+meta = MetaData()
+
+# Just for the ForeignKey and column creation to succeed, these are not the
+# actual definitions of instances or services.
+instances = Table('instances', meta,
+        Column('id', Integer(), primary_key=True, nullable=False),
+        )
+
+volumes = Table('volumes', meta,
+        Column('id', Integer(), primary_key=True, nullable=False),
+        )
+
+snapshots = Table('snapshots', meta,
+        Column('id', Integer(), primary_key=True, nullable=False),
+        )
+
+
+block_device_mapping = Table('block_device_mapping', meta,
+        Column('created_at', DateTime(timezone=False)),
+        Column('updated_at', DateTime(timezone=False)),
+        Column('deleted_at', DateTime(timezone=False)),
+        Column('deleted', Boolean(create_constraint=True, name=None)),
+        Column('id', Integer(), primary_key=True, autoincrement=True),
+        Column('instance_id',
+               Integer(),
+               ForeignKey('instances.id'),
+               nullable=False),
+        Column('device_name',
+               String(length=255, convert_unicode=False, assert_unicode=None,
+                      unicode_error=None, _warn_on_bytestring=False),
+               nullable=False),
+        Column('delete_on_termination',
+               Boolean(create_constraint=True, name=None),
+               default=False),
+        Column('virtual_name',
+               String(length=255, convert_unicode=False, assert_unicode=None,
+                      unicode_error=None, _warn_on_bytestring=False),
+               nullable=True),
+        Column('snapshot_id',
+               Integer(),
+               ForeignKey('snapshots.id'),
+               nullable=True),
+        Column('volume_id', Integer(), ForeignKey('volumes.id'),
+               nullable=True),
+        Column('volume_size', Integer(), nullable=True),
+        Column('no_device',
+               Boolean(create_constraint=True, name=None),
+               nullable=True),
+        )
+
+
+def upgrade(migrate_engine):
+    # Upgrade operations go here. Don't create your own engine;
+    # bind migrate_engine to your metadata.
+    meta.bind = migrate_engine
+    try:
+        block_device_mapping.create()
+    except Exception:
+        logging.info(repr(block_device_mapping))
+        logging.exception('Exception while creating table')
+        meta.drop_all(tables=[block_device_mapping])
+        raise
+
+
+def downgrade(migrate_engine):
+    # Operations to reverse the above upgrade go here.
+    block_device_mapping.drop()
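For inspection, the table the migration above creates can be rendered as roughly equivalent SQLite DDL (a sketch: SQLAlchemy generates the real statement, the foreign-key targets come from the stub tables in the script, and constraint names are simplified):

```python
# Create the sketched schema in an in-memory SQLite database and round-trip
# one row to confirm the DDL is well-formed (FK enforcement is off by
# default in SQLite, so the stub target tables are not needed here).
import sqlite3

DDL = """
CREATE TABLE block_device_mapping (
    created_at DATETIME,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOLEAN,
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    instance_id INTEGER NOT NULL REFERENCES instances (id),
    device_name VARCHAR(255) NOT NULL,
    delete_on_termination BOOLEAN DEFAULT 0,
    virtual_name VARCHAR(255),
    snapshot_id INTEGER REFERENCES snapshots (id),
    volume_id INTEGER REFERENCES volumes (id),
    volume_size INTEGER,
    no_device BOOLEAN
)
"""

conn = sqlite3.connect(':memory:')
conn.execute(DDL)
conn.execute("INSERT INTO block_device_mapping "
             "(instance_id, device_name, volume_id) VALUES (1, '/dev/sdb', 42)")
row = conn.execute("SELECT device_name, volume_id "
                   "FROM block_device_mapping").fetchone()
```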
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-06-08 15:45:23 +0000
+++ nova/db/sqlalchemy/models.py 2011-06-17 23:35:54 +0000
@@ -357,6 +357,45 @@
     display_description = Column(String(255))
 
 
+class BlockDeviceMapping(BASE, NovaBase):
+    """Represents a block device mapping as defined by EC2."""
+    __tablename__ = "block_device_mapping"
+    id = Column(Integer, primary_key=True, autoincrement=True)
+
+    instance_id = Column(Integer, ForeignKey('instances.id'), nullable=False)
+    instance = relationship(Instance,
+                            backref=backref('block_device_mapping'),
+                            foreign_keys=instance_id,
+                            primaryjoin='and_(BlockDeviceMapping.instance_id=='
+                                        'Instance.id,'
+                                        'BlockDeviceMapping.deleted=='
+                                        'False)')
+    device_name = Column(String(255), nullable=False)
+
+    # default=False for compatibility with the existing code.
+    # With the EC2 API:
+    #   default True for an ami-specified device.
+    #   default False for devices created at any other time.
+    delete_on_termination = Column(Boolean, default=False)
+
+    # for ephemeral device
+    virtual_name = Column(String(255), nullable=True)
+
+    # for snapshot or volume
+    snapshot_id = Column(Integer, ForeignKey('snapshots.id'), nullable=True)
+    # outer join
+    snapshot = relationship(Snapshot,
+                            foreign_keys=snapshot_id)
+
+    volume_id = Column(Integer, ForeignKey('volumes.id'), nullable=True)
+    volume = relationship(Volume,
+                          foreign_keys=volume_id)
+    volume_size = Column(Integer, nullable=True)
+
+    # for no_device, to suppress devices.
+    no_device = Column(Boolean, nullable=True)
+
+
 class ExportDevice(BASE, NovaBase):
     """Represents a shelf and blade that a volume can be exported on."""
     __tablename__ = 'export_devices'
 
=== modified file 'nova/scheduler/simple.py'
--- nova/scheduler/simple.py 2011-06-02 21:23:05 +0000
+++ nova/scheduler/simple.py 2011-06-17 23:35:54 +0000
@@ -39,7 +39,7 @@
 class SimpleScheduler(chance.ChanceScheduler):
     """Implements Naive Scheduler that tries to find least loaded host."""
 
-    def schedule_run_instance(self, context, instance_id, *_args, **_kwargs):
+    def _schedule_instance(self, context, instance_id, *_args, **_kwargs):
         """Picks a host that is up and has the fewest running instances."""
         instance_ref = db.instance_get(context, instance_id)
         if (instance_ref['availability_zone']
@@ -75,6 +75,12 @@
                                 " for this request. Is the appropriate"
                                 " service running?"))
 
+    def schedule_run_instance(self, context, instance_id, *_args, **_kwargs):
+        return self._schedule_instance(context, instance_id,
+                                       *_args, **_kwargs)
+
+    def schedule_start_instance(self, context, instance_id, *_args, **_kwargs):
+        return self._schedule_instance(context, instance_id,
+                                       *_args, **_kwargs)
+
     def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
         """Picks a host that is up and has the fewest volumes."""
         volume_ref = db.volume_get(context, volume_id)
 
=== modified file 'nova/tests/test_api.py'
--- nova/tests/test_api.py 2011-05-23 21:15:10 +0000
+++ nova/tests/test_api.py 2011-06-17 23:35:54 +0000
@@ -89,7 +89,7 @@
 class XmlConversionTestCase(test.TestCase):
     """Unit test api xml conversion"""
     def test_number_conversion(self):
-        conv = apirequest._try_convert
+        conv = ec2utils._try_convert
         self.assertEqual(conv('None'), None)
         self.assertEqual(conv('True'), True)
         self.assertEqual(conv('False'), False)
 
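The test above only repoints at the helper's new home in `ec2utils`. For readers following along, a rough reimplementation consistent with those assertions (the real `ec2utils._try_convert` also handles other literal forms; this sketch is illustrative, not the actual nova code):

```python
def try_convert(value):
    """Best-effort conversion of a query-string literal to a Python value."""
    literals = {'None': None, 'True': True, 'False': False}
    if value in literals:
        return literals[value]
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value  # leave unrecognized strings untouched
```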
=== modified file 'nova/tests/test_cloud.py'
--- nova/tests/test_cloud.py 2011-06-17 20:47:23 +0000
+++ nova/tests/test_cloud.py 2011-06-17 23:35:54 +0000
@@ -56,6 +56,7 @@
         self.compute = self.start_service('compute')
         self.scheduter = self.start_service('scheduler')
         self.network = self.start_service('network')
+        self.volume = self.start_service('volume')
         self.image_service = utils.import_object(FLAGS.image_service)
 
         self.manager = manager.AuthManager()
@@ -373,14 +374,21 @@
         self.assertRaises(exception.ImageNotFound, deregister_image,
                           self.context, 'ami-bad001')
 
+    def _run_instance(self, **kwargs):
+        rv = self.cloud.run_instances(self.context, **kwargs)
+        instance_id = rv['instancesSet'][0]['instanceId']
+        return instance_id
+
+    def _run_instance_wait(self, **kwargs):
+        ec2_instance_id = self._run_instance(**kwargs)
+        self._wait_for_running(ec2_instance_id)
+        return ec2_instance_id
+
     def test_console_output(self):
-        instance_type = FLAGS.default_instance_type
-        max_count = 1
-        kwargs = {'image_id': 'ami-1',
-                  'instance_type': instance_type,
-                  'max_count': max_count}
-        rv = self.cloud.run_instances(self.context, **kwargs)
-        instance_id = rv['instancesSet'][0]['instanceId']
+        instance_id = self._run_instance(
+            image_id='ami-1',
+            instance_type=FLAGS.default_instance_type,
+            max_count=1)
         output = self.cloud.get_console_output(context=self.context,
                                                instance_id=[instance_id])
         self.assertEquals(b64decode(output['output']), 'FAKE CONSOLE?OUTPUT')
@@ -389,9 +397,7 @@
         rv = self.cloud.terminate_instances(self.context, [instance_id])
 
     def test_ajax_console(self):
-        kwargs = {'image_id': 'ami-1'}
-        rv = self.cloud.run_instances(self.context, **kwargs)
-        instance_id = rv['instancesSet'][0]['instanceId']
+        instance_id = self._run_instance(image_id='ami-1')
         output = self.cloud.get_ajax_console(context=self.context,
                                              instance_id=[instance_id])
         self.assertEquals(output['url'],
@@ -569,3 +575,299 @@
         vol = db.volume_get(self.context, vol['id'])
         self.assertEqual(None, vol['mountpoint'])
         db.volume_destroy(self.context, vol['id'])
+
+    def _restart_compute_service(self, periodic_interval=None):
+        """restart compute service. NOTE: fake driver forgets all instances."""
+        self.compute.kill()
+        if periodic_interval:
+            self.compute = self.start_service(
+                'compute', periodic_interval=periodic_interval)
+        else:
+            self.compute = self.start_service('compute')
+
+    def _wait_for_state(self, ctxt, instance_id, predicate):
+        """Wait for a stopping instance to reach a given state"""
+        id = ec2utils.ec2_id_to_id(instance_id)
+        while True:
+            info = self.cloud.compute_api.get(context=ctxt, instance_id=id)
+            LOG.debug(info)
+            if predicate(info):
+                break
+            greenthread.sleep(1)
+
+    def _wait_for_running(self, instance_id):
+        def is_running(info):
+            return info['state_description'] == 'running'
+        self._wait_for_state(self.context, instance_id, is_running)
+
+    def _wait_for_stopped(self, instance_id):
+        def is_stopped(info):
+            return info['state_description'] == 'stopped'
+        self._wait_for_state(self.context, instance_id, is_stopped)
+
+    def _wait_for_terminate(self, instance_id):
+        def is_deleted(info):
+            return info['deleted']
+        elevated = self.context.elevated(read_deleted=True)
+        self._wait_for_state(elevated, instance_id, is_deleted)
+
+    def test_stop_start_instance(self):
+        """Makes sure stop/start instance works"""
+        # enforce periodic tasks run in short time to avoid wait for 60s.
+        self._restart_compute_service(periodic_interval=0.3)
+
+        kwargs = {'image_id': 'ami-1',
+                  'instance_type': FLAGS.default_instance_type,
+                  'max_count': 1, }
+        instance_id = self._run_instance_wait(**kwargs)
+
+        # a running instance can't be started. It is just ignored.
+        result = self.cloud.start_instances(self.context, [instance_id])
+        greenthread.sleep(0.3)
+        self.assertTrue(result)
+
+        result = self.cloud.stop_instances(self.context, [instance_id])
+        greenthread.sleep(0.3)
+        self.assertTrue(result)
+        self._wait_for_stopped(instance_id)
+
+        result = self.cloud.start_instances(self.context, [instance_id])
+        greenthread.sleep(0.3)
+        self.assertTrue(result)
+        self._wait_for_running(instance_id)
+
+        result = self.cloud.stop_instances(self.context, [instance_id])
+        greenthread.sleep(0.3)
+        self.assertTrue(result)
+        self._wait_for_stopped(instance_id)
+
+        result = self.cloud.terminate_instances(self.context, [instance_id])
+        greenthread.sleep(0.3)
+        self.assertTrue(result)
+
+        self._restart_compute_service()
+
+    def _volume_create(self):
+        kwargs = {'status': 'available',
+                  'host': self.volume.host,
+                  'size': 1,
+                  'attach_status': 'detached', }
+        return db.volume_create(self.context, kwargs)
+
+    def _assert_volume_attached(self, vol, instance_id, mountpoint):
+        self.assertEqual(vol['instance_id'], instance_id)
+        self.assertEqual(vol['mountpoint'], mountpoint)
+        self.assertEqual(vol['status'], "in-use")
+        self.assertEqual(vol['attach_status'], "attached")
+
+    def _assert_volume_detached(self, vol):
+        self.assertEqual(vol['instance_id'], None)
+        self.assertEqual(vol['mountpoint'], None)
+        self.assertEqual(vol['status'], "available")
+        self.assertEqual(vol['attach_status'], "detached")
+
+    def test_stop_start_with_volume(self):
+        """Make sure run instance with block device mapping works"""
+
+        # enforce periodic tasks run in short time to avoid wait for 60s.
+        self._restart_compute_service(periodic_interval=0.3)
+
+        vol1 = self._volume_create()
+        vol2 = self._volume_create()
+        kwargs = {'image_id': 'ami-1',
+                  'instance_type': FLAGS.default_instance_type,
+                  'max_count': 1,
+                  'block_device_mapping': [{'device_name': '/dev/vdb',
+                                            'volume_id': vol1['id'],
+                                            'delete_on_termination': False, },
+                                           {'device_name': '/dev/vdc',
+                                            'volume_id': vol2['id'],
+                                            'delete_on_termination': True, },
+                                           ]}
+        ec2_instance_id = self._run_instance_wait(**kwargs)
+        instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
+
+        vols = db.volume_get_all_by_instance(self.context, instance_id)
+        self.assertEqual(len(vols), 2)
+        for vol in vols:
+            self.assertTrue(vol['id'] == vol1['id'] or vol['id'] == vol2['id'])
+
+        vol = db.volume_get(self.context, vol1['id'])
+        self._assert_volume_attached(vol, instance_id, '/dev/vdb')
+
+        vol = db.volume_get(self.context, vol2['id'])
+        self._assert_volume_attached(vol, instance_id, '/dev/vdc')
+
+        result = self.cloud.stop_instances(self.context, [ec2_instance_id])
+        self.assertTrue(result)
+        self._wait_for_stopped(ec2_instance_id)
+
+        vol = db.volume_get(self.context, vol1['id'])
+        self._assert_volume_detached(vol)
+        vol = db.volume_get(self.context, vol2['id'])
+        self._assert_volume_detached(vol)
+
+        self.cloud.start_instances(self.context, [ec2_instance_id])
+        self._wait_for_running(ec2_instance_id)
+        vols = db.volume_get_all_by_instance(self.context, instance_id)
+        self.assertEqual(len(vols), 2)
+        for vol in vols:
+            self.assertTrue(vol['id'] == vol1['id'] or vol['id'] == vol2['id'])
+            self.assertTrue(vol['mountpoint'] == '/dev/vdb' or
+                            vol['mountpoint'] == '/dev/vdc')
+            self.assertEqual(vol['instance_id'], instance_id)
+            self.assertEqual(vol['status'], "in-use")
+            self.assertEqual(vol['attach_status'], "attached")
+
+        self.cloud.terminate_instances(self.context, [ec2_instance_id])
+        greenthread.sleep(0.3)
+
+        admin_ctxt = context.get_admin_context(read_deleted=False)
+        vol = db.volume_get(admin_ctxt, vol1['id'])
+        self.assertFalse(vol['deleted'])
+        db.volume_destroy(self.context, vol1['id'])
+
+        greenthread.sleep(0.3)
+        admin_ctxt = context.get_admin_context(read_deleted=True)
+        vol = db.volume_get(admin_ctxt, vol2['id'])
+        self.assertTrue(vol['deleted'])
+
+        self._restart_compute_service()
+
+    def test_stop_with_attached_volume(self):
+        """Make sure attach info is reflected to block device mapping"""
+        # enforce periodic tasks run in short time to avoid wait for 60s.
+        self._restart_compute_service(periodic_interval=0.3)
+
+        vol1 = self._volume_create()
+        vol2 = self._volume_create()
+        kwargs = {'image_id': 'ami-1',
+                  'instance_type': FLAGS.default_instance_type,
+                  'max_count': 1,
+                  'block_device_mapping': [{'device_name': '/dev/vdb',
+                                            'volume_id': vol1['id'],
+                                            'delete_on_termination': True}]}
+        ec2_instance_id = self._run_instance_wait(**kwargs)
+        instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
+
+        vols = db.volume_get_all_by_instance(self.context, instance_id)
+        self.assertEqual(len(vols), 1)
+        for vol in vols:
+            self.assertEqual(vol['id'], vol1['id'])
+            self._assert_volume_attached(vol, instance_id, '/dev/vdb')
+
+        vol = db.volume_get(self.context, vol2['id'])
+        self._assert_volume_detached(vol)
+
+        self.cloud.compute_api.attach_volume(self.context,
+                                             instance_id=instance_id,
+                                             volume_id=vol2['id'],
+                                             device='/dev/vdc')
+        greenthread.sleep(0.3)
+        vol = db.volume_get(self.context, vol2['id'])
+        self._assert_volume_attached(vol, instance_id, '/dev/vdc')
+
+        self.cloud.compute_api.detach_volume(self.context,
+                                             volume_id=vol1['id'])
+        greenthread.sleep(0.3)
+        vol = db.volume_get(self.context, vol1['id'])
+        self._assert_volume_detached(vol)
+
+        result = self.cloud.stop_instances(self.context, [ec2_instance_id])
+        self.assertTrue(result)
+        self._wait_for_stopped(ec2_instance_id)
+
+        for vol_id in (vol1['id'], vol2['id']):
+            vol = db.volume_get(self.context, vol_id)
+            self._assert_volume_detached(vol)
+
+        self.cloud.start_instances(self.context, [ec2_instance_id])
+        self._wait_for_running(ec2_instance_id)
+        vols = db.volume_get_all_by_instance(self.context, instance_id)
+        self.assertEqual(len(vols), 1)
+        for vol in vols:
+            self.assertEqual(vol['id'], vol2['id'])
+            self._assert_volume_attached(vol, instance_id, '/dev/vdc')
+
+        vol = db.volume_get(self.context, vol1['id'])
+        self._assert_volume_detached(vol)
+
+        self.cloud.terminate_instances(self.context, [ec2_instance_id])
+        greenthread.sleep(0.3)
+
+        for vol_id in (vol1['id'], vol2['id']):
+            vol = db.volume_get(self.context, vol_id)
+            self.assertEqual(vol['id'], vol_id)
+            self._assert_volume_detached(vol)
+            db.volume_destroy(self.context, vol_id)
+
+        self._restart_compute_service()
+
+    def _create_snapshot(self, ec2_volume_id):
+        result = self.cloud.create_snapshot(self.context,
+                                            volume_id=ec2_volume_id)
+        greenthread.sleep(0.3)
+        return result['snapshotId']
+
+    def test_run_with_snapshot(self):
+        """Makes sure run/stop/start instance with snapshot works."""
+        vol = self._volume_create()
+        ec2_volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x')
+
+        ec2_snapshot1_id = self._create_snapshot(ec2_volume_id)
+        snapshot1_id = ec2utils.ec2_id_to_id(ec2_snapshot1_id)
+        ec2_snapshot2_id = self._create_snapshot(ec2_volume_id)
+        snapshot2_id = ec2utils.ec2_id_to_id(ec2_snapshot2_id)
+
+        kwargs = {'image_id': 'ami-1',
+                  'instance_type': FLAGS.default_instance_type,
+                  'max_count': 1,
+                  'block_device_mapping': [{'device_name': '/dev/vdb',
+                                            'snapshot_id': snapshot1_id,
+                                            'delete_on_termination': False, },
+                                           {'device_name': '/dev/vdc',
+                                            'snapshot_id': snapshot2_id,
+                                            'delete_on_termination': True}]}
+        ec2_instance_id = self._run_instance_wait(**kwargs)
+        instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
+
+        vols = db.volume_get_all_by_instance(self.context, instance_id)
+        self.assertEqual(len(vols), 2)
+        vol1_id = None
+        vol2_id = None
+        for vol in vols:
+            snapshot_id = vol['snapshot_id']
+            if snapshot_id == snapshot1_id:
+                vol1_id = vol['id']
+                mountpoint = '/dev/vdb'
+            elif snapshot_id == snapshot2_id:
+                vol2_id = vol['id']
+                mountpoint = '/dev/vdc'
+            else:
+                self.fail()
+
+            self._assert_volume_attached(vol, instance_id, mountpoint)
+
+        self.assertTrue(vol1_id)
+        self.assertTrue(vol2_id)
+
+        self.cloud.terminate_instances(self.context, [ec2_instance_id])
+        greenthread.sleep(0.3)
+        self._wait_for_terminate(ec2_instance_id)
+
+        greenthread.sleep(0.3)
+        admin_ctxt = context.get_admin_context(read_deleted=False)
+        vol = db.volume_get(admin_ctxt, vol1_id)
+        self._assert_volume_detached(vol)
+        self.assertFalse(vol['deleted'])
+        db.volume_destroy(self.context, vol1_id)
+
+        greenthread.sleep(0.3)
+        admin_ctxt = context.get_admin_context(read_deleted=True)
+        vol = db.volume_get(admin_ctxt, vol2_id)
+        self.assertTrue(vol['deleted'])
+
+        for snapshot_id in (ec2_snapshot1_id, ec2_snapshot2_id):
+            self.cloud.delete_snapshot(self.context, snapshot_id)
+            greenthread.sleep(0.3)
+        db.volume_destroy(self.context, vol['id'])
 
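The `_wait_for_state` helpers added above all follow one pattern: poll the compute API until a predicate on the instance info holds. The same pattern detached from the test case, with an injected `get_info` callable and an iteration cap in place of `greenthread.sleep` (both are assumptions of this sketch, not part of the patch):

```python
def wait_for_state(get_info, predicate, max_polls=100):
    """Poll get_info() until predicate(info) is true; return the final info."""
    for _ in range(max_polls):
        info = get_info()
        if predicate(info):
            return info
    raise AssertionError('state not reached within %d polls' % max_polls)
```

Capping the polls is what keeps a wedged instance from hanging the caller forever, which the test-suite version (an unbounded `while True`) does not guard against.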
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-06-07 17:32:06 +0000
+++ nova/tests/test_compute.py 2011-06-17 23:35:54 +0000
@@ -228,6 +228,21 @@
         self.assert_(instance_ref['launched_at'] < terminate)
         self.assert_(instance_ref['deleted_at'] > terminate)
 
+    def test_stop(self):
+        """Ensure instance can be stopped"""
+        instance_id = self._create_instance()
+        self.compute.run_instance(self.context, instance_id)
+        self.compute.stop_instance(self.context, instance_id)
+        self.compute.terminate_instance(self.context, instance_id)
+
+    def test_start(self):
+        """Ensure instance can be started"""
+        instance_id = self._create_instance()
+        self.compute.run_instance(self.context, instance_id)
+        self.compute.stop_instance(self.context, instance_id)
+        self.compute.start_instance(self.context, instance_id)
+        self.compute.terminate_instance(self.context, instance_id)
+
     def test_pause(self):
         """Ensure instance can be paused"""
         instance_id = self._create_instance()
 
=== modified file 'nova/virt/driver.py'
--- nova/virt/driver.py 2011-03-30 00:35:24 +0000
+++ nova/virt/driver.py 2011-06-17 23:35:54 +0000
@@ -61,7 +61,7 @@
         """Return a list of InstanceInfo for all registered VMs"""
         raise NotImplementedError()
 
-    def spawn(self, instance, network_info=None):
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
         """Launch a VM for the specified instance"""
         raise NotImplementedError()
 
 
=== modified file 'nova/virt/fake.py'
--- nova/virt/fake.py 2011-05-17 14:49:12 +0000
+++ nova/virt/fake.py 2011-06-17 23:35:54 +0000
@@ -129,7 +129,7 @@
             info_list.append(self._map_to_instance_info(instance))
         return info_list
 
-    def spawn(self, instance):
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
         """
         Create a new instance/VM/domain on the virtualization platform.
 
@@ -237,6 +237,10 @@
         """
         pass
 
+    def poll_rescued_instances(self, timeout):
+        """Poll for rescued instances"""
+        pass
+
     def migrate_disk_and_power_off(self, instance, dest):
         """
         Transfers the disk of a running instance in multiple phases, turning
 
=== modified file 'nova/virt/hyperv.py'
--- nova/virt/hyperv.py 2011-05-28 11:49:31 +0000
+++ nova/virt/hyperv.py 2011-06-17 23:35:54 +0000
@@ -139,7 +139,7 @@
 
         return instance_infos
 
-    def spawn(self, instance):
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
         """ Create a new VM and start it."""
         vm = self._lookup(instance.name)
         if vm is not None:
 
=== modified file 'nova/virt/libvirt.xml.template'
--- nova/virt/libvirt.xml.template 2011-05-31 17:45:26 +0000
+++ nova/virt/libvirt.xml.template 2011-06-17 23:35:54 +0000
@@ -67,11 +67,13 @@
         <target dev='${disk_prefix}b' bus='${disk_bus}'/>
     </disk>
 #else
+    #if not ($getVar('ebs_root', False))
     <disk type='file'>
         <driver type='${driver_type}'/>
         <source file='${basepath}/disk'/>
         <target dev='${disk_prefix}a' bus='${disk_bus}'/>
     </disk>
+    #end if
     #if $getVar('local', False)
     <disk type='file'>
         <driver type='${driver_type}'/>
@@ -79,6 +81,13 @@
         <target dev='${disk_prefix}b' bus='${disk_bus}'/>
     </disk>
     #end if
+    #for $vol in $volumes
+    <disk type='block'>
+        <driver type='raw'/>
+        <source dev='${vol.device_path}'/>
+        <target dev='${vol.mount_device}' bus='${disk_bus}'/>
+    </disk>
+    #end for
 #end if
#end if
 
 
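The template loop above emits one `<disk type='block'>` element per mapped volume, alongside (or in place of) the image-backed root disk. Its effect can be sketched in plain Python (a hypothetical helper for illustration, not part of the patch):

```python
def volume_disk_xml(volumes, disk_bus='virtio'):
    """Render the <disk type='block'> stanzas the template's #for loop produces."""
    stanzas = []
    for vol in volumes:
        stanzas.append(
            "<disk type='block'>\n"
            "  <driver type='raw'/>\n"
            "  <source dev='%(device_path)s'/>\n"
            "  <target dev='%(mount_device)s' bus='%(bus)s'/>\n"
            "</disk>" % {'device_path': vol['device_path'],
                         'mount_device': vol['mount_device'],
                         'bus': disk_bus})
    return '\n'.join(stanzas)
```

Each mapped volume becomes a raw block-backed disk: libvirt reads the host device at `device_path` and exposes it to the guest as `mount_device`.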
=== modified file 'nova/virt/libvirt/connection.py'
--- nova/virt/libvirt/connection.py 2011-06-06 15:54:11 +0000
+++ nova/virt/libvirt/connection.py 2011-06-17 23:35:54 +0000
@@ -40,6 +40,7 @@
 import multiprocessing
 import os
 import random
+import re
 import shutil
 import subprocess
 import sys
@@ -148,6 +149,10 @@
     Template = t.Template
 
 
+def _strip_dev(mount_path):
+    return re.sub(r'^/dev/', '', mount_path)
+
+
 class LibvirtConnection(driver.ComputeDriver):
 
     def __init__(self, read_only):
@@ -575,11 +580,14 @@
     # NOTE(ilyaalekseyev): Implementation like in multinics
     # for xenapi(tr3buchet)
     @exception.wrap_exception
-    def spawn(self, instance, network_info=None):
-        xml = self.to_xml(instance, False, network_info)
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
+        xml = self.to_xml(instance, False, network_info=network_info,
+                          block_device_mapping=block_device_mapping)
+        block_device_mapping = block_device_mapping or []
         self.firewall_driver.setup_basic_filtering(instance, network_info)
         self.firewall_driver.prepare_instance_filter(instance, network_info)
-        self._create_image(instance, xml, network_info=network_info)
+        self._create_image(instance, xml, network_info=network_info,
+                           block_device_mapping=block_device_mapping)
         domain = self._create_new_domain(xml)
         LOG.debug(_("instance %s: is running"), instance['name'])
         self.firewall_driver.apply_instance_filter(instance)
@@ -761,7 +769,8 @@
         # TODO(vish): should we format disk by default?
 
     def _create_image(self, inst, libvirt_xml, suffix='', disk_images=None,
-                      network_info=None):
+                      network_info=None, block_device_mapping=None):
+        block_device_mapping = block_device_mapping or []
         if not network_info:
             network_info = netutils.get_network_info(inst)
 
@@ -824,16 +833,19 @@
             size = None
             root_fname += "_sm"
 
-        self._cache_image(fn=self._fetch_image,
-                          target=basepath('disk'),
-                          fname=root_fname,
-                          cow=FLAGS.use_cow_images,
-                          image_id=disk_images['image_id'],
-                          user=user,
-                          project=project,
-                          size=size)
+        if not self._volume_in_mapping(self.root_mount_device,
+                                       block_device_mapping):
+            self._cache_image(fn=self._fetch_image,
+                              target=basepath('disk'),
+                              fname=root_fname,
+                              cow=FLAGS.use_cow_images,
+                              image_id=disk_images['image_id'],
+                              user=user,
+                              project=project,
+                              size=size)
 
-        if inst_type['local_gb']:
+        if inst_type['local_gb'] and not self._volume_in_mapping(
+            self.local_mount_device, block_device_mapping):
             self._cache_image(fn=self._create_local,
                               target=basepath('disk.local'),
                               fname="local_%s" % inst_type['local_gb'],
@@ -948,7 +960,20 @@
 
         return result
 
-    def _prepare_xml_info(self, instance, rescue=False, network_info=None):
+    root_mount_device = 'vda'  # FIXME for now. it's hard coded.
+    local_mount_device = 'vdb'  # FIXME for now. it's hard coded.
+
+    def _volume_in_mapping(self, mount_device, block_device_mapping):
+        mount_device_ = _strip_dev(mount_device)
+        for vol in block_device_mapping:
+            vol_mount_device = _strip_dev(vol['mount_device'])
+            if vol_mount_device == mount_device_:
+                return True
+        return False
+
+    def _prepare_xml_info(self, instance, rescue=False, network_info=None,
+                          block_device_mapping=None):
+        block_device_mapping = block_device_mapping or []
         # TODO(adiantum) remove network_info creation code
         # when multinics will be completed
         if not network_info:
@@ -966,6 +991,16 @@
         else:
             driver_type = 'raw'
 
+        for vol in block_device_mapping:
+            vol['mount_device'] = _strip_dev(vol['mount_device'])
+        ebs_root = self._volume_in_mapping(self.root_mount_device,
+                                           block_device_mapping)
+        if self._volume_in_mapping(self.local_mount_device,
+                                   block_device_mapping):
+            local_gb = False
+        else:
+            local_gb = inst_type['local_gb']
+
         xml_info = {'type': FLAGS.libvirt_type,
                     'name': instance['name'],
                     'basepath': os.path.join(FLAGS.instances_path,
@@ -973,9 +1008,11 @@
                     'memory_kb': inst_type['memory_mb'] * 1024,
                     'vcpus': inst_type['vcpus'],
                     'rescue': rescue,
-                    'local': inst_type['local_gb'],
+                    'local': local_gb,
                     'driver_type': driver_type,
-                    'nics': nics}
+                    'nics': nics,
+                    'ebs_root': ebs_root,
+                    'volumes': block_device_mapping}
 
         if FLAGS.vnc_enabled:
             if FLAGS.libvirt_type != 'lxc':
@@ -991,10 +1028,13 @@
             xml_info['disk'] = xml_info['basepath'] + "/disk"
         return xml_info
 
-    def to_xml(self, instance, rescue=False, network_info=None):
+    def to_xml(self, instance, rescue=False, network_info=None,
+               block_device_mapping=None):
+        block_device_mapping = block_device_mapping or []
         # TODO(termie): cache?
         LOG.debug(_('instance %s: starting toXML method'), instance['name'])
-        xml_info = self._prepare_xml_info(instance, rescue, network_info)
+        xml_info = self._prepare_xml_info(instance, rescue, network_info,
+                                          block_device_mapping)
        xml = str(Template(self.libvirt_xml, searchList=[xml_info]))
         LOG.debug(_('instance %s: finished toXML method'), instance['name'])
         return xml
 
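The two helpers introduced in the libvirt driver decide whether the image-backed root disk or local scratch disk should be skipped because a volume is already mapped at that device. Extracted as standalone functions for illustration (module-level rather than methods, otherwise the logic mirrors the patch):

```python
import re


def strip_dev(mount_path):
    """Normalize '/dev/vda' and 'vda' to the bare device name."""
    return re.sub(r'^/dev/', '', mount_path)


def volume_in_mapping(mount_device, block_device_mapping):
    """True if any mapped volume targets the given device."""
    mount_device = strip_dev(mount_device)
    return any(strip_dev(vol['mount_device']) == mount_device
               for vol in block_device_mapping)
```

Normalizing both sides through `strip_dev` is what lets the EC2-style `/dev/vda` names in the mapping match the bare `vda`/`vdb` names the driver hard-codes for the root and local devices.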
=== modified file 'nova/virt/vmwareapi_conn.py'
--- nova/virt/vmwareapi_conn.py 2011-04-12 21:43:07 +0000
+++ nova/virt/vmwareapi_conn.py 2011-06-17 23:35:54 +0000
@@ -124,7 +124,7 @@
         """List VM instances."""
         return self._vmops.list_instances()
 
-    def spawn(self, instance):
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
         """Create VM instance."""
         self._vmops.spawn(instance)
 
 
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2011-05-18 16:27:39 +0000
+++ nova/virt/xenapi_conn.py 2011-06-17 23:35:54 +0000
@@ -194,7 +194,7 @@
     def list_instances_detail(self):
         return self._vmops.list_instances_detail()
 
-    def spawn(self, instance):
+    def spawn(self, instance, network_info=None, block_device_mapping=None):
         """Create VM instance"""
         self._vmops.spawn(instance)
 
 
=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-06-02 21:23:05 +0000
+++ nova/volume/api.py 2011-06-17 23:35:54 +0000
@@ -21,6 +21,9 @@
 """
 
 
+from eventlet import greenthread
+
+from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -44,7 +47,8 @@
             if snapshot['status'] != "available":
                 raise exception.ApiError(
                     _("Snapshot status must be available"))
-            size = snapshot['volume_size']
+            if not size:
+                size = snapshot['volume_size']
 
         if quota.allowed_volumes(context, 1, size) < 1:
             pid = context.project_id
@@ -73,6 +77,14 @@
                            "snapshot_id": snapshot_id}})
         return volume
 
+    # TODO(yamahata): eliminate dumb polling
+    def wait_creation(self, context, volume_id):
+        while True:
+            volume = self.get(context, volume_id)
+            if volume['status'] != 'creating':
+                return
+            greenthread.sleep(1)
+
     def delete(self, context, volume_id):
         volume = self.get(context, volume_id)
         if volume['status'] != "available":
 
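`wait_creation` above spins until the volume leaves the `creating` state, and the TODO flags the polling as a stopgap. The same loop with a poll cap, sketched against a plain status-lookup callable (the `get_status` parameter is an assumption of this sketch, standing in for `self.get(context, volume_id)['status']`):

```python
def wait_creation(get_status, max_polls=60):
    """Poll until the volume is no longer 'creating'; return the final status."""
    for _ in range(max_polls):
        status = get_status()
        if status != 'creating':
            return status
    raise RuntimeError('volume still creating after %d polls' % max_polls)
```

Returning the final status (rather than nothing) also lets the caller distinguish `available` from `error` without a second lookup.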
=== modified file 'nova/volume/driver.py'
--- nova/volume/driver.py 2011-05-27 05:13:17 +0000
+++ nova/volume/driver.py 2011-06-17 23:35:54 +0000
@@ -582,6 +582,14 @@
         """No setup necessary in fake mode."""
         pass
 
+    def discover_volume(self, context, volume):
+        """Discover volume on a remote host."""
+        return "/dev/disk/by-path/volume-id-%d" % volume['id']
+
+    def undiscover_volume(self, volume):
+        """Undiscover volume on a remote host."""
+        pass
+
     @staticmethod
     def fake_execute(cmd, *_args, **_kwargs):
         """Execute that simply logs the command."""
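The fake driver's `discover_volume` fabricates a deterministic device path from the volume id, which is what lets the boot-from-volume tests above run without a real iSCSI target. The format it uses, reproduced standalone:

```python
def fake_discover_volume(volume):
    """Return the deterministic device path the fake driver reports."""
    return "/dev/disk/by-path/volume-id-%d" % volume['id']
```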