Merge lp:~cbehrens/nova/lp844160-build-works-with-zones into lp:~hudson-openstack/nova/trunk

Proposed by Chris Behrens
Status: Rejected
Rejected by: Chris Behrens
Proposed branch: lp:~cbehrens/nova/lp844160-build-works-with-zones
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4592 lines (+1761/-1238)
33 files modified
doc/source/devref/distributed_scheduler.rst (+2/-0)
nova/api/ec2/cloud.py (+6/-4)
nova/api/openstack/__init__.py (+1/-2)
nova/api/openstack/contrib/createserverext.py (+1/-2)
nova/api/openstack/contrib/volumes.py (+2/-41)
nova/api/openstack/contrib/zones.py (+50/-0)
nova/api/openstack/create_instance_helper.py (+0/-602)
nova/api/openstack/servers.py (+581/-30)
nova/api/openstack/zones.py (+3/-35)
nova/compute/api.py (+111/-116)
nova/scheduler/abstract_scheduler.py (+32/-43)
nova/scheduler/api.py (+2/-2)
nova/scheduler/chance.py (+23/-2)
nova/scheduler/driver.py (+106/-9)
nova/scheduler/least_cost.py (+1/-2)
nova/scheduler/manager.py (+5/-19)
nova/scheduler/multi.py (+5/-3)
nova/scheduler/simple.py (+35/-39)
nova/scheduler/vsa.py (+13/-20)
nova/scheduler/zone.py (+23/-5)
nova/tests/api/openstack/contrib/test_createserverext.py (+8/-4)
nova/tests/api/openstack/contrib/test_volumes.py (+12/-2)
nova/tests/api/openstack/test_extensions.py (+1/-0)
nova/tests/api/openstack/test_server_actions.py (+2/-2)
nova/tests/api/openstack/test_servers.py (+158/-45)
nova/tests/integrated/api/client.py (+16/-3)
nova/tests/integrated/test_servers.py (+36/-0)
nova/tests/scheduler/test_abstract_scheduler.py (+58/-17)
nova/tests/scheduler/test_least_cost_scheduler.py (+1/-1)
nova/tests/scheduler/test_scheduler.py (+320/-167)
nova/tests/scheduler/test_vsa_scheduler.py (+14/-16)
nova/tests/test_compute.py (+116/-5)
nova/tests/test_quota.py (+17/-0)
To merge this branch: bzr merge lp:~cbehrens/nova/lp844160-build-works-with-zones
Reviewer Review Type Date Requested Status
Sandy Walsh (community) Needs Fixing
Chris Behrens (community) Abstain
Brian Waldon (community) Needs Information
Review via email: mp+75990@code.launchpad.net

Description of the change

This makes the OS API servers controller 'create' work with all schedulers, including the zone aware schedulers (BaseScheduler and subclasses).

Since this means that the zones controller 'boot' method is not needed anymore, it has been removed and create_instance_helper has been folded back into the servers controller.

The distributed scheduler doc needs to be updated. I only updated it enough to say that some information is stale. If this merges, I'll file a bug to have it updated. I'm not the best person to update/create pretty pictures.

Other side effects of making this work:
1) compute API's create_all_at_once has been removed. It was only used by zone boot.
2) compute API's create() no longer creates Instance DB entries. The schedulers now do this. This makes sense, as only the schedulers will know where the instances will be placed. They could be placed locally or in a child zone. However, this comes at a cost. compute_api.create() now does a 'call' to the scheduler instead of a 'cast' in most cases (* see below). This is so it can receive the instance ID(s) that were created back from the scheduler. Ultimately, we probably need to figure out a way to generate UUIDs before scheduling and return only the information we know about an instance before it is actually scheduled and created. We could then revert this back to a cast. (Or maybe we always return a reservation ID instead of an instance.)
3) There's been an undocumented feature in the OS API to allow multiple instances to be built. I've kept it.
4) If compute_api.create() is creating multiple instances, only a single call is made to the scheduler, vs the old way of sending many casts. All schedulers now check how many instances have been requested.
5) I've added an undocumented option 'return_reservation_id' when building. If set to True, only a reservation ID is returned to the API caller, not the instance. This essentially gives you the old 'nova zone-boot' functionality.
6) It was requested I create a stub for a zones extension, so you'll see the empty extension in here. We'll move some code to it later.
7) Fixes an unrelated bug that recently merged into trunk where zones DB calls were no longer always being done with admin context.

* Case #5 above doesn't wait for the scheduler response with instance IDs. It does a 'cast' instead.
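The call-versus-cast distinction in points 2 and 5 can be sketched in miniature. This is an illustrative toy, not Nova's actual rpc module (`create_instances` and `cast` are hypothetical names); it shows why returning only a reservation ID lets the API use a fire-and-forget cast instead of blocking on a call:

```python
import uuid

def create_instances(num_instances, cast):
    """Sketch: generate a reservation ID up front so the API can respond
    immediately while the scheduler receives a one-way cast."""
    reservation_id = "r-%s" % uuid.uuid4().hex[:8]
    request_spec = {
        "reservation_id": reservation_id,
        "num_instances": num_instances,
    }
    # cast() is fire-and-forget: no instance data comes back, so the
    # caller only ever sees the reservation ID it generated itself.
    cast("scheduler", "run_instance", request_spec)
    return reservation_id

# Stand-in transport that just records the message instead of using rabbit.
sent = []
rid = create_instances(2, lambda topic, method, spec: sent.append((topic, method, spec)))
```

With a real `call`, by contrast, the API would block until the scheduler returned the created instance IDs.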

Revision history for this message
Chris Behrens (cbehrens) wrote :

You'll see some code removed from the volumes extension. This is because it subclasses the servers controller to override create() to allow 'block_device_mapping' to be specified with a build. Since it was mostly code duplication (and used create_instance_helper), I just created a method in the servers controller that can be overridden in the volumes extension to retrieve the block_device_mapping.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

I absolutely love the removal of create_instance_helper! Hit me up on IRC if you want to talk about any of this.

172: Can you expand on this? A note about each of the calls would be nice.

1198: Can the precooked stuff go away now? Just not sure how that fits in

1294: I don't think this is used anywhere. Can you remove it?

review: Needs Information
Revision history for this message
Chris Behrens (cbehrens) wrote :

172: Expanded
1294: Good catch... it's removed now.

As far as 1198: It can't really go away until we change how we talk to child zones. :-/ Sandy, pvo, and myself have been talking about some things related to zones that could make it go away.

Now.. I think a branch of Sandy's landed, and I probably have conflicts to resolve.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Resolved conflicts with trunk. Time to open this up for review.

Revision history for this message
Chris Behrens (cbehrens) :
review: Abstain
Revision history for this message
Chris Behrens (cbehrens) wrote :

merged trunk.

Revision history for this message
Rick Harris (rconradharris) wrote :

This looks really good, love the refactoring and cleanups. Making a quick
first-pass with some notes; will plan on a digging in for a more thorough
review tomorrow.

> 2160 + # TODO(comstud): I would love to be able to return the full
> 2161 + # instance information here, but unfortunately passing things
> 2162 + # like 'datetime' back through rabbit don't work due to having
> 2163 + # to json encode/decode.

Not really necessary for this patch, but for future work, we can set a
default handler so that the `json` module will encode datetimes in
iso8601 format[1].

[1]http://stackoverflow.com/questions/455580/json-datetime-between-python-and-javascript
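The default-handler approach Rick references works like this (a small sketch, not code from the patch):

```python
import datetime
import json

def to_iso8601(obj):
    # json.dumps calls this fallback for any type it can't serialize
    # natively, such as datetime; everything else still raises.
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError("%r is not JSON serializable" % obj)

payload = {"created_at": datetime.datetime(2011, 9, 23, 7, 8, 19)}
encoded = json.dumps(payload, default=to_iso8601)
# encoded == '{"created_at": "2011-09-23T07:08:19"}'
```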

> 2294 + zone, _x, host = availability_zone.partition(':')

Could use split here and avoid the throwaway variable:

    zone, host = availability_zone.split(':')
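One behavioral difference worth noting before swapping them: `partition` always returns three parts even when the separator is absent, while tuple-unpacking the result of `split` raises in that case, so the two are only equivalent when a ':' is guaranteed to be present:

```python
# partition always yields three parts, even with no separator present:
zone, _x, host = "nova:compute1".partition(':')
assert (zone, host) == ('nova', 'compute1')

zone, _x, host = "nova".partition(':')
assert (zone, host) == ('nova', '')   # host silently becomes ''

# split only yields two parts when the separator actually appears:
unpack_failed = False
try:
    zone, host = "nova".split(':')
except ValueError:
    unpack_failed = True
assert unpack_failed
```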

> 859 + expl = _("Personality file limit exceeded")
> 860 + raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
> 861 + headers={'Retry-After': 0})

`expl` is defined but never used. This patch just moved these lines around,
highlighting the issue. If s/error.message/expl/ is the solution, we could
just make the change in this patch, but if we need to consult the original
author we could make a separate bug for it.

On that note, the code could be DRY'd up as well:

    code = error.code
    if code == "OnsetFileLimitExceeded":
        expl = _("Personality file limit exceeded")
    elif code == "OnsetFilePathLimitExceeded":
        expl = _("Personality file path too long")
    elif code == "OnsetFileContentLimitExceeded":
        expl = _("Personality file content too long")
    elif code == "InstanceLimitExceeded":
        expl = _("Instance quotas have been exceeded")
    else:
        expl = None

    if expl:
        raise exc.HTTPRequestEntityTooLarge(
            explanation=expl, headers={'Retry-After': 0})
    else:
        # if the original error is okay, just reraise it
        raise error

> 2167 + if local is True:

Per PEP8, preferred is:

    if local:

> 2388 + instances = instances.append(self.encode_instance(instance))

Looks like this should be:

    instances.append(self.encode_instance(instance))

Or maybe even:

    encoded_instance = self.encode_instance(instance)
    instances.append(encoded_instance)

Revision history for this message
Chris Behrens (cbehrens) wrote :

> This looks really good, love the refactoring and cleanups. Making a quick
> first-pass with some notes; will plan on a digging in for a more thorough
> review tomorrow.

Great, thanks!

>
>
> > 2160 + # TODO(comstud): I would love to be able to return the full
> > 2161 + # instance information here, but unfortunately passing things
> > 2162 + # like 'datetime' back through rabbit don't work due to having
> > 2163 + # to json encode/decode.
>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>
> [1] http://stackoverflow.com/questions/455580/json-datetime-between-python-and-javascript

Great...good to know. I hadn't bothered looking for a solution yet... really there's all sorts of cases where we should be doing this and it's a discussion blamar proposed for the summit.

>
> > 2294 + zone, _x, host = availability_zone.partition(':')
>
> Could use split here and avoid the throwaway variable:
>
> zone, host = availability_zone.split(':')
>
> > 859 + expl = _("Personality file limit exceeded")
> > 860 + raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
> > 861 + headers={'Retry-After': 0})
>
> `expl` is defined but never used. This patch just moved these lines around,
> highlighting the issue. If s/error.message/expl/ is the solution, we could
> just make the change in this patch, but if we need to consult the original
> author we could make a separate bug for it.

Yeah, the other one above was just moving lines around also. I'll take a look at these, though!

[...]
> > 2167 + if local is True:
>
> Per PEP8, preferred is:
>
> if local:

I'll fix that. I think that's a habit from doing "if blah is None" to distinguish keyword arguments with a None default from an empty list or dict being passed.. if you know what I mean.
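The `is None` habit Chris mentions exists for a good reason: a mutable default is shared across calls, so the usual idiom distinguishes "caller passed nothing" from "caller passed an empty list":

```python
def add_item(item, items=None):
    # items=None means the caller passed nothing; an explicitly-passed
    # empty list is kept and mutated, while None gets a fresh list.
    if items is None:
        items = []
    items.append(item)
    return items

mine = []
add_item("a", mine)    # caller's (empty) list is used and mutated
fresh = add_item("b")  # no argument: a brand-new list each call
```

For a plain truthiness check like `if local:` the distinction doesn't matter, which is why PEP 8 prefers the shorter form there.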

> > 2388 + instances = instances.append(self.encode_instance(instance))
>
> Looks like this should be:
>
> instances.append(self.encode_instance(instance))

Shoot, yes. I copy-pasted that broken line around a number of times and thought I went back and fixed all instances of it. Good catch.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Rick: I updated that comment regarding the datetime encoding. I think I hit all of your other issues so far. I've also added a comment in compute_api.create() that we should be using rpc.multicall vs rpc.call due to the amount of data the scheduler could return if we return full instance dictionaries.

It does appear the QuotaError stuff should have raised with those unused variables. I've gone ahead and made use of them, and cleaned it all up into a mapping table. I think that's cleaner than all of the 'if' stuff. In researching this, I find the QuotaError exception stuff could really use a re-factor (its class and the raises done in compute/api). I'll probably file a bug to clean it up after this merges.
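The mapping-table cleanup Chris describes might look something like this (a sketch with illustrative names, not the actual patch; the real codes come from nova's QuotaError exceptions):

```python
# Hypothetical table mapping quota error codes to user-facing messages.
_QUOTA_EXPLANATIONS = {
    "OnsetFileLimitExceeded": "Personality file limit exceeded",
    "OnsetFilePathLimitExceeded": "Personality file path too long",
    "OnsetFileContentLimitExceeded": "Personality file content too long",
    "InstanceLimitExceeded": "Instance quotas have been exceeded",
}

def explain_quota_error(code):
    """Return the explanation for a known code, or None to re-raise as-is."""
    return _QUOTA_EXPLANATIONS.get(code)
```

A dict lookup replaces the if/elif chain, and adding a new quota error becomes a one-line table change.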

1593. By Chris Behrens

typo

Revision history for this message
Chris Behrens (cbehrens) wrote :

Ok ready. Note: if running tests, just running api/openstack/test_servers.py by itself will fail due to an issue in trunk. Running the whole test suite will work.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

On Sep 21, 2011, at 9:03 PM, Rick Harris wrote:

>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>

I wrote some code long ago in my branches to get rid of using the db to pass information back and forth that handled this. It even managed the conversion back on updates:

basically it is just adding a datetime check to utils.to_primitive and converting using utils.strtime(). We can use sqlalchemy to parse back into the required formats on update with something like:

=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-05-24 19:21:02 +0000
+++ nova/db/sqlalchemy/models.py 2011-05-26 21:41:26 +0000
@@ -27,12 +27,14 @@
 from sqlalchemy.exc import IntegrityError
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import ForeignKeyConstraint
+from sqlalchemy.types import DateTime as DTType

 from nova.db.sqlalchemy.session import get_session

 from nova import auth
 from nova import exception
 from nova import flags
+from nova import utils

 FLAGS = flags.FLAGS
@@ -90,11 +92,15 @@
         return n, getattr(self, n)
 
     def update(self, values):
-        """Make the model object behave like a dict"""
-        columns = dict(object_mapper(self).columns).keys()
+        """Make the model object behave like a dict and convert datetimes."""
+        columns = object_mapper(self).columns
         for key, value in values.iteritems():
             # NOTE(vish): don't update the 'name' property
-            if key in columns:
+            if key != 'name' or key in columns:
+                if (key in columns and
+                    isinstance(value, basestring) and
+                    isinstance(columns[key].type, DTType)):
+                    value = utils.parse_strtime(value)
                 setattr(self, key, value)
 
     def iteritems(self):

I was able to pass entire refs through the queue using this and update them on the other end.
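The `utils.to_primitive` side of this (converting datetimes to strings before a ref hits the queue) could be sketched roughly as follows; `strtime`/`parse_strtime` here are self-contained stand-ins for nova's utility functions, not their exact implementations:

```python
import datetime

TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"

def strtime(at):
    return at.strftime(TIME_FORMAT)

def parse_strtime(timestr):
    return datetime.datetime.strptime(timestr, TIME_FORMAT)

def to_primitive(value):
    """Recursively convert a value into JSON-safe primitives."""
    if isinstance(value, datetime.datetime):
        return strtime(value)
    if isinstance(value, dict):
        return dict((k, to_primitive(v)) for k, v in value.items())
    if isinstance(value, (list, tuple)):
        return [to_primitive(v) for v in value]
    return value

ref = {"created_at": datetime.datetime(2011, 5, 26, 21, 41, 26)}
primitive = to_primitive(ref)  # now safe to json-encode and send over rabbit
restored = parse_strtime(primitive["created_at"])
```

The models.py hunk above then undoes the conversion on `update()`, so a ref round-trips through the queue intact.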

Vish

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Still going through it, but getting a test failure: http://paste.openstack.org/show/2522/

Stay tuned ...

review: Needs Fixing
Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

First off, I love the fact that you're keeping the unit tests as unit tests (and not integration tests) ... makes the review so much easier to follow.

I guess we really need to update the docs shortly after this lands.

Regarding the precooked stuff, I wonder if we could just assume all results are raw and strip out any potentially offending data regardless? Just be a little more forgiving if they don't exist.

825 +from nova.rpc.common import RemoteError

import module not class

1697 + instances = self._schedule_run_instance(
I thought that method returned a tuple? Is this correct?

1950 + instance = self.create_instance_db_entry(context, request_spec)

create_instance_db_entry is a static method. Should either be classname qualifier or remove the @staticmethod decorator, not self.

2032 # Return instance as a list and make sure the caller doesn't
2033 + # cast to a compute node.

 ... not sure what the comment is trying to tell me.

2053 - # Returning None short-circuits the routing to Compute (since

... is this comment not appropriate anymore? I think some explanation of the None return is required (somewhere).

2215 + # Should only be None for tests?
2216 + if filename is not None:
Then this logic should be broken out into a separate function and stubbed in the test. Test case code shouldn't be in production code.

2256 + if isinstance(ret_val, tuple):
ah ha ... there it is. Can we unify these results to always be a tuple? Or I think we'd need a test for each condition (unless I missed something there)?

All in all ... great changes Chris! Nice to see that zone-boot and inheritance mess go away!

review: Needs Fixing
Revision history for this message
Chris Behrens (cbehrens) wrote :

Vish:

[...]
> I wrote some code long ago in my branches to get rid of using the db to pass
> information back and forth that handled this. It even managed the conversion
> back on updates:
[...]
> I was able to pass entire refs through the queue using this and update them on
> the other end.

Very cool. I'll probably look at incorporating this as a next step. This diff is already large enough due to moving code around. There are a lot more areas where we should be doing this, and it's a point of discussion that blamar suggested for the summit, also.

Revision history for this message
Chris Behrens (cbehrens) wrote :

> Still going through it, but getting a test failure:
> http://paste.openstack.org/show/2522/
>
> Stay tuned ...

So, I run into that now and then as well.. and it actually appears to be a kombu memory transport bug. Generally if you run into it, a 2nd run of the tests will pass. I think we're going to need to go back to using our own 'fakerabbit' type backend for kombu... or just aggressively try to get these fixed in kombu.

Revision history for this message
Chris Behrens (cbehrens) wrote :
Download full text (3.4 KiB)

> First off, I love the fact that you're keeping the unit tests as unit tests
> (and not integration tests) ... makes the review so much easier to follow.

Yup.. something I think about when coding tests, although there are a lot of cases where unit tests are currently more like integration tests.

>
> I guess we really need to update the docs shortly after this lands.

Yeah. I'd like to update it more myself, but I'd prefer to spend time on it after we get this merged.. Since we're very early in essex, I think this is okay. We can file a bug after this merges.

>
> Regarding the precooked stuff, I wonder if we could just assume all results
> are raw and strip out any potentially offending data regardless? Just be a
> little more forgiving if they don't exist.
>
> 825 +from nova.rpc.common import RemoteError
>
> import module not class

Copy/paste thing, but I agree. I'll update it.

>
> 1697 + instances = self._schedule_run_instance(
> I thought that method returned a tuple? Is this correct?

I think you caught this below. The schedulers' methods do return tuples so that the manager can get a 'response to return' and a 'host to schedule on'. But the manager does really only return the 'response' portion. I'll update/add comments in the schedulers/manager.

>
>
> 1950 + instance = self.create_instance_db_entry(context,
> request_spec)
>
> create_instance_db_entry is a static method. Should either be classname
> qualifier or remove the @staticmethod decorator, not self.

I have the same thing for 'encode_instance', etc. I put them as static methods because they don't use 'self'... but it's a bit more clean in the code to be able to call them by self.*. Is that a huge no-no? If so, I think I lean towards removing the decorator even though they don't use any instance data.
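For reference, calling a `@staticmethod` through `self` is perfectly legal Python; the decorator only means no instance or class argument is passed implicitly (names below are illustrative):

```python
class Scheduler(object):
    @staticmethod
    def encode_instance(instance):
        # No implicit self/cls: the method cannot touch instance state.
        return {"id": instance["id"]}

    def schedule(self, instance):
        # Looking the method up via self resolves to the same function
        # as Scheduler.encode_instance.
        return self.encode_instance(instance)

s = Scheduler()
```

So the choice is purely stylistic: `self.encode_instance(...)`, `Scheduler.encode_instance(...)`, and dropping the decorator all behave the same for callers.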

>
> 2032 # Return instance as a list and make sure the caller doesn't
> 2033 + # cast to a compute node.
>
> ... not sure what the comment is trying to tell me.

Goes along with the tuple comment above. I'll update the comment as mentioned above.

>
> 2053 - # Returning None short-circuits the routing to Compute (since
>
> ... is this comment not appropriate anymore? I think some explanation of the
> None return is required (somewhere).

That comment attempts to explain why None is required, but I guess it's not descriptive enough. :) It also goes along with the other comments above I'll update.

>
> 2215 + # Should only be None for tests?
> 2216 + if filename is not None:
> Then this logic should be broken out into a separate function and stubbed in
> the test. Test case code shouldn't be in production code.

I'll do more investigation on this. I ran into a test failure where 'filename' was not defined.

>
> 2256 + if isinstance(ret_val, tuple):
> ah ha ... there it is. Can we unify these results to always be a tuple? Or I
> think we'd need a test for each condition (unless I missed something there)?

I could update all scheduler methods to return a tuple, yes, and I thought about doing this, although it's only run_instance that needs to return a response. For...

Read more...

1594. By Chris Behrens

Clean up the return values from all schedule* calls, making all schedule* calls do their own casts.
Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.

1595. By Chris Behrens

test fixes plus bugs/typos they uncovered. still needs more test fixes

1596. By Chris Behrens

fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context

1597. By Chris Behrens

fix pep8 issue

1598. By Chris Behrens

chance scheduler bug uncovered with tests

1599. By Chris Behrens

vsa scheduler test fixes

1600. By Chris Behrens

more test fixes

Revision history for this message
Chris Behrens (cbehrens) wrote :

Moving to git.

Unmerged revisions

1600. By Chris Behrens

more test fixes

1599. By Chris Behrens

vsa scheduler test fixes

1598. By Chris Behrens

chance scheduler bug uncovered with tests

1597. By Chris Behrens

fix pep8 issue

1596. By Chris Behrens

fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context

1595. By Chris Behrens

test fixes plus bugs/typos they uncovered. still needs more test fixes

1594. By Chris Behrens

Clean up the return values from all schedule* calls, making all schedule* calls do their own casts.
Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.

1593. By Chris Behrens

typo

1592. By Chris Behrens

broken indent

1591. By Chris Behrens

revert the kludge for reclaim_instance_interval since tests pass when all of them are run. I don't want to have a conflict with a fix from johannes

Preview Diff

=== modified file 'doc/source/devref/distributed_scheduler.rst'
--- doc/source/devref/distributed_scheduler.rst 2011-08-18 19:39:25 +0000
+++ doc/source/devref/distributed_scheduler.rst 2011-09-23 07:08:19 +0000
@@ -77,6 +77,8 @@
 
 Requesting a new instance
 -------------------------
+(Note: The information below is out of date, as the `nova.compute.api.create_all_at_once()` functionality has merged into `nova.compute.api.create()` and the non-zone aware schedulers have been updated.)
+
 Prior to the `BaseScheduler`, to request a new instance, a call was made to `nova.compute.api.create()`. The type of instance created depended on the value of the `InstanceType` record being passed in. The `InstanceType` determined the amount of disk, CPU, RAM and network required for the instance. Administrators can add new `InstanceType` records to suit their needs. For more complicated instance requests we need to go beyond the default fields in the `InstanceType` table.
 
 `nova.compute.api.create()` performed the following actions:
 
=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py 2011-09-21 15:54:30 +0000
+++ nova/api/ec2/cloud.py 2011-09-23 07:08:19 +0000
@@ -1384,7 +1384,7 @@
         if image_state != 'available':
             raise exception.ApiError(_('Image must be available'))
 
-        instances = self.compute_api.create(context,
+        (instances, resv_id) = self.compute_api.create(context,
             instance_type=instance_types.get_instance_type_by_name(
                 kwargs.get('instance_type', None)),
             image_href=self._get_image(context, kwargs['image_id'])['id'],
@@ -1399,9 +1399,11 @@
             security_group=kwargs.get('security_group'),
             availability_zone=kwargs.get('placement', {}).get(
                 'AvailabilityZone'),
-            block_device_mapping=kwargs.get('block_device_mapping', {}))
-        return self._format_run_instances(context,
-            reservation_id=instances[0]['reservation_id'])
+            block_device_mapping=kwargs.get('block_device_mapping', {}),
+            # NOTE(comstud): Unfortunately, EC2 requires that the
+            # instance DB entries have been created..
+            wait_for_instances=True)
+        return self._format_run_instances(context, resv_id)
 
     def _do_instance(self, action, context, ec2_id):
         instance_id = ec2utils.ec2_id_to_id(ec2_id)
 
=== modified file 'nova/api/openstack/__init__.py'
--- nova/api/openstack/__init__.py 2011-08-15 13:35:44 +0000
+++ nova/api/openstack/__init__.py 2011-09-23 07:08:19 +0000
@@ -139,8 +139,7 @@
                     controller=zones.create_resource(version),
                     collection={'detail': 'GET',
                                 'info': 'GET',
-                                'select': 'POST',
-                                'boot': 'POST'})
+                                'select': 'POST'})
 
     mapper.connect("versions", "/",
                    controller=versions.create_resource(version),
 
=== modified file 'nova/api/openstack/contrib/createserverext.py'
--- nova/api/openstack/contrib/createserverext.py 2011-09-02 18:00:33 +0000
+++ nova/api/openstack/contrib/createserverext.py 2011-09-23 07:08:19 +0000
@@ -15,7 +15,6 @@
 # under the License
 
 from nova import utils
-from nova.api.openstack import create_instance_helper as helper
 from nova.api.openstack import extensions
 from nova.api.openstack import servers
 from nova.api.openstack import wsgi
@@ -66,7 +65,7 @@
         }
 
         body_deserializers = {
-            'application/xml': helper.ServerXMLDeserializerV11(),
+            'application/xml': servers.ServerXMLDeserializerV11(),
         }
 
         serializer = wsgi.ResponseSerializer(body_serializers,
 
=== modified file 'nova/api/openstack/contrib/volumes.py'
--- nova/api/openstack/contrib/volumes.py 2011-09-14 19:33:51 +0000
+++ nova/api/openstack/contrib/volumes.py 2011-09-23 07:08:19 +0000
@@ -334,47 +334,8 @@
 class BootFromVolumeController(servers.ControllerV11):
     """The boot from volume API controller for the Openstack API."""
 
-    def _create_instance(self, context, instance_type, image_href, **kwargs):
-        try:
-            return self.compute_api.create(context, instance_type,
-                                           image_href, **kwargs)
-        except quota.QuotaError as error:
-            self.helper._handle_quota_error(error)
-        except exception.ImageNotFound as error:
-            msg = _("Can not find requested image")
-            raise faults.Fault(exc.HTTPBadRequest(explanation=msg))
-
-    def create(self, req, body):
-        """ Creates a new server for a given user """
-        extra_values = None
-        try:
-
-            def get_kwargs(context, instance_type, image_href, **kwargs):
-                kwargs['context'] = context
-                kwargs['instance_type'] = instance_type
-                kwargs['image_href'] = image_href
-                return kwargs
-
-            extra_values, kwargs = self.helper.create_instance(req, body,
-                                                               get_kwargs)
-
-            block_device_mapping = body['server'].get('block_device_mapping')
-            kwargs['block_device_mapping'] = block_device_mapping
-
-            instances = self._create_instance(**kwargs)
-        except faults.Fault, f:
-            return f
-
-        # We can only return 1 instance via the API, if we happen to
-        # build more than one... instances is a list, so we'll just
-        # use the first one..
-        inst = instances[0]
-        for key in ['instance_type', 'image_ref']:
-            inst[key] = extra_values[key]
-
-        server = self._build_view(req, inst, is_detail=True)
-        server['server']['adminPass'] = extra_values['password']
-        return server
+    def _get_block_device_mapping(self, data):
+        return data.get('block_device_mapping')
 
 
 class Volumes(extensions.ExtensionDescriptor):
=== added file 'nova/api/openstack/contrib/zones.py'
--- nova/api/openstack/contrib/zones.py 1970-01-01 00:00:00 +0000
+++ nova/api/openstack/contrib/zones.py 2011-09-23 07:08:19 +0000
@@ -0,0 +1,50 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""The zones extension."""
+
+
+from nova import flags
+from nova import log as logging
+from nova.api.openstack import extensions
+
+
+LOG = logging.getLogger("nova.api.zones")
+FLAGS = flags.FLAGS
+
+
+class Zones(extensions.ExtensionDescriptor):
+    def get_name(self):
+        return "Zones"
+
+    def get_alias(self):
+        return "os-zones"
+
+    def get_description(self):
+        return """Enables zones-related functionality such as adding
+child zones, listing child zones, getting the capabilities of the
+local zone, and returning build plans to parent zones' schedulers"""
+
+    def get_namespace(self):
+        return "http://docs.openstack.org/ext/zones/api/v1.1"
+
+    def get_updated(self):
+        return "2011-09-21T00:00:00+00:00"
+
+    def get_resources(self):
+        # Nothing yet.
+        return []
=== removed file 'nova/api/openstack/create_instance_helper.py'
--- nova/api/openstack/create_instance_helper.py 2011-09-15 14:07:58 +0000
+++ nova/api/openstack/create_instance_helper.py 1970-01-01 00:00:00 +0000
@@ -1,602 +0,0 @@
1# Copyright 2011 OpenStack LLC.
2# Copyright 2011 Piston Cloud Computing, Inc.
3# All Rights Reserved.
4#
5# Licensed under the Apache License, Version 2.0 (the "License"); you may
6# not use this file except in compliance with the License. You may obtain
7# a copy of the License at
8#
9# http://www.apache.org/licenses/LICENSE-2.0
10#
11# Unless required by applicable law or agreed to in writing, software
12# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
13# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
14# License for the specific language governing permissions and limitations
15# under the License.
16
17import base64
18
19from webob import exc
20from xml.dom import minidom
21
22from nova import exception
23from nova import flags
24from nova import log as logging
25import nova.image
26from nova import quota
27from nova import utils
28
29from nova.compute import instance_types
30from nova.api.openstack import common
31from nova.api.openstack import wsgi
32from nova.rpc.common import RemoteError
33
34LOG = logging.getLogger('nova.api.openstack.create_instance_helper')
35FLAGS = flags.FLAGS
36
37
38class CreateFault(exception.NovaException):
39 message = _("Invalid parameters given to create_instance.")
40
41 def __init__(self, fault):
42 self.fault = fault
43 super(CreateFault, self).__init__()
44
45
46class CreateInstanceHelper(object):
47 """This is the base class for OS API Controllers that
48 are capable of creating instances (currently Servers and Zones).
49
50 Once we stabilize the Zones portion of the API we may be able
51 to move this code back into servers.py
52 """
53
54 def __init__(self, controller):
55 """We need the image service to create an instance."""
56 self.controller = controller
57 self._image_service = utils.import_object(FLAGS.image_service)
58 super(CreateInstanceHelper, self).__init__()
59
60 def create_instance(self, req, body, create_method):
61 """Creates a new server for the given user. The approach
62 used depends on the create_method. For example, the standard
63 POST /server call uses compute.api.create(), while
64 POST /zones/server uses compute.api.create_all_at_once().
65
66 The problem is, both approaches return different values (i.e.
67 [instance dicts] vs. reservation_id). So the handling of the
68 return type from this method is left to the caller.
69 """
70 if not body:
71 raise exc.HTTPUnprocessableEntity()
72
73 if not 'server' in body:
74 raise exc.HTTPUnprocessableEntity()
75
76 context = req.environ['nova.context']
77 server_dict = body['server']
78 password = self.controller._get_server_admin_password(server_dict)
79
80 if not 'name' in server_dict:
81 msg = _("Server name is not defined")
82 raise exc.HTTPBadRequest(explanation=msg)
83
84 name = server_dict['name']
85 self._validate_server_name(name)
86 name = name.strip()
87
88 image_href = self.controller._image_ref_from_req_data(body)
89 # If the image href was generated by nova api, strip image_href
90 # down to an id and use the default glance connection params
91
92 if str(image_href).startswith(req.application_url):
93 image_href = image_href.split('/').pop()
94 try:
95 image_service, image_id = nova.image.get_image_service(context,
96 image_href)
97 kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
98 req, image_service, image_id)
99 images = set([str(x['id']) for x in image_service.index(context)])
100 assert str(image_id) in images
101 except Exception, e:
102 msg = _("Cannot find requested image %(image_href)s: %(e)s" %
103 locals())
104 raise exc.HTTPBadRequest(explanation=msg)
105
106 personality = server_dict.get('personality')
107 config_drive = server_dict.get('config_drive')
108
109 injected_files = []
110 if personality:
111 injected_files = self._get_injected_files(personality)
112
113 sg_names = []
114 security_groups = server_dict.get('security_groups')
115 if security_groups is not None:
116 sg_names = [sg['name'] for sg in security_groups if sg.get('name')]
117 if not sg_names:
118 sg_names.append('default')
119
120 sg_names = list(set(sg_names))
121
122 requested_networks = server_dict.get('networks')
123 if requested_networks is not None:
124 requested_networks = self._get_requested_networks(
125 requested_networks)
126
127 try:
128 flavor_id = self.controller._flavor_id_from_req_data(body)
129 except ValueError as error:
130 msg = _("Invalid flavorRef provided.")
131 raise exc.HTTPBadRequest(explanation=msg)
132
133 zone_blob = server_dict.get('blob')
134
135 # optional openstack extensions:
136 key_name = server_dict.get('key_name')
137 user_data = server_dict.get('user_data')
138 self._validate_user_data(user_data)
139
140 availability_zone = server_dict.get('availability_zone')
141 name = server_dict['name']
142 self._validate_server_name(name)
143 name = name.strip()
144
145 reservation_id = server_dict.get('reservation_id')
146 min_count = server_dict.get('min_count')
147 max_count = server_dict.get('max_count')
148 # min_count and max_count are optional. If they exist, they come
149 # in as strings. We want to default 'min_count' to 1, and default
150 # 'max_count' to be 'min_count'.
151 min_count = int(min_count) if min_count else 1
152 max_count = int(max_count) if max_count else min_count
153 if min_count > max_count:
154 min_count = max_count
155
156 try:
157 inst_type = \
158 instance_types.get_instance_type_by_flavor_id(flavor_id)
159 extra_values = {
160 'instance_type': inst_type,
161 'image_ref': image_href,
162 'config_drive': config_drive,
163 'password': password}
164
165 return (extra_values,
166 create_method(context,
167 inst_type,
168 image_id,
169 kernel_id=kernel_id,
170 ramdisk_id=ramdisk_id,
171 display_name=name,
172 display_description=name,
173 key_name=key_name,
174 metadata=server_dict.get('metadata', {}),
175 access_ip_v4=server_dict.get('accessIPv4'),
176 access_ip_v6=server_dict.get('accessIPv6'),
177 injected_files=injected_files,
178 admin_password=password,
179 zone_blob=zone_blob,
180 reservation_id=reservation_id,
181 min_count=min_count,
182 max_count=max_count,
183 requested_networks=requested_networks,
184 security_group=sg_names,
185 user_data=user_data,
186 availability_zone=availability_zone,
187 config_drive=config_drive,))
188 except quota.QuotaError as error:
189 self._handle_quota_error(error)
190 except exception.ImageNotFound as error:
191 msg = _("Can not find requested image")
192 raise exc.HTTPBadRequest(explanation=msg)
193 except exception.FlavorNotFound as error:
194 msg = _("Invalid flavorRef provided.")
195 raise exc.HTTPBadRequest(explanation=msg)
196 except exception.KeypairNotFound as error:
197 msg = _("Invalid key_name provided.")
198 raise exc.HTTPBadRequest(explanation=msg)
199 except exception.SecurityGroupNotFound as error:
200 raise exc.HTTPBadRequest(explanation=unicode(error))
201 except RemoteError as err:
202 msg = "%(err_type)s: %(err_msg)s" % \
203 {'err_type': err.exc_type, 'err_msg': err.value}
204 raise exc.HTTPBadRequest(explanation=msg)
205 # Let the caller deal with unhandled exceptions.
206
207 def _handle_quota_error(self, error):
208 """
209 Reraise quota errors as api-specific http exceptions
210 """
211 if error.code == "OnsetFileLimitExceeded":
212 expl = _("Personality file limit exceeded")
213 raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
214 headers={'Retry-After': 0})
215 if error.code == "OnsetFilePathLimitExceeded":
216 expl = _("Personality file path too long")
217 raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
218 headers={'Retry-After': 0})
219 if error.code == "OnsetFileContentLimitExceeded":
220 expl = _("Personality file content too long")
221 raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
222 headers={'Retry-After': 0})
223 if error.code == "InstanceLimitExceeded":
224 expl = _("Instance quotas have been exceeded")
225 raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
226 headers={'Retry-After': 0})
227 # if the original error is okay, just reraise it
228 raise error
229
230 def _deserialize_create(self, request):
231 """
232 Deserialize a create request
233
234 Overrides normal behavior in the case of xml content
235 """
236 if request.content_type == "application/xml":
237 deserializer = ServerXMLDeserializer()
238 return deserializer.deserialize(request.body)
239 else:
240 return self._deserialize(request.body, request.get_content_type())
241
242 def _validate_server_name(self, value):
243 if not isinstance(value, basestring):
244 msg = _("Server name is not a string or unicode")
245 raise exc.HTTPBadRequest(explanation=msg)
246
247 if value.strip() == '':
248 msg = _("Server name is an empty string")
249 raise exc.HTTPBadRequest(explanation=msg)
250
251 def _get_kernel_ramdisk_from_image(self, req, image_service, image_id):
252 """Fetch an image from the ImageService, then if present, return the
253 associated kernel and ramdisk image IDs.
254 """
255 context = req.environ['nova.context']
256 image_meta = image_service.show(context, image_id)
257 # NOTE(sirp): extracted to a separate method to aid unit-testing, the
258 # new method doesn't need a request obj or an ImageService stub
259 kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image(
260 image_meta)
261 return kernel_id, ramdisk_id
262
263 @staticmethod
264 def _do_get_kernel_ramdisk_from_image(image_meta):
265 """Given an ImageService image_meta, return kernel and ramdisk image
266 ids if present.
267
268 This is only valid for `ami` style images.
269 """
270 image_id = image_meta['id']
271 if image_meta['status'] != 'active':
272 raise exception.ImageUnacceptable(image_id=image_id,
273 reason=_("status is not active"))
274
275 if image_meta.get('container_format') != 'ami':
276 return None, None
277
278 try:
279 kernel_id = image_meta['properties']['kernel_id']
280 except KeyError:
281 raise exception.KernelNotFoundForImage(image_id=image_id)
282
283 try:
284 ramdisk_id = image_meta['properties']['ramdisk_id']
285 except KeyError:
286 ramdisk_id = None
287
288 return kernel_id, ramdisk_id
289
290 def _get_injected_files(self, personality):
291 """
292 Create a list of injected files from the personality attribute
293
294 At this time, injected_files must be formatted as a list of
295 (file_path, file_content) pairs for compatibility with the
296 underlying compute service.
297 """
298 injected_files = []
299
300 for item in personality:
301 try:
302 path = item['path']
303 contents = item['contents']
304 except KeyError as key:
305 expl = _('Bad personality format: missing %s') % key
306 raise exc.HTTPBadRequest(explanation=expl)
307 except TypeError:
308 expl = _('Bad personality format')
309 raise exc.HTTPBadRequest(explanation=expl)
310 try:
311 contents = base64.b64decode(contents)
312 except TypeError:
313 expl = _('Personality content for %s cannot be decoded') % path
314 raise exc.HTTPBadRequest(explanation=expl)
315 injected_files.append((path, contents))
316 return injected_files
317
318 def _get_server_admin_password_old_style(self, server):
319 """ Determine the admin password for a server on creation """
320 return utils.generate_password(FLAGS.password_length)
321
322 def _get_server_admin_password_new_style(self, server):
323 """ Determine the admin password for a server on creation """
324 password = server.get('adminPass')
325
326 if password is None:
327 return utils.generate_password(FLAGS.password_length)
328 if not isinstance(password, basestring) or password == '':
329 msg = _("Invalid adminPass")
330 raise exc.HTTPBadRequest(explanation=msg)
331 return password
332
333 def _get_requested_networks(self, requested_networks):
334 """
335 Create a list of requested networks from the networks attribute
336 """
337 networks = []
338 for network in requested_networks:
339 try:
340 network_uuid = network['uuid']
341
342 if not utils.is_uuid_like(network_uuid):
343 msg = _("Bad networks format: network uuid is not in"
344 " proper format (%s)") % network_uuid
345 raise exc.HTTPBadRequest(explanation=msg)
346
347 #fixed IP address is optional
348 #if the fixed IP address is not provided then
349 #it will use one of the available IP address from the network
350 address = network.get('fixed_ip', None)
351 if address is not None and not utils.is_valid_ipv4(address):
352 msg = _("Invalid fixed IP address (%s)") % address
353 raise exc.HTTPBadRequest(explanation=msg)
354 # check if the network id is already present in the list,
355 # we don't want duplicate networks to be passed
356 # at the boot time
357 for id, ip in networks:
358 if id == network_uuid:
359 expl = _("Duplicate networks (%s) are not allowed")\
360 % network_uuid
361 raise exc.HTTPBadRequest(explanation=expl)
362
363 networks.append((network_uuid, address))
364 except KeyError as key:
365 expl = _('Bad network format: missing %s') % key
366 raise exc.HTTPBadRequest(explanation=expl)
367 except TypeError:
368 expl = _('Bad networks format')
369 raise exc.HTTPBadRequest(explanation=expl)
370
371 return networks
372
373 def _validate_user_data(self, user_data):
374 """Check if the user_data is encoded properly"""
375 if not user_data:
376 return
377 try:
378 user_data = base64.b64decode(user_data)
379 except TypeError:
380 expl = _('Userdata content cannot be decoded')
381 raise exc.HTTPBadRequest(explanation=expl)
382
383
384class ServerXMLDeserializer(wsgi.XMLDeserializer):
385 """
386 Deserializer to handle xml-formatted server create requests.
387
388 Handles standard server attributes as well as optional metadata
389 and personality attributes
390 """
391
392 metadata_deserializer = common.MetadataXMLDeserializer()
393
394 def create(self, string):
395 """Deserialize an xml-formatted server create request"""
396 dom = minidom.parseString(string)
397 server = self._extract_server(dom)
398 return {'body': {'server': server}}
399
400 def _extract_server(self, node):
401 """Marshal the server attribute of a parsed request"""
402 server = {}
403 server_node = self.find_first_child_named(node, 'server')
404
405 attributes = ["name", "imageId", "flavorId", "adminPass"]
406 for attr in attributes:
407 if server_node.getAttribute(attr):
408 server[attr] = server_node.getAttribute(attr)
409
410 metadata_node = self.find_first_child_named(server_node, "metadata")
411 server["metadata"] = self.metadata_deserializer.extract_metadata(
412 metadata_node)
413
414 server["personality"] = self._extract_personality(server_node)
415
416 return server
417
418 def _extract_personality(self, server_node):
419 """Marshal the personality attribute of a parsed request"""
420 node = self.find_first_child_named(server_node, "personality")
421 personality = []
422 if node is not None:
423 for file_node in self.find_children_named(node, "file"):
424 item = {}
425 if file_node.hasAttribute("path"):
426 item["path"] = file_node.getAttribute("path")
427 item["contents"] = self.extract_text(file_node)
428 personality.append(item)
429 return personality
430
431
432class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer):
433 """
434 Deserializer to handle xml-formatted server create requests.
435
436 Handles standard server attributes as well as optional metadata
437 and personality attributes
438 """
439
440 metadata_deserializer = common.MetadataXMLDeserializer()
441
442 def action(self, string):
443 dom = minidom.parseString(string)
444 action_node = dom.childNodes[0]
445 action_name = action_node.tagName
446
447 action_deserializer = {
448 'createImage': self._action_create_image,
449 'createBackup': self._action_create_backup,
450 'changePassword': self._action_change_password,
451 'reboot': self._action_reboot,
452 'rebuild': self._action_rebuild,
453 'resize': self._action_resize,
454 'confirmResize': self._action_confirm_resize,
455 'revertResize': self._action_revert_resize,
456 }.get(action_name, self.default)
457
458 action_data = action_deserializer(action_node)
459
460 return {'body': {action_name: action_data}}
461
462 def _action_create_image(self, node):
463 return self._deserialize_image_action(node, ('name',))
464
465 def _action_create_backup(self, node):
466 attributes = ('name', 'backup_type', 'rotation')
467 return self._deserialize_image_action(node, attributes)
468
469 def _action_change_password(self, node):
470 if not node.hasAttribute("adminPass"):
471 raise AttributeError("No adminPass was specified in request")
472 return {"adminPass": node.getAttribute("adminPass")}
473
474 def _action_reboot(self, node):
475 if not node.hasAttribute("type"):
476 raise AttributeError("No reboot type was specified in request")
477 return {"type": node.getAttribute("type")}
478
479 def _action_rebuild(self, node):
480 rebuild = {}
481 if node.hasAttribute("name"):
482 rebuild['name'] = node.getAttribute("name")
483
484 metadata_node = self.find_first_child_named(node, "metadata")
485 if metadata_node is not None:
486 rebuild["metadata"] = self.extract_metadata(metadata_node)
487
488 personality = self._extract_personality(node)
489 if personality is not None:
490 rebuild["personality"] = personality
491
492 if not node.hasAttribute("imageRef"):
493 raise AttributeError("No imageRef was specified in request")
494 rebuild["imageRef"] = node.getAttribute("imageRef")
495
496 return rebuild
497
498 def _action_resize(self, node):
499 if not node.hasAttribute("flavorRef"):
500 raise AttributeError("No flavorRef was specified in request")
501 return {"flavorRef": node.getAttribute("flavorRef")}
502
503 def _action_confirm_resize(self, node):
504 return None
505
506 def _action_revert_resize(self, node):
507 return None
508
509 def _deserialize_image_action(self, node, allowed_attributes):
510 data = {}
511 for attribute in allowed_attributes:
512 value = node.getAttribute(attribute)
513 if value:
514 data[attribute] = value
515 metadata_node = self.find_first_child_named(node, 'metadata')
516 if metadata_node is not None:
517 metadata = self.metadata_deserializer.extract_metadata(
518 metadata_node)
519 data['metadata'] = metadata
520 return data
521
522 def create(self, string):
523 """Deserialize an xml-formatted server create request"""
524 dom = minidom.parseString(string)
525 server = self._extract_server(dom)
526 return {'body': {'server': server}}
527
528 def _extract_server(self, node):
529 """Marshal the server attribute of a parsed request"""
530 server = {}
531 server_node = self.find_first_child_named(node, 'server')
532
533 attributes = ["name", "imageRef", "flavorRef", "adminPass",
534 "accessIPv4", "accessIPv6"]
535 for attr in attributes:
536 if server_node.getAttribute(attr):
537 server[attr] = server_node.getAttribute(attr)
538
539 metadata_node = self.find_first_child_named(server_node, "metadata")
540 if metadata_node is not None:
541 server["metadata"] = self.extract_metadata(metadata_node)
542
543 personality = self._extract_personality(server_node)
544 if personality is not None:
545 server["personality"] = personality
546
547 networks = self._extract_networks(server_node)
548 if networks is not None:
549 server["networks"] = networks
550
551 security_groups = self._extract_security_groups(server_node)
552 if security_groups is not None:
553 server["security_groups"] = security_groups
554
555 return server
556
557 def _extract_personality(self, server_node):
558 """Marshal the personality attribute of a parsed request"""
559 node = self.find_first_child_named(server_node, "personality")
560 if node is not None:
561 personality = []
562 for file_node in self.find_children_named(node, "file"):
563 item = {}
564 if file_node.hasAttribute("path"):
565 item["path"] = file_node.getAttribute("path")
566 item["contents"] = self.extract_text(file_node)
567 personality.append(item)
568 return personality
569 else:
570 return None
571
572 def _extract_networks(self, server_node):
573 """Marshal the networks attribute of a parsed request"""
574 node = self.find_first_child_named(server_node, "networks")
575 if node is not None:
576 networks = []
577 for network_node in self.find_children_named(node,
578 "network"):
579 item = {}
580 if network_node.hasAttribute("uuid"):
581 item["uuid"] = network_node.getAttribute("uuid")
582 if network_node.hasAttribute("fixed_ip"):
583 item["fixed_ip"] = network_node.getAttribute("fixed_ip")
584 networks.append(item)
585 return networks
586 else:
587 return None
588
589 def _extract_security_groups(self, server_node):
590 """Marshal the security_groups attribute of a parsed request"""
591 node = self.find_first_child_named(server_node, "security_groups")
592 if node is not None:
593 security_groups = []
594 for sg_node in self.find_children_named(node, "security_group"):
595 item = {}
596 name_node = self.find_first_child_named(sg_node, "name")
597 if name_node:
598 item["name"] = self.extract_text(name_node)
599 security_groups.append(item)
600 return security_groups
601 else:
602 return None
6030
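The min_count/max_count handling in the removed helper above (and carried over into servers.py below) is easy to misread, since both values arrive as strings when present. A standalone sketch of that normalization — `normalize_counts` is a hypothetical helper name; the real code inlines this logic in the create path:

```python
def normalize_counts(min_count, max_count):
    """Apply the defaulting rules used by the server create path.

    min_count and max_count are optional and arrive as strings when
    present. min_count defaults to 1, max_count defaults to min_count,
    and min_count is clamped down so it never exceeds max_count.
    """
    min_count = int(min_count) if min_count else 1
    max_count = int(max_count) if max_count else min_count
    if min_count > max_count:
        min_count = max_count
    return min_count, max_count


print(normalize_counts(None, None))  # (1, 1)
print(normalize_counts('3', None))   # (3, 3)
print(normalize_counts('5', '2'))    # (2, 2)
```

Note the last case: rather than rejecting min_count > max_count, the code silently clamps min_count down to max_count.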
=== modified file 'nova/api/openstack/servers.py'
--- nova/api/openstack/servers.py 2011-09-22 15:41:34 +0000
+++ nova/api/openstack/servers.py 2011-09-23 07:08:19 +0000
@@ -1,4 +1,5 @@
 # Copyright 2010 OpenStack LLC.
+# Copyright 2011 Piston Cloud Computing, Inc
 # All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -21,15 +22,17 @@
 from lxml import etree
 from webob import exc
 import webob
+from xml.dom import minidom

 from nova import compute
 from nova import db
 from nova import exception
 from nova import flags
+from nova import image
 from nova import log as logging
 from nova import utils
+from nova import quota
 from nova.api.openstack import common
-from nova.api.openstack import create_instance_helper as helper
 from nova.api.openstack import ips
 from nova.api.openstack import wsgi
 from nova.compute import instance_types
@@ -40,6 +43,7 @@
 import nova.api.openstack.views.images
 import nova.api.openstack.views.servers
 from nova.api.openstack import xmlutil
+from nova.rpc import common as rpc_common


 LOG = logging.getLogger('nova.api.openstack.servers')
@@ -72,7 +76,6 @@

     def __init__(self):
         self.compute_api = compute.API()
-        self.helper = helper.CreateInstanceHelper(self)

     def index(self, req):
         """ Returns a list of server names and ids for a given user """
@@ -106,6 +109,12 @@
     def _action_rebuild(self, info, request, instance_id):
         raise NotImplementedError()

+    def _get_block_device_mapping(self, data):
+        """Get block_device_mapping from 'server' dictionary.
+        Overidden by volumes controller.
+        """
+        return None
+
     def _get_servers(self, req, is_detail):
         """Returns a list of servers, taking into account any search
         options specified.
@@ -157,6 +166,181 @@
         limited_list = self._limit_items(instance_list, req)
         return self._build_list(req, limited_list, is_detail=is_detail)

169 def _handle_quota_error(self, error):
170 """
171 Reraise quota errors as api-specific http exceptions
172 """
173
174 code_mappings = {
175 "OnsetFileLimitExceeded":
176 _("Personality file limit exceeded"),
177 "OnsetFilePathLimitExceeded":
178 _("Personality file path too long"),
179 "OnsetFileContentLimitExceeded":
180 _("Personality file content too long"),
181 "InstanceLimitExceeded":
182 _("Instance quotas have been exceeded")}
183
184 expl = code_mappings.get(error.code)
185 if expl:
186 raise exc.HTTPRequestEntityTooLarge(explanation=expl,
187 headers={'Retry-After': 0})
188 # if the original error is okay, just reraise it
189 raise error
190
191 def _deserialize_create(self, request):
192 """
193 Deserialize a create request
194
195 Overrides normal behavior in the case of xml content
196 """
197 if request.content_type == "application/xml":
198 deserializer = ServerXMLDeserializer()
199 return deserializer.deserialize(request.body)
200 else:
201 return self._deserialize(request.body, request.get_content_type())
202
203 def _validate_server_name(self, value):
204 if not isinstance(value, basestring):
205 msg = _("Server name is not a string or unicode")
206 raise exc.HTTPBadRequest(explanation=msg)
207
208 if value.strip() == '':
209 msg = _("Server name is an empty string")
210 raise exc.HTTPBadRequest(explanation=msg)
211
212 def _get_kernel_ramdisk_from_image(self, req, image_service, image_id):
213 """Fetch an image from the ImageService, then if present, return the
214 associated kernel and ramdisk image IDs.
215 """
216 context = req.environ['nova.context']
217 image_meta = image_service.show(context, image_id)
218 # NOTE(sirp): extracted to a separate method to aid unit-testing, the
219 # new method doesn't need a request obj or an ImageService stub
220 kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image(
221 image_meta)
222 return kernel_id, ramdisk_id
223
224 @staticmethod
225 def _do_get_kernel_ramdisk_from_image(image_meta):
226 """Given an ImageService image_meta, return kernel and ramdisk image
227 ids if present.
228
229 This is only valid for `ami` style images.
230 """
231 image_id = image_meta['id']
232 if image_meta['status'] != 'active':
233 raise exception.ImageUnacceptable(image_id=image_id,
234 reason=_("status is not active"))
235
236 if image_meta.get('container_format') != 'ami':
237 return None, None
238
239 try:
240 kernel_id = image_meta['properties']['kernel_id']
241 except KeyError:
242 raise exception.KernelNotFoundForImage(image_id=image_id)
243
244 try:
245 ramdisk_id = image_meta['properties']['ramdisk_id']
246 except KeyError:
247 ramdisk_id = None
248
249 return kernel_id, ramdisk_id
250
251 def _get_injected_files(self, personality):
252 """
253 Create a list of injected files from the personality attribute
254
255 At this time, injected_files must be formatted as a list of
256 (file_path, file_content) pairs for compatibility with the
257 underlying compute service.
258 """
259 injected_files = []
260
261 for item in personality:
262 try:
263 path = item['path']
264 contents = item['contents']
265 except KeyError as key:
266 expl = _('Bad personality format: missing %s') % key
267 raise exc.HTTPBadRequest(explanation=expl)
268 except TypeError:
269 expl = _('Bad personality format')
270 raise exc.HTTPBadRequest(explanation=expl)
271 try:
272 contents = base64.b64decode(contents)
273 except TypeError:
274 expl = _('Personality content for %s cannot be decoded') % path
275 raise exc.HTTPBadRequest(explanation=expl)
276 injected_files.append((path, contents))
277 return injected_files
278
279 def _get_server_admin_password_old_style(self, server):
280 """ Determine the admin password for a server on creation """
281 return utils.generate_password(FLAGS.password_length)
282
283 def _get_server_admin_password_new_style(self, server):
284 """ Determine the admin password for a server on creation """
285 password = server.get('adminPass')
286
287 if password is None:
288 return utils.generate_password(FLAGS.password_length)
289 if not isinstance(password, basestring) or password == '':
290 msg = _("Invalid adminPass")
291 raise exc.HTTPBadRequest(explanation=msg)
292 return password
293
294 def _get_requested_networks(self, requested_networks):
295 """
296 Create a list of requested networks from the networks attribute
297 """
298 networks = []
299 for network in requested_networks:
300 try:
301 network_uuid = network['uuid']
302
303 if not utils.is_uuid_like(network_uuid):
304 msg = _("Bad networks format: network uuid is not in"
305 " proper format (%s)") % network_uuid
306 raise exc.HTTPBadRequest(explanation=msg)
307
308 #fixed IP address is optional
309 #if the fixed IP address is not provided then
310 #it will use one of the available IP address from the network
311 address = network.get('fixed_ip', None)
312 if address is not None and not utils.is_valid_ipv4(address):
313 msg = _("Invalid fixed IP address (%s)") % address
314 raise exc.HTTPBadRequest(explanation=msg)
315 # check if the network id is already present in the list,
316 # we don't want duplicate networks to be passed
317 # at the boot time
318 for id, ip in networks:
319 if id == network_uuid:
320 expl = _("Duplicate networks (%s) are not allowed")\
321 % network_uuid
322 raise exc.HTTPBadRequest(explanation=expl)
323
324 networks.append((network_uuid, address))
325 except KeyError as key:
326 expl = _('Bad network format: missing %s') % key
327 raise exc.HTTPBadRequest(explanation=expl)
328 except TypeError:
329 expl = _('Bad networks format')
330 raise exc.HTTPBadRequest(explanation=expl)
331
332 return networks
333
334 def _validate_user_data(self, user_data):
335 """Check if the user_data is encoded properly"""
336 if not user_data:
337 return
338 try:
339 user_data = base64.b64decode(user_data)
340 except TypeError:
341 expl = _('Userdata content cannot be decoded')
342 raise exc.HTTPBadRequest(explanation=expl)
343
     @novaclient_exception_converter
     @scheduler_api.redirect_handler
     def show(self, req, id):
@@ -174,22 +358,168 @@

     def create(self, req, body):
         """ Creates a new server for a given user """
-        if 'server' in body:
-            body['server']['key_name'] = self._get_key_name(req, body)
-
-        extra_values = None
-        extra_values, instances = self.helper.create_instance(
-            req, body, self.compute_api.create)
-
-        # We can only return 1 instance via the API, if we happen to
-        # build more than one... instances is a list, so we'll just
-        # use the first one..
-        inst = instances[0]
-        for key in ['instance_type', 'image_ref']:
-            inst[key] = extra_values[key]
-
-        server = self._build_view(req, inst, is_detail=True)
-        server['server']['adminPass'] = extra_values['password']
+
+        if not body:
+            raise exc.HTTPUnprocessableEntity()
+
+        if not 'server' in body:
+            raise exc.HTTPUnprocessableEntity()
+
+        body['server']['key_name'] = self._get_key_name(req, body)
+
+        context = req.environ['nova.context']
+        server_dict = body['server']
+        password = self._get_server_admin_password(server_dict)
+
+        if not 'name' in server_dict:
+            msg = _("Server name is not defined")
+            raise exc.HTTPBadRequest(explanation=msg)
+
378 name = server_dict['name']
379 self._validate_server_name(name)
380 name = name.strip()
381
382 image_href = self._image_ref_from_req_data(body)
383 # If the image href was generated by nova api, strip image_href
384 # down to an id and use the default glance connection params
385
386 if str(image_href).startswith(req.application_url):
387 image_href = image_href.split('/').pop()
388 try:
389 image_service, image_id = image.get_image_service(context,
390 image_href)
391 kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
392 req, image_service, image_id)
393 images = set([str(x['id']) for x in image_service.index(context)])
394 assert str(image_id) in images
395 except Exception, e:
396 msg = _("Cannot find requested image %(image_href)s: %(e)s" %
397 locals())
398 raise exc.HTTPBadRequest(explanation=msg)
399
400 personality = server_dict.get('personality')
401 config_drive = server_dict.get('config_drive')
402
403 injected_files = []
404 if personality:
405 injected_files = self._get_injected_files(personality)
406
407 sg_names = []
408 security_groups = server_dict.get('security_groups')
409 if security_groups is not None:
410 sg_names = [sg['name'] for sg in security_groups if sg.get('name')]
411 if not sg_names:
412 sg_names.append('default')
413
414 sg_names = list(set(sg_names))
415
416 requested_networks = server_dict.get('networks')
417 if requested_networks is not None:
418 requested_networks = self._get_requested_networks(
419 requested_networks)
420
421 try:
422 flavor_id = self._flavor_id_from_req_data(body)
423 except ValueError as error:
424 msg = _("Invalid flavorRef provided.")
425 raise exc.HTTPBadRequest(explanation=msg)
426
427 zone_blob = server_dict.get('blob')
428
429 # optional openstack extensions:
430 key_name = server_dict.get('key_name')
431 user_data = server_dict.get('user_data')
432 self._validate_user_data(user_data)
433
434 availability_zone = server_dict.get('availability_zone')
435 name = server_dict['name']
436 self._validate_server_name(name)
437 name = name.strip()
438
439 block_device_mapping = self._get_block_device_mapping(server_dict)
440
441 # Only allow admins to specify their own reservation_ids
442 # This is really meant to allow zones to work.
443 reservation_id = server_dict.get('reservation_id')
444 if all([reservation_id is not None,
445 reservation_id != '',
446 not context.is_admin]):
447 reservation_id = None
448
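The `all([...])` check above discards a caller-supplied reservation_id unless the caller is an admin (so a parent zone can pass its id down into child zones). A simplified standalone equivalent (hypothetical helper, empty-string ids treated as absent):

```python
def effective_reservation_id(requested_id, is_admin):
    """Non-admin requests always get a fresh, server-generated id (None here)."""
    if requested_id and is_admin:
        return requested_id
    return None
```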
449 ret_resv_id = server_dict.get('return_reservation_id', False)
450
451 min_count = server_dict.get('min_count')
452 max_count = server_dict.get('max_count')
453 # min_count and max_count are optional. If they exist, they come
454 # in as strings. We want to default 'min_count' to 1, and default
455 # 'max_count' to be 'min_count'.
456 min_count = int(min_count) if min_count else 1
457 max_count = int(max_count) if max_count else min_count
458 if min_count > max_count:
459 min_count = max_count
460
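The min_count/max_count defaulting above can be exercised on its own (hypothetical helper; values arrive from the request as strings or are absent):

```python
def normalize_counts(min_count=None, max_count=None):
    """Default min_count to 1, max_count to min_count, and clamp min to max."""
    min_count = int(min_count) if min_count else 1
    max_count = int(max_count) if max_count else min_count
    if min_count > max_count:
        min_count = max_count
    return min_count, max_count
```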
461 try:
462 inst_type = \
463 instance_types.get_instance_type_by_flavor_id(flavor_id)
464
465 (instances, resv_id) = self.compute_api.create(context,
466 inst_type,
467 image_id,
468 kernel_id=kernel_id,
469 ramdisk_id=ramdisk_id,
470 display_name=name,
471 display_description=name,
472 key_name=key_name,
473 metadata=server_dict.get('metadata', {}),
474 access_ip_v4=server_dict.get('accessIPv4'),
475 access_ip_v6=server_dict.get('accessIPv6'),
476 injected_files=injected_files,
477 admin_password=password,
478 zone_blob=zone_blob,
479 reservation_id=reservation_id,
480 min_count=min_count,
481 max_count=max_count,
482 requested_networks=requested_networks,
483 security_group=sg_names,
484 user_data=user_data,
485 availability_zone=availability_zone,
486 config_drive=config_drive,
487 block_device_mapping=block_device_mapping,
488 wait_for_instances=not ret_resv_id)
489 except quota.QuotaError as error:
490 self._handle_quota_error(error)
491 except exception.ImageNotFound as error:
492 msg = _("Can not find requested image")
493 raise exc.HTTPBadRequest(explanation=msg)
494 except exception.FlavorNotFound as error:
495 msg = _("Invalid flavorRef provided.")
496 raise exc.HTTPBadRequest(explanation=msg)
497 except exception.KeypairNotFound as error:
498 msg = _("Invalid key_name provided.")
499 raise exc.HTTPBadRequest(explanation=msg)
500 except exception.SecurityGroupNotFound as error:
501 raise exc.HTTPBadRequest(explanation=unicode(error))
502 except rpc_common.RemoteError as err:
503 msg = "%(err_type)s: %(err_msg)s" % \
504 {'err_type': err.exc_type, 'err_msg': err.value}
505 raise exc.HTTPBadRequest(explanation=msg)
506 # Let the caller deal with unhandled exceptions.
507
508 # If the caller wanted a reservation_id, return it
509 if ret_resv_id:
510 return {'reservation_id': resv_id}
511
512 # Instances is a list
513 instance = instances[0]
514 if not instance.get('_is_precooked', False):
515 instance['instance_type'] = inst_type
516 instance['image_ref'] = image_href
517
518 server = self._build_view(req, instance, is_detail=True)
519 if '_is_precooked' in server['server']:
520 del server['server']['_is_precooked']
521 else:
522 server['server']['adminPass'] = password
193 return server
523 return server
194524
195 def _delete(self, context, id):525 def _delete(self, context, id):
@@ -212,7 +542,7 @@
212542
213 if 'name' in body['server']:543 if 'name' in body['server']:
214 name = body['server']['name']544 name = body['server']['name']
215 self.helper._validate_server_name(name)
545 self._validate_server_name(name)
216 update_dict['display_name'] = name.strip()546 update_dict['display_name'] = name.strip()
217547
218 if 'accessIPv4' in body['server']:548 if 'accessIPv4' in body['server']:
@@ -284,17 +614,17 @@
284614
285 except KeyError as missing_key:615 except KeyError as missing_key:
286 msg = _("createBackup entity requires %s attribute") % missing_key616 msg = _("createBackup entity requires %s attribute") % missing_key
287 raise webob.exc.HTTPBadRequest(explanation=msg)
617 raise exc.HTTPBadRequest(explanation=msg)
288618
289 except TypeError:619 except TypeError:
290 msg = _("Malformed createBackup entity")620 msg = _("Malformed createBackup entity")
291 raise webob.exc.HTTPBadRequest(explanation=msg)
621 raise exc.HTTPBadRequest(explanation=msg)
292622
293 try:623 try:
294 rotation = int(rotation)624 rotation = int(rotation)
295 except ValueError:625 except ValueError:
296 msg = _("createBackup attribute 'rotation' must be an integer")626 msg = _("createBackup attribute 'rotation' must be an integer")
297 raise webob.exc.HTTPBadRequest(explanation=msg)
627 raise exc.HTTPBadRequest(explanation=msg)
298628
299 # preserve link to server in image properties629 # preserve link to server in image properties
300 server_ref = os.path.join(req.application_url,630 server_ref = os.path.join(req.application_url,
@@ -309,7 +639,7 @@
309 props.update(metadata)639 props.update(metadata)
310 except ValueError:640 except ValueError:
311 msg = _("Invalid metadata")641 msg = _("Invalid metadata")
312 raise webob.exc.HTTPBadRequest(explanation=msg)
642 raise exc.HTTPBadRequest(explanation=msg)
313643
314 image = self.compute_api.backup(context,644 image = self.compute_api.backup(context,
315 instance_id,645 instance_id,
@@ -687,7 +1017,7 @@
6871017
688 def _get_server_admin_password(self, server):1018 def _get_server_admin_password(self, server):
689 """ Determine the admin password for a server on creation """1019 """ Determine the admin password for a server on creation """
690 return self.helper._get_server_admin_password_old_style(server)
1020 return self._get_server_admin_password_old_style(server)
6911021
692 def _get_server_search_options(self):1022 def _get_server_search_options(self):
693 """Return server search options allowed by non-admin"""1023 """Return server search options allowed by non-admin"""
@@ -873,11 +1203,11 @@
8731203
874 except KeyError:1204 except KeyError:
875 msg = _("createImage entity requires name attribute")1205 msg = _("createImage entity requires name attribute")
876 raise webob.exc.HTTPBadRequest(explanation=msg)
1206 raise exc.HTTPBadRequest(explanation=msg)
8771207
878 except TypeError:1208 except TypeError:
879 msg = _("Malformed createImage entity")1209 msg = _("Malformed createImage entity")
880 raise webob.exc.HTTPBadRequest(explanation=msg)
1210 raise exc.HTTPBadRequest(explanation=msg)
8811211
882 # preserve link to server in image properties1212 # preserve link to server in image properties
883 server_ref = os.path.join(req.application_url,1213 server_ref = os.path.join(req.application_url,
@@ -892,7 +1222,7 @@
892 props.update(metadata)1222 props.update(metadata)
893 except ValueError:1223 except ValueError:
894 msg = _("Invalid metadata")1224 msg = _("Invalid metadata")
895 raise webob.exc.HTTPBadRequest(explanation=msg)
1225 raise exc.HTTPBadRequest(explanation=msg)
8961226
897 image = self.compute_api.snapshot(context,1227 image = self.compute_api.snapshot(context,
898 instance_id,1228 instance_id,
@@ -912,7 +1242,7 @@
9121242
913 def _get_server_admin_password(self, server):1243 def _get_server_admin_password(self, server):
914 """ Determine the admin password for a server on creation """1244 """ Determine the admin password for a server on creation """
915 return self.helper._get_server_admin_password_new_style(server)
1245 return self._get_server_admin_password_new_style(server)
9161246
917 def _get_server_search_options(self):1247 def _get_server_search_options(self):
918 """Return server search options allowed by non-admin"""1248 """Return server search options allowed by non-admin"""
@@ -1057,6 +1387,227 @@
1057 return self._to_xml(server)1387 return self._to_xml(server)
10581388
10591389
1390class ServerXMLDeserializer(wsgi.XMLDeserializer):
1391 """
1392 Deserializer to handle xml-formatted server create requests.
1393
1394 Handles standard server attributes as well as optional metadata
1395 and personality attributes
1396 """
1397
1398 metadata_deserializer = common.MetadataXMLDeserializer()
1399
1400 def create(self, string):
1401 """Deserialize an xml-formatted server create request"""
1402 dom = minidom.parseString(string)
1403 server = self._extract_server(dom)
1404 return {'body': {'server': server}}
1405
1406 def _extract_server(self, node):
1407 """Marshal the server attribute of a parsed request"""
1408 server = {}
1409 server_node = self.find_first_child_named(node, 'server')
1410
1411 attributes = ["name", "imageId", "flavorId", "adminPass"]
1412 for attr in attributes:
1413 if server_node.getAttribute(attr):
1414 server[attr] = server_node.getAttribute(attr)
1415
1416 metadata_node = self.find_first_child_named(server_node, "metadata")
1417 server["metadata"] = self.metadata_deserializer.extract_metadata(
1418 metadata_node)
1419
1420 server["personality"] = self._extract_personality(server_node)
1421
1422 return server
1423
1424 def _extract_personality(self, server_node):
1425 """Marshal the personality attribute of a parsed request"""
1426 node = self.find_first_child_named(server_node, "personality")
1427 personality = []
1428 if node is not None:
1429 for file_node in self.find_children_named(node, "file"):
1430 item = {}
1431 if file_node.hasAttribute("path"):
1432 item["path"] = file_node.getAttribute("path")
1433 item["contents"] = self.extract_text(file_node)
1434 personality.append(item)
1435 return personality
1436
1437
1438class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer):
1439 """
1440 Deserializer to handle xml-formatted server create requests.
1441
1442 Handles standard server attributes as well as optional metadata
1443 and personality attributes
1444 """
1445
1446 metadata_deserializer = common.MetadataXMLDeserializer()
1447
1448 def action(self, string):
1449 dom = minidom.parseString(string)
1450 action_node = dom.childNodes[0]
1451 action_name = action_node.tagName
1452
1453 action_deserializer = {
1454 'createImage': self._action_create_image,
1455 'createBackup': self._action_create_backup,
1456 'changePassword': self._action_change_password,
1457 'reboot': self._action_reboot,
1458 'rebuild': self._action_rebuild,
1459 'resize': self._action_resize,
1460 'confirmResize': self._action_confirm_resize,
1461 'revertResize': self._action_revert_resize,
1462 }.get(action_name, self.default)
1463
1464 action_data = action_deserializer(action_node)
1465
1466 return {'body': {action_name: action_data}}
1467
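The tag-name dispatch in `action()` above can be sketched standalone with `minidom` (hypothetical function; only two representative handlers are shown, and unknown actions fall through with no body, standing in for `self.default`):

```python
from xml.dom import minidom

def deserialize_action(xml_string):
    """Dispatch on the root tag name, as ServerXMLDeserializerV11.action does."""
    dom = minidom.parseString(xml_string)
    node = dom.childNodes[0]
    if node.tagName == 'reboot':
        if not node.hasAttribute('type'):
            raise AttributeError("No reboot type was specified in request")
        return {'body': {'reboot': {'type': node.getAttribute('type')}}}
    if node.tagName == 'resize':
        if not node.hasAttribute('flavorRef'):
            raise AttributeError("No flavorRef was specified in request")
        return {'body': {'resize': {'flavorRef': node.getAttribute('flavorRef')}}}
    # confirmResize / revertResize and unknown actions carry no data
    return {'body': {node.tagName: None}}
```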
1468 def _action_create_image(self, node):
1469 return self._deserialize_image_action(node, ('name',))
1470
1471 def _action_create_backup(self, node):
1472 attributes = ('name', 'backup_type', 'rotation')
1473 return self._deserialize_image_action(node, attributes)
1474
1475 def _action_change_password(self, node):
1476 if not node.hasAttribute("adminPass"):
1477 raise AttributeError("No adminPass was specified in request")
1478 return {"adminPass": node.getAttribute("adminPass")}
1479
1480 def _action_reboot(self, node):
1481 if not node.hasAttribute("type"):
1482 raise AttributeError("No reboot type was specified in request")
1483 return {"type": node.getAttribute("type")}
1484
1485 def _action_rebuild(self, node):
1486 rebuild = {}
1487 if node.hasAttribute("name"):
1488 rebuild['name'] = node.getAttribute("name")
1489
1490 metadata_node = self.find_first_child_named(node, "metadata")
1491 if metadata_node is not None:
1492 rebuild["metadata"] = self.extract_metadata(metadata_node)
1493
1494 personality = self._extract_personality(node)
1495 if personality is not None:
1496 rebuild["personality"] = personality
1497
1498 if not node.hasAttribute("imageRef"):
1499 raise AttributeError("No imageRef was specified in request")
1500 rebuild["imageRef"] = node.getAttribute("imageRef")
1501
1502 return rebuild
1503
1504 def _action_resize(self, node):
1505 if not node.hasAttribute("flavorRef"):
1506 raise AttributeError("No flavorRef was specified in request")
1507 return {"flavorRef": node.getAttribute("flavorRef")}
1508
1509 def _action_confirm_resize(self, node):
1510 return None
1511
1512 def _action_revert_resize(self, node):
1513 return None
1514
1515 def _deserialize_image_action(self, node, allowed_attributes):
1516 data = {}
1517 for attribute in allowed_attributes:
1518 value = node.getAttribute(attribute)
1519 if value:
1520 data[attribute] = value
1521 metadata_node = self.find_first_child_named(node, 'metadata')
1522 if metadata_node is not None:
1523 metadata = self.metadata_deserializer.extract_metadata(
1524 metadata_node)
1525 data['metadata'] = metadata
1526 return data
1527
1528 def create(self, string):
1529 """Deserialize an xml-formatted server create request"""
1530 dom = minidom.parseString(string)
1531 server = self._extract_server(dom)
1532 return {'body': {'server': server}}
1533
1534 def _extract_server(self, node):
1535 """Marshal the server attribute of a parsed request"""
1536 server = {}
1537 server_node = self.find_first_child_named(node, 'server')
1538
1539 attributes = ["name", "imageRef", "flavorRef", "adminPass",
1540 "accessIPv4", "accessIPv6"]
1541 for attr in attributes:
1542 if server_node.getAttribute(attr):
1543 server[attr] = server_node.getAttribute(attr)
1544
1545 metadata_node = self.find_first_child_named(server_node, "metadata")
1546 if metadata_node is not None:
1547 server["metadata"] = self.extract_metadata(metadata_node)
1548
1549 personality = self._extract_personality(server_node)
1550 if personality is not None:
1551 server["personality"] = personality
1552
1553 networks = self._extract_networks(server_node)
1554 if networks is not None:
1555 server["networks"] = networks
1556
1557 security_groups = self._extract_security_groups(server_node)
1558 if security_groups is not None:
1559 server["security_groups"] = security_groups
1560
1561 return server
1562
1563 def _extract_personality(self, server_node):
1564 """Marshal the personality attribute of a parsed request"""
1565 node = self.find_first_child_named(server_node, "personality")
1566 if node is not None:
1567 personality = []
1568 for file_node in self.find_children_named(node, "file"):
1569 item = {}
1570 if file_node.hasAttribute("path"):
1571 item["path"] = file_node.getAttribute("path")
1572 item["contents"] = self.extract_text(file_node)
1573 personality.append(item)
1574 return personality
1575 else:
1576 return None
1577
1578 def _extract_networks(self, server_node):
1579 """Marshal the networks attribute of a parsed request"""
1580 node = self.find_first_child_named(server_node, "networks")
1581 if node is not None:
1582 networks = []
1583 for network_node in self.find_children_named(node,
1584 "network"):
1585 item = {}
1586 if network_node.hasAttribute("uuid"):
1587 item["uuid"] = network_node.getAttribute("uuid")
1588 if network_node.hasAttribute("fixed_ip"):
1589 item["fixed_ip"] = network_node.getAttribute("fixed_ip")
1590 networks.append(item)
1591 return networks
1592 else:
1593 return None
1594
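A runnable sketch of the `_extract_networks` logic above (hypothetical standalone function using `getElementsByTagName` instead of the class's `find_first_child_named`/`find_children_named` helpers):

```python
from xml.dom import minidom

def extract_networks(xml_string):
    """Return a list of network dicts, or None when <networks> is absent."""
    dom = minidom.parseString(xml_string)
    nodes = dom.getElementsByTagName('networks')
    if not nodes:
        return None
    networks = []
    for net in nodes[0].getElementsByTagName('network'):
        item = {}
        if net.hasAttribute('uuid'):
            item['uuid'] = net.getAttribute('uuid')
        if net.hasAttribute('fixed_ip'):
            item['fixed_ip'] = net.getAttribute('fixed_ip')
        networks.append(item)
    return networks
```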
1595 def _extract_security_groups(self, server_node):
1596 """Marshal the security_groups attribute of a parsed request"""
1597 node = self.find_first_child_named(server_node, "security_groups")
1598 if node is not None:
1599 security_groups = []
1600 for sg_node in self.find_children_named(node, "security_group"):
1601 item = {}
1602 name_node = self.find_first_child_named(sg_node, "name")
1603 if name_node:
1604 item["name"] = self.extract_text(name_node)
1605 security_groups.append(item)
1606 return security_groups
1607 else:
1608 return None
1609
1610
1060def create_resource(version='1.0'):1611def create_resource(version='1.0'):
1061 controller = {1612 controller = {
1062 '1.0': ControllerV10,1613 '1.0': ControllerV10,
@@ -1096,8 +1647,8 @@
1096 }1647 }
10971648
1098 xml_deserializer = {1649 xml_deserializer = {
1099 '1.0': helper.ServerXMLDeserializer(),
1100 '1.1': helper.ServerXMLDeserializerV11(),
1650 '1.0': ServerXMLDeserializer(),
1651 '1.1': ServerXMLDeserializerV11(),
1101 }[version]1652 }[version]
11021653
1103 body_deserializers = {1654 body_deserializers = {
11041655
=== modified file 'nova/api/openstack/zones.py'
--- nova/api/openstack/zones.py 2011-08-31 18:54:30 +0000
+++ nova/api/openstack/zones.py 2011-09-23 07:08:19 +0000
@@ -25,8 +25,8 @@
25from nova.compute import api as compute25from nova.compute import api as compute
26from nova.scheduler import api26from nova.scheduler import api
2727
28from nova.api.openstack import create_instance_helper as helper
29from nova.api.openstack import common28from nova.api.openstack import common
29from nova.api.openstack import servers
30from nova.api.openstack import wsgi30from nova.api.openstack import wsgi
3131
3232
@@ -67,7 +67,6 @@
6767
68 def __init__(self):68 def __init__(self):
69 self.compute_api = compute.API()69 self.compute_api = compute.API()
70 self.helper = helper.CreateInstanceHelper(self)
7170
72 def index(self, req):71 def index(self, req):
73 """Return all zones in brief"""72 """Return all zones in brief"""
@@ -120,18 +119,6 @@
120 zone = api.zone_update(context, zone_id, body["zone"])119 zone = api.zone_update(context, zone_id, body["zone"])
121 return dict(zone=_scrub_zone(zone))120 return dict(zone=_scrub_zone(zone))
122121
123 def boot(self, req, body):
124 """Creates a new server for a given user while being Zone aware.
125
126 Returns a reservation ID (a UUID).
127 """
128 result = None
129 extra_values, result = self.helper.create_instance(req, body,
130 self.compute_api.create_all_at_once)
131
132 reservation_id = result
133 return {'reservation_id': reservation_id}
134
135 @check_encryption_key122 @check_encryption_key
136 def select(self, req, body):123 def select(self, req, body):
137 """Returns a weighted list of costs to create instances124 """Returns a weighted list of costs to create instances
@@ -155,29 +142,10 @@
155 blob=cipher_text))142 blob=cipher_text))
156 return cooked143 return cooked
157144
158 def _image_ref_from_req_data(self, data):
159 return data['server']['imageId']
160
161 def _flavor_id_from_req_data(self, data):
162 return data['server']['flavorId']
163
164 def _get_server_admin_password(self, server):
165 """ Determine the admin password for a server on creation """
166 return self.helper._get_server_admin_password_old_style(server)
167
168145
169class ControllerV11(Controller):146class ControllerV11(Controller):
170 """Controller for 1.1 Zone resources."""147 """Controller for 1.1 Zone resources."""
171
148 pass
172 def _get_server_admin_password(self, server):
173 """ Determine the admin password for a server on creation """
174 return self.helper._get_server_admin_password_new_style(server)
175
176 def _image_ref_from_req_data(self, data):
177 return data['server']['imageRef']
178
179 def _flavor_id_from_req_data(self, data):
180 return data['server']['flavorRef']
181149
182150
183def create_resource(version):151def create_resource(version):
@@ -199,7 +167,7 @@
199 serializer = wsgi.ResponseSerializer(body_serializers)167 serializer = wsgi.ResponseSerializer(body_serializers)
200168
201 body_deserializers = {169 body_deserializers = {
202 'application/xml': helper.ServerXMLDeserializer(),
170 'application/xml': servers.ServerXMLDeserializer(),
203 }171 }
204 deserializer = wsgi.RequestDeserializer(body_deserializers)172 deserializer = wsgi.RequestDeserializer(body_deserializers)
205173
206174
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-09-21 21:00:53 +0000
+++ nova/compute/api.py 2011-09-23 07:08:19 +0000
@@ -74,6 +74,11 @@
74 return display_name.translate(table, deletions)74 return display_name.translate(table, deletions)
7575
7676
77def generate_default_display_name(instance):
78 """Generate a default display name"""
79 return 'Server %s' % instance['id']
80
81
77def _is_able_to_shutdown(instance, instance_id):82def _is_able_to_shutdown(instance, instance_id):
78 vm_state = instance["vm_state"]83 vm_state = instance["vm_state"]
79 task_state = instance["task_state"]84 task_state = instance["task_state"]
@@ -176,17 +181,27 @@
176181
177 self.network_api.validate_networks(context, requested_networks)182 self.network_api.validate_networks(context, requested_networks)
178183
179 def _check_create_parameters(self, context, instance_type,
180 image_href, kernel_id=None, ramdisk_id=None,
181 min_count=None, max_count=None,
182 display_name='', display_description='',
183 key_name=None, key_data=None, security_group='default',
184 availability_zone=None, user_data=None, metadata=None,
185 injected_files=None, admin_password=None, zone_blob=None,
186 reservation_id=None, access_ip_v4=None, access_ip_v6=None,
187 requested_networks=None, config_drive=None,):
184 def _create_instance(self, context, instance_type,
185 image_href, kernel_id, ramdisk_id,
186 min_count, max_count,
187 display_name, display_description,
188 key_name, key_data, security_group,
189 availability_zone, user_data, metadata,
190 injected_files, admin_password, zone_blob,
191 reservation_id, access_ip_v4, access_ip_v6,
192 requested_networks, config_drive,
193 block_device_mapping,
194 wait_for_instances):
188 """Verify all the input parameters regardless of the provisioning195 """Verify all the input parameters regardless of the provisioning
189 strategy being performed."""196 strategy being performed and schedule the instance(s) for
197 creation."""
198
199 if not metadata:
200 metadata = {}
201 if not display_description:
202 display_description = ''
203 if not security_group:
204 security_group = 'default'
190205
191 if not instance_type:206 if not instance_type:
192 instance_type = instance_types.get_default_instance_type()207 instance_type = instance_types.get_default_instance_type()
@@ -197,6 +212,8 @@
197 if not metadata:212 if not metadata:
198 metadata = {}213 metadata = {}
199214
215 block_device_mapping = block_device_mapping or []
216
200 num_instances = quota.allowed_instances(context, max_count,217 num_instances = quota.allowed_instances(context, max_count,
201 instance_type)218 instance_type)
202 if num_instances < min_count:219 if num_instances < min_count:
@@ -297,7 +314,28 @@
297 'vm_mode': vm_mode,314 'vm_mode': vm_mode,
298 'root_device_name': root_device_name}315 'root_device_name': root_device_name}
299316
300 return (num_instances, base_options, image)
317 LOG.debug(_("Going to run %s instances...") % num_instances)
318
319 if wait_for_instances:
320 rpc_method = rpc.call
321 else:
322 rpc_method = rpc.cast
323
324 # TODO(comstud): We should use rpc.multicall when we can
325 # retrieve the full instance dictionary from the scheduler.
326 # Otherwise, we could exceed the AMQP max message size limit.
327 # This would require the schedulers' schedule_run_instances
328 # methods to return an iterator vs a list.
329 instances = self._schedule_run_instance(
330 rpc_method,
331 context, base_options,
332 instance_type, zone_blob,
333 availability_zone, injected_files,
334 admin_password, image,
335 num_instances, requested_networks,
336 block_device_mapping, security_group)
337
338 return (instances, reservation_id)
301339
302 @staticmethod340 @staticmethod
303 def _volume_size(instance_type, virtual_name):341 def _volume_size(instance_type, virtual_name):
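The `wait_for_instances` switch above boils down to choosing `rpc.call` (blocking, the scheduler's reply comes back) versus `rpc.cast` (fire-and-forget, only the reservation_id is available). A minimal sketch with a fake rpc module standing in for `nova.rpc`:

```python
class _FakeRPC(object):
    """Stand-in for nova.rpc: call blocks for a result, cast returns nothing."""
    @staticmethod
    def call(context, topic, msg):
        return [{'id': 1}]  # the scheduler's reply (instance stubs)

    @staticmethod
    def cast(context, topic, msg):
        return None  # fire-and-forget

def schedule_run_instance(wait_for_instances, rpc=_FakeRPC):
    """Pick the rpc method the same way _create_instance does above."""
    rpc_method = rpc.call if wait_for_instances else rpc.cast
    return rpc_method(None, 'scheduler', {'method': 'run_instance'})
```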
@@ -393,10 +431,8 @@
393 including any related table updates (such as security group,431 including any related table updates (such as security group,
394 etc).432 etc).
395433
396 This will called by create() in the majority of situations,
397 but create_all_at_once() style Schedulers may initiate the call.
398 If you are changing this method, be sure to update both
399 call paths.
400 """
434 This is called by the scheduler after a location for the
435 instance has been determined.
436 """
401 elevated = context.elevated()437 elevated = context.elevated()
402 if security_group is None:438 if security_group is None:
@@ -433,7 +469,7 @@
433 updates = {}469 updates = {}
434 if (not hasattr(instance, 'display_name') or470 if (not hasattr(instance, 'display_name') or
435 instance.display_name is None):471 instance.display_name is None):
436 updates['display_name'] = "Server %s" % instance_id
472 updates['display_name'] = generate_default_display_name(instance)
437 instance['display_name'] = updates['display_name']473 instance['display_name'] = updates['display_name']
438 updates['hostname'] = self.hostname_factory(instance)474 updates['hostname'] = self.hostname_factory(instance)
439 updates['vm_state'] = vm_states.BUILDING475 updates['vm_state'] = vm_states.BUILDING
@@ -442,21 +478,23 @@
442 instance = self.update(context, instance_id, **updates)478 instance = self.update(context, instance_id, **updates)
443 return instance479 return instance
444480
445 def _ask_scheduler_to_create_instance(self, context, base_options,
446 instance_type, zone_blob,
447 availability_zone, injected_files,
448 admin_password, image,
449 instance_id=None, num_instances=1,
450 requested_networks=None):
451 """Send the run_instance request to the schedulers for processing."""
481 def _schedule_run_instance(self,
482 rpc_method,
483 context, base_options,
484 instance_type, zone_blob,
485 availability_zone, injected_files,
486 admin_password, image,
487 num_instances,
488 requested_networks,
489 block_device_mapping,
490 security_group):
491 """Send a run_instance request to the schedulers for processing."""
492
452 pid = context.project_id493 pid = context.project_id
453 uid = context.user_id494 uid = context.user_id
454 if instance_id:
455 LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's"
456 " instance %(instance_id)s (single-shot)") % locals())
457 else:
458 LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's"
459 " (all-at-once)") % locals())
495
496 LOG.debug(_("Sending create to scheduler for %(pid)s/%(uid)s's") %
497 locals())
460498
461 request_spec = {499 request_spec = {
462 'image': image,500 'image': image,
@@ -465,82 +503,41 @@
465 'filter': None,503 'filter': None,
466 'blob': zone_blob,504 'blob': zone_blob,
467 'num_instances': num_instances,505 'num_instances': num_instances,
506 'block_device_mapping': block_device_mapping,
507 'security_group': security_group,
468 }508 }
469509
470 rpc.cast(context,
471 FLAGS.scheduler_topic,
472 {"method": "run_instance",
473 "args": {"topic": FLAGS.compute_topic,
474 "instance_id": instance_id,
475 "request_spec": request_spec,
476 "availability_zone": availability_zone,
477 "admin_password": admin_password,
478 "injected_files": injected_files,
479 "requested_networks": requested_networks}})
510 return rpc_method(context,
511 FLAGS.scheduler_topic,
512 {"method": "run_instance",
513 "args": {"topic": FLAGS.compute_topic,
514 "request_spec": request_spec,
515 "admin_password": admin_password,
516 "injected_files": injected_files,
517 "requested_networks": requested_networks}})
480
481 def create_all_at_once(self, context, instance_type,
482 image_href, kernel_id=None, ramdisk_id=None,
483 min_count=None, max_count=None,
484 display_name='', display_description='',
485 key_name=None, key_data=None, security_group='default',
486 availability_zone=None, user_data=None, metadata=None,
487 injected_files=None, admin_password=None, zone_blob=None,
488 reservation_id=None, block_device_mapping=None,
489 access_ip_v4=None, access_ip_v6=None,
490 requested_networks=None, config_drive=None):
491 """Provision the instances by passing the whole request to
492 the Scheduler for execution. Returns a Reservation ID
493 related to the creation of all of these instances."""
494
495 if not metadata:
496 metadata = {}
497
498 num_instances, base_options, image = self._check_create_parameters(
499 context, instance_type,
500 image_href, kernel_id, ramdisk_id,
501 min_count, max_count,
502 display_name, display_description,
503 key_name, key_data, security_group,
504 availability_zone, user_data, metadata,
505 injected_files, admin_password, zone_blob,
506 reservation_id, access_ip_v4, access_ip_v6,
507 requested_networks, config_drive)
508
509 self._ask_scheduler_to_create_instance(context, base_options,
510 instance_type, zone_blob,
511 availability_zone, injected_files,
512 admin_password, image,
513 num_instances=num_instances,
514 requested_networks=requested_networks)
515
516 return base_options['reservation_id']
517518
518 def create(self, context, instance_type,519 def create(self, context, instance_type,
519 image_href, kernel_id=None, ramdisk_id=None,520 image_href, kernel_id=None, ramdisk_id=None,
520 min_count=None, max_count=None,521 min_count=None, max_count=None,
521 display_name='', display_description='',
522 display_name=None, display_description=None,
522 key_name=None, key_data=None, security_group='default',
523 key_name=None, key_data=None, security_group=None,
523 availability_zone=None, user_data=None, metadata=None,524 availability_zone=None, user_data=None, metadata=None,
524 injected_files=None, admin_password=None, zone_blob=None,525 injected_files=None, admin_password=None, zone_blob=None,
525 reservation_id=None, block_device_mapping=None,526 reservation_id=None, block_device_mapping=None,
526 access_ip_v4=None, access_ip_v6=None,527 access_ip_v4=None, access_ip_v6=None,
527 requested_networks=None, config_drive=None,):
528 """
529 Provision the instances by sending off a series of single
530 instance requests to the Schedulers. This is fine for trival
531 Scheduler drivers, but may remove the effectiveness of the
532 more complicated drivers.
533
534 NOTE: If you change this method, be sure to change
535 create_all_at_once() at the same time!
536
537 Returns a list of instance dicts.
538 """
539
528 requested_networks=None, config_drive=None,
529 wait_for_instances=True):
530 """
531 Provision instances, sending instance information to the
532 scheduler. The scheduler will determine where the instance(s)
533 go and will handle creating the DB entries.
534
535 Returns a tuple of (instances, reservation_id) where instances
536 could be 'None' or a list of instance dicts depending on if
537 we waited for information from the scheduler or not.
538 """
539
540 (instances, reservation_id) = self._create_instance(
540 if not metadata:
541 metadata = {}
542
543 num_instances, base_options, image = self._check_create_parameters(
544 context, instance_type,541 context, instance_type,
545 image_href, kernel_id, ramdisk_id,542 image_href, kernel_id, ramdisk_id,
546 min_count, max_count,543 min_count, max_count,
@@ -549,27 +546,25 @@
549 availability_zone, user_data, metadata,546 availability_zone, user_data, metadata,
550 injected_files, admin_password, zone_blob,547 injected_files, admin_password, zone_blob,
551 reservation_id, access_ip_v4, access_ip_v6,548 reservation_id, access_ip_v4, access_ip_v6,
552 requested_networks, config_drive)
553
554 block_device_mapping = block_device_mapping or []
555 instances = []
556 LOG.debug(_("Going to run %s instances..."), num_instances)
557 for num in range(num_instances):
558 instance = self.create_db_entry_for_new_instance(context,
559 instance_type, image,
560 base_options, security_group,
561 block_device_mapping, num=num)
562 instances.append(instance)
563 instance_id = instance['id']
564
565 self._ask_scheduler_to_create_instance(context, base_options,
566 instance_type, zone_blob,
567 availability_zone, injected_files,
568 admin_password, image,
569 instance_id=instance_id,
570 requested_networks=requested_networks)
571
572 return [dict(x.iteritems()) for x in instances]
549 requested_networks, config_drive,
550 block_device_mapping,
551 wait_for_instances)
552
553 if instances is None:
554 # wait_for_instances must have been False
555 return (instances, reservation_id)
556
557 inst_ret_list = []
558 for instance in instances:
559 if instance.get('_is_precooked', False):
560 inst_ret_list.append(instance)
561 else:
562 # Scheduler only gives us the 'id'. We need to pull
563 # in the created instances from the DB
564 instance = self.db.instance_get(context, instance['id'])
565 inst_ret_list.append(dict(instance.iteritems()))
566
567 return (inst_ret_list, reservation_id)
573568
574 def has_finished_migration(self, context, instance_uuid):569 def has_finished_migration(self, context, instance_uuid):
575 """Returns true if an instance has a finished migration."""570 """Returns true if an instance has a finished migration."""
576571
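The reworked return path in `create()` treats "precooked" instances (fully marshalled dicts handed back from a child zone) differently from locally scheduled ones, which come back as bare ids and must be re-read from the database. A minimal standalone sketch of that normalization — `lookup` is an illustrative stand-in for the real `self.db.instance_get` call:

```python
def normalize_scheduler_result(instances, lookup):
    """Pass precooked (child-zone) instances through untouched;
    re-fetch local instances for which only the id was returned."""
    results = []
    for instance in instances:
        if instance.get('_is_precooked', False):
            results.append(instance)
        else:
            # Local schedulers return just the id; pull the full row.
            results.append(dict(lookup(instance['id'])))
    return results

# Stubbed "database" for illustration only:
fake_db = {1: {'id': 1, 'host': 'compute1', 'vm_state': 'building'}}
local = {'id': 1, '_is_precooked': False}
remote = {'id': 99, '_is_precooked': True, 'host': 'child-zone-api'}
normalized = normalize_scheduler_result([local, remote], fake_db.get)
```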
=== modified file 'nova/scheduler/abstract_scheduler.py'
--- nova/scheduler/abstract_scheduler.py 2011-09-12 14:36:14 +0000
+++ nova/scheduler/abstract_scheduler.py 2011-09-23 07:08:19 +0000
@@ -60,24 +60,10 @@
             request_spec, kwargs):
         """Create the requested resource in this Zone."""
         host = build_plan_item['hostname']
-        base_options = request_spec['instance_properties']
-        image = request_spec['image']
-        instance_type = request_spec.get('instance_type')
-
-        # TODO(sandy): I guess someone needs to add block_device_mapping
-        # support at some point? Also, OS API has no concept of security
-        # groups.
-        instance = compute_api.API().create_db_entry_for_new_instance(context,
-                instance_type, image, base_options, None, [])
-
-        instance_id = instance['id']
-        kwargs['instance_id'] = instance_id
-
-        queue = db.queue_get_for(context, "compute", host)
-        params = {"method": "run_instance", "args": kwargs}
-        rpc.cast(context, queue, params)
-        LOG.debug(_("Provisioning locally via compute node %(host)s")
-                % locals())
+        instance = self.create_instance_db_entry(context, request_spec)
+        driver.cast_to_compute_host(context, host,
+                'run_instance', instance_id=instance['id'], **kwargs)
+        return driver.encode_instance(instance, local=True)

     def _decrypt_blob(self, blob):
         """Returns the decrypted blob or None if invalid. Broken out
@@ -112,7 +98,7 @@
         files = kwargs['injected_files']
         child_zone = zone_info['child_zone']
         child_blob = zone_info['child_blob']
-        zone = db.zone_get(context, child_zone)
+        zone = db.zone_get(context.elevated(), child_zone)
         url = zone.api_url
         LOG.debug(_("Forwarding instance create call to child zone %(url)s"
                 ". ReservationID=%(reservation_id)s") % locals())
@@ -132,12 +118,13 @@
         # arguments are passed as keyword arguments
         # (there's a reasonable default for ipgroups in the
         # novaclient call).
-        nova.servers.create(name, image_ref, flavor_id,
+        instance = nova.servers.create(name, image_ref, flavor_id,
                 meta=meta, files=files, zone_blob=child_blob,
                 reservation_id=reservation_id)
+        return driver.encode_instance(instance._info, local=False)

     def _provision_resource_from_blob(self, context, build_plan_item,
-            instance_id, request_spec, kwargs):
+            request_spec, kwargs):
         """Create the requested resource locally or in a child zone
         based on what is stored in the zone blob info.

@@ -165,21 +152,21 @@

         # Valid data ... is it for us?
         if 'child_zone' in host_info and 'child_blob' in host_info:
-            self._ask_child_zone_to_create_instance(context, host_info,
-                    request_spec, kwargs)
+            instance = self._ask_child_zone_to_create_instance(context,
+                    host_info, request_spec, kwargs)
         else:
-            self._provision_resource_locally(context, host_info, request_spec,
-                    kwargs)
+            instance = self._provision_resource_locally(context,
+                    host_info, request_spec, kwargs)
+        return instance

-    def _provision_resource(self, context, build_plan_item, instance_id,
+    def _provision_resource(self, context, build_plan_item,
             request_spec, kwargs):
         """Create the requested resource in this Zone or a child zone."""
         if "hostname" in build_plan_item:
-            self._provision_resource_locally(context, build_plan_item,
-                    request_spec, kwargs)
-            return
-        self._provision_resource_from_blob(context, build_plan_item,
-                instance_id, request_spec, kwargs)
+            return self._provision_resource_locally(context,
+                    build_plan_item, request_spec, kwargs)
+        return self._provision_resource_from_blob(context,
+                build_plan_item, request_spec, kwargs)

     def _adjust_child_weights(self, child_results, zones):
         """Apply the Scale and Offset values from the Zone definition
@@ -205,8 +192,7 @@
             LOG.exception(_("Bad child zone scaling values "
                     "for Zone: %(zone_id)s") % locals())

-    def schedule_run_instance(self, context, instance_id, request_spec,
-            *args, **kwargs):
+    def schedule_run_instance(self, context, request_spec, *args, **kwargs):
         """This method is called from nova.compute.api to provision
         an instance. However we need to look at the parameters being
         passed in to see if this is a request to:
@@ -214,13 +200,16 @@
         2. Use the Build Plan information in the request parameters
         to simply create the instance (either in this zone or
         a child zone).
+
+        returns list of instances created.
         """
         # TODO(sandy): We'll have to look for richer specs at some point.
         blob = request_spec.get('blob')
         if blob:
-            self._provision_resource(context, request_spec, instance_id,
-                    request_spec, kwargs)
-            return None
+            instance = self._provision_resource(context,
+                    request_spec, request_spec, kwargs)
+            # Caller expects a list of instances
+            return [instance]

         num_instances = request_spec.get('num_instances', 1)
         LOG.debug(_("Attempting to build %(num_instances)d instance(s)") %
@@ -231,16 +220,16 @@
         if not build_plan:
             raise driver.NoValidHost(_('No hosts were available'))

+        instances = []
         for num in xrange(num_instances):
             if not build_plan:
                 break
             build_plan_item = build_plan.pop(0)
-            self._provision_resource(context, build_plan_item, instance_id,
-                    request_spec, kwargs)
+            instance = self._provision_resource(context,
+                    build_plan_item, request_spec, kwargs)
+            instances.append(instance)

-        # Returning None short-circuits the routing to Compute (since
-        # we've already done it here)
-        return None
+        return instances

     def select(self, context, request_spec, *args, **kwargs):
         """Select returns a list of weights and zone/host information
@@ -251,7 +240,7 @@
         return self._schedule(context, "compute", request_spec,
                 *args, **kwargs)

-    def schedule(self, context, topic, request_spec, *args, **kwargs):
+    def schedule(self, context, topic, method, *args, **kwargs):
         """The schedule() contract requires we return the one
         best-suited host for this request.
         """
@@ -285,7 +274,7 @@
         weighted_hosts = self.weigh_hosts(topic, request_spec, filtered_hosts)
         # Next, tack on the host weights from the child zones
         json_spec = json.dumps(request_spec)
-        all_zones = db.zone_get_all(context)
+        all_zones = db.zone_get_all(context.elevated())
         child_results = self._call_zone_method(context, "select",
                 specs=json_spec, zones=all_zones)
         self._adjust_child_weights(child_results, all_zones)

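With this change, `schedule_run_instance` walks the weighted build plan and collects the provisioned instances instead of returning `None` to short-circuit the caller. The loop's shape can be sketched standalone — `provision` is a hypothetical stand-in for `_provision_resource`:

```python
def run_build_plan(build_plan, num_instances, provision):
    """Pop one weighted build-plan entry per requested instance;
    stop early if the plan runs out of candidate hosts."""
    if not build_plan:
        raise RuntimeError('No hosts were available')
    instances = []
    for _ in range(num_instances):
        if not build_plan:
            break
        item = build_plan.pop(0)
        instances.append(provision(item))
    return instances

plan = [{'hostname': 'host-a'}, {'hostname': 'host-b'}]
# Three instances requested, but only two candidates in the plan:
built = run_build_plan(plan, 3, lambda item: item['hostname'])
```

Note the early `break`: a request for more instances than the plan can satisfy yields a shorter list rather than an error.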
=== modified file 'nova/scheduler/api.py'
--- nova/scheduler/api.py 2011-09-21 12:19:53 +0000
+++ nova/scheduler/api.py 2011-09-23 07:08:19 +0000
@@ -65,7 +65,7 @@
     for item in items:
         item['api_url'] = item['api_url'].replace('\\/', '/')
     if not items:
-        items = db.zone_get_all(context)
+        items = db.zone_get_all(context.elevated())
     return items


@@ -116,7 +116,7 @@
     pool = greenpool.GreenPool()
     results = []
     if zones is None:
-        zones = db.zone_get_all(context)
+        zones = db.zone_get_all(context.elevated())
     for zone in zones:
         try:
             # Do this on behalf of the user ...

=== modified file 'nova/scheduler/chance.py'
--- nova/scheduler/chance.py 2011-03-31 19:29:16 +0000
+++ nova/scheduler/chance.py 2011-09-23 07:08:19 +0000
@@ -29,12 +29,33 @@
 class ChanceScheduler(driver.Scheduler):
     """Implements Scheduler as a random node selector."""

-    def schedule(self, context, topic, *_args, **_kwargs):
+    def _schedule(self, context, topic, **kwargs):
         """Picks a host that is up at random."""

-        hosts = self.hosts_up(context, topic)
+        elevated = context.elevated()
+        hosts = self.hosts_up(elevated, topic)
         if not hosts:
             raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                        " for this request. Is the appropriate"
                                        " service running?"))
         return hosts[int(random.random() * len(hosts))]
+
+    def schedule(self, context, topic, method, *_args, **kwargs):
+        """Picks a host that is up at random."""
+
+        host = self._schedule(context, topic, **kwargs)
+        driver.cast_to_host(context, topic, host, method, **kwargs)
+
+    def schedule_run_instance(self, context, request_spec, *_args, **kwargs):
+        """Create and run an instance or instances"""
+        elevated = context.elevated()
+        num_instances = request_spec.get('num_instances', 1)
+        instances = []
+        for num in xrange(num_instances):
+            host = self._schedule(context, 'compute', **kwargs)
+            instance = self.create_instance_db_entry(elevated, request_spec)
+            driver.cast_to_compute_host(context, host,
+                    'run_instance', instance_id=instance['id'], **kwargs)
+            instances.append(driver.encode_instance(instance))
+
+        return instances

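The random pick that both `schedule` and `schedule_run_instance` now share via `_schedule` is just an index drawn uniformly from the host list, raising when no service is up. A self-contained sketch of that selection:

```python
import random

def pick_host(hosts):
    """Random host selection as in ChanceScheduler._schedule."""
    if not hosts:
        raise RuntimeError('Scheduler was unable to locate a host '
                           'for this request.')
    # random.random() is in [0, 1), so the index is always in range.
    return hosts[int(random.random() * len(hosts))]

hosts = ['host1', 'host2', 'host3']
choice = pick_host(hosts)
```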
=== modified file 'nova/scheduler/driver.py'
--- nova/scheduler/driver.py 2011-08-22 21:17:39 +0000
+++ nova/scheduler/driver.py 2011-09-23 07:08:19 +0000
@@ -29,17 +29,94 @@
 from nova import log as logging
 from nova import rpc
 from nova import utils
+from nova.compute import api as compute_api
 from nova.compute import power_state
 from nova.compute import vm_states
 from nova.api.ec2 import ec2utils


 FLAGS = flags.FLAGS
+LOG = logging.getLogger('nova.scheduler.driver')
 flags.DEFINE_integer('service_down_time', 60,
                      'maximum time since last checkin for up service')
 flags.DECLARE('instances_path', 'nova.compute.manager')


+def cast_to_volume_host(context, host, method, update_db=True, **kwargs):
+    """Cast request to a volume host queue"""
+
+    if update_db:
+        volume_id = kwargs.get('volume_id', None)
+        if volume_id is not None:
+            now = utils.utcnow()
+            db.volume_update(context, volume_id,
+                    {'host': host, 'scheduled_at': now})
+    rpc.cast(context,
+            db.queue_get_for(context, 'volume', host),
+            {"method": method, "args": kwargs})
+    LOG.debug(_("Casted '%(method)s' to volume '%(host)s'") % locals())
+
+
+def cast_to_compute_host(context, host, method, update_db=True, **kwargs):
+    """Cast request to a compute host queue"""
+
+    if update_db:
+        instance_id = kwargs.get('instance_id', None)
+        if instance_id is not None:
+            now = utils.utcnow()
+            db.instance_update(context, instance_id,
+                    {'host': host, 'scheduled_at': now})
+    rpc.cast(context,
+            db.queue_get_for(context, 'compute', host),
+            {"method": method, "args": kwargs})
+    LOG.debug(_("Casted '%(method)s' to compute '%(host)s'") % locals())
+
+
+def cast_to_network_host(context, host, method, update_db=False, **kwargs):
+    """Cast request to a network host queue"""
+
+    rpc.cast(context,
+            db.queue_get_for(context, 'network', host),
+            {"method": method, "args": kwargs})
+    LOG.debug(_("Casted '%(method)s' to network '%(host)s'") % locals())
+
+
+def cast_to_host(context, topic, host, method, update_db=True, **kwargs):
+    """Generic cast to host"""
+
+    topic_mapping = {
+            "compute": cast_to_compute_host,
+            "volume": cast_to_volume_host,
+            'network': cast_to_network_host}
+
+    func = topic_mapping.get(topic)
+    if func:
+        func(context, host, method, update_db=update_db, **kwargs)
+    else:
+        rpc.cast(context,
+                db.queue_get_for(context, topic, host),
+                {"method": method, "args": kwargs})
+        LOG.debug(_("Casted '%(method)s' to %(topic)s '%(host)s'")
+                % locals())
+
+
+def encode_instance(instance, local=True):
+    """Encode locally created instance for return via RPC"""
+    # TODO(comstud): I would love to be able to return the full
+    # instance information here, but we'll need some modifications
+    # to the RPC code to handle datetime conversions with the
+    # json encoding/decoding.  We should be able to set a default
+    # json handler somehow to do it.
+    #
+    # For now, I'll just return the instance ID and let the caller
+    # do a DB lookup :-/
+    if local:
+        return dict(id=instance['id'], _is_precooked=False)
+    else:
+        instance['_is_precooked'] = True
+        return instance
+
+
 class NoValidHost(exception.Error):
     """There is no valid host for the command."""
     pass
@@ -55,6 +132,7 @@

     def __init__(self):
         self.zone_manager = None
+        self.compute_api = compute_api.API()

     def set_zone_manager(self, zone_manager):
         """Called by the Scheduler Service to supply a ZoneManager."""
@@ -76,7 +154,20 @@
                 for service in services
                 if self.service_is_up(service)]

-    def schedule(self, context, topic, *_args, **_kwargs):
+    def create_instance_db_entry(self, context, request_spec):
+        """Create instance DB entry based on request_spec"""
+        base_options = request_spec['instance_properties']
+        image = request_spec['image']
+        instance_type = request_spec.get('instance_type')
+        security_group = request_spec.get('security_group', 'default')
+        block_device_mapping = request_spec.get('block_device_mapping', [])
+
+        instance = self.compute_api.create_db_entry_for_new_instance(
+                context, instance_type, image, base_options,
+                security_group, block_device_mapping)
+        return instance
+
+    def schedule(self, context, topic, method, *_args, **_kwargs):
         """Must override at least this method for scheduler to work."""
         raise NotImplementedError(_("Must implement a fallback schedule"))

@@ -114,10 +205,12 @@
                           volume_ref['id'],
                           {'status': 'migrating'})

-        # Return value is necessary to send request to src
-        # Check _schedule() in detail.
         src = instance_ref['host']
-        return src
+        cast_to_compute_host(context, src, 'live_migration',
+                update_db=False,
+                instance_id=instance_id,
+                dest=dest,
+                block_migration=block_migration)

     def _live_migration_src_check(self, context, instance_ref):
         """Live migration check routine (for src host).
@@ -205,7 +298,7 @@
             if not block_migration:
                 src = instance_ref['host']
                 ipath = FLAGS.instances_path
-                logging.error(_("Cannot confirm tmpfile at %(ipath)s is on "
+                LOG.error(_("Cannot confirm tmpfile at %(ipath)s is on "
                                 "same shared storage between %(src)s "
                                 "and %(dest)s.") % locals())
                 raise
@@ -243,7 +336,7 @@

         except rpc.RemoteError:
             src = instance_ref['host']
-            logging.exception(_("host %(dest)s is not compatible with "
+            LOG.exception(_("host %(dest)s is not compatible with "
                                 "original host %(src)s.") % locals())
             raise

@@ -354,6 +447,8 @@
         dst_t = db.queue_get_for(context, FLAGS.compute_topic, dest)
         src_t = db.queue_get_for(context, FLAGS.compute_topic, src)

+        filename = None
+
         try:
             # create tmpfile at dest host
             filename = rpc.call(context, dst_t,
@@ -370,6 +465,8 @@
             raise

         finally:
-            rpc.call(context, dst_t,
-                     {"method": 'cleanup_shared_storage_test_file',
-                      "args": {'filename': filename}})
+            # Should only be None for tests?
+            if filename is not None:
+                rpc.call(context, dst_t,
+                         {"method": 'cleanup_shared_storage_test_file',
+                          "args": {'filename': filename}})

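The new `encode_instance` helper is the contract between the schedulers and `compute.api.create()`: local instances shrink to a bare id (the caller re-reads the row), while child-zone instances are flagged `_is_precooked` and passed through whole. Its behavior can be exercised in isolation:

```python
def encode_instance(instance, local=True):
    """Locally created instances are reduced to their id (the caller
    does a DB lookup); remote (child-zone) instances are returned
    fully populated and flagged as precooked."""
    if local:
        return dict(id=instance['id'], _is_precooked=False)
    instance['_is_precooked'] = True
    return instance

local_ref = encode_instance({'id': 7, 'host': 'compute1'})
remote_ref = encode_instance({'id': 8, 'host': 'child-api'}, local=False)
```

Note the asymmetry: the local branch builds a fresh two-key dict, while the remote branch mutates and returns the dict it was given.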
=== modified file 'nova/scheduler/least_cost.py'
--- nova/scheduler/least_cost.py 2011-08-15 22:09:39 +0000
+++ nova/scheduler/least_cost.py 2011-09-23 07:08:19 +0000
@@ -160,8 +160,7 @@

     weighted = []
     weight_log = []
-    for cost, (hostname, service) in zip(costs, hosts):
-        caps = service[topic]
+    for cost, (hostname, caps) in zip(costs, hosts):
         weight_log.append("%s: %s" % (hostname, "%.2f" % cost))
         weight_dict = dict(weight=cost, hostname=hostname,
                            capabilities=caps)

=== modified file 'nova/scheduler/manager.py'
--- nova/scheduler/manager.py 2011-09-02 16:00:03 +0000
+++ nova/scheduler/manager.py 2011-09-23 07:08:19 +0000
@@ -81,37 +81,23 @@
         """Select a list of hosts best matching the provided specs."""
         return self.driver.select(context, *args, **kwargs)

-    def get_scheduler_rules(self, context=None, *args, **kwargs):
-        """Ask the driver how requests should be made of it."""
-        return self.driver.get_scheduler_rules(context, *args, **kwargs)
-
     def _schedule(self, method, context, topic, *args, **kwargs):
         """Tries to call schedule_* method on the driver to retrieve host.

         Falls back to schedule(context, topic) if method doesn't exist.
         """
         driver_method = 'schedule_%s' % method
-        elevated = context.elevated()
         try:
             real_meth = getattr(self.driver, driver_method)
-            args = (elevated,) + args
+            args = (context,) + args
         except AttributeError, e:
             LOG.warning(_("Driver Method %(driver_method)s missing: %(e)s."
                     "Reverting to schedule()") % locals())
             real_meth = self.driver.schedule
-            args = (elevated, topic) + args
-        host = real_meth(*args, **kwargs)
-
-        if not host:
-            LOG.debug(_("%(topic)s %(method)s handled in Scheduler")
-                    % locals())
-            return
-
-        rpc.cast(context,
-                 db.queue_get_for(context, topic, host),
-                 {"method": method,
-                  "args": kwargs})
-        LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals())
+            args = (context, topic, method) + args
+
+        # Scheduler methods are responsible for casting.
+        return real_meth(*args, **kwargs)

     # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
     #                    Based on bexar design summit discussion,

=== modified file 'nova/scheduler/multi.py'
--- nova/scheduler/multi.py 2011-08-11 23:26:26 +0000
+++ nova/scheduler/multi.py 2011-09-23 07:08:19 +0000
@@ -38,7 +38,8 @@
 # A mapping of methods to topics so we can figure out which driver to use.
 _METHOD_MAP = {'run_instance': 'compute',
                'start_instance': 'compute',
-               'create_volume': 'volume'}
+               'create_volume': 'volume',
+               'create_volumes': 'volume'}


 class MultiScheduler(driver.Scheduler):
@@ -69,5 +70,6 @@
         for k, v in self.drivers.iteritems():
             v.set_zone_manager(zone_manager)

-    def schedule(self, context, topic, *_args, **_kwargs):
-        return self.drivers[topic].schedule(context, topic, *_args, **_kwargs)
+    def schedule(self, context, topic, method, *_args, **_kwargs):
+        return self.drivers[topic].schedule(context, topic,
+                method, *_args, **_kwargs)

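`MultiScheduler` routes each call through `_METHOD_MAP` to the right per-topic sub-scheduler; the new `create_volumes` entry lets the VSA scheduler's multi-volume method be dispatched too. A sketch of that dispatch with stub drivers (the `FakeDriver` class and `schedule` wrapper here are illustrative, not nova code):

```python
# Method->topic map as extended by this branch.
_METHOD_MAP = {'run_instance': 'compute',
               'start_instance': 'compute',
               'create_volume': 'volume',
               'create_volumes': 'volume'}

class FakeDriver(object):
    """Stand-in for a real per-topic scheduler driver."""
    def __init__(self, name):
        self.name = name

    def schedule(self, context, topic, method, *args, **kwargs):
        return (self.name, method)

drivers = {'compute': FakeDriver('compute'), 'volume': FakeDriver('volume')}

def schedule(context, method, *args, **kwargs):
    # Look up the owning topic, then delegate with the new
    # (context, topic, method, ...) signature.
    topic = _METHOD_MAP[method]
    return drivers[topic].schedule(context, topic, method, *args, **kwargs)

result = schedule(None, 'create_volumes')
```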
=== modified file 'nova/scheduler/simple.py'
--- nova/scheduler/simple.py 2011-08-19 15:44:14 +0000
+++ nova/scheduler/simple.py 2011-09-23 07:08:19 +0000
@@ -39,47 +39,50 @@
 class SimpleScheduler(chance.ChanceScheduler):
     """Implements Naive Scheduler that tries to find least loaded host."""

-    def _schedule_instance(self, context, instance_id, *_args, **_kwargs):
+    def _schedule_instance(self, context, instance_opts, *_args, **_kwargs):
         """Picks a host that is up and has the fewest running instances."""
-        instance_ref = db.instance_get(context, instance_id)
-        if (instance_ref['availability_zone']
-            and ':' in instance_ref['availability_zone']
-            and context.is_admin):
-            zone, _x, host = instance_ref['availability_zone'].partition(':')
+
+        availability_zone = instance_opts.get('availability_zone')
+
+        if availability_zone and context.is_admin and \
+                        (':' in availability_zone):
+            zone, host = availability_zone.split(':', 1)
             service = db.service_get_by_args(context.elevated(), host,
                                              'nova-compute')
             if not self.service_is_up(service):
                 raise driver.WillNotSchedule(_("Host %s is not alive") % host)
+            return host

-            # TODO(vish): this probably belongs in the manager, if we
-            #             can generalize this somehow
-            now = utils.utcnow()
-            db.instance_update(context, instance_id, {'host': host,
-                                                      'scheduled_at': now})
-            return host
         results = db.service_get_all_compute_sorted(context)
         for result in results:
             (service, instance_cores) = result
-            if instance_cores + instance_ref['vcpus'] > FLAGS.max_cores:
+            if instance_cores + instance_opts['vcpus'] > FLAGS.max_cores:
                 raise driver.NoValidHost(_("All hosts have too many cores"))
             if self.service_is_up(service):
-                # NOTE(vish): this probably belongs in the manager, if we
-                #             can generalize this somehow
-                now = utils.utcnow()
-                db.instance_update(context,
-                                   instance_id,
-                                   {'host': service['host'],
-                                    'scheduled_at': now})
                 return service['host']
         raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                    " for this request. Is the appropriate"
                                    " service running?"))

-    def schedule_run_instance(self, context, instance_id, *_args, **_kwargs):
-        return self._schedule_instance(context, instance_id, *_args, **_kwargs)
+    def schedule_run_instance(self, context, request_spec, *_args, **_kwargs):
+        num_instances = request_spec.get('num_instances', 1)
+        instances = []
+        for num in xrange(num_instances):
+            host = self._schedule_instance(context,
+                    request_spec['instance_properties'], *_args, **_kwargs)
+            instance_ref = self.create_instance_db_entry(context,
+                    request_spec)
+            driver.cast_to_compute_host(context, host, 'run_instance',
+                    instance_id=instance_ref['id'], **_kwargs)
+            instances.append(driver.encode_instance(instance_ref))
+        return instances

     def schedule_start_instance(self, context, instance_id, *_args, **_kwargs):
-        return self._schedule_instance(context, instance_id, *_args, **_kwargs)
+        instance_ref = db.instance_get(context, instance_id)
+        host = self._schedule_instance(context, instance_ref,
+                *_args, **_kwargs)
+        driver.cast_to_compute_host(context, host, 'start_instance',
+                instance_id=instance_id, **_kwargs)

     def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
         """Picks a host that is up and has the fewest volumes."""
@@ -92,13 +95,9 @@
                                              'nova-volume')
             if not self.service_is_up(service):
                 raise driver.WillNotSchedule(_("Host %s not available") % host)
-
-            # TODO(vish): this probably belongs in the manager, if we
-            #             can generalize this somehow
-            now = utils.utcnow()
-            db.volume_update(context, volume_id, {'host': host,
-                                                  'scheduled_at': now})
-            return host
+            driver.cast_to_volume_host(context, host, 'create_volume',
+                    volume_id=volume_id, **_kwargs)
+            return None
         results = db.service_get_all_volume_sorted(context)
         for result in results:
             (service, volume_gigabytes) = result
@@ -106,14 +105,9 @@
                 raise driver.NoValidHost(_("All hosts have too many "
                                            "gigabytes"))
             if self.service_is_up(service):
-                # NOTE(vish): this probably belongs in the manager, if we
-                #             can generalize this somehow
-                now = utils.utcnow()
-                db.volume_update(context,
-                                 volume_id,
-                                 {'host': service['host'],
-                                  'scheduled_at': now})
-                return service['host']
+                driver.cast_to_volume_host(context, service['host'],
+                        'create_volume', volume_id=volume_id, **_kwargs)
+                return None
         raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                    " for this request. Is the appropriate"
                                    " service running?"))
@@ -127,7 +121,9 @@
         if instance_count >= FLAGS.max_networks:
             raise driver.NoValidHost(_("All hosts have too many networks"))
         if self.service_is_up(service):
-            return service['host']
+            driver.cast_to_network_host(context, service['host'],
+                    'set_network_host', **_kwargs)
+            return None
         raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                    " for this request. Is the appropriate"
                                    " service running?"))

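`_schedule_instance` now works from the request spec's instance properties instead of a DB row, but keeps the admin-only `zone:host` escape hatch for pinning an instance to a specific compute host. The parsing rule, isolated (function name is illustrative):

```python
def forced_host(availability_zone, is_admin):
    """An admin can pin an instance with an availability zone of the
    form "zone:host"; non-admins fall through to the least-loaded
    search (signalled here by returning None)."""
    if availability_zone and is_admin and ':' in availability_zone:
        # Split on the first colon only, so host names containing
        # further colons survive intact.
        _zone, host = availability_zone.split(':', 1)
        return host
    return None

pinned = forced_host('nova:compute2', True)
unpinned = forced_host('nova:compute2', False)
```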
=== modified file 'nova/scheduler/vsa.py'
--- nova/scheduler/vsa.py 2011-08-26 18:14:44 +0000
+++ nova/scheduler/vsa.py 2011-09-23 07:08:19 +0000
@@ -195,8 +195,6 @@
             'display_description': vol['description'],
             'volume_type_id': vol['volume_type_id'],
             'metadata': dict(to_vsa_id=vsa_id),
-            'host': vol['host'],
-            'scheduled_at': now
             }

         size = vol['size']
@@ -205,12 +203,10 @@
         LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\
                     "host %(host)s"), locals())

-        volume_ref = db.volume_create(context, options)
-        rpc.cast(context,
-                 db.queue_get_for(context, "volume", vol['host']),
-                 {"method": "create_volume",
-                  "args": {"volume_id": volume_ref['id'],
-                           "snapshot_id": None}})
+        volume_ref = db.volume_create(context.elevated(), options)
+        driver.cast_to_volume_host(context, vol['host'],
+                'create_volume', volume_id=volume_ref['id'],
+                snapshot_id=None)

     def _check_host_enforcement(self, context, availability_zone):
         if (availability_zone
@@ -274,7 +270,6 @@
     def schedule_create_volumes(self, context, request_spec,
                                 availability_zone=None, *_args, **_kwargs):
         """Picks hosts for hosting multiple volumes."""
-
         num_volumes = request_spec.get('num_volumes')
         LOG.debug(_("Attempting to spawn %(num_volumes)d volume(s)") %
                   locals())
@@ -291,7 +286,8 @@

             for vol in volume_params:
                 self._provision_volume(context, vol, vsa_id, availability_zone)
-        except:
+        except Exception:
+            LOG.exception(_("Error creating volumes"))
             if vsa_id:
                 db.vsa_update(context, vsa_id, dict(status=VsaState.FAILED))

@@ -310,10 +306,9 @@
         host = self._check_host_enforcement(context,
                                             volume_ref['availability_zone'])
         if host:
-            now = utils.utcnow()
-            db.volume_update(context, volume_id, {'host': host,
-                                                  'scheduled_at': now})
-            return host
+            driver.cast_to_volume_host(context, host, 'create_volume',
+                    volume_id=volume_id, **_kwargs)
+            return None

         volume_type_id = volume_ref['volume_type_id']
         if volume_type_id:
@@ -344,18 +339,16 @@
344339
345 try:340 try:
346 (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)341 (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)
347 except:342 except Exception:
343 LOG.exception(_("Error creating volume"))
348 if volume_ref['to_vsa_id']:344 if volume_ref['to_vsa_id']:
349 db.vsa_update(context, volume_ref['to_vsa_id'],345 db.vsa_update(context, volume_ref['to_vsa_id'],
350 dict(status=VsaState.FAILED))346 dict(status=VsaState.FAILED))
351 raise347 raise
352348
353 if host:349 if host:
354 now = utils.utcnow()350 driver.cast_to_volume_host(context, host, 'create_volume',
355 db.volume_update(context, volume_id, {'host': host,351 volume_id=volume_id, **_kwargs)
356 'scheduled_at': now})
357 self._consume_resource(qos_cap, volume_ref['size'], -1)
358 return host
359352
360 def _consume_full_drive(self, qos_values, direction):353 def _consume_full_drive(self, qos_values, direction):
361 qos_values['FullDrive']['NumFreeDrives'] += direction354 qos_values['FullDrive']['NumFreeDrives'] += direction
362355
=== modified file 'nova/scheduler/zone.py'
--- nova/scheduler/zone.py 2011-03-31 19:29:16 +0000
+++ nova/scheduler/zone.py 2011-09-23 07:08:19 +0000
@@ -35,7 +35,7 @@
         for topic and availability zone (if defined).
         """
 
-        if zone is None:
+        if not zone:
             return self.hosts_up(context, topic)
 
         services = db.service_get_all_by_topic(context, topic)
@@ -44,16 +44,34 @@
                 if self.service_is_up(service)
                 and service.availability_zone == zone]
 
-    def schedule(self, context, topic, *_args, **_kwargs):
+    def _schedule(self, context, topic, request_spec, **kwargs):
         """Picks a host that is up at random in selected
         availability zone (if defined).
         """
 
-        zone = _kwargs.get('availability_zone')
-        hosts = self.hosts_up_with_zone(context, topic, zone)
+        zone = kwargs.get('availability_zone')
+        if not zone and request_spec:
+            zone = request_spec['instance_properties'].get(
+                    'availability_zone')
+        hosts = self.hosts_up_with_zone(context.elevated(), topic, zone)
         if not hosts:
             raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                        " for this request. Is the appropriate"
                                        " service running?"))
-
         return hosts[int(random.random() * len(hosts))]
+
+    def schedule(self, context, topic, method, *_args, **kwargs):
+        host = self._schedule(context, topic, None, **kwargs)
+        driver.cast_to_host(context, topic, host, method, **kwargs)
+
+    def schedule_run_instance(self, context, request_spec, *_args, **kwargs):
+        """Builds and starts instances on selected hosts"""
+        num_instances = request_spec.get('num_instances', 1)
+        instances = []
+        for num in xrange(num_instances):
+            host = self._schedule(context, 'compute', request_spec, **kwargs)
+            instance = self.create_instance_db_entry(context, request_spec)
+            driver.cast_to_compute_host(context, host,
+                    'run_instance', instance_id=instance['id'], **kwargs)
+            instances.append(driver.encode_instance(instance))
+        return instances
 
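The new `schedule_run_instance` above makes one scheduling decision, one DB entry, and one `run_instance` cast per requested instance. A self-contained sketch of that loop, with a `provision` callable standing in for `driver.cast_to_compute_host` and a plain dict standing in for `create_instance_db_entry` (both stand-ins are illustrative, not nova code):

```python
import random


def run_instances(request_spec, hosts, provision):
    # One scheduling decision and one cast per requested instance,
    # mirroring the loop added to ZoneScheduler.schedule_run_instance.
    num_instances = request_spec.get('num_instances', 1)
    instances = []
    for num in range(num_instances):
        host = hosts[int(random.random() * len(hosts))]  # random up host
        instance = {'id': num + 1}  # stands in for create_instance_db_entry
        provision(host, instance['id'])
        instances.append(instance)
    return instances


casts = []
result = run_instances({'num_instances': 2}, ['hostA'],
                       lambda host, instance_id: casts.append((host, instance_id)))
```

Note that the caller gets back the created instance records, not a host, which is what the API layer relies on when returning a `reservation_id`.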
=== modified file 'nova/tests/api/openstack/contrib/test_createserverext.py'
--- nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-21 20:59:40 +0000
+++ nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-23 07:08:19 +0000
@@ -27,6 +27,7 @@
 from nova import db
 from nova import exception
 from nova import flags
+from nova import rpc
 from nova import test
 import nova.api.openstack
 from nova.tests.api.openstack import fakes
@@ -115,13 +116,15 @@
             if 'user_data' in kwargs:
                 self.user_data = kwargs['user_data']
 
-            return [{'id': '1234', 'display_name': 'fakeinstance',
+            resv_id = None
+
+            return ([{'id': '1234', 'display_name': 'fakeinstance',
                      'uuid': FAKE_UUID,
                      'user_id': 'fake',
                      'project_id': 'fake',
                      'created_at': "",
                      'updated_at': "",
-                     'progress': 0}]
+                     'progress': 0}], resv_id)
 
         def set_admin_password(self, *args, **kwargs):
             pass
@@ -134,7 +137,7 @@
         compute_api = MockComputeAPI()
         self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
+            nova.api.openstack.servers.Controller,
             '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
         return compute_api
 
@@ -393,7 +396,8 @@
                                return_instance_add_security_group)
         body_dict = self._create_security_group_request_dict(security_groups)
         request = self._get_create_request_json(body_dict)
-        response = request.get_response(fakes.wsgi_app())
+        compute_api, response = \
+                self._run_create_instance_with_mock_compute_api(request)
         self.assertEquals(response.status_int, 202)
 
     def test_get_server_by_id_verify_security_groups_json(self):
 
=== modified file 'nova/tests/api/openstack/contrib/test_volumes.py'
--- nova/tests/api/openstack/contrib/test_volumes.py 2011-09-21 20:59:40 +0000
+++ nova/tests/api/openstack/contrib/test_volumes.py 2011-09-23 07:08:19 +0000
@@ -31,8 +31,12 @@
 
 
 def fake_compute_api_create(cls, context, instance_type, image_href, **kwargs):
+    global _block_device_mapping_seen
+    _block_device_mapping_seen = kwargs.get('block_device_mapping')
+
     inst_type = instance_types.get_instance_type_by_flavor_id(2)
-    return [{'id': 1,
+    resv_id = None
+    return ([{'id': 1,
              'display_name': 'test_server',
              'uuid': fake_gen_uuid(),
              'instance_type': dict(inst_type),
@@ -44,7 +48,7 @@
              'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0),
              'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0),
              'progress': 0
-             }]
+             }], resv_id)
 
 
 class BootFromVolumeTest(test.TestCase):
@@ -64,6 +68,8 @@
                 delete_on_termination=False,
                 )]
             ))
+        global _block_device_mapping_seen
+        _block_device_mapping_seen = None
         req = webob.Request.blank('/v1.1/fake/os-volumes_boot')
         req.method = 'POST'
         req.body = json.dumps(body)
@@ -76,3 +82,7 @@
         self.assertEqual(u'test_server', server['name'])
         self.assertEqual(3, int(server['image']['id']))
         self.assertEqual(FLAGS.password_length, len(server['adminPass']))
+        self.assertEqual(len(_block_device_mapping_seen), 1)
+        self.assertEqual(_block_device_mapping_seen[0]['volume_id'], 1)
+        self.assertEqual(_block_device_mapping_seen[0]['device_name'],
+                         '/dev/vda')
 
=== modified file 'nova/tests/api/openstack/test_extensions.py'
--- nova/tests/api/openstack/test_extensions.py 2011-09-21 15:54:30 +0000
+++ nova/tests/api/openstack/test_extensions.py 2011-09-23 07:08:19 +0000
@@ -102,6 +102,7 @@
             "VirtualInterfaces",
             "Volumes",
             "VolumeTypes",
+            "Zones",
             ]
         self.ext_list.sort()
 
 
=== modified file 'nova/tests/api/openstack/test_server_actions.py'
--- nova/tests/api/openstack/test_server_actions.py 2011-09-16 19:45:46 +0000
+++ nova/tests/api/openstack/test_server_actions.py 2011-09-23 07:08:19 +0000
@@ -9,7 +9,7 @@
 from nova import utils
 from nova import exception
 from nova import flags
-from nova.api.openstack import create_instance_helper
+from nova.api.openstack import servers
 from nova.compute import vm_states
 from nova.compute import instance_types
 import nova.db.api
@@ -970,7 +970,7 @@
 class TestServerActionXMLDeserializerV11(test.TestCase):
 
     def setUp(self):
-        self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
+        self.deserializer = servers.ServerXMLDeserializerV11()
 
     def tearDown(self):
         pass
 
=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py 2011-09-22 15:41:34 +0000
+++ nova/tests/api/openstack/test_servers.py 2011-09-23 07:08:19 +0000
@@ -33,7 +33,6 @@
 from nova import test
 from nova import utils
 import nova.api.openstack
-from nova.api.openstack import create_instance_helper
 from nova.api.openstack import servers
 from nova.api.openstack import wsgi
 from nova.api.openstack import xmlutil
@@ -1576,10 +1575,15 @@
 
     def _setup_for_create_instance(self):
         """Shared implementation for tests below that create instance"""
+
+        self.instance_cache_num = 0
+        self.instance_cache = {}
+
         def instance_create(context, inst):
             inst_type = instance_types.get_instance_type_by_flavor_id(3)
             image_ref = 'http://localhost/images/2'
-            return {'id': 1,
+            self.instance_cache_num += 1
+            instance = {'id': self.instance_cache_num,
                     'display_name': 'server_test',
                     'uuid': FAKE_UUID,
                     'instance_type': dict(inst_type),
@@ -1588,11 +1592,32 @@
                     'image_ref': image_ref,
                     'user_id': 'fake',
                     'project_id': 'fake',
+                    'reservation_id': inst['reservation_id'],
                     "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
                     "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
                     "config_drive": self.config_drive,
                     "progress": 0
                     }
+            self.instance_cache[instance['id']] = instance
+            return instance
+
+        def instance_get(context, instance_id):
+            """Stub for compute/api create() pulling in instance after
+            scheduling
+            """
+            return self.instance_cache[instance_id]
+
+        def rpc_call_wrapper(context, topic, msg):
+            """Stub out the scheduler creating the instance entry"""
+            if topic == FLAGS.scheduler_topic and \
+                    msg['method'] == 'run_instance':
+                request_spec = msg['args']['request_spec']
+                num_instances = request_spec.get('num_instances', 1)
+                instances = []
+                for x in xrange(num_instances):
+                    instances.append(instance_create(context,
+                        request_spec['instance_properties']))
+                return instances
 
         def server_update(context, id, params):
             return instance_create(context, id)
@@ -1615,18 +1640,20 @@
         self.stubs.Set(nova.db.api, 'project_get_networks',
                        project_get_networks)
         self.stubs.Set(nova.db.api, 'instance_create', instance_create)
+        self.stubs.Set(nova.db.api, 'instance_get', instance_get)
         self.stubs.Set(nova.rpc, 'cast', fake_method)
-        self.stubs.Set(nova.rpc, 'call', fake_method)
+        self.stubs.Set(nova.rpc, 'call', rpc_call_wrapper)
         self.stubs.Set(nova.db.api, 'instance_update', server_update)
         self.stubs.Set(nova.db.api, 'queue_get_for', queue_get_for)
         self.stubs.Set(nova.network.manager.VlanManager, 'allocate_fixed_ip',
                        fake_method)
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
-            "_get_kernel_ramdisk_from_image", kernel_ramdisk_mapping)
+            servers.Controller,
+            "_get_kernel_ramdisk_from_image",
+            kernel_ramdisk_mapping)
         self.stubs.Set(nova.compute.api.API, "_find_host", find_host)
 
-    def _test_create_instance_helper(self):
+    def _test_create_instance(self):
         self._setup_for_create_instance()
 
         body = dict(server=dict(
@@ -1650,7 +1677,7 @@
         self.assertEqual(FAKE_UUID, server['uuid'])
 
     def test_create_instance(self):
-        self._test_create_instance_helper()
+        self._test_create_instance()
 
     def test_create_instance_has_uuid(self):
         """Tests at the db-layer instead of API layer since that's where the
@@ -1662,51 +1689,134 @@
         expected = FAKE_UUID
         self.assertEqual(instance['uuid'], expected)
 
-    def test_create_instance_via_zones(self):
-        """Server generated ReservationID"""
-        self._setup_for_create_instance()
-        self.flags(allow_admin_api=True)
-
-        body = dict(server=dict(
-            name='server_test', imageId=3, flavorId=2,
-            metadata={'hello': 'world', 'open': 'stack'},
-            personality={}))
-        req = webob.Request.blank('/v1.0/zones/boot')
-        req.method = 'POST'
-        req.body = json.dumps(body)
-        req.headers["content-type"] = "application/json"
-
-        res = req.get_response(fakes.wsgi_app())
-
-        reservation_id = json.loads(res.body)['reservation_id']
-        self.assertEqual(res.status_int, 200)
+    def test_create_multiple_instances(self):
+        """Test creating multiple instances but not asking for
+        reservation_id
+        """
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'min_count': 2,
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': []
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        res = req.get_response(fakes.wsgi_app())
+        self.assertEqual(res.status_int, 202)
+        body = json.loads(res.body)
+        self.assertIn('server', body)
+
+    def test_create_multiple_instances_resv_id_return(self):
+        """Test creating multiple instances with asking for
+        reservation_id
+        """
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'min_count': 2,
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        res = req.get_response(fakes.wsgi_app())
+        self.assertEqual(res.status_int, 202)
+        body = json.loads(res.body)
+        reservation_id = body.get('reservation_id')
         self.assertNotEqual(reservation_id, "")
         self.assertNotEqual(reservation_id, None)
         self.assertTrue(len(reservation_id) > 1)
 
-    def test_create_instance_via_zones_with_resid(self):
-        """User supplied ReservationID"""
+    def test_create_instance_with_user_supplied_reservation_id(self):
+        """Non-admin supplied reservation_id should be ignored."""
         self._setup_for_create_instance()
-        self.flags(allow_admin_api=True)
-
-        body = dict(server=dict(
-            name='server_test', imageId=3, flavorId=2,
-            metadata={'hello': 'world', 'open': 'stack'},
-            personality={}, reservation_id='myresid'))
-        req = webob.Request.blank('/v1.0/zones/boot')
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'reservation_id': 'myresid',
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
         req.method = 'POST'
         req.body = json.dumps(body)
         req.headers["content-type"] = "application/json"
 
         res = req.get_response(fakes.wsgi_app())
-
+        self.assertEqual(res.status_int, 202)
+        res_body = json.loads(res.body)
+        self.assertIn('reservation_id', res_body)
+        self.assertNotEqual(res_body['reservation_id'], 'myresid')
+
+    def test_create_instance_with_admin_supplied_reservation_id(self):
+        """Admin supplied reservation_id should be honored."""
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'reservation_id': 'myresid',
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        context = nova.context.RequestContext('testuser', 'testproject',
+                                              is_admin=True)
+        res = req.get_response(fakes.wsgi_app(fake_auth_context=context))
+        self.assertEqual(res.status_int, 202)
         reservation_id = json.loads(res.body)['reservation_id']
-        self.assertEqual(res.status_int, 200)
         self.assertEqual(reservation_id, "myresid")
 
     def test_create_instance_no_key_pair(self):
         fakes.stub_out_key_pair_funcs(self.stubs, have_key_pair=False)
-        self._test_create_instance_helper()
+        self._test_create_instance()
 
     def test_create_instance_no_name(self):
         self._setup_for_create_instance()
@@ -2792,7 +2902,7 @@
 class TestServerCreateRequestXMLDeserializerV10(unittest.TestCase):
 
     def setUp(self):
-        self.deserializer = create_instance_helper.ServerXMLDeserializer()
+        self.deserializer = servers.ServerXMLDeserializer()
 
     def test_minimal_request(self):
         serial_request = """
@@ -3078,7 +3188,7 @@
 
     def setUp(self):
         super(TestServerCreateRequestXMLDeserializerV11, self).setUp()
-        self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
+        self.deserializer = servers.ServerXMLDeserializerV11()
 
     def test_minimal_request(self):
         serial_request = """
@@ -3552,10 +3662,12 @@
         else:
            self.injected_files = None
 
-        return [{'id': '1234', 'display_name': 'fakeinstance',
+        resv_id = None
+
+        return ([{'id': '1234', 'display_name': 'fakeinstance',
                  'user_id': 'fake',
                  'project_id': 'fake',
-                 'uuid': FAKE_UUID}]
+                 'uuid': FAKE_UUID}], resv_id)
 
     def set_admin_password(self, *args, **kwargs):
         pass
@@ -3568,8 +3680,9 @@
         compute_api = MockComputeAPI()
         self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
-            '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
+            servers.Controller,
+            '_get_kernel_ramdisk_from_image',
+            make_stub_method((1, 1)))
         return compute_api
 
     def _create_personality_request_dict(self, personality_files):
@@ -3830,8 +3943,8 @@
     @staticmethod
     def _get_k_r(image_meta):
         """Rebinding function to a shorter name for convenience"""
-        kernel_id, ramdisk_id = create_instance_helper.CreateInstanceHelper. \
+        kernel_id, ramdisk_id = servers.Controller.\
             _do_get_kernel_ramdisk_from_image(image_meta)
         return kernel_id, ramdisk_id
 
 
 
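A recurring mechanical change in the stubs above: `compute.API.create()` now returns an `(instances, reservation_id)` tuple instead of a bare instance list, so every fake returns, and every caller unpacks, two values. An illustrative sketch of the new shape (the `'r-xxxx'` value is a made-up placeholder, not nova's reservation-id format):

```python
def fake_create(**kwargs):
    # Mirrors the updated stub shape: a list of instance dicts plus a
    # reservation id, None unless the caller asked for one. 'r-xxxx' is
    # an illustrative placeholder value.
    instances = [{'id': '1234', 'display_name': 'fakeinstance'}]
    resv_id = 'r-xxxx' if kwargs.get('return_reservation_id') else None
    return (instances, resv_id)


instances, resv_id = fake_create()
```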
=== modified file 'nova/tests/integrated/api/client.py'
--- nova/tests/integrated/api/client.py 2011-08-17 14:55:27 +0000
+++ nova/tests/integrated/api/client.py 2011-09-23 07:08:19 +0000
@@ -16,6 +16,7 @@
 
 import json
 import httplib
+import urllib
 import urlparse
 
 from nova import log as logging
@@ -100,7 +101,7 @@
 
         relative_url = parsed_url.path
         if parsed_url.query:
-            relative_url = relative_url + parsed_url.query
+            relative_url = relative_url + "?" + parsed_url.query
         LOG.info(_("Doing %(method)s on %(relative_url)s") % locals())
         if body:
             LOG.info(_("Body: %s") % body)
@@ -205,12 +206,24 @@
     def get_server(self, server_id):
         return self.api_get('/servers/%s' % server_id)['server']
 
-    def get_servers(self, detail=True):
+    def get_servers(self, detail=True, search_opts=None):
         rel_url = '/servers/detail' if detail else '/servers'
+
+        if search_opts is not None:
+            qparams = {}
+            for opt, val in search_opts.iteritems():
+                qparams[opt] = val
+            if qparams:
+                query_string = "?%s" % urllib.urlencode(qparams)
+                rel_url += query_string
         return self.api_get(rel_url)['servers']
 
     def post_server(self, server):
-        return self.api_post('/servers', server)['server']
+        response = self.api_post('/servers', server)
+        if 'reservation_id' in response:
+            return response
+        else:
+            return response['server']
 
     def put_server(self, server_id, server):
         return self.api_put('/servers/%s' % server_id, server)
 
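Two small client fixes land above: `get_servers` builds a query string with `urllib.urlencode`, and the URL re-join gains the previously dropped `'?'` separator. A sketch of the corrected joining (the `servers_url` helper is a hypothetical illustration; the try/except import covers both Python 2, which this branch targets, and Python 3):

```python
try:
    from urllib import urlencode        # Python 2, as this branch uses
except ImportError:
    from urllib.parse import urlencode  # Python 3


def servers_url(detail=True, search_opts=None):
    # Hypothetical helper mirroring the client's get_servers() URL building.
    rel_url = '/servers/detail' if detail else '/servers'
    if search_opts:
        # The explicit '?' matters: the old request code re-joined a parsed
        # URL without it, gluing the query straight onto the path.
        rel_url += '?' + urlencode(search_opts)
    return rel_url


url = servers_url(search_opts={'reservation_id': 'r-abc123'})
```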
=== modified file 'nova/tests/integrated/test_servers.py'
--- nova/tests/integrated/test_servers.py 2011-09-08 13:53:31 +0000
+++ nova/tests/integrated/test_servers.py 2011-09-23 07:08:19 +0000
@@ -436,6 +436,42 @@
         # Cleanup
         self._delete_server(server_id)
 
+    def test_create_multiple_servers(self):
+        """Creates multiple servers and checks for reservation_id"""
+
+        # Create 2 servers, setting 'return_reservation_id, which should
+        # return a reservation_id
+        server = self._build_minimal_create_server_request()
+        server['min_count'] = 2
+        server['return_reservation_id'] = True
+        post = {'server': server}
+        response = self.api.post_server(post)
+        self.assertIn('reservation_id', response)
+        reservation_id = response['reservation_id']
+        self.assertNotIn(reservation_id, ['', None])
+
+        # Create 1 more server, which should not return a reservation_id
+        server = self._build_minimal_create_server_request()
+        post = {'server': server}
+        created_server = self.api.post_server(post)
+        self.assertTrue(created_server['id'])
+        created_server_id = created_server['id']
+
+        # lookup servers created by the first request.
+        servers = self.api.get_servers(detail=True,
+                search_opts={'reservation_id': reservation_id})
+        server_map = dict((server['id'], server) for server in servers)
+        found_server = server_map.get(created_server_id)
+        # The server from the 2nd request should not be there.
+        self.assertEqual(found_server, None)
+        # Should have found 2 servers.
+        self.assertEqual(len(server_map), 2)
+
+        # Cleanup
+        self._delete_server(created_server_id)
+        for server_id in server_map.iterkeys():
+            self._delete_server(server_id)
+
 
 if __name__ == "__main__":
     unittest.main()
 
=== modified file 'nova/tests/scheduler/test_abstract_scheduler.py'
--- nova/tests/scheduler/test_abstract_scheduler.py 2011-09-08 19:40:45 +0000
+++ nova/tests/scheduler/test_abstract_scheduler.py 2011-09-23 07:08:19 +0000
@@ -20,6 +20,7 @@
2020
21import nova.db21import nova.db
2222
23from nova import context
23from nova import exception24from nova import exception
24from nova import rpc25from nova import rpc
25from nova import test26from nova import test
@@ -102,7 +103,7 @@
102was_called = False103was_called = False
103104
104105
105def fake_provision_resource(context, item, instance_id, request_spec, kwargs):106def fake_provision_resource(context, item, request_spec, kwargs):
106 global was_called107 global was_called
107 was_called = True108 was_called = True
108109
@@ -118,8 +119,7 @@
118 was_called = True119 was_called = True
119120
120121
121def fake_provision_resource_from_blob(context, item, instance_id,122def fake_provision_resource_from_blob(context, item, request_spec, kwargs):
122 request_spec, kwargs):
123 global was_called123 global was_called
124 was_called = True124 was_called = True
125125
@@ -185,7 +185,7 @@
185 zm = FakeZoneManager()185 zm = FakeZoneManager()
186 sched.set_zone_manager(zm)186 sched.set_zone_manager(zm)
187187
188 fake_context = {}188 fake_context = context.RequestContext('user', 'project')
189 build_plan = sched.select(fake_context,189 build_plan = sched.select(fake_context,
190 {'instance_type': {'memory_mb': 512},190 {'instance_type': {'memory_mb': 512},
191 'num_instances': 4})191 'num_instances': 4})
@@ -229,9 +229,10 @@
         zm = FakeEmptyZoneManager()
         sched.set_zone_manager(zm)
 
-        fake_context = {}
+        fake_context = context.RequestContext('user', 'project')
+        request_spec = {}
         self.assertRaises(driver.NoValidHost, sched.schedule_run_instance,
-                fake_context, 1,
+                fake_context, request_spec,
                 dict(host_filter=None, instance_type={}))
 
     def test_schedule_do_not_schedule_with_hint(self):
@@ -250,8 +251,8 @@
             'blob': "Non-None blob data",
         }
 
-        result = sched.schedule_run_instance(None, 1, request_spec)
-        self.assertEquals(None, result)
+        instances = sched.schedule_run_instance(None, request_spec)
+        self.assertTrue(instances)
         self.assertTrue(was_called)
 
     def test_provision_resource_local(self):
@@ -263,7 +264,7 @@
                 fake_provision_resource_locally)
 
         request_spec = {'hostname': "foo"}
-        sched._provision_resource(None, request_spec, 1, request_spec, {})
+        sched._provision_resource(None, request_spec, request_spec, {})
         self.assertTrue(was_called)
 
     def test_provision_resource_remote(self):
@@ -275,7 +276,7 @@
                 fake_provision_resource_from_blob)
 
         request_spec = {}
-        sched._provision_resource(None, request_spec, 1, request_spec, {})
+        sched._provision_resource(None, request_spec, request_spec, {})
         self.assertTrue(was_called)
 
     def test_provision_resource_from_blob_empty(self):
@@ -285,7 +286,7 @@
         request_spec = {}
         self.assertRaises(abstract_scheduler.InvalidBlob,
                 sched._provision_resource_from_blob,
-                None, {}, 1, {}, {})
+                None, {}, {}, {})
 
     def test_provision_resource_from_blob_with_local_blob(self):
         """
@@ -303,20 +304,21 @@
             # return fake instances
             return {'id': 1, 'uuid': 'f874093c-7b17-49c0-89c3-22a5348497f9'}
 
-        def fake_rpc_cast(*args, **kwargs):
+        def fake_cast_to_compute_host(*args, **kwargs):
             pass
 
         self.stubs.Set(sched, '_decrypt_blob',
                 fake_decrypt_blob_returns_local_info)
+        self.stubs.Set(driver, 'cast_to_compute_host',
+                fake_cast_to_compute_host)
         self.stubs.Set(compute_api.API,
                 'create_db_entry_for_new_instance',
                 fake_create_db_entry_for_new_instance)
-        self.stubs.Set(rpc, 'cast', fake_rpc_cast)
 
         build_plan_item = {'blob': "Non-None blob data"}
         request_spec = {'image': {}, 'instance_properties': {}}
 
-        sched._provision_resource_from_blob(None, build_plan_item, 1,
+        sched._provision_resource_from_blob(None, build_plan_item,
                 request_spec, {})
         self.assertTrue(was_called)
 
@@ -335,7 +337,7 @@
 
         request_spec = {'blob': "Non-None blob data"}
 
-        sched._provision_resource_from_blob(None, request_spec, 1,
+        sched._provision_resource_from_blob(None, request_spec,
                 request_spec, {})
         self.assertTrue(was_called)
 
@@ -352,7 +354,7 @@
 
         request_spec = {'child_blob': True, 'child_zone': True}
 
-        sched._provision_resource_from_blob(None, request_spec, 1,
+        sched._provision_resource_from_blob(None, request_spec,
                 request_spec, {})
         self.assertTrue(was_called)
 
@@ -386,7 +388,7 @@
         zm.service_states = {}
         sched.set_zone_manager(zm)
 
-        fake_context = {}
+        fake_context = context.RequestContext('user', 'project')
         build_plan = sched.select(fake_context,
                 {'instance_type': {'memory_mb': 512},
                  'num_instances': 4})
@@ -394,6 +396,45 @@
         # 0 from local zones, 12 from remotes
         self.assertEqual(12, len(build_plan))
 
+    def test_run_instance_non_admin(self):
+        """Test creating an instance locally using run_instance, passing
+        a non-admin context. DB actions should work."""
+        sched = FakeAbstractScheduler()
+
+        def fake_cast_to_compute_host(*args, **kwargs):
+            pass
+
+        def fake_zone_get_all_zero(context):
+            # make sure this is called with admin context, even though
+            # we're using user context below
+            self.assertTrue(context.is_admin)
+            return []
+
+        self.stubs.Set(driver, 'cast_to_compute_host',
+                fake_cast_to_compute_host)
+        self.stubs.Set(sched, '_call_zone_method', fake_call_zone_method)
+        self.stubs.Set(nova.db, 'zone_get_all', fake_zone_get_all_zero)
+
+        zm = FakeZoneManager()
+        sched.set_zone_manager(zm)
+
+        fake_context = context.RequestContext('user', 'project')
+
+        request_spec = {
+            'image': {'properties': {}},
+            'security_group': [],
+            'instance_properties': {
+                'project_id': fake_context.project_id,
+                'user_id': fake_context.user_id},
+            'instance_type': {'memory_mb': 256},
+            'filter_driver': 'nova.scheduler.host_filter.AllHostsFilter'
+        }
+
+        instances = sched.schedule_run_instance(fake_context, request_spec)
+        self.assertEqual(len(instances), 1)
+        self.assertFalse(instances[0].get('_is_precooked', False))
+        nova.db.instance_destroy(fake_context, instances[0]['id'])
+
 
 class BaseSchedulerTestCase(test.TestCase):
     """Test case for Base Scheduler."""
 
=== modified file 'nova/tests/scheduler/test_least_cost_scheduler.py'
--- nova/tests/scheduler/test_least_cost_scheduler.py 2011-08-15 22:31:24 +0000
+++ nova/tests/scheduler/test_least_cost_scheduler.py 2011-09-23 07:08:19 +0000
@@ -134,7 +134,7 @@
 
         expected = []
         for idx, (hostname, services) in enumerate(hosts):
-            caps = copy.deepcopy(services["compute"])
+            caps = copy.deepcopy(services)
             # Costs are normalized so over 10 hosts, each host with increasing
             # free ram will cost 1/N more. Since the lowest cost host has some
             # free ram, we add in the 1/N for the base_cost
 
=== modified file 'nova/tests/scheduler/test_scheduler.py'
--- nova/tests/scheduler/test_scheduler.py 2011-09-15 20:42:30 +0000
+++ nova/tests/scheduler/test_scheduler.py 2011-09-23 07:08:19 +0000
@@ -35,10 +35,13 @@
 from nova import test
 from nova import rpc
 from nova import utils
+from nova.db.sqlalchemy import models
 from nova.scheduler import api
 from nova.scheduler import driver
 from nova.scheduler import manager
 from nova.scheduler import multi
+from nova.scheduler.simple import SimpleScheduler
+from nova.scheduler.zone import ZoneScheduler
 from nova.compute import power_state
 from nova.compute import vm_states
 
@@ -53,17 +56,86 @@
 FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
 
 
-class FakeContext(object):
-    auth_token = None
+def _create_instance_dict(**kwargs):
+    """Create a dictionary for a test instance"""
+    inst = {}
+    # NOTE(jk0): If an integer is passed as the image_ref, the image
+    # service will use the default image service (in this case, the fake).
+    inst['image_ref'] = '1'
+    inst['reservation_id'] = 'r-fakeres'
+    inst['user_id'] = kwargs.get('user_id', 'admin')
+    inst['project_id'] = kwargs.get('project_id', 'fake')
+    inst['instance_type_id'] = '1'
+    if 'host' in kwargs:
+        inst['host'] = kwargs.get('host')
+    inst['vcpus'] = kwargs.get('vcpus', 1)
+    inst['memory_mb'] = kwargs.get('memory_mb', 20)
+    inst['local_gb'] = kwargs.get('local_gb', 30)
+    inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
+    inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
+    inst['task_state'] = kwargs.get('task_state', None)
+    inst['availability_zone'] = kwargs.get('availability_zone', None)
+    inst['ami_launch_index'] = 0
+    inst['launched_on'] = kwargs.get('launched_on', 'dummy')
+    return inst
+
+
+def _create_volume():
+    """Create a test volume"""
+    vol = {}
+    vol['size'] = 1
+    vol['availability_zone'] = 'test'
+    ctxt = context.get_admin_context()
+    return db.volume_create(ctxt, vol)['id']
+
+
+def _create_instance(**kwargs):
+    """Create a test instance"""
+    ctxt = context.get_admin_context()
+    return db.instance_create(ctxt, _create_instance_dict(**kwargs))
+
+
+def _create_instance_from_spec(spec):
+    return _create_instance(**spec['instance_properties'])
+
+
+def _create_request_spec(**kwargs):
+    return dict(instance_properties=_create_instance_dict(**kwargs))
+
+
+def _fake_cast_to_compute_host(context, host, method, **kwargs):
+    global _picked_host
+    _picked_host = host
+
+
+def _fake_cast_to_volume_host(context, host, method, **kwargs):
+    global _picked_host
+    _picked_host = host
+
+
+def _fake_create_instance_db_entry(simple_self, context, request_spec):
+    instance = _create_instance_from_spec(request_spec)
+    global instance_ids
+    instance_ids.append(instance['id'])
+    return instance
+
+
+class FakeContext(context.RequestContext):
+    def __init__(self, *args, **kwargs):
+        super(FakeContext, self).__init__('user', 'project', **kwargs)
 
 
 class TestDriver(driver.Scheduler):
     """Scheduler Driver for Tests"""
-    def schedule(context, topic, *args, **kwargs):
-        return 'fallback_host'
+    def schedule(self, context, topic, method, *args, **kwargs):
+        host = 'fallback_host'
+        driver.cast_to_host(context, topic, host, method, **kwargs)
 
-    def schedule_named_method(context, topic, num):
-        return 'named_host'
+    def schedule_named_method(self, context, num=None):
+        topic = 'topic'
+        host = 'named_host'
+        method = 'named_method'
+        driver.cast_to_host(context, topic, host, method, num=num)
 
 
 class SchedulerTestCase(test.TestCase):
@@ -89,31 +161,16 @@
 
         return db.service_get(ctxt, s_ref['id'])
 
-    def _create_instance(self, **kwargs):
-        """Create a test instance"""
-        ctxt = context.get_admin_context()
-        inst = {}
-        inst['user_id'] = 'admin'
-        inst['project_id'] = kwargs.get('project_id', 'fake')
-        inst['host'] = kwargs.get('host', 'dummy')
-        inst['vcpus'] = kwargs.get('vcpus', 1)
-        inst['memory_mb'] = kwargs.get('memory_mb', 10)
-        inst['local_gb'] = kwargs.get('local_gb', 20)
-        inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
-        inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
-        inst['task_state'] = kwargs.get('task_state', None)
-        return db.instance_create(ctxt, inst)
-
     def test_fallback(self):
         scheduler = manager.SchedulerManager()
         self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
         ctxt = context.get_admin_context()
         rpc.cast(ctxt,
-                 'topic.fallback_host',
+                 'fake_topic.fallback_host',
                  {'method': 'noexist',
                   'args': {'num': 7}})
         self.mox.ReplayAll()
-        scheduler.noexist(ctxt, 'topic', num=7)
+        scheduler.noexist(ctxt, 'fake_topic', num=7)
 
     def test_named_method(self):
         scheduler = manager.SchedulerManager()
@@ -173,8 +230,8 @@
         scheduler = manager.SchedulerManager()
         ctxt = context.get_admin_context()
         s_ref = self._create_compute_service()
-        i_ref1 = self._create_instance(project_id='p-01', host=s_ref['host'])
-        i_ref2 = self._create_instance(project_id='p-02', vcpus=3,
+        i_ref1 = _create_instance(project_id='p-01', host=s_ref['host'])
+        i_ref2 = _create_instance(project_id='p-02', vcpus=3,
                                        host=s_ref['host'])
 
         result = scheduler.show_host_resources(ctxt, s_ref['host'])
@@ -197,7 +254,10 @@
     """Test case for zone scheduler"""
     def setUp(self):
         super(ZoneSchedulerTestCase, self).setUp()
-        self.flags(scheduler_driver='nova.scheduler.zone.ZoneScheduler')
+        self.flags(
+                scheduler_driver='nova.scheduler.multi.MultiScheduler',
+                compute_scheduler_driver='nova.scheduler.zone.ZoneScheduler',
+                volume_scheduler_driver='nova.scheduler.zone.ZoneScheduler')
 
     def _create_service_model(self, **kwargs):
         service = db.sqlalchemy.models.Service()
@@ -214,7 +274,7 @@
 
     def test_with_two_zones(self):
         scheduler = manager.SchedulerManager()
-        ctxt = context.get_admin_context()
+        ctxt = context.RequestContext('user', 'project')
         service_list = [self._create_service_model(id=1,
                                                    host='host1',
                                                    zone='zone1'),
@@ -230,66 +290,53 @@
                         self._create_service_model(id=5,
                                                    host='host5',
                                                    zone='zone2')]
+
+        request_spec = _create_request_spec(availability_zone='zone1')
+
+        fake_instance = _create_instance_dict(
+                **request_spec['instance_properties'])
+        fake_instance['id'] = 100
+        fake_instance['uuid'] = FAKE_UUID
+
         self.mox.StubOutWithMock(db, 'service_get_all_by_topic')
+        self.mox.StubOutWithMock(db, 'instance_update')
+        # Assumes we're testing with MultiScheduler
+        compute_sched_driver = scheduler.driver.drivers['compute']
+        self.mox.StubOutWithMock(compute_sched_driver,
+                'create_instance_db_entry')
+        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+
         arg = IgnoreArg()
         db.service_get_all_by_topic(arg, arg).AndReturn(service_list)
-        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
-        rpc.cast(ctxt,
+        compute_sched_driver.create_instance_db_entry(arg,
+                request_spec).AndReturn(fake_instance)
+        db.instance_update(arg, 100, {'host': 'host1', 'scheduled_at': arg})
+        rpc.cast(arg,
                 'compute.host1',
                 {'method': 'run_instance',
-                 'args': {'instance_id': 'i-ffffffff',
-                          'availability_zone': 'zone1'}})
+                 'args': {'instance_id': 100}})
         self.mox.ReplayAll()
         scheduler.run_instance(ctxt,
                                'compute',
-                               instance_id='i-ffffffff',
-                               availability_zone='zone1')
+                               request_spec=request_spec)
 
 
 class SimpleDriverTestCase(test.TestCase):
     """Test case for simple driver"""
     def setUp(self):
         super(SimpleDriverTestCase, self).setUp()
+        simple_scheduler = 'nova.scheduler.simple.SimpleScheduler'
         self.flags(connection_type='fake',
                    stub_network=True,
                    max_cores=4,
                    max_gigabytes=4,
                    network_manager='nova.network.manager.FlatManager',
                    volume_driver='nova.volume.driver.FakeISCSIDriver',
-                   scheduler_driver='nova.scheduler.simple.SimpleScheduler')
+                   scheduler_driver='nova.scheduler.multi.MultiScheduler',
+                   compute_scheduler_driver=simple_scheduler,
+                   volume_scheduler_driver=simple_scheduler)
         self.scheduler = manager.SchedulerManager()
         self.context = context.get_admin_context()
-        self.user_id = 'fake'
-        self.project_id = 'fake'
-
-    def _create_instance(self, **kwargs):
-        """Create a test instance"""
-        inst = {}
-        # NOTE(jk0): If an integer is passed as the image_ref, the image
-        # service will use the default image service (in this case, the fake).
-        inst['image_ref'] = '1'
-        inst['reservation_id'] = 'r-fakeres'
-        inst['user_id'] = self.user_id
-        inst['project_id'] = self.project_id
-        inst['instance_type_id'] = '1'
-        inst['vcpus'] = kwargs.get('vcpus', 1)
-        inst['ami_launch_index'] = 0
-        inst['availability_zone'] = kwargs.get('availability_zone', None)
-        inst['host'] = kwargs.get('host', 'dummy')
-        inst['memory_mb'] = kwargs.get('memory_mb', 20)
-        inst['local_gb'] = kwargs.get('local_gb', 30)
-        inst['launched_on'] = kwargs.get('launghed_on', 'dummy')
-        inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
-        inst['task_state'] = kwargs.get('task_state', None)
-        inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
-        return db.instance_create(self.context, inst)['id']
-
-    def _create_volume(self):
-        """Create a test volume"""
-        vol = {}
-        vol['size'] = 1
-        vol['availability_zone'] = 'test'
-        return db.volume_create(self.context, vol)['id']
 
     def _create_compute_service(self, **kwargs):
         """Create a compute service."""
@@ -369,14 +416,30 @@
                                    'compute',
                                    FLAGS.compute_manager)
         compute2.start()
-        instance_id1 = self._create_instance()
-        compute1.run_instance(self.context, instance_id1)
-        instance_id2 = self._create_instance()
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual(host, 'host2')
-        compute1.terminate_instance(self.context, instance_id1)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        instance_ids.append(_create_instance()['id'])
+        compute1.run_instance(self.context, instance_ids[0])
+
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec()
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+
+        self.assertEqual(_picked_host, 'host2')
+        self.assertEqual(len(instance_ids), 2)
+        self.assertEqual(len(instances), 1)
+        self.assertEqual(instances[0].get('_is_precooked', False), False)
+
+        compute1.terminate_instance(self.context, instance_ids[0])
+        compute2.terminate_instance(self.context, instance_ids[1])
         compute1.kill()
         compute2.kill()
 
@@ -392,14 +455,27 @@
                                    'compute',
                                    FLAGS.compute_manager)
         compute2.start()
-        instance_id1 = self._create_instance()
-        compute1.run_instance(self.context, instance_id1)
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        compute1.terminate_instance(self.context, instance_id1)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        instance_ids.append(_create_instance()['id'])
+        compute1.run_instance(self.context, instance_ids[0])
+
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec(availability_zone='nova:host1')
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+        self.assertEqual(_picked_host, 'host1')
+        self.assertEqual(len(instance_ids), 2)
+
+        compute1.terminate_instance(self.context, instance_ids[0])
+        compute1.terminate_instance(self.context, instance_ids[1])
         compute1.kill()
         compute2.kill()
 
@@ -414,12 +490,21 @@
         delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
         past = now - delta
         db.service_update(self.context, s1['id'], {'updated_at': past})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
+
+        global instance_ids
+        instance_ids = []
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec(availability_zone='nova:host1')
         self.assertRaises(driver.WillNotSchedule,
                           self.scheduler.driver.schedule_run_instance,
                           self.context,
-                          instance_id2)
-        db.instance_destroy(self.context, instance_id2)
+                          request_spec)
         compute1.kill()
 
     def test_will_schedule_on_disabled_host_if_specified_no_queue(self):
@@ -430,11 +515,22 @@
         compute1.start()
         s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
         db.service_update(self.context, s1['id'], {'disabled': True})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec(availability_zone='nova:host1')
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+        self.assertEqual(_picked_host, 'host1')
+        self.assertEqual(len(instance_ids), 1)
+        compute1.terminate_instance(self.context, instance_ids[0])
         compute1.kill()
 
     def test_too_many_cores_no_queue(self):
@@ -452,17 +548,17 @@
         instance_ids1 = []
         instance_ids2 = []
         for index in xrange(FLAGS.max_cores):
-            instance_id = self._create_instance()
+            instance_id = _create_instance()['id']
             compute1.run_instance(self.context, instance_id)
             instance_ids1.append(instance_id)
-            instance_id = self._create_instance()
+            instance_id = _create_instance()['id']
             compute2.run_instance(self.context, instance_id)
             instance_ids2.append(instance_id)
-        instance_id = self._create_instance()
+        request_spec = _create_request_spec()
         self.assertRaises(driver.NoValidHost,
                           self.scheduler.driver.schedule_run_instance,
                           self.context,
-                          instance_id)
+                          request_spec)
         for instance_id in instance_ids1:
             compute1.terminate_instance(self.context, instance_id)
         for instance_id in instance_ids2:
@@ -481,13 +577,19 @@
                                    'nova-volume',
                                    'volume',
                                    FLAGS.volume_manager)
+
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_volume_host', _fake_cast_to_volume_host)
+
         volume2.start()
-        volume_id1 = self._create_volume()
+        volume_id1 = _create_volume()
         volume1.create_volume(self.context, volume_id1)
-        volume_id2 = self._create_volume()
-        host = self.scheduler.driver.schedule_create_volume(self.context,
-                                                            volume_id2)
-        self.assertEqual(host, 'host2')
+        volume_id2 = _create_volume()
+        self.scheduler.driver.schedule_create_volume(self.context,
+                volume_id2)
+        self.assertEqual(_picked_host, 'host2')
         volume1.delete_volume(self.context, volume_id1)
         db.volume_destroy(self.context, volume_id2)
 
@@ -514,17 +616,30 @@
         compute2.kill()
 
     def test_least_busy_host_gets_instance(self):
-        """Ensures the host with less cores gets the next one"""
+        """Ensures the host with less cores gets the next one w/ Simple"""
         compute1 = self.start_service('compute', host='host1')
         compute2 = self.start_service('compute', host='host2')
-        instance_id1 = self._create_instance()
-        compute1.run_instance(self.context, instance_id1)
-        instance_id2 = self._create_instance()
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual(host, 'host2')
-        compute1.terminate_instance(self.context, instance_id1)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        instance_ids.append(_create_instance()['id'])
+        compute1.run_instance(self.context, instance_ids[0])
+
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec()
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+        self.assertEqual(_picked_host, 'host2')
+        self.assertEqual(len(instance_ids), 2)
+
+        compute1.terminate_instance(self.context, instance_ids[0])
+        compute2.terminate_instance(self.context, instance_ids[1])
         compute1.kill()
         compute2.kill()
 
@@ -532,41 +647,64 @@
         """Ensures if you set availability_zone it launches on that zone"""
         compute1 = self.start_service('compute', host='host1')
         compute2 = self.start_service('compute', host='host2')
-        instance_id1 = self._create_instance()
-        compute1.run_instance(self.context, instance_id1)
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        compute1.terminate_instance(self.context, instance_id1)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        instance_ids.append(_create_instance()['id'])
+        compute1.run_instance(self.context, instance_ids[0])
+
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec(availability_zone='nova:host1')
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+        self.assertEqual(_picked_host, 'host1')
+        self.assertEqual(len(instance_ids), 2)
+
+        compute1.terminate_instance(self.context, instance_ids[0])
+        compute1.terminate_instance(self.context, instance_ids[1])
         compute1.kill()
         compute2.kill()
 
-    def test_wont_sechedule_if_specified_host_is_down(self):
+    def test_wont_schedule_if_specified_host_is_down(self):
         compute1 = self.start_service('compute', host='host1')
         s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
         now = utils.utcnow()
         delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
         past = now - delta
         db.service_update(self.context, s1['id'], {'updated_at': past})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        request_spec = _create_request_spec(availability_zone='nova:host1')
         self.assertRaises(driver.WillNotSchedule,
                           self.scheduler.driver.schedule_run_instance,
                           self.context,
-                          instance_id2)
-        db.instance_destroy(self.context, instance_id2)
+                          request_spec)
         compute1.kill()
 
     def test_will_schedule_on_disabled_host_if_specified(self):
         compute1 = self.start_service('compute', host='host1')
         s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
         db.service_update(self.context, s1['id'], {'disabled': True})
-        instance_id2 = self._create_instance(availability_zone='nova:host1')
-        host = self.scheduler.driver.schedule_run_instance(self.context,
-                                                           instance_id2)
-        self.assertEqual('host1', host)
-        db.instance_destroy(self.context, instance_id2)
+
+        global instance_ids
+        instance_ids = []
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _fake_create_instance_db_entry)
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec(availability_zone='nova:host1')
+        instances = self.scheduler.driver.schedule_run_instance(
+                self.context, request_spec)
+        self.assertEqual(_picked_host, 'host1')
+        self.assertEqual(len(instance_ids), 1)
+        compute1.terminate_instance(self.context, instance_ids[0])
         compute1.kill()
 
     def test_too_many_cores(self):
@@ -576,18 +714,30 @@
         instance_ids1 = []
         instance_ids2 = []
         for index in xrange(FLAGS.max_cores):
-            instance_id = self._create_instance()
+            instance_id = _create_instance()['id']
             compute1.run_instance(self.context, instance_id)
             instance_ids1.append(instance_id)
-            instance_id = self._create_instance()
+            instance_id = _create_instance()['id']
             compute2.run_instance(self.context, instance_id)
             instance_ids2.append(instance_id)
-        instance_id = self._create_instance()
+
+        def _create_instance_db_entry(simple_self, context, request_spec):
+            self.fail(_("Shouldn't try to create DB entry when at "
+                        "max cores"))
+        self.stubs.Set(SimpleScheduler,
+                'create_instance_db_entry', _create_instance_db_entry)
+
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_compute_host', _fake_cast_to_compute_host)
+
+        request_spec = _create_request_spec()
+
         self.assertRaises(driver.NoValidHost,
                           self.scheduler.driver.schedule_run_instance,
                           self.context,
-                          instance_id)
-        db.instance_destroy(self.context, instance_id)
+                          request_spec)
         for instance_id in instance_ids1:
             compute1.terminate_instance(self.context, instance_id)
         for instance_id in instance_ids2:
@@ -599,12 +749,18 @@
         """Ensures the host with less gigabytes gets the next one"""
         volume1 = self.start_service('volume', host='host1')
         volume2 = self.start_service('volume', host='host2')
-        volume_id1 = self._create_volume()
+
+        global _picked_host
+        _picked_host = None
+        self.stubs.Set(driver,
+                'cast_to_volume_host', _fake_cast_to_volume_host)
+
+        volume_id1 = _create_volume()
         volume1.create_volume(self.context, volume_id1)
-        volume_id2 = self._create_volume()
-        host = self.scheduler.driver.schedule_create_volume(self.context,
-                                                            volume_id2)
-        self.assertEqual(host, 'host2')
+        volume_id2 = _create_volume()
+        self.scheduler.driver.schedule_create_volume(self.context,
+                volume_id2)
+        self.assertEqual(_picked_host, 'host2')
         volume1.delete_volume(self.context, volume_id1)
         db.volume_destroy(self.context, volume_id2)
         volume1.kill()
@@ -617,13 +773,13 @@
         volume_ids1 = []
         volume_ids2 = []
         for index in xrange(FLAGS.max_gigabytes):
-            volume_id = self._create_volume()
+            volume_id = _create_volume()
             volume1.create_volume(self.context, volume_id)
             volume_ids1.append(volume_id)
-            volume_id = self._create_volume()
+            volume_id = _create_volume()
             volume2.create_volume(self.context, volume_id)
             volume_ids2.append(volume_id)
-        volume_id = self._create_volume()
+        volume_id = _create_volume()
         self.assertRaises(driver.NoValidHost,
                           self.scheduler.driver.schedule_create_volume,
                           self.context,
@@ -636,13 +792,13 @@
         volume2.kill()
 
     def test_scheduler_live_migration_with_volume(self):
-        """scheduler_live_migration() works correctly as expected.
+        """schedule_live_migration() works correctly as expected.
 
         Also, checks instance state is changed from 'running' -> 'migrating'.
 
         """
 
-        instance_id = self._create_instance()
+        instance_id = _create_instance(host='dummy')['id']
         i_ref = db.instance_get(self.context, instance_id)
         dic = {'instance_id': instance_id, 'size': 1}
         v_ref = db.volume_create(self.context, dic)
@@ -680,7 +836,8 @@
     def test_live_migration_src_check_instance_not_running(self):
         """The instance given by instance_id is not running."""
 
-        instance_id = self._create_instance(power_state=power_state.NOSTATE)
+        instance_id = _create_instance(
+                power_state=power_state.NOSTATE)['id']
         i_ref = db.instance_get(self.context, instance_id)
 
         try:
@@ -695,7 +852,7 @@
     def test_live_migration_src_check_volume_node_not_alive(self):
         """Raise exception when volume node is not alive."""
 
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         dic = {'instance_id': instance_id, 'size': 1}
         v_ref = db.volume_create(self.context, {'instance_id': instance_id,
@@ -715,7 +872,7 @@
 
     def test_live_migration_src_check_compute_node_not_alive(self):
         """Confirms src-compute node is alive."""
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         t = utils.utcnow() - datetime.timedelta(10)
         s_ref = self._create_compute_service(created_at=t, updated_at=t,
@@ -730,7 +887,7 @@
 
     def test_live_migration_src_check_works_correctly(self):
         """Confirms this method finishes with no error."""
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         s_ref = self._create_compute_service(host=i_ref['host'])
 
@@ -743,7 +900,7 @@
 
     def test_live_migration_dest_check_not_alive(self):
         """Confirms exception raises in case dest host does not exist."""
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         t = utils.utcnow() - datetime.timedelta(10)
         s_ref = self._create_compute_service(created_at=t, updated_at=t,
@@ -758,7 +915,7 @@
 
     def test_live_migration_dest_check_service_same_host(self):
         """Confirms exception raises in case dest and src is same host."""
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         s_ref = self._create_compute_service(host=i_ref['host'])
 
@@ -771,9 +928,9 @@
 
     def test_live_migration_dest_check_service_lack_memory(self):
         """Confirms exception raises when dest doesn't have enough memory."""
-        instance_id = self._create_instance()
-        instance_id2 = self._create_instance(host='somewhere',
-                                             memory_mb=12)
+        instance_id = _create_instance()['id']
+        instance_id2 = _create_instance(host='somewhere',
+                                        memory_mb=12)['id']
         i_ref = db.instance_get(self.context, instance_id)
         s_ref = self._create_compute_service(host='somewhere')
 
@@ -787,9 +944,9 @@
 
     def test_block_migration_dest_check_service_lack_disk(self):
         """Confirms exception raises when dest doesn't have enough disk."""
-        instance_id = self._create_instance()
-        instance_id2 = self._create_instance(host='somewhere',
-                                             local_gb=70)
+        instance_id = _create_instance()['id']
+        instance_id2 = _create_instance(host='somewhere',
+                                        local_gb=70)['id']
         i_ref = db.instance_get(self.context, instance_id)
         s_ref = self._create_compute_service(host='somewhere')
 
@@ -803,7 +960,7 @@
 
     def test_live_migration_dest_check_service_works_correctly(self):
         """Confirms method finishes with no error."""
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         s_ref = self._create_compute_service(host='somewhere',
                                              memory_mb_used=5)
@@ -821,7 +978,7 @@
 
         dest = 'dummydest'
         # mocks for live_migration_common_check()
-        instance_id = self._create_instance()
+        instance_id = _create_instance()['id']
         i_ref = db.instance_get(self.context, instance_id)
         t1 = utils.utcnow() - datetime.timedelta(10)
         s_ref = self._create_compute_service(created_at=t1, updated_at=t1,
@@ -855,7 +1012,7 @@
     def test_live_migration_common_check_service_different_hypervisor(self):
         """Original host and dest host have different hypervisor types."""
         dest = 'dummydest'
-        instance_id = self._create_instance()
+        instance_id = _create_instance(host='dummy')['id']
         i_ref = db.instance_get(self.context, instance_id)
 
         # compute service for destination
@@ -880,7 +1037,7 @@
     def test_live_migration_common_check_service_different_version(self):
         """Original host and dest host have different hypervisor versions."""
         dest = 'dummydest'
-        instance_id = self._create_instance()
+        instance_id = _create_instance(host='dummy')['id']
         i_ref = db.instance_get(self.context, instance_id)
 
         # compute service for destination
@@ -904,10 +1061,10 @@
         db.service_destroy(self.context, s_ref2['id'])
 
     def test_live_migration_common_check_checking_cpuinfo_fail(self):
-        """Raise excetion when original host doen't have compatible cpu."""
+        """Raise exception when original host doesn't have compatible cpu."""
 
         dest = 'dummydest'
-        instance_id = self._create_instance()
+        instance_id = _create_instance(host='dummy')['id']
         i_ref = db.instance_get(self.context, instance_id)
 
         # compute service for destination
@@ -927,7 +1084,7 @@
 
         self.mox.ReplayAll()
         try:
-            self.scheduler.driver._live_migration_common_check(self.context,
-                                                               i_ref,
-                                                               dest,
-                                                               False)
+            driver._live_migration_common_check(self.context,
+                                                i_ref,
+                                                dest,
+                                                False)
@@ -1021,7 +1178,6 @@
 class ZoneRedirectTest(test.TestCase):
     def setUp(self):
         super(ZoneRedirectTest, self).setUp()
-        self.stubs = stubout.StubOutForTesting()
 
         self.stubs.Set(db, 'zone_get_all', zone_get_all)
         self.stubs.Set(db, 'instance_get_by_uuid',
@@ -1029,7 +1185,6 @@
         self.flags(enable_zone_routing=True)
 
     def tearDown(self):
-        self.stubs.UnsetAll()
         super(ZoneRedirectTest, self).tearDown()
 
     def test_trap_found_locally(self):
@@ -1257,12 +1412,10 @@
 class CallZoneMethodTest(test.TestCase):
     def setUp(self):
         super(CallZoneMethodTest, self).setUp()
-        self.stubs = stubout.StubOutForTesting()
         self.stubs.Set(db, 'zone_get_all', zone_get_all)
         self.stubs.Set(novaclient, 'Client', FakeNovaClientZones)
 
     def tearDown(self):
-        self.stubs.UnsetAll()
         super(CallZoneMethodTest, self).tearDown()
 
     def test_call_zone_method(self):
 
=== modified file 'nova/tests/scheduler/test_vsa_scheduler.py'
--- nova/tests/scheduler/test_vsa_scheduler.py 2011-08-26 02:09:50 +0000
+++ nova/tests/scheduler/test_vsa_scheduler.py 2011-09-23 07:08:19 +0000
@@ -22,6 +22,7 @@
 from nova import exception
 from nova import flags
 from nova import log as logging
+from nova import rpc
 from nova import test
 from nova import utils
 from nova.volume import volume_types
@@ -37,6 +38,10 @@
 global_volume = {}
 
 
+def fake_rpc_cast(*args, **kwargs):
+    pass
+
+
 class FakeVsaLeastUsedScheduler(
         vsa_sched.VsaSchedulerLeastUsedHost):
     # No need to stub anything at the moment
@@ -170,12 +175,10 @@
         LOG.debug(_("Test: provision vol %(name)s on host %(host)s"),
                   locals())
         LOG.debug(_("\t vol=%(vol)s"), locals())
-        pass
 
     def _fake_vsa_update(self, context, vsa_id, values):
         LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\
                     "values=%(values)s"), locals())
-        pass
 
     def _fake_volume_create(self, context, options):
         LOG.debug(_("Test: Volume create: %s"), options)
@@ -196,7 +199,6 @@
                     "values=%(values)s"), locals())
         global scheduled_volume
         scheduled_volume = {'id': volume_id, 'host': values['host']}
-        pass
 
     def _fake_service_get_by_args(self, context, host, binary):
         return "service"
@@ -209,7 +211,6 @@
 
     def setUp(self, sched_class=None):
         super(VsaSchedulerTestCase, self).setUp()
-        self.stubs = stubout.StubOutForTesting()
         self.context = context.get_admin_context()
 
         if sched_class is None:
@@ -220,6 +221,7 @@
         self.host_num = 10
         self.drive_type_num = 5
 
+        self.stubs.Set(rpc, 'cast', fake_rpc_cast)
         self.stubs.Set(self.sched,
                 '_get_service_states', self._fake_get_service_states)
         self.stubs.Set(self.sched,
@@ -234,8 +236,6 @@
     def tearDown(self):
         for name in self.created_types_lst:
             volume_types.purge(self.context, name)
-
-        self.stubs.UnsetAll()
         super(VsaSchedulerTestCase, self).tearDown()
 
     def test_vsa_sched_create_volumes_simple(self):
@@ -333,6 +333,8 @@
         self.stubs.Set(self.sched,
                 '_get_service_states', self._fake_get_service_states)
         self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create)
+        self.stubs.Set(nova.db, 'volume_update', self._fake_volume_update)
+        self.stubs.Set(rpc, 'cast', fake_rpc_cast)
 
         self.sched.schedule_create_volumes(self.context,
                                            request_spec,
@@ -467,10 +469,9 @@
         self.stubs.Set(self.sched,
                 'service_is_up', self._fake_service_is_up_True)
 
-        host = self.sched.schedule_create_volume(self.context,
-                                                 123, availability_zone=None)
+        self.sched.schedule_create_volume(self.context,
+                123, availability_zone=None)
 
-        self.assertEqual(host, 'host_3')
         self.assertEqual(scheduled_volume['id'], 123)
         self.assertEqual(scheduled_volume['host'], 'host_3')
 
@@ -514,10 +515,9 @@
         global_volume['volume_type_id'] = volume_type['id']
         global_volume['size'] = 0
 
-        host = self.sched.schedule_create_volume(self.context,
-                                                 123, availability_zone=None)
+        self.sched.schedule_create_volume(self.context,
+                123, availability_zone=None)
 
-        self.assertEqual(host, 'host_2')
         self.assertEqual(scheduled_volume['id'], 123)
         self.assertEqual(scheduled_volume['host'], 'host_2')
 
@@ -529,7 +529,6 @@
                 FakeVsaMostAvailCapacityScheduler())
 
     def tearDown(self):
-        self.stubs.UnsetAll()
         super(VsaSchedulerTestCaseMostAvail, self).tearDown()
 
     def test_vsa_sched_create_single_volume(self):
@@ -558,10 +557,9 @@
         global_volume['volume_type_id'] = volume_type['id']
         global_volume['size'] = 0
 
-        host = self.sched.schedule_create_volume(self.context,
-                                                 123, availability_zone=None)
+        self.sched.schedule_create_volume(self.context,
+                123, availability_zone=None)
 
-        self.assertEqual(host, 'host_9')
         self.assertEqual(scheduled_volume['id'], 123)
         self.assertEqual(scheduled_volume['host'], 'host_9')
 
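These test changes also drop the per-class `stubout.StubOutForTesting()` / `UnsetAll()` boilerplate in favor of the `self.stubs` instance that the base `test.TestCase` already creates and unwinds. The attribute-swapping such a stub helper performs can be modeled in a few lines (a simplified illustration of the pattern, not stubout's actual implementation):

```python
# Simplified model of stubout-style attribute stubbing: Set() swaps a
# module or object attribute and remembers the original value;
# UnsetAll() restores everything in reverse order.
class SimpleStubs(object):
    def __init__(self):
        self._saved = []

    def Set(self, obj, attr_name, new_value):
        # Save (target, name, original) so the swap can be undone later.
        self._saved.append((obj, attr_name, getattr(obj, attr_name)))
        setattr(obj, attr_name, new_value)

    def UnsetAll(self):
        # Undo in LIFO order so nested stubs of the same attribute unwind.
        while self._saved:
            obj, attr_name, old_value = self._saved.pop()
            setattr(obj, attr_name, old_value)
```

With the base class owning the instance and calling `UnsetAll()` in its own `tearDown`, individual tests only need `self.stubs.Set(...)`, which is exactly the cleanup this branch performs.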
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-09-21 20:59:40 +0000
+++ nova/tests/test_compute.py 2011-09-23 07:08:19 +0000
@@ -26,6 +26,7 @@
 from nova import exception
 from nova import flags
 from nova import log as logging
+from nova.scheduler import driver as scheduler_driver
 from nova import rpc
 from nova import test
 from nova import utils
@@ -73,10 +74,42 @@
         self.context = context.RequestContext(self.user_id, self.project_id)
         test_notifier.NOTIFICATIONS = []
 
+        orig_rpc_call = rpc.call
+        orig_rpc_cast = rpc.cast
+
+        def rpc_call_wrapper(context, topic, msg, do_cast=True):
+            """Stub out the scheduler creating the instance entry"""
+            if topic == FLAGS.scheduler_topic and \
+                    msg['method'] == 'run_instance':
+                request_spec = msg['args']['request_spec']
+                scheduler = scheduler_driver.Scheduler
+                num_instances = request_spec.get('num_instances', 1)
+                instances = []
+                for x in xrange(num_instances):
+                    instance = scheduler().create_instance_db_entry(
+                            context,
+                            request_spec)
+                    encoded = scheduler_driver.encode_instance(instance)
+                    instances.append(encoded)
+                return instances
+            else:
+                if do_cast:
+                    orig_rpc_cast(context, topic, msg)
+                else:
+                    return orig_rpc_call(context, topic, msg)
+
+        def rpc_cast_wrapper(context, topic, msg):
+            """Stub out the scheduler creating the instance entry in
+            the reservation_id case.
+            """
+            rpc_call_wrapper(context, topic, msg, do_cast=True)
+
         def fake_show(meh, context, id):
             return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
 
         self.stubs.Set(fake_image._FakeImageService, 'show', fake_show)
+        self.stubs.Set(rpc, 'call', rpc_call_wrapper)
+        self.stubs.Set(rpc, 'cast', rpc_cast_wrapper)
 
     def _create_instance(self, params=None):
         """Create a test instance"""
@@ -139,7 +172,7 @@
         """Verify that an instance cannot be created without a display_name."""
         cases = [dict(), dict(display_name=None)]
         for instance in cases:
-            ref = self.compute_api.create(self.context,
+            (ref, resv_id) = self.compute_api.create(self.context,
                 instance_types.get_default_instance_type(), None, **instance)
             try:
                 self.assertNotEqual(ref[0]['display_name'], None)
@@ -149,7 +182,7 @@
     def test_create_instance_associates_security_groups(self):
         """Make sure create associates security groups"""
         group = self._create_group()
-        ref = self.compute_api.create(
+        (ref, resv_id) = self.compute_api.create(
             self.context,
             instance_type=instance_types.get_default_instance_type(),
             image_href=None,
@@ -209,7 +242,7 @@
             ('<}\x1fh\x10e\x08l\x02l\x05o\x12!{>', 'hello'),
             ('hello_server', 'hello-server')]
         for display_name, hostname in cases:
-            ref = self.compute_api.create(self.context,
+            (ref, resv_id) = self.compute_api.create(self.context,
                 instance_types.get_default_instance_type(), None,
                 display_name=display_name)
             try:
@@ -221,7 +254,7 @@
         """Make sure destroying disassociates security groups"""
         group = self._create_group()
 
-        ref = self.compute_api.create(
+        (ref, resv_id) = self.compute_api.create(
             self.context,
             instance_type=instance_types.get_default_instance_type(),
             image_href=None,
@@ -237,7 +270,7 @@
         """Make sure destroying security groups disassociates instances"""
         group = self._create_group()
 
-        ref = self.compute_api.create(
+        (ref, resv_id) = self.compute_api.create(
             self.context,
             instance_type=instance_types.get_default_instance_type(),
             image_href=None,
@@ -1394,3 +1427,81 @@
         self.assertEqual(self.compute_api._volume_size(inst_type,
                                                        'swap'),
                          swap_size)
+
+    def test_reservation_id_one_instance(self):
+        """Verify building an instance has a reservation_id that
+        matches return value from create"""
+        (refs, resv_id) = self.compute_api.create(self.context,
+                instance_types.get_default_instance_type(), None)
+        try:
+            self.assertEqual(len(refs), 1)
+            self.assertEqual(refs[0]['reservation_id'], resv_id)
+        finally:
+            db.instance_destroy(self.context, refs[0]['id'])
+
+    def test_reservation_ids_two_instances(self):
+        """Verify building 2 instances at once results in a
+        reservation_id being returned equal to reservation id set
+        in both instances
+        """
+        (refs, resv_id) = self.compute_api.create(self.context,
+                instance_types.get_default_instance_type(), None,
+                min_count=2, max_count=2)
+        try:
+            self.assertEqual(len(refs), 2)
+            self.assertNotEqual(resv_id, None)
+        finally:
+            for instance in refs:
+                self.assertEqual(instance['reservation_id'], resv_id)
+                db.instance_destroy(self.context, instance['id'])
+
+    def test_reservation_ids_two_instances_no_wait(self):
+        """Verify building 2 instances at once without waiting for
+        instance IDs results in a reservation_id being returned equal
+        to reservation id set in both instances
+        """
+        (refs, resv_id) = self.compute_api.create(self.context,
+                instance_types.get_default_instance_type(), None,
+                min_count=2, max_count=2, wait_for_instances=False)
+        try:
+            self.assertEqual(refs, None)
+            self.assertNotEqual(resv_id, None)
+        finally:
+            instances = self.compute_api.get_all(self.context,
+                    search_opts={'reservation_id': resv_id})
+            self.assertEqual(len(instances), 2)
+            for instance in instances:
+                self.assertEqual(instance['reservation_id'], resv_id)
+                db.instance_destroy(self.context, instance['id'])
+
+    def test_create_with_specified_reservation_id(self):
+        """Verify building instances with a specified
+        reservation_id results in the correct reservation_id
+        being set
+        """
+
+        # We need admin context to be able to specify our own
+        # reservation_ids.
+        context = self.context.elevated()
+        # 1 instance
+        (refs, resv_id) = self.compute_api.create(context,
+                instance_types.get_default_instance_type(), None,
+                min_count=1, max_count=1, reservation_id='meow')
+        try:
+            self.assertEqual(len(refs), 1)
+            self.assertEqual(resv_id, 'meow')
+        finally:
+            self.assertEqual(refs[0]['reservation_id'], resv_id)
+            db.instance_destroy(self.context, refs[0]['id'])
+
+        # 2 instances
+        (refs, resv_id) = self.compute_api.create(context,
+                instance_types.get_default_instance_type(), None,
+                min_count=2, max_count=2, reservation_id='woof')
+        try:
+            self.assertEqual(len(refs), 2)
+            self.assertEqual(resv_id, 'woof')
+        finally:
+            for instance in refs:
+                self.assertEqual(instance['reservation_id'], resv_id)
+                db.instance_destroy(self.context, instance['id'])
 
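The `rpc_call_wrapper` added to `test_compute.py`'s setUp intercepts `run_instance` messages bound for the scheduler topic and creates the instance DB entries synchronously, so `compute_api.create()` gets its instance refs back without a real scheduler service running. The interception logic can be sketched in isolation; `make_rpc_call_wrapper`, its arguments, and the message shapes below are illustrative stand-ins, not Nova APIs:

```python
# Standalone sketch of the rpc.call interception pattern used in the
# test setUp above: 'run_instance' messages for the scheduler topic
# are handled locally; everything else falls through unchanged.
SCHEDULER_TOPIC = 'scheduler'  # stands in for FLAGS.scheduler_topic


def make_rpc_call_wrapper(orig_rpc_call, create_db_entry):
    def rpc_call_wrapper(context, topic, msg):
        if topic == SCHEDULER_TOPIC and msg['method'] == 'run_instance':
            request_spec = msg['args']['request_spec']
            num_instances = request_spec.get('num_instances', 1)
            # Create each instance record locally, as the scheduler would.
            return [create_db_entry(context, request_spec)
                    for _ in range(num_instances)]
        return orig_rpc_call(context, topic, msg)
    return rpc_call_wrapper
```

The same shape, minus the `num_instances` loop, is what the `test_quota.py` setUp below installs.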
=== modified file 'nova/tests/test_quota.py'
--- nova/tests/test_quota.py 2011-08-03 19:22:58 +0000
+++ nova/tests/test_quota.py 2011-09-23 07:08:19 +0000
@@ -21,9 +21,11 @@
 from nova import db
 from nova import flags
 from nova import quota
+from nova import rpc
 from nova import test
 from nova import volume
 from nova.compute import instance_types
+from nova.scheduler import driver as scheduler_driver
 
 
 FLAGS = flags.FLAGS
@@ -51,6 +53,21 @@
         self.context = context.RequestContext(self.user_id,
                                               self.project_id,
                                               True)
+        orig_rpc_call = rpc.call
+
+        def rpc_call_wrapper(context, topic, msg):
+            """Stub out the scheduler creating the instance entry"""
+            if topic == FLAGS.scheduler_topic and \
+                    msg['method'] == 'run_instance':
+                scheduler = scheduler_driver.Scheduler
+                instance = scheduler().create_instance_db_entry(
+                        context,
+                        msg['args']['request_spec'])
+                return [scheduler_driver.encode_instance(instance)]
+            else:
+                return orig_rpc_call(context, topic, msg)
+
+        self.stubs.Set(rpc, 'call', rpc_call_wrapper)
 
     def _create_instance(self, cores=2):
         """Create a test instance"""