Merge lp:~cbehrens/nova/lp844160-build-works-with-zones into lp:~hudson-openstack/nova/trunk

Proposed by Chris Behrens
Status: Rejected
Rejected by: Chris Behrens
Proposed branch: lp:~cbehrens/nova/lp844160-build-works-with-zones
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4592 lines (+1761/-1238)
33 files modified
doc/source/devref/distributed_scheduler.rst (+2/-0)
nova/api/ec2/cloud.py (+6/-4)
nova/api/openstack/__init__.py (+1/-2)
nova/api/openstack/contrib/createserverext.py (+1/-2)
nova/api/openstack/contrib/volumes.py (+2/-41)
nova/api/openstack/contrib/zones.py (+50/-0)
nova/api/openstack/create_instance_helper.py (+0/-602)
nova/api/openstack/servers.py (+581/-30)
nova/api/openstack/zones.py (+3/-35)
nova/compute/api.py (+111/-116)
nova/scheduler/abstract_scheduler.py (+32/-43)
nova/scheduler/api.py (+2/-2)
nova/scheduler/chance.py (+23/-2)
nova/scheduler/driver.py (+106/-9)
nova/scheduler/least_cost.py (+1/-2)
nova/scheduler/manager.py (+5/-19)
nova/scheduler/multi.py (+5/-3)
nova/scheduler/simple.py (+35/-39)
nova/scheduler/vsa.py (+13/-20)
nova/scheduler/zone.py (+23/-5)
nova/tests/api/openstack/contrib/test_createserverext.py (+8/-4)
nova/tests/api/openstack/contrib/test_volumes.py (+12/-2)
nova/tests/api/openstack/test_extensions.py (+1/-0)
nova/tests/api/openstack/test_server_actions.py (+2/-2)
nova/tests/api/openstack/test_servers.py (+158/-45)
nova/tests/integrated/api/client.py (+16/-3)
nova/tests/integrated/test_servers.py (+36/-0)
nova/tests/scheduler/test_abstract_scheduler.py (+58/-17)
nova/tests/scheduler/test_least_cost_scheduler.py (+1/-1)
nova/tests/scheduler/test_scheduler.py (+320/-167)
nova/tests/scheduler/test_vsa_scheduler.py (+14/-16)
nova/tests/test_compute.py (+116/-5)
nova/tests/test_quota.py (+17/-0)
To merge this branch: bzr merge lp:~cbehrens/nova/lp844160-build-works-with-zones
Reviewer Review Type Date Requested Status
Sandy Walsh (community) Needs Fixing
Chris Behrens (community) Abstain
Brian Waldon (community) Needs Information
Review via email: mp+75990@code.launchpad.net

Description of the change

This makes the OS API servers controller 'create' work with all schedulers, including the zone aware schedulers (BaseScheduler and subclasses).

Since the zones controller 'boot' method is no longer needed, it has been removed, and create_instance_helper has been folded back into the servers controller.

The distributed scheduler doc needs to be updated. I only updated it enough to say that some information is stale. If this merges, I'll file a bug to have it updated. I'm not the best person to update/create pretty pictures.

Other side effects of making this work:
1) compute API's create_all_at_once has been removed. It was only used by zone boot.
2) compute API's create() no longer creates Instance DB entries. The schedulers now do this, which makes sense: only the schedulers know where the instances will be placed. They could be placed locally or in a child zone. However, this comes at a cost: compute_api.create() now does a 'call' to the scheduler instead of a 'cast' in most cases (* see below, and the sketch after this list). This is so it can receive back the instance ID(s) that the scheduler created. Ultimately, we probably need to figure out a way to generate UUIDs before scheduling and return only the information we know about an instance before it is actually scheduled and created; we could then revert this back to a cast. (Or maybe we always return a reservation ID instead of an instance.)
3) There's been an undocumented feature in the OS API to allow multiple instances to be built. I've kept it.
4) If compute_api.create() is creating multiple instances, only a single call is made to the scheduler, vs the old way of sending many casts. All schedulers now check how many instances have been requested.
5) I've added an undocumented option 'return_reservation_id' when building. If set to True, only a reservation ID is returned to the API caller, not the instance. This essentially gives you the old 'nova zone-boot' functionality.
6) It was requested I create a stub for a zones extension, so you'll see the empty extension in here. We'll move some code to it later.
7) Fixes an unrelated bug that recently merged into trunk, where zones DB calls were no longer always being made with an admin context.

* Case #5 above doesn't wait for the scheduler response with instance IDs. It does a 'cast' instead.
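
A minimal sketch of the call-vs-cast split described in #2 and #5 (this assumes the Diablo-era nova.rpc call/cast signatures and the scheduler_topic flag; the function and argument names here are illustrative, not the literal patch code):

    from nova import flags
    from nova import rpc

    FLAGS = flags.FLAGS

    def _ask_scheduler(context, request_spec, reservation_id,
                       wait_for_instances=True):
        msg = {'method': 'run_instance',
               'args': {'topic': FLAGS.compute_topic,
                        'request_spec': request_spec,
                        'reservation_id': reservation_id}}
        if wait_for_instances:
            # 'call' blocks until the scheduler sends back the
            # instance(s) it created in the DB.
            instances = rpc.call(context, FLAGS.scheduler_topic, msg)
            return (instances, reservation_id)
        # 'cast' is fire-and-forget; the caller only ever learns the
        # reservation ID (the old 'nova zone-boot' behavior).
        rpc.cast(context, FLAGS.scheduler_topic, msg)
        return (None, reservation_id)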

Revision history for this message
Chris Behrens (cbehrens) wrote :

You'll see some code removed from the volumes extension. This is because it subclasses the servers controller to override create() to allow 'block_device_mapping' to be specified with a build. Since it was mostly code duplication (and used create_instance_helper), I just created a method in the servers controller that can be overridden in the volumes extension to retrieve the block_device_mapping.
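
The resulting hook is tiny; as it appears in the diff below, the base servers controller returns None and the volumes extension overrides it:

    # nova/api/openstack/servers.py (base controller)
    def _get_block_device_mapping(self, data):
        """Get block_device_mapping from 'server' dictionary.
        Overridden by volumes controller.
        """
        return None

    # nova/api/openstack/contrib/volumes.py (BootFromVolumeController)
    def _get_block_device_mapping(self, data):
        return data.get('block_device_mapping')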

Revision history for this message
Brian Waldon (bcwaldon) wrote :

I absolutely love the removal of create_instance_helper! Hit me up on IRC if you want to talk about any of this.

172: Can you expand on this? A note about each of the calls would be nice.

1198: Can the precooked stuff go away now? Just not sure how that fits in

1294: I don't think this is used anywhere. Can you remove it?

review: Needs Information
Revision history for this message
Chris Behrens (cbehrens) wrote :

172: Expanded
1294: Good catch... it's removed now.

As far as 1198: It can't really go away until we change how we talk to child zones. :-/ Sandy, pvo, and myself have been talking about some things related to zones that could make it go away.

Now.. I think a branch of Sandy's landed, and I probably have conflicts to resolve.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Resolved conflicts with trunk. Time to open this up for review.

Revision history for this message
Chris Behrens (cbehrens) :
review: Abstain
Revision history for this message
Chris Behrens (cbehrens) wrote :

merged trunk.

Revision history for this message
Rick Harris (rconradharris) wrote :

This looks really good, love the refactoring and cleanups. Making a quick
first-pass with some notes; will plan on digging in for a more thorough
review tomorrow.

> 2160 + # TODO(comstud): I would love to be able to return the full
> 2161 + # instance information here, but unfortunately passing things
> 2162 + # like 'datetime' back through rabbit don't work due to having
> 2163 + # to json encode/decode.

Not really necessary for this patch, but for future work, we can set a
default handler so that the `json` module will encode datetimes in
iso8601 format[1].

[1]http://stackoverflow.com/questions/455580/json-datetime-between-python-and-javascript
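
For example, something along these lines (illustrative; not code in this patch):

    import datetime
    import json

    def _dt_handler(obj):
        # json.dumps calls this for any object it can't serialize natively.
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        raise TypeError("%r is not JSON serializable" % obj)

    json.dumps({'created_at': datetime.datetime.utcnow()},
               default=_dt_handler)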

> 2294 + zone, _x, host = availability_zone.partition(':')

Could use split here and avoid the throwaway variable:

    zone, host = availability_zone.split(':')
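
(Worth noting: the two only behave the same when a colon is actually present.)

    'zone1:host1'.partition(':')  # ('zone1', ':', 'host1')
    'zone1'.partition(':')        # ('zone1', '', '') -- host falls back to ''
    'zone1'.split(':')            # ['zone1'] -- two-name unpacking raises ValueError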

> 859 + expl = _("Personality file limit exceeded")
> 860 + raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
> 861 + headers={'Retry-After': 0})

`expl` is defined but never used. This patch just moved these lines around,
highlighting the issue. If s/error.message/expl/ is the solution, we could
just make the change in this patch, but if we need to consult the original
author we could make a separate bug for it.

On that note, the code could be DRY'd up as well:

    code = error.code
    if code == "OnsetFileLimitExceeded":
        expl = _("Personality file limit exceeded")
    elif code == "OnsetFilePathLimitExceeded":
        expl = _("Personality file path too long")
    elif code == "OnsetFileContentLimitExceeded":
        expl = _("Personality file content too long")
    elif code == "InstanceLimitExceeded":
        expl = _("Instance quotas have been exceeded")
    else:
        expl = None

    if expl:
        raise exc.HTTPRequestEntityTooLarge(
            explanation=expl, headers={'Retry-After': 0})
    else:
        # if the original error is okay, just reraise it
        raise error

> 2167 + if local is True:

Per PEP8, preferred is:

    if local:

> 2388 + instances = instances.append(self.encode_instance(instance))

Looks like this should be:

    instances.append(self.encode_instance(instance))

Or maybe even:

    encoded_instance = self.encode_instance(instance)
    instances.append(encoded_instance)

Revision history for this message
Chris Behrens (cbehrens) wrote :

> This looks really good, love the refactoring and cleanups. Making a quick
> first-pass with some notes; will plan on digging in for a more thorough
> review tomorrow.

Great, thanks!

>
>
> > 2160 + # TODO(comstud): I would love to be able to return the full
> > 2161 + # instance information here, but unfortunately passing things
> > 2162 + # like 'datetime' back through rabbit don't work due to having
> > 2163 + # to json encode/decode.
>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>
> [1]http://stackoverflow.com/questions/455580/json-datetime-between-python-and-javascript

Great...good to know. I hadn't bothered looking for a solution yet... really there's all sorts of cases where we should be doing this and it's a discussion blamar proposed for the summit.

>
> > 2294 + zone, _x, host = availability_zone.partition(':')
>
> Could use split here and avoid the throwaway variable:
>
> zone, host = availability_zone.split(':')
>
> > 859 + expl = _("Personality file limit exceeded")
> > 860 + raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
> > 861 + headers={'Retry-After': 0})
>
> `expl` is defined but never used. This patch just moved these lines around,
> highlighting the issue. If s/error.message/expl/ is the solution, we could
> just make the change in this patch, but if we need to consult the original
> author we could make a separate bug for it.

Yeah, the other one above was just moving lines around also. I'll take a look at these, though!

[...]
> > 2167 + if local is True:
>
> Per PEP8, preferred is:
>
> if local:

I'll fix that. I think that's a habit from doing "if blah is None" to distinguish keyword arguments with a None default from an empty list or dict being passed.. if you know what I mean.
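
(For illustration, the pattern he means:)

    def add_item(item, items=None):
        # "if not items" couldn't tell an explicit [] apart from
        # "no argument passed"; "is None" can.
        if items is None:
            items = []
        items.append(item)
        return items

    shared = []
    add_item(1, shared)  # uses the caller's list: shared == [1]
    add_item(2)          # -> [2], a fresh list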

> > 2388 + instances = instances.append(self.encode_instance(instance))
>
> Looks like this should be:
>
> instances.append(self.encode_instance(instance))

Shoot, yes. I copy-pasted that broken line around a number of times and thought I went back and fixed all instances of it. Good catch.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Rick: I updated that comment regarding the datetime encoding. I think I hit all of your other issues so far. I've also added a comment in compute_api.create() that we should be using rpc.multicall vs rpc.call due to the amount of data the scheduler could return if we return full instance dictionaries.

It does appear the QuotaError handling should have been raising with those unused variables. I've gone ahead and made use of them, and cleaned it all up into a mapping table. I think that's cleaner than all of the 'if' stuff. In researching this, I found the QuotaError exception stuff could really use a refactor (its class and the raises done in compute/api). I'll probably file a bug to clean it up after this merges.

1593. By Chris Behrens

typo

Revision history for this message
Chris Behrens (cbehrens) wrote :

Ok ready. Note: if running tests, just running api/openstack/test_servers.py by itself will fail due to an issue in trunk. Running the whole test suite will work.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

On Sep 21, 2011, at 9:03 PM, Rick Harris wrote:

>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>

A while back I wrote some code in my branches that handled this, as part of getting rid of using the db to pass information back and forth. It even managed the conversion back on updates:

Basically, it is just a matter of adding a datetime check to utils.to_primitive and converting using utils.strtime(). We can use sqlalchemy to parse back into the required formats on update with something like:

=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-05-24 19:21:02 +0000
+++ nova/db/sqlalchemy/models.py 2011-05-26 21:41:26 +0000
@@ -27,12 +27,14 @@
 from sqlalchemy.exc import IntegrityError
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.schema import ForeignKeyConstraint
+from sqlalchemy.types import DateTime as DTType

 from nova.db.sqlalchemy.session import get_session

 from nova import auth
 from nova import exception
 from nova import flags
+from nova import utils

 FLAGS = flags.FLAGS
@@ -90,11 +92,15 @@
         return n, getattr(self, n)

     def update(self, values):
-        """Make the model object behave like a dict"""
-        columns = dict(object_mapper(self).columns).keys()
+        """Make the model object behave like a dict and convert datetimes."""
+        columns = object_mapper(self).columns
         for key, value in values.iteritems():
             # NOTE(vish): don't update the 'name' property
-            if key in columns:
+            if key != 'name' or key in columns:
+                if (key in columns and
+                        isinstance(value, basestring) and
+                        isinstance(columns[key].type, DTType)):
+                    value = utils.parse_strtime(value)
                 setattr(self, key, value)

     def iteritems(self):

I was able to pass entire refs through the queue using this and update them on the other end.
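
The to_primitive half of that might look like the following sketch (strtime()/parse_strtime() are real nova.utils helpers, but this is not the exact code from those branches):

    import datetime

    from nova import utils

    def _to_primitive(value):
        # Serialize datetimes with utils.strtime() before a ref hits
        # the queue; the models.py update() above parses them back.
        if isinstance(value, datetime.datetime):
            return utils.strtime(value)
        return value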

Vish

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Still going through it, but getting a test failure: http://paste.openstack.org/show/2522/

Stay tuned ...

review: Needs Fixing
Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

First off, I love the fact that you're keeping the unit tests as unit tests (and not integration tests) ... makes the review so much easier to follow.

I guess we really need to update the docs shortly after this lands.

Regarding the precooked stuff, I wonder if we could just assume all results are raw and strip out any potentially offending data regardless? Just be a little more forgiving if they don't exist.

825 +from nova.rpc.common import RemoteError

import module not class

1697 + instances = self._schedule_run_instance(
I thought that method returned a tuple? Is this correct?

1950 + instance = self.create_instance_db_entry(context, request_spec)

create_instance_db_entry is a static method. It should either be qualified with the class name or have the @staticmethod decorator removed; it shouldn't be called via self.

2032 # Return instance as a list and make sure the caller doesn't
2033 + # cast to a compute node.

 ... not sure what the comment is trying to tell me.

2053 - # Returning None short-circuits the routing to Compute (since

... is this comment not appropriate anymore? I think some explanation of the None return is required (somewhere).

2215 + # Should only be None for tests?
2216 + if filename is not None:
Then this logic should be broken out into a separate function and stubbed in the test. Test case code shouldn't be in production code.

2256 + if isinstance(ret_val, tuple):
ah ha ... there it is. Can we unify these results to always be a tuple? Or I think we'd need a test for each condition (unless I missed something there)?

All in all ... great changes Chris! Nice to see that zone-boot and inheritance mess go away!

review: Needs Fixing
Revision history for this message
Chris Behrens (cbehrens) wrote :

Vish:

[...]
> I wrote some code long ago in my branches to get rid of using the db to pass
> information back and forth that handled this. It even managed the conversion
> back on updates:
[...]
> I was able to pass entire refs through the queue using this and update them on
> the other end.

Very cool. I'll probably look at incorporating this as a next step. This diff is already large enough due to moving code around. There are a lot more areas where we should be doing this, and it's a point of discussion that blamar suggested for the summit, also.

Revision history for this message
Chris Behrens (cbehrens) wrote :

> Still going through it, but getting a test failure:
> http://paste.openstack.org/show/2522/
>
> Stay tuned ...

So, I run into that now and then as well.. and it actually appears to be a kombu memory transport bug. Generally if you run into it, a 2nd run of the tests will pass. I think we're going to need to go back to using our own 'fakerabbit' type backend for kombu... or just aggressively try to get these fixed in kombu.

Revision history for this message
Chris Behrens (cbehrens) wrote :

> First off, I love the fact that you're keeping the unit tests as unit tests
> (and not integration tests) ... makes the review so much easier to follow.

Yup.. something I think about when coding tests, although there are a lot of cases where unit tests are currently more like integration tests.

>
> I guess we really need to update the docs shortly after this lands.

Yeah. I'd like to update it more myself, but I'd prefer to spend time on it after we get this merged.. Since we're very early in essex, I think this is okay. We can file a bug after this merges.

>
> Regarding the precooked stuff, I wonder if we could just assume all results
> are raw and strip out any potentially offending data regardless? Just be a
> little more forgiving if they don't exist.
>
> 825 +from nova.rpc.common import RemoteError
>
> import module not class

Copy/paste thing, but I agree. I'll update it.

>
> 1697 + instances = self._schedule_run_instance(
> I thought that method returned a tuple? Is this correct?

I think you caught this below. The schedulers' methods do return tuples so that the manager can get a 'response to return' and a 'host to schedule on', but the manager really only returns the 'response' portion. I'll update/add comments in the schedulers/manager.
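
Roughly the tuple contract being described (names assumed for illustration, not the patch's actual signatures):

    def schedule_run_instance(request_spec):
        # Scheduler driver: returns (response, host).
        instances = [{'id': 1}]
        return instances, 'compute-host-1'

    def manager_run_instance(request_spec):
        # Manager: unpacks the tuple but hands back only the response.
        instances, _host = schedule_run_instance(request_spec)
        return instances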

>
>
> > 1950 + instance = self.create_instance_db_entry(context, request_spec)
>
> create_instance_db_entry is a static method. It should either be qualified
> with the class name or have the @staticmethod decorator removed; it
> shouldn't be called via self.

I have the same thing for 'encode_instance', etc. I put them as static methods because they don't use 'self'... but it's a bit cleaner in the code to be able to call them via self.*. Is that a huge no-no? If so, I think I lean towards removing the decorator even though they don't use any instance data.
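
(Calling a staticmethod through an instance is legal Python, for what it's worth:)

    class Foo(object):
        @staticmethod
        def bar():
            return 42

    Foo.bar()    # class-qualified call
    Foo().bar()  # also works via an instance/self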

>
> 2032 # Return instance as a list and make sure the caller doesn't
> 2033 + # cast to a compute node.
>
> ... not sure what the comment is trying to tell me.

Goes along with the tuple comment above. I'll update the comment as mentioned above.

>
> 2053 - # Returning None short-circuits the routing to Compute (since
>
> ... is this comment not appropriate anymore? I think some explanation of the
> None return is required (somewhere).

That comment attempts to explain why None is required, but I guess it's not descriptive enough. :) It also goes along with the other comments above that I'll update.

>
> 2215 + # Should only be None for tests?
> 2216 + if filename is not None:
> Then this logic should be broken out into a separate function and stubbed in
> the test. Test case code shouldn't be in production code.

I'll do more investigation on this. I ran into a test failure where 'filename' was not defined.

>
> 2256 + if isinstance(ret_val, tuple):
> ah ha ... there it is. Can we unify these results to always be a tuple? Or I
> think we'd need a test for each condition (unless I missed something there)?

I could update all scheduler methods to return a tuple, yes, and I thought about doing this, although it's only run_instance that needs to return a response. For...

Read more...

1594. By Chris Behrens

Clean up the return values from all schedule* calls, making all schedule* calls do their own casts.
Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.

1595. By Chris Behrens

test fixes plus bugs/typos they uncovered. still needs more test fixes

1596. By Chris Behrens

fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context

1597. By Chris Behrens

fix pep8 issue

1598. By Chris Behrens

chance scheduler bug uncovered with tests

1599. By Chris Behrens

vsa scheduler test fixes

1600. By Chris Behrens

more test fixes

Revision history for this message
Chris Behrens (cbehrens) wrote :

Moving to git.

Unmerged revisions

1600. By Chris Behrens

more test fixes

1599. By Chris Behrens

vsa scheduler test fixes

1598. By Chris Behrens

chance scheduler bug uncovered with tests

1597. By Chris Behrens

fix pep8 issue

1596. By Chris Behrens

fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context

1595. By Chris Behrens

test fixes plus bugs/typos they uncovered. still needs more test fixes

1594. By Chris Behrens

Clean up the return values from all schedule* calls, making all schedule* calls do their own casts.
Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.

1593. By Chris Behrens

typo

1592. By Chris Behrens

broken indent

1591. By Chris Behrens

revert the kludge for reclaim_instance_interval since tests pass when all of them are run. I don't want to have a conflict with a fix from johannes

Preview Diff

1=== modified file 'doc/source/devref/distributed_scheduler.rst'
2--- doc/source/devref/distributed_scheduler.rst 2011-08-18 19:39:25 +0000
3+++ doc/source/devref/distributed_scheduler.rst 2011-09-23 07:08:19 +0000
4@@ -77,6 +77,8 @@
5
6 Requesting a new instance
7 -------------------------
8+(Note: The information below is out of date, as the `nova.compute.api.create_all_at_once()` functionality has merged into `nova.compute.api.create()` and the non-zone aware schedulers have been updated.)
9+
10 Prior to the `BaseScheduler`, to request a new instance, a call was made to `nova.compute.api.create()`. The type of instance created depended on the value of the `InstanceType` record being passed in. The `InstanceType` determined the amount of disk, CPU, RAM and network required for the instance. Administrators can add new `InstanceType` records to suit their needs. For more complicated instance requests we need to go beyond the default fields in the `InstanceType` table.
11
12 `nova.compute.api.create()` performed the following actions:
13
14=== modified file 'nova/api/ec2/cloud.py'
15--- nova/api/ec2/cloud.py 2011-09-21 15:54:30 +0000
16+++ nova/api/ec2/cloud.py 2011-09-23 07:08:19 +0000
17@@ -1384,7 +1384,7 @@
18 if image_state != 'available':
19 raise exception.ApiError(_('Image must be available'))
20
21- instances = self.compute_api.create(context,
22+ (instances, resv_id) = self.compute_api.create(context,
23 instance_type=instance_types.get_instance_type_by_name(
24 kwargs.get('instance_type', None)),
25 image_href=self._get_image(context, kwargs['image_id'])['id'],
26@@ -1399,9 +1399,11 @@
27 security_group=kwargs.get('security_group'),
28 availability_zone=kwargs.get('placement', {}).get(
29 'AvailabilityZone'),
30- block_device_mapping=kwargs.get('block_device_mapping', {}))
31- return self._format_run_instances(context,
32- reservation_id=instances[0]['reservation_id'])
33+ block_device_mapping=kwargs.get('block_device_mapping', {}),
34+ # NOTE(comstud): Unfortunately, EC2 requires that the
35+ # instance DB entries have been created..
36+ wait_for_instances=True)
37+ return self._format_run_instances(context, resv_id)
38
39 def _do_instance(self, action, context, ec2_id):
40 instance_id = ec2utils.ec2_id_to_id(ec2_id)
41
42=== modified file 'nova/api/openstack/__init__.py'
43--- nova/api/openstack/__init__.py 2011-08-15 13:35:44 +0000
44+++ nova/api/openstack/__init__.py 2011-09-23 07:08:19 +0000
45@@ -139,8 +139,7 @@
46 controller=zones.create_resource(version),
47 collection={'detail': 'GET',
48 'info': 'GET',
49- 'select': 'POST',
50- 'boot': 'POST'})
51+ 'select': 'POST'})
52
53 mapper.connect("versions", "/",
54 controller=versions.create_resource(version),
55
56=== modified file 'nova/api/openstack/contrib/createserverext.py'
57--- nova/api/openstack/contrib/createserverext.py 2011-09-02 18:00:33 +0000
58+++ nova/api/openstack/contrib/createserverext.py 2011-09-23 07:08:19 +0000
59@@ -15,7 +15,6 @@
60 # under the License
61
62 from nova import utils
63-from nova.api.openstack import create_instance_helper as helper
64 from nova.api.openstack import extensions
65 from nova.api.openstack import servers
66 from nova.api.openstack import wsgi
67@@ -66,7 +65,7 @@
68 }
69
70 body_deserializers = {
71- 'application/xml': helper.ServerXMLDeserializerV11(),
72+ 'application/xml': servers.ServerXMLDeserializerV11(),
73 }
74
75 serializer = wsgi.ResponseSerializer(body_serializers,
76
77=== modified file 'nova/api/openstack/contrib/volumes.py'
78--- nova/api/openstack/contrib/volumes.py 2011-09-14 19:33:51 +0000
79+++ nova/api/openstack/contrib/volumes.py 2011-09-23 07:08:19 +0000
80@@ -334,47 +334,8 @@
81 class BootFromVolumeController(servers.ControllerV11):
82 """The boot from volume API controller for the Openstack API."""
83
84- def _create_instance(self, context, instance_type, image_href, **kwargs):
85- try:
86- return self.compute_api.create(context, instance_type,
87- image_href, **kwargs)
88- except quota.QuotaError as error:
89- self.helper._handle_quota_error(error)
90- except exception.ImageNotFound as error:
91- msg = _("Can not find requested image")
92- raise faults.Fault(exc.HTTPBadRequest(explanation=msg))
93-
94- def create(self, req, body):
95- """ Creates a new server for a given user """
96- extra_values = None
97- try:
98-
99- def get_kwargs(context, instance_type, image_href, **kwargs):
100- kwargs['context'] = context
101- kwargs['instance_type'] = instance_type
102- kwargs['image_href'] = image_href
103- return kwargs
104-
105- extra_values, kwargs = self.helper.create_instance(req, body,
106- get_kwargs)
107-
108- block_device_mapping = body['server'].get('block_device_mapping')
109- kwargs['block_device_mapping'] = block_device_mapping
110-
111- instances = self._create_instance(**kwargs)
112- except faults.Fault, f:
113- return f
114-
115- # We can only return 1 instance via the API, if we happen to
116- # build more than one... instances is a list, so we'll just
117- # use the first one..
118- inst = instances[0]
119- for key in ['instance_type', 'image_ref']:
120- inst[key] = extra_values[key]
121-
122- server = self._build_view(req, inst, is_detail=True)
123- server['server']['adminPass'] = extra_values['password']
124- return server
125+ def _get_block_device_mapping(self, data):
126+ return data.get('block_device_mapping')
127
128
129 class Volumes(extensions.ExtensionDescriptor):
130
131=== added file 'nova/api/openstack/contrib/zones.py'
132--- nova/api/openstack/contrib/zones.py 1970-01-01 00:00:00 +0000
133+++ nova/api/openstack/contrib/zones.py 2011-09-23 07:08:19 +0000
134@@ -0,0 +1,50 @@
135+# vim: tabstop=4 shiftwidth=4 softtabstop=4
136+
137+# Copyright 2011 OpenStack LLC.
138+# All Rights Reserved.
139+#
140+# Licensed under the Apache License, Version 2.0 (the "License"); you may
141+# not use this file except in compliance with the License. You may obtain
142+# a copy of the License at
143+#
144+# http://www.apache.org/licenses/LICENSE-2.0
145+#
146+# Unless required by applicable law or agreed to in writing, software
147+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
148+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
149+# License for the specific language governing permissions and limitations
150+# under the License.
151+
152+"""The zones extension."""
153+
154+
155+from nova import flags
156+from nova import log as logging
157+from nova.api.openstack import extensions
158+
159+
160+LOG = logging.getLogger("nova.api.zones")
161+FLAGS = flags.FLAGS
162+
163+
164+class Zones(extensions.ExtensionDescriptor):
165+ def get_name(self):
166+ return "Zones"
167+
168+ def get_alias(self):
169+ return "os-zones"
170+
171+ def get_description(self):
172+ return """Enables zones-related functionality such as adding
173+child zones, listing child zones, getting the capabilities of the
174+local zone, and returning build plans to parent zones' schedulers"""
175+
176+ def get_namespace(self):
177+ return "http://docs.openstack.org/ext/zones/api/v1.1"
178+
179+ def get_updated(self):
180+ return "2011-09-21T00:00:00+00:00"
181+
182+ def get_resources(self):
183+ # Nothing yet.
184+ return []
185
186=== removed file 'nova/api/openstack/create_instance_helper.py'
187--- nova/api/openstack/create_instance_helper.py 2011-09-15 14:07:58 +0000
188+++ nova/api/openstack/create_instance_helper.py 1970-01-01 00:00:00 +0000
189@@ -1,602 +0,0 @@
190-# Copyright 2011 OpenStack LLC.
191-# Copyright 2011 Piston Cloud Computing, Inc.
192-# All Rights Reserved.
193-#
194-# Licensed under the Apache License, Version 2.0 (the "License"); you may
195-# not use this file except in compliance with the License. You may obtain
196-# a copy of the License at
197-#
198-# http://www.apache.org/licenses/LICENSE-2.0
199-#
200-# Unless required by applicable law or agreed to in writing, software
201-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
202-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
203-# License for the specific language governing permissions and limitations
204-# under the License.
205-
206-import base64
207-
208-from webob import exc
209-from xml.dom import minidom
210-
211-from nova import exception
212-from nova import flags
213-from nova import log as logging
214-import nova.image
215-from nova import quota
216-from nova import utils
217-
218-from nova.compute import instance_types
219-from nova.api.openstack import common
220-from nova.api.openstack import wsgi
221-from nova.rpc.common import RemoteError
222-
223-LOG = logging.getLogger('nova.api.openstack.create_instance_helper')
224-FLAGS = flags.FLAGS
225-
226-
227-class CreateFault(exception.NovaException):
228- message = _("Invalid parameters given to create_instance.")
229-
230- def __init__(self, fault):
231- self.fault = fault
232- super(CreateFault, self).__init__()
233-
234-
235-class CreateInstanceHelper(object):
236- """This is the base class for OS API Controllers that
237- are capable of creating instances (currently Servers and Zones).
238-
239- Once we stabilize the Zones portion of the API we may be able
240- to move this code back into servers.py
241- """
242-
243- def __init__(self, controller):
244- """We need the image service to create an instance."""
245- self.controller = controller
246- self._image_service = utils.import_object(FLAGS.image_service)
247- super(CreateInstanceHelper, self).__init__()
248-
249- def create_instance(self, req, body, create_method):
250- """Creates a new server for the given user. The approach
251- used depends on the create_method. For example, the standard
252- POST /server call uses compute.api.create(), while
253- POST /zones/server uses compute.api.create_all_at_once().
254-
255- The problem is, both approaches return different values (i.e.
256- [instance dicts] vs. reservation_id). So the handling of the
257- return type from this method is left to the caller.
258- """
259- if not body:
260- raise exc.HTTPUnprocessableEntity()
261-
262- if not 'server' in body:
263- raise exc.HTTPUnprocessableEntity()
264-
265- context = req.environ['nova.context']
266- server_dict = body['server']
267- password = self.controller._get_server_admin_password(server_dict)
268-
269- if not 'name' in server_dict:
270- msg = _("Server name is not defined")
271- raise exc.HTTPBadRequest(explanation=msg)
272-
273- name = server_dict['name']
274- self._validate_server_name(name)
275- name = name.strip()
276-
277- image_href = self.controller._image_ref_from_req_data(body)
278- # If the image href was generated by nova api, strip image_href
279- # down to an id and use the default glance connection params
280-
281- if str(image_href).startswith(req.application_url):
282- image_href = image_href.split('/').pop()
283- try:
284- image_service, image_id = nova.image.get_image_service(context,
285- image_href)
286- kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
287- req, image_service, image_id)
288- images = set([str(x['id']) for x in image_service.index(context)])
289- assert str(image_id) in images
290- except Exception, e:
291- msg = _("Cannot find requested image %(image_href)s: %(e)s" %
292- locals())
293- raise exc.HTTPBadRequest(explanation=msg)
294-
295- personality = server_dict.get('personality')
296- config_drive = server_dict.get('config_drive')
297-
298- injected_files = []
299- if personality:
300- injected_files = self._get_injected_files(personality)
301-
302- sg_names = []
303- security_groups = server_dict.get('security_groups')
304- if security_groups is not None:
305- sg_names = [sg['name'] for sg in security_groups if sg.get('name')]
306- if not sg_names:
307- sg_names.append('default')
308-
309- sg_names = list(set(sg_names))
310-
311- requested_networks = server_dict.get('networks')
312- if requested_networks is not None:
313- requested_networks = self._get_requested_networks(
314- requested_networks)
315-
316- try:
317- flavor_id = self.controller._flavor_id_from_req_data(body)
318- except ValueError as error:
319- msg = _("Invalid flavorRef provided.")
320- raise exc.HTTPBadRequest(explanation=msg)
321-
322- zone_blob = server_dict.get('blob')
323-
324- # optional openstack extensions:
325- key_name = server_dict.get('key_name')
326- user_data = server_dict.get('user_data')
327- self._validate_user_data(user_data)
328-
329- availability_zone = server_dict.get('availability_zone')
330- name = server_dict['name']
331- self._validate_server_name(name)
332- name = name.strip()
333-
334- reservation_id = server_dict.get('reservation_id')
335- min_count = server_dict.get('min_count')
336- max_count = server_dict.get('max_count')
337- # min_count and max_count are optional. If they exist, they come
338- # in as strings. We want to default 'min_count' to 1, and default
339- # 'max_count' to be 'min_count'.
340- min_count = int(min_count) if min_count else 1
341- max_count = int(max_count) if max_count else min_count
342- if min_count > max_count:
343- min_count = max_count
344-
345- try:
346- inst_type = \
347- instance_types.get_instance_type_by_flavor_id(flavor_id)
348- extra_values = {
349- 'instance_type': inst_type,
350- 'image_ref': image_href,
351- 'config_drive': config_drive,
352- 'password': password}
353-
354- return (extra_values,
355- create_method(context,
356- inst_type,
357- image_id,
358- kernel_id=kernel_id,
359- ramdisk_id=ramdisk_id,
360- display_name=name,
361- display_description=name,
362- key_name=key_name,
363- metadata=server_dict.get('metadata', {}),
364- access_ip_v4=server_dict.get('accessIPv4'),
365- access_ip_v6=server_dict.get('accessIPv6'),
366- injected_files=injected_files,
367- admin_password=password,
368- zone_blob=zone_blob,
369- reservation_id=reservation_id,
370- min_count=min_count,
371- max_count=max_count,
372- requested_networks=requested_networks,
373- security_group=sg_names,
374- user_data=user_data,
375- availability_zone=availability_zone,
376- config_drive=config_drive,))
377- except quota.QuotaError as error:
378- self._handle_quota_error(error)
379- except exception.ImageNotFound as error:
380- msg = _("Can not find requested image")
381- raise exc.HTTPBadRequest(explanation=msg)
382- except exception.FlavorNotFound as error:
383- msg = _("Invalid flavorRef provided.")
384- raise exc.HTTPBadRequest(explanation=msg)
385- except exception.KeypairNotFound as error:
386- msg = _("Invalid key_name provided.")
387- raise exc.HTTPBadRequest(explanation=msg)
388- except exception.SecurityGroupNotFound as error:
389- raise exc.HTTPBadRequest(explanation=unicode(error))
390- except RemoteError as err:
391- msg = "%(err_type)s: %(err_msg)s" % \
392- {'err_type': err.exc_type, 'err_msg': err.value}
393- raise exc.HTTPBadRequest(explanation=msg)
394- # Let the caller deal with unhandled exceptions.
395-
396- def _handle_quota_error(self, error):
397- """
398- Reraise quota errors as api-specific http exceptions
399- """
400- if error.code == "OnsetFileLimitExceeded":
401- expl = _("Personality file limit exceeded")
402- raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
403- headers={'Retry-After': 0})
404- if error.code == "OnsetFilePathLimitExceeded":
405- expl = _("Personality file path too long")
406- raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
407- headers={'Retry-After': 0})
408- if error.code == "OnsetFileContentLimitExceeded":
409- expl = _("Personality file content too long")
410- raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
411- headers={'Retry-After': 0})
412- if error.code == "InstanceLimitExceeded":
413- expl = _("Instance quotas have been exceeded")
414- raise exc.HTTPRequestEntityTooLarge(explanation=error.message,
415- headers={'Retry-After': 0})
416- # if the original error is okay, just reraise it
417- raise error
418-
419- def _deserialize_create(self, request):
420- """
421- Deserialize a create request
422-
423- Overrides normal behavior in the case of xml content
424- """
425- if request.content_type == "application/xml":
426- deserializer = ServerXMLDeserializer()
427- return deserializer.deserialize(request.body)
428- else:
429- return self._deserialize(request.body, request.get_content_type())
430-
431- def _validate_server_name(self, value):
432- if not isinstance(value, basestring):
433- msg = _("Server name is not a string or unicode")
434- raise exc.HTTPBadRequest(explanation=msg)
435-
436- if value.strip() == '':
437- msg = _("Server name is an empty string")
438- raise exc.HTTPBadRequest(explanation=msg)
439-
440- def _get_kernel_ramdisk_from_image(self, req, image_service, image_id):
441- """Fetch an image from the ImageService, then if present, return the
442- associated kernel and ramdisk image IDs.
443- """
444- context = req.environ['nova.context']
445- image_meta = image_service.show(context, image_id)
446- # NOTE(sirp): extracted to a separate method to aid unit-testing, the
447- # new method doesn't need a request obj or an ImageService stub
448- kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image(
449- image_meta)
450- return kernel_id, ramdisk_id
451-
452- @staticmethod
453- def _do_get_kernel_ramdisk_from_image(image_meta):
454- """Given an ImageService image_meta, return kernel and ramdisk image
455- ids if present.
456-
457- This is only valid for `ami` style images.
458- """
459- image_id = image_meta['id']
460- if image_meta['status'] != 'active':
461- raise exception.ImageUnacceptable(image_id=image_id,
462- reason=_("status is not active"))
463-
464- if image_meta.get('container_format') != 'ami':
465- return None, None
466-
467- try:
468- kernel_id = image_meta['properties']['kernel_id']
469- except KeyError:
470- raise exception.KernelNotFoundForImage(image_id=image_id)
471-
472- try:
473- ramdisk_id = image_meta['properties']['ramdisk_id']
474- except KeyError:
475- ramdisk_id = None
476-
477- return kernel_id, ramdisk_id
478-
479- def _get_injected_files(self, personality):
480- """
481- Create a list of injected files from the personality attribute
482-
483- At this time, injected_files must be formatted as a list of
484- (file_path, file_content) pairs for compatibility with the
485- underlying compute service.
486- """
487- injected_files = []
488-
489- for item in personality:
490- try:
491- path = item['path']
492- contents = item['contents']
493- except KeyError as key:
494- expl = _('Bad personality format: missing %s') % key
495- raise exc.HTTPBadRequest(explanation=expl)
496- except TypeError:
497- expl = _('Bad personality format')
498- raise exc.HTTPBadRequest(explanation=expl)
499- try:
500- contents = base64.b64decode(contents)
501- except TypeError:
502- expl = _('Personality content for %s cannot be decoded') % path
503- raise exc.HTTPBadRequest(explanation=expl)
504- injected_files.append((path, contents))
505- return injected_files
506-
507- def _get_server_admin_password_old_style(self, server):
508- """ Determine the admin password for a server on creation """
509- return utils.generate_password(FLAGS.password_length)
510-
511- def _get_server_admin_password_new_style(self, server):
512- """ Determine the admin password for a server on creation """
513- password = server.get('adminPass')
514-
515- if password is None:
516- return utils.generate_password(FLAGS.password_length)
517- if not isinstance(password, basestring) or password == '':
518- msg = _("Invalid adminPass")
519- raise exc.HTTPBadRequest(explanation=msg)
520- return password
521-
522- def _get_requested_networks(self, requested_networks):
523- """
524- Create a list of requested networks from the networks attribute
525- """
526- networks = []
527- for network in requested_networks:
528- try:
529- network_uuid = network['uuid']
530-
531- if not utils.is_uuid_like(network_uuid):
532- msg = _("Bad networks format: network uuid is not in"
533- " proper format (%s)") % network_uuid
534- raise exc.HTTPBadRequest(explanation=msg)
535-
536- #fixed IP address is optional
537- #if the fixed IP address is not provided then
538- #it will use one of the available IP address from the network
539- address = network.get('fixed_ip', None)
540- if address is not None and not utils.is_valid_ipv4(address):
541- msg = _("Invalid fixed IP address (%s)") % address
542- raise exc.HTTPBadRequest(explanation=msg)
543- # check if the network id is already present in the list,
544- # we don't want duplicate networks to be passed
545- # at the boot time
546- for id, ip in networks:
547- if id == network_uuid:
548- expl = _("Duplicate networks (%s) are not allowed")\
549- % network_uuid
550- raise exc.HTTPBadRequest(explanation=expl)
551-
552- networks.append((network_uuid, address))
553- except KeyError as key:
554- expl = _('Bad network format: missing %s') % key
555- raise exc.HTTPBadRequest(explanation=expl)
556- except TypeError:
557- expl = _('Bad networks format')
558- raise exc.HTTPBadRequest(explanation=expl)
559-
560- return networks
561-
562- def _validate_user_data(self, user_data):
563- """Check if the user_data is encoded properly"""
564- if not user_data:
565- return
566- try:
567- user_data = base64.b64decode(user_data)
568- except TypeError:
569- expl = _('Userdata content cannot be decoded')
570- raise exc.HTTPBadRequest(explanation=expl)
571-
572-
573-class ServerXMLDeserializer(wsgi.XMLDeserializer):
574- """
575- Deserializer to handle xml-formatted server create requests.
576-
577- Handles standard server attributes as well as optional metadata
578- and personality attributes
579- """
580-
581- metadata_deserializer = common.MetadataXMLDeserializer()
582-
583- def create(self, string):
584- """Deserialize an xml-formatted server create request"""
585- dom = minidom.parseString(string)
586- server = self._extract_server(dom)
587- return {'body': {'server': server}}
588-
589- def _extract_server(self, node):
590- """Marshal the server attribute of a parsed request"""
591- server = {}
592- server_node = self.find_first_child_named(node, 'server')
593-
594- attributes = ["name", "imageId", "flavorId", "adminPass"]
595- for attr in attributes:
596- if server_node.getAttribute(attr):
597- server[attr] = server_node.getAttribute(attr)
598-
599- metadata_node = self.find_first_child_named(server_node, "metadata")
600- server["metadata"] = self.metadata_deserializer.extract_metadata(
601- metadata_node)
602-
603- server["personality"] = self._extract_personality(server_node)
604-
605- return server
606-
607- def _extract_personality(self, server_node):
608- """Marshal the personality attribute of a parsed request"""
609- node = self.find_first_child_named(server_node, "personality")
610- personality = []
611- if node is not None:
612- for file_node in self.find_children_named(node, "file"):
613- item = {}
614- if file_node.hasAttribute("path"):
615- item["path"] = file_node.getAttribute("path")
616- item["contents"] = self.extract_text(file_node)
617- personality.append(item)
618- return personality
619-
620-
621-class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer):
622- """
623- Deserializer to handle xml-formatted server create requests.
624-
625- Handles standard server attributes as well as optional metadata
626- and personality attributes
627- """
628-
629- metadata_deserializer = common.MetadataXMLDeserializer()
630-
631- def action(self, string):
632- dom = minidom.parseString(string)
633- action_node = dom.childNodes[0]
634- action_name = action_node.tagName
635-
636- action_deserializer = {
637- 'createImage': self._action_create_image,
638- 'createBackup': self._action_create_backup,
639- 'changePassword': self._action_change_password,
640- 'reboot': self._action_reboot,
641- 'rebuild': self._action_rebuild,
642- 'resize': self._action_resize,
643- 'confirmResize': self._action_confirm_resize,
644- 'revertResize': self._action_revert_resize,
645- }.get(action_name, self.default)
646-
647- action_data = action_deserializer(action_node)
648-
649- return {'body': {action_name: action_data}}
650-
651- def _action_create_image(self, node):
652- return self._deserialize_image_action(node, ('name',))
653-
654- def _action_create_backup(self, node):
655- attributes = ('name', 'backup_type', 'rotation')
656- return self._deserialize_image_action(node, attributes)
657-
658- def _action_change_password(self, node):
659- if not node.hasAttribute("adminPass"):
660- raise AttributeError("No adminPass was specified in request")
661- return {"adminPass": node.getAttribute("adminPass")}
662-
663- def _action_reboot(self, node):
664- if not node.hasAttribute("type"):
665- raise AttributeError("No reboot type was specified in request")
666- return {"type": node.getAttribute("type")}
667-
668- def _action_rebuild(self, node):
669- rebuild = {}
670- if node.hasAttribute("name"):
671- rebuild['name'] = node.getAttribute("name")
672-
673- metadata_node = self.find_first_child_named(node, "metadata")
674- if metadata_node is not None:
675- rebuild["metadata"] = self.extract_metadata(metadata_node)
676-
677- personality = self._extract_personality(node)
678- if personality is not None:
679- rebuild["personality"] = personality
680-
681- if not node.hasAttribute("imageRef"):
682- raise AttributeError("No imageRef was specified in request")
683- rebuild["imageRef"] = node.getAttribute("imageRef")
684-
685- return rebuild
686-
687- def _action_resize(self, node):
688- if not node.hasAttribute("flavorRef"):
689- raise AttributeError("No flavorRef was specified in request")
690- return {"flavorRef": node.getAttribute("flavorRef")}
691-
692- def _action_confirm_resize(self, node):
693- return None
694-
695- def _action_revert_resize(self, node):
696- return None
697-
698- def _deserialize_image_action(self, node, allowed_attributes):
699- data = {}
700- for attribute in allowed_attributes:
701- value = node.getAttribute(attribute)
702- if value:
703- data[attribute] = value
704- metadata_node = self.find_first_child_named(node, 'metadata')
705- if metadata_node is not None:
706- metadata = self.metadata_deserializer.extract_metadata(
707- metadata_node)
708- data['metadata'] = metadata
709- return data
710-
711- def create(self, string):
712- """Deserialize an xml-formatted server create request"""
713- dom = minidom.parseString(string)
714- server = self._extract_server(dom)
715- return {'body': {'server': server}}
716-
717- def _extract_server(self, node):
718- """Marshal the server attribute of a parsed request"""
719- server = {}
720- server_node = self.find_first_child_named(node, 'server')
721-
722- attributes = ["name", "imageRef", "flavorRef", "adminPass",
723- "accessIPv4", "accessIPv6"]
724- for attr in attributes:
725- if server_node.getAttribute(attr):
726- server[attr] = server_node.getAttribute(attr)
727-
728- metadata_node = self.find_first_child_named(server_node, "metadata")
729- if metadata_node is not None:
730- server["metadata"] = self.extract_metadata(metadata_node)
731-
732- personality = self._extract_personality(server_node)
733- if personality is not None:
734- server["personality"] = personality
735-
736- networks = self._extract_networks(server_node)
737- if networks is not None:
738- server["networks"] = networks
739-
740- security_groups = self._extract_security_groups(server_node)
741- if security_groups is not None:
742- server["security_groups"] = security_groups
743-
744- return server
745-
746- def _extract_personality(self, server_node):
747- """Marshal the personality attribute of a parsed request"""
748- node = self.find_first_child_named(server_node, "personality")
749- if node is not None:
750- personality = []
751- for file_node in self.find_children_named(node, "file"):
752- item = {}
753- if file_node.hasAttribute("path"):
754- item["path"] = file_node.getAttribute("path")
755- item["contents"] = self.extract_text(file_node)
756- personality.append(item)
757- return personality
758- else:
759- return None
760-
761- def _extract_networks(self, server_node):
762- """Marshal the networks attribute of a parsed request"""
763- node = self.find_first_child_named(server_node, "networks")
764- if node is not None:
765- networks = []
766- for network_node in self.find_children_named(node,
767- "network"):
768- item = {}
769- if network_node.hasAttribute("uuid"):
770- item["uuid"] = network_node.getAttribute("uuid")
771- if network_node.hasAttribute("fixed_ip"):
772- item["fixed_ip"] = network_node.getAttribute("fixed_ip")
773- networks.append(item)
774- return networks
775- else:
776- return None
777-
778- def _extract_security_groups(self, server_node):
779- """Marshal the security_groups attribute of a parsed request"""
780- node = self.find_first_child_named(server_node, "security_groups")
781- if node is not None:
782- security_groups = []
783- for sg_node in self.find_children_named(node, "security_group"):
784- item = {}
785- name_node = self.find_first_child_named(sg_node, "name")
786- if name_node:
787- item["name"] = self.extract_text(name_node)
788- security_groups.append(item)
789- return security_groups
790- else:
791- return None
792
793=== modified file 'nova/api/openstack/servers.py'
794--- nova/api/openstack/servers.py 2011-09-22 15:41:34 +0000
795+++ nova/api/openstack/servers.py 2011-09-23 07:08:19 +0000
796@@ -1,4 +1,5 @@
797 # Copyright 2010 OpenStack LLC.
798+# Copyright 2011 Piston Cloud Computing, Inc
799 # All Rights Reserved.
800 #
801 # Licensed under the Apache License, Version 2.0 (the "License"); you may
802@@ -21,15 +22,17 @@
803 from lxml import etree
804 from webob import exc
805 import webob
806+from xml.dom import minidom
807
808 from nova import compute
809 from nova import db
810 from nova import exception
811 from nova import flags
812+from nova import image
813 from nova import log as logging
814 from nova import utils
815+from nova import quota
816 from nova.api.openstack import common
817-from nova.api.openstack import create_instance_helper as helper
818 from nova.api.openstack import ips
819 from nova.api.openstack import wsgi
820 from nova.compute import instance_types
821@@ -40,6 +43,7 @@
822 import nova.api.openstack.views.images
823 import nova.api.openstack.views.servers
824 from nova.api.openstack import xmlutil
825+from nova.rpc import common as rpc_common
826
827
828 LOG = logging.getLogger('nova.api.openstack.servers')
829@@ -72,7 +76,6 @@
830
831 def __init__(self):
832 self.compute_api = compute.API()
833- self.helper = helper.CreateInstanceHelper(self)
834
835 def index(self, req):
836 """ Returns a list of server names and ids for a given user """
837@@ -106,6 +109,12 @@
838 def _action_rebuild(self, info, request, instance_id):
839 raise NotImplementedError()
840
841+ def _get_block_device_mapping(self, data):
842+ """Get block_device_mapping from 'server' dictionary.
843+ Overidden by volumes controller.
844+ """
845+ return None
846+
847 def _get_servers(self, req, is_detail):
848 """Returns a list of servers, taking into account any search
849 options specified.
850@@ -157,6 +166,181 @@
851 limited_list = self._limit_items(instance_list, req)
852 return self._build_list(req, limited_list, is_detail=is_detail)
853
854+ def _handle_quota_error(self, error):
855+ """
856+ Reraise quota errors as api-specific http exceptions
857+ """
858+
859+ code_mappings = {
860+ "OnsetFileLimitExceeded":
861+ _("Personality file limit exceeded"),
862+ "OnsetFilePathLimitExceeded":
863+ _("Personality file path too long"),
864+ "OnsetFileContentLimitExceeded":
865+ _("Personality file content too long"),
866+ "InstanceLimitExceeded":
867+ _("Instance quotas have been exceeded")}
868+
869+ expl = code_mappings.get(error.code)
870+ if expl:
871+ raise exc.HTTPRequestEntityTooLarge(explanation=expl,
872+ headers={'Retry-After': 0})
873+ # if the original error is okay, just reraise it
874+ raise error
875+
876+ def _deserialize_create(self, request):
877+ """
878+ Deserialize a create request
879+
880+ Overrides normal behavior in the case of xml content
881+ """
882+ if request.content_type == "application/xml":
883+ deserializer = ServerXMLDeserializer()
884+ return deserializer.deserialize(request.body)
885+ else:
886+ return self._deserialize(request.body, request.get_content_type())
887+
888+ def _validate_server_name(self, value):
889+ if not isinstance(value, basestring):
890+ msg = _("Server name is not a string or unicode")
891+ raise exc.HTTPBadRequest(explanation=msg)
892+
893+ if value.strip() == '':
894+ msg = _("Server name is an empty string")
895+ raise exc.HTTPBadRequest(explanation=msg)
896+
897+ def _get_kernel_ramdisk_from_image(self, req, image_service, image_id):
898+ """Fetch an image from the ImageService, then if present, return the
899+ associated kernel and ramdisk image IDs.
900+ """
901+ context = req.environ['nova.context']
902+ image_meta = image_service.show(context, image_id)
903+ # NOTE(sirp): extracted to a separate method to aid unit-testing, the
904+ # new method doesn't need a request obj or an ImageService stub
905+ kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image(
906+ image_meta)
907+ return kernel_id, ramdisk_id
908+
909+ @staticmethod
910+ def _do_get_kernel_ramdisk_from_image(image_meta):
911+ """Given an ImageService image_meta, return kernel and ramdisk image
912+ ids if present.
913+
914+ This is only valid for `ami` style images.
915+ """
916+ image_id = image_meta['id']
917+ if image_meta['status'] != 'active':
918+ raise exception.ImageUnacceptable(image_id=image_id,
919+ reason=_("status is not active"))
920+
921+ if image_meta.get('container_format') != 'ami':
922+ return None, None
923+
924+ try:
925+ kernel_id = image_meta['properties']['kernel_id']
926+ except KeyError:
927+ raise exception.KernelNotFoundForImage(image_id=image_id)
928+
929+ try:
930+ ramdisk_id = image_meta['properties']['ramdisk_id']
931+ except KeyError:
932+ ramdisk_id = None
933+
934+ return kernel_id, ramdisk_id
935+
936+ def _get_injected_files(self, personality):
937+ """
938+ Create a list of injected files from the personality attribute
939+
940+ At this time, injected_files must be formatted as a list of
941+ (file_path, file_content) pairs for compatibility with the
942+ underlying compute service.
943+ """
944+ injected_files = []
945+
946+ for item in personality:
947+ try:
948+ path = item['path']
949+ contents = item['contents']
950+ except KeyError as key:
951+ expl = _('Bad personality format: missing %s') % key
952+ raise exc.HTTPBadRequest(explanation=expl)
953+ except TypeError:
954+ expl = _('Bad personality format')
955+ raise exc.HTTPBadRequest(explanation=expl)
956+ try:
957+ contents = base64.b64decode(contents)
958+ except TypeError:
959+ expl = _('Personality content for %s cannot be decoded') % path
960+ raise exc.HTTPBadRequest(explanation=expl)
961+ injected_files.append((path, contents))
962+ return injected_files
963+
964+ def _get_server_admin_password_old_style(self, server):
965+ """ Determine the admin password for a server on creation """
966+ return utils.generate_password(FLAGS.password_length)
967+
968+ def _get_server_admin_password_new_style(self, server):
969+ """ Determine the admin password for a server on creation """
970+ password = server.get('adminPass')
971+
972+ if password is None:
973+ return utils.generate_password(FLAGS.password_length)
974+ if not isinstance(password, basestring) or password == '':
975+ msg = _("Invalid adminPass")
976+ raise exc.HTTPBadRequest(explanation=msg)
977+ return password
978+
979+ def _get_requested_networks(self, requested_networks):
980+ """
981+ Create a list of requested networks from the networks attribute
982+ """
983+ networks = []
984+ for network in requested_networks:
985+ try:
986+ network_uuid = network['uuid']
987+
988+ if not utils.is_uuid_like(network_uuid):
989+ msg = _("Bad networks format: network uuid is not in"
990+ " proper format (%s)") % network_uuid
991+ raise exc.HTTPBadRequest(explanation=msg)
992+
993+ # The fixed IP address is optional; if it is not provided,
994+ # one of the available IP addresses from the network will
995+ # be used.
996+ address = network.get('fixed_ip', None)
997+ if address is not None and not utils.is_valid_ipv4(address):
998+ msg = _("Invalid fixed IP address (%s)") % address
999+ raise exc.HTTPBadRequest(explanation=msg)
1000+ # check if the network id is already present in the list,
1001+ # we don't want duplicate networks to be passed
1002+ # at the boot time
1003+ for id, ip in networks:
1004+ if id == network_uuid:
1005+ expl = _("Duplicate networks (%s) are not allowed")\
1006+ % network_uuid
1007+ raise exc.HTTPBadRequest(explanation=expl)
1008+
1009+ networks.append((network_uuid, address))
1010+ except KeyError as key:
1011+ expl = _('Bad network format: missing %s') % key
1012+ raise exc.HTTPBadRequest(explanation=expl)
1013+ except TypeError:
1014+ expl = _('Bad networks format')
1015+ raise exc.HTTPBadRequest(explanation=expl)
1016+
1017+ return networks
1018+
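For illustration, a 'networks' attribute this accepts (UUIDs and address invented); fixed_ip is optional:

    requested_networks = [
        {'uuid': '12345678-1234-1234-1234-123456789012',
         'fixed_ip': '10.0.0.5'},
        {'uuid': '87654321-4321-4321-4321-210987654321'},
    ]
    # _get_requested_networks(requested_networks) ->
    #     [('12345678-...', '10.0.0.5'), ('87654321-...', None)]
    # Malformed uuids, invalid IPv4 addresses and duplicate networks
    # all raise HTTPBadRequest.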
1019+ def _validate_user_data(self, user_data):
1020+ """Check if the user_data is encoded properly"""
1021+ if not user_data:
1022+ return
1023+ try:
1024+ user_data = base64.b64decode(user_data)
1025+ except TypeError:
1026+ expl = _('Userdata content cannot be decoded')
1027+ raise exc.HTTPBadRequest(explanation=expl)
1028+
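And the matching user_data check, with an invented script:

    import base64

    user_data = base64.b64encode('#!/bin/sh\necho hello')
    # _validate_user_data(user_data) returns None on success; content
    # that base64.b64decode() rejects raises HTTPBadRequest.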
1029 @novaclient_exception_converter
1030 @scheduler_api.redirect_handler
1031 def show(self, req, id):
1032@@ -174,22 +358,168 @@
1033
1034 def create(self, req, body):
1035 """ Creates a new server for a given user """
1036- if 'server' in body:
1037- body['server']['key_name'] = self._get_key_name(req, body)
1038-
1039- extra_values = None
1040- extra_values, instances = self.helper.create_instance(
1041- req, body, self.compute_api.create)
1042-
1043- # We can only return 1 instance via the API, if we happen to
1044- # build more than one... instances is a list, so we'll just
1045- # use the first one..
1046- inst = instances[0]
1047- for key in ['instance_type', 'image_ref']:
1048- inst[key] = extra_values[key]
1049-
1050- server = self._build_view(req, inst, is_detail=True)
1051- server['server']['adminPass'] = extra_values['password']
1052+
1053+ if not body:
1054+ raise exc.HTTPUnprocessableEntity()
1055+
1056+ if 'server' not in body:
1057+ raise exc.HTTPUnprocessableEntity()
1058+
1059+ body['server']['key_name'] = self._get_key_name(req, body)
1060+
1061+ context = req.environ['nova.context']
1062+ server_dict = body['server']
1063+ password = self._get_server_admin_password(server_dict)
1064+
1065+ if 'name' not in server_dict:
1066+ msg = _("Server name is not defined")
1067+ raise exc.HTTPBadRequest(explanation=msg)
1068+
1069+ name = server_dict['name']
1070+ self._validate_server_name(name)
1071+ name = name.strip()
1072+
1073+ image_href = self._image_ref_from_req_data(body)
1074+ # If the image href was generated by nova api, strip image_href
1075+ # down to an id and use the default glance connection params
1076+
1077+ if str(image_href).startswith(req.application_url):
1078+ image_href = image_href.split('/').pop()
1079+ try:
1080+ image_service, image_id = image.get_image_service(context,
1081+ image_href)
1082+ kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
1083+ req, image_service, image_id)
1084+ images = set([str(x['id']) for x in image_service.index(context)])
1085+ assert str(image_id) in images
1086+ except Exception, e:
1087+ msg = _("Cannot find requested image %(image_href)s: "
1088+ "%(e)s") % locals()
1089+ raise exc.HTTPBadRequest(explanation=msg)
1090+
1091+ personality = server_dict.get('personality')
1092+ config_drive = server_dict.get('config_drive')
1093+
1094+ injected_files = []
1095+ if personality:
1096+ injected_files = self._get_injected_files(personality)
1097+
1098+ sg_names = []
1099+ security_groups = server_dict.get('security_groups')
1100+ if security_groups is not None:
1101+ sg_names = [sg['name'] for sg in security_groups if sg.get('name')]
1102+ if not sg_names:
1103+ sg_names.append('default')
1104+
1105+ sg_names = list(set(sg_names))
1106+
1107+ requested_networks = server_dict.get('networks')
1108+ if requested_networks is not None:
1109+ requested_networks = self._get_requested_networks(
1110+ requested_networks)
1111+
1112+ try:
1113+ flavor_id = self._flavor_id_from_req_data(body)
1114+ except ValueError as error:
1115+ msg = _("Invalid flavorRef provided.")
1116+ raise exc.HTTPBadRequest(explanation=msg)
1117+
1118+ zone_blob = server_dict.get('blob')
1119+
1120+ # optional openstack extensions:
1121+ key_name = server_dict.get('key_name')
1122+ user_data = server_dict.get('user_data')
1123+ self._validate_user_data(user_data)
1124+
1125+ availability_zone = server_dict.get('availability_zone')
1129+
1130+ block_device_mapping = self._get_block_device_mapping(server_dict)
1131+
1132+ # Only allow admins to specify their own reservation_ids
1133+ # This is really meant to allow zones to work.
1134+ reservation_id = server_dict.get('reservation_id')
1135+ if all([reservation_id is not None,
1136+ reservation_id != '',
1137+ not context.is_admin]):
1138+ reservation_id = None
1139+
1140+ ret_resv_id = server_dict.get('return_reservation_id', False)
1141+
1142+ min_count = server_dict.get('min_count')
1143+ max_count = server_dict.get('max_count')
1144+ # min_count and max_count are optional. If they exist, they come
1145+ # in as strings. We want to default 'min_count' to 1, and default
1146+ # 'max_count' to be 'min_count'.
1147+ min_count = int(min_count) if min_count else 1
1148+ max_count = int(max_count) if max_count else min_count
1149+ if min_count > max_count:
1150+ min_count = max_count
1151+
1152+ try:
1153+ inst_type = \
1154+ instance_types.get_instance_type_by_flavor_id(flavor_id)
1155+
1156+ (instances, resv_id) = self.compute_api.create(context,
1157+ inst_type,
1158+ image_id,
1159+ kernel_id=kernel_id,
1160+ ramdisk_id=ramdisk_id,
1161+ display_name=name,
1162+ display_description=name,
1163+ key_name=key_name,
1164+ metadata=server_dict.get('metadata', {}),
1165+ access_ip_v4=server_dict.get('accessIPv4'),
1166+ access_ip_v6=server_dict.get('accessIPv6'),
1167+ injected_files=injected_files,
1168+ admin_password=password,
1169+ zone_blob=zone_blob,
1170+ reservation_id=reservation_id,
1171+ min_count=min_count,
1172+ max_count=max_count,
1173+ requested_networks=requested_networks,
1174+ security_group=sg_names,
1175+ user_data=user_data,
1176+ availability_zone=availability_zone,
1177+ config_drive=config_drive,
1178+ block_device_mapping=block_device_mapping,
1179+ wait_for_instances=not ret_resv_id)
1180+ except quota.QuotaError as error:
1181+ self._handle_quota_error(error)
1182+ except exception.ImageNotFound as error:
1183+ msg = _("Can not find requested image")
1184+ raise exc.HTTPBadRequest(explanation=msg)
1185+ except exception.FlavorNotFound as error:
1186+ msg = _("Invalid flavorRef provided.")
1187+ raise exc.HTTPBadRequest(explanation=msg)
1188+ except exception.KeypairNotFound as error:
1189+ msg = _("Invalid key_name provided.")
1190+ raise exc.HTTPBadRequest(explanation=msg)
1191+ except exception.SecurityGroupNotFound as error:
1192+ raise exc.HTTPBadRequest(explanation=unicode(error))
1193+ except rpc_common.RemoteError as err:
1194+ msg = "%(err_type)s: %(err_msg)s" % \
1195+ {'err_type': err.exc_type, 'err_msg': err.value}
1196+ raise exc.HTTPBadRequest(explanation=msg)
1197+ # Let the caller deal with unhandled exceptions.
1198+
1199+ # If the caller wanted a reservation_id, return it
1200+ if ret_resv_id:
1201+ return {'reservation_id': resv_id}
1202+
1203+ # Instances is a list
1204+ instance = instances[0]
1205+ if not instance.get('_is_precooked', False):
1206+ instance['instance_type'] = inst_type
1207+ instance['image_ref'] = image_href
1208+
1209+ server = self._build_view(req, instance, is_detail=True)
1210+ if '_is_precooked' in server['server']:
1211+ del server['server']['_is_precooked']
1212+ else:
1213+ server['server']['adminPass'] = password
1214 return server
1215
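To make the new flow concrete, a hedged sketch of a v1.1 create body exercising the multi-instance and reservation-ID paths (all values invented):

    body = {'server': {
        'name': 'test-server',
        'imageRef': 'http://localhost/v1.1/images/2',
        'flavorRef': 'http://localhost/v1.1/flavors/3',
        'min_count': '2',
        'max_count': '4',
        'return_reservation_id': True,
    }}
    # return_reservation_id makes the controller pass
    # wait_for_instances=False to compute_api.create() (a cast rather
    # than a call) and return {'reservation_id': resv_id} immediately;
    # without it, the first created instance is returned as before.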
1216 def _delete(self, context, id):
1217@@ -212,7 +542,7 @@
1218
1219 if 'name' in body['server']:
1220 name = body['server']['name']
1221- self.helper._validate_server_name(name)
1222+ self._validate_server_name(name)
1223 update_dict['display_name'] = name.strip()
1224
1225 if 'accessIPv4' in body['server']:
1226@@ -284,17 +614,17 @@
1227
1228 except KeyError as missing_key:
1229 msg = _("createBackup entity requires %s attribute") % missing_key
1230- raise webob.exc.HTTPBadRequest(explanation=msg)
1231+ raise exc.HTTPBadRequest(explanation=msg)
1232
1233 except TypeError:
1234 msg = _("Malformed createBackup entity")
1235- raise webob.exc.HTTPBadRequest(explanation=msg)
1236+ raise exc.HTTPBadRequest(explanation=msg)
1237
1238 try:
1239 rotation = int(rotation)
1240 except ValueError:
1241 msg = _("createBackup attribute 'rotation' must be an integer")
1242- raise webob.exc.HTTPBadRequest(explanation=msg)
1243+ raise exc.HTTPBadRequest(explanation=msg)
1244
1245 # preserve link to server in image properties
1246 server_ref = os.path.join(req.application_url,
1247@@ -309,7 +639,7 @@
1248 props.update(metadata)
1249 except ValueError:
1250 msg = _("Invalid metadata")
1251- raise webob.exc.HTTPBadRequest(explanation=msg)
1252+ raise exc.HTTPBadRequest(explanation=msg)
1253
1254 image = self.compute_api.backup(context,
1255 instance_id,
1256@@ -687,7 +1017,7 @@
1257
1258 def _get_server_admin_password(self, server):
1259 """ Determine the admin password for a server on creation """
1260- return self.helper._get_server_admin_password_old_style(server)
1261+ return self._get_server_admin_password_old_style(server)
1262
1263 def _get_server_search_options(self):
1264 """Return server search options allowed by non-admin"""
1265@@ -873,11 +1203,11 @@
1266
1267 except KeyError:
1268 msg = _("createImage entity requires name attribute")
1269- raise webob.exc.HTTPBadRequest(explanation=msg)
1270+ raise exc.HTTPBadRequest(explanation=msg)
1271
1272 except TypeError:
1273 msg = _("Malformed createImage entity")
1274- raise webob.exc.HTTPBadRequest(explanation=msg)
1275+ raise exc.HTTPBadRequest(explanation=msg)
1276
1277 # preserve link to server in image properties
1278 server_ref = os.path.join(req.application_url,
1279@@ -892,7 +1222,7 @@
1280 props.update(metadata)
1281 except ValueError:
1282 msg = _("Invalid metadata")
1283- raise webob.exc.HTTPBadRequest(explanation=msg)
1284+ raise exc.HTTPBadRequest(explanation=msg)
1285
1286 image = self.compute_api.snapshot(context,
1287 instance_id,
1288@@ -912,7 +1242,7 @@
1289
1290 def _get_server_admin_password(self, server):
1291 """ Determine the admin password for a server on creation """
1292- return self.helper._get_server_admin_password_new_style(server)
1293+ return self._get_server_admin_password_new_style(server)
1294
1295 def _get_server_search_options(self):
1296 """Return server search options allowed by non-admin"""
1297@@ -1057,6 +1387,227 @@
1298 return self._to_xml(server)
1299
1300
1301+class ServerXMLDeserializer(wsgi.XMLDeserializer):
1302+ """
1303+ Deserializer to handle XML-formatted server create requests.
1304+
1305+ Handles standard server attributes as well as optional metadata
1306+ and personality attributes.
1307+ """
1308+
1309+ metadata_deserializer = common.MetadataXMLDeserializer()
1310+
1311+ def create(self, string):
1312+ """Deserialize an xml-formatted server create request"""
1313+ dom = minidom.parseString(string)
1314+ server = self._extract_server(dom)
1315+ return {'body': {'server': server}}
1316+
1317+ def _extract_server(self, node):
1318+ """Marshal the server attribute of a parsed request"""
1319+ server = {}
1320+ server_node = self.find_first_child_named(node, 'server')
1321+
1322+ attributes = ["name", "imageId", "flavorId", "adminPass"]
1323+ for attr in attributes:
1324+ if server_node.getAttribute(attr):
1325+ server[attr] = server_node.getAttribute(attr)
1326+
1327+ metadata_node = self.find_first_child_named(server_node, "metadata")
1328+ server["metadata"] = self.metadata_deserializer.extract_metadata(
1329+ metadata_node)
1330+
1331+ server["personality"] = self._extract_personality(server_node)
1332+
1333+ return server
1334+
1335+ def _extract_personality(self, server_node):
1336+ """Marshal the personality attribute of a parsed request"""
1337+ node = self.find_first_child_named(server_node, "personality")
1338+ personality = []
1339+ if node is not None:
1340+ for file_node in self.find_children_named(node, "file"):
1341+ item = {}
1342+ if file_node.hasAttribute("path"):
1343+ item["path"] = file_node.getAttribute("path")
1344+ item["contents"] = self.extract_text(file_node)
1345+ personality.append(item)
1346+ return personality
1347+
1348+
1349+class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer):
1350+ """
1351+ Deserializer to handle XML-formatted server create and action requests.
1352+
1353+ Handles standard server attributes as well as optional metadata
1354+ and personality attributes.
1355+ """
1356+
1357+ metadata_deserializer = common.MetadataXMLDeserializer()
1358+
1359+ def action(self, string):
1360+ dom = minidom.parseString(string)
1361+ action_node = dom.childNodes[0]
1362+ action_name = action_node.tagName
1363+
1364+ action_deserializer = {
1365+ 'createImage': self._action_create_image,
1366+ 'createBackup': self._action_create_backup,
1367+ 'changePassword': self._action_change_password,
1368+ 'reboot': self._action_reboot,
1369+ 'rebuild': self._action_rebuild,
1370+ 'resize': self._action_resize,
1371+ 'confirmResize': self._action_confirm_resize,
1372+ 'revertResize': self._action_revert_resize,
1373+ }.get(action_name, self.default)
1374+
1375+ action_data = action_deserializer(action_node)
1376+
1377+ return {'body': {action_name: action_data}}
1378+
1379+ def _action_create_image(self, node):
1380+ return self._deserialize_image_action(node, ('name',))
1381+
1382+ def _action_create_backup(self, node):
1383+ attributes = ('name', 'backup_type', 'rotation')
1384+ return self._deserialize_image_action(node, attributes)
1385+
1386+ def _action_change_password(self, node):
1387+ if not node.hasAttribute("adminPass"):
1388+ raise AttributeError("No adminPass was specified in request")
1389+ return {"adminPass": node.getAttribute("adminPass")}
1390+
1391+ def _action_reboot(self, node):
1392+ if not node.hasAttribute("type"):
1393+ raise AttributeError("No reboot type was specified in request")
1394+ return {"type": node.getAttribute("type")}
1395+
1396+ def _action_rebuild(self, node):
1397+ rebuild = {}
1398+ if node.hasAttribute("name"):
1399+ rebuild['name'] = node.getAttribute("name")
1400+
1401+ metadata_node = self.find_first_child_named(node, "metadata")
1402+ if metadata_node is not None:
1403+ rebuild["metadata"] = self.extract_metadata(metadata_node)
1404+
1405+ personality = self._extract_personality(node)
1406+ if personality is not None:
1407+ rebuild["personality"] = personality
1408+
1409+ if not node.hasAttribute("imageRef"):
1410+ raise AttributeError("No imageRef was specified in request")
1411+ rebuild["imageRef"] = node.getAttribute("imageRef")
1412+
1413+ return rebuild
1414+
1415+ def _action_resize(self, node):
1416+ if not node.hasAttribute("flavorRef"):
1417+ raise AttributeError("No flavorRef was specified in request")
1418+ return {"flavorRef": node.getAttribute("flavorRef")}
1419+
1420+ def _action_confirm_resize(self, node):
1421+ return None
1422+
1423+ def _action_revert_resize(self, node):
1424+ return None
1425+
1426+ def _deserialize_image_action(self, node, allowed_attributes):
1427+ data = {}
1428+ for attribute in allowed_attributes:
1429+ value = node.getAttribute(attribute)
1430+ if value:
1431+ data[attribute] = value
1432+ metadata_node = self.find_first_child_named(node, 'metadata')
1433+ if metadata_node is not None:
1434+ metadata = self.metadata_deserializer.extract_metadata(
1435+ metadata_node)
1436+ data['metadata'] = metadata
1437+ return data
1438+
1439+ def create(self, string):
1440+ """Deserialize an xml-formatted server create request"""
1441+ dom = minidom.parseString(string)
1442+ server = self._extract_server(dom)
1443+ return {'body': {'server': server}}
1444+
1445+ def _extract_server(self, node):
1446+ """Marshal the server attribute of a parsed request"""
1447+ server = {}
1448+ server_node = self.find_first_child_named(node, 'server')
1449+
1450+ attributes = ["name", "imageRef", "flavorRef", "adminPass",
1451+ "accessIPv4", "accessIPv6"]
1452+ for attr in attributes:
1453+ if server_node.getAttribute(attr):
1454+ server[attr] = server_node.getAttribute(attr)
1455+
1456+ metadata_node = self.find_first_child_named(server_node, "metadata")
1457+ if metadata_node is not None:
1458+ server["metadata"] = self.extract_metadata(metadata_node)
1459+
1460+ personality = self._extract_personality(server_node)
1461+ if personality is not None:
1462+ server["personality"] = personality
1463+
1464+ networks = self._extract_networks(server_node)
1465+ if networks is not None:
1466+ server["networks"] = networks
1467+
1468+ security_groups = self._extract_security_groups(server_node)
1469+ if security_groups is not None:
1470+ server["security_groups"] = security_groups
1471+
1472+ return server
1473+
1474+ def _extract_personality(self, server_node):
1475+ """Marshal the personality attribute of a parsed request"""
1476+ node = self.find_first_child_named(server_node, "personality")
1477+ if node is not None:
1478+ personality = []
1479+ for file_node in self.find_children_named(node, "file"):
1480+ item = {}
1481+ if file_node.hasAttribute("path"):
1482+ item["path"] = file_node.getAttribute("path")
1483+ item["contents"] = self.extract_text(file_node)
1484+ personality.append(item)
1485+ return personality
1486+ else:
1487+ return None
1488+
1489+ def _extract_networks(self, server_node):
1490+ """Marshal the networks attribute of a parsed request"""
1491+ node = self.find_first_child_named(server_node, "networks")
1492+ if node is not None:
1493+ networks = []
1494+ for network_node in self.find_children_named(node,
1495+ "network"):
1496+ item = {}
1497+ if network_node.hasAttribute("uuid"):
1498+ item["uuid"] = network_node.getAttribute("uuid")
1499+ if network_node.hasAttribute("fixed_ip"):
1500+ item["fixed_ip"] = network_node.getAttribute("fixed_ip")
1501+ networks.append(item)
1502+ return networks
1503+ else:
1504+ return None
1505+
1506+ def _extract_security_groups(self, server_node):
1507+ """Marshal the security_groups attribute of a parsed request"""
1508+ node = self.find_first_child_named(server_node, "security_groups")
1509+ if node is not None:
1510+ security_groups = []
1511+ for sg_node in self.find_children_named(node, "security_group"):
1512+ item = {}
1513+ name_node = self.find_first_child_named(sg_node, "name")
1514+ if name_node is not None:
1515+ item["name"] = self.extract_text(name_node)
1516+ security_groups.append(item)
1517+ return security_groups
1518+ else:
1519+ return None
1520+
1521+
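A hedged example of the v1.1 XML the create() path above accepts (attribute values invented; the <meta> element format follows the common metadata deserializer):

    xml = """<server name="new-server" imageRef="1" flavorRef="2"
                     accessIPv4="10.0.0.10">
        <metadata><meta key="build">test</meta></metadata>
        <networks>
            <network uuid="12345678-1234-1234-1234-123456789012"
                     fixed_ip="10.0.1.12"/>
        </networks>
        <security_groups>
            <security_group><name>default</name></security_group>
        </security_groups>
    </server>"""
    # ServerXMLDeserializerV11().create(xml)['body']['server'] yields
    # the name/imageRef/flavorRef/accessIPv4 attributes plus metadata,
    # networks and security_groups keys.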
1522 def create_resource(version='1.0'):
1523 controller = {
1524 '1.0': ControllerV10,
1525@@ -1096,8 +1647,8 @@
1526 }
1527
1528 xml_deserializer = {
1529- '1.0': helper.ServerXMLDeserializer(),
1530- '1.1': helper.ServerXMLDeserializerV11(),
1531+ '1.0': ServerXMLDeserializer(),
1532+ '1.1': ServerXMLDeserializerV11(),
1533 }[version]
1534
1535 body_deserializers = {
1536
1537=== modified file 'nova/api/openstack/zones.py'
1538--- nova/api/openstack/zones.py 2011-08-31 18:54:30 +0000
1539+++ nova/api/openstack/zones.py 2011-09-23 07:08:19 +0000
1540@@ -25,8 +25,8 @@
1541 from nova.compute import api as compute
1542 from nova.scheduler import api
1543
1544-from nova.api.openstack import create_instance_helper as helper
1545 from nova.api.openstack import common
1546+from nova.api.openstack import servers
1547 from nova.api.openstack import wsgi
1548
1549
1550@@ -67,7 +67,6 @@
1551
1552 def __init__(self):
1553 self.compute_api = compute.API()
1554- self.helper = helper.CreateInstanceHelper(self)
1555
1556 def index(self, req):
1557 """Return all zones in brief"""
1558@@ -120,18 +119,6 @@
1559 zone = api.zone_update(context, zone_id, body["zone"])
1560 return dict(zone=_scrub_zone(zone))
1561
1562- def boot(self, req, body):
1563- """Creates a new server for a given user while being Zone aware.
1564-
1565- Returns a reservation ID (a UUID).
1566- """
1567- result = None
1568- extra_values, result = self.helper.create_instance(req, body,
1569- self.compute_api.create_all_at_once)
1570-
1571- reservation_id = result
1572- return {'reservation_id': reservation_id}
1573-
1574 @check_encryption_key
1575 def select(self, req, body):
1576 """Returns a weighted list of costs to create instances
1577@@ -155,29 +142,10 @@
1578 blob=cipher_text))
1579 return cooked
1580
1581- def _image_ref_from_req_data(self, data):
1582- return data['server']['imageId']
1583-
1584- def _flavor_id_from_req_data(self, data):
1585- return data['server']['flavorId']
1586-
1587- def _get_server_admin_password(self, server):
1588- """ Determine the admin password for a server on creation """
1589- return self.helper._get_server_admin_password_old_style(server)
1590-
1591
1592 class ControllerV11(Controller):
1593 """Controller for 1.1 Zone resources."""
1594-
1595- def _get_server_admin_password(self, server):
1596- """ Determine the admin password for a server on creation """
1597- return self.helper._get_server_admin_password_new_style(server)
1598-
1599- def _image_ref_from_req_data(self, data):
1600- return data['server']['imageRef']
1601-
1602- def _flavor_id_from_req_data(self, data):
1603- return data['server']['flavorRef']
1604+ pass
1605
1606
1607 def create_resource(version):
1608@@ -199,7 +167,7 @@
1609 serializer = wsgi.ResponseSerializer(body_serializers)
1610
1611 body_deserializers = {
1612- 'application/xml': helper.ServerXMLDeserializer(),
1613+ 'application/xml': servers.ServerXMLDeserializer(),
1614 }
1615 deserializer = wsgi.RequestDeserializer(body_deserializers)
1616
1617
1618=== modified file 'nova/compute/api.py'
1619--- nova/compute/api.py 2011-09-21 21:00:53 +0000
1620+++ nova/compute/api.py 2011-09-23 07:08:19 +0000
1621@@ -74,6 +74,11 @@
1622 return display_name.translate(table, deletions)
1623
1624
1625+def generate_default_display_name(instance):
1626+ """Generate a default display name"""
1627+ return 'Server %s' % instance['id']
1628+
1629+
1630 def _is_able_to_shutdown(instance, instance_id):
1631 vm_state = instance["vm_state"]
1632 task_state = instance["task_state"]
1633@@ -176,17 +181,27 @@
1634
1635 self.network_api.validate_networks(context, requested_networks)
1636
1637- def _check_create_parameters(self, context, instance_type,
1638- image_href, kernel_id=None, ramdisk_id=None,
1639- min_count=None, max_count=None,
1640- display_name='', display_description='',
1641- key_name=None, key_data=None, security_group='default',
1642- availability_zone=None, user_data=None, metadata=None,
1643- injected_files=None, admin_password=None, zone_blob=None,
1644- reservation_id=None, access_ip_v4=None, access_ip_v6=None,
1645- requested_networks=None, config_drive=None,):
1646+ def _create_instance(self, context, instance_type,
1647+ image_href, kernel_id, ramdisk_id,
1648+ min_count, max_count,
1649+ display_name, display_description,
1650+ key_name, key_data, security_group,
1651+ availability_zone, user_data, metadata,
1652+ injected_files, admin_password, zone_blob,
1653+ reservation_id, access_ip_v4, access_ip_v6,
1654+ requested_networks, config_drive,
1655+ block_device_mapping,
1656+ wait_for_instances):
1657 """Verify all the input parameters regardless of the provisioning
1658- strategy being performed."""
1659+ strategy being performed and schedule the instance(s) for
1660+ creation."""
1661+
1662+ if not metadata:
1663+ metadata = {}
1664+ if not display_description:
1665+ display_description = ''
1666+ if not security_group:
1667+ security_group = 'default'
1668
1669 if not instance_type:
1670 instance_type = instance_types.get_default_instance_type()
1671@@ -197,6 +212,8 @@
1672 if not metadata:
1673 metadata = {}
1674
1675+ block_device_mapping = block_device_mapping or []
1676+
1677 num_instances = quota.allowed_instances(context, max_count,
1678 instance_type)
1679 if num_instances < min_count:
1680@@ -297,7 +314,28 @@
1681 'vm_mode': vm_mode,
1682 'root_device_name': root_device_name}
1683
1684- return (num_instances, base_options, image)
1685+ LOG.debug(_("Going to run %s instances...") % num_instances)
1686+
1687+ if wait_for_instances:
1688+ rpc_method = rpc.call
1689+ else:
1690+ rpc_method = rpc.cast
1691+
1692+ # TODO(comstud): We should use rpc.multicall when we can
1693+ # retrieve the full instance dictionary from the scheduler.
1694+ # Otherwise, we could exceed the AMQP max message size limit.
1695+ # This would require the schedulers' schedule_run_instances
1696+ # methods to return an iterator vs a list.
1697+ instances = self._schedule_run_instance(
1698+ rpc_method,
1699+ context, base_options,
1700+ instance_type, zone_blob,
1701+ availability_zone, injected_files,
1702+ admin_password, image,
1703+ num_instances, requested_networks,
1704+ block_device_mapping, security_group)
1705+
1706+ return (instances, reservation_id)
1707
1708 @staticmethod
1709 def _volume_size(instance_type, virtual_name):
1710@@ -393,10 +431,8 @@
1711 including any related table updates (such as security group,
1712 etc).
1713
1714- This will called by create() in the majority of situations,
1715- but create_all_at_once() style Schedulers may initiate the call.
1716- If you are changing this method, be sure to update both
1717- call paths.
1718+ This is called by the scheduler after a location for the
1719+ instance has been determined.
1720 """
1721 elevated = context.elevated()
1722 if security_group is None:
1723@@ -433,7 +469,7 @@
1724 updates = {}
1725 if (not hasattr(instance, 'display_name') or
1726 instance.display_name is None):
1727- updates['display_name'] = "Server %s" % instance_id
1728+ updates['display_name'] = generate_default_display_name(instance)
1729 instance['display_name'] = updates['display_name']
1730 updates['hostname'] = self.hostname_factory(instance)
1731 updates['vm_state'] = vm_states.BUILDING
1732@@ -442,21 +478,23 @@
1733 instance = self.update(context, instance_id, **updates)
1734 return instance
1735
1736- def _ask_scheduler_to_create_instance(self, context, base_options,
1737- instance_type, zone_blob,
1738- availability_zone, injected_files,
1739- admin_password, image,
1740- instance_id=None, num_instances=1,
1741- requested_networks=None):
1742- """Send the run_instance request to the schedulers for processing."""
1743+ def _schedule_run_instance(self,
1744+ rpc_method,
1745+ context, base_options,
1746+ instance_type, zone_blob,
1747+ availability_zone, injected_files,
1748+ admin_password, image,
1749+ num_instances,
1750+ requested_networks,
1751+ block_device_mapping,
1752+ security_group):
1753+ """Send a run_instance request to the schedulers for processing."""
1754+
1755 pid = context.project_id
1756 uid = context.user_id
1757- if instance_id:
1758- LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's"
1759- " instance %(instance_id)s (single-shot)") % locals())
1760- else:
1761- LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's"
1762- " (all-at-once)") % locals())
1763+
1764+ LOG.debug(_("Sending create to scheduler for %(pid)s/%(uid)s") %
1765+ locals())
1766
1767 request_spec = {
1768 'image': image,
1769@@ -465,82 +503,41 @@
1770 'filter': None,
1771 'blob': zone_blob,
1772 'num_instances': num_instances,
1773+ 'block_device_mapping': block_device_mapping,
1774+ 'security_group': security_group,
1775 }
1776
1777- rpc.cast(context,
1778- FLAGS.scheduler_topic,
1779- {"method": "run_instance",
1780- "args": {"topic": FLAGS.compute_topic,
1781- "instance_id": instance_id,
1782- "request_spec": request_spec,
1783- "availability_zone": availability_zone,
1784- "admin_password": admin_password,
1785- "injected_files": injected_files,
1786- "requested_networks": requested_networks}})
1787-
1788- def create_all_at_once(self, context, instance_type,
1789- image_href, kernel_id=None, ramdisk_id=None,
1790- min_count=None, max_count=None,
1791- display_name='', display_description='',
1792- key_name=None, key_data=None, security_group='default',
1793- availability_zone=None, user_data=None, metadata=None,
1794- injected_files=None, admin_password=None, zone_blob=None,
1795- reservation_id=None, block_device_mapping=None,
1796- access_ip_v4=None, access_ip_v6=None,
1797- requested_networks=None, config_drive=None):
1798- """Provision the instances by passing the whole request to
1799- the Scheduler for execution. Returns a Reservation ID
1800- related to the creation of all of these instances."""
1801-
1802- if not metadata:
1803- metadata = {}
1804-
1805- num_instances, base_options, image = self._check_create_parameters(
1806- context, instance_type,
1807- image_href, kernel_id, ramdisk_id,
1808- min_count, max_count,
1809- display_name, display_description,
1810- key_name, key_data, security_group,
1811- availability_zone, user_data, metadata,
1812- injected_files, admin_password, zone_blob,
1813- reservation_id, access_ip_v4, access_ip_v6,
1814- requested_networks, config_drive)
1815-
1816- self._ask_scheduler_to_create_instance(context, base_options,
1817- instance_type, zone_blob,
1818- availability_zone, injected_files,
1819- admin_password, image,
1820- num_instances=num_instances,
1821- requested_networks=requested_networks)
1822-
1823- return base_options['reservation_id']
1824+ return rpc_method(context,
1825+ FLAGS.scheduler_topic,
1826+ {"method": "run_instance",
1827+ "args": {"topic": FLAGS.compute_topic,
1828+ "request_spec": request_spec,
1829+ "admin_password": admin_password,
1830+ "injected_files": injected_files,
1831+ "requested_networks": requested_networks}})
1832
1833 def create(self, context, instance_type,
1834 image_href, kernel_id=None, ramdisk_id=None,
1835 min_count=None, max_count=None,
1836- display_name='', display_description='',
1837- key_name=None, key_data=None, security_group='default',
1838+ display_name=None, display_description=None,
1839+ key_name=None, key_data=None, security_group=None,
1840 availability_zone=None, user_data=None, metadata=None,
1841 injected_files=None, admin_password=None, zone_blob=None,
1842 reservation_id=None, block_device_mapping=None,
1843 access_ip_v4=None, access_ip_v6=None,
1844- requested_networks=None, config_drive=None,):
1845- """
1846- Provision the instances by sending off a series of single
1847- instance requests to the Schedulers. This is fine for trival
1848- Scheduler drivers, but may remove the effectiveness of the
1849- more complicated drivers.
1850-
1851- NOTE: If you change this method, be sure to change
1852- create_all_at_once() at the same time!
1853-
1854- Returns a list of instance dicts.
1855- """
1856-
1857- if not metadata:
1858- metadata = {}
1859-
1860- num_instances, base_options, image = self._check_create_parameters(
1861+ requested_networks=None, config_drive=None,
1862+ wait_for_instances=True):
1863+ """
1864+ Provision instances, sending instance information to the
1865+ scheduler. The scheduler will determine where the instance(s)
1866+ go and will handle creating the DB entries.
1867+
1868+ Returns a tuple of (instances, reservation_id), where instances
1869+ is either None or a list of instance dicts, depending on whether
1870+ we waited for information from the scheduler.
1871+ """
1872+
1873+ (instances, reservation_id) = self._create_instance(
1874 context, instance_type,
1875 image_href, kernel_id, ramdisk_id,
1876 min_count, max_count,
1877@@ -549,27 +546,25 @@
1878 availability_zone, user_data, metadata,
1879 injected_files, admin_password, zone_blob,
1880 reservation_id, access_ip_v4, access_ip_v6,
1881- requested_networks, config_drive)
1882-
1883- block_device_mapping = block_device_mapping or []
1884- instances = []
1885- LOG.debug(_("Going to run %s instances..."), num_instances)
1886- for num in range(num_instances):
1887- instance = self.create_db_entry_for_new_instance(context,
1888- instance_type, image,
1889- base_options, security_group,
1890- block_device_mapping, num=num)
1891- instances.append(instance)
1892- instance_id = instance['id']
1893-
1894- self._ask_scheduler_to_create_instance(context, base_options,
1895- instance_type, zone_blob,
1896- availability_zone, injected_files,
1897- admin_password, image,
1898- instance_id=instance_id,
1899- requested_networks=requested_networks)
1900-
1901- return [dict(x.iteritems()) for x in instances]
1902+ requested_networks, config_drive,
1903+ block_device_mapping,
1904+ wait_for_instances)
1905+
1906+ if instances is None:
1907+ # wait_for_instances must have been False
1908+ return (instances, reservation_id)
1909+
1910+ inst_ret_list = []
1911+ for instance in instances:
1912+ if instance.get('_is_precooked', False):
1913+ inst_ret_list.append(instance)
1914+ else:
1915+ # Scheduler only gives us the 'id'. We need to pull
1916+ # in the created instances from the DB
1917+ instance = self.db.instance_get(context, instance['id'])
1918+ inst_ret_list.append(dict(instance.iteritems()))
1919+
1920+ return (inst_ret_list, reservation_id)
1921
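For illustration, the two ways a caller can now drive create(); inst_type and image_href are placeholders for values obtained elsewhere:

    # Default: block on an rpc.call to the scheduler and get the
    # created instance dicts back.
    (instances, resv_id) = compute_api.create(context, inst_type,
                                              image_href)

    # Fire-and-forget: rpc.cast to the scheduler; only the
    # reservation ID is known at this point.
    (instances, resv_id) = compute_api.create(context, inst_type,
                                              image_href,
                                              wait_for_instances=False)
    assert instances is None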
1922 def has_finished_migration(self, context, instance_uuid):
1923 """Returns true if an instance has a finished migration."""
1924
1925=== modified file 'nova/scheduler/abstract_scheduler.py'
1926--- nova/scheduler/abstract_scheduler.py 2011-09-12 14:36:14 +0000
1927+++ nova/scheduler/abstract_scheduler.py 2011-09-23 07:08:19 +0000
1928@@ -60,24 +60,10 @@
1929 request_spec, kwargs):
1930 """Create the requested resource in this Zone."""
1931 host = build_plan_item['hostname']
1932- base_options = request_spec['instance_properties']
1933- image = request_spec['image']
1934- instance_type = request_spec.get('instance_type')
1935-
1936- # TODO(sandy): I guess someone needs to add block_device_mapping
1937- # support at some point? Also, OS API has no concept of security
1938- # groups.
1939- instance = compute_api.API().create_db_entry_for_new_instance(context,
1940- instance_type, image, base_options, None, [])
1941-
1942- instance_id = instance['id']
1943- kwargs['instance_id'] = instance_id
1944-
1945- queue = db.queue_get_for(context, "compute", host)
1946- params = {"method": "run_instance", "args": kwargs}
1947- rpc.cast(context, queue, params)
1948- LOG.debug(_("Provisioning locally via compute node %(host)s")
1949- % locals())
1950+ instance = self.create_instance_db_entry(context, request_spec)
1951+ driver.cast_to_compute_host(context, host,
1952+ 'run_instance', instance_id=instance['id'], **kwargs)
1953+ return driver.encode_instance(instance, local=True)
1954
1955 def _decrypt_blob(self, blob):
1956 """Returns the decrypted blob or None if invalid. Broken out
1957@@ -112,7 +98,7 @@
1958 files = kwargs['injected_files']
1959 child_zone = zone_info['child_zone']
1960 child_blob = zone_info['child_blob']
1961- zone = db.zone_get(context, child_zone)
1962+ zone = db.zone_get(context.elevated(), child_zone)
1963 url = zone.api_url
1964 LOG.debug(_("Forwarding instance create call to child zone %(url)s"
1965 ". ReservationID=%(reservation_id)s") % locals())
1966@@ -132,12 +118,13 @@
1967 # arguments are passed as keyword arguments
1968 # (there's a reasonable default for ipgroups in the
1969 # novaclient call).
1970- nova.servers.create(name, image_ref, flavor_id,
1971+ instance = nova.servers.create(name, image_ref, flavor_id,
1972 meta=meta, files=files, zone_blob=child_blob,
1973 reservation_id=reservation_id)
1974+ return driver.encode_instance(instance._info, local=False)
1975
1976 def _provision_resource_from_blob(self, context, build_plan_item,
1977- instance_id, request_spec, kwargs):
1978+ request_spec, kwargs):
1979 """Create the requested resource locally or in a child zone
1980 based on what is stored in the zone blob info.
1981
1982@@ -165,21 +152,21 @@
1983
1984 # Valid data ... is it for us?
1985 if 'child_zone' in host_info and 'child_blob' in host_info:
1986- self._ask_child_zone_to_create_instance(context, host_info,
1987- request_spec, kwargs)
1988+ instance = self._ask_child_zone_to_create_instance(context,
1989+ host_info, request_spec, kwargs)
1990 else:
1991- self._provision_resource_locally(context, host_info, request_spec,
1992- kwargs)
1993+ instance = self._provision_resource_locally(context,
1994+ host_info, request_spec, kwargs)
1995+ return instance
1996
1997- def _provision_resource(self, context, build_plan_item, instance_id,
1998+ def _provision_resource(self, context, build_plan_item,
1999 request_spec, kwargs):
2000 """Create the requested resource in this Zone or a child zone."""
2001 if "hostname" in build_plan_item:
2002- self._provision_resource_locally(context, build_plan_item,
2003- request_spec, kwargs)
2004- return
2005- self._provision_resource_from_blob(context, build_plan_item,
2006- instance_id, request_spec, kwargs)
2007+ return self._provision_resource_locally(context,
2008+ build_plan_item, request_spec, kwargs)
2009+ return self._provision_resource_from_blob(context,
2010+ build_plan_item, request_spec, kwargs)
2011
2012 def _adjust_child_weights(self, child_results, zones):
2013 """Apply the Scale and Offset values from the Zone definition
2014@@ -205,8 +192,7 @@
2015 LOG.exception(_("Bad child zone scaling values "
2016 "for Zone: %(zone_id)s") % locals())
2017
2018- def schedule_run_instance(self, context, instance_id, request_spec,
2019- *args, **kwargs):
2020+ def schedule_run_instance(self, context, request_spec, *args, **kwargs):
2021 """This method is called from nova.compute.api to provision
2022 an instance. However we need to look at the parameters being
2023 passed in to see if this is a request to:
2024@@ -214,13 +200,16 @@
2025 2. Use the Build Plan information in the request parameters
2026 to simply create the instance (either in this zone or
2027 a child zone).
2028+
2029+ Returns a list of the instances created.
2030 """
2031 # TODO(sandy): We'll have to look for richer specs at some point.
2032 blob = request_spec.get('blob')
2033 if blob:
2034- self._provision_resource(context, request_spec, instance_id,
2035- request_spec, kwargs)
2036- return None
2037+ instance = self._provision_resource(context,
2038+ request_spec, request_spec, kwargs)
2039+ # Caller expects a list of instances
2040+ return [instance]
2041
2042 num_instances = request_spec.get('num_instances', 1)
2043 LOG.debug(_("Attempting to build %(num_instances)d instance(s)") %
2044@@ -231,16 +220,16 @@
2045 if not build_plan:
2046 raise driver.NoValidHost(_('No hosts were available'))
2047
2048+ instances = []
2049 for num in xrange(num_instances):
2050 if not build_plan:
2051 break
2052 build_plan_item = build_plan.pop(0)
2053- self._provision_resource(context, build_plan_item, instance_id,
2054- request_spec, kwargs)
2055+ instance = self._provision_resource(context,
2056+ build_plan_item, request_spec, kwargs)
2057+ instances.append(instance)
2058
2059- # Returning None short-circuits the routing to Compute (since
2060- # we've already done it here)
2061- return None
2062+ return instances
2063
2064 def select(self, context, request_spec, *args, **kwargs):
2065 """Select returns a list of weights and zone/host information
2066@@ -251,7 +240,7 @@
2067 return self._schedule(context, "compute", request_spec,
2068 *args, **kwargs)
2069
2070- def schedule(self, context, topic, request_spec, *args, **kwargs):
2071+ def schedule(self, context, topic, method, *args, **kwargs):
2072 """The schedule() contract requires we return the one
2073 best-suited host for this request.
2074 """
2075@@ -285,7 +274,7 @@
2076 weighted_hosts = self.weigh_hosts(topic, request_spec, filtered_hosts)
2077 # Next, tack on the host weights from the child zones
2078 json_spec = json.dumps(request_spec)
2079- all_zones = db.zone_get_all(context)
2080+ all_zones = db.zone_get_all(context.elevated())
2081 child_results = self._call_zone_method(context, "select",
2082 specs=json_spec, zones=all_zones)
2083 self._adjust_child_weights(child_results, all_zones)
2084
2085=== modified file 'nova/scheduler/api.py'
2086--- nova/scheduler/api.py 2011-09-21 12:19:53 +0000
2087+++ nova/scheduler/api.py 2011-09-23 07:08:19 +0000
2088@@ -65,7 +65,7 @@
2089 for item in items:
2090 item['api_url'] = item['api_url'].replace('\\/', '/')
2091 if not items:
2092- items = db.zone_get_all(context)
2093+ items = db.zone_get_all(context.elevated())
2094 return items
2095
2096
2097@@ -116,7 +116,7 @@
2098 pool = greenpool.GreenPool()
2099 results = []
2100 if zones is None:
2101- zones = db.zone_get_all(context)
2102+ zones = db.zone_get_all(context.elevated())
2103 for zone in zones:
2104 try:
2105 # Do this on behalf of the user ...
2106
2107=== modified file 'nova/scheduler/chance.py'
2108--- nova/scheduler/chance.py 2011-03-31 19:29:16 +0000
2109+++ nova/scheduler/chance.py 2011-09-23 07:08:19 +0000
2110@@ -29,12 +29,33 @@
2111 class ChanceScheduler(driver.Scheduler):
2112 """Implements Scheduler as a random node selector."""
2113
2114- def schedule(self, context, topic, *_args, **_kwargs):
2115+ def _schedule(self, context, topic, **kwargs):
2116 """Picks a host that is up at random."""
2117
2118- hosts = self.hosts_up(context, topic)
2119+ elevated = context.elevated()
2120+ hosts = self.hosts_up(elevated, topic)
2121 if not hosts:
2122 raise driver.NoValidHost(_("Scheduler was unable to locate a host"
2123 " for this request. Is the appropriate"
2124 " service running?"))
2125 return hosts[int(random.random() * len(hosts))]
2126+
2127+ def schedule(self, context, topic, method, *_args, **kwargs):
2128+ """Picks a host that is up at random."""
2129+
2130+ host = self._schedule(context, topic, **kwargs)
2131+ driver.cast_to_host(context, topic, host, method, **kwargs)
2132+
2133+ def schedule_run_instance(self, context, request_spec, *_args, **kwargs):
2134+ """Create and run an instance or instances"""
2135+ elevated = context.elevated()
2136+ num_instances = request_spec.get('num_instances', 1)
2137+ instances = []
2138+ for num in xrange(num_instances):
2139+ host = self._schedule(context, 'compute', **kwargs)
2140+ instance = self.create_instance_db_entry(elevated, request_spec)
2141+ driver.cast_to_compute_host(context, host,
2142+ 'run_instance', instance_id=instance['id'], **kwargs)
2143+ instances.append(driver.encode_instance(instance))
2144+
2145+ return instances
2146
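Under the new contract, scheduler methods cast to hosts themselves and return data to the caller rather than a host name; a toy scheduler pinning everything to one hypothetical host shows the shape:

    from nova.scheduler import driver

    class StaticScheduler(driver.Scheduler):
        """Illustrative only: everything goes to 'fixed-host-1'."""

        def schedule(self, context, topic, method, *_args, **kwargs):
            driver.cast_to_host(context, topic, 'fixed-host-1',
                                method, **kwargs)

        def schedule_run_instance(self, context, request_spec,
                                  *_args, **kwargs):
            instance = self.create_instance_db_entry(context.elevated(),
                                                     request_spec)
            driver.cast_to_compute_host(context, 'fixed-host-1',
                    'run_instance', instance_id=instance['id'], **kwargs)
            return [driver.encode_instance(instance)]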
2147=== modified file 'nova/scheduler/driver.py'
2148--- nova/scheduler/driver.py 2011-08-22 21:17:39 +0000
2149+++ nova/scheduler/driver.py 2011-09-23 07:08:19 +0000
2150@@ -29,17 +29,94 @@
2151 from nova import log as logging
2152 from nova import rpc
2153 from nova import utils
2154+from nova.compute import api as compute_api
2155 from nova.compute import power_state
2156 from nova.compute import vm_states
2157 from nova.api.ec2 import ec2utils
2158
2159
2160 FLAGS = flags.FLAGS
2161+LOG = logging.getLogger('nova.scheduler.driver')
2162 flags.DEFINE_integer('service_down_time', 60,
2163 'maximum time since last checkin for up service')
2164 flags.DECLARE('instances_path', 'nova.compute.manager')
2165
2166
2167+def cast_to_volume_host(context, host, method, update_db=True, **kwargs):
2168+ """Cast request to a volume host queue"""
2169+
2170+ if update_db:
2171+ volume_id = kwargs.get('volume_id', None)
2172+ if volume_id is not None:
2173+ now = utils.utcnow()
2174+ db.volume_update(context, volume_id,
2175+ {'host': host, 'scheduled_at': now})
2176+ rpc.cast(context,
2177+ db.queue_get_for(context, 'volume', host),
2178+ {"method": method, "args": kwargs})
2179+ LOG.debug(_("Cast '%(method)s' to volume '%(host)s'") % locals())
2180+
2181+
2182+def cast_to_compute_host(context, host, method, update_db=True, **kwargs):
2183+ """Cast request to a compute host queue"""
2184+
2185+ if update_db:
2186+ instance_id = kwargs.get('instance_id', None)
2187+ if instance_id is not None:
2188+ now = utils.utcnow()
2189+ db.instance_update(context, instance_id,
2190+ {'host': host, 'scheduled_at': now})
2191+ rpc.cast(context,
2192+ db.queue_get_for(context, 'compute', host),
2193+ {"method": method, "args": kwargs})
2194+ LOG.debug(_("Cast '%(method)s' to compute '%(host)s'") % locals())
2195+
2196+
2197+def cast_to_network_host(context, host, method, update_db=False, **kwargs):
2198+ """Cast request to a network host queue"""
2199+
2200+ rpc.cast(context,
2201+ db.queue_get_for(context, 'network', host),
2202+ {"method": method, "args": kwargs})
2203+ LOG.debug(_("Cast '%(method)s' to network '%(host)s'") % locals())
2204+
2205+
2206+def cast_to_host(context, topic, host, method, update_db=True, **kwargs):
2207+ """Generic cast to host"""
2208+
2209+ topic_mapping = {
2210+ 'compute': cast_to_compute_host,
2211+ 'volume': cast_to_volume_host,
2212+ 'network': cast_to_network_host}
2213+
2214+ func = topic_mapping.get(topic)
2215+ if func:
2216+ func(context, host, method, update_db=update_db, **kwargs)
2217+ else:
2218+ rpc.cast(context,
2219+ db.queue_get_for(context, topic, host),
2220+ {"method": method, "args": kwargs})
2221+ LOG.debug(_("Cast '%(method)s' to %(topic)s '%(host)s'")
2222+ % locals())
2223+
2224+
2225+def encode_instance(instance, local=True):
2226+ """Encode locally created instance for return via RPC"""
2227+ # TODO(comstud): I would love to be able to return the full
2228+ # instance information here, but we'll need some modifications
2229+ # to the RPC code to handle datetime conversions with the
2230+ # json encoding/decoding. We should be able to set a default
2231+ # json handler somehow to do it.
2232+ #
2233+ # For now, I'll just return the instance ID and let the caller
2234+ # do a DB lookup :-/
2235+ if local:
2236+ return dict(id=instance['id'], _is_precooked=False)
2237+ else:
2238+ instance['_is_precooked'] = True
2239+ return instance
2240+
2241+
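A short sketch of the round trip, mirroring how compute.api.create() consumes the result (child_zone_instance stands in for a dict returned by novaclient):

    encoded = encode_instance({'id': 123}, local=True)
    # -> {'id': 123, '_is_precooked': False}; the caller re-fetches
    #    the full record via db.instance_get(context, 123).
    encoded = encode_instance(child_zone_instance, local=False)
    # -> the original dict with '_is_precooked' = True; it arrived
    #    fully formed from a child zone and is used as-is.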
2242 class NoValidHost(exception.Error):
2243 """There is no valid host for the command."""
2244 pass
2245@@ -55,6 +132,7 @@
2246
2247 def __init__(self):
2248 self.zone_manager = None
2249+ self.compute_api = compute_api.API()
2250
2251 def set_zone_manager(self, zone_manager):
2252 """Called by the Scheduler Service to supply a ZoneManager."""
2253@@ -76,7 +154,20 @@
2254 for service in services
2255 if self.service_is_up(service)]
2256
2257- def schedule(self, context, topic, *_args, **_kwargs):
2258+ def create_instance_db_entry(self, context, request_spec):
2259+ """Create instance DB entry based on request_spec"""
2260+ base_options = request_spec['instance_properties']
2261+ image = request_spec['image']
2262+ instance_type = request_spec.get('instance_type')
2263+ security_group = request_spec.get('security_group', 'default')
2264+ block_device_mapping = request_spec.get('block_device_mapping', [])
2265+
2266+ instance = self.compute_api.create_db_entry_for_new_instance(
2267+ context, instance_type, image, base_options,
2268+ security_group, block_device_mapping)
2269+ return instance
2270+
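A hedged sketch of the request_spec consumed here, limited to the keys the schedulers read (image_meta, base_options and inst_type are placeholders):

    request_spec = {
        'image': image_meta,                  # dict from the image service
        'instance_properties': base_options,  # built by compute.api
        'instance_type': inst_type,
        'security_group': ['default'],
        'block_device_mapping': [],
        'num_instances': 2,
        'blob': None,                         # zone routing info, if any
        'filter': None,
    }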
2271+ def schedule(self, context, topic, method, *_args, **_kwargs):
2272 """Must override at least this method for scheduler to work."""
2273 raise NotImplementedError(_("Must implement a fallback schedule"))
2274
2275@@ -114,10 +205,12 @@
2276 volume_ref['id'],
2277 {'status': 'migrating'})
2278
2279- # Return value is necessary to send request to src
2280- # Check _schedule() in detail.
2281 src = instance_ref['host']
2282- return src
2283+ cast_to_compute_host(context, src, 'live_migration',
2284+ update_db=False,
2285+ instance_id=instance_id,
2286+ dest=dest,
2287+ block_migration=block_migration)
2288
2289 def _live_migration_src_check(self, context, instance_ref):
2290 """Live migration check routine (for src host).
2291@@ -205,7 +298,7 @@
2292 if not block_migration:
2293 src = instance_ref['host']
2294 ipath = FLAGS.instances_path
2295- logging.error(_("Cannot confirm tmpfile at %(ipath)s is on "
2296+ LOG.error(_("Cannot confirm tmpfile at %(ipath)s is on "
2297 "same shared storage between %(src)s "
2298 "and %(dest)s.") % locals())
2299 raise
2300@@ -243,7 +336,7 @@
2301
2302 except rpc.RemoteError:
2303 src = instance_ref['host']
2304- logging.exception(_("host %(dest)s is not compatible with "
2305+ LOG.exception(_("host %(dest)s is not compatible with "
2306 "original host %(src)s.") % locals())
2307 raise
2308
2309@@ -354,6 +447,8 @@
2310 dst_t = db.queue_get_for(context, FLAGS.compute_topic, dest)
2311 src_t = db.queue_get_for(context, FLAGS.compute_topic, src)
2312
2313+ filename = None
2314+
2315 try:
2316 # create tmpfile at dest host
2317 filename = rpc.call(context, dst_t,
2318@@ -370,6 +465,8 @@
2319 raise
2320
2321 finally:
2322- rpc.call(context, dst_t,
2323- {"method": 'cleanup_shared_storage_test_file',
2324- "args": {'filename': filename}})
2325+ # Should only be None for tests?
2326+ if filename is not None:
2327+ rpc.call(context, dst_t,
2328+ {"method": 'cleanup_shared_storage_test_file',
2329+ "args": {'filename': filename}})
2330
2331=== modified file 'nova/scheduler/least_cost.py'
2332--- nova/scheduler/least_cost.py 2011-08-15 22:09:39 +0000
2333+++ nova/scheduler/least_cost.py 2011-09-23 07:08:19 +0000
2334@@ -160,8 +160,7 @@
2335
2336 weighted = []
2337 weight_log = []
2338- for cost, (hostname, service) in zip(costs, hosts):
2339- caps = service[topic]
2340+ for cost, (hostname, caps) in zip(costs, hosts):
2341 weight_log.append("%s: %s" % (hostname, "%.2f" % cost))
2342 weight_dict = dict(weight=cost, hostname=hostname,
2343 capabilities=caps)
2344
2345=== modified file 'nova/scheduler/manager.py'
2346--- nova/scheduler/manager.py 2011-09-02 16:00:03 +0000
2347+++ nova/scheduler/manager.py 2011-09-23 07:08:19 +0000
2348@@ -81,37 +81,23 @@
2349 """Select a list of hosts best matching the provided specs."""
2350 return self.driver.select(context, *args, **kwargs)
2351
2352- def get_scheduler_rules(self, context=None, *args, **kwargs):
2353- """Ask the driver how requests should be made of it."""
2354- return self.driver.get_scheduler_rules(context, *args, **kwargs)
2355-
2356 def _schedule(self, method, context, topic, *args, **kwargs):
2357 """Tries to call schedule_* method on the driver to retrieve host.
2358
2359 Falls back to schedule(context, topic) if method doesn't exist.
2360 """
2361 driver_method = 'schedule_%s' % method
2362- elevated = context.elevated()
2363 try:
2364 real_meth = getattr(self.driver, driver_method)
2365- args = (elevated,) + args
2366+ args = (context,) + args
2367 except AttributeError, e:
2368 LOG.warning(_("Driver Method %(driver_method)s missing: %(e)s."
2369 "Reverting to schedule()") % locals())
2370 real_meth = self.driver.schedule
2371- args = (elevated, topic) + args
2372- host = real_meth(*args, **kwargs)
2373-
2374- if not host:
2375- LOG.debug(_("%(topic)s %(method)s handled in Scheduler")
2376- % locals())
2377- return
2378-
2379- rpc.cast(context,
2380- db.queue_get_for(context, topic, host),
2381- {"method": method,
2382- "args": kwargs})
2383- LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals())
2384+ args = (context, topic, method) + args
2385+
2386+ # Scheduler methods are responsible for casting.
2387+ return real_meth(*args, **kwargs)
2388
2389 # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
2390 # Based on bexar design summit discussion,
2391
2392=== modified file 'nova/scheduler/multi.py'
2393--- nova/scheduler/multi.py 2011-08-11 23:26:26 +0000
2394+++ nova/scheduler/multi.py 2011-09-23 07:08:19 +0000
2395@@ -38,7 +38,8 @@
2396 # A mapping of methods to topics so we can figure out which driver to use.
2397 _METHOD_MAP = {'run_instance': 'compute',
2398 'start_instance': 'compute',
2399- 'create_volume': 'volume'}
2400+ 'create_volume': 'volume',
2401+ 'create_volumes': 'volume'}
2402
2403
2404 class MultiScheduler(driver.Scheduler):
2405@@ -69,5 +70,6 @@
2406 for k, v in self.drivers.iteritems():
2407 v.set_zone_manager(zone_manager)
2408
2409- def schedule(self, context, topic, *_args, **_kwargs):
2410- return self.drivers[topic].schedule(context, topic, *_args, **_kwargs)
2411+ def schedule(self, context, topic, method, *_args, **_kwargs):
2412+ return self.drivers[topic].schedule(context, topic,
2413+ method, *_args, **_kwargs)
2414
2415=== modified file 'nova/scheduler/simple.py'
2416--- nova/scheduler/simple.py 2011-08-19 15:44:14 +0000
2417+++ nova/scheduler/simple.py 2011-09-23 07:08:19 +0000
2418@@ -39,47 +39,50 @@
2419 class SimpleScheduler(chance.ChanceScheduler):
2420 """Implements Naive Scheduler that tries to find least loaded host."""
2421
2422- def _schedule_instance(self, context, instance_id, *_args, **_kwargs):
2423+ def _schedule_instance(self, context, instance_opts, *_args, **_kwargs):
2424 """Picks a host that is up and has the fewest running instances."""
2425- instance_ref = db.instance_get(context, instance_id)
2426- if (instance_ref['availability_zone']
2427- and ':' in instance_ref['availability_zone']
2428- and context.is_admin):
2429- zone, _x, host = instance_ref['availability_zone'].partition(':')
2430+
2431+ availability_zone = instance_opts.get('availability_zone')
2432+
2433+ if availability_zone and context.is_admin and \
2434+ (':' in availability_zone):
2435+ zone, host = availability_zone.split(':', 1)
2436 service = db.service_get_by_args(context.elevated(), host,
2437 'nova-compute')
2438 if not self.service_is_up(service):
2439 raise driver.WillNotSchedule(_("Host %s is not alive") % host)
2440+ return host
2441
2442- # TODO(vish): this probably belongs in the manager, if we
2443- # can generalize this somehow
2444- now = utils.utcnow()
2445- db.instance_update(context, instance_id, {'host': host,
2446- 'scheduled_at': now})
2447- return host
2448 results = db.service_get_all_compute_sorted(context)
2449 for result in results:
2450 (service, instance_cores) = result
2451- if instance_cores + instance_ref['vcpus'] > FLAGS.max_cores:
2452+ if instance_cores + instance_opts['vcpus'] > FLAGS.max_cores:
2453 raise driver.NoValidHost(_("All hosts have too many cores"))
2454 if self.service_is_up(service):
2455- # NOTE(vish): this probably belongs in the manager, if we
2456- # can generalize this somehow
2457- now = utils.utcnow()
2458- db.instance_update(context,
2459- instance_id,
2460- {'host': service['host'],
2461- 'scheduled_at': now})
2462 return service['host']
2463 raise driver.NoValidHost(_("Scheduler was unable to locate a host"
2464 " for this request. Is the appropriate"
2465 " service running?"))
2466
2467- def schedule_run_instance(self, context, instance_id, *_args, **_kwargs):
2468- return self._schedule_instance(context, instance_id, *_args, **_kwargs)
2469+ def schedule_run_instance(self, context, request_spec, *_args, **_kwargs):
2470+ num_instances = request_spec.get('num_instances', 1)
2471+ instances = []
2472+ for num in xrange(num_instances):
2473+ host = self._schedule_instance(context,
2474+ request_spec['instance_properties'], *_args, **_kwargs)
2475+ instance_ref = self.create_instance_db_entry(context,
2476+ request_spec)
2477+ driver.cast_to_compute_host(context, host, 'run_instance',
2478+ instance_id=instance_ref['id'], **_kwargs)
2479+ instances.append(driver.encode_instance(instance_ref))
2480+ return instances
2481
2482 def schedule_start_instance(self, context, instance_id, *_args, **_kwargs):
2483- return self._schedule_instance(context, instance_id, *_args, **_kwargs)
2484+ instance_ref = db.instance_get(context, instance_id)
2485+ host = self._schedule_instance(context, instance_ref,
2486+ *_args, **_kwargs)
2487+ driver.cast_to_compute_host(context, host, 'start_instance',
2488+ instance_id=instance_id, **_kwargs)
2489
2490 def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
2491 """Picks a host that is up and has the fewest volumes."""
2492@@ -92,13 +95,9 @@
2493 'nova-volume')
2494 if not self.service_is_up(service):
2495 raise driver.WillNotSchedule(_("Host %s not available") % host)
2496-
2497- # TODO(vish): this probably belongs in the manager, if we
2498- # can generalize this somehow
2499- now = utils.utcnow()
2500- db.volume_update(context, volume_id, {'host': host,
2501- 'scheduled_at': now})
2502- return host
2503+ driver.cast_to_volume_host(context, host, 'create_volume',
2504+ volume_id=volume_id, **_kwargs)
2505+ return None
2506 results = db.service_get_all_volume_sorted(context)
2507 for result in results:
2508 (service, volume_gigabytes) = result
2509@@ -106,14 +105,9 @@
2510 raise driver.NoValidHost(_("All hosts have too many "
2511 "gigabytes"))
2512 if self.service_is_up(service):
2513- # NOTE(vish): this probably belongs in the manager, if we
2514- # can generalize this somehow
2515- now = utils.utcnow()
2516- db.volume_update(context,
2517- volume_id,
2518- {'host': service['host'],
2519- 'scheduled_at': now})
2520- return service['host']
2521+ driver.cast_to_volume_host(context, service['host'],
2522+ 'create_volume', volume_id=volume_id, **_kwargs)
2523+ return None
2524 raise driver.NoValidHost(_("Scheduler was unable to locate a host"
2525 " for this request. Is the appropriate"
2526 " service running?"))
2527@@ -127,7 +121,9 @@
2528 if instance_count >= FLAGS.max_networks:
2529 raise driver.NoValidHost(_("All hosts have too many networks"))
2530 if self.service_is_up(service):
2531- return service['host']
2532+ driver.cast_to_network_host(context, service['host'],
2533+ 'set_network_host', **_kwargs)
2534+ return None
2535 raise driver.NoValidHost(_("Scheduler was unable to locate a host"
2536 " for this request. Is the appropriate"
2537 " service running?"))
2538
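
For context: the cast_to_compute_host/cast_to_volume_host/cast_to_network_host helpers that SimpleScheduler now calls are added in nova/scheduler/driver.py (+106/-9; that hunk is not shown here). A minimal sketch of what cast_to_compute_host is expected to do, reconstructed from the scheduler test expectations further down (an approximation, not the actual driver.py code):

    # Sketch only -- the real helper lives in nova/scheduler/driver.py.
    from nova import db
    from nova import rpc
    from nova import utils

    def cast_to_compute_host(context, host, method, **kwargs):
        """Record the scheduling decision, then cast to the compute host."""
        instance_id = kwargs.get('instance_id')
        if instance_id:
            # The host/scheduled_at updates removed from the schedulers
            # above are expected to happen here instead.
            db.instance_update(context, instance_id,
                               {'host': host,
                                'scheduled_at': utils.utcnow()})
        rpc.cast(context,
                 db.queue_get_for(context, 'compute', host),
                 {'method': method, 'args': kwargs})

The net effect is that _schedule_instance() only picks a host; persisting the decision and kicking off run_instance both happen in the cast helper.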
2539=== modified file 'nova/scheduler/vsa.py'
2540--- nova/scheduler/vsa.py 2011-08-26 18:14:44 +0000
2541+++ nova/scheduler/vsa.py 2011-09-23 07:08:19 +0000
2542@@ -195,8 +195,6 @@
2543 'display_description': vol['description'],
2544 'volume_type_id': vol['volume_type_id'],
2545 'metadata': dict(to_vsa_id=vsa_id),
2546- 'host': vol['host'],
2547- 'scheduled_at': now
2548 }
2549
2550 size = vol['size']
2551@@ -205,12 +203,10 @@
2552 LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\
2553 "host %(host)s"), locals())
2554
2555- volume_ref = db.volume_create(context, options)
2556- rpc.cast(context,
2557- db.queue_get_for(context, "volume", vol['host']),
2558- {"method": "create_volume",
2559- "args": {"volume_id": volume_ref['id'],
2560- "snapshot_id": None}})
2561+ volume_ref = db.volume_create(context.elevated(), options)
2562+ driver.cast_to_volume_host(context, vol['host'],
2563+ 'create_volume', volume_id=volume_ref['id'],
2564+ snapshot_id=None)
2565
2566 def _check_host_enforcement(self, context, availability_zone):
2567 if (availability_zone
2568@@ -274,7 +270,6 @@
2569 def schedule_create_volumes(self, context, request_spec,
2570 availability_zone=None, *_args, **_kwargs):
2571 """Picks hosts for hosting multiple volumes."""
2572-
2573 num_volumes = request_spec.get('num_volumes')
2574 LOG.debug(_("Attempting to spawn %(num_volumes)d volume(s)") %
2575 locals())
2576@@ -291,7 +286,8 @@
2577
2578 for vol in volume_params:
2579 self._provision_volume(context, vol, vsa_id, availability_zone)
2580- except:
2581+ except Exception:
2582+ LOG.exception(_("Error creating volumes"))
2583 if vsa_id:
2584 db.vsa_update(context, vsa_id, dict(status=VsaState.FAILED))
2585
2586@@ -310,10 +306,9 @@
2587 host = self._check_host_enforcement(context,
2588 volume_ref['availability_zone'])
2589 if host:
2590- now = utils.utcnow()
2591- db.volume_update(context, volume_id, {'host': host,
2592- 'scheduled_at': now})
2593- return host
2594+ driver.cast_to_volume_host(context, host, 'create_volume',
2595+ volume_id=volume_id, **_kwargs)
2596+ return None
2597
2598 volume_type_id = volume_ref['volume_type_id']
2599 if volume_type_id:
2600@@ -344,18 +339,16 @@
2601
2602 try:
2603 (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)
2604- except:
2605+ except Exception:
2606+ LOG.exception(_("Error creating volume"))
2607 if volume_ref['to_vsa_id']:
2608 db.vsa_update(context, volume_ref['to_vsa_id'],
2609 dict(status=VsaState.FAILED))
2610 raise
2611
2612 if host:
2613- now = utils.utcnow()
2614- db.volume_update(context, volume_id, {'host': host,
2615- 'scheduled_at': now})
2616- self._consume_resource(qos_cap, volume_ref['size'], -1)
2617- return host
2618+ driver.cast_to_volume_host(context, host, 'create_volume',
2619+ volume_id=volume_id, **_kwargs)
2620
2621 def _consume_full_drive(self, qos_values, direction):
2622 qos_values['FullDrive']['NumFreeDrives'] += direction
2623
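
The other notable change in vsa.py is narrowing the bare except: clauses to except Exception and logging before re-raising. A bare except: also traps SystemExit and KeyboardInterrupt, so a service shutdown could be mistaken for a provisioning failure. Illustrative pattern (provision_volumes() is a placeholder name):

    try:
        provision_volumes()
    except Exception:
        # Ordinary errors only; SystemExit/KeyboardInterrupt still
        # propagate immediately instead of marking the VSA as FAILED.
        LOG.exception(_("Error creating volumes"))
        raise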
2624=== modified file 'nova/scheduler/zone.py'
2625--- nova/scheduler/zone.py 2011-03-31 19:29:16 +0000
2626+++ nova/scheduler/zone.py 2011-09-23 07:08:19 +0000
2627@@ -35,7 +35,7 @@
2628 for topic and availability zone (if defined).
2629 """
2630
2631- if zone is None:
2632+ if not zone:
2633 return self.hosts_up(context, topic)
2634
2635 services = db.service_get_all_by_topic(context, topic)
2636@@ -44,16 +44,34 @@
2637 if self.service_is_up(service)
2638 and service.availability_zone == zone]
2639
2640- def schedule(self, context, topic, *_args, **_kwargs):
2641+ def _schedule(self, context, topic, request_spec, **kwargs):
2642 """Picks a host that is up at random in selected
2643 availability zone (if defined).
2644 """
2645
2646- zone = _kwargs.get('availability_zone')
2647- hosts = self.hosts_up_with_zone(context, topic, zone)
2648+ zone = kwargs.get('availability_zone')
2649+ if not zone and request_spec:
2650+ zone = request_spec['instance_properties'].get(
2651+ 'availability_zone')
2652+ hosts = self.hosts_up_with_zone(context.elevated(), topic, zone)
2653 if not hosts:
2654 raise driver.NoValidHost(_("Scheduler was unable to locate a host"
2655 " for this request. Is the appropriate"
2656 " service running?"))
2657-
2658 return hosts[int(random.random() * len(hosts))]
2659+
2660+ def schedule(self, context, topic, method, *_args, **kwargs):
2661+ host = self._schedule(context, topic, None, **kwargs)
2662+ driver.cast_to_host(context, topic, host, method, **kwargs)
2663+
2664+ def schedule_run_instance(self, context, request_spec, *_args, **kwargs):
2665+ """Builds and starts instances on selected hosts"""
2666+ num_instances = request_spec.get('num_instances', 1)
2667+ instances = []
2668+ for num in xrange(num_instances):
2669+ host = self._schedule(context, 'compute', request_spec, **kwargs)
2670+ instance = self.create_instance_db_entry(context, request_spec)
2671+ driver.cast_to_compute_host(context, host,
2672+ 'run_instance', instance_id=instance['id'], **kwargs)
2673+ instances.append(driver.encode_instance(instance))
2674+ return instances
2675
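
ZoneScheduler._schedule() keeps the uniform random pick -- random.random() is in [0, 1), so int(random.random() * len(hosts)) is always a valid index into hosts -- but the availability zone can now come from the request_spec as well as kwargs. A hypothetical invocation, using the request_spec shape exercised by the tests below:

    request_spec = {
        'num_instances': 2,
        'instance_properties': {
            'availability_zone': 'zone1',  # falls through to _schedule()
            'project_id': 'fake',
            'user_id': 'fake',
        },
    }
    instances = scheduler.schedule_run_instance(ctxt, request_spec)
    # Two DB entries are created by the scheduler and two run_instance
    # casts are sent to randomly chosen hosts that are up in zone1.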
2676=== modified file 'nova/tests/api/openstack/contrib/test_createserverext.py'
2677--- nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-21 20:59:40 +0000
2678+++ nova/tests/api/openstack/contrib/test_createserverext.py 2011-09-23 07:08:19 +0000
2679@@ -27,6 +27,7 @@
2680 from nova import db
2681 from nova import exception
2682 from nova import flags
2683+from nova import rpc
2684 from nova import test
2685 import nova.api.openstack
2686 from nova.tests.api.openstack import fakes
2687@@ -115,13 +116,15 @@
2688 if 'user_data' in kwargs:
2689 self.user_data = kwargs['user_data']
2690
2691- return [{'id': '1234', 'display_name': 'fakeinstance',
2692+ resv_id = None
2693+
2694+ return ([{'id': '1234', 'display_name': 'fakeinstance',
2695 'uuid': FAKE_UUID,
2696 'user_id': 'fake',
2697 'project_id': 'fake',
2698 'created_at': "",
2699 'updated_at': "",
2700- 'progress': 0}]
2701+ 'progress': 0}], resv_id)
2702
2703 def set_admin_password(self, *args, **kwargs):
2704 pass
2705@@ -134,7 +137,7 @@
2706 compute_api = MockComputeAPI()
2707 self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
2708 self.stubs.Set(
2709- nova.api.openstack.create_instance_helper.CreateInstanceHelper,
2710+ nova.api.openstack.servers.Controller,
2711 '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
2712 return compute_api
2713
2714@@ -393,7 +396,8 @@
2715 return_instance_add_security_group)
2716 body_dict = self._create_security_group_request_dict(security_groups)
2717 request = self._get_create_request_json(body_dict)
2718- response = request.get_response(fakes.wsgi_app())
2719+ compute_api, response = \
2720+ self._run_create_instance_with_mock_compute_api(request)
2721 self.assertEquals(response.status_int, 202)
2722
2723 def test_get_server_by_id_verify_security_groups_json(self):
2724
2725=== modified file 'nova/tests/api/openstack/contrib/test_volumes.py'
2726--- nova/tests/api/openstack/contrib/test_volumes.py 2011-09-21 20:59:40 +0000
2727+++ nova/tests/api/openstack/contrib/test_volumes.py 2011-09-23 07:08:19 +0000
2728@@ -31,8 +31,12 @@
2729
2730
2731 def fake_compute_api_create(cls, context, instance_type, image_href, **kwargs):
2732+ global _block_device_mapping_seen
2733+ _block_device_mapping_seen = kwargs.get('block_device_mapping')
2734+
2735 inst_type = instance_types.get_instance_type_by_flavor_id(2)
2736- return [{'id': 1,
2737+ resv_id = None
2738+ return ([{'id': 1,
2739 'display_name': 'test_server',
2740 'uuid': fake_gen_uuid(),
2741 'instance_type': dict(inst_type),
2742@@ -44,7 +48,7 @@
2743 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0),
2744 'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0),
2745 'progress': 0
2746- }]
2747+ }], resv_id)
2748
2749
2750 class BootFromVolumeTest(test.TestCase):
2751@@ -64,6 +68,8 @@
2752 delete_on_termination=False,
2753 )]
2754 ))
2755+ global _block_device_mapping_seen
2756+ _block_device_mapping_seen = None
2757 req = webob.Request.blank('/v1.1/fake/os-volumes_boot')
2758 req.method = 'POST'
2759 req.body = json.dumps(body)
2760@@ -76,3 +82,7 @@
2761 self.assertEqual(u'test_server', server['name'])
2762 self.assertEqual(3, int(server['image']['id']))
2763 self.assertEqual(FLAGS.password_length, len(server['adminPass']))
2764+ self.assertEqual(len(_block_device_mapping_seen), 1)
2765+ self.assertEqual(_block_device_mapping_seen[0]['volume_id'], 1)
2766+ self.assertEqual(_block_device_mapping_seen[0]['device_name'],
2767+ '/dev/vda')
2768
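
The _block_device_mapping_seen global above is the usual capture-a-kwarg-in-a-stub pattern: the stub runs deep inside the WSGI request, so the test smuggles the interesting argument out through module scope. In generic form (fake names, for illustration only):

    _captured = None

    def fake_create(cls, context, *args, **kwargs):
        # Stash the argument of interest for assertions after the request.
        global _captured
        _captured = kwargs.get('block_device_mapping')
        return []

    # Test body: reset _captured to None, issue the request through the
    # stubbed API, then assert on what the stub recorded.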
2769=== modified file 'nova/tests/api/openstack/test_extensions.py'
2770--- nova/tests/api/openstack/test_extensions.py 2011-09-21 15:54:30 +0000
2771+++ nova/tests/api/openstack/test_extensions.py 2011-09-23 07:08:19 +0000
2772@@ -102,6 +102,7 @@
2773 "VirtualInterfaces",
2774 "Volumes",
2775 "VolumeTypes",
2776+ "Zones",
2777 ]
2778 self.ext_list.sort()
2779
2780
2781=== modified file 'nova/tests/api/openstack/test_server_actions.py'
2782--- nova/tests/api/openstack/test_server_actions.py 2011-09-16 19:45:46 +0000
2783+++ nova/tests/api/openstack/test_server_actions.py 2011-09-23 07:08:19 +0000
2784@@ -9,7 +9,7 @@
2785 from nova import utils
2786 from nova import exception
2787 from nova import flags
2788-from nova.api.openstack import create_instance_helper
2789+from nova.api.openstack import servers
2790 from nova.compute import vm_states
2791 from nova.compute import instance_types
2792 import nova.db.api
2793@@ -970,7 +970,7 @@
2794 class TestServerActionXMLDeserializerV11(test.TestCase):
2795
2796 def setUp(self):
2797- self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
2798+ self.deserializer = servers.ServerXMLDeserializerV11()
2799
2800 def tearDown(self):
2801 pass
2802
2803=== modified file 'nova/tests/api/openstack/test_servers.py'
2804--- nova/tests/api/openstack/test_servers.py 2011-09-22 15:41:34 +0000
2805+++ nova/tests/api/openstack/test_servers.py 2011-09-23 07:08:19 +0000
2806@@ -33,7 +33,6 @@
2807 from nova import test
2808 from nova import utils
2809 import nova.api.openstack
2810-from nova.api.openstack import create_instance_helper
2811 from nova.api.openstack import servers
2812 from nova.api.openstack import wsgi
2813 from nova.api.openstack import xmlutil
2814@@ -1576,10 +1575,15 @@
2815
2816 def _setup_for_create_instance(self):
2817 """Shared implementation for tests below that create instance"""
2818+
2819+ self.instance_cache_num = 0
2820+ self.instance_cache = {}
2821+
2822 def instance_create(context, inst):
2823 inst_type = instance_types.get_instance_type_by_flavor_id(3)
2824 image_ref = 'http://localhost/images/2'
2825- return {'id': 1,
2826+ self.instance_cache_num += 1
2827+ instance = {'id': self.instance_cache_num,
2828 'display_name': 'server_test',
2829 'uuid': FAKE_UUID,
2830 'instance_type': dict(inst_type),
2831@@ -1588,11 +1592,32 @@
2832 'image_ref': image_ref,
2833 'user_id': 'fake',
2834 'project_id': 'fake',
2835+ 'reservation_id': inst['reservation_id'],
2836 "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
2837 "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
2838 "config_drive": self.config_drive,
2839 "progress": 0
2840 }
2841+ self.instance_cache[instance['id']] = instance
2842+ return instance
2843+
2844+ def instance_get(context, instance_id):
2845+ """Stub for compute/api create() pulling in instance after
2846+ scheduling
2847+ """
2848+ return self.instance_cache[instance_id]
2849+
2850+ def rpc_call_wrapper(context, topic, msg):
2851+ """Stub out the scheduler creating the instance entry"""
2852+ if topic == FLAGS.scheduler_topic and \
2853+ msg['method'] == 'run_instance':
2854+ request_spec = msg['args']['request_spec']
2855+ num_instances = request_spec.get('num_instances', 1)
2856+ instances = []
2857+ for x in xrange(num_instances):
2858+ instances.append(instance_create(context,
2859+ request_spec['instance_properties']))
2860+ return instances
2861
2862 def server_update(context, id, params):
2863 return instance_create(context, id)
2864@@ -1615,18 +1640,20 @@
2865 self.stubs.Set(nova.db.api, 'project_get_networks',
2866 project_get_networks)
2867 self.stubs.Set(nova.db.api, 'instance_create', instance_create)
2868+ self.stubs.Set(nova.db.api, 'instance_get', instance_get)
2869 self.stubs.Set(nova.rpc, 'cast', fake_method)
2870- self.stubs.Set(nova.rpc, 'call', fake_method)
2871+ self.stubs.Set(nova.rpc, 'call', rpc_call_wrapper)
2872 self.stubs.Set(nova.db.api, 'instance_update', server_update)
2873 self.stubs.Set(nova.db.api, 'queue_get_for', queue_get_for)
2874 self.stubs.Set(nova.network.manager.VlanManager, 'allocate_fixed_ip',
2875 fake_method)
2876 self.stubs.Set(
2877- nova.api.openstack.create_instance_helper.CreateInstanceHelper,
2878- "_get_kernel_ramdisk_from_image", kernel_ramdisk_mapping)
2879+ servers.Controller,
2880+ "_get_kernel_ramdisk_from_image",
2881+ kernel_ramdisk_mapping)
2882 self.stubs.Set(nova.compute.api.API, "_find_host", find_host)
2883
2884- def _test_create_instance_helper(self):
2885+ def _test_create_instance(self):
2886 self._setup_for_create_instance()
2887
2888 body = dict(server=dict(
2889@@ -1650,7 +1677,7 @@
2890 self.assertEqual(FAKE_UUID, server['uuid'])
2891
2892 def test_create_instance(self):
2893- self._test_create_instance_helper()
2894+ self._test_create_instance()
2895
2896 def test_create_instance_has_uuid(self):
2897 """Tests at the db-layer instead of API layer since that's where the
2898@@ -1662,51 +1689,134 @@
2899 expected = FAKE_UUID
2900 self.assertEqual(instance['uuid'], expected)
2901
2902- def test_create_instance_via_zones(self):
2903- """Server generated ReservationID"""
2904- self._setup_for_create_instance()
2905- self.flags(allow_admin_api=True)
2906-
2907- body = dict(server=dict(
2908- name='server_test', imageId=3, flavorId=2,
2909- metadata={'hello': 'world', 'open': 'stack'},
2910- personality={}))
2911- req = webob.Request.blank('/v1.0/zones/boot')
2912- req.method = 'POST'
2913- req.body = json.dumps(body)
2914- req.headers["content-type"] = "application/json"
2915-
2916- res = req.get_response(fakes.wsgi_app())
2917-
2918- reservation_id = json.loads(res.body)['reservation_id']
2919- self.assertEqual(res.status_int, 200)
2920+ def test_create_multiple_instances(self):
2921+ """Test creating multiple instances but not asking for
2922+ reservation_id
2923+ """
2924+ self._setup_for_create_instance()
2925+
2926+ image_href = 'http://localhost/v1.1/123/images/2'
2927+ flavor_ref = 'http://localhost/123/flavors/3'
2928+ body = {
2929+ 'server': {
2930+ 'min_count': 2,
2931+ 'name': 'server_test',
2932+ 'imageRef': image_href,
2933+ 'flavorRef': flavor_ref,
2934+ 'metadata': {'hello': 'world',
2935+ 'open': 'stack'},
2936+ 'personality': []
2937+ }
2938+ }
2939+
2940+ req = webob.Request.blank('/v1.1/123/servers')
2941+ req.method = 'POST'
2942+ req.body = json.dumps(body)
2943+ req.headers["content-type"] = "application/json"
2944+
2945+ res = req.get_response(fakes.wsgi_app())
2946+ self.assertEqual(res.status_int, 202)
2947+ body = json.loads(res.body)
2948+ self.assertIn('server', body)
2949+
2950+ def test_create_multiple_instances_resv_id_return(self):
2951+ """Test creating multiple instances with asking for
2952+ reservation_id
2953+ """
2954+ self._setup_for_create_instance()
2955+
2956+ image_href = 'http://localhost/v1.1/123/images/2'
2957+ flavor_ref = 'http://localhost/123/flavors/3'
2958+ body = {
2959+ 'server': {
2960+ 'min_count': 2,
2961+ 'name': 'server_test',
2962+ 'imageRef': image_href,
2963+ 'flavorRef': flavor_ref,
2964+ 'metadata': {'hello': 'world',
2965+ 'open': 'stack'},
2966+ 'personality': [],
2967+ 'return_reservation_id': True
2968+ }
2969+ }
2970+
2971+ req = webob.Request.blank('/v1.1/123/servers')
2972+ req.method = 'POST'
2973+ req.body = json.dumps(body)
2974+ req.headers["content-type"] = "application/json"
2975+
2976+ res = req.get_response(fakes.wsgi_app())
2977+ self.assertEqual(res.status_int, 202)
2978+ body = json.loads(res.body)
2979+ reservation_id = body.get('reservation_id')
2980 self.assertNotEqual(reservation_id, "")
2981 self.assertNotEqual(reservation_id, None)
2982 self.assertTrue(len(reservation_id) > 1)
2983
2984- def test_create_instance_via_zones_with_resid(self):
2985- """User supplied ReservationID"""
2986+ def test_create_instance_with_user_supplied_reservation_id(self):
2987+ """Non-admin supplied reservation_id should be ignored."""
2988 self._setup_for_create_instance()
2989- self.flags(allow_admin_api=True)
2990-
2991- body = dict(server=dict(
2992- name='server_test', imageId=3, flavorId=2,
2993- metadata={'hello': 'world', 'open': 'stack'},
2994- personality={}, reservation_id='myresid'))
2995- req = webob.Request.blank('/v1.0/zones/boot')
2996+
2997+ image_href = 'http://localhost/v1.1/123/images/2'
2998+ flavor_ref = 'http://localhost/123/flavors/3'
2999+ body = {
3000+ 'server': {
3001+ 'name': 'server_test',
3002+ 'imageRef': image_href,
3003+ 'flavorRef': flavor_ref,
3004+ 'metadata': {'hello': 'world',
3005+ 'open': 'stack'},
3006+ 'personality': [],
3007+ 'reservation_id': 'myresid',
3008+ 'return_reservation_id': True
3009+ }
3010+ }
3011+
3012+ req = webob.Request.blank('/v1.1/123/servers')
3013 req.method = 'POST'
3014 req.body = json.dumps(body)
3015 req.headers["content-type"] = "application/json"
3016
3017 res = req.get_response(fakes.wsgi_app())
3018-
3019+ self.assertEqual(res.status_int, 202)
3020+ res_body = json.loads(res.body)
3021+ self.assertIn('reservation_id', res_body)
3022+ self.assertNotEqual(res_body['reservation_id'], 'myresid')
3023+
3024+ def test_create_instance_with_admin_supplied_reservation_id(self):
3025+ """Admin supplied reservation_id should be honored."""
3026+ self._setup_for_create_instance()
3027+
3028+ image_href = 'http://localhost/v1.1/123/images/2'
3029+ flavor_ref = 'http://localhost/123/flavors/3'
3030+ body = {
3031+ 'server': {
3032+ 'name': 'server_test',
3033+ 'imageRef': image_href,
3034+ 'flavorRef': flavor_ref,
3035+ 'metadata': {'hello': 'world',
3036+ 'open': 'stack'},
3037+ 'personality': [],
3038+ 'reservation_id': 'myresid',
3039+ 'return_reservation_id': True
3040+ }
3041+ }
3042+
3043+ req = webob.Request.blank('/v1.1/123/servers')
3044+ req.method = 'POST'
3045+ req.body = json.dumps(body)
3046+ req.headers["content-type"] = "application/json"
3047+
3048+ context = nova.context.RequestContext('testuser', 'testproject',
3049+ is_admin=True)
3050+ res = req.get_response(fakes.wsgi_app(fake_auth_context=context))
3051+ self.assertEqual(res.status_int, 202)
3052 reservation_id = json.loads(res.body)['reservation_id']
3053- self.assertEqual(res.status_int, 200)
3054 self.assertEqual(reservation_id, "myresid")
3055
3056 def test_create_instance_no_key_pair(self):
3057 fakes.stub_out_key_pair_funcs(self.stubs, have_key_pair=False)
3058- self._test_create_instance_helper()
3059+ self._test_create_instance()
3060
3061 def test_create_instance_no_name(self):
3062 self._setup_for_create_instance()
3063@@ -2792,7 +2902,7 @@
3064 class TestServerCreateRequestXMLDeserializerV10(unittest.TestCase):
3065
3066 def setUp(self):
3067- self.deserializer = create_instance_helper.ServerXMLDeserializer()
3068+ self.deserializer = servers.ServerXMLDeserializer()
3069
3070 def test_minimal_request(self):
3071 serial_request = """
3072@@ -3078,7 +3188,7 @@
3073
3074 def setUp(self):
3075 super(TestServerCreateRequestXMLDeserializerV11, self).setUp()
3076- self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
3077+ self.deserializer = servers.ServerXMLDeserializerV11()
3078
3079 def test_minimal_request(self):
3080 serial_request = """
3081@@ -3552,10 +3662,12 @@
3082 else:
3083 self.injected_files = None
3084
3085- return [{'id': '1234', 'display_name': 'fakeinstance',
3086+ resv_id = None
3087+
3088+ return ([{'id': '1234', 'display_name': 'fakeinstance',
3089 'user_id': 'fake',
3090 'project_id': 'fake',
3091- 'uuid': FAKE_UUID}]
3092+ 'uuid': FAKE_UUID}], resv_id)
3093
3094 def set_admin_password(self, *args, **kwargs):
3095 pass
3096@@ -3568,8 +3680,9 @@
3097 compute_api = MockComputeAPI()
3098 self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
3099 self.stubs.Set(
3100- nova.api.openstack.create_instance_helper.CreateInstanceHelper,
3101- '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
3102+ servers.Controller,
3103+ '_get_kernel_ramdisk_from_image',
3104+ make_stub_method((1, 1)))
3105 return compute_api
3106
3107 def _create_personality_request_dict(self, personality_files):
3108@@ -3830,8 +3943,8 @@
3109 @staticmethod
3110 def _get_k_r(image_meta):
3111 """Rebinding function to a shorter name for convenience"""
3112- kernel_id, ramdisk_id = create_instance_helper.CreateInstanceHelper. \
3113- _do_get_kernel_ramdisk_from_image(image_meta)
3114+ kernel_id, ramdisk_id = servers.Controller.\
3115+ _do_get_kernel_ramdisk_from_image(image_meta)
3116 return kernel_id, ramdisk_id
3117
3118
3119
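The rpc_call_wrapper stub in _setup_for_create_instance reflects the behavioral change in this branch: compute_api.create() now does an rpc.call() to the scheduler and expects the created instance dicts back, rather than casting and inserting the DB rows itself. What the stub intercepts looks roughly like this (caller-side sketch; the real code is in the nova/compute/api.py hunk):

    msg = {'method': 'run_instance',
           'args': {'request_spec': request_spec}}
    instances = rpc.call(context, FLAGS.scheduler_topic, msg)
    # With rpc_call_wrapper installed, this returns the fake instance
    # dicts built by instance_create() instead of reaching a scheduler.
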
3120=== modified file 'nova/tests/integrated/api/client.py'
3121--- nova/tests/integrated/api/client.py 2011-08-17 14:55:27 +0000
3122+++ nova/tests/integrated/api/client.py 2011-09-23 07:08:19 +0000
3123@@ -16,6 +16,7 @@
3124
3125 import json
3126 import httplib
3127+import urllib
3128 import urlparse
3129
3130 from nova import log as logging
3131@@ -100,7 +101,7 @@
3132
3133 relative_url = parsed_url.path
3134 if parsed_url.query:
3135- relative_url = relative_url + parsed_url.query
3136+ relative_url = relative_url + "?" + parsed_url.query
3137 LOG.info(_("Doing %(method)s on %(relative_url)s") % locals())
3138 if body:
3139 LOG.info(_("Body: %s") % body)
3140@@ -205,12 +206,24 @@
3141 def get_server(self, server_id):
3142 return self.api_get('/servers/%s' % server_id)['server']
3143
3144- def get_servers(self, detail=True):
3145+ def get_servers(self, detail=True, search_opts=None):
3146 rel_url = '/servers/detail' if detail else '/servers'
3147+
3148+ if search_opts is not None:
3149+ qparams = {}
3150+ for opt, val in search_opts.iteritems():
3151+ qparams[opt] = val
3152+ if qparams:
3153+ query_string = "?%s" % urllib.urlencode(qparams)
3154+ rel_url += query_string
3155 return self.api_get(rel_url)['servers']
3156
3157 def post_server(self, server):
3158- return self.api_post('/servers', server)['server']
3159+ response = self.api_post('/servers', server)
3160+ if 'reservation_id' in response:
3161+ return response
3162+ else:
3163+ return response['server']
3164
3165 def put_server(self, server_id, server):
3166 return self.api_put('/servers/%s' % server_id, server)
3167
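
Two small client fixes worth calling out: the rebuilt relative URL now reinserts the "?" between path and query, and get_servers() encodes search_opts with urllib.urlencode. For example:

    import urllib

    qparams = {'reservation_id': 'r-abc123'}
    rel_url = '/servers/detail' + '?' + urllib.urlencode(qparams)
    # rel_url == '/servers/detail?reservation_id=r-abc123'
    # Before the fix, path and query were concatenated with no "?",
    # producing '/servers/detailreservation_id=r-abc123'.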
3168=== modified file 'nova/tests/integrated/test_servers.py'
3169--- nova/tests/integrated/test_servers.py 2011-09-08 13:53:31 +0000
3170+++ nova/tests/integrated/test_servers.py 2011-09-23 07:08:19 +0000
3171@@ -436,6 +436,42 @@
3172 # Cleanup
3173 self._delete_server(server_id)
3174
3175+ def test_create_multiple_servers(self):
3176+ """Creates multiple servers and checks for reservation_id"""
3177+
3178+ # Create 2 servers, setting 'return_reservation_id', which should
3179+ # return a reservation_id
3180+ server = self._build_minimal_create_server_request()
3181+ server['min_count'] = 2
3182+ server['return_reservation_id'] = True
3183+ post = {'server': server}
3184+ response = self.api.post_server(post)
3185+ self.assertIn('reservation_id', response)
3186+ reservation_id = response['reservation_id']
3187+ self.assertNotIn(reservation_id, ['', None])
3188+
3189+ # Create 1 more server, which should not return a reservation_id
3190+ server = self._build_minimal_create_server_request()
3191+ post = {'server': server}
3192+ created_server = self.api.post_server(post)
3193+ self.assertTrue(created_server['id'])
3194+ created_server_id = created_server['id']
3195+
3196+ # Look up servers created by the first request.
3197+ servers = self.api.get_servers(detail=True,
3198+ search_opts={'reservation_id': reservation_id})
3199+ server_map = dict((server['id'], server) for server in servers)
3200+ found_server = server_map.get(created_server_id)
3201+ # The server from the 2nd request should not be there.
3202+ self.assertEqual(found_server, None)
3203+ # Should have found 2 servers.
3204+ self.assertEqual(len(server_map), 2)
3205+
3206+ # Cleanup
3207+ self._delete_server(created_server_id)
3208+ for server_id in server_map.iterkeys():
3209+ self._delete_server(server_id)
3210+
3211
3212 if __name__ == "__main__":
3213 unittest.main()
3214
3215=== modified file 'nova/tests/scheduler/test_abstract_scheduler.py'
3216--- nova/tests/scheduler/test_abstract_scheduler.py 2011-09-08 19:40:45 +0000
3217+++ nova/tests/scheduler/test_abstract_scheduler.py 2011-09-23 07:08:19 +0000
3218@@ -20,6 +20,7 @@
3219
3220 import nova.db
3221
3222+from nova import context
3223 from nova import exception
3224 from nova import rpc
3225 from nova import test
3226@@ -102,7 +103,7 @@
3227 was_called = False
3228
3229
3230-def fake_provision_resource(context, item, instance_id, request_spec, kwargs):
3231+def fake_provision_resource(context, item, request_spec, kwargs):
3232 global was_called
3233 was_called = True
3234
3235@@ -118,8 +119,7 @@
3236 was_called = True
3237
3238
3239-def fake_provision_resource_from_blob(context, item, instance_id,
3240- request_spec, kwargs):
3241+def fake_provision_resource_from_blob(context, item, request_spec, kwargs):
3242 global was_called
3243 was_called = True
3244
3245@@ -185,7 +185,7 @@
3246 zm = FakeZoneManager()
3247 sched.set_zone_manager(zm)
3248
3249- fake_context = {}
3250+ fake_context = context.RequestContext('user', 'project')
3251 build_plan = sched.select(fake_context,
3252 {'instance_type': {'memory_mb': 512},
3253 'num_instances': 4})
3254@@ -229,9 +229,10 @@
3255 zm = FakeEmptyZoneManager()
3256 sched.set_zone_manager(zm)
3257
3258- fake_context = {}
3259+ fake_context = context.RequestContext('user', 'project')
3260+ request_spec = {}
3261 self.assertRaises(driver.NoValidHost, sched.schedule_run_instance,
3262- fake_context, 1,
3263+ fake_context, request_spec,
3264 dict(host_filter=None, instance_type={}))
3265
3266 def test_schedule_do_not_schedule_with_hint(self):
3267@@ -250,8 +251,8 @@
3268 'blob': "Non-None blob data",
3269 }
3270
3271- result = sched.schedule_run_instance(None, 1, request_spec)
3272- self.assertEquals(None, result)
3273+ instances = sched.schedule_run_instance(None, request_spec)
3274+ self.assertTrue(instances)
3275 self.assertTrue(was_called)
3276
3277 def test_provision_resource_local(self):
3278@@ -263,7 +264,7 @@
3279 fake_provision_resource_locally)
3280
3281 request_spec = {'hostname': "foo"}
3282- sched._provision_resource(None, request_spec, 1, request_spec, {})
3283+ sched._provision_resource(None, request_spec, request_spec, {})
3284 self.assertTrue(was_called)
3285
3286 def test_provision_resource_remote(self):
3287@@ -275,7 +276,7 @@
3288 fake_provision_resource_from_blob)
3289
3290 request_spec = {}
3291- sched._provision_resource(None, request_spec, 1, request_spec, {})
3292+ sched._provision_resource(None, request_spec, request_spec, {})
3293 self.assertTrue(was_called)
3294
3295 def test_provision_resource_from_blob_empty(self):
3296@@ -285,7 +286,7 @@
3297 request_spec = {}
3298 self.assertRaises(abstract_scheduler.InvalidBlob,
3299 sched._provision_resource_from_blob,
3300- None, {}, 1, {}, {})
3301+ None, {}, {}, {})
3302
3303 def test_provision_resource_from_blob_with_local_blob(self):
3304 """
3305@@ -303,20 +304,21 @@
3306 # return fake instances
3307 return {'id': 1, 'uuid': 'f874093c-7b17-49c0-89c3-22a5348497f9'}
3308
3309- def fake_rpc_cast(*args, **kwargs):
3310+ def fake_cast_to_compute_host(*args, **kwargs):
3311 pass
3312
3313 self.stubs.Set(sched, '_decrypt_blob',
3314 fake_decrypt_blob_returns_local_info)
3315+ self.stubs.Set(driver, 'cast_to_compute_host',
3316+ fake_cast_to_compute_host)
3317 self.stubs.Set(compute_api.API,
3318 'create_db_entry_for_new_instance',
3319 fake_create_db_entry_for_new_instance)
3320- self.stubs.Set(rpc, 'cast', fake_rpc_cast)
3321
3322 build_plan_item = {'blob': "Non-None blob data"}
3323 request_spec = {'image': {}, 'instance_properties': {}}
3324
3325- sched._provision_resource_from_blob(None, build_plan_item, 1,
3326+ sched._provision_resource_from_blob(None, build_plan_item,
3327 request_spec, {})
3328 self.assertTrue(was_called)
3329
3330@@ -335,7 +337,7 @@
3331
3332 request_spec = {'blob': "Non-None blob data"}
3333
3334- sched._provision_resource_from_blob(None, request_spec, 1,
3335+ sched._provision_resource_from_blob(None, request_spec,
3336 request_spec, {})
3337 self.assertTrue(was_called)
3338
3339@@ -352,7 +354,7 @@
3340
3341 request_spec = {'child_blob': True, 'child_zone': True}
3342
3343- sched._provision_resource_from_blob(None, request_spec, 1,
3344+ sched._provision_resource_from_blob(None, request_spec,
3345 request_spec, {})
3346 self.assertTrue(was_called)
3347
3348@@ -386,7 +388,7 @@
3349 zm.service_states = {}
3350 sched.set_zone_manager(zm)
3351
3352- fake_context = {}
3353+ fake_context = context.RequestContext('user', 'project')
3354 build_plan = sched.select(fake_context,
3355 {'instance_type': {'memory_mb': 512},
3356 'num_instances': 4})
3357@@ -394,6 +396,45 @@
3358 # 0 from local zones, 12 from remotes
3359 self.assertEqual(12, len(build_plan))
3360
3361+ def test_run_instance_non_admin(self):
3362+ """Test creating an instance locally using run_instance, passing
3363+ a non-admin context. DB actions should work."""
3364+ sched = FakeAbstractScheduler()
3365+
3366+ def fake_cast_to_compute_host(*args, **kwargs):
3367+ pass
3368+
3369+ def fake_zone_get_all_zero(context):
3370+ # make sure this is called with admin context, even though
3371+ # we're using user context below
3372+ self.assertTrue(context.is_admin)
3373+ return []
3374+
3375+ self.stubs.Set(driver, 'cast_to_compute_host',
3376+ fake_cast_to_compute_host)
3377+ self.stubs.Set(sched, '_call_zone_method', fake_call_zone_method)
3378+ self.stubs.Set(nova.db, 'zone_get_all', fake_zone_get_all_zero)
3379+
3380+ zm = FakeZoneManager()
3381+ sched.set_zone_manager(zm)
3382+
3383+ fake_context = context.RequestContext('user', 'project')
3384+
3385+ request_spec = {
3386+ 'image': {'properties': {}},
3387+ 'security_group': [],
3388+ 'instance_properties': {
3389+ 'project_id': fake_context.project_id,
3390+ 'user_id': fake_context.user_id},
3391+ 'instance_type': {'memory_mb': 256},
3392+ 'filter_driver': 'nova.scheduler.host_filter.AllHostsFilter'
3393+ }
3394+
3395+ instances = sched.schedule_run_instance(fake_context, request_spec)
3396+ self.assertEqual(len(instances), 1)
3397+ self.assertFalse(instances[0].get('_is_precooked', False))
3398+ nova.db.instance_destroy(fake_context, instances[0]['id'])
3399+
3400
3401 class BaseSchedulerTestCase(test.TestCase):
3402 """Test case for Base Scheduler."""
3403
3404=== modified file 'nova/tests/scheduler/test_least_cost_scheduler.py'
3405--- nova/tests/scheduler/test_least_cost_scheduler.py 2011-08-15 22:31:24 +0000
3406+++ nova/tests/scheduler/test_least_cost_scheduler.py 2011-09-23 07:08:19 +0000
3407@@ -134,7 +134,7 @@
3408
3409 expected = []
3410 for idx, (hostname, services) in enumerate(hosts):
3411- caps = copy.deepcopy(services["compute"])
3412+ caps = copy.deepcopy(services)
3413 # Costs are normalized so over 10 hosts, each host with increasing
3414 # free ram will cost 1/N more. Since the lowest cost host has some
3415 # free ram, we add in the 1/N for the base_cost
3416
3417=== modified file 'nova/tests/scheduler/test_scheduler.py'
3418--- nova/tests/scheduler/test_scheduler.py 2011-09-15 20:42:30 +0000
3419+++ nova/tests/scheduler/test_scheduler.py 2011-09-23 07:08:19 +0000
3420@@ -35,10 +35,13 @@
3421 from nova import test
3422 from nova import rpc
3423 from nova import utils
3424+from nova.db.sqlalchemy import models
3425 from nova.scheduler import api
3426 from nova.scheduler import driver
3427 from nova.scheduler import manager
3428 from nova.scheduler import multi
3429+from nova.scheduler.simple import SimpleScheduler
3430+from nova.scheduler.zone import ZoneScheduler
3431 from nova.compute import power_state
3432 from nova.compute import vm_states
3433
3434@@ -53,17 +56,86 @@
3435 FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
3436
3437
3438-class FakeContext(object):
3439- auth_token = None
3440+def _create_instance_dict(**kwargs):
3441+ """Create a dictionary for a test instance"""
3442+ inst = {}
3443+ # NOTE(jk0): If an integer is passed as the image_ref, the image
3444+ # service will use the default image service (in this case, the fake).
3445+ inst['image_ref'] = '1'
3446+ inst['reservation_id'] = 'r-fakeres'
3447+ inst['user_id'] = kwargs.get('user_id', 'admin')
3448+ inst['project_id'] = kwargs.get('project_id', 'fake')
3449+ inst['instance_type_id'] = '1'
3450+ if 'host' in kwargs:
3451+ inst['host'] = kwargs.get('host')
3452+ inst['vcpus'] = kwargs.get('vcpus', 1)
3453+ inst['memory_mb'] = kwargs.get('memory_mb', 20)
3454+ inst['local_gb'] = kwargs.get('local_gb', 30)
3455+ inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
3456+ inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
3457+ inst['task_state'] = kwargs.get('task_state', None)
3458+ inst['availability_zone'] = kwargs.get('availability_zone', None)
3459+ inst['ami_launch_index'] = 0
3460+ inst['launched_on'] = kwargs.get('launched_on', 'dummy')
3461+ return inst
3462+
3463+
3464+def _create_volume():
3465+ """Create a test volume"""
3466+ vol = {}
3467+ vol['size'] = 1
3468+ vol['availability_zone'] = 'test'
3469+ ctxt = context.get_admin_context()
3470+ return db.volume_create(ctxt, vol)['id']
3471+
3472+
3473+def _create_instance(**kwargs):
3474+ """Create a test instance"""
3475+ ctxt = context.get_admin_context()
3476+ return db.instance_create(ctxt, _create_instance_dict(**kwargs))
3477+
3478+
3479+def _create_instance_from_spec(spec):
3480+ return _create_instance(**spec['instance_properties'])
3481+
3482+
3483+def _create_request_spec(**kwargs):
3484+ return dict(instance_properties=_create_instance_dict(**kwargs))
3485+
3486+
3487+def _fake_cast_to_compute_host(context, host, method, **kwargs):
3488+ global _picked_host
3489+ _picked_host = host
3490+
3491+
3492+def _fake_cast_to_volume_host(context, host, method, **kwargs):
3493+ global _picked_host
3494+ _picked_host = host
3495+
3496+
3497+def _fake_create_instance_db_entry(simple_self, context, request_spec):
3498+ instance = _create_instance_from_spec(request_spec)
3499+ global instance_ids
3500+ instance_ids.append(instance['id'])
3501+ return instance
3502+
3503+
3504+class FakeContext(context.RequestContext):
3505+ def __init__(self, *args, **kwargs):
3506+ super(FakeContext, self).__init__('user', 'project', **kwargs)
3507
3508
3509 class TestDriver(driver.Scheduler):
3510 """Scheduler Driver for Tests"""
3511- def schedule(context, topic, *args, **kwargs):
3512- return 'fallback_host'
3513+ def schedule(self, context, topic, method, *args, **kwargs):
3514+ host = 'fallback_host'
3515+ driver.cast_to_host(context, topic, host, method, **kwargs)
3516
3517- def schedule_named_method(context, topic, num):
3518- return 'named_host'
3519+ def schedule_named_method(self, context, num=None):
3520+ topic = 'topic'
3521+ host = 'named_host'
3522+ method = 'named_method'
3523+ driver.cast_to_host(context, topic, host, method, num=num)
3524
3525
3526 class SchedulerTestCase(test.TestCase):
3527@@ -89,31 +161,16 @@
3528
3529 return db.service_get(ctxt, s_ref['id'])
3530
3531- def _create_instance(self, **kwargs):
3532- """Create a test instance"""
3533- ctxt = context.get_admin_context()
3534- inst = {}
3535- inst['user_id'] = 'admin'
3536- inst['project_id'] = kwargs.get('project_id', 'fake')
3537- inst['host'] = kwargs.get('host', 'dummy')
3538- inst['vcpus'] = kwargs.get('vcpus', 1)
3539- inst['memory_mb'] = kwargs.get('memory_mb', 10)
3540- inst['local_gb'] = kwargs.get('local_gb', 20)
3541- inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
3542- inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
3543- inst['task_state'] = kwargs.get('task_state', None)
3544- return db.instance_create(ctxt, inst)
3545-
3546 def test_fallback(self):
3547 scheduler = manager.SchedulerManager()
3548 self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
3549 ctxt = context.get_admin_context()
3550 rpc.cast(ctxt,
3551- 'topic.fallback_host',
3552+ 'fake_topic.fallback_host',
3553 {'method': 'noexist',
3554 'args': {'num': 7}})
3555 self.mox.ReplayAll()
3556- scheduler.noexist(ctxt, 'topic', num=7)
3557+ scheduler.noexist(ctxt, 'fake_topic', num=7)
3558
3559 def test_named_method(self):
3560 scheduler = manager.SchedulerManager()
3561@@ -173,8 +230,8 @@
3562 scheduler = manager.SchedulerManager()
3563 ctxt = context.get_admin_context()
3564 s_ref = self._create_compute_service()
3565- i_ref1 = self._create_instance(project_id='p-01', host=s_ref['host'])
3566- i_ref2 = self._create_instance(project_id='p-02', vcpus=3,
3567+ i_ref1 = _create_instance(project_id='p-01', host=s_ref['host'])
3568+ i_ref2 = _create_instance(project_id='p-02', vcpus=3,
3569 host=s_ref['host'])
3570
3571 result = scheduler.show_host_resources(ctxt, s_ref['host'])
3572@@ -197,7 +254,10 @@
3573 """Test case for zone scheduler"""
3574 def setUp(self):
3575 super(ZoneSchedulerTestCase, self).setUp()
3576- self.flags(scheduler_driver='nova.scheduler.zone.ZoneScheduler')
3577+ self.flags(
3578+ scheduler_driver='nova.scheduler.multi.MultiScheduler',
3579+ compute_scheduler_driver='nova.scheduler.zone.ZoneScheduler',
3580+ volume_scheduler_driver='nova.scheduler.zone.ZoneScheduler')
3581
3582 def _create_service_model(self, **kwargs):
3583 service = db.sqlalchemy.models.Service()
3584@@ -214,7 +274,7 @@
3585
3586 def test_with_two_zones(self):
3587 scheduler = manager.SchedulerManager()
3588- ctxt = context.get_admin_context()
3589+ ctxt = context.RequestContext('user', 'project')
3590 service_list = [self._create_service_model(id=1,
3591 host='host1',
3592 zone='zone1'),
3593@@ -230,66 +290,53 @@
3594 self._create_service_model(id=5,
3595 host='host5',
3596 zone='zone2')]
3597+
3598+ request_spec = _create_request_spec(availability_zone='zone1')
3599+
3600+ fake_instance = _create_instance_dict(
3601+ **request_spec['instance_properties'])
3602+ fake_instance['id'] = 100
3603+ fake_instance['uuid'] = FAKE_UUID
3604+
3605 self.mox.StubOutWithMock(db, 'service_get_all_by_topic')
3606+ self.mox.StubOutWithMock(db, 'instance_update')
3607+ # Assumes we're testing with MultiScheduler
3608+ compute_sched_driver = scheduler.driver.drivers['compute']
3609+ self.mox.StubOutWithMock(compute_sched_driver,
3610+ 'create_instance_db_entry')
3611+ self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
3612+
3613 arg = IgnoreArg()
3614 db.service_get_all_by_topic(arg, arg).AndReturn(service_list)
3615- self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
3616- rpc.cast(ctxt,
3617+ compute_sched_driver.create_instance_db_entry(arg,
3618+ request_spec).AndReturn(fake_instance)
3619+ db.instance_update(arg, 100, {'host': 'host1', 'scheduled_at': arg})
3620+ rpc.cast(arg,
3621 'compute.host1',
3622 {'method': 'run_instance',
3623- 'args': {'instance_id': 'i-ffffffff',
3624- 'availability_zone': 'zone1'}})
3625+ 'args': {'instance_id': 100}})
3626 self.mox.ReplayAll()
3627 scheduler.run_instance(ctxt,
3628 'compute',
3629- instance_id='i-ffffffff',
3630- availability_zone='zone1')
3631+ request_spec=request_spec)
3632
3633
3634 class SimpleDriverTestCase(test.TestCase):
3635 """Test case for simple driver"""
3636 def setUp(self):
3637 super(SimpleDriverTestCase, self).setUp()
3638+ simple_scheduler = 'nova.scheduler.simple.SimpleScheduler'
3639 self.flags(connection_type='fake',
3640- stub_network=True,
3641- max_cores=4,
3642- max_gigabytes=4,
3643- network_manager='nova.network.manager.FlatManager',
3644- volume_driver='nova.volume.driver.FakeISCSIDriver',
3645- scheduler_driver='nova.scheduler.simple.SimpleScheduler')
3646+ stub_network=True,
3647+ max_cores=4,
3648+ max_gigabytes=4,
3649+ network_manager='nova.network.manager.FlatManager',
3650+ volume_driver='nova.volume.driver.FakeISCSIDriver',
3651+ scheduler_driver='nova.scheduler.multi.MultiScheduler',
3652+ compute_scheduler_driver=simple_scheduler,
3653+ volume_scheduler_driver=simple_scheduler)
3654 self.scheduler = manager.SchedulerManager()
3655 self.context = context.get_admin_context()
3656- self.user_id = 'fake'
3657- self.project_id = 'fake'
3658-
3659- def _create_instance(self, **kwargs):
3660- """Create a test instance"""
3661- inst = {}
3662- # NOTE(jk0): If an integer is passed as the image_ref, the image
3663- # service will use the default image service (in this case, the fake).
3664- inst['image_ref'] = '1'
3665- inst['reservation_id'] = 'r-fakeres'
3666- inst['user_id'] = self.user_id
3667- inst['project_id'] = self.project_id
3668- inst['instance_type_id'] = '1'
3669- inst['vcpus'] = kwargs.get('vcpus', 1)
3670- inst['ami_launch_index'] = 0
3671- inst['availability_zone'] = kwargs.get('availability_zone', None)
3672- inst['host'] = kwargs.get('host', 'dummy')
3673- inst['memory_mb'] = kwargs.get('memory_mb', 20)
3674- inst['local_gb'] = kwargs.get('local_gb', 30)
3675- inst['launched_on'] = kwargs.get('launghed_on', 'dummy')
3676- inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE)
3677- inst['task_state'] = kwargs.get('task_state', None)
3678- inst['power_state'] = kwargs.get('power_state', power_state.RUNNING)
3679- return db.instance_create(self.context, inst)['id']
3680-
3681- def _create_volume(self):
3682- """Create a test volume"""
3683- vol = {}
3684- vol['size'] = 1
3685- vol['availability_zone'] = 'test'
3686- return db.volume_create(self.context, vol)['id']
3687
3688 def _create_compute_service(self, **kwargs):
3689 """Create a compute service."""
3690@@ -369,14 +416,30 @@
3691 'compute',
3692 FLAGS.compute_manager)
3693 compute2.start()
3694- instance_id1 = self._create_instance()
3695- compute1.run_instance(self.context, instance_id1)
3696- instance_id2 = self._create_instance()
3697- host = self.scheduler.driver.schedule_run_instance(self.context,
3698- instance_id2)
3699- self.assertEqual(host, 'host2')
3700- compute1.terminate_instance(self.context, instance_id1)
3701- db.instance_destroy(self.context, instance_id2)
3702+
3703+ global instance_ids
3704+ instance_ids = []
3705+ instance_ids.append(_create_instance()['id'])
3706+ compute1.run_instance(self.context, instance_ids[0])
3707+
3708+ self.stubs.Set(SimpleScheduler,
3709+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3710+ global _picked_host
3711+ _picked_host = None
3712+ self.stubs.Set(driver,
3713+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3714+
3715+ request_spec = _create_request_spec()
3716+ instances = self.scheduler.driver.schedule_run_instance(
3717+ self.context, request_spec)
3718+
3719+ self.assertEqual(_picked_host, 'host2')
3720+ self.assertEqual(len(instance_ids), 2)
3721+ self.assertEqual(len(instances), 1)
3722+ self.assertEqual(instances[0].get('_is_precooked', False), False)
3723+
3724+ compute1.terminate_instance(self.context, instance_ids[0])
3725+ compute2.terminate_instance(self.context, instance_ids[1])
3726 compute1.kill()
3727 compute2.kill()
3728
3729@@ -392,14 +455,27 @@
3730 'compute',
3731 FLAGS.compute_manager)
3732 compute2.start()
3733- instance_id1 = self._create_instance()
3734- compute1.run_instance(self.context, instance_id1)
3735- instance_id2 = self._create_instance(availability_zone='nova:host1')
3736- host = self.scheduler.driver.schedule_run_instance(self.context,
3737- instance_id2)
3738- self.assertEqual('host1', host)
3739- compute1.terminate_instance(self.context, instance_id1)
3740- db.instance_destroy(self.context, instance_id2)
3741+
3742+ global instance_ids
3743+ instance_ids = []
3744+ instance_ids.append(_create_instance()['id'])
3745+ compute1.run_instance(self.context, instance_ids[0])
3746+
3747+ self.stubs.Set(SimpleScheduler,
3748+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3749+ global _picked_host
3750+ _picked_host = None
3751+ self.stubs.Set(driver,
3752+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3753+
3754+ request_spec = _create_request_spec(availability_zone='nova:host1')
3755+ instances = self.scheduler.driver.schedule_run_instance(
3756+ self.context, request_spec)
3757+ self.assertEqual(_picked_host, 'host1')
3758+ self.assertEqual(len(instance_ids), 2)
3759+
3760+ compute1.terminate_instance(self.context, instance_ids[0])
3761+ compute1.terminate_instance(self.context, instance_ids[1])
3762 compute1.kill()
3763 compute2.kill()
3764
3765@@ -414,12 +490,21 @@
3766 delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
3767 past = now - delta
3768 db.service_update(self.context, s1['id'], {'updated_at': past})
3769- instance_id2 = self._create_instance(availability_zone='nova:host1')
3770+
3771+ global instance_ids
3772+ instance_ids = []
3773+ self.stubs.Set(SimpleScheduler,
3774+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3775+ global _picked_host
3776+ _picked_host = None
3777+ self.stubs.Set(driver,
3778+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3779+
3780+ request_spec = _create_request_spec(availability_zone='nova:host1')
3781 self.assertRaises(driver.WillNotSchedule,
3782 self.scheduler.driver.schedule_run_instance,
3783 self.context,
3784- instance_id2)
3785- db.instance_destroy(self.context, instance_id2)
3786+ request_spec)
3787 compute1.kill()
3788
3789 def test_will_schedule_on_disabled_host_if_specified_no_queue(self):
3790@@ -430,11 +515,22 @@
3791 compute1.start()
3792 s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
3793 db.service_update(self.context, s1['id'], {'disabled': True})
3794- instance_id2 = self._create_instance(availability_zone='nova:host1')
3795- host = self.scheduler.driver.schedule_run_instance(self.context,
3796- instance_id2)
3797- self.assertEqual('host1', host)
3798- db.instance_destroy(self.context, instance_id2)
3799+
3800+ global instance_ids
3801+ instance_ids = []
3802+ self.stubs.Set(SimpleScheduler,
3803+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3804+ global _picked_host
3805+ _picked_host = None
3806+ self.stubs.Set(driver,
3807+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3808+
3809+ request_spec = _create_request_spec(availability_zone='nova:host1')
3810+ instances = self.scheduler.driver.schedule_run_instance(
3811+ self.context, request_spec)
3812+ self.assertEqual(_picked_host, 'host1')
3813+ self.assertEqual(len(instance_ids), 1)
3814+ compute1.terminate_instance(self.context, instance_ids[0])
3815 compute1.kill()
3816
3817 def test_too_many_cores_no_queue(self):
3818@@ -452,17 +548,17 @@
3819 instance_ids1 = []
3820 instance_ids2 = []
3821 for index in xrange(FLAGS.max_cores):
3822- instance_id = self._create_instance()
3823+ instance_id = _create_instance()['id']
3824 compute1.run_instance(self.context, instance_id)
3825 instance_ids1.append(instance_id)
3826- instance_id = self._create_instance()
3827+ instance_id = _create_instance()['id']
3828 compute2.run_instance(self.context, instance_id)
3829 instance_ids2.append(instance_id)
3830- instance_id = self._create_instance()
3831+ request_spec = _create_request_spec()
3832 self.assertRaises(driver.NoValidHost,
3833 self.scheduler.driver.schedule_run_instance,
3834 self.context,
3835- instance_id)
3836+ request_spec)
3837 for instance_id in instance_ids1:
3838 compute1.terminate_instance(self.context, instance_id)
3839 for instance_id in instance_ids2:
3840@@ -481,13 +577,19 @@
3841 'nova-volume',
3842 'volume',
3843 FLAGS.volume_manager)
3844+
3845+ global _picked_host
3846+ _picked_host = None
3847+ self.stubs.Set(driver,
3848+ 'cast_to_volume_host', _fake_cast_to_volume_host)
3849+
3850 volume2.start()
3851- volume_id1 = self._create_volume()
3852+ volume_id1 = _create_volume()
3853 volume1.create_volume(self.context, volume_id1)
3854- volume_id2 = self._create_volume()
3855- host = self.scheduler.driver.schedule_create_volume(self.context,
3856- volume_id2)
3857- self.assertEqual(host, 'host2')
3858+ volume_id2 = _create_volume()
3859+ self.scheduler.driver.schedule_create_volume(self.context,
3860+ volume_id2)
3861+ self.assertEqual(_picked_host, 'host2')
3862 volume1.delete_volume(self.context, volume_id1)
3863 db.volume_destroy(self.context, volume_id2)
3864
3865@@ -514,17 +616,30 @@
3866 compute2.kill()
3867
3868 def test_least_busy_host_gets_instance(self):
3869- """Ensures the host with less cores gets the next one"""
3870+ """Ensures the host with less cores gets the next one w/ Simple"""
3871 compute1 = self.start_service('compute', host='host1')
3872 compute2 = self.start_service('compute', host='host2')
3873- instance_id1 = self._create_instance()
3874- compute1.run_instance(self.context, instance_id1)
3875- instance_id2 = self._create_instance()
3876- host = self.scheduler.driver.schedule_run_instance(self.context,
3877- instance_id2)
3878- self.assertEqual(host, 'host2')
3879- compute1.terminate_instance(self.context, instance_id1)
3880- db.instance_destroy(self.context, instance_id2)
3881+
3882+ global instance_ids
3883+ instance_ids = []
3884+ instance_ids.append(_create_instance()['id'])
3885+ compute1.run_instance(self.context, instance_ids[0])
3886+
3887+ self.stubs.Set(SimpleScheduler,
3888+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3889+ global _picked_host
3890+ _picked_host = None
3891+ self.stubs.Set(driver,
3892+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3893+
3894+ request_spec = _create_request_spec()
3895+ instances = self.scheduler.driver.schedule_run_instance(
3896+ self.context, request_spec)
3897+ self.assertEqual(_picked_host, 'host2')
3898+ self.assertEqual(len(instance_ids), 2)
3899+
3900+ compute1.terminate_instance(self.context, instance_ids[0])
3901+ compute2.terminate_instance(self.context, instance_ids[1])
3902 compute1.kill()
3903 compute2.kill()
3904
3905@@ -532,41 +647,64 @@
3906 """Ensures if you set availability_zone it launches on that zone"""
3907 compute1 = self.start_service('compute', host='host1')
3908 compute2 = self.start_service('compute', host='host2')
3909- instance_id1 = self._create_instance()
3910- compute1.run_instance(self.context, instance_id1)
3911- instance_id2 = self._create_instance(availability_zone='nova:host1')
3912- host = self.scheduler.driver.schedule_run_instance(self.context,
3913- instance_id2)
3914- self.assertEqual('host1', host)
3915- compute1.terminate_instance(self.context, instance_id1)
3916- db.instance_destroy(self.context, instance_id2)
3917+
3918+ global instance_ids
3919+ instance_ids = []
3920+ instance_ids.append(_create_instance()['id'])
3921+ compute1.run_instance(self.context, instance_ids[0])
3922+
3923+ self.stubs.Set(SimpleScheduler,
3924+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3925+ global _picked_host
3926+ _picked_host = None
3927+ self.stubs.Set(driver,
3928+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3929+
3930+ request_spec = _create_request_spec(availability_zone='nova:host1')
3931+ instances = self.scheduler.driver.schedule_run_instance(
3932+ self.context, request_spec)
3933+ self.assertEqual(_picked_host, 'host1')
3934+ self.assertEqual(len(instance_ids), 2)
3935+
3936+ compute1.terminate_instance(self.context, instance_ids[0])
3937+ compute1.terminate_instance(self.context, instance_ids[1])
3938 compute1.kill()
3939 compute2.kill()
3940
3941- def test_wont_sechedule_if_specified_host_is_down(self):
3942+ def test_wont_schedule_if_specified_host_is_down(self):
3943 compute1 = self.start_service('compute', host='host1')
3944 s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
3945 now = utils.utcnow()
3946 delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
3947 past = now - delta
3948 db.service_update(self.context, s1['id'], {'updated_at': past})
3949- instance_id2 = self._create_instance(availability_zone='nova:host1')
3950+ request_spec = _create_request_spec(availability_zone='nova:host1')
3951 self.assertRaises(driver.WillNotSchedule,
3952 self.scheduler.driver.schedule_run_instance,
3953 self.context,
3954- instance_id2)
3955- db.instance_destroy(self.context, instance_id2)
3956+ request_spec)
3957 compute1.kill()
3958
3959 def test_will_schedule_on_disabled_host_if_specified(self):
3960 compute1 = self.start_service('compute', host='host1')
3961 s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
3962 db.service_update(self.context, s1['id'], {'disabled': True})
3963- instance_id2 = self._create_instance(availability_zone='nova:host1')
3964- host = self.scheduler.driver.schedule_run_instance(self.context,
3965- instance_id2)
3966- self.assertEqual('host1', host)
3967- db.instance_destroy(self.context, instance_id2)
3968+
3969+ global instance_ids
3970+ instance_ids = []
3971+ self.stubs.Set(SimpleScheduler,
3972+ 'create_instance_db_entry', _fake_create_instance_db_entry)
3973+ global _picked_host
3974+ _picked_host = None
3975+ self.stubs.Set(driver,
3976+ 'cast_to_compute_host', _fake_cast_to_compute_host)
3977+
3978+ request_spec = _create_request_spec(availability_zone='nova:host1')
3979+ instances = self.scheduler.driver.schedule_run_instance(
3980+ self.context, request_spec)
3981+ self.assertEqual(_picked_host, 'host1')
3982+ self.assertEqual(len(instance_ids), 1)
3983+ compute1.terminate_instance(self.context, instance_ids[0])
3984 compute1.kill()
3985
3986 def test_too_many_cores(self):
3987@@ -576,18 +714,30 @@
3988 instance_ids1 = []
3989 instance_ids2 = []
3990 for index in xrange(FLAGS.max_cores):
3991- instance_id = self._create_instance()
3992+ instance_id = _create_instance()['id']
3993 compute1.run_instance(self.context, instance_id)
3994 instance_ids1.append(instance_id)
3995- instance_id = self._create_instance()
3996+ instance_id = _create_instance()['id']
3997 compute2.run_instance(self.context, instance_id)
3998 instance_ids2.append(instance_id)
3999- instance_id = self._create_instance()
4000+
4001+ def _create_instance_db_entry(simple_self, context, request_spec):
4002+ self.fail(_("Shouldn't try to create DB entry when at "
4003+ "max cores"))
4004+ self.stubs.Set(SimpleScheduler,
4005+ 'create_instance_db_entry', _create_instance_db_entry)
4006+
4007+ global _picked_host
4008+ _picked_host = None
4009+ self.stubs.Set(driver,
4010+ 'cast_to_compute_host', _fake_cast_to_compute_host)
4011+
4012+ request_spec = _create_request_spec()
4013+
4014 self.assertRaises(driver.NoValidHost,
4015 self.scheduler.driver.schedule_run_instance,
4016 self.context,
4017- instance_id)
4018- db.instance_destroy(self.context, instance_id)
4019+ request_spec)
4020 for instance_id in instance_ids1:
4021 compute1.terminate_instance(self.context, instance_id)
4022 for instance_id in instance_ids2:
4023@@ -599,12 +749,18 @@
4024 """Ensures the host with less gigabytes gets the next one"""
4025 volume1 = self.start_service('volume', host='host1')
4026 volume2 = self.start_service('volume', host='host2')
4027- volume_id1 = self._create_volume()
4028+
4029+ global _picked_host
4030+ _picked_host = None
4031+ self.stubs.Set(driver,
4032+ 'cast_to_volume_host', _fake_cast_to_volume_host)
4033+
4034+ volume_id1 = _create_volume()
4035 volume1.create_volume(self.context, volume_id1)
4036- volume_id2 = self._create_volume()
4037- host = self.scheduler.driver.schedule_create_volume(self.context,
4038- volume_id2)
4039- self.assertEqual(host, 'host2')
4040+ volume_id2 = _create_volume()
4041+ self.scheduler.driver.schedule_create_volume(self.context,
4042+ volume_id2)
4043+ self.assertEqual(_picked_host, 'host2')
4044 volume1.delete_volume(self.context, volume_id1)
4045 db.volume_destroy(self.context, volume_id2)
4046 volume1.kill()
4047@@ -617,13 +773,13 @@
4048 volume_ids1 = []
4049 volume_ids2 = []
4050 for index in xrange(FLAGS.max_gigabytes):
4051- volume_id = self._create_volume()
4052+ volume_id = _create_volume()
4053 volume1.create_volume(self.context, volume_id)
4054 volume_ids1.append(volume_id)
4055- volume_id = self._create_volume()
4056+ volume_id = _create_volume()
4057 volume2.create_volume(self.context, volume_id)
4058 volume_ids2.append(volume_id)
4059- volume_id = self._create_volume()
4060+ volume_id = _create_volume()
4061 self.assertRaises(driver.NoValidHost,
4062 self.scheduler.driver.schedule_create_volume,
4063 self.context,
4064@@ -636,13 +792,13 @@
4065 volume2.kill()
4066
4067 def test_scheduler_live_migration_with_volume(self):
4068- """scheduler_live_migration() works correctly as expected.
4069+ """schedule_live_migration() works correctly as expected.
4070
4071 Also, checks instance state is changed from 'running' -> 'migrating'.
4072
4073 """
4074
4075- instance_id = self._create_instance()
4076+ instance_id = _create_instance(host='dummy')['id']
4077 i_ref = db.instance_get(self.context, instance_id)
4078 dic = {'instance_id': instance_id, 'size': 1}
4079 v_ref = db.volume_create(self.context, dic)
4080@@ -680,7 +836,8 @@
4081 def test_live_migration_src_check_instance_not_running(self):
4082 """The instance given by instance_id is not running."""
4083
4084- instance_id = self._create_instance(power_state=power_state.NOSTATE)
4085+ instance_id = _create_instance(
4086+ power_state=power_state.NOSTATE)['id']
4087 i_ref = db.instance_get(self.context, instance_id)
4088
4089 try:
4090@@ -695,7 +852,7 @@
4091 def test_live_migration_src_check_volume_node_not_alive(self):
4092 """Raise exception when volume node is not alive."""
4093
4094- instance_id = self._create_instance()
4095+ instance_id = _create_instance()['id']
4096 i_ref = db.instance_get(self.context, instance_id)
4097 dic = {'instance_id': instance_id, 'size': 1}
4098 v_ref = db.volume_create(self.context, {'instance_id': instance_id,
4099@@ -715,7 +872,7 @@
4100
4101 def test_live_migration_src_check_compute_node_not_alive(self):
4102 """Confirms src-compute node is alive."""
4103- instance_id = self._create_instance()
4104+ instance_id = _create_instance()['id']
4105 i_ref = db.instance_get(self.context, instance_id)
4106 t = utils.utcnow() - datetime.timedelta(10)
4107 s_ref = self._create_compute_service(created_at=t, updated_at=t,
4108@@ -730,7 +887,7 @@
4109
4110 def test_live_migration_src_check_works_correctly(self):
4111 """Confirms this method finishes with no error."""
4112- instance_id = self._create_instance()
4113+ instance_id = _create_instance()['id']
4114 i_ref = db.instance_get(self.context, instance_id)
4115 s_ref = self._create_compute_service(host=i_ref['host'])
4116
4117@@ -743,7 +900,7 @@
4118
4119 def test_live_migration_dest_check_not_alive(self):
4120 """Confirms exception raises in case dest host does not exist."""
4121- instance_id = self._create_instance()
4122+ instance_id = _create_instance()['id']
4123 i_ref = db.instance_get(self.context, instance_id)
4124 t = utils.utcnow() - datetime.timedelta(10)
4125 s_ref = self._create_compute_service(created_at=t, updated_at=t,
4126@@ -758,7 +915,7 @@
4127
4128 def test_live_migration_dest_check_service_same_host(self):
4129 """Confirms exceptioin raises in case dest and src is same host."""
4130- instance_id = self._create_instance()
4131+ instance_id = _create_instance()['id']
4132 i_ref = db.instance_get(self.context, instance_id)
4133 s_ref = self._create_compute_service(host=i_ref['host'])
4134
4135@@ -771,9 +928,9 @@
4136
4137 def test_live_migration_dest_check_service_lack_memory(self):
4138 """Confirms exception raises when dest doesn't have enough memory."""
4139- instance_id = self._create_instance()
4140- instance_id2 = self._create_instance(host='somewhere',
4141- memory_mb=12)
4142+ instance_id = _create_instance()['id']
4143+ instance_id2 = _create_instance(host='somewhere',
4144+ memory_mb=12)['id']
4145 i_ref = db.instance_get(self.context, instance_id)
4146 s_ref = self._create_compute_service(host='somewhere')
4147
4148@@ -787,9 +944,9 @@
4149
4150 def test_block_migration_dest_check_service_lack_disk(self):
4151 """Confirms exception raises when dest doesn't have enough disk."""
4152- instance_id = self._create_instance()
4153- instance_id2 = self._create_instance(host='somewhere',
4154- local_gb=70)
4155+ instance_id = _create_instance()['id']
4156+ instance_id2 = _create_instance(host='somewhere',
4157+ local_gb=70)['id']
4158 i_ref = db.instance_get(self.context, instance_id)
4159 s_ref = self._create_compute_service(host='somewhere')
4160
4161@@ -803,7 +960,7 @@
4162
4163 def test_live_migration_dest_check_service_works_correctly(self):
4164 """Confirms method finishes with no error."""
4165- instance_id = self._create_instance()
4166+ instance_id = _create_instance()['id']
4167 i_ref = db.instance_get(self.context, instance_id)
4168 s_ref = self._create_compute_service(host='somewhere',
4169 memory_mb_used=5)
4170@@ -821,7 +978,7 @@
4171
4172 dest = 'dummydest'
4173 # mocks for live_migration_common_check()
4174- instance_id = self._create_instance()
4175+ instance_id = _create_instance()['id']
4176 i_ref = db.instance_get(self.context, instance_id)
4177 t1 = utils.utcnow() - datetime.timedelta(10)
4178 s_ref = self._create_compute_service(created_at=t1, updated_at=t1,
4179@@ -855,7 +1012,7 @@
4180 def test_live_migration_common_check_service_different_hypervisor(self):
4181 """Original host and dest host has different hypervisor type."""
4182 dest = 'dummydest'
4183- instance_id = self._create_instance()
4184+ instance_id = _create_instance(host='dummy')['id']
4185 i_ref = db.instance_get(self.context, instance_id)
4186
4187 # compute service for destination
4188@@ -880,7 +1037,7 @@
4189 def test_live_migration_common_check_service_different_version(self):
4190 """Original host and dest host has different hypervisor version."""
4191 dest = 'dummydest'
4192- instance_id = self._create_instance()
4193+ instance_id = _create_instance(host='dummy')['id']
4194 i_ref = db.instance_get(self.context, instance_id)
4195
4196 # compute service for destination
4197@@ -904,10 +1061,10 @@
4198 db.service_destroy(self.context, s_ref2['id'])
4199
4200 def test_live_migration_common_check_checking_cpuinfo_fail(self):
4201- """Raise excetion when original host doen't have compatible cpu."""
4202+ """Raise exception when original host doesn't have compatible cpu."""
4203
4204 dest = 'dummydest'
4205- instance_id = self._create_instance()
4206+ instance_id = _create_instance(host='dummy')['id']
4207 i_ref = db.instance_get(self.context, instance_id)
4208
4209 # compute service for destination
4210@@ -927,7 +1084,7 @@
4211
4212 self.mox.ReplayAll()
4213 try:
4214- self.scheduler.driver._live_migration_common_check(self.context,
4215+ driver._live_migration_common_check(self.context,
4216 i_ref,
4217 dest,
4218 False)
4219@@ -1021,7 +1178,6 @@
4220 class ZoneRedirectTest(test.TestCase):
4221 def setUp(self):
4222 super(ZoneRedirectTest, self).setUp()
4223- self.stubs = stubout.StubOutForTesting()
4224
4225 self.stubs.Set(db, 'zone_get_all', zone_get_all)
4226 self.stubs.Set(db, 'instance_get_by_uuid',
4227@@ -1029,7 +1185,6 @@
4228 self.flags(enable_zone_routing=True)
4229
4230 def tearDown(self):
4231- self.stubs.UnsetAll()
4232 super(ZoneRedirectTest, self).tearDown()
4233
4234 def test_trap_found_locally(self):
4235@@ -1257,12 +1412,10 @@
4236 class CallZoneMethodTest(test.TestCase):
4237 def setUp(self):
4238 super(CallZoneMethodTest, self).setUp()
4239- self.stubs = stubout.StubOutForTesting()
4240 self.stubs.Set(db, 'zone_get_all', zone_get_all)
4241 self.stubs.Set(novaclient, 'Client', FakeNovaClientZones)
4242
4243 def tearDown(self):
4244- self.stubs.UnsetAll()
4245 super(CallZoneMethodTest, self).tearDown()
4246
4247 def test_call_zone_method(self):
4248
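
The test_scheduler.py hunks above depend on module-level helpers (_create_request_spec, _fake_cast_to_compute_host, _fake_cast_to_volume_host, _fake_create_instance_db_entry) and globals (_picked_host, instance_ids) that are defined earlier in this diff. As a rough sketch of the pattern, assuming a request_spec keyed by 'instance_properties', the fakes simply record what the scheduler would have cast over RPC:

    # Sketch only: the real helpers appear earlier in this diff and may
    # differ in detail.
    _picked_host = None
    instance_ids = []


    def _fake_cast_to_compute_host(context, host, method, **kwargs):
        # Record which host the scheduler picked instead of casting to it.
        global _picked_host
        _picked_host = host


    def _fake_cast_to_volume_host(context, host, method, **kwargs):
        global _picked_host
        _picked_host = host


    def _fake_create_instance_db_entry(simple_self, context, request_spec):
        # Create a real instance row (so the test can terminate it later)
        # and remember its id in the module-level list the tests assert on.
        instance = _create_instance(**request_spec['instance_properties'])
        instance_ids.append(instance['id'])
        return instance
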
4249=== modified file 'nova/tests/scheduler/test_vsa_scheduler.py'
4250--- nova/tests/scheduler/test_vsa_scheduler.py 2011-08-26 02:09:50 +0000
4251+++ nova/tests/scheduler/test_vsa_scheduler.py 2011-09-23 07:08:19 +0000
4252@@ -22,6 +22,7 @@
4253 from nova import exception
4254 from nova import flags
4255 from nova import log as logging
4256+from nova import rpc
4257 from nova import test
4258 from nova import utils
4259 from nova.volume import volume_types
4260@@ -37,6 +38,10 @@
4261 global_volume = {}
4262
4263
4264+def fake_rpc_cast(*args, **kwargs):
4265+ pass
4266+
4267+
4268 class FakeVsaLeastUsedScheduler(
4269 vsa_sched.VsaSchedulerLeastUsedHost):
4270 # No need to stub anything at the moment
4271@@ -170,12 +175,10 @@
4272 LOG.debug(_("Test: provision vol %(name)s on host %(host)s"),
4273 locals())
4274 LOG.debug(_("\t vol=%(vol)s"), locals())
4275- pass
4276
4277 def _fake_vsa_update(self, context, vsa_id, values):
4278 LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\
4279 "values=%(values)s"), locals())
4280- pass
4281
4282 def _fake_volume_create(self, context, options):
4283 LOG.debug(_("Test: Volume create: %s"), options)
4284@@ -196,7 +199,6 @@
4285 "values=%(values)s"), locals())
4286 global scheduled_volume
4287 scheduled_volume = {'id': volume_id, 'host': values['host']}
4288- pass
4289
4290 def _fake_service_get_by_args(self, context, host, binary):
4291 return "service"
4292@@ -209,7 +211,6 @@
4293
4294 def setUp(self, sched_class=None):
4295 super(VsaSchedulerTestCase, self).setUp()
4296- self.stubs = stubout.StubOutForTesting()
4297 self.context = context.get_admin_context()
4298
4299 if sched_class is None:
4300@@ -220,6 +221,7 @@
4301 self.host_num = 10
4302 self.drive_type_num = 5
4303
4304+ self.stubs.Set(rpc, 'cast', fake_rpc_cast)
4305 self.stubs.Set(self.sched,
4306 '_get_service_states', self._fake_get_service_states)
4307 self.stubs.Set(self.sched,
4308@@ -234,8 +236,6 @@
4309 def tearDown(self):
4310 for name in self.created_types_lst:
4311 volume_types.purge(self.context, name)
4312-
4313- self.stubs.UnsetAll()
4314 super(VsaSchedulerTestCase, self).tearDown()
4315
4316 def test_vsa_sched_create_volumes_simple(self):
4317@@ -333,6 +333,8 @@
4318 self.stubs.Set(self.sched,
4319 '_get_service_states', self._fake_get_service_states)
4320 self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create)
4321+ self.stubs.Set(nova.db, 'volume_update', self._fake_volume_update)
4322+ self.stubs.Set(rpc, 'cast', fake_rpc_cast)
4323
4324 self.sched.schedule_create_volumes(self.context,
4325 request_spec,
4326@@ -467,10 +469,9 @@
4327 self.stubs.Set(self.sched,
4328 'service_is_up', self._fake_service_is_up_True)
4329
4330- host = self.sched.schedule_create_volume(self.context,
4331- 123, availability_zone=None)
4332+ self.sched.schedule_create_volume(self.context,
4333+ 123, availability_zone=None)
4334
4335- self.assertEqual(host, 'host_3')
4336 self.assertEqual(scheduled_volume['id'], 123)
4337 self.assertEqual(scheduled_volume['host'], 'host_3')
4338
4339@@ -514,10 +515,9 @@
4340 global_volume['volume_type_id'] = volume_type['id']
4341 global_volume['size'] = 0
4342
4343- host = self.sched.schedule_create_volume(self.context,
4344- 123, availability_zone=None)
4345+ self.sched.schedule_create_volume(self.context,
4346+ 123, availability_zone=None)
4347
4348- self.assertEqual(host, 'host_2')
4349 self.assertEqual(scheduled_volume['id'], 123)
4350 self.assertEqual(scheduled_volume['host'], 'host_2')
4351
4352@@ -529,7 +529,6 @@
4353 FakeVsaMostAvailCapacityScheduler())
4354
4355 def tearDown(self):
4356- self.stubs.UnsetAll()
4357 super(VsaSchedulerTestCaseMostAvail, self).tearDown()
4358
4359 def test_vsa_sched_create_single_volume(self):
4360@@ -558,10 +557,9 @@
4361 global_volume['volume_type_id'] = volume_type['id']
4362 global_volume['size'] = 0
4363
4364- host = self.sched.schedule_create_volume(self.context,
4365- 123, availability_zone=None)
4366+ self.sched.schedule_create_volume(self.context,
4367+ 123, availability_zone=None)
4368
4369- self.assertEqual(host, 'host_9')
4370 self.assertEqual(scheduled_volume['id'], 123)
4371 self.assertEqual(scheduled_volume['host'], 'host_9')
4372
4373
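
The test_vsa_scheduler.py changes track the same driver refactor: schedule_create_volume() no longer returns the chosen host for the manager to cast to, so the tests stub rpc.cast (and nova.db.volume_update) and assert against the recorded scheduled_volume rather than a return value. A minimal sketch of the cast helper this branch adds to nova/scheduler/driver.py, under the assumption that it updates the volume row before casting (details may differ):

    # Sketch, not the verbatim implementation from this branch.
    from nova import db
    from nova import rpc
    from nova import utils


    def cast_to_volume_host(context, host, method, update_db=True, **kwargs):
        """Point the volume at its host, then cast the request to it."""
        if update_db:
            volume_id = kwargs.get('volume_id')
            if volume_id is not None:
                db.volume_update(context, volume_id,
                                 {'host': host,
                                  'scheduled_at': utils.utcnow()})
        rpc.cast(context,
                 db.queue_get_for(context, 'volume', host),
                 {'method': method, 'args': kwargs})
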
4374=== modified file 'nova/tests/test_compute.py'
4375--- nova/tests/test_compute.py 2011-09-21 20:59:40 +0000
4376+++ nova/tests/test_compute.py 2011-09-23 07:08:19 +0000
4377@@ -26,6 +26,7 @@
4378 from nova import exception
4379 from nova import flags
4380 from nova import log as logging
4381+from nova.scheduler import driver as scheduler_driver
4382 from nova import rpc
4383 from nova import test
4384 from nova import utils
4385@@ -73,10 +74,42 @@
4386 self.context = context.RequestContext(self.user_id, self.project_id)
4387 test_notifier.NOTIFICATIONS = []
4388
4389+ orig_rpc_call = rpc.call
4390+ orig_rpc_cast = rpc.cast
4391+
4392+ def rpc_call_wrapper(context, topic, msg, do_cast=True):
4393+ """Stub out the scheduler creating the instance entry"""
4394+ if topic == FLAGS.scheduler_topic and \
4395+ msg['method'] == 'run_instance':
4396+ request_spec = msg['args']['request_spec']
4397+ scheduler = scheduler_driver.Scheduler
4398+ num_instances = request_spec.get('num_instances', 1)
4399+ instances = []
4400+ for x in xrange(num_instances):
4401+ instance = scheduler().create_instance_db_entry(
4402+ context,
4403+ request_spec)
4404+ encoded = scheduler_driver.encode_instance(instance)
4405+ instances.append(encoded)
4406+ return instances
4407+ else:
4408+ if do_cast:
4409+ orig_rpc_cast(context, topic, msg)
4410+ else:
4411+ return orig_rpc_call(context, topic, msg)
4412+
4413+ def rpc_cast_wrapper(context, topic, msg):
4414+ """Stub out the scheduler creating the instance entry in
4415+ the reservation_id case.
4416+ """
4417+ rpc_call_wrapper(context, topic, msg, do_cast=True)
4418+
4419 def fake_show(meh, context, id):
4420 return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
4421
4422 self.stubs.Set(fake_image._FakeImageService, 'show', fake_show)
4423+ self.stubs.Set(rpc, 'call', rpc_call_wrapper)
4424+ self.stubs.Set(rpc, 'cast', rpc_cast_wrapper)
4425
4426 def _create_instance(self, params=None):
4427 """Create a test instance"""
4428@@ -139,7 +172,7 @@
4429 """Verify that an instance cannot be created without a display_name."""
4430 cases = [dict(), dict(display_name=None)]
4431 for instance in cases:
4432- ref = self.compute_api.create(self.context,
4433+ (ref, resv_id) = self.compute_api.create(self.context,
4434 instance_types.get_default_instance_type(), None, **instance)
4435 try:
4436 self.assertNotEqual(ref[0]['display_name'], None)
4437@@ -149,7 +182,7 @@
4438 def test_create_instance_associates_security_groups(self):
4439 """Make sure create associates security groups"""
4440 group = self._create_group()
4441- ref = self.compute_api.create(
4442+ (ref, resv_id) = self.compute_api.create(
4443 self.context,
4444 instance_type=instance_types.get_default_instance_type(),
4445 image_href=None,
4446@@ -209,7 +242,7 @@
4447 ('<}\x1fh\x10e\x08l\x02l\x05o\x12!{>', 'hello'),
4448 ('hello_server', 'hello-server')]
4449 for display_name, hostname in cases:
4450- ref = self.compute_api.create(self.context,
4451+ (ref, resv_id) = self.compute_api.create(self.context,
4452 instance_types.get_default_instance_type(), None,
4453 display_name=display_name)
4454 try:
4455@@ -221,7 +254,7 @@
4456 """Make sure destroying disassociates security groups"""
4457 group = self._create_group()
4458
4459- ref = self.compute_api.create(
4460+ (ref, resv_id) = self.compute_api.create(
4461 self.context,
4462 instance_type=instance_types.get_default_instance_type(),
4463 image_href=None,
4464@@ -237,7 +270,7 @@
4465 """Make sure destroying security groups disassociates instances"""
4466 group = self._create_group()
4467
4468- ref = self.compute_api.create(
4469+ (ref, resv_id) = self.compute_api.create(
4470 self.context,
4471 instance_type=instance_types.get_default_instance_type(),
4472 image_href=None,
4473@@ -1394,3 +1427,81 @@
4474 self.assertEqual(self.compute_api._volume_size(inst_type,
4475 'swap'),
4476 swap_size)
4477+
4478+ def test_reservation_id_one_instance(self):
4479+ """Verify building an instance has a reservation_id that
4480+ matches the return value from create"""
4481+ (refs, resv_id) = self.compute_api.create(self.context,
4482+ instance_types.get_default_instance_type(), None)
4483+ try:
4484+ self.assertEqual(len(refs), 1)
4485+ self.assertEqual(refs[0]['reservation_id'], resv_id)
4486+ finally:
4487+ db.instance_destroy(self.context, refs[0]['id'])
4488+
4489+ def test_reservation_ids_two_instances(self):
4490+ """Verify building 2 instances at once results in a
4491+ reservation_id being returned equal to the reservation_id
4492+ set on both instances
4493+ """
4494+ (refs, resv_id) = self.compute_api.create(self.context,
4495+ instance_types.get_default_instance_type(), None,
4496+ min_count=2, max_count=2)
4497+ try:
4498+ self.assertEqual(len(refs), 2)
4499+ self.assertNotEqual(resv_id, None)
4500+ finally:
4501+ for instance in refs:
4502+ self.assertEqual(instance['reservation_id'], resv_id)
4503+ db.instance_destroy(self.context, instance['id'])
4504+
4505+ def test_reservation_ids_two_instances_no_wait(self):
4506+ """Verify building 2 instances at once without waiting for
4507+ instance IDs results in a reservation_id being returned equal
4508+ to the reservation_id set on both instances
4509+ """
4510+ (refs, resv_id) = self.compute_api.create(self.context,
4511+ instance_types.get_default_instance_type(), None,
4512+ min_count=2, max_count=2, wait_for_instances=False)
4513+ try:
4514+ self.assertEqual(refs, None)
4515+ self.assertNotEqual(resv_id, None)
4516+ finally:
4517+ instances = self.compute_api.get_all(self.context,
4518+ search_opts={'reservation_id': resv_id})
4519+ self.assertEqual(len(instances), 2)
4520+ for instance in instances:
4521+ self.assertEqual(instance['reservation_id'], resv_id)
4522+ db.instance_destroy(self.context, instance['id'])
4523+
4524+ def test_create_with_specified_reservation_id(self):
4525+ """Verify building instances with a specified
4526+ reservation_id results in the correct reservation_id
4527+ being set
4528+ """
4529+
4530+ # We need admin context to be able to specify our own
4531+ # reservation_ids.
4532+ context = self.context.elevated()
4533+ # 1 instance
4534+ (refs, resv_id) = self.compute_api.create(context,
4535+ instance_types.get_default_instance_type(), None,
4536+ min_count=1, max_count=1, reservation_id='meow')
4537+ try:
4538+ self.assertEqual(len(refs), 1)
4539+ self.assertEqual(resv_id, 'meow')
4540+ finally:
4541+ self.assertEqual(refs[0]['reservation_id'], resv_id)
4542+ db.instance_destroy(self.context, refs[0]['id'])
4543+
4544+ # 2 instances
4545+ (refs, resv_id) = self.compute_api.create(context,
4546+ instance_types.get_default_instance_type(), None,
4547+ min_count=2, max_count=2, reservation_id='woof')
4548+ try:
4549+ self.assertEqual(len(refs), 2)
4550+ self.assertEqual(resv_id, 'woof')
4551+ finally:
4552+ for instance in refs:
4553+ self.assertEqual(instance['reservation_id'], resv_id)
4554+ db.instance_destroy(self.context, instance['id'])
4555
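
Taken together, the test_compute.py changes pin down the new compute API contract: create() now returns a tuple of (instance refs, reservation id), and the refs are None when wait_for_instances=False because the scheduler request is cast rather than called. A usage sketch based on the tests above (compute_api and context stand in for the test fixtures):

    # Blocking build: the scheduler hands back the created instances.
    (refs, resv_id) = compute_api.create(
        context, instance_types.get_default_instance_type(), None,
        min_count=2, max_count=2)
    assert len(refs) == 2
    assert all(ref['reservation_id'] == resv_id for ref in refs)

    # Fire-and-forget build: only the reservation id comes back, and the
    # instances must be found later via a reservation_id search.
    (refs, resv_id) = compute_api.create(
        context, instance_types.get_default_instance_type(), None,
        min_count=2, max_count=2, wait_for_instances=False)
    assert refs is None
    instances = compute_api.get_all(
        context, search_opts={'reservation_id': resv_id})
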
4556=== modified file 'nova/tests/test_quota.py'
4557--- nova/tests/test_quota.py 2011-08-03 19:22:58 +0000
4558+++ nova/tests/test_quota.py 2011-09-23 07:08:19 +0000
4559@@ -21,9 +21,11 @@
4560 from nova import db
4561 from nova import flags
4562 from nova import quota
4563+from nova import rpc
4564 from nova import test
4565 from nova import volume
4566 from nova.compute import instance_types
4567+from nova.scheduler import driver as scheduler_driver
4568
4569
4570 FLAGS = flags.FLAGS
4571@@ -51,6 +53,21 @@
4572 self.context = context.RequestContext(self.user_id,
4573 self.project_id,
4574 True)
4575+ orig_rpc_call = rpc.call
4576+
4577+ def rpc_call_wrapper(context, topic, msg):
4578+ """Stub out the scheduler creating the instance entry"""
4579+ if topic == FLAGS.scheduler_topic and \
4580+ msg['method'] == 'run_instance':
4581+ scheduler = scheduler_driver.Scheduler
4582+ instance = scheduler().create_instance_db_entry(
4583+ context,
4584+ msg['args']['request_spec'])
4585+ return [scheduler_driver.encode_instance(instance)]
4586+ else:
4587+ return orig_rpc_call(context, topic, msg)
4588+
4589+ self.stubs.Set(rpc, 'call', rpc_call_wrapper)
4590
4591 def _create_instance(self, cores=2):
4592 """Create a test instance"""