Merge lp:~cbehrens/nova/lp844160-build-works-with-zones into lp:~hudson-openstack/nova/trunk
Status: Rejected
Rejected by: Chris Behrens
Proposed branch: lp:~cbehrens/nova/lp844160-build-works-with-zones
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4592 lines (+1761/-1238), 33 files modified
  doc/source/devref/distributed_scheduler.rst (+2/-0)
  nova/api/ec2/cloud.py (+6/-4)
  nova/api/openstack/__init__.py (+1/-2)
  nova/api/openstack/contrib/createserverext.py (+1/-2)
  nova/api/openstack/contrib/volumes.py (+2/-41)
  nova/api/openstack/contrib/zones.py (+50/-0)
  nova/api/openstack/create_instance_helper.py (+0/-602)
  nova/api/openstack/servers.py (+581/-30)
  nova/api/openstack/zones.py (+3/-35)
  nova/compute/api.py (+111/-116)
  nova/scheduler/abstract_scheduler.py (+32/-43)
  nova/scheduler/api.py (+2/-2)
  nova/scheduler/chance.py (+23/-2)
  nova/scheduler/driver.py (+106/-9)
  nova/scheduler/least_cost.py (+1/-2)
  nova/scheduler/manager.py (+5/-19)
  nova/scheduler/multi.py (+5/-3)
  nova/scheduler/simple.py (+35/-39)
  nova/scheduler/vsa.py (+13/-20)
  nova/scheduler/zone.py (+23/-5)
  nova/tests/api/openstack/contrib/test_createserverext.py (+8/-4)
  nova/tests/api/openstack/contrib/test_volumes.py (+12/-2)
  nova/tests/api/openstack/test_extensions.py (+1/-0)
  nova/tests/api/openstack/test_server_actions.py (+2/-2)
  nova/tests/api/openstack/test_servers.py (+158/-45)
  nova/tests/integrated/api/client.py (+16/-3)
  nova/tests/integrated/test_servers.py (+36/-0)
  nova/tests/scheduler/test_abstract_scheduler.py (+58/-17)
  nova/tests/scheduler/test_least_cost_scheduler.py (+1/-1)
  nova/tests/scheduler/test_scheduler.py (+320/-167)
  nova/tests/scheduler/test_vsa_scheduler.py (+14/-16)
  nova/tests/test_compute.py (+116/-5)
  nova/tests/test_quota.py (+17/-0)
To merge this branch: bzr merge lp:~cbehrens/nova/lp844160-build-works-with-zones
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Sandy Walsh (community) | Needs Fixing | |
Chris Behrens (community) | Abstain | |
Brian Waldon (community) | Needs Information | |
Review via email: mp+75990@code.launchpad.net
Commit message
Description of the change
This makes the OS API servers controller 'create' work with all schedulers, including the zone aware schedulers (BaseScheduler and subclasses).
Since this means that the zones controller 'boot' method is not needed anymore, it has been removed and create_
The distributed scheduler doc needs to be updated. I only updated it enough to say that some information is stale. If this merges, I'll file a bug to have it updated. I'm not the best person to update/create pretty pictures.
Other side effects of making this work:
1) compute API's create_all_at_once has been removed. It was only used by zone boot.
2) compute API's create() no longer creates Instance DB entries. The schedulers now do this. This makes sense, as only the schedulers will know where the instances will be placed. They could be placed locally or in a child zone. However, this comes at a cost. compute_
3) There's been an undocumented feature in the OS API to allow multiple instances to be built. I've kept it.
4) If compute_
5) I've added an undocumented option 'return_
6) It was requested I create a stub for a zones extension, so you'll see the empty extension in here. We'll move some code to it later.
7) Fixes an unrelated bug that recently merged into trunk, where zones DB calls were no longer always made with an admin context.
* Case #5 above doesn't wait for the scheduler response with instance IDs. It does a 'cast' instead.
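The cast-versus-call distinction in the last note (fire-and-forget, so no instance IDs come back) can be sketched generically; the names here are illustrative, not Nova's actual rpc API:

```python
import queue
import threading

requests = queue.Queue()

def scheduler_worker():
    # Consumes one scheduling request and "places" the instance.
    msg = requests.get()
    msg["placed"] = True
    requests.task_done()

def cast(msg):
    # Fire-and-forget: enqueue the message and return immediately,
    # without waiting for the scheduler's result.
    requests.put(msg)

threading.Thread(target=scheduler_worker, daemon=True).start()
msg = {"op": "run_instance"}
cast(msg)        # returns right away, before the worker finishes
requests.join()  # only this demo waits for completion
```

A 'call' would instead block on a reply queue until the scheduler sent back its response.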
Chris Behrens (cbehrens) wrote:
Brian Waldon (bcwaldon) wrote:
I absolutely love the removal of create_
172: Can you expand on this? A note about each of the calls would be nice.
1198: Can the precooked stuff go away now? Just not sure how that fits in
1294: I don't think this is used anywhere. Can you remove it?
Chris Behrens (cbehrens) wrote:
172: Expanded
1294: Good catch... it's removed now.
As far as 1198: It can't really go away until we change how we talk to child zones. :-/ Sandy, pvo, and myself have been talking about some things related to zones that could make it go away.
Now.. I think a branch of Sandy's landed, and I probably have conflicts to resolve.
Chris Behrens (cbehrens) wrote:
Resolved conflicts with trunk. Time to open this up for review.
Chris Behrens (cbehrens):
Chris Behrens (cbehrens) wrote:
merged trunk.
Rick Harris (rconradharris) wrote:
This looks really good, love the refactoring and cleanups. Making a quick
first-pass with some notes; will plan on digging in for a more thorough
review tomorrow.
> 2160 + # TODO(comstud): I would love to be able to return the full
> 2161 + # instance information here, but unfortunately passing things
> 2162 + # like 'datetime' back through rabbit don't work due to having
> 2163 + # to json encode/decode.
Not really necessary for this patch, but for future work, we can set a
default handler so that the `json` module will encode datetimes in
iso8601 format[1].
[1]http://
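Rick's suggestion can be sketched as follows; the handler name is illustrative, not Nova's actual code:

```python
import datetime
import json

def default_encoder(obj):
    # Fall back to ISO 8601 strings for datetimes, which the json
    # module cannot serialize natively.
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError("%r is not JSON serializable" % obj)

payload = {"created_at": datetime.datetime(2011, 9, 23, 7, 8, 19)}
encoded = json.dumps(payload, default=default_encoder)
```

With this in place, anything carrying datetime fields can go through json encode/decode without a pre-conversion step.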
> 2294 + zone, _x, host = availability_
Could use split here and avoid the throwaway variable:
zone, host = availability_
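The quoted lines are truncated, but the pattern under discussion, a three-value unpack from `str.partition` versus a two-value unpack from `str.split`, works like this (sample value is illustrative):

```python
spec = "zone1:host3"

# partition() always returns a 3-tuple, forcing a throwaway variable
# for the separator.
zone, _x, host = spec.partition(':')

# split() with maxsplit=1 avoids the throwaway name and behaves the
# same for values containing extra colons (remainder stays in host).
zone2, host2 = spec.split(':', 1)
```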
> 859 + expl = _("Personality file limit exceeded")
> 860 + raise exc.HTTPRequest
> 861 + headers=
`expl` is defined but never used. This patch just moved these lines around,
highlighting the issue. If s/error.
just make the change in this patch, but if we need to consult the original
author we could make a separate bug for it.
On that note, the code could be DRY'd up as well:
code = error.code
if code == "OnsetFileLimit
expl = _("Personality file limit exceeded")
elif code == "OnsetFilePathL
expl = _("Personality file path too long")
elif code == "OnsetFileConte
expl = _("Personality file content too long")
elif code == "InstanceLimitE
expl = _("Instance quotas have been exceeded")
else:
expl = None
if expl:
raise exc.HTTPRequest
else:
# if the original error is okay, just reraise it
raise error
> 2167 + if local is True:
Per PEP8, preferred is:
if local:
> 2388 + instances = instances.
Looks like this should be:
instances.
Or maybe even:
encoded_
instances.
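The snippet above is truncated, but the likely bug pattern, assigning the result of `list.append` back to a name, can be illustrated (variable names are hypothetical):

```python
instances = []

# Buggy pattern: list.append() mutates in place and returns None, so
# writing `instances = instances.append(...)` would rebind the name
# to None and silently lose the list.
result = instances.append({"id": 1})
assert result is None

# Correct: call append() for its side effect only.
instances.append({"id": 2})
```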
Chris Behrens (cbehrens) wrote:
> This looks really good, love the refactoring and cleanups. Making a quick
> first-pass with some notes; will plan on a digging in for a more thorough
> review tomorrow.
Great, thanks!
>
>
> > 2160 + # TODO(comstud): I would love to be able to return the full
> > 2161 + # instance information here, but unfortunately passing things
> > 2162 + # like 'datetime' back through rabbit don't work due to
> having
> > 2163 + # to json encode/decode.
>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>
> [1]http://
> javascript
Great...good to know. I hadn't bothered looking for a solution yet... really there's all sorts of cases where we should be doing this and it's a discussion blamar proposed for the summit.
>
> > 2294 + zone, _x, host = availability_
>
> Could use split here and avoid the throwaway variable:
>
> zone, host = availability_
>
> > 859 + expl = _("Personality file limit exceeded")
> > 860 + raise
> exc.HTTPRequest
> > 861 + headers={'Retry-
> After': 0})
>
> `expl` is defined but never used. This patch just moved these lines around,
> highlighting the issue. If s/error.
> just make the change in this patch, but if we need to consult the original
> author we could make a separate bug for it.
Yeah, the other one above was just moving lines around also. I'll take a look at these, though!
[...]
> > 2167 + if local is True:
>
> Per PEP8, preferred is:
>
> if local:
I'll fix that. I think that's a habit from doing "if blah is None" to distinguish keyword arguments with a None default from an empty list or dict being passed.. if you know what I mean.
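The habit Chris describes matters for default arguments: `is None` distinguishes an omitted argument from an explicitly passed empty container, which a bare truthiness check cannot. A minimal sketch:

```python
def build(names=None):
    # `is None` means "argument omitted"; a caller-supplied empty
    # list is falsy but must NOT trigger the default.
    if names is None:
        names = ["default"]
    return names
```

For plain booleans, though, PEP 8's bare `if local:` is the right form, since there is no None/empty distinction to preserve.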
> > 2388 + instances =
> instances.
>
> Looks like this should be:
>
> instances.
Shoot, yes. I copy-pasted that broken line around a number of times and thought I went back and fixed all instances of it. Good catch.
Chris Behrens (cbehrens) wrote:
Rick: I updated that comment regarding the datetime encoding. I think I hit all of your other issues so far. I've also added a comment in compute_
It does appear the QuotaError stuff should have raised with those unused variables. I've gone ahead and made use of them, and cleaned it all up into a mapping table. I think that's cleaner than all of the 'if' stuff. In researching this, I find the QuotaError exception stuff could really use a re-factor (its class and the raises done in compute/api). I'll probably file a bug to clean it up after this merges.
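A mapping-table version of the if/elif chain might look like the following; the real Nova error codes are truncated in the quoted snippet above, so these codes, messages, and the function name are illustrative only:

```python
# Hypothetical quota error codes mapped to user-facing explanations.
_QUOTA_EXPLANATIONS = {
    "PersonalityFileLimit": "Personality file limit exceeded",
    "PersonalityFilePath": "Personality file path too long",
    "InstanceLimit": "Instance quotas have been exceeded",
}

def quota_explanation(error_code):
    # Unknown codes yield None, signalling the caller to re-raise
    # the original error instead of wrapping it.
    return _QUOTA_EXPLANATIONS.get(error_code)
```

The table keeps each code/message pair on one line and makes adding a new quota error a one-line change.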
- 1593. By Chris Behrens: typo
Chris Behrens (cbehrens) wrote:
Ok ready. Note: if running tests, just running api/openstack/
Vish Ishaya (vishvananda) wrote:
On Sep 21, 2011, at 9:03 PM, Rick Harris wrote:
>
> Not really necessary for this patch, but for future work, we can set a
> default handler so that the `json` module will encode datetimes in
> iso8601 format[1].
>
I wrote some code long ago in my branches to get rid of using the db to pass information back and forth that handled this. It even managed the conversion back on updates:
basically it is just adding a datetime check to utils.to_primitive and converting using utils.strtime(). We can use sqlalchemy to parse back into the required formats on update with something like:
=== modified file 'nova/db/
--- nova/db/
+++ nova/db/
@@ -27,12 +27,14 @@
from sqlalchemy.exc import IntegrityError
from sqlalchemy.
from sqlalchemy.schema import ForeignKeyConst
+from sqlalchemy.types import DateTime as DTType
from nova.db.
from nova import auth
from nova import exception
from nova import flags
+from nova import utils
FLAGS = flags.FLAGS
@@ -90,11 +92,15 @@
return n, getattr(self, n)
def update(self, values):
- """Make the model object behave like a dict"""
- columns = dict(object_
+ """Make the model object behave like a dict and convert datetimes."""
+ columns = object_
for key, value in values.iteritems():
# NOTE(vish): don't update the 'name' property
- if key in columns:
+ if key != 'name' or key in columns:
+ if (key in columns and
+ isinstance(value, basestring) and
+ isinstance(
+ value = utils.parse_
def iteritems(self):
I was able to pass entire refs through the queue using this and update them on the other end.
Vish
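The round-trip Vish describes (serialize datetimes to strings before the queue, parse them back on update) can be sketched generically; the format string is illustrative, not necessarily what Nova's `utils.strtime()` uses:

```python
import datetime

TIME_FORMAT = "%Y-%m-%dT%H:%M:%S"  # illustrative format

def to_primitive(value):
    # Outbound: datetimes become strings safe for json/rabbit.
    if isinstance(value, datetime.datetime):
        return value.strftime(TIME_FORMAT)
    return value

def parse_back(value):
    # Inbound: a column known to be a DateTime gets parsed back.
    return datetime.datetime.strptime(value, TIME_FORMAT)

now = datetime.datetime(2011, 9, 21, 9, 3, 0)
```

The diff above applies `parse_back` only when the target column is a DateTime type and the incoming value is a string, so plain updates are untouched.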
Sandy Walsh (sandy-walsh) wrote:
Still going through it, but getting a test failure: http://
Stay tuned ...
Sandy Walsh (sandy-walsh) wrote:
First off, I love the fact that you're keeping the unit tests as unit tests (and not integration tests) ... makes the review so much easier to follow.
I guess we really need to update the docs shortly after this lands.
Regarding the precooked stuff, I wonder if we could just assume all results are raw and strip out any potentially offending data regardless? Just be a little more forgiving if they don't exist.
825 +from nova.rpc.common import RemoteError
import module not class
1697 + instances = self._schedule_
I thought that method returned a tuple? Is this correct?
1950 + instance = self.create_
create_
2032 # Return instance as a list and make sure the caller doesn't
2033 + # cast to a compute node.
... not sure what the comment is trying to tell me.
2053 - # Returning None short-circuits the routing to Compute (since
... is this comment not appropriate anymore? I think some explanation of the None return is required (somewhere).
2215 + # Should only be None for tests?
2216 + if filename is not None:
Then this logic should be broken out into a separate function and stubbed in the test. Test case code shouldn't be in production code.
2256 + if isinstance(ret_val, tuple):
ah ha ... there it is. Can we unify these results to always be a tuple? Or I think we'd need a test for each condition (unless I missed something there)?
All in all ... great changes Chris! Nice to see that zone-boot and inheritance mess go away!
Chris Behrens (cbehrens) wrote:
Vish:
[...]
> I wrote some code long ago in my branches to get rid of using the db to pass
> information back and forth that handled this. It even managed the conversion
> back on updates:
[...]
> I was able to pass entire refs through the queue using this and update them on
> the other end.
Very cool. I'll probably look at incorporating this as a next step. This diff is already large enough due to moving code around. There are a lot more areas where we should be doing this, and it's a point of discussion that blamar suggested for the summit, also.
Chris Behrens (cbehrens) wrote:
> Still going through it, but getting a test failure:
> http://
>
> Stay tuned ...
So, I run into that now and then as well.. and it actually appears to be a kombu memory transport bug. Generally if you run into it, a 2nd run of the tests will pass. I think we're going to need to go back to using our own 'fakerabbit' type backend for kombu... or just aggressively try to get these fixed in kombu.
Chris Behrens (cbehrens) wrote:
> First off, I love that fact that you're keeping the unit tests as unit tests
> (and not integration tests) ... makes the review so much easier to follow.
Yup.. something I think about when coding tests, although there are a lot of cases where unit tests are currently more like integration tests.
>
> I guess we really need update the docs shortly after this lands.
Yeah. I'd like to update it more myself, but I'd prefer to spend time on it after we get this merged.. Since we're very early in essex, I think this is okay. We can file a bug after this merges.
>
> Regarding the precooked stuff, I wonder if we could just assume all results
> are raw and strip out any potentially offending data regardless? Just be a
> little more forgiving if they don't exist.
>
> 825 +from nova.rpc.common import RemoteError
>
> import module not class
Copy/paste thing, but I agree. I'll update it.
>
> 1697 + instances = self._schedule_
> I thought that method returned a tuple? Is this correct?
I think you caught this below. The schedulers' methods do return tuples so that the manager can get a 'response to return' and a 'host to schedule on'. But the manager does really only return the 'response' portion. I'll update/add comments in the schedulers/manager.
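The contract Chris describes can be sketched with hypothetical stand-ins (these are not Nova's actual signatures): scheduler methods return a `(response, host)` tuple, and the manager casts to the host but hands only the response portion back to its caller.

```python
def scheduler_run_instance(request_spec):
    # Placement decision is stubbed; a real scheduler would weigh hosts.
    host = "compute1"
    response = {"instance_ids": [1, 2]}
    return response, host

def manager_schedule(request_spec):
    response, host = scheduler_run_instance(request_spec)
    # ... here the manager would cast the build to `host` ...
    return response  # caller never sees the host portion
```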
>
>
> 1950 + instance = self.create_
> request_spec)
>
> create_
> qualifier or remove the @staticmethod decorator, not self.
I have the same thing for 'encode_instance', etc. I put them as static methods because they don't use 'self'... but it's a bit more clean in the code to be able to call them by self.*. Is that a huge no-no? If so, I think I lean towards removing the decorator even though they don't use any instance data.
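Calling a `@staticmethod` through `self` is standard Python behavior, so the pattern Chris describes is legal; class and method names here are illustrative:

```python
class Scheduler:
    @staticmethod
    def encode_instance(instance):
        # Uses no instance or class state; purely transforms its input.
        return {"id": instance["id"]}

    def schedule(self, instance):
        # Static methods resolve through the instance as well as the
        # class, so self.encode_instance(...) works.
        return self.encode_instance(instance)

s = Scheduler()
```

Whether that reads as a "no-no" is a style call; the decorator at least documents that the method never touches `self`.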
>
> 2032 # Return instance as a list and make sure the caller doesn't
> 2033 + # cast to a compute node.
>
> ... not sure what the comment is trying to tell me.
Goes along with the tuple comment above. I'll update the comment as mentioned above.
>
> 2053 - # Returning None short-circuits the routing to Compute (since
>
> ... is this comment not appropriate anymore? I think some explanation of the
> None return is required (somewhere).
That comment attempts to explain why None is required, but I guess it's not descriptive enough. :) It also goes along with the other comments above I'll update.
>
> 2215 + # Should only be None for tests?
> 2216 + if filename is not None:
> Then this logic should be broken out into a separate function and stubbed in
> the test. Test case code shouldn't be in production code.
I'll do more investigation on this. I ran into a test failure where 'filename' was not defined.
>
> 2256 + if isinstance(ret_val, tuple):
> ah ha ... there it is. Can we unify these results to always be a tuple? Or I
> think we'd need a test for each condition (unless I missed something there)?
I could update all scheduler methods to return a tuple, yes, and I thought about doing this, although it's only run_instance that needs to return a response. For...
- 1594. By Chris Behrens: Clean up the return values from all schedule* calls, making all schedule* calls do their own casts. Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.
- 1595. By Chris Behrens: test fixes plus bugs/typos they uncovered. still needs more test fixes
- 1596. By Chris Behrens: fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context
- 1597. By Chris Behrens: fix pep8 issue
- 1598. By Chris Behrens: chance scheduler bug uncovered with tests
- 1599. By Chris Behrens: vsa scheduler test fixes
- 1600. By Chris Behrens: more test fixes
Chris Behrens (cbehrens) wrote:
Moving to git.
Unmerged revisions
- 1600. By Chris Behrens: more test fixes
- 1599. By Chris Behrens: vsa scheduler test fixes
- 1598. By Chris Behrens: chance scheduler bug uncovered with tests
- 1597. By Chris Behrens: fix pep8 issue
- 1596. By Chris Behrens: fix abstract scheduler tests.. and bugs they found. added test for run_instance and checking a DB call is made with admin context
- 1595. By Chris Behrens: test fixes plus bugs/typos they uncovered. still needs more test fixes
- 1594. By Chris Behrens: Clean up the return values from all schedule* calls, making all schedule* calls do their own casts. Creating convenience calls for the above results in 'scheduled_at' being updated in a single place for both instances and volumes now.
- 1593. By Chris Behrens: typo
- 1592. By Chris Behrens: broken indent
- 1591. By Chris Behrens: revert the kludge for reclaim_instance_interval since tests pass when all of them are run. I don't want to have a conflict with a fix from johannes
Preview Diff
1 | === modified file 'doc/source/devref/distributed_scheduler.rst' | |||
2 | --- doc/source/devref/distributed_scheduler.rst 2011-08-18 19:39:25 +0000 | |||
3 | +++ doc/source/devref/distributed_scheduler.rst 2011-09-23 07:08:19 +0000 | |||
4 | @@ -77,6 +77,8 @@ | |||
5 | 77 | 77 | ||
6 | 78 | Requesting a new instance | 78 | Requesting a new instance |
7 | 79 | ------------------------- | 79 | ------------------------- |
8 | 80 | (Note: The information below is out of date, as the `nova.compute.api.create_all_at_once()` functionality has merged into `nova.compute.api.create()` and the non-zone aware schedulers have been updated.) | ||
9 | 81 | |||
10 | 80 | Prior to the `BaseScheduler`, to request a new instance, a call was made to `nova.compute.api.create()`. The type of instance created depended on the value of the `InstanceType` record being passed in. The `InstanceType` determined the amount of disk, CPU, RAM and network required for the instance. Administrators can add new `InstanceType` records to suit their needs. For more complicated instance requests we need to go beyond the default fields in the `InstanceType` table. | 82 | Prior to the `BaseScheduler`, to request a new instance, a call was made to `nova.compute.api.create()`. The type of instance created depended on the value of the `InstanceType` record being passed in. The `InstanceType` determined the amount of disk, CPU, RAM and network required for the instance. Administrators can add new `InstanceType` records to suit their needs. For more complicated instance requests we need to go beyond the default fields in the `InstanceType` table. |
11 | 81 | 83 | ||
12 | 82 | `nova.compute.api.create()` performed the following actions: | 84 | `nova.compute.api.create()` performed the following actions: |
13 | 83 | 85 | ||
14 | === modified file 'nova/api/ec2/cloud.py' | |||
15 | --- nova/api/ec2/cloud.py 2011-09-21 15:54:30 +0000 | |||
16 | +++ nova/api/ec2/cloud.py 2011-09-23 07:08:19 +0000 | |||
17 | @@ -1384,7 +1384,7 @@ | |||
18 | 1384 | if image_state != 'available': | 1384 | if image_state != 'available': |
19 | 1385 | raise exception.ApiError(_('Image must be available')) | 1385 | raise exception.ApiError(_('Image must be available')) |
20 | 1386 | 1386 | ||
22 | 1387 | instances = self.compute_api.create(context, | 1387 | (instances, resv_id) = self.compute_api.create(context, |
23 | 1388 | instance_type=instance_types.get_instance_type_by_name( | 1388 | instance_type=instance_types.get_instance_type_by_name( |
24 | 1389 | kwargs.get('instance_type', None)), | 1389 | kwargs.get('instance_type', None)), |
25 | 1390 | image_href=self._get_image(context, kwargs['image_id'])['id'], | 1390 | image_href=self._get_image(context, kwargs['image_id'])['id'], |
26 | @@ -1399,9 +1399,11 @@ | |||
27 | 1399 | security_group=kwargs.get('security_group'), | 1399 | security_group=kwargs.get('security_group'), |
28 | 1400 | availability_zone=kwargs.get('placement', {}).get( | 1400 | availability_zone=kwargs.get('placement', {}).get( |
29 | 1401 | 'AvailabilityZone'), | 1401 | 'AvailabilityZone'), |
33 | 1402 | block_device_mapping=kwargs.get('block_device_mapping', {})) | 1402 | block_device_mapping=kwargs.get('block_device_mapping', {}), |
34 | 1403 | return self._format_run_instances(context, | 1403 | # NOTE(comstud): Unfortunately, EC2 requires that the |
35 | 1404 | reservation_id=instances[0]['reservation_id']) | 1404 | # instance DB entries have been created.. |
36 | 1405 | wait_for_instances=True) | ||
37 | 1406 | return self._format_run_instances(context, resv_id) | ||
38 | 1405 | 1407 | ||
39 | 1406 | def _do_instance(self, action, context, ec2_id): | 1408 | def _do_instance(self, action, context, ec2_id): |
40 | 1407 | instance_id = ec2utils.ec2_id_to_id(ec2_id) | 1409 | instance_id = ec2utils.ec2_id_to_id(ec2_id) |
41 | 1408 | 1410 | ||
42 | === modified file 'nova/api/openstack/__init__.py' | |||
43 | --- nova/api/openstack/__init__.py 2011-08-15 13:35:44 +0000 | |||
44 | +++ nova/api/openstack/__init__.py 2011-09-23 07:08:19 +0000 | |||
45 | @@ -139,8 +139,7 @@ | |||
46 | 139 | controller=zones.create_resource(version), | 139 | controller=zones.create_resource(version), |
47 | 140 | collection={'detail': 'GET', | 140 | collection={'detail': 'GET', |
48 | 141 | 'info': 'GET', | 141 | 'info': 'GET', |
51 | 142 | 'select': 'POST', | 142 | 'select': 'POST'}) |
50 | 143 | 'boot': 'POST'}) | ||
52 | 144 | 143 | ||
53 | 145 | mapper.connect("versions", "/", | 144 | mapper.connect("versions", "/", |
54 | 146 | controller=versions.create_resource(version), | 145 | controller=versions.create_resource(version), |
55 | 147 | 146 | ||
56 | === modified file 'nova/api/openstack/contrib/createserverext.py' | |||
57 | --- nova/api/openstack/contrib/createserverext.py 2011-09-02 18:00:33 +0000 | |||
58 | +++ nova/api/openstack/contrib/createserverext.py 2011-09-23 07:08:19 +0000 | |||
59 | @@ -15,7 +15,6 @@ | |||
60 | 15 | # under the License | 15 | # under the License |
61 | 16 | 16 | ||
62 | 17 | from nova import utils | 17 | from nova import utils |
63 | 18 | from nova.api.openstack import create_instance_helper as helper | ||
64 | 19 | from nova.api.openstack import extensions | 18 | from nova.api.openstack import extensions |
65 | 20 | from nova.api.openstack import servers | 19 | from nova.api.openstack import servers |
66 | 21 | from nova.api.openstack import wsgi | 20 | from nova.api.openstack import wsgi |
67 | @@ -66,7 +65,7 @@ | |||
68 | 66 | } | 65 | } |
69 | 67 | 66 | ||
70 | 68 | body_deserializers = { | 67 | body_deserializers = { |
72 | 69 | 'application/xml': helper.ServerXMLDeserializerV11(), | 68 | 'application/xml': servers.ServerXMLDeserializerV11(), |
73 | 70 | } | 69 | } |
74 | 71 | 70 | ||
75 | 72 | serializer = wsgi.ResponseSerializer(body_serializers, | 71 | serializer = wsgi.ResponseSerializer(body_serializers, |
76 | 73 | 72 | ||
77 | === modified file 'nova/api/openstack/contrib/volumes.py' | |||
78 | --- nova/api/openstack/contrib/volumes.py 2011-09-14 19:33:51 +0000 | |||
79 | +++ nova/api/openstack/contrib/volumes.py 2011-09-23 07:08:19 +0000 | |||
80 | @@ -334,47 +334,8 @@ | |||
81 | 334 | class BootFromVolumeController(servers.ControllerV11): | 334 | class BootFromVolumeController(servers.ControllerV11): |
82 | 335 | """The boot from volume API controller for the Openstack API.""" | 335 | """The boot from volume API controller for the Openstack API.""" |
83 | 336 | 336 | ||
125 | 337 | def _create_instance(self, context, instance_type, image_href, **kwargs): | 337 | def _get_block_device_mapping(self, data): |
126 | 338 | try: | 338 | return data.get('block_device_mapping') |
86 | 339 | return self.compute_api.create(context, instance_type, | ||
87 | 340 | image_href, **kwargs) | ||
88 | 341 | except quota.QuotaError as error: | ||
89 | 342 | self.helper._handle_quota_error(error) | ||
90 | 343 | except exception.ImageNotFound as error: | ||
91 | 344 | msg = _("Can not find requested image") | ||
92 | 345 | raise faults.Fault(exc.HTTPBadRequest(explanation=msg)) | ||
93 | 346 | |||
94 | 347 | def create(self, req, body): | ||
95 | 348 | """ Creates a new server for a given user """ | ||
96 | 349 | extra_values = None | ||
97 | 350 | try: | ||
98 | 351 | |||
99 | 352 | def get_kwargs(context, instance_type, image_href, **kwargs): | ||
100 | 353 | kwargs['context'] = context | ||
101 | 354 | kwargs['instance_type'] = instance_type | ||
102 | 355 | kwargs['image_href'] = image_href | ||
103 | 356 | return kwargs | ||
104 | 357 | |||
105 | 358 | extra_values, kwargs = self.helper.create_instance(req, body, | ||
106 | 359 | get_kwargs) | ||
107 | 360 | |||
108 | 361 | block_device_mapping = body['server'].get('block_device_mapping') | ||
109 | 362 | kwargs['block_device_mapping'] = block_device_mapping | ||
110 | 363 | |||
111 | 364 | instances = self._create_instance(**kwargs) | ||
112 | 365 | except faults.Fault, f: | ||
113 | 366 | return f | ||
114 | 367 | |||
115 | 368 | # We can only return 1 instance via the API, if we happen to | ||
116 | 369 | # build more than one... instances is a list, so we'll just | ||
117 | 370 | # use the first one.. | ||
118 | 371 | inst = instances[0] | ||
119 | 372 | for key in ['instance_type', 'image_ref']: | ||
120 | 373 | inst[key] = extra_values[key] | ||
121 | 374 | |||
122 | 375 | server = self._build_view(req, inst, is_detail=True) | ||
123 | 376 | server['server']['adminPass'] = extra_values['password'] | ||
124 | 377 | return server | ||
127 | 378 | 339 | ||
128 | 379 | 340 | ||
129 | 380 | class Volumes(extensions.ExtensionDescriptor): | 341 | class Volumes(extensions.ExtensionDescriptor): |
130 | 381 | 342 | ||
131 | === added file 'nova/api/openstack/contrib/zones.py' | |||
132 | --- nova/api/openstack/contrib/zones.py 1970-01-01 00:00:00 +0000 | |||
133 | +++ nova/api/openstack/contrib/zones.py 2011-09-23 07:08:19 +0000 | |||
134 | @@ -0,0 +1,50 @@ | |||
135 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
136 | 2 | |||
137 | 3 | # Copyright 2011 OpenStack LLC. | ||
138 | 4 | # All Rights Reserved. | ||
139 | 5 | # | ||
140 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
141 | 7 | # not use this file except in compliance with the License. You may obtain | ||
142 | 8 | # a copy of the License at | ||
143 | 9 | # | ||
144 | 10 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
145 | 11 | # | ||
146 | 12 | # Unless required by applicable law or agreed to in writing, software | ||
147 | 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
148 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
149 | 15 | # License for the specific language governing permissions and limitations | ||
150 | 16 | # under the License. | ||
151 | 17 | |||
152 | 18 | """The zones extension.""" | ||
153 | 19 | |||
154 | 20 | |||
155 | 21 | from nova import flags | ||
156 | 22 | from nova import log as logging | ||
157 | 23 | from nova.api.openstack import extensions | ||
158 | 24 | |||
159 | 25 | |||
160 | 26 | LOG = logging.getLogger("nova.api.zones") | ||
161 | 27 | FLAGS = flags.FLAGS | ||
162 | 28 | |||
163 | 29 | |||
164 | 30 | class Zones(extensions.ExtensionDescriptor): | ||
165 | 31 | def get_name(self): | ||
166 | 32 | return "Zones" | ||
167 | 33 | |||
168 | 34 | def get_alias(self): | ||
169 | 35 | return "os-zones" | ||
170 | 36 | |||
171 | 37 | def get_description(self): | ||
172 | 38 | return """Enables zones-related functionality such as adding | ||
173 | 39 | child zones, listing child zones, getting the capabilities of the | ||
174 | 40 | local zone, and returning build plans to parent zones' schedulers""" | ||
175 | 41 | |||
176 | 42 | def get_namespace(self): | ||
177 | 43 | return "http://docs.openstack.org/ext/zones/api/v1.1" | ||
178 | 44 | |||
179 | 45 | def get_updated(self): | ||
180 | 46 | return "2011-09-21T00:00:00+00:00" | ||
181 | 47 | |||
182 | 48 | def get_resources(self): | ||
183 | 49 | # Nothing yet. | ||
184 | 50 | return [] | ||
185 | 0 | 51 | ||
186 | === removed file 'nova/api/openstack/create_instance_helper.py' | |||
187 | --- nova/api/openstack/create_instance_helper.py 2011-09-15 14:07:58 +0000 | |||
188 | +++ nova/api/openstack/create_instance_helper.py 1970-01-01 00:00:00 +0000 | |||
189 | @@ -1,602 +0,0 @@ | |||
190 | 1 | # Copyright 2011 OpenStack LLC. | ||
191 | 2 | # Copyright 2011 Piston Cloud Computing, Inc. | ||
192 | 3 | # All Rights Reserved. | ||
193 | 4 | # | ||
194 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
195 | 6 | # not use this file except in compliance with the License. You may obtain | ||
196 | 7 | # a copy of the License at | ||
197 | 8 | # | ||
198 | 9 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
199 | 10 | # | ||
200 | 11 | # Unless required by applicable law or agreed to in writing, software | ||
201 | 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
202 | 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
203 | 14 | # License for the specific language governing permissions and limitations | ||
204 | 15 | # under the License. | ||
205 | 16 | |||
206 | 17 | import base64 | ||
207 | 18 | |||
208 | 19 | from webob import exc | ||
209 | 20 | from xml.dom import minidom | ||
210 | 21 | |||
211 | 22 | from nova import exception | ||
212 | 23 | from nova import flags | ||
213 | 24 | from nova import log as logging | ||
214 | 25 | import nova.image | ||
215 | 26 | from nova import quota | ||
216 | 27 | from nova import utils | ||
217 | 28 | |||
218 | 29 | from nova.compute import instance_types | ||
219 | 30 | from nova.api.openstack import common | ||
220 | 31 | from nova.api.openstack import wsgi | ||
221 | 32 | from nova.rpc.common import RemoteError | ||
222 | 33 | |||
223 | 34 | LOG = logging.getLogger('nova.api.openstack.create_instance_helper') | ||
224 | 35 | FLAGS = flags.FLAGS | ||
225 | 36 | |||
226 | 37 | |||
227 | 38 | class CreateFault(exception.NovaException): | ||
228 | 39 | message = _("Invalid parameters given to create_instance.") | ||
229 | 40 | |||
230 | 41 | def __init__(self, fault): | ||
231 | 42 | self.fault = fault | ||
232 | 43 | super(CreateFault, self).__init__() | ||
233 | 44 | |||
234 | 45 | |||
235 | 46 | class CreateInstanceHelper(object): | ||
236 | 47 | """This is the base class for OS API Controllers that | ||
237 | 48 | are capable of creating instances (currently Servers and Zones). | ||
238 | 49 | |||
239 | 50 | Once we stabilize the Zones portion of the API we may be able | ||
240 | 51 | to move this code back into servers.py | ||
241 | 52 | """ | ||
242 | 53 | |||
243 | 54 | def __init__(self, controller): | ||
244 | 55 | """We need the image service to create an instance.""" | ||
245 | 56 | self.controller = controller | ||
246 | 57 | self._image_service = utils.import_object(FLAGS.image_service) | ||
247 | 58 | super(CreateInstanceHelper, self).__init__() | ||
248 | 59 | |||
249 | 60 | def create_instance(self, req, body, create_method): | ||
250 | 61 | """Creates a new server for the given user. The approach | ||
251 | 62 | used depends on the create_method. For example, the standard | ||
252 | 63 | POST /server call uses compute.api.create(), while | ||
253 | 64 | POST /zones/server uses compute.api.create_all_at_once(). | ||
254 | 65 | |||
255 | 66 | The problem is, both approaches return different values (i.e. | ||
256 | 67 | [instance dicts] vs. reservation_id). So the handling of the | ||
257 | 68 | return type from this method is left to the caller. | ||
258 | 69 | """ | ||
259 | 70 | if not body: | ||
260 | 71 | raise exc.HTTPUnprocessableEntity() | ||
261 | 72 | |||
262 | 73 | if not 'server' in body: | ||
263 | 74 | raise exc.HTTPUnprocessableEntity() | ||
264 | 75 | |||
265 | 76 | context = req.environ['nova.context'] | ||
266 | 77 | server_dict = body['server'] | ||
267 | 78 | password = self.controller._get_server_admin_password(server_dict) | ||
268 | 79 | |||
269 | 80 | if not 'name' in server_dict: | ||
270 | 81 | msg = _("Server name is not defined") | ||
271 | 82 | raise exc.HTTPBadRequest(explanation=msg) | ||
272 | 83 | |||
273 | 84 | name = server_dict['name'] | ||
274 | 85 | self._validate_server_name(name) | ||
275 | 86 | name = name.strip() | ||
276 | 87 | |||
277 | 88 | image_href = self.controller._image_ref_from_req_data(body) | ||
278 | 89 | # If the image href was generated by nova api, strip image_href | ||
279 | 90 | # down to an id and use the default glance connection params | ||
280 | 91 | |||
281 | 92 | if str(image_href).startswith(req.application_url): | ||
282 | 93 | image_href = image_href.split('/').pop() | ||
283 | 94 | try: | ||
284 | 95 | image_service, image_id = nova.image.get_image_service(context, | ||
285 | 96 | image_href) | ||
286 | 97 | kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image( | ||
287 | 98 | req, image_service, image_id) | ||
288 | 99 | images = set([str(x['id']) for x in image_service.index(context)]) | ||
289 | 100 | assert str(image_id) in images | ||
290 | 101 | except Exception, e: | ||
291 | 102 | msg = _("Cannot find requested image %(image_href)s: %(e)s" % | ||
292 | 103 | locals()) | ||
293 | 104 | raise exc.HTTPBadRequest(explanation=msg) | ||
294 | 105 | |||
295 | 106 | personality = server_dict.get('personality') | ||
296 | 107 | config_drive = server_dict.get('config_drive') | ||
297 | 108 | |||
298 | 109 | injected_files = [] | ||
299 | 110 | if personality: | ||
300 | 111 | injected_files = self._get_injected_files(personality) | ||
301 | 112 | |||
302 | 113 | sg_names = [] | ||
303 | 114 | security_groups = server_dict.get('security_groups') | ||
304 | 115 | if security_groups is not None: | ||
305 | 116 | sg_names = [sg['name'] for sg in security_groups if sg.get('name')] | ||
306 | 117 | if not sg_names: | ||
307 | 118 | sg_names.append('default') | ||
308 | 119 | |||
309 | 120 | sg_names = list(set(sg_names)) | ||
310 | 121 | |||
311 | 122 | requested_networks = server_dict.get('networks') | ||
312 | 123 | if requested_networks is not None: | ||
313 | 124 | requested_networks = self._get_requested_networks( | ||
314 | 125 | requested_networks) | ||
315 | 126 | |||
316 | 127 | try: | ||
317 | 128 | flavor_id = self.controller._flavor_id_from_req_data(body) | ||
318 | 129 | except ValueError as error: | ||
319 | 130 | msg = _("Invalid flavorRef provided.") | ||
320 | 131 | raise exc.HTTPBadRequest(explanation=msg) | ||
321 | 132 | |||
322 | 133 | zone_blob = server_dict.get('blob') | ||
323 | 134 | |||
324 | 135 | # optional openstack extensions: | ||
325 | 136 | key_name = server_dict.get('key_name') | ||
326 | 137 | user_data = server_dict.get('user_data') | ||
327 | 138 | self._validate_user_data(user_data) | ||
328 | 139 | |||
329 | 140 | availability_zone = server_dict.get('availability_zone') | ||
330 | 141 | name = server_dict['name'] | ||
331 | 142 | self._validate_server_name(name) | ||
332 | 143 | name = name.strip() | ||
333 | 144 | |||
334 | 145 | reservation_id = server_dict.get('reservation_id') | ||
335 | 146 | min_count = server_dict.get('min_count') | ||
336 | 147 | max_count = server_dict.get('max_count') | ||
337 | 148 | # min_count and max_count are optional. If they exist, they come | ||
338 | 149 | # in as strings. We want to default 'min_count' to 1, and default | ||
339 | 150 | # 'max_count' to be 'min_count'. | ||
340 | 151 | min_count = int(min_count) if min_count else 1 | ||
341 | 152 | max_count = int(max_count) if max_count else min_count | ||
342 | 153 | if min_count > max_count: | ||
343 | 154 | min_count = max_count | ||
344 | 155 | |||
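The min_count/max_count defaulting in the lines above (strings in, integers out, with an inverted range clamped) can be captured as a small standalone helper. This is a sketch for illustration only; `normalize_counts` is not part of the nova API:

```python
def normalize_counts(min_count=None, max_count=None):
    """Apply the API's defaulting rules for instance counts.

    Counts arrive as strings (or are absent): min_count defaults to 1,
    max_count defaults to min_count, and min_count is clamped down to
    max_count when the caller supplies an inverted range.
    """
    min_count = int(min_count) if min_count else 1
    max_count = int(max_count) if max_count else min_count
    if min_count > max_count:
        min_count = max_count
    return min_count, max_count
```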
345 | 156 | try: | ||
346 | 157 | inst_type = \ | ||
347 | 158 | instance_types.get_instance_type_by_flavor_id(flavor_id) | ||
348 | 159 | extra_values = { | ||
349 | 160 | 'instance_type': inst_type, | ||
350 | 161 | 'image_ref': image_href, | ||
351 | 162 | 'config_drive': config_drive, | ||
352 | 163 | 'password': password} | ||
353 | 164 | |||
354 | 165 | return (extra_values, | ||
355 | 166 | create_method(context, | ||
356 | 167 | inst_type, | ||
357 | 168 | image_id, | ||
358 | 169 | kernel_id=kernel_id, | ||
359 | 170 | ramdisk_id=ramdisk_id, | ||
360 | 171 | display_name=name, | ||
361 | 172 | display_description=name, | ||
362 | 173 | key_name=key_name, | ||
363 | 174 | metadata=server_dict.get('metadata', {}), | ||
364 | 175 | access_ip_v4=server_dict.get('accessIPv4'), | ||
365 | 176 | access_ip_v6=server_dict.get('accessIPv6'), | ||
366 | 177 | injected_files=injected_files, | ||
367 | 178 | admin_password=password, | ||
368 | 179 | zone_blob=zone_blob, | ||
369 | 180 | reservation_id=reservation_id, | ||
370 | 181 | min_count=min_count, | ||
371 | 182 | max_count=max_count, | ||
372 | 183 | requested_networks=requested_networks, | ||
373 | 184 | security_group=sg_names, | ||
374 | 185 | user_data=user_data, | ||
375 | 186 | availability_zone=availability_zone, | ||
376 | 187 | config_drive=config_drive,)) | ||
377 | 188 | except quota.QuotaError as error: | ||
378 | 189 | self._handle_quota_error(error) | ||
379 | 190 | except exception.ImageNotFound as error: | ||
380 | 191 | msg = _("Cannot find requested image") | ||
381 | 192 | raise exc.HTTPBadRequest(explanation=msg) | ||
382 | 193 | except exception.FlavorNotFound as error: | ||
383 | 194 | msg = _("Invalid flavorRef provided.") | ||
384 | 195 | raise exc.HTTPBadRequest(explanation=msg) | ||
385 | 196 | except exception.KeypairNotFound as error: | ||
386 | 197 | msg = _("Invalid key_name provided.") | ||
387 | 198 | raise exc.HTTPBadRequest(explanation=msg) | ||
388 | 199 | except exception.SecurityGroupNotFound as error: | ||
389 | 200 | raise exc.HTTPBadRequest(explanation=unicode(error)) | ||
390 | 201 | except RemoteError as err: | ||
391 | 202 | msg = "%(err_type)s: %(err_msg)s" % \ | ||
392 | 203 | {'err_type': err.exc_type, 'err_msg': err.value} | ||
393 | 204 | raise exc.HTTPBadRequest(explanation=msg) | ||
394 | 205 | # Let the caller deal with unhandled exceptions. | ||
395 | 206 | |||
396 | 207 | def _handle_quota_error(self, error): | ||
397 | 208 | """ | ||
398 | 209 | Reraise quota errors as api-specific http exceptions | ||
399 | 210 | """ | ||
400 | 211 | if error.code == "OnsetFileLimitExceeded": | ||
401 | 212 | expl = _("Personality file limit exceeded") | ||
402 | 213 | raise exc.HTTPRequestEntityTooLarge(explanation=error.message, | ||
403 | 214 | headers={'Retry-After': 0}) | ||
404 | 215 | if error.code == "OnsetFilePathLimitExceeded": | ||
405 | 216 | expl = _("Personality file path too long") | ||
406 | 217 | raise exc.HTTPRequestEntityTooLarge(explanation=error.message, | ||
407 | 218 | headers={'Retry-After': 0}) | ||
408 | 219 | if error.code == "OnsetFileContentLimitExceeded": | ||
409 | 220 | expl = _("Personality file content too long") | ||
410 | 221 | raise exc.HTTPRequestEntityTooLarge(explanation=error.message, | ||
411 | 222 | headers={'Retry-After': 0}) | ||
412 | 223 | if error.code == "InstanceLimitExceeded": | ||
413 | 224 | expl = _("Instance quotas have been exceeded") | ||
414 | 225 | raise exc.HTTPRequestEntityTooLarge(explanation=error.message, | ||
415 | 226 | headers={'Retry-After': 0}) | ||
416 | 227 | # if the original error is okay, just reraise it | ||
417 | 228 | raise error | ||
418 | 229 | |||
419 | 230 | def _deserialize_create(self, request): | ||
420 | 231 | """ | ||
421 | 232 | Deserialize a create request | ||
422 | 233 | |||
423 | 234 | Overrides normal behavior in the case of xml content | ||
424 | 235 | """ | ||
425 | 236 | if request.content_type == "application/xml": | ||
426 | 237 | deserializer = ServerXMLDeserializer() | ||
427 | 238 | return deserializer.deserialize(request.body) | ||
428 | 239 | else: | ||
429 | 240 | return self._deserialize(request.body, request.get_content_type()) | ||
430 | 241 | |||
431 | 242 | def _validate_server_name(self, value): | ||
432 | 243 | if not isinstance(value, basestring): | ||
433 | 244 | msg = _("Server name is not a string or unicode") | ||
434 | 245 | raise exc.HTTPBadRequest(explanation=msg) | ||
435 | 246 | |||
436 | 247 | if value.strip() == '': | ||
437 | 248 | msg = _("Server name is an empty string") | ||
438 | 249 | raise exc.HTTPBadRequest(explanation=msg) | ||
439 | 250 | |||
440 | 251 | def _get_kernel_ramdisk_from_image(self, req, image_service, image_id): | ||
441 | 252 | """Fetch an image from the ImageService, then if present, return the | ||
442 | 253 | associated kernel and ramdisk image IDs. | ||
443 | 254 | """ | ||
444 | 255 | context = req.environ['nova.context'] | ||
445 | 256 | image_meta = image_service.show(context, image_id) | ||
446 | 257 | # NOTE(sirp): extracted to a separate method to aid unit-testing, the | ||
447 | 258 | # new method doesn't need a request obj or an ImageService stub | ||
448 | 259 | kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image( | ||
449 | 260 | image_meta) | ||
450 | 261 | return kernel_id, ramdisk_id | ||
451 | 262 | |||
452 | 263 | @staticmethod | ||
453 | 264 | def _do_get_kernel_ramdisk_from_image(image_meta): | ||
454 | 265 | """Given an ImageService image_meta, return kernel and ramdisk image | ||
455 | 266 | ids if present. | ||
456 | 267 | |||
457 | 268 | This is only valid for `ami` style images. | ||
458 | 269 | """ | ||
459 | 270 | image_id = image_meta['id'] | ||
460 | 271 | if image_meta['status'] != 'active': | ||
461 | 272 | raise exception.ImageUnacceptable(image_id=image_id, | ||
462 | 273 | reason=_("status is not active")) | ||
463 | 274 | |||
464 | 275 | if image_meta.get('container_format') != 'ami': | ||
465 | 276 | return None, None | ||
466 | 277 | |||
467 | 278 | try: | ||
468 | 279 | kernel_id = image_meta['properties']['kernel_id'] | ||
469 | 280 | except KeyError: | ||
470 | 281 | raise exception.KernelNotFoundForImage(image_id=image_id) | ||
471 | 282 | |||
472 | 283 | try: | ||
473 | 284 | ramdisk_id = image_meta['properties']['ramdisk_id'] | ||
474 | 285 | except KeyError: | ||
475 | 286 | ramdisk_id = None | ||
476 | 287 | |||
477 | 288 | return kernel_id, ramdisk_id | ||
478 | 289 | |||
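The lookup rules in `_do_get_kernel_ramdisk_from_image` can be exercised in isolation. This sketch substitutes plain exceptions for nova's `ImageUnacceptable` and `KernelNotFoundForImage`, and the function name is illustrative:

```python
def kernel_ramdisk_from_image(image_meta):
    """Return (kernel_id, ramdisk_id) for 'ami' images, (None, None) otherwise.

    Non-active images are rejected, and an 'ami' image without a kernel_id
    property is an error; a missing ramdisk_id is tolerated.
    """
    if image_meta['status'] != 'active':
        raise ValueError('status is not active')   # ImageUnacceptable in nova
    if image_meta.get('container_format') != 'ami':
        return None, None
    props = image_meta.get('properties', {})
    if 'kernel_id' not in props:
        raise LookupError('no kernel_id')          # KernelNotFoundForImage in nova
    return props['kernel_id'], props.get('ramdisk_id')
```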
479 | 290 | def _get_injected_files(self, personality): | ||
480 | 291 | """ | ||
481 | 292 | Create a list of injected files from the personality attribute | ||
482 | 293 | |||
483 | 294 | At this time, injected_files must be formatted as a list of | ||
484 | 295 | (file_path, file_content) pairs for compatibility with the | ||
485 | 296 | underlying compute service. | ||
486 | 297 | """ | ||
487 | 298 | injected_files = [] | ||
488 | 299 | |||
489 | 300 | for item in personality: | ||
490 | 301 | try: | ||
491 | 302 | path = item['path'] | ||
492 | 303 | contents = item['contents'] | ||
493 | 304 | except KeyError as key: | ||
494 | 305 | expl = _('Bad personality format: missing %s') % key | ||
495 | 306 | raise exc.HTTPBadRequest(explanation=expl) | ||
496 | 307 | except TypeError: | ||
497 | 308 | expl = _('Bad personality format') | ||
498 | 309 | raise exc.HTTPBadRequest(explanation=expl) | ||
499 | 310 | try: | ||
500 | 311 | contents = base64.b64decode(contents) | ||
501 | 312 | except TypeError: | ||
502 | 313 | expl = _('Personality content for %s cannot be decoded') % path | ||
503 | 314 | raise exc.HTTPBadRequest(explanation=expl) | ||
504 | 315 | injected_files.append((path, contents)) | ||
505 | 316 | return injected_files | ||
506 | 317 | |||
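A client-side view of the same contract: each personality item carries a path and base64-encoded contents, which the helper decodes into (path, bytes) pairs for the compute service. A minimal sketch without the HTTP error handling (`decode_personality` is an illustrative name, not a nova function):

```python
import base64

def decode_personality(personality):
    """Decode [{'path': ..., 'contents': <base64 str>}] into (path, bytes) pairs."""
    injected_files = []
    for item in personality:
        # KeyError / binascii.Error here correspond to the HTTPBadRequest
        # cases in _get_injected_files above.
        injected_files.append((item['path'], base64.b64decode(item['contents'])))
    return injected_files
```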
507 | 318 | def _get_server_admin_password_old_style(self, server): | ||
508 | 319 | """ Determine the admin password for a server on creation """ | ||
509 | 320 | return utils.generate_password(FLAGS.password_length) | ||
510 | 321 | |||
511 | 322 | def _get_server_admin_password_new_style(self, server): | ||
512 | 323 | """ Determine the admin password for a server on creation """ | ||
513 | 324 | password = server.get('adminPass') | ||
514 | 325 | |||
515 | 326 | if password is None: | ||
516 | 327 | return utils.generate_password(FLAGS.password_length) | ||
517 | 328 | if not isinstance(password, basestring) or password == '': | ||
518 | 329 | msg = _("Invalid adminPass") | ||
519 | 330 | raise exc.HTTPBadRequest(explanation=msg) | ||
520 | 331 | return password | ||
521 | 332 | |||
522 | 333 | def _get_requested_networks(self, requested_networks): | ||
523 | 334 | """ | ||
524 | 335 | Create a list of requested networks from the networks attribute | ||
525 | 336 | """ | ||
526 | 337 | networks = [] | ||
527 | 338 | for network in requested_networks: | ||
528 | 339 | try: | ||
529 | 340 | network_uuid = network['uuid'] | ||
530 | 341 | |||
531 | 342 | if not utils.is_uuid_like(network_uuid): | ||
532 | 343 | msg = _("Bad networks format: network uuid is not in" | ||
533 | 344 | " proper format (%s)") % network_uuid | ||
534 | 345 | raise exc.HTTPBadRequest(explanation=msg) | ||
535 | 346 | |||
536 | 347 | #fixed IP address is optional | ||
537 | 348 | #if the fixed IP address is not provided then | ||
538 | 349 | #it will use one of the available IP address from the network | ||
539 | 350 | address = network.get('fixed_ip', None) | ||
540 | 351 | if address is not None and not utils.is_valid_ipv4(address): | ||
541 | 352 | msg = _("Invalid fixed IP address (%s)") % address | ||
542 | 353 | raise exc.HTTPBadRequest(explanation=msg) | ||
543 | 354 | # check if the network id is already present in the list, | ||
544 | 355 | # we don't want duplicate networks to be passed | ||
545 | 356 | # at the boot time | ||
546 | 357 | for id, ip in networks: | ||
547 | 358 | if id == network_uuid: | ||
548 | 359 | expl = _("Duplicate networks (%s) are not allowed")\ | ||
549 | 360 | % network_uuid | ||
550 | 361 | raise exc.HTTPBadRequest(explanation=expl) | ||
551 | 362 | |||
552 | 363 | networks.append((network_uuid, address)) | ||
553 | 364 | except KeyError as key: | ||
554 | 365 | expl = _('Bad network format: missing %s') % key | ||
555 | 366 | raise exc.HTTPBadRequest(explanation=expl) | ||
556 | 367 | except TypeError: | ||
557 | 368 | expl = _('Bad networks format') | ||
558 | 369 | raise exc.HTTPBadRequest(explanation=expl) | ||
559 | 370 | |||
560 | 371 | return networks | ||
561 | 372 | |||
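The network validation above (uuid format check, optional fixed IP, duplicate rejection) can be sketched with the stdlib `uuid` module standing in for nova's `utils.is_uuid_like`; the function name and `ValueError`s are illustrative, where the real code raises `HTTPBadRequest`:

```python
import uuid

def parse_requested_networks(requested):
    """Validate [{'uuid': ..., 'fixed_ip': ...}] into (uuid, address) pairs."""
    networks = []
    for net in requested:
        net_uuid = net['uuid']
        try:
            uuid.UUID(net_uuid)
        except ValueError:
            raise ValueError('bad network uuid: %s' % net_uuid)
        # Duplicate network ids must not be passed at boot time.
        if any(existing == net_uuid for existing, _ in networks):
            raise ValueError('duplicate network: %s' % net_uuid)
        networks.append((net_uuid, net.get('fixed_ip')))
    return networks
```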
562 | 373 | def _validate_user_data(self, user_data): | ||
563 | 374 | """Check if the user_data is encoded properly""" | ||
564 | 375 | if not user_data: | ||
565 | 376 | return | ||
566 | 377 | try: | ||
567 | 378 | user_data = base64.b64decode(user_data) | ||
568 | 379 | except TypeError: | ||
569 | 380 | expl = _('Userdata content cannot be decoded') | ||
570 | 381 | raise exc.HTTPBadRequest(explanation=expl) | ||
571 | 382 | |||
572 | 383 | |||
573 | 384 | class ServerXMLDeserializer(wsgi.XMLDeserializer): | ||
574 | 385 | """ | ||
575 | 386 | Deserializer to handle xml-formatted server create requests. | ||
576 | 387 | |||
577 | 388 | Handles standard server attributes as well as optional metadata | ||
578 | 389 | and personality attributes | ||
579 | 390 | """ | ||
580 | 391 | |||
581 | 392 | metadata_deserializer = common.MetadataXMLDeserializer() | ||
582 | 393 | |||
583 | 394 | def create(self, string): | ||
584 | 395 | """Deserialize an xml-formatted server create request""" | ||
585 | 396 | dom = minidom.parseString(string) | ||
586 | 397 | server = self._extract_server(dom) | ||
587 | 398 | return {'body': {'server': server}} | ||
588 | 399 | |||
589 | 400 | def _extract_server(self, node): | ||
590 | 401 | """Marshal the server attribute of a parsed request""" | ||
591 | 402 | server = {} | ||
592 | 403 | server_node = self.find_first_child_named(node, 'server') | ||
593 | 404 | |||
594 | 405 | attributes = ["name", "imageId", "flavorId", "adminPass"] | ||
595 | 406 | for attr in attributes: | ||
596 | 407 | if server_node.getAttribute(attr): | ||
597 | 408 | server[attr] = server_node.getAttribute(attr) | ||
598 | 409 | |||
599 | 410 | metadata_node = self.find_first_child_named(server_node, "metadata") | ||
600 | 411 | server["metadata"] = self.metadata_deserializer.extract_metadata( | ||
601 | 412 | metadata_node) | ||
602 | 413 | |||
603 | 414 | server["personality"] = self._extract_personality(server_node) | ||
604 | 415 | |||
605 | 416 | return server | ||
606 | 417 | |||
607 | 418 | def _extract_personality(self, server_node): | ||
608 | 419 | """Marshal the personality attribute of a parsed request""" | ||
609 | 420 | node = self.find_first_child_named(server_node, "personality") | ||
610 | 421 | personality = [] | ||
611 | 422 | if node is not None: | ||
612 | 423 | for file_node in self.find_children_named(node, "file"): | ||
613 | 424 | item = {} | ||
614 | 425 | if file_node.hasAttribute("path"): | ||
615 | 426 | item["path"] = file_node.getAttribute("path") | ||
616 | 427 | item["contents"] = self.extract_text(file_node) | ||
617 | 428 | personality.append(item) | ||
618 | 429 | return personality | ||
619 | 430 | |||
620 | 431 | |||
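The attribute handling in `ServerXMLDeserializer._extract_server` can be seen with plain minidom. This simplified sketch skips the metadata/personality plumbing and uses an illustrative function name:

```python
from xml.dom import minidom

def extract_server_attrs(xml_string,
                         attributes=('name', 'imageId', 'flavorId', 'adminPass')):
    """Pull the whitelisted attributes off the <server> element,
    skipping any that are absent or empty."""
    dom = minidom.parseString(xml_string)
    server_node = dom.getElementsByTagName('server')[0]
    return {attr: server_node.getAttribute(attr)
            for attr in attributes if server_node.getAttribute(attr)}
```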
621 | 432 | class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer): | ||
622 | 433 | """ | ||
623 | 434 | Deserializer to handle xml-formatted server create requests. | ||
624 | 435 | |||
625 | 436 | Handles standard server attributes as well as optional metadata | ||
626 | 437 | and personality attributes | ||
627 | 438 | """ | ||
628 | 439 | |||
629 | 440 | metadata_deserializer = common.MetadataXMLDeserializer() | ||
630 | 441 | |||
631 | 442 | def action(self, string): | ||
632 | 443 | dom = minidom.parseString(string) | ||
633 | 444 | action_node = dom.childNodes[0] | ||
634 | 445 | action_name = action_node.tagName | ||
635 | 446 | |||
636 | 447 | action_deserializer = { | ||
637 | 448 | 'createImage': self._action_create_image, | ||
638 | 449 | 'createBackup': self._action_create_backup, | ||
639 | 450 | 'changePassword': self._action_change_password, | ||
640 | 451 | 'reboot': self._action_reboot, | ||
641 | 452 | 'rebuild': self._action_rebuild, | ||
642 | 453 | 'resize': self._action_resize, | ||
643 | 454 | 'confirmResize': self._action_confirm_resize, | ||
644 | 455 | 'revertResize': self._action_revert_resize, | ||
645 | 456 | }.get(action_name, self.default) | ||
646 | 457 | |||
647 | 458 | action_data = action_deserializer(action_node) | ||
648 | 459 | |||
649 | 460 | return {'body': {action_name: action_data}} | ||
650 | 461 | |||
651 | 462 | def _action_create_image(self, node): | ||
652 | 463 | return self._deserialize_image_action(node, ('name',)) | ||
653 | 464 | |||
654 | 465 | def _action_create_backup(self, node): | ||
655 | 466 | attributes = ('name', 'backup_type', 'rotation') | ||
656 | 467 | return self._deserialize_image_action(node, attributes) | ||
657 | 468 | |||
658 | 469 | def _action_change_password(self, node): | ||
659 | 470 | if not node.hasAttribute("adminPass"): | ||
660 | 471 | raise AttributeError("No adminPass was specified in request") | ||
661 | 472 | return {"adminPass": node.getAttribute("adminPass")} | ||
662 | 473 | |||
663 | 474 | def _action_reboot(self, node): | ||
664 | 475 | if not node.hasAttribute("type"): | ||
665 | 476 | raise AttributeError("No reboot type was specified in request") | ||
666 | 477 | return {"type": node.getAttribute("type")} | ||
667 | 478 | |||
668 | 479 | def _action_rebuild(self, node): | ||
669 | 480 | rebuild = {} | ||
670 | 481 | if node.hasAttribute("name"): | ||
671 | 482 | rebuild['name'] = node.getAttribute("name") | ||
672 | 483 | |||
673 | 484 | metadata_node = self.find_first_child_named(node, "metadata") | ||
674 | 485 | if metadata_node is not None: | ||
675 | 486 | rebuild["metadata"] = self.extract_metadata(metadata_node) | ||
676 | 487 | |||
677 | 488 | personality = self._extract_personality(node) | ||
678 | 489 | if personality is not None: | ||
679 | 490 | rebuild["personality"] = personality | ||
680 | 491 | |||
681 | 492 | if not node.hasAttribute("imageRef"): | ||
682 | 493 | raise AttributeError("No imageRef was specified in request") | ||
683 | 494 | rebuild["imageRef"] = node.getAttribute("imageRef") | ||
684 | 495 | |||
685 | 496 | return rebuild | ||
686 | 497 | |||
687 | 498 | def _action_resize(self, node): | ||
688 | 499 | if not node.hasAttribute("flavorRef"): | ||
689 | 500 | raise AttributeError("No flavorRef was specified in request") | ||
690 | 501 | return {"flavorRef": node.getAttribute("flavorRef")} | ||
691 | 502 | |||
692 | 503 | def _action_confirm_resize(self, node): | ||
693 | 504 | return None | ||
694 | 505 | |||
695 | 506 | def _action_revert_resize(self, node): | ||
696 | 507 | return None | ||
697 | 508 | |||
698 | 509 | def _deserialize_image_action(self, node, allowed_attributes): | ||
699 | 510 | data = {} | ||
700 | 511 | for attribute in allowed_attributes: | ||
701 | 512 | value = node.getAttribute(attribute) | ||
702 | 513 | if value: | ||
703 | 514 | data[attribute] = value | ||
704 | 515 | metadata_node = self.find_first_child_named(node, 'metadata') | ||
705 | 516 | if metadata_node is not None: | ||
706 | 517 | metadata = self.metadata_deserializer.extract_metadata( | ||
707 | 518 | metadata_node) | ||
708 | 519 | data['metadata'] = metadata | ||
709 | 520 | return data | ||
710 | 521 | |||
711 | 522 | def create(self, string): | ||
712 | 523 | """Deserialize an xml-formatted server create request""" | ||
713 | 524 | dom = minidom.parseString(string) | ||
714 | 525 | server = self._extract_server(dom) | ||
715 | 526 | return {'body': {'server': server}} | ||
716 | 527 | |||
717 | 528 | def _extract_server(self, node): | ||
718 | 529 | """Marshal the server attribute of a parsed request""" | ||
719 | 530 | server = {} | ||
720 | 531 | server_node = self.find_first_child_named(node, 'server') | ||
721 | 532 | |||
722 | 533 | attributes = ["name", "imageRef", "flavorRef", "adminPass", | ||
723 | 534 | "accessIPv4", "accessIPv6"] | ||
724 | 535 | for attr in attributes: | ||
725 | 536 | if server_node.getAttribute(attr): | ||
726 | 537 | server[attr] = server_node.getAttribute(attr) | ||
727 | 538 | |||
728 | 539 | metadata_node = self.find_first_child_named(server_node, "metadata") | ||
729 | 540 | if metadata_node is not None: | ||
730 | 541 | server["metadata"] = self.extract_metadata(metadata_node) | ||
731 | 542 | |||
732 | 543 | personality = self._extract_personality(server_node) | ||
733 | 544 | if personality is not None: | ||
734 | 545 | server["personality"] = personality | ||
735 | 546 | |||
736 | 547 | networks = self._extract_networks(server_node) | ||
737 | 548 | if networks is not None: | ||
738 | 549 | server["networks"] = networks | ||
739 | 550 | |||
740 | 551 | security_groups = self._extract_security_groups(server_node) | ||
741 | 552 | if security_groups is not None: | ||
742 | 553 | server["security_groups"] = security_groups | ||
743 | 554 | |||
744 | 555 | return server | ||
745 | 556 | |||
746 | 557 | def _extract_personality(self, server_node): | ||
747 | 558 | """Marshal the personality attribute of a parsed request""" | ||
748 | 559 | node = self.find_first_child_named(server_node, "personality") | ||
749 | 560 | if node is not None: | ||
750 | 561 | personality = [] | ||
751 | 562 | for file_node in self.find_children_named(node, "file"): | ||
752 | 563 | item = {} | ||
753 | 564 | if file_node.hasAttribute("path"): | ||
754 | 565 | item["path"] = file_node.getAttribute("path") | ||
755 | 566 | item["contents"] = self.extract_text(file_node) | ||
756 | 567 | personality.append(item) | ||
757 | 568 | return personality | ||
758 | 569 | else: | ||
759 | 570 | return None | ||
760 | 571 | |||
761 | 572 | def _extract_networks(self, server_node): | ||
762 | 573 | """Marshal the networks attribute of a parsed request""" | ||
763 | 574 | node = self.find_first_child_named(server_node, "networks") | ||
764 | 575 | if node is not None: | ||
765 | 576 | networks = [] | ||
766 | 577 | for network_node in self.find_children_named(node, | ||
767 | 578 | "network"): | ||
768 | 579 | item = {} | ||
769 | 580 | if network_node.hasAttribute("uuid"): | ||
770 | 581 | item["uuid"] = network_node.getAttribute("uuid") | ||
771 | 582 | if network_node.hasAttribute("fixed_ip"): | ||
772 | 583 | item["fixed_ip"] = network_node.getAttribute("fixed_ip") | ||
773 | 584 | networks.append(item) | ||
774 | 585 | return networks | ||
775 | 586 | else: | ||
776 | 587 | return None | ||
777 | 588 | |||
778 | 589 | def _extract_security_groups(self, server_node): | ||
779 | 590 | """Marshal the security_groups attribute of a parsed request""" | ||
780 | 591 | node = self.find_first_child_named(server_node, "security_groups") | ||
781 | 592 | if node is not None: | ||
782 | 593 | security_groups = [] | ||
783 | 594 | for sg_node in self.find_children_named(node, "security_group"): | ||
784 | 595 | item = {} | ||
785 | 596 | name_node = self.find_first_child_named(sg_node, "name") | ||
786 | 597 | if name_node: | ||
787 | 598 | item["name"] = self.extract_text(name_node) | ||
788 | 599 | security_groups.append(item) | ||
789 | 600 | return security_groups | ||
790 | 601 | else: | ||
791 | 602 | return None | ||
792 | 603 | |||
793 | === modified file 'nova/api/openstack/servers.py' | |||
794 | --- nova/api/openstack/servers.py 2011-09-22 15:41:34 +0000 | |||
795 | +++ nova/api/openstack/servers.py 2011-09-23 07:08:19 +0000 | |||
796 | @@ -1,4 +1,5 @@ | |||
797 | 1 | # Copyright 2010 OpenStack LLC. | 1 | # Copyright 2010 OpenStack LLC. |
798 | 2 | # Copyright 2011 Piston Cloud Computing, Inc | ||
799 | 2 | # All Rights Reserved. | 3 | # All Rights Reserved. |
800 | 3 | # | 4 | # |
801 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may |
802 | @@ -21,15 +22,17 @@ | |||
803 | 21 | from lxml import etree | 22 | from lxml import etree |
804 | 22 | from webob import exc | 23 | from webob import exc |
805 | 23 | import webob | 24 | import webob |
806 | 25 | from xml.dom import minidom | ||
807 | 24 | 26 | ||
808 | 25 | from nova import compute | 27 | from nova import compute |
809 | 26 | from nova import db | 28 | from nova import db |
810 | 27 | from nova import exception | 29 | from nova import exception |
811 | 28 | from nova import flags | 30 | from nova import flags |
812 | 31 | from nova import image | ||
813 | 29 | from nova import log as logging | 32 | from nova import log as logging |
814 | 30 | from nova import utils | 33 | from nova import utils |
815 | 34 | from nova import quota | ||
816 | 31 | from nova.api.openstack import common | 35 | from nova.api.openstack import common |
817 | 32 | from nova.api.openstack import create_instance_helper as helper | ||
818 | 33 | from nova.api.openstack import ips | 36 | from nova.api.openstack import ips |
819 | 34 | from nova.api.openstack import wsgi | 37 | from nova.api.openstack import wsgi |
820 | 35 | from nova.compute import instance_types | 38 | from nova.compute import instance_types |
821 | @@ -40,6 +43,7 @@ | |||
822 | 40 | import nova.api.openstack.views.images | 43 | import nova.api.openstack.views.images |
823 | 41 | import nova.api.openstack.views.servers | 44 | import nova.api.openstack.views.servers |
824 | 42 | from nova.api.openstack import xmlutil | 45 | from nova.api.openstack import xmlutil |
825 | 46 | from nova.rpc import common as rpc_common | ||
826 | 43 | 47 | ||
827 | 44 | 48 | ||
828 | 45 | LOG = logging.getLogger('nova.api.openstack.servers') | 49 | LOG = logging.getLogger('nova.api.openstack.servers') |
829 | @@ -72,7 +76,6 @@ | |||
830 | 72 | 76 | ||
831 | 73 | def __init__(self): | 77 | def __init__(self): |
832 | 74 | self.compute_api = compute.API() | 78 | self.compute_api = compute.API() |
833 | 75 | self.helper = helper.CreateInstanceHelper(self) | ||
834 | 76 | 79 | ||
835 | 77 | def index(self, req): | 80 | def index(self, req): |
836 | 78 | """ Returns a list of server names and ids for a given user """ | 81 | """ Returns a list of server names and ids for a given user """ |
837 | @@ -106,6 +109,12 @@ | |||
838 | 106 | def _action_rebuild(self, info, request, instance_id): | 109 | def _action_rebuild(self, info, request, instance_id): |
839 | 107 | raise NotImplementedError() | 110 | raise NotImplementedError() |
840 | 108 | 111 | ||
841 | 112 | def _get_block_device_mapping(self, data): | ||
842 | 113 | """Get block_device_mapping from 'server' dictionary. | ||
843 | 114 | Overridden by volumes controller. | ||
844 | 115 | """ | ||
845 | 116 | return None | ||
846 | 117 | |||
847 | 109 | def _get_servers(self, req, is_detail): | 118 | def _get_servers(self, req, is_detail): |
848 | 110 | """Returns a list of servers, taking into account any search | 119 | """Returns a list of servers, taking into account any search |
849 | 111 | options specified. | 120 | options specified. |
850 | @@ -157,6 +166,181 @@ | |||
851 | 157 | limited_list = self._limit_items(instance_list, req) | 166 | limited_list = self._limit_items(instance_list, req) |
852 | 158 | return self._build_list(req, limited_list, is_detail=is_detail) | 167 | return self._build_list(req, limited_list, is_detail=is_detail) |
853 | 159 | 168 | ||
854 | 169 | def _handle_quota_error(self, error): | ||
855 | 170 | """ | ||
856 | 171 | Reraise quota errors as api-specific http exceptions | ||
857 | 172 | """ | ||
858 | 173 | |||
859 | 174 | code_mappings = { | ||
860 | 175 | "OnsetFileLimitExceeded": | ||
861 | 176 | _("Personality file limit exceeded"), | ||
862 | 177 | "OnsetFilePathLimitExceeded": | ||
863 | 178 | _("Personality file path too long"), | ||
864 | 179 | "OnsetFileContentLimitExceeded": | ||
865 | 180 | _("Personality file content too long"), | ||
866 | 181 | "InstanceLimitExceeded": | ||
867 | 182 | _("Instance quotas have been exceeded")} | ||
868 | 183 | |||
869 | 184 | expl = code_mappings.get(error.code) | ||
870 | 185 | if expl: | ||
871 | 186 | raise exc.HTTPRequestEntityTooLarge(explanation=expl, | ||
872 | 187 | headers={'Retry-After': 0}) | ||
873 | 188 | # if the original error is okay, just reraise it | ||
874 | 189 | raise error | ||
875 | 190 | |||
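The table-driven rewrite above replaces four if-blocks with a single dict lookup, re-raising anything it does not recognize. The shape, as a sketch with a stand-in `QuotaError` class:

```python
class QuotaError(Exception):
    """Stand-in for nova.quota.QuotaError, which carries a .code attribute."""
    def __init__(self, code):
        super(QuotaError, self).__init__(code)
        self.code = code

CODE_MAPPINGS = {
    'OnsetFileLimitExceeded': 'Personality file limit exceeded',
    'OnsetFilePathLimitExceeded': 'Personality file path too long',
    'OnsetFileContentLimitExceeded': 'Personality file content too long',
    'InstanceLimitExceeded': 'Instance quotas have been exceeded',
}

def explanation_for(error):
    """Return the API explanation for a known quota code, else re-raise."""
    expl = CODE_MAPPINGS.get(error.code)
    if expl:
        return expl
    # if the original error is okay, just reraise it
    raise error
```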
876 | 191 | def _deserialize_create(self, request): | ||
877 | 192 | """ | ||
878 | 193 | Deserialize a create request | ||
879 | 194 | |||
880 | 195 | Overrides normal behavior in the case of xml content | ||
881 | 196 | """ | ||
882 | 197 | if request.content_type == "application/xml": | ||
883 | 198 | deserializer = ServerXMLDeserializer() | ||
884 | 199 | return deserializer.deserialize(request.body) | ||
885 | 200 | else: | ||
886 | 201 | return self._deserialize(request.body, request.get_content_type()) | ||
887 | 202 | |||
888 | 203 | def _validate_server_name(self, value): | ||
889 | 204 | if not isinstance(value, basestring): | ||
890 | 205 | msg = _("Server name is not a string or unicode") | ||
891 | 206 | raise exc.HTTPBadRequest(explanation=msg) | ||
892 | 207 | |||
893 | 208 | if value.strip() == '': | ||
894 | 209 | msg = _("Server name is an empty string") | ||
895 | 210 | raise exc.HTTPBadRequest(explanation=msg) | ||
896 | 211 | |||
897 | 212 | def _get_kernel_ramdisk_from_image(self, req, image_service, image_id): | ||
898 | 213 | """Fetch an image from the ImageService, then if present, return the | ||
899 | 214 | associated kernel and ramdisk image IDs. | ||
900 | 215 | """ | ||
901 | 216 | context = req.environ['nova.context'] | ||
902 | 217 | image_meta = image_service.show(context, image_id) | ||
903 | 218 | # NOTE(sirp): extracted to a separate method to aid unit-testing, the | ||
904 | 219 | # new method doesn't need a request obj or an ImageService stub | ||
905 | 220 | kernel_id, ramdisk_id = self._do_get_kernel_ramdisk_from_image( | ||
906 | 221 | image_meta) | ||
907 | 222 | return kernel_id, ramdisk_id | ||
908 | 223 | |||
909 | 224 | @staticmethod | ||
910 | 225 | def _do_get_kernel_ramdisk_from_image(image_meta): | ||
911 | 226 | """Given an ImageService image_meta, return kernel and ramdisk image | ||
912 | 227 | ids if present. | ||
913 | 228 | |||
914 | 229 | This is only valid for `ami` style images. | ||
915 | 230 | """ | ||
916 | 231 | image_id = image_meta['id'] | ||
917 | 232 | if image_meta['status'] != 'active': | ||
918 | 233 | raise exception.ImageUnacceptable(image_id=image_id, | ||
919 | 234 | reason=_("status is not active")) | ||
920 | 235 | |||
921 | 236 | if image_meta.get('container_format') != 'ami': | ||
922 | 237 | return None, None | ||
923 | 238 | |||
924 | 239 | try: | ||
925 | 240 | kernel_id = image_meta['properties']['kernel_id'] | ||
926 | 241 | except KeyError: | ||
927 | 242 | raise exception.KernelNotFoundForImage(image_id=image_id) | ||
928 | 243 | |||
929 | 244 | try: | ||
930 | 245 | ramdisk_id = image_meta['properties']['ramdisk_id'] | ||
931 | 246 | except KeyError: | ||
932 | 247 | ramdisk_id = None | ||
933 | 248 | |||
934 | 249 | return kernel_id, ramdisk_id | ||
935 | 250 | |||
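The kernel/ramdisk lookup above only applies to `ami`-style images, requires the image to be active, treats a missing `kernel_id` as an error, and treats `ramdisk_id` as optional. A self-contained sketch of the same logic, with plain exceptions standing in for nova's `ImageUnacceptable` and `KernelNotFoundForImage`:

```python
def get_kernel_ramdisk(image_meta):
    """Return (kernel_id, ramdisk_id) for an ami-style image_meta dict."""
    image_id = image_meta['id']
    if image_meta['status'] != 'active':
        # The controller raises exception.ImageUnacceptable here.
        raise ValueError("image %s: status is not active" % image_id)
    if image_meta.get('container_format') != 'ami':
        # Only ami-style images carry kernel/ramdisk ids.
        return None, None
    props = image_meta.get('properties', {})
    if 'kernel_id' not in props:
        # The controller raises exception.KernelNotFoundForImage here.
        raise LookupError("no kernel found for image %s" % image_id)
    # ramdisk_id is optional; default to None when absent.
    return props['kernel_id'], props.get('ramdisk_id')
```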
936 | 251 | def _get_injected_files(self, personality): | ||
937 | 252 | """ | ||
938 | 253 | Create a list of injected files from the personality attribute | ||
939 | 254 | |||
940 | 255 | At this time, injected_files must be formatted as a list of | ||
941 | 256 | (file_path, file_content) pairs for compatibility with the | ||
942 | 257 | underlying compute service. | ||
943 | 258 | """ | ||
944 | 259 | injected_files = [] | ||
945 | 260 | |||
946 | 261 | for item in personality: | ||
947 | 262 | try: | ||
948 | 263 | path = item['path'] | ||
949 | 264 | contents = item['contents'] | ||
950 | 265 | except KeyError as key: | ||
951 | 266 | expl = _('Bad personality format: missing %s') % key | ||
952 | 267 | raise exc.HTTPBadRequest(explanation=expl) | ||
953 | 268 | except TypeError: | ||
954 | 269 | expl = _('Bad personality format') | ||
955 | 270 | raise exc.HTTPBadRequest(explanation=expl) | ||
956 | 271 | try: | ||
957 | 272 | contents = base64.b64decode(contents) | ||
958 | 273 | except TypeError: | ||
959 | 274 | expl = _('Personality content for %s cannot be decoded') % path | ||
960 | 275 | raise exc.HTTPBadRequest(explanation=expl) | ||
961 | 276 | injected_files.append((path, contents)) | ||
962 | 277 | return injected_files | ||
963 | 278 | |||
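The personality handling above expects each item to carry a `path` and base64-encoded `contents`, and converts the list into `(path, decoded_content)` pairs. A standalone sketch of that conversion, raising `ValueError` where the controller raises `HTTPBadRequest`:

```python
import base64
import binascii


def get_injected_files(personality):
    """Convert personality items into (path, content) pairs."""
    injected_files = []
    for item in personality:
        try:
            path = item['path']
            contents = item['contents']
        except (KeyError, TypeError):
            raise ValueError("Bad personality format")
        try:
            # contents arrive base64-encoded on the wire
            contents = base64.b64decode(contents)
        except (TypeError, binascii.Error):
            raise ValueError(
                "Personality content for %s cannot be decoded" % path)
        injected_files.append((path, contents))
    return injected_files
```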
964 | 279 | def _get_server_admin_password_old_style(self, server): | ||
965 | 280 | """ Determine the admin password for a server on creation """ | ||
966 | 281 | return utils.generate_password(FLAGS.password_length) | ||
967 | 282 | |||
968 | 283 | def _get_server_admin_password_new_style(self, server): | ||
969 | 284 | """ Determine the admin password for a server on creation """ | ||
970 | 285 | password = server.get('adminPass') | ||
971 | 286 | |||
972 | 287 | if password is None: | ||
973 | 288 | return utils.generate_password(FLAGS.password_length) | ||
974 | 289 | if not isinstance(password, basestring) or password == '': | ||
975 | 290 | msg = _("Invalid adminPass") | ||
976 | 291 | raise exc.HTTPBadRequest(explanation=msg) | ||
977 | 292 | return password | ||
978 | 293 | |||
979 | 294 | def _get_requested_networks(self, requested_networks): | ||
980 | 295 | """ | ||
981 | 296 | Create a list of requested networks from the networks attribute | ||
982 | 297 | """ | ||
983 | 298 | networks = [] | ||
984 | 299 | for network in requested_networks: | ||
985 | 300 | try: | ||
986 | 301 | network_uuid = network['uuid'] | ||
987 | 302 | |||
988 | 303 | if not utils.is_uuid_like(network_uuid): | ||
989 | 304 | msg = _("Bad networks format: network uuid is not in" | ||
990 | 305 | " proper format (%s)") % network_uuid | ||
991 | 306 | raise exc.HTTPBadRequest(explanation=msg) | ||
992 | 307 | |||
993 | 994 | 308 | # fixed IP address is optional | ||
994 | 995 | 309 | # if the fixed IP address is not provided then | ||
995 | 996 | 310 | # one of the available IP addresses from the network is used | ||
996 | 311 | address = network.get('fixed_ip', None) | ||
997 | 312 | if address is not None and not utils.is_valid_ipv4(address): | ||
998 | 313 | msg = _("Invalid fixed IP address (%s)") % address | ||
999 | 314 | raise exc.HTTPBadRequest(explanation=msg) | ||
1000 | 315 | # check if the network id is already present in the list, | ||
1001 | 316 | # we don't want duplicate networks to be passed | ||
1002 | 317 | # at the boot time | ||
1003 | 318 | for id, ip in networks: | ||
1004 | 319 | if id == network_uuid: | ||
1005 | 320 | expl = _("Duplicate networks (%s) are not allowed")\ | ||
1006 | 321 | % network_uuid | ||
1007 | 322 | raise exc.HTTPBadRequest(explanation=expl) | ||
1008 | 323 | |||
1009 | 324 | networks.append((network_uuid, address)) | ||
1010 | 325 | except KeyError as key: | ||
1011 | 326 | expl = _('Bad network format: missing %s') % key | ||
1012 | 327 | raise exc.HTTPBadRequest(explanation=expl) | ||
1013 | 328 | except TypeError: | ||
1014 | 329 | expl = _('Bad networks format') | ||
1015 | 330 | raise exc.HTTPBadRequest(explanation=expl) | ||
1016 | 331 | |||
1017 | 332 | return networks | ||
1018 | 333 | |||
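The `_get_requested_networks` helper above validates each network's uuid, treats the fixed IP as optional, and rejects duplicate network ids. A simplified sketch of the same checks, with `uuid.UUID` standing in for nova's `utils.is_uuid_like` and `ValueError` standing in for `HTTPBadRequest`:

```python
import uuid


def get_requested_networks(requested_networks):
    """Validate and normalize a list of requested-network dicts."""
    networks = []
    for network in requested_networks:
        network_uuid = network['uuid']
        # raises ValueError if the uuid is malformed
        uuid.UUID(network_uuid)
        # fixed IP address is optional; the network allocates one otherwise
        address = network.get('fixed_ip')
        # reject duplicate network ids at boot time
        if any(nid == network_uuid for nid, _ip in networks):
            raise ValueError(
                "Duplicate networks (%s) are not allowed" % network_uuid)
        networks.append((network_uuid, address))
    return networks
```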
1019 | 334 | def _validate_user_data(self, user_data): | ||
1020 | 335 | """Check if the user_data is encoded properly""" | ||
1021 | 336 | if not user_data: | ||
1022 | 337 | return | ||
1023 | 338 | try: | ||
1024 | 339 | user_data = base64.b64decode(user_data) | ||
1025 | 340 | except TypeError: | ||
1026 | 341 | expl = _('Userdata content cannot be decoded') | ||
1027 | 342 | raise exc.HTTPBadRequest(explanation=expl) | ||
1028 | 343 | |||
1029 | 160 | @novaclient_exception_converter | 344 | @novaclient_exception_converter |
1030 | 161 | @scheduler_api.redirect_handler | 345 | @scheduler_api.redirect_handler |
1031 | 162 | def show(self, req, id): | 346 | def show(self, req, id): |
1032 | @@ -174,22 +358,168 @@ | |||
1033 | 174 | 358 | ||
1034 | 175 | def create(self, req, body): | 359 | def create(self, req, body): |
1035 | 176 | """ Creates a new server for a given user """ | 360 | """ Creates a new server for a given user """ |
1052 | 177 | if 'server' in body: | 361 | |
1053 | 178 | body['server']['key_name'] = self._get_key_name(req, body) | 362 | if not body: |
1054 | 179 | 363 | raise exc.HTTPUnprocessableEntity() | |
1055 | 180 | extra_values = None | 364 | |
1056 | 181 | extra_values, instances = self.helper.create_instance( | 365 | if 'server' not in body: |
1057 | 182 | req, body, self.compute_api.create) | 366 | raise exc.HTTPUnprocessableEntity() |
1058 | 183 | 367 | ||
1059 | 184 | # We can only return 1 instance via the API, if we happen to | 368 | body['server']['key_name'] = self._get_key_name(req, body) |
1060 | 185 | # build more than one... instances is a list, so we'll just | 369 | |
1061 | 186 | # use the first one.. | 370 | context = req.environ['nova.context'] |
1062 | 187 | inst = instances[0] | 371 | server_dict = body['server'] |
1063 | 188 | for key in ['instance_type', 'image_ref']: | 372 | password = self._get_server_admin_password(server_dict) |
1064 | 189 | inst[key] = extra_values[key] | 373 | |
1065 | 190 | 374 | if not 'name' in server_dict: | |
1066 | 191 | server = self._build_view(req, inst, is_detail=True) | 375 | msg = _("Server name is not defined") |
1067 | 192 | server['server']['adminPass'] = extra_values['password'] | 376 | raise exc.HTTPBadRequest(explanation=msg) |
1068 | 377 | |||
1069 | 378 | name = server_dict['name'] | ||
1070 | 379 | self._validate_server_name(name) | ||
1071 | 380 | name = name.strip() | ||
1072 | 381 | |||
1073 | 382 | image_href = self._image_ref_from_req_data(body) | ||
1074 | 383 | # If the image href was generated by nova api, strip image_href | ||
1075 | 384 | # down to an id and use the default glance connection params | ||
1076 | 385 | |||
1077 | 386 | if str(image_href).startswith(req.application_url): | ||
1078 | 387 | image_href = image_href.split('/').pop() | ||
1079 | 388 | try: | ||
1080 | 389 | image_service, image_id = image.get_image_service(context, | ||
1081 | 390 | image_href) | ||
1082 | 391 | kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image( | ||
1083 | 392 | req, image_service, image_id) | ||
1084 | 393 | images = set([str(x['id']) for x in image_service.index(context)]) | ||
1085 | 394 | assert str(image_id) in images | ||
1086 | 395 | except Exception, e: | ||
1087 | 396 | msg = _("Cannot find requested image %(image_href)s: " | ||
1088 | 397 | "%(e)s") % locals() | ||
1089 | 398 | raise exc.HTTPBadRequest(explanation=msg) | ||
1090 | 399 | |||
1091 | 400 | personality = server_dict.get('personality') | ||
1092 | 401 | config_drive = server_dict.get('config_drive') | ||
1093 | 402 | |||
1094 | 403 | injected_files = [] | ||
1095 | 404 | if personality: | ||
1096 | 405 | injected_files = self._get_injected_files(personality) | ||
1097 | 406 | |||
1098 | 407 | sg_names = [] | ||
1099 | 408 | security_groups = server_dict.get('security_groups') | ||
1100 | 409 | if security_groups is not None: | ||
1101 | 410 | sg_names = [sg['name'] for sg in security_groups if sg.get('name')] | ||
1102 | 411 | if not sg_names: | ||
1103 | 412 | sg_names.append('default') | ||
1104 | 413 | |||
1105 | 414 | sg_names = list(set(sg_names)) | ||
1106 | 415 | |||
1107 | 416 | requested_networks = server_dict.get('networks') | ||
1108 | 417 | if requested_networks is not None: | ||
1109 | 418 | requested_networks = self._get_requested_networks( | ||
1110 | 419 | requested_networks) | ||
1111 | 420 | |||
1112 | 421 | try: | ||
1113 | 422 | flavor_id = self._flavor_id_from_req_data(body) | ||
1114 | 423 | except ValueError as error: | ||
1115 | 424 | msg = _("Invalid flavorRef provided.") | ||
1116 | 425 | raise exc.HTTPBadRequest(explanation=msg) | ||
1117 | 426 | |||
1118 | 427 | zone_blob = server_dict.get('blob') | ||
1119 | 428 | |||
1120 | 429 | # optional openstack extensions: | ||
1121 | 430 | key_name = server_dict.get('key_name') | ||
1122 | 431 | user_data = server_dict.get('user_data') | ||
1123 | 432 | self._validate_user_data(user_data) | ||
1124 | 433 | |||
1125 | 434 | availability_zone = server_dict.get('availability_zone') | ||
1126 | 435 | name = server_dict['name'] | ||
1127 | 436 | self._validate_server_name(name) | ||
1128 | 437 | name = name.strip() | ||
1129 | 438 | |||
1130 | 439 | block_device_mapping = self._get_block_device_mapping(server_dict) | ||
1131 | 440 | |||
1132 | 441 | # Only allow admins to specify their own reservation_ids | ||
1133 | 442 | # This is really meant to allow zones to work. | ||
1134 | 443 | reservation_id = server_dict.get('reservation_id') | ||
1135 | 444 | if all([reservation_id is not None, | ||
1136 | 445 | reservation_id != '', | ||
1137 | 446 | not context.is_admin]): | ||
1138 | 447 | reservation_id = None | ||
1139 | 448 | |||
1140 | 449 | ret_resv_id = server_dict.get('return_reservation_id', False) | ||
1141 | 450 | |||
1142 | 451 | min_count = server_dict.get('min_count') | ||
1143 | 452 | max_count = server_dict.get('max_count') | ||
1144 | 453 | # min_count and max_count are optional. If they exist, they come | ||
1145 | 454 | # in as strings. We want to default 'min_count' to 1, and default | ||
1146 | 455 | # 'max_count' to be 'min_count'. | ||
1147 | 456 | min_count = int(min_count) if min_count else 1 | ||
1148 | 457 | max_count = int(max_count) if max_count else min_count | ||
1149 | 458 | if min_count > max_count: | ||
1150 | 459 | min_count = max_count | ||
1151 | 460 | |||
1152 | 461 | try: | ||
1153 | 462 | inst_type = \ | ||
1154 | 463 | instance_types.get_instance_type_by_flavor_id(flavor_id) | ||
1155 | 464 | |||
1156 | 465 | (instances, resv_id) = self.compute_api.create(context, | ||
1157 | 466 | inst_type, | ||
1158 | 467 | image_id, | ||
1159 | 468 | kernel_id=kernel_id, | ||
1160 | 469 | ramdisk_id=ramdisk_id, | ||
1161 | 470 | display_name=name, | ||
1162 | 471 | display_description=name, | ||
1163 | 472 | key_name=key_name, | ||
1164 | 473 | metadata=server_dict.get('metadata', {}), | ||
1165 | 474 | access_ip_v4=server_dict.get('accessIPv4'), | ||
1166 | 475 | access_ip_v6=server_dict.get('accessIPv6'), | ||
1167 | 476 | injected_files=injected_files, | ||
1168 | 477 | admin_password=password, | ||
1169 | 478 | zone_blob=zone_blob, | ||
1170 | 479 | reservation_id=reservation_id, | ||
1171 | 480 | min_count=min_count, | ||
1172 | 481 | max_count=max_count, | ||
1173 | 482 | requested_networks=requested_networks, | ||
1174 | 483 | security_group=sg_names, | ||
1175 | 484 | user_data=user_data, | ||
1176 | 485 | availability_zone=availability_zone, | ||
1177 | 486 | config_drive=config_drive, | ||
1178 | 487 | block_device_mapping=block_device_mapping, | ||
1179 | 488 | wait_for_instances=not ret_resv_id) | ||
1180 | 489 | except quota.QuotaError as error: | ||
1181 | 490 | self._handle_quota_error(error) | ||
1182 | 491 | except exception.ImageNotFound as error: | ||
1183 | 492 | msg = _("Can not find requested image") | ||
1184 | 493 | raise exc.HTTPBadRequest(explanation=msg) | ||
1185 | 494 | except exception.FlavorNotFound as error: | ||
1186 | 495 | msg = _("Invalid flavorRef provided.") | ||
1187 | 496 | raise exc.HTTPBadRequest(explanation=msg) | ||
1188 | 497 | except exception.KeypairNotFound as error: | ||
1189 | 498 | msg = _("Invalid key_name provided.") | ||
1190 | 499 | raise exc.HTTPBadRequest(explanation=msg) | ||
1191 | 500 | except exception.SecurityGroupNotFound as error: | ||
1192 | 501 | raise exc.HTTPBadRequest(explanation=unicode(error)) | ||
1193 | 502 | except rpc_common.RemoteError as err: | ||
1194 | 503 | msg = "%(err_type)s: %(err_msg)s" % \ | ||
1195 | 504 | {'err_type': err.exc_type, 'err_msg': err.value} | ||
1196 | 505 | raise exc.HTTPBadRequest(explanation=msg) | ||
1197 | 506 | # Let the caller deal with unhandled exceptions. | ||
1198 | 507 | |||
1199 | 508 | # If the caller wanted a reservation_id, return it | ||
1200 | 509 | if ret_resv_id: | ||
1201 | 510 | return {'reservation_id': resv_id} | ||
1202 | 511 | |||
1203 | 512 | # instances is a list; only one server can be returned, so use the first | ||
1204 | 513 | instance = instances[0] | ||
1205 | 514 | if not instance.get('_is_precooked', False): | ||
1206 | 515 | instance['instance_type'] = inst_type | ||
1207 | 516 | instance['image_ref'] = image_href | ||
1208 | 517 | |||
1209 | 518 | server = self._build_view(req, instance, is_detail=True) | ||
1210 | 519 | if '_is_precooked' in server['server']: | ||
1211 | 520 | del server['server']['_is_precooked'] | ||
1212 | 521 | else: | ||
1213 | 522 | server['server']['adminPass'] = password | ||
1214 | 193 | return server | 523 | return server |
1215 | 194 | 524 | ||
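Inside `create()` above, `min_count` and `max_count` arrive as optional strings; `min_count` defaults to 1, `max_count` defaults to `min_count`, and an inverted pair is clamped. That defaulting logic, isolated as a sketch:

```python
def normalize_counts(min_count=None, max_count=None):
    """Apply the min_count/max_count defaulting used by create()."""
    # both values come in as strings when present
    min_count = int(min_count) if min_count else 1
    max_count = int(max_count) if max_count else min_count
    # clamp min down to max when the caller inverts them
    if min_count > max_count:
        min_count = max_count
    return min_count, max_count
```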
1216 | 195 | def _delete(self, context, id): | 525 | def _delete(self, context, id): |
1217 | @@ -212,7 +542,7 @@ | |||
1218 | 212 | 542 | ||
1219 | 213 | if 'name' in body['server']: | 543 | if 'name' in body['server']: |
1220 | 214 | name = body['server']['name'] | 544 | name = body['server']['name'] |
1222 | 215 | self.helper._validate_server_name(name) | 545 | self._validate_server_name(name) |
1223 | 216 | update_dict['display_name'] = name.strip() | 546 | update_dict['display_name'] = name.strip() |
1224 | 217 | 547 | ||
1225 | 218 | if 'accessIPv4' in body['server']: | 548 | if 'accessIPv4' in body['server']: |
1226 | @@ -284,17 +614,17 @@ | |||
1227 | 284 | 614 | ||
1228 | 285 | except KeyError as missing_key: | 615 | except KeyError as missing_key: |
1229 | 286 | msg = _("createBackup entity requires %s attribute") % missing_key | 616 | msg = _("createBackup entity requires %s attribute") % missing_key |
1231 | 287 | raise webob.exc.HTTPBadRequest(explanation=msg) | 617 | raise exc.HTTPBadRequest(explanation=msg) |
1232 | 288 | 618 | ||
1233 | 289 | except TypeError: | 619 | except TypeError: |
1234 | 290 | msg = _("Malformed createBackup entity") | 620 | msg = _("Malformed createBackup entity") |
1236 | 291 | raise webob.exc.HTTPBadRequest(explanation=msg) | 621 | raise exc.HTTPBadRequest(explanation=msg) |
1237 | 292 | 622 | ||
1238 | 293 | try: | 623 | try: |
1239 | 294 | rotation = int(rotation) | 624 | rotation = int(rotation) |
1240 | 295 | except ValueError: | 625 | except ValueError: |
1241 | 296 | msg = _("createBackup attribute 'rotation' must be an integer") | 626 | msg = _("createBackup attribute 'rotation' must be an integer") |
1243 | 297 | raise webob.exc.HTTPBadRequest(explanation=msg) | 627 | raise exc.HTTPBadRequest(explanation=msg) |
1244 | 298 | 628 | ||
1245 | 299 | # preserve link to server in image properties | 629 | # preserve link to server in image properties |
1246 | 300 | server_ref = os.path.join(req.application_url, | 630 | server_ref = os.path.join(req.application_url, |
1247 | @@ -309,7 +639,7 @@ | |||
1248 | 309 | props.update(metadata) | 639 | props.update(metadata) |
1249 | 310 | except ValueError: | 640 | except ValueError: |
1250 | 311 | msg = _("Invalid metadata") | 641 | msg = _("Invalid metadata") |
1252 | 312 | raise webob.exc.HTTPBadRequest(explanation=msg) | 642 | raise exc.HTTPBadRequest(explanation=msg) |
1253 | 313 | 643 | ||
1254 | 314 | image = self.compute_api.backup(context, | 644 | image = self.compute_api.backup(context, |
1255 | 315 | instance_id, | 645 | instance_id, |
1256 | @@ -687,7 +1017,7 @@ | |||
1257 | 687 | 1017 | ||
1258 | 688 | def _get_server_admin_password(self, server): | 1018 | def _get_server_admin_password(self, server): |
1259 | 689 | """ Determine the admin password for a server on creation """ | 1019 | """ Determine the admin password for a server on creation """ |
1261 | 690 | return self.helper._get_server_admin_password_old_style(server) | 1020 | return self._get_server_admin_password_old_style(server) |
1262 | 691 | 1021 | ||
1263 | 692 | def _get_server_search_options(self): | 1022 | def _get_server_search_options(self): |
1264 | 693 | """Return server search options allowed by non-admin""" | 1023 | """Return server search options allowed by non-admin""" |
1265 | @@ -873,11 +1203,11 @@ | |||
1266 | 873 | 1203 | ||
1267 | 874 | except KeyError: | 1204 | except KeyError: |
1268 | 875 | msg = _("createImage entity requires name attribute") | 1205 | msg = _("createImage entity requires name attribute") |
1270 | 876 | raise webob.exc.HTTPBadRequest(explanation=msg) | 1206 | raise exc.HTTPBadRequest(explanation=msg) |
1271 | 877 | 1207 | ||
1272 | 878 | except TypeError: | 1208 | except TypeError: |
1273 | 879 | msg = _("Malformed createImage entity") | 1209 | msg = _("Malformed createImage entity") |
1275 | 880 | raise webob.exc.HTTPBadRequest(explanation=msg) | 1210 | raise exc.HTTPBadRequest(explanation=msg) |
1276 | 881 | 1211 | ||
1277 | 882 | # preserve link to server in image properties | 1212 | # preserve link to server in image properties |
1278 | 883 | server_ref = os.path.join(req.application_url, | 1213 | server_ref = os.path.join(req.application_url, |
1279 | @@ -892,7 +1222,7 @@ | |||
1280 | 892 | props.update(metadata) | 1222 | props.update(metadata) |
1281 | 893 | except ValueError: | 1223 | except ValueError: |
1282 | 894 | msg = _("Invalid metadata") | 1224 | msg = _("Invalid metadata") |
1284 | 895 | raise webob.exc.HTTPBadRequest(explanation=msg) | 1225 | raise exc.HTTPBadRequest(explanation=msg) |
1285 | 896 | 1226 | ||
1286 | 897 | image = self.compute_api.snapshot(context, | 1227 | image = self.compute_api.snapshot(context, |
1287 | 898 | instance_id, | 1228 | instance_id, |
1288 | @@ -912,7 +1242,7 @@ | |||
1289 | 912 | 1242 | ||
1290 | 913 | def _get_server_admin_password(self, server): | 1243 | def _get_server_admin_password(self, server): |
1291 | 914 | """ Determine the admin password for a server on creation """ | 1244 | """ Determine the admin password for a server on creation """ |
1293 | 915 | return self.helper._get_server_admin_password_new_style(server) | 1245 | return self._get_server_admin_password_new_style(server) |
1294 | 916 | 1246 | ||
1295 | 917 | def _get_server_search_options(self): | 1247 | def _get_server_search_options(self): |
1296 | 918 | """Return server search options allowed by non-admin""" | 1248 | """Return server search options allowed by non-admin""" |
1297 | @@ -1057,6 +1387,227 @@ | |||
1298 | 1057 | return self._to_xml(server) | 1387 | return self._to_xml(server) |
1299 | 1058 | 1388 | ||
1300 | 1059 | 1389 | ||
1301 | 1390 | class ServerXMLDeserializer(wsgi.XMLDeserializer): | ||
1302 | 1391 | """ | ||
1303 | 1392 | Deserializer to handle xml-formatted server create requests. | ||
1304 | 1393 | |||
1305 | 1394 | Handles standard server attributes as well as optional metadata | ||
1306 | 1395 | and personality attributes | ||
1307 | 1396 | """ | ||
1308 | 1397 | |||
1309 | 1398 | metadata_deserializer = common.MetadataXMLDeserializer() | ||
1310 | 1399 | |||
1311 | 1400 | def create(self, string): | ||
1312 | 1401 | """Deserialize an xml-formatted server create request""" | ||
1313 | 1402 | dom = minidom.parseString(string) | ||
1314 | 1403 | server = self._extract_server(dom) | ||
1315 | 1404 | return {'body': {'server': server}} | ||
1316 | 1405 | |||
1317 | 1406 | def _extract_server(self, node): | ||
1318 | 1407 | """Marshal the server attribute of a parsed request""" | ||
1319 | 1408 | server = {} | ||
1320 | 1409 | server_node = self.find_first_child_named(node, 'server') | ||
1321 | 1410 | |||
1322 | 1411 | attributes = ["name", "imageId", "flavorId", "adminPass"] | ||
1323 | 1412 | for attr in attributes: | ||
1324 | 1413 | if server_node.getAttribute(attr): | ||
1325 | 1414 | server[attr] = server_node.getAttribute(attr) | ||
1326 | 1415 | |||
1327 | 1416 | metadata_node = self.find_first_child_named(server_node, "metadata") | ||
1328 | 1417 | server["metadata"] = self.metadata_deserializer.extract_metadata( | ||
1329 | 1418 | metadata_node) | ||
1330 | 1419 | |||
1331 | 1420 | server["personality"] = self._extract_personality(server_node) | ||
1332 | 1421 | |||
1333 | 1422 | return server | ||
1334 | 1423 | |||
1335 | 1424 | def _extract_personality(self, server_node): | ||
1336 | 1425 | """Marshal the personality attribute of a parsed request""" | ||
1337 | 1426 | node = self.find_first_child_named(server_node, "personality") | ||
1338 | 1427 | personality = [] | ||
1339 | 1428 | if node is not None: | ||
1340 | 1429 | for file_node in self.find_children_named(node, "file"): | ||
1341 | 1430 | item = {} | ||
1342 | 1431 | if file_node.hasAttribute("path"): | ||
1343 | 1432 | item["path"] = file_node.getAttribute("path") | ||
1344 | 1433 | item["contents"] = self.extract_text(file_node) | ||
1345 | 1434 | personality.append(item) | ||
1346 | 1435 | return personality | ||
1347 | 1436 | |||
1348 | 1437 | |||
1349 | 1438 | class ServerXMLDeserializerV11(wsgi.MetadataXMLDeserializer): | ||
1350 | 1439 | """ | ||
1351 | 1440 | Deserializer to handle xml-formatted server create requests. | ||
1352 | 1441 | |||
1353 | 1442 | Handles standard server attributes as well as optional metadata | ||
1354 | 1443 | and personality attributes | ||
1355 | 1444 | """ | ||
1356 | 1445 | |||
1357 | 1446 | metadata_deserializer = common.MetadataXMLDeserializer() | ||
1358 | 1447 | |||
1359 | 1448 | def action(self, string): | ||
1360 | 1449 | dom = minidom.parseString(string) | ||
1361 | 1450 | action_node = dom.childNodes[0] | ||
1362 | 1451 | action_name = action_node.tagName | ||
1363 | 1452 | |||
1364 | 1453 | action_deserializer = { | ||
1365 | 1454 | 'createImage': self._action_create_image, | ||
1366 | 1455 | 'createBackup': self._action_create_backup, | ||
1367 | 1456 | 'changePassword': self._action_change_password, | ||
1368 | 1457 | 'reboot': self._action_reboot, | ||
1369 | 1458 | 'rebuild': self._action_rebuild, | ||
1370 | 1459 | 'resize': self._action_resize, | ||
1371 | 1460 | 'confirmResize': self._action_confirm_resize, | ||
1372 | 1461 | 'revertResize': self._action_revert_resize, | ||
1373 | 1462 | }.get(action_name, self.default) | ||
1374 | 1463 | |||
1375 | 1464 | action_data = action_deserializer(action_node) | ||
1376 | 1465 | |||
1377 | 1466 | return {'body': {action_name: action_data}} | ||
1378 | 1467 | |||
1379 | 1468 | def _action_create_image(self, node): | ||
1380 | 1469 | return self._deserialize_image_action(node, ('name',)) | ||
1381 | 1470 | |||
1382 | 1471 | def _action_create_backup(self, node): | ||
1383 | 1472 | attributes = ('name', 'backup_type', 'rotation') | ||
1384 | 1473 | return self._deserialize_image_action(node, attributes) | ||
1385 | 1474 | |||
1386 | 1475 | def _action_change_password(self, node): | ||
1387 | 1476 | if not node.hasAttribute("adminPass"): | ||
1388 | 1477 | raise AttributeError("No adminPass was specified in request") | ||
1389 | 1478 | return {"adminPass": node.getAttribute("adminPass")} | ||
1390 | 1479 | |||
1391 | 1480 | def _action_reboot(self, node): | ||
1392 | 1481 | if not node.hasAttribute("type"): | ||
1393 | 1482 | raise AttributeError("No reboot type was specified in request") | ||
1394 | 1483 | return {"type": node.getAttribute("type")} | ||
1395 | 1484 | |||
1396 | 1485 | def _action_rebuild(self, node): | ||
1397 | 1486 | rebuild = {} | ||
1398 | 1487 | if node.hasAttribute("name"): | ||
1399 | 1488 | rebuild['name'] = node.getAttribute("name") | ||
1400 | 1489 | |||
1401 | 1490 | metadata_node = self.find_first_child_named(node, "metadata") | ||
1402 | 1491 | if metadata_node is not None: | ||
1403 | 1492 | rebuild["metadata"] = self.extract_metadata(metadata_node) | ||
1404 | 1493 | |||
1405 | 1494 | personality = self._extract_personality(node) | ||
1406 | 1495 | if personality is not None: | ||
1407 | 1496 | rebuild["personality"] = personality | ||
1408 | 1497 | |||
1409 | 1498 | if not node.hasAttribute("imageRef"): | ||
1410 | 1499 | raise AttributeError("No imageRef was specified in request") | ||
1411 | 1500 | rebuild["imageRef"] = node.getAttribute("imageRef") | ||
1412 | 1501 | |||
1413 | 1502 | return rebuild | ||
1414 | 1503 | |||
1415 | 1504 | def _action_resize(self, node): | ||
1416 | 1505 | if not node.hasAttribute("flavorRef"): | ||
1417 | 1506 | raise AttributeError("No flavorRef was specified in request") | ||
1418 | 1507 | return {"flavorRef": node.getAttribute("flavorRef")} | ||
1419 | 1508 | |||
1420 | 1509 | def _action_confirm_resize(self, node): | ||
1421 | 1510 | return None | ||
1422 | 1511 | |||
1423 | 1512 | def _action_revert_resize(self, node): | ||
1424 | 1513 | return None | ||
1425 | 1514 | |||
1426 | 1515 | def _deserialize_image_action(self, node, allowed_attributes): | ||
1427 | 1516 | data = {} | ||
1428 | 1517 | for attribute in allowed_attributes: | ||
1429 | 1518 | value = node.getAttribute(attribute) | ||
1430 | 1519 | if value: | ||
1431 | 1520 | data[attribute] = value | ||
1432 | 1521 | metadata_node = self.find_first_child_named(node, 'metadata') | ||
1433 | 1522 | if metadata_node is not None: | ||
1434 | 1523 | metadata = self.metadata_deserializer.extract_metadata( | ||
1435 | 1524 | metadata_node) | ||
1436 | 1525 | data['metadata'] = metadata | ||
1437 | 1526 | return data | ||
1438 | 1527 | |||
1439 | 1528 | def create(self, string): | ||
1440 | 1529 | """Deserialize an xml-formatted server create request""" | ||
1441 | 1530 | dom = minidom.parseString(string) | ||
1442 | 1531 | server = self._extract_server(dom) | ||
1443 | 1532 | return {'body': {'server': server}} | ||
1444 | 1533 | |||
1445 | 1534 | def _extract_server(self, node): | ||
1446 | 1535 | """Marshal the server attribute of a parsed request""" | ||
1447 | 1536 | server = {} | ||
1448 | 1537 | server_node = self.find_first_child_named(node, 'server') | ||
1449 | 1538 | |||
1450 | 1539 | attributes = ["name", "imageRef", "flavorRef", "adminPass", | ||
1451 | 1540 | "accessIPv4", "accessIPv6"] | ||
1452 | 1541 | for attr in attributes: | ||
1453 | 1542 | if server_node.getAttribute(attr): | ||
1454 | 1543 | server[attr] = server_node.getAttribute(attr) | ||
1455 | 1544 | |||
1456 | 1545 | metadata_node = self.find_first_child_named(server_node, "metadata") | ||
1457 | 1546 | if metadata_node is not None: | ||
1458 | 1547 | server["metadata"] = self.extract_metadata(metadata_node) | ||
1459 | 1548 | |||
1460 | 1549 | personality = self._extract_personality(server_node) | ||
1461 | 1550 | if personality is not None: | ||
1462 | 1551 | server["personality"] = personality | ||
1463 | 1552 | |||
1464 | 1553 | networks = self._extract_networks(server_node) | ||
1465 | 1554 | if networks is not None: | ||
1466 | 1555 | server["networks"] = networks | ||
1467 | 1556 | |||
1468 | 1557 | security_groups = self._extract_security_groups(server_node) | ||
1469 | 1558 | if security_groups is not None: | ||
1470 | 1559 | server["security_groups"] = security_groups | ||
1471 | 1560 | |||
1472 | 1561 | return server | ||
1473 | 1562 | |||
1474 | 1563 | def _extract_personality(self, server_node): | ||
1475 | 1564 | """Marshal the personality attribute of a parsed request""" | ||
1476 | 1565 | node = self.find_first_child_named(server_node, "personality") | ||
1477 | 1566 | if node is not None: | ||
1478 | 1567 | personality = [] | ||
1479 | 1568 | for file_node in self.find_children_named(node, "file"): | ||
1480 | 1569 | item = {} | ||
1481 | 1570 | if file_node.hasAttribute("path"): | ||
1482 | 1571 | item["path"] = file_node.getAttribute("path") | ||
1483 | 1572 | item["contents"] = self.extract_text(file_node) | ||
1484 | 1573 | personality.append(item) | ||
1485 | 1574 | return personality | ||
1486 | 1575 | else: | ||
1487 | 1576 | return None | ||
1488 | 1577 | |||
1489 | 1578 | def _extract_networks(self, server_node): | ||
1490 | 1579 | """Marshal the networks attribute of a parsed request""" | ||
1491 | 1580 | node = self.find_first_child_named(server_node, "networks") | ||
1492 | 1581 | if node is not None: | ||
1493 | 1582 | networks = [] | ||
1494 | 1583 | for network_node in self.find_children_named(node, | ||
1495 | 1584 | "network"): | ||
1496 | 1585 | item = {} | ||
1497 | 1586 | if network_node.hasAttribute("uuid"): | ||
1498 | 1587 | item["uuid"] = network_node.getAttribute("uuid") | ||
1499 | 1588 | if network_node.hasAttribute("fixed_ip"): | ||
1500 | 1589 | item["fixed_ip"] = network_node.getAttribute("fixed_ip") | ||
1501 | 1590 | networks.append(item) | ||
1502 | 1591 | return networks | ||
1503 | 1592 | else: | ||
1504 | 1593 | return None | ||
1505 | 1594 | |||
1506 | 1595 | def _extract_security_groups(self, server_node): | ||
1507 | 1596 | """Marshal the security_groups attribute of a parsed request""" | ||
1508 | 1597 | node = self.find_first_child_named(server_node, "security_groups") | ||
1509 | 1598 | if node is not None: | ||
1510 | 1599 | security_groups = [] | ||
1511 | 1600 | for sg_node in self.find_children_named(node, "security_group"): | ||
1512 | 1601 | item = {} | ||
1513 | 1602 | name_node = self.find_first_child_named(sg_node, "name") | ||
1514 | 1603 | if name_node: | ||
1515 | 1604 | item["name"] = self.extract_text(name_node) | ||
1516 | 1605 | security_groups.append(item) | ||
1517 | 1606 | return security_groups | ||
1518 | 1607 | else: | ||
1519 | 1608 | return None | ||
1520 | 1609 | |||
1521 | 1610 | |||
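The V1.1 deserializer above pulls server attributes off the `<server>` element and builds a networks list from child `<network>` nodes. A standalone sketch of that extraction using plain `minidom` calls in place of the `wsgi.MetadataXMLDeserializer` helpers (the attribute list matches the one in `_extract_server`):

```python
from xml.dom import minidom


def extract_server(xml_string):
    """Extract server attributes and networks from a create request body."""
    dom = minidom.parseString(xml_string)
    server_node = dom.getElementsByTagName('server')[0]
    server = {}
    for attr in ("name", "imageRef", "flavorRef", "adminPass",
                 "accessIPv4", "accessIPv6"):
        if server_node.getAttribute(attr):
            server[attr] = server_node.getAttribute(attr)
    networks = []
    for net in server_node.getElementsByTagName('network'):
        item = {'uuid': net.getAttribute('uuid')}
        if net.hasAttribute('fixed_ip'):
            item['fixed_ip'] = net.getAttribute('fixed_ip')
        networks.append(item)
    if networks:
        server['networks'] = networks
    return server
```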
1522 | 1060 | def create_resource(version='1.0'): | 1611 | def create_resource(version='1.0'): |
1523 | 1061 | controller = { | 1612 | controller = { |
1524 | 1062 | '1.0': ControllerV10, | 1613 | '1.0': ControllerV10, |
1525 | @@ -1096,8 +1647,8 @@ | |||
1526 | 1096 | } | 1647 | } |
1527 | 1097 | 1648 | ||
1528 | 1098 | xml_deserializer = { | 1649 | xml_deserializer = { |
1531 | 1099 | '1.0': helper.ServerXMLDeserializer(), | 1650 | '1.0': ServerXMLDeserializer(), |
1532 | 1100 | '1.1': helper.ServerXMLDeserializerV11(), | 1651 | '1.1': ServerXMLDeserializerV11(), |
1533 | 1101 | }[version] | 1652 | }[version] |
1534 | 1102 | 1653 | ||
1535 | 1103 | body_deserializers = { | 1654 | body_deserializers = { |
1536 | 1104 | 1655 | ||
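The `create_resource()` hunk above keeps Nova's version-keyed-dictionary dispatch, now pointing at the deserializers that moved into `servers.py`. The pattern itself is compact enough to sketch with hypothetical classes (an unknown version fails fast with `KeyError`, exactly as the dict lookup does in the patch):

```python
class ServerXMLDeserializer:
    """Stand-in for the 1.0 deserializer."""
    def deserialize(self, body):
        return {"version": "1.0", "body": body}

class ServerXMLDeserializerV11:
    """Stand-in for the 1.1 deserializer."""
    def deserialize(self, body):
        return {"version": "1.1", "body": body}

def create_xml_deserializer(version='1.0'):
    # Dict-keyed dispatch: adding a new API version means adding one
    # entry here; an unsupported version raises KeyError immediately.
    return {
        '1.0': ServerXMLDeserializer(),
        '1.1': ServerXMLDeserializerV11(),
    }[version]

parsed = create_xml_deserializer('1.1').deserialize('<server/>')
```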
1537 | === modified file 'nova/api/openstack/zones.py' | |||
1538 | --- nova/api/openstack/zones.py 2011-08-31 18:54:30 +0000 | |||
1539 | +++ nova/api/openstack/zones.py 2011-09-23 07:08:19 +0000 | |||
1540 | @@ -25,8 +25,8 @@ | |||
1541 | 25 | from nova.compute import api as compute | 25 | from nova.compute import api as compute |
1542 | 26 | from nova.scheduler import api | 26 | from nova.scheduler import api |
1543 | 27 | 27 | ||
1544 | 28 | from nova.api.openstack import create_instance_helper as helper | ||
1545 | 29 | from nova.api.openstack import common | 28 | from nova.api.openstack import common |
1546 | 29 | from nova.api.openstack import servers | ||
1547 | 30 | from nova.api.openstack import wsgi | 30 | from nova.api.openstack import wsgi |
1548 | 31 | 31 | ||
1549 | 32 | 32 | ||
1550 | @@ -67,7 +67,6 @@ | |||
1551 | 67 | 67 | ||
1552 | 68 | def __init__(self): | 68 | def __init__(self): |
1553 | 69 | self.compute_api = compute.API() | 69 | self.compute_api = compute.API() |
1554 | 70 | self.helper = helper.CreateInstanceHelper(self) | ||
1555 | 71 | 70 | ||
1556 | 72 | def index(self, req): | 71 | def index(self, req): |
1557 | 73 | """Return all zones in brief""" | 72 | """Return all zones in brief""" |
1558 | @@ -120,18 +119,6 @@ | |||
1559 | 120 | zone = api.zone_update(context, zone_id, body["zone"]) | 119 | zone = api.zone_update(context, zone_id, body["zone"]) |
1560 | 121 | return dict(zone=_scrub_zone(zone)) | 120 | return dict(zone=_scrub_zone(zone)) |
1561 | 122 | 121 | ||
1562 | 123 | def boot(self, req, body): | ||
1563 | 124 | """Creates a new server for a given user while being Zone aware. | ||
1564 | 125 | |||
1565 | 126 | Returns a reservation ID (a UUID). | ||
1566 | 127 | """ | ||
1567 | 128 | result = None | ||
1568 | 129 | extra_values, result = self.helper.create_instance(req, body, | ||
1569 | 130 | self.compute_api.create_all_at_once) | ||
1570 | 131 | |||
1571 | 132 | reservation_id = result | ||
1572 | 133 | return {'reservation_id': reservation_id} | ||
1573 | 134 | |||
1574 | 135 | @check_encryption_key | 122 | @check_encryption_key |
1575 | 136 | def select(self, req, body): | 123 | def select(self, req, body): |
1576 | 137 | """Returns a weighted list of costs to create instances | 124 | """Returns a weighted list of costs to create instances |
1577 | @@ -155,29 +142,10 @@ | |||
1578 | 155 | blob=cipher_text)) | 142 | blob=cipher_text)) |
1579 | 156 | return cooked | 143 | return cooked |
1580 | 157 | 144 | ||
1581 | 158 | def _image_ref_from_req_data(self, data): | ||
1582 | 159 | return data['server']['imageId'] | ||
1583 | 160 | |||
1584 | 161 | def _flavor_id_from_req_data(self, data): | ||
1585 | 162 | return data['server']['flavorId'] | ||
1586 | 163 | |||
1587 | 164 | def _get_server_admin_password(self, server): | ||
1588 | 165 | """ Determine the admin password for a server on creation """ | ||
1589 | 166 | return self.helper._get_server_admin_password_old_style(server) | ||
1590 | 167 | |||
1591 | 168 | 145 | ||
1592 | 169 | class ControllerV11(Controller): | 146 | class ControllerV11(Controller): |
1593 | 170 | """Controller for 1.1 Zone resources.""" | 147 | """Controller for 1.1 Zone resources.""" |
1604 | 171 | 148 | pass | |
1595 | 172 | def _get_server_admin_password(self, server): | ||
1596 | 173 | """ Determine the admin password for a server on creation """ | ||
1597 | 174 | return self.helper._get_server_admin_password_new_style(server) | ||
1598 | 175 | |||
1599 | 176 | def _image_ref_from_req_data(self, data): | ||
1600 | 177 | return data['server']['imageRef'] | ||
1601 | 178 | |||
1602 | 179 | def _flavor_id_from_req_data(self, data): | ||
1603 | 180 | return data['server']['flavorRef'] | ||
1605 | 181 | 149 | ||
1606 | 182 | 150 | ||
1607 | 183 | def create_resource(version): | 151 | def create_resource(version): |
1608 | @@ -199,7 +167,7 @@ | |||
1609 | 199 | serializer = wsgi.ResponseSerializer(body_serializers) | 167 | serializer = wsgi.ResponseSerializer(body_serializers) |
1610 | 200 | 168 | ||
1611 | 201 | body_deserializers = { | 169 | body_deserializers = { |
1613 | 202 | 'application/xml': helper.ServerXMLDeserializer(), | 170 | 'application/xml': servers.ServerXMLDeserializer(), |
1614 | 203 | } | 171 | } |
1615 | 204 | deserializer = wsgi.RequestDeserializer(body_deserializers) | 172 | deserializer = wsgi.RequestDeserializer(body_deserializers) |
1616 | 205 | 173 | ||
1617 | 206 | 174 | ||
1618 | === modified file 'nova/compute/api.py' | |||
1619 | --- nova/compute/api.py 2011-09-21 21:00:53 +0000 | |||
1620 | +++ nova/compute/api.py 2011-09-23 07:08:19 +0000 | |||
1621 | @@ -74,6 +74,11 @@ | |||
1622 | 74 | return display_name.translate(table, deletions) | 74 | return display_name.translate(table, deletions) |
1623 | 75 | 75 | ||
1624 | 76 | 76 | ||
1625 | 77 | def generate_default_display_name(instance): | ||
1626 | 78 | """Generate a default display name""" | ||
1627 | 79 | return 'Server %s' % instance['id'] | ||
1628 | 80 | |||
1629 | 81 | |||
1630 | 77 | def _is_able_to_shutdown(instance, instance_id): | 82 | def _is_able_to_shutdown(instance, instance_id): |
1631 | 78 | vm_state = instance["vm_state"] | 83 | vm_state = instance["vm_state"] |
1632 | 79 | task_state = instance["task_state"] | 84 | task_state = instance["task_state"] |
1633 | @@ -176,17 +181,27 @@ | |||
1634 | 176 | 181 | ||
1635 | 177 | self.network_api.validate_networks(context, requested_networks) | 182 | self.network_api.validate_networks(context, requested_networks) |
1636 | 178 | 183 | ||
1646 | 179 | def _check_create_parameters(self, context, instance_type, | 184 | def _create_instance(self, context, instance_type, |
1647 | 180 | image_href, kernel_id=None, ramdisk_id=None, | 185 | image_href, kernel_id, ramdisk_id, |
1648 | 181 | min_count=None, max_count=None, | 186 | min_count, max_count, |
1649 | 182 | display_name='', display_description='', | 187 | display_name, display_description, |
1650 | 183 | key_name=None, key_data=None, security_group='default', | 188 | key_name, key_data, security_group, |
1651 | 184 | availability_zone=None, user_data=None, metadata=None, | 189 | availability_zone, user_data, metadata, |
1652 | 185 | injected_files=None, admin_password=None, zone_blob=None, | 190 | injected_files, admin_password, zone_blob, |
1653 | 186 | reservation_id=None, access_ip_v4=None, access_ip_v6=None, | 191 | reservation_id, access_ip_v4, access_ip_v6, |
1654 | 187 | requested_networks=None, config_drive=None,): | 192 | requested_networks, config_drive, |
1655 | 193 | block_device_mapping, | ||
1656 | 194 | wait_for_instances): | ||
1657 | 188 | """Verify all the input parameters regardless of the provisioning | 195 | """Verify all the input parameters regardless of the provisioning |
1659 | 189 | strategy being performed.""" | 196 | strategy being performed and schedule the instance(s) for |
1660 | 197 | creation.""" | ||
1661 | 198 | |||
1662 | 199 | if not metadata: | ||
1663 | 200 | metadata = {} | ||
1664 | 201 | if not display_description: | ||
1665 | 202 | display_description = '' | ||
1666 | 203 | if not security_group: | ||
1667 | 204 | security_group = 'default' | ||
1668 | 190 | 205 | ||
1669 | 191 | if not instance_type: | 206 | if not instance_type: |
1670 | 192 | instance_type = instance_types.get_default_instance_type() | 207 | instance_type = instance_types.get_default_instance_type() |
1671 | @@ -197,6 +212,8 @@ | |||
1672 | 197 | if not metadata: | 212 | if not metadata: |
1673 | 198 | metadata = {} | 213 | metadata = {} |
1674 | 199 | 214 | ||
1675 | 215 | block_device_mapping = block_device_mapping or [] | ||
1676 | 216 | |||
1677 | 200 | num_instances = quota.allowed_instances(context, max_count, | 217 | num_instances = quota.allowed_instances(context, max_count, |
1678 | 201 | instance_type) | 218 | instance_type) |
1679 | 202 | if num_instances < min_count: | 219 | if num_instances < min_count: |
1680 | @@ -297,7 +314,28 @@ | |||
1681 | 297 | 'vm_mode': vm_mode, | 314 | 'vm_mode': vm_mode, |
1682 | 298 | 'root_device_name': root_device_name} | 315 | 'root_device_name': root_device_name} |
1683 | 299 | 316 | ||
1685 | 300 | return (num_instances, base_options, image) | 317 | LOG.debug(_("Going to run %s instances...") % num_instances) |
1686 | 318 | |||
1687 | 319 | if wait_for_instances: | ||
1688 | 320 | rpc_method = rpc.call | ||
1689 | 321 | else: | ||
1690 | 322 | rpc_method = rpc.cast | ||
1691 | 323 | |||
1692 | 324 | # TODO(comstud): We should use rpc.multicall when we can | ||
1693 | 325 | # retrieve the full instance dictionary from the scheduler. | ||
1694 | 326 | # Otherwise, we could exceed the AMQP max message size limit. | ||
1695 | 327 | # This would require the schedulers' schedule_run_instances | ||
1696 | 328 | # methods to return an iterator vs a list. | ||
1697 | 329 | instances = self._schedule_run_instance( | ||
1698 | 330 | rpc_method, | ||
1699 | 331 | context, base_options, | ||
1700 | 332 | instance_type, zone_blob, | ||
1701 | 333 | availability_zone, injected_files, | ||
1702 | 334 | admin_password, image, | ||
1703 | 335 | num_instances, requested_networks, | ||
1704 | 336 | block_device_mapping, security_group) | ||
1705 | 337 | |||
1706 | 338 | return (instances, reservation_id) | ||
1707 | 301 | 339 | ||
1708 | 302 | @staticmethod | 340 | @staticmethod |
1709 | 303 | def _volume_size(instance_type, virtual_name): | 341 | def _volume_size(instance_type, virtual_name): |
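The key behavioral change in the hunk above is that `_create_instance` picks `rpc.call` (blocking, returns the scheduler's result) when `wait_for_instances` is true and `rpc.cast` (fire-and-forget, returns nothing) otherwise, then invokes whichever was chosen through one uniform code path. A sketch of that selection, with trivial stand-ins for nova's rpc functions (the stubs and their return shapes are assumptions, not nova's real wire format):

```python
def rpc_call(topic, msg):
    # Stand-in for nova.rpc.call: blocks and returns the remote result.
    return {"topic": topic, "method": msg["method"], "instances": [{"id": 1}]}

def rpc_cast(topic, msg):
    # Stand-in for nova.rpc.cast: fire-and-forget, no return value.
    return None

def schedule_run_instance(request_spec, wait_for_instances):
    """Select the RPC style once, then send the same message either way."""
    rpc_method = rpc_call if wait_for_instances else rpc_cast
    return rpc_method("scheduler",
                      {"method": "run_instance",
                       "args": {"request_spec": request_spec}})

blocking = schedule_run_instance({"num_instances": 1}, True)
fire_and_forget = schedule_run_instance({"num_instances": 1}, False)
```

The TODO in the hunk notes the limitation this works around: a plain `call` must fit the whole reply in one AMQP message, which is why the scheduler returns minimal instance dicts rather than full ones.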
1710 | @@ -393,10 +431,8 @@ | |||
1711 | 393 | including any related table updates (such as security group, | 431 | including any related table updates (such as security group, |
1712 | 394 | etc). | 432 | etc). |
1713 | 395 | 433 | ||
1718 | 396 | This will called by create() in the majority of situations, | 434 | This is called by the scheduler after a location for the |
1719 | 397 | but create_all_at_once() style Schedulers may initiate the call. | 435 | instance has been determined. |
1716 | 398 | If you are changing this method, be sure to update both | ||
1717 | 399 | call paths. | ||
1720 | 400 | """ | 436 | """ |
1721 | 401 | elevated = context.elevated() | 437 | elevated = context.elevated() |
1722 | 402 | if security_group is None: | 438 | if security_group is None: |
1723 | @@ -433,7 +469,7 @@ | |||
1724 | 433 | updates = {} | 469 | updates = {} |
1725 | 434 | if (not hasattr(instance, 'display_name') or | 470 | if (not hasattr(instance, 'display_name') or |
1726 | 435 | instance.display_name is None): | 471 | instance.display_name is None): |
1728 | 436 | updates['display_name'] = "Server %s" % instance_id | 472 | updates['display_name'] = generate_default_display_name(instance) |
1729 | 437 | instance['display_name'] = updates['display_name'] | 473 | instance['display_name'] = updates['display_name'] |
1730 | 438 | updates['hostname'] = self.hostname_factory(instance) | 474 | updates['hostname'] = self.hostname_factory(instance) |
1731 | 439 | updates['vm_state'] = vm_states.BUILDING | 475 | updates['vm_state'] = vm_states.BUILDING |
1732 | @@ -442,21 +478,23 @@ | |||
1733 | 442 | instance = self.update(context, instance_id, **updates) | 478 | instance = self.update(context, instance_id, **updates) |
1734 | 443 | return instance | 479 | return instance |
1735 | 444 | 480 | ||
1743 | 445 | def _ask_scheduler_to_create_instance(self, context, base_options, | 481 | def _schedule_run_instance(self, |
1744 | 446 | instance_type, zone_blob, | 482 | rpc_method, |
1745 | 447 | availability_zone, injected_files, | 483 | context, base_options, |
1746 | 448 | admin_password, image, | 484 | instance_type, zone_blob, |
1747 | 449 | instance_id=None, num_instances=1, | 485 | availability_zone, injected_files, |
1748 | 450 | requested_networks=None): | 486 | admin_password, image, |
1749 | 451 | """Send the run_instance request to the schedulers for processing.""" | 487 | num_instances, |
1750 | 488 | requested_networks, | ||
1751 | 489 | block_device_mapping, | ||
1752 | 490 | security_group): | ||
1753 | 491 | """Send a run_instance request to the schedulers for processing.""" | ||
1754 | 492 | |||
1755 | 452 | pid = context.project_id | 493 | pid = context.project_id |
1756 | 453 | uid = context.user_id | 494 | uid = context.user_id |
1763 | 454 | if instance_id: | 495 | |
1764 | 455 | LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's" | 496 | LOG.debug(_("Sending create to scheduler for %(pid)s/%(uid)s's") % |
1765 | 456 | " instance %(instance_id)s (single-shot)") % locals()) | 497 | locals()) |
1760 | 457 | else: | ||
1761 | 458 | LOG.debug(_("Casting to scheduler for %(pid)s/%(uid)s's" | ||
1762 | 459 | " (all-at-once)") % locals()) | ||
1766 | 460 | 498 | ||
1767 | 461 | request_spec = { | 499 | request_spec = { |
1768 | 462 | 'image': image, | 500 | 'image': image, |
1769 | @@ -465,82 +503,41 @@ | |||
1770 | 465 | 'filter': None, | 503 | 'filter': None, |
1771 | 466 | 'blob': zone_blob, | 504 | 'blob': zone_blob, |
1772 | 467 | 'num_instances': num_instances, | 505 | 'num_instances': num_instances, |
1773 | 506 | 'block_device_mapping': block_device_mapping, | ||
1774 | 507 | 'security_group': security_group, | ||
1775 | 468 | } | 508 | } |
1776 | 469 | 509 | ||
1824 | 470 | rpc.cast(context, | 510 | return rpc_method(context, |
1825 | 471 | FLAGS.scheduler_topic, | 511 | FLAGS.scheduler_topic, |
1826 | 472 | {"method": "run_instance", | 512 | {"method": "run_instance", |
1827 | 473 | "args": {"topic": FLAGS.compute_topic, | 513 | "args": {"topic": FLAGS.compute_topic, |
1828 | 474 | "instance_id": instance_id, | 514 | "request_spec": request_spec, |
1829 | 475 | "request_spec": request_spec, | 515 | "admin_password": admin_password, |
1830 | 476 | "availability_zone": availability_zone, | 516 | "injected_files": injected_files, |
1831 | 477 | "admin_password": admin_password, | 517 | "requested_networks": requested_networks}}) |
1785 | 478 | "injected_files": injected_files, | ||
1786 | 479 | "requested_networks": requested_networks}}) | ||
1787 | 480 | |||
1788 | 481 | def create_all_at_once(self, context, instance_type, | ||
1789 | 482 | image_href, kernel_id=None, ramdisk_id=None, | ||
1790 | 483 | min_count=None, max_count=None, | ||
1791 | 484 | display_name='', display_description='', | ||
1792 | 485 | key_name=None, key_data=None, security_group='default', | ||
1793 | 486 | availability_zone=None, user_data=None, metadata=None, | ||
1794 | 487 | injected_files=None, admin_password=None, zone_blob=None, | ||
1795 | 488 | reservation_id=None, block_device_mapping=None, | ||
1796 | 489 | access_ip_v4=None, access_ip_v6=None, | ||
1797 | 490 | requested_networks=None, config_drive=None): | ||
1798 | 491 | """Provision the instances by passing the whole request to | ||
1799 | 492 | the Scheduler for execution. Returns a Reservation ID | ||
1800 | 493 | related to the creation of all of these instances.""" | ||
1801 | 494 | |||
1802 | 495 | if not metadata: | ||
1803 | 496 | metadata = {} | ||
1804 | 497 | |||
1805 | 498 | num_instances, base_options, image = self._check_create_parameters( | ||
1806 | 499 | context, instance_type, | ||
1807 | 500 | image_href, kernel_id, ramdisk_id, | ||
1808 | 501 | min_count, max_count, | ||
1809 | 502 | display_name, display_description, | ||
1810 | 503 | key_name, key_data, security_group, | ||
1811 | 504 | availability_zone, user_data, metadata, | ||
1812 | 505 | injected_files, admin_password, zone_blob, | ||
1813 | 506 | reservation_id, access_ip_v4, access_ip_v6, | ||
1814 | 507 | requested_networks, config_drive) | ||
1815 | 508 | |||
1816 | 509 | self._ask_scheduler_to_create_instance(context, base_options, | ||
1817 | 510 | instance_type, zone_blob, | ||
1818 | 511 | availability_zone, injected_files, | ||
1819 | 512 | admin_password, image, | ||
1820 | 513 | num_instances=num_instances, | ||
1821 | 514 | requested_networks=requested_networks) | ||
1822 | 515 | |||
1823 | 516 | return base_options['reservation_id'] | ||
1832 | 517 | 518 | ||
1833 | 518 | def create(self, context, instance_type, | 519 | def create(self, context, instance_type, |
1834 | 519 | image_href, kernel_id=None, ramdisk_id=None, | 520 | image_href, kernel_id=None, ramdisk_id=None, |
1835 | 520 | min_count=None, max_count=None, | 521 | min_count=None, max_count=None, |
1838 | 521 | display_name='', display_description='', | 522 | display_name=None, display_description=None, |
1839 | 522 | key_name=None, key_data=None, security_group='default', | 523 | key_name=None, key_data=None, security_group=None, |
1840 | 523 | availability_zone=None, user_data=None, metadata=None, | 524 | availability_zone=None, user_data=None, metadata=None, |
1841 | 524 | injected_files=None, admin_password=None, zone_blob=None, | 525 | injected_files=None, admin_password=None, zone_blob=None, |
1842 | 525 | reservation_id=None, block_device_mapping=None, | 526 | reservation_id=None, block_device_mapping=None, |
1843 | 526 | access_ip_v4=None, access_ip_v6=None, | 527 | access_ip_v4=None, access_ip_v6=None, |
1861 | 527 | requested_networks=None, config_drive=None,): | 528 | requested_networks=None, config_drive=None, |
1862 | 528 | """ | 529 | wait_for_instances=True): |
1863 | 529 | Provision the instances by sending off a series of single | 530 | """ |
1864 | 530 | instance requests to the Schedulers. This is fine for trival | 531 | Provision instances, sending instance information to the |
1865 | 531 | Scheduler drivers, but may remove the effectiveness of the | 532 | scheduler. The scheduler will determine where the instance(s) |
1866 | 532 | more complicated drivers. | 533 | go and will handle creating the DB entries. |
1867 | 533 | 534 | ||
1868 | 534 | NOTE: If you change this method, be sure to change | 535 | Returns a tuple of (instances, reservation_id) where instances |
1869 | 535 | create_all_at_once() at the same time! | 536 | could be 'None' or a list of instance dicts depending on if |
1870 | 536 | 537 | we waited for information from the scheduler or not. | |
1871 | 537 | Returns a list of instance dicts. | 538 | """ |
1872 | 538 | """ | 539 | |
1873 | 539 | 540 | (instances, reservation_id) = self._create_instance( | |
1857 | 540 | if not metadata: | ||
1858 | 541 | metadata = {} | ||
1859 | 542 | |||
1860 | 543 | num_instances, base_options, image = self._check_create_parameters( | ||
1874 | 544 | context, instance_type, | 541 | context, instance_type, |
1875 | 545 | image_href, kernel_id, ramdisk_id, | 542 | image_href, kernel_id, ramdisk_id, |
1876 | 546 | min_count, max_count, | 543 | min_count, max_count, |
1877 | @@ -549,27 +546,25 @@ | |||
1878 | 549 | availability_zone, user_data, metadata, | 546 | availability_zone, user_data, metadata, |
1879 | 550 | injected_files, admin_password, zone_blob, | 547 | injected_files, admin_password, zone_blob, |
1880 | 551 | reservation_id, access_ip_v4, access_ip_v6, | 548 | reservation_id, access_ip_v4, access_ip_v6, |
1902 | 552 | requested_networks, config_drive) | 549 | requested_networks, config_drive, |
1903 | 553 | 550 | block_device_mapping, | |
1904 | 554 | block_device_mapping = block_device_mapping or [] | 551 | wait_for_instances) |
1905 | 555 | instances = [] | 552 | |
1906 | 556 | LOG.debug(_("Going to run %s instances..."), num_instances) | 553 | if instances is None: |
1907 | 557 | for num in range(num_instances): | 554 | # wait_for_instances must have been False |
1908 | 558 | instance = self.create_db_entry_for_new_instance(context, | 555 | return (instances, reservation_id) |
1909 | 559 | instance_type, image, | 556 | |
1910 | 560 | base_options, security_group, | 557 | inst_ret_list = [] |
1911 | 561 | block_device_mapping, num=num) | 558 | for instance in instances: |
1912 | 562 | instances.append(instance) | 559 | if instance.get('_is_precooked', False): |
1913 | 563 | instance_id = instance['id'] | 560 | inst_ret_list.append(instance) |
1914 | 564 | 561 | else: | |
1915 | 565 | self._ask_scheduler_to_create_instance(context, base_options, | 562 | # Scheduler only gives us the 'id'. We need to pull |
1916 | 566 | instance_type, zone_blob, | 563 | # in the created instances from the DB |
1917 | 567 | availability_zone, injected_files, | 564 | instance = self.db.instance_get(context, instance['id']) |
1918 | 568 | admin_password, image, | 565 | inst_ret_list.append(dict(instance.iteritems())) |
1919 | 569 | instance_id=instance_id, | 566 | |
1920 | 570 | requested_networks=requested_networks) | 567 | return (inst_ret_list, reservation_id) |
1900 | 571 | |||
1901 | 572 | return [dict(x.iteritems()) for x in instances] | ||
1921 | 573 | 568 | ||
1922 | 574 | def has_finished_migration(self, context, instance_uuid): | 569 | def has_finished_migration(self, context, instance_uuid): |
1923 | 575 | """Returns true if an instance has a finished migration.""" | 570 | """Returns true if an instance has a finished migration.""" |
1924 | 576 | 571 | ||
1925 | === modified file 'nova/scheduler/abstract_scheduler.py' | |||
1926 | --- nova/scheduler/abstract_scheduler.py 2011-09-12 14:36:14 +0000 | |||
1927 | +++ nova/scheduler/abstract_scheduler.py 2011-09-23 07:08:19 +0000 | |||
1928 | @@ -60,24 +60,10 @@ | |||
1929 | 60 | request_spec, kwargs): | 60 | request_spec, kwargs): |
1930 | 61 | """Create the requested resource in this Zone.""" | 61 | """Create the requested resource in this Zone.""" |
1931 | 62 | host = build_plan_item['hostname'] | 62 | host = build_plan_item['hostname'] |
1950 | 63 | base_options = request_spec['instance_properties'] | 63 | instance = self.create_instance_db_entry(context, request_spec) |
1951 | 64 | image = request_spec['image'] | 64 | driver.cast_to_compute_host(context, host, |
1952 | 65 | instance_type = request_spec.get('instance_type') | 65 | 'run_instance', instance_id=instance['id'], **kwargs) |
1953 | 66 | 66 | return driver.encode_instance(instance, local=True) | |
1936 | 67 | # TODO(sandy): I guess someone needs to add block_device_mapping | ||
1937 | 68 | # support at some point? Also, OS API has no concept of security | ||
1938 | 69 | # groups. | ||
1939 | 70 | instance = compute_api.API().create_db_entry_for_new_instance(context, | ||
1940 | 71 | instance_type, image, base_options, None, []) | ||
1941 | 72 | |||
1942 | 73 | instance_id = instance['id'] | ||
1943 | 74 | kwargs['instance_id'] = instance_id | ||
1944 | 75 | |||
1945 | 76 | queue = db.queue_get_for(context, "compute", host) | ||
1946 | 77 | params = {"method": "run_instance", "args": kwargs} | ||
1947 | 78 | rpc.cast(context, queue, params) | ||
1948 | 79 | LOG.debug(_("Provisioning locally via compute node %(host)s") | ||
1949 | 80 | % locals()) | ||
1954 | 81 | 67 | ||
1955 | 82 | def _decrypt_blob(self, blob): | 68 | def _decrypt_blob(self, blob): |
1956 | 83 | """Returns the decrypted blob or None if invalid. Broken out | 69 | """Returns the decrypted blob or None if invalid. Broken out |
1957 | @@ -112,7 +98,7 @@ | |||
1958 | 112 | files = kwargs['injected_files'] | 98 | files = kwargs['injected_files'] |
1959 | 113 | child_zone = zone_info['child_zone'] | 99 | child_zone = zone_info['child_zone'] |
1960 | 114 | child_blob = zone_info['child_blob'] | 100 | child_blob = zone_info['child_blob'] |
1962 | 115 | zone = db.zone_get(context, child_zone) | 101 | zone = db.zone_get(context.elevated(), child_zone) |
1963 | 116 | url = zone.api_url | 102 | url = zone.api_url |
1964 | 117 | LOG.debug(_("Forwarding instance create call to child zone %(url)s" | 103 | LOG.debug(_("Forwarding instance create call to child zone %(url)s" |
1965 | 118 | ". ReservationID=%(reservation_id)s") % locals()) | 104 | ". ReservationID=%(reservation_id)s") % locals()) |
1966 | @@ -132,12 +118,13 @@ | |||
1967 | 132 | # arguments are passed as keyword arguments | 118 | # arguments are passed as keyword arguments |
1968 | 133 | # (there's a reasonable default for ipgroups in the | 119 | # (there's a reasonable default for ipgroups in the |
1969 | 134 | # novaclient call). | 120 | # novaclient call). |
1971 | 135 | nova.servers.create(name, image_ref, flavor_id, | 121 | instance = nova.servers.create(name, image_ref, flavor_id, |
1972 | 136 | meta=meta, files=files, zone_blob=child_blob, | 122 | meta=meta, files=files, zone_blob=child_blob, |
1973 | 137 | reservation_id=reservation_id) | 123 | reservation_id=reservation_id) |
1974 | 124 | return driver.encode_instance(instance._info, local=False) | ||
1975 | 138 | 125 | ||
1976 | 139 | def _provision_resource_from_blob(self, context, build_plan_item, | 126 | def _provision_resource_from_blob(self, context, build_plan_item, |
1978 | 140 | instance_id, request_spec, kwargs): | 127 | request_spec, kwargs): |
1979 | 141 | """Create the requested resource locally or in a child zone | 128 | """Create the requested resource locally or in a child zone |
1980 | 142 | based on what is stored in the zone blob info. | 129 | based on what is stored in the zone blob info. |
1981 | 143 | 130 | ||
1982 | @@ -165,21 +152,21 @@ | |||
1983 | 165 | 152 | ||
1984 | 166 | # Valid data ... is it for us? | 153 | # Valid data ... is it for us? |
1985 | 167 | if 'child_zone' in host_info and 'child_blob' in host_info: | 154 | if 'child_zone' in host_info and 'child_blob' in host_info: |
1988 | 168 | self._ask_child_zone_to_create_instance(context, host_info, | 155 | instance = self._ask_child_zone_to_create_instance(context, |
1989 | 169 | request_spec, kwargs) | 156 | host_info, request_spec, kwargs) |
1990 | 170 | else: | 157 | else: |
1993 | 171 | self._provision_resource_locally(context, host_info, request_spec, | 158 | instance = self._provision_resource_locally(context, |
1994 | 172 | kwargs) | 159 | host_info, request_spec, kwargs) |
1995 | 160 | return instance | ||
1996 | 173 | 161 | ||
1998 | 174 | def _provision_resource(self, context, build_plan_item, instance_id, | 162 | def _provision_resource(self, context, build_plan_item, |
1999 | 175 | request_spec, kwargs): | 163 | request_spec, kwargs): |
2000 | 176 | """Create the requested resource in this Zone or a child zone.""" | 164 | """Create the requested resource in this Zone or a child zone.""" |
2001 | 177 | if "hostname" in build_plan_item: | 165 | if "hostname" in build_plan_item: |
2007 | 178 | self._provision_resource_locally(context, build_plan_item, | 166 | return self._provision_resource_locally(context, |
2008 | 179 | request_spec, kwargs) | 167 | build_plan_item, request_spec, kwargs) |
2009 | 180 | return | 168 | return self._provision_resource_from_blob(context, |
2010 | 181 | self._provision_resource_from_blob(context, build_plan_item, | 169 | build_plan_item, request_spec, kwargs) |
2006 | 182 | instance_id, request_spec, kwargs) | ||
2011 | 183 | 170 | ||
2012 | 184 | def _adjust_child_weights(self, child_results, zones): | 171 | def _adjust_child_weights(self, child_results, zones): |
2013 | 185 | """Apply the Scale and Offset values from the Zone definition | 172 | """Apply the Scale and Offset values from the Zone definition |
2014 | @@ -205,8 +192,7 @@ | |||
2015 | 205 | LOG.exception(_("Bad child zone scaling values " | 192 | LOG.exception(_("Bad child zone scaling values " |
2016 | 206 | "for Zone: %(zone_id)s") % locals()) | 193 | "for Zone: %(zone_id)s") % locals()) |
2017 | 207 | 194 | ||
2020 | 208 | def schedule_run_instance(self, context, instance_id, request_spec, | 195 | def schedule_run_instance(self, context, request_spec, *args, **kwargs): |
2019 | 209 | *args, **kwargs): | ||
2021 | 210 | """This method is called from nova.compute.api to provision | 196 | """This method is called from nova.compute.api to provision |
2022 | 211 | an instance. However we need to look at the parameters being | 197 | an instance. However we need to look at the parameters being |
2023 | 212 | passed in to see if this is a request to: | 198 | passed in to see if this is a request to: |
2024 | @@ -214,13 +200,16 @@ | |||
2025 | 214 | 2. Use the Build Plan information in the request parameters | 200 | 2. Use the Build Plan information in the request parameters |
2026 | 215 | to simply create the instance (either in this zone or | 201 | to simply create the instance (either in this zone or |
2027 | 216 | a child zone). | 202 | a child zone). |
2028 | 203 | |||
2029 | 204 | returns list of instances created. | ||
2030 | 217 | """ | 205 | """ |
2031 | 218 | # TODO(sandy): We'll have to look for richer specs at some point. | 206 | # TODO(sandy): We'll have to look for richer specs at some point. |
2032 | 219 | blob = request_spec.get('blob') | 207 | blob = request_spec.get('blob') |
2033 | 220 | if blob: | 208 | if blob: |
2037 | 221 | self._provision_resource(context, request_spec, instance_id, | 209 | instance = self._provision_resource(context, |
2038 | 222 | request_spec, kwargs) | 210 | request_spec, request_spec, kwargs) |
2039 | 223 | return None | 211 | # Caller expects a list of instances |
2040 | 212 | return [instance] | ||
2041 | 224 | 213 | ||
2042 | 225 | num_instances = request_spec.get('num_instances', 1) | 214 | num_instances = request_spec.get('num_instances', 1) |
2043 | 226 | LOG.debug(_("Attempting to build %(num_instances)d instance(s)") % | 215 | LOG.debug(_("Attempting to build %(num_instances)d instance(s)") % |
2044 | @@ -231,16 +220,16 @@ | |||
2045 | 231 | if not build_plan: | 220 | if not build_plan: |
2046 | 232 | raise driver.NoValidHost(_('No hosts were available')) | 221 | raise driver.NoValidHost(_('No hosts were available')) |
2047 | 233 | 222 | ||
2048 | 223 | instances = [] | ||
2049 | 234 | for num in xrange(num_instances): | 224 | for num in xrange(num_instances): |
2050 | 235 | if not build_plan: | 225 | if not build_plan: |
2051 | 236 | break | 226 | break |
2052 | 237 | build_plan_item = build_plan.pop(0) | 227 | build_plan_item = build_plan.pop(0) |
2055 | 238 | self._provision_resource(context, build_plan_item, instance_id, | 228 | instance = self._provision_resource(context, |
2056 | 239 | request_spec, kwargs) | 229 | build_plan_item, request_spec, kwargs) |
2057 | 230 | instances.append(instance) | ||
2058 | 240 | 231 | ||
2062 | 241 | # Returning None short-circuits the routing to Compute (since | 232 | return instances |
2060 | 242 | # we've already done it here) | ||
2061 | 243 | return None | ||
2063 | 244 | 233 | ||
2064 | 245 | def select(self, context, request_spec, *args, **kwargs): | 234 | def select(self, context, request_spec, *args, **kwargs): |
2065 | 246 | """Select returns a list of weights and zone/host information | 235 | """Select returns a list of weights and zone/host information |
2066 | @@ -251,7 +240,7 @@ | |||
2067 | 251 | return self._schedule(context, "compute", request_spec, | 240 | return self._schedule(context, "compute", request_spec, |
2068 | 252 | *args, **kwargs) | 241 | *args, **kwargs) |
2069 | 253 | 242 | ||
2071 | 254 | def schedule(self, context, topic, request_spec, *args, **kwargs): | 243 | def schedule(self, context, topic, method, *args, **kwargs): |
2072 | 255 | """The schedule() contract requires we return the one | 244 | """The schedule() contract requires we return the one |
2073 | 256 | best-suited host for this request. | 245 | best-suited host for this request. |
2074 | 257 | """ | 246 | """ |
2075 | @@ -285,7 +274,7 @@ | |||
2076 | 285 | weighted_hosts = self.weigh_hosts(topic, request_spec, filtered_hosts) | 274 | weighted_hosts = self.weigh_hosts(topic, request_spec, filtered_hosts) |
2077 | 286 | # Next, tack on the host weights from the child zones | 275 | # Next, tack on the host weights from the child zones |
2078 | 287 | json_spec = json.dumps(request_spec) | 276 | json_spec = json.dumps(request_spec) |
2080 | 288 | all_zones = db.zone_get_all(context) | 277 | all_zones = db.zone_get_all(context.elevated()) |
2081 | 289 | child_results = self._call_zone_method(context, "select", | 278 | child_results = self._call_zone_method(context, "select", |
2082 | 290 | specs=json_spec, zones=all_zones) | 279 | specs=json_spec, zones=all_zones) |
2083 | 291 | self._adjust_child_weights(child_results, all_zones) | 280 | self._adjust_child_weights(child_results, all_zones) |
2084 | 292 | 281 | ||
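With this change the abstract scheduler collects and returns the provisioned instances rather than returning None to short-circuit routing. A minimal sketch of the build-plan loop above (`provision` is a stand-in for `self._provision_resource`; names hypothetical):

```python
def provision_build_plan(build_plan, provision):
    # Pop items off the front of the build plan and provision each,
    # collecting the resulting instances to return to the caller.
    instances = []
    while build_plan:
        item = build_plan.pop(0)
        instances.append(provision(item))
    return instances

instances = provision_build_plan([{'host': 'a'}, {'host': 'b'}],
                                 lambda item: {'id': item['host']})
```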
2085 | === modified file 'nova/scheduler/api.py' | |||
2086 | --- nova/scheduler/api.py 2011-09-21 12:19:53 +0000 | |||
2087 | +++ nova/scheduler/api.py 2011-09-23 07:08:19 +0000 | |||
2088 | @@ -65,7 +65,7 @@ | |||
2089 | 65 | for item in items: | 65 | for item in items: |
2090 | 66 | item['api_url'] = item['api_url'].replace('\\/', '/') | 66 | item['api_url'] = item['api_url'].replace('\\/', '/') |
2091 | 67 | if not items: | 67 | if not items: |
2093 | 68 | items = db.zone_get_all(context) | 68 | items = db.zone_get_all(context.elevated()) |
2094 | 69 | return items | 69 | return items |
2095 | 70 | 70 | ||
2096 | 71 | 71 | ||
2097 | @@ -116,7 +116,7 @@ | |||
2098 | 116 | pool = greenpool.GreenPool() | 116 | pool = greenpool.GreenPool() |
2099 | 117 | results = [] | 117 | results = [] |
2100 | 118 | if zones is None: | 118 | if zones is None: |
2102 | 119 | zones = db.zone_get_all(context) | 119 | zones = db.zone_get_all(context.elevated()) |
2103 | 120 | for zone in zones: | 120 | for zone in zones: |
2104 | 121 | try: | 121 | try: |
2105 | 122 | # Do this on behalf of the user ... | 122 | # Do this on behalf of the user ... |
2106 | 123 | 123 | ||
2107 | === modified file 'nova/scheduler/chance.py' | |||
2108 | --- nova/scheduler/chance.py 2011-03-31 19:29:16 +0000 | |||
2109 | +++ nova/scheduler/chance.py 2011-09-23 07:08:19 +0000 | |||
2110 | @@ -29,12 +29,33 @@ | |||
2111 | 29 | class ChanceScheduler(driver.Scheduler): | 29 | class ChanceScheduler(driver.Scheduler): |
2112 | 30 | """Implements Scheduler as a random node selector.""" | 30 | """Implements Scheduler as a random node selector.""" |
2113 | 31 | 31 | ||
2115 | 32 | def schedule(self, context, topic, *_args, **_kwargs): | 32 | def _schedule(self, context, topic, **kwargs): |
2116 | 33 | """Picks a host that is up at random.""" | 33 | """Picks a host that is up at random.""" |
2117 | 34 | 34 | ||
2119 | 35 | hosts = self.hosts_up(context, topic) | 35 | elevated = context.elevated() |
2120 | 36 | hosts = self.hosts_up(elevated, topic) | ||
2121 | 36 | if not hosts: | 37 | if not hosts: |
2122 | 37 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" | 38 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" |
2123 | 38 | " for this request. Is the appropriate" | 39 | " for this request. Is the appropriate" |
2124 | 39 | " service running?")) | 40 | " service running?")) |
2125 | 40 | return hosts[int(random.random() * len(hosts))] | 41 | return hosts[int(random.random() * len(hosts))] |
2126 | 42 | |||
2127 | 43 | def schedule(self, context, topic, method, *_args, **kwargs): | ||
2128 | 44 | """Picks a host that is up at random.""" | ||
2129 | 45 | |||
2130 | 46 | host = self._schedule(context, topic, **kwargs) | ||
2131 | 47 | driver.cast_to_host(context, topic, host, method, **kwargs) | ||
2132 | 48 | |||
2133 | 49 | def schedule_run_instance(self, context, request_spec, *_args, **kwargs): | ||
2134 | 50 | """Create and run an instance or instances""" | ||
2135 | 51 | elevated = context.elevated() | ||
2136 | 52 | num_instances = request_spec.get('num_instances', 1) | ||
2137 | 53 | instances = [] | ||
2138 | 54 | for num in xrange(num_instances): | ||
2139 | 55 | host = self._schedule(context, 'compute', **kwargs) | ||
2140 | 56 | instance = self.create_instance_db_entry(elevated, request_spec) | ||
2141 | 57 | driver.cast_to_compute_host(context, host, | ||
2142 | 58 | 'run_instance', instance_id=instance['id'], **kwargs) | ||
2143 | 59 | instances.append(driver.encode_instance(instance)) | ||
2144 | 60 | |||
2145 | 61 | return instances | ||
2146 | 41 | 62 | ||
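The chance.py refactor splits host selection (`_schedule`) out of `schedule()` so that `schedule_run_instance()` can reuse it once per instance. A minimal sketch of the random-selection core (host names hypothetical, error type simplified to `RuntimeError`):

```python
import random

def pick_random_host(hosts):
    # Mirrors hosts[int(random.random() * len(hosts))] from the diff:
    # random.random() is in [0, 1), so the index is always in range.
    if not hosts:
        raise RuntimeError("Scheduler was unable to locate a host"
                           " for this request.")
    return hosts[int(random.random() * len(hosts))]

hosts = ["compute1", "compute2", "compute3"]
chosen = pick_random_host(hosts)
```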
2147 | === modified file 'nova/scheduler/driver.py' | |||
2148 | --- nova/scheduler/driver.py 2011-08-22 21:17:39 +0000 | |||
2149 | +++ nova/scheduler/driver.py 2011-09-23 07:08:19 +0000 | |||
2150 | @@ -29,17 +29,94 @@ | |||
2151 | 29 | from nova import log as logging | 29 | from nova import log as logging |
2152 | 30 | from nova import rpc | 30 | from nova import rpc |
2153 | 31 | from nova import utils | 31 | from nova import utils |
2154 | 32 | from nova.compute import api as compute_api | ||
2155 | 32 | from nova.compute import power_state | 33 | from nova.compute import power_state |
2156 | 33 | from nova.compute import vm_states | 34 | from nova.compute import vm_states |
2157 | 34 | from nova.api.ec2 import ec2utils | 35 | from nova.api.ec2 import ec2utils |
2158 | 35 | 36 | ||
2159 | 36 | 37 | ||
2160 | 37 | FLAGS = flags.FLAGS | 38 | FLAGS = flags.FLAGS |
2161 | 39 | LOG = logging.getLogger('nova.scheduler.driver') | ||
2162 | 38 | flags.DEFINE_integer('service_down_time', 60, | 40 | flags.DEFINE_integer('service_down_time', 60, |
2163 | 39 | 'maximum time since last checkin for up service') | 41 | 'maximum time since last checkin for up service') |
2164 | 40 | flags.DECLARE('instances_path', 'nova.compute.manager') | 42 | flags.DECLARE('instances_path', 'nova.compute.manager') |
2165 | 41 | 43 | ||
2166 | 42 | 44 | ||
2167 | 45 | def cast_to_volume_host(context, host, method, update_db=True, **kwargs): | ||
2168 | 46 | """Cast request to a volume host queue""" | ||
2169 | 47 | |||
2170 | 48 | if update_db: | ||
2171 | 49 | volume_id = kwargs.get('volume_id', None) | ||
2172 | 50 | if volume_id is not None: | ||
2173 | 51 | now = utils.utcnow() | ||
2174 | 52 | db.volume_update(context, volume_id, | ||
2175 | 53 | {'host': host, 'scheduled_at': now}) | ||
2176 | 54 | rpc.cast(context, | ||
2177 | 55 | db.queue_get_for(context, 'volume', host), | ||
2178 | 56 | {"method": method, "args": kwargs}) | ||
2179 | 57 | LOG.debug(_("Casted '%(method)s' to volume '%(host)s'") % locals()) | ||
2180 | 58 | |||
2181 | 59 | |||
2182 | 60 | def cast_to_compute_host(context, host, method, update_db=True, **kwargs): | ||
2183 | 61 | """Cast request to a compute host queue""" | ||
2184 | 62 | |||
2185 | 63 | if update_db: | ||
2186 | 64 | instance_id = kwargs.get('instance_id', None) | ||
2187 | 65 | if instance_id is not None: | ||
2188 | 66 | now = utils.utcnow() | ||
2189 | 67 | db.instance_update(context, instance_id, | ||
2190 | 68 | {'host': host, 'scheduled_at': now}) | ||
2191 | 69 | rpc.cast(context, | ||
2192 | 70 | db.queue_get_for(context, 'compute', host), | ||
2193 | 71 | {"method": method, "args": kwargs}) | ||
2194 | 72 | LOG.debug(_("Casted '%(method)s' to compute '%(host)s'") % locals()) | ||
2195 | 73 | |||
2196 | 74 | |||
2197 | 75 | def cast_to_network_host(context, host, method, update_db=False, **kwargs): | ||
2198 | 76 | """Cast request to a network host queue""" | ||
2199 | 77 | |||
2200 | 78 | rpc.cast(context, | ||
2201 | 79 | db.queue_get_for(context, 'network', host), | ||
2202 | 80 | {"method": method, "args": kwargs}) | ||
2203 | 81 | LOG.debug(_("Casted '%(method)s' to network '%(host)s'") % locals()) | ||
2204 | 82 | |||
2205 | 83 | |||
2206 | 84 | def cast_to_host(context, topic, host, method, update_db=True, **kwargs): | ||
2207 | 85 | """Generic cast to host""" | ||
2208 | 86 | |||
2209 | 87 | topic_mapping = { | ||
2210 | 88 | "compute": cast_to_compute_host, | ||
2211 | 89 | "volume": cast_to_volume_host, | ||
2212 | 90 | 'network': cast_to_network_host} | ||
2213 | 91 | |||
2214 | 92 | func = topic_mapping.get(topic) | ||
2215 | 93 | if func: | ||
2216 | 94 | func(context, host, method, update_db=update_db, **kwargs) | ||
2217 | 95 | else: | ||
2218 | 96 | rpc.cast(context, | ||
2219 | 97 | db.queue_get_for(context, topic, host), | ||
2220 | 98 | {"method": method, "args": kwargs}) | ||
2221 | 99 | LOG.debug(_("Casted '%(method)s' to %(topic)s '%(host)s'") | ||
2222 | 100 | % locals()) | ||
2223 | 101 | |||
2224 | 102 | |||
2225 | 103 | def encode_instance(instance, local=True): | ||
2226 | 104 | """Encode locally created instance for return via RPC""" | ||
2227 | 105 | # TODO(comstud): I would love to be able to return the full | ||
2228 | 106 | # instance information here, but we'll need some modifications | ||
2229 | 107 | # to the RPC code to handle datetime conversions with the | ||
2230 | 108 | # json encoding/decoding. We should be able to set a default | ||
2231 | 109 | # json handler somehow to do it. | ||
2232 | 110 | # | ||
2233 | 111 | # For now, I'll just return the instance ID and let the caller | ||
2234 | 112 | # do a DB lookup :-/ | ||
2235 | 113 | if local: | ||
2236 | 114 | return dict(id=instance['id'], _is_precooked=False) | ||
2237 | 115 | else: | ||
2238 | 116 | instance['_is_precooked'] = True | ||
2239 | 117 | return instance | ||
2240 | 118 | |||
2241 | 119 | |||
2242 | 43 | class NoValidHost(exception.Error): | 120 | class NoValidHost(exception.Error): |
2243 | 44 | """There is no valid host for the command.""" | 121 | """There is no valid host for the command.""" |
2244 | 45 | pass | 122 | pass |
2245 | @@ -55,6 +132,7 @@ | |||
2246 | 55 | 132 | ||
2247 | 56 | def __init__(self): | 133 | def __init__(self): |
2248 | 57 | self.zone_manager = None | 134 | self.zone_manager = None |
2249 | 135 | self.compute_api = compute_api.API() | ||
2250 | 58 | 136 | ||
2251 | 59 | def set_zone_manager(self, zone_manager): | 137 | def set_zone_manager(self, zone_manager): |
2252 | 60 | """Called by the Scheduler Service to supply a ZoneManager.""" | 138 | """Called by the Scheduler Service to supply a ZoneManager.""" |
2253 | @@ -76,7 +154,20 @@ | |||
2254 | 76 | for service in services | 154 | for service in services |
2255 | 77 | if self.service_is_up(service)] | 155 | if self.service_is_up(service)] |
2256 | 78 | 156 | ||
2258 | 79 | def schedule(self, context, topic, *_args, **_kwargs): | 157 | def create_instance_db_entry(self, context, request_spec): |
2259 | 158 | """Create instance DB entry based on request_spec""" | ||
2260 | 159 | base_options = request_spec['instance_properties'] | ||
2261 | 160 | image = request_spec['image'] | ||
2262 | 161 | instance_type = request_spec.get('instance_type') | ||
2263 | 162 | security_group = request_spec.get('security_group', 'default') | ||
2264 | 163 | block_device_mapping = request_spec.get('block_device_mapping', []) | ||
2265 | 164 | |||
2266 | 165 | instance = self.compute_api.create_db_entry_for_new_instance( | ||
2267 | 166 | context, instance_type, image, base_options, | ||
2268 | 167 | security_group, block_device_mapping) | ||
2269 | 168 | return instance | ||
2270 | 169 | |||
2271 | 170 | def schedule(self, context, topic, method, *_args, **_kwargs): | ||
2272 | 80 | """Must override at least this method for scheduler to work.""" | 171 | """Must override at least this method for scheduler to work.""" |
2273 | 81 | raise NotImplementedError(_("Must implement a fallback schedule")) | 172 | raise NotImplementedError(_("Must implement a fallback schedule")) |
2274 | 82 | 173 | ||
2275 | @@ -114,10 +205,12 @@ | |||
2276 | 114 | volume_ref['id'], | 205 | volume_ref['id'], |
2277 | 115 | {'status': 'migrating'}) | 206 | {'status': 'migrating'}) |
2278 | 116 | 207 | ||
2279 | 117 | # Return value is necessary to send request to src | ||
2280 | 118 | # Check _schedule() in detail. | ||
2281 | 119 | src = instance_ref['host'] | 208 | src = instance_ref['host'] |
2283 | 120 | return src | 209 | cast_to_compute_host(context, src, 'live_migration', |
2284 | 210 | update_db=False, | ||
2285 | 211 | instance_id=instance_id, | ||
2286 | 212 | dest=dest, | ||
2287 | 213 | block_migration=block_migration) | ||
2288 | 121 | 214 | ||
2289 | 122 | def _live_migration_src_check(self, context, instance_ref): | 215 | def _live_migration_src_check(self, context, instance_ref): |
2290 | 123 | """Live migration check routine (for src host). | 216 | """Live migration check routine (for src host). |
2291 | @@ -205,7 +298,7 @@ | |||
2292 | 205 | if not block_migration: | 298 | if not block_migration: |
2293 | 206 | src = instance_ref['host'] | 299 | src = instance_ref['host'] |
2294 | 207 | ipath = FLAGS.instances_path | 300 | ipath = FLAGS.instances_path |
2296 | 208 | logging.error(_("Cannot confirm tmpfile at %(ipath)s is on " | 301 | LOG.error(_("Cannot confirm tmpfile at %(ipath)s is on " |
2297 | 209 | "same shared storage between %(src)s " | 302 | "same shared storage between %(src)s " |
2298 | 210 | "and %(dest)s.") % locals()) | 303 | "and %(dest)s.") % locals()) |
2299 | 211 | raise | 304 | raise |
2300 | @@ -243,7 +336,7 @@ | |||
2301 | 243 | 336 | ||
2302 | 244 | except rpc.RemoteError: | 337 | except rpc.RemoteError: |
2303 | 245 | src = instance_ref['host'] | 338 | src = instance_ref['host'] |
2305 | 246 | logging.exception(_("host %(dest)s is not compatible with " | 339 | LOG.exception(_("host %(dest)s is not compatible with " |
2306 | 247 | "original host %(src)s.") % locals()) | 340 | "original host %(src)s.") % locals()) |
2307 | 248 | raise | 341 | raise |
2308 | 249 | 342 | ||
2309 | @@ -354,6 +447,8 @@ | |||
2310 | 354 | dst_t = db.queue_get_for(context, FLAGS.compute_topic, dest) | 447 | dst_t = db.queue_get_for(context, FLAGS.compute_topic, dest) |
2311 | 355 | src_t = db.queue_get_for(context, FLAGS.compute_topic, src) | 448 | src_t = db.queue_get_for(context, FLAGS.compute_topic, src) |
2312 | 356 | 449 | ||
2313 | 450 | filename = None | ||
2314 | 451 | |||
2315 | 357 | try: | 452 | try: |
2316 | 358 | # create tmpfile at dest host | 453 | # create tmpfile at dest host |
2317 | 359 | filename = rpc.call(context, dst_t, | 454 | filename = rpc.call(context, dst_t, |
2318 | @@ -370,6 +465,8 @@ | |||
2319 | 370 | raise | 465 | raise |
2320 | 371 | 466 | ||
2321 | 372 | finally: | 467 | finally: |
2325 | 373 | rpc.call(context, dst_t, | 468 | # Should only be None for tests? |
2326 | 374 | {"method": 'cleanup_shared_storage_test_file', | 469 | if filename is not None: |
2327 | 375 | "args": {'filename': filename}}) | 470 | rpc.call(context, dst_t, |
2328 | 471 | {"method": 'cleanup_shared_storage_test_file', | ||
2329 | 472 | "args": {'filename': filename}}) | ||
2330 | 376 | 473 | ||
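The new module-level `cast_to_*_host` helpers centralize the "update the DB record, then `rpc.cast` to the host's queue" pattern, with `cast_to_host` dispatching on topic and falling back to a generic cast. A stripped-down sketch of that dispatch, with `rpc`/`db` stubbed out by a call log (all names here are illustrative, not the real RPC layer):

```python
calls = []  # stands in for rpc.cast; records (topic, host, method, kwargs)

def cast_to_compute_host(context, host, method, update_db=True, **kwargs):
    # The real helper also sets the instance's 'host' and 'scheduled_at'
    # columns when update_db is True and an instance_id is present.
    calls.append(("compute", host, method, kwargs))

def cast_to_volume_host(context, host, method, update_db=True, **kwargs):
    calls.append(("volume", host, method, kwargs))

def cast_to_host(context, topic, host, method, update_db=True, **kwargs):
    topic_mapping = {"compute": cast_to_compute_host,
                     "volume": cast_to_volume_host}
    func = topic_mapping.get(topic)
    if func:
        func(context, host, method, update_db=update_db, **kwargs)
    else:
        # Generic fallback: cast straight to the topic's host queue.
        calls.append((topic, host, method, kwargs))

cast_to_host(None, "compute", "host1", "run_instance", instance_id=42)
cast_to_host(None, "network", "host2", "set_network_host")
```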
2331 | === modified file 'nova/scheduler/least_cost.py' | |||
2332 | --- nova/scheduler/least_cost.py 2011-08-15 22:09:39 +0000 | |||
2333 | +++ nova/scheduler/least_cost.py 2011-09-23 07:08:19 +0000 | |||
2334 | @@ -160,8 +160,7 @@ | |||
2335 | 160 | 160 | ||
2336 | 161 | weighted = [] | 161 | weighted = [] |
2337 | 162 | weight_log = [] | 162 | weight_log = [] |
2340 | 163 | for cost, (hostname, service) in zip(costs, hosts): | 163 | for cost, (hostname, caps) in zip(costs, hosts): |
2339 | 164 | caps = service[topic] | ||
2341 | 165 | weight_log.append("%s: %s" % (hostname, "%.2f" % cost)) | 164 | weight_log.append("%s: %s" % (hostname, "%.2f" % cost)) |
2342 | 166 | weight_dict = dict(weight=cost, hostname=hostname, | 165 | weight_dict = dict(weight=cost, hostname=hostname, |
2343 | 167 | capabilities=caps) | 166 | capabilities=caps) |
2344 | 168 | 167 | ||
2345 | === modified file 'nova/scheduler/manager.py' | |||
2346 | --- nova/scheduler/manager.py 2011-09-02 16:00:03 +0000 | |||
2347 | +++ nova/scheduler/manager.py 2011-09-23 07:08:19 +0000 | |||
2348 | @@ -81,37 +81,23 @@ | |||
2349 | 81 | """Select a list of hosts best matching the provided specs.""" | 81 | """Select a list of hosts best matching the provided specs.""" |
2350 | 82 | return self.driver.select(context, *args, **kwargs) | 82 | return self.driver.select(context, *args, **kwargs) |
2351 | 83 | 83 | ||
2352 | 84 | def get_scheduler_rules(self, context=None, *args, **kwargs): | ||
2353 | 85 | """Ask the driver how requests should be made of it.""" | ||
2354 | 86 | return self.driver.get_scheduler_rules(context, *args, **kwargs) | ||
2355 | 87 | |||
2356 | 88 | def _schedule(self, method, context, topic, *args, **kwargs): | 84 | def _schedule(self, method, context, topic, *args, **kwargs): |
2357 | 89 | """Tries to call schedule_* method on the driver to retrieve host. | 85 | """Tries to call schedule_* method on the driver to retrieve host. |
2358 | 90 | 86 | ||
2359 | 91 | Falls back to schedule(context, topic) if method doesn't exist. | 87 | Falls back to schedule(context, topic) if method doesn't exist. |
2360 | 92 | """ | 88 | """ |
2361 | 93 | driver_method = 'schedule_%s' % method | 89 | driver_method = 'schedule_%s' % method |
2362 | 94 | elevated = context.elevated() | ||
2363 | 95 | try: | 90 | try: |
2364 | 96 | real_meth = getattr(self.driver, driver_method) | 91 | real_meth = getattr(self.driver, driver_method) |
2366 | 97 | args = (elevated,) + args | 92 | args = (context,) + args |
2367 | 98 | except AttributeError, e: | 93 | except AttributeError, e: |
2368 | 99 | LOG.warning(_("Driver Method %(driver_method)s missing: %(e)s." | 94 | LOG.warning(_("Driver Method %(driver_method)s missing: %(e)s." |
2369 | 100 | "Reverting to schedule()") % locals()) | 95 | "Reverting to schedule()") % locals()) |
2370 | 101 | real_meth = self.driver.schedule | 96 | real_meth = self.driver.schedule |
2384 | 102 | args = (elevated, topic) + args | 97 | args = (context, topic, method) + args |
2385 | 103 | host = real_meth(*args, **kwargs) | 98 | |
2386 | 104 | 99 | # Scheduler methods are responsible for casting. | |
2387 | 105 | if not host: | 100 | return real_meth(*args, **kwargs) |
2375 | 106 | LOG.debug(_("%(topic)s %(method)s handled in Scheduler") | ||
2376 | 107 | % locals()) | ||
2377 | 108 | return | ||
2378 | 109 | |||
2379 | 110 | rpc.cast(context, | ||
2380 | 111 | db.queue_get_for(context, topic, host), | ||
2381 | 112 | {"method": method, | ||
2382 | 113 | "args": kwargs}) | ||
2383 | 114 | LOG.debug(_("Casted to %(topic)s %(host)s for %(method)s") % locals()) | ||
2388 | 115 | 101 | ||
2389 | 116 | # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin. | 102 | # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin. |
2390 | 117 | # Based on bexar design summit discussion, | 103 | # Based on bexar design summit discussion, |
2391 | 118 | 104 | ||
2392 | === modified file 'nova/scheduler/multi.py' | |||
2393 | --- nova/scheduler/multi.py 2011-08-11 23:26:26 +0000 | |||
2394 | +++ nova/scheduler/multi.py 2011-09-23 07:08:19 +0000 | |||
2395 | @@ -38,7 +38,8 @@ | |||
2396 | 38 | # A mapping of methods to topics so we can figure out which driver to use. | 38 | # A mapping of methods to topics so we can figure out which driver to use. |
2397 | 39 | _METHOD_MAP = {'run_instance': 'compute', | 39 | _METHOD_MAP = {'run_instance': 'compute', |
2398 | 40 | 'start_instance': 'compute', | 40 | 'start_instance': 'compute', |
2400 | 41 | 'create_volume': 'volume'} | 41 | 'create_volume': 'volume', |
2401 | 42 | 'create_volumes': 'volume'} | ||
2402 | 42 | 43 | ||
2403 | 43 | 44 | ||
2404 | 44 | class MultiScheduler(driver.Scheduler): | 45 | class MultiScheduler(driver.Scheduler): |
2405 | @@ -69,5 +70,6 @@ | |||
2406 | 69 | for k, v in self.drivers.iteritems(): | 70 | for k, v in self.drivers.iteritems(): |
2407 | 70 | v.set_zone_manager(zone_manager) | 71 | v.set_zone_manager(zone_manager) |
2408 | 71 | 72 | ||
2411 | 72 | def schedule(self, context, topic, *_args, **_kwargs): | 73 | def schedule(self, context, topic, method, *_args, **_kwargs): |
2412 | 73 | return self.drivers[topic].schedule(context, topic, *_args, **_kwargs) | 74 | return self.drivers[topic].schedule(context, topic, |
2413 | 75 | method, *_args, **_kwargs) | ||
2414 | 74 | 76 | ||
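`_METHOD_MAP` gains a `create_volumes` entry so MultiScheduler can route the new multi-volume call to the volume sub-driver; `schedule()` now also threads the `method` name through. A sketch of the method-to-topic routing with stub drivers (class and function names hypothetical):

```python
_METHOD_MAP = {'run_instance': 'compute',
               'start_instance': 'compute',
               'create_volume': 'volume',
               'create_volumes': 'volume'}

class FakeDriver:
    def __init__(self, name):
        self.name = name
    def schedule(self, context, topic, method, *args, **kwargs):
        return (self.name, topic, method)

drivers = {'compute': FakeDriver('compute-driver'),
           'volume': FakeDriver('volume-driver')}

def schedule_by_method(context, method, *args, **kwargs):
    # _METHOD_MAP tells the multi scheduler which sub-driver owns a method.
    topic = _METHOD_MAP[method]
    return drivers[topic].schedule(context, topic, method, *args, **kwargs)

routed = schedule_by_method(None, 'create_volumes')
```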
2415 | === modified file 'nova/scheduler/simple.py' | |||
2416 | --- nova/scheduler/simple.py 2011-08-19 15:44:14 +0000 | |||
2417 | +++ nova/scheduler/simple.py 2011-09-23 07:08:19 +0000 | |||
2418 | @@ -39,47 +39,50 @@ | |||
2419 | 39 | class SimpleScheduler(chance.ChanceScheduler): | 39 | class SimpleScheduler(chance.ChanceScheduler): |
2420 | 40 | """Implements Naive Scheduler that tries to find least loaded host.""" | 40 | """Implements Naive Scheduler that tries to find least loaded host.""" |
2421 | 41 | 41 | ||
2423 | 42 | def _schedule_instance(self, context, instance_id, *_args, **_kwargs): | 42 | def _schedule_instance(self, context, instance_opts, *_args, **_kwargs): |
2424 | 43 | """Picks a host that is up and has the fewest running instances.""" | 43 | """Picks a host that is up and has the fewest running instances.""" |
2430 | 44 | instance_ref = db.instance_get(context, instance_id) | 44 | |
2431 | 45 | if (instance_ref['availability_zone'] | 45 | availability_zone = instance_opts.get('availability_zone') |
2432 | 46 | and ':' in instance_ref['availability_zone'] | 46 | |
2433 | 47 | and context.is_admin): | 47 | if availability_zone and context.is_admin and \ |
2434 | 48 | zone, _x, host = instance_ref['availability_zone'].partition(':') | 48 | (':' in availability_zone): |
2435 | 49 | zone, host = availability_zone.split(':', 1) | ||
2436 | 49 | service = db.service_get_by_args(context.elevated(), host, | 50 | service = db.service_get_by_args(context.elevated(), host, |
2437 | 50 | 'nova-compute') | 51 | 'nova-compute') |
2438 | 51 | if not self.service_is_up(service): | 52 | if not self.service_is_up(service): |
2439 | 52 | raise driver.WillNotSchedule(_("Host %s is not alive") % host) | 53 | raise driver.WillNotSchedule(_("Host %s is not alive") % host) |
2440 | 54 | return host | ||
2441 | 53 | 55 | ||
2442 | 54 | # TODO(vish): this probably belongs in the manager, if we | ||
2443 | 55 | # can generalize this somehow | ||
2444 | 56 | now = utils.utcnow() | ||
2445 | 57 | db.instance_update(context, instance_id, {'host': host, | ||
2446 | 58 | 'scheduled_at': now}) | ||
2447 | 59 | return host | ||
2448 | 60 | results = db.service_get_all_compute_sorted(context) | 56 | results = db.service_get_all_compute_sorted(context) |
2449 | 61 | for result in results: | 57 | for result in results: |
2450 | 62 | (service, instance_cores) = result | 58 | (service, instance_cores) = result |
2452 | 63 | if instance_cores + instance_ref['vcpus'] > FLAGS.max_cores: | 59 | if instance_cores + instance_opts['vcpus'] > FLAGS.max_cores: |
2453 | 64 | raise driver.NoValidHost(_("All hosts have too many cores")) | 60 | raise driver.NoValidHost(_("All hosts have too many cores")) |
2454 | 65 | if self.service_is_up(service): | 61 | if self.service_is_up(service): |
2455 | 66 | # NOTE(vish): this probably belongs in the manager, if we | ||
2456 | 67 | # can generalize this somehow | ||
2457 | 68 | now = utils.utcnow() | ||
2458 | 69 | db.instance_update(context, | ||
2459 | 70 | instance_id, | ||
2460 | 71 | {'host': service['host'], | ||
2461 | 72 | 'scheduled_at': now}) | ||
2462 | 73 | return service['host'] | 62 | return service['host'] |
2463 | 74 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" | 63 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" |
2464 | 75 | " for this request. Is the appropriate" | 64 | " for this request. Is the appropriate" |
2465 | 76 | " service running?")) | 65 | " service running?")) |
2466 | 77 | 66 | ||
2469 | 78 | def schedule_run_instance(self, context, instance_id, *_args, **_kwargs): | 67 | def schedule_run_instance(self, context, request_spec, *_args, **_kwargs): |
2470 | 79 | return self._schedule_instance(context, instance_id, *_args, **_kwargs) | 68 | num_instances = request_spec.get('num_instances', 1) |
2471 | 69 | instances = [] | ||
2472 | 70 | for num in xrange(num_instances): | ||
2473 | 71 | host = self._schedule_instance(context, | ||
2474 | 72 | request_spec['instance_properties'], *_args, **_kwargs) | ||
2475 | 73 | instance_ref = self.create_instance_db_entry(context, | ||
2476 | 74 | request_spec) | ||
2477 | 75 | driver.cast_to_compute_host(context, host, 'run_instance', | ||
2478 | 76 | instance_id=instance_ref['id'], **_kwargs) | ||
2479 | 77 | instances.append(driver.encode_instance(instance_ref)) | ||
2480 | 78 | return instances | ||
2481 | 80 | 79 | ||
2482 | 81 | def schedule_start_instance(self, context, instance_id, *_args, **_kwargs): | 80 | def schedule_start_instance(self, context, instance_id, *_args, **_kwargs): |
2484 | 82 | return self._schedule_instance(context, instance_id, *_args, **_kwargs) | 81 | instance_ref = db.instance_get(context, instance_id) |
2485 | 82 | host = self._schedule_instance(context, instance_ref, | ||
2486 | 83 | *_args, **_kwargs) | ||
2487 | 84 | driver.cast_to_compute_host(context, host, 'start_instance', | ||
2488 | 85 | instance_id=instance_id, **_kwargs) | ||
2489 | 83 | 86 | ||
2490 | 84 | def schedule_create_volume(self, context, volume_id, *_args, **_kwargs): | 87 | def schedule_create_volume(self, context, volume_id, *_args, **_kwargs): |
2491 | 85 | """Picks a host that is up and has the fewest volumes.""" | 88 | """Picks a host that is up and has the fewest volumes.""" |
2492 | @@ -92,13 +95,9 @@ | |||
2493 | 92 | 'nova-volume') | 95 | 'nova-volume') |
2494 | 93 | if not self.service_is_up(service): | 96 | if not self.service_is_up(service): |
2495 | 94 | raise driver.WillNotSchedule(_("Host %s not available") % host) | 97 | raise driver.WillNotSchedule(_("Host %s not available") % host) |
2503 | 95 | 98 | driver.cast_to_volume_host(context, host, 'create_volume', | |
2504 | 96 | # TODO(vish): this probably belongs in the manager, if we | 99 | volume_id=volume_id, **_kwargs) |
2505 | 97 | # can generalize this somehow | 100 | return None |
2499 | 98 | now = utils.utcnow() | ||
2500 | 99 | db.volume_update(context, volume_id, {'host': host, | ||
2501 | 100 | 'scheduled_at': now}) | ||
2502 | 101 | return host | ||
2506 | 102 | results = db.service_get_all_volume_sorted(context) | 101 | results = db.service_get_all_volume_sorted(context) |
2507 | 103 | for result in results: | 102 | for result in results: |
2508 | 104 | (service, volume_gigabytes) = result | 103 | (service, volume_gigabytes) = result |
2509 | @@ -106,14 +105,9 @@ | |||
2510 | 106 | raise driver.NoValidHost(_("All hosts have too many " | 105 | raise driver.NoValidHost(_("All hosts have too many " |
2511 | 107 | "gigabytes")) | 106 | "gigabytes")) |
2512 | 108 | if self.service_is_up(service): | 107 | if self.service_is_up(service): |
2521 | 109 | # NOTE(vish): this probably belongs in the manager, if we | 108 | driver.cast_to_volume_host(context, service['host'], |
2522 | 110 | # can generalize this somehow | 109 | 'create_volume', volume_id=volume_id, **_kwargs) |
2523 | 111 | now = utils.utcnow() | 110 | return None |
2516 | 112 | db.volume_update(context, | ||
2517 | 113 | volume_id, | ||
2518 | 114 | {'host': service['host'], | ||
2519 | 115 | 'scheduled_at': now}) | ||
2520 | 116 | return service['host'] | ||
2524 | 117 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" | 111 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" |
2525 | 118 | " for this request. Is the appropriate" | 112 | " for this request. Is the appropriate" |
2526 | 119 | " service running?")) | 113 | " service running?")) |
2527 | @@ -127,7 +121,9 @@ | |||
2528 | 127 | if instance_count >= FLAGS.max_networks: | 121 | if instance_count >= FLAGS.max_networks: |
2529 | 128 | raise driver.NoValidHost(_("All hosts have too many networks")) | 122 | raise driver.NoValidHost(_("All hosts have too many networks")) |
2530 | 129 | if self.service_is_up(service): | 123 | if self.service_is_up(service): |
2532 | 130 | return service['host'] | 124 | driver.cast_to_network_host(context, service['host'], |
2533 | 125 | 'set_network_host', **_kwargs) | ||
2534 | 126 | return None | ||
2535 | 131 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" | 127 | raise driver.NoValidHost(_("Scheduler was unable to locate a host" |
2536 | 132 | " for this request. Is the appropriate" | 128 | " for this request. Is the appropriate" |
2537 | 133 | " service running?")) | 129 | " service running?")) |
2538 | 134 | 130 | ||
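SimpleScheduler keeps the admin-only "zone:host" convention for pinning a request to a specific host, now parsed with `split(':', 1)` from the instance properties instead of a DB row. A small sketch of that parsing rule (function name and values illustrative):

```python
def forced_host(availability_zone, is_admin):
    """Admins can pin a request to a host via a 'zone:host' AZ string."""
    if availability_zone and is_admin and ':' in availability_zone:
        zone, host = availability_zone.split(':', 1)
        return host
    return None

pinned = forced_host('nova:compute3', True)
not_admin = forced_host('nova:compute3', False)   # non-admins may not pin
plain = forced_host('nova', True)                 # no ':' means no pinning
```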
2539 | === modified file 'nova/scheduler/vsa.py' | |||
2540 | --- nova/scheduler/vsa.py 2011-08-26 18:14:44 +0000 | |||
2541 | +++ nova/scheduler/vsa.py 2011-09-23 07:08:19 +0000 | |||
2542 | @@ -195,8 +195,6 @@ | |||
2543 | 195 | 'display_description': vol['description'], | 195 | 'display_description': vol['description'], |
2544 | 196 | 'volume_type_id': vol['volume_type_id'], | 196 | 'volume_type_id': vol['volume_type_id'], |
2545 | 197 | 'metadata': dict(to_vsa_id=vsa_id), | 197 | 'metadata': dict(to_vsa_id=vsa_id), |
2546 | 198 | 'host': vol['host'], | ||
2547 | 199 | 'scheduled_at': now | ||
2548 | 200 | } | 198 | } |
2549 | 201 | 199 | ||
2550 | 202 | size = vol['size'] | 200 | size = vol['size'] |
2551 | @@ -205,12 +203,10 @@ | |||
2552 | 205 | LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\ | 203 | LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\ |
2553 | 206 | "host %(host)s"), locals()) | 204 | "host %(host)s"), locals()) |
2554 | 207 | 205 | ||
2561 | 208 | volume_ref = db.volume_create(context, options) | 206 | volume_ref = db.volume_create(context.elevated(), options) |
2562 | 209 | rpc.cast(context, | 207 | driver.cast_to_volume_host(context, vol['host'], |
2563 | 210 | db.queue_get_for(context, "volume", vol['host']), | 208 | 'create_volume', volume_id=volume_ref['id'], |
2564 | 211 | {"method": "create_volume", | 209 | snapshot_id=None) |
2559 | 212 | "args": {"volume_id": volume_ref['id'], | ||
2560 | 213 | "snapshot_id": None}}) | ||
2565 | 214 | 210 | ||
2566 | 215 | def _check_host_enforcement(self, context, availability_zone): | 211 | def _check_host_enforcement(self, context, availability_zone): |
2567 | 216 | if (availability_zone | 212 | if (availability_zone |
2568 | @@ -274,7 +270,6 @@ | |||
     def schedule_create_volumes(self, context, request_spec,
                                 availability_zone=None, *_args, **_kwargs):
         """Picks hosts for hosting multiple volumes."""
-
         num_volumes = request_spec.get('num_volumes')
         LOG.debug(_("Attempting to spawn %(num_volumes)d volume(s)") %
                 locals())
@@ -291,7 +286,8 @@

             for vol in volume_params:
                 self._provision_volume(context, vol, vsa_id, availability_zone)
-        except:
+        except Exception:
+            LOG.exception(_("Error creating volumes"))
             if vsa_id:
                 db.vsa_update(context, vsa_id, dict(status=VsaState.FAILED))

@@ -310,10 +306,9 @@
         host = self._check_host_enforcement(context,
                                             volume_ref['availability_zone'])
         if host:
-            now = utils.utcnow()
-            db.volume_update(context, volume_id, {'host': host,
-                                                  'scheduled_at': now})
-            return host
+            driver.cast_to_volume_host(context, host, 'create_volume',
+                    volume_id=volume_id, **_kwargs)
+            return None

         volume_type_id = volume_ref['volume_type_id']
         if volume_type_id:
@@ -344,18 +339,16 @@

         try:
             (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)
-        except:
+        except Exception:
+            LOG.exception(_("Error creating volume"))
             if volume_ref['to_vsa_id']:
                 db.vsa_update(context, volume_ref['to_vsa_id'],
                               dict(status=VsaState.FAILED))
             raise

         if host:
-            now = utils.utcnow()
-            db.volume_update(context, volume_id, {'host': host,
-                                                  'scheduled_at': now})
-            self._consume_resource(qos_cap, volume_ref['size'], -1)
-            return host
+            driver.cast_to_volume_host(context, host, 'create_volume',
+                    volume_id=volume_id, **_kwargs)

     def _consume_full_drive(self, qos_values, direction):
         qos_values['FullDrive']['NumFreeDrives'] += direction
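The vsa.py hunks above replace bare `except:` clauses with `except Exception:` plus a `LOG.exception` call before marking the VSA failed. A minimal runnable sketch of that error-handling pattern (the `provision`/`mark_failed` names are illustrative stand-ins, not nova APIs):

```python
import logging

logging.basicConfig(level=logging.ERROR)
LOG = logging.getLogger("scheduler")


def provision(volumes, mark_failed):
    # `except Exception:` (unlike a bare `except:`) lets KeyboardInterrupt
    # and SystemExit propagate, and LOG.exception records the traceback
    # before the failure state is persisted and the error is re-raised.
    try:
        for vol in volumes:
            if vol.get("size", 0) <= 0:
                raise ValueError("bad volume size")
    except Exception:
        LOG.exception("Error creating volumes")
        mark_failed()
        raise


failures = []
try:
    provision([{"size": 0}], lambda: failures.append("FAILED"))
except ValueError:
    pass
print(failures)  # ['FAILED']
```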
=== modified file 'nova/scheduler/zone.py'
--- nova/scheduler/zone.py	2011-03-31 19:29:16 +0000
+++ nova/scheduler/zone.py	2011-09-23 07:08:19 +0000
@@ -35,7 +35,7 @@
         for topic and availability zone (if defined).
         """

-        if zone is None:
+        if not zone:
             return self.hosts_up(context, topic)

         services = db.service_get_all_by_topic(context, topic)
@@ -44,16 +44,34 @@
                 if self.service_is_up(service)
                 and service.availability_zone == zone]

-    def schedule(self, context, topic, *_args, **_kwargs):
+    def _schedule(self, context, topic, request_spec, **kwargs):
         """Picks a host that is up at random in selected
         availability zone (if defined).
         """

-        zone = _kwargs.get('availability_zone')
-        hosts = self.hosts_up_with_zone(context, topic, zone)
+        zone = kwargs.get('availability_zone')
+        if not zone and request_spec:
+            zone = request_spec['instance_properties'].get(
+                    'availability_zone')
+        hosts = self.hosts_up_with_zone(context.elevated(), topic, zone)
         if not hosts:
             raise driver.NoValidHost(_("Scheduler was unable to locate a host"
                                        " for this request. Is the appropriate"
                                        " service running?"))
-
         return hosts[int(random.random() * len(hosts))]
+
+    def schedule(self, context, topic, method, *_args, **kwargs):
+        host = self._schedule(context, topic, None, **kwargs)
+        driver.cast_to_host(context, topic, host, method, **kwargs)
+
+    def schedule_run_instance(self, context, request_spec, *_args, **kwargs):
+        """Builds and starts instances on selected hosts"""
+        num_instances = request_spec.get('num_instances', 1)
+        instances = []
+        for num in xrange(num_instances):
+            host = self._schedule(context, 'compute', request_spec, **kwargs)
+            instance = self.create_instance_db_entry(context, request_spec)
+            driver.cast_to_compute_host(context, host,
+                    'run_instance', instance_id=instance['id'], **kwargs)
+            instances.append(driver.encode_instance(instance))
+        return instances

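The new `schedule_run_instance` above decouples host selection from instance creation: for each requested instance it picks a live host, creates the DB entry first (so the API can return records immediately), then casts `run_instance` to the chosen host. A runnable sketch of that fan-out loop, with hypothetical stand-ins for the DB and RPC layers:

```python
import random


def schedule_run_instance(request_spec, hosts_up, create_db_entry, cast):
    # For each requested instance: pick a random live host, create the
    # DB record, then cast the build to that host asynchronously.
    num_instances = request_spec.get('num_instances', 1)
    instances = []
    for _ in range(num_instances):
        if not hosts_up:
            raise RuntimeError("no valid host")
        host = random.choice(hosts_up)
        instance = create_db_entry(request_spec)
        cast(host, 'run_instance', instance['id'])
        instances.append(instance)
    return instances


# Usage with fake DB and RPC layers:
_db = []
casts = []


def fake_create(spec):
    _db.append({'id': len(_db) + 1, 'name': spec['name']})
    return _db[-1]


result = schedule_run_instance(
    {'num_instances': 2, 'name': 'test'},
    ['host1', 'host2'],
    fake_create,
    lambda host, method, iid: casts.append((host, method, iid)))
print(len(result), len(casts))  # 2 2
```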
=== modified file 'nova/tests/api/openstack/contrib/test_createserverext.py'
--- nova/tests/api/openstack/contrib/test_createserverext.py	2011-09-21 20:59:40 +0000
+++ nova/tests/api/openstack/contrib/test_createserverext.py	2011-09-23 07:08:19 +0000
@@ -27,6 +27,7 @@
 from nova import db
 from nova import exception
 from nova import flags
+from nova import rpc
 from nova import test
 import nova.api.openstack
 from nova.tests.api.openstack import fakes
@@ -115,13 +116,15 @@
             if 'user_data' in kwargs:
                 self.user_data = kwargs['user_data']

-            return [{'id': '1234', 'display_name': 'fakeinstance',
+            resv_id = None
+
+            return ([{'id': '1234', 'display_name': 'fakeinstance',
                      'uuid': FAKE_UUID,
                      'user_id': 'fake',
                      'project_id': 'fake',
                      'created_at': "",
                      'updated_at': "",
-                     'progress': 0}]
+                     'progress': 0}], resv_id)

         def set_admin_password(self, *args, **kwargs):
             pass
@@ -134,7 +137,7 @@
         compute_api = MockComputeAPI()
         self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
+            nova.api.openstack.servers.Controller,
             '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
         return compute_api

@@ -393,7 +396,8 @@
                        return_instance_add_security_group)
         body_dict = self._create_security_group_request_dict(security_groups)
         request = self._get_create_request_json(body_dict)
-        response = request.get_response(fakes.wsgi_app())
+        compute_api, response = \
+            self._run_create_instance_with_mock_compute_api(request)
         self.assertEquals(response.status_int, 202)

     def test_get_server_by_id_verify_security_groups_json(self):

=== modified file 'nova/tests/api/openstack/contrib/test_volumes.py'
--- nova/tests/api/openstack/contrib/test_volumes.py	2011-09-21 20:59:40 +0000
+++ nova/tests/api/openstack/contrib/test_volumes.py	2011-09-23 07:08:19 +0000
@@ -31,8 +31,12 @@


 def fake_compute_api_create(cls, context, instance_type, image_href, **kwargs):
+    global _block_device_mapping_seen
+    _block_device_mapping_seen = kwargs.get('block_device_mapping')
+
     inst_type = instance_types.get_instance_type_by_flavor_id(2)
-    return [{'id': 1,
+    resv_id = None
+    return ([{'id': 1,
              'display_name': 'test_server',
              'uuid': fake_gen_uuid(),
              'instance_type': dict(inst_type),
@@ -44,7 +48,7 @@
              'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0),
              'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0),
              'progress': 0
-             }]
+             }], resv_id)


 class BootFromVolumeTest(test.TestCase):
@@ -64,6 +68,8 @@
                     delete_on_termination=False,
                     )]
                 ))
+        global _block_device_mapping_seen
+        _block_device_mapping_seen = None
         req = webob.Request.blank('/v1.1/fake/os-volumes_boot')
         req.method = 'POST'
         req.body = json.dumps(body)
@@ -76,3 +82,7 @@
         self.assertEqual(u'test_server', server['name'])
         self.assertEqual(3, int(server['image']['id']))
         self.assertEqual(FLAGS.password_length, len(server['adminPass']))
+        self.assertEqual(len(_block_device_mapping_seen), 1)
+        self.assertEqual(_block_device_mapping_seen[0]['volume_id'], 1)
+        self.assertEqual(_block_device_mapping_seen[0]['device_name'],
+                         '/dev/vda')

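The test above captures the `block_device_mapping` kwarg by having the stubbed compute-API call write it into a module-level global, which the test resets before the request and asserts on afterwards. A self-contained sketch of that capture pattern (all names here are illustrative, not nova's):

```python
_seen_kwargs = None


def fake_create(*args, **kwargs):
    # Stub standing in for the real API call: record what it was
    # called with so the test can assert on it afterwards.
    global _seen_kwargs
    _seen_kwargs = kwargs.get('block_device_mapping')
    return {'id': 1}


def run_test():
    global _seen_kwargs
    _seen_kwargs = None  # reset so a previous test can't leak state
    fake_create(block_device_mapping=[{'volume_id': 1,
                                       'device_name': '/dev/vda'}])
    assert len(_seen_kwargs) == 1
    assert _seen_kwargs[0]['volume_id'] == 1
    return _seen_kwargs


captured = run_test()
```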
=== modified file 'nova/tests/api/openstack/test_extensions.py'
--- nova/tests/api/openstack/test_extensions.py	2011-09-21 15:54:30 +0000
+++ nova/tests/api/openstack/test_extensions.py	2011-09-23 07:08:19 +0000
@@ -102,6 +102,7 @@
             "VirtualInterfaces",
             "Volumes",
             "VolumeTypes",
+            "Zones",
             ]
         self.ext_list.sort()

=== modified file 'nova/tests/api/openstack/test_server_actions.py'
--- nova/tests/api/openstack/test_server_actions.py	2011-09-16 19:45:46 +0000
+++ nova/tests/api/openstack/test_server_actions.py	2011-09-23 07:08:19 +0000
@@ -9,7 +9,7 @@
 from nova import utils
 from nova import exception
 from nova import flags
-from nova.api.openstack import create_instance_helper
+from nova.api.openstack import servers
 from nova.compute import vm_states
 from nova.compute import instance_types
 import nova.db.api
@@ -970,7 +970,7 @@
 class TestServerActionXMLDeserializerV11(test.TestCase):

     def setUp(self):
-        self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
+        self.deserializer = servers.ServerXMLDeserializerV11()

     def tearDown(self):
         pass

=== modified file 'nova/tests/api/openstack/test_servers.py'
--- nova/tests/api/openstack/test_servers.py	2011-09-22 15:41:34 +0000
+++ nova/tests/api/openstack/test_servers.py	2011-09-23 07:08:19 +0000
@@ -33,7 +33,6 @@
 from nova import test
 from nova import utils
 import nova.api.openstack
-from nova.api.openstack import create_instance_helper
 from nova.api.openstack import servers
 from nova.api.openstack import wsgi
 from nova.api.openstack import xmlutil
@@ -1576,10 +1575,15 @@

     def _setup_for_create_instance(self):
         """Shared implementation for tests below that create instance"""
+
+        self.instance_cache_num = 0
+        self.instance_cache = {}
+
         def instance_create(context, inst):
             inst_type = instance_types.get_instance_type_by_flavor_id(3)
             image_ref = 'http://localhost/images/2'
-            return {'id': 1,
+            self.instance_cache_num += 1
+            instance = {'id': self.instance_cache_num,
                     'display_name': 'server_test',
                     'uuid': FAKE_UUID,
                     'instance_type': dict(inst_type),
@@ -1588,11 +1592,32 @@
                     'image_ref': image_ref,
                     'user_id': 'fake',
                     'project_id': 'fake',
+                    'reservation_id': inst['reservation_id'],
                     "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
                     "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
                     "config_drive": self.config_drive,
                     "progress": 0
                     }
+            self.instance_cache[instance['id']] = instance
+            return instance
+
+        def instance_get(context, instance_id):
+            """Stub for compute/api create() pulling in instance after
+            scheduling
+            """
+            return self.instance_cache[instance_id]
+
+        def rpc_call_wrapper(context, topic, msg):
+            """Stub out the scheduler creating the instance entry"""
+            if topic == FLAGS.scheduler_topic and \
+                    msg['method'] == 'run_instance':
+                request_spec = msg['args']['request_spec']
+                num_instances = request_spec.get('num_instances', 1)
+                instances = []
+                for x in xrange(num_instances):
+                    instances.append(instance_create(context,
+                        request_spec['instance_properties']))
+                return instances

         def server_update(context, id, params):
             return instance_create(context, id)
@@ -1615,18 +1640,20 @@
         self.stubs.Set(nova.db.api, 'project_get_networks',
                        project_get_networks)
         self.stubs.Set(nova.db.api, 'instance_create', instance_create)
+        self.stubs.Set(nova.db.api, 'instance_get', instance_get)
         self.stubs.Set(nova.rpc, 'cast', fake_method)
-        self.stubs.Set(nova.rpc, 'call', fake_method)
+        self.stubs.Set(nova.rpc, 'call', rpc_call_wrapper)
         self.stubs.Set(nova.db.api, 'instance_update', server_update)
         self.stubs.Set(nova.db.api, 'queue_get_for', queue_get_for)
         self.stubs.Set(nova.network.manager.VlanManager, 'allocate_fixed_ip',
                        fake_method)
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
-            "_get_kernel_ramdisk_from_image", kernel_ramdisk_mapping)
+            servers.Controller,
+            "_get_kernel_ramdisk_from_image",
+            kernel_ramdisk_mapping)
         self.stubs.Set(nova.compute.api.API, "_find_host", find_host)

-    def _test_create_instance_helper(self):
+    def _test_create_instance(self):
         self._setup_for_create_instance()

         body = dict(server=dict(
@@ -1650,7 +1677,7 @@
         self.assertEqual(FAKE_UUID, server['uuid'])

     def test_create_instance(self):
-        self._test_create_instance_helper()
+        self._test_create_instance()

     def test_create_instance_has_uuid(self):
         """Tests at the db-layer instead of API layer since that's where the
@@ -1662,51 +1689,134 @@
         expected = FAKE_UUID
         self.assertEqual(instance['uuid'], expected)

-    def test_create_instance_via_zones(self):
-        """Server generated ReservationID"""
-        self._setup_for_create_instance()
-        self.flags(allow_admin_api=True)
-
-        body = dict(server=dict(
-            name='server_test', imageId=3, flavorId=2,
-            metadata={'hello': 'world', 'open': 'stack'},
-            personality={}))
-        req = webob.Request.blank('/v1.0/zones/boot')
-        req.method = 'POST'
-        req.body = json.dumps(body)
-        req.headers["content-type"] = "application/json"
-
-        res = req.get_response(fakes.wsgi_app())
-
-        reservation_id = json.loads(res.body)['reservation_id']
-        self.assertEqual(res.status_int, 200)
+    def test_create_multiple_instances(self):
+        """Test creating multiple instances but not asking for
+        reservation_id
+        """
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'min_count': 2,
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': []
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        res = req.get_response(fakes.wsgi_app())
+        self.assertEqual(res.status_int, 202)
+        body = json.loads(res.body)
+        self.assertIn('server', body)
+
+    def test_create_multiple_instances_resv_id_return(self):
+        """Test creating multiple instances with asking for
+        reservation_id
+        """
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'min_count': 2,
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        res = req.get_response(fakes.wsgi_app())
+        self.assertEqual(res.status_int, 202)
+        body = json.loads(res.body)
+        reservation_id = body.get('reservation_id')
         self.assertNotEqual(reservation_id, "")
         self.assertNotEqual(reservation_id, None)
         self.assertTrue(len(reservation_id) > 1)

-    def test_create_instance_via_zones_with_resid(self):
-        """User supplied ReservationID"""
+    def test_create_instance_with_user_supplied_reservation_id(self):
+        """Non-admin supplied reservation_id should be ignored."""
         self._setup_for_create_instance()
-        self.flags(allow_admin_api=True)
-
-        body = dict(server=dict(
-            name='server_test', imageId=3, flavorId=2,
-            metadata={'hello': 'world', 'open': 'stack'},
-            personality={}, reservation_id='myresid'))
-        req = webob.Request.blank('/v1.0/zones/boot')
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'reservation_id': 'myresid',
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
         req.method = 'POST'
         req.body = json.dumps(body)
         req.headers["content-type"] = "application/json"

         res = req.get_response(fakes.wsgi_app())
-
+        self.assertEqual(res.status_int, 202)
+        res_body = json.loads(res.body)
+        self.assertIn('reservation_id', res_body)
+        self.assertNotEqual(res_body['reservation_id'], 'myresid')
+
+    def test_create_instance_with_admin_supplied_reservation_id(self):
+        """Admin supplied reservation_id should be honored."""
+        self._setup_for_create_instance()
+
+        image_href = 'http://localhost/v1.1/123/images/2'
+        flavor_ref = 'http://localhost/123/flavors/3'
+        body = {
+            'server': {
+                'name': 'server_test',
+                'imageRef': image_href,
+                'flavorRef': flavor_ref,
+                'metadata': {'hello': 'world',
+                             'open': 'stack'},
+                'personality': [],
+                'reservation_id': 'myresid',
+                'return_reservation_id': True
+            }
+        }
+
+        req = webob.Request.blank('/v1.1/123/servers')
+        req.method = 'POST'
+        req.body = json.dumps(body)
+        req.headers["content-type"] = "application/json"
+
+        context = nova.context.RequestContext('testuser', 'testproject',
+                is_admin=True)
+        res = req.get_response(fakes.wsgi_app(fake_auth_context=context))
+        self.assertEqual(res.status_int, 202)
         reservation_id = json.loads(res.body)['reservation_id']
-        self.assertEqual(res.status_int, 200)
         self.assertEqual(reservation_id, "myresid")

     def test_create_instance_no_key_pair(self):
         fakes.stub_out_key_pair_funcs(self.stubs, have_key_pair=False)
-        self._test_create_instance_helper()
+        self._test_create_instance()

     def test_create_instance_no_name(self):
         self._setup_for_create_instance()
@@ -2792,7 +2902,7 @@
 class TestServerCreateRequestXMLDeserializerV10(unittest.TestCase):

     def setUp(self):
-        self.deserializer = create_instance_helper.ServerXMLDeserializer()
+        self.deserializer = servers.ServerXMLDeserializer()

     def test_minimal_request(self):
         serial_request = """
@@ -3078,7 +3188,7 @@

     def setUp(self):
         super(TestServerCreateRequestXMLDeserializerV11, self).setUp()
-        self.deserializer = create_instance_helper.ServerXMLDeserializerV11()
+        self.deserializer = servers.ServerXMLDeserializerV11()

     def test_minimal_request(self):
         serial_request = """
@@ -3552,10 +3662,12 @@
         else:
             self.injected_files = None

-        return [{'id': '1234', 'display_name': 'fakeinstance',
+        resv_id = None
+
+        return ([{'id': '1234', 'display_name': 'fakeinstance',
                  'user_id': 'fake',
                  'project_id': 'fake',
-                 'uuid': FAKE_UUID}]
+                 'uuid': FAKE_UUID}], resv_id)

     def set_admin_password(self, *args, **kwargs):
         pass
@@ -3568,8 +3680,9 @@
         compute_api = MockComputeAPI()
         self.stubs.Set(nova.compute, 'API', make_stub_method(compute_api))
         self.stubs.Set(
-            nova.api.openstack.create_instance_helper.CreateInstanceHelper,
-            '_get_kernel_ramdisk_from_image', make_stub_method((1, 1)))
+            servers.Controller,
+            '_get_kernel_ramdisk_from_image',
+            make_stub_method((1, 1)))
         return compute_api

     def _create_personality_request_dict(self, personality_files):
@@ -3830,8 +3943,8 @@
     @staticmethod
     def _get_k_r(image_meta):
         """Rebinding function to a shorter name for convenience"""
-        kernel_id, ramdisk_id = create_instance_helper.CreateInstanceHelper. \
-            _do_get_kernel_ramdisk_from_image(image_meta)
+        kernel_id, ramdisk_id = servers.Controller.\
+            _do_get_kernel_ramdisk_from_image(image_meta)
         return kernel_id, ramdisk_id

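The rewritten tests above stub `nova.rpc.call` with `rpc_call_wrapper`, so when the API sends `run_instance` to the scheduler topic, the stub creates the instance records locally, caches them by id, and returns them, mimicking the scheduler round-trip. A self-contained sketch of that stub pattern (names and topic string are illustrative):

```python
SCHEDULER_TOPIC = 'scheduler'


class FakeScheduler:
    """Stands in for the rpc.call stub: when the API 'calls'
    run_instance on the scheduler topic, create the DB rows locally
    and return them, so tests never need a real scheduler."""

    def __init__(self):
        self.cache = {}    # instance_get() stub would read from this
        self.next_id = 0

    def instance_create(self, props):
        self.next_id += 1
        inst = {'id': self.next_id, 'name': props['name']}
        self.cache[inst['id']] = inst
        return inst

    def call(self, topic, msg):
        if topic == SCHEDULER_TOPIC and msg['method'] == 'run_instance':
            spec = msg['args']['request_spec']
            n = spec.get('num_instances', 1)
            return [self.instance_create(spec['instance_properties'])
                    for _ in range(n)]


sched = FakeScheduler()
instances = sched.call(SCHEDULER_TOPIC, {
    'method': 'run_instance',
    'args': {'request_spec': {
        'num_instances': 2,
        'instance_properties': {'name': 'server_test'}}}})
print([i['id'] for i in instances])  # [1, 2]
```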
3120 | === modified file 'nova/tests/integrated/api/client.py' | |||
3121 | --- nova/tests/integrated/api/client.py 2011-08-17 14:55:27 +0000 | |||
3122 | +++ nova/tests/integrated/api/client.py 2011-09-23 07:08:19 +0000 | |||
3123 | @@ -16,6 +16,7 @@ | |||
3124 | 16 | 16 | ||
3125 | 17 | import json | 17 | import json |
3126 | 18 | import httplib | 18 | import httplib |
3127 | 19 | import urllib | ||
3128 | 19 | import urlparse | 20 | import urlparse |
3129 | 20 | 21 | ||
3130 | 21 | from nova import log as logging | 22 | from nova import log as logging |
3131 | @@ -100,7 +101,7 @@ | |||
3132 | 100 | 101 | ||
3133 | 101 | relative_url = parsed_url.path | 102 | relative_url = parsed_url.path |
3134 | 102 | if parsed_url.query: | 103 | if parsed_url.query: |
3136 | 103 | relative_url = relative_url + parsed_url.query | 104 | relative_url = relative_url + "?" + parsed_url.query |
3137 | 104 | LOG.info(_("Doing %(method)s on %(relative_url)s") % locals()) | 105 | LOG.info(_("Doing %(method)s on %(relative_url)s") % locals()) |
3138 | 105 | if body: | 106 | if body: |
3139 | 106 | LOG.info(_("Body: %s") % body) | 107 | LOG.info(_("Body: %s") % body) |
3140 | @@ -205,12 +206,24 @@ | |||
3141 | 205 | def get_server(self, server_id): | 206 | def get_server(self, server_id): |
3142 | 206 | return self.api_get('/servers/%s' % server_id)['server'] | 207 | return self.api_get('/servers/%s' % server_id)['server'] |
3143 | 207 | 208 | ||
3145 | 208 | def get_servers(self, detail=True): | 209 | def get_servers(self, detail=True, search_opts=None): |
3146 | 209 | rel_url = '/servers/detail' if detail else '/servers' | 210 | rel_url = '/servers/detail' if detail else '/servers' |
3147 | 211 | |||
3148 | 212 | if search_opts is not None: | ||
3149 | 213 | qparams = {} | ||
3150 | 214 | for opt, val in search_opts.iteritems(): | ||
3151 | 215 | qparams[opt] = val | ||
3152 | 216 | if qparams: | ||
3153 | 217 | query_string = "?%s" % urllib.urlencode(qparams) | ||
3154 | 218 | rel_url += query_string | ||
3155 | 210 | return self.api_get(rel_url)['servers'] | 219 | return self.api_get(rel_url)['servers'] |
3156 | 211 | 220 | ||
3157 | 212 | def post_server(self, server): | 221 | def post_server(self, server): |
3159 | 213 | return self.api_post('/servers', server)['server'] | 222 | response = self.api_post('/servers', server) |
3160 | 223 | if 'reservation_id' in response: | ||
3161 | 224 | return response | ||
3162 | 225 | else: | ||
3163 | 226 | return response['server'] | ||
3164 | 214 | 227 | ||
3165 | 215 | def put_server(self, server_id, server): | 228 | def put_server(self, server_id, server): |
3166 | 216 | return self.api_put('/servers/%s' % server_id, server) | 229 | return self.api_put('/servers/%s' % server_id, server) |
3167 | 217 | 230 | ||
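The client.py change above fixes a bug where the query string was reattached to the path without the separating "?", and adds search_opts support via urllib.urlencode. A standalone sketch of the corrected URL handling; the helper name is hypothetical, and it is shown with Python 3's urllib.parse while the branch itself uses Python 2's urllib/urlparse:

```python
from urllib.parse import urlparse, urlencode

def build_relative_url(url, search_opts=None):
    """Rebuild a relative URL, keeping any query string intact."""
    parsed = urlparse(url)
    relative = parsed.path
    if parsed.query:
        # the bug fixed above: the '?' separator was missing
        relative = relative + '?' + parsed.query
    if search_opts:
        sep = '&' if '?' in relative else '?'
        relative += sep + urlencode(search_opts)
    return relative
```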
3168 | === modified file 'nova/tests/integrated/test_servers.py' | |||
3169 | --- nova/tests/integrated/test_servers.py 2011-09-08 13:53:31 +0000 | |||
3170 | +++ nova/tests/integrated/test_servers.py 2011-09-23 07:08:19 +0000 | |||
3171 | @@ -436,6 +436,42 @@ | |||
3172 | 436 | # Cleanup | 436 | # Cleanup |
3173 | 437 | self._delete_server(server_id) | 437 | self._delete_server(server_id) |
3174 | 438 | 438 | ||
3175 | 439 | def test_create_multiple_servers(self): | ||
3176 | 440 | """Creates multiple servers and checks for reservation_id""" | ||
3177 | 441 | |||
3178 | 442 | # Create 2 servers, setting 'return_reservation_id, which should | ||
3179 | 443 | # return a reservation_id | ||
3180 | 444 | server = self._build_minimal_create_server_request() | ||
3181 | 445 | server['min_count'] = 2 | ||
3182 | 446 | server['return_reservation_id'] = True | ||
3183 | 447 | post = {'server': server} | ||
3184 | 448 | response = self.api.post_server(post) | ||
3185 | 449 | self.assertIn('reservation_id', response) | ||
3186 | 450 | reservation_id = response['reservation_id'] | ||
3187 | 451 | self.assertNotIn(reservation_id, ['', None]) | ||
3188 | 452 | |||
3189 | 453 | # Create 1 more server, which should not return a reservation_id | ||
3190 | 454 | server = self._build_minimal_create_server_request() | ||
3191 | 455 | post = {'server': server} | ||
3192 | 456 | created_server = self.api.post_server(post) | ||
3193 | 457 | self.assertTrue(created_server['id']) | ||
3194 | 458 | created_server_id = created_server['id'] | ||
3195 | 459 | |||
3196 | 460 | # lookup servers created by the first request. | ||
3197 | 461 | servers = self.api.get_servers(detail=True, | ||
3198 | 462 | search_opts={'reservation_id': reservation_id}) | ||
3199 | 463 | server_map = dict((server['id'], server) for server in servers) | ||
3200 | 464 | found_server = server_map.get(created_server_id) | ||
3201 | 465 | # The server from the 2nd request should not be there. | ||
3202 | 466 | self.assertEqual(found_server, None) | ||
3203 | 467 | # Should have found 2 servers. | ||
3204 | 468 | self.assertEqual(len(server_map), 2) | ||
3205 | 469 | |||
3206 | 470 | # Cleanup | ||
3207 | 471 | self._delete_server(created_server_id) | ||
3208 | 472 | for server_id in server_map.iterkeys(): | ||
3209 | 473 | self._delete_server(server_id) | ||
3210 | 474 | |||
3211 | 439 | 475 | ||
3212 | 440 | if __name__ == "__main__": | 476 | if __name__ == "__main__": |
3213 | 441 | unittest.main() | 477 | unittest.main() |
3214 | 442 | 478 | ||
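The integrated test above depends on the new create-response shape: a multi-server request with return_reservation_id yields {'reservation_id': ...} instead of a server body, and the test client's post_server dispatches on that. A hedged sketch of the dispatch (real responses carry more fields than shown):

```python
def unpack_create_response(response):
    """Return either the reservation info or the single server body.

    Sketch of what the test client's post_server does after the
    change above; not the client's actual method.
    """
    if 'reservation_id' in response:
        return response           # multi-instance create: opaque handle only
    return response['server']     # normal create: full server body
```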
3215 | === modified file 'nova/tests/scheduler/test_abstract_scheduler.py' | |||
3216 | --- nova/tests/scheduler/test_abstract_scheduler.py 2011-09-08 19:40:45 +0000 | |||
3217 | +++ nova/tests/scheduler/test_abstract_scheduler.py 2011-09-23 07:08:19 +0000 | |||
3218 | @@ -20,6 +20,7 @@ | |||
3219 | 20 | 20 | ||
3220 | 21 | import nova.db | 21 | import nova.db |
3221 | 22 | 22 | ||
3222 | 23 | from nova import context | ||
3223 | 23 | from nova import exception | 24 | from nova import exception |
3224 | 24 | from nova import rpc | 25 | from nova import rpc |
3225 | 25 | from nova import test | 26 | from nova import test |
3226 | @@ -102,7 +103,7 @@ | |||
3227 | 102 | was_called = False | 103 | was_called = False |
3228 | 103 | 104 | ||
3229 | 104 | 105 | ||
3231 | 105 | def fake_provision_resource(context, item, instance_id, request_spec, kwargs): | 106 | def fake_provision_resource(context, item, request_spec, kwargs): |
3232 | 106 | global was_called | 107 | global was_called |
3233 | 107 | was_called = True | 108 | was_called = True |
3234 | 108 | 109 | ||
3235 | @@ -118,8 +119,7 @@ | |||
3236 | 118 | was_called = True | 119 | was_called = True |
3237 | 119 | 120 | ||
3238 | 120 | 121 | ||
3241 | 121 | def fake_provision_resource_from_blob(context, item, instance_id, | 122 | def fake_provision_resource_from_blob(context, item, request_spec, kwargs): |
3240 | 122 | request_spec, kwargs): | ||
3242 | 123 | global was_called | 123 | global was_called |
3243 | 124 | was_called = True | 124 | was_called = True |
3244 | 125 | 125 | ||
3245 | @@ -185,7 +185,7 @@ | |||
3246 | 185 | zm = FakeZoneManager() | 185 | zm = FakeZoneManager() |
3247 | 186 | sched.set_zone_manager(zm) | 186 | sched.set_zone_manager(zm) |
3248 | 187 | 187 | ||
3250 | 188 | fake_context = {} | 188 | fake_context = context.RequestContext('user', 'project') |
3251 | 189 | build_plan = sched.select(fake_context, | 189 | build_plan = sched.select(fake_context, |
3252 | 190 | {'instance_type': {'memory_mb': 512}, | 190 | {'instance_type': {'memory_mb': 512}, |
3253 | 191 | 'num_instances': 4}) | 191 | 'num_instances': 4}) |
3254 | @@ -229,9 +229,10 @@ | |||
3255 | 229 | zm = FakeEmptyZoneManager() | 229 | zm = FakeEmptyZoneManager() |
3256 | 230 | sched.set_zone_manager(zm) | 230 | sched.set_zone_manager(zm) |
3257 | 231 | 231 | ||
3259 | 232 | fake_context = {} | 232 | fake_context = context.RequestContext('user', 'project') |
3260 | 233 | request_spec = {} | ||
3261 | 233 | self.assertRaises(driver.NoValidHost, sched.schedule_run_instance, | 234 | self.assertRaises(driver.NoValidHost, sched.schedule_run_instance, |
3263 | 234 | fake_context, 1, | 235 | fake_context, request_spec, |
3264 | 235 | dict(host_filter=None, instance_type={})) | 236 | dict(host_filter=None, instance_type={})) |
3265 | 236 | 237 | ||
3266 | 237 | def test_schedule_do_not_schedule_with_hint(self): | 238 | def test_schedule_do_not_schedule_with_hint(self): |
3267 | @@ -250,8 +251,8 @@ | |||
3268 | 250 | 'blob': "Non-None blob data", | 251 | 'blob': "Non-None blob data", |
3269 | 251 | } | 252 | } |
3270 | 252 | 253 | ||
3273 | 253 | result = sched.schedule_run_instance(None, 1, request_spec) | 254 | instances = sched.schedule_run_instance(None, request_spec) |
3274 | 254 | self.assertEquals(None, result) | 255 | self.assertTrue(instances) |
3275 | 255 | self.assertTrue(was_called) | 256 | self.assertTrue(was_called) |
3276 | 256 | 257 | ||
3277 | 257 | def test_provision_resource_local(self): | 258 | def test_provision_resource_local(self): |
3278 | @@ -263,7 +264,7 @@ | |||
3279 | 263 | fake_provision_resource_locally) | 264 | fake_provision_resource_locally) |
3280 | 264 | 265 | ||
3281 | 265 | request_spec = {'hostname': "foo"} | 266 | request_spec = {'hostname': "foo"} |
3283 | 266 | sched._provision_resource(None, request_spec, 1, request_spec, {}) | 267 | sched._provision_resource(None, request_spec, request_spec, {}) |
3284 | 267 | self.assertTrue(was_called) | 268 | self.assertTrue(was_called) |
3285 | 268 | 269 | ||
3286 | 269 | def test_provision_resource_remote(self): | 270 | def test_provision_resource_remote(self): |
3287 | @@ -275,7 +276,7 @@ | |||
3288 | 275 | fake_provision_resource_from_blob) | 276 | fake_provision_resource_from_blob) |
3289 | 276 | 277 | ||
3290 | 277 | request_spec = {} | 278 | request_spec = {} |
3292 | 278 | sched._provision_resource(None, request_spec, 1, request_spec, {}) | 279 | sched._provision_resource(None, request_spec, request_spec, {}) |
3293 | 279 | self.assertTrue(was_called) | 280 | self.assertTrue(was_called) |
3294 | 280 | 281 | ||
3295 | 281 | def test_provision_resource_from_blob_empty(self): | 282 | def test_provision_resource_from_blob_empty(self): |
3296 | @@ -285,7 +286,7 @@ | |||
3297 | 285 | request_spec = {} | 286 | request_spec = {} |
3298 | 286 | self.assertRaises(abstract_scheduler.InvalidBlob, | 287 | self.assertRaises(abstract_scheduler.InvalidBlob, |
3299 | 287 | sched._provision_resource_from_blob, | 288 | sched._provision_resource_from_blob, |
3301 | 288 | None, {}, 1, {}, {}) | 289 | None, {}, {}, {}) |
3302 | 289 | 290 | ||
3303 | 290 | def test_provision_resource_from_blob_with_local_blob(self): | 291 | def test_provision_resource_from_blob_with_local_blob(self): |
3304 | 291 | """ | 292 | """ |
3305 | @@ -303,20 +304,21 @@ | |||
3306 | 303 | # return fake instances | 304 | # return fake instances |
3307 | 304 | return {'id': 1, 'uuid': 'f874093c-7b17-49c0-89c3-22a5348497f9'} | 305 | return {'id': 1, 'uuid': 'f874093c-7b17-49c0-89c3-22a5348497f9'} |
3308 | 305 | 306 | ||
3310 | 306 | def fake_rpc_cast(*args, **kwargs): | 307 | def fake_cast_to_compute_host(*args, **kwargs): |
3311 | 307 | pass | 308 | pass |
3312 | 308 | 309 | ||
3313 | 309 | self.stubs.Set(sched, '_decrypt_blob', | 310 | self.stubs.Set(sched, '_decrypt_blob', |
3314 | 310 | fake_decrypt_blob_returns_local_info) | 311 | fake_decrypt_blob_returns_local_info) |
3315 | 312 | self.stubs.Set(driver, 'cast_to_compute_host', | ||
3316 | 313 | fake_cast_to_compute_host) | ||
3317 | 311 | self.stubs.Set(compute_api.API, | 314 | self.stubs.Set(compute_api.API, |
3318 | 312 | 'create_db_entry_for_new_instance', | 315 | 'create_db_entry_for_new_instance', |
3319 | 313 | fake_create_db_entry_for_new_instance) | 316 | fake_create_db_entry_for_new_instance) |
3320 | 314 | self.stubs.Set(rpc, 'cast', fake_rpc_cast) | ||
3321 | 315 | 317 | ||
3322 | 316 | build_plan_item = {'blob': "Non-None blob data"} | 318 | build_plan_item = {'blob': "Non-None blob data"} |
3323 | 317 | request_spec = {'image': {}, 'instance_properties': {}} | 319 | request_spec = {'image': {}, 'instance_properties': {}} |
3324 | 318 | 320 | ||
3326 | 319 | sched._provision_resource_from_blob(None, build_plan_item, 1, | 321 | sched._provision_resource_from_blob(None, build_plan_item, |
3327 | 320 | request_spec, {}) | 322 | request_spec, {}) |
3328 | 321 | self.assertTrue(was_called) | 323 | self.assertTrue(was_called) |
3329 | 322 | 324 | ||
3330 | @@ -335,7 +337,7 @@ | |||
3331 | 335 | 337 | ||
3332 | 336 | request_spec = {'blob': "Non-None blob data"} | 338 | request_spec = {'blob': "Non-None blob data"} |
3333 | 337 | 339 | ||
3335 | 338 | sched._provision_resource_from_blob(None, request_spec, 1, | 340 | sched._provision_resource_from_blob(None, request_spec, |
3336 | 339 | request_spec, {}) | 341 | request_spec, {}) |
3337 | 340 | self.assertTrue(was_called) | 342 | self.assertTrue(was_called) |
3338 | 341 | 343 | ||
3339 | @@ -352,7 +354,7 @@ | |||
3340 | 352 | 354 | ||
3341 | 353 | request_spec = {'child_blob': True, 'child_zone': True} | 355 | request_spec = {'child_blob': True, 'child_zone': True} |
3342 | 354 | 356 | ||
3344 | 355 | sched._provision_resource_from_blob(None, request_spec, 1, | 357 | sched._provision_resource_from_blob(None, request_spec, |
3345 | 356 | request_spec, {}) | 358 | request_spec, {}) |
3346 | 357 | self.assertTrue(was_called) | 359 | self.assertTrue(was_called) |
3347 | 358 | 360 | ||
3348 | @@ -386,7 +388,7 @@ | |||
3349 | 386 | zm.service_states = {} | 388 | zm.service_states = {} |
3350 | 387 | sched.set_zone_manager(zm) | 389 | sched.set_zone_manager(zm) |
3351 | 388 | 390 | ||
3353 | 389 | fake_context = {} | 391 | fake_context = context.RequestContext('user', 'project') |
3354 | 390 | build_plan = sched.select(fake_context, | 392 | build_plan = sched.select(fake_context, |
3355 | 391 | {'instance_type': {'memory_mb': 512}, | 393 | {'instance_type': {'memory_mb': 512}, |
3356 | 392 | 'num_instances': 4}) | 394 | 'num_instances': 4}) |
3357 | @@ -394,6 +396,45 @@ | |||
3358 | 394 | # 0 from local zones, 12 from remotes | 396 | # 0 from local zones, 12 from remotes |
3359 | 395 | self.assertEqual(12, len(build_plan)) | 397 | self.assertEqual(12, len(build_plan)) |
3360 | 396 | 398 | ||
3361 | 399 | def test_run_instance_non_admin(self): | ||
3362 | 400 | """Test creating an instance locally using run_instance, passing | ||
3363 | 401 | a non-admin context. DB actions should work.""" | ||
3364 | 402 | sched = FakeAbstractScheduler() | ||
3365 | 403 | |||
3366 | 404 | def fake_cast_to_compute_host(*args, **kwargs): | ||
3367 | 405 | pass | ||
3368 | 406 | |||
3369 | 407 | def fake_zone_get_all_zero(context): | ||
3370 | 408 | # make sure this is called with admin context, even though | ||
3371 | 409 | # we're using user context below | ||
3372 | 410 | self.assertTrue(context.is_admin) | ||
3373 | 411 | return [] | ||
3374 | 412 | |||
3375 | 413 | self.stubs.Set(driver, 'cast_to_compute_host', | ||
3376 | 414 | fake_cast_to_compute_host) | ||
3377 | 415 | self.stubs.Set(sched, '_call_zone_method', fake_call_zone_method) | ||
3378 | 416 | self.stubs.Set(nova.db, 'zone_get_all', fake_zone_get_all_zero) | ||
3379 | 417 | |||
3380 | 418 | zm = FakeZoneManager() | ||
3381 | 419 | sched.set_zone_manager(zm) | ||
3382 | 420 | |||
3383 | 421 | fake_context = context.RequestContext('user', 'project') | ||
3384 | 422 | |||
3385 | 423 | request_spec = { | ||
3386 | 424 | 'image': {'properties': {}}, | ||
3387 | 425 | 'security_group': [], | ||
3388 | 426 | 'instance_properties': { | ||
3389 | 427 | 'project_id': fake_context.project_id, | ||
3390 | 428 | 'user_id': fake_context.user_id}, | ||
3391 | 429 | 'instance_type': {'memory_mb': 256}, | ||
3392 | 430 | 'filter_driver': 'nova.scheduler.host_filter.AllHostsFilter' | ||
3393 | 431 | } | ||
3394 | 432 | |||
3395 | 433 | instances = sched.schedule_run_instance(fake_context, request_spec) | ||
3396 | 434 | self.assertEqual(len(instances), 1) | ||
3397 | 435 | self.assertFalse(instances[0].get('_is_precooked', False)) | ||
3398 | 436 | nova.db.instance_destroy(fake_context, instances[0]['id']) | ||
3399 | 437 | |||
3400 | 397 | 438 | ||
3401 | 398 | class BaseSchedulerTestCase(test.TestCase): | 439 | class BaseSchedulerTestCase(test.TestCase): |
3402 | 399 | """Test case for Base Scheduler.""" | 440 | """Test case for Base Scheduler.""" |
3403 | 400 | 441 | ||
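test_run_instance_non_admin above asserts that zone_get_all is invoked with an admin context even though the caller's context is not admin. A minimal sketch of that context-elevation pattern, using toy classes of my own rather than nova's context module:

```python
class RequestContext:
    """Toy stand-in for a request context carrying admin state."""

    def __init__(self, user_id, project_id, is_admin=False):
        self.user_id = user_id
        self.project_id = project_id
        self.is_admin = is_admin

    def elevated(self):
        """Return an admin copy for privileged calls (e.g. zone_get_all)."""
        return RequestContext(self.user_id, self.project_id, is_admin=True)


def zone_get_all(context):
    # privileged DB call: the scheduler elevates before making it,
    # which is exactly what the test's fake_zone_get_all_zero checks
    assert context.is_admin
    return []


user_ctx = RequestContext('user', 'project')
zones = zone_get_all(user_ctx.elevated())
```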
3404 | === modified file 'nova/tests/scheduler/test_least_cost_scheduler.py' | |||
3405 | --- nova/tests/scheduler/test_least_cost_scheduler.py 2011-08-15 22:31:24 +0000 | |||
3406 | +++ nova/tests/scheduler/test_least_cost_scheduler.py 2011-09-23 07:08:19 +0000 | |||
3407 | @@ -134,7 +134,7 @@ | |||
3408 | 134 | 134 | ||
3409 | 135 | expected = [] | 135 | expected = [] |
3410 | 136 | for idx, (hostname, services) in enumerate(hosts): | 136 | for idx, (hostname, services) in enumerate(hosts): |
3412 | 137 | caps = copy.deepcopy(services["compute"]) | 137 | caps = copy.deepcopy(services) |
3413 | 138 | # Costs are normalized so over 10 hosts, each host with increasing | 138 | # Costs are normalized so over 10 hosts, each host with increasing |
3414 | 139 | # free ram will cost 1/N more. Since the lowest cost host has some | 139 | # free ram will cost 1/N more. Since the lowest cost host has some |
3415 | 140 | # free ram, we add in the 1/N for the base_cost | 140 | # free ram, we add in the 1/N for the base_cost |
3416 | 141 | 141 | ||
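The comment in the hunk above describes the least-cost normalization the test expects: over N hosts with increasing free RAM, each host costs 1/N more than the previous, and the cheapest host still carries a 1/N base cost. A worked sketch of that rank-based normalization; this is illustrative, not nova's actual weighting code:

```python
def normalized_costs(free_ram_by_host):
    """Scale raw per-host costs into (0, 1].

    With N hosts whose costs increase monotonically, each host ends
    up 1/N more expensive than the previous one, and the lowest-cost
    host gets the 1/N base cost the test comment mentions.
    """
    hosts = sorted(free_ram_by_host.items(), key=lambda kv: kv[1])
    n = len(hosts)
    return {host: float(rank + 1) / n
            for rank, (host, _) in enumerate(hosts)}
```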
3417 | === modified file 'nova/tests/scheduler/test_scheduler.py' | |||
3418 | --- nova/tests/scheduler/test_scheduler.py 2011-09-15 20:42:30 +0000 | |||
3419 | +++ nova/tests/scheduler/test_scheduler.py 2011-09-23 07:08:19 +0000 | |||
3420 | @@ -35,10 +35,13 @@ | |||
3421 | 35 | from nova import test | 35 | from nova import test |
3422 | 36 | from nova import rpc | 36 | from nova import rpc |
3423 | 37 | from nova import utils | 37 | from nova import utils |
3424 | 38 | from nova.db.sqlalchemy import models | ||
3425 | 38 | from nova.scheduler import api | 39 | from nova.scheduler import api |
3426 | 39 | from nova.scheduler import driver | 40 | from nova.scheduler import driver |
3427 | 40 | from nova.scheduler import manager | 41 | from nova.scheduler import manager |
3428 | 41 | from nova.scheduler import multi | 42 | from nova.scheduler import multi |
3429 | 43 | from nova.scheduler.simple import SimpleScheduler | ||
3430 | 44 | from nova.scheduler.zone import ZoneScheduler | ||
3431 | 42 | from nova.compute import power_state | 45 | from nova.compute import power_state |
3432 | 43 | from nova.compute import vm_states | 46 | from nova.compute import vm_states |
3433 | 44 | 47 | ||
3434 | @@ -53,17 +56,86 @@ | |||
3435 | 53 | FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' | 56 | FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' |
3436 | 54 | 57 | ||
3437 | 55 | 58 | ||
3440 | 56 | class FakeContext(object): | 59 | def _create_instance_dict(**kwargs): |
3441 | 57 | auth_token = None | 60 | """Create a dictionary for a test instance""" |
3442 | 61 | inst = {} | ||
3443 | 62 | # NOTE(jk0): If an integer is passed as the image_ref, the image | ||
3444 | 63 | # service will use the default image service (in this case, the fake). | ||
3445 | 64 | inst['image_ref'] = '1' | ||
3446 | 65 | inst['reservation_id'] = 'r-fakeres' | ||
3447 | 66 | inst['user_id'] = kwargs.get('user_id', 'admin') | ||
3448 | 67 | inst['project_id'] = kwargs.get('project_id', 'fake') | ||
3449 | 68 | inst['instance_type_id'] = '1' | ||
3450 | 69 | if 'host' in kwargs: | ||
3451 | 70 | inst['host'] = kwargs.get('host') | ||
3452 | 71 | inst['vcpus'] = kwargs.get('vcpus', 1) | ||
3453 | 72 | inst['memory_mb'] = kwargs.get('memory_mb', 20) | ||
3454 | 73 | inst['local_gb'] = kwargs.get('local_gb', 30) | ||
3455 | 74 | inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE) | ||
3456 | 75 | inst['power_state'] = kwargs.get('power_state', power_state.RUNNING) | ||
3457 | 76 | inst['task_state'] = kwargs.get('task_state', None) | ||
3458 | 77 | inst['availability_zone'] = kwargs.get('availability_zone', None) | ||
3459 | 78 | inst['ami_launch_index'] = 0 | ||
3460 | 79 | inst['launched_on'] = kwargs.get('launched_on', 'dummy') | ||
3461 | 80 | return inst | ||
3462 | 81 | |||
3463 | 82 | |||
3464 | 83 | def _create_volume(): | ||
3465 | 84 | """Create a test volume""" | ||
3466 | 85 | vol = {} | ||
3467 | 86 | vol['size'] = 1 | ||
3468 | 87 | vol['availability_zone'] = 'test' | ||
3469 | 88 | ctxt = context.get_admin_context() | ||
3470 | 89 | return db.volume_create(ctxt, vol)['id'] | ||
3471 | 90 | |||
3472 | 91 | |||
3473 | 92 | def _create_instance(**kwargs): | ||
3474 | 93 | """Create a test instance""" | ||
3475 | 94 | ctxt = context.get_admin_context() | ||
3476 | 95 | return db.instance_create(ctxt, _create_instance_dict(**kwargs)) | ||
3477 | 96 | |||
3478 | 97 | |||
3479 | 98 | def _create_instance_from_spec(spec): | ||
3480 | 99 | return _create_instance(**spec['instance_properties']) | ||
3481 | 100 | |||
3482 | 101 | |||
3483 | 102 | def _create_request_spec(**kwargs): | ||
3484 | 103 | return dict(instance_properties=_create_instance_dict(**kwargs)) | ||
3485 | 104 | |||
3486 | 105 | |||
3487 | 106 | def _fake_cast_to_compute_host(context, host, method, **kwargs): | ||
3488 | 107 | global _picked_host | ||
3489 | 108 | _picked_host = host | ||
3490 | 109 | |||
3491 | 110 | |||
3492 | 111 | def _fake_cast_to_volume_host(context, host, method, **kwargs): | ||
3493 | 112 | global _picked_host | ||
3494 | 113 | _picked_host = host | ||
3495 | 114 | |||
3496 | 115 | |||
3497 | 116 | def _fake_create_instance_db_entry(simple_self, context, request_spec): | ||
3498 | 117 | instance = _create_instance_from_spec(request_spec) | ||
3499 | 118 | global instance_ids | ||
3500 | 119 | instance_ids.append(instance['id']) | ||
3501 | 120 | return instance | ||
3502 | 121 | |||
3503 | 122 | |||
3504 | 123 | class FakeContext(context.RequestContext): | ||
3505 | 124 | def __init__(self, *args, **kwargs): | ||
3506 | 125 | super(FakeContext, self).__init__('user', 'project', **kwargs) | ||
3507 | 58 | 126 | ||
3508 | 59 | 127 | ||
3509 | 60 | class TestDriver(driver.Scheduler): | 128 | class TestDriver(driver.Scheduler): |
3510 | 61 | """Scheduler Driver for Tests""" | 129 | """Scheduler Driver for Tests""" |
3513 | 62 | def schedule(context, topic, *args, **kwargs): | 130 | def schedule(self, context, topic, method, *args, **kwargs): |
3514 | 63 | return 'fallback_host' | 131 | host = 'fallback_host' |
3515 | 132 | driver.cast_to_host(context, topic, host, method, **kwargs) | ||
3516 | 64 | 133 | ||
3519 | 65 | def schedule_named_method(context, topic, num): | 134 | def schedule_named_method(self, context, num=None): |
3520 | 66 | return 'named_host' | 135 | topic = 'topic' |
3521 | 136 | host = 'named_host' | ||
3522 | 137 | method = 'named_method' | ||
3523 | 138 | driver.cast_to_host(context, topic, host, method, num=num) | ||
3524 | 67 | 139 | ||
3525 | 68 | 140 | ||
3526 | 69 | class SchedulerTestCase(test.TestCase): | 141 | class SchedulerTestCase(test.TestCase): |
3527 | @@ -89,31 +161,16 @@ | |||
3528 | 89 | 161 | ||
3529 | 90 | return db.service_get(ctxt, s_ref['id']) | 162 | return db.service_get(ctxt, s_ref['id']) |
3530 | 91 | 163 | ||
3531 | 92 | def _create_instance(self, **kwargs): | ||
3532 | 93 | """Create a test instance""" | ||
3533 | 94 | ctxt = context.get_admin_context() | ||
3534 | 95 | inst = {} | ||
3535 | 96 | inst['user_id'] = 'admin' | ||
3536 | 97 | inst['project_id'] = kwargs.get('project_id', 'fake') | ||
3537 | 98 | inst['host'] = kwargs.get('host', 'dummy') | ||
3538 | 99 | inst['vcpus'] = kwargs.get('vcpus', 1) | ||
3539 | 100 | inst['memory_mb'] = kwargs.get('memory_mb', 10) | ||
3540 | 101 | inst['local_gb'] = kwargs.get('local_gb', 20) | ||
3541 | 102 | inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE) | ||
3542 | 103 | inst['power_state'] = kwargs.get('power_state', power_state.RUNNING) | ||
3543 | 104 | inst['task_state'] = kwargs.get('task_state', None) | ||
3544 | 105 | return db.instance_create(ctxt, inst) | ||
3545 | 106 | |||
3546 | 107 | def test_fallback(self): | 164 | def test_fallback(self): |
3547 | 108 | scheduler = manager.SchedulerManager() | 165 | scheduler = manager.SchedulerManager() |
3548 | 109 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) | 166 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) |
3549 | 110 | ctxt = context.get_admin_context() | 167 | ctxt = context.get_admin_context() |
3550 | 111 | rpc.cast(ctxt, | 168 | rpc.cast(ctxt, |
3552 | 112 | 'topic.fallback_host', | 169 | 'fake_topic.fallback_host', |
3553 | 113 | {'method': 'noexist', | 170 | {'method': 'noexist', |
3554 | 114 | 'args': {'num': 7}}) | 171 | 'args': {'num': 7}}) |
3555 | 115 | self.mox.ReplayAll() | 172 | self.mox.ReplayAll() |
3557 | 116 | scheduler.noexist(ctxt, 'topic', num=7) | 173 | scheduler.noexist(ctxt, 'fake_topic', num=7) |
3558 | 117 | 174 | ||
3559 | 118 | def test_named_method(self): | 175 | def test_named_method(self): |
3560 | 119 | scheduler = manager.SchedulerManager() | 176 | scheduler = manager.SchedulerManager() |
3561 | @@ -173,8 +230,8 @@ | |||
3562 | 173 | scheduler = manager.SchedulerManager() | 230 | scheduler = manager.SchedulerManager() |
3563 | 174 | ctxt = context.get_admin_context() | 231 | ctxt = context.get_admin_context() |
3564 | 175 | s_ref = self._create_compute_service() | 232 | s_ref = self._create_compute_service() |
3567 | 176 | i_ref1 = self._create_instance(project_id='p-01', host=s_ref['host']) | 233 | i_ref1 = _create_instance(project_id='p-01', host=s_ref['host']) |
3568 | 177 | i_ref2 = self._create_instance(project_id='p-02', vcpus=3, | 234 | i_ref2 = _create_instance(project_id='p-02', vcpus=3, |
3569 | 178 | host=s_ref['host']) | 235 | host=s_ref['host']) |
3570 | 179 | 236 | ||
3571 | 180 | result = scheduler.show_host_resources(ctxt, s_ref['host']) | 237 | result = scheduler.show_host_resources(ctxt, s_ref['host']) |
3572 | @@ -197,7 +254,10 @@ | |||
3573 | 197 | """Test case for zone scheduler""" | 254 | """Test case for zone scheduler""" |
3574 | 198 | def setUp(self): | 255 | def setUp(self): |
3575 | 199 | super(ZoneSchedulerTestCase, self).setUp() | 256 | super(ZoneSchedulerTestCase, self).setUp() |
3577 | 200 | self.flags(scheduler_driver='nova.scheduler.zone.ZoneScheduler') | 257 | self.flags( |
3578 | 258 | scheduler_driver='nova.scheduler.multi.MultiScheduler', | ||
3579 | 259 | compute_scheduler_driver='nova.scheduler.zone.ZoneScheduler', | ||
3580 | 260 | volume_scheduler_driver='nova.scheduler.zone.ZoneScheduler') | ||
3581 | 201 | 261 | ||
3582 | 202 | def _create_service_model(self, **kwargs): | 262 | def _create_service_model(self, **kwargs): |
3583 | 203 | service = db.sqlalchemy.models.Service() | 263 | service = db.sqlalchemy.models.Service() |
3584 | @@ -214,7 +274,7 @@ | |||
3585 | 214 | 274 | ||
3586 | 215 | def test_with_two_zones(self): | 275 | def test_with_two_zones(self): |
3587 | 216 | scheduler = manager.SchedulerManager() | 276 | scheduler = manager.SchedulerManager() |
3589 | 217 | ctxt = context.get_admin_context() | 277 | ctxt = context.RequestContext('user', 'project') |
3590 | 218 | service_list = [self._create_service_model(id=1, | 278 | service_list = [self._create_service_model(id=1, |
3591 | 219 | host='host1', | 279 | host='host1', |
3592 | 220 | zone='zone1'), | 280 | zone='zone1'), |
3593 | @@ -230,66 +290,53 @@ | |||
3594 | 230 | self._create_service_model(id=5, | 290 | self._create_service_model(id=5, |
3595 | 231 | host='host5', | 291 | host='host5', |
3596 | 232 | zone='zone2')] | 292 | zone='zone2')] |
3597 | 293 | |||
3598 | 294 | request_spec = _create_request_spec(availability_zone='zone1') | ||
3599 | 295 | |||
3600 | 296 | fake_instance = _create_instance_dict( | ||
3601 | 297 | **request_spec['instance_properties']) | ||
3602 | 298 | fake_instance['id'] = 100 | ||
3603 | 299 | fake_instance['uuid'] = FAKE_UUID | ||
3604 | 300 | |||
3605 | 233 | self.mox.StubOutWithMock(db, 'service_get_all_by_topic') | 301 | self.mox.StubOutWithMock(db, 'service_get_all_by_topic') |
3606 | 302 | self.mox.StubOutWithMock(db, 'instance_update') | ||
3607 | 303 | # Assumes we're testing with MultiScheduler | ||
3608 | 304 | compute_sched_driver = scheduler.driver.drivers['compute'] | ||
3609 | 305 | self.mox.StubOutWithMock(compute_sched_driver, | ||
3610 | 306 | 'create_instance_db_entry') | ||
3611 | 307 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) | ||
3612 | 308 | |||
3613 | 234 | arg = IgnoreArg() | 309 | arg = IgnoreArg() |
3614 | 235 | db.service_get_all_by_topic(arg, arg).AndReturn(service_list) | 310 | db.service_get_all_by_topic(arg, arg).AndReturn(service_list) |
3617 | 236 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) | 311 | compute_sched_driver.create_instance_db_entry(arg, |
3618 | 237 | rpc.cast(ctxt, | 312 | request_spec).AndReturn(fake_instance) |
3619 | 313 | db.instance_update(arg, 100, {'host': 'host1', 'scheduled_at': arg}) | ||
3620 | 314 | rpc.cast(arg, | ||
3621 | 238 | 'compute.host1', | 315 | 'compute.host1', |
3622 | 239 | {'method': 'run_instance', | 316 | {'method': 'run_instance', |
3625 | 240 | 'args': {'instance_id': 'i-ffffffff', | 317 | 'args': {'instance_id': 100}}) |
3624 | 241 | 'availability_zone': 'zone1'}}) | ||
3626 | 242 | self.mox.ReplayAll() | 318 | self.mox.ReplayAll() |
3627 | 243 | scheduler.run_instance(ctxt, | 319 | scheduler.run_instance(ctxt, |
3628 | 244 | 'compute', | 320 | 'compute', |
3631 | 245 | instance_id='i-ffffffff', | 321 | request_spec=request_spec) |
3630 | 246 | availability_zone='zone1') | ||
3632 | 247 | 322 | ||
3633 | 248 | 323 | ||
3634 | 249 | class SimpleDriverTestCase(test.TestCase): | 324 | class SimpleDriverTestCase(test.TestCase): |
3635 | 250 | """Test case for simple driver""" | 325 | """Test case for simple driver""" |
3636 | 251 | def setUp(self): | 326 | def setUp(self): |
3637 | 252 | super(SimpleDriverTestCase, self).setUp() | 327 | super(SimpleDriverTestCase, self).setUp() |
3638 | 328 | simple_scheduler = 'nova.scheduler.simple.SimpleScheduler' | ||
3639 | 253 | self.flags(connection_type='fake', | 329 | self.flags(connection_type='fake', |
3646 | 254 | stub_network=True, | 330 | stub_network=True, |
3647 | 255 | max_cores=4, | 331 | max_cores=4, |
3648 | 256 | max_gigabytes=4, | 332 | max_gigabytes=4, |
3649 | 257 | network_manager='nova.network.manager.FlatManager', | 333 | network_manager='nova.network.manager.FlatManager', |
3650 | 258 | volume_driver='nova.volume.driver.FakeISCSIDriver', | 334 | volume_driver='nova.volume.driver.FakeISCSIDriver', |
3651 | 259 | scheduler_driver='nova.scheduler.simple.SimpleScheduler') | 335 | scheduler_driver='nova.scheduler.multi.MultiScheduler', |
3652 | 336 | compute_scheduler_driver=simple_scheduler, | ||
3653 | 337 | volume_scheduler_driver=simple_scheduler) | ||
3654 | 260 | self.scheduler = manager.SchedulerManager() | 338 | self.scheduler = manager.SchedulerManager() |
3655 | 261 | self.context = context.get_admin_context() | 339 | self.context = context.get_admin_context() |
3656 | 262 | self.user_id = 'fake' | ||
3657 | 263 | self.project_id = 'fake' | ||
3658 | 264 | |||
3659 | 265 | def _create_instance(self, **kwargs): | ||
3660 | 266 | """Create a test instance""" | ||
3661 | 267 | inst = {} | ||
3662 | 268 | # NOTE(jk0): If an integer is passed as the image_ref, the image | ||
3663 | 269 | # service will use the default image service (in this case, the fake). | ||
3664 | 270 | inst['image_ref'] = '1' | ||
3665 | 271 | inst['reservation_id'] = 'r-fakeres' | ||
3666 | 272 | inst['user_id'] = self.user_id | ||
3667 | 273 | inst['project_id'] = self.project_id | ||
3668 | 274 | inst['instance_type_id'] = '1' | ||
3669 | 275 | inst['vcpus'] = kwargs.get('vcpus', 1) | ||
3670 | 276 | inst['ami_launch_index'] = 0 | ||
3671 | 277 | inst['availability_zone'] = kwargs.get('availability_zone', None) | ||
3672 | 278 | inst['host'] = kwargs.get('host', 'dummy') | ||
3673 | 279 | inst['memory_mb'] = kwargs.get('memory_mb', 20) | ||
3674 | 280 | inst['local_gb'] = kwargs.get('local_gb', 30) | ||
3675 | 281 | inst['launched_on'] = kwargs.get('launghed_on', 'dummy') | ||
3676 | 282 | inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE) | ||
3677 | 283 | inst['task_state'] = kwargs.get('task_state', None) | ||
3678 | 284 | inst['power_state'] = kwargs.get('power_state', power_state.RUNNING) | ||
3679 | 285 | return db.instance_create(self.context, inst)['id'] | ||
3680 | 286 | |||
3681 | 287 | def _create_volume(self): | ||
3682 | 288 | """Create a test volume""" | ||
3683 | 289 | vol = {} | ||
3684 | 290 | vol['size'] = 1 | ||
3685 | 291 | vol['availability_zone'] = 'test' | ||
3686 | 292 | return db.volume_create(self.context, vol)['id'] | ||
3687 | 293 | 340 | ||
3688 | 294 | def _create_compute_service(self, **kwargs): | 341 | def _create_compute_service(self, **kwargs): |
3689 | 295 | """Create a compute service.""" | 342 | """Create a compute service.""" |
3690 | @@ -369,14 +416,30 @@ | |||
3691 | 369 | 'compute', | 416 | 'compute', |
3692 | 370 | FLAGS.compute_manager) | 417 | FLAGS.compute_manager) |
3693 | 371 | compute2.start() | 418 | compute2.start() |
3702 | 372 | instance_id1 = self._create_instance() | 419 | |
3703 | 373 | compute1.run_instance(self.context, instance_id1) | 420 | global instance_ids |
3704 | 374 | instance_id2 = self._create_instance() | 421 | instance_ids = [] |
3705 | 375 | host = self.scheduler.driver.schedule_run_instance(self.context, | 422 | instance_ids.append(_create_instance()['id']) |
3706 | 376 | instance_id2) | 423 | compute1.run_instance(self.context, instance_ids[0]) |
3707 | 377 | self.assertEqual(host, 'host2') | 424 | |
3708 | 378 | compute1.terminate_instance(self.context, instance_id1) | 425 | self.stubs.Set(SimpleScheduler, |
3709 | 379 | db.instance_destroy(self.context, instance_id2) | 426 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3710 | 427 | global _picked_host | ||
3711 | 428 | _picked_host = None | ||
3712 | 429 | self.stubs.Set(driver, | ||
3713 | 430 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3714 | 431 | |||
3715 | 432 | request_spec = _create_request_spec() | ||
3716 | 433 | instances = self.scheduler.driver.schedule_run_instance( | ||
3717 | 434 | self.context, request_spec) | ||
3718 | 435 | |||
3719 | 436 | self.assertEqual(_picked_host, 'host2') | ||
3720 | 437 | self.assertEqual(len(instance_ids), 2) | ||
3721 | 438 | self.assertEqual(len(instances), 1) | ||
3722 | 439 | self.assertEqual(instances[0].get('_is_precooked', False), False) | ||
3723 | 440 | |||
3724 | 441 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3725 | 442 | compute2.terminate_instance(self.context, instance_ids[1]) | ||
3726 | 380 | compute1.kill() | 443 | compute1.kill() |
3727 | 381 | compute2.kill() | 444 | compute2.kill() |
3728 | 382 | 445 | ||
3729 | @@ -392,14 +455,27 @@ | |||
3730 | 392 | 'compute', | 455 | 'compute', |
3731 | 393 | FLAGS.compute_manager) | 456 | FLAGS.compute_manager) |
3732 | 394 | compute2.start() | 457 | compute2.start() |
3741 | 395 | instance_id1 = self._create_instance() | 458 | |
3742 | 396 | compute1.run_instance(self.context, instance_id1) | 459 | global instance_ids |
3743 | 397 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 460 | instance_ids = [] |
3744 | 398 | host = self.scheduler.driver.schedule_run_instance(self.context, | 461 | instance_ids.append(_create_instance()['id']) |
3745 | 399 | instance_id2) | 462 | compute1.run_instance(self.context, instance_ids[0]) |
3746 | 400 | self.assertEqual('host1', host) | 463 | |
3747 | 401 | compute1.terminate_instance(self.context, instance_id1) | 464 | self.stubs.Set(SimpleScheduler, |
3748 | 402 | db.instance_destroy(self.context, instance_id2) | 465 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3749 | 466 | global _picked_host | ||
3750 | 467 | _picked_host = None | ||
3751 | 468 | self.stubs.Set(driver, | ||
3752 | 469 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3753 | 470 | |||
3754 | 471 | request_spec = _create_request_spec(availability_zone='nova:host1') | ||
3755 | 472 | instances = self.scheduler.driver.schedule_run_instance( | ||
3756 | 473 | self.context, request_spec) | ||
3757 | 474 | self.assertEqual(_picked_host, 'host1') | ||
3758 | 475 | self.assertEqual(len(instance_ids), 2) | ||
3759 | 476 | |||
3760 | 477 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3761 | 478 | compute1.terminate_instance(self.context, instance_ids[1]) | ||
3762 | 403 | compute1.kill() | 479 | compute1.kill() |
3763 | 404 | compute2.kill() | 480 | compute2.kill() |
3764 | 405 | 481 | ||
3765 | @@ -414,12 +490,21 @@ | |||
3766 | 414 | delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2) | 490 | delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2) |
3767 | 415 | past = now - delta | 491 | past = now - delta |
3768 | 416 | db.service_update(self.context, s1['id'], {'updated_at': past}) | 492 | db.service_update(self.context, s1['id'], {'updated_at': past}) |
3770 | 417 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 493 | |
3771 | 494 | global instance_ids | ||
3772 | 495 | instance_ids = [] | ||
3773 | 496 | self.stubs.Set(SimpleScheduler, | ||
3774 | 497 | 'create_instance_db_entry', _fake_create_instance_db_entry) | ||
3775 | 498 | global _picked_host | ||
3776 | 499 | _picked_host = None | ||
3777 | 500 | self.stubs.Set(driver, | ||
3778 | 501 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3779 | 502 | |||
3780 | 503 | request_spec = _create_request_spec(availability_zone='nova:host1') | ||
3781 | 418 | self.assertRaises(driver.WillNotSchedule, | 504 | self.assertRaises(driver.WillNotSchedule, |
3782 | 419 | self.scheduler.driver.schedule_run_instance, | 505 | self.scheduler.driver.schedule_run_instance, |
3783 | 420 | self.context, | 506 | self.context, |
3786 | 421 | instance_id2) | 507 | request_spec) |
3785 | 422 | db.instance_destroy(self.context, instance_id2) | ||
3787 | 423 | compute1.kill() | 508 | compute1.kill() |
3788 | 424 | 509 | ||
3789 | 425 | def test_will_schedule_on_disabled_host_if_specified_no_queue(self): | 510 | def test_will_schedule_on_disabled_host_if_specified_no_queue(self): |
3790 | @@ -430,11 +515,22 @@ | |||
3791 | 430 | compute1.start() | 515 | compute1.start() |
3792 | 431 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') | 516 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') |
3793 | 432 | db.service_update(self.context, s1['id'], {'disabled': True}) | 517 | db.service_update(self.context, s1['id'], {'disabled': True}) |
3799 | 433 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 518 | |
3800 | 434 | host = self.scheduler.driver.schedule_run_instance(self.context, | 519 | global instance_ids |
3801 | 435 | instance_id2) | 520 | instance_ids = [] |
3802 | 436 | self.assertEqual('host1', host) | 521 | self.stubs.Set(SimpleScheduler, |
3803 | 437 | db.instance_destroy(self.context, instance_id2) | 522 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3804 | 523 | global _picked_host | ||
3805 | 524 | _picked_host = None | ||
3806 | 525 | self.stubs.Set(driver, | ||
3807 | 526 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3808 | 527 | |||
3809 | 528 | request_spec = _create_request_spec(availability_zone='nova:host1') | ||
3810 | 529 | instances = self.scheduler.driver.schedule_run_instance( | ||
3811 | 530 | self.context, request_spec) | ||
3812 | 531 | self.assertEqual(_picked_host, 'host1') | ||
3813 | 532 | self.assertEqual(len(instance_ids), 1) | ||
3814 | 533 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3815 | 438 | compute1.kill() | 534 | compute1.kill() |
3816 | 439 | 535 | ||
3817 | 440 | def test_too_many_cores_no_queue(self): | 536 | def test_too_many_cores_no_queue(self): |
3818 | @@ -452,17 +548,17 @@ | |||
3819 | 452 | instance_ids1 = [] | 548 | instance_ids1 = [] |
3820 | 453 | instance_ids2 = [] | 549 | instance_ids2 = [] |
3821 | 454 | for index in xrange(FLAGS.max_cores): | 550 | for index in xrange(FLAGS.max_cores): |
3823 | 455 | instance_id = self._create_instance() | 551 | instance_id = _create_instance()['id'] |
3824 | 456 | compute1.run_instance(self.context, instance_id) | 552 | compute1.run_instance(self.context, instance_id) |
3825 | 457 | instance_ids1.append(instance_id) | 553 | instance_ids1.append(instance_id) |
3827 | 458 | instance_id = self._create_instance() | 554 | instance_id = _create_instance()['id'] |
3828 | 459 | compute2.run_instance(self.context, instance_id) | 555 | compute2.run_instance(self.context, instance_id) |
3829 | 460 | instance_ids2.append(instance_id) | 556 | instance_ids2.append(instance_id) |
3831 | 461 | instance_id = self._create_instance() | 557 | request_spec = _create_request_spec() |
3832 | 462 | self.assertRaises(driver.NoValidHost, | 558 | self.assertRaises(driver.NoValidHost, |
3833 | 463 | self.scheduler.driver.schedule_run_instance, | 559 | self.scheduler.driver.schedule_run_instance, |
3834 | 464 | self.context, | 560 | self.context, |
3836 | 465 | instance_id) | 561 | request_spec) |
3837 | 466 | for instance_id in instance_ids1: | 562 | for instance_id in instance_ids1: |
3838 | 467 | compute1.terminate_instance(self.context, instance_id) | 563 | compute1.terminate_instance(self.context, instance_id) |
3839 | 468 | for instance_id in instance_ids2: | 564 | for instance_id in instance_ids2: |
3840 | @@ -481,13 +577,19 @@ | |||
3841 | 481 | 'nova-volume', | 577 | 'nova-volume', |
3842 | 482 | 'volume', | 578 | 'volume', |
3843 | 483 | FLAGS.volume_manager) | 579 | FLAGS.volume_manager) |
3844 | 580 | |||
3845 | 581 | global _picked_host | ||
3846 | 582 | _picked_host = None | ||
3847 | 583 | self.stubs.Set(driver, | ||
3848 | 584 | 'cast_to_volume_host', _fake_cast_to_volume_host) | ||
3849 | 585 | |||
3850 | 484 | volume2.start() | 586 | volume2.start() |
3852 | 485 | volume_id1 = self._create_volume() | 587 | volume_id1 = _create_volume() |
3853 | 486 | volume1.create_volume(self.context, volume_id1) | 588 | volume1.create_volume(self.context, volume_id1) |
3858 | 487 | volume_id2 = self._create_volume() | 589 | volume_id2 = _create_volume() |
3859 | 488 | host = self.scheduler.driver.schedule_create_volume(self.context, | 590 | self.scheduler.driver.schedule_create_volume(self.context, |
3860 | 489 | volume_id2) | 591 | volume_id2) |
3861 | 490 | self.assertEqual(host, 'host2') | 592 | self.assertEqual(_picked_host, 'host2') |
3862 | 491 | volume1.delete_volume(self.context, volume_id1) | 593 | volume1.delete_volume(self.context, volume_id1) |
3863 | 492 | db.volume_destroy(self.context, volume_id2) | 594 | db.volume_destroy(self.context, volume_id2) |
3864 | 493 | 595 | ||
3865 | @@ -514,17 +616,30 @@ | |||
3866 | 514 | compute2.kill() | 616 | compute2.kill() |
3867 | 515 | 617 | ||
3868 | 516 | def test_least_busy_host_gets_instance(self): | 618 | def test_least_busy_host_gets_instance(self): |
3870 | 517 | """Ensures the host with less cores gets the next one""" | 619 | """Ensures the host with less cores gets the next one w/ Simple""" |
3871 | 518 | compute1 = self.start_service('compute', host='host1') | 620 | compute1 = self.start_service('compute', host='host1') |
3872 | 519 | compute2 = self.start_service('compute', host='host2') | 621 | compute2 = self.start_service('compute', host='host2') |
3881 | 520 | instance_id1 = self._create_instance() | 622 | |
3882 | 521 | compute1.run_instance(self.context, instance_id1) | 623 | global instance_ids |
3883 | 522 | instance_id2 = self._create_instance() | 624 | instance_ids = [] |
3884 | 523 | host = self.scheduler.driver.schedule_run_instance(self.context, | 625 | instance_ids.append(_create_instance()['id']) |
3885 | 524 | instance_id2) | 626 | compute1.run_instance(self.context, instance_ids[0]) |
3886 | 525 | self.assertEqual(host, 'host2') | 627 | |
3887 | 526 | compute1.terminate_instance(self.context, instance_id1) | 628 | self.stubs.Set(SimpleScheduler, |
3888 | 527 | db.instance_destroy(self.context, instance_id2) | 629 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3889 | 630 | global _picked_host | ||
3890 | 631 | _picked_host = None | ||
3891 | 632 | self.stubs.Set(driver, | ||
3892 | 633 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3893 | 634 | |||
3894 | 635 | request_spec = _create_request_spec() | ||
3895 | 636 | instances = self.scheduler.driver.schedule_run_instance( | ||
3896 | 637 | self.context, request_spec) | ||
3897 | 638 | self.assertEqual(_picked_host, 'host2') | ||
3898 | 639 | self.assertEqual(len(instance_ids), 2) | ||
3899 | 640 | |||
3900 | 641 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3901 | 642 | compute2.terminate_instance(self.context, instance_ids[1]) | ||
3902 | 528 | compute1.kill() | 643 | compute1.kill() |
3903 | 529 | compute2.kill() | 644 | compute2.kill() |
3904 | 530 | 645 | ||
3905 | @@ -532,41 +647,64 @@ | |||
3906 | 532 | """Ensures if you set availability_zone it launches on that zone""" | 647 | """Ensures if you set availability_zone it launches on that zone""" |
3907 | 533 | compute1 = self.start_service('compute', host='host1') | 648 | compute1 = self.start_service('compute', host='host1') |
3908 | 534 | compute2 = self.start_service('compute', host='host2') | 649 | compute2 = self.start_service('compute', host='host2') |
3917 | 535 | instance_id1 = self._create_instance() | 650 | |
3918 | 536 | compute1.run_instance(self.context, instance_id1) | 651 | global instance_ids |
3919 | 537 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 652 | instance_ids = [] |
3920 | 538 | host = self.scheduler.driver.schedule_run_instance(self.context, | 653 | instance_ids.append(_create_instance()['id']) |
3921 | 539 | instance_id2) | 654 | compute1.run_instance(self.context, instance_ids[0]) |
3922 | 540 | self.assertEqual('host1', host) | 655 | |
3923 | 541 | compute1.terminate_instance(self.context, instance_id1) | 656 | self.stubs.Set(SimpleScheduler, |
3924 | 542 | db.instance_destroy(self.context, instance_id2) | 657 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3925 | 658 | global _picked_host | ||
3926 | 659 | _picked_host = None | ||
3927 | 660 | self.stubs.Set(driver, | ||
3928 | 661 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3929 | 662 | |||
3930 | 663 | request_spec = _create_request_spec(availability_zone='nova:host1') | ||
3931 | 664 | instances = self.scheduler.driver.schedule_run_instance( | ||
3932 | 665 | self.context, request_spec) | ||
3933 | 666 | self.assertEqual(_picked_host, 'host1') | ||
3934 | 667 | self.assertEqual(len(instance_ids), 2) | ||
3935 | 668 | |||
3936 | 669 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3937 | 670 | compute1.terminate_instance(self.context, instance_ids[1]) | ||
3938 | 543 | compute1.kill() | 671 | compute1.kill() |
3939 | 544 | compute2.kill() | 672 | compute2.kill() |
3940 | 545 | 673 | ||
3942 | 546 | def test_wont_sechedule_if_specified_host_is_down(self): | 674 | def test_wont_schedule_if_specified_host_is_down(self): |
3943 | 547 | compute1 = self.start_service('compute', host='host1') | 675 | compute1 = self.start_service('compute', host='host1') |
3944 | 548 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') | 676 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') |
3945 | 549 | now = utils.utcnow() | 677 | now = utils.utcnow() |
3946 | 550 | delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2) | 678 | delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2) |
3947 | 551 | past = now - delta | 679 | past = now - delta |
3948 | 552 | db.service_update(self.context, s1['id'], {'updated_at': past}) | 680 | db.service_update(self.context, s1['id'], {'updated_at': past}) |
3950 | 553 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 681 | request_spec = _create_request_spec(availability_zone='nova:host1') |
3951 | 554 | self.assertRaises(driver.WillNotSchedule, | 682 | self.assertRaises(driver.WillNotSchedule, |
3952 | 555 | self.scheduler.driver.schedule_run_instance, | 683 | self.scheduler.driver.schedule_run_instance, |
3953 | 556 | self.context, | 684 | self.context, |
3956 | 557 | instance_id2) | 685 | request_spec) |
3955 | 558 | db.instance_destroy(self.context, instance_id2) | ||
3957 | 559 | compute1.kill() | 686 | compute1.kill() |
3958 | 560 | 687 | ||
3959 | 561 | def test_will_schedule_on_disabled_host_if_specified(self): | 688 | def test_will_schedule_on_disabled_host_if_specified(self): |
3960 | 562 | compute1 = self.start_service('compute', host='host1') | 689 | compute1 = self.start_service('compute', host='host1') |
3961 | 563 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') | 690 | s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute') |
3962 | 564 | db.service_update(self.context, s1['id'], {'disabled': True}) | 691 | db.service_update(self.context, s1['id'], {'disabled': True}) |
3968 | 565 | instance_id2 = self._create_instance(availability_zone='nova:host1') | 692 | |
3969 | 566 | host = self.scheduler.driver.schedule_run_instance(self.context, | 693 | global instance_ids |
3970 | 567 | instance_id2) | 694 | instance_ids = [] |
3971 | 568 | self.assertEqual('host1', host) | 695 | self.stubs.Set(SimpleScheduler, |
3972 | 569 | db.instance_destroy(self.context, instance_id2) | 696 | 'create_instance_db_entry', _fake_create_instance_db_entry) |
3973 | 697 | global _picked_host | ||
3974 | 698 | _picked_host = None | ||
3975 | 699 | self.stubs.Set(driver, | ||
3976 | 700 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
3977 | 701 | |||
3978 | 702 | request_spec = _create_request_spec(availability_zone='nova:host1') | ||
3979 | 703 | instances = self.scheduler.driver.schedule_run_instance( | ||
3980 | 704 | self.context, request_spec) | ||
3981 | 705 | self.assertEqual(_picked_host, 'host1') | ||
3982 | 706 | self.assertEqual(len(instance_ids), 1) | ||
3983 | 707 | compute1.terminate_instance(self.context, instance_ids[0]) | ||
3984 | 570 | compute1.kill() | 708 | compute1.kill() |
3985 | 571 | 709 | ||
3986 | 572 | def test_too_many_cores(self): | 710 | def test_too_many_cores(self): |
3987 | @@ -576,18 +714,30 @@ | |||
3988 | 576 | instance_ids1 = [] | 714 | instance_ids1 = [] |
3989 | 577 | instance_ids2 = [] | 715 | instance_ids2 = [] |
3990 | 578 | for index in xrange(FLAGS.max_cores): | 716 | for index in xrange(FLAGS.max_cores): |
3992 | 579 | instance_id = self._create_instance() | 717 | instance_id = _create_instance()['id'] |
3993 | 580 | compute1.run_instance(self.context, instance_id) | 718 | compute1.run_instance(self.context, instance_id) |
3994 | 581 | instance_ids1.append(instance_id) | 719 | instance_ids1.append(instance_id) |
3996 | 582 | instance_id = self._create_instance() | 720 | instance_id = _create_instance()['id'] |
3997 | 583 | compute2.run_instance(self.context, instance_id) | 721 | compute2.run_instance(self.context, instance_id) |
3998 | 584 | instance_ids2.append(instance_id) | 722 | instance_ids2.append(instance_id) |
4000 | 585 | instance_id = self._create_instance() | 723 | |
4001 | 724 | def _create_instance_db_entry(simple_self, context, request_spec): | ||
4002 | 725 | self.fail(_("Shouldn't try to create DB entry when at " | ||
4003 | 726 | "max cores")) | ||
4004 | 727 | self.stubs.Set(SimpleScheduler, | ||
4005 | 728 | 'create_instance_db_entry', _create_instance_db_entry) | ||
4006 | 729 | |||
4007 | 730 | global _picked_host | ||
4008 | 731 | _picked_host = None | ||
4009 | 732 | self.stubs.Set(driver, | ||
4010 | 733 | 'cast_to_compute_host', _fake_cast_to_compute_host) | ||
4011 | 734 | |||
4012 | 735 | request_spec = _create_request_spec() | ||
4013 | 736 | |||
4014 | 586 | self.assertRaises(driver.NoValidHost, | 737 | self.assertRaises(driver.NoValidHost, |
4015 | 587 | self.scheduler.driver.schedule_run_instance, | 738 | self.scheduler.driver.schedule_run_instance, |
4016 | 588 | self.context, | 739 | self.context, |
4019 | 589 | instance_id) | 740 | request_spec) |
4018 | 590 | db.instance_destroy(self.context, instance_id) | ||
4020 | 591 | for instance_id in instance_ids1: | 741 | for instance_id in instance_ids1: |
4021 | 592 | compute1.terminate_instance(self.context, instance_id) | 742 | compute1.terminate_instance(self.context, instance_id) |
4022 | 593 | for instance_id in instance_ids2: | 743 | for instance_id in instance_ids2: |
4023 | @@ -599,12 +749,18 @@ | |||
4024 | 599 | """Ensures the host with less gigabytes gets the next one""" | 749 | """Ensures the host with less gigabytes gets the next one""" |
4025 | 600 | volume1 = self.start_service('volume', host='host1') | 750 | volume1 = self.start_service('volume', host='host1') |
4026 | 601 | volume2 = self.start_service('volume', host='host2') | 751 | volume2 = self.start_service('volume', host='host2') |
4028 | 602 | volume_id1 = self._create_volume() | 752 | |
4029 | 753 | global _picked_host | ||
4030 | 754 | _picked_host = None | ||
4031 | 755 | self.stubs.Set(driver, | ||
4032 | 756 | 'cast_to_volume_host', _fake_cast_to_volume_host) | ||
4033 | 757 | |||
4034 | 758 | volume_id1 = _create_volume() | ||
4035 | 603 | volume1.create_volume(self.context, volume_id1) | 759 | volume1.create_volume(self.context, volume_id1) |
4040 | 604 | volume_id2 = self._create_volume() | 760 | volume_id2 = _create_volume() |
4041 | 605 | host = self.scheduler.driver.schedule_create_volume(self.context, | 761 | self.scheduler.driver.schedule_create_volume(self.context, |
4042 | 606 | volume_id2) | 762 | volume_id2) |
4043 | 607 | self.assertEqual(host, 'host2') | 763 | self.assertEqual(_picked_host, 'host2') |
4044 | 608 | volume1.delete_volume(self.context, volume_id1) | 764 | volume1.delete_volume(self.context, volume_id1) |
4045 | 609 | db.volume_destroy(self.context, volume_id2) | 765 | db.volume_destroy(self.context, volume_id2) |
4046 | 610 | volume1.kill() | 766 | volume1.kill() |
4047 | @@ -617,13 +773,13 @@ | |||
4048 | 617 | volume_ids1 = [] | 773 | volume_ids1 = [] |
4049 | 618 | volume_ids2 = [] | 774 | volume_ids2 = [] |
4050 | 619 | for index in xrange(FLAGS.max_gigabytes): | 775 | for index in xrange(FLAGS.max_gigabytes): |
4052 | 620 | volume_id = self._create_volume() | 776 | volume_id = _create_volume() |
4053 | 621 | volume1.create_volume(self.context, volume_id) | 777 | volume1.create_volume(self.context, volume_id) |
4054 | 622 | volume_ids1.append(volume_id) | 778 | volume_ids1.append(volume_id) |
4056 | 623 | volume_id = self._create_volume() | 779 | volume_id = _create_volume() |
4057 | 624 | volume2.create_volume(self.context, volume_id) | 780 | volume2.create_volume(self.context, volume_id) |
4058 | 625 | volume_ids2.append(volume_id) | 781 | volume_ids2.append(volume_id) |
4060 | 626 | volume_id = self._create_volume() | 782 | volume_id = _create_volume() |
4061 | 627 | self.assertRaises(driver.NoValidHost, | 783 | self.assertRaises(driver.NoValidHost, |
4062 | 628 | self.scheduler.driver.schedule_create_volume, | 784 | self.scheduler.driver.schedule_create_volume, |
4063 | 629 | self.context, | 785 | self.context, |
4064 | @@ -636,13 +792,13 @@ | |||
4065 | 636 | volume2.kill() | 792 | volume2.kill() |
4066 | 637 | 793 | ||
4067 | 638 | def test_scheduler_live_migration_with_volume(self): | 794 | def test_scheduler_live_migration_with_volume(self): |
4069 | 639 | """scheduler_live_migration() works correctly as expected. | 795 | """schedule_live_migration() works correctly as expected. |
4070 | 640 | 796 | ||
4071 | 641 | Also, checks instance state is changed from 'running' -> 'migrating'. | 797 | Also, checks instance state is changed from 'running' -> 'migrating'. |
4072 | 642 | 798 | ||
4073 | 643 | """ | 799 | """ |
4074 | 644 | 800 | ||
4076 | 645 | instance_id = self._create_instance() | 801 | instance_id = _create_instance(host='dummy')['id'] |
4077 | 646 | i_ref = db.instance_get(self.context, instance_id) | 802 | i_ref = db.instance_get(self.context, instance_id) |
4078 | 647 | dic = {'instance_id': instance_id, 'size': 1} | 803 | dic = {'instance_id': instance_id, 'size': 1} |
4079 | 648 | v_ref = db.volume_create(self.context, dic) | 804 | v_ref = db.volume_create(self.context, dic) |
4080 | @@ -680,7 +836,8 @@ | |||
4081 | 680 | def test_live_migration_src_check_instance_not_running(self): | 836 | def test_live_migration_src_check_instance_not_running(self): |
4082 | 681 | """The instance given by instance_id is not running.""" | 837 | """The instance given by instance_id is not running.""" |
4083 | 682 | 838 | ||
4085 | 683 | instance_id = self._create_instance(power_state=power_state.NOSTATE) | 839 | instance_id = _create_instance( |
4086 | 840 | power_state=power_state.NOSTATE)['id'] | ||
4087 | 684 | i_ref = db.instance_get(self.context, instance_id) | 841 | i_ref = db.instance_get(self.context, instance_id) |
4088 | 685 | 842 | ||
4089 | 686 | try: | 843 | try: |
4090 | @@ -695,7 +852,7 @@ | |||
4091 | 695 | def test_live_migration_src_check_volume_node_not_alive(self): | 852 | def test_live_migration_src_check_volume_node_not_alive(self): |
4092 | 696 | """Raise exception when volume node is not alive.""" | 853 | """Raise exception when volume node is not alive.""" |
4093 | 697 | 854 | ||
4095 | 698 | instance_id = self._create_instance() | 855 | instance_id = _create_instance()['id'] |
4096 | 699 | i_ref = db.instance_get(self.context, instance_id) | 856 | i_ref = db.instance_get(self.context, instance_id) |
4097 | 700 | dic = {'instance_id': instance_id, 'size': 1} | 857 | dic = {'instance_id': instance_id, 'size': 1} |
4098 | 701 | v_ref = db.volume_create(self.context, {'instance_id': instance_id, | 858 | v_ref = db.volume_create(self.context, {'instance_id': instance_id, |
4099 | @@ -715,7 +872,7 @@ | |||
4100 | 715 | 872 | ||
4101 | 716 | def test_live_migration_src_check_compute_node_not_alive(self): | 873 | def test_live_migration_src_check_compute_node_not_alive(self): |
4102 | 717 | """Confirms src-compute node is alive.""" | 874 | """Confirms src-compute node is alive.""" |
4104 | 718 | instance_id = self._create_instance() | 875 | instance_id = _create_instance()['id'] |
4105 | 719 | i_ref = db.instance_get(self.context, instance_id) | 876 | i_ref = db.instance_get(self.context, instance_id) |
4106 | 720 | t = utils.utcnow() - datetime.timedelta(10) | 877 | t = utils.utcnow() - datetime.timedelta(10) |
4107 | 721 | s_ref = self._create_compute_service(created_at=t, updated_at=t, | 878 | s_ref = self._create_compute_service(created_at=t, updated_at=t, |
4108 | @@ -730,7 +887,7 @@ | |||
4109 | 730 | 887 | ||
4110 | 731 | def test_live_migration_src_check_works_correctly(self): | 888 | def test_live_migration_src_check_works_correctly(self): |
4111 | 732 | """Confirms this method finishes with no error.""" | 889 | """Confirms this method finishes with no error.""" |
4113 | 733 | instance_id = self._create_instance() | 890 | instance_id = _create_instance()['id'] |
4114 | 734 | i_ref = db.instance_get(self.context, instance_id) | 891 | i_ref = db.instance_get(self.context, instance_id) |
4115 | 735 | s_ref = self._create_compute_service(host=i_ref['host']) | 892 | s_ref = self._create_compute_service(host=i_ref['host']) |
4116 | 736 | 893 | ||
4117 | @@ -743,7 +900,7 @@ | |||
4118 | 743 | 900 | ||
4119 | 744 | def test_live_migration_dest_check_not_alive(self): | 901 | def test_live_migration_dest_check_not_alive(self): |
4120 | 745 | """Confirms exception raises in case dest host does not exist.""" | 902 | """Confirms exception raises in case dest host does not exist.""" |
4122 | 746 | instance_id = self._create_instance() | 903 | instance_id = _create_instance()['id'] |
4123 | 747 | i_ref = db.instance_get(self.context, instance_id) | 904 | i_ref = db.instance_get(self.context, instance_id) |
4124 | 748 | t = utils.utcnow() - datetime.timedelta(10) | 905 | t = utils.utcnow() - datetime.timedelta(10) |
4125 | 749 | s_ref = self._create_compute_service(created_at=t, updated_at=t, | 906 | s_ref = self._create_compute_service(created_at=t, updated_at=t, |
4126 | @@ -758,7 +915,7 @@ | |||
4127 | 758 | 915 | ||
4128 | 759 | def test_live_migration_dest_check_service_same_host(self): | 916 | def test_live_migration_dest_check_service_same_host(self): |
4129 | 760 | """Confirms exception raises in case dest and src is same host.""" | 917 | """Confirms exception raises in case dest and src is same host.""" |
4131 | 761 | instance_id = self._create_instance() | 918 | instance_id = _create_instance()['id'] |
4132 | 762 | i_ref = db.instance_get(self.context, instance_id) | 919 | i_ref = db.instance_get(self.context, instance_id) |
4133 | 763 | s_ref = self._create_compute_service(host=i_ref['host']) | 920 | s_ref = self._create_compute_service(host=i_ref['host']) |
4134 | 764 | 921 | ||
4135 | @@ -771,9 +928,9 @@ | |||
4136 | 771 | 928 | ||
4137 | 772 | def test_live_migration_dest_check_service_lack_memory(self): | 929 | def test_live_migration_dest_check_service_lack_memory(self): |
4138 | 773 | """Confirms exception raises when dest doesn't have enough memory.""" | 930 | """Confirms exception raises when dest doesn't have enough memory.""" |
4142 | 774 | instance_id = self._create_instance() | 931 | instance_id = _create_instance()['id'] |
4143 | 775 | instance_id2 = self._create_instance(host='somewhere', | 932 | instance_id2 = _create_instance(host='somewhere', |
4144 | 776 | memory_mb=12) | 933 | memory_mb=12)['id'] |
4145 | 777 | i_ref = db.instance_get(self.context, instance_id) | 934 | i_ref = db.instance_get(self.context, instance_id) |
4146 | 778 | s_ref = self._create_compute_service(host='somewhere') | 935 | s_ref = self._create_compute_service(host='somewhere') |
4147 | 779 | 936 | ||
4148 | @@ -787,9 +944,9 @@ | |||
4149 | 787 | 944 | ||
4150 | 788 | def test_block_migration_dest_check_service_lack_disk(self): | 945 | def test_block_migration_dest_check_service_lack_disk(self): |
4151 | 789 | """Confirms exception raises when dest doesn't have enough disk.""" | 946 | """Confirms exception raises when dest doesn't have enough disk.""" |
4155 | 790 | instance_id = self._create_instance() | 947 | instance_id = _create_instance()['id'] |
4156 | 791 | instance_id2 = self._create_instance(host='somewhere', | 948 | instance_id2 = _create_instance(host='somewhere', |
4157 | 792 | local_gb=70) | 949 | local_gb=70)['id'] |
4158 | 793 | i_ref = db.instance_get(self.context, instance_id) | 950 | i_ref = db.instance_get(self.context, instance_id) |
4159 | 794 | s_ref = self._create_compute_service(host='somewhere') | 951 | s_ref = self._create_compute_service(host='somewhere') |
4160 | 795 | 952 | ||
4161 | @@ -803,7 +960,7 @@ | |||
4162 | 803 | 960 | ||
4163 | 804 | def test_live_migration_dest_check_service_works_correctly(self): | 961 | def test_live_migration_dest_check_service_works_correctly(self): |
4164 | 805 | """Confirms method finishes with no error.""" | 962 | """Confirms method finishes with no error.""" |
4166 | 806 | instance_id = self._create_instance() | 963 | instance_id = _create_instance()['id'] |
4167 | 807 | i_ref = db.instance_get(self.context, instance_id) | 964 | i_ref = db.instance_get(self.context, instance_id) |
4168 | 808 | s_ref = self._create_compute_service(host='somewhere', | 965 | s_ref = self._create_compute_service(host='somewhere', |
4169 | 809 | memory_mb_used=5) | 966 | memory_mb_used=5) |
4170 | @@ -821,7 +978,7 @@ | |||
4171 | 821 | 978 | ||
4172 | 822 | dest = 'dummydest' | 979 | dest = 'dummydest' |
4173 | 823 | # mocks for live_migration_common_check() | 980 | # mocks for live_migration_common_check() |
4175 | 824 | instance_id = self._create_instance() | 981 | instance_id = _create_instance()['id'] |
4176 | 825 | i_ref = db.instance_get(self.context, instance_id) | 982 | i_ref = db.instance_get(self.context, instance_id) |
4177 | 826 | t1 = utils.utcnow() - datetime.timedelta(10) | 983 | t1 = utils.utcnow() - datetime.timedelta(10) |
4178 | 827 | s_ref = self._create_compute_service(created_at=t1, updated_at=t1, | 984 | s_ref = self._create_compute_service(created_at=t1, updated_at=t1, |
4179 | @@ -855,7 +1012,7 @@ | |||
4180 | 855 | def test_live_migration_common_check_service_different_hypervisor(self): | 1012 | def test_live_migration_common_check_service_different_hypervisor(self): |
4181 | 856 | """Original host and dest host has different hypervisor type.""" | 1013 | """Original host and dest host has different hypervisor type.""" |
4182 | 857 | dest = 'dummydest' | 1014 | dest = 'dummydest' |
4184 | 858 | instance_id = self._create_instance() | 1015 | instance_id = _create_instance(host='dummy')['id'] |
4185 | 859 | i_ref = db.instance_get(self.context, instance_id) | 1016 | i_ref = db.instance_get(self.context, instance_id) |
4186 | 860 | 1017 | ||
4187 | 861 | # compute service for destination | 1018 | # compute service for destination |
4188 | @@ -880,7 +1037,7 @@ | |||
4189 | 880 | def test_live_migration_common_check_service_different_version(self): | 1037 | def test_live_migration_common_check_service_different_version(self): |
4190 | 881 | """Original host and dest host has different hypervisor version.""" | 1038 | """Original host and dest host has different hypervisor version.""" |
4191 | 882 | dest = 'dummydest' | 1039 | dest = 'dummydest' |
4193 | 883 | instance_id = self._create_instance() | 1040 | instance_id = _create_instance(host='dummy')['id'] |
4194 | 884 | i_ref = db.instance_get(self.context, instance_id) | 1041 | i_ref = db.instance_get(self.context, instance_id) |
4195 | 885 | 1042 | ||
4196 | 886 | # compute service for destination | 1043 | # compute service for destination |
4197 | @@ -904,10 +1061,10 @@ | |||
4198 | 904 | db.service_destroy(self.context, s_ref2['id']) | 1061 | db.service_destroy(self.context, s_ref2['id']) |
4199 | 905 | 1062 | ||
4200 | 906 | def test_live_migration_common_check_checking_cpuinfo_fail(self): | 1063 | def test_live_migration_common_check_checking_cpuinfo_fail(self): |
4202 | 907 | """Raise excetion when original host doen't have compatible cpu.""" | 1064 | """Raise exception when original host doesn't have compatible cpu.""" |
4203 | 908 | 1065 | ||
4204 | 909 | dest = 'dummydest' | 1066 | dest = 'dummydest' |
4206 | 910 | instance_id = self._create_instance() | 1067 | instance_id = _create_instance(host='dummy')['id'] |
4207 | 911 | i_ref = db.instance_get(self.context, instance_id) | 1068 | i_ref = db.instance_get(self.context, instance_id) |
4208 | 912 | 1069 | ||
4209 | 913 | # compute service for destination | 1070 | # compute service for destination |
4210 | @@ -927,7 +1084,7 @@ | |||
4211 | 927 | 1084 | ||
4212 | 928 | self.mox.ReplayAll() | 1085 | self.mox.ReplayAll() |
4213 | 929 | try: | 1086 | try: |
4215 | 930 | self.scheduler.driver._live_migration_common_check(self.context, | 1087 | driver._live_migration_common_check(self.context, |
4216 | 931 | i_ref, | 1088 | i_ref, |
4217 | 932 | dest, | 1089 | dest, |
4218 | 933 | False) | 1090 | False) |
4219 | @@ -1021,7 +1178,6 @@ | |||
4220 | 1021 | class ZoneRedirectTest(test.TestCase): | 1178 | class ZoneRedirectTest(test.TestCase): |
4221 | 1022 | def setUp(self): | 1179 | def setUp(self): |
4222 | 1023 | super(ZoneRedirectTest, self).setUp() | 1180 | super(ZoneRedirectTest, self).setUp() |
4223 | 1024 | self.stubs = stubout.StubOutForTesting() | ||
4224 | 1025 | 1181 | ||
4225 | 1026 | self.stubs.Set(db, 'zone_get_all', zone_get_all) | 1182 | self.stubs.Set(db, 'zone_get_all', zone_get_all) |
4226 | 1027 | self.stubs.Set(db, 'instance_get_by_uuid', | 1183 | self.stubs.Set(db, 'instance_get_by_uuid', |
4227 | @@ -1029,7 +1185,6 @@ | |||
4228 | 1029 | self.flags(enable_zone_routing=True) | 1185 | self.flags(enable_zone_routing=True) |
4229 | 1030 | 1186 | ||
4230 | 1031 | def tearDown(self): | 1187 | def tearDown(self): |
4231 | 1032 | self.stubs.UnsetAll() | ||
4232 | 1033 | super(ZoneRedirectTest, self).tearDown() | 1188 | super(ZoneRedirectTest, self).tearDown() |
4233 | 1034 | 1189 | ||
4234 | 1035 | def test_trap_found_locally(self): | 1190 | def test_trap_found_locally(self): |
4235 | @@ -1257,12 +1412,10 @@ | |||
4236 | 1257 | class CallZoneMethodTest(test.TestCase): | 1412 | class CallZoneMethodTest(test.TestCase): |
4237 | 1258 | def setUp(self): | 1413 | def setUp(self): |
4238 | 1259 | super(CallZoneMethodTest, self).setUp() | 1414 | super(CallZoneMethodTest, self).setUp() |
4239 | 1260 | self.stubs = stubout.StubOutForTesting() | ||
4240 | 1261 | self.stubs.Set(db, 'zone_get_all', zone_get_all) | 1415 | self.stubs.Set(db, 'zone_get_all', zone_get_all) |
4241 | 1262 | self.stubs.Set(novaclient, 'Client', FakeNovaClientZones) | 1416 | self.stubs.Set(novaclient, 'Client', FakeNovaClientZones) |
4242 | 1263 | 1417 | ||
4243 | 1264 | def tearDown(self): | 1418 | def tearDown(self): |
4244 | 1265 | self.stubs.UnsetAll() | ||
4245 | 1266 | super(CallZoneMethodTest, self).tearDown() | 1419 | super(CallZoneMethodTest, self).tearDown() |
4246 | 1267 | 1420 | ||
4247 | 1268 | def test_call_zone_method(self): | 1421 | def test_call_zone_method(self): |
4248 | 1269 | 1422 | ||
4249 | === modified file 'nova/tests/scheduler/test_vsa_scheduler.py' | |||
4250 | --- nova/tests/scheduler/test_vsa_scheduler.py 2011-08-26 02:09:50 +0000 | |||
4251 | +++ nova/tests/scheduler/test_vsa_scheduler.py 2011-09-23 07:08:19 +0000 | |||
4252 | @@ -22,6 +22,7 @@ | |||
4253 | 22 | from nova import exception | 22 | from nova import exception |
4254 | 23 | from nova import flags | 23 | from nova import flags |
4255 | 24 | from nova import log as logging | 24 | from nova import log as logging |
4256 | 25 | from nova import rpc | ||
4257 | 25 | from nova import test | 26 | from nova import test |
4258 | 26 | from nova import utils | 27 | from nova import utils |
4259 | 27 | from nova.volume import volume_types | 28 | from nova.volume import volume_types |
4260 | @@ -37,6 +38,10 @@ | |||
4261 | 37 | global_volume = {} | 38 | global_volume = {} |
4262 | 38 | 39 | ||
4263 | 39 | 40 | ||
4264 | 41 | def fake_rpc_cast(*args, **kwargs): | ||
4265 | 42 | pass | ||
4266 | 43 | |||
4267 | 44 | |||
4268 | 40 | class FakeVsaLeastUsedScheduler( | 45 | class FakeVsaLeastUsedScheduler( |
4269 | 41 | vsa_sched.VsaSchedulerLeastUsedHost): | 46 | vsa_sched.VsaSchedulerLeastUsedHost): |
4270 | 42 | # No need to stub anything at the moment | 47 | # No need to stub anything at the moment |
4271 | @@ -170,12 +175,10 @@ | |||
4272 | 170 | LOG.debug(_("Test: provision vol %(name)s on host %(host)s"), | 175 | LOG.debug(_("Test: provision vol %(name)s on host %(host)s"), |
4273 | 171 | locals()) | 176 | locals()) |
4274 | 172 | LOG.debug(_("\t vol=%(vol)s"), locals()) | 177 | LOG.debug(_("\t vol=%(vol)s"), locals()) |
4275 | 173 | pass | ||
4276 | 174 | 178 | ||
4277 | 175 | def _fake_vsa_update(self, context, vsa_id, values): | 179 | def _fake_vsa_update(self, context, vsa_id, values): |
4278 | 176 | LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\ | 180 | LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\ |
4279 | 177 | "values=%(values)s"), locals()) | 181 | "values=%(values)s"), locals()) |
4280 | 178 | pass | ||
4281 | 179 | 182 | ||
4282 | 180 | def _fake_volume_create(self, context, options): | 183 | def _fake_volume_create(self, context, options): |
4283 | 181 | LOG.debug(_("Test: Volume create: %s"), options) | 184 | LOG.debug(_("Test: Volume create: %s"), options) |
4284 | @@ -196,7 +199,6 @@ | |||
4285 | 196 | "values=%(values)s"), locals()) | 199 | "values=%(values)s"), locals()) |
4286 | 197 | global scheduled_volume | 200 | global scheduled_volume |
4287 | 198 | scheduled_volume = {'id': volume_id, 'host': values['host']} | 201 | scheduled_volume = {'id': volume_id, 'host': values['host']} |
4288 | 199 | pass | ||
4289 | 200 | 202 | ||
4290 | 201 | def _fake_service_get_by_args(self, context, host, binary): | 203 | def _fake_service_get_by_args(self, context, host, binary): |
4291 | 202 | return "service" | 204 | return "service" |
4292 | @@ -209,7 +211,6 @@ | |||
4293 | 209 | 211 | ||
4294 | 210 | def setUp(self, sched_class=None): | 212 | def setUp(self, sched_class=None): |
4295 | 211 | super(VsaSchedulerTestCase, self).setUp() | 213 | super(VsaSchedulerTestCase, self).setUp() |
4296 | 212 | self.stubs = stubout.StubOutForTesting() | ||
4297 | 213 | self.context = context.get_admin_context() | 214 | self.context = context.get_admin_context() |
4298 | 214 | 215 | ||
4299 | 215 | if sched_class is None: | 216 | if sched_class is None: |
4300 | @@ -220,6 +221,7 @@ | |||
4301 | 220 | self.host_num = 10 | 221 | self.host_num = 10 |
4302 | 221 | self.drive_type_num = 5 | 222 | self.drive_type_num = 5 |
4303 | 222 | 223 | ||
4304 | 224 | self.stubs.Set(rpc, 'cast', fake_rpc_cast) | ||
4305 | 223 | self.stubs.Set(self.sched, | 225 | self.stubs.Set(self.sched, |
4306 | 224 | '_get_service_states', self._fake_get_service_states) | 226 | '_get_service_states', self._fake_get_service_states) |
4307 | 225 | self.stubs.Set(self.sched, | 227 | self.stubs.Set(self.sched, |
4308 | @@ -234,8 +236,6 @@ | |||
4309 | 234 | def tearDown(self): | 236 | def tearDown(self): |
4310 | 235 | for name in self.created_types_lst: | 237 | for name in self.created_types_lst: |
4311 | 236 | volume_types.purge(self.context, name) | 238 | volume_types.purge(self.context, name) |
4312 | 237 | |||
4313 | 238 | self.stubs.UnsetAll() | ||
4314 | 239 | super(VsaSchedulerTestCase, self).tearDown() | 239 | super(VsaSchedulerTestCase, self).tearDown() |
4315 | 240 | 240 | ||
4316 | 241 | def test_vsa_sched_create_volumes_simple(self): | 241 | def test_vsa_sched_create_volumes_simple(self): |
4317 | @@ -333,6 +333,8 @@ | |||
4318 | 333 | self.stubs.Set(self.sched, | 333 | self.stubs.Set(self.sched, |
4319 | 334 | '_get_service_states', self._fake_get_service_states) | 334 | '_get_service_states', self._fake_get_service_states) |
4320 | 335 | self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create) | 335 | self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create) |
4321 | 336 | self.stubs.Set(nova.db, 'volume_update', self._fake_volume_update) | ||
4322 | 337 | self.stubs.Set(rpc, 'cast', fake_rpc_cast) | ||
4323 | 336 | 338 | ||
4324 | 337 | self.sched.schedule_create_volumes(self.context, | 339 | self.sched.schedule_create_volumes(self.context, |
4325 | 338 | request_spec, | 340 | request_spec, |
4326 | @@ -467,10 +469,9 @@ | |||
4327 | 467 | self.stubs.Set(self.sched, | 469 | self.stubs.Set(self.sched, |
4328 | 468 | 'service_is_up', self._fake_service_is_up_True) | 470 | 'service_is_up', self._fake_service_is_up_True) |
4329 | 469 | 471 | ||
4332 | 470 | host = self.sched.schedule_create_volume(self.context, | 472 | self.sched.schedule_create_volume(self.context, |
4333 | 471 | 123, availability_zone=None) | 473 | 123, availability_zone=None) |
4334 | 472 | 474 | ||
4335 | 473 | self.assertEqual(host, 'host_3') | ||
4336 | 474 | self.assertEqual(scheduled_volume['id'], 123) | 475 | self.assertEqual(scheduled_volume['id'], 123) |
4337 | 475 | self.assertEqual(scheduled_volume['host'], 'host_3') | 476 | self.assertEqual(scheduled_volume['host'], 'host_3') |
4338 | 476 | 477 | ||
4339 | @@ -514,10 +515,9 @@ | |||
4340 | 514 | global_volume['volume_type_id'] = volume_type['id'] | 515 | global_volume['volume_type_id'] = volume_type['id'] |
4341 | 515 | global_volume['size'] = 0 | 516 | global_volume['size'] = 0 |
4342 | 516 | 517 | ||
4345 | 517 | host = self.sched.schedule_create_volume(self.context, | 518 | self.sched.schedule_create_volume(self.context, |
4346 | 518 | 123, availability_zone=None) | 519 | 123, availability_zone=None) |
4347 | 519 | 520 | ||
4348 | 520 | self.assertEqual(host, 'host_2') | ||
4349 | 521 | self.assertEqual(scheduled_volume['id'], 123) | 521 | self.assertEqual(scheduled_volume['id'], 123) |
4350 | 522 | self.assertEqual(scheduled_volume['host'], 'host_2') | 522 | self.assertEqual(scheduled_volume['host'], 'host_2') |
4351 | 523 | 523 | ||
4352 | @@ -529,7 +529,6 @@ | |||
4353 | 529 | FakeVsaMostAvailCapacityScheduler()) | 529 | FakeVsaMostAvailCapacityScheduler()) |
4354 | 530 | 530 | ||
4355 | 531 | def tearDown(self): | 531 | def tearDown(self): |
4356 | 532 | self.stubs.UnsetAll() | ||
4357 | 533 | super(VsaSchedulerTestCaseMostAvail, self).tearDown() | 532 | super(VsaSchedulerTestCaseMostAvail, self).tearDown() |
4358 | 534 | 533 | ||
4359 | 535 | def test_vsa_sched_create_single_volume(self): | 534 | def test_vsa_sched_create_single_volume(self): |
4360 | @@ -558,10 +557,9 @@ | |||
4361 | 558 | global_volume['volume_type_id'] = volume_type['id'] | 557 | global_volume['volume_type_id'] = volume_type['id'] |
4362 | 559 | global_volume['size'] = 0 | 558 | global_volume['size'] = 0 |
4363 | 560 | 559 | ||
4366 | 561 | host = self.sched.schedule_create_volume(self.context, | 560 | self.sched.schedule_create_volume(self.context, |
4367 | 562 | 123, availability_zone=None) | 561 | 123, availability_zone=None) |
4368 | 563 | 562 | ||
4369 | 564 | self.assertEqual(host, 'host_9') | ||
4370 | 565 | self.assertEqual(scheduled_volume['id'], 123) | 563 | self.assertEqual(scheduled_volume['id'], 123) |
4371 | 566 | self.assertEqual(scheduled_volume['host'], 'host_9') | 564 | self.assertEqual(scheduled_volume['host'], 'host_9') |
4372 | 567 | 565 | ||
4373 | 568 | 566 | ||
4374 | === modified file 'nova/tests/test_compute.py' | |||
4375 | --- nova/tests/test_compute.py 2011-09-21 20:59:40 +0000 | |||
4376 | +++ nova/tests/test_compute.py 2011-09-23 07:08:19 +0000 | |||
4377 | @@ -26,6 +26,7 @@ | |||
4378 | 26 | from nova import exception | 26 | from nova import exception |
4379 | 27 | from nova import flags | 27 | from nova import flags |
4380 | 28 | from nova import log as logging | 28 | from nova import log as logging |
4381 | 29 | from nova.scheduler import driver as scheduler_driver | ||
4382 | 29 | from nova import rpc | 30 | from nova import rpc |
4383 | 30 | from nova import test | 31 | from nova import test |
4384 | 31 | from nova import utils | 32 | from nova import utils |
4385 | @@ -73,10 +74,42 @@ | |||
4386 | 73 | self.context = context.RequestContext(self.user_id, self.project_id) | 74 | self.context = context.RequestContext(self.user_id, self.project_id) |
4387 | 74 | test_notifier.NOTIFICATIONS = [] | 75 | test_notifier.NOTIFICATIONS = [] |
4388 | 75 | 76 | ||
4389 | 77 | orig_rpc_call = rpc.call | ||
4390 | 78 | orig_rpc_cast = rpc.cast | ||
4391 | 79 | |||
4392 | 80 | def rpc_call_wrapper(context, topic, msg, do_cast=True): | ||
4393 | 81 | """Stub out the scheduler creating the instance entry""" | ||
4394 | 82 | if topic == FLAGS.scheduler_topic and \ | ||
4395 | 83 | msg['method'] == 'run_instance': | ||
4396 | 84 | request_spec = msg['args']['request_spec'] | ||
4397 | 85 | scheduler = scheduler_driver.Scheduler | ||
4398 | 86 | num_instances = request_spec.get('num_instances', 1) | ||
4399 | 87 | instances = [] | ||
4400 | 88 | for x in xrange(num_instances): | ||
4401 | 89 | instance = scheduler().create_instance_db_entry( | ||
4402 | 90 | context, | ||
4403 | 91 | request_spec) | ||
4404 | 92 | encoded = scheduler_driver.encode_instance(instance) | ||
4405 | 93 | instances.append(encoded) | ||
4406 | 94 | return instances | ||
4407 | 95 | else: | ||
4408 | 96 | if do_cast: | ||
4409 | 97 | orig_rpc_cast(context, topic, msg) | ||
4410 | 98 | else: | ||
4411 | 99 | return orig_rpc_call(context, topic, msg) | ||
4412 | 100 | |||
4413 | 101 | def rpc_cast_wrapper(context, topic, msg): | ||
4414 | 102 | """Stub out the scheduler creating the instance entry in | ||
4415 | 103 | the reservation_id case. | ||
4416 | 104 | """ | ||
4417 | 105 | rpc_call_wrapper(context, topic, msg, do_cast=True) | ||
4418 | 106 | |||
4419 | 76 | def fake_show(meh, context, id): | 107 | def fake_show(meh, context, id): |
4420 | 77 | return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}} | 108 | return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}} |
4421 | 78 | 109 | ||
4422 | 79 | self.stubs.Set(fake_image._FakeImageService, 'show', fake_show) | 110 | self.stubs.Set(fake_image._FakeImageService, 'show', fake_show) |
4423 | 111 | self.stubs.Set(rpc, 'call', rpc_call_wrapper) | ||
4424 | 112 | self.stubs.Set(rpc, 'cast', rpc_cast_wrapper) | ||
4425 | 80 | 113 | ||
4426 | 81 | def _create_instance(self, params=None): | 114 | def _create_instance(self, params=None): |
4427 | 82 | """Create a test instance""" | 115 | """Create a test instance""" |
4428 | @@ -139,7 +172,7 @@ | |||
4429 | 139 | """Verify that an instance cannot be created without a display_name.""" | 172 | """Verify that an instance cannot be created without a display_name.""" |
4430 | 140 | cases = [dict(), dict(display_name=None)] | 173 | cases = [dict(), dict(display_name=None)] |
4431 | 141 | for instance in cases: | 174 | for instance in cases: |
4433 | 142 | ref = self.compute_api.create(self.context, | 175 | (ref, resv_id) = self.compute_api.create(self.context, |
4434 | 143 | instance_types.get_default_instance_type(), None, **instance) | 176 | instance_types.get_default_instance_type(), None, **instance) |
4435 | 144 | try: | 177 | try: |
4436 | 145 | self.assertNotEqual(ref[0]['display_name'], None) | 178 | self.assertNotEqual(ref[0]['display_name'], None) |
4437 | @@ -149,7 +182,7 @@ | |||
4438 | 149 | def test_create_instance_associates_security_groups(self): | 182 | def test_create_instance_associates_security_groups(self): |
4439 | 150 | """Make sure create associates security groups""" | 183 | """Make sure create associates security groups""" |
4440 | 151 | group = self._create_group() | 184 | group = self._create_group() |
4442 | 152 | ref = self.compute_api.create( | 185 | (ref, resv_id) = self.compute_api.create( |
4443 | 153 | self.context, | 186 | self.context, |
4444 | 154 | instance_type=instance_types.get_default_instance_type(), | 187 | instance_type=instance_types.get_default_instance_type(), |
4445 | 155 | image_href=None, | 188 | image_href=None, |
4446 | @@ -209,7 +242,7 @@ | |||
4447 | 209 | ('<}\x1fh\x10e\x08l\x02l\x05o\x12!{>', 'hello'), | 242 | ('<}\x1fh\x10e\x08l\x02l\x05o\x12!{>', 'hello'), |
4448 | 210 | ('hello_server', 'hello-server')] | 243 | ('hello_server', 'hello-server')] |
4449 | 211 | for display_name, hostname in cases: | 244 | for display_name, hostname in cases: |
4451 | 212 | ref = self.compute_api.create(self.context, | 245 | (ref, resv_id) = self.compute_api.create(self.context, |
4452 | 213 | instance_types.get_default_instance_type(), None, | 246 | instance_types.get_default_instance_type(), None, |
4453 | 214 | display_name=display_name) | 247 | display_name=display_name) |
4454 | 215 | try: | 248 | try: |
4455 | @@ -221,7 +254,7 @@ | |||
4456 | 221 | """Make sure destroying disassociates security groups""" | 254 | """Make sure destroying disassociates security groups""" |
4457 | 222 | group = self._create_group() | 255 | group = self._create_group() |
4458 | 223 | 256 | ||
4460 | 224 | ref = self.compute_api.create( | 257 | (ref, resv_id) = self.compute_api.create( |
4461 | 225 | self.context, | 258 | self.context, |
4462 | 226 | instance_type=instance_types.get_default_instance_type(), | 259 | instance_type=instance_types.get_default_instance_type(), |
4463 | 227 | image_href=None, | 260 | image_href=None, |
4464 | @@ -237,7 +270,7 @@ | |||
4465 | 237 | """Make sure destroying security groups disassociates instances""" | 270 | """Make sure destroying security groups disassociates instances""" |
4466 | 238 | group = self._create_group() | 271 | group = self._create_group() |
4467 | 239 | 272 | ||
4469 | 240 | ref = self.compute_api.create( | 273 | (ref, resv_id) = self.compute_api.create( |
4470 | 241 | self.context, | 274 | self.context, |
4471 | 242 | instance_type=instance_types.get_default_instance_type(), | 275 | instance_type=instance_types.get_default_instance_type(), |
4472 | 243 | image_href=None, | 276 | image_href=None, |
4473 | @@ -1394,3 +1427,81 @@ | |||
4474 | 1394 | self.assertEqual(self.compute_api._volume_size(inst_type, | 1427 | self.assertEqual(self.compute_api._volume_size(inst_type, |
4475 | 1395 | 'swap'), | 1428 | 'swap'), |
4476 | 1396 | swap_size) | 1429 | swap_size) |
4477 | 1430 | |||
4478 | 1431 | def test_reservation_id_one_instance(self): | ||
4479 | 1432 | """Verify building an instance has a reservation_id that | ||
4480 | 1433 | matches return value from create""" | ||
4481 | 1434 | (refs, resv_id) = self.compute_api.create(self.context, | ||
4482 | 1435 | instance_types.get_default_instance_type(), None) | ||
4483 | 1436 | try: | ||
4484 | 1437 | self.assertEqual(len(refs), 1) | ||
4485 | 1438 | self.assertEqual(refs[0]['reservation_id'], resv_id) | ||
4486 | 1439 | finally: | ||
4487 | 1440 | db.instance_destroy(self.context, refs[0]['id']) | ||
4488 | 1441 | |||
4489 | 1442 | def test_reservation_ids_two_instances(self): | ||
4490 | 1443 | """Verify building 2 instances at once results in a | ||
4491 | 1444 | reservation_id being returned equal to reservation id set | ||
4492 | 1445 | in both instances | ||
4493 | 1446 | """ | ||
4494 | 1447 | (refs, resv_id) = self.compute_api.create(self.context, | ||
4495 | 1448 | instance_types.get_default_instance_type(), None, | ||
4496 | 1449 | min_count=2, max_count=2) | ||
4497 | 1450 | try: | ||
4498 | 1451 | self.assertEqual(len(refs), 2) | ||
4499 | 1452 | self.assertNotEqual(resv_id, None) | ||
4500 | 1453 | finally: | ||
4501 | 1454 | for instance in refs: | ||
4502 | 1455 | self.assertEqual(instance['reservation_id'], resv_id) | ||
4503 | 1456 | db.instance_destroy(self.context, instance['id']) | ||
4504 | 1457 | |||
4505 | 1458 | def test_reservation_ids_two_instances_no_wait(self): | ||
4506 | 1459 | """Verify building 2 instances at once without waiting for | ||
4507 | 1460 | instance IDs results in a reservation_id being returned equal | ||
4508 | 1461 | to reservation id set in both instances | ||
4509 | 1462 | """ | ||
4510 | 1463 | (refs, resv_id) = self.compute_api.create(self.context, | ||
4511 | 1464 | instance_types.get_default_instance_type(), None, | ||
4512 | 1465 | min_count=2, max_count=2, wait_for_instances=False) | ||
4513 | 1466 | try: | ||
4514 | 1467 | self.assertEqual(refs, None) | ||
4515 | 1468 | self.assertNotEqual(resv_id, None) | ||
4516 | 1469 | finally: | ||
4517 | 1470 | instances = self.compute_api.get_all(self.context, | ||
4518 | 1471 | search_opts={'reservation_id': resv_id}) | ||
4519 | 1472 | self.assertEqual(len(instances), 2) | ||
4520 | 1473 | for instance in instances: | ||
4521 | 1474 | self.assertEqual(instance['reservation_id'], resv_id) | ||
4522 | 1475 | db.instance_destroy(self.context, instance['id']) | ||
4523 | 1476 | |||
4524 | 1477 | def test_create_with_specified_reservation_id(self): | ||
4525 | 1478 | """Verify building instances with a specified | ||
4526 | 1479 | reservation_id results in the correct reservation_id | ||
4527 | 1480 | being set | ||
4528 | 1481 | """ | ||
4529 | 1482 | |||
4530 | 1483 | # We need admin context to be able to specify our own | ||
4531 | 1484 | # reservation_ids. | ||
4532 | 1485 | context = self.context.elevated() | ||
4533 | 1486 | # 1 instance | ||
4534 | 1487 | (refs, resv_id) = self.compute_api.create(context, | ||
4535 | 1488 | instance_types.get_default_instance_type(), None, | ||
4536 | 1489 | min_count=1, max_count=1, reservation_id='meow') | ||
4537 | 1490 | try: | ||
4538 | 1491 | self.assertEqual(len(refs), 1) | ||
4539 | 1492 | self.assertEqual(resv_id, 'meow') | ||
4540 | 1493 | finally: | ||
4541 | 1494 | self.assertEqual(refs[0]['reservation_id'], resv_id) | ||
4542 | 1495 | db.instance_destroy(self.context, refs[0]['id']) | ||
4543 | 1496 | |||
4544 | 1497 | # 2 instances | ||
4545 | 1498 | (refs, resv_id) = self.compute_api.create(context, | ||
4546 | 1499 | instance_types.get_default_instance_type(), None, | ||
4547 | 1500 | min_count=2, max_count=2, reservation_id='woof') | ||
4548 | 1501 | try: | ||
4549 | 1502 | self.assertEqual(len(refs), 2) | ||
4550 | 1503 | self.assertEqual(resv_id, 'woof') | ||
4551 | 1504 | finally: | ||
4552 | 1505 | for instance in refs: | ||
4553 | 1506 | self.assertEqual(instance['reservation_id'], resv_id) | ||
4554 | 1507 | db.instance_destroy(self.context, instance['id']) | ||
4555 | 1397 | 1508 | ||
4556 | === modified file 'nova/tests/test_quota.py' | |||
4557 | --- nova/tests/test_quota.py 2011-08-03 19:22:58 +0000 | |||
4558 | +++ nova/tests/test_quota.py 2011-09-23 07:08:19 +0000 | |||
4559 | @@ -21,9 +21,11 @@ | |||
4560 | 21 | from nova import db | 21 | from nova import db |
4561 | 22 | from nova import flags | 22 | from nova import flags |
4562 | 23 | from nova import quota | 23 | from nova import quota |
4563 | 24 | from nova import rpc | ||
4564 | 24 | from nova import test | 25 | from nova import test |
4565 | 25 | from nova import volume | 26 | from nova import volume |
4566 | 26 | from nova.compute import instance_types | 27 | from nova.compute import instance_types |
4567 | 28 | from nova.scheduler import driver as scheduler_driver | ||
4568 | 27 | 29 | ||
4569 | 28 | 30 | ||
4570 | 29 | FLAGS = flags.FLAGS | 31 | FLAGS = flags.FLAGS |
4571 | @@ -51,6 +53,21 @@ | |||
4572 | 51 | self.context = context.RequestContext(self.user_id, | 53 | self.context = context.RequestContext(self.user_id, |
4573 | 52 | self.project_id, | 54 | self.project_id, |
4574 | 53 | True) | 55 | True) |
4575 | 56 | orig_rpc_call = rpc.call | ||
4576 | 57 | |||
4577 | 58 | def rpc_call_wrapper(context, topic, msg): | ||
4578 | 59 | """Stub out the scheduler creating the instance entry""" | ||
4579 | 60 | if topic == FLAGS.scheduler_topic and \ | ||
4580 | 61 | msg['method'] == 'run_instance': | ||
4581 | 62 | scheduler = scheduler_driver.Scheduler | ||
4582 | 63 | instance = scheduler().create_instance_db_entry( | ||
4583 | 64 | context, | ||
4584 | 65 | msg['args']['request_spec']) | ||
4585 | 66 | return [scheduler_driver.encode_instance(instance)] | ||
4586 | 67 | else: | ||
4587 | 68 | return orig_rpc_call(context, topic, msg) | ||
4588 | 69 | |||
4589 | 70 | self.stubs.Set(rpc, 'call', rpc_call_wrapper) | ||
4590 | 54 | 71 | ||
4591 | 55 | def _create_instance(self, cores=2): | 72 | def _create_instance(self, cores=2): |
4592 | 56 | """Create a test instance""" | 73 | """Create a test instance""" |
You'll see some code removed from the volumes extension. This is because it subclasses the servers controller to override create() to allow 'block_device_mapping' to be specified with a build. Since it was mostly code duplication (and used create_instance_helper), I just created a method in the servers controller that can be overridden in the volumes extension to retrieve the block_device_mapping.
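The override pattern described above can be sketched roughly as follows. This is an illustrative example only — the class and method names are hypothetical and do not match the actual nova identifiers; it just shows how a base controller hook lets the extension supply the block_device_mapping without duplicating create():

```python
class ServersController(object):
    """Hypothetical base controller with an overridable hook."""

    def _get_block_device_mapping(self, server_dict):
        # Base API: block device mappings are not accepted here.
        return None

    def create(self, body):
        # Shared create() logic lives in one place; subclasses only
        # customize the pieces they care about via the hook above.
        server_dict = body['server']
        bdm = self._get_block_device_mapping(server_dict)
        return {'name': server_dict['name'],
                'block_device_mapping': bdm}


class VolumesServerController(ServersController):
    """Hypothetical volumes-extension controller."""

    def _get_block_device_mapping(self, server_dict):
        # Extension: honor a caller-supplied mapping.
        return server_dict.get('block_device_mapping')
```

With this shape, the extension no longer needs to copy the whole create() body; it only overrides the one accessor.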