Merge lp:~termie/nova/eventlet_merge into lp:~hudson-openstack/nova/trunk

Proposed by termie
Status: Merged
Approved by: Eric Day
Approved revision: 400
Merged at revision: 465
Proposed branch: lp:~termie/nova/eventlet_merge
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4171 lines (+651/-1472)
45 files modified
bin/nova-api (+8/-12)
bin/nova-combined (+65/-0)
bin/nova-compute (+6/-9)
bin/nova-network (+6/-9)
bin/nova-scheduler (+6/-9)
bin/nova-volume (+6/-9)
nova/compute/disk.py (+39/-45)
nova/compute/manager.py (+20/-29)
nova/flags.py (+7/-0)
nova/manager.py (+1/-3)
nova/network/manager.py (+1/-3)
nova/objectstore/image.py (+1/-0)
nova/process.py (+0/-209)
nova/rpc.py (+26/-57)
nova/server.py (+0/-151)
nova/service.py (+0/-195)
nova/test.py (+85/-15)
nova/tests/access_unittest.py (+1/-1)
nova/tests/auth_unittest.py (+3/-3)
nova/tests/cloud_unittest.py (+2/-4)
nova/tests/compute_unittest.py (+18/-23)
nova/tests/flags_unittest.py (+1/-1)
nova/tests/misc_unittest.py (+1/-1)
nova/tests/network_unittest.py (+1/-1)
nova/tests/objectstore_unittest.py (+2/-2)
nova/tests/process_unittest.py (+0/-132)
nova/tests/quota_unittest.py (+1/-1)
nova/tests/rpc_unittest.py (+18/-18)
nova/tests/scheduler_unittest.py (+12/-12)
nova/tests/service_unittest.py (+14/-32)
nova/tests/validator_unittest.py (+0/-42)
nova/tests/virt_unittest.py (+5/-6)
nova/tests/volume_unittest.py (+25/-33)
nova/utils.py (+43/-13)
nova/validate.py (+0/-94)
nova/virt/fake.py (+3/-7)
nova/virt/images.py (+6/-5)
nova/virt/libvirt_conn.py (+80/-99)
nova/virt/xenapi/network_utils.py (+3/-7)
nova/virt/xenapi/vm_utils.py (+10/-20)
nova/virt/xenapi/vmops.py (+30/-37)
nova/virt/xenapi_conn.py (+23/-24)
nova/volume/driver.py (+57/-79)
nova/volume/manager.py (+11/-16)
run_tests.py (+4/-4)
To merge this branch: bzr merge lp:~termie/nova/eventlet_merge
Reviewer Review Type Date Requested Status
Jay Pipes (community) Approve
Eric Day (community) Approve
Anthony Young (community) Needs Fixing
Review via email: mp+43383@code.launchpad.net

Description of the change

This branch removes most of the dependencies on twisted and moves towards the plan described by https://blueprints.launchpad.net/nova/+spec/unified-service-architecture

Tests are currently passing, except for objectstore, which is being skipped because it is heavily reliant on our twisted pieces, and I can run everything using nova.sh.

Additionally, this adds nova-combined, which covers everything except nova-objectstore. To test it, what I've usually done is run nova.sh as usual:

$ sudo ./eventlet_merge/contrib/nova.sh run ignored eventlet_merge

then quit all the services except nova-objectstore, and in one of the screens run

$ ./eventlet_merge/bin/nova-combined

Then run whatever manual testing you normally do.

Once objectstore has been deprecated and removed, nova-combined can be expected to run the whole nova stack in a single process for testing and dev.
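
For reference, the core of the new bin/nova-combined (trimmed from the full version in the preview diff below) is roughly the following: each worker is created as an in-process service.Service, and both API endpoints are served from a single eventlet wsgi server.

    import eventlet
    eventlet.monkey_patch()

    import sys

    from nova import api
    from nova import flags
    from nova import service
    from nova import utils
    from nova import wsgi

    FLAGS = flags.FLAGS
    flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port')
    flags.DEFINE_string('osapi_host', '0.0.0.0', 'OpenStack API host')
    flags.DEFINE_integer('ec2api_port', 8773, 'EC2 API port')
    flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host')

    if __name__ == '__main__':
        utils.default_flagfile()
        FLAGS(sys.argv)

        # one in-process service per worker
        compute = service.Service.create(binary='nova-compute')
        network = service.Service.create(binary='nova-network')
        volume = service.Service.create(binary='nova-volume')
        scheduler = service.Service.create(binary='nova-scheduler')
        service.serve(compute, network, volume, scheduler)

        # both API endpoints on one wsgi server
        server = wsgi.Server()
        server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host)
        server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host)
        server.wait()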

Revision history for this message
Anthony Young (sleepsonthefloor) wrote :

Some assorted notes from eday, termie, vishy, sleepsonthefloor, jesse

#11 - remove extra line above docstring
#32 remove main in nova-api
#93-95 duplicate flag api_port in
#199 extra space
#548 fix space
#583 fix space
#634 s/return/pass
#938 pylint
#1398 - kill, and need to fix init scripts
#1440 use warn vs debug
#1442 remove section
#1452-1487 lots of assorted
#2467 can probably be removed
#2878 - hmm...
#3039 #3036 s/return/pass
#3125 - can just return timer_done
#3160 - _wait_for_timer
#3329 output, error
#3341 whitespace
#3427 delete
#3450 - fix comment - clarify what is meant by 'block'
#3425 maybe move to previous line?
#3450 move up to previous line
#3442-#3844 investigate - some funkiness going on. Ask armando?
#3831 - s/utis/utils
#3832 make sure timer stops
#3909 fix sleeping out of process greenthread.sleep
#3928 fix whitespace
#3938 fix whitespace
#3949 fix whitespace
#3999 greenthread.sleep
#4093 fix whitespace
#4114 fix whitespace
#4235 fix whitespace
#4253 remove debug code

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

I have a feeling the git-bzr bridge is not working properly... I see the same issues with timestamps that I saw when Chris MacGown used the git-bzr bridge. Andy, I assume you are still using git with your git-bzr bridge?

Note that this merge request first *removes* nova/service.py (notice the 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I don't know why this is the case and why it doesn't simply show the diff of the modified nova/service.py.

For Chris' branch, I had to essentially re-construct his patches and submit them from bzr to Launchpad; trying to merge it as-is into trunk resulted in conflicts that could not be resolved. In fact, Chris' git-bzr original branch (lp:~chris-slicehost/glance/implements_swift_backend) still sits in the list of Glance's active branches (https://code.launchpad.net/glance/) despite being marked as Merged. The reason is that something happens in the repository metadata when the git-bzr bridge produces a branch, and somehow history is discarded. You can see that the timestamps of revisions coming from branches managed with the git-bzr bridge are all 0000-00-00...

Anyway, I'm saying this because, despite my liking some of the changes in this patch (and I certainly welcome the direction it takes), I fear it will not merge correctly. In addition, the patch is (IMHO) unnecessarily large. It would be a lot easier to review (and thus get into trunk quickly) if the patch were broken up and we could simply merge in changes in chunks. This patch does not need to be done all at once.

Just my 2 cents.
-jay

Revision history for this message
Eric Day (eday) wrote :

Hi Jay,

I stopped by to see the Anso guys on Friday and we did a group code review on this. The list above is what came out of it. I had the same questions about removing/adding some files, and it sounds like it was done this way because of using temp files during development and then, once things worked, moving them into place. Even though the files are named the same, the content is quite different.

As for the size, I suppose it could have been done in a couple of chunks, but then there would have been duplicate files/classes (like service.py) for both eventlet and twisted. Most of the changes are just removing defers/yields, so I'm okay with the size.
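
(To make "removing defers/yields" concrete: most of the conversion follows the pattern below, paraphrased from the nova/compute/disk.py hunk in this diff — the @defer.inlineCallbacks decorator disappears and the yielded calls become plain blocking calls, which eventlet's monkey patching keeps cooperative.)

    # Before (twisted), paraphrased:
    import os
    from twisted.internet import defer

    @defer.inlineCallbacks
    def _inject_key_into_fs(key, fs, execute=None):
        sshdir = os.path.join(fs, 'root', '.ssh')
        yield execute('sudo mkdir -p %s' % sshdir)
        yield execute('sudo chown root %s' % sshdir)
        yield execute('sudo chmod 700 %s' % sshdir)

    # After (eventlet):
    def _inject_key_into_fs(key, fs, execute=None):
        sshdir = os.path.join(fs, 'root', '.ssh')
        execute('sudo mkdir -p %s' % sshdir)
        execute('sudo chown root %s' % sshdir)
        execute('sudo chmod 700 %s' % sshdir)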

I will approve once the issues above are resolved.

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, Eric, we'll see if it merges correctly, then...

Revision history for this message
termie (termie) wrote :

> I have a feeling the git-bzr bridge is not working properly... I see that same
> issues with timestamps that I saw when Chris MacGown used the git-bzr bridge.
> Andy, I assume you are still using git with your git-bzr bridge?
>

No, git-bzr was not used on this branch; I have largely stopped using it because I need to maintain patches against both bzr and bzr-fastimport to keep it working. I don't know which timestamps you are referring to.

> Note that this merge request first *removes* nova/service.py (notice the
> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I don't
> know why this is the case and why it doesn't simply show the diff of the
> modified nova/service.py.

I had another file for the first half of the development of this branch that I later overwrote service.py with. It is something of a shame to lose the development history of that single file, but it isn't worth dealing with bzr's poor tools to attempt to reconstruct it.

> For Chris' branch, I had to essentially re-construct his patches and submit
> them from bzr to Launchpad; trying to merge it as-is into trunk resulted in
> conflicts that could not be resolved. In fact, Chris' git-bzr original branch
> (lp:~chris-slicehost/glance/implements_swift_backend) still sits in the list
> of Glance's active branches (https://code.launchpad.net/glance/) despite it
> being marked as Merged. The reason is because something happens in the
> repository metadata when the git-bzr bridge produces a branch, and somehow
> history is discarded. You can see that timestamps of revisions coming from the
> branches managed with the git-bzr bridge are all 0000-00-00...
>
> Anyway, I'm saying this because despite me liking some of the changes in this
> patch (and I certainly welcome the direction it takes), I fear it will not
> merge correctly. In addition, the patch is (IMHO) unnecessarily large. It
> would be a lot easier to review (and thus get into trunk quickly) if the patch
> was broken up and we could simply merge in changes in chunks. This patch does
> not need to be done all at once.

I also fear it will not merge correctly, simply because a bunch of spurious conflicts were generated while merging from upstream. This patch does need to be done all at once.

> Just my 2 cents.
> -jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:
>> I have a feeling the git-bzr bridge is not working properly... I see that same
>> issues with timestamps that I saw when Chris MacGown used the git-bzr bridge.
>> Andy, I assume you are still using git with your git-bzr bridge?
>>
>
> Not git-bzr was used on this branch, I have largely stopped using it because I need to maintain patches against both bzr and bzr-fastimport to keep it working. I don't know which timestamps you are referring to.

OK.

>> Note that this merge request first *removes* nova/service.py (notice the
>> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py.  I don't
>> know why this is the case and why it doesn't simply show the diff of the
>> modified nova/service.py.
>
> I had another file for the first half of the development of this branch that I later overwrote service.py with, it is something of a shame to lose the development history of that single file but not worth dealing the bzr's poor tools to attempt to reconstruct it.

Just because you are used to or prefer git doesn't mean that bzr has poor tooling. If you overwrite a file in git, the same thing would happen.

<snip>
> I also fear it will not merge correctly simply because while merging from upstream a bunch of spurious conflicts were generated. This patch does need to be done all at once.

OK. However, I don't agree, but I can move forward with the review...

-jay

Revision history for this message
termie (termie) wrote :

Updated per review.

Revision history for this message
Josh Kearney (jk0) wrote :

For what it's worth, I'm running this branch in my development environment and it's been great.

Revision history for this message
Eric Day (eday) wrote :

Is there any reason why the virt_unittests were removed? Also, I know we discussed this, but I'd still like to find a way to get the objectstore unittests running again, even if it means creating the base twisted test classes again just for it.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Some notes on trying to get this branch to run. I'll update again just to be sure I have the latest.

1. logging is console only by default (assuming run as user). Needed to update service.Service to get file logging. Not a biggie.

2. Fails when trying to run a xenserver instance: http://paste.openstack.org/show/303/

3. Why does nova-objectstore have to be run the old way and not integrated with nova-combined?

I'll see if I can fix #2 in the meanwhile.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Oh, it should be noted that nova-combined still needs to take the flagfile ... this threw me off while testing.

bin/nova-combined --flagfile=../nova.conf

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

And, I was using the latest branch re: the above comments.

Revision history for this message
termie (termie) wrote :

> Is there any reason why the virt_unittests were removed? Also, I know we
> discussed this, but I'd still like to find a way to get the objectstore
> unittests running again, even if it means creating the base twisted test
> classes again just for it.

It was in there twice. But looking at the diff, apparently bzr has lied to me yet again and somehow had conflicts it didn't mention; sorting those out now.

Revision history for this message
termie (termie) wrote :

Ah, the conflicts are just in the diff view that Launchpad creates, which is still unfortunate.

Revision history for this message
termie (termie) wrote :

> Some notes on trying to get this branch to run. I'll update again just to be
> sure I have the latest.
>
> 1. logging is console only by default (assuming run as user). Needed to update
> service.Service to get file logging. Not a biggie.
>
> 2. Fails when trying to run a xenserver instance:
> http://paste.openstack.org/show/303/

Fixed the import error. I would appreciate it if you could test whether it runs; I currently don't have any way to test the xen stuff.

> 3. Why does nova-objectstore have to be run the old way and not integrated
> with nova-combined?

It is very reliant on twisted's implementations of things, and since it is being deprecated, I didn't want to spend time porting it over to eventlet.

>
> I'll see if I can fix #2 in the meanwhile.

Revision history for this message
termie (termie) wrote :

Oh, and re logging: it isn't supposed to log to a file; that is for your supervisor daemon to handle (init, upstart, and daemontools all expect output on stdout and handle the logging for you).

Revision history for this message
termie (termie) wrote :

> On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:
> >> I have a feeling the git-bzr bridge is not working properly... I see that
> same
> >> issues with timestamps that I saw when Chris MacGown used the git-bzr
> bridge.
> >> Andy, I assume you are still using git with your git-bzr bridge?
> >>
> >
> > Not git-bzr was used on this branch, I have largely stopped using it because
> I need to maintain patches against both bzr and bzr-fastimport to keep it
> working. I don't know which timestamps you are referring to.
>
> OK.
>
> >> Note that this merge request first *removes* nova/service.py (notice the
> >> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py.  I
> don't
> >> know why this is the case and why it doesn't simply show the diff of the
> >> modified nova/service.py.
> >
> > I had another file for the first half of the development of this branch that
> I later overwrote service.py with, it is something of a shame to lose the
> development history of that single file but not worth dealing the bzr's poor
> tools to attempt to reconstruct it.
>
> Because you are used to or prefer git doesn't mean that bzr has poor
> tooling. If you overwrite a file in git, the same thing would happen.
>

I was referring to the tools to pull apart and remake the commit to fix the problem, some equivalent of git's rebase -i (which to my knowledge doesn't exist outside of git), not that it was bzr's fault that there is an issue to begin with. And apologies for the jab at bzr, "bzr's poor tools" would better read "without git's tools."

> <snip>
> > I also fear it will not merge correctly simply because while merging from
> upstream a bunch of spurious conflicts were generated. This patch does need to
> be done all at once.
>
> OK. However, I don't agree, but I can move forward with the review...
>

Sorry, some further explanation: it needs to be done all at once because once one call in a stack has switched from deferreds to eventlet, every other call in that stack has to switch as well; otherwise things begin to return before initiating any actual work, or return incorrect values (usually a deferred where the result of the deferred was required).
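
(A minimal, hypothetical sketch of that failure mode — not code from the branch: once a caller has been converted to plain blocking calls, a callee that still uses deferreds hands back the Deferred object itself rather than the value, and the caller continues before the work has completed.)

    from twisted.internet import defer

    @defer.inlineCallbacks
    def old_style_get_dev_path():
        # still twisted-style: the result is only available via the Deferred
        yield defer.succeed(None)      # stands in for real asynchronous work
        defer.returnValue('/dev/vdc')

    def new_style_caller():
        # already converted to eventlet-style blocking code: this receives a
        # Deferred object instead of '/dev/vdc' and keeps going immediately
        dev_path = old_style_get_dev_path()
        return dev_path

    print(type(new_style_caller()))    # twisted.internet.defer.Deferred, not str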

> -jay

Revision history for this message
termie (termie) wrote :

Yay, it was just a tiny conflict (not sure how it was introduced, as it doesn't seem like it could have run the other way); resolved and merged from upstream.

Revision history for this message
Eric Day (eday) wrote :

Based on the in-person review we did, and seeing that this branch works (minus a few xen issues we can iron out, as that code is changing too), I'm ok with it. I would still like to see the objectstore unittests enabled again, but if we are deprecating this before release it may not be important.

review: Approve
Revision history for this message
Jay Pipes (jaypipes) wrote :

Yeah, the conflict was the i18n patch that was merged recently...
sorry about that... looking forward to seeing this work get into trunk
:)

On Wed, Dec 15, 2010 at 2:04 PM, termie <email address hidden> wrote:
> yay, it was just a tiny conflict (not sure how it was introduced as it doesn't seem like it could have run the other way), resolved and merged from upstream
> --
> https://code.launchpad.net/~termie/nova/eventlet_merge/+merge/43383
> Your team Nova Core is requested to review the proposed merge of lp:~termie/nova/eventlet_merge into lp:nova.
>

Revision history for this message
Jay Pipes (jaypipes) wrote :

> > On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:

> I was referring to the tools to pull apart and remake the commit to fix the
> problem, some equivalent of git's rebase -i (which to my knowledge doesn't
> exist outside of git), not that it was bzr's fault that there is an issue to
> begin with. And apologies for the jab at bzr, "bzr's poor tools" would better
> read "without git's tools."

No worries. :)

You may be interested in bzr's rebase plugin ;)

http://doc.bazaar.canonical.com/plugins/en/rebase-plugin.html

> > <snip>
> > > I also fear it will not merge correctly simply because while merging from
> > upstream a bunch of spurious conflicts were generated. This patch does need
> to
> > be done all at once.
> >
> > OK. However, I don't agree, but I can move forward with the review...
> >
>
> Sorry, some further explanation: it needs to be done all at once because once
> one call in a stack has switched from using deferreds to eventlet every other
> call in that stack has to as well otherwise things begin to return before
> initiating actual work or return incorrect values (usually a deferred where
> the result of a deferred was required).

Gotcha. After looking through the code more thoroughly, I think it's a great improvement. I share the same concerns as Eric re: objectstore, but then again, if I can get Glance done, maybe that won't be much of an issue! :)

-jay
> > -jay

review: Approve
Revision history for this message
Eric Day (eday) wrote :

There are some pep8 issues with this that you'll need to fix before the merge will succeed. Just run:

pep8 -r bin/* nova

bin/nova-combined:53:1: W293 blank line contains whitespace
bin/nova-combined:66:1: W391 blank line at end of file
bin/nova-combined:66:1: W293 blank line contains whitespace
nova/service.py:102:1: W293 blank line contains whitespace
nova/service.py:125:20: W291 trailing whitespace
nova/service.py:195:1: W293 blank line contains whitespace
nova/service.py:210:1: W293 blank line contains whitespace
nova/service.py:230:1: W293 blank line contains whitespace
nova/test.py:58:1: E302 expected 2 blank lines, found 1
nova/utils.py:236:9: E301 expected 1 blank line, found 0
nova/utils.py:247:1: W293 blank line contains whitespace
nova/utils.py:251:1: W293 blank line contains whitespace
nova/utils.py:254:1: W293 blank line contains whitespace
nova/compute/manager.py:67:1: W293 blank line contains whitespace
nova/tests/rpc_unittest.py:70:34: W291 trailing whitespace
nova/tests/service_unittest.py:122:1: W293 blank line contains whitespace
nova/virt/libvirt_conn.py:208:1: W293 blank line contains whitespace
nova/virt/libvirt_conn.py:390:1: W293 blank line contains whitespace
nova/virt/libvirt_conn.py:442:1: W293 blank line contains whitespace
nova/virt/libvirt_conn.py:444:15: E111 indentation is not a multiple of four
nova/virt/xenapi_conn.py:203:64: W291 trailing whitespace

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~termie/nova/eventlet_merge into lp:nova failed. Below is the output from the failed tests.

bin/nova-combined:53:1: W293 blank line contains whitespace

^
    JCR: Trailing whitespace is superfluous.
    FBM: Except when it occurs as part of a blank line (i.e. the line is
         nothing but whitespace). According to Python docs[1] a line with only
         whitespace is considered a blank line, and is to be ignored. However,
         matching a blank line to its indentation level avoids mistakenly
         terminating a multi-line statement (e.g. class declaration) when
         pasting code into the standard Python interpreter.

         [1] http://docs.python.org/reference/lexical_analysis.html#blank-lines

    The warning returned varies on whether the line itself is blank, for easier
    filtering for those who want to indent their blank lines.

    Okay: spam(1)
    W291: spam(1)\s
    W293: class Foo(object):\n \n bang = 12
bin/nova-combined:66:1: W391 blank line at end of file

^
    JCR: Trailing blank lines are superfluous.

    Okay: spam(1)
    W391: spam(1)\n
bin/nova-combined:66:1: W293 blank line contains whitespace

^
    JCR: Trailing whitespace is superfluous.
    FBM: Except when it occurs as part of a blank line (i.e. the line is
         nothing but whitespace). According to Python docs[1] a line with only
         whitespace is considered a blank line, and is to be ignored. However,
         matching a blank line to its indentation level avoids mistakenly
         terminating a multi-line statement (e.g. class declaration) when
         pasting code into the standard Python interpreter.

         [1] http://docs.python.org/reference/lexical_analysis.html#blank-lines

    The warning returned varies on whether the line itself is blank, for easier
    filtering for those who want to indent their blank lines.

    Okay: spam(1)
    W291: spam(1)\s
    W293: class Foo(object):\n \n bang = 12

lp:~termie/nova/eventlet_merge updated
400. By termie

pep8 fixes for bin

Preview Diff

=== modified file 'bin/nova-api'
--- bin/nova-api 2010-12-11 20:10:24 +0000
+++ bin/nova-api 2010-12-16 20:49:10 +0000
@@ -17,9 +17,8 @@
17# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.17# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18# See the License for the specific language governing permissions and18# See the License for the specific language governing permissions and
19# limitations under the License.19# limitations under the License.
20"""20
21Nova API daemon.21"""Starter script for Nova API."""
22"""
2322
24import gettext23import gettext
25import os24import os
@@ -35,9 +34,11 @@
3534
36gettext.install('nova', unicode=1)35gettext.install('nova', unicode=1)
3736
37from nova import api
38from nova import flags38from nova import flags
39from nova import utils39from nova import utils
40from nova import server40from nova import wsgi
41
4142
42FLAGS = flags.FLAGS43FLAGS = flags.FLAGS
43flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port')44flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port')
@@ -46,15 +47,10 @@
46flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host')47flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host')
4748
4849
49def main(_args):50if __name__ == '__main__':
50 from nova import api51 utils.default_flagfile()
51 from nova import wsgi52 FLAGS(sys.argv)
52 server = wsgi.Server()53 server = wsgi.Server()
53 server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host)54 server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host)
54 server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host)55 server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host)
55 server.wait()56 server.wait()
56
57
58if __name__ == '__main__':
59 utils.default_flagfile()
60 server.serve('nova-api', main)
6157
=== added file 'bin/nova-combined'
--- bin/nova-combined 1970-01-01 00:00:00 +0000
+++ bin/nova-combined 2010-12-16 20:49:10 +0000
@@ -0,0 +1,65 @@
1#!/usr/bin/env python
2# vim: tabstop=4 shiftwidth=4 softtabstop=4
3
4# Copyright 2010 United States Government as represented by the
5# Administrator of the National Aeronautics and Space Administration.
6# All Rights Reserved.
7#
8# Licensed under the Apache License, Version 2.0 (the "License"); you may
9# not use this file except in compliance with the License. You may obtain
10# a copy of the License at
11#
12# http://www.apache.org/licenses/LICENSE-2.0
13#
14# Unless required by applicable law or agreed to in writing, software
15# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
16# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
17# License for the specific language governing permissions and limitations
18# under the License.
19
20"""Combined starter script for Nova services."""
21
22import eventlet
23eventlet.monkey_patch()
24
25import os
26import sys
27
28# If ../nova/__init__.py exists, add ../ to Python search path, so that
29# it will override what happens to be installed in /usr/(local/)lib/python...
30possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
31 os.pardir,
32 os.pardir))
33if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
34 sys.path.insert(0, possible_topdir)
35
36from nova import api
37from nova import flags
38from nova import service
39from nova import utils
40from nova import wsgi
41
42
43FLAGS = flags.FLAGS
44flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port')
45flags.DEFINE_string('osapi_host', '0.0.0.0', 'OpenStack API host')
46flags.DEFINE_integer('ec2api_port', 8773, 'EC2 API port')
47flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host')
48
49
50if __name__ == '__main__':
51 utils.default_flagfile()
52 FLAGS(sys.argv)
53
54 compute = service.Service.create(binary='nova-compute')
55 network = service.Service.create(binary='nova-network')
56 volume = service.Service.create(binary='nova-volume')
57 scheduler = service.Service.create(binary='nova-scheduler')
58 #objectstore = service.Service.create(binary='nova-objectstore')
59
60 service.serve(compute, network, volume, scheduler)
61
62 server = wsgi.Server()
63 server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host)
64 server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host)
65 server.wait()
066
=== modified file 'bin/nova-compute'
--- bin/nova-compute 2010-12-11 20:10:24 +0000
+++ bin/nova-compute 2010-12-16 20:49:10 +0000
@@ -17,9 +17,10 @@
17# License for the specific language governing permissions and limitations17# License for the specific language governing permissions and limitations
18# under the License.18# under the License.
1919
20"""20"""Starter script for Nova Compute."""
21 Twistd daemon for the nova compute nodes.21
22"""22import eventlet
23eventlet.monkey_patch()
2324
24import gettext25import gettext
25import os26import os
@@ -36,13 +37,9 @@
36gettext.install('nova', unicode=1)37gettext.install('nova', unicode=1)
3738
38from nova import service39from nova import service
39from nova import twistd
40from nova import utils40from nova import utils
4141
42
43if __name__ == '__main__':42if __name__ == '__main__':
44 utils.default_flagfile()43 utils.default_flagfile()
45 twistd.serve(__file__)44 service.serve()
4645 service.wait()
47if __name__ == '__builtin__':
48 application = service.Service.create() # pylint: disable=C0103
4946
=== modified file 'bin/nova-network'
--- bin/nova-network 2010-12-14 23:22:03 +0000
+++ bin/nova-network 2010-12-16 20:49:10 +0000
@@ -17,9 +17,10 @@
17# License for the specific language governing permissions and limitations17# License for the specific language governing permissions and limitations
18# under the License.18# under the License.
1919
20"""20"""Starter script for Nova Network."""
21 Twistd daemon for the nova network nodes.21
22"""22import eventlet
23eventlet.monkey_patch()
2324
24import gettext25import gettext
25import os26import os
@@ -36,13 +37,9 @@
36gettext.install('nova', unicode=1)37gettext.install('nova', unicode=1)
3738
38from nova import service39from nova import service
39from nova import twistd
40from nova import utils40from nova import utils
4141
42
43if __name__ == '__main__':42if __name__ == '__main__':
44 utils.default_flagfile()43 utils.default_flagfile()
45 twistd.serve(__file__)44 service.serve()
4645 service.wait()
47if __name__ == '__builtin__':
48 application = service.Service.create() # pylint: disable-msg=C0103
4946
=== modified file 'bin/nova-scheduler'
--- bin/nova-scheduler 2010-12-14 23:22:03 +0000
+++ bin/nova-scheduler 2010-12-16 20:49:10 +0000
@@ -17,9 +17,10 @@
17# License for the specific language governing permissions and limitations17# License for the specific language governing permissions and limitations
18# under the License.18# under the License.
1919
20"""20"""Starter script for Nova Scheduler."""
21 Twistd daemon for the nova scheduler nodes.21
22"""22import eventlet
23eventlet.monkey_patch()
2324
24import gettext25import gettext
25import os26import os
@@ -36,13 +37,9 @@
36gettext.install('nova', unicode=1)37gettext.install('nova', unicode=1)
3738
38from nova import service39from nova import service
39from nova import twistd
40from nova import utils40from nova import utils
4141
42
43if __name__ == '__main__':42if __name__ == '__main__':
44 utils.default_flagfile()43 utils.default_flagfile()
45 twistd.serve(__file__)44 service.serve()
4645 service.wait()
47if __name__ == '__builtin__':
48 application = service.Service.create()
4946
=== modified file 'bin/nova-volume'
--- bin/nova-volume 2010-12-14 23:22:03 +0000
+++ bin/nova-volume 2010-12-16 20:49:10 +0000
@@ -17,9 +17,10 @@
17# License for the specific language governing permissions and limitations17# License for the specific language governing permissions and limitations
18# under the License.18# under the License.
1919
20"""20"""Starter script for Nova Volume."""
21 Twistd daemon for the nova volume nodes.21
22"""22import eventlet
23eventlet.monkey_patch()
2324
24import gettext25import gettext
25import os26import os
@@ -36,13 +37,9 @@
36gettext.install('nova', unicode=1)37gettext.install('nova', unicode=1)
3738
38from nova import service39from nova import service
39from nova import twistd
40from nova import utils40from nova import utils
4141
42
43if __name__ == '__main__':42if __name__ == '__main__':
44 utils.default_flagfile()43 utils.default_flagfile()
45 twistd.serve(__file__)44 service.serve()
4645 service.wait()
47if __name__ == '__builtin__':
48 application = service.Service.create() # pylint: disable-msg=C0103
4946
=== modified file 'nova/compute/disk.py'
--- nova/compute/disk.py 2010-11-16 18:49:18 +0000
+++ nova/compute/disk.py 2010-12-16 20:49:10 +0000
@@ -26,8 +26,6 @@
26import os26import os
27import tempfile27import tempfile
2828
29from twisted.internet import defer
30
31from nova import exception29from nova import exception
32from nova import flags30from nova import flags
3331
@@ -39,7 +37,6 @@
39 'block_size to use for dd')37 'block_size to use for dd')
4038
4139
42@defer.inlineCallbacks
43def partition(infile, outfile, local_bytes=0, resize=True,40def partition(infile, outfile, local_bytes=0, resize=True,
44 local_type='ext2', execute=None):41 local_type='ext2', execute=None):
45 """42 """
@@ -64,10 +61,10 @@
64 file_size = os.path.getsize(infile)61 file_size = os.path.getsize(infile)
65 if resize and file_size < FLAGS.minimum_root_size:62 if resize and file_size < FLAGS.minimum_root_size:
66 last_sector = FLAGS.minimum_root_size / sector_size - 163 last_sector = FLAGS.minimum_root_size / sector_size - 1
67 yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'64 execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'
68 % (infile, last_sector, sector_size))65 % (infile, last_sector, sector_size))
69 yield execute('e2fsck -fp %s' % infile, check_exit_code=False)66 execute('e2fsck -fp %s' % infile, check_exit_code=False)
70 yield execute('resize2fs %s' % infile)67 execute('resize2fs %s' % infile)
71 file_size = FLAGS.minimum_root_size68 file_size = FLAGS.minimum_root_size
72 elif file_size % sector_size != 0:69 elif file_size % sector_size != 0:
73 logging.warn("Input partition size not evenly divisible by"70 logging.warn("Input partition size not evenly divisible by"
@@ -86,30 +83,29 @@
86 last_sector = local_last # e83 last_sector = local_last # e
8784
88 # create an empty file85 # create an empty file
89 yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'86 execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'
90 % (outfile, mbr_last, sector_size))87 % (outfile, mbr_last, sector_size))
9188
92 # make mbr partition89 # make mbr partition
93 yield execute('parted --script %s mklabel msdos' % outfile)90 execute('parted --script %s mklabel msdos' % outfile)
9491
95 # append primary file92 # append primary file
96 yield execute('dd if=%s of=%s bs=%s conv=notrunc,fsync oflag=append'93 execute('dd if=%s of=%s bs=%s conv=notrunc,fsync oflag=append'
97 % (infile, outfile, FLAGS.block_size))94 % (infile, outfile, FLAGS.block_size))
9895
99 # make primary partition96 # make primary partition
100 yield execute('parted --script %s mkpart primary %ds %ds'97 execute('parted --script %s mkpart primary %ds %ds'
101 % (outfile, primary_first, primary_last))98 % (outfile, primary_first, primary_last))
10299
103 if local_bytes > 0:100 if local_bytes > 0:
104 # make the file bigger101 # make the file bigger
105 yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'102 execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d'
106 % (outfile, last_sector, sector_size))103 % (outfile, last_sector, sector_size))
107 # make and format local partition104 # make and format local partition
108 yield execute('parted --script %s mkpartfs primary %s %ds %ds'105 execute('parted --script %s mkpartfs primary %s %ds %ds'
109 % (outfile, local_type, local_first, local_last))106 % (outfile, local_type, local_first, local_last))
110107
111108
112@defer.inlineCallbacks
113def inject_data(image, key=None, net=None, partition=None, execute=None):109def inject_data(image, key=None, net=None, partition=None, execute=None):
114 """Injects a ssh key and optionally net data into a disk image.110 """Injects a ssh key and optionally net data into a disk image.
115111
@@ -119,26 +115,26 @@
119 If partition is not specified it mounts the image as a single partition.115 If partition is not specified it mounts the image as a single partition.
120116
121 """117 """
122 out, err = yield execute('sudo losetup -f --show %s' % image)118 out, err = execute('sudo losetup -f --show %s' % image)
123 if err:119 if err:
124 raise exception.Error('Could not attach image to loopback: %s' % err)120 raise exception.Error('Could not attach image to loopback: %s' % err)
125 device = out.strip()121 device = out.strip()
126 try:122 try:
127 if not partition is None:123 if not partition is None:
128 # create partition124 # create partition
129 out, err = yield execute('sudo kpartx -a %s' % device)125 out, err = execute('sudo kpartx -a %s' % device)
130 if err:126 if err:
131 raise exception.Error('Failed to load partition: %s' % err)127 raise exception.Error('Failed to load partition: %s' % err)
132 mapped_device = '/dev/mapper/%sp%s' % (device.split('/')[-1],128 mapped_device = '/dev/mapper/%sp%s' % (device.split('/')[-1],
133 partition)129 partition)
134 else:130 else:
135 mapped_device = device131 mapped_device = device
136 out, err = yield execute('sudo tune2fs -c 0 -i 0 %s' % mapped_device)132 out, err = execute('sudo tune2fs -c 0 -i 0 %s' % mapped_device)
137133
138 tmpdir = tempfile.mkdtemp()134 tmpdir = tempfile.mkdtemp()
139 try:135 try:
140 # mount loopback to dir136 # mount loopback to dir
141 out, err = yield execute(137 out, err = execute(
142 'sudo mount %s %s' % (mapped_device, tmpdir))138 'sudo mount %s %s' % (mapped_device, tmpdir))
143 if err:139 if err:
144 raise exception.Error('Failed to mount filesystem: %s' % err)140 raise exception.Error('Failed to mount filesystem: %s' % err)
@@ -146,24 +142,23 @@
146 try:142 try:
147 if key:143 if key:
148 # inject key file144 # inject key file
149 yield _inject_key_into_fs(key, tmpdir, execute=execute)145 _inject_key_into_fs(key, tmpdir, execute=execute)
150 if net:146 if net:
151 yield _inject_net_into_fs(net, tmpdir, execute=execute)147 _inject_net_into_fs(net, tmpdir, execute=execute)
152 finally:148 finally:
153 # unmount device149 # unmount device
154 yield execute('sudo umount %s' % mapped_device)150 execute('sudo umount %s' % mapped_device)
155 finally:151 finally:
156 # remove temporary directory152 # remove temporary directory
157 yield execute('rmdir %s' % tmpdir)153 execute('rmdir %s' % tmpdir)
158 if not partition is None:154 if not partition is None:
159 # remove partitions155 # remove partitions
160 yield execute('sudo kpartx -d %s' % device)156 execute('sudo kpartx -d %s' % device)
161 finally:157 finally:
162 # remove loopback158 # remove loopback
163 yield execute('sudo losetup -d %s' % device)159 execute('sudo losetup -d %s' % device)
164160
165161
166@defer.inlineCallbacks
167def _inject_key_into_fs(key, fs, execute=None):162def _inject_key_into_fs(key, fs, execute=None):
168 """Add the given public ssh key to root's authorized_keys.163 """Add the given public ssh key to root's authorized_keys.
169164
@@ -171,22 +166,21 @@
171 fs is the path to the base of the filesystem into which to inject the key.166 fs is the path to the base of the filesystem into which to inject the key.
172 """167 """
173 sshdir = os.path.join(os.path.join(fs, 'root'), '.ssh')168 sshdir = os.path.join(os.path.join(fs, 'root'), '.ssh')
174 yield execute('sudo mkdir -p %s' % sshdir) # existing dir doesn't matter169 execute('sudo mkdir -p %s' % sshdir) # existing dir doesn't matter
175 yield execute('sudo chown root %s' % sshdir)170 execute('sudo chown root %s' % sshdir)
176 yield execute('sudo chmod 700 %s' % sshdir)171 execute('sudo chmod 700 %s' % sshdir)
177 keyfile = os.path.join(sshdir, 'authorized_keys')172 keyfile = os.path.join(sshdir, 'authorized_keys')
178 yield execute('sudo tee -a %s' % keyfile, '\n' + key.strip() + '\n')173 execute('sudo tee -a %s' % keyfile, '\n' + key.strip() + '\n')
179174
180175
181@defer.inlineCallbacks
182def _inject_net_into_fs(net, fs, execute=None):176def _inject_net_into_fs(net, fs, execute=None):
183 """Inject /etc/network/interfaces into the filesystem rooted at fs.177 """Inject /etc/network/interfaces into the filesystem rooted at fs.
184178
185 net is the contents of /etc/network/interfaces.179 net is the contents of /etc/network/interfaces.
186 """180 """
187 netdir = os.path.join(os.path.join(fs, 'etc'), 'network')181 netdir = os.path.join(os.path.join(fs, 'etc'), 'network')
188 yield execute('sudo mkdir -p %s' % netdir) # existing dir doesn't matter182 execute('sudo mkdir -p %s' % netdir) # existing dir doesn't matter
189 yield execute('sudo chown root:root %s' % netdir)183 execute('sudo chown root:root %s' % netdir)
190 yield execute('sudo chmod 755 %s' % netdir)184 execute('sudo chmod 755 %s' % netdir)
191 netfile = os.path.join(netdir, 'interfaces')185 netfile = os.path.join(netdir, 'interfaces')
192 yield execute('sudo tee %s' % netfile, net)186 execute('sudo tee %s' % netfile, net)
193187
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2010-12-02 18:58:13 +0000
+++ nova/compute/manager.py 2010-12-16 20:49:10 +0000
@@ -37,8 +37,6 @@
37import datetime37import datetime
38import logging38import logging
3939
40from twisted.internet import defer
41
42from nova import exception40from nova import exception
43from nova import flags41from nova import flags
44from nova import manager42from nova import manager
@@ -78,13 +76,11 @@
78 state = power_state.NOSTATE76 state = power_state.NOSTATE
79 self.db.instance_set_state(context, instance_id, state)77 self.db.instance_set_state(context, instance_id, state)
8078
81 @defer.inlineCallbacks
82 @exception.wrap_exception79 @exception.wrap_exception
83 def refresh_security_group(self, context, security_group_id, **_kwargs):80 def refresh_security_group(self, context, security_group_id, **_kwargs):
84 """This call passes stright through to the virtualization driver."""81 """This call passes stright through to the virtualization driver."""
85 yield self.driver.refresh_security_group(security_group_id)82 self.driver.refresh_security_group(security_group_id)
8683
87 @defer.inlineCallbacks
88 @exception.wrap_exception84 @exception.wrap_exception
89 def run_instance(self, context, instance_id, **_kwargs):85 def run_instance(self, context, instance_id, **_kwargs):
90 """Launch a new instance with specified options."""86 """Launch a new instance with specified options."""
@@ -105,7 +101,7 @@
105 'spawning')101 'spawning')
106102
107 try:103 try:
108 yield self.driver.spawn(instance_ref)104 self.driver.spawn(instance_ref)
109 now = datetime.datetime.utcnow()105 now = datetime.datetime.utcnow()
110 self.db.instance_update(context,106 self.db.instance_update(context,
111 instance_id,107 instance_id,
@@ -119,7 +115,6 @@
119115
120 self._update_state(context, instance_id)116 self._update_state(context, instance_id)
121117
122 @defer.inlineCallbacks
123 @exception.wrap_exception118 @exception.wrap_exception
124 def terminate_instance(self, context, instance_id):119 def terminate_instance(self, context, instance_id):
125 """Terminate an instance on this machine."""120 """Terminate an instance on this machine."""
@@ -134,12 +129,11 @@
134 self.db.instance_destroy(context, instance_id)129 self.db.instance_destroy(context, instance_id)
135 raise exception.Error('trying to destroy already destroyed'130 raise exception.Error('trying to destroy already destroyed'
136 ' instance: %s' % instance_id)131 ' instance: %s' % instance_id)
137 yield self.driver.destroy(instance_ref)132 self.driver.destroy(instance_ref)
138133
139 # TODO(ja): should we keep it in a terminated state for a bit?134 # TODO(ja): should we keep it in a terminated state for a bit?
140 self.db.instance_destroy(context, instance_id)135 self.db.instance_destroy(context, instance_id)
141136
142 @defer.inlineCallbacks
143 @exception.wrap_exception137 @exception.wrap_exception
144 def reboot_instance(self, context, instance_id):138 def reboot_instance(self, context, instance_id):
145 """Reboot an instance on this server."""139 """Reboot an instance on this server."""
@@ -159,10 +153,9 @@
159 instance_id,153 instance_id,
160 power_state.NOSTATE,154 power_state.NOSTATE,
161 'rebooting')155 'rebooting')
162 yield self.driver.reboot(instance_ref)156 self.driver.reboot(instance_ref)
163 self._update_state(context, instance_id)157 self._update_state(context, instance_id)
164158
165 @defer.inlineCallbacks
166 @exception.wrap_exception159 @exception.wrap_exception
167 def rescue_instance(self, context, instance_id):160 def rescue_instance(self, context, instance_id):
168 """Rescue an instance on this server."""161 """Rescue an instance on this server."""
@@ -175,10 +168,9 @@
175 instance_id,168 instance_id,
176 power_state.NOSTATE,169 power_state.NOSTATE,
177 'rescuing')170 'rescuing')
178 yield self.driver.rescue(instance_ref)171 self.driver.rescue(instance_ref)
179 self._update_state(context, instance_id)172 self._update_state(context, instance_id)
180173
181 @defer.inlineCallbacks
182 @exception.wrap_exception174 @exception.wrap_exception
183 def unrescue_instance(self, context, instance_id):175 def unrescue_instance(self, context, instance_id):
184 """Rescue an instance on this server."""176 """Rescue an instance on this server."""
@@ -191,7 +183,7 @@
191 instance_id,183 instance_id,
192 power_state.NOSTATE,184 power_state.NOSTATE,
193 'unrescuing')185 'unrescuing')
194 yield self.driver.unrescue(instance_ref)186 self.driver.unrescue(instance_ref)
195 self._update_state(context, instance_id)187 self._update_state(context, instance_id)
196188
197 @exception.wrap_exception189 @exception.wrap_exception
@@ -203,7 +195,6 @@
203195
204 return self.driver.get_console_output(instance_ref)196 return self.driver.get_console_output(instance_ref)
205197
206 @defer.inlineCallbacks
207 @exception.wrap_exception198 @exception.wrap_exception
208 def attach_volume(self, context, instance_id, volume_id, mountpoint):199 def attach_volume(self, context, instance_id, volume_id, mountpoint):
209 """Attach a volume to an instance."""200 """Attach a volume to an instance."""
@@ -211,12 +202,12 @@
211 logging.debug("instance %s: attaching volume %s to %s", instance_id,202 logging.debug("instance %s: attaching volume %s to %s", instance_id,
212 volume_id, mountpoint)203 volume_id, mountpoint)
213 instance_ref = self.db.instance_get(context, instance_id)204 instance_ref = self.db.instance_get(context, instance_id)
214 dev_path = yield self.volume_manager.setup_compute_volume(context,205 dev_path = self.volume_manager.setup_compute_volume(context,
215 volume_id)206 volume_id)
216 try:207 try:
217 yield self.driver.attach_volume(instance_ref['name'],208 self.driver.attach_volume(instance_ref['name'],
218 dev_path,209 dev_path,
219 mountpoint)210 mountpoint)
220 self.db.volume_attached(context,211 self.db.volume_attached(context,
221 volume_id,212 volume_id,
222 instance_id,213 instance_id,
@@ -227,12 +218,12 @@
227 # ecxception below.218 # ecxception below.
228 logging.exception("instance %s: attach failed %s, removing",219 logging.exception("instance %s: attach failed %s, removing",
229 instance_id, mountpoint)220 instance_id, mountpoint)
230 yield self.volume_manager.remove_compute_volume(context,221 self.volume_manager.remove_compute_volume(context,
231 volume_id)222 volume_id)
232 raise exc223 raise exc
233 defer.returnValue(True)224
234225 return True
235 @defer.inlineCallbacks226
236 @exception.wrap_exception227 @exception.wrap_exception
237 def detach_volume(self, context, instance_id, volume_id):228 def detach_volume(self, context, instance_id, volume_id):
238 """Detach a volume from an instance."""229 """Detach a volume from an instance."""
@@ -246,8 +237,8 @@
246 logging.warn("Detaching volume from unknown instance %s",237 logging.warn("Detaching volume from unknown instance %s",
247 instance_ref['name'])238 instance_ref['name'])
248 else:239 else:
249 yield self.driver.detach_volume(instance_ref['name'],240 self.driver.detach_volume(instance_ref['name'],
250 volume_ref['mountpoint'])241 volume_ref['mountpoint'])
251 yield self.volume_manager.remove_compute_volume(context, volume_id)242 self.volume_manager.remove_compute_volume(context, volume_id)
252 self.db.volume_detached(context, volume_id)243 self.db.volume_detached(context, volume_id)
253 defer.returnValue(True)244 return True
254245
=== modified file 'nova/flags.py'
--- nova/flags.py 2010-12-03 20:21:18 +0000
+++ nova/flags.py 2010-12-16 20:49:10 +0000
@@ -159,6 +159,7 @@
159 return str(val)159 return str(val)
160 raise KeyError(name)160 raise KeyError(name)
161161
162
162FLAGS = FlagValues()163FLAGS = FlagValues()
163gflags.FLAGS = FLAGS164gflags.FLAGS = FLAGS
164gflags.DEFINE_flag(gflags.HelpFlag(), FLAGS)165gflags.DEFINE_flag(gflags.HelpFlag(), FLAGS)
@@ -183,6 +184,12 @@
183DEFINE_spaceseplist = _wrapper(gflags.DEFINE_spaceseplist)184DEFINE_spaceseplist = _wrapper(gflags.DEFINE_spaceseplist)
184DEFINE_multistring = _wrapper(gflags.DEFINE_multistring)185DEFINE_multistring = _wrapper(gflags.DEFINE_multistring)
185DEFINE_multi_int = _wrapper(gflags.DEFINE_multi_int)186DEFINE_multi_int = _wrapper(gflags.DEFINE_multi_int)
187DEFINE_flag = _wrapper(gflags.DEFINE_flag)
188
189
190HelpFlag = gflags.HelpFlag
191HelpshortFlag = gflags.HelpshortFlag
192HelpXMLFlag = gflags.HelpXMLFlag
186193
187194
188def DECLARE(name, module_string, flag_values=FLAGS):195def DECLARE(name, module_string, flag_values=FLAGS):
189196
=== modified file 'nova/manager.py'
--- nova/manager.py 2010-12-01 17:24:39 +0000
+++ nova/manager.py 2010-12-16 20:49:10 +0000
@@ -55,7 +55,6 @@
55from nova import flags55from nova import flags
56from nova.db import base56from nova.db import base
5757
58from twisted.internet import defer
5958
60FLAGS = flags.FLAGS59FLAGS = flags.FLAGS
6160
@@ -67,10 +66,9 @@
67 self.host = host66 self.host = host
68 super(Manager, self).__init__(db_driver)67 super(Manager, self).__init__(db_driver)
6968
70 @defer.inlineCallbacks
71 def periodic_tasks(self, context=None):69 def periodic_tasks(self, context=None):
72 """Tasks to be run at a periodic interval"""70 """Tasks to be run at a periodic interval"""
73 yield71 pass
7472
75 def init_host(self):73 def init_host(self):
76 """Do any initialization that needs to be run if this is a standalone74 """Do any initialization that needs to be run if this is a standalone
7775
=== modified file 'nova/network/manager.py'
--- nova/network/manager.py 2010-11-23 23:56:26 +0000
+++ nova/network/manager.py 2010-12-16 20:49:10 +0000
@@ -49,7 +49,6 @@
49import math49import math
5050
51import IPy51import IPy
52from twisted.internet import defer
5352
54from nova import context53from nova import context
55from nova import db54from nova import db
@@ -399,10 +398,9 @@
399 instances in its subnet.398 instances in its subnet.
400 """399 """
401400
402 @defer.inlineCallbacks
403 def periodic_tasks(self, context=None):401 def periodic_tasks(self, context=None):
404 """Tasks to be run at a periodic interval."""402 """Tasks to be run at a periodic interval."""
405 yield super(VlanManager, self).periodic_tasks(context)403 super(VlanManager, self).periodic_tasks(context)
406 now = datetime.datetime.utcnow()404 now = datetime.datetime.utcnow()
407 timeout = FLAGS.fixed_ip_disassociate_timeout405 timeout = FLAGS.fixed_ip_disassociate_timeout
408 time = now - datetime.timedelta(seconds=timeout)406 time = now - datetime.timedelta(seconds=timeout)
409407
=== modified file 'nova/objectstore/image.py'
--- nova/objectstore/image.py 2010-12-14 00:20:27 +0000
+++ nova/objectstore/image.py 2010-12-16 20:49:10 +0000
@@ -267,6 +267,7 @@
267 if err:267 if err:
268 raise exception.Error("Failed to decrypt initialization "268 raise exception.Error("Failed to decrypt initialization "
269 "vector: %s" % err)269 "vector: %s" % err)
270
270 _out, err = utils.execute(271 _out, err = utils.execute(
271 'openssl enc -d -aes-128-cbc -in %s -K %s -iv %s -out %s'272 'openssl enc -d -aes-128-cbc -in %s -K %s -iv %s -out %s'
272 % (encrypted_filename, key, iv, decrypted_filename),273 % (encrypted_filename, key, iv, decrypted_filename),
273274
=== removed file 'nova/process.py'
--- nova/process.py 2010-10-21 18:49:51 +0000
+++ nova/process.py 1970-01-01 00:00:00 +0000
@@ -1,209 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# Copyright 2010 FathomDB Inc.
6# All Rights Reserved.
7#
8# Licensed under the Apache License, Version 2.0 (the "License"); you may
9# not use this file except in compliance with the License. You may obtain
10# a copy of the License at
11#
12# http://www.apache.org/licenses/LICENSE-2.0
13#
14# Unless required by applicable law or agreed to in writing, software
15# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
16# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
17# License for the specific language governing permissions and limitations
18# under the License.
19
20"""
21Process pool using twisted threading
22"""
23
24import logging
25import StringIO
26
27from twisted.internet import defer
28from twisted.internet import error
29from twisted.internet import protocol
30from twisted.internet import reactor
31
32from nova import flags
33from nova.exception import ProcessExecutionError
34
35FLAGS = flags.FLAGS
36flags.DEFINE_integer('process_pool_size', 4,
37 'Number of processes to use in the process pool')
38
39
40# This is based on _BackRelay from twister.internal.utils, but modified to
41# capture both stdout and stderr, without odd stderr handling, and also to
42# handle stdin
43class BackRelayWithInput(protocol.ProcessProtocol):
44 """
45 Trivial protocol for communicating with a process and turning its output
46 into the result of a L{Deferred}.
47
48 @ivar deferred: A L{Deferred} which will be called back with all of stdout
49 and all of stderr as well (as a tuple). C{terminate_on_stderr} is true
50 and any bytes are received over stderr, this will fire with an
51 L{_ProcessExecutionError} instance and the attribute will be set to
52 C{None}.
53
54 @ivar onProcessEnded: If C{terminate_on_stderr} is false and bytes are
55 received over stderr, this attribute will refer to a L{Deferred} which
56 will be called back when the process ends. This C{Deferred} is also
57 associated with the L{_ProcessExecutionError} which C{deferred} fires
58 with earlier in this case so that users can determine when the process
59 has actually ended, in addition to knowing when bytes have been
60 received via stderr.
61 """
62
63 def __init__(self, deferred, cmd, started_deferred=None,
64 terminate_on_stderr=False, check_exit_code=True,
65 process_input=None):
66 self.deferred = deferred
67 self.cmd = cmd
68 self.stdout = StringIO.StringIO()
69 self.stderr = StringIO.StringIO()
70 self.started_deferred = started_deferred
71 self.terminate_on_stderr = terminate_on_stderr
72 self.check_exit_code = check_exit_code
73 self.process_input = process_input
74 self.on_process_ended = None
75
76 def _build_execution_error(self, exit_code=None):
77 return ProcessExecutionError(cmd=self.cmd,
78 exit_code=exit_code,
79 stdout=self.stdout.getvalue(),
80 stderr=self.stderr.getvalue())
81
82 def errReceived(self, text):
83 self.stderr.write(text)
84 if self.terminate_on_stderr and (self.deferred is not None):
85 self.on_process_ended = defer.Deferred()
86 self.deferred.errback(self._build_execution_error())
87 self.deferred = None
88 self.transport.loseConnection()
89
90 def outReceived(self, text):
91 self.stdout.write(text)
92
93 def processEnded(self, reason):
94 if self.deferred is not None:
95 stdout, stderr = self.stdout.getvalue(), self.stderr.getvalue()
96 exit_code = reason.value.exitCode
97 if self.check_exit_code and exit_code != 0:
98 self.deferred.errback(self._build_execution_error(exit_code))
99 else:
100 try:
101 if self.check_exit_code:
102 reason.trap(error.ProcessDone)
103 self.deferred.callback((stdout, stderr))
104 except:
105 # NOTE(justinsb): This logic is a little suspicious to me.
106 # If the callback throws an exception, then errback will
107 # be called also. However, this is what the unit tests
108 # test for.
109 exec_error = self._build_execution_error(exit_code)
110 self.deferred.errback(exec_error)
111 elif self.on_process_ended is not None:
112 self.on_process_ended.errback(reason)
113
114 def connectionMade(self):
115 if self.started_deferred:
116 self.started_deferred.callback(self)
117 if self.process_input:
118 self.transport.write(str(self.process_input))
119 self.transport.closeStdin()
120
121
122def get_process_output(executable, args=None, env=None, path=None,
123 process_reactor=None, check_exit_code=True,
124 process_input=None, started_deferred=None,
125 terminate_on_stderr=False):
126 if process_reactor is None:
127 process_reactor = reactor
128 args = args and args or ()
129 env = env and env and {}
130 deferred = defer.Deferred()
131 cmd = executable
132 if args:
133 cmd = " ".join([cmd] + args)
134 logging.debug("Running cmd: %s", cmd)
135 process_handler = BackRelayWithInput(
136 deferred,
137 cmd,
138 started_deferred=started_deferred,
139 check_exit_code=check_exit_code,
140 process_input=process_input,
141 terminate_on_stderr=terminate_on_stderr)
142 # NOTE(vish): commands come in as unicode, but self.executes needs
143 # strings or process.spawn raises a deprecation warning
144 executable = str(executable)
145 if not args is None:
146 args = [str(x) for x in args]
147 process_reactor.spawnProcess(process_handler, executable,
148 (executable,) + tuple(args), env, path)
149 return deferred
150
151
152class ProcessPool(object):
153 """ A simple process pool implementation using Twisted's Process bits.
154
155 This is pretty basic right now, but hopefully the API will be the correct
156 one so that it can be optimized later.
157 """
158 def __init__(self, size=None):
159 self.size = size and size or FLAGS.process_pool_size
160 self._pool = defer.DeferredSemaphore(self.size)
161
162 def simple_execute(self, cmd, **kw):
163 """ Weak emulation of the old utils.execute() function.
164
165 This only exists as a way to quickly move old execute methods to
166 this new style of code.
167
168 NOTE(termie): This will break on args with spaces in them.
169 """
170 parsed = cmd.split(' ')
171 executable, args = parsed[0], parsed[1:]
172 return self.execute(executable, args, **kw)
173
174 def execute(self, *args, **kw):
175 deferred = self._pool.acquire()
176
177 def _associate_process(proto):
178 deferred.process = proto.transport
179 return proto.transport
180
181 started = defer.Deferred()
182 started.addCallback(_associate_process)
183 kw.setdefault('started_deferred', started)
184
185 deferred.process = None
186 deferred.started = started
187
188 deferred.addCallback(lambda _: get_process_output(*args, **kw))
189 deferred.addBoth(self._release)
190 return deferred
191
192 def _release(self, retval=None):
193 self._pool.release()
194 return retval
195
196
197class SharedPool(object):
198 _instance = None
199
200 def __init__(self):
201 if SharedPool._instance is None:
202 self.__class__._instance = ProcessPool()
203
204 def __getattr__(self, key):
205 return getattr(self._instance, key)
206
207
208def simple_execute(cmd, **kwargs):
209 return SharedPool().simple_execute(cmd, **kwargs)
2100
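
With nova/process.py gone, callers that used ProcessPool.simple_execute() move to a plain blocking call. As a point of comparison only, here is a minimal sketch of what such a blocking equivalent could look like; run_command and CommandError are hypothetical names used for illustration and are not part of this branch.

    # Hypothetical sketch only -- not the branch's actual utils.execute().
    import subprocess


    class CommandError(Exception):
        pass


    def run_command(cmd, check_exit_code=True, process_input=None):
        # NOTE: like the old simple_execute(), this naive split breaks on
        # arguments that contain spaces.
        args = cmd.split(' ')
        proc = subprocess.Popen(args,
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        stdout, stderr = proc.communicate(process_input)
        if check_exit_code and proc.returncode != 0:
            raise CommandError('%r failed (%d): %s'
                               % (cmd, proc.returncode, stderr))
        return stdout, stderr

Note that a plain subprocess call like this blocks the whole process under eventlet unless it is dispatched through an eventlet-aware mechanism; the sketch only shows the shape of the blocking interface that replaces the deferred-returning pool.
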
=== modified file 'nova/rpc.py'
--- nova/rpc.py 2010-11-22 22:48:44 +0000
+++ nova/rpc.py 2010-12-16 20:49:10 +0000
@@ -25,18 +25,18 @@
25import logging25import logging
26import sys26import sys
27import time27import time
28import traceback
28import uuid29import uuid
2930
30from carrot import connection as carrot_connection31from carrot import connection as carrot_connection
31from carrot import messaging32from carrot import messaging
32from eventlet import greenthread33from eventlet import greenthread
33from twisted.internet import defer
34from twisted.internet import task
3534
35from nova import context
36from nova import exception36from nova import exception
37from nova import fakerabbit37from nova import fakerabbit
38from nova import flags38from nova import flags
39from nova import context39from nova import utils
4040
4141
42FLAGS = flags.FLAGS42FLAGS = flags.FLAGS
@@ -128,17 +128,9 @@
128128
129 def attach_to_eventlet(self):129 def attach_to_eventlet(self):
130 """Only needed for unit tests!"""130 """Only needed for unit tests!"""
131 def fetch_repeatedly():131 timer = utils.LoopingCall(self.fetch, enable_callbacks=True)
132 while True:132 timer.start(0.1)
133 self.fetch(enable_callbacks=True)133 return timer
134 greenthread.sleep(0.1)
135 greenthread.spawn(fetch_repeatedly)
136
137 def attach_to_twisted(self):
138 """Attach a callback to twisted that fires 10 times a second"""
139 loop = task.LoopingCall(self.fetch, enable_callbacks=True)
140 loop.start(interval=0.1)
141 return loop
142134
143135
144class Publisher(messaging.Publisher):136class Publisher(messaging.Publisher):
@@ -196,11 +188,13 @@
196 node_func = getattr(self.proxy, str(method))188 node_func = getattr(self.proxy, str(method))
197 node_args = dict((str(k), v) for k, v in args.iteritems())189 node_args = dict((str(k), v) for k, v in args.iteritems())
198 # NOTE(vish): magic is fun!190 # NOTE(vish): magic is fun!
199 # pylint: disable-msg=W0142191 try:
200 d = defer.maybeDeferred(node_func, context=ctxt, **node_args)192 rval = node_func(context=ctxt, **node_args)
201 if msg_id:193 if msg_id:
202 d.addCallback(lambda rval: msg_reply(msg_id, rval, None))194 msg_reply(msg_id, rval, None)
203 d.addErrback(lambda e: msg_reply(msg_id, None, e))195 except Exception as e:
196 if msg_id:
197 msg_reply(msg_id, None, sys.exc_info())
204 return198 return
205199
206200
@@ -242,13 +236,15 @@
242def msg_reply(msg_id, reply=None, failure=None):236def msg_reply(msg_id, reply=None, failure=None):
243 """Sends a reply or an error on the channel signified by msg_id237 """Sends a reply or an error on the channel signified by msg_id
244238
245 failure should be a twisted failure object"""239 failure should be a sys.exc_info() tuple.
240
241 """
246 if failure:242 if failure:
247 message = failure.getErrorMessage()243 message = str(failure[1])
248 traceback = failure.getTraceback()244 tb = traceback.format_exception(*failure)
249 logging.error("Returning exception %s to caller", message)245 logging.error("Returning exception %s to caller", message)
250 logging.error(traceback)246 logging.error(tb)
251 failure = (failure.type.__name__, str(failure.value), traceback)247 failure = (failure[0].__name__, str(failure[1]), tb)
252 conn = Connection.instance()248 conn = Connection.instance()
253 publisher = DirectPublisher(connection=conn, msg_id=msg_id)249 publisher = DirectPublisher(connection=conn, msg_id=msg_id)
254 try:250 try:
@@ -313,7 +309,6 @@
313 _pack_context(msg, context)309 _pack_context(msg, context)
314310
315 class WaitMessage(object):311 class WaitMessage(object):
316
317 def __call__(self, data, message):312 def __call__(self, data, message):
318 """Acks message and sets result."""313 """Acks message and sets result."""
319 message.ack()314 message.ack()
@@ -337,41 +332,15 @@
337 except StopIteration:332 except StopIteration:
338 pass333 pass
339 consumer.close()334 consumer.close()
335 # NOTE(termie): this is a little bit of a change from the original
336 # non-eventlet code where returning a Failure
337 # instance from a deferred call is very similar to
338 # raising an exception
339 if isinstance(wait_msg.result, Exception):
340 raise wait_msg.result
340 return wait_msg.result341 return wait_msg.result
341342
342343
343def call_twisted(context, topic, msg):
344 """Sends a message on a topic and wait for a response"""
345 LOG.debug("Making asynchronous call...")
346 msg_id = uuid.uuid4().hex
347 msg.update({'_msg_id': msg_id})
348 LOG.debug("MSG_ID is %s" % (msg_id))
349 _pack_context(msg, context)
350
351 conn = Connection.instance()
352 d = defer.Deferred()
353 consumer = DirectConsumer(connection=conn, msg_id=msg_id)
354
355 def deferred_receive(data, message):
356 """Acks message and callbacks or errbacks"""
357 message.ack()
358 if data['failure']:
359 return d.errback(RemoteError(*data['failure']))
360 else:
361 return d.callback(data['result'])
362
363 consumer.register_callback(deferred_receive)
364 injected = consumer.attach_to_twisted()
365
366 # clean up after the injected listened and return x
367 d.addCallback(lambda x: injected.stop() and x or x)
368
369 publisher = TopicPublisher(connection=conn, topic=topic)
370 publisher.send(msg)
371 publisher.close()
372 return d
373
374
375def cast(context, topic, msg):344def cast(context, topic, msg):
376 """Sends a message on a topic without waiting for a response"""345 """Sends a message on a topic without waiting for a response"""
377 LOG.debug("Making asynchronous cast...")346 LOG.debug("Making asynchronous cast...")
378347
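
The core change in rpc.py is that the adapter consumer now calls the proxy method synchronously, and a failure travels back as a sys.exc_info() tuple instead of a twisted Failure. Roughly, the packing that msg_reply() performs on that tuple looks like the sketch below; pack_failure is an illustrative name only, the real code does this inline.

    import sys
    import traceback


    def pack_failure(exc_info):
        """Turn a sys.exc_info() tuple into the wire-level failure triple."""
        exc_type, exc_value, exc_tb = exc_info
        tb_text = traceback.format_exception(exc_type, exc_value, exc_tb)
        # (exception class name, str() of the value, formatted traceback)
        return (exc_type.__name__, str(exc_value), tb_text)


    try:
        raise ValueError('boom')
    except Exception:
        print pack_failure(sys.exc_info())

On the caller side, rpc.call() re-raises such a failure as rpc.RemoteError, which is what the updated rpc unit tests further down now assert with assertRaises.
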
=== removed file 'nova/server.py'
--- nova/server.py 2010-11-23 20:52:00 +0000
+++ nova/server.py 1970-01-01 00:00:00 +0000
@@ -1,151 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20Base functionality for nova daemons - gradually being replaced with twistd.py.
21"""
22
23import daemon
24from daemon import pidlockfile
25import logging
26import logging.handlers
27import os
28import signal
29import sys
30import time
31
32from nova import flags
33
34
35FLAGS = flags.FLAGS
36flags.DEFINE_bool('daemonize', False, 'daemonize this process')
37# NOTE(termie): right now I am defaulting to using syslog when we daemonize
38# it may be better to do something else -shrug-
39# NOTE(Devin): I think we should let each process have its own log file
40# and put it in /var/logs/nova/(appname).log
41# This makes debugging much easier and cuts down on sys log
42# clutter.
43flags.DEFINE_bool('use_syslog', True, 'output to syslog when daemonizing')
44flags.DEFINE_string('logfile', None, 'log file to output to')
45flags.DEFINE_string('logdir', None, 'directory to keep log files in '
46 '(will be prepended to $logfile)')
47flags.DEFINE_string('pidfile', None, 'pid file to output to')
48flags.DEFINE_string('working_directory', './', 'working directory...')
49flags.DEFINE_integer('uid', os.getuid(), 'uid under which to run')
50flags.DEFINE_integer('gid', os.getgid(), 'gid under which to run')
51
52
53def stop(pidfile):
54 """
55 Stop the daemon
56 """
57 # Get the pid from the pidfile
58 try:
59 pid = int(open(pidfile, 'r').read().strip())
60 except IOError:
61 message = "pidfile %s does not exist. Daemon not running?\n"
62 sys.stderr.write(message % pidfile)
63 return
64
65 # Try killing the daemon process
66 try:
67 while 1:
68 os.kill(pid, signal.SIGTERM)
69 time.sleep(0.1)
70 except OSError, err:
71 err = str(err)
72 if err.find("No such process") > 0:
73 if os.path.exists(pidfile):
74 os.remove(pidfile)
75 else:
76 print str(err)
77 sys.exit(1)
78
79
80def serve(name, main):
81 """Controller for server"""
82 argv = FLAGS(sys.argv)
83
84 if not FLAGS.pidfile:
85 FLAGS.pidfile = '%s.pid' % name
86
87 logging.debug("Full set of FLAGS: \n\n\n")
88 for flag in FLAGS:
89 logging.debug("%s : %s", flag, FLAGS.get(flag, None))
90
91 action = 'start'
92 if len(argv) > 1:
93 action = argv.pop()
94
95 if action == 'stop':
96 stop(FLAGS.pidfile)
97 sys.exit()
98 elif action == 'restart':
99 stop(FLAGS.pidfile)
100 elif action == 'start':
101 pass
102 else:
103 print 'usage: %s [options] [start|stop|restart]' % argv[0]
104 sys.exit(1)
105 daemonize(argv, name, main)
106
107
108def daemonize(args, name, main):
109 """Does the work of daemonizing the process"""
110 logging.getLogger('amqplib').setLevel(logging.WARN)
111 files_to_keep = []
112 if FLAGS.daemonize:
113 logger = logging.getLogger()
114 formatter = logging.Formatter(
115 name + '(%(name)s): %(levelname)s %(message)s')
116 if FLAGS.use_syslog and not FLAGS.logfile:
117 syslog = logging.handlers.SysLogHandler(address='/dev/log')
118 syslog.setFormatter(formatter)
119 logger.addHandler(syslog)
120 files_to_keep.append(syslog.socket)
121 else:
122 if not FLAGS.logfile:
123 FLAGS.logfile = '%s.log' % name
124 if FLAGS.logdir:
125 FLAGS.logfile = os.path.join(FLAGS.logdir, FLAGS.logfile)
126 logfile = logging.FileHandler(FLAGS.logfile)
127 logfile.setFormatter(formatter)
128 logger.addHandler(logfile)
129 files_to_keep.append(logfile.stream)
130 stdin, stdout, stderr = None, None, None
131 else:
132 stdin, stdout, stderr = sys.stdin, sys.stdout, sys.stderr
133
134 if FLAGS.verbose:
135 logging.getLogger().setLevel(logging.DEBUG)
136 else:
137 logging.getLogger().setLevel(logging.WARNING)
138
139 with daemon.DaemonContext(
140 detach_process=FLAGS.daemonize,
141 working_directory=FLAGS.working_directory,
142 pidfile=pidlockfile.TimeoutPIDLockFile(FLAGS.pidfile,
143 acquire_timeout=1,
144 threaded=False),
145 stdin=stdin,
146 stdout=stdout,
147 stderr=stderr,
148 uid=FLAGS.uid,
149 gid=FLAGS.gid,
150 files_preserve=files_to_keep):
151 main(args)
1520

=== added file 'nova/service.py'
--- nova/service.py 1970-01-01 00:00:00 +0000
+++ nova/service.py 2010-12-16 20:49:10 +0000
@@ -0,0 +1,234 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20Generic Node baseclass for all workers that run on hosts
21"""
22
23import inspect
24import logging
25import os
26import sys
27
28from eventlet import event
29from eventlet import greenthread
30from eventlet import greenpool
31
32from nova import context
33from nova import db
34from nova import exception
35from nova import flags
36from nova import rpc
37from nova import utils
38
39
40FLAGS = flags.FLAGS
41flags.DEFINE_integer('report_interval', 10,
42 'seconds between nodes reporting state to datastore',
43 lower_bound=1)
44
45flags.DEFINE_integer('periodic_interval', 60,
46 'seconds between running periodic tasks',
47 lower_bound=1)
48
49flags.DEFINE_string('pidfile', None,
50 'pidfile to use for this service')
51
52
53flags.DEFINE_flag(flags.HelpFlag())
54flags.DEFINE_flag(flags.HelpshortFlag())
55flags.DEFINE_flag(flags.HelpXMLFlag())
56
57
58class Service(object):
59 """Base class for workers that run on hosts."""
60
61 def __init__(self, host, binary, topic, manager, report_interval=None,
62 periodic_interval=None, *args, **kwargs):
63 self.host = host
64 self.binary = binary
65 self.topic = topic
66 self.manager_class_name = manager
67 self.report_interval = report_interval
68 self.periodic_interval = periodic_interval
69 super(Service, self).__init__(*args, **kwargs)
70 self.saved_args, self.saved_kwargs = args, kwargs
71 self.timers = []
72
73 def start(self):
74 manager_class = utils.import_class(self.manager_class_name)
75 self.manager = manager_class(host=self.host, *self.saved_args,
76 **self.saved_kwargs)
77 self.manager.init_host()
78 self.model_disconnected = False
79 ctxt = context.get_admin_context()
80 try:
81 service_ref = db.service_get_by_args(ctxt,
82 self.host,
83 self.binary)
84 self.service_id = service_ref['id']
85 except exception.NotFound:
86 self._create_service_ref(ctxt)
87
88 conn1 = rpc.Connection.instance(new=True)
89 conn2 = rpc.Connection.instance(new=True)
90 if self.report_interval:
91 consumer_all = rpc.AdapterConsumer(
92 connection=conn1,
93 topic=self.topic,
94 proxy=self)
95 consumer_node = rpc.AdapterConsumer(
96 connection=conn2,
97 topic='%s.%s' % (self.topic, self.host),
98 proxy=self)
99
100 self.timers.append(consumer_all.attach_to_eventlet())
101 self.timers.append(consumer_node.attach_to_eventlet())
102
103 pulse = utils.LoopingCall(self.report_state)
104 pulse.start(interval=self.report_interval, now=False)
105 self.timers.append(pulse)
106
107 if self.periodic_interval:
108 periodic = utils.LoopingCall(self.periodic_tasks)
109 periodic.start(interval=self.periodic_interval, now=False)
110 self.timers.append(periodic)
111
112 def _create_service_ref(self, context):
113 service_ref = db.service_create(context,
114 {'host': self.host,
115 'binary': self.binary,
116 'topic': self.topic,
117 'report_count': 0})
118 self.service_id = service_ref['id']
119
120 def __getattr__(self, key):
121 manager = self.__dict__.get('manager', None)
122 return getattr(manager, key)
123
124 @classmethod
125 def create(cls,
126 host=None,
127 binary=None,
128 topic=None,
129 manager=None,
130 report_interval=None,
131 periodic_interval=None):
132 """Instantiates class and passes back application object.
133
134 Args:
135 host, defaults to FLAGS.host
136 binary, defaults to basename of executable
137 topic, defaults to bin_name - "nova-" part
138 manager, defaults to FLAGS.<topic>_manager
139 report_interval, defaults to FLAGS.report_interval
140 periodic_interval, defaults to FLAGS.periodic_interval
141 """
142 if not host:
143 host = FLAGS.host
144 if not binary:
145 binary = os.path.basename(inspect.stack()[-1][1])
146 if not topic:
147 topic = binary.rpartition("nova-")[2]
148 if not manager:
149 manager = FLAGS.get('%s_manager' % topic, None)
150 if not report_interval:
151 report_interval = FLAGS.report_interval
152 if not periodic_interval:
153 periodic_interval = FLAGS.periodic_interval
154 logging.warn("Starting %s node", topic)
155 service_obj = cls(host, binary, topic, manager,
156 report_interval, periodic_interval)
157
158 return service_obj
159
160 def kill(self):
161 """Destroy the service object in the datastore"""
162 self.stop()
163 try:
164 db.service_destroy(context.get_admin_context(), self.service_id)
165 except exception.NotFound:
166 logging.warn("Service killed that has no database entry")
167
168 def stop(self):
169 for x in self.timers:
170 try:
171 x.stop()
172 except Exception:
173 pass
174 self.timers = []
175
176 def periodic_tasks(self):
177 """Tasks to be run at a periodic interval"""
178 self.manager.periodic_tasks(context.get_admin_context())
179
180 def report_state(self):
181 """Update the state of this service in the datastore."""
182 ctxt = context.get_admin_context()
183 try:
184 try:
185 service_ref = db.service_get(ctxt, self.service_id)
186 except exception.NotFound:
187 logging.debug("The service database object disappeared, "
188 "Recreating it.")
189 self._create_service_ref(ctxt)
190 service_ref = db.service_get(ctxt, self.service_id)
191
192 db.service_update(ctxt,
193 self.service_id,
194 {'report_count': service_ref['report_count'] + 1})
195
196 # TODO(termie): make this pattern be more elegant.
197 if getattr(self, "model_disconnected", False):
198 self.model_disconnected = False
199 logging.error("Recovered model server connection!")
200
201 # TODO(vish): this should probably only catch connection errors
202 except Exception: # pylint: disable-msg=W0702
203 if not getattr(self, "model_disconnected", False):
204 self.model_disconnected = True
205 logging.exception("model server went away")
206
207
208def serve(*services):
209 argv = FLAGS(sys.argv)
210
211 if not services:
212 services = [Service.create()]
213
214 name = '_'.join(x.binary for x in services)
215 logging.debug("Serving %s" % name)
216
217 logging.getLogger('amqplib').setLevel(logging.WARN)
218
219 if FLAGS.verbose:
220 logging.getLogger().setLevel(logging.DEBUG)
221 else:
222 logging.getLogger().setLevel(logging.WARNING)
223
224 logging.debug("Full set of FLAGS:")
225 for flag in FLAGS:
226 logging.debug("%s : %s" % (flag, FLAGS.get(flag, None)))
227
228 for x in services:
229 x.start()
230
231
232def wait():
233 while True:
234 greenthread.sleep(5)
0235
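
A quick sketch of how a worker binary drives the new module; this is illustrative and not the exact contents of bin/nova-compute in this branch (the real binaries may add flagfile handling and other eventlet setup on top of this).

    #!/usr/bin/env python
    # Illustrative only; see bin/nova-compute in this branch for the real thing.
    from nova import service

    if __name__ == '__main__':
        compute = service.Service.create(binary='nova-compute')
        service.serve(compute)   # parses flags, sets log levels, start()s it
        service.wait()           # keeps the process alive for the greenthreads
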
=== removed file 'nova/service.py'
--- nova/service.py 2010-11-15 19:43:50 +0000
+++ nova/service.py 1970-01-01 00:00:00 +0000
@@ -1,195 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""
20A service is a very thin wrapper around a Manager object. It exposes the
21manager's public methods to other components of the system via rpc. It will
22report state periodically to the database and is responsible for initiating
23any periodic tasks that need to be executed on a given host.
24
25This module contains Service, a generic baseclass for all workers.
26"""
27
28import inspect
29import logging
30import os
31
32from twisted.internet import defer
33from twisted.internet import task
34from twisted.application import service
35
36from nova import context
37from nova import db
38from nova import exception
39from nova import flags
40from nova import rpc
41from nova import utils
42
43
44FLAGS = flags.FLAGS
45flags.DEFINE_integer('report_interval', 10,
46 'seconds between nodes reporting state to datastore',
47 lower_bound=1)
48
49flags.DEFINE_integer('periodic_interval', 60,
50 'seconds between running periodic tasks',
51 lower_bound=1)
52
53
54class Service(object, service.Service):
55 """Base class for workers that run on hosts."""
56
57 def __init__(self, host, binary, topic, manager, report_interval=None,
58 periodic_interval=None, *args, **kwargs):
59 self.host = host
60 self.binary = binary
61 self.topic = topic
62 self.manager_class_name = manager
63 self.report_interval = report_interval
64 self.periodic_interval = periodic_interval
65 super(Service, self).__init__(*args, **kwargs)
66 self.saved_args, self.saved_kwargs = args, kwargs
67
68 def startService(self): # pylint: disable-msg C0103
69 manager_class = utils.import_class(self.manager_class_name)
70 self.manager = manager_class(host=self.host, *self.saved_args,
71 **self.saved_kwargs)
72 self.manager.init_host()
73 self.model_disconnected = False
74 ctxt = context.get_admin_context()
75 try:
76 service_ref = db.service_get_by_args(ctxt,
77 self.host,
78 self.binary)
79 self.service_id = service_ref['id']
80 except exception.NotFound:
81 self._create_service_ref(ctxt)
82
83 conn = rpc.Connection.instance()
84 if self.report_interval:
85 consumer_all = rpc.AdapterConsumer(
86 connection=conn,
87 topic=self.topic,
88 proxy=self)
89 consumer_node = rpc.AdapterConsumer(
90 connection=conn,
91 topic='%s.%s' % (self.topic, self.host),
92 proxy=self)
93
94 consumer_all.attach_to_twisted()
95 consumer_node.attach_to_twisted()
96
97 pulse = task.LoopingCall(self.report_state)
98 pulse.start(interval=self.report_interval, now=False)
99
100 if self.periodic_interval:
101 pulse = task.LoopingCall(self.periodic_tasks)
102 pulse.start(interval=self.periodic_interval, now=False)
103
104 def _create_service_ref(self, context):
105 service_ref = db.service_create(context,
106 {'host': self.host,
107 'binary': self.binary,
108 'topic': self.topic,
109 'report_count': 0})
110 self.service_id = service_ref['id']
111
112 def __getattr__(self, key):
113 manager = self.__dict__.get('manager', None)
114 return getattr(manager, key)
115
116 @classmethod
117 def create(cls,
118 host=None,
119 binary=None,
120 topic=None,
121 manager=None,
122 report_interval=None,
123 periodic_interval=None):
124 """Instantiates class and passes back application object.
125
126 Args:
127 host, defaults to FLAGS.host
128 binary, defaults to basename of executable
129 topic, defaults to bin_name - "nova-" part
130 manager, defaults to FLAGS.<topic>_manager
131 report_interval, defaults to FLAGS.report_interval
132 periodic_interval, defaults to FLAGS.periodic_interval
133 """
134 if not host:
135 host = FLAGS.host
136 if not binary:
137 binary = os.path.basename(inspect.stack()[-1][1])
138 if not topic:
139 topic = binary.rpartition("nova-")[2]
140 if not manager:
141 manager = FLAGS.get('%s_manager' % topic, None)
142 if not report_interval:
143 report_interval = FLAGS.report_interval
144 if not periodic_interval:
145 periodic_interval = FLAGS.periodic_interval
146 logging.warn("Starting %s node", topic)
147 service_obj = cls(host, binary, topic, manager,
148 report_interval, periodic_interval)
149
150 # This is the parent service that twistd will be looking for when it
151 # parses this file, return it so that we can get it into globals.
152 application = service.Application(binary)
153 service_obj.setServiceParent(application)
154 return application
155
156 def kill(self):
157 """Destroy the service object in the datastore"""
158 try:
159 db.service_destroy(context.get_admin_context(), self.service_id)
160 except exception.NotFound:
161 logging.warn("Service killed that has no database entry")
162
163 @defer.inlineCallbacks
164 def periodic_tasks(self):
165 """Tasks to be run at a periodic interval"""
166 yield self.manager.periodic_tasks(context.get_admin_context())
167
168 @defer.inlineCallbacks
169 def report_state(self):
170 """Update the state of this service in the datastore."""
171 ctxt = context.get_admin_context()
172 try:
173 try:
174 service_ref = db.service_get(ctxt, self.service_id)
175 except exception.NotFound:
176 logging.debug("The service database object disappeared, "
177 "Recreating it.")
178 self._create_service_ref(ctxt)
179 service_ref = db.service_get(ctxt, self.service_id)
180
181 db.service_update(ctxt,
182 self.service_id,
183 {'report_count': service_ref['report_count'] + 1})
184
185 # TODO(termie): make this pattern be more elegant.
186 if getattr(self, "model_disconnected", False):
187 self.model_disconnected = False
188 logging.error("Recovered model server connection!")
189
190 # TODO(vish): this should probably only catch connection errors
191 except Exception: # pylint: disable-msg=W0702
192 if not getattr(self, "model_disconnected", False):
193 self.model_disconnected = True
194 logging.exception("model server went away")
195 yield
1960
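
Both attach_to_eventlet() and the new Service.start() lean on utils.LoopingCall, the eventlet stand-in for twisted's task.LoopingCall (it is added to nova/utils.py elsewhere in this branch). A rough sketch of such a helper, assuming the same start()/stop() interface used above; this is not necessarily the exact implementation in the branch.

    import sys

    from eventlet import event
    from eventlet import greenthread


    class LoopingCall(object):
        """Call a function repeatedly from a greenthread until stop()."""
        def __init__(self, f, *args, **kw):
            self.f = f
            self.args = args
            self.kw = kw
            self._running = False

        def start(self, interval, now=True):
            self._running = True
            done = event.Event()

            def _inner():
                if not now:
                    greenthread.sleep(interval)
                try:
                    while self._running:
                        self.f(*self.args, **self.kw)
                        greenthread.sleep(interval)
                except Exception:
                    done.send_exception(*sys.exc_info())
                    return
                done.send(True)

            self.done = done
            greenthread.spawn(_inner)
            return self.done

        def stop(self):
            self._running = False
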
=== modified file 'nova/test.py'
--- nova/test.py 2010-10-26 15:48:20 +0000
+++ nova/test.py 2010-12-16 20:49:10 +0000
@@ -25,11 +25,12 @@
25import datetime25import datetime
26import sys26import sys
27import time27import time
28import unittest
2829
29import mox30import mox
30import stubout31import stubout
31from twisted.internet import defer32from twisted.internet import defer
32from twisted.trial import unittest33from twisted.trial import unittest as trial_unittest
3334
34from nova import context35from nova import context
35from nova import db36from nova import db
@@ -55,7 +56,89 @@
55 return _skipper56 return _skipper
5657
5758
58class TrialTestCase(unittest.TestCase):59class TestCase(unittest.TestCase):
60 """Test case base class for all unit tests"""
61 def setUp(self):
62 """Run before each test method to initialize test environment"""
63 super(TestCase, self).setUp()
64 # NOTE(vish): We need a better method for creating fixtures for tests
65 # now that we have some required db setup for the system
66 # to work properly.
67 self.start = datetime.datetime.utcnow()
68 ctxt = context.get_admin_context()
69 if db.network_count(ctxt) != 5:
70 network_manager.VlanManager().create_networks(ctxt,
71 FLAGS.fixed_range,
72 5, 16,
73 FLAGS.vlan_start,
74 FLAGS.vpn_start)
75
76 # emulate some of the mox stuff, we can't use the metaclass
77 # because it screws with our generators
78 self.mox = mox.Mox()
79 self.stubs = stubout.StubOutForTesting()
80 self.flag_overrides = {}
81 self.injected = []
82 self._monkey_patch_attach()
83 self._original_flags = FLAGS.FlagValuesDict()
84
85 def tearDown(self):
86 """Runs after each test method to finalize/tear down test
87 environment."""
88 try:
89 self.mox.UnsetStubs()
90 self.stubs.UnsetAll()
91 self.stubs.SmartUnsetAll()
92 self.mox.VerifyAll()
93 # NOTE(vish): Clean up any ips associated during the test.
94 ctxt = context.get_admin_context()
95 db.fixed_ip_disassociate_all_by_timeout(ctxt, FLAGS.host,
96 self.start)
97 db.network_disassociate_all(ctxt)
98 rpc.Consumer.attach_to_eventlet = self.originalAttach
99 for x in self.injected:
100 try:
101 x.stop()
102 except AssertionError:
103 pass
104
105 if FLAGS.fake_rabbit:
106 fakerabbit.reset_all()
107
108 db.security_group_destroy_all(ctxt)
109 super(TestCase, self).tearDown()
110 finally:
111 self.reset_flags()
112
113 def flags(self, **kw):
114 """Override flag variables for a test"""
115 for k, v in kw.iteritems():
116 if k in self.flag_overrides:
117 self.reset_flags()
118 raise Exception(
119                 'trying to override already overridden flag: %s' % k)
120 self.flag_overrides[k] = getattr(FLAGS, k)
121 setattr(FLAGS, k, v)
122
123 def reset_flags(self):
124 """Resets all flag variables for the test. Runs after each test"""
125 FLAGS.Reset()
126 for k, v in self._original_flags.iteritems():
127 setattr(FLAGS, k, v)
128
129 def _monkey_patch_attach(self):
130 self.originalAttach = rpc.Consumer.attach_to_eventlet
131
132 def _wrapped(innerSelf):
133 rv = self.originalAttach(innerSelf)
134 self.injected.append(rv)
135 return rv
136
137 _wrapped.func_name = self.originalAttach.func_name
138 rpc.Consumer.attach_to_eventlet = _wrapped
139
140
141class TrialTestCase(trial_unittest.TestCase):
59 """Test case base class for all unit tests"""142 """Test case base class for all unit tests"""
60 def setUp(self):143 def setUp(self):
61 """Run before each test method to initialize test environment"""144 """Run before each test method to initialize test environment"""
@@ -78,7 +161,6 @@
78 self.stubs = stubout.StubOutForTesting()161 self.stubs = stubout.StubOutForTesting()
79 self.flag_overrides = {}162 self.flag_overrides = {}
80 self.injected = []163 self.injected = []
81 self._monkey_patch_attach()
82 self._original_flags = FLAGS.FlagValuesDict()164 self._original_flags = FLAGS.FlagValuesDict()
83165
84 def tearDown(self):166 def tearDown(self):
@@ -94,7 +176,6 @@
94 db.fixed_ip_disassociate_all_by_timeout(ctxt, FLAGS.host,176 db.fixed_ip_disassociate_all_by_timeout(ctxt, FLAGS.host,
95 self.start)177 self.start)
96 db.network_disassociate_all(ctxt)178 db.network_disassociate_all(ctxt)
97 rpc.Consumer.attach_to_twisted = self.originalAttach
98 for x in self.injected:179 for x in self.injected:
99 try:180 try:
100 x.stop()181 x.stop()
@@ -147,14 +228,3 @@
147 return d228 return d
148 _wrapped.func_name = func.func_name229 _wrapped.func_name = func.func_name
149 return _wrapped230 return _wrapped
150
151 def _monkey_patch_attach(self):
152 self.originalAttach = rpc.Consumer.attach_to_twisted
153
154 def _wrapped(innerSelf):
155 rv = self.originalAttach(innerSelf)
156 self.injected.append(rv)
157 return rv
158
159 _wrapped.func_name = self.originalAttach.func_name
160 rpc.Consumer.attach_to_twisted = _wrapped
161231
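
The new plain-unittest TestCase keeps the flags()/reset_flags() helpers from TrialTestCase, so per-test flag overrides still work the same way. A small usage sketch (the test itself is not part of the branch; fake_rabbit is an existing flag):

    from nova import flags
    from nova import test

    FLAGS = flags.FLAGS


    class ExampleTestCase(test.TestCase):
        def test_flag_override_is_scoped_to_this_test(self):
            self.flags(fake_rabbit=False)
            self.assertEqual(FLAGS.fake_rabbit, False)
            # tearDown() calls reset_flags(), so the override cannot leak
            # into other tests.
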
=== modified file 'nova/tests/access_unittest.py'
--- nova/tests/access_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/access_unittest.py 2010-12-16 20:49:10 +0000
@@ -35,7 +35,7 @@
35 pass35 pass
3636
3737
38class AccessTestCase(test.TrialTestCase):38class AccessTestCase(test.TestCase):
39 def setUp(self):39 def setUp(self):
40 super(AccessTestCase, self).setUp()40 super(AccessTestCase, self).setUp()
41 um = manager.AuthManager()41 um = manager.AuthManager()
4242
=== modified file 'nova/tests/auth_unittest.py'
--- nova/tests/auth_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/auth_unittest.py 2010-12-16 20:49:10 +0000
@@ -326,12 +326,12 @@
326 self.assertTrue(user.is_admin())326 self.assertTrue(user.is_admin())
327327
328328
329class AuthManagerLdapTestCase(AuthManagerTestCase, test.TrialTestCase):329class AuthManagerLdapTestCase(AuthManagerTestCase, test.TestCase):
330 auth_driver = 'nova.auth.ldapdriver.FakeLdapDriver'330 auth_driver = 'nova.auth.ldapdriver.FakeLdapDriver'
331331
332 def __init__(self, *args, **kwargs):332 def __init__(self, *args, **kwargs):
333 AuthManagerTestCase.__init__(self)333 AuthManagerTestCase.__init__(self)
334 test.TrialTestCase.__init__(self, *args, **kwargs)334 test.TestCase.__init__(self, *args, **kwargs)
335 import nova.auth.fakeldap as fakeldap335 import nova.auth.fakeldap as fakeldap
336 FLAGS.redis_db = 8336 FLAGS.redis_db = 8
337 if FLAGS.flush_db:337 if FLAGS.flush_db:
@@ -343,7 +343,7 @@
343 self.skip = True343 self.skip = True
344344
345345
346class AuthManagerDbTestCase(AuthManagerTestCase, test.TrialTestCase):346class AuthManagerDbTestCase(AuthManagerTestCase, test.TestCase):
347 auth_driver = 'nova.auth.dbdriver.DbDriver'347 auth_driver = 'nova.auth.dbdriver.DbDriver'
348348
349349
350350
=== modified file 'nova/tests/cloud_unittest.py'
--- nova/tests/cloud_unittest.py 2010-12-09 17:30:03 +0000
+++ nova/tests/cloud_unittest.py 2010-12-16 20:49:10 +0000
@@ -27,8 +27,6 @@
27import time27import time
2828
29from eventlet import greenthread29from eventlet import greenthread
30from twisted.internet import defer
31import unittest
32from xml.etree import ElementTree30from xml.etree import ElementTree
3331
34from nova import context32from nova import context
@@ -53,7 +51,7 @@
53os.makedirs(IMAGES_PATH)51os.makedirs(IMAGES_PATH)
5452
5553
56class CloudTestCase(test.TrialTestCase):54class CloudTestCase(test.TestCase):
57 def setUp(self):55 def setUp(self):
58 super(CloudTestCase, self).setUp()56 super(CloudTestCase, self).setUp()
59 self.flags(connection_type='fake', images_path=IMAGES_PATH)57 self.flags(connection_type='fake', images_path=IMAGES_PATH)
@@ -199,7 +197,7 @@
199 logging.debug("Need to watch instance %s until it's running..." %197 logging.debug("Need to watch instance %s until it's running..." %
200 instance['instance_id'])198 instance['instance_id'])
201 while True:199 while True:
202 rv = yield defer.succeed(time.sleep(1))200 greenthread.sleep(1)
203 info = self.cloud._get_instance(instance['instance_id'])201 info = self.cloud._get_instance(instance['instance_id'])
204 logging.debug(info['state'])202 logging.debug(info['state'])
205 if info['state'] == power_state.RUNNING:203 if info['state'] == power_state.RUNNING:
206204
=== modified file 'nova/tests/compute_unittest.py'
--- nova/tests/compute_unittest.py 2010-12-03 20:21:18 +0000
+++ nova/tests/compute_unittest.py 2010-12-16 20:49:10 +0000
@@ -22,8 +22,6 @@
22import datetime22import datetime
23import logging23import logging
2424
25from twisted.internet import defer
26
27from nova import context25from nova import context
28from nova import db26from nova import db
29from nova import exception27from nova import exception
@@ -33,10 +31,11 @@
33from nova.auth import manager31from nova.auth import manager
34from nova.compute import api as compute_api32from nova.compute import api as compute_api
3533
34
36FLAGS = flags.FLAGS35FLAGS = flags.FLAGS
3736
3837
39class ComputeTestCase(test.TrialTestCase):38class ComputeTestCase(test.TestCase):
40 """Test case for compute"""39 """Test case for compute"""
41 def setUp(self):40 def setUp(self):
42 logging.getLogger().setLevel(logging.DEBUG)41 logging.getLogger().setLevel(logging.DEBUG)
@@ -94,24 +93,22 @@
94 db.security_group_destroy(self.context, group['id'])93 db.security_group_destroy(self.context, group['id'])
95 db.instance_destroy(self.context, ref[0]['id'])94 db.instance_destroy(self.context, ref[0]['id'])
9695
97 @defer.inlineCallbacks
98 def test_run_terminate(self):96 def test_run_terminate(self):
99 """Make sure it is possible to run and terminate instance"""97 """Make sure it is possible to run and terminate instance"""
100 instance_id = self._create_instance()98 instance_id = self._create_instance()
10199
102 yield self.compute.run_instance(self.context, instance_id)100 self.compute.run_instance(self.context, instance_id)
103101
104 instances = db.instance_get_all(context.get_admin_context())102 instances = db.instance_get_all(context.get_admin_context())
105 logging.info("Running instances: %s", instances)103 logging.info("Running instances: %s", instances)
106 self.assertEqual(len(instances), 1)104 self.assertEqual(len(instances), 1)
107105
108 yield self.compute.terminate_instance(self.context, instance_id)106 self.compute.terminate_instance(self.context, instance_id)
109107
110 instances = db.instance_get_all(context.get_admin_context())108 instances = db.instance_get_all(context.get_admin_context())
111 logging.info("After terminating instances: %s", instances)109 logging.info("After terminating instances: %s", instances)
112 self.assertEqual(len(instances), 0)110 self.assertEqual(len(instances), 0)
113111
114 @defer.inlineCallbacks
115 def test_run_terminate_timestamps(self):112 def test_run_terminate_timestamps(self):
116 """Make sure timestamps are set for launched and destroyed"""113 """Make sure timestamps are set for launched and destroyed"""
117 instance_id = self._create_instance()114 instance_id = self._create_instance()
@@ -119,42 +116,40 @@
119 self.assertEqual(instance_ref['launched_at'], None)116 self.assertEqual(instance_ref['launched_at'], None)
120 self.assertEqual(instance_ref['deleted_at'], None)117 self.assertEqual(instance_ref['deleted_at'], None)
121 launch = datetime.datetime.utcnow()118 launch = datetime.datetime.utcnow()
122 yield self.compute.run_instance(self.context, instance_id)119 self.compute.run_instance(self.context, instance_id)
123 instance_ref = db.instance_get(self.context, instance_id)120 instance_ref = db.instance_get(self.context, instance_id)
124 self.assert_(instance_ref['launched_at'] > launch)121 self.assert_(instance_ref['launched_at'] > launch)
125 self.assertEqual(instance_ref['deleted_at'], None)122 self.assertEqual(instance_ref['deleted_at'], None)
126 terminate = datetime.datetime.utcnow()123 terminate = datetime.datetime.utcnow()
127 yield self.compute.terminate_instance(self.context, instance_id)124 self.compute.terminate_instance(self.context, instance_id)
128 self.context = self.context.elevated(True)125 self.context = self.context.elevated(True)
129 instance_ref = db.instance_get(self.context, instance_id)126 instance_ref = db.instance_get(self.context, instance_id)
130 self.assert_(instance_ref['launched_at'] < terminate)127 self.assert_(instance_ref['launched_at'] < terminate)
131 self.assert_(instance_ref['deleted_at'] > terminate)128 self.assert_(instance_ref['deleted_at'] > terminate)
132129
133 @defer.inlineCallbacks
134 def test_reboot(self):130 def test_reboot(self):
135 """Ensure instance can be rebooted"""131 """Ensure instance can be rebooted"""
136 instance_id = self._create_instance()132 instance_id = self._create_instance()
137 yield self.compute.run_instance(self.context, instance_id)133 self.compute.run_instance(self.context, instance_id)
138 yield self.compute.reboot_instance(self.context, instance_id)134 self.compute.reboot_instance(self.context, instance_id)
139 yield self.compute.terminate_instance(self.context, instance_id)135 self.compute.terminate_instance(self.context, instance_id)
140136
141 @defer.inlineCallbacks
142 def test_console_output(self):137 def test_console_output(self):
143 """Make sure we can get console output from instance"""138 """Make sure we can get console output from instance"""
144 instance_id = self._create_instance()139 instance_id = self._create_instance()
145 yield self.compute.run_instance(self.context, instance_id)140 self.compute.run_instance(self.context, instance_id)
146141
147 console = yield self.compute.get_console_output(self.context,142 console = self.compute.get_console_output(self.context,
148 instance_id)143 instance_id)
149 self.assert_(console)144 self.assert_(console)
150 yield self.compute.terminate_instance(self.context, instance_id)145 self.compute.terminate_instance(self.context, instance_id)
151146
152 @defer.inlineCallbacks
153 def test_run_instance_existing(self):147 def test_run_instance_existing(self):
154 """Ensure failure when running an instance that already exists"""148 """Ensure failure when running an instance that already exists"""
155 instance_id = self._create_instance()149 instance_id = self._create_instance()
156 yield self.compute.run_instance(self.context, instance_id)150 self.compute.run_instance(self.context, instance_id)
157 self.assertFailure(self.compute.run_instance(self.context,151 self.assertRaises(exception.Error,
158 instance_id),152 self.compute.run_instance,
159 exception.Error)153 self.context,
160 yield self.compute.terminate_instance(self.context, instance_id)154 instance_id)
155 self.compute.terminate_instance(self.context, instance_id)
161156
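
One recurring detail in these test conversions: trial's assertFailure took an already-created Deferred, while unittest's assertRaises must be handed the callable and its arguments separately so it can invoke the callable itself. A self-contained illustration (not nova code):

    import unittest


    def divide(a, b):
        return a / b


    class AssertRaisesExample(unittest.TestCase):
        def test_divide_by_zero(self):
            # Wrong: divide(1, 0) would raise before assertRaises even runs.
            #     self.assertRaises(ZeroDivisionError, divide(1, 0))
            # Right: pass the callable and its arguments separately.
            self.assertRaises(ZeroDivisionError, divide, 1, 0)


    if __name__ == '__main__':
        unittest.main()
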
=== modified file 'nova/tests/flags_unittest.py'
--- nova/tests/flags_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/flags_unittest.py 2010-12-16 20:49:10 +0000
@@ -24,7 +24,7 @@
24flags.DEFINE_string('flags_unittest', 'foo', 'for testing purposes only')24flags.DEFINE_string('flags_unittest', 'foo', 'for testing purposes only')
2525
2626
27class FlagsTestCase(test.TrialTestCase):27class FlagsTestCase(test.TestCase):
2828
29 def setUp(self):29 def setUp(self):
30 super(FlagsTestCase, self).setUp()30 super(FlagsTestCase, self).setUp()
3131
=== modified file 'nova/tests/misc_unittest.py'
--- nova/tests/misc_unittest.py 2010-12-15 10:57:56 +0000
+++ nova/tests/misc_unittest.py 2010-12-16 20:49:10 +0000
@@ -20,7 +20,7 @@
20from nova.utils import parse_mailmap, str_dict_replace20from nova.utils import parse_mailmap, str_dict_replace
2121
2222
23class ProjectTestCase(test.TrialTestCase):23class ProjectTestCase(test.TestCase):
24 def test_authors_up_to_date(self):24 def test_authors_up_to_date(self):
25 if os.path.exists('../.bzr'):25 if os.path.exists('../.bzr'):
26 contributors = set()26 contributors = set()
2727
=== modified file 'nova/tests/network_unittest.py'
--- nova/tests/network_unittest.py 2010-11-17 02:23:20 +0000
+++ nova/tests/network_unittest.py 2010-12-16 20:49:10 +0000
@@ -33,7 +33,7 @@
33FLAGS = flags.FLAGS33FLAGS = flags.FLAGS
3434
3535
36class NetworkTestCase(test.TrialTestCase):36class NetworkTestCase(test.TestCase):
37 """Test cases for network code"""37 """Test cases for network code"""
38 def setUp(self):38 def setUp(self):
39 super(NetworkTestCase, self).setUp()39 super(NetworkTestCase, self).setUp()
4040
=== modified file 'nova/tests/objectstore_unittest.py'
--- nova/tests/objectstore_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/objectstore_unittest.py 2010-12-16 20:49:10 +0000
@@ -54,7 +54,7 @@
54os.makedirs(os.path.join(OSS_TEMPDIR, 'buckets'))54os.makedirs(os.path.join(OSS_TEMPDIR, 'buckets'))
5555
5656
57class ObjectStoreTestCase(test.TrialTestCase):57class ObjectStoreTestCase(test.TestCase):
58 """Test objectstore API directly."""58 """Test objectstore API directly."""
5959
60 def setUp(self):60 def setUp(self):
@@ -191,7 +191,7 @@
191 protocol = TestHTTPChannel191 protocol = TestHTTPChannel
192192
193193
194class S3APITestCase(test.TrialTestCase):194class S3APITestCase(test.TestCase):
195 """Test objectstore through S3 API."""195 """Test objectstore through S3 API."""
196196
197 def setUp(self):197 def setUp(self):
198198
=== removed file 'nova/tests/process_unittest.py'
--- nova/tests/process_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/process_unittest.py 1970-01-01 00:00:00 +0000
@@ -1,132 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19import logging
20from twisted.internet import defer
21from twisted.internet import reactor
22from xml.etree import ElementTree
23
24from nova import exception
25from nova import flags
26from nova import process
27from nova import test
28from nova import utils
29
30FLAGS = flags.FLAGS
31
32
33class ProcessTestCase(test.TrialTestCase):
34 def setUp(self):
35 logging.getLogger().setLevel(logging.DEBUG)
36 super(ProcessTestCase, self).setUp()
37
38 def test_execute_stdout(self):
39 pool = process.ProcessPool(2)
40 d = pool.simple_execute('echo test')
41
42 def _check(rv):
43 self.assertEqual(rv[0], 'test\n')
44 self.assertEqual(rv[1], '')
45
46 d.addCallback(_check)
47 d.addErrback(self.fail)
48 return d
49
50 def test_execute_stderr(self):
51 pool = process.ProcessPool(2)
52 d = pool.simple_execute('cat BAD_FILE', check_exit_code=False)
53
54 def _check(rv):
55 self.assertEqual(rv[0], '')
56 self.assert_('No such file' in rv[1])
57
58 d.addCallback(_check)
59 d.addErrback(self.fail)
60 return d
61
62 def test_execute_unexpected_stderr(self):
63 pool = process.ProcessPool(2)
64 d = pool.simple_execute('cat BAD_FILE')
65 d.addCallback(lambda x: self.fail('should have raised an error'))
66 d.addErrback(lambda failure: failure.trap(IOError))
67 return d
68
69 def test_max_processes(self):
70 pool = process.ProcessPool(2)
71 d1 = pool.simple_execute('sleep 0.01')
72 d2 = pool.simple_execute('sleep 0.01')
73 d3 = pool.simple_execute('sleep 0.005')
74 d4 = pool.simple_execute('sleep 0.005')
75
76 called = []
77
78 def _called(rv, name):
79 called.append(name)
80
81 d1.addCallback(_called, 'd1')
82 d2.addCallback(_called, 'd2')
83 d3.addCallback(_called, 'd3')
84 d4.addCallback(_called, 'd4')
85
86 # Make sure that d3 and d4 had to wait on the other two and were called
87 # in order
88 # NOTE(termie): there may be a race condition in this test if for some
89 # reason one of the sleeps takes longer to complete
90 # than it should
91 d4.addCallback(lambda x: self.assertEqual(called[2], 'd3'))
92 d4.addCallback(lambda x: self.assertEqual(called[3], 'd4'))
93 d4.addErrback(self.fail)
94 return d4
95
96 def test_kill_long_process(self):
97 pool = process.ProcessPool(2)
98
99 d1 = pool.simple_execute('sleep 1')
100 d2 = pool.simple_execute('sleep 0.005')
101
102 timeout = reactor.callLater(0.1, self.fail, 'should have been killed')
103
104 # kill d1 and wait on it to end then cancel the timeout
105 d2.addCallback(lambda _: d1.process.signalProcess('KILL'))
106 d2.addCallback(lambda _: d1)
107 d2.addBoth(lambda _: timeout.active() and timeout.cancel())
108 d2.addErrback(self.fail)
109 return d2
110
111 def test_process_exit_is_contained(self):
112 pool = process.ProcessPool(2)
113
114 d1 = pool.simple_execute('sleep 1')
115 d1.addCallback(lambda x: self.fail('should have errbacked'))
116 d1.addErrback(lambda fail: fail.trap(IOError))
117 reactor.callLater(0.05, d1.process.signalProcess, 'KILL')
118
119 return d1
120
121 def test_shared_pool_is_singleton(self):
122 pool1 = process.SharedPool()
123 pool2 = process.SharedPool()
124 self.assertEqual(id(pool1._instance), id(pool2._instance))
125
126 def test_shared_pool_works_as_singleton(self):
127 d1 = process.simple_execute('sleep 1')
128 d2 = process.simple_execute('sleep 0.005')
129 # lp609749: would have failed with
130 # exceptions.AssertionError: Someone released me too many times:
131 # too many tokens!
132 return d1
1330
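
process_unittest.py goes away along with nova/process.py itself. For comparison only, equivalent stdout coverage in the blocking style might look like the sketch below, assuming utils.execute() still returns an (stdout, stderr) pair; no such test is added by this branch.

    from nova import test
    from nova import utils


    class ExecuteTestCase(test.TestCase):
        def test_execute_stdout(self):
            stdout, stderr = utils.execute('echo test')
            self.assertEqual(stdout, 'test\n')
            self.assertEqual(stderr, '')
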
=== modified file 'nova/tests/quota_unittest.py'
--- nova/tests/quota_unittest.py 2010-11-24 22:52:10 +0000
+++ nova/tests/quota_unittest.py 2010-12-16 20:49:10 +0000
@@ -32,7 +32,7 @@
32FLAGS = flags.FLAGS32FLAGS = flags.FLAGS
3333
3434
35class QuotaTestCase(test.TrialTestCase):35class QuotaTestCase(test.TestCase):
36 def setUp(self):36 def setUp(self):
37 logging.getLogger().setLevel(logging.DEBUG)37 logging.getLogger().setLevel(logging.DEBUG)
38 super(QuotaTestCase, self).setUp()38 super(QuotaTestCase, self).setUp()
3939
=== modified file 'nova/tests/rpc_unittest.py'
--- nova/tests/rpc_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/rpc_unittest.py 2010-12-16 20:49:10 +0000
@@ -20,8 +20,6 @@
20"""20"""
21import logging21import logging
2222
23from twisted.internet import defer
24
25from nova import context23from nova import context
26from nova import flags24from nova import flags
27from nova import rpc25from nova import rpc
@@ -31,7 +29,7 @@
31FLAGS = flags.FLAGS29FLAGS = flags.FLAGS
3230
3331
34class RpcTestCase(test.TrialTestCase):32class RpcTestCase(test.TestCase):
35 """Test cases for rpc"""33 """Test cases for rpc"""
36 def setUp(self):34 def setUp(self):
37 super(RpcTestCase, self).setUp()35 super(RpcTestCase, self).setUp()
@@ -40,23 +38,22 @@
40 self.consumer = rpc.AdapterConsumer(connection=self.conn,38 self.consumer = rpc.AdapterConsumer(connection=self.conn,
41 topic='test',39 topic='test',
42 proxy=self.receiver)40 proxy=self.receiver)
43 self.consumer.attach_to_twisted()41 self.consumer.attach_to_eventlet()
44 self.context = context.get_admin_context()42 self.context = context.get_admin_context()
4543
46 def test_call_succeed(self):44 def test_call_succeed(self):
47 """Get a value through rpc call"""45 """Get a value through rpc call"""
48 value = 4246 value = 42
49 result = yield rpc.call_twisted(self.context,47 result = rpc.call(self.context, 'test', {"method": "echo",
50 'test', {"method": "echo",
51 "args": {"value": value}})48 "args": {"value": value}})
52 self.assertEqual(value, result)49 self.assertEqual(value, result)
5350
54 def test_context_passed(self):51 def test_context_passed(self):
55 """Makes sure a context is passed through rpc call"""52 """Makes sure a context is passed through rpc call"""
56 value = 4253 value = 42
57 result = yield rpc.call_twisted(self.context,54 result = rpc.call(self.context,
58 'test', {"method": "context",55 'test', {"method": "context",
59 "args": {"value": value}})56 "args": {"value": value}})
60 self.assertEqual(self.context.to_dict(), result)57 self.assertEqual(self.context.to_dict(), result)
6158
62 def test_call_exception(self):59 def test_call_exception(self):
@@ -67,14 +64,17 @@
67 to an int in the test.64 to an int in the test.
68 """65 """
69 value = 4266 value = 42
70 self.assertFailure(rpc.call_twisted(self.context, 'test',67 self.assertRaises(rpc.RemoteError,
71 {"method": "fail",68 rpc.call,
72 "args": {"value": value}}),69 self.context,
73 rpc.RemoteError)70 'test',
71 {"method": "fail",
72 "args": {"value": value}})
74 try:73 try:
75 yield rpc.call_twisted(self.context,74 rpc.call(self.context,
76 'test', {"method": "fail",75 'test',
77 "args": {"value": value}})76 {"method": "fail",
77 "args": {"value": value}})
78 self.fail("should have thrown rpc.RemoteError")78 self.fail("should have thrown rpc.RemoteError")
79 except rpc.RemoteError as exc:79 except rpc.RemoteError as exc:
80 self.assertEqual(int(exc.value), value)80 self.assertEqual(int(exc.value), value)
@@ -89,13 +89,13 @@
89 def echo(context, value):89 def echo(context, value):
90 """Simply returns whatever value is sent in"""90 """Simply returns whatever value is sent in"""
91 logging.debug("Received %s", value)91 logging.debug("Received %s", value)
92 return defer.succeed(value)92 return value
9393
94 @staticmethod94 @staticmethod
95 def context(context, value):95 def context(context, value):
96 """Returns dictionary version of context"""96 """Returns dictionary version of context"""
97 logging.debug("Received %s", context)97 logging.debug("Received %s", context)
98 return defer.succeed(context.to_dict())98 return context.to_dict()
9999
100 @staticmethod100 @staticmethod
101 def fail(context, value):101 def fail(context, value):
102102
=== modified file 'nova/tests/scheduler_unittest.py'
--- nova/tests/scheduler_unittest.py 2010-10-26 06:37:51 +0000
+++ nova/tests/scheduler_unittest.py 2010-12-16 20:49:10 +0000
@@ -44,7 +44,7 @@
44 return 'named_host'44 return 'named_host'
4545
4646
47class SchedulerTestCase(test.TrialTestCase):47class SchedulerTestCase(test.TestCase):
48 """Test case for scheduler"""48 """Test case for scheduler"""
49 def setUp(self):49 def setUp(self):
50 super(SchedulerTestCase, self).setUp()50 super(SchedulerTestCase, self).setUp()
@@ -73,7 +73,7 @@
73 scheduler.named_method(ctxt, 'topic', num=7)73 scheduler.named_method(ctxt, 'topic', num=7)
7474
7575
76class SimpleDriverTestCase(test.TrialTestCase):76class SimpleDriverTestCase(test.TestCase):
77 """Test case for simple driver"""77 """Test case for simple driver"""
78 def setUp(self):78 def setUp(self):
79 super(SimpleDriverTestCase, self).setUp()79 super(SimpleDriverTestCase, self).setUp()
@@ -122,12 +122,12 @@
122 'nova-compute',122 'nova-compute',
123 'compute',123 'compute',
124 FLAGS.compute_manager)124 FLAGS.compute_manager)
125 compute1.startService()125 compute1.start()
126 compute2 = service.Service('host2',126 compute2 = service.Service('host2',
127 'nova-compute',127 'nova-compute',
128 'compute',128 'compute',
129 FLAGS.compute_manager)129 FLAGS.compute_manager)
130 compute2.startService()130 compute2.start()
131 hosts = self.scheduler.driver.hosts_up(self.context, 'compute')131 hosts = self.scheduler.driver.hosts_up(self.context, 'compute')
132 self.assertEqual(len(hosts), 2)132 self.assertEqual(len(hosts), 2)
133 compute1.kill()133 compute1.kill()
@@ -139,12 +139,12 @@
139 'nova-compute',139 'nova-compute',
140 'compute',140 'compute',
141 FLAGS.compute_manager)141 FLAGS.compute_manager)
142 compute1.startService()142 compute1.start()
143 compute2 = service.Service('host2',143 compute2 = service.Service('host2',
144 'nova-compute',144 'nova-compute',
145 'compute',145 'compute',
146 FLAGS.compute_manager)146 FLAGS.compute_manager)
147 compute2.startService()147 compute2.start()
148 instance_id1 = self._create_instance()148 instance_id1 = self._create_instance()
149 compute1.run_instance(self.context, instance_id1)149 compute1.run_instance(self.context, instance_id1)
150 instance_id2 = self._create_instance()150 instance_id2 = self._create_instance()
@@ -162,12 +162,12 @@
162 'nova-compute',162 'nova-compute',
163 'compute',163 'compute',
164 FLAGS.compute_manager)164 FLAGS.compute_manager)
165 compute1.startService()165 compute1.start()
166 compute2 = service.Service('host2',166 compute2 = service.Service('host2',
167 'nova-compute',167 'nova-compute',
168 'compute',168 'compute',
169 FLAGS.compute_manager)169 FLAGS.compute_manager)
170 compute2.startService()170 compute2.start()
171 instance_ids1 = []171 instance_ids1 = []
172 instance_ids2 = []172 instance_ids2 = []
173 for index in xrange(FLAGS.max_cores):173 for index in xrange(FLAGS.max_cores):
@@ -195,12 +195,12 @@
195 'nova-volume',195 'nova-volume',
196 'volume',196 'volume',
197 FLAGS.volume_manager)197 FLAGS.volume_manager)
198 volume1.startService()198 volume1.start()
199 volume2 = service.Service('host2',199 volume2 = service.Service('host2',
200 'nova-volume',200 'nova-volume',
201 'volume',201 'volume',
202 FLAGS.volume_manager)202 FLAGS.volume_manager)
203 volume2.startService()203 volume2.start()
204 volume_id1 = self._create_volume()204 volume_id1 = self._create_volume()
205 volume1.create_volume(self.context, volume_id1)205 volume1.create_volume(self.context, volume_id1)
206 volume_id2 = self._create_volume()206 volume_id2 = self._create_volume()
@@ -218,12 +218,12 @@
218 'nova-volume',218 'nova-volume',
219 'volume',219 'volume',
220 FLAGS.volume_manager)220 FLAGS.volume_manager)
221 volume1.startService()221 volume1.start()
222 volume2 = service.Service('host2',222 volume2 = service.Service('host2',
223 'nova-volume',223 'nova-volume',
224 'volume',224 'volume',
225 FLAGS.volume_manager)225 FLAGS.volume_manager)
226 volume2.startService()226 volume2.start()
227 volume_ids1 = []227 volume_ids1 = []
228 volume_ids2 = []228 volume_ids2 = []
229 for index in xrange(FLAGS.max_gigabytes):229 for index in xrange(FLAGS.max_gigabytes):
230230
=== modified file 'nova/tests/service_unittest.py'
--- nova/tests/service_unittest.py 2010-10-26 06:08:57 +0000
+++ nova/tests/service_unittest.py 2010-12-16 20:49:10 +0000
@@ -22,9 +22,6 @@
2222
23import mox23import mox
2424
25from twisted.application.app import startApplication
26from twisted.internet import defer
27
28from nova import exception25from nova import exception
29from nova import flags26from nova import flags
30from nova import rpc27from nova import rpc
@@ -48,7 +45,7 @@
48 return 'service'45 return 'service'
4946
5047
51class ServiceManagerTestCase(test.TrialTestCase):48class ServiceManagerTestCase(test.TestCase):
52 """Test cases for Services"""49 """Test cases for Services"""
5350
54 def test_attribute_error_for_no_manager(self):51 def test_attribute_error_for_no_manager(self):
@@ -63,7 +60,7 @@
63 'test',60 'test',
64 'test',61 'test',
65 'nova.tests.service_unittest.FakeManager')62 'nova.tests.service_unittest.FakeManager')
66 serv.startService()63 serv.start()
67 self.assertEqual(serv.test_method(), 'manager')64 self.assertEqual(serv.test_method(), 'manager')
6865
69 def test_override_manager_method(self):66 def test_override_manager_method(self):
@@ -71,11 +68,11 @@
71 'test',68 'test',
72 'test',69 'test',
73 'nova.tests.service_unittest.FakeManager')70 'nova.tests.service_unittest.FakeManager')
74 serv.startService()71 serv.start()
75 self.assertEqual(serv.test_method(), 'service')72 self.assertEqual(serv.test_method(), 'service')
7673
7774
78class ServiceTestCase(test.TrialTestCase):75class ServiceTestCase(test.TestCase):
79 """Test cases for Services"""76 """Test cases for Services"""
8077
81 def setUp(self):78 def setUp(self):
@@ -94,8 +91,6 @@
94 self.mox.StubOutWithMock(rpc,91 self.mox.StubOutWithMock(rpc,
95 'AdapterConsumer',92 'AdapterConsumer',
96 use_mock_anything=True)93 use_mock_anything=True)
97 self.mox.StubOutWithMock(
98 service.task, 'LoopingCall', use_mock_anything=True)
99 rpc.AdapterConsumer(connection=mox.IgnoreArg(),94 rpc.AdapterConsumer(connection=mox.IgnoreArg(),
100 topic=topic,95 topic=topic,
101 proxy=mox.IsA(service.Service)).AndReturn(96 proxy=mox.IsA(service.Service)).AndReturn(
@@ -106,19 +101,8 @@
106 proxy=mox.IsA(service.Service)).AndReturn(101 proxy=mox.IsA(service.Service)).AndReturn(
107 rpc.AdapterConsumer)102 rpc.AdapterConsumer)
108103
109 rpc.AdapterConsumer.attach_to_twisted()104 rpc.AdapterConsumer.attach_to_eventlet()
110 rpc.AdapterConsumer.attach_to_twisted()105 rpc.AdapterConsumer.attach_to_eventlet()
111
112 # Stub out looping call a bit needlessly since we don't have an easy
113 # way to cancel it (yet) when the tests finishes
114 service.task.LoopingCall(mox.IgnoreArg()).AndReturn(
115 service.task.LoopingCall)
116 service.task.LoopingCall.start(interval=mox.IgnoreArg(),
117 now=mox.IgnoreArg())
118 service.task.LoopingCall(mox.IgnoreArg()).AndReturn(
119 service.task.LoopingCall)
120 service.task.LoopingCall.start(interval=mox.IgnoreArg(),
121 now=mox.IgnoreArg())
122106
123 service_create = {'host': host,107 service_create = {'host': host,
124 'binary': binary,108 'binary': binary,
@@ -136,14 +120,14 @@
136 service_create).AndReturn(service_ref)120 service_create).AndReturn(service_ref)
137 self.mox.ReplayAll()121 self.mox.ReplayAll()
138122
139 startApplication(app, False)123 app.start()
124 app.stop()
140 self.assert_(app)125 self.assert_(app)
141126
142 # We're testing sort of weird behavior in how report_state decides127 # We're testing sort of weird behavior in how report_state decides
143 # whether it is disconnected, it looks for a variable on itself called128 # whether it is disconnected, it looks for a variable on itself called
144 # 'model_disconnected' and report_state doesn't really do much so this129 # 'model_disconnected' and report_state doesn't really do much so this
145 # these are mostly just for coverage130 # these are mostly just for coverage
146 @defer.inlineCallbacks
147 def test_report_state_no_service(self):131 def test_report_state_no_service(self):
148 host = 'foo'132 host = 'foo'
149 binary = 'bar'133 binary = 'bar'
@@ -173,10 +157,9 @@
173 binary,157 binary,
174 topic,158 topic,
175 'nova.tests.service_unittest.FakeManager')159 'nova.tests.service_unittest.FakeManager')
176 serv.startService()160 serv.start()
177 yield serv.report_state()161 serv.report_state()
178162
179 @defer.inlineCallbacks
180 def test_report_state_newly_disconnected(self):163 def test_report_state_newly_disconnected(self):
181 host = 'foo'164 host = 'foo'
182 binary = 'bar'165 binary = 'bar'
@@ -204,11 +187,10 @@
204 binary,187 binary,
205 topic,188 topic,
206 'nova.tests.service_unittest.FakeManager')189 'nova.tests.service_unittest.FakeManager')
207 serv.startService()190 serv.start()
208 yield serv.report_state()191 serv.report_state()
209 self.assert_(serv.model_disconnected)192 self.assert_(serv.model_disconnected)
210193
211 @defer.inlineCallbacks
212 def test_report_state_newly_connected(self):194 def test_report_state_newly_connected(self):
213 host = 'foo'195 host = 'foo'
214 binary = 'bar'196 binary = 'bar'
@@ -238,8 +220,8 @@
238 binary,220 binary,
239 topic,221 topic,
240 'nova.tests.service_unittest.FakeManager')222 'nova.tests.service_unittest.FakeManager')
241 serv.startService()223 serv.start()
242 serv.model_disconnected = True224 serv.model_disconnected = True
243 yield serv.report_state()225 serv.report_state()
244226
245 self.assert_(not serv.model_disconnected)227 self.assert_(not serv.model_disconnected)
246228
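A note on the conversion above, since the same pattern repeats through the rest of the branch: calls that used to return deferreds now simply block the calling greenthread, so the @defer.inlineCallbacks decorators and the yields disappear and the tests become ordinary methods. A rough before/after sketch (names are illustrative, not taken from the tree):

    # before (twisted): the test is a generator and yields each deferred
    #     @defer.inlineCallbacks
    #     def test_report_state(self):
    #         serv.startService()
    #         yield serv.report_state()
    #
    # after (eventlet): the same calls block the current greenthread
    def test_report_state(self):
        serv = service.Service('foo', 'bar', 'topic',
                               'nova.tests.service_unittest.FakeManager')
        serv.start()
        serv.report_state()  # returns once the (mocked) rpc/db work is done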
=== removed file 'nova/tests/validator_unittest.py'
--- nova/tests/validator_unittest.py 2010-10-22 07:48:27 +0000
+++ nova/tests/validator_unittest.py 1970-01-01 00:00:00 +0000
@@ -1,42 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19import logging
20import unittest
21
22from nova import flags
23from nova import test
24from nova import validate
25
26
27class ValidationTestCase(test.TrialTestCase):
28 def setUp(self):
29 super(ValidationTestCase, self).setUp()
30
31 def tearDown(self):
32 super(ValidationTestCase, self).tearDown()
33
34 def test_type_validation(self):
35 self.assertTrue(type_case("foo", 5, 1))
36 self.assertRaises(TypeError, type_case, "bar", "5", 1)
37 self.assertRaises(TypeError, type_case, None, 5, 1)
38
39
40@validate.typetest(instanceid=str, size=int, number_of_instances=int)
41def type_case(instanceid, size, number_of_instances):
42 return True
430
=== modified file 'nova/tests/virt_unittest.py'
--- nova/tests/virt_unittest.py 2010-10-24 22:06:42 +0000
+++ nova/tests/virt_unittest.py 2010-12-16 20:49:10 +0000
@@ -30,7 +30,7 @@
30flags.DECLARE('instances_path', 'nova.compute.manager')30flags.DECLARE('instances_path', 'nova.compute.manager')
3131
3232
33class LibvirtConnTestCase(test.TrialTestCase):33class LibvirtConnTestCase(test.TestCase):
34 def setUp(self):34 def setUp(self):
35 super(LibvirtConnTestCase, self).setUp()35 super(LibvirtConnTestCase, self).setUp()
36 self.manager = manager.AuthManager()36 self.manager = manager.AuthManager()
@@ -123,7 +123,7 @@
123 self.manager.delete_user(self.user)123 self.manager.delete_user(self.user)
124124
125125
126class NWFilterTestCase(test.TrialTestCase):126class NWFilterTestCase(test.TestCase):
127127
128 def setUp(self):128 def setUp(self):
129 super(NWFilterTestCase, self).setUp()129 super(NWFilterTestCase, self).setUp()
@@ -235,7 +235,7 @@
235 'project_id': 'fake'})235 'project_id': 'fake'})
236 inst_id = instance_ref['id']236 inst_id = instance_ref['id']
237237
238 def _ensure_all_called(_):238 def _ensure_all_called():
239 instance_filter = 'nova-instance-%s' % instance_ref['name']239 instance_filter = 'nova-instance-%s' % instance_ref['name']
240 secgroup_filter = 'nova-secgroup-%s' % self.security_group['id']240 secgroup_filter = 'nova-secgroup-%s' % self.security_group['id']
241 for required in [secgroup_filter, 'allow-dhcp-server',241 for required in [secgroup_filter, 'allow-dhcp-server',
@@ -253,7 +253,6 @@
253 instance = db.instance_get(self.context, inst_id)253 instance = db.instance_get(self.context, inst_id)
254254
255 d = self.fw.setup_nwfilters_for_instance(instance)255 d = self.fw.setup_nwfilters_for_instance(instance)
256 d.addCallback(_ensure_all_called)256 _ensure_all_called()
257 d.addCallback(lambda _: self.teardown_security_group())257 self.teardown_security_group()
258
259 return d258 return d
260259
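One small follow-up in the NWFilter test above: with the callbacks gone, setup_nwfilters_for_instance no longer hands back a deferred, so the trailing "return d" kept on the right-hand side just returns an ordinary value for no reason. The tail of the test could probably shrink to:

    self.fw.setup_nwfilters_for_instance(instance)
    _ensure_all_called()
    self.teardown_security_group()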
=== modified file 'nova/tests/volume_unittest.py'
--- nova/tests/volume_unittest.py 2010-11-03 21:38:14 +0000
+++ nova/tests/volume_unittest.py 2010-12-16 20:49:10 +0000
@@ -21,8 +21,6 @@
21"""21"""
22import logging22import logging
2323
24from twisted.internet import defer
25
26from nova import context24from nova import context
27from nova import exception25from nova import exception
28from nova import db26from nova import db
@@ -33,7 +31,7 @@
33FLAGS = flags.FLAGS31FLAGS = flags.FLAGS
3432
3533
36class VolumeTestCase(test.TrialTestCase):34class VolumeTestCase(test.TestCase):
37 """Test Case for volumes."""35 """Test Case for volumes."""
3836
39 def setUp(self):37 def setUp(self):
@@ -56,51 +54,48 @@
56 vol['attach_status'] = "detached"54 vol['attach_status'] = "detached"
57 return db.volume_create(context.get_admin_context(), vol)['id']55 return db.volume_create(context.get_admin_context(), vol)['id']
5856
59 @defer.inlineCallbacks
60 def test_create_delete_volume(self):57 def test_create_delete_volume(self):
61 """Test volume can be created and deleted."""58 """Test volume can be created and deleted."""
62 volume_id = self._create_volume()59 volume_id = self._create_volume()
63 yield self.volume.create_volume(self.context, volume_id)60 self.volume.create_volume(self.context, volume_id)
64 self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),61 self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),
65 volume_id).id)62 volume_id).id)
6663
67 yield self.volume.delete_volume(self.context, volume_id)64 self.volume.delete_volume(self.context, volume_id)
68 self.assertRaises(exception.NotFound,65 self.assertRaises(exception.NotFound,
69 db.volume_get,66 db.volume_get,
70 self.context,67 self.context,
71 volume_id)68 volume_id)
7269
73 @defer.inlineCallbacks
74 def test_too_big_volume(self):70 def test_too_big_volume(self):
75 """Ensure failure if a too large of a volume is requested."""71 """Ensure failure if a too large of a volume is requested."""
76 # FIXME(vish): validation needs to move into the data layer in72 # FIXME(vish): validation needs to move into the data layer in
77 # volume_create73 # volume_create
78 defer.returnValue(True)74 return True
79 try:75 try:
80 volume_id = self._create_volume('1001')76 volume_id = self._create_volume('1001')
81 yield self.volume.create_volume(self.context, volume_id)77 self.volume.create_volume(self.context, volume_id)
82 self.fail("Should have thrown TypeError")78 self.fail("Should have thrown TypeError")
83 except TypeError:79 except TypeError:
84 pass80 pass
8581
86 @defer.inlineCallbacks
87 def test_too_many_volumes(self):82 def test_too_many_volumes(self):
88 """Ensure that NoMoreTargets is raised when we run out of volumes."""83 """Ensure that NoMoreTargets is raised when we run out of volumes."""
89 vols = []84 vols = []
90 total_slots = FLAGS.iscsi_num_targets85 total_slots = FLAGS.iscsi_num_targets
91 for _index in xrange(total_slots):86 for _index in xrange(total_slots):
92 volume_id = self._create_volume()87 volume_id = self._create_volume()
93 yield self.volume.create_volume(self.context, volume_id)88 self.volume.create_volume(self.context, volume_id)
94 vols.append(volume_id)89 vols.append(volume_id)
95 volume_id = self._create_volume()90 volume_id = self._create_volume()
96 self.assertFailure(self.volume.create_volume(self.context,91 self.assertRaises(db.NoMoreTargets,
97 volume_id),92 self.volume.create_volume,
98 db.NoMoreTargets)93 self.context,
94 volume_id)
99 db.volume_destroy(context.get_admin_context(), volume_id)95 db.volume_destroy(context.get_admin_context(), volume_id)
100 for volume_id in vols:96 for volume_id in vols:
101 yield self.volume.delete_volume(self.context, volume_id)97 self.volume.delete_volume(self.context, volume_id)
10298
103 @defer.inlineCallbacks
104 def test_run_attach_detach_volume(self):99 def test_run_attach_detach_volume(self):
105 """Make sure volume can be attached and detached from instance."""100 """Make sure volume can be attached and detached from instance."""
106 inst = {}101 inst = {}
@@ -115,15 +110,15 @@
115 instance_id = db.instance_create(self.context, inst)['id']110 instance_id = db.instance_create(self.context, inst)['id']
116 mountpoint = "/dev/sdf"111 mountpoint = "/dev/sdf"
117 volume_id = self._create_volume()112 volume_id = self._create_volume()
118 yield self.volume.create_volume(self.context, volume_id)113 self.volume.create_volume(self.context, volume_id)
119 if FLAGS.fake_tests:114 if FLAGS.fake_tests:
120 db.volume_attached(self.context, volume_id, instance_id,115 db.volume_attached(self.context, volume_id, instance_id,
121 mountpoint)116 mountpoint)
122 else:117 else:
123 yield self.compute.attach_volume(self.context,118 self.compute.attach_volume(self.context,
124 instance_id,119 instance_id,
125 volume_id,120 volume_id,
126 mountpoint)121 mountpoint)
127 vol = db.volume_get(context.get_admin_context(), volume_id)122 vol = db.volume_get(context.get_admin_context(), volume_id)
128 self.assertEqual(vol['status'], "in-use")123 self.assertEqual(vol['status'], "in-use")
129 self.assertEqual(vol['attach_status'], "attached")124 self.assertEqual(vol['attach_status'], "attached")
@@ -131,25 +126,26 @@
131 instance_ref = db.volume_get_instance(self.context, volume_id)126 instance_ref = db.volume_get_instance(self.context, volume_id)
132 self.assertEqual(instance_ref['id'], instance_id)127 self.assertEqual(instance_ref['id'], instance_id)
133128
134 self.assertFailure(self.volume.delete_volume(self.context, volume_id),129 self.assertRaises(exception.Error,
135 exception.Error)130 self.volume.delete_volume,
131 self.context,
132 volume_id)
136 if FLAGS.fake_tests:133 if FLAGS.fake_tests:
137 db.volume_detached(self.context, volume_id)134 db.volume_detached(self.context, volume_id)
138 else:135 else:
139 yield self.compute.detach_volume(self.context,136 self.compute.detach_volume(self.context,
140 instance_id,137 instance_id,
141 volume_id)138 volume_id)
142 vol = db.volume_get(self.context, volume_id)139 vol = db.volume_get(self.context, volume_id)
143 self.assertEqual(vol['status'], "available")140 self.assertEqual(vol['status'], "available")
144141
145 yield self.volume.delete_volume(self.context, volume_id)142 self.volume.delete_volume(self.context, volume_id)
146 self.assertRaises(exception.Error,143 self.assertRaises(exception.Error,
147 db.volume_get,144 db.volume_get,
148 self.context,145 self.context,
149 volume_id)146 volume_id)
150 db.instance_destroy(self.context, instance_id)147 db.instance_destroy(self.context, instance_id)
151148
152 @defer.inlineCallbacks
153 def test_concurrent_volumes_get_different_targets(self):149 def test_concurrent_volumes_get_different_targets(self):
154 """Ensure multiple concurrent volumes get different targets."""150 """Ensure multiple concurrent volumes get different targets."""
155 volume_ids = []151 volume_ids = []
@@ -164,15 +160,11 @@
164 self.assert_(iscsi_target not in targets)160 self.assert_(iscsi_target not in targets)
165 targets.append(iscsi_target)161 targets.append(iscsi_target)
166 logging.debug("Target %s allocated", iscsi_target)162 logging.debug("Target %s allocated", iscsi_target)
167 deferreds = []
168 total_slots = FLAGS.iscsi_num_targets163 total_slots = FLAGS.iscsi_num_targets
169 for _index in xrange(total_slots):164 for _index in xrange(total_slots):
170 volume_id = self._create_volume()165 volume_id = self._create_volume()
171 d = self.volume.create_volume(self.context, volume_id)166 d = self.volume.create_volume(self.context, volume_id)
172 d.addCallback(_check)167 _check(d)
173 d.addErrback(self.fail)
174 deferreds.append(d)
175 yield defer.DeferredList(deferreds)
176 for volume_id in volume_ids:168 for volume_id in volume_ids:
177 self.volume.delete_volume(self.context, volume_id)169 self.volume.delete_volume(self.context, volume_id)
178170
179171
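Worth flagging on test_concurrent_volumes_get_different_targets: the old DeferredList at least interleaved the create_volume calls, while the converted loop runs them strictly one after another, so the "concurrent" part of the name is no longer exercised. If that coverage matters, an eventlet GreenPool restores the interleaving; a sketch under the same assumptions as the test above (helper name is hypothetical):

    from eventlet import greenpool

    def _create_concurrently(self, count):
        # spawn each create_volume in its own greenthread; each call blocks
        # only its own greenthread, so the creates interleave much like the
        # old DeferredList version did
        pool = greenpool.GreenPool()
        for _ in xrange(count):
            volume_id = self._create_volume()
            pool.spawn(self.volume.create_volume, self.context, volume_id)
        pool.waitall()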
=== modified file 'nova/utils.py'
--- nova/utils.py 2010-11-23 20:58:46 +0000
+++ nova/utils.py 2010-12-16 20:49:10 +0000
@@ -31,7 +31,8 @@
31import sys31import sys
32from xml.sax import saxutils32from xml.sax import saxutils
3333
34from twisted.internet.threads import deferToThread34from eventlet import event
35from eventlet import greenthread
3536
36from nova import exception37from nova import exception
37from nova import flags38from nova import flags
@@ -75,7 +76,7 @@
7576
7677
77def execute(cmd, process_input=None, addl_env=None, check_exit_code=True):78def execute(cmd, process_input=None, addl_env=None, check_exit_code=True):
78 logging.debug("Running cmd: %s", cmd)79 logging.debug("Running cmd (subprocess): %s", cmd)
79 env = os.environ.copy()80 env = os.environ.copy()
80 if addl_env:81 if addl_env:
81 env.update(addl_env)82 env.update(addl_env)
@@ -95,6 +96,10 @@
95 stdout=stdout,96 stdout=stdout,
96 stderr=stderr,97 stderr=stderr,
97 cmd=cmd)98 cmd=cmd)
99 # NOTE(termie): this appears to be necessary to let the subprocess call
100 # clean something up in between calls, without it two
101 # execute calls in a row hangs the second one
102 greenthread.sleep(0)
98 return result103 return result
99104
100105
@@ -123,13 +128,7 @@
123128
124def runthis(prompt, cmd, check_exit_code=True):129def runthis(prompt, cmd, check_exit_code=True):
125 logging.debug("Running %s" % (cmd))130 logging.debug("Running %s" % (cmd))
126 exit_code = subprocess.call(cmd.split(" "))131 rv, err = execute(cmd, check_exit_code=check_exit_code)
127 logging.debug(prompt % (exit_code))
128 if check_exit_code and exit_code != 0:
129 raise ProcessExecutionError(exit_code=exit_code,
130 stdout=None,
131 stderr=None,
132 cmd=cmd)
133132
134133
135def generate_uid(topic, size=8):134def generate_uid(topic, size=8):
@@ -224,10 +223,41 @@
224 return getattr(backend, key)223 return getattr(backend, key)
225224
226225
227def deferredToThread(f):226class LoopingCall(object):
228 def g(*args, **kwargs):227 def __init__(self, f=None, *args, **kw):
229 return deferToThread(f, *args, **kwargs)228 self.args = args
230 return g229 self.kw = kw
230 self.f = f
231 self._running = False
232
233 def start(self, interval, now=True):
234 self._running = True
235 done = event.Event()
236
237 def _inner():
238 if not now:
239 greenthread.sleep(interval)
240 try:
241 while self._running:
242 self.f(*self.args, **self.kw)
243 greenthread.sleep(interval)
244 except Exception:
245 logging.exception('in looping call')
246 done.send_exception(*sys.exc_info())
247 return
248
249 done.send(True)
250
251 self.done = done
252
253 greenthread.spawn(_inner)
254 return self.done
255
256 def stop(self):
257 self._running = False
258
259 def wait(self):
260 return self.done.wait()
231261
232262
233def xhtml_escape(value):263def xhtml_escape(value):
234264
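The new utils.LoopingCall above covers the slice of twisted's task.LoopingCall that nova actually used: start(interval, now) runs f in a greenthread until stop() is called or f raises, and the Event it returns lets a caller block until the loop has finished. The greenthread.sleep(0) added to execute() is just a cooperative yield; per the inline note, two back-to-back execute() calls hang without it. The drivers below all use LoopingCall in the same shape, roughly (the poll function here is hypothetical):

    from nova import utils

    timer = utils.LoopingCall(f=None)

    def _poll():
        # stop the loop from inside the callback once the condition holds
        if instance_is_running():  # hypothetical predicate
            timer.stop()

    timer.f = _poll
    done = timer.start(interval=0.5, now=True)
    done.wait()  # blocks this greenthread until stop() or an exception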
=== removed file 'nova/validate.py'
--- nova/validate.py 2010-10-21 18:49:51 +0000
+++ nova/validate.py 1970-01-01 00:00:00 +0000
@@ -1,94 +0,0 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2010 United States Government as represented by the
4# Administrator of the National Aeronautics and Space Administration.
5# All Rights Reserved.
6#
7# Licensed under the Apache License, Version 2.0 (the "License"); you may
8# not use this file except in compliance with the License. You may obtain
9# a copy of the License at
10#
11# http://www.apache.org/licenses/LICENSE-2.0
12#
13# Unless required by applicable law or agreed to in writing, software
14# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16# License for the specific language governing permissions and limitations
17# under the License.
18
19"""Decorators for argument validation, courtesy of
20http://rmi.net/~lutz/rangetest.html"""
21
22
23def rangetest(**argchecks):
24 """Validate ranges for both + defaults"""
25
26 def onDecorator(func):
27 """onCall remembers func and argchecks"""
28 import sys
29 code = func.__code__ if sys.version_info[0] == 3 else func.func_code
30 allargs = code.co_varnames[:code.co_argcount]
31 funcname = func.__name__
32
33 def onCall(*pargs, **kargs):
34 # all pargs match first N args by position
35 # the rest must be in kargs or omitted defaults
36 positionals = list(allargs)
37 positionals = positionals[:len(pargs)]
38
39 for (argname, (low, high)) in argchecks.items():
40 # for all args to be checked
41 if argname in kargs:
42 # was passed by name
43 if float(kargs[argname]) < low or \
44 float(kargs[argname]) > high:
45 errmsg = '{0} argument "{1}" not in {2}..{3}'
46 errmsg = errmsg.format(funcname, argname, low, high)
47 raise TypeError(errmsg)
48
49 elif argname in positionals:
50 # was passed by position
51 position = positionals.index(argname)
52 if float(pargs[position]) < low or \
53 float(pargs[position]) > high:
54 errmsg = '{0} argument "{1}" with value of {4} ' \
55 'not in {2}..{3}'
56 errmsg = errmsg.format(funcname, argname, low, high,
57 pargs[position])
58 raise TypeError(errmsg)
59 else:
60 pass
61
62 return func(*pargs, **kargs) # okay: run original call
63 return onCall
64 return onDecorator
65
66
67def typetest(**argchecks):
68 def onDecorator(func):
69 import sys
70 code = func.__code__ if sys.version_info[0] == 3 else func.func_code
71 allargs = code.co_varnames[:code.co_argcount]
72 funcname = func.__name__
73
74 def onCall(*pargs, **kargs):
75 positionals = list(allargs)[:len(pargs)]
76 for (argname, typeof) in argchecks.items():
77 if argname in kargs:
78 if not isinstance(kargs[argname], typeof):
79 errmsg = '{0} argument "{1}" not of type {2}'
80 errmsg = errmsg.format(funcname, argname, typeof)
81 raise TypeError(errmsg)
82 elif argname in positionals:
83 position = positionals.index(argname)
84 if not isinstance(pargs[position], typeof):
85 errmsg = '{0} argument "{1}" with value of {2} ' \
86 'not of type {3}'
87 errmsg = errmsg.format(funcname, argname,
88 pargs[position], typeof)
89 raise TypeError(errmsg)
90 else:
91 pass
92 return func(*pargs, **kargs)
93 return onCall
94 return onDecorator
950
=== modified file 'nova/virt/fake.py'
--- nova/virt/fake.py 2010-11-01 20:13:18 +0000
+++ nova/virt/fake.py 2010-12-16 20:49:10 +0000
@@ -25,8 +25,6 @@
2525
26"""26"""
2727
28from twisted.internet import defer
29
30from nova import exception28from nova import exception
31from nova.compute import power_state29from nova.compute import power_state
3230
@@ -107,7 +105,6 @@
107 fake_instance = FakeInstance()105 fake_instance = FakeInstance()
108 self.instances[instance.name] = fake_instance106 self.instances[instance.name] = fake_instance
109 fake_instance._state = power_state.RUNNING107 fake_instance._state = power_state.RUNNING
110 return defer.succeed(None)
111108
112 def reboot(self, instance):109 def reboot(self, instance):
113 """110 """
@@ -119,19 +116,19 @@
119 The work will be done asynchronously. This function returns a116 This is a no-op for the fake driver; the method returns
120 Deferred that allows the caller to detect when it is complete.117 immediately.
121 """118 """
122 return defer.succeed(None)119 pass
123120
124 def rescue(self, instance):121 def rescue(self, instance):
125 """122 """
126 Rescue the specified instance.123 Rescue the specified instance.
127 """124 """
128 return defer.succeed(None)125 pass
129126
130 def unrescue(self, instance):127 def unrescue(self, instance):
131 """128 """
132 Unrescue the specified instance.129 Unrescue the specified instance.
133 """130 """
134 return defer.succeed(None)131 pass
135132
136 def destroy(self, instance):133 def destroy(self, instance):
137 """134 """
@@ -144,7 +141,6 @@
144 Deferred that allows the caller to detect when it is complete.141 Deferred that allows the caller to detect when it is complete.
145 """142 """
146 del self.instances[instance.name]143 del self.instances[instance.name]
147 return defer.succeed(None)
148144
149 def attach_volume(self, instance_name, device_path, mountpoint):145 def attach_volume(self, instance_name, device_path, mountpoint):
150 """Attach the disk at device_path to the instance at mountpoint"""146 """Attach the disk at device_path to the instance at mountpoint"""
151147
=== modified file 'nova/virt/images.py'
--- nova/virt/images.py 2010-10-22 00:15:21 +0000
+++ nova/virt/images.py 2010-12-16 20:49:10 +0000
@@ -26,7 +26,7 @@
26import urlparse26import urlparse
2727
28from nova import flags28from nova import flags
29from nova import process29from nova import utils
30from nova.auth import manager30from nova.auth import manager
31from nova.auth import signer31from nova.auth import signer
32from nova.objectstore import image32from nova.objectstore import image
@@ -50,7 +50,7 @@
5050
51 # This should probably move somewhere else, like e.g. a download_as51 # This should probably move somewhere else, like e.g. a download_as
52 # method on User objects and at the same time get rewritten to use52 # method on User objects and at the same time get rewritten to use
53 # twisted web client.53 # a web client.
54 headers = {}54 headers = {}
55 headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime())55 headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime())
5656
@@ -63,15 +63,16 @@
6363
64 cmd = ['/usr/bin/curl', '--fail', '--silent', url]64 cmd = ['/usr/bin/curl', '--fail', '--silent', url]
65 for (k, v) in headers.iteritems():65 for (k, v) in headers.iteritems():
66 cmd += ['-H', '%s: %s' % (k, v)]66 cmd += ['-H', '"%s: %s"' % (k, v)]
6767
68 cmd += ['-o', path]68 cmd += ['-o', path]
69 return process.SharedPool().execute(executable=cmd[0], args=cmd[1:])69 cmd_out = ' '.join(cmd)
70 return utils.execute(cmd_out)
7071
7172
72def _fetch_local_image(image, path, user, project):73def _fetch_local_image(image, path, user, project):
73 source = _image_path('%s/image' % image)74 source = _image_path('%s/image' % image)
74 return process.simple_execute('cp %s %s' % (source, path))75 return utils.execute('cp %s %s' % (source, path))
7576
7677
77def _image_path(path):78def _image_path(path):
7879
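One thing to double-check in the curl change above: header values such as Date contain spaces, and the new code wraps them in literal double quotes and then joins everything into a single string for utils.execute. If execute tokenizes that string on whitespace, the -H arguments get split apart and the quotes are passed to curl literally. Keeping the command as an argv list avoids the issue entirely; a sketch of the shape (not what utils.execute accepts today):

    import subprocess

    def _fetch_with_curl(url, path, headers):
        # build argv as a list so header values containing spaces survive
        # intact and no shell-style quoting is needed
        cmd = ['/usr/bin/curl', '--fail', '--silent', url]
        for k, v in headers.iteritems():
            cmd += ['-H', '%s: %s' % (k, v)]
        cmd += ['-o', path]
        subprocess.check_call(cmd)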
=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py 2010-11-17 19:56:42 +0000
+++ nova/virt/libvirt_conn.py 2010-12-16 20:49:10 +0000
@@ -45,16 +45,15 @@
45import os45import os
46import shutil46import shutil
4747
48from eventlet import event
49from eventlet import tpool
50
48import IPy51import IPy
49from twisted.internet import defer
50from twisted.internet import task
51from twisted.internet import threads
5252
53from nova import context53from nova import context
54from nova import db54from nova import db
55from nova import exception55from nova import exception
56from nova import flags56from nova import flags
57from nova import process
58from nova import utils57from nova import utils
59#from nova.api import context58#from nova.api import context
60from nova.auth import manager59from nova.auth import manager
@@ -184,14 +183,12 @@
184 except Exception as _err:183 except Exception as _err:
185 pass184 pass
186 # If the instance is already terminated, we're still happy185 # If the instance is already terminated, we're still happy
187 d = defer.Deferred()186
188 if cleanup:187 done = event.Event()
189 d.addCallback(lambda _: self._cleanup(instance))188
190 # FIXME: What does this comment mean?189 # We'll save this for when we do shutdown,
191 # TODO(termie): short-circuit me for tests
192 # WE'LL save this for when we do shutdown,
193 # instead of destroy - but destroy returns immediately190 # instead of destroy - but destroy returns immediately
194 timer = task.LoopingCall(f=None)191 timer = utils.LoopingCall(f=None)
195192
196 def _wait_for_shutdown():193 def _wait_for_shutdown():
197 try:194 try:
@@ -200,17 +197,26 @@
200 instance['id'], state)197 instance['id'], state)
201 if state == power_state.SHUTDOWN:198 if state == power_state.SHUTDOWN:
202 timer.stop()199 timer.stop()
203 d.callback(None)
204 except Exception:200 except Exception:
205 db.instance_set_state(context.get_admin_context(),201 db.instance_set_state(context.get_admin_context(),
206 instance['id'],202 instance['id'],
207 power_state.SHUTDOWN)203 power_state.SHUTDOWN)
208 timer.stop()204 timer.stop()
209 d.callback(None)
210205
211 timer.f = _wait_for_shutdown206 timer.f = _wait_for_shutdown
212 timer.start(interval=0.5, now=True)207 timer_done = timer.start(interval=0.5, now=True)
213 return d208
209 # NOTE(termie): this is strictly superfluous (we could put the
210 # cleanup code in the timer), but this emulates the
211 # previous model so I am keeping it around until
212 # everything has been vetted a bit
213 def _wait_for_timer():
214 timer_done.wait()
215 self._cleanup(instance)
216 done.send()
217
218 greenthread.spawn(_wait_for_timer)
219 return done
214220
215 def _cleanup(self, instance):221 def _cleanup(self, instance):
216 target = os.path.join(FLAGS.instances_path, instance['name'])222 target = os.path.join(FLAGS.instances_path, instance['name'])
@@ -219,7 +225,6 @@
219 if os.path.exists(target):225 if os.path.exists(target):
220 shutil.rmtree(target)226 shutil.rmtree(target)
221227
222 @defer.inlineCallbacks
223 @exception.wrap_exception228 @exception.wrap_exception
224 def attach_volume(self, instance_name, device_path, mountpoint):229 def attach_volume(self, instance_name, device_path, mountpoint):
225 virt_dom = self._conn.lookupByName(instance_name)230 virt_dom = self._conn.lookupByName(instance_name)
@@ -230,7 +235,6 @@
230 <target dev='%s' bus='virtio'/>235 <target dev='%s' bus='virtio'/>
231 </disk>""" % (device_path, mount_device)236 </disk>""" % (device_path, mount_device)
232 virt_dom.attachDevice(xml)237 virt_dom.attachDevice(xml)
233 yield
234238
235 def _get_disk_xml(self, xml, device):239 def _get_disk_xml(self, xml, device):
236 """Returns the xml for the disk mounted at device"""240 """Returns the xml for the disk mounted at device"""
@@ -252,7 +256,6 @@
252 if doc != None:256 if doc != None:
253 doc.freeDoc()257 doc.freeDoc()
254258
255 @defer.inlineCallbacks
256 @exception.wrap_exception259 @exception.wrap_exception
257 def detach_volume(self, instance_name, mountpoint):260 def detach_volume(self, instance_name, mountpoint):
258 virt_dom = self._conn.lookupByName(instance_name)261 virt_dom = self._conn.lookupByName(instance_name)
@@ -261,17 +264,13 @@
261 if not xml:264 if not xml:
262 raise exception.NotFound("No disk at %s" % mount_device)265 raise exception.NotFound("No disk at %s" % mount_device)
263 virt_dom.detachDevice(xml)266 virt_dom.detachDevice(xml)
264 yield
265267
266 @defer.inlineCallbacks
267 @exception.wrap_exception268 @exception.wrap_exception
268 def reboot(self, instance):269 def reboot(self, instance):
269 yield self.destroy(instance, False)270 self.destroy(instance, False)
270 xml = self.to_xml(instance)271 xml = self.to_xml(instance)
271 yield self._conn.createXML(xml, 0)272 self._conn.createXML(xml, 0)
272273 timer = utils.LoopingCall(f=None)
273 d = defer.Deferred()
274 timer = task.LoopingCall(f=None)
275274
276 def _wait_for_reboot():275 def _wait_for_reboot():
277 try:276 try:
@@ -281,33 +280,28 @@
281 if state == power_state.RUNNING:280 if state == power_state.RUNNING:
282 logging.debug('instance %s: rebooted', instance['name'])281 logging.debug('instance %s: rebooted', instance['name'])
283 timer.stop()282 timer.stop()
284 d.callback(None)
285 except Exception, exn:283 except Exception, exn:
286 logging.error('_wait_for_reboot failed: %s', exn)284 logging.error('_wait_for_reboot failed: %s', exn)
287 db.instance_set_state(context.get_admin_context(),285 db.instance_set_state(context.get_admin_context(),
288 instance['id'],286 instance['id'],
289 power_state.SHUTDOWN)287 power_state.SHUTDOWN)
290 timer.stop()288 timer.stop()
291 d.callback(None)
292289
293 timer.f = _wait_for_reboot290 timer.f = _wait_for_reboot
294 timer.start(interval=0.5, now=True)291 return timer.start(interval=0.5, now=True)
295 yield d
296292
297 @defer.inlineCallbacks
298 @exception.wrap_exception293 @exception.wrap_exception
299 def rescue(self, instance):294 def rescue(self, instance):
300 yield self.destroy(instance, False)295 self.destroy(instance, False)
301296
302 xml = self.to_xml(instance, rescue=True)297 xml = self.to_xml(instance, rescue=True)
303 rescue_images = {'image_id': FLAGS.rescue_image_id,298 rescue_images = {'image_id': FLAGS.rescue_image_id,
304 'kernel_id': FLAGS.rescue_kernel_id,299 'kernel_id': FLAGS.rescue_kernel_id,
305 'ramdisk_id': FLAGS.rescue_ramdisk_id}300 'ramdisk_id': FLAGS.rescue_ramdisk_id}
306 yield self._create_image(instance, xml, 'rescue-', rescue_images)301 self._create_image(instance, xml, 'rescue-', rescue_images)
307 yield self._conn.createXML(xml, 0)302 self._conn.createXML(xml, 0)
308303
309 d = defer.Deferred()304 timer = utils.LoopingCall(f=None)
310 timer = task.LoopingCall(f=None)
311305
312 def _wait_for_rescue():306 def _wait_for_rescue():
313 try:307 try:
@@ -316,27 +310,22 @@
316 if state == power_state.RUNNING:310 if state == power_state.RUNNING:
317 logging.debug('instance %s: rescued', instance['name'])311 logging.debug('instance %s: rescued', instance['name'])
318 timer.stop()312 timer.stop()
319 d.callback(None)
320 except Exception, exn:313 except Exception, exn:
321 logging.error('_wait_for_rescue failed: %s', exn)314 logging.error('_wait_for_rescue failed: %s', exn)
322 db.instance_set_state(None,315 db.instance_set_state(None,
323 instance['id'],316 instance['id'],
324 power_state.SHUTDOWN)317 power_state.SHUTDOWN)
325 timer.stop()318 timer.stop()
326 d.callback(None)
327319
328 timer.f = _wait_for_rescue320 timer.f = _wait_for_rescue
329 timer.start(interval=0.5, now=True)321 return timer.start(interval=0.5, now=True)
330 yield d
331322
332 @defer.inlineCallbacks
333 @exception.wrap_exception323 @exception.wrap_exception
334 def unrescue(self, instance):324 def unrescue(self, instance):
335 # NOTE(vish): Because reboot destroys and recreates an instance using325 # NOTE(vish): Because reboot destroys and recreates an instance using
336 # the normal xml file, we can just call reboot here326 # the normal xml file, we can just call reboot here
337 yield self.reboot(instance)327 self.reboot(instance)
338328
339 @defer.inlineCallbacks
340 @exception.wrap_exception329 @exception.wrap_exception
341 def spawn(self, instance):330 def spawn(self, instance):
342 xml = self.to_xml(instance)331 xml = self.to_xml(instance)
@@ -344,14 +333,12 @@
344 instance['id'],333 instance['id'],
345 power_state.NOSTATE,334 power_state.NOSTATE,
346 'launching')335 'launching')
347 yield NWFilterFirewall(self._conn).\336 NWFilterFirewall(self._conn).setup_nwfilters_for_instance(instance)
348 setup_nwfilters_for_instance(instance)337 self._create_image(instance, xml)
349 yield self._create_image(instance, xml)338 self._conn.createXML(xml, 0)
350 yield self._conn.createXML(xml, 0)
351 logging.debug("instance %s: is running", instance['name'])339 logging.debug("instance %s: is running", instance['name'])
352340
353 local_d = defer.Deferred()341 timer = utils.LoopingCall(f=None)
354 timer = task.LoopingCall(f=None)
355342
356 def _wait_for_boot():343 def _wait_for_boot():
357 try:344 try:
@@ -361,7 +348,6 @@
361 if state == power_state.RUNNING:348 if state == power_state.RUNNING:
362 logging.debug('instance %s: booted', instance['name'])349 logging.debug('instance %s: booted', instance['name'])
363 timer.stop()350 timer.stop()
364 local_d.callback(None)
365 except:351 except:
366 logging.exception('instance %s: failed to boot',352 logging.exception('instance %s: failed to boot',
367 instance['name'])353 instance['name'])
@@ -369,10 +355,9 @@
369 instance['id'],355 instance['id'],
370 power_state.SHUTDOWN)356 power_state.SHUTDOWN)
371 timer.stop()357 timer.stop()
372 local_d.callback(None)358
373 timer.f = _wait_for_boot359 timer.f = _wait_for_boot
374 timer.start(interval=0.5, now=True)360 return timer.start(interval=0.5, now=True)
375 yield local_d
376361
377 def _flush_xen_console(self, virsh_output):362 def _flush_xen_console(self, virsh_output):
378 logging.info('virsh said: %r' % (virsh_output,))363 logging.info('virsh said: %r' % (virsh_output,))
@@ -380,10 +365,9 @@
380365
381 if virsh_output.startswith('/dev/'):366 if virsh_output.startswith('/dev/'):
382 logging.info('cool, it\'s a device')367 logging.info('cool, it\'s a device')
383 d = process.simple_execute("sudo dd if=%s iflag=nonblock" %368 out, err = utils.execute("sudo dd if=%s iflag=nonblock" %
384 virsh_output, check_exit_code=False)369 virsh_output, check_exit_code=False)
385 d.addCallback(lambda r: r[0])370 return out
386 return d
387 else:371 else:
388 return ''372 return ''
389373
@@ -403,21 +387,20 @@
403 def get_console_output(self, instance):387 def get_console_output(self, instance):
404 console_log = os.path.join(FLAGS.instances_path, instance['name'],388 console_log = os.path.join(FLAGS.instances_path, instance['name'],
405 'console.log')389 'console.log')
406 d = process.simple_execute('sudo chown %d %s' % (os.getuid(),390
407 console_log))391 utils.execute('sudo chown %d %s' % (os.getuid(), console_log))
392
408 if FLAGS.libvirt_type == 'xen':393 if FLAGS.libvirt_type == 'xen':
409 # Xen is spethial394 # Xen is special
410 d.addCallback(lambda _:395 virsh_output = utils.execute("virsh ttyconsole %s" %
411 process.simple_execute("virsh ttyconsole %s" %396 instance['name'])
412 instance['name']))397 data = self._flush_xen_console(virsh_output)
413 d.addCallback(self._flush_xen_console)398 fpath = self._append_to_file(data, console_log)
414 d.addCallback(self._append_to_file, console_log)
415 else:399 else:
416 d.addCallback(lambda _: defer.succeed(console_log))400 fpath = console_log
417 d.addCallback(self._dump_file)401
418 return d402 return self._dump_file(fpath)
419403
420 @defer.inlineCallbacks
421 def _create_image(self, inst, libvirt_xml, prefix='', disk_images=None):404 def _create_image(self, inst, libvirt_xml, prefix='', disk_images=None):
422 # syntactic nicety405 # syntactic nicety
423 basepath = lambda fname = '', prefix = prefix: os.path.join(406 basepath = lambda fname = '', prefix = prefix: os.path.join(
@@ -426,8 +409,8 @@
426 prefix + fname)409 prefix + fname)
427410
428 # ensure directories exist and are writable411 # ensure directories exist and are writable
429 yield process.simple_execute('mkdir -p %s' % basepath(prefix=''))412 utils.execute('mkdir -p %s' % basepath(prefix=''))
430 yield process.simple_execute('chmod 0777 %s' % basepath(prefix=''))413 utils.execute('chmod 0777 %s' % basepath(prefix=''))
431414
432 # TODO(termie): these are blocking calls, it would be great415 # TODO(termie): these are blocking calls, it would be great
433 # if they weren't.416 # if they weren't.
@@ -448,19 +431,19 @@
448 'kernel_id': inst['kernel_id'],431 'kernel_id': inst['kernel_id'],
449 'ramdisk_id': inst['ramdisk_id']}432 'ramdisk_id': inst['ramdisk_id']}
450 if not os.path.exists(basepath('disk')):433 if not os.path.exists(basepath('disk')):
451 yield images.fetch(inst.image_id, basepath('disk-raw'), user,434 images.fetch(inst.image_id, basepath('disk-raw'), user,
452 project)435 project)
453 if not os.path.exists(basepath('kernel')):436 if not os.path.exists(basepath('kernel')):
454 yield images.fetch(inst.kernel_id, basepath('kernel'), user,437 images.fetch(inst.kernel_id, basepath('kernel'), user,
455 project)438 project)
456 if not os.path.exists(basepath('ramdisk')):439 if not os.path.exists(basepath('ramdisk')):
457 yield images.fetch(inst.ramdisk_id, basepath('ramdisk'), user,440 images.fetch(inst.ramdisk_id, basepath('ramdisk'), user,
458 project)441 project)
459442
460 execute = lambda cmd, process_input = None, check_exit_code = True: \443 def execute(cmd, process_input=None, check_exit_code=True):
461 process.simple_execute(cmd=cmd,444 return utils.execute(cmd=cmd,
462 process_input=process_input,445 process_input=process_input,
463 check_exit_code=check_exit_code)446 check_exit_code=check_exit_code)
464447
465 key = str(inst['key_data'])448 key = str(inst['key_data'])
466 net = None449 net = None
@@ -482,11 +465,11 @@
482 if net:465 if net:
483 logging.info('instance %s: injecting net into image %s',466 logging.info('instance %s: injecting net into image %s',
484 inst['name'], inst.image_id)467 inst['name'], inst.image_id)
485 yield disk.inject_data(basepath('disk-raw'), key, net,468 disk.inject_data(basepath('disk-raw'), key, net,
486 execute=execute)469 execute=execute)
487470
488 if os.path.exists(basepath('disk')):471 if os.path.exists(basepath('disk')):
489 yield process.simple_execute('rm -f %s' % basepath('disk'))472 utils.execute('rm -f %s' % basepath('disk'))
490473
491 local_bytes = (instance_types.INSTANCE_TYPES[inst.instance_type]474 local_bytes = (instance_types.INSTANCE_TYPES[inst.instance_type]
492 ['local_gb']475 ['local_gb']
@@ -495,12 +478,11 @@
495 resize = True478 resize = True
496 if inst['instance_type'] == 'm1.tiny' or prefix == 'rescue-':479 if inst['instance_type'] == 'm1.tiny' or prefix == 'rescue-':
497 resize = False480 resize = False
498 yield disk.partition(basepath('disk-raw'), basepath('disk'),481 disk.partition(basepath('disk-raw'), basepath('disk'),
499 local_bytes, resize, execute=execute)482 local_bytes, resize, execute=execute)
500483
501 if FLAGS.libvirt_type == 'uml':484 if FLAGS.libvirt_type == 'uml':
502 yield process.simple_execute('sudo chown root %s' %485 utils.execute('sudo chown root %s' % basepath('disk'))
503 basepath('disk'))
504486
505 def to_xml(self, instance, rescue=False):487 def to_xml(self, instance, rescue=False):
506 # TODO(termie): cache?488 # TODO(termie): cache?
@@ -758,15 +740,15 @@
758 def _define_filter(self, xml):740 def _define_filter(self, xml):
759 if callable(xml):741 if callable(xml):
760 xml = xml()742 xml = xml()
761 d = threads.deferToThread(self._conn.nwfilterDefineXML, xml)743
762 return d744 # execute in a native thread and block current greenthread until done
745 tpool.execute(self._conn.nwfilterDefineXML, xml)
763746
764 @staticmethod747 @staticmethod
765 def _get_net_and_mask(cidr):748 def _get_net_and_mask(cidr):
766 net = IPy.IP(cidr)749 net = IPy.IP(cidr)
767 return str(net.net()), str(net.netmask())750 return str(net.net()), str(net.netmask())
768751
769 @defer.inlineCallbacks
770 def setup_nwfilters_for_instance(self, instance):752 def setup_nwfilters_for_instance(self, instance):
771 """753 """
772 Creates an NWFilter for the given instance. In the process,754 Creates an NWFilter for the given instance. In the process,
@@ -774,10 +756,10 @@
774 the base filter are all in place.756 the base filter are all in place.
775 """757 """
776758
777 yield self._define_filter(self.nova_base_ipv4_filter)759 self._define_filter(self.nova_base_ipv4_filter)
778 yield self._define_filter(self.nova_base_ipv6_filter)760 self._define_filter(self.nova_base_ipv6_filter)
779 yield self._define_filter(self.nova_dhcp_filter)761 self._define_filter(self.nova_dhcp_filter)
780 yield self._define_filter(self.nova_base_filter)762 self._define_filter(self.nova_base_filter)
781763
782 nwfilter_xml = "<filter name='nova-instance-%s' chain='root'>\n" \764 nwfilter_xml = "<filter name='nova-instance-%s' chain='root'>\n" \
783 " <filterref filter='nova-base' />\n" % \765 " <filterref filter='nova-base' />\n" % \
@@ -789,20 +771,19 @@
789 net, mask = self._get_net_and_mask(network_ref['cidr'])771 net, mask = self._get_net_and_mask(network_ref['cidr'])
790 project_filter = self.nova_project_filter(instance['project_id'],772 project_filter = self.nova_project_filter(instance['project_id'],
791 net, mask)773 net, mask)
792 yield self._define_filter(project_filter)774 self._define_filter(project_filter)
793775
794 nwfilter_xml += " <filterref filter='nova-project-%s' />\n" % \776 nwfilter_xml += " <filterref filter='nova-project-%s' />\n" % \
795 instance['project_id']777 instance['project_id']
796778
797 for security_group in instance.security_groups:779 for security_group in instance.security_groups:
798 yield self.ensure_security_group_filter(security_group['id'])780 self.ensure_security_group_filter(security_group['id'])
799781
800 nwfilter_xml += " <filterref filter='nova-secgroup-%d' />\n" % \782 nwfilter_xml += " <filterref filter='nova-secgroup-%d' />\n" % \
801 security_group['id']783 security_group['id']
802 nwfilter_xml += "</filter>"784 nwfilter_xml += "</filter>"
803785
804 yield self._define_filter(nwfilter_xml)786 self._define_filter(nwfilter_xml)
805 return
806787
807 def ensure_security_group_filter(self, security_group_id):788 def ensure_security_group_filter(self, security_group_id):
808 return self._define_filter(789 return self._define_filter(
809790
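Two notes on the libvirt conversion above. First, destroy() now calls greenthread.spawn, but the import block in this file only gains 'from eventlet import event' and 'from eventlet import tpool', so 'from eventlet import greenthread' looks like it is missing. Second, for anyone following the new control flow in destroy(): the LoopingCall polls for SHUTDOWN, and a second greenthread waits on the timer's event before running _cleanup, so callers can still wait on the returned event. Condensed, with the error handling stripped out:

    from eventlet import event, greenthread
    from nova import utils

    def destroy(self, instance, cleanup=True):
        done = event.Event()
        timer = utils.LoopingCall(f=None)

        def _wait_for_shutdown():
            state = self.get_info(instance['name'])['state']
            if state == power_state.SHUTDOWN:
                timer.stop()

        timer.f = _wait_for_shutdown
        timer_done = timer.start(interval=0.5, now=True)

        def _wait_for_timer():
            timer_done.wait()  # fires once timer.stop() runs
            self._cleanup(instance)
            done.send()

        greenthread.spawn(_wait_for_timer)
        return done  # callers may done.wait() on this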
=== modified file 'nova/virt/xenapi/network_utils.py'
--- nova/virt/xenapi/network_utils.py 2010-12-06 12:42:34 +0000
+++ nova/virt/xenapi/network_utils.py 2010-12-16 20:49:10 +0000
@@ -20,8 +20,6 @@
20their lookup functions.20their lookup functions.
21"""21"""
2222
23from twisted.internet import defer
24
2523
26class NetworkHelper():24class NetworkHelper():
27 """25 """
@@ -31,14 +29,12 @@
31 return29 return
3230
33 @classmethod31 @classmethod
34 @defer.inlineCallbacks
35 def find_network_with_bridge(cls, session, bridge):32 def find_network_with_bridge(cls, session, bridge):
36 """ Return the network on which the bridge is attached, if found """33 """ Return the network on which the bridge is attached, if found."""
37 expr = 'field "bridge" = "%s"' % bridge34 expr = 'field "bridge" = "%s"' % bridge
38 networks = yield session.call_xenapi('network.get_all_records_where',35 networks = session.call_xenapi('network.get_all_records_where', expr)
39 expr)
40 if len(networks) == 1:36 if len(networks) == 1:
41 defer.returnValue(networks.keys()[0])37 return networks.keys()[0]
42 elif len(networks) > 1:38 elif len(networks) > 1:
43 raise Exception('Found non-unique network for bridge %s' % bridge)39 raise Exception('Found non-unique network for bridge %s' % bridge)
44 else:40 else:
4541
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py 2010-12-09 19:37:30 +0000
+++ nova/virt/xenapi/vm_utils.py 2010-12-16 20:49:10 +0000
@@ -21,19 +21,14 @@
2121
22import logging22import logging
23import urllib23import urllib
24
25from twisted.internet import defer
26from xml.dom import minidom24from xml.dom import minidom
2725
28from nova import flags
29from nova import utils26from nova import utils
30
31from nova.auth.manager import AuthManager27from nova.auth.manager import AuthManager
32from nova.compute import instance_types28from nova.compute import instance_types
33from nova.compute import power_state29from nova.compute import power_state
34from nova.virt import images30from nova.virt import images
3531
36FLAGS = flags.FLAGS
3732
38XENAPI_POWER_STATE = {33XENAPI_POWER_STATE = {
39 'Halted': power_state.SHUTDOWN,34 'Halted': power_state.SHUTDOWN,
@@ -42,6 +37,7 @@
42 'Suspended': power_state.SHUTDOWN, # FIXME37 'Suspended': power_state.SHUTDOWN, # FIXME
43 'Crashed': power_state.CRASHED}38 'Crashed': power_state.CRASHED}
4439
40
45XenAPI = None41XenAPI = None
4642
4743
@@ -64,7 +60,6 @@
64 XenAPI = __import__('XenAPI')60 XenAPI = __import__('XenAPI')
6561
66 @classmethod62 @classmethod
67 @defer.inlineCallbacks
68 def create_vm(cls, session, instance, kernel, ramdisk):63 def create_vm(cls, session, instance, kernel, ramdisk):
69 """Create a VM record. Returns a Deferred that gives the new64 """Create a VM record. Returns a Deferred that gives the new
70 VM reference."""65 VM reference."""
@@ -102,12 +97,11 @@
102 'other_config': {},97 'other_config': {},
103 }98 }
104 logging.debug('Created VM %s...', instance.name)99 logging.debug('Created VM %s...', instance.name)
105 vm_ref = yield session.call_xenapi('VM.create', rec)100 vm_ref = session.call_xenapi('VM.create', rec)
106 logging.debug('Created VM %s as %s.', instance.name, vm_ref)101 logging.debug('Created VM %s as %s.', instance.name, vm_ref)
107 defer.returnValue(vm_ref)102 return vm_ref
108103
109 @classmethod104 @classmethod
110 @defer.inlineCallbacks
111 def create_vbd(cls, session, vm_ref, vdi_ref, userdevice, bootable):105 def create_vbd(cls, session, vm_ref, vdi_ref, userdevice, bootable):
112 """Create a VBD record. Returns a Deferred that gives the new106 """Create a VBD record. Returns a Deferred that gives the new
113 VBD reference."""107 VBD reference."""
@@ -126,13 +120,12 @@
126 vbd_rec['qos_algorithm_params'] = {}120 vbd_rec['qos_algorithm_params'] = {}
127 vbd_rec['qos_supported_algorithms'] = []121 vbd_rec['qos_supported_algorithms'] = []
128 logging.debug('Creating VBD for VM %s, VDI %s ... ', vm_ref, vdi_ref)122 logging.debug('Creating VBD for VM %s, VDI %s ... ', vm_ref, vdi_ref)
129 vbd_ref = yield session.call_xenapi('VBD.create', vbd_rec)123 vbd_ref = session.call_xenapi('VBD.create', vbd_rec)
130 logging.debug('Created VBD %s for VM %s, VDI %s.', vbd_ref, vm_ref,124 logging.debug('Created VBD %s for VM %s, VDI %s.', vbd_ref, vm_ref,
131 vdi_ref)125 vdi_ref)
132 defer.returnValue(vbd_ref)126 return vbd_ref
133127
134 @classmethod128 @classmethod
135 @defer.inlineCallbacks
136 def create_vif(cls, session, vm_ref, network_ref, mac_address):129 def create_vif(cls, session, vm_ref, network_ref, mac_address):
137 """Create a VIF record. Returns a Deferred that gives the new130 """Create a VIF record. Returns a Deferred that gives the new
138 VIF reference."""131 VIF reference."""
@@ -148,13 +141,12 @@
148 vif_rec['qos_algorithm_params'] = {}141 vif_rec['qos_algorithm_params'] = {}
149 logging.debug('Creating VIF for VM %s, network %s ... ', vm_ref,142 logging.debug('Creating VIF for VM %s, network %s ... ', vm_ref,
150 network_ref)143 network_ref)
151 vif_ref = yield session.call_xenapi('VIF.create', vif_rec)144 vif_ref = session.call_xenapi('VIF.create', vif_rec)
152 logging.debug('Created VIF %s for VM %s, network %s.', vif_ref,145 logging.debug('Created VIF %s for VM %s, network %s.', vif_ref,
153 vm_ref, network_ref)146 vm_ref, network_ref)
154 defer.returnValue(vif_ref)147 return vif_ref
155148
156 @classmethod149 @classmethod
157 @defer.inlineCallbacks
158 def fetch_image(cls, session, image, user, project, use_sr):150 def fetch_image(cls, session, image, user, project, use_sr):
159 """use_sr: True to put the image as a VDI in an SR, False to place151 """use_sr: True to put the image as a VDI in an SR, False to place
160 it on dom0's filesystem. The former is for VM disks, the latter for152 it on dom0's filesystem. The former is for VM disks, the latter for
@@ -171,12 +163,11 @@
171 args['password'] = user.secret163 args['password'] = user.secret
172 if use_sr:164 if use_sr:
173 args['add_partition'] = 'true'165 args['add_partition'] = 'true'
174 task = yield session.async_call_plugin('objectstore', fn, args)166 task = session.async_call_plugin('objectstore', fn, args)
175 uuid = yield session.wait_for_task(task)167 uuid = session.wait_for_task(task)
176 defer.returnValue(uuid)168 return uuid
177169
178 @classmethod170 @classmethod
179 @utils.deferredToThread
180 def lookup(cls, session, i):171 def lookup(cls, session, i):
181 """ Look the instance i up, and returns it if available """172 """ Look the instance i up, and returns it if available """
182 return VMHelper.lookup_blocking(session, i)173 return VMHelper.lookup_blocking(session, i)
@@ -194,7 +185,6 @@
194 return vms[0]185 return vms[0]
195186
196 @classmethod187 @classmethod
197 @utils.deferredToThread
198 def lookup_vm_vdis(cls, session, vm):188 def lookup_vm_vdis(cls, session, vm):
199 """ Look for the VDIs that are attached to the VM """189 """ Look for the VDIs that are attached to the VM """
200 return VMHelper.lookup_vm_vdis_blocking(session, vm)190 return VMHelper.lookup_vm_vdis_blocking(session, vm)
201191
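With @utils.deferredToThread removed, lookup and lookup_vm_vdis now run their XenAPI queries directly on the calling greenthread, which blocks the whole process while xmlrpclib waits on the network. If that turns out to matter in practice, the same tpool trick used for call_xenapi in xenapi_conn.py below works here too; a sketch:

    from eventlet import tpool

    @classmethod
    def lookup(cls, session, i):
        """Look the instance i up without blocking other greenthreads."""
        # run the blocking XenAPI query on a native thread via the eventlet
        # threadpool, mirroring what XenAPISession.call_xenapi does
        return tpool.execute(VMHelper.lookup_blocking, session, i)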
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py 2010-12-09 17:08:24 +0000
+++ nova/virt/xenapi/vmops.py 2010-12-16 20:49:10 +0000
@@ -20,8 +20,6 @@
2020
21import logging21import logging
2222
23from twisted.internet import defer
24
25from nova import db23from nova import db
26from nova import context24from nova import context
2725
@@ -49,10 +47,9 @@
49 return [self._session.get_xenapi().VM.get_name_label(vm) \47 return [self._session.get_xenapi().VM.get_name_label(vm) \
50 for vm in self._session.get_xenapi().VM.get_all()]48 for vm in self._session.get_xenapi().VM.get_all()]
5149
52 @defer.inlineCallbacks
53 def spawn(self, instance):50 def spawn(self, instance):
54 """ Create VM instance """51 """ Create VM instance """
55 vm = yield VMHelper.lookup(self._session, instance.name)52 vm = VMHelper.lookup(self._session, instance.name)
56 if vm is not None:53 if vm is not None:
57 raise Exception('Attempted to create non-unique name %s' %54 raise Exception('Attempted to create non-unique name %s' %
58 instance.name)55 instance.name)
@@ -60,66 +57,63 @@
60 bridge = db.project_get_network(context.get_admin_context(),57 bridge = db.project_get_network(context.get_admin_context(),
61 instance.project_id).bridge58 instance.project_id).bridge
62 network_ref = \59 network_ref = \
63 yield NetworkHelper.find_network_with_bridge(self._session, bridge)60 NetworkHelper.find_network_with_bridge(self._session, bridge)
6461
65 user = AuthManager().get_user(instance.user_id)62 user = AuthManager().get_user(instance.user_id)
66 project = AuthManager().get_project(instance.project_id)63 project = AuthManager().get_project(instance.project_id)
67 vdi_uuid = yield VMHelper.fetch_image(self._session,64 vdi_uuid = VMHelper.fetch_image(
68 instance.image_id, user, project, True)65 self._session, instance.image_id, user, project, True)
69 kernel = yield VMHelper.fetch_image(self._session,66 kernel = VMHelper.fetch_image(
70 instance.kernel_id, user, project, False)67 self._session, instance.kernel_id, user, project, False)
71 ramdisk = yield VMHelper.fetch_image(self._session,68 ramdisk = VMHelper.fetch_image(
72 instance.ramdisk_id, user, project, False)69 self._session, instance.ramdisk_id, user, project, False)
73 vdi_ref = yield self._session.call_xenapi('VDI.get_by_uuid', vdi_uuid)70 vdi_ref = self._session.call_xenapi('VDI.get_by_uuid', vdi_uuid)
74 vm_ref = yield VMHelper.create_vm(self._session,71 vm_ref = VMHelper.create_vm(
75 instance, kernel, ramdisk)72 self._session, instance, kernel, ramdisk)
76 yield VMHelper.create_vbd(self._session, vm_ref, vdi_ref, 0, True)73 VMHelper.create_vbd(self._session, vm_ref, vdi_ref, 0, True)
77 if network_ref:74 if network_ref:
78 yield VMHelper.create_vif(self._session, vm_ref,75 VMHelper.create_vif(self._session, vm_ref,
79 network_ref, instance.mac_address)76 network_ref, instance.mac_address)
80 logging.debug('Starting VM %s...', vm_ref)77 logging.debug('Starting VM %s...', vm_ref)
81 yield self._session.call_xenapi('VM.start', vm_ref, False, False)78 self._session.call_xenapi('VM.start', vm_ref, False, False)
82 logging.info('Spawning VM %s created %s.', instance.name,79 logging.info('Spawning VM %s created %s.', instance.name,
83 vm_ref)80 vm_ref)
8481
85 @defer.inlineCallbacks
86 def reboot(self, instance):82 def reboot(self, instance):
87 """ Reboot VM instance """83 """ Reboot VM instance """
88 instance_name = instance.name84 instance_name = instance.name
89 vm = yield VMHelper.lookup(self._session, instance_name)85 vm = VMHelper.lookup(self._session, instance_name)
90 if vm is None:86 if vm is None:
91 raise Exception('instance not present %s' % instance_name)87 raise Exception('instance not present %s' % instance_name)
92 task = yield self._session.call_xenapi('Async.VM.clean_reboot', vm)88 task = self._session.call_xenapi('Async.VM.clean_reboot', vm)
93 yield self._session.wait_for_task(task)89 self._session.wait_for_task(task)
9490
95 @defer.inlineCallbacks
96 def destroy(self, instance):91 def destroy(self, instance):
97 """ Destroy VM instance """92 """ Destroy VM instance """
98 vm = yield VMHelper.lookup(self._session, instance.name)93 vm = VMHelper.lookup(self._session, instance.name)
99 if vm is None:94 if vm is None:
100 # Don't complain, just return. This lets us clean up instances95 # Don't complain, just return. This lets us clean up instances
101 # that have already disappeared from the underlying platform.96 # that have already disappeared from the underlying platform.
102 defer.returnValue(None)97 return
103 # Get the VDIs related to the VM98 # Get the VDIs related to the VM
104 vdis = yield VMHelper.lookup_vm_vdis(self._session, vm)99 vdis = VMHelper.lookup_vm_vdis(self._session, vm)
105 try:100 try:
106 task = yield self._session.call_xenapi('Async.VM.hard_shutdown',101 task = self._session.call_xenapi('Async.VM.hard_shutdown',
107 vm)102 vm)
108 yield self._session.wait_for_task(task)103 self._session.wait_for_task(task)
109 except XenAPI.Failure, exc:104 except XenAPI.Failure, exc:
110 logging.warn(exc)105 logging.warn(exc)
111 # Disk clean-up106 # Disk clean-up
112 if vdis:107 if vdis:
113 for vdi in vdis:108 for vdi in vdis:
114 try:109 try:
115 task = yield self._session.call_xenapi('Async.VDI.destroy',110 task = self._session.call_xenapi('Async.VDI.destroy', vdi)
116 vdi)111 self._session.wait_for_task(task)
117 yield self._session.wait_for_task(task)
118 except XenAPI.Failure, exc:112 except XenAPI.Failure, exc:
119 logging.warn(exc)113 logging.warn(exc)
120 try:114 try:
121 task = yield self._session.call_xenapi('Async.VM.destroy', vm)115 task = self._session.call_xenapi('Async.VM.destroy', vm)
122 yield self._session.wait_for_task(task)116 self._session.wait_for_task(task)
123 except XenAPI.Failure, exc:117 except XenAPI.Failure, exc:
124 logging.warn(exc)118 logging.warn(exc)
125119
@@ -131,14 +125,13 @@
131 rec = self._session.get_xenapi().VM.get_record(vm)125 rec = self._session.get_xenapi().VM.get_record(vm)
132 return VMHelper.compile_info(rec)126 return VMHelper.compile_info(rec)
133127
134 @defer.inlineCallbacks
135 def get_diagnostics(self, instance_id):128 def get_diagnostics(self, instance_id):
136 """Return data about VM diagnostics"""129 """Return data about VM diagnostics"""
137 vm = yield VMHelper.lookup(self._session, instance_id)130 vm = VMHelper.lookup(self._session, instance_id)
138 if vm is None:131 if vm is None:
139 raise Exception("instance not present %s" % instance_id)132 raise Exception("instance not present %s" % instance_id)
140 rec = yield self._session.get_xenapi().VM.get_record(vm)133 rec = self._session.get_xenapi().VM.get_record(vm)
141 defer.returnValue(VMHelper.compile_diagnostics(self._session, rec))134 return VMHelper.compile_diagnostics(self._session, rec)
142135
143 def get_console_output(self, instance):136 def get_console_output(self, instance):
144 """ Return snapshot of console """137 """ Return snapshot of console """
145138
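The changes to vmops.py above are all the same mechanical rewrite: drop @defer.inlineCallbacks, call the helpers directly instead of yielding them, and return values instead of using defer.returnValue(). A minimal sketch of the before/after shape, using a made-up FakeSession in place of the real XenAPI session wrapper:

    # Sketch only: FakeSession is hypothetical and just echoes the call.
    class FakeSession(object):
        def call_xenapi(self, method, *args):
            return '%s:OpaqueRef' % method


    class VMOpsSketch(object):
        """Shape of the converted methods: no decorator, no yields."""

        def __init__(self, session):
            self._session = session

        def reboot(self, instance_name):
            # The Twisted version was roughly:
            #     @defer.inlineCallbacks
            #     def reboot(self, instance):
            #         vm = yield VMHelper.lookup(self._session, instance.name)
            #         task = yield self._session.call_xenapi(
            #             'Async.VM.clean_reboot', vm)
            #         yield self._session.wait_for_task(task)
            # Now each step is a plain call and the result is returned directly.
            return self._session.call_xenapi('Async.VM.clean_reboot',
                                             instance_name)


    print(VMOpsSketch(FakeSession()).reboot('instance-00000001'))

Any blocking these calls do is handled one layer down, in the session object (see the xenapi_conn.py changes below).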
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2010-12-08 20:16:49 +0000
+++ nova/virt/xenapi_conn.py 2010-12-16 20:49:10 +0000
@@ -48,10 +48,11 @@
48"""48"""
4949
50import logging50import logging
51import sys
51import xmlrpclib52import xmlrpclib
5253
53from twisted.internet import defer54from eventlet import event
54from twisted.internet import reactor55from eventlet import tpool
5556
56from nova import utils57from nova import utils
57from nova import flags58from nova import flags
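The hunk below replaces utils.deferredToThread with eventlet's tpool.execute: the blocking XML-RPC call runs in a native worker thread and only the calling green thread is suspended. A small standalone illustration of that pattern (slow_call is a made-up stand-in for a XenAPI round trip):

    import time

    import eventlet
    from eventlet import tpool


    def slow_call(seconds):
        # Stands in for a blocking XML-RPC round trip to the XenAPI host.
        time.sleep(seconds)
        return 'finished after %s second(s)' % seconds


    # The three calls overlap because each blocks a tpool thread, not the hub.
    threads = [eventlet.spawn(tpool.execute, slow_call, 1) for _ in range(3)]
    print([t.wait() for t in threads])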
@@ -159,53 +160,51 @@
159 """ Return the xenapi host """160 """ Return the xenapi host """
160 return self._session.xenapi.session.get_this_host(self._session.handle)161 return self._session.xenapi.session.get_this_host(self._session.handle)
161162
162 @utils.deferredToThread
163 def call_xenapi(self, method, *args):163 def call_xenapi(self, method, *args):
164 """Call the specified XenAPI method on a background thread. Returns164 """Call the specified XenAPI method on a background thread."""
165 a Deferred for the result."""
166 f = self._session.xenapi165 f = self._session.xenapi
167 for m in method.split('.'):166 for m in method.split('.'):
168 f = f.__getattr__(m)167 f = f.__getattr__(m)
169 return f(*args)168 return tpool.execute(f, *args)
170169
171 @utils.deferredToThread
172 def async_call_plugin(self, plugin, fn, args):170 def async_call_plugin(self, plugin, fn, args):
173 """Call Async.host.call_plugin on a background thread. Returns a171 """Call Async.host.call_plugin on a background thread."""
174 Deferred with the task reference."""172 return tpool.execute(_unwrap_plugin_exceptions,
175 return _unwrap_plugin_exceptions(173 self._session.xenapi.Async.host.call_plugin,
176 self._session.xenapi.Async.host.call_plugin,174 self.get_xenapi_host(), plugin, fn, args)
177 self.get_xenapi_host(), plugin, fn, args)
178175
179 def wait_for_task(self, task):176 def wait_for_task(self, task):
180 """Return a Deferred that will give the result of the given task.177 """Return a Deferred that will give the result of the given task.
181 The task is polled until it completes."""178 The task is polled until it completes."""
182 d = defer.Deferred()179
183 reactor.callLater(0, self._poll_task, task, d)180 done = event.Event()
184 return d181 loop = utils.LoopingCall(self._poll_task, task, done)
185182 loop.start(FLAGS.xenapi_task_poll_interval, now=True)
186 @utils.deferredToThread183 rv = done.wait()
187 def _poll_task(self, task, deferred):184 loop.stop()
185 return rv
186
187 def _poll_task(self, task, done):
188 """Poll the given XenAPI task, and fire the given Deferred if we188 """Poll the given XenAPI task, and fire the given Deferred if we
189 get a result."""189 get a result."""
190 try:190 try:
191 #logging.debug('Polling task %s...', task)191 #logging.debug('Polling task %s...', task)
192 status = self._session.xenapi.task.get_status(task)192 status = self._session.xenapi.task.get_status(task)
193 if status == 'pending':193 if status == 'pending':
194 reactor.callLater(FLAGS.xenapi_task_poll_interval,194 return
195 self._poll_task, task, deferred)
196 elif status == 'success':195 elif status == 'success':
197 result = self._session.xenapi.task.get_result(task)196 result = self._session.xenapi.task.get_result(task)
198 logging.info('Task %s status: success. %s', task, result)197 logging.info('Task %s status: success. %s', task, result)
199 deferred.callback(_parse_xmlrpc_value(result))198 done.send(_parse_xmlrpc_value(result))
200 else:199 else:
201 error_info = self._session.xenapi.task.get_error_info(task)200 error_info = self._session.xenapi.task.get_error_info(task)
202 logging.warn('Task %s status: %s. %s', task, status,201 logging.warn('Task %s status: %s. %s', task, status,
203 error_info)202 error_info)
204 deferred.errback(XenAPI.Failure(error_info))203 done.send_exception(XenAPI.Failure(error_info))
205 #logging.debug('Polling task %s done.', task)204 #logging.debug('Polling task %s done.', task)
206 except XenAPI.Failure, exc:205 except XenAPI.Failure, exc:
207 logging.warn(exc)206 logging.warn(exc)
208 deferred.errback(exc)207 done.send_exception(*sys.exc_info())
209208
210209
211def _unwrap_plugin_exceptions(func, *args, **kwargs):210def _unwrap_plugin_exceptions(func, *args, **kwargs):
212211
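wait_for_task/_poll_task above replace the Deferred plus reactor.callLater pair with an eventlet event.Event fed by a polling loop: the poller sends the result (or exception) into the event and the caller simply blocks on done.wait(). A stripped-down sketch of the same mechanism, with a fake status source in place of the XenAPI task API and a plain green thread standing in for nova.utils.LoopingCall:

    import eventlet
    from eventlet import event


    def wait_for(get_status, poll_interval=0.1):
        """Block the calling green thread until get_status() leaves 'pending'."""
        done = event.Event()

        def _poll():
            while True:
                status = get_status()
                if status == 'pending':
                    eventlet.sleep(poll_interval)
                    continue
                if status == 'success':
                    done.send('task result')
                else:
                    done.send_exception(RuntimeError('task failed: %s' % status))
                return

        eventlet.spawn(_poll)
        return done.wait()


    statuses = iter(['pending', 'pending', 'success'])
    print(wait_for(lambda: next(statuses)))

done.wait() re-raises anything passed to send_exception(), which is how a XenAPI.Failure propagates out of _poll_task to the caller.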
=== modified file 'nova/volume/driver.py'
--- nova/volume/driver.py 2010-11-03 22:06:00 +0000
+++ nova/volume/driver.py 2010-12-16 20:49:10 +0000
@@ -22,12 +22,10 @@
2222
23import logging23import logging
24import os24import os
2525import time
26from twisted.internet import defer
2726
28from nova import exception27from nova import exception
29from nova import flags28from nova import flags
30from nova import process
31from nova import utils29from nova import utils
3230
3331
@@ -55,14 +53,13 @@
5553
56class VolumeDriver(object):54class VolumeDriver(object):
57 """Executes commands relating to Volumes."""55 """Executes commands relating to Volumes."""
58 def __init__(self, execute=process.simple_execute,56 def __init__(self, execute=utils.execute,
59 sync_exec=utils.execute, *args, **kwargs):57 sync_exec=utils.execute, *args, **kwargs):
60 # NOTE(vish): db is set by Manager58 # NOTE(vish): db is set by Manager
61 self.db = None59 self.db = None
62 self._execute = execute60 self._execute = execute
63 self._sync_exec = sync_exec61 self._sync_exec = sync_exec
6462
65 @defer.inlineCallbacks
66 def _try_execute(self, command):63 def _try_execute(self, command):
67 # NOTE(vish): Volume commands can partially fail due to timing, but64 # NOTE(vish): Volume commands can partially fail due to timing, but
68 # running them a second time on failure will usually65 # running them a second time on failure will usually
@@ -70,15 +67,15 @@
70 tries = 067 tries = 0
71 while True:68 while True:
72 try:69 try:
73 yield self._execute(command)70 self._execute(command)
74 defer.returnValue(True)71 return True
75 except exception.ProcessExecutionError:72 except exception.ProcessExecutionError:
76 tries = tries + 173 tries = tries + 1
77 if tries >= FLAGS.num_shell_tries:74 if tries >= FLAGS.num_shell_tries:
78 raise75 raise
79 logging.exception("Recovering from a failed execute."76 logging.exception("Recovering from a failed execute."
80 "Try number %s", tries)77 "Try number %s", tries)
81 yield self._execute("sleep %s" % tries ** 2)78 time.sleep(tries ** 2)
8279
83 def check_for_setup_error(self):80 def check_for_setup_error(self):
84 """Returns an error if prerequisites aren't met"""81 """Returns an error if prerequisites aren't met"""
@@ -86,53 +83,45 @@
86 raise exception.Error("volume group %s doesn't exist"83 raise exception.Error("volume group %s doesn't exist"
87 % FLAGS.volume_group)84 % FLAGS.volume_group)
8885
89 @defer.inlineCallbacks
90 def create_volume(self, volume):86 def create_volume(self, volume):
91 """Creates a logical volume."""87 """Creates a logical volume."""
92 if int(volume['size']) == 0:88 if int(volume['size']) == 0:
93 sizestr = '100M'89 sizestr = '100M'
94 else:90 else:
95 sizestr = '%sG' % volume['size']91 sizestr = '%sG' % volume['size']
96 yield self._try_execute("sudo lvcreate -L %s -n %s %s" %92 self._try_execute("sudo lvcreate -L %s -n %s %s" %
97 (sizestr,93 (sizestr,
98 volume['name'],94 volume['name'],
99 FLAGS.volume_group))95 FLAGS.volume_group))
10096
101 @defer.inlineCallbacks
102 def delete_volume(self, volume):97 def delete_volume(self, volume):
103 """Deletes a logical volume."""98 """Deletes a logical volume."""
104 yield self._try_execute("sudo lvremove -f %s/%s" %99 self._try_execute("sudo lvremove -f %s/%s" %
105 (FLAGS.volume_group,100 (FLAGS.volume_group,
106 volume['name']))101 volume['name']))
107102
108 @defer.inlineCallbacks
109 def local_path(self, volume):103 def local_path(self, volume):
110 yield # NOTE(vish): stops deprecation warning104 # NOTE(vish): stops deprecation warning
111 escaped_group = FLAGS.volume_group.replace('-', '--')105 escaped_group = FLAGS.volume_group.replace('-', '--')
112 escaped_name = volume['name'].replace('-', '--')106 escaped_name = volume['name'].replace('-', '--')
113 defer.returnValue("/dev/mapper/%s-%s" % (escaped_group,107 return "/dev/mapper/%s-%s" % (escaped_group, escaped_name)
114 escaped_name))
115108
116 def ensure_export(self, context, volume):109 def ensure_export(self, context, volume):
117 """Synchronously recreates an export for a logical volume."""110 """Synchronously recreates an export for a logical volume."""
118 raise NotImplementedError()111 raise NotImplementedError()
119112
120 @defer.inlineCallbacks
121 def create_export(self, context, volume):113 def create_export(self, context, volume):
122 """Exports the volume."""114 """Exports the volume."""
123 raise NotImplementedError()115 raise NotImplementedError()
124116
125 @defer.inlineCallbacks
126 def remove_export(self, context, volume):117 def remove_export(self, context, volume):
127 """Removes an export for a logical volume."""118 """Removes an export for a logical volume."""
128 raise NotImplementedError()119 raise NotImplementedError()
129120
130 @defer.inlineCallbacks
131 def discover_volume(self, volume):121 def discover_volume(self, volume):
132 """Discover volume on a remote host."""122 """Discover volume on a remote host."""
133 raise NotImplementedError()123 raise NotImplementedError()
134124
135 @defer.inlineCallbacks
136 def undiscover_volume(self, volume):125 def undiscover_volume(self, volume):
137 """Undiscover volume on a remote host."""126 """Undiscover volume on a remote host."""
138 raise NotImplementedError()127 raise NotImplementedError()
@@ -155,14 +144,13 @@
155 dev = {'shelf_id': shelf_id, 'blade_id': blade_id}144 dev = {'shelf_id': shelf_id, 'blade_id': blade_id}
156 self.db.export_device_create_safe(context, dev)145 self.db.export_device_create_safe(context, dev)
157146
158 @defer.inlineCallbacks
159 def create_export(self, context, volume):147 def create_export(self, context, volume):
160 """Creates an export for a logical volume."""148 """Creates an export for a logical volume."""
161 self._ensure_blades(context)149 self._ensure_blades(context)
162 (shelf_id,150 (shelf_id,
163 blade_id) = self.db.volume_allocate_shelf_and_blade(context,151 blade_id) = self.db.volume_allocate_shelf_and_blade(context,
164 volume['id'])152 volume['id'])
165 yield self._try_execute(153 self._try_execute(
166 "sudo vblade-persist setup %s %s %s /dev/%s/%s" %154 "sudo vblade-persist setup %s %s %s /dev/%s/%s" %
167 (shelf_id,155 (shelf_id,
168 blade_id,156 blade_id,
@@ -176,33 +164,30 @@
176 # still works for the other volumes, so we164 # still works for the other volumes, so we
177 # just wait a bit for the current volume to165 # just wait a bit for the current volume to
178 # be ready and ignore any errors.166 # be ready and ignore any errors.
179 yield self._execute("sleep 2")167 time.sleep(2)
180 yield self._execute("sudo vblade-persist auto all",168 self._execute("sudo vblade-persist auto all",
181 check_exit_code=False)169 check_exit_code=False)
182 yield self._execute("sudo vblade-persist start all",170 self._execute("sudo vblade-persist start all",
183 check_exit_code=False)171 check_exit_code=False)
184172
185 @defer.inlineCallbacks
186 def remove_export(self, context, volume):173 def remove_export(self, context, volume):
187 """Removes an export for a logical volume."""174 """Removes an export for a logical volume."""
188 (shelf_id,175 (shelf_id,
189 blade_id) = self.db.volume_get_shelf_and_blade(context,176 blade_id) = self.db.volume_get_shelf_and_blade(context,
190 volume['id'])177 volume['id'])
191 yield self._try_execute("sudo vblade-persist stop %s %s" %178 self._try_execute("sudo vblade-persist stop %s %s" %
192 (shelf_id, blade_id))179 (shelf_id, blade_id))
193 yield self._try_execute("sudo vblade-persist destroy %s %s" %180 self._try_execute("sudo vblade-persist destroy %s %s" %
194 (shelf_id, blade_id))181 (shelf_id, blade_id))
195182
196 @defer.inlineCallbacks
197 def discover_volume(self, _volume):183 def discover_volume(self, _volume):
198 """Discover volume on a remote host."""184 """Discover volume on a remote host."""
199 yield self._execute("sudo aoe-discover")185 self._execute("sudo aoe-discover")
200 yield self._execute("sudo aoe-stat", check_exit_code=False)186 self._execute("sudo aoe-stat", check_exit_code=False)
201187
202 @defer.inlineCallbacks
203 def undiscover_volume(self, _volume):188 def undiscover_volume(self, _volume):
204 """Undiscover volume on a remote host."""189 """Undiscover volume on a remote host."""
205 yield190 pass
206191
207192
208class FakeAOEDriver(AOEDriver):193class FakeAOEDriver(AOEDriver):
@@ -252,7 +237,6 @@
252 target = {'host': host, 'target_num': target_num}237 target = {'host': host, 'target_num': target_num}
253 self.db.iscsi_target_create_safe(context, target)238 self.db.iscsi_target_create_safe(context, target)
254239
255 @defer.inlineCallbacks
256 def create_export(self, context, volume):240 def create_export(self, context, volume):
257 """Creates an export for a logical volume."""241 """Creates an export for a logical volume."""
258 self._ensure_iscsi_targets(context, volume['host'])242 self._ensure_iscsi_targets(context, volume['host'])
@@ -261,61 +245,55 @@
261 volume['host'])245 volume['host'])
262 iscsi_name = "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])246 iscsi_name = "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])
263 volume_path = "/dev/%s/%s" % (FLAGS.volume_group, volume['name'])247 volume_path = "/dev/%s/%s" % (FLAGS.volume_group, volume['name'])
264 yield self._execute("sudo ietadm --op new "248 self._execute("sudo ietadm --op new "
265 "--tid=%s --params Name=%s" %249 "--tid=%s --params Name=%s" %
266 (iscsi_target, iscsi_name))250 (iscsi_target, iscsi_name))
267 yield self._execute("sudo ietadm --op new --tid=%s "251 self._execute("sudo ietadm --op new --tid=%s "
268 "--lun=0 --params Path=%s,Type=fileio" %252 "--lun=0 --params Path=%s,Type=fileio" %
269 (iscsi_target, volume_path))253 (iscsi_target, volume_path))
270254
271 @defer.inlineCallbacks
272 def remove_export(self, context, volume):255 def remove_export(self, context, volume):
273 """Removes an export for a logical volume."""256 """Removes an export for a logical volume."""
274 iscsi_target = self.db.volume_get_iscsi_target_num(context,257 iscsi_target = self.db.volume_get_iscsi_target_num(context,
275 volume['id'])258 volume['id'])
276 yield self._execute("sudo ietadm --op delete --tid=%s "259 self._execute("sudo ietadm --op delete --tid=%s "
277 "--lun=0" % iscsi_target)260 "--lun=0" % iscsi_target)
278 yield self._execute("sudo ietadm --op delete --tid=%s" %261 self._execute("sudo ietadm --op delete --tid=%s" %
279 iscsi_target)262 iscsi_target)
280263
281 @defer.inlineCallbacks
282 def _get_name_and_portal(self, volume_name, host):264 def _get_name_and_portal(self, volume_name, host):
283 """Gets iscsi name and portal from volume name and host."""265 """Gets iscsi name and portal from volume name and host."""
284 (out, _err) = yield self._execute("sudo iscsiadm -m discovery -t "266 (out, _err) = self._execute("sudo iscsiadm -m discovery -t "
285 "sendtargets -p %s" % host)267 "sendtargets -p %s" % host)
286 for target in out.splitlines():268 for target in out.splitlines():
287 if FLAGS.iscsi_ip_prefix in target and volume_name in target:269 if FLAGS.iscsi_ip_prefix in target and volume_name in target:
288 (location, _sep, iscsi_name) = target.partition(" ")270 (location, _sep, iscsi_name) = target.partition(" ")
289 break271 break
290 iscsi_portal = location.split(",")[0]272 iscsi_portal = location.split(",")[0]
291 defer.returnValue((iscsi_name, iscsi_portal))273 return (iscsi_name, iscsi_portal)
292274
293 @defer.inlineCallbacks
294 def discover_volume(self, volume):275 def discover_volume(self, volume):
295 """Discover volume on a remote host."""276 """Discover volume on a remote host."""
296 (iscsi_name,277 iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'],
297 iscsi_portal) = yield self._get_name_and_portal(volume['name'],278 volume['host'])
298 volume['host'])279 self._execute("sudo iscsiadm -m node -T %s -p %s --login" %
299 yield self._execute("sudo iscsiadm -m node -T %s -p %s --login" %280 (iscsi_name, iscsi_portal))
300 (iscsi_name, iscsi_portal))281 self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
301 yield self._execute("sudo iscsiadm -m node -T %s -p %s --op update "282 "-n node.startup -v automatic" %
302 "-n node.startup -v automatic" %283 (iscsi_name, iscsi_portal))
303 (iscsi_name, iscsi_portal))284 return "/dev/iscsi/%s" % volume['name']
304 defer.returnValue("/dev/iscsi/%s" % volume['name'])
305285
306 @defer.inlineCallbacks
307 def undiscover_volume(self, volume):286 def undiscover_volume(self, volume):
308 """Undiscover volume on a remote host."""287 """Undiscover volume on a remote host."""
309 (iscsi_name,288 iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'],
310 iscsi_portal) = yield self._get_name_and_portal(volume['name'],289 volume['host'])
311 volume['host'])290 self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
312 yield self._execute("sudo iscsiadm -m node -T %s -p %s --op update "291 "-n node.startup -v manual" %
313 "-n node.startup -v manual" %292 (iscsi_name, iscsi_portal))
314 (iscsi_name, iscsi_portal))293 self._execute("sudo iscsiadm -m node -T %s -p %s --logout " %
315 yield self._execute("sudo iscsiadm -m node -T %s -p %s --logout " %294 (iscsi_name, iscsi_portal))
316 (iscsi_name, iscsi_portal))295 self._execute("sudo iscsiadm -m node --op delete "
317 yield self._execute("sudo iscsiadm -m node --op delete "296 "--targetname %s" % iscsi_name)
318 "--targetname %s" % iscsi_name)
319297
320298
321class FakeISCSIDriver(ISCSIDriver):299class FakeISCSIDriver(ISCSIDriver):
322300
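Since nova.process is gone, VolumeDriver now takes a plain callable for both execute and sync_exec (both defaulting to utils.execute), which is what lets the fake drivers and the unit tests swap in a stub. A rough sketch of that injection, assuming the executor follows utils.execute's convention of taking a command string plus a check_exit_code keyword and returning an (stdout, stderr) tuple:

    class RecordingExecutor(object):
        """Test double matching the (command, **kwargs) calling convention."""

        def __init__(self):
            self.commands = []

        def __call__(self, command, check_exit_code=True):
            self.commands.append(command)
            return '', ''  # (stdout, stderr), as utils.execute returns


    class AOEDriverSketch(object):
        """Illustrative subset of AOEDriver; not the real class."""

        def __init__(self, execute):
            self._execute = execute

        def discover_volume(self):
            self._execute('sudo aoe-discover')
            self._execute('sudo aoe-stat', check_exit_code=False)


    executor = RecordingExecutor()
    AOEDriverSketch(execute=executor).discover_volume()
    print(executor.commands)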
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py 2010-11-03 21:38:14 +0000
+++ nova/volume/manager.py 2010-12-16 20:49:10 +0000
@@ -45,7 +45,6 @@
45import logging45import logging
46import datetime46import datetime
4747
48from twisted.internet import defer
4948
50from nova import context49from nova import context
51from nova import exception50from nova import exception
@@ -86,7 +85,6 @@
86 for volume in volumes:85 for volume in volumes:
87 self.driver.ensure_export(ctxt, volume)86 self.driver.ensure_export(ctxt, volume)
8887
89 @defer.inlineCallbacks
90 def create_volume(self, context, volume_id):88 def create_volume(self, context, volume_id):
91 """Creates and exports the volume."""89 """Creates and exports the volume."""
92 context = context.elevated()90 context = context.elevated()
@@ -102,19 +100,18 @@
102100
103 logging.debug("volume %s: creating lv of size %sG",101 logging.debug("volume %s: creating lv of size %sG",
104 volume_ref['name'], volume_ref['size'])102 volume_ref['name'], volume_ref['size'])
105 yield self.driver.create_volume(volume_ref)103 self.driver.create_volume(volume_ref)
106104
107 logging.debug("volume %s: creating export", volume_ref['name'])105 logging.debug("volume %s: creating export", volume_ref['name'])
108 yield self.driver.create_export(context, volume_ref)106 self.driver.create_export(context, volume_ref)
109107
110 now = datetime.datetime.utcnow()108 now = datetime.datetime.utcnow()
111 self.db.volume_update(context,109 self.db.volume_update(context,
112 volume_ref['id'], {'status': 'available',110 volume_ref['id'], {'status': 'available',
113 'launched_at': now})111 'launched_at': now})
114 logging.debug("volume %s: created successfully", volume_ref['name'])112 logging.debug("volume %s: created successfully", volume_ref['name'])
115 defer.returnValue(volume_id)113 return volume_id
116114
117 @defer.inlineCallbacks
118 def delete_volume(self, context, volume_id):115 def delete_volume(self, context, volume_id):
119 """Deletes and unexports volume."""116 """Deletes and unexports volume."""
120 context = context.elevated()117 context = context.elevated()
@@ -124,14 +121,13 @@
124 if volume_ref['host'] != self.host:121 if volume_ref['host'] != self.host:
125 raise exception.Error("Volume is not local to this node")122 raise exception.Error("Volume is not local to this node")
126 logging.debug("volume %s: removing export", volume_ref['name'])123 logging.debug("volume %s: removing export", volume_ref['name'])
127 yield self.driver.remove_export(context, volume_ref)124 self.driver.remove_export(context, volume_ref)
128 logging.debug("volume %s: deleting", volume_ref['name'])125 logging.debug("volume %s: deleting", volume_ref['name'])
129 yield self.driver.delete_volume(volume_ref)126 self.driver.delete_volume(volume_ref)
130 self.db.volume_destroy(context, volume_id)127 self.db.volume_destroy(context, volume_id)
131 logging.debug("volume %s: deleted successfully", volume_ref['name'])128 logging.debug("volume %s: deleted successfully", volume_ref['name'])
132 defer.returnValue(True)129 return True
133130
134 @defer.inlineCallbacks
135 def setup_compute_volume(self, context, volume_id):131 def setup_compute_volume(self, context, volume_id):
136 """Setup remote volume on compute host.132 """Setup remote volume on compute host.
137133
@@ -139,17 +135,16 @@
139 context = context.elevated()135 context = context.elevated()
140 volume_ref = self.db.volume_get(context, volume_id)136 volume_ref = self.db.volume_get(context, volume_id)
141 if volume_ref['host'] == self.host and FLAGS.use_local_volumes:137 if volume_ref['host'] == self.host and FLAGS.use_local_volumes:
142 path = yield self.driver.local_path(volume_ref)138 path = self.driver.local_path(volume_ref)
143 else:139 else:
144 path = yield self.driver.discover_volume(volume_ref)140 path = self.driver.discover_volume(volume_ref)
145 defer.returnValue(path)141 return path
146142
147 @defer.inlineCallbacks
148 def remove_compute_volume(self, context, volume_id):143 def remove_compute_volume(self, context, volume_id):
149 """Remove remote volume on compute host."""144 """Remove remote volume on compute host."""
150 context = context.elevated()145 context = context.elevated()
151 volume_ref = self.db.volume_get(context, volume_id)146 volume_ref = self.db.volume_get(context, volume_id)
152 if volume_ref['host'] == self.host and FLAGS.use_local_volumes:147 if volume_ref['host'] == self.host and FLAGS.use_local_volumes:
153 defer.returnValue(True)148 return True
154 else:149 else:
155 yield self.driver.undiscover_volume(volume_ref)150 self.driver.undiscover_volume(volume_ref)
156151
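The manager methods above now return plain values, so whatever calls them (the RPC dispatcher or a unit test) gets the result back directly instead of through a Deferred. A hypothetical caller, with a trivial fake driver standing in for the real AOE/iSCSI drivers:

    class FakeVolumeDriver(object):
        def create_volume(self, volume):
            return True

        def create_export(self, context, volume):
            return True


    def create_volume(driver, context, volume):
        # Mirrors the synchronous shape of VolumeManager.create_volume:
        # each step is a plain call and the volume id is returned directly.
        driver.create_volume(volume)
        driver.create_export(context, volume)
        return volume['id']


    print(create_volume(FakeVolumeDriver(), None,
                        {'id': 42, 'name': 'volume-00000042'}))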
=== modified file 'run_tests.py'
--- run_tests.py 2010-12-11 20:10:24 +0000
+++ run_tests.py 2010-12-16 20:49:10 +0000
@@ -39,6 +39,9 @@
3939
40"""40"""
4141
42import eventlet
43eventlet.monkey_patch()
44
42import __main__45import __main__
43import gettext46import gettext
44import os47import os
@@ -59,15 +62,12 @@
59from nova.tests.flags_unittest import *62from nova.tests.flags_unittest import *
60from nova.tests.misc_unittest import *63from nova.tests.misc_unittest import *
61from nova.tests.network_unittest import *64from nova.tests.network_unittest import *
62from nova.tests.objectstore_unittest import *65#from nova.tests.objectstore_unittest import *
63from nova.tests.process_unittest import *
64from nova.tests.quota_unittest import *66from nova.tests.quota_unittest import *
65from nova.tests.rpc_unittest import *67from nova.tests.rpc_unittest import *
66from nova.tests.scheduler_unittest import *68from nova.tests.scheduler_unittest import *
67from nova.tests.service_unittest import *69from nova.tests.service_unittest import *
68from nova.tests.twistd_unittest import *70from nova.tests.twistd_unittest import *
69from nova.tests.validator_unittest import *
70from nova.tests.virt_unittest import *
71from nova.tests.virt_unittest import *71from nova.tests.virt_unittest import *
72from nova.tests.volume_unittest import *72from nova.tests.volume_unittest import *
7373
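run_tests.py now calls eventlet.monkey_patch() before any other import, so the stdlib modules pulled in by the test modules (socket, threading, time, ...) are already the green versions by the time they load. The same two lines are what any entry point using this stack would start with; a tiny standalone check:

    import eventlet
    eventlet.monkey_patch()

    # Anything imported after this point sees the patched stdlib.
    from eventlet import patcher

    print(patcher.is_monkey_patched('socket'))  # True once patching has run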