Merge lp:~termie/nova/eventlet_merge into lp:~hudson-openstack/nova/trunk
Status: | Merged |
---|---|
Approved by: | Eric Day |
Approved revision: | 400 |
Merged at revision: | 465 |
Proposed branch: | lp:~termie/nova/eventlet_merge |
Merge into: | lp:~hudson-openstack/nova/trunk |
Diff against target: 4171 lines (+651/-1472), 45 files modified

bin/nova-api (+8/-12)
bin/nova-combined (+65/-0)
bin/nova-compute (+6/-9)
bin/nova-network (+6/-9)
bin/nova-scheduler (+6/-9)
bin/nova-volume (+6/-9)
nova/compute/disk.py (+39/-45)
nova/compute/manager.py (+20/-29)
nova/flags.py (+7/-0)
nova/manager.py (+1/-3)
nova/network/manager.py (+1/-3)
nova/objectstore/image.py (+1/-0)
nova/process.py (+0/-209)
nova/rpc.py (+26/-57)
nova/server.py (+0/-151)
nova/service.py (+0/-195)
nova/test.py (+85/-15)
nova/tests/access_unittest.py (+1/-1)
nova/tests/auth_unittest.py (+3/-3)
nova/tests/cloud_unittest.py (+2/-4)
nova/tests/compute_unittest.py (+18/-23)
nova/tests/flags_unittest.py (+1/-1)
nova/tests/misc_unittest.py (+1/-1)
nova/tests/network_unittest.py (+1/-1)
nova/tests/objectstore_unittest.py (+2/-2)
nova/tests/process_unittest.py (+0/-132)
nova/tests/quota_unittest.py (+1/-1)
nova/tests/rpc_unittest.py (+18/-18)
nova/tests/scheduler_unittest.py (+12/-12)
nova/tests/service_unittest.py (+14/-32)
nova/tests/validator_unittest.py (+0/-42)
nova/tests/virt_unittest.py (+5/-6)
nova/tests/volume_unittest.py (+25/-33)
nova/utils.py (+43/-13)
nova/validate.py (+0/-94)
nova/virt/fake.py (+3/-7)
nova/virt/images.py (+6/-5)
nova/virt/libvirt_conn.py (+80/-99)
nova/virt/xenapi/network_utils.py (+3/-7)
nova/virt/xenapi/vm_utils.py (+10/-20)
nova/virt/xenapi/vmops.py (+30/-37)
nova/virt/xenapi_conn.py (+23/-24)
nova/volume/driver.py (+57/-79)
nova/volume/manager.py (+11/-16)
run_tests.py (+4/-4)
To merge this branch: | bzr merge lp:~termie/nova/eventlet_merge |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Jay Pipes | community | | Approve |
Eric Day | community | | Approve |
Anthony Young | community | | Needs Fixing |
Review via email:
Commit message
Description of the change
This branch removes most of the dependencies on twisted and moves towards the plan described by https:/
Tests are currently passing, except for objectstore, which is being skipped because it is heavily reliant on our twisted pieces, and I can run everything using the nova.sh
Additionally, this adds nova-combined, which covers everything except for nova-objectstore. To test it, what I've usually done is run nova.sh as usual:
$ sudo ./eventlet_
and then quit all the services except for nova-objectstore, and then in one of the screens do:
$ ./eventlet_
And then run whatever manual testing you normally run.
Once objectstore has been deprecated and removed, nova-combined can be expected to run the whole nova stack in a single process for testing and dev.
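For readers skimming the preview diff below, the single-process pattern is worth summarizing. The following is a condensed sketch of the new bin/nova-combined as it appears in this branch's diff, with flag parsing and sys.path setup trimmed; the port numbers are simply the defaults defined there.

```python
# Condensed sketch of bin/nova-combined (see the preview diff below).
# Flag handling and path setup are omitted for brevity.
import eventlet
eventlet.monkey_patch()

from nova import api
from nova import service
from nova import wsgi

if __name__ == '__main__':
    # One Service per worker binary, all hosted in the same process.
    compute = service.Service.create(binary='nova-compute')
    network = service.Service.create(binary='nova-network')
    volume = service.Service.create(binary='nova-volume')
    scheduler = service.Service.create(binary='nova-scheduler')
    service.serve(compute, network, volume, scheduler)

    # The OpenStack and EC2 APIs run as WSGI apps in the same process too.
    server = wsgi.Server()
    server.start(api.API('os'), 8774)
    server.start(api.API('ec2'), 8773)
    server.wait()
```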

Jay Pipes (jaypipes) wrote:
I have a feeling the git-bzr bridge is not working properly... I see the same issues with timestamps that I saw when Chris MacGown used the git-bzr bridge. Andy, I assume you are still using git with your git-bzr bridge?
Note that this merge request first *removes* nova/service.py (notice the 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I don't know why this is the case and why it doesn't simply show the diff of the modified nova/service.py.
For Chris' branch, I had to essentially re-construct his patches and submit them from bzr to Launchpad; trying to merge it as-is into trunk resulted in conflicts that could not be resolved. In fact, Chris' git-bzr original branch (lp:~chris-slicehost/glance/implements_swift_backend) still sits in the list of Glance's active branches (https:/
Anyway, I'm saying this because despite me liking some of the changes in this patch (and I certainly welcome the direction it takes), I fear it will not merge correctly. In addition, the patch is (IMHO) unnecessarily large. It would be a lot easier to review (and thus get into trunk quickly) if the patch was broken up and we could simply merge in changes in chunks. This patch does not need to be done all at once.
Just my 2 cents.
-jay

Eric Day (eday) wrote:
Hi Jay,
I stopped by to see the Anso guys on Friday and we did a group code review on this. The list above is what came out of it. I had the same questions about removing/adding some files, and it sounds like it was done this way because temp files were used during development and then moved into place once things worked. Even though the files are named the same, the content is pretty different.
As for the size, I suppose it could have been done in a couple of chunks, but then there would have been duplicate files/classes (like service.py) for both eventlet and twisted. Most of the changes are just removing defers/yields, so I'm okay with the size.
I will approve once the issues above are resolved.
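To make "removing defers/yields" concrete for anyone skimming, here is a condensed before/after illustration adapted from the nova/compute/disk.py hunks in the preview diff below; the function name is invented and the bodies are trimmed to two commands, but this is the pattern the diff applies throughout.

```python
# Before: Twisted style. The function is a Deferred-producing coroutine and
# every blocking command has to be yielded.
from twisted.internet import defer

@defer.inlineCallbacks
def resize_root_twisted(infile, execute):
    yield execute('e2fsck -fp %s' % infile, check_exit_code=False)
    yield execute('resize2fs %s' % infile)


# After: eventlet style. execute() blocks only the current greenthread, so
# the decorator and the yields disappear and ordinary return values work.
def resize_root_eventlet(infile, execute):
    execute('e2fsck -fp %s' % infile, check_exit_code=False)
    execute('resize2fs %s' % infile)
```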

Jay Pipes (jaypipes) wrote:
OK, Eric, we'll see if it merges correctly, then...

termie (termie) wrote:
> I have a feeling the git-bzr bridge is not working properly... I see that same
> issues with timestamps that I saw when Chris MacGown used the git-bzr bridge.
> Andy, I assume you are still using git with your git-bzr bridge?
>
No, git-bzr was not used on this branch; I have largely stopped using it because I need to maintain patches against both bzr and bzr-fastimport to keep it working. I don't know which timestamps you are referring to.
> Note that this merge request first *removes* nova/service.py (notice the
> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I don't
> know why this is the case and why it doesn't simply show the diff of the
> modified nova/service.py.
I had another file for the first half of the development of this branch that I later overwrote service.py with. It is something of a shame to lose the development history of that single file, but it's not worth dealing with bzr's poor tools to attempt to reconstruct it.
> For Chris' branch, I had to essentially re-construct his patches and submit
> them from bzr to Launchpad; trying to merge it as-is into trunk resulted in
> conflicts that could not be resolved. In fact, Chris' git-bzr original branch
> (lp:~chris-slicehost/glance/implements_swift_backend) still sits in the list
> of Glance's active branches (https:/
> being marked as Merged. The reason is because something happens in the
> repository metadata when the git-bzr bridge produces a branch, and somehow
> history is discarded. You can see that timestamps of revisions coming from the
> branches managed with the git-bzr bridge are all 0000-00-00...
>
> Anyway, I'm saying this because despite me liking some of the changes in this
> patch (and I certainly welcome the direction it takes), I fear it will not
> merge correctly. In addition, the patch is (IMHO) unnecessarily large. It
> would be a lot easier to review (and thus get into trunk quickly) if the patch
> was broken up and we could simply merge in changes in chunks. This patch does
> not need to be done all at once.
I also fear it will not merge correctly, simply because a bunch of spurious conflicts were generated while merging from upstream. This patch does need to be done all at once.
> Just my 2 cents.
> -jay

Jay Pipes (jaypipes) wrote:
On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:
>> I have a feeling the git-bzr bridge is not working properly... I see that same
>> issues with timestamps that I saw when Chris MacGown used the git-bzr bridge.
>> Andy, I assume you are still using git with your git-bzr bridge?
>>
>
> Not git-bzr was used on this branch, I have largely stopped using it because I need to maintain patches against both bzr and bzr-fastimport to keep it working. I don't know which timestamps you are referring to.
OK.
>> Note that this merge request first *removes* nova/service.py (notice the
>> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I don't
>> know why this is the case and why it doesn't simply show the diff of the
>> modified nova/service.py.
>
> I had another file for the first half of the development of this branch that I later overwrote service.py with, it is something of a shame to lose the development history of that single file but not worth dealing the bzr's poor tools to attempt to reconstruct it.
Just because you are used to or prefer git doesn't mean that bzr has poor
tooling. If you overwrite a file in git, the same thing would happen.
<snip>
> I also fear it will not merge correctly simply because while merging from upstream a bunch of spurious conflicts were generated. This patch does need to be done all at once.
OK. I don't agree, but I can move forward with the review...
-jay

termie (termie) wrote:
Updated per review.

Josh Kearney (jk0) wrote:
For what it's worth, I'm running this branch in my development environment and it's been great.

Eric Day (eday) wrote:
Is there any reason why the virt_unittests were removed? Also, I know we discussed this, but I'd still like to find a way to get the objectstore unittests running again, even if it means creating the base twisted test classes again just for it.

Sandy Walsh (sandy-walsh) wrote:
Some notes on trying to get this branch to run. I'll update again just to be sure I have the latest.
1. Logging is console-only by default (assuming it's run as a user). I needed to update service.Service to get file logging. Not a biggie.
2. Fails when trying to run a xenserver instance: http://
3. Why does nova-objectstore have to be run the old way and not integrated with nova-combined?
I'll see if I can fix #2 in the meanwhile.

Sandy Walsh (sandy-walsh) wrote:
Oh, it should be noted that nova-combined still needs to take the flagfile... this threw me off while testing.
bin/nova-combined --flagfile=

Sandy Walsh (sandy-walsh) wrote:
And, I was using the latest branch re: the above comments.

termie (termie) wrote:
> Is there any reason why the virt_unittests were removed? Also, I know we
> discussed this, but I'd still like to find a way to get the objectstore
> unittests running again, even if it means creating the base twisted test
> classes again just for it.
It was in there twice. But looking at the diff, apparently bzr has lied to me yet again and somehow had conflicts it didn't mention; sorting those out.

termie (termie) wrote:
Ah, the conflicts are just in the diff view that Launchpad creates, which is still unfortunate.

termie (termie) wrote:
> Some notes on trying to get this branch to run. I'll update again just to be
> sure I have the latest.
>
> 1. logging is console only by default (assuming run as user). Needed to update
> service.Service to get file logging. Not a biggie.
>
> 2. Fails when trying to run a xenserver instance:
> http://
Fixed the import error; I would appreciate it if you could test whether it runs, as I currently don't have any way to test the xen stuff.
> 3. Why does nova-objectstore have to be run the old way and not integrated
> with nova-combined?
It is very reliant on twisted's implementations of things and as it is being deprecated I didn't want to spend time porting it over to eventlet.
>
> I'll see if I can fix #2 in the meanwhile.

termie (termie) wrote:
Oh, and re logging: it isn't supposed to log to a file; that is for your supervisor daemon to handle (init, upstart, and daemontools all expect output on stdout and handle the logging for you).
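A minimal sketch of what that setup implies, using only the standard library (this is an illustration, not the branch's actual logging code): each service logs to stdout and the supervisor captures and rotates the stream.

```python
# Illustrative only: log to stdout and let init/upstart/daemontools own the
# stream; no file handlers are configured in the service itself.
import logging
import sys

logging.basicConfig(stream=sys.stdout,
                    level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s: %(message)s')
logging.getLogger('nova-compute').info('started; supervisor captures stdout')
```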

termie (termie) wrote:
> On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:
> >> I have a feeling the git-bzr bridge is not working properly... I see that
> same
> >> issues with timestamps that I saw when Chris MacGown used the git-bzr
> bridge.
> >> Andy, I assume you are still using git with your git-bzr bridge?
> >>
> >
> > Not git-bzr was used on this branch, I have largely stopped using it because
> I need to maintain patches against both bzr and bzr-fastimport to keep it
> working. I don't know which timestamps you are referring to.
>
> OK.
>
> >> Note that this merge request first *removes* nova/service.py (notice the
> >> 1970-01-01 timestamps, BTW) and then *adds* a file nova/service.py. I
> don't
> >> know why this is the case and why it doesn't simply show the diff of the
> >> modified nova/service.py.
> >
> > I had another file for the first half of the development of this branch that
> I later overwrote service.py with, it is something of a shame to lose the
> development history of that single file but not worth dealing the bzr's poor
> tools to attempt to reconstruct it.
>
> Because you are used to or prefer git doesn't mean that bzr has poor
> tooling. If you overwrite a file in git, the same thing would happen.
>
I was referring to the tools to pull apart and remake the commit to fix the problem (some equivalent of git's rebase -i, which to my knowledge doesn't exist outside of git), not that it was bzr's fault that there is an issue to begin with. And apologies for the jab at bzr; "bzr's poor tools" would better read "without git's tools."
> <snip>
> > I also fear it will not merge correctly simply because while merging from
> upstream a bunch of spurious conflicts were generated. This patch does need to
> be done all at once.
>
> OK. However, I don't agree, but I can move forward with the review...
>
Sorry, some further explanation: it needs to be done all at once because once one call in a stack has switched from using deferreds to eventlet, every other call in that stack has to as well; otherwise things begin to return before initiating actual work, or return incorrect values (usually a deferred where the result of a deferred was required).
> -jay
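A tiny illustration of the failure mode described above (the function names are hypothetical; only the mixing of styles matters): a callee that still returns a Deferred silently hands that Deferred, rather than its result, to a caller that has already been converted to plain eventlet-style code.

```python
# Hypothetical example: a not-yet-converted Twisted callee leaking a Deferred
# into an already-converted eventlet-style caller.
from twisted.internet import defer


@defer.inlineCallbacks
def setup_volume():                      # still Twisted-style
    dev_path = yield defer.succeed('/dev/vdb')
    defer.returnValue(dev_path)


def attach_volume():                     # already converted to eventlet style
    dev_path = setup_volume()            # gets a Deferred, not '/dev/vdb'
    return 'attached at %s' % dev_path   # wrong value, and nothing was awaited


result = attach_volume()
# result is "attached at <Deferred at 0x...>" instead of "attached at /dev/vdb"
```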

termie (termie) wrote:
Yay, it was just a tiny conflict (not sure how it was introduced, as it doesn't seem like it could have run the other way); resolved and merged from upstream.

Eric Day (eday) wrote:
Based on the in-person review we did, and seeing that this branch works (minus a few xen issues we can iron out, as that code is changing too), I'm ok with it. I would still like to see the objectstore unittests enabled again, but if we are deprecating this before release it may not be important.

Jay Pipes (jaypipes) wrote:
Yeah, the conflict was the i18n patch that was merged recently...
sorry about that... looking forward to seeing this work get into trunk
:)
On Wed, Dec 15, 2010 at 2:04 PM, termie <email address hidden> wrote:
> yay, it was just a tiny conflict (not sure how it was introduced as it doesn't seem like it could have run the other way), resolved and merged from upstream
> --
> https:/
> Your team Nova Core is requested to review the proposed merge of lp:~termie/nova/eventlet_merge into lp:nova.
>

Jay Pipes (jaypipes) wrote:
> > On Tue, Dec 14, 2010 at 6:26 PM, termie <email address hidden> wrote:
> I was referring to the tools to pull apart and remake the commit to fix the
> problem, some equivalent of git's rebase -i (which to my knowledge doesn't
> exist outside of git), not that it was bzr's fault that there is an issue to
> begin with. And apologies for the jab at bzr, "bzr's poor tools" would better
> read "without git's tools."
No worries. :)
You may be interested in bzr's rebase plugin ;)
http://
> > <snip>
> > > I also fear it will not merge correctly simply because while merging from
> > upstream a bunch of spurious conflicts were generated. This patch does need
> to
> > be done all at once.
> >
> > OK. However, I don't agree, but I can move forward with the review...
> >
>
> Sorry, some further explanation: it needs to be done all at once because once
> one call in a stack has switched from using deferreds to eventlet every other
> call in that stack has to as well otherwise things begin to return before
> initiating actual work or return incorrect values (usually a deferred where
> the result of a deferred was required).
Gotcha. After looking through the code more thoroughly, I think it's a great improvement. I share the same concerns as Eric re: objectstore, but then again, if I can get Glance done, maybe that won't be much of an issue! :)
-jay
> > -jay

Eric Day (eday) wrote:
There are some pep8 issues with this that you'll need to fix before the merge will succeed. Just run:
pep8 -r bin/* nova
bin/nova-
bin/nova-
bin/nova-
nova/service.
nova/service.
nova/service.
nova/service.
nova/service.
nova/test.py:58:1: E302 expected 2 blank lines, found 1
nova/utils.
nova/utils.
nova/utils.
nova/utils.
nova/compute/
nova/tests/
nova/tests/
nova/virt/
nova/virt/
nova/virt/
nova/virt/
nova/virt/

OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~termie/nova/eventlet_merge into lp:nova failed. Below is the output from the failed tests.
bin/nova-
^
JCR: Trailing whitespace is superfluous.
FBM: Except when it occurs as part of a blank line (i.e. the line is
nothing but whitespace). According to Python docs[1] a line with only
whitespace is considered a blank line, and is to be ignored. However,
matching a blank line to its indentation level avoids mistakenly
pasting code into the standard Python interpreter.
[1] http://
The warning returned varies on whether the line itself is blank, for easier
filtering for those who want to indent their blank lines.
Okay: spam(1)
W291: spam(1)\s
W293: class Foo(object):\n \n bang = 12
bin/nova-
^
JCR: Trailing blank lines are superfluous.
Okay: spam(1)
W391: spam(1)\n
bin/nova-
^
JCR: Trailing whitespace is superfluous.
FBM: Except when it occurs as part of a blank line (i.e. the line is
nothing but whitespace). According to Python docs[1] a line with only
whitespace is considered a blank line, and is to be ignored. However,
matching a blank line to its indentation level avoids mistakenly
pasting code into the standard Python interpreter.
[1] http://
The warning returned varies on whether the line itself is blank, for easier
filtering for those who want to indent their blank lines.
Okay: spam(1)
W291: spam(1)\s
W293: class Foo(object):\n \n bang = 12
Preview Diff
1 | === modified file 'bin/nova-api' |
2 | --- bin/nova-api 2010-12-11 20:10:24 +0000 |
3 | +++ bin/nova-api 2010-12-16 20:49:10 +0000 |
4 | @@ -17,9 +17,8 @@ |
5 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
6 | # See the License for the specific language governing permissions and |
7 | # limitations under the License. |
8 | -""" |
9 | -Nova API daemon. |
10 | -""" |
11 | + |
12 | +"""Starter script for Nova API.""" |
13 | |
14 | import gettext |
15 | import os |
16 | @@ -35,9 +34,11 @@ |
17 | |
18 | gettext.install('nova', unicode=1) |
19 | |
20 | +from nova import api |
21 | from nova import flags |
22 | from nova import utils |
23 | -from nova import server |
24 | +from nova import wsgi |
25 | + |
26 | |
27 | FLAGS = flags.FLAGS |
28 | flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port') |
29 | @@ -46,15 +47,10 @@ |
30 | flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host') |
31 | |
32 | |
33 | -def main(_args): |
34 | - from nova import api |
35 | - from nova import wsgi |
36 | +if __name__ == '__main__': |
37 | + utils.default_flagfile() |
38 | + FLAGS(sys.argv) |
39 | server = wsgi.Server() |
40 | server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host) |
41 | server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host) |
42 | server.wait() |
43 | - |
44 | - |
45 | -if __name__ == '__main__': |
46 | - utils.default_flagfile() |
47 | - server.serve('nova-api', main) |
48 | |
49 | === added file 'bin/nova-combined' |
50 | --- bin/nova-combined 1970-01-01 00:00:00 +0000 |
51 | +++ bin/nova-combined 2010-12-16 20:49:10 +0000 |
52 | @@ -0,0 +1,65 @@ |
53 | +#!/usr/bin/env python |
54 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
55 | + |
56 | +# Copyright 2010 United States Government as represented by the |
57 | +# Administrator of the National Aeronautics and Space Administration. |
58 | +# All Rights Reserved. |
59 | +# |
60 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
61 | +# not use this file except in compliance with the License. You may obtain |
62 | +# a copy of the License at |
63 | +# |
64 | +# http://www.apache.org/licenses/LICENSE-2.0 |
65 | +# |
66 | +# Unless required by applicable law or agreed to in writing, software |
67 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
68 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
69 | +# License for the specific language governing permissions and limitations |
70 | +# under the License. |
71 | + |
72 | +"""Combined starter script for Nova services.""" |
73 | + |
74 | +import eventlet |
75 | +eventlet.monkey_patch() |
76 | + |
77 | +import os |
78 | +import sys |
79 | + |
80 | +# If ../nova/__init__.py exists, add ../ to Python search path, so that |
81 | +# it will override what happens to be installed in /usr/(local/)lib/python... |
82 | +possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), |
83 | + os.pardir, |
84 | + os.pardir)) |
85 | +if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')): |
86 | + sys.path.insert(0, possible_topdir) |
87 | + |
88 | +from nova import api |
89 | +from nova import flags |
90 | +from nova import service |
91 | +from nova import utils |
92 | +from nova import wsgi |
93 | + |
94 | + |
95 | +FLAGS = flags.FLAGS |
96 | +flags.DEFINE_integer('osapi_port', 8774, 'OpenStack API port') |
97 | +flags.DEFINE_string('osapi_host', '0.0.0.0', 'OpenStack API host') |
98 | +flags.DEFINE_integer('ec2api_port', 8773, 'EC2 API port') |
99 | +flags.DEFINE_string('ec2api_host', '0.0.0.0', 'EC2 API host') |
100 | + |
101 | + |
102 | +if __name__ == '__main__': |
103 | + utils.default_flagfile() |
104 | + FLAGS(sys.argv) |
105 | + |
106 | + compute = service.Service.create(binary='nova-compute') |
107 | + network = service.Service.create(binary='nova-network') |
108 | + volume = service.Service.create(binary='nova-volume') |
109 | + scheduler = service.Service.create(binary='nova-scheduler') |
110 | + #objectstore = service.Service.create(binary='nova-objectstore') |
111 | + |
112 | + service.serve(compute, network, volume, scheduler) |
113 | + |
114 | + server = wsgi.Server() |
115 | + server.start(api.API('os'), FLAGS.osapi_port, host=FLAGS.osapi_host) |
116 | + server.start(api.API('ec2'), FLAGS.ec2api_port, host=FLAGS.ec2api_host) |
117 | + server.wait() |
118 | |
119 | === modified file 'bin/nova-compute' |
120 | --- bin/nova-compute 2010-12-11 20:10:24 +0000 |
121 | +++ bin/nova-compute 2010-12-16 20:49:10 +0000 |
122 | @@ -17,9 +17,10 @@ |
123 | # License for the specific language governing permissions and limitations |
124 | # under the License. |
125 | |
126 | -""" |
127 | - Twistd daemon for the nova compute nodes. |
128 | -""" |
129 | +"""Starter script for Nova Compute.""" |
130 | + |
131 | +import eventlet |
132 | +eventlet.monkey_patch() |
133 | |
134 | import gettext |
135 | import os |
136 | @@ -36,13 +37,9 @@ |
137 | gettext.install('nova', unicode=1) |
138 | |
139 | from nova import service |
140 | -from nova import twistd |
141 | from nova import utils |
142 | |
143 | - |
144 | if __name__ == '__main__': |
145 | utils.default_flagfile() |
146 | - twistd.serve(__file__) |
147 | - |
148 | -if __name__ == '__builtin__': |
149 | - application = service.Service.create() # pylint: disable=C0103 |
150 | + service.serve() |
151 | + service.wait() |
152 | |
153 | === modified file 'bin/nova-network' |
154 | --- bin/nova-network 2010-12-14 23:22:03 +0000 |
155 | +++ bin/nova-network 2010-12-16 20:49:10 +0000 |
156 | @@ -17,9 +17,10 @@ |
157 | # License for the specific language governing permissions and limitations |
158 | # under the License. |
159 | |
160 | -""" |
161 | - Twistd daemon for the nova network nodes. |
162 | -""" |
163 | +"""Starter script for Nova Network.""" |
164 | + |
165 | +import eventlet |
166 | +eventlet.monkey_patch() |
167 | |
168 | import gettext |
169 | import os |
170 | @@ -36,13 +37,9 @@ |
171 | gettext.install('nova', unicode=1) |
172 | |
173 | from nova import service |
174 | -from nova import twistd |
175 | from nova import utils |
176 | |
177 | - |
178 | if __name__ == '__main__': |
179 | utils.default_flagfile() |
180 | - twistd.serve(__file__) |
181 | - |
182 | -if __name__ == '__builtin__': |
183 | - application = service.Service.create() # pylint: disable-msg=C0103 |
184 | + service.serve() |
185 | + service.wait() |
186 | |
187 | === modified file 'bin/nova-scheduler' |
188 | --- bin/nova-scheduler 2010-12-14 23:22:03 +0000 |
189 | +++ bin/nova-scheduler 2010-12-16 20:49:10 +0000 |
190 | @@ -17,9 +17,10 @@ |
191 | # License for the specific language governing permissions and limitations |
192 | # under the License. |
193 | |
194 | -""" |
195 | - Twistd daemon for the nova scheduler nodes. |
196 | -""" |
197 | +"""Starter script for Nova Scheduler.""" |
198 | + |
199 | +import eventlet |
200 | +eventlet.monkey_patch() |
201 | |
202 | import gettext |
203 | import os |
204 | @@ -36,13 +37,9 @@ |
205 | gettext.install('nova', unicode=1) |
206 | |
207 | from nova import service |
208 | -from nova import twistd |
209 | from nova import utils |
210 | |
211 | - |
212 | if __name__ == '__main__': |
213 | utils.default_flagfile() |
214 | - twistd.serve(__file__) |
215 | - |
216 | -if __name__ == '__builtin__': |
217 | - application = service.Service.create() |
218 | + service.serve() |
219 | + service.wait() |
220 | |
221 | === modified file 'bin/nova-volume' |
222 | --- bin/nova-volume 2010-12-14 23:22:03 +0000 |
223 | +++ bin/nova-volume 2010-12-16 20:49:10 +0000 |
224 | @@ -17,9 +17,10 @@ |
225 | # License for the specific language governing permissions and limitations |
226 | # under the License. |
227 | |
228 | -""" |
229 | - Twistd daemon for the nova volume nodes. |
230 | -""" |
231 | +"""Starter script for Nova Volume.""" |
232 | + |
233 | +import eventlet |
234 | +eventlet.monkey_patch() |
235 | |
236 | import gettext |
237 | import os |
238 | @@ -36,13 +37,9 @@ |
239 | gettext.install('nova', unicode=1) |
240 | |
241 | from nova import service |
242 | -from nova import twistd |
243 | from nova import utils |
244 | |
245 | - |
246 | if __name__ == '__main__': |
247 | utils.default_flagfile() |
248 | - twistd.serve(__file__) |
249 | - |
250 | -if __name__ == '__builtin__': |
251 | - application = service.Service.create() # pylint: disable-msg=C0103 |
252 | + service.serve() |
253 | + service.wait() |
254 | |
255 | === modified file 'nova/compute/disk.py' |
256 | --- nova/compute/disk.py 2010-11-16 18:49:18 +0000 |
257 | +++ nova/compute/disk.py 2010-12-16 20:49:10 +0000 |
258 | @@ -26,8 +26,6 @@ |
259 | import os |
260 | import tempfile |
261 | |
262 | -from twisted.internet import defer |
263 | - |
264 | from nova import exception |
265 | from nova import flags |
266 | |
267 | @@ -39,7 +37,6 @@ |
268 | 'block_size to use for dd') |
269 | |
270 | |
271 | -@defer.inlineCallbacks |
272 | def partition(infile, outfile, local_bytes=0, resize=True, |
273 | local_type='ext2', execute=None): |
274 | """ |
275 | @@ -64,10 +61,10 @@ |
276 | file_size = os.path.getsize(infile) |
277 | if resize and file_size < FLAGS.minimum_root_size: |
278 | last_sector = FLAGS.minimum_root_size / sector_size - 1 |
279 | - yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
280 | - % (infile, last_sector, sector_size)) |
281 | - yield execute('e2fsck -fp %s' % infile, check_exit_code=False) |
282 | - yield execute('resize2fs %s' % infile) |
283 | + execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
284 | + % (infile, last_sector, sector_size)) |
285 | + execute('e2fsck -fp %s' % infile, check_exit_code=False) |
286 | + execute('resize2fs %s' % infile) |
287 | file_size = FLAGS.minimum_root_size |
288 | elif file_size % sector_size != 0: |
289 | logging.warn("Input partition size not evenly divisible by" |
290 | @@ -86,30 +83,29 @@ |
291 | last_sector = local_last # e |
292 | |
293 | # create an empty file |
294 | - yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
295 | - % (outfile, mbr_last, sector_size)) |
296 | + execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
297 | + % (outfile, mbr_last, sector_size)) |
298 | |
299 | # make mbr partition |
300 | - yield execute('parted --script %s mklabel msdos' % outfile) |
301 | + execute('parted --script %s mklabel msdos' % outfile) |
302 | |
303 | # append primary file |
304 | - yield execute('dd if=%s of=%s bs=%s conv=notrunc,fsync oflag=append' |
305 | - % (infile, outfile, FLAGS.block_size)) |
306 | + execute('dd if=%s of=%s bs=%s conv=notrunc,fsync oflag=append' |
307 | + % (infile, outfile, FLAGS.block_size)) |
308 | |
309 | # make primary partition |
310 | - yield execute('parted --script %s mkpart primary %ds %ds' |
311 | - % (outfile, primary_first, primary_last)) |
312 | + execute('parted --script %s mkpart primary %ds %ds' |
313 | + % (outfile, primary_first, primary_last)) |
314 | |
315 | if local_bytes > 0: |
316 | # make the file bigger |
317 | - yield execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
318 | - % (outfile, last_sector, sector_size)) |
319 | + execute('dd if=/dev/zero of=%s count=1 seek=%d bs=%d' |
320 | + % (outfile, last_sector, sector_size)) |
321 | # make and format local partition |
322 | - yield execute('parted --script %s mkpartfs primary %s %ds %ds' |
323 | - % (outfile, local_type, local_first, local_last)) |
324 | - |
325 | - |
326 | -@defer.inlineCallbacks |
327 | + execute('parted --script %s mkpartfs primary %s %ds %ds' |
328 | + % (outfile, local_type, local_first, local_last)) |
329 | + |
330 | + |
331 | def inject_data(image, key=None, net=None, partition=None, execute=None): |
332 | """Injects a ssh key and optionally net data into a disk image. |
333 | |
334 | @@ -119,26 +115,26 @@ |
335 | If partition is not specified it mounts the image as a single partition. |
336 | |
337 | """ |
338 | - out, err = yield execute('sudo losetup -f --show %s' % image) |
339 | + out, err = execute('sudo losetup -f --show %s' % image) |
340 | if err: |
341 | raise exception.Error('Could not attach image to loopback: %s' % err) |
342 | device = out.strip() |
343 | try: |
344 | if not partition is None: |
345 | # create partition |
346 | - out, err = yield execute('sudo kpartx -a %s' % device) |
347 | + out, err = execute('sudo kpartx -a %s' % device) |
348 | if err: |
349 | raise exception.Error('Failed to load partition: %s' % err) |
350 | mapped_device = '/dev/mapper/%sp%s' % (device.split('/')[-1], |
351 | partition) |
352 | else: |
353 | mapped_device = device |
354 | - out, err = yield execute('sudo tune2fs -c 0 -i 0 %s' % mapped_device) |
355 | + out, err = execute('sudo tune2fs -c 0 -i 0 %s' % mapped_device) |
356 | |
357 | tmpdir = tempfile.mkdtemp() |
358 | try: |
359 | # mount loopback to dir |
360 | - out, err = yield execute( |
361 | + out, err = execute( |
362 | 'sudo mount %s %s' % (mapped_device, tmpdir)) |
363 | if err: |
364 | raise exception.Error('Failed to mount filesystem: %s' % err) |
365 | @@ -146,24 +142,23 @@ |
366 | try: |
367 | if key: |
368 | # inject key file |
369 | - yield _inject_key_into_fs(key, tmpdir, execute=execute) |
370 | + _inject_key_into_fs(key, tmpdir, execute=execute) |
371 | if net: |
372 | - yield _inject_net_into_fs(net, tmpdir, execute=execute) |
373 | + _inject_net_into_fs(net, tmpdir, execute=execute) |
374 | finally: |
375 | # unmount device |
376 | - yield execute('sudo umount %s' % mapped_device) |
377 | + execute('sudo umount %s' % mapped_device) |
378 | finally: |
379 | # remove temporary directory |
380 | - yield execute('rmdir %s' % tmpdir) |
381 | + execute('rmdir %s' % tmpdir) |
382 | if not partition is None: |
383 | # remove partitions |
384 | - yield execute('sudo kpartx -d %s' % device) |
385 | + execute('sudo kpartx -d %s' % device) |
386 | finally: |
387 | # remove loopback |
388 | - yield execute('sudo losetup -d %s' % device) |
389 | - |
390 | - |
391 | -@defer.inlineCallbacks |
392 | + execute('sudo losetup -d %s' % device) |
393 | + |
394 | + |
395 | def _inject_key_into_fs(key, fs, execute=None): |
396 | """Add the given public ssh key to root's authorized_keys. |
397 | |
398 | @@ -171,22 +166,21 @@ |
399 | fs is the path to the base of the filesystem into which to inject the key. |
400 | """ |
401 | sshdir = os.path.join(os.path.join(fs, 'root'), '.ssh') |
402 | - yield execute('sudo mkdir -p %s' % sshdir) # existing dir doesn't matter |
403 | - yield execute('sudo chown root %s' % sshdir) |
404 | - yield execute('sudo chmod 700 %s' % sshdir) |
405 | + execute('sudo mkdir -p %s' % sshdir) # existing dir doesn't matter |
406 | + execute('sudo chown root %s' % sshdir) |
407 | + execute('sudo chmod 700 %s' % sshdir) |
408 | keyfile = os.path.join(sshdir, 'authorized_keys') |
409 | - yield execute('sudo tee -a %s' % keyfile, '\n' + key.strip() + '\n') |
410 | - |
411 | - |
412 | -@defer.inlineCallbacks |
413 | + execute('sudo tee -a %s' % keyfile, '\n' + key.strip() + '\n') |
414 | + |
415 | + |
416 | def _inject_net_into_fs(net, fs, execute=None): |
417 | """Inject /etc/network/interfaces into the filesystem rooted at fs. |
418 | |
419 | net is the contents of /etc/network/interfaces. |
420 | """ |
421 | netdir = os.path.join(os.path.join(fs, 'etc'), 'network') |
422 | - yield execute('sudo mkdir -p %s' % netdir) # existing dir doesn't matter |
423 | - yield execute('sudo chown root:root %s' % netdir) |
424 | - yield execute('sudo chmod 755 %s' % netdir) |
425 | + execute('sudo mkdir -p %s' % netdir) # existing dir doesn't matter |
426 | + execute('sudo chown root:root %s' % netdir) |
427 | + execute('sudo chmod 755 %s' % netdir) |
428 | netfile = os.path.join(netdir, 'interfaces') |
429 | - yield execute('sudo tee %s' % netfile, net) |
430 | + execute('sudo tee %s' % netfile, net) |
431 | |
432 | === modified file 'nova/compute/manager.py' |
433 | --- nova/compute/manager.py 2010-12-02 18:58:13 +0000 |
434 | +++ nova/compute/manager.py 2010-12-16 20:49:10 +0000 |
435 | @@ -37,8 +37,6 @@ |
436 | import datetime |
437 | import logging |
438 | |
439 | -from twisted.internet import defer |
440 | - |
441 | from nova import exception |
442 | from nova import flags |
443 | from nova import manager |
444 | @@ -78,13 +76,11 @@ |
445 | state = power_state.NOSTATE |
446 | self.db.instance_set_state(context, instance_id, state) |
447 | |
448 | - @defer.inlineCallbacks |
449 | @exception.wrap_exception |
450 | def refresh_security_group(self, context, security_group_id, **_kwargs): |
451 | """This call passes stright through to the virtualization driver.""" |
452 | - yield self.driver.refresh_security_group(security_group_id) |
453 | + self.driver.refresh_security_group(security_group_id) |
454 | |
455 | - @defer.inlineCallbacks |
456 | @exception.wrap_exception |
457 | def run_instance(self, context, instance_id, **_kwargs): |
458 | """Launch a new instance with specified options.""" |
459 | @@ -105,7 +101,7 @@ |
460 | 'spawning') |
461 | |
462 | try: |
463 | - yield self.driver.spawn(instance_ref) |
464 | + self.driver.spawn(instance_ref) |
465 | now = datetime.datetime.utcnow() |
466 | self.db.instance_update(context, |
467 | instance_id, |
468 | @@ -119,7 +115,6 @@ |
469 | |
470 | self._update_state(context, instance_id) |
471 | |
472 | - @defer.inlineCallbacks |
473 | @exception.wrap_exception |
474 | def terminate_instance(self, context, instance_id): |
475 | """Terminate an instance on this machine.""" |
476 | @@ -134,12 +129,11 @@ |
477 | self.db.instance_destroy(context, instance_id) |
478 | raise exception.Error('trying to destroy already destroyed' |
479 | ' instance: %s' % instance_id) |
480 | - yield self.driver.destroy(instance_ref) |
481 | + self.driver.destroy(instance_ref) |
482 | |
483 | # TODO(ja): should we keep it in a terminated state for a bit? |
484 | self.db.instance_destroy(context, instance_id) |
485 | |
486 | - @defer.inlineCallbacks |
487 | @exception.wrap_exception |
488 | def reboot_instance(self, context, instance_id): |
489 | """Reboot an instance on this server.""" |
490 | @@ -159,10 +153,9 @@ |
491 | instance_id, |
492 | power_state.NOSTATE, |
493 | 'rebooting') |
494 | - yield self.driver.reboot(instance_ref) |
495 | + self.driver.reboot(instance_ref) |
496 | self._update_state(context, instance_id) |
497 | |
498 | - @defer.inlineCallbacks |
499 | @exception.wrap_exception |
500 | def rescue_instance(self, context, instance_id): |
501 | """Rescue an instance on this server.""" |
502 | @@ -175,10 +168,9 @@ |
503 | instance_id, |
504 | power_state.NOSTATE, |
505 | 'rescuing') |
506 | - yield self.driver.rescue(instance_ref) |
507 | + self.driver.rescue(instance_ref) |
508 | self._update_state(context, instance_id) |
509 | |
510 | - @defer.inlineCallbacks |
511 | @exception.wrap_exception |
512 | def unrescue_instance(self, context, instance_id): |
513 | """Rescue an instance on this server.""" |
514 | @@ -191,7 +183,7 @@ |
515 | instance_id, |
516 | power_state.NOSTATE, |
517 | 'unrescuing') |
518 | - yield self.driver.unrescue(instance_ref) |
519 | + self.driver.unrescue(instance_ref) |
520 | self._update_state(context, instance_id) |
521 | |
522 | @exception.wrap_exception |
523 | @@ -203,7 +195,6 @@ |
524 | |
525 | return self.driver.get_console_output(instance_ref) |
526 | |
527 | - @defer.inlineCallbacks |
528 | @exception.wrap_exception |
529 | def attach_volume(self, context, instance_id, volume_id, mountpoint): |
530 | """Attach a volume to an instance.""" |
531 | @@ -211,12 +202,12 @@ |
532 | logging.debug("instance %s: attaching volume %s to %s", instance_id, |
533 | volume_id, mountpoint) |
534 | instance_ref = self.db.instance_get(context, instance_id) |
535 | - dev_path = yield self.volume_manager.setup_compute_volume(context, |
536 | - volume_id) |
537 | + dev_path = self.volume_manager.setup_compute_volume(context, |
538 | + volume_id) |
539 | try: |
540 | - yield self.driver.attach_volume(instance_ref['name'], |
541 | - dev_path, |
542 | - mountpoint) |
543 | + self.driver.attach_volume(instance_ref['name'], |
544 | + dev_path, |
545 | + mountpoint) |
546 | self.db.volume_attached(context, |
547 | volume_id, |
548 | instance_id, |
549 | @@ -227,12 +218,12 @@ |
550 | # ecxception below. |
551 | logging.exception("instance %s: attach failed %s, removing", |
552 | instance_id, mountpoint) |
553 | - yield self.volume_manager.remove_compute_volume(context, |
554 | - volume_id) |
555 | + self.volume_manager.remove_compute_volume(context, |
556 | + volume_id) |
557 | raise exc |
558 | - defer.returnValue(True) |
559 | - |
560 | - @defer.inlineCallbacks |
561 | + |
562 | + return True |
563 | + |
564 | @exception.wrap_exception |
565 | def detach_volume(self, context, instance_id, volume_id): |
566 | """Detach a volume from an instance.""" |
567 | @@ -246,8 +237,8 @@ |
568 | logging.warn("Detaching volume from unknown instance %s", |
569 | instance_ref['name']) |
570 | else: |
571 | - yield self.driver.detach_volume(instance_ref['name'], |
572 | - volume_ref['mountpoint']) |
573 | - yield self.volume_manager.remove_compute_volume(context, volume_id) |
574 | + self.driver.detach_volume(instance_ref['name'], |
575 | + volume_ref['mountpoint']) |
576 | + self.volume_manager.remove_compute_volume(context, volume_id) |
577 | self.db.volume_detached(context, volume_id) |
578 | - defer.returnValue(True) |
579 | + return True |
580 | |
581 | === modified file 'nova/flags.py' |
582 | --- nova/flags.py 2010-12-03 20:21:18 +0000 |
583 | +++ nova/flags.py 2010-12-16 20:49:10 +0000 |
584 | @@ -159,6 +159,7 @@ |
585 | return str(val) |
586 | raise KeyError(name) |
587 | |
588 | + |
589 | FLAGS = FlagValues() |
590 | gflags.FLAGS = FLAGS |
591 | gflags.DEFINE_flag(gflags.HelpFlag(), FLAGS) |
592 | @@ -183,6 +184,12 @@ |
593 | DEFINE_spaceseplist = _wrapper(gflags.DEFINE_spaceseplist) |
594 | DEFINE_multistring = _wrapper(gflags.DEFINE_multistring) |
595 | DEFINE_multi_int = _wrapper(gflags.DEFINE_multi_int) |
596 | +DEFINE_flag = _wrapper(gflags.DEFINE_flag) |
597 | + |
598 | + |
599 | +HelpFlag = gflags.HelpFlag |
600 | +HelpshortFlag = gflags.HelpshortFlag |
601 | +HelpXMLFlag = gflags.HelpXMLFlag |
602 | |
603 | |
604 | def DECLARE(name, module_string, flag_values=FLAGS): |
605 | |
606 | === modified file 'nova/manager.py' |
607 | --- nova/manager.py 2010-12-01 17:24:39 +0000 |
608 | +++ nova/manager.py 2010-12-16 20:49:10 +0000 |
609 | @@ -55,7 +55,6 @@ |
610 | from nova import flags |
611 | from nova.db import base |
612 | |
613 | -from twisted.internet import defer |
614 | |
615 | FLAGS = flags.FLAGS |
616 | |
617 | @@ -67,10 +66,9 @@ |
618 | self.host = host |
619 | super(Manager, self).__init__(db_driver) |
620 | |
621 | - @defer.inlineCallbacks |
622 | def periodic_tasks(self, context=None): |
623 | """Tasks to be run at a periodic interval""" |
624 | - yield |
625 | + pass |
626 | |
627 | def init_host(self): |
628 | """Do any initialization that needs to be run if this is a standalone |
629 | |
630 | === modified file 'nova/network/manager.py' |
631 | --- nova/network/manager.py 2010-11-23 23:56:26 +0000 |
632 | +++ nova/network/manager.py 2010-12-16 20:49:10 +0000 |
633 | @@ -49,7 +49,6 @@ |
634 | import math |
635 | |
636 | import IPy |
637 | -from twisted.internet import defer |
638 | |
639 | from nova import context |
640 | from nova import db |
641 | @@ -399,10 +398,9 @@ |
642 | instances in its subnet. |
643 | """ |
644 | |
645 | - @defer.inlineCallbacks |
646 | def periodic_tasks(self, context=None): |
647 | """Tasks to be run at a periodic interval.""" |
648 | - yield super(VlanManager, self).periodic_tasks(context) |
649 | + super(VlanManager, self).periodic_tasks(context) |
650 | now = datetime.datetime.utcnow() |
651 | timeout = FLAGS.fixed_ip_disassociate_timeout |
652 | time = now - datetime.timedelta(seconds=timeout) |
653 | |
654 | === modified file 'nova/objectstore/image.py' |
655 | --- nova/objectstore/image.py 2010-12-14 00:20:27 +0000 |
656 | +++ nova/objectstore/image.py 2010-12-16 20:49:10 +0000 |
657 | @@ -267,6 +267,7 @@ |
658 | if err: |
659 | raise exception.Error("Failed to decrypt initialization " |
660 | "vector: %s" % err) |
661 | + |
662 | _out, err = utils.execute( |
663 | 'openssl enc -d -aes-128-cbc -in %s -K %s -iv %s -out %s' |
664 | % (encrypted_filename, key, iv, decrypted_filename), |
665 | |
666 | === removed file 'nova/process.py' |
667 | --- nova/process.py 2010-10-21 18:49:51 +0000 |
668 | +++ nova/process.py 1970-01-01 00:00:00 +0000 |
669 | @@ -1,209 +0,0 @@ |
670 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
671 | - |
672 | -# Copyright 2010 United States Government as represented by the |
673 | -# Administrator of the National Aeronautics and Space Administration. |
674 | -# Copyright 2010 FathomDB Inc. |
675 | -# All Rights Reserved. |
676 | -# |
677 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
678 | -# not use this file except in compliance with the License. You may obtain |
679 | -# a copy of the License at |
680 | -# |
681 | -# http://www.apache.org/licenses/LICENSE-2.0 |
682 | -# |
683 | -# Unless required by applicable law or agreed to in writing, software |
684 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
685 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
686 | -# License for the specific language governing permissions and limitations |
687 | -# under the License. |
688 | - |
689 | -""" |
690 | -Process pool using twisted threading |
691 | -""" |
692 | - |
693 | -import logging |
694 | -import StringIO |
695 | - |
696 | -from twisted.internet import defer |
697 | -from twisted.internet import error |
698 | -from twisted.internet import protocol |
699 | -from twisted.internet import reactor |
700 | - |
701 | -from nova import flags |
702 | -from nova.exception import ProcessExecutionError |
703 | - |
704 | -FLAGS = flags.FLAGS |
705 | -flags.DEFINE_integer('process_pool_size', 4, |
706 | - 'Number of processes to use in the process pool') |
707 | - |
708 | - |
709 | -# This is based on _BackRelay from twister.internal.utils, but modified to |
710 | -# capture both stdout and stderr, without odd stderr handling, and also to |
711 | -# handle stdin |
712 | -class BackRelayWithInput(protocol.ProcessProtocol): |
713 | - """ |
714 | - Trivial protocol for communicating with a process and turning its output |
715 | - into the result of a L{Deferred}. |
716 | - |
717 | - @ivar deferred: A L{Deferred} which will be called back with all of stdout |
718 | - and all of stderr as well (as a tuple). C{terminate_on_stderr} is true |
719 | - and any bytes are received over stderr, this will fire with an |
720 | - L{_ProcessExecutionError} instance and the attribute will be set to |
721 | - C{None}. |
722 | - |
723 | - @ivar onProcessEnded: If C{terminate_on_stderr} is false and bytes are |
724 | - received over stderr, this attribute will refer to a L{Deferred} which |
725 | - will be called back when the process ends. This C{Deferred} is also |
726 | - associated with the L{_ProcessExecutionError} which C{deferred} fires |
727 | - with earlier in this case so that users can determine when the process |
728 | - has actually ended, in addition to knowing when bytes have been |
729 | - received via stderr. |
730 | - """ |
731 | - |
732 | - def __init__(self, deferred, cmd, started_deferred=None, |
733 | - terminate_on_stderr=False, check_exit_code=True, |
734 | - process_input=None): |
735 | - self.deferred = deferred |
736 | - self.cmd = cmd |
737 | - self.stdout = StringIO.StringIO() |
738 | - self.stderr = StringIO.StringIO() |
739 | - self.started_deferred = started_deferred |
740 | - self.terminate_on_stderr = terminate_on_stderr |
741 | - self.check_exit_code = check_exit_code |
742 | - self.process_input = process_input |
743 | - self.on_process_ended = None |
744 | - |
745 | - def _build_execution_error(self, exit_code=None): |
746 | - return ProcessExecutionError(cmd=self.cmd, |
747 | - exit_code=exit_code, |
748 | - stdout=self.stdout.getvalue(), |
749 | - stderr=self.stderr.getvalue()) |
750 | - |
751 | - def errReceived(self, text): |
752 | - self.stderr.write(text) |
753 | - if self.terminate_on_stderr and (self.deferred is not None): |
754 | - self.on_process_ended = defer.Deferred() |
755 | - self.deferred.errback(self._build_execution_error()) |
756 | - self.deferred = None |
757 | - self.transport.loseConnection() |
758 | - |
759 | - def outReceived(self, text): |
760 | - self.stdout.write(text) |
761 | - |
762 | - def processEnded(self, reason): |
763 | - if self.deferred is not None: |
764 | - stdout, stderr = self.stdout.getvalue(), self.stderr.getvalue() |
765 | - exit_code = reason.value.exitCode |
766 | - if self.check_exit_code and exit_code != 0: |
767 | - self.deferred.errback(self._build_execution_error(exit_code)) |
768 | - else: |
769 | - try: |
770 | - if self.check_exit_code: |
771 | - reason.trap(error.ProcessDone) |
772 | - self.deferred.callback((stdout, stderr)) |
773 | - except: |
774 | - # NOTE(justinsb): This logic is a little suspicious to me. |
775 | - # If the callback throws an exception, then errback will |
776 | - # be called also. However, this is what the unit tests |
777 | - # test for. |
778 | - exec_error = self._build_execution_error(exit_code) |
779 | - self.deferred.errback(exec_error) |
780 | - elif self.on_process_ended is not None: |
781 | - self.on_process_ended.errback(reason) |
782 | - |
783 | - def connectionMade(self): |
784 | - if self.started_deferred: |
785 | - self.started_deferred.callback(self) |
786 | - if self.process_input: |
787 | - self.transport.write(str(self.process_input)) |
788 | - self.transport.closeStdin() |
789 | - |
790 | - |
791 | -def get_process_output(executable, args=None, env=None, path=None, |
792 | - process_reactor=None, check_exit_code=True, |
793 | - process_input=None, started_deferred=None, |
794 | - terminate_on_stderr=False): |
795 | - if process_reactor is None: |
796 | - process_reactor = reactor |
797 | - args = args and args or () |
798 | - env = env and env and {} |
799 | - deferred = defer.Deferred() |
800 | - cmd = executable |
801 | - if args: |
802 | - cmd = " ".join([cmd] + args) |
803 | - logging.debug("Running cmd: %s", cmd) |
804 | - process_handler = BackRelayWithInput( |
805 | - deferred, |
806 | - cmd, |
807 | - started_deferred=started_deferred, |
808 | - check_exit_code=check_exit_code, |
809 | - process_input=process_input, |
810 | - terminate_on_stderr=terminate_on_stderr) |
811 | - # NOTE(vish): commands come in as unicode, but self.executes needs |
812 | - # strings or process.spawn raises a deprecation warning |
813 | - executable = str(executable) |
814 | - if not args is None: |
815 | - args = [str(x) for x in args] |
816 | - process_reactor.spawnProcess(process_handler, executable, |
817 | - (executable,) + tuple(args), env, path) |
818 | - return deferred |
819 | - |
820 | - |
821 | -class ProcessPool(object): |
822 | - """ A simple process pool implementation using Twisted's Process bits. |
823 | - |
824 | - This is pretty basic right now, but hopefully the API will be the correct |
825 | - one so that it can be optimized later. |
826 | - """ |
827 | - def __init__(self, size=None): |
828 | - self.size = size and size or FLAGS.process_pool_size |
829 | - self._pool = defer.DeferredSemaphore(self.size) |
830 | - |
831 | - def simple_execute(self, cmd, **kw): |
832 | - """ Weak emulation of the old utils.execute() function. |
833 | - |
834 | - This only exists as a way to quickly move old execute methods to |
835 | - this new style of code. |
836 | - |
837 | - NOTE(termie): This will break on args with spaces in them. |
838 | - """ |
839 | - parsed = cmd.split(' ') |
840 | - executable, args = parsed[0], parsed[1:] |
841 | - return self.execute(executable, args, **kw) |
842 | - |
843 | - def execute(self, *args, **kw): |
844 | - deferred = self._pool.acquire() |
845 | - |
846 | - def _associate_process(proto): |
847 | - deferred.process = proto.transport |
848 | - return proto.transport |
849 | - |
850 | - started = defer.Deferred() |
851 | - started.addCallback(_associate_process) |
852 | - kw.setdefault('started_deferred', started) |
853 | - |
854 | - deferred.process = None |
855 | - deferred.started = started |
856 | - |
857 | - deferred.addCallback(lambda _: get_process_output(*args, **kw)) |
858 | - deferred.addBoth(self._release) |
859 | - return deferred |
860 | - |
861 | - def _release(self, retval=None): |
862 | - self._pool.release() |
863 | - return retval |
864 | - |
865 | - |
866 | -class SharedPool(object): |
867 | - _instance = None |
868 | - |
869 | - def __init__(self): |
870 | - if SharedPool._instance is None: |
871 | - self.__class__._instance = ProcessPool() |
872 | - |
873 | - def __getattr__(self, key): |
874 | - return getattr(self._instance, key) |
875 | - |
876 | - |
877 | -def simple_execute(cmd, **kwargs): |
878 | - return SharedPool().simple_execute(cmd, **kwargs) |
879 | |
880 | === modified file 'nova/rpc.py' |
881 | --- nova/rpc.py 2010-11-22 22:48:44 +0000 |
882 | +++ nova/rpc.py 2010-12-16 20:49:10 +0000 |
883 | @@ -25,18 +25,18 @@ |
884 | import logging |
885 | import sys |
886 | import time |
887 | +import traceback |
888 | import uuid |
889 | |
890 | from carrot import connection as carrot_connection |
891 | from carrot import messaging |
892 | from eventlet import greenthread |
893 | -from twisted.internet import defer |
894 | -from twisted.internet import task |
895 | |
896 | +from nova import context |
897 | from nova import exception |
898 | from nova import fakerabbit |
899 | from nova import flags |
900 | -from nova import context |
901 | +from nova import utils |
902 | |
903 | |
904 | FLAGS = flags.FLAGS |
905 | @@ -128,17 +128,9 @@ |
906 | |
907 | def attach_to_eventlet(self): |
908 | """Only needed for unit tests!""" |
909 | - def fetch_repeatedly(): |
910 | - while True: |
911 | - self.fetch(enable_callbacks=True) |
912 | - greenthread.sleep(0.1) |
913 | - greenthread.spawn(fetch_repeatedly) |
914 | - |
915 | - def attach_to_twisted(self): |
916 | - """Attach a callback to twisted that fires 10 times a second""" |
917 | - loop = task.LoopingCall(self.fetch, enable_callbacks=True) |
918 | - loop.start(interval=0.1) |
919 | - return loop |
920 | + timer = utils.LoopingCall(self.fetch, enable_callbacks=True) |
921 | + timer.start(0.1) |
922 | + return timer |
923 | |
924 | |
925 | class Publisher(messaging.Publisher): |
926 | @@ -196,11 +188,13 @@ |
927 | node_func = getattr(self.proxy, str(method)) |
928 | node_args = dict((str(k), v) for k, v in args.iteritems()) |
929 | # NOTE(vish): magic is fun! |
930 | - # pylint: disable-msg=W0142 |
931 | - d = defer.maybeDeferred(node_func, context=ctxt, **node_args) |
932 | - if msg_id: |
933 | - d.addCallback(lambda rval: msg_reply(msg_id, rval, None)) |
934 | - d.addErrback(lambda e: msg_reply(msg_id, None, e)) |
935 | + try: |
936 | + rval = node_func(context=ctxt, **node_args) |
937 | + if msg_id: |
938 | + msg_reply(msg_id, rval, None) |
939 | + except Exception as e: |
940 | + if msg_id: |
941 | + msg_reply(msg_id, None, sys.exc_info()) |
942 | return |
943 | |
944 | |
945 | @@ -242,13 +236,15 @@ |
946 | def msg_reply(msg_id, reply=None, failure=None): |
947 | """Sends a reply or an error on the channel signified by msg_id |
948 | |
949 | - failure should be a twisted failure object""" |
950 | + failure should be a sys.exc_info() tuple. |
951 | + |
952 | + """ |
953 | if failure: |
954 | - message = failure.getErrorMessage() |
955 | - traceback = failure.getTraceback() |
956 | + message = str(failure[1]) |
957 | + tb = traceback.format_exception(*failure) |
958 | logging.error("Returning exception %s to caller", message) |
959 | - logging.error(traceback) |
960 | - failure = (failure.type.__name__, str(failure.value), traceback) |
961 | + logging.error(tb) |
962 | + failure = (failure[0].__name__, str(failure[1]), tb) |
963 | conn = Connection.instance() |
964 | publisher = DirectPublisher(connection=conn, msg_id=msg_id) |
965 | try: |
966 | @@ -313,7 +309,6 @@ |
967 | _pack_context(msg, context) |
968 | |
969 | class WaitMessage(object): |
970 | - |
971 | def __call__(self, data, message): |
972 | """Acks message and sets result.""" |
973 | message.ack() |
974 | @@ -337,41 +332,15 @@ |
975 | except StopIteration: |
976 | pass |
977 | consumer.close() |
978 | + # NOTE(termie): this is a little bit of a change from the original |
979 | + # non-eventlet code where returning a Failure |
980 | + # instance from a deferred call is very similar to |
981 | + # raising an exception |
982 | + if isinstance(wait_msg.result, Exception): |
983 | + raise wait_msg.result |
984 | return wait_msg.result |
985 | |
986 | |
987 | -def call_twisted(context, topic, msg): |
988 | - """Sends a message on a topic and wait for a response""" |
989 | - LOG.debug("Making asynchronous call...") |
990 | - msg_id = uuid.uuid4().hex |
991 | - msg.update({'_msg_id': msg_id}) |
992 | - LOG.debug("MSG_ID is %s" % (msg_id)) |
993 | - _pack_context(msg, context) |
994 | - |
995 | - conn = Connection.instance() |
996 | - d = defer.Deferred() |
997 | - consumer = DirectConsumer(connection=conn, msg_id=msg_id) |
998 | - |
999 | - def deferred_receive(data, message): |
1000 | - """Acks message and callbacks or errbacks""" |
1001 | - message.ack() |
1002 | - if data['failure']: |
1003 | - return d.errback(RemoteError(*data['failure'])) |
1004 | - else: |
1005 | - return d.callback(data['result']) |
1006 | - |
1007 | - consumer.register_callback(deferred_receive) |
1008 | - injected = consumer.attach_to_twisted() |
1009 | - |
1010 | - # clean up after the injected listened and return x |
1011 | - d.addCallback(lambda x: injected.stop() and x or x) |
1012 | - |
1013 | - publisher = TopicPublisher(connection=conn, topic=topic) |
1014 | - publisher.send(msg) |
1015 | - publisher.close() |
1016 | - return d |
1017 | - |
1018 | - |
1019 | def cast(context, topic, msg): |
1020 | """Sends a message on a topic without waiting for a response""" |
1021 | LOG.debug("Making asynchronous cast...") |
1022 | |
1023 | === removed file 'nova/server.py' |
1024 | --- nova/server.py 2010-11-23 20:52:00 +0000 |
1025 | +++ nova/server.py 1970-01-01 00:00:00 +0000 |
1026 | @@ -1,151 +0,0 @@ |
1027 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1028 | - |
1029 | -# Copyright 2010 United States Government as represented by the |
1030 | -# Administrator of the National Aeronautics and Space Administration. |
1031 | -# All Rights Reserved. |
1032 | -# |
1033 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1034 | -# not use this file except in compliance with the License. You may obtain |
1035 | -# a copy of the License at |
1036 | -# |
1037 | -# http://www.apache.org/licenses/LICENSE-2.0 |
1038 | -# |
1039 | -# Unless required by applicable law or agreed to in writing, software |
1040 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1041 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1042 | -# License for the specific language governing permissions and limitations |
1043 | -# under the License. |
1044 | - |
1045 | -""" |
1046 | -Base functionality for nova daemons - gradually being replaced with twistd.py. |
1047 | -""" |
1048 | - |
1049 | -import daemon |
1050 | -from daemon import pidlockfile |
1051 | -import logging |
1052 | -import logging.handlers |
1053 | -import os |
1054 | -import signal |
1055 | -import sys |
1056 | -import time |
1057 | - |
1058 | -from nova import flags |
1059 | - |
1060 | - |
1061 | -FLAGS = flags.FLAGS |
1062 | -flags.DEFINE_bool('daemonize', False, 'daemonize this process') |
1063 | -# NOTE(termie): right now I am defaulting to using syslog when we daemonize |
1064 | -# it may be better to do something else -shrug- |
1065 | -# NOTE(Devin): I think we should let each process have its own log file |
1066 | -# and put it in /var/logs/nova/(appname).log |
1067 | -# This makes debugging much easier and cuts down on sys log |
1068 | -# clutter. |
1069 | -flags.DEFINE_bool('use_syslog', True, 'output to syslog when daemonizing') |
1070 | -flags.DEFINE_string('logfile', None, 'log file to output to') |
1071 | -flags.DEFINE_string('logdir', None, 'directory to keep log files in ' |
1072 | - '(will be prepended to $logfile)') |
1073 | -flags.DEFINE_string('pidfile', None, 'pid file to output to') |
1074 | -flags.DEFINE_string('working_directory', './', 'working directory...') |
1075 | -flags.DEFINE_integer('uid', os.getuid(), 'uid under which to run') |
1076 | -flags.DEFINE_integer('gid', os.getgid(), 'gid under which to run') |
1077 | - |
1078 | - |
1079 | -def stop(pidfile): |
1080 | - """ |
1081 | - Stop the daemon |
1082 | - """ |
1083 | - # Get the pid from the pidfile |
1084 | - try: |
1085 | - pid = int(open(pidfile, 'r').read().strip()) |
1086 | - except IOError: |
1087 | - message = "pidfile %s does not exist. Daemon not running?\n" |
1088 | - sys.stderr.write(message % pidfile) |
1089 | - return |
1090 | - |
1091 | - # Try killing the daemon process |
1092 | - try: |
1093 | - while 1: |
1094 | - os.kill(pid, signal.SIGTERM) |
1095 | - time.sleep(0.1) |
1096 | - except OSError, err: |
1097 | - err = str(err) |
1098 | - if err.find("No such process") > 0: |
1099 | - if os.path.exists(pidfile): |
1100 | - os.remove(pidfile) |
1101 | - else: |
1102 | - print str(err) |
1103 | - sys.exit(1) |
1104 | - |
1105 | - |
1106 | -def serve(name, main): |
1107 | - """Controller for server""" |
1108 | - argv = FLAGS(sys.argv) |
1109 | - |
1110 | - if not FLAGS.pidfile: |
1111 | - FLAGS.pidfile = '%s.pid' % name |
1112 | - |
1113 | - logging.debug("Full set of FLAGS: \n\n\n") |
1114 | - for flag in FLAGS: |
1115 | - logging.debug("%s : %s", flag, FLAGS.get(flag, None)) |
1116 | - |
1117 | - action = 'start' |
1118 | - if len(argv) > 1: |
1119 | - action = argv.pop() |
1120 | - |
1121 | - if action == 'stop': |
1122 | - stop(FLAGS.pidfile) |
1123 | - sys.exit() |
1124 | - elif action == 'restart': |
1125 | - stop(FLAGS.pidfile) |
1126 | - elif action == 'start': |
1127 | - pass |
1128 | - else: |
1129 | - print 'usage: %s [options] [start|stop|restart]' % argv[0] |
1130 | - sys.exit(1) |
1131 | - daemonize(argv, name, main) |
1132 | - |
1133 | - |
1134 | -def daemonize(args, name, main): |
1135 | - """Does the work of daemonizing the process""" |
1136 | - logging.getLogger('amqplib').setLevel(logging.WARN) |
1137 | - files_to_keep = [] |
1138 | - if FLAGS.daemonize: |
1139 | - logger = logging.getLogger() |
1140 | - formatter = logging.Formatter( |
1141 | - name + '(%(name)s): %(levelname)s %(message)s') |
1142 | - if FLAGS.use_syslog and not FLAGS.logfile: |
1143 | - syslog = logging.handlers.SysLogHandler(address='/dev/log') |
1144 | - syslog.setFormatter(formatter) |
1145 | - logger.addHandler(syslog) |
1146 | - files_to_keep.append(syslog.socket) |
1147 | - else: |
1148 | - if not FLAGS.logfile: |
1149 | - FLAGS.logfile = '%s.log' % name |
1150 | - if FLAGS.logdir: |
1151 | - FLAGS.logfile = os.path.join(FLAGS.logdir, FLAGS.logfile) |
1152 | - logfile = logging.FileHandler(FLAGS.logfile) |
1153 | - logfile.setFormatter(formatter) |
1154 | - logger.addHandler(logfile) |
1155 | - files_to_keep.append(logfile.stream) |
1156 | - stdin, stdout, stderr = None, None, None |
1157 | - else: |
1158 | - stdin, stdout, stderr = sys.stdin, sys.stdout, sys.stderr |
1159 | - |
1160 | - if FLAGS.verbose: |
1161 | - logging.getLogger().setLevel(logging.DEBUG) |
1162 | - else: |
1163 | - logging.getLogger().setLevel(logging.WARNING) |
1164 | - |
1165 | - with daemon.DaemonContext( |
1166 | - detach_process=FLAGS.daemonize, |
1167 | - working_directory=FLAGS.working_directory, |
1168 | - pidfile=pidlockfile.TimeoutPIDLockFile(FLAGS.pidfile, |
1169 | - acquire_timeout=1, |
1170 | - threaded=False), |
1171 | - stdin=stdin, |
1172 | - stdout=stdout, |
1173 | - stderr=stderr, |
1174 | - uid=FLAGS.uid, |
1175 | - gid=FLAGS.gid, |
1176 | - files_preserve=files_to_keep): |
1177 | - main(args) |
1178 | |
1179 | === added file 'nova/service.py' |
1180 | --- nova/service.py 1970-01-01 00:00:00 +0000 |
1181 | +++ nova/service.py 2010-12-16 20:49:10 +0000 |
1182 | @@ -0,0 +1,234 @@ |
1183 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1184 | + |
1185 | +# Copyright 2010 United States Government as represented by the |
1186 | +# Administrator of the National Aeronautics and Space Administration. |
1187 | +# All Rights Reserved. |
1188 | +# |
1189 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1190 | +# not use this file except in compliance with the License. You may obtain |
1191 | +# a copy of the License at |
1192 | +# |
1193 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1194 | +# |
1195 | +# Unless required by applicable law or agreed to in writing, software |
1196 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1197 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1198 | +# License for the specific language governing permissions and limitations |
1199 | +# under the License. |
1200 | + |
1201 | +""" |
1202 | +Generic Node baseclass for all workers that run on hosts |
1203 | +""" |
1204 | + |
1205 | +import inspect |
1206 | +import logging |
1207 | +import os |
1208 | +import sys |
1209 | + |
1210 | +from eventlet import event |
1211 | +from eventlet import greenthread |
1212 | +from eventlet import greenpool |
1213 | + |
1214 | +from nova import context |
1215 | +from nova import db |
1216 | +from nova import exception |
1217 | +from nova import flags |
1218 | +from nova import rpc |
1219 | +from nova import utils |
1220 | + |
1221 | + |
1222 | +FLAGS = flags.FLAGS |
1223 | +flags.DEFINE_integer('report_interval', 10, |
1224 | + 'seconds between nodes reporting state to datastore', |
1225 | + lower_bound=1) |
1226 | + |
1227 | +flags.DEFINE_integer('periodic_interval', 60, |
1228 | + 'seconds between running periodic tasks', |
1229 | + lower_bound=1) |
1230 | + |
1231 | +flags.DEFINE_string('pidfile', None, |
1232 | + 'pidfile to use for this service') |
1233 | + |
1234 | + |
1235 | +flags.DEFINE_flag(flags.HelpFlag()) |
1236 | +flags.DEFINE_flag(flags.HelpshortFlag()) |
1237 | +flags.DEFINE_flag(flags.HelpXMLFlag()) |
1238 | + |
1239 | + |
1240 | +class Service(object): |
1241 | + """Base class for workers that run on hosts.""" |
1242 | + |
1243 | + def __init__(self, host, binary, topic, manager, report_interval=None, |
1244 | + periodic_interval=None, *args, **kwargs): |
1245 | + self.host = host |
1246 | + self.binary = binary |
1247 | + self.topic = topic |
1248 | + self.manager_class_name = manager |
1249 | + self.report_interval = report_interval |
1250 | + self.periodic_interval = periodic_interval |
1251 | + super(Service, self).__init__(*args, **kwargs) |
1252 | + self.saved_args, self.saved_kwargs = args, kwargs |
1253 | + self.timers = [] |
1254 | + |
1255 | + def start(self): |
1256 | + manager_class = utils.import_class(self.manager_class_name) |
1257 | + self.manager = manager_class(host=self.host, *self.saved_args, |
1258 | + **self.saved_kwargs) |
1259 | + self.manager.init_host() |
1260 | + self.model_disconnected = False |
1261 | + ctxt = context.get_admin_context() |
1262 | + try: |
1263 | + service_ref = db.service_get_by_args(ctxt, |
1264 | + self.host, |
1265 | + self.binary) |
1266 | + self.service_id = service_ref['id'] |
1267 | + except exception.NotFound: |
1268 | + self._create_service_ref(ctxt) |
1269 | + |
1270 | + conn1 = rpc.Connection.instance(new=True) |
1271 | + conn2 = rpc.Connection.instance(new=True) |
1272 | + if self.report_interval: |
1273 | + consumer_all = rpc.AdapterConsumer( |
1274 | + connection=conn1, |
1275 | + topic=self.topic, |
1276 | + proxy=self) |
1277 | + consumer_node = rpc.AdapterConsumer( |
1278 | + connection=conn2, |
1279 | + topic='%s.%s' % (self.topic, self.host), |
1280 | + proxy=self) |
1281 | + |
1282 | + self.timers.append(consumer_all.attach_to_eventlet()) |
1283 | + self.timers.append(consumer_node.attach_to_eventlet()) |
1284 | + |
1285 | + pulse = utils.LoopingCall(self.report_state) |
1286 | + pulse.start(interval=self.report_interval, now=False) |
1287 | + self.timers.append(pulse) |
1288 | + |
1289 | + if self.periodic_interval: |
1290 | + periodic = utils.LoopingCall(self.periodic_tasks) |
1291 | + periodic.start(interval=self.periodic_interval, now=False) |
1292 | + self.timers.append(periodic) |
1293 | + |
1294 | + def _create_service_ref(self, context): |
1295 | + service_ref = db.service_create(context, |
1296 | + {'host': self.host, |
1297 | + 'binary': self.binary, |
1298 | + 'topic': self.topic, |
1299 | + 'report_count': 0}) |
1300 | + self.service_id = service_ref['id'] |
1301 | + |
1302 | + def __getattr__(self, key): |
1303 | + manager = self.__dict__.get('manager', None) |
1304 | + return getattr(manager, key) |
1305 | + |
1306 | + @classmethod |
1307 | + def create(cls, |
1308 | + host=None, |
1309 | + binary=None, |
1310 | + topic=None, |
1311 | + manager=None, |
1312 | + report_interval=None, |
1313 | + periodic_interval=None): |
1314 | + """Instantiates class and passes back application object. |
1315 | + |
1316 | + Args: |
1317 | + host, defaults to FLAGS.host |
1318 | + binary, defaults to basename of executable |
1319 | + topic, defaults to bin_name - "nova-" part |
1320 | + manager, defaults to FLAGS.<topic>_manager |
1321 | + report_interval, defaults to FLAGS.report_interval |
1322 | + periodic_interval, defaults to FLAGS.periodic_interval |
1323 | + """ |
1324 | + if not host: |
1325 | + host = FLAGS.host |
1326 | + if not binary: |
1327 | + binary = os.path.basename(inspect.stack()[-1][1]) |
1328 | + if not topic: |
1329 | + topic = binary.rpartition("nova-")[2] |
1330 | + if not manager: |
1331 | + manager = FLAGS.get('%s_manager' % topic, None) |
1332 | + if not report_interval: |
1333 | + report_interval = FLAGS.report_interval |
1334 | + if not periodic_interval: |
1335 | + periodic_interval = FLAGS.periodic_interval |
1336 | + logging.warn("Starting %s node", topic) |
1337 | + service_obj = cls(host, binary, topic, manager, |
1338 | + report_interval, periodic_interval) |
1339 | + |
1340 | + return service_obj |
1341 | + |
1342 | + def kill(self): |
1343 | + """Destroy the service object in the datastore""" |
1344 | + self.stop() |
1345 | + try: |
1346 | + db.service_destroy(context.get_admin_context(), self.service_id) |
1347 | + except exception.NotFound: |
1348 | + logging.warn("Service killed that has no database entry") |
1349 | + |
1350 | + def stop(self): |
1351 | + for x in self.timers: |
1352 | + try: |
1353 | + x.stop() |
1354 | + except Exception: |
1355 | + pass |
1356 | + self.timers = [] |
1357 | + |
1358 | + def periodic_tasks(self): |
1359 | + """Tasks to be run at a periodic interval""" |
1360 | + self.manager.periodic_tasks(context.get_admin_context()) |
1361 | + |
1362 | + def report_state(self): |
1363 | + """Update the state of this service in the datastore.""" |
1364 | + ctxt = context.get_admin_context() |
1365 | + try: |
1366 | + try: |
1367 | + service_ref = db.service_get(ctxt, self.service_id) |
1368 | + except exception.NotFound: |
1369 | + logging.debug("The service database object disappeared, " |
1370 | + "Recreating it.") |
1371 | + self._create_service_ref(ctxt) |
1372 | + service_ref = db.service_get(ctxt, self.service_id) |
1373 | + |
1374 | + db.service_update(ctxt, |
1375 | + self.service_id, |
1376 | + {'report_count': service_ref['report_count'] + 1}) |
1377 | + |
1378 | + # TODO(termie): make this pattern be more elegant. |
1379 | + if getattr(self, "model_disconnected", False): |
1380 | + self.model_disconnected = False |
1381 | + logging.error("Recovered model server connection!") |
1382 | + |
1383 | + # TODO(vish): this should probably only catch connection errors |
1384 | + except Exception: # pylint: disable-msg=W0702 |
1385 | + if not getattr(self, "model_disconnected", False): |
1386 | + self.model_disconnected = True |
1387 | + logging.exception("model server went away") |
1388 | + |
1389 | + |
1390 | +def serve(*services): |
1391 | + argv = FLAGS(sys.argv) |
1392 | + |
1393 | + if not services: |
1394 | + services = [Service.create()] |
1395 | + |
1396 | + name = '_'.join(x.binary for x in services) |
1397 | + logging.debug("Serving %s" % name) |
1398 | + |
1399 | + logging.getLogger('amqplib').setLevel(logging.WARN) |
1400 | + |
1401 | + if FLAGS.verbose: |
1402 | + logging.getLogger().setLevel(logging.DEBUG) |
1403 | + else: |
1404 | + logging.getLogger().setLevel(logging.WARNING) |
1405 | + |
1406 | + logging.debug("Full set of FLAGS:") |
1407 | + for flag in FLAGS: |
1408 | + logging.debug("%s : %s" % (flag, FLAGS.get(flag, None))) |
1409 | + |
1410 | + for x in services: |
1411 | + x.start() |
1412 | + |
1413 | + |
1414 | +def wait(): |
1415 | + while True: |
1416 | + greenthread.sleep(5) |
1417 | |
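With the twistd application plumbing gone, a binary just builds one or more Service objects, starts them, and parks the main greenthread. A rough sketch of what the updated bin/nova-* scripts boil down to (the 'nova-compute' binary name is only an example; the actual scripts in this branch may differ in detail):

    from nova import service

    if __name__ == '__main__':
        compute = service.Service.create(binary='nova-compute')
        # serve() parses FLAGS, configures logging and starts each service's
        # rpc consumers and LoopingCall timers; wait() parks the main greenthread
        service.serve(compute)
        service.wait()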
1418 | === removed file 'nova/service.py' |
1419 | --- nova/service.py 2010-11-15 19:43:50 +0000 |
1420 | +++ nova/service.py 1970-01-01 00:00:00 +0000 |
1421 | @@ -1,195 +0,0 @@ |
1422 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1423 | - |
1424 | -# Copyright 2010 United States Government as represented by the |
1425 | -# Administrator of the National Aeronautics and Space Administration. |
1426 | -# All Rights Reserved. |
1427 | -# |
1428 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1429 | -# not use this file except in compliance with the License. You may obtain |
1430 | -# a copy of the License at |
1431 | -# |
1432 | -# http://www.apache.org/licenses/LICENSE-2.0 |
1433 | -# |
1434 | -# Unless required by applicable law or agreed to in writing, software |
1435 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1436 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1437 | -# License for the specific language governing permissions and limitations |
1438 | -# under the License. |
1439 | - |
1440 | -""" |
1441 | -A service is a very thin wrapper around a Manager object. It exposes the |
1442 | -manager's public methods to other components of the system via rpc. It will |
1443 | -report state periodically to the database and is responsible for initiating |
1444 | -any periodic tasts that need to be executed on a given host. |
1445 | - |
1446 | -This module contains Service, a generic baseclass for all workers. |
1447 | -""" |
1448 | - |
1449 | -import inspect |
1450 | -import logging |
1451 | -import os |
1452 | - |
1453 | -from twisted.internet import defer |
1454 | -from twisted.internet import task |
1455 | -from twisted.application import service |
1456 | - |
1457 | -from nova import context |
1458 | -from nova import db |
1459 | -from nova import exception |
1460 | -from nova import flags |
1461 | -from nova import rpc |
1462 | -from nova import utils |
1463 | - |
1464 | - |
1465 | -FLAGS = flags.FLAGS |
1466 | -flags.DEFINE_integer('report_interval', 10, |
1467 | - 'seconds between nodes reporting state to datastore', |
1468 | - lower_bound=1) |
1469 | - |
1470 | -flags.DEFINE_integer('periodic_interval', 60, |
1471 | - 'seconds between running periodic tasks', |
1472 | - lower_bound=1) |
1473 | - |
1474 | - |
1475 | -class Service(object, service.Service): |
1476 | - """Base class for workers that run on hosts.""" |
1477 | - |
1478 | - def __init__(self, host, binary, topic, manager, report_interval=None, |
1479 | - periodic_interval=None, *args, **kwargs): |
1480 | - self.host = host |
1481 | - self.binary = binary |
1482 | - self.topic = topic |
1483 | - self.manager_class_name = manager |
1484 | - self.report_interval = report_interval |
1485 | - self.periodic_interval = periodic_interval |
1486 | - super(Service, self).__init__(*args, **kwargs) |
1487 | - self.saved_args, self.saved_kwargs = args, kwargs |
1488 | - |
1489 | - def startService(self): # pylint: disable-msg C0103 |
1490 | - manager_class = utils.import_class(self.manager_class_name) |
1491 | - self.manager = manager_class(host=self.host, *self.saved_args, |
1492 | - **self.saved_kwargs) |
1493 | - self.manager.init_host() |
1494 | - self.model_disconnected = False |
1495 | - ctxt = context.get_admin_context() |
1496 | - try: |
1497 | - service_ref = db.service_get_by_args(ctxt, |
1498 | - self.host, |
1499 | - self.binary) |
1500 | - self.service_id = service_ref['id'] |
1501 | - except exception.NotFound: |
1502 | - self._create_service_ref(ctxt) |
1503 | - |
1504 | - conn = rpc.Connection.instance() |
1505 | - if self.report_interval: |
1506 | - consumer_all = rpc.AdapterConsumer( |
1507 | - connection=conn, |
1508 | - topic=self.topic, |
1509 | - proxy=self) |
1510 | - consumer_node = rpc.AdapterConsumer( |
1511 | - connection=conn, |
1512 | - topic='%s.%s' % (self.topic, self.host), |
1513 | - proxy=self) |
1514 | - |
1515 | - consumer_all.attach_to_twisted() |
1516 | - consumer_node.attach_to_twisted() |
1517 | - |
1518 | - pulse = task.LoopingCall(self.report_state) |
1519 | - pulse.start(interval=self.report_interval, now=False) |
1520 | - |
1521 | - if self.periodic_interval: |
1522 | - pulse = task.LoopingCall(self.periodic_tasks) |
1523 | - pulse.start(interval=self.periodic_interval, now=False) |
1524 | - |
1525 | - def _create_service_ref(self, context): |
1526 | - service_ref = db.service_create(context, |
1527 | - {'host': self.host, |
1528 | - 'binary': self.binary, |
1529 | - 'topic': self.topic, |
1530 | - 'report_count': 0}) |
1531 | - self.service_id = service_ref['id'] |
1532 | - |
1533 | - def __getattr__(self, key): |
1534 | - manager = self.__dict__.get('manager', None) |
1535 | - return getattr(manager, key) |
1536 | - |
1537 | - @classmethod |
1538 | - def create(cls, |
1539 | - host=None, |
1540 | - binary=None, |
1541 | - topic=None, |
1542 | - manager=None, |
1543 | - report_interval=None, |
1544 | - periodic_interval=None): |
1545 | - """Instantiates class and passes back application object. |
1546 | - |
1547 | - Args: |
1548 | - host, defaults to FLAGS.host |
1549 | - binary, defaults to basename of executable |
1550 | - topic, defaults to bin_name - "nova-" part |
1551 | - manager, defaults to FLAGS.<topic>_manager |
1552 | - report_interval, defaults to FLAGS.report_interval |
1553 | - periodic_interval, defaults to FLAGS.periodic_interval |
1554 | - """ |
1555 | - if not host: |
1556 | - host = FLAGS.host |
1557 | - if not binary: |
1558 | - binary = os.path.basename(inspect.stack()[-1][1]) |
1559 | - if not topic: |
1560 | - topic = binary.rpartition("nova-")[2] |
1561 | - if not manager: |
1562 | - manager = FLAGS.get('%s_manager' % topic, None) |
1563 | - if not report_interval: |
1564 | - report_interval = FLAGS.report_interval |
1565 | - if not periodic_interval: |
1566 | - periodic_interval = FLAGS.periodic_interval |
1567 | - logging.warn("Starting %s node", topic) |
1568 | - service_obj = cls(host, binary, topic, manager, |
1569 | - report_interval, periodic_interval) |
1570 | - |
1571 | - # This is the parent service that twistd will be looking for when it |
1572 | - # parses this file, return it so that we can get it into globals. |
1573 | - application = service.Application(binary) |
1574 | - service_obj.setServiceParent(application) |
1575 | - return application |
1576 | - |
1577 | - def kill(self): |
1578 | - """Destroy the service object in the datastore""" |
1579 | - try: |
1580 | - db.service_destroy(context.get_admin_context(), self.service_id) |
1581 | - except exception.NotFound: |
1582 | - logging.warn("Service killed that has no database entry") |
1583 | - |
1584 | - @defer.inlineCallbacks |
1585 | - def periodic_tasks(self): |
1586 | - """Tasks to be run at a periodic interval""" |
1587 | - yield self.manager.periodic_tasks(context.get_admin_context()) |
1588 | - |
1589 | - @defer.inlineCallbacks |
1590 | - def report_state(self): |
1591 | - """Update the state of this service in the datastore.""" |
1592 | - ctxt = context.get_admin_context() |
1593 | - try: |
1594 | - try: |
1595 | - service_ref = db.service_get(ctxt, self.service_id) |
1596 | - except exception.NotFound: |
1597 | - logging.debug("The service database object disappeared, " |
1598 | - "Recreating it.") |
1599 | - self._create_service_ref(ctxt) |
1600 | - service_ref = db.service_get(ctxt, self.service_id) |
1601 | - |
1602 | - db.service_update(ctxt, |
1603 | - self.service_id, |
1604 | - {'report_count': service_ref['report_count'] + 1}) |
1605 | - |
1606 | - # TODO(termie): make this pattern be more elegant. |
1607 | - if getattr(self, "model_disconnected", False): |
1608 | - self.model_disconnected = False |
1609 | - logging.error("Recovered model server connection!") |
1610 | - |
1611 | - # TODO(vish): this should probably only catch connection errors |
1612 | - except Exception: # pylint: disable-msg=W0702 |
1613 | - if not getattr(self, "model_disconnected", False): |
1614 | - self.model_disconnected = True |
1615 | - logging.exception("model server went away") |
1616 | - yield |
1617 | |
1618 | === modified file 'nova/test.py' |
1619 | --- nova/test.py 2010-10-26 15:48:20 +0000 |
1620 | +++ nova/test.py 2010-12-16 20:49:10 +0000 |
1621 | @@ -25,11 +25,12 @@ |
1622 | import datetime |
1623 | import sys |
1624 | import time |
1625 | +import unittest |
1626 | |
1627 | import mox |
1628 | import stubout |
1629 | from twisted.internet import defer |
1630 | -from twisted.trial import unittest |
1631 | +from twisted.trial import unittest as trial_unittest |
1632 | |
1633 | from nova import context |
1634 | from nova import db |
1635 | @@ -55,7 +56,89 @@ |
1636 | return _skipper |
1637 | |
1638 | |
1639 | -class TrialTestCase(unittest.TestCase): |
1640 | +class TestCase(unittest.TestCase): |
1641 | + """Test case base class for all unit tests""" |
1642 | + def setUp(self): |
1643 | + """Run before each test method to initialize test environment""" |
1644 | + super(TestCase, self).setUp() |
1645 | + # NOTE(vish): We need a better method for creating fixtures for tests |
1646 | + # now that we have some required db setup for the system |
1647 | + # to work properly. |
1648 | + self.start = datetime.datetime.utcnow() |
1649 | + ctxt = context.get_admin_context() |
1650 | + if db.network_count(ctxt) != 5: |
1651 | + network_manager.VlanManager().create_networks(ctxt, |
1652 | + FLAGS.fixed_range, |
1653 | + 5, 16, |
1654 | + FLAGS.vlan_start, |
1655 | + FLAGS.vpn_start) |
1656 | + |
1657 | + # emulate some of the mox stuff, we can't use the metaclass |
1658 | + # because it screws with our generators |
1659 | + self.mox = mox.Mox() |
1660 | + self.stubs = stubout.StubOutForTesting() |
1661 | + self.flag_overrides = {} |
1662 | + self.injected = [] |
1663 | + self._monkey_patch_attach() |
1664 | + self._original_flags = FLAGS.FlagValuesDict() |
1665 | + |
1666 | + def tearDown(self): |
1667 | + """Runs after each test method to finalize/tear down test |
1668 | + environment.""" |
1669 | + try: |
1670 | + self.mox.UnsetStubs() |
1671 | + self.stubs.UnsetAll() |
1672 | + self.stubs.SmartUnsetAll() |
1673 | + self.mox.VerifyAll() |
1674 | + # NOTE(vish): Clean up any ips associated during the test. |
1675 | + ctxt = context.get_admin_context() |
1676 | + db.fixed_ip_disassociate_all_by_timeout(ctxt, FLAGS.host, |
1677 | + self.start) |
1678 | + db.network_disassociate_all(ctxt) |
1679 | + rpc.Consumer.attach_to_eventlet = self.originalAttach |
1680 | + for x in self.injected: |
1681 | + try: |
1682 | + x.stop() |
1683 | + except AssertionError: |
1684 | + pass |
1685 | + |
1686 | + if FLAGS.fake_rabbit: |
1687 | + fakerabbit.reset_all() |
1688 | + |
1689 | + db.security_group_destroy_all(ctxt) |
1690 | + super(TestCase, self).tearDown() |
1691 | + finally: |
1692 | + self.reset_flags() |
1693 | + |
1694 | + def flags(self, **kw): |
1695 | + """Override flag variables for a test""" |
1696 | + for k, v in kw.iteritems(): |
1697 | + if k in self.flag_overrides: |
1698 | + self.reset_flags() |
1699 | + raise Exception( |
1700 | + 'trying to override already overriden flag: %s' % k) |
1701 | + self.flag_overrides[k] = getattr(FLAGS, k) |
1702 | + setattr(FLAGS, k, v) |
1703 | + |
1704 | + def reset_flags(self): |
1705 | + """Resets all flag variables for the test. Runs after each test""" |
1706 | + FLAGS.Reset() |
1707 | + for k, v in self._original_flags.iteritems(): |
1708 | + setattr(FLAGS, k, v) |
1709 | + |
1710 | + def _monkey_patch_attach(self): |
1711 | + self.originalAttach = rpc.Consumer.attach_to_eventlet |
1712 | + |
1713 | + def _wrapped(innerSelf): |
1714 | + rv = self.originalAttach(innerSelf) |
1715 | + self.injected.append(rv) |
1716 | + return rv |
1717 | + |
1718 | + _wrapped.func_name = self.originalAttach.func_name |
1719 | + rpc.Consumer.attach_to_eventlet = _wrapped |
1720 | + |
1721 | + |
1722 | +class TrialTestCase(trial_unittest.TestCase): |
1723 | """Test case base class for all unit tests""" |
1724 | def setUp(self): |
1725 | """Run before each test method to initialize test environment""" |
1726 | @@ -78,7 +161,6 @@ |
1727 | self.stubs = stubout.StubOutForTesting() |
1728 | self.flag_overrides = {} |
1729 | self.injected = [] |
1730 | - self._monkey_patch_attach() |
1731 | self._original_flags = FLAGS.FlagValuesDict() |
1732 | |
1733 | def tearDown(self): |
1734 | @@ -94,7 +176,6 @@ |
1735 | db.fixed_ip_disassociate_all_by_timeout(ctxt, FLAGS.host, |
1736 | self.start) |
1737 | db.network_disassociate_all(ctxt) |
1738 | - rpc.Consumer.attach_to_twisted = self.originalAttach |
1739 | for x in self.injected: |
1740 | try: |
1741 | x.stop() |
1742 | @@ -147,14 +228,3 @@ |
1743 | return d |
1744 | _wrapped.func_name = func.func_name |
1745 | return _wrapped |
1746 | - |
1747 | - def _monkey_patch_attach(self): |
1748 | - self.originalAttach = rpc.Consumer.attach_to_twisted |
1749 | - |
1750 | - def _wrapped(innerSelf): |
1751 | - rv = self.originalAttach(innerSelf) |
1752 | - self.injected.append(rv) |
1753 | - return rv |
1754 | - |
1755 | - _wrapped.func_name = self.originalAttach.func_name |
1756 | - rpc.Consumer.attach_to_twisted = _wrapped |
1757 | |
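The new eventlet-friendly TestCase keeps the flag-override and consumer-cleanup helpers from TrialTestCase but drops the deferred machinery, so tests written against it stay synchronous. A small illustrative sketch (the choice of flag is arbitrary; overrides are undone automatically because tearDown() calls reset_flags()):

    from nova import flags
    from nova import test

    FLAGS = flags.FLAGS


    class ExampleTestCase(test.TestCase):
        def test_flag_override(self):
            # overridden for this test only; restored in tearDown()
            self.flags(fake_rabbit=True)
            self.assertTrue(FLAGS.fake_rabbit)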
1758 | === modified file 'nova/tests/access_unittest.py' |
1759 | --- nova/tests/access_unittest.py 2010-10-22 07:48:27 +0000 |
1760 | +++ nova/tests/access_unittest.py 2010-12-16 20:49:10 +0000 |
1761 | @@ -35,7 +35,7 @@ |
1762 | pass |
1763 | |
1764 | |
1765 | -class AccessTestCase(test.TrialTestCase): |
1766 | +class AccessTestCase(test.TestCase): |
1767 | def setUp(self): |
1768 | super(AccessTestCase, self).setUp() |
1769 | um = manager.AuthManager() |
1770 | |
1771 | === modified file 'nova/tests/auth_unittest.py' |
1772 | --- nova/tests/auth_unittest.py 2010-10-22 07:48:27 +0000 |
1773 | +++ nova/tests/auth_unittest.py 2010-12-16 20:49:10 +0000 |
1774 | @@ -326,12 +326,12 @@ |
1775 | self.assertTrue(user.is_admin()) |
1776 | |
1777 | |
1778 | -class AuthManagerLdapTestCase(AuthManagerTestCase, test.TrialTestCase): |
1779 | +class AuthManagerLdapTestCase(AuthManagerTestCase, test.TestCase): |
1780 | auth_driver = 'nova.auth.ldapdriver.FakeLdapDriver' |
1781 | |
1782 | def __init__(self, *args, **kwargs): |
1783 | AuthManagerTestCase.__init__(self) |
1784 | - test.TrialTestCase.__init__(self, *args, **kwargs) |
1785 | + test.TestCase.__init__(self, *args, **kwargs) |
1786 | import nova.auth.fakeldap as fakeldap |
1787 | FLAGS.redis_db = 8 |
1788 | if FLAGS.flush_db: |
1789 | @@ -343,7 +343,7 @@ |
1790 | self.skip = True |
1791 | |
1792 | |
1793 | -class AuthManagerDbTestCase(AuthManagerTestCase, test.TrialTestCase): |
1794 | +class AuthManagerDbTestCase(AuthManagerTestCase, test.TestCase): |
1795 | auth_driver = 'nova.auth.dbdriver.DbDriver' |
1796 | |
1797 | |
1798 | |
1799 | === modified file 'nova/tests/cloud_unittest.py' |
1800 | --- nova/tests/cloud_unittest.py 2010-12-09 17:30:03 +0000 |
1801 | +++ nova/tests/cloud_unittest.py 2010-12-16 20:49:10 +0000 |
1802 | @@ -27,8 +27,6 @@ |
1803 | import time |
1804 | |
1805 | from eventlet import greenthread |
1806 | -from twisted.internet import defer |
1807 | -import unittest |
1808 | from xml.etree import ElementTree |
1809 | |
1810 | from nova import context |
1811 | @@ -53,7 +51,7 @@ |
1812 | os.makedirs(IMAGES_PATH) |
1813 | |
1814 | |
1815 | -class CloudTestCase(test.TrialTestCase): |
1816 | +class CloudTestCase(test.TestCase): |
1817 | def setUp(self): |
1818 | super(CloudTestCase, self).setUp() |
1819 | self.flags(connection_type='fake', images_path=IMAGES_PATH) |
1820 | @@ -199,7 +197,7 @@ |
1821 | logging.debug("Need to watch instance %s until it's running..." % |
1822 | instance['instance_id']) |
1823 | while True: |
1824 | - rv = yield defer.succeed(time.sleep(1)) |
1825 | + greenthread.sleep(1) |
1826 | info = self.cloud._get_instance(instance['instance_id']) |
1827 | logging.debug(info['state']) |
1828 | if info['state'] == power_state.RUNNING: |
1829 | |
1830 | === modified file 'nova/tests/compute_unittest.py' |
1831 | --- nova/tests/compute_unittest.py 2010-12-03 20:21:18 +0000 |
1832 | +++ nova/tests/compute_unittest.py 2010-12-16 20:49:10 +0000 |
1833 | @@ -22,8 +22,6 @@ |
1834 | import datetime |
1835 | import logging |
1836 | |
1837 | -from twisted.internet import defer |
1838 | - |
1839 | from nova import context |
1840 | from nova import db |
1841 | from nova import exception |
1842 | @@ -33,10 +31,11 @@ |
1843 | from nova.auth import manager |
1844 | from nova.compute import api as compute_api |
1845 | |
1846 | + |
1847 | FLAGS = flags.FLAGS |
1848 | |
1849 | |
1850 | -class ComputeTestCase(test.TrialTestCase): |
1851 | +class ComputeTestCase(test.TestCase): |
1852 | """Test case for compute""" |
1853 | def setUp(self): |
1854 | logging.getLogger().setLevel(logging.DEBUG) |
1855 | @@ -94,24 +93,22 @@ |
1856 | db.security_group_destroy(self.context, group['id']) |
1857 | db.instance_destroy(self.context, ref[0]['id']) |
1858 | |
1859 | - @defer.inlineCallbacks |
1860 | def test_run_terminate(self): |
1861 | """Make sure it is possible to run and terminate instance""" |
1862 | instance_id = self._create_instance() |
1863 | |
1864 | - yield self.compute.run_instance(self.context, instance_id) |
1865 | + self.compute.run_instance(self.context, instance_id) |
1866 | |
1867 | instances = db.instance_get_all(context.get_admin_context()) |
1868 | logging.info("Running instances: %s", instances) |
1869 | self.assertEqual(len(instances), 1) |
1870 | |
1871 | - yield self.compute.terminate_instance(self.context, instance_id) |
1872 | + self.compute.terminate_instance(self.context, instance_id) |
1873 | |
1874 | instances = db.instance_get_all(context.get_admin_context()) |
1875 | logging.info("After terminating instances: %s", instances) |
1876 | self.assertEqual(len(instances), 0) |
1877 | |
1878 | - @defer.inlineCallbacks |
1879 | def test_run_terminate_timestamps(self): |
1880 | """Make sure timestamps are set for launched and destroyed""" |
1881 | instance_id = self._create_instance() |
1882 | @@ -119,42 +116,40 @@ |
1883 | self.assertEqual(instance_ref['launched_at'], None) |
1884 | self.assertEqual(instance_ref['deleted_at'], None) |
1885 | launch = datetime.datetime.utcnow() |
1886 | - yield self.compute.run_instance(self.context, instance_id) |
1887 | + self.compute.run_instance(self.context, instance_id) |
1888 | instance_ref = db.instance_get(self.context, instance_id) |
1889 | self.assert_(instance_ref['launched_at'] > launch) |
1890 | self.assertEqual(instance_ref['deleted_at'], None) |
1891 | terminate = datetime.datetime.utcnow() |
1892 | - yield self.compute.terminate_instance(self.context, instance_id) |
1893 | + self.compute.terminate_instance(self.context, instance_id) |
1894 | self.context = self.context.elevated(True) |
1895 | instance_ref = db.instance_get(self.context, instance_id) |
1896 | self.assert_(instance_ref['launched_at'] < terminate) |
1897 | self.assert_(instance_ref['deleted_at'] > terminate) |
1898 | |
1899 | - @defer.inlineCallbacks |
1900 | def test_reboot(self): |
1901 | """Ensure instance can be rebooted""" |
1902 | instance_id = self._create_instance() |
1903 | - yield self.compute.run_instance(self.context, instance_id) |
1904 | - yield self.compute.reboot_instance(self.context, instance_id) |
1905 | - yield self.compute.terminate_instance(self.context, instance_id) |
1906 | + self.compute.run_instance(self.context, instance_id) |
1907 | + self.compute.reboot_instance(self.context, instance_id) |
1908 | + self.compute.terminate_instance(self.context, instance_id) |
1909 | |
1910 | - @defer.inlineCallbacks |
1911 | def test_console_output(self): |
1912 | """Make sure we can get console output from instance""" |
1913 | instance_id = self._create_instance() |
1914 | - yield self.compute.run_instance(self.context, instance_id) |
1915 | + self.compute.run_instance(self.context, instance_id) |
1916 | |
1917 | - console = yield self.compute.get_console_output(self.context, |
1918 | + console = self.compute.get_console_output(self.context, |
1919 | instance_id) |
1920 | self.assert_(console) |
1921 | - yield self.compute.terminate_instance(self.context, instance_id) |
1922 | + self.compute.terminate_instance(self.context, instance_id) |
1923 | |
1924 | - @defer.inlineCallbacks |
1925 | def test_run_instance_existing(self): |
1926 | """Ensure failure when running an instance that already exists""" |
1927 | instance_id = self._create_instance() |
1928 | - yield self.compute.run_instance(self.context, instance_id) |
1929 | - self.assertFailure(self.compute.run_instance(self.context, |
1930 | - instance_id), |
1931 | - exception.Error) |
1932 | - yield self.compute.terminate_instance(self.context, instance_id) |
1933 | + self.compute.run_instance(self.context, instance_id) |
1934 | + self.assertRaises(exception.Error, |
1935 | + self.compute.run_instance, |
1936 | + self.context, |
1937 | + instance_id) |
1938 | + self.compute.terminate_instance(self.context, instance_id) |
1939 | |
1940 | === modified file 'nova/tests/flags_unittest.py' |
1941 | --- nova/tests/flags_unittest.py 2010-10-22 07:48:27 +0000 |
1942 | +++ nova/tests/flags_unittest.py 2010-12-16 20:49:10 +0000 |
1943 | @@ -24,7 +24,7 @@ |
1944 | flags.DEFINE_string('flags_unittest', 'foo', 'for testing purposes only') |
1945 | |
1946 | |
1947 | -class FlagsTestCase(test.TrialTestCase): |
1948 | +class FlagsTestCase(test.TestCase): |
1949 | |
1950 | def setUp(self): |
1951 | super(FlagsTestCase, self).setUp() |
1952 | |
1953 | === modified file 'nova/tests/misc_unittest.py' |
1954 | --- nova/tests/misc_unittest.py 2010-12-15 10:57:56 +0000 |
1955 | +++ nova/tests/misc_unittest.py 2010-12-16 20:49:10 +0000 |
1956 | @@ -20,7 +20,7 @@ |
1957 | from nova.utils import parse_mailmap, str_dict_replace |
1958 | |
1959 | |
1960 | -class ProjectTestCase(test.TrialTestCase): |
1961 | +class ProjectTestCase(test.TestCase): |
1962 | def test_authors_up_to_date(self): |
1963 | if os.path.exists('../.bzr'): |
1964 | contributors = set() |
1965 | |
1966 | === modified file 'nova/tests/network_unittest.py' |
1967 | --- nova/tests/network_unittest.py 2010-11-17 02:23:20 +0000 |
1968 | +++ nova/tests/network_unittest.py 2010-12-16 20:49:10 +0000 |
1969 | @@ -33,7 +33,7 @@ |
1970 | FLAGS = flags.FLAGS |
1971 | |
1972 | |
1973 | -class NetworkTestCase(test.TrialTestCase): |
1974 | +class NetworkTestCase(test.TestCase): |
1975 | """Test cases for network code""" |
1976 | def setUp(self): |
1977 | super(NetworkTestCase, self).setUp() |
1978 | |
1979 | === modified file 'nova/tests/objectstore_unittest.py' |
1980 | --- nova/tests/objectstore_unittest.py 2010-10-22 07:48:27 +0000 |
1981 | +++ nova/tests/objectstore_unittest.py 2010-12-16 20:49:10 +0000 |
1982 | @@ -54,7 +54,7 @@ |
1983 | os.makedirs(os.path.join(OSS_TEMPDIR, 'buckets')) |
1984 | |
1985 | |
1986 | -class ObjectStoreTestCase(test.TrialTestCase): |
1987 | +class ObjectStoreTestCase(test.TestCase): |
1988 | """Test objectstore API directly.""" |
1989 | |
1990 | def setUp(self): |
1991 | @@ -191,7 +191,7 @@ |
1992 | protocol = TestHTTPChannel |
1993 | |
1994 | |
1995 | -class S3APITestCase(test.TrialTestCase): |
1996 | +class S3APITestCase(test.TestCase): |
1997 | """Test objectstore through S3 API.""" |
1998 | |
1999 | def setUp(self): |
2000 | |
2001 | === removed file 'nova/tests/process_unittest.py' |
2002 | --- nova/tests/process_unittest.py 2010-10-22 07:48:27 +0000 |
2003 | +++ nova/tests/process_unittest.py 1970-01-01 00:00:00 +0000 |
2004 | @@ -1,132 +0,0 @@ |
2005 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
2006 | - |
2007 | -# Copyright 2010 United States Government as represented by the |
2008 | -# Administrator of the National Aeronautics and Space Administration. |
2009 | -# All Rights Reserved. |
2010 | -# |
2011 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
2012 | -# not use this file except in compliance with the License. You may obtain |
2013 | -# a copy of the License at |
2014 | -# |
2015 | -# http://www.apache.org/licenses/LICENSE-2.0 |
2016 | -# |
2017 | -# Unless required by applicable law or agreed to in writing, software |
2018 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
2019 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
2020 | -# License for the specific language governing permissions and limitations |
2021 | -# under the License. |
2022 | - |
2023 | -import logging |
2024 | -from twisted.internet import defer |
2025 | -from twisted.internet import reactor |
2026 | -from xml.etree import ElementTree |
2027 | - |
2028 | -from nova import exception |
2029 | -from nova import flags |
2030 | -from nova import process |
2031 | -from nova import test |
2032 | -from nova import utils |
2033 | - |
2034 | -FLAGS = flags.FLAGS |
2035 | - |
2036 | - |
2037 | -class ProcessTestCase(test.TrialTestCase): |
2038 | - def setUp(self): |
2039 | - logging.getLogger().setLevel(logging.DEBUG) |
2040 | - super(ProcessTestCase, self).setUp() |
2041 | - |
2042 | - def test_execute_stdout(self): |
2043 | - pool = process.ProcessPool(2) |
2044 | - d = pool.simple_execute('echo test') |
2045 | - |
2046 | - def _check(rv): |
2047 | - self.assertEqual(rv[0], 'test\n') |
2048 | - self.assertEqual(rv[1], '') |
2049 | - |
2050 | - d.addCallback(_check) |
2051 | - d.addErrback(self.fail) |
2052 | - return d |
2053 | - |
2054 | - def test_execute_stderr(self): |
2055 | - pool = process.ProcessPool(2) |
2056 | - d = pool.simple_execute('cat BAD_FILE', check_exit_code=False) |
2057 | - |
2058 | - def _check(rv): |
2059 | - self.assertEqual(rv[0], '') |
2060 | - self.assert_('No such file' in rv[1]) |
2061 | - |
2062 | - d.addCallback(_check) |
2063 | - d.addErrback(self.fail) |
2064 | - return d |
2065 | - |
2066 | - def test_execute_unexpected_stderr(self): |
2067 | - pool = process.ProcessPool(2) |
2068 | - d = pool.simple_execute('cat BAD_FILE') |
2069 | - d.addCallback(lambda x: self.fail('should have raised an error')) |
2070 | - d.addErrback(lambda failure: failure.trap(IOError)) |
2071 | - return d |
2072 | - |
2073 | - def test_max_processes(self): |
2074 | - pool = process.ProcessPool(2) |
2075 | - d1 = pool.simple_execute('sleep 0.01') |
2076 | - d2 = pool.simple_execute('sleep 0.01') |
2077 | - d3 = pool.simple_execute('sleep 0.005') |
2078 | - d4 = pool.simple_execute('sleep 0.005') |
2079 | - |
2080 | - called = [] |
2081 | - |
2082 | - def _called(rv, name): |
2083 | - called.append(name) |
2084 | - |
2085 | - d1.addCallback(_called, 'd1') |
2086 | - d2.addCallback(_called, 'd2') |
2087 | - d3.addCallback(_called, 'd3') |
2088 | - d4.addCallback(_called, 'd4') |
2089 | - |
2090 | - # Make sure that d3 and d4 had to wait on the other two and were called |
2091 | - # in order |
2092 | - # NOTE(termie): there may be a race condition in this test if for some |
2093 | - # reason one of the sleeps takes longer to complete |
2094 | - # than it should |
2095 | - d4.addCallback(lambda x: self.assertEqual(called[2], 'd3')) |
2096 | - d4.addCallback(lambda x: self.assertEqual(called[3], 'd4')) |
2097 | - d4.addErrback(self.fail) |
2098 | - return d4 |
2099 | - |
2100 | - def test_kill_long_process(self): |
2101 | - pool = process.ProcessPool(2) |
2102 | - |
2103 | - d1 = pool.simple_execute('sleep 1') |
2104 | - d2 = pool.simple_execute('sleep 0.005') |
2105 | - |
2106 | - timeout = reactor.callLater(0.1, self.fail, 'should have been killed') |
2107 | - |
2108 | - # kill d1 and wait on it to end then cancel the timeout |
2109 | - d2.addCallback(lambda _: d1.process.signalProcess('KILL')) |
2110 | - d2.addCallback(lambda _: d1) |
2111 | - d2.addBoth(lambda _: timeout.active() and timeout.cancel()) |
2112 | - d2.addErrback(self.fail) |
2113 | - return d2 |
2114 | - |
2115 | - def test_process_exit_is_contained(self): |
2116 | - pool = process.ProcessPool(2) |
2117 | - |
2118 | - d1 = pool.simple_execute('sleep 1') |
2119 | - d1.addCallback(lambda x: self.fail('should have errbacked')) |
2120 | - d1.addErrback(lambda fail: fail.trap(IOError)) |
2121 | - reactor.callLater(0.05, d1.process.signalProcess, 'KILL') |
2122 | - |
2123 | - return d1 |
2124 | - |
2125 | - def test_shared_pool_is_singleton(self): |
2126 | - pool1 = process.SharedPool() |
2127 | - pool2 = process.SharedPool() |
2128 | - self.assertEqual(id(pool1._instance), id(pool2._instance)) |
2129 | - |
2130 | - def test_shared_pool_works_as_singleton(self): |
2131 | - d1 = process.simple_execute('sleep 1') |
2132 | - d2 = process.simple_execute('sleep 0.005') |
2133 | - # lp609749: would have failed with |
2134 | - # exceptions.AssertionError: Someone released me too many times: |
2135 | - # too many tokens! |
2136 | - return d1 |
2137 | |
2138 | === modified file 'nova/tests/quota_unittest.py' |
2139 | --- nova/tests/quota_unittest.py 2010-11-24 22:52:10 +0000 |
2140 | +++ nova/tests/quota_unittest.py 2010-12-16 20:49:10 +0000 |
2141 | @@ -32,7 +32,7 @@ |
2142 | FLAGS = flags.FLAGS |
2143 | |
2144 | |
2145 | -class QuotaTestCase(test.TrialTestCase): |
2146 | +class QuotaTestCase(test.TestCase): |
2147 | def setUp(self): |
2148 | logging.getLogger().setLevel(logging.DEBUG) |
2149 | super(QuotaTestCase, self).setUp() |
2150 | |
2151 | === modified file 'nova/tests/rpc_unittest.py' |
2152 | --- nova/tests/rpc_unittest.py 2010-10-22 07:48:27 +0000 |
2153 | +++ nova/tests/rpc_unittest.py 2010-12-16 20:49:10 +0000 |
2154 | @@ -20,8 +20,6 @@ |
2155 | """ |
2156 | import logging |
2157 | |
2158 | -from twisted.internet import defer |
2159 | - |
2160 | from nova import context |
2161 | from nova import flags |
2162 | from nova import rpc |
2163 | @@ -31,7 +29,7 @@ |
2164 | FLAGS = flags.FLAGS |
2165 | |
2166 | |
2167 | -class RpcTestCase(test.TrialTestCase): |
2168 | +class RpcTestCase(test.TestCase): |
2169 | """Test cases for rpc""" |
2170 | def setUp(self): |
2171 | super(RpcTestCase, self).setUp() |
2172 | @@ -40,23 +38,22 @@ |
2173 | self.consumer = rpc.AdapterConsumer(connection=self.conn, |
2174 | topic='test', |
2175 | proxy=self.receiver) |
2176 | - self.consumer.attach_to_twisted() |
2177 | + self.consumer.attach_to_eventlet() |
2178 | self.context = context.get_admin_context() |
2179 | |
2180 | def test_call_succeed(self): |
2181 | """Get a value through rpc call""" |
2182 | value = 42 |
2183 | - result = yield rpc.call_twisted(self.context, |
2184 | - 'test', {"method": "echo", |
2185 | + result = rpc.call(self.context, 'test', {"method": "echo", |
2186 | "args": {"value": value}}) |
2187 | self.assertEqual(value, result) |
2188 | |
2189 | def test_context_passed(self): |
2190 | """Makes sure a context is passed through rpc call""" |
2191 | value = 42 |
2192 | - result = yield rpc.call_twisted(self.context, |
2193 | - 'test', {"method": "context", |
2194 | - "args": {"value": value}}) |
2195 | + result = rpc.call(self.context, |
2196 | + 'test', {"method": "context", |
2197 | + "args": {"value": value}}) |
2198 | self.assertEqual(self.context.to_dict(), result) |
2199 | |
2200 | def test_call_exception(self): |
2201 | @@ -67,14 +64,17 @@ |
2202 | to an int in the test. |
2203 | """ |
2204 | value = 42 |
2205 | - self.assertFailure(rpc.call_twisted(self.context, 'test', |
2206 | - {"method": "fail", |
2207 | - "args": {"value": value}}), |
2208 | - rpc.RemoteError) |
2209 | + self.assertRaises(rpc.RemoteError, |
2210 | + rpc.call, |
2211 | + self.context, |
2212 | + 'test', |
2213 | + {"method": "fail", |
2214 | + "args": {"value": value}}) |
2215 | try: |
2216 | - yield rpc.call_twisted(self.context, |
2217 | - 'test', {"method": "fail", |
2218 | - "args": {"value": value}}) |
2219 | + rpc.call(self.context, |
2220 | + 'test', |
2221 | + {"method": "fail", |
2222 | + "args": {"value": value}}) |
2223 | self.fail("should have thrown rpc.RemoteError") |
2224 | except rpc.RemoteError as exc: |
2225 | self.assertEqual(int(exc.value), value) |
2226 | @@ -89,13 +89,13 @@ |
2227 | def echo(context, value): |
2228 | """Simply returns whatever value is sent in""" |
2229 | logging.debug("Received %s", value) |
2230 | - return defer.succeed(value) |
2231 | + return value |
2232 | |
2233 | @staticmethod |
2234 | def context(context, value): |
2235 | """Returns dictionary version of context""" |
2236 | logging.debug("Received %s", context) |
2237 | - return defer.succeed(context.to_dict()) |
2238 | + return context.to_dict() |
2239 | |
2240 | @staticmethod |
2241 | def fail(context, value): |
2242 | |
2243 | === modified file 'nova/tests/scheduler_unittest.py' |
2244 | --- nova/tests/scheduler_unittest.py 2010-10-26 06:37:51 +0000 |
2245 | +++ nova/tests/scheduler_unittest.py 2010-12-16 20:49:10 +0000 |
2246 | @@ -44,7 +44,7 @@ |
2247 | return 'named_host' |
2248 | |
2249 | |
2250 | -class SchedulerTestCase(test.TrialTestCase): |
2251 | +class SchedulerTestCase(test.TestCase): |
2252 | """Test case for scheduler""" |
2253 | def setUp(self): |
2254 | super(SchedulerTestCase, self).setUp() |
2255 | @@ -73,7 +73,7 @@ |
2256 | scheduler.named_method(ctxt, 'topic', num=7) |
2257 | |
2258 | |
2259 | -class SimpleDriverTestCase(test.TrialTestCase): |
2260 | +class SimpleDriverTestCase(test.TestCase): |
2261 | """Test case for simple driver""" |
2262 | def setUp(self): |
2263 | super(SimpleDriverTestCase, self).setUp() |
2264 | @@ -122,12 +122,12 @@ |
2265 | 'nova-compute', |
2266 | 'compute', |
2267 | FLAGS.compute_manager) |
2268 | - compute1.startService() |
2269 | + compute1.start() |
2270 | compute2 = service.Service('host2', |
2271 | 'nova-compute', |
2272 | 'compute', |
2273 | FLAGS.compute_manager) |
2274 | - compute2.startService() |
2275 | + compute2.start() |
2276 | hosts = self.scheduler.driver.hosts_up(self.context, 'compute') |
2277 | self.assertEqual(len(hosts), 2) |
2278 | compute1.kill() |
2279 | @@ -139,12 +139,12 @@ |
2280 | 'nova-compute', |
2281 | 'compute', |
2282 | FLAGS.compute_manager) |
2283 | - compute1.startService() |
2284 | + compute1.start() |
2285 | compute2 = service.Service('host2', |
2286 | 'nova-compute', |
2287 | 'compute', |
2288 | FLAGS.compute_manager) |
2289 | - compute2.startService() |
2290 | + compute2.start() |
2291 | instance_id1 = self._create_instance() |
2292 | compute1.run_instance(self.context, instance_id1) |
2293 | instance_id2 = self._create_instance() |
2294 | @@ -162,12 +162,12 @@ |
2295 | 'nova-compute', |
2296 | 'compute', |
2297 | FLAGS.compute_manager) |
2298 | - compute1.startService() |
2299 | + compute1.start() |
2300 | compute2 = service.Service('host2', |
2301 | 'nova-compute', |
2302 | 'compute', |
2303 | FLAGS.compute_manager) |
2304 | - compute2.startService() |
2305 | + compute2.start() |
2306 | instance_ids1 = [] |
2307 | instance_ids2 = [] |
2308 | for index in xrange(FLAGS.max_cores): |
2309 | @@ -195,12 +195,12 @@ |
2310 | 'nova-volume', |
2311 | 'volume', |
2312 | FLAGS.volume_manager) |
2313 | - volume1.startService() |
2314 | + volume1.start() |
2315 | volume2 = service.Service('host2', |
2316 | 'nova-volume', |
2317 | 'volume', |
2318 | FLAGS.volume_manager) |
2319 | - volume2.startService() |
2320 | + volume2.start() |
2321 | volume_id1 = self._create_volume() |
2322 | volume1.create_volume(self.context, volume_id1) |
2323 | volume_id2 = self._create_volume() |
2324 | @@ -218,12 +218,12 @@ |
2325 | 'nova-volume', |
2326 | 'volume', |
2327 | FLAGS.volume_manager) |
2328 | - volume1.startService() |
2329 | + volume1.start() |
2330 | volume2 = service.Service('host2', |
2331 | 'nova-volume', |
2332 | 'volume', |
2333 | FLAGS.volume_manager) |
2334 | - volume2.startService() |
2335 | + volume2.start() |
2336 | volume_ids1 = [] |
2337 | volume_ids2 = [] |
2338 | for index in xrange(FLAGS.max_gigabytes): |
2339 | |
2340 | === modified file 'nova/tests/service_unittest.py' |
2341 | --- nova/tests/service_unittest.py 2010-10-26 06:08:57 +0000 |
2342 | +++ nova/tests/service_unittest.py 2010-12-16 20:49:10 +0000 |
2343 | @@ -22,9 +22,6 @@ |
2344 | |
2345 | import mox |
2346 | |
2347 | -from twisted.application.app import startApplication |
2348 | -from twisted.internet import defer |
2349 | - |
2350 | from nova import exception |
2351 | from nova import flags |
2352 | from nova import rpc |
2353 | @@ -48,7 +45,7 @@ |
2354 | return 'service' |
2355 | |
2356 | |
2357 | -class ServiceManagerTestCase(test.TrialTestCase): |
2358 | +class ServiceManagerTestCase(test.TestCase): |
2359 | """Test cases for Services""" |
2360 | |
2361 | def test_attribute_error_for_no_manager(self): |
2362 | @@ -63,7 +60,7 @@ |
2363 | 'test', |
2364 | 'test', |
2365 | 'nova.tests.service_unittest.FakeManager') |
2366 | - serv.startService() |
2367 | + serv.start() |
2368 | self.assertEqual(serv.test_method(), 'manager') |
2369 | |
2370 | def test_override_manager_method(self): |
2371 | @@ -71,11 +68,11 @@ |
2372 | 'test', |
2373 | 'test', |
2374 | 'nova.tests.service_unittest.FakeManager') |
2375 | - serv.startService() |
2376 | + serv.start() |
2377 | self.assertEqual(serv.test_method(), 'service') |
2378 | |
2379 | |
2380 | -class ServiceTestCase(test.TrialTestCase): |
2381 | +class ServiceTestCase(test.TestCase): |
2382 | """Test cases for Services""" |
2383 | |
2384 | def setUp(self): |
2385 | @@ -94,8 +91,6 @@ |
2386 | self.mox.StubOutWithMock(rpc, |
2387 | 'AdapterConsumer', |
2388 | use_mock_anything=True) |
2389 | - self.mox.StubOutWithMock( |
2390 | - service.task, 'LoopingCall', use_mock_anything=True) |
2391 | rpc.AdapterConsumer(connection=mox.IgnoreArg(), |
2392 | topic=topic, |
2393 | proxy=mox.IsA(service.Service)).AndReturn( |
2394 | @@ -106,19 +101,8 @@ |
2395 | proxy=mox.IsA(service.Service)).AndReturn( |
2396 | rpc.AdapterConsumer) |
2397 | |
2398 | - rpc.AdapterConsumer.attach_to_twisted() |
2399 | - rpc.AdapterConsumer.attach_to_twisted() |
2400 | - |
2401 | - # Stub out looping call a bit needlessly since we don't have an easy |
2402 | - # way to cancel it (yet) when the tests finishes |
2403 | - service.task.LoopingCall(mox.IgnoreArg()).AndReturn( |
2404 | - service.task.LoopingCall) |
2405 | - service.task.LoopingCall.start(interval=mox.IgnoreArg(), |
2406 | - now=mox.IgnoreArg()) |
2407 | - service.task.LoopingCall(mox.IgnoreArg()).AndReturn( |
2408 | - service.task.LoopingCall) |
2409 | - service.task.LoopingCall.start(interval=mox.IgnoreArg(), |
2410 | - now=mox.IgnoreArg()) |
2411 | + rpc.AdapterConsumer.attach_to_eventlet() |
2412 | + rpc.AdapterConsumer.attach_to_eventlet() |
2413 | |
2414 | service_create = {'host': host, |
2415 | 'binary': binary, |
2416 | @@ -136,14 +120,14 @@ |
2417 | service_create).AndReturn(service_ref) |
2418 | self.mox.ReplayAll() |
2419 | |
2420 | - startApplication(app, False) |
2421 | + app.start() |
2422 | + app.stop() |
2423 | self.assert_(app) |
2424 | |
2425 | # We're testing sort of weird behavior in how report_state decides |
2426 | # whether it is disconnected, it looks for a variable on itself called |
2427 | # 'model_disconnected' and report_state doesn't really do much so this |
2428 | # these are mostly just for coverage |
2429 | - @defer.inlineCallbacks |
2430 | def test_report_state_no_service(self): |
2431 | host = 'foo' |
2432 | binary = 'bar' |
2433 | @@ -173,10 +157,9 @@ |
2434 | binary, |
2435 | topic, |
2436 | 'nova.tests.service_unittest.FakeManager') |
2437 | - serv.startService() |
2438 | - yield serv.report_state() |
2439 | + serv.start() |
2440 | + serv.report_state() |
2441 | |
2442 | - @defer.inlineCallbacks |
2443 | def test_report_state_newly_disconnected(self): |
2444 | host = 'foo' |
2445 | binary = 'bar' |
2446 | @@ -204,11 +187,10 @@ |
2447 | binary, |
2448 | topic, |
2449 | 'nova.tests.service_unittest.FakeManager') |
2450 | - serv.startService() |
2451 | - yield serv.report_state() |
2452 | + serv.start() |
2453 | + serv.report_state() |
2454 | self.assert_(serv.model_disconnected) |
2455 | |
2456 | - @defer.inlineCallbacks |
2457 | def test_report_state_newly_connected(self): |
2458 | host = 'foo' |
2459 | binary = 'bar' |
2460 | @@ -238,8 +220,8 @@ |
2461 | binary, |
2462 | topic, |
2463 | 'nova.tests.service_unittest.FakeManager') |
2464 | - serv.startService() |
2465 | + serv.start() |
2466 | serv.model_disconnected = True |
2467 | - yield serv.report_state() |
2468 | + serv.report_state() |
2469 | |
2470 | self.assert_(not serv.model_disconnected) |
2471 | |
2472 | === removed file 'nova/tests/validator_unittest.py' |
2473 | --- nova/tests/validator_unittest.py 2010-10-22 07:48:27 +0000 |
2474 | +++ nova/tests/validator_unittest.py 1970-01-01 00:00:00 +0000 |
2475 | @@ -1,42 +0,0 @@ |
2476 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
2477 | - |
2478 | -# Copyright 2010 United States Government as represented by the |
2479 | -# Administrator of the National Aeronautics and Space Administration. |
2480 | -# All Rights Reserved. |
2481 | -# |
2482 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
2483 | -# not use this file except in compliance with the License. You may obtain |
2484 | -# a copy of the License at |
2485 | -# |
2486 | -# http://www.apache.org/licenses/LICENSE-2.0 |
2487 | -# |
2488 | -# Unless required by applicable law or agreed to in writing, software |
2489 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
2490 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
2491 | -# License for the specific language governing permissions and limitations |
2492 | -# under the License. |
2493 | - |
2494 | -import logging |
2495 | -import unittest |
2496 | - |
2497 | -from nova import flags |
2498 | -from nova import test |
2499 | -from nova import validate |
2500 | - |
2501 | - |
2502 | -class ValidationTestCase(test.TrialTestCase): |
2503 | - def setUp(self): |
2504 | - super(ValidationTestCase, self).setUp() |
2505 | - |
2506 | - def tearDown(self): |
2507 | - super(ValidationTestCase, self).tearDown() |
2508 | - |
2509 | - def test_type_validation(self): |
2510 | - self.assertTrue(type_case("foo", 5, 1)) |
2511 | - self.assertRaises(TypeError, type_case, "bar", "5", 1) |
2512 | - self.assertRaises(TypeError, type_case, None, 5, 1) |
2513 | - |
2514 | - |
2515 | -@validate.typetest(instanceid=str, size=int, number_of_instances=int) |
2516 | -def type_case(instanceid, size, number_of_instances): |
2517 | - return True |
2518 | |
2519 | === modified file 'nova/tests/virt_unittest.py' |
2520 | --- nova/tests/virt_unittest.py 2010-10-24 22:06:42 +0000 |
2521 | +++ nova/tests/virt_unittest.py 2010-12-16 20:49:10 +0000 |
2522 | @@ -30,7 +30,7 @@ |
2523 | flags.DECLARE('instances_path', 'nova.compute.manager') |
2524 | |
2525 | |
2526 | -class LibvirtConnTestCase(test.TrialTestCase): |
2527 | +class LibvirtConnTestCase(test.TestCase): |
2528 | def setUp(self): |
2529 | super(LibvirtConnTestCase, self).setUp() |
2530 | self.manager = manager.AuthManager() |
2531 | @@ -123,7 +123,7 @@ |
2532 | self.manager.delete_user(self.user) |
2533 | |
2534 | |
2535 | -class NWFilterTestCase(test.TrialTestCase): |
2536 | +class NWFilterTestCase(test.TestCase): |
2537 | |
2538 | def setUp(self): |
2539 | super(NWFilterTestCase, self).setUp() |
2540 | @@ -235,7 +235,7 @@ |
2541 | 'project_id': 'fake'}) |
2542 | inst_id = instance_ref['id'] |
2543 | |
2544 | - def _ensure_all_called(_): |
2545 | + def _ensure_all_called(): |
2546 | instance_filter = 'nova-instance-%s' % instance_ref['name'] |
2547 | secgroup_filter = 'nova-secgroup-%s' % self.security_group['id'] |
2548 | for required in [secgroup_filter, 'allow-dhcp-server', |
2549 | @@ -253,7 +253,6 @@ |
2550 | instance = db.instance_get(self.context, inst_id) |
2551 | |
2552 | d = self.fw.setup_nwfilters_for_instance(instance) |
2553 | - d.addCallback(_ensure_all_called) |
2554 | - d.addCallback(lambda _: self.teardown_security_group()) |
2555 | - |
2556 | + _ensure_all_called() |
2557 | + self.teardown_security_group() |
2558 | return d |
2559 | |
2560 | === modified file 'nova/tests/volume_unittest.py' |
2561 | --- nova/tests/volume_unittest.py 2010-11-03 21:38:14 +0000 |
2562 | +++ nova/tests/volume_unittest.py 2010-12-16 20:49:10 +0000 |
2563 | @@ -21,8 +21,6 @@ |
2564 | """ |
2565 | import logging |
2566 | |
2567 | -from twisted.internet import defer |
2568 | - |
2569 | from nova import context |
2570 | from nova import exception |
2571 | from nova import db |
2572 | @@ -33,7 +31,7 @@ |
2573 | FLAGS = flags.FLAGS |
2574 | |
2575 | |
2576 | -class VolumeTestCase(test.TrialTestCase): |
2577 | +class VolumeTestCase(test.TestCase): |
2578 | """Test Case for volumes.""" |
2579 | |
2580 | def setUp(self): |
2581 | @@ -56,51 +54,48 @@ |
2582 | vol['attach_status'] = "detached" |
2583 | return db.volume_create(context.get_admin_context(), vol)['id'] |
2584 | |
2585 | - @defer.inlineCallbacks |
2586 | def test_create_delete_volume(self): |
2587 | """Test volume can be created and deleted.""" |
2588 | volume_id = self._create_volume() |
2589 | - yield self.volume.create_volume(self.context, volume_id) |
2590 | + self.volume.create_volume(self.context, volume_id) |
2591 | self.assertEqual(volume_id, db.volume_get(context.get_admin_context(), |
2592 | volume_id).id) |
2593 | |
2594 | - yield self.volume.delete_volume(self.context, volume_id) |
2595 | + self.volume.delete_volume(self.context, volume_id) |
2596 | self.assertRaises(exception.NotFound, |
2597 | db.volume_get, |
2598 | self.context, |
2599 | volume_id) |
2600 | |
2601 | - @defer.inlineCallbacks |
2602 | def test_too_big_volume(self): |
2603 | """Ensure failure if a too large of a volume is requested.""" |
2604 | # FIXME(vish): validation needs to move into the data layer in |
2605 | # volume_create |
2606 | - defer.returnValue(True) |
2607 | + return True |
2608 | try: |
2609 | volume_id = self._create_volume('1001') |
2610 | - yield self.volume.create_volume(self.context, volume_id) |
2611 | + self.volume.create_volume(self.context, volume_id) |
2612 | self.fail("Should have thrown TypeError") |
2613 | except TypeError: |
2614 | pass |
2615 | |
2616 | - @defer.inlineCallbacks |
2617 | def test_too_many_volumes(self): |
2618 | """Ensure that NoMoreTargets is raised when we run out of volumes.""" |
2619 | vols = [] |
2620 | total_slots = FLAGS.iscsi_num_targets |
2621 | for _index in xrange(total_slots): |
2622 | volume_id = self._create_volume() |
2623 | - yield self.volume.create_volume(self.context, volume_id) |
2624 | + self.volume.create_volume(self.context, volume_id) |
2625 | vols.append(volume_id) |
2626 | volume_id = self._create_volume() |
2627 | - self.assertFailure(self.volume.create_volume(self.context, |
2628 | - volume_id), |
2629 | - db.NoMoreTargets) |
2630 | + self.assertRaises(db.NoMoreTargets, |
2631 | + self.volume.create_volume, |
2632 | + self.context, |
2633 | + volume_id) |
2634 | db.volume_destroy(context.get_admin_context(), volume_id) |
2635 | for volume_id in vols: |
2636 | - yield self.volume.delete_volume(self.context, volume_id) |
2637 | + self.volume.delete_volume(self.context, volume_id) |
2638 | |
2639 | - @defer.inlineCallbacks |
2640 | def test_run_attach_detach_volume(self): |
2641 | """Make sure volume can be attached and detached from instance.""" |
2642 | inst = {} |
2643 | @@ -115,15 +110,15 @@ |
2644 | instance_id = db.instance_create(self.context, inst)['id'] |
2645 | mountpoint = "/dev/sdf" |
2646 | volume_id = self._create_volume() |
2647 | - yield self.volume.create_volume(self.context, volume_id) |
2648 | + self.volume.create_volume(self.context, volume_id) |
2649 | if FLAGS.fake_tests: |
2650 | db.volume_attached(self.context, volume_id, instance_id, |
2651 | mountpoint) |
2652 | else: |
2653 | - yield self.compute.attach_volume(self.context, |
2654 | - instance_id, |
2655 | - volume_id, |
2656 | - mountpoint) |
2657 | + self.compute.attach_volume(self.context, |
2658 | + instance_id, |
2659 | + volume_id, |
2660 | + mountpoint) |
2661 | vol = db.volume_get(context.get_admin_context(), volume_id) |
2662 | self.assertEqual(vol['status'], "in-use") |
2663 | self.assertEqual(vol['attach_status'], "attached") |
2664 | @@ -131,25 +126,26 @@ |
2665 | instance_ref = db.volume_get_instance(self.context, volume_id) |
2666 | self.assertEqual(instance_ref['id'], instance_id) |
2667 | |
2668 | - self.assertFailure(self.volume.delete_volume(self.context, volume_id), |
2669 | - exception.Error) |
2670 | + self.assertRaises(exception.Error, |
2671 | + self.volume.delete_volume, |
2672 | + self.context, |
2673 | + volume_id) |
2674 | if FLAGS.fake_tests: |
2675 | db.volume_detached(self.context, volume_id) |
2676 | else: |
2677 | - yield self.compute.detach_volume(self.context, |
2678 | - instance_id, |
2679 | - volume_id) |
2680 | + self.compute.detach_volume(self.context, |
2681 | + instance_id, |
2682 | + volume_id) |
2683 | vol = db.volume_get(self.context, volume_id) |
2684 | self.assertEqual(vol['status'], "available") |
2685 | |
2686 | - yield self.volume.delete_volume(self.context, volume_id) |
2687 | + self.volume.delete_volume(self.context, volume_id) |
2688 | self.assertRaises(exception.Error, |
2689 | db.volume_get, |
2690 | self.context, |
2691 | volume_id) |
2692 | db.instance_destroy(self.context, instance_id) |
2693 | |
2694 | - @defer.inlineCallbacks |
2695 | def test_concurrent_volumes_get_different_targets(self): |
2696 | """Ensure multiple concurrent volumes get different targets.""" |
2697 | volume_ids = [] |
2698 | @@ -164,15 +160,11 @@ |
2699 | self.assert_(iscsi_target not in targets) |
2700 | targets.append(iscsi_target) |
2701 | logging.debug("Target %s allocated", iscsi_target) |
2702 | - deferreds = [] |
2703 | total_slots = FLAGS.iscsi_num_targets |
2704 | for _index in xrange(total_slots): |
2705 | volume_id = self._create_volume() |
2706 | d = self.volume.create_volume(self.context, volume_id) |
2707 | - d.addCallback(_check) |
2708 | - d.addErrback(self.fail) |
2709 | - deferreds.append(d) |
2710 | - yield defer.DeferredList(deferreds) |
2711 | + _check(d) |
2712 | for volume_id in volume_ids: |
2713 | self.volume.delete_volume(self.context, volume_id) |
2714 | |
2715 | |
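
The test conversions above all follow the same shape: drop @defer.inlineCallbacks, call the manager method directly instead of yielding its Deferred, and turn assertFailure(deferred, Exc) into assertRaises(Exc, callable, *args). A minimal, self-contained sketch of that pattern follows; FakeVolumeManager and NoMoreTargets are illustrative stand-ins, not Nova code.

    import unittest


    class NoMoreTargets(Exception):
        pass


    class FakeVolumeManager(object):
        """Stand-in with the same call shape as the real volume manager."""

        def __init__(self, slots=1):
            self._slots = slots

        def create_volume(self, context, volume_id):
            if self._slots <= 0:
                raise NoMoreTargets(volume_id)
            self._slots -= 1
            return volume_id


    class ConversionPatternTestCase(unittest.TestCase):
        def test_runs_out_of_targets(self):
            volume = FakeVolumeManager(slots=1)
            # before: yield self.volume.create_volume(self.context, volume_id)
            volume.create_volume(None, 'vol-1')
            # before: self.assertFailure(self.volume.create_volume(...), NoMoreTargets)
            self.assertRaises(NoMoreTargets,
                              volume.create_volume, None, 'vol-2')


    if __name__ == '__main__':
        unittest.main()
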
2716 | === modified file 'nova/utils.py' |
2717 | --- nova/utils.py 2010-11-23 20:58:46 +0000 |
2718 | +++ nova/utils.py 2010-12-16 20:49:10 +0000 |
2719 | @@ -31,7 +31,8 @@ |
2720 | import sys |
2721 | from xml.sax import saxutils |
2722 | |
2723 | -from twisted.internet.threads import deferToThread |
2724 | +from eventlet import event |
2725 | +from eventlet import greenthread |
2726 | |
2727 | from nova import exception |
2728 | from nova import flags |
2729 | @@ -75,7 +76,7 @@ |
2730 | |
2731 | |
2732 | def execute(cmd, process_input=None, addl_env=None, check_exit_code=True): |
2733 | - logging.debug("Running cmd: %s", cmd) |
2734 | + logging.debug("Running cmd (subprocess): %s", cmd) |
2735 | env = os.environ.copy() |
2736 | if addl_env: |
2737 | env.update(addl_env) |
2738 | @@ -95,6 +96,10 @@ |
2739 | stdout=stdout, |
2740 | stderr=stderr, |
2741 | cmd=cmd) |
2742 | + # NOTE(termie): this appears to be necessary to let the subprocess call |
2743 | + # clean something up in between calls; without it, the |
2744 | + # second of two back-to-back execute calls hangs |
2745 | + greenthread.sleep(0) |
2746 | return result |
2747 | |
2748 | |
2749 | @@ -123,13 +128,7 @@ |
2750 | |
2751 | def runthis(prompt, cmd, check_exit_code=True): |
2752 | logging.debug("Running %s" % (cmd)) |
2753 | - exit_code = subprocess.call(cmd.split(" ")) |
2754 | - logging.debug(prompt % (exit_code)) |
2755 | - if check_exit_code and exit_code != 0: |
2756 | - raise ProcessExecutionError(exit_code=exit_code, |
2757 | - stdout=None, |
2758 | - stderr=None, |
2759 | - cmd=cmd) |
2760 | + rv, err = execute(cmd, check_exit_code=check_exit_code) |
2761 | |
2762 | |
2763 | def generate_uid(topic, size=8): |
2764 | @@ -224,10 +223,41 @@ |
2765 | return getattr(backend, key) |
2766 | |
2767 | |
2768 | -def deferredToThread(f): |
2769 | - def g(*args, **kwargs): |
2770 | - return deferToThread(f, *args, **kwargs) |
2771 | - return g |
2772 | +class LoopingCall(object): |
2773 | + def __init__(self, f=None, *args, **kw): |
2774 | + self.args = args |
2775 | + self.kw = kw |
2776 | + self.f = f |
2777 | + self._running = False |
2778 | + |
2779 | + def start(self, interval, now=True): |
2780 | + self._running = True |
2781 | + done = event.Event() |
2782 | + |
2783 | + def _inner(): |
2784 | + if not now: |
2785 | + greenthread.sleep(interval) |
2786 | + try: |
2787 | + while self._running: |
2788 | + self.f(*self.args, **self.kw) |
2789 | + greenthread.sleep(interval) |
2790 | + except Exception: |
2791 | + logging.exception('in looping call') |
2792 | + done.send_exception(*sys.exc_info()) |
2793 | + return |
2794 | + |
2795 | + done.send(True) |
2796 | + |
2797 | + self.done = done |
2798 | + |
2799 | + greenthread.spawn(_inner) |
2800 | + return self.done |
2801 | + |
2802 | + def stop(self): |
2803 | + self._running = False |
2804 | + |
2805 | + def wait(self): |
2806 | + return self.done.wait() |
2807 | |
2808 | |
2809 | def xhtml_escape(value): |
2810 | |
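
Two things worth calling out in the utils.py hunk: greenthread.sleep(0) is a cooperative yield, giving other greenthreads a turn between subprocess calls (per the NOTE, without it the second of two back-to-back execute calls hangs), and LoopingCall is the eventlet replacement for twisted's task.LoopingCall that the virt drivers further down rely on. Below is a rough usage sketch of LoopingCall, assuming this branch's nova.utils is importable; the timings are arbitrary.

    from eventlet import greenthread

    from nova import utils

    state = {'ticks': 0}


    def tick():
        state['ticks'] += 1


    timer = utils.LoopingCall(tick)
    done = timer.start(interval=0.1, now=True)  # returns an eventlet Event
    greenthread.sleep(0.35)                     # let a few iterations run
    timer.stop()                                # loop exits at its next wake-up
    done.wait()                                 # block until the loop has finished
    print(state['ticks'])                       # roughly 4 with these timings
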
2811 | === removed file 'nova/validate.py' |
2812 | --- nova/validate.py 2010-10-21 18:49:51 +0000 |
2813 | +++ nova/validate.py 1970-01-01 00:00:00 +0000 |
2814 | @@ -1,94 +0,0 @@ |
2815 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
2816 | - |
2817 | -# Copyright 2010 United States Government as represented by the |
2818 | -# Administrator of the National Aeronautics and Space Administration. |
2819 | -# All Rights Reserved. |
2820 | -# |
2821 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
2822 | -# not use this file except in compliance with the License. You may obtain |
2823 | -# a copy of the License at |
2824 | -# |
2825 | -# http://www.apache.org/licenses/LICENSE-2.0 |
2826 | -# |
2827 | -# Unless required by applicable law or agreed to in writing, software |
2828 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
2829 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
2830 | -# License for the specific language governing permissions and limitations |
2831 | -# under the License. |
2832 | - |
2833 | -"""Decorators for argument validation, courtesy of |
2834 | -http://rmi.net/~lutz/rangetest.html""" |
2835 | - |
2836 | - |
2837 | -def rangetest(**argchecks): |
2838 | - """Validate ranges for both + defaults""" |
2839 | - |
2840 | - def onDecorator(func): |
2841 | - """onCall remembers func and argchecks""" |
2842 | - import sys |
2843 | - code = func.__code__ if sys.version_info[0] == 3 else func.func_code |
2844 | - allargs = code.co_varnames[:code.co_argcount] |
2845 | - funcname = func.__name__ |
2846 | - |
2847 | - def onCall(*pargs, **kargs): |
2848 | - # all pargs match first N args by position |
2849 | - # the rest must be in kargs or omitted defaults |
2850 | - positionals = list(allargs) |
2851 | - positionals = positionals[:len(pargs)] |
2852 | - |
2853 | - for (argname, (low, high)) in argchecks.items(): |
2854 | - # for all args to be checked |
2855 | - if argname in kargs: |
2856 | - # was passed by name |
2857 | - if float(kargs[argname]) < low or \ |
2858 | - float(kargs[argname]) > high: |
2859 | - errmsg = '{0} argument "{1}" not in {2}..{3}' |
2860 | - errmsg = errmsg.format(funcname, argname, low, high) |
2861 | - raise TypeError(errmsg) |
2862 | - |
2863 | - elif argname in positionals: |
2864 | - # was passed by position |
2865 | - position = positionals.index(argname) |
2866 | - if float(pargs[position]) < low or \ |
2867 | - float(pargs[position]) > high: |
2868 | - errmsg = '{0} argument "{1}" with value of {4} ' \ |
2869 | - 'not in {2}..{3}' |
2870 | - errmsg = errmsg.format(funcname, argname, low, high, |
2871 | - pargs[position]) |
2872 | - raise TypeError(errmsg) |
2873 | - else: |
2874 | - pass |
2875 | - |
2876 | - return func(*pargs, **kargs) # okay: run original call |
2877 | - return onCall |
2878 | - return onDecorator |
2879 | - |
2880 | - |
2881 | -def typetest(**argchecks): |
2882 | - def onDecorator(func): |
2883 | - import sys |
2884 | - code = func.__code__ if sys.version_info[0] == 3 else func.func_code |
2885 | - allargs = code.co_varnames[:code.co_argcount] |
2886 | - funcname = func.__name__ |
2887 | - |
2888 | - def onCall(*pargs, **kargs): |
2889 | - positionals = list(allargs)[:len(pargs)] |
2890 | - for (argname, typeof) in argchecks.items(): |
2891 | - if argname in kargs: |
2892 | - if not isinstance(kargs[argname], typeof): |
2893 | - errmsg = '{0} argument "{1}" not of type {2}' |
2894 | - errmsg = errmsg.format(funcname, argname, typeof) |
2895 | - raise TypeError(errmsg) |
2896 | - elif argname in positionals: |
2897 | - position = positionals.index(argname) |
2898 | - if not isinstance(pargs[position], typeof): |
2899 | - errmsg = '{0} argument "{1}" with value of {2} ' \ |
2900 | - 'not of type {3}' |
2901 | - errmsg = errmsg.format(funcname, argname, |
2902 | - pargs[position], typeof) |
2903 | - raise TypeError(errmsg) |
2904 | - else: |
2905 | - pass |
2906 | - return func(*pargs, **kargs) |
2907 | - return onCall |
2908 | - return onDecorator |
2909 | |
2910 | === modified file 'nova/virt/fake.py' |
2911 | --- nova/virt/fake.py 2010-11-01 20:13:18 +0000 |
2912 | +++ nova/virt/fake.py 2010-12-16 20:49:10 +0000 |
2913 | @@ -25,8 +25,6 @@ |
2914 | |
2915 | """ |
2916 | |
2917 | -from twisted.internet import defer |
2918 | - |
2919 | from nova import exception |
2920 | from nova.compute import power_state |
2921 | |
2922 | @@ -107,7 +105,6 @@ |
2923 | fake_instance = FakeInstance() |
2924 | self.instances[instance.name] = fake_instance |
2925 | fake_instance._state = power_state.RUNNING |
2926 | - return defer.succeed(None) |
2927 | |
2928 | def reboot(self, instance): |
2929 | """ |
2930 | @@ -119,19 +116,19 @@ |
2931 | The work will be done asynchronously. This function returns a |
2932 | Deferred that allows the caller to detect when it is complete. |
2933 | """ |
2934 | - return defer.succeed(None) |
2935 | + pass |
2936 | |
2937 | def rescue(self, instance): |
2938 | """ |
2939 | Rescue the specified instance. |
2940 | """ |
2941 | - return defer.succeed(None) |
2942 | + pass |
2943 | |
2944 | def unrescue(self, instance): |
2945 | """ |
2946 | Unrescue the specified instance. |
2947 | """ |
2948 | - return defer.succeed(None) |
2949 | + pass |
2950 | |
2951 | def destroy(self, instance): |
2952 | """ |
2953 | @@ -144,7 +141,6 @@ |
2954 | Deferred that allows the caller to detect when it is complete. |
2955 | """ |
2956 | del self.instances[instance.name] |
2957 | - return defer.succeed(None) |
2958 | |
2959 | def attach_volume(self, instance_name, device_path, mountpoint): |
2960 | """Attach the disk at device_path to the instance at mountpoint""" |
2961 | |
2962 | === modified file 'nova/virt/images.py' |
2963 | --- nova/virt/images.py 2010-10-22 00:15:21 +0000 |
2964 | +++ nova/virt/images.py 2010-12-16 20:49:10 +0000 |
2965 | @@ -26,7 +26,7 @@ |
2966 | import urlparse |
2967 | |
2968 | from nova import flags |
2969 | -from nova import process |
2970 | +from nova import utils |
2971 | from nova.auth import manager |
2972 | from nova.auth import signer |
2973 | from nova.objectstore import image |
2974 | @@ -50,7 +50,7 @@ |
2975 | |
2976 | # This should probably move somewhere else, like e.g. a download_as |
2977 | # method on User objects and at the same time get rewritten to use |
2978 | - # twisted web client. |
2979 | + # a web client. |
2980 | headers = {} |
2981 | headers['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()) |
2982 | |
2983 | @@ -63,15 +63,16 @@ |
2984 | |
2985 | cmd = ['/usr/bin/curl', '--fail', '--silent', url] |
2986 | for (k, v) in headers.iteritems(): |
2987 | - cmd += ['-H', '%s: %s' % (k, v)] |
2988 | + cmd += ['-H', '"%s: %s"' % (k, v)] |
2989 | |
2990 | cmd += ['-o', path] |
2991 | - return process.SharedPool().execute(executable=cmd[0], args=cmd[1:]) |
2992 | + cmd_out = ' '.join(cmd) |
2993 | + return utils.execute(cmd_out) |
2994 | |
2995 | |
2996 | def _fetch_local_image(image, path, user, project): |
2997 | source = _image_path('%s/image' % image) |
2998 | - return process.simple_execute('cp %s %s' % (source, path)) |
2999 | + return utils.execute('cp %s %s' % (source, path)) |
3000 | |
3001 | |
3002 | def _image_path(path): |
3003 | |
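
Note on the curl hunk: the argv list is now flattened into a single string for utils.execute, presumably so the shell that runs it keeps each "Name: value" header as one argument, which is why the header values gain literal quotes. A sketch of the construction with made-up URL, path and header values, plus a variant using the standard-library pipes.quote, which also copes with values that themselves contain quotes; this is only an illustration, not a proposed change.

    import pipes

    url = 'http://objectstore:3333/bucket/image'           # made up
    path = '/tmp/disk-raw'                                  # made up
    headers = {'Date': 'Thu, 16 Dec 2010 20:49:10 GMT',     # made up
               'Authorization': 'AWS access:signature'}

    # manual quoting, as in the diff above
    cmd = ['/usr/bin/curl', '--fail', '--silent', url]
    for (k, v) in headers.iteritems():
        cmd += ['-H', '"%s: %s"' % (k, v)]
    cmd += ['-o', path]
    print(' '.join(cmd))

    # the same command with pipes.quote doing the escaping
    cmd2 = ['/usr/bin/curl', '--fail', '--silent', pipes.quote(url)]
    for (k, v) in headers.iteritems():
        cmd2 += ['-H', pipes.quote('%s: %s' % (k, v))]
    cmd2 += ['-o', pipes.quote(path)]
    print(' '.join(cmd2))
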
3004 | === modified file 'nova/virt/libvirt_conn.py' |
3005 | --- nova/virt/libvirt_conn.py 2010-11-17 19:56:42 +0000 |
3006 | +++ nova/virt/libvirt_conn.py 2010-12-16 20:49:10 +0000 |
3007 | @@ -45,16 +45,15 @@ |
3008 | import os |
3009 | import shutil |
3010 | |
3011 | +from eventlet import event |
3012 | +from eventlet import tpool |
3013 | + |
3014 | import IPy |
3015 | -from twisted.internet import defer |
3016 | -from twisted.internet import task |
3017 | -from twisted.internet import threads |
3018 | |
3019 | from nova import context |
3020 | from nova import db |
3021 | from nova import exception |
3022 | from nova import flags |
3023 | -from nova import process |
3024 | from nova import utils |
3025 | #from nova.api import context |
3026 | from nova.auth import manager |
3027 | @@ -184,14 +183,12 @@ |
3028 | except Exception as _err: |
3029 | pass |
3030 | # If the instance is already terminated, we're still happy |
3031 | - d = defer.Deferred() |
3032 | - if cleanup: |
3033 | - d.addCallback(lambda _: self._cleanup(instance)) |
3034 | - # FIXME: What does this comment mean? |
3035 | - # TODO(termie): short-circuit me for tests |
3036 | - # WE'LL save this for when we do shutdown, |
3037 | + |
3038 | + done = event.Event() |
3039 | + |
3040 | + # We'll save this for when we do shutdown, |
3041 | # instead of destroy - but destroy returns immediately |
3042 | - timer = task.LoopingCall(f=None) |
3043 | + timer = utils.LoopingCall(f=None) |
3044 | |
3045 | def _wait_for_shutdown(): |
3046 | try: |
3047 | @@ -200,17 +197,26 @@ |
3048 | instance['id'], state) |
3049 | if state == power_state.SHUTDOWN: |
3050 | timer.stop() |
3051 | - d.callback(None) |
3052 | except Exception: |
3053 | db.instance_set_state(context.get_admin_context(), |
3054 | instance['id'], |
3055 | power_state.SHUTDOWN) |
3056 | timer.stop() |
3057 | - d.callback(None) |
3058 | |
3059 | timer.f = _wait_for_shutdown |
3060 | - timer.start(interval=0.5, now=True) |
3061 | - return d |
3062 | + timer_done = timer.start(interval=0.5, now=True) |
3063 | + |
3064 | + # NOTE(termie): this is strictly superfluous (we could put the |
3065 | + # cleanup code in the timer), but this emulates the |
3066 | + # previous model so I am keeping it around until |
3067 | + # everything has been vetted a bit |
3068 | + def _wait_for_timer(): |
3069 | + timer_done.wait() |
3070 | + self._cleanup(instance) |
3071 | + done.send() |
3072 | + |
3073 | + greenthread.spawn(_wait_for_timer) |
3074 | + return done |
3075 | |
3076 | def _cleanup(self, instance): |
3077 | target = os.path.join(FLAGS.instances_path, instance['name']) |
3078 | @@ -219,7 +225,6 @@ |
3079 | if os.path.exists(target): |
3080 | shutil.rmtree(target) |
3081 | |
3082 | - @defer.inlineCallbacks |
3083 | @exception.wrap_exception |
3084 | def attach_volume(self, instance_name, device_path, mountpoint): |
3085 | virt_dom = self._conn.lookupByName(instance_name) |
3086 | @@ -230,7 +235,6 @@ |
3087 | <target dev='%s' bus='virtio'/> |
3088 | </disk>""" % (device_path, mount_device) |
3089 | virt_dom.attachDevice(xml) |
3090 | - yield |
3091 | |
3092 | def _get_disk_xml(self, xml, device): |
3093 | """Returns the xml for the disk mounted at device""" |
3094 | @@ -252,7 +256,6 @@ |
3095 | if doc != None: |
3096 | doc.freeDoc() |
3097 | |
3098 | - @defer.inlineCallbacks |
3099 | @exception.wrap_exception |
3100 | def detach_volume(self, instance_name, mountpoint): |
3101 | virt_dom = self._conn.lookupByName(instance_name) |
3102 | @@ -261,17 +264,13 @@ |
3103 | if not xml: |
3104 | raise exception.NotFound("No disk at %s" % mount_device) |
3105 | virt_dom.detachDevice(xml) |
3106 | - yield |
3107 | |
3108 | - @defer.inlineCallbacks |
3109 | @exception.wrap_exception |
3110 | def reboot(self, instance): |
3111 | - yield self.destroy(instance, False) |
3112 | + self.destroy(instance, False) |
3113 | xml = self.to_xml(instance) |
3114 | - yield self._conn.createXML(xml, 0) |
3115 | - |
3116 | - d = defer.Deferred() |
3117 | - timer = task.LoopingCall(f=None) |
3118 | + self._conn.createXML(xml, 0) |
3119 | + timer = utils.LoopingCall(f=None) |
3120 | |
3121 | def _wait_for_reboot(): |
3122 | try: |
3123 | @@ -281,33 +280,28 @@ |
3124 | if state == power_state.RUNNING: |
3125 | logging.debug('instance %s: rebooted', instance['name']) |
3126 | timer.stop() |
3127 | - d.callback(None) |
3128 | except Exception, exn: |
3129 | logging.error('_wait_for_reboot failed: %s', exn) |
3130 | db.instance_set_state(context.get_admin_context(), |
3131 | instance['id'], |
3132 | power_state.SHUTDOWN) |
3133 | timer.stop() |
3134 | - d.callback(None) |
3135 | |
3136 | timer.f = _wait_for_reboot |
3137 | - timer.start(interval=0.5, now=True) |
3138 | - yield d |
3139 | + return timer.start(interval=0.5, now=True) |
3140 | |
3141 | - @defer.inlineCallbacks |
3142 | @exception.wrap_exception |
3143 | def rescue(self, instance): |
3144 | - yield self.destroy(instance, False) |
3145 | + self.destroy(instance, False) |
3146 | |
3147 | xml = self.to_xml(instance, rescue=True) |
3148 | rescue_images = {'image_id': FLAGS.rescue_image_id, |
3149 | 'kernel_id': FLAGS.rescue_kernel_id, |
3150 | 'ramdisk_id': FLAGS.rescue_ramdisk_id} |
3151 | - yield self._create_image(instance, xml, 'rescue-', rescue_images) |
3152 | - yield self._conn.createXML(xml, 0) |
3153 | + self._create_image(instance, xml, 'rescue-', rescue_images) |
3154 | + self._conn.createXML(xml, 0) |
3155 | |
3156 | - d = defer.Deferred() |
3157 | - timer = task.LoopingCall(f=None) |
3158 | + timer = utils.LoopingCall(f=None) |
3159 | |
3160 | def _wait_for_rescue(): |
3161 | try: |
3162 | @@ -316,27 +310,22 @@ |
3163 | if state == power_state.RUNNING: |
3164 | logging.debug('instance %s: rescued', instance['name']) |
3165 | timer.stop() |
3166 | - d.callback(None) |
3167 | except Exception, exn: |
3168 | logging.error('_wait_for_rescue failed: %s', exn) |
3169 | db.instance_set_state(None, |
3170 | instance['id'], |
3171 | power_state.SHUTDOWN) |
3172 | timer.stop() |
3173 | - d.callback(None) |
3174 | |
3175 | timer.f = _wait_for_rescue |
3176 | - timer.start(interval=0.5, now=True) |
3177 | - yield d |
3178 | + return timer.start(interval=0.5, now=True) |
3179 | |
3180 | - @defer.inlineCallbacks |
3181 | @exception.wrap_exception |
3182 | def unrescue(self, instance): |
3183 | # NOTE(vish): Because reboot destroys and recreates an instance using |
3184 | # the normal xml file, we can just call reboot here |
3185 | - yield self.reboot(instance) |
3186 | + self.reboot(instance) |
3187 | |
3188 | - @defer.inlineCallbacks |
3189 | @exception.wrap_exception |
3190 | def spawn(self, instance): |
3191 | xml = self.to_xml(instance) |
3192 | @@ -344,14 +333,12 @@ |
3193 | instance['id'], |
3194 | power_state.NOSTATE, |
3195 | 'launching') |
3196 | - yield NWFilterFirewall(self._conn).\ |
3197 | - setup_nwfilters_for_instance(instance) |
3198 | - yield self._create_image(instance, xml) |
3199 | - yield self._conn.createXML(xml, 0) |
3200 | + NWFilterFirewall(self._conn).setup_nwfilters_for_instance(instance) |
3201 | + self._create_image(instance, xml) |
3202 | + self._conn.createXML(xml, 0) |
3203 | logging.debug("instance %s: is running", instance['name']) |
3204 | |
3205 | - local_d = defer.Deferred() |
3206 | - timer = task.LoopingCall(f=None) |
3207 | + timer = utils.LoopingCall(f=None) |
3208 | |
3209 | def _wait_for_boot(): |
3210 | try: |
3211 | @@ -361,7 +348,6 @@ |
3212 | if state == power_state.RUNNING: |
3213 | logging.debug('instance %s: booted', instance['name']) |
3214 | timer.stop() |
3215 | - local_d.callback(None) |
3216 | except: |
3217 | logging.exception('instance %s: failed to boot', |
3218 | instance['name']) |
3219 | @@ -369,10 +355,9 @@ |
3220 | instance['id'], |
3221 | power_state.SHUTDOWN) |
3222 | timer.stop() |
3223 | - local_d.callback(None) |
3224 | + |
3225 | timer.f = _wait_for_boot |
3226 | - timer.start(interval=0.5, now=True) |
3227 | - yield local_d |
3228 | + return timer.start(interval=0.5, now=True) |
3229 | |
3230 | def _flush_xen_console(self, virsh_output): |
3231 | logging.info('virsh said: %r' % (virsh_output,)) |
3232 | @@ -380,10 +365,9 @@ |
3233 | |
3234 | if virsh_output.startswith('/dev/'): |
3235 | logging.info('cool, it\'s a device') |
3236 | - d = process.simple_execute("sudo dd if=%s iflag=nonblock" % |
3237 | - virsh_output, check_exit_code=False) |
3238 | - d.addCallback(lambda r: r[0]) |
3239 | - return d |
3240 | + out, err = utils.execute("sudo dd if=%s iflag=nonblock" % |
3241 | + virsh_output, check_exit_code=False) |
3242 | + return out |
3243 | else: |
3244 | return '' |
3245 | |
3246 | @@ -403,21 +387,20 @@ |
3247 | def get_console_output(self, instance): |
3248 | console_log = os.path.join(FLAGS.instances_path, instance['name'], |
3249 | 'console.log') |
3250 | - d = process.simple_execute('sudo chown %d %s' % (os.getuid(), |
3251 | - console_log)) |
3252 | + |
3253 | + utils.execute('sudo chown %d %s' % (os.getuid(), console_log)) |
3254 | + |
3255 | if FLAGS.libvirt_type == 'xen': |
3256 | - # Xen is spethial |
3257 | - d.addCallback(lambda _: |
3258 | - process.simple_execute("virsh ttyconsole %s" % |
3259 | - instance['name'])) |
3260 | - d.addCallback(self._flush_xen_console) |
3261 | - d.addCallback(self._append_to_file, console_log) |
3262 | + # Xen is special |
3263 | + virsh_output = utils.execute("virsh ttyconsole %s" % |
3264 | + instance['name']) |
3265 | + data = self._flush_xen_console(virsh_output) |
3266 | + fpath = self._append_to_file(data, console_log) |
3267 | else: |
3268 | - d.addCallback(lambda _: defer.succeed(console_log)) |
3269 | - d.addCallback(self._dump_file) |
3270 | - return d |
3271 | - |
3272 | - @defer.inlineCallbacks |
3273 | + fpath = console_log |
3274 | + |
3275 | + return self._dump_file(fpath) |
3276 | + |
3277 | def _create_image(self, inst, libvirt_xml, prefix='', disk_images=None): |
3278 | # syntactic nicety |
3279 | basepath = lambda fname = '', prefix = prefix: os.path.join( |
3280 | @@ -426,8 +409,8 @@ |
3281 | prefix + fname) |
3282 | |
3283 | # ensure directories exist and are writable |
3284 | - yield process.simple_execute('mkdir -p %s' % basepath(prefix='')) |
3285 | - yield process.simple_execute('chmod 0777 %s' % basepath(prefix='')) |
3286 | + utils.execute('mkdir -p %s' % basepath(prefix='')) |
3287 | + utils.execute('chmod 0777 %s' % basepath(prefix='')) |
3288 | |
3289 | # TODO(termie): these are blocking calls, it would be great |
3290 | # if they weren't. |
3291 | @@ -448,19 +431,19 @@ |
3292 | 'kernel_id': inst['kernel_id'], |
3293 | 'ramdisk_id': inst['ramdisk_id']} |
3294 | if not os.path.exists(basepath('disk')): |
3295 | - yield images.fetch(inst.image_id, basepath('disk-raw'), user, |
3296 | - project) |
3297 | + images.fetch(inst.image_id, basepath('disk-raw'), user, |
3298 | + project) |
3299 | if not os.path.exists(basepath('kernel')): |
3300 | - yield images.fetch(inst.kernel_id, basepath('kernel'), user, |
3301 | - project) |
3302 | + images.fetch(inst.kernel_id, basepath('kernel'), user, |
3303 | + project) |
3304 | if not os.path.exists(basepath('ramdisk')): |
3305 | - yield images.fetch(inst.ramdisk_id, basepath('ramdisk'), user, |
3306 | - project) |
3307 | + images.fetch(inst.ramdisk_id, basepath('ramdisk'), user, |
3308 | + project) |
3309 | |
3310 | - execute = lambda cmd, process_input = None, check_exit_code = True: \ |
3311 | - process.simple_execute(cmd=cmd, |
3312 | - process_input=process_input, |
3313 | - check_exit_code=check_exit_code) |
3314 | + def execute(cmd, process_input=None, check_exit_code=True): |
3315 | + return utils.execute(cmd=cmd, |
3316 | + process_input=process_input, |
3317 | + check_exit_code=check_exit_code) |
3318 | |
3319 | key = str(inst['key_data']) |
3320 | net = None |
3321 | @@ -482,11 +465,11 @@ |
3322 | if net: |
3323 | logging.info('instance %s: injecting net into image %s', |
3324 | inst['name'], inst.image_id) |
3325 | - yield disk.inject_data(basepath('disk-raw'), key, net, |
3326 | - execute=execute) |
3327 | + disk.inject_data(basepath('disk-raw'), key, net, |
3328 | + execute=execute) |
3329 | |
3330 | if os.path.exists(basepath('disk')): |
3331 | - yield process.simple_execute('rm -f %s' % basepath('disk')) |
3332 | + utils.execute('rm -f %s' % basepath('disk')) |
3333 | |
3334 | local_bytes = (instance_types.INSTANCE_TYPES[inst.instance_type] |
3335 | ['local_gb'] |
3336 | @@ -495,12 +478,11 @@ |
3337 | resize = True |
3338 | if inst['instance_type'] == 'm1.tiny' or prefix == 'rescue-': |
3339 | resize = False |
3340 | - yield disk.partition(basepath('disk-raw'), basepath('disk'), |
3341 | - local_bytes, resize, execute=execute) |
3342 | + disk.partition(basepath('disk-raw'), basepath('disk'), |
3343 | + local_bytes, resize, execute=execute) |
3344 | |
3345 | if FLAGS.libvirt_type == 'uml': |
3346 | - yield process.simple_execute('sudo chown root %s' % |
3347 | - basepath('disk')) |
3348 | + utils.execute('sudo chown root %s' % basepath('disk')) |
3349 | |
3350 | def to_xml(self, instance, rescue=False): |
3351 | # TODO(termie): cache? |
3352 | @@ -758,15 +740,15 @@ |
3353 | def _define_filter(self, xml): |
3354 | if callable(xml): |
3355 | xml = xml() |
3356 | - d = threads.deferToThread(self._conn.nwfilterDefineXML, xml) |
3357 | - return d |
3358 | + |
3359 | + # execute in a native thread and block current greenthread until done |
3360 | + tpool.execute(self._conn.nwfilterDefineXML, xml) |
3361 | |
3362 | @staticmethod |
3363 | def _get_net_and_mask(cidr): |
3364 | net = IPy.IP(cidr) |
3365 | return str(net.net()), str(net.netmask()) |
3366 | |
3367 | - @defer.inlineCallbacks |
3368 | def setup_nwfilters_for_instance(self, instance): |
3369 | """ |
3370 | Creates an NWFilter for the given instance. In the process, |
3371 | @@ -774,10 +756,10 @@ |
3372 | the base filter are all in place. |
3373 | """ |
3374 | |
3375 | - yield self._define_filter(self.nova_base_ipv4_filter) |
3376 | - yield self._define_filter(self.nova_base_ipv6_filter) |
3377 | - yield self._define_filter(self.nova_dhcp_filter) |
3378 | - yield self._define_filter(self.nova_base_filter) |
3379 | + self._define_filter(self.nova_base_ipv4_filter) |
3380 | + self._define_filter(self.nova_base_ipv6_filter) |
3381 | + self._define_filter(self.nova_dhcp_filter) |
3382 | + self._define_filter(self.nova_base_filter) |
3383 | |
3384 | nwfilter_xml = "<filter name='nova-instance-%s' chain='root'>\n" \ |
3385 | " <filterref filter='nova-base' />\n" % \ |
3386 | @@ -789,20 +771,19 @@ |
3387 | net, mask = self._get_net_and_mask(network_ref['cidr']) |
3388 | project_filter = self.nova_project_filter(instance['project_id'], |
3389 | net, mask) |
3390 | - yield self._define_filter(project_filter) |
3391 | + self._define_filter(project_filter) |
3392 | |
3393 | nwfilter_xml += " <filterref filter='nova-project-%s' />\n" % \ |
3394 | instance['project_id'] |
3395 | |
3396 | for security_group in instance.security_groups: |
3397 | - yield self.ensure_security_group_filter(security_group['id']) |
3398 | + self.ensure_security_group_filter(security_group['id']) |
3399 | |
3400 | nwfilter_xml += " <filterref filter='nova-secgroup-%d' />\n" % \ |
3401 | security_group['id'] |
3402 | nwfilter_xml += "</filter>" |
3403 | |
3404 | - yield self._define_filter(nwfilter_xml) |
3405 | - return |
3406 | + self._define_filter(nwfilter_xml) |
3407 | |
3408 | def ensure_security_group_filter(self, security_group_id): |
3409 | return self._define_filter( |
3410 | |
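
Two eventlet idioms carry most of the libvirt changes above: tpool.execute() runs a blocking C-level call (nwfilterDefineXML) in a native thread while only the calling greenthread waits, and an event.Event signalled from a greenthread.spawn()ed worker replaces the old Deferred callback chain in destroy(). One small observation: destroy() now calls greenthread.spawn(), but the new imports at the top of the file only add event and tpool, so a `from eventlet import greenthread` import appears to be assumed. A minimal, self-contained sketch of the two idioms; blocking_define() is a stand-in, not a libvirt call.

    import time

    from eventlet import event
    from eventlet import greenthread
    from eventlet import tpool


    def blocking_define(xml):
        # pretend this is a C-level libvirt call that blocks its whole thread
        time.sleep(0.2)
        return 'defined: %s' % xml


    # 1) run the blocking call in a native thread; only this greenthread waits
    result = tpool.execute(blocking_define, '<filter name="nova-base"/>')
    print(result)

    # 2) signal completion from a background greenthread via an Event
    done = event.Event()


    def _wait_then_cleanup():
        greenthread.sleep(0.1)   # stands in for timer_done.wait()
        print('cleanup ran')
        done.send()


    greenthread.spawn(_wait_then_cleanup)
    done.wait()                  # the caller blocks here until send()
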
3411 | === modified file 'nova/virt/xenapi/network_utils.py' |
3412 | --- nova/virt/xenapi/network_utils.py 2010-12-06 12:42:34 +0000 |
3413 | +++ nova/virt/xenapi/network_utils.py 2010-12-16 20:49:10 +0000 |
3414 | @@ -20,8 +20,6 @@ |
3415 | their lookup functions. |
3416 | """ |
3417 | |
3418 | -from twisted.internet import defer |
3419 | - |
3420 | |
3421 | class NetworkHelper(): |
3422 | """ |
3423 | @@ -31,14 +29,12 @@ |
3424 | return |
3425 | |
3426 | @classmethod |
3427 | - @defer.inlineCallbacks |
3428 | def find_network_with_bridge(cls, session, bridge): |
3429 | - """ Return the network on which the bridge is attached, if found """ |
3430 | + """ Return the network on which the bridge is attached, if found.""" |
3431 | expr = 'field "bridge" = "%s"' % bridge |
3432 | - networks = yield session.call_xenapi('network.get_all_records_where', |
3433 | - expr) |
3434 | + networks = session.call_xenapi('network.get_all_records_where', expr) |
3435 | if len(networks) == 1: |
3436 | - defer.returnValue(networks.keys()[0]) |
3437 | + return networks.keys()[0] |
3438 | elif len(networks) > 1: |
3439 | raise Exception('Found non-unique network for bridge %s' % bridge) |
3440 | else: |
3441 | |
3442 | === modified file 'nova/virt/xenapi/vm_utils.py' |
3443 | --- nova/virt/xenapi/vm_utils.py 2010-12-09 19:37:30 +0000 |
3444 | +++ nova/virt/xenapi/vm_utils.py 2010-12-16 20:49:10 +0000 |
3445 | @@ -21,19 +21,14 @@ |
3446 | |
3447 | import logging |
3448 | import urllib |
3449 | - |
3450 | -from twisted.internet import defer |
3451 | from xml.dom import minidom |
3452 | |
3453 | -from nova import flags |
3454 | from nova import utils |
3455 | - |
3456 | from nova.auth.manager import AuthManager |
3457 | from nova.compute import instance_types |
3458 | from nova.compute import power_state |
3459 | from nova.virt import images |
3460 | |
3461 | -FLAGS = flags.FLAGS |
3462 | |
3463 | XENAPI_POWER_STATE = { |
3464 | 'Halted': power_state.SHUTDOWN, |
3465 | @@ -42,6 +37,7 @@ |
3466 | 'Suspended': power_state.SHUTDOWN, # FIXME |
3467 | 'Crashed': power_state.CRASHED} |
3468 | |
3469 | + |
3470 | XenAPI = None |
3471 | |
3472 | |
3473 | @@ -64,7 +60,6 @@ |
3474 | XenAPI = __import__('XenAPI') |
3475 | |
3476 | @classmethod |
3477 | - @defer.inlineCallbacks |
3478 | def create_vm(cls, session, instance, kernel, ramdisk): |
3479 | """Create a VM record. Returns a Deferred that gives the new |
3480 | VM reference.""" |
3481 | @@ -102,12 +97,11 @@ |
3482 | 'other_config': {}, |
3483 | } |
3484 | logging.debug('Created VM %s...', instance.name) |
3485 | - vm_ref = yield session.call_xenapi('VM.create', rec) |
3486 | + vm_ref = session.call_xenapi('VM.create', rec) |
3487 | logging.debug('Created VM %s as %s.', instance.name, vm_ref) |
3488 | - defer.returnValue(vm_ref) |
3489 | + return vm_ref |
3490 | |
3491 | @classmethod |
3492 | - @defer.inlineCallbacks |
3493 | def create_vbd(cls, session, vm_ref, vdi_ref, userdevice, bootable): |
3494 | """Create a VBD record. Returns a Deferred that gives the new |
3495 | VBD reference.""" |
3496 | @@ -126,13 +120,12 @@ |
3497 | vbd_rec['qos_algorithm_params'] = {} |
3498 | vbd_rec['qos_supported_algorithms'] = [] |
3499 | logging.debug('Creating VBD for VM %s, VDI %s ... ', vm_ref, vdi_ref) |
3500 | - vbd_ref = yield session.call_xenapi('VBD.create', vbd_rec) |
3501 | + vbd_ref = session.call_xenapi('VBD.create', vbd_rec) |
3502 | logging.debug('Created VBD %s for VM %s, VDI %s.', vbd_ref, vm_ref, |
3503 | vdi_ref) |
3504 | - defer.returnValue(vbd_ref) |
3505 | + return vbd_ref |
3506 | |
3507 | @classmethod |
3508 | - @defer.inlineCallbacks |
3509 | def create_vif(cls, session, vm_ref, network_ref, mac_address): |
3510 | """Create a VIF record. Returns a Deferred that gives the new |
3511 | VIF reference.""" |
3512 | @@ -148,13 +141,12 @@ |
3513 | vif_rec['qos_algorithm_params'] = {} |
3514 | logging.debug('Creating VIF for VM %s, network %s ... ', vm_ref, |
3515 | network_ref) |
3516 | - vif_ref = yield session.call_xenapi('VIF.create', vif_rec) |
3517 | + vif_ref = session.call_xenapi('VIF.create', vif_rec) |
3518 | logging.debug('Created VIF %s for VM %s, network %s.', vif_ref, |
3519 | vm_ref, network_ref) |
3520 | - defer.returnValue(vif_ref) |
3521 | + return vif_ref |
3522 | |
3523 | @classmethod |
3524 | - @defer.inlineCallbacks |
3525 | def fetch_image(cls, session, image, user, project, use_sr): |
3526 | """use_sr: True to put the image as a VDI in an SR, False to place |
3527 | it on dom0's filesystem. The former is for VM disks, the latter for |
3528 | @@ -171,12 +163,11 @@ |
3529 | args['password'] = user.secret |
3530 | if use_sr: |
3531 | args['add_partition'] = 'true' |
3532 | - task = yield session.async_call_plugin('objectstore', fn, args) |
3533 | - uuid = yield session.wait_for_task(task) |
3534 | - defer.returnValue(uuid) |
3535 | + task = session.async_call_plugin('objectstore', fn, args) |
3536 | + uuid = session.wait_for_task(task) |
3537 | + return uuid |
3538 | |
3539 | @classmethod |
3540 | - @utils.deferredToThread |
3541 | def lookup(cls, session, i): |
3542 | """ Look the instance i up, and returns it if available """ |
3543 | return VMHelper.lookup_blocking(session, i) |
3544 | @@ -194,7 +185,6 @@ |
3545 | return vms[0] |
3546 | |
3547 | @classmethod |
3548 | - @utils.deferredToThread |
3549 | def lookup_vm_vdis(cls, session, vm): |
3550 | """ Look for the VDIs that are attached to the VM """ |
3551 | return VMHelper.lookup_vm_vdis_blocking(session, vm) |
3552 | |
3553 | === modified file 'nova/virt/xenapi/vmops.py' |
3554 | --- nova/virt/xenapi/vmops.py 2010-12-09 17:08:24 +0000 |
3555 | +++ nova/virt/xenapi/vmops.py 2010-12-16 20:49:10 +0000 |
3556 | @@ -20,8 +20,6 @@ |
3557 | |
3558 | import logging |
3559 | |
3560 | -from twisted.internet import defer |
3561 | - |
3562 | from nova import db |
3563 | from nova import context |
3564 | |
3565 | @@ -49,10 +47,9 @@ |
3566 | return [self._session.get_xenapi().VM.get_name_label(vm) \ |
3567 | for vm in self._session.get_xenapi().VM.get_all()] |
3568 | |
3569 | - @defer.inlineCallbacks |
3570 | def spawn(self, instance): |
3571 | """ Create VM instance """ |
3572 | - vm = yield VMHelper.lookup(self._session, instance.name) |
3573 | + vm = VMHelper.lookup(self._session, instance.name) |
3574 | if vm is not None: |
3575 | raise Exception('Attempted to create non-unique name %s' % |
3576 | instance.name) |
3577 | @@ -60,66 +57,63 @@ |
3578 | bridge = db.project_get_network(context.get_admin_context(), |
3579 | instance.project_id).bridge |
3580 | network_ref = \ |
3581 | - yield NetworkHelper.find_network_with_bridge(self._session, bridge) |
3582 | + NetworkHelper.find_network_with_bridge(self._session, bridge) |
3583 | |
3584 | user = AuthManager().get_user(instance.user_id) |
3585 | project = AuthManager().get_project(instance.project_id) |
3586 | - vdi_uuid = yield VMHelper.fetch_image(self._session, |
3587 | - instance.image_id, user, project, True) |
3588 | - kernel = yield VMHelper.fetch_image(self._session, |
3589 | - instance.kernel_id, user, project, False) |
3590 | - ramdisk = yield VMHelper.fetch_image(self._session, |
3591 | - instance.ramdisk_id, user, project, False) |
3592 | - vdi_ref = yield self._session.call_xenapi('VDI.get_by_uuid', vdi_uuid) |
3593 | - vm_ref = yield VMHelper.create_vm(self._session, |
3594 | - instance, kernel, ramdisk) |
3595 | - yield VMHelper.create_vbd(self._session, vm_ref, vdi_ref, 0, True) |
3596 | + vdi_uuid = VMHelper.fetch_image( |
3597 | + self._session, instance.image_id, user, project, True) |
3598 | + kernel = VMHelper.fetch_image( |
3599 | + self._session, instance.kernel_id, user, project, False) |
3600 | + ramdisk = VMHelper.fetch_image( |
3601 | + self._session, instance.ramdisk_id, user, project, False) |
3602 | + vdi_ref = self._session.call_xenapi('VDI.get_by_uuid', vdi_uuid) |
3603 | + vm_ref = VMHelper.create_vm( |
3604 | + self._session, instance, kernel, ramdisk) |
3605 | + VMHelper.create_vbd(self._session, vm_ref, vdi_ref, 0, True) |
3606 | if network_ref: |
3607 | - yield VMHelper.create_vif(self._session, vm_ref, |
3608 | - network_ref, instance.mac_address) |
3609 | + VMHelper.create_vif(self._session, vm_ref, |
3610 | + network_ref, instance.mac_address) |
3611 | logging.debug('Starting VM %s...', vm_ref) |
3612 | - yield self._session.call_xenapi('VM.start', vm_ref, False, False) |
3613 | + self._session.call_xenapi('VM.start', vm_ref, False, False) |
3614 | logging.info('Spawning VM %s created %s.', instance.name, |
3615 | vm_ref) |
3616 | |
3617 | - @defer.inlineCallbacks |
3618 | def reboot(self, instance): |
3619 | """ Reboot VM instance """ |
3620 | instance_name = instance.name |
3621 | - vm = yield VMHelper.lookup(self._session, instance_name) |
3622 | + vm = VMHelper.lookup(self._session, instance_name) |
3623 | if vm is None: |
3624 | raise Exception('instance not present %s' % instance_name) |
3625 | - task = yield self._session.call_xenapi('Async.VM.clean_reboot', vm) |
3626 | - yield self._session.wait_for_task(task) |
3627 | + task = self._session.call_xenapi('Async.VM.clean_reboot', vm) |
3628 | + self._session.wait_for_task(task) |
3629 | |
3630 | - @defer.inlineCallbacks |
3631 | def destroy(self, instance): |
3632 | """ Destroy VM instance """ |
3633 | - vm = yield VMHelper.lookup(self._session, instance.name) |
3634 | + vm = VMHelper.lookup(self._session, instance.name) |
3635 | if vm is None: |
3636 | # Don't complain, just return. This lets us clean up instances |
3637 | # that have already disappeared from the underlying platform. |
3638 | - defer.returnValue(None) |
3639 | + return |
3640 | # Get the VDIs related to the VM |
3641 | - vdis = yield VMHelper.lookup_vm_vdis(self._session, vm) |
3642 | + vdis = VMHelper.lookup_vm_vdis(self._session, vm) |
3643 | try: |
3644 | - task = yield self._session.call_xenapi('Async.VM.hard_shutdown', |
3645 | + task = self._session.call_xenapi('Async.VM.hard_shutdown', |
3646 | vm) |
3647 | - yield self._session.wait_for_task(task) |
3648 | + self._session.wait_for_task(task) |
3649 | except XenAPI.Failure, exc: |
3650 | logging.warn(exc) |
3651 | # Disk clean-up |
3652 | if vdis: |
3653 | for vdi in vdis: |
3654 | try: |
3655 | - task = yield self._session.call_xenapi('Async.VDI.destroy', |
3656 | - vdi) |
3657 | - yield self._session.wait_for_task(task) |
3658 | + task = self._session.call_xenapi('Async.VDI.destroy', vdi) |
3659 | + self._session.wait_for_task(task) |
3660 | except XenAPI.Failure, exc: |
3661 | logging.warn(exc) |
3662 | try: |
3663 | - task = yield self._session.call_xenapi('Async.VM.destroy', vm) |
3664 | - yield self._session.wait_for_task(task) |
3665 | + task = self._session.call_xenapi('Async.VM.destroy', vm) |
3666 | + self._session.wait_for_task(task) |
3667 | except XenAPI.Failure, exc: |
3668 | logging.warn(exc) |
3669 | |
3670 | @@ -131,14 +125,13 @@ |
3671 | rec = self._session.get_xenapi().VM.get_record(vm) |
3672 | return VMHelper.compile_info(rec) |
3673 | |
3674 | - @defer.inlineCallbacks |
3675 | def get_diagnostics(self, instance_id): |
3676 | """Return data about VM diagnostics""" |
3677 | - vm = yield VMHelper.lookup(self._session, instance_id) |
3678 | + vm = VMHelper.lookup(self._session, instance_id) |
3679 | if vm is None: |
3680 | raise Exception("instance not present %s" % instance_id) |
3681 | - rec = yield self._session.get_xenapi().VM.get_record(vm) |
3682 | - defer.returnValue(VMHelper.compile_diagnostics(self._session, rec)) |
3683 | + rec = self._session.get_xenapi().VM.get_record(vm) |
3684 | + return VMHelper.compile_diagnostics(self._session, rec) |
3685 | |
3686 | def get_console_output(self, instance): |
3687 | """ Return snapshot of console """ |
3688 | |
3689 | === modified file 'nova/virt/xenapi_conn.py' |
3690 | --- nova/virt/xenapi_conn.py 2010-12-08 20:16:49 +0000 |
3691 | +++ nova/virt/xenapi_conn.py 2010-12-16 20:49:10 +0000 |
3692 | @@ -48,10 +48,11 @@ |
3693 | """ |
3694 | |
3695 | import logging |
3696 | +import sys |
3697 | import xmlrpclib |
3698 | |
3699 | -from twisted.internet import defer |
3700 | -from twisted.internet import reactor |
3701 | +from eventlet import event |
3702 | +from eventlet import tpool |
3703 | |
3704 | from nova import utils |
3705 | from nova import flags |
3706 | @@ -159,53 +160,51 @@ |
3707 | """ Return the xenapi host """ |
3708 | return self._session.xenapi.session.get_this_host(self._session.handle) |
3709 | |
3710 | - @utils.deferredToThread |
3711 | def call_xenapi(self, method, *args): |
3712 | - """Call the specified XenAPI method on a background thread. Returns |
3713 | - a Deferred for the result.""" |
3714 | + """Call the specified XenAPI method on a background thread.""" |
3715 | f = self._session.xenapi |
3716 | for m in method.split('.'): |
3717 | f = f.__getattr__(m) |
3718 | - return f(*args) |
3719 | + return tpool.execute(f, *args) |
3720 | |
3721 | - @utils.deferredToThread |
3722 | def async_call_plugin(self, plugin, fn, args): |
3723 | - """Call Async.host.call_plugin on a background thread. Returns a |
3724 | - Deferred with the task reference.""" |
3725 | - return _unwrap_plugin_exceptions( |
3726 | - self._session.xenapi.Async.host.call_plugin, |
3727 | - self.get_xenapi_host(), plugin, fn, args) |
3728 | + """Call Async.host.call_plugin on a background thread.""" |
3729 | + return tpool.execute(_unwrap_plugin_exceptions, |
3730 | + self._session.xenapi.Async.host.call_plugin, |
3731 | + self.get_xenapi_host(), plugin, fn, args) |
3732 | |
3733 | def wait_for_task(self, task): |
3734 | """Return a Deferred that will give the result of the given task. |
3735 | The task is polled until it completes.""" |
3736 | - d = defer.Deferred() |
3737 | - reactor.callLater(0, self._poll_task, task, d) |
3738 | - return d |
3739 | - |
3740 | - @utils.deferredToThread |
3741 | - def _poll_task(self, task, deferred): |
3742 | + |
3743 | + done = event.Event() |
3744 | + loop = utils.LoopingCall(self._poll_task, task, done) |
3745 | + loop.start(FLAGS.xenapi_task_poll_interval, now=True) |
3746 | + rv = done.wait() |
3747 | + loop.stop() |
3748 | + return rv |
3749 | + |
3750 | + def _poll_task(self, task, done): |
3751 | """Poll the given XenAPI task, and fire the given Deferred if we |
3752 | get a result.""" |
3753 | try: |
3754 | #logging.debug('Polling task %s...', task) |
3755 | status = self._session.xenapi.task.get_status(task) |
3756 | if status == 'pending': |
3757 | - reactor.callLater(FLAGS.xenapi_task_poll_interval, |
3758 | - self._poll_task, task, deferred) |
3759 | + return |
3760 | elif status == 'success': |
3761 | result = self._session.xenapi.task.get_result(task) |
3762 | logging.info('Task %s status: success. %s', task, result) |
3763 | - deferred.callback(_parse_xmlrpc_value(result)) |
3764 | + done.send(_parse_xmlrpc_value(result)) |
3765 | else: |
3766 | error_info = self._session.xenapi.task.get_error_info(task) |
3767 | logging.warn('Task %s status: %s. %s', task, status, |
3768 | error_info) |
3769 | - deferred.errback(XenAPI.Failure(error_info)) |
3770 | - #logging.debug('Polling task %s done.', task) |
3771 | + done.send_exception(XenAPI.Failure(error_info)) |
3772 | + #logging.debug('Polling task %s done.', task) |
3773 | except XenAPI.Failure, exc: |
3774 | logging.warn(exc) |
3775 | - deferred.errback(exc) |
3776 | + done.send_exception(*sys.exc_info()) |
3777 | |
3778 | |
3779 | def _unwrap_plugin_exceptions(func, *args, **kwargs): |
3780 | |
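
The wait_for_task() rewrite above is the core pattern for the XenAPI driver: poll the task on an interval, send the result (or an exception) on an eventlet Event, and let the caller block on wait(). Below is a self-contained sketch of that shape with a made-up FakeTask in place of the XenAPI task handle; it uses a plain polling loop instead of nova.utils.LoopingCall so it runs on its own.

    import sys
    import time

    from eventlet import event
    from eventlet import greenthread


    class FakeTask(object):
        """Pretend task that flips from 'pending' to 'success' after a delay."""

        def __init__(self, finish_at):
            self._finish_at = finish_at

        def get_status(self):
            return 'success' if time.time() >= self._finish_at else 'pending'

        def get_result(self):
            return 42


    def wait_for_task(task, interval=0.5):
        done = event.Event()

        def _poll():
            try:
                status = task.get_status()
                if status == 'pending':
                    return                           # keep polling
                elif status == 'success':
                    done.send(task.get_result())
                else:
                    done.send_exception(RuntimeError(status))
            except Exception:
                done.send_exception(*sys.exc_info())

        def _loop():
            while not done.ready():
                _poll()
                greenthread.sleep(interval)

        greenthread.spawn(_loop)
        return done.wait()   # returns the result, or raises what was sent


    print(wait_for_task(FakeTask(time.time() + 1.2)))   # prints 42 after ~1.5s
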
3781 | === modified file 'nova/volume/driver.py' |
3782 | --- nova/volume/driver.py 2010-11-03 22:06:00 +0000 |
3783 | +++ nova/volume/driver.py 2010-12-16 20:49:10 +0000 |
3784 | @@ -22,12 +22,10 @@ |
3785 | |
3786 | import logging |
3787 | import os |
3788 | - |
3789 | -from twisted.internet import defer |
3790 | +import time |
3791 | |
3792 | from nova import exception |
3793 | from nova import flags |
3794 | -from nova import process |
3795 | from nova import utils |
3796 | |
3797 | |
3798 | @@ -55,14 +53,13 @@ |
3799 | |
3800 | class VolumeDriver(object): |
3801 | """Executes commands relating to Volumes.""" |
3802 | - def __init__(self, execute=process.simple_execute, |
3803 | + def __init__(self, execute=utils.execute, |
3804 | sync_exec=utils.execute, *args, **kwargs): |
3805 | # NOTE(vish): db is set by Manager |
3806 | self.db = None |
3807 | self._execute = execute |
3808 | self._sync_exec = sync_exec |
3809 | |
3810 | - @defer.inlineCallbacks |
3811 | def _try_execute(self, command): |
3812 | # NOTE(vish): Volume commands can partially fail due to timing, but |
3813 | # running them a second time on failure will usually |
3814 | @@ -70,15 +67,15 @@ |
3815 | tries = 0 |
3816 | while True: |
3817 | try: |
3818 | - yield self._execute(command) |
3819 | - defer.returnValue(True) |
3820 | + self._execute(command) |
3821 | + return True |
3822 | except exception.ProcessExecutionError: |
3823 | tries = tries + 1 |
3824 | if tries >= FLAGS.num_shell_tries: |
3825 | raise |
3826 | logging.exception("Recovering from a failed execute." |
3827 | "Try number %s", tries) |
3828 | - yield self._execute("sleep %s" % tries ** 2) |
3829 | + time.sleep(tries ** 2) |
3830 | |
3831 | def check_for_setup_error(self): |
3832 | """Returns an error if prerequisites aren't met""" |
3833 | @@ -86,53 +83,45 @@ |
3834 | raise exception.Error("volume group %s doesn't exist" |
3835 | % FLAGS.volume_group) |
3836 | |
3837 | - @defer.inlineCallbacks |
3838 | def create_volume(self, volume): |
3839 | """Creates a logical volume.""" |
3840 | if int(volume['size']) == 0: |
3841 | sizestr = '100M' |
3842 | else: |
3843 | sizestr = '%sG' % volume['size'] |
3844 | - yield self._try_execute("sudo lvcreate -L %s -n %s %s" % |
3845 | - (sizestr, |
3846 | - volume['name'], |
3847 | - FLAGS.volume_group)) |
3848 | + self._try_execute("sudo lvcreate -L %s -n %s %s" % |
3849 | + (sizestr, |
3850 | + volume['name'], |
3851 | + FLAGS.volume_group)) |
3852 | |
3853 | - @defer.inlineCallbacks |
3854 | def delete_volume(self, volume): |
3855 | """Deletes a logical volume.""" |
3856 | - yield self._try_execute("sudo lvremove -f %s/%s" % |
3857 | - (FLAGS.volume_group, |
3858 | - volume['name'])) |
3859 | + self._try_execute("sudo lvremove -f %s/%s" % |
3860 | + (FLAGS.volume_group, |
3861 | + volume['name'])) |
3862 | |
3863 | - @defer.inlineCallbacks |
3864 | def local_path(self, volume): |
3865 | - yield # NOTE(vish): stops deprecation warning |
3866 | + # NOTE(vish): stops deprecation warning |
3867 | escaped_group = FLAGS.volume_group.replace('-', '--') |
3868 | escaped_name = volume['name'].replace('-', '--') |
3869 | - defer.returnValue("/dev/mapper/%s-%s" % (escaped_group, |
3870 | - escaped_name)) |
3871 | + return "/dev/mapper/%s-%s" % (escaped_group, escaped_name) |
3872 | |
3873 | def ensure_export(self, context, volume): |
3874 | """Synchronously recreates an export for a logical volume.""" |
3875 | raise NotImplementedError() |
3876 | |
3877 | - @defer.inlineCallbacks |
3878 | def create_export(self, context, volume): |
3879 | """Exports the volume.""" |
3880 | raise NotImplementedError() |
3881 | |
3882 | - @defer.inlineCallbacks |
3883 | def remove_export(self, context, volume): |
3884 | """Removes an export for a logical volume.""" |
3885 | raise NotImplementedError() |
3886 | |
3887 | - @defer.inlineCallbacks |
3888 | def discover_volume(self, volume): |
3889 | """Discover volume on a remote host.""" |
3890 | raise NotImplementedError() |
3891 | |
3892 | - @defer.inlineCallbacks |
3893 | def undiscover_volume(self, volume): |
3894 | """Undiscover volume on a remote host.""" |
3895 | raise NotImplementedError() |
3896 | @@ -155,14 +144,13 @@ |
3897 | dev = {'shelf_id': shelf_id, 'blade_id': blade_id} |
3898 | self.db.export_device_create_safe(context, dev) |
3899 | |
3900 | - @defer.inlineCallbacks |
3901 | def create_export(self, context, volume): |
3902 | """Creates an export for a logical volume.""" |
3903 | self._ensure_blades(context) |
3904 | (shelf_id, |
3905 | blade_id) = self.db.volume_allocate_shelf_and_blade(context, |
3906 | volume['id']) |
3907 | - yield self._try_execute( |
3908 | + self._try_execute( |
3909 | "sudo vblade-persist setup %s %s %s /dev/%s/%s" % |
3910 | (shelf_id, |
3911 | blade_id, |
3912 | @@ -176,33 +164,30 @@ |
3913 | # still works for the other volumes, so we |
3914 | # just wait a bit for the current volume to |
3915 | # be ready and ignore any errors. |
3916 | - yield self._execute("sleep 2") |
3917 | - yield self._execute("sudo vblade-persist auto all", |
3918 | - check_exit_code=False) |
3919 | - yield self._execute("sudo vblade-persist start all", |
3920 | - check_exit_code=False) |
3921 | + time.sleep(2) |
3922 | + self._execute("sudo vblade-persist auto all", |
3923 | + check_exit_code=False) |
3924 | + self._execute("sudo vblade-persist start all", |
3925 | + check_exit_code=False) |
3926 | |
3927 | - @defer.inlineCallbacks |
3928 | def remove_export(self, context, volume): |
3929 | """Removes an export for a logical volume.""" |
3930 | (shelf_id, |
3931 | blade_id) = self.db.volume_get_shelf_and_blade(context, |
3932 | volume['id']) |
3933 | - yield self._try_execute("sudo vblade-persist stop %s %s" % |
3934 | - (shelf_id, blade_id)) |
3935 | - yield self._try_execute("sudo vblade-persist destroy %s %s" % |
3936 | - (shelf_id, blade_id)) |
3937 | + self._try_execute("sudo vblade-persist stop %s %s" % |
3938 | + (shelf_id, blade_id)) |
3939 | + self._try_execute("sudo vblade-persist destroy %s %s" % |
3940 | + (shelf_id, blade_id)) |
3941 | |
3942 | - @defer.inlineCallbacks |
3943 | def discover_volume(self, _volume): |
3944 | """Discover volume on a remote host.""" |
3945 | - yield self._execute("sudo aoe-discover") |
3946 | - yield self._execute("sudo aoe-stat", check_exit_code=False) |
3947 | + self._execute("sudo aoe-discover") |
3948 | + self._execute("sudo aoe-stat", check_exit_code=False) |
3949 | |
3950 | - @defer.inlineCallbacks |
3951 | def undiscover_volume(self, _volume): |
3952 | """Undiscover volume on a remote host.""" |
3953 | - yield |
3954 | + pass |
3955 | |
3956 | |
3957 | class FakeAOEDriver(AOEDriver): |
3958 | @@ -252,7 +237,6 @@ |
3959 | target = {'host': host, 'target_num': target_num} |
3960 | self.db.iscsi_target_create_safe(context, target) |
3961 | |
3962 | - @defer.inlineCallbacks |
3963 | def create_export(self, context, volume): |
3964 | """Creates an export for a logical volume.""" |
3965 | self._ensure_iscsi_targets(context, volume['host']) |
3966 | @@ -261,61 +245,55 @@ |
3967 | volume['host']) |
3968 | iscsi_name = "%s%s" % (FLAGS.iscsi_target_prefix, volume['name']) |
3969 | volume_path = "/dev/%s/%s" % (FLAGS.volume_group, volume['name']) |
3970 | - yield self._execute("sudo ietadm --op new " |
3971 | - "--tid=%s --params Name=%s" % |
3972 | - (iscsi_target, iscsi_name)) |
3973 | - yield self._execute("sudo ietadm --op new --tid=%s " |
3974 | - "--lun=0 --params Path=%s,Type=fileio" % |
3975 | - (iscsi_target, volume_path)) |
3976 | + self._execute("sudo ietadm --op new " |
3977 | + "--tid=%s --params Name=%s" % |
3978 | + (iscsi_target, iscsi_name)) |
3979 | + self._execute("sudo ietadm --op new --tid=%s " |
3980 | + "--lun=0 --params Path=%s,Type=fileio" % |
3981 | + (iscsi_target, volume_path)) |
3982 | |
3983 | - @defer.inlineCallbacks |
3984 | def remove_export(self, context, volume): |
3985 | """Removes an export for a logical volume.""" |
3986 | iscsi_target = self.db.volume_get_iscsi_target_num(context, |
3987 | volume['id']) |
3988 | - yield self._execute("sudo ietadm --op delete --tid=%s " |
3989 | - "--lun=0" % iscsi_target) |
3990 | - yield self._execute("sudo ietadm --op delete --tid=%s" % |
3991 | - iscsi_target) |
3992 | + self._execute("sudo ietadm --op delete --tid=%s " |
3993 | + "--lun=0" % iscsi_target) |
3994 | + self._execute("sudo ietadm --op delete --tid=%s" % |
3995 | + iscsi_target) |
3996 | |
3997 | - @defer.inlineCallbacks |
3998 | def _get_name_and_portal(self, volume_name, host): |
3999 | """Gets iscsi name and portal from volume name and host.""" |
4000 | - (out, _err) = yield self._execute("sudo iscsiadm -m discovery -t " |
4001 | - "sendtargets -p %s" % host) |
4002 | + (out, _err) = self._execute("sudo iscsiadm -m discovery -t " |
4003 | + "sendtargets -p %s" % host) |
4004 | for target in out.splitlines(): |
4005 | if FLAGS.iscsi_ip_prefix in target and volume_name in target: |
4006 | (location, _sep, iscsi_name) = target.partition(" ") |
4007 | break |
4008 | iscsi_portal = location.split(",")[0] |
4009 | - defer.returnValue((iscsi_name, iscsi_portal)) |
4010 | + return (iscsi_name, iscsi_portal) |
4011 | |
4012 | - @defer.inlineCallbacks |
4013 | def discover_volume(self, volume): |
4014 | """Discover volume on a remote host.""" |
4015 | - (iscsi_name, |
4016 | - iscsi_portal) = yield self._get_name_and_portal(volume['name'], |
4017 | - volume['host']) |
4018 | - yield self._execute("sudo iscsiadm -m node -T %s -p %s --login" % |
4019 | - (iscsi_name, iscsi_portal)) |
4020 | - yield self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
4021 | - "-n node.startup -v automatic" % |
4022 | - (iscsi_name, iscsi_portal)) |
4023 | - defer.returnValue("/dev/iscsi/%s" % volume['name']) |
4024 | + iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'], |
4025 | + volume['host']) |
4026 | + self._execute("sudo iscsiadm -m node -T %s -p %s --login" % |
4027 | + (iscsi_name, iscsi_portal)) |
4028 | + self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
4029 | + "-n node.startup -v automatic" % |
4030 | + (iscsi_name, iscsi_portal)) |
4031 | + return "/dev/iscsi/%s" % volume['name'] |
4032 | |
4033 | - @defer.inlineCallbacks |
4034 | def undiscover_volume(self, volume): |
4035 | """Undiscover volume on a remote host.""" |
4036 | - (iscsi_name, |
4037 | - iscsi_portal) = yield self._get_name_and_portal(volume['name'], |
4038 | - volume['host']) |
4039 | - yield self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
4040 | - "-n node.startup -v manual" % |
4041 | - (iscsi_name, iscsi_portal)) |
4042 | - yield self._execute("sudo iscsiadm -m node -T %s -p %s --logout " % |
4043 | - (iscsi_name, iscsi_portal)) |
4044 | - yield self._execute("sudo iscsiadm -m node --op delete " |
4045 | - "--targetname %s" % iscsi_name) |
4046 | + iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'], |
4047 | + volume['host']) |
4048 | + self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
4049 | + "-n node.startup -v manual" % |
4050 | + (iscsi_name, iscsi_portal)) |
4051 | + self._execute("sudo iscsiadm -m node -T %s -p %s --logout " % |
4052 | + (iscsi_name, iscsi_portal)) |
4053 | + self._execute("sudo iscsiadm -m node --op delete " |
4054 | + "--targetname %s" % iscsi_name) |
4055 | |
4056 | |
4057 | class FakeISCSIDriver(ISCSIDriver): |
4058 | |
4059 | === modified file 'nova/volume/manager.py' |
4060 | --- nova/volume/manager.py 2010-11-03 21:38:14 +0000 |
4061 | +++ nova/volume/manager.py 2010-12-16 20:49:10 +0000 |
4062 | @@ -45,7 +45,6 @@ |
4063 | import logging |
4064 | import datetime |
4065 | |
4066 | -from twisted.internet import defer |
4067 | |
4068 | from nova import context |
4069 | from nova import exception |
4070 | @@ -86,7 +85,6 @@ |
4071 | for volume in volumes: |
4072 | self.driver.ensure_export(ctxt, volume) |
4073 | |
4074 | - @defer.inlineCallbacks |
4075 | def create_volume(self, context, volume_id): |
4076 | """Creates and exports the volume.""" |
4077 | context = context.elevated() |
4078 | @@ -102,19 +100,18 @@ |
4079 | |
4080 | logging.debug("volume %s: creating lv of size %sG", |
4081 | volume_ref['name'], volume_ref['size']) |
4082 | - yield self.driver.create_volume(volume_ref) |
4083 | + self.driver.create_volume(volume_ref) |
4084 | |
4085 | logging.debug("volume %s: creating export", volume_ref['name']) |
4086 | - yield self.driver.create_export(context, volume_ref) |
4087 | + self.driver.create_export(context, volume_ref) |
4088 | |
4089 | now = datetime.datetime.utcnow() |
4090 | self.db.volume_update(context, |
4091 | volume_ref['id'], {'status': 'available', |
4092 | 'launched_at': now}) |
4093 | logging.debug("volume %s: created successfully", volume_ref['name']) |
4094 | - defer.returnValue(volume_id) |
4095 | + return volume_id |
4096 | |
4097 | - @defer.inlineCallbacks |
4098 | def delete_volume(self, context, volume_id): |
4099 | """Deletes and unexports volume.""" |
4100 | context = context.elevated() |
4101 | @@ -124,14 +121,13 @@ |
4102 | if volume_ref['host'] != self.host: |
4103 | raise exception.Error("Volume is not local to this node") |
4104 | logging.debug("volume %s: removing export", volume_ref['name']) |
4105 | - yield self.driver.remove_export(context, volume_ref) |
4106 | + self.driver.remove_export(context, volume_ref) |
4107 | logging.debug("volume %s: deleting", volume_ref['name']) |
4108 | - yield self.driver.delete_volume(volume_ref) |
4109 | + self.driver.delete_volume(volume_ref) |
4110 | self.db.volume_destroy(context, volume_id) |
4111 | logging.debug("volume %s: deleted successfully", volume_ref['name']) |
4112 | - defer.returnValue(True) |
4113 | + return True |
4114 | |
4115 | - @defer.inlineCallbacks |
4116 | def setup_compute_volume(self, context, volume_id): |
4117 | """Setup remote volume on compute host. |
4118 | |
4119 | @@ -139,17 +135,16 @@ |
4120 | context = context.elevated() |
4121 | volume_ref = self.db.volume_get(context, volume_id) |
4122 | if volume_ref['host'] == self.host and FLAGS.use_local_volumes: |
4123 | - path = yield self.driver.local_path(volume_ref) |
4124 | + path = self.driver.local_path(volume_ref) |
4125 | else: |
4126 | - path = yield self.driver.discover_volume(volume_ref) |
4127 | - defer.returnValue(path) |
4128 | + path = self.driver.discover_volume(volume_ref) |
4129 | + return path |
4130 | |
4131 | - @defer.inlineCallbacks |
4132 | def remove_compute_volume(self, context, volume_id): |
4133 | """Remove remote volume on compute host.""" |
4134 | context = context.elevated() |
4135 | volume_ref = self.db.volume_get(context, volume_id) |
4136 | if volume_ref['host'] == self.host and FLAGS.use_local_volumes: |
4137 | - defer.returnValue(True) |
4138 | + return True |
4139 | else: |
4140 | - yield self.driver.undiscover_volume(volume_ref) |
4141 | + self.driver.undiscover_volume(volume_ref) |
4142 | |
4143 | === modified file 'run_tests.py' |
4144 | --- run_tests.py 2010-12-11 20:10:24 +0000 |
4145 | +++ run_tests.py 2010-12-16 20:49:10 +0000 |
4146 | @@ -39,6 +39,9 @@ |
4147 | |
4148 | """ |
4149 | |
4150 | +import eventlet |
4151 | +eventlet.monkey_patch() |
4152 | + |
4153 | import __main__ |
4154 | import gettext |
4155 | import os |
4156 | @@ -59,15 +62,12 @@ |
4157 | from nova.tests.flags_unittest import * |
4158 | from nova.tests.misc_unittest import * |
4159 | from nova.tests.network_unittest import * |
4160 | -from nova.tests.objectstore_unittest import * |
4161 | -from nova.tests.process_unittest import * |
4162 | +#from nova.tests.objectstore_unittest import * |
4163 | from nova.tests.quota_unittest import * |
4164 | from nova.tests.rpc_unittest import * |
4165 | from nova.tests.scheduler_unittest import * |
4166 | from nova.tests.service_unittest import * |
4167 | from nova.tests.twistd_unittest import * |
4168 | -from nova.tests.validator_unittest import * |
4169 | -from nova.tests.virt_unittest import * |
4170 | from nova.tests.virt_unittest import * |
4171 | from nova.tests.volume_unittest import * |
4172 |
Some assorted notes from eday, termie, vishy, sleepsonthefloor, and jesse:
#11 - remove extra line above docstring
#32 remove main in nova-api
#93-95 duplicate flag api_port in
#199 extra space
#548 fix space
#583 fix space
#634 s/return/pass
#938 pylint
#1398 - kill, and need to fix init scripts
#1440 use warn vs debug
#1442 remove section
#1452-1487 lots of assorted issues
#2467 can probably be removed
#2878 - hmm...
#3039 #3036 s/return/pass
#3125 - can just return timer_done
#3160 - _wait_for_timer
#3329 output, error
#3341 whitespace
#3427 delete
#3450 - fix comment - clarify what is meant by 'block'
#3425 maybe move to previous line?
#3450 move up to previous line
#3442-#3844 investigate - some funkiness going on. Ask armando?
#3831 - s/utis/utils
#3832 make sure timer stops
#3909 fix sleeping out of process: use greenthread.sleep (see the sketch after this list)
#3928 fix whitespace
#3938 fix whitespace
#3949 fix whitespace
#3999 greenthread.sleep
#4093 fix whitespace
#4114 fix whitespace
#4235 fix whitespace
#4253 remove debug code
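
Related to #3909 and #3999 above: where the diff swaps a shelled-out sleep for time.sleep(2) (diff line 3921), the notes suggest eventlet's cooperative sleep instead, so the greenthread yields to the hub rather than blocking the whole process. A minimal sketch of that idea, assuming only eventlet; the function and labels are illustrative, not Nova code:

    import eventlet
    from eventlet import greenthread


    def poll(label):
        # greenthread.sleep() yields control to the eventlet hub, so other
        # greenthreads keep running; time.sleep() would block them all.
        for _ in range(3):
            print("%s tick" % label)
            greenthread.sleep(0.1)


    threads = [eventlet.spawn(poll, name) for name in ("export", "status")]
    for gt in threads:
        gt.wait()

Running this interleaves the "export" and "status" ticks, which is the behavior the volume driver wants while it waits for vblade-persist to settle.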