Merge lp:~citrix-openstack/nova/xenapi-netinject-prop into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: Sandy Walsh
Approved revision: 723
Merged at revision: 889
Proposed branch: lp:~citrix-openstack/nova/xenapi-netinject-prop
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1060 lines (+507/-93), 11 files modified:
- Authors (+1/-0)
- nova/tests/db/fakes.py (+42/-32)
- nova/tests/fake_utils.py (+106/-0)
- nova/tests/test_xenapi.py (+139/-26)
- nova/tests/xenapi/stubs.py (+8/-4)
- nova/virt/disk.py (+19/-7)
- nova/virt/libvirt_conn.py (+1/-3)
- nova/virt/xenapi/fake.py (+31/-6)
- nova/virt/xenapi/vm_utils.py (+133/-0)
- nova/virt/xenapi/vmops.py (+13/-15)
- nova/virt/xenapi_conn.py (+14/-0)
To merge this branch: | bzr merge lp:~citrix-openstack/nova/xenapi-netinject-prop |
Related bugs:
Related blueprints: Network injection in XenAPI (Medium)
Reviewer | Review Type | Status
---|---|---
Trey Morris (community) | | Approve
Thierry Carrez (community) | ffe | Approve
Sandy Walsh (community) | | Approve
Rick Harris (community) | | Approve

Review via email: mp+49798@code.launchpad.net
Commit message
Description of the change
This is basic network injection for XenServer, and includes:
o Modification of the /etc/network/
o Setting of xenstore keys before instance boot, intended for the XenServer Windows agent. The agent will use these to configure the network at boot-time.
This change does not implement live reconfiguration, which is on another blueprint:
https:/
It does include template code to detect the presence of agents and avoid modifying the filesystem if they are injection-capable.
Jay Pipes (jaypipes) wrote:
Andy Southgate (andy-southgate) wrote:
Hi,
Thanks Jay. It looks like lp:nova has moved on since I rebased this
branch. Is it sufficient to rebase to trunk again?
Cheers,
Andy S.
On 02/16/11 18:48, Jay Pipes wrote:
> Hi Andy!
>
> Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/
>
> Lemme know if you're uncertain about the process of resolving merge conflicts.
>
> Cheers,
> jay
>
Jay Pipes (jaypipes) wrote:
On Wed, Feb 16, 2011 at 2:01 PM, Andy Southgate
<email address hidden> wrote:
> Thanks Jay. It looks like lp:nova has moved on since I rebased this
> branch. Is it sufficient to rebase to trunk again?
It's good practice to have a local copy of the lp:nova trunk, and
then, when you are ready to bzr push your local topic branch to
Launchpad, simply do a bzr merge ../trunk and resolve any conflicts
that might pop up. After you resolve the conflicts in the files (if
any), do a bzr resolve --all and then bzr push your local topic branch
to Launchpad.
As far as "rebasing", not sure you need to do that. bzr has a rebase
plugin, but it's not what you need in this case.
So, in short, the recommended dev process is like so:
# Assume ~/repos/nova is where you have your bzr branches for nova...
cd ~/repos/nova
bzr branch lp:nova trunk
# "topic" is the name you give the branch you work on (say, xs-netinject-prop)
bzr branch trunk topic
cd topic
# do work in topic branch...
bzr commit -m "Some comment about your changes"
bzr push lp:~citrix-openstack/nova/topic
# On Launchpad, propose your branch for merging, as you did...
# If reviews come back asking for changes, set the merge proposal
# status to Work In Progress
# then, in your local branch, do more work
bzr commit -m "More changes"
cd ../trunk
# This pulls all changes merged into lp:nova trunk from the rest of
# the Nova contributors into your local branch
bzr pull
cd ../topic
bzr merge ../trunk
# If any conflicts, resolve them and then do bzr resolve --all
bzr commit -m "Merge trunk"
bzr push
# Then set your merge proposal back to "Needs Review"
All set :)
Don't worry, once you do it a couple times, it becomes second nature ;)
-jay
Andy Southgate (andy-southgate) wrote:
Thanks, that was a bit quicker than a rebase!
Andy
Andy Southgate (andy-southgate) wrote:
Salvatore (salvatore-orlando) is taking over this merge BTW, as I'm moving on to other things, so please send questions to him if you have them.
Ewan Mellor (ewanmellor) wrote:
I have merged with trunk again. This branch is ready for review. I have requested a review from Trey Morris, since this branch is related to his xs-inject-
Thanks!
Trey Morris (tr3buchet) wrote:
Great tests!
I have some concerns, not the least of which is that you added new functions which write networking data to the xenstore param list and call them just before spawn writes to the xenstore param list. If written to the same location, spawn would blow away all of your changes; since your function writes to different paths than spawn does, we'd instead end up with some nasty config duplication. The paths spawn uses work with the agent, and I'm not sure at this point what the agent would do with data at other paths (possibly ignore it).
Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting them because we'll be building not only base images but also customer images where they could be using who knows which file systems or even encryption. So there will need to be some sort of branching put in place to determine if you do or do not want to mount the vdi.
There are functions in place such as "write_
Also, in the next hour I'll be proposing a merge for the 2nd half of my network_injection branch; it refactors what takes place in spawn and allows network reconfiguration
Let me know if you want to discuss anything, I'm glad to help.
Salvatore Orlando (salvatore-orlando) wrote:
> Great tests!
>
> I have some concerns, not the least of which is that you added new functions
> which write networking data to the xenstore param list and call them just
> before spawn writes to the xenstore param list. If written to the same
> location, spawn will blow away all of your changes. It doesn't appear that
> your function writes them to the same path though so we'd end up with some
> nasty config duplication. Your function also writes to different paths than
> spawn does. Those paths work with the agent. I'm not sure at this point what
> the agent would do with data at other paths (possibly ignore it).
>
I'm not entirely sure whether this duplication is intended or just the result of some previous merge with trunk (I'm thinking of multi-nic-support). However, I agree that the fact that they write at different paths is a bit weird. I'll investigate this issue.
> Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting
> them because we'll be building not only base images but also customer images
> where they could be using who knows which file systems or even encryption. So
> there will need to be some sort of branching put in place to determine if you
> do or do not want to mount the vdi.
>
Mounting the image for injecting data is a technique which has been used in nova since its early days. In the xenapi implementation, we first stream it into a VDI, and then attach the VDI to the VM in which nova-compute is running. Your point on VHDs and encrypted file systems is more than valid, though. For the same reason we believe the guest agent should be the 'preferred' way for getting network/key configuration into the instance. We are providing injection for 'compatibility' with the libvirt implementation.
Also, the way in which it is currently done will probably not work even in more trivial cases, such as the root partition not starting at byte 0, or /etc being mounted on a partition other than the first. As already stated, injection is intended only for 'basic' scenarios.
In the proposed implementation if _mounted_processing fails to mount the image into the vdi, an error message is logged but this does not cause the spawn process to fail. However, I agree we probably need a better mechanism for understanding whether injection should be performed or not. I don't think proposing changes either to the API or instance_types would be a good idea at all. Now that glance is in place we can probably use metadata to this purpose. For instance image_type=
> There are functions in place such as "write_
> should use instead of making the calls directly to follow convention set when
> Ed refactored.
>
Good point. I will take care of this.
> Also in the next hour I'll be proposing merge for the 2nd half of my
> network_injection branch, it refactors what takes place in spawn and allows
> network reconfiguration
> sense to get this i...
Trey Morris (tr3buchet) wrote:
> I'm not entirely sure whether this duplication is intended or just the result of some previous merge
> with trunk (I'm thinking of multi-nic-support). However, I agree the fact that they write at different
> path is a bit weird. I'll investigate this issue.
The paths I used are what the agent is expecting; for instance, the mac address being lowercase with no punctuation vs all uppercase and full of '_'. That's what I'm referring to. I'm not sure of a reason these should be changed, so I'm all ears if you've got one.
> (mounting the vdi)
First I'm glad it fails quietly. Second, I understand the reason for doing so. It's a great reason. It's something that for sure should be implemented; however, we will never use it in any circumstance, relying on the agent instead. For this reason, I suggest a configuration option. example: --mount-vdi=false. I don't see the point in having more metadata if it will always be true or always be false. Of course, if there are people who want to mount some vdi at spawn and twiddle with the file system, but not others, then that's a different story. I'm happy with either way as long as there is a way to disable it completely.
At this point I feel confident saying that injection of network configuration into the xenstore/
There will be a lot of changes to these areas as i start working on multinic, especially in the inject_
Also, if there is anything you'd like me to change about the way I've implemented network injection, please let me know. I've written it in a way that works perfectly for us. I'd like for it to work for you as well.
-tr3buchet
Salvatore Orlando (salvatore-orlando) wrote:
Acknowledging Trey's review and reverting status to work in progress to ease reviewers' work.
@Trey: I will reply to your latest comments shortly!
Salvatore Orlando (salvatore-orlando) wrote:
> > I'm not entirely sure whether this duplication is intended or just the
> result of some previous merge
> > with trunk (I'm thinking of multi-nic-support). However, I agree the fact
> that they write at different
> > path is a bit weird. I'll investigate this issue.
>
> The paths I used are what the agent is expecting for instance the mac address
> being lowercase with no punctuation vs all uppercase and full of '_'. That's
> what I'm referring to. I'm not sure of a reason these should be changed, so
> I'm all ears if you've got one.
>
I also don't see any reason why we should use different paths. The only explanation I can see for the difference is that these two branches have evolved independently, even though the functionalities they implement overlap considerably.
> > (mounting the vdi)
>
> First I'm glad it fails quietly. Second, I understand the reason for doing so.
> It's a great reason. It's something that for sure should be implemented;
> however, we will never use it in any circumstance, relying on the agent
> instead. For this reason, I suggest a configuration option. example: --mount-
> vdi=false. I don't see the point in having more metadata if it will always be
> true or always be false. Of course, if there are people who want to mount some
> vdi at spawn and twiddle with the file system, but not others, then that's a
> different story. I'm happy with either way as long as there is a way to
> disable it completely.
>
I was suggesting metadata assuming a scenario in which there might be images with and without agent. However, I agree this scenario does not make a lot of sense after all. As you said, either you use the agent or you don't. So the flag idea is maybe more effective, and easier to implement as well :-)
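The flag idea discussed above can be sketched as follows. This is a hypothetical illustration, not the merged nova code: the `FLAGS` object and flag name here are stand-ins (the thread later mentions a `xenapi_inject_image` flag, which this borrows).

```python
# Hypothetical sketch: a single boolean flag decides whether spawn ever
# mounts the VDI to inject files, instead of per-image metadata.
class _FakeFlags(object):
    # stand-in for nova's FLAGS object; the real flag name may differ
    xenapi_inject_image = False

FLAGS = _FakeFlags()

def maybe_inject(vdi_ref, inject_fn):
    """Mount and modify the image only when the deployment opts in."""
    if not FLAGS.xenapi_inject_image:
        # agent-based configuration is expected to handle networking
        return False
    inject_fn(vdi_ref)
    return True
```

Deployments that rely entirely on the agent simply leave the flag off, which matches Trey's "always true or always false" observation.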
> At this point I feel confident saying that injection of network configuration
> into the xenstore/
> is finished. Perhaps preconfigure_
> knowledgeable of xenstore as you guys surely are so if I've forgotten
> something please let me know. I only chose my paths as such because of the
> agent.
I tentatively agree with you that preconfigure_
> There will be a lot of changes to these areas as i start working on multinic,
> especially in the inject_
> leave writing injection into the xenstore to me and you handle the rest
> (mounting of vdi's, configuration etc). This way I can proceed with multi-nic
> without worry that I'll just have to rewrite it.
>
I see your branch has now been merged. I will update my branch accordingly.
I agree on your approach: maybe if there is anything in preconfigure_
> Also, ff there is anything you'd like me to change about the way I've
> implemented network injection, please let me know. I've written it in a way
> that works perfectly for us. I'd like for it to work for you as well.
>
> -tr3buchet
Trey Morris (tr3buchet) wrote:
> I agree on your approach: maybe if there is anything in preconfigure_
Absolutely! Just let me know. The first thing I came across was not injecting the broadcast address where you guys did. Come up with a list so I can implement them all at once.
-tr3buchet
Salvatore Orlando (salvatore-orlando) wrote:
State changed to work in progress due to conflicts with current trunk
Salvatore Orlando (salvatore-orlando) wrote:
Conflicts resolved.
Branch ready for review again.
Salvatore Orlando (salvatore-orlando) wrote:
Reverting to work in progress to resolve conflicts and integrate IPv6 injection into branch
Salvatore Orlando (salvatore-orlando) wrote:
Branch updated and again ready for review.
Rick Harris (rconradharris) wrote:
Nice work, Andy-- really digging the added tests here.
Overall this looks good, just have a few small suggestions that might improve
readability:
> 87 + def fake_network_
> 88 + l = []
> 89 + l.append(
> 90 + return l
> 91 +
Pep8: "Never use the characters `l' (lowercase letter el)". Might be better
as:
return [FakeModel(
> 153 +def fake_execute_
> 154 + global _fake_execute_log
> 155 + return _fake_execute_log
`global` isn't needed here since you're not mutating the value.
> +def fake_execute(*cmd, **kwargs):
Since `cmd` is a list, it might be clearer as `cmd_parts` (or something
similar)
> 185 + cmd_map = map(str, cmd)
> 186 + cmd_str = ' '.join(cmd_map)
Going forward, comprehensions are preferred to map/reduce/filter. So, might be
better as:
cmd_str = ' '.join(str(part) for part in cmd_parts)
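The equivalence of the two forms can be checked with a quick runnable sketch (the command parts here are made up):

```python
# map-based join vs. the preferred generator-expression join; both
# stringify each part before joining with spaces, so the results match.
cmd_parts = ['mount', '-o', 'loop', 42]

cmd_str_map = ' '.join(map(str, cmd_parts))
cmd_str_comp = ' '.join(str(part) for part in cmd_parts)

assert cmd_str_map == cmd_str_comp == 'mount -o loop 42'
```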
> 199 + if isinstance(
I think `basestring` is preferred to StringTypes:
if isinstance(
> 213 + LOG.debug(_("Reply to faked command is stdout='%(0)s' stderr='%(1)s'") %
> 214 + {'0': reply[0], '1': reply[1]})
Rather than using a position-style, since a dict is being passed, might be
better to attach meaningful names. For example:
stdout = reply[0]
stderr = reply[1]
LOG.
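As a sketch of the named-key style Rick suggests (plain stdlib logging here, standing in for nova's LOG and _() helpers):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

reply = ('fake stdout', 'fake stderr')
# unpack into named variables so the format string reads naturally
stdout, stderr = reply
LOG.debug("Reply to faked command is stdout='%(stdout)s' stderr='%(stderr)s'",
          {'stdout': stdout, 'stderr': stderr})
```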
> 264 + #db_fakes.
Vestigial code?
> 389 def _test_spawn(self, image_id, kernel_id, ramdisk_id,
> 394 + instance_
Should probably be lined up with the paren, something like:
def _test_spawn(self, image_id, kernel_id, ramdisk_id,
> 580 + def f_1(self, interval, now=True):
> 581 + self.f(*self.args, **self.kw)
> 582 + stubs.Set(
Could use a more descriptive name than `f_1`. Perhaps, `fake_start`.
> 640 + t = __import_
> 641 + -1)
> 642 + template = t.Template
Is there a reason you're using __import__ here--if so, it might benefit from a
NOTE(andy):
Otherwise, might be better to do:
if not template:
from Cheetah import Template
template = Template
> 643 + #load template file if necessary
Not a super-useful comment (didn't add anything for me, at least)
> 808 + LOG.debug(
Not a particularly useful logging line, especially since we raise immediately
afterwards. I'd consider scrapping this.
> 804 + if (ref in _db_content[cls]):
> 805 + if (field in _db_content[
> 806 + return _db_content[
> 807 + else:
> 808 + LOG.debug(
> 809 + raise Failure(
Might be c...
Salvatore Orlando (salvatore-orlando) wrote:
Hi Rick,
thanks for taking time for doing such a detailed review!
Your tips will be very useful to provide higher-quality code in the future.
I will update the branch in the next few hours in order to give reviewers a full day before feature freeze to review it.
There are only a few bits which probably deserve some discussion:
>
> Is there a reason you're using __import__ here--if so, it might benefit from a
> NOTE(andy):
>
> Otherwise, might be better to do:
>
> if not template:
> from Cheetah import Template
> template = Template
>
I agree the syntax you are proposing is much neater, but wouldn't this put a requirement on Cheetah for the whole nova project (e.g. nova services, unit tests, etc. failing if Cheetah is not installed)?
> > 990 + LOG.debug("ENTERING WAIT FOR BOOT!")
This is vestigial code. I love to use lines like this for debugging, and I usually throw a pep8 violation into them in order to make sure I remove them before pushing. Unfortunately I forgot to add the pep8 violation for this one!
Rick Harris (rconradharris) wrote:
> wouldn't this put a requirement on Cheetah
Since you're importing Cheetah in function and not in the module namespace, we'll only end up importing Cheetah if it's actually needed. So, I think the "from x import y", in this case, works just as well as __import__, just a bit cleaner IMHO.
> and I usually throw a pep8 violation in them in order to make sure I remove them before push
Interesting. I've been meaning to add a SPIKE(author) tag that would cause Hudson to reject patches that include this. You could wrap extra debugs with a # SPIKE comment and be sure it never gets committed.
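The function-local import Rick describes can be sketched as below. The helper name and signature are hypothetical; only the lazy-import pattern itself is the point.

```python
def render_template(template_data, namespace):
    """Render template_data with Cheetah, imported lazily.

    Because the import lives inside the function, Cheetah only becomes a
    requirement when template rendering is actually performed; nova
    services and unit tests that never render templates can run without
    it installed.
    """
    from Cheetah import Template  # deferred, optional dependency
    return str(Template.Template(template_data, searchList=[namespace]))
```

Merely importing the module defining this function never touches Cheetah; the import cost and the dependency are paid only on the first call.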
Salvatore Orlando (salvatore-orlando) wrote:
All of Rick's comments but one have been addressed.
The one not addressed is the following:
> 804 + if (ref in _db_content[cls]):
> 805 + if (field in _db_content[
> 806 + return _db_content[
> 807 + else:
> 808 + LOG.debug(
> 809 + raise Failure(
Might be clearer with a EAFP style, like:
try:
return _db_content[
except KeyError:
raise Failure(
In the fake xenapi driver we need to distinguish between the following cases:
- entity not found (ref not in _db_content[cls]), in which case we need to raise a HANDLE_INVALID failure
- entity found, but attribute not found, in which case we need to raise a NOT_IMPLEMENTED failure.
Nevertheless, Rick's suggestion is very valid and calls for a wider refactoring of the fake xenapi driver, starting with the _getter routine. However, I think it would be better to do this after Cactus.
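The two failure modes can still be kept while writing the getter in EAFP style, using two narrow try blocks. `Failure` and `_db_content` below are simplified stand-ins for the fake driver's real objects, not the actual code:

```python
class Failure(Exception):
    """Simplified stand-in for the fake XenAPI driver's Failure."""
    def __init__(self, details):
        super(Failure, self).__init__(details)
        self.details = details

# minimal fake datastore: class -> ref -> record
_db_content = {'VM': {'vm_ref_1': {'name_label': 'instance-1'}}}

def get_record_field(cls, ref, field):
    # unknown ref -> HANDLE_INVALID, mirroring real XenAPI behaviour
    try:
        record = _db_content[cls][ref]
    except KeyError:
        raise Failure(['HANDLE_INVALID', cls, ref])
    # known ref but unknown field -> NOT_IMPLEMENTED in the fake driver
    try:
        return record[field]
    except KeyError:
        raise Failure(['NOT_IMPLEMENTED', field])
```

Each try covers exactly one lookup, so the KeyError can be mapped unambiguously to the right failure code.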
Sandy Walsh (sandy-walsh) wrote:
Getting XenAPIMigrateIn tests:

XenAPIMigrateIn
test_finish_resize [sudo] password for swalsh:
[sudo] password for swalsh:
OK
Otherwise, impressive branch ... nothing serious to say.
641 + # it the caller does not provide template object and data
"If the caller ..."
But largely I'm going to trust Trey on ensuring the approach is sound (as he's closer to the problem than I)
Once he approves, I will too.
Salvatore Orlando (salvatore-orlando) wrote:
> Getting XenAPIMigrateIn
> tests:
>
> XenAPIMigrateIn
> test_finish_resize [sudo] password for swalsh:
> [sudo] password for swalsh:
> OK
>
Thanks for spotting this Sandy! Finish_resize calls _create_vm which in turn calls preconfigure_
I already pushed updated code; please see if the tests run fine now.
Another way of doing this is disabling the flag xenapi_inject_image for the migrate use case. If you think that would make more sense, I'll do that.
> Otherwise, impressive branch ... nothing serious to say.
>
> 641 + # it the caller does not provide template object and data
>
> "If the caller ..."
>
> But largely I'm going to trust Trey on ensuring the approach is sound (as he's
> closer to the problem than I)
>
get_injectables is a routine 'shared' between the libvirt and xenapi backends. In the libvirt backend Cheetah.Template is loaded with a dynamic import as a global variable, and the template file is read when the LibVirtConnection instance is initialized.
We don't do that in XenAPI and I'm a bit wary of using global variables, so I put the template class and the template file as optional variables and then dynamically set them if they are not set from the caller (xenapi backend in this case).
> Once he approves, I will too.
Trey Morris (tr3buchet) wrote:
--- disk.py ----
get_injectables needs a docstring
why is get_injectables in disk? seems like an odd place..
get_injectables troubles me because one of my goals with multi-nic is to untether the network and virt layers by passing all of the data required for network configuration into the virt layer from compute, so no network related db lookups are required at the virt layer. In the meantime (before nova-multi-nic), get_network_info() in vmops.py is performing this task. There will be a similar get_network_info() function for each libvirt/
can see the format of the data compute will pass to the virt layer spawn process by looking at get_network_info() in vmops.py. Is there something you need that this function isn't handling? I'm not sure I fully grasp the goal of get_injectables().
This should also apply to the fake calls for network info in tests/db/fakes.py. The xenapi/
(but this will surely have to be figured out later).
--- vm_utils.py ---
_find_guest_agent needs a docstring
--- vmops.py ---
yay!!
--- xenapi_conn.py ---
1058 + ' network configuration if not injected into the image'
should probably be
1058 + ' network configuration is not injected into the image'
unless i misinterpreted.
The rest is great! I like the way you've implemented mounting vdi as a flaggable option and I also like how if a proper agent is present you preference using it over adding to the mounted filesystem.
Salvatore Orlando (salvatore-orlando) wrote:
Hi Trey,
see my comments inline
> --- disk.py ----
> get_injectables needs a docstring
> why is get_injectables in disk? seems like an odd place..
>
> get_injectables troubles me because one of my goals with multi-nic is to
> untether the network and virt layers by passing all of the data required for
> network configuration into the virt layer from compute so no network related
> db lookups are required at the virt layer. In the mean time (before nova-
> multi-nic), get_network_info() in vmops.py is performing this task. There will
> be a similar get_network_info() function for each libvirt/
> can see the format of the data compute will pass to the virt layer spawn
> process by looking at get_network_info() in vmops.py. Is there something you
> need that this function isn't handling? I'm not sure I fully grasp the goal of
> get_injectables().
>
> This should also apply to the fake calls for network info in
> tests/db/fakes.py. The xenapi/
> (but this will surely have to figured out later).
When I noticed your network_info I realized there was some overlap between it and get_injectables.
get_injectables is in disk.py since it is used by both libvirt and xenapi (and possibly hyperv); it seemed the best place for it because it performs operations related to the disk image.
From what you said, every backend is going to have a _get_network_info, at least until multi-nic hits trunk. In that case I no longer see a reason for having that common bit of code, and I will therefore remove it. However, I should still use the interfaces.template file and populate it with Cheetah. To do so, the best thing I can do is to map the dict created by _get_network_info onto the dict required by the Cheetah template (the dict already contains all the info I need).
Another thing to note is that the Cheetah template does not support multiple IP addresses. For this reason I think it might be worth using only the first element from the ip/ipv6 lists in the dict created by network_info.
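That mapping could look roughly like the sketch below. The field names are assumptions based on this discussion, not the exact output of _get_network_info or the exact template variables:

```python
def info_to_template_dict(name, info):
    """Map a hypothetical _get_network_info dict onto the flat dict a
    single-interface template expects.

    Keeps only the first address of each family, since the template
    supports one address per interface.
    """
    ip_v4 = info['ips'][0] if info.get('ips') else None
    ip_v6 = info['ip6s'][0] if info.get('ip6s') else None
    return {
        'name': name,
        'address': ip_v4 and ip_v4['ip'],
        'netmask': ip_v4 and ip_v4['netmask'],
        'gateway': info.get('gateway'),
        'address_v6': ip_v6 and ip_v6['ip'],
        'netmask_v6': ip_v6 and ip_v6['netmask'],
        'gateway_v6': info.get('gateway6'),
        'dns': info.get('dns'),
    }
```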
> --- vm_utils.py ---
> _find_guest_agent needs a docstring
>
Will be done
> --- vmops.py ---
> yay!!
>
> --- xenapi_conn.py ---
> 1058 + ' network configuration if not injected into the
> image'
> should probably be
> 1058 + ' network configuration is not injected into the
> image'
> unless i misinterpreted.
>
Typo. Thanks for spotting this!
> The rest is great! I like the way you've implemented mounting vdi as a
> flaggable option and I also like how if a proper agent is present you
> preference using it over adding to the mounted filesystem.
Salvatore Orlando (salvatore-orlando) wrote:
Hi reviewers!
Updating the branch took a bit longer than expected, as in the meantime another branch changing the way network injection is performed for libvirt landed in trunk!
I adapted my branch following Trey's comments. No more _get_injectables! Now I also use the value returned by _get_network_info. Unlike the libvirt impl, I decided to use the 'info' field in the tuple rather than network_ref. This way, if the db changes in the future we will have to update only _get_network_info accordingly.
Another tangential issue was the rescue feature. In the unit tests spawn_rescue was stubbed out with a nice 'pass', leading to a failure on a later operation on the xenapi fake driver (which has been improved to be less 'fake'). By the way, if spawn_rescue has been stubbed out, what's the point of having a unit test for rescue? Code coverage is not complete at all, and I think the unit test is quite unreliable.
Thanks again for spending so much time on this branch. I hope it is now ready to rest in trunk forever!
721. By Salvatore Orlando: merge trunk
Rick Harris (rconradharris) wrote:
As with Sandy, I really can't speak to whether this is the right approach. I'll leave that to Trey.
But from a code-quality perspective, this looks really good. Thanks for the fix-ups!
One small nit (not a deal-breaker):
> 97 + l = []
> 98 + l.append(
> 99 + return l
Looks like one was missed. I'm okay letting this slide since we can get this later; better to get this merged sooner. That said, if you can slip in a fix here, that'd be great.
722. By Salvatore Orlando: minor pep8 fix in db/fakes.py
Salvatore Orlando (salvatore-orlando) wrote:
> As with Sandy, I really can't speak to whether this is the right approach.
> I'll leave that to Trey.
>
> But from a code-quality perspective, this looks really good. Thanks for the
> fix-ups!
>
> One small nit (not a deal-breaker):
>
> > 97 + l = []
> > 98 + l.append(
> > 99 + return l
>
> Looks like one was missed. I'm okay letting this slide since we can get this
> later; better to get this merged sooner. That said, if you can slip in a fix
> here, that'd be great.
As for the approach, I think I've addressed Trey's concerns.
Thanks for spotting the missed fix! Updated branch pushed.
Salvatore Orlando (salvatore-orlando) wrote:
Requesting an FF exception for this branch, as it already has one approve vote and all discussion points (major and minor) have been addressed according to reviewers' suggestions.
The branch only touches the part of the XenAPI backend for the compute service which deals with spawning VMs. It performs delicate operations such as mounting VDIs, but the routines we use for these operations are already widely used in the XenAPI backend.
Two unit tests, carefully prepared to reduce stubbing-out to a minimum, have been provided.
Benefits:
- this branch will enable support for IP injection into disk images for XenAPI, in the same way as is already done in libvirt; it indeed uses the same template file. Although XenAPI already supports agent-based injection, this way of doing injection can be very useful in situations where guest images do not have an agent.
- this branch slightly improves the unit test infrastructure for the xenapi backend, increasing code coverage.
Risk assessment:
- this branch has had quite a difficult life. It was first proposed more than a month ago, but was pushed back due to overlaps with other network-related branches dealing with the XenAPI backend. More problems arose when the interfaces.template file changed yesterday to reflect multiple interfaces. However, the branch is now completely in sync with current trunk, and since we are in the FF period we can expect minimal to no interference from other branches.
- No risk for the network manager, as its code has not been touched at all. The xenapi network driver will be used only by the compute node for setting up VLANs on xen hosts. This also means that there cannot be any impact on other capabilities of the Network Manager, such as VPN access or Floating IPs.
- Since this is a new feature, and the associated code is executed only when a compute node uses XenAPI and the VLAN manager, there is no risk of breaking libvirt, hyperv, or VMware installations, or XenAPI installations which use the flat manager.
Sandy Walsh (sandy-walsh) wrote:
Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was only 'suds' missing. Should have looked in pip-requires.
Nice work! Hopefully this sneaks in?
Salvatore Orlando (salvatore-orlando) wrote:
> Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was
> only 'suds' missing. Should have looked in pip-requires.
>
> Nice work! Hopefully this sneaks in?
Hi Sandy,
Are you sure you're not talking about VMware-vSphere? I believe that was the branch with the test failing due to suds...
In this branch your tests were failing because I was not stubbing out utils.execute in the migrate test case (on my dev machines they were not failing because all the commands were in the sudoers file :-)
Anyway, that has been fixed!
723. By Salvatore Orlando: merge trunk
Thierry Carrez (ttx) wrote:
This is a bit too wide for my taste but it seems very close to the goal. FFe granted, provided this gets merged very soon (before end of day Monday)
Sandy Walsh (sandy-walsh) wrote:
> Are you sure you're not talking about VMware-vSphere? I believe that was the
> branch with the test failing due to suds...
Could have been just part of the trunk merges. It was definitely this branch. All good now though.
Preview Diff
1 | === modified file 'Authors' | |||
2 | --- Authors 2011-03-24 22:47:36 +0000 | |||
3 | +++ Authors 2011-03-25 13:43:59 +0000 | |||
4 | @@ -1,4 +1,5 @@ | |||
5 | 1 | Andy Smith <code@term.ie> | 1 | Andy Smith <code@term.ie> |
6 | 2 | Andy Southgate <andy.southgate@citrix.com> | ||
7 | 2 | Anne Gentle <anne@openstack.org> | 3 | Anne Gentle <anne@openstack.org> |
8 | 3 | Anthony Young <sleepsonthefloor@gmail.com> | 4 | Anthony Young <sleepsonthefloor@gmail.com> |
9 | 4 | Antony Messerli <ant@openstack.org> | 5 | Antony Messerli <ant@openstack.org> |
10 | 5 | 6 | ||
=== modified file 'nova/tests/db/fakes.py'
--- nova/tests/db/fakes.py	2011-03-17 02:20:18 +0000
+++ nova/tests/db/fakes.py	2011-03-25 13:43:59 +0000
@@ -24,7 +24,7 @@
 from nova import utils
 
 
-def stub_out_db_instance_api(stubs):
+def stub_out_db_instance_api(stubs, injected=True):
     """ Stubs out the db API for creating Instances """
 
     INSTANCE_TYPES = {
@@ -56,6 +56,25 @@
         flavorid=5,
         rxtx_cap=5)}
 
+    network_fields = {
+        'id': 'test',
+        'bridge': 'xenbr0',
+        'label': 'test_network',
+        'netmask': '255.255.255.0',
+        'cidr_v6': 'fe80::a00:0/120',
+        'netmask_v6': '120',
+        'gateway': '10.0.0.1',
+        'gateway_v6': 'fe80::a00:1',
+        'broadcast': '10.0.0.255',
+        'dns': '10.0.0.2',
+        'ra_server': None,
+        'injected': injected}
+
+    fixed_ip_fields = {
+        'address': '10.0.0.3',
+        'address_v6': 'fe80::a00:3',
+        'network_id': 'test'}
+
     class FakeModel(object):
         """ Stubs out for model """
         def __init__(self, values):
@@ -76,38 +95,29 @@
     def fake_instance_type_get_by_name(context, name):
         return INSTANCE_TYPES[name]
 
-    def fake_instance_create(values):
-        """ Stubs out the db.instance_create method """
-
-        type_data = INSTANCE_TYPES[values['instance_type']]
-
-        base_options = {
-            'name': values['name'],
-            'id': values['id'],
-            'reservation_id': utils.generate_uid('r'),
-            'image_id': values['image_id'],
-            'kernel_id': values['kernel_id'],
-            'ramdisk_id': values['ramdisk_id'],
-            'state_description': 'scheduling',
-            'user_id': values['user_id'],
-            'project_id': values['project_id'],
-            'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
-            'instance_type': values['instance_type'],
-            'memory_mb': type_data['memory_mb'],
-            'mac_address': values['mac_address'],
-            'vcpus': type_data['vcpus'],
-            'local_gb': type_data['local_gb'],
-            'os_type': values['os_type']}
-
-        return FakeModel(base_options)
-
     def fake_network_get_by_instance(context, instance_id):
-        fields = {
-            'bridge': 'xenbr0',
-        }
-        return FakeModel(fields)
+        return FakeModel(network_fields)
+
+    def fake_network_get_all_by_instance(context, instance_id):
+        return [FakeModel(network_fields)]
 
-    stubs.Set(db, 'instance_create', fake_instance_create)
+    def fake_instance_get_fixed_address(context, instance_id):
+        return FakeModel(fixed_ip_fields).address
+
+    def fake_instance_get_fixed_address_v6(context, instance_id):
+        return FakeModel(fixed_ip_fields).address
+
+    def fake_fixed_ip_get_all_by_instance(context, instance_id):
+        return [FakeModel(fixed_ip_fields)]
+
     stubs.Set(db, 'network_get_by_instance', fake_network_get_by_instance)
     stubs.Set(db, 'instance_type_get_all', fake_instance_type_get_all)
     stubs.Set(db, 'instance_type_get_by_name', fake_instance_type_get_by_name)
+    stubs.Set(db, 'instance_get_fixed_address',
+              fake_instance_get_fixed_address)
+    stubs.Set(db, 'instance_get_fixed_address_v6',
+              fake_instance_get_fixed_address_v6)
+    stubs.Set(db, 'network_get_all_by_instance',
+              fake_network_get_all_by_instance)
+    stubs.Set(db, 'fixed_ip_get_all_by_instance',
+              fake_fixed_ip_get_all_by_instance)
 
=== added file 'nova/tests/fake_utils.py'
--- nova/tests/fake_utils.py	1970-01-01 00:00:00 +0000
+++ nova/tests/fake_utils.py	2011-03-25 13:43:59 +0000
@@ -0,0 +1,106 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""This module stubs out functions in nova.utils
+"""
+
+import re
+import types
+
+from eventlet import greenthread
+
+from nova import exception
+from nova import log as logging
+from nova import utils
+
+LOG = logging.getLogger('nova.tests.fake_utils')
+
+_fake_execute_repliers = []
+_fake_execute_log = []
+
+
+def fake_execute_get_log():
+    return _fake_execute_log
+
+
+def fake_execute_clear_log():
+    global _fake_execute_log
+    _fake_execute_log = []
+
+
+def fake_execute_set_repliers(repliers):
+    """Allows the client to configure replies to commands"""
+    global _fake_execute_repliers
+    _fake_execute_repliers = repliers
+
+
+def fake_execute_default_reply_handler(*ignore_args, **ignore_kwargs):
+    """A reply handler for commands that haven't been added to the reply
+    list. Returns empty strings for stdout and stderr
+    """
+    return '', ''
+
+
+def fake_execute(*cmd_parts, **kwargs):
+    """This function stubs out execute, optionally executing
+    a preconfigured function to return expected data
+    """
+    global _fake_execute_repliers
+
+    process_input = kwargs.get('process_input', None)
+    addl_env = kwargs.get('addl_env', None)
+    check_exit_code = kwargs.get('check_exit_code', 0)
+    cmd_str = ' '.join(str(part) for part in cmd_parts)
+
+    LOG.debug(_("Faking execution of cmd (subprocess): %s"), cmd_str)
+    _fake_execute_log.append(cmd_str)
+
+    reply_handler = fake_execute_default_reply_handler
+
+    for fake_replier in _fake_execute_repliers:
+        if re.match(fake_replier[0], cmd_str):
+            reply_handler = fake_replier[1]
+            LOG.debug(_('Faked command matched %s') % fake_replier[0])
+            break
+
+    if isinstance(reply_handler, basestring):
+        # If the reply handler is a string, return it as stdout
+        reply = reply_handler, ''
+    else:
+        try:
+            # Alternative is a function, so call it
+            reply = reply_handler(cmd_parts,
+                                  process_input=process_input,
+                                  addl_env=addl_env,
+                                  check_exit_code=check_exit_code)
+        except exception.ProcessExecutionError as e:
+            LOG.debug(_('Faked command raised an exception %s' % str(e)))
+            raise
+
+    stdout = reply[0]
+    stderr = reply[1]
+    LOG.debug(_("Reply to faked command is stdout='%(stdout)s' "
+                "stderr='%(stderr)s'") % locals())
+
+    # Replicate the sleep call in the real function
+    greenthread.sleep(0)
+    return reply
+
+
+def stub_out_utils_execute(stubs):
+    fake_execute_set_repliers([])
+    fake_execute_clear_log()
+    stubs.Set(utils, 'execute', fake_execute)
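The replier dispatch in `fake_execute` above matches each joined command string against a list of regexes and replies with either a canned stdout string or the result of a handler callable. A self-contained sketch of that same dispatch logic (standalone, using `str` where the Python 2 original uses `basestring`; this is not the nova module itself):

```python
import re

# Repliers pair a regex with either a canned stdout string or a handler
# callable, mirroring fake_execute_set_repliers above.
repliers = [
    (r'(sudo\s+)?tee.*interfaces', lambda cmd, **kw: ('tee-reply', '')),
    (r'(sudo\s+)?mount', 'mount-reply'),
]


def dispatch(*cmd_parts, **kwargs):
    """Return (stdout, stderr) for a faked command, as fake_execute does."""
    cmd_str = ' '.join(str(part) for part in cmd_parts)
    for pattern, handler in repliers:
        if re.match(pattern, cmd_str):
            if isinstance(handler, str):
                # A plain string replier becomes the command's stdout
                return handler, ''
            return handler(cmd_parts, **kwargs)
    # Unmatched commands fall through to empty stdout/stderr
    return '', ''
```

A test then asserts on what the faked commands returned: `dispatch('sudo', 'tee', '/etc/network/interfaces')` hits the first replier, while an unmatched command such as `ls -l` yields `('', '')`.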
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py	2011-03-21 18:21:26 +0000
+++ nova/tests/test_xenapi.py	2011-03-25 13:43:59 +0000
@@ -19,11 +19,15 @@
 """
 
 import functools
+import os
+import re
 import stubout
+import ast
 
 from nova import db
 from nova import context
 from nova import flags
+from nova import log as logging
 from nova import test
 from nova import utils
 from nova.auth import manager
@@ -38,6 +42,9 @@
 from nova.tests.db import fakes as db_fakes
 from nova.tests.xenapi import stubs
 from nova.tests.glance import stubs as glance_stubs
+from nova.tests import fake_utils
+
+LOG = logging.getLogger('nova.tests.test_xenapi')
 
 FLAGS = flags.FLAGS
 
@@ -64,13 +71,14 @@
     def setUp(self):
        super(XenAPIVolumeTestCase, self).setUp()
        self.stubs = stubout.StubOutForTesting()
+        self.context = context.RequestContext('fake', 'fake', False)
        FLAGS.target_host = '127.0.0.1'
        FLAGS.xenapi_connection_url = 'test_url'
        FLAGS.xenapi_connection_password = 'test_pass'
        db_fakes.stub_out_db_instance_api(self.stubs)
        stubs.stub_out_get_target(self.stubs)
        xenapi_fake.reset()
-        self.values = {'name': 1, 'id': 1,
+        self.values = {'id': 1,
                        'project_id': 'fake',
                        'user_id': 'fake',
                        'image_id': 1,
@@ -90,7 +98,7 @@
         vol['availability_zone'] = FLAGS.storage_availability_zone
         vol['status'] = "creating"
         vol['attach_status'] = "detached"
-        return db.volume_create(context.get_admin_context(), vol)
+        return db.volume_create(self.context, vol)
 
     def test_create_iscsi_storage(self):
         """ This shows how to test helper classes' methods """
@@ -126,7 +134,7 @@
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.values)
+        instance = db.instance_create(self.context, self.values)
         vm = xenapi_fake.create_vm(instance.name, 'Running')
         result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
 
@@ -146,7 +154,7 @@
                               stubs.FakeSessionForVolumeFailedTests)
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
-        instance = db.instance_create(self.values)
+        instance = db.instance_create(self.context, self.values)
         xenapi_fake.create_vm(instance.name, 'Running')
         self.assertRaises(Exception,
                           conn.attach_volume,
@@ -175,8 +183,9 @@
         self.project = self.manager.create_project('fake', 'fake', 'fake')
         self.network = utils.import_object(FLAGS.network_manager)
         self.stubs = stubout.StubOutForTesting()
-        FLAGS.xenapi_connection_url = 'test_url'
-        FLAGS.xenapi_connection_password = 'test_pass'
+        self.flags(xenapi_connection_url='test_url',
+                   xenapi_connection_password='test_pass',
+                   instance_name_template='%d')
         xenapi_fake.reset()
         xenapi_fake.create_local_srs()
         db_fakes.stub_out_db_instance_api(self.stubs)
@@ -189,6 +198,8 @@
         stubs.stub_out_vm_methods(self.stubs)
         glance_stubs.stubout_glance_client(self.stubs,
                                            glance_stubs.FakeGlance)
+        fake_utils.stub_out_utils_execute(self.stubs)
+        self.context = context.RequestContext('fake', 'fake', False)
         self.conn = xenapi_conn.get_connection(False)
 
     def test_list_instances_0(self):
@@ -213,7 +224,7 @@
                 if not vm_rec["is_control_domain"]:
                     vm_labels.append(vm_rec["name_label"])
 
-            self.assertEquals(vm_labels, [1])
+            self.assertEquals(vm_labels, ['1'])
 
         def ensure_vbd_was_torn_down():
             vbd_labels = []
@@ -221,7 +232,7 @@
                 vbd_rec = xenapi_fake.get_record('VBD', vbd_ref)
                 vbd_labels.append(vbd_rec["vm_name_label"])
 
-            self.assertEquals(vbd_labels, [1])
+            self.assertEquals(vbd_labels, ['1'])
 
         def ensure_vdi_was_torn_down():
             for vdi_ref in xenapi_fake.get_all('VDI'):
@@ -238,11 +249,10 @@
 
     def create_vm_record(self, conn, os_type):
         instances = conn.list_instances()
-        self.assertEquals(instances, [1])
+        self.assertEquals(instances, ['1'])
 
         # Get Nova record for VM
         vm_info = conn.get_info(1)
-
         # Get XenAPI record for VM
         vms = [rec for ref, rec
                in xenapi_fake.get_all_records('VM').iteritems()
@@ -251,7 +261,7 @@
         self.vm_info = vm_info
         self.vm = vm
 
-    def check_vm_record(self, conn):
+    def check_vm_record(self, conn, check_injection=False):
         # Check that m1.large above turned into the right thing.
         instance_type = db.instance_type_get_by_name(conn, 'm1.large')
         mem_kib = long(instance_type['memory_mb']) << 10
@@ -271,6 +281,25 @@
         # Check that the VM is running according to XenAPI.
         self.assertEquals(self.vm['power_state'], 'Running')
 
+        if check_injection:
+            xenstore_data = self.vm['xenstore_data']
+            key = 'vm-data/networking/aabbccddeeff'
+            xenstore_value = xenstore_data[key]
+            tcpip_data = ast.literal_eval(xenstore_value)
+            self.assertEquals(tcpip_data, {
+                'label': 'test_network',
+                'broadcast': '10.0.0.255',
+                'ips': [{'ip': '10.0.0.3',
+                         'netmask': '255.255.255.0',
+                         'enabled': '1'}],
+                'ip6s': [{'ip': 'fe80::a8bb:ccff:fedd:eeff',
+                          'netmask': '120',
+                          'enabled': '1',
+                          'gateway': 'fe80::a00:1'}],
+                'mac': 'aa:bb:cc:dd:ee:ff',
+                'dns': ['10.0.0.2'],
+                'gateway': '10.0.0.1'})
+
     def check_vm_params_for_windows(self):
         self.assertEquals(self.vm['platform']['nx'], 'true')
         self.assertEquals(self.vm['HVM_boot_params'], {'order': 'dc'})
@@ -304,10 +333,10 @@
         self.assertEquals(self.vm['HVM_boot_policy'], '')
 
     def _test_spawn(self, image_id, kernel_id, ramdisk_id,
-                    instance_type="m1.large", os_type="linux"):
-        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
-        values = {'name': 1,
-                  'id': 1,
+                    instance_type="m1.large", os_type="linux",
+                    check_injection=False):
+        stubs.stubout_loopingcall_start(self.stubs)
+        values = {'id': 1,
                   'project_id': self.project.id,
                   'user_id': self.user.id,
                   'image_id': image_id,
@@ -316,12 +345,10 @@
                   'instance_type': instance_type,
                   'mac_address': 'aa:bb:cc:dd:ee:ff',
                   'os_type': os_type}
-
-        conn = xenapi_conn.get_connection(False)
-        instance = db.instance_create(values)
-        conn.spawn(instance)
-        self.create_vm_record(conn, os_type)
-        self.check_vm_record(conn)
+        instance = db.instance_create(self.context, values)
+        self.conn.spawn(instance)
+        self.create_vm_record(self.conn, os_type)
+        self.check_vm_record(self.conn, check_injection)
 
     def test_spawn_not_enough_memory(self):
         FLAGS.xenapi_image_service = 'glance'
@@ -362,6 +389,85 @@
                          glance_stubs.FakeGlance.IMAGE_RAMDISK)
         self.check_vm_params_for_linux_with_external_kernel()
 
+    def test_spawn_netinject_file(self):
+        FLAGS.xenapi_image_service = 'glance'
+        db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
+
+        self._tee_executed = False
+
+        def _tee_handler(cmd, **kwargs):
+            input = kwargs.get('process_input', None)
+            self.assertNotEqual(input, None)
+            config = [line.strip() for line in input.split("\n")]
+            # Find the start of eth0 configuration and check it
+            index = config.index('auto eth0')
+            self.assertEquals(config[index + 1:index + 8], [
+                'iface eth0 inet static',
+                'address 10.0.0.3',
+                'netmask 255.255.255.0',
+                'broadcast 10.0.0.255',
+                'gateway 10.0.0.1',
+                'dns-nameservers 10.0.0.2',
+                ''])
+            self._tee_executed = True
+            return '', ''
+
+        fake_utils.fake_execute_set_repliers([
+            # Capture the sudo tee .../etc/network/interfaces command
+            (r'(sudo\s+)?tee.*interfaces', _tee_handler),
+        ])
+        FLAGS.xenapi_image_service = 'glance'
+        self._test_spawn(glance_stubs.FakeGlance.IMAGE_MACHINE,
+                         glance_stubs.FakeGlance.IMAGE_KERNEL,
+                         glance_stubs.FakeGlance.IMAGE_RAMDISK,
+                         check_injection=True)
+        self.assertTrue(self._tee_executed)
+
+    def test_spawn_netinject_xenstore(self):
+        FLAGS.xenapi_image_service = 'glance'
+        db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
+
+        self._tee_executed = False
+
+        def _mount_handler(cmd, *ignore_args, **ignore_kwargs):
+            # When mounting, create real files under the mountpoint to simulate
+            # files in the mounted filesystem
+
+            # mount point will be the last item of the command list
+            self._tmpdir = cmd[len(cmd) - 1]
+            LOG.debug(_('Creating files in %s to simulate guest agent' %
+                        self._tmpdir))
+            os.makedirs(os.path.join(self._tmpdir, 'usr', 'sbin'))
+            # Touch the file using open
+            open(os.path.join(self._tmpdir, 'usr', 'sbin',
+                              'xe-update-networking'), 'w').close()
+            return '', ''
+
+        def _umount_handler(cmd, *ignore_args, **ignore_kwargs):
+            # Umount would normally make files in the mounted filesystem
+            # disappear, so do that here
+            LOG.debug(_('Removing simulated guest agent files in %s' %
+                        self._tmpdir))
+            os.remove(os.path.join(self._tmpdir, 'usr', 'sbin',
+                                   'xe-update-networking'))
+            os.rmdir(os.path.join(self._tmpdir, 'usr', 'sbin'))
+            os.rmdir(os.path.join(self._tmpdir, 'usr'))
+            return '', ''
+
+        def _tee_handler(cmd, *ignore_args, **ignore_kwargs):
+            self._tee_executed = True
+            return '', ''
+
+        fake_utils.fake_execute_set_repliers([
+            (r'(sudo\s+)?mount', _mount_handler),
+            (r'(sudo\s+)?umount', _umount_handler),
+            (r'(sudo\s+)?tee.*interfaces', _tee_handler)])
+        self._test_spawn(1, 2, 3, check_injection=True)
+
+        # tee must not run in this case, where an injection-capable
+        # guest agent is detected
+        self.assertFalse(self._tee_executed)
+
     def test_spawn_with_network_qos(self):
         self._create_instance()
         for vif_ref in xenapi_fake.get_all('VIF'):
@@ -371,6 +477,7 @@
                           str(4 * 1024))
 
     def test_rescue(self):
+        self.flags(xenapi_inject_image=False)
         instance = self._create_instance()
         conn = xenapi_conn.get_connection(False)
         conn.rescue(instance, None)
@@ -391,8 +498,8 @@
 
     def _create_instance(self):
         """Creates and spawns a test instance"""
+        stubs.stubout_loopingcall_start(self.stubs)
         values = {
-            'name': 1,
             'id': 1,
             'project_id': self.project.id,
             'user_id': self.user.id,
@@ -402,7 +509,7 @@
             'instance_type': 'm1.large',
             'mac_address': 'aa:bb:cc:dd:ee:ff',
             'os_type': 'linux'}
-        instance = db.instance_create(values)
+        instance = db.instance_create(self.context, values)
         self.conn.spawn(instance)
         return instance
 
@@ -447,21 +554,26 @@
         db_fakes.stub_out_db_instance_api(self.stubs)
         stubs.stub_out_get_target(self.stubs)
         xenapi_fake.reset()
+        xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
         self.manager = manager.AuthManager()
         self.user = self.manager.create_user('fake', 'fake', 'fake',
                                              admin=True)
         self.project = self.manager.create_project('fake', 'fake', 'fake')
-        self.values = {'name': 1, 'id': 1,
+        self.context = context.RequestContext('fake', 'fake', False)
+        self.values = {'id': 1,
                        'project_id': self.project.id,
                        'user_id': self.user.id,
                        'image_id': 1,
                        'kernel_id': None,
                        'ramdisk_id': None,
+                       'local_gb': 5,
                        'instance_type': 'm1.large',
                        'mac_address': 'aa:bb:cc:dd:ee:ff',
                        'os_type': 'linux'}
 
+        fake_utils.stub_out_utils_execute(self.stubs)
         stubs.stub_out_migration_methods(self.stubs)
+        stubs.stubout_get_this_vm_uuid(self.stubs)
         glance_stubs.stubout_glance_client(self.stubs,
                                            glance_stubs.FakeGlance)
 
@@ -472,14 +584,15 @@
         self.stubs.UnsetAll()
 
     def test_migrate_disk_and_power_off(self):
-        instance = db.instance_create(self.values)
+        instance = db.instance_create(self.context, self.values)
         stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
         conn = xenapi_conn.get_connection(False)
         conn.migrate_disk_and_power_off(instance, '127.0.0.1')
 
     def test_finish_resize(self):
-        instance = db.instance_create(self.values)
+        instance = db.instance_create(self.context, self.values)
         stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
+        stubs.stubout_loopingcall_start(self.stubs)
         conn = xenapi_conn.get_connection(False)
         conn.finish_resize(instance, dict(base_copy='hurr', cow='durr'))
 
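The xenstore convention exercised by `check_vm_record` and the netinject tests is simple: the key is `vm-data/networking/` plus the instance's MAC address with colons stripped, and the value is the string form of a dict that `ast.literal_eval` can turn back into data. A standalone sketch of that convention (the helper name here is illustrative, not part of the branch):

```python
import ast


def networking_xenstore_entry(mac, info):
    """Build the xenstore key/value pair the tests above check for:
    the key embeds the MAC with colons stripped, the value is the
    repr of a dict describing the network."""
    key = 'vm-data/networking/' + mac.replace(':', '')
    return key, repr(info)


key, value = networking_xenstore_entry(
    'aa:bb:cc:dd:ee:ff',
    {'label': 'test_network',
     'gateway': '10.0.0.1',
     'ips': [{'ip': '10.0.0.3', 'netmask': '255.255.255.0',
              'enabled': '1'}]})
# The stored string round-trips back to a dict, which is what
# check_vm_record does with ast.literal_eval.
```

With the test's MAC this yields the key `vm-data/networking/aabbccddeeff`, matching the literal checked in the diff.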
=== modified file 'nova/tests/xenapi/stubs.py'
--- nova/tests/xenapi/stubs.py 2011-03-24 04:09:51 +0000
+++ nova/tests/xenapi/stubs.py 2011-03-25 13:43:59 +0000
@@ -21,6 +21,7 @@
 from nova.virt.xenapi import volume_utils
 from nova.virt.xenapi import vm_utils
 from nova.virt.xenapi import vmops
+from nova import utils


 def stubout_instance_snapshot(stubs):
@@ -137,14 +138,17 @@
     stubs.Set(vm_utils, '_is_vdi_pv', f)


+def stubout_loopingcall_start(stubs):
+    def fake_start(self, interval, now=True):
+        self.f(*self.args, **self.kw)
+    stubs.Set(utils.LoopingCall, 'start', fake_start)
+
+
 class FakeSessionForVMTests(fake.SessionBase):
     """ Stubs out a XenAPISession for VM tests """
     def __init__(self, uri):
         super(FakeSessionForVMTests, self).__init__(uri)

-    def network_get_all_records_where(self, _1, _2):
-        return self.xenapi.network.get_all_records()
-
     def host_call_plugin(self, _1, _2, _3, _4, _5):
         sr_ref = fake.get_all('SR')[0]
         vdi_ref = fake.create_vdi('', False, sr_ref, False)
@@ -196,7 +200,7 @@
         pass

     def fake_spawn_rescue(self, inst):
-        pass
+        inst._rescue = False

     stubs.Set(vmops.VMOps, "_shutdown", fake_shutdown)
     stubs.Set(vmops.VMOps, "_acquire_bootlock", fake_acquire_bootlock)

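The stubout_loopingcall_start helper in this diff replaces LoopingCall.start with a version that invokes the wrapped function exactly once, synchronously, so tests never depend on a green thread firing. A standalone sketch of the same technique, where MiniLoopingCall is a hypothetical stand-in for nova.utils.LoopingCall:

```python
# Stub an async poller's start() so the wrapped callable runs once,
# synchronously -- the trick stubout_loopingcall_start uses in the diff.
# MiniLoopingCall is a hypothetical stand-in for nova.utils.LoopingCall.

class MiniLoopingCall(object):
    def __init__(self, f, *args, **kw):
        self.f, self.args, self.kw = f, args, kw

    def start(self, interval, now=True):
        raise RuntimeError('would spawn a polling greenthread in production')


def fake_start(self, interval, now=True):
    # Run the target exactly once instead of polling forever.
    self.f(*self.args, **self.kw)


calls = []
MiniLoopingCall.start = fake_start  # equivalent to stubs.Set(...)
MiniLoopingCall(calls.append, 'polled').start(interval=5)
print(calls)  # ['polled']
```

This keeps the test deterministic: the code under test still "starts" its loop, but control returns immediately with the side effects already applied.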
=== modified file 'nova/virt/disk.py'
--- nova/virt/disk.py 2011-03-14 17:59:41 +0000
+++ nova/virt/disk.py 2011-03-25 13:43:59 +0000
@@ -26,6 +26,8 @@
 import tempfile
 import time

+from nova import context
+from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -38,6 +40,9 @@
                      'minimum size in bytes of root partition')
 flags.DEFINE_integer('block_size', 1024 * 1024 * 256,
                      'block_size to use for dd')
+flags.DEFINE_string('injected_network_template',
+                    utils.abspath('virt/interfaces.template'),
+                    'Template file for injected network')
 flags.DEFINE_integer('timeout_nbd', 10,
                      'time to wait for a NBD device coming up')
 flags.DEFINE_integer('max_nbd_devices', 16,
@@ -97,11 +102,7 @@
                               % err)

     try:
-        if key:
-            # inject key file
-            _inject_key_into_fs(key, tmpdir)
-        if net:
-            _inject_net_into_fs(net, tmpdir)
+        inject_data_into_fs(tmpdir, key, net, utils.execute)
     finally:
         # unmount device
         utils.execute('sudo', 'umount', mapped_device)
@@ -164,7 +165,18 @@
     _DEVICES.append(device)


-def _inject_key_into_fs(key, fs):
+def inject_data_into_fs(fs, key, net, execute):
+    """Injects data into a filesystem already mounted by the caller.
+    Virt connections can call this directly if they mount their fs
+    in a different way to inject_data
+    """
+    if key:
+        _inject_key_into_fs(key, fs, execute=execute)
+    if net:
+        _inject_net_into_fs(net, fs, execute=execute)
+
+
+def _inject_key_into_fs(key, fs, execute=None):
     """Add the given public ssh key to root's authorized_keys.

     key is an ssh key string.
@@ -179,7 +191,7 @@
                   process_input='\n' + key.strip() + '\n')


-def _inject_net_into_fs(net, fs):
+def _inject_net_into_fs(net, fs, execute=None):
     """Inject /etc/network/interfaces into the filesystem rooted at fs.

     net is the contents of /etc/network/interfaces.

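The disk.py change above extracts inject_data_into_fs so a virt driver that mounts the guest filesystem by its own means (XenAPI attaches the VDI to dom0, libvirt uses nbd) can reuse the injection logic against an already-mounted tree. A minimal sketch of that split, using plain file writes in place of the sudo'd execute calls the real module uses:

```python
# Sketch of the inject_data_into_fs() refactor: the caller mounts the
# guest filesystem however it likes and passes the mount point; the
# helper only dispatches on which injectables are present. Plain file
# writes stand in for nova.virt.disk's sudo'd execute() calls.
import os
import tempfile


def _inject_key_into_fs(key, fs):
    # Append the public key to root's authorized_keys inside the image.
    sshdir = os.path.join(fs, 'root', '.ssh')
    os.makedirs(sshdir)  # production code also chowns/chmods via sudo
    with open(os.path.join(sshdir, 'authorized_keys'), 'a') as f:
        f.write('\n' + key.strip() + '\n')


def _inject_net_into_fs(net, fs):
    # Write the rendered /etc/network/interfaces into the image.
    netdir = os.path.join(fs, 'etc', 'network')
    os.makedirs(netdir)
    with open(os.path.join(netdir, 'interfaces'), 'w') as f:
        f.write(net)


def inject_data_into_fs(fs, key, net):
    """Inject data into a filesystem already mounted by the caller."""
    if key:
        _inject_key_into_fs(key, fs)
    if net:
        _inject_net_into_fs(net, fs)


root = tempfile.mkdtemp()
inject_data_into_fs(root, 'ssh-rsa AAAA... nova@test', 'auto eth0\n')
```

The design choice is simply separation of concerns: mount/unmount stays driver-specific, while the injection payload logic is shared.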
=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py 2011-03-24 21:26:11 +0000
+++ nova/virt/libvirt_conn.py 2011-03-25 13:43:59 +0000
@@ -76,9 +76,7 @@
 flags.DEFINE_string('rescue_image_id', 'ami-rescue', 'Rescue ami image')
 flags.DEFINE_string('rescue_kernel_id', 'aki-rescue', 'Rescue aki image')
 flags.DEFINE_string('rescue_ramdisk_id', 'ari-rescue', 'Rescue ari image')
-flags.DEFINE_string('injected_network_template',
-                    utils.abspath('virt/interfaces.template'),
-                    'Template file for injected network')
+
 flags.DEFINE_string('libvirt_xml_template',
                     utils.abspath('virt/libvirt.xml.template'),
                     'Libvirt XML Template')

=== modified file 'nova/virt/xenapi/fake.py'
--- nova/virt/xenapi/fake.py 2011-03-03 19:13:15 +0000
+++ nova/virt/xenapi/fake.py 2011-03-25 13:43:59 +0000
@@ -162,6 +162,12 @@
     vbd_rec['vm_name_label'] = vm_name_label


+def after_VM_create(vm_ref, vm_rec):
+    """Create read-only fields in the VM record."""
+    if 'is_control_domain' not in vm_rec:
+        vm_rec['is_control_domain'] = False
+
+
 def create_pbd(config, host_ref, sr_ref, attached):
     return _create_object('PBD', {
         'device-config': config,
@@ -286,6 +292,25 @@
         rec['currently_attached'] = False
         rec['device'] = ''

+    def VM_get_xenstore_data(self, _1, vm_ref):
+        return _db_content['VM'][vm_ref].get('xenstore_data', '')
+
+    def VM_remove_from_xenstore_data(self, _1, vm_ref, key):
+        db_ref = _db_content['VM'][vm_ref]
+        if not 'xenstore_data' in db_ref:
+            return
+        db_ref['xenstore_data'][key] = None
+
+    def network_get_all_records_where(self, _1, _2):
+        # TODO (salvatore-orlando): filter table on _2
+        return _db_content['network']
+
+    def VM_add_to_xenstore_data(self, _1, vm_ref, key, value):
+        db_ref = _db_content['VM'][vm_ref]
+        if not 'xenstore_data' in db_ref:
+            db_ref['xenstore_data'] = {}
+        db_ref['xenstore_data'][key] = value
+
     def host_compute_free_memory(self, _1, ref):
         # Always return 12GB available
         return 12 * 1024 * 1024 * 1024
@@ -376,7 +401,6 @@
     def _getter(self, name, params):
         self._check_session(params)
         (cls, func) = name.split('.')
-
         if func == 'get_all':
             self._check_arg_count(params, 1)
             return get_all(cls)
@@ -399,10 +423,11 @@
         if len(params) == 2:
             field = func[len('get_'):]
             ref = params[1]
-
-            if (ref in _db_content[cls] and
-                field in _db_content[cls][ref]):
-                return _db_content[cls][ref][field]
+            if (ref in _db_content[cls]):
+                if (field in _db_content[cls][ref]):
+                    return _db_content[cls][ref][field]
+                else:
+                    raise Failure(['HANDLE_INVALID', cls, ref])

         LOG.debug(_('Raising NotImplemented'))
         raise NotImplementedError(
@@ -476,7 +501,7 @@
     def _check_session(self, params):
         if (self._session is None or
             self._session not in _db_content['session']):
             raise Failure(['HANDLE_INVALID', 'session', self._session])
         if len(params) == 0 or params[0] != self._session:
             LOG.debug(_('Raising NotImplemented'))
             raise NotImplementedError('Call to XenAPI without using .xenapi')

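The fake session in this diff gains xenstore_data accessors so tests can observe the keys vmops writes before boot. Their semantics: a per-VM dict is created lazily on first write, and removal is recorded by setting the key to None. A self-contained sketch of that behaviour (mirroring the fake above, not real XenAPI):

```python
# Sketch of the xenstore_data semantics the fake session gives the
# tests: lazily-created per-VM dict, removal recorded as None. This
# mirrors nova/virt/xenapi/fake.py's in-memory _db_content, not a real
# XenServer xenstore.
_db_content = {'VM': {'OpaqueRef:1': {}}}


def VM_add_to_xenstore_data(vm_ref, key, value):
    rec = _db_content['VM'][vm_ref]
    rec.setdefault('xenstore_data', {})[key] = value


def VM_remove_from_xenstore_data(vm_ref, key):
    rec = _db_content['VM'][vm_ref]
    if 'xenstore_data' in rec:
        rec['xenstore_data'][key] = None  # tombstone, not deletion


def VM_get_xenstore_data(vm_ref):
    # Default of '' matches the fake's behaviour before any write.
    return _db_content['VM'][vm_ref].get('xenstore_data', '')


VM_add_to_xenstore_data('OpaqueRef:1', 'vm-data/hostname', 'inst-1')
```

A test can then assert on VM_get_xenstore_data's return value to verify what the guest agent would see at boot.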
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py 2011-03-24 22:31:39 +0000
+++ nova/virt/xenapi/vm_utils.py 2011-03-25 13:43:59 +0000
@@ -22,6 +22,7 @@
 import os
 import pickle
 import re
+import tempfile
 import time
 import urllib
 import uuid
@@ -29,6 +30,8 @@

 from eventlet import event
 import glance.client
+from nova import context
+from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -36,6 +39,7 @@
 from nova.auth.manager import AuthManager
 from nova.compute import instance_types
 from nova.compute import power_state
+from nova.virt import disk
 from nova.virt import images
 from nova.virt.xenapi import HelperBase
 from nova.virt.xenapi.volume_utils import StorageError
@@ -670,6 +674,23 @@
         return None

     @classmethod
+    def preconfigure_instance(cls, session, instance, vdi_ref, network_info):
+        """Makes alterations to the image before launching as part of spawn.
+        """
+
+        # As mounting the image VDI is expensive, we only want to do it once,
+        # if at all, so determine whether it's required first, and then do
+        # everything
+        mount_required = False
+        key, net = _prepare_injectables(instance, network_info)
+        mount_required = key or net
+        if not mount_required:
+            return
+
+        with_vdi_attached_here(session, vdi_ref, False,
+                               lambda dev: _mounted_processing(dev, key, net))
+
+    @classmethod
     def lookup_kernel_ramdisk(cls, session, vm):
         vm_rec = session.get_xenapi().VM.get_record(vm)
         if 'PV_kernel' in vm_rec and 'PV_ramdisk' in vm_rec:
@@ -927,6 +948,7 @@
               e.details[0] == 'DEVICE_DETACH_REJECTED'):
             LOG.debug(_('VBD.unplug rejected: retrying...'))
             time.sleep(1)
+            LOG.debug(_('Not sleeping anymore!'))
         elif (len(e.details) > 0 and
               e.details[0] == 'DEVICE_ALREADY_DETACHED'):
             LOG.debug(_('VBD.unplug successful eventually.'))
@@ -1002,3 +1024,114 @@
 def get_name_label_for_image(image):
     # TODO(sirp): This should eventually be the URI for the Glance image
     return _('Glance image %s') % image
+
+
+def _mount_filesystem(dev_path, dir):
+    """Mounts the device specified by dev_path in dir."""
+    try:
+        out, err = utils.execute('sudo', 'mount',
+                                 '-t', 'ext2,ext3',
+                                 dev_path, dir)
+    except exception.ProcessExecutionError as e:
+        err = str(e)
+    return err
+
+
+def _find_guest_agent(base_dir, agent_rel_path):
+    """
+    Tries to locate a guest agent at the path
+    specified by agent_rel_path.
+    """
+    agent_path = os.path.join(base_dir, agent_rel_path)
+    if os.path.isfile(agent_path):
+        # The presence of the guest agent
+        # file indicates that this instance can
+        # reconfigure the network from xenstore data,
+        # so manipulation of files in /etc is not
+        # required
+        LOG.info(_('XenServer tools installed in this '
+                   'image are capable of network injection. '
+                   'Networking files will not be '
+                   'manipulated'))
+        return True
+    xe_daemon_filename = os.path.join(base_dir,
+                                      'usr', 'sbin', 'xe-daemon')
+    if os.path.isfile(xe_daemon_filename):
+        LOG.info(_('XenServer tools are present '
+                   'in this image but are not capable '
+                   'of network injection'))
+    else:
+        LOG.info(_('XenServer tools are not '
+                   'installed in this image'))
+    return False
+
+
+def _mounted_processing(device, key, net):
+    """Callback which runs with the image VDI attached."""
+
+    dev_path = '/dev/' + device + '1'  # NB: Partition 1 hardcoded
+    tmpdir = tempfile.mkdtemp()
+    try:
+        # Mount only Linux filesystems, to avoid disturbing NTFS images
+        err = _mount_filesystem(dev_path, tmpdir)
+        if not err:
+            try:
+                # This try block ensures that the umount occurs
+                if not _find_guest_agent(tmpdir, FLAGS.xenapi_agent_path):
+                    LOG.info(_('Manipulating interface files '
+                               'directly'))
+                    disk.inject_data_into_fs(tmpdir, key, net,
+                                             utils.execute)
+            finally:
+                utils.execute('sudo', 'umount', dev_path)
+        else:
+            LOG.info(_('Failed to mount filesystem (expected for '
+                       'non-linux instances): %s') % err)
+    finally:
+        # remove temporary directory
+        os.rmdir(tmpdir)
+
+
+def _prepare_injectables(inst, networks_info):
+    """
+    Prepares the ssh key and the network configuration file to be
+    injected into the disk image.
+    """
+    # do the import here - Cheetah.Template will be loaded
+    # only if injection is performed
+    from Cheetah import Template as t
+    template = t.Template
+    template_data = open(FLAGS.injected_network_template).read()
+
+    key = str(inst['key_data'])
+    net = None
+    if networks_info:
+        ifc_num = -1
+        interfaces_info = []
+        for (network_ref, info) in networks_info:
+            ifc_num += 1
+            if not network_ref['injected']:
+                continue
+
+            ip_v4 = ip_v6 = None
+            if 'ips' in info and len(info['ips']) > 0:
+                ip_v4 = info['ips'][0]
+            if 'ip6s' in info and len(info['ip6s']) > 0:
+                ip_v6 = info['ip6s'][0]
+            if len(info['dns']) > 0:
+                dns = info['dns'][0]
+            interface_info = {'name': 'eth%d' % ifc_num,
+                              'address': ip_v4 and ip_v4['ip'] or '',
+                              'netmask': ip_v4 and ip_v4['netmask'] or '',
+                              'gateway': info['gateway'],
+                              'broadcast': info['broadcast'],
+                              'dns': dns,
+                              'address_v6': ip_v6 and ip_v6['ip'] or '',
+                              'netmask_v6': ip_v6 and ip_v6['netmask'] or '',
+                              'gateway_v6': ip_v6 and ip_v6['gateway'] or '',
+                              'use_ipv6': FLAGS.use_ipv6}
+            interfaces_info.append(interface_info)
+        net = str(template(template_data,
+                           searchList=[{'interfaces': interfaces_info,
+                                        'use_ipv6': FLAGS.use_ipv6}]))
+    return key, net

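_prepare_injectables walks the instance's networks, skips any not flagged for injection (while still consuming an interface number), and assembles one dict per NIC before rendering the interfaces template. A sketch of that assembly, with a plain str.format() template standing in for the Cheetah template nova actually uses, and only the IPv4 fields shown:

```python
# Sketch of the per-NIC assembly in _prepare_injectables. A str.format()
# template stands in for the Cheetah interfaces.template; field names
# follow the diff. Only IPv4 fields are shown for brevity.
TEMPLATE = (
    'auto {name}\n'
    'iface {name} inet static\n'
    '    address {address}\n'
    '    netmask {netmask}\n'
    '    gateway {gateway}\n'
)


def build_interfaces(networks_info):
    chunks = []
    for ifc_num, (network_ref, info) in enumerate(networks_info):
        if not network_ref['injected']:
            # Skipped networks still consume an interface number,
            # matching the ifc_num += 1 before the check in the diff.
            continue
        ip_v4 = info['ips'][0] if info.get('ips') else None
        chunks.append(TEMPLATE.format(
            name='eth%d' % ifc_num,
            address=ip_v4['ip'] if ip_v4 else '',
            netmask=ip_v4['netmask'] if ip_v4 else '',
            gateway=info['gateway']))
    return ''.join(chunks)


net = build_interfaces([({'injected': True},
                         {'ips': [{'ip': '10.0.0.2',
                                   'netmask': '255.255.255.0'}],
                          'gateway': '10.0.0.1'})])
```

The resulting string is what _inject_net_into_fs writes to /etc/network/interfaces inside the image.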
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py 2011-03-24 21:11:48 +0000
+++ nova/virt/xenapi/vmops.py 2011-03-25 13:43:59 +0000
@@ -33,6 +33,7 @@
 from nova import log as logging
 from nova import exception
 from nova import utils
+from nova import flags

 from nova.auth.manager import AuthManager
 from nova.compute import power_state
@@ -43,6 +44,7 @@

 XenAPI = None
 LOG = logging.getLogger("nova.virt.xenapi.vmops")
+FLAGS = flags.FLAGS


 class VMOps(object):
@@ -53,7 +55,6 @@
         self.XenAPI = session.get_imported_xenapi()
         self._session = session
         self.poll_rescue_last_ran = None
-
         VMHelper.XenAPI = self.XenAPI

     def list_instances(self):
@@ -168,6 +169,12 @@
         # create it now. This goes away once nova-multi-nic hits.
         if network_info is None:
             network_info = self._get_network_info(instance)
+
+        # Alter the image before VM start for, e.g., network injection
+        if FLAGS.xenapi_inject_image:
+            VMHelper.preconfigure_instance(self._session, instance,
+                                           vdi_ref, network_info)
+
         self.create_vifs(vm_ref, network_info)
         self.inject_network_info(instance, vm_ref, network_info)
         return vm_ref
@@ -237,26 +244,17 @@
             obj = None
             try:
                 # check for opaque ref
-                obj = self._session.get_xenapi().VM.get_record(instance_or_vm)
+                obj = self._session.get_xenapi().VM.get_uuid(instance_or_vm)
                 return instance_or_vm
             except self.XenAPI.Failure:
-                # wasn't an opaque ref, must be an instance name
+                # wasn't an opaque ref, can be an instance name
                 instance_name = instance_or_vm

         # if instance_or_vm is an int/long it must be instance id
         elif isinstance(instance_or_vm, (int, long)):
             ctx = context.get_admin_context()
-            try:
-                instance_obj = db.instance_get(ctx, instance_or_vm)
-                instance_name = instance_obj.name
-            except exception.NotFound:
-                # The unit tests screw this up, as they use an integer for
-                # the vm name. I'd fix that up, but that's a matter for
-                # another bug report. So for now, just try with the passed
-                # value
-                instance_name = instance_or_vm
-
-        # otherwise instance_or_vm is an instance object
+            instance_obj = db.instance_get(ctx, instance_or_vm)
+            instance_name = instance_obj.name
         else:
             instance_name = instance_or_vm.name
         vm_ref = VMHelper.lookup(self._session, instance_name)
@@ -692,7 +690,6 @@
         vm_ref = VMHelper.lookup(self._session, instance.name)
         self._shutdown(instance, vm_ref)
         self._acquire_bootlock(vm_ref)
-
         instance._rescue = True
         self.spawn_rescue(instance)
         rescue_vm_ref = VMHelper.lookup(self._session, instance.name)
@@ -816,6 +813,7 @@
         info = {
             'label': network['label'],
             'gateway': network['gateway'],
+            'broadcast': network['broadcast'],
             'mac': instance.mac_address,
             'rxtx_cap': flavor['rxtx_cap'],
             'dns': [network['dns']],

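The simplified _get_vm_opaque_ref dispatch in the vmops.py diff accepts three shapes of argument: a string that may already be an opaque ref (probed via the session, falling back to treating it as an instance name), an integer instance id resolved through the database, or an instance object. A standalone sketch of that dispatch, using int/str rather than Python 2's int/long and basestring, with the session probe and db lookup passed in as illustrative callables:

```python
# Sketch of the three-way dispatch in _get_vm_opaque_ref after this diff.
# is_opaque_ref stands in for the VM.get_uuid probe (which raises
# XenAPI.Failure for a non-ref) and db_get_name for db.instance_get;
# both are hypothetical stand-ins, not nova APIs.
def resolve(instance_or_vm, is_opaque_ref, db_get_name):
    if isinstance(instance_or_vm, str):
        if is_opaque_ref(instance_or_vm):
            return instance_or_vm            # already an opaque ref
        name = instance_or_vm                # wasn't a ref: treat as a name
    elif isinstance(instance_or_vm, int):
        name = db_get_name(instance_or_vm)   # instance id -> instance name
    else:
        name = instance_or_vm.name           # instance object
    return 'looked-up:' + name               # stands in for VMHelper.lookup


print(resolve('OpaqueRef:42',
              lambda r: r.startswith('OpaqueRef:'), None))  # OpaqueRef:42
```

Note the diff deliberately drops the old NotFound fallback: an integer that is not a valid instance id now propagates the error instead of being silently retried as a VM name.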
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2011-03-24 08:56:06 +0000
+++ nova/virt/xenapi_conn.py 2011-03-25 13:43:59 +0000
@@ -107,8 +107,22 @@
                      5,
                      'Max number of times to poll for VHD to coalesce.'
                      ' Used only if connection_type=xenapi.')
+flags.DEFINE_bool('xenapi_inject_image',
+                  True,
+                  'Specifies whether an attempt to inject network/key'
+                  ' data into the disk image should be made.'
+                  ' Used only if connection_type=xenapi.')
+flags.DEFINE_string('xenapi_agent_path',
+                    'usr/sbin/xe-update-networking',
+                    'Specifies the path in which the xenapi guest agent'
+                    ' should be located. If the agent is present,'
+                    ' network configuration is not injected into the image.'
+                    ' Used only if connection_type=xenapi'
+                    ' and xenapi_inject_image=True')
+
 flags.DEFINE_string('xenapi_sr_base_path', '/var/run/sr-mount',
                     'Base path to the storage repository')
+
 flags.DEFINE_string('target_host',
                     None,
                     'iSCSI Target Host')

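Note that xenapi_agent_path is a relative path: it is joined under the mounted guest root to probe for the agent binary, and file injection is skipped when the agent is found (the agent will instead read the xenstore keys at boot). A minimal sketch of that probe:

```python
# Sketch of the agent probe driven by the xenapi_agent_path flag: the
# relative path is joined under the mounted guest root, and the presence
# of the agent binary means /etc manipulation can be skipped.
import os
import tempfile


def find_guest_agent(base_dir, agent_rel_path='usr/sbin/xe-update-networking'):
    return os.path.isfile(os.path.join(base_dir, agent_rel_path))


# Simulate a mounted guest image that carries the agent.
root = tempfile.mkdtemp()
agent_dir = os.path.join(root, 'usr', 'sbin')
os.makedirs(agent_dir)
open(os.path.join(agent_dir, 'xe-update-networking'), 'w').close()

has_agent = find_guest_agent(root)  # True: skip direct file injection
```

The default value mirrors the flag's default above; a deployment shipping the agent elsewhere in its images would point the flag at that path instead.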
Hi Andy!

Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/xenapi/vm_utils.py.

Lemme know if you're uncertain about the process of resolving merge conflicts.

Cheers,
jay