Merge lp:~citrix-openstack/nova/xenapi-netinject-prop into lp:~hudson-openstack/nova/trunk

Proposed by Andy Southgate
Status: Merged
Approved by: Sandy Walsh
Approved revision: 723
Merged at revision: 889
Proposed branch: lp:~citrix-openstack/nova/xenapi-netinject-prop
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1060 lines (+507/-93)
11 files modified
Authors (+1/-0)
nova/tests/db/fakes.py (+42/-32)
nova/tests/fake_utils.py (+106/-0)
nova/tests/test_xenapi.py (+139/-26)
nova/tests/xenapi/stubs.py (+8/-4)
nova/virt/disk.py (+19/-7)
nova/virt/libvirt_conn.py (+1/-3)
nova/virt/xenapi/fake.py (+31/-6)
nova/virt/xenapi/vm_utils.py (+133/-0)
nova/virt/xenapi/vmops.py (+13/-15)
nova/virt/xenapi_conn.py (+14/-0)
To merge this branch: bzr merge lp:~citrix-openstack/nova/xenapi-netinject-prop
Reviewer Review Type Date Requested Status
Trey Morris (community) Approve
Thierry Carrez (community) ffe Approve
Sandy Walsh (community) Approve
Rick Harris (community) Approve
Review via email: mp+49798@code.launchpad.net

Description of the change

This is basic network injection for XenServer, and includes:

o Modification of the /etc/network/interfaces file within the image using code taken from and now shared with libvirt_conn. This is for compatibility with legacy Linux images without a guest agent.

o Setting of xenstore keys before instance boot, intended for the XenServer Windows agent. The agent will use these to configure the network at boot-time.

This change does not implement live reconfiguration, which is on another blueprint:

https://blueprints.launchpad.net/nova/+spec/xs-inject-networking

It does include template code to detect the presence of agents and avoid modifying the filesystem if they are injection-capable.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi Andy!

Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/xenapi/vm_utils.py.

Lemme know if you're uncertain about the process of resolving merge conflicts.

Cheers,
jay

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Hi,

Thanks Jay. It looks like lp:nova has moved on since I rebased this
branch. Is it sufficient to rebase to trunk again?

Cheers,

Andy S.

On 02/16/11 18:48, Jay Pipes wrote:
> Hi Andy!
>
> Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/xenapi/vm_utils.py.
>
> Lemme know if you're uncertain about the process of resolving merge conflicts.
>
> Cheers,
> jay
>

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Wed, Feb 16, 2011 at 2:01 PM, Andy Southgate
<email address hidden> wrote:
> Thanks Jay.  It looks like lp:nova has moved on since I rebased this
> branch.  Is it sufficient to rebase to trunk again?

It's good practice to have a local copy of the lp:nova trunk, and
then, when you are ready to bzr push your local topic branch to
Launchpad, simply do a bzr merge ../trunk and resolve any conflicts
that might pop up. After you resolve the conflicts in the files (if
any), do a bzr resolve --all and then bzr push your local topic branch
to Launchpad.

As far as "rebasing", not sure you need to do that. bzr has a rebase
plugin, but it's not what you need in this case.

So, in short, the recommended dev process is like so:

# Assume ~/repos/nova is where you have your bzr branches for nova...
cd ~/repos/nova
bzr branch lp:nova trunk
bzr branch trunk topic # Where "topic" is the name you give the branch you work on (say, xs-netinject-prop)
cd topic
# do work in topic branch...
bzr commit -m "Some comment about your changes"
bzr push lp:~citrix-openstack/nova/topic
# On Launchpad, propose your branch for merging, as you did...
# If reviews come back saying changes are needed, set the merge proposal status to Work In Progress
# then, in your local branch, do more work
bzr commit -m "More changes"
cd ../trunk
bzr pull # This pulls all changes merged into lp:nova trunk from the rest of the Nova contributors into your local branch
cd ../topic
bzr merge ../trunk
# If any conflicts, resolve them and then do bzr resolve --all
bzr commit -m "Merge trunk"
bzr push
# Then set your merge proposal back to "Needs Review"

All set :)

Don't worry, once you do it a couple times, it becomes second nature ;)

-jay

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Thanks, that was a bit quicker than a rebase!

Andy

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Salvatore (salvatore-orlando) is taking over this merge BTW as I'm moving on to other things, so please send questions to him if you have them.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I have merged with trunk again. This branch is ready for review. I have requested a review from Trey Morris, since this branch is related to his xs-inject-networking work. An additional nova-core review would be appreciated.

Thanks!

Revision history for this message
Trey Morris (tr3buchet) wrote :

Great tests!

I have some concerns, not the least of which is that you added new functions which write networking data to the xenstore param list and call them just before spawn writes to the xenstore param list. If both wrote to the same location, spawn would blow away all of your changes; since your functions write to different paths than spawn does, we'd instead end up with some nasty config duplication. The paths spawn uses are the ones that work with the agent. I'm not sure at this point what the agent would do with data at other paths (possibly ignore it).

Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting them because we'll be building not only base images but also customer images where they could be using who knows which file systems or even encryption. So there will need to be some sort of branching put in place to determine if you do or do not want to mount the vdi.

There are functions in place such as "write_to_param_xenstore()" in vmops you should use instead of making the calls directly to follow convention set when Ed refactored.

Also, in the next hour I'll be proposing a merge for the 2nd half of my network_injection branch; it refactors what takes place in spawn and allows network reconfiguration/injection in post-spawn situations. I think it makes sense to get this in and then place your branch on top of it and fix any conflicts, instead of doing it twice.

Let me know if you want to discuss anything, I'm glad to help.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Great tests!
>
> I have some concerns, not the least of which is that you added new functions
> which write networking data to the xenstore param list and call them just
> before spawn writes to the xenstore param list. If written to the same
> location, spawn will blow away all of your changes. It doesn't appear that
> your function writes them to the same path though so we'd end up with some
> nasty config duplication. Your function also writes to different paths than
> spawn does. Those paths work with the agent. I'm not sure at this point what
> the agent would do with data at other paths (possibly ignore it).
>

I'm not entirely sure whether this duplication is intended or just the result of some previous merge with trunk (I'm thinking of multi-nic-support). However, I agree that the fact that they write to different paths is a bit weird. I'll investigate this issue.

> Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting
> them because we'll be building not only base images but also customer images
> where they could be using who knows which file systems or even encryption. So
> there will need to be some sort of branching put in place to determine if you
> do or do not want to mount the vdi.
>

Mounting the image for injecting data is a technique which has been used in nova since its early days. In the xenapi implementation, we first stream it into a VDI, and then attach the VDI to the VM in which nova-compute is running. Your point on VHDs and encrypted file systems is more than valid, though. For the same reason we believe the guest agent should be the 'preferred' way for getting network/key configuration into the instance. We are providing injection for 'compatibility' with the libvirt implementation.

Also, the way in which it is currently done will probably not work even in more trivial cases, such as the root partition not starting at byte 0 or /etc being mounted on a partition other than the first. As already stated, injection is intended only for 'basic' scenarios.

In the proposed implementation, if _mounted_processing fails to mount the image into the vdi, an error message is logged but this does not cause the spawn process to fail. However, I agree we probably need a better mechanism for understanding whether injection should be performed or not. I don't think proposing changes either to the API or instance_types would be a good idea at all. Now that glance is in place we can probably use metadata for this purpose. For instance, image_type==simple_machine would mean network injection should occur, whereas image_type==agent_machine would mean the xenstore param list should be populated. But probably this is not the right place for this discussion!

> There are functions in place such as "write_to_param_xenstore()" in vmops you
> should use instead of making the calls directly to follow convention set when
> Ed refactored.
>

Good point. I will take care of this.

> Also in the next hour I'll be proposing merge for the 2nd half of my
> network_injection branch, it refactors what takes place in spawn and allows
> network reconfiguration/injection in post spawn situations. I think it makes
> sense to get this i...


Revision history for this message
Trey Morris (tr3buchet) wrote :

> I'm not entirely sure whether this duplication is intended or just the result of some previous merge
> with trunk (I'm thinking of multi-nic-support). However, I agree the fact that they write at different
> path is a bit weird. I'll investigate this issue.

The paths I used are what the agent is expecting: for instance, the mac address being lowercase with no punctuation vs all uppercase and full of '_'. That's what I'm referring to. I'm not sure of a reason these should be changed, so I'm all ears if you've got one.

> (mounting the vdi)

First I'm glad it fails quietly. Second, I understand the reason for doing so. It's a great reason. It's something that for sure should be implemented; however, we will never use it in any circumstance, relying on the agent instead. For this reason, I suggest a configuration option. example: --mount-vdi=false. I don't see the point in having more metadata if it will always be true or always be false. Of course, if there are people who want to mount some vdi at spawn and twiddle with the file system, but not others, then that's a different story. I'm happy with either way as long as there is a way to disable it completely.
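To make the suggestion concrete, here is a minimal sketch of flag-gated VDI mounting with quiet failure. Everything here is hypothetical: Nova's real flag machinery (nova.flags) is not used, `prepare_instance_disk` and `inject_fn` are invented names, and `xenapi_inject_image` is simply the flag name the thread later settles on.

```python
# Hypothetical stand-in for Nova's flag system; 'xenapi_inject_image' is
# the flag name mentioned later in this thread.
FLAGS = {'xenapi_inject_image': True}


def prepare_instance_disk(vdi_ref, inject_fn, log=print):
    """Gate mount-and-inject behind a deployment-wide flag."""
    if not FLAGS['xenapi_inject_image']:
        # Agent-based deployments disable filesystem injection entirely.
        return False
    try:
        inject_fn(vdi_ref)
        return True
    except OSError as exc:
        # A failed mount is logged but does not abort spawn, mirroring
        # the quiet-failure behaviour discussed above.
        log('Injection failed for %s: %s' % (vdi_ref, exc))
        return False
```

With the flag off, the injection callback is never invoked, which is the "disable it completely" behaviour Trey asks for.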

At this point I feel confident saying that injection of network configuration into the xenstore/xenstore-param-list and the agent call to reset networking is finished. Perhaps preconfigure_xenstore() isn't needed. I'm not nearly as knowledgeable of xenstore as you guys surely are so if I've forgotten something please let me know. I only chose my paths as such because of the agent.

There will be a lot of changes to these areas as i start working on multinic, especially in the inject_network_info() function. Perhaps we can agree to leave writing injection into the xenstore to me and you handle the rest (mounting of vdi's, configuration etc). This way I can proceed with multi-nic without worry that I'll just have to rewrite it.

Also, if there is anything you'd like me to change about the way I've implemented network injection, please let me know. I've written it in a way that works perfectly for us. I'd like for it to work for you as well.

-tr3buchet

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Acknowledging Trey's review and reverting status to work in progress to ease reviewer's work.

@Trey: I will reply to your latest comments shortly!

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> > I'm not entirely sure whether this duplication is intended or just the
> result of some previous merge
> > with trunk (I'm thinking of multi-nic-support). However, I agree the fact
> that they write at different
> > path is a bit weird. I'll investigate this issue.
>
> The paths I used are what the agent is expecting for instance the mac address
> being lowercase with no punctuation vs all uppercase and full of '_'. That's
> what I'm referring to. I'm not sure of a reason these should be changed, so
> I'm all ears if you've got one.
>

I also don't see any reason why we should use different paths. The only explanation I can offer for the difference is that these two branches evolved independently, even though the functionality they implement overlaps considerably.

> > (mounting the vdi)
>
> First I'm glad it fails quietly. Second, I understand the reason for doing so.
> It's a great reason. It's something that for sure should be implemented;
> however, we will never use it in any circumstance, relying on the agent
> instead. For this reason, I suggest a configuration option. example: --mount-
> vdi=false. I don't see the point in having more metadata if it will always be
> true or always be false. Of course, if there are people who want to mount some
> vdi at spawn and twiddle with the file system, but not others, then that's a
> different story. I'm happy with either way as long as there is a way to
> disable it completely.
>

I was suggesting metadata assuming a scenario in which there might be images with and without an agent. However, I agree this scenario does not make a lot of sense after all. As you said, either you use the agent or you don't. So the flag idea is maybe more effective, and easier to implement as well :-)

> At this point I feel confident saying that injection of network configuration
> into the xenstore/xenstore-param-list and the agent call to reset networking
> is finished. Perhaps preconfigure_xenstore() isn't needed. I'm not nearly as
> knowledgeable of xenstore as you guys surely are so if I've forgotten
> something please let me know. I only chose my paths as such because of the
> agent.

I tentatively agree with you that preconfigure_xenstore is not required anymore. I will however double-check the code before taking a decision in this direction.

> There will be a lot of changes to these areas as i start working on multinic,
> especially in the inject_network_info() function. Perhaps we can agree to
> leave writing injection into the xenstore to me and you handle the rest
> (mounting of vdi's, configuration etc). This way I can proceed with multi-nic
> without worry that I'll just have to rewrite it.
>

I see your branch has now been merged. I will update my branch accordingly.
I agree on your approach: maybe if there is anything in preconfigure_xenstore, can I ask you to add it to inject_network_info?

> Also, if there is anything you'd like me to change about the way I've
> implemented network injection, please let me know. I've written it in a way
> that works perfectly for us. I'd like for it to work for you as well.
>
> -tr3buchet

Revision history for this message
Trey Morris (tr3buchet) wrote :

 > I agree on your approach: maybe if there is anything in preconfigure_xenstore, can I ask you to add it to inject_network_info?

Absolutely! Just let me know. The first thing I came across was that I'm not injecting the broadcast address where you guys did. Come up with a list so I can implement the changes all at once.

-tr3buchet

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

State changed to work in progress due to conflicts with current trunk

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Conflicts resolved.
Branch ready for review again.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Reverting to work in progress to resolve conflicts and integrate IPv6 injection into branch

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Branch updated and again ready for review.

Revision history for this message
Rick Harris (rconradharris) wrote :

Nice work, Andy-- really digging the added tests here.

Overall this looks good, just have a few small suggestions that might improve
readability:

> 87 + def fake_network_get_all_by_instance(context, instance_id):
> 88 + l = []
> 89 + l.append(FakeModel(network_fields))
> 90 + return l
> 91 +

Pep8: "Never use the characters `l' (lowercase letter el)". Might be better
as:

    return [FakeModel(network_fields)]

> 153 +def fake_execute_get_log():
> 154 + global _fake_execute_log
> 155 + return _fake_execute_log

`global` isn't needed here since you're not mutating the value.

> +def fake_execute(*cmd, **kwargs):

Since `cmd` is a list, it might be clearer as `cmd_parts` (or something
similar)

> 185 + cmd_map = map(str, cmd)
> 186 + cmd_str = ' '.join(cmd_map)

Going forward, comprehensions are preferred to map/reduce/filter. So, might be
better as:

    cmd_str = ' '.join(str(part) for part in cmd_parts)

> 199 + if isinstance(reply_handler, types.StringTypes):

I think `basestring` is preferred to StringTypes:

    if isinstance(reply_handler, basestring):

> 213 + LOG.debug(_("Reply to faked command is stdout='%(0)s' stderr='%(1)s'") %
> 214 + {'0': reply[0], '1': reply[1]})

Rather than using a position-style, since a dict is being passed, might be
better to attach meaningful names. For example:

    stdout = reply[0]
    stderr = reply[1]
    LOG.debug(_("Reply to faked command is stdout='%(stdout)s'"
                "stderr='%(stderr)s'") % locals())

> 264 + #db_fakes.stub_out_db_network_api(self.stubs)

Vestigial code?

> 389 def _test_spawn(self, image_id, kernel_id, ramdisk_id,
> 394 + instance_type="m1.large", os_type="linux", check_injection=False):

Should probably be lined up with the paren, something like:

    def _test_spawn(self, image_id, kernel_id, ramdisk_id,
                    instance_type="m1.large", os_type="linux",
                    check_injection=False):

> 580 +        def f_1(self, interval, now=True):
> 581 +            self.f(*self.args, **self.kw)
> 582 +        stubs.Set(utils.LoopingCall, 'start', f_1)

Could use a more descriptive name than `f_1`. Perhaps, `fake_start`.

> 640 + t = __import__('Cheetah.Template', globals(), locals(), ['Template'],
> 641 + -1)
> 642 + template = t.Template

Is there a reason you're using __import__ here--if so, it might benefit from a
NOTE(andy):

Otherwise, might be better to do:

    if not template:
        from Cheetah import Template
        template = Template

> 643 + #load template file if necessary

Not a super-useful comment (didn't add anything for me, at least)

> 808 + LOG.debug(_('Raising Failure'))

Not a particularly useful logging line, especially since we raise immediately
afterwards. I'd consider scrapping this.

> 804 +    if (ref in _db_content[cls]):
> 805 +        if (field in _db_content[cls][ref]):
> 806 +            return _db_content[cls][ref][field]
> 807 +    else:
> 808 +        LOG.debug(_('Raising Failure'))
> 809 +        raise Failure(['HANDLE_INVALID', cls, ref])

Might be c...


review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi Rick,

thanks for taking the time to do such a detailed review!
Your tips will be very useful for producing higher-quality code in the future.

I will update the branch in the next few hours in order to give reviewers a full day before feature freeze to review it.

There are only a few bits which probably deserve some discussion:
>
> Is there a reason you're using __import__ here--if so, it might benefit from a
> NOTE(andy):
>
> Otherwise, might be better to do:
>
> if not template:
> from Cheetah import Template
> template = Template
>

I agree the syntax you are proposing is much neater, but wouldn't this put a requirement on Cheetah for the whole nova project (e.g.: nova services, unit tests, etc. failing if Cheetah is not installed)?

> > 990 + LOG.debug("ENTERING WAIT FOR BOOT!")

This is vestigial code. I love using debug lines like this, and I usually put a pep8 violation in them to make sure I remove them before pushing. Unfortunately I forgot to add the pep8 violation to this one!

Revision history for this message
Rick Harris (rconradharris) wrote :

> wouldn't this put a requirement on Cheetah

Since you're importing Cheetah in function and not in the module namespace, we'll only end up importing Cheetah if it's actually needed. So, I think the "from x import y", in this case, works just as well as __import__, just a bit cleaner IMHO.

> and I usually throw a pep8 violation in them in order to make sure I remove them before push

Interesting. I've been meaning to add a SPIKE(author) tag that would cause Hudson to reject patches that include this. You could wrap extra debugs with a # SPIKE comment and be sure it never gets committed.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

All of Rick's comments but one have been addressed.

The one not addressed is the following:

> 804 +    if (ref in _db_content[cls]):
> 805 +        if (field in _db_content[cls][ref]):
> 806 +            return _db_content[cls][ref][field]
> 807 +    else:
> 808 +        LOG.debug(_('Raising Failure'))
> 809 +        raise Failure(['HANDLE_INVALID', cls, ref])

Might be clearer with an EAFP style, like:

    try:
        return _db_content[cls][ref][field]
    except KeyError:
        raise Failure(['HANDLE_INVALID', cls, ref])

In the fake xenapi driver we need to distinguish between the following cases:
- entity not found (ref not in _db_content[cls]), in which case we need to raise a HANDLE_INVALID failure
- entity found, but attribute not found, in which case we need to raise a NOT_IMPLEMENTED failure.

Nevertheless, Rick's suggestion is very valid, and it calls for a wider refactoring of the fake xenapi driver, starting with the _getter routine. However, I think it would be better to do this after Cactus.
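For illustration, here is a sketch of how the EAFP style could still keep the two failure modes distinct. The `_db_content` layout and the `Failure` class below are simplified stand-ins for the fake xenapi driver's, not the actual code:

```python
# Simplified stand-ins for the fake driver's state and Failure exception.
_db_content = {'VM': {'ref1': {'name_label': 'instance-1'}}}


class Failure(Exception):
    def __init__(self, details):
        super().__init__(details)
        self.details = details


def get_record_field(cls, ref, field):
    # EAFP, but with two try blocks so the error cases stay distinct:
    # an unknown ref means HANDLE_INVALID, while a known ref with an
    # unknown field means NOT_IMPLEMENTED.
    try:
        record = _db_content[cls][ref]
    except KeyError:
        raise Failure(['HANDLE_INVALID', cls, ref])
    try:
        return record[field]
    except KeyError:
        raise Failure(['NOT_IMPLEMENTED', field])
```

Two narrow try/except blocks preserve the distinction a single `except KeyError` would collapse.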

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Getting XenAPIMigrateInstance (test_xenapi.py) asking for passwords during tests:

XenAPIMigrateInstance
    test_finish_resize [sudo] password for swalsh:
                                [sudo] password for swalsh:
OK

Otherwise, impressive branch ... nothing serious to say.

641 + # it the caller does not provide template object and data

"If the caller ..."

But largely I'm going to trust Trey on ensuring the approach is sound (as he's closer to the problem than I)

Once he approves, I will too.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Getting XenAPIMigrateInstance (test_xenapi.py) asking for passwords during
> tests:
>
> XenAPIMigrateInstance
> test_finish_resize [sudo] password for swalsh:
> [sudo] password for swalsh:
> OK
>

Thanks for spotting this Sandy! Finish_resize calls _create_vm which in turn calls preconfigure_instance (which writes /etc/network/interfaces). We were not stubbing out utils.execute for the migrate test case.

I already pushed updated code; please see if the tests run fine now.
Another way of doing this is disabling the flag xenapi_inject_image for the migrate use case. If you think that would make more sense, I'll do that.

> Otherwise, impressive branch ... nothing serious to say.
>
> 641 + # it the caller does not provide template object and data
>
> "If the caller ..."
>
> But largely I'm going to trust Trey on ensuring the approach is sound (as he's
> closer to the problem than I)
>

get_injectables is a routine 'shared' between the libvirt and xenapi backends. In the libvirt backend Cheetah.Template is loaded with a dynamic import as a global variable, and the template file is read when the LibVirtConnection instance is initialized.

We don't do that in XenAPI, and I'm a bit wary of using global variables, so I made the template class and the template file optional parameters and set them dynamically if the caller (the xenapi backend in this case) does not provide them.
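A rough sketch of that optional-parameter approach. The function and parameter names and the rendering call are illustrative assumptions, not the actual nova code; the point is that the Cheetah import and template-file read happen only when the caller supplies nothing:

```python
def get_injectables_sketch(network_info, template_cls=None,
                           template_data=None,
                           template_path='interfaces.template'):
    # Lazily resolve the template class and file only when the caller
    # (e.g. the xenapi backend) did not supply them, avoiding both
    # module-level globals and a hard import-time dependency on Cheetah.
    if template_cls is None:
        from Cheetah.Template import Template as template_cls
    if template_data is None:
        with open(template_path) as f:
            template_data = f.read()
    return str(template_cls(template_data, searchList=[network_info]))
```

A backend that already holds a loaded template passes both arguments and pays no import or I/O cost.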
> Once he approves, I will too.

Revision history for this message
Trey Morris (tr3buchet) wrote :

--- disk.py ----
get_injectables needs a docstring
why is get_injectables in disk? seems like an odd place..

get_injectables troubles me because one of my goals with multi-nic is to untether the network and virt layers by passing all of the data required for network configuration into the virt layer from compute, so no network-related db lookups are required at the virt layer. In the meantime (before nova-multi-nic), get_network_info() in vmops.py is performing this task. There will be a similar get_network_info() function for each of libvirt/xenapi/hyperV. You can see the format of the data compute will pass to the virt layer spawn process by looking at get_network_info() in vmops.py. Is there something you need that this function isn't handling? I'm not sure I fully grasp the goal of get_injectables().

This should also apply to the fake calls for network info in tests/db/fakes.py. The xenapi/libvirt/hyperV tests shouldn't be using those (but this will surely have to be figured out later).

--- vm_utils.py ---
_find_guest_agent needs a docstring

--- vmops.py ---
yay!!

--- xenapi_conn.py ---
1058 + ' network configuration if not injected into the image'
should probably be
1058 + ' network configuration is not injected into the image'
unless i misinterpreted.

The rest is great! I like the way you've implemented mounting the vdi as a flaggable option, and I also like how, if a proper agent is present, you prefer using it over adding to the mounted filesystem.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi Trey,
see my comments inline

> --- disk.py ----
> get_injectables needs a docstring
> why is get_injectables in disk? seems like an odd place..
>
> get_injectables troubles me because one of my goals with multi-nic is to
> untether the network and virt layers by passing all of the data required for
> network configuration into the virt layer from compute so no network related
> db lookups are required at the virt layer. In the mean time (before nova-
> multi-nic), get_network_info() in vmops.py is performing this task. There will
> be a similar get_network_info() function for each libvirt/xenapi/hyperV. You
> can see the format of the data compute will pass to the virt layer spawn
> process by looking at get_network_info() in vmops.py. Is there something you
> need that this function isn't handling? I'm not sure I fully grasp the goal of
> get_injectables().
>
> This should also apply to the fake calls for network info in
> tests/db/fakes.py. The xenapi/libvirt/hyperV tests shouldn't be using those
> (but this will surely have to figured out later).

When I noticed your network_info I realized there was some overlap between it and get_injectables.
get_injectables is in disk.py since it is used by both libvirt and xenapi (and possibly hyperv); it seemed the best place for it because it performs operations related to the disk image.

From what you said, every backend is going to have a _get_network_info, at least until multi-nic hits trunk. In that case I no longer see a reason for having that common bit of code, and I will therefore remove it. However, I should still use the interfaces.template file and populate it with Cheetah. To do so, the best thing I can do is to map the dict created by _get_network_info onto the dict required by the Cheetah template (the dict already contains all the info I need).

Another thing to note is that the Cheetah template does not support multiple IP addresses. For this reason I think it might be worth using only the first element from the ip/ipv6 lists in the dict created by network_info.
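As an illustration of that mapping, a sketch that keeps only the first entry of each address list. The shape of the network_info dict below ('info' carrying 'ips'/'ip6s' lists) is an assumption based on this thread, not the actual nova structure:

```python
def to_template_dict(network_info_entry):
    # Assumed shape: an 'info' dict carrying 'ips'/'ip6s' lists plus
    # scalar fields. The interfaces template handles a single address
    # per interface, so only the first element of each list is kept.
    info = network_info_entry['info']
    ips = info.get('ips') or []
    ip6s = info.get('ip6s') or []
    return {
        'address': ips[0]['ip'] if ips else None,
        'netmask': ips[0].get('netmask') if ips else None,
        'address_v6': ip6s[0]['ip'] if ip6s else None,
        'gateway': info.get('gateway'),
        'dns': info.get('dns'),
    }
```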

> --- vm_utils.py ---
> _find_guest_agent needs a docstring
>

Will be done

> --- vmops.py ---
> yay!!
>
> --- xenapi_conn.py ---
> 1058 + ' network configuration if not injected into the
> image'
> should probably be
> 1058 + ' network configuration is not injected into the
> image'
> unless i misinterpreted.
>

Typo. Thanks for spotting this!

> The rest is great! I like the way you've implemented mounting vdi as a
> flaggable option and I also like how if a proper agent is present you
> preference using it over adding to the mounted filesystem.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi reviewers!

Updating the branch took a bit longer than expected, as in the meantime another branch changing the way in which network injection is performed for libvirt landed in trunk!

I adapted my branch following Trey's comments. No more _get_injectables! Now I also use the value returned by _get_network_info. Unlike the libvirt impl, I decided to use the 'info' field in the tuple rather than network_ref. This way, if the db changes in the future we will have to update only _get_network_info accordingly.

Another tangential issue was the rescue feature. In the unit tests spawn_rescue was stubbed out with a nice 'pass', leading to a failure on a later operation on the xenapi fake driver (which has been improved to be less 'fake'). By the way, if spawn_rescue has been stubbed out, what's the point of having a unit test for rescue? Code coverage is not complete at all, and I think the unit test is quite unreliable.

Thanks again for spending so much time on this branch. I hope it is now ready to rest in trunk forever!

721. By Salvatore Orlando

merge trunk

Revision history for this message
Rick Harris (rconradharris) wrote :

As with Sandy, I really can't speak to whether this is the right approach. I'll leave that to Trey.

But from a code-quality perspective, this looks really good. Thanks for the fix-ups!

One small nit (not a deal-breaker):

> 97 + l = []
> 98 + l.append(FakeModel(fixed_ip_fields))
> 99 + return l

Looks like one was missed. I'm okay letting this slide since we can get this later; better to get this merged sooner. That said, if you can slip in a fix here, that'd be great.

review: Approve
722. By Salvatore Orlando

minor pep8 fix in db/fakes.py

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> As with Sandy, I really can't speak to whether this is the right approach.
> I'll leave that to Trey.
>
> But from a code-quality perspective, this looks really good. Thanks for the
> fix-ups!
>
> One small nit (not a deal-breaker):
>
> > 97 + l = []
> > 98 + l.append(FakeModel(fixed_ip_fields))
> > 99 + return l
>
> Looks like one was missed. I'm okay letting this slide since we can get this
> later; better to get this merged sooner. That said, if you can slip in a fix
> here, that'd be great.

As for the approach, I think I've addressed Trey's concerns.
Thanks for spotting the missed fix! Updated branch pushed.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Requesting a FF exception for this branch, as it already has one Approve vote and all discussion points (major and minor) have been addressed according to the reviewers' suggestions.

The branch only touches the part of the XenAPI backend for the compute service which deals with spawning VMs. It performs delicate operations such as mounting VDIs, but the routines we use for these operations are already widely used in the XenAPI backend.
Two unit tests, carefully prepared to reduce stubbing-out to a minimum, have been provided.

Benefits:
- This branch enables support for IP injection into disk images for XenAPI, in the same way as is already done in libvirt; it uses the same template file. Although XenAPI already supports agent-based injection, file-based injection can be very useful in situations where guest images do not have an agent.
- This branch also slightly improves the unit test infrastructure for the XenAPI backend, increasing code coverage.
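For a concrete picture of what file-based injection produces, the new unit test `test_spawn_netinject_file` in this branch asserts that the injected /etc/network/interfaces contains an eth0 stanza built from the fake DB fixtures. A minimal sketch reproducing that check (addresses taken from nova/tests/db/fakes.py; the real test also captures the `tee` command via a stubbed `utils.execute`):

```python
# Expected eth0 stanza, as asserted by test_spawn_netinject_file in this
# branch; the values come from the fake DB fixtures in nova/tests/db/fakes.py.
expected_stanza = [
    'auto eth0',
    'iface eth0 inet static',
    'address 10.0.0.3',
    'netmask 255.255.255.0',
    'broadcast 10.0.0.255',
    'gateway 10.0.0.1',
    'dns-nameservers 10.0.0.2',
]


def check_interfaces(contents):
    """Mimic the test's check: locate the eth0 stanza and compare it."""
    config = [line.strip() for line in contents.split('\n')]
    index = config.index('auto eth0')
    return config[index:index + len(expected_stanza)] == expected_stanza
```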

Risk assessment:
- This branch has had a rather difficult life. It was first proposed more than a month ago, but was pushed back due to overlaps with other network-related branches touching the XenAPI backend, and more problems arose when the interfaces.template file changed yesterday to support multiple interfaces. However, the branch is now completely in sync with current trunk, and since we are in the FF period we can expect little to no interference from other branches.
- There is no risk for the network manager, as its code has not been touched at all. The XenAPI network driver is used only by the compute node for setting up VLANs on Xen hosts, which also means there can be no impact on other Network Manager capabilities such as VPN access or Floating IPs.
- Since this is a new feature, and the associated code is executed only when a compute node uses XenAPI and the VLAN manager, there is no risk of breaking libvirt, Hyper-V, or VMware installations, or XenAPI installations that use the flat manager.
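The companion xenstore-based injection (for agent-capable guests such as the XenServer Windows agent) writes a per-MAC networking record before boot. The new unit test asserts data shaped like the following; this sketch is grounded in `check_vm_record` in nova/tests/test_xenapi.py:

```python
import ast

# The xenstore key is 'vm-data/networking/' plus the instance MAC with the
# colons stripped, per the branch's unit test.
mac = 'aa:bb:cc:dd:ee:ff'
key = 'vm-data/networking/' + mac.replace(':', '')

# The value is a string holding a Python-literal dict, which the test reads
# back with ast.literal_eval.
xenstore_value = str({
    'label': 'test_network',
    'broadcast': '10.0.0.255',
    'ips': [{'ip': '10.0.0.3', 'netmask': '255.255.255.0', 'enabled': '1'}],
    'ip6s': [{'ip': 'fe80::a8bb:ccff:fedd:eeff', 'netmask': '120',
              'enabled': '1', 'gateway': 'fe80::a00:1'}],
    'mac': mac,
    'dns': ['10.0.0.2'],
    'gateway': '10.0.0.1'})

tcpip_data = ast.literal_eval(xenstore_value)
```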

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was only 'suds' missing. Should have looked in pip-requires.

Nice work! Hopefully this sneaks in?

review: Approve
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was
> only 'suds' missing. Should have looked in pip-requires.
>
> Nice work! Hopefully this sneaks in?

Hi Sandy,

Are you sure you're not thinking of VMware-vSphere? I believe that was the branch with tests failing due to suds...
In this branch your tests were failing because I was not stubbing out utils.execute in the migrate test case (they did not fail on my dev machines because all the commands were in the sudoers file :-)
Anyway, that has been fixed!

723. By Salvatore Orlando

merge trunk

Revision history for this message
Thierry Carrez (ttx) wrote :

This is a bit too wide for my taste, but it seems very close to the goal. FFe granted, provided this gets merged very soon (before end of day Monday).

review: Approve (ffe)
Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

> Are you sure you're not talking about VMware-vSphere? I believe that was the
> branch with the test failing due to suds...

Could have been just part of the trunk merges. It was definitely this branch. All good now though.

Revision history for this message
Trey Morris (tr3buchet) wrote :

great :)

review: Approve

Preview Diff

=== modified file 'Authors'
--- Authors 2011-03-24 22:47:36 +0000
+++ Authors 2011-03-25 13:43:59 +0000
@@ -1,4 +1,5 @@
1Andy Smith <code@term.ie>1Andy Smith <code@term.ie>
2Andy Southgate <andy.southgate@citrix.com>
2Anne Gentle <anne@openstack.org>3Anne Gentle <anne@openstack.org>
3Anthony Young <sleepsonthefloor@gmail.com>4Anthony Young <sleepsonthefloor@gmail.com>
4Antony Messerli <ant@openstack.org>5Antony Messerli <ant@openstack.org>
56
=== modified file 'nova/tests/db/fakes.py'
--- nova/tests/db/fakes.py 2011-03-17 02:20:18 +0000
+++ nova/tests/db/fakes.py 2011-03-25 13:43:59 +0000
@@ -24,7 +24,7 @@
24from nova import utils24from nova import utils
2525
2626
27def stub_out_db_instance_api(stubs):27def stub_out_db_instance_api(stubs, injected=True):
28 """ Stubs out the db API for creating Instances """28 """ Stubs out the db API for creating Instances """
2929
30 INSTANCE_TYPES = {30 INSTANCE_TYPES = {
@@ -56,6 +56,25 @@
56 flavorid=5,56 flavorid=5,
57 rxtx_cap=5)}57 rxtx_cap=5)}
5858
59 network_fields = {
60 'id': 'test',
61 'bridge': 'xenbr0',
62 'label': 'test_network',
63 'netmask': '255.255.255.0',
64 'cidr_v6': 'fe80::a00:0/120',
65 'netmask_v6': '120',
66 'gateway': '10.0.0.1',
67 'gateway_v6': 'fe80::a00:1',
68 'broadcast': '10.0.0.255',
69 'dns': '10.0.0.2',
70 'ra_server': None,
71 'injected': injected}
72
73 fixed_ip_fields = {
74 'address': '10.0.0.3',
75 'address_v6': 'fe80::a00:3',
76 'network_id': 'test'}
77
59 class FakeModel(object):78 class FakeModel(object):
60 """ Stubs out for model """79 """ Stubs out for model """
61 def __init__(self, values):80 def __init__(self, values):
@@ -76,38 +95,29 @@
76 def fake_instance_type_get_by_name(context, name):95 def fake_instance_type_get_by_name(context, name):
77 return INSTANCE_TYPES[name]96 return INSTANCE_TYPES[name]
7897
79 def fake_instance_create(values):
80 """ Stubs out the db.instance_create method """
81
82 type_data = INSTANCE_TYPES[values['instance_type']]
83
84 base_options = {
85 'name': values['name'],
86 'id': values['id'],
87 'reservation_id': utils.generate_uid('r'),
88 'image_id': values['image_id'],
89 'kernel_id': values['kernel_id'],
90 'ramdisk_id': values['ramdisk_id'],
91 'state_description': 'scheduling',
92 'user_id': values['user_id'],
93 'project_id': values['project_id'],
94 'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
95 'instance_type': values['instance_type'],
96 'memory_mb': type_data['memory_mb'],
97 'mac_address': values['mac_address'],
98 'vcpus': type_data['vcpus'],
99 'local_gb': type_data['local_gb'],
100 'os_type': values['os_type']}
101
102 return FakeModel(base_options)
103
104 def fake_network_get_by_instance(context, instance_id):98 def fake_network_get_by_instance(context, instance_id):
105 fields = {99 return FakeModel(network_fields)
106 'bridge': 'xenbr0',100
107 }101 def fake_network_get_all_by_instance(context, instance_id):
108 return FakeModel(fields)102 return [FakeModel(network_fields)]
109103
110 stubs.Set(db, 'instance_create', fake_instance_create)104 def fake_instance_get_fixed_address(context, instance_id):
105 return FakeModel(fixed_ip_fields).address
106
107 def fake_instance_get_fixed_address_v6(context, instance_id):
108 return FakeModel(fixed_ip_fields).address
109
110 def fake_fixed_ip_get_all_by_instance(context, instance_id):
111 return [FakeModel(fixed_ip_fields)]
112
111 stubs.Set(db, 'network_get_by_instance', fake_network_get_by_instance)113 stubs.Set(db, 'network_get_by_instance', fake_network_get_by_instance)
112 stubs.Set(db, 'instance_type_get_all', fake_instance_type_get_all)114 stubs.Set(db, 'instance_type_get_all', fake_instance_type_get_all)
113 stubs.Set(db, 'instance_type_get_by_name', fake_instance_type_get_by_name)115 stubs.Set(db, 'instance_type_get_by_name', fake_instance_type_get_by_name)
116 stubs.Set(db, 'instance_get_fixed_address',
117 fake_instance_get_fixed_address)
118 stubs.Set(db, 'instance_get_fixed_address_v6',
119 fake_instance_get_fixed_address_v6)
120 stubs.Set(db, 'network_get_all_by_instance',
121 fake_network_get_all_by_instance)
122 stubs.Set(db, 'fixed_ip_get_all_by_instance',
123 fake_fixed_ip_get_all_by_instance)
114124
=== added file 'nova/tests/fake_utils.py'
--- nova/tests/fake_utils.py 1970-01-01 00:00:00 +0000
+++ nova/tests/fake_utils.py 2011-03-25 13:43:59 +0000
@@ -0,0 +1,106 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Citrix Systems, Inc.
4#
5# Licensed under the Apache License, Version 2.0 (the "License"); you may
6# not use this file except in compliance with the License. You may obtain
7# a copy of the License at
8#
9# http://www.apache.org/licenses/LICENSE-2.0
10#
11# Unless required by applicable law or agreed to in writing, software
12# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
13# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
14# License for the specific language governing permissions and limitations
15# under the License.
16
17"""This module stubs out functions in nova.utils
18"""
19
20import re
21import types
22
23from eventlet import greenthread
24
25from nova import exception
26from nova import log as logging
27from nova import utils
28
29LOG = logging.getLogger('nova.tests.fake_utils')
30
31_fake_execute_repliers = []
32_fake_execute_log = []
33
34
35def fake_execute_get_log():
36 return _fake_execute_log
37
38
39def fake_execute_clear_log():
40 global _fake_execute_log
41 _fake_execute_log = []
42
43
44def fake_execute_set_repliers(repliers):
45 """Allows the client to configure replies to commands"""
46 global _fake_execute_repliers
47 _fake_execute_repliers = repliers
48
49
50def fake_execute_default_reply_handler(*ignore_args, **ignore_kwargs):
51 """A reply handler for commands that haven't been added to the reply
52 list. Returns empty strings for stdout and stderr
53 """
54 return '', ''
55
56
57def fake_execute(*cmd_parts, **kwargs):
58 """This function stubs out execute, optionally executing
59 a preconfigured function to return expected data
60 """
61 global _fake_execute_repliers
62
63 process_input = kwargs.get('process_input', None)
64 addl_env = kwargs.get('addl_env', None)
65 check_exit_code = kwargs.get('check_exit_code', 0)
66 cmd_str = ' '.join(str(part) for part in cmd_parts)
67
68 LOG.debug(_("Faking execution of cmd (subprocess): %s"), cmd_str)
69 _fake_execute_log.append(cmd_str)
70
71 reply_handler = fake_execute_default_reply_handler
72
73 for fake_replier in _fake_execute_repliers:
74 if re.match(fake_replier[0], cmd_str):
75 reply_handler = fake_replier[1]
76 LOG.debug(_('Faked command matched %s') % fake_replier[0])
77 break
78
79 if isinstance(reply_handler, basestring):
80 # If the reply handler is a string, return it as stdout
81 reply = reply_handler, ''
82 else:
83 try:
84 # Alternative is a function, so call it
85 reply = reply_handler(cmd_parts,
86 process_input=process_input,
87 addl_env=addl_env,
88 check_exit_code=check_exit_code)
89 except exception.ProcessExecutionError as e:
90 LOG.debug(_('Faked command raised an exception %s' % str(e)))
91 raise
92
93 stdout = reply[0]
94 stderr = reply[1]
95 LOG.debug(_("Reply to faked command is stdout='%(stdout)s' "
96 "stderr='%(stderr)s'") % locals())
97
98 # Replicate the sleep call in the real function
99 greenthread.sleep(0)
100 return reply
101
102
103def stub_out_utils_execute(stubs):
104 fake_execute_set_repliers([])
105 fake_execute_clear_log()
106 stubs.Set(utils, 'execute', fake_execute)
0107
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-03-21 18:21:26 +0000
+++ nova/tests/test_xenapi.py 2011-03-25 13:43:59 +0000
@@ -19,11 +19,15 @@
19"""19"""
2020
21import functools21import functools
22import os
23import re
22import stubout24import stubout
25import ast
2326
24from nova import db27from nova import db
25from nova import context28from nova import context
26from nova import flags29from nova import flags
30from nova import log as logging
27from nova import test31from nova import test
28from nova import utils32from nova import utils
29from nova.auth import manager33from nova.auth import manager
@@ -38,6 +42,9 @@
38from nova.tests.db import fakes as db_fakes42from nova.tests.db import fakes as db_fakes
39from nova.tests.xenapi import stubs43from nova.tests.xenapi import stubs
40from nova.tests.glance import stubs as glance_stubs44from nova.tests.glance import stubs as glance_stubs
45from nova.tests import fake_utils
46
47LOG = logging.getLogger('nova.tests.test_xenapi')
4148
42FLAGS = flags.FLAGS49FLAGS = flags.FLAGS
4350
@@ -64,13 +71,14 @@
64 def setUp(self):71 def setUp(self):
65 super(XenAPIVolumeTestCase, self).setUp()72 super(XenAPIVolumeTestCase, self).setUp()
66 self.stubs = stubout.StubOutForTesting()73 self.stubs = stubout.StubOutForTesting()
74 self.context = context.RequestContext('fake', 'fake', False)
67 FLAGS.target_host = '127.0.0.1'75 FLAGS.target_host = '127.0.0.1'
68 FLAGS.xenapi_connection_url = 'test_url'76 FLAGS.xenapi_connection_url = 'test_url'
69 FLAGS.xenapi_connection_password = 'test_pass'77 FLAGS.xenapi_connection_password = 'test_pass'
70 db_fakes.stub_out_db_instance_api(self.stubs)78 db_fakes.stub_out_db_instance_api(self.stubs)
71 stubs.stub_out_get_target(self.stubs)79 stubs.stub_out_get_target(self.stubs)
72 xenapi_fake.reset()80 xenapi_fake.reset()
73 self.values = {'name': 1, 'id': 1,81 self.values = {'id': 1,
74 'project_id': 'fake',82 'project_id': 'fake',
75 'user_id': 'fake',83 'user_id': 'fake',
76 'image_id': 1,84 'image_id': 1,
@@ -90,7 +98,7 @@
90 vol['availability_zone'] = FLAGS.storage_availability_zone98 vol['availability_zone'] = FLAGS.storage_availability_zone
91 vol['status'] = "creating"99 vol['status'] = "creating"
92 vol['attach_status'] = "detached"100 vol['attach_status'] = "detached"
93 return db.volume_create(context.get_admin_context(), vol)101 return db.volume_create(self.context, vol)
94102
95 def test_create_iscsi_storage(self):103 def test_create_iscsi_storage(self):
96 """ This shows how to test helper classes' methods """104 """ This shows how to test helper classes' methods """
@@ -126,7 +134,7 @@
126 stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)134 stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
127 conn = xenapi_conn.get_connection(False)135 conn = xenapi_conn.get_connection(False)
128 volume = self._create_volume()136 volume = self._create_volume()
129 instance = db.instance_create(self.values)137 instance = db.instance_create(self.context, self.values)
130 vm = xenapi_fake.create_vm(instance.name, 'Running')138 vm = xenapi_fake.create_vm(instance.name, 'Running')
131 result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')139 result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
132140
@@ -146,7 +154,7 @@
146 stubs.FakeSessionForVolumeFailedTests)154 stubs.FakeSessionForVolumeFailedTests)
147 conn = xenapi_conn.get_connection(False)155 conn = xenapi_conn.get_connection(False)
148 volume = self._create_volume()156 volume = self._create_volume()
149 instance = db.instance_create(self.values)157 instance = db.instance_create(self.context, self.values)
150 xenapi_fake.create_vm(instance.name, 'Running')158 xenapi_fake.create_vm(instance.name, 'Running')
151 self.assertRaises(Exception,159 self.assertRaises(Exception,
152 conn.attach_volume,160 conn.attach_volume,
@@ -175,8 +183,9 @@
175 self.project = self.manager.create_project('fake', 'fake', 'fake')183 self.project = self.manager.create_project('fake', 'fake', 'fake')
176 self.network = utils.import_object(FLAGS.network_manager)184 self.network = utils.import_object(FLAGS.network_manager)
177 self.stubs = stubout.StubOutForTesting()185 self.stubs = stubout.StubOutForTesting()
178 FLAGS.xenapi_connection_url = 'test_url'186 self.flags(xenapi_connection_url='test_url',
179 FLAGS.xenapi_connection_password = 'test_pass'187 xenapi_connection_password='test_pass',
188 instance_name_template='%d')
180 xenapi_fake.reset()189 xenapi_fake.reset()
181 xenapi_fake.create_local_srs()190 xenapi_fake.create_local_srs()
182 db_fakes.stub_out_db_instance_api(self.stubs)191 db_fakes.stub_out_db_instance_api(self.stubs)
@@ -189,6 +198,8 @@
189 stubs.stub_out_vm_methods(self.stubs)198 stubs.stub_out_vm_methods(self.stubs)
190 glance_stubs.stubout_glance_client(self.stubs,199 glance_stubs.stubout_glance_client(self.stubs,
191 glance_stubs.FakeGlance)200 glance_stubs.FakeGlance)
201 fake_utils.stub_out_utils_execute(self.stubs)
202 self.context = context.RequestContext('fake', 'fake', False)
192 self.conn = xenapi_conn.get_connection(False)203 self.conn = xenapi_conn.get_connection(False)
193204
194 def test_list_instances_0(self):205 def test_list_instances_0(self):
@@ -213,7 +224,7 @@
213 if not vm_rec["is_control_domain"]:224 if not vm_rec["is_control_domain"]:
214 vm_labels.append(vm_rec["name_label"])225 vm_labels.append(vm_rec["name_label"])
215226
216 self.assertEquals(vm_labels, [1])227 self.assertEquals(vm_labels, ['1'])
217228
218 def ensure_vbd_was_torn_down():229 def ensure_vbd_was_torn_down():
219 vbd_labels = []230 vbd_labels = []
@@ -221,7 +232,7 @@
221 vbd_rec = xenapi_fake.get_record('VBD', vbd_ref)232 vbd_rec = xenapi_fake.get_record('VBD', vbd_ref)
222 vbd_labels.append(vbd_rec["vm_name_label"])233 vbd_labels.append(vbd_rec["vm_name_label"])
223234
224 self.assertEquals(vbd_labels, [1])235 self.assertEquals(vbd_labels, ['1'])
225236
226 def ensure_vdi_was_torn_down():237 def ensure_vdi_was_torn_down():
227 for vdi_ref in xenapi_fake.get_all('VDI'):238 for vdi_ref in xenapi_fake.get_all('VDI'):
@@ -238,11 +249,10 @@
238249
239 def create_vm_record(self, conn, os_type):250 def create_vm_record(self, conn, os_type):
240 instances = conn.list_instances()251 instances = conn.list_instances()
241 self.assertEquals(instances, [1])252 self.assertEquals(instances, ['1'])
242253
243 # Get Nova record for VM254 # Get Nova record for VM
244 vm_info = conn.get_info(1)255 vm_info = conn.get_info(1)
245
246 # Get XenAPI record for VM256 # Get XenAPI record for VM
247 vms = [rec for ref, rec257 vms = [rec for ref, rec
248 in xenapi_fake.get_all_records('VM').iteritems()258 in xenapi_fake.get_all_records('VM').iteritems()
@@ -251,7 +261,7 @@
251 self.vm_info = vm_info261 self.vm_info = vm_info
252 self.vm = vm262 self.vm = vm
253263
254 def check_vm_record(self, conn):264 def check_vm_record(self, conn, check_injection=False):
255 # Check that m1.large above turned into the right thing.265 # Check that m1.large above turned into the right thing.
256 instance_type = db.instance_type_get_by_name(conn, 'm1.large')266 instance_type = db.instance_type_get_by_name(conn, 'm1.large')
257 mem_kib = long(instance_type['memory_mb']) << 10267 mem_kib = long(instance_type['memory_mb']) << 10
@@ -271,6 +281,25 @@
271 # Check that the VM is running according to XenAPI.281 # Check that the VM is running according to XenAPI.
272 self.assertEquals(self.vm['power_state'], 'Running')282 self.assertEquals(self.vm['power_state'], 'Running')
273283
284 if check_injection:
285 xenstore_data = self.vm['xenstore_data']
286 key = 'vm-data/networking/aabbccddeeff'
287 xenstore_value = xenstore_data[key]
288 tcpip_data = ast.literal_eval(xenstore_value)
289 self.assertEquals(tcpip_data, {
290 'label': 'test_network',
291 'broadcast': '10.0.0.255',
292 'ips': [{'ip': '10.0.0.3',
293 'netmask':'255.255.255.0',
294 'enabled':'1'}],
295 'ip6s': [{'ip': 'fe80::a8bb:ccff:fedd:eeff',
296 'netmask': '120',
297 'enabled': '1',
298 'gateway': 'fe80::a00:1'}],
299 'mac': 'aa:bb:cc:dd:ee:ff',
300 'dns': ['10.0.0.2'],
301 'gateway': '10.0.0.1'})
302
274 def check_vm_params_for_windows(self):303 def check_vm_params_for_windows(self):
275 self.assertEquals(self.vm['platform']['nx'], 'true')304 self.assertEquals(self.vm['platform']['nx'], 'true')
276 self.assertEquals(self.vm['HVM_boot_params'], {'order': 'dc'})305 self.assertEquals(self.vm['HVM_boot_params'], {'order': 'dc'})
@@ -304,10 +333,10 @@
304 self.assertEquals(self.vm['HVM_boot_policy'], '')333 self.assertEquals(self.vm['HVM_boot_policy'], '')
305334
306 def _test_spawn(self, image_id, kernel_id, ramdisk_id,335 def _test_spawn(self, image_id, kernel_id, ramdisk_id,
307 instance_type="m1.large", os_type="linux"):336 instance_type="m1.large", os_type="linux",
308 stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)337 check_injection=False):
309 values = {'name': 1,338 stubs.stubout_loopingcall_start(self.stubs)
310 'id': 1,339 values = {'id': 1,
311 'project_id': self.project.id,340 'project_id': self.project.id,
312 'user_id': self.user.id,341 'user_id': self.user.id,
313 'image_id': image_id,342 'image_id': image_id,
@@ -316,12 +345,10 @@
316 'instance_type': instance_type,345 'instance_type': instance_type,
317 'mac_address': 'aa:bb:cc:dd:ee:ff',346 'mac_address': 'aa:bb:cc:dd:ee:ff',
318 'os_type': os_type}347 'os_type': os_type}
319348 instance = db.instance_create(self.context, values)
320 conn = xenapi_conn.get_connection(False)349 self.conn.spawn(instance)
321 instance = db.instance_create(values)350 self.create_vm_record(self.conn, os_type)
322 conn.spawn(instance)351 self.check_vm_record(self.conn, check_injection)
323 self.create_vm_record(conn, os_type)
324 self.check_vm_record(conn)
325352
326 def test_spawn_not_enough_memory(self):353 def test_spawn_not_enough_memory(self):
327 FLAGS.xenapi_image_service = 'glance'354 FLAGS.xenapi_image_service = 'glance'
@@ -362,6 +389,85 @@
362 glance_stubs.FakeGlance.IMAGE_RAMDISK)389 glance_stubs.FakeGlance.IMAGE_RAMDISK)
363 self.check_vm_params_for_linux_with_external_kernel()390 self.check_vm_params_for_linux_with_external_kernel()
364391
392 def test_spawn_netinject_file(self):
393 FLAGS.xenapi_image_service = 'glance'
394 db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
395
396 self._tee_executed = False
397
398 def _tee_handler(cmd, **kwargs):
399 input = kwargs.get('process_input', None)
400 self.assertNotEqual(input, None)
401 config = [line.strip() for line in input.split("\n")]
402 # Find the start of eth0 configuration and check it
403 index = config.index('auto eth0')
404 self.assertEquals(config[index + 1:index + 8], [
405 'iface eth0 inet static',
406 'address 10.0.0.3',
407 'netmask 255.255.255.0',
408 'broadcast 10.0.0.255',
409 'gateway 10.0.0.1',
410 'dns-nameservers 10.0.0.2',
411 ''])
412 self._tee_executed = True
413 return '', ''
414
415 fake_utils.fake_execute_set_repliers([
416 # Capture the sudo tee .../etc/network/interfaces command
417 (r'(sudo\s+)?tee.*interfaces', _tee_handler),
418 ])
419 FLAGS.xenapi_image_service = 'glance'
420 self._test_spawn(glance_stubs.FakeGlance.IMAGE_MACHINE,
421 glance_stubs.FakeGlance.IMAGE_KERNEL,
422 glance_stubs.FakeGlance.IMAGE_RAMDISK,
423 check_injection=True)
424 self.assertTrue(self._tee_executed)
425
426 def test_spawn_netinject_xenstore(self):
427 FLAGS.xenapi_image_service = 'glance'
428 db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
429
430 self._tee_executed = False
431
432 def _mount_handler(cmd, *ignore_args, **ignore_kwargs):
433 # When mounting, create real files under the mountpoint to simulate
434 # files in the mounted filesystem
435
436 # mount point will be the last item of the command list
437 self._tmpdir = cmd[len(cmd) - 1]
438 LOG.debug(_('Creating files in %s to simulate guest agent' %
439 self._tmpdir))
440 os.makedirs(os.path.join(self._tmpdir, 'usr', 'sbin'))
441 # Touch the file using open
442 open(os.path.join(self._tmpdir, 'usr', 'sbin',
443 'xe-update-networking'), 'w').close()
444 return '', ''
445
446 def _umount_handler(cmd, *ignore_args, **ignore_kwargs):
447 # Umount would normall make files in the m,ounted filesystem
448 # disappear, so do that here
449 LOG.debug(_('Removing simulated guest agent files in %s' %
450 self._tmpdir))
451 os.remove(os.path.join(self._tmpdir, 'usr', 'sbin',
452 'xe-update-networking'))
453 os.rmdir(os.path.join(self._tmpdir, 'usr', 'sbin'))
454 os.rmdir(os.path.join(self._tmpdir, 'usr'))
455 return '', ''
456
457 def _tee_handler(cmd, *ignore_args, **ignore_kwargs):
458 self._tee_executed = True
459 return '', ''
460
461 fake_utils.fake_execute_set_repliers([
462 (r'(sudo\s+)?mount', _mount_handler),
463 (r'(sudo\s+)?umount', _umount_handler),
464 (r'(sudo\s+)?tee.*interfaces', _tee_handler)])
465 self._test_spawn(1, 2, 3, check_injection=True)
466
467 # tee must not run in this case, where an injection-capable
468 # guest agent is detected
469 self.assertFalse(self._tee_executed)
470
365 def test_spawn_with_network_qos(self):471 def test_spawn_with_network_qos(self):
366 self._create_instance()472 self._create_instance()
367 for vif_ref in xenapi_fake.get_all('VIF'):473 for vif_ref in xenapi_fake.get_all('VIF'):
@@ -371,6 +477,7 @@
371 str(4 * 1024))477 str(4 * 1024))
372478
373 def test_rescue(self):479 def test_rescue(self):
480 self.flags(xenapi_inject_image=False)
374 instance = self._create_instance()481 instance = self._create_instance()
375 conn = xenapi_conn.get_connection(False)482 conn = xenapi_conn.get_connection(False)
376 conn.rescue(instance, None)483 conn.rescue(instance, None)
@@ -391,8 +498,8 @@
391498
392 def _create_instance(self):499 def _create_instance(self):
393 """Creates and spawns a test instance"""500 """Creates and spawns a test instance"""
501 stubs.stubout_loopingcall_start(self.stubs)
394 values = {502 values = {
395 'name': 1,
396 'id': 1,503 'id': 1,
397 'project_id': self.project.id,504 'project_id': self.project.id,
398 'user_id': self.user.id,505 'user_id': self.user.id,
@@ -402,7 +509,7 @@
402 'instance_type': 'm1.large',509 'instance_type': 'm1.large',
403 'mac_address': 'aa:bb:cc:dd:ee:ff',510 'mac_address': 'aa:bb:cc:dd:ee:ff',
404 'os_type': 'linux'}511 'os_type': 'linux'}
405 instance = db.instance_create(values)512 instance = db.instance_create(self.context, values)
406 self.conn.spawn(instance)513 self.conn.spawn(instance)
407 return instance514 return instance
408515
@@ -447,21 +554,26 @@
447 db_fakes.stub_out_db_instance_api(self.stubs)554 db_fakes.stub_out_db_instance_api(self.stubs)
448 stubs.stub_out_get_target(self.stubs)555 stubs.stub_out_get_target(self.stubs)
449 xenapi_fake.reset()556 xenapi_fake.reset()
557 xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
450 self.manager = manager.AuthManager()558 self.manager = manager.AuthManager()
451 self.user = self.manager.create_user('fake', 'fake', 'fake',559 self.user = self.manager.create_user('fake', 'fake', 'fake',
452 admin=True)560 admin=True)
453 self.project = self.manager.create_project('fake', 'fake', 'fake')561 self.project = self.manager.create_project('fake', 'fake', 'fake')
454 self.values = {'name': 1, 'id': 1,562 self.context = context.RequestContext('fake', 'fake', False)
563 self.values = {'id': 1,
455 'project_id': self.project.id,564 'project_id': self.project.id,
456 'user_id': self.user.id,565 'user_id': self.user.id,
457 'image_id': 1,566 'image_id': 1,
458 'kernel_id': None,567 'kernel_id': None,
459 'ramdisk_id': None,568 'ramdisk_id': None,
569 'local_gb': 5,
460 'instance_type': 'm1.large',570 'instance_type': 'm1.large',
461 'mac_address': 'aa:bb:cc:dd:ee:ff',571 'mac_address': 'aa:bb:cc:dd:ee:ff',
462 'os_type': 'linux'}572 'os_type': 'linux'}
463573
574 fake_utils.stub_out_utils_execute(self.stubs)
464 stubs.stub_out_migration_methods(self.stubs)575 stubs.stub_out_migration_methods(self.stubs)
576 stubs.stubout_get_this_vm_uuid(self.stubs)
465 glance_stubs.stubout_glance_client(self.stubs,577 glance_stubs.stubout_glance_client(self.stubs,
466 glance_stubs.FakeGlance)578 glance_stubs.FakeGlance)
467579
@@ -472,14 +584,15 @@
472 self.stubs.UnsetAll()584 self.stubs.UnsetAll()
473585
474 def test_migrate_disk_and_power_off(self):586 def test_migrate_disk_and_power_off(self):
475 instance = db.instance_create(self.values)587 instance = db.instance_create(self.context, self.values)
476 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)588 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
477 conn = xenapi_conn.get_connection(False)589 conn = xenapi_conn.get_connection(False)
478 conn.migrate_disk_and_power_off(instance, '127.0.0.1')590 conn.migrate_disk_and_power_off(instance, '127.0.0.1')
479591
480 def test_finish_resize(self):592 def test_finish_resize(self):
481 instance = db.instance_create(self.values)593 instance = db.instance_create(self.context, self.values)
482 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)594 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
595 stubs.stubout_loopingcall_start(self.stubs)
483 conn = xenapi_conn.get_connection(False)596 conn = xenapi_conn.get_connection(False)
484 conn.finish_resize(instance, dict(base_copy='hurr', cow='durr'))597 conn.finish_resize(instance, dict(base_copy='hurr', cow='durr'))
485598
486599
=== modified file 'nova/tests/xenapi/stubs.py'
--- nova/tests/xenapi/stubs.py 2011-03-24 04:09:51 +0000
+++ nova/tests/xenapi/stubs.py 2011-03-25 13:43:59 +0000
@@ -21,6 +21,7 @@
21from nova.virt.xenapi import volume_utils21from nova.virt.xenapi import volume_utils
22from nova.virt.xenapi import vm_utils22from nova.virt.xenapi import vm_utils
23from nova.virt.xenapi import vmops23from nova.virt.xenapi import vmops
24from nova import utils
2425
2526
26def stubout_instance_snapshot(stubs):27def stubout_instance_snapshot(stubs):
@@ -137,14 +138,17 @@
137 stubs.Set(vm_utils, '_is_vdi_pv', f)138 stubs.Set(vm_utils, '_is_vdi_pv', f)
138139
139140
141def stubout_loopingcall_start(stubs):
142 def fake_start(self, interval, now=True):
143 self.f(*self.args, **self.kw)
144 stubs.Set(utils.LoopingCall, 'start', fake_start)
145
146
140class FakeSessionForVMTests(fake.SessionBase):147class FakeSessionForVMTests(fake.SessionBase):
141 """ Stubs out a XenAPISession for VM tests """148 """ Stubs out a XenAPISession for VM tests """
142 def __init__(self, uri):149 def __init__(self, uri):
143 super(FakeSessionForVMTests, self).__init__(uri)150 super(FakeSessionForVMTests, self).__init__(uri)
144151
145 def network_get_all_records_where(self, _1, _2):
146 return self.xenapi.network.get_all_records()
147
148 def host_call_plugin(self, _1, _2, _3, _4, _5):152 def host_call_plugin(self, _1, _2, _3, _4, _5):
149 sr_ref = fake.get_all('SR')[0]153 sr_ref = fake.get_all('SR')[0]
150 vdi_ref = fake.create_vdi('', False, sr_ref, False)154 vdi_ref = fake.create_vdi('', False, sr_ref, False)
@@ -196,7 +200,7 @@
196 pass200 pass
197201
198 def fake_spawn_rescue(self, inst):202 def fake_spawn_rescue(self, inst):
199 pass203 inst._rescue = False
200204
201 stubs.Set(vmops.VMOps, "_shutdown", fake_shutdown)205 stubs.Set(vmops.VMOps, "_shutdown", fake_shutdown)
202 stubs.Set(vmops.VMOps, "_acquire_bootlock", fake_acquire_bootlock)206 stubs.Set(vmops.VMOps, "_acquire_bootlock", fake_acquire_bootlock)
203207
=== modified file 'nova/virt/disk.py'
--- nova/virt/disk.py	2011-03-14 17:59:41 +0000
+++ nova/virt/disk.py	2011-03-25 13:43:59 +0000
@@ -26,6 +26,8 @@
 import tempfile
 import time
 
+from nova import context
+from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -38,6 +40,9 @@
                      'minimum size in bytes of root partition')
 flags.DEFINE_integer('block_size', 1024 * 1024 * 256,
                      'block_size to use for dd')
+flags.DEFINE_string('injected_network_template',
+                    utils.abspath('virt/interfaces.template'),
+                    'Template file for injected network')
 flags.DEFINE_integer('timeout_nbd', 10,
                      'time to wait for a NBD device coming up')
 flags.DEFINE_integer('max_nbd_devices', 16,
@@ -97,11 +102,7 @@
                               % err)
 
     try:
-        if key:
-            # inject key file
-            _inject_key_into_fs(key, tmpdir)
-        if net:
-            _inject_net_into_fs(net, tmpdir)
+        inject_data_into_fs(tmpdir, key, net, utils.execute)
     finally:
         # unmount device
         utils.execute('sudo', 'umount', mapped_device)
@@ -164,7 +165,18 @@
     _DEVICES.append(device)
 
 
-def _inject_key_into_fs(key, fs):
+def inject_data_into_fs(fs, key, net, execute):
+    """Injects data into a filesystem already mounted by the caller.
+    Virt connections can call this directly if they mount their fs
+    in a different way to inject_data
+    """
+    if key:
+        _inject_key_into_fs(key, fs, execute=execute)
+    if net:
+        _inject_net_into_fs(net, fs, execute=execute)
+
+
+def _inject_key_into_fs(key, fs, execute=None):
     """Add the given public ssh key to root's authorized_keys.
 
     key is an ssh key string.
@@ -179,7 +191,7 @@
                   process_input='\n' + key.strip() + '\n')
 
 
-def _inject_net_into_fs(net, fs):
+def _inject_net_into_fs(net, fs, execute=None):
     """Inject /etc/network/interfaces into the filesystem rooted at fs.
 
     net is the contents of /etc/network/interfaces.
 
=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py	2011-03-24 21:26:11 +0000
+++ nova/virt/libvirt_conn.py	2011-03-25 13:43:59 +0000
@@ -76,9 +76,7 @@
 flags.DEFINE_string('rescue_image_id', 'ami-rescue', 'Rescue ami image')
 flags.DEFINE_string('rescue_kernel_id', 'aki-rescue', 'Rescue aki image')
 flags.DEFINE_string('rescue_ramdisk_id', 'ari-rescue', 'Rescue ari image')
-flags.DEFINE_string('injected_network_template',
-                    utils.abspath('virt/interfaces.template'),
-                    'Template file for injected network')
+
 flags.DEFINE_string('libvirt_xml_template',
                     utils.abspath('virt/libvirt.xml.template'),
                     'Libvirt XML Template')
 
=== modified file 'nova/virt/xenapi/fake.py'
--- nova/virt/xenapi/fake.py	2011-03-03 19:13:15 +0000
+++ nova/virt/xenapi/fake.py	2011-03-25 13:43:59 +0000
@@ -162,6 +162,12 @@
     vbd_rec['vm_name_label'] = vm_name_label
 
 
+def after_VM_create(vm_ref, vm_rec):
+    """Create read-only fields in the VM record."""
+    if 'is_control_domain' not in vm_rec:
+        vm_rec['is_control_domain'] = False
+
+
 def create_pbd(config, host_ref, sr_ref, attached):
     return _create_object('PBD', {
         'device-config': config,
@@ -286,6 +292,25 @@
         rec['currently_attached'] = False
         rec['device'] = ''
 
+    def VM_get_xenstore_data(self, _1, vm_ref):
+        return _db_content['VM'][vm_ref].get('xenstore_data', '')
+
+    def VM_remove_from_xenstore_data(self, _1, vm_ref, key):
+        db_ref = _db_content['VM'][vm_ref]
+        if not 'xenstore_data' in db_ref:
+            return
+        db_ref['xenstore_data'][key] = None
+
+    def network_get_all_records_where(self, _1, _2):
+        # TODO (salvatore-orlando): filter table on _2
+        return _db_content['network']
+
+    def VM_add_to_xenstore_data(self, _1, vm_ref, key, value):
+        db_ref = _db_content['VM'][vm_ref]
+        if not 'xenstore_data' in db_ref:
+            db_ref['xenstore_data'] = {}
+        db_ref['xenstore_data'][key] = value
+
     def host_compute_free_memory(self, _1, ref):
         # Always return 12GB available
         return 12 * 1024 * 1024 * 1024
@@ -376,7 +401,6 @@
     def _getter(self, name, params):
         self._check_session(params)
         (cls, func) = name.split('.')
-
         if func == 'get_all':
             self._check_arg_count(params, 1)
             return get_all(cls)
@@ -399,10 +423,11 @@
         if len(params) == 2:
             field = func[len('get_'):]
             ref = params[1]
-
-            if (ref in _db_content[cls] and
-                    field in _db_content[cls][ref]):
-                return _db_content[cls][ref][field]
+            if (ref in _db_content[cls]):
+                if (field in _db_content[cls][ref]):
+                    return _db_content[cls][ref][field]
+                else:
+                    raise Failure(['HANDLE_INVALID', cls, ref])
 
         LOG.debug(_('Raising NotImplemented'))
         raise NotImplementedError(
@@ -476,7 +501,7 @@
     def _check_session(self, params):
         if (self._session is None or
                 self._session not in _db_content['session']):
             raise Failure(['HANDLE_INVALID', 'session', self._session])
         if len(params) == 0 or params[0] != self._session:
             LOG.debug(_('Raising NotImplemented'))
             raise NotImplementedError('Call to XenAPI without using .xenapi')
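The fake's new xenstore accessors create the per-VM `xenstore_data` dict lazily on first write, and reads fall back to `''` when nothing was ever written. A dict-backed stand-in showing those semantics (the flat `_db_content` layout here is a simplified stand-in for the fake's tables, and the refs/keys are made up):

```python
# Minimal stand-in for the fake XenAPI xenstore_data accessors: lazy
# creation on write, empty-string fallback on read.
_db_content = {'VM': {'OpaqueRef:1': {}, 'OpaqueRef:2': {}}}

def VM_add_to_xenstore_data(vm_ref, key, value):
    db_ref = _db_content['VM'][vm_ref]
    if 'xenstore_data' not in db_ref:
        # First write creates the dict, as in the diff
        db_ref['xenstore_data'] = {}
    db_ref['xenstore_data'][key] = value

def VM_get_xenstore_data(vm_ref):
    # Read falls back to '' for a VM that never had xenstore data
    return _db_content['VM'][vm_ref].get('xenstore_data', '')

VM_add_to_xenstore_data('OpaqueRef:1', 'vm-data/hostname', 'test-vm')
```

The `''` fallback (rather than `{}`) mirrors the diff as written; callers that expect a dict need to handle that.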
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py	2011-03-24 22:31:39 +0000
+++ nova/virt/xenapi/vm_utils.py	2011-03-25 13:43:59 +0000
@@ -22,6 +22,7 @@
 import os
 import pickle
 import re
+import tempfile
 import time
 import urllib
 import uuid
@@ -29,6 +30,8 @@
 
 from eventlet import event
 import glance.client
+from nova import context
+from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -36,6 +39,7 @@
 from nova.auth.manager import AuthManager
 from nova.compute import instance_types
 from nova.compute import power_state
+from nova.virt import disk
 from nova.virt import images
 from nova.virt.xenapi import HelperBase
 from nova.virt.xenapi.volume_utils import StorageError
@@ -670,6 +674,23 @@
         return None
 
     @classmethod
+    def preconfigure_instance(cls, session, instance, vdi_ref, network_info):
+        """Makes alterations to the image before launching as part of spawn.
+        """
+
+        # As mounting the image VDI is expensive, we only want to do it once,
+        # if at all, so determine whether it's required first, and then do
+        # everything
+        mount_required = False
+        key, net = _prepare_injectables(instance, network_info)
+        mount_required = key or net
+        if not mount_required:
+            return
+
+        with_vdi_attached_here(session, vdi_ref, False,
+                               lambda dev: _mounted_processing(dev, key, net))
+
+    @classmethod
     def lookup_kernel_ramdisk(cls, session, vm):
         vm_rec = session.get_xenapi().VM.get_record(vm)
         if 'PV_kernel' in vm_rec and 'PV_ramdisk' in vm_rec:
@@ -927,6 +948,7 @@
                 e.details[0] == 'DEVICE_DETACH_REJECTED'):
             LOG.debug(_('VBD.unplug rejected: retrying...'))
             time.sleep(1)
+            LOG.debug(_('Not sleeping anymore!'))
         elif (len(e.details) > 0 and
               e.details[0] == 'DEVICE_ALREADY_DETACHED'):
             LOG.debug(_('VBD.unplug successful eventually.'))
@@ -1002,3 +1024,114 @@
 def get_name_label_for_image(image):
     # TODO(sirp): This should eventually be the URI for the Glance image
     return _('Glance image %s') % image
+
+
+def _mount_filesystem(dev_path, dir):
+    """mounts the device specified by dev_path in dir"""
+    try:
+        out, err = utils.execute('sudo', 'mount',
+                                 '-t', 'ext2,ext3',
+                                 dev_path, dir)
+    except exception.ProcessExecutionError as e:
+        err = str(e)
+    return err
+
+
+def _find_guest_agent(base_dir, agent_rel_path):
+    """
+    tries to locate a guest agent at the path
+    specified by agent_rel_path
+    """
+    agent_path = os.path.join(base_dir, agent_rel_path)
+    if os.path.isfile(agent_path):
+        # The presence of the guest agent
+        # file indicates that this instance can
+        # reconfigure the network from xenstore data,
+        # so manipulation of files in /etc is not
+        # required
+        LOG.info(_('XenServer tools installed in this '
+                   'image are capable of network injection. '
+                   'Networking files will not be '
+                   'manipulated'))
+        return True
+    xe_daemon_filename = os.path.join(base_dir,
+                                      'usr', 'sbin', 'xe-daemon')
+    if os.path.isfile(xe_daemon_filename):
+        LOG.info(_('XenServer tools are present '
+                   'in this image but are not capable '
+                   'of network injection'))
+    else:
+        LOG.info(_('XenServer tools are not '
+                   'installed in this image'))
+    return False
+
+
+def _mounted_processing(device, key, net):
+    """Callback which runs with the image VDI attached"""
+
+    dev_path = '/dev/' + device + '1'  # NB: Partition 1 hardcoded
+    tmpdir = tempfile.mkdtemp()
+    try:
+        # Mount only Linux filesystems, to avoid disturbing NTFS images
+        err = _mount_filesystem(dev_path, tmpdir)
+        if not err:
+            try:
+                # This try block ensures that the umount occurs
+                if not _find_guest_agent(tmpdir, FLAGS.xenapi_agent_path):
+                    LOG.info(_('Manipulating interface files '
+                               'directly'))
+                    disk.inject_data_into_fs(tmpdir, key, net,
+                                             utils.execute)
+            finally:
+                utils.execute('sudo', 'umount', dev_path)
+        else:
+            LOG.info(_('Failed to mount filesystem (expected for '
+                       'non-linux instances): %s') % err)
+    finally:
+        # remove temporary directory
+        os.rmdir(tmpdir)
+
+
+def _prepare_injectables(inst, networks_info):
+    """
+    prepares the ssh key and the network configuration file to be
+    injected into the disk image
+    """
+    # do the import here - Cheetah.Template will be loaded
+    # only if injection is performed
+    from Cheetah import Template as t
+    template = t.Template
+    template_data = open(FLAGS.injected_network_template).read()
+
+    key = str(inst['key_data'])
+    net = None
+    if networks_info:
+        ifc_num = -1
+        interfaces_info = []
+        for (network_ref, info) in networks_info:
+            ifc_num += 1
+            if not network_ref['injected']:
+                continue
+
+            ip_v4 = ip_v6 = None
+            if 'ips' in info and len(info['ips']) > 0:
+                ip_v4 = info['ips'][0]
+            if 'ip6s' in info and len(info['ip6s']) > 0:
+                ip_v6 = info['ip6s'][0]
+            if len(info['dns']) > 0:
+                dns = info['dns'][0]
+            interface_info = {'name': 'eth%d' % ifc_num,
+                              'address': ip_v4 and ip_v4['ip'] or '',
+                              'netmask': ip_v4 and ip_v4['netmask'] or '',
+                              'gateway': info['gateway'],
+                              'broadcast': info['broadcast'],
+                              'dns': dns,
+                              'address_v6': ip_v6 and ip_v6['ip'] or '',
+                              'netmask_v6': ip_v6 and ip_v6['netmask'] or '',
+                              'gateway_v6': ip_v6 and ip_v6['gateway'] or '',
+                              'use_ipv6': FLAGS.use_ipv6}
+            interfaces_info.append(interface_info)
+        net = str(template(template_data,
+                           searchList=[{'interfaces': interfaces_info,
+                                        'use_ipv6': FLAGS.use_ipv6}]))
+    return key, net
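`_prepare_injectables` walks `network_info`, skips networks whose record lacks the `injected` flag, and renders one stanza per interface through a Cheetah template. A rough sketch of the same loop using only stdlib `string.Template` (Cheetah is what the branch actually uses; the template text, helper name, and sample addresses are made up for illustration):

```python
from string import Template

# Stdlib stand-in for the Cheetah interfaces template used by the branch.
TEMPLATE = Template(
    'auto $name\n'
    'iface $name inet static\n'
    '    address $address\n'
    '    netmask $netmask\n'
    '    gateway $gateway\n')

def build_interfaces(networks_info):
    """Mirror of the _prepare_injectables loop: one stanza per injected net."""
    stanzas = []
    for ifc_num, (network_ref, info) in enumerate(networks_info):
        # ifc_num advances even for skipped networks, as in the diff
        if not network_ref['injected']:
            continue
        ip_v4 = info['ips'][0] if info.get('ips') else None
        stanzas.append(TEMPLATE.substitute(
            name='eth%d' % ifc_num,
            address=ip_v4['ip'] if ip_v4 else '',
            netmask=ip_v4['netmask'] if ip_v4 else '',
            gateway=info['gateway']))
    return ''.join(stanzas)

# Sample data shaped like the (network_ref, info) pairs in network_info.
net = build_interfaces([({'injected': True},
                         {'ips': [{'ip': '10.0.0.3',
                                   'netmask': '255.255.255.0'}],
                          'gateway': '10.0.0.1'})])
```

Note the same quirk as the diff: because the interface counter advances before the `injected` check, skipped networks still consume an `ethN` slot.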
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py	2011-03-24 21:11:48 +0000
+++ nova/virt/xenapi/vmops.py	2011-03-25 13:43:59 +0000
@@ -33,6 +33,7 @@
 from nova import log as logging
 from nova import exception
 from nova import utils
+from nova import flags
 
 from nova.auth.manager import AuthManager
 from nova.compute import power_state
@@ -43,6 +44,7 @@
 
 XenAPI = None
 LOG = logging.getLogger("nova.virt.xenapi.vmops")
+FLAGS = flags.FLAGS
 
 
 class VMOps(object):
@@ -53,7 +55,6 @@
         self.XenAPI = session.get_imported_xenapi()
         self._session = session
         self.poll_rescue_last_ran = None
-
         VMHelper.XenAPI = self.XenAPI
 
     def list_instances(self):
@@ -168,6 +169,12 @@
         # create it now. This goes away once nova-multi-nic hits.
         if network_info is None:
             network_info = self._get_network_info(instance)
+
+        # Alter the image before VM start, e.g. for network injection
+        if FLAGS.xenapi_inject_image:
+            VMHelper.preconfigure_instance(self._session, instance,
+                                           vdi_ref, network_info)
+
         self.create_vifs(vm_ref, network_info)
         self.inject_network_info(instance, vm_ref, network_info)
         return vm_ref
@@ -237,26 +244,17 @@
             obj = None
             try:
                 # check for opaque ref
-                obj = self._session.get_xenapi().VM.get_record(instance_or_vm)
+                obj = self._session.get_xenapi().VM.get_uuid(instance_or_vm)
                 return instance_or_vm
             except self.XenAPI.Failure:
-                # wasn't an opaque ref, must be an instance name
+                # wasn't an opaque ref, can be an instance name
                 instance_name = instance_or_vm
 
         # if instance_or_vm is an int/long it must be instance id
         elif isinstance(instance_or_vm, (int, long)):
             ctx = context.get_admin_context()
-            try:
-                instance_obj = db.instance_get(ctx, instance_or_vm)
-                instance_name = instance_obj.name
-            except exception.NotFound:
-                # The unit tests screw this up, as they use an integer for
-                # the vm name. I'd fix that up, but that's a matter for
-                # another bug report. So for now, just try with the passed
-                # value
-                instance_name = instance_or_vm
-
-        # otherwise instance_or_vm is an instance object
+            instance_obj = db.instance_get(ctx, instance_or_vm)
+            instance_name = instance_obj.name
         else:
             instance_name = instance_or_vm.name
         vm_ref = VMHelper.lookup(self._session, instance_name)
@@ -692,7 +690,6 @@
         vm_ref = VMHelper.lookup(self._session, instance.name)
         self._shutdown(instance, vm_ref)
         self._acquire_bootlock(vm_ref)
-
         instance._rescue = True
         self.spawn_rescue(instance)
         rescue_vm_ref = VMHelper.lookup(self._session, instance.name)
@@ -816,6 +813,7 @@
         info = {
             'label': network['label'],
             'gateway': network['gateway'],
+            'broadcast': network['broadcast'],
             'mac': instance.mac_address,
             'rxtx_cap': flavor['rxtx_cap'],
             'dns': [network['dns']],
 
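The tightened `_get_vm_opaque_ref` logic above probes a candidate opaque ref with the cheaper `VM.get_uuid` call instead of fetching the whole record, and drops the `NotFound` fallback for integer instance ids. A toy sketch of the three-way dispatch (`FakeSession`, `FakeFailure`, and `resolve_name` are illustrative stand-ins, not the real XenAPI session or Nova db):

```python
class FakeFailure(Exception):
    """Stands in for XenAPI.Failure."""

class FakeSession:
    """Illustrative session stub that knows a few opaque refs."""
    def __init__(self, refs):
        self.refs = refs

    def get_uuid(self, ref):
        # Cheap existence probe, mirroring VM.get_uuid in the diff
        if ref not in self.refs:
            raise FakeFailure(ref)
        return self.refs[ref]

def resolve_name(session, instance_or_vm, instances_db):
    # string: it's an opaque ref if the probe succeeds, else an instance name
    if isinstance(instance_or_vm, str):
        try:
            session.get_uuid(instance_or_vm)
            return instance_or_vm        # valid opaque ref, use as-is
        except FakeFailure:
            return instance_or_vm        # treat as instance name
    # int: an instance id, resolved via a (fake) db lookup with no
    # NotFound safety net, as in the diff
    elif isinstance(instance_or_vm, int):
        return instances_db[instance_or_vm]
    # anything else: an instance object with a .name attribute
    return instance_or_vm.name

session = FakeSession({'OpaqueRef:42': 'uuid-42'})
```

Removing the `NotFound` catch means an unknown integer id now raises instead of silently falling back, which is the behavioural change the unit-test comment in the removed block was excusing.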
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py	2011-03-24 08:56:06 +0000
+++ nova/virt/xenapi_conn.py	2011-03-25 13:43:59 +0000
@@ -107,8 +107,22 @@
                      5,
                      'Max number of times to poll for VHD to coalesce.'
                      ' Used only if connection_type=xenapi.')
+flags.DEFINE_bool('xenapi_inject_image',
+                  True,
+                  'Specifies whether an attempt to inject network/key'
+                  ' data into the disk image should be made.'
+                  ' Used only if connection_type=xenapi.')
+flags.DEFINE_string('xenapi_agent_path',
+                    'usr/sbin/xe-update-networking',
+                    'Specifies the path in which the xenapi guest agent'
+                    ' should be located. If the agent is present,'
+                    ' network configuration is not injected into the image.'
+                    ' Used only if connection_type=xenapi'
+                    ' and xenapi_inject_image=True.')
+
 flags.DEFINE_string('xenapi_sr_base_path', '/var/run/sr-mount',
                     'Base path to the storage repository')
+
 flags.DEFINE_string('target_host',
                     None,
                     'iSCSI Target Host')
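Taken together, the two new flags gate injection: `xenapi_inject_image` enables the whole mechanism, and the presence of the guest agent at `xenapi_agent_path` (relative to the mounted image root) suppresses direct file manipulation. A small sketch of that decision using only `os.path` (the helper name `should_modify_files` is made up; the real check lives in `_find_guest_agent`/`_mounted_processing` above):

```python
import os
import tempfile

def should_modify_files(inject_image, mounted_root, agent_rel_path):
    """True when interface files should be edited directly: injection is
    enabled and no capable guest agent is present in the image."""
    if not inject_image:
        return False
    return not os.path.isfile(os.path.join(mounted_root, agent_rel_path))

# Simulate a mounted image root with and without the agent binary.
root = tempfile.mkdtemp()
before = should_modify_files(True, root, 'usr/sbin/xe-update-networking')
agent = os.path.join(root, 'usr', 'sbin', 'xe-update-networking')
os.makedirs(os.path.dirname(agent))
open(agent, 'w').close()
after = should_modify_files(True, root, 'usr/sbin/xe-update-networking')
```

Before the agent file exists, direct manipulation is wanted; once the agent is present, the xenstore keys set at boot take over and the filesystem is left alone.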