Merge lp:~citrix-openstack/nova/xenapi-netinject-prop into lp:~hudson-openstack/nova/trunk

Proposed by Andy Southgate
Status: Merged
Approved by: Sandy Walsh
Approved revision: 723
Merged at revision: 889
Proposed branch: lp:~citrix-openstack/nova/xenapi-netinject-prop
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1060 lines (+507/-93)
11 files modified
Authors (+1/-0)
nova/tests/db/fakes.py (+42/-32)
nova/tests/fake_utils.py (+106/-0)
nova/tests/test_xenapi.py (+139/-26)
nova/tests/xenapi/stubs.py (+8/-4)
nova/virt/disk.py (+19/-7)
nova/virt/libvirt_conn.py (+1/-3)
nova/virt/xenapi/fake.py (+31/-6)
nova/virt/xenapi/vm_utils.py (+133/-0)
nova/virt/xenapi/vmops.py (+13/-15)
nova/virt/xenapi_conn.py (+14/-0)
To merge this branch: bzr merge lp:~citrix-openstack/nova/xenapi-netinject-prop
Reviewer Review Type Date Requested Status
Trey Morris (community) Approve
Thierry Carrez (community) ffe Approve
Sandy Walsh (community) Approve
Rick Harris (community) Approve
Review via email: mp+49798@code.launchpad.net

Description of the change

This is basic network injection for XenServer, and includes:

o Modification of the /etc/network/interfaces file within the image using code taken from and now shared with libvirt_conn. This is for compatibility with legacy Linux images without a guest agent.

o Setting of xenstore keys before instance boot, intended for the XenServer Windows agent. The agent will use these to configure the network at boot-time.

This change does not implement live reconfiguration, which is on another blueprint:

https://blueprints.launchpad.net/nova/+spec/xs-inject-networking

It does include template code to detect the presence of agents and avoid modifying the filesystem if they are injection-capable.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi Andy!

Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/xenapi/vm_utils.py.

Lemme know if you're uncertain about the process of resolving merge conflicts.

Cheers,
jay

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Hi,

Thanks Jay. It looks like lp:nova has moved on since I rebased this
branch. Is it sufficient to rebase to trunk again?

Cheers,

Andy S.

On 02/16/11 18:48, Jay Pipes wrote:
> Hi Andy!
>
> Before going any further, please set this merge proposal status to Work In Progress and resolve the merge conflict in nova/virt/xenapi/vm_utils.py.
>
> Lemme know if you're uncertain about the process of resolving merge conflicts.
>
> Cheers,
> jay
>

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Wed, Feb 16, 2011 at 2:01 PM, Andy Southgate
<email address hidden> wrote:
> Thanks Jay.  It looks like lp:nova has moved on since I rebased this
> branch.  Is it sufficient to rebase to trunk again?

It's good practice to have a local copy of the lp:nova trunk, and
then, when you are ready to bzr push your local topic branch to
Launchpad, simply do a bzr merge ../trunk and resolve any conflicts
that might pop up. After you resolve the conflicts in the files (if
any), do a bzr resolve --all and then bzr push your local topic branch
to Launchpad.

As far as "rebasing", not sure you need to do that. bzr has a rebase
plugin, but it's not what you need in this case.

So, in short, the recommended dev process is like so:

# Assume ~/repos/nova is where you have your bzr branches for nova...
cd ~/repos/nova
bzr branch lp:nova trunk
# "topic" is the name you give the branch you work on (say, xs-netinject-prop)
bzr branch trunk topic
cd topic
# do work in topic branch...
bzr commit -m "Some comment about your changes"
bzr push lp:~citrix-openstack/nova/topic
# On Launchpad, propose your branch for merging, as you did...
# If reviews come back saying changes are needed, set the merge
# proposal status to Work In Progress
# then, in your local branch, do more work
bzr commit -m "More changes"
cd ../trunk
# This pulls all changes merged into lp:nova trunk from the
# rest of the Nova contributors into your local branch
bzr pull
cd ../topic
bzr merge ../trunk
# If any conflicts, resolve them and then do bzr resolve --all
bzr commit -m "Merge trunk"
bzr push
# Then set your merge proposal back to "Needs Review"

All set :)

Don't worry, once you do it a couple times, it becomes second nature ;)

-jay

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Thanks, that was a bit quicker than a rebase!

Andy

Revision history for this message
Andy Southgate (andy-southgate) wrote :

Salvatore (salvatore-orlando) is taking over this merge BTW as I'm moving on to other things, so please send questions to him if you have them.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I have merged with trunk again. This branch is ready for review. I have requested a review from Trey Morris, since this branch is related to his xs-inject-networking work. An additional nova-core review would be appreciated.

Thanks!

Revision history for this message
Trey Morris (tr3buchet) wrote :

Great tests!

I have some concerns, not the least of which is that you added new functions which write networking data to the xenstore param list and call them just before spawn writes to the xenstore param list. If written to the same location, spawn will blow away all of your changes. It doesn't appear that your function writes them to the same path though so we'd end up with some nasty config duplication. Your function also writes to different paths than spawn does. Those paths work with the agent. I'm not sure at this point what the agent would do with data at other paths (possibly ignore it).

Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting them because we'll be building not only base images but also customer images where they could be using who knows which file systems or even encryption. So there will need to be some sort of branching put in place to determine if you do or do not want to mount the vdi.

There are functions in place such as "write_to_param_xenstore()" in vmops you should use instead of making the calls directly to follow convention set when Ed refactored.

Also in the next hour I'll be proposing merge for the 2nd half of my network_injection branch, it refactors what takes place in spawn and allows network reconfiguration/injection in post spawn situations. I think it makes sense to get this in and then place your branch on top of it and fix any conflicts instead of doing it twice.

Let me know if you want to discuss anything, I'm glad to help.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Great tests!
>
> I have some concerns, not the least of which is that you added new functions
> which write networking data to the xenstore param list and call them just
> before spawn writes to the xenstore param list. If written to the same
> location, spawn will blow away all of your changes. It doesn't appear that
> your function writes them to the same path though so we'd end up with some
> nasty config duplication. Your function also writes to different paths than
> spawn does. Those paths work with the agent. I'm not sure at this point what
> the agent would do with data at other paths (possibly ignore it).
>

I'm not entirely sure whether this duplication is intended or just the result of some previous merge with trunk (I'm thinking of multi-nic-support). However, I agree the fact that they write at different paths is a bit weird. I'll investigate this issue.

> Mounting the vdi also concerns me. We use VHDs and aren't going to be mounting
> them because we'll be building not only base images but also customer images
> where they could be using who knows which file systems or even encryption. So
> there will need to be some sort of branching put in place to determine if you
> do or do not want to mount the vdi.
>

Mounting the image for injecting data is a technique which has been used in nova since its early days. In the xenapi implementation, we first stream it into a VDI, and then attach the VDI to the VM in which nova-compute is running. Your point on VHDs and encrypted file systems is more than valid, though. For the same reason we believe the guest agent should be the 'preferred' way for getting network/key configuration into the instance. We are providing injection for 'compatibility' with the libvirt implementation.

Also, the way in which it is currently done will probably not work even in more trivial cases, such as the root partition not starting at byte 0 or /etc being mounted on a partition other than the first. As already stated, injection is intended only for 'basic' scenarios.

In the proposed implementation, if _mounted_processing fails to mount the image, an error message is logged but this does not cause the spawn process to fail. However, I agree we probably need a better mechanism for deciding whether injection should be performed. I don't think proposing changes to either the API or instance_types would be a good idea at all. Now that glance is in place we can probably use metadata for this purpose. For instance, image_type==simple_machine would mean network injection should occur, whereas image_type==agent_machine would mean the xenstore param list should be populated. But this is probably not the right place for this discussion!
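The fail-quietly behaviour described here can be sketched roughly as follows. This is a minimal illustration, not the actual nova code: `mount_image` and `process` are hypothetical callables standing in for the mount step and the file-injection step.

```python
import logging

LOG = logging.getLogger(__name__)


def mounted_processing_sketch(mount_image, process):
    """Try to mount the image and run the injection step.

    A mount failure is logged but not propagated, so the surrounding
    spawn process can continue without injection.
    """
    try:
        mountpoint = mount_image()
    except Exception:
        LOG.error('Failed to mount image for injection; skipping')
        return False
    # In a real implementation the unmount would happen in a finally block.
    process(mountpoint)
    return True
```

Spawn then simply ignores a `False` result rather than aborting.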

> There are functions in place such as "write_to_param_xenstore()" in vmops you
> should use instead of making the calls directly to follow convention set when
> Ed refactored.
>

Good point. I will take care of this.

> Also in the next hour I'll be proposing merge for the 2nd half of my
> network_injection branch, it refactors what takes place in spawn and allows
> network reconfiguration/injection in post spawn situations. I think it makes
> sense to get this in and then place your branch on top of it and fix any
> conflicts instead of doing it twice.

Revision history for this message
Trey Morris (tr3buchet) wrote :

> I'm not entirely sure whether this duplication is intended or just the result of some previous merge
> with trunk (I'm thinking of multi-nic-support). However, I agree the fact that they write at different
> path is a bit weird. I'll investigate this issue.

The paths I used are what the agent is expecting: for instance, the mac address being lowercase with no punctuation vs all uppercase and full of '_'. That's what I'm referring to. I'm not sure of a reason these should be changed, so I'm all ears if you've got one.

> (mounting the vdi)

First I'm glad it fails quietly. Second, I understand the reason for doing so. It's a great reason. It's something that for sure should be implemented; however, we will never use it in any circumstance, relying on the agent instead. For this reason, I suggest a configuration option. example: --mount-vdi=false. I don't see the point in having more metadata if it will always be true or always be false. Of course, if there are people who want to mount some vdi at spawn and twiddle with the file system, but not others, then that's a different story. I'm happy with either way as long as there is a way to disable it completely.

At this point I feel confident saying that injection of network configuration into the xenstore/xenstore-param-list and the agent call to reset networking is finished. Perhaps preconfigure_xenstore() isn't needed. I'm not nearly as knowledgeable of xenstore as you guys surely are so if I've forgotten something please let me know. I only chose my paths as such because of the agent.

There will be a lot of changes to these areas as I start working on multinic, especially in the inject_network_info() function. Perhaps we can agree to leave writing injection into the xenstore to me and you handle the rest (mounting of vdi's, configuration etc). This way I can proceed with multi-nic without worrying that I'll just have to rewrite it.

Also, if there is anything you'd like me to change about the way I've implemented network injection, please let me know. I've written it in a way that works perfectly for us. I'd like for it to work for you as well.

-tr3buchet

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Acknowledging Trey's review and reverting status to work in progress to ease reviewers' work.

@Trey: I will reply to your latest comments shortly!

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> > I'm not entirely sure whether this duplication is intended or just the
> result of some previous merge
> > with trunk (I'm thinking of multi-nic-support). However, I agree the fact
> that they write at different
> > path is a bit weird. I'll investigate this issue.
>
> The paths I used are what the agent is expecting for instance the mac address
> being lowercase with no punctuation vs all uppercase and full of '_'. That's
> what I'm referring to. I'm not sure of a reason these should be changed, so
> I'm all ears if you've got one.
>

I also don't see any reason why we should use different paths. The only explanation I can find for the different paths is that these two branches have evolved independently even though the functionalities they implement overlap quite a bit.

> > (mounting the vdi)
>
> First I'm glad it fails quietly. Second, I understand the reason for doing so.
> It's a great reason. It's something that for sure should be implemented;
> however, we will never use it in any circumstance, relying on the agent
> instead. For this reason, I suggest a configuration option. example: --mount-
> vdi=false. I don't see the point in having more metadata if it will always be
> true or always be false. Of course, if there are people who want to mount some
> vdi at spawn and twiddle with the file system, but not others, then that's a
> different story. I'm happy with either way as long as there is a way to
> disable it completely.
>

I was suggesting metadata assuming a scenario in which there might be images with and without agent. However, I agree this scenario does not make a lot of sense after all. As you said, either you use the agent or you don't. So the flag idea is maybe more effective, and easier to implement as well :-)
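The flag-gated approach being agreed on here might look roughly like the following sketch. A plain module-level dict stands in for nova's gflags machinery, and the flag name is illustrative (the thread later refers to a `xenapi_inject_image` flag).

```python
# Stand-in for a nova-style boolean flag such as xenapi_inject_image;
# nova itself defines flags through its gflags wrapper.
FLAGS = {'xenapi_inject_image': True}


def prepare_instance_image(vdi_ref):
    """Return the spawn-time steps, honouring the injection flag."""
    steps = []
    if FLAGS['xenapi_inject_image']:
        # Deployers relying purely on the guest agent can disable this
        # step completely, as Trey requested.
        steps.append('mount-and-inject:%s' % vdi_ref)
    steps.append('write-xenstore-params')
    return steps
```

With the flag off, spawn skips the mount entirely and only populates the xenstore param list.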

> At this point I feel confident saying that injection of network configuration
> into the xenstore/xenstore-param-list and the agent call to reset networking
> is finished. Perhaps preconfigure_xenstore() isn't needed. I'm not nearly as
> knowledgeable of xenstore as you guys surely are so if I've forgotten
> something please let me know. I only chose my paths as such because of the
> agent.

I tentatively agree with you that preconfigure_xenstore is not required anymore. I will however double-check the code before taking a decision in this direction.

> There will be a lot of changes to these areas as i start working on multinic,
> especially in the inject_network_info() function. Perhaps we can agree to
> leave writing injection into the xenstore to me and you handle the rest
> (mounting of vdi's, configuration etc). This way I can proceed with multi-nic
> without worry that I'll just have to rewrite it.
>

I see your branch has now been merged. I will update my branch accordingly.
I agree on your approach: maybe if there is anything in preconfigure_xenstore, can I ask you to add it to inject_network_info?

> Also, if there is anything you'd like me to change about the way I've
> implemented network injection, please let me know. I've written it in a way
> that works perfectly for us. I'd like for it to work for you as well.
>
> -tr3buchet

Revision history for this message
Trey Morris (tr3buchet) wrote :

> I agree on your approach: maybe if there is anything in preconfigure_xenstore, can I ask you to add it to inject_network_info?

Absolutely! Just let me know. First thing I came across was not injecting broadcast address where you guys did. Come up with a list so I can implement them all at once.

-tr3buchet

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

State changed to work in progress due to conflicts with current trunk

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Conflicts resolved.
Branch ready for review again.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Reverting to work in progress to resolve conflicts and integrate IPv6 injection into branch

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Branch updated and again ready for review.

Revision history for this message
Rick Harris (rconradharris) wrote :

Nice work, Andy-- really digging the added tests here.

Overall this looks good, just have a few small suggestions that might improve
readability:

> 87 + def fake_network_get_all_by_instance(context, instance_id):
> 88 + l = []
> 89 + l.append(FakeModel(network_fields))
> 90 + return l
> 91 +

Pep8: "Never use the characters `l' (lowercase letter el)". Might be better
as:

    return [FakeModel(network_fields)]

> 153 +def fake_execute_get_log():
> 154 + global _fake_execute_log
> 155 + return _fake_execute_log

`global` isn't needed here since you're not mutating the value.

> +def fake_execute(*cmd, **kwargs):

Since `cmd` is a list, it might be clearer as `cmd_parts` (or something
similar)

> 185 + cmd_map = map(str, cmd)
> 186 + cmd_str = ' '.join(cmd_map)

Going forward, comprehensions are preferred to map/reduce/filter. So, might be
better as:

    cmd_str = ' '.join(str(part) for part in cmd_parts)

> 199 + if isinstance(reply_handler, types.StringTypes):

I think `basestring` is preferred to StringTypes:

    if isinstance(reply_handler, basestring):

> 213 + LOG.debug(_("Reply to faked command is stdout='%(0)s' stderr='%(1)s'") %
> 214 + {'0': reply[0], '1': reply[1]})

Rather than using a position-style, since a dict is being passed, might be
better to attach meaningful names. For example:

    stdout = reply[0]
    stderr = reply[1]
    LOG.debug(_("Reply to faked command is stdout='%(stdout)s'"
                "stderr='%(stderr)s'") % locals())

> 264 + #db_fakes.stub_out_db_network_api(self.stubs)

Vestigial code?

> 389 def _test_spawn(self, image_id, kernel_id, ramdisk_id,
> 394 + instance_type="m1.large", os_type="linux", check_injection=False):

Should probably be lined up with the paren, something like:

    def _test_spawn(self, image_id, kernel_id, ramdisk_id,
                    instance_type="m1.large", os_type="linux",
                    check_injection=False):

> 580 + def f_1(self, interval, now=True):
> 581 + self.f(*self.args, **self.kw)
> 582 + stubs.Set(utils.LoopingCall, 'start', f_1)

Could use a more descriptive name than `f_1`. Perhaps, `fake_start`.

> 640 + t = __import__('Cheetah.Template', globals(), locals(), ['Template'],
> 641 + -1)
> 642 + template = t.Template

Is there a reason you're using __import__ here--if so, it might benefit from a
NOTE(andy):

Otherwise, might be better to do:

    if not template:
        from Cheetah import Template
        template = Template

> 643 + #load template file if necessary

Not a super-useful comment (didn't add anything for me, at least)

> 808 + LOG.debug(_('Raising Failure'))

Not a particularly useful logging line, especially since we raise immediately
afterwards. I'd consider scrapping this.

> 804 + if (ref in _db_content[cls]):
> 805 + if (field in _db_content[cls][ref]):
> 806 + return _db_content[cls][ref][field]
> 807 + else:
> 808 + LOG.debug(_('Raising Failure'))
> 809 + raise Failure(['HANDLE_INVALID', cls, ref])

Might be clearer with a EAFP style, like:

    try:
        return _db_content[cls][ref][field]
    except KeyError:
        raise Failure(['HANDLE_INVALID', cls, ref])

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi Rick,

thanks for taking time for doing such a detailed review!
Your tips will be very useful to provide higher-quality code in the future.

I will update the branch in the next few hours in order to give reviewers a full day before feature freeze to review it.

There are only a few bits which probably deserve some discussion:
>
> Is there a reason you're using __import__ here--if so, it might benefit from a
> NOTE(andy):
>
> Otherwise, might be better to do:
>
> if not template:
> from Cheetah import Template
> template = Template
>

I agree the syntax you are proposing is much neater, but wouldn't this put a requirement on Cheetah for the whole nova project (e.g. nova services, unit tests, etc. failing if Cheetah is not installed)?

> > 990 + LOG.debug("ENTERING WAIT FOR BOOT!")

This is vestigial code. I love using debug lines like these, and I usually throw a pep8 violation into them to make sure I remove them before pushing. Unfortunately I forgot to add the pep8 violation to this one!

Revision history for this message
Rick Harris (rconradharris) wrote :

> wouldn't this put a requirement on Cheetah

Since you're importing Cheetah in function and not in the module namespace, we'll only end up importing Cheetah if it's actually needed. So, I think the "from x import y", in this case, works just as well as __import__, just a bit cleaner IMHO.
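Rick's point can be illustrated with any optional dependency. In this sketch `json` stands in for `Cheetah.Template`; the mechanism is identical.

```python
def render_template(data):
    # The import happens inside the function, so merely importing this
    # module never requires the dependency; it is loaded only on first
    # use (and cached in sys.modules afterwards).
    import json  # stand-in for `from Cheetah import Template`
    return json.dumps(data, sort_keys=True)
```

Modules that never call `render_template` therefore never need the dependency installed, which is exactly why the function-level `from x import y` imposes no project-wide requirement.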

> and I usually throw a pep8 violation in them in order to make sure I remove them before push

Interesting. I've been meaning to add a SPIKE(author) tag that would cause Hudson to reject patches that include this. You could wrap extra debugs with a # SPIKE comment and be sure it never gets committed.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

All of Rick's comments but one have been addressed.

The one not addressed is the following:

> 804 + if (ref in _db_content[cls]):
> 805 + if (field in _db_content[cls][ref]):
> 806 + return _db_content[cls][ref][field]
> 807 + else:
> 808 + LOG.debug(_('Raising Failure'))
> 809 + raise Failure(['HANDLE_INVALID', cls, ref])

Might be clearer with a EAFP style, like:

    try:
        return _db_content[cls][ref][field]
    except KeyError:
        raise Failure(['HANDLE_INVALID', cls, ref])

In the fake xenapi driver we need to distinguish between the following cases:
- entity not found (ref not in _db_content[cls]), in which case we need to raise a HANDLE_INVALID failure
- entity found, but attribute not found, in which case we need to raise a NOT_IMPLEMENTED failure.

Nevertheless, Rick's suggestion is very valid, and this calls for a wider refactoring of the fake xenapi driver, starting with the _getter routine. However, I think it would be better to do this after Cactus.
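The two failure modes can still be expressed in an EAFP-leaning style. This is a rough sketch of what such a getter might look like; the `Failure` class and the `_db_content` layout are simplified stand-ins for the fake driver's actual structures.

```python
class Failure(Exception):
    def __init__(self, details):
        super(Failure, self).__init__(details)
        self.details = details


# Simplified stand-in for the fake driver's record store.
_db_content = {'VM': {'vm-ref-1': {'name_label': 'instance-1'}}}


def get_field(cls, ref, field):
    try:
        record = _db_content[cls][ref]
    except KeyError:
        # Unknown handle: the entity itself does not exist.
        raise Failure(['HANDLE_INVALID', cls, ref])
    try:
        return record[field]
    except KeyError:
        # Known handle, but the attribute is not implemented by the fake.
        raise Failure(['NOT_IMPLEMENTED', field])
```

Two nested try/except blocks keep the EAFP flavour while still raising a distinct failure for each case.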

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Getting XenAPIMigrateInstance (test_xenapi.py) asking for passwords during tests:

XenAPIMigrateInstance
    test_finish_resize [sudo] password for swalsh:
                                [sudo] password for swalsh:
OK

Otherwise, impressive branch ... nothing serious to say.

641 + # it the caller does not provide template object and data

"If the caller ..."

But largely I'm going to trust Trey on ensuring the approach is sound (as he's closer to the problem than I)

Once he approves, I will too.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Getting XenAPIMigrateInstance (test_xenapi.py) asking for passwords during
> tests:
>
> XenAPIMigrateInstance
> test_finish_resize [sudo] password for swalsh:
> [sudo] password for swalsh:
> OK
>

Thanks for spotting this Sandy! Finish_resize calls _create_vm which in turn calls preconfigure_instance (which writes /etc/network/interfaces). We were not stubbing out utils.execute for the migrate test case.

I already pushed updated code; please see if the tests run fine now.
Another way of doing this is disabling the flag xenapi_inject_image for the migrate use case. If you think that would make more sense, I'll do that.
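The fix amounts to swapping the real command runner for a recording fake in the test's setUp, so nothing ever shells out. A minimal sketch of the idea, with a hypothetical `utils` class standing in for the real nova.utils module (nova's tests would do the swap via stubout/mox rather than direct assignment):

```python
class utils(object):  # stand-in for the real nova.utils module
    @staticmethod
    def execute(*cmd, **kwargs):
        raise RuntimeError('would run a real (possibly sudo) command')


executed = []


def fake_execute(*cmd, **kwargs):
    # Record the command line instead of running it, so the test never
    # shells out and never prompts for a sudo password.
    executed.append(' '.join(str(part) for part in cmd))
    return '', ''


# In a test's setUp:
utils.execute = fake_execute
utils.execute('mount', '/dev/xvda1', '/mnt')
```

Afterwards the test can assert on `executed` to verify which commands the code under test attempted to run.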

> Otherwise, impressive branch ... nothing serious to say.
>
> 641 + # it the caller does not provide template object and data
>
> "If the caller ..."
>
> But largely I'm going to trust Trey on ensuring the approach is sound (as he's
> closer to the problem than I)
>

get_injectables is a routine 'shared' between libvirt and xenapi. In the libvirt backend Cheetah.Template is loaded with a dynamic import as a global variable, and the template file is read when the LibVirtConnection instance is initialized.

We don't do that in XenAPI and I'm a bit wary of using global variables, so I put the template class and the template file as optional variables and then dynamically set them if they are not set from the caller (xenapi backend in this case).
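The optional-argument approach described here might look roughly like this. The signature and names are illustrative, and `string.Template` stands in for `Cheetah.Template`:

```python
def get_injectables(network_info, template_class=None, template_text=None):
    # When the caller (e.g. the xenapi backend) supplies neither a
    # template class nor template text, resolve them lazily here rather
    # than at module import time or through global variables.
    if template_class is None:
        from string import Template  # stand-in for Cheetah.Template
        template_class = Template
    if template_text is None:
        # A real implementation would read the interfaces.template file.
        template_text = 'address $address\n'
    return template_class(template_text).substitute(network_info)
```

Callers that already hold a template class and file contents pass them in; everyone else pays the import and file-read cost only on first use.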
> Once he approves, I will too.

Revision history for this message
Trey Morris (tr3buchet) wrote :

--- disk.py ----
get_injectables needs a docstring
why is get_injectables in disk? seems like an odd place..

get_injectables troubles me because one of my goals with multi-nic is to untether the network and virt layers by passing all of the data required for network configuration into the virt layer from compute, so no network related db lookups are required at the virt layer. In the meantime (before nova-multi-nic), get_network_info() in vmops.py is performing this task. There will be a similar get_network_info() function for each of libvirt/xenapi/hyperV. You can see the format of the data compute will pass to the virt layer spawn process by looking at get_network_info() in vmops.py. Is there something you need that this function isn't handling? I'm not sure I fully grasp the goal of get_injectables().

This should also apply to the fake calls for network info in tests/db/fakes.py. The xenapi/libvirt/hyperV tests shouldn't be using those (but this will surely have to be figured out later).

--- vm_utils.py ---
_find_guest_agent needs a docstring

--- vmops.py ---
yay!!

--- xenapi_conn.py ---
1058 + ' network configuration if not injected into the image'
should probably be
1058 + ' network configuration is not injected into the image'
unless i misinterpreted.

The rest is great! I like the way you've implemented mounting vdi as a flaggable option and I also like how if a proper agent is present you preference using it over adding to the mounted filesystem.

review: Needs Fixing
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi Trey,
see my comments inline

> --- disk.py ----
> get_injectables needs a docstring
> why is get_injectables in disk? seems like an odd place..
>
> get_injectables troubles me because one of my goals with multi-nic is to
> untether the network and virt layers by passing all of the data required for
> network configuration into the virt layer from compute so no network related
> db lookups are required at the virt layer. In the mean time (before nova-
> multi-nic), get_network_info() in vmops.py is performing this task. There will
> be a similar get_network_info() function for each libvirt/xenapi/hyperV. You
> can see the format of the data compute will pass to the virt layer spawn
> process by looking at get_network_info() in vmops.py. Is there something you
> need that this function isn't handling? I'm not sure I fully grasp the goal of
> get_injectables().
>
> This should also apply to the fake calls for network info in
> tests/db/fakes.py. The xenapi/libvirt/hyperV tests shouldn't be using those
> (but this will surely have to figured out later).

When I noticed your network_info I realized there was some overlap between get_injectables and it.
get_injectables is in disk.py since it is used by both libvirt and xenapi (and possibly hyperv); it seemed the best place for it because it performs operations related to the disk image.

From what you said, every backend is going to have a _get_network_info, at least until multi-nic hits trunk. In that case I no longer see a reason for having that common bit of code, and I will therefore remove it. However, I should still use the interfaces.template file and populate it with Cheetah. To do so, the best thing I can do is map the dict created by _get_network_info onto the dict required by the Cheetah template (the dict already contains all the info I need).

Another thing to note is that the Cheetah template does not support multiple IP addresses. For this reason I think it might be worth using only the first element from the ip/ipv6 lists in the dict created by network_info.
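A sketch of that mapping, taking only the first address as suggested. `string.Template` stands in for Cheetah, and the field names in the network_info dict are illustrative assumptions, not the actual trunk format:

```python
from string import Template

# Stand-in for the interfaces.template Cheetah file.
INTERFACES_TEMPLATE = Template(
    'auto eth0\n'
    'iface eth0 inet static\n'
    '    address $address\n'
    '    netmask $netmask\n'
    '    gateway $gateway\n')


def interfaces_from_network_info(info):
    # The template supports only a single address, so take the first
    # element of the (assumed) ip list.
    first_ip = info['ips'][0]
    return INTERFACES_TEMPLATE.substitute(
        address=first_ip['ip'],
        netmask=first_ip['netmask'],
        gateway=info['gateway'])
```

The resulting text is what would be written into /etc/network/interfaces on the mounted image.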

> --- vm_utils.py ---
> _find_guest_agent needs a docstring
>

Will be done

> --- vmops.py ---
> yay!!
>
> --- xenapi_conn.py ---
> 1058 + ' network configuration if not injected into the
> image'
> should probably be
> 1058 + ' network configuration is not injected into the
> image'
> unless i misinterpreted.
>

Typo. Thanks for spotting this!

> The rest is great! I like the way you've implemented mounting vdi as a
> flaggable option and I also like how if a proper agent is present you
> preference using it over adding to the mounted filesystem.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi reviewers!

Updating the branch took a bit longer than expected, as in the meantime another branch changing the way network injection is performed for libvirt landed in trunk!

I adapted my branch following Trey's comments. No more _get_injectables! Now I also use the value returned by _get_network_info. Unlike the libvirt impl, I decided to use the 'info' field in the tuple rather than network_ref. This way, if the db changes in the future we will have to update only _get_network_info accordingly.

Another tangential issue was the rescue feature. In the unit tests spawn_rescue was stubbed out with a nice 'pass', leading to a failure on a later operation on the xenapi fake driver (which has been improved to be less 'fake'). By the way, if spawn_rescue has been stubbed out, what's the point of having a unit test for rescue? Code coverage is not complete at all, and I think the unit test is quite unreliable.

Thanks again for spending so much time on this branch. I hope it is now ready to rest in trunk forever!

721. By Salvatore Orlando

merge trunk

Revision history for this message
Rick Harris (rconradharris) wrote :

As with Sandy, I really can't speak to whether this is the right approach. I'll leave that to Trey.

But from a code-quality perspective, this looks really good. Thanks for the fix-ups!

One small nit (not a deal-breaker):

> 97 + l = []
> 98 + l.append(FakeModel(fixed_ip_fields))
> 99 + return l

Looks like one was missed. I'm okay letting this slide since we can get this later; better to get this merged sooner. That said, if you can slip in a fix here, that'd be great.

review: Approve
722. By Salvatore Orlando

minor pep8 fix in db/fakes.py

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> As with Sandy, I really can't speak to whether this is the right approach.
> I'll leave that to Trey.
>
> But from a code-quality perspective, this looks really good. Thanks for the
> fix-ups!
>
> One small nit (not a deal-breaker):
>
> > 97 + l = []
> > 98 + l.append(FakeModel(fixed_ip_fields))
> > 99 + return l
>
> Looks like one was missed. I'm okay letting this slide since we can get this
> later; better to get this merged sooner. That said, if you can slip in a fix
> here, that'd be great.

As for the approach, I think I've addressed Trey's concerns.
Thanks for spotting the missed fix! Updated branch pushed.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Requesting an FF exception for this branch, as it already has one Approve vote and all discussion points (major and minor) have been addressed according to reviewers' suggestions.

The branch only touches the part of the XenAPI backend for the compute service which deals with spawning VMs. It performs delicate operations such as mounting VDIs, but the routines we use for these operations are already widely used in the XenAPI backend.
Two unit tests, carefully prepared to reduce stubbing-out to a minimum, have been provided.

Benefits:
- this branch will enable support for IP injection in disk images for XenAPI, in the same way as is already done for libvirt; indeed, it uses the same template file. Although XenAPI already supports agent-based injection, this way of doing injection can be very useful in situations where guest images do not have an agent.
- this branch also slightly improves the unit test infrastructure for the XenAPI backend, increasing code coverage.

Risk assessment:
- this branch has had a quite difficult life. It was first proposed more than a month ago, but was pushed back due to overlaps with other network-related branches dealing with the XenAPI backend. More problems arose when the interfaces.template file changed yesterday to reflect multiple interfaces. However, the branch is now completely in sync with current trunk, and since we are in the FF period we can expect minimal to no interference from other branches.

- No risk for the network manager, as its code has not been touched at all. The xenapi network driver will be used only by the compute node for setting up VLANs on Xen hosts. This also means that there can be no impact on other capabilities of the Network Manager, such as VPN access or Floating IPs.
- Since this is a new feature, and the associated code is executed only when a compute node uses XenAPI and the VLAN manager, there is no risk of breaking libvirt, Hyper-V, or VMware installations, or XenAPI installations that use the flat manager.
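For reference, the xenstore-based injection discussed above writes one key per interface of the form `vm-data/networking/<mac with colons stripped>`, whose value is a Python-literal dict that the in-guest agent reads at boot. A minimal sketch, with values taken from the fixtures asserted in `check_vm_record` in the diff below:

```python
import ast

# Sketch of the per-interface xenstore key/value written before instance boot,
# mirroring the data asserted in test_xenapi.py's check_vm_record.
mac = 'aa:bb:cc:dd:ee:ff'
key = 'vm-data/networking/' + mac.replace(':', '')
value = repr({'label': 'test_network',
              'broadcast': '10.0.0.255',
              'ips': [{'ip': '10.0.0.3',
                       'netmask': '255.255.255.0',
                       'enabled': '1'}],
              'mac': mac,
              'dns': ['10.0.0.2'],
              'gateway': '10.0.0.1'})
# The agent (e.g. the XenServer Windows agent) applies these settings at
# boot time; the unit test round-trips the value with ast.literal_eval:
data = ast.literal_eval(value)
```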

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was only 'suds' missing. Should have looked in pip-requires.

Nice work! Hopefully we can sneak this in?

review: Approve
Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

> Heh, nearly had a heart attack when I saw the tests failing. Then I saw it was
> only 'suds' missing. Should have looked in pip-requires.
>
> Nice work! Hopefully we can sneak this in?

Hi Sandy,

Are you sure you're not talking about VMware-vSphere? I believe that was the branch with the test failing due to suds...
In this branch your tests were failing because I was not stubbing out utils.execute in the migrate test case (on my dev machines they were not failing because all the commands were in the sudoers file :-)
Anyway, that has been fixed!

723. By Salvatore Orlando

merge trunk

Revision history for this message
Thierry Carrez (ttx) wrote :

This is a bit too wide for my taste, but it seems very close to the goal. FFe granted, provided this gets merged very soon (before end of day Monday).

review: Approve (ffe)
Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

> Are you sure you're not talking about VMware-vSphere? I believe that was the
> branch with the test failing due to suds...

Could have been just part of the trunk merges. It was definitely this branch. All good now though.

Revision history for this message
Trey Morris (tr3buchet) wrote :

great :)

review: Approve

Preview Diff

[H/L] Next/Prev Comment, [J/K] Next/Prev File, [N/P] Next/Prev Hunk
1=== modified file 'Authors'
2--- Authors 2011-03-24 22:47:36 +0000
3+++ Authors 2011-03-25 13:43:59 +0000
4@@ -1,4 +1,5 @@
5 Andy Smith <code@term.ie>
6+Andy Southgate <andy.southgate@citrix.com>
7 Anne Gentle <anne@openstack.org>
8 Anthony Young <sleepsonthefloor@gmail.com>
9 Antony Messerli <ant@openstack.org>
10
11=== modified file 'nova/tests/db/fakes.py'
12--- nova/tests/db/fakes.py 2011-03-17 02:20:18 +0000
13+++ nova/tests/db/fakes.py 2011-03-25 13:43:59 +0000
14@@ -24,7 +24,7 @@
15 from nova import utils
16
17
18-def stub_out_db_instance_api(stubs):
19+def stub_out_db_instance_api(stubs, injected=True):
20 """ Stubs out the db API for creating Instances """
21
22 INSTANCE_TYPES = {
23@@ -56,6 +56,25 @@
24 flavorid=5,
25 rxtx_cap=5)}
26
27+ network_fields = {
28+ 'id': 'test',
29+ 'bridge': 'xenbr0',
30+ 'label': 'test_network',
31+ 'netmask': '255.255.255.0',
32+ 'cidr_v6': 'fe80::a00:0/120',
33+ 'netmask_v6': '120',
34+ 'gateway': '10.0.0.1',
35+ 'gateway_v6': 'fe80::a00:1',
36+ 'broadcast': '10.0.0.255',
37+ 'dns': '10.0.0.2',
38+ 'ra_server': None,
39+ 'injected': injected}
40+
41+ fixed_ip_fields = {
42+ 'address': '10.0.0.3',
43+ 'address_v6': 'fe80::a00:3',
44+ 'network_id': 'test'}
45+
46 class FakeModel(object):
47 """ Stubs out for model """
48 def __init__(self, values):
49@@ -76,38 +95,29 @@
50 def fake_instance_type_get_by_name(context, name):
51 return INSTANCE_TYPES[name]
52
53- def fake_instance_create(values):
54- """ Stubs out the db.instance_create method """
55-
56- type_data = INSTANCE_TYPES[values['instance_type']]
57-
58- base_options = {
59- 'name': values['name'],
60- 'id': values['id'],
61- 'reservation_id': utils.generate_uid('r'),
62- 'image_id': values['image_id'],
63- 'kernel_id': values['kernel_id'],
64- 'ramdisk_id': values['ramdisk_id'],
65- 'state_description': 'scheduling',
66- 'user_id': values['user_id'],
67- 'project_id': values['project_id'],
68- 'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
69- 'instance_type': values['instance_type'],
70- 'memory_mb': type_data['memory_mb'],
71- 'mac_address': values['mac_address'],
72- 'vcpus': type_data['vcpus'],
73- 'local_gb': type_data['local_gb'],
74- 'os_type': values['os_type']}
75-
76- return FakeModel(base_options)
77-
78 def fake_network_get_by_instance(context, instance_id):
79- fields = {
80- 'bridge': 'xenbr0',
81- }
82- return FakeModel(fields)
83-
84- stubs.Set(db, 'instance_create', fake_instance_create)
85+ return FakeModel(network_fields)
86+
87+ def fake_network_get_all_by_instance(context, instance_id):
88+ return [FakeModel(network_fields)]
89+
90+ def fake_instance_get_fixed_address(context, instance_id):
91+ return FakeModel(fixed_ip_fields).address
92+
93+ def fake_instance_get_fixed_address_v6(context, instance_id):
94+ return FakeModel(fixed_ip_fields).address
95+
96+ def fake_fixed_ip_get_all_by_instance(context, instance_id):
97+ return [FakeModel(fixed_ip_fields)]
98+
99 stubs.Set(db, 'network_get_by_instance', fake_network_get_by_instance)
100 stubs.Set(db, 'instance_type_get_all', fake_instance_type_get_all)
101 stubs.Set(db, 'instance_type_get_by_name', fake_instance_type_get_by_name)
102+ stubs.Set(db, 'instance_get_fixed_address',
103+ fake_instance_get_fixed_address)
104+ stubs.Set(db, 'instance_get_fixed_address_v6',
105+ fake_instance_get_fixed_address_v6)
106+ stubs.Set(db, 'network_get_all_by_instance',
107+ fake_network_get_all_by_instance)
108+ stubs.Set(db, 'fixed_ip_get_all_by_instance',
109+ fake_fixed_ip_get_all_by_instance)
110
111=== added file 'nova/tests/fake_utils.py'
112--- nova/tests/fake_utils.py 1970-01-01 00:00:00 +0000
113+++ nova/tests/fake_utils.py 2011-03-25 13:43:59 +0000
114@@ -0,0 +1,106 @@
115+# vim: tabstop=4 shiftwidth=4 softtabstop=4
116+
117+# Copyright (c) 2011 Citrix Systems, Inc.
118+#
119+# Licensed under the Apache License, Version 2.0 (the "License"); you may
120+# not use this file except in compliance with the License. You may obtain
121+# a copy of the License at
122+#
123+# http://www.apache.org/licenses/LICENSE-2.0
124+#
125+# Unless required by applicable law or agreed to in writing, software
126+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
127+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
128+# License for the specific language governing permissions and limitations
129+# under the License.
130+
131+"""This module stubs out functions in nova.utils
132+"""
133+
134+import re
135+import types
136+
137+from eventlet import greenthread
138+
139+from nova import exception
140+from nova import log as logging
141+from nova import utils
142+
143+LOG = logging.getLogger('nova.tests.fake_utils')
144+
145+_fake_execute_repliers = []
146+_fake_execute_log = []
147+
148+
149+def fake_execute_get_log():
150+ return _fake_execute_log
151+
152+
153+def fake_execute_clear_log():
154+ global _fake_execute_log
155+ _fake_execute_log = []
156+
157+
158+def fake_execute_set_repliers(repliers):
159+ """Allows the client to configure replies to commands"""
160+ global _fake_execute_repliers
161+ _fake_execute_repliers = repliers
162+
163+
164+def fake_execute_default_reply_handler(*ignore_args, **ignore_kwargs):
165+ """A reply handler for commands that haven't been added to the reply
166+ list. Returns empty strings for stdout and stderr
167+ """
168+ return '', ''
169+
170+
171+def fake_execute(*cmd_parts, **kwargs):
172+ """This function stubs out execute, optionally executing
173+ a preconfigured function to return expected data
174+ """
175+ global _fake_execute_repliers
176+
177+ process_input = kwargs.get('process_input', None)
178+ addl_env = kwargs.get('addl_env', None)
179+ check_exit_code = kwargs.get('check_exit_code', 0)
180+ cmd_str = ' '.join(str(part) for part in cmd_parts)
181+
182+ LOG.debug(_("Faking execution of cmd (subprocess): %s"), cmd_str)
183+ _fake_execute_log.append(cmd_str)
184+
185+ reply_handler = fake_execute_default_reply_handler
186+
187+ for fake_replier in _fake_execute_repliers:
188+ if re.match(fake_replier[0], cmd_str):
189+ reply_handler = fake_replier[1]
190+ LOG.debug(_('Faked command matched %s') % fake_replier[0])
191+ break
192+
193+ if isinstance(reply_handler, basestring):
194+ # If the reply handler is a string, return it as stdout
195+ reply = reply_handler, ''
196+ else:
197+ try:
198+ # Alternative is a function, so call it
199+ reply = reply_handler(cmd_parts,
200+ process_input=process_input,
201+ addl_env=addl_env,
202+ check_exit_code=check_exit_code)
203+ except exception.ProcessExecutionError as e:
204+ LOG.debug(_('Faked command raised an exception %s' % str(e)))
205+ raise
206+
207+ stdout = reply[0]
208+ stderr = reply[1]
209+ LOG.debug(_("Reply to faked command is stdout='%(stdout)s' "
210+ "stderr='%(stderr)s'") % locals())
211+
212+ # Replicate the sleep call in the real function
213+ greenthread.sleep(0)
214+ return reply
215+
216+
217+def stub_out_utils_execute(stubs):
218+ fake_execute_set_repliers([])
219+ fake_execute_clear_log()
220+ stubs.Set(utils, 'execute', fake_execute)
221
222=== modified file 'nova/tests/test_xenapi.py'
223--- nova/tests/test_xenapi.py 2011-03-21 18:21:26 +0000
224+++ nova/tests/test_xenapi.py 2011-03-25 13:43:59 +0000
225@@ -19,11 +19,15 @@
226 """
227
228 import functools
229+import os
230+import re
231 import stubout
232+import ast
233
234 from nova import db
235 from nova import context
236 from nova import flags
237+from nova import log as logging
238 from nova import test
239 from nova import utils
240 from nova.auth import manager
241@@ -38,6 +42,9 @@
242 from nova.tests.db import fakes as db_fakes
243 from nova.tests.xenapi import stubs
244 from nova.tests.glance import stubs as glance_stubs
245+from nova.tests import fake_utils
246+
247+LOG = logging.getLogger('nova.tests.test_xenapi')
248
249 FLAGS = flags.FLAGS
250
251@@ -64,13 +71,14 @@
252 def setUp(self):
253 super(XenAPIVolumeTestCase, self).setUp()
254 self.stubs = stubout.StubOutForTesting()
255+ self.context = context.RequestContext('fake', 'fake', False)
256 FLAGS.target_host = '127.0.0.1'
257 FLAGS.xenapi_connection_url = 'test_url'
258 FLAGS.xenapi_connection_password = 'test_pass'
259 db_fakes.stub_out_db_instance_api(self.stubs)
260 stubs.stub_out_get_target(self.stubs)
261 xenapi_fake.reset()
262- self.values = {'name': 1, 'id': 1,
263+ self.values = {'id': 1,
264 'project_id': 'fake',
265 'user_id': 'fake',
266 'image_id': 1,
267@@ -90,7 +98,7 @@
268 vol['availability_zone'] = FLAGS.storage_availability_zone
269 vol['status'] = "creating"
270 vol['attach_status'] = "detached"
271- return db.volume_create(context.get_admin_context(), vol)
272+ return db.volume_create(self.context, vol)
273
274 def test_create_iscsi_storage(self):
275 """ This shows how to test helper classes' methods """
276@@ -126,7 +134,7 @@
277 stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
278 conn = xenapi_conn.get_connection(False)
279 volume = self._create_volume()
280- instance = db.instance_create(self.values)
281+ instance = db.instance_create(self.context, self.values)
282 vm = xenapi_fake.create_vm(instance.name, 'Running')
283 result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
284
285@@ -146,7 +154,7 @@
286 stubs.FakeSessionForVolumeFailedTests)
287 conn = xenapi_conn.get_connection(False)
288 volume = self._create_volume()
289- instance = db.instance_create(self.values)
290+ instance = db.instance_create(self.context, self.values)
291 xenapi_fake.create_vm(instance.name, 'Running')
292 self.assertRaises(Exception,
293 conn.attach_volume,
294@@ -175,8 +183,9 @@
295 self.project = self.manager.create_project('fake', 'fake', 'fake')
296 self.network = utils.import_object(FLAGS.network_manager)
297 self.stubs = stubout.StubOutForTesting()
298- FLAGS.xenapi_connection_url = 'test_url'
299- FLAGS.xenapi_connection_password = 'test_pass'
300+ self.flags(xenapi_connection_url='test_url',
301+ xenapi_connection_password='test_pass',
302+ instance_name_template='%d')
303 xenapi_fake.reset()
304 xenapi_fake.create_local_srs()
305 db_fakes.stub_out_db_instance_api(self.stubs)
306@@ -189,6 +198,8 @@
307 stubs.stub_out_vm_methods(self.stubs)
308 glance_stubs.stubout_glance_client(self.stubs,
309 glance_stubs.FakeGlance)
310+ fake_utils.stub_out_utils_execute(self.stubs)
311+ self.context = context.RequestContext('fake', 'fake', False)
312 self.conn = xenapi_conn.get_connection(False)
313
314 def test_list_instances_0(self):
315@@ -213,7 +224,7 @@
316 if not vm_rec["is_control_domain"]:
317 vm_labels.append(vm_rec["name_label"])
318
319- self.assertEquals(vm_labels, [1])
320+ self.assertEquals(vm_labels, ['1'])
321
322 def ensure_vbd_was_torn_down():
323 vbd_labels = []
324@@ -221,7 +232,7 @@
325 vbd_rec = xenapi_fake.get_record('VBD', vbd_ref)
326 vbd_labels.append(vbd_rec["vm_name_label"])
327
328- self.assertEquals(vbd_labels, [1])
329+ self.assertEquals(vbd_labels, ['1'])
330
331 def ensure_vdi_was_torn_down():
332 for vdi_ref in xenapi_fake.get_all('VDI'):
333@@ -238,11 +249,10 @@
334
335 def create_vm_record(self, conn, os_type):
336 instances = conn.list_instances()
337- self.assertEquals(instances, [1])
338+ self.assertEquals(instances, ['1'])
339
340 # Get Nova record for VM
341 vm_info = conn.get_info(1)
342-
343 # Get XenAPI record for VM
344 vms = [rec for ref, rec
345 in xenapi_fake.get_all_records('VM').iteritems()
346@@ -251,7 +261,7 @@
347 self.vm_info = vm_info
348 self.vm = vm
349
350- def check_vm_record(self, conn):
351+ def check_vm_record(self, conn, check_injection=False):
352 # Check that m1.large above turned into the right thing.
353 instance_type = db.instance_type_get_by_name(conn, 'm1.large')
354 mem_kib = long(instance_type['memory_mb']) << 10
355@@ -271,6 +281,25 @@
356 # Check that the VM is running according to XenAPI.
357 self.assertEquals(self.vm['power_state'], 'Running')
358
359+ if check_injection:
360+ xenstore_data = self.vm['xenstore_data']
361+ key = 'vm-data/networking/aabbccddeeff'
362+ xenstore_value = xenstore_data[key]
363+ tcpip_data = ast.literal_eval(xenstore_value)
364+ self.assertEquals(tcpip_data, {
365+ 'label': 'test_network',
366+ 'broadcast': '10.0.0.255',
367+ 'ips': [{'ip': '10.0.0.3',
368+ 'netmask':'255.255.255.0',
369+ 'enabled':'1'}],
370+ 'ip6s': [{'ip': 'fe80::a8bb:ccff:fedd:eeff',
371+ 'netmask': '120',
372+ 'enabled': '1',
373+ 'gateway': 'fe80::a00:1'}],
374+ 'mac': 'aa:bb:cc:dd:ee:ff',
375+ 'dns': ['10.0.0.2'],
376+ 'gateway': '10.0.0.1'})
377+
378 def check_vm_params_for_windows(self):
379 self.assertEquals(self.vm['platform']['nx'], 'true')
380 self.assertEquals(self.vm['HVM_boot_params'], {'order': 'dc'})
381@@ -304,10 +333,10 @@
382 self.assertEquals(self.vm['HVM_boot_policy'], '')
383
384 def _test_spawn(self, image_id, kernel_id, ramdisk_id,
385- instance_type="m1.large", os_type="linux"):
386- stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
387- values = {'name': 1,
388- 'id': 1,
389+ instance_type="m1.large", os_type="linux",
390+ check_injection=False):
391+ stubs.stubout_loopingcall_start(self.stubs)
392+ values = {'id': 1,
393 'project_id': self.project.id,
394 'user_id': self.user.id,
395 'image_id': image_id,
396@@ -316,12 +345,10 @@
397 'instance_type': instance_type,
398 'mac_address': 'aa:bb:cc:dd:ee:ff',
399 'os_type': os_type}
400-
401- conn = xenapi_conn.get_connection(False)
402- instance = db.instance_create(values)
403- conn.spawn(instance)
404- self.create_vm_record(conn, os_type)
405- self.check_vm_record(conn)
406+ instance = db.instance_create(self.context, values)
407+ self.conn.spawn(instance)
408+ self.create_vm_record(self.conn, os_type)
409+ self.check_vm_record(self.conn, check_injection)
410
411 def test_spawn_not_enough_memory(self):
412 FLAGS.xenapi_image_service = 'glance'
413@@ -362,6 +389,85 @@
414 glance_stubs.FakeGlance.IMAGE_RAMDISK)
415 self.check_vm_params_for_linux_with_external_kernel()
416
417+ def test_spawn_netinject_file(self):
418+ FLAGS.xenapi_image_service = 'glance'
419+ db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
420+
421+ self._tee_executed = False
422+
423+ def _tee_handler(cmd, **kwargs):
424+ input = kwargs.get('process_input', None)
425+ self.assertNotEqual(input, None)
426+ config = [line.strip() for line in input.split("\n")]
427+ # Find the start of eth0 configuration and check it
428+ index = config.index('auto eth0')
429+ self.assertEquals(config[index + 1:index + 8], [
430+ 'iface eth0 inet static',
431+ 'address 10.0.0.3',
432+ 'netmask 255.255.255.0',
433+ 'broadcast 10.0.0.255',
434+ 'gateway 10.0.0.1',
435+ 'dns-nameservers 10.0.0.2',
436+ ''])
437+ self._tee_executed = True
438+ return '', ''
439+
440+ fake_utils.fake_execute_set_repliers([
441+ # Capture the sudo tee .../etc/network/interfaces command
442+ (r'(sudo\s+)?tee.*interfaces', _tee_handler),
443+ ])
444+ FLAGS.xenapi_image_service = 'glance'
445+ self._test_spawn(glance_stubs.FakeGlance.IMAGE_MACHINE,
446+ glance_stubs.FakeGlance.IMAGE_KERNEL,
447+ glance_stubs.FakeGlance.IMAGE_RAMDISK,
448+ check_injection=True)
449+ self.assertTrue(self._tee_executed)
450+
451+ def test_spawn_netinject_xenstore(self):
452+ FLAGS.xenapi_image_service = 'glance'
453+ db_fakes.stub_out_db_instance_api(self.stubs, injected=True)
454+
455+ self._tee_executed = False
456+
457+ def _mount_handler(cmd, *ignore_args, **ignore_kwargs):
458+ # When mounting, create real files under the mountpoint to simulate
459+ # files in the mounted filesystem
460+
461+ # mount point will be the last item of the command list
462+ self._tmpdir = cmd[len(cmd) - 1]
463+ LOG.debug(_('Creating files in %s to simulate guest agent' %
464+ self._tmpdir))
465+ os.makedirs(os.path.join(self._tmpdir, 'usr', 'sbin'))
466+ # Touch the file using open
467+ open(os.path.join(self._tmpdir, 'usr', 'sbin',
468+ 'xe-update-networking'), 'w').close()
469+ return '', ''
470+
471+ def _umount_handler(cmd, *ignore_args, **ignore_kwargs):
472+ # Umount would normally make files in the mounted filesystem
473+ # disappear, so do that here
474+ LOG.debug(_('Removing simulated guest agent files in %s' %
475+ self._tmpdir))
476+ os.remove(os.path.join(self._tmpdir, 'usr', 'sbin',
477+ 'xe-update-networking'))
478+ os.rmdir(os.path.join(self._tmpdir, 'usr', 'sbin'))
479+ os.rmdir(os.path.join(self._tmpdir, 'usr'))
480+ return '', ''
481+
482+ def _tee_handler(cmd, *ignore_args, **ignore_kwargs):
483+ self._tee_executed = True
484+ return '', ''
485+
486+ fake_utils.fake_execute_set_repliers([
487+ (r'(sudo\s+)?mount', _mount_handler),
488+ (r'(sudo\s+)?umount', _umount_handler),
489+ (r'(sudo\s+)?tee.*interfaces', _tee_handler)])
490+ self._test_spawn(1, 2, 3, check_injection=True)
491+
492+ # tee must not run in this case, where an injection-capable
493+ # guest agent is detected
494+ self.assertFalse(self._tee_executed)
495+
496 def test_spawn_with_network_qos(self):
497 self._create_instance()
498 for vif_ref in xenapi_fake.get_all('VIF'):
499@@ -371,6 +477,7 @@
500 str(4 * 1024))
501
502 def test_rescue(self):
503+ self.flags(xenapi_inject_image=False)
504 instance = self._create_instance()
505 conn = xenapi_conn.get_connection(False)
506 conn.rescue(instance, None)
507@@ -391,8 +498,8 @@
508
509 def _create_instance(self):
510 """Creates and spawns a test instance"""
511+ stubs.stubout_loopingcall_start(self.stubs)
512 values = {
513- 'name': 1,
514 'id': 1,
515 'project_id': self.project.id,
516 'user_id': self.user.id,
517@@ -402,7 +509,7 @@
518 'instance_type': 'm1.large',
519 'mac_address': 'aa:bb:cc:dd:ee:ff',
520 'os_type': 'linux'}
521- instance = db.instance_create(values)
522+ instance = db.instance_create(self.context, values)
523 self.conn.spawn(instance)
524 return instance
525
526@@ -447,21 +554,26 @@
527 db_fakes.stub_out_db_instance_api(self.stubs)
528 stubs.stub_out_get_target(self.stubs)
529 xenapi_fake.reset()
530+ xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
531 self.manager = manager.AuthManager()
532 self.user = self.manager.create_user('fake', 'fake', 'fake',
533 admin=True)
534 self.project = self.manager.create_project('fake', 'fake', 'fake')
535- self.values = {'name': 1, 'id': 1,
536+ self.context = context.RequestContext('fake', 'fake', False)
537+ self.values = {'id': 1,
538 'project_id': self.project.id,
539 'user_id': self.user.id,
540 'image_id': 1,
541 'kernel_id': None,
542 'ramdisk_id': None,
543+ 'local_gb': 5,
544 'instance_type': 'm1.large',
545 'mac_address': 'aa:bb:cc:dd:ee:ff',
546 'os_type': 'linux'}
547
548+ fake_utils.stub_out_utils_execute(self.stubs)
549 stubs.stub_out_migration_methods(self.stubs)
550+ stubs.stubout_get_this_vm_uuid(self.stubs)
551 glance_stubs.stubout_glance_client(self.stubs,
552 glance_stubs.FakeGlance)
553
554@@ -472,14 +584,15 @@
555 self.stubs.UnsetAll()
556
557 def test_migrate_disk_and_power_off(self):
558- instance = db.instance_create(self.values)
559+ instance = db.instance_create(self.context, self.values)
560 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
561 conn = xenapi_conn.get_connection(False)
562 conn.migrate_disk_and_power_off(instance, '127.0.0.1')
563
564 def test_finish_resize(self):
565- instance = db.instance_create(self.values)
566+ instance = db.instance_create(self.context, self.values)
567 stubs.stubout_session(self.stubs, stubs.FakeSessionForMigrationTests)
568+ stubs.stubout_loopingcall_start(self.stubs)
569 conn = xenapi_conn.get_connection(False)
570 conn.finish_resize(instance, dict(base_copy='hurr', cow='durr'))
571
572
573=== modified file 'nova/tests/xenapi/stubs.py'
574--- nova/tests/xenapi/stubs.py 2011-03-24 04:09:51 +0000
575+++ nova/tests/xenapi/stubs.py 2011-03-25 13:43:59 +0000
576@@ -21,6 +21,7 @@
577 from nova.virt.xenapi import volume_utils
578 from nova.virt.xenapi import vm_utils
579 from nova.virt.xenapi import vmops
580+from nova import utils
581
582
583 def stubout_instance_snapshot(stubs):
584@@ -137,14 +138,17 @@
585 stubs.Set(vm_utils, '_is_vdi_pv', f)
586
587
588+def stubout_loopingcall_start(stubs):
589+ def fake_start(self, interval, now=True):
590+ self.f(*self.args, **self.kw)
591+ stubs.Set(utils.LoopingCall, 'start', fake_start)
592+
593+
594 class FakeSessionForVMTests(fake.SessionBase):
595 """ Stubs out a XenAPISession for VM tests """
596 def __init__(self, uri):
597 super(FakeSessionForVMTests, self).__init__(uri)
598
599- def network_get_all_records_where(self, _1, _2):
600- return self.xenapi.network.get_all_records()
601-
602 def host_call_plugin(self, _1, _2, _3, _4, _5):
603 sr_ref = fake.get_all('SR')[0]
604 vdi_ref = fake.create_vdi('', False, sr_ref, False)
605@@ -196,7 +200,7 @@
606 pass
607
608 def fake_spawn_rescue(self, inst):
609- pass
610+ inst._rescue = False
611
612 stubs.Set(vmops.VMOps, "_shutdown", fake_shutdown)
613 stubs.Set(vmops.VMOps, "_acquire_bootlock", fake_acquire_bootlock)
614
615=== modified file 'nova/virt/disk.py'
616--- nova/virt/disk.py 2011-03-14 17:59:41 +0000
617+++ nova/virt/disk.py 2011-03-25 13:43:59 +0000
618@@ -26,6 +26,8 @@
619 import tempfile
620 import time
621
622+from nova import context
623+from nova import db
624 from nova import exception
625 from nova import flags
626 from nova import log as logging
627@@ -38,6 +40,9 @@
628 'minimum size in bytes of root partition')
629 flags.DEFINE_integer('block_size', 1024 * 1024 * 256,
630 'block_size to use for dd')
631+flags.DEFINE_string('injected_network_template',
632+ utils.abspath('virt/interfaces.template'),
633+ 'Template file for injected network')
634 flags.DEFINE_integer('timeout_nbd', 10,
635 'time to wait for a NBD device coming up')
636 flags.DEFINE_integer('max_nbd_devices', 16,
637@@ -97,11 +102,7 @@
638 % err)
639
640 try:
641- if key:
642- # inject key file
643- _inject_key_into_fs(key, tmpdir)
644- if net:
645- _inject_net_into_fs(net, tmpdir)
646+ inject_data_into_fs(tmpdir, key, net, utils.execute)
647 finally:
648 # unmount device
649 utils.execute('sudo', 'umount', mapped_device)
650@@ -164,7 +165,18 @@
651 _DEVICES.append(device)
652
653
654-def _inject_key_into_fs(key, fs):
655+def inject_data_into_fs(fs, key, net, execute):
656+ """Injects data into a filesystem already mounted by the caller.
657+ Virt connections can call this directly if they mount their fs
658+ in a different way to inject_data
659+ """
660+ if key:
661+ _inject_key_into_fs(key, fs, execute=execute)
662+ if net:
663+ _inject_net_into_fs(net, fs, execute=execute)
664+
665+
666+def _inject_key_into_fs(key, fs, execute=None):
667 """Add the given public ssh key to root's authorized_keys.
668
669 key is an ssh key string.
670@@ -179,7 +191,7 @@
671 process_input='\n' + key.strip() + '\n')
672
673
674-def _inject_net_into_fs(net, fs):
675+def _inject_net_into_fs(net, fs, execute=None):
676 """Inject /etc/network/interfaces into the filesystem rooted at fs.
677
678 net is the contents of /etc/network/interfaces.
679
680=== modified file 'nova/virt/libvirt_conn.py'
681--- nova/virt/libvirt_conn.py 2011-03-24 21:26:11 +0000
682+++ nova/virt/libvirt_conn.py 2011-03-25 13:43:59 +0000
683@@ -76,9 +76,7 @@
684 flags.DEFINE_string('rescue_image_id', 'ami-rescue', 'Rescue ami image')
685 flags.DEFINE_string('rescue_kernel_id', 'aki-rescue', 'Rescue aki image')
686 flags.DEFINE_string('rescue_ramdisk_id', 'ari-rescue', 'Rescue ari image')
687-flags.DEFINE_string('injected_network_template',
688- utils.abspath('virt/interfaces.template'),
689- 'Template file for injected network')
690+
691 flags.DEFINE_string('libvirt_xml_template',
692 utils.abspath('virt/libvirt.xml.template'),
693 'Libvirt XML Template')
694
695=== modified file 'nova/virt/xenapi/fake.py'
696--- nova/virt/xenapi/fake.py 2011-03-03 19:13:15 +0000
697+++ nova/virt/xenapi/fake.py 2011-03-25 13:43:59 +0000
698@@ -162,6 +162,12 @@
699 vbd_rec['vm_name_label'] = vm_name_label
700
701
702+def after_VM_create(vm_ref, vm_rec):
703+ """Create read-only fields in the VM record."""
704+ if 'is_control_domain' not in vm_rec:
705+ vm_rec['is_control_domain'] = False
706+
707+
708 def create_pbd(config, host_ref, sr_ref, attached):
709 return _create_object('PBD', {
710 'device-config': config,
711@@ -286,6 +292,25 @@
712 rec['currently_attached'] = False
713 rec['device'] = ''
714
715+ def VM_get_xenstore_data(self, _1, vm_ref):
716+ return _db_content['VM'][vm_ref].get('xenstore_data', '')
717+
718+ def VM_remove_from_xenstore_data(self, _1, vm_ref, key):
719+ db_ref = _db_content['VM'][vm_ref]
720+ if not 'xenstore_data' in db_ref:
721+ return
722+ db_ref['xenstore_data'][key] = None
723+
724+ def network_get_all_records_where(self, _1, _2):
725+ # TODO (salvatore-orlando):filter table on _2
726+ return _db_content['network']
727+
728+ def VM_add_to_xenstore_data(self, _1, vm_ref, key, value):
729+ db_ref = _db_content['VM'][vm_ref]
730+ if not 'xenstore_data' in db_ref:
731+ db_ref['xenstore_data'] = {}
732+ db_ref['xenstore_data'][key] = value
733+
734 def host_compute_free_memory(self, _1, ref):
735 #Always return 12GB available
736 return 12 * 1024 * 1024 * 1024
737@@ -376,7 +401,6 @@
738 def _getter(self, name, params):
739 self._check_session(params)
740 (cls, func) = name.split('.')
741-
742 if func == 'get_all':
743 self._check_arg_count(params, 1)
744 return get_all(cls)
745@@ -399,10 +423,11 @@
746 if len(params) == 2:
747 field = func[len('get_'):]
748 ref = params[1]
749-
750- if (ref in _db_content[cls] and
751- field in _db_content[cls][ref]):
752- return _db_content[cls][ref][field]
753+ if (ref in _db_content[cls]):
754+ if (field in _db_content[cls][ref]):
755+ return _db_content[cls][ref][field]
756+ else:
757+ raise Failure(['HANDLE_INVALID', cls, ref])
758
759 LOG.debug(_('Raising NotImplemented'))
760 raise NotImplementedError(
761@@ -476,7 +501,7 @@
762 def _check_session(self, params):
763 if (self._session is None or
764 self._session not in _db_content['session']):
765- raise Failure(['HANDLE_INVALID', 'session', self._session])
766+ raise Failure(['HANDLE_INVALID', 'session', self._session])
767 if len(params) == 0 or params[0] != self._session:
768 LOG.debug(_('Raising NotImplemented'))
769 raise NotImplementedError('Call to XenAPI without using .xenapi')
770
771=== modified file 'nova/virt/xenapi/vm_utils.py'
772--- nova/virt/xenapi/vm_utils.py 2011-03-24 22:31:39 +0000
773+++ nova/virt/xenapi/vm_utils.py 2011-03-25 13:43:59 +0000
774@@ -22,6 +22,7 @@
775 import os
776 import pickle
777 import re
778+import tempfile
779 import time
780 import urllib
781 import uuid
782@@ -29,6 +30,8 @@
783
784 from eventlet import event
785 import glance.client
786+from nova import context
787+from nova import db
788 from nova import exception
789 from nova import flags
790 from nova import log as logging
791@@ -36,6 +39,7 @@
792 from nova.auth.manager import AuthManager
793 from nova.compute import instance_types
794 from nova.compute import power_state
795+from nova.virt import disk
796 from nova.virt import images
797 from nova.virt.xenapi import HelperBase
798 from nova.virt.xenapi.volume_utils import StorageError
799@@ -670,6 +674,23 @@
800 return None
801
802 @classmethod
803+ def preconfigure_instance(cls, session, instance, vdi_ref, network_info):
804+ """Makes alterations to the image before launching as part of spawn.
805+ """
806+
807+ # As mounting the image VDI is expensive, we only want to do it once,
808+ # if at all, so determine whether it's required first, and then do
809+ # everything
810+ mount_required = False
811+ key, net = _prepare_injectables(instance, network_info)
812+ mount_required = key or net
813+ if not mount_required:
814+ return
815+
816+ with_vdi_attached_here(session, vdi_ref, False,
817+ lambda dev: _mounted_processing(dev, key, net))
818+
819+ @classmethod
820 def lookup_kernel_ramdisk(cls, session, vm):
821 vm_rec = session.get_xenapi().VM.get_record(vm)
822 if 'PV_kernel' in vm_rec and 'PV_ramdisk' in vm_rec:
823@@ -927,6 +948,7 @@
824 e.details[0] == 'DEVICE_DETACH_REJECTED'):
825 LOG.debug(_('VBD.unplug rejected: retrying...'))
826 time.sleep(1)
827+ LOG.debug(_('Not sleeping anymore!'))
828 elif (len(e.details) > 0 and
829 e.details[0] == 'DEVICE_ALREADY_DETACHED'):
830 LOG.debug(_('VBD.unplug successful eventually.'))
831@@ -1002,3 +1024,114 @@
832 def get_name_label_for_image(image):
833 # TODO(sirp): This should eventually be the URI for the Glance image
834 return _('Glance image %s') % image
835+
836+
837+def _mount_filesystem(dev_path, dir):
838+ """mounts the device specified by dev_path in dir"""
839+ try:
840+ out, err = utils.execute('sudo', 'mount',
841+ '-t', 'ext2,ext3',
842+ dev_path, dir)
843+ except exception.ProcessExecutionError as e:
844+ err = str(e)
845+ return err
846+
847+
848+def _find_guest_agent(base_dir, agent_rel_path):
849+ """
850+ tries to locate a guest agent at the path
851+ specified by agent_rel_path
852+ """
853+ agent_path = os.path.join(base_dir, agent_rel_path)
854+ if os.path.isfile(agent_path):
855+ # The presence of the guest agent
856+ # file indicates that this instance can
857+ # reconfigure the network from xenstore data,
858+ # so manipulation of files in /etc is not
859+ # required
860+ LOG.info(_('XenServer tools installed in this '
861+ 'image are capable of network injection. '
862+                   'Networking files will not be '
863+ 'manipulated'))
864+ return True
865+ xe_daemon_filename = os.path.join(base_dir,
866+ 'usr', 'sbin', 'xe-daemon')
867+ if os.path.isfile(xe_daemon_filename):
868+ LOG.info(_('XenServer tools are present '
869+ 'in this image but are not capable '
870+ 'of network injection'))
871+ else:
872+ LOG.info(_('XenServer tools are not '
873+ 'installed in this image'))
874+ return False
875+
876+
877+def _mounted_processing(device, key, net):
878+ """Callback which runs with the image VDI attached"""
879+
880+    dev_path = '/dev/' + device + '1'  # NB: partition 1 hardcoded
881+ tmpdir = tempfile.mkdtemp()
882+ try:
883+ # Mount only Linux filesystems, to avoid disturbing NTFS images
884+ err = _mount_filesystem(dev_path, tmpdir)
885+ if not err:
886+ try:
887+ # This try block ensures that the umount occurs
888+ if not _find_guest_agent(tmpdir, FLAGS.xenapi_agent_path):
889+ LOG.info(_('Manipulating interface files '
890+ 'directly'))
891+ disk.inject_data_into_fs(tmpdir, key, net,
892+ utils.execute)
893+ finally:
894+ utils.execute('sudo', 'umount', dev_path)
895+ else:
896+ LOG.info(_('Failed to mount filesystem (expected for '
897+ 'non-linux instances): %s') % err)
898+ finally:
899+ # remove temporary directory
900+ os.rmdir(tmpdir)
901+
902+
903+def _prepare_injectables(inst, networks_info):
904+ """
905+    Prepares the SSH key and the network configuration file to be
906+ injected into the disk image
907+ """
908+    # do the import here - Cheetah.Template will be loaded
909+    # only if injection is performed
910+ from Cheetah import Template as t
911+ template = t.Template
912+ template_data = open(FLAGS.injected_network_template).read()
913+
914+ key = str(inst['key_data'])
915+ net = None
916+ if networks_info:
917+ ifc_num = -1
918+ interfaces_info = []
919+ for (network_ref, info) in networks_info:
920+ ifc_num += 1
921+ if not network_ref['injected']:
922+ continue
923+
924+                ip_v4 = ip_v6 = dns = None
925+ if 'ips' in info and len(info['ips']) > 0:
926+ ip_v4 = info['ips'][0]
927+ if 'ip6s' in info and len(info['ip6s']) > 0:
928+ ip_v6 = info['ip6s'][0]
929+ if len(info['dns']) > 0:
930+ dns = info['dns'][0]
931+ interface_info = {'name': 'eth%d' % ifc_num,
932+ 'address': ip_v4 and ip_v4['ip'] or '',
933+ 'netmask': ip_v4 and ip_v4['netmask'] or '',
934+ 'gateway': info['gateway'],
935+ 'broadcast': info['broadcast'],
936+ 'dns': dns,
937+ 'address_v6': ip_v6 and ip_v6['ip'] or '',
938+ 'netmask_v6': ip_v6 and ip_v6['netmask'] or '',
939+ 'gateway_v6': ip_v6 and ip_v6['gateway'] or '',
940+ 'use_ipv6': FLAGS.use_ipv6}
941+ interfaces_info.append(interface_info)
942+ net = str(template(template_data,
943+ searchList=[{'interfaces': interfaces_info,
944+ 'use_ipv6': FLAGS.use_ipv6}]))
945+ return key, net
946
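
The interface-loop in `_prepare_injectables` above can be illustrated with a small standalone sketch. This is not the nova code itself: the network data is hypothetical, and the Cheetah template is replaced by a plain format string so the example is self-contained.

```python
# Hypothetical network data in the (network_ref, info) shape consumed
# by _prepare_injectables; non-injected networks are skipped.
networks_info = [
    ({'injected': True},
     {'ips': [{'ip': '10.0.0.2', 'netmask': '255.255.255.0'}],
      'ip6s': [],
      'dns': ['8.8.8.8'],
      'gateway': '10.0.0.1',
      'broadcast': '10.0.0.255'}),
    ({'injected': False}, {}),
]

interfaces_info = []
for ifc_num, (network_ref, info) in enumerate(networks_info):
    if not network_ref['injected']:
        continue
    # Guard every optional field so empty lists never raise
    ip_v4 = info['ips'][0] if info.get('ips') else None
    dns = info['dns'][0] if info.get('dns') else None
    interfaces_info.append({'name': 'eth%d' % ifc_num,
                            'address': ip_v4['ip'] if ip_v4 else '',
                            'netmask': ip_v4['netmask'] if ip_v4 else '',
                            'gateway': info['gateway'],
                            'broadcast': info['broadcast'],
                            'dns': dns})

# Plain string in place of the Cheetah interfaces template
net = '\n'.join('iface %(name)s inet static\n'
                '    address %(address)s\n'
                '    netmask %(netmask)s\n'
                '    gateway %(gateway)s' % ifc
                for ifc in interfaces_info)
print(net)
```

Note that only the network marked `injected` produces a stanza; this mirrors the `continue` in the real loop.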
947=== modified file 'nova/virt/xenapi/vmops.py'
948--- nova/virt/xenapi/vmops.py 2011-03-24 21:11:48 +0000
949+++ nova/virt/xenapi/vmops.py 2011-03-25 13:43:59 +0000
950@@ -33,6 +33,7 @@
951 from nova import log as logging
952 from nova import exception
953 from nova import utils
954+from nova import flags
955
956 from nova.auth.manager import AuthManager
957 from nova.compute import power_state
958@@ -43,6 +44,7 @@
959
960 XenAPI = None
961 LOG = logging.getLogger("nova.virt.xenapi.vmops")
962+FLAGS = flags.FLAGS
963
964
965 class VMOps(object):
966@@ -53,7 +55,6 @@
967 self.XenAPI = session.get_imported_xenapi()
968 self._session = session
969 self.poll_rescue_last_ran = None
970-
971 VMHelper.XenAPI = self.XenAPI
972
973 def list_instances(self):
974@@ -168,6 +169,12 @@
975 # create it now. This goes away once nova-multi-nic hits.
976 if network_info is None:
977 network_info = self._get_network_info(instance)
978+
979+        # Alter the image before VM start, e.g. for network injection
980+ if FLAGS.xenapi_inject_image:
981+ VMHelper.preconfigure_instance(self._session, instance,
982+ vdi_ref, network_info)
983+
984 self.create_vifs(vm_ref, network_info)
985 self.inject_network_info(instance, vm_ref, network_info)
986 return vm_ref
987@@ -237,26 +244,17 @@
988 obj = None
989 try:
990 # check for opaque ref
991- obj = self._session.get_xenapi().VM.get_record(instance_or_vm)
992+ obj = self._session.get_xenapi().VM.get_uuid(instance_or_vm)
993 return instance_or_vm
994 except self.XenAPI.Failure:
995- # wasn't an opaque ref, must be an instance name
996+            # wasn't an opaque ref, so treat it as an instance name
997 instance_name = instance_or_vm
998
999 # if instance_or_vm is an int/long it must be instance id
1000 elif isinstance(instance_or_vm, (int, long)):
1001 ctx = context.get_admin_context()
1002- try:
1003- instance_obj = db.instance_get(ctx, instance_or_vm)
1004- instance_name = instance_obj.name
1005- except exception.NotFound:
1006- # The unit tests screw this up, as they use an integer for
1007- # the vm name. I'd fix that up, but that's a matter for
1008- # another bug report. So for now, just try with the passed
1009- # value
1010- instance_name = instance_or_vm
1011-
1012- # otherwise instance_or_vm is an instance object
1013+ instance_obj = db.instance_get(ctx, instance_or_vm)
1014+ instance_name = instance_obj.name
1015 else:
1016 instance_name = instance_or_vm.name
1017 vm_ref = VMHelper.lookup(self._session, instance_name)
1018@@ -692,7 +690,6 @@
1019 vm_ref = VMHelper.lookup(self._session, instance.name)
1020 self._shutdown(instance, vm_ref)
1021 self._acquire_bootlock(vm_ref)
1022-
1023 instance._rescue = True
1024 self.spawn_rescue(instance)
1025 rescue_vm_ref = VMHelper.lookup(self._session, instance.name)
1026@@ -816,6 +813,7 @@
1027 info = {
1028 'label': network['label'],
1029 'gateway': network['gateway'],
1030+ 'broadcast': network['broadcast'],
1031 'mac': instance.mac_address,
1032 'rxtx_cap': flavor['rxtx_cap'],
1033 'dns': [network['dns']],
1034
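
The tightened lookup in `_get_vm_opaque_ref` above amounts to a three-way resolution order. The sketch below is a simplified illustration, not the nova code: `is_opaque_ref`, `name_for_id` and `lookup_by_name` are hypothetical stand-ins for the real `VM.get_uuid`, `db.instance_get` and `VMHelper.lookup` calls (written in Python 3 style).

```python
def resolve_vm(instance_or_vm, is_opaque_ref, name_for_id, lookup_by_name):
    """Resolve a VM reference from an opaque ref, an instance name,
    an instance id, or an instance object, in that order."""
    if isinstance(instance_or_vm, str):
        if is_opaque_ref(instance_or_vm):
            return instance_or_vm          # already a VM opaque ref
        name = instance_or_vm              # otherwise an instance name
    elif isinstance(instance_or_vm, int):
        # an int must be an instance id; db NotFound now propagates
        # instead of being silently swallowed
        name = name_for_id(instance_or_vm)
    else:
        name = instance_or_vm.name         # an instance object
    return lookup_by_name(name)
```

The notable behavioural change in the diff is the removed `except exception.NotFound` fallback: a bad instance id now fails loudly rather than being retried as a name.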
1035=== modified file 'nova/virt/xenapi_conn.py'
1036--- nova/virt/xenapi_conn.py 2011-03-24 08:56:06 +0000
1037+++ nova/virt/xenapi_conn.py 2011-03-25 13:43:59 +0000
1038@@ -107,8 +107,22 @@
1039 5,
1040 'Max number of times to poll for VHD to coalesce.'
1041 ' Used only if connection_type=xenapi.')
1042+flags.DEFINE_bool('xenapi_inject_image',
1043+ True,
1044+ 'Specifies whether an attempt to inject network/key'
1045+ ' data into the disk image should be made.'
1046+ ' Used only if connection_type=xenapi.')
1047+flags.DEFINE_string('xenapi_agent_path',
1048+ 'usr/sbin/xe-update-networking',
1049+                    'Specifies the path at which the xenapi guest agent'
1050+                    ' should be located. If the agent is present,'
1051+                    ' network configuration is not injected into the image.'
1052+                    ' Used only if connection_type=xenapi'
1053+                    ' and xenapi_inject_image=True.')
1054+
1055 flags.DEFINE_string('xenapi_sr_base_path', '/var/run/sr-mount',
1056 'Base path to the storage repository')
1057+
1058 flags.DEFINE_string('target_host',
1059 None,
1060 'iSCSI Target Host')
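
The cleanup discipline in `_mounted_processing` (vm_utils.py above) is worth spelling out: the inner try/finally guarantees the umount, the outer one guarantees the temporary mount point is removed, and a mount failure is tolerated because it is expected for non-Linux (e.g. NTFS) images. A hedged sketch, with `mount`, `unmount` and `process` as hypothetical callables standing in for the sudo mount/umount invocations and the injection step:

```python
import os
import tempfile


def run_mounted(mount, unmount, process):
    """Run process() against a freshly created mount point, guaranteeing
    unmount and mount-point removal even if process() raises."""
    tmpdir = tempfile.mkdtemp()
    try:
        err = mount(tmpdir)          # returns an error string, or falsy
        if not err:
            try:
                process(tmpdir)
            finally:
                unmount(tmpdir)      # always undone once mounted
        else:
            # expected for non-Linux images; log and move on
            print('Failed to mount filesystem: %s' % err)
    finally:
        os.rmdir(tmpdir)             # the temp dir is always removed
```

Even when `process` raises, the unmount still runs before the exception propagates, which is the property the nested try/finally in the diff is there to provide.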