Merge lp:~citrix-openstack/nova/xenapi-glance-2 into lp:~hudson-openstack/nova/trunk

Proposed by Ewan Mellor
Status: Merged
Approved by: Rick Clark
Approved revision: 544
Merged at revision: 577
Proposed branch: lp:~citrix-openstack/nova/xenapi-glance-2
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 869 lines (+504/-70)
10 files modified
nova/tests/glance/__init__.py (+20/-0)
nova/tests/glance/stubs.py (+37/-0)
nova/tests/test_xenapi.py (+72/-38)
nova/tests/xenapi/stubs.py (+19/-5)
nova/virt/xenapi/fake.py (+77/-10)
nova/virt/xenapi/vm_utils.py (+242/-14)
nova/virt/xenapi/vmops.py (+2/-1)
nova/virt/xenapi_conn.py (+3/-0)
plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+31/-2)
tools/pip-requires (+1/-0)
To merge this branch: bzr merge lp:~citrix-openstack/nova/xenapi-glance-2
Reviewer Review Type Date Requested Status
Sandy Walsh (community) Approve
Ed Leafe (community) Approve
Thierry Carrez (community) ffe Approve
Jay Pipes (community) Needs Information
Rick Harris Pending
Review via email: mp+45977@code.launchpad.net

Description of the change

Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-support-for-glance blueprint.

Raw images are streamed directly onto the disk. Non-raw images are streamed to an offset that leaves room for an MBR and partition table. PV vs HVM detection now happens on the image immediately after it has been streamed. External kernel / ramdisk images are also supported in this mode.
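
At its core, the streaming is a seek-and-write onto the attached block device; condensed from the new _stream_disk helper in nova/virt/xenapi/vm_utils.py:

SECTOR_SIZE = 512
MBR_SIZE_SECTORS = 63
MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE

def _stream_disk(dev, type, virtual_size, image_file):
    # Raw images start at byte zero; non-raw images leave room for the
    # MBR and partition table that _write_partition creates.
    offset = 0
    if type == ImageType.DISK:
        offset = MBR_SIZE_BYTES
        _write_partition(virtual_size, dev)

    with open('/dev/%s' % dev, 'wb') as f:
        f.seek(offset)
        for chunk in image_file:
            f.write(chunk)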

Unit test changes include a partial Glance simulator, which is stubbed in place of glance.client.Client. This allows us to pass through the VM spawn path with either glance or objectstore backends enabled; the unit tests now cover both. A dependency upon glance has been added to pip-requires, in order to pull the Glance client code into the venv.
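
For reference, the simulator is wired in during test setup along these lines (condensed from the new test_xenapi.py):

from nova.tests.glance import stubs as glance_stubs

# setUp() swaps glance.client.Client for FakeGlance, so the spawn path
# under test never talks to a real Glance server.
glance_stubs.stubout_glance_client(self.stubs, glance_stubs.FakeGlance)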

This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-use-glance-clients work.

Revision history for this message
Thierry Carrez (ttx) wrote :

OK to try to get this merged, but other branches proposed before should have review priority.

539. By Salvatore Orlando

Fix for _stream_disk

540. By Salvatore Orlando

Fixing the stub for _stream_disk as well

541. By Salvatore Orlando

pep8 fixes

542. By Salvatore Orlando

Fixed another issue in _stream_disk, as it never executed _write_partition.
Fixed fake method accordingly.
Fixed pep8 errors.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi!

"This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-use-glance-clients work."

The image-service-use-glance-clients work is now in trunk. Does this affect this patch?

Cheers,
jay

review: Needs Information
Revision history for this message
Thierry Carrez (ttx) wrote :

On one hand, this branch touches relatively specific code; on the other, it seems a bit far from ready to merge. I'd like to see Jay's question answered, and a review from someone familiar with the XenAPI stuff (Ozone team, anyone?), rather than making a hasty decision on the FFe.

review: Needs Information (ffe)
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I am not entirely sure what Ewan really meant by 'heavily reworked', but from a discussion on the mailing list I understand that the extent of the work may depend on when glance-auth gets underway in Cactus.

As for the rest, we tried this branch internally against Glance revno 34 and everything seemed to work okay.

Hope this helps!

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

I'm still getting up to speed on Swift/Glance/etc.

With this branch (vm_utils @514) xenapi would now have a hard dependency on glance. Is Glance ready to be a linchpin for Nova XenAPI deployments?

Should image store stuff be abstracted?

Just curious.

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I always assumed that Glance was going to be the abstraction over the storage back-end that we needed for Nova but, like Sandy, I am still getting up to speed on this too.

543. By Ewan Mellor

Merged with trunk revno 565.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I don't see the image-service-use-glance-clients work in trunk. Which csets are you referring to, Jay?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

There is a flag (xenapi_image_service) which allows you to use nova-objectstore instead of glance, so it's not a hard dependency in that sense. There is a hard dependency on the glance client code, as you've noticed, and I didn’t see a problem with that since I don't regard glance as a "third party".

In the long run, I expect nova-objectstore to be deprecated altogether, and Glance would then be the abstraction between different image services. I don't see the need for an image-service abstraction inside Nova when Glance is already filling that role. If someone has a need to plug in a replacement for Glance, then we can discuss that, but I don't know of any such need.
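
For example, to keep using nova-objectstore you would start the compute worker with the new flag (it defaults to 'glance'):

nova-compute --xenapi_image_service=objectstore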

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Many of your localized strings are incorrect. If you have more than one formatting placeholder (e.g. %s), you must use a dictionary for your values to be substituted. The reason is that a translation may re-order the words in a phrase, and position-dependent substitution (i.e., tuples) will be ambiguous.

review: Needs Fixing
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Hi Ed,

can you elaborate a bit more? I see lots of localized strings with multiple formatting placeholders and no use of dictionaries.

Revision history for this message
Ed Leafe (ed-leafe) wrote :

> can you elaborate a bit more? I see lots of localized strings with multiple
> formatting placeholders and no use of dictionaries.

Yes, and these will have to be corrected at some point.

The problem is that translations are not word-for-word; a phrase in different languages may order the words differently. Probably the most common example would be adjectives coming before the noun in English ("the white house"), but after the noun in others (e.g., Spanish: "la casa blanca"). If your code looks like:

color = "white"
thing = "house"
print _("The %s %s") % (color, thing)

... it will print "The white house" in English, but the Spanish will print "La blanca casa", which is wrong. You need to use a mapping for the formatting, so that the code above would read:

color = "white"
thing = "house"
print _("The %(color)s %(thing)s") % locals()

Yeah, I know that this is a weak example, since the color and thing are still in English, but it's just designed to demonstrate how positional substitution is to be avoided in localization strings.

If you run xgettext on a file that uses tuples for multiple substitutions, it will emit the following:
"warning: 'msgid' format string with unnamed arguments cannot be properly localized: The translator cannot reorder the arguments. Please consider using a format string with named arguments, and a mapping instead of a tuple for the arguments."

I hope that that helps.

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Additional explanation at the bottom of this page: http://www.gnu.org/software/hello/manual/gettext/Python.html

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Thanks, that explains it.

I agree with you that we need to fix this, but I personally don't see it as much of an issue, as most of the log strings are made of nouns, rarely with verbs and adjectives. That makes the translation pretty straightforward.

Since it affects every log message in trunk, I also think it would be better to do the overhaul in a separate branch.

By the way, some of the languages (e.g. Japanese) have already been translated 100%... does this mean that those translations have to be reviewed?

Revision history for this message
Jay Pipes (jaypipes) wrote :

I don't see this as a showstopper for this particular branch (though I
completely agree it's a bug).

We should be able to address this in Nova in Cactus as a separate
blueprint or even a bug report.

Just my 2 cents,

jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

> I don't see the image-service-use-glance-clients work in trunk. Which csets
> are you referring to, Jay?

I was referring to this blueprint being marked completed: https://blueprints.launchpad.net/nova/+spec/image-service-use-glance-clients.

Rick, could you elaborate on the actual csets, please? Thanks!

Revision history for this message
Rick Harris (rconradharris) wrote :

Jay-

I was referring to https://code.launchpad.net/~rconradharris/nova/xs-snap-return-image-id-before-snapshot .

As part of that effort, I removed ParallaxClient and TellerClient from Nova -- it now uses Glance's own client (glance/client.py). This is why it's failing on Hudson at the moment: we don't have Glance packaged up and installed.

Is that what you meant, or am I misunderstanding something?

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi Rick,

Yeah, that's what I was referring to. OK, looks like we need to get the packaging done pronto for Glance.

/me goes off to work on that.

Cheers,
jay

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Yes, Rick, those are the changes we're talking about. Thanks. They're not yet in trunk, but as soon as they land, the xenapi-glance-2 branch will need fixing up to match.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

> I am not entirely sure what Ewan really meant it with 'heavily reworked', but
> by following a discussion on the mailing list I understand that the extent of
> the work may be related to when glance-auth gets underway in Cactus.

By "heavily reworked" I meant that nova/image/glance.py would be completely rewritten. This code is now in https://code.launchpad.net/~rconradharris/nova/xs-snap-return-image-id-before-snapshot, which is pending merge into trunk.

As soon as xs-snap-return-image-id-before-snapshot is merged, this will conflict with xenapi-glance-2. We will simply need to remove the changes to nova/image/glance.py, and most likely pip-requires too. There will be no additional changes on our part.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I think that we've answered all the questions on this review so far:

  o I have requested a review from rconradharris, as per Thierry's request.

  o We will need to refresh this patch as soon as xs-snap-return-image-id-before-snapshot is merged. This will not be a big deal -- we'll just remove a few changes from xenapi-glance-2, in favour of the changes in xs-snap-return-image-id-before-snapshot.

  o We won't hold up this branch because of the I18N issues, but will fix those in a separate branch.

  o We don't feel the need for an abstraction between Nova and Glance, as Glance is already an abstraction across image services.

  o We have a flag that allows people to continue to use nova-objectstore, so we're not adding a hard dependency upon Glance.

  o We are adding a hard dependency upon Glance's client code (glance.client) and we think this is fine because Glance is part of OpenStack, not a third party.

With all these things addressed, please can someone review the actual content of the branch? I'm keen to get an FFE for this, and it will obviously need careful review in order to do that.

Thanks,

Ewan.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

There should be some means, such as a flag, to point at a local Glance source tree for development purposes.

I had to add this hack to get it going:

swalsh@novadev:~/openstack/xenapi-glance-2$ bzr diff
=== modified file 'bin/nova-combined'
--- bin/nova-combined 2011-01-11 06:47:35 +0000
+++ bin/nova-combined 2011-01-15 19:43:08 +0000
@@ -34,6 +34,10 @@
 if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
     sys.path.insert(0, possible_topdir)

+ glance = os.path.normpath(os.path.join(possible_topdir, "..", "glance"))
+ print "added glance path:", glance
+ sys.path.insert(0, glance)
+
 gettext.install('nova', unicode=1)

Now I'm trying to figure out how to get my images in glance so I can try to create an instance. Tips welcome!

review: Needs Fixing
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

The Glance client library is assumed to be installed, just like any other dependency. It is on PyPI, and can be easy_installed.

We _could_ add a flag to point at a source location, but that would be non-standard. Are you sure?
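
For example:

easy_install glance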

Revision history for this message
Jay Pipes (jaypipes) wrote :

> There should be some means, such as a flag, to point at a local Glance
> source tree for development purposes.
>
> I had to add this hack to get it going:
>
> swalsh@novadev:~/openstack/xenapi-glance-2$ bzr diff
> === modified file 'bin/nova-combined'
> --- bin/nova-combined 2011-01-11 06:47:35 +0000
> +++ bin/nova-combined 2011-01-15 19:43:08 +0000
> @@ -34,6 +34,10 @@
> if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
> sys.path.insert(0, possible_topdir)
>
> + glance = os.path.normpath(os.path.join(possible_topdir, "..", "glance"))
> + print "added glance path:", glance
> + sys.path.insert(0, glance)
> +
> gettext.install('nova', unicode=1)

As Ewan says, Glance is now packaged on PyPI, so the above shouldn't be necessary, nor would I recommend dealing with Glance differently than any other dependency.

> Now I'm trying to figure out how to get my images in glance so I can try to
> create an instance. Tips welcome!

This link should help you out :)

http://glance.openstack.org/client.html#adding-a-new-virtual-machine-image

Cheers!
jay

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

> The Glance client library is assumed to be installed, just like any other
> dependency. It is on PyPI, and can be easy_installed.
>
> We _could_ add a flag to point at a source location, but that would be non-
> standard. Are you sure?

Nope, certainly not sure. :) I was just thinking about situations where I'm working on bugs that span glance/nova. I prefer to run from source vs. an install so I can jump between branches.

Is there another way we can solve this problem?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I just symlink into my source tree from .../site-packages/glance. I presume you could do the same thing with virtualenv too, if your dev machine didn't belong to you.
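
For example (hypothetical paths -- adjust for your distro and Python version):

ln -s ~/src/glance/glance /usr/lib/python2.6/site-packages/glance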

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Ah, good idea ... thx. Disregard flag. :)

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, Ewan, Salvatore,

Rick's xs-snapshot branch is now in trunk. Can you re-merge xenapi-glance-2 with trunk and make any changes needed? Thanks much for all your patience!

544. By Ewan Mellor

Merged with trunk revno 572.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I have re-merged this branch with trunk, following the xs-snapshot merge. Are people otherwise happy with the code? I would like to make an FFE proposal for this if the reviewers are happy. Thanks.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

I'm in the short strokes now of getting the whole thing running (using the latest branch).

1. Creating instance using objectstore
2. Switching to Glance
3. Saving snapshot. Got hung up yesterday on this.
Note: Had to convert my XenServer SR from LVM to VHD since Glance only works with VHD currently.
4. Attempting to create instance from snapshot.

Stay tuned.

(I probably won't have time to review the code as well, but I'm sure others have that covered)

Revision history for this message
Thierry Carrez (ttx) wrote :

Consider the exception granted if you can get this merged today!

review: Approve (ffe)
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Thanks Thierry!

Sandy, could you please withdraw your "Needs Fixing", as per your subsequent comments?

Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.

Jay, an Approve from you would be appreciated, if you're happy.

Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated. We need to sneak this in while Thierry's feeling reckless ;-)

Thanks everyone,

Ewan.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Hmm, think I have issues with my LVM -> VHD conversion: http://paste.openstack.org/show/488/

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Jan 18, 2011 at 8:32 AM, Ewan Mellor <email address hidden> wrote:
> Jay, an Approve from you would be appreciated, if you're happy.

We need your help to solve a Glance packaging bug so that I can do
testing properly on the branch... but I promise this is my priority
today. If you could join IRC right quick, we can get the packaging
bug solved :) Thanks!

> Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated.  We need to sneak this in while Thierry's feeling reckless ;-)

I'll ping Rick on IRC...

-jay

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Decided to move correcting the localization issues into a separate blueprint.

review: Approve
Revision history for this message
Ed Leafe (ed-leafe) wrote :

On Jan 18, 2011, at 8:32 AM, Ewan Mellor wrote:

> Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.

 Done.

-- Ed Leafe

Revision history for this message
Sandy Walsh (sandy-walsh):
review: Approve
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

It's expecting to find the local disk as per the default XenServer installation. You pesky Rackers screw with it, I think. You need SR.other_config['i18n-key'] == 'local-storage'. You can set this on the dom0 CLI using this:

sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use
xe sr-param-set other-config:i18n-key=local-storage uuid=$sruuid

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Irritating line wrap issue there. Just to be clear, that's

sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use

xe sr-param-set other-config:i18n-key=local-storage uuid=$sruuid

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Thanks Ewan ... that worked great!

Now it gets tricky. I was hoping to use trunk to add my kernel/ramdisk/machine images to create an instance and switch over to Glance to create a snapshot, but now with the trunk merge, it always expects Glance (even though --image_service=nova.image.s3.S3ImageService is set) ... resulting in

http://paste.openstack.org/show/489/

I guess I have to hand-craft the images and pre-populate my Glance folder in order to get this to work now, huh?

Any notes on taking EC2-compliant kernel/ramdisk/machine images and prepping them for Glance usage?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

In the absence of any Glance CLI tools, I threw this together:

#!/usr/bin/python2.6

# Upload a file to Glance: <host> <name> <image|kernel|ramdisk> <filename>

import sys

from glance.client import Client

hostname = sys.argv[1]
name = sys.argv[2]
typ = sys.argv[3]
filename = sys.argv[4]

# Glance's API listens on port 9292 by default.
c = Client(hostname, 9292)

meta = {'name': name,
        'type': typ,
        'is_public': True,
        }

# Stream the file to Glance; add_image returns the new image's metadata.
with open(filename) as f:
    new_meta = c.add_image(meta, f)

print 'Stored image. Got identifier: %s' % new_meta

The third argument should be "image", "ramdisk", or "kernel". You can then use the image identifiers on the euca-run-instances command line, just as you would if you'd used euca-register.
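
A hypothetical invocation, assuming the script above is saved as glance-add:

./glance-add glance-host my-kernel kernel vmlinuz
./glance-add glance-host my-machine image machine.img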

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Jan 18, 2011 at 10:10 AM, Ewan Mellor <email address hidden> wrote:
> In the absence of any Glance CLI tools, I threw this together:

Coming soon to a Glance near you! :)

https://blueprints.launchpad.net/glance/+spec/admin-tool

-jay

Preview Diff

1=== added directory 'nova/tests/glance'
2=== added file 'nova/tests/glance/__init__.py'
3--- nova/tests/glance/__init__.py 1970-01-01 00:00:00 +0000
4+++ nova/tests/glance/__init__.py 2011-01-17 19:48:42 +0000
5@@ -0,0 +1,20 @@
6+# vim: tabstop=4 shiftwidth=4 softtabstop=4
7+
8+# Copyright (c) 2011 Citrix Systems, Inc.
9+#
10+# Licensed under the Apache License, Version 2.0 (the "License"); you may
11+# not use this file except in compliance with the License. You may obtain
12+# a copy of the License at
13+#
14+# http://www.apache.org/licenses/LICENSE-2.0
15+#
16+# Unless required by applicable law or agreed to in writing, software
17+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
18+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
19+# License for the specific language governing permissions and limitations
20+# under the License.
21+
22+"""
23+:mod:`glance` -- Stubs for Glance
24+=================================
25+"""
26
27=== added file 'nova/tests/glance/stubs.py'
28--- nova/tests/glance/stubs.py 1970-01-01 00:00:00 +0000
29+++ nova/tests/glance/stubs.py 2011-01-17 19:48:42 +0000
30@@ -0,0 +1,37 @@
31+# vim: tabstop=4 shiftwidth=4 softtabstop=4
32+
33+# Copyright (c) 2011 Citrix Systems, Inc.
34+#
35+# Licensed under the Apache License, Version 2.0 (the "License"); you may
36+# not use this file except in compliance with the License. You may obtain
37+# a copy of the License at
38+#
39+# http://www.apache.org/licenses/LICENSE-2.0
40+#
41+# Unless required by applicable law or agreed to in writing, software
42+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
43+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
44+# License for the specific language governing permissions and limitations
45+# under the License.
46+
47+import StringIO
48+
49+import glance.client
50+
51+
52+def stubout_glance_client(stubs, cls):
53+ """Stubs out glance.client.Client"""
54+ stubs.Set(glance.client, 'Client',
55+ lambda *args, **kwargs: cls(*args, **kwargs))
56+
57+
58+class FakeGlance(object):
59+ def __init__(self, host, port=None, use_ssl=False):
60+ pass
61+
62+ def get_image(self, image):
63+ meta = {
64+ 'size': 0,
65+ }
66+ image_file = StringIO.StringIO('')
67+ return meta, image_file
68
69=== modified file 'nova/tests/test_xenapi.py'
70--- nova/tests/test_xenapi.py 2011-01-13 16:52:28 +0000
71+++ nova/tests/test_xenapi.py 2011-01-17 19:48:42 +0000
72@@ -34,6 +34,7 @@
73 from nova.virt.xenapi.vmops import SimpleDH
74 from nova.tests.db import fakes as db_fakes
75 from nova.tests.xenapi import stubs
76+from nova.tests.glance import stubs as glance_stubs
77
78 FLAGS = flags.FLAGS
79
80@@ -108,18 +109,16 @@
81 conn = xenapi_conn.get_connection(False)
82 volume = self._create_volume()
83 instance = db.instance_create(self.values)
84- xenapi_fake.create_vm(instance.name, 'Running')
85+ vm = xenapi_fake.create_vm(instance.name, 'Running')
86 result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
87
88 def check():
89 # check that the VM has a VBD attached to it
90- # Get XenAPI reference for the VM
91- vms = xenapi_fake.get_all('VM')
92 # Get XenAPI record for VBD
93 vbds = xenapi_fake.get_all('VBD')
94 vbd = xenapi_fake.get_record('VBD', vbds[0])
95 vm_ref = vbd['VM']
96- self.assertEqual(vm_ref, vms[0])
97+ self.assertEqual(vm_ref, vm)
98
99 check()
100
101@@ -157,9 +156,14 @@
102 FLAGS.xenapi_connection_url = 'test_url'
103 FLAGS.xenapi_connection_password = 'test_pass'
104 xenapi_fake.reset()
105+ xenapi_fake.create_local_srs()
106 db_fakes.stub_out_db_instance_api(self.stubs)
107 xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
108 stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
109+ stubs.stubout_get_this_vm_uuid(self.stubs)
110+ stubs.stubout_stream_disk(self.stubs)
111+ glance_stubs.stubout_glance_client(self.stubs,
112+ glance_stubs.FakeGlance)
113 self.conn = xenapi_conn.get_connection(False)
114
115 def test_list_instances_0(self):
116@@ -207,40 +211,70 @@
117
118 check()
119
120- def test_spawn(self):
121- instance = self._create_instance()
122-
123- def check():
124- instances = self.conn.list_instances()
125- self.assertEquals(instances, [1])
126-
127- # Get Nova record for VM
128- vm_info = self.conn.get_info(1)
129-
130- # Get XenAPI record for VM
131- vms = xenapi_fake.get_all('VM')
132- vm = xenapi_fake.get_record('VM', vms[0])
133-
134- # Check that m1.large above turned into the right thing.
135- instance_type = instance_types.INSTANCE_TYPES['m1.large']
136- mem_kib = long(instance_type['memory_mb']) << 10
137- mem_bytes = str(mem_kib << 10)
138- vcpus = instance_type['vcpus']
139- self.assertEquals(vm_info['max_mem'], mem_kib)
140- self.assertEquals(vm_info['mem'], mem_kib)
141- self.assertEquals(vm['memory_static_max'], mem_bytes)
142- self.assertEquals(vm['memory_dynamic_max'], mem_bytes)
143- self.assertEquals(vm['memory_dynamic_min'], mem_bytes)
144- self.assertEquals(vm['VCPUs_max'], str(vcpus))
145- self.assertEquals(vm['VCPUs_at_startup'], str(vcpus))
146-
147- # Check that the VM is running according to Nova
148- self.assertEquals(vm_info['state'], power_state.RUNNING)
149-
150- # Check that the VM is running according to XenAPI.
151- self.assertEquals(vm['power_state'], 'Running')
152-
153- check()
154+ def check_vm_record(self, conn):
155+ instances = conn.list_instances()
156+ self.assertEquals(instances, [1])
157+
158+ # Get Nova record for VM
159+ vm_info = conn.get_info(1)
160+
161+ # Get XenAPI record for VM
162+ vms = [rec for ref, rec
163+ in xenapi_fake.get_all_records('VM').iteritems()
164+ if not rec['is_control_domain']]
165+ vm = vms[0]
166+
167+ # Check that m1.large above turned into the right thing.
168+ instance_type = instance_types.INSTANCE_TYPES['m1.large']
169+ mem_kib = long(instance_type['memory_mb']) << 10
170+ mem_bytes = str(mem_kib << 10)
171+ vcpus = instance_type['vcpus']
172+ self.assertEquals(vm_info['max_mem'], mem_kib)
173+ self.assertEquals(vm_info['mem'], mem_kib)
174+ self.assertEquals(vm['memory_static_max'], mem_bytes)
175+ self.assertEquals(vm['memory_dynamic_max'], mem_bytes)
176+ self.assertEquals(vm['memory_dynamic_min'], mem_bytes)
177+ self.assertEquals(vm['VCPUs_max'], str(vcpus))
178+ self.assertEquals(vm['VCPUs_at_startup'], str(vcpus))
179+
180+ # Check that the VM is running according to Nova
181+ self.assertEquals(vm_info['state'], power_state.RUNNING)
182+
183+ # Check that the VM is running according to XenAPI.
184+ self.assertEquals(vm['power_state'], 'Running')
185+
186+ def _test_spawn(self, image_id, kernel_id, ramdisk_id):
187+ stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
188+ values = {'name': 1,
189+ 'id': 1,
190+ 'project_id': self.project.id,
191+ 'user_id': self.user.id,
192+ 'image_id': image_id,
193+ 'kernel_id': kernel_id,
194+ 'ramdisk_id': ramdisk_id,
195+ 'instance_type': 'm1.large',
196+ 'mac_address': 'aa:bb:cc:dd:ee:ff',
197+ }
198+ conn = xenapi_conn.get_connection(False)
199+ instance = db.instance_create(values)
200+ conn.spawn(instance)
201+ self.check_vm_record(conn)
202+
203+ def test_spawn_raw_objectstore(self):
204+ FLAGS.xenapi_image_service = 'objectstore'
205+ self._test_spawn(1, None, None)
206+
207+ def test_spawn_objectstore(self):
208+ FLAGS.xenapi_image_service = 'objectstore'
209+ self._test_spawn(1, 2, 3)
210+
211+ def test_spawn_raw_glance(self):
212+ FLAGS.xenapi_image_service = 'glance'
213+ self._test_spawn(1, None, None)
214+
215+ def test_spawn_glance(self):
216+ FLAGS.xenapi_image_service = 'glance'
217+ self._test_spawn(1, 2, 3)
218
219 def tearDown(self):
220 super(XenAPIVMTestCase, self).tearDown()
221
222=== modified file 'nova/tests/xenapi/stubs.py'
223--- nova/tests/xenapi/stubs.py 2011-01-10 09:12:48 +0000
224+++ nova/tests/xenapi/stubs.py 2011-01-17 19:48:42 +0000
225@@ -115,6 +115,21 @@
226 stubs.Set(volume_utils, '_get_target', fake_get_target)
227
228
229+def stubout_get_this_vm_uuid(stubs):
230+ def f():
231+ vms = [rec['uuid'] for ref, rec
232+ in fake.get_all_records('VM').iteritems()
233+ if rec['is_control_domain']]
234+ return vms[0]
235+ stubs.Set(vm_utils, 'get_this_vm_uuid', f)
236+
237+
238+def stubout_stream_disk(stubs):
239+ def f(_1, _2, _3, _4):
240+ pass
241+ stubs.Set(vm_utils, '_stream_disk', f)
242+
243+
244 class FakeSessionForVMTests(fake.SessionBase):
245 """ Stubs out a XenAPISession for VM tests """
246 def __init__(self, uri):
247@@ -124,7 +139,10 @@
248 return self.xenapi.network.get_all_records()
249
250 def host_call_plugin(self, _1, _2, _3, _4, _5):
251- return ''
252+ sr_ref = fake.get_all('SR')[0]
253+ vdi_ref = fake.create_vdi('', False, sr_ref, False)
254+ vdi_rec = fake.get_record('VDI', vdi_ref)
255+ return '<string>%s</string>' % vdi_rec['uuid']
256
257 def VM_start(self, _1, ref, _2, _3):
258 vm = fake.get_record('VM', ref)
259@@ -159,10 +177,6 @@
260 def __init__(self, uri):
261 super(FakeSessionForVolumeTests, self).__init__(uri)
262
263- def VBD_plug(self, _1, ref):
264- rec = fake.get_record('VBD', ref)
265- rec['currently-attached'] = True
266-
267 def VDI_introduce(self, _1, uuid, _2, _3, _4, _5,
268 _6, _7, _8, _9, _10, _11):
269 valid_vdi = False
270
271=== modified file 'nova/virt/xenapi/fake.py'
272--- nova/virt/xenapi/fake.py 2011-01-04 05:26:41 +0000
273+++ nova/virt/xenapi/fake.py 2011-01-17 19:48:42 +0000
274@@ -76,6 +76,7 @@
275 for c in _CLASSES:
276 _db_content[c] = {}
277 create_host('fake')
278+ create_vm('fake', 'Running', is_a_template=False, is_control_domain=True)
279
280
281 def create_host(name_label):
282@@ -136,14 +137,21 @@
283
284
285 def create_vbd(vm_ref, vdi_ref):
286- vbd_rec = {'VM': vm_ref, 'VDI': vdi_ref}
287+ vbd_rec = {
288+ 'VM': vm_ref,
289+ 'VDI': vdi_ref,
290+ 'currently_attached': False,
291+ }
292 vbd_ref = _create_object('VBD', vbd_rec)
293 after_VBD_create(vbd_ref, vbd_rec)
294 return vbd_ref
295
296
297 def after_VBD_create(vbd_ref, vbd_rec):
298- """Create backref from VM to VBD when VBD is created"""
299+ """Create read-only fields and backref from VM to VBD when VBD is
300+ created."""
301+ vbd_rec['currently_attached'] = False
302+ vbd_rec['device'] = ''
303 vm_ref = vbd_rec['VM']
304 vm_rec = _db_content['VM'][vm_ref]
305 vm_rec['VBDs'] = [vbd_ref]
306@@ -152,9 +160,10 @@
307 vbd_rec['vm_name_label'] = vm_name_label
308
309
310-def create_pbd(config, sr_ref, attached):
311+def create_pbd(config, host_ref, sr_ref, attached):
312 return _create_object('PBD', {
313 'device-config': config,
314+ 'host': host_ref,
315 'SR': sr_ref,
316 'currently-attached': attached,
317 })
318@@ -167,6 +176,33 @@
319 })
320
321
322+def create_local_srs():
323+ """Create an SR that looks like the one created on the local disk by
324+ default by the XenServer installer. Do this one per host."""
325+ for host_ref in _db_content['host'].keys():
326+ _create_local_sr(host_ref)
327+
328+
329+def _create_local_sr(host_ref):
330+ sr_ref = _create_object('SR', {
331+ 'name_label': 'Local storage',
332+ 'type': 'lvm',
333+ 'content_type': 'user',
334+ 'shared': False,
335+ 'physical_size': str(1 << 30),
336+ 'physical_utilisation': str(0),
337+ 'virtual_allocation': str(0),
338+ 'other_config': {
339+ 'i18n-original-value-name_label': 'Local storage',
340+ 'i18n-key': 'local-storage',
341+ },
342+ 'VDIs': []
343+ })
344+ pbd_ref = create_pbd('', host_ref, sr_ref, True)
345+ _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref]
346+ return sr_ref
347+
348+
349 def _create_object(table, obj):
350 ref = str(uuid.uuid4())
351 obj['uuid'] = str(uuid.uuid4())
352@@ -179,9 +215,10 @@
353 # Forces fake to support iscsi only
354 if sr_type != 'iscsi':
355 raise Failure(['SR_UNKNOWN_DRIVER', sr_type])
356+ host_ref = _db_content['host'].keys()[0]
357 sr_ref = _create_object(table, obj[2])
358 vdi_ref = create_vdi('', False, sr_ref, False)
359- pbd_ref = create_pbd('', sr_ref, True)
360+ pbd_ref = create_pbd('', host_ref, sr_ref, True)
361 _db_content['SR'][sr_ref]['VDIs'] = [vdi_ref]
362 _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref]
363 _db_content['VDI'][vdi_ref]['SR'] = sr_ref
364@@ -233,6 +270,20 @@
365 def __init__(self, uri):
366 self._session = None
367
368+ def VBD_plug(self, _1, ref):
369+ rec = get_record('VBD', ref)
370+ if rec['currently_attached']:
371+ raise Failure(['DEVICE_ALREADY_ATTACHED', ref])
372+ rec['currently_attached'] = True
373+ rec['device'] = rec['userdevice']
374+
375+ def VBD_unplug(self, _1, ref):
376+ rec = get_record('VBD', ref)
377+ if not rec['currently_attached']:
378+ raise Failure(['DEVICE_ALREADY_DETACHED', ref])
379+ rec['currently_attached'] = False
380+ rec['device'] = ''
381+
382 def xenapi_request(self, methodname, params):
383 if methodname.startswith('login'):
384 self._login(methodname, params)
385@@ -289,6 +340,8 @@
386 return lambda *params: self._getter(name, params)
387 elif self._is_create(name):
388 return lambda *params: self._create(name, params)
389+ elif self._is_destroy(name):
390+ return lambda *params: self._destroy(name, params)
391 else:
392 return None
393
394@@ -299,10 +352,16 @@
395 bits[1].startswith(getter and 'get_' or 'set_'))
396
397 def _is_create(self, name):
398+ return self._is_method(name, 'create')
399+
400+ def _is_destroy(self, name):
401+ return self._is_method(name, 'destroy')
402+
403+ def _is_method(self, name, meth):
404 bits = name.split('.')
405 return (len(bits) == 2 and
406 bits[0] in _CLASSES and
407- bits[1] == 'create')
408+ bits[1] == meth)
409
410 def _getter(self, name, params):
411 self._check_session(params)
412@@ -370,10 +429,9 @@
413 _create_sr(cls, params) or _create_object(cls, params[1])
414
415 # Call hook to provide any fixups needed (ex. creating backrefs)
416- try:
417- globals()["after_%s_create" % cls](ref, params[1])
418- except KeyError:
419- pass
420+ after_hook = 'after_%s_create' % cls
421+ if after_hook in globals():
422+ globals()[after_hook](ref, params[1])
423
424 obj = get_record(cls, ref)
425
426@@ -383,6 +441,15 @@
427
428 return ref
429
430+ def _destroy(self, name, params):
431+ self._check_session(params)
432+ self._check_arg_count(params, 2)
433+ table, _ = name.split('.')
434+ ref = params[1]
435+ if ref not in _db_content[table]:
436+ raise Failure(['HANDLE_INVALID', table, ref])
437+ del _db_content[table][ref]
438+
439 def _async(self, name, params):
440 task_ref = create_task(name)
441 task = _db_content['task'][task_ref]
442@@ -420,7 +487,7 @@
443 try:
444 return result[0]
445 except IndexError:
446- return None
447+ raise Failure(['UUID_INVALID', v, result, recs, k])
448
449 return result
450
451
452=== modified file 'nova/virt/xenapi/vm_utils.py'
453--- nova/virt/xenapi/vm_utils.py 2011-01-12 19:47:40 +0000
454+++ nova/virt/xenapi/vm_utils.py 2011-01-17 19:48:42 +0000
455@@ -19,11 +19,14 @@
456 their attributes like VDIs, VIFs, as well as their lookup functions.
457 """
458
459+import os
460 import pickle
461+import re
462 import urllib
463 from xml.dom import minidom
464
465 from eventlet import event
466+import glance.client
467 from nova import exception
468 from nova import flags
469 from nova import log as logging
470@@ -47,17 +50,23 @@
471 'Crashed': power_state.CRASHED}
472
473
474+SECTOR_SIZE = 512
475+MBR_SIZE_SECTORS = 63
476+MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE
477+KERNEL_DIR = '/boot/guest'
478+
479+
480 class ImageType:
481- """
482- Enumeration class for distinguishing different image types
483- 0 - kernel/ramdisk image (goes on dom0's filesystem)
484- 1 - disk image (local SR, partitioned by objectstore plugin)
485- 2 - raw disk image (local SR, NOT partitioned by plugin)
486- """
487+ """
488+ Enumeration class for distinguishing different image types
489+ 0 - kernel/ramdisk image (goes on dom0's filesystem)
490+ 1 - disk image (local SR, partitioned by objectstore plugin)
491+ 2 - raw disk image (local SR, NOT partitioned by plugin)
492+ """
493
494- KERNEL_RAMDISK = 0
495- DISK = 1
496- DISK_RAW = 2
497+ KERNEL_RAMDISK = 0
498+ DISK = 1
499+ DISK_RAW = 2
500
501
502 class VMHelper(HelperBase):
503@@ -207,6 +216,25 @@
504 return vif_ref
505
506 @classmethod
507+ def create_vdi(cls, session, sr_ref, name_label, virtual_size, read_only):
508+ """Create a VDI record and returns its reference."""
509+ vdi_ref = session.get_xenapi().VDI.create(
510+ {'name_label': name_label,
511+ 'name_description': '',
512+ 'SR': sr_ref,
513+ 'virtual_size': str(virtual_size),
514+ 'type': 'User',
515+ 'sharable': False,
516+ 'read_only': read_only,
517+ 'xenstore_data': {},
518+ 'other_config': {},
519+ 'sm_config': {},
520+ 'tags': []})
521+ LOG.debug(_('Created VDI %s (%s, %s, %s) on %s.'), vdi_ref,
522+ name_label, virtual_size, read_only, sr_ref)
523+ return vdi_ref
524+
525+ @classmethod
526 def create_snapshot(cls, session, instance_id, vm_ref, label):
527 """ Creates Snapshot (Template) VM, Snapshot VBD, Snapshot VDI,
528 Snapshot VHD
529@@ -256,15 +284,71 @@
530 def fetch_image(cls, session, instance_id, image, user, project, type):
531 """
532 type is interpreted as an ImageType instance
533+ Related flags:
534+ xenapi_image_service = ['glance', 'objectstore']
535+ glance_address = 'address for glance services'
536+ glance_port = 'port for glance services'
537 """
538+ access = AuthManager().get_access_key(user, project)
539+
540+ if FLAGS.xenapi_image_service == 'glance':
541+ return cls._fetch_image_glance(session, instance_id, image,
542+ access, type)
543+ else:
544+ return cls._fetch_image_objectstore(session, instance_id, image,
545+ access, user.secret, type)
546+
547+ @classmethod
548+ def _fetch_image_glance(cls, session, instance_id, image, access, type):
549+ sr = find_sr(session)
550+ if sr is None:
551+ raise exception.NotFound('Cannot find SR to write VDI to')
552+
553+ c = glance.client.Client(FLAGS.glance_host, FLAGS.glance_port)
554+
555+ meta, image_file = c.get_image(image)
556+ virtual_size = int(meta['size'])
557+ vdi_size = virtual_size
558+ LOG.debug(_("Size for image %s:%d"), image, virtual_size)
559+ if type == ImageType.DISK:
560+ # Make room for MBR.
561+ vdi_size += MBR_SIZE_BYTES
562+
563+ vdi = cls.create_vdi(session, sr, _('Glance image %s') % image,
564+ vdi_size, False)
565+
566+ with_vdi_attached_here(session, vdi, False,
567+ lambda dev:
568+ _stream_disk(dev, type,
569+ virtual_size, image_file))
570+ if (type == ImageType.KERNEL_RAMDISK):
571+ #we need to invoke a plugin for copying VDI's
572+ #content into proper path
573+ LOG.debug(_("Copying VDI %s to /boot/guest on dom0"), vdi)
574+ fn = "copy_kernel_vdi"
575+ args = {}
576+ args['vdi-ref'] = vdi
577+ #let the plugin copy the correct number of bytes
578+ args['image-size'] = str(vdi_size)
579+ task = session.async_call_plugin('glance', fn, args)
580+ filename = session.wait_for_task(instance_id, task)
581+ #remove the VDI as it is not needed anymore
582+ session.get_xenapi().VDI.destroy(vdi)
583+ LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi)
584+ return filename
585+ else:
586+ return session.get_xenapi().VDI.get_uuid(vdi)
587+
588+ @classmethod
589+ def _fetch_image_objectstore(cls, session, instance_id, image, access,
590+ secret, type):
591 url = images.image_url(image)
592- access = AuthManager().get_access_key(user, project)
593 LOG.debug(_("Asking xapi to fetch %s as %s"), url, access)
594 fn = (type != ImageType.KERNEL_RAMDISK) and 'get_vdi' or 'get_kernel'
595 args = {}
596 args['src_url'] = url
597 args['username'] = access
598- args['password'] = user.secret
599+ args['password'] = secret
600 args['add_partition'] = 'false'
601 args['raw'] = 'false'
602 if type != ImageType.KERNEL_RAMDISK:
603@@ -276,14 +360,21 @@
604 return uuid
605
606 @classmethod
607- def lookup_image(cls, session, vdi_ref):
608+ def lookup_image(cls, session, instance_id, vdi_ref):
609+ if FLAGS.xenapi_image_service == 'glance':
610+ return cls._lookup_image_glance(session, vdi_ref)
611+ else:
612+ return cls._lookup_image_objectstore(session, instance_id, vdi_ref)
613+
614+ @classmethod
615+ def _lookup_image_objectstore(cls, session, instance_id, vdi_ref):
616 LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref)
617 fn = "is_vdi_pv"
618 args = {}
619 args['vdi-ref'] = vdi_ref
620- #TODO: Call proper function in plugin
621 task = session.async_call_plugin('objectstore', fn, args)
622- pv_str = session.wait_for_task(task)
623+ pv_str = session.wait_for_task(instance_id, task)
624+ pv = None
625 if pv_str.lower() == 'true':
626 pv = True
627 elif pv_str.lower() == 'false':
628@@ -292,6 +383,23 @@
629 return pv
630
631 @classmethod
632+ def _lookup_image_glance(cls, session, vdi_ref):
633+ LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref)
634+
635+ def is_vdi_pv(dev):
636+ LOG.debug(_("Running pygrub against %s"), dev)
637+ output = os.popen('pygrub -qn /dev/%s' % dev)
638+ for line in output.readlines():
639+ #try to find kernel string
640+ m = re.search('(?<=kernel:)/.*(?:>)', line)
641+ if m and m.group(0).find('xen') != -1:
642+ LOG.debug(_("Found Xen kernel %s") % m.group(0))
643+ return True
644+ LOG.debug(_("No Xen kernel found. Booting HVM."))
645+ return False
646+ return with_vdi_attached_here(session, vdi_ref, True, is_vdi_pv)
647+
648+ @classmethod
649 def lookup(cls, session, i):
650 """Look the instance i up, and returns it if available"""
651 vms = session.get_xenapi().VM.get_by_name_label(i)
652@@ -464,3 +572,123 @@
653 vdi_ref = vdi_refs[0]
654 vdi_rec = session.get_xenapi().VDI.get_record(vdi_ref)
655 return vdi_ref, vdi_rec
656+
657+
658+def find_sr(session):
659+ host = session.get_xenapi_host()
660+ srs = session.get_xenapi().SR.get_all()
661+ for sr in srs:
662+ sr_rec = session.get_xenapi().SR.get_record(sr)
663+ if not ('i18n-key' in sr_rec['other_config'] and
664+ sr_rec['other_config']['i18n-key'] == 'local-storage'):
665+ continue
666+ for pbd in sr_rec['PBDs']:
667+ pbd_rec = session.get_xenapi().PBD.get_record(pbd)
668+ if pbd_rec['host'] == host:
669+ return sr
670+ return None
671+
672+
673+def with_vdi_attached_here(session, vdi, read_only, f):
674+ this_vm_ref = get_this_vm_ref(session)
675+ vbd_rec = {}
676+ vbd_rec['VM'] = this_vm_ref
677+ vbd_rec['VDI'] = vdi
678+ vbd_rec['userdevice'] = 'autodetect'
679+ vbd_rec['bootable'] = False
680+ vbd_rec['mode'] = read_only and 'RO' or 'RW'
681+ vbd_rec['type'] = 'disk'
682+ vbd_rec['unpluggable'] = True
683+ vbd_rec['empty'] = False
684+ vbd_rec['other_config'] = {}
685+ vbd_rec['qos_algorithm_type'] = ''
686+ vbd_rec['qos_algorithm_params'] = {}
687+ vbd_rec['qos_supported_algorithms'] = []
688+ LOG.debug(_('Creating VBD for VDI %s ... '), vdi)
689+ vbd = session.get_xenapi().VBD.create(vbd_rec)
690+ LOG.debug(_('Creating VBD for VDI %s done.'), vdi)
691+ try:
692+ LOG.debug(_('Plugging VBD %s ... '), vbd)
693+ session.get_xenapi().VBD.plug(vbd)
694+ LOG.debug(_('Plugging VBD %s done.'), vbd)
695+ return f(session.get_xenapi().VBD.get_device(vbd))
696+ finally:
697+ LOG.debug(_('Destroying VBD for VDI %s ... '), vdi)
698+ vbd_unplug_with_retry(session, vbd)
699+ ignore_failure(session.get_xenapi().VBD.destroy, vbd)
700+ LOG.debug(_('Destroying VBD for VDI %s done.'), vdi)
701+
702+
703+def vbd_unplug_with_retry(session, vbd):
704+ """Call VBD.unplug on the given VBD, with a retry if we get
705+ DEVICE_DETACH_REJECTED. For reasons which I don't understand, we're
706+ seeing the device still in use, even when all processes using the device
707+ should be dead."""
708+ while True:
709+ try:
710+ session.get_xenapi().VBD.unplug(vbd)
711+ LOG.debug(_('VBD.unplug successful first time.'))
712+ return
713+ except VMHelper.XenAPI.Failure, e:
714+ if (len(e.details) > 0 and
715+ e.details[0] == 'DEVICE_DETACH_REJECTED'):
716+ LOG.debug(_('VBD.unplug rejected: retrying...'))
717+ time.sleep(1)
718+ elif (len(e.details) > 0 and
719+ e.details[0] == 'DEVICE_ALREADY_DETACHED'):
720+ LOG.debug(_('VBD.unplug successful eventually.'))
721+ return
722+ else:
723+ LOG.error(_('Ignoring XenAPI.Failure in VBD.unplug: %s'),
724+ e)
725+ return
726+
727+
728+def ignore_failure(func, *args, **kwargs):
729+ try:
730+ return func(*args, **kwargs)
731+ except VMHelper.XenAPI.Failure, e:
732+ LOG.error(_('Ignoring XenAPI.Failure %s'), e)
733+ return None
734+
735+
736+def get_this_vm_uuid():
737+ with file('/sys/hypervisor/uuid') as f:
738+ return f.readline().strip()
739+
740+
741+def get_this_vm_ref(session):
742+ return session.get_xenapi().VM.get_by_uuid(get_this_vm_uuid())
743+
744+
745+def _stream_disk(dev, type, virtual_size, image_file):
746+ offset = 0
747+ if type == ImageType.DISK:
748+ offset = MBR_SIZE_BYTES
749+ _write_partition(virtual_size, dev)
750+
751+ with open('/dev/%s' % dev, 'wb') as f:
752+ f.seek(offset)
753+ for chunk in image_file:
754+ f.write(chunk)
755+
756+
757+def _write_partition(virtual_size, dev):
758+ dest = '/dev/%s' % dev
759+ mbr_last = MBR_SIZE_SECTORS - 1
760+ primary_first = MBR_SIZE_SECTORS
761+ primary_last = MBR_SIZE_SECTORS + (virtual_size / SECTOR_SIZE) - 1
762+
763+ LOG.debug(_('Writing partition table %d %d to %s...'),
764+ primary_first, primary_last, dest)
765+
766+ def execute(cmd, process_input=None, check_exit_code=True):
767+ return utils.execute(cmd=cmd,
768+ process_input=process_input,
769+ check_exit_code=check_exit_code)
770+
771+ execute('parted --script %s mklabel msdos' % dest)
772+ execute('parted --script %s mkpart primary %ds %ds' %
773+ (dest, primary_first, primary_last))
774+
775+ LOG.debug(_('Writing partition table %s done.'), dest)
776
777=== modified file 'nova/virt/xenapi/vmops.py'
778--- nova/virt/xenapi/vmops.py 2011-01-17 17:16:36 +0000
779+++ nova/virt/xenapi/vmops.py 2011-01-17 19:48:42 +0000
780@@ -85,7 +85,8 @@
781 #Have a look at the VDI and see if it has a PV kernel
782 pv_kernel = False
783 if not instance.kernel_id:
784- pv_kernel = VMHelper.lookup_image(self._session, vdi_ref)
785+ pv_kernel = VMHelper.lookup_image(self._session, instance.id,
786+ vdi_ref)
787 kernel = None
788 if instance.kernel_id:
789 kernel = VMHelper.fetch_image(self._session, instance.id,
790
791=== modified file 'nova/virt/xenapi_conn.py'
792--- nova/virt/xenapi_conn.py 2011-01-17 17:16:36 +0000
793+++ nova/virt/xenapi_conn.py 2011-01-17 19:48:42 +0000
794@@ -89,6 +89,9 @@
795 'The interval used for polling of remote tasks '
796 '(Async.VM.start, etc). Used only if '
797 'connection_type=xenapi.')
798+flags.DEFINE_string('xenapi_image_service',
799+ 'glance',
800+ 'Where to get VM images: glance or objectstore.')
801 flags.DEFINE_float('xenapi_vhd_coalesce_poll_interval',
802 5.0,
803 'The interval used for polling of coalescing vhds.'
804
805=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance'
806--- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-07 03:37:33 +0000
807+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-17 19:48:42 +0000
808@@ -18,7 +18,7 @@
809 # under the License.
810
811 #
812-# XenAPI plugin for putting images into glance
813+# XenAPI plugin for managing glance images
814 #
815
816 import base64
817@@ -40,8 +40,36 @@
818 configure_logging('glance')
819
820 CHUNK_SIZE = 8192
821+KERNEL_DIR = '/boot/guest'
822 FILE_SR_PATH = '/var/run/sr-mount'
823
824+def copy_kernel_vdi(session,args):
825+ vdi = exists(args, 'vdi-ref')
826+ size = exists(args,'image-size')
827+ #Use the uuid as a filename
828+ vdi_uuid=session.xenapi.VDI.get_uuid(vdi)
829+ copy_args={'vdi_uuid':vdi_uuid,'vdi_size':int(size)}
830+ filename=with_vdi_in_dom0(session, vdi, False,
831+ lambda dev:
832+ _copy_kernel_vdi('/dev/%s' % dev,copy_args))
833+ return filename
834+
835+def _copy_kernel_vdi(dest,copy_args):
836+ vdi_uuid=copy_args['vdi_uuid']
837+ vdi_size=copy_args['vdi_size']
838+ logging.debug("copying kernel/ramdisk file from %s to /boot/guest/%s",dest,vdi_uuid)
839+ filename=KERNEL_DIR + '/' + vdi_uuid
840+ #read data from /dev/ and write into a file on /boot/guest
841+ of=open(filename,'wb')
842+ f=open(dest,'rb')
843+ #copy only vdi_size bytes
844+ data=f.read(vdi_size)
845+ of.write(data)
846+ f.close()
847+ of.close()
848+ logging.debug("Done. Filename: %s",filename)
849+ return filename
850+
851 def put_vdis(session, args):
852 params = pickle.loads(exists(args, 'params'))
853 vdi_uuids = params["vdi_uuids"]
854@@ -128,4 +156,5 @@
855
856
857 if __name__ == '__main__':
858- XenAPIPlugin.dispatch({'put_vdis': put_vdis})
859+ XenAPIPlugin.dispatch({'put_vdis': put_vdis,
860+ 'copy_kernel_vdi': copy_kernel_vdi})
861
862=== modified file 'tools/pip-requires'
863--- tools/pip-requires 2011-01-13 21:18:16 +0000
864+++ tools/pip-requires 2011-01-17 19:48:42 +0000
865@@ -26,3 +26,4 @@
866 PasteDeploy
867 paste
868 netaddr
869+glance