Merge lp:~citrix-openstack/nova/xenapi-glance-2 into lp:~hudson-openstack/nova/trunk

Proposed by Ewan Mellor
Status: Merged
Approved by: Rick Clark
Approved revision: 544
Merged at revision: 577
Proposed branch: lp:~citrix-openstack/nova/xenapi-glance-2
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 869 lines (+504/-70)
10 files modified
nova/tests/glance/__init__.py (+20/-0)
nova/tests/glance/stubs.py (+37/-0)
nova/tests/test_xenapi.py (+72/-38)
nova/tests/xenapi/stubs.py (+19/-5)
nova/virt/xenapi/fake.py (+77/-10)
nova/virt/xenapi/vm_utils.py (+242/-14)
nova/virt/xenapi/vmops.py (+2/-1)
nova/virt/xenapi_conn.py (+3/-0)
plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+31/-2)
tools/pip-requires (+1/-0)
To merge this branch: bzr merge lp:~citrix-openstack/nova/xenapi-glance-2
Reviewer Review Type Date Requested Status
Sandy Walsh (community) Approve
Ed Leafe (community) Approve
Thierry Carrez (community) ffe Approve
Jay Pipes (community) Needs Information
Rick Harris Pending
Review via email: mp+45977@code.launchpad.net

Description of the change

Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-support-for-glance blueprint.

Raw images are streamed as-is; non-raw images are streamed into place with room left for an MBR and partition table. PV vs. HVM detection now occurs on the image, immediately after it has been streamed. External kernel / ramdisk images are also supported in this mode.
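
The offset handling described above can be sketched as follows. This is an illustration in modern Python; the constants and names (MBR_GAP_SECTORS, stream_disk) are assumptions for the example, not the branch's actual vm_utils code:

```python
import io

SECTOR_SIZE = 512
MBR_GAP_SECTORS = 63  # hypothetical offset reserved for MBR + partition table

def stream_disk(dest, image_file, is_raw, chunk_size=8192):
    """Copy image bytes to a destination, leaving an MBR gap for non-raw images."""
    # Raw images start at byte 0; others are written past the reserved gap.
    dest.seek(0 if is_raw else MBR_GAP_SECTORS * SECTOR_SIZE)
    for chunk in iter(lambda: image_file.read(chunk_size), b''):
        dest.write(chunk)

# A non-raw image lands at byte 63 * 512 = 32256; BytesIO zero-fills the gap.
disk = io.BytesIO()
stream_disk(disk, io.BytesIO(b'rootfs-bytes'), is_raw=False)
assert disk.getvalue()[63 * 512:] == b'rootfs-bytes'
```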

Unit test changes include a partial Glance simulator, which is stubbed in place of glance.client.Client. This allows us to pass through the VM spawn path with either glance or objectstore backends enabled; the unit tests now cover both. A dependency upon glance has been added to pip-requires, in order to pull the Glance client code into the venv.
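
The stubbing arrangement is roughly as follows. This sketch uses a hypothetical module object and modern Python in place of the real mox stubout machinery and the glance.client import:

```python
import io

class FakeGlance(object):
    """Minimal stand-in for glance.client.Client, as used by the tests."""
    def __init__(self, host, port=None, use_ssl=False):
        pass  # connection details are ignored entirely

    def get_image(self, image_id):
        meta = {'size': 0}
        return meta, io.BytesIO(b'')  # empty image body

class _FakeModule(object):
    """Hypothetical stand-in for the glance.client module object."""
    Client = None

def stubout_glance_client(module, cls):
    # Swap the Client factory so code under test constructs the fake instead.
    module.Client = lambda *args, **kwargs: cls(*args, **kwargs)

glance_client = _FakeModule()
stubout_glance_client(glance_client, FakeGlance)
meta, body = glance_client.Client('localhost', 9292).get_image('ami-42')
assert meta['size'] == 0 and body.read() == b''
```

With this in place, the spawn path exercises the same code whether the real Glance client or the fake is behind the Client name.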

This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-use-glance-clients work.

Revision history for this message
Thierry Carrez (ttx) wrote :

OK to try to get this merged, but other branches proposed before should have review priority.

539. By Salvatore Orlando

Fix for _stream_disk

540. By Salvatore Orlando

Fixing the stub for _stream_disk as well

541. By Salvatore Orlando

pep8 fixes

542. By Salvatore Orlando

Fixed another issue in _stream_disk: it never executed _write_partition.
Fixed the fake method accordingly.
Fixed pep8 errors.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi!

"This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-use-glance-clients work."

The image-service-use-glance-clients is now in trunk. Does this affect this patch?

Cheers,
jay

review: Needs Information
Revision history for this message
Thierry Carrez (ttx) wrote :

On one hand, this branch touches relatively specific code; on the other, it seems a bit far from being ready to merge. I'd like to see Jay's question answered, and a review from someone familiar with the XenAPI stuff (Ozone team, anyone?), rather than making a hasty decision on the FFe.

review: Needs Information (ffe)
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I am not entirely sure what Ewan really meant by 'heavily reworked', but from a discussion on the mailing list I understand that the extent of the work may be related to when glance-auth gets underway in Cactus.

As for the rest, we tried this branch internally against Glance revno 34 and everything seemed to work okay.

Hope this helps!

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

I'm still getting up to speed on Swift/Glance/etc.

With this branch (vm_utils @514), xenapi would now have a hard dependency on glance. Is Glance ready to be a linchpin for Nova XenAPI deployments?

Should image store stuff be abstracted?

Just curious.

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I always assumed that Glance was going to be the level of abstraction to the storage back-end that we needed for Nova, but, like Sandy, I am getting up to speed on this too.

543. By Ewan Mellor

Merged with trunk revno 565.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I don't see the image-service-use-glance-clients work in trunk. Which csets are you referring to, Jay?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

There is a flag (xenapi_image_service) which allows you to use nova-objectstore instead of glance, so it's not a hard dependency in that sense. There is a hard dependency on the glance client code, as you've noticed, and I didn’t see a problem with that since I don't regard glance as a "third party".

In the long run, I expect nova-objectstore to be deprecated altogether, and Glance would then be the abstraction between different image services. I don't feel the need to have an image service abstraction inside Nova, when Glance is filling that role too. If someone has a need to plug in a replacement for Glance, then we can discuss that, but I don't know of any such need.
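
The flag-based selection amounts to a simple dispatch. In this sketch only the flag name xenapi_image_service comes from the branch; the helper names and return values are invented for illustration:

```python
def _fetch_from_glance(image_id):
    return 'glance:%s' % image_id        # placeholder for the Glance path

def _fetch_from_objectstore(image_id):
    return 'objectstore:%s' % image_id   # placeholder for the objectstore path

def fetch_image(flags, image_id):
    """Choose an image backend based on the xenapi_image_service flag."""
    if flags.get('xenapi_image_service') == 'glance':
        return _fetch_from_glance(image_id)
    return _fetch_from_objectstore(image_id)

assert fetch_image({'xenapi_image_service': 'glance'}, 'ami-1') == 'glance:ami-1'
assert fetch_image({}, 'ami-1') == 'objectstore:ami-1'
```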

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Many of your localized strings are incorrect. If you have more than one formatting placeholder (e.g. %s), you must use a dictionary for your values to be substituted. The reason is that a translation may re-order the words in a phrase, and position-dependent substitution (i.e., tuples) will be ambiguous.

review: Needs Fixing
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Hi Ed,

Can you elaborate a bit more? I see lots of localized strings with multiple formatting placeholders and no use of dictionaries.

Revision history for this message
Ed Leafe (ed-leafe) wrote :

> can you elaborate a bit more? I see lots of localized strings with multiple
> formatting placeholders and no use of dictionaries.

Yes, and these will have to be corrected at some point.

The problem is that translations are not word-for-word; a phrase in different languages may order the words differently. Probably the most common example would be adjectives coming before the noun in English ("the white house"), but after the noun in others (e.g., Spanish: "la casa blanca"). If your code looks like:

color = "white"
thing = "house"
print _("The %s %s") % (color, thing)

... it will print "The white house" in English, but the Spanish will print "La blanca casa", which is wrong. You need to use a mapping for the formatting, so that the code above would read:

color = "white"
thing = "house"
print _("The %(color)s %(thing)s") % locals()

Yeah, I know that this is a weak example, since the color and thing are still in English, but it's just designed to demonstrate how positional substitution is to be avoided in localization strings.

If you run xgettext on a file that uses tuples for multiple substitutions, it will emit the following:
"warning: 'msgid' format string with unnamed arguments cannot be properly localized: The translator cannot reorder the arguments. Please consider using a format string with named arguments, and a mapping instead of a tuple for the arguments."

I hope that that helps.

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Additional explanation at the bottom of this page: http://www.gnu.org/software/hello/manual/gettext/Python.html

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Thanks, that explains it.

I agree with you that we need to fix this, but I personally don't see it as much of an issue, as most of the log strings are made up of nouns, rarely verbs and adjectives. This makes the translation pretty straightforward.

Since it affects every log string in trunk, I also think it would be better to do a trunk-wide overhaul in a separate branch.

By the way, some of the languages (e.g. Japanese) have already been translated 100%... does this mean that those translations will have to be reviewed?

Revision history for this message
Jay Pipes (jaypipes) wrote :


I don't see this as a showstopper for this particular branch (though I
completely agree it's a bug).

We should be able to address this in Nova in Cactus as a separate
blueprint or even a bug report.

Just my 2 cents,

jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

> I don't see the image-service-use-glance-clients work in trunk. Which csets
> are you referring to, Jay?

I was referring to this blueprint being marked completed: https://blueprints.launchpad.net/nova/+spec/image-service-use-glance-clients.

Rick, could you elaborate on the actual csets, please? Thanks!

Revision history for this message
Rick Harris (rconradharris) wrote :

Jay-

I was referring to https://code.launchpad.net/~rconradharris/nova/xs-snap-return-image-id-before-snapshot .

As part of that effort, I removed ParallaxClient and TellerClient in Nova -- it now uses Glance's client (glance/client.py). This is why it's failing on Hudson at the moment; we don't have Glance packaged up and installed.

Is that what you meant, or am I misunderstanding something?

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi Rick,

Yeah, that's what I was referring to. OK, looks like we need to get the packaging done pronto for Glance.

/me goes off to work on that.

Cheers,
jay

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Yes, Rick, those are the changes we're talking about. Thanks. They're not yet in trunk, but as soon as they land, the xenapi-glance-2 branch will need fixing up to match.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

By "heavily reworked" I meant that nova/image/glance.py would be completely rewritten. This code is now in https://code.launchpad.net/~rconradharris/nova/xs-snap-return-image-id-before-snapshot, which is pending merge into trunk.

As soon as xs-snap-return-image-id-before-snapshot is merged, it will conflict with xenapi-glance-2. We will simply need to remove the changes to nova/image/glance.py, and most likely the pip-requires change too; no additional changes will be needed on our part.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I think that we've answered all the questions on this review so far:

  o I have requested a review from rconradharris, as per Thierry's request.

  o We will need to refresh this patch as soon as xs-snap-return-image-id-before-snapshot is merged. This will not be a big deal -- we'll just remove a few changes from xenapi-glance-2, in favour of the changes in xs-snap-return-image-id-before-snapshot.

  o We won't hold up this branch because of the I18N issues, but will fix those in a separate branch.

  o We don't feel the need for an abstraction between Nova and Glance, as Glance is already an abstraction across image services.

  o We have a flag that allows people to continue to use nova-objectstore, so we're not adding a hard dependency upon Glance.

  o We are adding a hard dependency upon Glance's client code (glance.client) and we think this is fine because Glance is part of OpenStack, not a third party.

With all these things addressed, please can someone review the actual content of the branch? I'm keen to get an FFE for this, and it will obviously need careful review in order to do that.

Thanks,

Ewan.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

There should be some means to point to our glance installation for development purposes as a flag.

I had to add this hack to get it going:

swalsh@novadev:~/openstack/xenapi-glance-2$ bzr diff
=== modified file 'bin/nova-combined'
--- bin/nova-combined 2011-01-11 06:47:35 +0000
+++ bin/nova-combined 2011-01-15 19:43:08 +0000
@@ -34,6 +34,10 @@
 if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
     sys.path.insert(0, possible_topdir)

+ glance = os.path.normpath(os.path.join(possible_topdir, "..", "glance"))
+ print "added glance path:", glance
+ sys.path.insert(0, glance)
+
 gettext.install('nova', unicode=1)

Now I'm trying to figure out how to get my images in glance so I can try to create an instance. Tips welcome!

review: Needs Fixing
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

The Glance client library is assumed to be installed, just like any other dependency. It is on PyPI, and can be easy_installed.

We _could_ add a flag to point at a source location, but that would be non-standard. Are you sure?

Revision history for this message
Jay Pipes (jaypipes) wrote :

> There should be some means to point to our glance installation for development
> purposes as a flag.

As Ewan says, Glance is now packaged in PyPI, so the sys.path hack shouldn't be necessary, nor would I recommend dealing with Glance differently than any other dependency.

> Now I'm trying to figure out how to get my images in glance so I can try to
> create an instance. Tips welcome!

This link should help you out :)

http://glance.openstack.org/client.html#adding-a-new-virtual-machine-image

Cheers!
jay

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

> The Glance client library is assumed to be installed, just like any other
> dependency. It is on PyPI, and can be easy_installed.
>
> We _could_ add a flag to point at a source location, but that would be non-
> standard. Are you sure?

Nope, certainly not sure. :) I was just thinking about situations where I'm working on bugs that span glance/nova. I prefer to run from source vs. an install so I can jump between branches.

Is there another way we can solve this problem?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I just symlink into my source tree from .../site-packages/glance. I presume you could do the same thing with virtualenv too, if your dev machine didn't belong to you.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Ah, good idea ... thx. Disregard flag. :)

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, Ewan, Salvatore,

Rick's xs-snapshot branch is now in trunk. Can you re-merge xenapi-glance-2 with trunk and make any changes needed? Thanks much for all your patience!

544. By Ewan Mellor

Merged with trunk revno 572.

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

I have re-merged this branch with trunk, following the xs-snapshot merge. Are people otherwise happy with the code? I would like to make an FFE proposal for this if the reviewers are happy. Thanks.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

I'm in the short strokes now of getting the whole thing running (using the latest branch).

1. Creating instance using objectstore
2. Switching to Glance
3. Saving snapshot. Got hung up yesterday on this.
Note: Had to convert my XenServer SR from LVM to VHD since Glance only works with VHD currently.
4. Attempting to create instance from snapshot.

Stay tuned.

(I probably won't have time to review the code as well, but I'm sure others have that covered)

Revision history for this message
Thierry Carrez (ttx) wrote :

Consider the exception granted if you can get this merged today!

review: Approve (ffe)
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Thanks Thierry!

Sandy, could you please withdraw your "Needs Fixing", as per your subsequent comments?

Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.

Jay, an Approve from you would be appreciated, if you're happy.

Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated. We need to sneak this in while Thierry's feeling reckless ;-)

Thanks everyone,

Ewan.

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Hmm, think I have issues with my LVM -> VHD conversion: http://paste.openstack.org/show/488/

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Jan 18, 2011 at 8:32 AM, Ewan Mellor <email address hidden> wrote:
> Jay, an Approve from you would be appreciated, if you're happy.

We need your help to solve a Glance packaging bug so that I can do
testing properly on the branch... but I promise this is my priority
today. If you could join IRC right quick, we can get the packaging
bug solved :) Thanks!

> Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated.  We need to sneak this in while Thierry's feeling reckless ;-)

I'll ping Rick on IRC...

-jay

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Decided to move correcting the localization issues into a separate blueprint.

review: Approve
Revision history for this message
Ed Leafe (ed-leafe) wrote :

On Jan 18, 2011, at 8:32 AM, Ewan Mellor wrote:

> Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.

 Done.

-- Ed Leafe

Revision history for this message
Sandy Walsh (sandy-walsh) :
review: Approve
Revision history for this message
Ewan Mellor (ewanmellor) wrote :

It's expecting to find the local disk as per the default XenServer installation. You pesky Rackers screw with it, I think. You need SR.other_config['i18n-key'] == 'local-storage'. You can set this on the dom0 CLI using this:

sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use
xe sr-param-set other-config:i18n-key=local-storage uuid=$sruuid

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

Irritating line wrap issue there. Just to be clear, that's

sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use

xe sr-param-set other-config:i18n-key=local-storage uuid=$sruuid

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

Thanks Ewan ... that worked great!

Now it gets tricky. I was hoping to use trunk to add my kernel/ramdisk/machine images and create an instance, then switch over to Glance to create a snapshot, but with the trunk merge it now always expects Glance (even though --image_service=nova.image.s3.S3ImageService is set) ... resulting in

http://paste.openstack.org/show/489/

I guess I have to hand-craft the images and pre-populate my Glance folder in order to get this to work now, huh?

Any notes on taking EC2-compliant kernel/ramdisk/machine images and prepping them for Glance usage?

Revision history for this message
Ewan Mellor (ewanmellor) wrote :

In the absence of any Glance CLI tools, I threw this together:

#!/usr/bin/python2.6

import sys

from glance.client import Client

hostname = sys.argv[1]
name = sys.argv[2]
typ = sys.argv[3]
filename = sys.argv[4]

c = Client(hostname, 9292)

meta = {'name': name,
        'type': typ,
        'is_public': True,
        }

with open(filename) as f:
    new_meta = c.add_image(meta, f)

print 'Stored image. Got identifier: %s' % new_meta

The third argument should be "image", "ramdisk", or "kernel". You can then use the image identifiers on the euca-run-instances command line, just as you would if you'd used euca-register.

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Jan 18, 2011 at 10:10 AM, Ewan Mellor <email address hidden> wrote:
> In the absence of any Glance CLI tools, I threw this together:

Coming soon to a Glance near you! :)

https://blueprints.launchpad.net/glance/+spec/admin-tool

-jay

Preview Diff

=== added directory 'nova/tests/glance'
=== added file 'nova/tests/glance/__init__.py'
--- nova/tests/glance/__init__.py 1970-01-01 00:00:00 +0000
+++ nova/tests/glance/__init__.py 2011-01-17 19:48:42 +0000
@@ -0,0 +1,20 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+:mod:`glance` -- Stubs for Glance
+=================================
+"""
=== added file 'nova/tests/glance/stubs.py'
--- nova/tests/glance/stubs.py 1970-01-01 00:00:00 +0000
+++ nova/tests/glance/stubs.py 2011-01-17 19:48:42 +0000
@@ -0,0 +1,37 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import StringIO
+
+import glance.client
+
+
+def stubout_glance_client(stubs, cls):
+    """Stubs out glance.client.Client"""
+    stubs.Set(glance.client, 'Client',
+              lambda *args, **kwargs: cls(*args, **kwargs))
+
+
+class FakeGlance(object):
+    def __init__(self, host, port=None, use_ssl=False):
+        pass
+
+    def get_image(self, image):
+        meta = {
+            'size': 0,
+            }
+        image_file = StringIO.StringIO('')
+        return meta, image_file
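The stubbing pattern above can be illustrated stand-alone. This sketch fakes the `glance.client` module object itself (assuming no real Glance install is available), then swaps in `Client` the way `stubout_glance_client` does; it uses `io.BytesIO` where the branch's Python 2 code uses `StringIO`.

```python
import io
import types

# Stand-in for the real glance.client module (hypothetical; in the tests
# above, the real module is patched in place via stubs.Set).
glance_client = types.ModuleType('glance.client')


class FakeGlance(object):
    def __init__(self, host, port=None, use_ssl=False):
        pass

    def get_image(self, image):
        # Return (metadata, file-like image contents), as the stub does.
        return {'size': 0}, io.BytesIO(b'')


def stubout_glance_client(module, cls):
    # Mirrors stubs.Set(glance.client, 'Client', ...): any code that later
    # calls module.Client(...) gets the fake instead of a network client.
    module.Client = lambda *args, **kwargs: cls(*args, **kwargs)


stubout_glance_client(glance_client, FakeGlance)
meta, image_file = glance_client.Client('localhost', 9292).get_image(1)
```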
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-01-13 16:52:28 +0000
+++ nova/tests/test_xenapi.py 2011-01-17 19:48:42 +0000
@@ -34,6 +34,7 @@
 from nova.virt.xenapi.vmops import SimpleDH
 from nova.tests.db import fakes as db_fakes
 from nova.tests.xenapi import stubs
+from nova.tests.glance import stubs as glance_stubs
 
 FLAGS = flags.FLAGS
 
@@ -108,18 +109,16 @@
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
         instance = db.instance_create(self.values)
-        xenapi_fake.create_vm(instance.name, 'Running')
+        vm = xenapi_fake.create_vm(instance.name, 'Running')
         result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
 
         def check():
             # check that the VM has a VBD attached to it
-            # Get XenAPI reference for the VM
-            vms = xenapi_fake.get_all('VM')
             # Get XenAPI record for VBD
             vbds = xenapi_fake.get_all('VBD')
             vbd = xenapi_fake.get_record('VBD', vbds[0])
             vm_ref = vbd['VM']
-            self.assertEqual(vm_ref, vms[0])
+            self.assertEqual(vm_ref, vm)
 
         check()
 
@@ -157,9 +156,14 @@
         FLAGS.xenapi_connection_url = 'test_url'
         FLAGS.xenapi_connection_password = 'test_pass'
         xenapi_fake.reset()
+        xenapi_fake.create_local_srs()
         db_fakes.stub_out_db_instance_api(self.stubs)
         xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
+        stubs.stubout_get_this_vm_uuid(self.stubs)
+        stubs.stubout_stream_disk(self.stubs)
+        glance_stubs.stubout_glance_client(self.stubs,
+                                           glance_stubs.FakeGlance)
         self.conn = xenapi_conn.get_connection(False)
 
     def test_list_instances_0(self):
@@ -207,40 +211,70 @@
 
         check()
 
-    def test_spawn(self):
-        instance = self._create_instance()
-
-        def check():
-            instances = self.conn.list_instances()
-            self.assertEquals(instances, [1])
-
-            # Get Nova record for VM
-            vm_info = self.conn.get_info(1)
-
-            # Get XenAPI record for VM
-            vms = xenapi_fake.get_all('VM')
-            vm = xenapi_fake.get_record('VM', vms[0])
-
-            # Check that m1.large above turned into the right thing.
-            instance_type = instance_types.INSTANCE_TYPES['m1.large']
-            mem_kib = long(instance_type['memory_mb']) << 10
-            mem_bytes = str(mem_kib << 10)
-            vcpus = instance_type['vcpus']
-            self.assertEquals(vm_info['max_mem'], mem_kib)
-            self.assertEquals(vm_info['mem'], mem_kib)
-            self.assertEquals(vm['memory_static_max'], mem_bytes)
-            self.assertEquals(vm['memory_dynamic_max'], mem_bytes)
-            self.assertEquals(vm['memory_dynamic_min'], mem_bytes)
-            self.assertEquals(vm['VCPUs_max'], str(vcpus))
-            self.assertEquals(vm['VCPUs_at_startup'], str(vcpus))
-
-            # Check that the VM is running according to Nova
-            self.assertEquals(vm_info['state'], power_state.RUNNING)
-
-            # Check that the VM is running according to XenAPI.
-            self.assertEquals(vm['power_state'], 'Running')
-
-        check()
+    def check_vm_record(self, conn):
+        instances = conn.list_instances()
+        self.assertEquals(instances, [1])
+
+        # Get Nova record for VM
+        vm_info = conn.get_info(1)
+
+        # Get XenAPI record for VM
+        vms = [rec for ref, rec
+               in xenapi_fake.get_all_records('VM').iteritems()
+               if not rec['is_control_domain']]
+        vm = vms[0]
+
+        # Check that m1.large above turned into the right thing.
+        instance_type = instance_types.INSTANCE_TYPES['m1.large']
+        mem_kib = long(instance_type['memory_mb']) << 10
+        mem_bytes = str(mem_kib << 10)
+        vcpus = instance_type['vcpus']
+        self.assertEquals(vm_info['max_mem'], mem_kib)
+        self.assertEquals(vm_info['mem'], mem_kib)
+        self.assertEquals(vm['memory_static_max'], mem_bytes)
+        self.assertEquals(vm['memory_dynamic_max'], mem_bytes)
+        self.assertEquals(vm['memory_dynamic_min'], mem_bytes)
+        self.assertEquals(vm['VCPUs_max'], str(vcpus))
+        self.assertEquals(vm['VCPUs_at_startup'], str(vcpus))
+
+        # Check that the VM is running according to Nova
+        self.assertEquals(vm_info['state'], power_state.RUNNING)
+
+        # Check that the VM is running according to XenAPI.
+        self.assertEquals(vm['power_state'], 'Running')
+
+    def _test_spawn(self, image_id, kernel_id, ramdisk_id):
+        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
+        values = {'name': 1,
+                  'id': 1,
+                  'project_id': self.project.id,
+                  'user_id': self.user.id,
+                  'image_id': image_id,
+                  'kernel_id': kernel_id,
+                  'ramdisk_id': ramdisk_id,
+                  'instance_type': 'm1.large',
+                  'mac_address': 'aa:bb:cc:dd:ee:ff',
+                  }
+        conn = xenapi_conn.get_connection(False)
+        instance = db.instance_create(values)
+        conn.spawn(instance)
+        self.check_vm_record(conn)
+
+    def test_spawn_raw_objectstore(self):
+        FLAGS.xenapi_image_service = 'objectstore'
+        self._test_spawn(1, None, None)
+
+    def test_spawn_objectstore(self):
+        FLAGS.xenapi_image_service = 'objectstore'
+        self._test_spawn(1, 2, 3)
+
+    def test_spawn_raw_glance(self):
+        FLAGS.xenapi_image_service = 'glance'
+        self._test_spawn(1, None, None)
+
+    def test_spawn_glance(self):
+        FLAGS.xenapi_image_service = 'glance'
+        self._test_spawn(1, 2, 3)
 
     def tearDown(self):
         super(XenAPIVMTestCase, self).tearDown()
 
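The memory assertions in `check_vm_record` reduce to two bit shifts: Nova stores `memory_mb`, `get_info` reports KiB, and the XenAPI VM record stores bytes as a string. A sketch of that arithmetic (the `memory_mb` value below is hypothetical, not taken from the m1.large flavor definition):

```python
memory_mb = 8192                 # hypothetical flavor memory, in MiB
mem_kib = memory_mb << 10        # MiB -> KiB, as get_info reports it
mem_bytes = str(mem_kib << 10)   # KiB -> bytes, stringified as XenAPI stores it
```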
=== modified file 'nova/tests/xenapi/stubs.py'
--- nova/tests/xenapi/stubs.py 2011-01-10 09:12:48 +0000
+++ nova/tests/xenapi/stubs.py 2011-01-17 19:48:42 +0000
@@ -115,6 +115,21 @@
     stubs.Set(volume_utils, '_get_target', fake_get_target)
 
 
+def stubout_get_this_vm_uuid(stubs):
+    def f():
+        vms = [rec['uuid'] for ref, rec
+               in fake.get_all_records('VM').iteritems()
+               if rec['is_control_domain']]
+        return vms[0]
+    stubs.Set(vm_utils, 'get_this_vm_uuid', f)
+
+
+def stubout_stream_disk(stubs):
+    def f(_1, _2, _3, _4):
+        pass
+    stubs.Set(vm_utils, '_stream_disk', f)
+
+
 class FakeSessionForVMTests(fake.SessionBase):
     """ Stubs out a XenAPISession for VM tests """
     def __init__(self, uri):
@@ -124,7 +139,10 @@
         return self.xenapi.network.get_all_records()
 
     def host_call_plugin(self, _1, _2, _3, _4, _5):
-        return ''
+        sr_ref = fake.get_all('SR')[0]
+        vdi_ref = fake.create_vdi('', False, sr_ref, False)
+        vdi_rec = fake.get_record('VDI', vdi_ref)
+        return '<string>%s</string>' % vdi_rec['uuid']
 
     def VM_start(self, _1, ref, _2, _3):
         vm = fake.get_record('VM', ref)
@@ -159,10 +177,6 @@
     def __init__(self, uri):
         super(FakeSessionForVolumeTests, self).__init__(uri)
 
-    def VBD_plug(self, _1, ref):
-        rec = fake.get_record('VBD', ref)
-        rec['currently-attached'] = True
-
     def VDI_introduce(self, _1, uuid, _2, _3, _4, _5,
                       _6, _7, _8, _9, _10, _11):
         valid_vdi = False
=== modified file 'nova/virt/xenapi/fake.py'
--- nova/virt/xenapi/fake.py 2011-01-04 05:26:41 +0000
+++ nova/virt/xenapi/fake.py 2011-01-17 19:48:42 +0000
@@ -76,6 +76,7 @@
     for c in _CLASSES:
         _db_content[c] = {}
     create_host('fake')
+    create_vm('fake', 'Running', is_a_template=False, is_control_domain=True)
 
 
 def create_host(name_label):
@@ -136,14 +137,21 @@
 
 
 def create_vbd(vm_ref, vdi_ref):
-    vbd_rec = {'VM': vm_ref, 'VDI': vdi_ref}
+    vbd_rec = {
+        'VM': vm_ref,
+        'VDI': vdi_ref,
+        'currently_attached': False,
+        }
     vbd_ref = _create_object('VBD', vbd_rec)
     after_VBD_create(vbd_ref, vbd_rec)
     return vbd_ref
 
 
 def after_VBD_create(vbd_ref, vbd_rec):
-    """Create backref from VM to VBD when VBD is created"""
+    """Create read-only fields and backref from VM to VBD when VBD is
+    created."""
+    vbd_rec['currently_attached'] = False
+    vbd_rec['device'] = ''
     vm_ref = vbd_rec['VM']
     vm_rec = _db_content['VM'][vm_ref]
     vm_rec['VBDs'] = [vbd_ref]
@@ -152,9 +160,10 @@
     vbd_rec['vm_name_label'] = vm_name_label
 
 
-def create_pbd(config, sr_ref, attached):
+def create_pbd(config, host_ref, sr_ref, attached):
     return _create_object('PBD', {
         'device-config': config,
+        'host': host_ref,
         'SR': sr_ref,
         'currently-attached': attached,
         })
@@ -167,6 +176,33 @@
         })
 
 
+def create_local_srs():
+    """Create an SR that looks like the one created on the local disk by
+    default by the XenServer installer. Do this one per host."""
+    for host_ref in _db_content['host'].keys():
+        _create_local_sr(host_ref)
+
+
+def _create_local_sr(host_ref):
+    sr_ref = _create_object('SR', {
+        'name_label': 'Local storage',
+        'type': 'lvm',
+        'content_type': 'user',
+        'shared': False,
+        'physical_size': str(1 << 30),
+        'physical_utilisation': str(0),
+        'virtual_allocation': str(0),
+        'other_config': {
+            'i18n-original-value-name_label': 'Local storage',
+            'i18n-key': 'local-storage',
+            },
+        'VDIs': []
+        })
+    pbd_ref = create_pbd('', host_ref, sr_ref, True)
+    _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref]
+    return sr_ref
+
+
 def _create_object(table, obj):
     ref = str(uuid.uuid4())
     obj['uuid'] = str(uuid.uuid4())
@@ -179,9 +215,10 @@
     # Forces fake to support iscsi only
     if sr_type != 'iscsi':
         raise Failure(['SR_UNKNOWN_DRIVER', sr_type])
+    host_ref = _db_content['host'].keys()[0]
     sr_ref = _create_object(table, obj[2])
     vdi_ref = create_vdi('', False, sr_ref, False)
-    pbd_ref = create_pbd('', sr_ref, True)
+    pbd_ref = create_pbd('', host_ref, sr_ref, True)
     _db_content['SR'][sr_ref]['VDIs'] = [vdi_ref]
     _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref]
     _db_content['VDI'][vdi_ref]['SR'] = sr_ref
@@ -233,6 +270,20 @@
     def __init__(self, uri):
         self._session = None
 
+    def VBD_plug(self, _1, ref):
+        rec = get_record('VBD', ref)
+        if rec['currently_attached']:
+            raise Failure(['DEVICE_ALREADY_ATTACHED', ref])
+        rec['currently_attached'] = True
+        rec['device'] = rec['userdevice']
+
+    def VBD_unplug(self, _1, ref):
+        rec = get_record('VBD', ref)
+        if not rec['currently_attached']:
+            raise Failure(['DEVICE_ALREADY_DETACHED', ref])
+        rec['currently_attached'] = False
+        rec['device'] = ''
+
     def xenapi_request(self, methodname, params):
         if methodname.startswith('login'):
             self._login(methodname, params)
@@ -289,6 +340,8 @@
             return lambda *params: self._getter(name, params)
         elif self._is_create(name):
             return lambda *params: self._create(name, params)
+        elif self._is_destroy(name):
+            return lambda *params: self._destroy(name, params)
         else:
             return None
 
@@ -299,10 +352,16 @@
                 bits[1].startswith(getter and 'get_' or 'set_'))
 
     def _is_create(self, name):
+        return self._is_method(name, 'create')
+
+    def _is_destroy(self, name):
+        return self._is_method(name, 'destroy')
+
+    def _is_method(self, name, meth):
         bits = name.split('.')
         return (len(bits) == 2 and
                 bits[0] in _CLASSES and
-                bits[1] == 'create')
+                bits[1] == meth)
 
     def _getter(self, name, params):
         self._check_session(params)
@@ -370,10 +429,9 @@
             _create_sr(cls, params) or _create_object(cls, params[1])
 
         # Call hook to provide any fixups needed (ex. creating backrefs)
-        try:
-            globals()["after_%s_create" % cls](ref, params[1])
-        except KeyError:
-            pass
+        after_hook = 'after_%s_create' % cls
+        if after_hook in globals():
+            globals()[after_hook](ref, params[1])
 
         obj = get_record(cls, ref)
 
@@ -383,6 +441,15 @@
 
         return ref
 
+    def _destroy(self, name, params):
+        self._check_session(params)
+        self._check_arg_count(params, 2)
+        table, _ = name.split('.')
+        ref = params[1]
+        if ref not in _db_content[table]:
+            raise Failure(['HANDLE_INVALID', table, ref])
+        del _db_content[table][ref]
+
     def _async(self, name, params):
         task_ref = create_task(name)
         task = _db_content['task'][task_ref]
@@ -420,7 +487,7 @@
         try:
             return result[0]
         except IndexError:
-            return None
+            raise Failure(['UUID_INVALID', v, result, recs, k])
 
         return result
 
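The hook-dispatch change in `fake.py` (explicit membership test instead of `try`/`except KeyError`) matters because the old form also swallowed `KeyError`s raised *inside* a hook. A stand-alone sketch with a hypothetical hook registry:

```python
# Hypothetical registry standing in for globals() in fake.py.
hooks = {}


def after_VBD_create(ref, rec):
    # Fixup applied by the hook, as in fake.py's after_VBD_create.
    rec['currently_attached'] = False


hooks['after_VBD_create'] = after_VBD_create


def run_after_hook(cls, ref, rec):
    after_hook = 'after_%s_create' % cls
    if after_hook in hooks:  # explicit check: a KeyError raised inside
        hooks[after_hook](ref, rec)  # the hook is no longer masked
    # no hook registered for cls: silently skip, as before


rec = {'VM': 'vm-ref'}
run_after_hook('VBD', 'vbd-ref', rec)  # hook found and run
run_after_hook('SR', 'sr-ref', {})     # no hook: skipped without error
```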
=== modified file 'nova/virt/xenapi/vm_utils.py'
--- nova/virt/xenapi/vm_utils.py 2011-01-12 19:47:40 +0000
+++ nova/virt/xenapi/vm_utils.py 2011-01-17 19:48:42 +0000
@@ -19,11 +19,14 @@
 their attributes like VDIs, VIFs, as well as their lookup functions.
 """
 
+import os
 import pickle
+import re
 import urllib
 from xml.dom import minidom
 
 from eventlet import event
+import glance.client
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -47,17 +50,23 @@
                   'Crashed': power_state.CRASHED}
 
 
+SECTOR_SIZE = 512
+MBR_SIZE_SECTORS = 63
+MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE
+KERNEL_DIR = '/boot/guest'
+
+
 class ImageType:
     """
     Enumeration class for distinguishing different image types
         0 - kernel/ramdisk image (goes on dom0's filesystem)
         1 - disk image (local SR, partitioned by objectstore plugin)
         2 - raw disk image (local SR, NOT partitioned by plugin)
     """
 
     KERNEL_RAMDISK = 0
     DISK = 1
     DISK_RAW = 2
 
 
 class VMHelper(HelperBase):
@@ -207,6 +216,25 @@
         return vif_ref
 
     @classmethod
+    def create_vdi(cls, session, sr_ref, name_label, virtual_size, read_only):
+        """Create a VDI record and returns its reference."""
+        vdi_ref = session.get_xenapi().VDI.create(
+            {'name_label': name_label,
+             'name_description': '',
+             'SR': sr_ref,
+             'virtual_size': str(virtual_size),
+             'type': 'User',
+             'sharable': False,
+             'read_only': read_only,
+             'xenstore_data': {},
+             'other_config': {},
+             'sm_config': {},
+             'tags': []})
+        LOG.debug(_('Created VDI %s (%s, %s, %s) on %s.'), vdi_ref,
+                  name_label, virtual_size, read_only, sr_ref)
+        return vdi_ref
+
+    @classmethod
     def create_snapshot(cls, session, instance_id, vm_ref, label):
         """ Creates Snapshot (Template) VM, Snapshot VBD, Snapshot VDI,
             Snapshot VHD
@@ -256,15 +284,71 @@
     def fetch_image(cls, session, instance_id, image, user, project, type):
         """
         type is interpreted as an ImageType instance
+        Related flags:
+            xenapi_image_service = ['glance', 'objectstore']
+            glance_address = 'address for glance services'
+            glance_port = 'port for glance services'
         """
+        access = AuthManager().get_access_key(user, project)
+
+        if FLAGS.xenapi_image_service == 'glance':
+            return cls._fetch_image_glance(session, instance_id, image,
+                                           access, type)
+        else:
+            return cls._fetch_image_objectstore(session, instance_id, image,
+                                                access, user.secret, type)
+
+    @classmethod
+    def _fetch_image_glance(cls, session, instance_id, image, access, type):
+        sr = find_sr(session)
+        if sr is None:
+            raise exception.NotFound('Cannot find SR to write VDI to')
+
+        c = glance.client.Client(FLAGS.glance_host, FLAGS.glance_port)
+
+        meta, image_file = c.get_image(image)
+        virtual_size = int(meta['size'])
+        vdi_size = virtual_size
+        LOG.debug(_("Size for image %s:%d"), image, virtual_size)
+        if type == ImageType.DISK:
+            # Make room for MBR.
+            vdi_size += MBR_SIZE_BYTES
+
+        vdi = cls.create_vdi(session, sr, _('Glance image %s') % image,
+                             vdi_size, False)
+
+        with_vdi_attached_here(session, vdi, False,
+                               lambda dev:
+                               _stream_disk(dev, type,
+                                            virtual_size, image_file))
+        if (type == ImageType.KERNEL_RAMDISK):
+            #we need to invoke a plugin for copying VDI's
+            #content into proper path
+            LOG.debug(_("Copying VDI %s to /boot/guest on dom0"), vdi)
+            fn = "copy_kernel_vdi"
+            args = {}
+            args['vdi-ref'] = vdi
+            #let the plugin copy the correct number of bytes
+            args['image-size'] = str(vdi_size)
+            task = session.async_call_plugin('glance', fn, args)
+            filename = session.wait_for_task(instance_id, task)
+            #remove the VDI as it is not needed anymore
+            session.get_xenapi().VDI.destroy(vdi)
+            LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi)
+            return filename
+        else:
+            return session.get_xenapi().VDI.get_uuid(vdi)
+
+    @classmethod
+    def _fetch_image_objectstore(cls, session, instance_id, image, access,
+                                 secret, type):
         url = images.image_url(image)
-        access = AuthManager().get_access_key(user, project)
         LOG.debug(_("Asking xapi to fetch %s as %s"), url, access)
         fn = (type != ImageType.KERNEL_RAMDISK) and 'get_vdi' or 'get_kernel'
         args = {}
         args['src_url'] = url
         args['username'] = access
-        args['password'] = user.secret
+        args['password'] = secret
         args['add_partition'] = 'false'
         args['raw'] = 'false'
         if type != ImageType.KERNEL_RAMDISK:
@@ -276,14 +360,21 @@
         return uuid
 
     @classmethod
-    def lookup_image(cls, session, vdi_ref):
+    def lookup_image(cls, session, instance_id, vdi_ref):
+        if FLAGS.xenapi_image_service == 'glance':
+            return cls._lookup_image_glance(session, vdi_ref)
+        else:
+            return cls._lookup_image_objectstore(session, instance_id, vdi_ref)
+
+    @classmethod
+    def _lookup_image_objectstore(cls, session, instance_id, vdi_ref):
         LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref)
         fn = "is_vdi_pv"
         args = {}
         args['vdi-ref'] = vdi_ref
-        #TODO: Call proper function in plugin
         task = session.async_call_plugin('objectstore', fn, args)
-        pv_str = session.wait_for_task(task)
+        pv_str = session.wait_for_task(instance_id, task)
+        pv = None
         if pv_str.lower() == 'true':
             pv = True
         elif pv_str.lower() == 'false':
@@ -292,6 +383,23 @@
         return pv
 
     @classmethod
+    def _lookup_image_glance(cls, session, vdi_ref):
+        LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref)
+
+        def is_vdi_pv(dev):
+            LOG.debug(_("Running pygrub against %s"), dev)
+            output = os.popen('pygrub -qn /dev/%s' % dev)
+            for line in output.readlines():
+                #try to find kernel string
+                m = re.search('(?<=kernel:)/.*(?:>)', line)
+                if m and m.group(0).find('xen') != -1:
+                    LOG.debug(_("Found Xen kernel %s") % m.group(0))
+                    return True
+            LOG.debug(_("No Xen kernel found. Booting HVM."))
+            return False
+        return with_vdi_attached_here(session, vdi_ref, True, is_vdi_pv)
+
+    @classmethod
     def lookup(cls, session, i):
         """Look the instance i up, and returns it if available"""
         vms = session.get_xenapi().VM.get_by_name_label(i)
@@ -464,3 +572,123 @@
     vdi_ref = vdi_refs[0]
     vdi_rec = session.get_xenapi().VDI.get_record(vdi_ref)
     return vdi_ref, vdi_rec
+
+
+def find_sr(session):
+    host = session.get_xenapi_host()
+    srs = session.get_xenapi().SR.get_all()
+    for sr in srs:
+        sr_rec = session.get_xenapi().SR.get_record(sr)
+        if not ('i18n-key' in sr_rec['other_config'] and
+                sr_rec['other_config']['i18n-key'] == 'local-storage'):
+            continue
+        for pbd in sr_rec['PBDs']:
+            pbd_rec = session.get_xenapi().PBD.get_record(pbd)
+            if pbd_rec['host'] == host:
+                return sr
+    return None
+
+
+def with_vdi_attached_here(session, vdi, read_only, f):
+    this_vm_ref = get_this_vm_ref(session)
+    vbd_rec = {}
+    vbd_rec['VM'] = this_vm_ref
+    vbd_rec['VDI'] = vdi
+    vbd_rec['userdevice'] = 'autodetect'
+    vbd_rec['bootable'] = False
+    vbd_rec['mode'] = read_only and 'RO' or 'RW'
+    vbd_rec['type'] = 'disk'
+    vbd_rec['unpluggable'] = True
+    vbd_rec['empty'] = False
+    vbd_rec['other_config'] = {}
+    vbd_rec['qos_algorithm_type'] = ''
+    vbd_rec['qos_algorithm_params'] = {}
+    vbd_rec['qos_supported_algorithms'] = []
+    LOG.debug(_('Creating VBD for VDI %s ... '), vdi)
+    vbd = session.get_xenapi().VBD.create(vbd_rec)
+    LOG.debug(_('Creating VBD for VDI %s done.'), vdi)
+    try:
+        LOG.debug(_('Plugging VBD %s ... '), vbd)
+        session.get_xenapi().VBD.plug(vbd)
+        LOG.debug(_('Plugging VBD %s done.'), vbd)
+        return f(session.get_xenapi().VBD.get_device(vbd))
+    finally:
+        LOG.debug(_('Destroying VBD for VDI %s ... '), vdi)
+        vbd_unplug_with_retry(session, vbd)
+        ignore_failure(session.get_xenapi().VBD.destroy, vbd)
+        LOG.debug(_('Destroying VBD for VDI %s done.'), vdi)
+
+
+def vbd_unplug_with_retry(session, vbd):
+    """Call VBD.unplug on the given VBD, with a retry if we get
+    DEVICE_DETACH_REJECTED. For reasons which I don't understand, we're
+    seeing the device still in use, even when all processes using the device
+    should be dead."""
+    while True:
+        try:
+            session.get_xenapi().VBD.unplug(vbd)
+            LOG.debug(_('VBD.unplug successful first time.'))
+            return
+        except VMHelper.XenAPI.Failure, e:
+            if (len(e.details) > 0 and
+                e.details[0] == 'DEVICE_DETACH_REJECTED'):
+                LOG.debug(_('VBD.unplug rejected: retrying...'))
+                time.sleep(1)
+            elif (len(e.details) > 0 and
+                  e.details[0] == 'DEVICE_ALREADY_DETACHED'):
+                LOG.debug(_('VBD.unplug successful eventually.'))
+                return
+            else:
+                LOG.error(_('Ignoring XenAPI.Failure in VBD.unplug: %s'),
+                          e)
+                return
+
+
+def ignore_failure(func, *args, **kwargs):
+    try:
+        return func(*args, **kwargs)
+    except VMHelper.XenAPI.Failure, e:
+        LOG.error(_('Ignoring XenAPI.Failure %s'), e)
+        return None
+
+
+def get_this_vm_uuid():
+    with file('/sys/hypervisor/uuid') as f:
+        return f.readline().strip()
+
+
+def get_this_vm_ref(session):
+    return session.get_xenapi().VM.get_by_uuid(get_this_vm_uuid())
+
+
+def _stream_disk(dev, type, virtual_size, image_file):
+    offset = 0
+    if type == ImageType.DISK:
+        offset = MBR_SIZE_BYTES
+        _write_partition(virtual_size, dev)
+
+    with open('/dev/%s' % dev, 'wb') as f:
+        f.seek(offset)
+        for chunk in image_file:
+            f.write(chunk)
+
+
+def _write_partition(virtual_size, dev):
+    dest = '/dev/%s' % dev
+    mbr_last = MBR_SIZE_SECTORS - 1
+    primary_first = MBR_SIZE_SECTORS
+    primary_last = MBR_SIZE_SECTORS + (virtual_size / SECTOR_SIZE) - 1
+
+    LOG.debug(_('Writing partition table %d %d to %s...'),
+              primary_first, primary_last, dest)
+
+    def execute(cmd, process_input=None, check_exit_code=True):
+        return utils.execute(cmd=cmd,
+                             process_input=process_input,
+                             check_exit_code=check_exit_code)
+
+    execute('parted --script %s mklabel msdos' % dest)
+    execute('parted --script %s mkpart primary %ds %ds' %
+            (dest, primary_first, primary_last))
+
+    LOG.debug(_('Writing partition table %s done.'), dest)
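The layout used by `_stream_disk`/`_write_partition` above comes down to simple sector arithmetic: the first 63 sectors are reserved for the MBR, and the single primary partition holds the streamed image immediately after. A sketch using the constants from the diff (the helper name is illustrative):

```python
SECTOR_SIZE = 512
MBR_SIZE_SECTORS = 63
MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE  # 32256 bytes reserved


def partition_bounds(virtual_size):
    """First/last sector of the primary partition, leaving room for the
    MBR, as passed to `parted ... mkpart primary <first>s <last>s`."""
    primary_first = MBR_SIZE_SECTORS
    primary_last = MBR_SIZE_SECTORS + (virtual_size // SECTOR_SIZE) - 1
    return primary_first, primary_last


# A 1 MiB image occupies 2048 sectors, starting right after the MBR.
first, last = partition_bounds(1 << 20)
```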
=== modified file 'nova/virt/xenapi/vmops.py'
--- nova/virt/xenapi/vmops.py 2011-01-17 17:16:36 +0000
+++ nova/virt/xenapi/vmops.py 2011-01-17 19:48:42 +0000
@@ -85,7 +85,8 @@
         #Have a look at the VDI and see if it has a PV kernel
         pv_kernel = False
         if not instance.kernel_id:
-            pv_kernel = VMHelper.lookup_image(self._session, vdi_ref)
+            pv_kernel = VMHelper.lookup_image(self._session, instance.id,
+                                              vdi_ref)
         kernel = None
         if instance.kernel_id:
             kernel = VMHelper.fetch_image(self._session, instance.id,
 
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2011-01-17 17:16:36 +0000
+++ nova/virt/xenapi_conn.py 2011-01-17 19:48:42 +0000
@@ -89,6 +89,9 @@
                     'The interval used for polling of remote tasks '
                     '(Async.VM.start, etc). Used only if '
                     'connection_type=xenapi.')
+flags.DEFINE_string('xenapi_image_service',
+                    'glance',
+                    'Where to get VM images: glance or objectstore.')
 flags.DEFINE_float('xenapi_vhd_coalesce_poll_interval',
                    5.0,
                    'The interval used for polling of coalescing vhds.'
 
=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance'
--- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-07 03:37:33 +0000
+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-17 19:48:42 +0000
@@ -18,7 +18,7 @@
 # under the License.
 
 #
-# XenAPI plugin for putting images into glance
+# XenAPI plugin for managing glance images
 #
 
 import base64
@@ -40,8 +40,36 @@
 configure_logging('glance')
 
 CHUNK_SIZE = 8192
+KERNEL_DIR = '/boot/guest'
 FILE_SR_PATH = '/var/run/sr-mount'
 
+def copy_kernel_vdi(session, args):
+    vdi = exists(args, 'vdi-ref')
+    size = exists(args, 'image-size')
+    #Use the uuid as a filename
+    vdi_uuid = session.xenapi.VDI.get_uuid(vdi)
+    copy_args = {'vdi_uuid': vdi_uuid, 'vdi_size': int(size)}
+    filename = with_vdi_in_dom0(session, vdi, False,
+                                lambda dev:
+                                _copy_kernel_vdi('/dev/%s' % dev, copy_args))
+    return filename
+
+def _copy_kernel_vdi(dest, copy_args):
+    vdi_uuid = copy_args['vdi_uuid']
+    vdi_size = copy_args['vdi_size']
+    logging.debug("copying kernel/ramdisk file from %s to /boot/guest/%s", dest, vdi_uuid)
+    filename = KERNEL_DIR + '/' + vdi_uuid
+    #read data from /dev/ and write into a file on /boot/guest
+    of = open(filename, 'wb')
+    f = open(dest, 'rb')
+    #copy only vdi_size bytes
+    data = f.read(vdi_size)
+    of.write(data)
+    f.close()
+    of.close()
+    logging.debug("Done. Filename: %s", filename)
+    return filename
+
 def put_vdis(session, args):
     params = pickle.loads(exists(args, 'params'))
     vdi_uuids = params["vdi_uuids"]
@@ -128,4 +156,5 @@
 
 
 if __name__ == '__main__':
-    XenAPIPlugin.dispatch({'put_vdis': put_vdis})
+    XenAPIPlugin.dispatch({'put_vdis': put_vdis,
+                           'copy_kernel_vdi': copy_kernel_vdi})
 
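The key behaviour of the plugin's `_copy_kernel_vdi` is the bounded read: it copies exactly `image-size` bytes from the attached block device, ignoring whatever padding follows in the VDI. A sketch of that bounded copy using ordinary temporary files in place of `/dev/<dev>` and `/boot/guest` (paths and helper name are stand-ins):

```python
import os
import tempfile


def copy_bounded(src_path, dest_path, size):
    """Copy exactly `size` bytes, dropping any padding past them."""
    with open(src_path, 'rb') as src, open(dest_path, 'wb') as dest:
        dest.write(src.read(size))


tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'device')   # stands in for /dev/<dev>
dst = os.path.join(tmpdir, 'kernel')   # stands in for /boot/guest/<uuid>

with open(src, 'wb') as f:
    f.write(b'KERNEL' + b'\0' * 100)   # payload plus trailing padding

copy_bounded(src, dst, 6)              # only the 6 payload bytes survive
```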
=== modified file 'tools/pip-requires'
--- tools/pip-requires 2011-01-13 21:18:16 +0000
+++ tools/pip-requires 2011-01-17 19:48:42 +0000
@@ -26,3 +26,4 @@
 PasteDeploy
 paste
 netaddr
+glance