Merge lp:~citrix-openstack/nova/xenapi-glance-2 into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: Rick Clark
Approved revision: 544
Merged at revision: 577
Proposed branch: lp:~citrix-openstack/nova/xenapi-glance-2
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 869 lines (+504/-70), 10 files modified
  nova/tests/glance/__init__.py (+20/-0)
  nova/tests/glance/stubs.py (+37/-0)
  nova/tests/test_xenapi.py (+72/-38)
  nova/tests/xenapi/stubs.py (+19/-5)
  nova/virt/xenapi/fake.py (+77/-10)
  nova/virt/xenapi/vm_utils.py (+242/-14)
  nova/virt/xenapi/vmops.py (+2/-1)
  nova/virt/xenapi_conn.py (+3/-0)
  plugins/xenserver/xenapi/etc/xapi.d/plugins/glance (+31/-2)
  tools/pip-requires (+1/-0)
To merge this branch: bzr merge lp:~citrix-openstack/nova/xenapi-glance-2
Related bugs: none
Related blueprints: none
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Sandy Walsh (community) | | | Approve
Ed Leafe (community) | | | Approve
Thierry Carrez (community) | ffe | | Approve
Jay Pipes (community) | | | Needs Information
Rick Harris | | | Pending

Review via email: mp+45977@code.launchpad.net
Commit message
Description of the change
Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-
Images may be streamed raw, or will be streamed into the right place to allow room for an MBR and partition table, if using non-raw images. PV vs HVM detection now occurs on the image, immediately after it has been streamed. External kernel / ramdisk are also supported in this mode.
Unit test changes include a partial Glance simulator, which is stubbed in place of Glance.
This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-
Thierry Carrez (ttx) wrote:
539. By Salvatore Orlando: Fix for _stream_disk
540. By Salvatore Orlando: Fixing the stub for _stream_disk as well
541. By Salvatore Orlando: pep8 fixes
542. By Salvatore Orlando: Fixed another issue in _stream_disk, as it never executed _write_partition. Fixed the fake method accordingly. Fixed pep8 errors.
Jay Pipes (jaypipes) wrote:
Hi!
"This includes minor fixes to nova.image.glance. This code is expected to be heavily reworked anyway with the image-service-
The image-service-
Cheers,
jay
Thierry Carrez (ttx) wrote:
On one hand, this branch touches relatively specific code; on the other, it seems a bit far from ready to be merged. I'd like to see Jay's question answered, and a review from someone familiar with the XenAPI stuff (Ozone team, anyone?), rather than making a hasty decision on the FFe.
Armando Migliaccio (armando-migliaccio) wrote:
I am not entirely sure what Ewan really meant by 'heavily reworked', but from a discussion on the mailing list I understand that the extent of the work may be related to when glance-auth gets underway in Cactus.
As for the rest, we tried this branch internally against Glance revno 34 and everything seemed to work okay.
Hope this helps!
Sandy Walsh (sandy-walsh) wrote:
I'm still getting up to speed on Swift/Glance/etc.
With this branch (vm_utils @514), xenapi would now have a hard dependency on Glance. Is Glance ready to be a linchpin for Nova XenAPI deployments?
Should the image store stuff be abstracted?
Just curious.
Armando Migliaccio (armando-migliaccio) wrote:
I always assumed that Glance was going to be the level of abstraction to the storage back-end that we needed for Nova but, like Sandy, I am still getting up to speed on this too.
543. By Ewan Mellor: Merged with trunk revno 565.
Ewan Mellor (ewanmellor) wrote:
I don't see the image-service-
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Jay Pipes
> Sent: 13 January 2011 11:13
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> Review: Needs Information
> Hi!
>
> "This includes minor fixes to nova.image.glance. This code is expected
> to be heavily reworked anyway with the image-service-
> work."
>
> The image-service-
> this patch?
>
> Cheers,
> jay
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Ewan Mellor (ewanmellor) wrote:
There is a flag (xenapi_
In the long run, I expect nova-objectstore to be deprecated altogether, and Glance would then be the abstraction between different image services. I don't feel the need to have an image service abstraction inside Nova, when Glance is filling that role too. If someone has a need to plug in a replacement for Glance, then we can discuss that, but I don't know of any such need.
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Sandy Walsh
> Sent: 14 January 2011 10:07
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> I'm still getting up to speed on Swift/Glance/etc.
>
> With this branch (vm_utils @514) xenapi would now have a hard
> dependency on glance. Is Glance ready to be a lynch-pin for Nova XenAPI
> deployments?
>
> Should image store stuff be abstracted?
>
> Just curious.
>
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Ed Leafe (ed-leafe) wrote:
Many of your localized strings are incorrect. If you have more than one formatting placeholder (e.g. %s), you must use a dictionary for your values to be substituted. The reason is that a translation may re-order the words in a phrase, and position-dependent substitution (i.e., tuples) will be ambiguous.
Armando Migliaccio (armando-migliaccio) wrote:
Hi Ed,
can you elaborate a bit more? I see lots of localized strings with multiple formatting placeholders and no use of dictionaries.
> Many of your localized strings are incorrect. If you have more than one
> formatting placeholder (e.g. %s), you must use a dictionary for your values to
> be substituted. The reason is that a translation may re-order the words in a
> phrase, and position-dependent substitution (i.e., tuples) will be ambiguous.
Ed Leafe (ed-leafe) wrote:
> can you elaborate a bit more? I see lots of localized strings with multiple
> formatting placeholders and no use of dictionaries.
Yes, and these will have to be corrected at some point.
The problem is that translations are not word-for-word; a phrase in different languages may order the words differently. Probably the most common example would be adjectives coming before the noun in English ("the white house"), but after the noun in others (e.g., Spanish: "la casa blanca"). If your code looks like:
color = "white"
thing = "house"
print _("The %s %s") % (color, thing)
... it will print "The white house" in English, but the Spanish will print "La blanca casa", which is wrong. You need to use a mapping for the formatting, so that the code above would read:
color = "white"
thing = "house"
print _("The %(color)s %(thing)s") % locals()
Yeah, I know that this is a weak example, since the color and thing are still in English, but it's just designed to demonstrate how positional substitution is to be avoided in localization strings.
If you run xgettext on a file that uses tuples for multiple substitutions, it will emit the following:
"warning: 'msgid' format string with unnamed arguments cannot be properly localized: The translator cannot reorder the arguments. Please consider using a format string with named arguments, and a mapping instead of a tuple for the arguments."
I hope that that helps.
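A tiny self-contained illustration of the point (not code from the branch): with named placeholders and a mapping, a translator is free to reorder the words, which a positional tuple cannot express.

```python
def render(template, color, thing):
    # Equivalent to the _("...") % locals() idiom: the template pulls
    # values out of a mapping by name, so placeholder order is free.
    return template % {'color': color, 'thing': thing}

# English template: adjective before the noun.
english = render("The %(color)s %(thing)s", "white", "house")

# A Spanish translation can swap the placeholders in the .po file
# without any change to the code that supplies the values.
spanish = render("La %(thing)s %(color)s", "blanca", "casa")
```

Since both templates consume the same mapping, the call sites never need to know which order a given locale chose.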
Ed Leafe (ed-leafe) wrote:
Additional explanation at the bottom of this page: http://
Armando Migliaccio (armando-migliaccio) wrote:
Thanks, that explains it.
I agree with you that we need to fix this, but I personally don't see it as much of an issue, since most of the log strings are made up of nouns, rarely verbs and adjectives. This makes the translation pretty straightforward.
Since it affects every log message in trunk, I also think it would be better to do a trunk overhaul in a separate branch.
By the way, some of the languages (e.g. Japanese) have already been translated 100%... does this mean that those translations have to be reviewed?
> > can you elaborate a bit more? I see lots of localized strings with multiple
> > formatting placeholders and no use of dictionaries.
>
> Yes, and these will have to be corrected at some point.
>
> The problem is that translations are not word-for-word; a phrase in different
> languages may order the words differently. Probably the most common example
> would be adjectives coming before the noun in English ("the white house"), but
> after the noun in others (e.g., Spanish: "la casa blanca"). If your code looks
> like:
>
> color = "white"
> thing = "house"
> print _("The %s %s") % (color, thing)
>
> ... it will print "The white house" in English, but the Spanish will print "La
> blanca casa", which is wrong. You need to use a mapping for the formatting, so
> that the code above would read:
>
> color = "white"
> thing = "house"
> print _("The %(color)s %(thing)s") % locals()
>
> Yeah, I know that this is a weak example, since the color and thing are still
> in English, but it's just designed to demonstrate how positional substitution
> is to be avoided in localization strings.
>
> If you run xgettext on a file that uses tuples for multiple substitutions, it
> will emit the following:
> "warning: 'msgid' format string with unnamed arguments cannot be properly
> localized: The translator cannot reorder the arguments. Please consider using
> a format string with named arguments, and a mapping instead of a tuple for the
> arguments."
>
> I hope that that helps.
Jay Pipes (jaypipes) wrote:
On Fri, Jan 14, 2011 at 12:14 PM, Armando Migliaccio
<email address hidden> wrote:
> Thanks, that explains it.
>
> I agree with you that we need to fix this, but I personally don't see it as much of an issue as most of the log strings are made of nouns, rarely with verbs and adjectives. This makes the translation pretty straightforward.
>
> Since it affects every log in trunk I also think it would be better to have a trunk overhaul in a separate branch.
>
> By the way, some of the languages (i.e. Japanese) have already been translated 100%...does this mean that those translations have to be reviewed?
I don't see this as a showstopper for this particular branch (though I
completely agree it's a bug).
We should be able to address this in Nova in Cactus as a separate
blueprint or even a bug report.
Just my 2 cents,
jay
Jay Pipes (jaypipes) wrote:
> I don't see the image-service-
> are you referring to, Jay?
I was referring to this blueprint being marked completed: https:/
Rick, could you elaborate on the actual csets, please? Thanks!
Rick Harris (rconradharris) wrote:
Jay-
I was referring to https:/
As part of that effort, I removed ParallaxClient and TellerClient in Nova; it now uses Glance's client (glance/client.py). This is why it's failing on Hudson at the moment: we don't have Glance packaged up and installed.
Is that what you meant, or am I misunderstanding something?
Jay Pipes (jaypipes) wrote:
Hi Rick,
Yeah, that's what I was referring to. OK, looks like we need to get the packaging done pronto for Glance.
/me goes off to work on that.
Cheers,
jay
Ewan Mellor (ewanmellor) wrote:
Yes, Rick, those are the changes we're talking about. Thanks. They're not yet in trunk, but as soon as they do go on, the xenapi-glance-2 branch will need fixing up to match.
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Rick Harris
> Sent: 14 January 2011 16:06
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> Jay-
>
> I was referring to https:/
> snap-return-
>
> As part of that effort, I removed ParallaxClient and TellerClient in
> Nova-- it now uses Glance's client (glance/client.py). This is why
> it's failing on Hudson atm, we don't have glance packaged up and
> installed.
>
> Is that what you meant, or am I misunderstanding something?
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Ewan Mellor (ewanmellor) wrote:
> I am not entirely sure what Ewan really meant it with 'heavily reworked', but
> by following a discussion on the mailing list I understand that the extent of
> the work may be related to when glance-auth gets underway in Cactus.
By "heavily reworked" I meant that nova/image/
As soon as xs-snap-
Ewan Mellor (ewanmellor) wrote:
I think that we've answered all the questions on this review so far:
o I have requested a review from rconradharris, as per Thierry's request.
o We will need to refresh this patch as soon as xs-snap-
o We won't hold up this branch because of the I18N issues, but will fix those in a separate branch.
o We don't feel the need for an abstraction between Nova and Glance, as Glance is already an abstraction across image services.
o We have a flag that allows people to continue to use nova-objectstore, so we're not adding a hard dependency upon Glance.
o We are adding a hard dependency upon Glance's client code (glance.client) and we think this is fine because Glance is part of OpenStack, not a third party.
With all these things addressed, please can someone review the actual content of the branch? I'm keen to get an FFE for this, and it will obviously need careful review in order to do that.
Thanks,
Ewan.
Sandy Walsh (sandy-walsh) wrote:
There should be some means, such as a flag, of pointing at our Glance installation for development purposes.
I had to add this hack to get it going:
swalsh@
=== modified file 'bin/nova-combined'
--- bin/nova-combined 2011-01-11 06:47:35 +0000
+++ bin/nova-combined 2011-01-15 19:43:08 +0000
@@ -34,6 +34,10 @@
if os.path.
sys.
+ glance = os.path.
+ print "added glance path:", glance
+ sys.path.insert(0, glance)
+
gettext.
Now I'm trying to figure out how to get my images in glance so I can try to create an instance. Tips welcome!
Ewan Mellor (ewanmellor) wrote:
The Glance client library is assumed to be installed, just like any other dependency. It is on PyPI, and can be easy_installed.
We _could_ add a flag to point at a source location, but that would be non-standard. Are you sure?
-----Original Message-----
From: <email address hidden> [mailto:<email address hidden>] On Behalf Of Sandy Walsh
Sent: 17 January 2011 14:53
To: <email address hidden>
Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into lp:nova
Review: Needs Fixing
There should be some means to point to our glance installation for development purposes as a flag.
I had to add this hack to get it going:
swalsh@
=== modified file 'bin/nova-combined'
--- bin/nova-combined 2011-01-11 06:47:35 +0000
+++ bin/nova-combined 2011-01-15 19:43:08 +0000
@@ -34,6 +34,10 @@
if os.path.
sys.
+ glance = os.path.
+ print "added glance path:", glance
+ sys.path.insert(0, glance)
+
gettext.
Now I'm trying to figure out how to get my images in glance so I can try to create an instance. Tips welcome!
--
https:/
Your team Citrix OpenStack development team is subscribed to branch lp:~citrix-openstack/nova/xenapi-glance-2.
Jay Pipes (jaypipes) wrote:
> There should be some means to point to our glance installation for development
> purposes as a flag.
>
> I had to add this hack to get it going:
>
> swalsh@
> === modified file 'bin/nova-combined'
> --- bin/nova-combined 2011-01-11 06:47:35 +0000
> +++ bin/nova-combined 2011-01-15 19:43:08 +0000
> @@ -34,6 +34,10 @@
> if os.path.
> sys.path.insert(0, possible_topdir)
>
> + glance = os.path.
> + print "added glance path:", glance
> + sys.path.insert(0, glance)
> +
> gettext.
As Ewan says, Glance is now packaged in PyPI, so the above shouldn't be necessary, nor would I recommend dealing with Glance differently than any other dependency.
> Now I'm trying to figure out how to get my images in glance so I can try to
> create an instance. Tips welcome!
This link should help you out :)
http://
Cheers!
jay
Sandy Walsh (sandy-walsh) wrote:
> The Glance client library is assumed to be installed, just like any other
> dependency. It is on PyPI, and can be easy_installed.
>
> We _could_ add a flag to point at a source location, but that would be non-
> standard. Are you sure?
Nope, certainly not sure. :) I was just thinking about situations where I'm working on bugs that span glance/nova. I prefer to run from source vs. an install so I can jump between branches.
Is there another way we can solve this problem?
Ewan Mellor (ewanmellor) wrote:
I just symlink into my source tree from .../site-
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Sandy Walsh
> Sent: 17 January 2011 17:11
> To: <email address hidden>
> Subject: Re: RE: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> > The Glance client library is assumed to be installed, just like any
> other
> > dependency. It is on PyPI, and can be easy_installed.
> >
> > We _could_ add a flag to point at a source location, but that would
> be non-
> > standard. Are you sure?
>
>
> Nope, certainly not sure. :) I was just thinking about situations where
> I'm working on bugs that span glance/nova. I prefer to run from source
> vs. an install so I can jump between branches.
>
> Is there another way we can solve this problem?
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Sandy Walsh (sandy-walsh) wrote:
Ah, good idea ... thx. Disregard flag. :)
Jay Pipes (jaypipes) wrote:
OK, Ewan, Salvatore,
Rick's xs-snapshot branch is now in trunk. Can you re-merge xenapi-glance-2 with trunk and make any changes needed? Thanks much for all your patience!
544. By Ewan Mellor: Merged with trunk revno 572.
Ewan Mellor (ewanmellor) wrote:
I have re-merged this branch with trunk, following the xs-snapshot merge. Are people otherwise happy with the code? I would like to make an FFE proposal for this if the reviewers are happy. Thanks.
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Jay Pipes
> Sent: 17 January 2011 18:22
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> OK, Ewan, Salvatore,
>
> Rick's xs-snapshot branch is now in trunk. Can you re-merge xenapi-
> glance-2 with trunk and make any changes needed? Thanks much for all
> your patience!
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Sandy Walsh (sandy-walsh) wrote:
I'm in the short strokes now of getting the whole thing running (using the latest branch).
1. Creating instance using objectstore
2. Switching to Glance
3. Saving snapshot. Got hung up yesterday on this.
Note: Had to convert my XenServer SR from LVM to VHD since Glance only works with VHD currently.
4. Attempting to create instance from snapshot.
Stay tuned.
(I probably won't have time to review the code as well, but I'm sure others have that covered)
Thierry Carrez (ttx) wrote:
Consider the exception granted if you can get this merged today!
Ewan Mellor (ewanmellor) wrote:
Thanks Thierry!
Sandy, could you please withdraw your "Needs Fixing", as per your subsequent comments?
Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.
Jay, an Approve from you would be appreciated, if you're happy.
Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated. We need to sneak this in while Thierry's feeling reckless ;-)
Thanks everyone,
Ewan.
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Thierry Carrez
> Sent: 18 January 2011 13:17
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> Review: Approve ffe
> Consider the exception granted if you can get this merged today !
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Sandy Walsh (sandy-walsh) wrote:
Hmm, think I have issues with my LVM -> VHD conversion: http://
Jay Pipes (jaypipes) wrote:
On Tue, Jan 18, 2011 at 8:32 AM, Ewan Mellor <email address hidden> wrote:
> Jay, an Approve from you would be appreciated, if you're happy.
We need your help to solve a Glance packaging bug so that I can do
testing properly on the branch... but I promise this is my priority
today. If you could join IRC right quick, we can get the packaging
bug solved :) Thanks!
> Rick Harris, if you're out there, a review from you (or someone else in your team) would be appreciated. We need to sneak this in while Thierry's feeling reckless ;-)
I'll ping Rick on IRC...
-jay
Ed Leafe (ed-leafe) wrote:
Decided to move correcting the localization issues into a separate blueprint.
Ed Leafe (ed-leafe) wrote:
On Jan 18, 2011, at 8:32 AM, Ewan Mellor wrote:
> Ed, could you also withdraw yours? I promise we'll fix the I18N issues ASAP, but Jay would like to do that on a separate blueprint.
Done.
-- Ed Leafe
Sandy Walsh (sandy-walsh):
Ewan Mellor (ewanmellor) wrote:
It's expecting to find the local disk as per the default XenServer installation. You pesky Rackers screw with it, I think. You need SR.other_
sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use
xe sr-param-set other-config:
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Sandy Walsh
> Sent: 18 January 2011 13:32
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> Hmm, think I have issues with my LVM -> VHD conversion:
> http://
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Ewan Mellor (ewanmellor) wrote:
Irritating line wrap issue there. Just to be clear, that's
sruuid=$(xe sr-list --minimal name-label="Local storage") # Or whichever SR you want to use
xe sr-param-set other-config:
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Ewan Mellor
> Sent: 18 January 2011 13:40
> To: <email address hidden>
> Subject: RE: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> It's expecting to find the local disk as per the default XenServer
> installation. You pesky Rackers screw with it, I think. You need
> SR.other_
> the dom0 CLI using this:
>
> sruuid=$(xe sr-list --minimal name-label="Local storage") # Or
> whichever SR you want to use
> xe sr-param-set other-config:
>
> > -----Original Message-----
> > From: <email address hidden> [mailto:<email address hidden>] On Behalf
> Of
> > Sandy Walsh
> > Sent: 18 January 2011 13:32
> > To: <email address hidden>
> > Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> > lp:nova
> >
> > Hmm, think I have issues with my LVM -> VHD conversion:
> > http://
> > --
> > https:/
> > 2/+merge/45977
> > Your team Citrix OpenStack development team is subscribed to branch
> > lp:~citrix-openstack/nova/xenapi-glance-2.
>
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Sandy Walsh (sandy-walsh) wrote:
Thanks Ewan ... that worked great!
Now it gets tricky. I was hoping to use trunk to add my kernel/
http://
I guess I have to hand-craft the images and pre-populate my Glance folder in order to get this to work now, huh?
Any notes on taking EC2-compliant kernel/
Ewan Mellor (ewanmellor) wrote:
In the absence of any Glance CLI tools, I threw this together:

#!/usr/bin/env python

import sys

from glance.client import Client

hostname = sys.argv[1]
name = sys.argv[2]
typ = sys.argv[3]
filename = sys.argv[4]

c = Client(hostname, 9292)

meta = {'name': name,
        'type': typ,
        }

with open(filename) as f:
    new_meta = c.add_image(meta, f)

print 'Stored image. Got identifier: %s' % new_meta
The third argument should be "image", "ramdisk", or "kernel". You can then use the image identifiers on the euca-run-instances command line, just as you would if you'd used euca-register.
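For reference, this is the call pattern the script above relies on, exercised here against a stand-in client (a hypothetical FakeGlanceClient, useful when glance.client is not installed; the real Client takes the same meta dict and file object):

```python
import io


class FakeGlanceClient(object):
    """Stand-in mimicking glance.client.Client's add_image() shape."""

    def __init__(self):
        self.images = []

    def add_image(self, meta, data_file):
        # The real client POSTs the metadata and streams the file body;
        # here we just record both and hand back the stored metadata.
        new_meta = dict(meta)
        new_meta['id'] = len(self.images) + 1
        new_meta['size'] = len(data_file.read())
        self.images.append(new_meta)
        return new_meta


c = FakeGlanceClient()
meta = {'name': 'my-kernel', 'type': 'kernel'}  # type: image, kernel, or ramdisk
new_meta = c.add_image(meta, io.BytesIO(b'\x7fELF'))
```

The identifier returned in the stored metadata is what would then be passed on the euca-run-instances command line.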
> -----Original Message-----
> From: <email address hidden> [mailto:<email address hidden>] On Behalf Of
> Sandy Walsh
> Sent: 18 January 2011 13:58
> To: <email address hidden>
> Subject: Re: [Merge] lp:~citrix-openstack/nova/xenapi-glance-2 into
> lp:nova
>
> Thanks Ewan ... that worked great!
>
> Now it gets tricky. I was hoping to use trunk to add my
> kernel/
> Glance to create a snapshot, but now with the trunk merge, it always
> expects Glance (even though --
> image_service=
>
> http://
>
> I guess I have to hand-craft the images and pre-populate my Glance
> folder in order to get this to work now, huh?
>
> Any notes on taking EC2-compliant kernel/
> prepping them for Glance usage?
>
> --
> https:/
> 2/+merge/45977
> Your team Citrix OpenStack development team is subscribed to branch
> lp:~citrix-openstack/nova/xenapi-glance-2.
Jay Pipes (jaypipes) wrote:
On Tue, Jan 18, 2011 at 10:10 AM, Ewan Mellor <email address hidden> wrote:
> In the absence of any Glance CLI tools, I threw this together:
Coming soon to a Glance near you! :)
https:/
-jay
Preview Diff
=== added directory 'nova/tests/glance'
=== added file 'nova/tests/glance/__init__.py'
--- nova/tests/glance/__init__.py	1970-01-01 00:00:00 +0000
+++ nova/tests/glance/__init__.py	2011-01-17 19:48:42 +0000
@@ -0,0 +1,20 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+:mod:`glance` -- Stubs for Glance
+=================================
+"""

=== added file 'nova/tests/glance/stubs.py'
--- nova/tests/glance/stubs.py	1970-01-01 00:00:00 +0000
+++ nova/tests/glance/stubs.py	2011-01-17 19:48:42 +0000
@@ -0,0 +1,37 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import StringIO
+
+import glance.client
+
+
+def stubout_glance_client(stubs, cls):
+    """Stubs out glance.client.Client"""
+    stubs.Set(glance.client, 'Client',
+              lambda *args, **kwargs: cls(*args, **kwargs))
+
+
+class FakeGlance(object):
+    def __init__(self, host, port=None, use_ssl=False):
+        pass
+
+    def get_image(self, image):
+        meta = {
+            'size': 0,
+        }
+        image_file = StringIO.StringIO('')
+        return meta, image_file

=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py	2011-01-13 16:52:28 +0000
+++ nova/tests/test_xenapi.py	2011-01-17 19:48:42 +0000
@@ -34,6 +34,7 @@
 from nova.virt.xenapi.vmops import SimpleDH
 from nova.tests.db import fakes as db_fakes
 from nova.tests.xenapi import stubs
+from nova.tests.glance import stubs as glance_stubs
 
 FLAGS = flags.FLAGS
 
@@ -108,18 +109,16 @@
         conn = xenapi_conn.get_connection(False)
         volume = self._create_volume()
         instance = db.instance_create(self.values)
-        xenapi_fake.create_vm(instance.name, 'Running')
+        vm = xenapi_fake.create_vm(instance.name, 'Running')
         result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
 
         def check():
             # check that the VM has a VBD attached to it
-            # Get XenAPI reference for the VM
-            vms = xenapi_fake.get_all('VM')
             # Get XenAPI record for VBD
             vbds = xenapi_fake.get_all('VBD')
             vbd = xenapi_fake.get_record('VBD', vbds[0])
             vm_ref = vbd['VM']
-            self.assertEqual(vm_ref, vms[0])
+            self.assertEqual(vm_ref, vm)
 
         check()
 
@@ -157,9 +156,14 @@
         FLAGS.xenapi_connection_url = 'test_url'
         FLAGS.xenapi_connection_password = 'test_pass'
         xenapi_fake.reset()
+        xenapi_fake.create_local_srs()
         db_fakes.stub_out_db_instance_api(self.stubs)
         xenapi_fake.create_network('fake', FLAGS.flat_network_bridge)
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
+        stubs.stubout_get_this_vm_uuid(self.stubs)
+        stubs.stubout_stream_disk(self.stubs)
+        glance_stubs.stubout_glance_client(self.stubs,
+                                           glance_stubs.FakeGlance)
         self.conn = xenapi_conn.get_connection(False)
 
     def test_list_instances_0(self):
@@ -207,40 +211,70 @@
 
         check()
 
-    def test_spawn(self):
-        instance = self._create_instance()
-
-        def check():
-            instances = self.conn.list_instances()
-            self.assertEquals(instances, [1])
-
-            # Get Nova record for VM
-            vm_info = self.conn.get_info(1)
-
-            # Get XenAPI record for VM
-            vms = xenapi_fake.get_all('VM')
-            vm = xenapi_fake.get_record('VM', vms[0])
-
-            # Check that m1.large above turned into the right thing.
-            instance_type = instance_types.INSTANCE_TYPES['m1.large']
-            mem_kib = long(instance_type['memory_mb']) << 10
-            mem_bytes = str(mem_kib << 10)
-            vcpus = instance_type['vcpus']
+    def check_vm_record(self, conn):
+        instances = conn.list_instances()
+        self.assertEquals(instances, [1])
+
+        # Get Nova record for VM
+        vm_info = conn.get_info(1)
+
+        # Get XenAPI record for VM
+        vms = [rec for ref, rec
+               in xenapi_fake.get_all_records('VM').iteritems()
+               if not rec['is_control_domain']]
+        vm = vms[0]
+
+        # Check that m1.large above turned into the right thing.
+        instance_type = instance_types.INSTANCE_TYPES['m1.large']
+        mem_kib = long(instance_type['memory_mb']) << 10
+        mem_bytes = str(mem_kib << 10)
+        vcpus = instance_type['vcpus']
+        self.assertEquals(vm_info['max_mem'], mem_kib)
173 | 229 | self.assertEquals(vm_info['max_mem'], mem_kib) | 233 | self.assertEquals(vm_info['mem'], mem_kib) |
174 | 230 | self.assertEquals(vm_info['mem'], mem_kib) | 234 | self.assertEquals(vm['memory_static_max'], mem_bytes) |
175 | 231 | self.assertEquals(vm['memory_static_max'], mem_bytes) | 235 | self.assertEquals(vm['memory_dynamic_max'], mem_bytes) |
176 | 232 | self.assertEquals(vm['memory_dynamic_max'], mem_bytes) | 236 | self.assertEquals(vm['memory_dynamic_min'], mem_bytes) |
177 | 233 | self.assertEquals(vm['memory_dynamic_min'], mem_bytes) | 237 | self.assertEquals(vm['VCPUs_max'], str(vcpus)) |
178 | 234 | self.assertEquals(vm['VCPUs_max'], str(vcpus)) | 238 | self.assertEquals(vm['VCPUs_at_startup'], str(vcpus)) |
179 | 235 | self.assertEquals(vm['VCPUs_at_startup'], str(vcpus)) | 239 | |
180 | 236 | 240 | # Check that the VM is running according to Nova | |
181 | 237 | # Check that the VM is running according to Nova | 241 | self.assertEquals(vm_info['state'], power_state.RUNNING) |
182 | 238 | self.assertEquals(vm_info['state'], power_state.RUNNING) | 242 | |
183 | 239 | 243 | # Check that the VM is running according to XenAPI. | |
184 | 240 | # Check that the VM is running according to XenAPI. | 244 | self.assertEquals(vm['power_state'], 'Running') |
185 | 241 | self.assertEquals(vm['power_state'], 'Running') | 245 | |
186 | 242 | 246 | def _test_spawn(self, image_id, kernel_id, ramdisk_id): | |
187 | 243 | check() | 247 | stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) |
188 | 248 | values = {'name': 1, | ||
189 | 249 | 'id': 1, | ||
190 | 250 | 'project_id': self.project.id, | ||
191 | 251 | 'user_id': self.user.id, | ||
192 | 252 | 'image_id': image_id, | ||
193 | 253 | 'kernel_id': kernel_id, | ||
194 | 254 | 'ramdisk_id': ramdisk_id, | ||
195 | 255 | 'instance_type': 'm1.large', | ||
196 | 256 | 'mac_address': 'aa:bb:cc:dd:ee:ff', | ||
197 | 257 | } | ||
198 | 258 | conn = xenapi_conn.get_connection(False) | ||
199 | 259 | instance = db.instance_create(values) | ||
200 | 260 | conn.spawn(instance) | ||
201 | 261 | self.check_vm_record(conn) | ||
202 | 262 | |||
203 | 263 | def test_spawn_raw_objectstore(self): | ||
204 | 264 | FLAGS.xenapi_image_service = 'objectstore' | ||
205 | 265 | self._test_spawn(1, None, None) | ||
206 | 266 | |||
207 | 267 | def test_spawn_objectstore(self): | ||
208 | 268 | FLAGS.xenapi_image_service = 'objectstore' | ||
209 | 269 | self._test_spawn(1, 2, 3) | ||
210 | 270 | |||
211 | 271 | def test_spawn_raw_glance(self): | ||
212 | 272 | FLAGS.xenapi_image_service = 'glance' | ||
213 | 273 | self._test_spawn(1, None, None) | ||
214 | 274 | |||
215 | 275 | def test_spawn_glance(self): | ||
216 | 276 | FLAGS.xenapi_image_service = 'glance' | ||
217 | 277 | self._test_spawn(1, 2, 3) | ||
218 | 244 | 278 | ||
219 | 245 | def tearDown(self): | 279 | def tearDown(self): |
220 | 246 | super(XenAPIVMTestCase, self).tearDown() | 280 | super(XenAPIVMTestCase, self).tearDown() |
221 | 247 | 281 | ||
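The `check_vm_record` helper in the test hunk above converts the instance type's memory from MiB to KiB and then to a byte string using bit shifts. A minimal sketch of that arithmetic (the 8192 MB figure for `m1.large` is an assumption for illustration; the original uses Python 2's `long`, where `int` behaves the same here):

```python
# Sketch of the unit conversions used in check_vm_record.
# 8192 MB for m1.large is an illustrative assumption.
memory_mb = 8192

mem_kib = memory_mb << 10        # MiB -> KiB
mem_bytes = str(mem_kib << 10)   # KiB -> bytes, stringified as XenAPI expects

print(mem_kib, mem_bytes)
```

This is why the test compares `vm_info['max_mem']` against `mem_kib` (Nova reports KiB) but `vm['memory_static_max']` against `mem_bytes` (XenAPI stores byte counts as strings).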
222 | === modified file 'nova/tests/xenapi/stubs.py' | |||
223 | --- nova/tests/xenapi/stubs.py 2011-01-10 09:12:48 +0000 | |||
224 | +++ nova/tests/xenapi/stubs.py 2011-01-17 19:48:42 +0000 | |||
225 | @@ -115,6 +115,21 @@ | |||
226 | 115 | stubs.Set(volume_utils, '_get_target', fake_get_target) | 115 | stubs.Set(volume_utils, '_get_target', fake_get_target) |
227 | 116 | 116 | ||
228 | 117 | 117 | ||
229 | 118 | def stubout_get_this_vm_uuid(stubs): | ||
230 | 119 | def f(): | ||
231 | 120 | vms = [rec['uuid'] for ref, rec | ||
232 | 121 | in fake.get_all_records('VM').iteritems() | ||
233 | 122 | if rec['is_control_domain']] | ||
234 | 123 | return vms[0] | ||
235 | 124 | stubs.Set(vm_utils, 'get_this_vm_uuid', f) | ||
236 | 125 | |||
237 | 126 | |||
238 | 127 | def stubout_stream_disk(stubs): | ||
239 | 128 | def f(_1, _2, _3, _4): | ||
240 | 129 | pass | ||
241 | 130 | stubs.Set(vm_utils, '_stream_disk', f) | ||
242 | 131 | |||
243 | 132 | |||
244 | 118 | class FakeSessionForVMTests(fake.SessionBase): | 133 | class FakeSessionForVMTests(fake.SessionBase): |
245 | 119 | """ Stubs out a XenAPISession for VM tests """ | 134 | """ Stubs out a XenAPISession for VM tests """ |
246 | 120 | def __init__(self, uri): | 135 | def __init__(self, uri): |
247 | @@ -124,7 +139,10 @@ | |||
248 | 124 | return self.xenapi.network.get_all_records() | 139 | return self.xenapi.network.get_all_records() |
249 | 125 | 140 | ||
250 | 126 | def host_call_plugin(self, _1, _2, _3, _4, _5): | 141 | def host_call_plugin(self, _1, _2, _3, _4, _5): |
252 | 127 | return '' | 142 | sr_ref = fake.get_all('SR')[0] |
253 | 143 | vdi_ref = fake.create_vdi('', False, sr_ref, False) | ||
254 | 144 | vdi_rec = fake.get_record('VDI', vdi_ref) | ||
255 | 145 | return '<string>%s</string>' % vdi_rec['uuid'] | ||
256 | 128 | 146 | ||
257 | 129 | def VM_start(self, _1, ref, _2, _3): | 147 | def VM_start(self, _1, ref, _2, _3): |
258 | 130 | vm = fake.get_record('VM', ref) | 148 | vm = fake.get_record('VM', ref) |
259 | @@ -159,10 +177,6 @@ | |||
260 | 159 | def __init__(self, uri): | 177 | def __init__(self, uri): |
261 | 160 | super(FakeSessionForVolumeTests, self).__init__(uri) | 178 | super(FakeSessionForVolumeTests, self).__init__(uri) |
262 | 161 | 179 | ||
263 | 162 | def VBD_plug(self, _1, ref): | ||
264 | 163 | rec = fake.get_record('VBD', ref) | ||
265 | 164 | rec['currently-attached'] = True | ||
266 | 165 | |||
267 | 166 | def VDI_introduce(self, _1, uuid, _2, _3, _4, _5, | 180 | def VDI_introduce(self, _1, uuid, _2, _3, _4, _5, |
268 | 167 | _6, _7, _8, _9, _10, _11): | 181 | _6, _7, _8, _9, _10, _11): |
269 | 168 | valid_vdi = False | 182 | valid_vdi = False |
270 | 169 | 183 | ||
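The stubs added above (`stubout_get_this_vm_uuid`, `stubout_stream_disk`) replace module-level functions so the tests never read `/sys/hypervisor/uuid` or write to a real block device. A toy sketch of the same monkey-patching idea; `vm_utils` and `set_stub` here are illustrative stand-ins, not the real module or the stubout library's `Set` (which also records the original for restoration in `tearDown`):

```python
import types

# Toy stand-in for the vm_utils module; names are illustrative.
vm_utils = types.ModuleType('vm_utils')

def _stream_disk(dev, type_, virtual_size, image_file):
    raise IOError('would write to a real block device')

vm_utils._stream_disk = _stream_disk

def set_stub(module, name, func):
    # stubout's Set additionally remembers the original attribute
    # so it can be restored after the test.
    setattr(module, name, func)

# As in stubout_stream_disk: swap in a no-op with the same arity.
set_stub(vm_utils, '_stream_disk', lambda _1, _2, _3, _4: None)

assert vm_utils._stream_disk('xvda', None, 0, None) is None
```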
271 | === modified file 'nova/virt/xenapi/fake.py' | |||
272 | --- nova/virt/xenapi/fake.py 2011-01-04 05:26:41 +0000 | |||
273 | +++ nova/virt/xenapi/fake.py 2011-01-17 19:48:42 +0000 | |||
274 | @@ -76,6 +76,7 @@ | |||
275 | 76 | for c in _CLASSES: | 76 | for c in _CLASSES: |
276 | 77 | _db_content[c] = {} | 77 | _db_content[c] = {} |
277 | 78 | create_host('fake') | 78 | create_host('fake') |
278 | 79 | create_vm('fake', 'Running', is_a_template=False, is_control_domain=True) | ||
279 | 79 | 80 | ||
280 | 80 | 81 | ||
281 | 81 | def create_host(name_label): | 82 | def create_host(name_label): |
282 | @@ -136,14 +137,21 @@ | |||
283 | 136 | 137 | ||
284 | 137 | 138 | ||
285 | 138 | def create_vbd(vm_ref, vdi_ref): | 139 | def create_vbd(vm_ref, vdi_ref): |
287 | 139 | vbd_rec = {'VM': vm_ref, 'VDI': vdi_ref} | 140 | vbd_rec = { |
288 | 141 | 'VM': vm_ref, | ||
289 | 142 | 'VDI': vdi_ref, | ||
290 | 143 | 'currently_attached': False, | ||
291 | 144 | } | ||
292 | 140 | vbd_ref = _create_object('VBD', vbd_rec) | 145 | vbd_ref = _create_object('VBD', vbd_rec) |
293 | 141 | after_VBD_create(vbd_ref, vbd_rec) | 146 | after_VBD_create(vbd_ref, vbd_rec) |
294 | 142 | return vbd_ref | 147 | return vbd_ref |
295 | 143 | 148 | ||
296 | 144 | 149 | ||
297 | 145 | def after_VBD_create(vbd_ref, vbd_rec): | 150 | def after_VBD_create(vbd_ref, vbd_rec): |
299 | 146 | """Create backref from VM to VBD when VBD is created""" | 151 | """Create read-only fields and backref from VM to VBD when VBD is |
300 | 152 | created.""" | ||
301 | 153 | vbd_rec['currently_attached'] = False | ||
302 | 154 | vbd_rec['device'] = '' | ||
303 | 147 | vm_ref = vbd_rec['VM'] | 155 | vm_ref = vbd_rec['VM'] |
304 | 148 | vm_rec = _db_content['VM'][vm_ref] | 156 | vm_rec = _db_content['VM'][vm_ref] |
305 | 149 | vm_rec['VBDs'] = [vbd_ref] | 157 | vm_rec['VBDs'] = [vbd_ref] |
306 | @@ -152,9 +160,10 @@ | |||
307 | 152 | vbd_rec['vm_name_label'] = vm_name_label | 160 | vbd_rec['vm_name_label'] = vm_name_label |
308 | 153 | 161 | ||
309 | 154 | 162 | ||
311 | 155 | def create_pbd(config, sr_ref, attached): | 163 | def create_pbd(config, host_ref, sr_ref, attached): |
312 | 156 | return _create_object('PBD', { | 164 | return _create_object('PBD', { |
313 | 157 | 'device-config': config, | 165 | 'device-config': config, |
314 | 166 | 'host': host_ref, | ||
315 | 158 | 'SR': sr_ref, | 167 | 'SR': sr_ref, |
316 | 159 | 'currently-attached': attached, | 168 | 'currently-attached': attached, |
317 | 160 | }) | 169 | }) |
318 | @@ -167,6 +176,33 @@ | |||
319 | 167 | }) | 176 | }) |
320 | 168 | 177 | ||
321 | 169 | 178 | ||
322 | 179 | def create_local_srs(): | ||
323 | 180 | """Create an SR that looks like the one created on the local disk by | ||
324 | 181 | default by the XenServer installer. Do this once per host.""" | ||
325 | 182 | for host_ref in _db_content['host'].keys(): | ||
326 | 183 | _create_local_sr(host_ref) | ||
327 | 184 | |||
328 | 185 | |||
329 | 186 | def _create_local_sr(host_ref): | ||
330 | 187 | sr_ref = _create_object('SR', { | ||
331 | 188 | 'name_label': 'Local storage', | ||
332 | 189 | 'type': 'lvm', | ||
333 | 190 | 'content_type': 'user', | ||
334 | 191 | 'shared': False, | ||
335 | 192 | 'physical_size': str(1 << 30), | ||
336 | 193 | 'physical_utilisation': str(0), | ||
337 | 194 | 'virtual_allocation': str(0), | ||
338 | 195 | 'other_config': { | ||
339 | 196 | 'i18n-original-value-name_label': 'Local storage', | ||
340 | 197 | 'i18n-key': 'local-storage', | ||
341 | 198 | }, | ||
342 | 199 | 'VDIs': [] | ||
343 | 200 | }) | ||
344 | 201 | pbd_ref = create_pbd('', host_ref, sr_ref, True) | ||
345 | 202 | _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref] | ||
346 | 203 | return sr_ref | ||
347 | 204 | |||
348 | 205 | |||
349 | 170 | def _create_object(table, obj): | 206 | def _create_object(table, obj): |
350 | 171 | ref = str(uuid.uuid4()) | 207 | ref = str(uuid.uuid4()) |
351 | 172 | obj['uuid'] = str(uuid.uuid4()) | 208 | obj['uuid'] = str(uuid.uuid4()) |
352 | @@ -179,9 +215,10 @@ | |||
353 | 179 | # Forces fake to support iscsi only | 215 | # Forces fake to support iscsi only |
354 | 180 | if sr_type != 'iscsi': | 216 | if sr_type != 'iscsi': |
355 | 181 | raise Failure(['SR_UNKNOWN_DRIVER', sr_type]) | 217 | raise Failure(['SR_UNKNOWN_DRIVER', sr_type]) |
356 | 218 | host_ref = _db_content['host'].keys()[0] | ||
357 | 182 | sr_ref = _create_object(table, obj[2]) | 219 | sr_ref = _create_object(table, obj[2]) |
358 | 183 | vdi_ref = create_vdi('', False, sr_ref, False) | 220 | vdi_ref = create_vdi('', False, sr_ref, False) |
360 | 184 | pbd_ref = create_pbd('', sr_ref, True) | 221 | pbd_ref = create_pbd('', host_ref, sr_ref, True) |
361 | 185 | _db_content['SR'][sr_ref]['VDIs'] = [vdi_ref] | 222 | _db_content['SR'][sr_ref]['VDIs'] = [vdi_ref] |
362 | 186 | _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref] | 223 | _db_content['SR'][sr_ref]['PBDs'] = [pbd_ref] |
363 | 187 | _db_content['VDI'][vdi_ref]['SR'] = sr_ref | 224 | _db_content['VDI'][vdi_ref]['SR'] = sr_ref |
364 | @@ -233,6 +270,20 @@ | |||
365 | 233 | def __init__(self, uri): | 270 | def __init__(self, uri): |
366 | 234 | self._session = None | 271 | self._session = None |
367 | 235 | 272 | ||
368 | 273 | def VBD_plug(self, _1, ref): | ||
369 | 274 | rec = get_record('VBD', ref) | ||
370 | 275 | if rec['currently_attached']: | ||
371 | 276 | raise Failure(['DEVICE_ALREADY_ATTACHED', ref]) | ||
372 | 277 | rec['currently_attached'] = True | ||
373 | 278 | rec['device'] = rec['userdevice'] | ||
374 | 279 | |||
375 | 280 | def VBD_unplug(self, _1, ref): | ||
376 | 281 | rec = get_record('VBD', ref) | ||
377 | 282 | if not rec['currently_attached']: | ||
378 | 283 | raise Failure(['DEVICE_ALREADY_DETACHED', ref]) | ||
379 | 284 | rec['currently_attached'] = False | ||
380 | 285 | rec['device'] = '' | ||
381 | 286 | |||
382 | 236 | def xenapi_request(self, methodname, params): | 287 | def xenapi_request(self, methodname, params): |
383 | 237 | if methodname.startswith('login'): | 288 | if methodname.startswith('login'): |
384 | 238 | self._login(methodname, params) | 289 | self._login(methodname, params) |
385 | @@ -289,6 +340,8 @@ | |||
386 | 289 | return lambda *params: self._getter(name, params) | 340 | return lambda *params: self._getter(name, params) |
387 | 290 | elif self._is_create(name): | 341 | elif self._is_create(name): |
388 | 291 | return lambda *params: self._create(name, params) | 342 | return lambda *params: self._create(name, params) |
389 | 343 | elif self._is_destroy(name): | ||
390 | 344 | return lambda *params: self._destroy(name, params) | ||
391 | 292 | else: | 345 | else: |
392 | 293 | return None | 346 | return None |
393 | 294 | 347 | ||
394 | @@ -299,10 +352,16 @@ | |||
395 | 299 | bits[1].startswith(getter and 'get_' or 'set_')) | 352 | bits[1].startswith(getter and 'get_' or 'set_')) |
396 | 300 | 353 | ||
397 | 301 | def _is_create(self, name): | 354 | def _is_create(self, name): |
398 | 355 | return self._is_method(name, 'create') | ||
399 | 356 | |||
400 | 357 | def _is_destroy(self, name): | ||
401 | 358 | return self._is_method(name, 'destroy') | ||
402 | 359 | |||
403 | 360 | def _is_method(self, name, meth): | ||
404 | 302 | bits = name.split('.') | 361 | bits = name.split('.') |
405 | 303 | return (len(bits) == 2 and | 362 | return (len(bits) == 2 and |
406 | 304 | bits[0] in _CLASSES and | 363 | bits[0] in _CLASSES and |
408 | 305 | bits[1] == 'create') | 364 | bits[1] == meth) |
409 | 306 | 365 | ||
410 | 307 | def _getter(self, name, params): | 366 | def _getter(self, name, params): |
411 | 308 | self._check_session(params) | 367 | self._check_session(params) |
412 | @@ -370,10 +429,9 @@ | |||
413 | 370 | _create_sr(cls, params) or _create_object(cls, params[1]) | 429 | _create_sr(cls, params) or _create_object(cls, params[1]) |
414 | 371 | 430 | ||
415 | 372 | # Call hook to provide any fixups needed (ex. creating backrefs) | 431 | # Call hook to provide any fixups needed (ex. creating backrefs) |
420 | 373 | try: | 432 | after_hook = 'after_%s_create' % cls |
421 | 374 | globals()["after_%s_create" % cls](ref, params[1]) | 433 | if after_hook in globals(): |
422 | 375 | except KeyError: | 434 | globals()[after_hook](ref, params[1]) |
419 | 376 | pass | ||
423 | 377 | 435 | ||
424 | 378 | obj = get_record(cls, ref) | 436 | obj = get_record(cls, ref) |
425 | 379 | 437 | ||
426 | @@ -383,6 +441,15 @@ | |||
427 | 383 | 441 | ||
428 | 384 | return ref | 442 | return ref |
429 | 385 | 443 | ||
430 | 444 | def _destroy(self, name, params): | ||
431 | 445 | self._check_session(params) | ||
432 | 446 | self._check_arg_count(params, 2) | ||
433 | 447 | table, _ = name.split('.') | ||
434 | 448 | ref = params[1] | ||
435 | 449 | if ref not in _db_content[table]: | ||
436 | 450 | raise Failure(['HANDLE_INVALID', table, ref]) | ||
437 | 451 | del _db_content[table][ref] | ||
438 | 452 | |||
439 | 386 | def _async(self, name, params): | 453 | def _async(self, name, params): |
440 | 387 | task_ref = create_task(name) | 454 | task_ref = create_task(name) |
441 | 388 | task = _db_content['task'][task_ref] | 455 | task = _db_content['task'][task_ref] |
442 | @@ -420,7 +487,7 @@ | |||
443 | 420 | try: | 487 | try: |
444 | 421 | return result[0] | 488 | return result[0] |
445 | 422 | except IndexError: | 489 | except IndexError: |
447 | 423 | return None | 490 | raise Failure(['UUID_INVALID', v, result, recs, k]) |
448 | 424 | 491 | ||
449 | 425 | return result | 492 | return result |
450 | 426 | 493 | ||
451 | 427 | 494 | ||
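The `_create` fixup in the hunk above replaces a bare `try/except KeyError` with an explicit membership test on `globals()`; the old form also swallowed `KeyError`s raised *inside* a hook such as `after_VBD_create`. A sketch of the dispatch pattern, with hypothetical refs for illustration:

```python
created = []

def after_VBD_create(ref, rec):
    # Hook invoked by name. A KeyError raised in here (e.g. a missing
    # 'VM' key) must propagate, which is why the dispatcher checks
    # membership instead of catching KeyError around the call.
    created.append((ref, rec['VM']))

def run_create_hook(cls, ref, rec):
    after_hook = 'after_%s_create' % cls
    if after_hook in globals():
        globals()[after_hook](ref, rec)

run_create_hook('VBD', 'OpaqueRef:1', {'VM': 'OpaqueRef:vm'})
run_create_hook('VIF', 'OpaqueRef:2', {})  # no hook defined: skipped

assert created == [('OpaqueRef:1', 'OpaqueRef:vm')]
```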
452 | === modified file 'nova/virt/xenapi/vm_utils.py' | |||
453 | --- nova/virt/xenapi/vm_utils.py 2011-01-12 19:47:40 +0000 | |||
454 | +++ nova/virt/xenapi/vm_utils.py 2011-01-17 19:48:42 +0000 | |||
455 | @@ -19,11 +19,14 @@ | |||
456 | 19 | their attributes like VDIs, VIFs, as well as their lookup functions. | 19 | their attributes like VDIs, VIFs, as well as their lookup functions. |
457 | 20 | """ | 20 | """ |
458 | 21 | 21 | ||
459 | 22 | import os | ||
460 | 22 | import pickle | 23 | import pickle |
461 | 24 | import re | ||
462 | 23 | import urllib | 25 | import urllib |
463 | 24 | from xml.dom import minidom | 26 | from xml.dom import minidom |
464 | 25 | 27 | ||
465 | 26 | from eventlet import event | 28 | from eventlet import event |
466 | 29 | import glance.client | ||
467 | 27 | from nova import exception | 30 | from nova import exception |
468 | 28 | from nova import flags | 31 | from nova import flags |
469 | 29 | from nova import log as logging | 32 | from nova import log as logging |
470 | @@ -47,17 +50,23 @@ | |||
471 | 47 | 'Crashed': power_state.CRASHED} | 50 | 'Crashed': power_state.CRASHED} |
472 | 48 | 51 | ||
473 | 49 | 52 | ||
474 | 53 | SECTOR_SIZE = 512 | ||
475 | 54 | MBR_SIZE_SECTORS = 63 | ||
476 | 55 | MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE | ||
477 | 56 | KERNEL_DIR = '/boot/guest' | ||
478 | 57 | |||
479 | 58 | |||
480 | 50 | class ImageType: | 59 | class ImageType: |
487 | 51 | """ | 60 | """ |
488 | 52 | Enumeration class for distinguishing different image types | 61 | Enumeration class for distinguishing different image types |
489 | 53 | 0 - kernel/ramdisk image (goes on dom0's filesystem) | 62 | 0 - kernel/ramdisk image (goes on dom0's filesystem) |
490 | 54 | 1 - disk image (local SR, partitioned by objectstore plugin) | 63 | 1 - disk image (local SR, partitioned by objectstore plugin) |
491 | 55 | 2 - raw disk image (local SR, NOT partitioned by plugin) | 64 | 2 - raw disk image (local SR, NOT partitioned by plugin) |
492 | 56 | """ | 65 | """ |
493 | 57 | 66 | ||
497 | 58 | KERNEL_RAMDISK = 0 | 67 | KERNEL_RAMDISK = 0 |
498 | 59 | DISK = 1 | 68 | DISK = 1 |
499 | 60 | DISK_RAW = 2 | 69 | DISK_RAW = 2 |
500 | 61 | 70 | ||
501 | 62 | 71 | ||
502 | 63 | class VMHelper(HelperBase): | 72 | class VMHelper(HelperBase): |
503 | @@ -207,6 +216,25 @@ | |||
504 | 207 | return vif_ref | 216 | return vif_ref |
505 | 208 | 217 | ||
506 | 209 | @classmethod | 218 | @classmethod |
507 | 219 | def create_vdi(cls, session, sr_ref, name_label, virtual_size, read_only): | ||
508 | 220 | """Create a VDI record and returns its reference.""" | ||
509 | 221 | vdi_ref = session.get_xenapi().VDI.create( | ||
510 | 222 | {'name_label': name_label, | ||
511 | 223 | 'name_description': '', | ||
512 | 224 | 'SR': sr_ref, | ||
513 | 225 | 'virtual_size': str(virtual_size), | ||
514 | 226 | 'type': 'User', | ||
515 | 227 | 'sharable': False, | ||
516 | 228 | 'read_only': read_only, | ||
517 | 229 | 'xenstore_data': {}, | ||
518 | 230 | 'other_config': {}, | ||
519 | 231 | 'sm_config': {}, | ||
520 | 232 | 'tags': []}) | ||
521 | 233 | LOG.debug(_('Created VDI %s (%s, %s, %s) on %s.'), vdi_ref, | ||
522 | 234 | name_label, virtual_size, read_only, sr_ref) | ||
523 | 235 | return vdi_ref | ||
524 | 236 | |||
525 | 237 | @classmethod | ||
526 | 210 | def create_snapshot(cls, session, instance_id, vm_ref, label): | 238 | def create_snapshot(cls, session, instance_id, vm_ref, label): |
527 | 211 | """ Creates Snapshot (Template) VM, Snapshot VBD, Snapshot VDI, | 239 | """ Creates Snapshot (Template) VM, Snapshot VBD, Snapshot VDI, |
528 | 212 | Snapshot VHD | 240 | Snapshot VHD |
529 | @@ -256,15 +284,71 @@ | |||
530 | 256 | def fetch_image(cls, session, instance_id, image, user, project, type): | 284 | def fetch_image(cls, session, instance_id, image, user, project, type): |
531 | 257 | """ | 285 | """ |
532 | 258 | type is interpreted as an ImageType instance | 286 | type is interpreted as an ImageType instance |
533 | 287 | Related flags: | ||
534 | 288 | xenapi_image_service = ['glance', 'objectstore'] | ||
535 | 289 | glance_address = 'address for glance services' | ||
536 | 290 | glance_port = 'port for glance services' | ||
537 | 259 | """ | 291 | """ |
538 | 292 | access = AuthManager().get_access_key(user, project) | ||
539 | 293 | |||
540 | 294 | if FLAGS.xenapi_image_service == 'glance': | ||
541 | 295 | return cls._fetch_image_glance(session, instance_id, image, | ||
542 | 296 | access, type) | ||
543 | 297 | else: | ||
544 | 298 | return cls._fetch_image_objectstore(session, instance_id, image, | ||
545 | 299 | access, user.secret, type) | ||
546 | 300 | |||
547 | 301 | @classmethod | ||
548 | 302 | def _fetch_image_glance(cls, session, instance_id, image, access, type): | ||
549 | 303 | sr = find_sr(session) | ||
550 | 304 | if sr is None: | ||
551 | 305 | raise exception.NotFound('Cannot find SR to write VDI to') | ||
552 | 306 | |||
553 | 307 | c = glance.client.Client(FLAGS.glance_host, FLAGS.glance_port) | ||
554 | 308 | |||
555 | 309 | meta, image_file = c.get_image(image) | ||
556 | 310 | virtual_size = int(meta['size']) | ||
557 | 311 | vdi_size = virtual_size | ||
558 | 312 | LOG.debug(_("Size for image %s:%d"), image, virtual_size) | ||
559 | 313 | if type == ImageType.DISK: | ||
560 | 314 | # Make room for MBR. | ||
561 | 315 | vdi_size += MBR_SIZE_BYTES | ||
562 | 316 | |||
563 | 317 | vdi = cls.create_vdi(session, sr, _('Glance image %s') % image, | ||
564 | 318 | vdi_size, False) | ||
565 | 319 | |||
566 | 320 | with_vdi_attached_here(session, vdi, False, | ||
567 | 321 | lambda dev: | ||
568 | 322 | _stream_disk(dev, type, | ||
569 | 323 | virtual_size, image_file)) | ||
570 | 324 | if (type == ImageType.KERNEL_RAMDISK): | ||
571 | 325 | #we need to invoke a plugin for copying VDI's | ||
572 | 326 | #content into proper path | ||
573 | 327 | LOG.debug(_("Copying VDI %s to /boot/guest on dom0"), vdi) | ||
574 | 328 | fn = "copy_kernel_vdi" | ||
575 | 329 | args = {} | ||
576 | 330 | args['vdi-ref'] = vdi | ||
577 | 331 | #let the plugin copy the correct number of bytes | ||
578 | 332 | args['image-size'] = str(vdi_size) | ||
579 | 333 | task = session.async_call_plugin('glance', fn, args) | ||
580 | 334 | filename = session.wait_for_task(instance_id, task) | ||
581 | 335 | #remove the VDI as it is not needed anymore | ||
582 | 336 | session.get_xenapi().VDI.destroy(vdi) | ||
583 | 337 | LOG.debug(_("Kernel/Ramdisk VDI %s destroyed"), vdi) | ||
584 | 338 | return filename | ||
585 | 339 | else: | ||
586 | 340 | return session.get_xenapi().VDI.get_uuid(vdi) | ||
587 | 341 | |||
588 | 342 | @classmethod | ||
589 | 343 | def _fetch_image_objectstore(cls, session, instance_id, image, access, | ||
590 | 344 | secret, type): | ||
591 | 260 | url = images.image_url(image) | 345 | url = images.image_url(image) |
592 | 261 | access = AuthManager().get_access_key(user, project) | ||
593 | 262 | LOG.debug(_("Asking xapi to fetch %s as %s"), url, access) | 346 | LOG.debug(_("Asking xapi to fetch %s as %s"), url, access) |
594 | 263 | fn = (type != ImageType.KERNEL_RAMDISK) and 'get_vdi' or 'get_kernel' | 347 | fn = (type != ImageType.KERNEL_RAMDISK) and 'get_vdi' or 'get_kernel' |
595 | 264 | args = {} | 348 | args = {} |
596 | 265 | args['src_url'] = url | 349 | args['src_url'] = url |
597 | 266 | args['username'] = access | 350 | args['username'] = access |
599 | 267 | args['password'] = user.secret | 351 | args['password'] = secret |
600 | 268 | args['add_partition'] = 'false' | 352 | args['add_partition'] = 'false' |
601 | 269 | args['raw'] = 'false' | 353 | args['raw'] = 'false' |
602 | 270 | if type != ImageType.KERNEL_RAMDISK: | 354 | if type != ImageType.KERNEL_RAMDISK: |
603 | @@ -276,14 +360,21 @@ | |||
604 | 276 | return uuid | 360 | return uuid |
605 | 277 | 361 | ||
606 | 278 | @classmethod | 362 | @classmethod |
608 | 279 | def lookup_image(cls, session, vdi_ref): | 363 | def lookup_image(cls, session, instance_id, vdi_ref): |
609 | 364 | if FLAGS.xenapi_image_service == 'glance': | ||
610 | 365 | return cls._lookup_image_glance(session, vdi_ref) | ||
611 | 366 | else: | ||
612 | 367 | return cls._lookup_image_objectstore(session, instance_id, vdi_ref) | ||
613 | 368 | |||
614 | 369 | @classmethod | ||
615 | 370 | def _lookup_image_objectstore(cls, session, instance_id, vdi_ref): | ||
616 | 280 | LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref) | 371 | LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref) |
617 | 281 | fn = "is_vdi_pv" | 372 | fn = "is_vdi_pv" |
618 | 282 | args = {} | 373 | args = {} |
619 | 283 | args['vdi-ref'] = vdi_ref | 374 | args['vdi-ref'] = vdi_ref |
620 | 284 | #TODO: Call proper function in plugin | ||
621 | 285 | task = session.async_call_plugin('objectstore', fn, args) | 375 | task = session.async_call_plugin('objectstore', fn, args) |
623 | 286 | pv_str = session.wait_for_task(task) | 376 | pv_str = session.wait_for_task(instance_id, task) |
624 | 377 | pv = None | ||
625 | 287 | if pv_str.lower() == 'true': | 378 | if pv_str.lower() == 'true': |
626 | 288 | pv = True | 379 | pv = True |
627 | 289 | elif pv_str.lower() == 'false': | 380 | elif pv_str.lower() == 'false': |
628 | @@ -292,6 +383,23 @@ | |||
629 | 292 | return pv | 383 | return pv |
630 | 293 | 384 | ||
631 | 294 | @classmethod | 385 | @classmethod |
632 | 386 | def _lookup_image_glance(cls, session, vdi_ref): | ||
633 | 387 | LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref) | ||
634 | 388 | |||
635 | 389 | def is_vdi_pv(dev): | ||
636 | 390 | LOG.debug(_("Running pygrub against %s"), dev) | ||
637 | 391 | output = os.popen('pygrub -qn /dev/%s' % dev) | ||
638 | 392 | for line in output.readlines(): | ||
639 | 393 | #try to find kernel string | ||
640 | 394 | m = re.search('(?<=kernel:)/.*(?:>)', line) | ||
641 | 395 | if m and m.group(0).find('xen') != -1: | ||
642 | 396 | LOG.debug(_("Found Xen kernel %s") % m.group(0)) | ||
643 | 397 | return True | ||
644 | 398 | LOG.debug(_("No Xen kernel found. Booting HVM.")) | ||
645 | 399 | return False | ||
646 | 400 | return with_vdi_attached_here(session, vdi_ref, True, is_vdi_pv) | ||
647 | 401 | |||
648 | 402 | @classmethod | ||
649 | 295 | def lookup(cls, session, i): | 403 | def lookup(cls, session, i): |
650 | 296 | """Look the instance i up, and returns it if available""" | 404 | """Look the instance i up, and returns it if available""" |
651 | 297 | vms = session.get_xenapi().VM.get_by_name_label(i) | 405 | vms = session.get_xenapi().VM.get_by_name_label(i) |
652 | @@ -464,3 +572,123 @@ | |||
653 | 464 | vdi_ref = vdi_refs[0] | 572 | vdi_ref = vdi_refs[0] |
654 | 465 | vdi_rec = session.get_xenapi().VDI.get_record(vdi_ref) | 573 | vdi_rec = session.get_xenapi().VDI.get_record(vdi_ref) |
655 | 466 | return vdi_ref, vdi_rec | 574 | return vdi_ref, vdi_rec |
656 | 575 | |||
657 | 576 | |||
658 | 577 | def find_sr(session): | ||
659 | 578 | host = session.get_xenapi_host() | ||
660 | 579 | srs = session.get_xenapi().SR.get_all() | ||
661 | 580 | for sr in srs: | ||
662 | 581 | sr_rec = session.get_xenapi().SR.get_record(sr) | ||
663 | 582 | if not ('i18n-key' in sr_rec['other_config'] and | ||
664 | 583 | sr_rec['other_config']['i18n-key'] == 'local-storage'): | ||
665 | 584 | continue | ||
666 | 585 | for pbd in sr_rec['PBDs']: | ||
667 | 586 | pbd_rec = session.get_xenapi().PBD.get_record(pbd) | ||
668 | 587 | if pbd_rec['host'] == host: | ||
669 | 588 | return sr | ||
670 | 589 | return None | ||
671 | 590 | |||
672 | 591 | |||
673 | 592 | def with_vdi_attached_here(session, vdi, read_only, f): | ||
674 | 593 | this_vm_ref = get_this_vm_ref(session) | ||
675 | 594 | vbd_rec = {} | ||
676 | 595 | vbd_rec['VM'] = this_vm_ref | ||
677 | 596 | vbd_rec['VDI'] = vdi | ||
678 | 597 | vbd_rec['userdevice'] = 'autodetect' | ||
679 | 598 | vbd_rec['bootable'] = False | ||
680 | 599 | vbd_rec['mode'] = read_only and 'RO' or 'RW' | ||
681 | 600 | vbd_rec['type'] = 'disk' | ||
682 | 601 | vbd_rec['unpluggable'] = True | ||
683 | 602 | vbd_rec['empty'] = False | ||
684 | 603 | vbd_rec['other_config'] = {} | ||
685 | 604 | vbd_rec['qos_algorithm_type'] = '' | ||
686 | 605 | vbd_rec['qos_algorithm_params'] = {} | ||
687 | 606 | vbd_rec['qos_supported_algorithms'] = [] | ||
688 | 607 | LOG.debug(_('Creating VBD for VDI %s ... '), vdi) | ||
689 | 608 | vbd = session.get_xenapi().VBD.create(vbd_rec) | ||
690 | 609 | LOG.debug(_('Creating VBD for VDI %s done.'), vdi) | ||
691 | 610 | try: | ||
692 | 611 | LOG.debug(_('Plugging VBD %s ... '), vbd) | ||
693 | 612 | session.get_xenapi().VBD.plug(vbd) | ||
694 | 613 | LOG.debug(_('Plugging VBD %s done.'), vbd) | ||
695 | 614 | return f(session.get_xenapi().VBD.get_device(vbd)) | ||
696 | 615 | finally: | ||
697 | 616 | LOG.debug(_('Destroying VBD for VDI %s ... '), vdi) | ||
698 | 617 | vbd_unplug_with_retry(session, vbd) | ||
699 | 618 | ignore_failure(session.get_xenapi().VBD.destroy, vbd) | ||
700 | 619 | LOG.debug(_('Destroying VBD for VDI %s done.'), vdi) | ||
701 | 620 | |||
702 | 621 | |||
703 | 622 | def vbd_unplug_with_retry(session, vbd): | ||
704 | 623 | """Call VBD.unplug on the given VBD, with a retry if we get | ||
705 | 624 | DEVICE_DETACH_REJECTED. For reasons which I don't understand, we're | ||
706 | 625 | seeing the device still in use, even when all processes using the device | ||
707 | 626 | should be dead.""" | ||
708 | 627 | while True: | ||
709 | 628 | try: | ||
710 | 629 | session.get_xenapi().VBD.unplug(vbd) | ||
711 | 630 | LOG.debug(_('VBD.unplug successful first time.')) | ||
712 | 631 | return | ||
713 | 632 | except VMHelper.XenAPI.Failure, e: | ||
714 | 633 | if (len(e.details) > 0 and | ||
715 | 634 | e.details[0] == 'DEVICE_DETACH_REJECTED'): | ||
716 | 635 | LOG.debug(_('VBD.unplug rejected: retrying...')) | ||
717 | 636 | time.sleep(1) | ||
718 | 637 | elif (len(e.details) > 0 and | ||
719 | 638 | e.details[0] == 'DEVICE_ALREADY_DETACHED'): | ||
720 | 639 | LOG.debug(_('VBD.unplug successful eventually.')) | ||
721 | 640 | return | ||
722 | 641 | else: | ||
723 | 642 | LOG.error(_('Ignoring XenAPI.Failure in VBD.unplug: %s'), | ||
724 | 643 | e) | ||
725 | 644 | return | ||
726 | 645 | |||
727 | 646 | |||
728 | 647 | def ignore_failure(func, *args, **kwargs): | ||
729 | 648 | try: | ||
730 | 649 | return func(*args, **kwargs) | ||
731 | 650 | except VMHelper.XenAPI.Failure, e: | ||
732 | 651 | LOG.error(_('Ignoring XenAPI.Failure %s'), e) | ||
733 | 652 | return None | ||
734 | 653 | |||
735 | 654 | |||
736 | 655 | def get_this_vm_uuid(): | ||
737 | 656 | with file('/sys/hypervisor/uuid') as f: | ||
738 | 657 | return f.readline().strip() | ||
739 | 658 | |||
740 | 659 | |||
741 | 660 | def get_this_vm_ref(session): | ||
742 | 661 | return session.get_xenapi().VM.get_by_uuid(get_this_vm_uuid()) | ||
743 | 662 | |||
744 | 663 | |||
745 | 664 | def _stream_disk(dev, type, virtual_size, image_file): | ||
746 | 665 | offset = 0 | ||
747 | 666 | if type == ImageType.DISK: | ||
748 | 667 | offset = MBR_SIZE_BYTES | ||
749 | 668 | _write_partition(virtual_size, dev) | ||
750 | 669 | |||
751 | 670 | with open('/dev/%s' % dev, 'wb') as f: | ||
752 | 671 | f.seek(offset) | ||
753 | 672 | for chunk in image_file: | ||
754 | 673 | f.write(chunk) | ||
755 | 674 | |||
756 | 675 | |||
757 | 676 | def _write_partition(virtual_size, dev): | ||
758 | 677 | dest = '/dev/%s' % dev | ||
759 | 678 | mbr_last = MBR_SIZE_SECTORS - 1 | ||
760 | 679 | primary_first = MBR_SIZE_SECTORS | ||
761 | 680 | primary_last = MBR_SIZE_SECTORS + (virtual_size / SECTOR_SIZE) - 1 | ||
762 | 681 | |||
763 | 682 | LOG.debug(_('Writing partition table %d %d to %s...'), | ||
764 | 683 | primary_first, primary_last, dest) | ||
765 | 684 | |||
766 | 685 | def execute(cmd, process_input=None, check_exit_code=True): | ||
767 | 686 | return utils.execute(cmd=cmd, | ||
768 | 687 | process_input=process_input, | ||
769 | 688 | check_exit_code=check_exit_code) | ||
770 | 689 | |||
771 | 690 | execute('parted --script %s mklabel msdos' % dest) | ||
772 | 691 | execute('parted --script %s mkpart primary %ds %ds' % | ||
773 | 692 | (dest, primary_first, primary_last)) | ||
774 | 693 | |||
775 | 694 | LOG.debug(_('Writing partition table %s done.'), dest) | ||
776 | 467 | 695 | ||
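The key idea in `_stream_disk`/`_write_partition` above is that a non-raw image is written at a byte offset that leaves room for an MBR and partition table, and the single primary partition is sized from the image's virtual size in sectors. A sketch of that arithmetic, writing into an ordinary file rather than a `/dev` node; the constant values here are illustrative assumptions mirroring the `MBR_SIZE_*`/`SECTOR_SIZE` names used in `vm_utils.py`:

```python
SECTOR_SIZE = 512                      # assumed sector size
MBR_SIZE_SECTORS = 63                  # assumed room reserved for the MBR
MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE


def partition_bounds(virtual_size):
    """First and last sector of the single primary partition."""
    first = MBR_SIZE_SECTORS
    last = MBR_SIZE_SECTORS + (virtual_size // SECTOR_SIZE) - 1
    return first, last


def stream_disk(dest_path, image_chunks, raw=True):
    """Write image chunks to dest_path; non-raw images start after the MBR."""
    offset = 0 if raw else MBR_SIZE_BYTES
    with open(dest_path, 'wb') as f:
        f.seek(offset)                 # leave the MBR region untouched
        for chunk in image_chunks:
            f.write(chunk)
```

These are the same bounds the plugin hands to `parted --script ... mkpart primary <first>s <last>s`; the `s` suffix tells parted the numbers are sectors, not megabytes.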
777 | === modified file 'nova/virt/xenapi/vmops.py' | |||
778 | --- nova/virt/xenapi/vmops.py 2011-01-17 17:16:36 +0000 | |||
779 | +++ nova/virt/xenapi/vmops.py 2011-01-17 19:48:42 +0000 | |||
780 | @@ -85,7 +85,8 @@ | |||
781 | 85 | #Have a look at the VDI and see if it has a PV kernel | 85 | #Have a look at the VDI and see if it has a PV kernel |
782 | 86 | pv_kernel = False | 86 | pv_kernel = False |
783 | 87 | if not instance.kernel_id: | 87 | if not instance.kernel_id: |
785 | 88 | pv_kernel = VMHelper.lookup_image(self._session, vdi_ref) | 88 | pv_kernel = VMHelper.lookup_image(self._session, instance.id, |
786 | 89 | vdi_ref) | ||
787 | 89 | kernel = None | 90 | kernel = None |
788 | 90 | if instance.kernel_id: | 91 | if instance.kernel_id: |
789 | 91 | kernel = VMHelper.fetch_image(self._session, instance.id, | 92 | kernel = VMHelper.fetch_image(self._session, instance.id, |
790 | 92 | 93 | ||
791 | === modified file 'nova/virt/xenapi_conn.py' | |||
792 | --- nova/virt/xenapi_conn.py 2011-01-17 17:16:36 +0000 | |||
793 | +++ nova/virt/xenapi_conn.py 2011-01-17 19:48:42 +0000 | |||
794 | @@ -89,6 +89,9 @@ | |||
795 | 89 | 'The interval used for polling of remote tasks ' | 89 | 'The interval used for polling of remote tasks ' |
796 | 90 | '(Async.VM.start, etc). Used only if ' | 90 | '(Async.VM.start, etc). Used only if ' |
797 | 91 | 'connection_type=xenapi.') | 91 | 'connection_type=xenapi.') |
798 | 92 | flags.DEFINE_string('xenapi_image_service', | ||
799 | 93 | 'glance', | ||
800 | 94 | 'Where to get VM images: glance or objectstore.') | ||
801 | 92 | flags.DEFINE_float('xenapi_vhd_coalesce_poll_interval', | 95 | flags.DEFINE_float('xenapi_vhd_coalesce_poll_interval', |
802 | 93 | 5.0, | 96 | 5.0, |
803 | 94 | 'The interval used for polling of coalescing vhds.' | 97 | 'The interval used for polling of coalescing vhds.' |
804 | 95 | 98 | ||
805 | === modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/glance' | |||
806 | --- plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-07 03:37:33 +0000 | |||
807 | +++ plugins/xenserver/xenapi/etc/xapi.d/plugins/glance 2011-01-17 19:48:42 +0000 | |||
808 | @@ -18,7 +18,7 @@ | |||
809 | 18 | # under the License. | 18 | # under the License. |
810 | 19 | 19 | ||
811 | 20 | # | 20 | # |
813 | 21 | # XenAPI plugin for putting images into glance | 21 | # XenAPI plugin for managing glance images |
814 | 22 | # | 22 | # |
815 | 23 | 23 | ||
816 | 24 | import base64 | 24 | import base64 |
817 | @@ -40,8 +40,36 @@ | |||
818 | 40 | configure_logging('glance') | 40 | configure_logging('glance') |
819 | 41 | 41 | ||
820 | 42 | CHUNK_SIZE = 8192 | 42 | CHUNK_SIZE = 8192 |
821 | 43 | KERNEL_DIR = '/boot/guest' | ||
822 | 43 | FILE_SR_PATH = '/var/run/sr-mount' | 44 | FILE_SR_PATH = '/var/run/sr-mount' |
823 | 44 | 45 | ||
824 | 46 | def copy_kernel_vdi(session,args): | ||
825 | 47 | vdi = exists(args, 'vdi-ref') | ||
826 | 48 | size = exists(args,'image-size') | ||
827 | 49 | #Use the uuid as a filename | ||
828 | 50 | vdi_uuid=session.xenapi.VDI.get_uuid(vdi) | ||
829 | 51 | copy_args={'vdi_uuid':vdi_uuid,'vdi_size':int(size)} | ||
830 | 52 | filename=with_vdi_in_dom0(session, vdi, False, | ||
831 | 53 | lambda dev: | ||
832 | 54 | _copy_kernel_vdi('/dev/%s' % dev,copy_args)) | ||
833 | 55 | return filename | ||
834 | 56 | |||
835 | 57 | def _copy_kernel_vdi(dest,copy_args): | ||
836 | 58 | vdi_uuid=copy_args['vdi_uuid'] | ||
837 | 59 | vdi_size=copy_args['vdi_size'] | ||
838 | 60 | logging.debug("copying kernel/ramdisk file from %s to /boot/guest/%s",dest,vdi_uuid) | ||
839 | 61 | filename=KERNEL_DIR + '/' + vdi_uuid | ||
840 | 62 | #read data from /dev/ and write into a file on /boot/guest | ||
841 | 63 | of=open(filename,'wb') | ||
842 | 64 | f=open(dest,'rb') | ||
843 | 65 | #copy only vdi_size bytes | ||
844 | 66 | data=f.read(vdi_size) | ||
845 | 67 | of.write(data) | ||
846 | 68 | f.close() | ||
847 | 69 | of.close() | ||
848 | 70 | logging.debug("Done. Filename: %s",filename) | ||
849 | 71 | return filename | ||
850 | 72 | |||
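`_copy_kernel_vdi` above copies exactly `vdi_size` bytes out of the attached device into `/boot/guest/<uuid>`, because the block device itself is larger than the kernel/ramdisk payload. The plugin does this with a single `f.read(vdi_size)`; a hedged sketch of the same size-limited copy done in chunks (using the plugin's `CHUNK_SIZE`; `copy_limited` is a hypothetical name, not part of the plugin):

```python
CHUNK_SIZE = 8192  # same chunk size the plugin defines


def copy_limited(src_path, dest_path, nbytes, chunk_size=CHUNK_SIZE):
    """Copy exactly nbytes from src_path to dest_path in bounded chunks,
    avoiding the single large read that _copy_kernel_vdi performs."""
    remaining = nbytes
    with open(src_path, 'rb') as src, open(dest_path, 'wb') as dest:
        while remaining > 0:
            chunk = src.read(min(chunk_size, remaining))
            if not chunk:
                break                  # source shorter than expected
            dest.write(chunk)
            remaining -= len(chunk)
    return nbytes - remaining          # bytes actually copied
```

Chunking keeps memory use flat for a large ramdisk, whereas the one-shot read buffers the whole `vdi_size` payload in memory at once.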
851 | 45 | def put_vdis(session, args): | 73 | def put_vdis(session, args): |
852 | 46 | params = pickle.loads(exists(args, 'params')) | 74 | params = pickle.loads(exists(args, 'params')) |
853 | 47 | vdi_uuids = params["vdi_uuids"] | 75 | vdi_uuids = params["vdi_uuids"] |
854 | @@ -128,4 +156,5 @@ | |||
855 | 128 | 156 | ||
856 | 129 | 157 | ||
857 | 130 | if __name__ == '__main__': | 158 | if __name__ == '__main__': |
859 | 131 | XenAPIPlugin.dispatch({'put_vdis': put_vdis}) | 159 | XenAPIPlugin.dispatch({'put_vdis': put_vdis, |
860 | 160 | 'copy_kernel_vdi': copy_kernel_vdi}) | ||
861 | 132 | 161 | ||
862 | === modified file 'tools/pip-requires' | |||
863 | --- tools/pip-requires 2011-01-13 21:18:16 +0000 | |||
864 | +++ tools/pip-requires 2011-01-17 19:48:42 +0000 | |||
865 | @@ -26,3 +26,4 @@ | |||
866 | 26 | PasteDeploy | 26 | PasteDeploy |
867 | 27 | paste | 27 | paste |
868 | 28 | netaddr | 28 | netaddr |
869 | 29 | glance |
OK to try to get this merged, but other branches proposed before should have review priority.