Merge lp:~nttdata/nova/live-migration into lp:~hudson-openstack/nova/trunk
- live-migration
- Merge into trunk
Status | Merged
---|---
Merged at revision | 799
Proposed branch | lp:~nttdata/nova/live-migration
Merge into | lp:~hudson-openstack/nova/trunk

Diff against target: 3335 lines (+2788/-10), 21 files modified

- bin/nova-manage (+88/-0)
- contrib/nova.sh (+1/-0)
- nova/compute/manager.py (+252/-1)
- nova/db/api.py (+59/-0)
- nova/db/sqlalchemy/api.py (+121/-0)
- nova/db/sqlalchemy/migrate_repo/versions/010_add_live_migration.py (+83/-0)
- nova/db/sqlalchemy/models.py (+38/-0)
- nova/scheduler/driver.py (+237/-0)
- nova/scheduler/manager.py (+52/-0)
- nova/service.py (+3/-0)
- nova/tests/test_compute.py (+294/-0)
- nova/tests/test_scheduler.py (+622/-1)
- nova/tests/test_service.py (+41/-0)
- nova/tests/test_virt.py (+223/-3)
- nova/tests/test_volume.py (+195/-0)
- nova/virt/cpuinfo.xml.template (+9/-0)
- nova/virt/fake.py (+21/-0)
- nova/virt/libvirt_conn.py (+369/-0)
- nova/virt/xenapi_conn.py (+21/-0)
- nova/volume/driver.py (+52/-4)
- nova/volume/manager.py (+7/-1)

To merge this branch: bzr merge lp:~nttdata/nova/live-migration
Related bugs: (none listed)
Related blueprints: (none listed)
Reviewer | Review Type | Status
---|---|---
Ken Pepple | community | Approve
Thierry Carrez | community | Approve
Jay Pipes | community | Approve
Rick Harris | community | Approve
Brian Schott | community | Approve
termie | community | Needs Fixing

Review via email: mp+49699@code.launchpad.net
Commit message
Description of the change
Main changes from the previous merge request:
1. Added test code.
2. Bug fixes:
- improper resource checking (memory checking is enough for the current version)
- retrying when live migration is requested repeatedly (iptables complains in this case, so we retry)
3. iSCSI EBS volume checking:
- additions to nova.volume.
- changes to nova.compute.
Please feel free to give us comments.
Thanks in advance.
Kei Masumoto (masumotok) wrote:
Hi Rick,
Thanks for the review!
I think I have addressed all of your comments.
Additional changes were made in nova.compute. They are not related to live migration, but instances that have a kernel and ramdisk cannot launch without them. I had never changed this file, and before raising the merge request I not only ran run_tests.sh but also confirmed that instances migrated successfully on a real server, so I have no idea when these changes crept in...
Anyway, I think this change is necessary. Could you please review it as well?
Kind Regards,
Kei
-----Original Message-----
From: <email address hidden> [mailto:<email address hidden>] On Behalf Of Rick Harris
Sent: Friday, February 18, 2011 7:10 AM
To: <email address hidden>
Subject: Re: [Merge] lp:~nttdata/nova/live-migration into lp:nova
Review: Needs Fixing
Just a few nits :)
> + def describeresourc
> + def updateresource(
These should probably be `describe_resource` and `update_resource` respectively.
3083 +def mktmpfile(dir):
3084 + """create tmpfile under dir, and return filename."""
3085 + filename = datetime.
3086 + fpath = os.path.join(dir, filename)
3087 + open(fpath, 'a+').write(fpath + '\n')
3088 + return fpath
It would probably be better to use the `tempfile` module in the Python stdlib.
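A sketch of what Rick is suggesting, using `tempfile.mkstemp` (the function name is kept from the diff; writing the path into the file as a marker is an assumption based on the quoted code):

```python
import os
import tempfile

def mktmpfile(dir):
    """Create a uniquely named file under dir and return its path.

    tempfile.mkstemp picks a collision-free name and returns an
    already-open file descriptor, so no hand-rolled datetime-based
    naming is needed.
    """
    fd, fpath = tempfile.mkstemp(dir=dir)
    os.write(fd, (fpath + '\n').encode())  # same marker content as the diff
    os.close(fd)
    return fpath
```

The returned path can then be checked with os.path.exists and removed with os.remove directly, which also addresses the wrapper-function comment that follows.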
3091 +def exists(filename):
3092 + """check file path existence."""
3093 + return os.path.
3094 +
3095 +
3096 +def remove(filename):
3097 + """remove file."""
3098 + return os.remove(filename)
These wrapper functions seem unnecessary, it would probably be better to just use os.path.exists and os.remove directly in the code.
If you need a stub-point for testing, you can stub out `os.path` and `os` directly.
+ LOG.info(
Needs i18n _('post_live...') treatment.
533 + #services.
534 + #services.
535 + #services.
536 + #services.
537 + #services.
538 + #services.
539 + #services.
540 + #services.
541 + #services.
Was this left in by mistake?
902 + print 'manager.attrerr', e
Probably should be logging here, rather than printing to stdout.
--
https:/
Your team NTT DATA is subscribed to branch lp:~nttdata/nova/live-migration.
Brian Schott (bfschott) wrote:
We're very interested in this capability, so we're looking forward to it. A few comments.
1. Current branch conflicts with lp:nova trunk.
+N nova/db/
+N nova/db/
fix: bschott@
2. Should these be in their own table? That is a lot of fields to add to the Service table directly, since this is a table that has entries for every service type. I was thinking about adding a ComputeService (compute_services) table for our heterogeneous compute cluster.
627 + # The below items are compute node only.
628 + # None is inserted for other service.
629 + vcpus = Column(Integer, nullable=True)
630 + memory_mb = Column(Integer, nullable=True)
631 + local_gb = Column(Integer, nullable=True)
632 + vcpus_used = Column(Integer, nullable=True)
633 + memory_mb_used = Column(Integer, nullable=True)
634 + local_gb_used = Column(Integer, nullable=True)
635 + hypervisor_type = Column(Text, nullable=True)
636 + hypervisor_version = Column(Integer, nullable=True)
3. We can use the "arch" sub-field below for our project. Can we talk about adding accelerator_info (for GPUs, FPGAs, or other co-processors) and possibly network_info for details on the physical network interface?
# Note(masumotok): Expected Strings example:
#
# '{"arch":"x86_64", "model":"Nehalem",
# "topology"
# features:[ "tdtscp", "xtpr"]}'
#
# Points are "json translatable" and it must have all
# dictionary keys above.
cpu_info = Column(Text, nullable=True)
bschott@
+N nova/api/
+N nova/db/
+N nova/db/
+N nova/tests/
+N nova/tests/
M .mailmap
M Authors
M HACKING
M MANIFEST.in
M bin/nova-manage
M locale/nova.pot
M nova/api/
M nova/api/
M nova/api/
M nova/api/
M nova/api/
M nova/auth/
M nova/auth/
M nova/compute/api.py
M nova/compute/
M nova/compute/
M nova/context.py
M nova/db/api.py
M nova/db/
M nova/db/
M nova/db/
M nova/db/
M nova/db/
M nova/flags.py
M nova/log.py
M nova/network/
M nova/network/
M nova/rpc.py
M nova/tests/
M nova/tests/
M nova/tests/
M nova/tests/
M nova/tests/
M nova/tests/
M nova/twistd.py
M nova/u...
termie (termie) wrote:
Hello :) I think the code looks very good (the tests especially appear thorough); however, there are many places for style cleanup. You may want to read the part of the HACKING file about docstrings before going on:
in bin/nova-api:
looks like utils.default_
in bin/nova-
looks like there is a leftover debugging statement ('open...')
in bin/nova-manage:
please update the docstring for 'live_migration' to describe what it will do (something like "Migrates a running instance to a new machine." is fine)
for the long "if FLAGS.volume_
if (FLAGS.
FLAGS.
When generating the "msg" you can do something similar:
msg = ('Migration of %s initiated. Checking its progress'
' using euca-describe-
in the docstring for describe_resource, please capitalize the first word (Describe...)
the comment at line 83 ("Checking result msg format is necessary...") is a little unclear, are you saying:
It will be necessary to check the result msg format when this feature is included in the API.
if so, you could say:
TODO(masumotok): It will be necessary to check the result msg...
Please capitalize the first letter of the docstring for update_resource
in nova/compute/
the triple quotes are not necessary around the description of the 'flags.
flags.DEFINE_string looks like it should be flags.DEFINE_
the docstring for compare_cpu has an extra space at the beginning that is not necessary.
please capitalize the first letter of the docstring for mktmpfile
if you are only writing to the tmpfile for debugging purposes, perhaps that should be a logging.debug call?
please add a period to the end of the docstring for update_
in the pre_live_migration method, there should be an apostrophe in the word "doesnt" (doesn't)
may as well capitalize the first letter in the Bridge settings comment ('Call this method...')
in the message about failing a retry you can remove the 'th.' part, and change 'fail' to 'failed', it still doesn't read perfectly but pluralization isn't really necessary for log messages.
in the live_migration method, you can delete the line about #@exception.
also, please capitalize the first letter of the docstring.
in post_live_
"""Post operations for live migration.
Mainly, database updating.
"""
also in post_live_migration you check 'None == some_variable' a couple times, in python we don't usually do this because it is impossible to write 'if some_variable = None' because the assignment operation is not an expression... which means you don't need to be extra safe with which side the variable is on and having the variable first is easier to read (at least in english).
also a bit further down you don't need to use triple ...
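termie's point about comparison order can be sketched like this (the function and variable names are hypothetical, not from the branch):

```python
def describe_migration(result=None):
    """Contrast the two comparison styles termie discusses."""
    # Discouraged "Yoda" style: constant first, equality comparison.
    #   if None == result: ...
    # Idiomatic: variable first, and `is` for None checks, since None
    # is a singleton and identity comparison reads naturally.
    if result is None:
        return 'migration still running'
    return 'migration finished: %s' % result
```

Putting the variable first also reads more naturally in English, which was termie's main point.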
Kei Masumoto (masumotok) wrote:
Hi Brian,
Thanks for the review!
I think I have fixed everything based on your comments.
(The branch will be updated soon.)
Also, regarding point 3 below,
> 3. We can use the "arch" sub-field below for our project. Can we talk about adding
> accelerator_info (for GPUs, FPGAs, or other co-procesors) and possibly network_info
> for details on the physical network interface?
We use the cpu_info column to store an argument of compareCPU() in virConnect.
You can get examples by following the procedure below.
# python
# import libvirt
# conn = libvirt.
# conn.getCapabil
Once you run the above, you get XML. We cut out the <cpu>...</cpu> section and store it in the db.
I expect somewhat different results will be shown in your hardware environment.
FPGA/GPU info should then also be included, I suppose.
If libvirt doesn't find any FPGA/GPU info, please let me know.
I will have to give it more thought...
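The procedure Kei describes might look like this in script form (a sketch: the connection URI is a guess and a running libvirtd is assumed; only the XML-cutting helper can run without libvirt):

```python
import re

def extract_cpu_info(capabilities_xml):
    """Cut the <cpu>...</cpu> element out of getCapabilities() output;
    this is the string stored in the cpu_info column."""
    match = re.search(r'<cpu>.*?</cpu>', capabilities_xml, re.DOTALL)
    return match.group(0) if match else None

def fetch_cpu_info():
    # Requires the libvirt Python bindings and a running hypervisor,
    # so this part is illustrative only.
    import libvirt
    conn = libvirt.open('qemu:///system')  # URI is an assumption
    return extract_cpu_info(conn.getCapabilities())
```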
Kei Masumoto (masumotok) wrote:
Hi termie,
Thank you very much for reviewing my branch!
I have fixed everything based on your comments.
I hope I didn't miss anything...
Please let me know if you have further comments.
Regards,
Kei
Kei Masumoto (masumotok) wrote:
Hi Rick,
Thanks again for reviewing my branch.
Since your review, I have gone through two other reviewers and things have improved.
Hopefully we can now proceed with merging.
Also, please tell me if I have missed any of your points.
Regards,
Kei
Brian Schott (bfschott) wrote:
I've been reviewing your changes. Thank you for doing the compute service table changes. I plan to do some integration testing next week with our hpc-trunk branch.
Brian Schott
<email address hidden>
On Feb 25, 2011, at 8:47 AM, Kei Masumoto wrote:
Kei Masumoto (masumotok) wrote:
To all reviewers:
It has been a week since I last did "bzr merge lp:nova" on this branch, and I think it is better to update it again to avoid conflicts.
If you are in the middle of reviewing again, please wait a while; I will e-mail again once I have finished. A few hours should be enough, hopefully.
Regards,
Kei Masumoto
Kei Masumoto (masumotok) wrote:
To all reviewers:
I have finished merging trunk rev 752.
A few changes were necessary. Please check the comments below.
> 1. merged trunk rev749
> 2. rpc.call returns '/' as '\/', so nova.compute.
> 3. nova.tests.
If you have further comments, if I missed your point, please let me know.
Rick Harris (rconradharris) wrote:
Hi Kei! The improvements look good; thanks for the updates. Here are my round-two review notes:
> def mktmpfile(self, context):
It might be a good idea to rename these functions. Right now, a name like
confirm_tmpfile contains implementation details but doesn't provide a good
hint as to what the function is used for.
Might be better as:
create_
Also, what happens if the destination isn't on shared storage? We've deposited
the test file, will that ever be cleaned up?
Perhaps in pseudo-code it should be something like:
def mounted_
try:
# Unlike confirm_tmpfile, this doesn't delete the test_file; that is left
# to cleanup_test_file
finally:
# Regardless of whether we find it, we always delete it
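Fleshed out, Rick's pseudo-code might read like this (the function and argument names are hypothetical stand-ins):

```python
import os

def check_shared_storage_test_file(tmp_file):
    """Return True if tmp_file, created on the source host, is visible
    on this host -- i.e. both hosts mount the same shared storage.

    The finally block guarantees the marker file is cleaned up whether
    or not it was found, addressing the question about stale test
    files when the destination is not on shared storage.
    """
    try:
        return os.path.exists(tmp_file)
    finally:
        if os.path.exists(tmp_file):
            os.remove(tmp_file)
```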
> =======
> ERROR: Failure: ImportError (No module named libvirt)
> -------
> Traceback (most recent call last):
> File "/Library/
> addr.filename, addr.module)
> File "/Library/
> return self.importFrom
> File "/Library/
> mod = load_module(
> File "/Users/
> import libvirt
> ImportError: No module named libvirt
>
> -------
Libvirt is required for the tests to run.
Since not everyone is going to have libvirt on their machine, we should probably use
the 'LazyImport pattern' here so that we only import libvirt if it's actually
going to be used-- in the case of unit-tests only the FakeLibvirt should be
used.
> 232 + for v in instance_
In general, it's better to avoid variable names with only one letter -- a
descriptive name aids readability and makes it a little easier to 'grep'
around the code. In this case, 'volume' seems like the right choice. There
are a few other instances throughout the code where I think single-letter
variable names should probably be expanded:
+ p = os.path.
On the other hand, it's fine (and idiomatic) for exception handler blocks to use `e` as the
variable for their exception. Like:
+ except exception.
> 697 +compute_services = Table('
'compute_services' sounds a little too much like the 'compute worker service'
that we already have. This might be clearer if renamed
'compute_nodes' or 'compute_hosts'.
The 'compute_node' would represent the physical machine, while the
'compute-service' would represent the logical endpoin...
Kei Masumoto (masumotok) wrote:
Hi Rick!
Thanks for the review!
I agree with all of your comments.
I will fix them soon...
Kei Masumoto (masumotok) wrote:
Hi, Rick!
I have fixed the branch based on your comments.
One thing to note: I removed the libvirt-dependent tests in test_virt.py for developers in non-libvirt environments.
(At first I was trying to mock/FakeLibvirt as much as I could, but eventually the tests became not so meaningful :) )
I hope to get your feedback...
Kei
Rick Harris (rconradharris) wrote:
Hi Kei.
I ran the tests on my Mac OS X machine and received 1 failure. Looks like we might need to mock out the get_cpu_info portion of the driver.
> but eventually tests become not so meaningful
Agreed, in terms of "does this really work?", the unit tests aren't a substitute for real functional/
However, even with lots of code faked out, we can still get some value from the tests in terms of catching small issues: passing the wrong number of arguments, syntax errors, variables being of the wrong type. These are things that unit tests are really good at catching. And since we don't have the benefit of a compiler pass, these unit tests really help cut down on the number of such problems that make it into trunk.
=======
ERROR: test_update_
-------
Traceback (most recent call last):
File "/Users/
conn.
File "/Users/
dic = {'vcpus': self.get_
File "/Users/
return open('/
IOError: [Errno 2] No such file or directory: '/proc/cpuinfo'
-------
2011-03-03 12:58:20,747 AUDIT nova.auth.manager [-] Created user fake (admin: True)
2011-03-03 12:58:20,750 AUDIT nova.auth.manager [-] Created project fake with manager fake
-------
=======
ERROR: test_update_
-------
Traceback (most recent call last):
File "/Users/
super(
File "/Users/
self.
File "build/
mock_
File "build/
raise ExpectedMethodC
ExpectedMethodC
0. get_cpu_
-------
2011-03-03 12:58:20,747 AUDIT nova.auth.manager [-] Created user fake (admin: True)
2011-03-03 12:58:20,750 AUDIT nova.auth.manager [-] Created project fake with manager fake
2011-03-03 12:58:20,795 AUDIT nova.auth.manager [-] Deleting project fake
2011-03-03 ...
Kei Masumoto (masumotok) wrote:
Hi Rick.
Thanks for your response.
> I ran the tests on my Mac OS X machine and received 1 failure. Looks like we might
> need to mock out the get_cpu_info portion of the driver.
Thanks for this information! That is very helpful. OK, perhaps enhancing the exception handling in get_cpu_info is better than mocking it out, since get_cpu_info() is called whenever nova-compute launches.
if sys.platform.
return 0
else:
open(
Please let me know if you have any comments at this point.
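Kei's snippet is truncated above; the guard he describes might be sketched as follows (the helper name and the line-counting detail are assumptions):

```python
import os

def get_vcpu_total():
    """Count host CPUs for the resource table.

    /proc/cpuinfo only exists on Linux, so on other platforms (such as
    the Mac OS X machine the tests were run on) return 0 instead of
    letting the IOError escape and break nova-compute startup.
    """
    if not os.path.exists('/proc/cpuinfo'):
        return 0
    with open('/proc/cpuinfo') as f:
        return sum(1 for line in f if line.startswith('processor'))
```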
> However, even with lots of code faked out, we can still get some value
> from the tests in terms of catching small issues: passing wrong number arguments,
> syntax errors, variables being of the wrong type. These are things that unit
> tests are really good at catching. And since we don't have the benefit of
> a compiler-pass, these unit tests really help cut down on the number of
> these problems that make it into trunk.
Understood. Actually, I was a bit confused about whether I could include unit-test-like tests in trunk.
Let me bring the deleted test code back into this branch.
I will fix this soon...
Thanks again!
Kei
Kei Masumoto (masumotok) wrote:
Hi Rick,
I have fixed my branch based on your comments.
I think I already explained the main changes in my previous e-mail -- please review them.
Thanks,
Kei
Brian Schott (bfschott) wrote:
In case reviewers are hitting this:
---
bzr rename nova/db/
--
I suggest you create flags:
compute_vcpus_total
compute_
compute_
They can be used to specify resources less than total (like suppose you only want to dedicate 1 core to VMs or half your host memory)? Also, some Linux distros don't have /proc, or at least I think /proc fs is still optional in the kernel.
if FLAGS.compute_
return FLAGS.compute_
else:
try
except ... (i forget what goes here :-)
return 1
Kei Masumoto (masumotok) wrote:
Hi Brian,
Thanks for the approval.
> bzr rename nova/db/
> nova/db/
OK. I will fix this soon.
> They can be used to specify resources less than total
> (like suppose you only want to dedicate 1 core to VMs or half your host memory)?
> Also, some Linux distros don't have /proc, or at least
> I think /proc fs is still optional in the kernel.
>
> if FLAGS.compute_
> return FLAGS.compute_
> else:
> try
> open('/
> except ... (i forget what goes here :-)
> return 1
I think I understand your point.
Currently, I use a multi-platform library to calculate the number of CPUs and the amount of disk, so there is no problem on that point.
Regarding memory, I was trying a multi-platform library like psutil(http://
If "like suppose you only want to dedicate 1 core to VMs or half your host memory?" is your point, please specify which part you are looking at. I intend neither "only 1 core to VMs" nor "half of host memory".
Again, thanks for the approval.
Kei
Brian Schott (bfschott) wrote:
Sorry, I meant it would be good for cloud server administrators to be able to specify how many cores, how much disk, and how much memory are dedicated to nova.
If someone runs a cloud on a laptop or office computers, they might want to reserve some capacity for the host operating system.
Or an admin might reserve one core and a gigabyte of memory for a swift storage server. Not all configs are dedicated compute blades.
Looking forward to seeing this merged soon!
Sent from my iPhone
On Mar 8, 2011, at 9:40 PM, Kei Masumoto <email address hidden> wrote:
Kei Masumoto (masumotok) wrote:
Brian, thanks for the explanation! I understand your comment; "reserving some cpu/memory/disk" sounds like a good idea. I personally agree with implementing it in nova, but unfortunately I didn't mention "reserving" when I got the blueprint approved. In addition, it is not only a live migration topic but a topic for all of nova. An admin would also want "reserving" when just launching instances or creating volumes, wouldn't he?
Therefore, I think it is better to discuss this at the next design summit. (Actually, I've heard from someone that this feature is necessary and that it should be implemented in the scheduler. I am sure there will be many discussions :)
Thanks again!
Kei
Brian Schott (bfschott) wrote:
No problem. I'll propose a follow-on blueprint and link it to this one with a more detailed approach.
Brian Schott
<email address hidden>
On Mar 9, 2011, at 12:43 AM, Kei Masumoto wrote:
Rick Harris (rconradharris) wrote:
Nice work, Kei.
Some small nits:
> 194 + os.fdopen(fd, 'w+').close()
`os.close` should suffice, since the fd that mkstemp returns is already open.
> 1007 + ec2_id = instance_
Doesn't appear to be used.
Jay Pipes (jaypipes) wrote:
Hi Kei!
All tests are passing locally for me, which is great, and the code looks very solid. Very good use of mox in your test cases.
Just a few suggestions, mostly small style stuff...
1)
There are a number of places you use __setitem__, like so:
1223 + instance_
it's easier to just write:
instance_ref['id'] = 1
2) Tiny style/i18n/English stuff
Please do not take offence at me correcting your English phrases! :)
62 + print 'Unexpected error occurs'
Please i18n that. Also, the English phrasing would be "An unexpected error has occurred."
29 + raise exception.
There are 2 spaces after raise. Only 1 needed :)
92 + raise exception.
English saying would be "%s does not exist" (without the s on exist)
146 +flags.
Might want to use DEFINE_integer to ensure an integer is used as the flag value...
192 + LOG.debug(
193 + "compute node that they mounts same storage.") % tmp_file)
s/node that they mounts same storage/nodes that they should mount the same storage/
248 + msg = _("%(instance_
s/does'nt have/does not have/
365 + LOG.info(
73 + LOG.info(
s/floating_ip is not found for/No floating IP was found for/
381 + LOG.info(
s/finishes successfully/
383 + LOG.info(_("The below error is normally occurs. "
384 + "Just check if instance is successfully migrated.\n"
385 + "libvir: QEMU error : Domain not found: no domain "
386 + "with matching name.."))
I would say this, instead:
LOG.info(_("You may see the error \"libvirt: QEMU error: "
"Domain not found: no domain with matching name.\" "
"This error can be safely ignored.")
547 + raise exception.
548 + "compute node.") % host)
s/or not compute node/or is not a compute node/
1040 + raise exception.
1041 + "migrate %(dest)s (host:%(mem_avail)s "
I'd rewrite that as "Unable to migrate %(ec2_id)s to destination: %(dest)s ..."
1073 + logging.
s/comfirm/confirm/ :)
You use this line:
global ghost, gbinary, gmox
in 2 places in the nova/tests/
2198 + global ghost, gbinary, gmox
2238 + global ghost, gbinary, gmox
However, the actual variable names are:
2187 +# temporary variable to store host/binary/
2188 +# from each method to fake class.
2189 +global_host = None
2190 +global_binary = None
2191 +global_mox = None
You will want to make those consistent I believe, otherwise I'm not sure what gbinary, ghost, and gmox are going to refer to ;)
2385 + def tes1t_update_
s/tes1t/test :) The misspelling is causing this test case not to be run. (It passes, BTW, when you fix the typo... I checked. :) )
2658 + msg = _("""Cannot confirm exported volume id:%(volume_
2659 + """vblade process...
Kei Masumoto (masumotok) wrote:
Thanks for the review, Rick! I'll fix it soon...
Kei
Kei Masumoto (masumotok) wrote:
Thanks Jay! Your statement at the last IRC meeting was very helpful for me. Everyone reviewed again and gave an Approve!
I'll fix my branch based on your comments soon...
One note:
> You use this line:
>
> global ghost, gbinary, gmox
>
> in 2 places in the nova/tests/
>
>2198 + global ghost, gbinary, gmox
>2238 + global ghost, gbinary, gmox
>
>However, the actual variable names are:
>
>2187 +# temporary variable to store host/binary/
>2188 +# from each method to fake class.
>2189 +global_host = None
>2190 +global_binary = None
>2191 +global_mox = None
>
>You will want to make those consistent I believe, otherwise I'm not sure what gbinary, ghost, and gmox are going to >refer to ;)
I completely forgot to update this testcase. I'll rewrite this. Sorry..
Thanks again!
Kei
Jay Pipes (jaypipes) wrote:
Awesome job, Kei :)
Brian Schott (bfschott) wrote:
+1
This branch adds a lot of new capabilities.
Brian Schott
<email address hidden>
On Mar 10, 2011, at 9:36 AM, Jay Pipes wrote:
Kei Masumoto (masumotok) wrote:
Jay, Brian, thank you! I appreciate your help!
Kei
Ken Pepple (ken-pepple) wrote:
This is a nit but will drive nova admins crazy -- in nova-manage, I think we should verify the destination host name and service alive-ness before we send the rpc call off to the scheduler. I think this is as easy to implement as wrapping nova-manage:31-39 in an if..else statement with a call to db.service_
This will save us from waiting on the migration (that will never happen) and cleaning out the queue later.
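Ken's pre-check might be sketched like this (the db helper name and record shape are hypothetical stand-ins, not necessarily the real nova API):

```python
def validate_migration_destination(db, context, dest):
    """Fail fast in nova-manage, before queuing the rpc call, if the
    destination host is unknown or its nova-compute is not alive."""
    # service_get_all_by_host and the 'alive' key are illustrative names.
    services = db.service_get_all_by_host(context, dest)
    if not services:
        raise ValueError('%s is not a registered host' % dest)
    if not any(s['alive'] for s in services):
        raise ValueError('nova-compute on %s is not alive' % dest)
```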
Kei Masumoto (masumotok) wrote:
Hi Ken, thanks for your comments!
I will fix it soon...
Kei
Kei Masumoto (masumotok) wrote:
Hi Ken,
I would appreciate it if I could ask you some questions.
1. In the currently proposed code, the scheduler checks many things: src/dest aliveness, lack of memory, hypervisor compatibility... Does your comment imply that all those checks must be done in nova-manage, or just the aliveness check? I was thinking it is safer to do those checks in the scheduler -- for example, when the scheduler is busy. Another example is a user sending a request directly to the scheduler without using nova-manage (I am worried that some security issue occurs here -- or is there no need to worry?).
2. In the currently proposed code, if the destination host is not alive (this check is done in the scheduler), an exception is raised and returned to nova-manage. Do we then have to clean up rabbitmq?
I'm a bit confused; please shed some light on this...
Kei
Ken Pepple (ken-pepple) wrote:
> 1. In current proposed code, scheduler checks many things, src/dest alive check, lack of memory check, hypervisor check... Your comment implies all those checks must be done in nova-manage? or just alive check? I was thinking it is safer that those checks is done at scheduler, for example, scheduler is busy. Other example is that an use directly send a request to scheduler not using nova-manage(what I am worrying about some security issues occurs here or no need to think?).
Hi masumoto-san -
Sorry, I meant for only basic checks to be done in nova-manage. My concern is that admins will start a live migration to a non-existent host (or disabled host), wait for a minute or two, then check euca-describe-
I agree that most checks should be done in the scheduler, so that later we might be able to add API support for live-migration.
Will nova/compute/
> 2. In current proposed code, if destination host is not alive(this check is done at scheduler), an exception raised and returned to nova-manage. then, we have to cleanup rabbitmq?
No -- we are never putting it in the queue.
Thanks for all the work on this -- looking for to live-migration.
Kei Masumoto (masumotok) wrote:
Hi Ken-san, thanks for your answer!
> My concern is that admins will start a live migration to a non-existent host
> (or disabled host), wait for a minute or two, then check euca-describe-instances
> and see that nothing happened because the recover_live_migration has
> already set it back to "running" state.
In our environment, the admin gets an error message such as "destination host is not alive" within a few seconds, because the scheduler checks this and raises an exception (see below for the other scheduler checks).
> Will nova/compute/
pre_live_migration() runs on the destination host, so the checks that must be done on that host itself are done there.
On the other hand, the scheduler takes care of the other checks, which can be done anywhere (see below).
[Examples of scheduler checks]
- instance is running
- src/dest hosts exist (and are alive)
- nova-compute runs on both src and dest hosts
- nova-volume is alive when the instance mounts a volume
- hypervisor_type, hypervisor_version and cpu compatibility
- dest host has enough memory
- src/dest hosts mount the same shared storage
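The list of scheduler checks above could be sketched roughly as follows. This is an illustrative sketch only: `MigrationError`, `check_live_migration`, and the in-memory `services` table are hypothetical stand-ins, not the actual nova.scheduler.driver code.

```python
# Hypothetical sketch of the scheduler-side pre-checks listed above.
# Raising immediately means nova-manage gets a fast error instead of
# the instance silently flipping back to "running".

RUNNING = 'running'


class MigrationError(Exception):
    pass


def check_live_migration(instance, services, dest):
    """Run the cheap, host-independent preconditions before queueing."""
    if instance['state_description'] != RUNNING:
        raise MigrationError('instance is not running')

    src = instance['host']
    for host in (src, dest):
        svc = services.get(host)
        if svc is None or not svc['alive']:
            raise MigrationError('%s is not alive' % host)
        if svc['topic'] != 'compute':
            raise MigrationError('no nova-compute on %s' % host)

    # Hypervisor compatibility and destination memory headroom.
    if services[dest]['hypervisor_type'] != services[src]['hypervisor_type']:
        raise MigrationError('hypervisor mismatch between %s and %s'
                             % (src, dest))
    free = services[dest]['memory_mb'] - services[dest]['memory_mb_used']
    if instance['memory_mb'] > free:
        raise MigrationError('not enough memory on %s' % dest)
```

Host-local checks (shared-storage test file, CPU feature comparison) still have to run on the compute nodes themselves, which is why they live in pre_live_migration-style RPC calls rather than here.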
Please let me know if this does not make sense to you (I always worry that my English mistakes confuse you :)). I think I have explained why your concerns do not apply to the current implementation....
Thanks again!
Kei
Ken Pepple (ken-pepple) wrote:
On Mar 10, 2011, at 8:01 PM, Kei Masumoto wrote:
> Please let me know if it does not make sense to you.(I always am worrying about my english mistake confuse you :)) . I think I explained here your concerns does not matter in current implementation….
Okay, I think I understand … it can be a bit difficult to follow the scheduler code sometimes.
Last question: don't we need this patch (see below)? My install fails when I do this:
root@shuttle:
2011-03-10 21:53:20,653 CRITICAL nova [-] global name 'ec2_id_to_id' is not defined
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "bin/nova-manage", line 1074, in <module>
(nova): TRACE: main()
(nova): TRACE: File "bin/nova-manage", line 1066, in main
(nova): TRACE: fn(*argv)
(nova): TRACE: File "bin/nova-manage", line 573, in live_migration
(nova): TRACE: instance_id = ec2_id_to_id(ec2_id)
(nova): TRACE: NameError: global name 'ec2_id_to_id' is not defined
(nova): TRACE:
Or did I not install this correctly ?
Thanks again
/k
===== PATCH ======
=== modified file 'bin/nova-manage'
--- bin/nova-manage	2011-03-10 06:23:13 +0000
+++ bin/nova-manage	2011-03-11 05:56:38 +0000
@@ -570,7 +570,7 @@
         """
 
         ctxt = context.get_admin_context()
-        instance_id = ec2_id_to_id(ec2_id)
+        instance_id = ec2utils.ec2_id_to_id(ec2_id)
 
         if FLAGS.connection_type != 'libvirt':
             msg = _('Only KVM is supported for now. Sorry!')
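For context on the NameError above: the helper in question maps an EC2-style identifier to nova's internal integer id and back. A minimal sketch of that mapping (nova's actual ec2utils helpers may differ in detail):

```python
# Illustrative sketch of the EC2-id <-> internal-id conversion that the
# patch above moves into ec2utils; not nova's exact implementation.

def ec2_id_to_id(ec2_id):
    """Convert an id like 'i-0000001e' to an internal integer id."""
    # The suffix after the dash is a hex-encoded integer.
    return int(ec2_id.split('-')[-1], 16)


def id_to_ec2_id(instance_id, template='i-%08x'):
    """Convert an internal integer id back to an EC2-style id."""
    return template % instance_id
```

The bug is simply that `ec2_id_to_id` was called as a bare global in bin/nova-manage after the helper had moved into the ec2utils namespace.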
Kei Masumoto (masumotok) wrote:
Hi ken!
Hold on, a quite big earthquake has just hit Japan. The TV channels don't work, so I am not sure what is going on..
Probably almost all companies have stopped their business, and employees are trying to get home.
Actually, I helped a pregnant woman and her baby and finally got back to my apartment.
After I tidy my room, I'm going to check this. My new TV didn't fall down and isn't broken, although some glasses broke; I don't know whether I am still lucky or not...
By the way, I am not sure whether it is OK to keep writing e-mail or I should prepare to evacuate? :)
Kei
Ken Pepple (ken-pepple) wrote:
Masumoto-san -- I'm talking with my friends in Aoyama (magnitude 8.8!). You should definitely escape :)
Thierry Carrez (ttx) wrote:
I think this should be merged *now*. The feature part was approved already. Given the situation in Japan I don't expect Kei to have lots of time to add the additional pre-checks that Ken mentioned.
Someone can propose a branch that adds the checks afterwards.
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/tests/
Ken Pepple (ken-pepple) wrote:
agreeing with ttx, will file bugs/patches on my objections.
Jay Pipes (jaypipes) wrote:
I'm fixing the merge conflict locally for Kei and will push shortly.
Brian Schott (bfschott) wrote:
Jay,
Not that you need a reference, but I may have fixed those conflicts in:
lp:~usc-isi/nova/hpc-trunk
Don't pull the whole branch, as it has our cpu-arch extensions.
Brian Schott
<email address hidden>
Jay Pipes (jaypipes) wrote:
Thanks Brian! It was a simple little import order thingie, though :)
Not a big deal!
Brian Schott (bfschott) wrote:
Lorin,
Good catch. That's going to hit trunk soon. I'm going to submit a bug to nova trunk. Jay, are you able to confirm this?
Brian
---
Brian Schott
USC Information Sciences Institute
http://
ph: 703-812-3722 fx: 703-812-3712
On Mar 14, 2011, at 3:34 PM, Lorin Hochstein wrote:
> I was running hpc-trunk with Ubuntu packages and saw this error in nova-compute
>
> 2011-03-14 12:12:02,764 ERROR nova [-] in Service.create()
> (nova): TRACE: Traceback (most recent call last):
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: services = [Service.create()]
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: report_interval, periodic_interval)
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: self.manager = manager_
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: self.driver = utils.import_
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: return cls()
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: conn = libvirt_
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: return LibvirtConnecti
> (nova): TRACE: File "/usr/lib/
> (nova): TRACE: self.cpuinfo_xml = open(FLAGS.
> (nova): TRACE: IOError: [Errno 2] No such file or directory: '/usr/lib/
> (nova): TRACE:
>
> The NTT guys seem to have added a new file (cpuinfo.xml.template) that didn't make it into the packages.
>
> Lorin
>
> --
> Lorin Hochstein, Computer Scientist
> USC Information Sciences Institute
> 703.812.3710
> http://
>
Jay Pipes (jaypipes) wrote:
Hmm, this should not have gotten through the distribution/packaging tests... I'll see what I can discover.
-jay
Soren Hansen (soren) wrote:
I have a Jenkins job that would have alerted us about this sooner. It
triggers when files are added to bzr, but don't end up in the tarball.
I've made a note to get that added tomorrow.
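A check like the Jenkins job Soren describes can boil down to diffing the version-controlled file list against the tarball contents. A sketch under stated assumptions: the `bzr ls --versioned` invocation and the tarball prefix handling are illustrative, not the actual job's code.

```python
# Sketch of a "did every versioned file make it into the tarball?" check,
# like the Jenkins job described above. The bzr CLI invocation and the
# top-level tarball prefix are assumptions for illustration.

import subprocess
import tarfile


def versioned_files(branch_dir):
    """Files tracked in the bzr branch (assumes the bzr CLI is present)."""
    out = subprocess.check_output(
        ['bzr', 'ls', '-R', '--versioned', branch_dir], text=True)
    return set(line.strip() for line in out.splitlines() if line.strip())


def tarball_files(tarball_path, prefix):
    """Regular files in the tarball, with the 'nova-X.Y/' prefix stripped."""
    with tarfile.open(tarball_path) as tar:
        return set(m.name[len(prefix):] for m in tar.getmembers()
                   if m.isfile() and m.name.startswith(prefix))


def missing_from_tarball(versioned, shipped):
    """Versioned files that did not end up in the tarball."""
    return sorted(versioned - shipped)
```

Running `missing_from_tarball(versioned_files('.'), tarball_files('nova.tar.gz', 'nova-2011.2/'))` after a tarball build would have flagged cpuinfo.xml.template before the packages shipped.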
--
Soren Hansen | http://
Ubuntu Developer | http://
OpenStack Developer | http://
Kei Masumoto (masumotok) wrote:
Hi,
I would like to thank everyone involved in getting the live-migration branch merged. In particular, the reviewers, both from core dev and from the community, helped me improve the quality of our branch. I don't think this is the end of our work; I've already recognized that I have to catch up with some recent nova changes. My team is planning to submit some patches.
By the way, regarding the earthquake: it happened in the northern part of Japan, not Tokyo (where I live and work). Although we can keep our business going, the effects also reach Tokyo: crazy traffic jams, lack of gasoline, lack of food, power outages, so I spend the nights with no lights, only candles... etc.
Please forgive us if we need some more time to get back, and I am looking forward to thanking all of you face to face at the next design summit.
Regards,
Kei
Jay Pipes (jaypipes) wrote:
I think I can safely say that all of us in the contributor community
wish you and all our colleagues and friends in Japan our best. It's a
horrific event and many of us feel powerless to do anything about it.
We hope you can find some normality in the coming weeks as, hopefully,
Japan recovers from the earthquake.
All our best,
jay
Brian Schott (bfschott) wrote:
+100
Brian Schott
<email address hidden>
Preview Diff
1 | === modified file 'bin/nova-manage' | |||
2 | --- bin/nova-manage 2011-03-10 04:42:11 +0000 | |||
3 | +++ bin/nova-manage 2011-03-10 06:27:59 +0000 | |||
4 | @@ -558,6 +558,40 @@ | |||
5 | 558 | db.network_delete_safe(context.get_admin_context(), network.id) | 558 | db.network_delete_safe(context.get_admin_context(), network.id) |
6 | 559 | 559 | ||
7 | 560 | 560 | ||
8 | 561 | class VmCommands(object): | ||
9 | 562 | """Class for mangaging VM instances.""" | ||
10 | 563 | |||
11 | 564 | def live_migration(self, ec2_id, dest): | ||
12 | 565 | """Migrates a running instance to a new machine. | ||
13 | 566 | |||
14 | 567 | :param ec2_id: instance id which comes from euca-describe-instance. | ||
15 | 568 | :param dest: destination host name. | ||
16 | 569 | |||
17 | 570 | """ | ||
18 | 571 | |||
19 | 572 | ctxt = context.get_admin_context() | ||
20 | 573 | instance_id = ec2_id_to_id(ec2_id) | ||
21 | 574 | |||
22 | 575 | if FLAGS.connection_type != 'libvirt': | ||
23 | 576 | msg = _('Only KVM is supported for now. Sorry!') | ||
24 | 577 | raise exception.Error(msg) | ||
25 | 578 | |||
26 | 579 | if (FLAGS.volume_driver != 'nova.volume.driver.AOEDriver' and \ | ||
27 | 580 | FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver'): | ||
28 | 581 | msg = _("Support only AOEDriver and ISCSIDriver. Sorry!") | ||
29 | 582 | raise exception.Error(msg) | ||
30 | 583 | |||
31 | 584 | rpc.call(ctxt, | ||
32 | 585 | FLAGS.scheduler_topic, | ||
33 | 586 | {"method": "live_migration", | ||
34 | 587 | "args": {"instance_id": instance_id, | ||
35 | 588 | "dest": dest, | ||
36 | 589 | "topic": FLAGS.compute_topic}}) | ||
37 | 590 | |||
38 | 591 | print _('Migration of %s initiated.' | ||
39 | 592 | 'Check its progress using euca-describe-instances.') % ec2_id | ||
40 | 593 | |||
41 | 594 | |||
42 | 561 | class ServiceCommands(object): | 595 | class ServiceCommands(object): |
43 | 562 | """Enable and disable running services""" | 596 | """Enable and disable running services""" |
44 | 563 | 597 | ||
45 | @@ -602,6 +636,59 @@ | |||
46 | 602 | return | 636 | return |
47 | 603 | db.service_update(ctxt, svc['id'], {'disabled': True}) | 637 | db.service_update(ctxt, svc['id'], {'disabled': True}) |
48 | 604 | 638 | ||
49 | 639 | def describe_resource(self, host): | ||
50 | 640 | """Describes cpu/memory/hdd info for host. | ||
51 | 641 | |||
52 | 642 | :param host: hostname. | ||
53 | 643 | |||
54 | 644 | """ | ||
55 | 645 | |||
56 | 646 | result = rpc.call(context.get_admin_context(), | ||
57 | 647 | FLAGS.scheduler_topic, | ||
58 | 648 | {"method": "show_host_resources", | ||
59 | 649 | "args": {"host": host}}) | ||
60 | 650 | |||
61 | 651 | if type(result) != dict: | ||
62 | 652 | print _('An unexpected error has occurred.') | ||
63 | 653 | print _('[Result]'), result | ||
64 | 654 | else: | ||
65 | 655 | cpu = result['resource']['vcpus'] | ||
66 | 656 | mem = result['resource']['memory_mb'] | ||
67 | 657 | hdd = result['resource']['local_gb'] | ||
68 | 658 | cpu_u = result['resource']['vcpus_used'] | ||
69 | 659 | mem_u = result['resource']['memory_mb_used'] | ||
70 | 660 | hdd_u = result['resource']['local_gb_used'] | ||
71 | 661 | |||
72 | 662 | print 'HOST\t\t\tPROJECT\t\tcpu\tmem(mb)\tdisk(gb)' | ||
73 | 663 | print '%s(total)\t\t\t%s\t%s\t%s' % (host, cpu, mem, hdd) | ||
74 | 664 | print '%s(used)\t\t\t%s\t%s\t%s' % (host, cpu_u, mem_u, hdd_u) | ||
75 | 665 | for p_id, val in result['usage'].items(): | ||
76 | 666 | print '%s\t\t%s\t\t%s\t%s\t%s' % (host, | ||
77 | 667 | p_id, | ||
78 | 668 | val['vcpus'], | ||
79 | 669 | val['memory_mb'], | ||
80 | 670 | val['local_gb']) | ||
81 | 671 | |||
82 | 672 | def update_resource(self, host): | ||
83 | 673 | """Updates available vcpu/memory/disk info for host. | ||
84 | 674 | |||
85 | 675 | :param host: hostname. | ||
86 | 676 | |||
87 | 677 | """ | ||
88 | 678 | |||
89 | 679 | ctxt = context.get_admin_context() | ||
90 | 680 | service_refs = db.service_get_all_by_host(ctxt, host) | ||
91 | 681 | if len(service_refs) <= 0: | ||
92 | 682 | raise exception.Invalid(_('%s does not exist.') % host) | ||
93 | 683 | |||
94 | 684 | service_refs = [s for s in service_refs if s['topic'] == 'compute'] | ||
95 | 685 | if len(service_refs) <= 0: | ||
96 | 686 | raise exception.Invalid(_('%s is not compute node.') % host) | ||
97 | 687 | |||
98 | 688 | rpc.call(ctxt, | ||
99 | 689 | db.queue_get_for(ctxt, FLAGS.compute_topic, host), | ||
100 | 690 | {"method": "update_available_resource"}) | ||
101 | 691 | |||
102 | 605 | 692 | ||
103 | 606 | class LogCommands(object): | 693 | class LogCommands(object): |
104 | 607 | def request(self, request_id, logfile='/var/log/nova.log'): | 694 | def request(self, request_id, logfile='/var/log/nova.log'): |
105 | @@ -905,6 +992,7 @@ | |||
106 | 905 | ('fixed', FixedIpCommands), | 992 | ('fixed', FixedIpCommands), |
107 | 906 | ('floating', FloatingIpCommands), | 993 | ('floating', FloatingIpCommands), |
108 | 907 | ('network', NetworkCommands), | 994 | ('network', NetworkCommands), |
109 | 995 | ('vm', VmCommands), | ||
110 | 908 | ('service', ServiceCommands), | 996 | ('service', ServiceCommands), |
111 | 909 | ('log', LogCommands), | 997 | ('log', LogCommands), |
112 | 910 | ('db', DbCommands), | 998 | ('db', DbCommands), |
113 | 911 | 999 | ||
114 | === modified file 'contrib/nova.sh' | |||
115 | --- contrib/nova.sh 2011-03-08 00:01:43 +0000 | |||
116 | +++ contrib/nova.sh 2011-03-10 06:27:59 +0000 | |||
117 | @@ -76,6 +76,7 @@ | |||
118 | 76 | sudo apt-get install -y python-migrate python-eventlet python-gflags python-ipy python-tempita | 76 | sudo apt-get install -y python-migrate python-eventlet python-gflags python-ipy python-tempita |
119 | 77 | sudo apt-get install -y python-libvirt python-libxml2 python-routes python-cheetah | 77 | sudo apt-get install -y python-libvirt python-libxml2 python-routes python-cheetah |
120 | 78 | sudo apt-get install -y python-netaddr python-paste python-pastedeploy python-glance | 78 | sudo apt-get install -y python-netaddr python-paste python-pastedeploy python-glance |
121 | 79 | sudo apt-get install -y python-multiprocessing | ||
122 | 79 | 80 | ||
123 | 80 | if [ "$USE_IPV6" == 1 ]; then | 81 | if [ "$USE_IPV6" == 1 ]; then |
124 | 81 | sudo apt-get install -y radvd | 82 | sudo apt-get install -y radvd |
125 | 82 | 83 | ||
126 | === modified file 'nova/compute/manager.py' | |||
127 | --- nova/compute/manager.py 2011-03-07 22:40:19 +0000 | |||
128 | +++ nova/compute/manager.py 2011-03-10 06:27:59 +0000 | |||
129 | @@ -36,9 +36,12 @@ | |||
130 | 36 | 36 | ||
131 | 37 | import base64 | 37 | import base64 |
132 | 38 | import datetime | 38 | import datetime |
133 | 39 | import os | ||
134 | 39 | import random | 40 | import random |
135 | 40 | import string | 41 | import string |
136 | 41 | import socket | 42 | import socket |
137 | 43 | import tempfile | ||
138 | 44 | import time | ||
139 | 42 | import functools | 45 | import functools |
140 | 43 | 46 | ||
141 | 44 | from nova import exception | 47 | from nova import exception |
142 | @@ -61,6 +64,9 @@ | |||
143 | 61 | flags.DEFINE_string('console_host', socket.gethostname(), | 64 | flags.DEFINE_string('console_host', socket.gethostname(), |
144 | 62 | 'Console proxy host to use to connect to instances on' | 65 | 'Console proxy host to use to connect to instances on' |
145 | 63 | 'this host.') | 66 | 'this host.') |
146 | 67 | flags.DEFINE_integer('live_migration_retry_count', 30, | ||
147 | 68 | ("Retry count needed in live_migration." | ||
148 | 69 | " sleep 1 sec for each count")) | ||
149 | 64 | 70 | ||
150 | 65 | LOG = logging.getLogger('nova.compute.manager') | 71 | LOG = logging.getLogger('nova.compute.manager') |
151 | 66 | 72 | ||
152 | @@ -181,7 +187,7 @@ | |||
153 | 181 | context=context) | 187 | context=context) |
154 | 182 | self.db.instance_update(context, | 188 | self.db.instance_update(context, |
155 | 183 | instance_id, | 189 | instance_id, |
157 | 184 | {'host': self.host}) | 190 | {'host': self.host, 'launched_on': self.host}) |
158 | 185 | 191 | ||
159 | 186 | self.db.instance_set_state(context, | 192 | self.db.instance_set_state(context, |
160 | 187 | instance_id, | 193 | instance_id, |
161 | @@ -723,3 +729,248 @@ | |||
162 | 723 | self.volume_manager.remove_compute_volume(context, volume_id) | 729 | self.volume_manager.remove_compute_volume(context, volume_id) |
163 | 724 | self.db.volume_detached(context, volume_id) | 730 | self.db.volume_detached(context, volume_id) |
164 | 725 | return True | 731 | return True |
165 | 732 | |||
166 | 733 | @exception.wrap_exception | ||
167 | 734 | def compare_cpu(self, context, cpu_info): | ||
168 | 735 | """Checks the host cpu is compatible to a cpu given by xml. | ||
169 | 736 | |||
170 | 737 | :param context: security context | ||
171 | 738 | :param cpu_info: json string obtained from virConnect.getCapabilities | ||
172 | 739 | :returns: See driver.compare_cpu | ||
173 | 740 | |||
174 | 741 | """ | ||
175 | 742 | return self.driver.compare_cpu(cpu_info) | ||
176 | 743 | |||
177 | 744 | @exception.wrap_exception | ||
178 | 745 | def create_shared_storage_test_file(self, context): | ||
179 | 746 | """Makes tmpfile under FLAGS.instance_path. | ||
180 | 747 | |||
181 | 748 | This method enables compute nodes to recognize that they mounts | ||
182 | 749 | same shared storage. (create|check|creanup)_shared_storage_test_file() | ||
183 | 750 | is a pair. | ||
184 | 751 | |||
185 | 752 | :param context: security context | ||
186 | 753 | :returns: tmpfile name(basename) | ||
187 | 754 | |||
188 | 755 | """ | ||
189 | 756 | |||
190 | 757 | dirpath = FLAGS.instances_path | ||
191 | 758 | fd, tmp_file = tempfile.mkstemp(dir=dirpath) | ||
192 | 759 | LOG.debug(_("Creating tmpfile %s to notify to other " | ||
193 | 760 | "compute nodes that they should mount " | ||
194 | 761 | "the same storage.") % tmp_file) | ||
195 | 762 | os.close(fd) | ||
196 | 763 | return os.path.basename(tmp_file) | ||
197 | 764 | |||
198 | 765 | @exception.wrap_exception | ||
199 | 766 | def check_shared_storage_test_file(self, context, filename): | ||
200 | 767 | """Confirms existence of the tmpfile under FLAGS.instances_path. | ||
201 | 768 | |||
202 | 769 | :param context: security context | ||
203 | 770 | :param filename: confirm existence of FLAGS.instances_path/thisfile | ||
204 | 771 | |||
205 | 772 | """ | ||
206 | 773 | |||
207 | 774 | tmp_file = os.path.join(FLAGS.instances_path, filename) | ||
208 | 775 | if not os.path.exists(tmp_file): | ||
209 | 776 | raise exception.NotFound(_('%s not found') % tmp_file) | ||
210 | 777 | |||
211 | 778 | @exception.wrap_exception | ||
212 | 779 | def cleanup_shared_storage_test_file(self, context, filename): | ||
213 | 780 | """Removes existence of the tmpfile under FLAGS.instances_path. | ||
214 | 781 | |||
215 | 782 | :param context: security context | ||
216 | 783 | :param filename: remove existence of FLAGS.instances_path/thisfile | ||
217 | 784 | |||
218 | 785 | """ | ||
219 | 786 | |||
220 | 787 | tmp_file = os.path.join(FLAGS.instances_path, filename) | ||
221 | 788 | os.remove(tmp_file) | ||
222 | 789 | |||
223 | 790 | @exception.wrap_exception | ||
224 | 791 | def update_available_resource(self, context): | ||
225 | 792 | """See comments update_resource_info. | ||
226 | 793 | |||
227 | 794 | :param context: security context | ||
228 | 795 | :returns: See driver.update_available_resource() | ||
229 | 796 | |||
230 | 797 | """ | ||
231 | 798 | |||
232 | 799 | return self.driver.update_available_resource(context, self.host) | ||
233 | 800 | |||
234 | 801 | def pre_live_migration(self, context, instance_id): | ||
235 | 802 | """Preparations for live migration at dest host. | ||
236 | 803 | |||
237 | 804 | :param context: security context | ||
238 | 805 | :param instance_id: nova.db.sqlalchemy.models.Instance.Id | ||
239 | 806 | |||
240 | 807 | """ | ||
241 | 808 | |||
242 | 809 | # Getting instance info | ||
243 | 810 | instance_ref = self.db.instance_get(context, instance_id) | ||
244 | 811 | ec2_id = instance_ref['hostname'] | ||
245 | 812 | |||
246 | 813 | # Getting fixed ips | ||
247 | 814 | fixed_ip = self.db.instance_get_fixed_address(context, instance_id) | ||
248 | 815 | if not fixed_ip: | ||
249 | 816 | msg = _("%(instance_id)s(%(ec2_id)s) does not have fixed_ip.") | ||
250 | 817 | raise exception.NotFound(msg % locals()) | ||
251 | 818 | |||
252 | 819 | # If any volume is mounted, prepare here. | ||
253 | 820 | if not instance_ref['volumes']: | ||
254 | 821 | LOG.info(_("%s has no volume."), ec2_id) | ||
255 | 822 | else: | ||
256 | 823 | for v in instance_ref['volumes']: | ||
257 | 824 | self.volume_manager.setup_compute_volume(context, v['id']) | ||
258 | 825 | |||
259 | 826 | # Bridge settings. | ||
260 | 827 | # Call this method prior to ensure_filtering_rules_for_instance, | ||
261 | 828 | # since bridge is not set up, ensure_filtering_rules_for instance | ||
262 | 829 | # fails. | ||
263 | 830 | # | ||
264 | 831 | # Retry operation is necessary because continuously request comes, | ||
265 | 832 | # concorrent request occurs to iptables, then it complains. | ||
266 | 833 | max_retry = FLAGS.live_migration_retry_count | ||
267 | 834 | for cnt in range(max_retry): | ||
268 | 835 | try: | ||
269 | 836 | self.network_manager.setup_compute_network(context, | ||
270 | 837 | instance_id) | ||
271 | 838 | break | ||
272 | 839 | except exception.ProcessExecutionError: | ||
273 | 840 | if cnt == max_retry - 1: | ||
274 | 841 | raise | ||
275 | 842 | else: | ||
276 | 843 | LOG.warn(_("setup_compute_network() failed %(cnt)d." | ||
277 | 844 | "Retry up to %(max_retry)d for %(ec2_id)s.") | ||
278 | 845 | % locals()) | ||
279 | 846 | time.sleep(1) | ||
280 | 847 | |||
281 | 848 | # Creating filters to hypervisors and firewalls. | ||
282 | 849 | # An example is that nova-instance-instance-xxx, | ||
283 | 850 | # which is written to libvirt.xml(Check "virsh nwfilter-list") | ||
284 | 851 | # This nwfilter is necessary on the destination host. | ||
285 | 852 | # In addition, this method is creating filtering rule | ||
286 | 853 | # onto destination host. | ||
287 | 854 | self.driver.ensure_filtering_rules_for_instance(instance_ref) | ||
288 | 855 | |||
289 | 856 | def live_migration(self, context, instance_id, dest): | ||
290 | 857 | """Executing live migration. | ||
291 | 858 | |||
292 | 859 | :param context: security context | ||
293 | 860 | :param instance_id: nova.db.sqlalchemy.models.Instance.Id | ||
294 | 861 | :param dest: destination host | ||
295 | 862 | |||
296 | 863 | """ | ||
297 | 864 | |||
298 | 865 | # Get instance for error handling. | ||
299 | 866 | instance_ref = self.db.instance_get(context, instance_id) | ||
300 | 867 | i_name = instance_ref.name | ||
301 | 868 | |||
302 | 869 | try: | ||
303 | 870 | # Checking volume node is working correctly when any volumes | ||
304 | 871 | # are attached to instances. | ||
305 | 872 | if instance_ref['volumes']: | ||
306 | 873 | rpc.call(context, | ||
307 | 874 | FLAGS.volume_topic, | ||
308 | 875 | {"method": "check_for_export", | ||
309 | 876 | "args": {'instance_id': instance_id}}) | ||
310 | 877 | |||
311 | 878 | # Asking dest host to preparing live migration. | ||
312 | 879 | rpc.call(context, | ||
313 | 880 | self.db.queue_get_for(context, FLAGS.compute_topic, dest), | ||
314 | 881 | {"method": "pre_live_migration", | ||
315 | 882 | "args": {'instance_id': instance_id}}) | ||
316 | 883 | |||
317 | 884 | except Exception: | ||
318 | 885 | msg = _("Pre live migration for %(i_name)s failed at %(dest)s") | ||
319 | 886 | LOG.error(msg % locals()) | ||
320 | 887 | self.recover_live_migration(context, instance_ref) | ||
321 | 888 | raise | ||
322 | 889 | |||
323 | 890 | # Executing live migration | ||
324 | 891 | # live_migration might raises exceptions, but | ||
325 | 892 | # nothing must be recovered in this version. | ||
326 | 893 | self.driver.live_migration(context, instance_ref, dest, | ||
327 | 894 | self.post_live_migration, | ||
328 | 895 | self.recover_live_migration) | ||
329 | 896 | |||
330 | 897 | def post_live_migration(self, ctxt, instance_ref, dest): | ||
331 | 898 | """Post operations for live migration. | ||
332 | 899 | |||
333 | 900 | This method is called from live_migration | ||
334 | 901 | and mainly updating database record. | ||
335 | 902 | |||
336 | 903 | :param ctxt: security context | ||
337 | 904 | :param instance_id: nova.db.sqlalchemy.models.Instance.Id | ||
338 | 905 | :param dest: destination host | ||
339 | 906 | |||
340 | 907 | """ | ||
341 | 908 | |||
342 | 909 | LOG.info(_('post_live_migration() is started..')) | ||
343 | 910 | instance_id = instance_ref['id'] | ||
344 | 911 | |||
345 | 912 | # Detaching volumes. | ||
346 | 913 | try: | ||
347 | 914 | for vol in self.db.volume_get_all_by_instance(ctxt, instance_id): | ||
348 | 915 | self.volume_manager.remove_compute_volume(ctxt, vol['id']) | ||
349 | 916 | except exception.NotFound: | ||
350 | 917 | pass | ||
351 | 918 | |||
352 | 919 | # Releasing vlan. | ||
353 | 920 | # (not necessary in current implementation?) | ||
354 | 921 | |||
355 | 922 | # Releasing security group ingress rule. | ||
356 | 923 | self.driver.unfilter_instance(instance_ref) | ||
357 | 924 | |||
358 | 925 | # Database updating. | ||
359 | 926 | i_name = instance_ref.name | ||
360 | 927 | try: | ||
361 | 928 | # Not return if floating_ip is not found, otherwise, | ||
362 | 929 | # instance never be accessible.. | ||
363 | 930 | floating_ip = self.db.instance_get_floating_address(ctxt, | ||
364 | 931 | instance_id) | ||
365 | 932 | if not floating_ip: | ||
366 | 933 | LOG.info(_('No floating_ip is found for %s.'), i_name) | ||
367 | 934 | else: | ||
368 | 935 | floating_ip_ref = self.db.floating_ip_get_by_address(ctxt, | ||
369 | 936 | floating_ip) | ||
370 | 937 | self.db.floating_ip_update(ctxt, | ||
371 | 938 | floating_ip_ref['address'], | ||
372 | 939 | {'host': dest}) | ||
373 | 940 | except exception.NotFound: | ||
374 | 941 | LOG.info(_('No floating_ip is found for %s.'), i_name) | ||
375 | 942 | except: | ||
376 | 943 | LOG.error(_("Live migration: Unexpected error:" | ||
377 | 944 | "%s cannot inherit floating ip..") % i_name) | ||
378 | 945 | |||
379 | 946 | # Restore instance/volume state | ||
380 | 947 | self.recover_live_migration(ctxt, instance_ref, dest) | ||
381 | 948 | |||
382 | 949 | LOG.info(_('Migrating %(i_name)s to %(dest)s finished successfully.') | ||
383 | 950 | % locals()) | ||
384 | 951 | LOG.info(_("You may see the error \"libvirt: QEMU error: " | ||
385 | 952 | "Domain not found: no domain with matching name.\" " | ||
386 | 953 | "This error can be safely ignored.")) | ||
387 | 954 | |||
388 | 955 | def recover_live_migration(self, ctxt, instance_ref, host=None): | ||
389 | 956 | """Recovers Instance/volume state from migrating -> running. | ||
390 | 957 | |||
391 | 958 | :param ctxt: security context | ||
392 | 959 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
393 | 960 | :param host: | ||
394 | 961 | DB host column is updated to this hostname. | ||
395 | 962 | If None, the host where the instance currently runs is used. | ||
396 | 963 | |||
397 | 964 | """ | ||
398 | 965 | |||
399 | 966 | if not host: | ||
400 | 967 | host = instance_ref['host'] | ||
401 | 968 | |||
402 | 969 | self.db.instance_update(ctxt, | ||
403 | 970 | instance_ref['id'], | ||
404 | 971 | {'state_description': 'running', | ||
405 | 972 | 'state': power_state.RUNNING, | ||
406 | 973 | 'host': host}) | ||
407 | 974 | |||
408 | 975 | for volume in instance_ref['volumes']: | ||
409 | 976 | self.db.volume_update(ctxt, volume['id'], {'status': 'in-use'}) | ||
410 | 726 | 977 | ||
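The recovery path above simply flips DB records back: the instance returns to 'running' and each attached volume to 'in-use'. A standalone sketch of that logic (the `db` object and `RUNNING` constant below are stand-ins for nova's, for illustration only):

```python
RUNNING = 1  # stand-in for nova.compute.power_state.RUNNING

def recover_live_migration(db, ctxt, instance_ref, host=None):
    """Reset instance/volume state from 'migrating' back to running.

    `db` is any object exposing instance_update/volume_update; this
    mirrors the patch's logic but is not nova's implementation.
    """
    if not host:
        host = instance_ref['host']
    db.instance_update(ctxt, instance_ref['id'],
                       {'state_description': 'running',
                        'state': RUNNING,
                        'host': host})
    for volume in instance_ref['volumes']:
        db.volume_update(ctxt, volume['id'], {'status': 'in-use'})
```

Note that passing `host=dest` on success and omitting it on rollback is what lets the same helper serve both the happy path and error recovery.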
411 | === modified file 'nova/db/api.py' | |||
412 | --- nova/db/api.py 2011-03-09 21:27:38 +0000 | |||
413 | +++ nova/db/api.py 2011-03-10 06:27:59 +0000 | |||
414 | @@ -104,6 +104,11 @@ | |||
415 | 104 | return IMPL.service_get_all_by_host(context, host) | 104 | return IMPL.service_get_all_by_host(context, host) |
416 | 105 | 105 | ||
417 | 106 | 106 | ||
418 | 107 | def service_get_all_compute_by_host(context, host): | ||
419 | 108 | """Get all compute services for a given host.""" | ||
420 | 109 | return IMPL.service_get_all_compute_by_host(context, host) | ||
421 | 110 | |||
422 | 111 | |||
423 | 107 | def service_get_all_compute_sorted(context): | 112 | def service_get_all_compute_sorted(context): |
424 | 108 | """Get all compute services sorted by instance count. | 113 | """Get all compute services sorted by instance count. |
425 | 109 | 114 | ||
426 | @@ -153,6 +158,29 @@ | |||
427 | 153 | ################### | 158 | ################### |
428 | 154 | 159 | ||
429 | 155 | 160 | ||
430 | 161 | def compute_node_get(context, compute_id, session=None): | ||
431 | 162 | """Get a computeNode or raise if it does not exist.""" | ||
432 | 163 | return IMPL.compute_node_get(context, compute_id) | ||
433 | 164 | |||
434 | 165 | |||
435 | 166 | def compute_node_create(context, values): | ||
436 | 167 | """Create a computeNode from the values dictionary.""" | ||
437 | 168 | return IMPL.compute_node_create(context, values) | ||
438 | 169 | |||
439 | 170 | |||
440 | 171 | def compute_node_update(context, compute_id, values): | ||
441 | 172 | """Set the given properties on a computeNode and update it. | ||
442 | 173 | |||
443 | 174 | Raises NotFound if computeNode does not exist. | ||
444 | 175 | |||
445 | 176 | """ | ||
446 | 177 | |||
447 | 178 | return IMPL.compute_node_update(context, compute_id, values) | ||
448 | 179 | |||
449 | 180 | |||
450 | 181 | ################### | ||
451 | 182 | |||
452 | 183 | |||
453 | 156 | def certificate_create(context, values): | 184 | def certificate_create(context, values): |
454 | 157 | """Create a certificate from the values dictionary.""" | 185 | """Create a certificate from the values dictionary.""" |
455 | 158 | return IMPL.certificate_create(context, values) | 186 | return IMPL.certificate_create(context, values) |
456 | @@ -257,6 +285,11 @@ | |||
457 | 257 | return IMPL.floating_ip_get_by_address(context, address) | 285 | return IMPL.floating_ip_get_by_address(context, address) |
458 | 258 | 286 | ||
459 | 259 | 287 | ||
460 | 288 | def floating_ip_update(context, address, values): | ||
461 | 289 | """Update a floating ip by address or raise if it doesn't exist.""" | ||
462 | 290 | return IMPL.floating_ip_update(context, address, values) | ||
463 | 291 | |||
464 | 292 | |||
465 | 260 | #################### | 293 | #################### |
466 | 261 | 294 | ||
467 | 262 | def migration_update(context, id, values): | 295 | def migration_update(context, id, values): |
468 | @@ -441,6 +474,27 @@ | |||
469 | 441 | security_group_id) | 474 | security_group_id) |
470 | 442 | 475 | ||
471 | 443 | 476 | ||
472 | 477 | def instance_get_vcpu_sum_by_host_and_project(context, hostname, proj_id): | ||
473 | 478 | """Get the sum of instances.vcpus by host and project.""" | ||
474 | 479 | return IMPL.instance_get_vcpu_sum_by_host_and_project(context, | ||
475 | 480 | hostname, | ||
476 | 481 | proj_id) | ||
477 | 482 | |||
478 | 483 | |||
479 | 484 | def instance_get_memory_sum_by_host_and_project(context, hostname, proj_id): | ||
480 | 485 | """Get amount of memory by host and project.""" | ||
481 | 486 | return IMPL.instance_get_memory_sum_by_host_and_project(context, | ||
482 | 487 | hostname, | ||
483 | 488 | proj_id) | ||
484 | 489 | |||
485 | 490 | |||
486 | 491 | def instance_get_disk_sum_by_host_and_project(context, hostname, proj_id): | ||
487 | 492 | """Get total amount of disk by host and project.""" | ||
488 | 493 | return IMPL.instance_get_disk_sum_by_host_and_project(context, | ||
489 | 494 | hostname, | ||
490 | 495 | proj_id) | ||
491 | 496 | |||
492 | 497 | |||
493 | 444 | def instance_action_create(context, values): | 498 | def instance_action_create(context, values): |
494 | 445 | """Create an instance action from the values dictionary.""" | 499 | """Create an instance action from the values dictionary.""" |
495 | 446 | return IMPL.instance_action_create(context, values) | 500 | return IMPL.instance_action_create(context, values) |
496 | @@ -765,6 +819,11 @@ | |||
497 | 765 | return IMPL.volume_get_all_by_host(context, host) | 819 | return IMPL.volume_get_all_by_host(context, host) |
498 | 766 | 820 | ||
499 | 767 | 821 | ||
500 | 822 | def volume_get_all_by_instance(context, instance_id): | ||
501 | 823 | """Get all volumes belonging to an instance.""" | ||
502 | 824 | return IMPL.volume_get_all_by_instance(context, instance_id) | ||
503 | 825 | |||
504 | 826 | |||
505 | 768 | def volume_get_all_by_project(context, project_id): | 827 | def volume_get_all_by_project(context, project_id): |
506 | 769 | """Get all volumes belonging to a project.""" | 828 | """Get all volumes belonging to a project.""" |
507 | 770 | return IMPL.volume_get_all_by_project(context, project_id) | 829 | return IMPL.volume_get_all_by_project(context, project_id) |
508 | 771 | 830 | ||
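The three new per-host, per-project sum helpers all follow the same shape; an in-memory sketch of one, where a list of dicts stands in for the instances table (purely illustrative, not nova's DB layer):

```python
def instance_get_memory_sum_by_host_and_project(instances, hostname, proj_id):
    """Total memory_mb of non-deleted instances on a host for a project.

    Returns 0 when nothing matches, just as the DB helpers do when
    guarding against SUM() returning NULL.
    """
    return sum(i['memory_mb'] for i in instances
               if i['host'] == hostname
               and i['project_id'] == proj_id
               and not i['deleted'])
```

The vcpus and local_gb variants differ only in the column summed.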
509 | === modified file 'nova/db/sqlalchemy/api.py' | |||
510 | --- nova/db/sqlalchemy/api.py 2011-03-09 21:27:38 +0000 | |||
511 | +++ nova/db/sqlalchemy/api.py 2011-03-10 06:27:59 +0000 | |||
512 | @@ -118,6 +118,11 @@ | |||
513 | 118 | service_ref = service_get(context, service_id, session=session) | 118 | service_ref = service_get(context, service_id, session=session) |
514 | 119 | service_ref.delete(session=session) | 119 | service_ref.delete(session=session) |
515 | 120 | 120 | ||
516 | 121 | if service_ref.topic == 'compute' and \ | ||
517 | 122 | len(service_ref.compute_node) != 0: | ||
518 | 123 | for c in service_ref.compute_node: | ||
519 | 124 | c.delete(session=session) | ||
520 | 125 | |||
521 | 121 | 126 | ||
522 | 122 | @require_admin_context | 127 | @require_admin_context |
523 | 123 | def service_get(context, service_id, session=None): | 128 | def service_get(context, service_id, session=None): |
524 | @@ -125,6 +130,7 @@ | |||
525 | 125 | session = get_session() | 130 | session = get_session() |
526 | 126 | 131 | ||
527 | 127 | result = session.query(models.Service).\ | 132 | result = session.query(models.Service).\ |
528 | 133 | options(joinedload('compute_node')).\ | ||
529 | 128 | filter_by(id=service_id).\ | 134 | filter_by(id=service_id).\ |
530 | 129 | filter_by(deleted=can_read_deleted(context)).\ | 135 | filter_by(deleted=can_read_deleted(context)).\ |
531 | 130 | first() | 136 | first() |
532 | @@ -175,6 +181,24 @@ | |||
533 | 175 | 181 | ||
534 | 176 | 182 | ||
535 | 177 | @require_admin_context | 183 | @require_admin_context |
536 | 184 | def service_get_all_compute_by_host(context, host): | ||
537 | 185 | topic = 'compute' | ||
538 | 186 | session = get_session() | ||
539 | 187 | result = session.query(models.Service).\ | ||
540 | 188 | options(joinedload('compute_node')).\ | ||
541 | 189 | filter_by(deleted=False).\ | ||
542 | 190 | filter_by(host=host).\ | ||
543 | 191 | filter_by(topic=topic).\ | ||
544 | 192 | all() | ||
545 | 193 | |||
546 | 194 | if not result: | ||
547 | 195 | raise exception.NotFound(_("%s does not exist or is not " | ||
548 | 196 | "a compute node.") % host) | ||
549 | 197 | |||
550 | 198 | return result | ||
551 | 199 | |||
552 | 200 | |||
553 | 201 | @require_admin_context | ||
554 | 178 | def _service_get_all_topic_subquery(context, session, topic, subq, label): | 202 | def _service_get_all_topic_subquery(context, session, topic, subq, label): |
555 | 179 | sort_value = getattr(subq.c, label) | 203 | sort_value = getattr(subq.c, label) |
556 | 180 | return session.query(models.Service, func.coalesce(sort_value, 0)).\ | 204 | return session.query(models.Service, func.coalesce(sort_value, 0)).\ |
557 | @@ -285,6 +309,42 @@ | |||
558 | 285 | 309 | ||
559 | 286 | 310 | ||
560 | 287 | @require_admin_context | 311 | @require_admin_context |
561 | 312 | def compute_node_get(context, compute_id, session=None): | ||
562 | 313 | if not session: | ||
563 | 314 | session = get_session() | ||
564 | 315 | |||
565 | 316 | result = session.query(models.ComputeNode).\ | ||
566 | 317 | filter_by(id=compute_id).\ | ||
567 | 318 | filter_by(deleted=can_read_deleted(context)).\ | ||
568 | 319 | first() | ||
569 | 320 | |||
570 | 321 | if not result: | ||
571 | 322 | raise exception.NotFound(_('No computeNode for id %s') % compute_id) | ||
572 | 323 | |||
573 | 324 | return result | ||
574 | 325 | |||
575 | 326 | |||
576 | 327 | @require_admin_context | ||
577 | 328 | def compute_node_create(context, values): | ||
578 | 329 | compute_node_ref = models.ComputeNode() | ||
579 | 330 | compute_node_ref.update(values) | ||
580 | 331 | compute_node_ref.save() | ||
581 | 332 | return compute_node_ref | ||
582 | 333 | |||
583 | 334 | |||
584 | 335 | @require_admin_context | ||
585 | 336 | def compute_node_update(context, compute_id, values): | ||
586 | 337 | session = get_session() | ||
587 | 338 | with session.begin(): | ||
588 | 339 | compute_ref = compute_node_get(context, compute_id, session=session) | ||
589 | 340 | compute_ref.update(values) | ||
590 | 341 | compute_ref.save(session=session) | ||
591 | 342 | |||
592 | 343 | |||
593 | 344 | ################### | ||
594 | 345 | |||
595 | 346 | |||
596 | 347 | @require_admin_context | ||
597 | 288 | def certificate_get(context, certificate_id, session=None): | 348 | def certificate_get(context, certificate_id, session=None): |
598 | 289 | if not session: | 349 | if not session: |
599 | 290 | session = get_session() | 350 | session = get_session() |
600 | @@ -505,6 +565,16 @@ | |||
601 | 505 | return result | 565 | return result |
602 | 506 | 566 | ||
603 | 507 | 567 | ||
604 | 568 | @require_context | ||
605 | 569 | def floating_ip_update(context, address, values): | ||
606 | 570 | session = get_session() | ||
607 | 571 | with session.begin(): | ||
608 | 572 | floating_ip_ref = floating_ip_get_by_address(context, address, session) | ||
609 | 573 | for (key, value) in values.iteritems(): | ||
610 | 574 | floating_ip_ref[key] = value | ||
611 | 575 | floating_ip_ref.save(session=session) | ||
612 | 576 | |||
613 | 577 | |||
614 | 508 | ################### | 578 | ################### |
615 | 509 | 579 | ||
616 | 510 | 580 | ||
617 | @@ -905,6 +975,45 @@ | |||
618 | 905 | 975 | ||
619 | 906 | 976 | ||
620 | 907 | @require_context | 977 | @require_context |
621 | 978 | def instance_get_vcpu_sum_by_host_and_project(context, hostname, proj_id): | ||
622 | 979 | session = get_session() | ||
623 | 980 | result = session.query(models.Instance).\ | ||
624 | 981 | filter_by(host=hostname).\ | ||
625 | 982 | filter_by(project_id=proj_id).\ | ||
626 | 983 | filter_by(deleted=False).\ | ||
627 | 984 | value(func.sum(models.Instance.vcpus)) | ||
628 | 985 | if not result: | ||
629 | 986 | return 0 | ||
630 | 987 | return result | ||
631 | 988 | |||
632 | 989 | |||
633 | 990 | @require_context | ||
634 | 991 | def instance_get_memory_sum_by_host_and_project(context, hostname, proj_id): | ||
635 | 992 | session = get_session() | ||
636 | 993 | result = session.query(models.Instance).\ | ||
637 | 994 | filter_by(host=hostname).\ | ||
638 | 995 | filter_by(project_id=proj_id).\ | ||
639 | 996 | filter_by(deleted=False).\ | ||
640 | 997 | value(func.sum(models.Instance.memory_mb)) | ||
641 | 998 | if not result: | ||
642 | 999 | return 0 | ||
643 | 1000 | return result | ||
644 | 1001 | |||
645 | 1002 | |||
646 | 1003 | @require_context | ||
647 | 1004 | def instance_get_disk_sum_by_host_and_project(context, hostname, proj_id): | ||
648 | 1005 | session = get_session() | ||
649 | 1006 | result = session.query(models.Instance).\ | ||
650 | 1007 | filter_by(host=hostname).\ | ||
651 | 1008 | filter_by(project_id=proj_id).\ | ||
652 | 1009 | filter_by(deleted=False).\ | ||
653 | 1010 | value(func.sum(models.Instance.local_gb)) | ||
654 | 1011 | if not result: | ||
655 | 1012 | return 0 | ||
656 | 1013 | return result | ||
657 | 1014 | |||
658 | 1015 | |||
659 | 1016 | @require_context | ||
660 | 908 | def instance_action_create(context, values): | 1017 | def instance_action_create(context, values): |
661 | 909 | """Create an instance action from the values dictionary.""" | 1018 | """Create an instance action from the values dictionary.""" |
662 | 910 | action_ref = models.InstanceActions() | 1019 | action_ref = models.InstanceActions() |
663 | @@ -1522,6 +1631,18 @@ | |||
664 | 1522 | all() | 1631 | all() |
665 | 1523 | 1632 | ||
666 | 1524 | 1633 | ||
667 | 1634 | @require_admin_context | ||
668 | 1635 | def volume_get_all_by_instance(context, instance_id): | ||
669 | 1636 | session = get_session() | ||
670 | 1637 | result = session.query(models.Volume).\ | ||
671 | 1638 | filter_by(instance_id=instance_id).\ | ||
672 | 1639 | filter_by(deleted=False).\ | ||
673 | 1640 | all() | ||
674 | 1641 | if not result: | ||
675 | 1642 | raise exception.NotFound(_('No volume for instance %s') % instance_id) | ||
676 | 1643 | return result | ||
677 | 1644 | |||
678 | 1645 | |||
679 | 1525 | @require_context | 1646 | @require_context |
680 | 1526 | def volume_get_all_by_project(context, project_id): | 1647 | def volume_get_all_by_project(context, project_id): |
681 | 1527 | authorize_project_context(context, project_id) | 1648 | authorize_project_context(context, project_id) |
682 | 1528 | 1649 | ||
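The `if not result: return 0` guard in the sum helpers exists because SQL's SUM() over an empty set yields NULL (None in Python), not 0. A stdlib sqlite3 demonstration of that behavior (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (host TEXT, vcpus INTEGER)')

# No matching rows: SUM() returns NULL, which arrives as None.
row = conn.execute(
    "SELECT SUM(vcpus) FROM instances WHERE host = 'h1'").fetchone()
assert row[0] is None

conn.execute("INSERT INTO instances VALUES ('h1', 4)")
conn.execute("INSERT INTO instances VALUES ('h1', 2)")
row = conn.execute(
    "SELECT SUM(vcpus) FROM instances WHERE host = 'h1'").fetchone()
assert row[0] == 6
```

Without the guard, a host with no instances would propagate None into the capacity arithmetic in the scheduler.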
683 | === added file 'nova/db/sqlalchemy/migrate_repo/versions/010_add_live_migration.py' | |||
684 | --- nova/db/sqlalchemy/migrate_repo/versions/010_add_live_migration.py 1970-01-01 00:00:00 +0000 | |||
685 | +++ nova/db/sqlalchemy/migrate_repo/versions/010_add_live_migration.py 2011-03-10 06:27:59 +0000 | |||
686 | @@ -0,0 +1,83 @@ | |||
687 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
688 | 2 | |||
689 | 3 | # Copyright 2010 United States Government as represented by the | ||
690 | 4 | # Administrator of the National Aeronautics and Space Administration. | ||
691 | 5 | # All Rights Reserved. | ||
692 | 6 | # | ||
693 | 7 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
694 | 8 | # not use this file except in compliance with the License. You may obtain | ||
695 | 9 | # a copy of the License at | ||
696 | 10 | # | ||
697 | 11 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
698 | 12 | # | ||
699 | 13 | # Unless required by applicable law or agreed to in writing, software | ||
700 | 14 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
701 | 15 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
702 | 16 | # License for the specific language governing permissions and limitations | ||
703 | 17 | # under the License. | ||
704 | 18 | |||
705 | 19 | from migrate import * | ||
706 | 20 | from nova import log as logging | ||
707 | 21 | from sqlalchemy import * | ||
708 | 22 | |||
709 | 23 | |||
710 | 24 | meta = MetaData() | ||
711 | 25 | |||
712 | 26 | instances = Table('instances', meta, | ||
713 | 27 | Column('id', Integer(), primary_key=True, nullable=False), | ||
714 | 28 | ) | ||
715 | 29 | |||
716 | 30 | # | ||
717 | 31 | # New Tables | ||
718 | 32 | # | ||
719 | 33 | |||
720 | 34 | compute_nodes = Table('compute_nodes', meta, | ||
721 | 35 | Column('created_at', DateTime(timezone=False)), | ||
722 | 36 | Column('updated_at', DateTime(timezone=False)), | ||
723 | 37 | Column('deleted_at', DateTime(timezone=False)), | ||
724 | 38 | Column('deleted', Boolean(create_constraint=True, name=None)), | ||
725 | 39 | Column('id', Integer(), primary_key=True, nullable=False), | ||
726 | 40 | Column('service_id', Integer(), nullable=False), | ||
727 | 41 | |||
728 | 42 | Column('vcpus', Integer(), nullable=False), | ||
729 | 43 | Column('memory_mb', Integer(), nullable=False), | ||
730 | 44 | Column('local_gb', Integer(), nullable=False), | ||
731 | 45 | Column('vcpus_used', Integer(), nullable=False), | ||
732 | 46 | Column('memory_mb_used', Integer(), nullable=False), | ||
733 | 47 | Column('local_gb_used', Integer(), nullable=False), | ||
734 | 48 | Column('hypervisor_type', | ||
735 | 49 | Text(convert_unicode=False, assert_unicode=None, | ||
736 | 50 | unicode_error=None, _warn_on_bytestring=False), | ||
737 | 51 | nullable=False), | ||
738 | 52 | Column('hypervisor_version', Integer(), nullable=False), | ||
739 | 53 | Column('cpu_info', | ||
740 | 54 | Text(convert_unicode=False, assert_unicode=None, | ||
741 | 55 | unicode_error=None, _warn_on_bytestring=False), | ||
742 | 56 | nullable=False), | ||
743 | 57 | ) | ||
744 | 58 | |||
745 | 59 | |||
746 | 60 | # | ||
747 | 61 | # Tables to alter | ||
748 | 62 | # | ||
749 | 63 | instances_launched_on = Column( | ||
750 | 64 | 'launched_on', | ||
751 | 65 | Text(convert_unicode=False, assert_unicode=None, | ||
752 | 66 | unicode_error=None, _warn_on_bytestring=False), | ||
753 | 67 | nullable=True) | ||
754 | 68 | |||
755 | 69 | |||
756 | 70 | def upgrade(migrate_engine): | ||
757 | 71 | # Upgrade operations go here. Don't create your own engine; | ||
758 | 72 | # bind migrate_engine to your metadata | ||
759 | 73 | meta.bind = migrate_engine | ||
760 | 74 | |||
761 | 75 | try: | ||
762 | 76 | compute_nodes.create() | ||
763 | 77 | except Exception: | ||
764 | 78 | logging.info(repr(compute_nodes)) | ||
765 | 79 | logging.exception('Exception while creating table') | ||
766 | 80 | meta.drop_all(tables=[compute_nodes]) | ||
767 | 81 | raise | ||
768 | 82 | |||
769 | 83 | instances.create_column(instances_launched_on) | ||
770 | 0 | 84 | ||
771 | === modified file 'nova/db/sqlalchemy/models.py' | |||
772 | --- nova/db/sqlalchemy/models.py 2011-03-03 19:13:15 +0000 | |||
773 | +++ nova/db/sqlalchemy/models.py 2011-03-10 06:27:59 +0000 | |||
774 | @@ -113,6 +113,41 @@ | |||
775 | 113 | availability_zone = Column(String(255), default='nova') | 113 | availability_zone = Column(String(255), default='nova') |
776 | 114 | 114 | ||
777 | 115 | 115 | ||
778 | 116 | class ComputeNode(BASE, NovaBase): | ||
779 | 117 | """Represents a running compute service on a host.""" | ||
780 | 118 | |||
781 | 119 | __tablename__ = 'compute_nodes' | ||
782 | 120 | id = Column(Integer, primary_key=True) | ||
783 | 121 | service_id = Column(Integer, ForeignKey('services.id'), nullable=True) | ||
784 | 122 | service = relationship(Service, | ||
785 | 123 | backref=backref('compute_node'), | ||
786 | 124 | foreign_keys=service_id, | ||
787 | 125 | primaryjoin='and_(' | ||
788 | 126 | 'ComputeNode.service_id == Service.id,' | ||
789 | 127 | 'ComputeNode.deleted == False)') | ||
790 | 128 | |||
791 | 129 | vcpus = Column(Integer, nullable=True) | ||
792 | 130 | memory_mb = Column(Integer, nullable=True) | ||
793 | 131 | local_gb = Column(Integer, nullable=True) | ||
794 | 132 | vcpus_used = Column(Integer, nullable=True) | ||
795 | 133 | memory_mb_used = Column(Integer, nullable=True) | ||
796 | 134 | local_gb_used = Column(Integer, nullable=True) | ||
797 | 135 | hypervisor_type = Column(Text, nullable=True) | ||
798 | 136 | hypervisor_version = Column(Integer, nullable=True) | ||
799 | 137 | |||
800 | 138 | # Note(masumotok): Expected Strings example: | ||
801 | 139 | # | ||
802 | 140 | # '{"arch":"x86_64", | ||
803 | 141 | # "model":"Nehalem", | ||
804 | 142 | # "topology":{"sockets":1, "threads":2, "cores":3}, | ||
805 | 143 | # "features":["tdtscp", "xtpr"]}' | ||
806 | 144 | # | ||
807 | 145 | # Points are "json translatable" and it must have all dictionary keys | ||
808 | 146 | # above, since it is copied from <cpu> tag of getCapabilities() | ||
809 | 147 | # (See libvirt.virtConnection). | ||
810 | 148 | cpu_info = Column(Text, nullable=True) | ||
811 | 149 | |||
812 | 150 | |||
813 | 116 | class Certificate(BASE, NovaBase): | 151 | class Certificate(BASE, NovaBase): |
814 | 117 | """Represents an x509 certificate""" | 152 | """Represents an x509 certificate""" |
815 | 118 | __tablename__ = 'certificates' | 153 | __tablename__ = 'certificates' |
816 | @@ -191,6 +226,9 @@ | |||
817 | 191 | display_name = Column(String(255)) | 226 | display_name = Column(String(255)) |
818 | 192 | display_description = Column(String(255)) | 227 | display_description = Column(String(255)) |
819 | 193 | 228 | ||
820 | 229 | # To remember on which host a instance booted. | ||
821 | 230 | # An instance may have moved to another host by live migraiton. | ||
822 | 231 | launched_on = Column(Text) | ||
823 | 194 | locked = Column(Boolean) | 232 | locked = Column(Boolean) |
824 | 195 | 233 | ||
825 | 196 | # TODO(vish): see Ewan's email about state improvements, probably | 234 | # TODO(vish): see Ewan's email about state improvements, probably |
826 | 197 | 235 | ||
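The `cpu_info` column holds a JSON string with the keys shown in the model's comment; a quick round-trip with the stdlib `json` module (the values come from the comment's example, not real host data):

```python
import json

# Serialize the way the compute node would before storing cpu_info.
cpu_info = json.dumps({
    "arch": "x86_64",
    "model": "Nehalem",
    "topology": {"sockets": 1, "threads": 2, "cores": 3},
    "features": ["tdtscp", "xtpr"],
})

# What gets stored is plain text; consumers parse it back to a dict.
parsed = json.loads(cpu_info)
assert parsed["arch"] == "x86_64"
assert parsed["topology"]["cores"] == 3
```

Keeping the column as opaque JSON text avoids schema churn as libvirt's capability output evolves.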
827 | === modified file 'nova/scheduler/driver.py' | |||
828 | --- nova/scheduler/driver.py 2011-01-18 19:01:16 +0000 | |||
829 | +++ nova/scheduler/driver.py 2011-03-10 06:27:59 +0000 | |||
830 | @@ -26,10 +26,14 @@ | |||
831 | 26 | from nova import db | 26 | from nova import db |
832 | 27 | from nova import exception | 27 | from nova import exception |
833 | 28 | from nova import flags | 28 | from nova import flags |
834 | 29 | from nova import log as logging | ||
835 | 30 | from nova import rpc | ||
836 | 31 | from nova.compute import power_state | ||
837 | 29 | 32 | ||
838 | 30 | FLAGS = flags.FLAGS | 33 | FLAGS = flags.FLAGS |
839 | 31 | flags.DEFINE_integer('service_down_time', 60, | 34 | flags.DEFINE_integer('service_down_time', 60, |
840 | 32 | 'maximum time since last checkin for up service') | 35 | 'maximum time since last checkin for up service') |
841 | 36 | flags.DECLARE('instances_path', 'nova.compute.manager') | ||
842 | 33 | 37 | ||
843 | 34 | 38 | ||
844 | 35 | class NoValidHost(exception.Error): | 39 | class NoValidHost(exception.Error): |
845 | @@ -64,3 +68,236 @@ | |||
846 | 64 | def schedule(self, context, topic, *_args, **_kwargs): | 68 | def schedule(self, context, topic, *_args, **_kwargs): |
847 | 65 | """Must override at least this method for scheduler to work.""" | 69 | """Must override at least this method for scheduler to work.""" |
848 | 66 | raise NotImplementedError(_("Must implement a fallback schedule")) | 70 | raise NotImplementedError(_("Must implement a fallback schedule")) |
849 | 71 | |||
850 | 72 | def schedule_live_migration(self, context, instance_id, dest): | ||
851 | 73 | """Live migration scheduling method. | ||
852 | 74 | |||
853 | 75 | :param context: | ||
854 | 76 | :param instance_id: | ||
855 | 77 | :param dest: destination host | ||
856 | 78 | :return: | ||
857 | 79 | The host where the instance is currently running. | ||
858 | 80 | The scheduler then sends the request to that host. | ||
859 | 81 | |||
860 | 82 | """ | ||
861 | 83 | |||
862 | 84 | # Whether instance exists and is running. | ||
863 | 85 | instance_ref = db.instance_get(context, instance_id) | ||
864 | 86 | |||
865 | 87 | # Checking instance. | ||
866 | 88 | self._live_migration_src_check(context, instance_ref) | ||
867 | 89 | |||
868 | 90 | # Checking destination host. | ||
869 | 91 | self._live_migration_dest_check(context, instance_ref, dest) | ||
870 | 92 | |||
871 | 93 | # Common checking. | ||
872 | 94 | self._live_migration_common_check(context, instance_ref, dest) | ||
873 | 95 | |||
874 | 96 | # Changing instance_state. | ||
875 | 97 | db.instance_set_state(context, | ||
876 | 98 | instance_id, | ||
877 | 99 | power_state.PAUSED, | ||
878 | 100 | 'migrating') | ||
879 | 101 | |||
880 | 102 | # Changing volume state | ||
881 | 103 | for volume_ref in instance_ref['volumes']: | ||
882 | 104 | db.volume_update(context, | ||
883 | 105 | volume_ref['id'], | ||
884 | 106 | {'status': 'migrating'}) | ||
885 | 107 | |||
886 | 108 | # Return value is necessary to send request to src | ||
887 | 109 | # Check _schedule() in detail. | ||
888 | 110 | src = instance_ref['host'] | ||
889 | 111 | return src | ||
890 | 112 | |||
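schedule_live_migration() above is a chain of guard checks followed by state changes; condensed, the control flow looks like this (all names below are stand-ins for illustration, not nova's API):

```python
def schedule_live_migration(checks, mark_migrating, instance):
    """Run each guard check (each raises on failure), then flag the
    instance and its volumes as migrating, and return the source host,
    which is where the scheduler casts the live_migration request."""
    for check in checks:        # src check, dest check, common check
        check(instance)
    mark_migrating(instance)    # instance -> 'migrating', volumes too
    return instance['host']
```

Returning the source host (rather than the destination) is the key design point: the migration is driven by the compute node currently running the instance.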
891 | 113 | def _live_migration_src_check(self, context, instance_ref): | ||
892 | 114 | """Live migration check routine (for src host). | ||
893 | 115 | |||
894 | 116 | :param context: security context | ||
895 | 117 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
896 | 118 | |||
897 | 119 | """ | ||
898 | 120 | |||
899 | 121 | # Checking instance is running. | ||
900 | 122 | if (power_state.RUNNING != instance_ref['state'] or \ | ||
901 | 123 | 'running' != instance_ref['state_description']): | ||
902 | 124 | ec2_id = instance_ref['hostname'] | ||
903 | 125 | raise exception.Invalid(_('Instance(%s) is not running') % ec2_id) | ||
904 | 126 | |||
905 | 127 | # Checking the volume node is running when any volumes are mounted | ||
906 | 128 | # to the instance. | ||
907 | 129 | if len(instance_ref['volumes']) != 0: | ||
908 | 130 | services = db.service_get_all_by_topic(context, 'volume') | ||
909 | 131 | if len(services) < 1 or not self.service_is_up(services[0]): | ||
910 | 132 | raise exception.Invalid(_("volume node is not alive " | ||
911 | 133 | "(time synchronization problem?)")) | ||
912 | 134 | |||
913 | 135 | # Checking src host exists and compute node | ||
914 | 136 | src = instance_ref['host'] | ||
915 | 137 | services = db.service_get_all_compute_by_host(context, src) | ||
916 | 138 | |||
917 | 139 | # Checking src host is alive. | ||
918 | 140 | if not self.service_is_up(services[0]): | ||
919 | 141 | raise exception.Invalid(_("%s is not alive (time " | ||
920 | 142 | "synchronization problem?)") % src) | ||
921 | 143 | |||
922 | 144 | def _live_migration_dest_check(self, context, instance_ref, dest): | ||
923 | 145 | """Live migration check routine (for destination host). | ||
924 | 146 | |||
925 | 147 | :param context: security context | ||
926 | 148 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
927 | 149 | :param dest: destination host | ||
928 | 150 | |||
929 | 151 | """ | ||
930 | 152 | |||
931 | 153 | # Checking dest exists and compute node. | ||
932 | 154 | dservice_refs = db.service_get_all_compute_by_host(context, dest) | ||
933 | 155 | dservice_ref = dservice_refs[0] | ||
934 | 156 | |||
935 | 157 | # Checking dest host is alive. | ||
936 | 158 | if not self.service_is_up(dservice_ref): | ||
937 | 159 | raise exception.Invalid(_("%s is not alive (time " | ||
938 | 160 | "synchronization problem?)") % dest) | ||
939 | 161 | |||
940 | 162 | # Checking that the host where the instance is running | ||
941 | 163 | # and dest are not the same. | ||
942 | 164 | src = instance_ref['host'] | ||
943 | 165 | if dest == src: | ||
944 | 166 | ec2_id = instance_ref['hostname'] | ||
945 | 167 | raise exception.Invalid(_("%(dest)s is where %(ec2_id)s is " | ||
946 | 168 | "running now. choose other host.") | ||
947 | 169 | % locals()) | ||
948 | 170 | |||
949 | 171 | # Checking the dest host still has enough capacity. | ||
950 | 172 | self.assert_compute_node_has_enough_resources(context, | ||
951 | 173 | instance_ref, | ||
952 | 174 | dest) | ||
953 | 175 | |||
954 | 176 | def _live_migration_common_check(self, context, instance_ref, dest): | ||
955 | 177 | """Live migration common check routine. | ||
956 | 178 | |||
957 | 179 | The checks below follow | ||
958 | 180 | http://wiki.libvirt.org/page/TodoPreMigrationChecks | ||
959 | 181 | |||
960 | 182 | :param context: security context | ||
961 | 183 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
962 | 184 | :param dest: destination host | ||
963 | 185 | |||
964 | 186 | """ | ||
965 | 187 | |||
966 | 188 | # Checking shared storage connectivity | ||
967 | 189 | self.mounted_on_same_shared_storage(context, instance_ref, dest) | ||
968 | 190 | |||
969 | 191 | # Checking dest exists. | ||
970 | 192 | dservice_refs = db.service_get_all_compute_by_host(context, dest) | ||
971 | 193 | dservice_ref = dservice_refs[0]['compute_node'][0] | ||
972 | 194 | |||
973 | 195 | # Checking the original host (where the instance was launched) exists. | ||
974 | 196 | try: | ||
975 | 197 | oservice_refs = db.service_get_all_compute_by_host(context, | ||
976 | 198 | instance_ref['launched_on']) | ||
977 | 199 | except exception.NotFound: | ||
978 | 200 | raise exception.Invalid(_("host %s where instance was launched " | ||
979 | 201 | "does not exist.") | ||
980 | 202 | % instance_ref['launched_on']) | ||
981 | 203 | oservice_ref = oservice_refs[0]['compute_node'][0] | ||
982 | 204 | |||
983 | 205 | # Checking hypervisor is same. | ||
984 | 206 | orig_hypervisor = oservice_ref['hypervisor_type'] | ||
985 | 207 | dest_hypervisor = dservice_ref['hypervisor_type'] | ||
986 | 208 | if orig_hypervisor != dest_hypervisor: | ||
987 | 209 | raise exception.Invalid(_("Different hypervisor type" | ||
988 | 210 | "(%(orig_hypervisor)s->" | ||
989 | 211 | "%(dest_hypervisor)s)") % locals()) | ||
990 | 212 | |||
991 | 213 | # Checking hypervisor version. | ||
992 | 214 | orig_hypervisor = oservice_ref['hypervisor_version'] | ||
993 | 215 | dest_hypervisor = dservice_ref['hypervisor_version'] | ||
994 | 216 | if orig_hypervisor > dest_hypervisor: | ||
995 | 217 | raise exception.Invalid(_("Older hypervisor version" | ||
996 | 218 | "(%(orig_hypervisor)s->" | ||
997 | 219 | "%(dest_hypervisor)s)") % locals()) | ||
998 | 220 | |||
999 | 221 | # Checking cpuinfo. | ||
1000 | 222 | try: | ||
1001 | 223 | rpc.call(context, | ||
1002 | 224 | db.queue_get_for(context, FLAGS.compute_topic, dest), | ||
1003 | 225 | {"method": 'compare_cpu', | ||
1004 | 226 | "args": {'cpu_info': oservice_ref['cpu_info']}}) | ||
1005 | 227 | |||
1006 | 228 | except rpc.RemoteError: | ||
1007 | 229 | src = instance_ref['host'] | ||
1008 | 230 | logging.exception(_("host %(dest)s is not compatible with " | ||
1009 | 231 | "original host %(src)s.") % locals()) | ||
1010 | 232 | raise | ||
1011 | 233 | |||
1012 | 234 | def assert_compute_node_has_enough_resources(self, context, | ||
1013 | 235 | instance_ref, dest): | ||
1014 | 236 | """Checks if destination host has enough resource for live migration. | ||
1015 | 237 | |||
1016 | 238 | Currently, only memory checking has been done. | ||
1017 | 239 | If storage migration (block migration, meaning live migration | ||
1018 | 240 | without any shared storage) becomes available, local storage | ||
1019 | 241 | checking will also be necessary. | ||
1020 | 242 | |||
1021 | 243 | :param context: security context | ||
1022 | 244 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
1023 | 245 | :param dest: destination host | ||
1024 | 246 | |||
1025 | 247 | """ | ||
1026 | 248 | |||
1027 | 249 | # Getting instance information | ||
1028 | 250 | ec2_id = instance_ref['hostname'] | ||
1029 | 251 | |||
1030 | 252 | # Getting host information | ||
1031 | 253 | service_refs = db.service_get_all_compute_by_host(context, dest) | ||
1032 | 254 | compute_node_ref = service_refs[0]['compute_node'][0] | ||
1033 | 255 | |||
1034 | 256 | mem_total = int(compute_node_ref['memory_mb']) | ||
1035 | 257 | mem_used = int(compute_node_ref['memory_mb_used']) | ||
1036 | 258 | mem_avail = mem_total - mem_used | ||
1037 | 259 | mem_inst = instance_ref['memory_mb'] | ||
1038 | 260 | if mem_avail <= mem_inst: | ||
1039 | 261 | raise exception.NotEmpty(_("Unable to migrate %(ec2_id)s " | ||
1040 | 262 | "to destination: %(dest)s " | ||
1041 | 263 | "(host:%(mem_avail)s " | ||
1042 | 264 | "<= instance:%(mem_inst)s)") | ||
1043 | 265 | % locals()) | ||
1044 | 266 | |||
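The memory test above computes available memory as total minus used and rejects the migration unless the instance fits strictly. The core predicate can be sketched in isolation (a hypothetical helper, mirroring the `mem_avail <= mem_inst` condition):

```python
def host_has_enough_memory(mem_total_mb, mem_used_mb, inst_mem_mb):
    """Return True when the host's free memory strictly exceeds
    the instance's memory requirement."""
    mem_avail_mb = mem_total_mb - mem_used_mb
    return mem_avail_mb > inst_mem_mb
```

Note the strict inequality: a host whose free memory exactly equals the instance size is rejected, matching the `<=` check in the code.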
1045 | 267 | def mounted_on_same_shared_storage(self, context, instance_ref, dest): | ||
1046 | 268 | """Check if the src and dest host mount same shared storage. | ||
1047 | 269 | |||
1048 | 270 | At first, dest host creates temp file, and src host can see | ||
1049 | 271 | it if they mounts same shared storage. Then src host erase it. | ||
1050 | 272 | |||
1051 | 273 | :param context: security context | ||
1052 | 274 | :param instance_ref: nova.db.sqlalchemy.models.Instance object | ||
1053 | 275 | :param dest: destination host | ||
1054 | 276 | |||
1055 | 277 | """ | ||
1056 | 278 | |||
1057 | 279 | src = instance_ref['host'] | ||
1058 | 280 | dst_t = db.queue_get_for(context, FLAGS.compute_topic, dest) | ||
1059 | 281 | src_t = db.queue_get_for(context, FLAGS.compute_topic, src) | ||
1060 | 282 | |||
1061 | 283 | try: | ||
1062 | 284 | # create tmpfile at dest host | ||
1063 | 285 | filename = rpc.call(context, dst_t, | ||
1064 | 286 | {"method": 'create_shared_storage_test_file'}) | ||
1065 | 287 | |||
1066 | 288 | # make sure existence at src host. | ||
1067 | 289 | rpc.call(context, src_t, | ||
1068 | 290 | {"method": 'check_shared_storage_test_file', | ||
1069 | 291 | "args": {'filename': filename}}) | ||
1070 | 292 | |||
1071 | 293 | except rpc.RemoteError: | ||
1072 | 294 | ipath = FLAGS.instances_path | ||
1073 | 295 | logging.error(_("Cannot confirm tmpfile at %(ipath)s is on " | ||
1074 | 296 | "same shared storage between %(src)s " | ||
1075 | 297 | "and %(dest)s.") % locals()) | ||
1076 | 298 | raise | ||
1077 | 299 | |||
1078 | 300 | finally: | ||
1079 | 301 | rpc.call(context, dst_t, | ||
1080 | 302 | {"method": 'cleanup_shared_storage_test_file', | ||
1081 | 303 | "args": {'filename': filename}}) | ||
1082 | 67 | 304 | ||
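The tmpfile handshake above runs over RPC, but its three steps are plain filesystem operations: the dest host creates a probe file under the instances path, the src host checks for it, and the dest host removes it. A local sketch of those steps, with illustrative function names (the real handlers live in the compute manager and are invoked via `rpc.call`):

```python
import os
import tempfile


def create_shared_storage_test_file(instances_path):
    """Dest-host step: create a uniquely named probe file and
    return its name (not the full path)."""
    fd, path = tempfile.mkstemp(dir=instances_path)
    os.close(fd)
    return os.path.basename(path)


def check_shared_storage_test_file(instances_path, filename):
    """Src-host step: fail if the dest host's probe file is not
    visible, meaning the two hosts do not share storage."""
    if not os.path.exists(os.path.join(instances_path, filename)):
        raise IOError("%s not visible; storage is not shared" % filename)


def cleanup_shared_storage_test_file(instances_path, filename):
    """Final step (run in a finally block): remove the probe file."""
    os.unlink(os.path.join(instances_path, filename))
```

When both "hosts" point at the same directory, as on genuinely shared storage, the check passes and cleanup leaves the directory empty.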
1083 | === modified file 'nova/scheduler/manager.py' | |||
1084 | --- nova/scheduler/manager.py 2011-01-19 15:41:30 +0000 | |||
1085 | +++ nova/scheduler/manager.py 2011-03-10 06:27:59 +0000 | |||
1086 | @@ -67,3 +67,55 @@ | |||
1087 | 67 | {"method": method, | 67 | {"method": method, |
1088 | 68 | "args": kwargs}) | 68 | "args": kwargs}) |
1089 | 69 | LOG.debug(_("Casting to %(topic)s %(host)s for %(method)s") % locals()) | 69 | LOG.debug(_("Casting to %(topic)s %(host)s for %(method)s") % locals()) |
1090 | 70 | |||
1091 | 71 | # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin. | ||
1092 | 72 | # Based on bexar design summit discussion, | ||
1093 | 73 | # just put this here for bexar release. | ||
1094 | 74 | def show_host_resources(self, context, host, *args): | ||
1095 | 75 | """Shows the physical/usage resource given by hosts. | ||
1096 | 76 | |||
1097 | 77 | :param context: security context | ||
1098 | 78 | :param host: hostname | ||
1099 | 79 | :returns: | ||
1100 | 80 | example format is below. | ||
1101 | 81 | {'resource':D, 'usage':{proj_id1:D, proj_id2:D}} | ||
1102 | 82 | D: {'vcpus':3, 'memory_mb':2048, 'local_gb':2048} | ||
1103 | 83 | |||
1104 | 84 | """ | ||
1105 | 85 | |||
1106 | 86 | compute_ref = db.service_get_all_compute_by_host(context, host) | ||
1107 | 87 | compute_ref = compute_ref[0] | ||
1108 | 88 | |||
1109 | 89 | # Getting physical resource information | ||
1110 | 90 | compute_node_ref = compute_ref['compute_node'][0] | ||
1111 | 91 | resource = {'vcpus': compute_node_ref['vcpus'], | ||
1112 | 92 | 'memory_mb': compute_node_ref['memory_mb'], | ||
1113 | 93 | 'local_gb': compute_node_ref['local_gb'], | ||
1114 | 94 | 'vcpus_used': compute_node_ref['vcpus_used'], | ||
1115 | 95 | 'memory_mb_used': compute_node_ref['memory_mb_used'], | ||
1116 | 96 | 'local_gb_used': compute_node_ref['local_gb_used']} | ||
1117 | 97 | |||
1118 | 98 | # Getting usage resource information | ||
1119 | 99 | usage = {} | ||
1120 | 100 | instance_refs = db.instance_get_all_by_host(context, | ||
1121 | 101 | compute_ref['host']) | ||
1122 | 102 | if not instance_refs: | ||
1123 | 103 | return {'resource': resource, 'usage': usage} | ||
1124 | 104 | |||
1125 | 105 | project_ids = [i['project_id'] for i in instance_refs] | ||
1126 | 106 | project_ids = list(set(project_ids)) | ||
1127 | 107 | for project_id in project_ids: | ||
1128 | 108 | vcpus = db.instance_get_vcpu_sum_by_host_and_project(context, | ||
1129 | 109 | host, | ||
1130 | 110 | project_id) | ||
1131 | 111 | mem = db.instance_get_memory_sum_by_host_and_project(context, | ||
1132 | 112 | host, | ||
1133 | 113 | project_id) | ||
1134 | 114 | hdd = db.instance_get_disk_sum_by_host_and_project(context, | ||
1135 | 115 | host, | ||
1136 | 116 | project_id) | ||
1137 | 117 | usage[project_id] = {'vcpus': int(vcpus), | ||
1138 | 118 | 'memory_mb': int(mem), | ||
1139 | 119 | 'local_gb': int(hdd)} | ||
1140 | 120 | |||
1141 | 121 | return {'resource': resource, 'usage': usage} | ||
1142 | 70 | 122 | ||
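`show_host_resources()` builds its `usage` dict with three per-project DB sum queries. The same aggregation can be sketched over plain instance dicts, which makes the expected output shape (`{proj_id: {'vcpus': ..., 'memory_mb': ..., 'local_gb': ...}}`) easy to see; this is an illustrative helper, not the manager's code path:

```python
def aggregate_usage(instances):
    """Sum vcpus/memory_mb/local_gb per project_id over a list of
    instance dicts, mirroring the per-project DB sum queries."""
    usage = {}
    for inst in instances:
        totals = usage.setdefault(inst['project_id'],
                                  {'vcpus': 0, 'memory_mb': 0,
                                   'local_gb': 0})
        for key in ('vcpus', 'memory_mb', 'local_gb'):
            totals[key] += inst[key]
    return usage
```

An empty instance list yields an empty `usage` dict, matching the early-return branch in the method above.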
1143 | === modified file 'nova/service.py' | |||
1144 | --- nova/service.py 2011-03-09 00:51:05 +0000 | |||
1145 | +++ nova/service.py 2011-03-10 06:27:59 +0000 | |||
1146 | @@ -92,6 +92,9 @@ | |||
1147 | 92 | except exception.NotFound: | 92 | except exception.NotFound: |
1148 | 93 | self._create_service_ref(ctxt) | 93 | self._create_service_ref(ctxt) |
1149 | 94 | 94 | ||
1150 | 95 | if 'nova-compute' == self.binary: | ||
1151 | 96 | self.manager.update_available_resource(ctxt) | ||
1152 | 97 | |||
1153 | 95 | conn1 = rpc.Connection.instance(new=True) | 98 | conn1 = rpc.Connection.instance(new=True) |
1154 | 96 | conn2 = rpc.Connection.instance(new=True) | 99 | conn2 = rpc.Connection.instance(new=True) |
1155 | 97 | if self.report_interval: | 100 | if self.report_interval: |
1156 | 98 | 101 | ||
1157 | === modified file 'nova/tests/test_compute.py' | |||
1158 | --- nova/tests/test_compute.py 2011-03-10 04:42:11 +0000 | |||
1159 | +++ nova/tests/test_compute.py 2011-03-10 06:27:59 +0000 | |||
1160 | @@ -20,6 +20,7 @@ | |||
1161 | 20 | """ | 20 | """ |
1162 | 21 | 21 | ||
1163 | 22 | import datetime | 22 | import datetime |
1164 | 23 | import mox | ||
1165 | 23 | 24 | ||
1166 | 24 | from nova import compute | 25 | from nova import compute |
1167 | 25 | from nova import context | 26 | from nova import context |
1168 | @@ -27,15 +28,20 @@ | |||
1169 | 27 | from nova import exception | 28 | from nova import exception |
1170 | 28 | from nova import flags | 29 | from nova import flags |
1171 | 29 | from nova import log as logging | 30 | from nova import log as logging |
1172 | 31 | from nova import rpc | ||
1173 | 30 | from nova import test | 32 | from nova import test |
1174 | 31 | from nova import utils | 33 | from nova import utils |
1175 | 32 | from nova.auth import manager | 34 | from nova.auth import manager |
1176 | 33 | from nova.compute import instance_types | 35 | from nova.compute import instance_types |
1177 | 36 | from nova.compute import manager as compute_manager | ||
1178 | 37 | from nova.compute import power_state | ||
1179 | 38 | from nova.db.sqlalchemy import models | ||
1180 | 34 | from nova.image import local | 39 | from nova.image import local |
1181 | 35 | 40 | ||
1182 | 36 | LOG = logging.getLogger('nova.tests.compute') | 41 | LOG = logging.getLogger('nova.tests.compute') |
1183 | 37 | FLAGS = flags.FLAGS | 42 | FLAGS = flags.FLAGS |
1184 | 38 | flags.DECLARE('stub_network', 'nova.compute.manager') | 43 | flags.DECLARE('stub_network', 'nova.compute.manager') |
1185 | 44 | flags.DECLARE('live_migration_retry_count', 'nova.compute.manager') | ||
1186 | 39 | 45 | ||
1187 | 40 | 46 | ||
1188 | 41 | class ComputeTestCase(test.TestCase): | 47 | class ComputeTestCase(test.TestCase): |
1189 | @@ -83,6 +89,41 @@ | |||
1190 | 83 | 'project_id': self.project.id} | 89 | 'project_id': self.project.id} |
1191 | 84 | return db.security_group_create(self.context, values) | 90 | return db.security_group_create(self.context, values) |
1192 | 85 | 91 | ||
1193 | 92 | def _get_dummy_instance(self): | ||
1194 | 93 | """Get mock-return-value instance object | ||
1195 | 94 | Use this when any testcase executed later than test_run_terminate | ||
1196 | 95 | """ | ||
1197 | 96 | vol1 = models.Volume() | ||
1198 | 97 | vol1['id'] = 1 | ||
1199 | 98 | vol2 = models.Volume() | ||
1200 | 99 | vol2['id'] = 2 | ||
1201 | 100 | instance_ref = models.Instance() | ||
1202 | 101 | instance_ref['id'] = 1 | ||
1203 | 102 | instance_ref['volumes'] = [vol1, vol2] | ||
1204 | 103 | instance_ref['hostname'] = 'i-00000001' | ||
1205 | 104 | instance_ref['host'] = 'dummy' | ||
1206 | 105 | return instance_ref | ||
1207 | 106 | |||
1208 | 107 | def test_create_instance_defaults_display_name(self): | ||
1209 | 108 | """Verify that an instance cannot be created without a display_name.""" | ||
1210 | 109 | cases = [dict(), dict(display_name=None)] | ||
1211 | 110 | for instance in cases: | ||
1212 | 111 | ref = self.compute_api.create(self.context, | ||
1213 | 112 | FLAGS.default_instance_type, None, **instance) | ||
1214 | 113 | try: | ||
1215 | 114 | self.assertNotEqual(ref[0]['display_name'], None) | ||
1216 | 115 | finally: | ||
1217 | 116 | db.instance_destroy(self.context, ref[0]['id']) | ||
1218 | 117 | |||
1219 | 118 | def test_create_instance_associates_security_groups(self): | ||
1220 | 119 | """Make sure create associates security groups""" | ||
1221 | 120 | group = self._create_group() | ||
1222 | 121 | ref = self.compute_api.create(self.context, | ||
1223 | 122 | FLAGS.default_instance_type, None, | ||
1224 | 123 | security_group=['testgroup']) | ||
1225 | 124 | try: | ||
1226 | 125 | self.assertEqual(len(db.security_group_get_by_instance( | ||
1227 | 126 | self.context, ref[0]['id'])), 1) | ||
1228 | 127 | finally: | ||
1229 | 128 | db.security_group_destroy(self.context, group['id']) | ||
1230 | 129 | db.instance_destroy(self.context, ref[0]['id']) | ||
1227 | 126 | |||
1228 | 86 | def test_create_instance_defaults_display_name(self): | 127 | def test_create_instance_defaults_display_name(self): |
1229 | 87 | """Verify that an instance cannot be created without a display_name.""" | 128 | """Verify that an instance cannot be created without a display_name.""" |
1230 | 88 | cases = [dict(), dict(display_name=None)] | 129 | cases = [dict(), dict(display_name=None)] |
1231 | @@ -301,3 +342,256 @@ | |||
1232 | 301 | self.compute.terminate_instance(self.context, instance_id) | 342 | self.compute.terminate_instance(self.context, instance_id) |
1233 | 302 | type = instance_types.get_by_flavor_id("1") | 343 | type = instance_types.get_by_flavor_id("1") |
1234 | 303 | self.assertEqual(type, 'm1.tiny') | 344 | self.assertEqual(type, 'm1.tiny') |
1235 | 345 | |||
1236 | 346 | def _setup_other_managers(self): | ||
1237 | 347 | self.volume_manager = utils.import_object(FLAGS.volume_manager) | ||
1238 | 348 | self.network_manager = utils.import_object(FLAGS.network_manager) | ||
1239 | 349 | self.compute_driver = utils.import_object(FLAGS.compute_driver) | ||
1240 | 350 | |||
1241 | 351 | def test_pre_live_migration_instance_has_no_fixed_ip(self): | ||
1242 | 352 | """Confirm raising exception if instance doesn't have fixed_ip.""" | ||
1243 | 353 | instance_ref = self._get_dummy_instance() | ||
1244 | 354 | c = context.get_admin_context() | ||
1245 | 355 | i_id = instance_ref['id'] | ||
1246 | 356 | |||
1247 | 357 | dbmock = self.mox.CreateMock(db) | ||
1248 | 358 | dbmock.instance_get(c, i_id).AndReturn(instance_ref) | ||
1249 | 359 | dbmock.instance_get_fixed_address(c, i_id).AndReturn(None) | ||
1250 | 360 | |||
1251 | 361 | self.compute.db = dbmock | ||
1252 | 362 | self.mox.ReplayAll() | ||
1253 | 363 | self.assertRaises(exception.NotFound, | ||
1254 | 364 | self.compute.pre_live_migration, | ||
1255 | 365 | c, instance_ref['id']) | ||
1256 | 366 | |||
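The tests above use mox's record/replay style: build a mock `db`, record the expected calls, `ReplayAll()`, then run the code under test. The same "no fixed ip means failure" scenario can be sketched with the standard library's `unittest.mock`; the guard function and exception below are illustrative stand-ins, not the manager's real code:

```python
from unittest import mock


class FixedIpNotFound(Exception):
    """Stand-in for the NotFound that pre_live_migration raises."""


def pre_live_migration_check(db_api, ctxt, instance_id):
    """Mirror of the guard under test: an instance without a
    fixed ip address cannot be live-migrated."""
    instance = db_api.instance_get(ctxt, instance_id)
    if db_api.instance_get_fixed_address(ctxt, instance_id) is None:
        raise FixedIpNotFound(instance['hostname'])
    return instance


# A stub db layer: instance exists, but has no fixed ip.
db_api = mock.Mock()
db_api.instance_get.return_value = {'hostname': 'i-00000001'}
db_api.instance_get_fixed_address.return_value = None
```

Unlike mox, `unittest.mock` verifies calls after the fact (`assert_called_once_with`) rather than requiring expectations to be recorded up front.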
1257 | 367 | def test_pre_live_migration_instance_has_volume(self): | ||
1258 | 368 | """Confirm setup_compute_volume is called when volume is mounted.""" | ||
1259 | 369 | i_ref = self._get_dummy_instance() | ||
1260 | 370 | c = context.get_admin_context() | ||
1261 | 371 | |||
1262 | 372 | self._setup_other_managers() | ||
1263 | 373 | dbmock = self.mox.CreateMock(db) | ||
1264 | 374 | volmock = self.mox.CreateMock(self.volume_manager) | ||
1265 | 375 | netmock = self.mox.CreateMock(self.network_manager) | ||
1266 | 376 | drivermock = self.mox.CreateMock(self.compute_driver) | ||
1267 | 377 | |||
1268 | 378 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1269 | 379 | dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn('dummy') | ||
1270 | 380 | for i in range(len(i_ref['volumes'])): | ||
1271 | 381 | vid = i_ref['volumes'][i]['id'] | ||
1272 | 382 | volmock.setup_compute_volume(c, vid).InAnyOrder('g1') | ||
1273 | 383 | netmock.setup_compute_network(c, i_ref['id']) | ||
1274 | 384 | drivermock.ensure_filtering_rules_for_instance(i_ref) | ||
1275 | 385 | |||
1276 | 386 | self.compute.db = dbmock | ||
1277 | 387 | self.compute.volume_manager = volmock | ||
1278 | 388 | self.compute.network_manager = netmock | ||
1279 | 389 | self.compute.driver = drivermock | ||
1280 | 390 | |||
1281 | 391 | self.mox.ReplayAll() | ||
1282 | 392 | ret = self.compute.pre_live_migration(c, i_ref['id']) | ||
1283 | 393 | self.assertEqual(ret, None) | ||
1284 | 394 | |||
1285 | 395 | def test_pre_live_migration_instance_has_no_volume(self): | ||
1286 | 396 | """Confirm log meg when instance doesn't mount any volumes.""" | ||
1287 | 397 | i_ref = self._get_dummy_instance() | ||
1288 | 398 | i_ref['volumes'] = [] | ||
1289 | 399 | c = context.get_admin_context() | ||
1290 | 400 | |||
1291 | 401 | self._setup_other_managers() | ||
1292 | 402 | dbmock = self.mox.CreateMock(db) | ||
1293 | 403 | netmock = self.mox.CreateMock(self.network_manager) | ||
1294 | 404 | drivermock = self.mox.CreateMock(self.compute_driver) | ||
1295 | 405 | |||
1296 | 406 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1297 | 407 | dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn('dummy') | ||
1298 | 408 | self.mox.StubOutWithMock(compute_manager.LOG, 'info') | ||
1299 | 409 | compute_manager.LOG.info(_("%s has no volume."), i_ref['hostname']) | ||
1300 | 410 | netmock.setup_compute_network(c, i_ref['id']) | ||
1301 | 411 | drivermock.ensure_filtering_rules_for_instance(i_ref) | ||
1302 | 412 | |||
1303 | 413 | self.compute.db = dbmock | ||
1304 | 414 | self.compute.network_manager = netmock | ||
1305 | 415 | self.compute.driver = drivermock | ||
1306 | 416 | |||
1307 | 417 | self.mox.ReplayAll() | ||
1308 | 418 | ret = self.compute.pre_live_migration(c, i_ref['id']) | ||
1309 | 419 | self.assertEqual(ret, None) | ||
1310 | 420 | |||
1311 | 421 | def test_pre_live_migration_setup_compute_node_fail(self): | ||
1312 | 422 | """Confirm operation setup_compute_network() fails. | ||
1313 | 423 | |||
1314 | 424 | It retries and raise exception when timeout exceeded. | ||
1315 | 425 | |||
1316 | 426 | """ | ||
1317 | 427 | |||
1318 | 428 | i_ref = self._get_dummy_instance() | ||
1319 | 429 | c = context.get_admin_context() | ||
1320 | 430 | |||
1321 | 431 | self._setup_other_managers() | ||
1322 | 432 | dbmock = self.mox.CreateMock(db) | ||
1323 | 433 | netmock = self.mox.CreateMock(self.network_manager) | ||
1324 | 434 | volmock = self.mox.CreateMock(self.volume_manager) | ||
1325 | 435 | |||
1326 | 436 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1327 | 437 | dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn('dummy') | ||
1328 | 438 | for i in range(len(i_ref['volumes'])): | ||
1329 | 439 | volmock.setup_compute_volume(c, i_ref['volumes'][i]['id']) | ||
1330 | 440 | for i in range(FLAGS.live_migration_retry_count): | ||
1331 | 441 | netmock.setup_compute_network(c, i_ref['id']).\ | ||
1332 | 442 | AndRaise(exception.ProcessExecutionError()) | ||
1333 | 443 | |||
1334 | 444 | self.compute.db = dbmock | ||
1335 | 445 | self.compute.network_manager = netmock | ||
1336 | 446 | self.compute.volume_manager = volmock | ||
1337 | 447 | |||
1338 | 448 | self.mox.ReplayAll() | ||
1339 | 449 | self.assertRaises(exception.ProcessExecutionError, | ||
1340 | 450 | self.compute.pre_live_migration, | ||
1341 | 451 | c, i_ref['id']) | ||
1342 | 452 | |||
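This test exercises the retry behavior the branch description mentions: when back-to-back migrations make iptables complain, `setup_compute_network()` is retried up to `live_migration_retry_count` times before the error is re-raised. The retry loop itself can be sketched generically (a hypothetical helper, not the manager's implementation):

```python
import time


def call_with_retry(fn, retries, delay=0):
    """Call fn up to `retries` times; on each failure short of the
    last attempt, wait `delay` seconds and try again, otherwise
    re-raise the final error."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

A transient failure (e.g. iptables briefly locked) succeeds on a later attempt; a persistent one surfaces as the original exception after the last retry.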
1343 | 453 | def test_live_migration_works_correctly_with_volume(self): | ||
1344 | 454 | """Confirm check_for_export to confirm volume health check.""" | ||
1345 | 455 | i_ref = self._get_dummy_instance() | ||
1346 | 456 | c = context.get_admin_context() | ||
1347 | 457 | topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) | ||
1348 | 458 | |||
1349 | 459 | dbmock = self.mox.CreateMock(db) | ||
1350 | 460 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1351 | 461 | self.mox.StubOutWithMock(rpc, 'call') | ||
1352 | 462 | rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export", | ||
1353 | 463 | "args": {'instance_id': i_ref['id']}}) | ||
1354 | 464 | dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ | ||
1355 | 465 | AndReturn(topic) | ||
1356 | 466 | rpc.call(c, topic, {"method": "pre_live_migration", | ||
1357 | 467 | "args": {'instance_id': i_ref['id']}}) | ||
1358 | 468 | self.mox.StubOutWithMock(self.compute.driver, 'live_migration') | ||
1359 | 469 | self.compute.driver.live_migration(c, i_ref, i_ref['host'], | ||
1360 | 470 | self.compute.post_live_migration, | ||
1361 | 471 | self.compute.recover_live_migration) | ||
1362 | 472 | |||
1363 | 473 | self.compute.db = dbmock | ||
1364 | 474 | self.mox.ReplayAll() | ||
1365 | 475 | ret = self.compute.live_migration(c, i_ref['id'], i_ref['host']) | ||
1366 | 476 | self.assertEqual(ret, None) | ||
1367 | 477 | |||
1368 | 478 | def test_live_migration_dest_raises_exception(self): | ||
1369 | 479 | """Confirm exception when pre_live_migration fails.""" | ||
1370 | 480 | i_ref = self._get_dummy_instance() | ||
1371 | 481 | c = context.get_admin_context() | ||
1372 | 482 | topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) | ||
1373 | 483 | |||
1374 | 484 | dbmock = self.mox.CreateMock(db) | ||
1375 | 485 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1376 | 486 | self.mox.StubOutWithMock(rpc, 'call') | ||
1377 | 487 | rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export", | ||
1378 | 488 | "args": {'instance_id': i_ref['id']}}) | ||
1379 | 489 | dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ | ||
1380 | 490 | AndReturn(topic) | ||
1381 | 491 | rpc.call(c, topic, {"method": "pre_live_migration", | ||
1382 | 492 | "args": {'instance_id': i_ref['id']}}).\ | ||
1383 | 493 | AndRaise(rpc.RemoteError('', '', '')) | ||
1384 | 494 | dbmock.instance_update(c, i_ref['id'], {'state_description': 'running', | ||
1385 | 495 | 'state': power_state.RUNNING, | ||
1386 | 496 | 'host': i_ref['host']}) | ||
1387 | 497 | for v in i_ref['volumes']: | ||
1388 | 498 | dbmock.volume_update(c, v['id'], {'status': 'in-use'}) | ||
1389 | 499 | |||
1390 | 500 | self.compute.db = dbmock | ||
1391 | 501 | self.mox.ReplayAll() | ||
1392 | 502 | self.assertRaises(rpc.RemoteError, | ||
1393 | 503 | self.compute.live_migration, | ||
1394 | 504 | c, i_ref['id'], i_ref['host']) | ||
1395 | 505 | |||
1396 | 506 | def test_live_migration_dest_raises_exception_no_volume(self): | ||
1397 | 507 | """Same as above test(input pattern is different) """ | ||
1398 | 508 | i_ref = self._get_dummy_instance() | ||
1399 | 509 | i_ref['volumes'] = [] | ||
1400 | 510 | c = context.get_admin_context() | ||
1401 | 511 | topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) | ||
1402 | 512 | |||
1403 | 513 | dbmock = self.mox.CreateMock(db) | ||
1404 | 514 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1405 | 515 | dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ | ||
1406 | 516 | AndReturn(topic) | ||
1407 | 517 | self.mox.StubOutWithMock(rpc, 'call') | ||
1408 | 518 | rpc.call(c, topic, {"method": "pre_live_migration", | ||
1409 | 519 | "args": {'instance_id': i_ref['id']}}).\ | ||
1410 | 520 | AndRaise(rpc.RemoteError('', '', '')) | ||
1411 | 521 | dbmock.instance_update(c, i_ref['id'], {'state_description': 'running', | ||
1412 | 522 | 'state': power_state.RUNNING, | ||
1413 | 523 | 'host': i_ref['host']}) | ||
1414 | 524 | |||
1415 | 525 | self.compute.db = dbmock | ||
1416 | 526 | self.mox.ReplayAll() | ||
1417 | 527 | self.assertRaises(rpc.RemoteError, | ||
1418 | 528 | self.compute.live_migration, | ||
1419 | 529 | c, i_ref['id'], i_ref['host']) | ||
1420 | 530 | |||
1421 | 531 | def test_live_migration_works_correctly_no_volume(self): | ||
1422 | 532 | """Confirm live_migration() works as expected correctly.""" | ||
1423 | 533 | i_ref = self._get_dummy_instance() | ||
1424 | 534 | i_ref['volumes'] = [] | ||
1425 | 535 | c = context.get_admin_context() | ||
1426 | 536 | topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) | ||
1427 | 537 | |||
1428 | 538 | dbmock = self.mox.CreateMock(db) | ||
1429 | 539 | dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) | ||
1430 | 540 | self.mox.StubOutWithMock(rpc, 'call') | ||
1431 | 541 | dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ | ||
1432 | 542 | AndReturn(topic) | ||
1433 | 543 | rpc.call(c, topic, {"method": "pre_live_migration", | ||
1434 | 544 | "args": {'instance_id': i_ref['id']}}) | ||
1435 | 545 | self.mox.StubOutWithMock(self.compute.driver, 'live_migration') | ||
1436 | 546 | self.compute.driver.live_migration(c, i_ref, i_ref['host'], | ||
1437 | 547 | self.compute.post_live_migration, | ||
1438 | 548 | self.compute.recover_live_migration) | ||
1439 | 549 | |||
1440 | 550 | self.compute.db = dbmock | ||
1441 | 551 | self.mox.ReplayAll() | ||
1442 | 552 | ret = self.compute.live_migration(c, i_ref['id'], i_ref['host']) | ||
1443 | 553 | self.assertEqual(ret, None) | ||
1444 | 554 | |||
1445 | 555 | def test_post_live_migration_working_correctly(self): | ||
1446 | 556 | """Confirm post_live_migration() works as expected correctly.""" | ||
1447 | 557 | dest = 'desthost' | ||
1448 | 558 | flo_addr = '1.2.1.2' | ||
1449 | 559 | |||
1450 | 560 | # Preparing data | ||
1451 | 561 | c = context.get_admin_context() | ||
1452 | 562 | instance_id = self._create_instance() | ||
1453 | 563 | i_ref = db.instance_get(c, instance_id) | ||
1454 | 564 | db.instance_update(c, i_ref['id'], {'state_description': 'migrating', | ||
1455 | 565 | 'state': power_state.PAUSED}) | ||
1456 | 566 | v_ref = db.volume_create(c, {'size': 1, 'instance_id': instance_id}) | ||
1457 | 567 | fix_addr = db.fixed_ip_create(c, {'address': '1.1.1.1', | ||
1458 | 568 | 'instance_id': instance_id}) | ||
1459 | 569 | fix_ref = db.fixed_ip_get_by_address(c, fix_addr) | ||
1460 | 570 | flo_ref = db.floating_ip_create(c, {'address': flo_addr, | ||
1461 | 571 | 'fixed_ip_id': fix_ref['id']}) | ||
1462 | 572 | # reload is necessary before setting mocks | ||
1463 | 573 | i_ref = db.instance_get(c, instance_id) | ||
1464 | 574 | |||
1465 | 575 | # Preparing mocks | ||
1466 | 576 | self.mox.StubOutWithMock(self.compute.volume_manager, | ||
1467 | 577 | 'remove_compute_volume') | ||
1468 | 578 | for v in i_ref['volumes']: | ||
1469 | 579 | self.compute.volume_manager.remove_compute_volume(c, v['id']) | ||
1470 | 580 | self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance') | ||
1471 | 581 | self.compute.driver.unfilter_instance(i_ref) | ||
1472 | 582 | |||
1473 | 583 | # executing | ||
1474 | 584 | self.mox.ReplayAll() | ||
1475 | 585 | ret = self.compute.post_live_migration(c, i_ref, dest) | ||
1476 | 586 | |||
1477 | 587 | # make sure all data has been rewritten to dest | ||
1478 | 588 | i_ref = db.instance_get(c, i_ref['id']) | ||
1479 | 589 | c1 = (i_ref['host'] == dest) | ||
1480 | 590 | flo_refs = db.floating_ip_get_all_by_host(c, dest) | ||
1481 | 591 | c2 = (len(flo_refs) != 0 and flo_refs[0]['address'] == flo_addr) | ||
1482 | 592 | |||
1483 | 593 | # post operation | ||
1484 | 594 | self.assertTrue(c1 and c2) | ||
1485 | 595 | db.instance_destroy(c, instance_id) | ||
1486 | 596 | db.volume_destroy(c, v_ref['id']) | ||
1487 | 597 | db.floating_ip_destroy(c, flo_addr) | ||
1488 | 304 | 598 | ||
1489 | === modified file 'nova/tests/test_scheduler.py' | |||
1490 | --- nova/tests/test_scheduler.py 2011-03-07 01:25:01 +0000 | |||
1491 | +++ nova/tests/test_scheduler.py 2011-03-10 06:27:59 +0000 | |||
1492 | @@ -20,10 +20,12 @@ | |||
1493 | 20 | """ | 20 | """ |
1494 | 21 | 21 | ||
1495 | 22 | import datetime | 22 | import datetime |
1496 | 23 | import mox | ||
1497 | 23 | 24 | ||
1498 | 24 | from mox import IgnoreArg | 25 | from mox import IgnoreArg |
1499 | 25 | from nova import context | 26 | from nova import context |
1500 | 26 | from nova import db | 27 | from nova import db |
1501 | 28 | from nova import exception | ||
1502 | 27 | from nova import flags | 29 | from nova import flags |
1503 | 28 | from nova import service | 30 | from nova import service |
1504 | 29 | from nova import test | 31 | from nova import test |
1505 | @@ -32,11 +34,14 @@ | |||
1506 | 32 | from nova.auth import manager as auth_manager | 34 | from nova.auth import manager as auth_manager |
1507 | 33 | from nova.scheduler import manager | 35 | from nova.scheduler import manager |
1508 | 34 | from nova.scheduler import driver | 36 | from nova.scheduler import driver |
1509 | 37 | from nova.compute import power_state | ||
1510 | 38 | from nova.db.sqlalchemy import models | ||
1511 | 35 | 39 | ||
1512 | 36 | 40 | ||
1513 | 37 | FLAGS = flags.FLAGS | 41 | FLAGS = flags.FLAGS |
1514 | 38 | flags.DECLARE('max_cores', 'nova.scheduler.simple') | 42 | flags.DECLARE('max_cores', 'nova.scheduler.simple') |
1515 | 39 | flags.DECLARE('stub_network', 'nova.compute.manager') | 43 | flags.DECLARE('stub_network', 'nova.compute.manager') |
1516 | 44 | flags.DECLARE('instances_path', 'nova.compute.manager') | ||
1517 | 40 | 45 | ||
1518 | 41 | 46 | ||
1519 | 42 | class TestDriver(driver.Scheduler): | 47 | class TestDriver(driver.Scheduler): |
1520 | @@ -54,6 +59,34 @@ | |||
1521 | 54 | super(SchedulerTestCase, self).setUp() | 59 | super(SchedulerTestCase, self).setUp() |
1522 | 55 | self.flags(scheduler_driver='nova.tests.test_scheduler.TestDriver') | 60 | self.flags(scheduler_driver='nova.tests.test_scheduler.TestDriver') |
1523 | 56 | 61 | ||
1524 | 62 | def _create_compute_service(self): | ||
1525 | 63 | """Create compute-manager(ComputeNode and Service record).""" | ||
1526 | 64 | ctxt = context.get_admin_context() | ||
1527 | 65 | dic = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute', | ||
1528 | 66 | 'report_count': 0, 'availability_zone': 'dummyzone'} | ||
1529 | 67 | s_ref = db.service_create(ctxt, dic) | ||
1530 | 68 | |||
1531 | 69 | dic = {'service_id': s_ref['id'], | ||
1532 | 70 | 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100, | ||
1533 | 71 | 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10, | ||
1534 | 72 | 'hypervisor_type': 'qemu', 'hypervisor_version': 12003, | ||
1535 | 73 | 'cpu_info': ''} | ||
1536 | 74 | db.compute_node_create(ctxt, dic) | ||
1537 | 75 | |||
1538 | 76 | return db.service_get(ctxt, s_ref['id']) | ||
1539 | 77 | |||
1540 | 78 | def _create_instance(self, **kwargs): | ||
1541 | 79 | """Create a test instance""" | ||
1542 | 80 | ctxt = context.get_admin_context() | ||
1543 | 81 | inst = {} | ||
1544 | 82 | inst['user_id'] = 'admin' | ||
1545 | 83 | inst['project_id'] = kwargs.get('project_id', 'fake') | ||
1546 | 84 | inst['host'] = kwargs.get('host', 'dummy') | ||
1547 | 85 | inst['vcpus'] = kwargs.get('vcpus', 1) | ||
1548 | 86 | inst['memory_mb'] = kwargs.get('memory_mb', 10) | ||
1549 | 87 | inst['local_gb'] = kwargs.get('local_gb', 20) | ||
1550 | 88 | return db.instance_create(ctxt, inst) | ||
1551 | 89 | |||
1552 | 57 | def test_fallback(self): | 90 | def test_fallback(self): |
1553 | 58 | scheduler = manager.SchedulerManager() | 91 | scheduler = manager.SchedulerManager() |
1554 | 59 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) | 92 | self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True) |
1555 | @@ -76,6 +109,73 @@ | |||
1556 | 76 | self.mox.ReplayAll() | 109 | self.mox.ReplayAll() |
1557 | 77 | scheduler.named_method(ctxt, 'topic', num=7) | 110 | scheduler.named_method(ctxt, 'topic', num=7) |
1558 | 78 | 111 | ||
1559 | 112 | def test_show_host_resources_host_not_exit(self): | ||
1560 | 113 | """A host given as an argument does not exists.""" | ||
1561 | 114 | |||
1562 | 115 | scheduler = manager.SchedulerManager() | ||
1563 | 116 | dest = 'dummydest' | ||
1564 | 117 | ctxt = context.get_admin_context() | ||
1565 | 118 | |||
1566 | 119 | try: | ||
1567 | 120 | scheduler.show_host_resources(ctxt, dest) | ||
1568 | 121 | except exception.NotFound, e: | ||
1569 | 122 | c1 = (e.message.find(_("does not exist or is not a " | ||
1570 | 123 | "compute node.")) >= 0) | ||
1571 | 124 | self.assertTrue(c1) | ||
1572 | 125 | |||
1573 | 126 | def _dic_is_equal(self, dic1, dic2, keys=None): | ||
1574 | 127 | """Compares 2 dictionary contents(Helper method)""" | ||
1575 | 128 | if not keys: | ||
1576 | 129 | keys = ['vcpus', 'memory_mb', 'local_gb', | ||
1577 | 130 | 'vcpus_used', 'memory_mb_used', 'local_gb_used'] | ||
1578 | 131 | |||
1579 | 132 | for key in keys: | ||
1580 | 133 | if not (dic1[key] == dic2[key]): | ||
1581 | 134 | return False | ||
1582 | 135 | return True | ||
1583 | 136 | |||
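The `_dic_is_equal` helper above loops and returns early on the first mismatch. The same comparison reads more idiomatically with `all()` over a generator; a one-line equivalent sketch (illustrative name, same semantics over the given keys):

```python
def dicts_equal_on(d1, d2, keys):
    """True iff d1 and d2 agree on every key in `keys`
    (an all()-based equivalent of the loop in _dic_is_equal)."""
    return all(d1[k] == d2[k] for k in keys)
```

Both versions assume every key in `keys` exists in both dicts, and both short-circuit at the first differing value.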
1584 | 137 | def test_show_host_resources_no_project(self): | ||
1585 | 138 | """No instance are running on the given host.""" | ||
1586 | 139 | |||
1587 | 140 | scheduler = manager.SchedulerManager() | ||
1588 | 141 | ctxt = context.get_admin_context() | ||
1589 | 142 | s_ref = self._create_compute_service() | ||
1590 | 143 | |||
+        result = scheduler.show_host_resources(ctxt, s_ref['host'])
+
+        # result checking
+        c1 = ('resource' in result and 'usage' in result)
+        compute_node = s_ref['compute_node'][0]
+        c2 = self._dic_is_equal(result['resource'], compute_node)
+        c3 = result['usage'] == {}
+        self.assertTrue(c1 and c2 and c3)
+        db.service_destroy(ctxt, s_ref['id'])
+
+    def test_show_host_resources_works_correctly(self):
+        """show_host_resources() works as expected."""
+
+        scheduler = manager.SchedulerManager()
+        ctxt = context.get_admin_context()
+        s_ref = self._create_compute_service()
+        i_ref1 = self._create_instance(project_id='p-01', host=s_ref['host'])
+        i_ref2 = self._create_instance(project_id='p-02', vcpus=3,
+                                       host=s_ref['host'])
+
+        result = scheduler.show_host_resources(ctxt, s_ref['host'])
+
+        c1 = ('resource' in result and 'usage' in result)
+        compute_node = s_ref['compute_node'][0]
+        c2 = self._dic_is_equal(result['resource'], compute_node)
+        c3 = result['usage'].keys() == ['p-01', 'p-02']
+        keys = ['vcpus', 'memory_mb', 'local_gb']
+        c4 = self._dic_is_equal(result['usage']['p-01'], i_ref1, keys)
+        c5 = self._dic_is_equal(result['usage']['p-02'], i_ref2, keys)
+        self.assertTrue(c1 and c2 and c3 and c4 and c5)
+
+        db.service_destroy(ctxt, s_ref['id'])
+        db.instance_destroy(ctxt, i_ref1['id'])
+        db.instance_destroy(ctxt, i_ref2['id'])
+
 
 class ZoneSchedulerTestCase(test.TestCase):
     """Test case for zone scheduler"""
@@ -161,9 +261,15 @@
         inst['project_id'] = self.project.id
         inst['instance_type'] = 'm1.tiny'
         inst['mac_address'] = utils.generate_mac()
+        inst['vcpus'] = kwargs.get('vcpus', 1)
         inst['ami_launch_index'] = 0
-        inst['vcpus'] = 1
         inst['availability_zone'] = kwargs.get('availability_zone', None)
+        inst['host'] = kwargs.get('host', 'dummy')
+        inst['memory_mb'] = kwargs.get('memory_mb', 20)
+        inst['local_gb'] = kwargs.get('local_gb', 30)
+        inst['launched_on'] = kwargs.get('launched_on', 'dummy')
+        inst['state_description'] = kwargs.get('state_description', 'running')
+        inst['state'] = kwargs.get('state', power_state.RUNNING)
         return db.instance_create(self.context, inst)['id']
 
     def _create_volume(self):
@@ -173,6 +279,211 @@
         vol['availability_zone'] = 'test'
         return db.volume_create(self.context, vol)['id']
 
+    def _create_compute_service(self, **kwargs):
+        """Create a compute service."""
+
+        dic = {'binary': 'nova-compute', 'topic': 'compute',
+               'report_count': 0, 'availability_zone': 'dummyzone'}
+        dic['host'] = kwargs.get('host', 'dummy')
+        s_ref = db.service_create(self.context, dic)
+        if 'created_at' in kwargs.keys() or 'updated_at' in kwargs.keys():
+            t = datetime.datetime.utcnow() - datetime.timedelta(0)
+            dic['created_at'] = kwargs.get('created_at', t)
+            dic['updated_at'] = kwargs.get('updated_at', t)
+            db.service_update(self.context, s_ref['id'], dic)
+
+        dic = {'service_id': s_ref['id'],
+               'vcpus': 16, 'memory_mb': 32, 'local_gb': 100,
+               'vcpus_used': 16, 'local_gb_used': 10,
+               'hypervisor_type': 'qemu', 'hypervisor_version': 12003,
+               'cpu_info': ''}
+        dic['memory_mb_used'] = kwargs.get('memory_mb_used', 32)
+        dic['hypervisor_type'] = kwargs.get('hypervisor_type', 'qemu')
+        dic['hypervisor_version'] = kwargs.get('hypervisor_version', 12003)
+        db.compute_node_create(self.context, dic)
+        return db.service_get(self.context, s_ref['id'])
+
+    def test_doesnt_report_disabled_hosts_as_up(self):
+        """Ensures driver doesn't find hosts before they are enabled."""
+        # NOTE(vish): constructing service without create method
+        #             because we are going to use it without queue
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        compute2 = service.Service('host2',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute2.start()
+        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
+        s2 = db.service_get_by_args(self.context, 'host2', 'nova-compute')
+        db.service_update(self.context, s1['id'], {'disabled': True})
+        db.service_update(self.context, s2['id'], {'disabled': True})
+        hosts = self.scheduler.driver.hosts_up(self.context, 'compute')
+        self.assertEqual(0, len(hosts))
+        compute1.kill()
+        compute2.kill()
+
+    def test_reports_enabled_hosts_as_up(self):
+        """Ensures driver can find the hosts that are up."""
+        # NOTE(vish): constructing service without create method
+        #             because we are going to use it without queue
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        compute2 = service.Service('host2',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute2.start()
+        hosts = self.scheduler.driver.hosts_up(self.context, 'compute')
+        self.assertEqual(2, len(hosts))
+        compute1.kill()
+        compute2.kill()
+
+    def test_least_busy_host_gets_instance(self):
+        """Ensures the host with fewer cores in use gets the next instance."""
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        compute2 = service.Service('host2',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute2.start()
+        instance_id1 = self._create_instance()
+        compute1.run_instance(self.context, instance_id1)
+        instance_id2 = self._create_instance()
+        host = self.scheduler.driver.schedule_run_instance(self.context,
+                                                           instance_id2)
+        self.assertEqual(host, 'host2')
+        compute1.terminate_instance(self.context, instance_id1)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+        compute2.kill()
+
+    def test_specific_host_gets_instance(self):
+        """Ensures if you set availability_zone it launches in that zone."""
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        compute2 = service.Service('host2',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute2.start()
+        instance_id1 = self._create_instance()
+        compute1.run_instance(self.context, instance_id1)
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        host = self.scheduler.driver.schedule_run_instance(self.context,
+                                                           instance_id2)
+        self.assertEqual('host1', host)
+        compute1.terminate_instance(self.context, instance_id1)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+        compute2.kill()
+
+    def test_wont_schedule_if_specified_host_is_down(self):
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
+        now = datetime.datetime.utcnow()
+        delta = datetime.timedelta(seconds=FLAGS.service_down_time * 2)
+        past = now - delta
+        db.service_update(self.context, s1['id'], {'updated_at': past})
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        self.assertRaises(driver.WillNotSchedule,
+                          self.scheduler.driver.schedule_run_instance,
+                          self.context,
+                          instance_id2)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+
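The host-down test above backdates a service's `updated_at` to twice `FLAGS.service_down_time` in the past so the scheduler refuses the host. The liveness rule it exercises can be sketched in isolation; the constant and the exact comparison operator here are assumptions, not Nova's actual flag handling:

```python
import datetime

# Stand-in for FLAGS.service_down_time (assumed value, in seconds).
SERVICE_DOWN_TIME = 60


def service_is_up(updated_at, now=None):
    """A service counts as up if it reported within the down-time window."""
    if now is None:
        now = datetime.datetime.utcnow()
    elapsed = now - updated_at
    return elapsed <= datetime.timedelta(seconds=SERVICE_DOWN_TIME)
```

With `updated_at` set to `now - 2 * SERVICE_DOWN_TIME`, as in the test, this check fails and the scheduler raises `WillNotSchedule`.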
+    def test_will_schedule_on_disabled_host_if_specified(self):
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        s1 = db.service_get_by_args(self.context, 'host1', 'nova-compute')
+        db.service_update(self.context, s1['id'], {'disabled': True})
+        instance_id2 = self._create_instance(availability_zone='nova:host1')
+        host = self.scheduler.driver.schedule_run_instance(self.context,
+                                                           instance_id2)
+        self.assertEqual('host1', host)
+        db.instance_destroy(self.context, instance_id2)
+        compute1.kill()
+
+    def test_too_many_cores(self):
+        """Ensures we don't go over max cores."""
+        compute1 = service.Service('host1',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute1.start()
+        compute2 = service.Service('host2',
+                                   'nova-compute',
+                                   'compute',
+                                   FLAGS.compute_manager)
+        compute2.start()
+        instance_ids1 = []
+        instance_ids2 = []
+        for index in xrange(FLAGS.max_cores):
+            instance_id = self._create_instance()
+            compute1.run_instance(self.context, instance_id)
+            instance_ids1.append(instance_id)
+            instance_id = self._create_instance()
+            compute2.run_instance(self.context, instance_id)
+            instance_ids2.append(instance_id)
+        instance_id = self._create_instance()
+        self.assertRaises(driver.NoValidHost,
+                          self.scheduler.driver.schedule_run_instance,
+                          self.context,
+                          instance_id)
+        for instance_id in instance_ids1:
+            compute1.terminate_instance(self.context, instance_id)
+        for instance_id in instance_ids2:
+            compute2.terminate_instance(self.context, instance_id)
+        compute1.kill()
+        compute2.kill()
+
+    def test_least_busy_host_gets_volume(self):
+        """Ensures the host with fewer gigabytes in use gets the next one."""
+        volume1 = service.Service('host1',
+                                  'nova-volume',
+                                  'volume',
+                                  FLAGS.volume_manager)
+        volume1.start()
+        volume2 = service.Service('host2',
+                                  'nova-volume',
+                                  'volume',
+                                  FLAGS.volume_manager)
+        volume2.start()
+        volume_id1 = self._create_volume()
+        volume1.create_volume(self.context, volume_id1)
+        volume_id2 = self._create_volume()
+        host = self.scheduler.driver.schedule_create_volume(self.context,
+                                                            volume_id2)
+        self.assertEqual(host, 'host2')
+        volume1.delete_volume(self.context, volume_id1)
+        db.volume_destroy(self.context, volume_id2)
+        volume1.kill()
+        volume2.kill()
+
     def test_doesnt_report_disabled_hosts_as_up(self):
         """Ensures driver doesn't find hosts before they are enabled"""
         compute1 = self.start_service('compute', host='host1')
@@ -316,3 +627,313 @@
         volume2.delete_volume(self.context, volume_id)
         volume1.kill()
         volume2.kill()
+
+    def test_scheduler_live_migration_with_volume(self):
+        """schedule_live_migration() works as expected.
+
+        Also checks that instance state changes from 'running'
+        to 'migrating'.
+
+        """
+
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        dic = {'instance_id': instance_id, 'size': 1}
+        v_ref = db.volume_create(self.context, dic)
+
+        # cannot check the 2nd argument b/c the address of the
+        # instance object differs.
+        driver_i = self.scheduler.driver
+        nocare = mox.IgnoreArg()
+        self.mox.StubOutWithMock(driver_i, '_live_migration_src_check')
+        self.mox.StubOutWithMock(driver_i, '_live_migration_dest_check')
+        self.mox.StubOutWithMock(driver_i, '_live_migration_common_check')
+        driver_i._live_migration_src_check(nocare, nocare)
+        driver_i._live_migration_dest_check(nocare, nocare, i_ref['host'])
+        driver_i._live_migration_common_check(nocare, nocare, i_ref['host'])
+        self.mox.StubOutWithMock(rpc, 'cast', use_mock_anything=True)
+        kwargs = {'instance_id': instance_id, 'dest': i_ref['host']}
+        rpc.cast(self.context,
+                 db.queue_get_for(nocare, FLAGS.compute_topic, i_ref['host']),
+                 {"method": 'live_migration', "args": kwargs})
+
+        self.mox.ReplayAll()
+        self.scheduler.live_migration(self.context, FLAGS.compute_topic,
+                                      instance_id=instance_id,
+                                      dest=i_ref['host'])
+
+        i_ref = db.instance_get(self.context, instance_id)
+        self.assertTrue(i_ref['state_description'] == 'migrating')
+        db.instance_destroy(self.context, instance_id)
+        db.volume_destroy(self.context, v_ref['id'])
+
+    def test_live_migration_src_check_instance_not_running(self):
+        """The instance given by instance_id is not running."""
+
+        instance_id = self._create_instance(state_description='migrating')
+        i_ref = db.instance_get(self.context, instance_id)
+
+        try:
+            self.scheduler.driver._live_migration_src_check(self.context,
+                                                            i_ref)
+        except exception.Invalid, e:
+            c = (e.message.find('is not running') > 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+
+    def test_live_migration_src_check_volume_node_not_alive(self):
+        """Raise exception when volume node is not alive."""
+
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        v_ref = db.volume_create(self.context, {'instance_id': instance_id,
+                                                'size': 1})
+        t1 = datetime.datetime.utcnow() - datetime.timedelta(1)
+        dic = {'created_at': t1, 'updated_at': t1, 'binary': 'nova-volume',
+               'topic': 'volume', 'report_count': 0}
+        s_ref = db.service_create(self.context, dic)
+
+        try:
+            self.scheduler.driver.schedule_live_migration(self.context,
+                                                          instance_id,
+                                                          i_ref['host'])
+        except exception.Invalid, e:
+            c = (e.message.find('volume node is not alive') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+        db.volume_destroy(self.context, v_ref['id'])
+
+    def test_live_migration_src_check_compute_node_not_alive(self):
+        """Raise exception when the src compute node is not alive."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        t = datetime.datetime.utcnow() - datetime.timedelta(10)
+        s_ref = self._create_compute_service(created_at=t, updated_at=t,
+                                             host=i_ref['host'])
+
+        try:
+            self.scheduler.driver._live_migration_src_check(self.context,
+                                                            i_ref)
+        except exception.Invalid, e:
+            c = (e.message.find('is not alive') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_src_check_works_correctly(self):
+        """Confirms this method finishes with no error."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        s_ref = self._create_compute_service(host=i_ref['host'])
+
+        ret = self.scheduler.driver._live_migration_src_check(self.context,
+                                                              i_ref)
+
+        self.assertTrue(ret == None)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_dest_check_not_alive(self):
+        """Confirms an exception is raised when dest host is not alive."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        t = datetime.datetime.utcnow() - datetime.timedelta(10)
+        s_ref = self._create_compute_service(created_at=t, updated_at=t,
+                                             host=i_ref['host'])
+
+        try:
+            self.scheduler.driver._live_migration_dest_check(self.context,
+                                                             i_ref,
+                                                             i_ref['host'])
+        except exception.Invalid, e:
+            c = (e.message.find('is not alive') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_dest_check_service_same_host(self):
+        """Confirms an exception is raised when dest and src are the same."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        s_ref = self._create_compute_service(host=i_ref['host'])
+
+        try:
+            self.scheduler.driver._live_migration_dest_check(self.context,
+                                                             i_ref,
+                                                             i_ref['host'])
+        except exception.Invalid, e:
+            c = (e.message.find('choose other host') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_dest_check_service_lack_memory(self):
+        """Confirms exception is raised when dest lacks enough memory."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        s_ref = self._create_compute_service(host='somewhere',
+                                             memory_mb_used=12)
+
+        try:
+            self.scheduler.driver._live_migration_dest_check(self.context,
+                                                             i_ref,
+                                                             'somewhere')
+        except exception.NotEmpty, e:
+            c = (e.message.find('Unable to migrate') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_dest_check_service_works_correctly(self):
+        """Confirms method finishes with no error."""
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        s_ref = self._create_compute_service(host='somewhere',
+                                             memory_mb_used=5)
+
+        ret = self.scheduler.driver._live_migration_dest_check(self.context,
+                                                               i_ref,
+                                                               'somewhere')
+        self.assertTrue(ret == None)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
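The two memory tests above pin down the destination resource rule: with 32 MB total, 12 MB used, and a 20 MB instance the check fails, while 5 MB used passes. A minimal sketch of a check consistent with both fixtures follows; the strict inequality is inferred from those two cases, not taken from Nova's driver source:

```python
def dest_has_enough_memory(memory_mb, memory_mb_used, instance_memory_mb):
    """Return True if the destination can hold the instance in memory.

    Free memory must strictly exceed the instance's requirement;
    the operator is an assumption inferred from the test fixtures
    (32 - 12 = 20 available fails for a 20 MB instance, 32 - 5 = 27 passes).
    """
    mem_avail = memory_mb - memory_mb_used
    return mem_avail > instance_memory_mb
```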
+    def test_live_migration_common_check_service_orig_not_exists(self):
+        """Original host does not exist."""
+
+        dest = 'dummydest'
+        # mocks for live_migration_common_check()
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+        t1 = datetime.datetime.utcnow() - datetime.timedelta(10)
+        s_ref = self._create_compute_service(created_at=t1, updated_at=t1,
+                                             host=dest)
+
+        # mocks for mounted_on_same_shared_storage()
+        fpath = '/test/20110127120000'
+        self.mox.StubOutWithMock(driver, 'rpc', use_mock_anything=True)
+        topic = FLAGS.compute_topic
+        driver.rpc.call(mox.IgnoreArg(),
+            db.queue_get_for(self.context, topic, dest),
+            {"method": 'create_shared_storage_test_file'}).AndReturn(fpath)
+        driver.rpc.call(mox.IgnoreArg(),
+            db.queue_get_for(mox.IgnoreArg(), topic, i_ref['host']),
+            {"method": 'check_shared_storage_test_file',
+             "args": {'filename': fpath}})
+        driver.rpc.call(mox.IgnoreArg(),
+            db.queue_get_for(mox.IgnoreArg(), topic, dest),
+            {"method": 'cleanup_shared_storage_test_file',
+             "args": {'filename': fpath}})
+
+        self.mox.ReplayAll()
+        try:
+            self.scheduler.driver._live_migration_common_check(self.context,
+                                                               i_ref,
+                                                               dest)
+        except exception.Invalid, e:
+            c = (e.message.find('does not exist') >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+
+    def test_live_migration_common_check_service_different_hypervisor(self):
+        """Original host and dest host have different hypervisor types."""
+        dest = 'dummydest'
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+
+        # compute service for original host
+        s_ref = self._create_compute_service(host=i_ref['host'])
+        # compute service for destination
+        s_ref2 = self._create_compute_service(host=dest, hypervisor_type='xen')
+
+        # mocks
+        driver = self.scheduler.driver
+        self.mox.StubOutWithMock(driver, 'mounted_on_same_shared_storage')
+        driver.mounted_on_same_shared_storage(mox.IgnoreArg(), i_ref, dest)
+
+        self.mox.ReplayAll()
+        try:
+            self.scheduler.driver._live_migration_common_check(self.context,
+                                                               i_ref,
+                                                               dest)
+        except exception.Invalid, e:
+            c = (e.message.find(_('Different hypervisor type')) >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+        db.service_destroy(self.context, s_ref2['id'])
+
+    def test_live_migration_common_check_service_different_version(self):
+        """Original host and dest host have different hypervisor versions."""
+        dest = 'dummydest'
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+
+        # compute service for original host
+        s_ref = self._create_compute_service(host=i_ref['host'])
+        # compute service for destination
+        s_ref2 = self._create_compute_service(host=dest,
+                                              hypervisor_version=12002)
+
+        # mocks
+        driver = self.scheduler.driver
+        self.mox.StubOutWithMock(driver, 'mounted_on_same_shared_storage')
+        driver.mounted_on_same_shared_storage(mox.IgnoreArg(), i_ref, dest)
+
+        self.mox.ReplayAll()
+        try:
+            self.scheduler.driver._live_migration_common_check(self.context,
+                                                               i_ref,
+                                                               dest)
+        except exception.Invalid, e:
+            c = (e.message.find(_('Older hypervisor version')) >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+        db.service_destroy(self.context, s_ref2['id'])
+
+    def test_live_migration_common_check_checking_cpuinfo_fail(self):
+        """Raise exception when the hosts don't have compatible cpus."""
+
+        dest = 'dummydest'
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(self.context, instance_id)
+
+        # compute service for original host
+        s_ref = self._create_compute_service(host=i_ref['host'])
+        # compute service for destination
+        s_ref2 = self._create_compute_service(host=dest)
+
+        # mocks
+        driver = self.scheduler.driver
+        self.mox.StubOutWithMock(driver, 'mounted_on_same_shared_storage')
+        driver.mounted_on_same_shared_storage(mox.IgnoreArg(), i_ref, dest)
+        self.mox.StubOutWithMock(rpc, 'call', use_mock_anything=True)
+        rpc.call(mox.IgnoreArg(), mox.IgnoreArg(),
+            {"method": 'compare_cpu',
+             "args": {'cpu_info': s_ref2['compute_node'][0]['cpu_info']}}).\
+            AndRaise(rpc.RemoteError("doesn't have compatibility to", "", ""))
+
+        self.mox.ReplayAll()
+        try:
+            self.scheduler.driver._live_migration_common_check(self.context,
+                                                               i_ref,
+                                                               dest)
+        except rpc.RemoteError, e:
+            c = (e.message.find(_("doesn't have compatibility to")) >= 0)
+
+        self.assertTrue(c)
+        db.instance_destroy(self.context, instance_id)
+        db.service_destroy(self.context, s_ref['id'])
+        db.service_destroy(self.context, s_ref2['id'])
 
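Many of the checks above share one assertion idiom: call the driver, catch `exception.Invalid`, and search the message for a substring. Stripped of Nova specifics, the pattern is (hypothetical `Invalid` class and guard function, not from the patch):

```python
class Invalid(Exception):
    """Stand-in for nova.exception.Invalid (hypothetical)."""


def src_check(state_description):
    # Mirrors the first guard the src-check tests exercise.
    if state_description != 'running':
        raise Invalid('instance is not running')


def raises_with_message(func, arg, snippet):
    """Return True iff func(arg) raises Invalid whose message contains snippet."""
    try:
        func(arg)
    except Invalid as e:
        return snippet in str(e)
    return False
```

One caveat in the originals: if no exception is raised, `c` is never bound, so `self.assertTrue(c)` fails with a `NameError` rather than a clean assertion message; the helper above returns `False` instead.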
=== modified file 'nova/tests/test_service.py'
--- nova/tests/test_service.py	2011-02-23 23:14:16 +0000
+++ nova/tests/test_service.py	2011-03-10 06:27:59 +0000
@@ -30,6 +30,7 @@
 from nova import test
 from nova import service
 from nova import manager
+from nova.compute import manager as compute_manager
 
 FLAGS = flags.FLAGS
 flags.DEFINE_string("fake_manager", "nova.tests.test_service.FakeManager",
@@ -251,3 +252,43 @@
         serv.report_state()
 
         self.assert_(not serv.model_disconnected)
+
+    def test_compute_can_update_available_resource(self):
+        """Confirm compute updates its record in the compute-service table."""
+        host = 'foo'
+        binary = 'nova-compute'
+        topic = 'compute'
+
+        # Mocks do not work without UnsetStubs() here.
+        self.mox.UnsetStubs()
+        ctxt = context.get_admin_context()
+        service_ref = db.service_create(ctxt, {'host': host,
+                                               'binary': binary,
+                                               'topic': topic})
+        serv = service.Service(host,
+                               binary,
+                               topic,
+                               'nova.compute.manager.ComputeManager')
+
+        # This test case exercises update_available_resource() directly;
+        # periodic tasks are not needed, so the intervals are set to 0.
+        serv.report_interval = 0
+        serv.periodic_interval = 0
+
+        # Creating mocks
+        self.mox.StubOutWithMock(service.rpc.Connection, 'instance')
+        service.rpc.Connection.instance(new=mox.IgnoreArg())
+        service.rpc.Connection.instance(new=mox.IgnoreArg())
+        self.mox.StubOutWithMock(serv.manager.driver,
+                                 'update_available_resource')
+        serv.manager.driver.update_available_resource(mox.IgnoreArg(), host)
+
+        # Just do start()-stop() without confirming that a new db record is
+        # created, because update_available_resource() only works in a
+        # libvirt environment. This test case confirms that
+        # update_available_resource() is called; otherwise, mox complains.
+        self.mox.ReplayAll()
+        serv.start()
+        serv.stop()
+
+        db.service_destroy(ctxt, service_ref['id'])
 
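The mox record/replay dance in this test — `StubOutWithMock`, record the expected call, `ReplayAll`, then exercise the service — can be sketched with the stdlib `unittest.mock`, which verifies the same expectation; the `Driver` class here is a hypothetical stand-in, not Nova's:

```python
from unittest import mock


class Driver:
    """Hypothetical driver; the real one only works under libvirt."""

    def update_available_resource(self, ctxt, host):
        raise RuntimeError('real driver not available under test')


driver = Driver()
with mock.patch.object(driver, 'update_available_resource') as stub:
    # Code under test would normally trigger this call via serv.start().
    driver.update_available_resource(None, 'foo')

# The moral equivalent of mox's ReplayAll()/VerifyAll(): fail unless the
# recorded call happened exactly once with the expected arguments.
stub.assert_called_once_with(None, 'foo')
```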
=== modified file 'nova/tests/test_virt.py'
--- nova/tests/test_virt.py	2011-03-09 23:45:00 +0000
+++ nova/tests/test_virt.py	2011-03-10 06:27:59 +0000
@@ -14,21 +14,28 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import eventlet
+import mox
 import os
+import sys
 
-import eventlet
 from xml.etree.ElementTree import fromstring as xml_to_tree
 from xml.dom.minidom import parseString as xml_to_dom
 
 from nova import context
 from nova import db
+from nova import exception
 from nova import flags
 from nova import test
 from nova import utils
 from nova.api.ec2 import cloud
 from nova.auth import manager
+from nova.compute import manager as compute_manager
+from nova.compute import power_state
+from nova.db.sqlalchemy import models
 from nova.virt import libvirt_conn
 
+libvirt = None
 FLAGS = flags.FLAGS
 flags.DECLARE('instances_path', 'nova.compute.manager')
 
@@ -103,11 +110,28 @@
         libvirt_conn._late_load_cheetah()
         self.flags(fake_call=True)
2265 | 105 | self.manager = manager.AuthManager() | 112 | self.manager = manager.AuthManager() |
2266 | 113 | |||
2267 | 114 | try: | ||
2268 | 115 | pjs = self.manager.get_projects() | ||
2269 | 116 | pjs = [p for p in pjs if p.name == 'fake'] | ||
2270 | 117 | if 0 != len(pjs): | ||
2271 | 118 | self.manager.delete_project(pjs[0]) | ||
2272 | 119 | |||
2273 | 120 | users = self.manager.get_users() | ||
2274 | 121 | users = [u for u in users if u.name == 'fake'] | ||
2275 | 122 | if 0 != len(users): | ||
2276 | 123 | self.manager.delete_user(users[0]) | ||
2277 | 124 | except Exception, e: | ||
2278 | 125 | pass | ||
2279 | 126 | |||
2280 | 127 | users = self.manager.get_users() | ||
2281 | 106 | self.user = self.manager.create_user('fake', 'fake', 'fake', | 128 | self.user = self.manager.create_user('fake', 'fake', 'fake', |
2282 | 107 | admin=True) | 129 | admin=True) |
2283 | 108 | self.project = self.manager.create_project('fake', 'fake', 'fake') | 130 | self.project = self.manager.create_project('fake', 'fake', 'fake') |
2284 | 109 | self.network = utils.import_object(FLAGS.network_manager) | 131 | self.network = utils.import_object(FLAGS.network_manager) |
2285 | 132 | self.context = context.get_admin_context() | ||
2286 | 110 | FLAGS.instances_path = '' | 133 | FLAGS.instances_path = '' |
2287 | 134 | self.call_libvirt_dependant_setup = False | ||
2288 | 111 | 135 | ||
2289 | 112 | test_ip = '10.11.12.13' | 136 | test_ip = '10.11.12.13' |
2290 | 113 | test_instance = {'memory_kb': '1024000', | 137 | test_instance = {'memory_kb': '1024000', |
2291 | @@ -119,6 +143,58 @@ | |||
2292 | 119 | 'bridge': 'br101', | 143 | 'bridge': 'br101', |
2293 | 120 | 'instance_type': 'm1.small'} | 144 | 'instance_type': 'm1.small'} |
2294 | 121 | 145 | ||
2295 | 146 | def lazy_load_library_exists(self): | ||
2296 | 147 | """check if libvirt is available.""" | ||
2297 | 148 | # Try to import libvirt; if the import fails, skip the test. | ||
2298 | 149 | try: | ||
2299 | 150 | import libvirt | ||
2300 | 151 | import libxml2 | ||
2301 | 152 | except ImportError: | ||
2302 | 153 | return False | ||
2303 | 154 | global libvirt | ||
2304 | 155 | libvirt = __import__('libvirt') | ||
2305 | 156 | libvirt_conn.libvirt = __import__('libvirt') | ||
2306 | 157 | libvirt_conn.libxml2 = __import__('libxml2') | ||
2307 | 158 | return True | ||
2308 | 159 | |||
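The lazy-load helper above follows a common pattern: attempt an optional import and signal failure to the caller instead of raising, so tests that depend on the library can skip themselves. A small generic sketch of the same idea (the module names used here are just examples):

```python
import importlib


def lazy_load(module_name):
    """Return the imported module if available, else None.

    Callers can use the None result to skip work that depends on an
    optional library, as lazy_load_library_exists() does for libvirt.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None


# json is in the stdlib, so this load succeeds...
assert lazy_load('json') is not None
# ...while a missing optional dependency simply yields None.
assert lazy_load('no_such_library_xyz') is None
```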
2309 | 160 | def create_fake_libvirt_mock(self, **kwargs): | ||
2310 | 161 | """Define mocks for LibvirtConnection (libvirt itself is not used).""" | ||
2311 | 162 | |||
2312 | 163 | # A fake libvirt.virConnect | ||
2313 | 164 | class FakeLibvirtConnection(object): | ||
2314 | 165 | pass | ||
2315 | 166 | |||
2316 | 167 | # A fake libvirt_conn.IptablesFirewallDriver | ||
2317 | 168 | class FakeIptablesFirewallDriver(object): | ||
2318 | 169 | |||
2319 | 170 | def __init__(self, **kwargs): | ||
2320 | 171 | pass | ||
2321 | 172 | |||
2322 | 173 | def setattr(self, key, val): | ||
2323 | 174 | self.__setattr__(key, val) | ||
2324 | 175 | |||
2325 | 176 | # Creating mocks | ||
2326 | 177 | fake = FakeLibvirtConnection() | ||
2327 | 178 | fakeip = FakeIptablesFirewallDriver | ||
2328 | 179 | # Customizing above fake if necessary | ||
2329 | 180 | for key, val in kwargs.items(): | ||
2330 | 181 | fake.__setattr__(key, val) | ||
2331 | 182 | |||
2332 | 183 | # Inevitable mocks for libvirt_conn.LibvirtConnection | ||
2333 | 184 | self.mox.StubOutWithMock(libvirt_conn.utils, 'import_class') | ||
2334 | 185 | libvirt_conn.utils.import_class(mox.IgnoreArg()).AndReturn(fakeip) | ||
2335 | 186 | self.mox.StubOutWithMock(libvirt_conn.LibvirtConnection, '_conn') | ||
2336 | 187 | libvirt_conn.LibvirtConnection._conn = fake | ||
2337 | 188 | |||
2338 | 189 | def create_service(self, **kwargs): | ||
2339 | 190 | service_ref = {'host': kwargs.get('host', 'dummy'), | ||
2340 | 191 | 'binary': 'nova-compute', | ||
2341 | 192 | 'topic': 'compute', | ||
2342 | 193 | 'report_count': 0, | ||
2343 | 194 | 'availability_zone': 'zone'} | ||
2344 | 195 | |||
2345 | 196 | return db.service_create(context.get_admin_context(), service_ref) | ||
2346 | 197 | |||
2347 | 122 | def test_xml_and_uri_no_ramdisk_no_kernel(self): | 198 | def test_xml_and_uri_no_ramdisk_no_kernel(self): |
2348 | 123 | instance_data = dict(self.test_instance) | 199 | instance_data = dict(self.test_instance) |
2349 | 124 | self._check_xml_and_uri(instance_data, | 200 | self._check_xml_and_uri(instance_data, |
2350 | @@ -258,8 +334,8 @@ | |||
2351 | 258 | expected_result, | 334 | expected_result, |
2352 | 259 | '%s failed common check %d' % (xml, i)) | 335 | '%s failed common check %d' % (xml, i)) |
2353 | 260 | 336 | ||
2356 | 261 | # This test is supposed to make sure we don't override a specifically | 337 | # This test is supposed to make sure we don't |
2357 | 262 | # set uri | 338 | # override a specifically set uri |
2358 | 263 | # | 339 | # |
2359 | 264 | # Deliberately not just assigning this string to FLAGS.libvirt_uri and | 340 | # Deliberately not just assigning this string to FLAGS.libvirt_uri and |
2360 | 265 | # checking against that later on. This way we make sure the | 341 | # checking against that later on. This way we make sure the |
2361 | @@ -273,6 +349,150 @@ | |||
2362 | 273 | self.assertEquals(uri, testuri) | 349 | self.assertEquals(uri, testuri) |
2363 | 274 | db.instance_destroy(user_context, instance_ref['id']) | 350 | db.instance_destroy(user_context, instance_ref['id']) |
2364 | 275 | 351 | ||
2365 | 352 | def test_update_available_resource_works_correctly(self): | ||
2366 | 353 | """Confirm compute_node table is updated successfully.""" | ||
2367 | 354 | org_path = FLAGS.instances_path | ||
2368 | 355 | FLAGS.instances_path = '.' | ||
2369 | 356 | |||
2370 | 357 | # Prepare mocks | ||
2371 | 358 | def getVersion(): | ||
2372 | 359 | return 12003 | ||
2373 | 360 | |||
2374 | 361 | def getType(): | ||
2375 | 362 | return 'qemu' | ||
2376 | 363 | |||
2377 | 364 | def listDomainsID(): | ||
2378 | 365 | return [] | ||
2379 | 366 | |||
2380 | 367 | service_ref = self.create_service(host='dummy') | ||
2381 | 368 | self.create_fake_libvirt_mock(getVersion=getVersion, | ||
2382 | 369 | getType=getType, | ||
2383 | 370 | listDomainsID=listDomainsID) | ||
2384 | 371 | self.mox.StubOutWithMock(libvirt_conn.LibvirtConnection, | ||
2385 | 372 | 'get_cpu_info') | ||
2386 | 373 | libvirt_conn.LibvirtConnection.get_cpu_info().AndReturn('cpuinfo') | ||
2387 | 374 | |||
2388 | 375 | # Start test | ||
2389 | 376 | self.mox.ReplayAll() | ||
2390 | 377 | conn = libvirt_conn.LibvirtConnection(False) | ||
2391 | 378 | conn.update_available_resource(self.context, 'dummy') | ||
2392 | 379 | service_ref = db.service_get(self.context, service_ref['id']) | ||
2393 | 380 | compute_node = service_ref['compute_node'][0] | ||
2394 | 381 | |||
2395 | 382 | if sys.platform.upper() == 'LINUX2': | ||
2396 | 383 | self.assertTrue(compute_node['vcpus'] >= 0) | ||
2397 | 384 | self.assertTrue(compute_node['memory_mb'] > 0) | ||
2398 | 385 | self.assertTrue(compute_node['local_gb'] > 0) | ||
2399 | 386 | self.assertTrue(compute_node['vcpus_used'] == 0) | ||
2400 | 387 | self.assertTrue(compute_node['memory_mb_used'] > 0) | ||
2401 | 388 | self.assertTrue(compute_node['local_gb_used'] > 0) | ||
2402 | 389 | self.assertTrue(len(compute_node['hypervisor_type']) > 0) | ||
2403 | 390 | self.assertTrue(compute_node['hypervisor_version'] > 0) | ||
2404 | 391 | else: | ||
2405 | 392 | self.assertTrue(compute_node['vcpus'] >= 0) | ||
2406 | 393 | self.assertTrue(compute_node['memory_mb'] == 0) | ||
2407 | 394 | self.assertTrue(compute_node['local_gb'] > 0) | ||
2408 | 395 | self.assertTrue(compute_node['vcpus_used'] == 0) | ||
2409 | 396 | self.assertTrue(compute_node['memory_mb_used'] == 0) | ||
2410 | 397 | self.assertTrue(compute_node['local_gb_used'] > 0) | ||
2411 | 398 | self.assertTrue(len(compute_node['hypervisor_type']) > 0) | ||
2412 | 399 | self.assertTrue(compute_node['hypervisor_version'] > 0) | ||
2413 | 400 | |||
2414 | 401 | db.service_destroy(self.context, service_ref['id']) | ||
2415 | 402 | FLAGS.instances_path = org_path | ||
2416 | 403 | |||
2417 | 404 | def test_update_resource_info_no_compute_record_found(self): | ||
2418 | 405 | """Raise an exception if no record is found in the services table.""" | ||
2419 | 406 | org_path = FLAGS.instances_path | ||
2420 | 407 | FLAGS.instances_path = '.' | ||
2421 | 408 | self.create_fake_libvirt_mock() | ||
2422 | 409 | |||
2423 | 410 | self.mox.ReplayAll() | ||
2424 | 411 | conn = libvirt_conn.LibvirtConnection(False) | ||
2425 | 412 | self.assertRaises(exception.Invalid, | ||
2426 | 413 | conn.update_available_resource, | ||
2427 | 414 | self.context, 'dummy') | ||
2428 | 415 | |||
2429 | 416 | FLAGS.instances_path = org_path | ||
2430 | 417 | |||
2431 | 418 | def test_ensure_filtering_rules_for_instance_timeout(self): | ||
2432 | 419 | """ensure_filtering_rules_for_instance() finishes with a timeout.""" | ||
2433 | 420 | # Skip if non-libvirt environment | ||
2434 | 421 | if not self.lazy_load_library_exists(): | ||
2435 | 422 | return | ||
2436 | 423 | |||
2437 | 424 | # Preparing mocks | ||
2438 | 425 | def fake_none(self): | ||
2439 | 426 | return | ||
2440 | 427 | |||
2441 | 428 | def fake_raise(self): | ||
2442 | 429 | raise libvirt.libvirtError('ERR') | ||
2443 | 430 | |||
2444 | 431 | self.create_fake_libvirt_mock(nwfilterLookupByName=fake_raise) | ||
2445 | 432 | instance_ref = db.instance_create(self.context, self.test_instance) | ||
2446 | 433 | |||
2447 | 434 | # Start test | ||
2448 | 435 | self.mox.ReplayAll() | ||
2449 | 436 | try: | ||
2450 | 437 | conn = libvirt_conn.LibvirtConnection(False) | ||
2451 | 438 | conn.firewall_driver.setattr('setup_basic_filtering', fake_none) | ||
2452 | 439 | conn.firewall_driver.setattr('prepare_instance_filter', fake_none) | ||
2453 | 440 | conn.ensure_filtering_rules_for_instance(instance_ref) | ||
2454 | 441 | except exception.Error, e: | ||
2455 | 442 | c1 = (0 <= e.message.find('Timeout migrating for')) | ||
2456 | 443 | self.assertTrue(c1) | ||
2457 | 444 | |||
2458 | 445 | db.instance_destroy(self.context, instance_ref['id']) | ||
2459 | 446 | |||
2460 | 447 | def test_live_migration_raises_exception(self): | ||
2461 | 448 | """Confirms recover method is called when exceptions are raised.""" | ||
2462 | 449 | # Skip if non-libvirt environment | ||
2463 | 450 | if not self.lazy_load_library_exists(): | ||
2464 | 451 | return | ||
2465 | 452 | |||
2466 | 453 | # Preparing data | ||
2467 | 454 | self.compute = utils.import_object(FLAGS.compute_manager) | ||
2468 | 455 | instance_dict = {'host': 'fake', 'state': power_state.RUNNING, | ||
2469 | 456 | 'state_description': 'running'} | ||
2470 | 457 | instance_ref = db.instance_create(self.context, self.test_instance) | ||
2471 | 458 | instance_ref = db.instance_update(self.context, instance_ref['id'], | ||
2472 | 459 | instance_dict) | ||
2473 | 460 | vol_dict = {'status': 'migrating', 'size': 1} | ||
2474 | 461 | volume_ref = db.volume_create(self.context, vol_dict) | ||
2475 | 462 | db.volume_attached(self.context, volume_ref['id'], instance_ref['id'], | ||
2476 | 463 | '/dev/fake') | ||
2477 | 464 | |||
2478 | 465 | # Preparing mocks | ||
2479 | 466 | vdmock = self.mox.CreateMock(libvirt.virDomain) | ||
2480 | 467 | self.mox.StubOutWithMock(vdmock, "migrateToURI") | ||
2481 | 468 | vdmock.migrateToURI(FLAGS.live_migration_uri % 'dest', | ||
2482 | 469 | mox.IgnoreArg(), | ||
2483 | 470 | None, FLAGS.live_migration_bandwidth).\ | ||
2484 | 471 | AndRaise(libvirt.libvirtError('ERR')) | ||
2485 | 472 | |||
2486 | 473 | def fake_lookup(instance_name): | ||
2487 | 474 | if instance_name == instance_ref.name: | ||
2488 | 475 | return vdmock | ||
2489 | 476 | |||
2490 | 477 | self.create_fake_libvirt_mock(lookupByName=fake_lookup) | ||
2491 | 478 | |||
2492 | 479 | # Start test | ||
2493 | 480 | self.mox.ReplayAll() | ||
2494 | 481 | conn = libvirt_conn.LibvirtConnection(False) | ||
2495 | 482 | self.assertRaises(libvirt.libvirtError, | ||
2496 | 483 | conn._live_migration, | ||
2497 | 484 | self.context, instance_ref, 'dest', '', | ||
2498 | 485 | self.compute.recover_live_migration) | ||
2499 | 486 | |||
2500 | 487 | instance_ref = db.instance_get(self.context, instance_ref['id']) | ||
2501 | 488 | self.assertTrue(instance_ref['state_description'] == 'running') | ||
2502 | 489 | self.assertTrue(instance_ref['state'] == power_state.RUNNING) | ||
2503 | 490 | volume_ref = db.volume_get(self.context, volume_ref['id']) | ||
2504 | 491 | self.assertTrue(volume_ref['status'] == 'in-use') | ||
2505 | 492 | |||
2506 | 493 | db.volume_destroy(self.context, volume_ref['id']) | ||
2507 | 494 | db.instance_destroy(self.context, instance_ref['id']) | ||
2508 | 495 | |||
2509 | 276 | def tearDown(self): | 496 | def tearDown(self): |
2510 | 277 | self.manager.delete_project(self.project) | 497 | self.manager.delete_project(self.project) |
2511 | 278 | self.manager.delete_user(self.user) | 498 | self.manager.delete_user(self.user) |
2512 | 279 | 499 | ||
2513 | === modified file 'nova/tests/test_volume.py' | |||
2514 | --- nova/tests/test_volume.py 2011-03-07 01:25:01 +0000 | |||
2515 | +++ nova/tests/test_volume.py 2011-03-10 06:27:59 +0000 | |||
2516 | @@ -20,6 +20,8 @@ | |||
2517 | 20 | 20 | ||
2518 | 21 | """ | 21 | """ |
2519 | 22 | 22 | ||
2520 | 23 | import cStringIO | ||
2521 | 24 | |||
2522 | 23 | from nova import context | 25 | from nova import context |
2523 | 24 | from nova import exception | 26 | from nova import exception |
2524 | 25 | from nova import db | 27 | from nova import db |
2525 | @@ -173,3 +175,196 @@ | |||
2526 | 173 | # each of them having a different FLAG for storage_node | 175 | # each of them having a different FLAG for storage_node |
2527 | 174 | # This will allow us to test cross-node interactions | 176 | # This will allow us to test cross-node interactions |
2528 | 175 | pass | 177 | pass |
2529 | 178 | |||
2530 | 179 | |||
2531 | 180 | class DriverTestCase(test.TestCase): | ||
2532 | 181 | """Base Test class for Drivers.""" | ||
2533 | 182 | driver_name = "nova.volume.driver.FakeAOEDriver" | ||
2534 | 183 | |||
2535 | 184 | def setUp(self): | ||
2536 | 185 | super(DriverTestCase, self).setUp() | ||
2537 | 186 | self.flags(volume_driver=self.driver_name, | ||
2538 | 187 | logging_default_format_string="%(message)s") | ||
2539 | 188 | self.volume = utils.import_object(FLAGS.volume_manager) | ||
2540 | 189 | self.context = context.get_admin_context() | ||
2541 | 190 | self.output = "" | ||
2542 | 191 | |||
2543 | 192 | def _fake_execute(_command, *_args, **_kwargs): | ||
2544 | 193 | """Fake _execute.""" | ||
2545 | 194 | return self.output, None | ||
2546 | 195 | self.volume.driver._execute = _fake_execute | ||
2547 | 196 | self.volume.driver._sync_execute = _fake_execute | ||
2548 | 197 | |||
2549 | 198 | log = logging.getLogger() | ||
2550 | 199 | self.stream = cStringIO.StringIO() | ||
2551 | 200 | log.addHandler(logging.StreamHandler(self.stream)) | ||
2552 | 201 | |||
2553 | 202 | inst = {} | ||
2554 | 203 | self.instance_id = db.instance_create(self.context, inst)['id'] | ||
2555 | 204 | |||
2556 | 205 | def tearDown(self): | ||
2557 | 206 | super(DriverTestCase, self).tearDown() | ||
2558 | 207 | |||
2559 | 208 | def _attach_volume(self): | ||
2560 | 209 | """Attach volumes to an instance. This function also sets | ||
2561 | 210 | a fake log message.""" | ||
2562 | 211 | return [] | ||
2563 | 212 | |||
2564 | 213 | def _detach_volume(self, volume_id_list): | ||
2565 | 214 | """Detach volumes from an instance.""" | ||
2566 | 215 | for volume_id in volume_id_list: | ||
2567 | 216 | db.volume_detached(self.context, volume_id) | ||
2568 | 217 | self.volume.delete_volume(self.context, volume_id) | ||
2569 | 218 | |||
2570 | 219 | |||
2571 | 220 | class AOETestCase(DriverTestCase): | ||
2572 | 221 | """Test Case for AOEDriver""" | ||
2573 | 222 | driver_name = "nova.volume.driver.AOEDriver" | ||
2574 | 223 | |||
2575 | 224 | def setUp(self): | ||
2576 | 225 | super(AOETestCase, self).setUp() | ||
2577 | 226 | |||
2578 | 227 | def tearDown(self): | ||
2579 | 228 | super(AOETestCase, self).tearDown() | ||
2580 | 229 | |||
2581 | 230 | def _attach_volume(self): | ||
2582 | 231 | """Attach volumes to an instance. This function also sets | ||
2583 | 232 | a fake log message.""" | ||
2584 | 233 | volume_id_list = [] | ||
2585 | 234 | for index in xrange(3): | ||
2586 | 235 | vol = {} | ||
2587 | 236 | vol['size'] = 0 | ||
2588 | 237 | volume_id = db.volume_create(self.context, | ||
2589 | 238 | vol)['id'] | ||
2590 | 239 | self.volume.create_volume(self.context, volume_id) | ||
2591 | 240 | |||
2592 | 241 | # each volume has a different mountpoint | ||
2593 | 242 | mountpoint = "/dev/sd" + chr((ord('b') + index)) | ||
2594 | 243 | db.volume_attached(self.context, volume_id, self.instance_id, | ||
2595 | 244 | mountpoint) | ||
2596 | 245 | |||
2597 | 246 | (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context, | ||
2598 | 247 | volume_id) | ||
2599 | 248 | self.output += "%s %s eth0 /dev/nova-volumes/vol-foo auto run\n" \ | ||
2600 | 249 | % (shelf_id, blade_id) | ||
2601 | 250 | |||
2602 | 251 | volume_id_list.append(volume_id) | ||
2603 | 252 | |||
2604 | 253 | return volume_id_list | ||
2605 | 254 | |||
2606 | 255 | def test_check_for_export_with_no_volume(self): | ||
2607 | 256 | """No log message when no volume is attached to an instance.""" | ||
2608 | 257 | self.stream.truncate(0) | ||
2609 | 258 | self.volume.check_for_export(self.context, self.instance_id) | ||
2610 | 259 | self.assertEqual(self.stream.getvalue(), '') | ||
2611 | 260 | |||
2612 | 261 | def test_check_for_export_with_all_vblade_processes(self): | ||
2613 | 262 | """No log message when all the vblade processes are running.""" | ||
2614 | 263 | volume_id_list = self._attach_volume() | ||
2615 | 264 | |||
2616 | 265 | self.stream.truncate(0) | ||
2617 | 266 | self.volume.check_for_export(self.context, self.instance_id) | ||
2618 | 267 | self.assertEqual(self.stream.getvalue(), '') | ||
2619 | 268 | |||
2620 | 269 | self._detach_volume(volume_id_list) | ||
2621 | 270 | |||
2622 | 271 | def test_check_for_export_with_vblade_process_missing(self): | ||
2623 | 272 | """Output a warning message when some vblade processes aren't | ||
2624 | 273 | running.""" | ||
2625 | 274 | volume_id_list = self._attach_volume() | ||
2626 | 275 | |||
2627 | 276 | # the first vblade process isn't running | ||
2628 | 277 | self.output = self.output.replace("run", "down", 1) | ||
2629 | 278 | (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context, | ||
2630 | 279 | volume_id_list[0]) | ||
2631 | 280 | |||
2632 | 281 | msg_is_match = False | ||
2633 | 282 | self.stream.truncate(0) | ||
2634 | 283 | try: | ||
2635 | 284 | self.volume.check_for_export(self.context, self.instance_id) | ||
2636 | 285 | except exception.ProcessExecutionError, e: | ||
2637 | 286 | volume_id = volume_id_list[0] | ||
2638 | 287 | msg = _("Cannot confirm exported volume id:%(volume_id)s. " | ||
2639 | 288 | "vblade process for e%(shelf_id)s.%(blade_id)s " | ||
2640 | 289 | "isn't running.") % locals() | ||
2641 | 290 | |||
2642 | 291 | msg_is_match = (0 <= e.message.find(msg)) | ||
2643 | 292 | |||
2644 | 293 | self.assertTrue(msg_is_match) | ||
2645 | 294 | self._detach_volume(volume_id_list) | ||
2646 | 295 | |||
2647 | 296 | |||
2648 | 297 | class ISCSITestCase(DriverTestCase): | ||
2649 | 298 | """Test Case for ISCSIDriver""" | ||
2650 | 299 | driver_name = "nova.volume.driver.ISCSIDriver" | ||
2651 | 300 | |||
2652 | 301 | def setUp(self): | ||
2653 | 302 | super(ISCSITestCase, self).setUp() | ||
2654 | 303 | |||
2655 | 304 | def tearDown(self): | ||
2656 | 305 | super(ISCSITestCase, self).tearDown() | ||
2657 | 306 | |||
2658 | 307 | def _attach_volume(self): | ||
2659 | 308 | """Attach volumes to an instance. This function also sets | ||
2660 | 309 | a fake log message.""" | ||
2661 | 310 | volume_id_list = [] | ||
2662 | 311 | for index in xrange(3): | ||
2663 | 312 | vol = {} | ||
2664 | 313 | vol['size'] = 0 | ||
2665 | 314 | vol_ref = db.volume_create(self.context, vol) | ||
2666 | 315 | self.volume.create_volume(self.context, vol_ref['id']) | ||
2667 | 316 | vol_ref = db.volume_get(self.context, vol_ref['id']) | ||
2668 | 317 | |||
2669 | 318 | # each volume has a different mountpoint | ||
2670 | 319 | mountpoint = "/dev/sd" + chr((ord('b') + index)) | ||
2671 | 320 | db.volume_attached(self.context, vol_ref['id'], self.instance_id, | ||
2672 | 321 | mountpoint) | ||
2673 | 322 | volume_id_list.append(vol_ref['id']) | ||
2674 | 323 | |||
2675 | 324 | return volume_id_list | ||
2676 | 325 | |||
2677 | 326 | def test_check_for_export_with_no_volume(self): | ||
2678 | 327 | """No log message when no volume is attached to an instance.""" | ||
2679 | 328 | self.stream.truncate(0) | ||
2680 | 329 | self.volume.check_for_export(self.context, self.instance_id) | ||
2681 | 330 | self.assertEqual(self.stream.getvalue(), '') | ||
2682 | 331 | |||
2683 | 332 | def test_check_for_export_with_all_volume_exported(self): | ||
2684 | 333 | """No log message when all the volumes are exported by ietd.""" | ||
2685 | 334 | volume_id_list = self._attach_volume() | ||
2686 | 335 | |||
2687 | 336 | self.mox.StubOutWithMock(self.volume.driver, '_execute') | ||
2688 | 337 | for i in volume_id_list: | ||
2689 | 338 | tid = db.volume_get_iscsi_target_num(self.context, i) | ||
2690 | 339 | self.volume.driver._execute("sudo ietadm --op show --tid=%(tid)d" | ||
2691 | 340 | % locals()) | ||
2692 | 341 | |||
2693 | 342 | self.stream.truncate(0) | ||
2694 | 343 | self.mox.ReplayAll() | ||
2695 | 344 | self.volume.check_for_export(self.context, self.instance_id) | ||
2696 | 345 | self.assertEqual(self.stream.getvalue(), '') | ||
2697 | 346 | self.mox.UnsetStubs() | ||
2698 | 347 | |||
2699 | 348 | self._detach_volume(volume_id_list) | ||
2700 | 349 | |||
2701 | 350 | def test_check_for_export_with_some_volume_missing(self): | ||
2702 | 351 | """Output a warning message when some volumes are not recognized | ||
2703 | 352 | by ietd.""" | ||
2704 | 353 | volume_id_list = self._attach_volume() | ||
2705 | 354 | |||
2706 | 355 | # the first volume's iscsi target is not recognized by ietd | ||
2707 | 356 | tid = db.volume_get_iscsi_target_num(self.context, volume_id_list[0]) | ||
2708 | 357 | self.mox.StubOutWithMock(self.volume.driver, '_execute') | ||
2709 | 358 | self.volume.driver._execute("sudo ietadm --op show --tid=%(tid)d" | ||
2710 | 359 | % locals()).AndRaise(exception.ProcessExecutionError()) | ||
2711 | 360 | |||
2712 | 361 | self.mox.ReplayAll() | ||
2713 | 362 | self.assertRaises(exception.ProcessExecutionError, | ||
2714 | 363 | self.volume.check_for_export, | ||
2715 | 364 | self.context, | ||
2716 | 365 | self.instance_id) | ||
2717 | 366 | msg = _("Cannot confirm exported volume id:%s.") % volume_id_list[0] | ||
2718 | 367 | self.assertTrue(0 <= self.stream.getvalue().find(msg)) | ||
2719 | 368 | self.mox.UnsetStubs() | ||
2720 | 369 | |||
2721 | 370 | self._detach_volume(volume_id_list) | ||
2722 | 176 | 371 | ||
2723 | === added file 'nova/virt/cpuinfo.xml.template' | |||
2724 | --- nova/virt/cpuinfo.xml.template 1970-01-01 00:00:00 +0000 | |||
2725 | +++ nova/virt/cpuinfo.xml.template 2011-03-10 06:27:59 +0000 | |||
2726 | @@ -0,0 +1,9 @@ | |||
2727 | 1 | <cpu> | ||
2728 | 2 | <arch>$arch</arch> | ||
2729 | 3 | <model>$model</model> | ||
2730 | 4 | <vendor>$vendor</vendor> | ||
2731 | 5 | <topology sockets="$topology.sockets" cores="$topology.cores" threads="$topology.threads"/> | ||
2732 | 6 | #for $var in $features | ||
2733 | 7 | <features name="$var" /> | ||
2734 | 8 | #end for | ||
2735 | 9 | </cpu> | ||
2736 | 0 | 10 | ||
2737 | === modified file 'nova/virt/fake.py' | |||
2738 | --- nova/virt/fake.py 2011-02-28 17:39:23 +0000 | |||
2739 | +++ nova/virt/fake.py 2011-03-10 06:27:59 +0000 | |||
2740 | @@ -407,6 +407,27 @@ | |||
2741 | 407 | """ | 407 | """ |
2742 | 408 | return True | 408 | return True |
2743 | 409 | 409 | ||
2744 | 410 | def update_available_resource(self, ctxt, host): | ||
2745 | 411 | """This method is supported only by libvirt.""" | ||
2746 | 412 | return | ||
2747 | 413 | |||
2748 | 414 | def compare_cpu(self, xml): | ||
2749 | 415 | """This method is supported only by libvirt.""" | ||
2750 | 416 | raise NotImplementedError('This method is supported only by libvirt.') | ||
2751 | 417 | |||
2752 | 418 | def ensure_filtering_rules_for_instance(self, instance_ref): | ||
2753 | 419 | """This method is supported only by libvirt.""" | ||
2754 | 420 | raise NotImplementedError('This method is supported only by libvirt.') | ||
2755 | 421 | |||
2756 | 422 | def live_migration(self, context, instance_ref, dest, | ||
2757 | 423 | post_method, recover_method): | ||
2758 | 424 | """This method is supported only by libvirt.""" | ||
2759 | 425 | return | ||
2760 | 426 | |||
2761 | 427 | def unfilter_instance(self, instance_ref): | ||
2762 | 428 | """This method is supported only by libvirt.""" | ||
2763 | 429 | raise NotImplementedError('This method is supported only by libvirt.') | ||
2764 | 430 | |||
2765 | 410 | 431 | ||
2766 | 411 | class FakeInstance(object): | 432 | class FakeInstance(object): |
2767 | 412 | 433 | ||
2768 | 413 | 434 | ||
2769 | === modified file 'nova/virt/libvirt_conn.py' | |||
2770 | --- nova/virt/libvirt_conn.py 2011-03-10 04:42:11 +0000 | |||
2771 | +++ nova/virt/libvirt_conn.py 2011-03-10 06:27:59 +0000 | |||
2772 | @@ -36,10 +36,13 @@ | |||
2773 | 36 | 36 | ||
2774 | 37 | """ | 37 | """ |
2775 | 38 | 38 | ||
2776 | 39 | import multiprocessing | ||
2777 | 39 | import os | 40 | import os |
2778 | 40 | import shutil | 41 | import shutil |
2779 | 42 | import sys | ||
2780 | 41 | import random | 43 | import random |
2781 | 42 | import subprocess | 44 | import subprocess |
2782 | 45 | import time | ||
2783 | 43 | import uuid | 46 | import uuid |
2784 | 44 | from xml.dom import minidom | 47 | from xml.dom import minidom |
2785 | 45 | 48 | ||
2786 | @@ -70,6 +73,7 @@ | |||
2787 | 70 | LOG = logging.getLogger('nova.virt.libvirt_conn') | 73 | LOG = logging.getLogger('nova.virt.libvirt_conn') |
2788 | 71 | 74 | ||
2789 | 72 | FLAGS = flags.FLAGS | 75 | FLAGS = flags.FLAGS |
2790 | 76 | flags.DECLARE('live_migration_retry_count', 'nova.compute.manager') | ||
2791 | 73 | # TODO(vish): These flags should probably go into a shared location | 77 | # TODO(vish): These flags should probably go into a shared location |
2792 | 74 | flags.DEFINE_string('rescue_image_id', 'ami-rescue', 'Rescue ami image') | 78 | flags.DEFINE_string('rescue_image_id', 'ami-rescue', 'Rescue ami image') |
2793 | 75 | flags.DEFINE_string('rescue_kernel_id', 'aki-rescue', 'Rescue aki image') | 79 | flags.DEFINE_string('rescue_kernel_id', 'aki-rescue', 'Rescue aki image') |
2794 | @@ -100,6 +104,17 @@ | |||
2795 | 100 | flags.DEFINE_string('firewall_driver', | 104 | flags.DEFINE_string('firewall_driver', |
2796 | 101 | 'nova.virt.libvirt_conn.IptablesFirewallDriver', | 105 | 'nova.virt.libvirt_conn.IptablesFirewallDriver', |
2797 | 102 | 'Firewall driver (defaults to iptables)') | 106 | 'Firewall driver (defaults to iptables)') |
2798 | 107 | flags.DEFINE_string('cpuinfo_xml_template', | ||
2799 | 108 | utils.abspath('virt/cpuinfo.xml.template'), | ||
2800 | 109 | 'CpuInfo XML Template (currently used only by live migration)') | ||
2801 | 110 | flags.DEFINE_string('live_migration_uri', | ||
2802 | 111 | "qemu+tcp://%s/system", | ||
2803 | 112 | 'Define protocol used by live_migration feature') | ||
2804 | 113 | flags.DEFINE_string('live_migration_flag', | ||
2805 | 114 | "VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER", | ||
2806 | 115 | 'Define live migration behavior.') | ||
2807 | 116 | flags.DEFINE_integer('live_migration_bandwidth', 0, | ||
2808 | 117 | 'Bandwidth limit used by live migration') | ||
2809 | 103 | 118 | ||
2810 | 104 | 119 | ||
2811 | 105 | def get_connection(read_only): | 120 | def get_connection(read_only): |
2812 | @@ -146,6 +161,7 @@ | |||
2813 | 146 | self.libvirt_uri = self.get_uri() | 161 | self.libvirt_uri = self.get_uri() |
2814 | 147 | 162 | ||
2815 | 148 | self.libvirt_xml = open(FLAGS.libvirt_xml_template).read() | 163 | self.libvirt_xml = open(FLAGS.libvirt_xml_template).read() |
2816 | 164 | self.cpuinfo_xml = open(FLAGS.cpuinfo_xml_template).read() | ||
2817 | 149 | self._wrapped_conn = None | 165 | self._wrapped_conn = None |
2818 | 150 | self.read_only = read_only | 166 | self.read_only = read_only |
2819 | 151 | 167 | ||
2820 | @@ -851,6 +867,158 @@ | |||
2821 | 851 | 867 | ||
2822 | 852 | return interfaces | 868 | return interfaces |
2823 | 853 | 869 | ||
2824 | 870 | def get_vcpu_total(self): | ||
2825 | 871 | """Get the number of vcpus on the physical host. | ||
2826 | 872 | |||
2827 | 873 | :returns: the number of cpu cores. | ||
2828 | 874 | |||
2829 | 875 | """ | ||
2830 | 876 | |||
2831 | 877 | # On certain platforms, this will raise a NotImplementedError. | ||
2832 | 878 | try: | ||
2833 | 879 | return multiprocessing.cpu_count() | ||
2834 | 880 | except NotImplementedError: | ||
2835 | 881 | LOG.warn(_("Cannot get the number of cpus, because this " | ||
2836 | 882 | "function is not implemented on this platform. " | ||
2837 | 883 | "This error can be safely ignored for now.")) | ||
2838 | 884 | return 0 | ||
2839 | 885 | |||
2840 | 886 | def get_memory_mb_total(self): | ||
2841 | 887 | """Get the total memory size (MB) of the physical host. | ||
2842 | 888 | |||
2843 | 889 | :returns: the total amount of memory(MB). | ||
2844 | 890 | |||
2845 | 891 | """ | ||
2846 | 892 | |||
2847 | 893 | if sys.platform.upper() != 'LINUX2': | ||
2848 | 894 | return 0 | ||
2849 | 895 | |||
2850 | 896 | meminfo = open('/proc/meminfo').read().split() | ||
2851 | 897 | idx = meminfo.index('MemTotal:') | ||
2852 | 898 | # transforming kb to mb. | ||
2853 | 899 | return int(meminfo[idx + 1]) / 1024 | ||
2854 | 900 | |||
2855 | 901 | def get_local_gb_total(self): | ||
2856 | 902 | """Get the total hdd size(GB) of physical computer. | ||
2857 | 903 | |||
2858 | 904 | :returns: | ||
2859 | 905 | The total amount of HDD(GB). | ||
2860 | 906 | Note that this value shows a partition where | ||
2861 | 907 | NOVA-INST-DIR/instances mounts. | ||
2862 | 908 | |||
2863 | 909 | """ | ||
2864 | 910 | |||
2865 | 911 | hddinfo = os.statvfs(FLAGS.instances_path) | ||
2866 | 912 | return hddinfo.f_frsize * hddinfo.f_blocks / 1024 / 1024 / 1024 | ||
2867 | 913 | |||
2868 | 914 | def get_vcpu_used(self): | ||
2869 | 915 | """ Get vcpu usage number of physical computer. | ||
2870 | 916 | |||
2871 | 917 | :returns: The total number of vcpu that currently used. | ||
2872 | 918 | |||
2873 | 919 | """ | ||
2874 | 920 | |||
2875 | 921 | total = 0 | ||
2876 | 922 | for dom_id in self._conn.listDomainsID(): | ||
2877 | 923 | dom = self._conn.lookupByID(dom_id) | ||
2878 | 924 | total += len(dom.vcpus()[1]) | ||
2879 | 925 | return total | ||
2880 | 926 | |||
2881 | 927 | def get_memory_mb_used(self): | ||
2882 | 928 | """Get the used memory size (MB) of physical computer. | ||
2883 | 929 | |||
2884 | 930 | :returns: the total usage of memory(MB). | ||
2885 | 931 | |||
2886 | 932 | """ | ||
2887 | 933 | |||
2888 | 934 | if sys.platform.upper() != 'LINUX2': | ||
2889 | 935 | return 0 | ||
2890 | 936 | |||
2891 | 937 | m = open('/proc/meminfo').read().split() | ||
2892 | 938 | idx1 = m.index('MemFree:') | ||
2893 | 939 | idx2 = m.index('Buffers:') | ||
2894 | 940 | idx3 = m.index('Cached:') | ||
2895 | 941 | avail = (int(m[idx1 + 1]) + int(m[idx2 + 1]) + int(m[idx3 + 1])) / 1024 | ||
2896 | 942 | return self.get_memory_mb_total() - avail | ||
2897 | 943 | |||
2898 | 944 | def get_local_gb_used(self): | ||
2899 | 945 | """Get the used hdd size (GB) of physical computer. | ||
2900 | 946 | |||
2901 | 947 | :returns: | ||
2902 | 948 | The total usage of HDD(GB). | ||
2903 | 949 | Note that this value shows a partition where | ||
2904 | 950 | NOVA-INST-DIR/instances mounts. | ||
2905 | 951 | |||
2906 | 952 | """ | ||
2907 | 953 | |||
2908 | 954 | hddinfo = os.statvfs(FLAGS.instances_path) | ||
2909 | 955 | avail = hddinfo.f_frsize * hddinfo.f_bavail / 1024 / 1024 / 1024 | ||
2910 | 956 | return self.get_local_gb_total() - avail | ||
2911 | 957 | |||
2912 | 958 | def get_hypervisor_type(self): | ||
2913 | 959 | """Get hypervisor type. | ||
2914 | 960 | |||
2915 | 961 | :returns: hypervisor type (ex. qemu) | ||
2916 | 962 | |||
2917 | 963 | """ | ||
2918 | 964 | |||
2919 | 965 | return self._conn.getType() | ||
2920 | 966 | |||
2921 | 967 | def get_hypervisor_version(self): | ||
2922 | 968 | """Get hypervisor version. | ||
2923 | 969 | |||
2924 | 970 | :returns: hypervisor version (ex. 12003) | ||
2925 | 971 | |||
2926 | 972 | """ | ||
2927 | 973 | |||
2928 | 974 | return self._conn.getVersion() | ||
2929 | 975 | |||
2930 | 976 | def get_cpu_info(self): | ||
2931 | 977 | """Get cpuinfo information. | ||
2932 | 978 | |||
2933 | 979 | Obtains cpu feature from virConnect.getCapabilities, | ||
2934 | 980 | and returns as a json string. | ||
2935 | 981 | |||
2936 | 982 | :return: see above description | ||
2937 | 983 | |||
2938 | 984 | """ | ||
2939 | 985 | |||
2940 | 986 | xml = self._conn.getCapabilities() | ||
2941 | 987 | xml = libxml2.parseDoc(xml) | ||
2942 | 988 | nodes = xml.xpathEval('//cpu') | ||
2943 | 989 | if len(nodes) != 1: | ||
2944 | 990 | raise exception.Invalid(_("Invalid xml. '<cpu>' must be 1," | ||
2945 | 991 | "but %d\n") % len(nodes) | ||
2946 | 992 | + xml.serialize()) | ||
2947 | 993 | |||
2948 | 994 | cpu_info = dict() | ||
2949 | 995 | cpu_info['arch'] = xml.xpathEval('//cpu/arch')[0].getContent() | ||
2950 | 996 | cpu_info['model'] = xml.xpathEval('//cpu/model')[0].getContent() | ||
2951 | 997 | cpu_info['vendor'] = xml.xpathEval('//cpu/vendor')[0].getContent() | ||
2952 | 998 | |||
2953 | 999 | topology_node = xml.xpathEval('//cpu/topology')[0].get_properties() | ||
2954 | 1000 | topology = dict() | ||
2955 | 1001 | while topology_node != None: | ||
2956 | 1002 | name = topology_node.get_name() | ||
2957 | 1003 | topology[name] = topology_node.getContent() | ||
2958 | 1004 | topology_node = topology_node.get_next() | ||
2959 | 1005 | |||
2960 | 1006 | keys = ['cores', 'sockets', 'threads'] | ||
2961 | 1007 | tkeys = topology.keys() | ||
2962 | 1008 | if list(set(tkeys)) != list(set(keys)): | ||
2963 | 1009 | ks = ', '.join(keys) | ||
2964 | 1010 | raise exception.Invalid(_("Invalid xml: topology(%(topology)s) " | ||
2965 | 1011 | "must have %(ks)s") % locals()) | ||
2966 | 1012 | |||
2967 | 1013 | feature_nodes = xml.xpathEval('//cpu/feature') | ||
2968 | 1014 | features = list() | ||
2969 | 1015 | for nodes in feature_nodes: | ||
2970 | 1016 | features.append(nodes.get_properties().getContent()) | ||
2971 | 1017 | |||
2972 | 1018 | cpu_info['topology'] = topology | ||
2973 | 1019 | cpu_info['features'] = features | ||
2974 | 1020 | return utils.dumps(cpu_info) | ||
2975 | 1021 | |||
2976 | 854 | def block_stats(self, instance_name, disk): | 1022 | def block_stats(self, instance_name, disk): |
2977 | 855 | """ | 1023 | """ |
2978 | 856 | Note that this function takes an instance name, not an Instance, so | 1024 | Note that this function takes an instance name, not an Instance, so |
2979 | @@ -881,6 +1049,207 @@ | |||
2980 | 881 | def refresh_security_group_members(self, security_group_id): | 1049 | def refresh_security_group_members(self, security_group_id): |
2981 | 882 | self.firewall_driver.refresh_security_group_members(security_group_id) | 1050 | self.firewall_driver.refresh_security_group_members(security_group_id) |
2982 | 883 | 1051 | ||
2983 | 1052 | def update_available_resource(self, ctxt, host): | ||
2984 | 1053 | """Updates compute manager resource info on ComputeNode table. | ||
2985 | 1054 | |||
2986 | 1055 | This method is called when nova-compute launches, and | ||
2987 | 1056 | whenever admin executes "nova-manage service update_resource". | ||
2988 | 1057 | |||
2989 | 1058 | :param ctxt: security context | ||
2990 | 1059 | :param host: hostname that compute manager is currently running | ||
2991 | 1060 | |||
2992 | 1061 | """ | ||
2993 | 1062 | |||
2994 | 1063 | try: | ||
2995 | 1064 | service_ref = db.service_get_all_compute_by_host(ctxt, host)[0] | ||
2996 | 1065 | except exception.NotFound: | ||
2997 | 1066 | raise exception.Invalid(_("Cannot update compute manager " | ||
2998 | 1067 | "specific info, because no service " | ||
2999 | 1068 | "record was found.")) | ||
3000 | 1069 | |||
3001 | 1070 | # Updating host information | ||
3002 | 1071 | dic = {'vcpus': self.get_vcpu_total(), | ||
3003 | 1072 | 'memory_mb': self.get_memory_mb_total(), | ||
3004 | 1073 | 'local_gb': self.get_local_gb_total(), | ||
3005 | 1074 | 'vcpus_used': self.get_vcpu_used(), | ||
3006 | 1075 | 'memory_mb_used': self.get_memory_mb_used(), | ||
3007 | 1076 | 'local_gb_used': self.get_local_gb_used(), | ||
3008 | 1077 | 'hypervisor_type': self.get_hypervisor_type(), | ||
3009 | 1078 | 'hypervisor_version': self.get_hypervisor_version(), | ||
3010 | 1079 | 'cpu_info': self.get_cpu_info()} | ||
3011 | 1080 | |||
3012 | 1081 | compute_node_ref = service_ref['compute_node'] | ||
3013 | 1082 | if not compute_node_ref: | ||
3014 | 1083 | LOG.info(_('Compute_service record created for %s ') % host) | ||
3015 | 1084 | dic['service_id'] = service_ref['id'] | ||
3016 | 1085 | db.compute_node_create(ctxt, dic) | ||
3017 | 1086 | else: | ||
3018 | 1087 | LOG.info(_('Compute_service record updated for %s ') % host) | ||
3019 | 1088 | db.compute_node_update(ctxt, compute_node_ref[0]['id'], dic) | ||
3020 | 1089 | |||
3021 | 1090 | def compare_cpu(self, cpu_info): | ||
3022 | 1091 | """Checks the host cpu is compatible to a cpu given by xml. | ||
3023 | 1092 | |||
3024 | 1093 | "xml" must be a part of libvirt.openReadonly().getCapabilities(). | ||
3025 | 1094 | return values follows by virCPUCompareResult. | ||
3026 | 1095 | if 0 > return value, do live migration. | ||
3027 | 1096 | 'http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult' | ||
3028 | 1097 | |||
3029 | 1098 | :param cpu_info: json string that shows cpu feature(see get_cpu_info()) | ||
3030 | 1099 | :returns: | ||
3031 | 1100 | None. if given cpu info is not compatible to this server, | ||
3032 | 1101 | raise exception. | ||
3033 | 1102 | |||
3034 | 1103 | """ | ||
3035 | 1104 | |||
3036 | 1105 | LOG.info(_('Instance launched has CPU info:\n%s') % cpu_info) | ||
3037 | 1106 | dic = utils.loads(cpu_info) | ||
3038 | 1107 | xml = str(Template(self.cpuinfo_xml, searchList=dic)) | ||
3039 | 1108 | LOG.info(_('to xml...\n:%s ' % xml)) | ||
3040 | 1109 | |||
3041 | 1110 | u = "http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult" | ||
3042 | 1111 | m = _("CPU doesn't have compatibility.\n\n%(ret)s\n\nRefer to %(u)s") | ||
3043 | 1112 | # unknown character exists in xml, then libvirt complains | ||
3044 | 1113 | try: | ||
3045 | 1114 | ret = self._conn.compareCPU(xml, 0) | ||
3046 | 1115 | except libvirt.libvirtError, e: | ||
3047 | 1116 | ret = e.message | ||
3048 | 1117 | LOG.error(m % locals()) | ||
3049 | 1118 | raise | ||
3050 | 1119 | |||
3051 | 1120 | if ret <= 0: | ||
3052 | 1121 | raise exception.Invalid(m % locals()) | ||
3053 | 1122 | |||
3054 | 1123 | return | ||
3055 | 1124 | |||
3056 | 1125 | def ensure_filtering_rules_for_instance(self, instance_ref): | ||
3057 | 1126 | """Setting up filtering rules and waiting for its completion. | ||
3058 | 1127 | |||
3059 | 1128 | To migrate an instance, filtering rules to hypervisors | ||
3060 | 1129 | and firewalls are inevitable on destination host. | ||
3061 | 1130 | (Waiting only for filtering rules to hypervisor, | ||
3062 | 1131 | since filtering rules to firewall rules can be set faster). | ||
3063 | 1132 | |||
3064 | 1133 | Concretely, the below method must be called. | ||
3065 | 1134 | - setup_basic_filtering (for nova-basic, etc.) | ||
3066 | 1135 | - prepare_instance_filter(for nova-instance-instance-xxx, etc.) | ||
3067 | 1136 | |||
3068 | 1137 | to_xml may have to be called since it defines PROJNET, PROJMASK. | ||
3069 | 1138 | but libvirt migrates those value through migrateToURI(), | ||
3070 | 1139 | so , no need to be called. | ||
3071 | 1140 | |||
3072 | 1141 | Don't use thread for this method since migration should | ||
3073 | 1142 | not be started when setting-up filtering rules operations | ||
3074 | 1143 | are not completed. | ||
3075 | 1144 | |||
3076 | 1145 | :params instance_ref: nova.db.sqlalchemy.models.Instance object | ||
3077 | 1146 | |||
3078 | 1147 | """ | ||
3079 | 1148 | |||
3080 | 1149 | # If any instances never launch at destination host, | ||
3081 | 1150 | # basic-filtering must be set here. | ||
3082 | 1151 | self.firewall_driver.setup_basic_filtering(instance_ref) | ||
3083 | 1152 | # setting up nova-instance-instance-xx mainly. | ||
3084 | 1153 | self.firewall_driver.prepare_instance_filter(instance_ref) | ||
3085 | 1154 | |||
3086 | 1155 | # wait for completion | ||
3087 | 1156 | timeout_count = range(FLAGS.live_migration_retry_count) | ||
3088 | 1157 | while timeout_count: | ||
3089 | 1158 | try: | ||
3090 | 1159 | filter_name = 'nova-instance-%s' % instance_ref.name | ||
3091 | 1160 | self._conn.nwfilterLookupByName(filter_name) | ||
3092 | 1161 | break | ||
3093 | 1162 | except libvirt.libvirtError: | ||
3094 | 1163 | timeout_count.pop() | ||
3095 | 1164 | if len(timeout_count) == 0: | ||
3096 | 1165 | ec2_id = instance_ref['hostname'] | ||
3097 | 1166 | iname = instance_ref.name | ||
3098 | 1167 | msg = _('Timeout migrating for %(ec2_id)s(%(iname)s)') | ||
3099 | 1168 | raise exception.Error(msg % locals()) | ||
3100 | 1169 | time.sleep(1) | ||
3101 | 1170 | |||
3102 | 1171 | def live_migration(self, ctxt, instance_ref, dest, | ||
3103 | 1172 | post_method, recover_method): | ||
3104 | 1173 | """Spawning live_migration operation for distributing high-load. | ||
3105 | 1174 | |||
3106 | 1175 | :params ctxt: security context | ||
3107 | 1176 | :params instance_ref: | ||
3108 | 1177 | nova.db.sqlalchemy.models.Instance object | ||
3109 | 1178 | instance object that is migrated. | ||
3110 | 1179 | :params dest: destination host | ||
3111 | 1180 | :params post_method: | ||
3112 | 1181 | post operation method. | ||
3113 | 1182 | expected nova.compute.manager.post_live_migration. | ||
3114 | 1183 | :params recover_method: | ||
3115 | 1184 | recovery method when any exception occurs. | ||
3116 | 1185 | expected nova.compute.manager.recover_live_migration. | ||
3117 | 1186 | |||
3118 | 1187 | """ | ||
3119 | 1188 | |||
3120 | 1189 | greenthread.spawn(self._live_migration, ctxt, instance_ref, dest, | ||
3121 | 1190 | post_method, recover_method) | ||
3122 | 1191 | |||
3123 | 1192 | def _live_migration(self, ctxt, instance_ref, dest, | ||
3124 | 1193 | post_method, recover_method): | ||
3125 | 1194 | """Do live migration. | ||
3126 | 1195 | |||
3127 | 1196 | :params ctxt: security context | ||
3128 | 1197 | :params instance_ref: | ||
3129 | 1198 | nova.db.sqlalchemy.models.Instance object | ||
3130 | 1199 | instance object that is migrated. | ||
3131 | 1200 | :params dest: destination host | ||
3132 | 1201 | :params post_method: | ||
3133 | 1202 | post operation method. | ||
3134 | 1203 | expected nova.compute.manager.post_live_migration. | ||
3135 | 1204 | :params recover_method: | ||
3136 | 1205 | recovery method when any exception occurs. | ||
3137 | 1206 | expected nova.compute.manager.recover_live_migration. | ||
3138 | 1207 | |||
3139 | 1208 | """ | ||
3140 | 1209 | |||
3141 | 1210 | # Do live migration. | ||
3142 | 1211 | try: | ||
3143 | 1212 | flaglist = FLAGS.live_migration_flag.split(',') | ||
3144 | 1213 | flagvals = [getattr(libvirt, x.strip()) for x in flaglist] | ||
3145 | 1214 | logical_sum = reduce(lambda x, y: x | y, flagvals) | ||
3146 | 1215 | |||
3147 | 1216 | if self.read_only: | ||
3148 | 1217 | tmpconn = self._connect(self.libvirt_uri, False) | ||
3149 | 1218 | dom = tmpconn.lookupByName(instance_ref.name) | ||
3150 | 1219 | dom.migrateToURI(FLAGS.live_migration_uri % dest, | ||
3151 | 1220 | logical_sum, | ||
3152 | 1221 | None, | ||
3153 | 1222 | FLAGS.live_migration_bandwidth) | ||
3154 | 1223 | tmpconn.close() | ||
3155 | 1224 | else: | ||
3156 | 1225 | dom = self._conn.lookupByName(instance_ref.name) | ||
3157 | 1226 | dom.migrateToURI(FLAGS.live_migration_uri % dest, | ||
3158 | 1227 | logical_sum, | ||
3159 | 1228 | None, | ||
3160 | 1229 | FLAGS.live_migration_bandwidth) | ||
3161 | 1230 | |||
3162 | 1231 | except Exception: | ||
3163 | 1232 | recover_method(ctxt, instance_ref) | ||
3164 | 1233 | raise | ||
3165 | 1234 | |||
3166 | 1235 | # Waiting for completion of live_migration. | ||
3167 | 1236 | timer = utils.LoopingCall(f=None) | ||
3168 | 1237 | |||
3169 | 1238 | def wait_for_live_migration(): | ||
3170 | 1239 | """waiting for live migration completion""" | ||
3171 | 1240 | try: | ||
3172 | 1241 | self.get_info(instance_ref.name)['state'] | ||
3173 | 1242 | except exception.NotFound: | ||
3174 | 1243 | timer.stop() | ||
3175 | 1244 | post_method(ctxt, instance_ref, dest) | ||
3176 | 1245 | |||
3177 | 1246 | timer.f = wait_for_live_migration | ||
3178 | 1247 | timer.start(interval=0.5, now=True) | ||
3179 | 1248 | |||
3180 | 1249 | def unfilter_instance(self, instance_ref): | ||
3181 | 1250 | """See comments of same method in firewall_driver.""" | ||
3182 | 1251 | self.firewall_driver.unfilter_instance(instance_ref) | ||
3183 | 1252 | |||
3184 | 884 | 1253 | ||
3185 | 885 | class FirewallDriver(object): | 1254 | class FirewallDriver(object): |
3186 | 886 | def prepare_instance_filter(self, instance): | 1255 | def prepare_instance_filter(self, instance): |
3187 | 887 | 1256 | ||
3188 | === modified file 'nova/virt/xenapi_conn.py' | |||
3189 | --- nova/virt/xenapi_conn.py 2011-03-07 23:51:20 +0000 | |||
3190 | +++ nova/virt/xenapi_conn.py 2011-03-10 06:27:59 +0000 | |||
3191 | @@ -263,6 +263,27 @@ | |||
3192 | 263 | 'username': FLAGS.xenapi_connection_username, | 263 | 'username': FLAGS.xenapi_connection_username, |
3193 | 264 | 'password': FLAGS.xenapi_connection_password} | 264 | 'password': FLAGS.xenapi_connection_password} |
3194 | 265 | 265 | ||
3195 | 266 | def update_available_resource(self, ctxt, host): | ||
3196 | 267 | """This method is supported only by libvirt.""" | ||
3197 | 268 | return | ||
3198 | 269 | |||
3199 | 270 | def compare_cpu(self, xml): | ||
3200 | 271 | """This method is supported only by libvirt.""" | ||
3201 | 272 | raise NotImplementedError('This method is supported only by libvirt.') | ||
3202 | 273 | |||
3203 | 274 | def ensure_filtering_rules_for_instance(self, instance_ref): | ||
3204 | 275 | """This method is supported only libvirt.""" | ||
3205 | 276 | return | ||
3206 | 277 | |||
3207 | 278 | def live_migration(self, context, instance_ref, dest, | ||
3208 | 279 | post_method, recover_method): | ||
3209 | 280 | """This method is supported only by libvirt.""" | ||
3210 | 281 | return | ||
3211 | 282 | |||
3212 | 283 | def unfilter_instance(self, instance_ref): | ||
3213 | 284 | """This method is supported only by libvirt.""" | ||
3214 | 285 | raise NotImplementedError('This method is supported only by libvirt.') | ||
3215 | 286 | |||
3216 | 266 | 287 | ||
3217 | 267 | class XenAPISession(object): | 288 | class XenAPISession(object): |
3218 | 268 | """The session to invoke XenAPI SDK calls""" | 289 | """The session to invoke XenAPI SDK calls""" |
3219 | 269 | 290 | ||
3220 | === modified file 'nova/volume/driver.py' | |||
3221 | --- nova/volume/driver.py 2011-03-09 20:33:20 +0000 | |||
3222 | +++ nova/volume/driver.py 2011-03-10 06:27:59 +0000 | |||
3223 | @@ -143,6 +143,10 @@ | |||
3224 | 143 | """Undiscover volume on a remote host.""" | 143 | """Undiscover volume on a remote host.""" |
3225 | 144 | raise NotImplementedError() | 144 | raise NotImplementedError() |
3226 | 145 | 145 | ||
3227 | 146 | def check_for_export(self, context, volume_id): | ||
3228 | 147 | """Make sure volume is exported.""" | ||
3229 | 148 | raise NotImplementedError() | ||
3230 | 149 | |||
3231 | 146 | 150 | ||
3232 | 147 | class AOEDriver(VolumeDriver): | 151 | class AOEDriver(VolumeDriver): |
3233 | 148 | """Implements AOE specific volume commands.""" | 152 | """Implements AOE specific volume commands.""" |
3234 | @@ -198,15 +202,45 @@ | |||
3235 | 198 | self._try_execute('sudo', 'vblade-persist', 'destroy', | 202 | self._try_execute('sudo', 'vblade-persist', 'destroy', |
3236 | 199 | shelf_id, blade_id) | 203 | shelf_id, blade_id) |
3237 | 200 | 204 | ||
3239 | 201 | def discover_volume(self, _volume): | 205 | def discover_volume(self, context, _volume): |
3240 | 202 | """Discover volume on a remote host.""" | 206 | """Discover volume on a remote host.""" |
3243 | 203 | self._execute('sudo', 'aoe-discover') | 207 | (shelf_id, |
3244 | 204 | self._execute('sudo', 'aoe-stat', check_exit_code=False) | 208 | blade_id) = self.db.volume_get_shelf_and_blade(context, |
3245 | 209 | _volume['id']) | ||
3246 | 210 | self._execute("sudo aoe-discover") | ||
3247 | 211 | out, err = self._execute("sudo aoe-stat", check_exit_code=False) | ||
3248 | 212 | device_path = 'e%(shelf_id)d.%(blade_id)d' % locals() | ||
3249 | 213 | if out.find(device_path) >= 0: | ||
3250 | 214 | return "/dev/etherd/%s" % device_path | ||
3251 | 215 | else: | ||
3252 | 216 | return | ||
3253 | 205 | 217 | ||
3254 | 206 | def undiscover_volume(self, _volume): | 218 | def undiscover_volume(self, _volume): |
3255 | 207 | """Undiscover volume on a remote host.""" | 219 | """Undiscover volume on a remote host.""" |
3256 | 208 | pass | 220 | pass |
3257 | 209 | 221 | ||
3258 | 222 | def check_for_export(self, context, volume_id): | ||
3259 | 223 | """Make sure volume is exported.""" | ||
3260 | 224 | (shelf_id, | ||
3261 | 225 | blade_id) = self.db.volume_get_shelf_and_blade(context, | ||
3262 | 226 | volume_id) | ||
3263 | 227 | cmd = "sudo vblade-persist ls --no-header" | ||
3264 | 228 | out, _err = self._execute(cmd) | ||
3265 | 229 | exported = False | ||
3266 | 230 | for line in out.split('\n'): | ||
3267 | 231 | param = line.split(' ') | ||
3268 | 232 | if len(param) == 6 and param[0] == str(shelf_id) \ | ||
3269 | 233 | and param[1] == str(blade_id) and param[-1] == "run": | ||
3270 | 234 | exported = True | ||
3271 | 235 | break | ||
3272 | 236 | if not exported: | ||
3273 | 237 | # Instance will be terminated in this case. | ||
3274 | 238 | desc = _("Cannot confirm exported volume id:%(volume_id)s. " | ||
3275 | 239 | "vblade process for e%(shelf_id)s.%(blade_id)s " | ||
3276 | 240 | "isn't running.") % locals() | ||
3277 | 241 | raise exception.ProcessExecutionError(out, _err, cmd=cmd, | ||
3278 | 242 | description=desc) | ||
3279 | 243 | |||
3280 | 210 | 244 | ||
3281 | 211 | class FakeAOEDriver(AOEDriver): | 245 | class FakeAOEDriver(AOEDriver): |
3282 | 212 | """Logs calls instead of executing.""" | 246 | """Logs calls instead of executing.""" |
3283 | @@ -402,7 +436,7 @@ | |||
3284 | 402 | (property_key, property_value)) | 436 | (property_key, property_value)) |
3285 | 403 | return self._run_iscsiadm(iscsi_properties, iscsi_command) | 437 | return self._run_iscsiadm(iscsi_properties, iscsi_command) |
3286 | 404 | 438 | ||
3288 | 405 | def discover_volume(self, volume): | 439 | def discover_volume(self, context, volume): |
3289 | 406 | """Discover volume on a remote host.""" | 440 | """Discover volume on a remote host.""" |
3290 | 407 | iscsi_properties = self._get_iscsi_properties(volume) | 441 | iscsi_properties = self._get_iscsi_properties(volume) |
3291 | 408 | 442 | ||
3292 | @@ -461,6 +495,20 @@ | |||
3293 | 461 | self._run_iscsiadm(iscsi_properties, "--logout") | 495 | self._run_iscsiadm(iscsi_properties, "--logout") |
3294 | 462 | self._run_iscsiadm(iscsi_properties, "--op delete") | 496 | self._run_iscsiadm(iscsi_properties, "--op delete") |
3295 | 463 | 497 | ||
3296 | 498 | def check_for_export(self, context, volume_id): | ||
3297 | 499 | """Make sure volume is exported.""" | ||
3298 | 500 | |||
3299 | 501 | tid = self.db.volume_get_iscsi_target_num(context, volume_id) | ||
3300 | 502 | try: | ||
3301 | 503 | self._execute("sudo ietadm --op show --tid=%(tid)d" % locals()) | ||
3302 | 504 | except exception.ProcessExecutionError, e: | ||
3303 | 505 | # Instances remount read-only in this case. | ||
3304 | 506 | # /etc/init.d/iscsitarget restart and rebooting nova-volume | ||
3305 | 507 | # is better since ensure_export() works at boot time. | ||
3306 | 508 | logging.error(_("Cannot confirm exported volume " | ||
3307 | 509 | "id:%(volume_id)s.") % locals()) | ||
3308 | 510 | raise | ||
3309 | 511 | |||
3310 | 464 | 512 | ||
3311 | 465 | class FakeISCSIDriver(ISCSIDriver): | 513 | class FakeISCSIDriver(ISCSIDriver): |
3312 | 466 | """Logs calls instead of executing.""" | 514 | """Logs calls instead of executing.""" |
3313 | 467 | 515 | ||
3314 | === modified file 'nova/volume/manager.py' | |||
3315 | --- nova/volume/manager.py 2011-02-21 23:52:41 +0000 | |||
3316 | +++ nova/volume/manager.py 2011-03-10 06:27:59 +0000 | |||
3317 | @@ -160,7 +160,7 @@ | |||
3318 | 160 | if volume_ref['host'] == self.host and FLAGS.use_local_volumes: | 160 | if volume_ref['host'] == self.host and FLAGS.use_local_volumes: |
3319 | 161 | path = self.driver.local_path(volume_ref) | 161 | path = self.driver.local_path(volume_ref) |
3320 | 162 | else: | 162 | else: |
3322 | 163 | path = self.driver.discover_volume(volume_ref) | 163 | path = self.driver.discover_volume(context, volume_ref) |
3323 | 164 | return path | 164 | return path |
3324 | 165 | 165 | ||
3325 | 166 | def remove_compute_volume(self, context, volume_id): | 166 | def remove_compute_volume(self, context, volume_id): |
3326 | @@ -171,3 +171,9 @@ | |||
3327 | 171 | return True | 171 | return True |
3328 | 172 | else: | 172 | else: |
3329 | 173 | self.driver.undiscover_volume(volume_ref) | 173 | self.driver.undiscover_volume(volume_ref) |
3330 | 174 | |||
3331 | 175 | def check_for_export(self, context, instance_id): | ||
3332 | 176 | """Make sure whether volume is exported.""" | ||
3333 | 177 | instance_ref = self.db.instance_get(context, instance_id) | ||
3334 | 178 | for volume in instance_ref['volumes']: | ||
3335 | 179 | self.driver.check_for_export(context, volume['id']) |
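The migration-flag handling in `_live_migration` above (split `FLAGS.live_migration_flag` on commas, resolve each name on the `libvirt` module, OR the results into one bitmask) can be sketched standalone. The `FakeLibvirt` class and its constant values below are placeholders for illustration, not the real libvirt bindings:

```python
from functools import reduce


class FakeLibvirt(object):
    """Placeholder for the libvirt module; flag values are illustrative only."""
    VIR_MIGRATE_LIVE = 1
    VIR_MIGRATE_PEER2PEER = 2
    VIR_MIGRATE_UNDEFINE_SOURCE = 16


def combine_migration_flags(flag_string, libvirt_mod):
    """Turn 'VIR_MIGRATE_LIVE, VIR_MIGRATE_PEER2PEER' into one bitmask."""
    names = [name.strip() for name in flag_string.split(',')]
    flagvals = [getattr(libvirt_mod, name) for name in names]
    # OR all flag values together, exactly as _live_migration's reduce() does.
    return reduce(lambda x, y: x | y, flagvals)


print(combine_migration_flags('VIR_MIGRATE_LIVE, VIR_MIGRATE_PEER2PEER',
                              FakeLibvirt))  # prints 3
```

A misspelled flag name raises `AttributeError` in `getattr` before any migration is attempted, which is the desirable fail-fast behavior for a config error.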
Just a few nits :)
> + def describeresource(self, host):
> + def updateresource(self, host):
These should probably be `describe_resource` and `update_resource` respectively.
3083 +def mktmpfile(dir):
3084 +    """create tmpfile under dir, and return filename."""
3085 +    filename = datetime.utcnow().strftime('%Y%m%d%H%M%S')
3086 +    fpath = os.path.join(dir, filename)
3087 +    open(fpath, 'a+').write(fpath + '\n')
3088 +    return fpath
It would probably be better to use the `tempfile` module in the Python stdlib.
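A rough sketch of that suggestion, keeping the original helper's behavior of writing its own path into the file (`mktmpfile` is the branch's helper name; the `tempfile`-based body is only an illustration of the reviewer's point):

```python
import os
import tempfile


def mktmpfile(dir):
    """Create a uniquely named file under dir and return its path.

    tempfile.mkstemp() creates the file atomically, avoiding the
    collision risk of a second-granularity strftime-based filename.
    """
    fd, fpath = tempfile.mkstemp(dir=dir)
    with os.fdopen(fd, 'w') as f:
        f.write(fpath + '\n')
    return fpath
```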
3091 +def exists(filename):
3092 +    """check file path existence."""
3093 +    return os.path.exists(filename)
3094 +
3095 +
3096 +def remove(filename):
3097 +    """remove file."""
3098 +    return os.remove(filename)
These wrapper functions seem unnecessary, it would probably be better to just use os.path.exists and os.remove directly in the code.
If you need a stub-point for testing, you can stub out `os.path` and `os` directly.
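For example, a test can patch the stdlib attribute directly instead of routing calls through a wrapper (this sketch uses `unittest.mock` for brevity, whereas the branch's tests would go through Nova's stubout machinery; `volume_file_present` is a hypothetical caller):

```python
import os.path
from unittest import mock


def volume_file_present(path):
    """Hypothetical production code calling os.path.exists directly."""
    return os.path.exists(path)


# No wrapper function is needed as a stub point: patch os.path.exists itself.
with mock.patch('os.path.exists', return_value=True):
    assert volume_file_present('/no/such/file')

# Outside the patch context, the real os.path.exists is restored.
assert not volume_file_present('/no/such/file')
```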
+ LOG.info('post_live_migration() is started..')
Needs i18n _('post_live...') treatment.
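Nova installs the `_` gettext alias at import time; a standalone sketch of what the fixed line relies on (with a null translation installed, `_()` simply returns its argument unchanged):

```python
import gettext

# Install _() into builtins; NullTranslations performs no catalog lookup,
# so untranslated messages pass through unchanged.
gettext.NullTranslations().install()

LOG_MESSAGE = _('post_live_migration() is started..')
print(LOG_MESSAGE)  # prints: post_live_migration() is started..
```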
533 + #services.create_column(services_vcpus)
534 + #services.create_column(services_memory_mb)
535 + #services.create_column(services_local_gb)
536 + #services.create_column(services_vcpus_used)
537 + #services.create_column(services_memory_mb_used)
538 + #services.create_column(services_local_gb_used)
539 + #services.create_column(services_hypervisor_type)
540 + #services.create_column(services_hypervisor_version)
541 + #services.create_column(services_cpu_info)
Was this left in by mistake?
902 + print 'manager.attrerr', e
Probably should be logging here, rather than printing to stdout.