Merge lp:~termie/nova/revert_live_migration into lp:~hudson-openstack/nova/trunk

Proposed by termie
Status: Merged
Approved by: Vish Ishaya
Approved revision: 576
Merged at revision: 578
Proposed branch: lp:~termie/nova/revert_live_migration
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1347 lines (+17/-952)
17 files modified
bin/nova-manage (+1/-81)
nova/api/ec2/cloud.py (+1/-1)
nova/compute/manager.py (+1/-117)
nova/db/api.py (+0/-30)
nova/db/sqlalchemy/api.py (+0/-64)
nova/db/sqlalchemy/models.py (+2/-24)
nova/network/manager.py (+6/-8)
nova/scheduler/driver.py (+0/-183)
nova/scheduler/manager.py (+0/-48)
nova/service.py (+0/-4)
nova/virt/cpuinfo.xml.template (+0/-9)
nova/virt/fake.py (+0/-32)
nova/virt/libvirt_conn.py (+0/-287)
nova/virt/xenapi_conn.py (+0/-30)
nova/volume/driver.py (+5/-25)
nova/volume/manager.py (+1/-8)
setup.py (+0/-1)
To merge this branch: bzr merge lp:~termie/nova/revert_live_migration
Reviewer Review Type Date Requested Status
Devin Carlen (community) Approve
Rick Clark (community) Approve
Review via email: mp+46660@code.launchpad.net

Description of the change

The live_migration branch ( https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 ) was not ready to be merged.

Outstanding issues:
 - many style violations, especially in docstrings (leading spaces, extra newlines)
 - no test coverage
 - unusual defaults in the database columns (-1?)
 - unusual naming "phy_resource"

The database changes in particular should preclude the original from being merged until they are corrected, and a patch of this scope really needs tests for the new functionality.

The patch needs further review and should not be rushed in for Bexar, as it commits us to a variety of data model decisions that require more input.
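For reference, the last two bullet points can be seen directly in the preview diff below. Here is a short excerpt of the reverted Service model columns from nova/db/sqlalchemy/models.py (the import line is added here so the snippet stands alone):

    from sqlalchemy import Column, Integer, String

    # Compute-node-only columns added by the reverted patch; -1 (or None)
    # was inserted for every non-compute service, which is the "unusual
    # defaults" objection above.
    vcpus = Column(Integer, nullable=False, default=-1)
    memory_mb = Column(Integer, nullable=False, default=-1)
    local_gb = Column(Integer, nullable=False, default=-1)
    hypervisor_type = Column(String(128))
    hypervisor_version = Column(Integer, nullable=False, default=-1)

The "phy_resource" naming appears in the scheduler manager's show_host_resource() return value, e.g. {'ret': True, 'phy_resource': h_resource, 'usage': u_resource}.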

Revision history for this message
Soren Hansen (soren) wrote :

======================================================================
FAIL: test_authors_up_to_date (nova.tests.test_misc.ProjectTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/soren/src/openstack/nova/nova/nova/tests/test_misc.py", line 53, in test_authors_up_to_date
    '%r not listed in Authors' % missing)
AssertionError: set([u'<root@openstack2-api>', u'Masumoto<email address hidden>']) not listed in Authors

----------------------------------------------------------------------
Ran 286 tests in 94.682s

FAILED (failures=1)
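Context for this failure: test_authors_up_to_date verifies that everyone who has committed to the branch is listed in the top-level Authors file. A minimal sketch of that kind of check, not nova's actual test, might look like:

    import re
    import subprocess

    def missing_authors(authors_path='Authors'):
        """Return committers found in bzr history but not listed in Authors."""
        with open(authors_path) as f:
            listed = f.read()
        # Default 'bzr log' output includes a "committer: Name <email>"
        # line for each revision.
        log = subprocess.check_output(['bzr', 'log']).decode('utf-8', 'replace')
        committers = set(re.findall(r'^committer: (.*)$', log, re.MULTILINE))
        return sorted(c for c in committers if c not in listed)

Here the two identities in the assertion above would be returned because they do not appear in Authors.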

Revision history for this message
Soren Hansen (soren) wrote :

I think this is an *EXTREMELY* poor way to do code review.

Revision history for this message
Devin Carlen (devcamcar) wrote :

What specifically is a poor way?

We were chatting earlier, and I think for some of these larger reviews it would make sense to do them as a group, with Skype audio, screen share, Etherpad, etc. We have the technology and the skill. I think it would be valuable. Anyone have ideas?

Revision history for this message
Jay Pipes (jaypipes) wrote :

Correct me if I'm wrong, Soren, but I think Soren was referring to using a reversion patch as a place to review the reverted patch :)

A better way would be to comment on the original merge proposal and then, if necessary, simply remove the revision from trunk.

FWIW, I think we should remove the live-migrations patch from trunk...

There's no need to issue a reversion patch. We can simply remove a revision from trunk manually.

Revision history for this message
Soren Hansen (soren) wrote :

Devin: I've never tried anything like that, actually. Could be an interesting experiment. I'm concerned about the lack of transparency involved in something like that, but since the response to the person requesting review will have to be in written form, perhaps it's not so bad.

Revision history for this message
Devin Carlen (devcamcar) wrote :

We do group code reviews internally sometimes, and in my opinion it's the best way for an org to absorb new code. It's hard to do all the time, of course, but for the big ones it's invaluable.

Revision history for this message
Thierry Carrez (ttx) wrote :

FWIW, the live-migration branch was proposed for merging on Dec 31, 2010, and nobody cared to review it until January 10th, despite weekly calls asking people to review it. It wasn't exactly rushed in.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Please let's not do this. Revisions shouldn't be removed, especially after they are public.

Vish

On Jan 18, 2011, at 11:46 AM, Jay Pipes wrote:

> Correct me if I'm wrong, Soren, but I think Soren was referring to using a reversion patch as a place to review the reverted patch :)
>
> A better way would be to comment on the original merge proposal and then, if necessary, simply remove the revision from trunk.
>
> FWIW, I think we should remove the live-migrations patch from trunk...
>
> There's no need to issue a reversion patch. We can simply remove a revision from trunk manually.
> --
> https://code.launchpad.net/~termie/nova/revert_live_migration/+merge/46660
> You are subscribed to branch lp:nova.

Revision history for this message
Soren Hansen (soren) wrote :

Devin, sorry, I thought your question was for termie.

I think proposing a branch to revert someone's patch is an extremely poor way to do code reviews. The patch had been in the queue since Dec 31. There was *plenty* of time to object to coding style, etc.

The patch is in. Let's fix the problems and move forward rather than backwards.

And no, we absolutely cannot remove the patch from trunk. We did that once, it was *dreadful*. It makes life absolutely miserable for everyone who has branched off of trunk after it was merged. The code was added to trunk. We should have the exact same VCS tracking for reversals as for commits. This means adding a patch that removes the offending code.

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Tue, Jan 18, 2011 at 3:08 PM, Vish Ishaya <email address hidden> wrote:
> Please let's not do this. Revisions shouldn't be removed, especially after they are public.

If it's a very short time between when the patch goes in trunk and
gets removed, I don't think it's too big of an issue. It's not like
trunk was packaged up or released.

Just my 2 cents, of course :)

-jay

Revision history for this message
termie (termie) wrote :

Soren: just because a patch was waiting for review for a long time doesn't mean it was good to go when it was reviewed. The code reviewers who reviewed it failed to do so thoroughly, and the patch should not have been merged. This is not a "code review"; this is a reversion patch. The code review will happen on the patch when it is re-proposed.

Jay: when I asked in IRC, it was suggested that the best way to deal with this was to submit a reversion patch rather than manually removing the code.

Revision history for this message
termie (termie) wrote :

The review process is as much to teach the code submitter the appropriate practices as it is to make sure we get quality code. Having me go in afterwards and change the content of the patch does nothing for the former and removes the benefit of the latter.

Revision history for this message
termie (termie) wrote :

> FWIW, the live-migration branch was proposed for merging on Dec 31, 2010, and
> nobody cared to review it until January 10th, despite weekly calls asking
> people to review it. It wasn't exactly rushed in.

The reviewers rushed it in at the end, not the patch authors.

Revision history for this message
Rick Clark (dendrobates) wrote :

Approved in today's IRC meeting. We will revert and add tests ASAP.

review: Approve
Revision history for this message
Devin Carlen (devcamcar) wrote :

This looks odd:

123 - instance_id = floating_ip_ref['fixed_ip']['instance']['id']
124 + instance_id = floating_ip_ref['fixed_ip']['instance']['ec2_id']

termie says it's legit though, so approve.

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Attempt to merge into lp:nova failed due to conflicts:

text conflict in nova/virt/fake.py

576. By termie

merge from upstream to fix conflict

Preview Diff

1=== modified file 'bin/nova-manage'
2--- bin/nova-manage 2011-01-16 19:12:27 +0000
3+++ bin/nova-manage 2011-01-18 22:58:14 +0000
4@@ -62,7 +62,6 @@
5
6 import IPy
7
8-
9 # If ../nova/__init__.py exists, add ../ to Python search path, so that
10 # it will override what happens to be installed in /usr/(local/)lib/python...
11 possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
12@@ -82,9 +81,8 @@
13 from nova import quota
14 from nova import utils
15 from nova.auth import manager
16-from nova import rpc
17 from nova.cloudpipe import pipelib
18-from nova.api.ec2 import cloud
19+
20
21 logging.basicConfig()
22 FLAGS = flags.FLAGS
23@@ -467,82 +465,6 @@
24 int(vpn_start), fixed_range_v6)
25
26
27-class InstanceCommands(object):
28- """Class for mangaging VM instances."""
29-
30- def live_migration(self, ec2_id, dest):
31- """live_migration"""
32-
33- ctxt = context.get_admin_context()
34- instance_id = cloud.ec2_id_to_id(ec2_id)
35-
36- if FLAGS.connection_type != 'libvirt':
37- msg = _('Only KVM is supported for now. Sorry!')
38- raise exception.Error(msg)
39-
40- if FLAGS.volume_driver != 'nova.volume.driver.AOEDriver':
41- instance_ref = db.instance_get(ctxt, instance_id)
42- if len(instance_ref['volumes']) != 0:
43- msg = _(("""Volumes attached by ISCSIDriver"""
44- """ are not supported. Sorry!"""))
45- raise exception.Error(msg)
46-
47- rpc.call(ctxt,
48- FLAGS.scheduler_topic,
49- {"method": "live_migration",
50- "args": {"instance_id": instance_id,
51- "dest": dest,
52- "topic": FLAGS.compute_topic}})
53-
54- msg = 'Migration of %s initiated. ' % ec2_id
55- msg += 'Check its progress using euca-describe-instances.'
56- print msg
57-
58-
59-class HostCommands(object):
60- """Class for mangaging host(physical nodes)."""
61-
62- def list(self):
63- """describe host list."""
64-
65- # To supress msg: No handlers could be found for logger "amqplib"
66- logging.basicConfig()
67-
68- service_refs = db.service_get_all(context.get_admin_context())
69- hosts = [h['host'] for h in service_refs]
70- hosts = list(set(hosts))
71- for host in hosts:
72- print host
73-
74- def show(self, host):
75- """describe cpu/memory/hdd info for host."""
76-
77- result = rpc.call(context.get_admin_context(),
78- FLAGS.scheduler_topic,
79- {"method": "show_host_resource",
80- "args": {"host": host}})
81-
82- # Checking result msg format is necessary, that will have done
83- # when this feture is included in API.
84- if type(result) != dict:
85- print 'Unexpected error occurs'
86- elif not result['ret']:
87- print '%s' % result['msg']
88- else:
89- cpu = result['phy_resource']['vcpus']
90- mem = result['phy_resource']['memory_mb']
91- hdd = result['phy_resource']['local_gb']
92-
93- print 'HOST\t\tPROJECT\t\tcpu\tmem(mb)\tdisk(gb)'
94- print '%s\t\t\t%s\t%s\t%s' % (host, cpu, mem, hdd)
95- for p_id, val in result['usage'].items():
96- print '%s\t%s\t\t%s\t%s\t%s' % (host,
97- p_id,
98- val['vcpus'],
99- val['memory_mb'],
100- val['local_gb'])
101-
102-
103 class ServiceCommands(object):
104 """Enable and disable running services"""
105
106@@ -605,8 +527,6 @@
107 ('vpn', VpnCommands),
108 ('floating', FloatingIpCommands),
109 ('network', NetworkCommands),
110- ('instance', InstanceCommands),
111- ('host', HostCommands),
112 ('service', ServiceCommands),
113 ('log', LogCommands)]
114
115
116=== modified file 'nova/api/ec2/cloud.py'
117--- nova/api/ec2/cloud.py 2011-01-18 23:50:47 +0000
118+++ nova/api/ec2/cloud.py 2011-01-18 22:58:14 +0000
119@@ -729,7 +729,7 @@
120 ec2_id = None
121 if (floating_ip_ref['fixed_ip']
122 and floating_ip_ref['fixed_ip']['instance']):
123- instance_id = floating_ip_ref['fixed_ip']['instance']['id']
124+ instance_id = floating_ip_ref['fixed_ip']['instance']['ec2_id']
125 ec2_id = id_to_ec2_id(instance_id)
126 address_rv = {'public_ip': address,
127 'instance_id': ec2_id}
128
129=== modified file 'nova/compute/manager.py'
130--- nova/compute/manager.py 2011-01-19 00:46:43 +0000
131+++ nova/compute/manager.py 2011-01-18 22:58:14 +0000
132@@ -41,7 +41,6 @@
133 import socket
134 import functools
135
136-from nova import db
137 from nova import exception
138 from nova import flags
139 from nova import log as logging
140@@ -121,35 +120,6 @@
141 """
142 self.driver.init_host()
143
144- def update_service(self, ctxt, host, binary):
145- """Insert compute node specific information to DB."""
146-
147- try:
148- service_ref = db.service_get_by_args(ctxt,
149- host,
150- binary)
151- except exception.NotFound:
152- msg = _(("""Cannot insert compute manager specific info"""
153- """Because no service record found."""))
154- raise exception.Invalid(msg)
155-
156- # Updating host information
157- vcpu = self.driver.get_vcpu_number()
158- memory_mb = self.driver.get_memory_mb()
159- local_gb = self.driver.get_local_gb()
160- hypervisor = self.driver.get_hypervisor_type()
161- version = self.driver.get_hypervisor_version()
162- cpu_info = self.driver.get_cpu_info()
163-
164- db.service_update(ctxt,
165- service_ref['id'],
166- {'vcpus': vcpu,
167- 'memory_mb': memory_mb,
168- 'local_gb': local_gb,
169- 'hypervisor_type': hypervisor,
170- 'hypervisor_version': version,
171- 'cpu_info': cpu_info})
172-
173 def _update_state(self, context, instance_id):
174 """Update the state of an instance from the driver info."""
175 # FIXME(ja): include other fields from state?
176@@ -208,10 +178,9 @@
177 raise exception.Error(_("Instance has already been created"))
178 LOG.audit(_("instance %s: starting..."), instance_id,
179 context=context)
180-
181 self.db.instance_update(context,
182 instance_id,
183- {'host': self.host, 'launched_on': self.host})
184+ {'host': self.host})
185
186 self.db.instance_set_state(context,
187 instance_id,
188@@ -591,88 +560,3 @@
189 self.volume_manager.remove_compute_volume(context, volume_id)
190 self.db.volume_detached(context, volume_id)
191 return True
192-
193- def compare_cpu(self, context, cpu_info):
194- """ Check the host cpu is compatible to a cpu given by xml."""
195- return self.driver.compare_cpu(cpu_info)
196-
197- def pre_live_migration(self, context, instance_id, dest):
198- """Any preparation for live migration at dst host."""
199-
200- # Getting instance info
201- instance_ref = db.instance_get(context, instance_id)
202- ec2_id = instance_ref['hostname']
203-
204- # Getting fixed ips
205- fixed_ip = db.instance_get_fixed_address(context, instance_id)
206- if not fixed_ip:
207- msg = _('%s(%s) doesnt have fixed_ip') % (instance_id, ec2_id)
208- raise exception.NotFound(msg)
209-
210- # If any volume is mounted, prepare here.
211- if len(instance_ref['volumes']) == 0:
212- logging.info(_("%s has no volume.") % ec2_id)
213- else:
214- for v in instance_ref['volumes']:
215- self.volume_manager.setup_compute_volume(context, v['id'])
216-
217- # Bridge settings
218- # call this method prior to ensure_filtering_rules_for_instance,
219- # since bridge is not set up, ensure_filtering_rules_for instance
220- # fails.
221- self.network_manager.setup_compute_network(context, instance_id)
222-
223- # Creating filters to hypervisors and firewalls.
224- # An example is that nova-instance-instance-xxx,
225- # which is written to libvirt.xml( check "virsh nwfilter-list )
226- # On destination host, this nwfilter is necessary.
227- # In addition, this method is creating filtering rule
228- # onto destination host.
229- self.driver.ensure_filtering_rules_for_instance(instance_ref)
230-
231- def live_migration(self, context, instance_id, dest):
232- """executes live migration."""
233-
234- # Get instance for error handling.
235- instance_ref = db.instance_get(context, instance_id)
236- ec2_id = instance_ref['hostname']
237-
238- try:
239- # Checking volume node is working correctly when any volumes
240- # are attached to instances.
241- if len(instance_ref['volumes']) != 0:
242- rpc.call(context,
243- FLAGS.volume_topic,
244- {"method": "check_for_export",
245- "args": {'instance_id': instance_id}})
246-
247- # Asking dest host to preparing live migration.
248- compute_topic = db.queue_get_for(context,
249- FLAGS.compute_topic,
250- dest)
251- rpc.call(context,
252- compute_topic,
253- {"method": "pre_live_migration",
254- "args": {'instance_id': instance_id,
255- 'dest': dest}})
256-
257- except Exception, e:
258- msg = _('Pre live migration for %s failed at %s')
259- logging.error(msg, ec2_id, dest)
260- db.instance_set_state(context,
261- instance_id,
262- power_state.RUNNING,
263- 'running')
264-
265- for v in instance_ref['volumes']:
266- db.volume_update(context,
267- v['id'],
268- {'status': 'in-use'})
269-
270- # e should be raised. just calling "raise" may raise NotFound.
271- raise e
272-
273- # Executing live migration
274- # live_migration might raises exceptions, but
275- # nothing must be recovered in this version.
276- self.driver.live_migration(context, instance_ref, dest)
277
278=== modified file 'nova/db/api.py'
279--- nova/db/api.py 2011-01-16 19:12:27 +0000
280+++ nova/db/api.py 2011-01-18 22:58:14 +0000
281@@ -253,10 +253,6 @@
282 return IMPL.floating_ip_get_by_address(context, address)
283
284
285-def floating_ip_update(context, address, values):
286- """update floating ip information."""
287- return IMPL.floating_ip_update(context, address, values)
288-
289 ####################
290
291
292@@ -409,32 +405,6 @@
293 security_group_id)
294
295
296-def instance_get_all_by_host(context, hostname):
297- """Get instances by host"""
298- return IMPL.instance_get_all_by_host(context, hostname)
299-
300-
301-def instance_get_vcpu_sum_by_host_and_project(context, hostname, proj_id):
302- """Get instances.vcpus by host and project"""
303- return IMPL.instance_get_vcpu_sum_by_host_and_project(context,
304- hostname,
305- proj_id)
306-
307-
308-def instance_get_memory_sum_by_host_and_project(context, hostname, proj_id):
309- """Get amount of memory by host and project """
310- return IMPL.instance_get_memory_sum_by_host_and_project(context,
311- hostname,
312- proj_id)
313-
314-
315-def instance_get_disk_sum_by_host_and_project(context, hostname, proj_id):
316- """Get total amount of disk by host and project """
317- return IMPL.instance_get_disk_sum_by_host_and_project(context,
318- hostname,
319- proj_id)
320-
321-
322 def instance_action_create(context, values):
323 """Create an instance action from the values dictionary."""
324 return IMPL.instance_action_create(context, values)
325
326=== modified file 'nova/db/sqlalchemy/api.py'
327--- nova/db/sqlalchemy/api.py 2011-01-16 19:12:27 +0000
328+++ nova/db/sqlalchemy/api.py 2011-01-18 22:58:14 +0000
329@@ -495,16 +495,6 @@
330 return result
331
332
333-@require_context
334-def floating_ip_update(context, address, values):
335- session = get_session()
336- with session.begin():
337- floating_ip_ref = floating_ip_get_by_address(context, address, session)
338- for (key, value) in values.iteritems():
339- floating_ip_ref[key] = value
340- floating_ip_ref.save(session=session)
341-
342-
343 ###################
344
345
346@@ -868,7 +858,6 @@
347 return instance_ref
348
349
350-@require_context
351 def instance_add_security_group(context, instance_id, security_group_id):
352 """Associate the given security group with the given instance"""
353 session = get_session()
354@@ -882,59 +871,6 @@
355
356
357 @require_context
358-def instance_get_all_by_host(context, hostname):
359- session = get_session()
360- if not session:
361- session = get_session()
362-
363- result = session.query(models.Instance).\
364- filter_by(host=hostname).\
365- filter_by(deleted=can_read_deleted(context)).\
366- all()
367- if not result:
368- return []
369- return result
370-
371-
372-@require_context
373-def _instance_get_sum_by_host_and_project(context, column, hostname, proj_id):
374- session = get_session()
375-
376- result = session.query(models.Instance).\
377- filter_by(host=hostname).\
378- filter_by(project_id=proj_id).\
379- filter_by(deleted=can_read_deleted(context)).\
380- value(column)
381- if not result:
382- return 0
383- return result
384-
385-
386-@require_context
387-def instance_get_vcpu_sum_by_host_and_project(context, hostname, proj_id):
388- return _instance_get_sum_by_host_and_project(context,
389- 'vcpus',
390- hostname,
391- proj_id)
392-
393-
394-@require_context
395-def instance_get_memory_sum_by_host_and_project(context, hostname, proj_id):
396- return _instance_get_sum_by_host_and_project(context,
397- 'memory_mb',
398- hostname,
399- proj_id)
400-
401-
402-@require_context
403-def instance_get_disk_sum_by_host_and_project(context, hostname, proj_id):
404- return _instance_get_sum_by_host_and_project(context,
405- 'local_gb',
406- hostname,
407- proj_id)
408-
409-
410-@require_context
411 def instance_action_create(context, values):
412 """Create an instance action from the values dictionary."""
413 action_ref = models.InstanceActions()
414
415=== modified file 'nova/db/sqlalchemy/models.py'
416--- nova/db/sqlalchemy/models.py 2011-01-16 19:12:27 +0000
417+++ nova/db/sqlalchemy/models.py 2011-01-18 22:58:14 +0000
418@@ -150,32 +150,13 @@
419
420 __tablename__ = 'services'
421 id = Column(Integer, primary_key=True)
422- #host_id = Column(Integer, ForeignKey('hosts.id'), nullable=True)
423- #host = relationship(Host, backref=backref('services'))
424- host = Column(String(255))
425+ host = Column(String(255)) # , ForeignKey('hosts.id'))
426 binary = Column(String(255))
427 topic = Column(String(255))
428 report_count = Column(Integer, nullable=False, default=0)
429 disabled = Column(Boolean, default=False)
430 availability_zone = Column(String(255), default='nova')
431
432- # The below items are compute node only.
433- # -1 or None is inserted for other service.
434- vcpus = Column(Integer, nullable=False, default=-1)
435- memory_mb = Column(Integer, nullable=False, default=-1)
436- local_gb = Column(Integer, nullable=False, default=-1)
437- hypervisor_type = Column(String(128))
438- hypervisor_version = Column(Integer, nullable=False, default=-1)
439- # Note(masumotok): Expected Strings example:
440- #
441- # '{"arch":"x86_64", "model":"Nehalem",
442- # "topology":{"sockets":1, "threads":2, "cores":3},
443- # features:[ "tdtscp", "xtpr"]}'
444- #
445- # Points are "json translatable" and it must have all
446- # dictionary keys above.
447- cpu_info = Column(String(512))
448-
449
450 class Certificate(BASE, NovaBase):
451 """Represents a an x509 certificate"""
452@@ -250,9 +231,6 @@
453 display_name = Column(String(255))
454 display_description = Column(String(255))
455
456- # To remember on which host a instance booted.
457- # An instance may moved to other host by live migraiton.
458- launched_on = Column(String(255))
459 locked = Column(Boolean)
460
461 # TODO(vish): see Ewan's email about state improvements, probably
462@@ -610,7 +588,7 @@
463 Volume, ExportDevice, IscsiTarget, FixedIp, FloatingIp,
464 Network, SecurityGroup, SecurityGroupIngressRule,
465 SecurityGroupInstanceAssociation, AuthToken, User,
466- Project, Certificate, ConsolePool, Console) # , Host, Image
467+ Project, Certificate, ConsolePool, Console) # , Image, Host
468 engine = create_engine(FLAGS.sql_connection, echo=False)
469 for model in models:
470 model.metadata.create_all(engine)
471
472=== modified file 'nova/network/manager.py'
473--- nova/network/manager.py 2011-01-16 19:12:27 +0000
474+++ nova/network/manager.py 2011-01-18 22:58:14 +0000
475@@ -159,7 +159,7 @@
476 """Called when this host becomes the host for a network."""
477 raise NotImplementedError()
478
479- def setup_compute_network(self, context, instance_id, network_ref=None):
480+ def setup_compute_network(self, context, instance_id):
481 """Sets up matching network for compute hosts."""
482 raise NotImplementedError()
483
484@@ -320,7 +320,7 @@
485 self.db.fixed_ip_update(context, address, {'allocated': False})
486 self.db.fixed_ip_disassociate(context.elevated(), address)
487
488- def setup_compute_network(self, context, instance_id, network_ref=None):
489+ def setup_compute_network(self, context, instance_id):
490 """Network is created manually."""
491 pass
492
493@@ -395,10 +395,9 @@
494 super(FlatDHCPManager, self).init_host()
495 self.driver.metadata_forward()
496
497- def setup_compute_network(self, context, instance_id, network_ref=None):
498+ def setup_compute_network(self, context, instance_id):
499 """Sets up matching network for compute hosts."""
500- if network_ref is None:
501- network_ref = db.network_get_by_instance(context, instance_id)
502+ network_ref = db.network_get_by_instance(context, instance_id)
503 self.driver.ensure_bridge(network_ref['bridge'],
504 FLAGS.flat_interface)
505
506@@ -488,10 +487,9 @@
507 """Returns a fixed ip to the pool."""
508 self.db.fixed_ip_update(context, address, {'allocated': False})
509
510- def setup_compute_network(self, context, instance_id, network_ref=None):
511+ def setup_compute_network(self, context, instance_id):
512 """Sets up matching network for compute hosts."""
513- if network_ref is None:
514- network_ref = db.network_get_by_instance(context, instance_id)
515+ network_ref = db.network_get_by_instance(context, instance_id)
516 self.driver.ensure_vlan_bridge(network_ref['vlan'],
517 network_ref['bridge'])
518
519
520=== modified file 'nova/scheduler/driver.py'
521--- nova/scheduler/driver.py 2011-01-16 19:12:27 +0000
522+++ nova/scheduler/driver.py 2011-01-18 22:58:14 +0000
523@@ -26,9 +26,6 @@
524 from nova import db
525 from nova import exception
526 from nova import flags
527-from nova import log as logging
528-from nova import rpc
529-from nova.compute import power_state
530
531 FLAGS = flags.FLAGS
532 flags.DEFINE_integer('service_down_time', 60,
533@@ -67,183 +64,3 @@
534 def schedule(self, context, topic, *_args, **_kwargs):
535 """Must override at least this method for scheduler to work."""
536 raise NotImplementedError(_("Must implement a fallback schedule"))
537-
538- def schedule_live_migration(self, context, instance_id, dest):
539- """ live migration method """
540-
541- # Whether instance exists and running
542- instance_ref = db.instance_get(context, instance_id)
543- ec2_id = instance_ref['hostname']
544-
545- # Checking instance.
546- self._live_migration_src_check(context, instance_ref)
547-
548- # Checking destination host.
549- self._live_migration_dest_check(context, instance_ref, dest)
550-
551- # Common checking.
552- self._live_migration_common_check(context, instance_ref, dest)
553-
554- # Changing instance_state.
555- db.instance_set_state(context,
556- instance_id,
557- power_state.PAUSED,
558- 'migrating')
559-
560- # Changing volume state
561- for v in instance_ref['volumes']:
562- db.volume_update(context,
563- v['id'],
564- {'status': 'migrating'})
565-
566- # Return value is necessary to send request to src
567- # Check _schedule() in detail.
568- src = instance_ref['host']
569- return src
570-
571- def _live_migration_src_check(self, context, instance_ref):
572- """Live migration check routine (for src host)"""
573-
574- # Checking instance is running.
575- if power_state.RUNNING != instance_ref['state'] or \
576- 'running' != instance_ref['state_description']:
577- msg = _('Instance(%s) is not running')
578- ec2_id = instance_ref['hostname']
579- raise exception.Invalid(msg % ec2_id)
580-
581- # Checing volume node is running when any volumes are mounted
582- # to the instance.
583- if len(instance_ref['volumes']) != 0:
584- services = db.service_get_all_by_topic(context, 'volume')
585- if len(services) < 1 or not self.service_is_up(services[0]):
586- msg = _('volume node is not alive(time synchronize problem?)')
587- raise exception.Invalid(msg)
588-
589- # Checking src host is alive.
590- src = instance_ref['host']
591- services = db.service_get_all_by_topic(context, 'compute')
592- services = [service for service in services if service.host == src]
593- if len(services) < 1 or not self.service_is_up(services[0]):
594- msg = _('%s is not alive(time synchronize problem?)')
595- raise exception.Invalid(msg % src)
596-
597- def _live_migration_dest_check(self, context, instance_ref, dest):
598- """Live migration check routine (for destination host)"""
599-
600- # Checking dest exists and compute node.
601- dservice_refs = db.service_get_all_by_host(context, dest)
602- if len(dservice_refs) <= 0:
603- msg = _('%s does not exists.')
604- raise exception.Invalid(msg % dest)
605-
606- dservice_ref = dservice_refs[0]
607- if dservice_ref['topic'] != 'compute':
608- msg = _('%s must be compute node')
609- raise exception.Invalid(msg % dest)
610-
611- # Checking dest host is alive.
612- if not self.service_is_up(dservice_ref):
613- msg = _('%s is not alive(time synchronize problem?)')
614- raise exception.Invalid(msg % dest)
615-
616- # Checking whether The host where instance is running
617- # and dest is not same.
618- src = instance_ref['host']
619- if dest == src:
620- ec2_id = instance_ref['hostname']
621- msg = _('%s is where %s is running now. choose other host.')
622- raise exception.Invalid(msg % (dest, ec2_id))
623-
624- # Checking dst host still has enough capacities.
625- self.has_enough_resource(context, instance_ref, dest)
626-
627- def _live_migration_common_check(self, context, instance_ref, dest):
628- """
629- Live migration check routine.
630- Below pre-checkings are followed by
631- http://wiki.libvirt.org/page/TodoPreMigrationChecks
632-
633- """
634-
635- # Checking dest exists.
636- dservice_refs = db.service_get_all_by_host(context, dest)
637- if len(dservice_refs) <= 0:
638- msg = _('%s does not exists.')
639- raise exception.Invalid(msg % dest)
640- dservice_ref = dservice_refs[0]
641-
642- # Checking original host( where instance was launched at) exists.
643- orighost = instance_ref['launched_on']
644- oservice_refs = db.service_get_all_by_host(context, orighost)
645- if len(oservice_refs) <= 0:
646- msg = _('%s(where instance was launched at) does not exists.')
647- raise exception.Invalid(msg % orighost)
648- oservice_ref = oservice_refs[0]
649-
650- # Checking hypervisor is same.
651- otype = oservice_ref['hypervisor_type']
652- dtype = dservice_ref['hypervisor_type']
653- if otype != dtype:
654- msg = _('Different hypervisor type(%s->%s)')
655- raise exception.Invalid(msg % (otype, dtype))
656-
657- # Checkng hypervisor version.
658- oversion = oservice_ref['hypervisor_version']
659- dversion = dservice_ref['hypervisor_version']
660- if oversion > dversion:
661- msg = _('Older hypervisor version(%s->%s)')
662- raise exception.Invalid(msg % (oversion, dversion))
663-
664- # Checking cpuinfo.
665- cpu_info = oservice_ref['cpu_info']
666- try:
667- rpc.call(context,
668- db.queue_get_for(context, FLAGS.compute_topic, dest),
669- {"method": 'compare_cpu',
670- "args": {'cpu_info': cpu_info}})
671-
672- except rpc.RemoteError, e:
673- msg = _(("""%s doesnt have compatibility to %s"""
674- """(where %s was launched at)"""))
675- ec2_id = instance_ref['hostname']
676- src = instance_ref['host']
677- logging.error(msg % (dest, src, ec2_id))
678- raise e
679-
680- def has_enough_resource(self, context, instance_ref, dest):
681- """ Check if destination host has enough resource for live migration"""
682-
683- # Getting instance information
684- ec2_id = instance_ref['hostname']
685- vcpus = instance_ref['vcpus']
686- mem = instance_ref['memory_mb']
687- hdd = instance_ref['local_gb']
688-
689- # Gettin host information
690- service_refs = db.service_get_all_by_host(context, dest)
691- if len(service_refs) <= 0:
692- msg = _('%s does not exists.')
693- raise exception.Invalid(msg % dest)
694- service_ref = service_refs[0]
695-
696- total_cpu = int(service_ref['vcpus'])
697- total_mem = int(service_ref['memory_mb'])
698- total_hdd = int(service_ref['local_gb'])
699-
700- instances_ref = db.instance_get_all_by_host(context, dest)
701- for i_ref in instances_ref:
702- total_cpu -= int(i_ref['vcpus'])
703- total_mem -= int(i_ref['memory_mb'])
704- total_hdd -= int(i_ref['local_gb'])
705-
706- # Checking host has enough information
707- logging.debug('host(%s) remains vcpu:%s mem:%s hdd:%s,' %
708- (dest, total_cpu, total_mem, total_hdd))
709- logging.debug('instance(%s) has vcpu:%s mem:%s hdd:%s,' %
710- (ec2_id, vcpus, mem, hdd))
711-
712- if total_cpu <= vcpus or total_mem <= mem or total_hdd <= hdd:
713- msg = '%s doesnt have enough resource for %s' % (dest, ec2_id)
714- raise exception.NotEmpty(msg)
715-
716- logging.debug(_('%s has_enough_resource() for %s') % (dest, ec2_id))
717
718=== modified file 'nova/scheduler/manager.py'
719--- nova/scheduler/manager.py 2011-01-19 16:14:23 +0000
720+++ nova/scheduler/manager.py 2011-01-18 22:58:14 +0000
721@@ -29,7 +29,6 @@
722 from nova import manager
723 from nova import rpc
724 from nova import utils
725-from nova import exception
726
727 LOG = logging.getLogger('nova.scheduler.manager')
728 FLAGS = flags.FLAGS
729@@ -68,50 +67,3 @@
730 {"method": method,
731 "args": kwargs})
732 LOG.debug(_("Casting to %s %s for %s"), topic, host, method)
733-
734- # NOTE (masumotok) : This method should be moved to nova.api.ec2.admin.
735- # Based on bear design summit discussion,
736- # just put this here for bexar release.
737- def show_host_resource(self, context, host, *args):
738- """ show the physical/usage resource given by hosts."""
739-
740- services = db.service_get_all_by_host(context, host)
741- if len(services) == 0:
742- return {'ret': False, 'msg': 'No such Host'}
743-
744- compute = [s for s in services if s['topic'] == 'compute']
745- if 0 == len(compute):
746- service_ref = services[0]
747- else:
748- service_ref = compute[0]
749-
750- # Getting physical resource information
751- h_resource = {'vcpus': service_ref['vcpus'],
752- 'memory_mb': service_ref['memory_mb'],
753- 'local_gb': service_ref['local_gb']}
754-
755- # Getting usage resource information
756- u_resource = {}
757- instances_ref = db.instance_get_all_by_host(context,
758- service_ref['host'])
759-
760- if 0 == len(instances_ref):
761- return {'ret': True, 'phy_resource': h_resource, 'usage': {}}
762-
763- project_ids = [i['project_id'] for i in instances_ref]
764- project_ids = list(set(project_ids))
765- for p_id in project_ids:
766- vcpus = db.instance_get_vcpu_sum_by_host_and_project(context,
767- host,
768- p_id)
769- mem = db.instance_get_memory_sum_by_host_and_project(context,
770- host,
771- p_id)
772- hdd = db.instance_get_disk_sum_by_host_and_project(context,
773- host,
774- p_id)
775- u_resource[p_id] = {'vcpus': vcpus,
776- 'memory_mb': mem,
777- 'local_gb': hdd}
778-
779- return {'ret': True, 'phy_resource': h_resource, 'usage': u_resource}
780
781=== modified file 'nova/service.py'
782--- nova/service.py 2011-01-19 16:14:23 +0000
783+++ nova/service.py 2011-01-18 22:58:14 +0000
784@@ -80,7 +80,6 @@
785 self.manager.init_host()
786 self.model_disconnected = False
787 ctxt = context.get_admin_context()
788-
789 try:
790 service_ref = db.service_get_by_args(ctxt,
791 self.host,
792@@ -89,9 +88,6 @@
793 except exception.NotFound:
794 self._create_service_ref(ctxt)
795
796- if 'nova-compute' == self.binary:
797- self.manager.update_service(ctxt, self.host, self.binary)
798-
799 conn1 = rpc.Connection.instance(new=True)
800 conn2 = rpc.Connection.instance(new=True)
801 if self.report_interval:
802
803=== removed file 'nova/virt/cpuinfo.xml.template'
804--- nova/virt/cpuinfo.xml.template 2011-01-16 19:12:27 +0000
805+++ nova/virt/cpuinfo.xml.template 1970-01-01 00:00:00 +0000
806@@ -1,9 +0,0 @@
807-<cpu>
808- <arch>$arch</arch>
809- <model>$model</model>
810- <vendor>$vendor</vendor>
811- <topology sockets="$topology.sockets" cores="$topology.cores" threads="$topology.threads"/>
812-#for $var in $features
813- <features name="$var" />
814-#end for
815-</cpu>
816
817=== modified file 'nova/virt/fake.py'
818--- nova/virt/fake.py 2011-01-18 19:30:26 +0000
819+++ nova/virt/fake.py 2011-01-18 22:58:14 +0000
820@@ -358,38 +358,6 @@
821 """
822 return True
823
824- def get_cpu_info(self):
825- """This method is supported only libvirt. """
826- return
827-
828- def get_vcpu_number(self):
829- """This method is supported only libvirt. """
830- return -1
831-
832- def get_memory_mb(self):
833- """This method is supported only libvirt.."""
834- return -1
835-
836- def get_local_gb(self):
837- """This method is supported only libvirt.."""
838- return -1
839-
840- def get_hypervisor_type(self):
841- """This method is supported only libvirt.."""
842- return
843-
844- def get_hypervisor_version(self):
845- """This method is supported only libvirt.."""
846- return -1
847-
848- def compare_cpu(self, xml):
849- """This method is supported only libvirt.."""
850- raise NotImplementedError('This method is supported only libvirt.')
851-
852- def live_migration(self, context, instance_ref, dest):
853- """This method is supported only libvirt.."""
854- raise NotImplementedError('This method is supported only libvirt.')
855-
856
857 class FakeInstance(object):
858
859
860=== modified file 'nova/virt/libvirt_conn.py'
861--- nova/virt/libvirt_conn.py 2011-01-18 20:42:06 +0000
862+++ nova/virt/libvirt_conn.py 2011-01-18 22:58:14 +0000
863@@ -36,11 +36,8 @@
864
865 """
866
867-import json
868 import os
869 import shutil
870-import re
871-import time
872 import random
873 import subprocess
874 import uuid
875@@ -83,9 +80,6 @@
876 flags.DEFINE_string('libvirt_xml_template',
877 utils.abspath('virt/libvirt.xml.template'),
878 'Libvirt XML Template')
879-flags.DEFINE_string('cpuinfo_xml_template',
880- utils.abspath('virt/cpuinfo.xml.template'),
881- 'CpuInfo XML Template (used only live migration now)')
882 flags.DEFINE_string('libvirt_type',
883 'kvm',
884 'Libvirt domain type (valid options are: '
885@@ -94,16 +88,6 @@
886 '',
887 'Override the default libvirt URI (which is dependent'
888 ' on libvirt_type)')
889-flags.DEFINE_string('live_migration_uri',
890- "qemu+tcp://%s/system",
891- 'Define protocol used by live_migration feature')
892-flags.DEFINE_string('live_migration_flag',
893- "VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER",
894- 'Define live migration behavior.')
895-flags.DEFINE_integer('live_migration_bandwidth', 0,
896- 'Define live migration behavior')
897-flags.DEFINE_string('live_migration_timeout_sec', 10,
898- 'Timeout second for pre_live_migration is completed.')
899 flags.DEFINE_bool('allow_project_net_traffic',
900 True,
901 'Whether to allow in project network traffic')
902@@ -162,7 +146,6 @@
903 self.libvirt_uri = self.get_uri()
904
905 self.libvirt_xml = open(FLAGS.libvirt_xml_template).read()
906- self.cpuinfo_xml = open(FLAGS.cpuinfo_xml_template).read()
907 self._wrapped_conn = None
908 self.read_only = read_only
909
910@@ -835,74 +818,6 @@
911
912 return interfaces
913
914- def get_vcpu_number(self):
915- """ Get vcpu number of physical computer. """
916- return self._conn.getMaxVcpus(None)
917-
918- def get_memory_mb(self):
919- """Get the memory size of physical computer ."""
920- meminfo = open('/proc/meminfo').read().split()
921- idx = meminfo.index('MemTotal:')
922- # transforming kb to mb.
923- return int(meminfo[idx + 1]) / 1024
924-
925- def get_local_gb(self):
926- """Get the hdd size of physical computer ."""
927- hddinfo = os.statvfs(FLAGS.instances_path)
928- return hddinfo.f_bsize * hddinfo.f_blocks / 1024 / 1024 / 1024
929-
930- def get_hypervisor_type(self):
931- """ Get hypervisor type """
932- return self._conn.getType()
933-
934- def get_hypervisor_version(self):
935- """ Get hypervisor version """
936- return self._conn.getVersion()
937-
938- def get_cpu_info(self):
939- """ Get cpuinfo information """
940- xmlstr = self._conn.getCapabilities()
941- xml = libxml2.parseDoc(xmlstr)
942- nodes = xml.xpathEval('//cpu')
943- if len(nodes) != 1:
944- msg = 'Unexpected xml format. tag "cpu" must be 1, but %d.' \
945- % len(nodes)
946- msg += '\n' + xml.serialize()
947- raise exception.Invalid(_(msg))
948-
949- arch = xml.xpathEval('//cpu/arch')[0].getContent()
950- model = xml.xpathEval('//cpu/model')[0].getContent()
951- vendor = xml.xpathEval('//cpu/vendor')[0].getContent()
952-
953- topology_node = xml.xpathEval('//cpu/topology')[0].get_properties()
954- topology = dict()
955- while topology_node != None:
956- name = topology_node.get_name()
957- topology[name] = topology_node.getContent()
958- topology_node = topology_node.get_next()
959-
960- keys = ['cores', 'sockets', 'threads']
961- tkeys = topology.keys()
962- if list(set(tkeys)) != list(set(keys)):
963- msg = _('Invalid xml: topology(%s) must have %s')
964- raise exception.Invalid(msg % (str(topology), ', '.join(keys)))
965-
966- feature_nodes = xml.xpathEval('//cpu/feature')
967- features = list()
968- for nodes in feature_nodes:
969- feature_name = nodes.get_properties().getContent()
970- features.append(feature_name)
971-
972- template = ("""{"arch":"%s", "model":"%s", "vendor":"%s", """
973- """"topology":{"cores":"%s", "threads":"%s", """
974- """"sockets":"%s"}, "features":[%s]}""")
975- c = topology['cores']
976- s = topology['sockets']
977- t = topology['threads']
978- f = ['"%s"' % x for x in features]
979- cpu_info = template % (arch, model, vendor, c, s, t, ', '.join(f))
980- return cpu_info
981-
982 def block_stats(self, instance_name, disk):
983 """
984 Note that this function takes an instance name, not an Instance, so
985@@ -933,208 +848,6 @@
986 def refresh_security_group_members(self, security_group_id):
987 self.firewall_driver.refresh_security_group_members(security_group_id)
988
989- def compare_cpu(self, cpu_info):
990- """
991- Check the host cpu is compatible to a cpu given by xml.
992- "xml" must be a part of libvirt.openReadonly().getCapabilities().
993- return values follows by virCPUCompareResult.
994- if 0 > return value, do live migration.
995-
996- 'http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult'
997- """
998- msg = _('Checking cpu_info: instance was launched this cpu.\n: %s ')
999- LOG.info(msg % cpu_info)
1000- dic = json.loads(cpu_info)
1001- xml = str(Template(self.cpuinfo_xml, searchList=dic))
1002- msg = _('to xml...\n: %s ')
1003- LOG.info(msg % xml)
1004-
1005- url = 'http://libvirt.org/html/libvirt-libvirt.html'
1006- url += '#virCPUCompareResult\n'
1007- msg = 'CPU does not have compativility.\n'
1008- msg += 'result:%d \n'
1009- msg += 'Refer to %s'
1010- msg = _(msg)
1011-
1012- # unknown character exists in xml, then libvirt complains
1013- try:
1014- ret = self._conn.compareCPU(xml, 0)
1015- except libvirt.libvirtError, e:
1016- LOG.error(msg % (ret, url))
1017- raise e
1018-
1019- if ret <= 0:
1020- raise exception.Invalid(msg % (ret, url))
1021-
1022- return
1023-
1024- def ensure_filtering_rules_for_instance(self, instance_ref):
1025- """ Setting up inevitable filtering rules on compute node,
1026- and waiting for its completion.
1027- To migrate an instance, filtering rules to hypervisors
1028- and firewalls are inevitable on destination host.
1029- ( Waiting only for filterling rules to hypervisor,
1030- since filtering rules to firewall rules can be set faster).
1031-
1032- Concretely, the below method must be called.
1033- - setup_basic_filtering (for nova-basic, etc.)
1034- - prepare_instance_filter(for nova-instance-instance-xxx, etc.)
1035-
1036- to_xml may have to be called since it defines PROJNET, PROJMASK.
1037- but libvirt migrates those value through migrateToURI(),
1038- so , no need to be called.
1039-
1040- Don't use thread for this method since migration should
1041- not be started when setting-up filtering rules operations
1042- are not completed."""
1043-
1044- # Tf any instances never launch at destination host,
1045- # basic-filtering must be set here.
1046- self.nwfilter.setup_basic_filtering(instance_ref)
1047- # setting up n)ova-instance-instance-xx mainly.
1048- self.firewall_driver.prepare_instance_filter(instance_ref)
1049-
1050- # wait for completion
1051- timeout_count = range(FLAGS.live_migration_timeout_sec * 2)
1052- while len(timeout_count) != 0:
1053- try:
1054- filter_name = 'nova-instance-%s' % instance_ref.name
1055- self._conn.nwfilterLookupByName(filter_name)
1056- break
1057- except libvirt.libvirtError:
1058- timeout_count.pop()
1059- if len(timeout_count) == 0:
1060- ec2_id = instance_ref['hostname']
1061- msg = _('Timeout migrating for %s(%s)')
1062- raise exception.Error(msg % (ec2_id, instance_ref.name))
1063- time.sleep(0.5)
1064-
1065- def live_migration(self, context, instance_ref, dest):
1066- """
1067- Just spawning live_migration operation for
1068- distributing high-load.
1069- """
1070- greenthread.spawn(self._live_migration, context, instance_ref, dest)
1071-
1072- def _live_migration(self, context, instance_ref, dest):
1073- """ Do live migration."""
1074-
1075- # Do live migration.
1076- try:
1077- duri = FLAGS.live_migration_uri % dest
1078-
1079- flaglist = FLAGS.live_migration_flag.split(',')
1080- flagvals = [getattr(libvirt, x.strip()) for x in flaglist]
1081- logical_sum = reduce(lambda x, y: x | y, flagvals)
1082-
1083- bandwidth = FLAGS.live_migration_bandwidth
1084-
1085- if self.read_only:
1086- tmpconn = self._connect(self.libvirt_uri, False)
1087- dom = tmpconn.lookupByName(instance_ref.name)
1088- dom.migrateToURI(duri, logical_sum, None, bandwidth)
1089- tmpconn.close()
1090- else:
1091- dom = self._conn.lookupByName(instance_ref.name)
1092- dom.migrateToURI(duri, logical_sum, None, bandwidth)
1093-
1094- except Exception, e:
1095- id = instance_ref['id']
1096- db.instance_set_state(context, id, power_state.RUNNING, 'running')
1097- for v in instance_ref['volumes']:
1098- db.volume_update(context,
1099- v['id'],
1100- {'status': 'in-use'})
1101-
1102- raise e
1103-
1104- # Waiting for completion of live_migration.
1105- timer = utils.LoopingCall(f=None)
1106-
1107- def wait_for_live_migration():
1108-
1109- try:
1110- state = self.get_info(instance_ref.name)['state']
1111- except exception.NotFound:
1112- timer.stop()
1113- self._post_live_migration(context, instance_ref, dest)
1114-
1115- timer.f = wait_for_live_migration
1116- timer.start(interval=0.5, now=True)
1117-
1118- def _post_live_migration(self, context, instance_ref, dest):
1119- """
1120- Post operations for live migration.
1121- Mainly, database updating.
1122- """
1123- LOG.info('post livemigration operation is started..')
1124- # Detaching volumes.
1125- # (not necessary in current version )
1126-
1127- # Releasing vlan.
1128- # (not necessary in current implementation?)
1129-
1130- # Releasing security group ingress rule.
1131- if FLAGS.firewall_driver == \
1132- 'nova.virt.libvirt_conn.IptablesFirewallDriver':
1133- try:
1134- self.firewall_driver.unfilter_instance(instance_ref)
1135- except KeyError, e:
1136- pass
1137-
1138- # Database updating.
1139- ec2_id = instance_ref['hostname']
1140-
1141- instance_id = instance_ref['id']
1142- fixed_ip = db.instance_get_fixed_address(context, instance_id)
1143- # Not return if fixed_ip is not found, otherwise,
1144- # instance never be accessible..
1145- if None == fixed_ip:
1146- logging.warn('fixed_ip is not found for %s ' % ec2_id)
1147- db.fixed_ip_update(context, fixed_ip, {'host': dest})
1148- network_ref = db.fixed_ip_get_network(context, fixed_ip)
1149- db.network_update(context, network_ref['id'], {'host': dest})
1150-
1151- try:
1152- floating_ip \
1153- = db.instance_get_floating_address(context, instance_id)
1154- # Not return if floating_ip is not found, otherwise,
1155- # instance never be accessible..
1156- if None == floating_ip:
1157- logging.error('floating_ip is not found for %s ' % ec2_id)
1158- else:
1159- floating_ip_ref = db.floating_ip_get_by_address(context,
1160- floating_ip)
1161- db.floating_ip_update(context,
1162- floating_ip_ref['address'],
1163- {'host': dest})
1164- except exception.NotFound:
1165- logging.debug('%s doesnt have floating_ip.. ' % ec2_id)
1166- except:
1167- msg = 'Live migration: Unexpected error:'
1168- msg += '%s cannot inherit floating ip.. ' % ec2_id
1169- logging.error(_(msg))
1170-
1171- # Restore instance/volume state
1172- db.instance_update(context,
1173- instance_id,
1174- {'state_description': 'running',
1175- 'state': power_state.RUNNING,
1176- 'host': dest})
1177-
1178- for v in instance_ref['volumes']:
1179- db.volume_update(context,
1180- v['id'],
1181- {'status': 'in-use'})
1182-
1183- logging.info(_('Live migrating %s to %s finishes successfully')
1184- % (ec2_id, dest))
1185- msg = _(("""Known error: the below error is nomally occurs.\n"""
1186- """Just check if iinstance is successfully migrated.\n"""
1187- """libvir: QEMU error : Domain not found: no domain """
1188- """with matching name.."""))
1189- logging.info(msg)
1190-
1191
1192 class FirewallDriver(object):
1193 def prepare_instance_filter(self, instance):
1194
1195=== modified file 'nova/virt/xenapi_conn.py'
1196--- nova/virt/xenapi_conn.py 2011-01-18 21:19:10 +0000
1197+++ nova/virt/xenapi_conn.py 2011-01-18 22:58:14 +0000
1198@@ -212,36 +212,6 @@
1199 'username': FLAGS.xenapi_connection_username,
1200 'password': FLAGS.xenapi_connection_password}
1201
1202- def get_cpu_info(self):
1203- """This method is supported only libvirt. """
1204- return
1205-
1206- def get_vcpu_number(self):
1207- """This method is supported only libvirt. """
1208- return -1
1209-
1210- def get_memory_mb(self):
1211- """This method is supported only libvirt.."""
1212- return -1
1213-
1214- def get_local_gb(self):
1215- """This method is supported only libvirt.."""
1216- return -1
1217-
1218- def get_hypervisor_type(self):
1219- """This method is supported only libvirt.."""
1220- return
1221-
1222- def get_hypervisor_version(self):
1223- """This method is supported only libvirt.."""
1224- return -1
1225-
1226- def compare_cpu(self, xml):
1227- raise NotImplementedError('This method is supported only libvirt.')
1228-
1229- def live_migration(self, context, instance_ref, dest):
1230- raise NotImplementedError('This method is supported only libvirt.')
1231-
1232
1233 class XenAPISession(object):
1234 """The session to invoke XenAPI SDK calls"""
1235
1236=== modified file 'nova/volume/driver.py'
1237--- nova/volume/driver.py 2011-01-18 18:59:12 +0000
1238+++ nova/volume/driver.py 2011-01-18 22:58:14 +0000
1239@@ -122,7 +122,7 @@
1240 """Removes an export for a logical volume."""
1241 raise NotImplementedError()
1242
1243- def discover_volume(self, _context, volume):
1244+ def discover_volume(self, volume):
1245 """Discover volume on a remote host."""
1246 raise NotImplementedError()
1247
1248@@ -184,35 +184,15 @@
1249 self._try_execute("sudo vblade-persist destroy %s %s" %
1250 (shelf_id, blade_id))
1251
1252- def discover_volume(self, context, volume):
1253+ def discover_volume(self, _volume):
1254 """Discover volume on a remote host."""
1255 self._execute("sudo aoe-discover")
1256 self._execute("sudo aoe-stat", check_exit_code=False)
1257- shelf_id, blade_id = self.db.volume_get_shelf_and_blade(context,
1258- volume['id'])
1259- return "/dev/etherd/e%s.%s" % (shelf_id, blade_id)
1260
1261 def undiscover_volume(self, _volume):
1262 """Undiscover volume on a remote host."""
1263 pass
1264
1265- def check_for_export(self, context, volume_id):
1266- """Make sure whether volume is exported."""
1267- (shelf_id,
1268- blade_id) = self.db.volume_get_shelf_and_blade(context,
1269- volume_id)
1270- (out, _err) = self._execute("sudo vblade-persist ls --no-header")
1271- exists = False
1272- for line in out.split('\n'):
1273- param = line.split(' ')
1274- if len(param) == 6 and param[0] == str(shelf_id) \
1275- and param[1] == str(blade_id) and param[-1] == "run":
1276- exists = True
1277- break
1278- if not exists:
1279- logging.warning(_("vblade process for e%s.%s isn't running.")
1280- % (shelf_id, blade_id))
1281-
1282
1283 class FakeAOEDriver(AOEDriver):
1284 """Logs calls instead of executing."""
1285@@ -296,7 +276,7 @@
1286 iscsi_portal = location.split(",")[0]
1287 return (iscsi_name, iscsi_portal)
1288
1289- def discover_volume(self, _context, volume):
1290+ def discover_volume(self, volume):
1291 """Discover volume on a remote host."""
1292 iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'],
1293 volume['host'])
1294@@ -385,7 +365,7 @@
1295 """Removes an export for a logical volume"""
1296 pass
1297
1298- def discover_volume(self, _context, volume):
1299+ def discover_volume(self, volume):
1300 """Discover volume on a remote host"""
1301 return "rbd:%s/%s" % (FLAGS.rbd_pool, volume['name'])
1302
1303@@ -434,7 +414,7 @@
1304 """Removes an export for a logical volume"""
1305 pass
1306
1307- def discover_volume(self, _context, volume):
1308+ def discover_volume(self, volume):
1309 """Discover volume on a remote host"""
1310 return "sheepdog:%s" % volume['name']
1311
1312
1313=== modified file 'nova/volume/manager.py'
1314--- nova/volume/manager.py 2011-01-19 00:46:43 +0000
1315+++ nova/volume/manager.py 2011-01-18 22:58:14 +0000
1316@@ -138,7 +138,7 @@
1317 if volume_ref['host'] == self.host and FLAGS.use_local_volumes:
1318 path = self.driver.local_path(volume_ref)
1319 else:
1320- path = self.driver.discover_volume(context, volume_ref)
1321+ path = self.driver.discover_volume(volume_ref)
1322 return path
1323
1324 def remove_compute_volume(self, context, volume_id):
1325@@ -149,10 +149,3 @@
1326 return True
1327 else:
1328 self.driver.undiscover_volume(volume_ref)
1329-
1330- def check_for_export(self, context, instance_id):
1331- """Make sure whether volume is exported."""
1332- if FLAGS.volume_driver == 'nova.volume.driver.AOEDriver':
1333- instance_ref = self.db.instance_get(instance_id)
1334- for v in instance_ref['volumes']:
1335- self.driver.check_for_export(context, v['id'])
1336
1337=== modified file 'setup.py'
1338--- setup.py 2011-01-13 18:02:17 +0000
1339+++ setup.py 2011-01-18 22:58:14 +0000
1340@@ -34,7 +34,6 @@
1341 version_file.write(vcsversion)
1342
1343
1344-
1345 class local_BuildDoc(BuildDoc):
1346 def run(self):
1347 for builder in ['html', 'man']: