Comment 2 for bug 747922

Masanori Itoh (itohm) wrote:

BTW, if you use euca-reboot-instances on KVM-based systems, the issue comes back to the nova side.
At the moment, libvirt does not support rebooting KVM instances, and the current implementation of
RebootInstance is as follows.

  trunk/nova/virt/libvirt_conn.py
    473     def reboot(self, instance):
    474         self.destroy(instance, False)  # DESTROY ONCE
    475         xml = self.to_xml(instance)

One idea could be calling virsh dumpxml on the instance to be rebooted and updating the above xml here.

    476         self.firewall_driver.setup_basic_filtering(instance)
    477         self.firewall_driver.prepare_instance_filter(instance)
    478         self._conn.createXML(xml, 0)  # CREATE AGAIN, AND THERE IS NO CODE TO RE-ATTACH EBSs.
    479         self.firewall_driver.apply_instance_filter(instance)
    480
    481         timer = utils.LoopingCall(f=None)
    482
    483         def _wait_for_reboot():
    484             try:
    485                 state = self.get_info(instance['name'])['state']
    486                 db.instance_set_state(context.get_admin_context(),
    487                                       instance['id'], state)
    488                 if state == power_state.RUNNING:
    489                     LOG.debug(_('instance %s: rebooted'), instance['name'])
    490                     timer.stop()
    491             except Exception, exn:
    492                 LOG.exception(_('_wait_for_reboot failed: %s'), exn)
    493                 db.instance_set_state(context.get_admin_context(),
    494                                       instance['id'],
    495                                       power_state.SHUTDOWN)
    496                 timer.stop()
    497
    498         timer.f = _wait_for_reboot
    499         return timer.start(interval=0.5, now=True)
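The dumpxml idea above could be sketched roughly as follows. This is only an illustration, not nova code: `reboot_preserving_devices` is a hypothetical helper, and `conn` is assumed to be a libvirt connection object (e.g. from libvirt.open()). The point is that the live domain XML, unlike the XML regenerated by to_xml(), still describes block devices attached after boot.

```python
def reboot_preserving_devices(conn, instance_name):
    """Hypothetical sketch: capture the live domain XML (the
    virsh dumpxml equivalent) before destroying the domain, then
    re-create from that XML so attached volumes come back too."""
    dom = conn.lookupByName(instance_name)
    xml = dom.XMLDesc(0)      # live XML, includes attached disks
    dom.destroy()             # DESTROY ONCE, as in reboot()
    conn.createXML(xml, 0)    # CREATE AGAIN, from the captured XML
    return xml
```

The firewall-driver calls and the _wait_for_reboot polling from the quoted code would of course still be needed around this in a real fix.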

-Masanori