Essex 2012.1.3 : Error deleting instance with 2 Nova Volumes attached

Bug #1079745 reported by Alejandro Comisario
This bug affects 1 person
Affects                    Status        Importance  Assigned to  Milestone
OpenStack Compute (nova)   Invalid       Undecided   Unassigned
nova (Ubuntu)              Fix Released  Undecided   Unassigned
nova (Ubuntu Precise)      Fix Released  Undecided   Unassigned

Bug Description

Hi, we are using "Essex 2012.1.3" with Ubuntu 12.04. When we delete an instance that has 2 nova volumes attached, we see in the compute logs that it tries to detach the same volume twice, getting stuck on the iscsiadm command. This leaves both nova volumes in the "in-use" state, and the instance in the "deleting" task state and shutoff state.

Here's the volume-list output for the volumes after the "delete" command:

+-----+----------------+--------------+------+-------------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+-----+----------------+--------------+------+-------------+--------------------------------------+
| 306 | in-use | ev-00000160 | 8 | None | 6e97906c-6122-408a-b151-7d23aaae089b |
| 311 | in-use | ev-00000161 | 10 | None | 6e97906c-6122-408a-b151-7d23aaae089b |
+-----+----------------+--------------+------+-------------+--------------------------------------+

Here's the "show" command for the vm :

+-------------------------------------+-------------------------------------------------------------+
| Property | Value |
+-------------------------------------+-------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | DC2-r9-23vms |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-000003f8 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | deleting |
| OS-EXT-STS:vm_state | shutoff |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-11-15T18:13:16Z |
| flavor | gz.micro (1) |
| hostId | 1251f2dc1db48c4c0692d5dd7152312461e83536014d81da5c8c8a27 |
| id | 6e97906c-6122-408a-b151-7d23aaae089b |
| image | ami_PRECISE_20120927 (f76cb3f7-7e83-4fb1-a189-611f6a2e9eef) |
| key_name | |
| metadata | {u'region': u'DC2_r9', u'name': u'e-000004dc'} |
| name | e-000004dc |
| private network | 172.16.76.218 |
| status | SHUTOFF |
| tenant_id | 7ca5bf73c54f4647b689b01c2a4f0bde |
| updated | 2012-11-16T14:56:58Z |
| user_id | 5744a1b442e8400ca3da9e85845045e4 |
+-------------------------------------+-------------------------------------------------------------+

And here is the nova-compute.log output for the operation that gets stuck:

2012-11-16 09:46:33 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'asoc_arquitectura'], u'_context_request_id': u'req-ed637bc2-9e77-4564-bb27-27658cc699db', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'6e97906c-6122-408a-b151-7d23aaae089b'}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': False, u'_context_project_id': u'7ca5bf73c54f4647b689b01c2a4f0bde', u'_context_timestamp': u'2012-11-16T14:46:06.546864', u'_context_user_id': u'5744a1b442e8400ca3da9e85845045e4', u'method': u'terminate_instance', u'_context_remote_address': u'172.16.150.76'} from (pid=25044) _safe_log /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/common.py:160
2012-11-16 09:46:33 DEBUG nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] unpacked context: {'user_id': u'5744a1b442e8400ca3da9e85845045e4', 'roles': [u'asoc_arquitectura'], 'timestamp': '2012-11-16T14:46:06.546864', 'auth_token': '<SANITIZED>', 'remote_address': u'172.16.150.76', 'is_admin': False, 'request_id': u'req-ed637bc2-9e77-4564-bb27-27658cc699db', 'project_id': u'7ca5bf73c54f4647b689b01c2a4f0bde', 'read_deleted': u'no'} from (pid=25044) _safe_log /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/common.py:160
2012-11-16 09:46:33 INFO nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] check_instance_lock: decorating: |<function terminate_instance at 0x1d56de8>|
2012-11-16 09:46:33 INFO nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x7f6da2a1f610>| |<nova.rpc.amqp.RpcContext object at 0x407d5d0>| |6e97906c-6122-408a-b151-7d23aaae089b|
2012-11-16 09:46:33 DEBUG nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] instance 6e97906c-6122-408a-b151-7d23aaae089b: getting locked state from (pid=25044) get_lock /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py:1626
2012-11-16 09:46:33 INFO nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] check_instance_lock: locked: |False|
2012-11-16 09:46:33 INFO nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] check_instance_lock: admin: |False|
2012-11-16 09:46:33 INFO nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] check_instance_lock: executing: |<function terminate_instance at 0x1d56de8>|
2012-11-16 09:46:33 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Attempting to grab semaphore "6e97906c-6122-408a-b151-7d23aaae089b" for method "do_terminate_instance"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:927
2012-11-16 09:46:33 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Got semaphore "6e97906c-6122-408a-b151-7d23aaae089b" for method "do_terminate_instance"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:931
2012-11-16 09:46:33 AUDIT nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] [instance: 6e97906c-6122-408a-b151-7d23aaae089b] Terminating instance
2012-11-16 09:46:33 DEBUG nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Making asynchronous call on network ... from (pid=25044) multicall /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/amqp.py:326
2012-11-16 09:46:33 DEBUG nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] MSG_ID is a60c1d4fb94344b3a8a4ded3ff83ed86 from (pid=25044) multicall /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/amqp.py:329
2012-11-16 09:46:33 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000003f8/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:33 DEBUG nova.virt.libvirt.connection [-] skipping None since it looks like volume from (pid=25044) get_instance_disk_info /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/connection.py:2287
2012-11-16 09:46:33 DEBUG nova.virt.libvirt.connection [-] skipping None since it looks like volume from (pid=25044) get_instance_disk_info /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/connection.py:2287
2012-11-16 09:46:34 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000385/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:34 DEBUG nova.compute.manager [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] [instance: 6e97906c-6122-408a-b151-7d23aaae089b] Deallocating network for instance from (pid=25044) _deallocate_network /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py:641
2012-11-16 09:46:34 DEBUG nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Making asynchronous call on network ... from (pid=25044) multicall /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/amqp.py:326
2012-11-16 09:46:34 DEBUG nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] MSG_ID is 025d818087da436697458c520195300f from (pid=25044) multicall /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/amqp.py:329
2012-11-16 09:46:34 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000385/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:35 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000308/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:35 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000308/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:36 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000009/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:38 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Attempting to grab semaphore "iptables" for method "apply"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:927
2012-11-16 09:46:38 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Got semaphore "iptables" for method "apply"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:931
2012-11-16 09:46:38 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Attempting to grab file lock "iptables" for method "apply"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:935
2012-11-16 09:46:38 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Got file lock "iptables" for method "apply"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:942
2012-11-16 09:46:38 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iptables-save -t filter from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:39 INFO nova.virt.libvirt.connection [-] [instance: 6e97906c-6122-408a-b151-7d23aaae089b] Instance destroyed successfully.
2012-11-16 09:46:39 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000009/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:39 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iptables-restore from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:40 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000002fd/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:40 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000002fd/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:40 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iptables-save -t nat from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:41 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000161/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:41 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iptables-restore from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:41 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-00000161/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:42 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000000b9/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:42 DEBUG nova.network.linux_net [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] IPTablesManager.apply completed with success from (pid=25044) apply /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/network/linux_net.py:335
2012-11-16 09:46:44 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Attempting to grab semaphore "connect_volume" for method "disconnect_volume"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:927
2012-11-16 09:46:44 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Got semaphore "connect_volume" for method "disconnect_volume"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:931
2012-11-16 09:46:53 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iscsiadm -m node -T iqn.1992-08.com.netapp:sn.1574861693 -p 10.1.1.160:3260 --op update -n node.startup -v manual from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:54 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000000b9/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:54 DEBUG nova.virt.libvirt.volume [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'manual'): stdout= stderr= from (pid=25044) _run_iscsiadm /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py:112
2012-11-16 09:46:54 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iscsiadm -m node -T iqn.1992-08.com.netapp:sn.1574861693 -p 10.1.1.160:3260 --logout from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:55 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000003fc/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:55 DEBUG nova.virt.libvirt.volume [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] iscsiadm ('--logout',): stdout=Logging out of session [sid: 4, target: iqn.1992-08.com.netapp:sn.1574861693, portal: 10.1.1.160,3260]
Logout of [sid: 4, target: iqn.1992-08.com.netapp:sn.1574861693, portal: 10.1.1.160,3260]: successful
 stderr= from (pid=25044) _run_iscsiadm /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py:112
2012-11-16 09:46:55 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iscsiadm -m node -T iqn.1992-08.com.netapp:sn.1574861693 -p 10.1.1.160:3260 --op delete from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:55 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-000003fc/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:56 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-0000010d/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:46:56 DEBUG nova.virt.libvirt.volume [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] iscsiadm ('--op', 'delete'): stdout= stderr= from (pid=25044) _run_iscsiadm /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py:112
2012-11-16 09:46:56 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Attempting to grab semaphore "connect_volume" for method "disconnect_volume"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:927
2012-11-16 09:46:56 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Got semaphore "connect_volume" for method "disconnect_volume"... from (pid=25044) inner /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:931
2012-11-16 09:47:06 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Running cmd (subprocess): sudo nova-rootwrap iscsiadm -m node -T iqn.1992-08.com.netapp:sn.1574861693 -p 10.1.1.160:3260 --op update -n node.startup -v manual from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:47:06 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img info /var/lib/nova/instances/instance-0000010d/disk from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:219
2012-11-16 09:47:06 DEBUG nova.utils [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Result was 255 from (pid=25044) execute /usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py:235
2012-11-16 09:47:06 ERROR nova.rpc.amqp [req-ed637bc2-9e77-4564-bb27-27658cc699db 5744a1b442e8400ca3da9e85845045e4 7ca5bf73c54f4647b689b01c2a4f0bde] Exception during message handling
2012-11-16 09:47:06 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/rpc/amqp.py", line 253, in _process_data
2012-11-16 09:47:06 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/exception.py", line 114, in wrapped
2012-11-16 09:47:06 TRACE nova.rpc.amqp return f(*args, **kw)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 159, in decorated_function
2012-11-16 09:47:06 TRACE nova.rpc.amqp function(self, context, instance_uuid, *args, **kwargs)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 183, in decorated_function
2012-11-16 09:47:06 TRACE nova.rpc.amqp sys.exc_info())
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-11-16 09:47:06 TRACE nova.rpc.amqp self.gen.next()
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 177, in decorated_function
2012-11-16 09:47:06 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 772, in terminate_instance
2012-11-16 09:47:06 TRACE nova.rpc.amqp do_terminate_instance()
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py", line 945, in inner
2012-11-16 09:47:06 TRACE nova.rpc.amqp retval = f(*args, **kwargs)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 765, in do_terminate_instance
2012-11-16 09:47:06 TRACE nova.rpc.amqp self._delete_instance(context, instance)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 743, in _delete_instance
2012-11-16 09:47:06 TRACE nova.rpc.amqp self._shutdown_instance(context, instance, 'Terminating')
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/compute/manager.py", line 712, in _shutdown_instance
2012-11-16 09:47:06 TRACE nova.rpc.amqp block_device_info)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/connection.py", line 487, in destroy
2012-11-16 09:47:06 TRACE nova.rpc.amqp cleanup=True)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/connection.py", line 479, in _destroy
2012-11-16 09:47:06 TRACE nova.rpc.amqp mount_device)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/connection.py", line 525, in volume_driver_method
2012-11-16 09:47:06 TRACE nova.rpc.amqp return method(connection_info, *args, **kwargs)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py", line 945, in inner
2012-11-16 09:47:06 TRACE nova.rpc.amqp retval = f(*args, **kwargs)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py", line 204, in disconnect_volume
2012-11-16 09:47:06 TRACE nova.rpc.amqp self._iscsiadm_update(iscsi_properties, "node.startup", "manual")
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py", line 118, in _iscsiadm_update
2012-11-16 09:47:06 TRACE nova.rpc.amqp return self._run_iscsiadm(iscsi_properties, iscsi_command)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/virt/libvirt/volume.py", line 110, in _run_iscsiadm
2012-11-16 09:47:06 TRACE nova.rpc.amqp check_exit_code=check_exit_code)
2012-11-16 09:47:06 TRACE nova.rpc.amqp File "/usr/local/lib/python2.7/dist-packages/nova-2012.1.3-py2.7.egg/nova/utils.py", line 242, in execute
2012-11-16 09:47:06 TRACE nova.rpc.amqp cmd=' '.join(cmd))
2012-11-16 09:47:06 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2012-11-16 09:47:06 TRACE nova.rpc.amqp Command: sudo nova-rootwrap iscsiadm -m node -T iqn.1992-08.com.netapp:sn.1574861693 -p 10.1.1.160:3260 --op update -n node.startup -v manual
2012-11-16 09:47:06 TRACE nova.rpc.amqp Exit code: 255
2012-11-16 09:47:06 TRACE nova.rpc.amqp Stdout: ''
2012-11-16 09:47:06 TRACE nova.rpc.amqp Stderr: 'iscsiadm: no records found!\n'
2012-11-16 09:47:06 TRACE nova.rpc.amqp

Revision history for this message
Alejandro Comisario (alejandro-comisario) wrote :

For us, this is very critical.

Revision history for this message
Russell Bryant (russellb) wrote :

Marked as Invalid for master since the code is not even there anymore (moved to Cinder).

I nominated the bug for the Essex series for the stable-maint-essex team.

Changed in nova:
status: New → Incomplete
status: Incomplete → Invalid
Revision history for this message
Russell Bryant (russellb) wrote :

Oops, this particular code hasn't moved, but it has certainly changed a lot since Essex. Feel free to ping this bug if you reproduce on master and we can change the status.

Revision history for this message
Alejandro Comisario (alejandro-comisario) wrote :

Actually, I don't want to reproduce it on master, since we are not planning to try Grizzly until the first milestones. Since we are running Essex in production, we would like to get this fixed in Essex.

So, how can I get this reviewed for the Essex release?
I know there's the 2012.1.4 release, but I don't know if that release fixes this issue.

Revision history for this message
Rafi Khardalian (rkhardalian) wrote :

I'll take a look to see if I can reproduce this in our Essex environments and propose a fix accordingly.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (stable/essex)

Fix proposed to branch: stable/essex
Review: https://review.openstack.org/16356

Revision history for this message
Rafi Khardalian (rkhardalian) wrote :

I don't know why you're getting a duplicate disconnect. I wasn't able to reproduce your issue in our environments, specifically because we already had the proposed patch applied in our local stable tree.

The patch (from Dan Smith) is a backport of what went into master some time back, which accepts 0 and 255 as exit codes for disconnect operations. It will mitigate this problem.
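The mechanism behind the backported patch can be sketched as a whitelist of acceptable exit codes passed to the command runner; this is a minimal illustration, not the actual nova.utils.execute implementation, and the function name and signature are simplified:

```python
import subprocess

def execute(*cmd, check_exit_code=(0,)):
    """Run a command; raise only if the exit code is not whitelisted.

    Simplified sketch of the check_exit_code idea used by Nova's
    command wrapper; real Nova code differs in names and details.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode not in check_exit_code:
        raise RuntimeError("Unexpected error while running command. "
                           "Exit code: %d" % proc.returncode)
    return out, err

# The fix amounts to tolerating 255 ("no records found!") when
# tearing down an iSCSI connection that is already gone:
execute("true", check_exit_code=(0, 255))
```

With only the default whitelist of (0,), the same iscsiadm failure would raise ProcessExecutionError and abort the instance teardown, which is exactly the behavior seen in the traceback above.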

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (stable/essex)

Reviewed: https://review.openstack.org/16356
Committed: http://github.com/openstack/nova/commit/bd1024194a384bbc2f332e74587b577d7b79c539
Submitter: Jenkins
Branch: stable/essex

commit bd1024194a384bbc2f332e74587b577d7b79c539
Author: Dan Smith <email address hidden>
Date: Mon Jul 9 13:37:01 2012 -0700

    When deleting an instance, avoid freakout if iscsi device is gone

    It's fairly easy to get into a state where an instance is not running,
    the iscsi infrastructure has been torn down, and the instance cannot
    be deleted because the iscsi device cannot be successfully removed.
    This patch ignores the associated return value for that scenario and
    marches forward in the teardown process.

    Fixes bug 1079745 (Cherry picked for Essex)

    Change-Id: Ib28790892eebe341ac10a92250cf872605fefe9b

Revision history for this message
Brano Zarnovican (zarnovican) wrote :

The fix above only ignores return codes 0 and 255. My iscsiadm is returning 21.

2012-12-11 10:39:04 TRACE nova.rpc.amqp File "/opt/common-python/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 221, in disconnect_volume
2012-12-11 10:39:04 TRACE nova.rpc.amqp check_exit_code=[0, 255])
2012-12-11 10:39:04 TRACE nova.rpc.amqp File "/opt/common-python/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 134, in _iscsiadm_update
2012-12-11 10:39:04 TRACE nova.rpc.amqp return self._run_iscsiadm(iscsi_properties, iscsi_command, **kwargs)
2012-12-11 10:39:04 TRACE nova.rpc.amqp File "/opt/common-python/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 125, in _run_iscsiadm
2012-12-11 10:39:04 TRACE nova.rpc.amqp check_exit_code=check_exit_code)
2012-12-11 10:39:04 TRACE nova.rpc.amqp File "/opt/common-python/lib/python2.6/site-packages/nova/utils.py", line 243, in execute
2012-12-11 10:39:04 TRACE nova.rpc.amqp cmd=' '.join(cmd))
2012-12-11 10:39:04 TRACE nova.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2012-12-11 10:39:04 TRACE nova.rpc.amqp Command: sudo /opt/openstack/bin/nova-rootwrap iscsiadm -m node -T iqn.2010-10.org.openstack:vol-00000385 -p 172.30.0.232:3260 --op update -n node.startup -v manual
2012-12-11 10:39:04 TRACE nova.rpc.amqp Exit code: 21
2012-12-11 10:39:04 TRACE nova.rpc.amqp Stdout: ''
2012-12-11 10:39:04 TRACE nova.rpc.amqp Stderr: 'iscsiadm: No records found\n'
2012-12-11 10:39:04 TRACE nova.rpc.amqp

man iscsiadm

       21 ISCSI_ERR_NO_OBJS_FOUND - no records/targets/sessions/portals found to execute operation on.

# rpm -qf /sbin/iscsiadm
iscsi-initiator-utils-6.2.0.872-41.el6.x86_64

Workaround:
Either change [0, 255] to [0, 21, 255], or ignore all exit codes by changing it to range(256).
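The widened whitelist from the workaround can be sketched as follows; this is an illustration of the logic rather than the actual nova code, and the constant names follow the iscsiadm man page where documented (the meaning of 255 on the reporter's older iscsiadm is an assumption based on the log above):

```python
# iscsiadm exit codes that indicate "nothing to disconnect" rather than a
# real failure during instance teardown.
ISCSI_OK = 0
ISCSI_ERR_NO_OBJS_FOUND = 21  # no records/targets/sessions/portals found
ISCSI_ERR_GENERIC = 255       # what older iscsiadm returned for
                              # "no records found!" per the earlier trace

ACCEPTABLE_DISCONNECT_CODES = (ISCSI_OK,
                               ISCSI_ERR_NO_OBJS_FOUND,
                               ISCSI_ERR_GENERIC)

def disconnect_is_fatal(exit_code):
    """Return True if an iscsiadm exit code seen during volume
    disconnect should abort the instance teardown."""
    return exit_code not in ACCEPTABLE_DISCONNECT_CODES
```

Under this scheme, a disconnect against an already-removed record (exit code 21 or 255) proceeds with teardown, while any other nonzero exit code still surfaces as an error.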

Changed in nova (Ubuntu):
status: New → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote : Please test proposed package

Hello Alejandro, or anyone else affected,

Accepted nova into precise-proposed. The package will build now and be available at http://launchpad.net/ubuntu/+source/nova/2012.1.3+stable-20130423-e52e6912-0ubuntu1 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

Changed in nova (Ubuntu Precise):
status: New → Fix Committed
tags: added: verification-needed
Revision history for this message
Yolanda Robla (yolanda.robla) wrote : Verification report.

Please find the attached test log from the Ubuntu Server Team's CI infrastructure. As part of the verification process for this bug, Nova has been deployed and configured across multiple nodes using precise-proposed as an installation source. After successful bring-up and configuration of the cluster, a number of exercises and smoke tests were invoked to ensure the updated package did not introduce any regressions. A number of test iterations were carried out to catch any possible transient errors.

Please note the list of installed packages at the top and bottom of the report.

For records of upstream test coverage of this update, please see the Jenkins links in the comments of the relevant upstream code-review(s):

Stable review: https://review.openstack.org/16356

As per the provisional Micro Release Exception granted to this package by the Technical Board, we hope this contributes toward verification of this update.

Revision history for this message
Yolanda Robla (yolanda.robla) wrote :

Test coverage log.


tags: added: verification-done
removed: verification-needed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package nova - 2012.1.3+stable-20130423-e52e6912-0ubuntu1

---------------
nova (2012.1.3+stable-20130423-e52e6912-0ubuntu1) precise-proposed; urgency=low

  * Resynchronize with stable/essex (e52e6912) (LP: #1089488):
    - [48e81f1] VNC proxy can be made to connect to wrong VM LP: 1125378
    - [3bf5a58] snat rule too broad for some network configurations LP: 1048765
    - [efaacda] DOS by allocating all fixed ips LP: 1125468
    - [b683ced] Add nosehtmloutput as a test dependency.
    - [45274c8] Nova unit tests not running, but still passing for stable/essex
      LP: 1132835
    - [e02b459] vnc unit-test fixes
    - [87361d3] Jenkins jobs fail because of incompatibility between sqlalchemy-
      migrate and the newest sqlalchemy-0.8.0b1 (LP: #1073569)
    - [e98928c] VNC proxy can be made to connect to wrong VM LP: 1125378
    - [c0a10db] DoS through XML entity expansion (CVE-2013-1664) LP: 1100282
    - [243d516] No authentication on block device used for os-volume_boot
      LP: 1069904
    - [80fefe5] use_single_default_gateway does not function correctly
      (LP: #1075859)
    - [bd10241] Essex 2012.1.3 : Error deleting instance with 2 Nova Volumes
      attached (LP: #1079745)
    - [86a5937] do_refresh_security_group_rules in nova.virt.firewall is very
      slow (LP: #1062314)
    - [ae9c5f4] deallocate_fixed_ip attempts to update an already deleted
      fixed_ip (LP: #1017633)
    - [20f98c5] failed to allocate fixed ip because old deleted one exists
      (LP: #996482)
    - [75f6922] snapshot stays in saving state if the vm base image is deleted
      (LP: #921774)
    - [1076699] lock files may be removed in error due to permissions issues
      (LP: #1051924)
    - [40c5e94] ensure_default_security_group() does not call sgh (LP: #1050982)
    - [4eebe76] At termination, LXC rootfs is not always unmounted before
      rmtree() is called (LP: #1046313)
    - [47dabb3] Heavily loaded nova-compute instances don't send reports
      frequently enough (LP: #1045152)
    - [b375b4f] When attach volume lost attach when node restart (LP: #1004791)
    - [4ac2dcc] nova usage-list returns wrong usage (LP: #1043999)
    - [014fcbc] Bridge port's hairpin mode not set after resuming a machine
      (LP: #1040537)
    - [2f35f8e] Nova flavor ephemeral space size reported incorrectly
      (LP: #1026210)
  * Dropped, superseded by new snapshot:
    - debian/patches/CVE-2013-0335.patch: [48e81f1]
    - debian/patches/CVE-2013-1838.patch: [efaacda]
    - debian/patches/CVE-2013-1664.patch: [c0a10db]
    - debian/patches/CVE-2013-0208.patch: [243d516]
 -- Yolanda <email address hidden> Mon, 22 Apr 2013 12:37:08 +0200

Changed in nova (Ubuntu Precise):
status: Fix Committed → Fix Released
Revision history for this message
Scott Kitterman (kitterman) wrote : Update Released

The verification of this Stable Release Update has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates, please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Sean Dague (sdague)
no longer affects: nova/essex