LVM hang during tempest tests

Bug #1373513 reported by Vish Ishaya
This bug affects 11 people
Affects                    Status        Importance  Assigned to    Milestone
Cinder                     Fix Released  Critical    John Griffith
OpenStack Compute (nova)   Invalid       Critical    Unassigned

Bug Description

Managed to trigger a hang in lvm create

Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.704164] INFO: task lvm:14805 blocked for more than 120 seconds.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705096] Not tainted 3.13.0-35-generic #62-Ubuntu
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.705839] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706871] lvm D ffff8801ffd14440 0 14805 14804 0x00000000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706876] ffff880068f9dae0 0000000000000082 ffff8801a14bc800 ffff880068f9dfd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706879] 0000000000014440 0000000000014440 ffff8801a14bc800 ffff8801ffd14cd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706881] 0000000000000000 ffff88004063c280 0000000000000000 ffff8801a14bc800
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706883] Call Trace:
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706895] [<ffffffff81722a6d>] io_schedule+0x9d/0x140
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706914] [<ffffffff811fac94>] do_blockdev_direct_IO+0x1ce4/0x2910
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706918] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706920] [<ffffffff811fb915>] __blockdev_direct_IO+0x55/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706922] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706924] [<ffffffff811f61f6>] blkdev_direct_IO+0x56/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706926] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706937] [<ffffffff8115106b>] generic_file_aio_read+0x69b/0x700
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706947] [<ffffffff811cca78>] ? path_openat+0x158/0x640
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706953] [<ffffffff810f3c92>] ? from_kgid_munged+0x12/0x20
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706955] [<ffffffff811f667b>] blkdev_aio_read+0x4b/0x70
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706958] [<ffffffff811bc99a>] do_sync_read+0x5a/0x90
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706960] [<ffffffff811bd035>] vfs_read+0x95/0x160
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706962] [<ffffffff811bdb49>] SyS_read+0x49/0xa0
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706966] [<ffffffff8172ed6d>] system_call_fastpath+0x1a/0x1f
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.706968] INFO: task lvs:14822 blocked for more than 120 seconds.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.707774] Not tainted 3.13.0-35-generic #62-Ubuntu
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.708507] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709535] lvs D ffff8801ffc14440 0 14822 14821 0x00000000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709537] ffff880009ffdae0 0000000000000082 ffff8800095e1800 ffff880009ffdfd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709539] 0000000000014440 0000000000014440 ffff8800095e1800 ffff8801ffc14cd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709541] 0000000000000000 ffff880003d59900 0000000000000000 ffff8800095e1800
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709543] Call Trace:
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709547] [<ffffffff81722a6d>] io_schedule+0x9d/0x140
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709549] [<ffffffff811fac94>] do_blockdev_direct_IO+0x1ce4/0x2910
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709551] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709554] [<ffffffff811fb915>] __blockdev_direct_IO+0x55/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709555] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709557] [<ffffffff811f61f6>] blkdev_direct_IO+0x56/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709559] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709562] [<ffffffff8115106b>] generic_file_aio_read+0x69b/0x700
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709564] [<ffffffff811cca78>] ? path_openat+0x158/0x640
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709566] [<ffffffff810f3c92>] ? from_kgid_munged+0x12/0x20
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709568] [<ffffffff811f667b>] blkdev_aio_read+0x4b/0x70
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709570] [<ffffffff811bc99a>] do_sync_read+0x5a/0x90
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709572] [<ffffffff811bd035>] vfs_read+0x95/0x160
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709573] [<ffffffff811bdb49>] SyS_read+0x49/0xa0
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709575] [<ffffffff8172ed6d>] system_call_fastpath+0x1a/0x1f
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.709578] INFO: task lvs:14984 blocked for more than 120 seconds.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.710411] Not tainted 3.13.0-35-generic #62-Ubuntu
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.711153] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712193] lvs D ffff8801ffc94440 0 14984 14983 0x00000000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712196] ffff880101711ae0 0000000000000086 ffff880007d2b000 ffff880101711fd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712198] 0000000000014440 0000000000014440 ffff880007d2b000 ffff8801ffc94cd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712200] 0000000000000000 ffff880017fd2300 0000000000000000 ffff880007d2b000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712202] Call Trace:
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712219] [<ffffffff81722a6d>] io_schedule+0x9d/0x140
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712221] [<ffffffff811fac94>] do_blockdev_direct_IO+0x1ce4/0x2910
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712224] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712226] [<ffffffff811fb915>] __blockdev_direct_IO+0x55/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712228] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712230] [<ffffffff811f61f6>] blkdev_direct_IO+0x56/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712231] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712234] [<ffffffff8115106b>] generic_file_aio_read+0x69b/0x700
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712236] [<ffffffff811cca78>] ? path_openat+0x158/0x640
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712238] [<ffffffff810f3c92>] ? from_kgid_munged+0x12/0x20
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712249] [<ffffffff811f667b>] blkdev_aio_read+0x4b/0x70
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712256] [<ffffffff811bc99a>] do_sync_read+0x5a/0x90
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712264] [<ffffffff811bd035>] vfs_read+0x95/0x160
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712270] [<ffffffff811bdb49>] SyS_read+0x49/0xa0
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712276] [<ffffffff8172ed6d>] system_call_fastpath+0x1a/0x1f
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.712278] INFO: task vgs:15086 blocked for more than 120 seconds.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.713149] Not tainted 3.13.0-35-generic #62-Ubuntu
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.713870] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714919] vgs D ffff8801ffc94440 0 15086 15061 0x00000000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] ffff88014411fae0 0000000000000082 ffff8801f2f7e000 ffff88014411ffd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] 0000000000014440 0000000000014440 ffff8801f2f7e000 ffff8801ffc94cd8
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] 0000000000000000 ffff880017fd2580 0000000000000000 ffff8801f2f7e000
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] Call Trace:
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] [<ffffffff81722a6d>] io_schedule+0x9d/0x140
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] [<ffffffff811fac94>] do_blockdev_direct_IO+0x1ce4/0x2910
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714921] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714923] [<ffffffff811fb915>] __blockdev_direct_IO+0x55/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714924] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714926] [<ffffffff811f61f6>] blkdev_direct_IO+0x56/0x60
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714928] [<ffffffff811f5b00>] ? I_BDEV+0x10/0x10
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714930] [<ffffffff8115106b>] generic_file_aio_read+0x69b/0x700
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714932] [<ffffffff811cca78>] ? path_openat+0x158/0x640
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714934] [<ffffffff810f3c92>] ? from_kgid_munged+0x12/0x20
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714936] [<ffffffff811f667b>] blkdev_aio_read+0x4b/0x70
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714938] [<ffffffff811bc99a>] do_sync_read+0x5a/0x90
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714940] [<ffffffff811bd035>] vfs_read+0x95/0x160
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714942] [<ffffffff811bdb49>] SyS_read+0x49/0xa0
Sep 22 21:19:01 devstack-trusty-hpcloud-b5-2344254 kernel: [ 3120.714944] [<ffffffff8172ed6d>] system_call_fastpath+0x1a/0x1f

Link to logs: http://logs.openstack.org/22/119522/9/check/check-tempest-dsvm-postgres-full/46a5f8c/logs/syslog.txt.gz#_Sep_22_21_19_01

Changed in cinder:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
John Griffith (john-griffith) wrote :

I wonder if this is a result of overall system load and perhaps flushing to disk. Experimenting with some ideas, like changing the I/O scheduler, might be worth a shot.
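
If anyone wants to experiment with the I/O scheduler idea, a minimal sketch (the device name vda is an assumption; adjust for the host, and note 3.13 kernels offer noop/deadline/cfq):

    cat /sys/block/vda/queue/scheduler                         # current scheduler is shown in [brackets]
    echo deadline | sudo tee /sys/block/vda/queue/scheduler    # switch from the default cfq to deadline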

Also, I've been considering a proposal to attach a Cinder volume to our OpenStack instance and use that as our VG in gating, rather than the loopback file. I don't have any data to say this is better or worse, but it's an idea I've kicked around for a bit now.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is a related diagnostics patch that I need to update to make rootwrap work:

https://review.openstack.org/#/c/126735/

Revision history for this message
Matt Riedemann (mriedem) wrote :

Looking at a logstash query like this:

message:"Failed to delete resource" AND message:"within the required time" AND tags:"console"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIGRlbGV0ZSByZXNvdXJjZVwiIEFORCBtZXNzYWdlOlwid2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTMyMTkyOTUxMDgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

That's coming mostly from volume-related tests, so I think they are all related to the same thing: something is doing a blocking (on I/O?) call in Cinder on the delete flow (or something using the same lock that gets into the delete flow, like a volume update periodic task, maybe), and we time out.

It would be nice if that Tempest error message gave the type of resource in the error message rather than just 'failed to delete resource'. Then we could filter a bit better on the error message.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Opened tempest bug 1380712 to make the error message more specific so we can get an elastic-recheck query going for this.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This could be related: libvirt domain delete/destroy fails because the backing volume is busy:

http://logs.openstack.org/20/128020/2/check/check-tempest-dsvm-full/6b56bf1/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-10-13_18_12_06_653

2014-10-13 18:12:06.653 ERROR nova.virt.libvirt.driver [req-017c0b4b-11ba-4f59-9895-3d0053481b00 VolumesV2ActionsTestXML-1527860598 VolumesV2ActionsTestXML-196352251] [instance: 6f733615-3124-400e-b890-b07468fa4777] Error from libvirt during destroy. Code=38 Error=Failed to terminate process 16365 with SIGKILL: Device or resource busy
2014-10-13 18:12:08.000 ERROR nova.compute.manager [req-017c0b4b-11ba-4f59-9895-3d0053481b00 VolumesV2ActionsTestXML-1527860598 VolumesV2ActionsTestXML-196352251] [instance: 6f733615-3124-400e-b890-b07468fa4777] Setting instance vm_state to ERROR
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] Traceback (most recent call last):
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/opt/stack/new/nova/nova/compute/manager.py", line 2500, in do_terminate_instance
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] self._delete_instance(context, instance, bdms, quotas)
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/opt/stack/new/nova/nova/hooks.py", line 131, in inner
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] rv = f(*args, **kwargs)
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/opt/stack/new/nova/nova/compute/manager.py", line 2469, in _delete_instance
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] quotas.rollback()
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] six.reraise(self.type_, self.value, self.tb)
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/opt/stack/new/nova/nova/compute/manager.py", line 2446, in _delete_instance
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] self._shutdown_instance(context, instance, bdms)
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/opt/stack/new/nova/nova/compute/manager.py", line 2376, in _shutdown_instance
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] requested_networks)
2014-10-13 18:12:08.000 17350 TRACE nova.compute.manager [instance: 6f733615-3124-400e-b890-b07468fa4777] File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-10-13 18:12:08.000 17350 T...


Revision history for this message
Matt Riedemann (mriedem) wrote :

The n-cpu error in comment 5 could be a separate issue; I'm not seeing it in vishy's original build failure link:

http://logs.openstack.org/22/119522/9/check/check-tempest-dsvm-postgres-full/46a5f8c/logs/screen-n-cpu.txt.gz?level=TRACE

Revision history for this message
Matt Riedemann (mriedem) wrote :

Running a query on this:

message:"INFO: task lvm:" AND message:"blocked for more than 120 seconds" AND tags:"syslog"

There are 111 hits in 7 days across check and gate, though not all are failures. That's not surprising, since we've seen some of these fail within a window of 2 seconds (the tempest volume delete timeout is 196 seconds).

When we see this though, it's in a failed job 98% of the time, so this is a good query.

Revision history for this message
Matt Riedemann (mriedem) wrote :
Changed in cinder:
importance: High → Critical
Revision history for this message
Pavel Boldin (pboldin) wrote :

According to my investigation, this is related to a SYNCHRONIZE_CACHE command being issued to the Cinder iSCSI-over-LVM driver. I'm working on a similar case involving a QEMU hang.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/128482

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: master
Review: https://review.openstack.org/128962

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (master)

Reviewed: https://review.openstack.org/128962
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=e3feb0297dfece1fc198b231d9aa39ae2920d877
Submitter: Jenkins
Branch: master

commit e3feb0297dfece1fc198b231d9aa39ae2920d877
Author: Matt Riedemann <email address hidden>
Date: Thu Oct 16 08:39:07 2014 -0700

    Log a warning when getting lvs and vgs takes longer than 60 seconds

    We know something is causing lvs/vgs commands to block while deleting a
    volume and this is causing Tempest to timeout while waiting for the
    volume to be deleted. What we don't have right now is very good
    (specific) logging when this happens, unless we get messages in syslog
    for lvm tasks taking more than 120 seconds, but that doesn't always
    happen when we see the volume delete timeout in Tempest.

    This patch adds a check for when getting logical volumes and volume
    groups takes longer than 60 seconds and logs a warning if that happens.
    This is helpful in production also because the default interval for
    periodic tasks is 60 seconds so having these take longer than that time
    could cause periodic tasks to block up on each other and you'll get
    warnings from the FixedIntervalLoopingCall in oslo which is controlling
    the task runs.

    Related-Bug: #1373513

    Change-Id: I7ac5ddbbd9bda7744db33f0bb8d56879301f5538
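
For illustration, the check described above amounts to timing the lvs/vgs call and warning past a threshold; a rough shell equivalent (not the actual Cinder code, which lives in brick/local_dev/lvm.py; "stack-volumes" is the devstack default VG name):

    start=$(date +%s)
    sudo lvs --noheadings --unit=g -o vg_name,name,size stack-volumes
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -gt 60 ]; then
        echo "WARNING: lvs took ${elapsed}s (> 60s); periodic tasks may back up" >&2
    fi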

Revision history for this message
Matt Riedemann (mriedem) wrote :

https://review.openstack.org/#/c/128962/ is merged but the logstash results aren't very helpful:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVG9va1wiIEFORCAobWVzc2FnZTpcInNlY29uZHMgdG8gZ2V0IGxvZ2ljYWwgdm9sdW1lc1wiIE9SIG1lc3NhZ2U6XCJzZWNvbmRzIHRvIGdldCB2b2x1bWUgZ3JvdXBzXCIpIEFORCB0YWdzOlwic2NyZWVuLWMtdm9sLnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTM1NzEwMjMyNTIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

There are 1432 hits in the first 4 hours since it merged, but 99% of those jobs were successful. If you filter by build_status:"FAILURE" there are 38 hits, and the range of times varies quite a bit, e.g.:

"Took 67.2061200142 seconds to get volume groups."

"Took 117.149465799 seconds to get volume groups."

--

"Took 63.6563539505 seconds to get logical volumes."

"Took 203.954236984 seconds to get logical volumes."

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.openstack.org/126735
Reason: From the results of https://review.openstack.org/#/c/128962/ in the gate, we apparently see lvs/vgs take a long time in a high number of successful runs, so there must be something else going on; I'm going to abandon this.

Revision history for this message
Eric Harney (eharney) wrote :

I made this patch to try to troubleshoot this: https://review.openstack.org/#/c/130364/

Ignoring the errors because I didn't use root_helper, there is a pattern here:
http://logs.openstack.org/64/130364/1/check/check-tempest-dsvm-full/caf173e/logs/screen-c-vol.txt.gz#_2014-10-22_21_48_07_126

These locks seem to mostly point to instances of "lvs", "vgs", and "lvcreate" commands blocking against each other. These all lock LVM VG metadata while running.

The length of time these commands block for indicates that something is probably wrong and needs further investigation. (I suspect they may be scanning some dead devices, etc.)

But, this leads me back to a previous suggestion: it's possible that enabling lvmetad on these gate machines could alleviate this issue. It caches some of the information collected during scans and so may help avoid this by having fewer long-running commands.
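
For anyone who wants to try the lvmetad suggestion by hand, a rough sketch (the config line and service name are assumptions for Ubuntu and may differ by release):

    sudo sed -i 's/use_lvmetad = 0/use_lvmetad = 1/' /etc/lvm/lvm.conf
    sudo service lvm2-lvmetad start     # name of the upstart/systemd unit may vary
    sudo pvscan --cache                 # prime the daemon's metadata cache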

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to cinder (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/130625

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to cinder (master)

Reviewed: https://review.openstack.org/130625
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=dcda67053df5dc0240c743537dc1b9c4a3231b61
Submitter: Jenkins
Branch: master

commit dcda67053df5dc0240c743537dc1b9c4a3231b61
Author: Eric Harney <email address hidden>
Date: Thu Oct 23 11:28:29 2014 -0400

    Brick LVM: Optimize get_volume

    When we only want the LV information for a single volume, call
    "lvs" with the volume name specified.

    This should reduce unnecessary load of calling "lvs" for the
    whole VG.

    Related-Bug: #1373513

    Change-Id: Ifbb679e9160f44fe52ecba7bf19aa0eb5bb133f1
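
The effect of the change can be seen by comparing the two invocations by hand (the VG name "stack-volumes" is the devstack default and the LV name is a hypothetical volume id):

    # before: list every LV in the VG
    sudo lvs --noheadings --unit=g -o vg_name,name,size stack-volumes
    # after: ask only about the one LV we care about
    sudo lvs --noheadings --unit=g -o vg_name,name,size stack-volumes/volume-<uuid>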

Alan Pevec (apevec)
tags: added: icehouse-backport-potential juno-backport-potential
Revision history for this message
Alan Pevec (apevec) wrote :

Notice for potential backports: "Brick LVM: Optimize get_volume" caused regression bug 1390081

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by Mike Perez (<email address hidden>) on branch: master
Review: https://review.openstack.org/128482

Revision history for this message
John Griffith (john-griffith) wrote :

This doesn't seem to be showing up in the query lately. I'll give it a bit longer, but I'm wondering if something was updated that fixed this.

Revision history for this message
John Griffith (john-griffith) wrote :

Finally had a chance to look at this more closely, since I can easily reproduce it with Rally. It turns out the issue is that we're running vgscan against all disks on the system. Some of those disks are the result of a Cinder create/attach, and we hit cases where the attachment is removed and deleted during the vgscan, causing the scan operation to hang and, if we're lucky, time out.

We can fix this by modifying lvm.conf to ONLY query the devices we specify as being used by Cinder's VG. This has risks, though, in case somebody wants to use the system for other purposes. It turns out you can create your own lvm.conf and point LVM at it via an environment variable.

Running a number of test iterations with Rally (which reproduces the original issue every time), this works and clears up the issue. I'll just need to figure out the best way to deal with environment variables in Cinder commands.
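
A rough sketch of the idea, before any patch lands (the /etc/cinder path, the loopback-only filter, and the use of the LVM_SYSTEM_DIR variable are assumptions at this point):

    sudo mkdir -p /etc/cinder
    sudo cp /etc/lvm/lvm.conf /etc/cinder/lvm.conf
    # edit /etc/cinder/lvm.conf so only Cinder's loopback PV is scanned, e.g.:
    #   filter = [ "a/loop/", "r/.*/" ]
    sudo env LVM_SYSTEM_DIR=/etc/cinder vgs    # LVM reads lvm.conf from the given directory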

Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
milestone: none → kilo-2
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/148393

Changed in cinder:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by John Griffith (<email address hidden>) on branch: master
Review: https://review.openstack.org/148393
Reason: reworking a better approach

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/148747

Mike Perez (thingee)
Changed in cinder:
milestone: kilo-2 → kilo-3
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/148747
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=eb7bb3e08c1b8e1008ba447b842b635821e2097e
Submitter: Jenkins
Branch: master

commit eb7bb3e08c1b8e1008ba447b842b635821e2097e
Author: John Griffith <email address hidden>
Date: Tue Jan 20 16:31:57 2015 -0700

    Enable use of an /etc/cinder/lvm.conf file

    During tempest and Rally runs we've noticed occasional
    LVM command hangs (lvs, vgs and pvs), we've also gathered
    enough data to show that we see very slow response times from
    these commands almost all of the time.

    It turns out that this seems to be an issue with us scanning
    all devices during LVM operations, including devices that may
    be Cinder Volumes that are attaching and detaching from the system.

    Inspecting a run instrumented with strace shows a number of LVM
    commands timing out due to the device being scanned being removed
    during scan, and the LVM command in turn waiting until it times out
    on the scan that's in process.

    This patch just adds the ability to setup a lvm.conf file in
    /etc/cinder. The Cinder LVM code will now specifically set
    the LVM_SYSTEM_DIR environment variable to that directory for
    each of the LVM scan commands in brick/local_dev/lvm.py.
    If the system doesn't have the file, we use the empty string
    which tells LVM to use its defaults. This only affects LVM
    commands in Cinder, the idea is to ensure we don't impact any
    other LVM operations on the node outside of Cinder and that we
    behave as we always have in the case of no lvm.conf file being
    setup in /etc/cinder. The presence of the file is auto-detected
    on brick/local_dev/lvm init.

    We'll update the OpenStack Devstack deployment scripts to put this
    together and fix things up there first. Until that's done and until
    we auto-generate the conf (or document it well), this will be a
    *partial* bugfix.

    I considered adding a default lvm.conf file to cinder/etc/<sample>
    that would be copied in on install, but decided against this to
    avoid any possible compatibility issues between
    platforms or versions.

    To use, just copy the /etc/lvm/lvm.conf file to /etc/cinder and
    modify the filter as appropriate, for example:
      To use loopback device only:
        filter = [ "a/loop/", "r/.*/" ]
      If you have a physical drive like /dev/sdb1
        filter = [ "a/dev/sdb1/", "r/.*/" ]

    Finally, this patch also goes through and cleans up our cmd
    variables in brick/local_dev/lvm. We had a mix of using a
    cmd array, and strings; this causes inconsistencies and makes
    it difficult to extend or modify commands. Switch everything to
    using an array and use extend to provide the correct prefix.

    Need to update docs to include a recommendation to create an
    /etc/cinder/lvm.conf file and set device filters appropriately.
    Doc-Impact
    Partial-Bug: #1373513

    Change-Id: Ia2289197a6d7fcfc910ee3de48e0a2173881a1e6

Revision history for this message
John Griffith (john-griffith) wrote :

Made some progress with the LVM filter; we're still seeing LVM calls that take greater than 60 seconds in Kibana fairly often. Taking a closer look at those, however, it seems we're typically only seeing the "too long" warning on one call in each run, and that's always one of the lvs calls (I think, anyway).

I also haven't seen a full-on hang in a couple of weeks, so that's good; even though this is still not good, we're no longer causing gate failures like we were. I need to look at updating the elastic-recheck query (ERQ) so we don't match this bug for other failures, but maybe we can actually get this nailed down.

One interesting observation: in a bunch of the runs I looked at today, the only entry for an LVM command taking > 60 seconds was an lvs call as mentioned, and it looks like there may be a pattern in the sequence when it happens.

Marking the bug as in progress again

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/155981

Revision history for this message
John Griffith (john-griffith) wrote :

Spent some time digging through logs; it seems there's one specific case where we hit these long-running lvs and vgs commands. It doesn't appear that the deactivate conf change has any relevance here at all.

Most of the logs I've looked at from the last few days show only two long-running commands (over 60 seconds): one lvs and one vgs right after it. In addition, if you look at the timings for the other calls, they're all coming in at around half a second.

Looks like there's something we're doing in the test process, or maybe it's an interaction with another LVM related command that's causing the remaining issues.

Also notice that we don't seem to be hitting test failures for this any longer since the lvm.conf change landed. The ERQ is still associating failures with this bug because it shows up in the logs of almost every run, but the root failure in most of those seems unrelated to this issue.

Revision history for this message
John Griffith (john-griffith) wrote :

Caught an strace by running "strace lvs" externally while the system was hung up:
https://gist.github.com/j-griffith/5448723bada6de44de6c

Revision history for this message
John Griffith (john-griffith) wrote :

I've added Nova to the projects here because we're currently at a stalemate: there seems to be a single case, during unrescue, that triggers this. A patch is proposed but it looks like it won't be accepted; I want to make sure we link this and keep it tracked. Although it is different from the original bug posted by Vish, I believe it's related.

affects: cinder → nova
Changed in nova:
milestone: kilo-3 → none
Changed in cinder:
status: New → In Progress
importance: Undecided → Critical
Revision history for this message
John Griffith (john-griffith) wrote :

Related bug with more detail: https://bugs.launchpad.net/nova/+bug/1423654

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by John Griffith (<email address hidden>) on branch: master
Review: https://review.openstack.org/159713
Reason: I give up, no hard feelings but honestly this is silly.

Revision history for this message
Attila Fazekas (afazekas) wrote :

As I remember, the lvm2-metad agent (lvmetad) is not enabled on Ubuntu by default.
I wonder whether it would help; can it be enabled on the devstack side?

Revision history for this message
John Griffith (john-griffith) wrote :

Seeing this everywhere now, even without the migration test. Working on a few things around session disconnects; still need to look at what Eric has posted. lvmetad might be an interesting thing to investigate too.

Note, you can simply run "tox -eall -- volume" and hit this error pretty much 100% of the time. Search your c-vol logs for "returned: 0 in 1" and you'll see several LVM commands that take greater than 100 seconds.
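
One quick way to spot those (the log path assumes a devstack run writing screen logs to /opt/stack/logs; the match also catches 1.x-second runs, so eyeball the results):

    grep "returned: 0 in 1" /opt/stack/logs/screen-c-vol.txt | grep -E "lvs|vgs|pvs"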

Revision history for this message
John Griffith (john-griffith) wrote :

FWIW, doing some deeper looking here:
1. Skipping the detach in the rescue negative test reduces the occurrences, but doesn't eliminate them; we end up with two instances of long-running/hung LVM commands.

2. Skipping tempest/api/compute/servers/test_server_rescue_negative.py entirely, we're down to a single case of the long-running/hung LVM calls.

3. Skipping tempest/scenario/test_encrypted_cinder_volumes.py as well, we eliminate the problem altogether (at least 4 out of 4 runs so far).

I'm certainly not saying we should remove all of these tests, but I am saying there's something specific to the behavior in these tests that causes LVM to try to query devices that no longer exist.

The overwhelming feedback from the Nova folks is "we don't do anything to paths here", but I still say there's data that suggests otherwise. At the same time, I don't have concrete answers that pass the Nova core test, so I think we'll just live with this scenario in the gate.

Changed in nova:
assignee: John Griffith (john-griffith) → nobody
Changed in cinder:
status: In Progress → Triaged
Changed in nova:
status: In Progress → New
Changed in nova:
status: New → Invalid
Changed in cinder:
assignee: nobody → John Griffith (john-griffith)
Revision history for this message
John Griffith (john-griffith) wrote :

Turns out this can be addressed by making our filter a global_filter in lvm.conf.

The issue we're running into is that dev files can come and go on the system, and there are cases where an lvs or vgs call will try to open a /dev/sdX device that no longer exists when querying it for LVM data to see whether it's an LVM disk. I added a filter a while back via /etc/cinder/lvm.conf, but it turns out that doesn't help with the device open/query routines in lvs and vgs. For those you need to use global_filter, which tells LVM not to even bother with devices other than those the filter specifies.

There's also a possibility that we can skip the filters and just use lvmetad, which is not enabled by default on Ubuntu. Afazekas mentioned something about this earlier today, but I wasn't sure what he was referring to until I read some man pages; it turns out it does a lot of the same things as filters, only a bit smarter and more dynamically. I would like some investigation into that approach, though, because it also changes the activate/deactivate behavior, so I want to make sure we fully understand it and test it thoroughly before adding it.
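
For reference, a sketch of what the global_filter change looks like in lvm.conf (the loop-only pattern matches the devstack loopback PV; real deployments would list their actual PVs instead):

    devices {
        # global_filter is applied before any device is opened, so lvs/vgs
        # never touch transient /dev/sdX paths left over from attached volumes
        global_filter = [ "a|loop|", "r|.*|" ]
    }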

Revision history for this message
John Griffith (john-griffith) wrote :

Fix committed via filters in devstack here: 4bf861c76c220a98a3b3165eea5448411d000f3a

Changed in cinder:
status: Triaged → Fix Committed
Thierry Carrez (ttx)
Changed in cinder:
milestone: none → kilo-rc1
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in cinder:
milestone: kilo-rc1 → 2015.1.0