get_partition_list fails

Bug #1731639 reported by Chris Sanders
Affects         Status        Importance  Assigned to  Milestone
Ceph OSD Charm  Fix Released  Medium      Unassigned
charms.ceph     Fix Released  Medium      James Page

Bug Description

A fresh install is failing due to an out-of-range index on a freshly MAAS-created device.

juju debug-log:
unit-ceph-osd-0: 09:00:24 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'install'
unit-ceph-osd-0: 09:00:25 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'config_changed'
unit-ceph-osd-0: 09:00:25 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'upgrade_charm'
unit-ceph-osd-0: 09:00:25 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'update_status'
unit-ceph-osd-0: 09:00:25 INFO unit.ceph-osd/0.juju-log mon:1: mon has provided conf- scanning disks
unit-ceph-osd-0: 09:00:28 INFO unit.ceph-osd/0.juju-log mon:1: Making dir /var/lib/charm/ceph-osd ceph:ceph 555
unit-ceph-osd-0: 09:00:28 INFO unit.ceph-osd/0.juju-log mon:1: Monitor hosts are ['192.168.0.104:6789']
unit-ceph-osd-0: 09:00:33 DEBUG unit.ceph-osd/0.juju-log mon:1: got journal devs: set([])
unit-ceph-osd-0: 09:00:33 DEBUG unit.ceph-osd/0.juju-log mon:1: read zapped: set([])
unit-ceph-osd-0: 09:00:33 DEBUG unit.ceph-osd/0.juju-log mon:1: write zapped: set([])
unit-ceph-osd-0: 09:00:33 INFO unit.ceph-osd/0.juju-log mon:1: ceph bootstrapped, rescanning disks
unit-ceph-osd-0: 09:00:36 INFO unit.ceph-osd/0.juju-log mon:1: Making dir /var/lib/charm/ceph-osd ceph:ceph 555
unit-ceph-osd-0: 09:00:36 INFO unit.ceph-osd/0.juju-log mon:1: Monitor hosts are ['192.168.0.104:6789']
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.juju-log mon:1: get partitions: ['1 2048 1249882078 1249880031 596G bb6f46ea-01']
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed Traceback (most recent call last):
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 541, in <module>
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed hooks.execute(sys.argv)
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/core/hookenv.py", line 768, in execute
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed self._hooks[hook_name]()
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 468, in mon_relation
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed prepare_disks_and_activate()
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 375, in prepare_disks_and_activate
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed config('bluestore'))
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 1367, in osdize
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed bluestore)
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 1382, in osdize_dev
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed if is_osd_disk(dev) and not reformat_osd:
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 952, in is_osd_disk
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed partitions = get_partition_list(dev)
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 944, in get_partition_list
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed uuid=parts[6])
unit-ceph-osd-0: 09:00:40 DEBUG unit.ceph-osd/0.mon-relation-changed IndexError: list index out of range
unit-ceph-osd-0: 09:00:40 ERROR juju.worker.uniter.operation hook "mon-relation-changed" failed: exit status 1
unit-ceph-osd-0: 09:00:46 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'install'
unit-ceph-osd-0: 09:00:46 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'config_changed'
unit-ceph-osd-0: 09:00:46 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'upgrade_charm'
unit-ceph-osd-0: 09:00:46 DEBUG unit.ceph-osd/0.juju-log mon:1: Hardening function 'update_status'
unit-ceph-osd-0: 09:00:46 INFO unit.ceph-osd/0.juju-log mon:1: mon has provided conf- scanning disks
unit-ceph-osd-0: 09:00:49 INFO unit.ceph-osd/0.juju-log mon:1: Making dir /var/lib/charm/ceph-osd ceph:ceph 555
unit-ceph-osd-0: 09:00:49 INFO unit.ceph-osd/0.juju-log mon:1: Monitor hosts are ['192.168.0.104:6789']
unit-ceph-osd-0: 09:00:54 DEBUG unit.ceph-osd/0.juju-log mon:1: got journal devs: set([])
unit-ceph-osd-0: 09:00:54 DEBUG unit.ceph-osd/0.juju-log mon:1: read zapped: set([])
unit-ceph-osd-0: 09:00:54 DEBUG unit.ceph-osd/0.juju-log mon:1: write zapped: set([])
unit-ceph-osd-0: 09:00:54 INFO unit.ceph-osd/0.juju-log mon:1: ceph bootstrapped, rescanning disks
unit-ceph-osd-0: 09:00:57 INFO unit.ceph-osd/0.juju-log mon:1: Making dir /var/lib/charm/ceph-osd ceph:ceph 555
unit-ceph-osd-0: 09:00:57 INFO unit.ceph-osd/0.juju-log mon:1: Monitor hosts are ['192.168.0.104:6789']
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.juju-log mon:1: get partitions: ['1 2048 1249882078 1249880031 596G bb6f46ea-01']
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed Traceback (most recent call last):
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 541, in <module>
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed hooks.execute(sys.argv)
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/charmhelpers/core/hookenv.py", line 768, in execute
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed self._hooks[hook_name]()
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 468, in mon_relation
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed prepare_disks_and_activate()
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-0/charm/hooks/mon-relation-changed", line 375, in prepare_disks_and_activate
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed config('bluestore'))
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 1367, in osdize
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed bluestore)
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 1382, in osdize_dev
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed if is_osd_disk(dev) and not reformat_osd:
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 952, in is_osd_disk
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed partitions = get_partition_list(dev)
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed File "lib/ceph/utils.py", line 944, in get_partition_list
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed uuid=parts[6])
unit-ceph-osd-0: 09:01:01 DEBUG unit.ceph-osd/0.mon-relation-changed IndexError: list index out of range
unit-ceph-osd-0: 09:01:01 ERROR juju.worker.uniter.operation hook "mon-relation-changed" failed: exit status 1

Running partx on the machine does indeed show the issue: the line contains only 6 columns, so indexing parts[6] (the 7th field) is out of range.
root@sacred-kitten:~# partx --raw --noheadings /dev/sda
1 2048 1249882078 1249880031 596G bb6f46ea-01

root@sacred-kitten:~# fdisk -l /dev/sda
Disk /dev/sda: 596 GiB, 639939641344 bytes, 1249882112 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbb6f46ea

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1249882078 1249880031 596G 83 Linux
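
For illustration, here is a minimal Python reproduction of the failure, assuming get_partition_list splits each partx line on whitespace (as the "uuid=parts[6]" frame at lib/ceph/utils.py line 944 suggests):

# The partx line shown above has only 6 whitespace-separated fields,
# because the optional NAME column is empty.
line = '1 2048 1249882078 1249880031 596G bb6f46ea-01'
parts = line.split()
print(len(parts))  # 6: NR START END SECTORS SIZE UUID (no NAME)
print(parts[6])    # IndexError: list index out of range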

This is how MAAS sets up the device when a single root partition is created and the remainder of the drive is left unpartitioned. The intention was to use one partition for the OS and the rest for Ceph. I'm not sure whether that's a valid configuration, but I wouldn't expect the hook to bail with a traceback either way.

MAAS version: 2.2.2
Juju: 2.2.6

Revision history for this message
James Page (james-page) wrote :

Looks like the NAME column is empty:

NR START END SECTORS SIZE NAME UUID

Charm should deal with this.
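
A minimal sketch of one way the parser could tolerate a missing NAME field (illustrative only; the Partition record and field names below are assumptions, and the actual fix in charms.ceph may be structured differently):

import collections

# Hypothetical record mirroring the partx columns; the real charm code
# may use a different structure.
Partition = collections.namedtuple(
    'Partition',
    ['number', 'start', 'end', 'sectors', 'size', 'name', 'uuid'])

def parse_partx_line(line):
    """Parse one row of `partx --raw --noheadings` output.

    NAME is optional, so a row may have either 7 fields (with NAME)
    or 6 fields (NAME empty, as in the MAAS-created layout above).
    """
    parts = line.split()
    if len(parts) == 6:
        parts.insert(5, '')  # treat the missing NAME as an empty string
    return Partition(*parts)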

James Page (james-page)
Changed in charms.ceph:
status: New → In Progress
importance: Undecided → Medium
assignee: nobody → James Page (james-page)
Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charms.ceph (master)

Fix proposed to branch: master
Review: https://review.openstack.org/523856

James Page (james-page)
Changed in charm-ceph-osd:
milestone: none → 18.02
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charms.ceph (master)

Reviewed: https://review.openstack.org/523856
Committed: https://git.openstack.org/cgit/openstack/charms.ceph/commit/?id=e1a0b3558bd3fc7cb785e921e63942e172c58726
Submitter: Zuul
Branch: master

commit e1a0b3558bd3fc7cb785e921e63942e172c58726
Author: James Page <email address hidden>
Date: Wed Nov 29 12:12:20 2017 +0000

    Deal with partitions without NAME labels

    Partition NAME labels are optional; update parser to deal with
    an empty name.

    Change-Id: I179d74670a7e6b35b23f7423e5356db4199d53e6
    Closes-Bug: 1731639

Changed in charms.ceph:
status: In Progress → Fix Released
Ryan Beisner (1chb1n)
Changed in charm-ceph-osd:
milestone: 18.02 → 18.05
David Ames (thedac)
Changed in charm-ceph-osd:
milestone: 18.05 → 18.08
Revision history for this message
Billy Olsen (billy-olsen) wrote :

Moving the charm-ceph-osd bug to Fix Released, as the change in charms.ceph was synced in by at least the 18.05 charm release.

Changed in charm-ceph-osd:
status: Triaged → Fix Released
milestone: 18.08 → 18.05