Created by James Page.
Get this branch:
bzr branch lp:~openstack-charmers-next/charms/xenial/lxd/trunk
Members of OpenStack Charmers - Testing Charms can upload to this branch.

Branch information

Owner: OpenStack Charmers - Testing Charms

Recent revisions

72. By Jenkins <email address hidden> on 2016-07-07

Merge "Install and configure needed kernel modules"

71. By James Page on 2016-07-06

Resync charmhelpers for licensing change

The charm-helpers project has re-licensed to Apache 2.0,
in line with the agreed licensing approach to interfaces,
layers and charms generally.

Resync helpers to bring charmhelpers in line with the charm.

Change-Id: Idf85c8e79caa47182e858c6a840f714a4c371806

70. By James Page on 2016-07-01

Re-license charm as Apache-2.0

All contributions to this charm were made under Canonical
copyright; switch to the Apache-2.0 license as agreed so we
can move forward with official project status.

Change-Id: I2ae8c26a2a486ac39ee386d2c0ff96ef186edf86

69. By Chuck Short on 2016-06-27

Switch to using charm-store for amulet tests

All OpenStack charms are now directly published to the charm store
on landing; switch Amulet helper to resolve charms using the
charm store rather than bzr branches, removing the lag between
charm changes landing and being available for other charms to
use for testing.

This is also important for new layered charms, where the charm must
be built and published before it can be consumed.

Change-Id: I1bcb20ab061fa639cc1116d2fe0bbf4c5a4464bc
Signed-off-by: Chuck Short <email address hidden>

68. By Chuck Short on 2016-06-27

Drop check for LVM thinpool name

On more recent versions of LXD, storage.lvm_thinpool_name
is no longer returned in the configuration data when a
thinpool is created.

Change-Id: I3e9aa4158fd4f23afee02d5ee6ad7296e8f5e505
Signed-off-by: Chuck Short <email address hidden>
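
A minimal sketch of the defensive lookup this change implies; the helper name and the plain-dict configuration are assumptions for illustration, not the charm's actual code:

```python
def lvm_thinpool_name(lxd_config):
    """Return the configured thinpool name, or None when LXD omits it.

    Newer LXD releases no longer report storage.lvm_thinpool_name in
    their configuration data, so the key must be treated as optional.
    """
    # .get() avoids a KeyError on LXD versions that drop the key entirely.
    return lxd_config.get("storage.lvm_thinpool_name")
```

With this in place, both older output ({"storage.lvm_thinpool_name": "LXDPool"}) and newer output ({}) are handled without a hard check on the pool name.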

67. By James Page on 2016-05-18

Resync charm-helpers

Avoid use of 'service --status-all', which is currently
broken on trusty for upstart-managed daemons; the change
detects how the daemon is managed, and then uses
'status XXX' (upstart) or the return code of 'service XXX
status' to determine whether a process is running.

Fixes for IPv6 network address detection under Ubuntu
16.04, which changed the output format of the ip commands.

Update the version map to include 8.1.x as a Neutron
version for Mitaka.

Change-Id: I3290a1e2f3886e02f606002612c83750cfd6de20
Closes-Bug: 1581171
Closes-Bug: 1581598
Closes-Bug: 1580674
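
The status probe described above can be sketched roughly as follows; the function name and the injected run_cmd callable are illustrative assumptions, not charm-helpers' actual API:

```python
def service_running(service, managed_by_upstart, run_cmd):
    """Check a single daemon instead of parsing 'service --status-all'.

    run_cmd(argv) must return a (returncode, output) tuple; it is
    injected here so the probe can be exercised without a real init
    system.
    """
    if managed_by_upstart:
        # upstart's 'status <job>' prints e.g. 'lxd start/running, process 1234'.
        rc, out = run_cmd(["status", service])
        return rc == 0 and "start/running" in out
    # Otherwise rely on the exit code of 'service <name> status'.
    rc, _ = run_cmd(["service", service, "status"])
    return rc == 0
```

Splitting the decision (how the daemon is managed) from the probe (what command to run) is the key point: each init system reports status reliably for a single named service even when the aggregate listing is broken.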

66. By Chuck Short on 2016-04-20

Ensure ZFS pool management is idempotent

As block device configuration is called from the
config-changed hook, it's vital that the code for
managing the zfs pool is idempotent.

Add a helper to query pool information and ensure
that the ZFS pool does not already exist before
attempting creation.

Change-Id: I4f4ad9c4cdb73b77e8b3a9367b81ec1566bacd59
Signed-off-by: Chuck Short <email address hidden>
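
The idempotency check can be sketched as below; the function names and the injectable call/check_call parameters are assumptions for illustration (the charm's real helper queries richer pool information):

```python
import subprocess

def zpool_exists(pool, call=subprocess.call):
    """True when 'zpool list <pool>' exits 0, i.e. the pool is present."""
    return call(["zpool", "list", pool],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL) == 0

def ensure_zpool(pool, device, call=subprocess.call,
                 check_call=subprocess.check_call):
    """Create the pool only if absent, so config-changed stays idempotent."""
    if zpool_exists(pool, call=call):
        return
    check_call(["zpool", "create", pool, device])
```

Because config-changed can fire many times over a unit's life, the existence check up front is what makes repeated hook runs harmless no-ops.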

65. By Paul Hummer on 2016-04-20

Update the README

Reference promulgated charm store locations in preparation
for 16.04 release.

Change-Id: I8b12f44d498d50a7caeb55a93ae405e8955ff51f

64. By Paul Hummer on 2016-04-18

Rename block-device -> block-devices

Add support for block-devices, but still only accept one block device.

Add tests for parsing the device block list.

Change-Id: I78fe3b9e617a7da75145a2695bee312cf3685246
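
The parsing behaviour can be sketched as below; the function name is an assumption, but it mirrors the semantics described: a space-separated list is accepted while only the first entry is used:

```python
def parse_block_devices(config_value):
    """Split the 'block-devices' option into a list of device paths.

    The option now accepts several space-separated entries, but the
    charm still uses only the first one, matching the old
    'block-device' behaviour.
    """
    devices = config_value.split()
    return devices[:1]
```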

63. By Chuck Short on 2016-04-13

Fix btrfs storage configuration

In Xenial, the lxd.socket unit was introduced to start
and stop /var/lib/lxd/unix.socket. However, a race
was encountered when trying to convert the storage to btrfs
on an already running LXD daemon. So in order to convert
the storage to btrfs, the lxd.socket unit and the LXD
service must be stopped before formatting the storage device.

Change-Id: I02bc06759769f1fc61bf7c6c063a37d0140e37b8
Signed-off-by: Chuck Short <email address hidden>
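
The stop-before-format ordering can be sketched as below; the function name and systemctl invocations are assumptions (on Xenial, lxd.socket is a systemd unit), and check_call is injectable purely for illustration:

```python
import subprocess

def configure_btrfs_storage(device, check_call=subprocess.check_call):
    """Format the storage device with LXD fully stopped to avoid the race."""
    # Stop the socket unit first: stopping only the service would let
    # socket activation restart LXD while the device is being formatted.
    check_call(["systemctl", "stop", "lxd.socket"])
    check_call(["systemctl", "stop", "lxd"])
    check_call(["mkfs.btrfs", "-f", device])
    check_call(["systemctl", "start", "lxd"])
```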

Branch metadata

Branch format:
Branch format 7
Repository format:
Bazaar repository format 2a (needs bzr 1.16 or later)
This branch contains public information; everyone can see it.