charm-nova-compute:stable/21.10

Last commit made on 2021-10-21
Get this branch:
git clone -b stable/21.10 https://git.launchpad.net/charm-nova-compute

Branch information

Name:
stable/21.10
Repository:
lp:charm-nova-compute

Recent commits

ece2dea... by Alex Kavanagh

21.10 - Release

Remove the "channel: candidate" from the func-test
bundles.

Change-Id: I2ddf9d6ac813928f0176ca5f4ea2280762d7ef7e
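
As an illustration of the change above (a sketch, not the actual patch): test bundles may pin a charm channel per application, and the release drops that pin so charms resolve from their default channel. The bundle path below is assumed.

# Illustrative only: strip the candidate-channel pins from the
# func-test bundles so charms resolve from their default channel.
sed -i '/channel: candidate/d' tests/bundles/*.yaml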

62dbf9c... by Alex Kavanagh

21.10 - Stable cut of charms for testing period

* use stable/21.10 libraries
* use zaza/zaza-openstack-tests at stable/21.10
* build.lock files for reactive charms
* bundles refer to ~openstack-charms candidate channel

Change-Id: I388ab5bf183008ff2b7a5541aed804fec9c57ccf
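
For the build.lock item above, a minimal sketch of how such a lock file is produced, assuming a charm-tools release with lock-file support:

# Build the reactive charm and record the exact layer and Python
# dependency revisions into build.lock for reproducible stable builds.
charm build --write-lock-file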

c501f19... by Zuul <email address hidden>

Merge "Block nova-compute startup on mountpoint"

ac90d67... by Alex Kavanagh

Add xena bundles

- add non-voting focal-xena bundle
- add non-voting impish-xena bundle
- charm-helpers sync for new charm-helpers changes
- update tox/pip.sh to ensure setuptools<50.0.0
- remove EOL groovy-victoria check

Change-Id: Id72bc716f5f9c354ea495301b1f2b862914e9102
Func-Test-PR: https://github.com/openstack-charmers/zaza-openstack-tests/pull/648
Co-authored-by: Aurelien Lourot <email address hidden>
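
The setuptools pin above guards against setuptools>=50 breaking older tooling in the test virtualenvs. A hypothetical sketch in the spirit of tox/pip.sh (the script's actual contents are not shown in this log):

#!/bin/bash
# Pin setuptools below 50.0.0 in the tox virtualenv, then perform the
# requested install with the remaining arguments.
pip install 'setuptools<50.0.0'
pip install "$@"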

1e37ba3... by Zuul <email address hidden>

Merge "Overhaul README"

67fac56... by Aurelien Lourot

Increase nova-cloud-controller RAM in func tests

Recent test runs have shown memory exhaustion on the nova-cloud-controller
units. This manifests as the controller dropping messages from the
compute nodes and logging tracebacks like:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 268, in _perform_action_on_threads
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 342, in <lambda>
    lambda x: x.wait(),
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 61, in wait
    return self.thread.wait()
  File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 180, in wait
    return self._exit_event.wait()
  File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait
    result = hub.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 298, in switch
    return self.greenlet.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 350, in run
    self.wait(sleep_time)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 80, in wait
    presult = self.do_poll(seconds)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/epolls.py", line 31, in do_poll
    return self.poll.poll(seconds)
MemoryError

to the nova-conductor log.

It seems very likely that this issue is specific to Bionic Stein, so it
may be a little wasteful to increase the memory allocation for all the
bundles, but I think consistency between the bundles is more important.

Change-Id: I4c693af04b291ebaa847f32c9169680228e22867
Co-authored-by: Liam Young <email address hidden>
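
As a hedged illustration only (the commit edits the test bundles themselves, and the 4G figure is an assumption, not taken from the commit): Juju applies per-application constraints to new machines, so the equivalent effect in a live model would look like:

# Raise the memory constraint for future nova-cloud-controller units.
juju set-constraints nova-cloud-controller mem=4G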

22523e5... by Nobuto Murata

Allow overriding libvirt/num_pcie_ports

Especially on arm64/aarch64, the default value usually limits the number
of volume attachments to two. When more than two volumes are attached,
the operation fails with "No more available PCI slots". There is no
one-size-fits-all value here, so let operators override the default.

Closes-Bug: #1944214
Change-Id: I9b9565873cbaeb575704b94a25d0a8556ab96292
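
A hedged usage sketch, assuming the new option is exposed under the charm config key num-pcie-ports (the key name is an assumption; the underlying nova.conf option is num_pcie_ports in the [libvirt] section):

# Assumed charm config key: raise the number of hot-pluggable PCIe
# ports so more than two volumes can be attached on arm64/aarch64.
juju config nova-compute num-pcie-ports=16
# Expected rendering in nova.conf:
#   [libvirt]
#   num_pcie_ports = 16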

7d850b2... by James Troup

Spelling, grammar and consistency fixes to config.yaml.

Change-Id: Iecb447eb94a47c72c9f0a1a749fa5628d02a8ef7

0720a0a... by James Troup

Clarify CPU pinning config options.

Change-Id: I646834db820b9bfe09784a4c659b2a4a69bb1c72

af2e403... by James Page

Block nova-compute startup on mountpoint

If an ephemeral-device storage configuration has been provided,
ensure that the nova-compute service does not start until the
mountpoint (currently /var/lib/nova/instances) has actually been
mounted. If the mountpoint is missing, the service fails to start,
acting as a failsafe.

Change-Id: Ic16691e119e430faec9994f6e207596629e47bb6
Closes-Bug: 1863358
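
One plausible way to express this behaviour with systemd (a sketch, assuming nova-compute runs as a systemd unit; the drop-in path is illustrative, not necessarily what the charm writes):

# Refuse to start nova-compute until the instances mountpoint is mounted.
mkdir -p /etc/systemd/system/nova-compute.service.d
cat > /etc/systemd/system/nova-compute.service.d/override.conf <<'EOF'
[Unit]
RequiresMountsFor=/var/lib/nova/instances
EOF
systemctl daemon-reload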