Previously, when an instance was removed, the leadership settings and
charms.reactive flags remained for that instance's IP address. If a new
instance was subsequently added and happened to have the same IP
address, the charm would never add the new instance to the cluster,
because the leader settings made it believe the instance was already
configured and clustered.
Clear the leader settings and flags that record an instance as cluster
configured and clustered.
Due to a bug in Juju, the previously used keys containing IP addresses
with '.' could not be unset. Transform dotted flags to use '-' instead.
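A minimal sketch of the approach, assuming hypothetical leader-settings
key and flag names (the charm's actual names may differ): dots in the
instance's IP address are replaced with '-' so the resulting keys can be
unset, and both the leader settings and the corresponding
charms.reactive flags are cleared when the instance is forgotten.

    from charms.reactive import clear_flag
    from charmhelpers.core.hookenv import leader_set

    def sanitized(address):
        """Return a flag-safe form of an IP address ('.' -> '-')."""
        return address.replace('.', '-')

    def clear_instance_state(address):
        """Forget that the instance at `address` was configured/clustered."""
        key = sanitized(address)
        # Unset the leader settings by writing None for each key
        # (key names are illustrative only).
        leader_set({
            'cluster-instance-configured-{}'.format(key): None,
            'cluster-instance-clustered-{}'.format(key): None,
        })
        # Clear the matching charms.reactive flags so a new instance with
        # the same IP address is configured from scratch.
        clear_flag('leadership.set.cluster-instance-configured-{}'.format(key))
        clear_flag('leadership.set.cluster-instance-clustered-{}'.format(key))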
When the 'clone' recoveryMethod actually needs to overwrite the remote
node, mysql-shell unfortunately exits with return code 1. Both the
"Clone process has finished" and "Group Replication is running" messages
actually indicate successful states.
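A minimal sketch of how such a false failure can be tolerated, assuming
mysql-shell is driven via subprocess; the helper below is illustrative
and not the charm's actual code.

    import subprocess

    SUCCESS_MARKERS = (
        "Clone process has finished",
        "Group Replication is running",
    )

    def run_mysqlsh(cmd):
        """Run a mysql-shell command line, tolerating the known false failure.

        `cmd` is the full mysqlsh invocation, e.g. one that adds an instance
        with recoveryMethod=clone.
        """
        proc = subprocess.run(cmd, capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        if proc.returncode == 0:
            return output
        # mysql-shell exits 1 when clone recovery overwrites the remote
        # node even though the operation succeeded, so check the output
        # for the known success messages before treating this as fatal.
        if any(marker in output for marker in SUCCESS_MARKERS):
            return output
        raise subprocess.CalledProcessError(
            proc.returncode, cmd, proc.stdout, proc.stderr)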
21.04 libraries freeze for charms on master branch
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Prior to this change, when the mysql-innodb-cluster charm received its
certificates from vault, it would update its clients and subsequently
begin a rolling restart. This led to race condition failures on the
client side.
Inherit from BaseCoordinator to create a DelayedActionCoordinator that
delays db_router client updates until after rolling restarts have
completed.
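A self-contained, simplified sketch of the idea; this is not the charm's
actual implementation, and the real BaseCoordinator from charm-helpers
has a richer, multi-unit interface. Actions such as db_router client
updates are queued while a rolling-restart lock is held and are only run
once the lock is released.

    class BaseCoordinator:
        """Grants a named lock (greatly simplified for illustration)."""
        def __init__(self):
            self._granted = set()

        def acquire(self, lock):
            # A real coordinator arbitrates across units via leader
            # settings; here the lock is simply granted immediately.
            self._granted.add(lock)
            return True

        def release(self, lock):
            self._granted.discard(lock)

        def granted(self, lock):
            return lock in self._granted


    class DelayedActionCoordinator(BaseCoordinator):
        """Queue client-update actions while a restart lock is held."""
        RESTART_LOCK = 'restart'

        def __init__(self):
            super().__init__()
            self._delayed = []

        def run_or_delay(self, action):
            if self.granted(self.RESTART_LOCK):
                # A rolling restart is in progress: defer the client update.
                self._delayed.append(action)
            else:
                action()

        def release(self, lock):
            super().release(lock)
            if lock == self.RESTART_LOCK:
                # Restart finished: flush the delayed client updates.
                while self._delayed:
                    self._delayed.pop(0)()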