Building with a 20.04 base has started to fail because of: https://github.com/canonical/charmcraft/issues/738
TL;DR: some pip packages have been updated to require a newer version
of setuptools than focal provides.
Ubuntu 22.04 ships a new enough setuptools to avoid this, so
we can work around the problem by building the charm on 22.04.
Note that the above bug suggests an alternative option: asking the
charm plugin to install a newer setuptools via pip.
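For reference, the 22.04 build workaround is a bases stanza in
charmcraft.yaml along these lines (a sketch; the exact channels and
any architectures entries depend on the charm):

```yaml
type: charm
bases:
  - build-on:
      - name: ubuntu
        channel: "22.04"   # build where setuptools is new enough
    run-on:
      - name: ubuntu
        channel: "20.04"   # still target focal at deploy time
```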
We can consolidate the "controller already exists" logic into a
common path, increasing code coverage and potentially avoiding
situations where the charm loses track of the bootstrap state. This
also replaces the hard-to-differentiate scalebot_configure_juju() and
scalebot_configure_juju_for_lab() with a single scalebot_juju_bootstrap()
(you can guess what that does, right?). The consolidation also removes
some redundant and buggy logic for checking whether changed configs
require a rebootstrap.
It is still the case that rebootstrapping will fail if there are
active models, but we can't automatically deal with that without
data loss, so a user will need to manually clean those up.
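The consolidated decision can be sketched as a single pure function;
the name and return values below are illustrative, not the charm's
actual code:

```python
# Hypothetical sketch of the one common path that replaces the old
# scalebot_configure_juju()/scalebot_configure_juju_for_lab() split.
def bootstrap_action(controller_exists, config_changed):
    """Decide what scalebot_juju_bootstrap() should do on this run."""
    if not controller_exists:
        return "bootstrap"    # first run: no controller registered yet
    if config_changed:
        return "rebootstrap"  # bootstrap-time config changed
    return "noop"             # controller exists and is up to date
```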
Rename state scalebot.configured.juju to scalebot.juju.bootstrapped
I've found the code to be easier to understand with this rename.
scalebot.configured.juju was a name that was more consistent with other
states (scalebot.configured.{env,jobbuilder,repo}), but I think
"bootstrapped" more clearly describes what the state represents.
juju always creates a default model after the bootstrapping process.
scalebot never uses this model, and when it (or any other active model)
exists, it blocks destroying the controller. We do need to destroy
the controller when certain configs change, so let's remove the
default model right after bootstrapping. It would be better if juju
let us disable creating the default model; that's been requested
in LP: #1972001.
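Finding the model to remove can be sketched by parsing
`juju models --format=json` output; the JSON key names below follow
juju's models output as I understand it, so treat them as assumptions:

```python
import json

# Illustrative helper: pick out the automatically created "default"
# model from `juju models --format=json` output so it can be destroyed
# right after bootstrap.
def default_model_names(models_json):
    """Return full names of models whose short name is 'default'."""
    data = json.loads(models_json)
    names = []
    for model in data.get("models", []):
        # "short-name" is the model name without the owner prefix
        if model.get("short-name") == "default":
            names.append(model["name"])
    return names
```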
Correct list of config changes that should lead to rebootstrap
When we added additional bootstrap-time configuration parameters
(scalebot_juju_model_defaults, scalebot_juju_bootstrap_*), we missed
adding them to the list of vars we check w/ data_changed(), so
changing them doesn't actually cause a rebootstrap. Fixes LP: #1859500.
Rebootstrapping itself doesn't actually work yet, though; that will
be fixed in a later commit.
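The data_changed()-style check amounts to hashing the full set of
bootstrap-time keys and comparing against the stored digest. The
helper below is an illustrative sketch, not charms.reactive's
implementation; only scalebot_juju_model_defaults is spelled out, and
the scalebot_juju_bootstrap_* keys would be listed alongside it:

```python
import hashlib
import json

# The bug was that keys missing from this tuple never trigger a
# rebootstrap, because they never feed into the change check.
BOOTSTRAP_KEYS = (
    "scalebot_juju_model_defaults",
    # ... the scalebot_juju_bootstrap_* keys belong here as well
)

def config_digest(config, keys=BOOTSTRAP_KEYS):
    """Stable digest over only the tracked bootstrap-time keys."""
    tracked = {k: config.get(k) for k in keys}
    blob = json.dumps(tracked, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_rebootstrap(config, previous_digest):
    if previous_digest is None:
        return False  # first bootstrap: nothing to compare against
    return config_digest(config) != previous_digest
```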
Add destroy_all_models option to scalebot_juju_destroy_controller()
scalebot_juju_destroy_controller() currently always destroys all
models. In a later commit, we'll want to call this function but
have it fail if active models are present. So, let's add a
parameter to change that behavior.
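A sketch of the new parameter's behavior (the function name matches
the commit; the exact juju CLI flags and the error type are
illustrative):

```python
class ActiveModelsError(Exception):
    """Raised when active models block controller destruction."""

def destroy_controller(active_models, destroy_all_models=True):
    """Return the destroy command to run, or raise if models block it."""
    cmd = ["juju", "destroy-controller", "-y"]
    if destroy_all_models:
        # old behavior: take the models down along with the controller
        return cmd + ["--destroy-all-models"]
    if active_models:
        # new behavior: fail loudly so the caller can surface the
        # problem instead of silently destroying user data
        raise ActiveModelsError(
            "active models present: " + ", ".join(active_models))
    return cmd
```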
symlink the juju snap's jujud binary into /usr/local/bin
juju apparently needs to be in the user's $PATH in order to bootstrap
a new controller:
unit-scalebot-0: 21:57:24 ERROR unit.scalebot/0.juju-log Fail to bootstrap a controller. It may be because: Failed for: failed to acquire node: No available machine matches constraints.
unit-scalebot-0: 21:57:25 WARNING unit.scalebot/0.update-status ERROR No controllers registered.
unit-scalebot-0: 21:57:25 WARNING unit.scalebot/0.update-status
unit-scalebot-0: 21:57:25 WARNING unit.scalebot/0.update-status Please either create a new controller using "juju bootstrap" or connect to
unit-scalebot-0: 21:57:25 WARNING unit.scalebot/0.update-status another controller that you have been given access to using "juju register".
unit-scalebot-0: 21:57:25 WARNING unit.scalebot/0.update-status
unit-scalebot-0: 21:57:25 INFO unit.scalebot/0.juju-log Invoking reactive handler: reactive/scalebot-jenkins.py:471:scalebot_start
unit-scalebot-0: 21:57:25 INFO unit.scalebot/0.juju-log Invoking reactive handler: reactive/apt.py:50:ensure_package_status
unit-scalebot-0: 21:57:25 INFO unit.scalebot/0.juju-log status-set: blocked: Fail to bootstrap a controller. It may be because: Failed for: failed to acquire node: No available machine matches constraints.
unit-scalebot-0: 21:57:25 INFO juju.worker.uniter.operation ran "update-status" hook (via explicit, bespoke hook script)