- Get this branch:
  git clone -b 3.6 https://git.launchpad.net/~cmatsuoka/+git/juju-test
Branch information
- Name:
- 3.6
- Repository:
- lp:~cmatsuoka/+git/juju-test
Recent commits
- 439fd0a... by Juju bot <email address hidden>
-
Merge pull request #17331 from jack-w-shaw/Replace_BootstrapSeries_with_BootstrapBase
https://github.com/juju/juju/pull/17331

For some reason, when bootstrapping we convert bases to series and then back to bases.
Just keep the bases throughout.
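The gist of keeping bases end-to-end can be sketched in Go. This is a minimal, illustrative `Base` type; the names do not match juju's internal API, the point is only that a value like `ubuntu@22.04` can be parsed once and carried through bootstrap with no series round-trip:

```go
package main

import (
	"fmt"
	"strings"
)

// Base names an OS and a channel, e.g. "ubuntu@22.04". Simplified,
// illustrative stand-in for juju's base type.
type Base struct {
	OS      string
	Channel string
}

// ParseBase splits "os@channel" into its parts.
func ParseBase(s string) (Base, error) {
	os, ch, ok := strings.Cut(s, "@")
	if !ok || os == "" || ch == "" {
		return Base{}, fmt.Errorf("base %q not valid", s)
	}
	return Base{OS: os, Channel: ch}, nil
}

func (b Base) String() string { return b.OS + "@" + b.Channel }

func main() {
	// e.g. the value passed to --bootstrap-base
	b, err := ParseBase("ubuntu@22.04")
	if err != nil {
		panic(err)
	}
	// Carried through as-is; no series detour.
	fmt.Println(b) // prints: ubuntu@22.04
}
```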
## Checklist
- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing

## QA steps
All unit tests pass
```
juju bootstrap lxd lxd
juju bootstrap lxd jammy --bootstrap-base ubuntu@22.04
juju bootstrap lxd focal --bootstrap-base ubuntu@20.04
```

- 51a0f74... by Juju bot <email address hidden>
-
Merge pull request #17328 from jack-w-shaw/Remove_LegacyKubernetesSeries
https://github.com/juju/juju/pull/17328

Remove LegacyKubernetesSeries. Use LegacyKubernetesBase everywhere instead.

The only place outside of tests this is used is in bundle deploy. In this case, make use of the 'new' application.Base attribute and drop the series.
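A hypothetical sketch of that fallback logic. The field and function names here are invented for illustration, not juju's actual API; the `ubuntu@20.04` mapping matches the deploy log in the QA steps below, where `series: kubernetes` charms resolve to that base:

```go
package main

import "fmt"

// Application sketches the two relevant bundle fields.
type Application struct {
	Series string // legacy field, may be "kubernetes"
	Base   string // preferred 'new' attribute, e.g. "ubuntu@22.04"
}

// resolveBase prefers the Base attribute and only falls back for the
// legacy "kubernetes" series.
func resolveBase(app Application) (string, error) {
	if app.Base != "" {
		return app.Base, nil
	}
	if app.Series == "kubernetes" {
		// stand-in for the LegacyKubernetesBase constant
		return "ubuntu@20.04", nil
	}
	return "", fmt.Errorf("application has no base")
}

func main() {
	base, _ := resolveBase(Application{Series: "kubernetes"})
	fmt.Println(base) // prints: ubuntu@20.04
}
```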
## Checklist
- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- [ ] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing
- [ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages

## QA steps
All unit tests pass
#### Deploy a bundle with legacy 'kubernetes' series
Bootstrap
```
$ make microk8s-operator-upgrade
$ juju bootstrap microk8s mk8s
$ juju add-model m
```

Construct bundle
```
$ juju download cos-lite
Fetching bundle "cos-lite" revision 11 using "stable" channel and base "amd64/ubuntu/22.04"
Install the "cos-lite" bundle with:
  juju deploy ./cos-lite_r11.bundle

$ unzip cos-lite_r11.bundle
[edit ./bundle.yaml such that:]
$ cat bundle.yaml
---
bundle: kubernetes
name: cos-lite
description: >
COS Lite is a light-weight, highly-integrated, observability stack running on Kubernetes
applications:
traefik:
charm: traefik-k8s
series: kubernetes
scale: 1
trust: true
channel: stable
alertmanager:
charm: alertmanager-k8s
series: kubernetes
scale: 1
trust: true
channel: stable
prometheus:
charm: prometheus-k8s
series: kubernetes
scale: 1
trust: true
channel: stable
grafana:
charm: grafana-k8s
series: kubernetes
scale: 1
trust: true
channel: stable
catalogue:
charm: catalogue-k8s
series: kubernetes
scale: 1
trust: true
channel: stable
...
```

Deploy
```
$ juju deploy ./bundle
Located charm "alertmanager-k8s" in charm-hub, channel latest/stable
Located charm "catalogue-k8s" in charm-hub, channel latest/stable
Located charm "grafana-k8s" in charm-hub, channel latest/stable
Located charm "loki-k8s" in charm-hub, channel latest/stable
Located charm "prometheus-k8s" in charm-hub, channel latest/stable
Located charm "traefik-k8s" in charm-hub, channel latest/stable
Executing changes:
- upload charm alertmanager-k8s from charm-hub for base ubuntu@20.04/stable from channel stable with architecture=amd64
- deploy application alertmanager from charm-hub with 1 unit on ubuntu@20.04/stable with stable using alertmanager-k8s
added resource alertmanager-image
- upload charm catalogue-k8s from charm-hub for base ubuntu@20.04/stable from channel stable with architecture=amd64
- deploy application catalogue from charm-hub with 1 unit on ubuntu@20.04/stable with stable using catalogue-k8s
added resource catalogue-image
- upload charm grafana-k8s from charm-hub for base ubuntu@20.04/stable from channel stable with architecture=amd64
- deploy application grafana from charm-hub with 1 unit on ubuntu@20.04/stable with stable using grafana-k8s
added resource grafana-image
added resource litestream-image
- upload charm loki-k8s from charm-hub from channel stable with architecture=amd64
- deploy application loki from charm-hub with 1 unit with stable using loki-k8s
added resource loki-image
- upload charm prometheus-k8s from charm-hub for base ubuntu@20.04/stable from channel stable with architecture=amd64
- deploy application prometheus from charm-hub with 1 unit on ubuntu@20.04/stable with stable using prometheus-k8s
added resource prometheus-image
- upload charm traefik-k8s from charm-hub for base ubuntu@20.04/stable from channel stable with architecture=amd64
- deploy application traefik from charm-hub with 1 unit on ubuntu@20.04/stable with stable using traefik-k8s
added resource traefik-image
- add relation traefik:ingress-per-unit - prometheus:ingress
- add relation traefik:ingress-per-unit - loki:ingress
- add relation traefik:traefik-route - grafana:ingress
- add relation traefik:ingress - alertmanager:ingress
- add relation prometheus:alertmanager - alertmanager:alerting
- add relation grafana:grafana-source - prometheus:grafana-source
- add relation grafana:grafana-source - loki:grafana-source
- add relation grafana:grafana-source - alertmanager:grafana-source
- add relation loki:alertmanager - alertmanager:alerting
- add relation prometheus:metrics-endpoint - traefik:metrics-endpoint
- add relation prometheus:metrics-endpoint - alertmanager:self-metrics-endpoint
- add relation prometheus:metrics-endpoint - loki:metrics-endpoint
- add relation prometheus:metrics-endpoint - grafana:metrics-endpoint
- add relation grafana:grafana-dashboard - loki:grafana-dashboard
- add relation grafana:grafana-dashboard - prometheus:grafana-dashboard
- add relation grafana:grafana-dashboard - alertmanager:grafana-dashboard
- add relation catalogue:ingress - traefik:ingress
- add relation catalogue:catalogue - grafana:catalogue
- add relation catalogue:catalogue - prometheus:catalogue
- add relation catalogue:catalogue - alertmanager:catalogue
Deploy of bundle completed.
```

- d57b08d... by Jack Shaw
-
Replace BootstrapSeries with BootstrapBase
For some reason, when bootstrapping we convert bases to series and then
back to bases. Just keep the bases throughout.
- ec637ec... by Jack Shaw
-
Remove LegacyKubernetesSeries. Use LegacyKubernetesBase everywhere instead.

- c325cbb... by Juju bot <email address hidden>
-
Merge pull request #17320 from jack-w-shaw/JUJU-5923_rename_vsphere_template_dir_to_base
https://github.com/juju/juju/pull/17320

In 3.5 we would cache our images in directories named according to the series. This meant converting a series to a base.
Drop this conversion and instead name these directories based on the straight base.
Since 3.6 is a new minor version requiring model migration to access, we don't need to worry about incompatibilities when upgrading in place.
Fortunately, once a template is created, we only pass through the template structure (i.e. the whole dir), so this change was fairly simple and painless.
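The renaming can be illustrated with a toy helper. The root path and naming scheme here are assumptions for illustration, not the vsphere provider's real code; the idea is simply that the cache directory name is derived straight from the base, with no series conversion ("jammy") in between:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// templateDir derives the image-cache directory straight from the
// base's OS and channel. Illustrative names only.
func templateDir(root, os, channel, arch string) string {
	return filepath.Join(root, fmt.Sprintf("%s-%s-%s", os, channel, arch))
}

func main() {
	// e.g. a cached template for ubuntu@22.04 on amd64
	fmt.Println(templateDir("/var/cache/templates", "ubuntu", "22.04", "amd64"))
}
```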
## Checklist
- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing

## QA steps
```
$ juju bootstrap vsphere-boston vsphere
Creating Juju controller "vsphere" on vsphere-boston/Boston
Looking for packaged Juju agent version 3.6-beta1 for amd64
No packaged binary found, preparing local Juju agent binary
Launching controller instance(s) on vsphere-boston/Boston...
- juju-51e713-0 (arch=amd64 mem=3.5G)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.246.157.128:22
Attempting to connect to [fe80::250:56ff:fe36:ec7c]:22
Connected to 10.246.157.128
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.246.157.128 to verify accessibility...

Bootstrap complete, controller "vsphere" is now available
Controller machines are in the "controller" model

Now you can run
	juju add-model <model-name>
to create a new model to deploy workloads.

$ juju status -m controller
Model Controller Cloud/Region Version SLA Timestamp
controller  vsphere     vsphere-boston/Boston  3.6-beta1.1  unsupported  12:11:58+01:00

App         Version  Status  Scale  Charm            Channel     Rev  Exposed  Message
controller           active      1  juju-controller  3.6/stable   83  no

Unit           Workload  Agent  Machine  Public address  Ports  Message
controller/0*  active    idle   0        10.246.157.128

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.246.157.128  juju-51e713-0  ubuntu@22.04      poweredOn

$ juju add-model m
$ juju add-machine --base ubuntu@22.04
$ juju add-machine --base ubuntu@20.04
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m      vsphere     vsphere-boston/Boston  3.6-beta1.1  unsupported  12:13:19+01:00

Machine  State    Address         Inst id        Base          AZ  Message
0 started 10.246.157.106 juju-0ed1b9-0 ubuntu@22.04 poweredOn
1 started 10.246.157.129 juju-0ed1b9-1 ubuntu@20.04 poweredOn
```

Observe in vsphere console.

### Migrate from 3.5
`juju-3.5` indicates juju built from the `3.5` branch of `juju/juju`
```
$ juju-3.5 bootstrap vsphere-boston vsphere-3.5
$ juju-3.5 add-model m2
$ juju-3.5 deploy ubuntu jammy
$ juju-3.5 deploy ubuntu focal --base ubuntu@20.04
(wait)
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m2     vsphere-3.5  vsphere-boston/Boston  3.5.0.1  unsupported  12:44:09+01:00

App    Version  Status  Scale  Charm   Channel        Rev  Exposed  Message
focal  20.04    active      1  ubuntu  latest/stable   24  no
jammy  22.04    active      1  ubuntu  latest/stable   24  no

Unit      Workload  Agent  Machine  Public address  Ports  Message
focal/0*  active    idle   1        10.246.157.133
jammy/0*  active    idle   0        10.246.157.130

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.246.157.130  juju-293607-0  ubuntu@22.04      poweredOn
1        started  10.246.157.133  juju-293607-1  ubuntu@20.04      poweredOn

$ juju migrate m2 vsphere
$ juju switch vsphere:m2
$ juju upgrade-model
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m2     vsphere     vsphere-boston/Boston  3.6-beta1.1  unsupported  12:46:17+01:00

App    Version  Status  Scale  Charm   Channel        Rev  Exposed  Message
focal  20.04    active      1  ubuntu  latest/stable   24  no
jammy  22.04    active      1  ubuntu  latest/stable   24  no

Unit      Workload  Agent  Machine  Public address  Ports  Message
focal/0*  active    idle   1        10.246.157.133
jammy/0*  active    idle   0        10.246.157.130

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.246.157.130  juju-293607-0  ubuntu@22.04      poweredOn
1        started  10.246.157.133  juju-293607-1  ubuntu@20.04      poweredOn

$ juju add-unit jammy
$ juju add-unit focal
(wait)
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m2     vsphere     vsphere-boston/Boston  3.6-beta1.1  unsupported  12:48:57+01:00

App    Version  Status  Scale  Charm   Channel        Rev  Exposed  Message
focal           active      2  ubuntu  latest/stable   24  no
jammy           active      2  ubuntu  latest/stable   24  no

Unit      Workload  Agent  Machine  Public address  Ports  Message
focal/0*  active    idle   1        10.246.157.133
focal/1   active    idle   3        10.246.157.121
jammy/0*  active    idle   0        10.246.157.130
jammy/1   active    idle   2        10.246.157.115

Machine  State    Address         Inst id        Base          AZ  Message
0 started 10.246.157.130 juju-293607-0 ubuntu@22.04 poweredOn
1 started 10.246.157.133 juju-293607-1 ubuntu@20.04 poweredOn
2 started 10.246.157.115 juju-293607-2 ubuntu@20.04 poweredOn
3 started 10.246.157.121 juju-293607-3 ubuntu@20.04 poweredOn
```

- 4472d74... by Jack Shaw
-
Rename vsphere template dir
In 3.5 we would cache our images in directories named according to the
series. This meant converting a series to a base.

Drop this conversion and instead name these directories based on the
straight base.

Since 3.6 is a new minor version requiring model migration to access, we
don't need to worry about incompatibilities when upgrading in place.

Fortunately, once a template is created, we only pass through the
template structure (i.e. the whole dir), so this change was fairly simple
and painless.

- 1752b71... by Juju bot <email address hidden>
-
Merge pull request #17308 from jack-w-shaw/JUJU-5963_drop_series_from_RefreshBase
https://github.com/juju/juju/pull/17308

Up until now we supported constructing the RefreshBase type with a series in the channel. However, nowhere in the code base do we make use of this.
I checked every read and write to the Channel attribute and nowhere do we fill in what could be a series.
Since we're moving away from the series construct, drop support for this.
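The dropped case can be sketched as a validation helper. This is illustrative only, not juju's actual RefreshBase code: after the change, the channel field must look like a version channel (`20.04`, `20.04/stable`), never a series name (`focal`):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// validChannel reports whether ch is a version channel with an
// optional risk, e.g. "20.04" or "20.04/stable". A series codename
// like "focal" has a non-numeric track and is rejected.
func validChannel(ch string) bool {
	track, _, _ := strings.Cut(ch, "/")
	if track == "" {
		return false
	}
	for _, r := range track {
		if !unicode.IsDigit(r) && r != '.' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validChannel("20.04/stable")) // true
	fmt.Println(validChannel("focal"))        // false
}
```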
## Checklist
- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing

## QA steps
### Deploy some charms with different bases
```
$ juju deploy ubuntu
$ juju deploy ubuntu focal --base ubuntu@20.04
(wait)
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m      lxd         localhost/localhost  3.6-beta1.1  unsupported  12:37:55+01:00

App     Version  Status  Scale  Charm   Channel        Rev  Exposed  Message
focal   20.04    active      1  ubuntu  latest/stable   24  no
ubuntu  22.04    active      1  ubuntu  latest/stable   24  no

Unit       Workload  Agent  Machine  Public address  Ports  Message
focal/0*   active    idle   1        10.219.211.70
ubuntu/0*  active    idle   0        10.219.211.170

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.219.211.170  juju-4dcf8d-0  ubuntu@22.04      Running
1        started  10.219.211.70   juju-4dcf8d-1  ubuntu@20.04      Running
```

### Refresh a charm
(Note: zookeeper channel `3/stable` points to revision 126, `3/edge` points to 131)
```
$ juju deploy zookeeper --channel 3/stable
Deployed "zookeeper" from charm-hub charm "zookeeper", revision 126 in channel 3/stable on ubuntu@22.04/stable

(wait)
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m3     lxd         localhost/localhost  3.6-beta1.1  unsupported  15:30:31+01:00

App        Version  Status  Scale  Charm      Channel   Rev  Exposed  Message
zookeeper           active      1  zookeeper  3/stable  126  no

Unit          Workload  Agent  Machine  Public address  Ports  Message
zookeeper/0*  active    idle   0        10.219.211.111

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.219.211.111  juju-5abdd5-0  ubuntu@22.04      Running

$ juju refresh zookeeper --channel 3/edge
Added charm-hub charm "zookeeper", revision 131 in channel 3/edge, to the model
no change to endpoints in space "alpha": certificates, cluster, cos-agent, restart, upgrade, zookeeper

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m3     lxd         localhost/localhost  3.6-beta1.1  unsupported  15:31:46+01:00

App        Version  Status  Scale  Charm      Channel  Rev  Exposed  Message
zookeeper  3.8.2    active      1  zookeeper  3/edge   131  no

Unit          Workload  Agent  Machine  Public address  Ports  Message
zookeeper/0*  active    idle   0        10.219.211.111

Machine  State    Address         Inst id        Base          AZ  Message
0 started 10.219.211.111 juju-5abdd5-0 ubuntu@22.04 Running
```

### Download a charm
```
$ juju download ubuntu
Fetching charm "ubuntu" revision 24 using "stable" channel and base "amd64/ubuntu/22.04"
Install the "ubuntu" charm with:
  juju deploy ./ubuntu_r24.charm

$ juju download ubuntu --base ubuntu@20.04
Fetching charm "ubuntu" revision 24 using "stable" channel and base "amd64/ubuntu/20.04"
Install the "ubuntu" charm with:
  juju deploy ./ubuntu_r24.charm
```
- 73712a6... by Jack Shaw
-
Drop support for series in RefreshBase
Up until now we supported constructing the RefreshBase type with a
series in the channel. However, nowhere in the code base do we make use
of this.

I checked every read and write to the Channel attribute and nowhere do
we fill in what could be a series.

Since we're moving away from the series construct, drop support for this.
- ac56535... by Juju bot <email address hidden>
-
Merge pull request #17286 from jack-w-shaw/JUJU-5900_drop_series_from_supported
https://github.com/juju/juju/pull/17286

Drop series parsing from supported-bases code.

Originally, ControllerBases and WorkloadBases were shims around
ControllerSeries and WorkloadSeries. De-couple these functions from each other.

We still read distro-info, which will remain the case until after Madrid,
where a final decision on direction can be made. As such, we still read
the codename, and do use this in a few places. However, we treat the
codenames as opaque strings, and never parse them as OS systems. This is
as close as we can get before Madrid.

## Checklist
- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- [ ] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing
- [ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages

## QA steps
### Setup
Download ubuntu with `juju download ubuntu` and unzip into the dir `./ubuntu`
Then edit the charm manifest.yaml to be:
```
bases:
- architectures:
- amd64
channel: '20.04'
name: ubuntu
- architectures:
- amd64
channel: '22.04'
name: ubuntu
- architectures:
- amd64
channel: '25.04'
name: ubuntu
```

(i.e. add ubuntu@25.04 support to the charm, a future OS that is not yet a workload base)
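The two error paths exercised in the QA run below ("not supported", which `--force` can override, versus "not valid", which juju rejects outright) can be sketched like this. The base sets are illustrative stand-ins, not juju's real distro-info data:

```go
package main

import "fmt"

// known sketches the bases juju has heard of; supported sketches the
// subset it will bootstrap on without --force. Illustrative data only.
var known = map[string]bool{
	"ubuntu@18.04": true, "ubuntu@20.04": true,
	"ubuntu@22.04": true, "ubuntu@23.04": true,
}
var supported = map[string]bool{
	"ubuntu@20.04": true, "ubuntu@22.04": true,
}

// checkBootstrapBase distinguishes an unknown base ("not valid") from
// a known-but-unsupported one ("not supported", overridable).
func checkBootstrapBase(b string) error {
	if !known[b] {
		return fmt.Errorf("base %q not valid", b)
	}
	if !supported[b] {
		return fmt.Errorf("use --force to override: %s not supported", b)
	}
	return nil
}

func main() {
	fmt.Println(checkBootstrapBase("ubuntu@18.04")) // known, unsupported
	fmt.Println(checkBootstrapBase("ubuntu@26.04")) // unknown
	fmt.Println(checkBootstrapBase("ubuntu@22.04")) // fine
}
```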
### Bootstrap controllers to various bases
```
$ juju bootstrap lxd err --bootstrap-base ubuntu@18.04
Creating Juju controller "err" on lxd/localhost
ERROR failed to bootstrap model: use --force to override: ubuntu@18.04/stable not supported

$ juju bootstrap lxd err --bootstrap-base ubuntu@23.04
Creating Juju controller "err" on lxd/localhost
ERROR failed to bootstrap model: use --force to override: ubuntu@23.04/stable not supported

$ juju bootstrap lxd err --bootstrap-base ubuntu@26.04
Creating Juju controller "err" on lxd/localhost
ERROR failed to bootstrap model: base "ubuntu@26.04/stable" not valid
```

```
$ juju bootstrap lxd jammy --bootstrap-base ubuntu@22.04
Creating Juju controller "jammy" on lxd/localhost
Looking for packaged Juju agent version 3.6-beta1 for amd64
No packaged binary found, preparing local Juju agent binary
To configure your system to better support LXD containers, please see: https://documentation.ubuntu.com/lxd/en/latest/explanation/performance_tuning/
Launching controller instance(s) on localhost/localhost...
- juju-c70ac7-0 (arch=amd64)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.219.211.9:22
Connected to 10.219.211.9
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.219.211.9 to verify accessibility...

Bootstrap complete, controller "jammy" is now available
Controller machines are in the "controller" model

Now you can run
	juju add-model <model-name>
to create a new model to deploy workloads.

$ juju status -m controller
Model       Controller  Cloud/Region         Version      SLA          Timestamp
controller  jammy       localhost/localhost  3.6-beta1.1  unsupported  18:04:42+01:00

App         Version  Status  Scale  Charm            Channel     Rev  Exposed  Message
controller           active      1  juju-controller  3.6/stable   83  no

Unit           Workload  Agent  Machine  Public address  Ports  Message
controller/0*  active    idle   0        10.219.211.9

Machine  State    Address       Inst id        Base          AZ  Message
0        started  10.219.211.9  juju-c70ac7-0  ubuntu@22.04      Running
```

```
$ juju bootstrap lxd focal --bootstrap-base ubuntu@20.04
Creating Juju controller "focal" on lxd/localhost
Looking for packaged Juju agent version 3.6-beta1 for amd64
No packaged binary found, preparing local Juju agent binary
To configure your system to better support LXD containers, please see: https://documentation.ubuntu.com/lxd/en/latest/explanation/performance_tuning/
Launching controller instance(s) on localhost/localhost...
- juju-e97da4-0 (arch=amd64)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.219.211.217:22
Connected to 10.219.211.217
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.219.211.217 to verify accessibility...

Bootstrap complete, controller "focal" is now available
Controller machines are in the "controller" model

Now you can run
	juju add-model <model-name>
to create a new model to deploy workloads.

$ juju status -m controller
Model       Controller  Cloud/Region         Version      SLA          Timestamp
controller  focal       localhost/localhost  3.6-beta1.1  unsupported  18:05:34+01:00

App         Version  Status  Scale  Charm            Channel     Rev  Exposed  Message
controller           active      1  juju-controller  3.6/stable   83  no

Unit           Workload  Agent  Machine  Public address  Ports  Message
controller/0*  active    idle   0        10.219.211.217

Machine  State    Address         Inst id        Base          AZ  Message
0        started  10.219.211.217  juju-e97da4-0  ubuntu@20.04      Running
```

### Deploy machines to various bases
```
$ juju add-machine
$ juju add-machine --base ubuntu@22.04
$ juju add-machine --base ubuntu@20.04
$ juju add-machine --base ubuntu@21.04
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m      jammy       localhost/localhost  3.6-beta1.1  unsupported  18:07:35+01:00

Machine  State    Address         Inst id        Base          AZ  Message
0 started 10.219.211.195 juju-8f4c57-0 ubuntu@22.04 Running
1 started 10.219.211.215 juju-8f4c57-1 ubuntu@22.04 Running
2 started 10.219.211.194 juju-8f4c57-2 ubuntu@20.04 Running
3 started 10.219.211.85 juju-8f4c57-3 ubuntu@21.04 Running
```

### Upgrade a machine
```
$ juju deploy ./ubuntu --base ubuntu@20.04
Located local charm "ubuntu", revision 0
Deploying "ubuntu" from local charm "ubuntu", revision 0 on ubuntu@20.04/stable

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m2     lxd         localhost/localhost  3.6-beta1.1  unsupported  12:25:39+01:00

App     Version  Status  Scale  Charm   Channel  Rev  Exposed  Message
ubuntu  20.04    active      1  ubuntu            0  no

Unit       Workload  Agent  Machine  Public address  Ports  Message
ubuntu/0*  active    idle   0        10.219.211.72

Machine  State    Address        Inst id        Base          AZ  Message
0        started  10.219.211.72  juju-88afe1-0  ubuntu@20.04      Running

$ juju upgrade-machine 0 prepare ubuntu@25.04
ERROR os "ubuntu" version "25.04" not found

$ juju upgrade-machine 0 prepare ubuntu@22.04
WARNING: This command will mark machine "0" as being upgraded to "ubuntu@22.04".
WARNING: This command will mark machine "0" as being upgraded to "ubuntu@22.04".
This operation cannot be reverted or canceled once started.
Units running on the machine will also be upgraded. These units include:
  - ubuntu/0

Leadership for the following applications will be pinned and not
subject to change until the "complete" command is run:
  - ubuntu

Continue [y/N]? y
machine-0 validation of upgrade base from "ubuntu@20.04/stable" to "ubuntu@22.04"
machine-0 started upgrade from "ubuntu@20.04" to "ubuntu@22.04"
ubuntu/0 pre-series-upgrade hook running
ubuntu/0 pre-series-upgrade completed
machine-0 binaries and service files written

Juju is now ready for the machine base to be updated.
Perform any manual steps required along with "do-release-upgrade".
When ready, run the following to complete the upgrade base process:

juju upgrade-machine 0 complete
```

- 666b07d... by Juju bot <email address hidden>
-
Merge pull request #17293 from jack-w-shaw/JUJU-5952_remove_series_from_charm_revision_updater
https://github.com/juju/juju/pull/17293

RefreshBases, at the moment, are agnostic to the format of their channel. They can be channels, channels with risks, or series.
In the charm revision updater, for some reason, we go with a series, which is parsed from a charm origin in a previous step.
Use a channel instead which, as it turns out, was already available.
See here, where the channel is finally read:
https://github.com/juju/juju/blob/3.6/charmhub/refresh.go#L425-L432
You can see we first check if the channel is a series and, if it is not, in the sanitiseChannel step we parse it as a channel and extract the track.
## QA steps
Ensure
#### Bootstrap
```
$ juju bootstrap lxd lxd
```

#### Enable wrench
```
$ juju ssh -m controller 0
$ cd /var/lib/juju
$ sudo mkdir wrench
$ cd wrench
$ sudo vim charmrevision
[write "shortinterval", save and exit]
$ cat charmrevision
shortinterval
$ exit
```

#### Deploy a charm from a revision that is not the latest
```
$ juju add-model m --config logging-config="<root>=INFO;juju.worker.charmrevision=DEBUG"
$ juju debug-log
...
controller-0: 15:42:16 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:42:36 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:42:49 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:42:59 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:43:12 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:43:22 DEBUG juju.worker.charmrevision 10s elapsed, performing work
controller-0: 15:43:32 DEBUG juju.worker.charmrevision 10s elapsed, performing work
...

$ juju deploy ubuntu --revision 22 --channel latest/stable # <-- 24 is the latest revision
```

#### Check mongo [using this plugin](https://discourse.charmhub.io/t/login-into-mongodb/309)
```
$ juju mongo
>>> db.charms.find().pretty()
...
{
	"_id" : "110944e2-5095-4f58-83f2-211055ef7dfa:ch:amd64/ubuntu-24",
	"model-uuid" : "110944e2-5095-4f58-83f2-211055ef7dfa",
	"url" : "ch:amd64/ubuntu-24",
"charm-version" : "",
"life" : 0,
"pendingupload" : false,
"placeholder" : true,
"bundlesha256" : "",
"storagepath" : "",
"meta" : null,
"config" : null,
"manifest" : null,
"actions" : null,
"metrics" : null,
"lxd-profile" : null,
"txn-revno" : 2
}
```

#### Check juju status
```
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
m      lxd         localhost/localhost  3.6-beta1.1  unsupported  15:48:40+01:00

App     Version  Status  Scale  Charm   Channel        Rev  Exposed  Message
ubuntu  22.04    active      1  ubuntu  latest/stable   22  no       # <-- "22" should be orange

Unit       Workload  Agent  Machine  Public address  Ports  Message
ubuntu/0*  active    idle   0        10.219.211.38

Machine  State    Address        Inst id        Base          AZ  Message
0        started  10.219.211.38  juju-ef7dfa-0  ubuntu@22.04      Running
```