Merge ~mitchburton/landscape-charm:merge-beta into landscape-charm:stable

Proposed by Mitch Burton
Status: Merged
Approved by: Mitch Burton
Approved revision: ad38c9ddb9ad0f1ed8436138145ac2f993ab946e
Merged at revision: ad38c9ddb9ad0f1ed8436138145ac2f993ab946e
Proposed branch: ~mitchburton/landscape-charm:merge-beta
Merge into: landscape-charm:stable
Diff against target: 2981 lines (+1633/-372)
16 files modified
LICENSE (+19/-20)
Makefile (+5/-2)
README.md (+2/-3)
bundle.yaml (+5/-8)
config.yaml (+11/-0)
lib/charms/grafana_agent/LICENSE (+201/-0)
lib/charms/grafana_agent/v0/cos_agent.py (+819/-0)
lib/charms/operator_libs_linux/v0/apt.py (+97/-66)
lib/charms/operator_libs_linux/v0/passwd.py (+6/-2)
metadata.yaml (+2/-0)
requirements-dev.txt (+4/-0)
src/charm.py (+359/-228)
src/haproxy-config.yaml (+6/-7)
src/settings_files.py (+20/-0)
tests/test_charm.py (+67/-30)
tests/test_settings_files.py (+10/-6)
Reviewer: Spencer Runde (status: Approve)
Review via email: mp+462451@code.launchpad.net

Commit message

Merge beta into stable

Description of the change

A follow-up to https://code.launchpad.net/~mitchburton/landscape-charm/+git/landscape-charm/+merge/462351, applying essentially the same changes.

Summary:
- Update LICENSE to the current GPLv2 text (new FSF address, references to the GNU Lesser GPL)
- Update Makefile and README
- Add Landscape PPA key (`landscape_ppa_key`) and secret token (`secret_token`) config options (first sketch below)
- Add the Grafana Agent machine charm library (`cos_agent`) for COS integration (second sketch below)
- Fix a bug causing package installation to hang (third sketch below)
- Clarify inline documentation
- Bootstrap account on schema migration
- Fix a bug where haproxy fails after the leader goes down because backend service keys were missing
- Ensure haproxy cert is properly encoded
- Add a handler for the application-dashboard relation-joined event, for LMA support
- Set website relation data in haproxy from the leader only
- Update haproxy config to use `http-request` for path replacement instead of regex
- Update haproxy config to use the hashid backend when present
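
The two new config options appear in the `config.yaml` hunk further down. Here is a minimal sketch of how a charm might consume them, assuming standard `ops` config access; the class and method names are illustrative, not the charm's actual code:

```python
import secrets

from ops.charm import CharmBase


class LandscapeServerCharm(CharmBase):  # name illustrative
    def _secret_token(self) -> str:
        # Per the option description: if `secret_token` is unset,
        # generate one securely.
        return self.config.get("secret_token") or secrets.token_hex(32)

    def _ppa_key(self) -> str:
        # Full ASCII-armoured GPG public key for the Landscape PPA
        # source; an empty string means no custom key was supplied.
        return self.config.get("landscape_ppa_key", "")
```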
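
The `cos_agent` library documents its own usage in the module docstring below. Its minimal-instrumentation form, adapted from the library's Example 1 (the charm class name comes from that example), is a single instantiation in `__init__`:

```python
from charms.grafana_agent.v0.cos_agent import COSAgentProvider
from ops.charm import CharmBase


class TelemetryProviderCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Defaults: relation name "cos-agent", scraping localhost:80
        # at /metrics, rules and dashboards from the conventional dirs.
        self._grafana_agent = COSAgentProvider(self)
```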
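
The installation hang corresponds to the `apt.py` hunk below: the old code called `check_call` with `stdout=PIPE`/`stderr=PIPE`, the standard subprocess pitfall where nothing drains the pipes and the child can block once a buffer fills, and it also replaced the environment wholesale. A sketch paraphrasing the fixed hunk (the package name is illustrative):

```python
import os
import subprocess

env = os.environ.copy()  # keep PATH etc.; override only one variable
env["DEBIAN_FRONTEND"] = "noninteractive"

# subprocess.run with capture_output=True drains stdout/stderr, avoiding
# the check_call(..., stdout=PIPE) deadlock once output fills the pipe.
subprocess.run(
    ["apt-get", "-y", "install", "hello"],
    capture_output=True,
    check=True,
    text=True,
    env=env,
)
```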

Revision history for this message
Spencer Runde (spencerrunde) wrote:

LGTM

review: Approve

Preview Diff

1diff --git a/LICENSE b/LICENSE
2index 5b6e7c6..d159169 100644
3--- a/LICENSE
4+++ b/LICENSE
5@@ -1,12 +1,12 @@
6- GNU GENERAL PUBLIC LICENSE
7- Version 2, June 1991
8+ GNU GENERAL PUBLIC LICENSE
9+ Version 2, June 1991
10
11- Copyright (C) 1989, 1991 Free Software Foundation, Inc.
12- 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
13+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
14+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
15 Everyone is permitted to copy and distribute verbatim copies
16 of this license document, but changing it is not allowed.
17
18- Preamble
19+ Preamble
20
21 The licenses for most software are designed to take away your
22 freedom to share and change it. By contrast, the GNU General Public
23@@ -15,7 +15,7 @@ software--to make sure the software is free for all its users. This
24 General Public License applies to most of the Free Software
25 Foundation's software and to any other program whose authors commit to
26 using it. (Some other Free Software Foundation software is covered by
27-the GNU Library General Public License instead.) You can apply it to
28+the GNU Lesser General Public License instead.) You can apply it to
29 your programs, too.
30
31 When we speak of free software, we are referring to freedom, not
32@@ -55,8 +55,8 @@ patent must be licensed for everyone's free use or not licensed at all.
33
34 The precise terms and conditions for copying, distribution and
35 modification follow.
36-
37
38- GNU GENERAL PUBLIC LICENSE
39+
40+ GNU GENERAL PUBLIC LICENSE
41 TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
42
43 0. This License applies to any program or other work which contains
44@@ -110,7 +110,7 @@ above, provided that you also meet all of these conditions:
45 License. (Exception: if the Program itself is interactive but
46 does not normally print such an announcement, your work based on
47 the Program is not required to print an announcement.)
48-
49
50+
51 These requirements apply to the modified work as a whole. If
52 identifiable sections of that work are not derived from the Program,
53 and can be reasonably considered independent and separate works in
54@@ -168,7 +168,7 @@ access to copy from a designated place, then offering equivalent
55 access to copy the source code from the same place counts as
56 distribution of the source code, even though third parties are not
57 compelled to copy the source along with the object code.
58-
59
60+
61 4. You may not copy, modify, sublicense, or distribute the Program
62 except as expressly provided under this License. Any attempt
63 otherwise to copy, modify, sublicense or distribute the Program is
64@@ -225,7 +225,7 @@ impose that choice.
65
66 This section is intended to make thoroughly clear what is believed to
67 be a consequence of the rest of this License.
68-
69
70+
71 8. If the distribution and/or use of the Program is restricted in
72 certain countries either by patents or by copyrighted interfaces, the
73 original copyright holder who places the Program under this License
74@@ -255,7 +255,7 @@ make exceptions for this. Our decision will be guided by the two goals
75 of preserving the free status of all derivatives of our free software and
76 of promoting the sharing and reuse of software generally.
77
78- NO WARRANTY
79+ NO WARRANTY
80
81 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
82 FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
83@@ -277,9 +277,9 @@ YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
84 PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
85 POSSIBILITY OF SUCH DAMAGES.
86
87- END OF TERMS AND CONDITIONS
88-
89
90- How to Apply These Terms to Your New Programs
91+ END OF TERMS AND CONDITIONS
92+
93+ How to Apply These Terms to Your New Programs
94
95 If you develop a new program, and you want it to be of the greatest
96 possible use to the public, the best way to achieve this is to make it
97@@ -303,10 +303,9 @@ the "copyright" line and a pointer to where the full notice is found.
98 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
99 GNU General Public License for more details.
100
101- You should have received a copy of the GNU General Public License
102- along with this program; if not, write to the Free Software
103- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
104-
105+ You should have received a copy of the GNU General Public License along
106+ with this program; if not, write to the Free Software Foundation, Inc.,
107+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
108
109 Also add information on how to contact you by electronic and paper mail.
110
111@@ -336,5 +335,5 @@ necessary. Here is a sample; alter the names:
112 This General Public License does not permit incorporating your program into
113 proprietary programs. If your program is a subroutine library, you may
114 consider it more useful to permit linking proprietary applications with the
115-library. If this is what you want to do, use the GNU Library General
116+library. If this is what you want to do, use the GNU Lesser General
117 Public License instead of this License.
118diff --git a/Makefile b/Makefile
119index 6fb245f..2827c47 100644
120--- a/Makefile
121+++ b/Makefile
122@@ -1,8 +1,11 @@
123+DIRNAME = $(notdir $(shell pwd))
124+DIRNAME := $(addsuffix -auto, $(DIRNAME))
125+
126 build: clean
127 charmcraft pack
128- juju add-model testserver
129+ juju add-model $(DIRNAME)
130 juju deploy ./bundle.yaml
131
132 clean:
133 -rm *.charm
134- -juju destroy-model -y testserver --force
135+ -juju destroy-model -y $(DIRNAME) --force
136diff --git a/README.md b/README.md
137index 86763ce..5832b12 100644
138--- a/README.md
139+++ b/README.md
140@@ -69,6 +69,5 @@ When developing the charm, here's a quick way to test out changes as
141 they would be deployed by `landscape-scalable`:
142
143 ```bash
144-charmcraft pack
145-juju deploy ./bundle.yaml
146-```
147\ No newline at end of file
148+make build
149+```
150diff --git a/bundle.yaml b/bundle.yaml
151index 0e7f386..7af5829 100644
152--- a/bundle.yaml
153+++ b/bundle.yaml
154@@ -1,19 +1,17 @@
155 description: Landscape Scalable
156+series: jammy
157 applications:
158 postgresql:
159- series: focal
160+ channel: 14/beta
161 charm: ch:postgresql
162 num_units: 1
163- options:
164- extra_packages: python3-apt postgresql-contrib postgresql-.*-debversion postgresql-plpython3-*
165- max_connections: 500
166- max_prepared_transactions: 500
167 rabbitmq-server:
168- series: focal
169+ channel: latest/edge
170 charm: ch:rabbitmq-server
171 num_units: 1
172 haproxy:
173- series: focal
174+ series: jammy
175+ channel: edge
176 charm: ch:haproxy
177 num_units: 1
178 expose: true
179@@ -23,7 +21,6 @@ applications:
180 ssl_cert: SELFSIGNED
181 global_default_bind_options: "no-tlsv10"
182 landscape-server:
183- series: jammy
184 charm: ./landscape-server_ubuntu-22.04-amd64-arm64_ubuntu-20.04-amd64-arm64.charm
185 num_units: 1
186 relations:
187diff --git a/config.yaml b/config.yaml
188index c08640f..cc1e765 100644
189--- a/config.yaml
190+++ b/config.yaml
191@@ -6,6 +6,11 @@ options:
192 type: string
193 default: "ppa:landscape/self-hosted-23.03"
194 description: The PPA from which Landscape Server will be installed.
195+ landscape_ppa_key:
196+ type: string
197+ default: ""
198+ description: |
199+ Full ASCII-armoured GPG public key for the Landscape PPA source.
200 worker_counts:
201 type: int
202 default: 2
203@@ -182,3 +187,9 @@ options:
204 description: |
205 Additional service.conf settings to be merged with the default
206 configuration.
207+ secret_token:
208+ type: string
209+ default:
210+ description: |
211+ A secret token for the landscape service. If not set one will be
212+ generated securely.
213diff --git a/lib/charms/grafana_agent/LICENSE b/lib/charms/grafana_agent/LICENSE
214new file mode 100644
215index 0000000..a76e8a4
216--- /dev/null
217+++ b/lib/charms/grafana_agent/LICENSE
218@@ -0,0 +1,201 @@
219+ Apache License
220+ Version 2.0, January 2004
221+ http://www.apache.org/licenses/
222+
223+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
224+
225+ 1. Definitions.
226+
227+ "License" shall mean the terms and conditions for use, reproduction,
228+ and distribution as defined by Sections 1 through 9 of this document.
229+
230+ "Licensor" shall mean the copyright owner or entity authorized by
231+ the copyright owner that is granting the License.
232+
233+ "Legal Entity" shall mean the union of the acting entity and all
234+ other entities that control, are controlled by, or are under common
235+ control with that entity. For the purposes of this definition,
236+ "control" means (i) the power, direct or indirect, to cause the
237+ direction or management of such entity, whether by contract or
238+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
239+ outstanding shares, or (iii) beneficial ownership of such entity.
240+
241+ "You" (or "Your") shall mean an individual or Legal Entity
242+ exercising permissions granted by this License.
243+
244+ "Source" form shall mean the preferred form for making modifications,
245+ including but not limited to software source code, documentation
246+ source, and configuration files.
247+
248+ "Object" form shall mean any form resulting from mechanical
249+ transformation or translation of a Source form, including but
250+ not limited to compiled object code, generated documentation,
251+ and conversions to other media types.
252+
253+ "Work" shall mean the work of authorship, whether in Source or
254+ Object form, made available under the License, as indicated by a
255+ copyright notice that is included in or attached to the work
256+ (an example is provided in the Appendix below).
257+
258+ "Derivative Works" shall mean any work, whether in Source or Object
259+ form, that is based on (or derived from) the Work and for which the
260+ editorial revisions, annotations, elaborations, or other modifications
261+ represent, as a whole, an original work of authorship. For the purposes
262+ of this License, Derivative Works shall not include works that remain
263+ separable from, or merely link (or bind by name) to the interfaces of,
264+ the Work and Derivative Works thereof.
265+
266+ "Contribution" shall mean any work of authorship, including
267+ the original version of the Work and any modifications or additions
268+ to that Work or Derivative Works thereof, that is intentionally
269+ submitted to Licensor for inclusion in the Work by the copyright owner
270+ or by an individual or Legal Entity authorized to submit on behalf of
271+ the copyright owner. For the purposes of this definition, "submitted"
272+ means any form of electronic, verbal, or written communication sent
273+ to the Licensor or its representatives, including but not limited to
274+ communication on electronic mailing lists, source code control systems,
275+ and issue tracking systems that are managed by, or on behalf of, the
276+ Licensor for the purpose of discussing and improving the Work, but
277+ excluding communication that is conspicuously marked or otherwise
278+ designated in writing by the copyright owner as "Not a Contribution."
279+
280+ "Contributor" shall mean Licensor and any individual or Legal Entity
281+ on behalf of whom a Contribution has been received by Licensor and
282+ subsequently incorporated within the Work.
283+
284+ 2. Grant of Copyright License. Subject to the terms and conditions of
285+ this License, each Contributor hereby grants to You a perpetual,
286+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
287+ copyright license to reproduce, prepare Derivative Works of,
288+ publicly display, publicly perform, sublicense, and distribute the
289+ Work and such Derivative Works in Source or Object form.
290+
291+ 3. Grant of Patent License. Subject to the terms and conditions of
292+ this License, each Contributor hereby grants to You a perpetual,
293+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
294+ (except as stated in this section) patent license to make, have made,
295+ use, offer to sell, sell, import, and otherwise transfer the Work,
296+ where such license applies only to those patent claims licensable
297+ by such Contributor that are necessarily infringed by their
298+ Contribution(s) alone or by combination of their Contribution(s)
299+ with the Work to which such Contribution(s) was submitted. If You
300+ institute patent litigation against any entity (including a
301+ cross-claim or counterclaim in a lawsuit) alleging that the Work
302+ or a Contribution incorporated within the Work constitutes direct
303+ or contributory patent infringement, then any patent licenses
304+ granted to You under this License for that Work shall terminate
305+ as of the date such litigation is filed.
306+
307+ 4. Redistribution. You may reproduce and distribute copies of the
308+ Work or Derivative Works thereof in any medium, with or without
309+ modifications, and in Source or Object form, provided that You
310+ meet the following conditions:
311+
312+ (a) You must give any other recipients of the Work or
313+ Derivative Works a copy of this License; and
314+
315+ (b) You must cause any modified files to carry prominent notices
316+ stating that You changed the files; and
317+
318+ (c) You must retain, in the Source form of any Derivative Works
319+ that You distribute, all copyright, patent, trademark, and
320+ attribution notices from the Source form of the Work,
321+ excluding those notices that do not pertain to any part of
322+ the Derivative Works; and
323+
324+ (d) If the Work includes a "NOTICE" text file as part of its
325+ distribution, then any Derivative Works that You distribute must
326+ include a readable copy of the attribution notices contained
327+ within such NOTICE file, excluding those notices that do not
328+ pertain to any part of the Derivative Works, in at least one
329+ of the following places: within a NOTICE text file distributed
330+ as part of the Derivative Works; within the Source form or
331+ documentation, if provided along with the Derivative Works; or,
332+ within a display generated by the Derivative Works, if and
333+ wherever such third-party notices normally appear. The contents
334+ of the NOTICE file are for informational purposes only and
335+ do not modify the License. You may add Your own attribution
336+ notices within Derivative Works that You distribute, alongside
337+ or as an addendum to the NOTICE text from the Work, provided
338+ that such additional attribution notices cannot be construed
339+ as modifying the License.
340+
341+ You may add Your own copyright statement to Your modifications and
342+ may provide additional or different license terms and conditions
343+ for use, reproduction, or distribution of Your modifications, or
344+ for any such Derivative Works as a whole, provided Your use,
345+ reproduction, and distribution of the Work otherwise complies with
346+ the conditions stated in this License.
347+
348+ 5. Submission of Contributions. Unless You explicitly state otherwise,
349+ any Contribution intentionally submitted for inclusion in the Work
350+ by You to the Licensor shall be under the terms and conditions of
351+ this License, without any additional terms or conditions.
352+ Notwithstanding the above, nothing herein shall supersede or modify
353+ the terms of any separate license agreement you may have executed
354+ with Licensor regarding such Contributions.
355+
356+ 6. Trademarks. This License does not grant permission to use the trade
357+ names, trademarks, service marks, or product names of the Licensor,
358+ except as required for reasonable and customary use in describing the
359+ origin of the Work and reproducing the content of the NOTICE file.
360+
361+ 7. Disclaimer of Warranty. Unless required by applicable law or
362+ agreed to in writing, Licensor provides the Work (and each
363+ Contributor provides its Contributions) on an "AS IS" BASIS,
364+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
365+ implied, including, without limitation, any warranties or conditions
366+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
367+ PARTICULAR PURPOSE. You are solely responsible for determining the
368+ appropriateness of using or redistributing the Work and assume any
369+ risks associated with Your exercise of permissions under this License.
370+
371+ 8. Limitation of Liability. In no event and under no legal theory,
372+ whether in tort (including negligence), contract, or otherwise,
373+ unless required by applicable law (such as deliberate and grossly
374+ negligent acts) or agreed to in writing, shall any Contributor be
375+ liable to You for damages, including any direct, indirect, special,
376+ incidental, or consequential damages of any character arising as a
377+ result of this License or out of the use or inability to use the
378+ Work (including but not limited to damages for loss of goodwill,
379+ work stoppage, computer failure or malfunction, or any and all
380+ other commercial damages or losses), even if such Contributor
381+ has been advised of the possibility of such damages.
382+
383+ 9. Accepting Warranty or Additional Liability. While redistributing
384+ the Work or Derivative Works thereof, You may choose to offer,
385+ and charge a fee for, acceptance of support, warranty, indemnity,
386+ or other liability obligations and/or rights consistent with this
387+ License. However, in accepting such obligations, You may act only
388+ on Your own behalf and on Your sole responsibility, not on behalf
389+ of any other Contributor, and only if You agree to indemnify,
390+ defend, and hold each Contributor harmless for any liability
391+ incurred by, or claims asserted against, such Contributor by reason
392+ of your accepting any such warranty or additional liability.
393+
394+ END OF TERMS AND CONDITIONS
395+
396+ APPENDIX: How to apply the Apache License to your work.
397+
398+ To apply the Apache License to your work, attach the following
399+ boilerplate notice, with the fields enclosed by brackets "[]"
400+ replaced with your own identifying information. (Don't include
401+ the brackets!) The text should be enclosed in the appropriate
402+ comment syntax for the file format. We also recommend that a
403+ file or class name and description of purpose be included on the
404+ same "printed page" as the copyright notice for easier
405+ identification within third-party archives.
406+
407+ Copyright 2024 Canonical Ltd.
408+
409+ Licensed under the Apache License, Version 2.0 (the "License");
410+ you may not use this file except in compliance with the License.
411+ You may obtain a copy of the License at
412+
413+ http://www.apache.org/licenses/LICENSE-2.0
414+
415+ Unless required by applicable law or agreed to in writing, software
416+ distributed under the License is distributed on an "AS IS" BASIS,
417+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
418+ See the License for the specific language governing permissions and
419+ limitations under the License.
420\ No newline at end of file
421diff --git a/lib/charms/grafana_agent/v0/cos_agent.py b/lib/charms/grafana_agent/v0/cos_agent.py
422new file mode 100644
423index 0000000..259a901
424--- /dev/null
425+++ b/lib/charms/grafana_agent/v0/cos_agent.py
426@@ -0,0 +1,819 @@
427+# Copyright 2023 Canonical Ltd.
428+# See LICENSE file for licensing details.
429+
430+r"""## Overview.
431+
432+This library can be used to manage the cos_agent relation interface:
433+
434+- `COSAgentProvider`: Use in machine charms that need to have a workload's metrics
435+ or logs scraped, or forward rule files or dashboards to Prometheus, Loki or Grafana through
436+ the Grafana Agent machine charm.
437+
438+- `COSAgentConsumer`: Used in the Grafana Agent machine charm to manage the requirer side of
439+ the `cos_agent` interface.
440+
441+
442+## COSAgentProvider Library Usage
443+
444+Grafana Agent machine Charmed Operator interacts with its clients using the cos_agent library.
445+Charms seeking to send telemetry, must do so using the `COSAgentProvider` object from
446+this charm library.
447+
448+Using the `COSAgentProvider` object only requires instantiating it,
449+typically in the `__init__` method of your charm (the one which sends telemetry).
450+
451+The constructor of `COSAgentProvider` has only one required and nine optional parameters:
452+
453+```python
454+ def __init__(
455+ self,
456+ charm: CharmType,
457+ relation_name: str = DEFAULT_RELATION_NAME,
458+ metrics_endpoints: Optional[List[_MetricsEndpointDict]] = None,
459+ metrics_rules_dir: str = "./src/prometheus_alert_rules",
460+ logs_rules_dir: str = "./src/loki_alert_rules",
461+ recurse_rules_dirs: bool = False,
462+ log_slots: Optional[List[str]] = None,
463+ dashboard_dirs: Optional[List[str]] = None,
464+ refresh_events: Optional[List] = None,
465+ scrape_configs: Optional[Union[List[Dict], Callable]] = None,
466+ ):
467+```
468+
469+### Parameters
470+
471+- `charm`: The instance of the charm that instantiates `COSAgentProvider`, typically `self`.
472+
473+- `relation_name`: If your charmed operator uses a relation name other than `cos-agent` to use
474+ the `cos_agent` interface, this is where you have to specify that.
475+
476+- `metrics_endpoints`: In this parameter you can specify the metrics endpoints that Grafana Agent
477+ machine Charmed Operator will scrape. The configs of this list will be merged with the configs
478+ from `scrape_configs`.
479+
480+- `metrics_rules_dir`: The directory in which the Charmed Operator stores its metrics alert rules
481+ files.
482+
483+- `logs_rules_dir`: The directory in which the Charmed Operator stores its logs alert rules files.
484+
485+- `recurse_rules_dirs`: This parameters set whether Grafana Agent machine Charmed Operator has to
486+ search alert rules files recursively in the previous two directories or not.
487+
488+- `log_slots`: Snap slots to connect to for scraping logs in the form ["snap-name:slot", ...].
489+
490+- `dashboard_dirs`: List of directories where the dashboards are stored in the Charmed Operator.
491+
492+- `refresh_events`: List of events on which to refresh relation data.
493+
494+- `scrape_configs`: List of standard scrape_configs dicts or a callable that returns the list in
495+ case the configs need to be generated dynamically. The contents of this list will be merged
496+ with the configs from `metrics_endpoints`.
497+
498+
499+### Example 1 - Minimal instrumentation:
500+
501+In order to use this object the following should be in the `charm.py` file.
502+
503+```python
504+from charms.grafana_agent.v0.cos_agent import COSAgentProvider
505+...
506+class TelemetryProviderCharm(CharmBase):
507+ def __init__(self, *args):
508+ ...
509+ self._grafana_agent = COSAgentProvider(self)
510+```
511+
512+### Example 2 - Full instrumentation:
513+
514+In order to use this object the following should be in the `charm.py` file.
515+
516+```python
517+from charms.grafana_agent.v0.cos_agent import COSAgentProvider
518+...
519+class TelemetryProviderCharm(CharmBase):
520+ def __init__(self, *args):
521+ ...
522+ self._grafana_agent = COSAgentProvider(
523+ self,
524+ relation_name="custom-cos-agent",
525+ metrics_endpoints=[
526+ # specify "path" and "port" to scrape from localhost
527+ {"path": "/metrics", "port": 9000},
528+ {"path": "/metrics", "port": 9001},
529+ {"path": "/metrics", "port": 9002},
530+ ],
531+ metrics_rules_dir="./src/alert_rules/prometheus",
532+ logs_rules_dir="./src/alert_rules/loki",
533+ recursive_rules_dir=True,
534+ log_slots=["my-app:slot"],
535+ dashboard_dirs=["./src/dashboards_1", "./src/dashboards_2"],
536+ refresh_events=["update-status", "upgrade-charm"],
537+ scrape_configs=[
538+ {
539+ "job_name": "custom_job",
540+ "metrics_path": "/metrics",
541+ "authorization": {"credentials": "bearer-token"},
542+ "static_configs": [
543+ {
544+ "targets": ["localhost:9003"]},
545+ "labels": {"key": "value"},
546+ },
547+ ],
548+ },
549+ ]
550+ )
551+```
552+
553+### Example 3 - Dynamic scrape configs generation:
554+
555+Pass a function to the `scrape_configs` to decouple the generation of the configs
556+from the instantiation of the COSAgentProvider object.
557+
558+```python
559+from charms.grafana_agent.v0.cos_agent import COSAgentProvider
560+...
561+
562+class TelemetryProviderCharm(CharmBase):
563+ def generate_scrape_configs(self):
564+ return [
565+ {
566+ "job_name": "custom",
567+ "metrics_path": "/metrics",
568+ "static_configs": [{"targets": ["localhost:9000"]}],
569+ },
570+ ]
571+
572+ def __init__(self, *args):
573+ ...
574+ self._grafana_agent = COSAgentProvider(
575+ self,
576+ scrape_configs=self.generate_scrape_configs,
577+ )
578+```
579+
580+## COSAgentConsumer Library Usage
581+
582+This object may be used by any Charmed Operator which gathers telemetry data by
583+implementing the consumer side of the `cos_agent` interface.
584+For instance Grafana Agent machine Charmed Operator.
585+
586+For this purpose the charm needs to instantiate the `COSAgentConsumer` object with one mandatory
587+and two optional arguments.
588+
589+### Parameters
590+
591+- `charm`: A reference to the parent (Grafana Agent machine) charm.
592+
593+- `relation_name`: The name of the relation that the charm uses to interact
594+ with its clients that provides telemetry data using the `COSAgentProvider` object.
595+
596+ If provided, this relation name must match a provided relation in metadata.yaml with the
597+ `cos_agent` interface.
598+ The default value of this argument is "cos-agent".
599+
600+- `refresh_events`: List of events on which to refresh relation data.
601+
602+
603+### Example 1 - Minimal instrumentation:
604+
605+In order to use this object the following should be in the `charm.py` file.
606+
607+```python
608+from charms.grafana_agent.v0.cos_agent import COSAgentConsumer
609+...
610+class GrafanaAgentMachineCharm(GrafanaAgentCharm)
611+ def __init__(self, *args):
612+ ...
613+ self._cos = COSAgentRequirer(self)
614+```
615+
616+
617+### Example 2 - Full instrumentation:
618+
619+In order to use this object the following should be in the `charm.py` file.
620+
621+```python
622+from charms.grafana_agent.v0.cos_agent import COSAgentConsumer
623+...
624+class GrafanaAgentMachineCharm(GrafanaAgentCharm)
625+ def __init__(self, *args):
626+ ...
627+ self._cos = COSAgentRequirer(
628+ self,
629+ relation_name="cos-agent-consumer",
630+ refresh_events=["update-status", "upgrade-charm"],
631+ )
632+```
633+"""
634+
635+import json
636+import logging
637+from collections import namedtuple
638+from itertools import chain
639+from pathlib import Path
640+from typing import TYPE_CHECKING, Any, Callable, ClassVar, Dict, List, Optional, Set, Union
641+
642+import pydantic
643+from cosl import GrafanaDashboard, JujuTopology
644+from cosl.rules import AlertRules
645+from ops.charm import RelationChangedEvent
646+from ops.framework import EventBase, EventSource, Object, ObjectEvents
647+from ops.model import Relation, Unit
648+from ops.testing import CharmType
649+
650+if TYPE_CHECKING:
651+ try:
652+ from typing import TypedDict
653+
654+ class _MetricsEndpointDict(TypedDict):
655+ path: str
656+ port: int
657+
658+ except ModuleNotFoundError:
659+ _MetricsEndpointDict = Dict # pyright: ignore
660+
661+LIBID = "dc15fa84cef84ce58155fb84f6c6213a"
662+LIBAPI = 0
663+LIBPATCH = 7
664+
665+PYDEPS = ["cosl", "pydantic < 2"]
666+
667+DEFAULT_RELATION_NAME = "cos-agent"
668+DEFAULT_PEER_RELATION_NAME = "peers"
669+DEFAULT_SCRAPE_CONFIG = {
670+ "static_configs": [{"targets": ["localhost:80"]}],
671+ "metrics_path": "/metrics",
672+}
673+
674+logger = logging.getLogger(__name__)
675+SnapEndpoint = namedtuple("SnapEndpoint", "owner, name")
676+
677+
678+class CosAgentProviderUnitData(pydantic.BaseModel):
679+ """Unit databag model for `cos-agent` relation."""
680+
681+ # The following entries are the same for all units of the same principal.
682+ # Note that the same grafana agent subordinate may be related to several apps.
683+ # this needs to make its way to the gagent leader
684+ metrics_alert_rules: dict
685+ log_alert_rules: dict
686+ dashboards: List[GrafanaDashboard]
687+ subordinate: Optional[bool]
688+
689+ # The following entries may vary across units of the same principal app.
690+ # this data does not need to be forwarded to the gagent leader
691+ metrics_scrape_jobs: List[Dict]
692+ log_slots: List[str]
693+
694+ # when this whole datastructure is dumped into a databag, it will be nested under this key.
695+ # while not strictly necessary (we could have it 'flattened out' into the databag),
696+ # this simplifies working with the model.
697+ KEY: ClassVar[str] = "config"
698+
699+
700+class CosAgentPeersUnitData(pydantic.BaseModel):
701+ """Unit databag model for `peers` cos-agent machine charm peer relation."""
702+
703+ # We need the principal unit name and relation metadata to be able to render identifiers
704+ # (e.g. topology) on the leader side, after all the data moves into peer data (the grafana
705+ # agent leader can only see its own principal, because it is a subordinate charm).
706+ principal_unit_name: str
707+ principal_relation_id: str
708+ principal_relation_name: str
709+
710+ # The only data that is forwarded to the leader is data that needs to go into the app databags
711+ # of the outgoing o11y relations.
712+ metrics_alert_rules: Optional[dict]
713+ log_alert_rules: Optional[dict]
714+ dashboards: Optional[List[GrafanaDashboard]]
715+
716+ # when this whole datastructure is dumped into a databag, it will be nested under this key.
717+ # while not strictly necessary (we could have it 'flattened out' into the databag),
718+ # this simplifies working with the model.
719+ KEY: ClassVar[str] = "config"
720+
721+ @property
722+ def app_name(self) -> str:
723+ """Parse out the app name from the unit name.
724+
725+ TODO: Switch to using `model_post_init` when pydantic v2 is released?
726+ https://github.com/pydantic/pydantic/issues/1729#issuecomment-1300576214
727+ """
728+ return self.principal_unit_name.split("/")[0]
729+
730+
731+class COSAgentProvider(Object):
732+ """Integration endpoint wrapper for the provider side of the cos_agent interface."""
733+
734+ def __init__(
735+ self,
736+ charm: CharmType,
737+ relation_name: str = DEFAULT_RELATION_NAME,
738+ metrics_endpoints: Optional[List["_MetricsEndpointDict"]] = None,
739+ metrics_rules_dir: str = "./src/prometheus_alert_rules",
740+ logs_rules_dir: str = "./src/loki_alert_rules",
741+ recurse_rules_dirs: bool = False,
742+ log_slots: Optional[List[str]] = None,
743+ dashboard_dirs: Optional[List[str]] = None,
744+ refresh_events: Optional[List] = None,
745+ *,
746+ scrape_configs: Optional[Union[List[dict], Callable]] = None,
747+ ):
748+ """Create a COSAgentProvider instance.
749+
750+ Args:
751+ charm: The `CharmBase` instance that is instantiating this object.
752+ relation_name: The name of the relation to communicate over.
753+ metrics_endpoints: List of endpoints in the form [{"path": path, "port": port}, ...].
754+ This argument is a simplified form of the `scrape_configs`.
755+ The contents of this list will be merged with the contents of `scrape_configs`.
756+ metrics_rules_dir: Directory where the metrics rules are stored.
757+ logs_rules_dir: Directory where the logs rules are stored.
758+ recurse_rules_dirs: Whether to recurse into rule paths.
759+ log_slots: Snap slots to connect to for scraping logs
760+ in the form ["snap-name:slot", ...].
761+ dashboard_dirs: Directory where the dashboards are stored.
762+ refresh_events: List of events on which to refresh relation data.
763+ scrape_configs: List of standard scrape_configs dicts or a callable
764+ that returns the list in case the configs need to be generated dynamically.
765+ The contents of this list will be merged with the contents of `metrics_endpoints`.
766+ """
767+ super().__init__(charm, relation_name)
768+ dashboard_dirs = dashboard_dirs or ["./src/grafana_dashboards"]
769+
770+ self._charm = charm
771+ self._relation_name = relation_name
772+ self._metrics_endpoints = metrics_endpoints or []
773+ self._scrape_configs = scrape_configs or []
774+ self._metrics_rules = metrics_rules_dir
775+ self._logs_rules = logs_rules_dir
776+ self._recursive = recurse_rules_dirs
777+ self._log_slots = log_slots or []
778+ self._dashboard_dirs = dashboard_dirs
779+ self._refresh_events = refresh_events or [self._charm.on.config_changed]
780+
781+ events = self._charm.on[relation_name]
782+ self.framework.observe(events.relation_joined, self._on_refresh)
783+ self.framework.observe(events.relation_changed, self._on_refresh)
784+ for event in self._refresh_events:
785+ self.framework.observe(event, self._on_refresh)
786+
787+ def _on_refresh(self, event):
788+ """Trigger the class to update relation data."""
789+ relations = self._charm.model.relations[self._relation_name]
790+
791+ for relation in relations:
792+ # Before a principal is related to the grafana-agent subordinate, we'd get
793+ # ModelError: ERROR cannot read relation settings: unit "zk/2": settings not found
794+ # Add a guard to make sure it doesn't happen.
795+ if relation.data and self._charm.unit in relation.data:
796+ # Subordinate relations can communicate only over unit data.
797+ try:
798+ data = CosAgentProviderUnitData(
799+ metrics_alert_rules=self._metrics_alert_rules,
800+ log_alert_rules=self._log_alert_rules,
801+ dashboards=self._dashboards,
802+ metrics_scrape_jobs=self._scrape_jobs,
803+ log_slots=self._log_slots,
804+ subordinate=self._charm.meta.subordinate,
805+ )
806+ relation.data[self._charm.unit][data.KEY] = data.json()
807+ except (
808+ pydantic.ValidationError,
809+ json.decoder.JSONDecodeError,
810+ ) as e:
811+ logger.error("Invalid relation data provided: %s", e)
812+
813+ @property
814+ def _scrape_jobs(self) -> List[Dict]:
815+ """Return a prometheus_scrape-like data structure for jobs.
816+
817+ https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
818+ """
819+ if callable(self._scrape_configs):
820+ scrape_configs = self._scrape_configs()
821+ else:
822+ # Create a copy of the user scrape_configs, since we will mutate this object
823+ scrape_configs = self._scrape_configs.copy()
824+
825+ # Convert "metrics_endpoints" to standard scrape_configs, and add them in
826+ for endpoint in self._metrics_endpoints:
827+ scrape_configs.append(
828+ {
829+ "metrics_path": endpoint["path"],
830+ "static_configs": [{"targets": [f"localhost:{endpoint['port']}"]}],
831+ }
832+ )
833+
834+ scrape_configs = scrape_configs or [DEFAULT_SCRAPE_CONFIG]
835+
836+ # Augment job name to include the app name and a unique id (index)
837+ for idx, scrape_config in enumerate(scrape_configs):
838+ scrape_config["job_name"] = "_".join(
839+ [self._charm.app.name, str(idx), scrape_config.get("job_name", "default")]
840+ )
841+
842+ return scrape_configs
843+
844+ @property
845+ def _metrics_alert_rules(self) -> Dict:
846+ """Use (for now) the prometheus_scrape AlertRules to initialize this."""
847+ alert_rules = AlertRules(
848+ query_type="promql", topology=JujuTopology.from_charm(self._charm)
849+ )
850+ alert_rules.add_path(self._metrics_rules, recursive=self._recursive)
851+ return alert_rules.as_dict()
852+
853+ @property
854+ def _log_alert_rules(self) -> Dict:
855+ """Use (for now) the loki_push_api AlertRules to initialize this."""
856+ alert_rules = AlertRules(query_type="logql", topology=JujuTopology.from_charm(self._charm))
857+ alert_rules.add_path(self._logs_rules, recursive=self._recursive)
858+ return alert_rules.as_dict()
859+
860+ @property
861+ def _dashboards(self) -> List[GrafanaDashboard]:
862+ dashboards: List[GrafanaDashboard] = []
863+ for d in self._dashboard_dirs:
864+ for path in Path(d).glob("*"):
865+ dashboard = GrafanaDashboard._serialize(path.read_bytes())
866+ dashboards.append(dashboard)
867+ return dashboards
868+
869+
870+class COSAgentDataChanged(EventBase):
871+ """Event emitted by `COSAgentRequirer` when relation data changes."""
872+
873+
874+class COSAgentValidationError(EventBase):
875+ """Event emitted by `COSAgentRequirer` when there is an error in the relation data."""
876+
877+ def __init__(self, handle, message: str = ""):
878+ super().__init__(handle)
879+ self.message = message
880+
881+ def snapshot(self) -> Dict:
882+ """Save COSAgentValidationError source information."""
883+ return {"message": self.message}
884+
885+ def restore(self, snapshot):
886+ """Restore COSAgentValidationError source information."""
887+ self.message = snapshot["message"]
888+
889+
890+class COSAgentRequirerEvents(ObjectEvents):
891+ """`COSAgentRequirer` events."""
892+
893+ data_changed = EventSource(COSAgentDataChanged)
894+ validation_error = EventSource(COSAgentValidationError)
895+
896+
897+class MultiplePrincipalsError(Exception):
898+ """Custom exception for when there are multiple principal applications."""
899+
900+ pass
901+
902+
903+class COSAgentRequirer(Object):
904+ """Integration endpoint wrapper for the Requirer side of the cos_agent interface."""
905+
906+ on = COSAgentRequirerEvents() # pyright: ignore
907+
908+ def __init__(
909+ self,
910+ charm: CharmType,
911+ *,
912+ relation_name: str = DEFAULT_RELATION_NAME,
913+ peer_relation_name: str = DEFAULT_PEER_RELATION_NAME,
914+ refresh_events: Optional[List[str]] = None,
915+ ):
916+ """Create a COSAgentRequirer instance.
917+
918+ Args:
919+ charm: The `CharmBase` instance that is instantiating this object.
920+ relation_name: The name of the relation to communicate over.
921+ peer_relation_name: The name of the peer relation to communicate over.
922+ refresh_events: List of events on which to refresh relation data.
923+ """
924+ super().__init__(charm, relation_name)
925+ self._charm = charm
926+ self._relation_name = relation_name
927+ self._peer_relation_name = peer_relation_name
928+ self._refresh_events = refresh_events or [self._charm.on.config_changed]
929+
930+ events = self._charm.on[relation_name]
931+ self.framework.observe(
932+ events.relation_joined, self._on_relation_data_changed
933+ ) # TODO: do we need this?
934+ self.framework.observe(events.relation_changed, self._on_relation_data_changed)
935+ for event in self._refresh_events:
936+ self.framework.observe(event, self.trigger_refresh) # pyright: ignore
937+
938+ # Peer relation events
939+ # A peer relation is needed as it is the only mechanism for exchanging data across
940+ # subordinate units.
941+ # self.framework.observe(
942+ # self.on[self._peer_relation_name].relation_joined, self._on_peer_relation_joined
943+ # )
944+ peer_events = self._charm.on[peer_relation_name]
945+ self.framework.observe(peer_events.relation_changed, self._on_peer_relation_changed)
946+
947+ @property
948+ def peer_relation(self) -> Optional["Relation"]:
949+ """Helper function for obtaining the peer relation object.
950+
951+ Returns: peer relation object
952+ (NOTE: would return None if called too early, e.g. during install).
953+ """
954+ return self.model.get_relation(self._peer_relation_name)
955+
956+ def _on_peer_relation_changed(self, _):
957+ # Peer data is used for forwarding data from principal units to the grafana agent
958+ # subordinate leader, for updating the app data of the outgoing o11y relations.
959+ if self._charm.unit.is_leader():
960+ self.on.data_changed.emit() # pyright: ignore
961+
962+ def _on_relation_data_changed(self, event: RelationChangedEvent):
963+ # Peer data is the only means of communication between subordinate units.
964+ if not self.peer_relation:
965+ event.defer()
966+ return
967+
968+ cos_agent_relation = event.relation
969+ if not event.unit or not cos_agent_relation.data.get(event.unit):
970+ return
971+ principal_unit = event.unit
972+
973+ # Coherence check
974+ units = cos_agent_relation.units
975+ if len(units) > 1:
976+ # should never happen
977+ raise ValueError(
978+ f"unexpected error: subordinate relation {cos_agent_relation} "
979+ f"should have exactly one unit"
980+ )
981+
982+ if not (raw := cos_agent_relation.data[principal_unit].get(CosAgentProviderUnitData.KEY)):
983+ return
984+
985+ if not (provider_data := self._validated_provider_data(raw)):
986+ return
987+
988+ # Copy data from the principal relation to the peer relation, so the leader could
989+ # follow up.
990+ # Save the originating unit name, so it could be used for topology later on by the leader.
991+ data = CosAgentPeersUnitData( # peer relation databag model
992+ principal_unit_name=event.unit.name,
993+ principal_relation_id=str(event.relation.id),
994+ principal_relation_name=event.relation.name,
995+ metrics_alert_rules=provider_data.metrics_alert_rules,
996+ log_alert_rules=provider_data.log_alert_rules,
997+ dashboards=provider_data.dashboards,
998+ )
999+ self.peer_relation.data[self._charm.unit][
1000+ f"{CosAgentPeersUnitData.KEY}-{event.unit.name}"
1001+ ] = data.json()
1002+
1003+ # We can't easily tell if the data that was changed is limited to only the data
1004+ # that goes into peer relation (in which case, if this is not a leader unit, we wouldn't
1005+ # need to emit `on.data_changed`), so we're emitting `on.data_changed` either way.
1006+ self.on.data_changed.emit() # pyright: ignore
1007+
1008+ def _validated_provider_data(self, raw) -> Optional[CosAgentProviderUnitData]:
1009+ try:
1010+ return CosAgentProviderUnitData(**json.loads(raw))
1011+ except (pydantic.ValidationError, json.decoder.JSONDecodeError) as e:
1012+ self.on.validation_error.emit(message=str(e)) # pyright: ignore
1013+ return None
1014+
1015+ def trigger_refresh(self, _):
1016+ """Trigger a refresh of relation data."""
1017+ # FIXME: Figure out what we should do here
1018+ self.on.data_changed.emit() # pyright: ignore
1019+
1020+ @property
1021+ def _principal_unit(self) -> Optional[Unit]:
1022+ """Return the principal unit for a relation.
1023+
1024+ Assumes that the relation is of type subordinate.
1025+ Relies on the fact that, for subordinate relations, the only remote unit visible to
1026+ *this unit* is the principal unit that this unit is attached to.
1027+ """
1028+ if relations := self._principal_relations:
1029+ # Technically it's a list, but for subordinates there can only be one relation
1030+ principal_relation = next(iter(relations))
1031+ if units := principal_relation.units:
1032+ # Technically it's a list, but for subordinates there can only be one
1033+ return next(iter(units))
1034+
1035+ return None
1036+
1037+ @property
1038+ def _principal_relations(self):
1039+ relations = []
1040+ for relation in self._charm.model.relations[self._relation_name]:
1041+ if not json.loads(relation.data[next(iter(relation.units))]["config"]).get(
1042+ ["subordinate"], False
1043+ ):
1044+ relations.append(relation)
1045+ if len(relations) > 1:
1046+ logger.error(
1047+ "Multiple applications claiming to be principal. Update the cos-agent library in the client application charms."
1048+ )
1049+ raise MultiplePrincipalsError("Multiple principal applications.")
1050+ return relations
1051+
1052+ @property
1053+ def _remote_data(self) -> List[CosAgentProviderUnitData]:
1054+ """Return a list of remote data from each of the related units.
1055+
1056+ Assumes that the relation is of type subordinate.
1057+ Relies on the fact that, for subordinate relations, the only remote unit visible to
1058+ *this unit* is the principal unit that this unit is attached to.
1059+ """
1060+ all_data = []
1061+
1062+ for relation in self._charm.model.relations[self._relation_name]:
1063+ if not relation.units:
1064+ continue
1065+ unit = next(iter(relation.units))
1066+ if not (raw := relation.data[unit].get(CosAgentProviderUnitData.KEY)):
1067+ continue
1068+ if not (provider_data := self._validated_provider_data(raw)):
1069+ continue
1070+ all_data.append(provider_data)
1071+
1072+ return all_data
1073+
1074+ def _gather_peer_data(self) -> List[CosAgentPeersUnitData]:
1075+ """Collect data from the peers.
1076+
1077+ Returns a trimmed-down list of CosAgentPeersUnitData.
1078+ """
1079+ relation = self.peer_relation
1080+
1081+ # Ensure that whatever context we're running this in, we take the necessary precautions:
1082+ if not relation or not relation.data or not relation.app:
1083+ return []
1084+
1085+ # Iterate over all peer unit data and only collect every principal once.
1086+ peer_data: List[CosAgentPeersUnitData] = []
1087+ app_names: Set[str] = set()
1088+
1089+ for unit in chain((self._charm.unit,), relation.units):
1090+ if not relation.data.get(unit):
1091+ continue
1092+
1093+ for unit_name in relation.data.get(unit): # pyright: ignore
1094+ if not unit_name.startswith(CosAgentPeersUnitData.KEY):
1095+ continue
1096+ raw = relation.data[unit].get(unit_name)
1097+ if raw is None:
1098+ continue
1099+ data = CosAgentPeersUnitData(**json.loads(raw))
1100+ # Have we already seen this principal app?
1101+ if (app_name := data.app_name) in app_names:
1102+ continue
1103+ peer_data.append(data)
1104+ app_names.add(app_name)
1105+
1106+ return peer_data
1107+
1108+ @property
1109+ def metrics_alerts(self) -> Dict[str, Any]:
1110+ """Fetch metrics alerts."""
1111+ alert_rules = {}
1112+
1113+ seen_apps: List[str] = []
1114+ for data in self._gather_peer_data():
1115+ if rules := data.metrics_alert_rules:
1116+ app_name = data.app_name
1117+ if app_name in seen_apps:
1118+ continue # dedup!
1119+ seen_apps.append(app_name)
1120+ # This is only used for naming the file, so be as specific as we can be
1121+ identifier = JujuTopology(
1122+ model=self._charm.model.name,
1123+ model_uuid=self._charm.model.uuid,
1124+ application=app_name,
1125+ # For the topology unit, we could use `data.principal_unit_name`, but that unit
1126+ # name may not be very stable: `_gather_peer_data` de-duplicates by app name so
1127+ # the exact unit name that turns up first in the iterator may vary from time to
1128+ # time. So using the grafana-agent unit name instead.
1129+ unit=self._charm.unit.name,
1130+ ).identifier
1131+
1132+ alert_rules[identifier] = rules
1133+
1134+ return alert_rules
1135+
1136+ @property
1137+ def metrics_jobs(self) -> List[Dict]:
1138+ """Parse the relation data contents and extract the metrics jobs."""
1139+ scrape_jobs = []
1140+ for data in self._remote_data:
1141+ for job in data.metrics_scrape_jobs:
1142+ # In #220, relation schema changed from a simplified dict to the standard
1143+ # `scrape_configs`.
1144+ # This is to ensure backwards compatibility with Providers older than v0.5.
1145+ if "path" in job and "port" in job and "job_name" in job:
1146+ job = {
1147+ "job_name": job["job_name"],
1148+ "metrics_path": job["path"],
1149+ "static_configs": [{"targets": [f"localhost:{job['port']}"]}],
1150+ # We include insecure_skip_verify because we are always scraping localhost.
1151+ # Even if we have the certs for the scrape targets, we'd rather specify the scrape
1152+ # jobs with localhost rather than the SAN DNS the cert was issued for.
1153+ "tls_config": {"insecure_skip_verify": True},
1154+ }
1155+
1156+ scrape_jobs.append(job)
1157+
1158+ return scrape_jobs
1159+
1160+ @property
1161+ def snap_log_endpoints(self) -> List[SnapEndpoint]:
1162+ """Fetch logging endpoints exposed by related snaps."""
1163+ plugs = []
1164+ for data in self._remote_data:
1165+ targets = data.log_slots
1166+ if targets:
1167+ for target in targets:
1168+ if target in plugs:
1169+ logger.warning(
1170+ f"plug {target} already listed. "
1171+ "The same snap is being passed from multiple "
1172+ "endpoints; this should not happen."
1173+ )
1174+ else:
1175+ plugs.append(target)
1176+
1177+ endpoints = []
1178+ for plug in plugs:
1179+ if ":" not in plug:
1180+ logger.error(f"invalid plug definition received: {plug}. Ignoring...")
1181+ else:
1182+ endpoint = SnapEndpoint(*plug.split(":"))
1183+ endpoints.append(endpoint)
1184+ return endpoints
1185+
1186+ @property
1187+ def logs_alerts(self) -> Dict[str, Any]:
1188+ """Fetch log alerts."""
1189+ alert_rules = {}
1190+ seen_apps: List[str] = []
1191+
1192+ for data in self._gather_peer_data():
1193+ if rules := data.log_alert_rules:
1194+ # This is only used for naming the file, so be as specific as we can be
1195+ app_name = data.app_name
1196+ if app_name in seen_apps:
1197+ continue # dedup!
1198+ seen_apps.append(app_name)
1199+
1200+ identifier = JujuTopology(
1201+ model=self._charm.model.name,
1202+ model_uuid=self._charm.model.uuid,
1203+ application=app_name,
1204+ # For the topology unit, we could use `data.principal_unit_name`, but that unit
1205+ # name may not be very stable: `_gather_peer_data` de-duplicates by app name so
1206+ # the exact unit name that turns up first in the iterator may vary from time to
1207+ # time. So using the grafana-agent unit name instead.
1208+ unit=self._charm.unit.name,
1209+ ).identifier
1210+
1211+ alert_rules[identifier] = rules
1212+
1213+ return alert_rules
1214+
1215+ @property
1216+ def dashboards(self) -> List[Dict[str, str]]:
1217+ """Fetch dashboards as encoded content.
1218+
1219+ Dashboards are assumed not to vary across units of the same primary.
1220+ """
1221+ dashboards: List[Dict[str, Any]] = []
1222+
1223+ seen_apps: List[str] = []
1224+ for data in self._gather_peer_data():
1225+ app_name = data.app_name
1226+ if app_name in seen_apps:
1227+ continue # dedup!
1228+ seen_apps.append(app_name)
1229+
1230+ for encoded_dashboard in data.dashboards or ():
1231+ content = GrafanaDashboard(encoded_dashboard)._deserialize()
1232+
1233+ title = content.get("title", "no_title")
1234+
1235+ dashboards.append(
1236+ {
1237+ "relation_id": data.principal_relation_id,
1238+ # We have the remote charm name - use it for the identifier
1239+ "charm": f"{data.principal_relation_name}-{app_name}",
1240+ "content": content,
1241+ "title": title,
1242+ }
1243+ )
1244+
1245+ return dashboards
1246diff --git a/lib/charms/operator_libs_linux/v0/apt.py b/lib/charms/operator_libs_linux/v0/apt.py
1247index 2f921f0..1400df7 100644
1248--- a/lib/charms/operator_libs_linux/v0/apt.py
1249+++ b/lib/charms/operator_libs_linux/v0/apt.py
1250@@ -78,7 +78,6 @@ Keys are constructed as `{repo_type}-{}-{release}` in order to uniquely identify
1251 Repositories can be added with explicit values through a Python constructor.
1252
1253 Example:
1254-
1255 ```python
1256 repositories = apt.RepositoryMapping()
1257
1258@@ -91,7 +90,6 @@ Alternatively, any valid `sources.list` line may be used to construct a new
1259 `DebianRepository`.
1260
1261 Example:
1262-
1263 ```python
1264 repositories = apt.RepositoryMapping()
1265
1266@@ -110,7 +108,7 @@ import re
1267 import subprocess
1268 from collections.abc import Mapping
1269 from enum import Enum
1270-from subprocess import PIPE, CalledProcessError, check_call, check_output
1271+from subprocess import PIPE, CalledProcessError, check_output
1272 from typing import Iterable, List, Optional, Tuple, Union
1273 from urllib.parse import urlparse
1274
1275@@ -124,7 +122,7 @@ LIBAPI = 0
1276
1277 # Increment this PATCH version before using `charmcraft publish-lib` or reset
1278 # to 0 if you are raising the major API version
1279-LIBPATCH = 8
1280+LIBPATCH = 13
1281
1282
1283 VALID_SOURCE_TYPES = ("deb", "deb-src")
1284@@ -135,7 +133,7 @@ class Error(Exception):
1285 """Base class of most errors raised by this library."""
1286
1287 def __repr__(self):
1288- """String representation of Error."""
1289+ """Represent the Error."""
1290 return "<{}.{} {}>".format(type(self).__module__, type(self).__name__, self.args)
1291
1292 @property
1293@@ -212,15 +210,15 @@ class DebianPackage:
1294 ) == (other._name, other._version.number)
1295
1296 def __hash__(self):
1297- """A basic hash so this class can be used in Mappings and dicts."""
1298+ """Return a hash of this package."""
1299 return hash((self._name, self._version.number))
1300
1301 def __repr__(self):
1302- """A representation of the package."""
1303+ """Represent the package."""
1304 return "<{}.{}: {}>".format(self.__module__, self.__class__.__name__, self.__dict__)
1305
1306 def __str__(self):
1307- """A human-readable representation of the package."""
1308+ """Return a human-readable representation of the package."""
1309 return "<{}: {}-{}.{} -- {}>".format(
1310 self.__class__.__name__,
1311 self._name,
1312@@ -250,11 +248,12 @@ class DebianPackage:
1313 package_names = [package_names]
1314 _cmd = ["apt-get", "-y", *optargs, command, *package_names]
1315 try:
1316- env = {"DEBIAN_FRONTEND": "noninteractive"}
1317- check_call(_cmd, env=env, stderr=PIPE, stdout=PIPE)
1318+ env = os.environ.copy()
1319+ env["DEBIAN_FRONTEND"] = "noninteractive"
1320+ subprocess.run(_cmd, capture_output=True, check=True, text=True, env=env)
1321 except CalledProcessError as e:
1322 raise PackageError(
1323- "Could not {} package(s) [{}]: {}".format(command, [*package_names], e.output)
1324+ "Could not {} package(s) [{}]: {}".format(command, [*package_names], e.stderr)
1325 ) from None
1326
1327 def _add(self) -> None:
1328@@ -266,7 +265,7 @@ class DebianPackage:
1329 )
1330
1331 def _remove(self) -> None:
1332- """Removes a package from the system. Implementation-specific."""
1333+ """Remove a package from the system. Implementation-specific."""
1334 return self._apt("remove", "{}={}".format(self.name, self.version))
1335
1336 @property
1337@@ -275,7 +274,7 @@ class DebianPackage:
1338 return self._name
1339
1340 def ensure(self, state: PackageState):
1341- """Ensures that a package is in a given state.
1342+ """Ensure that a package is in a given state.
1343
1344 Args:
1345 state: a `PackageState` to reconcile the package to
1346@@ -307,7 +306,7 @@ class DebianPackage:
1347
1348 @state.setter
1349 def state(self, state: PackageState) -> None:
1350- """Sets the package state to a given value.
1351+ """Set the package state to a given value.
1352
1353 Args:
1354 state: a `PackageState` to reconcile the package to
1355@@ -356,7 +355,7 @@ class DebianPackage:
1356
1357 Args:
1358 package: a string representing the package
1359- version: an optional string if a specific version isr equested
1360+ version: an optional string if a specific version is requested
1361 arch: an optional architecture, defaulting to `dpkg --print-architecture`. If an
1362 architecture is not specified, this will be used for selection.
1363
1364@@ -389,7 +388,7 @@ class DebianPackage:
1365
1366 Args:
1367 package: a string representing the package
1368- version: an optional string if a specific version isr equested
1369+ version: an optional string if a specific version is requested
1370 arch: an optional architecture, defaulting to `dpkg --print-architecture`.
1371 If an architecture is not specified, this will be used for selection.
1372 """
1373@@ -459,7 +458,7 @@ class DebianPackage:
1374
1375 Args:
1376 package: a string representing the package
1377- version: an optional string if a specific version isr equested
1378+ version: an optional string if a specific version is requested
1379 arch: an optional architecture, defaulting to `dpkg --print-architecture`.
1380 If an architecture is not specified, this will be used for selection.
1381 """
1382@@ -477,7 +476,7 @@ class DebianPackage:
1383 )
1384 except CalledProcessError as e:
1385 raise PackageError(
1386- "Could not list packages in apt-cache: {}".format(e.output)
1387+ "Could not list packages in apt-cache: {}".format(e.stderr)
1388 ) from None
1389
1390 pkg_groups = output.strip().split("\n\n")
1391@@ -515,7 +514,7 @@ class Version:
1392 """An abstraction around package versions.
1393
1394 This seems like it should be strictly unnecessary, except that `apt_pkg` is not usable inside a
1395- venv, and wedging version comparisions into `DebianPackage` would overcomplicate it.
1396+ venv, and wedging version comparisons into `DebianPackage` would overcomplicate it.
1397
1398 This class implements the algorithm found here:
1399 https://www.debian.org/doc/debian-policy/ch-controlfields.html#version
1400@@ -526,11 +525,11 @@ class Version:
1401 self._epoch = epoch or ""
1402
1403 def __repr__(self):
1404- """A representation of the package."""
1405+ """Represent the package."""
1406 return "<{}.{}: {}>".format(self.__module__, self.__class__.__name__, self.__dict__)
1407
1408 def __str__(self):
1409- """A human-readable representation of the package."""
1410+ """Return human-readable representation of the package."""
1411 return "{}{}".format("{}:".format(self._epoch) if self._epoch else "", self._version)
1412
1413 @property
1414@@ -731,13 +730,16 @@ def add_package(
1415 """Add a package or list of packages to the system.
1416
1417 Args:
1418+ package_names: single package name, or list of package names
1419 name: the name(s) of the package(s)
1420 version: an (Optional) version as a string. Defaults to the latest known
1421 arch: an optional architecture for the package
1422 update_cache: whether or not to run `apt-get update` prior to operating
1423
1424 Raises:
1425+ TypeError if no package name is given, or explicit version is set for multiple packages
1426 PackageNotFoundError if the package is not in the cache.
1427+ PackageError if packages fail to install
1428 """
1429 cache_refreshed = False
1430 if update_cache:
1431@@ -746,7 +748,7 @@ def add_package(
1432
1433 packages = {"success": [], "retry": [], "failed": []}
1434
1435- package_names = [package_names] if type(package_names) is str else package_names
1436+ package_names = [package_names] if isinstance(package_names, str) else package_names
1437 if not package_names:
1438 raise TypeError("Expected at least one package name to add, received zero!")
1439
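The `isinstance` check also accepts `str` subclasses, which the old `type(...) is str` test did not. The string-or-list contract of `add_package` is unchanged; a brief usage sketch:

    from charms.operator_libs_linux.v0 import apt

    # Both forms are normalized to a list internally.
    apt.add_package("landscape-server", update_cache=True)
    apt.add_package(["landscape-server", "landscape-hashids"])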
1440@@ -785,7 +787,7 @@ def _add(
1441 version: Optional[str] = "",
1442 arch: Optional[str] = "",
1443 ) -> Tuple[Union[DebianPackage, str], bool]:
1444- """Adds a package.
1445+ """Add a package to the system.
1446
1447 Args:
1448 name: the name(s) of the package(s)
1449@@ -806,7 +808,7 @@ def _add(
1450 def remove_package(
1451 package_names: Union[str, List[str]]
1452 ) -> Union[DebianPackage, List[DebianPackage]]:
1453- """Removes a package from the system.
1454+ """Remove package(s) from the system.
1455
1456 Args:
1457 package_names: the name of a package
1458@@ -816,7 +818,7 @@ def remove_package(
1459 """
1460 packages = []
1461
1462- package_names = [package_names] if type(package_names) is str else package_names
1463+ package_names = [package_names] if isinstance(package_names, str) else package_names
1464 if not package_names:
1465 raise TypeError("Expected at least one package name to add, received zero!")
1466
1467@@ -834,8 +836,70 @@ def remove_package(
1468
1469
1470 def update() -> None:
1471- """Updates the apt cache via `apt-get update`."""
1472- check_call(["apt-get", "update"], stderr=PIPE, stdout=PIPE)
1473+ """Update the apt cache via `apt-get update`."""
1474+ subprocess.run(["apt-get", "update"], capture_output=True, check=True)
1475+
1476+
1477+def import_key(key: str) -> str:
1478+ """Import an ASCII Armor key.
1479+
1480+ A Radix64 format keyid is also supported for backwards
1481+ compatibility. In this case Ubuntu keyserver will be
1482+ queried for a key via HTTPS by its keyid. This method
1483+ is less preferable because https proxy servers may
1484+ require traffic decryption which is equivalent to a
1485+ man-in-the-middle attack (a proxy server impersonates
1486+ keyserver TLS certificates and has to be explicitly
1487+ trusted by the system).
1488+
1489+ Args:
1490+ key: A GPG key in ASCII armor format, including BEGIN
1491+ and END markers or a keyid.
1492+
1493+ Returns:
1494+ The GPG key filename written.
1495+
1496+ Raises:
1497+ GPGKeyError if the key could not be imported
1498+ """
1499+ key = key.strip()
1500+ if "-" in key or "\n" in key:
1501+ # Send everything not obviously a keyid to GPG to import, as
1502+ # we trust its validation better than our own. eg. handling
1503+ # comments before the key.
1504+ logger.debug("PGP key found (looks like ASCII Armor format)")
1505+ if (
1506+ "-----BEGIN PGP PUBLIC KEY BLOCK-----" in key
1507+ and "-----END PGP PUBLIC KEY BLOCK-----" in key
1508+ ):
1509+ logger.debug("Writing provided PGP key in the binary format")
1510+ key_bytes = key.encode("utf-8")
1511+ key_name = DebianRepository._get_keyid_by_gpg_key(key_bytes)
1512+ key_gpg = DebianRepository._dearmor_gpg_key(key_bytes)
1513+ gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key_name)
1514+ DebianRepository._write_apt_gpg_keyfile(
1515+ key_name=gpg_key_filename, key_material=key_gpg
1516+ )
1517+ return gpg_key_filename
1518+ else:
1519+ raise GPGKeyError("ASCII armor markers missing from GPG key")
1520+ else:
1521+ logger.warning(
1522+ "PGP key found (looks like Radix64 format). "
1523+ "SECURELY importing PGP key from keyserver; "
1524+ "full key not provided."
1525+ )
1526+ # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL
1527+ # to retrieve GPG keys. `apt-key adv` command is deprecated as is
1528+ # apt-key in general as noted in its manpage. See lp:1433761 for more
1529+ # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop
1530+ # gpg
1531+ key_asc = DebianRepository._get_key_by_keyid(key)
1532+ # write the key in GPG format so that apt-key list shows it
1533+ key_gpg = DebianRepository._dearmor_gpg_key(key_asc.encode("utf-8"))
1534+ gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key)
1535+ DebianRepository._write_apt_gpg_keyfile(key_name=gpg_key_filename, key_material=key_gpg)
1536+ return gpg_key_filename
1537
1538
1539 class InvalidSourceError(Error):
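Hoisting `import_key` to module level (the `DebianRepository.import_key` method below now just delegates to it) lets a charm trust a PPA signing key before any `DebianRepository` object exists. A sketch of the calling side, with placeholder key material:

    import logging

    from charms.operator_libs_linux.v0 import apt

    logger = logging.getLogger(__name__)

    ascii_armored = (
        "-----BEGIN PGP PUBLIC KEY BLOCK-----\n"
        "...\n"  # placeholder; real key material goes here
        "-----END PGP PUBLIC KEY BLOCK-----"
    )
    try:
        keyfile = apt.import_key(ascii_armored)
    except apt.GPGKeyError:
        logger.error("could not import key")
    else:
        logger.info("key written to %s", keyfile)  # /etc/apt/trusted.gpg.d/<keyid>.gpg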
1540@@ -901,7 +965,7 @@ class DebianRepository:
1541
1542 @filename.setter
1543 def filename(self, fname: str) -> None:
1544- """Sets the filename used when a repo is written back to diskself.
1545+ """Set the filename used when a repo is written back to disk.
1546
1547 Args:
1548 fname: a filename to write the repository information to.
1549@@ -1004,7 +1068,7 @@ class DebianRepository:
1550 A Radix64 format keyid is also supported for backwards
1551 compatibility. In this case Ubuntu keyserver will be
1552 queried for a key via HTTPS by its keyid. This method
1553- is less preferrable because https proxy servers may
1554+ is less preferable because https proxy servers may
1555 require traffic decryption which is equivalent to a
1556 man-in-the-middle attack (a proxy server impersonates
1557 keyserver TLS certificates and has to be explicitly
1558@@ -1017,40 +1081,7 @@ class DebianRepository:
1559 Raises:
1560 GPGKeyError if the key could not be imported
1561 """
1562- key = key.strip()
1563- if "-" in key or "\n" in key:
1564- # Send everything not obviously a keyid to GPG to import, as
1565- # we trust its validation better than our own. eg. handling
1566- # comments before the key.
1567- logger.debug("PGP key found (looks like ASCII Armor format)")
1568- if (
1569- "-----BEGIN PGP PUBLIC KEY BLOCK-----" in key
1570- and "-----END PGP PUBLIC KEY BLOCK-----" in key
1571- ):
1572- logger.debug("Writing provided PGP key in the binary format")
1573- key_bytes = key.encode("utf-8")
1574- key_name = self._get_keyid_by_gpg_key(key_bytes)
1575- key_gpg = self._dearmor_gpg_key(key_bytes)
1576- self._gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key_name)
1577- self._write_apt_gpg_keyfile(key_name=self._gpg_key_filename, key_material=key_gpg)
1578- else:
1579- raise GPGKeyError("ASCII armor markers missing from GPG key")
1580- else:
1581- logger.warning(
1582- "PGP key found (looks like Radix64 format). "
1583- "SECURELY importing PGP key from keyserver; "
1584- "full key not provided."
1585- )
1586- # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL
1587- # to retrieve GPG keys. `apt-key adv` command is deprecated as is
1588- # apt-key in general as noted in its manpage. See lp:1433761 for more
1589- # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop
1590- # gpg
1591- key_asc = self._get_key_by_keyid(key)
1592- # write the key in GPG format so that apt-key list shows it
1593- key_gpg = self._dearmor_gpg_key(key_asc.encode("utf-8"))
1594- self._gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key)
1595- self._write_apt_gpg_keyfile(key_name=key, key_material=key_gpg)
1596+ self._gpg_key_filename = import_key(key)
1597
1598 @staticmethod
1599 def _get_keyid_by_gpg_key(key_material: bytes) -> str:
1600@@ -1116,7 +1147,7 @@ class DebianRepository:
1601
1602 @staticmethod
1603 def _dearmor_gpg_key(key_asc: bytes) -> bytes:
1604- """Converts a GPG key in the ASCII armor format to the binary format.
1605+ """Convert a GPG key in the ASCII armor format to the binary format.
1606
1607 Args:
1608 key_asc: A GPG key in ASCII armor format.
1609@@ -1140,7 +1171,7 @@ class DebianRepository:
1610
1611 @staticmethod
1612 def _write_apt_gpg_keyfile(key_name: str, key_material: bytes) -> None:
1613- """Writes GPG key material into a file at a provided path.
1614+ """Write GPG key material into a file at a provided path.
1615
1616 Args:
1617 key_name: A key name to use for a key file (could be a fingerprint)
1618@@ -1188,7 +1219,7 @@ class RepositoryMapping(Mapping):
1619 return len(self._repository_map)
1620
1621 def __iter__(self) -> Iterable[DebianRepository]:
1622- """Iterator magic method for RepositoryMapping."""
1623+ """Return iterator for RepositoryMapping."""
1624 return iter(self._repository_map.values())
1625
1626 def __getitem__(self, repository_uri: str) -> DebianRepository:
1627diff --git a/lib/charms/operator_libs_linux/v0/passwd.py b/lib/charms/operator_libs_linux/v0/passwd.py
1628index b692e70..ed5a058 100644
1629--- a/lib/charms/operator_libs_linux/v0/passwd.py
1630+++ b/lib/charms/operator_libs_linux/v0/passwd.py
1631@@ -45,7 +45,7 @@ LIBAPI = 0
1632
1633 # Increment this PATCH version before using `charmcraft publish-lib` or reset
1634 # to 0 if you are raising the major API version
1635-LIBPATCH = 3
1636+LIBPATCH = 4
1637
1638
1639 def user_exists(user: Union[str, int]) -> Optional[pwd.struct_passwd]:
1640@@ -99,6 +99,7 @@ def add_user(
1641 secondary_groups: List[str] = None,
1642 uid: int = None,
1643 home_dir: str = None,
1644+ create_home: bool = True,
1645 ) -> str:
1646 """Add a user to the system.
1647
1648@@ -113,6 +114,7 @@ def add_user(
1649 secondary_groups: Optional list of additional groups
1650 uid: UID for user being created
1651 home_dir: Home directory for user
1652+ create_home: Whether to create a home directory for the user
1653
1654 Returns:
1655 The password database entry struct, as returned by `pwd.getpwnam`
1656@@ -135,7 +137,9 @@ def add_user(
1657 if home_dir:
1658 cmd.extend(["--home", str(home_dir)])
1659 if password:
1660- cmd.extend(["--password", password, "--create-home"])
1661+ cmd.extend(["--password", password])
1662+ if create_home:
1663+ cmd.append("--create-home")
1664 if system_user or password is None:
1665 cmd.append("--system")
1666
1667diff --git a/metadata.yaml b/metadata.yaml
1668index efc7563..61ddaad 100644
1669--- a/metadata.yaml
1670+++ b/metadata.yaml
1671@@ -31,6 +31,8 @@ provides:
1672 nrpe-external-master:
1673 interface: nrpe-external-master
1674 scope: container
1675+ cos-agent:
1676+ interface: cos_agent
1677
1678 peers:
1679 replicas:
1680diff --git a/requirements-dev.txt b/requirements-dev.txt
1681index 4f2a3f5..671f33c 100644
1682--- a/requirements-dev.txt
1683+++ b/requirements-dev.txt
1684@@ -1,3 +1,7 @@
1685 -r requirements.txt
1686 coverage
1687 flake8
1688+
1689+# Grafana Agent Library
1690+cosl
1691+pydantic < 2
1692diff --git a/src/charm.py b/src/charm.py
1693index 29885b8..533fc40 100755
1694--- a/src/charm.py
1695+++ b/src/charm.py
1696@@ -21,31 +21,50 @@ from subprocess import CalledProcessError, check_call
1697 import yaml
1698
1699 from charms.operator_libs_linux.v0 import apt
1700-from charms.operator_libs_linux.v0.apt import (
1701- PackageError, PackageNotFoundError)
1702+from charms.operator_libs_linux.v0.apt import PackageError, PackageNotFoundError
1703 from charms.operator_libs_linux.v0.passwd import group_exists, user_exists
1704 from charms.operator_libs_linux.v0.systemd import service_reload
1705+from charms.grafana_agent.v0.cos_agent import COSAgentProvider
1706
1707 from ops.charm import (
1708- ActionEvent, CharmBase, InstallEvent, LeaderElectedEvent,
1709- LeaderSettingsChangedEvent, RelationChangedEvent, RelationJoinedEvent,
1710- UpdateStatusEvent)
1711+ ActionEvent,
1712+ CharmBase,
1713+ InstallEvent,
1714+ LeaderElectedEvent,
1715+ LeaderSettingsChangedEvent,
1716+ RelationChangedEvent,
1717+ RelationDepartedEvent,
1718+ RelationJoinedEvent,
1719+ UpdateStatusEvent,
1720+)
1721 from ops.framework import StoredState
1722 from ops.main import main
1723 from ops.model import (
1724- ActiveStatus, BlockedStatus, Relation, MaintenanceStatus, WaitingStatus)
1725+ ActiveStatus,
1726+ BlockedStatus,
1727+ Relation,
1728+ MaintenanceStatus,
1729+ WaitingStatus,
1730+)
1731
1732 from settings_files import (
1733- DEFAULT_POSTGRES_PORT, configure_for_deployment_mode, merge_service_conf,
1734- prepend_default_settings, update_db_conf, update_default_settings, update_service_conf,
1735- write_license_file, write_ssl_cert)
1736+ DEFAULT_POSTGRES_PORT,
1737+ configure_for_deployment_mode,
1738+ generate_secret_token,
1739+ merge_service_conf,
1740+ prepend_default_settings,
1741+ update_db_conf,
1742+ update_default_settings,
1743+ update_service_conf,
1744+ write_license_file,
1745+ write_ssl_cert,
1746+)
1747
1748 logger = logging.getLogger(__name__)
1749
1750 DEBCONF_SET_SELECTIONS = "/usr/bin/debconf-set-selections"
1751 DPKG_RECONFIGURE = "/usr/sbin/dpkg-reconfigure"
1752-HAPROXY_CONFIG_FILE = os.path.join(os.path.dirname(__file__),
1753- "haproxy-config.yaml")
1754+HAPROXY_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "haproxy-config.yaml")
1755 LSCTL = "/usr/bin/lsctl"
1756 NRPE_D_DIR = "/etc/nagios/nrpe.d"
1757 POSTFIX_CF = "/etc/postfix/main.cf"
1758@@ -53,11 +72,11 @@ SCHEMA_SCRIPT = "/usr/bin/landscape-schema"
1759 BOOTSTRAP_ACCOUNT_SCRIPT = "/opt/canonical/landscape/bootstrap-account"
1760 HASH_ID_DATABASES = "/opt/canonical/landscape/hash-id-databases-ignore-maintenance"
1761
1762+LANDSCAPE_SERVER = "landscape-server"
1763 LANDSCAPE_PACKAGES = (
1764- "landscape-server",
1765+ LANDSCAPE_SERVER,
1766 "landscape-client",
1767 "landscape-common",
1768- "landscape-hashids"
1769 )
1770
1771 DEFAULT_SERVICES = (
1772@@ -100,56 +119,71 @@ class LandscapeServerCharm(CharmBase):
1773 self.framework.observe(self.on.update_status, self._update_status)
1774
1775 # Relations
1776- self.framework.observe(self.on.db_relation_joined,
1777- self._db_relation_changed)
1778- self.framework.observe(self.on.db_relation_changed,
1779- self._db_relation_changed)
1780- self.framework.observe(self.on.amqp_relation_joined,
1781- self._amqp_relation_joined)
1782- self.framework.observe(self.on.amqp_relation_changed,
1783- self._amqp_relation_changed)
1784- self.framework.observe(self.on.website_relation_joined,
1785- self._website_relation_joined)
1786- self.framework.observe(self.on.website_relation_changed,
1787- self._website_relation_changed)
1788- self.framework.observe(self.on.nrpe_external_master_relation_joined,
1789- self._nrpe_external_master_relation_joined)
1790- self.framework.observe(self.on.application_dashboard_relation_joined,
1791- self._application_dashboard_relation_joined)
1792+ self.framework.observe(self.on.db_relation_joined, self._db_relation_changed)
1793+ self.framework.observe(self.on.db_relation_changed, self._db_relation_changed)
1794+ self.framework.observe(self.on.amqp_relation_joined, self._amqp_relation_joined)
1795+ self.framework.observe(
1796+ self.on.amqp_relation_changed, self._amqp_relation_changed
1797+ )
1798+ self.framework.observe(
1799+ self.on.website_relation_joined, self._website_relation_joined
1800+ )
1801+ self.framework.observe(
1802+ self.on.website_relation_changed, self._website_relation_changed
1803+ )
1804+ self.framework.observe(
1805+ self.on.website_relation_departed, self._website_relation_departed
1806+ )
1807+ self.framework.observe(
1808+ self.on.nrpe_external_master_relation_joined,
1809+ self._nrpe_external_master_relation_joined,
1810+ )
1811+ self.framework.observe(
1812+ self.on.application_dashboard_relation_joined,
1813+ self._application_dashboard_relation_joined,
1814+ )
1815
1816 # Leadership/peering
1817 self.framework.observe(self.on.leader_elected, self._leader_elected)
1818- self.framework.observe(self.on.leader_settings_changed,
1819- self._leader_settings_changed)
1820- self.framework.observe(self.on.replicas_relation_joined,
1821- self._on_replicas_relation_joined)
1822- self.framework.observe(self.on.replicas_relation_changed,
1823- self._on_replicas_relation_changed)
1824+ self.framework.observe(
1825+ self.on.leader_settings_changed, self._leader_settings_changed
1826+ )
1827+ self.framework.observe(
1828+ self.on.replicas_relation_joined, self._on_replicas_relation_joined
1829+ )
1830+ self.framework.observe(
1831+ self.on.replicas_relation_changed, self._on_replicas_relation_changed
1832+ )
1833
1834 # Actions
1835 self.framework.observe(self.on.pause_action, self._pause)
1836 self.framework.observe(self.on.resume_action, self._resume)
1837 self.framework.observe(self.on.upgrade_action, self._upgrade)
1838- self.framework.observe(self.on.migrate_schema_action,
1839- self._migrate_schema)
1840- self.framework.observe(self.on.hash_id_databases_action,
1841- self._hash_id_databases)
1842+ self.framework.observe(self.on.migrate_schema_action, self._migrate_schema)
1843+ self.framework.observe(
1844+ self.on.hash_id_databases_action, self._hash_id_databases
1845+ )
1846
1847 # State
1848- self._stored.set_default(ready={
1849- "db": False,
1850- "amqp": False,
1851- "haproxy": False,
1852- })
1853+ self._stored.set_default(
1854+ ready={
1855+ "db": False,
1856+ "amqp": False,
1857+ "haproxy": False,
1858+ }
1859+ )
1860 self._stored.set_default(leader_ip="")
1861 self._stored.set_default(running=False)
1862 self._stored.set_default(paused=False)
1863 self._stored.set_default(default_root_url="")
1864 self._stored.set_default(account_bootstrapped=False)
1865+ self._stored.set_default(secret_token=None)
1866
1867 self.landscape_uid = user_exists("landscape").pw_uid
1868 self.root_gid = group_exists("root").gr_gid
1869
1870+ self._grafana_agent = COSAgentProvider(self)
1871+
1872 def _on_config_changed(self, _) -> None:
1873 prev_status = self.unit.status
1874
1875@@ -173,10 +207,8 @@ class LandscapeServerCharm(CharmBase):
1876 # Write the license file, if it exists.
1877 license_file = self.model.config.get("license_file")
1878 if license_file:
1879- self.unit.status = MaintenanceStatus(
1880- "Writing Landscape license file")
1881- write_license_file(
1882- license_file, self.landscape_uid, self.root_gid)
1883+ self.unit.status = MaintenanceStatus("Writing Landscape license file")
1884+ write_license_file(license_file, self.landscape_uid, self.root_gid)
1885 self.unit.status = WaitingStatus("Waiting on relations")
1886
1887 smtp_relay_host = self.model.config.get("smtp_relay_host")
1888@@ -189,10 +221,12 @@ class LandscapeServerCharm(CharmBase):
1889 for relation in haproxy_relations:
1890 self._update_haproxy_connection(relation)
1891
1892- if any(self.model.config.get(v) for v in OPENID_CONFIG_VALS) \
1893- and any(self.model.config.get(v) for v in OIDC_CONFIG_VALS):
1894+ if any(self.model.config.get(v) for v in OPENID_CONFIG_VALS) and any(
1895+ self.model.config.get(v) for v in OIDC_CONFIG_VALS
1896+ ):
1897 self.unit.status = BlockedStatus(
1898- "OpenID and OIDC configurations are mutually exclusive")
1899+ "OpenID and OIDC configurations are mutually exclusive"
1900+ )
1901 else:
1902 self._configure_openid()
1903 self._configure_oidc()
1904@@ -200,13 +234,13 @@ class LandscapeServerCharm(CharmBase):
1905 # Update root_url, if provided
1906 root_url = self.model.config.get("root_url")
1907 if root_url:
1908- update_service_conf({
1909- "global": {"root-url": root_url},
1910- "api": {"root-url": root_url},
1911- "package-upload": {"root-url": root_url},
1912- })
1913-
1914- self._bootstrap_account()
1915+ update_service_conf(
1916+ {
1917+ "global": {"root-url": root_url},
1918+ "api": {"root-url": root_url},
1919+ "package-upload": {"root-url": root_url},
1920+ }
1921+ )
1922
1923 config_host = self.model.config.get("db_host")
1924 schema_password = self.model.config.get("db_schema_password")
1925@@ -232,37 +266,63 @@ class LandscapeServerCharm(CharmBase):
1926 else:
1927 return
1928
1929+ self._bootstrap_account()
1930+
1931+ secret_token = self._get_secret_token()
1932+ if self.unit.is_leader():
1933+ if not secret_token:
1934+ # If the secret token wasn't in the config, and we don't have one
1935+ # in the peer relation data, then the leader needs to generate one
1936+ # for all of the units to use.
1937+ logger.info("Generating new random secret token")
1938+ secret_token = generate_secret_token()
1939+ peer_relation = self.model.get_relation("replicas")
1940+ peer_relation.data[self.app].update({"secret-token": secret_token})
1941+ if secret_token and secret_token != self._stored.secret_token:
1942+ self._write_secret_token(secret_token)
1943+ self._stored.secret_token = secret_token
1944+
1945 if isinstance(prev_status, BlockedStatus):
1946 self.unit.status = prev_status
1947
1948 self._update_ready_status(restart_services=True)
1949
1950+ def _get_secret_token(self):
1951+ secret_token = self.model.config.get("secret_token")
1952+ if not secret_token:
1953+ peer_relation = self.model.get_relation("replicas")
1954+ secret_token = peer_relation.data[self.app].get("secret-token", None)
1955+ return secret_token
1956+
1957+ def _write_secret_token(self, secret_token):
1958+ logger.info("Writing secret token")
1959+ update_service_conf({"landscape": {"secret-token": secret_token}})
1960+
1961 def _on_install(self, event: InstallEvent) -> None:
1962 """Handle the install event."""
1963 self.unit.status = MaintenanceStatus("Installing apt packages")
1964
1965+ landscape_ppa_key = self.model.config["landscape_ppa_key"]
1966+ if landscape_ppa_key != "":
1967+ try:
1968+ landscape_key_file = apt.import_key(landscape_ppa_key)
1969+ logger.info(f"Imported Landscape PPA key at {landscape_key_file}")
1970+ except apt.GPGKeyError:
1971+ logger.error("Failed to import Landscape PPA key")
1972+
1973 landscape_ppa = self.model.config["landscape_ppa"]
1974
1975 try:
1976+ # This package is responsible for the hanging installs and ignores env vars
1977+ apt.remove_package(["needrestart"])
1978 # Add the Landscape Server PPA and install via apt.
1979 check_call(["add-apt-repository", "-y", landscape_ppa])
1980- apt.add_package(["landscape-server", "landscape-hashids"])
1981- except PackageNotFoundError:
1982- logger.error("Landscape package not found in package cache "
1983- "or on system")
1984- self.unit.status = BlockedStatus("Failed to install packages")
1985- return
1986- except PackageError as e:
1987- logger.error(
1988- "Could not install landscape-server package. Reason: %s",
1989- e.message)
1990- self.unit.status = BlockedStatus("Failed to install packages")
1991- return
1992- except CalledProcessError as e:
1993- logger.error("Package install failed with return code %d",
1994- e.returncode)
1995- self.unit.status = BlockedStatus("Failed to install packages")
1996- return
1997+ # Explicitly ensure cache is up-to-date after adding the PPA.
1998+ apt.add_package([LANDSCAPE_SERVER, "landscape-hashids"], update_cache=True)
1999+ check_call(["apt-mark", "hold", "landscape-hashids", LANDSCAPE_SERVER])
2000+ except (PackageNotFoundError, PackageError, CalledProcessError) as exc:
2001+ logger.error("Failed to install packages")
2002+ raise exc # This will trigger juju's exponential retry
2003
2004 # Write the config-provided SSL certificate, if it exists.
2005 config_ssl_cert = self.model.config["ssl_cert"]
2006@@ -275,8 +335,7 @@ class LandscapeServerCharm(CharmBase):
2007 license_file = self.model.config.get("license_file")
2008
2009 if license_file:
2010- self.unit.status = MaintenanceStatus(
2011- "Writing Landscape license file")
2012+ self.unit.status = MaintenanceStatus("Writing Landscape license file")
2013 write_license_file(license_file, self.landscape_uid, self.root_gid)
2014
2015 self.unit.status = ActiveStatus("Unit is ready")
2016@@ -296,10 +355,10 @@ class LandscapeServerCharm(CharmBase):
2017 return
2018
2019 if not all(self._stored.ready.values()):
2020- waiting_on = [
2021- rel for rel, ready in self._stored.ready.items() if not ready]
2022+ waiting_on = [rel for rel, ready in self._stored.ready.items() if not ready]
2023 self.unit.status = WaitingStatus(
2024- "Waiting on relations: {}".format(", ".join(waiting_on)))
2025+ "Waiting on relations: {}".format(", ".join(waiting_on))
2026+ )
2027 return
2028
2029 if self._stored.running and not restart_services:
2030@@ -322,19 +381,23 @@ class LandscapeServerCharm(CharmBase):
2031 deployment_mode = self.model.config.get("deployment_mode")
2032 is_standalone = deployment_mode == "standalone"
2033
2034- update_default_settings({
2035- "RUN_ALL": "no",
2036- "RUN_APISERVER": str(self.model.config["worker_counts"]),
2037- "RUN_ASYNC_FRONTEND": "yes",
2038- "RUN_JOBHANDLER": "yes",
2039- "RUN_APPSERVER": str(self.model.config["worker_counts"]),
2040- "RUN_MSGSERVER": str(self.model.config["worker_counts"]),
2041- "RUN_PINGSERVER": str(self.model.config["worker_counts"]),
2042- "RUN_CRON": "yes" if is_leader else "no",
2043- "RUN_PACKAGESEARCH": "yes" if is_leader else "no",
2044- "RUN_PACKAGEUPLOADSERVER": "yes" if is_leader and is_standalone else "no",
2045- "RUN_PPPA_PROXY": "no",
2046- })
2047+ update_default_settings(
2048+ {
2049+ "RUN_ALL": "no",
2050+ "RUN_APISERVER": str(self.model.config["worker_counts"]),
2051+ "RUN_ASYNC_FRONTEND": "yes",
2052+ "RUN_JOBHANDLER": "yes",
2053+ "RUN_APPSERVER": str(self.model.config["worker_counts"]),
2054+ "RUN_MSGSERVER": str(self.model.config["worker_counts"]),
2055+ "RUN_PINGSERVER": str(self.model.config["worker_counts"]),
2056+ "RUN_CRON": "yes" if is_leader else "no",
2057+ "RUN_PACKAGESEARCH": "yes" if is_leader else "no",
2058+ "RUN_PACKAGEUPLOADSERVER": "yes"
2059+ if is_leader and is_standalone
2060+ else "no",
2061+ "RUN_PPPA_PROXY": "no",
2062+ }
2063+ )
2064
2065 logger.info("Starting services")
2066
2067@@ -366,7 +429,7 @@ class LandscapeServerCharm(CharmBase):
2068
2069 allowed_units = unit_data["allowed-units"].split()
2070 if self.unit.name not in allowed_units:
2071- logger.info("%s not in allowed_units")
2072+ logger.info(f"{self.unit.name} not in allowed_units")
2073 self.unit.status = ActiveStatus("Unit is ready")
2074 self._update_ready_status()
2075 return
2076@@ -406,8 +469,13 @@ class LandscapeServerCharm(CharmBase):
2077 else:
2078 user = unit_data["user"]
2079
2080- update_db_conf(host=host, port=port, user=user, password=password,
2081- schema_password=schema_password)
2082+ update_db_conf(
2083+ host=host,
2084+ port=port,
2085+ user=user,
2086+ password=password,
2087+ schema_password=schema_password,
2088+ )
2089
2090 if not self._migrate_schema_bootstrap():
2091 return
2092@@ -419,10 +487,12 @@ class LandscapeServerCharm(CharmBase):
2093 def _migrate_schema_bootstrap(self):
2094 """
2095 Migrates schema along with the bootstrap command which ensures that the
2096- databases along with the landscape user exists. Returns True on success
2097+ databases and the landscape user exist. In addition, creates the
2098+ admin account if configured. Returns True on success.
2099 """
2100 try:
2101 check_call([SCHEMA_SCRIPT, "--bootstrap"])
2102+ self._bootstrap_account()
2103 return True
2104 except CalledProcessError as e:
2105 logger.error(
2106@@ -435,10 +505,12 @@ class LandscapeServerCharm(CharmBase):
2107 self._stored.ready["amqp"] = False
2108 self.unit.status = MaintenanceStatus("Setting up amqp connection")
2109
2110- event.relation.data[self.unit].update({
2111- "username": "landscape",
2112- "vhost": "landscape",
2113- })
2114+ event.relation.data[self.unit].update(
2115+ {
2116+ "username": "landscape",
2117+ "vhost": "landscape",
2118+ }
2119+ )
2120
2121 def _amqp_relation_changed(self, event):
2122 unit_data = event.relation.data[event.unit]
2123@@ -453,12 +525,14 @@ class LandscapeServerCharm(CharmBase):
2124 if isinstance(hostname, list):
2125 hostname = ",".join(hostname)
2126
2127- update_service_conf({
2128- "broker": {
2129- "host": hostname,
2130- "password": password,
2131+ update_service_conf(
2132+ {
2133+ "broker": {
2134+ "host": hostname,
2135+ "password": password,
2136+ }
2137 }
2138- })
2139+ )
2140
2141 self._stored.ready["amqp"] = True
2142 self.unit.status = ActiveStatus("Unit is ready")
2143@@ -471,11 +545,13 @@ class LandscapeServerCharm(CharmBase):
2144 if not self.model.config.get("root_url"):
2145 url = f'https://{event.relation.data[event.unit]["public-address"]}/'
2146 self._stored.default_root_url = url
2147- update_service_conf({
2148- "global": {"root-url": url},
2149- "api": {"root-url": url},
2150- "package-upload": {"root-url": url},
2151- })
2152+ update_service_conf(
2153+ {
2154+ "global": {"root-url": url},
2155+ "api": {"root-url": url},
2156+ "package-upload": {"root-url": url},
2157+ }
2158+ )
2159
2160 self._update_ready_status()
2161
2162@@ -491,7 +567,8 @@ class LandscapeServerCharm(CharmBase):
2163 if ssl_cert != "DEFAULT" and ssl_key == "":
2164 # We have a cert but no key, this is an error.
2165 self.unit.status = BlockedStatus(
2166- "`ssl_cert` is specified but `ssl_key` is missing")
2167+ "`ssl_cert` is specified but `ssl_key` is missing"
2168+ )
2169 return
2170
2171 if ssl_cert != "DEFAULT":
2172@@ -501,8 +578,8 @@ class LandscapeServerCharm(CharmBase):
2173 ssl_cert = b64encode(ssl_cert + b"\n" + ssl_key)
2174 except binascii.Error:
2175 self.unit.status = BlockedStatus(
2176- "Unable to decode `ssl_cert` or `ssl_key` - must be "
2177- "b64-encoded")
2178+ "Unable to decode `ssl_cert` or `ssl_key` - must be b64-encoded"
2179+ )
2180 return
2181
2182 with open(HAPROXY_CONFIG_FILE) as haproxy_config_file:
2183@@ -512,68 +589,83 @@ class LandscapeServerCharm(CharmBase):
2184 https_service = haproxy_config["https_service"]
2185 https_service["crts"] = [ssl_cert]
2186
2187- if self.unit.is_leader():
2188- https_service["service_options"].extend(
2189- haproxy_config["leader_service_options"])
2190-
2191 server_ip = relation.data[self.unit]["private-address"]
2192 unit_name = self.unit.name.replace("/", "-")
2193 worker_counts = self.model.config["worker_counts"]
2194
2195 (appservers, pingservers, message_servers, api_servers) = [
2196- [(
2197- f"landscape-{name}-{unit_name}-{i}",
2198- server_ip,
2199- haproxy_config["ports"][name] + i,
2200- haproxy_config["server_options"],
2201- ) for i in range(worker_counts)]
2202+ [
2203+ (
2204+ f"landscape-{name}-{unit_name}-{i}",
2205+ server_ip,
2206+ haproxy_config["ports"][name] + i,
2207+ haproxy_config["server_options"],
2208+ )
2209+ for i in range(worker_counts)
2210+ ]
2211 for name in ("appserver", "pingserver", "message-server", "api")
2212 ]
2213
2214 # There should only ever be one package-upload-server service.
2215- package_upload_servers = [(
2216- f"landscape-package-upload-{unit_name}-0",
2217- server_ip,
2218- haproxy_config["ports"]["package-upload"],
2219- haproxy_config["server_options"],
2220- )]
2221+ package_upload_servers = [
2222+ (
2223+ f"landscape-package-upload-{unit_name}-0",
2224+ server_ip,
2225+ haproxy_config["ports"]["package-upload"],
2226+ haproxy_config["server_options"],
2227+ )
2228+ ]
2229
2230 http_service["servers"] = appservers
2231- http_service["backends"] = [{
2232- "backend_name": "landscape-ping",
2233- "servers": pingservers,
2234- }]
2235+ http_service["backends"] = [
2236+ {
2237+ "backend_name": "landscape-ping",
2238+ "servers": pingservers,
2239+ }
2240+ ]
2241 https_service["servers"] = appservers
2242- https_service["backends"] = [{
2243- "backend_name": "landscape-message",
2244- "servers": message_servers,
2245- }, {
2246- "backend_name": "landscape-api",
2247- "servers": api_servers,
2248- }]
2249-
2250- if self.unit.is_leader():
2251- https_service["backends"].append({
2252+ https_service["backends"] = [
2253+ {
2254+ "backend_name": "landscape-message",
2255+ "servers": message_servers,
2256+ },
2257+ {
2258+ "backend_name": "landscape-api",
2259+ "servers": api_servers,
2260+ },
2261+ # Only the leader should have servers for the landscape-package-upload
2262+ # and landscape-hashid-databases backends. However, when the leader
2263+ # is lost, haproxy will fail as the service options will reference
2264+ # a (no longer) existing backend. To prevent that, all units should
2265+ # declare all backends, even if a unit should not have any servers on
2266+ # a specific backend.
2267+ {
2268 "backend_name": "landscape-package-upload",
2269- "servers": package_upload_servers,
2270- })
2271+ "servers": package_upload_servers if self.unit.is_leader() else [],
2272+ },
2273+ {
2274+ "backend_name": "landscape-hashid-databases",
2275+ "servers": appservers if self.unit.is_leader() else [],
2276+ },
2277+ ]
2278
2279 error_files_location = haproxy_config["error_files"]["location"]
2280 error_files = []
2281 for code, filename in haproxy_config["error_files"]["files"].items():
2282 error_file_path = os.path.join(error_files_location, filename)
2283 with open(error_file_path, "rb") as error_file:
2284- error_files.append({
2285- "http_status": code,
2286- "content": b64encode(error_file.read())
2287- })
2288+ error_files.append(
2289+ {"http_status": code, "content": b64encode(error_file.read())}
2290+ )
2291
2292 http_service["error_files"] = error_files
2293 https_service["error_files"] = error_files
2294
2295- relation.data[self.unit].update({
2296- "services": yaml.safe_dump([http_service, https_service])
2297- })
2298+ relation.data[self.unit].update(
2299+ {
2300+ "services": yaml.safe_dump([http_service, https_service]),
2301+ }
2302+ )
2303
2304 self._stored.ready["haproxy"] = True
2305
2306@@ -596,19 +688,24 @@ class LandscapeServerCharm(CharmBase):
2307 self.unit.status = MaintenanceStatus("Configuring HAProxy")
2308 haproxy_ssl_cert = event.relation.data[event.unit]["ssl_cert"]
2309
2310+ # Sometimes the data has not been encoded properly by the haproxy charm
2311+ if haproxy_ssl_cert.startswith("b'"):
2312+ haproxy_ssl_cert = haproxy_ssl_cert.strip('b').strip("'")
2313+
2314 if haproxy_ssl_cert != "DEFAULT":
2315 # If DEFAULT, cert is being managed by a third party,
2316 # possibly a subordinate charm.
2317 write_ssl_cert(haproxy_ssl_cert)
2318
2319 self.unit.status = ActiveStatus("Unit is ready")
2320-
2321 self._update_haproxy_connection(event.relation)
2322
2323 self._update_ready_status()
2324
2325- def _nrpe_external_master_relation_joined(
2326- self, event: RelationJoinedEvent) -> None:
2327+ def _website_relation_departed(self, event: RelationDepartedEvent) -> None:
2328+ event.relation.data[self.unit].update({"services": ""})
2329+
2330+ def _nrpe_external_master_relation_joined(self, event: RelationJoinedEvent) -> None:
2331 self._update_nrpe_checks(event.relation)
2332
2333 def _update_nrpe_checks(self, relation: Relation):
2334@@ -624,16 +721,16 @@ class LandscapeServerCharm(CharmBase):
2335 monitors = {
2336 "monitors": {
2337 "remote": {
2338- "nrpe": {
2339- s: {"command": f"check_{s}"} for s in services_to_add
2340- },
2341+ "nrpe": {s: {"command": f"check_{s}"} for s in services_to_add},
2342 },
2343 },
2344 }
2345
2346- relation.data[self.unit].update({
2347- "monitors": yaml.safe_dump(monitors),
2348- })
2349+ relation.data[self.unit].update(
2350+ {
2351+ "monitors": yaml.safe_dump(monitors),
2352+ }
2353+ )
2354
2355 if not os.path.exists(NRPE_D_DIR):
2356 logger.debug("NRPE directories not ready")
2357@@ -647,12 +744,14 @@ class LandscapeServerCharm(CharmBase):
2358 continue
2359
2360 with open(cfg_filename, "w") as cfg_fp:
2361- cfg_fp.write(f"""# check {service}
2362+ cfg_fp.write(
2363+ f"""# check {service}
2364 # The following header was added by the landscape-server charm
2365 # Modifying it will affect nagios monitoring and alerting
2366 # servicegroups: juju
2367 command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service}
2368-""")
2369+"""
2370+ )
2371
2372 for service in services_to_remove:
2373 service_cfg = service.replace("-", "_")
2374@@ -673,7 +772,8 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2375
2376 if not root_url:
2377 root_url = "https://" + str(
2378- self.model.get_binding(event.relation).network.bind_address)
2379+ self.model.get_binding(event.relation).network.bind_address
2380+ )
2381
2382 site_name = self.model.config.get("site_name")
2383 if site_name:
2384@@ -690,13 +790,15 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2385 else:
2386 icon_data = None
2387
2388- event.relation.data[self.app].update({
2389- "name": "Landscape",
2390- "url": root_url,
2391- "subtitle": subtitle,
2392- "group": group,
2393- "icon": icon_data,
2394- })
2395+ event.relation.data[self.app].update(
2396+ {
2397+ "name": "Landscape",
2398+ "url": root_url,
2399+ "subtitle": subtitle,
2400+ "group": group,
2401+ "icon": icon_data,
2402+ }
2403+ )
2404
2405 def _leader_elected(self, event: LeaderElectedEvent) -> None:
2406 # Just because we received this event does not mean we are
2407@@ -709,16 +811,17 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2408 ip = str(self.model.get_binding(peer_relation).network.bind_address)
2409 peer_relation.data[self.app].update({"leader-ip": ip})
2410
2411- update_service_conf({
2412- "package-search": {
2413- "host": "localhost",
2414- },
2415- })
2416+ update_service_conf(
2417+ {
2418+ "package-search": {
2419+ "host": "localhost",
2420+ },
2421+ }
2422+ )
2423
2424 self._leader_changed()
2425
2426- def _leader_settings_changed(
2427- self, event: LeaderSettingsChangedEvent) -> None:
2428+ def _leader_settings_changed(self, event: LeaderSettingsChangedEvent) -> None:
2429 # Just because we received this event does not mean we are
2430 # guaranteed to be a follower by the time we process it. See
2431 # https://juju.is/docs/sdk/leader-elected-event
2432@@ -728,11 +831,13 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2433 leader_ip = peer_relation.data[self.app].get("leader-ip")
2434
2435 if leader_ip:
2436- update_service_conf({
2437- "package-search": {
2438- "host": leader_ip,
2439- },
2440- })
2441+ update_service_conf(
2442+ {
2443+ "package-search": {
2444+ "host": leader_ip,
2445+ },
2446+ }
2447+ )
2448
2449 self._leader_changed()
2450
2451@@ -747,9 +852,10 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2452 for relation in nrpe_relations:
2453 self._update_nrpe_checks(relation)
2454
2455- haproxy_relations = self.model.relations.get("website", [])
2456- for relation in haproxy_relations:
2457- self._update_haproxy_connection(relation)
2458+ if self.unit.is_leader():
2459+ haproxy_relations = self.model.relations.get("website", [])
2460+ for relation in haproxy_relations:
2461+ self._update_haproxy_connection(relation)
2462
2463 self._update_ready_status(restart_services=True)
2464
2465@@ -760,13 +866,23 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2466
2467 event.relation.data[self.unit].update({"unit-data": self.unit.name})
2468
2469- def _on_replicas_relation_changed(
2470- self, event: RelationChangedEvent) -> None:
2471+ def _on_replicas_relation_changed(self, event: RelationChangedEvent) -> None:
2472 leader_ip_value = event.relation.data[self.app].get("leader-ip")
2473
2474 if leader_ip_value and leader_ip_value != self._stored.leader_ip:
2475 self._stored.leader_ip = leader_ip_value
2476
2477+ if self.unit.is_leader():
2478+ haproxy_relations = self.model.relations.get("website", [])
2479+ for relation in haproxy_relations:
2480+ self._update_haproxy_connection(relation)
2481+
2482+ secret_token = self._get_secret_token()
2483+ if secret_token and secret_token != self._stored.secret_token:
2484+ self._write_secret_token(secret_token)
2485+ self._stored.secret_token = secret_token
2486+ self._update_ready_status(restart_services=True)
2487+
2488 def _configure_smtp(self, relay_host: str) -> None:
2489
2490 # Rewrite postfix config.
2491@@ -800,27 +916,32 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2492 none_count = oidc_vals.count(None)
2493
2494 if none_count == 0:
2495- update_service_conf({
2496- "landscape": {
2497- "oidc-issuer": oidc_issuer,
2498- "oidc-client-id": oidc_client_id,
2499- "oidc-client-secret": oidc_client_secret,
2500- "oidc-logout-url": oidc_logout_url,
2501- },
2502- })
2503+ update_service_conf(
2504+ {
2505+ "landscape": {
2506+ "oidc-issuer": oidc_issuer,
2507+ "oidc-client-id": oidc_client_id,
2508+ "oidc-client-secret": oidc_client_secret,
2509+ "oidc-logout-url": oidc_logout_url,
2510+ },
2511+ }
2512+ )
2513 elif none_count == 1 and oidc_logout_url is None:
2514 # Only the logout url is optional.
2515- update_service_conf({
2516- "landscape": {
2517- "oidc-issuer": oidc_issuer,
2518- "oidc-client-id": oidc_client_id,
2519- "oidc-client-secret": oidc_client_secret,
2520- },
2521- })
2522+ update_service_conf(
2523+ {
2524+ "landscape": {
2525+ "oidc-issuer": oidc_issuer,
2526+ "oidc-client-id": oidc_client_id,
2527+ "oidc-client-secret": oidc_client_secret,
2528+ },
2529+ }
2530+ )
2531 elif none_count < 4:
2532 self.unit.status = BlockedStatus(
2533 "OIDC connect config requires at least 'oidc_issuer', "
2534- "'oidc_client_id', and 'oidc_client_secret' values")
2535+ "'oidc_client_id', and 'oidc_client_secret' values"
2536+ )
2537 return
2538
2539 self.unit.status = WaitingStatus("Waiting on relations")
2540@@ -832,17 +953,20 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2541 openid_logout_url = self.model.config.get("openid_logout_url")
2542
2543 if openid_provider_url and openid_logout_url:
2544- update_service_conf({
2545- "landscape": {
2546- "openid-provider-url": openid_provider_url,
2547- "openid-logout-url": openid_logout_url,
2548- },
2549- })
2550+ update_service_conf(
2551+ {
2552+ "landscape": {
2553+ "openid-provider-url": openid_provider_url,
2554+ "openid-logout-url": openid_logout_url,
2555+ },
2556+ }
2557+ )
2558 self.unit.status = WaitingStatus("Waiting on relations")
2559 elif openid_provider_url or openid_logout_url:
2560 self.unit.status = BlockedStatus(
2561 "OpenID configuration requires both 'openid_provider_url' and "
2562- "'openid_logout_url'")
2563+ "'openid_logout_url'"
2564+ )
2565
2566 def _bootstrap_account(self):
2567 """If admin account details are provided, create admin"""
2568@@ -906,8 +1030,7 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2569 try:
2570 check_call([LSCTL, "stop"])
2571 except CalledProcessError as e:
2572- logger.error("Stopping services failed with return code %d",
2573- e.returncode)
2574+ logger.error("Stopping services failed with return code %d", e.returncode)
2575 self.unit.status = BlockedStatus("Failed to stop services")
2576 event.fail("Failed to stop services")
2577 else:
2578@@ -919,14 +1042,12 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2579 self.unit.status = MaintenanceStatus("Starting services")
2580 event.log("Starting services")
2581
2582- start_result = subprocess.run([LSCTL, "start"], capture_output=True,
2583- text=True)
2584+ start_result = subprocess.run([LSCTL, "start"], capture_output=True, text=True)
2585
2586 try:
2587 check_call([LSCTL, "status"])
2588 except CalledProcessError as e:
2589- logger.error("Starting services failed with return code %d",
2590- e.returncode)
2591+ logger.error("Starting services failed with return code %d", e.returncode)
2592 logger.error("Failed to start services: %s", start_result.stdout)
2593 self.unit.status = MaintenanceStatus("Stopping services")
2594 subprocess.run([LSCTL, "stop"])
2595@@ -940,8 +1061,10 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2596
2597 def _upgrade(self, event: ActionEvent) -> None:
2598 if self._stored.running:
2599- event.fail("Cannot upgrade while running. Please run action "
2600- "'pause' prior to upgrade")
2601+ event.fail(
2602+ "Cannot upgrade while running. Please run action "
2603+ "'pause' prior to upgrade"
2604+ )
2605 return
2606
2607 prev_status = self.unit.status
2608@@ -953,12 +1076,18 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2609 for package in LANDSCAPE_PACKAGES:
2610 try:
2611 event.log(f"Upgrading {package}...")
2612+ if package == LANDSCAPE_SERVER:
2613+ check_call(["apt-mark", "unhold", LANDSCAPE_SERVER])
2614 pkg = apt.DebianPackage.from_apt_cache(package)
2615 pkg.ensure(state=apt.PackageState.Latest)
2616 installed = apt.DebianPackage.from_installed_package(package)
2617 event.log(f"Upgraded to {installed.version}...")
2618+ if package == LANDSCAPE_SERVER:
2619+ check_call(["apt-mark", "hold", LANDSCAPE_SERVER])
2620 except PackageNotFoundError as e:
2621- logger.error(f"Could not upgrade package {package}. Reason: {e.message}")
2622+ logger.error(
2623+ f"Could not upgrade package {package}. Reason: {e.message}"
2624+ )
2625 event.fail(f"Could not upgrade package {package}. Reason: {e.message}")
2626 self.unit.status = BlockedStatus("Failed to upgrade packages")
2627 return
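Since install now pins landscape-server with `apt-mark hold`, the upgrade action has to unhold it around `ensure(state=Latest)` and re-hold afterwards, which is what the paired `check_call` lines do. The same idea as a small context manager, sketched under the assumption that `apt-mark` is on PATH; unlike the inline version, it re-holds even if the upgrade raises:

    from contextlib import contextmanager
    from subprocess import check_call

    @contextmanager
    def unheld(package: str):
        """Temporarily lift an apt hold around an operation (sketch)."""
        check_call(["apt-mark", "unhold", package])
        try:
            yield
        finally:
            check_call(["apt-mark", "hold", package])

    # with unheld("landscape-server"):
    #     pkg.ensure(state=apt.PackageState.Latest)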
2628@@ -967,8 +1096,10 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2629
2630 def _migrate_schema(self, event: ActionEvent) -> None:
2631 if self._stored.running:
2632- event.fail("Cannot migrate schema while running. Please run action"
2633- " 'pause' prior to migration")
2634+ event.fail(
2635+ "Cannot migrate schema while running. Please run action"
2636+ " 'pause' prior to migration"
2637+ )
2638 return
2639
2640 prev_status = self.unit.status
2641@@ -978,10 +1109,8 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2642 try:
2643 subprocess.run([SCHEMA_SCRIPT], check=True, text=True)
2644 except CalledProcessError as e:
2645- logger.error("Schema migration failed with error code %s",
2646- e.returncode)
2647- event.fail("Schema migration failed with error code %s",
2648- e.returncode)
2649+ logger.error("Schema migration failed with error code %s", e.returncode)
2650+ event.fail("Schema migration failed with error code %s", e.returncode)
2651 self.unit.status = BlockedStatus("Failed schema migration")
2652 else:
2653 self.unit.status = prev_status
2654@@ -992,7 +1121,9 @@ command[check_{service}]=/usr/local/lib/nagios/plugins/check_systemd.py {service
2655 event.log("Running hash_id_databases")
2656
2657 try:
2658- subprocess.run(["sudo", "-u", "landscape", HASH_ID_DATABASES], check=True, text=True)
2659+ subprocess.run(
2660+ ["sudo", "-u", "landscape", HASH_ID_DATABASES], check=True, text=True
2661+ )
2662 except CalledProcessError as e:
2663 logger.error("Hashing ID databases failed with error code %s", e.returncode)
2664 event.fail("Hashing ID databases failed with error code %s", e.returncode)
2665diff --git a/src/haproxy-config.yaml b/src/haproxy-config.yaml
2666index e986f53..030fc6e 100644
2667--- a/src/haproxy-config.yaml
2668+++ b/src/haproxy-config.yaml
2669@@ -34,11 +34,11 @@ https_service:
2670 - use_backend landscape-message if attachment
2671 - use_backend landscape-api if api
2672 - use_backend landscape-ping if ping
2673-
2674-leader_service_options:
2675- - acl package-upload path_beg -i /upload
2676- - use_backend landscape-package-upload if package-upload
2677- - reqrep ^([^\ ]*)\ /upload/(.*) \1\ /\2
2678+ - acl hashids path_beg -i /hash-id-databases
2679+ - use_backend landscape-hashid-databases if hashids
2680+ - acl package-upload path_beg -i /upload
2681+ - use_backend landscape-package-upload if package-upload
2682+ - http-request replace-path ^([^\ ]*)\ /upload/(.*) \1\ /\2
2683
2684 error_files:
2685 location: /opt/canonical/landscape/canonical/landscape/offline
2686@@ -55,11 +55,10 @@ ports:
2687 message-server: 8090
2688 api: 9080
2689 package-upload: 9100
2690- pppa-proxy: 9298
2691
2692 server_options:
2693 - check
2694 - inter 5000
2695 - rise 2
2696 - fall 5
2697- - maxconn 50
2698\ No newline at end of file
2699+ - maxconn 50
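Two notes on this YAML hunk. First, `reqrep` was deprecated in HAProxy 2.0 and removed in 2.1, so moving to `http-request replace-path` is required on newer haproxy packages; note, though, that `replace-path` matches the path component only, so the reqrep-style pattern `^([^\ ]*)\ /upload/(.*)` (method, space, path) may never match and likely wants simplifying to something like `^/upload/(.*)` with `/\1` as the replacement. Second, folding the upload and hashid ACLs into the shared `service_options` means every unit's rendered config references those backends, which is exactly why the charm now declares them on every unit. A quick structural check:

    import yaml

    with open("src/haproxy-config.yaml") as f:
        cfg = yaml.safe_load(f)

    opts = cfg["https_service"]["service_options"]
    assert any("landscape-hashid-databases" in opt for opt in opts)
    assert any("landscape-package-upload" in opt for opt in opts)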
2700diff --git a/src/settings_files.py b/src/settings_files.py
2701index 6eb7231..b4d1189 100644
2702--- a/src/settings_files.py
2703+++ b/src/settings_files.py
2704@@ -9,6 +9,8 @@ import os
2705 from base64 import b64decode, binascii
2706 from collections import defaultdict
2707 from configparser import ConfigParser
2708+import secrets
2709+from string import ascii_letters, digits
2710 from urllib.request import urlopen
2711 from urllib.error import URLError
2712
2713@@ -37,6 +39,14 @@ class SSLCertReadException(Exception):
2714 pass
2715
2716
2717+class ServiceConfMissing(Exception):
2718+ pass
2719+
2720+
2721+class SecretTokenMissing(Exception):
2722+ pass
2723+
2724+
2725 def configure_for_deployment_mode(mode: str) -> None:
2726 """
2727 Places files where Landscape expects to find them for different deployment
2728@@ -111,6 +121,11 @@ def update_service_conf(updates: dict) -> None:
2729 `updates` is a mapping of {section => {key => value}}, to be applied
2730 to the config file.
2731 """
2732+ if not os.path.isfile(SERVICE_CONF):
2733+ # Landscape server will not overwrite this file on install, so we
2734+ # cannot get the default values if we create it here
2735+ raise ServiceConfMissing("Landscape server install failed!")
2736+
2737 config = ConfigParser()
2738 config.read(SERVICE_CONF)
2739
2740@@ -125,6 +140,11 @@ def update_service_conf(updates: dict) -> None:
2741 config.write(config_fp)
2742
2743
2744+def generate_secret_token():
2745+ alphanumerics = ascii_letters + digits
2746+ return "".join(secrets.choice(alphanumerics) for _ in range(172))
2747+
2748+
2749 def write_license_file(license_file: str, uid: int, gid: int) -> None:
2750 """
2751 Reads or decodes `license_file` to LICENSE_FILE and sets it up
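On the two settings_files.py additions above: `update_service_conf` now refuses to create service.conf from scratch, because a file invented by the charm would lack the defaults the landscape-server package lays down at install time, so a missing file is treated as a failed install. And `generate_secret_token` draws 172 symbols from a 62-character alphanumeric alphabet via `secrets.choice`, roughly 1024 bits of entropy:

    import math
    from string import ascii_letters, digits

    alphabet = ascii_letters + digits       # 62 symbols
    bits = 172 * math.log2(len(alphabet))   # 172 * 5.954...
    print(round(bits))                      # 1024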
2752diff --git a/tests/test_charm.py b/tests/test_charm.py
2753index 08a4119..7ee1671 100644
2754--- a/tests/test_charm.py
2755+++ b/tests/test_charm.py
2756@@ -24,7 +24,8 @@ from charms.operator_libs_linux.v0.apt import (
2757
2758 from charm import (
2759 DEFAULT_SERVICES, HAPROXY_CONFIG_FILE, LANDSCAPE_PACKAGES, LEADER_SERVICES, LSCTL,
2760- NRPE_D_DIR, SCHEMA_SCRIPT, HASH_ID_DATABASES, LandscapeServerCharm)
2761+ NRPE_D_DIR, SCHEMA_SCRIPT, HASH_ID_DATABASES, LandscapeServerCharm,
2762+ )
2763
2764
2765 class TestCharm(unittest.TestCase):
2766@@ -71,10 +72,13 @@ class TestCharm(unittest.TestCase):
2767 with patches as mocks:
2768 harness.begin_with_initial_hooks()
2769
2770- mocks["check_call"].assert_called_once_with(
2771+ mocks["check_call"].assert_any_call(
2772 ["add-apt-repository", "-y", ppa])
2773- mocks["apt"].add_package.assert_called_once_with(["landscape-server",
2774- "landscape-hashids"])
2775+ mocks["check_call"].assert_any_call(
2776+ ["apt-mark", "hold", "landscape-hashids", "landscape-server"])
2777+ mocks["apt"].add_package.assert_called_once_with(
2778+ ["landscape-server", "landscape-hashids"], update_cache=True,
2779+ )
2780 status = harness.charm.unit.status
2781 self.assertIsInstance(status, WaitingStatus)
2782 self.assertEqual(status.message,
2783@@ -95,11 +99,8 @@ class TestCharm(unittest.TestCase):
2784
2785 with patches as mocks:
2786 mocks["apt"].add_package.side_effect = PackageNotFoundError
2787- harness.begin_with_initial_hooks()
2788-
2789- status = harness.charm.unit.status
2790- self.assertIsInstance(status, BlockedStatus)
2791- self.assertEqual(status.message, "Failed to install packages")
2792+ self.assertRaises(PackageNotFoundError,
2793+ harness.begin_with_initial_hooks)
2794
2795 def test_install_package_error(self):
2796 harness = Harness(LandscapeServerCharm)
2797@@ -116,11 +117,7 @@ class TestCharm(unittest.TestCase):
2798
2799 with patches as mocks:
2800 mocks["apt"].add_package.side_effect = PackageError("ouch")
2801- harness.begin_with_initial_hooks()
2802-
2803- status = harness.charm.unit.status
2804- self.assertIsInstance(status, BlockedStatus)
2805- self.assertEqual(status.message, "Failed to install packages")
2806+ self.assertRaises(PackageError, harness.begin_with_initial_hooks)
2807
2808 def test_install_called_process_error(self):
2809 harness = Harness(LandscapeServerCharm)
2810@@ -131,11 +128,8 @@ class TestCharm(unittest.TestCase):
2811 with patch("charm.check_call") as mock:
2812 with patch("charm.update_service_conf"):
2813 mock.side_effect = CalledProcessError(127, Mock())
2814- harness.begin_with_initial_hooks()
2815-
2816- status = harness.charm.unit.status
2817- self.assertIsInstance(status, BlockedStatus)
2818- self.assertEqual(status.message, "Failed to install packages")
2819+ self.assertRaises(CalledProcessError,
2820+ harness.begin_with_initial_hooks)
2821
2822 def test_install_ssl_cert(self):
2823 harness = Harness(LandscapeServerCharm)
2824@@ -188,20 +182,28 @@ class TestCharm(unittest.TestCase):
2825
2826 def test_install_license_file_b64(self):
2827 harness = Harness(LandscapeServerCharm)
2828- harness.update_config({"license_file": "VEhJUyBJUyBBIExJQ0VOU0U="})
2829+ license_text = "VEhJUyBJUyBBIExJQ0VOU0U"
2830+ harness.update_config({"license_file": license_text})
2831 relation_id = harness.add_relation("replicas", "landscape-server")
2832 harness.update_relation_data(
2833 relation_id, "landscape-server", {"leader-ip": "test"})
2834
2835 with patch.multiple(
2836 "charm",
2837+ apt=DEFAULT,
2838+ check_call=DEFAULT,
2839 update_service_conf=DEFAULT,
2840+ prepend_default_settings=DEFAULT,
2841 write_license_file=DEFAULT,
2842 ) as mocks:
2843 harness.begin_with_initial_hooks()
2844
2845- mocks["write_license_file"].assert_called_once_with(
2846- "VEhJUyBJUyBBIExJQ0VOU0U=", 1000, 1000)
2847+ mock_write = mocks["write_license_file"]
2848+ self.assertEqual(len(mock_write.mock_calls), 2)
2849+ self.assertEqual(mock_write.mock_calls[0].args,
2850+ (license_text, 1000, 1000))
2851+ self.assertEqual(mock_write.mock_calls[1].args,
2852+ (license_text, 1000, 1000))
2853
2854 def test_update_ready_status_not_running(self):
2855 self.harness.charm.unit.status = WaitingStatus()
2856@@ -505,6 +507,7 @@ class TestCharm(unittest.TestCase):
2857 "password": "testpass",
2858 },
2859 }
2860+ self.harness.add_relation("replicas", "landscape-server")
2861
2862 with patch("charm.check_call"):
2863 with patch(
2864@@ -809,10 +812,34 @@ class TestCharm(unittest.TestCase):
2865 self.assertIsInstance(status, WaitingStatus)
2866 write_cert_mock.assert_called_once_with("FANCYNEWCERT")
2867
2868+ def test_website_relation_changed_strip_b_char(self):
2869+ self.harness.charm._update_haproxy_connection = Mock()
2870+ mock_event = Mock()
2871+ mock_event.relation.data = {
2872+ mock_event.unit: {"ssl_cert": "b'FANCYNEWCERT'"},
2873+ self.harness.charm.unit: {
2874+ "private-address": "test",
2875+ "public-address": "test2",
2876+ },
2877+ }
2878+
2879+ with patch.multiple(
2880+ "charm",
2881+ write_ssl_cert=DEFAULT,
2882+ update_service_conf=DEFAULT,
2883+ ) as mocks:
2884+ write_cert_mock = mocks["write_ssl_cert"]
2885+ self.harness.charm._website_relation_changed(mock_event)
2886+
2887+ status = self.harness.charm.unit.status
2888+ self.assertIsInstance(status, WaitingStatus)
2889+ write_cert_mock.assert_called_once_with("FANCYNEWCERT")
2890+
2891 @patch("charm.update_service_conf")
2892 def test_on_config_changed_no_smtp_change(self, _):
2893 self.harness.charm._update_ready_status = Mock()
2894 self.harness.charm._configure_smtp = Mock()
2895+ self.harness.add_relation("replicas", "landscape-server")
2896 self.harness.update_config({"smtp_relay_host": ""})
2897
2898 self.harness.charm._configure_smtp.assert_not_called()
2899@@ -822,6 +849,7 @@ class TestCharm(unittest.TestCase):
2900 def test_on_config_changed_smtp_change(self, _):
2901 self.harness.charm._update_ready_status = Mock()
2902 self.harness.charm._configure_smtp = Mock()
2903+ self.harness.add_relation("replicas", "landscape-server")
2904 self.harness.update_config({"smtp_relay_host": "smtp.example.com"})
2905
2906 self.harness.charm._configure_smtp.assert_called_once_with(
2907@@ -932,11 +960,12 @@ class TestCharm(unittest.TestCase):
2908 prev_status = self.harness.charm.unit.status
2909
2910 with patch("charm.apt", spec_set=apt) as apt_mock:
2911- pkg_mock = Mock()
2912- apt_mock.DebianPackage.from_apt_cache.return_value = pkg_mock
2913- self.harness.charm._upgrade(event)
2914+ with patch("charm.check_call"):
2915+ pkg_mock = Mock()
2916+ apt_mock.DebianPackage.from_apt_cache.return_value = pkg_mock
2917+ self.harness.charm._upgrade(event)
2918
2919- self.assertEqual(event.log.call_count, 9)
2920+ self.assertGreaterEqual(event.log.call_count, 5)
2921 self.assertEqual(
2922 apt_mock.DebianPackage.from_apt_cache.call_count,
2923 len(LANDSCAPE_PACKAGES)
2924@@ -963,10 +992,11 @@ class TestCharm(unittest.TestCase):
2925 self.harness.charm._stored.running = False
2926
2927 with patch("charm.apt", spec_set=apt) as apt_mock:
2928- pkg_mock = Mock()
2929- apt_mock.DebianPackage.from_apt_cache.return_value = pkg_mock
2930- pkg_mock.ensure.side_effect = PackageNotFoundError("ouch")
2931- self.harness.charm._upgrade(event)
2932+ with patch("charm.check_call"):
2933+ pkg_mock = Mock()
2934+ apt_mock.DebianPackage.from_apt_cache.return_value = pkg_mock
2935+ pkg_mock.ensure.side_effect = PackageNotFoundError("ouch")
2936+ self.harness.charm._upgrade(event)
2937
2938 self.assertEqual(event.log.call_count, 2)
2939 event.fail.assert_called_once()
2940@@ -1149,6 +1179,13 @@ class TestBootstrapAccount(unittest.TestCase):
2941 )
2942 self.harness.add_relation("replicas", "landscape-server")
2943 self.harness.set_leader()
2944+
2945+ pwd_mock = patch("charm.user_exists").start()
2946+ pwd_mock.return_value = Mock(
2947+ spec_set=struct_passwd, pw_uid=1000)
2948+ grp_mock = patch("charm.group_exists").start()
2949+ grp_mock.return_value = Mock(
2950+ spec_set=struct_group, gr_gid=1000)
2951
2952 self.process_mock = patch("subprocess.run").start()
2953 self.log_mock = patch("charm.logger.error").start()
2954diff --git a/tests/test_settings_files.py b/tests/test_settings_files.py
2955index e16f8f7..84b4ffc 100644
2956--- a/tests/test_settings_files.py
2957+++ b/tests/test_settings_files.py
2958@@ -210,9 +210,11 @@ class UpdateServiceConfTestCase(TestCase):
2959 i += 1
2960 return retval
2961
2962- with patch("builtins.open") as open_mock:
2963- open_mock.side_effect = return_conf
2964- update_service_conf({"test": {"new": "yes"}})
2965+ with patch("os.path.isfile") as mock_isfile:
2966+ with patch("builtins.open") as open_mock:
2967+ mock_isfile.return_value = True
2968+ open_mock.side_effect = return_conf
2969+ update_service_conf({"test": {"new": "yes"}})
2970
2971 self.assertEqual(outfile.captured,
2972 "[fixed]\nold = no\n\n[test]\nnew = yes\n\n")
2973@@ -230,9 +232,11 @@ class UpdateServiceConfTestCase(TestCase):
2974 i += 1
2975 return retval
2976
2977- with patch("builtins.open") as open_mock:
2978- open_mock.side_effect = return_conf
2979- update_service_conf({"fixed": {"old": "yes"}})
2980+ with patch("os.path.isfile") as mock_isfile:
2981+ with patch("builtins.open") as open_mock:
2982+ mock_isfile.return_value = True
2983+ open_mock.side_effect = return_conf
2984+ update_service_conf({"fixed": {"old": "yes"}})
2985
2986 self.assertEqual(outfile.captured, "[fixed]\nold = yes\n\n")
2987
