Merge ~rharding/charm-grafana:add-mysql-datasource into ~prometheus-charmers/charm-grafana:master
Status: Merged
Merged at revision: 2bf4bbf7c439f4c1c6cc2fa39003b1a08f068973
Proposed branch: ~rharding/charm-grafana:add-mysql-datasource
Merge into: ~prometheus-charmers/charm-grafana:master
Diff against target: 204 lines (+108/-35), 1 file modified: reactive/grafana.py (+108/-35)
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Tom Haddon | | | Pending
Jamon Camisso | | | Pending
Paul Collins | | | Pending
Review via email: mp+332440@code.launchpad.net
This proposal supersedes a proposal from 2017-08-29.
Commit message
Description of the change
This change adds support for a mysql based data source.
This is needed to allow building grafana dashboards on top of MySQL data that is generated as part of Juju KPIs.
In order to do this, the current grafana-source endpoint needs to be made more generic. There are workarounds in the current interface for issues where it auto-prefixes things with http:// and pulls the IP address of the relation unit instead of having it fed to the system.
The idea is that I've got a charm that pulls log data and stores it, parsed, into a mysql db.
mysql -> db relation -> my application charm -> feeds the data source to grafana for dashboards.
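As a rough sketch of that flow, the datasource record the application charm hands to grafana might look like the following. The field names mirror the ones used later in this proposal's diff (service_name, type, url, description, database, username, password); the concrete values are invented for illustration and the exact grafana-source interface contract may differ:

```python
# Hypothetical mysql-backed datasource payload sent over the
# grafana-source relation. All values here are made up; the keys
# match those consumed by check_datasource()/generate_query() in
# the diff below.
mysql_source = {
    'service_name': 'gypsy-danger',
    'type': 'mysql',              # instead of the usual 'prometheus'
    'url': '10.0.0.5:3306',       # raw host:port, no http:// prefix
    'description': 'Juju KPI data',
    'database': 'kpidata',        # the db shared via the mysql:db relation
    'username': 'kpiuser',
    'password': 'example-password',
}

# grafana names the datasource the same way the charm code does:
ds_name = '{} - {}'.format(mysql_source['service_name'],
                           mysql_source['description'])
print(ds_name)  # gypsy-danger - Juju KPI data
```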
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
Also wanted to make sure the prometheus grafana-source relation was still in working order and validated that in a gce model:
https:/
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
And finally both running together
https:/
Tom Haddon (mthaddon) wrote (posted in a previous version of this proposal):
Could we have a documentation update for this too explaining the feature?
Jacek Nykis (jacekn) wrote (posted in a previous version of this proposal):
Thanks! It's a nice addition. I only had a quick look, but here's some early feedback and a few questions inline. We filed RT#105512 to review and test properly.
Could you clarify what the gypsy-danger charm is? Is it a mysql charm fork? Or will it proxy to the DB?
My first instinct, without knowing what gypsy-danger is, is that if you want to talk to the DB you should relate grafana to mysql.
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
Thanks for the initial review.
gypsy-danger is a python project that parses the apache log files from the charmstore that are tossed into swift, looking for important juju kpi data. It then loads that data into several tables in a mysql database, which allows for building queries against the parsed data.
The python code project is: https:/
and I started a charm that will eventually deploy and cron job the fetching of new apache logs from swift each day (updating the mysql database with new log data each day).
I've gotten this working with hand coding/deploying at kpi.jujugui.org, where it talks to a mysql server; I've put the gypsy-danger python parser code on its own server, and so I'm working to charm this up.
If I build a new relation for mysql->grafana directly, then it adds a second path for adding a new grafana datasource. It also breaks the usability. My code needs a database to store parsed data from apache logs. It needs to be the exact same database, with the same user/credentials, that grafana needs to build a dashboard upon. In this way the theory is that:
gypsy-danger needs a database to store data in. It requests from mysql a fresh db, username, password. MySQL provides that and gypsy-danger then starts to populate the database with tables and records of data. Then, gypsy-danger wants to have a visual front end of that data and sends the data source information to grafana for visualization.
In fact, one long term thing in my wishlist, is that gypsy-danger could actually send over a saved dashboard to grafana so that both the datasource and the dashboard are loaded and ready to go out of the box.
I hope that clears up what I'm trying to do with this. Let me know if I'm unclear or I've taken a dark path of wrongness that I should stay away from.
Paul Collins (pjdc) wrote (posted in a previous version of this proposal):
A couple remarks inline. As well as this, I think it'd be valuable to have a Mojo spec to deploy grafana, etc., along with gypsy-danger and mysql and exercise the new grafana functionality so that we can CI it, although I don't know if that should be a blocker. There are a couple of likely-looking jobs on Mojo CI Jenkins, although neither passes currently:
https:/
https:/
Jacek Nykis (jacekn) wrote (posted in a previous version of this proposal):
> Thanks for the initial review.
Thank you for your contribution!
> If I build a new relation for mysql->grafana directly then it adds a second
> path to add a new grafana datasource. It also breaks the usability. My code
> needs a database to store parsed data from apache logs. It needs to be the
> same exact database with the same user/credential as grafana needs to build a
> dashboard upon.
I see, if it needs to be the same DB then the way you want to do it is probably the only sensible one.
> I hope that clears up what I'm trying to do with this. Let me know if I'm
> unclear or I've taken a dark path of wrongness that I should stay away from.
Thanks, I think it's clear why direct grafana<->mysql relation will not work.
What I'd say, though, is that it's still worth moving some of the mysql source handling logic into the grafana-source interface. mysql is a valid grafana source, and IMO it makes logical sense for the interface to handle it without the need for grafana to mangle relation data.
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
New interfaces MP located here:
https:/
Will push an updated PR here that supports using that shortly.
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
I've updated the interface and updated this MP removing much of the lint cleanup previously done.
I can look into the mojo stuff, but that'll be a longer row to hoe as I haven't gotten that set up, and since those jobs aren't currently working, I'd prefer to add a test/support for gypsy-danger as part of something that was already working to test the current Grafana/Prometheus, if required.
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
Updated the testing bundle to:
juju deploy cs:~rharding/
This includes the changes that add the hook to properly remove the datasource when the relation is broken, so that it can be re-added a second time.
Tested with mysql and prometheus and verified the datasources are removed and re-add successfully.
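The removal path described here can be sketched against an in-memory SQLite database. This is an illustrative stand-in only: the real charm operates on /var/lib/grafana/grafana.db, and the DATA_SOURCE schema is simplified to just the two columns the DELETE matches on:

```python
import sqlite3

# Simplified stand-in for grafana's DATA_SOURCE table (the real
# schema has many more columns). Demonstrates the DELETE-by-type-
# and-url approach the relation-departed hook in this MP uses.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE DATA_SOURCE (type TEXT, url TEXT)')
cur.execute('INSERT INTO DATA_SOURCE VALUES (?, ?)',
            ('mysql', '10.0.0.5:3306'))
cur.execute('INSERT INTO DATA_SOURCE VALUES (?, ?)',
            ('prometheus', 'http://10.0.0.6:9090'))
conn.commit()

# Relation departed: drop only the matching datasource, leaving
# any other sources (e.g. prometheus) intact so they can coexist.
cur.execute('DELETE FROM DATA_SOURCE WHERE type=? AND url=?',
            ('mysql', '10.0.0.5:3306'))
conn.commit()

remaining = cur.execute('SELECT type FROM DATA_SOURCE').fetchall()
print(remaining)  # [('prometheus',)]
```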
This goes with the interfaces changes in:
Jamon Camisso (jamon) wrote (posted in a previous version of this proposal):
Looks good to me, it is working well any way that I try it along with the grafana-source charm.
Just two tiny flake8 issues and it's ready to merge and promulgate to the charmstore.
Richard Harding (rharding) wrote (posted in a previous version of this proposal):
Updated the flake8 issues per review.
Jamon Camisso (jamon) wrote (posted in a previous version of this proposal):
Ah so close, but when I go to merge this I see that upstream has progressed ahead a bit in terms of admin-password, google analytics etc.
Could you please rebase your branch against master and push again? Sorry about that, I didn't realize the upstream charm had those changes.
Preview Diff
```diff
diff --git a/reactive/grafana.py b/reactive/grafana.py
index 8b3b126..f63ee5a 100644
--- a/reactive/grafana.py
+++ b/reactive/grafana.py
@@ -1,19 +1,33 @@
-import six
-import os
+import base64
+import datetime
 import glob
 import hashlib
-import datetime
-import requests
 import json
-import subprocess
-import base64
+import os
+import requests
 import shutil
-from charmhelpers import fetch
-from charmhelpers.core import host, hookenv, unitdata
-from charmhelpers.core.templating import render
+import six
+import subprocess
+
 from charmhelpers.contrib.charmsupport import nrpe
-from charms.reactive import hook, remove_state, set_state, when, when_not
-from charms.reactive.helpers import any_file_changed, data_changed, is_state
+from charmhelpers.core import (
+    hookenv,
+    host,
+    unitdata,
+)
+from charmhelpers.core.templating import render
+from charmhelpers import fetch
+from charms.reactive.helpers import (
+    any_file_changed,
+    is_state,
+)
+from charms.reactive import (
+    hook,
+    remove_state,
+    set_state,
+    when,
+    when_not,
+)
 
 import time
 
@@ -284,20 +298,38 @@ def wipe_nrpe_checks():
 @when('grafana-source.available')
 def configure_sources(relation):
     sources = relation.datasources()
-    if not data_changed('grafana.sources', sources):
-        return
     for ds in sources:
         hookenv.log('Found datasource: {}'.format(str(ds)))
         # Ensure datasource is configured
         check_datasource(ds)
 
 
-@when('grafana.started')
-@when_not('grafana-source.available')
-def sources_gone():
-    # Last datasource gone, remove as needed
-    # TODO implementation
-    pass
+@hook('grafana-source-relation-departed')
+def sources_gone(relation):
+    hookenv.log('sources gone')
+    hookenv.log(relation)
+    # Update the datasources now with the new information that something is
+    # not available.
+    conv = relation.conversation()
+    ds = {
+        'service_name': conv.scope.split('/')[0],
+        'type': conv.get_remote('type'),
+        'url': conv.get_remote('url'),
+        'description': conv.get_remote('description')
+    }
+    hookenv.log('Removing datasource:')
+    hookenv.log(ds)
+
+    if ds:
+        conn = sqlite3.connect('/var/lib/grafana/grafana.db', timeout=30)
+        cur = conn.cursor()
+        cur.execute(
+            'DELETE FROM DATA_SOURCE WHERE type=? AND url=?',
+            (ds['type'], ds['url'])
+        )
+        conn.commit()
+        cur.close()
+    relation.set_local = None
 
 
 @when('website.available')
@@ -352,14 +384,11 @@ def check_datasource(ds):
     # 'username': 'username,
     # 'password': 'password
     # }
-
     conn = sqlite3.connect('/var/lib/grafana/grafana.db', timeout=30)
     cur = conn.cursor()
     query = cur.execute('SELECT id, type, name, url, is_default FROM DATA_SOURCE')
     rows = query.fetchall()
     ds_name = '{} - {}'.format(ds['service_name'], ds['description'])
-    print(ds_name)
-    print(rows)
     for row in rows:
         if (row[1] == ds['type'] and row[2] == ds_name and row[3] == ds['url']):
             hookenv.log('Datasource already exist, updating: {}'.format(ds_name))
@@ -371,7 +400,6 @@ def check_datasource(ds):
             return
     hookenv.log('Adding new datasource: {}'.format(ds_name))
     stmt, values = generate_query(ds, 0)
-    print(stmt, values)
     cur.execute(stmt, values)
     conn.commit()
     conn.close()
@@ -379,15 +407,20 @@ def check_datasource(ds):
 
 def generate_query(ds, is_default, id=None):
     if not id:
-        stmt = 'INSERT INTO DATA_SOURCE (org_id, version, type, name' + \
-            ', access, url, is_default, created, updated, basic_auth'
-        if 'username' in ds and 'password' in ds:
-            stmt += ', basic_auth_user, basic_auth_password)' + \
-                ' VALUES (?,?,?,?,?,?,?,?,?,?,?,?)'
-        else:
-            stmt += ') VALUES (?,?,?,?,?,?,?,?,?,?)'
+        fields = [
+            'org_id',
+            'version',
+            'type',
+            'name',
+            'access',
+            'url',
+            'is_default',
+            'created',
+            'updated',
+        ]
+
         dtime = datetime.datetime.today().strftime("%F %T")
-        values = (1,
+        values = [1,
                   0,
                   ds['type'],
                   '{} - {}'.format(ds['service_name'], ds['description']),
@@ -395,11 +428,51 @@ def generate_query(ds, is_default, id=None):
                   ds['url'],
                   is_default,
                   dtime,
-                  dtime)
-        if 'username' in ds and 'password' in ds:
-            values = values + (1, ds['username'], ds['password'])
+                  dtime]
+
+        # Used in connecting a mysql source
+        if 'database' in ds:
+            hookenv.log('Adding fields for the database')
+            fields.append('database')
+            values.append(ds['database'])
+
+        if ds['type'] == 'mysql':
+            # There's only one username/password on the interface to we coopt
+            # it for the mysql login in the mysql case
+            fields.append('user')
+            values.append(ds['username'])
+            fields.append('password')
+            values.append(ds['password'])
+
+        if 'username' in ds and 'password' in ds and ds['type'] != 'msyql':
+            # We use the username and password in a backward compatible way
+            # for basic_auth when it's not a mysql data source.
+            hookenv.log('Adding basic_auth info')
+            fields.append('basic_auth')
+            fields.append('basic_auth_user')
+            fields.append('basic_auth_password')
+            values.append(1)
+            values.append(ds['username'])
+            values.append(ds['password'])
         else:
-            values = values + (0,)
+            fields.append('basic_auth')
+            values.append(0)
+
+        values_placeholders = '?, ' * len(values)
+        # Remove the space [, ] at the end of the generated string.
+        values_placeholders = values_placeholders[:-2]
+
+        stmt = """
+        INSERT INTO DATA_SOURCE ({})
+        VALUES ({})
+        """.format(
+            ", ".join(fields),
+            values_placeholders
+        )
+
+        hookenv.log('Statement is: \n{}'.format(stmt))
+        hookenv.log(values)
+
     else:
         if 'username' in ds and 'password' in ds:
             stmt = 'UPDATE DATA_SOURCE SET basic_auth_user = ?, basic_auth_password = ?, basic_auth = 1'
```
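The core of the new generate_query() logic is building fields and values in lockstep and then generating a matching "?" placeholder list. Here's a standalone sketch of that pattern (the DATA_SOURCE table and column names come from the diff; the helper function name and sample values are illustrative only):

```python
# Illustrative reimplementation of the dynamic INSERT construction
# in generate_query() above: one "?" placeholder per value, with
# the trailing ", " trimmed off the generated string.
def build_insert(fields, values):
    placeholders = '?, ' * len(values)
    placeholders = placeholders[:-2]  # drop the trailing ", "
    stmt = 'INSERT INTO DATA_SOURCE ({}) VALUES ({})'.format(
        ', '.join(fields), placeholders)
    return stmt, values

# Extra columns (database, user, password for mysql sources) can be
# appended to both lists and the statement stays consistent.
fields = ['org_id', 'version', 'type', 'name']
values = [1, 0, 'mysql', 'gypsy-danger - Juju KPI data']
stmt, values = build_insert(fields, values)
print(stmt)
# INSERT INTO DATA_SOURCE (org_id, version, type, name) VALUES (?, ?, ?, ?)
```

Keeping fields and values as parallel lists (rather than string concatenation with hard-coded placeholder counts, as in the old code) is what lets the mysql-specific columns be added conditionally without risking a column/placeholder mismatch.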
Current test to make this work:
juju deploy mysql gypsy-danger-1 grafana-source gypsy-danger
juju deploy cs:~rharding/
juju relate mysql:db gypsy-danger
juju deploy grafana (this updated charm)
juju relate grafana:
Log into grafana and see the data source. Hitting the verify it works button should show it works.
https://usercontent.irccloud-cdn.com/file/3KOzJdGO/Selection_010.png