Merge lp:~james-page/charms/trusty/percona-cluster/vivid into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by James Page
Status: Merged
Merged at revision: 84
Proposed branch: lp:~james-page/charms/trusty/percona-cluster/vivid
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 241 lines (+145/-11)
3 files modified
hooks/percona_hooks.py (+8/-7)
hooks/percona_utils.py (+21/-4)
templates/mysqld.cnf (+116/-0)
To merge this branch: bzr merge lp:~james-page/charms/trusty/percona-cluster/vivid
Reviewer Review Type Date Requested Status
David Ames (community) Approve
James Page Needs Resubmitting
Review via email: mp+271659@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9404 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9404/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10250 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10250/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6490 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6490/

James Page (james-page):
review: Needs Resubmitting
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #11094 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/11094/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #10302 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/10302/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6899 percona-cluster-next for james-page mp271659
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12626197/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6899/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6930 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6930/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #12204 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/12204/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #11325 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/11325/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #7446 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/7446/

David Ames (thedac) wrote :

Approval after the fact. Tested and merged.

review: Approve

Preview Diff

=== modified file 'hooks/percona_hooks.py'
--- hooks/percona_hooks.py	2015-09-22 13:54:27 +0000
+++ hooks/percona_hooks.py	2015-09-29 11:04:01 +0000
@@ -45,7 +45,6 @@
 )
 from percona_utils import (
     determine_packages,
-    MY_CNF,
     setup_percona_repo,
     get_host_ip,
     get_cluster_hosts,
@@ -61,6 +60,7 @@
     notify_bootstrapped,
     is_bootstrapped,
     get_wsrep_value,
+    resolve_cnf_file,
 )
 from charmhelpers.contrib.database.mysql import (
     PerconaClusterHelper,
@@ -111,8 +111,8 @@


 def render_config(clustered=False, hosts=[]):
-    if not os.path.exists(os.path.dirname(MY_CNF)):
-        os.makedirs(os.path.dirname(MY_CNF))
+    if not os.path.exists(os.path.dirname(resolve_cnf_file())):
+        os.makedirs(os.path.dirname(resolve_cnf_file()))

     context = {
         'cluster_name': 'juju_cluster',
@@ -123,7 +123,7 @@
         'sst_password': config('sst-password'),
         'innodb_file_per_table': config('innodb-file-per-table'),
         'table_open_cache': config('table-open-cache'),
-        'lp1366997_workaround': config('lp1366997-workaround')
+        'lp1366997_workaround': config('lp1366997-workaround'),
     }

     if config('prefer-ipv6'):
@@ -137,7 +137,8 @@
         context['ipv6'] = False

     context.update(PerconaClusterHelper().parse_config())
-    render(os.path.basename(MY_CNF), MY_CNF, context, perms=0o444)
+    render(os.path.basename(resolve_cnf_file()),
+           resolve_cnf_file(), context, perms=0o444)


 def render_config_restart_on_changed(clustered, hosts, bootstrap=False):
@@ -152,10 +153,10 @@
     it is started so long as the new node to be added is guaranteed to have
     been restarted so as to apply the new config.
     """
-    pre_hash = file_hash(MY_CNF)
+    pre_hash = file_hash(resolve_cnf_file())
     render_config(clustered, hosts)
     update_db_rels = False
-    if file_hash(MY_CNF) != pre_hash or bootstrap:
+    if file_hash(resolve_cnf_file()) != pre_hash or bootstrap:
         if bootstrap:
             service('bootstrap-pxc', 'mysql')
             # NOTE(dosaboy): this will not actually do anything if no cluster

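The `render_config_restart_on_changed` hunk above keeps the charm's existing hash-compare pattern: hash the config file before rendering, re-render, and only bounce the service when the contents actually changed. A minimal sketch of that pattern; `file_hash` here is a simplified stand-in for the charmhelpers helper of the same name, and `render`/`restart` are hypothetical callables, not part of the charm:

```python
# Sketch of the restart-on-change pattern used by
# render_config_restart_on_changed(): hash the config file before and
# after rendering, and only restart when the contents differ.
import hashlib
import os


def file_hash(path):
    """Return the MD5 of a file's contents, or None if it does not exist.

    Simplified stand-in for charmhelpers.core.host.file_hash.
    """
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def render_and_maybe_restart(path, render, restart):
    """Render the config at `path`; restart only if its hash changed."""
    pre_hash = file_hash(path)
    render(path)  # writes the (possibly unchanged) config file
    if file_hash(path) != pre_hash:
        restart()
        return True
    return False
```

Rendering an identical config is therefore a no-op for the service, which matters in a Galera cluster where needless restarts of individual nodes are disruptive.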
=== modified file 'hooks/percona_utils.py'
--- hooks/percona_utils.py	2015-07-30 09:59:33 +0000
+++ hooks/percona_utils.py	2015-09-29 11:04:01 +0000
@@ -25,6 +25,7 @@
     INFO,
     WARNING,
     ERROR,
+    cached,
 )
 from charmhelpers.fetch import (
     apt_install,
@@ -46,8 +47,7 @@
 KEY = "keys/repo.percona.com"
 REPO = """deb http://repo.percona.com/apt {release} main
 deb-src http://repo.percona.com/apt {release} main"""
-MY_CNF = "/etc/mysql/my.cnf"
-SEEDED_MARKER = "/var/lib/mysql/seeded"
+SEEDED_MARKER = "{data_dir}/seeded"
 HOSTS_FILE = '/etc/hosts'


@@ -69,12 +69,13 @@

 def seeded():
     ''' Check whether service unit is already seeded '''
-    return os.path.exists(SEEDED_MARKER)
+    return os.path.exists(SEEDED_MARKER.format(data_dir=resolve_data_dir()))


 def mark_seeded():
     ''' Mark service unit as seeded '''
-    with open(SEEDED_MARKER, 'w') as seeded:
+    with open(SEEDED_MARKER.format(data_dir=resolve_data_dir()),
+              'w') as seeded:
         seeded.write('done')


@@ -360,3 +361,19 @@
         (cluster_uuid), DEBUG)
     for rid in rids:
         relation_set(relation_id=rid, **{'bootstrap-uuid': cluster_uuid})
+
+
+@cached
+def resolve_data_dir():
+    if lsb_release()['DISTRIB_CODENAME'] < 'vivid':
+        return '/var/lib/mysql'
+    else:
+        return '/var/lib/percona-xtradb-cluster'
+
+
+@cached
+def resolve_cnf_file():
+    if lsb_release()['DISTRIB_CODENAME'] < 'vivid':
+        return '/etc/mysql/my.cnf'
+    else:
+        return '/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf'

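The new `resolve_data_dir`/`resolve_cnf_file` helpers above pick paths by comparing the Ubuntu codename as a string, which works because codenames of this era sort alphabetically in release order ('trusty' < 'utopic' < 'vivid' < 'wily'). A standalone sketch of the same pattern, with `functools.lru_cache` standing in for charmhelpers' `@cached` decorator and a hypothetical `_codename()` in place of `lsb_release()['DISTRIB_CODENAME']`:

```python
# Sketch of release-keyed path resolution. From vivid onward the
# percona-xtradb-cluster packaging splits configuration into
# /etc/mysql/percona-xtradb-cluster.conf.d/; earlier releases use the
# classic monolithic /etc/mysql/my.cnf.
from functools import lru_cache


def _codename():
    # Hypothetical stand-in for lsb_release()['DISTRIB_CODENAME'].
    return 'trusty'


@lru_cache(maxsize=None)  # stands in for charmhelpers' @cached
def resolve_cnf_file():
    # Lexicographic comparison: valid while codenames sort in
    # release order, as they do for the releases current here.
    if _codename() < 'vivid':
        return '/etc/mysql/my.cnf'
    return '/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf'
```

Caching matters because the helpers are called repeatedly per hook invocation (e.g. three times inside `render_config` alone), and each uncached call would shell out to `lsb_release`.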
=== added file 'templates/mysqld.cnf'
--- templates/mysqld.cnf	1970-01-01 00:00:00 +0000
+++ templates/mysqld.cnf	2015-09-29 11:04:01 +0000
@@ -0,0 +1,116 @@
+[mysqld]
+#
+# * Basic Settings
+#
+user = mysql
+pid-file = /var/run/mysqld/mysqld.pid
+socket = /var/run/mysqld/mysqld.sock
+port = 3306
+basedir = /usr
+datadir = /var/lib/percona-xtradb-cluster
+tmpdir = /tmp
+lc-messages-dir = /usr/share/mysql
+skip-external-locking
+
+#
+# * Networking
+#
+{% if bind_address -%}
+bind-address = {{ bind_address }}
+{% else -%}
+bind-address = 0.0.0.0
+{% endif %}
+
+#
+# * Fine Tuning
+#
+key_buffer = {{ key_buffer }}
+table_open_cache = {{ table_open_cache }}
+max_allowed_packet = 16M
+thread_stack = 192K
+thread_cache_size = 8
+
+# This replaces the startup script and checks MyISAM tables if needed
+# the first time they are touched
+myisam-recover = BACKUP
+
+{% if max_connections != -1 -%}
+max_connections = {{ max_connections }}
+{% endif %}
+
+{% if wait_timeout != -1 -%}
+# Seconds before clearing idle connections
+wait_timeout = {{ wait_timeout }}
+{% endif %}
+
+#
+# * Query Cache Configuration
+#
+query_cache_limit = 1M
+query_cache_size = 16M
+
+#
+# * Logging and Replication
+#
+#
+# Error log - should be very few entries.
+#
+log_error = /var/log/mysql/error.log
+#
+# The following can be used as easy to replay backup logs or for replication.
+# note: if you are setting up a replication slave, see README.Debian about
+# other settings you may need to change.
+expire_logs_days = 10
+max_binlog_size = 100M
+
+#
+# * InnoDB
+#
+{% if innodb_file_per_table -%}
+# This enables storing InnoDB tables in separate .ibd files. Note, however,
+# that existing InnoDB tables will remain in the ibdata file(s) unless OPTIMIZE
+# is run on them. Still, the ibdata1 file will NOT shrink - a full dump/import
+# of the data is needed in order to get rid of a large ibdata file.
+innodb_file_per_table = 1
+{% else -%}
+innodb_file_per_table = 0
+{% endif %}
+
+innodb_buffer_pool_size = {{ innodb_buffer_pool_size }}
+
+#
+# * Galera
+#
+# Add address of other cluster nodes here
+{% if not clustered -%}
+# Empty gcomm address is used when the cluster is being bootstrapped
+wsrep_cluster_address=gcomm://
+{% else -%}
+# Cluster connection URL contains the IPs of node#1, node#2 and node#3
+wsrep_cluster_address=gcomm://{{ cluster_hosts }}
+{% endif %}
+
+#
+# Node address
+wsrep_node_address={{ private_address }}
+#
+# SST method
+wsrep_sst_method={{ sst_method }}
+#
+# Cluster name
+wsrep_cluster_name={{ cluster_name }}
+#
+# Authentication for SST method
+wsrep_sst_auth="sstuser:{{ sst_password }}"
+
+{% if wsrep_provider_options -%}
+wsrep_provider_options = {{ wsrep_provider_options }}
+{% endif %}
+
+#
+# * IPv6 SST configuration
+#
+{% if ipv6 -%}
+[sst]
+sockopt=,pf=ip6
+{% endif %}
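Alongside the new template, the `percona_utils.py` hunks in this diff stop hard-coding the seeded marker under `/var/lib/mysql` and template it on the release-dependent data directory instead. A self-contained sketch of that marker-file pattern; for illustration, this simplified `resolve_data_dir` takes the codename as a parameter rather than reading it from `lsb_release`:

```python
# Sketch of the seeded-marker pattern: a sentinel file in the database
# data directory records that this unit has already been seeded, so the
# path must follow the datadir when it moves on newer releases.
import os

SEEDED_MARKER = "{data_dir}/seeded"


def resolve_data_dir(codename='trusty'):
    # Pre-vivid releases use the stock MySQL datadir; from vivid onward
    # the packaging moved it to /var/lib/percona-xtradb-cluster.
    if codename < 'vivid':
        return '/var/lib/mysql'
    return '/var/lib/percona-xtradb-cluster'


def seeded(data_dir):
    """Check whether the unit has already been seeded."""
    return os.path.exists(SEEDED_MARKER.format(data_dir=data_dir))


def mark_seeded(data_dir):
    """Mark the unit as seeded."""
    with open(SEEDED_MARKER.format(data_dir=data_dir), 'w') as f:
        f.write('done')
```

Deferring the `.format()` call to the point of use (rather than baking the path into the constant) is what lets the cached release lookup decide the directory at hook runtime.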
