Merge lp:~james-page/charms/trusty/percona-cluster/vivid into lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next

Proposed by James Page
Status: Merged
Merged at revision: 84
Proposed branch: lp:~james-page/charms/trusty/percona-cluster/vivid
Merge into: lp:~openstack-charmers-archive/charms/trusty/percona-cluster/next
Diff against target: 241 lines (+145/-11)
3 files modified
hooks/percona_hooks.py (+8/-7)
hooks/percona_utils.py (+21/-4)
templates/mysqld.cnf (+116/-0)
To merge this branch: bzr merge lp:~james-page/charms/trusty/percona-cluster/vivid
Reviewer        Review Type    Date Requested    Status
David Ames      (community)                      Approve
James Page                                       Needs Resubmitting
Review via email: mp+271659@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #9404 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/9404/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #10250 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/10250/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6490 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6490/

Revision history for this message
James Page (james-page) :
review: Needs Resubmitting
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #11094 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/11094/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #10302 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/10302/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6899 percona-cluster-next for james-page mp271659
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
make: *** [functional_test] Error 1
ERROR:root:Make target returned non-zero.

Full amulet test output: http://paste.ubuntu.com/12626197/
Build: http://10.245.162.77:8080/job/charm_amulet_test/6899/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #6930 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/6930/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #12204 percona-cluster-next for james-page mp271659
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/12204/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #11325 percona-cluster-next for james-page mp271659
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/11325/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #7446 percona-cluster-next for james-page mp271659
    AMULET OK: passed

Build: http://10.245.162.77:8080/job/charm_amulet_test/7446/

Revision history for this message
David Ames (thedac) wrote :

Approval after the fact. Tested and merged.

review: Approve

Preview Diff

=== modified file 'hooks/percona_hooks.py'
--- hooks/percona_hooks.py 2015-09-22 13:54:27 +0000
+++ hooks/percona_hooks.py 2015-09-29 11:04:01 +0000
@@ -45,7 +45,6 @@
 )
 from percona_utils import (
     determine_packages,
-    MY_CNF,
     setup_percona_repo,
     get_host_ip,
     get_cluster_hosts,
@@ -61,6 +60,7 @@
     notify_bootstrapped,
     is_bootstrapped,
     get_wsrep_value,
+    resolve_cnf_file,
 )
 from charmhelpers.contrib.database.mysql import (
     PerconaClusterHelper,
@@ -111,8 +111,8 @@
 
 
 def render_config(clustered=False, hosts=[]):
-    if not os.path.exists(os.path.dirname(MY_CNF)):
-        os.makedirs(os.path.dirname(MY_CNF))
+    if not os.path.exists(os.path.dirname(resolve_cnf_file())):
+        os.makedirs(os.path.dirname(resolve_cnf_file()))
 
     context = {
         'cluster_name': 'juju_cluster',
@@ -123,7 +123,7 @@
         'sst_password': config('sst-password'),
         'innodb_file_per_table': config('innodb-file-per-table'),
         'table_open_cache': config('table-open-cache'),
-        'lp1366997_workaround': config('lp1366997-workaround')
+        'lp1366997_workaround': config('lp1366997-workaround'),
     }
 
     if config('prefer-ipv6'):
@@ -137,7 +137,8 @@
         context['ipv6'] = False
 
     context.update(PerconaClusterHelper().parse_config())
-    render(os.path.basename(MY_CNF), MY_CNF, context, perms=0o444)
+    render(os.path.basename(resolve_cnf_file()),
+           resolve_cnf_file(), context, perms=0o444)
 
 
 def render_config_restart_on_changed(clustered, hosts, bootstrap=False):
@@ -152,10 +153,10 @@
     it is started so long as the new node to be added is guaranteed to have
     been restarted so as to apply the new config.
     """
-    pre_hash = file_hash(MY_CNF)
+    pre_hash = file_hash(resolve_cnf_file())
     render_config(clustered, hosts)
     update_db_rels = False
-    if file_hash(MY_CNF) != pre_hash or bootstrap:
+    if file_hash(resolve_cnf_file()) != pre_hash or bootstrap:
         if bootstrap:
             service('bootstrap-pxc', 'mysql')
         # NOTE(dosaboy): this will not actually do anything if no cluster
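The hunks above keep the charm's existing restart-on-change pattern, just routed through resolve_cnf_file(): hash the config file, re-render it, and only act when the rendered content actually changed. A minimal standalone sketch of that pattern (illustrative stand-ins only; the charm's real file_hash comes from charmhelpers, and SHA-256 here is an arbitrary choice):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    # Hash of the file contents, or None if the file does not exist yet.
    # (The charm uses charmhelpers' file_hash; SHA-256 is illustrative.)
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

def render_then_check(path, content):
    # Re-render the config and report whether a restart would be needed,
    # i.e. whether the rendered content actually changed on disk.
    pre_hash = file_hash(path)
    with open(path, 'w') as f:
        f.write(content)
    return file_hash(path) != pre_hash

cnf = os.path.join(tempfile.mkdtemp(), 'mysqld.cnf')
first_changed = render_then_check(cnf, '[mysqld]\nport = 3306\n')   # file created
second_changed = render_then_check(cnf, '[mysqld]\nport = 3306\n')  # identical content
```

This avoids needless mysql restarts when a hook fires but the rendered configuration is byte-for-byte unchanged.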
=== modified file 'hooks/percona_utils.py'
--- hooks/percona_utils.py 2015-07-30 09:59:33 +0000
+++ hooks/percona_utils.py 2015-09-29 11:04:01 +0000
@@ -25,6 +25,7 @@
     INFO,
     WARNING,
     ERROR,
+    cached,
 )
 from charmhelpers.fetch import (
     apt_install,
@@ -46,8 +47,7 @@
 KEY = "keys/repo.percona.com"
 REPO = """deb http://repo.percona.com/apt {release} main
 deb-src http://repo.percona.com/apt {release} main"""
-MY_CNF = "/etc/mysql/my.cnf"
-SEEDED_MARKER = "/var/lib/mysql/seeded"
+SEEDED_MARKER = "{data_dir}/seeded"
 HOSTS_FILE = '/etc/hosts'
 
 
@@ -69,12 +69,13 @@
 
 def seeded():
     ''' Check whether service unit is already seeded '''
-    return os.path.exists(SEEDED_MARKER)
+    return os.path.exists(SEEDED_MARKER.format(data_dir=resolve_data_dir()))
 
 
 def mark_seeded():
     ''' Mark service unit as seeded '''
-    with open(SEEDED_MARKER, 'w') as seeded:
+    with open(SEEDED_MARKER.format(data_dir=resolve_data_dir()),
+              'w') as seeded:
         seeded.write('done')
 
 
@@ -360,3 +361,19 @@
         (cluster_uuid), DEBUG)
     for rid in rids:
         relation_set(relation_id=rid, **{'bootstrap-uuid': cluster_uuid})
+
+
+@cached
+def resolve_data_dir():
+    if lsb_release()['DISTRIB_CODENAME'] < 'vivid':
+        return '/var/lib/mysql'
+    else:
+        return '/var/lib/percona-xtradb-cluster'
+
+
+@cached
+def resolve_cnf_file():
+    if lsb_release()['DISTRIB_CODENAME'] < 'vivid':
+        return '/etc/mysql/my.cnf'
+    else:
+        return '/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf'
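The new resolve_* helpers select paths by comparing Ubuntu codenames lexicographically ('trusty' < 'utopic' < 'vivid'), which holds for the releases this charm targets. A minimal runnable sketch of resolve_cnf_file (assumptions: lsb_release is stubbed to return 'vivid', and charmhelpers' @cached decorator is approximated with functools.lru_cache):

```python
from functools import lru_cache

def lsb_release():
    # Stub for charmhelpers' lsb_release(); value hard-coded for illustration.
    return {'DISTRIB_CODENAME': 'vivid'}

@lru_cache(maxsize=None)  # stands in for charmhelpers' @cached
def resolve_cnf_file():
    # Lexicographic codename comparison: releases before vivid keep the
    # classic my.cnf; vivid's Percona XtraDB Cluster packaging reads a
    # conf.d fragment instead.
    if lsb_release()['DISTRIB_CODENAME'] < 'vivid':
        return '/etc/mysql/my.cnf'
    return '/etc/mysql/percona-xtradb-cluster.conf.d/mysqld.cnf'
```

Note the ordering trick is only safe while the codenames involved sort alphabetically; it would need revisiting once release names wrap around the alphabet.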
=== added file 'templates/mysqld.cnf'
--- templates/mysqld.cnf 1970-01-01 00:00:00 +0000
+++ templates/mysqld.cnf 2015-09-29 11:04:01 +0000
@@ -0,0 +1,116 @@
+[mysqld]
+#
+# * Basic Settings
+#
+user = mysql
+pid-file = /var/run/mysqld/mysqld.pid
+socket = /var/run/mysqld/mysqld.sock
+port = 3306
+basedir = /usr
+datadir = /var/lib/percona-xtradb-cluster
+tmpdir = /tmp
+lc-messages-dir = /usr/share/mysql
+skip-external-locking
+
+#
+# * Networking
+#
+{% if bind_address -%}
+bind-address = {{ bind_address }}
+{% else -%}
+bind-address = 0.0.0.0
+{% endif %}
+
+#
+# * Fine Tuning
+#
+key_buffer = {{ key_buffer }}
+table_open_cache = {{ table_open_cache }}
+max_allowed_packet = 16M
+thread_stack = 192K
+thread_cache_size = 8
+
+# This replaces the startup script and checks MyISAM tables if needed
+# the first time they are touched
+myisam-recover = BACKUP
+
+{% if max_connections != -1 -%}
+max_connections = {{ max_connections }}
+{% endif %}
+
+{% if wait_timeout != -1 -%}
+# Seconds before clearing idle connections
+wait_timeout = {{ wait_timeout }}
+{% endif %}
+
+#
+# * Query Cache Configuration
+#
+query_cache_limit = 1M
+query_cache_size = 16M
+
+#
+# * Logging and Replication
+#
+#
+# Error log - should be very few entries.
+#
+log_error = /var/log/mysql/error.log
+#
+# The following can be used as easy to replay backup logs or for replication.
+# note: if you are setting up a replication slave, see README.Debian about
+# other settings you may need to change.
+expire_logs_days = 10
+max_binlog_size = 100M
+
+#
+# * InnoDB
+#
+{% if innodb_file_per_table -%}
+# This enables storing InnoDB tables in separate .ibd files. Note that, however
+# existing InnoDB tables will remain in ibdata file(s) unles OPTIMIZE is run
+# on them. Still, the ibdata1 file will NOT shrink - a full dump/import of the
+# data is needed in order to get rid of large ibdata file.
+innodb_file_per_table = 1
+{% else -%}
+innodb_file_per_table = 0
+{% endif %}
+
+innodb_buffer_pool_size = {{ innodb_buffer_pool_size }}
+
+#
+# * Galera
+#
+# Add address of other cluster nodes here
+{% if not clustered -%}
+# Empty gcomm address is being used when cluster is getting bootstrapped
+wsrep_cluster_address=gcomm://
+{% else -%}
+# Cluster connection URL contains the IPs of node#1, node#2 and node#3
+wsrep_cluster_address=gcomm://{{ cluster_hosts }}
+{% endif %}
+
+#
+# Node address
+wsrep_node_address={{ private_address }}
+#
+# SST method
+wsrep_sst_method={{ sst_method }}
+#
+# Cluster name
+wsrep_cluster_name={{ cluster_name }}
+#
+# Authentication for SST method
+wsrep_sst_auth="sstuser:{{ sst_password }}"
+
+{% if wsrep_provider_options -%}
+wsrep_provider_options = {{ wsrep_provider_options }}
+{% endif %}
+
+#
+# * IPv6 SST configuration
+#
+{% if ipv6 -%}
+[sst]
+sockopt=,pf=ip6
+{% endif %}
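The Galera block in the new template encodes the usual bootstrap rule: an empty gcomm:// URL starts a fresh cluster, while a populated one joins the listed nodes. A small illustrative helper mirroring that template logic (not part of the charm; cluster_hosts arrives pre-joined in the real context):

```python
def wsrep_cluster_address(clustered, hosts):
    # Empty gcomm:// bootstraps a brand-new cluster; otherwise the URL
    # lists the existing nodes this unit should join.
    if not clustered:
        return 'gcomm://'
    return 'gcomm://' + ','.join(hosts)

bootstrap_url = wsrep_cluster_address(False, [])
join_url = wsrep_cluster_address(True, ['10.0.0.1', '10.0.0.2'])
```

This is why the hooks bootstrap exactly one unit first: rendering the clustered form on a node with no reachable peers would leave mysql unable to start.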
