Merge lp:~chad.smith/charms/precise/postgresql/postgresql-manual-tune-kernel-params-fix into lp:charms/postgresql

Proposed by Chad Smith
Status: Merged
Merged at revision: 77
Proposed branch: lp:~chad.smith/charms/precise/postgresql/postgresql-manual-tune-kernel-params-fix
Merge into: lp:charms/postgresql
Diff against target: 1001 lines (+545/-86)
7 files modified
Makefile (+9/-0)
README.md (+14/-0)
config.yaml (+23/-15)
hooks/hooks.py (+143/-70)
hooks/test_hooks.py (+354/-0)
revision (+1/-0)
templates/postgresql.conf.tmpl (+1/-1)
To merge this branch: bzr merge lp:~chad.smith/charms/precise/postgresql/postgresql-manual-tune-kernel-params-fix
Reviewer Review Type Date Requested Status
Stuart Bishop (community) Approve
David Britton (community) Approve
Review via email: mp+200730@code.launchpad.net

Description of the change

Work-in-progress branch for initial review. Please let me know what you think before I go too far down this route. I didn't want to cause any issues by using local "trial" tests instead of Juju integration tests, but with a timeout of 15 minutes each, the 5 existing integration tests were taking 1 hr 15 min to complete (during error conditions).

This branch does the following:
  - adds a basic framework for functional unit tests of hooks.
      - these tests don't involve time-consuming Juju unit deployment like the
        existing integration tests, and provide local testing of hook functions
  - fixes handling of the kernel_shmall and kernel_shmmax config parameters, to
    ensure they are properly written when performance_tuning == "manual" (a
    sketch of the byte-to-page conversion is included after this list)
  - some cleanup to reduce the use of globals, and of local variable names that
    shadow global variable names, to avoid confusion. Removing globals also
    better enables the local unit testing framework by avoiding a module-level
    call to hookenv.config() during import
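
For reference, the kernel_shmall fix converts the configured byte count into pages before it is written to /etc/sysctl.d/50-postgresql.conf (kernel.shmall is expressed in pages, while kernel.shmmax is in bytes). A minimal standalone sketch of that conversion, rounding up so the full byte count is covered (an illustration only, not the charm's hooks.py code):

    import subprocess

    def shmall_pages(kernel_shmall_bytes):
        """Convert a kernel_shmall value given in bytes into pages."""
        page_size = int(subprocess.check_output(["getconf", "PAGE_SIZE"]))  # often 4096
        pages = kernel_shmall_bytes // page_size
        if kernel_shmall_bytes % page_size:
            pages += 1  # a partial page still needs a whole page
        return pages

    # e.g. with a 4096-byte page size, shmall_pages(1073742848) returns 262145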

If we think this is a good initial approach, it should be straightforward to add more unit tests to get coverage of most of the hook functions defined within. All thoughts appreciated.

87. By Chad Smith

uncomment integration tests

88. By Chad Smith

linting

89. By Chad Smith

revision bump

90. By Chad Smith

include python-psutil in install dependencies

Revision history for this message
David Britton (dpb) wrote :

Makefile:

[0] make two targets, test and integration-test
[1] remove ls-lint, as that is a Landscape-specific bzr plugin
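
A sketch of that split, reusing the trial and test.py invocations from the proposed Makefile (target names as suggested above; the lint target is omitted here since bzr ls-lint would be dropped; recipe lines must be indented with a tab):

    CHARM_DIR := $(shell pwd)

    test:
    	cd hooks && CHARM_DIR=$(CHARM_DIR) trial test_hooks.py

    integration-test:
    	echo "Integration tests using Juju deployed units"
    	TEST_TIMEOUT=900 ./test.py -v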

91. By Chad Smith

silly comma, I've got you this time

Revision history for this message
David Britton (dpb) wrote :

[3] put in a Makefile target for "devel" or "build-devel" that will install the appropriate packages (see the sketch after this list)

like: sudo apt-get install -y python-mocker python-twisted-core

[4] get rid of the revision file
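
A sketch of the target suggested in [3], using the package list from the comment above (trial, used by the test target, is shipped by python-twisted-core):

    build-devel:
    	sudo apt-get install -y python-mocker python-twisted-core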

I agree with the general approach. It's very light, but I think it can be expanded as you go forward.

+1

review: Approve
Revision history for this message
Stuart Bishop (stub) wrote :

This all looks good. The cleanups are much needed, as are the retrofitted unittests. And those globals... thanks for killing some more of them.

The checks for "wal_buffers = -1" are unnecessary, as this setting is not related to replication.

review: Approve
Revision history for this message
Stuart Bishop (stub) wrote :

The file 'revision' needs to be removed before landing

Revision history for this message
Stuart Bishop (stub) wrote :

Integration tests pass with the local provider (but still flaky due to the usual juju race conditions).

Merged.

Preview Diff

=== added file 'Makefile'
--- Makefile 1970-01-01 00:00:00 +0000
+++ Makefile 2014-01-07 21:59:36 +0000
@@ -0,0 +1,9 @@
+CHARM_DIR := $(shell pwd)
+
+test:
+	cd hooks && CHARM_DIR=$(CHARM_DIR) trial test_hooks.py
+	echo "Integration tests using Juju deployed units"
+	TEST_TIMEOUT=900 ./test.py -v
+
+lint:
+	bzr ls-lint
=== modified file 'README.md'
--- README.md 2013-12-23 10:35:30 +0000
+++ README.md 2014-01-07 21:59:36 +0000
@@ -134,3 +134,17 @@
 - `state`: 'standalone', 'master' or 'hot standby'
 - `allowed-units`: space separated list of allowed clients (unit name).
   You should check this to determine if you can connect to the database yet.
+
+### For clustered support
+In order for client charms to support replication the client will need to be
+aware when relation-list reports > 1 unit of postgresql related:
+ - When > 1 postgresql units are related:
+   - if the client charm needs database write access, they will ignore
+     all "standalone", "hot standby" and "failover" states as those will
+     likely come from a standby unit (read-only) during standby install,
+     setup or teardown
+   - If read-only access is needed for a client, acting on
+     db-admin-relation-changed "hot standby" state will provide you with a
+     readonly replicated copy of the db
+ - When 1 postgresql unit is related:
+   - watch for updates to the db-admin-relation-changed with "standalone" state
=== modified file 'config.yaml'
--- config.yaml 2013-11-13 10:07:41 +0000
+++ config.yaml 2014-01-07 21:59:36 +0000
@@ -160,7 +160,7 @@
     type: boolean
     description: |
       Hot standby or warm standby. When True, queries can be run against
-      the database is in recovery or standby mode (ie. replicated).
+      the database when in recovery or standby mode (ie. replicated).
       Overridden by juju when master/slave relations are used.
   hot_standby_feedback:
     default: False
@@ -230,28 +230,40 @@
     type: string
     description: |
       Possible values here are "auto" or "manual". If we set "auto" then the
-      charm will attempt to automatically tune all the performance paramaters
-      as below. If manual, then it will use the defaults below unless
-      overridden. "auto" gathers information about the node you're deployed
-      on and tries to make intelligent guesses about what tuning parameters
-      to set based on available RAM and CPU under the assumption that it's
-      the only significant service running on this node.
+      charm will attempt to automatically tune all the performance parameters
+      for kernel_shmall, kernel_shmmax, shared_buffers and
+      effective_cache_size below, unless those config values are explicitly
+      set. If manual, then it will use the defaults below unless set.
+      "auto" gathers information about the node on which you are deployed and
+      tries to make intelligent guesses about what tuning parameters to set
+      based on available RAM and CPU under the assumption that it's the only
+      significant service running on this node.
   kernel_shmall:
     default: 0
     type: int
-    description: Kernel/shmall
+    description: Total amount of shared memory available, in bytes.
   kernel_shmmax:
     default: 0
     type: int
-    description: Kernel/shmmax
+    description: The maximum size, in bytes, of a shared memory segment.
   shared_buffers:
     default: ""
     type: string
-    description: Shared buffers
+    description: |
+      The amount of memory the database server uses for shared memory
+      buffers. This string should be of the format '###MB'.
+  effective_cache_size:
+    default: ""
+    type: string
+    description: |
+      Effective cache size is an estimate of how much memory is available for
+      disk caching within the database. (50% to 75% of system memory). This
+      string should be of the format '###MB'.
   temp_buffers:
     default: "1MB"
     type: string
-    description: Temp buffers
+    description: |
+      The maximum number of temporary buffers used by each database session.
   wal_buffers:
     default: "-1"
     type: string
@@ -265,10 +277,6 @@
     default: 4.0
     type: float
     description: Random page cost
-  effective_cache_size:
-    default: ""
-    type: string
-    description: Effective cache size
   #------------------------------------------------------------------------
   # Volume management
   # volume-map, volume-dev_regexp are only used
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2013-12-23 10:35:30 +0000
+++ hooks/hooks.py 2014-01-07 21:59:36 +0000
@@ -48,7 +48,7 @@
     '''
     myname = hookenv.local_unit().replace('/', '-')
     ts = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime())
-    with open('/var/log/juju/{}-debug.log'.format(myname), 'a') as f:
+    with open('{}/{}-debug.log'.format(juju_log_dir, myname), 'a') as f:
         f.write('{} {}: {}\n'.format(ts, lvl, msg))
     hookenv.log(msg, lvl)
 
@@ -121,7 +121,7 @@
 def volume_get_volid_from_volume_map():
     volume_map = {}
     try:
-        volume_map = yaml.load(config_data['volume-map'].strip())
+        volume_map = yaml.load(hookenv.config('volume-map').strip())
         if volume_map:
             return volume_map.get(os.environ['JUJU_UNIT_NAME'])
     except ConstructorError as e:
@@ -154,7 +154,7 @@
 # @returns volid
 # None config state is invalid - we should not serve
 def volume_get_volume_id():
-    ephemeral_storage = config_data['volume-ephemeral-storage']
+    ephemeral_storage = hookenv.config('volume-ephemeral-storage')
     volid = volume_get_volid_from_volume_map()
     juju_unit_name = hookenv.local_unit()
     if ephemeral_storage in [True, 'yes', 'Yes', 'true', 'True']:
@@ -195,6 +195,7 @@
 
 
 def postgresql_autostart(enabled):
+    postgresql_config_dir = _get_postgresql_config_dir()
     startup_file = os.path.join(postgresql_config_dir, 'start.conf')
     if enabled:
         log("Enabling PostgreSQL startup in {}".format(startup_file))
@@ -202,8 +203,8 @@
     else:
         log("Disabling PostgreSQL startup in {}".format(startup_file))
         mode = 'manual'
-    contents = Template(open("templates/start_conf.tmpl").read()).render(
-        {'mode': mode})
+    template_file = "{}/templates/start_conf.tmpl".format(hookenv.charm_dir())
+    contents = Template(open(template_file).read()).render({'mode': mode})
     host.write_file(
         startup_file, contents, 'postgres', 'postgres', perms=0o644)
 
@@ -229,7 +230,7 @@
     if status != 0:
         return False
     # e.g. output: "Running clusters: 9.1/main"
-    vc = "%s/%s" % (config_data["version"], config_data["cluster_name"])
+    vc = "%s/%s" % (hookenv.config("version"), hookenv.config("cluster_name"))
     return vc in output.decode('utf8').split()
 
 
@@ -253,7 +254,7 @@
     # success = host.service_restart('postgresql')
     try:
         run('pg_ctlcluster -force {version} {cluster_name} '
-            'restart'.format(**config_data))
+            'restart'.format(**hookenv.config()))
         success = True
     except subprocess.CalledProcessError:
         success = False
@@ -327,11 +328,11 @@
     return success
 
 
-def get_service_port(postgresql_config):
+def get_service_port(config_file):
     '''Return the port PostgreSQL is listening on.'''
-    if not os.path.exists(postgresql_config):
+    if not os.path.exists(config_file):
         return None
-    postgresql_config = open(postgresql_config, 'r').read()
+    postgresql_config = open(config_file, 'r').read()
     port = re.search("port.*=(.*)", postgresql_config).group(1).strip()
     try:
         return int(port)
@@ -339,14 +340,26 @@
         return None
 
 
-def create_postgresql_config(postgresql_config):
+def _get_system_ram():
+    """ Return the system ram in Megabytes """
+    import psutil
+    return psutil.phymem_usage()[0] / (1024 ** 2)
+
+
+def _get_page_size():
+    """ Return the operating system's configured PAGE_SIZE """
+    return int(run("getconf PAGE_SIZE"))  # frequently 4096
+
+
+def create_postgresql_config(config_file):
     '''Create the postgresql.conf file'''
+    config_data = hookenv.config()
     if config_data["performance_tuning"] == "auto":
         # Taken from:
         # http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
         # num_cpus is not being used ... commenting it out ... negronjl
         #num_cpus = run("cat /proc/cpuinfo | grep processor | wc -l")
-        total_ram = run("free -m | grep Mem | awk '{print $2}'")
+        total_ram = _get_system_ram()
         if not config_data["effective_cache_size"]:
             config_data["effective_cache_size"] = \
                 "%sMB" % (int(int(total_ram) * 0.75),)
@@ -357,15 +370,22 @@
         else:
             config_data["shared_buffers"] = \
                 "%sMB" % (int(int(total_ram) * 0.15),)
-        # XXX: This is very messy - should probably be a subordinate charm
-        conf_file = open("/etc/sysctl.d/50-postgresql.conf", "w")
-        conf_file.write("kernel.sem = 250 32000 100 1024\n")
-        conf_file.write("kernel.shmall = %s\n" %
-                        ((int(total_ram) * 1024 * 1024) + 1024),)
-        conf_file.write("kernel.shmmax = %s\n" %
-                        ((int(total_ram) * 1024 * 1024) + 1024),)
-        conf_file.close()
-        run("sysctl -p /etc/sysctl.d/50-postgresql.conf")
+        config_data["kernel_shmmax"] = (int(total_ram) * 1024 * 1024) + 1024
+        config_data["kernel_shmall"] = config_data["kernel_shmmax"]
+
+    # XXX: This is very messy - should probably be a subordinate charm
+    lines = ["kernel.sem = 250 32000 100 1024\n"]
+    if config_data["kernel_shmall"] > 0:
+        # Convert config kernel_shmall (bytes) to pages
+        page_size = _get_page_size()
+        num_pages = config_data["kernel_shmall"] / page_size
+        if (config_data["kernel_shmall"] % page_size) > 0:
+            num_pages += 1
+        lines.append("kernel.shmall = %s\n" % num_pages)
+    if config_data["kernel_shmmax"] > 0:
+        lines.append("kernel.shmmax = %s\n" % config_data["kernel_shmmax"])
+    host.write_file(postgresql_sysctl, ''.join(lines), perms=0600)
+    run("sysctl -p {}".format(postgresql_sysctl))
 
     # If we are replicating, some settings may need to be overridden to
     # certain minimum levels.
@@ -383,28 +403,31 @@
 
     # Send config data to the template
     # Return it as pg_config
+    charm_dir = hookenv.charm_dir()
+    template_file = "{}/templates/postgresql.conf.tmpl".format(charm_dir)
     pg_config = Template(
-        open("templates/postgresql.conf.tmpl").read()).render(config_data)
+        open(template_file).read()).render(config_data)
     host.write_file(
-        postgresql_config, pg_config,
+        config_file, pg_config,
         owner="postgres", group="postgres", perms=0600)
 
     local_state['saved_config'] = config_data
     local_state.save()
 
 
-def create_postgresql_ident(postgresql_ident):
+def create_postgresql_ident(output_file):
     '''Create the pg_ident.conf file.'''
     ident_data = {}
-    pg_ident_template = Template(
-        open("templates/pg_ident.conf.tmpl").read())
+    charm_dir = hookenv.charm_dir()
+    template_file = "{}/templates/pg_ident.conf.tmpl".format(charm_dir)
+    pg_ident_template = Template(open(template_file).read())
     host.write_file(
-        postgresql_ident, pg_ident_template.render(ident_data),
+        output_file, pg_ident_template.render(ident_data),
        owner="postgres", group="postgres", perms=0600)
 
 
 def generate_postgresql_hba(
-        postgresql_hba, user=None, schema_user=None, database=None):
+        output_file, user=None, schema_user=None, database=None):
     '''Create the pg_hba.conf file.'''
 
     # Per Bug #1117542, when generating the postgresql_hba file we
@@ -429,6 +452,7 @@
             return output.rstrip(".")  # trailing dot
         return addr
 
+    config_data = hookenv.config()
     allowed_units = set()
     relation_data = []
     relids = hookenv.relation_ids('db') + hookenv.relation_ids('db-admin')
@@ -525,9 +549,10 @@
             'private-address': munge_address(admin_ip)}
         relation_data.append(admin_host)
 
-    pg_hba_template = Template(open("templates/pg_hba.conf.tmpl").read())
+    template_file = "{}/templates/pg_hba.conf.tmpl".format(hookenv.charm_dir())
+    pg_hba_template = Template(open(template_file).read())
     host.write_file(
-        postgresql_hba, pg_hba_template.render(access_list=relation_data),
+        output_file, pg_hba_template.render(access_list=relation_data),
         owner="postgres", group="postgres", perms=0600)
     postgresql_reload()
 
@@ -539,27 +564,36 @@
             relid, {"allowed-units": " ".join(unit_sorted(allowed_units))})
 
 
-def install_postgresql_crontab(postgresql_ident):
+def install_postgresql_crontab(output_file):
     '''Create the postgres user's crontab'''
+    config_data = hookenv.config()
     crontab_data = {
         'backup_schedule': config_data["backup_schedule"],
         'scripts_dir': postgresql_scripts_dir,
         'backup_days': config_data["backup_retention_count"],
     }
+    charm_dir = hookenv.charm_dir()
+    template_file = "{}/templates/postgres.cron.tmpl".format(charm_dir)
     crontab_template = Template(
-        open("templates/postgres.cron.tmpl").read()).render(crontab_data)
-    host.write_file('/etc/cron.d/postgres', crontab_template, perms=0600)
+        open(template_file).read()).render(crontab_data)
+    host.write_file(output_file, crontab_template, perms=0600)
 
 
 def create_recovery_conf(master_host, restart_on_change=False):
+    version = hookenv.config('version')
+    cluster_name = hookenv.config('cluster_name')
+    postgresql_cluster_dir = os.path.join(
+        postgresql_data_dir, version, cluster_name)
+
     recovery_conf_path = os.path.join(postgresql_cluster_dir, 'recovery.conf')
     if os.path.exists(recovery_conf_path):
         old_recovery_conf = open(recovery_conf_path, 'r').read()
     else:
         old_recovery_conf = None
 
-    recovery_conf = Template(
-        open("templates/recovery.conf.tmpl").read()).render({
+    charm_dir = hookenv.charm_dir()
+    template_file = "{}/templates/recovery.conf.tmpl".format(charm_dir)
+    recovery_conf = Template(open(template_file).read()).render({
         'host': master_host,
         'password': local_state['replication_password']})
     log(recovery_conf, DEBUG)
@@ -578,9 +612,9 @@
 # Returns a string containing the postgresql config or
 # None
 #------------------------------------------------------------------------------
-def load_postgresql_config(postgresql_config):
-    if os.path.isfile(postgresql_config):
-        return(open(postgresql_config).read())
+def load_postgresql_config(config_file):
+    if os.path.isfile(config_file):
+        return(open(config_file).read())
     else:
         return(None)
 
@@ -677,7 +711,11 @@
 # - manipulate /var/lib/postgresql/VERSION/CLUSTER symlink
 #------------------------------------------------------------------------------
 def config_changed_volume_apply():
-    data_directory_path = postgresql_cluster_dir
+    version = hookenv.config('version')
+    cluster_name = hookenv.config('cluster_name')
+    data_directory_path = os.path.join(
+        postgresql_data_dir, version, cluster_name)
+
     assert(data_directory_path)
     volid = volume_get_volume_id()
     if volid:
@@ -698,7 +736,7 @@
         mount_point = volume_mount_point_from_volid(volid)
         new_pg_dir = os.path.join(mount_point, "postgresql")
         new_pg_version_cluster_dir = os.path.join(
-            new_pg_dir, config_data["version"], config_data["cluster_name"])
+            new_pg_dir, version, cluster_name)
         if not mount_point:
             log(
                 "invalid mount point from volid = {}, "
@@ -724,7 +762,7 @@
         # /var/lib/postgresql/9.1/main
         curr_dir_stat = os.stat(data_directory_path)
         for new_dir in [new_pg_dir,
-                        os.path.join(new_pg_dir, config_data["version"]),
+                        os.path.join(new_pg_dir, version),
                         new_pg_version_cluster_dir]:
             if not os.path.isdir(new_dir):
                 log("mkdir %s".format(new_dir))
@@ -781,7 +819,8 @@
 
 @hooks.hook()
 def config_changed(force_restart=False):
-    update_repos_and_packages()
+    config_data = hookenv.config()
+    update_repos_and_packages(config_data["version"])
 
     # Trigger volume initialization logic for permanent storage
     volid = volume_get_volume_id()
@@ -813,10 +852,17 @@
             "Disabled and stopped postgresql service "
            "(config_changed_volume_apply failure)", ERROR)
         sys.exit(1)
+
+    postgresql_config_dir = _get_postgresql_config_dir(config_data)
+    postgresql_config = os.path.join(postgresql_config_dir, "postgresql.conf")
+    postgresql_hba = os.path.join(postgresql_config_dir, "pg_hba.conf")
+    postgresql_ident = os.path.join(postgresql_config_dir, "pg_ident.conf")
+
     current_service_port = get_service_port(postgresql_config)
     create_postgresql_config(postgresql_config)
     generate_postgresql_hba(postgresql_hba)
     create_postgresql_ident(postgresql_ident)
+
     updated_service_port = config_data["listen_port"]
     update_service_port(current_service_port, updated_service_port)
     update_nrpe_checks()
@@ -832,8 +878,8 @@
         if os.path.isfile(f) and os.access(f, os.X_OK):
             subprocess.check_call(['sh', '-c', f])
 
-    update_repos_and_packages()
-
+    config_data = hookenv.config()
+    update_repos_and_packages(config_data["version"])
     if not 'state' in local_state:
         # Fresh installation. Because this function is invoked by both
         # the install hook and the upgrade-charm hook, we need to guard
@@ -848,6 +894,10 @@
         run("pg_createcluster --locale='{}' --encoding='{}' 9.1 main".format(
             config_data['locale'], config_data['encoding']))
 
+    postgresql_backups_dir = (
+        config_data['backup_dir'].strip() or
+        os.path.join(postgresql_data_dir, 'backups'))
+
     host.mkdir(postgresql_backups_dir, owner="postgres", perms=0o755)
     host.mkdir(postgresql_scripts_dir, owner="postgres", perms=0o755)
     host.mkdir(postgresql_logs_dir, owner="postgres", perms=0o755)
@@ -857,10 +907,11 @@
         'scripts_dir': postgresql_scripts_dir,
         'logs_dir': postgresql_logs_dir,
     }
-    dump_script = Template(
-        open("templates/dump-pg-db.tmpl").read()).render(paths)
-    backup_job = Template(
-        open("templates/pg_backup_job.tmpl").read()).render(paths)
+    charm_dir = hookenv.charm_dir()
+    template_file = "{}/templates/dump-pg-db.tmpl".format(charm_dir)
+    dump_script = Template(open(template_file).read()).render(paths)
+    template_file = "{}/templates/pg_backup_job.tmpl".format(charm_dir)
+    backup_job = Template(open(template_file).read()).render(paths)
     host.write_file(
         '{}/dump-pg-db'.format(postgresql_scripts_dir),
         dump_script, perms=0755)
@@ -1176,6 +1227,7 @@
     log("Client relations {}".format(local_state['client_relations']))
     local_state.publish()
 
+    postgresql_hba = os.path.join(_get_postgresql_config_dir(), "pg_hba.conf")
     generate_postgresql_hba(postgresql_hba, user=user,
                             schema_user=schema_user,
                             database=database)
@@ -1213,6 +1265,7 @@
     log("Client relations {}".format(local_state['client_relations']))
     local_state.publish()
 
+    postgresql_hba = os.path.join(_get_postgresql_config_dir(), "pg_hba.conf")
     generate_postgresql_hba(postgresql_hba)
 
     snapshot_relations()
@@ -1245,6 +1298,7 @@
     run_sql_as_postgres(sql, AsIs(quote_identifier(database)),
                         AsIs(quote_identifier(user + "_schema")))
 
+    postgresql_hba = os.path.join(_get_postgresql_config_dir(), "pg_hba.conf")
    generate_postgresql_hba(postgresql_hba)
 
     # Cleanup our local state.
@@ -1261,13 +1315,14 @@
         sql = "ALTER USER %s NOSUPERUSER"
         run_sql_as_postgres(sql, AsIs(quote_identifier(user)))
 
+    postgresql_hba = os.path.join(_get_postgresql_config_dir(), "pg_hba.conf")
     generate_postgresql_hba(postgresql_hba)
 
     # Cleanup our local state.
     snapshot_relations()
 
 
-def update_repos_and_packages():
+def update_repos_and_packages(version):
     extra_repos = hookenv.config('extra_archives')
     extra_repos_added = local_state.setdefault('extra_repos_added', set())
     if extra_repos:
@@ -1284,10 +1339,12 @@
     # It might have been better for debversion and plpython to only get
     # installed if they were listed in the extra-packages config item,
     # but they predate this feature.
-    packages = ["postgresql-%s" % config_data["version"],
-                "postgresql-contrib-%s" % config_data["version"],
-                "postgresql-plpython-%s" % config_data["version"],
-                "postgresql-%s-debversion" % config_data["version"],
+    packages = ["python-psutil",  # to obtain system RAM from python
+                "libc-bin",  # for getconf
+                "postgresql-%s" % version,
+                "postgresql-contrib-%s" % version,
+                "postgresql-plpython-%s" % version,
+                "postgresql-%s-debversion" % version,
                 "python-jinja2", "syslinux", "python-psycopg2"]
     packages.extend((hookenv.config('extra-packages') or '').split())
     packages = fetch.filter_installed_packages(packages)
@@ -1351,6 +1408,11 @@
 
 def promote_database():
     '''Take the database out of recovery mode.'''
+    config_data = hookenv.config()
+    version = config_data['version']
+    cluster_name = config_data['cluster_name']
+    postgresql_cluster_dir = os.path.join(
+        postgresql_data_dir, version, cluster_name)
     recovery_conf = os.path.join(postgresql_cluster_dir, 'recovery.conf')
     if os.path.exists(recovery_conf):
         # Rather than using 'pg_ctl promote', we do the promotion
@@ -1554,6 +1616,8 @@
         del local_state['paused_at_failover']
 
     publish_hot_standby_credentials()
+    postgresql_hba = os.path.join(
+        _get_postgresql_config_dir(), "pg_hba.conf")
     generate_postgresql_hba(postgresql_hba)
 
     local_state.publish()
@@ -1706,7 +1770,7 @@
     lock.
     '''
     import psycopg2
-    key = long(config_data['advisory_lock_restart_key'])
+    key = long(hookenv.config('advisory_lock_restart_key'))
     if exclusive:
         lock_function = 'pg_advisory_lock'
     else:
@@ -1749,6 +1813,12 @@
     postgresql_stop()
     log("Cloning master {}".format(master_unit))
 
+    config_data = hookenv.config()
+    version = config_data['version']
+    cluster_name = config_data['cluster_name']
+    postgresql_cluster_dir = os.path.join(
+        postgresql_data_dir, version, cluster_name)
+    postgresql_config_dir = _get_postgresql_config_dir(config_data)
     cmd = [
         'sudo', '-E',  # -E needed to locate pgpass file.
         '-u', 'postgres', 'pg_basebackup', '-D', postgresql_cluster_dir,
@@ -1802,6 +1872,11 @@
 
 
 def postgresql_is_in_backup_mode():
+    version = hookenv.config('version')
+    cluster_name = hookenv.config('cluster_name')
+    postgresql_cluster_dir = os.path.join(
+        postgresql_data_dir, version, cluster_name)
+
     return os.path.exists(
         os.path.join(postgresql_cluster_dir, 'backup_label'))
 
@@ -1919,30 +1994,28 @@
     host.service_reload('nagios-nrpe-server')
 
 
+def _get_postgresql_config_dir(config_data=None):
+    """ Return the directory path of the postgresql configuration files. """
+    if config_data == None:
+        config_data = hookenv.config()
+    version = config_data['version']
+    cluster_name = config_data['cluster_name']
+    return os.path.join("/etc/postgresql", version, cluster_name)
+
 ###############################################################################
 # Global variables
 ###############################################################################
-config_data = hookenv.config()
-version = config_data['version']
-cluster_name = config_data['cluster_name']
 postgresql_data_dir = "/var/lib/postgresql"
-postgresql_cluster_dir = os.path.join(
-    postgresql_data_dir, version, cluster_name)
-postgresql_bin_dir = os.path.join('/usr/lib/postgresql', version, 'bin')
-postgresql_config_dir = os.path.join("/etc/postgresql", version, cluster_name)
-postgresql_config = os.path.join(postgresql_config_dir, "postgresql.conf")
-postgresql_ident = os.path.join(postgresql_config_dir, "pg_ident.conf")
-postgresql_hba = os.path.join(postgresql_config_dir, "pg_hba.conf")
+postgresql_scripts_dir = os.path.join(postgresql_data_dir, 'scripts')
+postgresql_logs_dir = os.path.join(postgresql_data_dir, 'logs')
+
+postgresql_sysctl = "/etc/sysctl.d/50-postgresql.conf"
 postgresql_crontab = "/etc/cron.d/postgresql"
 postgresql_service_config_dir = "/var/run/postgresql"
-postgresql_scripts_dir = os.path.join(postgresql_data_dir, 'scripts')
-postgresql_backups_dir = (
-    config_data['backup_dir'].strip() or
-    os.path.join(postgresql_data_dir, 'backups'))
-postgresql_logs_dir = os.path.join(postgresql_data_dir, 'logs')
-hook_name = os.path.basename(sys.argv[0])
 replication_relation_types = ['master', 'slave', 'replication']
 local_state = State('local_state.pickle')
+hook_name = os.path.basename(sys.argv[0])
+juju_log_dir = "/var/log/juju"
 
 
 if __name__ == '__main__':
=== added file 'hooks/test_hooks.py'
--- hooks/test_hooks.py 1970-01-01 00:00:00 +0000
+++ hooks/test_hooks.py 2014-01-07 21:59:36 +0000
@@ -0,0 +1,354 @@
1import mocker
2import hooks
3
4
5class TestJujuHost(object):
6 """
7 Testing object to intercept charmhelper calls and inject data, or make sure
8 certain data is set.
9 """
10 def write_file(self, file_path, contents, owner=None, group=None,
11 perms=None):
12 """
13 Only write the file as requested. owner, group and perms untested.
14 """
15 with open(file_path, 'w') as target:
16 target.write(contents)
17
18 def mkdir(self, dir_path, owner, group, perms):
19 """Not yet tested"""
20 pass
21
22 def service_start(self, service_name):
23 """Not yet tested"""
24 pass
25
26 def service_reload(self, service_name):
27 """Not yet tested"""
28 pass
29
30 def service_pwgen(self, service_name):
31 """Not yet tested"""
32 return ""
33
34 def service_stop(self, service_name):
35 """Not yet tested"""
36 pass
37
38
39class TestJuju(object):
40 """
41 Testing object to intercept juju calls and inject data, or make sure
42 certain data is set.
43 """
44
45 _relation_data = {}
46 _relation_ids = {}
47 _relation_list = ("postgres/0",)
48
49 def __init__(self):
50 self._config = {
51 "admin_addresses": "",
52 "locale": "C",
53 "encoding": "UTF-8",
54 "extra_packages": "",
55 "dumpfile_location": "None",
56 "config_change_command": "reload",
57 "version": "9.1",
58 "cluster_name": "main",
59 "listen_ip": "*",
60 "listen_port": "5432",
61 "max_connections": "100",
62 "ssl": "True",
63 "log_min_duration_statement": -1,
64 "log_checkpoints": False,
65 "log_connections": False,
66 "log_disconnections": False,
67 "log_line_prefix": "%t ",
68 "log_lock_waits": False,
69 "log_timezone": "UTC",
70 "autovacuum": True,
71 "log_autovacuum_min_duration": -1,
72 "autovacuum_analyze_threshold": 50,
73 "autovacuum_vacuum_scale_factor": 0.2,
74 "autovacuum_analyze_scale_factor": 0.1,
75 "autovacuum_vacuum_cost_delay": "20ms",
76 "search_path": "\"$user\",public",
77 "standard_conforming_strings": True,
78 "hot_standby": False,
79 "hot_standby_feedback": False,
80 "wal_level": "minimal",
81 "max_wal_senders": 0,
82 "wal_keep_segments": 0,
83 "replicated_wal_keep_segments": 5000,
84 "archive_mode": False,
85 "archive_command": "",
86 "work_mem": "1MB",
87 "maintenance_work_mem": "1MB",
88 "performance_tuning": "auto",
89 "kernel_shmall": 0,
90 "kernel_shmmax": 0,
91 "shared_buffers": "",
92 "effective_cache_size": "",
93 "temp_buffers": "1MB",
94 "wal_buffers": "-1",
95 "checkpoint_segments": 3,
96 "random_page_cost": 4.0,
97 "volume_ephemeral_storage": True,
98 "volume_map": "",
99 "volume_dev_regexp": "/dev/db[b-z]",
100 "backup_dir": "/var/lib/postgresql/backups",
101 "backup_schedule": "13 4 * * *",
102 "backup_retention_count": 7,
103 "nagios_context": "juju",
104 "extra_archives": "",
105 "advisory_lock_restart_key": 765}
106
107 def relation_set(self, *args, **kwargs):
108 """
109 Capture result of relation_set into _relation_data, which
110 can then be checked later.
111 """
112 if "relation_id" in kwargs:
113 del kwargs["relation_id"]
114 self._relation_data = dict(self._relation_data, **kwargs)
115 for arg in args:
116 (key, value) = arg.split("=")
117 self._relation_data[key] = value
118
119 def relation_ids(self, relation_name="db-admin"):
120 """
121 Return expected relation_ids for tests. Feel free to expand
122 as more tests are added.
123 """
124 return [self._relation_ids[name] for name in self._relation_ids.keys()
125 if name.find(relation_name) == 0]
126
127 def related_units(self, relid="db-admin:5"):
128 """
129 Return expected relation_ids for tests. Feel free to expand
130 as more tests are added.
131 """
132 return [name for name, value in self._relation_ids.iteritems()
133 if value == relid]
134
135 def relation_list(self):
136 """
137 Hardcode expected relation_list for tests. Feel free to expand
138 as more tests are added.
139 """
140 return list(self._relation_list)
141
142 def unit_get(self, *args):
143 """
144 for now the only thing this is called for is "public-address",
145 so it's a simplistic return.
146 """
147 return "localhost"
148
149 def local_unit(self):
150 return hooks.os.environ["JUJU_UNIT_NAME"]
151
152 def charm_dir(self):
153 return hooks.os.environ["CHARM_DIR"]
154
155 def juju_log(self, *args, **kwargs):
156 pass
157
158 def log(self, *args, **kwargs):
159 pass
160
161 def config_get(self, scope=None):
162 if scope is None:
163 return self.config
164 else:
165 return self.config[scope]
166
167 def relation_get(self, scope=None, unit_name=None, relation_id=None):
168 pass
169
170
171class TestHooks(mocker.MockerTestCase):
172
173 def setUp(self):
174 hooks.hookenv = TestJuju()
175 hooks.host = TestJujuHost()
176 hooks.juju_log_dir = self.makeDir()
177 hooks.hookenv.config = lambda: hooks.hookenv._config
178 #hooks.hookenv.localunit = lambda: "localhost"
179 hooks.os.environ["JUJU_UNIT_NAME"] = "landscape/1"
180 hooks.postgresql_sysctl = self.makeFile()
181 hooks._get_system_ram = lambda: 1024 # MB
182 hooks._get_page_size = lambda: 1024 * 1024 # bytes
183 self.maxDiff = None
184
185 def assertFileContains(self, filename, lines):
186 """Make sure strings exist in a file."""
187 with open(filename, "r") as fp:
188 contents = fp.read()
189 for line in lines:
190 self.assertIn(line, contents)
191
192 def assertNotFileContains(self, filename, lines):
193 """Make sure strings do not exist in a file."""
194 with open(filename, "r") as fp:
195 contents = fp.read()
196 for line in lines:
197 self.assertNotIn(line, contents)
198
199 def assertFilesEqual(self, file1, file2):
200 """Given two filenames, compare them."""
201 with open(file1, "r") as fp1:
202 contents1 = fp1.read()
203 with open(file2, "r") as fp2:
204 contents2 = fp2.read()
205 self.assertEqual(contents1, contents2)
206
207
208class TestHooksService(TestHooks):
209
210 def test_create_postgresql_config_wal_no_replication(self):
211 """
212 When postgresql is in C{standalone} mode, and participates in no
213 C{replication} relations, default wal settings will be present.
214 """
215 config_outfile = self.makeFile()
216 run = self.mocker.replace(hooks.run)
217 run("sysctl -p %s" % hooks.postgresql_sysctl)
218 self.mocker.result(True)
219 self.mocker.replay()
220 hooks.create_postgresql_config(config_outfile)
221 self.assertFileContains(
222 config_outfile,
223 ["wal_buffers = -1", "wal_level = minimal", "max_wal_senders = 0",
224 "wal_keep_segments = 0"])
225
226 def test_create_postgresql_config_wal_with_replication(self):
227 """
228 When postgresql is in C{replicated} mode, and participates in a
229 C{replication} relation, C{hot_standby} will be set to C{on},
230 C{wal_level} will be enabled as C{hot_standby} and the
231 C{max_wall_senders} will match the count of replication relations.
232 The value of C{wal_keep_segments} will be the maximum of the configured
233 C{wal_keep_segments} and C{replicated_wal_keep_segments}.
234 """
235 self.addCleanup(
236 setattr, hooks.hookenv, "_relation_ids", {})
237 hooks.hookenv._relation_ids = {
238 "replication/0": "db-admin:5", "replication/1": "db-admin:6"}
239 config_outfile = self.makeFile()
240 run = self.mocker.replace(hooks.run)
241 run("sysctl -p %s" % hooks.postgresql_sysctl)
242 self.mocker.result(True)
243 self.mocker.replay()
244 hooks.create_postgresql_config(config_outfile)
245 self.assertFileContains(
246 config_outfile,
247 ["hot_standby = on", "wal_buffers = -1", "wal_level = hot_standby",
248 "max_wal_senders = 2", "wal_keep_segments = 5000"])
249
250 def test_create_postgresql_config_wal_with_replication_max_override(self):
251 """
252 When postgresql is in C{replicated} mode, and participates in a
253 C{replication} relation, C{hot_standby} will be set to C{on},
254 C{wal_level} will be enabled as C{hot_standby}. The written value for
255 C{max_wal_senders} will be the maximum of replication slave count and
256 the configuration value for C{max_wal_senders}.
257 The written value of C{wal_keep_segments} will be
258 the maximum of the configuration C{wal_keep_segments} and
259 C{replicated_wal_keep_segments}.
260 """
261 self.addCleanup(
262 setattr, hooks.hookenv, "_relation_ids", ())
263 hooks.hookenv._relation_ids = {
264 "replication/0": "db-admin:5", "replication/1": "db-admin:6"}
265 hooks.hookenv._config["max_wal_senders"] = "3"
266 hooks.hookenv._config["wal_keep_segments"] = 1000
267 hooks.hookenv._config["replicated_wal_keep_segments"] = 999
268 config_outfile = self.makeFile()
269 run = self.mocker.replace(hooks.run)
270 run("sysctl -p %s" % hooks.postgresql_sysctl)
271 self.mocker.result(True)
272 self.mocker.replay()
273 hooks.create_postgresql_config(config_outfile)
274 self.assertFileContains(
275 config_outfile,
276 ["hot_standby = on", "wal_buffers = -1", "wal_level = hot_standby",
277 "max_wal_senders = 3", "wal_keep_segments = 1000"])
278
279 def test_create_postgresql_config_performance_tune_auto_large_ram(self):
280 """
281 When configuration attribute C{performance_tune} is set to C{auto} and
282 total RAM on a system is > 1023MB. It will automatically calculate
283 values for the following attributes if these attributes were left as
284 default values:
285 - C{effective_cache_size} set to 75% of total RAM in MegaBytes
286 - C{shared_buffers} set to 25% of total RAM in MegaBytes
287 - C{kernel_shmmax} set to total RAM in bytes
288 - C{kernel_shmall} equal to kernel_shmmax in pages
289 """
290 config_outfile = self.makeFile()
291 run = self.mocker.replace(hooks.run)
292 run("sysctl -p %s" % hooks.postgresql_sysctl)
293 self.mocker.result(True)
294 self.mocker.replay()
295 hooks.create_postgresql_config(config_outfile)
296 self.assertFileContains(
297 config_outfile,
298 ["shared_buffers = 256MB", "effective_cache_size = 768MB"])
299 self.assertFileContains(
300 hooks.postgresql_sysctl,
301 ["kernel.shmall = 1025\nkernel.shmmax = 1073742848"])
302
303 def test_create_postgresql_config_performance_tune_auto_small_ram(self):
304 """
305 When configuration attribute C{performance_tune} is set to C{auto} and
306 total RAM on a system is <= 1023MB. It will automatically calculate
307 values for the following attributes if these attributes were left as
308 default values:
309 - C{effective_cache_size} set to 75% of total RAM in MegaBytes
310 - C{shared_buffers} set to 15% of total RAM in MegaBytes
311 - C{kernel_shmmax} set to total RAM in bytes
312 - C{kernel_shmall} equal to kernel_shmmax in pages
313 """
314 hooks._get_system_ram = lambda: 1023 # MB
315 config_outfile = self.makeFile()
316 run = self.mocker.replace(hooks.run)
317 run("sysctl -p %s" % hooks.postgresql_sysctl)
318 self.mocker.result(True)
319 self.mocker.replay()
320 hooks.create_postgresql_config(config_outfile)
321 self.assertFileContains(
322 config_outfile,
323 ["shared_buffers = 153MB", "effective_cache_size = 767MB"])
324 self.assertFileContains(
325 hooks.postgresql_sysctl,
326 ["kernel.shmall = 1024\nkernel.shmmax = 1072694272"])
327
328 def test_create_postgresql_config_performance_tune_auto_overridden(self):
329 """
330 When configuration attribute C{performance_tune} is set to C{auto} any
331 non-default values for the configuration parameters below will be used
332 instead of the automatically calculated values.
333 - C{effective_cache_size}
334 - C{shared_buffers}
335 - C{kernel_shmmax}
336 - C{kernel_shmall}
337 """
338 hooks.hookenv._config["effective_cache_size"] = "999MB"
339 hooks.hookenv._config["shared_buffers"] = "101MB"
340 hooks.hookenv._config["kernel_shmmax"] = 50000
341 hooks.hookenv._config["kernel_shmall"] = 500
342 hooks._get_system_ram = lambda: 1023 # MB
343 config_outfile = self.makeFile()
344 run = self.mocker.replace(hooks.run)
345 run("sysctl -p %s" % hooks.postgresql_sysctl)
346 self.mocker.result(True)
347 self.mocker.replay()
348 hooks.create_postgresql_config(config_outfile)
349 self.assertFileContains(
350 config_outfile,
351 ["shared_buffers = 101MB", "effective_cache_size = 999MB"])
352 self.assertFileContains(
353 hooks.postgresql_sysctl,
354 ["kernel.shmall = 1024\nkernel.shmmax = 1072694272"])
0355
=== added file 'revision'
--- revision 1970-01-01 00:00:00 +0000
+++ revision 2014-01-07 21:59:36 +0000
@@ -0,0 +1,1 @@
+2
=== modified file 'templates/postgresql.conf.tmpl'
--- templates/postgresql.conf.tmpl 2013-01-24 11:28:39 +0000
+++ templates/postgresql.conf.tmpl 2014-01-07 21:59:36 +0000
@@ -40,7 +40,7 @@
 {% if shared_buffers != "" -%}
 shared_buffers = {{shared_buffers}}
 {% endif -%}
-{% if temp_buffers!= "" -%}
+{% if temp_buffers != "" -%}
 temp_buffers = {{temp_buffers}}
 {% endif -%}
 {% if work_mem != "" -%}
