Merge lp:~dbuliga/charms/trusty/nrpe/nrpe into lp:charms/trusty/nrpe

Proposed by Denis Buliga
Status: Needs review
Proposed branch: lp:~dbuliga/charms/trusty/nrpe/nrpe
Merge into: lp:charms/trusty/nrpe
Diff against target: 2433 lines (+1349/-658)
17 files modified
hooks/centos.py (+29/-0)
hooks/charmhelpers/__init__.py (+10/-0)
hooks/charmhelpers/core/host.py (+81/-288)
hooks/charmhelpers/core/host_factory/__init__.py (+492/-0)
hooks/charmhelpers/core/host_factory/centos/__init__.py (+53/-0)
hooks/charmhelpers/core/host_factory/ubuntu/__init__.py (+47/-0)
hooks/charmhelpers/fetch/__init__.py (+66/-285)
hooks/charmhelpers/fetch/bzrurl.py (+23/-32)
hooks/charmhelpers/fetch/centos/__init__.py (+158/-0)
hooks/charmhelpers/fetch/giturl.py (+24/-25)
hooks/charmhelpers/fetch/ubuntu/__init__.py (+296/-0)
hooks/nrpe_utils.py (+10/-12)
hooks/services.py (+7/-6)
hooks/ubuntu.py (+27/-0)
templates/nrpe-centos.tmpl (+16/-0)
tests/11-monitors-configurations (+5/-5)
tests/13-monitors-config (+5/-5)
To merge this branch: bzr merge lp:~dbuliga/charms/trusty/nrpe/nrpe
Reviewer: Adam Israel (community)
Status: Needs Fixing
Review via email: mp+288615@code.launchpad.net

Description of the change

This branch introduces a factory that loads the appropriate functionality for the platform it runs on. It allows NRPE to be deployed as a subordinate charm both to charms deployed on CentOS and to charms deployed on Ubuntu.
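
For reference, the dispatch boils down to the pattern below. This is a condensed sketch of the get_platform() helper and the host_factory import added in the diff; the local variable names here are mine, not the branch's:

    import importlib
    import platform

    def get_platform():
        # platform.linux_distribution() returns e.g. ('Ubuntu', '14.04', 'trusty')
        distro = platform.linux_distribution()[0]
        if "Ubuntu" in distro:
            return "ubuntu"
        elif "CentOS" in distro:
            return "centos"

    # Load charmhelpers.core.host_factory.ubuntu or ...centos and use its
    # Host class, which knows the right service/package tooling for the OS.
    host_module = importlib.import_module(
        "charmhelpers.core.host_factory.%s" % get_platform())
    host = host_module.Host()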

A new parameter was added to the 'monitors' relation to report the platform of the subordinate charm. This lets Nagios, for example, set the icon and description for that charm based on the parameter.
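
The exact hook change lives in hooks/nrpe_utils.py and hooks/services.py in the diff below; as a rough, hypothetical sketch (the helper name and the 'platform' key name are assumptions, only the idea of sending the platform comes from the branch), publishing it could look like:

    from charmhelpers import get_platform
    from charmhelpers.core import hookenv

    def publish_platform():
        # Hypothetical helper: advertise this unit's platform on every
        # 'monitors' relation so Nagios can pick an icon and description.
        for rid in hookenv.relation_ids('monitors'):
            hookenv.relation_set(relation_id=rid,
                                 relation_settings={'platform': get_platform()})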

A separate template is needed for CentOS because the pid_file has to be owned by the nrpe user on CentOS instead of the nagios user used on Ubuntu.
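
hooks/centos.py in the diff exposes determine_packages(), restart_nrpe() and render_nrpe_template() (the latter renders nrpe-centos.tmpl to /etc/nagios/nrpe.cfg), and hooks/ubuntu.py presumably mirrors them. A plausible sketch of how the charm could select one of these modules, assuming the real wiring in hooks/nrpe_utils.py (not shown in full here) works along these lines:

    import importlib

    from charmhelpers import get_platform

    # Hypothetical glue: import hooks/ubuntu.py or hooks/centos.py by name
    # and delegate to its platform-specific functions.
    platform_impl = importlib.import_module(get_platform())

    packages = platform_impl.determine_packages()
    platform_impl.render_nrpe_template()
    platform_impl.restart_nrpe('nrpe')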

lp:~dbuliga/charms/trusty/nrpe/nrpe updated
41. By David Ames

[gnuoy, r=thedac] Add support for multiple charms to be related to nrpe via the nrpe, juju-info or local-monitors relation. This is useful when joining an additional local subordinate to this charm. See README changes.

42. By Denis Buliga

Updated nrpe to work on CentOS

Revision history for this message
Adam Israel (aisrael) wrote:

Hi Denis,

I've had a chance to review your merge proposal. I've run into a few errors running the amulet tests related to missing config files. Could you check the logs at the pastebin link below and fix the tests? Once they pass, I'll be happy to re-review and get this promulgated.

http://pastebin.ubuntu.com/17298099/

review: Needs Fixing
Revision history for this message
Antonio Rosales (arosales) wrote:

@Denis,

Thanks for your contribution.

Given that this is a CentOS update, it may actually be better as a separate CentOS charm. Specifically, the metadata would state that it is a CentOS charm, so Juju can deploy it as such and it can be discoverable in the Charm Store.

It looks like there is some feedback on the tests, but it would be great to see this in a stand-alone CentOS charm.

-thanks,
Antonio

Unmerged revisions

42. By Denis Buliga

Updated nrpe to work on CentOS

Preview Diff

1=== added file 'hooks/centos.py'
2--- hooks/centos.py 1970-01-01 00:00:00 +0000
3+++ hooks/centos.py 2016-04-29 11:54:41 +0000
4@@ -0,0 +1,29 @@
5+from charmhelpers.core import host
6+from charmhelpers.core import hookenv
7+from charmhelpers.core.services import helpers
8+
9+
10+def determine_packages():
11+ """ List of packages this charm needs installed """
12+ pkgs = [
13+ 'epel-release',
14+ 'nagios',
15+ 'nagios-plugins-nrpe',
16+ 'nagios-plugins-all',
17+ 'nrpe'
18+ ]
19+ if hookenv.config('export_nagios_definitions'):
20+ pkgs.append('rsync')
21+ return pkgs
22+
23+
24+def restart_nrpe(service_name):
25+ """ Restart nrpe """
26+ host.service_restart('nrpe')
27+
28+
29+def render_nrpe_template():
30+ return helpers.render_template(
31+ source='nrpe-centos.tmpl',
32+ target='/etc/nagios/nrpe.cfg'
33+ )
34
35=== modified file 'hooks/charmhelpers/__init__.py' (properties changed: -x to +x)
36--- hooks/charmhelpers/__init__.py 2015-03-23 09:45:10 +0000
37+++ hooks/charmhelpers/__init__.py 2016-04-29 11:54:41 +0000
38@@ -18,6 +18,7 @@
39 # only standard libraries.
40 import subprocess
41 import sys
42+import platform
43
44 try:
45 import six # flake8: noqa
46@@ -36,3 +37,12 @@
47 else:
48 subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
49 import yaml # flake8: noqa
50+
51+
52+def get_platform():
53+ tuple_platform = platform.linux_distribution()
54+ current_platform = tuple_platform[0]
55+ if "Ubuntu" in current_platform:
56+ return "ubuntu"
57+ elif "CentOS" in current_platform:
58+ return "centos"
59
60=== modified file 'hooks/charmhelpers/core/__init__.py' (properties changed: -x to +x)
61=== modified file 'hooks/charmhelpers/core/decorators.py' (properties changed: -x to +x)
62=== modified file 'hooks/charmhelpers/core/fstab.py' (properties changed: -x to +x)
63=== modified file 'hooks/charmhelpers/core/hookenv.py' (properties changed: -x to +x)
64=== modified file 'hooks/charmhelpers/core/host.py' (properties changed: -x to +x)
65--- hooks/charmhelpers/core/host.py 2015-03-23 09:45:10 +0000
66+++ hooks/charmhelpers/core/host.py 2016-04-29 11:54:41 +0000
67@@ -21,236 +21,123 @@
68 # Nick Moffitt <nick.moffitt@canonical.com>
69 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
70
71-import os
72-import re
73-import pwd
74-import grp
75-import random
76-import string
77-import subprocess
78-import hashlib
79 from contextlib import contextmanager
80-from collections import OrderedDict
81-
82-import six
83-
84-from .hookenv import log
85-from .fstab import Fstab
86+from charmhelpers.core.host_factory import Host
87+
88+host = Host()
89
90
91 def service_start(service_name):
92 """Start a system service"""
93- return service('start', service_name)
94+ return host.service_start(service_name)
95
96
97 def service_stop(service_name):
98 """Stop a system service"""
99- return service('stop', service_name)
100+ return host.service_stop(service_name)
101
102
103 def service_restart(service_name):
104 """Restart a system service"""
105- return service('restart', service_name)
106+ return host.service_restart(service_name)
107
108
109 def service_reload(service_name, restart_on_failure=False):
110 """Reload a system service, optionally falling back to restart if
111 reload fails"""
112- service_result = service('reload', service_name)
113- if not service_result and restart_on_failure:
114- service_result = service('restart', service_name)
115- return service_result
116+ return host.service_reload(service_name, restart_on_failure)
117
118
119 def service(action, service_name):
120 """Control a system service"""
121- cmd = ['service', service_name, action]
122- return subprocess.call(cmd) == 0
123+ return host.service(action, service_name)
124
125
126 def service_running(service):
127- """Determine whether a system service is running"""
128- try:
129- output = subprocess.check_output(
130- ['service', service, 'status'],
131- stderr=subprocess.STDOUT).decode('UTF-8')
132- except subprocess.CalledProcessError:
133- return False
134- else:
135- if ("start/running" in output or "is running" in output):
136- return True
137- else:
138- return False
139+ return host.service_running(service)
140
141
142 def service_available(service_name):
143 """Determine whether a system service is available"""
144- try:
145- subprocess.check_output(
146- ['service', service_name, 'status'],
147- stderr=subprocess.STDOUT).decode('UTF-8')
148- except subprocess.CalledProcessError as e:
149- return 'unrecognized service' not in e.output
150- else:
151- return True
152-
153-
154-def adduser(username, password=None, shell='/bin/bash', system_user=False):
155- """Add a user to the system"""
156- try:
157- user_info = pwd.getpwnam(username)
158- log('user {0} already exists!'.format(username))
159- except KeyError:
160- log('creating user {0}'.format(username))
161- cmd = ['useradd']
162- if system_user or password is None:
163- cmd.append('--system')
164- else:
165- cmd.extend([
166- '--create-home',
167- '--shell', shell,
168- '--password', password,
169- ])
170- cmd.append(username)
171- subprocess.check_call(cmd)
172- user_info = pwd.getpwnam(username)
173- return user_info
174+ return host.service_available(service_name)
175+
176+
177+def adduser(username, password=None, shell='/bin/bash', system_user=False,
178+ primary_group=None, secondary_groups=None):
179+ """
180+ Add a user to the system.
181+
182+ Will log but otherwise succeed if the user already exists.
183+
184+ :param str username: Username to create
185+ :param str password: Password for user; if ``None``, create a system user
186+ :param str shell: The default shell for the user
187+ :param bool system_user: Whether to create a login or system user
188+ :param str primary_group: Primary group for user; defaults to their username
189+ :param list secondary_groups: Optional list of additional groups
190+
191+ :returns: The password database entry struct, as returned by `pwd.getpwnam`
192+ """
193+ return host.adduser(username, password, shell, system_user,
194+ primary_group, secondary_groups)
195+
196+
197+def user_exists(username):
198+ """Check if a user exists"""
199+ return host.user_exists(username)
200
201
202 def add_group(group_name, system_group=False):
203- """Add a group to the system"""
204- try:
205- group_info = grp.getgrnam(group_name)
206- log('group {0} already exists!'.format(group_name))
207- except KeyError:
208- log('creating group {0}'.format(group_name))
209- cmd = ['addgroup']
210- if system_group:
211- cmd.append('--system')
212- else:
213- cmd.extend([
214- '--group',
215- ])
216- cmd.append(group_name)
217- subprocess.check_call(cmd)
218- group_info = grp.getgrnam(group_name)
219- return group_info
220+ return host.add_group(group_name, system_group)
221
222
223 def add_user_to_group(username, group):
224- """Add a user to a group"""
225- cmd = [
226- 'gpasswd', '-a',
227- username,
228- group
229- ]
230- log("Adding user {} to group {}".format(username, group))
231- subprocess.check_call(cmd)
232+ host.add_user_to_group(username, group)
233
234
235 def rsync(from_path, to_path, flags='-r', options=None):
236 """Replicate the contents of a path"""
237- options = options or ['--delete', '--executability']
238- cmd = ['/usr/bin/rsync', flags]
239- cmd.extend(options)
240- cmd.append(from_path)
241- cmd.append(to_path)
242- log(" ".join(cmd))
243- return subprocess.check_output(cmd).decode('UTF-8').strip()
244+ return host.rsync(from_path, to_path, flags, options)
245
246
247 def symlink(source, destination):
248 """Create a symbolic link"""
249- log("Symlinking {} as {}".format(source, destination))
250- cmd = [
251- 'ln',
252- '-sf',
253- source,
254- destination,
255- ]
256- subprocess.check_call(cmd)
257+ host.symlink(source, destination)
258
259
260 def mkdir(path, owner='root', group='root', perms=0o555, force=False):
261 """Create a directory"""
262- log("Making dir {} {}:{} {:o}".format(path, owner, group,
263- perms))
264- uid = pwd.getpwnam(owner).pw_uid
265- gid = grp.getgrnam(group).gr_gid
266- realpath = os.path.abspath(path)
267- path_exists = os.path.exists(realpath)
268- if path_exists and force:
269- if not os.path.isdir(realpath):
270- log("Removing non-directory file {} prior to mkdir()".format(path))
271- os.unlink(realpath)
272- os.makedirs(realpath, perms)
273- elif not path_exists:
274- os.makedirs(realpath, perms)
275- os.chown(realpath, uid, gid)
276- os.chmod(realpath, perms)
277+ host.mkdir(path, owner, group, perms, force)
278
279
280 def write_file(path, content, owner='root', group='root', perms=0o444):
281 """Create or overwrite a file with the contents of a byte string."""
282- log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
283- uid = pwd.getpwnam(owner).pw_uid
284- gid = grp.getgrnam(group).gr_gid
285- with open(path, 'wb') as target:
286- os.fchown(target.fileno(), uid, gid)
287- os.fchmod(target.fileno(), perms)
288- target.write(content)
289+ host.write_file(path, content, owner, group, perms)
290
291
292 def fstab_remove(mp):
293- """Remove the given mountpoint entry from /etc/fstab
294- """
295- return Fstab.remove_by_mountpoint(mp)
296+ """Remove the given mountpoint entry from /etc/fstab"""
297+ return host.fstab_remove(mp)
298
299
300 def fstab_add(dev, mp, fs, options=None):
301- """Adds the given device entry to the /etc/fstab file
302- """
303- return Fstab.add(dev, mp, fs, options=options)
304+ """Adds the given device entry to the /etc/fstab file"""
305+ return host.fstab_add(dev, mp, fs, options)
306
307
308 def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
309 """Mount a filesystem at a particular mountpoint"""
310- cmd_args = ['mount']
311- if options is not None:
312- cmd_args.extend(['-o', options])
313- cmd_args.extend([device, mountpoint])
314- try:
315- subprocess.check_output(cmd_args)
316- except subprocess.CalledProcessError as e:
317- log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
318- return False
319-
320- if persist:
321- return fstab_add(device, mountpoint, filesystem, options=options)
322- return True
323+ return host.mount(device, mountpoint, options, persist, filesystem)
324
325
326 def umount(mountpoint, persist=False):
327 """Unmount a filesystem"""
328- cmd_args = ['umount', mountpoint]
329- try:
330- subprocess.check_output(cmd_args)
331- except subprocess.CalledProcessError as e:
332- log('Error unmounting {}\n{}'.format(mountpoint, e.output))
333- return False
334-
335- if persist:
336- return fstab_remove(mountpoint)
337- return True
338+ return host.umount(mountpoint, persist)
339
340
341 def mounts():
342 """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
343- with open('/proc/mounts') as f:
344- # [['/mount/point','/dev/path'],[...]]
345- system_mounts = [m[1::-1] for m in [l.strip().split()
346- for l in f.readlines()]]
347- return system_mounts
348+ return host.mounts()
349
350
351 def file_hash(path, hash_type='md5'):
352@@ -260,13 +147,7 @@
353 :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
354 such as md5, sha1, sha256, sha512, etc.
355 """
356- if os.path.exists(path):
357- h = getattr(hashlib, hash_type)()
358- with open(path, 'rb') as source:
359- h.update(source.read())
360- return h.hexdigest()
361- else:
362- return None
363+ return host.file_hash(path, hash_type)
364
365
366 def check_hash(path, checksum, hash_type='md5'):
367@@ -280,9 +161,7 @@
368 :raises ChecksumError: If the file fails the checksum
369
370 """
371- actual_checksum = file_hash(path, hash_type)
372- if checksum != actual_checksum:
373- raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
374+ host.check_hash(path, checksum, hash_type)
375
376
377 class ChecksumError(ValueError):
378@@ -296,154 +175,68 @@
379
380 @restart_on_change({
381 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
382+ '/etc/apache/sites-enabled/*': [ 'apache2' ]
383 })
384- def ceph_client_changed():
385+ def config_changed():
386 pass # your code here
387
388 In this example, the cinder-api and cinder-volume services
389 would be restarted if /etc/ceph/ceph.conf is changed by the
390- ceph_client_changed function.
391+ ceph_client_changed function. The apache2 service would be
392+ restarted if any file matching the pattern got changed, created
393+ or removed. Standard wildcards are supported, see documentation
394+ for the 'glob' module for more information.
395 """
396- def wrap(f):
397- def wrapped_f(*args, **kwargs):
398- checksums = {}
399- for path in restart_map:
400- checksums[path] = file_hash(path)
401- f(*args, **kwargs)
402- restarts = []
403- for path in restart_map:
404- if checksums[path] != file_hash(path):
405- restarts += restart_map[path]
406- services_list = list(OrderedDict.fromkeys(restarts))
407- if not stopstart:
408- for service_name in services_list:
409- service('restart', service_name)
410- else:
411- for action in ['stop', 'start']:
412- for service_name in services_list:
413- service(action, service_name)
414- return wrapped_f
415- return wrap
416+ return host.restart_on_change(restart_map, stopstart)
417
418
419 def lsb_release():
420- """Return /etc/lsb-release in a dict"""
421- d = {}
422- with open('/etc/lsb-release', 'r') as lsb:
423- for l in lsb:
424- k, v = l.split('=')
425- d[k.strip()] = v.strip()
426- return d
427+ """Return /etc/os-release in a dict"""
428+ return host.lsb_release()
429
430
431 def pwgen(length=None):
432 """Generate a random pasword."""
433- if length is None:
434- # A random length is ok to use a weak PRNG
435- length = random.choice(range(35, 45))
436- alphanumeric_chars = [
437- l for l in (string.ascii_letters + string.digits)
438- if l not in 'l0QD1vAEIOUaeiou']
439- # Use a crypto-friendly PRNG (e.g. /dev/urandom) for making the
440- # actual password
441- random_generator = random.SystemRandom()
442- random_chars = [
443- random_generator.choice(alphanumeric_chars) for _ in range(length)]
444- return(''.join(random_chars))
445-
446-
447-def list_nics(nic_type):
448+ return host.pwgen(length)
449+
450+
451+def list_nics(nic_type=None):
452 '''Return a list of nics of given type(s)'''
453- if isinstance(nic_type, six.string_types):
454- int_types = [nic_type]
455- else:
456- int_types = nic_type
457- interfaces = []
458- for int_type in int_types:
459- cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
460- ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
461- ip_output = (line for line in ip_output if line)
462- for line in ip_output:
463- if line.split()[1].startswith(int_type):
464- matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line)
465- if matched:
466- interface = matched.groups()[0]
467- else:
468- interface = line.split()[1].replace(":", "")
469- interfaces.append(interface)
470-
471- return interfaces
472+ return host.list_nics(nic_type)
473
474
475 def set_nic_mtu(nic, mtu):
476 '''Set MTU on a network interface'''
477- cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
478- subprocess.check_call(cmd)
479+ host.set_nic_mtu(nic, mtu)
480
481
482 def get_nic_mtu(nic):
483- cmd = ['ip', 'addr', 'show', nic]
484- ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
485- mtu = ""
486- for line in ip_output:
487- words = line.split()
488- if 'mtu' in words:
489- mtu = words[words.index("mtu") + 1]
490- return mtu
491+ '''Get MTU of a network interface'''
492+ return host.get_nic_mtu(nic)
493
494
495 def get_nic_hwaddr(nic):
496- cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
497- ip_output = subprocess.check_output(cmd).decode('UTF-8')
498- hwaddr = ""
499- words = ip_output.split()
500- if 'link/ether' in words:
501- hwaddr = words[words.index('link/ether') + 1]
502- return hwaddr
503+ return host.get_nic_hwaddr(nic)
504
505
506 def cmp_pkgrevno(package, revno, pkgcache=None):
507- '''Compare supplied revno with the revno of the installed package
508-
509- * 1 => Installed revno is greater than supplied arg
510- * 0 => Installed revno is the same as supplied arg
511- * -1 => Installed revno is less than supplied arg
512-
513- This function imports apt_cache function from charmhelpers.fetch if
514- the pkgcache argument is None. Be sure to add charmhelpers.fetch if
515- you call this function, or pass an apt_pkg.Cache() instance.
516- '''
517- import apt_pkg
518- if not pkgcache:
519- from charmhelpers.fetch import apt_cache
520- pkgcache = apt_cache()
521- pkg = pkgcache[package]
522- return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
523+ return host.cmp_pkgrevno(package, revno, pkgcache)
524
525
526 @contextmanager
527 def chdir(d):
528- cur = os.getcwd()
529- try:
530- yield os.chdir(d)
531- finally:
532- os.chdir(cur)
533-
534-
535-def chownr(path, owner, group, follow_links=True):
536- uid = pwd.getpwnam(owner).pw_uid
537- gid = grp.getgrnam(group).gr_gid
538- if follow_links:
539- chown = os.chown
540- else:
541- chown = os.lchown
542-
543- for root, dirs, files in os.walk(path):
544- for name in dirs + files:
545- full = os.path.join(root, name)
546- broken_symlink = os.path.lexists(full) and not os.path.exists(full)
547- if not broken_symlink:
548- chown(full, uid, gid)
549+ host.chdir(d)
550+
551+
552+def chownr(path, owner, group, follow_links=True, chowntopdir=False):
553+ """
554+ Recursively change user and group ownership of files and directories
555+ in given path. Doesn't chown path itself by default, only its children.
556+
557+ :param bool follow_links: Also Chown links if True
558+ :param bool chowntopdir: Also chown path itself if True
559+ """
560+ host.chownr(path, owner, group, follow_links, chowntopdir)
561
562
563 def lchownr(path, owner, group):
564
565=== added directory 'hooks/charmhelpers/core/host_factory'
566=== added file 'hooks/charmhelpers/core/host_factory/__init__.py'
567--- hooks/charmhelpers/core/host_factory/__init__.py 1970-01-01 00:00:00 +0000
568+++ hooks/charmhelpers/core/host_factory/__init__.py 2016-04-29 11:54:41 +0000
569@@ -0,0 +1,492 @@
570+import os
571+import re
572+import pwd
573+import glob
574+import grp
575+import random
576+import string
577+import subprocess
578+import hashlib
579+import importlib
580+import six
581+
582+from contextlib import contextmanager
583+from collections import OrderedDict
584+from ..hookenv import log
585+from ..fstab import Fstab
586+from charmhelpers import get_platform
587+
588+SYSTEMD_SYSTEM = '/run/systemd/system'
589+
590+
591+class HostBase(object):
592+
593+ def service_start(self, service_name):
594+ """Start a system service"""
595+ return self.service('start', service_name)
596+
597+ def service_stop(self, service_name):
598+ """Stop a system service"""
599+ return self.service('stop', service_name)
600+
601+ def service_restart(self, service_name):
602+ """Restart a system service"""
603+ return self.service('restart', service_name)
604+
605+ def service_reload(self, service_name, restart_on_failure=False):
606+ """Reload a system service, optionally falling back to restart if
607+ reload fails"""
608+ service_result = self.service('reload', service_name)
609+ if not service_result and restart_on_failure:
610+ service_result = self.service('restart', service_name)
611+ return service_result
612+
613+ def service(self, action, service_name):
614+ """Control a system service"""
615+ if self.init_is_systemd():
616+ cmd = ['systemctl', action, service_name]
617+ else:
618+ cmd = ['service', service_name, action]
619+ return subprocess.call(cmd) == 0
620+
621+ def service_running(self, service_name):
622+ """Determine whether a system service is running"""
623+ if self.init_is_systemd():
624+ return self.service('is-active', service_name)
625+ else:
626+ try:
627+ output = subprocess.check_output(
628+ ['service', service_name, 'status'],
629+ stderr=subprocess.STDOUT).decode('UTF-8')
630+ except subprocess.CalledProcessError:
631+ return False
632+ else:
633+ if ("start/running" in output or "is running" in output):
634+ return True
635+ else:
636+ return False
637+
638+ def service_available(self, service_name):
639+ """Determine whether a system service is available"""
640+ if self.init_is_systemd():
641+ return self.service('is-enabled', service_name)
642+ try:
643+ subprocess.check_output(
644+ ['service', service_name, 'status'],
645+ stderr=subprocess.STDOUT).decode('UTF-8')
646+ except subprocess.CalledProcessError as e:
647+ return b'unrecognized service' not in e.output
648+ else:
649+ return True
650+
651+ def init_is_systemd(self):
652+ """Return True if the host system uses systemd, False otherwise."""
653+ return os.path.isdir(SYSTEMD_SYSTEM)
654+
655+ def adduser(self, username, password=None, shell='/bin/bash',
656+ system_user=False, primary_group=None, secondary_groups=None):
657+ """Add a user to the system.
658+
659+ Will log but otherwise succeed if the user already exists.
660+
661+ :param str username: Username to create
662+ :param str password: Password for user;
663+ if ``None``, create a system user
664+ :param str shell: The default shell for the user
665+ :param bool system_user: Whether to create a login or system user
666+ :param str primary_group: Primary group for user; defaults to username
667+ :param list secondary_groups: Optional list of additional groups
668+
669+ :returns: The password database entry struct,
670+ as returned by `pwd.getpwnam`
671+ """
672+ try:
673+ user_info = pwd.getpwnam(username)
674+ log('user {0} already exists!'.format(username))
675+ except KeyError:
676+ log('creating user {0}'.format(username))
677+ cmd = ['useradd']
678+ if system_user or password is None:
679+ cmd.append('--system')
680+ else:
681+ cmd.extend([
682+ '--create-home',
683+ '--shell', shell,
684+ '--password', password,
685+ ])
686+ if not primary_group:
687+ try:
688+ grp.getgrnam(username)
689+ primary_group = username # avoid "group exists" error
690+ except KeyError:
691+ pass
692+ if primary_group:
693+ cmd.extend(['-g', primary_group])
694+ if secondary_groups:
695+ cmd.extend(['-G', ','.join(secondary_groups)])
696+ cmd.append(username)
697+ subprocess.check_call(cmd)
698+ user_info = pwd.getpwnam(username)
699+ return user_info
700+
701+ def _add_group(self, group_name, system_group=False):
702+ raise NotImplementedError()
703+
704+ def add_group(self, group_name, system_group=False):
705+ try:
706+ group_info = grp.getgrnam(group_name)
707+ log('group {0} already exists!'.format(group_name))
708+ except KeyError:
709+ log('creating group {0}'.format(group_name))
710+ self._add_group(group_name, system_group)
711+ group_info = grp.getgrnam(group_name)
712+ return group_info
713+
714+ def add_user_to_group(self, username, group):
715+ """Add a user to a group"""
716+ cmd = ['gpasswd', '-a', username, group]
717+ log("Adding user {} to group {}".format(username, group))
718+ subprocess.check_call(cmd)
719+
720+ def rsync(self, from_path, to_path, flags='-r', options=None):
721+ """Replicate the contents of a path"""
722+ options = options or ['--delete', '--executability']
723+ cmd = ['/usr/bin/rsync', flags]
724+ cmd.extend(options)
725+ cmd.append(from_path)
726+ cmd.append(to_path)
727+ log(" ".join(cmd))
728+ return subprocess.check_output(cmd).decode('UTF-8').strip()
729+
730+ def symlink(self, source, destination):
731+ """Create a symbolic link"""
732+ log("Symlinking {} as {}".format(source, destination))
733+ cmd = [
734+ 'ln',
735+ '-sf',
736+ source,
737+ destination,
738+ ]
739+ subprocess.check_call(cmd)
740+
741+ def mkdir(self, path, owner='root', group='root',
742+ perms=0o555, force=False):
743+ """Create a directory"""
744+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
745+ perms))
746+ uid = pwd.getpwnam(owner).pw_uid
747+ gid = grp.getgrnam(group).gr_gid
748+ realpath = os.path.abspath(path)
749+ path_exists = os.path.exists(realpath)
750+ if path_exists and force:
751+ if not os.path.isdir(realpath):
752+ log("Removing non-directory file {}"
753+ " prior to mkdir()".format(path))
754+ os.unlink(realpath)
755+ os.makedirs(realpath, perms)
756+ elif not path_exists:
757+ os.makedirs(realpath, perms)
758+ os.chown(realpath, uid, gid)
759+ os.chmod(realpath, perms)
760+
761+ def write_file(self, path, content, owner='root',
762+ group='root', perms=0o444):
763+ """Create or overwrite a file with the contents of a byte string."""
764+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
765+ uid = pwd.getpwnam(owner).pw_uid
766+ gid = grp.getgrnam(group).gr_gid
767+ with open(path, 'wb') as target:
768+ os.fchown(target.fileno(), uid, gid)
769+ os.fchmod(target.fileno(), perms)
770+ target.write(content)
771+
772+ def fstab_remove(self, mp):
773+ """Remove the given mountpoint entry from /etc/fstab"""
774+ return Fstab.remove_by_mountpoint(mp)
775+
776+ def fstab_add(self, dev, mp, fs, options=None):
777+ """Adds the given device entry to the /etc/fstab file"""
778+ return Fstab.add(dev, mp, fs, options=options)
779+
780+ def mount(self, device, mountpoint, options=None,
781+ persist=False, filesystem="ext3"):
782+ """Mount a filesystem at a particular mountpoint"""
783+ cmd_args = ['mount']
784+ if options is not None:
785+ cmd_args.extend(['-o', options])
786+ cmd_args.extend([device, mountpoint])
787+ try:
788+ subprocess.check_output(cmd_args)
789+ except subprocess.CalledProcessError as e:
790+ log('Error mounting {} at {}\n{}'.format(
791+ device, mountpoint, e.output))
792+ return False
793+
794+ if persist:
795+ return self.fstab_add(device, mountpoint,
796+ filesystem, options=options)
797+ return True
798+
799+ def umount(self, mountpoint, persist=False):
800+ """Unmount a filesystem"""
801+ cmd_args = ['umount', mountpoint]
802+ try:
803+ subprocess.check_output(cmd_args)
804+ except subprocess.CalledProcessError as e:
805+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
806+ return False
807+
808+ if persist:
809+ return self.fstab_remove(mountpoint)
810+ return True
811+
812+ def mounts(self):
813+ """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
814+ with open('/proc/mounts') as f:
815+ # [['/mount/point','/dev/path'],[...]]
816+ system_mounts = [m[1::-1] for m in [l.strip().split()
817+ for l in f.readlines()]]
818+ return system_mounts
819+
820+ def file_hash(self, path, hash_type='md5'):
821+ """Generate a hash checksum of the contents of 'path' or None if not found.
822+
823+ :param str hash_type: Any hash alrgorithm supported by :mod:`hashlib`,
824+ such as md5, sha1, sha256, sha512, etc.
825+ """
826+ if os.path.exists(path):
827+ h = getattr(hashlib, hash_type)()
828+ with open(path, 'rb') as source:
829+ h.update(source.read())
830+ return h.hexdigest()
831+ else:
832+ return None
833+
834+ def check_hash(self, path, checksum, hash_type='md5'):
835+ """Validate a file using a cryptographic checksum.
836+
837+ :param str checksum: Value of the checksum used to validate the file.
838+ :param str hash_type: Hash algorithm used to generate `checksum`.
839+ Can be any hash alrgorithm supported by :mod:`hashlib`,
840+ such as md5, sha1, sha256, sha512, etc.
841+ :raises ChecksumError: If the file fails the checksum
842+
843+ """
844+ actual_checksum = self.file_hash(path, hash_type)
845+ if checksum != actual_checksum:
846+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
847+
848+ def restart_on_change(self, restart_map, stopstart=False):
849+ """Restart services based on configuration files changing
850+
851+ This function is used a decorator, for example::
852+
853+ @restart_on_change({
854+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
855+ '/etc/apache/sites-enabled/*': [ 'apache2' ]
856+ })
857+ def config_changed():
858+ pass # your code here
859+
860+ In this example, the cinder-api and cinder-volume services
861+ would be restarted if /etc/ceph/ceph.conf is changed by the
862+ ceph_client_changed function. The apache2 service would be
863+ restarted if any file matching the pattern got changed, created
864+ or removed. Standard wildcards are supported, see documentation
865+ for the 'glob' module for more information.
866+ """
867+ def wrap(f):
868+ def wrapped_f(*args, **kwargs):
869+ checksums = {path: self.path_hash(path)
870+ for path in restart_map}
871+ f(*args, **kwargs)
872+ restarts = []
873+ for path in restart_map:
874+ if self.path_hash(path) != checksums[path]:
875+ restarts += restart_map[path]
876+ services_list = list(OrderedDict.fromkeys(restarts))
877+ if not stopstart:
878+ for service_name in services_list:
879+ self.service('restart', service_name)
880+ else:
881+ for action in ['stop', 'start']:
882+ for service_name in services_list:
883+ self.service(action, service_name)
884+ return wrapped_f
885+ return wrap
886+
887+ def lsb_release(self):
888+ return self._lsb_release()
889+
890+ def _lsb_release(self):
891+ raise NotImplementedError()
892+
893+ def pwgen(self, length=None):
894+ """Generate a random pasword."""
895+ if length is None:
896+ # A random length is ok to use a weak PRNG
897+ length = random.choice(range(35, 45))
898+ alphanumeric_chars = [
899+ l for l in (string.ascii_letters + string.digits)
900+ if l not in 'l0QD1vAEIOUaeiou']
901+ # Use a crypto-friendly PRNG (e.g. /dev/urandom) for making the
902+ # actual password
903+ random_generator = random.SystemRandom()
904+ random_chars = [
905+ random_generator.choice(alphanumeric_chars) for _ in range(length)]
906+ return(''.join(random_chars))
907+
908+ def list_nics(self, nic_type=None):
909+ """Return a list of nics of given type(s)"""
910+ if isinstance(nic_type, six.string_types):
911+ int_types = [nic_type]
912+ else:
913+ int_types = nic_type
914+
915+ interfaces = []
916+ if nic_type:
917+ for int_type in int_types:
918+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
919+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
920+ ip_output = ip_output.split('\n')
921+ ip_output = (line for line in ip_output if line)
922+ for line in ip_output:
923+ if line.split()[1].startswith(int_type):
924+ matched = re.search('.*: (' + int_type +
925+ r'[0-9]+\.[0-9]+)@.*', line)
926+ if matched:
927+ iface = matched.groups()[0]
928+ else:
929+ iface = line.split()[1].replace(":", "")
930+
931+ if iface not in interfaces:
932+ interfaces.append(iface)
933+ else:
934+ cmd = ['ip', 'a']
935+ ip_output = subprocess.check_output(
936+ cmd).decode('UTF-8').split('\n')
937+ ip_output = (line.strip() for line in ip_output if line)
938+
939+ key = re.compile('^[0-9]+:\s+(.+):')
940+ for line in ip_output:
941+ matched = re.search(key, line)
942+ if matched:
943+ iface = matched.group(1)
944+ iface = iface.partition("@")[0]
945+ if iface not in interfaces:
946+ interfaces.append(iface)
947+ return interfaces
948+
949+ def set_nic_mtu(self, nic, mtu):
950+ """Set the Maximum Transmission Unit (MTU) on a network interface."""
951+ cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
952+ subprocess.check_call(cmd)
953+
954+ def get_nic_mtu(self, nic):
955+ """
956+ Return the Maximum Transmission Unit (MTU) for a network interface.
957+ """
958+ cmd = ['ip', 'addr', 'show', nic]
959+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
960+ mtu = ""
961+ for line in ip_output:
962+ words = line.split()
963+ if 'mtu' in words:
964+ mtu = words[words.index("mtu") + 1]
965+ return mtu
966+
967+ def get_nic_hwaddr(self, nic):
968+ """Return the Media Access Control (MAC) for a network interface."""
969+ cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
970+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
971+ hwaddr = ""
972+ words = ip_output.split()
973+ if 'link/ether' in words:
974+ hwaddr = words[words.index('link/ether') + 1]
975+ return hwaddr
976+
977+ def cmp_pkgrevno(self, package, revno, pkgcache=None):
978+ """Compare supplied revno with the revno of the installed package
979+
980+ * 1 => Installed revno is greater than supplied arg
981+ * 0 => Installed revno is the same as supplied arg
982+ * -1 => Installed revno is less than supplied arg
983+
984+ This function imports apt_cache function from charmhelpers.fetch if
985+ the pkgcache argument is None. Be sure to add charmhelpers.fetch if
986+ you call this function, or pass an apt_pkg.Cache() instance.
987+ """
988+ return self._cmp_pkgrevno(package, revno, pkgcache)
989+
990+ def _cmp_pkgrevno(self, package, revno, pkgcache=None):
991+ raise NotImplementedError()
992+
993+ @contextmanager
994+ def chdir(self, directory):
995+ """
996+ Change the current working directory to a different directory for a
997+ code block and return the previous directory after the block exits.
998+ Useful to run commands from a specificed directory.
999+
1000+ :param str directory: The directory path to change to for this context.
1001+ """
1002+ cur = os.getcwd()
1003+ try:
1004+ yield os.chdir(directory)
1005+ finally:
1006+ os.chdir(cur)
1007+
1008+ def chownr(self, path, owner, group, follow_links=True, chowntopdir=False):
1009+ """Recursively change user and group ownership of files and directories
1010+ in given path. Doesn't chown path itself by default, only its children.
1011+
1012+ :param str path: The string path to start changing ownership.
1013+ :param str owner: The owner string to use when looking up the uid.
1014+ :param str group: The group string to use when looking up the gid.
1015+ :param bool follow_links: Also Chown links if True
1016+ :param bool chowntopdir: Also chown path itself if True
1017+ """
1018+ uid = pwd.getpwnam(owner).pw_uid
1019+ gid = grp.getgrnam(group).gr_gid
1020+ if follow_links:
1021+ chown = os.chown
1022+ else:
1023+ chown = os.lchown
1024+
1025+ if chowntopdir:
1026+ broken_symlink = os.path.lexists(path) and not os.path.exists(path)
1027+ if not broken_symlink:
1028+ chown(path, uid, gid)
1029+ for root, dirs, files in os.walk(path):
1030+ for name in dirs + files:
1031+ full = os.path.join(root, name)
1032+ broken_symlink = os.path.lexists(
1033+ full
1034+ ) and not os.path.exists(full)
1035+ if not broken_symlink:
1036+ chown(full, uid, gid)
1037+
1038+ def lchownr(self, path, owner, group):
1039+ """
1040+ Recursively change user and group ownership of files and directories
1041+ in a given path, not following symbolic links. See the documentation
1042+ for 'os.lchown' for more information.
1043+
1044+ :param str path: The string path to start changing ownership.
1045+ :param str owner: The owner string to use when looking up the uid.
1046+ :param str group: The group string to use when looking up the gid.
1047+ """
1048+ self.chownr(path, owner, group, follow_links=False)
1049+
1050+
1051+class ChecksumError(ValueError):
1052+ """A class derived from Value error to indicate the checksum failed."""
1053+ pass
1054+
1055+
1056+module = "charmhelpers.core.host_factory.%s" % get_platform()
1057+host = importlib.import_module(module)
1058+
1059+
1060+class Host(host.Host):
1061+ pass
1062
1063=== added directory 'hooks/charmhelpers/core/host_factory/centos'
1064=== added file 'hooks/charmhelpers/core/host_factory/centos/__init__.py'
1065--- hooks/charmhelpers/core/host_factory/centos/__init__.py 1970-01-01 00:00:00 +0000
1066+++ hooks/charmhelpers/core/host_factory/centos/__init__.py 2016-04-29 11:54:41 +0000
1067@@ -0,0 +1,53 @@
1068+import subprocess
1069+import yum
1070+
1071+from .. import HostBase
1072+
1073+
1074+class Host(HostBase):
1075+ '''
1076+ Implementation of HostBase for CentOS
1077+ '''
1078+
1079+ def _add_group(self, group_name, system_group=False):
1080+ cmd = ['groupadd']
1081+ if system_group:
1082+ cmd.append('-r')
1083+ cmd.append(group_name)
1084+ subprocess.check_call(cmd)
1085+
1086+ def _lsb_release(self):
1087+ """Return /etc/os-release in a dict"""
1088+ d = {}
1089+ with open('/etc/os-release', 'r') as lsb:
1090+ for l in lsb:
1091+ if len(l.split('=')) != 2:
1092+ continue
1093+ k, v = l.split('=')
1094+ d[k.strip()] = v.strip()
1095+ return d
1096+
1097+ def _cmp_pkgrevno(self, package, revno, pkgcache=None):
1098+ """Compare supplied revno with the revno of the installed package
1099+
1100+ * 1 => Installed revno is greater than supplied arg
1101+ * 0 => Installed revno is the same as supplied arg
1102+ * -1 => Installed revno is less than supplied arg
1103+
1104+ This function imports apt_cache function from charmhelpers.fetch if
1105+ the pkgcache argument is None. Be sure to add charmhelpers.fetch if
1106+ you call this function, or pass an apt_pkg.Cache() instance.
1107+ """
1108+ if not pkgcache:
1109+ y = yum.YumBase()
1110+ packages = y.doPackageLists()
1111+ pck = {}
1112+ for i in packages["installed"]:
1113+ pck[i.Name] = i.version
1114+ pkgcache = pck
1115+ pkg = pkgcache[package]
1116+ if pkg > revno:
1117+ return 1
1118+ if pkg < revno:
1119+ return -1
1120+ return 0
1121
1122=== added directory 'hooks/charmhelpers/core/host_factory/ubuntu'
1123=== added file 'hooks/charmhelpers/core/host_factory/ubuntu/__init__.py'
1124--- hooks/charmhelpers/core/host_factory/ubuntu/__init__.py 1970-01-01 00:00:00 +0000
1125+++ hooks/charmhelpers/core/host_factory/ubuntu/__init__.py 2016-04-29 11:54:41 +0000
1126@@ -0,0 +1,47 @@
1127+import subprocess
1128+
1129+from .. import HostBase
1130+
1131+
1132+class Host(HostBase):
1133+ '''
1134+ Implementation of HostBase for Ubuntu
1135+ '''
1136+
1137+ def _add_group(self, group_name, system_group=False):
1138+ cmd = ['addgroup']
1139+ if system_group:
1140+ cmd.append('--system')
1141+ else:
1142+ cmd.extend([
1143+ '--group',
1144+ ])
1145+ cmd.append(group_name)
1146+ subprocess.check_call(cmd)
1147+
1148+ def _lsb_release(self):
1149+ """Return /etc/lsb-release in a dict"""
1150+ d = {}
1151+ with open('/etc/lsb-release', 'r') as lsb:
1152+ for l in lsb:
1153+ k, v = l.split('=')
1154+ d[k.strip()] = v.strip()
1155+ return d
1156+
1157+ def _cmp_pkgrevno(self, package, revno, pkgcache=None):
1158+ """Compare supplied revno with the revno of the installed package
1159+
1160+ * 1 => Installed revno is greater than supplied arg
1161+ * 0 => Installed revno is the same as supplied arg
1162+ * -1 => Installed revno is less than supplied arg
1163+
1164+ This function imports apt_cache function from charmhelpers.fetch if
1165+ the pkgcache argument is None. Be sure to add charmhelpers.fetch if
1166+ you call this function, or pass an apt_pkg.Cache() instance.
1167+ """
1168+ import apt_pkg
1169+ if not pkgcache:
1170+ from charmhelpers.fetch import apt_cache
1171+ pkgcache = apt_cache()
1172+ pkg = pkgcache[package]
1173+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1174
1175=== modified file 'hooks/charmhelpers/core/services/__init__.py' (properties changed: -x to +x)
1176=== modified file 'hooks/charmhelpers/core/services/base.py' (properties changed: -x to +x)
1177=== modified file 'hooks/charmhelpers/core/services/helpers.py' (properties changed: -x to +x)
1178=== modified file 'hooks/charmhelpers/core/strutils.py' (properties changed: -x to +x)
1179=== modified file 'hooks/charmhelpers/core/sysctl.py' (properties changed: -x to +x)
1180=== modified file 'hooks/charmhelpers/core/templating.py' (properties changed: -x to +x)
1181=== modified file 'hooks/charmhelpers/core/unitdata.py' (properties changed: -x to +x)
1182=== modified file 'hooks/charmhelpers/fetch/__init__.py' (properties changed: -x to +x)
1183--- hooks/charmhelpers/fetch/__init__.py 2015-03-23 09:45:10 +0000
1184+++ hooks/charmhelpers/fetch/__init__.py 2016-04-29 11:54:41 +0000
1185@@ -14,84 +14,21 @@
1186 # You should have received a copy of the GNU Lesser General Public License
1187 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1188
1189+import six
1190 import importlib
1191-from tempfile import NamedTemporaryFile
1192-import time
1193+
1194 from yaml import safe_load
1195-from charmhelpers.core.host import (
1196- lsb_release
1197-)
1198-import subprocess
1199-from charmhelpers.core.hookenv import (
1200+from charmhelpers import get_platform
1201+from charmhelpers.core.hookenv import(
1202 config,
1203- log,
1204+ log
1205 )
1206-import os
1207-
1208-import six
1209 if six.PY3:
1210 from urllib.parse import urlparse, urlunparse
1211 else:
1212 from urlparse import urlparse, urlunparse
1213
1214
1215-CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1216-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1217-"""
1218-PROPOSED_POCKET = """# Proposed
1219-deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1220-"""
1221-CLOUD_ARCHIVE_POCKETS = {
1222- # Folsom
1223- 'folsom': 'precise-updates/folsom',
1224- 'precise-folsom': 'precise-updates/folsom',
1225- 'precise-folsom/updates': 'precise-updates/folsom',
1226- 'precise-updates/folsom': 'precise-updates/folsom',
1227- 'folsom/proposed': 'precise-proposed/folsom',
1228- 'precise-folsom/proposed': 'precise-proposed/folsom',
1229- 'precise-proposed/folsom': 'precise-proposed/folsom',
1230- # Grizzly
1231- 'grizzly': 'precise-updates/grizzly',
1232- 'precise-grizzly': 'precise-updates/grizzly',
1233- 'precise-grizzly/updates': 'precise-updates/grizzly',
1234- 'precise-updates/grizzly': 'precise-updates/grizzly',
1235- 'grizzly/proposed': 'precise-proposed/grizzly',
1236- 'precise-grizzly/proposed': 'precise-proposed/grizzly',
1237- 'precise-proposed/grizzly': 'precise-proposed/grizzly',
1238- # Havana
1239- 'havana': 'precise-updates/havana',
1240- 'precise-havana': 'precise-updates/havana',
1241- 'precise-havana/updates': 'precise-updates/havana',
1242- 'precise-updates/havana': 'precise-updates/havana',
1243- 'havana/proposed': 'precise-proposed/havana',
1244- 'precise-havana/proposed': 'precise-proposed/havana',
1245- 'precise-proposed/havana': 'precise-proposed/havana',
1246- # Icehouse
1247- 'icehouse': 'precise-updates/icehouse',
1248- 'precise-icehouse': 'precise-updates/icehouse',
1249- 'precise-icehouse/updates': 'precise-updates/icehouse',
1250- 'precise-updates/icehouse': 'precise-updates/icehouse',
1251- 'icehouse/proposed': 'precise-proposed/icehouse',
1252- 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1253- 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1254- # Juno
1255- 'juno': 'trusty-updates/juno',
1256- 'trusty-juno': 'trusty-updates/juno',
1257- 'trusty-juno/updates': 'trusty-updates/juno',
1258- 'trusty-updates/juno': 'trusty-updates/juno',
1259- 'juno/proposed': 'trusty-proposed/juno',
1260- 'trusty-juno/proposed': 'trusty-proposed/juno',
1261- 'trusty-proposed/juno': 'trusty-proposed/juno',
1262- # Kilo
1263- 'kilo': 'trusty-updates/kilo',
1264- 'trusty-kilo': 'trusty-updates/kilo',
1265- 'trusty-kilo/updates': 'trusty-updates/kilo',
1266- 'trusty-updates/kilo': 'trusty-updates/kilo',
1267- 'kilo/proposed': 'trusty-proposed/kilo',
1268- 'trusty-kilo/proposed': 'trusty-proposed/kilo',
1269- 'trusty-proposed/kilo': 'trusty-proposed/kilo',
1270-}
1271-
1272 # The order of this list is very important. Handlers should be listed in from
1273 # least- to most-specific URL matching.
1274 FETCH_HANDLERS = (
1275@@ -100,10 +37,6 @@
1276 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
1277 )
1278
1279-APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
1280-APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
1281-APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
1282-
1283
1284 class SourceConfigError(Exception):
1285 pass
1286@@ -141,162 +74,38 @@
1287 return urlunparse(parts)
1288
1289
1290+module = "charmhelpers.fetch.%s" % get_platform()
1291+fetch = importlib.import_module(module)
1292+
1293+
1294 def filter_installed_packages(packages):
1295 """Returns a list of packages that require installation"""
1296- cache = apt_cache()
1297- _pkgs = []
1298- for package in packages:
1299- try:
1300- p = cache[package]
1301- p.current_ver or _pkgs.append(package)
1302- except KeyError:
1303- log('Package {} has no installation candidate.'.format(package),
1304- level='WARNING')
1305- _pkgs.append(package)
1306- return _pkgs
1307-
1308-
1309-def apt_cache(in_memory=True):
1310- """Build and return an apt cache"""
1311- import apt_pkg
1312- apt_pkg.init()
1313- if in_memory:
1314- apt_pkg.config.set("Dir::Cache::pkgcache", "")
1315- apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
1316- return apt_pkg.Cache()
1317-
1318-
1319-def apt_install(packages, options=None, fatal=False):
1320+ return fetch.filter_installed_packages(packages)
1321+
1322+
1323+def install(packages, options=None, fatal=False):
1324 """Install one or more packages"""
1325- if options is None:
1326- options = ['--option=Dpkg::Options::=--force-confold']
1327-
1328- cmd = ['apt-get', '--assume-yes']
1329- cmd.extend(options)
1330- cmd.append('install')
1331- if isinstance(packages, six.string_types):
1332- cmd.append(packages)
1333- else:
1334- cmd.extend(packages)
1335- log("Installing {} with options: {}".format(packages,
1336- options))
1337- _run_apt_command(cmd, fatal)
1338-
1339-
1340-def apt_upgrade(options=None, fatal=False, dist=False):
1341+ fetch.install(packages, options, fatal)
1342+
1343+
1344+def upgrade(options=None, fatal=False, dist=False):
1345 """Upgrade all packages"""
1346- if options is None:
1347- options = ['--option=Dpkg::Options::=--force-confold']
1348-
1349- cmd = ['apt-get', '--assume-yes']
1350- cmd.extend(options)
1351- if dist:
1352- cmd.append('dist-upgrade')
1353- else:
1354- cmd.append('upgrade')
1355- log("Upgrading with options: {}".format(options))
1356- _run_apt_command(cmd, fatal)
1357-
1358-
1359-def apt_update(fatal=False):
1360+ fetch.upgrade(options, fatal, dist)
1361+
1362+
1363+def update(fatal=False):
1364 """Update local apt cache"""
1365- cmd = ['apt-get', 'update']
1366- _run_apt_command(cmd, fatal)
1367-
1368-
1369-def apt_purge(packages, fatal=False):
1370+ fetch.update(fatal)
1371+
1372+
1373+def purge(packages, fatal=False):
1374 """Purge one or more packages"""
1375- cmd = ['apt-get', '--assume-yes', 'purge']
1376- if isinstance(packages, six.string_types):
1377- cmd.append(packages)
1378- else:
1379- cmd.extend(packages)
1380- log("Purging {}".format(packages))
1381- _run_apt_command(cmd, fatal)
1382-
1383-
1384-def apt_hold(packages, fatal=False):
1385- """Hold one or more packages"""
1386- cmd = ['apt-mark', 'hold']
1387- if isinstance(packages, six.string_types):
1388- cmd.append(packages)
1389- else:
1390- cmd.extend(packages)
1391- log("Holding {}".format(packages))
1392-
1393- if fatal:
1394- subprocess.check_call(cmd)
1395- else:
1396- subprocess.call(cmd)
1397-
1398-
1399+ fetch.purge(packages, fatal)
1400+
1401+
1402+# PPA only works with .DEB packed and not with .RPM
1403 def add_source(source, key=None):
1404- """Add a package source to this system.
1405-
1406- @param source: a URL or sources.list entry, as supported by
1407- add-apt-repository(1). Examples::
1408-
1409- ppa:charmers/example
1410- deb https://stub:key@private.example.com/ubuntu trusty main
1411-
1412- In addition:
1413- 'proposed:' may be used to enable the standard 'proposed'
1414- pocket for the release.
1415- 'cloud:' may be used to activate official cloud archive pockets,
1416- such as 'cloud:icehouse'
1417- 'distro' may be used as a noop
1418-
1419- @param key: A key to be added to the system's APT keyring and used
1420- to verify the signatures on packages. Ideally, this should be an
1421- ASCII format GPG public key including the block headers. A GPG key
1422- id may also be used, but be aware that only insecure protocols are
1423- available to retrieve the actual public key from a public keyserver
1424- placing your Juju environment at risk. ppa and cloud archive keys
1425- are securely added automtically, so sould not be provided.
1426- """
1427- if source is None:
1428- log('Source is not present. Skipping')
1429- return
1430-
1431- if (source.startswith('ppa:') or
1432- source.startswith('http') or
1433- source.startswith('deb ') or
1434- source.startswith('cloud-archive:')):
1435- subprocess.check_call(['add-apt-repository', '--yes', source])
1436- elif source.startswith('cloud:'):
1437- apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
1438- fatal=True)
1439- pocket = source.split(':')[-1]
1440- if pocket not in CLOUD_ARCHIVE_POCKETS:
1441- raise SourceConfigError(
1442- 'Unsupported cloud: source option %s' %
1443- pocket)
1444- actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
1445- with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
1446- apt.write(CLOUD_ARCHIVE.format(actual_pocket))
1447- elif source == 'proposed':
1448- release = lsb_release()['DISTRIB_CODENAME']
1449- with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
1450- apt.write(PROPOSED_POCKET.format(release))
1451- elif source == 'distro':
1452- pass
1453- else:
1454- log("Unknown source: {!r}".format(source))
1455-
1456- if key:
1457- if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
1458- with NamedTemporaryFile('w+') as key_file:
1459- key_file.write(key)
1460- key_file.flush()
1461- key_file.seek(0)
1462- subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
1463- else:
1464- # Note that hkp: is in no way a secure protocol. Using a
1465- # GPG key id is pointless from a security POV unless you
1466- # absolutely trust your network and DNS.
1467- subprocess.check_call(['apt-key', 'adv', '--keyserver',
1468- 'hkp://keyserver.ubuntu.com:80', '--recv',
1469- key])
1470+ fetch.add_source(source, key)
1471
1472
1473 def configure_sources(update=False,
1474@@ -338,7 +147,32 @@
1475 for source, key in zip(sources, keys):
1476 add_source(source, key)
1477 if update:
1478- apt_update(fatal=True)
1479+ update(fatal=True)
1480+
1481+
1482+def install_from_config(config_var_name):
1483+ charm_config = config()
1484+ source = charm_config[config_var_name]
1485+ return install_remote(source)
1486+
1487+
1488+def plugins(fetch_handlers=None):
1489+ if not fetch_handlers:
1490+ fetch_handlers = FETCH_HANDLERS
1491+ plugin_list = []
1492+ for handler_name in fetch_handlers:
1493+ package, classname = handler_name.rsplit('.', 1)
1494+ try:
1495+ handler_class = getattr(
1496+ importlib.import_module(package),
1497+ classname)
1498+ plugin_list.append(handler_class())
1499+ except NotImplementedError:
1500+ # Skip missing plugins so that they can be ommitted from
1501+ # installation if desired
1502+ log("FetchHandler {} not found, skipping plugin".format(
1503+ handler_name))
1504+ return plugin_list
1505
1506
1507 def install_remote(source, *args, **kwargs):
1508@@ -370,70 +204,17 @@
1509 for handler in handlers:
1510 try:
1511 installed_to = handler.install(source, *args, **kwargs)
1512- except UnhandledSource:
1513- pass
1514+ except UnhandledSource as e:
1515+ log('Install source attempt unsuccessful: {}'.format(e),
1516+ level='WARNING')
1517 if not installed_to:
1518 raise UnhandledSource("No handler found for source {}".format(source))
1519 return installed_to
1520
1521-
1522-def install_from_config(config_var_name):
1523- charm_config = config()
1524- source = charm_config[config_var_name]
1525- return install_remote(source)
1526-
1527-
1528-def plugins(fetch_handlers=None):
1529- if not fetch_handlers:
1530- fetch_handlers = FETCH_HANDLERS
1531- plugin_list = []
1532- for handler_name in fetch_handlers:
1533- package, classname = handler_name.rsplit('.', 1)
1534- try:
1535- handler_class = getattr(
1536- importlib.import_module(package),
1537- classname)
1538- plugin_list.append(handler_class())
1539- except (ImportError, AttributeError):
1540- # Skip missing plugins so that they can be ommitted from
1541- # installation if desired
1542- log("FetchHandler {} not found, skipping plugin".format(
1543- handler_name))
1544- return plugin_list
1545-
1546-
1547-def _run_apt_command(cmd, fatal=False):
1548- """
1549- Run an APT command, checking output and retrying if the fatal flag is set
1550- to True.
1551-
1552- :param: cmd: str: The apt command to run.
1553- :param: fatal: bool: Whether the command's output should be checked and
1554- retried.
1555- """
1556- env = os.environ.copy()
1557-
1558- if 'DEBIAN_FRONTEND' not in env:
1559- env['DEBIAN_FRONTEND'] = 'noninteractive'
1560-
1561- if fatal:
1562- retry_count = 0
1563- result = None
1564-
1565- # If the command is considered "fatal", we need to retry if the apt
1566- # lock was not acquired.
1567-
1568- while result is None or result == APT_NO_LOCK:
1569- try:
1570- result = subprocess.check_call(cmd, env=env)
1571- except subprocess.CalledProcessError as e:
1572- retry_count = retry_count + 1
1573- if retry_count > APT_NO_LOCK_RETRY_COUNT:
1574- raise
1575- result = e.returncode
1576- log("Couldn't acquire DPKG lock. Will retry in {} seconds."
1577- "".format(APT_NO_LOCK_RETRY_DELAY))
1578- time.sleep(APT_NO_LOCK_RETRY_DELAY)
1579-
1580- else:
1581- subprocess.call(cmd, env=env)
1582+# Backwards compatibility
1583+if get_platform() == "ubuntu":
1584+ from charmhelpers.fetch.ubuntu import *
1585+ apt_install = install
1586+ apt_update = update
1587+ apt_upgrade = upgrade
1588+ apt_purge = purge
1589
1590=== modified file 'hooks/charmhelpers/fetch/archiveurl.py' (properties changed: -x to +x)
1591=== modified file 'hooks/charmhelpers/fetch/bzrurl.py' (properties changed: -x to +x)
1592--- hooks/charmhelpers/fetch/bzrurl.py 2015-03-23 09:45:10 +0000
1593+++ hooks/charmhelpers/fetch/bzrurl.py 2016-04-29 11:54:41 +0000
1594@@ -15,60 +15,51 @@
1595 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1596
1597 import os
1598+
1599+from subprocess import check_call
1600 from charmhelpers.fetch import (
1601 BaseFetchHandler,
1602- UnhandledSource
1603+ UnhandledSource,
1604+ filter_installed_packages,
1605+ install,
1606 )
1607 from charmhelpers.core.host import mkdir
1608
1609-import six
1610-if six.PY3:
1611- raise ImportError('bzrlib does not support Python3')
1612
1613-try:
1614- from bzrlib.branch import Branch
1615- from bzrlib import bzrdir, workingtree, errors
1616-except ImportError:
1617- from charmhelpers.fetch import apt_install
1618- apt_install("python-bzrlib")
1619- from bzrlib.branch import Branch
1620- from bzrlib import bzrdir, workingtree, errors
1621+if filter_installed_packages(['bzr']) != []:
1622+ install(['bzr'])
1623+ if filter_installed_packages(['bzr']) != []:
1624+ raise NotImplementedError('Unable to install bzr')
1625
1626
1627 class BzrUrlFetchHandler(BaseFetchHandler):
1628 """Handler for bazaar branches via generic and lp URLs"""
1629 def can_handle(self, source):
1630 url_parts = self.parse_url(source)
1631- if url_parts.scheme not in ('bzr+ssh', 'lp'):
1632+ if url_parts.scheme not in ('bzr+ssh', 'lp', ''):
1633 return False
1634+ elif not url_parts.scheme:
1635+ return os.path.exists(os.path.join(source, '.bzr'))
1636 else:
1637 return True
1638
1639 def branch(self, source, dest):
1640- url_parts = self.parse_url(source)
1641- # If we use lp:branchname scheme we need to load plugins
1642 if not self.can_handle(source):
1643 raise UnhandledSource("Cannot handle {}".format(source))
1644- if url_parts.scheme == "lp":
1645- from bzrlib.plugin import load_plugins
1646- load_plugins()
1647- try:
1648- local_branch = bzrdir.BzrDir.create_branch_convenience(dest)
1649- except errors.AlreadyControlDirError:
1650- local_branch = Branch.open(dest)
1651- try:
1652- remote_branch = Branch.open(source)
1653- remote_branch.push(local_branch)
1654- tree = workingtree.WorkingTree.open(dest)
1655- tree.update()
1656- except Exception as e:
1657- raise e
1658+ if os.path.exists(dest):
1659+ check_call(['bzr', 'pull', '--overwrite', '-d', dest, source])
1660+ else:
1661+ check_call(['bzr', 'branch', source, dest])
1662
1663- def install(self, source):
1664+ def install(self, source, dest=None):
1665 url_parts = self.parse_url(source)
1666 branch_name = url_parts.path.strip("/").split("/")[-1]
1667- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1668- branch_name)
1669+ if dest:
1670+ dest_dir = os.path.join(dest, branch_name)
1671+ else:
1672+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1673+ branch_name)
1674+
1675 if not os.path.exists(dest_dir):
1676 mkdir(dest_dir, perms=0o755)
1677 try:
1678
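With bzrlib dropped, the handler above shells out to the bzr command line and installs the bzr package on demand at import time. A rough usage sketch, assuming it runs inside a hook where CHARM_DIR is set; the lp: URL is a placeholder:

    from charmhelpers.fetch.bzrurl import BzrUrlFetchHandler

    handler = BzrUrlFetchHandler()
    if handler.can_handle('lp:charm-helpers'):
        # First run: 'bzr branch' into $CHARM_DIR/fetched/<branch-name>;
        # subsequent runs: 'bzr pull --overwrite' into the same directory.
        fetched_path = handler.install('lp:charm-helpers')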
1679=== added directory 'hooks/charmhelpers/fetch/centos'
1680=== added file 'hooks/charmhelpers/fetch/centos/__init__.py'
1681--- hooks/charmhelpers/fetch/centos/__init__.py 1970-01-01 00:00:00 +0000
1682+++ hooks/charmhelpers/fetch/centos/__init__.py 2016-04-29 11:54:41 +0000
1683@@ -0,0 +1,158 @@
1684+import subprocess
1685+import os
1686+import time
1687+import six
1688+import yum
1689+
1690+from tempfile import NamedTemporaryFile
1691+from charmhelpers.core.hookenv import (
1692+ log,
1693+)
1694+
1695+YUM_NO_LOCK = 1 # The return code for "couldn't acquire lock" in YUM.
1696+YUM_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between yum lock checks.
1697+YUM_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
1698+
1699+
1700+def filter_installed_packages(packages):
1701+ """Returns a list of packages that require installation"""
1702+ yb = yum.YumBase()
1703+ pkgs = []
1704+ for package in yb.doPackageLists().installed:
1705+ pkgs.append(package.base_package_name)
1706+ _pkgs = []
1707+ for package in packages:
1708+ if package not in pkgs:
1709+ _pkgs.append(package)
1710+ return _pkgs
1711+
1712+
1713+def install(packages, options=None, fatal=False):
1714+ """Install one or more packages"""
1715+ cmd = ['yum', '--assumeyes']
1716+ if options is not None:
1717+ cmd.extend(options)
1718+ cmd.append('install')
1719+ if isinstance(packages, six.string_types):
1720+ cmd.append(packages)
1721+ else:
1722+ cmd.extend(packages)
1723+ log("Installing {} with options: {}".format(packages,
1724+ options))
1725+ _run_yum_command(cmd, fatal)
1726+
1727+
1728+def update(fatal=False):
1729+ """Update local yum cache"""
1730+ cmd = ['yum', 'update', '--assumeyes']
1731+ log("Update with fatal: {}".format(fatal))
1732+ _run_yum_command(cmd, fatal)
1733+
1734+
1735+def upgrade(options=None, fatal=False, dist=False):
1736+ """Upgrade all packages"""
1737+ cmd = ['yum', '--assumeyes']
1738+ if options is not None:
1739+ cmd.extend(options)
1740+ cmd.append('upgrade')
1741+ log("Upgrading with options: {}".format(options))
1742+ _run_yum_command(cmd, fatal)
1743+
1744+
1745+def purge(packages, fatal=False):
1746+ """Purge one or more packages"""
1747+ cmd = ['yum', 'remove', '--assumeyes']
1748+ if isinstance(packages, six.string_types):
1749+ cmd.append(packages)
1750+ else:
1751+ cmd.extend(packages)
1752+ log("Purging {}".format(packages))
1753+ _run_yum_command(cmd, fatal)
1754+
1755+
1756+def yum_search(packages):
1757+ """Search for a package"""
1758+ output = {}
1759+ cmd = ['yum', 'search']
1760+ if isinstance(packages, six.string_types):
1761+ cmd.append(packages)
1762+ else:
1763+ cmd.extend(packages)
1764+ log("Searching for {}".format(packages))
1765+ result = subprocess.check_output(cmd)
1766+ for package in list(packages):
1767+ if package not in result:
1768+ output[package] = False
1769+ else:
1770+ output[package] = True
1771+ return output
1772+
1773+
1774+def add_source(source, key=None):
1775+ if source is None:
1776+ log('Source is not present. Skipping')
1777+ return
1778+
1779+ if source.startswith('http'):
1780+ log("Add source: {!r}".format(source))
1781+
1782+ found = False
1783+ # search if already exists
1784+ directory = '/etc/yum.repos.d/'
1785+ for filename in os.listdir(directory):
1786+ with open(directory+filename, 'r') as rpm_file:
1787+ if source in rpm_file:
1788+ found = True
1789+
1790+ if not found:
1791+ # write in the charms.repo
1792+ with open(directory+'Charms.repo', 'a') as rpm_file:
1793+ rpm_file.write('[%s]\n' % source[7:].replace('/', '_'))
1794+ rpm_file.write('name=%s\n' % source[7:])
1795+ rpm_file.write('baseurl=%s\n\n' % source)
1796+ else:
1797+ log("Unknown source: {!r}".format(source))
1798+
1799+ if key:
1800+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
1801+ with NamedTemporaryFile('w+') as key_file:
1802+ key_file.write(key)
1803+ key_file.flush()
1804+ key_file.seek(0)
1805+ subprocess.check_call(['rpm', '--import', key_file.name])
1806+ else:
1807+ subprocess.check_call(['rpm', '--import', key])
1808+
1809+
1810+def _run_yum_command(cmd, fatal=False):
1811+ """
1812+ Run a YUM command, checking output and retrying if the fatal flag is set
1813+ to True.
1814+
1815+ :param: cmd: str: The yum command to run.
1816+ :param: fatal: bool: Whether the command's output should be checked and
1817+ retried.
1818+ """
1819+ env = os.environ.copy()
1820+
1821+ if fatal:
1822+ retry_count = 0
1823+ result = None
1824+
1825+ # If the command is considered "fatal", we need to retry if the yum
1826+ # lock was not acquired.
1827+
1828+ while result is None or result == YUM_NO_LOCK:
1829+ try:
1830+ result = subprocess.check_call(cmd, env=env)
1831+ except subprocess.CalledProcessError as e:
1832+ retry_count = retry_count + 1
1833+ if retry_count > YUM_NO_LOCK_RETRY_COUNT:
1834+ raise
1835+ result = e.returncode
1836+ log("Couldn't acquire YUM lock. Will retry in {} seconds."
1837+ "".format(YUM_NO_LOCK_RETRY_DELAY))
1838+ time.sleep(YUM_NO_LOCK_RETRY_DELAY)
1839+
1840+ else:
1841+ subprocess.call(cmd, env=env)
1842
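The yum-backed API added above mirrors the apt one, so install hooks can stay identical on both platforms. A hedged sketch of the intended use on a CentOS unit with the Python yum bindings available; the package names are illustrative:

    from charmhelpers.fetch.centos import (
        filter_installed_packages,
        install,
        update,
    )

    update(fatal=True)   # yum update --assumeyes, retried while the yum lock is held
    missing = filter_installed_packages(['nrpe', 'nagios-plugins-all'])
    if missing:
        install(missing, fatal=True)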
1843=== modified file 'hooks/charmhelpers/fetch/giturl.py' (properties changed: -x to +x)
1844--- hooks/charmhelpers/fetch/giturl.py 2015-03-23 09:45:10 +0000
1845+++ hooks/charmhelpers/fetch/giturl.py 2016-04-29 11:54:41 +0000
1846@@ -15,24 +15,18 @@
1847 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1848
1849 import os
1850+from subprocess import check_call, CalledProcessError
1851 from charmhelpers.fetch import (
1852 BaseFetchHandler,
1853- UnhandledSource
1854+ UnhandledSource,
1855+ filter_installed_packages,
1856+ install,
1857 )
1858-from charmhelpers.core.host import mkdir
1859-
1860-import six
1861-if six.PY3:
1862- raise ImportError('GitPython does not support Python 3')
1863-
1864-try:
1865- from git import Repo
1866-except ImportError:
1867- from charmhelpers.fetch import apt_install
1868- apt_install("python-git")
1869- from git import Repo
1870-
1871-from git.exc import GitCommandError # noqa E402
1872+
1873+if filter_installed_packages(['git']) != []:
1874+ install(['git'])
1875+ if filter_installed_packages(['git']) != []:
1876+ raise NotImplementedError('Unable to install git')
1877
1878
1879 class GitUrlFetchHandler(BaseFetchHandler):
1880@@ -40,19 +34,26 @@
1881 def can_handle(self, source):
1882 url_parts = self.parse_url(source)
1883 # TODO (mattyw) no support for ssh git@ yet
1884- if url_parts.scheme not in ('http', 'https', 'git'):
1885+ if url_parts.scheme not in ('http', 'https', 'git', ''):
1886 return False
1887+ elif not url_parts.scheme:
1888+ return os.path.exists(os.path.join(source, '.git'))
1889 else:
1890 return True
1891
1892- def clone(self, source, dest, branch):
1893+ def clone(self, source, dest, branch="master", depth=None):
1894 if not self.can_handle(source):
1895 raise UnhandledSource("Cannot handle {}".format(source))
1896
1897- repo = Repo.clone_from(source, dest)
1898- repo.git.checkout(branch)
1899+ if os.path.exists(dest):
1900+ cmd = ['git', '-C', dest, 'pull', source, branch]
1901+ else:
1902+ cmd = ['git', 'clone', source, dest, '--branch', branch]
1903+ if depth:
1904+ cmd.extend(['--depth', depth])
1905+ check_call(cmd)
1906
1907- def install(self, source, branch="master", dest=None):
1908+ def install(self, source, branch="master", dest=None, depth=None):
1909 url_parts = self.parse_url(source)
1910 branch_name = url_parts.path.strip("/").split("/")[-1]
1911 if dest:
1912@@ -60,12 +61,10 @@
1913 else:
1914 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
1915 branch_name)
1916- if not os.path.exists(dest_dir):
1917- mkdir(dest_dir, perms=0o755)
1918 try:
1919- self.clone(source, dest_dir, branch)
1920- except GitCommandError as e:
1921- raise UnhandledSource(e.message)
1922+ self.clone(source, dest_dir, branch, depth)
1923+ except CalledProcessError as e:
1924+ raise UnhandledSource(e)
1925 except OSError as e:
1926 raise UnhandledSource(e.strerror)
1927 return dest_dir
1928
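As with bzrurl.py, the GitPython dependency is replaced by plain git subprocess calls. An illustrative sketch; the repository URL is a placeholder, and depth is passed straight through to 'git clone --depth', so it should be given as a string:

    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    if handler.can_handle('https://github.com/juju/charm-helpers'):
        # Clones on the first run; runs 'git pull' into the same
        # destination on subsequent hook executions.
        path = handler.install('https://github.com/juju/charm-helpers',
                               branch='master', depth='1')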
1929=== added directory 'hooks/charmhelpers/fetch/ubuntu'
1930=== added file 'hooks/charmhelpers/fetch/ubuntu/__init__.py'
1931--- hooks/charmhelpers/fetch/ubuntu/__init__.py 1970-01-01 00:00:00 +0000
1932+++ hooks/charmhelpers/fetch/ubuntu/__init__.py 2016-04-29 11:54:41 +0000
1933@@ -0,0 +1,296 @@
1934+import os
1935+import six
1936+import time
1937+import subprocess
1938+
1939+from tempfile import NamedTemporaryFile
1940+from charmhelpers.core.host import (
1941+ lsb_release
1942+)
1943+from charmhelpers.core.hookenv import (
1944+ log,
1945+)
1946+from charmhelpers.fetch import (
1947+ SourceConfigError,
1948+)
1949+
1950+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
1951+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
1952+"""
1953+PROPOSED_POCKET = """# Proposed
1954+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
1955+"""
1956+CLOUD_ARCHIVE_POCKETS = {
1957+ # Folsom
1958+ 'folsom': 'precise-updates/folsom',
1959+ 'precise-folsom': 'precise-updates/folsom',
1960+ 'precise-folsom/updates': 'precise-updates/folsom',
1961+ 'precise-updates/folsom': 'precise-updates/folsom',
1962+ 'folsom/proposed': 'precise-proposed/folsom',
1963+ 'precise-folsom/proposed': 'precise-proposed/folsom',
1964+ 'precise-proposed/folsom': 'precise-proposed/folsom',
1965+ # Grizzly
1966+ 'grizzly': 'precise-updates/grizzly',
1967+ 'precise-grizzly': 'precise-updates/grizzly',
1968+ 'precise-grizzly/updates': 'precise-updates/grizzly',
1969+ 'precise-updates/grizzly': 'precise-updates/grizzly',
1970+ 'grizzly/proposed': 'precise-proposed/grizzly',
1971+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
1972+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
1973+ # Havana
1974+ 'havana': 'precise-updates/havana',
1975+ 'precise-havana': 'precise-updates/havana',
1976+ 'precise-havana/updates': 'precise-updates/havana',
1977+ 'precise-updates/havana': 'precise-updates/havana',
1978+ 'havana/proposed': 'precise-proposed/havana',
1979+ 'precise-havana/proposed': 'precise-proposed/havana',
1980+ 'precise-proposed/havana': 'precise-proposed/havana',
1981+ # Icehouse
1982+ 'icehouse': 'precise-updates/icehouse',
1983+ 'precise-icehouse': 'precise-updates/icehouse',
1984+ 'precise-icehouse/updates': 'precise-updates/icehouse',
1985+ 'precise-updates/icehouse': 'precise-updates/icehouse',
1986+ 'icehouse/proposed': 'precise-proposed/icehouse',
1987+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
1988+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
1989+ # Juno
1990+ 'juno': 'trusty-updates/juno',
1991+ 'trusty-juno': 'trusty-updates/juno',
1992+ 'trusty-juno/updates': 'trusty-updates/juno',
1993+ 'trusty-updates/juno': 'trusty-updates/juno',
1994+ 'juno/proposed': 'trusty-proposed/juno',
1995+ 'trusty-juno/proposed': 'trusty-proposed/juno',
1996+ 'trusty-proposed/juno': 'trusty-proposed/juno',
1997+ # Kilo
1998+ 'kilo': 'trusty-updates/kilo',
1999+ 'trusty-kilo': 'trusty-updates/kilo',
2000+ 'trusty-kilo/updates': 'trusty-updates/kilo',
2001+ 'trusty-updates/kilo': 'trusty-updates/kilo',
2002+ 'kilo/proposed': 'trusty-proposed/kilo',
2003+ 'trusty-kilo/proposed': 'trusty-proposed/kilo',
2004+ 'trusty-proposed/kilo': 'trusty-proposed/kilo',
2005+ # Liberty
2006+ 'liberty': 'trusty-updates/liberty',
2007+ 'trusty-liberty': 'trusty-updates/liberty',
2008+ 'trusty-liberty/updates': 'trusty-updates/liberty',
2009+ 'trusty-updates/liberty': 'trusty-updates/liberty',
2010+ 'liberty/proposed': 'trusty-proposed/liberty',
2011+ 'trusty-liberty/proposed': 'trusty-proposed/liberty',
2012+ 'trusty-proposed/liberty': 'trusty-proposed/liberty',
2013+ # Mitaka
2014+ 'mitaka': 'trusty-updates/mitaka',
2015+ 'trusty-mitaka': 'trusty-updates/mitaka',
2016+ 'trusty-mitaka/updates': 'trusty-updates/mitaka',
2017+ 'trusty-updates/mitaka': 'trusty-updates/mitaka',
2018+ 'mitaka/proposed': 'trusty-proposed/mitaka',
2019+ 'trusty-mitaka/proposed': 'trusty-proposed/mitaka',
2020+ 'trusty-proposed/mitaka': 'trusty-proposed/mitaka',
2021+}
2022+
2023+
2024+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2025+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
2026+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
2027+
2028+
2029+def filter_installed_packages(packages):
2030+ """Returns a list of packages that require installation"""
2031+ temp_cache = apt_cache()
2032+ _pkgs = []
2033+ for package in packages:
2034+ try:
2035+ p = temp_cache[package]
2036+ p.current_ver or _pkgs.append(package)
2037+ except KeyError:
2038+ log('Package {} has no installation candidate.'.format(package),
2039+ level='WARNING')
2040+ _pkgs.append(package)
2041+ return _pkgs
2042+
2043+
2044+def apt_cache(in_memory=True):
2045+ """Build and return an apt cache"""
2046+ from apt import apt_pkg
2047+ apt_pkg.init()
2048+ if in_memory:
2049+ apt_pkg.config.set("Dir::Cache::pkgcache", "")
2050+ apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
2051+ return apt_pkg.Cache()
2052+
2053+
2054+def install(packages, options=None, fatal=False):
2055+ """Install one or more packages"""
2056+ if options is None:
2057+ options = ['--option=Dpkg::Options::=--force-confold']
2058+
2059+ cmd = ['apt-get', '--assume-yes']
2060+ cmd.extend(options)
2061+ cmd.append('install')
2062+ if isinstance(packages, six.string_types):
2063+ cmd.append(packages)
2064+ else:
2065+ cmd.extend(packages)
2066+ log("Installing {} with options: {}".format(packages,
2067+ options))
2068+ _run_apt_command(cmd, fatal)
2069+
2070+
2071+def upgrade(options=None, fatal=False, dist=False):
2072+ """Upgrade all packages"""
2073+ if options is None:
2074+ options = ['--option=Dpkg::Options::=--force-confold']
2075+
2076+ cmd = ['apt-get', '--assume-yes']
2077+ cmd.extend(options)
2078+ if dist:
2079+ cmd.append('dist-upgrade')
2080+ else:
2081+ cmd.append('upgrade')
2082+ log("Upgrading with options: {}".format(options))
2083+ _run_apt_command(cmd, fatal)
2084+
2085+
2086+def update(fatal=False):
2087+ """Update local apt cache"""
2088+ cmd = ['apt-get', 'update']
2089+ _run_apt_command(cmd, fatal)
2090+
2091+
2092+def purge(packages, fatal=False):
2093+ """Purge one or more packages"""
2094+ cmd = ['apt-get', '--assume-yes', 'purge']
2095+ if isinstance(packages, six.string_types):
2096+ cmd.append(packages)
2097+ else:
2098+ cmd.extend(packages)
2099+ log("Purging {}".format(packages))
2100+ _run_apt_command(cmd, fatal)
2101+
2102+
2103+def apt_mark(packages, mark, fatal=False):
2104+ """Flag one or more packages using apt-mark"""
2105+ log("Marking {} as {}".format(packages, mark))
2106+ cmd = ['apt-mark', mark]
2107+ if isinstance(packages, six.string_types):
2108+ cmd.append(packages)
2109+ else:
2110+ cmd.extend(packages)
2111+
2112+ if fatal:
2113+ subprocess.check_call(cmd, universal_newlines=True)
2114+ else:
2115+ subprocess.call(cmd, universal_newlines=True)
2116+
2117+
2118+def apt_hold(packages, fatal=False):
2119+ return apt_mark(packages, 'hold', fatal=fatal)
2120+
2121+
2122+def apt_unhold(packages, fatal=False):
2123+ return apt_mark(packages, 'unhold', fatal=fatal)
2124+
2125+
2126+def add_source(source, key=None):
2127+ """Add a package source to this system.
2128+
2129+ @param source: a URL or sources.list entry, as supported by
2130+ add-apt-repository(1). Examples::
2131+
2132+ ppa:charmers/example
2133+ deb https://stub:key@private.example.com/ubuntu trusty main
2134+
2135+ In addition:
2136+ 'proposed:' may be used to enable the standard 'proposed'
2137+ pocket for the release.
2138+ 'cloud:' may be used to activate official cloud archive pockets,
2139+ such as 'cloud:icehouse'
2140+ 'distro' may be used as a noop
2141+
2142+ @param key: A key to be added to the system's APT keyring and used
2143+ to verify the signatures on packages. Ideally, this should be an
2144+ ASCII format GPG public key including the block headers. A GPG key
2145+ id may also be used, but be aware that only insecure protocols are
2146+ available to retrieve the actual public key from a public keyserver
2147+ placing your Juju environment at risk. ppa and cloud archive keys
2148+ are securely added automatically, so should not be provided.
2149+ """
2150+ if source is None:
2151+ log('Source is not present. Skipping')
2152+ return
2153+
2154+ if (source.startswith('ppa:') or
2155+ source.startswith('http') or
2156+ source.startswith('deb ') or
2157+ source.startswith('cloud-archive:')):
2158+ subprocess.check_call(['add-apt-repository', '--yes', source])
2159+ elif source.startswith('cloud:'):
2160+ install(filter_installed_packages(['ubuntu-cloud-keyring']),
2161+ fatal=True)
2162+ pocket = source.split(':')[-1]
2163+ if pocket not in CLOUD_ARCHIVE_POCKETS:
2164+ raise SourceConfigError(
2165+ 'Unsupported cloud: source option %s' %
2166+ pocket)
2167+ actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
2168+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
2169+ apt.write(CLOUD_ARCHIVE.format(actual_pocket))
2170+ elif source == 'proposed':
2171+ release = lsb_release()['DISTRIB_CODENAME']
2172+ with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2173+ apt.write(PROPOSED_POCKET.format(release))
2174+ elif source == 'distro':
2175+ pass
2176+ else:
2177+ log("Unknown source: {!r}".format(source))
2178+
2179+ if key:
2180+ if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2181+ with NamedTemporaryFile('w+') as key_file:
2182+ key_file.write(key)
2183+ key_file.flush()
2184+ key_file.seek(0)
2185+ subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
2186+ else:
2187+ # Note that hkp: is in no way a secure protocol. Using a
2188+ # GPG key id is pointless from a security POV unless you
2189+ # absolutely trust your network and DNS.
2190+ subprocess.check_call(['apt-key', 'adv', '--keyserver',
2191+ 'hkp://keyserver.ubuntu.com:80', '--recv',
2192+ key])
2193+
2194+
2195+def _run_apt_command(cmd, fatal=False):
2196+ """
2197+ Run an APT command, checking output and retrying if the fatal flag is set
2198+ to True.
2199+
2200+ :param: cmd: str: The apt command to run.
2201+ :param: fatal: bool: Whether the command's output should be checked and
2202+ retried.
2203+ """
2204+ env = os.environ.copy()
2205+
2206+ if 'DEBIAN_FRONTEND' not in env:
2207+ env['DEBIAN_FRONTEND'] = 'noninteractive'
2208+
2209+ if fatal:
2210+ retry_count = 0
2211+ result = None
2212+
2213+ # If the command is considered "fatal", we need to retry if the apt
2214+ # lock was not acquired.
2215+
2216+ while result is None or result == APT_NO_LOCK:
2217+ try:
2218+ result = subprocess.check_call(cmd, env=env)
2219+ except subprocess.CalledProcessError as e:
2220+ retry_count = retry_count + 1
2221+ if retry_count > APT_NO_LOCK_RETRY_COUNT:
2222+ raise
2223+ result = e.returncode
2224+ log("Couldn't acquire DPKG lock. Will retry in {} seconds."
2225+ "".format(APT_NO_LOCK_RETRY_DELAY))
2226+ time.sleep(APT_NO_LOCK_RETRY_DELAY)
2227+
2228+ else:
2229+ subprocess.call(cmd, env=env)
2230
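The Ubuntu backend carries over the existing source handling, including the cloud archive pocket map above. A short hedged example of the expected flow; the pocket and package name are illustrative:

    from charmhelpers.fetch.ubuntu import add_source, update, install

    # 'cloud:mitaka' maps to 'trusty-updates/mitaka' via CLOUD_ARCHIVE_POCKETS
    # and is written to /etc/apt/sources.list.d/cloud-archive.list.
    add_source('cloud:mitaka')
    update(fatal=True)   # apt-get update, retried while the dpkg lock is held
    install(['nagios-nrpe-server'], fatal=True)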
2231=== modified file 'hooks/nrpe_helpers.py' (properties changed: -x to +x)
2232=== modified file 'hooks/nrpe_utils.py' (properties changed: -x to +x)
2233--- hooks/nrpe_utils.py 2016-01-18 22:54:35 +0000
2234+++ hooks/nrpe_utils.py 2016-04-29 11:54:41 +0000
2235@@ -1,14 +1,18 @@
2236 import os
2237 import shutil
2238 import glob
2239+import importlib
2240
2241 from charmhelpers import fetch
2242 from charmhelpers.core import host
2243 from charmhelpers.core.templating import render
2244 from charmhelpers.core import hookenv
2245+from charmhelpers import get_platform
2246
2247 import nrpe_helpers
2248
2249+platform = importlib.import_module(get_platform())
2250+
2251
2252 def restart_rsync(service_name):
2253 """ Restart rsync """
2254@@ -17,25 +21,18 @@
2255
2256 def restart_nrpe(service_name):
2257 """ Restart nrpe """
2258- host.service_restart('nagios-nrpe-server')
2259+ platform.restart_nrpe(service_name)
2260
2261
2262 def determine_packages():
2263 """ List of packages this charm needs installed """
2264- pkgs = [
2265- 'nagios-nrpe-server',
2266- 'nagios-plugins-basic',
2267- 'nagios-plugins-standard'
2268- ]
2269- if hookenv.config('export_nagios_definitions'):
2270- pkgs.append('rsync')
2271- return pkgs
2272+ return platform.determine_packages()
2273
2274
2275 def install_packages(service_name):
2276 """ Install packages """
2277- fetch.apt_update()
2278- fetch.apt_install(determine_packages(), fatal=True)
2279+ fetch.update()
2280+ fetch.install(determine_packages(), fatal=True)
2281
2282
2283 def remove_host_export_fragments(service_name):
2284@@ -139,5 +136,6 @@
2285 for rid in hookenv.relation_ids('monitors'):
2286 hookenv.relation_set(
2287 relation_id=rid,
2288- relation_settings=monitor_relation.provide_data()
2289+ relation_settings=monitor_relation.provide_data(),
2290+ charm_platform=get_platform()
2291 )
2292
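nrpe_utils.py (and services.py below) now resolve platform-specific behaviour at import time: get_platform() returns a module name such as 'ubuntu' or 'centos', and importlib loads the matching module from the charm's hooks directory. A minimal sketch of the pattern, assuming get_platform() is provided by the updated charmhelpers/__init__.py:

    import importlib
    from charmhelpers import get_platform

    # Loads hooks/ubuntu.py or its CentOS counterpart, both of which expose
    # determine_packages(), restart_nrpe() and render_nrpe_template().
    platform = importlib.import_module(get_platform())

    packages = platform.determine_packages()
    platform.restart_nrpe('nrpe')
    render_callback = platform.render_nrpe_template()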
2293=== modified file 'hooks/services.py' (properties changed: -x to +x)
2294--- hooks/services.py 2016-01-19 19:29:15 +0000
2295+++ hooks/services.py 2016-04-29 11:54:41 +0000
2296@@ -1,9 +1,13 @@
2297+import nrpe_utils
2298+import nrpe_helpers
2299+import importlib
2300+
2301 from charmhelpers.core import hookenv
2302 from charmhelpers.core.services.base import ServiceManager
2303 from charmhelpers.core.services import helpers
2304+from charmhelpers import get_platform
2305
2306-import nrpe_utils
2307-import nrpe_helpers
2308+platform = importlib.import_module(get_platform())
2309
2310
2311 def manage():
2312@@ -29,10 +33,7 @@
2313 nrpe_utils.update_nrpe_external_master_relation,
2314 nrpe_utils.update_monitor_relation,
2315 nrpe_utils.render_nrped_files,
2316- helpers.render_template(
2317- source='nrpe.tmpl',
2318- target='/etc/nagios/nrpe.cfg'
2319- ),
2320+ platform.render_nrpe_template(),
2321 ],
2322 'provided_data': [nrpe_helpers.PrincipleRelation()],
2323 'start': [nrpe_utils.restart_nrpe],
2324
2325=== added file 'hooks/ubuntu.py'
2326--- hooks/ubuntu.py 1970-01-01 00:00:00 +0000
2327+++ hooks/ubuntu.py 2016-04-29 11:54:41 +0000
2328@@ -0,0 +1,27 @@
2329+from charmhelpers.core import host
2330+from charmhelpers.core import hookenv
2331+from charmhelpers.core.services import helpers
2332+
2333+
2334+def determine_packages():
2335+ """ List of packages this charm needs installed """
2336+ pkgs = [
2337+ 'nagios-nrpe-server',
2338+ 'nagios-plugins-basic',
2339+ 'nagios-plugins-standard'
2340+ ]
2341+ if hookenv.config('export_nagios_definitions'):
2342+ pkgs.append('rsync')
2343+ return pkgs
2344+
2345+
2346+def restart_nrpe(service_name):
2347+ """ Restart nrpe """
2348+ host.service_restart('nagios-nrpe-server')
2349+
2350+
2351+def render_nrpe_template():
2352+ return helpers.render_template(
2353+ source='nrpe.tmpl',
2354+ target='/etc/nagios/nrpe.cfg'
2355+ )
2356
2357=== added file 'templates/nrpe-centos.tmpl'
2358--- templates/nrpe-centos.tmpl 1970-01-01 00:00:00 +0000
2359+++ templates/nrpe-centos.tmpl 2016-04-29 11:54:41 +0000
2360@@ -0,0 +1,16 @@
2361+#--------------------------------------------------------
2362+# This file is managed by Juju
2363+#--------------------------------------------------------
2364+
2365+server_port={{ server_port }}
2366+allowed_hosts={{ external_nagios_master }},{{ monitor_allowed_hosts }}
2367+nrpe_user=nrpe
2368+nrpe_group=nrpe
2369+dont_blame_nrpe=0
2370+debug=0
2371+command_timeout=60
2372+pid_file=/var/run/nrpe/nrpe.pid
2373+
2374+# All configuration snippets go into nrpe.d/
2375+include_dir=/etc/nagios/nrpe.d/
2376+
2377
2378=== modified file 'tests/11-monitors-configurations'
2379--- tests/11-monitors-configurations 2016-02-03 22:17:17 +0000
2380+++ tests/11-monitors-configurations 2016-04-29 11:54:41 +0000
2381@@ -37,11 +37,11 @@
2382 # look for procrunning in nrpe config
2383 try:
2384 mysql_unit.file_contents('/etc/nagios/nrpe.d/'
2385- 'check_proc_mysqld_principle.cfg')
2386+ 'check_total_procs_testing.cfg')
2387 except IOError as e:
2388- amulet.raise_status(amulet.ERROR,
2389- msg="procrunning config not found. Error:" +
2390- e.args[1])
2391+ amulet.raise_status(amulet.FAIL,
2392+ msg="procrunning config not found. Error: {0}".format(e)
2393+ )
2394
2395
2396 def test_nagios_monitors_response():
2397@@ -52,7 +52,7 @@
2398 r = requests.get(host_url % nagios_unit.info['public-address'],
2399 auth=('nagiosadmin', nagpwd))
2400 if not r.text.find('mysql-0-basic'):
2401- amulet.raise_status(amulet.ERROR,
2402+ amulet.raise_status(amulet.FAIL,
2403 msg='Nagios is not monitoring the' +
2404 ' hosts it supposed to.')
2405
2406
2407=== modified file 'tests/13-monitors-config'
2408--- tests/13-monitors-config 2016-02-03 22:17:17 +0000
2409+++ tests/13-monitors-config 2016-04-29 11:54:41 +0000
2410@@ -49,11 +49,11 @@
2411 # look for procrunning in nrpe config
2412 try:
2413 mysql_unit.file_contents('/etc/nagios/nrpe.d/'
2414- 'check_proc_mysqld_principle.cfg')
2415+ 'check_total_procs_testing.cfg')
2416 except IOError as e:
2417- amulet.raise_status(amulet.ERROR,
2418- msg="procrunning config not found. Error:" +
2419- e.args[1])
2420+ amulet.raise_status(amulet.FAIL,
2421+ msg="procrunning config not found. Error: {0}".format(e)
2422+ )
2423
2424
2425 def test_nagios_monitors_response():
2426@@ -64,7 +64,7 @@
2427 r = requests.get(host_url % nagios_unit.info['public-address'],
2428 auth=('nagiosadmin', nagpwd))
2429 if not r.text.find('processcount'):
2430- amulet.raise_status(amulet.ERROR,
2431+ amulet.raise_status(amulet.FAIL,
2432 msg='Nagios is not monitoring the' +
2433 ' hosts it supposed to.')
2434
