Merge lp:~mthaddon/mojo/juju-intro-mojo-specs into lp:mojo/mojo-specs

Proposed by Tom Haddon on 2015-09-03
Status: Merged
Merged at revision: 9
Proposed branch: lp:~mthaddon/mojo/juju-intro-mojo-specs
Merge into: lp:mojo/mojo-specs
Diff against target: 2179 lines (+2033/-27)
21 files modified
juju-intro/README (+5/-0)
juju-intro/collect (+5/-0)
juju-intro/deploy (+14/-0)
juju-intro/manifest (+10/-0)
juju-intro/manifest-verify (+4/-0)
juju-intro/verify-installed (+10/-0)
mojo-how-to/devel/verify (+0/-26)
mojo-how-to/manifest-verify (+3/-1)
mojo-spec-helpers/tests/check-juju (+5/-0)
mojo-spec-helpers/tests/verify-nrpe (+37/-0)
mojo-spec-helpers/utils/add-floating-ip (+114/-0)
mojo-spec-helpers/utils/cache_managers.py (+56/-0)
mojo-spec-helpers/utils/container_managers.py (+265/-0)
mojo-spec-helpers/utils/mojo_os_utils.py (+554/-0)
mojo-spec-helpers/utils/mojo_utils.py (+356/-0)
mojo-spec-helpers/utils/shyaml.py (+219/-0)
mojo-spec-helpers/utils/tests/README.md (+67/-0)
mojo-spec-helpers/utils/tests/run_tests.py (+18/-0)
mojo-spec-helpers/utils/tests/test_cache_managers.py (+80/-0)
mojo-spec-helpers/utils/tests/test_container_managers.py (+188/-0)
mojo-spec-helpers/utils/tests/test_mojo_utils.py (+23/-0)
To merge this branch: bzr merge lp:~mthaddon/mojo/juju-intro-mojo-specs
Reviewer Review Type Date Requested Status
Paul Collins 2015-09-03 Approve on 2015-09-17
Review via email: mp+270005@code.launchpad.net

Description of the change

Add a "juju-intro" service which uses the charms described on https://jujucharms.com/docs/stable/getting-started so that we can test these actually work in CI

Tom Haddon (mthaddon) wrote :

I'm going to test this in CI first before putting this MP up for review.

Tom Haddon (mthaddon) wrote :

This is now ready for review. It has run successfully, as shown here: http://paste.ubuntu.com/12272171/

70. By Tom Haddon on 2015-09-05

Just use the current promulgated series of each charm

71. By Tom Haddon on 2015-09-08

Add an e2e check for the site being up after initial install

Paul Collins (pjdc) wrote :

Approving, although I noticed:
 - running this spec with the local provider fails here due to mysql's default dataset-size of 80%; not sure how likely it is for someone to be using it, however
 - mojo-spec-helpers may need another refresh (or is itself stale); add-floating-ip was recently rewritten entirely in Python (see internal trunk)

review: Approve
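
For anyone hitting the local-provider failure Paul mentions, one possible workaround is to cap mysql's dataset-size in the deploy file rather than relying on the charm's 80%-of-RAM default. This is an untested sketch, not part of this MP; it assumes the deploy file accepts juju-deployer-style `options:` stanzas and that the mysql charm's `dataset-size` option accepts a fixed size:

```yaml
# Hypothetical deploy-file override for local-provider runs
wordpress:
  series: {{ series }}
  services:
    mysql:
      charm: mysql
      options:
        dataset-size: 512M
```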

Preview Diff

1=== added directory 'juju-intro'
2=== added file 'juju-intro/README'
3--- juju-intro/README 1970-01-01 00:00:00 +0000
4+++ juju-intro/README 2015-09-08 08:23:55 +0000
5@@ -0,0 +1,5 @@
6+This spec contains the services that are included in the introduction to Juju
7+as part of [1], and is intended to be used with CI to confirm that the charms
8+we're pointing new users at for Juju are always in a working state.
9+
10+[1] https://jujucharms.com/docs/stable/getting-started
11
12=== added symlink 'juju-intro/check-juju'
13=== target is u'../mojo-spec-helpers/tests/check-juju'
14=== added file 'juju-intro/collect'
15--- juju-intro/collect 1970-01-01 00:00:00 +0000
16+++ juju-intro/collect 2015-09-08 08:23:55 +0000
17@@ -0,0 +1,5 @@
18+wordpress lp:charms/wordpress
19+mysql lp:charms/mysql
20+
21+# subordinates
22+nrpe lp:charms/nrpe-external-master
23
24=== added file 'juju-intro/deploy'
25--- juju-intro/deploy 1970-01-01 00:00:00 +0000
26+++ juju-intro/deploy 2015-09-08 08:23:55 +0000
27@@ -0,0 +1,14 @@
28+wordpress:
29+ series: {{ series }}
30+ services:
31+ wordpress:
32+ charm: wordpress
33+ expose: true
34+ mysql:
35+ charm: mysql
36+ nrpe:
37+ charm: nrpe
38+ relations:
39+ - ["wordpress", "mysql"]
40+ - ["wordpress", "nrpe"]
41+ - ["mysql", "nrpe"]
42
43=== added file 'juju-intro/manifest'
44--- juju-intro/manifest 1970-01-01 00:00:00 +0000
45+++ juju-intro/manifest 2015-09-08 08:23:55 +0000
46@@ -0,0 +1,10 @@
47+collect
48+deploy delay=0
49+include config=manifest-verify
50+
51+## This isn't included in the main verify manifest because it won't work as
52+## expected once the service has been installed; it's a one-time check for the
53+## initial deployment
54+
55+# Verify the site is installed
56+verify config=verify-installed
57
58=== added file 'juju-intro/manifest-verify'
59--- juju-intro/manifest-verify 1970-01-01 00:00:00 +0000
60+++ juju-intro/manifest-verify 2015-09-08 08:23:55 +0000
61@@ -0,0 +1,4 @@
62+# Checking juju status
63+verify config=check-juju
64+# Running all nagios checks to confirm the service is working as expected
65+verify config=verify-nrpe
66
67=== added file 'juju-intro/verify-installed'
68--- juju-intro/verify-installed 1970-01-01 00:00:00 +0000
69+++ juju-intro/verify-installed 2015-09-08 08:23:55 +0000
70@@ -0,0 +1,10 @@
71+#!/bin/bash
72+
73+set -e
74+set -u
75+
76+# Check the service is actually up and we can get to the install page
77+# This isn't included in the main verify manifest because it won't work as
78+# expected once the service has been installed; it's a one-time check for the
79+# initial deployment
80+juju ssh wordpress/0 "/usr/lib/nagios/plugins/check_http -I 127.0.0.1 -H localhost -f follow -s '<title>WordPress &rsaquo; Installation</title>'"
81
82=== added symlink 'juju-intro/verify-nrpe'
83=== target is u'../mojo-spec-helpers/tests/verify-nrpe'
84=== removed file 'mojo-how-to/devel/verify'
85--- mojo-how-to/devel/verify 2015-01-26 16:23:38 +0000
86+++ mojo-how-to/devel/verify 1970-01-01 00:00:00 +0000
87@@ -1,26 +0,0 @@
88-#!/bin/bash
89-
90-set -e
91-
92-# If we have any etc bzr nagios checks, we need to wait up to 15 minutes
93-# for the cron to run to populate the check file, so just ignore those
94-NAGIOS_OUTPUT=$(juju status | sed -rn 's/^ {8}public-address: //p'| xargs -I% ssh ubuntu@% 'egrep -oh /usr.*lib.* /etc/nagios/nrpe.d/check_* |grep -v check_etc_bzr.py |sed "s/.*/(set -x; &) || echo MOJO_NAGIOS_FAIL /"|sudo -u nagios -s bash |& sed "s/^/%: /"' 2>/dev/null)
95-
96-echo "${NAGIOS_OUTPUT}"
97-
98-NAGIOS_FAIL=$(echo "${NAGIOS_OUTPUT}" | grep MOJO_NAGIOS_FAIL) || true
99-
100-if [ -n "${NAGIOS_FAIL}" ]; then
101- echo "########################"
102- echo "# Nagios Checks Failed #"
103- echo "########################"
104- exit 1
105-else
106- echo "########################"
107- echo "# Nagios Checks Passed #"
108- echo "########################"
109-fi
110-
111-echo "#########################"
112-echo "# Successfully verified #"
113-echo "#########################"
114
115=== modified file 'mojo-how-to/manifest-verify'
116--- mojo-how-to/manifest-verify 2015-01-21 15:39:29 +0000
117+++ mojo-how-to/manifest-verify 2015-09-08 08:23:55 +0000
118@@ -1,2 +1,4 @@
119+# Check juju
120+verify config=check-juju
121 # The service is up and running, let's verify it
122-verify
123+verify config=verify-nrpe
124
125=== removed symlink 'mojo-how-to/production/verify'
126=== target was u'../devel/verify'
127=== added directory 'mojo-spec-helpers'
128=== added directory 'mojo-spec-helpers/tests'
129=== added file 'mojo-spec-helpers/tests/check-juju'
130--- mojo-spec-helpers/tests/check-juju 1970-01-01 00:00:00 +0000
131+++ mojo-spec-helpers/tests/check-juju 2015-09-08 08:23:55 +0000
132@@ -0,0 +1,5 @@
133+#!/usr/bin/python
134+import utils.mojo_utils as mojo_utils
135+
136+mojo_utils.juju_check_hooks_complete()
137+mojo_utils.juju_status_check_and_wait()
138
139=== added symlink 'mojo-spec-helpers/tests/utils'
140=== target is u'../utils'
141=== added file 'mojo-spec-helpers/tests/verify-nrpe'
142--- mojo-spec-helpers/tests/verify-nrpe 1970-01-01 00:00:00 +0000
143+++ mojo-spec-helpers/tests/verify-nrpe 2015-09-08 08:23:55 +0000
144@@ -0,0 +1,37 @@
145+#!/bin/bash
146+
147+set -e
148+
149+# If we have any etc bzr nagios checks, we need to wait up to 15 minutes
150+# for the cron to run to populate the check file, so just ignore those
151+check() {
152+ juju ssh $1 'egrep -oh /usr.*lib.* /etc/nagios/nrpe.d/check_* |\
153+ grep -v check_etc_bzr.py |sed "s/.*/(set -x; &) || \
154+ echo MOJO_NAGIOS_FAIL /"|sudo -u nagios -s bash' 2>/dev/null
155+}
156+
157+NRPE_UNITS=$(juju status | sed -rn 's/^ *(nrpe\/[0-9]*):$/\1/p')
158+NAGIOS_OUTPUT=$(
159+ for unit in $NRPE_UNITS; do
160+ check $unit | sed -e "s#^#$unit: #"
161+ done
162+)
163+
164+echo "${NAGIOS_OUTPUT}"
165+
166+NAGIOS_FAIL=$(echo "${NAGIOS_OUTPUT}" | grep MOJO_NAGIOS_FAIL) || true
167+
168+if [ -n "${NAGIOS_FAIL}" ]; then
169+ echo "########################"
170+ echo "# Nagios Checks Failed #"
171+ echo "########################"
172+ exit 1
173+else
174+ echo "########################"
175+ echo "# Nagios Checks Passed #"
176+ echo "########################"
177+fi
178+
179+echo "#########################"
180+echo "# Successfully verified #"
181+echo "#########################"
182
183=== added directory 'mojo-spec-helpers/utils'
184=== added file 'mojo-spec-helpers/utils/__init__.py'
185=== added file 'mojo-spec-helpers/utils/add-floating-ip'
186--- mojo-spec-helpers/utils/add-floating-ip 1970-01-01 00:00:00 +0000
187+++ mojo-spec-helpers/utils/add-floating-ip 2015-09-08 08:23:55 +0000
188@@ -0,0 +1,114 @@
189+#!/bin/sh
190+#
191+# Author: Paul Gear
192+# Description: Manage floating IP allocations in mojo local directory for a juju service or unit.
193+# NOTE: $MOJO_PROJECT and $MOJO_STAGE must be set before calling this script.
194+#
195+
196+set -e
197+set -u
198+
199+
200+SECRETS_DIR="/srv/mojo/LOCAL/$MOJO_PROJECT/$MOJO_STAGE/"
201+
202+# Echo to standard error
203+# Useful for printing from functions without polluting the "returned" output
204+echo_stderr() { echo "$@" 1>&2; }
205+
206+# print the juju unit followed by the machine instance id
207+get_juju_units()
208+{
209+ juju status "$@" | python -c '
210+import sys, yaml
211+status = yaml.safe_load(sys.stdin)
212+for serv in status["services"]:
213+ if status["services"][serv].get("units"):
214+ for unit in status["services"][serv]["units"]:
215+ machine = status["services"][serv]["units"][unit]["machine"]
216+ instance = status["machines"][machine]["instance-id"]
217+ print unit, instance
218+'
219+}
220+
221+
222+# get the existing floating IP for this unit, or create a new one
223+get_unit_floating_ip()
224+{
225+ UNIT="$1"
226+ MACHINE="$2"
227+ UNIT_FILE_NAME=$(echo "$UNIT" | tr '/' '_')
228+ FLOATING_IP_FILE="$SECRETS_DIR/$UNIT_FILE_NAME"
229+ IP=""
230+ if [ -s "$FLOATING_IP_FILE" ]; then
231+ IP=$(head -n 1 "$FLOATING_IP_FILE")
232+
233+ echo_stderr "- Found IP "$IP" for "$UNIT
234+
235+ # check how the IP is used now
236+ ALLOCATION=$(nova floating-ip-list | awk -v IP="$IP" '$2 == IP {print $5}')
237+ case $ALLOCATION in
238+ "-")
239+ # unallocated - we can use it
240+ echo_stderr "- IP "$IP" is currently unallocated"
241+ ;;
242+ "")
243+ # non-existent - we'll create one below
244+ IP=""
245+ echo_stderr "- No IP found for "$UNIT
246+ ;;
247+ *)
248+ # allocated to a unit already
249+ if nova show $MACHINE | awk '$3 == "network"' | sed -re 's/,? +/\n/g' | grep -q "^$IP$"; then
250+ # it's allocated to us; do nothing
251+ echo_stderr "- IP "$IP" is already allocated to "$UNIT
252+ return
253+ fi
254+ # it's allocated to another unit - create one below
255+ echo_stderr "- IP "$IP" is allocated to another unit"
256+ IP=""
257+ ;;
258+ esac
259+ fi
260+ if [ -z "$IP" ]; then
261+ IP=$(nova floating-ip-create | grep -wo '[0-9.a-f:]*')
262+ echo "$IP" > "$FLOATING_IP_FILE"
263+ echo_stderr "- Created new IP "$IP
264+ fi
265+ echo "$IP"
266+}
267+
268+
269+usage()
270+{
271+ cat <<EOF
272+Usage: $0 {service|unit}
273+
274+# Add a floating IP to the apache2/0 unit:
275+add-floating-ip apache2/0
276+
277+# Add floating IPs to all units of the jenkins-slave service:
278+add-floating-ip jenkins-slave
279+
280+# Add floating IPs to all units in the haproxy and squid services:
281+add-floating-ip haproxy squid
282+
283+EOF
284+ exit 2
285+}
286+
287+
288+if [ "$#" -lt 1 ]; then
289+ usage
290+fi
291+
292+for i in "$@"; do
293+ get_juju_units "$i" | while read unit machine; do
294+ echo_stderr ""
295+ echo_stderr "Assigning IPs for "$unit
296+ IP=$(get_unit_floating_ip "$unit" "$machine")
297+ if [ -n "$IP" ]; then
298+ echo_stderr "- Assigning IP "$IP" to "$unit
299+ nova floating-ip-associate "$machine" "$IP"
300+ fi
301+ done
302+done
303
304=== added file 'mojo-spec-helpers/utils/cache_managers.py'
305--- mojo-spec-helpers/utils/cache_managers.py 1970-01-01 00:00:00 +0000
306+++ mojo-spec-helpers/utils/cache_managers.py 2015-09-08 08:23:55 +0000
307@@ -0,0 +1,56 @@
308+# System
309+import os
310+import json
311+
312+
313+class JsonCache:
314+ """
315+ Store key value pairs in a JSON file
316+ """
317+
318+ def __init__(self, cache_path):
319+ self.cache_path = cache_path
320+
321+ def get_cache(self):
322+ """
323+ Get the dictionary from the cache file
324+ or if it doesn't exist, return an empty dictionary
325+ """
326+
327+ if os.path.isfile(self.cache_path):
328+ with open(self.cache_path) as cache_file:
329+ return json.load(cache_file)
330+ else:
331+ return {}
332+
333+ def put_cache(self, cache):
334+ """
335+ Save a dictionary to the JSON cache file
336+ """
337+
338+ with open(self.cache_path, 'w') as cache_file:
339+ json.dump(cache, cache_file)
340+
341+ def set(self, name, value):
342+ """
343+ Set a key value pair to the cache
344+ """
345+
346+ cache = self.get_cache()
347+ cache[name] = value
348+ self.put_cache(cache)
349+
350+ def get(self, name):
351+ """
352+ Retrieve a value from the cache by key
353+ """
354+
355+ return self.get_cache().get(name)
356+
357+ def wipe(self):
358+ """
359+ Remove the cache file
360+ """
361+
362+ if os.path.isfile(self.cache_path):
363+ os.remove(self.cache_path)
364
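
For reference, the JsonCache helper above round-trips like this. The sketch below inlines a trimmed copy of the class so it runs standalone (in the spec it would be imported from utils.cache_managers instead):

```python
import json
import os
import tempfile


class JsonCache:
    """Trimmed copy of the helper above: key/value pairs in a JSON file."""

    def __init__(self, cache_path):
        self.cache_path = cache_path

    def get_cache(self):
        # Return the stored dictionary, or an empty one if no file exists yet
        if os.path.isfile(self.cache_path):
            with open(self.cache_path) as cache_file:
                return json.load(cache_file)
        return {}

    def put_cache(self, cache):
        with open(self.cache_path, 'w') as cache_file:
            json.dump(cache, cache_file)

    def set(self, name, value):
        cache = self.get_cache()
        cache[name] = value
        self.put_cache(cache)

    def get(self, name):
        return self.get_cache().get(name)

    def wipe(self):
        if os.path.isfile(self.cache_path):
            os.remove(self.cache_path)


# Round-trip: set, get, wipe (path is a throwaway temp file)
path = os.path.join(tempfile.mkdtemp(), 'mojo-cache.json')
cache = JsonCache(path)
cache.set('deployed-build-label', 'build-123')
assert cache.get('deployed-build-label') == 'build-123'
cache.wipe()
assert cache.get('deployed-build-label') is None
```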
365=== added file 'mojo-spec-helpers/utils/container_managers.py'
366--- mojo-spec-helpers/utils/container_managers.py 1970-01-01 00:00:00 +0000
367+++ mojo-spec-helpers/utils/container_managers.py 2015-09-08 08:23:55 +0000
368@@ -0,0 +1,265 @@
369+# Modules
370+import requests
371+
372+
373+class LocalEnvironmentSwiftContainer:
374+ """
375+ Manage a Swift container for the local deployment environment
376+ for storing deployment settings
377+
378+ swift_connection: A working swiftclient.client.Connection object
379+ container_name: The name of the swift container to use
380+ """
381+
382+ previous_build_obj = 'previous-build-label'
383+ deployed_build_obj = 'deployed-build-label'
384+ deployed_revno_obj = 'deployed-spec-revno'
385+ code_upgrade_succeeded_template = 'code-upgrade-{}-{}-succeeded'
386+ mojo_run_succeeded_template = 'mojo-run-{}-succeeded'
387+
388+ def __init__(self, swift_connection, container_name):
389+ self.swift_connection = swift_connection
390+ self.container_name = container_name
391+
392+ def previous_build_label(self):
393+ """
394+ Get the build label that was previously deployed on this environment
395+ From the Swift account associated with this environment
396+ through environment variables "OS_AUTH_URL" etc.
397+ """
398+
399+ return self.swift_connection.get_object(
400+ container=self.container_name,
401+ obj=self.previous_build_obj
402+ )[1].strip()
403+
404+ def deployed_build_label(self):
405+ """
406+ Get the build label that is currently deployed on this environment
407+ From the Swift account associated with this environment
408+ through environment variables "OS_AUTH_URL" etc.
409+ """
410+
411+ return self.swift_connection.get_object(
412+ container=self.container_name,
413+ obj=self.deployed_build_obj
414+ )[1].strip()
415+
416+ def deployed_spec_revno(self):
417+ """
418+ Get the mojo spec revision number that is currently deployed
419+ on this environment from the Swift account associated with this
420+ environment through environment variables "OS_AUTH_URL" etc.
421+ """
422+
423+ return self.swift_connection.get_object(
424+ container=self.container_name,
425+ obj=self.deployed_revno_obj
426+ )[1].strip()
427+
428+ def save_code_upgrade_succeeded(
429+ self, from_build_label, to_build_label
430+ ):
431+ """
432+ Save an object into Swift
433+ to signify that a code-upgrade succeeded
434+ from one build_label to another
435+ """
436+
437+ upgrade_object_name = self.code_upgrade_succeeded_template.format(
438+ from_build_label, to_build_label
439+ )
440+
441+ self.swift_connection.put_object(
442+ container=self.container_name,
443+ obj=upgrade_object_name,
444+ contents='true'
445+ )
446+
447+ def save_mojo_run_succeeded(self, spec_revno):
448+ """
449+ Save an object into Swift
450+ to signifiy that a mojo-run succeeded with
451+ a given revision number of the mojo spec
452+ """
453+
454+ run_object_name = self.mojo_run_succeeded_template.format(
455+ spec_revno
456+ )
457+
458+ self.swift_connection.put_object(
459+ container=self.container_name,
460+ obj=run_object_name,
461+ contents='true'
462+ )
463+
464+ def save_deployed_build_label(self, build_label):
465+ """
466+ Save an object into swift containing
467+ the build_label which is deployed to this environment
468+ """
469+
470+ self.swift_connection.put_object(
471+ container=self.container_name,
472+ obj='deployed-build-label',
473+ contents=build_label
474+ )
475+
476+ def save_previous_build_label(self, build_label):
477+ """
478+ Save an object into swift
479+ to record the previously deployed build_label
480+ """
481+
482+ self.swift_connection.put_object(
483+ container=self.container_name,
484+ obj='previous-build-label',
485+ contents=build_label
486+ )
487+
488+ def save_mojo_spec_revno(self, spec_revno):
489+ """
490+ Save an object into swift containing
491+ the current mojo spec revision number
492+ """
493+
494+ self.swift_connection.put_object(
495+ container=self.container_name,
496+ obj='deployed-spec-revno',
497+ contents=spec_revno
498+ )
499+
500+
501+class HttpContainer:
502+ """
503+ Methods for retrieving objects
504+ from a Swift HTTP storage container
505+ """
506+
507+ def __init__(self, container_url):
508+ """
509+ container_url: The storage URL path for the swift container
510+ """
511+
512+ self.container_url = container_url
513+
514+ def get(self, object_name):
515+ """
516+ Retrieve the contents of an object
517+ """
518+
519+ object_url = '{}/{}'.format(self.container_url, object_name)
520+
521+ response = requests.get(object_url)
522+
523+ try:
524+ response.raise_for_status()
525+ except requests.exceptions.HTTPError as http_error:
526+ http_error.message += '. URL: {}'.format(object_url)
527+ http_error.args += ('URL: {}'.format(object_url),)
528+ raise http_error
529+
530+ response.raise_for_status()
531+
532+ return response.text
533+
534+ def head(self, object_name):
535+ """
536+ Retrieve the HEAD of an object
537+ """
538+
539+ object_url = '{}/{}'.format(self.container_url, object_name)
540+
541+ return requests.head(object_url)
542+
543+ def exists(self, object_name):
544+ """
545+ Check an object exists
546+ """
547+
548+ return self.head(object_name).ok
549+
550+
551+class CIContainer(HttpContainer):
552+ """
553+ Methods for retrieving continuous integration
554+ resources from an http swift store
555+ """
556+
557+ code_upgrade_succeeded_template = 'code-upgrade-{}-{}-succeeded'
558+ mojo_run_succeeded_template = 'mojo-run-{}-succeeded'
559+
560+ def has_code_upgrade_been_tested(
561+ self,
562+ from_build_label,
563+ to_build_label
564+ ):
565+ """
566+ Check if a specific code upgrade has been tested
567+ (from one build_label to another)
568+ by checking if a specially named Swift object exists
569+ in the CI system's Swift HTTP store.
570+ (This object will have been created
571+ by self.save_code_upgrade_succeeded)
572+ """
573+
574+ return self.exists(
575+ self.code_upgrade_succeeded_template.format(
576+ from_build_label,
577+ to_build_label
578+ )
579+ )
580+
581+ def has_mojo_run_been_tested(self, spec_revno):
582+ """
583+ Check if a specific mojo spec revision number has been tested
584+ by checking if a specially named Swift object exists
585+ in the CI system's Swift HTTP store.
586+ (This object will have been created by self.save_mojo_run_succeeded)
587+ """
588+
589+ return self.exists(
590+ self.mojo_run_succeeded_template.format(
591+ spec_revno
592+ )
593+ )
594+
595+
596+class BuildContainer(HttpContainer):
597+ """
598+ Methods for interacting with the HTTP Swift store
599+ containing code builds
600+ """
601+
602+ def latest_build_label(self):
603+ """
604+ Get the latest build label from the webteam's Swift HTTP object store
605+ """
606+
607+ return self.get('latest-build-label').strip()
608+
609+
610+class DeployedEnvironmentContainer(HttpContainer):
611+ """
612+ Methods for requesting information
613+ from the HTTP swift store for a
614+ deployed environment (e.g. production, staging)
615+ """
616+
617+ def deployed_build_label(self):
618+ """
619+ Get the build label which was most recently
620+ deployed to this environment
621+ from their Swift HTTP object store
622+ """
623+
624+ return self.get('deployed-build-label').strip()
625+
626+ def deployed_spec_revno(self):
627+ """
628+ Get the version of the mojo spec
629+ which was most recently run on this environment
630+ from their Swift HTTP object store
631+ """
632+
633+ return self.get('deployed-spec-revno').strip()
634
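
The marker-object naming convention shared by LocalEnvironmentSwiftContainer and CIContainer can be illustrated offline; the templates below are copied from the classes above, while the build labels and revno are made up:

```python
# Naming templates copied from the container classes; labels are hypothetical
code_upgrade_succeeded_template = 'code-upgrade-{}-{}-succeeded'
mojo_run_succeeded_template = 'mojo-run-{}-succeeded'

# A tested upgrade from build "1023" to build "1024" would be recorded as:
upgrade_marker = code_upgrade_succeeded_template.format('1023', '1024')
# A successful mojo run of spec revno 71 would be recorded as:
run_marker = mojo_run_succeeded_template.format('71')

print(upgrade_marker)  # code-upgrade-1023-1024-succeeded
print(run_marker)      # mojo-run-71-succeeded
```

CIContainer.has_code_upgrade_been_tested and has_mojo_run_been_tested then only need a HEAD request on these names to decide whether a given transition has already passed CI.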
635=== added file 'mojo-spec-helpers/utils/mojo_os_utils.py'
636--- mojo-spec-helpers/utils/mojo_os_utils.py 1970-01-01 00:00:00 +0000
637+++ mojo-spec-helpers/utils/mojo_os_utils.py 2015-09-08 08:23:55 +0000
638@@ -0,0 +1,554 @@
639+#!/usr/bin/python
640+
641+import swiftclient
642+import glanceclient
643+from keystoneclient.v2_0 import client as keystoneclient
644+import mojo_utils
645+from novaclient.v1_1 import client as novaclient
646+from neutronclient.v2_0 import client as neutronclient
647+import logging
648+import re
649+import sys
650+import tempfile
651+import urllib
652+import os
653+import time
654+import subprocess
655+import paramiko
656+import StringIO
657+
658+
659+# Openstack Client helpers
660+def get_nova_creds(cloud_creds):
661+ auth = {
662+ 'username': cloud_creds['OS_USERNAME'],
663+ 'api_key': cloud_creds['OS_PASSWORD'],
664+ 'auth_url': cloud_creds['OS_AUTH_URL'],
665+ 'project_id': cloud_creds['OS_TENANT_NAME'],
666+ 'region_name': cloud_creds['OS_REGION_NAME'],
667+ }
668+ return auth
669+
670+
671+def get_ks_creds(cloud_creds):
672+ auth = {
673+ 'username': cloud_creds['OS_USERNAME'],
674+ 'password': cloud_creds['OS_PASSWORD'],
675+ 'auth_url': cloud_creds['OS_AUTH_URL'],
676+ 'tenant_name': cloud_creds['OS_TENANT_NAME'],
677+ 'region_name': cloud_creds['OS_REGION_NAME'],
678+ }
679+ return auth
680+
681+
682+def get_swift_creds(cloud_creds):
683+ auth = {
684+ 'user': cloud_creds['OS_USERNAME'],
685+ 'key': cloud_creds['OS_PASSWORD'],
686+ 'authurl': cloud_creds['OS_AUTH_URL'],
687+ 'tenant_name': cloud_creds['OS_TENANT_NAME'],
688+ 'auth_version': '2.0',
689+ }
690+ return auth
691+
692+
693+def get_nova_client(novarc_creds):
694+ nova_creds = get_nova_creds(novarc_creds)
695+ return novaclient.Client(**nova_creds)
696+
697+
698+def get_neutron_client(novarc_creds):
699+ neutron_creds = get_ks_creds(novarc_creds)
700+ return neutronclient.Client(**neutron_creds)
701+
702+
703+def get_keystone_client(novarc_creds):
704+ keystone_creds = get_ks_creds(novarc_creds)
705+ keystone_creds['insecure'] = True
706+ return keystoneclient.Client(**keystone_creds)
707+
708+
709+def get_swift_client(novarc_creds, insecure=True):
710+ swift_creds = get_swift_creds(novarc_creds)
711+ swift_creds['insecure'] = insecure
712+ return swiftclient.client.Connection(**swift_creds)
713+
714+
715+def get_glance_client(novarc_creds):
716+ kc = get_keystone_client(novarc_creds)
717+ glance_endpoint = kc.service_catalog.url_for(service_type='image',
718+ endpoint_type='publicURL')
719+ return glanceclient.Client('1', glance_endpoint, token=kc.auth_token)
720+
721+
722+# Glance Helpers
723+def download_image(image, image_glance_name=None):
724+ logging.info('Downloading ' + image)
725+ tmp_dir = tempfile.mkdtemp(dir='/tmp')
726+ if not image_glance_name:
727+ image_glance_name = image.split('/')[-1]
728+ local_file = os.path.join(tmp_dir, image_glance_name)
729+ urllib.urlretrieve(image, local_file)
730+ return local_file
731+
732+
733+def upload_image(gclient, ifile, image_name, public, disk_format,
734+ container_format):
735+ logging.info('Uploading %s to glance ' % (image_name))
736+ with open(ifile) as fimage:
737+ gclient.images.create(
738+ name=image_name,
739+ is_public=public,
740+ disk_format=disk_format,
741+ container_format=container_format,
742+ data=fimage)
743+
744+
745+def get_images_list(gclient):
746+ return [image.name for image in gclient.images.list()]
747+
748+
749+# Keystone helpers
750+def tenant_create(kclient, tenants):
751+ current_tenants = [tenant.name for tenant in kclient.tenants.list()]
752+ for tenant in tenants:
753+ if tenant in current_tenants:
754+ logging.warning('Not creating tenant %s, it already '
755+ 'exists' % (tenant))
756+ else:
757+ logging.info('Creating tenant %s' % (tenant))
758+ kclient.tenants.create(tenant_name=tenant)
759+
760+
761+def user_create(kclient, users):
762+ current_users = [user.name for user in kclient.users.list()]
763+ for user in users:
764+ if user['username'] in current_users:
765+ logging.warning('Not creating user %s, it already '
766+ 'exists' % (user['username']))
767+ else:
768+ logging.info('Creating user %s' % (user['username']))
769+ tenant_id = get_tenant_id(kclient, user['tenant'])
770+ kclient.users.create(name=user['username'],
771+ password=user['password'],
772+ email=user['email'],
773+ tenant_id=tenant_id)
774+
775+
776+def get_roles_for_user(kclient, user_id, tenant_id):
777+ roles = []
778+ ksuser_roles = kclient.roles.roles_for_user(user_id, tenant_id)
779+ for role in ksuser_roles:
780+ roles.append(role.id)
781+ return roles
782+
783+
784+def add_users_to_roles(kclient, users):
785+ for user_details in users:
786+ tenant_id = get_tenant_id(kclient, user_details['tenant'])
787+ for role_name in user_details['roles']:
788+ role = kclient.roles.find(name=role_name)
789+ user = kclient.users.find(name=user_details['username'])
790+ users_roles = get_roles_for_user(kclient, user, tenant_id)
791+ if role.id in users_roles:
792+ logging.warning('Not adding role %s to %s, it already has '
793+ 'it' % (role_name, user_details['username']))
794+ else:
795+ logging.info('Adding %s to role %s for tenant '
796+ '%s' % (user_details['username'], role_name,
797+ tenant_id))
798+ kclient.roles.add_user_role(user, role,
799+ tenant_id)
800+
801+
802+def get_tenant_id(ks_client, tenant_name):
803+ for t in ks_client.tenants.list():
804+ if t._info['name'] == tenant_name:
805+ return t._info['id']
806+ return None
807+
808+
809+# Neutron Helpers
810+def get_gateway_uuids():
811+ gateway_config = mojo_utils.get_juju_status('neutron-gateway')
812+ uuids = []
813+ for machine in gateway_config['machines']:
814+ uuids.append(gateway_config['machines'][machine]['instance-id'])
815+ return uuids
816+
817+
818+def get_net_uuid(neutron_client, net_name):
819+ network = neutron_client.list_networks(name=net_name)['networks'][0]
820+ return network['id']
821+
822+
823+def configure_gateway_ext_port(novaclient):
824+ uuids = get_gateway_uuids()
825+ for uuid in uuids:
826+ server = novaclient.servers.get(uuid)
827+ mac_addrs = [a.mac_addr for a in server.interface_list()]
828+ if len(mac_addrs) < 2:
829+ logging.info('Adding additional port to Neutron Gateway')
830+ server.interface_attach(port_id=None, net_id=None, fixed_ip=None)
831+ else:
832+ logging.warning('Neutron Gateway already has additional port')
833+ if uuids:
834+ logging.info('Setting Neutron Gateway external port to eth1')
835+ mojo_utils.juju_set('neutron-gateway', 'ext-port=eth1')
836+
837+
838+def create_tenant_network(neutron_client, tenant_id, net_name='private',
839+ shared=False, network_type='gre'):
840+ networks = neutron_client.list_networks(name=net_name)
841+ if len(networks['networks']) == 0:
842+ logging.info('Creating network: %s',
843+ net_name)
844+ network_msg = {
845+ 'network': {
846+ 'name': net_name,
847+ 'shared': shared,
848+ 'tenant_id': tenant_id,
849+ }
850+ }
851+ if network_type == 'vxlan':
852+ network_msg['network']['provider:segmentation_id'] = 1233
853+ network_msg['network']['provider:network_type'] = network_type
854+ network = neutron_client.create_network(network_msg)['network']
855+ else:
856+ logging.warning('Network %s already exists.', net_name)
857+ network = networks['networks'][0]
858+ return network
859+
860+
861+def create_external_network(neutron_client, tenant_id, net_name='ext_net',
862+ network_type='gre'):
863+ networks = neutron_client.list_networks(name=net_name)
864+ if len(networks['networks']) == 0:
865+ logging.info('Configuring external bridge')
866+ network_msg = {
867+ 'name': net_name,
868+ 'router:external': True,
869+ 'tenant_id': tenant_id,
870+ }
871+ if network_type == 'vxlan':
872+ network_msg['provider:segmentation_id'] = 1234
873+ network_msg['provider:network_type'] = network_type
874+
875+ logging.info('Creating new external network definition: %s',
876+ net_name)
877+ network = neutron_client.create_network(
878+ {'network': network_msg})['network']
879+ logging.info('New external network created: %s', network['id'])
880+ else:
881+ logging.warning('Network %s already exists.', net_name)
882+ network = networks['networks'][0]
883+ return network
884+
885+
886+def create_tenant_subnet(neutron_client, tenant_id, network, cidr, dhcp=True,
887+ subnet_name='private_subnet'):
888+ # Create subnet
889+ subnets = neutron_client.list_subnets(name=subnet_name)
890+ if len(subnets['subnets']) == 0:
891+ logging.info('Creating subnet')
892+ subnet_msg = {
893+ 'subnet': {
894+ 'name': subnet_name,
895+ 'network_id': network['id'],
896+ 'enable_dhcp': dhcp,
897+ 'cidr': cidr,
898+ 'ip_version': 4,
899+ 'tenant_id': tenant_id
900+ }
901+ }
902+ subnet = neutron_client.create_subnet(subnet_msg)['subnet']
903+ else:
904+ logging.warning('Subnet %s already exists.', subnet_name)
905+ subnet = subnets['subnets'][0]
906+ return subnet
907+
908+
909+def create_external_subnet(neutron_client, tenant_id, network,
910+ default_gateway=None, cidr=None,
911+ start_floating_ip=None, end_floating_ip=None,
912+ subnet_name='ext_net_subnet'):
913+ subnets = neutron_client.list_subnets(name=subnet_name)
914+ if len(subnets['subnets']) == 0:
915+ subnet_msg = {
916+ 'name': subnet_name,
917+ 'network_id': network['id'],
918+ 'enable_dhcp': False,
919+ 'ip_version': 4,
920+ 'tenant_id': tenant_id
921+ }
922+
923+ if default_gateway:
924+ subnet_msg['gateway_ip'] = default_gateway
925+ if cidr:
926+ subnet_msg['cidr'] = cidr
927+ if (start_floating_ip and end_floating_ip):
928+ allocation_pool = {
929+ 'start': start_floating_ip,
930+ 'end': end_floating_ip,
931+ }
932+ subnet_msg['allocation_pools'] = [allocation_pool]
933+
934+ logging.info('Creating new subnet')
935+ subnet = neutron_client.create_subnet({'subnet': subnet_msg})['subnet']
936+ logging.info('New subnet created: %s', subnet['id'])
937+ else:
938+ logging.warning('Subnet %s already exists.', subnet_name)
939+ subnet = subnets['subnets'][0]
940+ return subnet
941+
942+
943+def update_subnet_dns(neutron_client, subnet, dns_servers):
944+ msg = {
945+ 'subnet': {
946+ 'dns_nameservers': dns_servers.split(',')
947+ }
948+ }
949+ logging.info('Updating dns_nameservers (%s) for subnet',
950+ dns_servers)
951+ neutron_client.update_subnet(subnet['id'], msg)
952+
953+
954+def create_provider_router(neutron_client, tenant_id):
955+ routers = neutron_client.list_routers(name='provider-router')
956+ if len(routers['routers']) == 0:
957+ logging.info('Creating provider router for external network access')
958+ router_info = {
959+ 'router': {
960+ 'name': 'provider-router',
961+ 'tenant_id': tenant_id
962+ }
963+ }
964+ router = neutron_client.create_router(router_info)['router']
965+ logging.info('New router created: %s', (router['id']))
966+ else:
967+ logging.warning('Router provider-router already exists.')
968+ router = routers['routers'][0]
969+ return router
970+
971+
972+def plug_extnet_into_router(neutron_client, router, network):
973+ ports = neutron_client.list_ports(device_owner='network:router_gateway',
974+ network_id=network['id'])
975+ if len(ports['ports']) == 0:
976+ logging.info('Plugging router into ext_net')
977+ router = neutron_client.add_gateway_router(
978+ router=router['id'],
979+ body={'network_id': network['id']})
980+ logging.info('Router connected')
981+ else:
982+ logging.warning('Router already connected')
983+
984+
985+def plug_subnet_into_router(neutron_client, router, network, subnet):
986+ routers = neutron_client.list_routers(name=router)
987+ if len(routers['routers']) == 0:
988+ logging.error('Unable to locate provider router %s', router)
989+ sys.exit(1)
990+ else:
991+ # Check to see if subnet already plugged into router
992+ ports = neutron_client.list_ports(
993+ device_owner='network:router_interface',
994+ network_id=network['id'])
995+ if len(ports['ports']) == 0:
996+ logging.info('Adding interface from subnet to %s' % (router))
997+ router = routers['routers'][0]
998+ neutron_client.add_interface_router(router['id'],
999+ {'subnet_id': subnet['id']})
1000+ else:
1001+ logging.warning('Router already connected to subnet')
1002+
1003+
1004+# Nova Helpers
1005+def create_keypair(nova_client, keypair_name):
1006+ if nova_client.keypairs.findall(name=keypair_name):
1007+ _oldkey = nova_client.keypairs.find(name=keypair_name)
1008+ logging.info('Deleting key %s' % (keypair_name))
1009+ nova_client.keypairs.delete(_oldkey)
1010+ logging.info('Creating key %s' % (keypair_name))
1011+ new_key = nova_client.keypairs.create(name=keypair_name)
1012+ return new_key.private_key
1013+
1014+
1015+def boot_instance(nova_client, image_name, flavor_name, key_name):
1016+ image = nova_client.images.find(name=image_name)
1017+ flavor = nova_client.flavors.find(name=flavor_name)
1018+ net = nova_client.networks.find(label="private")
1019+ nics = [{'net-id': net.id}]
1020+ # Obviously time may not produce a unique name
1021+ vm_name = time.strftime("%Y%m%d%H%M%S")
1022+ logging.info('Creating %s %s '
1023+ 'instance %s' % (flavor_name, image_name, vm_name))
1024+ instance = nova_client.servers.create(name=vm_name,
1025+ image=image,
1026+ flavor=flavor,
1027+ key_name=key_name,
1028+ nics=nics)
1029+ return instance
1030+
1031+
1032+def wait_for_active(nova_client, vm_name, wait_time):
1033+ logging.info('Waiting %is for %s to reach ACTIVE '
1034+ 'state' % (wait_time, vm_name))
1035+ for counter in range(wait_time):
1036+ instance = nova_client.servers.find(name=vm_name)
1037+ if instance.status == 'ACTIVE':
1038+ logging.info('%s is ACTIVE' % (vm_name))
1039+ return True
1040+ elif instance.status != 'BUILD':
1041+ logging.error('instance %s in unknown '
1042+ 'state %s' % (instance.name, instance.status))
1043+ return False
1044+ time.sleep(1)
1045+ logging.error('instance %s failed to reach '
1046+ 'active state in %is' % (instance.name, wait_time))
1047+ return False
1048+
1049+
1050+def wait_for_cloudinit(nova_client, vm_name, bootstring, wait_time):
1051+ logging.info('Waiting %is for cloudinit on %s to '
1052+ 'complete' % (wait_time, vm_name))
1053+ instance = nova_client.servers.find(name=vm_name)
1054+ for counter in range(wait_time):
1055+ instance = nova_client.servers.find(name=vm_name)
1056+ console_log = instance.get_console_output()
1057+ if bootstring in console_log:
1058+ logging.info('Cloudinit for %s is complete' % (vm_name))
1059+ return True
1060+ time.sleep(1)
1061+ logging.error('cloudinit for instance %s failed '
1062+ 'to complete in %is' % (instance.name, wait_time))
1063+ return False
1064+
1065+
1066+def wait_for_boot(nova_client, vm_name, bootstring, active_wait,
1067+ cloudinit_wait):
1068+ if not wait_for_active(nova_client, vm_name, active_wait):
1069+ raise Exception('Error initialising %s' % vm_name)
1070+ if not wait_for_cloudinit(nova_client, vm_name, bootstring,
1071+ cloudinit_wait):
1072+ raise Exception('Cloudinit error %s' % vm_name)
1073+
1074+
1075+def wait_for_ping(ip, wait_time):
1076+ logging.info('Waiting for ping to %s' % (ip))
1077+ for counter in range(wait_time):
1078+ if ping(ip):
1079+ logging.info('Ping %s success' % (ip))
1080+ return True
1081+ time.sleep(1)
1082+ logging.error('Ping failed for %s' % (ip))
1083+ return False
1084+
1085+
1086+def assign_floating_ip(nova_client, vm_name):
1087+ floating_ip = nova_client.floating_ips.create()
1088+ logging.info('Assigning floating IP %s to %s' % (floating_ip.ip, vm_name))
1089+ instance = nova_client.servers.find(name=vm_name)
1090+ instance.add_floating_ip(floating_ip)
1091+ return floating_ip.ip
1092+
1093+
1094+def add_secgroup_rules(nova_client):
1095+ secgroup = nova_client.security_groups.find(name="default")
1096+ # Using presence of a 22 rule to indicate whether secgroup rules
1097+ # have been added
1098+ port_rules = [rule['to_port'] for rule in secgroup.rules]
1099+ if 22 in port_rules:
1100+ logging.warn('Security group rules for ssh already added')
1101+ else:
1102+ logging.info('Adding ssh security group rule')
1103+ nova_client.security_group_rules.create(secgroup.id,
1104+ ip_protocol="tcp",
1105+ from_port=22,
1106+ to_port=22)
1107+ if -1 in port_rules:
1108+ logging.warn('Security group rules for ping already added')
1109+ else:
1110+ logging.info('Adding ping security group rule')
1111+ nova_client.security_group_rules.create(secgroup.id,
1112+ ip_protocol="icmp",
1113+ from_port=-1,
1114+ to_port=-1)
1115+
1116+
1117+def ping(ip):
1118+ # Use the system ping command with count of 1 and wait time of 1.
1119+ ret = subprocess.call(['ping', '-c', '1', '-W', '1', ip],
1120+ stdout=open('/dev/null', 'w'),
1121+ stderr=open('/dev/null', 'w'))
1122+ return ret == 0
1123+
1124+
1125+def ssh_test(username, ip, vm_name, password=None, privkey=None):
1126+ logging.info('Attempting to ssh to %s(%s)' % (vm_name, ip))
1127+ ssh = paramiko.SSHClient()
1128+ ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
1129+ if privkey:
1130+ key = paramiko.RSAKey.from_private_key(StringIO.StringIO(privkey))
1131+ ssh.connect(ip, username=username, password='', pkey=key)
1132+ else:
1133+ ssh.connect(ip, username=username, password=password)
1134+ stdin, stdout, stderr = ssh.exec_command('uname -n')
1135+ return_string = stdout.readlines()[0].strip()
1136+ ssh.close()
1137+ if return_string == vm_name:
1138+ logging.info('SSH to %s(%s) successful' % (vm_name, ip))
1139+ return True
1140+ else:
1141+ logging.info('SSH to %s(%s) failed' % (vm_name, ip))
1142+ return False
1143+
1144+
1145+def boot_and_test(nova_client, image_name, flavor_name, number, privkey,
1146+ active_wait=180, cloudinit_wait=180, ping_wait=180):
1147+ image_config = mojo_utils.get_mojo_config('images.yaml')
1148+ for counter in range(number):
1149+ instance = boot_instance(nova_client,
1150+ image_name=image_name,
1151+ flavor_name=flavor_name,
1152+ key_name='mojo')
1153+ wait_for_boot(nova_client, instance.name,
1154+ image_config[image_name]['bootstring'], active_wait,
1155+ cloudinit_wait)
1156+ ip = assign_floating_ip(nova_client, instance.name)
1157+ if not wait_for_ping(ip, ping_wait):
1158+ raise Exception('Ping of %s failed' % (ip))
1160+ ssh_test_args = {
1161+ 'username': image_config[image_name]['username'],
1162+ 'ip': ip,
1163+ 'vm_name': instance.name,
1164+ }
1165+ if image_config[image_name]['auth_type'] == 'password':
1166+ ssh_test_args['password'] = image_config[image_name]['password']
1167+ elif image_config[image_name]['auth_type'] == 'privkey':
1168+ ssh_test_args['privkey'] = privkey
1169+ if not ssh_test(**ssh_test_args):
1170+ raise Exception('SSH to %s failed' % (ip))
1171+
1172+
1173+# Hacluster helper
1174+def get_crm_leader(service, resource=None):
1175+ if not resource:
1176+ resource = 'res_.*_vip'
1177+ leader = set()
1178+ for unit in mojo_utils.get_juju_units(service=service):
1179+ crm_out = mojo_utils.remote_run(unit, 'sudo crm status')[0]
1180+ for line in crm_out.splitlines():
1181+ line = line.lstrip()
1182+ if re.match(resource, line):
1183+ leader.add(line.split()[-1])
1184+ if len(leader) != 1:
1185+ raise Exception('Unexpected leader count: ' + str(len(leader)))
1186+ return leader.pop().split('-')[-1]
1187+
1188+
1189+def delete_crm_leader(service, resource=None):
1190+ mach_no = get_crm_leader(service, resource)
1191+ unit = mojo_utils.convert_machineno_to_unit(mach_no)
1192+ mojo_utils.delete_unit(unit)
1193
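The `wait_for_active`, `wait_for_cloudinit` and `wait_for_ping` helpers above all follow the same poll-until-true-or-timeout shape. A minimal standalone sketch of that pattern (the `wait_for` name and signature here are illustrative, not part of the spec helpers):

```python
import logging
import time


def wait_for(check, wait_time, description='condition'):
    """Poll check() once per second until it returns True or
    wait_time seconds have elapsed. Returns True on success,
    False on timeout, logging the outcome either way."""
    for _ in range(wait_time):
        if check():
            logging.info('%s reached', description)
            return True
        time.sleep(1)
    logging.error('%s not reached in %is', description, wait_time)
    return False
```

With this shape, `wait_for_ping(ip, t)` is essentially `wait_for(lambda: ping(ip), t, description='ping %s' % ip)`.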
1194=== added file 'mojo-spec-helpers/utils/mojo_utils.py'
1195--- mojo-spec-helpers/utils/mojo_utils.py 1970-01-01 00:00:00 +0000
1196+++ mojo-spec-helpers/utils/mojo_utils.py 2015-09-08 08:23:55 +0000
1197@@ -0,0 +1,356 @@
1198+#!/usr/bin/python
1199+
1200+import subprocess
1201+import yaml
1202+import os
1203+import mojo
1204+import logging
1205+import time
1206+from collections import Counter
1207+from swiftclient.client import Connection
1208+
1209+JUJU_STATUSES = {
1210+ 'good': ['ACTIVE', 'started'],
1211+ 'bad': ['error'],
1212+ 'transitional': ['pending', 'down', 'installed', 'stopped'],
1213+}
1214+
1215+
1216+def get_juju_status(service=None):
1217+ cmd = ['juju', 'status']
1218+ if service:
1219+ cmd.append(service)
1220+ status_file = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout
1221+ return yaml.load(status_file)
1222+
1223+
1224+def get_juju_units(juju_status=None, service=None):
1225+ if not juju_status:
1226+ juju_status = get_juju_status()
1227+ units = []
1228+ if service:
1229+ services = [service]
1230+ else:
1231+ services = [juju_service for juju_service in juju_status['services']]
1232+ for svc in services:
1233+ if 'units' in juju_status['services'][svc]:
1234+ for unit in juju_status['services'][svc]['units']:
1235+ units.append(unit)
1236+ return units
1237+
1238+
1239+def convert_machineno_to_unit(machineno, juju_status=None):
1240+ if not juju_status:
1241+ juju_status = get_juju_status()
1242+ services = [service for service in juju_status['services']]
1243+ for svc in services:
1244+ if 'units' in juju_status['services'][svc]:
1245+ for unit in juju_status['services'][svc]['units']:
1246+ unit_info = juju_status['services'][svc]['units'][unit]
1247+ if unit_info['machine'] == machineno:
1248+ return unit
1249+
1250+
1251+def remote_shell_check(unit):
1252+ cmd = ['juju', 'run', '--unit', unit, 'uname -a']
1253+ FNULL = open(os.devnull, 'w')
1254+ return not subprocess.call(cmd, stdout=FNULL, stderr=subprocess.STDOUT)
1255+
1256+
1257+def remote_run(unit, remote_cmd=None):
1258+ cmd = ['juju', 'run', '--unit', unit]
1259+ if remote_cmd:
1260+ cmd.append(remote_cmd)
1261+ else:
1262+ cmd.append('uname -a')
1263+ p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
1264+ output = p.communicate()
1265+ if p.returncode != 0:
1266+ raise Exception('Error running remote command on ' + unit)
1267+ return output
1268+
1269+
1270+def remote_upload(unit, script, remote_dir=None):
1271+ if remote_dir:
1272+ dst = unit + ':' + remote_dir
1273+ else:
1274+ dst = unit + ':/tmp/'
1275+ cmd = ['juju', 'scp', script, dst]
1276+ return subprocess.check_call(cmd)
1277+
1278+
1279+def delete_unit(unit):
1280+ service = unit.split('/')[0]
1281+ unit_count = len(get_juju_units(service=service))
1282+ logging.info('Removing unit ' + unit)
1283+ cmd = ['juju', 'destroy-unit', unit]
1284+ subprocess.check_call(cmd)
1285+ target_num = unit_count - 1
1286+ # Wait for the unit to disappear from juju status
1287+ while len(get_juju_units(service=service)) > target_num:
1288+ time.sleep(5)
1289+ juju_wait_finished()
1290+
1291+
1292+def add_unit(service, unit_num=None):
1293+ unit_count = len(get_juju_units(service=service))
1294+ if unit_num:
1295+ additional_units = int(unit_num)
1296+ else:
1297+ additional_units = 1
1298+ logging.info('Adding %i unit(s) to %s' % (additional_units, service))
1299+ cmd = ['juju', 'add-unit', service, '-n', str(additional_units)]
1300+ subprocess.check_call(cmd)
1301+ target_num = unit_count + additional_units
1302+ # Wait for the new unit to appear in juju status
1303+ while len(get_juju_units(service=service)) < target_num:
1304+ time.sleep(5)
1305+ juju_wait_finished()
1306+
1307+
1308+def juju_set(service, option):
1309+ subprocess.check_call(['juju', 'set', service, option])
1310+ juju_wait_finished()
1311+
1312+
1313+def juju_set_config_option(service, option_name, value):
1314+ option = "{}={}".format(option_name, value)
1315+ juju_set(service, option)
1316+
1317+
1318+def juju_get(service, option):
1319+ cmd = ['juju', 'get', service]
1320+ juju_get_output = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout
1321+ service_config = yaml.load(juju_get_output)
1322+ if 'value' in service_config['settings'][option]:
1323+ return service_config['settings'][option]['value']
1324+
1325+
1326+def get_undercload_auth():
1327+ juju_env = subprocess.check_output(['juju', 'switch']).strip('\n')
1328+ juju_env_file = open(os.environ['HOME'] + "/.juju/environments.yaml", 'r')
1329+ juju_env_contents = yaml.load(juju_env_file)
1330+ novarc_settings = juju_env_contents['environments'][juju_env]
1331+ auth_settings = {
1332+ 'OS_AUTH_URL': novarc_settings['auth-url'],
1333+ 'OS_TENANT_NAME': novarc_settings['tenant-name'],
1334+ 'OS_USERNAME': novarc_settings['username'],
1335+ 'OS_PASSWORD': novarc_settings['password'],
1336+ 'OS_REGION_NAME': novarc_settings['region'],
1337+ }
1338+ return auth_settings
1339+
1340+
1341+# Openstack Client helpers
1342+def get_auth_url(juju_status=None):
1343+ if juju_get('keystone', 'vip'):
1344+ return juju_get('keystone', 'vip')
1345+ if not juju_status:
1346+ juju_status = get_juju_status()
1347+ unit = juju_status['services']['keystone']['units'].itervalues().next()
1348+ return unit['public-address']
1349+
1350+
1351+def get_overcloud_auth(juju_status=None):
1352+ if not juju_status:
1353+ juju_status = get_juju_status()
1354+ if juju_get('keystone', 'use-https').lower() == 'yes':
1355+ transport = 'https'
1356+ port = 35357
1357+ else:
1358+ transport = 'http'
1359+ port = 5000
1360+ address = get_auth_url()
1361+ auth_settings = {
1362+ 'OS_AUTH_URL': '%s://%s:%i/v2.0' % (transport, address, port),
1363+ 'OS_TENANT_NAME': 'admin',
1364+ 'OS_USERNAME': 'admin',
1365+ 'OS_PASSWORD': 'openstack',
1366+ 'OS_REGION_NAME': 'RegionOne',
1367+ }
1368+ return auth_settings
1369+
1370+
1371+def get_mojo_file(filename):
1372+ spec = mojo.Spec(os.environ['MOJO_SPEC_DIR'])
1373+ return spec.get_config(filename, stage=os.environ['MOJO_STAGE'])
1374+
1375+
1376+def get_mojo_spec_revno():
1377+ """
1378+ Get the current revision number of the mojo spec
1379+ """
1380+
1381+ revno_command = 'bzr revno {}'.format(os.environ['MOJO_SPEC_DIR'])
1382+ return subprocess.check_output(revno_command.split()).strip()
1383+
1384+
1385+def get_mojo_config(filename):
1386+ config_file = get_mojo_file(filename)
1387+ logging.info('Using config %s' % (config_file))
1388+ return yaml.load(file(config_file, 'r'))
1389+
1390+
1391+def get_charm_dir():
1392+ return os.path.join(os.environ['MOJO_REPO_DIR'],
1393+ os.environ['MOJO_SERIES'])
1394+
1395+
1396+def sync_charmhelpers(charmdir):
1397+ p = subprocess.Popen(['make', 'sync'], cwd=charmdir)
1398+ p.communicate()
1399+
1400+
1401+def sync_all_charmhelpers():
1402+ charm_base_dir = get_charm_dir()
1403+ for direc in os.listdir(charm_base_dir):
1404+ charm_dir = os.path.join(charm_base_dir, direc)
1405+ if os.path.isdir(charm_dir):
1406+ sync_charmhelpers(charm_dir)
1407+
1408+
1409+def parse_mojo_arg(options, mojoarg, multiargs=False):
1410+ if mojoarg.upper() in os.environ:
1411+ if multiargs:
1412+ return os.environ[mojoarg.upper()].split()
1413+ else:
1414+ return os.environ[mojoarg.upper()]
1415+ else:
1416+ return getattr(options, mojoarg)
1417+
1418+
1419+def get_machine_state(juju_status, state_type):
1420+ states = Counter()
1421+ for machine_no in juju_status['machines']:
1422+ if state_type in juju_status['machines'][machine_no]:
1423+ state = juju_status['machines'][machine_no][state_type]
1424+ else:
1425+ state = 'unknown'
1426+ states[state] += 1
1427+ return states
1428+
1429+
1430+def get_machine_agent_states(juju_status):
1431+ return get_machine_state(juju_status, 'agent-state')
1432+
1433+
1434+def get_machine_instance_states(juju_status):
1435+ return get_machine_state(juju_status, 'instance-state')
1436+
1437+
1438+def get_service_agent_states(juju_status):
1439+ service_state = Counter()
1440+ for service in juju_status['services']:
1441+ if 'units' in juju_status['services'][service]:
1442+ for unit in juju_status['services'][service]['units']:
1443+ unit_info = juju_status['services'][service]['units'][unit]
1444+ service_state[unit_info['agent-state']] += 1
1445+ if 'subordinates' in unit_info:
1446+ for sub_unit in unit_info['subordinates']:
1447+ sub_sstate = \
1448+ unit_info['subordinates'][sub_unit]['agent-state']
1449+ service_state[sub_sstate] += 1
1450+ return service_state
1451+
1452+
1453+def juju_status_summary(heading, statetype, states):
1454+ print heading
1455+ print " " + statetype
1456+ for state in states:
1457+ print " %s: %i" % (state, states[state])
1458+
1459+
1460+def juju_status_error_check(states):
1461+ for state in states:
1462+ if state in JUJU_STATUSES['bad']:
1463+ logging.error('Some statuses are in a bad state')
1464+ return True
1465+ logging.info('No statuses are in a bad state')
1466+ return False
1467+
1468+
1469+def juju_status_all_stable(states):
1470+ for state in states:
1471+ if state in JUJU_STATUSES['transitional']:
1472+ logging.info('Some statuses are in a transitional state')
1473+ return False
1474+ logging.info('Statuses are in a stable state')
1475+ return True
1476+
1477+
1478+def juju_status_check_and_wait():
1479+ checks = {
1480+ 'Machines': [{
1481+ 'Heading': 'Instance State',
1482+ 'check_func': get_machine_instance_states,
1483+ },
1484+ {
1485+ 'Heading': 'Agent State',
1486+ 'check_func': get_machine_agent_states,
1487+ }],
1488+ 'Services': [{
1489+ 'Heading': 'Agent State',
1490+ 'check_func': get_service_agent_states,
1491+ }]
1492+ }
1493+ stable_state = [False]
1494+ while False in stable_state:
1495+ juju_status = get_juju_status()
1496+ stable_state = []
1497+ for juju_objtype, check_info in checks.iteritems():
1498+ for check in check_info:
1499+ check_function = check['check_func']
1500+ states = check_function(juju_status)
1501+ if juju_status_error_check(states):
1502+ raise Exception("Error in juju status")
1503+ stable_state.append(juju_status_all_stable(states))
1504+ time.sleep(5)
1505+ for juju_objtype, check_info in checks.iteritems():
1506+ for check in check_info:
1507+ check_function = check['check_func']
1508+ states = check_function(juju_status)
1509+ juju_status_summary(juju_objtype, check['Heading'], states)
1510+
1511+
1512+def remote_runs(units):
1513+ for unit in units:
1514+ if not remote_shell_check(unit):
1515+ raise Exception('Juju run failed on ' + unit)
1516+
1517+
1518+def juju_check_hooks_complete():
1519+ juju_units = get_juju_units()
1520+ remote_runs(juju_units)
1521+ remote_runs(juju_units)
1522+
1523+
1524+def juju_wait_finished():
1525+ # Wait till all statuses are green
1526+ juju_status_check_and_wait()
1527+ # juju status may report all has finished while hooks are still
1528+ # firing, so check:
1529+ juju_check_hooks_complete()
1530+ # Check nothing has subsequently gone bad
1531+ juju_status_check_and_wait()
1532+
1533+
1534+def build_swift_connection():
1535+ """
1536+ Create a Swift Connection object from the environment variables
1537+ OS_TENANT_NAME, OS_STORAGE_URL, OS_AUTH_URL, OS_USERNAME
1538+ OS_PASSWORD
1539+ """
1540+
1541+ # Get extra Swift options like tenant name and storage URL
1542+ os_options = {'tenant_name': os.environ.get('OS_TENANT_NAME')}
1543+ storage_url = os.environ.get('OS_STORAGE_URL')
1544+ if storage_url:
1545+ os_options['object_storage_url'] = storage_url
1546+
1547+ return Connection(
1548+ os.environ.get('OS_AUTH_URL'),
1549+ os.environ.get('OS_USERNAME'),
1550+ os.environ.get('OS_PASSWORD'),
1551+ auth_version='2.0',
1552+ os_options=os_options
1553+ )
1554
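The green/bad/transitional checks above reduce to classifying the multiset of observed agent states against the `JUJU_STATUSES` buckets. A self-contained sketch of that classification (a simplified model for illustration, not the helpers' exact API):

```python
from collections import Counter

# State buckets mirroring JUJU_STATUSES in mojo_utils.py
BAD = {'error'}
TRANSITIONAL = {'pending', 'down', 'installed', 'stopped'}


def classify(states):
    """Summarise a Counter of unit states as 'bad', 'transitional'
    or 'stable'. Any bad state wins; otherwise any transitional
    state means we keep waiting; otherwise the environment is stable."""
    if any(state in BAD for state in states):
        return 'bad'
    if any(state in TRANSITIONAL for state in states):
        return 'transitional'
    return 'stable'
```

`juju_status_check_and_wait` then amounts to looping until this classification returns 'stable', raising as soon as it ever returns 'bad'.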
1555=== added file 'mojo-spec-helpers/utils/shyaml.py'
1556--- mojo-spec-helpers/utils/shyaml.py 1970-01-01 00:00:00 +0000
1557+++ mojo-spec-helpers/utils/shyaml.py 2015-09-08 08:23:55 +0000
1558@@ -0,0 +1,219 @@
1559+#!/usr/bin/env python
1560+
1561+# Note: to launch test, you can use:
1562+# python -m doctest -d shyaml.py
1563+# or
1564+# nosetests
1565+
1566+from __future__ import print_function
1567+
1568+import sys
1569+import yaml
1570+import os.path
1571+import re
1572+
1573+EXNAME = os.path.basename(sys.argv[0])
1574+
1575+
1576+def tokenize(s):
1577+ r"""Return an iterator over all subparts of a '.'-separated string.
1578+ So:
1579+ >>> list(tokenize('foo.bar.wiz'))
1580+ ['foo', 'bar', 'wiz']
1581+ This function has to deal with any type of data in the string, so it
1582+ actually interprets the string. Characters with meaning are '.' and '\'.
1583+ Both of these can be included in a token by quoting them with '\'.
1584+ So dots or backslashes can be contained in a token:
1585+ >>> print('\n'.join(tokenize(r'foo.dot<\.>.slash<\\>')))
1586+ foo
1587+ dot<.>
1588+ slash<\>
1589+ Notice that empty keys are also supported:
1590+ >>> list(tokenize(r'foo..bar'))
1591+ ['foo', '', 'bar']
1592+ Given an empty string:
1593+ >>> list(tokenize(r''))
1594+ ['']
1595+ And a None value:
1596+ >>> list(tokenize(None))
1597+ []
1598+ """
1599+ if s is None:
1600+ return
1601+ tokens = (re.sub(r'\\(\\|\.)', r'\1', m.group(0))
1602+ for m in re.finditer(r'((\\.|[^.\\])*)', s))
1603+ # a superfluous empty-string token is emitted after each
1604+ # non-empty token, so skip it:
1605+ for token in tokens:
1606+ if len(token) != 0:
1607+ next(tokens)
1608+ yield token
1609+
1610+
1611+def mget(dct, key, default=None):
1612+ r"""Get values deep in a dict using dotted keys.
1613+ Accessing leaf values is quite straightforward:
1614+ >>> dct = {'a': {'x': 1, 'b': {'c': 2}}}
1615+ >>> mget(dct, 'a.x')
1616+ 1
1617+ >>> mget(dct, 'a.b.c')
1618+ 2
1619+ But you can also get a subdict if your key does not target a
1620+ leaf value:
1621+ >>> mget(dct, 'a.b')
1622+ {'c': 2}
1623+ As a special feature, list access is also supported by providing a
1624+ (possibly signed) integer; it'll be interpreted as usual Python
1625+ sequence access using bracket notation:
1626+ >>> mget({'a': {'x': [1, 5], 'b': {'c': 2}}}, 'a.x.-1')
1627+ 5
1628+ >>> mget({'a': {'x': 1, 'b': [{'c': 2}]}}, 'a.b.0.c')
1629+ 2
1630+ Keys that contain '.' can be accessed by escaping them:
1631+ >>> dct = {'a': {'x': 1}, 'a.x': 3, 'a.y': 4}
1632+ >>> mget(dct, 'a.x')
1633+ 1
1634+ >>> mget(dct, r'a\.x')
1635+ 3
1636+ >>> mget(dct, r'a.y')
1637+ >>> mget(dct, r'a\.y')
1638+ 4
1639+ As a consequence, if your key contains a '\', you should also escape it:
1640+ >>> dct = {r'a\x': 3, r'a\.x': 4, 'a.x': 5, 'a\\': {'x': 6}}
1641+ >>> mget(dct, r'a\\x')
1642+ 3
1643+ >>> mget(dct, r'a\\\.x')
1644+ 4
1645+ >>> mget(dct, r'a\\.x')
1646+ 6
1647+ >>> mget({'a\\': {'b': 1}}, r'a\\.b')
1648+ 1
1649+ >>> mget({r'a.b\.c': 1}, r'a\.b\\\.c')
1650+ 1
1651+ And even empty string keys are supported:
1652+ >>> dct = {r'a': {'': {'y': 3}, 'y': 4}, 'b': {'': {'': 1}}, '': 2}
1653+ >>> mget(dct, r'a..y')
1654+ 3
1655+ >>> mget(dct, r'a.y')
1656+ 4
1657+ >>> mget(dct, r'')
1658+ 2
1659+ >>> mget(dct, r'b..')
1660+ 1
1661+ mget also supports a default value if the key is not found:
1662+ >>> mget({'a': 1}, 'b.y', default='N/A')
1663+ 'N/A'
1664+ but will complain if you try to index into a leaf:
1665+ >>> mget({'a': 1}, 'a.y', default='N/A') # doctest: +ELLIPSIS
1666+ Traceback (most recent call last):
1667+ ...
1668+ TypeError: 'int' object ...
1669+ if the key is None, the whole dct is returned:
1670+ >>> mget({'a': 1}, None)
1671+ {'a': 1}
1672+ """
1673+ return aget(dct, tokenize(key), default)
1674+
1675+
1676+def aget(dct, key, default=None):
1677+ r"""Get values deep in a dict using an iterable of keys.
1678+ Accessing leaf values is quite straightforward:
1679+ >>> dct = {'a': {'x': 1, 'b': {'c': 2}}}
1680+ >>> aget(dct, ('a', 'x'))
1681+ 1
1682+ >>> aget(dct, ('a', 'b', 'c'))
1683+ 2
1684+ If key is empty, it returns the ``dct`` value unchanged.
1685+ >>> aget({'x': 1}, ())
1686+ {'x': 1}
1687+ """
1688+ key = iter(key)
1689+ try:
1690+ head = next(key)
1691+ except StopIteration:
1692+ return dct
1693+ try:
1694+ value = dct[int(head)] if isinstance(dct, list) else dct[head]
1695+ except KeyError:
1696+ return default
1697+ return aget(value, key, default)
1698+
1699+
1700+def stderr(msg):
1701+ sys.stderr.write(msg + "\n")
1702+
1703+
1704+def die(msg, errlvl=1, prefix="Error: "):
1705+ stderr("%s%s" % (prefix, msg))
1706+ sys.exit(errlvl)
1707+
1708+SIMPLE_TYPES = (str, int, float)
1709+COMPLEX_TYPES = (list, dict)
1710+
1711+
1712+def dump(value):
1713+ return value if isinstance(value, SIMPLE_TYPES) \
1714+ else yaml.dump(value, default_flow_style=False)
1715+
1716+
1717+def type_name(value):
1718+ """Returns pseudo-YAML type name of given value."""
1719+ return "struct" if isinstance(value, dict) else \
1720+ "sequence" if isinstance(value, (tuple, list)) else \
1721+ type(value).__name__
1722+
1723+
1724+def stdout(value):
1725+ sys.stdout.write(value)
1726+
1727+
1728+def main(args):
1729+ usage = """usage:
1730+ %(exname)s {get-value,get-values{,-0},get-type,keys{,-0},values{,-0}} KEY DEFAULT
1731+ """ % {"exname": EXNAME}
1732+ if len(args) == 0:
1733+ die(usage, errlvl=0, prefix="")
1734+ action = args[0]
1735+ key_value = None if len(args) == 1 else args[1]
1736+ default = args[2] if len(args) > 2 else ""
1737+ contents = yaml.load(sys.stdin)
1738+ try:
1739+ value = mget(contents, key_value, default)
1740+ except IndexError:
1741+ die("list index error in path %r." % key_value)
1742+ except (KeyError, TypeError):
1743+ die("invalid path %r." % key_value)
1744+
1745+ tvalue = type_name(value)
1746+ termination = "\0" if action.endswith("-0") else "\n"
1747+
1748+ if action == "get-value":
1749+ print(dump(value), end='')
1750+ elif action in ("get-values", "get-values-0"):
1751+ if isinstance(value, dict):
1752+ for k, v in value.iteritems():
1753+ stdout("%s%s%s%s" % (dump(k), termination,
1754+ dump(v), termination))
1755+ elif isinstance(value, list):
1756+ for l in value:
1757+ stdout("%s%s" % (dump(l), termination))
1758+ else:
1759+ die("%s does not support %r type. "
1760+ "Please provide or select a sequence or struct."
1761+ % (action, tvalue))
1762+ elif action == "get-type":
1763+ print(tvalue)
1764+ elif action in ("keys", "keys-0", "values", "values-0"):
1765+ if isinstance(value, dict):
1766+ method = value.keys if action.startswith("keys") else value.values
1767+ for k in method():
1768+ stdout("%s%s" % (dump(k), termination))
1769+ else:
1770+ die("%s does not support %r type. "
1771+ "Please provide or select a struct." % (action, tvalue))
1772+ else:
1773+ die("Invalid argument.\n%s" % usage)
1774+
1775+
1776+if __name__ == "__main__":
1777+ sys.exit(main(sys.argv[1:]))
1778
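The recursive `aget` above can equivalently be written as an iterative walk, which makes the lookup rule easier to see at a glance. A sketch for illustration (it deliberately skips the escaping handled by `tokenize`, so it is not a drop-in replacement):

```python
def aget(value, keys, default=None):
    """Walk a parsed YAML structure following an iterable of keys,
    indexing lists by integer position and dicts by key. Returns
    default when a key is missing (a simplified, iterative aget)."""
    for key in keys:
        try:
            # Lists are indexed positionally, everything else by key
            value = value[int(key)] if isinstance(value, list) else value[key]
        except (KeyError, IndexError):
            return default
    return value
```

As in the recursive version, an empty key sequence returns the structure unchanged, and indexing into a leaf raises rather than returning the default.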
1779=== added directory 'mojo-spec-helpers/utils/tests'
1780=== added file 'mojo-spec-helpers/utils/tests/README.md'
1781--- mojo-spec-helpers/utils/tests/README.md 1970-01-01 00:00:00 +0000
1782+++ mojo-spec-helpers/utils/tests/README.md 2015-09-08 08:23:55 +0000
1783@@ -0,0 +1,67 @@
1784+Tests for Python managers
1785+===
1786+
1787+These are tests for `container_managers.py`, `cache_managers.py`
1788+and some functions in `mojo_utils.py`.
1789+
1790+Setup
1791+---
1792+
1793+### A test container
1794+
1795+Tests for `container_managers.py` require a test container to be set up with objects named as follows:
1796+
1797+- latest-build-label
1798+- deployed-build-label
1799+- deployed-spec-revno
1800+- code-upgrade-test-build-label-01-test-build-label-02-succeeded
1801+- mojo-run-155-succeeded
1802+
1803+This container should be openly readable:
1804+
1805+``` bash
1806+swift post --read-acl .r:* ${TEST_CONTAINER_NAME}
1807+```
1808+
1809+You should then set the URL for this container in the environment variable `TEST_CONTAINER_URL`:
1810+
1811+``` bash
1812+export TEST_CONTAINER_URL=$(swift stat -v ${TEST_CONTAINER_NAME} | egrep -o 'http.*$')
1813+```
1814+
1815+### A local swift account
1816+
1817+You also need to make the credentials for connecting to a Swift account available via `OS_*` environment variables:
1818+
1819+``` bash
1820+export OS_USERNAME=${USERNAME}
1821+export OS_TENANT_NAME=${TENANT_NAME}
1822+export OS_PASSWORD=${PASSWORD}
1823+export OS_STORAGE_URL=${OPTIONAL_STORAGE_URL}
1824+export OS_AUTH_URL=${AUTH_URL}
1825+export OS_REGION_NAME=${REGION_NAME}
1826+```
1827+
1828+### PYTHONPATH
1829+
1830+You also need to add the parent directory to your python path to run the tests:
1831+
1832+``` bash
1833+export PYTHONPATH=..
1834+```
1835+
1836+Running the tests
1837+---
1838+
1839+You can either run the tests directly with Python:
1840+
1841+``` bash
1842+./run_tests.py
1843+```
1844+
1845+Or with `pytest`:
1846+
1847+``` bash
1848+py.test
1849+```
1850+
1851
1852=== added file 'mojo-spec-helpers/utils/tests/run_tests.py'
1853--- mojo-spec-helpers/utils/tests/run_tests.py 1970-01-01 00:00:00 +0000
1854+++ mojo-spec-helpers/utils/tests/run_tests.py 2015-09-08 08:23:55 +0000
1855@@ -0,0 +1,18 @@
1856+#!/usr/bin/env python
1857+
1858+"""
1859+These are tests for container_managers.py, cache_managers.py
1860+and some functions in mojo_utils.py
1861+
1862+To run the container_tests, you'll need to set several environment
1863+variables - see test_container_managers.py
1864+"""
1865+
1866+from test_container_managers import container_tests
1867+from test_cache_managers import test_cache_managers
1868+from test_mojo_utils import test_get_mojo_spec_revno
1869+
1870+
1871+container_tests()
1872+test_cache_managers()
1873+test_get_mojo_spec_revno()

=== added file 'mojo-spec-helpers/utils/tests/test_cache_managers.py'
--- mojo-spec-helpers/utils/tests/test_cache_managers.py	1970-01-01 00:00:00 +0000
+++ mojo-spec-helpers/utils/tests/test_cache_managers.py	2015-09-08 08:23:55 +0000
@@ -0,0 +1,80 @@
+# System imports
+import os
+import subprocess
+import json
+
+# Local imports
+from cache_managers import JsonCache
+
+
+def _check_cache_file(cache, cache_path):
+    # Check file exists
+    assert os.path.isfile(cache_path), "Cache file not created"
+    print u"\u2713 Cache file exists: {}".format(cache_path)
+
+    # Check the data
+    with open(cache_path) as cache_file:
+        assert json.load(cache_file) == cache.get_cache(), "Bad data"
+    print u"\u2713 Cache file data is correct"
+
+
+def test_cache_managers():
+    print (
+        "\n===\n"
+        "Test JsonCache"
+        "\n===\n"
+    )
+
+    cache_path = subprocess.check_output(
+        'mktemp -u /tmp/cache-XXXX.json'.split()
+    ).strip()
+
+    cache = JsonCache(cache_path=cache_path)
+
+    fake_data = {
+        'sentence': 'hello world',
+        'number': 12,
+        'array': [1, 2, 3, "fish"]
+    }
+
+    # Insert items using "set"
+    for key, value in fake_data.iteritems():
+        cache.set(key, value)
+
+    # Check data against inserted data
+    assert fake_data == cache.get_cache(), "Data not set correctly"
+    print u"\u2713 Data correctly inserted"
+
+    # Verify cache file
+    _check_cache_file(cache, cache_path)
+
+    # Check retrieving each key with "get"
+    for key, value in fake_data.iteritems():
+        assert cache.get(key) == value
+
+    print u"\u2713 Successfully retrieved items with 'get'"
+
+    # Check wiping the cache
+    cache.wipe()
+
+    assert not os.path.isfile(cache_path), "Cache file shouldn't exist!"
+    assert not cache.get(fake_data.keys()[0]), "Cache still returning data"
+    print u"\u2713 Cache successfully wiped"
+
+    # Recreate cache with "put_cache"
+    cache.put_cache(fake_data)
+
+    # Check data against inserted data
+    assert fake_data == cache.get_cache()
+    print u"\u2713 Data correctly inserted"
+
+    # Check file integrity
+    _check_cache_file(cache, cache_path)
+
+    # Clean up
+    cache.wipe()
+    assert not os.path.isfile(cache_path), "Cache file shouldn't exist!"
+    print u"\u2713 Deleted {}".format(cache_path)
+
+if __name__ == "__main__":
+    test_cache_managers()

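The behaviour this test exercises can be sketched as a small JSON-file-backed store. The class below is a minimal illustrative stand-in written for this review, assuming only the `set`/`get`/`get_cache`/`put_cache`/`wipe` interface the test calls; it is not the branch's actual `cache_managers.JsonCache` implementation.

```python
# Minimal sketch of a JSON-file-backed cache (illustrative stand-in only,
# NOT the cache_managers.JsonCache implementation from this branch).
import json
import os


class MiniJsonCache:
    """Persist a dict of key/value pairs as a JSON file on disk."""

    def __init__(self, cache_path):
        self.cache_path = cache_path

    def get_cache(self):
        # Whole cache as a dict; empty if the file has never been written
        if not os.path.isfile(self.cache_path):
            return {}
        with open(self.cache_path) as cache_file:
            return json.load(cache_file)

    def put_cache(self, data):
        # Replace the entire cache contents
        with open(self.cache_path, 'w') as cache_file:
            json.dump(data, cache_file)

    def set(self, key, value):
        # Read-modify-write a single key
        data = self.get_cache()
        data[key] = value
        self.put_cache(data)

    def get(self, key):
        return self.get_cache().get(key)

    def wipe(self):
        # Delete the backing file; subsequent gets return nothing
        if os.path.isfile(self.cache_path):
            os.remove(self.cache_path)
```

A store like this gives the same set/verify/wipe/recreate lifecycle the test walks through above.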
=== added file 'mojo-spec-helpers/utils/tests/test_container_managers.py'
--- mojo-spec-helpers/utils/tests/test_container_managers.py	1970-01-01 00:00:00 +0000
+++ mojo-spec-helpers/utils/tests/test_container_managers.py	2015-09-08 08:23:55 +0000
@@ -0,0 +1,188 @@
+# System
+import os
+
+# Local imports
+from container_managers import (
+    BuildContainer,
+    CIContainer,
+    DeployedEnvironmentContainer,
+    LocalEnvironmentSwiftContainer
+)
+from mojo_utils import build_swift_connection
+
+"""
+Tests for container_managers.
+
+To test these you'll need to set a TEST_CONTAINER_URL,
+which should be the HTTP URL to a swift container which contains
+the following publicly readable objects:
+- latest-build-label
+- deployed-build-label
+- deployed-spec-revno
+- code-upgrade-test-build-label-01-test-build-label-02-succeeded
+- mojo-run-155-succeeded
+
+You'll also need to have the following environment variables set up
+with credentials to connect to a valid (testing) Swift account:
+
+OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_REGION_NAME
+"""
+
+test_container_url = os.environ.get('TEST_CONTAINER_URL')
+
+
+def test_deployed_environment_container():
+    print (
+        "\n===\n"
+        "Test DeployedEnvironmentContainer"
+        "\n===\n"
+    )
+
+    deployed_env_container = DeployedEnvironmentContainer(
+        container_url=test_container_url,
+    )
+
+    build_label = deployed_env_container.deployed_build_label()
+    assert build_label, "Build label missing"
+    print u"\u2713 Found build label: {}".format(build_label)
+
+    spec_revno = deployed_env_container.deployed_spec_revno()
+    assert spec_revno, "Spec revno missing"
+    print u"\u2713 Found spec revno (from web): {}".format(spec_revno)
+
+
+def test_local_environment_swift_container():
+    print (
+        "\n===\n"
+        "Test LocalEnvironmentSwiftContainer"
+        "\n===\n"
+    )
+
+    swift_connection = build_swift_connection()
+
+    container_name = 'test-container'
+
+    swift_connection.put_container(container_name)
+
+    container = LocalEnvironmentSwiftContainer(
+        swift_connection=swift_connection,
+        container_name=container_name
+    )
+
+    deployed_build_label = "fiddlesticksandfishes"
+    previous_build_label = "fishesandfiddlesticks"
+    deployed_spec_revno = '111'
+
+    container.save_deployed_build_label(deployed_build_label)
+    container.save_previous_build_label(previous_build_label)
+    container.save_mojo_spec_revno(deployed_spec_revno)
+
+    assert container.deployed_build_label() == deployed_build_label
+    assert container.previous_build_label() == previous_build_label
+    assert container.deployed_spec_revno() == deployed_spec_revno
+    print u"\u2713 Saved and retrieved build_labels and spec_revno"
+
+    # Get names for objects
+    previous_obj = container.previous_build_obj
+    deployed_obj = container.deployed_build_obj
+    revno_obj = container.deployed_revno_obj
+    upgrade_object = container.code_upgrade_succeeded_template.format(
+        previous_build_label, deployed_build_label
+    )
+    run_object = container.mojo_run_succeeded_template.format(
+        deployed_spec_revno
+    )
+
+    # Remove the marker objects if they already exist
+    for object_name in [previous_obj, deployed_obj, revno_obj,
+                        upgrade_object, run_object]:
+        try:
+            swift_connection.delete_object(container_name, object_name)
+        except Exception:
+            pass
+
+    container.save_code_upgrade_succeeded(
+        previous_build_label, deployed_build_label
+    )
+
+    container.save_mojo_run_succeeded(deployed_spec_revno)
+
+    upgrade_head = swift_connection.head_object(container_name, upgrade_object)
+
+    run_head = swift_connection.head_object(container_name, run_object)
+
+    # Make sure the objects were saved to swift
+    assert int(upgrade_head['content-length']) > 0
+    assert int(run_head['content-length']) > 0
+
+    print u"\u2713 Code upgrade and mojo run successfully saved"
+
+
+def test_latest_build():
+    print (
+        "\n===\n"
+        "Test BuildContainer"
+        "\n===\n"
+    )
+
+    build_container = BuildContainer(
+        container_url=test_container_url
+    )
+
+    build_label = build_container.latest_build_label()
+    assert build_label, "Build label missing"
+    print u"\u2713 Found latest build label: {}".format(build_label)
+
+
+def test_ci_container():
+    print (
+        "\n===\n"
+        "Test CIContainer"
+        "\n===\n"
+    )
+
+    ci_container = CIContainer(
+        container_url=test_container_url
+    )
+
+    build_one = "test-build-label-01"
+    build_two = "test-build-label-02"
+
+    good_upgrade = ci_container.has_code_upgrade_been_tested(
+        from_build_label=build_one,
+        to_build_label=build_two
+    )
+    bad_upgrade = ci_container.has_code_upgrade_been_tested(
+        from_build_label=build_two,
+        to_build_label=build_one
+    )
+
+    assert good_upgrade, "Good upgrade test missing"
+    assert bad_upgrade is False, "Bad upgrade isn't false"
+    print u"\u2713 Upgrades checked successfully"
+
+    good_mojo_test = ci_container.has_mojo_run_been_tested(spec_revno="155")
+    bad_mojo_test = ci_container.has_mojo_run_been_tested(spec_revno="99999")
+
+    assert good_mojo_test, "Good mojo test missing"
+    assert bad_mojo_test is False, "Bad mojo test isn't false"
+    print u"\u2713 Mojo runs checked successfully"
+
+
+def container_tests():
+    test_deployed_environment_container()
+    test_local_environment_swift_container()
+    test_latest_build()
+    test_ci_container()
+
+
+if __name__ == "__main__":
+    container_tests()

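The "succeeded" marker objects that `CIContainer` checks for in Swift follow a simple naming convention, which can be inferred from the example objects listed in the module docstring above. The two format strings below are derived from those example names; the helper functions themselves are hypothetical, added for illustration, and are not necessarily the attributes on `LocalEnvironmentSwiftContainer`.

```python
# Sketch of the marker-object naming convention (formats inferred from the
# docstring's required-objects list; helper names are hypothetical).
CODE_UPGRADE_TEMPLATE = 'code-upgrade-{}-{}-succeeded'
MOJO_RUN_TEMPLATE = 'mojo-run-{}-succeeded'


def code_upgrade_marker(from_build_label, to_build_label):
    """Object name recording a tested from->to code upgrade."""
    return CODE_UPGRADE_TEMPLATE.format(from_build_label, to_build_label)


def mojo_run_marker(spec_revno):
    """Object name recording a successful mojo run of a spec revno."""
    return MOJO_RUN_TEMPLATE.format(spec_revno)
```

Under this convention, the swapped-argument `bad_upgrade` lookup in the test maps to an object name that was never created, which is why it is expected to be `False`.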
=== added file 'mojo-spec-helpers/utils/tests/test_mojo_utils.py'
--- mojo-spec-helpers/utils/tests/test_mojo_utils.py	1970-01-01 00:00:00 +0000
+++ mojo-spec-helpers/utils/tests/test_mojo_utils.py	2015-09-08 08:23:55 +0000
@@ -0,0 +1,23 @@
+# System imports
+import os
+
+# Local imports
+from mojo_utils import get_mojo_spec_revno
+
+
+def test_get_mojo_spec_revno():
+    print (
+        "\n===\n"
+        "Test get_mojo_spec_revno"
+        "\n===\n"
+    )
+
+    os.environ['MOJO_SPEC_DIR'] = os.path.abspath(
+        __file__ + '/../../../..'
+    )
+
+    assert get_mojo_spec_revno().isdigit()
+    print u"\u2713 Successfully retrieved revno"
+
+if __name__ == "__main__":
+    test_get_mojo_spec_revno()
