Merge lp:~mbruzek/charms/trusty/kubernetes/v1.0.0 into lp:~kubernetes/charms/trusty/kubernetes/trunk

Proposed by Matt Bruzek
Status: Merged
Merged at revision: 7
Proposed branch: lp:~mbruzek/charms/trusty/kubernetes/v1.0.0
Merge into: lp:~kubernetes/charms/trusty/kubernetes/trunk
Diff against target: 1499 lines (+188/-1128)
17 files modified
charm-helpers.yaml (+0/-5)
cli.txt (+0/-52)
copyright (+7/-7)
docs/1-getting-started.md (+0/-81)
docs/contributing.md (+0/-52)
files/cadvisor.upstart.tmpl (+1/-1)
files/create_kubernetes_tar.sh (+0/-59)
files/kubelet.upstart.tmpl (+2/-1)
files/proxy.upstart.tmpl (+0/-1)
hooks/charmhelpers/core/hookenv.py (+0/-498)
hooks/charmhelpers/core/host.py (+0/-311)
hooks/hooks.py (+39/-60)
hooks/kubernetes_installer.py (+2/-0)
hooks/lib/__init__.py (+3/-0)
hooks/lib/registrator.py (+84/-0)
unit_tests/lib/test_registrator.py (+48/-0)
unit_tests/test_hooks.py (+2/-0)
To merge this branch: bzr merge lp:~mbruzek/charms/trusty/kubernetes/v1.0.0
Reviewer: Charles Butler (community)
Status: Approve
Review via email: mp+265177@code.launchpad.net

Description of the change

Vendoring the changes to the kubernetes charm for the v1.0.0 release of Kubernetes.

Revision history for this message
Charles Butler (lazypower) wrote:

LGTM

Nice callout on the etcd changes before review. I think we have an opportunity to look into this in a lab environment and see what else we can do with regard to networking resolution outside of AWS and GCE.

review: Approve

Preview Diff

=== removed file 'charm-helpers.yaml'
--- charm-helpers.yaml 2015-01-27 17:31:57 +0000
+++ charm-helpers.yaml 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
1destination: hooks/charmhelpers
2branch: lp:charmhelpers
3include:
4 - core
5 - fetch
=== removed file 'cli.txt'
--- cli.txt 2015-01-27 17:31:57 +0000
+++ cli.txt 1970-01-01 00:00:00 +0000
@@ -1,52 +0,0 @@
1$ ./kubelet -h
2Usage of ./kubelet:
3 -address=127.0.0.1: The IP address for the info server to serve on (set to 0.0.0.0 for all interfaces)
4 -allow_privileged=false: If true, allow containers to request privileged mode. [default=false]
5 -alsologtostderr=false: log to standard error as well as files
6 -config="": Path to the config file or directory of files
7 -docker_endpoint="": If non-empty, use this for the docker endpoint to communicate with
8 -enable_debugging_handlers=true: Enables server endpoints for log collection and local running of containers and commands
9 -enable_server=true: Enable the info server
10 -etcd_config="": The config file for the etcd client. Mutually exclusive with -etcd_servers
11 -etcd_servers=[]: List of etcd servers to watch (http://ip:port), comma separated. Mutually exclusive with -etcd_config
12 -file_check_frequency=20s: Duration between checking config files for new data
13 -hostname_override="": If non-empty, will use this string as identification instead of the actual hostname.
14 -http_check_frequency=20s: Duration between checking http for new data
15 -log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
16 -log_dir="": If non-empty, write log files in this directory
17 -log_flush_frequency=5s: Maximum number of seconds between log flushes
18 -logtostderr=false: log to standard error instead of files
19 -manifest_url="": URL for accessing the container manifest
20 -network_container_image="kubernetes/pause:latest": The image that network containers in each pod will use.
21 -port=10250: The port for the info server to serve on
22 -registry_burst=10: Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry_qps. Only used if --registry_qps > 0
23 -registry_qps=0: If > 0, limit registry pull QPS to this value. If 0, unlimited. [default=0.0]
24 -root_dir="/var/lib/kubelet": Directory path for managing kubelet files (volume mounts,etc).
25 -runonce=false: If true, exit after spawning pods from local manifests or remote urls. Exclusive with --etcd_servers and --enable-server
26 -stderrthreshold=0: logs at or above this threshold go to stderr
27 -sync_frequency=10s: Max period between synchronizing running containers and config
28 -v=0: log level for V logs
29 -version=false: Print version information and quit
30 -vmodule=: comma-separated list of pattern=N settings for file-filtered logging
31
32
33$ ./proxy -h
34Usage of ./proxy:
35 -alsologtostderr=false: log to standard error as well as files
36 -api_version="": The API version to use when talking to the server
37 -bind_address=0.0.0.0: The address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
38 -certificate_authority="": Path to a cert. file for the certificate authority
39 -client_certificate="": Path to a client key file for TLS.
40 -client_key="": Path to a client key file for TLS.
41 -etcd_config="": The config file for the etcd client. Mutually exclusive with -etcd_servers
42 -etcd_servers=[]: List of etcd servers to watch (http://ip:port), comma separated (optional). Mutually exclusive with -etcd_config
43 -insecure_skip_tls_verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
44 -log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
45 -log_dir="": If non-empty, write log files in this directory
46 -log_flush_frequency=5s: Maximum number of seconds between log flushes
47 -logtostderr=false: log to standard error instead of files
48 -master="": The address of the Kubernetes API server
49 -stderrthreshold=0: logs at or above this threshold go to stderr
50 -v=0: log level for V logs
51 -version=false: Print version information and quit
52 -vmodule=: comma-separated list of pattern=N settings for file-filtered logging
=== modified file 'copyright'
--- copyright 2015-01-27 17:31:57 +0000
+++ copyright 2015-07-17 20:16:05 +0000
@@ -1,13 +1,13 @@
-Copyright 2015 Canonical LTD
+Copyright 2015 Canonical Ltd.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 
   http://www.apache.org/licenses/LICENSE-2.0
 
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
=== removed directory 'docs'
=== removed file 'docs/1-getting-started.md'
--- docs/1-getting-started.md 2015-01-27 17:31:57 +0000
+++ docs/1-getting-started.md 1970-01-01 00:00:00 +0000
@@ -1,81 +0,0 @@
1# Getting Started
2
3## Environment Considerations
4
5Kubernetes has specific cloud provider integration, and as of the current writing of this document that supported list includes the official Juju supported providers:
6
7- [Amazon AWS](https://jujucharms.com/docs/config-aws)
8- [Azure](https://jujucharms.com/docs/config-azure)
9- [Vagrant](https://jujucharms.com/docs/config-vagrant)
10
11Other providers available for use as a *juju manual environment* can be listed in the [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides)
12
13## Deployment
14
15The Kubernetes Charms are currently under heavy development. We encourage you to fork these charms and contribute back to the development effort! See our [contributing](contributing.md) doc for more information on this.
16
17#### Deploying the Preview Release charms
18
19 juju deploy cs:~hazmat/trusty/etcd
20 juju deploy cs:~hazmat/trusty/flannel
21 juju deploy local:trusty/kubernetes-master
22 juju deploy local:trusty/kubernetes
23
24 juju add-relation etcd flannel
25 juju add-relation etcd kubernetes
26 juju add-relation etcd kubernetes-master
27 juju add-relation kubernetes kubernetes-master
28
29#### Deploying the Development Release Charms
30
31> These charms are known to be unstable as they are tracking the current efforts of the community at enabling different features against Kubernetes. This includes the specifics for integration per cloud environment, and upgrading to the latest development version.
32
33 mkdir -p ~/charms/trusty
34 git clone https://github.com/whitmo/kubernetes-master.git ~/charms/trusty/kubernetes-master
35 git clone https://github.com/whitmo/kubernetes.git ~/charms/trusty/kubernetes
36
37##### Skipping the manual deployment after git clone
38
39> **Note:** This path requires the pre-requisite of juju-deployer. You can obtain juju-deployer via `apt-get install juju-deployer`
40
41 wget https://github.com/whitmo/bundle-kubernetes/blob/master/develop.yaml kubernetes-devel.yaml
42 juju-deployer kubernetes-devel.yaml
43
44
45## Verifying Deployment with the Kubernetes Agent
46
47You'll need the kubernetes command line client to utlize the created cluster. And this can be fetched from the [Releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page on the Kubernetes project. Make sure you're fetching a client library that matches what the charm is deploying.
48
49Grab the tarball and from the extracted release you can just directly use the cli binary at ./kubernetes/platforms/linux/amd64/kubecfg
50
51You'll need the address of the kubernetes master as environment variable :
52
53 juju status kubernetes-master/0
54
55Grab the public-address there and export it as KUBERNETES_MASTER environment variable :
56
57 export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | cut -d' ' -f3):8080
58
59And now you can run through the kubernetes examples per normal. :
60
61 kubecfg list minions
62
63
64## Scale Up
65
66If the default capacity of the bundle doesn't provide enough capacity for your workload(s) you can scale horizontially by adding a unit to the flannel and kubernetes services respectively.
67
68 juju add-unit flannel
69 juju add-unit kubernetes --to # (machine id of new flannel unit)
70
71## Known Issues / Limitations
72
73Kubernetes currently has platform specific functionality. For example load balancers and persistence volumes only work with the google compute provider atm.
74
75The Juju integration uses the kubernetes null provider. This means external load balancers and storage can't be directly driven through kubernetes config files.
76
77## Where to get help
78
79If you run into any issues, file a bug at our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues), email the Juju Mailing List at <juju@lists.ubuntu.com>, or feel free to join us in #juju on irc.freenode.net.
80
81
=== removed file 'docs/contributing.md'
--- docs/contributing.md 2015-01-27 17:31:57 +0000
+++ docs/contributing.md 1970-01-01 00:00:00 +0000
@@ -1,52 +0,0 @@
1
2#### Contributions are welcome, in any form. Whether that be Bugs, BugFixes, Documentation, or Features.
3
4### Submitting a bug
5
61. Go to our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues) on GitHub
72. Search for existing issues using the search field at the top of the page
83. File a new issue including the info listed below
94. Thanks a ton for helping make the Kubernetes Charm higher quality!
10
11##### When filing a new bug, please include:
12
13- **Descriptive title** - use keywords so others can find your bug (avoiding duplicates)
14- **Steps to trigger the problem** - that are specific, and repeatable
15- **What happens** - when you follow the steps, and what you expected to happen instead.
16- Include the exact text of any error messages if applicable (or upload screenshots).
17- Kubernetes Charm version (or if you're pulling directly from Git, your current commit SHA - use git rev-parse HEAD) and the Juju Version output from `juju --version`.
18- Did this work in a previous charm version? If so, also provide the version that it worked in.
19- Any errors logged in `juju debug log` Console view
20
21### Can I help fix a bug?
22
23Yes please! But first...
24
25- Make sure no one else is already working on it -- if the bug has a milestone assigned or is tagged 'fix in progress', then it's already under way. Otherwise, post a comment on the bug to let others know you're starting to work on it.
26
27We use the Fork & Pull model for distributed development. For a more in-depth overview: consult with the github documentation on [Collaborative Development Models](https://help.github.com/articles/using-pull-requests/#before-you-begin).
28
29> ##### Fork & pull
30>
31> The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.
32
33### Submitting a Bug Fix
34
35The following checklist will help developers not familiar with the fork and pull process of development. We appreciate your enthusiasm to make the Kubernetes Charm a High Quality experience! To Rapidly get started - follow the 8 steps below.
36
371. [Fork the repository](https://help.github.com/articles/fork-a-repo/)
382. Clone your fork `git clone git@github.com/myusername/kubernetes-charm.git`
393. Checkout your topic branch with `git checkout -b my-awesome-bugfix`
404. Hack away at your feature/bugfix
415. Validate your bugfix if possible in the amulet test(s) so we dont reintroduce it later.
426. Validate your code meets guidelines by passing lint tests `make lint`
436. Commit code `git commit -a -m 'i did all this work to fix #32'`
447. Push your branch to your forks remote branch `git push origin my-awesome-bugfix`
458. Create the [Pull Request](https://help.github.com/articles/using-pull-requests/#initiating-the-pull-request)
469. Await Code Review
4710. Rejoice when Pull Request is accepted
48
49### Submitting a Feature
50
51The Steps are the same as [Submitting a Bug Fix](#submitting-a-bug-fix). If you want extra credit, make sure you [File an issue](http://github.com/whitmo/kubernetes-charm/issues) that covers the Feature you are working on - as kind of a courtesy heads up. And assign the issue to yourself so we know you are working on it.
52
=== modified file 'files/cadvisor.upstart.tmpl'
--- files/cadvisor.upstart.tmpl 2015-01-27 17:31:57 +0000
+++ files/cadvisor.upstart.tmpl 2015-07-17 20:16:05 +0000
@@ -11,6 +11,6 @@
   --volume=/var/run:/var/run:rw \
   --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
   --volume=/var/lib/docker/:/var/lib/docker:ro \
-  --publish=127.0.0.1:4194:8080 \
+  --publish=127.0.0.1:4193:8080 \
   --name=cadvisor \
   google/cadvisor:latest
=== removed file 'files/create_kubernetes_tar.sh'
--- files/create_kubernetes_tar.sh 2015-01-27 17:31:57 +0000
+++ files/create_kubernetes_tar.sh 1970-01-01 00:00:00 +0000
@@ -1,59 +0,0 @@
1#!/bin/bash
2
3set -ex
4
5# This script downloads a Kubernetes release and creates a tar file with only
6# the files that are needed for this charm.
7
8# Usage: create_kubernetes_tar.sh VERSION ARCHITECTURE
9
10usage() {
11 echo "Build a tar file with only the files needed for the kubernetes charm."
12 echo "The script accepts two arguments version and desired architecture."
13 echo "$0 version architecture"
14}
15
16download_kubernetes() {
17 local VERSION=$1
18 URL_PREFIX="https://github.com/GoogleCloudPlatform/kubernetes"
19 KUBERNETES_URL="${URL_PREFIX}/releases/download/${VERSION}/kubernetes.tar.gz"
20 # Remove the previous temporary files to remain idempotent.
21 if [ -f /tmp/kubernetes.tar.gz ]; then
22 rm /tmp/kubernetes.tar.gz
23 fi
24 # Download the kubernetes release from the Internet.
25 wget --no-verbose --tries 2 -O /tmp/kubernetes.tar.gz $KUBERNETES_URL
26}
27
28extract_kubernetes() {
29 local ARCH=$1
30 # Untar the kubernetes release file.
31 tar -xvzf /tmp/kubernetes.tar.gz -C /tmp
32 # Untar the server linux amd64 package.
33 tar -xvzf /tmp/kubernetes/server/kubernetes-server-linux-$ARCH.tar.gz -C /tmp
34}
35
36create_charm_tar() {
37 local OUTPUT_FILE=${1:-"$PWD/kubernetes.tar.gz"}
38 local OUTPUT_DIR=`dirname $OUTPUT_FILE`
39 if [ ! -d $OUTPUT_DIR ]; then
40 mkdir -p $OUTPUT
41 fi
42
43 # Change to the directory the binaries are.
44 cd /tmp/kubernetes/server/bin/
45
46 # Create a tar file with the binaries that are needed for kubernetes minion.
47 tar -cvzf $OUTPUT_FILE kubelet kube-proxy
48}
49
50if [ $# -gt 2 ]; then
51 usage
52 exit 1
53fi
54VERSION=${1:-"v0.8.1"}
55ARCH=${2:-"amd64"}
56download_kubernetes $VERSION
57extract_kubernetes $ARCH
58TAR_FILE="$PWD/kubernetes-$VERSION-$ARCH.tar.gz"
59create_charm_tar $TAR_FILE
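The `create_charm_tar` function in the removed script above runs `mkdir -p $OUTPUT`, but `$OUTPUT` is never set; `$OUTPUT_DIR` was presumably intended, so the guard silently did nothing. A condensed sketch of that directory guard with the variable reference fixed (the output path below is illustrative):

```shell
# Condensed sketch of create_charm_tar's output-directory guard, with the
# apparent typo fixed ($OUTPUT -> $OUTPUT_DIR). Paths are illustrative.
create_charm_tar() {
    local OUTPUT_FILE=${1:-"$PWD/kubernetes.tar.gz"}
    local OUTPUT_DIR=$(dirname "$OUTPUT_FILE")
    # Create the parent directory before writing the tar file there.
    if [ ! -d "$OUTPUT_DIR" ]; then
        mkdir -p "$OUTPUT_DIR"
    fi
    echo "$OUTPUT_DIR"
}

create_charm_tar "/tmp/charm-out/kubernetes-v1.0.0-amd64.tar.gz"
```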
=== modified file 'files/kubelet.upstart.tmpl'
--- files/kubelet.upstart.tmpl 2015-04-10 20:43:29 +0000
+++ files/kubelet.upstart.tmpl 2015-07-17 20:16:05 +0000
@@ -9,6 +9,7 @@
 
 exec /usr/local/bin/kubelet \
   --address=%(kubelet_bind_addr)s \
-  --etcd_servers=%(etcd_servers)s \
+  --api_servers=%(kubeapi_server)s \
   --hostname_override=%(kubelet_bind_addr)s \
+  --cadvisor_port=4193 \
   --logtostderr=true
=== modified file 'files/proxy.upstart.tmpl'
--- files/proxy.upstart.tmpl 2015-04-10 20:43:29 +0000
+++ files/proxy.upstart.tmpl 2015-07-17 20:16:05 +0000
@@ -8,6 +8,5 @@
8kill timeout 60 # wait 60s between SIGTERM and SIGKILL.8kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
99
10exec /usr/local/bin/proxy \10exec /usr/local/bin/proxy \
11 --etcd_servers=%(etcd_servers)s \
12 --master=%(kubeapi_server)s \11 --master=%(kubeapi_server)s \
13 --logtostderr=true12 --logtostderr=true
1413
=== removed directory 'hooks/charmhelpers'
=== removed file 'hooks/charmhelpers/__init__.py'
=== removed directory 'hooks/charmhelpers/core'
=== removed file 'hooks/charmhelpers/core/__init__.py'
=== removed file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py 2015-01-27 17:31:57 +0000
+++ hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
@@ -1,498 +0,0 @@
1"Interactions with the Juju environment"
2# Copyright 2013 Canonical Ltd.
3#
4# Authors:
5# Charm Helpers Developers <juju@lists.ubuntu.com>
6
7import os
8import json
9import yaml
10import subprocess
11import sys
12import UserDict
13from subprocess import CalledProcessError
14
15CRITICAL = "CRITICAL"
16ERROR = "ERROR"
17WARNING = "WARNING"
18INFO = "INFO"
19DEBUG = "DEBUG"
20MARKER = object()
21
22cache = {}
23
24
25def cached(func):
26 """Cache return values for multiple executions of func + args
27
28 For example:
29
30 @cached
31 def unit_get(attribute):
32 pass
33
34 unit_get('test')
35
36 will cache the result of unit_get + 'test' for future calls.
37 """
38 def wrapper(*args, **kwargs):
39 global cache
40 key = str((func, args, kwargs))
41 try:
42 return cache[key]
43 except KeyError:
44 res = func(*args, **kwargs)
45 cache[key] = res
46 return res
47 return wrapper
48
49
50def flush(key):
51 """Flushes any entries from function cache where the
52 key is found in the function+args """
53 flush_list = []
54 for item in cache:
55 if key in item:
56 flush_list.append(item)
57 for item in flush_list:
58 del cache[item]
59
60
61def log(message, level=None):
62 """Write a message to the juju log"""
63 command = ['juju-log']
64 if level:
65 command += ['-l', level]
66 command += [message]
67 subprocess.call(command)
68
69
70class Serializable(UserDict.IterableUserDict):
71 """Wrapper, an object that can be serialized to yaml or json"""
72
73 def __init__(self, obj):
74 # wrap the object
75 UserDict.IterableUserDict.__init__(self)
76 self.data = obj
77
78 def __getattr__(self, attr):
79 # See if this object has attribute.
80 if attr in ("json", "yaml", "data"):
81 return self.__dict__[attr]
82 # Check for attribute in wrapped object.
83 got = getattr(self.data, attr, MARKER)
84 if got is not MARKER:
85 return got
86 # Proxy to the wrapped object via dict interface.
87 try:
88 return self.data[attr]
89 except KeyError:
90 raise AttributeError(attr)
91
92 def __getstate__(self):
93 # Pickle as a standard dictionary.
94 return self.data
95
96 def __setstate__(self, state):
97 # Unpickle into our wrapper.
98 self.data = state
99
100 def json(self):
101 """Serialize the object to json"""
102 return json.dumps(self.data)
103
104 def yaml(self):
105 """Serialize the object to yaml"""
106 return yaml.dump(self.data)
107
108
109def execution_environment():
110 """A convenient bundling of the current execution context"""
111 context = {}
112 context['conf'] = config()
113 if relation_id():
114 context['reltype'] = relation_type()
115 context['relid'] = relation_id()
116 context['rel'] = relation_get()
117 context['unit'] = local_unit()
118 context['rels'] = relations()
119 context['env'] = os.environ
120 return context
121
122
123def in_relation_hook():
124 """Determine whether we're running in a relation hook"""
125 return 'JUJU_RELATION' in os.environ
126
127
128def relation_type():
129 """The scope for the current relation hook"""
130 return os.environ.get('JUJU_RELATION', None)
131
132
133def relation_id():
134 """The relation ID for the current relation hook"""
135 return os.environ.get('JUJU_RELATION_ID', None)
136
137
138def local_unit():
139 """Local unit ID"""
140 return os.environ['JUJU_UNIT_NAME']
141
142
143def remote_unit():
144 """The remote unit for the current relation hook"""
145 return os.environ['JUJU_REMOTE_UNIT']
146
147
148def service_name():
149 """The name service group this unit belongs to"""
150 return local_unit().split('/')[0]
151
152
153def hook_name():
154 """The name of the currently executing hook"""
155 return os.path.basename(sys.argv[0])
156
157
158class Config(dict):
159 """A Juju charm config dictionary that can write itself to
160 disk (as json) and track which values have changed since
161 the previous hook invocation.
162
163 Do not instantiate this object directly - instead call
164 ``hookenv.config()``
165
166 Example usage::
167
168 >>> # inside a hook
169 >>> from charmhelpers.core import hookenv
170 >>> config = hookenv.config()
171 >>> config['foo']
172 'bar'
173 >>> config['mykey'] = 'myval'
174 >>> config.save()
175
176
177 >>> # user runs `juju set mycharm foo=baz`
178 >>> # now we're inside subsequent config-changed hook
179 >>> config = hookenv.config()
180 >>> config['foo']
181 'baz'
182 >>> # test to see if this val has changed since last hook
183 >>> config.changed('foo')
184 True
185 >>> # what was the previous value?
186 >>> config.previous('foo')
187 'bar'
188 >>> # keys/values that we add are preserved across hooks
189 >>> config['mykey']
190 'myval'
191 >>> # don't forget to save at the end of hook!
192 >>> config.save()
193
194 """
195 CONFIG_FILE_NAME = '.juju-persistent-config'
196
197 def __init__(self, *args, **kw):
198 super(Config, self).__init__(*args, **kw)
199 self._prev_dict = None
200 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
201 if os.path.exists(self.path):
202 self.load_previous()
203
204 def load_previous(self, path=None):
205 """Load previous copy of config from disk so that current values
206 can be compared to previous values.
207
208 :param path:
209
210 File path from which to load the previous config. If `None`,
211 config is loaded from the default location. If `path` is
212 specified, subsequent `save()` calls will write to the same
213 path.
214
215 """
216 self.path = path or self.path
217 with open(self.path) as f:
218 self._prev_dict = json.load(f)
219
220 def changed(self, key):
221 """Return true if the value for this key has changed since
222 the last save.
223
224 """
225 if self._prev_dict is None:
226 return True
227 return self.previous(key) != self.get(key)
228
229 def previous(self, key):
230 """Return previous value for this key, or None if there
231 is no "previous" value.
232
233 """
234 if self._prev_dict:
235 return self._prev_dict.get(key)
236 return None
237
238 def save(self):
239 """Save this config to disk.
240
241 Preserves items in _prev_dict that do not exist in self.
242
243 """
244 if self._prev_dict:
245 for k, v in self._prev_dict.iteritems():
246 if k not in self:
247 self[k] = v
248 with open(self.path, 'w') as f:
249 json.dump(self, f)
250
251
252@cached
253def config(scope=None):
254 """Juju charm configuration"""
255 config_cmd_line = ['config-get']
256 if scope is not None:
257 config_cmd_line.append(scope)
258 config_cmd_line.append('--format=json')
259 try:
260 config_data = json.loads(subprocess.check_output(config_cmd_line))
261 if scope is not None:
262 return config_data
263 return Config(config_data)
264 except ValueError:
265 return None
266
267
268@cached
269def relation_get(attribute=None, unit=None, rid=None):
270 """Get relation information"""
271 _args = ['relation-get', '--format=json']
272 if rid:
273 _args.append('-r')
274 _args.append(rid)
275 _args.append(attribute or '-')
276 if unit:
277 _args.append(unit)
278 try:
279 return json.loads(subprocess.check_output(_args))
280 except ValueError:
281 return None
282 except CalledProcessError, e:
283 if e.returncode == 2:
284 return None
285 raise
286
287
288def relation_set(relation_id=None, relation_settings={}, **kwargs):
289 """Set relation information for the current unit"""
290 relation_cmd_line = ['relation-set']
291 if relation_id is not None:
292 relation_cmd_line.extend(('-r', relation_id))
293 for k, v in (relation_settings.items() + kwargs.items()):
294 if v is None:
295 relation_cmd_line.append('{}='.format(k))
296 else:
297 relation_cmd_line.append('{}={}'.format(k, v))
298 subprocess.check_call(relation_cmd_line)
299 # Flush cache of any relation-gets for local unit
300 flush(local_unit())
301
302
303@cached
304def relation_ids(reltype=None):
305 """A list of relation_ids"""
306 reltype = reltype or relation_type()
307 relid_cmd_line = ['relation-ids', '--format=json']
308 if reltype is not None:
309 relid_cmd_line.append(reltype)
310 return json.loads(subprocess.check_output(relid_cmd_line)) or []
311 return []
312
313
314@cached
315def related_units(relid=None):
316 """A list of related units"""
317 relid = relid or relation_id()
318 units_cmd_line = ['relation-list', '--format=json']
319 if relid is not None:
320 units_cmd_line.extend(('-r', relid))
321 return json.loads(subprocess.check_output(units_cmd_line)) or []
322
323
324@cached
325def relation_for_unit(unit=None, rid=None):
326 """Get the json represenation of a unit's relation"""
327 unit = unit or remote_unit()
328 relation = relation_get(unit=unit, rid=rid)
329 for key in relation:
330 if key.endswith('-list'):
331 relation[key] = relation[key].split()
332 relation['__unit__'] = unit
333 return relation
334
335
336@cached
337def relations_for_id(relid=None):
338 """Get relations of a specific relation ID"""
339 relation_data = []
340 relid = relid or relation_ids()
341 for unit in related_units(relid):
342 unit_data = relation_for_unit(unit, relid)
343 unit_data['__relid__'] = relid
344 relation_data.append(unit_data)
345 return relation_data
346
347
348@cached
349def relations_of_type(reltype=None):
350 """Get relations of a specific type"""
351 relation_data = []
352 reltype = reltype or relation_type()
353 for relid in relation_ids(reltype):
354 for relation in relations_for_id(relid):
355 relation['__relid__'] = relid
356 relation_data.append(relation)
357 return relation_data
358
359
360@cached
361def relation_types():
362 """Get a list of relation types supported by this charm"""
363 charmdir = os.environ.get('CHARM_DIR', '')
364 mdf = open(os.path.join(charmdir, 'metadata.yaml'))
365 md = yaml.safe_load(mdf)
366 rel_types = []
367 for key in ('provides', 'requires', 'peers'):
368 section = md.get(key)
369 if section:
370 rel_types.extend(section.keys())
371 mdf.close()
372 return rel_types
373
374
375@cached
376def relations():
377 """Get a nested dictionary of relation data for all related units"""
378 rels = {}
379 for reltype in relation_types():
380 relids = {}
381 for relid in relation_ids(reltype):
382 units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
383 for unit in related_units(relid):
384 reldata = relation_get(unit=unit, rid=relid)
385 units[unit] = reldata
386 relids[relid] = units
387 rels[reltype] = relids
388 return rels
389
390
391@cached
392def is_relation_made(relation, keys='private-address'):
393 '''
394 Determine whether a relation is established by checking for
395 presence of key(s). If a list of keys is provided, they
396 must all be present for the relation to be identified as made
397 '''
398 if isinstance(keys, str):
399 keys = [keys]
400 for r_id in relation_ids(relation):
401 for unit in related_units(r_id):
402 context = {}
403 for k in keys:
404 context[k] = relation_get(k, rid=r_id,
405 unit=unit)
406 if None not in context.values():
407 return True
408 return False
409
410
411def open_port(port, protocol="TCP"):
412 """Open a service network port"""
413 _args = ['open-port']
414 _args.append('{}/{}'.format(port, protocol))
415 subprocess.check_call(_args)
416
417
418def close_port(port, protocol="TCP"):
419 """Close a service network port"""
420 _args = ['close-port']
421 _args.append('{}/{}'.format(port, protocol))
422 subprocess.check_call(_args)
423
424
425@cached
426def unit_get(attribute):
427 """Get the unit ID for the remote unit"""
428 _args = ['unit-get', '--format=json', attribute]
429 try:
430 return json.loads(subprocess.check_output(_args))
431 except ValueError:
432 return None
433
434
435def unit_private_ip():
436 """Get this unit's private IP address"""
437 return unit_get('private-address')
438
439
440class UnregisteredHookError(Exception):
441 """Raised when an undefined hook is called"""
442 pass
443
444
445class Hooks(object):
446 """A convenient handler for hook functions.
447
448 Example:
449 hooks = Hooks()
450
451 # register a hook, taking its name from the function name
452 @hooks.hook()
453 def install():
454 ...
455
456 # register a hook, providing a custom hook name
457 @hooks.hook("config-changed")
458 def config_changed():
459 ...
460
461 if __name__ == "__main__":
462 # execute a hook based on the name the program is called by
463 hooks.execute(sys.argv)
464 """
465
466 def __init__(self):
467 super(Hooks, self).__init__()
468 self._hooks = {}
469
470 def register(self, name, function):
471 """Register a hook"""
472 self._hooks[name] = function
473
474 def execute(self, args):
475 """Execute a registered hook based on args[0]"""
476 hook_name = os.path.basename(args[0])
477 if hook_name in self._hooks:
478 self._hooks[hook_name]()
479 else:
480 raise UnregisteredHookError(hook_name)
481
482 def hook(self, *hook_names):
483 """Decorator, registering them as hooks"""
484 def wrapper(decorated):
485 for hook_name in hook_names:
486 self.register(hook_name, decorated)
487 else:
488 self.register(decorated.__name__, decorated)
489 if '_' in decorated.__name__:
490 self.register(
491 decorated.__name__.replace('_', '-'), decorated)
492 return decorated
493 return wrapper
494
495
496def charm_dir():
497 """Return the root directory of the current charm"""
498 return os.environ.get('CHARM_DIR')
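A note on the removed `Hooks` class above: the `for ... else` in its `hook` decorator is a Python quirk — the `else` branch runs whenever the loop finishes without `break`, so the function is always registered under its own name (and a hyphenated alias) in addition to any explicit names. A minimal Python 3 sketch of the same name-based dispatch pattern (hypothetical, not the charmhelpers implementation):

```python
import os


class UnregisteredHookError(Exception):
    """Raised when an undefined hook is called."""


class Hooks:
    """Minimal name-based hook dispatcher in the style of the removed class."""

    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def hook(self, *hook_names):
        def wrapper(decorated):
            # Register under any explicit names, plus the function's own
            # name, plus a hyphenated alias matching Juju hook-file naming
            # such as 'config-changed'.
            for name in hook_names:
                self.register(name, decorated)
            self.register(decorated.__name__, decorated)
            if '_' in decorated.__name__:
                self.register(decorated.__name__.replace('_', '-'), decorated)
            return decorated
        return wrapper

    def execute(self, args):
        # Juju invokes the charm via a symlink named after the hook, so
        # the basename of argv[0] selects which function to run.
        hook_name = os.path.basename(args[0])
        if hook_name not in self._hooks:
            raise UnregisteredHookError(hook_name)
        return self._hooks[hook_name]()


hooks = Hooks()


@hooks.hook('config-changed')
def config_changed():
    return 'configured'
```

Dispatching on the program name keeps one `hooks.py` as the target of every hook symlink in the charm's `hooks/` directory.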
=== removed file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py 2015-01-27 17:31:57 +0000
+++ hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
@@ -1,311 +0,0 @@
1"""Tools for working with the host system"""
2# Copyright 2012 Canonical Ltd.
3#
4# Authors:
5# Nick Moffitt <nick.moffitt@canonical.com>
6# Matthew Wedgwood <matthew.wedgwood@canonical.com>
7
8import os
9import pwd
10import grp
11import random
12import string
13import subprocess
14import hashlib
15import apt_pkg
16
17from collections import OrderedDict
18
19from hookenv import log
20
21
22def service_start(service_name):
23 """Start a system service"""
24 return service('start', service_name)
25
26
27def service_stop(service_name):
28 """Stop a system service"""
29 return service('stop', service_name)
30
31
32def service_restart(service_name):
33 """Restart a system service"""
34 return service('restart', service_name)
35
36
37def service_reload(service_name, restart_on_failure=False):
38 """Reload a system service, optionally falling back to restart if reload fails"""
39 service_result = service('reload', service_name)
40 if not service_result and restart_on_failure:
41 service_result = service('restart', service_name)
42 return service_result
43
44
45def service(action, service_name):
46 """Control a system service"""
47 cmd = ['service', service_name, action]
48 return subprocess.call(cmd) == 0
49
50
51def service_running(service):
52 """Determine whether a system service is running"""
53 try:
54 output = subprocess.check_output(['service', service, 'status'])
55 except subprocess.CalledProcessError:
56 return False
57 else:
58 if ("start/running" in output or "is running" in output):
59 return True
60 else:
61 return False
62
63
64def adduser(username, password=None, shell='/bin/bash', system_user=False):
65 """Add a user to the system"""
66 try:
67 user_info = pwd.getpwnam(username)
68 log('user {0} already exists!'.format(username))
69 except KeyError:
70 log('creating user {0}'.format(username))
71 cmd = ['useradd']
72 if system_user or password is None:
73 cmd.append('--system')
74 else:
75 cmd.extend([
76 '--create-home',
77 '--shell', shell,
78 '--password', password,
79 ])
80 cmd.append(username)
81 subprocess.check_call(cmd)
82 user_info = pwd.getpwnam(username)
83 return user_info
84
85
86def add_user_to_group(username, group):
87 """Add a user to a group"""
88 cmd = [
89 'gpasswd', '-a',
90 username,
91 group
92 ]
93 log("Adding user {} to group {}".format(username, group))
94 subprocess.check_call(cmd)
95
96
97def rsync(from_path, to_path, flags='-r', options=None):
98 """Replicate the contents of a path"""
99 options = options or ['--delete', '--executability']
100 cmd = ['/usr/bin/rsync', flags]
101 cmd.extend(options)
102 cmd.append(from_path)
103 cmd.append(to_path)
104 log(" ".join(cmd))
105 return subprocess.check_output(cmd).strip()
106
107
108def symlink(source, destination):
109 """Create a symbolic link"""
110 log("Symlinking {} as {}".format(source, destination))
111 cmd = [
112 'ln',
113 '-sf',
114 source,
115 destination,
116 ]
117 subprocess.check_call(cmd)
118
119
120def mkdir(path, owner='root', group='root', perms=0555, force=False):
121 """Create a directory"""
122 log("Making dir {} {}:{} {:o}".format(path, owner, group,
123 perms))
124 uid = pwd.getpwnam(owner).pw_uid
125 gid = grp.getgrnam(group).gr_gid
126 realpath = os.path.abspath(path)
127 if os.path.exists(realpath):
128 if force and not os.path.isdir(realpath):
129 log("Removing non-directory file {} prior to mkdir()".format(path))
130 os.unlink(realpath)
131 else:
132 os.makedirs(realpath, perms)
133 os.chown(realpath, uid, gid)
134
135
136def write_file(path, content, owner='root', group='root', perms=0444):
137 """Create or overwrite a file with the contents of a string"""
138 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
139 uid = pwd.getpwnam(owner).pw_uid
140 gid = grp.getgrnam(group).gr_gid
141 with open(path, 'w') as target:
142 os.fchown(target.fileno(), uid, gid)
143 os.fchmod(target.fileno(), perms)
144 target.write(content)
145
146
147def mount(device, mountpoint, options=None, persist=False):
148 """Mount a filesystem at a particular mountpoint"""
149 cmd_args = ['mount']
150 if options is not None:
151 cmd_args.extend(['-o', options])
152 cmd_args.extend([device, mountpoint])
153 try:
154 subprocess.check_output(cmd_args)
155 except subprocess.CalledProcessError, e:
156 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
157 return False
158 if persist:
159 # TODO: update fstab
160 pass
161 return True
162
163
164def umount(mountpoint, persist=False):
165 """Unmount a filesystem"""
166 cmd_args = ['umount', mountpoint]
167 try:
168 subprocess.check_output(cmd_args)
169 except subprocess.CalledProcessError, e:
170 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
171 return False
172 if persist:
173 # TODO: update fstab
174 pass
175 return True
176
177
178def mounts():
179 """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
180 with open('/proc/mounts') as f:
181 # [['/mount/point','/dev/path'],[...]]
182 system_mounts = [m[1::-1] for m in [l.strip().split()
183 for l in f.readlines()]]
184 return system_mounts
185
186
187def file_hash(path):
188 """Generate a md5 hash of the contents of 'path' or None if not found """
189 if os.path.exists(path):
190 h = hashlib.md5()
191 with open(path, 'r') as source:
192 h.update(source.read()) # IGNORE:E1101 - it does have update
193 return h.hexdigest()
194 else:
195 return None
196
197
198def restart_on_change(restart_map, stopstart=False):
199 """Restart services based on configuration files changing
200
201 This function is used a decorator, for example
202
203 @restart_on_change({
204 '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
205 })
206 def ceph_client_changed():
207 ...
208
209 In this example, the cinder-api and cinder-volume services
210 would be restarted if /etc/ceph/ceph.conf is changed by the
211 ceph_client_changed function.
212 """
213 def wrap(f):
214 def wrapped_f(*args):
215 checksums = {}
216 for path in restart_map:
217 checksums[path] = file_hash(path)
218 f(*args)
219 restarts = []
220 for path in restart_map:
221 if checksums[path] != file_hash(path):
222 restarts += restart_map[path]
223 services_list = list(OrderedDict.fromkeys(restarts))
224 if not stopstart:
225 for service_name in services_list:
226 service('restart', service_name)
227 else:
228 for action in ['stop', 'start']:
229 for service_name in services_list:
230 service(action, service_name)
231 return wrapped_f
232 return wrap
233
234
235def lsb_release():
236 """Return /etc/lsb-release in a dict"""
237 d = {}
238 with open('/etc/lsb-release', 'r') as lsb:
239 for l in lsb:
240 k, v = l.split('=')
241 d[k.strip()] = v.strip()
242 return d
243
244
245def pwgen(length=None):
246 """Generate a random pasword."""
247 if length is None:
248 length = random.choice(range(35, 45))
249 alphanumeric_chars = [
250 l for l in (string.letters + string.digits)
251 if l not in 'l0QD1vAEIOUaeiou']
252 random_chars = [
253 random.choice(alphanumeric_chars) for _ in range(length)]
254 return(''.join(random_chars))
255
256
257def list_nics(nic_type):
258 '''Return a list of nics of given type(s)'''
259 if isinstance(nic_type, basestring):
260 int_types = [nic_type]
261 else:
262 int_types = nic_type
263 interfaces = []
264 for int_type in int_types:
265 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
266 ip_output = subprocess.check_output(cmd).split('\n')
267 ip_output = (line for line in ip_output if line)
268 for line in ip_output:
269 if line.split()[1].startswith(int_type):
270 interfaces.append(line.split()[1].replace(":", ""))
271 return interfaces
272
273
274def set_nic_mtu(nic, mtu):
275 '''Set MTU on a network interface'''
276 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
277 subprocess.check_call(cmd)
278
279
280def get_nic_mtu(nic):
281 cmd = ['ip', 'addr', 'show', nic]
282 ip_output = subprocess.check_output(cmd).split('\n')
283 mtu = ""
284 for line in ip_output:
285 words = line.split()
286 if 'mtu' in words:
287 mtu = words[words.index("mtu") + 1]
288 return mtu
289
290
291def get_nic_hwaddr(nic):
292 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
293 ip_output = subprocess.check_output(cmd)
294 hwaddr = ""
295 words = ip_output.split()
296 if 'link/ether' in words:
297 hwaddr = words[words.index('link/ether') + 1]
298 return hwaddr
299
300
301def cmp_pkgrevno(package, revno, pkgcache=None):
302 '''Compare supplied revno with the revno of the installed package
303 1 => Installed revno is greater than supplied arg
304 0 => Installed revno is the same as supplied arg
305 -1 => Installed revno is less than supplied arg
306 '''
307 if not pkgcache:
308 apt_pkg.init()
309 pkgcache = apt_pkg.Cache()
310 pkg = pkgcache[package]
311 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
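The heart of the removed `host.py` is the `restart_on_change` decorator: hash the watched config files, run the wrapped hook, re-hash, and restart only the services whose files actually changed. A Python 3 sketch of the same checksum-compare pattern — with the `service` call abstracted into an injectable `restart` callable for testability (an assumption, not part of the original API):

```python
import hashlib
import os
import tempfile
from collections import OrderedDict


def file_hash(path):
    """Return the MD5 hex digest of a file's contents, or None if missing."""
    if not os.path.exists(path):
        return None
    h = hashlib.md5()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


def restart_on_change(restart_map, restart=print):
    """Decorator: after the wrapped call, re-hash each watched file and
    restart (once, in order) every service whose file changed."""
    def wrap(f):
        def wrapped_f(*args):
            checksums = {path: file_hash(path) for path in restart_map}
            result = f(*args)
            restarts = []
            for path in restart_map:
                if checksums[path] != file_hash(path):
                    restarts += restart_map[path]
            # De-duplicate while preserving order, as the original does.
            for service_name in OrderedDict.fromkeys(restarts):
                restart(service_name)
            return result
        return wrapped_f
    return wrap


# Usage: rewrite a watched config and observe the resulting restart.
watched = tempfile.NamedTemporaryFile(mode='w', suffix='.conf', delete=False)
watched.write('port=8080\n')
watched.close()

restarted = []


@restart_on_change({watched.name: ['kubelet']}, restart=restarted.append)
def rewrite_config():
    with open(watched.name, 'w') as f:
        f.write('port=9090\n')


rewrite_config()
```

The `OrderedDict.fromkeys` de-duplication matters when several config files map to the same service: it is restarted once, not once per file.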
=== modified file 'hooks/hooks.py'
--- hooks/hooks.py 2015-04-10 20:43:29 +0000
+++ hooks/hooks.py 2015-07-17 20:16:05 +0000
@@ -1,4 +1,5 @@
-#!/usr/bin/python
+#!/usr/bin/env python
+
 """
 The main hook file that is called by Juju.
 """
@@ -15,6 +16,8 @@
 from kubernetes_installer import KubernetesInstaller
 from path import path
 
+from lib.registrator import Registrator
+
 hooks = hookenv.Hooks()
 
 
@@ -69,21 +72,21 @@
     # Check required keys
     for k in ('etcd_servers', 'kubeapi_server'):
         if not template_data.get(k):
-            print("Missing data for %s %s" % (k, template_data))
+            print('Missing data for %s %s' % (k, template_data))
             return
-    print("Running with\n%s" % template_data)
+    print('Running with\n%s' % template_data)
 
     # Setup kubernetes supplemental group
     setup_kubernetes_group()
 
-    # Register services
-    for n in ("cadvisor", "kubelet", "proxy"):
+    # Register upstart managed services
+    for n in ('kubelet', 'proxy'):
         if render_upstart(n, template_data) or not host.service_running(n):
-            print("Starting %s" % n)
+            print('Starting %s' % n)
             host.service_restart(n)
 
     # Register machine via api
-    print("Registering machine")
+    print('Registering machine')
     register_machine(template_data['kubeapi_server'])
 
     # Save the marker (for restarts to detect prev install)
93def get_template_data():96def get_template_data():
94 rels = hookenv.relations()97 rels = hookenv.relations()
95 template_data = hookenv.Config()98 template_data = hookenv.Config()
96 template_data.CONFIG_FILE_NAME = ".unit-state"99 template_data.CONFIG_FILE_NAME = '.unit-state'
97100
98 overlay_type = get_scoped_rel_attr('network', rels, 'overlay_type')101 overlay_type = get_scoped_rel_attr('network', rels, 'overlay_type')
99 etcd_servers = get_rel_hosts('etcd', rels, ('hostname', 'port'))102 etcd_servers = get_rel_hosts('etcd', rels, ('hostname', 'port'))
@@ -102,7 +105,7 @@
102 # kubernetes master isn't ha yet.105 # kubernetes master isn't ha yet.
103 if api_servers:106 if api_servers:
104 api_info = api_servers.pop()107 api_info = api_servers.pop()
105 api_servers = "http://%s:%s" % (api_info[0], api_info[1])108 api_servers = 'http://%s:%s' % (api_info[0], api_info[1])
106109
107 template_data['overlay_type'] = overlay_type110 template_data['overlay_type'] = overlay_type
108 template_data['kubelet_bind_addr'] = _bind_addr(111 template_data['kubelet_bind_addr'] = _bind_addr(
@@ -110,7 +113,7 @@
110 template_data['proxy_bind_addr'] = _bind_addr(113 template_data['proxy_bind_addr'] = _bind_addr(
111 hookenv.unit_get('public-address'))114 hookenv.unit_get('public-address'))
112 template_data['kubeapi_server'] = api_servers115 template_data['kubeapi_server'] = api_servers
113 template_data['etcd_servers'] = ",".join([116 template_data['etcd_servers'] = ','.join([
114 'http://%s:%s' % (s[0], s[1]) for s in sorted(etcd_servers)])117 'http://%s:%s' % (s[0], s[1]) for s in sorted(etcd_servers)])
115 template_data['identifier'] = os.environ['JUJU_UNIT_NAME'].replace(118 template_data['identifier'] = os.environ['JUJU_UNIT_NAME'].replace(
116 '/', '-')119 '/', '-')
@@ -123,7 +126,7 @@
123 try:126 try:
124 return socket.gethostbyname(addr)127 return socket.gethostbyname(addr)
125 except socket.error:128 except socket.error:
126 raise ValueError("Could not resolve private address")129 raise ValueError('Could not resolve private address')
127130
128131
129def _encode(d):132def _encode(d):
@@ -179,60 +182,36 @@
 
 def register_machine(apiserver, retry=False):
     parsed = urlparse.urlparse(apiserver)
-    headers = {"Content-type": "application/json",
-               "Accept": "application/json"}
     # identity = hookenv.local_unit().replace('/', '-')
     private_address = hookenv.unit_private_ip()
 
     with open('/proc/meminfo') as fh:
         info = fh.readline()
-        mem = info.strip().split(":")[1].strip().split()[0]
-    cpus = os.sysconf("SC_NPROCESSORS_ONLN")
+        mem = info.strip().split(':')[1].strip().split()[0]
+    cpus = os.sysconf('SC_NPROCESSORS_ONLN')
 
-    request = _encode({
-        'Kind': 'Minion',
-        # These can only differ for cloud provider backed instances?
-        'ID': private_address,
-        'HostIP': private_address,
-        'metadata': {
-            'name': private_address,
-        },
-        'resources': {
-            'capacity': {
-                'mem': mem + ' K',
-                'cpu': cpus}}})
-
-    # print("Registration request %s" % request)
-    conn = httplib.HTTPConnection(parsed.hostname, parsed.port)
-    conn.request("POST", "/api/v1beta1/minions", json.dumps(request), headers)
-
-    response = conn.getresponse()
-    body = response.read()
-    print(body)
-    result = json.loads(body)
-    print("Response status:%s reason:%s body:%s" % (
-        response.status, response.reason, result))
-
-    if response.status in (200, 201):
-        print("Registered")
-    elif response.status in (409,):
-        print("Status conflict")
-        # The kubernetes API documentation suggests doing a put in this case:
-        # issue a PUT/update to modify the existing object
-        conn.request("PUT", "/api/v1beta1/minions", json.dumps(request),
-                     headers)
-    elif not retry and response.status in (500,) and result.get(
-            'message', '').startswith('The requested resource does not exist'):
-        # There's something fishy in the kube api here (0.4 dev), first time we
-        # go to register a new minion, we always seem to get this error.
-        # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
-        time.sleep(1)
-        print("Retrying registration...")
-        return register_machine(apiserver, retry=True)
-    else:
-        print("Registration error")
-        raise RuntimeError("Unable to register machine with %s" % request)
-
+    registration_request = Registrator()
+    registration_request.data['Kind'] = 'Minion'
+    registration_request.data['id'] = private_address
+    registration_request.data['name'] = private_address
+    registration_request.data['metadata']['name'] = private_address
+    registration_request.data['spec']['capacity']['mem'] = mem + ' K'
+    registration_request.data['spec']['capacity']['cpu'] = cpus
+    registration_request.data['spec']['externalID'] = private_address
+    registration_request.data['status']['hostIP'] = private_address
+
+    response, result = registration_request.register(parsed.hostname,
+                                                     parsed.port,
+                                                     '/api/v1/nodes')
+
+    print(response)
+
+    try:
+        registration_request.command_succeeded(response, result)
+    except ValueError:
+        # This happens when we have already registered
+        # for now this is OK
+        pass
 
 def setup_kubernetes_group():
     output = subprocess.check_output(['groups', 'kubernetes'])
 
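The rewrite above replaces the hand-built `/api/v1beta1/minions` request with a `Registrator` populated field by field and POSTed to `/api/v1/nodes`. A Python 3 sketch of the payload that assembly produces (field names taken from the diff; the HTTP call itself is omitted, and the helper name is hypothetical):

```python
import json


def build_node_registration(private_address, mem_kb, cpus):
    """Assemble the minion/node registration body the charm POSTs to the
    API server (structure mirrors Registrator.data in the diff)."""
    return {
        'creationTimestamp': '',
        'Kind': 'Minion',
        'id': private_address,
        'name': private_address,
        'metadata': {'name': private_address},
        'spec': {
            'externalID': private_address,
            'capacity': {'mem': '%s K' % mem_kb, 'cpu': cpus},
        },
        'status': {'conditions': [], 'hostIP': private_address},
    }


# As in register_machine: memory comes from /proc/meminfo (in kB) and
# cpu count from os.sysconf('SC_NPROCESSORS_ONLN').
body = json.dumps(build_node_registration('10.0.0.5', '2048', 2))
```

Keeping the payload construction separate from the HTTP plumbing is what makes the new `Registrator` unit-testable without a live API server.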
=== modified file 'hooks/kubernetes_installer.py'
--- hooks/kubernetes_installer.py 2015-04-10 20:43:29 +0000
+++ hooks/kubernetes_installer.py 2015-07-17 20:16:05 +0000
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
 import subprocess
 from path import path
 
=== added directory 'hooks/lib'
=== added file 'hooks/lib/__init__.py'
--- hooks/lib/__init__.py 1970-01-01 00:00:00 +0000
+++ hooks/lib/__init__.py 2015-07-17 20:16:05 +0000
@@ -0,0 +1,3 @@
1#!/usr/bin/env python
2
3
04
=== added file 'hooks/lib/registrator.py'
--- hooks/lib/registrator.py 1970-01-01 00:00:00 +0000
+++ hooks/lib/registrator.py 2015-07-17 20:16:05 +0000
@@ -0,0 +1,84 @@
1#!/usr/bin/env python
2
3import httplib
4import json
5import time
6
7
8class Registrator:
9
10 def __init__(self):
11 self.ds ={
12 "creationTimestamp": "",
13 "kind": "Minion",
14 "name": "", # private_address
15 "metadata": {
16 "name": "", #private_address,
17 },
18 "spec": {
19 "externalID": "", #private_address
20 "capacity": {
21 "mem": "", # mem + ' K',
22 "cpu": "", # cpus
23 }
24 },
25 "status": {
26 "conditions": [],
27 "hostIP": "", #private_address
28 }
29 }
30
31 @property
32 def data(self):
33 ''' Returns a data-structure for population to make a request. '''
34 return self.ds
35
36 def register(self, hostname, port, api_path):
37 ''' Contact the API Server for a new registration '''
38 headers = {"Content-type": "application/json",
39 "Accept": "application/json"}
40 connection = httplib.HTTPConnection(hostname, port)
41 print 'CONN {}'.format(connection)
42 connection.request("POST", api_path, json.dumps(self.data), headers)
43 response = connection.getresponse()
44 body = response.read()
45 print(body)
46 result = json.loads(body)
47 print("Response status:%s reason:%s body:%s" % \
48 (response.status, response.reason, result))
49 return response, result
50
51 def update(self):
52 ''' Contact the API Server to update a registration '''
53 # do a get on the API for the node
54 # repost to the API with any modified data
55 pass
56
57 def save(self):
58 ''' Marshall the registration data '''
59 # TODO
60 pass
61
62 def command_succeeded(self, response, result):
63 ''' Evaluate response data to determine if the command was successful '''
64 if response.status in [200, 201]:
65 print("Registered")
66 return True
67 elif response.status in [409,]:
68 print("Status Conflict")
69 # Suggested return a PUT instead of a POST with this response
70 # code, this predicates use of the UPDATE method
71 # TODO
72 elif response.status in (500,) and result.get(
73 'message', '').startswith('The requested resource does not exist'):
74 # There's something fishy in the kube api here (0.4 dev), first time we
75 # go to register a new minion, we always seem to get this error.
76 # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
77 time.sleep(1)
78 print("Retrying registration...")
79 raise ValueError("Registration returned 500, retry")
80 # return register_machine(apiserver, retry=True)
81 else:
82 print("Registration error")
83 # TODO - get request data
84 raise RuntimeError("Unable to register machine with")
\ No newline at end of file
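Worth noting in `command_succeeded` above: only 200/201 return `True`; the 409 branch prints and falls through, returning `None`, while the 500 path raises `ValueError` to signal a retry. A Python 3 sketch of the same status handling with explicit outcomes (a hypothetical refactor for illustration, not the code under review):

```python
def evaluate_registration(status, message=''):
    """Map an API-server response status to an outcome, mirroring the
    branches of Registrator.command_succeeded in the diff."""
    if status in (200, 201):
        return 'registered'
    if status == 409:
        # Node already exists; the API suggests a PUT/update instead
        # of a POST in this case.
        return 'conflict'
    if status == 500 and message.startswith(
            'The requested resource does not exist'):
        # Transient kube-api quirk (0.4 dev) on first registration:
        # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
        raise ValueError('Registration returned 500, retry')
    raise RuntimeError('Unable to register machine')
```

Making each branch return or raise explicitly avoids the silent `None` on conflict, which callers would otherwise treat as failure.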
=== added directory 'unit_tests/lib'
=== added file 'unit_tests/lib/test_registrator.py'
--- unit_tests/lib/test_registrator.py 1970-01-01 00:00:00 +0000
+++ unit_tests/lib/test_registrator.py 2015-07-17 20:16:05 +0000
@@ -0,0 +1,48 @@
1#!/usr/bin/env python
2
3
4import json
5from mock import MagicMock, patch, call
6from path import Path
7import pytest
8import sys
9
10d = Path('__file__').parent.abspath() / 'hooks'
11sys.path.insert(0, d.abspath())
12
13from lib.registrator import Registrator
14
15class TestRegistrator():
16
17 def setup_method(self, method):
18 self.r = Registrator()
19
20 def test_data_type(self):
21 if type(self.r.data) is not dict:
22 pytest.fail("Invalid type")
23
24 @patch('json.loads')
25 @patch('httplib.HTTPConnection')
26 def test_register(self, httplibmock, jsonmock):
27 result = self.r.register('foo', 80, '/v1/test')
28
29 httplibmock.assert_called_with('foo', 80)
30 requestmock = httplibmock().request
31 requestmock.assert_called_with(
32 "POST", "/v1/test",
33 json.dumps(self.r.data),
34 {"Content-type": "application/json",
35 "Accept": "application/json"})
36
37
38 def test_command_succeeded(self):
39 response = MagicMock()
40 result = json.loads('{"status": "Failure", "kind": "Status", "code": 409, "apiVersion": "v1", "reason": "AlreadyExists", "details": {"kind": "node", "name": "10.200.147.200"}, "message": "node \\"10.200.147.200\\" already exists", "creationTimestamp": null}')
41 response.status = 200
42 self.r.command_succeeded(response, result)
43 response.status = 500
44 with pytest.raises(RuntimeError):
45 self.r.command_succeeded(response, result)
46 response.status = 409
47 with pytest.raises(ValueError):
48 self.r.command_succeeded(response, result)
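The test above exercises `register` by patching `httplib.HTTPConnection` so no real API server is needed. The same mocking pattern in Python 3 terms (`http.client`, `unittest.mock`), as a self-contained sketch around a hypothetical `post_registration` helper:

```python
import http.client  # Python 3 name for httplib
import json
from unittest.mock import patch


def post_registration(hostname, port, api_path, data):
    """POST a JSON registration body, in the style of Registrator.register."""
    headers = {'Content-type': 'application/json',
               'Accept': 'application/json'}
    conn = http.client.HTTPConnection(hostname, port)
    conn.request('POST', api_path, json.dumps(data), headers)
    response = conn.getresponse()
    return response, json.loads(response.read())


with patch('http.client.HTTPConnection') as httpmock:
    # The patched constructor returns the same mock on every call, so the
    # canned response body can be configured up front.
    httpmock().getresponse().read.return_value = '{"kind": "Minion"}'
    response, result = post_registration('foo', 80, '/api/v1/nodes',
                                         {'id': '10.0.0.5'})
```

Because `HTTPConnection` is looked up as a module attribute at call time, `patch` intercepts it without any changes to the code under test.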
=== modified file 'unit_tests/test_hooks.py'
--- unit_tests/test_hooks.py 2015-04-10 20:43:29 +0000
+++ unit_tests/test_hooks.py 2015-07-17 20:16:05 +0000
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
 # import pytest
 
 

Subscribers

People subscribed via source and target branches