Merge lp:~mbruzek/charms/trusty/kubernetes/v1.0.0 into lp:~kubernetes/charms/trusty/kubernetes/trunk

Proposed by Matt Bruzek
Status: Merged
Merged at revision: 7
Proposed branch: lp:~mbruzek/charms/trusty/kubernetes/v1.0.0
Merge into: lp:~kubernetes/charms/trusty/kubernetes/trunk
Diff against target: 1499 lines (+188/-1128)
17 files modified
charm-helpers.yaml (+0/-5)
cli.txt (+0/-52)
copyright (+7/-7)
docs/1-getting-started.md (+0/-81)
docs/contributing.md (+0/-52)
files/cadvisor.upstart.tmpl (+1/-1)
files/create_kubernetes_tar.sh (+0/-59)
files/kubelet.upstart.tmpl (+2/-1)
files/proxy.upstart.tmpl (+0/-1)
hooks/charmhelpers/core/hookenv.py (+0/-498)
hooks/charmhelpers/core/host.py (+0/-311)
hooks/hooks.py (+39/-60)
hooks/kubernetes_installer.py (+2/-0)
hooks/lib/__init__.py (+3/-0)
hooks/lib/registrator.py (+84/-0)
unit_tests/lib/test_registrator.py (+48/-0)
unit_tests/test_hooks.py (+2/-0)
To merge this branch: bzr merge lp:~mbruzek/charms/trusty/kubernetes/v1.0.0
Reviewer Review Type Date Requested Status
Charles Butler (community) Approve
Review via email: mp+265177@code.launchpad.net

Description of the change

Vendoring the changes to the kubernetes charm for the v1.0.0 release of Kubernetes.
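The functional core of this change is visible in the files/kubelet.upstart.tmpl and files/proxy.upstart.tmpl hunks: the kubelet and proxy no longer watch etcd directly via --etcd_servers but are instead pointed at the Kubernetes API server, and the kubelet pins cAdvisor to port 4193 to match the files/cadvisor.upstart.tmpl change. A minimal sketch of the rendered kubelet invocation after this merge, using hypothetical placeholder addresses (the real template substitutes %(kubelet_bind_addr)s and %(kubeapi_server)s at hook time):

    # Sketch rendered from files/kubelet.upstart.tmpl after this merge.
    # 10.0.0.5 and 10.0.0.2 are hypothetical unit/master addresses,
    # not values taken from this diff.
    exec /usr/local/bin/kubelet \
        --address=10.0.0.5 \
        --api_servers=http://10.0.0.2:8080 \
        --hostname_override=10.0.0.5 \
        --cadvisor_port=4193 \
        --logtostderr=true

This is a config fragment for illustration only, not an exact rendering from any deployed unit.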

Revision history for this message
Charles Butler (lazypower) wrote :

LGTM

Nice callout on the etcd changes before review. I think we have an opportunity to look into this in a lab environment and see what else we can do with regard to networking resolution outside of AWS and GCE.

review: Approve

Preview Diff

1=== removed file 'charm-helpers.yaml'
2--- charm-helpers.yaml 2015-01-27 17:31:57 +0000
3+++ charm-helpers.yaml 1970-01-01 00:00:00 +0000
4@@ -1,5 +0,0 @@
5-destination: hooks/charmhelpers
6-branch: lp:charmhelpers
7-include:
8- - core
9- - fetch
10
11=== removed file 'cli.txt'
12--- cli.txt 2015-01-27 17:31:57 +0000
13+++ cli.txt 1970-01-01 00:00:00 +0000
14@@ -1,52 +0,0 @@
15-$ ./kubelet -h
16-Usage of ./kubelet:
17- -address=127.0.0.1: The IP address for the info server to serve on (set to 0.0.0.0 for all interfaces)
18- -allow_privileged=false: If true, allow containers to request privileged mode. [default=false]
19- -alsologtostderr=false: log to standard error as well as files
20- -config="": Path to the config file or directory of files
21- -docker_endpoint="": If non-empty, use this for the docker endpoint to communicate with
22- -enable_debugging_handlers=true: Enables server endpoints for log collection and local running of containers and commands
23- -enable_server=true: Enable the info server
24- -etcd_config="": The config file for the etcd client. Mutually exclusive with -etcd_servers
25- -etcd_servers=[]: List of etcd servers to watch (http://ip:port), comma separated. Mutually exclusive with -etcd_config
26- -file_check_frequency=20s: Duration between checking config files for new data
27- -hostname_override="": If non-empty, will use this string as identification instead of the actual hostname.
28- -http_check_frequency=20s: Duration between checking http for new data
29- -log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
30- -log_dir="": If non-empty, write log files in this directory
31- -log_flush_frequency=5s: Maximum number of seconds between log flushes
32- -logtostderr=false: log to standard error instead of files
33- -manifest_url="": URL for accessing the container manifest
34- -network_container_image="kubernetes/pause:latest": The image that network containers in each pod will use.
35- -port=10250: The port for the info server to serve on
36- -registry_burst=10: Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry_qps. Only used if --registry_qps > 0
37- -registry_qps=0: If > 0, limit registry pull QPS to this value. If 0, unlimited. [default=0.0]
38- -root_dir="/var/lib/kubelet": Directory path for managing kubelet files (volume mounts,etc).
39- -runonce=false: If true, exit after spawning pods from local manifests or remote urls. Exclusive with --etcd_servers and --enable-server
40- -stderrthreshold=0: logs at or above this threshold go to stderr
41- -sync_frequency=10s: Max period between synchronizing running containers and config
42- -v=0: log level for V logs
43- -version=false: Print version information and quit
44- -vmodule=: comma-separated list of pattern=N settings for file-filtered logging
45-
46-
47-$ ./proxy -h
48-Usage of ./proxy:
49- -alsologtostderr=false: log to standard error as well as files
50- -api_version="": The API version to use when talking to the server
51- -bind_address=0.0.0.0: The address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
52- -certificate_authority="": Path to a cert. file for the certificate authority
53- -client_certificate="": Path to a client key file for TLS.
54- -client_key="": Path to a client key file for TLS.
55- -etcd_config="": The config file for the etcd client. Mutually exclusive with -etcd_servers
56- -etcd_servers=[]: List of etcd servers to watch (http://ip:port), comma separated (optional). Mutually exclusive with -etcd_config
57- -insecure_skip_tls_verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
58- -log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
59- -log_dir="": If non-empty, write log files in this directory
60- -log_flush_frequency=5s: Maximum number of seconds between log flushes
61- -logtostderr=false: log to standard error instead of files
62- -master="": The address of the Kubernetes API server
63- -stderrthreshold=0: logs at or above this threshold go to stderr
64- -v=0: log level for V logs
65- -version=false: Print version information and quit
66- -vmodule=: comma-separated list of pattern=N settings for file-filtered logging
67
68=== modified file 'copyright'
69--- copyright 2015-01-27 17:31:57 +0000
70+++ copyright 2015-07-17 20:16:05 +0000
71@@ -1,13 +1,13 @@
72-Copyright 2015 Canonical LTD
73+Copyright 2015 Canonical Ltd.
74
75 Licensed under the Apache License, Version 2.0 (the "License");
76 you may not use this file except in compliance with the License.
77 You may obtain a copy of the License at
78
79- http://www.apache.org/licenses/LICENSE-2.0
80+ http://www.apache.org/licenses/LICENSE-2.0
81
82- Unless required by applicable law or agreed to in writing, software
83- distributed under the License is distributed on an "AS IS" BASIS,
84- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
85- See the License for the specific language governing permissions and
86- limitations under the License.
87+Unless required by applicable law or agreed to in writing, software
88+distributed under the License is distributed on an "AS IS" BASIS,
89+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
90+See the License for the specific language governing permissions and
91+limitations under the License.
92
93=== removed directory 'docs'
94=== removed file 'docs/1-getting-started.md'
95--- docs/1-getting-started.md 2015-01-27 17:31:57 +0000
96+++ docs/1-getting-started.md 1970-01-01 00:00:00 +0000
97@@ -1,81 +0,0 @@
98-# Getting Started
99-
100-## Environment Considerations
101-
102-Kubernetes has specific cloud provider integration, and as of the current writing of this document that supported list includes the official Juju supported providers:
103-
104-- [Amazon AWS](https://jujucharms.com/docs/config-aws)
105-- [Azure](https://jujucharms.com/docs/config-azure)
106-- [Vagrant](https://jujucharms.com/docs/config-vagrant)
107-
108-Other providers available for use as a *juju manual environment* can be listed in the [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides)
109-
110-## Deployment
111-
112-The Kubernetes Charms are currently under heavy development. We encourage you to fork these charms and contribute back to the development effort! See our [contributing](contributing.md) doc for more information on this.
113-
114-#### Deploying the Preview Release charms
115-
116- juju deploy cs:~hazmat/trusty/etcd
117- juju deploy cs:~hazmat/trusty/flannel
118- juju deploy local:trusty/kubernetes-master
119- juju deploy local:trusty/kubernetes
120-
121- juju add-relation etcd flannel
122- juju add-relation etcd kubernetes
123- juju add-relation etcd kubernetes-master
124- juju add-relation kubernetes kubernetes-master
125-
126-#### Deploying the Development Release Charms
127-
128-> These charms are known to be unstable as they are tracking the current efforts of the community at enabling different features against Kubernetes. This includes the specifics for integration per cloud environment, and upgrading to the latest development version.
129-
130- mkdir -p ~/charms/trusty
131- git clone https://github.com/whitmo/kubernetes-master.git ~/charms/trusty/kubernetes-master
132- git clone https://github.com/whitmo/kubernetes.git ~/charms/trusty/kubernetes
133-
134-##### Skipping the manual deployment after git clone
135-
136-> **Note:** This path requires the pre-requisite of juju-deployer. You can obtain juju-deployer via `apt-get install juju-deployer`
137-
138- wget https://github.com/whitmo/bundle-kubernetes/blob/master/develop.yaml kubernetes-devel.yaml
139- juju-deployer kubernetes-devel.yaml
140-
141-
142-## Verifying Deployment with the Kubernetes Agent
143-
144-You'll need the kubernetes command line client to utlize the created cluster. And this can be fetched from the [Releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page on the Kubernetes project. Make sure you're fetching a client library that matches what the charm is deploying.
145-
146-Grab the tarball and from the extracted release you can just directly use the cli binary at ./kubernetes/platforms/linux/amd64/kubecfg
147-
148-You'll need the address of the kubernetes master as environment variable :
149-
150- juju status kubernetes-master/0
151-
152-Grab the public-address there and export it as KUBERNETES_MASTER environment variable :
153-
154- export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | cut -d' ' -f3):8080
155-
156-And now you can run through the kubernetes examples per normal. :
157-
158- kubecfg list minions
159-
160-
161-## Scale Up
162-
163-If the default capacity of the bundle doesn't provide enough capacity for your workload(s) you can scale horizontially by adding a unit to the flannel and kubernetes services respectively.
164-
165- juju add-unit flannel
166- juju add-unit kubernetes --to # (machine id of new flannel unit)
167-
168-## Known Issues / Limitations
169-
170-Kubernetes currently has platform specific functionality. For example load balancers and persistence volumes only work with the google compute provider atm.
171-
172-The Juju integration uses the kubernetes null provider. This means external load balancers and storage can't be directly driven through kubernetes config files.
173-
174-## Where to get help
175-
176-If you run into any issues, file a bug at our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues), email the Juju Mailing List at <juju@lists.ubuntu.com>, or feel free to join us in #juju on irc.freenode.net.
177-
178-
179
180=== removed file 'docs/contributing.md'
181--- docs/contributing.md 2015-01-27 17:31:57 +0000
182+++ docs/contributing.md 1970-01-01 00:00:00 +0000
183@@ -1,52 +0,0 @@
184-
185-#### Contributions are welcome, in any form. Whether that be Bugs, BugFixes, Documentation, or Features.
186-
187-### Submitting a bug
188-
189-1. Go to our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues) on GitHub
190-2. Search for existing issues using the search field at the top of the page
191-3. File a new issue including the info listed below
192-4. Thanks a ton for helping make the Kubernetes Charm higher quality!
193-
194-##### When filing a new bug, please include:
195-
196-- **Descriptive title** - use keywords so others can find your bug (avoiding duplicates)
197-- **Steps to trigger the problem** - that are specific, and repeatable
198-- **What happens** - when you follow the steps, and what you expected to happen instead.
199-- Include the exact text of any error messages if applicable (or upload screenshots).
200-- Kubernetes Charm version (or if you're pulling directly from Git, your current commit SHA - use git rev-parse HEAD) and the Juju Version output from `juju --version`.
201-- Did this work in a previous charm version? If so, also provide the version that it worked in.
202-- Any errors logged in `juju debug log` Console view
203-
204-### Can I help fix a bug?
205-
206-Yes please! But first...
207-
208-- Make sure no one else is already working on it -- if the bug has a milestone assigned or is tagged 'fix in progress', then it's already under way. Otherwise, post a comment on the bug to let others know you're starting to work on it.
209-
210-We use the Fork & Pull model for distributed development. For a more in-depth overview: consult with the github documentation on [Collaborative Development Models](https://help.github.com/articles/using-pull-requests/#before-you-begin).
211-
212-> ##### Fork & pull
213->
214-> The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.
215-
216-### Submitting a Bug Fix
217-
218-The following checklist will help developers not familiar with the fork and pull process of development. We appreciate your enthusiasm to make the Kubernetes Charm a High Quality experience! To Rapidly get started - follow the 8 steps below.
219-
220-1. [Fork the repository](https://help.github.com/articles/fork-a-repo/)
221-2. Clone your fork `git clone git@github.com/myusername/kubernetes-charm.git`
222-3. Checkout your topic branch with `git checkout -b my-awesome-bugfix`
223-4. Hack away at your feature/bugfix
224-5. Validate your bugfix if possible in the amulet test(s) so we dont reintroduce it later.
225-6. Validate your code meets guidelines by passing lint tests `make lint`
226-6. Commit code `git commit -a -m 'i did all this work to fix #32'`
227-7. Push your branch to your forks remote branch `git push origin my-awesome-bugfix`
228-8. Create the [Pull Request](https://help.github.com/articles/using-pull-requests/#initiating-the-pull-request)
229-9. Await Code Review
230-10. Rejoice when Pull Request is accepted
231-
232-### Submitting a Feature
233-
234-The Steps are the same as [Submitting a Bug Fix](#submitting-a-bug-fix). If you want extra credit, make sure you [File an issue](http://github.com/whitmo/kubernetes-charm/issues) that covers the Feature you are working on - as kind of a courtesy heads up. And assign the issue to yourself so we know you are working on it.
235-
236
237=== modified file 'files/cadvisor.upstart.tmpl'
238--- files/cadvisor.upstart.tmpl 2015-01-27 17:31:57 +0000
239+++ files/cadvisor.upstart.tmpl 2015-07-17 20:16:05 +0000
240@@ -11,6 +11,6 @@
241 --volume=/var/run:/var/run:rw \
242 --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
243 --volume=/var/lib/docker/:/var/lib/docker:ro \
244- --publish=127.0.0.1:4194:8080 \
245+ --publish=127.0.0.1:4193:8080 \
246 --name=cadvisor \
247 google/cadvisor:latest
248
249=== removed file 'files/create_kubernetes_tar.sh'
250--- files/create_kubernetes_tar.sh 2015-01-27 17:31:57 +0000
251+++ files/create_kubernetes_tar.sh 1970-01-01 00:00:00 +0000
252@@ -1,59 +0,0 @@
253-#!/bin/bash
254-
255-set -ex
256-
257-# This script downloads a Kubernetes release and creates a tar file with only
258-# the files that are needed for this charm.
259-
260-# Usage: create_kubernetes_tar.sh VERSION ARCHITECTURE
261-
262-usage() {
263- echo "Build a tar file with only the files needed for the kubernetes charm."
264- echo "The script accepts two arguments version and desired architecture."
265- echo "$0 version architecture"
266-}
267-
268-download_kubernetes() {
269- local VERSION=$1
270- URL_PREFIX="https://github.com/GoogleCloudPlatform/kubernetes"
271- KUBERNETES_URL="${URL_PREFIX}/releases/download/${VERSION}/kubernetes.tar.gz"
272- # Remove the previous temporary files to remain idempotent.
273- if [ -f /tmp/kubernetes.tar.gz ]; then
274- rm /tmp/kubernetes.tar.gz
275- fi
276- # Download the kubernetes release from the Internet.
277- wget --no-verbose --tries 2 -O /tmp/kubernetes.tar.gz $KUBERNETES_URL
278-}
279-
280-extract_kubernetes() {
281- local ARCH=$1
282- # Untar the kubernetes release file.
283- tar -xvzf /tmp/kubernetes.tar.gz -C /tmp
284- # Untar the server linux amd64 package.
285- tar -xvzf /tmp/kubernetes/server/kubernetes-server-linux-$ARCH.tar.gz -C /tmp
286-}
287-
288-create_charm_tar() {
289- local OUTPUT_FILE=${1:-"$PWD/kubernetes.tar.gz"}
290- local OUTPUT_DIR=`dirname $OUTPUT_FILE`
291- if [ ! -d $OUTPUT_DIR ]; then
292- mkdir -p $OUTPUT
293- fi
294-
295- # Change to the directory the binaries are.
296- cd /tmp/kubernetes/server/bin/
297-
298- # Create a tar file with the binaries that are needed for kubernetes minion.
299- tar -cvzf $OUTPUT_FILE kubelet kube-proxy
300-}
301-
302-if [ $# -gt 2 ]; then
303- usage
304- exit 1
305-fi
306-VERSION=${1:-"v0.8.1"}
307-ARCH=${2:-"amd64"}
308-download_kubernetes $VERSION
309-extract_kubernetes $ARCH
310-TAR_FILE="$PWD/kubernetes-$VERSION-$ARCH.tar.gz"
311-create_charm_tar $TAR_FILE
312
313=== modified file 'files/kubelet.upstart.tmpl'
314--- files/kubelet.upstart.tmpl 2015-04-10 20:43:29 +0000
315+++ files/kubelet.upstart.tmpl 2015-07-17 20:16:05 +0000
316@@ -9,6 +9,7 @@
317
318 exec /usr/local/bin/kubelet \
319 --address=%(kubelet_bind_addr)s \
320- --etcd_servers=%(etcd_servers)s \
321+ --api_servers=%(kubeapi_server)s \
322 --hostname_override=%(kubelet_bind_addr)s \
323+ --cadvisor_port=4193 \
324 --logtostderr=true
325
326=== modified file 'files/proxy.upstart.tmpl'
327--- files/proxy.upstart.tmpl 2015-04-10 20:43:29 +0000
328+++ files/proxy.upstart.tmpl 2015-07-17 20:16:05 +0000
329@@ -8,6 +8,5 @@
330 kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
331
332 exec /usr/local/bin/proxy \
333- --etcd_servers=%(etcd_servers)s \
334 --master=%(kubeapi_server)s \
335 --logtostderr=true
336
337=== removed directory 'hooks/charmhelpers'
338=== removed file 'hooks/charmhelpers/__init__.py'
339=== removed directory 'hooks/charmhelpers/core'
340=== removed file 'hooks/charmhelpers/core/__init__.py'
341=== removed file 'hooks/charmhelpers/core/hookenv.py'
342--- hooks/charmhelpers/core/hookenv.py 2015-01-27 17:31:57 +0000
343+++ hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
344@@ -1,498 +0,0 @@
345-"Interactions with the Juju environment"
346-# Copyright 2013 Canonical Ltd.
347-#
348-# Authors:
349-# Charm Helpers Developers <juju@lists.ubuntu.com>
350-
351-import os
352-import json
353-import yaml
354-import subprocess
355-import sys
356-import UserDict
357-from subprocess import CalledProcessError
358-
359-CRITICAL = "CRITICAL"
360-ERROR = "ERROR"
361-WARNING = "WARNING"
362-INFO = "INFO"
363-DEBUG = "DEBUG"
364-MARKER = object()
365-
366-cache = {}
367-
368-
369-def cached(func):
370- """Cache return values for multiple executions of func + args
371-
372- For example:
373-
374- @cached
375- def unit_get(attribute):
376- pass
377-
378- unit_get('test')
379-
380- will cache the result of unit_get + 'test' for future calls.
381- """
382- def wrapper(*args, **kwargs):
383- global cache
384- key = str((func, args, kwargs))
385- try:
386- return cache[key]
387- except KeyError:
388- res = func(*args, **kwargs)
389- cache[key] = res
390- return res
391- return wrapper
392-
393-
394-def flush(key):
395- """Flushes any entries from function cache where the
396- key is found in the function+args """
397- flush_list = []
398- for item in cache:
399- if key in item:
400- flush_list.append(item)
401- for item in flush_list:
402- del cache[item]
403-
404-
405-def log(message, level=None):
406- """Write a message to the juju log"""
407- command = ['juju-log']
408- if level:
409- command += ['-l', level]
410- command += [message]
411- subprocess.call(command)
412-
413-
414-class Serializable(UserDict.IterableUserDict):
415- """Wrapper, an object that can be serialized to yaml or json"""
416-
417- def __init__(self, obj):
418- # wrap the object
419- UserDict.IterableUserDict.__init__(self)
420- self.data = obj
421-
422- def __getattr__(self, attr):
423- # See if this object has attribute.
424- if attr in ("json", "yaml", "data"):
425- return self.__dict__[attr]
426- # Check for attribute in wrapped object.
427- got = getattr(self.data, attr, MARKER)
428- if got is not MARKER:
429- return got
430- # Proxy to the wrapped object via dict interface.
431- try:
432- return self.data[attr]
433- except KeyError:
434- raise AttributeError(attr)
435-
436- def __getstate__(self):
437- # Pickle as a standard dictionary.
438- return self.data
439-
440- def __setstate__(self, state):
441- # Unpickle into our wrapper.
442- self.data = state
443-
444- def json(self):
445- """Serialize the object to json"""
446- return json.dumps(self.data)
447-
448- def yaml(self):
449- """Serialize the object to yaml"""
450- return yaml.dump(self.data)
451-
452-
453-def execution_environment():
454- """A convenient bundling of the current execution context"""
455- context = {}
456- context['conf'] = config()
457- if relation_id():
458- context['reltype'] = relation_type()
459- context['relid'] = relation_id()
460- context['rel'] = relation_get()
461- context['unit'] = local_unit()
462- context['rels'] = relations()
463- context['env'] = os.environ
464- return context
465-
466-
467-def in_relation_hook():
468- """Determine whether we're running in a relation hook"""
469- return 'JUJU_RELATION' in os.environ
470-
471-
472-def relation_type():
473- """The scope for the current relation hook"""
474- return os.environ.get('JUJU_RELATION', None)
475-
476-
477-def relation_id():
478- """The relation ID for the current relation hook"""
479- return os.environ.get('JUJU_RELATION_ID', None)
480-
481-
482-def local_unit():
483- """Local unit ID"""
484- return os.environ['JUJU_UNIT_NAME']
485-
486-
487-def remote_unit():
488- """The remote unit for the current relation hook"""
489- return os.environ['JUJU_REMOTE_UNIT']
490-
491-
492-def service_name():
493- """The name service group this unit belongs to"""
494- return local_unit().split('/')[0]
495-
496-
497-def hook_name():
498- """The name of the currently executing hook"""
499- return os.path.basename(sys.argv[0])
500-
501-
502-class Config(dict):
503- """A Juju charm config dictionary that can write itself to
504- disk (as json) and track which values have changed since
505- the previous hook invocation.
506-
507- Do not instantiate this object directly - instead call
508- ``hookenv.config()``
509-
510- Example usage::
511-
512- >>> # inside a hook
513- >>> from charmhelpers.core import hookenv
514- >>> config = hookenv.config()
515- >>> config['foo']
516- 'bar'
517- >>> config['mykey'] = 'myval'
518- >>> config.save()
519-
520-
521- >>> # user runs `juju set mycharm foo=baz`
522- >>> # now we're inside subsequent config-changed hook
523- >>> config = hookenv.config()
524- >>> config['foo']
525- 'baz'
526- >>> # test to see if this val has changed since last hook
527- >>> config.changed('foo')
528- True
529- >>> # what was the previous value?
530- >>> config.previous('foo')
531- 'bar'
532- >>> # keys/values that we add are preserved across hooks
533- >>> config['mykey']
534- 'myval'
535- >>> # don't forget to save at the end of hook!
536- >>> config.save()
537-
538- """
539- CONFIG_FILE_NAME = '.juju-persistent-config'
540-
541- def __init__(self, *args, **kw):
542- super(Config, self).__init__(*args, **kw)
543- self._prev_dict = None
544- self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
545- if os.path.exists(self.path):
546- self.load_previous()
547-
548- def load_previous(self, path=None):
549- """Load previous copy of config from disk so that current values
550- can be compared to previous values.
551-
552- :param path:
553-
554- File path from which to load the previous config. If `None`,
555- config is loaded from the default location. If `path` is
556- specified, subsequent `save()` calls will write to the same
557- path.
558-
559- """
560- self.path = path or self.path
561- with open(self.path) as f:
562- self._prev_dict = json.load(f)
563-
564- def changed(self, key):
565- """Return true if the value for this key has changed since
566- the last save.
567-
568- """
569- if self._prev_dict is None:
570- return True
571- return self.previous(key) != self.get(key)
572-
573- def previous(self, key):
574- """Return previous value for this key, or None if there
575- is no "previous" value.
576-
577- """
578- if self._prev_dict:
579- return self._prev_dict.get(key)
580- return None
581-
582- def save(self):
583- """Save this config to disk.
584-
585- Preserves items in _prev_dict that do not exist in self.
586-
587- """
588- if self._prev_dict:
589- for k, v in self._prev_dict.iteritems():
590- if k not in self:
591- self[k] = v
592- with open(self.path, 'w') as f:
593- json.dump(self, f)
594-
595-
596-@cached
597-def config(scope=None):
598- """Juju charm configuration"""
599- config_cmd_line = ['config-get']
600- if scope is not None:
601- config_cmd_line.append(scope)
602- config_cmd_line.append('--format=json')
603- try:
604- config_data = json.loads(subprocess.check_output(config_cmd_line))
605- if scope is not None:
606- return config_data
607- return Config(config_data)
608- except ValueError:
609- return None
610-
611-
612-@cached
613-def relation_get(attribute=None, unit=None, rid=None):
614- """Get relation information"""
615- _args = ['relation-get', '--format=json']
616- if rid:
617- _args.append('-r')
618- _args.append(rid)
619- _args.append(attribute or '-')
620- if unit:
621- _args.append(unit)
622- try:
623- return json.loads(subprocess.check_output(_args))
624- except ValueError:
625- return None
626- except CalledProcessError, e:
627- if e.returncode == 2:
628- return None
629- raise
630-
631-
632-def relation_set(relation_id=None, relation_settings={}, **kwargs):
633- """Set relation information for the current unit"""
634- relation_cmd_line = ['relation-set']
635- if relation_id is not None:
636- relation_cmd_line.extend(('-r', relation_id))
637- for k, v in (relation_settings.items() + kwargs.items()):
638- if v is None:
639- relation_cmd_line.append('{}='.format(k))
640- else:
641- relation_cmd_line.append('{}={}'.format(k, v))
642- subprocess.check_call(relation_cmd_line)
643- # Flush cache of any relation-gets for local unit
644- flush(local_unit())
645-
646-
647-@cached
648-def relation_ids(reltype=None):
649- """A list of relation_ids"""
650- reltype = reltype or relation_type()
651- relid_cmd_line = ['relation-ids', '--format=json']
652- if reltype is not None:
653- relid_cmd_line.append(reltype)
654- return json.loads(subprocess.check_output(relid_cmd_line)) or []
655- return []
656-
657-
658-@cached
659-def related_units(relid=None):
660- """A list of related units"""
661- relid = relid or relation_id()
662- units_cmd_line = ['relation-list', '--format=json']
663- if relid is not None:
664- units_cmd_line.extend(('-r', relid))
665- return json.loads(subprocess.check_output(units_cmd_line)) or []
666-
667-
668-@cached
669-def relation_for_unit(unit=None, rid=None):
670- """Get the json represenation of a unit's relation"""
671- unit = unit or remote_unit()
672- relation = relation_get(unit=unit, rid=rid)
673- for key in relation:
674- if key.endswith('-list'):
675- relation[key] = relation[key].split()
676- relation['__unit__'] = unit
677- return relation
678-
679-
680-@cached
681-def relations_for_id(relid=None):
682- """Get relations of a specific relation ID"""
683- relation_data = []
684- relid = relid or relation_ids()
685- for unit in related_units(relid):
686- unit_data = relation_for_unit(unit, relid)
687- unit_data['__relid__'] = relid
688- relation_data.append(unit_data)
689- return relation_data
690-
691-
692-@cached
693-def relations_of_type(reltype=None):
694- """Get relations of a specific type"""
695- relation_data = []
696- reltype = reltype or relation_type()
697- for relid in relation_ids(reltype):
698- for relation in relations_for_id(relid):
699- relation['__relid__'] = relid
700- relation_data.append(relation)
701- return relation_data
702-
703-
704-@cached
705-def relation_types():
706- """Get a list of relation types supported by this charm"""
707- charmdir = os.environ.get('CHARM_DIR', '')
708- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
709- md = yaml.safe_load(mdf)
710- rel_types = []
711- for key in ('provides', 'requires', 'peers'):
712- section = md.get(key)
713- if section:
714- rel_types.extend(section.keys())
715- mdf.close()
716- return rel_types
717-
718-
719-@cached
720-def relations():
721- """Get a nested dictionary of relation data for all related units"""
722- rels = {}
723- for reltype in relation_types():
724- relids = {}
725- for relid in relation_ids(reltype):
726- units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
727- for unit in related_units(relid):
728- reldata = relation_get(unit=unit, rid=relid)
729- units[unit] = reldata
730- relids[relid] = units
731- rels[reltype] = relids
732- return rels
733-
734-
735-@cached
736-def is_relation_made(relation, keys='private-address'):
737- '''
738- Determine whether a relation is established by checking for
739- presence of key(s). If a list of keys is provided, they
740- must all be present for the relation to be identified as made
741- '''
742- if isinstance(keys, str):
743- keys = [keys]
744- for r_id in relation_ids(relation):
745- for unit in related_units(r_id):
746- context = {}
747- for k in keys:
748- context[k] = relation_get(k, rid=r_id,
749- unit=unit)
750- if None not in context.values():
751- return True
752- return False
753-
754-
755-def open_port(port, protocol="TCP"):
756- """Open a service network port"""
757- _args = ['open-port']
758- _args.append('{}/{}'.format(port, protocol))
759- subprocess.check_call(_args)
760-
761-
762-def close_port(port, protocol="TCP"):
763- """Close a service network port"""
764- _args = ['close-port']
765- _args.append('{}/{}'.format(port, protocol))
766- subprocess.check_call(_args)
767-
768-
769-@cached
770-def unit_get(attribute):
771- """Get the unit ID for the remote unit"""
772- _args = ['unit-get', '--format=json', attribute]
773- try:
774- return json.loads(subprocess.check_output(_args))
775- except ValueError:
776- return None
777-
778-
779-def unit_private_ip():
780- """Get this unit's private IP address"""
781- return unit_get('private-address')
782-
783-
784-class UnregisteredHookError(Exception):
785- """Raised when an undefined hook is called"""
786- pass
787-
788-
789-class Hooks(object):
790- """A convenient handler for hook functions.
791-
792- Example:
793- hooks = Hooks()
794-
795- # register a hook, taking its name from the function name
796- @hooks.hook()
797- def install():
798- ...
799-
800- # register a hook, providing a custom hook name
801- @hooks.hook("config-changed")
802- def config_changed():
803- ...
804-
805- if __name__ == "__main__":
806- # execute a hook based on the name the program is called by
807- hooks.execute(sys.argv)
808- """
809-
810- def __init__(self):
811- super(Hooks, self).__init__()
812- self._hooks = {}
813-
814- def register(self, name, function):
815- """Register a hook"""
816- self._hooks[name] = function
817-
818- def execute(self, args):
819- """Execute a registered hook based on args[0]"""
820- hook_name = os.path.basename(args[0])
821- if hook_name in self._hooks:
822- self._hooks[hook_name]()
823- else:
824- raise UnregisteredHookError(hook_name)
825-
826- def hook(self, *hook_names):
827- """Decorator, registering them as hooks"""
828- def wrapper(decorated):
829- for hook_name in hook_names:
830- self.register(hook_name, decorated)
831- else:
832- self.register(decorated.__name__, decorated)
833- if '_' in decorated.__name__:
834- self.register(
835- decorated.__name__.replace('_', '-'), decorated)
836- return decorated
837- return wrapper
838-
839-
840-def charm_dir():
841- """Return the root directory of the current charm"""
842- return os.environ.get('CHARM_DIR')
843
844=== removed file 'hooks/charmhelpers/core/host.py'
845--- hooks/charmhelpers/core/host.py 2015-01-27 17:31:57 +0000
846+++ hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
847@@ -1,311 +0,0 @@
848-"""Tools for working with the host system"""
849-# Copyright 2012 Canonical Ltd.
850-#
851-# Authors:
852-# Nick Moffitt <nick.moffitt@canonical.com>
853-# Matthew Wedgwood <matthew.wedgwood@canonical.com>
854-
855-import os
856-import pwd
857-import grp
858-import random
859-import string
860-import subprocess
861-import hashlib
862-import apt_pkg
863-
864-from collections import OrderedDict
865-
866-from hookenv import log
867-
868-
869-def service_start(service_name):
870- """Start a system service"""
871- return service('start', service_name)
872-
873-
874-def service_stop(service_name):
875- """Stop a system service"""
876- return service('stop', service_name)
877-
878-
879-def service_restart(service_name):
880- """Restart a system service"""
881- return service('restart', service_name)
882-
883-
884-def service_reload(service_name, restart_on_failure=False):
885- """Reload a system service, optionally falling back to restart if reload fails"""
886- service_result = service('reload', service_name)
887- if not service_result and restart_on_failure:
888- service_result = service('restart', service_name)
889- return service_result
890-
891-
892-def service(action, service_name):
893- """Control a system service"""
894- cmd = ['service', service_name, action]
895- return subprocess.call(cmd) == 0
896-
897-
898-def service_running(service):
899- """Determine whether a system service is running"""
900- try:
901- output = subprocess.check_output(['service', service, 'status'])
902- except subprocess.CalledProcessError:
903- return False
904- else:
905- if ("start/running" in output or "is running" in output):
906- return True
907- else:
908- return False
909-
910-
911-def adduser(username, password=None, shell='/bin/bash', system_user=False):
912- """Add a user to the system"""
913- try:
914- user_info = pwd.getpwnam(username)
915- log('user {0} already exists!'.format(username))
916- except KeyError:
917- log('creating user {0}'.format(username))
918- cmd = ['useradd']
919- if system_user or password is None:
920- cmd.append('--system')
921- else:
922- cmd.extend([
923- '--create-home',
924- '--shell', shell,
925- '--password', password,
926- ])
927- cmd.append(username)
928- subprocess.check_call(cmd)
929- user_info = pwd.getpwnam(username)
930- return user_info
931-
932-
933-def add_user_to_group(username, group):
934- """Add a user to a group"""
935- cmd = [
936- 'gpasswd', '-a',
937- username,
938- group
939- ]
940- log("Adding user {} to group {}".format(username, group))
941- subprocess.check_call(cmd)
942-
943-
944-def rsync(from_path, to_path, flags='-r', options=None):
945- """Replicate the contents of a path"""
946- options = options or ['--delete', '--executability']
947- cmd = ['/usr/bin/rsync', flags]
948- cmd.extend(options)
949- cmd.append(from_path)
950- cmd.append(to_path)
951- log(" ".join(cmd))
952- return subprocess.check_output(cmd).strip()
953-
954-
955-def symlink(source, destination):
956- """Create a symbolic link"""
957- log("Symlinking {} as {}".format(source, destination))
958- cmd = [
959- 'ln',
960- '-sf',
961- source,
962- destination,
963- ]
964- subprocess.check_call(cmd)
965-
966-
967-def mkdir(path, owner='root', group='root', perms=0555, force=False):
968- """Create a directory"""
969- log("Making dir {} {}:{} {:o}".format(path, owner, group,
970- perms))
971- uid = pwd.getpwnam(owner).pw_uid
972- gid = grp.getgrnam(group).gr_gid
973- realpath = os.path.abspath(path)
974- if os.path.exists(realpath):
975- if force and not os.path.isdir(realpath):
976- log("Removing non-directory file {} prior to mkdir()".format(path))
977- os.unlink(realpath)
978- else:
979- os.makedirs(realpath, perms)
980- os.chown(realpath, uid, gid)
981-
982-
983-def write_file(path, content, owner='root', group='root', perms=0444):
984- """Create or overwrite a file with the contents of a string"""
985- log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
986- uid = pwd.getpwnam(owner).pw_uid
987- gid = grp.getgrnam(group).gr_gid
988- with open(path, 'w') as target:
989- os.fchown(target.fileno(), uid, gid)
990- os.fchmod(target.fileno(), perms)
991- target.write(content)
992-
993-
994-def mount(device, mountpoint, options=None, persist=False):
995- """Mount a filesystem at a particular mountpoint"""
996- cmd_args = ['mount']
997- if options is not None:
998- cmd_args.extend(['-o', options])
999- cmd_args.extend([device, mountpoint])
1000- try:
1001- subprocess.check_output(cmd_args)
1002- except subprocess.CalledProcessError, e:
1003- log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
1004- return False
1005- if persist:
1006- # TODO: update fstab
1007- pass
1008- return True
1009-
1010-
1011-def umount(mountpoint, persist=False):
1012- """Unmount a filesystem"""
1013- cmd_args = ['umount', mountpoint]
1014- try:
1015- subprocess.check_output(cmd_args)
1016- except subprocess.CalledProcessError, e:
1017- log('Error unmounting {}\n{}'.format(mountpoint, e.output))
1018- return False
1019- if persist:
1020- # TODO: update fstab
1021- pass
1022- return True
1023-
1024-
1025-def mounts():
1026- """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
1027- with open('/proc/mounts') as f:
1028- # [['/mount/point','/dev/path'],[...]]
1029- system_mounts = [m[1::-1] for m in [l.strip().split()
1030- for l in f.readlines()]]
1031- return system_mounts
1032-
1033-
1034-def file_hash(path):
1035- """Generate a md5 hash of the contents of 'path' or None if not found """
1036- if os.path.exists(path):
1037- h = hashlib.md5()
1038- with open(path, 'r') as source:
1039- h.update(source.read()) # IGNORE:E1101 - it does have update
1040- return h.hexdigest()
1041- else:
1042- return None
1043-
1044-
1045-def restart_on_change(restart_map, stopstart=False):
1046- """Restart services based on configuration files changing
1047-
1048- This function is used a decorator, for example
1049-
1050- @restart_on_change({
1051- '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
1052- })
1053- def ceph_client_changed():
1054- ...
1055-
1056- In this example, the cinder-api and cinder-volume services
1057- would be restarted if /etc/ceph/ceph.conf is changed by the
1058- ceph_client_changed function.
1059- """
1060- def wrap(f):
1061- def wrapped_f(*args):
1062- checksums = {}
1063- for path in restart_map:
1064- checksums[path] = file_hash(path)
1065- f(*args)
1066- restarts = []
1067- for path in restart_map:
1068- if checksums[path] != file_hash(path):
1069- restarts += restart_map[path]
1070- services_list = list(OrderedDict.fromkeys(restarts))
1071- if not stopstart:
1072- for service_name in services_list:
1073- service('restart', service_name)
1074- else:
1075- for action in ['stop', 'start']:
1076- for service_name in services_list:
1077- service(action, service_name)
1078- return wrapped_f
1079- return wrap
1080-
1081-
1082-def lsb_release():
1083- """Return /etc/lsb-release in a dict"""
1084- d = {}
1085- with open('/etc/lsb-release', 'r') as lsb:
1086- for l in lsb:
1087- k, v = l.split('=')
1088- d[k.strip()] = v.strip()
1089- return d
1090-
1091-
1092-def pwgen(length=None):
1093- """Generate a random pasword."""
1094- if length is None:
1095- length = random.choice(range(35, 45))
1096- alphanumeric_chars = [
1097- l for l in (string.letters + string.digits)
1098- if l not in 'l0QD1vAEIOUaeiou']
1099- random_chars = [
1100- random.choice(alphanumeric_chars) for _ in range(length)]
1101- return(''.join(random_chars))
1102-
1103-
1104-def list_nics(nic_type):
1105- '''Return a list of nics of given type(s)'''
1106- if isinstance(nic_type, basestring):
1107- int_types = [nic_type]
1108- else:
1109- int_types = nic_type
1110- interfaces = []
1111- for int_type in int_types:
1112- cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
1113- ip_output = subprocess.check_output(cmd).split('\n')
1114- ip_output = (line for line in ip_output if line)
1115- for line in ip_output:
1116- if line.split()[1].startswith(int_type):
1117- interfaces.append(line.split()[1].replace(":", ""))
1118- return interfaces
1119-
1120-
1121-def set_nic_mtu(nic, mtu):
1122- '''Set MTU on a network interface'''
1123- cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
1124- subprocess.check_call(cmd)
1125-
1126-
1127-def get_nic_mtu(nic):
1128- cmd = ['ip', 'addr', 'show', nic]
1129- ip_output = subprocess.check_output(cmd).split('\n')
1130- mtu = ""
1131- for line in ip_output:
1132- words = line.split()
1133- if 'mtu' in words:
1134- mtu = words[words.index("mtu") + 1]
1135- return mtu
1136-
1137-
1138-def get_nic_hwaddr(nic):
1139- cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
1140- ip_output = subprocess.check_output(cmd)
1141- hwaddr = ""
1142- words = ip_output.split()
1143- if 'link/ether' in words:
1144- hwaddr = words[words.index('link/ether') + 1]
1145- return hwaddr
1146-
1147-
1148-def cmp_pkgrevno(package, revno, pkgcache=None):
1149- '''Compare supplied revno with the revno of the installed package
1150- 1 => Installed revno is greater than supplied arg
1151- 0 => Installed revno is the same as supplied arg
1152- -1 => Installed revno is less than supplied arg
1153- '''
1154- if not pkgcache:
1155- apt_pkg.init()
1156- pkgcache = apt_pkg.Cache()
1157- pkg = pkgcache[package]
1158- return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
1159
1160=== modified file 'hooks/hooks.py'
1161--- hooks/hooks.py 2015-04-10 20:43:29 +0000
1162+++ hooks/hooks.py 2015-07-17 20:16:05 +0000
1163@@ -1,4 +1,5 @@
1164-#!/usr/bin/python
1165+#!/usr/bin/env python
1166+
1167 """
1168 The main hook file that is called by Juju.
1169 """
1170@@ -15,6 +16,8 @@
1171 from kubernetes_installer import KubernetesInstaller
1172 from path import path
1173
1174+from lib.registrator import Registrator
1175+
1176 hooks = hookenv.Hooks()
1177
1178
1179@@ -69,21 +72,21 @@
1180 # Check required keys
1181 for k in ('etcd_servers', 'kubeapi_server'):
1182 if not template_data.get(k):
1183- print("Missing data for %s %s" % (k, template_data))
1184+ print('Missing data for %s %s' % (k, template_data))
1185 return
1186- print("Running with\n%s" % template_data)
1187+ print('Running with\n%s' % template_data)
1188
1189 # Setup kubernetes supplemental group
1190 setup_kubernetes_group()
1191
1192- # Register services
1193- for n in ("cadvisor", "kubelet", "proxy"):
1194+ # Register upstart managed services
1195+ for n in ('kubelet', 'proxy'):
1196 if render_upstart(n, template_data) or not host.service_running(n):
1197- print("Starting %s" % n)
1198+ print('Starting %s' % n)
1199 host.service_restart(n)
1200
1201 # Register machine via api
1202- print("Registering machine")
1203+ print('Registering machine')
1204 register_machine(template_data['kubeapi_server'])
1205
1206 # Save the marker (for restarts to detect prev install)
1207@@ -93,7 +96,7 @@
1208 def get_template_data():
1209 rels = hookenv.relations()
1210 template_data = hookenv.Config()
1211- template_data.CONFIG_FILE_NAME = ".unit-state"
1212+ template_data.CONFIG_FILE_NAME = '.unit-state'
1213
1214 overlay_type = get_scoped_rel_attr('network', rels, 'overlay_type')
1215 etcd_servers = get_rel_hosts('etcd', rels, ('hostname', 'port'))
1216@@ -102,7 +105,7 @@
1217 # kubernetes master isn't ha yet.
1218 if api_servers:
1219 api_info = api_servers.pop()
1220- api_servers = "http://%s:%s" % (api_info[0], api_info[1])
1221+ api_servers = 'http://%s:%s' % (api_info[0], api_info[1])
1222
1223 template_data['overlay_type'] = overlay_type
1224 template_data['kubelet_bind_addr'] = _bind_addr(
1225@@ -110,7 +113,7 @@
1226 template_data['proxy_bind_addr'] = _bind_addr(
1227 hookenv.unit_get('public-address'))
1228 template_data['kubeapi_server'] = api_servers
1229- template_data['etcd_servers'] = ",".join([
1230+ template_data['etcd_servers'] = ','.join([
1231 'http://%s:%s' % (s[0], s[1]) for s in sorted(etcd_servers)])
1232 template_data['identifier'] = os.environ['JUJU_UNIT_NAME'].replace(
1233 '/', '-')
1234@@ -123,7 +126,7 @@
1235 try:
1236 return socket.gethostbyname(addr)
1237 except socket.error:
1238- raise ValueError("Could not resolve private address")
1239+ raise ValueError('Could not resolve private address')
1240
1241
1242 def _encode(d):
1243@@ -179,60 +182,36 @@
1244
1245 def register_machine(apiserver, retry=False):
1246 parsed = urlparse.urlparse(apiserver)
1247- headers = {"Content-type": "application/json",
1248- "Accept": "application/json"}
1249 # identity = hookenv.local_unit().replace('/', '-')
1250 private_address = hookenv.unit_private_ip()
1251
1252 with open('/proc/meminfo') as fh:
1253 info = fh.readline()
1254- mem = info.strip().split(":")[1].strip().split()[0]
1255- cpus = os.sysconf("SC_NPROCESSORS_ONLN")
1256-
1257- request = _encode({
1258- 'Kind': 'Minion',
1259- # These can only differ for cloud provider backed instances?
1260- 'ID': private_address,
1261- 'HostIP': private_address,
1262- 'metadata': {
1263- 'name': private_address,
1264- },
1265- 'resources': {
1266- 'capacity': {
1267- 'mem': mem + ' K',
1268- 'cpu': cpus}}})
1269-
1270- # print("Registration request %s" % request)
1271- conn = httplib.HTTPConnection(parsed.hostname, parsed.port)
1272- conn.request("POST", "/api/v1beta1/minions", json.dumps(request), headers)
1273-
1274- response = conn.getresponse()
1275- body = response.read()
1276- print(body)
1277- result = json.loads(body)
1278- print("Response status:%s reason:%s body:%s" % (
1279- response.status, response.reason, result))
1280-
1281- if response.status in (200, 201):
1282- print("Registered")
1283- elif response.status in (409,):
1284- print("Status conflict")
1285- # The kubernetes API documentation suggests doing a put in this case:
1286- # issue a PUT/update to modify the existing object
1287- conn.request("PUT", "/api/v1beta1/minions", json.dumps(request),
1288- headers)
1289- elif not retry and response.status in (500,) and result.get(
1290- 'message', '').startswith('The requested resource does not exist'):
1291- # There's something fishy in the kube api here (0.4 dev), first time we
1292- # go to register a new minion, we always seem to get this error.
1293- # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
1294- time.sleep(1)
1295- print("Retrying registration...")
1296- return register_machine(apiserver, retry=True)
1297- else:
1298- print("Registration error")
1299- raise RuntimeError("Unable to register machine with %s" % request)
1300-
1301+ mem = info.strip().split(':')[1].strip().split()[0]
1302+ cpus = os.sysconf('SC_NPROCESSORS_ONLN')
1303+
1304+ registration_request = Registrator()
1305+ registration_request.data['Kind'] = 'Minion'
1306+ registration_request.data['id'] = private_address
1307+ registration_request.data['name'] = private_address
1308+ registration_request.data['metadata']['name'] = private_address
1309+ registration_request.data['spec']['capacity']['mem'] = mem + ' K'
1310+ registration_request.data['spec']['capacity']['cpu'] = cpus
1311+ registration_request.data['spec']['externalID'] = private_address
1312+ registration_request.data['status']['hostIP'] = private_address
1313+
1314+ response, result = registration_request.register(parsed.hostname,
1315+ parsed.port,
1316+ '/api/v1/nodes')
1317+
1318+ print(response)
1319+
1320+ try:
1321+ registration_request.command_succeeded(response, result)
1322+ except ValueError:
1323+ # This happens when we have already registered
1324+        # The machine is already registered;
1325+        # that is acceptable for now.
1326
1327 def setup_kubernetes_group():
1328 output = subprocess.check_output(['groups', 'kubernetes'])
1329
1330=== modified file 'hooks/kubernetes_installer.py'
1331--- hooks/kubernetes_installer.py 2015-04-10 20:43:29 +0000
1332+++ hooks/kubernetes_installer.py 2015-07-17 20:16:05 +0000
1333@@ -1,3 +1,5 @@
1334+#!/usr/bin/env python
1335+
1336 import subprocess
1337 from path import path
1338
1339
1340=== added directory 'hooks/lib'
1341=== added file 'hooks/lib/__init__.py'
1342--- hooks/lib/__init__.py 1970-01-01 00:00:00 +0000
1343+++ hooks/lib/__init__.py 2015-07-17 20:16:05 +0000
1344@@ -0,0 +1,3 @@
1345+#!/usr/bin/env python
1346+
1347+
1348
1349=== added file 'hooks/lib/registrator.py'
1350--- hooks/lib/registrator.py 1970-01-01 00:00:00 +0000
1351+++ hooks/lib/registrator.py 2015-07-17 20:16:05 +0000
1352@@ -0,0 +1,84 @@
1353+#!/usr/bin/env python
1354+
1355+import httplib
1356+import json
1357+import time
1358+
1359+
1360+class Registrator:
1361+
1362+ def __init__(self):
1363+        self.ds = {
1364+            "creationTimestamp": "",
1365+            "kind": "Minion",
1366+            "name": "",  # private_address
1367+            "metadata": {
1368+                "name": "",  # private_address
1369+            },
1370+            "spec": {
1371+                "externalID": "",  # private_address
1372+                "capacity": {
1373+                    "mem": "",  # mem + ' K'
1374+                    "cpu": "",  # cpus
1375+                }
1376+            },
1377+            "status": {
1378+                "conditions": [],
1379+                "hostIP": "",  # private_address
1380+            }
1381+        }
1382+
1383+ @property
1384+ def data(self):
1385+        ''' Return the data structure to populate for a registration request. '''
1386+ return self.ds
1387+
1388+ def register(self, hostname, port, api_path):
1389+ ''' Contact the API Server for a new registration '''
1390+ headers = {"Content-type": "application/json",
1391+ "Accept": "application/json"}
1392+ connection = httplib.HTTPConnection(hostname, port)
1393+        print('CONN {}'.format(connection))
1394+ connection.request("POST", api_path, json.dumps(self.data), headers)
1395+ response = connection.getresponse()
1396+ body = response.read()
1397+ print(body)
1398+ result = json.loads(body)
1399+        print("Response status:%s reason:%s body:%s" %
1400+              (response.status, response.reason, result))
1401+ return response, result
1402+
1403+ def update(self):
1404+ ''' Contact the API Server to update a registration '''
1405+ # do a get on the API for the node
1406+ # repost to the API with any modified data
1407+ pass
1408+
1409+ def save(self):
1410+ ''' Marshall the registration data '''
1411+ # TODO
1412+ pass
1413+
1414+ def command_succeeded(self, response, result):
1415+ ''' Evaluate response data to determine if the command was successful '''
1416+ if response.status in [200, 201]:
1417+ print("Registered")
1418+ return True
1419+        elif response.status in (409,):
1420+            print("Status Conflict")
1421+            # The resource already exists; the API suggests a PUT/update.
1422+            # Callers treat this ValueError as already-registered.
1423+            raise ValueError("Registration status conflict")
1424+ elif response.status in (500,) and result.get(
1425+ 'message', '').startswith('The requested resource does not exist'):
1426+ # There's something fishy in the kube api here (0.4 dev), first time we
1427+ # go to register a new minion, we always seem to get this error.
1428+ # https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
1429+ time.sleep(1)
1430+ print("Retrying registration...")
1431+ raise ValueError("Registration returned 500, retry")
1432+ # return register_machine(apiserver, retry=True)
1433+ else:
1434+ print("Registration error")
1435+ # TODO - get request data
1436+        raise RuntimeError("Unable to register machine")
1437\ No newline at end of file
1438
1439=== added directory 'unit_tests/lib'
1440=== added file 'unit_tests/lib/test_registrator.py'
1441--- unit_tests/lib/test_registrator.py 1970-01-01 00:00:00 +0000
1442+++ unit_tests/lib/test_registrator.py 2015-07-17 20:16:05 +0000
1443@@ -0,0 +1,48 @@
1444+#!/usr/bin/env python
1445+
1446+
1447+import json
1448+from mock import MagicMock, patch, call
1449+from path import Path
1450+import pytest
1451+import sys
1452+
1453+d = Path('__file__').parent.abspath() / 'hooks'
1454+sys.path.insert(0, d.abspath())
1455+
1456+from lib.registrator import Registrator
1457+
1458+class TestRegistrator():
1459+
1460+ def setup_method(self, method):
1461+ self.r = Registrator()
1462+
1463+ def test_data_type(self):
1464+ if type(self.r.data) is not dict:
1465+ pytest.fail("Invalid type")
1466+
1467+ @patch('json.loads')
1468+ @patch('httplib.HTTPConnection')
1469+ def test_register(self, httplibmock, jsonmock):
1470+ result = self.r.register('foo', 80, '/v1/test')
1471+
1472+ httplibmock.assert_called_with('foo', 80)
1473+ requestmock = httplibmock().request
1474+ requestmock.assert_called_with(
1475+ "POST", "/v1/test",
1476+ json.dumps(self.r.data),
1477+ {"Content-type": "application/json",
1478+ "Accept": "application/json"})
1479+
1480+
1481+ def test_command_succeeded(self):
1482+ response = MagicMock()
1483+ result = json.loads('{"status": "Failure", "kind": "Status", "code": 409, "apiVersion": "v1", "reason": "AlreadyExists", "details": {"kind": "node", "name": "10.200.147.200"}, "message": "node \\"10.200.147.200\\" already exists", "creationTimestamp": null}')
1484+ response.status = 200
1485+ self.r.command_succeeded(response, result)
1486+ response.status = 500
1487+ with pytest.raises(RuntimeError):
1488+ self.r.command_succeeded(response, result)
1489+ response.status = 409
1490+ with pytest.raises(ValueError):
1491+ self.r.command_succeeded(response, result)
1492
1493=== modified file 'unit_tests/test_hooks.py'
1494--- unit_tests/test_hooks.py 2015-04-10 20:43:29 +0000
1495+++ unit_tests/test_hooks.py 2015-07-17 20:16:05 +0000
1496@@ -1,3 +1,5 @@
1497+#!/usr/bin/env python
1498+
1499 # import pytest
1500
1501

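For reviewers unfamiliar with the new registration flow, a minimal self-contained sketch of the JSON payload that hooks/lib/registrator.py assembles and POSTs to the API server. The address, memory, and CPU values below are illustrative assumptions, not values from a live cluster:

```python
import json

# Sketch of the node registration body built by Registrator.data.
# Values are placeholders for what the charm reads at runtime:
private_address = '10.0.0.5'  # hookenv.unit_private_ip()
mem = '2048'                  # kB, parsed from /proc/meminfo
cpus = 2                      # os.sysconf('SC_NPROCESSORS_ONLN')

payload = {
    'creationTimestamp': '',
    'kind': 'Minion',
    'name': private_address,
    'metadata': {'name': private_address},
    'spec': {
        'externalID': private_address,
        'capacity': {'mem': mem + ' K', 'cpu': cpus},
    },
    'status': {'conditions': [], 'hostIP': private_address},
}

# register() serializes this and POSTs it to /api/v1/nodes.
body = json.dumps(payload)
```

A 200 or 201 response means the node registered; a 409 means it already exists, which hooks.py tolerates as a successful prior registration.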