Merge lp:~flacoste/maas/odev-merge into lp:~maas-committers/maas/trunk

Proposed by Francis J. Lacoste
Status: Merged
Merged at revision: 247
Proposed branch: lp:~flacoste/maas/odev-merge
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 1049 lines (+962/-0)
16 files modified
MANIFEST.in (+1/-0)
vdenv/HOWTO (+41/-0)
vdenv/HOWTO.juju (+68/-0)
vdenv/README.txt (+5/-0)
vdenv/TODO (+14/-0)
vdenv/api-list.py (+36/-0)
vdenv/bin/authorize-ssh (+39/-0)
vdenv/bin/start-odev (+26/-0)
vdenv/bin/system-setup (+44/-0)
vdenv/bin/virsh-listener (+27/-0)
vdenv/libvirt-domain.tmpl (+56/-0)
vdenv/libvirt-network.tmpl (+23/-0)
vdenv/settings.cfg (+24/-0)
vdenv/setup.py (+217/-0)
vdenv/zimmer-build/build (+271/-0)
vdenv/zimmer-build/ud-build.txt (+70/-0)
To merge this branch: bzr merge lp:~flacoste/maas/odev-merge
Reviewer: Francis J. Lacoste (community)
Status: Approve
Review via email: mp+96823@code.launchpad.net

Commit message

bzr join lp:~orchestra/orchestra/odev, which will become vdenv (Virtual Data centre Environment). Some obsolete files from the original branch (misc.txt, ref, cobbler-server) were removed.

Description of the change

This simply does a bzr join of lp:~orchestra/orchestra/odev into the maas tree as a top-level directory (vdenv), and adds it to MANIFEST.in.

A successor branch will move things into a better place and remove the Orchestra references.

vdenv stands for virtual datacentre environment. This will enable you to have a virtual datacentre (managed through MaaS). It will also be the standard way to test MaaS locally without hardware.
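As an illustration of how the branch wires the virtual data centre together: setup.py derives each virtual node's name, MAC address and IP from the prefix values in settings.cfg. A minimal sketch of that scheme (a non-authoritative restatement of Node._setcfg from the diff below; the default values are taken from settings.cfg):

```python
# Sketch of the node-identity scheme in vdenv/setup.py (Node._setcfg):
# node N gets name <prefix><NN>, MAC <mac_pre>:<NN in hex>, IP <ip_pre>.<N+100>.
def node_identity(num, prefix="odev-node", mac_pre="00:16:3e:3e:aa",
                  ip_pre="192.168.123"):
    return {
        "name": "%s%02i" % (prefix, num),
        "mac": "%s:%02x" % (mac_pre, num),
        "ip": "%s.%d" % (ip_pre, num + 100),
    }

# e.g. node 1 -> odev-node01 / 00:16:3e:3e:aa:01 / 192.168.123.101
```

This is why the dhcp range in settings.cfg must cover .101 and up: the nodes land there, while the zimmer server sits at .2.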

Revision history for this message
Francis J. Lacoste (flacoste) wrote :

Self-reviewing this.

review: Approve
Revision history for this message
MAAS Lander (maas-lander) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Preview Diff

1=== modified file 'MANIFEST.in'
2--- MANIFEST.in 2012-01-23 21:16:00 +0000
3+++ MANIFEST.in 2012-03-09 20:39:27 +0000
4@@ -1,5 +1,6 @@
5 graft src/*/static
6 graft src/*/templates
7+graft vdenv
8 prune src/*/testing
9 prune src/*/tests
10 prune src/maastesting
11
12=== added directory 'vdenv'
13=== added file 'vdenv/HOWTO'
14--- vdenv/HOWTO 1970-01-01 00:00:00 +0000
15+++ vdenv/HOWTO 2012-03-09 20:39:27 +0000
16@@ -0,0 +1,41 @@
17+#! /bin/bash -e
18+#
19+# This file documents how to get odev running on your system. But it's also
20+# a script; you may find that you can just run it and get a working setup.
21+
22+## System-level setup. This needs to be done only once.
23+./bin/system-setup
24+
25+## Build a zimmer image in this branch.
26+pushd zimmer-build
27+./build zimmer-disk0.img --import-keys=auto
28+popd
29+
30+## Get zimmer and cobbler running.
31+./bin/start-odev
32+
33+cobblerlogin=ubuntu@192.168.123.2
34+cat <<EOF
35+While we're waiting for the server to come up, let's set up ssh login to
36+the cobbler server at $cobblerlogin.
37+
38+Please enter your Launchpad login name to import your ssh keys from Launchpad,
39+or an asterisk ("*") to import your local public ssh keys. Enter nothing to
40+skip this step.
41+
42+(If the server prompts you for a password, the default is "passw0rd")
43+EOF
44+read keyowner
45+./bin/authorize-ssh $cobblerlogin $keyowner
46+
47+## populate the nodes into the cobbler server
48+./setup.py cobbler-setup
49+
50+## Listen for libvirt requests from the Cobbler server.
51+VIRSH_LISTENER_DEBUG=1 ./bin/virsh-listener &
52+
53+
54+## At this point you may want to modify zimmer to provide a proxy
55+## other than itself to things installing from it (LP: #914202).
56+## To do so, ssh to zimmer and edit:
57+## /var/lib/cobbler/snippets/orchestra_proxy
58
59=== added file 'vdenv/HOWTO.juju'
60--- vdenv/HOWTO.juju 1970-01-01 00:00:00 +0000
61+++ vdenv/HOWTO.juju 2012-03-09 20:39:27 +0000
62@@ -0,0 +1,68 @@
63+# http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
64+
65+pkgs="libzookeeper-java zookeeper juju bzr"
66+
67+JUJU_D=$HOME/juju
68+JUJU_ORIGIN="lp:juju"
69+JUJU_SERIES="precise"
70+
71+REPO="$HOME/charms"
72+CHARMS_D="$REPO/$JUJU_SERIES"
73+
74+ZIMMER_IP=192.168.123.2
75+
76+id_rsa="$HOME/.ssh/id_rsa"
77+[ -f "$id_rsa" ] || ssh-keygen -t rsa -N '' -f "$id_rsa"
78+read x y z < "$id_rsa.pub"
79+grep -q "$y" ~/.ssh/authorized_keys ||
80+ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
81+
82+sudo apt-get --assume-yes install $pkgs </dev/null
83+
84+mkdir -p "${JUJU_D%/*}"
85+#( cd ${JUJU_D%/*} && bzr branch lp:juju )
86+( cd ${JUJU_D%/*} && bzr branch $JUJU_ORIGIN juju )
87+
88+mkdir -p "$CHARMS_D"
89+( cd "$CHARMS_D" && bzr branch lp:charm/mysql && bzr branch lp:charm/wordpress )
90+
91+ENAME="odev"
92+
93+mkdir -p ~/.juju/
94+cat > ~/.juju/environments.yaml <<EOF
95+environments:
96+ $ENAME:
97+ type: orchestra
98+ juju-origin: $JUJU_ORIGIN
99+ orchestra-server: $ZIMMER_IP
100+ orchestra-user: cobbler
101+ orchestra-pass: xcobbler
102+ acquired-mgmt-class: orchestra-juju-acquired
103+ available-mgmt-class: orchestra-juju-available
104+ admin-secret: SEEKRIT
105+ storage-url: http://$ZIMMER_IP/webdav
106+ authorized-keys: $(cat ~/.ssh/id_rsa.pub)
107+ data-dir: $HOME/juju-data/$ENAME
108+ default-series: $JUJU_SERIES
109+EOF
110+
111+export PATH="$JUJU_D/bin:$HOME/bin:/usr/sbin:/usr/bin:/sbin:/bin" PYTHONPATH=$JUJU_D
112+
113+# now start your juju bootstrap node. this will take some time, as we're
114+# doing a full install into the VM.
115+juju bootstrap --environment $ENAME
116+
117+# now create the mysql and wordpress units
118+# this takes quite a while as full VM install of each
119+juju deploy --environment $ENAME --repository $REPO local:mysql
120+juju deploy --environment $ENAME --repository $REPO local:wordpress
121+
122+# now link the two
123+juju add-relation --environment $ENAME wordpress mysql
124+
125+# juju status:
126+# FIXME: resolution will try to use dns and will not work for nodes
127+# workaround: can add 192.168.123.1 to /etc/resolv.conf 'server' line
128+# FIXME: juju status hangs "connecting to environment" during bootstrap
129+# node installation. The post should call home and indicate done, so
130+# juju could/should know that it's still installing.
131
132=== added file 'vdenv/README.txt'
133--- vdenv/README.txt 1970-01-01 00:00:00 +0000
134+++ vdenv/README.txt 2012-03-09 20:39:27 +0000
135@@ -0,0 +1,5 @@
136+This allows you to create a VM-based cobbler provisioning environment on
137+a single system, so you can develop against cobbler or the API without
138+needing lots of hardware.
139+
140+
141
142=== added file 'vdenv/TODO'
143--- vdenv/TODO 1970-01-01 00:00:00 +0000
144+++ vdenv/TODO 2012-03-09 20:39:27 +0000
145@@ -0,0 +1,14 @@
146+- prefix names with 'odev' (or some prefix)
147+- settings.cfg: add 'cobbler' section for auth
148+- improve the Domain objects
149+- document
150+ - vinagre $(virsh vncdisplay node01)
151+ - ssh -L 5901:localhost:5901 -L 8000:192.168.123.2:80
152+ - start ssh connection to remote system with a bunch of ports
153+ forwarded for vnc connections and http to the zimmer box
154+ ssh -C home-jimbo \
155+ $(t=98; for((i=0;i<5;i++)); do p=$(printf "%02d" "$i"); echo -L $t$p:localhost:59$p; done ; echo -L${t}80:192.168.123.2:80)
156+- tell orchestra to point to a different proxy server
157+- document or fix annoying ssh key entries (juju prompt for add and change)
158+- get serial consoles to log file for domains
159+- support i386 (for i386 installs of ubuntu)
160
161=== added file 'vdenv/api-list.py'
162--- vdenv/api-list.py 1970-01-01 00:00:00 +0000
163+++ vdenv/api-list.py 2012-03-09 20:39:27 +0000
164@@ -0,0 +1,36 @@
165+#!/usr/bin/python
166+
167+import xmlrpclib
168+import sys
169+
170+host = "192.168.123.2"
171+user = "cobbler"
172+password = "cobbler"
173+if len(sys.argv) >= 2:
174+ host = sys.argv[1]
175+if len(sys.argv) >= 3:
176+ user = sys.argv[2]
177+if len(sys.argv) >= 4:
178+ password = sys.argv[3]
179+
180+if not host.startswith('http://'):
181+ host = "http://%s/cobbler_api" % host
182+
183+server = xmlrpclib.Server(host)
184+token = server.login(user, password)
185+
186+distros = server.get_distros()
187+print "::::::::::: distros :::::::::::"
188+for d in server.get_distros():
189+ print("%s: breed=%s, os_version=%s, mgmt_classes=%s" %
190+ (d['name'], d['breed'], d['os_version'], d['mgmt_classes']))
191+
192+profiles = server.get_profiles()
193+print "\n::::::::::: profiles :::::::::::"
194+for d in server.get_profiles():
195+ print("%s: distro=%s parent=%s kickstart=%s" %
196+ (d['name'], d['distro'], d['parent'], d['kickstart']))
197+
198+print "\n::::::::::: servers :::::::::::"
199+for s in server.get_systems():
200+ print s['interfaces']
201
202=== added directory 'vdenv/bin'
203=== added file 'vdenv/bin/authorize-ssh'
204--- vdenv/bin/authorize-ssh 1970-01-01 00:00:00 +0000
205+++ vdenv/bin/authorize-ssh 2012-03-09 20:39:27 +0000
206@@ -0,0 +1,39 @@
207+#! /bin/bash -e
208+#
209+# Wait for the virtual cobbler instance's ssh server to start up, and set up
210+# passwordless login if desired.
211+#
212+# Usage:
213+# authorize-ssh <cobbler-ssh-login> <key-owner>
214+#
215+# Where:
216+# * cobbler-ssh-login is an ssh user/hostname, e.g. ubuntu@192.168.123.2
217+# * key-owner is a Launchpad login name, or * to use local keys, or nothing.
218+#
219+# If a Launchpad login name is given, import the associated ssh keys into the
220+# cobbler instance. If key-owner is an asterisk, import the local public ssh
221+# keys from ~/.ssh/id_*.pub
222+
223+cobblerlogin=$1
224+keyowner=$2
225+
226+if test -z "$keyowner"
227+then
228+ echo "Not setting up ssh keys."
229+ echo "I'll still test a login to Cobbler though."
230+ inputfiles=/dev/null
231+ remotecmd="uptime"
232+elif test "$keyowner" = "*"
233+then
234+ inputfiles=`ls ~/.ssh/id_*.pub`
235+ echo "Copying public key(s): $inputfiles"
236+ remotecmd="tee .ssh/authorized_keys"
237+else
238+ inputfiles=/dev/null
239+ remotecmd="ssh-import-id $keyowner"
240+fi
241+
242+while ! cat $inputfiles | ssh $cobblerlogin -o StrictHostKeyChecking=no $remotecmd
243+do
244+ sleep 5
245+done
246
247=== added file 'vdenv/bin/start-odev'
248--- vdenv/bin/start-odev 1970-01-01 00:00:00 +0000
249+++ vdenv/bin/start-odev 2012-03-09 20:39:27 +0000
250@@ -0,0 +1,26 @@
251+#! /bin/bash -e
252+#
253+# Get zimmer and cobbler running, assuming that zimmer has already been set up.
254+
255+## create libvirt xml files for nodes, zimmer, network
256+./setup.py libvirt-setup
257+
258+## start odev-net network
259+virsh -c qemu:///system net-start odev-net
260+
261+## create zimmer disk image qcow backing against pristine version
262+qemu-img create -f qcow2 -b zimmer-build/zimmer-disk0.img zimmer-disk0.img
263+
264+## start zimmer instance / orchestra server
265+virsh -c qemu:///system start zimmer
266+
267+cat <<EOF
268+Starting orchestra server.
269+You can now ssh ubuntu@192.168.123.2 (password: passw0rd).
270+If you do that, you may run 'ssh-import-id' to import your ssh key.
271+
272+Access the cobbler UI on http://192.168.123.2/cobbler_web
273+and log in with 'cobbler:xcobbler'.
274+EOF
275+
276+
277
278=== added file 'vdenv/bin/system-setup'
279--- vdenv/bin/system-setup 1970-01-01 00:00:00 +0000
280+++ vdenv/bin/system-setup 2012-03-09 20:39:27 +0000
281@@ -0,0 +1,44 @@
282+#! /bin/bash -e
283+#
284+# System-wide setup for odev. This requires sudo.
285+
286+## install some dependencies
287+pkgs=""
288+pkgs="$pkgs genisoimage coreutils" # for cloud-init's 'make-iso'
289+pkgs="$pkgs python-libvirt libvirt-bin" # for libvirt interaction
290+pkgs="$pkgs socat" # for libvirt-> cobbler
291+pkgs="$pkgs python-cheetah" # for setup.py
292+pkgs="$pkgs qemu-utils qemu-kvm" # needed generally
293+
294+new_pkgs=""
295+for pkg in ${pkgs}; do
296+ dpkg-query --show "$pkg" >/dev/null ||
297+ new_pkgs="${new_pkgs:+${new_pkgs} }${pkg}"
298+done
299+
300+if [ -n "$new_pkgs" ]; then
301+ sudo apt-get update -qq || /bin/true
302+ sudo apt-get install -y $new_pkgs </dev/null
303+fi
304+
305+new_groups=""
306+for group in libvirtd kvm; do
307+ groups $USER | grep -q $group && continue
308+ sudo adduser $USER $group
309+ new_groups="${new_groups:+${new_groups} }${group}"
310+done
311+
312+if [ -n "$new_groups" ]; then
313+ cat <<EOF
314+Done.
315+
316+The script just added you to the system group[s] $new_groups
317+
318+If you were not previously in these groups, you will need to log out and
319+log back in again to make the changes take effect.
320+EOF
321+
322+ # The user may need to log out at this point.
323+ echo "Ctrl-C if you want to log out now. Otherwise, press <enter>."
324+ read
325+fi
326
327=== added file 'vdenv/bin/virsh-listener'
328--- vdenv/bin/virsh-listener 1970-01-01 00:00:00 +0000
329+++ vdenv/bin/virsh-listener 2012-03-09 20:39:27 +0000
330@@ -0,0 +1,27 @@
331+#!/bin/bash -e
332+
333+## * libvirt from the cobbler system:
334+## after 'cobbler-setup' above is done, the cobbler system will know about
335+## all the nodes and it will believe it can control them via the 'virsh'
336+## power module. It is configured
337+## to talk to qemu+tcp://192.168.123.1:65001/system . In order to allow
338+## that to be valid we have to make libvirt listen on that port/interface.
339+## This can be done moderately securely with 'socat'. Below, we tell socat
340+## to forward tcp connections on 192.168.123.1:65001 to the libvirt unix
341+## socket. It restricts connections to zimmer's IP address.
342+
343+sock="/var/run/libvirt/libvirt-sock"
344+
345+[ "${VIRSH_LISTENER_DEBUG:-0}" != "0" ] && cat <<EOF
346+Starting virsh listener.
347+
348+You can verify this is working by powering a system on from the web-ui or
349+by running the following on the cobbler server:
350+
351+zimmer$ virsh -c qemu+tcp://192.168.123.1:65001/system
352+EOF
353+
354+echo "Listening for libvirt requests on $sock."
355+exec socat -d -d \
356+ TCP4-LISTEN:65001,bind=192.168.123.1,range=192.168.123.2/32,fork \
357+ UNIX-CONNECT:$sock
358
359=== added file 'vdenv/libvirt-domain.tmpl'
360--- vdenv/libvirt-domain.tmpl 1970-01-01 00:00:00 +0000
361+++ vdenv/libvirt-domain.tmpl 2012-03-09 20:39:27 +0000
362@@ -0,0 +1,56 @@
363+<domain type='kvm'>
364+ <name>$name</name>
365+ <memory>$mem</memory>
366+ <currentMemory>$mem</currentMemory>
367+ <vcpu>1</vcpu>
368+ <os>
369+ <type arch='x86_64' machine='pc-0.12'>hvm</type>
370+ <boot dev='network' />
371+ <boot dev='hd' />
372+ </os>
373+ <features>
374+ <acpi/>
375+ <apic/>
376+ <pae/>
377+ </features>
378+ <clock offset='utc'/>
379+ <on_poweroff>destroy</on_poweroff>
380+ <on_reboot>restart</on_reboot>
381+ <on_crash>restart</on_crash>
382+ <devices>
383+ <emulator>/usr/bin/kvm</emulator>
384+ <disk type='file' device='disk'>
385+ <driver name='qemu' type='qcow2'/>
386+ <source file='$disk0'/>
387+ <target dev='vda' bus='virtio'/>
388+ <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
389+ </disk>
390+ <controller type='ide' index='0'>
391+ <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
392+ </controller>
393+ <interface type='network'>
394+ <!-- <boot order='1'/> -->
395+ <source network='$network'/>
396+ <target dev='vnet1'/>
397+ <model type='virtio'/>
398+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
399+ <mac address='$mac'/>
400+ </interface>
401+ <serial type='pty'>
402+ <source path='/dev/pts/5'/>
403+ <target port='0'/>
404+ </serial>
405+ <console type='pty'>
406+ <target type='serial' port='0'/>
407+ </console>
408+ <input type='mouse' bus='ps2'/>
409+ <graphics type='vnc' autoport='yes' keymap='en-us'/>
410+ <video>
411+ <model type='cirrus' vram='9216' heads='1'/>
412+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
413+ </video>
414+ <memballoon model='virtio'>
415+ <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
416+ </memballoon>
417+ </devices>
418+</domain>
419
420=== added file 'vdenv/libvirt-network.tmpl'
421--- vdenv/libvirt-network.tmpl 1970-01-01 00:00:00 +0000
422+++ vdenv/libvirt-network.tmpl 2012-03-09 20:39:27 +0000
423@@ -0,0 +1,23 @@
424+<network>
425+ <name>$name</name>
426+ <forward mode='nat'/>
427+ <bridge name='$bridge' stp='off' delay='0' />
428+ <dns>
429+ <host ip='$ip_pre.1'>
430+ <hostname>host-system</hostname>
431+ </host>
432+ <host ip='$ip_pre.2'>
433+ <hostname>zimmer-server</hostname>
434+ </host>
435+ </dns>
436+ <ip address='$ip_pre.1' netmask='$netmask'>
437+ <dhcp>
438+ <range start='$ip_pre.$dhcp.range.start' end='$ip_pre.$dhcp.range.end' />
439+ <bootp server="$all_systems.zimmer.ipaddr" file="pxelinux.0" />
440+ #for $sys in $all_systems.itervalues()
441+ <host mac="$sys.mac" name="$sys.name" ip="$sys.ipaddr" />
442+ #end for
443+ </dhcp>
444+ </ip>
445+</network>
446+
447
448=== added file 'vdenv/settings.cfg'
449--- vdenv/settings.cfg 1970-01-01 00:00:00 +0000
450+++ vdenv/settings.cfg 2012-03-09 20:39:27 +0000
451@@ -0,0 +1,24 @@
452+network:
453+ name: odev-net
454+ bridge: virbr1
455+ ip_pre: 192.168.123
456+ ip: 1
457+ netmask: 255.255.255.0
458+ dhcp:
459+ range:
460+ start: 2
461+ end: 254
462+ template: libvirt-network.tmpl
463+
464+systems:
465+ zimmer:
466+ ip: 2 # ip address must be in dhcp range
467+ mac: 00:16:3e:3e:a9:1a
468+ template: libvirt-domain.tmpl
469+ mem: 512
470+
471+nodes:
472+ prefix: odev-node
473+ mac_pre: 00:16:3e:3e:aa
474+ mem: 512
475+ template: libvirt-domain.tmpl
476
477=== added file 'vdenv/setup.py'
478--- vdenv/setup.py 1970-01-01 00:00:00 +0000
479+++ vdenv/setup.py 2012-03-09 20:39:27 +0000
480@@ -0,0 +1,217 @@
481+#!/usr/bin/python
482+
483+import yaml
484+import os
485+import re
486+import sys
487+import libvirt
488+from Cheetah.Template import Template
489+import subprocess
490+import xmlrpclib
491+
492+NODES_RANGE = range(1,4)
493+
494+def yaml_loadf(fname):
495+ fp = open(fname)
496+ ret = yaml.load(fp)
497+ fp.close()
498+ return(ret)
499+
500+class Domain:
501+ def __init__(self, syscfg, ident, basedir=None):
502+ self.ip_pre = syscfg['network']['ip_pre']
503+ if basedir == None:
504+ basedir = os.path.abspath(os.curdir)
505+ self.basedir = basedir
506+ self._setcfg(syscfg,ident)
507+ self.network = syscfg['network']['name']
508+
509+ def __repr__(self):
510+ return("== %s ==\n ip: %s\n mac: %s\n template: %s\n" %
511+ (self.name, self.ipaddr, self.mac, self.template))
512+
513+ @property
514+ def ipaddr(self):
515+ return("%s.%s" % (self.ip_pre, self.ipnum))
516+
517+ @property
518+ def disk0(self):
519+ return("%s/%s-disk0.img" % (self.basedir, self.name))
520+
521+ def dictInfo(self):
522+ ret = vars(self)
523+ # have to add the getters
524+ for prop in ( "ipaddr", "disk0" ):
525+ ret[prop] = getattr(self,prop)
526+ return ret
527+
528+ def toLibVirtXml(self):
529+ template = Template(file=self.template, searchList=[self.dictInfo()])
530+ return template.respond()
531+
532+class Node(Domain):
533+ def _setcfg(self, cfg, num):
534+ cfg = cfg['nodes']
535+ self.name = "%s%02i" % (cfg['prefix'],num)
536+ self.mac = "%s:%02x" % (cfg['mac_pre'],num)
537+ self.ipnum = num + 100
538+ self.template = cfg['template']
539+ self.mem = cfg['mem'] * 1024
540+ return
541+
542+class System(Domain):
543+ def _setcfg(self, cfg, ident):
544+ cfg = cfg['systems'][ident]
545+ self.name = ident
546+ self.mac = cfg['mac']
547+ self.ipnum = cfg['ip']
548+ self.template = cfg['template']
549+ self.mem = cfg['mem'] * 1024
550+
551+def renderSysDom(config, syscfg, stype="node"):
552+ template = Template(file=syscfg['template'], searchList=[config, syscfg])
553+ return template.respond()
554+
555+# cobbler:
556+# ip: 2 # ip address must be in dhcp range
557+# mac: 00:16:3e:3e:a9:1a
558+# template: libvirt-system.tmpl
559+# mem: 524288
560+#
561+#nodes:
562+# prefix: node
563+# mac_pre: 00:16:3e:3e:aa
564+# mem: 256
565+
566+def writeDomXmlFile(dom, outpre=""):
567+ fname="%s%s.xml" % (outpre, dom.name)
568+ output = open(fname,"w")
569+ output.write(dom.toLibVirtXml())
570+ output.close()
571+ return fname
572+
573+def libvirt_setup(config):
574+ conn = libvirt.open("qemu:///system")
575+ netname = config['network']['name']
576+ if netname in conn.listDefinedNetworks() or netname in conn.listNetworks():
577+ net = conn.networkLookupByName(netname)
578+ if net.isActive():
579+ net.destroy()
580+ net.undefine()
581+
582+ allsys = {}
583+ for system in config['systems']:
584+ d = System(config, system)
585+ allsys[d.name]=d.dictInfo()
586+ for num in NODES_RANGE:
587+ d = Node(config, num)
588+ allsys[d.name]=d.dictInfo()
589+
590+ conn.networkDefineXML(Template(file=config['network']['template'],
591+ searchList=[config['network'],
592+ {'all_systems': allsys }]).respond())
593+
594+ print "defined network %s " % netname
595+
596+ cob = System(config, "zimmer")
597+ systems = [ cob ]
598+
599+ for node in NODES_RANGE:
600+ systems.append(Node(config, node))
601+
602+ qcow_create = "qemu-img create -f qcow2 %s 2G"
603+ defined_systems = conn.listDefinedDomains()
604+ for system in systems:
605+ if system.name in defined_systems:
606+ dom = conn.lookupByName(system.name)
607+ if dom.isActive():
608+ dom.destroy()
609+ dom.undefine()
610+ conn.defineXML(system.toLibVirtXml())
611+ if isinstance(system,Node):
612+ subprocess.check_call(qcow_create % system.disk0, shell=True)
613+ print "defined domain %s" % system.name
614+
615+def cobbler_addsystem(server, token, system, profile, hostip):
616+ eth0 = {
617+ "macaddress-eth0" : system.mac,
618+ "ipaddress-eth0" : system.ipaddr,
619+ "static-eth0" : False,
620+ }
621+ items = {
622+ 'name': system.name,
623+ 'hostname': system.name,
624+ 'power_address': "qemu+tcp://%s:65001" % hostip,
625+ 'power_id': system.name,
626+ 'power_type': "virsh",
627+ 'profile': profile,
628+ 'netboot_enabled': True,
629+ 'modify_interface': eth0,
630+ 'mgmt_classes': ['orchestra-juju-available'],
631+ }
632+
633+ if len(server.find_system({"name": system.name})):
634+ server.remove_system(system.name,token)
635+ server.update()
636+ print "removed existing %s" % system.name
637+
638+ sid = server.new_system(token)
639+ for key, val in items.iteritems():
640+ ret = server.modify_system(sid, key, val, token)
641+ if not ret:
642+ raise Exception("failed for %s [%s]: %s, %s" %
643+ (system.name, ret, key, val))
644+ ret = server.save_system(sid,token)
645+ if not ret:
646+ raise Exception("failed to save %s" % system.name)
647+ print "added %s" % system.name
648+
649+
650+def get_profile_arch():
651+ """Get the system architecture for use in the cobbler setup profile."""
652+ # This should, for any given system, match what the zimmer-build
653+ # script does to determine the right architecture.
654+ arch_text = subprocess.check_output(['/bin/uname', '-m']).strip()
655+ if re.match('i.86', arch_text):
656+ return 'i386'
657+ else:
658+ return arch_text
659+
660+
661+def cobbler_setup(config):
662+ hostip = "%s.1" % config['network']['ip_pre']
663+ arch = get_profile_arch()
664+ profile = "precise-%s-juju" % arch
665+
666+ cob = System(config, "zimmer")
667+ cobbler_url = "http://%s/cobbler_api" % cob.ipaddr
668+ print("Connecting to %s." % cobbler_url)
669+ server = xmlrpclib.Server(cobbler_url)
670+ token = server.login("cobbler","xcobbler")
671+
672+ systems = [Node(config, node) for node in NODES_RANGE]
673+
674+ for system in systems:
675+ cobbler_addsystem(server, token, system, profile, hostip)
676+
677+def main():
678+ outpre = "libvirt-cobbler-"
679+ cfg_file = "settings.cfg"
680+
681+ if len(sys.argv) == 1:
682+ print(
683+ "Usage: setup.py action\n"
684+ "action one of: libvirt-setup, cobbler-setup")
685+ sys.exit(1)
686+
687+ config = yaml_loadf(cfg_file)
688+
689+ if sys.argv[1] == "libvirt-setup":
690+ libvirt_setup(config)
691+ elif sys.argv[1] == "cobbler-setup":
692+ cobbler_setup(config)
693+
694+if __name__ == '__main__':
695+ main()
696+
697+# vi: ts=4 noexpandtab
698
699=== added directory 'vdenv/zimmer-build'
700=== added file 'vdenv/zimmer-build/build'
701--- vdenv/zimmer-build/build 1970-01-01 00:00:00 +0000
702+++ vdenv/zimmer-build/build 2012-03-09 20:39:27 +0000
703@@ -0,0 +1,271 @@
704+#!/bin/bash
705+
706+# This should mirror what's in odev's setup.py, except that here x86_64 is
707+# called amd64, because that is Ubuntu's name for the architecture.
708+GUEST_ARCHITECTURE=$(uname -m)
709+case "$GUEST_ARCHITECTURE" in
710+ i?86) GUEST_ARCHITECTURE="i386" ;;
711+ x86_64) GUEST_ARCHITECTURE="amd64" ;;
712+esac
713+
714+
715+DEF_ZIMG="http://cloud-images.ubuntu.com/server/precise/current/precise-server-cloudimg-${GUEST_ARCHITECTURE}-disk1.img"
716+DEF_SAVE_D="pristine"
717+DEF_UD_FILE="ud-build.txt"
718+ZIMMER_SSH_FORWARD=""
719+#ZIMMER_SSH_FORWARD=${ZIMMER_SSH_FORWARD:-"hostfwd=tcp::2222-:22"}
720+ZIMMER_MEM="${ZIMMER_MEM:-1024}"
721+KVM_PID=""
722+TAIL_PID=""
723+LOG="output.log"
724+
725+case $(uname -m) in
726+ i?86) DEF_ZIMG=${DEF_ZIMG//amd64/i386};;
727+esac
728+
729+VERBOSITY=0
730+TEMP_D=""
731+
732+error() { echo "$@" 1>&2; }
733+errorp() { printf "$@" 1>&2; }
734+fail() { [ $# -eq 0 ] || error "$@"; exit 1; }
735+failp() { [ $# -eq 0 ] || errorp "$@"; exit 1; }
736+
737+Usage() {
738+ cat <<EOF
739+Usage: ${0##*/} [ options ] output
740+
741+ build a zimmer server from a cloud image, and put it in 'output'
742+
743+ options:
744+ --zimg Z url or path to compressed cloud image
745+ (will be uncompressed)
746+ def: $DEF_ZIMG
747+ --img I url or path to uncompressed cloud image
748+ expected to be uncompressed.
749+ default: create from zimg
750+ --log L log items to LOG
751+ default: $LOG
752+ --save D put pristine copies of things in D
753+ default: $DEF_SAVE_D
754+ --ud-file F use user-data file F
755+ default: $DEF_UD_FILE
756+ --import-keys K import ssh keys
757+ values are 'auto', 'lp:<id>', or path to file
758+EOF
759+}
760+
761+bad_Usage() { Usage 1>&2; [ $# -eq 0 ] || error "$@"; exit 1; }
762+cleanup() {
763+ [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}"
764+ [ -z "$KVM_PID" ] || kill "$KVM_PID"
765+ [ -z "$TAIL_PID" ] || kill "$TAIL_PID"
766+}
767+
768+log() {
769+ [ -n "$LOG" ] || return
770+ echo "$(date -R):" "$@" >> "$LOG"
771+}
772+debug() {
773+ local level=${1}; shift;
774+ log "$@"
775+ [ "${level}" -gt "${VERBOSITY}" ] && return
776+ error "${@}"
777+}
778+
779+# Download image file.
780+# Parameters: source URL, filename to save to
781+download() {
782+ local src="$1" dest="$2"
783+
784+ debug 0 "downloading $src to $dest"
785+ wget --progress=dot:mega "$src" -O "$dest.partial" &&
786+ mv -- "$dest.partial" "$dest" ||
787+ fail "failed to get $src"
788+}
789+
790+short_opts="ho:v"
791+long_opts="help,img:,import-keys:,log:,ud-file:,verbose,zimg:"
792+getopt_out=$(getopt --name "${0##*/}" \
793+ --options "${short_opts}" --long "${long_opts}" -- "$@") &&
794+ eval set -- "${getopt_out}" ||
795+ bad_Usage
796+
797+img=""
798+zimg=""
799+save_d="$DEF_SAVE_D"
800+ud_file="$DEF_UD_FILE"
801+import_keys=""
802+
803+while [ $# -ne 0 ]; do
804+ cur=${1}; next=${2};
805+ case "$cur" in
806+ -h|--help) Usage ; exit 0;;
807+ --img) img=${2}; shift;;
808+ --log) LOG=${2}; shift;;
809+ --save) save_d=${2}; shift;;
810+ --ud-file) ud_file=${2}; shift;;
811+ -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));;
812+ --zimg) zimg=${2}; shift;;
813+ --import-keys) import_keys=${2}; shift;;
814+ --) shift; break;;
815+ esac
816+ shift;
817+done
818+
819+## check arguments here
820+## how many args do you expect?
821+[ $# -gt 1 ] && bad_Usage "too many arguments"
822+[ $# -eq 0 ] && bad_Usage "need an output argument"
823+output="$1"
824+[ "${output%.zimg}" = "${output}" ] || fail "do not name output with .zimg"
825+
826+command -v genisoimage >/dev/null ||
827+ fail "you do not have genisoimage installed. install genisoimage package"
828+
829+
830+[ -f "$ud_file" ] ||
831+ fail "user data file $ud_file is not a file"
832+
833+TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") ||
834+ fail "failed to make tempdir"
835+trap cleanup EXIT
836+
837+mkdir -p "$save_d" || fail "failed to mkdir $save_d"
838+
839+# if --import-keys was specified, get the keys into a local file
840+keyf="$TEMP_D/keys"
841+if [ "$import_keys" = "auto" ]; then
842+ ssh-add -L > "${keyf}" 2>/dev/null ||
843+ cat $HOME/.ssh/id*.pub > "$keyf" 2>/dev/null ||
844+ error "Warning: unable to find 'auto' keys"
845+elif [ -f "$import_keys" ]; then
846+ cat "$import_keys" > "$keyf"
847+elif [ "${import_keys#lp:}" != "${import_keys}" ]; then
848+ ssh-import-id -o - ${import_keys#lp:} > "$keyf" 2>/dev/null ||
849+ error "Warning: failed to ssh-import ${import_keys#lp:}"
850+fi
851+
852+if [ -n "$img" ]; then
853+ # if img was given, then we assume it's good; it's the backing image
854+ [ -f "$img" ] || fail "$img (--img) is not a file"
855+ debug 0 "using $img as uncompressed image"
856+else
857+ if [ -z "$zimg" ]; then
858+ zimg="$DEF_ZIMG"
859+ fi
860+ case "$zimg" in
861+ http://*|https://*)
862+ o_zimg="${zimg}"
863+ zimg=${save_d}/$(basename "$o_zimg" ".img").zimg
864+ [ -f "$zimg" ] &&
865+ fail "please delete $zimg first or use --zimg|--img"
866+ download "$o_zimg" "$zimg"
867+ ;;
868+ file://*)
869+ o_zimg=${zimg}
870+ zimg=${zimg#file://}
871+ debug 0 "using file $o_zimg as zimg"
872+ [ -f "$zimg" ] || fail "$zimg is not a file"
873+ ;;
874+ *) [ -f "$zimg" ] || fail "$zimg is not a file"
875+ debug 0 "using file $zimg as zimg"
876+ ;;
877+ esac
878+ img=${zimg%.zimg}.img
879+ debug 0 "creating uncompressed img $img from $zimg"
880+ qemu-img convert -O qcow2 "$zimg" "$img"
881+fi
882+
883+debug 0 "making nocloud data source in iso"
884+seed_d="$TEMP_D/seed"
885+mkdir "$seed_d" || fail "failed to make 'seed' in tempdir"
886+
887+cp "$ud_file" "$seed_d/user-data" || fail "failed to copy $ud_file to $seed_d"
888+cat > "$seed_d/meta-data" <<EOF
889+instance-id: i-zimmer-build
890+local-hostname: zimmer-build
891+EOF
892+
893+# if keys were specified, dump them into meta-data
894+if [ -s "$keyf" ]; then
895+ {
896+ echo "public-keys:"
897+ echo " zimmer-build:"
898+ while read line; do
899+ echo " - \"$line\""
900+ done < "$keyf"
901+ } >> "$seed_d/meta-data"
902+fi
903+
904+( cd "$seed_d" &&
905+ genisoimage -output "$TEMP_D/build.iso" \
906+ -volid cidata -joliet -rock user-data meta-data 2>/dev/null ) ||
907+ fail "failed to create iso for user-data from $ud_file"
908+
909+build0="$TEMP_D/build0.img"
910+img_fp=$(readlink -f "$img") || fail "failed to get fullpath for $img"
911+qemu-img create -f qcow2 -b "$img_fp" "${build0}" ||
912+ fail "failed to create qcow image backed by $img"
913+
914+## on precise, you do not need 'boot=on' in the kvm command line
915+[ "$(lsb_release -sc)" = "precise" ] && bton="" || bton="boot=on"
916+
917+serial_out="$TEMP_D/serial.output"
918+monitor="${TEMP_D}/monitor.fifo" && mkfifo "$monitor" ||
919+ fail "failed to mkfifo for monitor"
920+
921+debug 0 "booting kvm guest to turn cloud-image into zimmer"
922+kvm_start=$SECONDS
923+MONITOR="-monitor null"
924+NOGRAPHIC="-nographic"
925+kvm \
926+ -drive file=${build0},if=virtio,cache=unsafe${bton:+,${bton}} \
927+ -boot c -cdrom "$TEMP_D/build.iso" \
928+ -net nic,model=virtio \
929+ -net user${ZIMMER_SSH_FORWARD:+,${ZIMMER_SSH_FORWARD}} \
930+ -m "${ZIMMER_MEM}" \
931+ $NOGRAPHIC \
932+ $MONITOR \
933+ -serial "file:$serial_out" \
934+ 2>&1 &
935+
936+KVM_PID=$!
937+tail -F "$serial_out" 2>/dev/null &
938+TAIL_PID=$!
939+
940+sleep 20
941+[ -s "$serial_out" ] ||
942+ fail "no output in serial console output after 20 seconds"
943+
944+wait $KVM_PID
945+ret=$?
946+KVM_PID=""
947+
948+{ kill $TAIL_PID ; } >/dev/null 2>&1
949+TAIL_PID=""
950+
951+{
952+ echo ===== begin serial console ====
953+ cat "$serial_out"
954+ echo ===== end serial console ====
955+} >> "$LOG"
956+[ $ret -eq 0 ] || fail "failed to build via kvm guest"
957+grep -q "ZIMMER BUILD FINISHED" "$serial_out" ||
958+ fail "did not find finished message in $serial_out"
959+
960+debug 0 "kvm image built in $(($SECONDS-$kvm_start))s"
961+debug 0 "creating dist image in $output"
962+## create a re-shrunk image of build0.img into 'zimmer-disk0.img.dist'
963+[ ! -f "$output" ] || rm -f "$output" ||
964+ fail "failed to remove existing $output"
965+qemu-img convert -O qcow2 "$TEMP_D/build0.img" "$output" &&
966+ chmod 444 "$output" ||
967+ fail "failed to create $output from build0.img"
968+
969+debug 0 "creating pristine compressed zimmer-disk0.zimg"
970+## optionally create a zip'd image for transmission
971+qemu-img convert -f qcow2 -O qcow2 -c "$output" "${output%.img}.zimg"
972+
973+debug 0 "done. took $SECONDS seconds"
974+# vi: ts=4 noexpandtab
975
976=== added file 'vdenv/zimmer-build/ud-build.txt'
977--- vdenv/zimmer-build/ud-build.txt 1970-01-01 00:00:00 +0000
978+++ vdenv/zimmer-build/ud-build.txt 2012-03-09 20:39:27 +0000
979@@ -0,0 +1,70 @@
980+#cloud-config
981+password: passw0rd
982+chpasswd: { expire: False }
983+ssh_pwauth: True
984+
985+#apt_proxy: "http://local-proxy:3128/"
986+#apt_mirror: "http://us.archive.ubuntu.com/ubuntu"
987+#ssh_import_id: smoser
988+
989+bucket:
990+ - &setup |
991+ cd /root
992+ (
993+ #ONE_TIME_PROXY=http://local-proxy:3128/
994+
995+ echo === $(date) ====
996+ debconf-set-selections <<EOF
997+ ubuntu-orchestra-provisioning-server ubuntu-orchestra-provisioning-server/import-isos boolean false
998+ ubuntu-orchestra-provisioning-server ubuntu-orchestra-provisioning-server/dnsmasq-dhcp-range string 10.10.10.2,10.10.10.254
999+ ubuntu-orchestra-provisioning-server ubuntu-orchestra-provisioning-server/dnsmasq-enabled boolean false
1000+ cobbler cobbler/server_and_next_server string zimmer-server
1001+ cobbler cobbler/password password xcobbler
1002+ cloud-init cloud-init/datasources multiselect NoCloud, OVF
1003+
1004+ EOF
1005+
1006+ [ -n "$ONE_TIME_PROXY" ] && export http_proxy="$ONE_TIME_PROXY"
1007+ export DEBIAN_FRONTEND=noninteractive;
1008+ dpkg-reconfigure cloud-init
1009+
1010+ read oldhost < /etc/hostname
1011+ sed -i "/$oldhost/d;/zimmer/d" /etc/hosts
1012+ echo zimmer > /etc/hostname
1013+ hostname zimmer
1014+
1015+ echo "127.0.1.2 zimmer-server" >> /etc/hosts
1016+
1017+ echo === $(date): starting apt ====
1018+ apt_get() {
1019+ DEBIAN_FRONTEND=noninteractive apt-get \
1020+ --option "Dpkg::Options::=--force-confold" --assume-yes "$@"
1021+ }
1022+ apt_get update
1023+ apt_get install ubuntu-orchestra-provisioning-server libvirt-bin cobbler-web
1024+
1025+ case $(uname -m) in
1026+ i?86) arches="i386";;
1027+ *) arches="amd64";;
1028+ esac
1029+ cat >> /etc/orchestra/import_isos <<END
1030+ RELEASES="oneiric precise"
1031+ ARCHES="${arches}"
1032+ END
1033+
1034+ echo === $(date): starting import ====
1035+ orchestra-import-isos
1036+
1037+ sed -i '/zimmer-server/d' /etc/hosts
1038+
1039+ echo === $(date): starting cleanup ====
1040+ apt_get clean
1041+ time sh -c 'dd if=/dev/zero of=/out.img; rm /out.img'
1042+
1043+ echo === $(date): poweroff ===
1044+ echo === ZIMMER BUILD FINISHED ===
1045+ ) 2>&1 | tee out.log
1046+
1047+runcmd:
1048+ - [ sh, -c, *setup ]
1049+ - [ /sbin/poweroff ]