Merge lp:~free.ekanayaka/landscape-client/jaunty.fix-347983 into lp:ubuntu/jaunty/landscape-client

Proposed by Free Ekanayaka
Status: Merged
Merge reported by: Free Ekanayaka
Merged at revision: not available
Proposed branch: lp:~free.ekanayaka/landscape-client/jaunty.fix-347983
Merge into: lp:ubuntu/jaunty/landscape-client
Diff against target: none
To merge this branch: bzr merge lp:~free.ekanayaka/landscape-client/jaunty.fix-347983
Reviewer: Mathias Gug (status: Approve)
Review via email: mp+12172@code.launchpad.net
Revision history for this message
Free Ekanayaka (free.ekanayaka) wrote :

SRU for landscape-client version 1.3.2.3-0ubuntu0.9.04.0

Revision history for this message
Mathias Gug (mathiaz) wrote :

Looks good to me.

review: Approve

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2008-09-08 16:35:57 +0000
3+++ Makefile 2009-09-21 16:05:18 +0000
4@@ -15,8 +15,13 @@
5 -rm docs/api -rf;
6 -rm man/\*.1 -rf
7
8-doc:
9- ${PYDOCTOR} --make-html --html-output docs/api --add-package landscape
10+doc: docs/api/twisted/pickle
11+ mkdir -p docs/api
12+ ${PYDOCTOR} --make-html --html-output docs/api --add-package landscape --extra-system=docs/api/twisted/pickle:twisted/
13+
14+docs/api/twisted/pickle:
15+ mkdir -p docs/api/twisted
16+ -${PYDOCTOR} --make-html --html-output docs/api/twisted --add-package /usr/share/pyshared/twisted -o docs/api/twisted/pickle
17
18 manpages:
19 ${TXT2MAN} -P Landscape -t landscape-client < man/landscape-client.txt > man/landscape-client.1
20
21=== modified file 'debian/README.source'
22--- debian/README.source 2009-04-09 17:09:50 +0000
23+++ debian/README.source 2009-09-21 16:12:27 +0000
24@@ -7,6 +7,9 @@
25
26 1.0.29-0ubuntu0.9.04.1
27
28-In addition, when you build a new package, it would be appreciated if you also
29-updated landscape/__init__.py to include the entire new version number. This
30-helps us keep track of exact version of clients in use.
31+In addition, when you build a package for a new client version, it would be
32+appreciated if you also updated the UPSTREAM_VERSION variable in
33+landscape/__init__.py to include the entire new client version number. This
34+helps us keep track of the exact versions of clients in use. There's no need to
35+update the DEBIAN_REVISION variable, as it gets automatically set at build
36+time.
37
38=== modified file 'debian/changelog'
39--- debian/changelog 2009-04-13 14:33:31 +0000
40+++ debian/changelog 2009-09-21 16:12:27 +0000
41@@ -1,3 +1,32 @@
42+landscape-client (1.3.2.3-0ubuntu0.9.04.0) jaunty-proposed; urgency=low
43+
44+ * New upstream release (LP: #347983):
44+ - Don't clear the hash_id_requests table upon resynchronize (LP: #417122)
46+ - Include the README file in landscape-client (LP: #396260)
47+ - Fix client capturing stderr from run_command when constructing
48+ hash-id-databases url (LP: #397480)
49+ - Use substvars to conditionally depend on update-motd or
50+ libpam-modules (LP: #393454)
51+ - Fix reporting wrong version to the server (LP: #391225)
52+ - The init script does not wait for the network to be available
53+ before checking for EC2 user data (LP: #383336)
54+ - When the broker is restarted by the watchdog, the state of the client
55+ is inconsistent (LP: #380633)
56+ - Package stays unknown forever in the client with hash-id-databases
57+ support (LP: #381356)
58+ - Standard error not captured when calling smart-update (LP: #387441)
59+ - Changer calls reporter without switching groups, just user (LP: #388092)
60+ - Run smart update in the package-reporter instead of having a cronjob (LP: #362355)
61+ - Package changer does not inherit proxy settings (LP: #381241)
62+ - The ./test script doesn't work in landscape-client (LP: #381613)
63+ - The source package should build on all supported releases (LP: #385098)
64+ - Strip smart update's output (LP: #387331)
65+ - The fetch() timeout isn't based on activity (LP: #389224)
66+ - Client can use a UUID of "None" when fetching the hash-id-database (LP: #381291)
67+ - Registration should use the fqdn rather than just the hostname (LP: #385730)
68+
69+ -- Free Ekanayaka <free.ekanayaka@canonical.com> Mon, 21 Sep 2009 17:59:31 +0200
70+
71 landscape-client (1.0.29.1-0ubuntu0.9.04.0) jaunty; urgency=low
72
73 * Apply a fix for segfault bug involving curl timeouts. (LP: #360510)
74
75=== added file 'debian/cloud-default.conf'
76--- debian/cloud-default.conf 1970-01-01 00:00:00 +0000
77+++ debian/cloud-default.conf 2009-09-21 16:12:27 +0000
78@@ -0,0 +1,7 @@
79+[client]
80+cloud = True
81+url = https://landscape.canonical.com/message-system
82+data_path = /var/lib/landscape/client
83+ping_url = http://landscape.canonical.com/ping
84+include_manager_plugins = ScriptExecution
85+script_users = ALL
86
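The added debian/cloud-default.conf is a plain INI file that the init script copies to /etc/landscape/client.conf on detected EC2 instances. As a minimal sketch (the keys come from the diff above; reading it with the stdlib configparser is my illustration, the client uses its own Configuration class):

```python
from configparser import ConfigParser

# Mirrors debian/cloud-default.conf from the diff above.
CLOUD_DEFAULT = """\
[client]
cloud = True
url = https://landscape.canonical.com/message-system
data_path = /var/lib/landscape/client
ping_url = http://landscape.canonical.com/ping
include_manager_plugins = ScriptExecution
script_users = ALL
"""

config = ConfigParser()
config.read_string(CLOUD_DEFAULT)
print(config.get("client", "url"))
```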
87=== modified file 'debian/compat'
88--- debian/compat 2008-09-08 16:35:57 +0000
89+++ debian/compat 2009-09-21 16:12:27 +0000
90@@ -1,1 +1,1 @@
91-6
92+5
93
94=== modified file 'debian/control'
95--- debian/control 2008-10-17 12:42:23 +0000
96+++ debian/control 2009-09-21 16:12:27 +0000
97@@ -1,34 +1,46 @@
98 Source: landscape-client
99 Section: admin
100 Priority: optional
101-Maintainer: Ubuntu Core Developers <ubuntu-devel-discuss@lists.ubuntu.com>
102-Build-Depends: debhelper (>= 6.0.7~), po-debconf
103-Build-Depends-Indep: python-dev, python-central, lsb-release, gawk
104+Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
105+XSBC-Original-Maintainer: Landscape Team <landscape-team@canonical.com>
106+Build-Depends: debhelper (>= 5), po-debconf, python-dev, python-central | python-support, lsb-release, gawk
107 Standards-Version: 3.8.0
108-Vcs-Bzr: http://bazaar.launchpad.net/~ubuntu-core-dev/landscape-client/ubuntu
109-XS-Python-Version: >= 2.4
110+XS-Python-Version: >= 2.4, << 2.7
111
112 Package: landscape-common
113 Architecture: all
114 Pre-Depends: python, python-gobject
115-Depends: ${python:Depends}, ${misc:Depends}, python-twisted-core, python-dbus, lsb-release, lsb-base, perl-modules, python-gdbm, ca-certificates, adduser, update-motd
116-XB-Python-Version: ${python:Versions}
117-Replaces: landscape-client (< 1.0.23-0ubuntu0.8.10)
118+Depends: ${python:Depends}, ${misc:Depends}, ${extra:Depends},
119+ python-smartpm (>= 1.2-0ubuntu1),
120+ python-dbus | python2.4-dbus,
121+ python-twisted-core,
122+ ca-certificates,
123+ perl-modules,
124+ python-gdbm,
125+ lsb-release,
126+ lsb-base,
127+ adduser
128+Replaces: landscape-client (<= 1.0.23-0ubuntu0.8.10)
129 Description: The Landscape administration system client
130 Landscape is a web-based tool for managing Ubuntu systems. This
131 package is necessary if you want your machine to be managed in a
132 Landscape account.
133 .
134 This package provides the core libraries.
135-
136+XB-Python-Version: ${python:Versions}
137+
138 Package: landscape-client
139-Architecture: all
140+Architecture: any
141 Pre-Depends: python
142-Depends: ${python:Depends}, ${misc:Depends}, landscape-common, hal, python-twisted-web, python-smartpm, python-pycurl, cron
143-XB-Python-Version: ${python:Versions}
144+Depends: ${python:Depends}, ${misc:Depends},
145+ python-pycurl | python2.4-pycurl,
146+ python-twisted-web,
147+ landscape-common (>= ${Source-Version}),
148+ hal
149 Description: The Landscape administration system client
150 Landscape is a web-based tool for managing Ubuntu systems. This
151 package is necessary if you want your machine to be managed in a
152 Landscape account.
153 .
154 This package provides the Landscape client and requires a Landscape account.
155+XB-Python-Version: ${python:Versions}
156
157=== removed file 'debian/landscape-client.cron.hourly'
158--- debian/landscape-client.cron.hourly 2008-09-23 18:19:26 +0000
159+++ debian/landscape-client.cron.hourly 1970-01-01 00:00:00 +0000
160@@ -1,12 +0,0 @@
161-#!/bin/sh
162-
163-[ -f "/etc/default/landscape-client" ] || exit 0
164-
165-. /etc/default/landscape-client
166-
167-if [ "$RUN" -eq "1" ]
168-then
169- [ -x /usr/share/smart/smart ] && /usr/share/smart/smart update > /dev/null 2>&1
170-fi
171-
172-exit 0
173
174=== added file 'debian/landscape-client.docs'
175--- debian/landscape-client.docs 1970-01-01 00:00:00 +0000
176+++ debian/landscape-client.docs 2009-09-21 16:12:27 +0000
177@@ -0,0 +1,1 @@
178+README
179
180=== modified file 'debian/landscape-client.init'
181--- debian/landscape-client.init 2009-03-18 20:42:05 +0000
182+++ debian/landscape-client.init 2009-09-21 16:12:27 +0000
183@@ -17,15 +17,32 @@
184 PIDDIR=/var/run/landscape
185 PIDFILE=$PIDDIR/$NAME.pid
186 RUN=0 # overridden in /etc/default/landscape-client
187+CLOUD=0 # overridden in /etc/default/landscape-client
188+DAEMON_GROUP=landscape
189
190 [ -f $DAEMON ] || exit 0
191
192 . /lib/lsb/init-functions
193 [ -f $LANDSCAPE_DEFAULTS ] && . $LANDSCAPE_DEFAULTS
194
195+# This $RUN check should match the semantics of
196+# l.sysvconfig.SysVConfig.is_configured_to_run.
197 if [ $RUN -eq 0 ]; then
198- echo "$NAME is not configured, please run landscape-config."
199- exit 0
200+ if [ $CLOUD -eq 1 ]; then
201+ if landscape-is-cloud-managed; then
202+ # Install the cloud default configuration file
203+ cp /usr/share/landscape/cloud-default.conf /etc/landscape/client.conf
204+                        # Override the defaults file so that we don't enter
205+                        # this conditional again at next startup
206+ echo "RUN=1" > $LANDSCAPE_DEFAULTS
207+ else
208+ echo "$NAME is not configured, please run landscape-config."
209+ exit 0
210+ fi
211+ else
212+ echo "$NAME is not configured, please run landscape-config."
213+ exit 0
214+ fi
215 fi
216
217 case "$1" in
218@@ -35,11 +52,11 @@
219 mkdir $PIDDIR
220 chown landscape $PIDDIR
221 fi
222- FULL_COMMAND="start-stop-daemon --start --quiet --oknodo --startas $DAEMON --pidfile $PIDFILE -- --daemon --pid-file $PIDFILE"
223+ FULL_COMMAND="start-stop-daemon --start --quiet --oknodo --startas $DAEMON --pidfile $PIDFILE -g $DAEMON_GROUP -- --daemon --pid-file $PIDFILE"
224 if [ x"$DAEMON_USER" != x ]; then
225- sudo -u $DAEMON_USER $FULL_COMMAND
226+ sudo -u $DAEMON_USER $FULL_COMMAND
227 else
228- $FULL_COMMAND
229+ $FULL_COMMAND
230 fi
231 log_end_msg $?
232 ;;
233
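The init-script change above only starts the client when RUN=1, or when CLOUD=1 and landscape-is-cloud-managed succeeds (in which case it installs the cloud default config and persists RUN=1). A hedged restatement of that gate in Python; the function and parameter names are illustrative, not part of the client:

```python
def should_start(run, cloud, is_cloud_managed):
    """Mirror of the RUN/CLOUD gate in debian/landscape-client.init.

    `is_cloud_managed` stands in for the landscape-is-cloud-managed
    helper invoked by the script.
    """
    if run == 1:
        return True
    # RUN=0: only proceed on a cloud instance with EC2 user data present.
    return cloud == 1 and is_cloud_managed()

print(should_start(0, 1, lambda: True))
```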
234=== modified file 'debian/landscape-client.install'
235--- debian/landscape-client.install 2008-09-15 17:21:53 +0000
236+++ debian/landscape-client.install 2009-09-21 16:12:27 +0000
237@@ -6,4 +6,7 @@
238 usr/bin/landscape-monitor
239 usr/bin/landscape-package-changer
240 usr/bin/landscape-package-reporter
241+usr/bin/landscape-is-cloud-managed
242 etc/dbus-1/system.d/landscape.conf
243+usr/share/landscape/cloud-default.conf
244+usr/lib/landscape
245
246=== modified file 'debian/landscape-client.postinst'
247--- debian/landscape-client.postinst 2008-09-25 17:54:00 +0000
248+++ debian/landscape-client.postinst 2009-09-21 16:12:27 +0000
249@@ -76,6 +76,23 @@
250 echo "Old monolithic client data file migrated to new format."
251 fi
252
253+    # Add the setuid flag to smart-update and make it executable by users in
254+ # the landscape group (that normally means landscape itself)
255+ smart_update=/usr/lib/landscape/smart-update
256+ if ! dpkg-statoverride --list $smart_update >/dev/null 2>&1; then
257+ dpkg-statoverride --update --add root landscape 4754 $smart_update
258+ fi
259+
260+ # Remove old cron jobs
261+ old_cron_job=/etc/cron.hourly/landscape-client
262+ if [ -e $old_cron_job ]; then
263+ rm $old_cron_job
264+ fi
265+ very_old_cron_job=/etc/cron.hourly/smartpm-core
266+ if [ -e $very_old_cron_job ]; then
267+ rm $very_old_cron_job
268+ fi
269+
270 ;;
271
272 abort-upgrade|abort-remove|abort-deconfigure)
273
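The postinst above registers a dpkg-statoverride of root:landscape 4754 on smart-update: setuid root, executable by the landscape group, and writable by no one but the owner. Decoding that octal mode with Python's stat module (a sketch, not part of the packaging):

```python
import stat

mode = 0o4754  # root:landscape 4754, as registered by dpkg-statoverride
assert mode & stat.S_ISUID          # setuid bit: runs as the owner (root)
assert mode & stat.S_IXGRP          # group (landscape) may execute it
assert not (mode & stat.S_IWGRP)    # ...but may not modify the binary
assert not (mode & stat.S_IXOTH)    # others may not execute it at all
print(stat.filemode(mode))
```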
274=== modified file 'debian/landscape-client.prerm'
275--- debian/landscape-client.prerm 2008-09-15 17:21:53 +0000
276+++ debian/landscape-client.prerm 2009-09-21 16:12:27 +0000
277@@ -19,6 +19,13 @@
278
279 case "$1" in
280 remove|upgrade|deconfigure)
281+
282+ # Remove statoverride for smart-update
283+ smart_update=/usr/lib/landscape/smart-update
284+ if dpkg-statoverride --list $smart_update >/dev/null 2>&1; then
285+ dpkg-statoverride --remove $smart_update
286+ fi
287+
288 ;;
289
290 failed-upgrade)
291
292=== added file 'debian/landscape-common.dirs'
293--- debian/landscape-common.dirs 1970-01-01 00:00:00 +0000
294+++ debian/landscape-common.dirs 2009-09-21 16:12:27 +0000
295@@ -0,0 +1,1 @@
296+etc/update-motd.d
297
298=== modified file 'debian/landscape-common.install'
299--- debian/landscape-common.install 2009-02-25 12:03:23 +0000
300+++ debian/landscape-common.install 2009-09-21 16:12:27 +0000
301@@ -1,3 +1,3 @@
302-usr/lib/python*/*-packages/*
303+usr/lib/python*
304 usr/bin/landscape-sysinfo
305 usr/share/landscape/landscape-sysinfo.wrapper
306
307=== modified file 'debian/landscape-common.postinst'
308--- debian/landscape-common.postinst 2008-12-11 17:11:08 +0000
309+++ debian/landscape-common.postinst 2009-09-21 16:12:27 +0000
310@@ -26,28 +26,28 @@
311 case "$1" in
312 configure)
313
314- db_get $PACKAGE/sysinfo
315- # Choices:
316- # * Do not display sysinfo on login
317- # * Cache sysinfo in /etc/motd
318- # * Run sysinfo on every login
319- SYSINFO="${RET:-Cache sysinfo in /etc/motd}"
320- WRAPPER=/usr/share/landscape/landscape-sysinfo.wrapper
321- PROFILE_LOCATION=/etc/profile.d/50-landscape-sysinfo.sh
322- UPDATE_MOTD_LOCATION=/etc/update-motd.d/50-landscape-sysinfo
323- if [ "$RET" = "Cache sysinfo in /etc/motd" ]; then
324- rm -f $PROFILE_LOCATION 2>/dev/null || true
325- ln -sf $WRAPPER $UPDATE_MOTD_LOCATION
326- /usr/sbin/update-motd 2>/dev/null || true
327- elif [ "$RET" = "Run sysinfo on every login" ]; then
328- rm -f $UPDATE_MOTD_LOCATION 2>/dev/null || true
329- /usr/sbin/update-motd 2>/dev/null || true
330- ln -sf $WRAPPER $PROFILE_LOCATION
331- else
332- rm -f $UPDATE_MOTD_LOCATION 2>/dev/null || true
333- /usr/sbin/update-motd 2>/dev/null || true
334- rm -f $PROFILE_LOCATION || true
335- fi
336+ db_get $PACKAGE/sysinfo
337+ # Choices:
338+ # * Do not display sysinfo on login
339+ # * Cache sysinfo in /etc/motd
340+ # * Run sysinfo on every login
341+ SYSINFO="${RET:-Cache sysinfo in /etc/motd}"
342+ WRAPPER=/usr/share/landscape/landscape-sysinfo.wrapper
343+ PROFILE_LOCATION=/etc/profile.d/50-landscape-sysinfo.sh
344+ UPDATE_MOTD_LOCATION=/etc/update-motd.d/50-landscape-sysinfo
345+ if [ "$RET" = "Cache sysinfo in /etc/motd" ]; then
346+ rm -f $PROFILE_LOCATION 2>/dev/null || true
347+ ln -sf $WRAPPER $UPDATE_MOTD_LOCATION
348+ /usr/sbin/update-motd 2>/dev/null || true
349+ elif [ "$RET" = "Run sysinfo on every login" ]; then
350+ rm -f $UPDATE_MOTD_LOCATION 2>/dev/null || true
351+ /usr/sbin/update-motd 2>/dev/null || true
352+ ln -sf $WRAPPER $PROFILE_LOCATION
353+ else
354+ rm -f $UPDATE_MOTD_LOCATION 2>/dev/null || true
355+ /usr/sbin/update-motd 2>/dev/null || true
356+ rm -f $PROFILE_LOCATION || true
357+ fi
358
359 # 0.9.1 introduces non-backwards compatible changes. This detects
360 # whether or not the data is in the current format. If not, all
361@@ -65,10 +65,18 @@
362
363 # Create landscape system user
364 if ! getent passwd landscape >/dev/null; then
365- adduser --quiet --system --disabled-password \
366+ adduser --quiet --system --group --disabled-password \
367 --home /var/lib/landscape --no-create-home landscape
368 fi
369
370+ # Create landscape system group (for <= 1.0.29.1-0ubuntu0.9.04.0)
371+ if ! getent group landscape >/dev/null; then
372+ addgroup --quiet --system landscape
373+ fi
374+
375+ # Ensure primary group is landscape (for <= 1.0.29.1-0ubuntu0.9.04.0)
376+ usermod -g landscape landscape
377+
378 # Fix prior ownerships
379 find /var/lib/landscape/ -wholename /var/lib/landscape/client/custom-graph-scripts -prune -or -exec chown landscape {} \;
380 chown -R landscape /var/log/landscape
381
382=== modified file 'debian/landscape-common.prerm'
383--- debian/landscape-common.prerm 2008-09-18 16:47:08 +0000
384+++ debian/landscape-common.prerm 2009-09-21 16:12:27 +0000
385@@ -19,9 +19,9 @@
386
387 case "$1" in
388 remove|upgrade|deconfigure)
389- rm -f /etc/update-motd.d/50-landscape-sysinfo 2>/dev/null || true
390- /usr/sbin/update-motd 2>/dev/null || true
391- rm -f /etc/profile.d/landscape-sysinfo.sh 2>/dev/null || true
392+ rm -f /etc/update-motd.d/50-landscape-sysinfo 2>/dev/null || true
393+ /usr/sbin/update-motd 2>/dev/null || true
394+ rm -f /etc/profile.d/landscape-sysinfo.sh 2>/dev/null || true
395 ;;
396
397 failed-upgrade)
398
399=== modified file 'debian/rules'
400--- debian/rules 2009-02-25 12:03:23 +0000
401+++ debian/rules 2009-09-21 16:12:27 +0000
402@@ -16,10 +16,16 @@
403 package = landscape-client
404 root_dir = debian/tmp/
405
406+revision = $(shell dpkg-parsechangelog | grep ^Version | cut -f 2 -d " "| cut -f 2 -d "-")
407+
408+landscape_substvars = debian/landscape-common.substvars
409+
410 build: build-stamp
411 build-stamp:
412 dh_testdir
413+ sed -i -e "s/^DEBIAN_REVISION = \"\"/DEBIAN_REVISION = \"-$(revision)\"/g" landscape/__init__.py
414 python setup.py build
415+ make -C smart-update
416 touch build-stamp
417
418 clean:
419@@ -27,8 +33,10 @@
420 dh_testroot
421 rm -f build-stamp
422 rm -rf build
423+ make -C smart-update clean
424 dh_clean
425 debconf-updatepo
426+ sed -i -e "s/^DEBIAN_REVISION = .*/DEBIAN_REVISION = \"\"/g" landscape/__init__.py
427
428 install: build
429 dh_testdir
430@@ -44,37 +52,48 @@
431
432 install -D -o root -g root -m 644 dbus/landscape.conf $(root_dir)/etc/dbus-1/system.d/landscape.conf
433 install -D -o root -g root -m 755 debian/landscape-sysinfo.wrapper $(root_dir)/usr/share/landscape/landscape-sysinfo.wrapper
434-
435-
436- dh_installcron
437-
438-binary-arch:
439+ install -D -o root -g root -m 644 debian/cloud-default.conf $(root_dir)/usr/share/landscape/cloud-default.conf
440+ install -D -o root -g root -m 755 smart-update/smart-update $(root_dir)/usr/lib/landscape/smart-update
441+
442+binary-indep:
443 # do nothing
444 #
445-binary-indep: build install
446+binary-arch: build install
447 # That's not present in Ubuntu releases we still support, so
448 # we're just installing the lintian overrides file by hand
449 # for now.
450- dh_lintian
451+ #dh_lintian
452 dh_testdir
453 dh_testroot
454 dh_installdocs
455 dh_installman -p landscape-client man/landscape-client.1 man/landscape-config.1 man/landscape-message.1
456 dh_installchangelogs
457 dh_install --sourcedir debian/tmp/
458- dh_installinit -- start 25 2 3 4 5 . stop 15 1 .
459+ dh_installinit -- start 45 2 3 4 5 . stop 15 1 .
460 dh_installlogrotate
461 dh_installdebconf
462 dh_compress
463 dh_fixperms
464
465+ifneq (,$(findstring $(dist_release),"dapper"))
466+ # We need python2.4-pysqlite2 and a non-buggy libcurl3-gnutls on dapper
467+ echo "extra:Depends=python2.4-pysqlite2, libcurl3-gnutls (>= 7.15.1-1ubuntu3)" >> $(landscape_substvars)
468+endif
469+ifneq (,$(findstring $(dist_release),"intrepid jaunty"))
470+ # We want update-motd in intrepid and jaunty
471+ echo "extra:Depends=update-motd" >> $(landscape_substvars)
472+endif
473+ifneq (,$(findstring $(dist_release),"karmic"))
474+ # We want libpam-modules in karmic
475+ echo "extra:Depends=libpam-modules (>= 1.0.1-9ubuntu3)" >> $(landscape_substvars)
476+endif
477
478 ifeq ($(use_pycentral),yes)
479- ifneq (,$(py_setup_install_args))
480+ ifneq (,$(py_setup_install_args))
481 DH_PYCENTRAL=include-links dh_pycentral
482- else
483+ else
484 DH_PYCENTRAL=nomove dh_pycentral
485- endif
486+ endif
487 else
488 dh_python
489 endif
490
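The new `revision` variable in debian/rules is computed with `dpkg-parsechangelog | grep ^Version | cut -f 2 -d " " | cut -f 2 -d "-"` and then sed-ed into landscape/__init__.py at build time. A sketch of the same extraction in Python, using the version from the changelog entry above:

```python
def debian_revision(version_line):
    """Hypothetical Python equivalent of the cut(1) pipeline in debian/rules.

    "Version: 1.3.2.3-0ubuntu0.9.04.0" -> "0ubuntu0.9.04.0"
    """
    version = version_line.split(" ")[1]   # cut -f 2 -d " "
    return version.split("-")[1]           # cut -f 2 -d "-"

print(debian_revision("Version: 1.3.2.3-0ubuntu0.9.04.0"))
```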
491=== modified file 'landscape/__init__.py'
492--- landscape/__init__.py 2009-04-13 14:33:31 +0000
493+++ landscape/__init__.py 2009-09-21 16:07:58 +0000
494@@ -1,2 +1,4 @@
495-VERSION = "AUTOPPA_VERSION(1.0.29.1-0ubuntu0.9.04.0)"[len("AUTOPPA_VERSION("):-1]
496+DEBIAN_REVISION = ""
497+UPSTREAM_VERSION = "1.3.2.1"
498+VERSION = "%s%s" % (UPSTREAM_VERSION, DEBIAN_REVISION)
499 API = "3.2"
500
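With the change above, the reported version is simply UPSTREAM_VERSION concatenated with whatever DEBIAN_REVISION the build-time sed injects (e.g. "-0ubuntu0.9.04.0"); in an unbuilt source tree DEBIAN_REVISION stays empty:

```python
# Mirrors landscape/__init__.py after this diff; the sed in debian/rules
# rewrites DEBIAN_REVISION at package build time.
DEBIAN_REVISION = ""                 # e.g. "-0ubuntu0.9.04.0" in a build
UPSTREAM_VERSION = "1.3.2.1"
VERSION = "%s%s" % (UPSTREAM_VERSION, DEBIAN_REVISION)
print(VERSION)
```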
501=== modified file 'landscape/broker/broker.py'
502--- landscape/broker/broker.py 2009-03-18 20:42:05 +0000
503+++ landscape/broker/broker.py 2009-09-21 16:05:18 +0000
504@@ -19,7 +19,23 @@
505
506
507 class BrokerDBusObject(Object):
508- """A DBus-published object which allows adding messages to the queue."""
509+ """A DBus-published object exposing broker's services and emitting signals.
510+
511+ Each public method decorated with C{@method(IFACE_NAME)} exposes a
512+ broker's service to the other Landscape client processes, which can
513+ remote call it through the D-Bus interface.
514+
515+ Each public method decorated with C{@signal(IFACE_NAME)} turns the
516+ corresponding broker's L{TwistedReactor} event into a D-Bus signal,
517+ which gets broadcasted over the bus and eventually received by the
518+ other Landscape client processes.
519+
520+ In order to receive D-Bus signals from the broker, the Landscape
521+ client processes have to register themselves as L{BrokerPlugin}s
522+ using the L{register_plugin} method. The L{BrokerPlugin.message}
523+ method of each plugin will be called on every C{"message"} event
524+ generated by the L{TwistedReactor}.
525+ """
526
527 bus_name = BUS_NAME
528 object_path = OBJECT_PATH
529@@ -27,9 +43,11 @@
530 def __init__(self, config, reactor, exchange, registration,
531 message_store, bus):
532 """
533- @param exchange: The
534- L{MessageExchange<landscape.exchange.MessageExchange>} to send
535- messages with.
536+ @param config: The L{BrokerConfiguration} used by the broker.
537+ @param reactor: The L{TwistedReactor} driving the broker's events.
538+ @param exchange: The L{MessageExchange} to send messages with.
539+ @param registration: The {RegistrationHandler}.
540+ @param message_store: The broker's L{MessageStore}.
541 @param bus: The L{Bus} that represents where we're listening.
542 """
543 super(BrokerDBusObject, self).__init__(bus)
544@@ -53,6 +71,7 @@
545 self.server_uuid_changed(old_uuid or "", new_uuid or "")
546 reactor.call_on("server-uuid-changed", server_uuid_changed)
547 reactor.call_on("resynchronize-clients", self.resynchronize)
548+ self.broker_started()
549
550 @signal(IFACE_NAME)
551 def resynchronize(self):
552@@ -67,6 +86,10 @@
553 pass
554
555 def _broadcast_message(self, message):
556+ """Call the C{message} method of all the registered plugins.
557+
558+ @see: L{register_plugin}.
559+ """
560 blob = byte_array(dumps(message))
561 results = []
562 for plugin in self.get_plugin_objects():
563@@ -148,6 +171,10 @@
564 def registration_failed(self):
565 pass
566
567+ @signal(IFACE_NAME)
568+ def broker_started(self):
569+ pass
570+
571 @method(IFACE_NAME, out_signature="as")
572 def get_accepted_message_types(self):
573 return self.message_store.get_accepted_types()
574@@ -172,6 +199,14 @@
575
576 @method(IFACE_NAME)
577 def register_plugin(self, bus_name, object_path):
578+ """Register a new plugin.
579+
580+ Plugins are DBus-published objects identified by C{bus_name}
581+ and C{object_path}.
582+
583+ Examples of plugins are L{MonitorDBusObject} and L{ManagerDBusObject}
584+ from the monitor and manager Landscape services respectively.
585+ """
586 self._registered_plugins.add((bus_name, object_path))
587
588 @method(IFACE_NAME)
589
590=== modified file 'landscape/broker/deployment.py'
591--- landscape/broker/deployment.py 2009-03-18 20:42:05 +0000
592+++ landscape/broker/deployment.py 2009-09-21 16:05:18 +0000
593@@ -14,7 +14,10 @@
594
595
596 class BrokerConfiguration(Configuration):
597- """Specialized configuration for the Landscape Broker."""
598+ """Specialized configuration for the Landscape Broker.
599+
600+ @cvar required_options: C{["url"]}
601+ """
602
603 required_options = ["url"]
604
605@@ -24,9 +27,21 @@
606 self._original_https_proxy = os.environ.get("https_proxy")
607
608 def make_parser(self):
609- """
610- Specialize L{Configuration.make_parser}, adding many
611- broker-specific options.
612+ """Parser factory for broker-specific options.
613+
614+ @return: An L{OptionParser} preset for all the options
615+ from L{Configuration.make_parser} plus:
616+ - C{account_name}
617+ - C{registration_password}
618+ - C{computer_title}
619+ - C{url}
620+ - C{ssl_public_key}
621+ - C{exchange_interval} (C{15*60})
622+ - C{urgent_exchange_interval} (C{1*60})
623+ - C{ping_url}
624+ - C{http_proxy}
625+ - C{https_proxy}
626+ - C{cloud}
627 """
628 parser = super(BrokerConfiguration, self).make_parser()
629
630@@ -63,13 +78,16 @@
631
632 @property
633 def message_store_path(self):
634+ """Get the path to the message store."""
635 return os.path.join(self.data_path, "messages")
636
637 def load(self, args, accept_nonexistent_config=False):
638 """
639+ Load options from command line arguments and a config file.
640+
641 Load the configuration with L{Configuration.load}, and then set
642- http_proxy and https_proxy environment variables based on that config
643- data.
644+ C{http_proxy} and C{https_proxy} environment variables based on
645+ that config data.
646 """
647 super(BrokerConfiguration, self).load(
648 args, accept_nonexistent_config=accept_nonexistent_config)
649@@ -85,15 +103,34 @@
650
651
652 class BrokerService(LandscapeService):
653- """
654- The core Twisted Service which creates and runs all necessary
655- components when started.
656+ """The core C{Service} of the Landscape Broker C{Application}.
657+
658+ The Landscape broker service handles all the communication between the
659+ client and server. When started it creates and runs all necessary components
660+ to exchange messages with the Landscape server.
661+
662+ @ivar persist_filename: Path to broker-specific persisted data.
663+ @ivar persist: A L{Persist} object saving and loading from
664+ C{self.persist_filename}.
665+ @ivar message_store: A L{MessageStore} used by the C{exchanger} to
666+ queue outgoing messages.
667+ @ivar transport: A L{HTTPTransport} used by the C{exchanger} to deliver messages.
668+ @ivar identity: The L{Identity} of the Landscape client the broker runs on.
669+ @ivar exchanger: The L{MessageExchange} exchanges messages with the server.
670+ @ivar pinger: The L{Pinger} checks if the server has new messages for us.
671+ @ivar registration: The L{RegistrationHandler} performs the initial
672+ registration.
673+
674+ @cvar service_name: C{"broker"}
675 """
676
677 transport_factory = HTTPTransport
678 service_name = "broker"
679
680 def __init__(self, config):
681+ """
682+ @param config: a L{BrokerConfiguration}.
683+ """
684 self.persist_filename = os.path.join(
685 config.data_path, "%s.bpickle" % (self.service_name,))
686 super(BrokerService, self).__init__(config)
687@@ -128,9 +165,10 @@
688 reactor.stop()
689
690 def startService(self):
691- """
692- Set up the persist, message store, transport, reactor, and
693- dbus message exchange service.
694+ """Start the broker.
695+
696+ Create the DBus-published L{BrokerDBusObject}, and start
697+ the L{MessageExchange} and L{Pinger} services.
698
699 If the configuration specifies the bus as 'session', the DBUS
700 message exchange service will use the DBUS Session Bus.
701
702=== modified file 'landscape/broker/exchange.py'
703--- landscape/broker/exchange.py 2009-03-18 20:42:05 +0000
704+++ landscape/broker/exchange.py 2009-09-21 16:05:18 +0000
705@@ -27,6 +27,16 @@
706 monitor_interval=None,
707 max_messages=100,
708 create_time=time.time):
709+ """
710+ @param reactor: A L{TwistedReactor} used to fire events in response
711+ to messages received by the server.
712+ @param store: A L{MessageStore} used to queue outgoing messages.
713+ @param transport: A L{HTTPTransport} used to deliver messages.
714+ @param exchange_interval: time interval between subsequent
715+ exchanges of non-urgent messages.
716+        @param urgent_exchange_interval: time interval between subsequent
717+ exchanges of urgent messages.
718+ """
719 self._reactor = reactor
720 self._message_store = store
721 self._create_time = create_time
722@@ -50,6 +60,7 @@
723 reactor.call_on("pre-exit", self.stop)
724
725 def get_exchange_intervals(self):
726+ """Return a binary tuple with urgent and normal exchange intervals."""
727 return (self._urgent_exchange_interval, self._exchange_interval)
728
729 def send(self, message, urgent=False):
730@@ -57,6 +68,8 @@
731
732 If urgent is True, an exchange with the server will be
733 scheduled urgently.
734+
735+ @param message: Same as in L{MessageStore.add}.
736 """
737 if "timestamp" not in message:
738 message["timestamp"] = int(self._reactor.time())
739@@ -85,9 +98,9 @@
740 If this makes existing held messages available for sending,
741 urgently exchange messages.
742
743- If new types are made available, a
744- C{("message-type-accepted", type_name)} reactor event will
745- be fired.
746+        If new types are made available or old types are dropped, a
747+ C{("message-type-acceptance-changed", type, bool)} reactor
748+ event will be fired.
749 """
750 old_types = set(self._message_store.get_accepted_types())
751 new_types = set(message["types"])
752@@ -123,7 +136,13 @@
753 def exchange(self):
754 """Send pending messages to the server and process responses.
755
756- @return: A deferred that is fired when exchange has completed.
757+        A C{pre-exchange} reactor event will be emitted just before the
758+ actual exchange takes place.
759+
760+ An C{exchange-done} or C{exchange-failed} reactor event will be
761+ emitted after a successful or failed exchange.
762+
763+ @return: A L{Deferred} that is fired when exchange has completed.
764
765 XXX Actually that is a lie right now. It returns before exchange is
766 actually complete.
767@@ -182,11 +201,10 @@
768 approximately 10 seconds before the exchange is started.
769
770 @param urgent: If true, ensure an exchange happens within the
771- urgent interval. This will reschedule the exchange
772- if necessary. If another urgent exchange is already
773- scheduled, nothing happens.
774+ urgent interval. This will reschedule the exchange if necessary.
775+ If another urgent exchange is already scheduled, nothing happens.
776 @param force: If true, an exchange will necessarily be scheduled,
777- even if it was already scheduled before.
778+ even if it was already scheduled before.
779 """
780 # The 'not self._exchanging' check below is currently untested.
781 # It's a bit tricky to test as it is preventing rehooking 'exchange'
782@@ -217,7 +235,12 @@
783 self._reactor.fire("impending-exchange")
784
785 def make_payload(self):
786- """Return a dict representing the complete payload."""
787+ """Return a dict representing the complete exchange payload.
788+
789+ The payload will contain all pending messages eligible for
790+ delivery, up to a maximum of C{max_messages} as passed to
791+ the L{__init__} method.
792+ """
793 store = self._message_store
794 accepted_types_digest = self._hash_types(store.get_accepted_types())
795 messages = store.get_pending_messages(self._max_messages)
796@@ -263,6 +286,19 @@
797 return md5.new(accepted_types_str).digest()
798
799 def _handle_result(self, payload, result):
800+ """Handle a response from the server.
801+
802+ Called by L{exchange} after a batch of messages has been
803+ successfully delivered to the server.
804+
805+ If the C{server_uuid} changed, a C{"server-uuid-changed"} event
806+ will be fired.
807+
808+ Call L{handle_message} for each message in C{result}.
809+
810+ @param payload: The payload that was sent to the server.
811+ @param result: The response got in reply to the C{payload}.
812+ """
813 message_store = self._message_store
814 self._client_accepted_types_hash = result.get("client-accepted-types-hash")
815 next_expected = result.get("next-expected-sequence")
816@@ -309,9 +345,10 @@
817 self.schedule_exchange(urgent=True)
818
819 def register_message(self, type, handler):
820- """
821- Register a handler to be called when a message of the given
822- type has been received from the server.
823+ """Register a handler for the given message type.
824+
825+        The C{handler} callable will be executed when a message of
826+ type C{type} has been received from the server.
827
828 Multiple handlers for the same type will be called in the
829 order they were registered.
830
831=== modified file 'landscape/broker/registration.py'
832--- landscape/broker/registration.py 2009-03-18 20:42:05 +0000
833+++ landscape/broker/registration.py 2009-09-21 16:05:18 +0000
834@@ -1,4 +1,5 @@
835
836+import time
837 import logging
838 import socket
839
840@@ -7,9 +8,11 @@
841 from landscape.lib.twisted_util import gather_results
842 from landscape.lib.bpickle import loads
843 from landscape.lib.log import log_failure
844-
845-
846-EC2_API = "http://169.254.169.254/latest"
847+from landscape.lib.fetch import fetch, FetchError
848+
849+
850+EC2_HOST = "169.254.169.254"
851+EC2_API = "http://%s/latest" % (EC2_HOST,)
852
853
854 class InvalidCredentialsError(Exception):
855@@ -33,7 +36,15 @@
856
857
858 class Identity(object):
859- """Maintains details about the identity of this Landscape client."""
860+ """Maintains details about the identity of this Landscape client.
861+
862+ @ivar secure_id: A server-provided ID for secure message exchange.
863+ @ivar insecure_id: Non-secure server-provided ID, mainly used with
864+ the ping server.
865+ @ivar computer_title: See L{BrokerConfiguration}.
866+ @ivar account_name: See L{BrokerConfiguration}.
867+ @ivar registration_password: See L{BrokerConfiguration}.
868+ """
869
870 secure_id = persist_property("secure-id")
871 insecure_id = persist_property("insecure-id")
872@@ -42,6 +53,11 @@
873 registration_password = config_property("registration_password")
874
875 def __init__(self, config, persist):
876+ """
877+ @param config: A L{BrokerConfiguration} object, used to set the
878+ C{computer_title}, C{account_name} and C{registration_password}
879+ instance variables.
880+ """
881 self._config = config
882 self._persist = persist.root_at("registration")
883
884@@ -98,36 +114,6 @@
885 self._exchange.exchange()
886 return result
887
888- def _extract_ec2_instance_data(self, raw_user_data, launch_index):
889- """
890- Given the raw string of EC2 User Data, parse it and return the dict of
891- instance data for this particular instance.
892-
893- If the data can't be parsed, a debug message will be logged and None
894- will be returned.
895- """
896- try:
897- user_data = loads(raw_user_data)
898- except ValueError:
899- logging.debug("Got invalid user-data %r" % (raw_user_data,))
900- return
901-
902- if not isinstance(user_data, dict):
903- logging.debug("user-data %r is not a dict" % (user_data,))
904- return
905- for key in "otps", "exchange-url", "ping-url":
906- if key not in user_data:
907- logging.debug("user-data %r doesn't have key %r."
908- % (user_data, key))
909- return
910- if len(user_data["otps"]) <= launch_index:
911- logging.debug("user-data %r doesn't have OTP for launch index %d"
912- % (user_data, launch_index))
913- return
914- return {"otp": user_data["otps"][launch_index],
915- "exchange-url": user_data["exchange-url"],
916- "ping-url": user_data["ping-url"]}
917-
918 def _fetch_ec2_data(self):
919 id = self._identity
920 if self._cloud and not id.secure_id:
921@@ -170,7 +156,7 @@
922 self._ec2_data["launch_index"] = int(
923 self._ec2_data["launch_index"])
924
925- instance_data = self._extract_ec2_instance_data(
926+ instance_data = _extract_ec2_instance_data(
927 raw_user_data, int(launch_index))
928 if instance_data is not None:
929 self._otp = instance_data["otp"]
930@@ -191,9 +177,15 @@
931 registration_data.addErrback(log_error)
932
933 def _handle_exchange_done(self):
934+ """Registered handler for the C{"exchange-done"} event.
935+
936+ If we are not registered yet, schedule another message exchange.
937+
938+ The first exchange made us accept the message type "register", so
939+ the next "pre-exchange" event will make L{_handle_pre_exchange}
940+ queue a registration message for delivery.
941+ """
942 if self.should_register() and not self._should_register:
943- # We received accepted-types (first exchange), so we now trigger
944- # the second exchange for registration
945 self._exchange.exchange()
946
947 def _handle_pre_exchange(self):
948@@ -218,7 +210,7 @@
949 logging.info("Queueing message to register with OTP")
950 message = {"type": "register-cloud-vm",
951 "otp": self._otp,
952- "hostname": socket.gethostname(),
953+ "hostname": socket.getfqdn(),
954 "account_name": None,
955 "registration_password": None,
956 }
957@@ -229,7 +221,7 @@
958 "%r as an EC2 instance." % (id.account_name,))
959 message = {"type": "register-cloud-vm",
960 "otp": None,
961- "hostname": socket.gethostname(),
962+ "hostname": socket.getfqdn(),
963 "account_name": id.account_name,
964 "registration_password": \
965 id.registration_password}
966@@ -246,15 +238,18 @@
967 "computer_title": id.computer_title,
968 "account_name": id.account_name,
969 "registration_password": id.registration_password,
970- "hostname": socket.gethostname()}
971+ "hostname": socket.getfqdn()}
972 self._exchange.send(message)
973 else:
974 self._reactor.fire("registration-failed")
975
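The hunks above replace `socket.gethostname()` with `socket.getfqdn()` in every registration message. A small illustration of the difference between the two calls (output depends on the local DNS/hosts configuration):

```python
import socket

# gethostname() returns the bare host name as configured on the machine.
short_name = socket.gethostname()

# getfqdn() tries to resolve that name to a fully-qualified domain name,
# and falls back to the bare name when no better answer is available, so
# it is always at least as informative as gethostname().
fqdn = socket.getfqdn()

print(short_name, fqdn)
```

This is why the test suite in this branch patches `socket.getfqdn` rather than `socket.gethostname`.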
976 def _handle_set_id(self, message):
977- """
978+ """Registered handler for the C{"set-id"} event.
979+
980 Record and start using the secure and insecure IDs from the given
981 message.
982+
983+ Fire C{"registration-done"} and C{"resynchronize-clients"}.
984 """
985 id = self._identity
986 id.secure_id = message.get("id")
987@@ -301,3 +296,76 @@
988 def _failed(self):
989 self.deferred.errback(InvalidCredentialsError())
990 self._cancel_calls()
991+
992+
993+def _extract_ec2_instance_data(raw_user_data, launch_index):
994+ """
995+ Given the raw string of EC2 User Data, parse it and return the dict of
996+ instance data for this particular instance.
997+
998+ If the data can't be parsed, a debug message will be logged and None
999+ will be returned.
1000+ """
1001+ try:
1002+ user_data = loads(raw_user_data)
1003+ except ValueError:
1004+ logging.debug("Got invalid user-data %r" % (raw_user_data,))
1005+ return
1006+
1007+ if not isinstance(user_data, dict):
1008+ logging.debug("user-data %r is not a dict" % (user_data,))
1009+ return
1010+ for key in "otps", "exchange-url", "ping-url":
1011+ if key not in user_data:
1012+ logging.debug("user-data %r doesn't have key %r."
1013+ % (user_data, key))
1014+ return
1015+ if len(user_data["otps"]) <= launch_index:
1016+ logging.debug("user-data %r doesn't have OTP for launch index %d"
1017+ % (user_data, launch_index))
1018+ return
1019+ return {"otp": user_data["otps"][launch_index],
1020+ "exchange-url": user_data["exchange-url"],
1021+ "ping-url": user_data["ping-url"]}
1022+
1023+
1024+def _wait_for_network():
1025+ """
1026+ Keep trying to connect to the EC2 metadata server until it becomes
1027+ accessible or until five minutes pass.
1028+
1029+ This is necessary because the networking init script on Ubuntu is
1030+ asynchronous; the network may not actually be up by the time the
1031+ landscape-client init script is invoked.
1032+ """
1033+ timeout = 5*60
1034+ port = 80
1035+
1036+ start = time.time()
1037+ while True:
1038+ s = socket.socket()
1039+ try:
1040+ s.connect((EC2_HOST, port))
1041+ s.close()
1042+ return
1043+ except socket.error, e:
1044+ time.sleep(1)
1045+ if time.time() - start > timeout:
1046+ break
1047+
1048+
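The retry loop in `_wait_for_network` above is an instance of a generic wait-until-ready pattern. A sketch of that pattern in isolation (the `attempt` callable, timeout, and interval here are illustrative, not part of the branch):

```python
import time

def wait_until(attempt, timeout=5 * 60, interval=1,
               clock=time.time, sleep=time.sleep):
    """Keep calling `attempt` until it succeeds or `timeout` seconds pass.

    Returns attempt()'s result on success, None on timeout. socket.error
    is an alias of OSError on modern Python, so catching OSError covers
    the connection failures _wait_for_network handles.
    """
    start = clock()
    while True:
        try:
            return attempt()
        except OSError:
            if clock() - start > timeout:
                return None
            sleep(interval)

calls = []
def flaky():
    # Fail twice, then succeed, simulating the network coming up.
    calls.append(1)
    if len(calls) < 3:
        raise OSError("not yet")
    return "up"

assert wait_until(flaky, timeout=10, interval=0) == "up"
assert len(calls) == 3
```

Injecting `clock` and `sleep` also makes the loop testable without the mocker gymnastics needed for the module-level `time.time`/`time.sleep` calls, as seen in `test_waiting_times_out` below.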
1049+def is_cloud_managed(fetch=fetch):
1050+ """
1051+ Return C{True} if the machine has been started by Landscape, i.e. if we can
1052+ find the expected data inside the EC2 user-data field.
1053+ """
1054+ _wait_for_network()
1055+ try:
1056+ raw_user_data = fetch(EC2_API + "/user-data",
1057+ connect_timeout=5)
1058+ launch_index = fetch(EC2_API + "/meta-data/ami-launch-index",
1059+ connect_timeout=5)
1060+ except FetchError:
1061+ return False
1062+ instance_data = _extract_ec2_instance_data(raw_user_data, int(launch_index))
1063+ return instance_data is not None
1064
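The validation performed by `_extract_ec2_instance_data` above can be exercised without the `bpickle` decoding step. This sketch operates on an already-decoded dict (a hypothetical stand-in, not the function from the branch, which takes the raw user-data string):

```python
def extract_instance_data(user_data, launch_index):
    """Mirror of _extract_ec2_instance_data's checks on a decoded dict."""
    if not isinstance(user_data, dict):
        return None
    # All three keys must be present for Landscape-managed instances.
    for key in ("otps", "exchange-url", "ping-url"):
        if key not in user_data:
            return None
    # There must be an OTP for this instance's launch index.
    if len(user_data["otps"]) <= launch_index:
        return None
    return {"otp": user_data["otps"][launch_index],
            "exchange-url": user_data["exchange-url"],
            "ping-url": user_data["ping-url"]}

data = {"otps": ["otp0", "otp1"],
        "exchange-url": "http://exchange",
        "ping-url": "http://ping"}

assert extract_instance_data(data, 1)["otp"] == "otp1"
assert extract_instance_data(data, 5) is None   # no OTP for that index
assert extract_instance_data("garbage", 0) is None
```

Moving the real function to module level, as this branch does, is what lets `is_cloud_managed` reuse it outside `RegistrationHandler`.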
1065=== modified file 'landscape/broker/store.py'
1066--- landscape/broker/store.py 2009-03-18 20:42:05 +0000
1067+++ landscape/broker/store.py 2009-09-21 16:05:18 +0000
1068@@ -6,6 +6,7 @@
1069 import os
1070
1071 from landscape.lib import bpickle
1072+from landscape.lib.persist import Persist
1073 from landscape.lib.monitor import Monitor
1074 from landscape import API
1075
1076@@ -22,7 +23,7 @@
1077 that reason, let's review the terminology here.
1078
1079 Assume we have 10 messages in the store, which we label by
1080- the following uppercase letters:
1081+ the following uppercase letters::
1082
1083 A, B, C, D, E, F, G, H, I, J
1084 ^
1085@@ -41,6 +42,11 @@
1086
1087 def __init__(self, persist, directory, directory_size=1000,
1088 monitor_interval=60*60, get_time=time.time):
1089+ """
1090+ @param persist: a L{Persist} used to save state parameters like
1091+ the accepted message types, sequence, server uuid etc.
1092+ @param directory: base of the file system hierarchy
1093+ """
1094 self._get_time = get_time
1095 self._directory = directory
1096 self._directory_size = directory_size
1097@@ -52,7 +58,7 @@
1098 os.makedirs(message_dir)
1099
1100 def commit(self):
1101- """Save metadata to disk."""
1102+ """Persist metadata to disk."""
1103 self._original_persist.save()
1104
1105 def set_accepted_types(self, types):
1106@@ -67,34 +73,40 @@
1107 self._reprocess_holding()
1108
1109 def get_accepted_types(self):
1110+ """Get a list of all accepted message types."""
1111 return self._persist.get("accepted-types", ())
1112
1113 def accepts(self, type):
1114+ """Return bool indicating if C{type} is an accepted message type."""
1115 return type in self.get_accepted_types()
1116
1117 def get_sequence(self):
1118- """
1119- Get the sequence number of the message that the server expects us to
1120- send on the next exchange.
1121+ """Get the current sequence.
1122+
1123+ @return: The sequence number of the message that the server expects us to
1124+ send on the next exchange.
1125 """
1126 return self._persist.get("sequence", 0)
1127
1128 def set_sequence(self, number):
1129- """
1130+ """Set the current sequence.
1131+
1132 Set the sequence number of the message that the server expects us to
1133 send on the next exchange.
1134 """
1135 self._persist.set("sequence", number)
1136
1137 def get_server_sequence(self):
1138- """
1139- Get the sequence number of the message that we will ask the server to
1140- send to us on the next exchange.
1141+ """Get the current server sequence.
1142+
1143+ @return: the sequence number of the message that we will ask the server to
1144+ send to us on the next exchange.
1145 """
1146 return self._persist.get("server_sequence", 0)
1147
1148 def set_server_sequence(self, number):
1149- """
1150+ """Set the current server sequence.
1151+
1152 Set the sequence number of the message that we will ask the server to
1153 send to us on the next exchange.
1154 """
1155@@ -109,16 +121,19 @@
1156 self._persist.set("server_uuid", uuid)
1157
1158 def get_pending_offset(self):
1159+ """Get the current pending offset."""
1160 return self._persist.get("pending_offset", 0)
1161
1162 def set_pending_offset(self, val):
1163- """
1164+ """Set the current pending offset.
1165+
1166 Set the offset into the message pool to consider assigned to the
1167 current sequence number as returned by l{get_sequence}.
1168 """
1169 self._persist.set("pending_offset", val)
1170
1171 def add_pending_offset(self, val):
1172+ """Increment the current pending offset by C{val}."""
1173 self.set_pending_offset(self.get_pending_offset() + val)
1174
1175 def count_pending_messages(self):
1176@@ -186,7 +201,9 @@
1177
1178 def add(self, message):
1179 """Queue a message for delivery.
1180-
1181+
1182+ @param message: a C{dict} with a C{type} key and other keys conforming
1183+ to the L{Message} schema for that specific message type.
1184 @return: message_id, which is an identifier for the added message.
1185 """
1186 assert "type" in message
1187
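The store docstrings above distinguish the client sequence from the pending offset. A toy in-memory sketch (not the real `MessageStore`, which persists this state) of how the two advance together after a successful exchange:

```python
class SequenceState(object):
    """Illustrative bookkeeping mirroring MessageStore's counters."""

    def __init__(self):
        self.sequence = 0        # next sequence the server expects from us
        self.pending_offset = 0  # messages already covered by `sequence`

    def delivered(self, batch_size):
        """Record a successful delivery of `batch_size` pending messages."""
        self.sequence += batch_size
        self.pending_offset += batch_size

state = SequenceState()
state.delivered(3)   # server acknowledged messages A, B, C
assert (state.sequence, state.pending_offset) == (3, 3)

state.delivered(2)   # next exchange delivered D, E
assert (state.sequence, state.pending_offset) == (5, 5)
```

In the real store the pending offset is what lets already-acknowledged messages be garbage-collected from the on-disk pool.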
1188=== modified file 'landscape/broker/tests/test_registration.py'
1189--- landscape/broker/tests/test_registration.py 2009-03-18 20:42:05 +0000
1190+++ landscape/broker/tests/test_registration.py 2009-09-21 16:05:18 +0000
1191@@ -1,3 +1,4 @@
1192+import unittest
1193 import logging
1194 import pycurl
1195 import socket
1196@@ -5,12 +6,13 @@
1197 from twisted.internet.defer import succeed, fail
1198
1199 from landscape.broker.registration import (
1200- InvalidCredentialsError, RegistrationHandler)
1201+ InvalidCredentialsError, RegistrationHandler, is_cloud_managed, EC2_HOST,
1202+ EC2_API)
1203
1204 from landscape.broker.deployment import BrokerConfiguration
1205 from landscape.tests.helpers import LandscapeTest, ExchangeHelper
1206 from landscape.lib.bpickle import dumps
1207-from landscape.lib.fetch import HTTPCodeError
1208+from landscape.lib.fetch import HTTPCodeError, FetchError
1209
1210
1211 class RegistrationTest(LandscapeTest):
1212@@ -23,9 +25,9 @@
1213 self.identity = self.broker_service.identity
1214 self.handler = self.broker_service.registration
1215 logging.getLogger().setLevel(logging.INFO)
1216- self.hostname = "ooga"
1217- self.addCleanup(setattr, socket, "gethostname", socket.gethostname)
1218- socket.gethostname = lambda: self.hostname
1219+ self.hostname = "ooga.local"
1220+ self.addCleanup(setattr, socket, "getfqdn", socket.getfqdn)
1221+ socket.getfqdn = lambda: self.hostname
1222
1223 def check_persist_property(self, attr, persist_name):
1224 value = "VALUE"
1225@@ -141,7 +143,7 @@
1226 "computer_title": "Computer Title",
1227 "account_name": "account_name",
1228 "registration_password": None,
1229- "hostname": "ooga"}
1230+ "hostname": "ooga.local"}
1231 ])
1232 self.assertEquals(self.logfile.getvalue().strip(),
1233 "INFO: Queueing message to register with account "
1234@@ -159,7 +161,7 @@
1235 "computer_title": "Computer Title",
1236 "account_name": "account_name",
1237 "registration_password": "SEKRET",
1238- "hostname": "ooga"}
1239+ "hostname": "ooga.local"}
1240 ])
1241 self.assertEquals(self.logfile.getvalue().strip(),
1242 "INFO: Queueing message to register with account "
1243@@ -341,7 +343,7 @@
1244 "computer_title": "Computer Title",
1245 "account_name": "account_name",
1246 "registration_password": "SEKRET",
1247- "hostname": socket.gethostname()}
1248+ "hostname": socket.getfqdn()}
1249 ])
1250
1251 def get_registration_handler_for_cloud(
1252@@ -406,7 +408,7 @@
1253 """
1254 message = dict(type="register-cloud-vm",
1255 otp="otp1",
1256- hostname="ooga",
1257+ hostname="ooga.local",
1258 local_hostname="ooga.local",
1259 public_hostname="ooga.amazon.com",
1260 instance_key=u"key1",
1261@@ -641,7 +643,7 @@
1262 "computer_title": u"whatever",
1263 "account_name": u"onward",
1264 "registration_password": u"password",
1265- "hostname": socket.gethostname()}])
1266+ "hostname": socket.getfqdn()}])
1267
1268 def test_should_register_in_cloud(self):
1269 """
1270@@ -729,3 +731,151 @@
1271 config.computer_title = None
1272 self.broker_service.identity.secure_id = None
1273 self.assertFalse(handler.should_register())
1274+
1275+
1276+class IsCloudManagedTests(LandscapeTest):
1277+
1278+ def setUp(self):
1279+ super(IsCloudManagedTests, self).setUp()
1280+ self.urls = []
1281+ self.responses = []
1282+
1283+ def fake_fetch(self, url, connect_timeout=None):
1284+ self.urls.append((url, connect_timeout))
1285+ return self.responses.pop(0)
1286+
1287+ def mock_socket(self):
1288+ """
1289+ Mock out the socket calls is_cloud_managed uses to wait for the network.
1290+ """
1291+ # Mock the socket.connect call that it also does
1292+ socket_class = self.mocker.replace("socket.socket", passthrough=False)
1293+ socket = socket_class()
1294+ socket.connect((EC2_HOST, 80))
1295+ socket.close()
1296+
1297+
1298+ def test_is_managed(self):
1299+ """
1300+ L{is_cloud_managed} returns True if the EC2 user-data contains Landscape
1301+ instance information. It fetches the EC2 data with low timeouts.
1302+ """
1303+ user_data = {"otps": ["otp1"], "exchange-url": "http://exchange",
1304+ "ping-url": "http://ping"}
1305+ self.responses = [dumps(user_data), "0"]
1306+
1307+ self.mock_socket()
1308+ self.mocker.replay()
1309+
1310+ self.assertTrue(is_cloud_managed(self.fake_fetch))
1311+ self.assertEquals(
1312+ self.urls,
1313+ [(EC2_API + "/user-data", 5),
1314+ (EC2_API + "/meta-data/ami-launch-index", 5)])
1315+
1316+ def test_is_managed_index(self):
1317+ user_data = {"otps": ["otp1", "otp2"],
1318+ "exchange-url": "http://exchange",
1319+ "ping-url": "http://ping"}
1320+ self.responses = [dumps(user_data), "1"]
1321+ self.mock_socket()
1322+ self.mocker.replay()
1323+ self.assertTrue(is_cloud_managed(self.fake_fetch))
1324+
1325+ def test_is_managed_wrong_index(self):
1326+ user_data = {"otps": ["otp1"], "exchange-url": "http://exchange",
1327+ "ping-url": "http://ping"}
1328+ self.responses = [dumps(user_data), "1"]
1329+ self.mock_socket()
1330+ self.mocker.replay()
1331+ self.assertFalse(is_cloud_managed(self.fake_fetch))
1332+
1333+ def test_is_managed_exchange_url(self):
1334+ user_data = {"otps": ["otp1"], "ping-url": "http://ping"}
1335+ self.responses = [dumps(user_data), "0"]
1336+ self.mock_socket()
1337+ self.mocker.replay()
1338+ self.assertFalse(is_cloud_managed(self.fake_fetch))
1339+
1340+ def test_is_managed_ping_url(self):
1341+ user_data = {"otps": ["otp1"], "exchange-url": "http://exchange"}
1342+ self.responses = [dumps(user_data), "0"]
1343+ self.mock_socket()
1344+ self.mocker.replay()
1345+ self.assertFalse(is_cloud_managed(self.fake_fetch))
1346+
1347+ def test_is_managed_bpickle(self):
1348+ self.responses = ["some other user data", "0"]
1349+ self.mock_socket()
1350+ self.mocker.replay()
1351+ self.assertFalse(is_cloud_managed(self.fake_fetch))
1352+
1353+ def test_is_managed_no_data(self):
1354+ self.responses = ["", "0"]
1355+ self.mock_socket()
1356+ self.mocker.replay()
1357+ self.assertFalse(is_cloud_managed(self.fake_fetch))
1358+
1359+ def test_is_managed_fetch_not_found(self):
1360+ def fake_fetch(url, connect_timeout=None):
1361+ raise HTTPCodeError(404, "ohnoes")
1362+ self.mock_socket()
1363+ self.mocker.replay()
1364+ self.assertFalse(is_cloud_managed(fake_fetch))
1365+
1366+ def test_is_managed_fetch_error(self):
1367+ def fake_fetch(url, connect_timeout=None):
1368+ raise FetchError(7, "couldn't connect to host")
1369+ self.mock_socket()
1370+ self.mocker.replay()
1371+ self.assertFalse(is_cloud_managed(fake_fetch))
1372+
1373+ def test_waits_for_network(self):
1374+ """
1375+ is_cloud_managed will wait until the network is up before trying to fetch
1376+ the EC2 user data.
1377+ """
1378+ user_data = {"otps": ["otp1"], "exchange-url": "http://exchange",
1379+ "ping-url": "http://ping"}
1380+ self.responses = [dumps(user_data), "0"]
1381+
1382+ self.mocker.order()
1383+ time_sleep = self.mocker.replace("time.sleep", passthrough=False)
1384+ socket_class = self.mocker.replace("socket.socket", passthrough=False)
1385+ socket_obj = socket_class()
1386+ socket_obj.connect((EC2_HOST, 80))
1387+ self.mocker.throw(socket.error("woops"))
1388+ time_sleep(1)
1389+ socket_obj = socket_class()
1390+ socket_obj.connect((EC2_HOST, 80))
1391+ self.mocker.result(None)
1392+ socket_obj.close()
1393+ self.mocker.replay()
1394+ self.assertTrue(is_cloud_managed(self.fake_fetch))
1395+
1396+ def test_waiting_times_out(self):
1397+ """
1398+ We'll only wait five minutes for the network to come up.
1399+ """
1400+ def fake_fetch(url, connect_timeout=None):
1401+ raise FetchError(7, "couldn't connect to host")
1402+
1403+ self.mocker.order()
1404+ time_sleep = self.mocker.replace("time.sleep", passthrough=False)
1405+ time_time = self.mocker.replace("time.time", passthrough=False)
1406+ time_time()
1407+ self.mocker.result(100)
1408+ socket_class = self.mocker.replace("socket.socket", passthrough=False)
1409+ socket_obj = socket_class()
1410+ socket_obj.connect((EC2_HOST, 80))
1411+ self.mocker.throw(socket.error("woops"))
1412+ time_sleep(1)
1413+ time_time()
1414+ self.mocker.result(401)
1415+ self.mocker.replay()
1416+ # Mocking time.time is dangerous, because the test harness calls it. So
1417+ # we explicitly reset mocker before returning from the test.
1418+ try:
1419+ self.assertFalse(is_cloud_managed(fake_fetch))
1420+ finally:
1421+ self.mocker.reset()
1422
1423=== modified file 'landscape/broker/tests/test_transport.py'
1424--- landscape/broker/tests/test_transport.py 2009-03-18 20:42:05 +0000
1425+++ landscape/broker/tests/test_transport.py 2009-09-21 16:05:18 +0000
1426@@ -1,9 +1,8 @@
1427 import os
1428
1429-from pycurl import error as PyCurlError
1430-
1431 from landscape import VERSION
1432 from landscape.broker.transport import HTTPTransport
1433+from landscape.lib.fetch import PyCurlError
1434 from landscape.lib import bpickle
1435
1436 from landscape.tests.helpers import LandscapeTest, LogKeeperHelper
1437
1438=== modified file 'landscape/broker/transport.py'
1439--- landscape/broker/transport.py 2009-03-18 20:42:05 +0000
1440+++ landscape/broker/transport.py 2009-09-21 16:05:18 +0000
1441@@ -15,13 +15,19 @@
1442 """Transport makes a request to exchange message data over HTTP."""
1443
1444 def __init__(self, url, pubkey=None):
1445+ """
1446+ @param url: URL of the remote Landscape server message system.
1447+ @param pubkey: SSL public key used for secure communication.
1448+ """
1449 self._url = url
1450 self._pubkey = pubkey
1451
1452 def get_url(self):
1453+ """Get the URL of the remote message system."""
1454 return self._url
1455
1456 def set_url(self, url):
1457+ """Set the URL of the remote message system."""
1458 self._url = url
1459
1460 def _curl(self, payload, computer_id, message_api):
1461@@ -37,7 +43,16 @@
1462 def exchange(self, payload, computer_id=None, message_api=API):
1463 """Exchange message data with the server.
1464
1465- THREAD SAFE (HOPEFULLY)
1466+ @param payload: The object to send; it must be L{bpickle}-compatible.
1467+ @param computer_id: The computer ID to send the message as (see
1468+ also L{Identity}).
1469+
1470+ @rtype: C{dict}
1471+ @return: The server's response to the sent message, or C{None} in
1472+ case of error.
1473+
1474+ @note: This code is thread safe (HOPEFULLY).
1475+
1476 """
1477 spayload = bpickle.dumps(payload)
1478 try:
1479
1480=== modified file 'landscape/configuration.py'
1481--- landscape/configuration.py 2009-03-18 20:42:05 +0000
1482+++ landscape/configuration.py 2009-09-21 16:05:18 +0000
1483@@ -12,15 +12,13 @@
1484 from ConfigParser import ConfigParser, Error as ConfigParserError
1485 from StringIO import StringIO
1486
1487-import pycurl
1488-
1489 from dbus.exceptions import DBusException
1490
1491 from landscape.sysvconfig import SysVConfig, ProcessError
1492 from landscape.lib.dbus_util import (
1493 get_bus, NoReplyError, ServiceUnknownError, SecurityError)
1494 from landscape.lib.twisted_util import gather_results
1495-from landscape.lib.fetch import fetch, HTTPCodeError
1496+from landscape.lib.fetch import fetch, FetchError
1497
1498 from landscape.broker.registration import InvalidCredentialsError
1499 from landscape.broker.deployment import BrokerConfiguration
1500@@ -499,9 +497,7 @@
1501 error_message = None
1502 try:
1503 content = fetch(url)
1504- except pycurl.error, error:
1505- error_message = error.args[1]
1506- except HTTPCodeError, error:
1507+ except FetchError, error:
1508 error_message = str(error)
1509 if error_message is not None:
1510 raise ImportOptionError(
1511
1512=== modified file 'landscape/deployment.py'
1513--- landscape/deployment.py 2009-03-18 20:42:05 +0000
1514+++ landscape/deployment.py 2009-09-21 16:05:18 +0000
1515@@ -44,12 +44,12 @@
1516 class BaseConfiguration(object):
1517 """Base class for configuration implementations.
1518
1519- @var required_options: Optionally, a sequence of key names to require when
1520+ @cvar required_options: Optionally, a sequence of key names to require when
1521 reading or writing a configuration.
1522- @var unsaved_options: Optionally, a sequence of key names to never write
1523+ @cvar unsaved_options: Optionally, a sequence of key names to never write
1524 to the configuration file. This is useful when you want to provide
1525 command-line options that should never end up in a configuration file.
1526- @var default_config_filenames: A sequence of filenames to check when
1527+ @cvar default_config_filenames: A sequence of filenames to check when
1528 reading or writing a configuration.
1529 """
1530
1531@@ -63,6 +63,10 @@
1532 config_section = "client"
1533
1534 def __init__(self):
1535+ """Default configuration.
1536+
1537+ Default values for supported options are set as in L{make_parser}.
1538+ """
1539 self._set_options = {}
1540 self._command_line_args = []
1541 self._command_line_options = {}
1542@@ -78,10 +82,10 @@
1543 """Find and return the value of the given configuration parameter.
1544
1545 The following sources will be searched:
1546- * The attributes that were explicitly set on this object,
1547- * The parameters specified on the command line,
1548- * The parameters specified in the configuration file, and
1549- * The defaults.
1550+ - The attributes that were explicitly set on this object,
1551+ - The parameters specified on the command line,
1552+ - The parameters specified in the configuration file, and
1553+ - The defaults.
1554
1555 If no values are found and the parameter does exist as a possible
1556 parameter, C{None} is returned.
1557@@ -107,6 +111,7 @@
1558 return value
1559
1560 def get(self, name, default=None):
1561+ """Return the value of the C{name} option or C{default}."""
1562 try:
1563 return self.__getattr__(name)
1564 except AttributeError:
1565@@ -124,6 +129,10 @@
1566 self._set_options[name] = value
1567
1568 def reload(self):
1569+ """Reload options using the configured command line arguments.
1570+
1571+ @see: L{load_command_line}
1572+ """
1573 self.load(self._command_line_args)
1574
1575 def load(self, args, accept_nonexistent_config=False):
1576@@ -189,15 +198,12 @@
1577
1578 Options are considered in the following precedence:
1579
1580- 1. Manually set options (config.option = value)
1581- 2. Options passed in the command line
1582- 3. Previously existent options in the configuration file
1583-
1584- The filename picked for saving configuration options is:
1585-
1586- 1. self.config, if defined
1587- 2. The last loaded configuration file, if any
1588- 3. The first filename in self.default_config_filenames
1589+ 1. Manually set options (C{config.option = value})
1590+ 2. Options passed in the command line
1591+ 3. Previously existent options in the configuration file
1592+
1593+ The filename picked for saving configuration options is the one
1594+ returned by L{get_config_filename}.
1595 """
1596 # The filename we'll write to
1597 filename = self.get_config_filename()
1598@@ -223,9 +229,12 @@
1599 config_file.close()
1600
1601 def make_parser(self):
1602- """
1603- Return an L{OptionParser} preset with options that all
1604- landscape-related programs accept.
1605+ """Parser factory for supported options.
1606+
1607+ @return: An L{OptionParser} preset with options that all
1608+ landscape-related programs accept. These include
1609+ - C{config} (C{None})
1610+ - C{bus} (C{system})
1611 """
1612 parser = OptionParser(version=VERSION)
1613 parser.add_option("-c", "--config", metavar="FILE",
1614@@ -237,6 +246,13 @@
1615 return parser
1616
1617 def get_config_filename(self):
1618+ """Pick the proper configuration file.
1619+
1620+ The picked filename is:
1621+ 1. C{self.config}, if defined
1622+ 2. The last loaded configuration file, if any
1623+ 3. The first filename in C{self.default_config_filenames}
1624+ """
1625 if self.config:
1626 return self.config
1627 if self._config_filename:
1628@@ -249,6 +265,10 @@
1629 return None
1630
1631 def get_command_line_options(self):
1632+ """Get currently loaded command line options.
1633+
1634+ @see: L{load_command_line}
1635+ """
1636 return self._command_line_options
1637
1638
1639@@ -263,9 +283,15 @@
1640 return os.path.join(self.data_path, "hash.db")
1641
1642 def make_parser(self):
1643- """
1644- Return an L{OptionParser} preset with options that all
1645- landscape-related programs accept.
1646+ """Parser factory for supported options.
1647+
1648+ @return: An L{OptionParser} preset for all options
1649+ from L{BaseConfiguration.make_parser} plus:
1650+ - C{data_path} (C{"/var/lib/landscape/client/"})
1651+ - C{quiet} (C{False})
1652+ - C{log_dir} (C{"/var/log/landscape"})
1653+ - C{log_level} (C{"info"})
1654+ - C{ignore_sigint} (C{False})
1655 """
1656 parser = super(Configuration, self).make_parser()
1657 parser.add_option("-d", "--data-path", metavar="PATH",
1658@@ -286,9 +312,10 @@
1659
1660
1661 def get_versioned_persist(service):
1662- """
1663- Load a Persist database for the given service and upgrade or mark
1664- as current, as necessary.
1665+ """Get a L{Persist} database with upgrade rules applied.
1666+
1667+ Load a L{Persist} database for the given C{service} and upgrade or
1668+ mark as current, as necessary.
1669 """
1670 persist = Persist(filename=service.persist_filename)
1671 upgrade_manager = UPGRADE_MANAGERS[service.service_name]
1672@@ -301,12 +328,12 @@
1673
1674
1675 class LandscapeService(Service, object):
1676- """
1677- A utility superclass for defining Landscape services.
1678+ """Utility superclass for defining Landscape services.
1679
1680 This sets up the reactor, bpickle/dbus integration, a Persist object, and
1681 connects to the bus when started.
1682
1683+ @ivar reactor: a L{TwistedReactor} object.
1684 @cvar service_name: The lower-case name of the service. This is used to
1685 generate the bpickle filename.
1686 """
1687@@ -322,6 +349,11 @@
1688 signal.signal(signal.SIGUSR1, lambda signal, frame: rotate_logs())
1689
1690 def startService(self):
1691+ """Extend L{twisted.application.service.IService.startService}.
1692+
1693+ Create a new DBus connection (normally using a C{SystemBus}) and
1694+ save it in the public L{self.bus} instance variable.
1695+ """
1696 Service.startService(self)
1697 self.bus = get_bus(self.config.bus)
1698 info("%s started on '%s' bus with config %s" % (
1699@@ -353,9 +385,15 @@
1700 def run_landscape_service(configuration_class, service_class, args, bus_name):
1701 """Run a Landscape service.
1702
1703- @param configuration_class: A subclass of L{Configuration} for processing
1704- the C{args} and config file.
1705- @param service_class: A subclass of L{LandscapeService} to create and start.
1706+ The function will instantiate the given L{LandscapeService} subclass
1707+ and attach the resulting service object to a Twisted C{Application}.
1708+
1709+ After that it will start the Twisted L{Application} and call the
1710+ L{TwistedReactor.run} method of the L{LandscapeService}'s reactor.
1711+
1712+ @param configuration_class: The service-specific subclass of L{Configuration} used
1713+ to parse C{args} and build the C{service_class} object.
1714+ @param service_class: The L{LandscapeService} subclass to create and start.
1715 @param args: Command line arguments.
1716 @param bus_name: A bus name used to verify if the service is already
1717 running.
1718
1719=== added file 'landscape/lib/command.py'
1720--- landscape/lib/command.py 1970-01-01 00:00:00 +0000
1721+++ landscape/lib/command.py 2009-09-21 16:05:18 +0000
1722@@ -0,0 +1,38 @@
1723+"""Shell command execution."""
1724+import commands
1725+
1726+class CommandError(Exception):
1727+ """
1728+ Raised by L{run_command} in case of non-0 exit code.
1729+
1730+ @ivar command: The shell command that failed.
1731+ @ivar exit_status: Its non-zero exit status.
1732+ @ivar output: The command's output.
1733+ """
1734+ def __init__(self, command, exit_status, output):
1735+ self.command = command
1736+ self.exit_status = exit_status
1737+ self.output = output
1738+
1739+ def __str__(self):
1740+ return "'%s' exited with status %d (%s)" % (
1741+ self.command, self.exit_status, self.output)
1742+
1743+ def __repr__(self):
1744+ return "<CommandError command=<%s> exit_status=%d output=<%s>>" % (
1745+ self.command, self.exit_status, self.output)
1746+
1747+
1748+def run_command(command):
1749+ """
1750+ Execute a command in a shell and return the command's output.
1751+
1752+ If the command's exit status is not 0 a L{CommandError} exception
1753+ is raised.
1754+ """
1755+ exit_status, output = commands.getstatusoutput(command)
1756+ # shift down 8 bits to get shell-like exit codes
1757+ exit_status = exit_status >> 8
1758+ if exit_status != 0:
1759+ raise CommandError(command, exit_status, output)
1760+ return output
1761
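The helper above shifts the raw status from commands.getstatusoutput down 8 bits because the wait()-style status encodes the exit code in its high byte. A rough modern-Python sketch of the same behaviour (subprocess stands in for the long-deprecated commands module; this mirrors the patch but is not the shipped code):

```python
import subprocess


class CommandError(Exception):
    """Raised when a shell command exits with a non-zero status."""

    def __init__(self, command, exit_status, output):
        self.command = command
        self.exit_status = exit_status
        self.output = output

    def __str__(self):
        return "'%s' exited with status %d (%s)" % (
            self.command, self.exit_status, self.output)


def run_command(command):
    """Run a command in a shell, returning its output.

    Raises CommandError on a non-zero exit status, like the helper above.
    """
    result = subprocess.run(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    # getstatusoutput strips the trailing newline; do the same here.
    output = result.stdout.rstrip("\n")
    if result.returncode != 0:
        raise CommandError(command, result.returncode, output)
    return output
```

subprocess.run already returns the decoded exit code, so no bit-shifting is needed in this variant.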
1762=== modified file 'landscape/lib/fetch.py'
1763--- landscape/lib/fetch.py 2009-04-13 14:33:31 +0000
1764+++ landscape/lib/fetch.py 2009-09-21 16:05:18 +0000
1765@@ -6,8 +6,10 @@
1766
1767 from twisted.internet.threads import deferToThread
1768
1769+class FetchError(Exception):
1770+ pass
1771
1772-class HTTPCodeError(Exception):
1773+class HTTPCodeError(FetchError):
1774
1775 def __init__(self, http_code, body):
1776 self.http_code = http_code
1777@@ -20,7 +22,20 @@
1778 return "<HTTPCodeError http_code=%d>" % self.http_code
1779
1780
1781-def fetch(url, post=False, data="", headers={}, cainfo=None, curl=None):
1782+class PyCurlError(FetchError):
1783+ def __init__(self, error_code, message):
1784+ self.error_code = error_code
1785+ self.message = message
1786+
1787+ def __str__(self):
1788+ return "Error %d: %s" % (self.error_code, self.message)
1789+
1790+ def __repr__(self):
1791+ return "<PyCurlError args=(%d, '%s')>" % (self.error_code,
1792+ self.message)
1793+
1794+def fetch(url, post=False, data="", headers={}, cainfo=None, curl=None,
1795+ connect_timeout=30, total_timeout=600):
1796 """Retrieve a URL and return the content.
1797
1798 @param url: The url to be fetched.
1799@@ -53,12 +68,16 @@
1800 curl.setopt(pycurl.URL, url)
1801 curl.setopt(pycurl.FOLLOWLOCATION, True)
1802 curl.setopt(pycurl.MAXREDIRS, 5)
1803- curl.setopt(pycurl.CONNECTTIMEOUT, 30)
1804- curl.setopt(pycurl.TIMEOUT, 600)
1805+ curl.setopt(pycurl.CONNECTTIMEOUT, connect_timeout)
1806+ curl.setopt(pycurl.LOW_SPEED_LIMIT, 1)
1807+ curl.setopt(pycurl.LOW_SPEED_TIME, total_timeout)
1808 curl.setopt(pycurl.NOSIGNAL, 1)
1809 curl.setopt(pycurl.WRITEFUNCTION, input.write)
1810
1811- curl.perform()
1812+ try:
1813+ curl.perform()
1814+ except pycurl.error, e:
1815+ raise PyCurlError(e.args[0], e.args[1])
1816
1817 body = input.getvalue()
1818
1819
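Two things change in fetch() above: the absolute pycurl.TIMEOUT becomes a stall detector (curl aborts when the transfer stays below LOW_SPEED_LIMIT bytes/second for LOW_SPEED_TIME seconds, so long-but-active downloads are no longer killed), and pycurl.error is wrapped in the richer PyCurlError. The wrapping can be sketched without pycurl installed; StubPycurlError is a hypothetical stand-in carrying the same (error_code, message) args tuple:

```python
class FetchError(Exception):
    pass


class PyCurlError(FetchError):
    """Mirrors the exception class added in the patch above."""

    def __init__(self, error_code, message):
        self.error_code = error_code
        self.message = message

    def __str__(self):
        return "Error %d: %s" % (self.error_code, self.message)


class StubPycurlError(Exception):
    """Stand-in for pycurl.error, whose args are (error_code, message)."""


def perform_wrapped(perform):
    """Call a curl-style perform(), translating low-level errors."""
    try:
        perform()
    except StubPycurlError as e:
        # Same translation as the try/except added around curl.perform().
        raise PyCurlError(e.args[0], e.args[1])


def failing_perform():
    # 60 is used purely as sample data for a curl-style error code.
    raise StubPycurlError(60, "ssl certificate problem")
```

Callers can then catch the single FetchError base class instead of juggling pycurl-specific exceptions.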
1820=== modified file 'landscape/lib/persist.py'
1821--- landscape/lib/persist.py 2008-09-08 16:35:57 +0000
1822+++ landscape/lib/persist.py 2009-09-21 16:05:18 +0000
1823@@ -40,22 +40,23 @@
1824
1825
1826 class Persist(object):
1827- """Persistence handler.
1828+
1829+ """Persist a hierarchical database of key=>value pairs.
1830
1831 There are three different kinds of option maps, differing in the
1832 persistence and priority with which maps are queried.
1833
1834- hard - Options are persistent.
1835- soft - Options are not persistent, and have a higher priority
1836+ - hard - Options are persistent.
1837+ - soft - Options are not persistent, and have a higher priority
1838 than persistent options.
1839- weak - Options are not persistent, and have a lower priority
1840+ - weak - Options are not persistent, and have a lower priority
1841 than persistent options.
1842
1843 @ivar filename: The name of the file where persist data is saved
1844- or None if not filename is available.
1845+ or None if no filename is available.
1846+
1847 """
1848
1849-
1850 def __init__(self, backend=None, filename=None):
1851 """
1852 @param backend: The backend to use. If none is specified,
1853@@ -91,13 +92,19 @@
1854 modified = property(_get_modified)
1855
1856 def reset_modified(self):
1857+ """Set the database status as non-modified."""
1858 self._modified = False
1859
1860 def assert_writable(self):
1861+ """Assert that the object is writable.
1862+
1863+ @raise: L{PersistReadOnlyError} if the object is read-only.
1864+ """
1865 if self._readonly:
1866 raise PersistReadOnlyError("Configuration is in readonly mode.")
1867
1868 def load(self, filepath):
1869+ """Load a persisted database."""
1870 filepath = os.path.expanduser(filepath)
1871 if not os.path.isfile(filepath):
1872 raise PersistError("File not found: %s" % filepath)
1873@@ -313,12 +320,31 @@
1874 return result
1875
1876 def root_at(self, path):
1877+ """
1878+ Rebase the database hierarchy.
1879+
1880+ @return: A L{RootedPersist} using this L{Persist} as parent.
1881+ """
1882 return RootedPersist(self, path)
1883
1884
1885 class RootedPersist(object):
1886+ """Root a L{Persist}'s tree at a particular branch.
1887+
1888+ This class shares the same interface as L{Persist} and provides a shortcut
1889+ to access the nodes of a particular branch in a L{Persist}'s tree.
1890+
1891+ The chosen branch will be viewed as the root of the tree of the
1892+ L{RootedPersist} and all operations will be forwarded to the parent
1893+ L{Persist} as appropriate.
1894+ """
1895
1896 def __init__(self, parent, root):
1897+ """
1898+ @param parent: the parent L{Persist}.
1899+ @param root: a branch of the parent L{Persist}'s tree, that
1900+ will be used as root of this L{RootedPersist}.
1901+ """
1902 self.parent = parent
1903 if type(root) is str:
1904 self.root = path_string_to_tuple(root)
1905
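The hard/soft/weak distinction documented above amounts to a three-layer lookup: soft runtime overrides first, then the persisted hard map, then weak defaults. A toy model of that priority (not the real Persist, which stores nested paths and backends):

```python
class MiniPersist:
    """Toy three-layer option store: soft > hard > weak."""

    def __init__(self):
        self.soft = {}   # runtime overrides, never saved, highest priority
        self.hard = {}   # persisted options
        self.weak = {}   # defaults, never saved, lowest priority

    def get(self, key, default=None):
        # Query the maps in decreasing priority order.
        for layer in (self.soft, self.hard, self.weak):
            if key in layer:
                return layer[key]
        return default
```

Setting a weak value never shadows a persisted one, which is what lets plugins ship defaults without clobbering saved state.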
1906=== added file 'landscape/lib/tests/test_command.py'
1907--- landscape/lib/tests/test_command.py 1970-01-01 00:00:00 +0000
1908+++ landscape/lib/tests/test_command.py 2009-09-21 16:05:18 +0000
1909@@ -0,0 +1,32 @@
1910+from landscape.tests.helpers import LandscapeTest
1911+
1912+from landscape.lib.command import run_command, CommandError
1913+
1914+
1915+class CommandTest(LandscapeTest):
1916+
1917+ def setUp(self):
1918+ super(CommandTest, self).setUp()
1919+
1920+ def test_basic(self):
1921+ self.assertEquals(run_command("echo test"), "test")
1922+
1923+ def test_non_0_exit_status(self):
1924+ try:
1925+ run_command("false")
1926+ except CommandError, error:
1927+ self.assertEquals(error.command, "false")
1928+ self.assertEquals(error.output, "")
1929+ self.assertEquals(error.exit_status, 1)
1930+ else:
1931+ self.fail("CommandError not raised")
1932+
1933+ def test_error_str(self):
1934+ self.assertEquals(str(CommandError("test_command", 1, "test output")),
1935+ "'test_command' exited with status 1 "
1936+ "(test output)")
1937+
1938+ def test_error_repr(self):
1939+ self.assertEquals(repr(CommandError("test_command", 1, "test output")),
1940+ "<CommandError command=<test_command> "
1941+ "exit_status=1 output=<test output>>")
1942
1943=== modified file 'landscape/lib/tests/test_fetch.py'
1944--- landscape/lib/tests/test_fetch.py 2009-04-13 14:33:31 +0000
1945+++ landscape/lib/tests/test_fetch.py 2009-09-21 16:05:18 +0000
1946@@ -1,16 +1,17 @@
1947 import pycurl
1948
1949-from landscape.lib.fetch import fetch, fetch_async, HTTPCodeError
1950+from landscape.lib.fetch import fetch, fetch_async, HTTPCodeError, PyCurlError
1951 from landscape.tests.helpers import LandscapeTest
1952
1953
1954 class CurlStub(object):
1955
1956- def __init__(self, result=None, http_code=200):
1957+ def __init__(self, result=None, http_code=200, error=None):
1958 self.result = result
1959 self._http_code = http_code
1960 self.options = {}
1961 self.performed = False
1962+ self.error = error
1963
1964 def getinfo(self, what):
1965 if what == pycurl.HTTP_CODE:
1966@@ -23,6 +24,8 @@
1967 self.options[option] = value
1968
1969 def perform(self):
1970+ if self.error:
1971+ raise self.error
1972 if self.performed:
1973 raise AssertionError("Can't perform twice")
1974 self.options[pycurl.WRITEFUNCTION](self.result)
1975@@ -45,7 +48,8 @@
1976 pycurl.FOLLOWLOCATION: True,
1977 pycurl.MAXREDIRS: 5,
1978 pycurl.CONNECTTIMEOUT: 30,
1979- pycurl.TIMEOUT: 600,
1980+ pycurl.LOW_SPEED_LIMIT: 1,
1981+ pycurl.LOW_SPEED_TIME: 600,
1982 pycurl.NOSIGNAL: 1,
1983 pycurl.WRITEFUNCTION: Any()})
1984
1985@@ -58,7 +62,8 @@
1986 pycurl.FOLLOWLOCATION: True,
1987 pycurl.MAXREDIRS: 5,
1988 pycurl.CONNECTTIMEOUT: 30,
1989- pycurl.TIMEOUT: 600,
1990+ pycurl.LOW_SPEED_LIMIT: 1,
1991+ pycurl.LOW_SPEED_TIME: 600,
1992 pycurl.NOSIGNAL: 1,
1993 pycurl.WRITEFUNCTION: Any(),
1994 pycurl.POST: True})
1995@@ -73,7 +78,8 @@
1996 pycurl.FOLLOWLOCATION: True,
1997 pycurl.MAXREDIRS: 5,
1998 pycurl.CONNECTTIMEOUT: 30,
1999- pycurl.TIMEOUT: 600,
2000+ pycurl.LOW_SPEED_LIMIT: 1,
2001+ pycurl.LOW_SPEED_TIME: 600,
2002 pycurl.NOSIGNAL: 1,
2003 pycurl.WRITEFUNCTION: Any(),
2004 pycurl.POST: True,
2005@@ -89,7 +95,8 @@
2006 pycurl.FOLLOWLOCATION: True,
2007 pycurl.MAXREDIRS: 5,
2008 pycurl.CONNECTTIMEOUT: 30,
2009- pycurl.TIMEOUT: 600,
2010+ pycurl.LOW_SPEED_LIMIT: 1,
2011+ pycurl.LOW_SPEED_TIME: 600,
2012 pycurl.NOSIGNAL: 1,
2013 pycurl.WRITEFUNCTION: Any(),
2014 pycurl.CAINFO: "cainfo"})
2015@@ -110,11 +117,27 @@
2016 pycurl.FOLLOWLOCATION: True,
2017 pycurl.MAXREDIRS: 5,
2018 pycurl.CONNECTTIMEOUT: 30,
2019- pycurl.TIMEOUT: 600,
2020+ pycurl.LOW_SPEED_LIMIT: 1,
2021+ pycurl.LOW_SPEED_TIME: 600,
2022 pycurl.NOSIGNAL: 1,
2023 pycurl.WRITEFUNCTION: Any(),
2024 pycurl.HTTPHEADER: ["a: 1", "b: 2"]})
2025
2026+ def test_timeouts(self):
2027+ curl = CurlStub("result")
2028+ result = fetch("http://example.com", connect_timeout=5, total_timeout=30,
2029+ curl=curl)
2030+ self.assertEquals(result, "result")
2031+ self.assertEquals(curl.options,
2032+ {pycurl.URL: "http://example.com",
2033+ pycurl.FOLLOWLOCATION: True,
2034+ pycurl.MAXREDIRS: 5,
2035+ pycurl.CONNECTTIMEOUT: 5,
2036+ pycurl.LOW_SPEED_LIMIT: 1,
2037+ pycurl.LOW_SPEED_TIME: 30,
2038+ pycurl.NOSIGNAL: 1,
2039+ pycurl.WRITEFUNCTION: Any()})
2040+
2041 def test_non_200_result(self):
2042 curl = CurlStub("result", http_code=404)
2043 try:
2044@@ -125,14 +148,32 @@
2045 else:
2046 self.fail("HTTPCodeError not raised")
2047
2048- def test_error_str(self):
2049+ def test_http_error_str(self):
2050 self.assertEquals(str(HTTPCodeError(501, "")),
2051 "Server returned HTTP code 501")
2052
2053- def test_error_repr(self):
2054+ def test_http_error_repr(self):
2055 self.assertEquals(repr(HTTPCodeError(501, "")),
2056 "<HTTPCodeError http_code=501>")
2057
2058+ def test_pycurl_error(self):
2059+ curl = CurlStub(result=None, http_code=None,
2060+ error=pycurl.error(60, "pycurl error"))
2061+ try:
2062+ fetch("http://example.com", curl=curl)
2063+ except PyCurlError, error:
2064+ self.assertEquals(error.error_code, 60)
2065+ self.assertEquals(error.message, "pycurl error")
2066+ else:
2067+ self.fail("PyCurlError not raised")
2068+
2069+ def test_pycurl_error_str(self):
2070+ self.assertEquals(str(PyCurlError(60, "pycurl error")),
2071+ "Error 60: pycurl error")
2072+
2073+ def test_pycurl_error_repr(self):
2074+ self.assertEquals(repr(PyCurlError(60, "pycurl error")),
2075+ "<PyCurlError args=(60, 'pycurl error')>")
2076
2077 def test_create_curl(self):
2078 curls = []
2079@@ -151,7 +192,8 @@
2080 pycurl.FOLLOWLOCATION: True,
2081 pycurl.MAXREDIRS: 5,
2082 pycurl.CONNECTTIMEOUT: 30,
2083- pycurl.TIMEOUT: 600,
2084+ pycurl.LOW_SPEED_LIMIT: 1,
2085+ pycurl.LOW_SPEED_TIME: 600,
2086 pycurl.NOSIGNAL: 1,
2087 pycurl.WRITEFUNCTION: Any()})
2088 finally:
2089
2090=== modified file 'landscape/manager/deployment.py'
2091--- landscape/manager/deployment.py 2008-12-11 17:11:08 +0000
2092+++ landscape/manager/deployment.py 2009-09-21 16:05:18 +0000
2093@@ -84,11 +84,18 @@
2094 self.config, self.bus, store_name)
2095 self.dbus_service = ManagerDBusObject(self.bus, self.registry)
2096 DBusSignalToReactorTransmitter(self.bus, self.reactor)
2097- self.remote_broker.register_plugin(self.dbus_service.bus_name,
2098- self.dbus_service.object_path)
2099+
2100 for plugin in self.plugins:
2101 self.registry.add(plugin)
2102
2103+ def broker_started():
2104+ self.remote_broker.register_plugin(self.dbus_service.bus_name,
2105+ self.dbus_service.object_path)
2106+ self.registry.broker_started()
2107+
2108+ broker_started()
2109+ self.bus.add_signal_receiver(broker_started, "broker_started")
2110+
2111
2112 def run(args):
2113 run_landscape_service(ManagerConfiguration, ManagerService, args,
2114
2115=== modified file 'landscape/manager/scriptexecution.py'
2116--- landscape/manager/scriptexecution.py 2008-12-11 17:11:08 +0000
2117+++ landscape/manager/scriptexecution.py 2009-09-21 16:05:18 +0000
2118@@ -137,7 +137,9 @@
2119 def _respond(self, status, data, opid, result_code=None):
2120 message = {"type": "operation-result",
2121 "status": status,
2122- "result-text": data,
2123+ # Decode result-text, replacing bytes that are not
2124+ # valid utf-8
2125+ "result-text": data.decode("utf-8", "replace"),
2126 "operation-id": opid}
2127 if result_code:
2128 message["result-code"] = result_code
2129
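The decode("utf-8", "replace") added above is exactly what the binary-output test later in this diff exercises: bytes that are not valid UTF-8 become U+FFFD instead of raising UnicodeDecodeError. In Python 3 terms (the client itself is Python 2, where str is a byte string):

```python
# The ELF header bytes used by the test in this diff; b"\x95" is a lone
# continuation byte and cannot appear here in valid UTF-8.
data = b"\x7fELF\x01\x01\x01\x00\x00\x00\x95\x01"

try:
    data.decode("utf-8")
    strict_failed = False
except UnicodeDecodeError:
    # Strict decoding rejects the invalid byte.
    strict_failed = True

# "replace" substitutes U+FFFD for the bad byte and always succeeds.
text = data.decode("utf-8", "replace")
```

That guarantees the "result-text" field is always valid unicode, whatever a script writes to stdout.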
2130=== modified file 'landscape/manager/tests/test_deployment.py'
2131--- landscape/manager/tests/test_deployment.py 2008-12-11 17:11:08 +0000
2132+++ landscape/manager/tests/test_deployment.py 2009-09-21 16:05:18 +0000
2133@@ -1,5 +1,7 @@
2134 import os
2135
2136+from twisted.internet.defer import Deferred
2137+
2138 from landscape.tests.helpers import (
2139 LandscapeTest, LandscapeIsolatedTest, RemoteBrokerHelper)
2140 from landscape.manager.deployment import ManagerService, ManagerConfiguration
2141@@ -70,32 +72,63 @@
2142
2143 helpers = [RemoteBrokerHelper]
2144
2145+ def setUp(self):
2146+ super(DeploymentBusTests, self).setUp()
2147+ configuration = ManagerConfiguration()
2148+ self.path = self.make_dir()
2149+ configuration.load(["-d", self.path, "--bus", "session",
2150+ "--manager-plugins", "ProcessKiller"])
2151+ self.manager_service = ManagerService(configuration)
2152+ self.manager_service.startService()
2153+
2154 def test_dbus_reactor_transmitter_installed(self):
2155- configuration = ManagerConfiguration()
2156- configuration.load(["-d", self.make_dir(), "--bus", "session",
2157- "--manager-plugins", "ProcessKiller"])
2158- manager_service = ManagerService(configuration)
2159- manager_service.startService()
2160 return assertTransmitterActive(self, self.broker_service,
2161- manager_service.reactor)
2162+ self.manager_service.reactor)
2163
2164 def test_receives_messages(self):
2165- configuration = ManagerConfiguration()
2166- configuration.load(["-d", self.make_dir(), "--bus", "session",
2167- "--manager-plugins", "ProcessKiller"])
2168- manager_service = ManagerService(configuration)
2169- manager_service.startService()
2170- return assertReceivesMessages(self, manager_service.dbus_service,
2171+ return assertReceivesMessages(self, self.manager_service.dbus_service,
2172 self.broker_service, self.remote)
2173
2174 def test_manager_store(self):
2175- configuration = ManagerConfiguration()
2176- path = self.make_dir()
2177- configuration.load(["-d", path, "--bus", "session",
2178- "--manager-plugins", "ProcessKiller"])
2179- manager_service = ManagerService(configuration)
2180- manager_service.startService()
2181- self.assertNotIdentical(manager_service.registry.store, None)
2182- self.assertTrue(
2183- isinstance(manager_service.registry.store, ManagerStore))
2184- self.assertTrue(os.path.isfile(os.path.join(path, "manager.database")))
2185+ self.assertNotIdentical(self.manager_service.registry.store, None)
2186+ self.assertTrue(
2187+ isinstance(self.manager_service.registry.store, ManagerStore))
2188+ self.assertTrue(
2189+ os.path.isfile(os.path.join(self.path, "manager.database")))
2190+
2191+ def test_register_plugin_on_broker_started(self):
2192+ """
2193+ When the broker is restarted, it fires a "broker_started" signal which
2194+ makes the Manager plugin register itself again.
2195+ """
2196+ d = Deferred()
2197+ def register_plugin(bus_name, object_path):
2198+ d.callback((bus_name, object_path))
2199+ def patch(ignore):
2200+ self.manager_service.remote_broker.register_plugin = register_plugin
2201+ self.broker_service.dbus_object.broker_started()
2202+ return d
2203+ return self.remote.get_registered_plugins(
2204+ ).addCallback(patch
2205+ ).addCallback(self.assertEquals,
2206+ ("com.canonical.landscape.Manager",
2207+ "/com/canonical/landscape/Manager"))
2208+
2209+ def test_register_message_on_broker_started(self):
2210+ """
2211+ When the broker is restarted, it fires a "broker_started" signal which
2212+ makes the Manager plugin register all registered messages again.
2213+ """
2214+ self.manager_service.registry.register_message("foo", lambda x: None)
2215+ d = Deferred()
2216+ def register_client_accepted_message_type(type):
2217+ if type == "foo":
2218+ d.callback(type)
2219+ def patch(ignore):
2220+ self.manager_service.remote_broker.register_client_accepted_message_type = \
2221+ register_client_accepted_message_type
2222+ self.broker_service.dbus_object.broker_started()
2223+ return d
2224+ return self.remote.get_registered_plugins(
2225+ ).addCallback(patch
2226+ ).addCallback(self.assertEquals, "foo")
2227
2228=== modified file 'landscape/manager/tests/test_scriptexecution.py'
2229--- landscape/manager/tests/test_scriptexecution.py 2008-12-11 17:11:08 +0000
2230+++ landscape/manager/tests/test_scriptexecution.py 2009-09-21 16:05:18 +0000
2231@@ -3,6 +3,7 @@
2232 import sys
2233 import tempfile
2234 import stat
2235+import pwd
2236
2237 from twisted.internet.defer import gatherResults
2238 from twisted.internet.error import ProcessDone
2239@@ -19,7 +20,7 @@
2240
2241
2242 def get_default_environment():
2243- username = os.getlogin()
2244+ username = pwd.getpwuid(os.getuid())[0]
2245 uid, gid, home = get_user_info(username)
2246 return {
2247 "PATH": UBUNTU_PATH,
2248@@ -334,11 +335,9 @@
2249 protocol.childDataReceived(1, "hi")
2250 protocol.processEnded(Failure(ProcessDone(0)))
2251 self.manager.reactor.advance(501)
2252- def got_error(f):
2253- print f
2254- self.assertTrue(f.check(ProcessTimeLimitReachedError))
2255- self.assertEquals(f.value.data, "hi\n")
2256- result.addErrback(got_error)
2257+ def got_result(output):
2258+ self.assertEquals(output, "hi")
2259+ result.addCallback(got_result)
2260 return result
2261
2262 def test_script_is_owned_by_user(self):
2263@@ -348,7 +347,7 @@
2264 correct permissions. Therefore os.chmod and os.chown must be called
2265 before data is written.
2266 """
2267- username = os.getlogin()
2268+ username = pwd.getpwuid(os.getuid())[0]
2269 uid, gid, home = get_user_info(username)
2270
2271 mock_chown = self.mocker.replace("os.chown", passthrough=False)
2272@@ -474,7 +473,7 @@
2273
2274 def test_user(self):
2275 """A user can be specified in the message."""
2276- username = os.getlogin()
2277+ username = pwd.getpwuid(os.getuid())[0]
2278 uid, gid, home = get_user_info(username)
2279
2280 # ignore the call to chown!
2281@@ -551,7 +550,7 @@
2282
2283 def test_urgent_response(self):
2284 """Responses to script execution messages are urgent."""
2285- username = os.getlogin()
2286+ username = pwd.getpwuid(os.getuid())[0]
2287 uid, gid, home = get_user_info(username)
2288
2289 # ignore the call to chown!
2290@@ -585,6 +584,43 @@
2291 result.addCallback(got_result)
2292 return result
2293
2294+ def test_binary_output(self):
2295+ """
2296+ If a script outputs bytes that are not valid utf-8, they are replaced
2297+ during the decoding phase, but the script still succeeds.
2298+ """
2299+ username = pwd.getpwuid(os.getuid())[0]
2300+ uid, gid, home = get_user_info(username)
2301+
2302+ mock_chown = self.mocker.replace("os.chown", passthrough=False)
2303+ mock_chown(ARGS)
2304+
2305+ def spawn_called(protocol, filename, uid, gid, path, env):
2306+ protocol.childDataReceived(1,
2307+ "\x7fELF\x01\x01\x01\x00\x00\x00\x95\x01")
2308+ protocol.processEnded(Failure(ProcessDone(0)))
2309+ self._verify_script(filename, sys.executable, "print 'hi'")
2310+ process_factory = self.mocker.mock()
2311+ process_factory.spawnProcess(
2312+ ANY, ANY, uid=uid, gid=gid, path=ANY,
2313+ env=get_default_environment())
2314+ self.mocker.call(spawn_called)
2315+
2316+ self.mocker.replay()
2317+
2318+ self.manager.add(ScriptExecutionPlugin(process_factory=process_factory))
2319+
2320+ def got_result(r):
2321+ self.assertTrue(self.broker_service.exchanger.is_urgent())
2322+ [message] = self.broker_service.message_store.get_pending_messages()
2323+ self.assertEquals(
2324+ message["result-text"],
2325+ u"\x7fELF\x01\x01\x01\x00\x00\x00\ufffd\x01")
2326+
2327+ result = self._send_script(sys.executable, "print 'hi'")
2328+ result.addCallback(got_result)
2329+ return result
2330+
2331 def test_parse_error_causes_operation_failure(self):
2332 """
2333 If there is an error parsing the message, an operation-result will be
2334@@ -596,12 +632,20 @@
2335 self.manager.dispatch_message(
2336 {"type": "execute-script", "operation-id": 444})
2337
2338+ if sys.version_info[:2] < (2, 6):
2339+ expected_message = [{"type": "operation-result",
2340+ "operation-id": 444,
2341+ "result-text": u"KeyError: 'username'",
2342+ "status": FAILED}]
2343+ else:
2344+ expected_message = [{"type": "operation-result",
2345+ "operation-id": 444,
2346+ "result-text": u"KeyError: username",
2347+ "status": FAILED}]
2348+
2349 self.assertMessages(
2350 self.broker_service.message_store.get_pending_messages(),
2351- [{"type": "operation-result",
2352- "operation-id": 444,
2353- "result-text": u"KeyError: 'username'",
2354- "status": FAILED}])
2355+ expected_message)
2356
2357 self.assertTrue("KeyError: 'username'" in self.logfile.getvalue())
2358
2359
2360=== modified file 'landscape/message_schemas.py'
2361--- landscape/message_schemas.py 2009-03-18 20:42:05 +0000
2362+++ landscape/message_schemas.py 2009-09-21 16:05:18 +0000
2363@@ -324,6 +324,10 @@
2364 CUSTOM_GRAPH = Message("custom-graph", {
2365 "data": Dict(Int(), GRAPH_DATA)})
2366
2367+REBOOT_REQUIRED = Message(
2368+ "reboot-required",
2369+ {"flag": Bool()})
2370+
2371 message_schemas = {}
2372 for schema in [ACTIVE_PROCESS_INFO, COMPUTER_UPTIME, CLIENT_UPTIME,
2373 OPERATION_RESULT, COMPUTER_INFO, DISTRIBUTION_INFO,
2374@@ -332,5 +336,6 @@
2375 REGISTER, REGISTER_CLOUD_VM, TEMPERATURE, PROCESSOR_INFO,
2376 USERS, PACKAGES,
2377 CHANGE_PACKAGES_RESULT, UNKNOWN_PACKAGE_HASHES,
2378- ADD_PACKAGES, TEXT_MESSAGE, TEST, CUSTOM_GRAPH]:
2379+ ADD_PACKAGES, TEXT_MESSAGE, TEST, CUSTOM_GRAPH,
2380+ REBOOT_REQUIRED]:
2381 message_schemas[schema.type] = schema
2382
2383=== modified file 'landscape/monitor/computerinfo.py'
2384--- landscape/monitor/computerinfo.py 2008-09-08 16:35:57 +0000
2385+++ landscape/monitor/computerinfo.py 2009-09-21 16:05:18 +0000
2386@@ -13,10 +13,10 @@
2387
2388 persist_name = "computer-info"
2389
2390- def __init__(self, get_hostname=socket.gethostname,
2391+ def __init__(self, get_fqdn=socket.getfqdn,
2392 meminfo_file="/proc/meminfo",
2393 lsb_release_filename="/etc/lsb-release"):
2394- self._get_hostname = get_hostname
2395+ self._get_fqdn = get_fqdn
2396 self._meminfo_file = meminfo_file
2397 self._lsb_release_filename = lsb_release_filename
2398
2399@@ -55,7 +55,7 @@
2400 def _create_computer_info_message(self):
2401 message = {}
2402 self._add_if_new(message, "hostname",
2403- self._get_hostname())
2404+ self._get_fqdn())
2405 total_memory, total_swap = self._get_memory_info()
2406 self._add_if_new(message, "total-memory",
2407 total_memory)
2408
2409=== modified file 'landscape/monitor/deployment.py'
2410--- landscape/monitor/deployment.py 2008-12-11 17:11:08 +0000
2411+++ landscape/monitor/deployment.py 2009-09-21 16:05:18 +0000
2412@@ -15,7 +15,7 @@
2413 ALL_PLUGINS = ["ActiveProcessInfo", "ComputerInfo", "HardwareInventory",
2414 "LoadAverage", "MemoryInfo", "MountInfo", "ProcessorInfo",
2415 "Temperature", "PackageMonitor",
2416- "UserMonitor"]
2417+ "UserMonitor", "RebootRequired"]
2418
2419
2420 class MonitorConfiguration(Configuration):
2421@@ -68,14 +68,13 @@
2422
2423 # If this raises ServiceUnknownError, we should do something nice.
2424 self.remote_broker = RemoteBroker(self.bus)
2425+
2426 self.registry = MonitorPluginRegistry(self.remote_broker, self.reactor,
2427 self.config, self.bus,
2428 self.persist,
2429 self.persist_filename)
2430 self.dbus_service = MonitorDBusObject(self.bus, self.registry)
2431 DBusSignalToReactorTransmitter(self.bus, self.reactor)
2432- self.remote_broker.register_plugin(self.dbus_service.bus_name,
2433- self.dbus_service.object_path)
2434
2435 for plugin in self.plugins:
2436 self.registry.add(plugin)
2437@@ -83,6 +82,14 @@
2438 self.flush_call_id = self.reactor.call_every(
2439 self.config.flush_interval, self.registry.flush)
2440
2441+ def broker_started():
2442+ self.remote_broker.register_plugin(self.dbus_service.bus_name,
2443+ self.dbus_service.object_path)
2444+ self.registry.broker_started()
2445+
2446+ broker_started()
2447+ self.bus.add_signal_receiver(broker_started, "broker_started")
2448+
2449 def stopService(self):
2450 """Stop the monitor.
2451
2452
2453=== modified file 'landscape/monitor/monitor.py'
2454--- landscape/monitor/monitor.py 2008-12-11 17:11:08 +0000
2455+++ landscape/monitor/monitor.py 2009-09-21 16:05:18 +0000
2456@@ -73,11 +73,13 @@
2457 self._persist = registry.persist.root_at(self.persist_name)
2458
2459 def call_on_accepted(self, type, callable, *args, **kwargs):
2460+
2461 def acceptance_changed(acceptance):
2462 if acceptance:
2463 return callable(*args, **kwargs)
2464- self.registry.reactor.call_on(("message-type-acceptance-changed", type),
2465- acceptance_changed)
2466+
2467+ self.registry.reactor.call_on(("message-type-acceptance-changed",
2468+ type), acceptance_changed)
2469
2470
2471 class DataWatcher(MonitorPlugin):
2472
2473=== modified file 'landscape/monitor/mountinfo.py'
2474--- landscape/monitor/mountinfo.py 2008-09-08 16:35:57 +0000
2475+++ landscape/monitor/mountinfo.py 2009-09-21 16:05:18 +0000
2476@@ -26,6 +26,7 @@
2477 self._create_time = create_time
2478 self._free_space = []
2479 self._mount_info = []
2480+ self._mount_info_to_persist = None
2481 self._mount_activity = []
2482 self._prev_mount_activity = {}
2483 self._hal_manager = hal_manager or HALManager()
2484@@ -61,6 +62,7 @@
2485 def create_mount_info_message(self):
2486 if self._mount_info:
2487 message = {"type": "mount-info", "mount-info": self._mount_info}
2488+ self._mount_info_to_persist = self._mount_info[:]
2489 self._mount_info = []
2490 return message
2491 return None
2492@@ -74,12 +76,20 @@
2493
2494 def send_messages(self, urgent=False):
2495 for message in self.create_messages():
2496- self.registry.broker.send_message(message, urgent=urgent)
2497+ d = self.registry.broker.send_message(message, urgent=urgent)
2498+ if message["type"] == "mount-info":
2499+ d.addCallback(lambda x: self.persist_mount_info())
2500
2501 def exchange(self):
2502 self.registry.broker.call_if_accepted("mount-info",
2503 self.send_messages)
2504
2505+ def persist_mount_info(self):
2506+ for timestamp, mount_info in self._mount_info_to_persist:
2507+ mount_point = mount_info["mount-point"]
2508+ self._persist.set(("mount-info", mount_point), mount_info)
2509+ self._mount_info_to_persist = None
2510+
2511 def run(self):
2512 self._monitor.ping()
2513 now = int(self._create_time())
2514@@ -97,8 +107,8 @@
2515
2516 prev_mount_info = self._persist.get(("mount-info", mount_point))
2517 if not prev_mount_info or prev_mount_info != mount_info:
2518- self._persist.set(("mount-info", mount_point), mount_info)
2519- self._mount_info.append((now, mount_info))
2520+ if mount_info not in [m for t, m in self._mount_info]:
2521+ self._mount_info.append((now, mount_info))
2522
2523 if not self._prev_mount_activity.get(mount_point, False):
2524 self._mount_activity.append((now, mount_point, True))
2525
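The mount-info change above delays persistence until the broker confirms delivery: the entries that went into a message are remembered in _mount_info_to_persist and only written to the persist database from the send deferred's callback, so a message lost before delivery is rebuilt on the next exchange instead of being silently dropped. A stand-alone sketch of the pattern, with a plain callback standing in for the Twisted Deferred:

```python
class MiniMountInfo:
    """Sketch of persist-on-delivery: persist only after the send succeeds."""

    def __init__(self, send):
        self._send = send        # callable(message, on_delivered_callback)
        self._persist = {}
        self._mount_info = []    # (mount_point, info) entries not yet sent
        self._to_persist = None

    def queue(self, mount_point, info):
        self._mount_info.append((mount_point, info))

    def send_messages(self):
        if not self._mount_info:
            return
        message = {"type": "mount-info", "mount-info": list(self._mount_info)}
        # Remember what went into the message; clear the outgoing queue.
        self._to_persist = self._mount_info[:]
        self._mount_info = []
        # Persistence happens in the delivery callback, not here.
        self._send(message, self._persist_mount_info)

    def _persist_mount_info(self):
        for mount_point, info in self._to_persist:
            self._persist[mount_point] = info
        self._to_persist = None
```

If the broker never invokes the callback, the persist database is untouched and the entries are regenerated on the next run.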
2526=== modified file 'landscape/monitor/processorinfo.py'
2527--- landscape/monitor/processorinfo.py 2008-09-08 16:35:57 +0000
2528+++ landscape/monitor/processorinfo.py 2009-09-21 16:05:18 +0000
2529@@ -148,6 +148,43 @@
2530 return processors
2531
2532
2533+class ARMMessageFactory:
2534+ """Factory providing processor information for ARM-based processors."""
2535+
2536+ def __init__(self, source_filename):
2537+ """Initialize reader with filename of data source."""
2538+ self._source_filename = source_filename
2539+
2540+ def create_message(self):
2541+ """Returns a list containing information about each processor."""
2542+ processors = []
2543+ file = open(self._source_filename)
2544+
2545+ try:
2546+ regexp = re.compile("(?P<key>.*?)\s*:\s*(?P<value>.*)")
2547+ current = {}
2548+
2549+ for line in file:
2550+ match = regexp.match(line.strip())
2551+ if match:
2552+ key = match.group("key")
2553+ value = match.group("value")
2554+
2555+ if key == "Processor":
2556+ # ARM doesn't support SMP, thus no processor-id in
2557+ # the cpuinfo
2558+ current["processor-id"] = 0
2559+ current["model"] = value
2560+ elif key == "Cache size":
2561+ current["cache-size"] = int(value)
2562+
2563+ if current:
2564+ processors.append(current)
2565+ finally:
2566+ file.close()
2567+
2568+ return processors
2569+
2570 class SparcMessageFactory:
2571 """Factory for sparc-based processors provides processor information."""
2572
2573@@ -216,6 +253,7 @@
2574 return processors
2575
2576
2577-message_factories = [("ppc(64)?", PowerPCMessageFactory),
2578+message_factories = [("arm*", ARMMessageFactory),
2579+ ("ppc(64)?", PowerPCMessageFactory),
2580 ("sparc[64]", SparcMessageFactory),
2581 ("i[3-7]86|x86_64", X86MessageFactory)]
2582
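The ARMMessageFactory added above scans /proc/cpuinfo with a key/value regex; since the ARM kernels of the time reported a single "Processor" line rather than per-core "processor" entries, processor-id is hard-coded to 0. A condensed sketch of the parsing loop, run against a sample string instead of a file:

```python
import re


def parse_arm_cpuinfo(text):
    """Parse ARM-style /proc/cpuinfo text into a processor description."""
    regexp = re.compile(r"(?P<key>.*?)\s*:\s*(?P<value>.*)")
    current = {}
    for line in text.splitlines():
        match = regexp.match(line.strip())
        if not match:
            continue
        key, value = match.group("key"), match.group("value")
        if key == "Processor":
            current["processor-id"] = 0   # single core, as in the factory
            current["model"] = value
        elif key == "Cache size":
            current["cache-size"] = int(value)
    return [current] if current else []


# Hypothetical /proc/cpuinfo excerpt used only to exercise the parser.
SAMPLE = """\
Processor : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 483.16
"""
```

Unrecognized keys (BogoMIPS above) are simply skipped, matching the selective extraction in the factory.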
2583=== added file 'landscape/monitor/rebootrequired.py'
2584--- landscape/monitor/rebootrequired.py 1970-01-01 00:00:00 +0000
2585+++ landscape/monitor/rebootrequired.py 2009-09-21 16:05:18 +0000
2586@@ -0,0 +1,48 @@
2587+import os
2588+import logging
2589+
2590+from landscape.monitor.monitor import MonitorPlugin
2591+
2592+
2593+class RebootRequired(MonitorPlugin):
2594+ """
2595+ Report whether the system requires a reboot.
2596+ """
2597+
2598+ persist_name = "reboot-required"
2599+ run_interval = 900 # 15 minutes
2600+
2601+ def __init__(self, reboot_required_filename="/var/run/reboot-required"):
2602+ self._reboot_required_filename = reboot_required_filename
2603+
2604+ def _check_reboot_required(self):
2605+ """Return a boolean indicating whether the computer needs a reboot."""
2606+ return os.path.exists(self._reboot_required_filename)
2607+
2608+ def _create_message(self):
2609+ """Return the body of the reboot-required message to be sent."""
2610+ message = {}
2611+ key = "flag"
2612+ value = self._check_reboot_required()
2613+ if value != self._persist.get(key):
2614+ self._persist.set(key, value)
2615+ message[key] = value
2616+ return message
2617+
2618+ def send_message(self):
2619+ """Send a reboot-required message if needed.
2620+
2621+ A message will be sent only if the reboot-required status of the
2622+ system has changed.
2623+ """
2624+ message = self._create_message()
2625+ if message:
2626+ message["type"] = "reboot-required"
2627+ logging.info("Queueing message with updated "
2628+ "reboot-required status.")
2629+ self.registry.broker.send_message(message)
2630+
2631+ def run(self):
2632+ """Send reboot-required messages if the server accepts them."""
2633+ return self.registry.broker.call_if_accepted(
2634+ "reboot-required", self.send_message)
2635
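RebootRequired above only queues a message when the flag actually changes: the current value is compared with the persisted one, and the persist entry is updated as a side effect of building the message body. The core of _create_message, sketched with a plain dict standing in for the persist object:

```python
def create_reboot_message(persist, reboot_needed):
    """Return the reboot-required body, or {} when the flag is unchanged."""
    message = {}
    # On first run persist has no "flag" entry, so any value is a change
    # and the initial state always gets reported.
    if reboot_needed != persist.get("flag"):
        persist["flag"] = reboot_needed
        message["flag"] = reboot_needed
    return message
```

An empty body means send_message has nothing to queue, so the 15-minute run_interval produces traffic only on state transitions.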
2636=== modified file 'landscape/monitor/tests/test_computerinfo.py'
2637--- landscape/monitor/tests/test_computerinfo.py 2008-09-08 16:35:57 +0000
2638+++ landscape/monitor/tests/test_computerinfo.py 2009-09-21 16:05:18 +0000
2639@@ -5,8 +5,8 @@
2640 from landscape.tests.mocker import ANY
2641
2642
2643-def get_hostname():
2644- return "ooga"
2645+def get_fqdn():
2646+ return "ooga.local"
2647
2648
2649 class ComputerInfoTest(LandscapeTest):
2650@@ -48,16 +48,16 @@
2651 DISTRIB_DESCRIPTION="Ubuntu 6.06.1 LTS"
2652 """)
2653
2654- def test_get_hostname(self):
2655+ def test_get_fqdn(self):
2656 self.mstore.set_accepted_types(["computer-info"])
2657- plugin = ComputerInfo(get_hostname=get_hostname,
2658+ plugin = ComputerInfo(get_fqdn=get_fqdn,
2659 lsb_release_filename=self.lsb_release_filename)
2660 self.monitor.add(plugin)
2661 plugin.exchange()
2662 messages = self.mstore.get_pending_messages()
2663 self.assertEquals(len(messages), 1)
2664 self.assertEquals(messages[0]["type"], "computer-info")
2665- self.assertEquals(messages[0]["hostname"], "ooga")
2666+ self.assertEquals(messages[0]["hostname"], "ooga.local")
2667
2668 def test_get_real_hostname(self):
2669 self.mstore.set_accepted_types(["computer-info"])
2670@@ -72,7 +72,7 @@
2671
2672 def test_only_report_changed_hostnames(self):
2673 self.mstore.set_accepted_types(["computer-info"])
2674- plugin = ComputerInfo(get_hostname=get_hostname)
2675+ plugin = ComputerInfo(get_fqdn=get_fqdn)
2676 self.monitor.add(plugin)
2677 plugin.exchange()
2678 messages = self.mstore.get_pending_messages()
2679@@ -89,7 +89,7 @@
2680 i = i + 1
2681
2682 self.mstore.set_accepted_types(["computer-info"])
2683- plugin = ComputerInfo(get_hostname=hostname_factory().next)
2684+ plugin = ComputerInfo(get_fqdn=hostname_factory().next)
2685 self.monitor.add(plugin)
2686
2687 plugin.exchange()
2688@@ -270,14 +270,14 @@
2689 """
2690 self.mstore.set_accepted_types(["distribution-info", "computer-info"])
2691 meminfo_filename = self.make_path(self.sample_memory_info)
2692- plugin = ComputerInfo(get_hostname=get_hostname,
2693+ plugin = ComputerInfo(get_fqdn=get_fqdn,
2694 meminfo_file=meminfo_filename,
2695 lsb_release_filename=self.lsb_release_filename)
2696 self.monitor.add(plugin)
2697 plugin.exchange()
2698 self.reactor.fire("resynchronize")
2699 plugin.exchange()
2700- computer_info = {"type": "computer-info", "hostname": "ooga",
2701+ computer_info = {"type": "computer-info", "hostname": "ooga.local",
2702 "timestamp": 0, "total-memory": 1510,
2703 "total-swap": 1584}
2704 dist_info = {"type": "distribution-info",
2705
2706=== modified file 'landscape/monitor/tests/test_deployment.py'
2707--- landscape/monitor/tests/test_deployment.py 2008-09-08 16:35:57 +0000
2708+++ landscape/monitor/tests/test_deployment.py 2009-09-21 16:05:18 +0000
2709@@ -8,6 +8,8 @@
2710 from landscape.broker.tests.test_remote import assertTransmitterActive
2711 from landscape.tests.test_plugin import assertReceivesMessages
2712
2713+from twisted.internet.defer import Deferred
2714+
2715
2716 class DeploymentTest(LandscapeTest):
2717
2718@@ -27,7 +29,7 @@
2719 "-d", self.make_path()])
2720 monitor = MonitorService(configuration)
2721 plugins = monitor.plugins
2722- self.assertEquals(len(plugins), 10)
2723+ self.assertEquals(len(plugins), 11)
2724
2725
2726 class MonitorServiceTest(LandscapeIsolatedTest):
2727@@ -58,6 +60,43 @@
2728 return assertReceivesMessages(self, self.monitor.dbus_service,
2729 self.broker_service, self.remote)
2730
2731+ def test_register_plugin_on_broker_started(self):
2732+ """
2733+ When the broker is restarted, it fires a "broker-started" signal which
2734+ makes the Monitor plugin register itself again.
2735+ """
2736+ d = Deferred()
2737+ def register_plugin(bus_name, object_path):
2738+ d.callback((bus_name, object_path))
2739+ def patch(ignore):
2740+ self.monitor.remote_broker.register_plugin = register_plugin
2741+ self.broker_service.dbus_object.broker_started()
2742+ return d
2743+ return self.remote.get_registered_plugins(
2744+ ).addCallback(patch
2745+ ).addCallback(self.assertEquals,
2746+ ("com.canonical.landscape.Monitor",
2747+ "/com/canonical/landscape/Monitor"))
2748+
2749+ def test_register_message_on_broker_started(self):
2750+ """
2751+ When the broker is restarted, it fires a "broker-started" signal which
2752+ makes the Monitor plugin register all registered messages again.
2753+ """
2754+ self.monitor.registry.register_message("foo", lambda x: None)
2755+ d = Deferred()
2756+ def register_client_accepted_message_type(type):
2757+ if type == "foo":
2758+ d.callback(type)
2759+ def patch(ignore):
2760+ self.monitor.remote_broker.register_client_accepted_message_type = \
2761+ register_client_accepted_message_type
2762+ self.broker_service.dbus_object.broker_started()
2763+ return d
2764+ return self.remote.get_registered_plugins(
2765+ ).addCallback(patch
2766+ ).addCallback(self.assertEquals, "foo")
2767+
2768
2769 class MonitorTest(MonitorServiceTest):
2770
2771
2772=== modified file 'landscape/monitor/tests/test_mountinfo.py'
2773--- landscape/monitor/tests/test_mountinfo.py 2008-09-08 16:35:57 +0000
2774+++ landscape/monitor/tests/test_mountinfo.py 2009-09-21 16:05:18 +0000
2775@@ -1,5 +1,7 @@
2776 import tempfile
2777
2778+from twisted.internet.defer import succeed
2779+
2780 from landscape.monitor.mountinfo import MountInfo
2781 from landscape.tests.test_hal import MockHALManager, MockRealHALDevice
2782 from landscape.tests.helpers import (LandscapeTest, MakePathHelper,
2783@@ -38,7 +40,7 @@
2784 """
2785 plugin = self.get_mount_info(create_time=self.reactor.time)
2786 self.monitor.add(plugin)
2787-
2788+
2789 self.reactor.advance(self.monitor.step_size)
2790
2791 message = plugin.create_mount_info_message()
2792@@ -675,8 +677,49 @@
2793
2794 remote_broker_mock = self.mocker.replace(self.remote)
2795 remote_broker_mock.send_message(ANY, urgent=True)
2796+ self.mocker.result(succeed(None))
2797 self.mocker.count(3)
2798 self.mocker.replay()
2799
2800 self.reactor.fire(("message-type-acceptance-changed", "mount-info"),
2801 True)
2802+
2803+ def test_persist_timing(self):
2804+        """Mount info is only persisted when an exchange happens.
2805+
2806+        Previously mount info was persisted as soon as it was gathered: if
2807+        an event happened between the persist and the exchange, the server
2808+        didn't get the mount info at all. This test ensures that mount info
2809+        is only saved when an exchange happens.
2810+ """
2811+ def statvfs(path):
2812+ return (4096, 0, mb(1000), mb(100), 0, 0, 0, 0, 0)
2813+
2814+ filename = self.make_path("""\
2815+/dev/hda1 / ext3 rw 0 0
2816+""")
2817+ plugin = MountInfo(mounts_file=filename, create_time=self.reactor.time,
2818+ statvfs=statvfs, mtab_file=filename)
2819+ self.monitor.add(plugin)
2820+ plugin.run()
2821+ message1 = plugin.create_mount_info_message()
2822+ self.assertEquals(
2823+ message1.get("mount-info"),
2824+ [(0, {"device": "/dev/hda1",
2825+ "filesystem": "ext3",
2826+ "mount-point": "/",
2827+ "total-space": 4096000L})])
2828+ plugin.run()
2829+ message2 = plugin.create_mount_info_message()
2830+ self.assertEquals(
2831+ message2.get("mount-info"),
2832+ [(0, {"device": "/dev/hda1",
2833+ "filesystem": "ext3",
2834+ "mount-point": "/",
2835+ "total-space": 4096000L})])
2836+        # Run again; calling create_mount_info_message purges the information
2837+ plugin.run()
2838+ plugin.exchange()
2839+ plugin.run()
2840+ message3 = plugin.create_mount_info_message()
2841+ self.assertIdentical(message3, None)
2842
2843=== modified file 'landscape/monitor/tests/test_processorinfo.py'
2844--- landscape/monitor/tests/test_processorinfo.py 2008-09-08 16:35:57 +0000
2845+++ landscape/monitor/tests/test_processorinfo.py 2009-09-21 16:05:18 +0000
2846@@ -1,6 +1,7 @@
2847 from landscape.plugin import PluginConfigError
2848 from landscape.monitor.processorinfo import ProcessorInfo
2849-from landscape.tests.helpers import LandscapeTest, MakePathHelper, MonitorHelper
2850+from landscape.tests.helpers import (
2851+ LandscapeTest, MakePathHelper, MonitorHelper)
2852 from landscape.tests.mocker import ANY
2853
2854
2855@@ -58,7 +59,6 @@
2856 self.assertEquals(len(messages), 2)
2857
2858
2859-
2860 class PowerPCMessageTest(LandscapeTest):
2861 """Tests for powerpc-specific message builder."""
2862
2863@@ -167,6 +167,131 @@
2864 self.assertEquals(processor["model"], "7447A, altivec supported")
2865
2866
2867+class ARMMessageTest(LandscapeTest):
2868+ """Tests for ARM-specific message builder."""
2869+
2870+ helpers = [MonitorHelper, MakePathHelper]
2871+
2872+ ARM_NOKIA = """
2873+Processor : ARMv6-compatible processor rev 2 (v6l)
2874+BogoMIPS : 164.36
2875+Features : swp half thumb fastmult vfp edsp java
2876+CPU implementer : 0x41
2877+CPU architecture: 6TEJ
2878+CPU variant : 0x0
2879+CPU part : 0xb36
2880+CPU revision : 2
2881+Cache type : write-back
2882+Cache clean : cp15 c7 ops
2883+Cache lockdown : format C
2884+Cache format : Harvard
2885+I size : 32768
2886+I assoc : 4
2887+I line length : 32
2888+I sets : 256
2889+D size : 32768
2890+D assoc : 4
2891+D line length : 32
2892+D sets : 256
2893+
2894+Hardware : Nokia RX-44
2895+Revision : 24202524
2896+Serial : 0000000000000000
2897+"""
2898+
2899+ ARMv7 = """
2900+Processor : ARMv7 Processor rev 1 (v7l)
2901+BogoMIPS : 663.55
2902+Features : swp half thumb fastmult vfp edsp
2903+CPU implementer : 0x41
2904+CPU architecture: 7
2905+CPU variant : 0x2
2906+CPU part : 0xc08
2907+CPU revision : 1
2908+Cache type : write-back
2909+Cache clean : read-block
2910+Cache lockdown : not supported
2911+Cache format : Unified
2912+Cache size : 768
2913+Cache assoc : 1
2914+Cache line length : 8
2915+Cache sets : 64
2916+
2917+Hardware : Sample Board
2918+Revision : 81029
2919+Serial : 0000000000000000
2920+"""
2921+
2922+ ARMv7_reverse = """
2923+Serial : 0000000000000000
2924+Revision : 81029
2925+Hardware : Sample Board
2926+
2927+Cache sets : 64
2928+Cache line length : 8
2929+Cache assoc : 1
2930+Cache size : 768
2931+Cache format : Unified
2932+Cache lockdown : not supported
2933+Cache clean : read-block
2934+Cache type : write-back
2935+CPU revision : 1
2936+CPU part : 0xc08
2937+CPU variant : 0x2
2938+CPU architecture: 7
2939+CPU implementer : 0x41
2940+Features : swp half thumb fastmult vfp edsp
2941+BogoMIPS : 663.55
2942+Processor : ARMv7 Processor rev 1 (v7l)
2943+"""
2944+
2945+ def test_read_sample_nokia_data(self):
2946+ """Ensure the plugin can parse /proc/cpuinfo from a Nokia N810."""
2947+ filename = self.make_path(self.ARM_NOKIA)
2948+ plugin = ProcessorInfo(machine_name="armv6l",
2949+ source_filename=filename)
2950+ message = plugin.create_message()
2951+ self.assertEquals(message["type"], "processor-info")
2952+ self.assertTrue(len(message["processors"]) == 1)
2953+
2954+ processor_0 = message["processors"][0]
2955+ self.assertEquals(len(processor_0), 2)
2956+ self.assertEquals(processor_0["model"],
2957+ "ARMv6-compatible processor rev 2 (v6l)")
2958+ self.assertEquals(processor_0["processor-id"], 0)
2959+
2960+ def test_read_sample_armv7_data(self):
2961+ """Ensure the plugin can parse /proc/cpuinfo from a sample ARMv7."""
2962+ filename = self.make_path(self.ARMv7)
2963+ plugin = ProcessorInfo(machine_name="armv7l",
2964+ source_filename=filename)
2965+ message = plugin.create_message()
2966+ self.assertEquals(message["type"], "processor-info")
2967+ self.assertTrue(len(message["processors"]) == 1)
2968+
2969+ processor_0 = message["processors"][0]
2970+ self.assertEquals(len(processor_0), 3)
2971+ self.assertEquals(processor_0["model"],
2972+ "ARMv7 Processor rev 1 (v7l)")
2973+ self.assertEquals(processor_0["processor-id"], 0)
2974+ self.assertEquals(processor_0["cache-size"], 768)
2975+
2976+ def test_read_sample_armv7_reverse_data(self):
2977+ """Ensure the plugin can parse a reversed sample ARMv7 /proc/cpuinfo"""
2978+ filename = self.make_path(self.ARMv7_reverse)
2979+ plugin = ProcessorInfo(machine_name="armv7l",
2980+ source_filename=filename)
2981+ message = plugin.create_message()
2982+ self.assertEquals(message["type"], "processor-info")
2983+ self.assertTrue(len(message["processors"]) == 1)
2984+
2985+ processor_0 = message["processors"][0]
2986+ self.assertEquals(len(processor_0), 3)
2987+ self.assertEquals(processor_0["model"],
2988+ "ARMv7 Processor rev 1 (v7l)")
2989+ self.assertEquals(processor_0["processor-id"], 0)
2990+ self.assertEquals(processor_0["cache-size"], 768)
2991+
2992 class SparcMessageTest(LandscapeTest):
2993 """Tests for sparc-specific message builder."""
2994
2995@@ -407,4 +532,3 @@
2996
2997 self.mstore.set_accepted_types(["processor-info"])
2998 self.assertMessages(list(self.mstore.get_pending_messages()), [])
2999-
3000
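The ARM samples above differ from x86 `/proc/cpuinfo` in that there is a single `Processor :` line rather than one stanza per core, and the field order is not guaranteed (hence the reversed sample). A minimal order-independent parse of the fields might look like this; it is an illustrative sketch, not the plugin's actual parser:

```python
def parse_arm_cpuinfo(text):
    """Collect 'key : value' fields from an ARM /proc/cpuinfo dump.

    Field order is not guaranteed, so we build a dict first and read
    the entries we care about afterwards.
    """
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


sample = """\
Serial          : 0000000000000000
Cache size      : 768
Processor       : ARMv7 Processor rev 1 (v7l)
"""
info = parse_arm_cpuinfo(sample)
print(info["Processor"])        # ARMv7 Processor rev 1 (v7l)
print(int(info["Cache size"]))  # 768
```

Parsing into a dict is what makes both the forward and the reversed samples in the tests yield the same result.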
3001=== added file 'landscape/monitor/tests/test_rebootrequired.py'
3002--- landscape/monitor/tests/test_rebootrequired.py 1970-01-01 00:00:00 +0000
3003+++ landscape/monitor/tests/test_rebootrequired.py 2009-09-21 16:05:18 +0000
3004@@ -0,0 +1,62 @@
3005+import os
3006+
3007+from landscape.monitor.rebootrequired import RebootRequired
3008+from landscape.tests.helpers import LandscapeIsolatedTest
3009+from landscape.tests.helpers import (
3010+ MakePathHelper, MonitorHelper, LogKeeperHelper)
3011+
3012+
3013+class RebootRequiredTest(LandscapeIsolatedTest):
3014+
3015+ helpers = [MakePathHelper, MonitorHelper, LogKeeperHelper]
3016+
3017+ def setUp(self):
3018+ super(RebootRequiredTest, self).setUp()
3019+ self.reboot_required_filename = self.make_path("")
3020+ self.plugin = RebootRequired(self.reboot_required_filename)
3021+ self.monitor.add(self.plugin)
3022+ self.mstore.set_accepted_types(["reboot-required"])
3023+
3024+ def test_wb_check_reboot_required(self):
3025+ """
3026+        L{RebootRequired._check_reboot_required} should return C{True} if the
3027+ reboot-required flag file is present, C{False} otherwise.
3028+ """
3029+ self.assertTrue(self.plugin._check_reboot_required())
3030+ os.remove(self.reboot_required_filename)
3031+ self.assertFalse(self.plugin._check_reboot_required())
3032+
3033+ def test_wb_create_message(self):
3034+ """
3035+ A message should be created if and only if the reboot-required status
3036+ of the system has changed.
3037+ """
3038+ self.assertEquals(self.plugin._create_message(), {"flag": True})
3039+ self.assertEquals(self.plugin._create_message(), {})
3040+
3041+ def test_send_message(self):
3042+ """
3043+ A new C{"reboot-required"} message should be enqueued if and only
3044+ if the reboot-required status of the system has changed.
3045+ """
3046+ self.plugin.send_message()
3047+ self.assertIn("Queueing message with updated reboot-required status.",
3048+ self.logfile.getvalue())
3049+ self.assertMessages(self.mstore.get_pending_messages(),
3050+ [{"type": "reboot-required", "flag": True}])
3051+ self.mstore.delete_all_messages()
3052+ self.plugin.send_message()
3053+ self.assertMessages(self.mstore.get_pending_messages(), [])
3054+
3055+ def test_run(self):
3056+ """
3057+ If the server can accept them, the plugin should send
3058+ C{reboot-required} messages.
3059+ """
3060+ mock_plugin = self.mocker.patch(self.plugin)
3061+ mock_plugin.send_message()
3062+ self.mocker.count(1)
3063+ self.mocker.replay()
3064+ self.plugin.run()
3065+ self.mstore.set_accepted_types([])
3066+ self.plugin.run()
3067
3068=== modified file 'landscape/package/changer.py'
3069--- landscape/package/changer.py 2009-04-09 17:09:50 +0000
3070+++ landscape/package/changer.py 2009-09-21 16:05:18 +0000
3071@@ -3,8 +3,9 @@
3072 import sys
3073 import os
3074 import pwd
3075+import grp
3076
3077-from twisted.internet.defer import succeed, fail
3078+from twisted.internet.defer import fail
3079
3080 from landscape.package.reporter import find_reporter_command
3081 from landscape.package.taskhandler import PackageTaskHandler, run_task_handler
3082@@ -42,14 +43,24 @@
3083
3084 def run(self):
3085 task1 = self._store.get_next_task(self.queue_name)
3086- result = super(PackageChanger, self).run()
3087+
3088 def finished(result):
3089 task2 = self._store.get_next_task(self.queue_name)
3090 if task1 and task1.id != (task2 and task2.id):
3091+ # In order to let the reporter run smart-update cleanly,
3092+ # we have to deinitialize Smart, so that the write lock
3093+ # gets released
3094+ self._facade.deinit()
3095 if os.getuid() == 0:
3096+ os.setgid(grp.getgrnam("landscape").gr_gid)
3097 os.setuid(pwd.getpwnam("landscape").pw_uid)
3098 os.system(find_reporter_command())
3099- return result.addCallback(finished)
3100+
3101+ result = self.use_hash_id_db()
3102+ result.addCallback(lambda x: self.handle_tasks())
3103+ result.addCallback(finished)
3104+
3105+ return result
3106
3107 def handle_tasks(self):
3108 result = super(PackageChanger, self).handle_tasks()
3109
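The order of the two privilege-dropping calls added to `finished` matters: the group must change before the user, because once `setuid()` gives up root the process can no longer call `setgid()`. A minimal sketch of that rule, with injectable `setuid`/`setgid` so it can be exercised without root (the `drop_privileges` helper is illustrative, not part of the client):

```python
import os
import pwd
import grp


def drop_privileges(username, setuid=os.setuid, setgid=os.setgid):
    # Group first: after setuid() drops root, setgid() would fail with
    # EPERM and the process would keep running in the root group.
    setgid(grp.getgrnam(username).gr_gid)
    setuid(pwd.getpwnam(username).pw_uid)


# Record the call order with stubs instead of really switching ids.
calls = []
drop_privileges("root",
                setuid=lambda uid: calls.append("setuid"),
                setgid=lambda gid: calls.append("setgid"))
print(calls)  # ['setgid', 'setuid']
```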
3110=== modified file 'landscape/package/facade.py'
3111--- landscape/package/facade.py 2009-04-09 17:09:50 +0000
3112+++ landscape/package/facade.py 2009-09-21 16:05:18 +0000
3113@@ -1,5 +1,5 @@
3114 from smart.transaction import Transaction, PolicyInstall, PolicyUpgrade, Failed
3115-from smart.const import INSTALL, REMOVE, UPGRADE
3116+from smart.const import INSTALL, REMOVE, UPGRADE, ALWAYS, NEVER
3117
3118 import smart
3119
3120@@ -23,6 +23,10 @@
3121 """Raised when Smart fails in an undefined way."""
3122
3123
3124+class ChannelError(Exception):
3125+ """Raised when channels fail to load."""
3126+
3127+
3128 class SmartFacade(object):
3129 """Wrapper for tasks using Smart.
3130
3131@@ -33,6 +37,10 @@
3132 _deb_package_type = None
3133
3134 def __init__(self, smart_init_kwargs={}):
3135+ """
3136+ @param smart_init_kwargs: A dictionary that can be used to pass
3137+ specific keyword parameters to to L{smart.init}.
3138+ """
3139 self._smart_init_kwargs = smart_init_kwargs.copy()
3140 self._smart_init_kwargs.setdefault("interface", "landscape")
3141 self._reset()
3142@@ -46,6 +54,8 @@
3143
3144 self._marks = {}
3145
3146+ self._caching = ALWAYS
3147+
3148 def deinit(self):
3149 """Deinitialize the Facade and the Smart library."""
3150 if self._ctrl:
3151@@ -59,6 +69,8 @@
3152 install_landscape_interface)
3153 install_landscape_interface()
3154 self._ctrl = smart.init(**self._smart_init_kwargs)
3155+ smart.initDistro(self._ctrl)
3156+ smart.initPlugins()
3157 smart.sysconf.set("pm-iface-output", True, soft=True)
3158 smart.sysconf.set("deb-non-interactive", True, soft=True)
3159
3160@@ -74,10 +86,18 @@
3161 """Hook called when the Smart library is initialized."""
3162
3163 def reload_channels(self):
3164- """Reload Smart channels, getting all the cache (packages) in memory.
3165+ """
3166+ Reload Smart channels, getting all the cache (packages) in memory.
3167+
3168+ @raise: L{ChannelError} if Smart fails to reload the channels.
3169 """
3170 ctrl = self._get_ctrl()
3171- ctrl.reloadChannels()
3172+
3173+ reload_result = ctrl.reloadChannels(caching=self._caching)
3174+ if not reload_result and self._caching == NEVER:
3175+ # Raise an error only if we are trying to download remote lists
3176+ raise ChannelError("Smart failed to reload channels (%s)"
3177+ % smart.sysconf.get("channels"))
3178
3179 self._hash2pkg.clear()
3180 self._pkg2hash.clear()
3181@@ -101,23 +121,46 @@
3182 @param with_info: If True, the skeleton will include information
3183 useful for sending data to the server. Such information isn't
3184 necessary if the skeleton will be used to build a hash.
3185+
3186+ @return: a L{PackageSkeleton} object.
3187 """
3188 return build_skeleton(pkg, with_info)
3189
3190 def get_package_hash(self, pkg):
3191- """Return a hash from the given package."""
3192+ """Return a hash from the given package.
3193+
3194+        @param pkg: a L{smart.backends.deb.base.DebPackage} object
3195+ """
3196 return self._pkg2hash.get(pkg)
3197
3198+ def get_package_hashes(self):
3199+ """Get the hashes of all the packages available in the channels."""
3200+ return self._pkg2hash.values()
3201+
3202 def get_packages(self):
3203+ """
3204+ Get all the packages available in the channels.
3205+
3206+ @return: a C{list} of L{smart.backends.deb.base.DebPackage} objects
3207+ """
3208 return [pkg for pkg in self._get_ctrl().getCache().getPackages()
3209 if isinstance(pkg, self._deb_package_type)]
3210
3211 def get_packages_by_name(self, name):
3212- """Return a list with all known (available) packages."""
3213+ """
3214+ Get all available packages matching the provided name.
3215+
3216+ @return: a C{list} of L{smart.backends.deb.base.DebPackage} objects
3217+ """
3218 return [pkg for pkg in self._get_ctrl().getCache().getPackages(name)
3219 if isinstance(pkg, self._deb_package_type)]
3220
3221 def get_package_by_hash(self, hash):
3222+ """
3223+        Get the available package matching the provided hash, if any.
3224+
3225+        @return: a L{smart.backends.deb.base.DebPackage} object or C{None}
3226+ """
3227 return self._hash2pkg.get(hash)
3228
3229 def mark_install(self, pkg):
3230@@ -181,3 +224,77 @@
3231 cache = self._get_ctrl().getCache()
3232 cache.reset()
3233 cache.load()
3234+
3235+ def get_arch(self):
3236+ """
3237+ Get the host dpkg architecture.
3238+ """
3239+ self._get_ctrl()
3240+ from smart.backends.deb.loader import DEBARCH
3241+ return DEBARCH
3242+
3243+ def set_arch(self, arch):
3244+ """
3245+ Set the host dpkg architecture.
3246+
3247+ To take effect it must be called before L{reload_channels}.
3248+
3249+ @param arch: the dpkg architecture to use (e.g. C{"i386"})
3250+ """
3251+ self._get_ctrl()
3252+ smart.sysconf.set("deb-arch", arch)
3253+
3254+ # XXX workaround Smart setting DEBARCH statically in the
3255+ # smart.backends.deb.base module
3256+ import smart.backends.deb.loader as loader
3257+ loader.DEBARCH = arch
3258+
3259+ def set_caching(self, mode):
3260+ """
3261+ Set Smart's caching mode.
3262+
3263+ @param mode: The caching mode to pass to Smart's C{reloadChannels}
3264+ when calling L{reload_channels} (e.g C{smart.const.NEVER} or
3265+ C{smart.const.ALWAYS}).
3266+ """
3267+ self._caching = mode
3268+
3269+ def reset_channels(self):
3270+ """Remove all configured Smart channels."""
3271+ self._get_ctrl()
3272+ smart.sysconf.set("channels", {}, soft=True)
3273+
3274+ def add_channel(self, alias, channel):
3275+ """
3276+ Add a Smart channel.
3277+
3278+ This method can be called more than once to set multiple channels.
3279+ To take effect it must be called before L{reload_channels}.
3280+
3281+ @param alias: A string identifying the channel to be added.
3282+ @param channel: A C{dict} holding information about the channel to
3283+ add (see the Smart API for details about valid keys and values).
3284+ """
3285+ channels = self.get_channels()
3286+ channels.update({alias: channel})
3287+ smart.sysconf.set("channels", channels, soft=True)
3288+
3289+ def get_channels(self):
3290+ """
3291+ @return: A C{dict} of all configured channels.
3292+ """
3293+ self._get_ctrl()
3294+ return smart.sysconf.get("channels")
3295+
3296+
3297+def make_apt_deb_channel(baseurl, distribution, components):
3298+ """Convenience to create Smart channels of type C{"apt-deb"}."""
3299+ return {"baseurl": baseurl,
3300+ "distribution": distribution,
3301+ "components": components,
3302+ "type": "apt-deb"}
3303+
3304+def make_deb_dir_channel(path):
3305+ """Convenience to create Smart channels of type C{"deb-dir"}."""
3306+ return {"path": path,
3307+ "type": "deb-dir"}
3308
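The `reload_channels` change above only treats a failed reload as fatal when the facade was asked to bypass the cache. The decision rule can be extracted as a sketch; `ALWAYS`/`NEVER` below are stand-ins for the `smart.const` values, which are opaque constants in the real library:

```python
class ChannelError(Exception):
    """Raised when channels fail to load."""


# Stand-ins for smart.const.ALWAYS / smart.const.NEVER.
ALWAYS, NEVER = "always", "never"


def check_reload(reload_ok, caching):
    """Mirror the facade's rule: a failed reload only matters when we
    insisted on downloading fresh channel lists (caching == NEVER)."""
    if not reload_ok and caching == NEVER:
        raise ChannelError("Smart failed to reload channels")


check_reload(False, ALWAYS)  # cached data is acceptable: no error
check_reload(True, NEVER)    # fresh download succeeded: no error
try:
    check_reload(False, NEVER)
except ChannelError:
    print("fresh reload failed")  # only this combination raises
```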
3309=== modified file 'landscape/package/reporter.py'
3310--- landscape/package/reporter.py 2008-09-08 16:35:57 +0000
3311+++ landscape/package/reporter.py 2009-09-21 16:05:18 +0000
3312@@ -1,12 +1,15 @@
3313+import urlparse
3314 import logging
3315 import time
3316 import sys
3317 import os
3318
3319 from twisted.internet.defer import Deferred, succeed
3320+from twisted.internet.utils import getProcessOutputAndValue
3321
3322 from landscape.lib.sequenceranges import sequence_to_ranges
3323 from landscape.lib.twisted_util import gather_results
3324+from landscape.lib.fetch import fetch_async
3325
3326 from landscape.package.taskhandler import PackageTaskHandler, run_task_handler
3327 from landscape.package.store import UnknownHashIDRequest
3328@@ -17,13 +20,30 @@
3329
3330
3331 class PackageReporter(PackageTaskHandler):
3332+ """Report information about the system packages.
3333
3334+ @cvar queue_name: Name of the task queue to pick tasks from.
3335+ @cvar smart_update_interval: Time interval in minutes to pass to
3336+ the C{--after} command line option of C{smart-update}.
3337+ """
3338 queue_name = "reporter"
3339+ smart_update_interval = 60
3340+ smart_update_filename = "/usr/lib/landscape/smart-update"
3341
3342 def run(self):
3343 result = Deferred()
3344
3345- # First, handle any queued tasks.
3346+ # Run smart-update before anything else, to make sure that
3347+ # the SmartFacade will load freshly updated channels
3348+ result.addCallback(lambda x: self.run_smart_update())
3349+
3350+ # If the appropriate hash=>id db is not there, fetch it
3351+ result.addCallback(lambda x: self.fetch_hash_id_db())
3352+
3353+ # Attach the hash=>id database if available
3354+ result.addCallback(lambda x: self.use_hash_id_db())
3355+
3356+ # Now, handle any queued tasks.
3357 result.addCallback(lambda x: self.handle_tasks())
3358
3359 # Then, remove any expired hash=>id translation requests.
3360@@ -38,6 +58,105 @@
3361 result.callback(None)
3362 return result
3363
3364+ def fetch_hash_id_db(self):
3365+ """
3366+ Fetch the appropriate pre-canned database of hash=>id mappings
3367+ from the server. If the database is already present, it won't
3368+ be downloaded twice.
3369+
3370+ The format of the database filename is <uuid>_<codename>_<arch>,
3371+ and it will be downloaded from the HTTP directory set in
3372+ config.package_hash_id_url, or config.url/hash-id-databases if
3373+ the former is not set.
3374+
3375+ Fetch failures are handled gracefully and logged as appropriate.
3376+ """
3377+
3378+ def fetch_it(hash_id_db_filename):
3379+
3380+ if hash_id_db_filename is None:
3381+ # Couldn't determine which hash=>id database to fetch,
3382+ # just ignore the failure and go on
3383+ return
3384+
3385+ if os.path.exists(hash_id_db_filename):
3386+ # We don't download twice
3387+ return
3388+
3389+ base_url = self._get_hash_id_db_base_url()
3390+ if not base_url:
3391+ logging.warning("Can't determine the hash=>id database url")
3392+ return
3393+
3394+ # Cast to str as pycurl doesn't like unicode
3395+ url = str(base_url + os.path.basename(hash_id_db_filename))
3396+
3397+ def fetch_ok(data):
3398+ hash_id_db_fd = open(hash_id_db_filename, "w")
3399+ hash_id_db_fd.write(data)
3400+ hash_id_db_fd.close()
3401+ logging.info("Downloaded hash=>id database from %s" % url)
3402+
3403+ def fetch_error(failure):
3404+ exception = failure.value
3405+ logging.warning("Couldn't download hash=>id database: %s" %
3406+ str(exception))
3407+
3408+ result = fetch_async(url)
3409+ result.addCallback(fetch_ok)
3410+ result.addErrback(fetch_error)
3411+
3412+ return result
3413+
3414+ result = self._determine_hash_id_db_filename()
3415+ result.addCallback(fetch_it)
3416+ return result
3417+
3418+ def _get_hash_id_db_base_url(self):
3419+
3420+ base_url = self._config.get("package_hash_id_url")
3421+
3422+ if not base_url:
3423+
3424+ if not self._config.get("url"):
3425+ # We really have no idea where to download from
3426+ return None
3427+
3428+ # If config.url is http://host:123/path/to/message-system
3429+ # then we'll use http://host:123/path/to/hash-id-databases
3430+ base_url = urlparse.urljoin(self._config.url.rstrip("/"),
3431+ "hash-id-databases")
3432+
3433+ return base_url.rstrip("/") + "/"
3434+
3435+ def run_smart_update(self):
3436+ """Run smart-update and log a warning in case of non-zero exit code.
3437+
3438+ @return: a deferred returning (out, err, code)
3439+ """
3440+ result = getProcessOutputAndValue(self.smart_update_filename,
3441+ args=("--after", "%d" %
3442+ self.smart_update_interval))
3443+
3444+ def callback((out, err, code)):
3445+ # smart-update --after N will exit with error code 1 when it
3446+ # doesn't actually run the update code because to enough time
3447+            # doesn't actually run the update code because not enough time
3448+ smart_failed = False
3449+ if code != 0 and code != 1:
3450+ smart_failed = True
3451+ if code == 1 and err.strip() != "":
3452+ smart_failed = True
3453+ if smart_failed:
3454+ logging.warning("'%s' exited with status %d (%s)" % (
3455+ self.smart_update_filename, code, err))
3456+            logging.debug("'%s' exited with status %d (out='%s', err='%s')" % (
3457+ self.smart_update_filename, code, out, err))
3458+ return (out, err, code)
3459+
3460+ result.addCallback(callback)
3461+ return result
3462+
3463 def handle_task(self, task):
3464 message = task.data
3465 if message["type"] == "package-ids":
3466@@ -82,7 +201,23 @@
3467 self._store.clear_available()
3468 self._store.clear_available_upgrades()
3469 self._store.clear_installed()
3470- self._store.clear_hash_id_requests()
3471+
3472+ # Don't clear the hash_id_requests table because the messages
3473+ # associated with the existing requests might still have to be
3474+ # delivered, and if we clear the table and later create a new request,
3475+        # that new request could get the same id as one of the deleted ones,
3476+        # and when the pending message eventually gets delivered the reporter
3477+        # would think that the message is associated with the newly created
3478+        # request, as it would have the same id as the deleted request the
3479+        # message actually refers to. This would cause the ids in the message
3480+        # to be mapped to the wrong hashes.
3481+ #
3482+ # This problem would happen for example when switching the client from
3483+ # one Landscape server to another, because the uuid-changed event would
3484+ # cause a resynchronize task to be created by the monitor. See #417122.
3485+ #
3486+ #self._store.clear_hash_id_requests()
3487+
3488 return succeed(None)
3489
3490 def _handle_unknown_packages(self, hashes):
3491
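`_get_hash_id_db_base_url` relies on a subtlety of `urljoin`: a relative reference replaces the last path segment of the base URL, which is why the trailing slash is stripped first. In Python 3 terms (the client itself is Python 2, where the function lives in the `urlparse` module):

```python
from urllib.parse import urljoin

# A relative reference replaces the base URL's last path segment, so
# ".../message-system" becomes ".../hash-id-databases".
base = "http://host:123/path/to/message-system"
print(urljoin(base, "hash-id-databases"))
# http://host:123/path/to/hash-id-databases

# Had the base kept a trailing slash, the segment would be appended
# instead of replaced -- hence the rstrip("/") in the reporter.
print(urljoin(base + "/", "hash-id-databases"))
# http://host:123/path/to/message-system/hash-id-databases
```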
3492=== modified file 'landscape/package/skeleton.py'
3493--- landscape/package/skeleton.py 2008-09-08 16:35:57 +0000
3494+++ landscape/package/skeleton.py 2009-09-21 16:05:18 +0000
3495@@ -45,7 +45,7 @@
3496 return digest.digest()
3497
3498
3499-def build_skeleton(pkg, with_info=False):
3500+def build_skeleton(pkg, with_info=False, with_unicode=False):
3501 if not build_skeleton.inited:
3502 build_skeleton.inited = True
3503 global DebPackage, DebNameProvides, DebOrDepends
3504@@ -57,7 +57,11 @@
3505 if not isinstance(pkg, DebPackage):
3506 raise PackageTypeError()
3507
3508- skeleton = PackageSkeleton(DEB_PACKAGE, pkg.name, pkg.version)
3509+ if with_unicode:
3510+ skeleton = PackageSkeleton(DEB_PACKAGE, unicode(pkg.name),
3511+ unicode(pkg.version))
3512+ else:
3513+ skeleton = PackageSkeleton(DEB_PACKAGE, pkg.name, pkg.version)
3514 relations = set()
3515 for relation in pkg.provides:
3516 if isinstance(relation, DebNameProvides):
3517
3518=== modified file 'landscape/package/store.py'
3519--- landscape/package/store.py 2009-03-18 20:42:05 +0000
3520+++ landscape/package/store.py 2009-09-21 16:05:18 +0000
3521@@ -1,5 +1,5 @@
3522+"""Provide access to the persistent data used by L{PackageTaskHandler}s."""
3523 import time
3524-import os
3525
3526 try:
3527 import sqlite3
3528@@ -13,6 +13,10 @@
3529 """Raised for unknown hash id requests."""
3530
3531
3532+class InvalidHashIdDb(Exception):
3533+ """Raised when trying to add an invalid hash=>id lookaside database."""
3534+
3535+
3536 def with_cursor(method):
3537 """Decorator that encloses the method in a database transaction.
3538
3539@@ -22,6 +26,7 @@
3540 the autocommit mode, we explicitly terminate transactions and enforce
3541 cursor closing with this decorator.
3542 """
3543+
3544 def inner(self, *args, **kwargs):
3545 try:
3546 cursor = self._db.cursor()
3547@@ -37,21 +42,34 @@
3548 return inner
3549
3550
3551-class PackageStore(object):
3552+class HashIdStore(object):
3553+ """Persist package hash=>id mappings in a file.
3554+
3555+ The implementation uses a SQLite database as backend, with a single
3556+ table called "hash", whose schema is defined in L{ensure_hash_id_schema}.
3557+ """
3558
3559 def __init__(self, filename):
3560- self._db = sqlite3.connect(filename)
3561- ensure_schema(self._db)
3562+ """
3563+ @param filename: The file where the mappings are persisted to.
3564+ """
3565+ self._filename = filename
3566+ self._db = sqlite3.connect(self._filename)
3567+ ensure_hash_id_schema(self._db)
3568
3569 @with_cursor
3570 def set_hash_ids(self, cursor, hash_ids):
3571+ """Set the ids of a set of hashes.
3572+
3573+ @param hash_ids: a C{dict} of hash=>id mappings.
3574+ """
3575 for hash, id in hash_ids.iteritems():
3576 cursor.execute("REPLACE INTO hash VALUES (?, ?)",
3577 (id, buffer(hash)))
3578
3579 @with_cursor
3580 def get_hash_id(self, cursor, hash):
3581- assert isinstance(hash, basestring)
3582+ """Return the id associated to C{hash}, or C{None} if not available."""
3583 cursor.execute("SELECT id FROM hash WHERE hash=?", (buffer(hash),))
3584 value = cursor.fetchone()
3585 if value:
3586@@ -59,7 +77,14 @@
3587 return None
3588
3589 @with_cursor
3590+ def get_hash_ids(self, cursor):
3591+ """Return a C{dict} holding all the available hash=>id mappings."""
3592+ cursor.execute("SELECT hash, id FROM hash")
3593+ return dict([(str(row[0]), row[1]) for row in cursor.fetchall()])
3594+
3595+ @with_cursor
3596 def get_id_hash(self, cursor, id):
3597+ """Return the hash associated to C{id}, or C{None} if not available."""
3598 assert isinstance(id, (int, long))
3599 cursor.execute("SELECT hash FROM hash WHERE id=?", (id,))
3600 value = cursor.fetchone()
3601@@ -69,9 +94,102 @@
3602
3603 @with_cursor
3604 def clear_hash_ids(self, cursor):
3605+ """Delete all hash=>id mappings."""
3606 cursor.execute("DELETE FROM hash")
3607
3608 @with_cursor
3609+ def check_sanity(self, cursor):
3610+ """Check database integrity.
3611+
3612+ @raise: L{InvalidHashIdDb} if the filename passed to the constructor is
3613+ not a SQLite database or does not have a table called "hash" with
3614+ a compatible schema.
3615+ """
3616+ try:
3617+ cursor.execute("SELECT id FROM hash WHERE hash=?", ("",))
3618+ except sqlite3.DatabaseError, e:
3619+ raise InvalidHashIdDb(self._filename)
3620+
3621+
3622+class PackageStore(HashIdStore):
3623+ """Persist data about system packages and L{PackageTaskHandler}'s tasks.
3624+
3625+ This class extends L{HashIdStore} by adding tables to the SQLite database
3626+ backend for storing information about the status of the system packages and
3627+ about the tasks to be performed by L{PackageTaskHandler}s.
3628+
3629+ The additional tables and schemas are defined in L{ensure_package_schema}.
3630+ """
3631+
3632+ def __init__(self, filename):
3633+ """
3634+ @param filename: The file where data is persisted to.
3635+ """
3636+ super(PackageStore, self).__init__(filename)
3637+ self._hash_id_stores = []
3638+ ensure_package_schema(self._db)
3639+
3640+ def add_hash_id_db(self, filename):
3641+ """
3642+ Attach an additional "lookaside" hash=>id database.
3643+
3644+ This method can be called more than once to attach several
3645+ hash=>id databases, which will be queried *before* the main
3646+ database, in the same order they were added.
3647+
3648+ If C{filename} is not a SQLite database or does not have a
3649+ table called "hash" with a compatible schema, L{InvalidHashIdDb}
3650+ is raised.
3651+
3652+ @param filename: a secondary SQLite database to look up pre-canned
3653+ hash=>id mappings.
3654+ """
3655+ hash_id_store = HashIdStore(filename)
3656+
3657+ try:
3658+ hash_id_store.check_sanity()
3659+ except InvalidHashIdDb, e:
3660+ # propagate the error
3661+ raise e
3662+
3663+ self._hash_id_stores.append(hash_id_store)
3664+
3665+ def has_hash_id_db(self):
3666+ """Return C{True} if one or more lookaside databases are attached."""
3667+ return len(self._hash_id_stores) > 0
3668+
3669+ def get_hash_id(self, hash):
3670+ """Return the id associated to C{hash}, or C{None} if not available.
3671+
3672+ This method composes the L{HashIdStore.get_hash_id} methods of all
3673+ the attached lookaside databases, falling back to the main one, as
3674+ described in L{add_hash_id_db}.
3675+ """
3676+ assert isinstance(hash, basestring)
3677+
3678+ # Check if we can find the hash=>id mapping in the lookaside stores
3679+ for store in self._hash_id_stores:
3680+ id = store.get_hash_id(hash)
3681+ if id:
3682+ return id
3683+
3684+ # Fall back to the locally-populated db
3685+ return HashIdStore.get_hash_id(self, hash)
3686+
3687+ def get_id_hash(self, id):
3688+ """Return the hash associated to C{id}, or C{None} if not available.
3689+
3690+ This method composes the L{HashIdStore.get_id_hash} methods of all
3691+ the attached lookaside databases, falling back to the main one in
3692+ case the hash associated to C{id} is not found in any of them.
3693+ """
3694+ for store in self._hash_id_stores:
3695+ hash = store.get_id_hash(id)
3696+ if hash is not None:
3697+ return hash
3698+ return HashIdStore.get_id_hash(self, id)
3699+
3700+ @with_cursor
3701 def add_available(self, cursor, ids):
3702 for id in ids:
3703 cursor.execute("REPLACE INTO available VALUES (?)", (id,))
3704@@ -244,15 +362,34 @@
3705 cursor.execute("DELETE FROM task WHERE id=?", (self.id,))
3706
3707
3708-def ensure_schema(db):
3709+def ensure_hash_id_schema(db):
3710+ """Create all tables needed by a L{HashIdStore}.
3711+
3712+ @param db: A connection to a SQLite database.
3713+ """
3714+ cursor = db.cursor()
3715+ try:
3716+ cursor.execute("CREATE TABLE hash"
3717+ " (id INTEGER PRIMARY KEY, hash BLOB UNIQUE)")
3718+ except (sqlite3.OperationalError, sqlite3.DatabaseError):
3719+ cursor.close()
3720+ db.rollback()
3721+ else:
3722+ cursor.close()
3723+ db.commit()
3724+
3725+
3726+def ensure_package_schema(db):
3727+ """Create all tables needed by a L{PackageStore}.
3728+
3729+ @param db: A connection to a SQLite database.
3730+ """
3731 # FIXME This needs a "patch" table with a "version" column which will
3732 # help with upgrades. It should also be used to decide when to
3733 # create the schema from the ground up, rather than that using
3734 # try block.
3735 cursor = db.cursor()
3736 try:
3737- cursor.execute("CREATE TABLE hash"
3738- " (id INTEGER PRIMARY KEY, hash BLOB UNIQUE)")
3739 cursor.execute("CREATE TABLE available"
3740 " (id INTEGER PRIMARY KEY)")
3741 cursor.execute("CREATE TABLE available_upgrade"
3742
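The lookaside lookup order described in the `add_hash_id_db` docstring above (attached databases queried first, in insertion order, then the main store) can be sketched with plain dicts standing in for the SQLite-backed `HashIdStore`. All names here are illustrative stand-ins, not the real landscape.package.store API:

```python
# Minimal sketch of PackageStore.get_hash_id's composition: each
# attached lookaside mapping is consulted in the order it was added,
# and only on a miss does the lookup fall back to the main store.

class LookasideLookup(object):
    def __init__(self, main):
        self._main = main          # locally populated hash=>id mappings
        self._lookasides = []      # pre-canned databases, queried first

    def add_lookaside(self, mapping):
        self._lookasides.append(mapping)

    def get_hash_id(self, hash):
        for mapping in self._lookasides:
            id = mapping.get(hash)
            if id is not None:
                return id
        return self._main.get(hash)

store = LookasideLookup({"h1": 1})
store.add_lookaside({"h2": 2})
store.add_lookaside({"h2": 99, "h3": 3})  # "h2" shadowed by the first lookaside
assert store.get_hash_id("h2") == 2
assert store.get_hash_id("h3") == 3
assert store.get_hash_id("h1") == 1
```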
3743=== modified file 'landscape/package/taskhandler.py'
3744--- landscape/package/taskhandler.py 2008-09-08 16:35:57 +0000
3745+++ landscape/package/taskhandler.py 2009-09-21 16:05:18 +0000
3746@@ -1,12 +1,14 @@
3747 import os
3748+import logging
3749
3750 from twisted.internet.defer import Deferred, succeed
3751
3752 from landscape.lib.dbus_util import get_bus
3753 from landscape.lib.lock import lock_path, LockError
3754 from landscape.lib.log import log_failure
3755+from landscape.lib.command import run_command, CommandError
3756 from landscape.deployment import Configuration, init_logging
3757-from landscape.package.store import PackageStore
3758+from landscape.package.store import PackageStore, InvalidHashIdDb
3759 from landscape.broker.remote import RemoteBroker
3760
3761
3762@@ -14,11 +16,13 @@
3763
3764 queue_name = "default"
3765
3766- def __init__(self, package_store, package_facade, remote_broker):
3767+ def __init__(self, package_store, package_facade, remote_broker, config):
3768 self._store = package_store
3769 self._facade = package_facade
3770 self._broker = remote_broker
3771+ self._config = config
3772 self._channels_reloaded = False
3773+ self._server_uuid = None
3774
3775 def ensure_channels_reloaded(self):
3776 if not self._channels_reloaded:
3777@@ -53,6 +57,79 @@
3778 def handle_task(self, task):
3779 return succeed(None)
3780
3781+ def use_hash_id_db(self):
3782+ """
3783+ Attach the appropriate pre-canned hash=>id database to our store.
3784+ """
3785+
3786+ def use_it(hash_id_db_filename):
3787+
3788+ if hash_id_db_filename is None:
3789+ # Couldn't determine which hash=>id database to use,
3790+ # just ignore the failure and go on
3791+ return
3792+
3793+ if not os.path.exists(hash_id_db_filename):
3794+ # The appropriate database isn't there, but never mind
3795+ # and just go on
3796+ return
3797+
3798+ try:
3799+ self._store.add_hash_id_db(hash_id_db_filename)
3800+ except InvalidHashIdDb:
3801+ # The appropriate database is there but broken,
3802+ # let's remove it and go on
3803+ logging.warning("Invalid hash=>id database %s" %
3804+ hash_id_db_filename)
3805+ os.remove(hash_id_db_filename)
3806+ return
3807+
3808+ result = self._determine_hash_id_db_filename()
3809+ result.addCallback(use_it)
3810+ return result
3811+
3812+ def _determine_hash_id_db_filename(self):
3813+ """Build up the filename of the hash=>id database to use.
3814+
3815+ @return: a deferred resulting in the filename to use or C{None}
3816+ in case of errors.
3817+ """
3818+
3819+ def got_server_uuid(server_uuid):
3820+
3821+ warning = "Couldn't determine which hash=>id database to use: %s"
3822+
3823+ if server_uuid is None:
3824+ logging.warning(warning % "server UUID not available")
3825+ return None
3826+
3827+ try:
3828+ # XXX replace this with a L{SmartFacade} method
3829+ codename = run_command("lsb_release -cs")
3830+ except CommandError, error:
3831+ logging.warning(warning % str(error))
3832+ return None
3833+
3834+ arch = self._facade.get_arch()
3835+ if arch is None:
3836+ # The Smart code should always return a proper string, so this
3837+ # branch shouldn't get executed at all. However, this check is
3838+ # kept as an extra paranoid sanity check.
3839+ logging.warning(warning % "unknown dpkg architecture")
3840+ return None
3841+
3842+ package_directory = os.path.join(self._config.data_path, "package")
3843+ hash_id_db_directory = os.path.join(package_directory, "hash-id")
3844+
3845+ return os.path.join(hash_id_db_directory,
3846+ "%s_%s_%s" % (server_uuid,
3847+ codename,
3848+ arch))
3849+
3850+ result = self._broker.get_server_uuid()
3851+ result.addCallback(got_server_uuid)
3852+ return result
3853+
3854
3855 def run_task_handler(cls, args, reactor=None):
3856 from twisted.internet.glib2reactor import install
3857@@ -69,8 +146,10 @@
3858 config.load(args)
3859
3860 package_directory = os.path.join(config.data_path, "package")
3861- if not os.path.isdir(package_directory):
3862- os.mkdir(package_directory)
3863+ hash_id_directory = os.path.join(package_directory, "hash-id")
3864+ for directory in [package_directory, hash_id_directory]:
3865+ if not os.path.isdir(directory):
3866+ os.mkdir(directory)
3867
3868 lock_filename = os.path.join(package_directory, program_name + ".lock")
3869 try:
3870@@ -98,7 +177,7 @@
3871 package_facade = SmartFacade()
3872 remote = RemoteBroker(get_bus(config.bus))
3873
3874- handler = cls(package_store, package_facade, remote)
3875+ handler = cls(package_store, package_facade, remote, config)
3876
3877 def got_err(failure):
3878 log_failure(failure)
3879
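The filename convention built by `_determine_hash_id_db_filename` above, `<data_path>/package/hash-id/<server-uuid>_<codename>_<arch>`, can be sketched as a pure function (the values below are illustrative, not a real data path or UUID):

```python
import os

# Sketch of the hash=>id database path construction: data_path joined
# with "package/hash-id" and a name composed of server UUID, lsb_release
# codename and dpkg architecture.

def hash_id_db_filename(data_path, server_uuid, codename, arch):
    package_directory = os.path.join(data_path, "package")
    hash_id_db_directory = os.path.join(package_directory, "hash-id")
    return os.path.join(hash_id_db_directory,
                        "%s_%s_%s" % (server_uuid, codename, arch))

assert hash_id_db_filename("/var/lib/landscape", "uuid", "hardy", "i386") == \
    "/var/lib/landscape/package/hash-id/uuid_hardy_i386"
```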
3880=== modified file 'landscape/package/tests/helpers.py'
3881--- landscape/package/tests/helpers.py 2008-09-08 16:35:57 +0000
3882+++ landscape/package/tests/helpers.py 2009-09-21 16:05:18 +0000
3883@@ -21,15 +21,15 @@
3884 def set_up(self, test_case):
3885 super(SmartFacadeHelper, self).set_up(test_case)
3886
3887- from landscape.package.facade import SmartFacade
3888+ from landscape.package.facade import SmartFacade, make_deb_dir_channel
3889
3890 class Facade(SmartFacade):
3891 repository_dir = test_case.repository_dir
3892
3893 def smart_initialized(self):
3894- smart.sysconf.set("channels",
3895- {"alias": {"type": "deb-dir",
3896- "path": test_case.repository_dir}})
3897+ self.reset_channels()
3898+ self.add_channel("alias",
3899+ make_deb_dir_channel(test_case.repository_dir))
3900
3901 test_case.Facade = Facade
3902 test_case.facade = Facade({"datadir": test_case.smart_dir})
3903
3904=== added directory 'landscape/package/tests/repository'
3905=== added directory 'landscape/package/tests/repository/dists'
3906=== added directory 'landscape/package/tests/repository/dists/hardy'
3907=== added file 'landscape/package/tests/repository/dists/hardy/Release'
3908--- landscape/package/tests/repository/dists/hardy/Release 1970-01-01 00:00:00 +0000
3909+++ landscape/package/tests/repository/dists/hardy/Release 2009-09-21 16:05:18 +0000
3910@@ -0,0 +1,22 @@
3911+Origin: Ubuntu
3912+Label: Ubuntu
3913+Codename: hardy
3914+Version: 8.04
3915+Date: Mon, 30 Mar 2009 19:08:02 +0000
3916+Architectures: i386 amd64
3917+Components: main
3918+Description: Test Repository
3919+MD5Sum:
3920+ 356312bc1c0ab2b8dbe5c67f09879497 827 main/binary-i386/Packages
3921+ ad2d9b94381264ce25cda7cfa1b2da03 555 main/binary-i386/Packages.gz
3922+ 2335af6bede2d86bdf542595d75e0144 107 main/binary-i386/Release
3923+ f0fd5c1bb18584cf07f9bf4a9f2e6d92 605 main/binary-amd64/Packages
3924+ 98860034ca03a73a9face10af8238a81 407 main/binary-amd64/Packages.gz
3925+ 771950159a4e9c8bfca599f89f77c0e2 108 main/binary-amd64/Release
3926+SHA1:
3927+ 1f39494284f8da4a1cdd788a3d91a048c5edf7f5 827 main/binary-i386/Packages
3928+ e79a66d7543f24f77a9ffe1409431ae717781375 555 main/binary-i386/Packages.gz
3929+ 0d02cbecfee1760b855ec60ce8fc7f6a910fc196 107 main/binary-i386/Release
3930+ 37ba69be70f4a79506038c0124293187bc879014 605 main/binary-amd64/Packages
3931+ 65dca66c72b18d59cdcf671775104e86cbe2123a 407 main/binary-amd64/Packages.gz
3932+ cc42af4b0ec4b3e35cd3e1562fef6ef14b20d712 108 main/binary-amd64/Release
3933
3934=== added directory 'landscape/package/tests/repository/dists/hardy/main'
3935=== added directory 'landscape/package/tests/repository/dists/hardy/main/binary-amd64'
3936=== added file 'landscape/package/tests/repository/dists/hardy/main/binary-amd64/Packages'
3937--- landscape/package/tests/repository/dists/hardy/main/binary-amd64/Packages 1970-01-01 00:00:00 +0000
3938+++ landscape/package/tests/repository/dists/hardy/main/binary-amd64/Packages 2009-09-21 16:05:18 +0000
3939@@ -0,0 +1,17 @@
3940+Package: libclthreads2
3941+Source: clthreads
3942+Version: 2.4.0-1
3943+Architecture: amd64
3944+Maintainer: Debian Multimedia Maintainers <pkg-multimedia-maintainers@lists.alioth.debian.org>
3945+Installed-Size: 80
3946+Depends: libc6 (>= 2.3.2), libgcc1 (>= 1:4.1.1), libstdc++6 (>= 4.1.1)
3947+Priority: extra
3948+Section: libs
3949+Filename: pool/main/c/clthreads/libclthreads2_2.4.0-1_amd64.deb
3950+Size: 12938
3951+SHA1: dc6cb78896642dd436851888b8bd4454ab8f421b
3952+MD5sum: 19960adb88e178fb7eb4997b47eee05b
3953+Description: POSIX threads C++ access library
3954+ C++ wrapper library around the POSIX threads API. This package includes
3955+ the shared library object.
3956+
3957
3958=== added file 'landscape/package/tests/repository/dists/hardy/main/binary-amd64/Release'
3959--- landscape/package/tests/repository/dists/hardy/main/binary-amd64/Release 1970-01-01 00:00:00 +0000
3960+++ landscape/package/tests/repository/dists/hardy/main/binary-amd64/Release 2009-09-21 16:05:18 +0000
3961@@ -0,0 +1,6 @@
3962+Version: 8.04
3963+Component: main
3964+Origin: Ubuntu
3965+Label: Ubuntu
3966+Architecture: amd64
3967+Description: Test Repository
3968
3969=== added directory 'landscape/package/tests/repository/dists/hardy/main/binary-i386'
3970=== added file 'landscape/package/tests/repository/dists/hardy/main/binary-i386/Packages'
3971--- landscape/package/tests/repository/dists/hardy/main/binary-i386/Packages 1970-01-01 00:00:00 +0000
3972+++ landscape/package/tests/repository/dists/hardy/main/binary-i386/Packages 2009-09-21 16:05:18 +0000
3973@@ -0,0 +1,20 @@
3974+Package: syslinux
3975+Version: 2:3.73+dfsg-2
3976+Architecture: i386
3977+Maintainer: Daniel Baumann <daniel@debian.org>
3978+Installed-Size: 140
3979+Depends: libc6 (>= 2.7-1), syslinux-common (= 2:3.73+dfsg-2), dosfstools, mtools
3980+Homepage: http://syslinux.zytor.com/
3981+Priority: optional
3982+Section: utils
3983+Filename: pool/main/s/syslinux/syslinux_3.73+dfsg-2_i386.deb
3984+Size: 70384
3985+SHA1: 6edf6a7e81a5e9759270872e45c782394dfa85e5
3986+MD5sum: ae8baa9f6c6a172a3b127af1e6675046
3987+Description: utilities for the syslinux bootloaders
3988+ SYSLINUX is a suite of lightweight bootloaders, currently supporting DOS FAT
3989+ filesystems (SYSLINUX), Linux ext2/ext3 filesystems (EXTLINUX), PXE network
3990+ booting (PXELINUX), or bootable "El Torito" ISO 9660 CD-ROMs (ISOLINUX). It
3991+ also includes a tool, MEMDISK, which loads legacy operating systems (such as
3992+ DOS) from these media.
3993+
3994
3995=== added file 'landscape/package/tests/repository/dists/hardy/main/binary-i386/Release'
3996--- landscape/package/tests/repository/dists/hardy/main/binary-i386/Release 1970-01-01 00:00:00 +0000
3997+++ landscape/package/tests/repository/dists/hardy/main/binary-i386/Release 2009-09-21 16:05:18 +0000
3998@@ -0,0 +1,6 @@
3999+Version: 8.04
4000+Component: main
4001+Origin: Ubuntu
4002+Label: Ubuntu
4003+Architecture: i386
4004+Description: Test Repository
4005
4006=== modified file 'landscape/package/tests/test_changer.py'
4007--- landscape/package/tests/test_changer.py 2008-09-08 16:35:57 +0000
4008+++ landscape/package/tests/test_changer.py 2009-09-21 16:05:18 +0000
4009@@ -7,14 +7,12 @@
4010
4011 from smart.cache import Provides
4012
4013-from landscape.lib.lock import lock_path
4014-
4015 from landscape.package.changer import (
4016 PackageChanger, main, find_changer_command, UNKNOWN_PACKAGE_DATA_TIMEOUT)
4017 from landscape.package.store import PackageStore
4018 from landscape.package.facade import (
4019- SmartFacade, DependencyError, TransactionError, SmartError)
4020-from landscape.broker.remote import RemoteBroker
4021+ DependencyError, TransactionError, SmartError)
4022+from landscape.deployment import Configuration
4023
4024 from landscape.tests.mocker import ANY
4025 from landscape.tests.helpers import (
4026@@ -31,7 +29,8 @@
4027 super(PackageChangerTest, self).setUp()
4028
4029 self.store = PackageStore(self.makeFile())
4030- self.changer = PackageChanger(self.store, self.facade, self.remote)
4031+ self.config = Configuration()
4032+ self.changer = PackageChanger(self.store, self.facade, self.remote, self.config)
4033
4034 service = self.broker_service
4035 service.message_store.set_accepted_types(["change-packages-result"])
4036@@ -420,18 +419,40 @@
4037 "REPORTER RUN")
4038 return result.addCallback(got_result)
4039
4040- def test_set_effective_uid_when_running_as_root(self):
4041+ def test_set_effective_uid_and_gid_when_running_as_root(self):
4042 """
4043 After the package changer has run, we want the package-reporter to run
4044 to report the recent changes. If we're running as root, we want to
4045- change to the "landscape" user.
4046+ change to the "landscape" user and "landscape" group. We also want to
4047+ deinitialize Smart to let the reporter run smart-update cleanly.
4048 """
4049+
4050 # We are running as root
4051 getuid_mock = self.mocker.replace("os.getuid")
4052 getuid_mock()
4053 self.mocker.result(0)
4054
4055- # We want to return a known uid
4056+ # The order matters (first smart then gid and finally uid)
4057+ self.mocker.order()
4058+
4059+ # Deinitialize smart
4060+ facade_mock = self.mocker.patch(self.facade)
4061+ facade_mock.deinit()
4062+
4063+ # We want to return a known gid
4064+ grnam_mock = self.mocker.replace("grp.getgrnam")
4065+ grnam_mock("landscape")
4066+
4067+ class FakeGroup(object):
4068+ gr_gid = 199
4069+
4070+ self.mocker.result(FakeGroup())
4071+
4072+ # First the changer should change the group
4073+ setgid_mock = self.mocker.replace("os.setgid")
4074+ setgid_mock(199)
4075+
4076+ # And a known uid as well
4077 pwnam_mock = self.mocker.replace("pwd.getpwnam")
4078 pwnam_mock("landscape")
4079
4080@@ -440,11 +461,11 @@
4081
4082 self.mocker.result(FakeUser())
4083
4084- # And now, the changer should change the user
4085+ # And now the user as well
4086 setuid_mock = self.mocker.replace("os.setuid")
4087 setuid_mock(199)
4088
4089- # Finally, we don't really want the package changer to run.
4090+ # Finally, we don't really want the package reporter to run.
4091 system_mock = self.mocker.replace("os.system")
4092 system_mock(ANY)
4093
4094@@ -457,6 +478,29 @@
4095 return self.changer.run()
4096
4097
4098+ def test_run(self):
4099+ changer_mock = self.mocker.patch(self.changer)
4100+
4101+ self.mocker.order()
4102+
4103+ results = [Deferred() for i in range(2)]
4104+
4105+ changer_mock.use_hash_id_db()
4106+ self.mocker.result(results[0])
4107+
4108+ changer_mock.handle_tasks()
4109+ self.mocker.result(results[1])
4110+
4111+ self.mocker.replay()
4112+
4113+ self.changer.run()
4114+
4115+ # It must raise an error because deferreds weren't yet fired.
4116+ self.assertRaises(AssertionError, self.mocker.verify)
4117+
4118+ for deferred in reversed(results):
4119+ deferred.callback(None)
4120+
4121 def test_dont_spawn_reporter_after_running_if_nothing_done(self):
4122 output_filename = self.makeFile("REPORTER NOT RUN")
4123 reporter_filename = self.makeFile("#!/bin/sh\necho REPORTER RUN > %s" %
4124@@ -478,7 +522,6 @@
4125 return result.addCallback(got_result)
4126
4127 def test_main(self):
4128- data_path = self.makeDir()
4129 self.mocker.order()
4130
4131 run_task_handler = self.mocker.replace("landscape.package.taskhandler"
4132
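The ordering that `test_set_effective_uid_and_gid_when_running_as_root` above enforces (deinitialize Smart first, then drop the group, then the user) can be sketched with recorded calls instead of real ones. The callables are stand-ins; the point is that setgid must happen while still root, before setuid:

```python
# Sketch of the privilege-drop order: Smart deinit, then gid, then uid.
# Calls are appended to a list rather than executed, so the order can
# be asserted directly.

calls = []

def drop_privileges(deinit, getgrnam, setgid, getpwnam, setuid):
    deinit()                        # release Smart state before switching
    setgid(getgrnam("landscape"))   # group first, while still root
    setuid(getpwnam("landscape"))   # uid last; setgid would fail after this

drop_privileges(lambda: calls.append("deinit"),
                lambda name: 199,
                lambda gid: calls.append(("setgid", gid)),
                lambda name: 199,
                lambda uid: calls.append(("setuid", uid)))
assert calls == ["deinit", ("setgid", 199), ("setuid", 199)]
```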
4133=== modified file 'landscape/package/tests/test_facade.py'
4134--- landscape/package/tests/test_facade.py 2009-04-09 17:09:50 +0000
4135+++ landscape/package/tests/test_facade.py 2009-09-21 16:05:18 +0000
4136@@ -1,14 +1,21 @@
4137-import gdbm
4138 import time
4139 import os
4140+import re
4141+import sys
4142
4143 from smart.control import Control
4144 from smart.cache import Provides
4145+from smart.const import NEVER
4146+
4147+from twisted.internet import reactor
4148+from twisted.internet.defer import Deferred
4149+from twisted.internet.utils import getProcessOutputAndValue
4150
4151 import smart
4152
4153 from landscape.package.facade import (
4154- SmartFacade, TransactionError, DependencyError, SmartError)
4155+ TransactionError, DependencyError, ChannelError, SmartError,
4156+ make_apt_deb_channel, make_deb_dir_channel)
4157
4158 from landscape.tests.mocker import ANY
4159 from landscape.tests.helpers import LandscapeTest
4160@@ -79,6 +86,11 @@
4161 pkg = self.facade.get_packages_by_name("name2")[0]
4162 self.assertEquals(self.facade.get_package_hash(pkg), HASH2)
4163
4164+ def test_get_package_hashes(self):
4165+ self.facade.reload_channels()
4166+ hashes = self.facade.get_package_hashes()
4167+ self.assertEquals(sorted(hashes), sorted([HASH1, HASH2, HASH3]))
4168+
4169 def test_get_package_by_hash(self):
4170 self.facade.reload_channels()
4171 pkg = self.facade.get_package_by_hash(HASH1)
4172@@ -91,7 +103,6 @@
4173 def test_reload_channels_clears_hash_cache(self):
4174 # Load hashes.
4175 self.facade.reload_channels()
4176- start = time.time()
4177
4178 # Hold a reference to packages.
4179 pkg1 = self.facade.get_packages_by_name("name1")[0]
4180@@ -393,3 +404,104 @@
4181
4182 self.facade.reload_channels()
4183 self.assertEquals(self.facade.get_package_hash(pkg), None)
4184+
4185+ def test_reset_add_get_channels(self):
4186+
4187+ channels = [("alias0", {"type": "test"}),
4188+ ("alias1", {"type": "test"})]
4189+
4190+ self.facade.reset_channels()
4191+
4192+ self.assertEquals(self.facade.get_channels(), {})
4193+
4194+ self.facade.add_channel(*channels[0])
4195+ self.facade.add_channel(*channels[1])
4196+
4197+ self.assertEquals(self.facade.get_channels(), dict(channels))
4198+
4199+ def test_make_channels(self):
4200+
4201+ channel0 = make_apt_deb_channel("http://my.url/dir", "hardy", "main")
4202+ channel1 = make_deb_dir_channel("/my/repo")
4203+
4204+ self.assertEquals(channel0, {"baseurl": "http://my.url/dir",
4205+ "distribution": "hardy",
4206+ "components": "main",
4207+ "type": "apt-deb"})
4208+
4209+ self.assertEquals(channel1, {"path": "/my/repo",
4210+ "type": "deb-dir"})
4211+
4212+ def test_get_arch(self):
4213+ """
4214+ The L{SmartFacade.get_arch} should return the system dpkg
4215+ architecture.
4216+ """
4217+ deferred = Deferred()
4218+
4219+ def do_test():
4220+ result = getProcessOutputAndValue("/usr/bin/dpkg",
4221+ ("--print-architecture",))
4222+ def callback((out, err, code)):
4223+ self.assertEquals(self.facade.get_arch(), out.strip())
4224+ result.addCallback(callback)
4225+ result.chainDeferred(deferred)
4226+
4227+ reactor.callWhenRunning(do_test)
4228+ return deferred
4229+
4230+ def test_set_arch_multiple_times(self):
4231+
4232+ repository_dir = os.path.join(os.path.dirname(__file__), "repository")
4233+
4234+ alias = "alias"
4235+ channel = {"baseurl": "file://%s" % repository_dir,
4236+ "distribution": "hardy",
4237+ "components": "main",
4238+ "type": "apt-deb"}
4239+
4240+ self.facade.set_arch("i386")
4241+ self.facade.reset_channels()
4242+ self.facade.add_channel(alias, channel)
4243+ self.facade.reload_channels()
4244+
4245+ pkgs = self.facade.get_packages()
4246+ self.assertEquals(len(pkgs), 1)
4247+ self.assertEquals(pkgs[0].name, "syslinux")
4248+
4249+ self.facade.deinit()
4250+ self.facade.set_arch("amd64")
4251+ self.facade.reset_channels()
4252+ self.facade.add_channel(alias, channel)
4253+ self.facade.reload_channels()
4254+
4255+ pkgs = self.facade.get_packages()
4256+ self.assertEquals(len(pkgs), 1)
4257+ self.assertEquals(pkgs[0].name, "libclthreads2")
4258+
4259+ def test_set_caching_with_reload_error(self):
4260+
4261+ alias = "alias"
4262+ channel = {"type": "deb-dir",
4263+ "path": "/does/not/exist"}
4264+
4265+ self.facade.reset_channels()
4266+ self.facade.add_channel(alias, channel)
4267+ self.facade.set_caching(NEVER)
4268+
4269+ self.assertRaises(ChannelError, self.facade.reload_channels)
4270+ self.facade._channels = {}
4271+
4272+ ignore_re = re.compile("\[Smart\].*'alias'.*/does/not/exist")
4273+
4274+ self.log_helper.ignored_exception_regexes = [ignore_re]
4275+
4276+ def test_init_landscape_plugins(self):
4277+ """
4278+ The landscape plugin which helps manage proxies is loaded when smart
4279+ is initialized: this sets a smart configuration variable and loads the
4280+ module.
4281+ """
4282+ self.facade.reload_channels()
4283+ self.assertTrue(smart.sysconf.get("use-landscape-proxies"))
4284+ self.assertIn("smart.plugins.landscape", sys.modules)
4285
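The channel dictionaries asserted in `test_make_channels` above are plain mappings; a minimal stand-in sketch (not the real `landscape.package.facade` implementation) reproduces the two shapes:

```python
# Sketch of the two channel-factory shapes exercised by the tests:
# an apt-deb channel keyed by baseurl/distribution/components, and a
# deb-dir channel keyed by a local path.

def make_apt_deb_channel(baseurl, distribution, components):
    return {"type": "apt-deb",
            "baseurl": baseurl,
            "distribution": distribution,
            "components": components}

def make_deb_dir_channel(path):
    return {"type": "deb-dir", "path": path}

assert make_apt_deb_channel("http://my.url/dir", "hardy", "main") == {
    "baseurl": "http://my.url/dir",
    "distribution": "hardy",
    "components": "main",
    "type": "apt-deb"}
assert make_deb_dir_channel("/my/repo") == {"path": "/my/repo",
                                            "type": "deb-dir"}
```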
4286=== modified file 'landscape/package/tests/test_reporter.py'
4287--- landscape/package/tests/test_reporter.py 2008-09-08 16:35:57 +0000
4288+++ landscape/package/tests/test_reporter.py 2009-09-21 16:05:18 +0000
4289@@ -1,10 +1,13 @@
4290 import glob
4291 import sys
4292 import os
4293+import logging
4294
4295 from twisted.internet.defer import Deferred
4296+from twisted.internet import reactor
4297
4298-from landscape.lib.lock import lock_path
4299+from landscape.lib.fetch import fetch_async, FetchError
4300+from landscape.lib.command import CommandError
4301
4302 from landscape.package.store import PackageStore, UnknownHashIDRequest
4303 from landscape.package.reporter import (
4304@@ -12,6 +15,7 @@
4305 from landscape.package import reporter
4306 from landscape.package.facade import SmartFacade
4307
4308+from landscape.deployment import Configuration
4309 from landscape.broker.remote import RemoteBroker
4310
4311 from landscape.package.tests.helpers import (
4312@@ -30,7 +34,8 @@
4313 super(PackageReporterTest, self).setUp()
4314
4315 self.store = PackageStore(self.makeFile())
4316- self.reporter = PackageReporter(self.store, self.facade, self.remote)
4317+ self.config = Configuration()
4318+ self.reporter = PackageReporter(self.store, self.facade, self.remote, self.config)
4319
4320 def set_pkg2_upgrades_pkg1(self):
4321 previous = self.Facade.channels_reloaded
4322@@ -219,6 +224,375 @@
4323 deferred = self.reporter.handle_tasks()
4324 return deferred.addCallback(got_result)
4325
4326+ def test_fetch_hash_id_db(self):
4327+
4328+ # Assume package_hash_id_url is set
4329+ self.config.data_path = self.makeDir()
4330+ self.config.package_hash_id_url = "http://fake.url/path/"
4331+ os.makedirs(os.path.join(self.config.data_path, "package", "hash-id"))
4332+ hash_id_db_filename = os.path.join(self.config.data_path, "package",
4333+ "hash-id", "uuid_codename_arch")
4334+
4335+ # Fake uuid, codename and arch
4336+ message_store = self.broker_service.message_store
4337+ message_store.set_server_uuid("uuid")
4338+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4339+ command_mock("lsb_release -cs")
4340+ self.mocker.result("codename")
4341+ self.facade.set_arch("arch")
4342+
4343+ # Let's say fetch_async is successful
4344+ hash_id_db_url = self.config.package_hash_id_url + "uuid_codename_arch"
4345+ fetch_async_mock = self.mocker.replace("landscape.lib.fetch.fetch_async")
4346+ fetch_async_mock(hash_id_db_url)
4347+ fetch_async_result = Deferred()
4348+ fetch_async_result.callback("hash-ids")
4349+ self.mocker.result(fetch_async_result)
4350+
4351+ # The download should be properly logged
4352+ logging_mock = self.mocker.replace("logging.info")
4353+ logging_mock("Downloaded hash=>id database from %s" % hash_id_db_url)
4354+ self.mocker.result(None)
4355+
4356+ # We don't have our hash=>id database yet
4357+ self.assertFalse(os.path.exists(hash_id_db_filename))
4358+
4359+ # Now go!
4360+ self.mocker.replay()
4361+ result = self.reporter.fetch_hash_id_db()
4362+
4363+ # Check the database
4364+ def callback(ignored):
4365+ self.assertTrue(os.path.exists(hash_id_db_filename))
4366+ self.assertEquals(open(hash_id_db_filename).read(), "hash-ids")
4367+ result.addCallback(callback)
4368+
4369+ return result
4370+
4371+ def test_fetch_hash_id_db_does_not_download_twice(self):
4372+
4373+ # Let's say that the hash=>id database is already there
4374+ self.config.package_hash_id_url = "http://fake.url/path/"
4375+ self.config.data_path = self.makeDir()
4376+ os.makedirs(os.path.join(self.config.data_path, "package", "hash-id"))
4377+ hash_id_db_filename = os.path.join(self.config.data_path, "package",
4378+ "hash-id", "uuid_codename_arch")
4379+ open(hash_id_db_filename, "w").write("test")
4380+
4381+ # Fake uuid, codename and arch
4382+ message_store = self.broker_service.message_store
4383+ message_store.set_server_uuid("uuid")
4384+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4385+ command_mock("lsb_release -cs")
4386+ self.mocker.result("codename")
4387+ self.facade.set_arch("arch")
4388+
4389+ # Intercept any call to fetch_async
4390+ fetch_async_mock = self.mocker.replace("landscape.lib.fetch.fetch_async")
4391+ fetch_async_mock(ANY)
4392+
4393+ # Go!
4394+ self.mocker.replay()
4395+ result = self.reporter.fetch_hash_id_db()
4396+
4397+ def callback(ignored):
4398+ # Check that fetch_async hasn't been called
4399+ self.assertRaises(AssertionError, self.mocker.verify)
4400+ fetch_async(None)
4401+
4402+ # The hash=>id database is still there
4403+ self.assertEquals(open(hash_id_db_filename).read(), "test")
4404+
4405+ result.addCallback(callback)
4406+
4407+ return result
4408+
4409+ def test_fetch_hash_id_db_undetermined_server_uuid(self):
4410+ """
4411+ If the server-uuid can't be determined for some reason, no download
4412+ should be attempted and the failure should be properly logged.
4413+ """
4414+ message_store = self.broker_service.message_store
4415+ message_store.set_server_uuid(None)
4416+
4417+ logging_mock = self.mocker.replace("logging.warning")
4418+ logging_mock("Couldn't determine which hash=>id database to use: "
4419+ "server UUID not available")
4420+ self.mocker.result(None)
4421+ self.mocker.replay()
4422+
4423+ result = self.reporter.fetch_hash_id_db()
4424+ return result
4425+
4426+ def test_fetch_hash_id_db_undetermined_codename(self):
4427+
4428+ # Fake uuid
4429+ message_store = self.broker_service.message_store
4430+ message_store.set_server_uuid("uuid")
4431+
4432+ # Undetermined codename
4433+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4434+ command_mock("lsb_release -cs")
4435+ command_error = CommandError("lsb_release -cs", 1, "error")
4436+ self.mocker.throw(command_error)
4437+
4438+ # The failure should be properly logged
4439+ logging_mock = self.mocker.replace("logging.warning")
4440+ logging_mock("Couldn't determine which hash=>id database to use: %s" %
4441+ str(command_error))
4442+ self.mocker.result(None)
4443+
4444+ # Go!
4445+ self.mocker.replay()
4446+ result = self.reporter.fetch_hash_id_db()
4447+
4448+ return result
4449+
4450+ def test_fetch_hash_id_db_undetermined_arch(self):
4451+
4452+ # Fake uuid and codename
4453+ message_store = self.broker_service.message_store
4454+ message_store.set_server_uuid("uuid")
4455+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4456+ command_mock("lsb_release -cs")
4457+ self.mocker.result("codename")
4458+
4459+ # Undetermined arch
4460+ self.facade.set_arch(None)
4461+
4462+ # The failure should be properly logged
4463+ logging_mock = self.mocker.replace("logging.warning")
4464+ logging_mock("Couldn't determine which hash=>id database to use: "\
4465+ "unknown dpkg architecture")
4466+ self.mocker.result(None)
4467+
4468+ # Go!
4469+ self.mocker.replay()
4470+ result = self.reporter.fetch_hash_id_db()
4471+
4472+ return result
4473+
4474+ def test_fetch_hash_id_db_with_default_url(self):
4475+
4476+ # Let's say package_hash_id_url is not set but url is
4477+ self.config.data_path = self.makeDir()
4478+ self.config.package_hash_id_url = None
4479+ self.config.url = "http://fake.url/path/message-system/"
4480+ os.makedirs(os.path.join(self.config.data_path, "package", "hash-id"))
4481+ hash_id_db_filename = os.path.join(self.config.data_path, "package",
4482+ "hash-id", "uuid_codename_arch")
4483+
4484+ # Fake uuid, codename and arch
4485+ message_store = self.broker_service.message_store
4486+ message_store.set_server_uuid("uuid")
4487+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4488+ command_mock("lsb_release -cs")
4489+ self.mocker.result("codename")
4490+ self.facade.set_arch("arch")
4491+
4492+ # Check fetch_async is called with the default url
4493+ hash_id_db_url = "http://fake.url/path/hash-id-databases/" \
4494+ "uuid_codename_arch"
4495+ fetch_async_mock = self.mocker.replace("landscape.lib.fetch.fetch_async")
4496+ fetch_async_mock(hash_id_db_url)
4497+ fetch_async_result = Deferred()
4498+ fetch_async_result.callback("hash-ids")
4499+ self.mocker.result(fetch_async_result)
4500+
4501+ # Now go!
4502+ self.mocker.replay()
4503+ result = self.reporter.fetch_hash_id_db()
4504+
4505+ # Check the database
4506+ def callback(ignored):
4507+ self.assertTrue(os.path.exists(hash_id_db_filename))
4508+ self.assertEquals(open(hash_id_db_filename).read(), "hash-ids")
4509+ result.addCallback(callback)
4510+ return result
4511+
4512+ def test_fetch_hash_id_db_with_download_error(self):
4513+
4514+ # Assume package_hash_id_url is set
4515+ self.config.data_path = self.makeDir()
4516+ self.config.package_hash_id_url = "http://fake.url/path/"
4517+
4518+ # Fake uuid, codename and arch
4519+ message_store = self.broker_service.message_store
4520+ message_store.set_server_uuid("uuid")
4521+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4522+ command_mock("lsb_release -cs")
4523+ self.mocker.result("codename")
4524+ self.facade.set_arch("arch")
4525+
4526+ # Let's say fetch_async fails
4527+ hash_id_db_url = self.config.package_hash_id_url + "uuid_codename_arch"
4528+ fetch_async_mock = self.mocker.replace("landscape.lib.fetch.fetch_async")
4529+ fetch_async_mock(hash_id_db_url)
4530+ fetch_async_result = Deferred()
4531+ fetch_async_result.errback(FetchError("fetch error"))
4532+ self.mocker.result(fetch_async_result)
4533+
4534+ # The failure should be properly logged
4535+ logging_mock = self.mocker.replace("logging.warning")
4536+ logging_mock("Couldn't download hash=>id database: fetch error")
4537+ self.mocker.result(None)
4538+
4539+ # Now go!
4540+ self.mocker.replay()
4541+ result = self.reporter.fetch_hash_id_db()
4542+
4543+ # We shouldn't have any hash=>id database
4544+ def callback(ignored):
4545+ hash_id_db_filename = os.path.join(self.config.data_path, "package",
4546+ "hash-id", "uuid_codename_arch")
4547+ self.assertEquals(os.path.exists(hash_id_db_filename), False)
4548+ result.addCallback(callback)
4549+
4550+ return result
4551+
4552+ def test_fetch_hash_id_db_with_undetermined_url(self):
4553+
4554+ # We don't know where to fetch the hash=>id database from
4555+ self.config.url = None
4556+ self.config.package_hash_id_url = None
4557+
4558+ # Fake uuid, codename and arch
4559+ message_store = self.broker_service.message_store
4560+ message_store.set_server_uuid("uuid")
4561+ command_mock = self.mocker.replace("landscape.lib.command.run_command")
4562+ command_mock("lsb_release -cs")
4563+ self.mocker.result("codename")
4564+ self.facade.set_arch("arch")
4565+
4566+ # The failure should be properly logged
4567+ logging_mock = self.mocker.replace("logging.warning")
4568+ logging_mock("Can't determine the hash=>id database url")
4569+ self.mocker.result(None)
4570+
4571+ # Let's go
4572+ self.mocker.replay()
4573+
4574+ result = self.reporter.fetch_hash_id_db()
4575+
4576+ # We shouldn't have any hash=>id database
4577+ def callback(ignored):
4578+ hash_id_db_filename = os.path.join(self.config.data_path, "package",
4579+ "hash-id", "uuid_codename_arch")
4580+ self.assertEquals(os.path.exists(hash_id_db_filename), False)
4581+ result.addCallback(callback)
4582+
4583+ return result
4584+
4585+ def test_run_smart_update(self):
4586+ """
4587+ The L{PackageReporter.run_smart_update} method should run smart-update
4588+ with the proper arguments.
4589+ """
4590+ self.reporter.smart_update_filename = self.makeFile(
4591+ "#!/bin/sh\necho -n $@")
4592+ os.chmod(self.reporter.smart_update_filename, 0755)
4593+ logging_mock = self.mocker.replace("logging.debug")
4594+ logging_mock("'%s' exited with status 0 (out='--after %d', err=''" % (
4595+ self.reporter.smart_update_filename,
4596+ self.reporter.smart_update_interval))
4597+ self.mocker.replay()
4598+ deferred = Deferred()
4599+
4600+ def do_test():
4601+ def raiseme(x):
4602+ raise Exception
4603+ logging.warning = raiseme
4604+ result = self.reporter.run_smart_update()
4605+ def callback((out, err, code)):
4606+ interval = self.reporter.smart_update_interval
4607+ self.assertEquals(out, "--after %d" % interval)
4608+ self.assertEquals(err, "")
4609+ self.assertEquals(code, 0)
4610+ result.addCallback(callback)
4611+ result.chainDeferred(deferred)
4612+
4613+ reactor.callWhenRunning(do_test)
4614+ return deferred
4615+
4616+ def test_run_smart_update_warns_about_failures(self):
4617+ """
4618+ The L{PackageReporter.run_smart_update} method should log a warning
4619+ in case smart-update terminates with a non-zero exit code other than 1.
4620+ """
4621+ self.reporter.smart_update_filename = self.makeFile(
4622+ "#!/bin/sh\necho -n error >&2\necho -n output\nexit 2")
4623+ os.chmod(self.reporter.smart_update_filename, 0755)
4624+ logging_mock = self.mocker.replace("logging.warning")
4625+ logging_mock("'%s' exited with status 2"
4626+ " (error)" % self.reporter.smart_update_filename)
4627+ self.mocker.replay()
4628+ deferred = Deferred()
4629+
4630+ def do_test():
4631+ result = self.reporter.run_smart_update()
4632+ def callback((out, err, code)):
4633+ interval = self.reporter.smart_update_interval
4634+ self.assertEquals(out, "output")
4635+ self.assertEquals(err, "error")
4636+ self.assertEquals(code, 2)
4637+ result.addCallback(callback)
4638+ result.chainDeferred(deferred)
4639+
4640+ reactor.callWhenRunning(do_test)
4641+ return deferred
4642+
4643+ def test_run_smart_update_warns_exit_code_1_and_non_empty_stderr(self):
4644+ """
4645+ The L{PackageReporter.run_smart_update} method should log a warning
4646+        in case smart-update terminates with exit code 1 and a non-empty stderr.
4647+ """
4648+ self.reporter.smart_update_filename = self.makeFile(
4649+ "#!/bin/sh\necho -n \"error \" >&2\nexit 1")
4650+ os.chmod(self.reporter.smart_update_filename, 0755)
4651+ logging_mock = self.mocker.replace("logging.warning")
4652+ logging_mock("'%s' exited with status 1"
4653+ " (error )" % self.reporter.smart_update_filename)
4654+ self.mocker.replay()
4655+ deferred = Deferred()
4656+ def do_test():
4657+ result = self.reporter.run_smart_update()
4658+ def callback((out, err, code)):
4659+ interval = self.reporter.smart_update_interval
4660+ self.assertEquals(out, "")
4661+ self.assertEquals(err, "error ")
4662+ self.assertEquals(code, 1)
4663+ result.addCallback(callback)
4664+ result.chainDeferred(deferred)
4665+
4666+ reactor.callWhenRunning(do_test)
4667+ return deferred
4668+
4669+ def test_run_smart_update_ignores_exit_code_1_and_empty_output(self):
4670+ """
4671+ The L{PackageReporter.run_smart_update} method should not log anything
4672+ in case smart-update terminates with exit code 1 and output containing
4673+ only a newline character.
4674+ """
4675+ self.reporter.smart_update_filename = self.makeFile(
4676+ "#!/bin/sh\necho\nexit 1")
4677+ os.chmod(self.reporter.smart_update_filename, 0755)
4678+ deferred = Deferred()
4679+ def do_test():
4680+ def raiseme(x):
4681+ raise Exception
4682+ logging.warning = raiseme
4683+ result = self.reporter.run_smart_update()
4684+ def callback((out, err, code)):
4685+ interval = self.reporter.smart_update_interval
4686+ self.assertEquals(out, "\n")
4687+ self.assertEquals(err, "")
4688+ self.assertEquals(code, 1)
4689+ result.addCallback(callback)
4690+ result.chainDeferred(deferred)
4691+
4692+ reactor.callWhenRunning(do_test)
4693+ return deferred
4694+
4695 def test_remove_expired_hash_id_request(self):
4696 request = self.store.add_hash_id_request(["hash1"])
4697 request.message_id = 9999
4698@@ -569,19 +943,28 @@
4699
4700 self.mocker.order()
4701
4702- results = [Deferred() for i in range(4)]
4703+ results = [Deferred() for i in range(7)]
4704+
4705+ reporter_mock.run_smart_update()
4706+ self.mocker.result(results[0])
4707+
4708+ reporter_mock.fetch_hash_id_db()
4709+ self.mocker.result(results[1])
4710+
4711+ reporter_mock.use_hash_id_db()
4712+ self.mocker.result(results[2])
4713
4714 reporter_mock.handle_tasks()
4715- self.mocker.result(results[0])
4716+ self.mocker.result(results[3])
4717
4718 reporter_mock.remove_expired_hash_id_requests()
4719- self.mocker.result(results[1])
4720+ self.mocker.result(results[4])
4721
4722 reporter_mock.request_unknown_hashes()
4723- self.mocker.result(results[2])
4724+ self.mocker.result(results[5])
4725
4726 reporter_mock.detect_changes()
4727- self.mocker.result(results[3])
4728+ self.mocker.result(results[6])
4729
4730 self.mocker.replay()
4731
4732@@ -625,7 +1008,8 @@
4733 def test_resynchronize(self):
4734 """
4735 When a resynchronize task arrives, the reporter should clear
4736- out all the data in the package store, except the hash ids.
4737+ out all the data in the package store, except the hash ids and
4738+        the hash id requests.
4739 This is done in the reporter so that we know it happens when
4740 no other reporter is possibly running at the same time.
4741 """
4742@@ -636,6 +1020,11 @@
4743 request1 = self.store.add_hash_id_request(["hash3"])
4744 request2 = self.store.add_hash_id_request(["hash4"])
4745
4746+ # Set the message id to avoid the requests being deleted by the
4747+ # L{PackageReporter.remove_expired_hash_id_requests} method.
4748+ request1.message_id = 1
4749+ request2.message_id = 2
4750+
4751 # Let's make sure the data is there.
4752 self.assertEquals(self.store.get_available_upgrades(), [2])
4753 self.assertEquals(self.store.get_available(), [1])
4754@@ -643,7 +1032,7 @@
4755 self.assertEquals(self.store.get_hash_id_request(request1.id).id, request1.id)
4756
4757 self.store.add_task("reporter", {"type": "resynchronize"})
4758-
4759+
4760 deferred = self.reporter.run()
4761
4762 def check_result(result):
4763@@ -659,13 +1048,22 @@
4764 self.assertEquals(self.store.get_available(), [3, 4])
4765 self.assertEquals(self.store.get_installed(), [])
4766
4767- # A New hash id request should also be detected for HASH3,
4768- # but there should be no other hash id requests.
4769- request = self.store.get_hash_id_request(request1.id)
4770- self.assertEquals(request.id, request1.id)
4771- self.assertEquals(request.hashes, [HASH3])
4772- self.assertRaises(UnknownHashIDRequest,
4773- self.store.get_hash_id_request, request2.id)
4774+ # The two original hash id requests should be still there, and
4775+ # a new hash id request should also be detected for HASH3.
4776+ requests_count = 0
4777+ new_request_found = False
4778+ for request in self.store.iter_hash_id_requests():
4779+ requests_count += 1
4780+ if request.id == request1.id:
4781+ self.assertEquals(request.hashes, ["hash3"])
4782+ elif request.id == request2.id:
4783+ self.assertEquals(request.hashes, ["hash4"])
4784+ elif not new_request_found:
4785+ self.assertEquals(request.hashes, [HASH3])
4786+ else:
4787+ self.fail("Unexpected hash-id request!")
4788+ self.assertEquals(requests_count, 3)
4789+
4790 deferred.addCallback(check_result)
4791 return deferred
4792
4793
4794=== modified file 'landscape/package/tests/test_store.py'
4795--- landscape/package/tests/test_store.py 2009-03-18 20:42:05 +0000
4796+++ landscape/package/tests/test_store.py 2009-09-21 16:05:18 +0000
4797@@ -1,20 +1,35 @@
4798 import threading
4799 import time
4800-import os
4801+
4802+try:
4803+ import sqlite3
4804+except ImportError:
4805+ from pysqlite2 import dbapi2 as sqlite3
4806
4807 from landscape.tests.helpers import LandscapeTest
4808
4809-from landscape.package.store import PackageStore, UnknownHashIDRequest
4810-
4811-
4812-class PackageStoreTest(LandscapeTest):
4813+from landscape.package.store import (HashIdStore, PackageStore,
4814+ UnknownHashIDRequest, InvalidHashIdDb)
4815+
4816+
4817+class HashIdStoreTest(LandscapeTest):
4818
4819 def setUp(self):
4820- super(PackageStoreTest, self).setUp()
4821+ super(HashIdStoreTest, self).setUp()
4822
4823 self.filename = self.makeFile()
4824- self.store1 = PackageStore(self.filename)
4825- self.store2 = PackageStore(self.filename)
4826+ self.store1 = HashIdStore(self.filename)
4827+ self.store2 = HashIdStore(self.filename)
4828+
4829+ def test_set_and_get_hash_id(self):
4830+ self.store1.set_hash_ids({"ha\x00sh1": 123, "ha\x00sh2": 456})
4831+ self.assertEquals(self.store1.get_hash_id("ha\x00sh1"), 123)
4832+ self.assertEquals(self.store1.get_hash_id("ha\x00sh2"), 456)
4833+
4834+ def test_get_hash_ids(self):
4835+ hash_ids = {"hash1": 123, "hash2": 456}
4836+ self.store1.set_hash_ids(hash_ids)
4837+ self.assertEquals(self.store1.get_hash_ids(), hash_ids)
4838
4839 def test_wb_transactional_commits(self):
4840 mock_db = self.mocker.replace(self.store1._db)
4841@@ -28,11 +43,6 @@
4842 self.mocker.replay()
4843 self.assertRaises(Exception, self.store1.set_hash_ids, None)
4844
4845- def test_set_and_get_hash_id(self):
4846- self.store1.set_hash_ids({"ha\x00sh1": 123, "ha\x00sh2": 456})
4847- self.assertEquals(self.store2.get_hash_id("ha\x00sh1"), 123)
4848- self.assertEquals(self.store2.get_hash_id("ha\x00sh2"), 456)
4849-
4850 def test_get_id_hash(self):
4851 self.store1.set_hash_ids({"hash1": 123, "hash2": 456})
4852 self.assertEquals(self.store2.get_id_hash(123), "hash1")
4853@@ -70,6 +80,115 @@
4854 self.assertTrue(time.time()-started < 5,
4855 "Setting 20k hashes took more than 5 seconds.")
4856
4857+ def test_check_sanity(self):
4858+
4859+ store_filename = self.makeFile()
4860+ db = sqlite3.connect(store_filename)
4861+ cursor = db.cursor()
4862+ cursor.execute("CREATE TABLE hash"
4863+ " (junk INTEGER PRIMARY KEY, hash BLOB UNIQUE)")
4864+ cursor.close()
4865+ db.commit()
4866+
4867+ store = HashIdStore(store_filename)
4868+ self.assertRaises(InvalidHashIdDb, store.check_sanity)
4869+
4870+
4871+class PackageStoreTest(LandscapeTest):
4872+
4873+ def setUp(self):
4874+ super(PackageStoreTest, self).setUp()
4875+
4876+ self.filename = self.makeFile()
4877+ self.store1 = PackageStore(self.filename)
4878+ self.store2 = PackageStore(self.filename)
4879+
4880+ def test_has_hash_id_db(self):
4881+
4882+ self.assertFalse(self.store1.has_hash_id_db())
4883+
4884+ hash_id_db_filename = self.makeFile()
4885+ HashIdStore(hash_id_db_filename)
4886+ self.store1.add_hash_id_db(hash_id_db_filename)
4887+
4888+ self.assertTrue(self.store1.has_hash_id_db())
4889+
4890+ def test_add_hash_id_db_with_non_sqlite_file(self):
4891+
4892+ def junk_db_factory():
4893+ filename = self.makeFile()
4894+ open(filename, "w").write("junk")
4895+ return filename
4896+
4897+ def raiseme():
4898+ store_filename = junk_db_factory()
4899+ try:
4900+ self.store1.add_hash_id_db(store_filename)
4901+ except InvalidHashIdDb, e:
4902+ self.assertEqual(str(e), store_filename)
4903+ else:
4904+ self.fail()
4905+
4906+ raiseme()
4907+ self.assertFalse(self.store1.has_hash_id_db())
4908+
4909+ def test_add_hash_id_db_with_wrong_schema(self):
4910+
4911+ def non_compliant_db_factory():
4912+ filename = self.makeFile()
4913+ db = sqlite3.connect(filename)
4914+ cursor = db.cursor()
4915+ cursor.execute("CREATE TABLE hash"
4916+ " (junk INTEGER PRIMARY KEY, hash BLOB UNIQUE)")
4917+ cursor.close()
4918+ db.commit()
4919+ return filename
4920+
4921+ self.assertRaises(InvalidHashIdDb, self.store1.add_hash_id_db,
4922+ non_compliant_db_factory())
4923+ self.assertFalse(self.store1.has_hash_id_db())
4924+
4925+ def hash_id_db_factory(self, hash_ids):
4926+ filename = self.makeFile()
4927+ store = HashIdStore(filename)
4928+ store.set_hash_ids(hash_ids)
4929+ return filename
4930+
4931+ def test_get_hash_id_using_hash_id_dbs(self):
4932+
4933+
4934+ # Without hash=>id dbs
4935+ self.assertEquals(self.store1.get_hash_id("hash1"), None)
4936+ self.assertEquals(self.store1.get_hash_id("hash2"), None)
4937+
4938+        # This hash=>id will be overridden
4939+ self.store1.set_hash_ids({"hash1": 1})
4940+
4941+ # Add a couple of hash=>id dbs
4942+ self.store1.add_hash_id_db(self.hash_id_db_factory({"hash1": 2,
4943+ "hash2": 3}))
4944+ self.store1.add_hash_id_db(self.hash_id_db_factory({"hash2": 4,
4945+ "ha\x00sh1": 5}))
4946+
4947+ # Check look-up priorities and binary hashes
4948+ self.assertEquals(self.store1.get_hash_id("hash1"), 2)
4949+ self.assertEquals(self.store1.get_hash_id("hash2"), 3)
4950+ self.assertEquals(self.store1.get_hash_id("ha\x00sh1"), 5)
4951+
4952+ def test_get_id_hash_using_hash_id_db(self):
4953+ """
4954+ When lookaside hash->id dbs are used, L{get_id_hash} has
4955+ to query them first, falling back to the regular db in case
4956+ the desired mapping is not found.
4957+ """
4958+ self.store1.add_hash_id_db(self.hash_id_db_factory({"hash1": 123}))
4959+ self.store1.add_hash_id_db(self.hash_id_db_factory({"hash1": 999,
4960+ "hash2": 456}))
4961+ self.store1.set_hash_ids({"hash3": 789})
4962+ self.assertEquals(self.store1.get_id_hash(123), "hash1")
4963+ self.assertEquals(self.store1.get_id_hash(456), "hash2")
4964+ self.assertEquals(self.store1.get_id_hash(789), "hash3")
4965+
4966 def test_add_and_get_available_packages(self):
4967 self.store1.add_available([1, 2])
4968 self.assertEquals(self.store2.get_available(), [1, 2])
4969@@ -326,7 +445,7 @@
4970 self.store1.get_hash_id_request, request1.id)
4971 self.assertRaises(UnknownHashIDRequest,
4972 self.store1.get_hash_id_request, request2.id)
4973-
4974+
4975 def test_clear_tasks(self):
4976 data = {"answer": 42}
4977 task = self.store1.add_task("reporter", data)
4978@@ -388,4 +507,3 @@
4979 thread.join()
4980
4981 self.assertEquals(error, [])
4982-
4983
4984=== modified file 'landscape/package/tests/test_taskhandler.py'
4985--- landscape/package/tests/test_taskhandler.py 2008-09-08 16:35:57 +0000
4986+++ landscape/package/tests/test_taskhandler.py 2009-09-21 16:05:18 +0000
4987@@ -1,23 +1,21 @@
4988 import os
4989-import sys
4990-
4991-from cStringIO import StringIO
4992
4993 from twisted.internet import reactor
4994 from twisted.internet.defer import Deferred, fail
4995
4996 from landscape.lib.lock import lock_path
4997+from landscape.lib.command import CommandError
4998
4999 from landscape.deployment import Configuration
5000 from landscape.broker.remote import RemoteBroker
The diff has been truncated for viewing.
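The new lookaside tests (test_get_hash_id_using_hash_id_dbs and test_get_id_hash_using_hash_id_db) pin down a lookup order: hash=>id databases added via add_hash_id_db are consulted first, in the order they were added, and the store's own table is only a fallback. The sketch below reproduces just that ordering with minimal stand-in classes; it is not the real landscape.package.store implementation, only an illustration of the behaviour the tests assert:

```python
# Hypothetical minimal stand-ins, not the real landscape.package.store API.
class MiniHashIdStore(object):
    """Plain hash->id mapping (the real one is backed by sqlite)."""

    def __init__(self):
        self._map = {}

    def set_hash_ids(self, hash_ids):
        self._map.update(hash_ids)

    def get_hash_id(self, hash):
        return self._map.get(hash)


class MiniPackageStore(MiniHashIdStore):
    """Adds lookaside databases consulted before the store's own table."""

    def __init__(self):
        MiniHashIdStore.__init__(self)
        self._hash_id_stores = []

    def add_hash_id_db(self, store):
        self._hash_id_stores.append(store)

    def has_hash_id_db(self):
        return len(self._hash_id_stores) > 0

    def get_hash_id(self, hash):
        # Lookaside dbs win, in the order they were added.
        for store in self._hash_id_stores:
            id = store.get_hash_id(hash)
            if id is not None:
                return id
        # Fall back to the store's own table.
        return MiniHashIdStore.get_hash_id(self, hash)


# Mirrors test_get_hash_id_using_hash_id_dbs: "hash1" in the store's own
# table is shadowed by the first lookaside db.
store = MiniPackageStore()
store.set_hash_ids({"hash1": 1})
db1 = MiniHashIdStore()
db1.set_hash_ids({"hash1": 2, "hash2": 3})
db2 = MiniHashIdStore()
db2.set_hash_ids({"hash2": 4})
store.add_hash_id_db(db1)
store.add_hash_id_db(db2)
```

Under this ordering "hash1" resolves through db1 (2), not the store's own table (1), and "hash2" resolves through db1 (3), not db2 (4), matching the priorities the new tests check.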
