Merge lp:~jtv/launchpad/bug-527170 into lp:launchpad
- bug-527170
- Merge into devel
Status: Merged
Approved by: Jeroen T. Vermeulen
Approved revision: no longer in the source branch
Merged at revision: not available
Proposed branch: lp:~jtv/launchpad/bug-527170
Merge into: lp:launchpad
Diff against target: 413 lines (+228/-18), 4 files modified
  lib/canonical/testing/layers.py (+1/-1)
  lib/lp/soyuz/doc/sampledata-setup.txt (+6/-5)
  utilities/soyuz-sampledata-cleanup.py (+200/-12)
  utilities/start-dev-soyuz.sh (+21/-0)
To merge this branch: bzr merge lp:~jtv/launchpad/bug-527170
Related bugs:

Reviewer | Review Type | Status
---|---|---
Jeroen T. Vermeulen (community) | code | Approve
Michael Nelson (community) | code | Approve

Review via email:
This proposal supersedes a proposal from 2010-03-02.
Commit message
Automate much of manual Soyuz setup on dev systems.
Description of the change
Jeroen T. Vermeulen (jtv) wrote (posted in a previous version of this proposal):
There's still a problem with this that escaped my notice because zeca was retaining my GPG key from previous tests.
Jeroen T. Vermeulen (jtv) wrote:
Done. A feature I had previously neglected to document: the script now goes through the steps of adding and validating your real GPG key to ppa-user for you. Saves more hassle.
Michael Nelson (michael.nelson) wrote:
Thanks Jeroen!
So there were a few suggestions: (1) adding the amd64 arch by default, (2) creating the PPA for the person, and (3) adding the external dependencies. You've done (1) and were going to see how much effort is involved for (2) and (3).
12:35 < noodles775> jtv: what would be the problem with just adding amd64 as an arch always? the user still needs to target a specific series?
12:36 < jtv> noodles775: dunno frankly; I figured the safe thing to do was to stay faithful to what we already had.
12:36 < noodles775> jtv: also, do you think it should be possible to run this twice? (ie. the first time I didn't set my email, so I need to make schema before re-running right?)
12:36 < jtv> noodles775: right now you'll have to "make schema" between runs.
12:36 < jtv> wgrant: is there any conceivable reason not to add amd64 support when setting up the sampledata for soyuz?
12:37 < wgrant> jtv: No.
12:37 < jtv> noodles775: there's your answer. No reason. Fixing.
12:37 < noodles775> jtv: great.
12:39 < jtv> noodles775: testing the fix.
12:39 < noodles775> jtv: I wonder if in the future we should separate out utilities/
12:40 < noodles775> (ie. it could be run multiple times).
12:41 < jtv> noodles775: we have that option. But I think we'll get a lot of documentation benefit out of sticking with ppa-user for now.
12:41 < noodles775> Right.
12:42 < noodles775> Wow... automatic addition of my GPG key :)
12:42 < jtv> yup
12:44 < jtv> noodles775: It was hard, but I hope worth it. I just pushed the version that always includes amd64 support.
12:46 < noodles775> jtv: it seems to have added a third key, in addition to the two identified by my email?
12:46 < jtv> noodles775: the bogus one is for signing the Code of Conduct. Please don't make me rework that to use the proper key (and prompt you for a passphrase)
12:46 < noodles775> jtv: ah, no worries :)
12:47 < jtv> noodles775: but glad to hear that it picked up both keys for someone who had two. :-)
12:47 < noodles775> jtv: and none of them are found in zeca when clicking on the key id?
12:48 < noodles775> jtv: actually, if I modify the final url param to op=get it finds it.
12:48 < jtv> noodles775: yup
12:48 < jtv> that would've happened with a manual upload as well, app'ly
12:48 < noodles775> but not op=index.... ok, but it's there :)
12:51 < noodles775> jtv: Another thought, it'd only be an extra 1 line to create a default PPA for the user in create_ppa_user? (you could just use factory.
12:51 < jtv> noodles775: I thought that'd be a nice next feature, but I think you just saved me the research.
12:51 < noodles775> jtv: great!
12:53 < jtv> noodles775: should I add virtualized=False?
12:54 < noodles775> jtv: yes, definitely.
12:54 < jtv> noodles775: running tests...
12:55 * noodles775 tries uploading a package with the new setup.
Jeroen T. Vermeulen (jtv) wrote:
Okay, I've added creation of the PPA ("test-ppa") and addition of an external dependency on Lucid.
Jeroen T. Vermeulen (jtv) wrote:
Not sure how this got turned into a review request for myself, but I approve.
Preview Diff
1 | === modified file 'lib/canonical/testing/layers.py' |
2 | --- lib/canonical/testing/layers.py 2010-02-12 19:34:42 +0000 |
3 | +++ lib/canonical/testing/layers.py 2010-03-06 05:57:42 +0000 |
4 | @@ -97,7 +97,7 @@ |
5 | confirm_dbrevision, confirm_dbrevision_on_startup) |
6 | from canonical.database.sqlbase import cursor, ZopelessTransactionManager |
7 | from canonical.launchpad.interfaces import IMailBox, IOpenLaunchBag |
8 | -from canonical.launchpad.ftests import ANONYMOUS, login, logout, is_logged_in |
9 | +from lp.testing import ANONYMOUS, login, logout, is_logged_in |
10 | import lp.services.mail.stub |
11 | from lp.services.mail.mailbox import TestMailBox |
12 | from canonical.launchpad.scripts import execute_zcml_for_scripts |
13 | |
14 | === renamed file 'lib/lp/soyuz/doc/sampledata-cleanup.txt' => 'lib/lp/soyuz/doc/sampledata-setup.txt' |
15 | --- lib/lp/soyuz/doc/sampledata-cleanup.txt 2010-02-24 12:13:29 +0000 |
16 | +++ lib/lp/soyuz/doc/sampledata-setup.txt 2010-03-06 05:57:42 +0000 |
17 | @@ -1,18 +1,16 @@ |
18 | -= Sample data cleanup = |
19 | += Sample data setup = |
20 | |
21 | In order to run Soyuz locally on a development system, the sample data |
22 | must be cleaned up and customized a bit. This is done by a the script |
23 | -utilities/soyuz-sampledata-cleanup.py. |
24 | +utilities/soyuz-sampledata-setup.py. |
25 | |
26 | We only need this script for the playground sample data, so there's |
27 | little point in inspecting what it does to the test database in detail. |
28 | |
29 | >>> from canonical.launchpad.ftests.script import run_script |
30 | |
31 | -The --dry-run option makes the script roll back its changes. |
32 | - |
33 | >>> return_code, output, error = run_script( |
34 | - ... 'utilities/soyuz-sampledata-cleanup.py', args=['--dry-run']) |
35 | + ... 'utilities/soyuz-sampledata-setup.py') |
36 | |
37 | >>> print return_code |
38 | 0 |
39 | @@ -20,3 +18,6 @@ |
40 | >>> print error |
41 | INFO ... |
42 | INFO Done. |
43 | + |
44 | + >>> from canonical.launchpad.ftests.harness import LaunchpadTestSetup |
45 | + >>> LaunchpadTestSetup().force_dirty_database() |
46 | |
47 | === modified file 'utilities/soyuz-sampledata-cleanup.py' |
48 | --- utilities/soyuz-sampledata-cleanup.py 2010-02-24 12:13:29 +0000 |
49 | +++ utilities/soyuz-sampledata-cleanup.py 2010-03-06 05:57:42 +0000 |
50 | @@ -13,6 +13,9 @@ |
51 | |
52 | DO NOT RUN ON PRODUCTION SYSTEMS. This script deletes lots of |
53 | Ubuntu-related data. |
54 | + |
55 | +This script creates a user "ppa-user" (email ppa-user@example.com, |
56 | +password test) who is able to create PPAs. |
57 | """ |
58 | |
59 | __metaclass__ = type |
60 | @@ -20,9 +23,12 @@ |
61 | import _pythonpath |
62 | |
63 | from optparse import OptionParser |
64 | -from os import getenv |
65 | import re |
66 | +import os |
67 | +import subprocess |
68 | import sys |
69 | +from textwrap import dedent |
70 | +import transaction |
71 | |
72 | from zope.component import getUtility |
73 | from zope.event import notify |
74 | @@ -35,20 +41,32 @@ |
75 | |
76 | from canonical.lp import initZopeless |
77 | |
78 | +from canonical.launchpad.interfaces.account import AccountStatus |
79 | +from canonical.launchpad.interfaces.emailaddress import EmailAddressStatus |
80 | +from canonical.launchpad.interfaces.gpghandler import IGPGHandler |
81 | from canonical.launchpad.interfaces.launchpad import ( |
82 | ILaunchpadCelebrities) |
83 | from canonical.launchpad.scripts import execute_zcml_for_scripts |
84 | from canonical.launchpad.scripts.logger import logger, logger_options |
85 | from canonical.launchpad.webapp.interfaces import ( |
86 | - IStoreSelector, MAIN_STORE, SLAVE_FLAVOR) |
87 | + IStoreSelector, MAIN_STORE, MASTER_FLAVOR, SLAVE_FLAVOR) |
88 | |
89 | +from lp.registry.interfaces.codeofconduct import ISignedCodeOfConductSet |
90 | +from lp.registry.interfaces.gpg import GPGKeyAlgorithm, IGPGKeySet |
91 | from lp.registry.interfaces.series import SeriesStatus |
92 | +from lp.registry.model.codeofconduct import SignedCodeOfConduct |
93 | from lp.soyuz.interfaces.component import IComponentSet |
94 | +from lp.soyuz.interfaces.processor import IProcessorFamilySet |
95 | from lp.soyuz.interfaces.section import ISectionSet |
96 | from lp.soyuz.interfaces.sourcepackageformat import ( |
97 | ISourcePackageFormatSelectionSet, SourcePackageFormat) |
98 | from lp.soyuz.model.section import SectionSelection |
99 | from lp.soyuz.model.component import ComponentSelection |
100 | +from lp.testing.factory import LaunchpadObjectFactory |
101 | + |
102 | + |
103 | +user_name = 'ppa-user' |
104 | +default_email = '%s@example.com' % user_name |
105 | |
106 | |
107 | class DoNotRunOnProduction(Exception): |
108 | @@ -64,13 +82,18 @@ |
109 | return max_id[0] |
110 | |
111 | |
112 | +def get_store(flavor=MASTER_FLAVOR): |
113 | + """Obtain an ORM store.""" |
114 | + return getUtility(IStoreSelector).get(MAIN_STORE, flavor) |
115 | + |
116 | + |
117 | def check_preconditions(options): |
118 | """Try to ensure that it's safe to run. |
119 | |
120 | This script must not run on a production server, or anything |
121 | remotely like it. |
122 | """ |
123 | - store = getUtility(IStoreSelector).get(MAIN_STORE, SLAVE_FLAVOR) |
124 | + store = get_store(SLAVE_FLAVOR) |
125 | |
126 | # Just a guess, but dev systems aren't likely to have ids this high |
127 | # in this table. Production data does. |
128 | @@ -82,7 +105,7 @@ |
129 | # For some configs it's just absolutely clear this script shouldn't |
130 | # run. Don't even accept --force there. |
131 | forbidden_configs = re.compile('(edge|lpnet|production)') |
132 | - current_config = getenv('LPCONFIG', 'an unknown config') |
133 | + current_config = os.getenv('LPCONFIG', 'an unknown config') |
134 | if forbidden_configs.match(current_config): |
135 | raise DoNotRunOnProduction( |
136 | "I won't delete Ubuntu data on %s and you can't --force me." |
137 | @@ -95,22 +118,29 @@ |
138 | :return: (options, args, logger) |
139 | """ |
140 | parser = OptionParser( |
141 | - description="Delete existing Ubuntu releases and set up new ones.") |
142 | + description="Set up fresh Ubuntu series and %s identity." % user_name) |
143 | parser.add_option('-f', '--force', action='store_true', dest='force', |
144 | help="DANGEROUS: run even if the database looks production-like.") |
145 | parser.add_option('-n', '--dry-run', action='store_true', dest='dry_run', |
146 | help="Do not commit changes.") |
147 | + parser.add_option('-e', '--email', action='store', dest='email', |
148 | + default=default_email, |
149 | + help=( |
150 | + "Email address to use for %s. Should match your GPG key." |
151 | + % user_name)) |
152 | + |
153 | logger_options(parser) |
154 | |
155 | options, args = parser.parse_args(arguments) |
156 | + |
157 | return options, args, logger(options) |
158 | |
159 | |
160 | -def get_person(name): |
161 | +def get_person_set(): |
162 | """Return `IPersonSet` utility.""" |
163 | # Avoid circular import. |
164 | from lp.registry.interfaces.person import IPersonSet |
165 | - return getUtility(IPersonSet).getByName(name) |
166 | + return getUtility(IPersonSet) |
167 | |
168 | |
169 | def retire_series(distribution): |
170 | @@ -150,6 +180,20 @@ |
171 | components = main restricted universe multiverse''' |
172 | |
173 | |
174 | +def add_architecture(distroseries, architecture_name): |
175 | + """Add a DistroArchSeries for the given architecture to `distroseries`.""" |
176 | + # Avoid circular import. |
177 | + from lp.soyuz.model.distroarchseries import DistroArchSeries |
178 | + |
179 | + store = get_store(MASTER_FLAVOR) |
180 | + family = getUtility(IProcessorFamilySet).getByName(architecture_name) |
181 | + archseries = DistroArchSeries( |
182 | + distroseries=distroseries, processorfamily=family, |
183 | + owner=distroseries.owner, official=True, |
184 | + architecturetag=architecture_name) |
185 | + store.add(archseries) |
186 | + |
187 | + |
188 | def create_sections(distroseries): |
189 | """Set up some sections for `distroseries`.""" |
190 | section_names = ( |
191 | @@ -253,7 +297,7 @@ |
192 | # published binaries without corresponding sources. |
193 | |
194 | log.info("Deleting all items in official archives...") |
195 | - retire_distro_archives(distribution, get_person('name16')) |
196 | + retire_distro_archives(distribution, get_person_set().getByName('name16')) |
197 | |
198 | # Disable publishing of all PPAs, as they probably have broken |
199 | # publishings too. |
200 | @@ -271,7 +315,7 @@ |
201 | utility.add(distroseries, format) |
202 | |
203 | |
204 | -def populate(distribution, parent_series_name, uploader_name, log): |
205 | +def populate(distribution, parent_series_name, uploader_name, options, log): |
206 | """Set up sample data on `distribution`.""" |
207 | parent_series = distribution.getSeries(parent_series_name) |
208 | |
209 | @@ -281,15 +325,141 @@ |
210 | |
211 | log.info("Configuring sections...") |
212 | create_sections(parent_series) |
213 | + add_architecture(parent_series, 'amd64') |
214 | |
215 | log.info("Configuring components and permissions...") |
216 | - create_components(parent_series, get_person(uploader_name)) |
217 | + uploader = get_person_set().getByName(uploader_name) |
218 | + create_components(parent_series, uploader) |
219 | |
220 | set_source_package_format(parent_series) |
221 | |
222 | create_sample_series(parent_series, log) |
223 | |
224 | |
225 | +def sign_code_of_conduct(person, log): |
226 | + """Sign Ubuntu Code of Conduct for `person`, if necessary.""" |
227 | + if person.is_ubuntu_coc_signer: |
228 | + # Already signed. |
229 | + return |
230 | + |
231 | + log.info("Signing Ubuntu code of conduct.") |
232 | + signedcocset = getUtility(ISignedCodeOfConductSet) |
233 | + person_id = person.id |
234 | + if signedcocset.searchByUser(person_id).count() == 0: |
235 | + fake_gpg_key = LaunchpadObjectFactory().makeGPGKey(person) |
236 | + Store.of(person).add(SignedCodeOfConduct( |
237 | + owner=person, signingkey=fake_gpg_key, |
238 | + signedcode="Normally a signed CoC would go here.", active=True)) |
239 | + |
240 | + |
241 | +def create_ppa_user(username, options, approver, log): |
242 | + """Create new user, with password "test," and sign code of conduct.""" |
243 | + # Avoid circular import. |
244 | + from lp.registry.interfaces.person import PersonCreationRationale |
245 | + |
246 | + store = get_store(MASTER_FLAVOR) |
247 | + |
248 | + log.info("Creating %s with email address '%s'." % ( |
249 | + user_name, options.email)) |
250 | + |
251 | + person = get_person_set().getByName(username) |
252 | + if person is None: |
253 | + person, email = get_person_set().createPersonAndEmail( |
254 | + options.email, PersonCreationRationale.OWNER_CREATED_LAUNCHPAD, |
255 | + name=username, displayname=username, password='test') |
256 | + email.status = EmailAddressStatus.PREFERRED |
257 | + email.account.status = AccountStatus.ACTIVE |
258 | + |
259 | + if not options.dry_run: |
260 | + transaction.commit() |
261 | + |
262 | + sign_code_of_conduct(person, log) |
263 | + get_person_set().getByName('ubuntu-team').addMember(person, approver) |
264 | + |
265 | + return person |
266 | + |
267 | + |
268 | +def parse_fingerprints(gpg_output): |
269 | + """Find key fingerprints in "gpg --fingerprint <email>" output.""" |
270 | + line_prefix = re.compile('\s*Key fingerprint\s*=\s*') |
271 | + return [ |
272 | + ''.join(re.sub(line_prefix, '', line).split()) |
273 | + for line in gpg_output.splitlines() |
274 | + if line_prefix.match(line) |
275 | + ] |
276 | + |
277 | + |
278 | +def run_native_gpg(command_line, log): |
279 | + """Run GPG using the user's real keyring.""" |
280 | + # Need to override GNUPGHOME or we'll get a dummy GPG in a temp |
281 | + # directory, which won't find any keys. |
282 | + env = os.environ.copy() |
283 | + if 'GNUPGHOME' in env: |
284 | + del env['GNUPGHOME'] |
285 | + pipe = subprocess.Popen( |
286 | + command_line, env=env, stdout=subprocess.PIPE, stderr=subprocess.PIPE) |
287 | + stdout, stderr = pipe.communicate() |
288 | + if stderr != '': |
289 | + log.error(stderr) |
290 | + if pipe.returncode != 0: |
291 | + raise Exception('GPG error during "%s"' % ' '.join(command_line)) |
292 | + |
293 | + return stdout |
294 | + |
295 | + |
296 | +def add_gpg_key(person, fingerprint, log): |
297 | + """Add the GPG key with the given fingerprint to `person`.""" |
298 | + log.info("Adding GPG key %s" % fingerprint) |
299 | + |
300 | + command_line = [ |
301 | + 'gpg', |
302 | + '--keyserver', 'keyserver.launchpad.dev', |
303 | + '--send-key', fingerprint |
304 | + ] |
305 | + run_native_gpg(command_line, log) |
306 | + |
307 | + gpghandler = getUtility(IGPGHandler) |
308 | + key = gpghandler.retrieveKey(fingerprint) |
309 | + |
310 | + gpgkeyset = getUtility(IGPGKeySet) |
311 | + if gpgkeyset.getByFingerprint(fingerprint) is not None: |
312 | + # We already have this key. |
313 | + return |
314 | + |
315 | + algorithm = GPGKeyAlgorithm.items[key.algorithm] |
316 | + can_encrypt = True |
317 | + lpkey = gpgkeyset.new( |
318 | + person.id, key.keyid, fingerprint, key.keysize, algorithm, |
319 | + active=True, can_encrypt=can_encrypt) |
320 | + Store.of(person).add(lpkey) |
321 | + log.info("Created Launchpad key %s" % lpkey.displayname) |
322 | + |
323 | + |
324 | +def attach_gpg_keys(options, person, log): |
325 | + """Attach the selected GPG key to `person`.""" |
326 | + log.info("Looking for GPG keys.") |
327 | + |
328 | + output = run_native_gpg(['gpg', '--fingerprint', options.email], log) |
329 | + |
330 | + fingerprints = parse_fingerprints(output) |
331 | + if len(fingerprints) == 0: |
332 | + log.warn("No GPG key fingerprints found!") |
333 | + for fingerprint in fingerprints: |
334 | + add_gpg_key(person, fingerprint, log) |
335 | + |
336 | + |
337 | +def create_ppa(distribution, person, name): |
338 | + """Create a PPA for `person`.""" |
339 | + ppa = LaunchpadObjectFactory().makeArchive( |
340 | + distribution=distribution, owner=person, name=name, virtualized=False, |
341 | + description="Automatically created test PPA.") |
342 | + |
343 | + series_name = distribution.currentseries.name |
344 | + ppa.external_dependencies = ( |
345 | + "deb http://archive.ubuntu.com/ubuntu %s " |
346 | + "main restricted universe multiverse\n") % series_name |
347 | + |
348 | + |
349 | def main(argv): |
350 | options, args, log = parse_args(argv[1:]) |
351 | |
352 | @@ -302,8 +472,17 @@ |
353 | clean_up(ubuntu, log) |
354 | |
355 | # Use Hoary as the root, as Breezy and Grumpy are broken. |
356 | - populate(ubuntu, 'hoary', 'ubuntu-team', log) |
357 | - |
358 | + populate(ubuntu, 'hoary', 'ubuntu-team', options, log) |
359 | + |
360 | + admin = get_person_set().getByName('name16') |
361 | + person = create_ppa_user(user_name, options, admin, log) |
362 | + |
363 | + gave_email = (options.email != default_email) |
364 | + if gave_email: |
365 | + attach_gpg_keys(options, person, log) |
366 | + |
367 | + create_ppa(ubuntu, person, 'test-ppa') |
368 | + |
369 | if options.dry_run: |
370 | txn.abort() |
371 | else: |
372 | @@ -311,6 +490,15 @@ |
373 | |
374 | log.info("Done.") |
375 | |
376 | + print dedent(""" |
377 | + Now start your local Launchpad with "make run" and log into |
378 | + https://launchpad.dev/ as "%(email)s" with "test" as the password. |
379 | + Your user name will be %(user_name)s.""" |
380 | + % { |
381 | + 'email': options.email, |
382 | + 'user_name': user_name, |
383 | + }) |
384 | + |
385 | |
386 | if __name__ == "__main__": |
387 | main(sys.argv) |
388 | |
389 | === added file 'utilities/start-dev-soyuz.sh' |
390 | --- utilities/start-dev-soyuz.sh 1970-01-01 00:00:00 +0000 |
391 | +++ utilities/start-dev-soyuz.sh 2010-03-06 05:57:42 +0000 |
392 | @@ -0,0 +1,21 @@ |
393 | +#!/bin/sh -e |
394 | +# Start up Soyuz for local testing on a dev machine. |
395 | + |
396 | +start_twistd() { |
397 | + # Start twistd for service $1. |
398 | + mkdir -p "/var/tmp/$1" |
399 | + echo "Starting $1." |
400 | + bin/twistd \ |
401 | + --logfile "/var/tmp/development-$1.log" \ |
402 | + --pidfile "/var/tmp/development-$1.pid" \ |
403 | + -y "daemons/$1.tac" |
404 | +} |
405 | + |
406 | +start_twistd zeca |
407 | +start_twistd buildd-manager |
408 | + |
409 | +echo "Starting poppy." |
410 | +mkdir -p /var/tmp/poppy |
411 | +bin/py daemons/poppy-upload.py /var/tmp/poppy/incoming 2121 & |
412 | + |
413 | +echo "Done." |
= Bug 527170 =
This branch automates more of the manual work needed to get a working
local Soyuz setup. For the current process, see:
https://dev.launchpad.net/Soyuz/HowToUseSoyuzLocally
I had already implemented a script that does part of the work needed on
the dev sample data, based on wgrant's original; this branch adds some
features:
* Automates setup of amd64 support.
* Creates a user ppa-user, with a fixed name.
* Signs the Ubuntu Code of Conduct for ppa-user (with a fake key).
* Gives you a fixed URL for adding your own GPG key.
* --email option lets you specify your email address of choice.
* Adds ppa-user to ubuntu-team.
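To make the option handling concrete, here is a condensed, runnable sketch of the script's parse_args as it appears in the preview diff above (the logger plumbing is omitted, and the example address is made up):

```python
from optparse import OptionParser

user_name = 'ppa-user'
default_email = '%s@example.com' % user_name

parser = OptionParser(
    description="Set up fresh Ubuntu series and %s identity." % user_name)
parser.add_option('-f', '--force', action='store_true', dest='force',
                  help="DANGEROUS: run even if the database looks "
                       "production-like.")
parser.add_option('-n', '--dry-run', action='store_true', dest='dry_run',
                  help="Do not commit changes.")
parser.add_option('-e', '--email', action='store', dest='email',
                  default=default_email,
                  help="Email address to use for %s. Should match your "
                       "GPG key." % user_name)

# Without --email, the fixed ppa-user@example.com address is used.
options, args = parser.parse_args(['--email', 'you@example.com'])
print(options.email)
```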
The dev server won't actually be able to send email to your chosen
address (although it will try; see the wiki page), but picking a real
address makes it easier for you to attach your GPG key, which you need
when signing uploads.
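The key attachment works by scraping fingerprints out of `gpg --fingerprint <email>` output; the branch's parse_fingerprints (see the preview diff) boils down to this runnable sketch, fed here with made-up sample output:

```python
import re

def parse_fingerprints(gpg_output):
    """Find key fingerprints in "gpg --fingerprint <email>" output."""
    line_prefix = re.compile(r'\s*Key fingerprint\s*=\s*')
    return [
        # Strip the prefix, then squeeze out the grouping spaces.
        ''.join(line_prefix.sub('', line).split())
        for line in gpg_output.splitlines()
        if line_prefix.match(line)]

# Illustrative output only; the key id and uid are invented.
sample = (
    'pub   1024D/12345678 2010-01-01\n'
    '      Key fingerprint = ABCD 1234 EF56 7890 ABCD'
    '  1234 EF56 7890 ABCD 1234\n'
    'uid                  PPA User <ppa-user@example.com>\n')
print(parse_fingerprints(sample))
```

Every fingerprint found this way is then pushed to the dev keyserver and registered for ppa-user.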
I tried automating the registration of a GPG key, or the use of a
dedicated one, but no solution was really satisfactory. Working with
gpgme to fetch the real key based on your email address would have been,
except that our scripting environment forces gpg to live in a temporary
directory rather than in ~/.gnupg.
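The workaround the script settles on is to run the system gpg with GNUPGHOME stripped from the child environment, so it falls back to ~/.gnupg. Simplified from the branch's run_native_gpg (stderr logging dropped; any command serves for illustration):

```python
import os
import subprocess

def run_native_gpg(command_line):
    """Run a command against the user's real GPG keyring.

    The scripting environment overrides GNUPGHOME to point at a dummy
    temporary directory; dropping it lets gpg find the keys in ~/.gnupg.
    """
    env = os.environ.copy()
    env.pop('GNUPGHOME', None)
    pipe = subprocess.Popen(
        command_line, env=env,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = pipe.communicate()
    if pipe.returncode != 0:
        raise RuntimeError('GPG error during "%s"' % ' '.join(command_line))
    return stdout
```

The script invokes this with `['gpg', '--fingerprint', options.email]` to list your keys, and with `['gpg', '--keyserver', 'keyserver.launchpad.dev', '--send-key', fingerprint]` to push each one to the dev keyserver.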
This is a utilities script acting on the dev playground database, so
there's no real testing. But at least the script is exercised by the
test suite so we can detect obvious breakage:
{{{
./bin/test -vv -t sampledata-setup
}}}
No lint.
There's also a new script utilities/start-dev-soyuz.sh, also based on
wgrant's original. I regularized it a bit and made sure the necessary
directories in /var/tmp are created. I have no idea how to test this
second script without (1) upsetting the test environment, or (2) porting
it to Python and adding a lot of weight. For this script, used in
manual testing only and easy enough to edit, I don't think that's worth
the trouble.
Jeroen