Merge lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script into lp:launchpad/db-devel

Proposed by Edwin Grubbs
Status: Merged
Approved by: Aaron Bentley
Approved revision: no longer in the source branch.
Merged at revision: 9873
Proposed branch: lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script
Merge into: lp:launchpad/db-devel
Prerequisite: lp:~edwin-grubbs/launchpad/bug-615654-queue-addmember-emails
Diff against target: 854 lines (+512/-71)
17 files modified
cronscripts/merge-proposal-jobs.py (+1/-1)
cronscripts/process-job-source-groups.py (+114/-0)
cronscripts/process-job-source.py (+65/-0)
database/schema/security.cfg (+16/-0)
lib/canonical/config/schema-lazr.conf (+14/-0)
lib/canonical/launchpad/scripts/tests/__init__.py (+1/-1)
lib/lp/archiveuploader/tests/test_ppauploadprocessor.py (+2/-0)
lib/lp/code/interfaces/branchmergeproposal.py (+2/-1)
lib/lp/registry/interfaces/persontransferjob.py (+10/-0)
lib/lp/registry/model/persontransferjob.py (+0/-10)
lib/lp/registry/stories/person/xx-approve-members.txt (+3/-0)
lib/lp/registry/tests/test_membership_notification_job_creation.py (+65/-0)
lib/lp/registry/tests/test_process_job_sources_cronjob.py (+125/-0)
lib/lp/services/job/interfaces/job.py (+8/-0)
lib/lp/services/job/runner.py (+32/-16)
lib/lp/services/scripts/base.py (+44/-32)
lib/lp/testing/__init__.py (+10/-10)
To merge this branch: bzr merge lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script
Reviewer                 Review Type    Date Requested    Status
Aaron Bentley            (community)                      Approve
Gavin Panella            (community)                      Approve
Review via email: mp+36910@code.launchpad.net

Description of the change

Summary
-------

This branch adds a cron script to process IMembershipNotificationJobs.
The cron script has been abstracted so that new job sources can be
processed simply by adding configs to schema-lazr.conf, as long as each
new source uses a crontab_group that already has a crontab entry
configured in production.
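
For reference, this is the shape of the per-source config section the
branch adds to schema-lazr.conf (copied from the diff on this page):

```ini
[IMembershipNotificationJobSource]
# This section is used by cronscripts/process-job-source.py.
module: lp.registry.interfaces.persontransferjob
dbuser: person-transfer-job
crontab_group: MAIN
```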

Implementation details
----------------------

New db users and configs for cronjobs:
    database/schema/security.cfg
    lib/canonical/config/schema-lazr.conf

Cronscripts and tests.
    cronscripts/process-job-source-groups.py
    cronscripts/process-job-source.py
    lib/lp/registry/tests/test_membership_notification_job_creation.py
    lib/lp/registry/tests/test_process_job_sources_cronjob.py

Improved error message.
    lib/canonical/launchpad/webapp/errorlog.py

Added get() to IJobSource definition, since TwistedJobRunner needs it.
Moved the get() method into the class that actually provides IJobSource.
    lib/lp/services/job/interfaces/job.py
    lib/lp/registry/model/persontransferjob.py

Cruft from previous branch.
    lib/lp/registry/model/teammembership.py

Eliminated the need for TwistedJobRunner cron scripts to be run with
bin/py by importing _pythonpath in ampoule child processes like normal
cron scripts. Also eliminated the need to specify the error_dir,
oops_prefix, and copy_to_zlog when the defaults in [error_reports] are fine.
    lib/lp/services/job/runner.py

Tests
-----

./bin/test -vv -t 'test_membership_notification_job_creation|test_process_job_sources_cronjob'

Demo and Q/A
------------

* Add a member to a team.
  * Change the member status to ADMIN.
  * Run ./cronscripts/process-job-source-groups.py -v MAIN
  * Check email.

Revision history for this message
Gavin Panella (allenap) wrote :

Hi Edwin,

This branch looks good.

I know you're adding ProcessJobSourceGroups in here as well as the
membership stuff, but it still seems like a lot of work to do
something asynchronously. Moving something out of the app server takes
a lot of plumbing work, and it's not trivial plumbing work either.

I'd really like a way to create, in the app server, something like a
persistent closure that can be run later, there or somewhere
else. When run it would have the same user and database access as in
the app server thread that spawned it. Creating a new job now seems to
involve creating tables, database users, job sources, and so on, and
it's getting in the way of the actual work that needs doing.

I'm rambling on, and it's not a fault in this branch, but after doing
this recently, do you have any thoughts on that?

Gavin.

[1]

+        for job_source in selected_job_sources:
+            child = subprocess.Popen(child_args + [job_source])
+            children.append(child)
+        for child in children:
+            child.wait()

Is selected_job_sources ever likely to be large? If not now but it
could be one day, consider filing a bug to limit the number of
concurrent processes.

[2]

+        for child in children:
+            child.wait()

Is there any value in checking and doing something with the exit code
from the child?

[3]

+ packages=('_pythonpath', 'twisted', 'ampoule'), env=env)

I just looked again at _pythonpath.py and now I can't see.

review: Approve
Revision history for this message
Aaron Bentley (abentley) wrote :

Gavin, radical changes in the way we create and run tasks are the purview of the NewTaskSystem. Not having to create new database users is already listed in https://dev.launchpad.net/Foundations/NewTaskSystem/Requirements and you may wish to add your other wishes under "nice to have".

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

> Hi Edwin,
>
> This branch looks good.

Thanks for the review. Unfortunately, abentley had already started reviewing it, but it was all done on IRC up to this point. I should have just listed him as the requested reviewer. Sorry for the duplicate effort. I'll try to answer your questions below.

> I know you're adding ProcessJobSourceGroups in here as well as the
> membership stuff, but it still seems like a lot of work to do
> something asynchronously. Moving something out of the app server takes
> a lot of plumbing work, and it's not trivial plumbing work either.
>
> I'd really like a way to create, in the app server, something like a
> persistent closure that can be run later, there or somewhere
> else. When run it would have the same user and database access as in
> the app server thread that spawned it. Creating a new job now seems to
> involve creating tables, database users, job sources, and so on, and
> it's getting in the way of the actual work that needs doing.
>
> I'm rambling on, and it's not a fault in this branch, but after doing
> this recently, do you have any thoughts on that?
>
> Gavin.

It seems that the reason we don't support a closure method for processing the jobs is that there are a lot of security restrictions placed on us. For example, each job class is required to have its own dbuser, so that the LOSAs can figure out which job class is causing problems on the db. Of course, the job system seems much better suited to really slow processes, and it might be worthwhile to have a secondary solution that just moves the processing after the request response.

> [1]
>
> +        for job_source in selected_job_sources:
> +            child = subprocess.Popen(child_args + [job_source])
> +            children.append(child)
> +        for child in children:
> +            child.wait()
>
> Is selected_job_sources ever likely to be large? If not now, but could
> be one day, consider filing a bug to limit the number of concurrency
> processes.

Currently, each job source class is given its own process. If we run into a problem with too many concurrent jobs being processed, the job source classes can be reorganized into different crontab_groups, and then different crontab entries can process the jobs at different times. We are not really interested in a long-term solution for this, since the job system will most likely be moved to some kind of message queue, so the cron scripts for job processing will disappear.
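
Gavin's suggested cap on concurrency could be sketched as below. This is
not code from the branch: `run_in_batches` and its arguments are
hypothetical names, and the real script spawns one process-job-source.py
child per job source with no limit.

```python
import subprocess
import sys


def run_in_batches(commands, limit):
    """Run each command in a child process, at most `limit` at a time.

    Each batch is spawned in parallel and then waited on before the
    next batch starts; returns the children's exit codes in order.
    """
    results = []
    pending = list(commands)
    while pending:
        batch = [subprocess.Popen(args) for args in pending[:limit]]
        pending = pending[limit:]
        for child in batch:
            results.append(child.wait())
    return results


# Example: three trivial children, never more than two running at once.
codes = run_in_batches([[sys.executable, '-c', 'pass']] * 3, limit=2)
print(codes)  # [0, 0, 0]
```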

> [2]
>
> +        for child in children:
> +            child.wait()
>
> Is there any value in checking and doing something with the exit code
> from the child?

No, waiting for the children to finish is really only helpful for testing, so that all the output from the child processes can be gathered. The cron scripts log to files, and I am actually making the default behavior not to wait on the child processes; otherwise one long-running child process would prevent the other job sources from running the next time the crontab entry fires, because the parent process would still hold its lock file.
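
The fire-and-forget behavior Edwin describes looks like this in
miniature (illustrative only; the real parent spawns
process-job-source.py children, not a sleep):

```python
import subprocess
import sys
import time

# Spawn a child that will outlive this code path; the parent does not
# wait, mirroring the new default so the parent's lock file is released
# promptly instead of being held until the slowest child finishes.
start = time.time()
child = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(2)'])
elapsed = time.time() - start
still_running = child.poll() is None

print(elapsed < 1)      # True: Popen returns without waiting
print(still_running)    # True: the child is still doing its work
child.wait()  # only done here so the example cleans up after itself
```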

> [3]
>
> + packages=('_pythonpath', 'twisted', 'ampoule'), env=env)
>
> I just looked again at ...


Revision history for this message
Aaron Bentley (abentley) wrote :

Waiting for children to exit will mean that no new jobs can be processed until all the jobs complete. This may severely impair responsiveness. I suggest starting the subprocesses and then exiting.

The NoErrorOptionParser should not exist. Option handling should be taken care of in the ProcessJobSource class. If the layering is getting in our way, we should fix it, not work around it.

Unless you are specifically testing stringification, you should not stringify objects in tests, so that your tests are not fragile.

In test_setstatus_admin, what's the purpose of transaction.commit?

Since you do person_set.getByEmail('<email address hidden>'), you can use login_person or person_logged_in instead of login. Also, lifeless requested that you use the constant for <email address hidden> instead of spelling it out.

Please use lp.testing.run_script instead of rolling your own.

In test_processed, I don't think you really care what the database id of the job is, so use assertTextMatchesExpressionIgnoreWhitespace. (You'll need to move it to TestCaseWithFactory.)

get isn't required to run jobs under the non-twisted job runner, so it seems like an unnecessary requirement on IJobSource.

Perhaps ProcessJobSource should go in lp.services.job.runner ?

test_empty_queue looks fragile because it includes the directory of the lock file. Perhaps another case for assertTextMatchesExpressionIgnoreWhitespace?

Perhaps PYTHONPATH and LPCONFIG should be handled by common code, eg:
for name in ['PYTHONPATH', 'LPCONFIG']:
    if name in os.environ:
        env[name] = os.environ[name]

review: Needs Fixing
Revision history for this message
Gavin Panella (allenap) wrote :

> Thanks for the review. Unfortunately, abentley had already started reviewing
> it, but it was all done on IRC up to this point. I should have just listed him
> as the requested reviewer. Sorry for the duplicate effort. I'll try to answer
> your questions below.

No worries. It's good that Aaron also reviewed this. He spotted a lot
of stuff that I skimmed over; I didn't take as long on this review as
I ought to have done.

>
> > I know you're adding ProcessJobSourceGroups in here as well as the
> > membership stuff, but it still seems like a lot of work to do
> > something asynchronously. Moving something out of the app server takes
> > a lot of plumbing work, and it's not trivial plumbing work either.
> >
> > I'd really like a way to create, in the app server, something like a
> > persistent closure that can be run later, there or somewhere
> > else. When run it would have the same user and database access as in
> > the app server thread that spawned it. Creating a new job now seems to
> > involve creating tables, database users, job sources, and so on, and
> > it's getting in the way of the actual work that needs doing.
> >
> > I'm rambling on, and it's not a fault in this branch, but after doing
> > this recently, do you have any thoughts on that?
> >
> > Gavin.
>
>
> It seems that the reason we don't support a closure method for processing the
> jobs is that there are a lot of security restrictions placed on us. For
> example, each job class is required to have its own dbuser, so that the losas
> can figure out which job class is causing problems on the db. Of course, the
> job system seems much better suited for really slow processes, and it might be
> worthwhile to have a secondary solution that just moves the processing after
> the request response.

Yes, that sounds good. As suggested by Aaron, I'll add it to the
wishlist.

>
> > [1]
> >
> > +        for job_source in selected_job_sources:
> > +            child = subprocess.Popen(child_args + [job_source])
> > +            children.append(child)
> > +        for child in children:
> > +            child.wait()
> >
> > Is selected_job_sources ever likely to be large? If not now, but could
> > be one day, consider filing a bug to limit the number of concurrency
> > processes.
>
>
> Currently, each job source class is given its own process. If we run into a
> problem with too many concurrent jobs being processed, the job source classes
> can be reorganized into different crontab_groups, and then different crontab
> entries can process the jobs at different times. We are not really interested
> in a long term solution for this, since the job system will most likely be
> moved to some kind of message queue, so the cron scripts for job processing
> will disappear.

Okay, fair enough.

>
> > [2]
> >
> > +        for child in children:
> > +            child.wait()
> >
> > Is there any value in checking and doing something with the exit code
> > from the child?
>
>
> No, waiting for the children to finish is really only helpful for testing, so
> that all the output from the child processes can be gathered. The cron scripts
> log to files, and I am actually making the default behavi...


Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

On Thu, Sep 30, 2010 at 11:02 AM, Aaron Bentley <email address hidden> wrote:
> Review: Needs Fixing
> Waiting for children to exit will mean that no new jobs can be processed until all the jobs complete.  This may severely impair responsiveness.  I suggest starting the subprocesses and then exiting.

It no longer waits for child processes by default. I added an option
to wait, which is convenient when running it from the command line
when experimenting.

> The NoErrorOptionParser should not exist.  Option handling should be taken care of in the ProcessJobSource class.  If the layering is getting in our way, we should fix it, not work around it.

I moved most of the setup code to the run() methods in LaunchpadScript
and LaunchpadCronScript so __init__ does very little now besides
setting up the parser attribute.

> Unless you are specifically testing stringification, you should not stringify objects in tests, so that your tests are not fragile.

Fixed.

> In test_setstatus_admin, what's the purpose of transaction.commit?

I'm not sure why I was thinking I would need that. Removed.

> Since you do person_set.getByEmail('<email address hidden>'), you can use login_person or person_logged_in instead of login.  Also, lifeless requested that you use the constant for <email address hidden> instead of spelling it out.

Fixed.

> Please use lp.testing.run_script instead of rolling your own.

Fixed.

> In test_processed, I don't think you really care what the database id of the job is, so use  assertTextMatchesExpressionIgnoreWhitespace.  (You'll need to move it to TestCaseWithFactory.)

Fixed.

> get isn't required to run jobs under the non-twisted job runner, so it seems like an unnecessary requirement on IJobSource.

OK, I've moved get() to a new ITwistedJobSource interface.

> Perhaps ProcessJobSource should go in lp.services.job.runner ?

I'd rather wait until we have a reason for subclassing from it, which
I hope won't happen before we stop using cron for job processing.

> test_empty_queue looks fragile because it includes the directory of the lock file.  Perhaps another case for assertTextMatchesExpressionIgnoreWhitespace?

Fixed.

> Perhaps PYTHONPATH and LPCONFIG should be handled by common code, eg:
> for name in ['PYTHONPATH', 'LPCONFIG']:
>    if name in os.environ:
>        env[name] = os.environ[name]

Fixed.

Incremental diff:

=== modified file 'cronscripts/process-job-source-groups.py'
--- cronscripts/process-job-source-groups.py 2010-09-28 15:50:05 +0000
+++ cronscripts/process-job-source-groups.py 2010-09-30 20:33:49 +0000
@@ -47,6 +47,13 @@
             '-e', '--exclude', dest='excluded_job_sources',
             metavar="JOB_SOURCE", default=[], action='append',
             help="Exclude specific job sources.")
+        self.parser.add_option(
+            '--wait', dest='do_wait', default=False, action='store_true',
+            help="Wait for the child processes to finish. This is useful "
+                 "for testing, but it shouldn't be used in a cronjob, since "
+                 "it would prevent the cronjob from processing new jobs "
+                 "if just one of the child processes is still processing, "
...

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

I realized that I could make process-job-source.py simpler. Here is the diff.

=== modified file 'cronscripts/process-job-source.py'
--- cronscripts/process-job-source.py 2010-09-30 20:39:02 +0000
+++ cronscripts/process-job-source.py 2010-10-01 13:39:54 +0000
@@ -24,12 +24,8 @@
         "For more help, run:\n"
         " cronscripts/process-job-source-groups.py --help")

-    def configure(self):
-        """Override configs main() is called by run().
-
-        It is done here instead of __init__(), so that we get to use its
-        option parser.
-        """
+    def __init__(self):
+        super(ProcessJobSource, self).__init__()
         if len(self.args) != 1:
             self.parser.print_help()
             sys.exit(1)
@@ -54,5 +50,4 @@

 if __name__ == '__main__':
     script = ProcessJobSource()
-    script.configure()
     script.lock_and_run()

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

I ran into some problems merging in db-devel due to some changes in LaunchpadCronScript. I switched to using properties to make it easy for subclasses to override the values without interfering with __init__() logic.
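
The fromlist comment in the diff refers to standard Python __import__
behavior, which is easy to check directly:

```python
import os

# Without fromlist, __import__ on a dotted name returns the top-level
# package; with a non-empty fromlist it returns the bottom submodule,
# which is what lets the script getattr() the job source class off it.
top = __import__('os.path')
bottom = __import__('os.path', fromlist=['join'])

print(top is os)                # True: the top-level package
print(bottom is os.path)        # True: the submodule itself
print(hasattr(bottom, 'join'))  # True: its attributes are reachable
```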

Instead of an incremental diff, here is a full diff of the affected files.

=== added file 'cronscripts/process-job-source.py'
--- cronscripts/process-job-source.py 1970-01-01 00:00:00 +0000
+++ cronscripts/process-job-source.py 2010-10-04 04:01:36 +0000
@@ -0,0 +1,69 @@
+#!/usr/bin/python -S
+#
+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Handle jobs for a specified job source class."""
+
+__metaclass__ = type
+
+import sys
+
+import _pythonpath
+from twisted.python import log
+
+from canonical.config import config
+from lp.services.job import runner
+from lp.services.job.runner import JobCronScript
+
+
+class ProcessJobSource(JobCronScript):
+    """Run jobs for a specified job source class."""
+    usage = (
+        "Usage: %prog [options] JOB_SOURCE\n\n"
+        "For more help, run:\n"
+        " cronscripts/process-job-source-groups.py --help")
+
+    def __init__(self):
+        super(ProcessJobSource, self).__init__()
+        # The fromlist argument is necessary so that __import__()
+        # returns the bottom submodule instead of the top one.
+        module = __import__(self.config_section.module,
+                            fromlist=[self.job_source_name])
+        self.source_interface = getattr(module, self.job_source_name)
+
+    @property
+    def config_name(self):
+        return self.job_source_name
+
+    @property
+    def name(self):
+        return 'process-job-source-%s' % self.job_source_name
+
+    @property
+    def dbuser(self):
+        return self.config_section.dbuser
+
+    @property
+    def runner_class(self):
+        runner_class_name = getattr(
+            self.config_section, 'runner_class', 'JobRunner')
+        # Override attributes that are normally set in __init__().
+        return getattr(runner, runner_class_name)
+
+    def handle_options(self):
+        if len(self.args) != 1:
+            self.parser.print_help()
+            sys.exit(1)
+        self.job_source_name = self.args[0]
+        super(ProcessJobSource, self).handle_options()
+
+    def main(self):
+        if self.options.verbose:
+            log.startLogging(sys.stdout)
+        super(ProcessJobSource, self).main()
+
+
+if __name__ == '__main__':
+    script = ProcessJobSource()
+    script.lock_and_run()

=== modified file 'lib/lp/services/job/runner.py'
--- lib/lp/services/job/runner.py 2010-09-20 09:48:58 +0000
+++ lib/lp/services/job/runner.py 2010-10-04 04:01:36 +0000
@@ -77,7 +77,6 @@

     # We redefine __eq__ and __ne__ here to prevent the security proxy
     # from mucking up our comparisons in tests and elsewhere.
-
     def __eq__(self, job):
         return (
             self.__class__ is removeSecurityProxy(job.__class__)
@@ -337,13 +336,12 @@
     """Run Jobs via twisted."""

     def __init__(self, job_source, dbuser, logger=None, error_utility=None):
-        env = {'PYTHONPATH': os.e...

Revision history for this message
Aaron Bentley (abentley) wrote :

This looks good, except that there are now two definitions of assertTextMatchesExpressionIgnoreWhitespace: one in TestCaseWithFactory and one in BrowserTestCase. Actually, it's not factory-specific, so it should probably go in TestCase.

review: Approve

Preview Diff

1=== modified file 'cronscripts/merge-proposal-jobs.py'
2--- cronscripts/merge-proposal-jobs.py 2010-05-19 18:07:56 +0000
3+++ cronscripts/merge-proposal-jobs.py 2010-10-07 14:14:12 +0000
4@@ -32,7 +32,7 @@
5 def __init__(self):
6 super(RunMergeProposalJobs, self).__init__(
7 runner_class=TwistedJobRunner,
8- script_name='merge-proposal-jobs')
9+ name='merge-proposal-jobs')
10
11
12 if __name__ == '__main__':
13
14=== added file 'cronscripts/process-job-source-groups.py'
15--- cronscripts/process-job-source-groups.py 1970-01-01 00:00:00 +0000
16+++ cronscripts/process-job-source-groups.py 2010-10-07 14:14:12 +0000
17@@ -0,0 +1,114 @@
18+#!/usr/bin/python -S
19+#
20+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
21+# GNU Affero General Public License version 3 (see the file LICENSE).
22+
23+"""Handle jobs for multiple job source classes."""
24+
25+__metaclass__ = type
26+
27+from optparse import IndentedHelpFormatter
28+import os
29+import subprocess
30+import sys
31+import textwrap
32+
33+import _pythonpath
34+
35+from canonical.config import config
36+from lp.services.propertycache import cachedproperty
37+from lp.services.scripts.base import LaunchpadCronScript
38+
39+
40+class LongEpilogHelpFormatter(IndentedHelpFormatter):
41+ """Preserve newlines in epilog."""
42+
43+ def format_epilog(self, epilog):
44+ if epilog:
45+ return '\n%s\n' % epilog
46+ else:
47+ return ""
48+
49+
50+class ProcessJobSourceGroups(LaunchpadCronScript):
51+ """Handle each job source in a separate process with ProcessJobSource."""
52+
53+ def add_my_options(self):
54+ self.parser.usage = "%prog [ -e JOB_SOURCE ] GROUP [GROUP]..."
55+ self.parser.epilog = (
56+ textwrap.fill(
57+ "At least one group must be specified. Excluding job sources "
58+ "is useful when you want to run all the other job sources in "
59+ "a group.")
60+ + "\n\n" + self.group_help)
61+
62+ self.parser.formatter = LongEpilogHelpFormatter()
63+ self.parser.add_option(
64+ '-e', '--exclude', dest='excluded_job_sources',
65+ metavar="JOB_SOURCE", default=[], action='append',
66+ help="Exclude specific job sources.")
67+ self.parser.add_option(
68+ '--wait', dest='do_wait', default=False, action='store_true',
69+ help="Wait for the child processes to finish. This is useful "
70+ "for testing, but it shouldn't be used in a cronjob, since "
71+ "it would prevent the cronjob from processing new jobs "
72+ "if just one of the child processes is still processing, "
73+ "and each process only handles a single job source class.")
74+
75+ def main(self):
76+ selected_groups = self.args
77+ if len(selected_groups) == 0:
78+ self.parser.print_help()
79+ sys.exit(1)
80+
81+ selected_job_sources = set()
82+ # Include job sources from selected groups.
83+ for group in selected_groups:
84+ selected_job_sources.update(self.grouped_sources[group])
85+ # Then, exclude job sources.
86+ for source in self.options.excluded_job_sources:
87+ if source not in selected_job_sources:
88+ self.logger.info('%r is not in job source groups %s'
89+ % (source, self.options.groups))
90+ else:
91+ selected_job_sources.remove(source)
92+ # Process job sources.
93+ command = os.path.join(
94+ os.path.dirname(sys.argv[0]), 'process-job-source.py')
95+ child_args = [command]
96+ if self.options.verbose:
97+ child_args.append('-v')
98+ children = []
99+ for job_source in selected_job_sources:
100+ child = subprocess.Popen(child_args + [job_source])
101+ children.append(child)
102+ if self.options.do_wait:
103+ for child in children:
104+ child.wait()
105+
106+ @cachedproperty
107+ def all_job_sources(self):
108+ job_sources = config['process-job-source-groups'].job_sources
109+ return [job_source.strip() for job_source in job_sources.split(',')]
110+
111+ @cachedproperty
112+ def grouped_sources(self):
113+ groups = {}
114+ for source in self.all_job_sources:
115+ if source not in config:
116+ continue
117+ section = config[source]
118+ group = groups.setdefault(section.crontab_group, [])
119+ group.append(source)
120+ return groups
121+
122+ @cachedproperty
123+ def group_help(self):
124+ return '\n\n'.join(
125+ 'Group: %s\n %s' % (group, '\n '.join(sources))
126+ for group, sources in sorted(self.grouped_sources.items()))
127+
128+
129+if __name__ == '__main__':
130+ script = ProcessJobSourceGroups()
131+ script.lock_and_run()
132
133=== added file 'cronscripts/process-job-source.py'
134--- cronscripts/process-job-source.py 1970-01-01 00:00:00 +0000
135+++ cronscripts/process-job-source.py 2010-10-07 14:14:12 +0000
136@@ -0,0 +1,65 @@
137+#!/usr/bin/python -S
138+#
139+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
140+# GNU Affero General Public License version 3 (see the file LICENSE).
141+
142+"""Handle jobs for a specified job source class."""
143+
144+__metaclass__ = type
145+
146+import sys
147+
148+import _pythonpath
149+from twisted.python import log
150+
151+from canonical.config import config
152+from lp.services.job import runner
153+from lp.services.job.runner import JobCronScript
154+
155+
156+class ProcessJobSource(JobCronScript):
157+ """Run jobs for a specified job source class."""
158+ usage = (
159+ "Usage: %prog [options] JOB_SOURCE\n\n"
160+ "For more help, run:\n"
161+ " cronscripts/process-job-source-groups.py --help")
162+
163+ def __init__(self):
164+ super(ProcessJobSource, self).__init__()
165+ # The fromlist argument is necessary so that __import__()
166+ # returns the bottom submodule instead of the top one.
167+ module = __import__(self.config_section.module,
168+ fromlist=[self.job_source_name])
169+ self.source_interface = getattr(module, self.job_source_name)
170+
171+ @property
172+ def config_name(self):
173+ return self.job_source_name
174+
175+ @property
176+ def name(self):
177+ return 'process-job-source-%s' % self.job_source_name
178+
179+ @property
180+ def runner_class(self):
181+ runner_class_name = getattr(
182+ self.config_section, 'runner_class', 'JobRunner')
183+ # Override attributes that are normally set in __init__().
184+ return getattr(runner, runner_class_name)
185+
186+ def handle_options(self):
187+ if len(self.args) != 1:
188+ self.parser.print_help()
189+ sys.exit(1)
190+ self.job_source_name = self.args[0]
191+ super(ProcessJobSource, self).handle_options()
192+
193+ def main(self):
194+ if self.options.verbose:
195+ log.startLogging(sys.stdout)
196+ super(ProcessJobSource, self).main()
197+
198+
199+if __name__ == '__main__':
200+ script = ProcessJobSource()
201+ script.lock_and_run()
202
203=== modified file 'database/schema/security.cfg'
204--- database/schema/security.cfg 2010-10-07 14:14:09 +0000
205+++ database/schema/security.cfg 2010-10-07 14:14:12 +0000
206@@ -715,6 +715,8 @@
207 public.teamparticipation = SELECT, DELETE
208 public.person = SELECT
209 public.emailaddress = SELECT
210+public.job = SELECT, INSERT
211+public.persontransferjob = SELECT, INSERT
212
213 [karma]
214 # Update the KarmaCache table
215@@ -1877,6 +1879,20 @@
216 public.product = SELECT, UPDATE
217 public.bugtracker = SELECT
218
219+[process-job-source-groups]
220+# Does not need access to tables.
221+type=user
222+groups=script
223+
224+[person-transfer-job]
225+type=user
226+groups=script
227+public.emailaddress = SELECT
228+public.job = SELECT, UPDATE
229+public.person = SELECT
230+public.persontransferjob = SELECT
231+public.teammembership = SELECT
232+
233 [weblogstats]
234 # For the script that parses our Apache/Squid logfiles and updates statistics
235 type=user
236
237=== modified file 'lib/canonical/config/schema-lazr.conf'
238--- lib/canonical/config/schema-lazr.conf 2010-09-24 22:30:48 +0000
239+++ lib/canonical/config/schema-lazr.conf 2010-10-07 14:14:12 +0000
240@@ -2046,3 +2046,17 @@
241
242 # datatype: boolean
243 send_email: true
244+
245+[process-job-source-groups]
246+# This section is used by cronscripts/process-job-source-groups.py.
247+dbuser: process-job-source-groups
248+# Each job source class also needs its own config section to specify the
249+# dbuser, the crontab_group, and the module that the job source class
250+# can be loaded from.
251+job_sources: IMembershipNotificationJobSource
252+
253+[IMembershipNotificationJobSource]
254+# This section is used by cronscripts/process-job-source.py.
255+module: lp.registry.interfaces.persontransferjob
256+dbuser: person-transfer-job
257+crontab_group: MAIN
258
259=== modified file 'lib/canonical/launchpad/scripts/tests/__init__.py'
260--- lib/canonical/launchpad/scripts/tests/__init__.py 2009-06-25 05:39:50 +0000
261+++ lib/canonical/launchpad/scripts/tests/__init__.py 2010-10-07 14:14:12 +0000
262@@ -31,5 +31,5 @@
263 args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
264 stdout, stderr = process.communicate()
265 if process.returncode != expect_returncode:
266- raise AssertionError('Failed:\n%s' % stderr)
267+ raise AssertionError('Failed:\n%s\n%s' % (stdout, stderr))
268 return (process.returncode, stdout, stderr)
269
270=== modified file 'lib/lp/archiveuploader/tests/test_ppauploadprocessor.py'
271--- lib/lp/archiveuploader/tests/test_ppauploadprocessor.py 2010-10-03 15:30:06 +0000
272+++ lib/lp/archiveuploader/tests/test_ppauploadprocessor.py 2010-10-07 14:14:12 +0000
273@@ -49,6 +49,7 @@
274 )
275 from lp.soyuz.model.publishing import BinaryPackagePublishingHistory
276 from lp.soyuz.tests.test_publishing import SoyuzTestPublisher
277+from lp.testing.mail_helpers import run_mail_jobs
278
279
280 class TestPPAUploadProcessorBase(TestUploadProcessorBase):
281@@ -668,6 +669,7 @@
282 ILaunchpadCelebrities).launchpad_beta_testers
283 self.name16.leave(beta_testers)
284 # Pop the message notifying the membership modification.
285+ run_mail_jobs()
286 unused = stub.test_emails.pop()
287
288 upload_dir = self.queueUpload("bar_1.0-1", "~name16/ubuntu")
289
290=== modified file 'lib/lp/code/interfaces/branchmergeproposal.py'
291--- lib/lp/code/interfaces/branchmergeproposal.py 2010-10-03 15:30:06 +0000
292+++ lib/lp/code/interfaces/branchmergeproposal.py 2010-10-07 14:14:12 +0000
293@@ -85,6 +85,7 @@
294 IJob,
295 IJobSource,
296 IRunnableJob,
297+ ITwistedJobSource,
298 )
299
300
301@@ -551,7 +552,7 @@
302 """Destroy this object."""
303
304
305-class IBranchMergeProposalJobSource(IJobSource):
306+class IBranchMergeProposalJobSource(ITwistedJobSource):
307 """A job source that will get all supported merge proposal jobs."""
308
309
310
311=== modified file 'lib/lp/registry/interfaces/persontransferjob.py'
312--- lib/lp/registry/interfaces/persontransferjob.py 2010-10-07 14:14:09 +0000
313+++ lib/lp/registry/interfaces/persontransferjob.py 2010-10-07 14:14:12 +0000
314@@ -61,6 +61,16 @@
315 class IMembershipNotificationJob(IPersonTransferJob):
316 """A Job to notify new members of a team of that change."""
317
318+ member = PublicPersonChoice(
319+ title=_('Alias for minor_person attribute'),
320+ vocabulary='ValidPersonOrTeam',
321+ required=True)
322+
323+ team = PublicPersonChoice(
324+ title=_('Alias for major_person attribute'),
325+ vocabulary='ValidPersonOrTeam',
326+ required=True)
327+
328
329 class IMembershipNotificationJobSource(IJobSource):
330 """An interface for acquiring IMembershipNotificationJobs."""
331
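The `member` and `team` fields added to the interface here can be satisfied on the model side by simple delegating properties over the generic `minor_person`/`major_person` columns that `PersonTransferJob` stores. A hedged sketch of that delegation, with plain strings standing in for person objects:

```python
class MembershipNotificationJob:
    """Sketch: member/team as read-only aliases over the generic
    minor_person/major_person attributes of PersonTransferJob."""

    def __init__(self, minor_person, major_person):
        self.minor_person = minor_person
        self.major_person = major_person

    @property
    def member(self):
        # Alias for the minor_person attribute, as the interface says.
        return self.minor_person

    @property
    def team(self):
        # Alias for the major_person attribute, as the interface says.
        return self.major_person


job = MembershipNotificationJob('murdock', 'a-team')
```

The aliases let notification code read `job.member` and `job.team` without knowing about the underlying storage columns.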
332=== modified file 'lib/lp/registry/model/persontransferjob.py'
333--- lib/lp/registry/model/persontransferjob.py 2010-10-07 14:14:09 +0000
334+++ lib/lp/registry/model/persontransferjob.py 2010-10-07 14:14:12 +0000
335@@ -104,16 +104,6 @@
336 # but the DB representation is unicode.
337 self._json_data = json_data.decode('utf-8')
338
339- @classmethod
340- def get(cls, key):
341- """Return the instance of this class whose key is supplied."""
342- store = IMasterStore(PersonTransferJob)
343- instance = store.get(cls, key)
344- if instance is None:
345- raise SQLObjectNotFound(
346- 'No occurrence of %s has key %s' % (cls.__name__, key))
347- return instance
348-
349
350 class PersonTransferJobDerived(BaseRunnableJob):
351 """Intermediate class for deriving from PersonTransferJob.
352
353=== modified file 'lib/lp/registry/stories/person/xx-approve-members.txt'
354--- lib/lp/registry/stories/person/xx-approve-members.txt 2009-10-26 14:37:31 +0000
355+++ lib/lp/registry/stories/person/xx-approve-members.txt 2010-10-07 14:14:12 +0000
356@@ -23,6 +23,9 @@
357 >>> browser.getControl(name='action_7').value = ['approve']
358 >>> browser.getControl(name='comment').value = 'Thanks for your interest'
359 >>> browser.getControl('Save changes').click()
360+ >>> from lp.testing.mail_helpers import run_mail_jobs
361+ >>> login(ANONYMOUS)
362+ >>> run_mail_jobs()
363 >>> len(stub.test_emails)
364 12
365 >>> for from_addr, to_addrs, raw_msg in sorted(stub.test_emails):
366
367=== added file 'lib/lp/registry/tests/test_membership_notification_job_creation.py'
368--- lib/lp/registry/tests/test_membership_notification_job_creation.py 1970-01-01 00:00:00 +0000
369+++ lib/lp/registry/tests/test_membership_notification_job_creation.py 2010-10-07 14:14:12 +0000
370@@ -0,0 +1,65 @@
371+# Copyright 2010 Canonical Ltd. This software is licensed under the
372+# GNU Affero General Public License version 3 (see the file LICENSE).
373+
374+"""Test that MembershipNotificationJobs are created appropriately."""
375+
376+__metaclass__ = type
377+
378+from zope.component import getUtility
379+
380+from canonical.testing import LaunchpadFunctionalLayer
381+from lp.registry.interfaces.person import IPersonSet
382+from lp.registry.interfaces.persontransferjob import (
383+ IMembershipNotificationJobSource,
384+ )
385+from lp.registry.interfaces.teammembership import (
386+ ITeamMembershipSet,
387+ TeamMembershipStatus,
388+ )
389+from lp.registry.model.persontransferjob import MembershipNotificationJob
390+from lp.services.job.interfaces.job import JobStatus
391+from lp.testing import (
392+ login_person,
393+ TestCaseWithFactory,
394+ )
395+from lp.testing.sampledata import ADMIN_EMAIL
396+
397+
398+class CreateMembershipNotificationJobTest(TestCaseWithFactory):
399+ """Test that MembershipNotificationJobs are created appropriately."""
400+ layer = LaunchpadFunctionalLayer
401+
402+ def setUp(self):
403+ super(CreateMembershipNotificationJobTest, self).setUp()
404+ self.person = self.factory.makePerson(name='murdock')
405+ self.team = self.factory.makeTeam(name='a-team')
406+ self.job_source = getUtility(IMembershipNotificationJobSource)
407+
408+ def test_setstatus_admin(self):
409+ login_person(self.team.teamowner)
410+ self.team.addMember(self.person, self.team.teamowner)
411+ membership_set = getUtility(ITeamMembershipSet)
412+ tm = membership_set.getByPersonAndTeam(self.person, self.team)
413+ tm.setStatus(TeamMembershipStatus.ADMIN, self.team.teamowner)
414+ jobs = list(self.job_source.iterReady())
415+ job_info = [
416+ (job.__class__, job.member, job.team, job.status)
417+ for job in jobs]
418+ self.assertEqual(
419+ [(MembershipNotificationJob,
420+ self.person,
421+ self.team,
422+ JobStatus.WAITING),
423+ ],
424+ job_info)
425+
426+ def test_setstatus_silent(self):
427+ person_set = getUtility(IPersonSet)
428+ admin = person_set.getByEmail(ADMIN_EMAIL)
429+ login_person(admin)
430+ self.team.addMember(self.person, self.team.teamowner)
431+ membership_set = getUtility(ITeamMembershipSet)
432+ tm = membership_set.getByPersonAndTeam(self.person, self.team)
433+ tm.setStatus(
434+ TeamMembershipStatus.ADMIN, admin, silent=True)
435+ self.assertEqual([], list(self.job_source.iterReady()))
436
437=== added file 'lib/lp/registry/tests/test_process_job_sources_cronjob.py'
438--- lib/lp/registry/tests/test_process_job_sources_cronjob.py 1970-01-01 00:00:00 +0000
439+++ lib/lp/registry/tests/test_process_job_sources_cronjob.py 2010-10-07 14:14:12 +0000
440@@ -0,0 +1,125 @@
441+# Copyright 2010 Canonical Ltd. This software is licensed under the
442+# GNU Affero General Public License version 3 (see the file LICENSE).
443+
444+"""Test cron script for processing jobs from any job source class."""
445+
446+__metaclass__ = type
447+
448+import os
449+import subprocess
450+
451+import transaction
452+from zope.component import getUtility
453+
454+from canonical.config import config
455+from canonical.launchpad.scripts.tests import run_script
456+from canonical.testing import LaunchpadScriptLayer
457+from lp.registry.interfaces.teammembership import (
458+ ITeamMembershipSet,
459+ TeamMembershipStatus,
460+ )
461+from lp.testing import (
462+ login_person,
463+ TestCaseWithFactory,
464+ )
465+
466+
467+class ProcessJobSourceTest(TestCaseWithFactory):
468+ """Test the process-job-source.py script."""
469+ layer = LaunchpadScriptLayer
470+ script = 'cronscripts/process-job-source.py'
471+
472+ def tearDown(self):
473+ super(ProcessJobSourceTest, self).tearDown()
474+ self.layer.force_dirty_database()
475+
476+ def test_missing_argument(self):
477+ # The script should display usage info when called without any
478+ # arguments.
479+ returncode, output, error = run_script(
480+ self.script, [], expect_returncode=1)
481+ self.assertIn('Usage:', output)
482+ self.assertIn('process-job-source.py [options] JOB_SOURCE', output)
483+
484+ def test_empty_queue(self):
485+ # The script should just create a lockfile and exit if no jobs
486+ # are in the queue.
487+ returncode, output, error = run_script(
488+ self.script, ['IMembershipNotificationJobSource'])
489+ expected = (
490+ 'INFO Creating lockfile: .*launchpad-process-job-'
491+ 'source-IMembershipNotificationJobSource.lock.*')
492+ self.assertTextMatchesExpressionIgnoreWhitespace(expected, error)
493+
494+ def test_processed(self):
495+ # The script should output the number of jobs it processed.
496+ person = self.factory.makePerson(name='murdock')
497+ team = self.factory.makeTeam(name='a-team')
498+ login_person(team.teamowner)
499+ team.addMember(person, team.teamowner)
500+ membership_set = getUtility(ITeamMembershipSet)
501+ tm = membership_set.getByPersonAndTeam(person, team)
502+ tm.setStatus(TeamMembershipStatus.ADMIN, team.teamowner)
503+ transaction.commit()
504+ returncode, output, error = run_script(
505+ self.script, ['-v', 'IMembershipNotificationJobSource'])
506+ self.assertIn(
507+ ('DEBUG Running <MEMBERSHIP_NOTIFICATION branch job (1) '
508+ 'for murdock as part of a-team. status=Waiting>'),
509+ error)
510+ self.assertIn('DEBUG MembershipNotificationJob sent email', error)
511+ self.assertIn('Ran 1 MembershipNotificationJob jobs.', error)
512+
513+
514+class ProcessJobSourceGroupsTest(TestCaseWithFactory):
515+ """Test the process-job-source-groups.py script."""
516+ layer = LaunchpadScriptLayer
517+ script = 'cronscripts/process-job-source-groups.py'
518+
519+ def tearDown(self):
520+ super(ProcessJobSourceGroupsTest, self).tearDown()
521+ self.layer.force_dirty_database()
522+
523+ def test_missing_argument(self):
524+ # The script should display usage info when called without any
525+ # arguments.
526+ returncode, output, error = run_script(
527+ self.script, [], expect_returncode=1)
528+ self.assertIn(
529+ ('Usage: process-job-source-groups.py '
530+ '[ -e JOB_SOURCE ] GROUP [GROUP]...'),
531+ output)
532+ self.assertIn('-e JOB_SOURCE, --exclude=JOB_SOURCE', output)
533+ self.assertIn('At least one group must be specified.', output)
534+ self.assertIn('Group: MAIN\n IMembershipNotificationJobSource',
535+ output)
536+
537+ def test_empty_queue(self):
538+ # The script should just create a lockfile, launch a child for
539+ # each job source class, and then exit if no jobs are in the queue.
540+ returncode, output, error = run_script(self.script, ['MAIN'])
541+ expected = (
542+ '.*Creating lockfile:.*launchpad-processjobsourcegroups.lock'
543+ '.*Creating lockfile:.*launchpad-process-job-'
544+ 'source-IMembershipNotificationJobSource.lock.*')
545+ self.assertTextMatchesExpressionIgnoreWhitespace(expected, error)
546+
547+ def test_processed(self):
548+ # The script should output the number of jobs that have been
549+ # processed by its child processes.
550+ person = self.factory.makePerson(name='murdock')
551+ team = self.factory.makeTeam(name='a-team')
552+ login_person(team.teamowner)
553+ team.addMember(person, team.teamowner)
554+ membership_set = getUtility(ITeamMembershipSet)
555+ tm = membership_set.getByPersonAndTeam(person, team)
556+ tm.setStatus(TeamMembershipStatus.ADMIN, team.teamowner)
557+ transaction.commit()
558+ returncode, output, error = run_script(
559+ self.script, ['-v', '--wait', 'MAIN'])
560+ self.assertTextMatchesExpressionIgnoreWhitespace(
561+ ('DEBUG Running <MEMBERSHIP_NOTIFICATION branch job (.*) '
562+ 'for murdock as part of a-team. status=Waiting>'),
563+ error)
564+ self.assertIn('DEBUG MembershipNotificationJob sent email', error)
565+ self.assertIn('Ran 1 MembershipNotificationJob jobs.', error)
566
567=== modified file 'lib/lp/services/job/interfaces/job.py'
568--- lib/lp/services/job/interfaces/job.py 2010-08-20 20:31:18 +0000
569+++ lib/lp/services/job/interfaces/job.py 2010-10-07 14:14:12 +0000
570@@ -11,6 +11,7 @@
571 'IJob',
572 'IJobSource',
573 'IRunnableJob',
574+ 'ITwistedJobSource',
575 'JobStatus',
576 'LeaseHeld',
577 ]
578@@ -166,3 +167,10 @@
579
580 def contextManager():
581 """Get a context for running this kind of job in."""
582+
583+
584+class ITwistedJobSource(IJobSource):
585+ """Interface for a job source that is usable by the TwistedJobRunner."""
586+
587+ def get(id):
588+ """Get a job by its id."""
589
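The new `ITwistedJobSource.get(id)` method exists because `TwistedJobRunner` executes jobs in ampoule child processes, which receive only a job id and must re-fetch the job themselves. A standalone sketch of the contract; the in-memory store and `JobNotFound` are hypothetical, and the real implementation queries the master database store:

```python
class JobNotFound(Exception):
    """Raised when no job exists for the given id."""


class InMemoryJobSource:
    """Toy ITwistedJobSource: get(id) re-fetches a job by its key."""
    _jobs = {}

    @classmethod
    def add(cls, job_id, job):
        cls._jobs[job_id] = job

    @classmethod
    def get(cls, job_id):
        """Get a job by its id, per the ITwistedJobSource contract."""
        try:
            return cls._jobs[job_id]
        except KeyError:
            raise JobNotFound(job_id)


InMemoryJobSource.add(1, 'membership notification job')
```

A child process that knows only the id `1` can now recover the full job object before running it.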
590=== modified file 'lib/lp/services/job/runner.py'
591--- lib/lp/services/job/runner.py 2010-09-20 09:48:58 +0000
592+++ lib/lp/services/job/runner.py 2010-10-07 14:14:12 +0000
593@@ -77,7 +77,6 @@
594
595 # We redefine __eq__ and __ne__ here to prevent the security proxy
596 # from mucking up our comparisons in tests and elsewhere.
597-
598 def __eq__(self, job):
599 return (
600 self.__class__ is removeSecurityProxy(job.__class__)
601@@ -337,13 +336,12 @@
602 """Run Jobs via twisted."""
603
604 def __init__(self, job_source, dbuser, logger=None, error_utility=None):
605- env = {'PYTHONPATH': os.environ['PYTHONPATH'],
606- 'PATH': os.environ['PATH']}
607- lp_config = os.environ.get('LPCONFIG')
608- if lp_config is not None:
609- env['LPCONFIG'] = lp_config
610+ env = {'PATH': os.environ['PATH']}
611+ for name in ('PYTHONPATH', 'LPCONFIG'):
612+ if name in os.environ:
613+ env[name] = os.environ[name]
614 starter = main.ProcessStarter(
615- packages=('twisted', 'ampoule'), env=env)
616+ packages=('_pythonpath', 'twisted', 'ampoule'), env=env)
617 super(TwistedJobRunner, self).__init__(logger, error_utility)
618 self.job_source = job_source
619 import_name = '%s.%s' % (
620@@ -367,7 +365,8 @@
621 transaction.commit()
622 job_id = job.id
623 deadline = timegm(job.lease_expires.timetuple())
624- self.logger.debug('Running %r, lease expires %s', job, job.lease_expires)
625+ self.logger.debug(
626+ 'Running %r, lease expires %s', job, job.lease_expires)
627 deferred = self.pool.doWork(
628 RunJobCommand, job_id = job_id, _deadline=deadline)
629 def update(response):
630@@ -442,14 +441,21 @@
631 class JobCronScript(LaunchpadCronScript):
632 """Base class for scripts that run jobs."""
633
634- def __init__(self, runner_class=JobRunner, test_args=None,
635- script_name=None):
636- self.dbuser = getattr(config, self.config_name).dbuser
637- if script_name is None:
638- script_name = self.config_name
639+ config_name = None
640+
641+ def __init__(self, runner_class=JobRunner, test_args=None, name=None):
642 super(JobCronScript, self).__init__(
643- script_name, self.dbuser, test_args)
644- self.runner_class = runner_class
645+ name=name, dbuser=None, test_args=test_args)
646+ self._runner_class = runner_class
647+
648+ @property
649+ def dbuser(self):
650+ return self.config_section.dbuser
651+
652+ @property
653+ def runner_class(self):
654+ """Enable subclasses to override with command-line arguments."""
655+ return self._runner_class
656
657 def job_counts(self, jobs):
658 """Return a list of tuples containing the job name and counts."""
659@@ -458,8 +464,18 @@
660 counts[job.__class__.__name__] += 1
661 return sorted(counts.items())
662
663+ @property
664+ def config_section(self):
665+ return getattr(config, self.config_name)
666+
667 def main(self):
668- errorlog.globalErrorUtility.configure(self.config_name)
669+ section = self.config_section
670+ if (getattr(section, 'error_dir', None) is not None
671+ and getattr(section, 'oops_prefix', None) is not None
672+ and getattr(section, 'copy_to_zlog', None) is not None):
673+ # If the three variables are not set, we will let the error
674+ # utility default to using the [error_reports] config.
675+ errorlog.globalErrorUtility.configure(self.config_name)
676 job_source = getUtility(self.source_interface)
677 runner = self.runner_class.runFromSource(
678 job_source, self.dbuser, self.logger)
679
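The `JobCronScript` change above replaces an eagerly computed `dbuser` with properties resolved from the config section named by `config_name`, so subclasses or command-line options can redirect them late. A minimal sketch of that late-binding shape, with fake objects standing in for `canonical.config.config` and its sections (the section and dbuser names here are hypothetical):

```python
# Fake config objects standing in for canonical.config.config.
class FakeSection:
    dbuser = 'person-transfer-job'

class FakeConfig:
    sendteammembershipnotifications = FakeSection()

config = FakeConfig()


class JobCronScript:
    """Sketch: dbuser is resolved lazily from the named config section,
    not fixed at __init__ time, so overrides take effect late."""
    config_name = 'sendteammembershipnotifications'

    @property
    def config_section(self):
        return getattr(config, self.config_name)

    @property
    def dbuser(self):
        return self.config_section.dbuser
```

Because `dbuser` is a property, a subclass that rewrites `config_name` from a command-line argument changes the database user without touching `__init__`.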
680=== modified file 'lib/lp/services/scripts/base.py'
681--- lib/lp/services/scripts/base.py 2010-10-03 15:30:06 +0000
682+++ lib/lp/services/scripts/base.py 2010-10-07 14:14:12 +0000
683@@ -37,9 +37,6 @@
684 )
685 from canonical.lp import initZopeless
686
687-from lp.services.log.mappingfilter import MappingFilter
688-from lp.services.log.nullhandler import NullHandler
689-
690
691 LOCK_PATH = "/var/lock/"
692 UTC = pytz.UTC
693@@ -152,22 +149,21 @@
694 useful in test scripts.
695 """
696 if name is None:
697- self.name = self.__class__.__name__.lower()
698- else:
699- self.name = name
700-
701- self.dbuser = dbuser
702-
703- if self.description is None:
704- description = self.__doc__
705- else:
706- description = self.description
707+ self._name = self.__class__.__name__.lower()
708+ else:
709+ self._name = name
710+
711+ self._dbuser = dbuser
712
713 # The construction of the option parser is a bit roundabout, but
714 # at least it's isolated here. First we build the parser, then
715 # we add options that our logger object uses, then call our
716 # option-parsing hook, and finally pull out and store the
717 # supplied options and args.
718+ if self.description is None:
719+ description = self.__doc__
720+ else:
721+ description = self.description
722 self.parser = OptionParser(usage=self.usage,
723 description=description)
724 scripts.logger_options(self.parser, default=self.loglevel)
725@@ -177,9 +173,23 @@
726 "profiling stats in FILE."))
727 self.add_my_options()
728 self.options, self.args = self.parser.parse_args(args=test_args)
729- self.logger = scripts.logger(self.options, name)
730-
731- self.lockfilepath = os.path.join(LOCK_PATH, self.lockfilename)
732+
733+ # Enable subclasses to easily override these __init__()
734+ # arguments using command-line arguments.
735+ self.handle_options()
736+
737+ def handle_options(self):
738+ self.logger = scripts.logger(self.options, self.name)
739+
740+ @property
741+ def name(self):
742+ """Enable subclasses to override with command-line arguments."""
743+ return self._name
744+
745+ @property
746+ def dbuser(self):
747+ """Enable subclasses to override with command-line arguments."""
748+ return self._dbuser
749
750 #
751 # Hooks that we expect users to redefine.
752@@ -226,6 +236,10 @@
753 """
754 return "launchpad-%s.lock" % self.name
755
756+ @property
757+ def lockfilepath(self):
758+ return os.path.join(LOCK_PATH, self.lockfilename)
759+
760 def setup_lock(self):
761 """Create lockfile.
762
763@@ -275,8 +289,10 @@
764
765 @log_unhandled_exception_and_exit
766 def run(self, use_web_security=False, implicit_begin=True,
767- isolation=ISOLATION_LEVEL_DEFAULT):
768+ isolation=None):
769 """Actually run the script, executing zcml and initZopeless."""
770+ if isolation is None:
771+ isolation = ISOLATION_LEVEL_DEFAULT
772 self._init_zca(use_web_security=use_web_security)
773 self._init_db(implicit_begin=implicit_begin, isolation=isolation)
774
775@@ -339,17 +355,15 @@
776 """Logs successful script runs in the database."""
777
778 def __init__(self, name=None, dbuser=None, test_args=None):
779- """Initialize, and sys.exit() if the cronscript is disabled.
780-
781- Rather than hand editing crontab files, cronscripts can be
782- enabled and disabled using a config file.
783-
784- The control file location is specified by
785- config.canonical.cron_control_url.
786- """
787 super(LaunchpadCronScript, self).__init__(name, dbuser, test_args)
788
789- name = self.name
790+        # self.name is used instead of the name argument, since it may
791+ # have been overridden by command-line parameters or by
792+ # overriding the name property.
793+ enabled = cronscript_enabled(
794+ config.canonical.cron_control_url, self.name, self.logger)
795+ if not enabled:
796+ sys.exit(0)
797
798 # Configure the IErrorReportingUtility we use with defaults.
799 # Scripts can override this if they want.
800@@ -359,12 +373,10 @@
801 globalErrorUtility.copy_to_zlog = False
802
803 # WARN and higher log messages should generate OOPS reports.
804- logging.getLogger().addHandler(OopsHandler(name))
805-
806- enabled = cronscript_enabled(
807- config.canonical.cron_control_url, name, self.logger)
808- if not enabled:
809- sys.exit(0)
810+        # self.name is used instead of the name argument, since it may
811+ # have been overridden by command-line parameters or by
812+ # overriding the name property.
813+ logging.getLogger().addHandler(OopsHandler(self.name))
814
815 @log_unhandled_exception_and_exit
816 def record_activity(self, date_started, date_completed):
817
818=== modified file 'lib/lp/testing/__init__.py'
819--- lib/lp/testing/__init__.py 2010-09-29 20:29:43 +0000
820+++ lib/lp/testing/__init__.py 2010-10-07 14:14:12 +0000
821@@ -441,6 +441,16 @@
822 "Expected %s to be %s, but it was %s."
823 % (attribute_name, date, getattr(sql_object, attribute_name)))
824
825+ def assertTextMatchesExpressionIgnoreWhitespace(self,
826+ regular_expression_txt,
827+ text):
828+ def normalise_whitespace(text):
829+ return ' '.join(text.split())
830+ pattern = re.compile(
831+ normalise_whitespace(regular_expression_txt), re.S)
832+ self.assertIsNot(
833+ None, pattern.search(normalise_whitespace(text)), text)
834+
835 def assertIsInstance(self, instance, assert_class):
836 """Assert that an instance is an instance of assert_class.
837
838@@ -686,16 +696,6 @@
839 super(BrowserTestCase, self).setUp()
840 self.user = self.factory.makePerson(password='test')
841
842- def assertTextMatchesExpressionIgnoreWhitespace(self,
843- regular_expression_txt,
844- text):
845- def normalise_whitespace(text):
846- return ' '.join(text.split())
847- pattern = re.compile(
848- normalise_whitespace(regular_expression_txt), re.S)
849- self.assertIsNot(
850- None, pattern.search(normalise_whitespace(text)), text)
851-
852 def getViewBrowser(self, context, view_name=None, no_login=False):
853 login(ANONYMOUS)
854 url = canonical_url(context, view_name=view_name)
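The last hunk promotes `assertTextMatchesExpressionIgnoreWhitespace` from `BrowserTestCase` to the base `TestCase` so the new cron-script tests can use it. Its logic, extracted into a standalone function for illustration:

```python
import re

def text_matches_ignore_whitespace(regular_expression_txt, text):
    """True if the regex matches the text after collapsing every run
    of whitespace to a single space, as the moved test helper does."""
    def normalise_whitespace(s):
        return ' '.join(s.split())
    pattern = re.compile(
        normalise_whitespace(regular_expression_txt), re.S)
    return pattern.search(normalise_whitespace(text)) is not None
```

This is what lets the tests above match multi-line log output such as `Creating lockfile: ...` without caring how the logger wraps it.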
