Merge lp:~james-w/launchpad/copy-archive-job into lp:launchpad/db-devel
| Status: | Merged |
|---|---|
| Approved by: | Edwin Grubbs on 2010-07-28 |
| Approved revision: | no longer in the source branch. |
| Merged at revision: | 9598 |
| Proposed branch: | lp:~james-w/launchpad/copy-archive-job |
| Merge into: | lp:launchpad/db-devel |
| Prerequisite: | lp:~james-w/launchpad/archive-job-db |
| Diff against target: | 800 lines (+765/-0), 6 files modified: database/schema/security.cfg (+1/-0), lib/lp/soyuz/interfaces/archivejob.py (+55/-0), lib/lp/soyuz/model/archivejob.py (+130/-0), lib/lp/soyuz/model/copyarchivejob.py (+140/-0), lib/lp/soyuz/tests/test_archivejob.py (+48/-0), lib/lp/soyuz/tests/test_copyarchivejob.py (+391/-0) |
| To merge this branch: | bzr merge lp:~james-w/launchpad/copy-archive-job |
| Related bugs: | |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Edwin Grubbs (community) | code | 2010-06-24 | Approve on 2010-07-28 |
## Commit Message
Add CopyArchiveJob, a job to, um... copy archives.
## Description of the Change

### Summary
This adds the model classes and code for CopyArchiveJob, with no users yet.
Copying archives is currently done by a script, which has a table for
storing requests so that they could be deferred if desired, though
deferral was never fully implemented.
This lays the groundwork for using the job system to copy archives.
### Proposed fix

This is mainly boilerplate code for a new job type, IArchiveJob, plus the
implementation of a single concrete job type. We can have the script
use this later, and write the cronscript to process the jobs.
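As a rough illustration of that boilerplate (the class and method names follow the branch, but the bodies are plain-Python stand-ins, not the real Launchpad/Storm code), a base job class carries the archive reference and each concrete job type exposes a `create()` classmethod:

```python
# Hedged sketch of the job-type pattern the branch adds; illustrative only.
class ArchiveJob:
    """A job related to an archive."""

    def __init__(self, archive, job_type):
        self.archive = archive
        self.job_type = job_type


class CopyArchiveJob(ArchiveJob):
    """A concrete job type: copy one archive into another."""

    class_job_type = "COPY_ARCHIVE"

    @classmethod
    def create(cls, target_archive):
        # The real create() also records the copy parameters as metadata.
        return cls(target_archive, cls.class_job_type)
```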
### Pre-implementation notes
Julian confirmed that using the job system is the right thing to do.
### Implementation details

Unlike the other job types, we don't return the existing request when the
same target archive is chosen; we raise an error instead. I chose this
because it only makes sense for a single copy to target an archive at one
time, and since these jobs are currently created only in response to a user
request, we should tell the user what is going on.
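A minimal sketch of that guard, assuming a hypothetical exception class and an in-memory registry in place of the real database check:

```python
# Sketch of the duplicate-target behaviour described above; the exception
# name and module-level registry are illustrative, not Launchpad API.
class CopyArchiveJobExists(Exception):
    """A copy job already targets this archive."""


_pending = {}  # target archive name -> metadata of the pending copy job


def create_copy_archive_job(target, source):
    if target in _pending:
        # Only one copy should target an archive at a time, and since
        # jobs are created on user request, the user is told why the
        # request was refused rather than being handed the old job.
        raise CopyArchiveJobExists(
            "A copy job for archive %r is already pending." % target)
    _pending[target] = {"source": source}
    return _pending[target]
```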
We make a lot more use of the metadata than the other job types do. While
that is inconvenient and leaves room for errors, it is a lot easier than
creating a very specific job type with database references for all of
these things.
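The trade-off can be sketched as follows: arbitrary job parameters are serialised into one text column instead of adding a database reference per parameter (the stdlib `json` stands in for the `simplejson` the branch imports):

```python
# Illustration of the metadata trade-off discussed above.
import json


def store_metadata(params):
    """Serialise job parameters for storage in a single text column."""
    return json.dumps(params)


def load_metadata(raw):
    """Recover the parameters; note that keys and types are not checked,
    which is the "room for errors" mentioned above."""
    return json.loads(raw)
```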
We rely on the PackageCloner tests for most of the behaviour of run();
here we test just enough to be fairly confident that we are passing the
right arguments to PackageCloner.
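That testing strategy might look like the following sketch: substitute a recording fake for the cloner and assert only on the arguments, leaving the cloning behaviour to PackageCloner's own tests (the `clonePackages` name and signature here are assumptions, not the real API):

```python
# Recording fake: captures calls instead of cloning anything.
class FakeCloner:
    def __init__(self):
        self.calls = []

    def clonePackages(self, source, target, merge=False):
        self.calls.append((source, target, merge))


def run_copy_job(metadata, cloner):
    """What a CopyArchiveJob.run() might do with its stored metadata."""
    cloner.clonePackages(
        metadata["source"], metadata["target"], merge=metadata["merge"])
```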
### Tests

```
./bin/test -s lp.soyuz.tests -m test_copyarchivejob
./bin/test -s lp.soyuz.tests -m test_archivejob
```
### Demo and Q/A
This will need QA when we convert the populate_archive script to use it.
### lint

```
== Pyflakes notices ==

lib/lp/
    112: local variable 'ignore_this' is assigned to but never used
    279: local variable 'ignore_result' is assigned to but never used

== Pylint notices ==

lib/lp/
    39: [E0211, IArchiveJob.
    46: [E0213, IArchiveJobSour
lib/lp/
    29: [E1002, CopyArchiveJob.
```
Thanks,
James
**James Westby (james-w) wrote:**

**Guilherme Salgado (salgado) wrote:**
I haven't done a thorough review as I'm not familiar with the job system, so I wouldn't be comfortable doing one. I have a few comments, though.
I assume the new tables created to describe a new job are standardised, but we need to have the new one reviewed by either stub or jml.
It's weird to see new code raising SQLObjectNotFound. Does the job system expect that?
I think your tests can do with just the DatabaseFunctio
I'll be happy to do a thorough review tomorrow, if you think it's worth it, but I'd like someone more familiar with the job system to review it as well.
**James Westby (james-w) wrote:**
Hi,
I adjusted the layer, and the db tables have been approved.
The SQLObjectNotFound is what the code jobs use, and no-one was
able to suggest anything better when I asked on IRC just now.
Thanks,
James
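The lookup convention James refers to can be sketched like this, with a stand-in for sqlobject's exception (the helper name and registry are hypothetical; the real code queries a Storm store):

```python
# Sketch of the not-found convention borrowed from the code jobs.
class SQLObjectNotFound(Exception):
    """Stand-in for sqlobject.SQLObjectNotFound."""


_jobs = {1: "copy archive job"}


def get_archive_job(job_id):
    """Return the job with the given id, or raise SQLObjectNotFound."""
    try:
        return _jobs[job_id]
    except KeyError:
        raise SQLObjectNotFound("No ArchiveJob with id %s" % job_id)
```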
**Edwin Grubbs (edwin-grubbs) wrote:**

Hi James,

This branch looks very good. I have minor comments below.
merge-conditional
-Edwin
>=== added file 'lib/lp/
>--- lib/lp/
>+++ lib/lp/
>@@ -0,0 +1,55 @@
>+from zope.interface import Attribute, Interface
>+from zope.schema import Int, Object
>+
>+from canonical.launchpad import _
>+
>+from lazr.enum import DBEnumeratedType, DBItem
>+
>+from lp.services.
>+from lp.soyuz.
>+
>+
>+class ArchiveJobType(
>+ """Values that IArchiveJob.
>+
>+ COPY_ARCHIVE = DBItem(0, """
>+ Create a copy archive.
>+
>+ This job creates a copy archive from the current state of
>+ the archive.
>+ """)
>+
>+
>+class IArchiveJob(
>+ """A Job related to an Archive."""
>+
>+ id = Int(
>+ title=_('DB ID'), required=True, readonly=True,
>+ description=_("The tracking number for this job."))
>+
>+ archive = Object(
>+ title=_('The Archive this job is about.'), schema=IArchive,
It looks peculiar to occasionally capitalize the noun. For example,
"job" isn't capitalized here, but it is capitalized in the following title.
I think capitalizing to match the class names is better left to doc
strings, since field titles are often exposed to the user, even if that
would never happen for this interface.
>+ required=True)
>+
>+ job = Object(
>+ title=_('The common Job attributes'), schema=IJob, required=True)
>+
>+ metadata = Attribute('A dict of data about the job.')
>+
>+ def destroySelf():
>+ """Destroy this object."""
>+
>+
>+class IArchiveJobSour
>+ """An interface for acquiring IArchiveJobs."""
>+
>+ def create(archive):
>+ """Create a new IArchiveJobs for an archive."""
>+
>+
>+class ICopyArchiveJob
>+ """A Job to copy archives."""
>+
>+
>+class ICopyArchiveJob
>+ """Interface for acquiring CopyArchiveJobs."""
>
>=== added file 'lib/lp/
>--- lib/lp/
>+++ lib/lp/
>@@ -0,0 +1,130 @@
>+
>+__metaclass__ = object
>+
>+import simplejson
>+
>+from sqlobject import SQLObjectNotFound
>+from storm.base import Storm
>+from storm.expr import And
>+from storm.locals import Int, Reference, Unicode
>+
>+from zope.component import getUtility
>+from zope.interface import classProvides, implements
>+
>+from canonical.
>+from canonical.
>+ DEFAULT_FLAVOR, IStoreSelector, MAIN_STORE, MASTER_FLAVOR)
>+
>+from lazr.delegates import delegates
>+
>+from lp.services.
>+from lp.services.
>+from lp.soyuz.
>+ ArchiveJobType, IArchiveJob, IArchiveJobSource)
>+from lp.soyuz.
>+
>+
>+class ArchiveJob(Storm):
>+ """Base class for jobs r...
**James Westby (james-w) wrote:**
> Here you are creating a SPPH record for the source_archive, and
> makeSourceAndTa
> test the merge, shouldn't the two packages be in different archives?
It's not the best test, as the setup runs a job.
What we do is create a source archive, and then clone it to a target archive,
we then modify the source archive and request a "merge" operation, which should
copy the new publication we create in the source.
I'm not sure of a better way to write the test to make that clear, so I
added comments.
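The flow James describes can be restated in a toy form (publications modelled as plain name-version strings; the function names are illustrative): clone the source, add a new publication to the source, and a merge should copy only what the target is missing.

```python
# Toy model of the clone-then-merge test setup described above.
def clone_archive(source_pubs):
    """Copy every publication from the source into a new target."""
    return set(source_pubs)


def merge_copy(source_pubs, target_pubs):
    """Return the publications a merge operation would copy: those
    present in the source but absent from the target."""
    return set(source_pubs) - set(target_pubs)
```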
All other changes made as suggested.
Thanks,
James

Please ignore the schema/sampledata changes; they are in the prerequisite branch.
Thanks,
James