Jeroen T. Vermeulen wrote:
> Actually this branch resolves more long-standing gripes with the
> translations import auto-approver than just that one bug.

Was there a pre-implementation call?

> But let's start with the original bug. An entry on the translations
> import queue can be in one of several states: Needs Review, Approved,
> Imported, Failed, Blocked, or Deleted. Failed is the state it ends up
> in when its import failed, typically because of a syntax error.
>
> Successfully imported entries get gc'ed off the queue after a few days.
> Currently, Failed ones do not. The idea is that its owner uploads an
> updated copy of the same file; it reuses the same queue entry; it gets
> processed; it ends up successfully Imported; and so finally the entry
> drops out of the queue. But in practice, especially for Ubuntu, Failed
> entries lie around effectively forever.
>
> So I fixed that. We want to clean up these entries less aggressively
> than we do Imported ones, so the owner gets a fair chance to notice and
> fix the problem (and sometimes, so that we get to restart a bunch of
> entries that failed because of operational problems). This branch makes
> the garbage-collection of entries in various states data-driven: 3 days
> for Imported or Deleted entries, one month for Failed ones. There is no
> need to test each state's "grace period" separately since we'd only be
> unit-testing the contents of a dict.
>
>
> == Other gripes ==
>
> The approver's LaunchpadCronScript still lived in the same module as the
> script it was originally spliced out of. Bit of a Zeus-and-Athena
> scenario there. I gave it its own module in lp.translations.scripts.
> Also, the class in there now inherits from LaunchpadCronScript instead
> of having it instantiated from a separate LaunchpadCronScript-derived
> class in the main script file.
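The before/after shape of that refactoring, as a minimal sketch. The base class here only mimics what LaunchpadCronScript's lock_and_run() does; it is a stand-in for illustration, not the real Launchpad API:

```python
class CronScriptBase:
    """Stand-in for LaunchpadCronScript, for illustration only."""

    def __init__(self, name, dbuser=None):
        self.name = name
        self.dbuser = dbuser
        self.log = []

    def lock_and_run(self):
        # The real base class also takes out a lock file and manages
        # transactions; here we just invoke main().
        self.main()


# After the refactoring the process class *is* the cron script; there is
# no separate RosettaImportApprover wrapper instantiating it.
class AutoApproveProcess(CronScriptBase):
    def main(self):
        self.log.append('auto-approval run')


script = AutoApproveProcess('rosetta-approve-imports')
script.lock_and_run()
```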
Do you think AutoApproveProcess is still a good name, since it's also
responsible for other queue maintenance? What about ProcessImportQueue
or something?

> Unfortunately this did move some debug
> output into the doctest; I switched from the FakeLogger to the
> MockLogger so I could set a higher debug level and avoid this.
>
> You may have noticed that where transaction managers are passed around,
> I changed their names from ztm (Zopeless Transaction Manager) to txn.
> We don't use the ztm any more. Actually txn is just an alias for the
> transaction module, so the argument isn't needed at all now. But this
> is a relatively elegant way of telling a method whether you want it to
> commit transactions. Passing a boolean for "please commit" is just too
> ugly.

But functionally equivalent? (Would we ever pass in a different
transaction manager?)

> Oh, and another biggie: entries for obsolete distroseries. Right now
> perhaps a fifth of the queue (certainly over 25K entries, out of 150K!)
> are for obsolete Ubuntu releases that will never need any further
> translations updates--they won't even get security updates. So I made
> the script do some cleanup on those entries.
>
> Deleting all of these would cause major database mayhem every time an
> Ubuntu release slipped into obsolescence, so this cleanup is limited to
> 100 entries per run. Experience shows that that's not a noticeable load
> for database replication, yet it'll get rid of all of these entries in
> a few days. I didn't bother testing this since it's an operational
> detail, but I did consistently test that none of these cleanups affect
> rows that shouldn't be affected.
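To make the data-driven grace periods concrete, here is a rough standalone sketch of the idea. ENTRY_GC_AGE and is_expired are illustrative names mirroring the 3-day / 30-day policy described above, not the actual branch code:

```python
from datetime import datetime, timedelta

# Illustrative grace periods per terminal status; mirrors the policy
# described in the review (3 days for Deleted/Imported, a month for
# Failed), not the actual Launchpad dict.
ENTRY_GC_AGE = {
    'Deleted': timedelta(days=3),
    'Imported': timedelta(days=3),
    'Failed': timedelta(days=30),
}


def is_expired(status, date_status_changed, now=None):
    """True if an entry in `status` has outlived its grace period.

    Entries in states with no grace period (Needs Review, Approved,
    Blocked) are never garbage-collected, no matter how old.
    """
    if status not in ENTRY_GC_AGE:
        return False
    if now is None:
        now = datetime.utcnow()
    return date_status_changed < now - ENTRY_GC_AGE[status]
```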
> === modified file 'cronscripts/rosetta-approve-imports.py'
> --- cronscripts/rosetta-approve-imports.py 2009-07-17 00:26:05 +0000
> +++ cronscripts/rosetta-approve-imports.py 2009-09-18 07:39:51 +0000
> @@ -10,26 +10,10 @@
>  import _pythonpath
>
>  from canonical.config import config
> -from canonical.database.sqlbase import ISOLATION_LEVEL_READ_COMMITTED
> -from lp.translations.scripts.po_import import AutoApproveProcess
> -from lp.services.scripts.base import LaunchpadCronScript
> -
> -
> -class RosettaImportApprover(LaunchpadCronScript):
> -    def main(self):
> -        self.txn.set_isolation_level(ISOLATION_LEVEL_READ_COMMITTED)
> -        process = AutoApproveProcess(self.txn, self.logger)
> -        self.logger.debug('Starting auto-approval of translation imports')
> -        process.run()
> -        self.logger.debug('Completed auto-approval of translation imports')
> +from lp.translations.scripts.import_approval import AutoApproveProcess
>
>
>  if __name__ == '__main__':
> -    script = RosettaImportApprover('rosetta-approve-imports',
> -        dbuser=config.poimport.dbuser)
> -    script.lock_or_quit()
> -    try:
> -        script.run()
> -    finally:
> -        script.unlock()
> -
> +    script = AutoApproveProcess(
> +        'rosetta-approve-imports', dbuser=config.poimport.dbuser)
> +    script.lock_and_run()

It seems a bit odd to be wrapping the script in an
if __name__ == '__main__' block. It can't be loaded as a module because
of its name, and even if it could, I don't see any value in it.

> === modified file 'lib/lp/translations/doc/poimport.txt'
> --- lib/lp/translations/doc/poimport.txt 2009-09-03 09:10:49 +0000
> +++ lib/lp/translations/doc/poimport.txt 2009-09-18 13:48:00 +0000
> @@ -15,8 +15,9 @@
>      ...     ITranslationImportQueue, RosettaImportStatus)
>      >>> from lp.registry.model.sourcepackagename import SourcePackageName
>      >>> from lp.translations.model.potemplate import POTemplateSubset
> -    >>> from lp.translations.scripts.po_import import (
> -    ...     AutoApproveProcess, ImportProcess)
> +    >>> from lp.translations.scripts.po_import import ImportProcess
> +    >>> from lp.translations.scripts.import_approval import (
> +    ...     AutoApproveProcess)
>      >>> import datetime
>      >>> import pytz
>      >>> UTC = pytz.timezone('UTC')
> @@ -682,9 +683,13 @@
>  entries from the queue. Running at this point, all it does is purge the
>  two hand-approved Welsh translations that have just been imported.
>
> -    >>> process = AutoApproveProcess(transaction, FakeLogger())
> -    >>> process.run()
> -    INFO Removed 2 entries from the queue.
> +    >>> import logging
> +    >>> from canonical.launchpad.ftests.logger import MockLogger
> +    >>> process = AutoApproveProcess('approver', test_args=[])
> +    >>> process.logger = MockLogger()
> +    >>> process.logger.setLevel(logging.INFO)
> +    >>> process.main()
> +    log> Removed 2 entries from the queue.
>      >>> transaction.commit()
>
>  If users upload two versions of the same file, they are imported in the
> @@ -765,9 +770,11 @@
>  submitted translations and approves them for import based on some
>  heuristic intelligence.
>
> -    >>> process = AutoApproveProcess(transaction, FakeLogger())
> -    >>> process.run()
> -    INFO The automatic approval system approved some entries.
> +    >>> process = AutoApproveProcess('approver', test_args=[])
> +    >>> process.logger = MockLogger()
> +    >>> process.logger.setLevel(logging.INFO)
> +    >>> process.main()
> +    log> The automatic approval system approved some entries.
>      >>> print entry.status.name
>      APPROVED
>      >>> syncUpdate(entry)
>
> === modified file 'lib/lp/translations/interfaces/translationimportqueue.py'
> --- lib/lp/translations/interfaces/translationimportqueue.py 2009-08-14 16:35:06 +0000
> +++ lib/lp/translations/interfaces/translationimportqueue.py 2009-09-18 07:39:51 +0000
> @@ -399,30 +399,37 @@
>          All returned items will implement `IHasTranslationImports`.
>          """
>
> -    def executeOptimisticApprovals(ztm):
> -        """Try to move entries from the Needs Review status to Approved one.
> +    def executeOptimisticApprovals(txn=None):
> +        """Try to approve Needs-Review entries.
>
> -        :arg ztm: Zope transaction manager object.
> +        :arg txn: Optional transaction manager. If given, will be
> +            committed regularly.
>
>          This method moves all entries that we know where should they be
>          imported from the Needs Review status to the Accepted one.
>          """
>
> -    def executeOptimisticBlock(ztm):
> +    def executeOptimisticBlock(txn=None):
>          """Try to move entries from the Needs Review status to Blocked one.
>
> -        :arg ztm: Zope transaction manager object or None.
> -
> -        This method moves all .po entries that are on the same directory that
> -        a .pot entry that has the status Blocked to that same status.
> -
> -        Return the number of items blocked.
> +        :arg txn: Optional transaction manager. If given, will be
> +            committed regularly.
> +
> +        This method moves uploaded translations for Blocked templates to
> +        the Blocked status as well. This lets you block a template plus
> +        all its present or future translations in one go.
> +
> +        :return: The number of items blocked.
>          """
>
>      def cleanUpQueue():
> -        """Remove old DELETED and IMPORTED entries.
> -
> -        Only entries older than 5 days will be removed.
> +        """Remove old entries in terminal states.
> +
> +        This "garbage-collects" entries from the queue based on their
> +        status (e.g. Deleted and Imported ones) and how long they have
> +        been in that status.
> +
> +        :return: The number of entries deleted.
> """ > > def remove(entry): > > === modified file 'lib/lp/translations/model/translationimportqueue.py' > --- lib/lp/translations/model/translationimportqueue.py 2009-09-03 05:14:07 +0000 > +++ lib/lp/translations/model/translationimportqueue.py 2009-09-18 14:11:35 +0000 > @@ -23,6 +23,7 @@ > from zope.interface import implements > from zope.component import getUtility > from sqlobject import SQLObjectNotFound, StringCol, ForeignKey, BoolCol > +from storm.expr import And, Or > from storm.locals import Int, Reference > > from canonical.database.sqlbase import ( > @@ -32,9 +33,11 @@ > from canonical.database.enumcol import EnumCol > from canonical.launchpad.helpers import shortlist > from canonical.launchpad.interfaces.launchpad import ILaunchpadCelebrities > +from canonical.launchpad.interfaces.lpstorm import IMasterStore > from canonical.launchpad.webapp.interfaces import NotFoundError > from lp.registry.interfaces.distribution import IDistribution > -from lp.registry.interfaces.distroseries import IDistroSeries > +from lp.registry.interfaces.distroseries import ( > + IDistroSeries, DistroSeriesStatus) > from lp.registry.interfaces.person import IPerson > from lp.registry.interfaces.product import IProduct > from lp.registry.interfaces.productseries import IProductSeries > @@ -61,9 +64,13 @@ > from lp.registry.interfaces.person import validate_public_person > > > -# Number of days when the DELETED and IMPORTED entries are removed from the > +# Number of days when entries with terminal statuses are removed from the > # queue. > -DAYS_TO_KEEP = 3 > +entry_gc_age = { > + RosettaImportStatus.DELETED: datetime.timedelta(days=3), > + RosettaImportStatus.IMPORTED: datetime.timedelta(days=3), > + RosettaImportStatus.FAILED: datetime.timedelta(days=30), > +} Does it make sense to specify a timedelta here? 
If you always specify your durations in days, it's less repetitive to
convert to a timedelta in _cleanUpObsoleteEntries.

If you're specifying a timedelta, shouldn't the comment say "Length of
time" instead of "Number of days"?

Is FAILED really a terminal status? It seems like it's possibly not
terminal, and that's why you're delaying 30 days.

>
>
>  def is_gettext_name(path):
> @@ -1068,7 +1075,7 @@
>
>          return distroseriess + products
>
> -    def executeOptimisticApprovals(self, ztm):
> +    def executeOptimisticApprovals(self, txn=None):
>          """See ITranslationImportQueue."""
>          there_are_entries_approved = False
>          importer = getUtility(ITranslationImporter)
> @@ -1108,12 +1115,13 @@
>          # Already know where it should be imported. The entry is approved
>          # automatically.
>          entry.setStatus(RosettaImportStatus.APPROVED)
> -        # Do the commit to save the changes.
> -        ztm.commit()
> +
> +        if txn is not None:
> +            txn.commit()
>
>          return there_are_entries_approved
>
> -    def executeOptimisticBlock(self, ztm=None):
> +    def executeOptimisticBlock(self, txn=None):
>          """See ITranslationImportQueue."""
>          importer = getUtility(ITranslationImporter)
>          num_blocked = 0
> @@ -1143,40 +1151,79 @@
>          # blocked, so we can block it too.
>          entry.setStatus(RosettaImportStatus.BLOCKED)
>          num_blocked += 1
> -        if ztm is not None:
> -            # Do the commit to save the changes.
> -            ztm.commit()
> +        if txn is not None:
> +            txn.commit()
>
>          return num_blocked
>
> +    def _cleanUpObsoleteEntries(self, store):
> +        """Delete obsolete queue entries.
> +
> +        :param store: The Store to delete from.
> +        :return: Number of entries deleted.
> + """ > + now = datetime.datetime.now(pytz.UTC) > + deletion_criteria = False > + for status, gc_age in entry_gc_age.iteritems(): > + cutoff = now - gc_age > + deletion_criteria = Or( > + deletion_criteria, And( > + TranslationImportQueueEntry.status == status, > + TranslationImportQueueEntry.date_status_changed < cutoff)) > + > + entries = store.find(TranslationImportQueueEntry, deletion_criteria) I find the way the deletion_criteria is constructed a bit surprising. What do you think about building up a list of clauses first, and then ORing them? For example: deletion_clauses = [] for status, gc_age in entry_gc_age.iteritems(): cutoff = now - gc_age deletion_clauses.append(And( TranslationImportQueueEntry.status == status, TranslationImportQueueEntry.date_status_changed < cutoff)) entries = store.find( TranslationImportQueueEntry, Or(*deletion_clauses)) > + return entries.remove() > + > + def _cleanUpInactiveProductEntries(self, store): > + """Delete queue entries for deactivated `Product`s. > + > + :param store: The Store to delete from. > + :return: Number of entries deleted. > + """ > + # XXX JeroenVermeulen 2009-09-18 bug=271938: Stormify this once > + # the Storm remove() syntax starts working properly for joins. > + cur = cursor() > + cur.execute(""" > + DELETE FROM TranslationImportQueueEntry AS Entry > + USING ProductSeries, Product > + WHERE > + ProductSeries.id = Entry.productseries AND > + Product.id = ProductSeries.product AND > + Product.active IS FALSE > + """) > + return cur.rowcount > + > + def _cleanUpObsoleteDistroEntries(self, store): > + """Delete some queue entries for obsolete `DistroSeries`. > + > + :param store: The Store to delete from. > + :return: Number of entries deleted. > + """ > + # XXX JeroenVermeulen 2009-09-18 bug=271938,432484: Stormify > + # this once Storm's remove() supports joins and slices. 
> +        cur = cursor()
> +        cur.execute("""
> +            DELETE FROM TranslationImportQueueEntry
> +            WHERE id IN (
> +                SELECT Entry.id
> +                FROM TranslationImportQueueEntry Entry
> +                JOIN DistroSeries ON
> +                    DistroSeries.id = Entry.distroseries
> +                JOIN Distribution ON
> +                    Distribution.id = DistroSeries.distribution
> +                WHERE DistroSeries.releasestatus = %s
> +                LIMIT 100)
> +            """ % quote(DistroSeriesStatus.OBSOLETE))
> +        return cur.rowcount
> +
>      def cleanUpQueue(self):
> -        """See ITranslationImportQueue."""
> -        cur = cursor()
> -
> -        # Delete outdated DELETED and IMPORTED entries.
> -        delta = datetime.timedelta(DAYS_TO_KEEP)
> -        last_date = datetime.datetime.utcnow() - delta
> -        cur.execute("""
> -            DELETE FROM TranslationImportQueueEntry
> -            WHERE
> -                (status = %s OR status = %s) AND date_status_changed < %s
> -            """ % sqlvalues(RosettaImportStatus.DELETED.value,
> -                RosettaImportStatus.IMPORTED.value,
> -                last_date))
> -        n_entries = cur.rowcount
> -
> -        # Delete entries belonging to inactive product series.
> -        cur.execute("""
> -            DELETE FROM TranslationImportQueueEntry AS entry
> -            USING ProductSeries AS series, Product AS product
> -            WHERE
> -                entry.productseries = series.id AND
> -                series.product = product.id AND
> -                product.active IS FALSE
> -            """)
> -        n_entries += cur.rowcount
> -
> -        return n_entries
> +        """See `ITranslationImportQueue`."""
> +        store = IMasterStore(TranslationImportQueueEntry)
> +
> +        return (
> +            self._cleanUpObsoleteEntries(store) +
> +            self._cleanUpInactiveProductEntries(store) +
> +            self._cleanUpObsoleteDistroEntries(store))
>
>      def remove(self, entry):
>          """See ITranslationImportQueue."""
>
> === added file 'lib/lp/translations/scripts/import_approval.py'

Since this does more than approving imports, is import_approval a good
name? Maybe queue_gardener?
> --- lib/lp/translations/scripts/import_approval.py 1970-01-01 00:00:00 +0000
> +++ lib/lp/translations/scripts/import_approval.py 2009-09-18 13:48:00 +0000
> @@ -0,0 +1,50 @@
> +# Copyright 2009 Canonical Ltd. This software is licensed under the
> +# GNU Affero General Public License version 3 (see the file LICENSE).
> +
> +"""Translations auto-approval script."""
> +
> +__metaclass__ = type
> +
> +__all__ = [
> +    'AutoApproveProcess',
> +    ]
> +
> +from zope.component import getUtility
> +
> +from lp.services.scripts.base import LaunchpadCronScript
> +from lp.translations.interfaces.translationimportqueue import (
> +    ITranslationImportQueue)
> +
> +
> +class AutoApproveProcess(LaunchpadCronScript):
> +    """Automated gardening for the Translations import queue."""
> +    def main(self):
> +        """Manage import queue.
> +
> +        Approve uploads that can be approved automatically.
> +        Garbage-collect ones that are no longer needed. Block
> +        translations on the queue for templates that are blocked.
> +        """
> +        self.logger.debug("Starting auto-approval of translation imports")
> +
> +        translation_import_queue = getUtility(ITranslationImportQueue)
> +
> +        if translation_import_queue.executeOptimisticApprovals(self.txn):
> +            self.logger.info(
> +                'The automatic approval system approved some entries.')
> +
> +        removed_entries = translation_import_queue.cleanUpQueue()
> +        if removed_entries > 0:
> +            self.logger.info('Removed %d entries from the queue.' %
> +                removed_entries)
> +
> +        if self.txn:
> +            self.txn.commit()
> +
> +        blocked_entries = (
> +            translation_import_queue.executeOptimisticBlock(self.txn))
> +        if blocked_entries > 0:
> +            self.logger.info('Blocked %d entries from the queue.' %
> +                blocked_entries)
> +
> +        self.logger.debug("Completed auto-approval of translation imports.")
>
> === modified file 'lib/lp/translations/scripts/po_import.py'
> --- lib/lp/translations/scripts/po_import.py 2009-09-03 09:10:49 +0000
> +++ lib/lp/translations/scripts/po_import.py 2009-09-18 07:39:51 +0000
> @@ -6,6 +6,10 @@
>  __metaclass__ = type
>
>
> +__all__ = [
> +    'ImportProcess',
> +    ]
> +
>  import time
>
>  from zope.component import getUtility
> @@ -191,58 +195,3 @@
>              self.logger.info("Import requests completed.")
>          else:
>              self.logger.info("Used up available time.")
> -
> -
> -class AutoApproveProcess:
> -    """Attempt to approve some PO/POT imports without human intervention."""
> -    def __init__(self, ztm, logger):
> -        self.ztm = ztm
> -        self.logger = logger
> -
> -    def run(self):
> -        """Attempt to approve requests without human intervention.
> -
> -        Look for entries in translation_import_queue that look like they can
> -        be approved automatically.
> -
> -        Also, detect requests that should be blocked, and block them in their
> -        entirety (with all their .pot and .po files); and purges completed or
> -        removed entries from the queue.
> -        """
> -
> -        translation_import_queue = getUtility(ITranslationImportQueue)
> -
> -        # There may be corner cases where an 'optimistic approval' could
> -        # import a .po file to the wrong IPOFile (but the right language).
> -        # The savings justify that risk. The problem can only occur where,
> -        # for a given productseries/sourcepackage, we have two potemplates in
> -        # the same directory, each with its own set of .po files, and for some
> -        # reason one of the .pot files has not been added to the queue. Then
> -        # we would import both sets of .po files to that template. This is
> -        # not a big issue because the two templates will rarely share an
> -        # identical msgid, and especially because it's not a very common
> -        # layout in the free software world.
> -        if translation_import_queue.executeOptimisticApprovals(self.ztm):
> -            self.logger.info(
> -                'The automatic approval system approved some entries.')
> -
> -        removed_entries = translation_import_queue.cleanUpQueue()
> -        if removed_entries > 0:
> -            self.logger.info('Removed %d entries from the queue.' %
> -                removed_entries)
> -            self.ztm.commit()
> -            self.ztm.begin()
> -
> -        # We need to block entries automatically to save Rosetta experts some
> -        # work when a complete set of .po files and a .pot file should not be
> -        # imported into the system. We have the same corner case as with the
> -        # previous approval method, but in this case it's a matter of changing
> -        # the status back from "blocked" to "needs review," or approving it
> -        # directly so no data will be lost and a lot of work is saved.
> -        blocked_entries = (
> -            translation_import_queue.executeOptimisticBlock(self.ztm))
> -        if blocked_entries > 0:
> -            self.logger.info('Blocked %d entries from the queue.' %
> -                blocked_entries)
> -            self.ztm.commit()
> -
>
> === modified file 'lib/lp/translations/tests/test_autoapproval.py'
> --- lib/lp/translations/tests/test_autoapproval.py 2009-09-15 05:25:13 +0000
> +++ lib/lp/translations/tests/test_autoapproval.py 2009-09-18 15:23:36 +0000
> @@ -8,8 +8,13 @@
>  through the possibilities should go here.
> """ > > +from datetime import datetime, timedelta > +from pytz import UTC > import unittest > > +from canonical.launchpad.interfaces.lpstorm import IMasterStore > + > +from lp.registry.interfaces.distroseries import DistroSeriesStatus > from lp.registry.model.distribution import Distribution > from lp.registry.model.sourcepackagename import ( > SourcePackageName, > @@ -20,7 +25,7 @@ > POTemplateSet, > POTemplateSubset) > from lp.translations.model.translationimportqueue import ( > - TranslationImportQueue) > + TranslationImportQueue, TranslationImportQueueEntry) > from lp.translations.interfaces.customlanguagecode import ICustomLanguageCode > from lp.translations.interfaces.translationimportqueue import ( > RosettaImportStatus) > @@ -566,5 +571,115 @@ > self.assertEqual(None, pofile) > > > +class TestCleanup(TestCaseWithFactory): > + """Test `TranslationImportQueueEntry` garbage collection.""" > + > + layer = LaunchpadZopelessLayer > + > + def setUp(self): > + super(TestCleanup, self).setUp() > + self.queue = TranslationImportQueue() > + self.store = IMasterStore(TranslationImportQueueEntry) > + > + def _makeProductEntry(self): > + """Simulate upload for a product.""" > + product = self.factory.makeProduct() > + product.official_rosetta = True > + trunk = product.getSeries('trunk') > + return self.queue.addOrUpdateEntry( > + 'foo.pot', '# contents', False, product.owner, > + productseries=trunk) > + > + def _makeDistroEntry(self): > + """Simulate upload for a distribution package.""" > + package = self.factory.makeSourcePackage() > + owner = package.distroseries.owner > + return self.queue.addOrUpdateEntry( > + 'bar.pot', '# contents', False, owner, > + sourcepackagename=package.sourcepackagename, > + distroseries=package.distroseries) > + > + def _exists(self, entry_id): > + """Is the entry with the given id still on the queue?""" > + entry = self.store.find( > + TranslationImportQueueEntry, > + TranslationImportQueueEntry.id == entry_id).any() > + return entry 
is not None > + > + def _setStatus(self, entry, status, when=None): > + """Simulate status on queue entry having been set at a given time.""" > + entry.setStatus(status) > + if when is not None: > + entry.date_status_changed = when > + entry.syncUpdate() > + > + def test_cleanUpObsoleteEntries_unaffected_statuses(self): > + # _cleanUpObsoleteEntries leaves entries in non-terminal states > + # (Needs Review, Approved, Blocked) alone no matter how old they > + # are. > + one_year_ago = datetime.now(UTC) - timedelta(days=366) > + entry = self._makeProductEntry() > + entry_id = entry.id > + > + self._setStatus(entry, RosettaImportStatus.APPROVED, one_year_ago) > + self.queue._cleanUpObsoleteEntries(self.store) > + self.assertTrue(self._exists(entry_id)) > + > + self._setStatus(entry, RosettaImportStatus.BLOCKED, one_year_ago) > + self.queue._cleanUpObsoleteEntries(self.store) > + self.assertTrue(self._exists(entry_id)) > + > + self._setStatus(entry, RosettaImportStatus.NEEDS_REVIEW, one_year_ago) > + self.queue._cleanUpObsoleteEntries(self.store) > + self.assertTrue(self._exists(entry_id)) > + > + def test_cleanUpObsoleteEntries_affected_statuses(self): > + # _cleanUpObsoleteEntries deletes entries in terminal states > + # (Imported, Failed, Deleted) after a few days. The exact > + # period depends on the state. > + entry = self._makeProductEntry() > + self._setStatus(entry, RosettaImportStatus.IMPORTED, None) > + entry_id = entry.id > + > + self.queue._cleanUpObsoleteEntries(self.store) > + self.assertTrue(self._exists(entry_id)) > + > + entry.date_status_changed -= timedelta(days=7) > + entry.syncUpdate() > + > + self.queue._cleanUpObsoleteEntries(self.store) > + self.assertFalse(self._exists(entry_id)) > + > + def test_cleanUpInactiveProductEntries(self): > + # After a product is deactivated, _cleanUpInactiveProductEntries > + # will clean up any entries it may have on the queue. 
> +        entry = self._makeProductEntry()
> +        entry_id = entry.id
> +
> +        self.queue._cleanUpInactiveProductEntries(self.store)
> +        self.assertTrue(self._exists(entry_id))
> +
> +        entry.productseries.product.active = False
> +        entry.productseries.product.syncUpdate()
> +
> +        self.queue._cleanUpInactiveProductEntries(self.store)
> +        self.assertFalse(self._exists(entry_id))
> +
> +    def test_cleanUpObsoleteDistroEntries(self):
> +        # _cleanUpObsoleteDistroEntries cleans up entries for
> +        # distroseries that are in the Obsolete state.
> +        entry = self._makeDistroEntry()
> +        entry_id = entry.id
> +
> +        self.queue._cleanUpObsoleteDistroEntries(self.store)
> +        self.assertTrue(self._exists(entry_id))
> +
> +        entry.distroseries.status = DistroSeriesStatus.OBSOLETE
> +        entry.distroseries.syncUpdate()
> +
> +        self.queue._cleanUpObsoleteDistroEntries(self.store)
> +        self.assertFalse(self._exists(entry_id))
> +
> +
>  def test_suite():
>      return unittest.TestLoader().loadTestsFromName(__name__)