Merge lp:~jtv/launchpad/bug-702228 into lp:launchpad
Status: | Merged |
---|---|
Approved by: | Robert Collins |
Approved revision: | no longer in the source branch. |
Merged at revision: | 12215 |
Proposed branch: | lp:~jtv/launchpad/bug-702228 |
Merge into: | lp:launchpad |
Diff against target: |
242 lines (+134/-27) 4 files modified
cronscripts/rosetta-export-queue.py (+5/-2)
lib/lp/translations/doc/poexport-queue.txt (+1/-1)
lib/lp/translations/scripts/po_export_queue.py (+61/-24)
lib/lp/translations/tests/test_exportresult.py (+67/-0) |
To merge this branch: | bzr merge lp:~jtv/launchpad/bug-702228 |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
j.c.sackett (community) | code* | | Approve |
Robert Collins (community) | | | Approve |
Review via email: mp+46165@code.launchpad.net
Commit message
[r=jcsackett,
Description of the change
= Bug 702228 =
A particularly difficult translations export request was breaking the export queue. It took over 3 hours to run, at which point our watchdog script that looks for long transactions on the master database killed its master store connection.
This branch addresses that by avoiding use of the master database, so that the connection will not start a transaction until the very end of a request (where a master connection is really needed). An occasional long transaction on the slave is not such a big problem, and certainly won't take any write locks. So the watchdog will leave the slave connection in peace.
There is a nesting of database policies: the SlaveDatabasePolicy makes the slave store the default, so that e.g. some use of cursor() that we still have will use the slave. This is merely a little help in reducing accidental use of the master store. But I also bracketed the time-critical part in a SlaveOnlyDatabasePolicy.
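The nesting of policies described above can be illustrated with a simplified sketch. This is not the real Launchpad dbpolicy code; the context manager, the policy names, and the stack are stand-ins invented here purely to show the bracketing structure.

```python
from contextlib import contextmanager

# Hypothetical stand-ins for Launchpad's database policies; the real
# classes live in the Launchpad tree and are not reproduced here.
_policy_stack = []


@contextmanager
def database_policy(name):
    """Push a policy for the duration of a block (simplified sketch)."""
    _policy_stack.append(name)
    try:
        yield
    finally:
        _policy_stack.pop()


def current_policy():
    return _policy_stack[-1] if _policy_stack else "master"


# Outer bracket, like SlaveDatabasePolicy: the slave store becomes the
# default, but the master is still reachable if explicitly requested.
with database_policy("slave-default"):
    # Inner bracket, like SlaveOnlyDatabasePolicy: during the long,
    # time-critical export work, any master access would be an error.
    with database_policy("slave-only"):
        assert current_policy() == "slave-only"
        # ... render the export files here, slave store only ...
    # Back on the outer policy for the final master writes
    # (Librarian upload, deleting the finished request).
    assert current_policy() == "slave-default"
```

The point of the two levels is that the outer one merely nudges incidental queries toward the slave, while the inner one turns an accidental master access during the long phase into a hard failure instead of a silent open transaction.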
It's not quite obvious, but the method that fetches the next request off the queue accesses the master store. It doesn't make any changes there, but it does verify that the oldest fully-replicated request it gets off the slave store has not already been deleted from the master store. That situation could otherwise occur if the script deleted the previous request from the master store, then again tried to fetch the oldest request off the slave before the deletion had replicated.
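The replication-lag check described above can be sketched as follows. The callables and names here are hypothetical stand-ins for the real store queries, kept abstract to show only the guard's logic.

```python
def next_request(slave_fetch_oldest, master_exists):
    """Return the oldest replicated request that still exists on the master.

    `slave_fetch_oldest` yields candidate request ids, oldest first, as
    read from the slave store; `master_exists` checks the master store.
    Both are hypothetical callables standing in for real store queries.
    """
    for request_id in slave_fetch_oldest():
        if master_exists(request_id):
            return request_id
        # The request was already deleted on the master; the deletion
        # simply has not replicated to the slave yet.  Skip it.
    return None


# Example: request 1 was deleted on the master just now, but the slave
# still shows it at the head of the queue.
slave_queue = [1, 2, 3]
master_rows = {2, 3}
print(next_request(lambda: iter(slave_queue), master_rows.__contains__))
# -> 2
```

The existence check is a read, so it does not open a write transaction on the master; it only prevents the script from re-processing a request it has already deleted.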
I tested thusly:
{{{
./bin/test -vvc lp.translations -t export
}}}
To Q/A, first ensure that regular exports still work (both in Ubuntu and upstream; include XPI) by running the script manually, and seeing that it still produces valid tarballs and individual files. On staging that requires manually clearing the backlog first.
Watch pg_stat_activity on the master database during a large export. The export script's master backend process (as logged by the script) should be "<IDLE>" until the end, not "<IDLE> in transaction."
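For the Q/A step, a tiny helper along these lines could classify the `current_query` values seen in pg_stat_activity (PostgreSQL 8.x reporting, as used at the time). The function name and the idea of scripting the check are my own; the two status strings are from the description above.

```python
def master_connection_ok(current_query):
    """Classify one pg_stat_activity current_query value for the export
    script's master backend: "<IDLE>" is fine, while "<IDLE> in
    transaction" means the script is holding a master transaction open,
    i.e. the behaviour this branch is meant to eliminate.
    """
    return current_query.strip() != "<IDLE> in transaction"


print(master_connection_ok("<IDLE>"))                 # True: no open transaction
print(master_connection_ok("<IDLE> in transaction"))  # False: watchdog bait
```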
No lint,
Jeroen
Jeroen--
This looks good. I have a couple of comments below, but the one on the test case is the only one I feel is important. This is good to land with that change.
> === modified file 'lib/lp/translations/scripts/po_export_queue.py'
> --- lib/lp/translations/scripts/po_export_queue.py	2010-08-20 20:31:18 +0000
> +++ lib/lp/translations/scripts/po_export_queue.py	2011-01-13 18:25:08 +0000
> @@ -250,6 +252,46 @@
>              'request_url': self.request_url,
>              }
> 
> +    def setExportFile(self, exported_file):
> +        """Attach an exported file to the result, for upload to the Librarian.
> +
> +        After this is set, `upload` will perform the actual upload.  The two
> +        actions are separated so as to isolate write access to the database.
> +
> +        :param exported_file: An `IExportedTranslationFile` containing the
> +            exported data.
> +        """
> +        self.exported_file = exported_file
Given how very little this method is doing, I'm not sure it needs to be a separate method. It's not hurting anything to have it this way, but it seems slightly unnecessary.
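For context on why the tiny setter might still earn its keep: the split lets the attach happen inside the slave-only phase and defers the actual write. The sketch below is a toy model of that set-then-upload shape, not the real ExportResult; the fake librarian and its addFile signature are invented for illustration.

```python
class ExportResultSketch:
    """Toy model of the set-then-upload split discussed above."""

    def __init__(self):
        self.exported_file = None

    def setExportFile(self, exported_file):
        # Runs inside the slave-only phase: pure attribute assignment,
        # no database or Librarian access at all.
        self.exported_file = exported_file

    def upload(self, librarian):
        # Runs later, once master writes are allowed again.
        if self.exported_file is None:
            return None
        return librarian.addFile(self.exported_file)
```

Whether that separation justifies a named method, as opposed to assigning the attribute directly, is exactly the judgment call the comment above raises.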
> === added file 'lib/lp/translations/tests/test_exportresult.py'
> --- lib/lp/translations/tests/test_exportresult.py	1970-01-01 00:00:00 +0000
> +++ lib/lp/translations/tests/test_exportresult.py	2011-01-13 18:25:08 +0000
> @@ -0,0 +1,67 @@
> +# Copyright 2010 Canonical Ltd.  This software is licensed under the
> +# GNU Affero General Public License version 3 (see the file LICENSE).
> +
> +"""Tests for `ExportResult`."""
> +
> +__metaclass__ = type
> +
> +import hashlib
> +import os.path
> +from StringIO import StringIO
> +
> +from canonical.librarian.testing.fake import FakeLibrarian
> +from canonical.testing.layers import ZopelessDatabaseLayer
> +from lp.services.log.logger import DevNullLogger
> +from lp.testing import TestCaseWithFactory
> +from lp.translations.scripts.po_export_queue import ExportResult
> +
> +
> +class FakeExportedTranslationFile:
> +    """Fake `IExportedTranslationFile` for testing."""
> +
> +    def __init__(self, path, content, content_type):
> +        self.path = path
> +        base, ext = os.path.splitext(path)
> +        self.file_extension = ext
> +        self.size = len(content)
> +        self.content = content
> +        self.file = StringIO(content)
> +        self.content_type = content_type
> +
> +    def read(self, *args, **kwargs):
> +        return self.file.read(*args, **kwargs)
> +
> +
> +class TestExportResult(TestCaseWithFactory):
> +
> +    layer = ZopelessDatabaseLayer
> +
> +    def makeExportResult(self):
> +        request = [self.factory.makePOFile()]
> +        requester = self.factory.makePerson()
> +        logger = DevNullLogger()
> +        return ExportResult(
> +            requester, request, logger)
> +
> +    def makeExportedTranslationFile(self):
> +        filename = self.factory.getUniqueString()
> +        content = self.factory.getUniqueString()
> +        mime_type = "text/plain"
> +        return FakeExportedTranslationFile(filename, content, mime_type)
> +
> +    def test_upload_exported_file(self):
> +        librarian = self.useFixture(FakeLibrarian())
> +        export = self.makeExp...