On Wed, Aug 31, 2011 at 10:26 AM, Jeroen T. Vermeulen <email address hidden> wrote:
> === modified file 'lib/lp/codehosting/tests/test_rewrite.py'
> +class TestBranchRewriterScriptHandlesDisconnects(TestCaseWithFactory):
> + """Ensure branch-rewrite.py survives fastdowntime deploys."""
> + layer = LaunchpadScriptLayer
> +
> + def setUp(self):
> + super(TestBranchRewriterScriptHandlesDisconnects, self).setUp()
> + self.pgbouncer = PGBouncerFixture()
> + self.addCleanup(self.pgbouncer.cleanUp)
> + self.pgbouncer.setUp()
>
> Couldn't those last three lines be replaced with a simple
>
> self.pgbouncer = self.useFixture(PGBouncerFixture())
>
> ?
>
> In fact I'm not even sure it's worth a setUp with the super() dance, and carrying self.pgbouncer from setUp to the tests.
Yes, useFixture is nicer here. I've removed setUp() entirely and am
using useFixture in the tests.
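For illustration, this is roughly what useFixture buys us (a sketch: the fixture here is a toy stand-in, not the real PGBouncerFixture, and the inline useFixture only approximates what testtools provides):

```python
import unittest

class DemoFixture(object):
    """Toy stand-in for a fixture (the real tests use PGBouncerFixture
    from the pgbouncer package)."""
    def setUp(self):
        self.running = True

    def cleanUp(self):
        self.running = False

class DemoTest(unittest.TestCase):
    def useFixture(self, fixture):
        # Roughly what testtools' useFixture does under the hood:
        # set the fixture up and schedule its cleanUp in one step,
        # replacing the setUp/addCleanup/setUp three-line dance.
        fixture.setUp()
        self.addCleanup(fixture.cleanUp)
        return fixture

    def test_fixture_is_set_up(self):
        fix = self.useFixture(DemoFixture())
        self.assertTrue(fix.running)
```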
> + def spawn(self):
> + script_file = os.path.join(
> + config.root, 'scripts', 'branch-rewrite.py')
> +
> + self.rewriter_proc = subprocess.Popen(
> + [script_file], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
> + stderr=subprocess.PIPE, bufsize=0)
>
> Again, is it worth keeping self.rewriter_proc on the test class? It just hides state. Why not make it a return value?
If I don't hide state, I have to maintain state and pass it to the
request() helper :-)
> Moreover, is this process guaranteed to clean itself up? If not, then please ensure that it is. For instance, you could add something like "self.addCleanup(kill, rewriter_proc)" to spawn().
The subprocess module handles this for us as soon as things go out of
scope. But it doesn't hurt to be explicit, so I've added the cleanup.
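Concretely, the explicit cleanup looks roughly like this (a sketch: a toy echo child stands in for branch-rewrite.py, and addCleanup here is any callable that registers a cleanup, e.g. TestCase.addCleanup):

```python
import subprocess
import sys

def spawn(addCleanup):
    """Spawn a child process and register an explicit kill, rather
    than relying on garbage collection to reap it."""
    proc = subprocess.Popen(
        [sys.executable, '-c',
         'import sys; sys.stdout.write(sys.stdin.readline())'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0)
    # kill() is safe to call even if the process has already exited:
    # Popen checks the return code before sending the signal.
    addCleanup(proc.kill)
    return proc

cleanups = []
proc = spawn(cleanups.append)
out, _ = proc.communicate(b'foo\n')
for cleanup in reversed(cleanups):
    cleanup()
print(out.decode().rstrip('\n'))
```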
> + def request(self, query):
> + self.rewriter_proc.stdin.write(query + '\n')
> + return self.rewriter_proc.stdout.readline().rstrip('\n')
>
> Do we know that the readline() will never hang under reasonable circumstances?
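For what it's worth, a hang-proof request() could poll the pipe with select() before reading. A sketch (hypothetical names, a toy echo child in place of branch-rewrite.py, Unix only):

```python
import select
import subprocess
import sys

def request_with_timeout(proc, query, timeout=5.0):
    """Write a query to the child and read one reply line, raising
    instead of blocking forever if no data arrives in time."""
    proc.stdin.write((query + '\n').encode())
    proc.stdin.flush()
    # Wait until the child's stdout has data (or the timeout expires)
    # before calling readline(), so the read cannot hang indefinitely.
    ready, _, _ = select.select([proc.stdout], [], [], timeout)
    if not ready:
        raise RuntimeError('no reply within %s seconds' % timeout)
    return proc.stdout.readline().decode().rstrip('\n')

proc = subprocess.Popen(
    [sys.executable, '-c',
     'import sys; sys.stdout.write(sys.stdin.readline())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0)
reply = request_with_timeout(proc, 'foo')
proc.kill()
print(reply)
```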
> + def test_disconnect(self):
> + self.spawn()
>
> Sure "disconnect" is something that happens in this test, but is that one word really a good description of what you're testing here? Would it help to say something with "reconnects"?
I've changed the test name to test_reconnects_when_disconnected.
> + # Everything should be working, and we get valid output.
> + out = self.request('foo')
> + assert out.endswith('/foo'), out
>
> Why use assert here? Normally you'd say self.assertEndsWith(out, '/foo').
Because I'm always forgetting which assert methods are approved:
standard, testtools extensions, LP extensions :-)
Fixed, along with the other occurrences.
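For the record, in plain unittest the closest spelling is assertTrue with a message (assertEndsWith is an extension method, not stdlib). A sketch of why the method beats a bare assert:

```python
import unittest

class AssertStyleDemo(unittest.TestCase):
    def test_endswith(self):
        out = '/branch/id/foo'
        # A bare "assert out.endswith('/foo'), out" is stripped when
        # Python runs with -O; the assert* methods always execute and
        # produce a self-describing failure message including `out`.
        self.assertTrue(out.endswith('/foo'), out)
```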
> + self.pgbouncer.stop()
> +
> + # Now with pgbouncer down, we should get NULL messages and
> + # stderr spam, and this keeps happening. We test more than
> + # once to ensure that we will keep trying to reconnect even
> + # after several failures.
> + for count in range(5):
> + out = self.request('foo')
>
> Is this race-free? I don't know if pgbouncer is a separate process or not; if it is, does stop() block until it's taken effect?
It is race-free. The pgbouncer fixture that lifeless put together
blocks until the process has started up (and if it doesn't, it is
better to fix that than to work around it).
> + assert out == 'NULL', out
>
> Here too: why assert instead of self.assertEqual?
Fixed.
> === modified file 'scripts/branch-rewrite.py'
> self.logger.exception('Exception occurred:')
> print "NULL"
> + # The exception might have been a DisconnectionError or
> + # similar. Cleanup such as database reconnection will
> + # not happen until the transaction is rolled back. We
> + # are explicitly rolling back the store here instead of
> + # using transaction.abort() due to Bug #819282.
>
> This looks like it should properly be an XXX. Could you format it as one? XXX StuartBishop 2011-08-31 bug=819282: etc.
Yer, fixed.
> + try:
> + ISlaveStore(Branch).rollback()
> + except Exception:
> + self.logger.exception('Exception occurred in rollback:')
>
> Just to make sure: is the exception swallowed deliberately? Or was it supposed to be re-raised, or sys.exit() called with an error code or whatever?
I don't know what the best behavior here is. Should we swallow the
failure in rollback, assuming it is a transient database issue? Or
should we die with an exception and hope Apache restarts the process?
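To make the two options concrete, here's a sketch (the logger and store names are stand-ins, not the script's actual code):

```python
import logging

logger = logging.getLogger('branch-rewrite')

def rollback_or_die(store, swallow=True):
    """Roll back the (stand-in) store after a failed request.

    With swallow=True a rollback failure is logged and ignored, on
    the assumption it is a transient database issue and the loop can
    keep answering NULL. With swallow=False it propagates, so the
    process dies and the parent (Apache) can respawn it.
    """
    try:
        store.rollback()
    except Exception:
        logger.exception('Exception occurred in rollback:')
        if not swallow:
            raise

class BrokenStore(object):
    """Stand-in store whose rollback always fails."""
    def rollback(self):
        raise RuntimeError('database is down')

# Option A: the failure is logged and swallowed; execution continues.
rollback_or_die(BrokenStore(), swallow=True)
```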
--
Stuart Bishop <email address hidden>
http://www.stuartbishop.net/