Merge lp:~jameinel/launchpad/lp-serve-child-hangup into lp:launchpad

Proposed by John A Meinel
Status: Superseded
Proposed branch: lp:~jameinel/launchpad/lp-serve-child-hangup
Merge into: lp:launchpad
Prerequisite: lp:~jameinel/launchpad/lp-forking-serve-cleaner-childre
Diff against target: 312 lines (+180/-22)
2 files modified
bzrplugins/lpserve/__init__.py (+94/-19)
bzrplugins/lpserve/test_lpserve.py (+86/-3)
To merge this branch: bzr merge lp:~jameinel/launchpad/lp-serve-child-hangup
Reviewer Review Type Date Requested Status
Launchpad code reviewers Pending
Review via email: mp+50055@code.launchpad.net

This proposal has been superseded by a proposal from 2011-03-02.

Commit message

Children spawned from launchpad-forking-service should hang up if nobody has talked to them in a couple of minutes.

Description of the change

This branch builds on my earlier branch, which has the children spawned by LPForkingService clean themselves up more readily.

This adds code so that the blocking calls will be unblocked after 2 minutes. At the moment, if the Conch server requests a fork but is unable to actually connect to the child, that child ends up blocked forever in an open() call on a FIFO that will never be connected.

Originally, I got this working using a loop and O_NONBLOCK. But after doing some testing, fcntl.fcntl(...) was unable to actually change the file descriptor back to blocking. The man page makes it look like it should work, and it does unset the flag, but I still get EAGAIN failures in the smart server code.

I don't really like spawning a thread to send SIGUSR1 back to myself, but it seemed the best tradeoff. If we want, we could even make it SIGTERM or something, since we know we are going to kill the process if the connection fails.
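The mechanism described above can be sketched roughly like this. Names such as ConnectTimeout and wake_self are illustrative, not the actual lpserve code, and the timeout is shortened; note that on Python 3 the handler must raise for the blocked call to be abandoned (PEP 475), whereas a handler that returns would let the call be retried.

```python
import os
import signal
import threading

class ConnectTimeout(Exception):
    """Raised when nobody connects to the child in time."""

def wake_self():
    # Helper thread delivers a signal back to our own process to
    # interrupt whatever blocking syscall the main thread is stuck in.
    os.kill(os.getpid(), signal.SIGUSR1)

def on_sigusr1(signum, frame):
    # Raising here makes the interrupted blocking call abort; on Python 3
    # a handler that simply returns would let the call be retried.
    raise ConnectTimeout()

signal.signal(signal.SIGUSR1, on_sigusr1)

timer = threading.Timer(0.2, wake_self)  # 120s in the real service
timer.start()
r, w = os.pipe()  # stand-in for the stdin/stdout/stderr FIFOs
timed_out = False
try:
    os.read(r, 1)  # blocks: nobody ever connects/writes
except ConnectTimeout:
    timed_out = True
finally:
    timer.cancel()  # on a successful connect this disarms the hangup
print('timed out:', timed_out)
```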

I figured 2 minutes was reasonable. This is the time from a successful ssh handshake until we actually connect to a newly spawned child. Even the worst-case time that I've seen was about 30s for a child to be spawned. So this gives us a 4x margin of error.

This is cleanup related to bug #717345, but it doesn't directly fix the problem. That will be in my next branch.

Revision history for this message
Martin Pool (mbp) wrote :

On 17 February 2011 08:44, John A Meinel <email address hidden> wrote:
> John A Meinel has proposed merging lp:~jameinel/launchpad/lp-serve-child-hangup into lp:launchpad with lp:~jameinel/launchpad/lp-forking-serve-cleaner-childre as a prerequisite.
>
> Requested reviews:
>  Launchpad code reviewers (launchpad-reviewers)
>
> For more details, see:
> https://code.launchpad.net/~jameinel/launchpad/lp-serve-child-hangup/+merge/50055
>
> This branch builds on my earlier branch, which has the children spawned by LPForkingService clean themselves up more readily.
>
> This adds code so that the blocking calls will be unblocked after 2 minutes. At the moment, if the Conch server requests a fork but is unable to actually connect to the child, that child ends up blocked forever in an open() call on a FIFO that will never be connected.
>
> Originally, I got this working using a loop and O_NONBLOCK. But after doing some testing, fcntl.fcntl(...) was unable to actually change the file descriptor back to blocking. The man page makes it look like it should work, and it does unset the flag, but I still get EAGAIN failures in the smart server code.
>
> I don't really like spawning a thread to send SIGUSR1 back to myself, but it seemed the best tradeoff. If we want, we could even make it SIGTERM or something, since we know we are going to kill the process if the connection fails.

I don't really like that either, especially since this is already getting
to be a fairly complex set of processes.

Why can't you make the file handles nonblocking and just leave them
so? Is it because they're later inherited by some other process?

If you do need a signal to interrupt things, how about just using alarm(2)?

Revision history for this message
John A Meinel (jameinel) wrote :


...

>> I don't really like spawning a thread to send SIGUSR1 back to myself, but it seemed the best tradeoff. If we want, we could even make it SIGTERM or something, since we know we are going to kill the process if the connection fails.
>
> I don't really like that either, especially this is already getting to
> be a fairly complex set of processes.
>
> Why can't you make the file handles nonblocking and just leave them
> so? Is it because they're later inherited by some other process?

Because the Smart server code (SmartServerPipeStreamMedium, IIRC)
expects blocking file handles.

>
> If you do need a signal to interrupt things, how about just using alarm(2)?

Because alarm doesn't seem to be exposed to python, and I want the
ability to cancel the callback. threading.Timer() gives me that exact
functionality.

I would be fine using SIGALRM if you prefer that to SIGUSR1.

John
=:->
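The appeal of threading.Timer here is exactly that cancel() call: arm the deadline before the blocking opens, disarm it once everything is connected. A minimal sketch with illustrative names (not the lpserve code), using a shortened delay:

```python
import threading

fired = []

def hangup():
    # In the real service this would send SIGUSR1/SIGTERM to the child.
    fired.append(True)

# Arm the deadline (120s in the service, shortened here).
timer = threading.Timer(0.1, hangup)
timer.start()
# ... all three handles opened successfully before the deadline ...
timer.cancel()  # disarm: the callback never runs
timer.join()
assert fired == []
```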

Revision history for this message
Robert Collins (lifeless) wrote :

What about using dup2 to duplicate the fd, make one copy nonblocking
and use that until we're up and running?

Revision history for this message
John A Meinel (jameinel) wrote :


On 2/16/2011 4:28 PM, Robert Collins wrote:
> What about using dup2 to duplicate the fd, make one copy nonblocking
> and use that until we're up and running?
>

The issue is that we want non-blocking behaviour for the os.open()
call; once everything is open, we want to reset the descriptors to blocking.

You can use tricks with fcntl.fcntl(... F_SETFD) according to the man
pages, but testing showed it doesn't actually set the file back to
blocking mode.

I would expect something similar from dup2.

John
=:->
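For what it's worth, O_NONBLOCK is a file *status* flag, read and written with F_GETFL/F_SETFL; the F_GETFD/F_SETFD pair only carries descriptor flags such as FD_CLOEXEC. A minimal round-trip of setting and clearing the flag (a sketch, not code from this branch):

```python
import fcntl
import os

r, w = os.pipe()

# O_NONBLOCK lives in the file status flags: use F_GETFL/F_SETFL.
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)
assert fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK

# Clearing the bit the same way puts the descriptor back in blocking mode.
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
assert not fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK
```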


Revision history for this message
Robert Collins (lifeless) wrote :

On Thu, Feb 17, 2011 at 11:38 AM, John A Meinel <email address hidden> wrote:
> I would expect something similar from dup2.

I would expect dup2 to work around those bugs :) - it's why I suggested it.

-Rob

Revision history for this message
Martin Pool (mbp) wrote :

On 17 February 2011 10:01, Robert Collins <email address hidden> wrote:
> On Thu, Feb 17, 2011 at 11:38 AM, John A Meinel <email address hidden> wrote:
>> I would expect something similar from dup2.
>
> I would expect dup2 to work around those bugs :) - it's why I suggested it.

We want to do a nonblocking open, so we don't have an fd to dup until
we've done that.

After opening it, maybe we could dup it, make the new copy
blocking, then close the original. It's possible that would work
around it. On the other hand, if in general we can't make open fds
blocking, it may not fix it.
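A quick experiment (a sketch, not from this branch) suggests the dup route is unlikely to help: duplicated descriptors share one open file description, so O_NONBLOCK travels with the duplicate.

```python
import fcntl
import os

r, w = os.pipe()
fcntl.fcntl(r, fcntl.F_SETFL,
            fcntl.fcntl(r, fcntl.F_GETFL) | os.O_NONBLOCK)

dup_r = os.dup(r)
# Status flags belong to the shared open file description, not to the
# descriptor number, so the duplicate is non-blocking too.
assert fcntl.fcntl(dup_r, fcntl.F_GETFL) & os.O_NONBLOCK

# Clearing the flag through either descriptor clears it for both.
fcntl.fcntl(dup_r, fcntl.F_SETFL,
            fcntl.fcntl(dup_r, fcntl.F_GETFL) & ~os.O_NONBLOCK)
assert not fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK
```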

Revision history for this message
John A Meinel (jameinel) wrote :


On 2/16/2011 5:43 PM, Martin Pool wrote:
> On 17 February 2011 10:01, Robert Collins <email address hidden> wrote:
>> On Thu, Feb 17, 2011 at 11:38 AM, John A Meinel <email address hidden> wrote:
>>> I would expect something similar from dup2.
>>
>> I would expect dup2 to work around those bugs :) - it's why I suggested it.
>
> We want to do a nonblocking open, so we don't have an fd to dup until
> we've done that.
>
> After opening it, maybe we could dup it, then make the new copy
> blocking, then close the original. It's possible that would work
> around it. On the other hand if in general we can't make open fds
> blocking, it may not fix it.
>

Right, it *might* work to open non-blocking, set it to blocking, then
dup2 it and use the new duplicated one. I can try it, but honestly the
code is much clearer with the Timer than with the blocking/non-blocking
dance, where I have to write a loop that times out, etc., rather than
just getting a SIG* after 120s of inactivity.

If you prefer SIGALRM, or alarm(2), then we can use ctypes/pyrex/whatever.

John
=:->


Revision history for this message
John A Meinel (jameinel) wrote :


...

>
> If you do need a signal to interrupt things, how about just using alarm(2)?
>

It turns out it is exposed, but only conditionally, and not where I
expected it.

import signal
signal.alarm()

If you prefer, I'd be happy to use that instead. Should I be catching
SIGALRM, or should I just let it propagate to kill the child?

John
=:->
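A sketch of the signal.alarm() route, with hypothetical names: if a handler is installed and raises, the blocked call aborts (on Python 3 a handler that just returns would cause the syscall to be retried, per PEP 475); with no handler at all, SIGALRM's default action simply kills the process.

```python
import errno
import os
import signal

def on_alarm(signum, frame):
    # Raise so the blocked call aborts; a handler that just returns
    # would let Python 3 retry the syscall after EINTR (PEP 475).
    raise InterruptedError(errno.EINTR, 'nobody connected in time')

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(1)        # whole seconds only; alarm(0) cancels a pending alarm

r, w = os.pipe()       # stand-in for a FIFO nobody ever opens
timed_out = False
try:
    os.read(r, 1)      # would block forever without the alarm
except InterruptedError:
    timed_out = True
finally:
    signal.alarm(0)    # disarm once the handles are open (or we gave up)

print('timed out:', timed_out)
```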

Revision history for this message
Martin Pool (mbp) wrote :

> It turns out it is exposed, but only conditionally, and not where I
> expected it.
>
> import signal
> signal.alarm()
>
> If you prefer, I'd be happy to use that instead. Should I be catching
> SIGALRM, or should I just let it propagate to kill the child?

Since it's supposed to be an unusual case that the connection fails
halfway through, I think just letting it kill the client should be
fine. The parent should wait on them and log their exit status.
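Decoding a signal-killed child's status on the parent side, so it can be logged, comes down to os.waitpid plus the WIF* helpers. A generic sketch (not the lpserve code), using a 1-second alarm to stand in for the connect timeout:

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # Child: arm a 1s alarm and block; the default SIGALRM action
    # terminates the process, standing in for a child nobody connects to.
    signal.alarm(1)
    time.sleep(30)
    os._exit(0)  # not reached: SIGALRM kills us first

# Parent: wait on the child and decode how it died, so it can be logged.
waited_pid, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    # Mirrors the 'exited\n%s\n' status line the forking service sends.
    report = 'exited\n%s\n' % os.WTERMSIG(status)
else:
    report = 'exited with code %d' % os.WEXITSTATUS(status)
print(report)
```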

Preview Diff

=== modified file 'bzrplugins/lpserve/__init__.py'
--- bzrplugins/lpserve/__init__.py 2011-02-17 21:47:07 +0000
+++ bzrplugins/lpserve/__init__.py 2011-03-02 13:07:04 +0000
@@ -15,6 +15,7 @@
 
 
 import errno
+import fcntl
 import logging
 import os
 import resource
@@ -31,6 +32,7 @@
 from bzrlib.option import Option
 from bzrlib import (
     commands,
+    errors,
     lockdir,
     osutils,
     trace,
@@ -309,6 +311,11 @@
     SLEEP_FOR_CHILDREN_TIMEOUT = 1.0
     WAIT_FOR_REQUEST_TIMEOUT = 1.0  # No request should take longer than this
                                     # to be read
+    CHILD_CONNECT_TIMEOUT = 120  # If we get a fork() request, but nobody
+                                 # connects just exit
+                                 # On a heavily loaded server, it could take a
+                                 # couple secs, but it should never take
+                                 # minutes
 
     _fork_function = os.fork
 
@@ -324,6 +331,7 @@
         # Map from pid => (temp_path_for_handles, request_socket)
         self._child_processes = {}
         self._children_spawned = 0
+        self._child_connect_timeout = self.CHILD_CONNECT_TIMEOUT
 
     def _create_master_socket(self):
         self._server_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
@@ -372,28 +380,94 @@
         signal.signal(signal.SIGCHLD, signal.SIG_DFL)
         signal.signal(signal.SIGTERM, signal.SIG_DFL)
 
+    def _compute_paths(self, base_path):
+        stdin_path = os.path.join(base_path, 'stdin')
+        stdout_path = os.path.join(base_path, 'stdout')
+        stderr_path = os.path.join(base_path, 'stderr')
+        return (stdin_path, stdout_path, stderr_path)
+
     def _create_child_file_descriptors(self, base_path):
-        stdin_path = os.path.join(base_path, 'stdin')
-        stdout_path = os.path.join(base_path, 'stdout')
-        stderr_path = os.path.join(base_path, 'stderr')
+        stdin_path, stdout_path, stderr_path = self._compute_paths(base_path)
         os.mkfifo(stdin_path)
         os.mkfifo(stdout_path)
         os.mkfifo(stderr_path)
 
-    def _bind_child_file_descriptors(self, base_path):
-        stdin_path = os.path.join(base_path, 'stdin')
-        stdout_path = os.path.join(base_path, 'stdout')
-        stderr_path = os.path.join(base_path, 'stderr')
+    def _set_blocking(self, fd):
+        """Change the file descriptor to unset the O_NONBLOCK flag."""
+        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
+        flags = flags & (~os.O_NONBLOCK)
+        fcntl.fcntl(fd, fcntl.F_SETFD, flags)
+
+    def _get_child_connect_timeout(self):
+        """signal.alarm only supports 1s granularity.
+
+        We have to make sure we don't ever send 0, which would not generate an
+        alarm.
+        """
+        timeout = int(self._child_connect_timeout)
+        if timeout <= 0:
+            timeout = 1
+        return timeout
+
+    def _open_handles(self, base_path):
+        """Open the given file handles.
+
+        This will attempt to open all of these file handles, but will not block
+        while opening them, timing out after self._child_connect_timeout
+        seconds.
+
+        :param base_path: The directory where all FIFOs are located
+        :return: (stdin_fid, stdout_fid, stderr_fid)
+        """
+        stdin_path, stdout_path, stderr_path = self._compute_paths(base_path)
         # These open calls will block until another process connects (which
         # must connect in the same order)
-        stdin_fid = os.open(stdin_path, os.O_RDONLY)
-        stdout_fid = os.open(stdout_path, os.O_WRONLY)
-        stderr_fid = os.open(stderr_path, os.O_WRONLY)
+        fids = []
+        to_open = [(stdin_path, os.O_RDONLY), (stdout_path, os.O_WRONLY),
+                   (stderr_path, os.O_WRONLY)]
+        signal.alarm(self._get_child_connect_timeout())
+        tstart = time.time()
+        for path, flags in to_open:
+            try:
+                fids.append(os.open(path, flags))
+            except OSError, e:
+                if e.errno == errno.EINTR:
+                    error = ('After %.3fs we failed to open %s, exiting'
+                             % (time.time() - tstart, path,))
+                    trace.warning(error)
+                    for fid in fids:
+                        try:
+                            os.close(fid)
+                        except OSError:
+                            pass
+                    self._cleanup_fifos(base_path)
+                    raise errors.BzrError(error)
+                raise
+        # If we get to here, that means all the handles were opened
+        # successfully, so cancel the wakeup call.
+        signal.alarm(0)
+        return fids
+
+    def _cleanup_fifos(self, base_path):
+        """Remove the FIFO objects and directory from disk."""
+        stdin_path, stdout_path, stderr_path = self._compute_paths(base_path)
+        # Now that we've opened the handles, delete everything so that we don't
+        # leave garbage around. Because the open() is done in blocking mode, we
+        # know that someone has already connected to them, and we don't want
+        # anyone else getting confused and connecting.
+        # See [Decision #5]
+        os.remove(stdin_path)
+        os.remove(stdout_path)
+        os.remove(stderr_path)
+        os.rmdir(base_path)
+
+    def _bind_child_file_descriptors(self, base_path):
         # Note: by this point bzrlib has opened stderr for logging
         # (as part of starting the service process in the first place).
         # As such, it has a stream handler that writes to stderr. logging
         # tries to flush and close that, but the file is already closed.
         # This just supresses that exception
+        stdin_fid, stdout_fid, stderr_fid = self._open_handles(base_path)
         logging.raiseExceptions = False
         sys.stdin.close()
         sys.stdout.close()
@@ -407,15 +481,7 @@
         ui.ui_factory.stdin = sys.stdin
         ui.ui_factory.stdout = sys.stdout
         ui.ui_factory.stderr = sys.stderr
-        # Now that we've opened the handles, delete everything so that we don't
-        # leave garbage around. Because the open() is done in blocking mode, we
-        # know that someone has already connected to them, and we don't want
-        # anyone else getting confused and connecting.
-        # See [Decision #5]
-        os.remove(stderr_path)
-        os.remove(stdout_path)
-        os.remove(stdin_path)
-        os.rmdir(base_path)
+        self._cleanup_fifos(base_path)
 
     def _close_child_file_descriptors(self):
         sys.stdin.close()
@@ -724,6 +790,15 @@
             self._should_terminate.set()
             conn.sendall('ok\nquit command requested... exiting\n')
             conn.close()
+        elif request.startswith('child_connect_timeout '):
+            try:
+                value = int(request.split(' ', 1)[1])
+            except ValueError, e:
+                conn.sendall('FAILURE: %r\n' % (e,))
+            else:
+                self._child_connect_timeout = value
+                conn.sendall('ok\n')
+            conn.close()
         elif request.startswith('fork ') or request.startswith('fork-env '):
             command_argv, env = self._parse_fork_request(conn, client_addr,
                                                          request)
 
=== modified file 'bzrplugins/lpserve/test_lpserve.py'
--- bzrplugins/lpserve/test_lpserve.py 2010-11-08 22:43:58 +0000
+++ bzrplugins/lpserve/test_lpserve.py 2011-03-02 13:07:04 +0000
@@ -3,6 +3,7 @@
 
 import errno
 import os
+import shutil
 import signal
 import socket
 import subprocess
@@ -13,6 +14,7 @@
 from testtools import content
 
 from bzrlib import (
+    errors,
     osutils,
     tests,
     trace,
@@ -263,6 +265,46 @@
                                          one_byte_at_a_time=True)
         self.assertStartsWith(response, 'FAILURE\n')
 
+    def test_child_connection_timeout(self):
+        self.assertEqual(self.service.CHILD_CONNECT_TIMEOUT,
+                         self.service._child_connect_timeout)
+        response = self.send_message_to_service('child_connect_timeout 1\n')
+        self.assertEqual('ok\n', response)
+        self.assertEqual(1, self.service._child_connect_timeout)
+
+    def test_child_connection_timeout_bad_float(self):
+        self.assertEqual(self.service.CHILD_CONNECT_TIMEOUT,
+                         self.service._child_connect_timeout)
+        response = self.send_message_to_service('child_connect_timeout 1.2\n')
+        self.assertStartsWith(response, 'FAILURE:')
+
+    def test_child_connection_timeout_no_val(self):
+        response = self.send_message_to_service('child_connect_timeout \n')
+        self.assertStartsWith(response, 'FAILURE:')
+
+    def test_child_connection_timeout_bad_val(self):
+        response = self.send_message_to_service('child_connect_timeout b\n')
+        self.assertStartsWith(response, 'FAILURE:')
+
+    def test__open_handles_will_timeout(self):
+        # signal.alarm() has only 1-second granularity. :(
+        self.service._child_connect_timeout = 1
+        tempdir = tempfile.mkdtemp(prefix='testlpserve-')
+        self.addCleanup(shutil.rmtree, tempdir, ignore_errors=True)
+        os.mkfifo(os.path.join(tempdir, 'stdin'))
+        os.mkfifo(os.path.join(tempdir, 'stdout'))
+        os.mkfifo(os.path.join(tempdir, 'stderr'))
+        def noop_on_alarm(signal, frame):
+            return
+        signal.signal(signal.SIGALRM, noop_on_alarm)
+        self.addCleanup(signal.signal, signal.SIGALRM, signal.SIG_DFL)
+        e = self.assertRaises(errors.BzrError,
+                              self.service._open_handles, tempdir)
+        self.assertContainsRe(str(e), r'After \d+.\d+s we failed to open.*')
+        # Even though it timed out, we still cleanup the temp dir
+        self.assertFalse(os.path.exists(tempdir))
+
+
 
 class TestCaseWithSubprocess(tests.TestCaseWithTransport):
     """Override the bzr start_bzr_subprocess command.
@@ -452,9 +494,9 @@
         stderr_path = os.path.join(path, 'stderr')
         # The ordering must match the ordering of the service or we get a
         # deadlock.
-        child_stdin = open(stdin_path, 'wb')
-        child_stdout = open(stdout_path, 'rb')
-        child_stderr = open(stderr_path, 'rb')
+        child_stdin = open(stdin_path, 'wb', 0)
+        child_stdout = open(stdout_path, 'rb', 0)
+        child_stderr = open(stderr_path, 'rb', 0)
         return child_stdin, child_stdout, child_stderr
 
     def communicate_with_fork(self, path, stdin=None):
@@ -484,6 +526,24 @@
         self.assertEqual('', stderr_content)
         self.assertReturnCode(0, sock)
 
+    def DONT_test_fork_lp_serve_multiple_hello(self):
+        # This ensures that the fifos are all set to blocking mode
+        # We can't actually run this test, because by default 'bzr serve
+        # --inet' does not flush after each message. So we end up blocking
+        # forever waiting for the server to finish responding to the first
+        # request.
+        path, _, sock = self.send_fork_request('lp-serve --inet 2')
+        child_stdin, child_stdout, child_stderr = self._get_fork_handles(path)
+        child_stdin.write('hello\n')
+        child_stdin.flush()
+        self.assertEqual('ok\x012\n', child_stdout.read())
+        child_stdin.write('hello\n')
+        self.assertEqual('ok\x012\n', child_stdout.read())
+        child_stdin.close()
+        self.assertEqual('', child_stderr.read())
+        child_stdout.close()
+        child_stderr.close()
+
     def test_fork_replay(self):
         path, _, sock = self.send_fork_request('launchpad-replay')
         stdout_content, stderr_content = self.communicate_with_fork(path,
@@ -540,6 +600,29 @@
     def test_sigint_exits_nicely(self):
         self._check_exits_nicely(signal.SIGINT)
 
+    def test_child_exits_eventually(self):
+        # We won't ever bind to the socket the child wants, and after some
+        # time, the child should exit cleanly.
+        # First, tell the subprocess that we want children to exit quickly.
+        # *sigh* signal.alarm only has 1s resolution, so this test is slow.
+        response = self.send_message_to_service('child_connect_timeout 1\n')
+        self.assertEqual('ok\n', response)
+        # Now request a fork
+        path, pid, sock = self.send_fork_request('rocks')
+        # Open one handle, but not all of them
+        stdin_path = os.path.join(path, 'stdin')
+        stdout_path = os.path.join(path, 'stdout')
+        stderr_path = os.path.join(path, 'stderr')
+        child_stdin = open(stdin_path, 'wb')
+        # We started opening the child, but stop before we get all handles
+        # open. After 1 second, the child should get signaled and die.
+        # The master process should notice, and tell us the status of the
+        # exited child.
+        val = sock.recv(4096)
+        self.assertEqual('exited\n%s\n' % (signal.SIGALRM,), val)
+        # The master process should clean up after the now deceased child.
+        self.failIfExists(path)
+
 
 class TestCaseWithLPForkingServiceDaemon(
     TestCaseWithLPForkingServiceSubprocess):