Merge lp:~james-w/aptdaemon/fix-455861 into lp:aptdaemon
Proposed by: James Westby
Status: Merged
Merge reported by: Michael Vogt
Merged at revision: not available
Proposed branch: lp:~james-w/aptdaemon/fix-455861
Merge into: lp:aptdaemon
Diff against target: 102 lines (+44/-30), 1 file modified: aptdaemon/debconf.py (+44/-30)
To merge this branch: bzr merge lp:~james-w/aptdaemon/fix-455861
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Aptdaemon Developers | | | Pending

Review via email: mp+21344@code.launchpad.net
Description of the change
This implements locking, since the I/O events are delivered asynchronously.
It ensures that calling .stop() cannot close the socket just before
something else uses it.
I think that exiting quietly if the socket is already closed is the
correct behaviour.
A little testing here suggests that this at least stops the crashes,
but since it is a race condition it is hard to know for sure.
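The locking idea described above can be sketched as follows. This is a minimal, hypothetical illustration, not aptdaemon's actual code: the class and method names (`DebconfProxy`, `_copy_stdout`) and the boolean `_closed` flag standing in for the real socket are assumptions made for the example.

```python
import threading

class DebconfProxy:
    """Hypothetical sketch of the locking approach described above.

    A lock serialises stop() against the I/O callbacks, so the socket
    cannot be closed in between a callback's liveness check and its use
    of the socket.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._closed = False  # stands in for the real socket state

    def stop(self):
        # Hold the lock while closing, so no callback is mid-use.
        with self._lock:
            self._closed = True  # stands in for self.socket.close()

    def _copy_stdout(self, data):
        with self._lock:
            if self._closed:
                # Exit quietly if the socket is already closed.
                return False
            # ... would write `data` to the socket here ...
            return True
```

With the lock held, a callback that loses the race simply observes the closed state and returns, which is the "exit quietly" behaviour mentioned above.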
Thanks,
James
Actually you don't need to introduce a lock here, since the event sources run in the same main loop.
I think the root of the problem is iterating the main loop after closing the socket, which results in processing already-queued I/O events that then fail.
Furthermore, I increased the priority of the hangup callback, so that it is preferred over copy_stdout.
See bzr diff -r 340..343
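The priority argument works because a GLib main loop dispatches pending sources in priority order (lower numbers run first; GLib's `G_PRIORITY_HIGH` is -100 and `G_PRIORITY_DEFAULT` is 0). The toy dispatcher below, which is only an illustration and not GLib itself, shows why raising the hangup callback's priority makes it run before copy_stdout even when both became ready in the same iteration.

```python
import heapq

# GLib-style priority constants: lower value = dispatched earlier.
PRIORITY_DEFAULT = 0
PRIORITY_HIGH = -100

class Loop:
    """Toy single-threaded dispatcher mimicking GLib priority order."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserving insertion order

    def add_source(self, callback, priority=PRIORITY_DEFAULT):
        heapq.heappush(self._queue, (priority, self._seq, callback))
        self._seq += 1

    def iterate(self):
        # Dispatch all pending sources, highest priority first.
        fired = []
        while self._queue:
            _, _, cb = heapq.heappop(self._queue)
            fired.append(cb())
        return fired
```

Even if the copy_stdout source was queued first, a hangup source added with `PRIORITY_HIGH` is dispatched ahead of it, so the connection is torn down before stale data is copied.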
=== modified file 'aptdaemon/debconf.py'
--- aptdaemon/debconf.py	2009-10-02 11:27:03 +0000
+++ aptdaemon/debconf.py	2010-03-17 05:59:24 +0000
@@ -89,21 +89,12 @@
         """Stop listening on the socket."""
         logging.debug("debconf.stop()")
-        # ensure outstanding gio messages are processed
-        context = glib.main_context_default()
-        while context.pending():
-            context.iteration()
         self.socket.close()
         gobject.source_remove(self._listener_id)
         self._listener_id = None

     def _accept_connection(self, source, condition):
         """Callback for new connections of the passthrough frontend."""
         log.debug("New passthrough connection")
-        # ensure outstanding gio messages are processed (to ensure
-        # that _hangup was run on the previous connection, see LP: #432607)
-        context = glib.main_context_default()
-        while context.pending():
-            context.iteration()
         if self._active_conn:
             raise IOError, "Multiple debconf connections not supported"
         conn, addr = source.accept()
@@ -112,15 +103,18 @@
                                        stdin=subprocess.PIPE,
                                        stdout=subprocess.PIPE,
                                        env=self._get_debconf_env())
-        w = gobject.io_add_watch(conn, gobject.IO_IN|gobject.IO_HUP|gobject.IO_ERR,
-                                 self._copy_conn, self.helper.stdin)
-        self._watch_ids.append(w)
-        w = gobject.io_add_watch(self.helper.stdout, gobject.IO_IN,
-                                 self._copy_stdout, conn)
-        self._watch_ids.append(w)
-        w = gobject.io_add_watch(self.helper.stdout, gobject.IO_HUP|gobject.IO_ERR,
-                                 self._hangup)
-        self._watch_ids.append(w)
+        wid = gobject.io_add_watch(conn,
+                                   gobject.IO_IN|gobject.IO_HUP|gobject.IO_ERR,
+                                   self._copy_conn, self.helper.stdin)
+        self._watch_ids.append(wid)
+        wid = gobject.io_add_watch(self.helper.stdout, gobject.IO_IN,
+                                   self._copy_stdout, conn)
+        self._watch_ids.append(wid)
+        wid = gobject.io_add_watch(self.helper.stdout,
+                                   gobject.IO_HUP|gobject.IO_ERR,
+                                   self._hangup,
+                                   priority=gobject.PRIORITY_HIGH)
+        self._watch_ids.append(wid)
         return True