Merge lp:~adeuring/lazr.jobrunner/bug-1015667 into lp:lazr.jobrunner
| Status: | Merged |
|---|---|
| Approved by: | Abel Deuring on 2012-07-03 |
| Approved revision: | 39 |
| Merge reported by: | Abel Deuring |
| Merged at revision: | not available |
| Proposed branch: | lp:~adeuring/lazr.jobrunner/bug-1015667 |
| Merge into: | lp:lazr.jobrunner |
| Diff against target: | 259 lines (+181/-6), 4 files modified: setup.py (+6/-3), src/lazr/jobrunner/bin/inspect_queues.py (+64/-0), src/lazr/jobrunner/celerytask.py (+15/-3), src/lazr/jobrunner/tests/test_celerytask.py (+96/-0) |
| To merge this branch: | bzr merge lp:~adeuring/lazr.jobrunner/bug-1015667 |
| Related bugs: |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Richard Harding (community) | | 2012-07-03 | Approve on 2012-07-03 |
|
Review via email:
Description of the Change
This branch adds a script "inspect-queues" to lazr.jobrunner.
As described in bug 1015667, some Celery jobs seem to leave
messages behind in result queues. The new script lets us see
what the messages in these queues contain; looking at them
should give us some clue about which jobs created them. (I
also have a branch for the main LP code ready which changes
the task ID so that it includes the job class and the job ID,
together with a UUID. Since the result queue name is derived
from the task ID, that should let us trace the "offending
jobs" a bit better.)
As a side effect, the messages are also consumed. This might
not be desired in every case -- but I could not figure out
how to call drain_queues() so that the messages stay in the
queues. The drain_queues() parameter "retain" seems to have
no effect...
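To make the destructive side effect concrete, here is a
minimal sketch (not the code in this branch) of reading a
queue with kombu's SimpleQueue; the broker URL and queue name
are placeholders:

```python
from kombu import Connection


def dump_queue(broker_url, queue_name):
    """Print every message currently waiting in a queue -- and consume it."""
    with Connection(broker_url) as connection:
        queue = connection.SimpleQueue(queue_name)
        try:
            while True:
                try:
                    message = queue.get(block=False)
                except queue.Empty:
                    break
                print(message.payload)
                # Acknowledging removes the message from the queue for good;
                # this is the side effect described above.
                message.ack()
        finally:
            queue.close()
```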
In other words, the script must be used carefully. But if we
store the result of running "rabbitmqctl list_queues" and run
the script, let's say, 24 hours later, we can be sure that the
queues are sufficiently stale so that no results are
inadvertently deleted.
I also changed the function drain_queues(). The old
implementation delegated the setup of the queues completely to
kombu.Consumer. This fails with the error "PRECONDITION_FAILED -
parameters for queue '...' in vhost '/' not equivalent" when a
queue already exists with different parameters. Calling
queue.declare() with suitable arguments avoids the error, but
Consumer.__init__() does not allow that, so the queues are now
explicitly declared by drain_queues() when it is called from
inspect_queues().
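For illustration, here is a minimal sketch of that approach,
assuming kombu and a Celery app object; the function body, the
passive=True declaration, and the auto_declare=False flag are
assumptions about one way to avoid the error, not necessarily
what the branch does:

```python
import socket

from kombu import Consumer, Queue


def drain_queues(app, queue_names):
    """Consume and print whatever is currently waiting in the named queues."""
    with app.broker_connection() as connection:
        channel = connection.channel()
        queues = []
        for name in queue_names:
            queue = Queue(name, channel=channel)
            # Declare the queue ourselves; passive=True only checks that it
            # exists instead of re-declaring it, so differing queue parameters
            # cannot trigger PRECONDITION_FAILED.
            queue.queue_declare(passive=True)
            queues.append(queue)

        def on_message(body, message):
            print(body)
            message.ack()

        # auto_declare=False keeps the Consumer from declaring the queues again.
        consumer = Consumer(channel, queues, callbacks=[on_message],
                            auto_declare=False)
        consumer.consume()
        try:
            while True:
                # Stop once no message arrives within a short timeout.
                connection.drain_events(timeout=1)
        except socket.timeout:
            pass
        finally:
            consumer.cancel()
```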
no lint

Abel, looks ok, but I'd feel better if the name was something more along the lines of clear-queues since that's what it's doing. Inspect doesn't sound destructive enough to warn other users that this could lead to undesired consequences.