Merge lp:~adeuring/charmworld/queue-review-mutex into lp:~juju-jitsu/charmworld/trunk

Proposed by Abel Deuring
Status: Merged
Approved by: Abel Deuring
Approved revision: 159
Merged at revision: 161
Proposed branch: lp:~adeuring/charmworld/queue-review-mutex
Merge into: lp:~juju-jitsu/charmworld/trunk
Diff against target: 398 lines (+271/-16)
7 files modified
charmworld/jobs/core_review.py (+11/-2)
charmworld/jobs/lp.py (+16/-2)
charmworld/jobs/review.py (+11/-2)
charmworld/jobs/tests/test_utils.py (+152/-0)
charmworld/jobs/utils.py (+66/-0)
charmworld/testing/__init__.py (+10/-10)
default.ini (+5/-0)
To merge this branch: bzr merge lp:~adeuring/charmworld/queue-review-mutex
Reviewer Review Type Date Requested Status
Abel Deuring (community) Approve
Review via email: mp+150853@code.launchpad.net

Commit message

Use locks to prevent the jobs lp, core_review and review from running concurrently.

Description of the change

Locking for jobs

Charmworld runs a few cron jobs to keep the list of available charms
up to date. If charmworld is installed on more than one machine, running
these jobs on all machines would cause unnecessary duplicate work,
such as queueing the charms for further processing.

This branch adds a simple locking mechanism for these jobs. The core idea:
A lock is a record in the collection "locks" of the MongoDB with a given
ID. MongoDB can raise a DuplicateKeyError when a record with the ID
'some_name' already exists during a call of
collection.insert({'_id': 'some_name', ...})

("can" means: When I first attempted to use the lock for the jobs review and
core_review, locking simply did not work, while it worked fine in tests and
for the queueing job. It turned out that the parameter "fsync" must be
specified when pymongo.Connection is instantiated. As the test
test_bad_connection_setup shows, the actual value of this parameter is
not important -- but it must be present...)

Hence lock() ensures that fsync was specified for the given DB connection.

Locks have a lease time; if the lease expires, the now-stale lock is deleted
when another process wants to acquire the lock. The related call of
locks.remove(existing_lock) is a possible race condition: two processes
might try concurrently to remove the stale lock. A simple test shows that
duplicate remove(existing_lock) calls succeed; the second call simply
does nothing.

https://codereview.appspot.com/7379053/

Revision history for this message
Abel Deuring (adeuring) wrote :

Reviewers: mp+150853_code.launchpad.net,

Message:
Please take a look.

https://code.launchpad.net/~adeuring/charmworld/queue-review-mutex/+merge/150853

(do not edit description out of merge proposal)

Please review this at https://codereview.appspot.com/7379053/

Affected files:
   A [revision details]
   M charmworld/jobs/core_review.py
   M charmworld/jobs/lp.py
   M charmworld/jobs/review.py
   A charmworld/jobs/tests/test_utils.py
   M charmworld/jobs/utils.py
   M charmworld/testing/__init__.py
   M default.ini

Revision history for this message
Richard Harding (rharding) wrote :

lgtm with a check on using safe vs fsync. If safe works then it should
be a bit more performant than an fsync.

I also would prefer to see something that catches locks that were not
released. I know they timeout, but we should have some notice that they
were hit so we can look into the issue. I'm worried about a charm
instance dying without us realizing there's an issue. It might take some
coordination with the charm though to get an error/notice out and up to
nagios or email or something.

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/core_review.py
File charmworld/jobs/core_review.py (right):

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/core_review.py#newcode107
charmworld/jobs/core_review.py:107: c = pymongo.Connection(MONGO_URL,
fsync=True)
can we get away with just safe=True vs fsync? It's a bit lighter than
fsync and we changed to this in test runs to help fix issues from mongo
syncing.

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/lp.py
File charmworld/jobs/lp.py (right):

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/lp.py#newcode73
charmworld/jobs/lp.py:73: connection = getconnection(settings)
does this need to have the safe= flag set to get a safe connection?

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/review.py
File charmworld/jobs/review.py (right):

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/review.py#newcode117
charmworld/jobs/review.py:117: c = pymongo.Connection(MONGO_URL,
fsync=True)
again, can safe= work?

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/tests/test_utils.py
File charmworld/jobs/tests/test_utils.py (right):

https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/tests/test_utils.py#newcode38
charmworld/jobs/tests/test_utils.py:38: self.assertIs(None,
self.db['locks'].find_one({'_id': LOCK_NAME}))
should this be wrapped into a helper? utils.check_lock(db, LOCK_NAME) or
something?

https://codereview.appspot.com/7379053/diff/1/default.ini
File default.ini (right):

https://codereview.appspot.com/7379053/diff/1/default.ini#newcode42
default.ini:42: script_lease_time = 30
we need the bug to note that the charm should be updated to create this
time based on the config time used to cron the importer.

https://codereview.appspot.com/7379053/

Revision history for this message
Abel Deuring (adeuring) wrote :

On 27.02.2013 18:30, <email address hidden> wrote:
> lgtm with a check on using safe vs fsync. If safe works then it should
> be a bit more performance than an fsync.

Right, using safe=True works also. Checking whether this parameter has
been specified is a bit more tricky, so I changed the function lock() to
just try to insert a test record twice.
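That double-insert probe can be sketched in isolation. The helper name connection_raises_duplicates and the two collection classes are made up for illustration: they stand in for a pymongo collection reached through an unacknowledged versus an acknowledged connection.

```python
class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError."""


class UnsafeCollection:
    """Models a collection behind an unacknowledged connection:
    a duplicate insert is silently dropped, no error surfaces."""

    def __init__(self):
        self.docs = {}

    def insert(self, doc):
        self.docs.setdefault(doc['_id'], doc)  # duplicate: no error

    def remove(self, doc):
        self.docs.pop(doc['_id'], None)


class SafeCollection(UnsafeCollection):
    """Models a collection behind a connection created with safe=True
    (or any fsync value): a duplicate insert raises."""

    def insert(self, doc):
        if doc['_id'] in self.docs:
            raise DuplicateKeyError(doc['_id'])
        self.docs[doc['_id']] = doc


def connection_raises_duplicates(locks):
    """Probe: insert the same _id twice and report whether the
    second insert is rejected."""
    probe = {'_id': '_test'}
    try:
        locks.insert(probe)
        try:
            locks.insert(probe)
        except DuplicateKeyError:
            return True
        return False
    finally:
        # Clean up the probe document either way.
        locks.remove(probe)
```

lock() can then refuse to proceed (BadDBConnection in the branch) when the probe reports False, instead of silently granting a lock that was never actually exclusive.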

>
> I also would prefer to see something that catches locks that were not
> released. I know they timeout, but we should have some notice that they
> were hit so we can look into the issue. I'm worried about a charm
> instance dying without us realizing there's an issue. It might take some
> coordination with the charm though to get an error/notice out and up to
> nagios or email or something.

If a job "just dies" without releasing the lock, we have a problem that
is likely much more serious than just a stale lock:

    try:
        yield
    finally:
        locks.remove(my_lock)

ensures that the lock is released for "regular Python errors". If the
job dies due to a segfault, this segfault would be my main concern, not
the "orphaned" lock data.

>
>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/core_review.py
>
> File charmworld/jobs/core_review.py (right):
>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/core_review.py#newcode107
>
> charmworld/jobs/core_review.py:107: c = pymongo.Connection(MONGO_URL,
> fsync=True)
> can we get away with just safe=True vs fsync? It's a bit lighter than
> fsync and we changed to this in test runs to help fix issues from mongo
> syncing.

Done. Remember though that we can use fsync=False -- the odd thing is
that specifying fsync=True as well as fsync=False ensure that inserting
a doc with an existing key raises a DuplicateKeyError, while _not_
specifying it suppresses the DuplicateKeyError.

>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/lp.py
> File charmworld/jobs/lp.py (right):
>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/lp.py#newcode73
>
> charmworld/jobs/lp.py:73: connection = getconnection(settings)
> does this need to have the safe= flag set to get a safe connection?

No, the trick here are the default parameters of getconnection():

def getconnection(settings, fsync=False, safe=False):
    [...]
    connection = pymongo.Connection(
        url_or_host, port,
        fsync=fsync,
        safe=safe,
    )

So, fsync _is_ specified and everything is fine.

>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/review.py
> File charmworld/jobs/review.py (right):
>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/review.py#newcode117
>
> charmworld/jobs/review.py:117: c = pymongo.Connection(MONGO_URL,
> fsync=True)
> again, can safe= work?

changed.

>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/tests/test_utils.py
>
> File charmworld/jobs/tests/test_utils.py (right):
>
> https://codereview.appspot.com/7379053/diff/1/charmworld/jobs/tests/test_utils.py#newcode38
>
> charmworld/jobs/tests/test_utils.py:38: self.assertIs(None,
> self.db['locks'].find_one({'_id': LOCK_NAME}))
> should this be wrapped into a helper? utils.check_lock(db, LOCK_NAME) or
> something?

159. By Abel Deuring

implemented reviewer comments

Revision history for this message
Abel Deuring (adeuring) :
review: Approve

Preview Diff

1=== modified file 'charmworld/jobs/core_review.py'
2--- charmworld/jobs/core_review.py 2013-02-12 16:15:07 +0000
3+++ charmworld/jobs/core_review.py 2013-02-28 13:04:21 +0000
4@@ -21,7 +21,10 @@
5 import logging
6 import time
7
8+from charmworld.utils import get_ini
9 from config import MONGO_URL
10+from utils import lock
11+from utils import LockHeld
12
13 log = logging.getLogger("core.review")
14
15@@ -101,9 +104,15 @@
16 logging.basicConfig(
17 level=logging.DEBUG,
18 format="%(asctime)s: %(name)s@%(levelname)s: %(message)s")
19- c = pymongo.Connection(MONGO_URL)
20+ c = pymongo.Connection(MONGO_URL, safe=True)
21 db = c['juju']
22- update_review_queue(db)
23+ settings = get_ini()
24+ try:
25+ with lock('ingest-core-review',
26+ int(settings['script_lease_time']) * 60, db, log):
27+ update_review_queue(db)
28+ except LockHeld, error:
29+ log.warn(str(error))
30
31 if __name__ == '__main__':
32 main()
33
34=== modified file 'charmworld/jobs/lp.py'
35--- charmworld/jobs/lp.py 2013-02-19 12:40:36 +0000
36+++ charmworld/jobs/lp.py 2013-02-28 13:04:21 +0000
37@@ -6,9 +6,14 @@
38 import urllib
39 import json
40
41+from charmworld.models import getconnection
42+from charmworld.models import getdb
43+from charmworld.utils import get_ini
44 from config import CHARM_IMPORT_FILTER
45 from config import CHARM_IMPORT_LIMIT
46 from config import CHARM_QUEUE
47+from utils import lock
48+from utils import LockHeld
49 from utils import get_queue
50 from utils import parse_branch
51
52@@ -63,5 +68,14 @@
53 logging.basicConfig(
54 level=logging.INFO,
55 format="%(asctime)s: %(name)s@%(levelname)s: %(message)s")
56- queue = get_queue(CHARM_QUEUE)
57- queue_charms(queue)
58+ log = logging.getLogger("charm.launchpad")
59+ settings = get_ini()
60+ connection = getconnection(settings)
61+ db = getdb(connection, settings.get('mongo.database'))
62+ try:
63+ with lock('ingest-queue', int(settings['script_lease_time']) * 60,
64+ db, log):
65+ queue = get_queue(CHARM_QUEUE)
66+ queue_charms(queue)
67+ except LockHeld, error:
68+ log.warn(str(error))
69
70=== modified file 'charmworld/jobs/review.py'
71--- charmworld/jobs/review.py 2013-02-12 16:15:07 +0000
72+++ charmworld/jobs/review.py 2013-02-28 13:04:21 +0000
73@@ -22,7 +22,10 @@
74 import logging
75 import time
76
77+from charmworld.utils import get_ini
78 from config import MONGO_URL
79+from utils import lock
80+from utils import LockHeld
81
82 log = logging.getLogger("charm.review")
83
84@@ -111,9 +114,15 @@
85 logging.basicConfig(
86 level=logging.DEBUG,
87 format="%(asctime)s: %(name)s@%(levelname)s: %(message)s")
88- c = pymongo.Connection(MONGO_URL)
89+ c = pymongo.Connection(MONGO_URL, safe=True)
90 db = c['juju']
91- update_review_queue(db)
92+ settings = get_ini()
93+ try:
94+ with lock('ingest-review',
95+ int(settings['script_lease_time']) * 60, db, log):
96+ update_review_queue(db)
97+ except LockHeld, error:
98+ log.warn(str(error))
99
100 if __name__ == '__main__':
101 main()
102
103=== added file 'charmworld/jobs/tests/test_utils.py'
104--- charmworld/jobs/tests/test_utils.py 1970-01-01 00:00:00 +0000
105+++ charmworld/jobs/tests/test_utils.py 2013-02-28 13:04:21 +0000
106@@ -0,0 +1,152 @@
107+# Copyright 2013 Canonical Ltd. This software is licensed under the
108+# GNU Affero General Public License version 3 (see the file LICENSE).
109+
110+import logging
111+from pymongo import Connection
112+from pymongo.errors import DuplicateKeyError
113+from time import time
114+
115+from charmworld.jobs.utils import BadDBConnection
116+from charmworld.jobs.utils import lock
117+from charmworld.jobs.utils import LockHeld
118+from charmworld.testing import MONGO_DATABASE
119+from charmworld.testing import MONGO_URL
120+from charmworld.testing import MongoTestBase
121+
122+
123+LOCK_NAME = u'test_lock'
124+
125+
126+class LockTest(MongoTestBase):
127+
128+ def setUp(self):
129+ super(LockTest, self).setUp()
130+ self.locked_method_called = False
131+ self.log = logging.getLogger('locktest')
132+ self.log.setLevel(logging.WARN)
133+ self.handler = self.get_handler('locktest')
134+
135+ def test_regular_locking(self):
136+ # If no lock with the given name is held, a lock() call
137+ # is successful.
138+ self.assertIs(None, self.db['locks'].find_one({'_id': LOCK_NAME}))
139+ with lock(LOCK_NAME, 1, self.db, self.log):
140+ # Now a lock exists.
141+ self.assertIsNot(
142+ None, self.db['locks'].find_one({'_id': LOCK_NAME}))
143+ # The lock has been removed.
144+ self.assertIs(None, self.db['locks'].find_one({'_id': LOCK_NAME}))
145+
146+ def use_lock(self, exc=None, db=None):
147+ # Acquire a lock; optionally raise an exception.
148+ if db is None:
149+ db = self.db
150+ with lock(LOCK_NAME, 1, db, self.log):
151+ self.locked_method_called = True
152+ if exc is not None:
153+ raise exc
154+
155+ def make_lock(self, lease_time, pid=1):
156+ return {
157+ '_id': LOCK_NAME,
158+ 'lease_until': int(time() + lease_time),
159+ 'host': 'another-host',
160+ 'pid': pid,
161+ }
162+
163+ def test_lock_already_held(self):
164+ # If another process has already created a lock, another call
165+ # to lock() raises a LockHeld exception.
166+ existing_lock = self.make_lock(5)
167+ self.db['locks'].insert(existing_lock)
168+ self.assertRaises(LockHeld, self.use_lock)
169+
170+ def test_stale_lock(self):
171+ # Stale locks are removed.
172+ existing_lock = self.make_lock(-1)
173+ self.db['locks'].insert(existing_lock)
174+ self.use_lock()
175+ warning = self.handler.buffer[0]
176+ self.assertEqual(
177+ 'Stale lock removed. Host: %(host)s, PID: %(pid)s, '
178+ 'lease_until: %(lease_until)s' % existing_lock, warning.msg)
179+
180+ def test_lock_released_when_locked_function_fails(self):
181+ # The lock is also released when the locked function raises an
182+ # exception.
183+ error_occurred = False
184+ try:
185+ self.use_lock(exc=ValueError)
186+ except ValueError:
187+ error_occurred = True
188+ self.assertTrue(error_occurred)
189+ self.assertIs(None, self.db['locks'].find_one({'_id': LOCK_NAME}))
190+
191+ def test_duplicate_deletion_of_stale_lock(self):
192+ # MongoDB's Collection.remove() call does not fail when called
193+ # for a non-existing document. This means that two processes
194+ # can attempt to delete a stale lock without failing.
195+ locks = self.db['locks']
196+ lock = self.make_lock(-1)
197+ locks.insert(lock)
198+ locks.remove(lock)
199+ self.assertIs(None, locks.find_one(lock))
200+ # The second call to remove the lock does not raise an exception.
201+ locks.remove(lock)
202+
203+ def test_lock_removal_succeeds_only_for_known_lock(self):
204+ # A possible race condition: Two processes detect a stale lock.
205+ # Process A removes the lock and creates a new lock...
206+ stale_lock = self.make_lock(-2, pid=1)
207+ lock_process_a = self.make_lock(2, pid=2)
208+ locks = self.db.locks
209+ locks.insert(lock_process_a)
210+ # ...before process B tries to remove the stale lock again.
211+ locks.remove(stale_lock)
212+ # The new lock still exists.
213+ self.assertEqual(lock_process_a, locks.find_one({'_id': LOCK_NAME}))
214+
215+ def duplicate_insert(self, conn, expect_duplicate_key_error):
216+ # Try to insert the same document twice into the Mongo DB.
217+ db = conn[MONGO_DATABASE]
218+ locks = db['locks']
219+ lock_1 = self.make_lock(1, pid=42)
220+ locks.insert(lock_1)
221+ stored_lock = locks.find_one({'_id': LOCK_NAME})
222+ # A second insert call may succeed..
223+ lock_2 = self.make_lock(1, pid=43)
224+ if expect_duplicate_key_error:
225+ self.assertRaises(DuplicateKeyError, locks.insert, lock_2)
226+ else:
227+ locks.insert(lock_2)
228+ # ...but has no effect.
229+ stored_lock = locks.find_one({'_id': LOCK_NAME})
230+ self.assertEqual(lock_1, stored_lock)
231+ locks.remove({'_id': LOCK_NAME})
232+
233+ def test_bad_connection_setup(self):
234+ # If the DB connection is created without the parameter fsync,
235+ # no DuplicateKeyError is raised.
236+ conn = Connection(MONGO_URL)
237+ self.duplicate_insert(conn, expect_duplicate_key_error=False)
238+
239+ # A DuplicateKeyError is raised if the parameter fsync is specified.
240+ conn = Connection(MONGO_URL, fsync=True)
241+ self.duplicate_insert(conn, expect_duplicate_key_error=True)
242+ # Note that the value of fsync does not matter...
243+ conn = Connection(MONGO_URL, fsync=False)
244+ self.duplicate_insert(conn, expect_duplicate_key_error=True)
245+
246+ # Alternatively, using safe=True ensures also that a
247+ # DuplicateKeyError is raised.
248+ conn = Connection(MONGO_URL, safe=False)
249+ self.duplicate_insert(conn, expect_duplicate_key_error=False)
250+ conn = Connection(MONGO_URL, safe=True)
251+ self.duplicate_insert(conn, expect_duplicate_key_error=True)
252+
253+ def test_lock_creation_without_fsync_or_safe_param(self):
254+ # A DB connection that was created without the fsync parameter
255+ # or safe=True cannot be used to create a lock.
256+ conn = Connection(MONGO_URL)
257+ db = conn['locks']
258+ self.assertRaises(BadDBConnection, self.use_lock, db=db)
259
260=== modified file 'charmworld/jobs/utils.py'
261--- charmworld/jobs/utils.py 2013-02-19 12:40:36 +0000
262+++ charmworld/jobs/utils.py 2013-02-28 13:04:21 +0000
263@@ -2,8 +2,13 @@
264 # licensed under the GNU Affero General Public License version 3 (see
265 # the file LICENSE).
266
267+from contextlib import contextmanager
268 from mongoqueue import MongoQueue
269 from nose.tools import nottest
270+from pymongo.errors import DuplicateKeyError
271+from os import getpid
272+from socket import gethostname
273+from time import time
274
275 from charmworld.models import getconnection
276 from charmworld.models import getdb
277@@ -64,6 +69,67 @@
278 BRANCH_TIPS = "https://api.launchpad.net/devel/charm?ws.op=getBranchTips"
279
280
281+class LockHeld(Exception):
282+ pass
283+
284+
285+class BadDBConnection(Exception):
286+ pass
287+
288+
289+@contextmanager
290+def lock(name, lease_time, db, log):
291+ """A simple MongoDB based mutex lock.
292+
293+ :param name: The name of the mutex.
294+ :param lease_until: The time until which the lease is held.
295+ """
296+ locks = db['locks']
297+ # check that a DuplicateKeyError is raised when a doc with an existing
298+ # key is inserted.
299+ raises_key_error = False
300+ test_doc = {'_id': '_test'}
301+ try:
302+ locks.insert(test_doc)
303+ locks.insert(test_doc)
304+ except DuplicateKeyError:
305+ raises_key_error = True
306+ if not raises_key_error:
307+ raise BadDBConnection(
308+ 'The MongoDB connection is not configured to raise '
309+ 'DuplicateKeyErrors. Use the parameter safe=True or '
310+ 'fsync (with any value) when creating the conenction.'
311+ 'See test_bad_connection_setup() in '
312+ 'charmworld/jobs/tests/test_utils.py for more details.')
313+
314+ existing_lock = locks.find_one({'_id': name})
315+ now = time()
316+ if existing_lock is not None and existing_lock['lease_until'] < now:
317+ # Note that two calls that remove the same record do not fail, hence
318+ # there is no need to check the result call.
319+ locks.remove(existing_lock)
320+ log.warn(
321+ 'Stale lock removed. Host: %(host)s, PID: %(pid)s, '
322+ 'lease_until: %(lease_until)s' % existing_lock)
323+ my_lock = {
324+ '_id': name,
325+ 'lease_until': now + lease_time,
326+ 'host': gethostname(),
327+ 'pid': getpid(),
328+ }
329+ try:
330+ locks.insert(my_lock)
331+ except DuplicateKeyError:
332+ existing_lock = locks.find_one({'_id': name})
333+ raise LockHeld(
334+ 'Lock %s already held until %s. Current time: %s' % (
335+ name, existing_lock['lease_until'], now))
336+ try:
337+ yield
338+ finally:
339+ locks.remove(my_lock)
340+
341+
342 @nottest
343 def test():
344 import urllib
345
346=== modified file 'charmworld/testing/__init__.py'
347--- charmworld/testing/__init__.py 2013-02-21 16:30:35 +0000
348+++ charmworld/testing/__init__.py 2013-02-28 13:04:21 +0000
349@@ -46,6 +46,16 @@
350 # Drop the Mongo database.
351 self.connection.drop_database(MONGO_DATABASE)
352
353+ def get_handler(self, log_name):
354+ logger = logging.getLogger(log_name)
355+ handler = logging.handlers.BufferingHandler(20)
356+ logger.addHandler(handler)
357+
358+ def cleanup():
359+ logger.removeHandler(handler)
360+ self.addCleanup(cleanup)
361+ return handler
362+
363
364 class WebTestBase(MongoTestBase):
365
366@@ -125,16 +135,6 @@
367 self.addCleanup(cleanup)
368 return (in_queue, out_queue)
369
370- def get_handler(self, log_name):
371- logger = logging.getLogger(log_name)
372- handler = logging.handlers.BufferingHandler(20)
373- logger.addHandler(handler)
374-
375- def cleanup():
376- logger.removeHandler(handler)
377- self.addCleanup(cleanup)
378- return handler
379-
380
381 @contextmanager
382 def login_request(app, groups=None):
383
384=== modified file 'default.ini'
385--- default.ini 2013-02-21 13:06:17 +0000
386+++ default.ini 2013-02-28 13:04:21 +0000
387@@ -36,6 +36,11 @@
388 mongo.url = mongodb://localhost:27017
389 mongo.database = juju
390
391+# The time in minutes a lock is held by the scripts "queue" and "review".
392+# This value should be greater than the time interval between two runs of
393+# this script.
394+script_lease_time = 30
395+
396 [server:main]
397 use = egg:Paste#http
398 host = 0.0.0.0
