Merge lp:~stub/launchpad/garbo into lp:launchpad/db-devel
Status: Merged
Approved by: Brad Crittenden
Approved revision: no longer in the source branch.
Merged at revision: not available
Proposed branch: lp:~stub/launchpad/garbo
Merge into: lp:launchpad/db-devel
Diff against target: 709 lines
To merge this branch: bzr merge lp:~stub/launchpad/garbo
Related bugs:

Reviewer: Brad Crittenden (community), review type: code, status: Approve
Review via email:
Commit message
Description of the change
Stuart Bishop (stub) wrote:
Brad Crittenden (bac) wrote:
Hi Stuart,
The branch looks good. I don't have anything to add but do ask for
you to add a comment on one test -- it's probably just the case of me
being paranoid.
Also, the logic in bug 348874 seems to be wrong w.r.t. '>' vs '<' but
is switched and correct in the implementation here. For completeness,
would you make a note on that bug?
> === modified file 'lib/canonical/
> --- lib/canonical/
> +++ lib/canonical/
> @@ -12,6 +12,7 @@
> from pytz import UTC
> from storm.locals import Min
> import transaction
> +from zope.component import getUtility
>
> from canonical.
> from canonical.
> @@ -21,9 +22,12 @@
> CodeImportResul
> from canonical.
> from canonical.
> - DailyDatabaseGa
> + DailyDatabaseGa
> + OpenIDAssociati
> from canonical.
> from canonical.
> +from canonical.
> + IStoreSelector, MASTER_FLAVOR)
> from canonical.
> DatabaseLayer, LaunchpadScript
>
> @@ -57,20 +61,21 @@
> self.runDaily()
> self.runHourly()
>
> - def runDaily(self):
> - LaunchpadZopele
> + def runDaily(self, maximum_
> + LaunchpadZopele
> collector = DailyDatabaseGa
> + collector.
> collector.logger = QuietFakeLogger()
> collector.main()
>
> - def runHourly(self):
> - LaunchpadZopele
> + def runHourly(self, maximum_
> + LaunchpadZopele
> collector = HourlyDatabaseG
> + collector.
> collector.logger = QuietFakeLogger()
> collector.main()
>
> def test_OAuthNonce
> - store = IMasterStore(
> now = datetime.
> timestamps = [
> now - timedelta(days=2), # Garbage
> @@ -79,6 +84,7 @@
> now, # Not garbage
> ]
> LaunchpadZopele
> + store = IMasterStore(
>
> # Make sure we start with 0 nonces.
> self.failUnless
> @@ -93,7 +99,9 @@
> # Make sure we have 4 nonces now.
> self.failUnless
>
> - self.runHourly()
> + self.runHourly(
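The '>' vs '<' direction bac raises can be pinned down with a small sketch (a hypothetical helper for illustration, not Launchpad code): an association is garbage once its lifetime has elapsed, i.e. `issued + lifetime < now`.

```python
import time

def is_expired(issued, lifetime, now):
    # Garbage once the lifetime has elapsed: issued + lifetime < now.
    # Writing '>' here would keep the garbage and prune live rows instead.
    return issued + lifetime < now

now = time.time()
assert is_expired(now - 10, 5, now)        # lifetime ran out 5s ago
assert not is_expired(now - 10, 60, now)   # still has ~50s to live
```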
Stuart Bishop (stub) wrote:
On Tue, Apr 7, 2009 at 8:11 PM, Brad Crittenden <email address hidden> wrote:
>> + def test_OpenIDAsso
>> + store_name = pruner.store_name
>> + table_name = pruner.table_name
>> + LaunchpadZopele
>> + store_selector = getUtility(
>> + store = store_selector.
>> + now = time.time()
>> + # Create some associations in the past with lifetimes
>> + for delta in range(0, 20):
>> + store.execute("""
>> + INSERT INTO %s (server_url, handle, issued, lifetime)
>> + VALUES (%s, %s, %d, %d)
>> + """ % (table_name, str(delta), str(delta), now-10, delta))
>> + transaction.
>> +
>> + num_expired = store.execute("""
>> + SELECT COUNT(*) FROM %s
>> + WHERE issued + lifetime < %f
>> + """ % (table_name, now)).get_one()[0]
>> + self.failUnless
>> +
>> + self.runHourly()
>> +
>> + LaunchpadZopele
>> + store = store_selector.
>> + num_expired = store.execute("""
>> + SELECT COUNT(*) FROM %s
>> + WHERE issued + lifetime < %f
>> + """ % (table_name, now)).get_one()[0]
>> + self.failUnless
>
> This test depends on all three parts completing within one second, the
> granularity of your delta. While I'm having a hard time envisioning
> that being a problem it does seem like a potential spot for random
> failure. Perhaps if you just state the assumption in a comment that
> will suffice and be a big clue if it should ever fail.
I thought I'd fixed the logic to stop that already. I have a
consistent 'now' at the top of the test. The first test checks that
there is at least one item expired at time 'now'. The second check
tests that there are 0 items expired at time 'now'. The only thing that
changes if things pause or run slow is self.runHourly(), and that just
means more items might get expired. The final check doesn't care,
because it is just ensuring that all the items expired at the test start
time were removed, and it couldn't care less if more were removed.
So if my logic is correct (it is 4:30am), the problem is that I'm not
checking that items that should not have been expired have not been
expired.
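The argument above can be sketched as a minimal model (illustrative only, not the real test):

```python
import time

def expired_count(rows, now):
    """Rows whose lifetime had already elapsed at time `now`."""
    return sum(1 for issued, lifetime in rows if issued + lifetime < now)

now = time.time()  # captured once, at the start of the test
rows = [(now - 10, delta) for delta in range(20)]  # lifetimes 0..19 seconds

# First check: at least one row is expired relative to the fixed `now`.
assert expired_count(rows, now) > 0

# A slow self.runHourly() prunes relative to the *current* time, which can
# only be later than `now`, so it may remove more rows, never fewer.
current = time.time()
rows = [r for r in rows if r[0] + r[1] >= current]

# Second check: no row expired at `now` survives, however slow the run was.
assert expired_count(rows, now) == 0

# The missing piece stub identifies: rows still alive should survive.
# Holds as long as the run takes well under ~9 seconds here.
assert len(rows) > 0
```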
--
Stuart Bishop <email address hidden>
http://
Stuart Bishop (stub) wrote:
On Wed, Apr 8, 2009 at 4:42 AM, Stuart Bishop <email address hidden> wrote:
> I thought I'd fixed the logic to stop that already. I have a
> consistent 'now' at the top of the test. The first test checks that
> there is at least one item expired at time 'now'. The second check
> tests that there are 0 items expired at time 'now'. The only thing that
> changes if things pause or run slow is self.runHourly(), and that just
> means more items might get expired. The final check doesn't care,
> because it is just ensuring that all the items expired at the test start
> time were removed, and it couldn't care less if more were removed.
>
> So if my logic is correct (it is 4:30am), the problem is that I'm not
> checking that items that should not have been expired have not been
> expired.
I've commented this test better and added the extra 'make sure we didn't just trash everything' check.
--
Stuart Bishop <email address hidden>
http://
Preview Diff
1 | === modified file 'database/schema/security.cfg' |
2 | --- database/schema/security.cfg 2009-05-08 02:33:14 +0000 |
3 | +++ database/schema/security.cfg 2009-05-10 03:41:25 +0000 |
4 | @@ -11,6 +11,7 @@ |
5 | [public] |
6 | # The public role is automatically granted to all users by PostgreSQL |
7 | type=group |
8 | +public.activity() = EXECUTE |
9 | public.person_sort_key(text, text) = EXECUTE |
10 | public.debversion_sort_key(text) = EXECUTE |
11 | public.null_count(anyarray) = EXECUTE |
12 | @@ -98,6 +99,9 @@ |
13 | public.personlanguage = SELECT |
14 | public.teammembership = SELECT |
15 | public.teamparticipation = SELECT |
16 | +# XXX 2009-05-07 stub bug=373252: SELECT and DELETE permissions required |
17 | +# for garbo.py. INSERT permission needed for the tests. |
18 | +public.openidassociation = SELECT, INSERT, DELETE |
19 | |
20 | [launchpad_main] |
21 | # lpmain replication set access from the main Z3 application. |
22 | @@ -1613,7 +1617,7 @@ |
23 | public.libraryfiledownloadcount = SELECT, INSERT, UPDATE, DELETE |
24 | |
25 | [garbo] |
26 | -# garbo-hourly and garbo-daily script permissions. We define the |
27 | +# garbo_hourly and garbo_daily script permissions. We define the |
28 | # permissions here in this group instead of in the users, so tasks can |
29 | # be shuffled around between the daily and hourly sections without |
30 | # changing DB permissions. |
31 | @@ -1621,6 +1625,8 @@ |
32 | groups=script |
33 | public.codeimportresult = SELECT, DELETE |
34 | public.oauthnonce = SELECT, DELETE |
35 | +public.openidassociation = SELECT, DELETE |
36 | +public.openidconsumerassociation = SELECT, DELETE |
37 | public.openidconsumernonce = SELECT, DELETE |
38 | public.revisioncache = SELECT, DELETE |
39 | public.person = SELECT |
40 | @@ -1628,11 +1634,11 @@ |
41 | public.hwsubmission = SELECT, UPDATE |
42 | public.mailinglistsubscription = SELECT, DELETE |
43 | |
44 | -[garbo-daily] |
45 | +[garbo_daily] |
46 | type=user |
47 | groups=garbo |
48 | |
49 | -[garbo-hourly] |
50 | +[garbo_hourly] |
51 | type=user |
52 | groups=garbo |
53 | |
54 | |
55 | === modified file 'database/schema/trusted.sql' |
56 | --- database/schema/trusted.sql 2009-04-24 14:43:14 +0000 |
57 | +++ database/schema/trusted.sql 2009-05-10 03:41:25 +0000 |
58 | @@ -48,10 +48,52 @@ |
59 | |
60 | COMMENT ON FUNCTION null_count(anyarray) IS 'Return the number of NULLs in the first row of the given array.'; |
61 | |
62 | + |
63 | +CREATE OR REPLACE FUNCTION replication_lag() RETURNS interval |
64 | +LANGUAGE plpgsql STABLE SECURITY DEFINER AS |
65 | +$$ |
66 | + DECLARE |
67 | + v_lag interval; |
68 | + BEGIN |
69 | + SELECT INTO v_lag max(st_lag_time) FROM _sl.sl_status; |
70 | + RETURN v_lag; |
71 | + -- Slony-I not installed here - non-replicated setup. |
72 | + EXCEPTION |
73 | + WHEN invalid_schema_name THEN |
74 | + RETURN NULL; |
75 | + WHEN undefined_table THEN |
76 | + RETURN NULL; |
77 | + END; |
78 | +$$; |
79 | + |
80 | +COMMENT ON FUNCTION replication_lag() IS |
81 | +'Returns the worst lag time known to this node in our cluster, or NULL if not a replicated installation.'; |
82 | + |
83 | + |
84 | +CREATE OR REPLACE FUNCTION activity() |
85 | +RETURNS SETOF pg_catalog.pg_stat_activity |
86 | +LANGUAGE SQL VOLATILE SECURITY DEFINER AS |
87 | +$$ |
88 | + SELECT |
89 | + datid, datname, procpid, usesysid, usename, |
90 | + CASE |
91 | + WHEN current_query LIKE '<IDLE>%' |
92 | + THEN current_query |
93 | + ELSE |
94 | + NULL |
95 | + END AS current_query, |
96 | + waiting, xact_start, query_start, |
97 | + backend_start, client_addr, client_port |
98 | + FROM pg_catalog.pg_stat_activity; |
99 | +$$; |
100 | + |
101 | +COMMENT ON FUNCTION activity() IS |
102 | +'SECURITY DEFINER wrapper around pg_stat_activity allowing unprivileged users to access most of its information.'; |
103 | + |
104 | + |
105 | /* This is created as a function so the same definition can be used with |
106 | many tables |
107 | */ |
108 | - |
109 | CREATE OR REPLACE FUNCTION valid_name(text) RETURNS boolean |
110 | LANGUAGE plpythonu IMMUTABLE RETURNS NULL ON NULL INPUT AS |
111 | $$ |
112 | @@ -858,27 +900,6 @@ |
113 | 'AFTER UPDATE trigger on BugAffectsPerson maintaining the Bug.users_affected_count column'; |
114 | |
115 | |
116 | -CREATE OR REPLACE FUNCTION replication_lag() RETURNS interval |
117 | -LANGUAGE plpgsql STABLE SECURITY DEFINER AS |
118 | -$$ |
119 | - DECLARE |
120 | - v_lag interval; |
121 | - BEGIN |
122 | - SELECT INTO v_lag max(st_lag_time) FROM _sl.sl_status; |
123 | - RETURN v_lag; |
124 | - -- Slony-I not installed here - non-replicated setup. |
125 | - EXCEPTION |
126 | - WHEN invalid_schema_name THEN |
127 | - RETURN NULL; |
128 | - WHEN undefined_table THEN |
129 | - RETURN NULL; |
130 | - END; |
131 | -$$; |
132 | - |
133 | -COMMENT ON FUNCTION replication_lag() IS |
134 | -'Returns the worst lag time known to this node in our cluster, or NULL if not a replicated installation.'; |
135 | - |
136 | - |
137 | CREATE OR REPLACE FUNCTION set_bugtask_date_milestone_set() RETURNS TRIGGER |
138 | LANGUAGE plpgsql AS |
139 | $$ |
140 | |
141 | === modified file 'lib/canonical/database/sqlbase.py' |
142 | --- lib/canonical/database/sqlbase.py 2009-04-17 10:32:16 +0000 |
143 | +++ lib/canonical/database/sqlbase.py 2009-05-10 03:41:25 +0000 |
144 | @@ -5,6 +5,7 @@ |
145 | import warnings |
146 | from datetime import datetime |
147 | import re |
148 | +from textwrap import dedent |
149 | |
150 | import psycopg2 |
151 | from psycopg2.extensions import ( |
152 | @@ -293,15 +294,27 @@ |
153 | if dbuser is None: |
154 | dbuser = config.launchpad.dbuser |
155 | |
156 | - # Construct a config fragment: |
157 | - overlay = '[database]\n' |
158 | - overlay += 'main_master: %s\n' % connection_string |
159 | - overlay += 'isolation_level: %s\n' % { |
160 | + isolation_level = { |
161 | ISOLATION_LEVEL_AUTOCOMMIT: 'autocommit', |
162 | ISOLATION_LEVEL_READ_COMMITTED: 'read_committed', |
163 | ISOLATION_LEVEL_SERIALIZABLE: 'serializable'}[isolation] |
164 | + |
165 | + # Construct a config fragment: |
166 | + overlay = dedent("""\ |
167 | + [database] |
168 | + main_master: %(connection_string)s |
169 | + auth_master: %(connection_string)s |
170 | + isolation_level: %(isolation_level)s |
171 | + """ % vars()) |
172 | + |
173 | if dbuser: |
174 | - overlay += '\n[launchpad]\ndbuser: %s\n' % dbuser |
175 | + # XXX 2009-05-07 stub bug=373252: Scripts should not be connecting |
176 | + # as the launchpad_auth database user. |
177 | + overlay += dedent("""\ |
178 | + [launchpad] |
179 | + dbuser: %(dbuser)s |
180 | + auth_dbuser: launchpad_auth |
181 | + """ % vars()) |
182 | |
183 | if cls._installed is not None: |
184 | if cls._config_overlay != overlay: |
185 | |
186 | === modified file 'lib/canonical/launchpad/doc/hwdb.txt' |
187 | --- lib/canonical/launchpad/doc/hwdb.txt 2009-05-07 02:02:51 +0000 |
188 | +++ lib/canonical/launchpad/doc/hwdb.txt 2009-05-10 03:41:25 +0000 |
189 | @@ -371,7 +371,8 @@ |
190 | >>> user.validateAndEnsurePreferredEmail(email) |
191 | >>> transaction.commit() |
192 | >>> from canonical.launchpad.scripts.garbo import HWSubmissionEmailLinker |
193 | - >>> HWSubmissionEmailLinker().run() |
194 | + >>> from canonical.launchpad.ftests.logger import MockLogger |
195 | + >>> HWSubmissionEmailLinker(log=MockLogger()).run() |
196 | >>> submission = hw_submission_set.getBySubmissionKey(u'unique-id-2') |
197 | >>> print submission.owner.displayname |
198 | Beeblebrox |
199 | @@ -405,7 +406,7 @@ |
200 | >>> login_person(user) |
201 | >>> user.validateAndEnsurePreferredEmail(email) |
202 | >>> transaction.commit() |
203 | - >>> HWSubmissionEmailLinker().run() |
204 | + >>> HWSubmissionEmailLinker(log=MockLogger()).run() |
205 | >>> submission = hw_submission_set.getBySubmissionKey(u'unique-id-2') |
206 | >>> print submission.owner.displayname |
207 | Beeblebrox |
208 | |
209 | === modified file 'lib/canonical/launchpad/doc/script-monitoring.txt' |
210 | --- lib/canonical/launchpad/doc/script-monitoring.txt 2009-04-17 10:32:16 +0000 |
211 | +++ lib/canonical/launchpad/doc/script-monitoring.txt 2009-05-10 03:41:25 +0000 |
212 | @@ -28,7 +28,7 @@ |
213 | >>> from canonical.testing.layers import LaunchpadZopelessLayer |
214 | |
215 | >>> UTC = pytz.timezone('UTC') |
216 | - >>> LaunchpadZopelessLayer.switchDbUser('garbo-daily') # A script db user |
217 | + >>> LaunchpadZopelessLayer.switchDbUser('garbo_daily') # A script db user |
218 | |
219 | >>> activity = getUtility(IScriptActivitySet).recordSuccess( |
220 | ... name='script-name', |
221 | @@ -89,7 +89,7 @@ |
222 | ... raise RuntimeError('Some failure') |
223 | ... |
224 | ... if __name__ == '__main__': |
225 | - ... script = TestScript('test-script', 'garbo-daily') |
226 | + ... script = TestScript('test-script', 'garbo_daily') |
227 | ... script.run() |
228 | ... """) |
229 | >>> script_file.flush() |
230 | |
231 | === modified file 'lib/canonical/launchpad/scripts/garbo.py' |
232 | --- lib/canonical/launchpad/scripts/garbo.py 2009-05-08 02:33:14 +0000 |
233 | +++ lib/canonical/launchpad/scripts/garbo.py 2009-05-10 03:41:25 +0000 |
234 | @@ -23,16 +23,13 @@ |
235 | from canonical.launchpad.interfaces.emailaddress import EmailAddressStatus |
236 | from canonical.launchpad.interfaces.looptuner import ITunableLoop |
237 | from canonical.launchpad.scripts.base import LaunchpadCronScript |
238 | -from canonical.launchpad.utilities.looptuner import LoopTuner |
239 | +from canonical.launchpad.utilities.looptuner import DBLoopTuner |
240 | from canonical.launchpad.webapp.interfaces import ( |
241 | - IStoreSelector, MAIN_STORE, MASTER_FLAVOR) |
242 | + IStoreSelector, AUTH_STORE, MAIN_STORE, MASTER_FLAVOR) |
243 | from lp.code.model.codeimportresult import CodeImportResult |
244 | -from lp.code.model.revision import RevisionAuthor |
245 | +from lp.code.model.revision import RevisionAuthor, RevisionCache |
246 | from lp.registry.model.mailinglist import MailingListSubscription |
247 | |
248 | -from lp.code.model.codeimportresult import CodeImportResult |
249 | -from lp.code.model.revision import RevisionCache |
250 | - |
251 | |
252 | ONE_DAY_IN_SECONDS = 24*60*60 |
253 | |
254 | @@ -45,9 +42,12 @@ |
255 | maximum_chunk_size = None # Override |
256 | cooldown_time = 0 |
257 | |
258 | + def __init__(self, log): |
259 | + self.log = log |
260 | + |
261 | def run(self): |
262 | assert self.maximum_chunk_size is not None, "Did not override." |
263 | - LoopTuner( |
264 | + DBLoopTuner( |
265 | self, self.goal_seconds, |
266 | minimum_chunk_size = self.minimum_chunk_size, |
267 | maximum_chunk_size = self.maximum_chunk_size, |
268 | @@ -61,7 +61,8 @@ |
269 | """ |
270 | maximum_chunk_size = 6*60*60 # 6 hours in seconds. |
271 | |
272 | - def __init__(self): |
273 | + def __init__(self, log): |
274 | + super(OAuthNoncePruner, self).__init__(log) |
275 | self.store = IMasterStore(OAuthNonce) |
276 | self.oldest_age = self.store.execute(""" |
277 | SELECT COALESCE(EXTRACT(EPOCH FROM |
278 | @@ -77,6 +78,10 @@ |
279 | self.oldest_age = max( |
280 | ONE_DAY_IN_SECONDS, self.oldest_age - chunk_size) |
281 | |
282 | + self.log.debug( |
283 | + "Removed OAuthNonce rows older than %d seconds" |
284 | + % self.oldest_age) |
285 | + |
286 | self.store.find( |
287 | OAuthNonce, |
288 | OAuthNonce.request_timestamp < SQL( |
289 | @@ -92,7 +97,8 @@ |
290 | """ |
291 | maximum_chunk_size = 6*60*60 # 6 hours in seconds. |
292 | |
293 | - def __init__(self): |
294 | + def __init__(self, log): |
295 | + super(OpenIDConsumerNoncePruner, self).__init__(log) |
296 | self.store = getUtility(IStoreSelector).get(MAIN_STORE, MASTER_FLAVOR) |
297 | self.earliest_timestamp = self.store.find( |
298 | Min(OpenIDConsumerNonce.timestamp)).one() |
299 | @@ -109,12 +115,52 @@ |
300 | self.earliest_wanted_timestamp, |
301 | self.earliest_timestamp + chunk_size) |
302 | |
303 | + self.log.debug( |
304 | + "Removing OpenIDConsumerNonce rows older than %s" |
305 | + % self.earliest_timestamp) |
306 | + |
307 | self.store.find( |
308 | OpenIDConsumerNonce, |
309 | OpenIDConsumerNonce.timestamp < self.earliest_timestamp).remove() |
310 | transaction.commit() |
311 | |
312 | |
313 | +class OpenIDAssociationPruner(TunableLoop): |
314 | + minimum_chunk_size = 3500 |
315 | + maximum_chunk_size = 50000 |
316 | + |
317 | + table_name = 'OpenIDAssociation' |
318 | + store_name = AUTH_STORE |
319 | + |
320 | + _num_removed = None |
321 | + |
322 | + def __init__(self, log): |
323 | + super(OpenIDAssociationPruner, self).__init__(log) |
324 | + self.store = getUtility(IStoreSelector).get( |
325 | + self.store_name, MASTER_FLAVOR) |
326 | + |
327 | + def __call__(self, chunksize): |
328 | + result = self.store.execute(""" |
329 | + DELETE FROM %s |
330 | + WHERE (server_url, handle) IN ( |
331 | + SELECT server_url, handle FROM %s |
332 | + WHERE issued + lifetime < |
333 | + EXTRACT(EPOCH FROM CURRENT_TIMESTAMP) |
334 | + LIMIT %d |
335 | + ) |
336 | + """ % (self.table_name, self.table_name, int(chunksize))) |
337 | + self._num_removed = result._raw_cursor.rowcount |
338 | + transaction.commit() |
339 | + |
340 | + def isDone(self): |
341 | + return self._num_removed == 0 |
342 | + |
343 | + |
344 | +class OpenIDConsumerAssociationPruner(OpenIDAssociationPruner): |
345 | + table_name = 'OpenIDConsumerAssociation' |
346 | + store_name = MAIN_STORE |
347 | + |
348 | + |
349 | class RevisionCachePruner(TunableLoop): |
350 | """A tunable loop to remove old revisions from the cache.""" |
351 | |
352 | @@ -141,8 +187,9 @@ |
353 | and they are not one of the 4 most recent results for that |
354 | CodeImport. |
355 | """ |
356 | - maximum_chunk_size = 100 |
357 | - def __init__(self): |
358 | + maximum_chunk_size = 1000 |
359 | + def __init__(self, log): |
360 | + super(CodeImportResultPruner, self).__init__(log) |
361 | self.store = IMasterStore(CodeImportResult) |
362 | |
363 | self.min_code_import = self.store.find( |
364 | @@ -158,6 +205,11 @@ |
365 | or self.next_code_import_id > self.max_code_import) |
366 | |
367 | def __call__(self, chunk_size): |
368 | + self.log.debug( |
369 | + "Removing expired CodeImportResults for CodeImports %d -> %d" % ( |
370 | + self.next_code_import_id, |
371 | + self.next_code_import_id + chunk_size - 1)) |
372 | + |
373 | self.store.execute(""" |
374 | DELETE FROM CodeImportResult |
375 | WHERE |
376 | @@ -193,7 +245,8 @@ |
377 | |
378 | maximum_chunk_size = 1000 |
379 | |
380 | - def __init__(self): |
381 | + def __init__(self, log): |
382 | + super(RevisionAuthorEmailLinker, self).__init__(log) |
383 | self.author_store = IMasterStore(RevisionAuthor) |
384 | self.email_store = IMasterStore(EmailAddress) |
385 | |
386 | @@ -252,7 +305,8 @@ |
387 | |
388 | maximum_chunk_size = 1000 |
389 | |
390 | - def __init__(self): |
391 | + def __init__(self, log): |
392 | + super(HWSubmissionEmailLinker, self).__init__(log) |
393 | self.submission_store = IMasterStore(HWSubmission) |
394 | self.email_store = IMasterStore(EmailAddress) |
395 | |
396 | @@ -311,7 +365,8 @@ |
397 | |
398 | maximum_chunk_size = 1000 |
399 | |
400 | - def __init__(self): |
401 | + def __init__(self, log): |
402 | + super(MailingListSubscriptionPruner, self).__init__(log) |
403 | self.subscription_store = IMasterStore(MailingListSubscription) |
404 | self.email_store = IMasterStore(EmailAddress) |
405 | |
406 | @@ -354,14 +409,24 @@ |
407 | script_name = None # Script name for locking and database user. Override. |
408 | tunable_loops = None # Collection of TunableLoops. Override. |
409 | |
410 | + # _maximum_chunk_size is used to override the defined |
411 | + # maximum_chunk_size to allow our tests to ensure multiple calls to |
412 | + # __call__ are required without creating huge amounts of test data. |
413 | + _maximum_chunk_size = None |
414 | + |
415 | def __init__(self, test_args=None): |
416 | super(BaseDatabaseGarbageCollector, self).__init__( |
417 | - self.script_name, dbuser=self.script_name, test_args=test_args) |
418 | + self.script_name, |
419 | + dbuser=self.script_name.replace('-','_'), |
420 | + test_args=test_args) |
421 | |
422 | def main(self): |
423 | for tunable_loop in self.tunable_loops: |
424 | self.logger.info("Running %s" % tunable_loop.__name__) |
425 | - tunable_loop().run() |
426 | + tunable_loop = tunable_loop(log=self.logger) |
427 | + if self._maximum_chunk_size is not None: |
428 | + tunable_loop.maximum_chunk_size = self._maximum_chunk_size |
429 | + tunable_loop.run() |
430 | |
431 | |
432 | class HourlyDatabaseGarbageCollector(BaseDatabaseGarbageCollector): |
433 | @@ -369,9 +434,12 @@ |
434 | tunable_loops = [ |
435 | OAuthNoncePruner, |
436 | OpenIDConsumerNoncePruner, |
437 | + OpenIDAssociationPruner, |
438 | + OpenIDConsumerAssociationPruner, |
439 | RevisionCachePruner, |
440 | ] |
441 | |
442 | + |
443 | class DailyDatabaseGarbageCollector(BaseDatabaseGarbageCollector): |
444 | script_name = 'garbo-daily' |
445 | tunable_loops = [ |
446 | |
447 | === modified file 'lib/canonical/launchpad/scripts/tests/test_garbo.py' |
448 | --- lib/canonical/launchpad/scripts/tests/test_garbo.py 2009-05-04 09:16:25 +0000 |
449 | +++ lib/canonical/launchpad/scripts/tests/test_garbo.py 2009-05-10 03:41:25 +0000 |
450 | @@ -13,6 +13,7 @@ |
451 | from storm.expr import Min |
452 | from storm.store import Store |
453 | import transaction |
454 | +from zope.component import getUtility |
455 | |
456 | from lp.code.model.codeimportresult import CodeImportResult |
457 | from canonical.launchpad.database.oauth import OAuthNonce |
458 | @@ -22,9 +23,12 @@ |
459 | from lp.code.interfaces.codeimportresult import CodeImportResultStatus |
460 | from canonical.launchpad.testing import TestCase, TestCaseWithFactory |
461 | from canonical.launchpad.scripts.garbo import ( |
462 | - DailyDatabaseGarbageCollector, HourlyDatabaseGarbageCollector) |
463 | + DailyDatabaseGarbageCollector, HourlyDatabaseGarbageCollector, |
464 | + OpenIDAssociationPruner, OpenIDConsumerAssociationPruner) |
465 | from canonical.launchpad.scripts.tests import run_script |
466 | from canonical.launchpad.scripts.logger import QuietFakeLogger |
467 | +from canonical.launchpad.webapp.interfaces import ( |
468 | + IStoreSelector, MASTER_FLAVOR) |
469 | from canonical.testing.layers import ( |
470 | DatabaseLayer, LaunchpadScriptLayer, LaunchpadZopelessLayer) |
471 | from lp.registry.interfaces.person import PersonCreationRationale |
472 | @@ -59,20 +63,21 @@ |
473 | self.runDaily() |
474 | self.runHourly() |
475 | |
476 | - def runDaily(self): |
477 | - LaunchpadZopelessLayer.switchDbUser('garbo-daily') |
478 | + def runDaily(self, maximum_chunk_size=2): |
479 | + LaunchpadZopelessLayer.switchDbUser('garbo_daily') |
480 | collector = DailyDatabaseGarbageCollector(test_args=[]) |
481 | + collector._maximum_chunk_size = maximum_chunk_size |
482 | collector.logger = QuietFakeLogger() |
483 | collector.main() |
484 | |
485 | - def runHourly(self): |
486 | - LaunchpadZopelessLayer.switchDbUser('garbo-hourly') |
487 | + def runHourly(self, maximum_chunk_size=2): |
488 | + LaunchpadZopelessLayer.switchDbUser('garbo_hourly') |
489 | collector = HourlyDatabaseGarbageCollector(test_args=[]) |
490 | + collector._maximum_chunk_size = maximum_chunk_size |
491 | collector.logger = QuietFakeLogger() |
492 | collector.main() |
493 | |
494 | def test_OAuthNoncePruner(self): |
495 | - store = IMasterStore(OAuthNonce) |
496 | now = datetime.utcnow().replace(tzinfo=UTC) |
497 | timestamps = [ |
498 | now - timedelta(days=2), # Garbage |
499 | @@ -81,6 +86,7 @@ |
500 | now, # Not garbage |
501 | ] |
502 | LaunchpadZopelessLayer.switchDbUser('testadmin') |
503 | + store = IMasterStore(OAuthNonce) |
504 | |
505 | # Make sure we start with 0 nonces. |
506 | self.failUnlessEqual(store.find(OAuthNonce).count(), 0) |
507 | @@ -95,7 +101,9 @@ |
508 | # Make sure we have 4 nonces now. |
509 | self.failUnlessEqual(store.find(OAuthNonce).count(), 4) |
510 | |
511 | - self.runHourly() |
512 | + self.runHourly(maximum_chunk_size=60) # 1 minute maximum chunk size |
513 | + |
514 | + store = IMasterStore(OAuthNonce) |
515 | |
516 | # Now back to two, having removed the two garbage entries. |
517 | self.failUnlessEqual(store.find(OAuthNonce).count(), 2) |
518 | @@ -139,7 +147,9 @@ |
519 | self.failUnlessEqual(store.find(OpenIDConsumerNonce).count(), 4) |
520 | |
521 | # Run the garbage collector. |
522 | - self.runHourly() |
523 | + self.runHourly(maximum_chunk_size=60) # 1 minute maximum chunks. |
524 | + |
525 | + store = IMasterStore(OpenIDConsumerNonce) |
526 | |
527 | # We should now have 2 nonces. |
528 | self.failUnlessEqual(store.find(OpenIDConsumerNonce).count(), 2) |
529 | @@ -170,16 +180,19 @@ |
530 | self.runDaily() |
531 | |
532 | # Nothing is removed, because we always keep the 4 latest. |
533 | + store = IMasterStore(CodeImportResult) |
534 | self.failUnlessEqual( |
535 | store.find(CodeImportResult).count(), 4) |
536 | |
537 | new_code_import_result(now - timedelta(days=31)) |
538 | self.runDaily() |
539 | + store = IMasterStore(CodeImportResult) |
540 | self.failUnlessEqual( |
541 | store.find(CodeImportResult).count(), 4) |
542 | |
543 | new_code_import_result(now - timedelta(days=29)) |
544 | self.runDaily() |
545 | + store = IMasterStore(CodeImportResult) |
546 | self.failUnlessEqual( |
547 | store.find(CodeImportResult).count(), 4) |
548 | |
549 | @@ -189,6 +202,53 @@ |
550 | Min(CodeImportResult.date_created)).one().replace(tzinfo=UTC) |
551 | >= now - timedelta(days=30)) |
552 | |
553 | + def test_OpenIDAssociationPruner(self, pruner=OpenIDAssociationPruner): |
554 | + store_name = pruner.store_name |
555 | + table_name = pruner.table_name |
556 | + LaunchpadZopelessLayer.switchDbUser('testadmin') |
557 | + store_selector = getUtility(IStoreSelector) |
558 | + store = store_selector.get(store_name, MASTER_FLAVOR) |
559 | + now = time.time() |
560 | + # Create some associations in the past with lifetimes |
561 | + for delta in range(0, 20): |
562 | + store.execute(""" |
563 | + INSERT INTO %s (server_url, handle, issued, lifetime) |
564 | + VALUES (%s, %s, %d, %d) |
565 | + """ % (table_name, str(delta), str(delta), now-10, delta)) |
566 | + transaction.commit() |
567 | + |
568 | + # Ensure that we created at least one expirable row (using the |
+        # test start time as 'now').
+        num_expired = store.execute("""
+            SELECT COUNT(*) FROM %s
+            WHERE issued + lifetime < %f
+            """ % (table_name, now)).get_one()[0]
+        self.failUnless(num_expired > 0)
+
+        # Expire all those expirable rows, and possibly a few more if this
+        # test is running slow.
+        self.runHourly()
+
+        LaunchpadZopelessLayer.switchDbUser('testadmin')
+        store = store_selector.get(store_name, MASTER_FLAVOR)
+        # Confirm all the rows we know should have been expired have
+        # been expired. These are the ones that would be expired using
+        # the test start time as 'now'.
+        num_expired = store.execute("""
+            SELECT COUNT(*) FROM %s
+            WHERE issued + lifetime < %f
+            """ % (table_name, now)).get_one()[0]
+        self.failUnlessEqual(num_expired, 0)
+
+        # Confirm that we haven't expired everything. This test will fail
+        # if it has taken 10 seconds to get this far.
+        num_unexpired = store.execute(
+            "SELECT COUNT(*) FROM %s" % table_name).get_one()[0]
+        self.failUnless(num_unexpired > 0)
+
+    def test_OpenIDConsumerAssociationPruner(self):
+        self.test_OpenIDAssociationPruner(OpenIDConsumerAssociationPruner)
+
     def test_RevisionAuthorEmailLinker(self):
         LaunchpadZopelessLayer.switchDbUser('testadmin')
         rev1 = self.factory.makeRevision('Author 1 <author-1@Example.Org>')
@@ -301,5 +361,6 @@
         self.runDaily()
         self.assertEqual(mailing_list.getSubscription(person), None)
 
+
 def test_suite():
     return unittest.TestLoader().loadTestsFromName(__name__)

=== modified file 'lib/canonical/launchpad/utilities/looptuner.py'
--- lib/canonical/launchpad/utilities/looptuner.py	2009-04-17 10:32:16 +0000
+++ lib/canonical/launchpad/utilities/looptuner.py	2009-05-10 03:41:25 +0000
@@ -180,10 +180,10 @@
     """
 
     # We block until replication lag is under this threshold.
-    acceptable_replication_lag = timedelta(seconds=90) # In seconds.
+    acceptable_replication_lag = timedelta(seconds=30) # In seconds.
 
     # We block if there are transactions running longer than this threshold.
-    long_running_transaction = 60*60 # In seconds
+    long_running_transaction = 30*60 # In seconds
 
     def _blockWhenLagged(self):
         """When database replication lag is high, block until it drops."""
@@ -222,7 +222,7 @@
                 usename,
                 datname,
                 current_query
-            FROM pg_stat_activity
+            FROM activity()
             WHERE xact_start < CURRENT_TIMESTAMP - interval '%f seconds'
             """ % self.long_running_transaction).get_all())
         if not results:

=== modified file 'lib/lp/code/model/tests/test_revision.py'
--- lib/lp/code/model/tests/test_revision.py	2009-05-08 02:33:14 +0000
+++ lib/lp/code/model/tests/test_revision.py	2009-05-10 03:41:25 +0000
@@ -17,6 +17,7 @@
 
 from canonical.database.sqlbase import cursor
 from canonical.launchpad.ftests import login, logout
+from canonical.launchpad.ftests.logger import MockLogger
 from canonical.launchpad.interfaces.lpstorm import IMasterObject
 from canonical.launchpad.interfaces.account import AccountStatus
 from canonical.launchpad.scripts.garbo import RevisionAuthorEmailLinker
@@ -165,7 +166,7 @@
         # The person registers with Launchpad.
         author = self.factory.makePerson(email=email)
         # Garbo runs the RevisionAuthorEmailLinker job.
-        RevisionAuthorEmailLinker().run()
+        RevisionAuthorEmailLinker(log=MockLogger()).run()
         # Now the kama needs allocating.
         self.assertEqual(
             [rev], list(RevisionSet.getRevisionsNeedingKarmaAllocated()))

=== modified file 'lib/lp/code/model/tests/test_revisionauthor.py'
--- lib/lp/code/model/tests/test_revisionauthor.py	2009-05-07 02:02:51 +0000
+++ lib/lp/code/model/tests/test_revisionauthor.py	2009-05-10 03:41:26 +0000
@@ -11,6 +11,7 @@
 
 from canonical.config import config
 from canonical.launchpad.interfaces.emailaddress import EmailAddressStatus
+from canonical.launchpad.ftests.logger import MockLogger
 from canonical.launchpad.scripts.garbo import RevisionAuthorEmailLinker
 from canonical.launchpad.testing import LaunchpadObjectFactory, TestCase
 from canonical.testing import LaunchpadZopelessLayer
@@ -157,7 +158,7 @@
 
         # After the garbo RevisionAuthorEmailLinker job runs, the link
         # is made.
-        RevisionAuthorEmailLinker().run()
+        RevisionAuthorEmailLinker(log=MockLogger()).run()
         self.assertEqual(harry, self.author.person,
                          'Harry should now be the author.')

=== modified file 'lib/lp/code/scripts/tests/test_revisionkarma.py'
--- lib/lp/code/scripts/tests/test_revisionkarma.py	2009-05-07 02:02:51 +0000
+++ lib/lp/code/scripts/tests/test_revisionkarma.py	2009-05-10 03:41:26 +0000
@@ -10,6 +10,7 @@
 
 from canonical.config import config
 from canonical.launchpad.database.emailaddress import EmailAddressSet
+from canonical.launchpad.ftests.logger import MockLogger
 from canonical.launchpad.scripts.garbo import RevisionAuthorEmailLinker
 from canonical.launchpad.testing import TestCaseWithFactory
 from canonical.testing import LaunchpadZopelessLayer
@@ -74,7 +75,7 @@
         author = self.factory.makePerson(email=email)
         transaction.commit()
         # Run the RevisionAuthorEmailLinker garbo job.
-        RevisionAuthorEmailLinker().run()
+        RevisionAuthorEmailLinker(log=MockLogger()).run()
         LaunchpadZopelessLayer.switchDbUser(config.revisionkarma.dbuser)
         script = RevisionKarmaAllocator(
             'test', config.revisionkarma.dbuser, ['-q'])
@@ -106,7 +107,7 @@
             EmailAddressSet().new(email, author, account=author.account))
         transaction.commit()
         # Run the RevisionAuthorEmailLinker garbo job.
-        RevisionAuthorEmailLinker().run()
+        RevisionAuthorEmailLinker(log=MockLogger()).run()
 
         # Now that the revision author is linked to the person, the revision
         # needs karma allocated.

=== modified file 'scripts/rosetta/message-sharing-merge.py' (properties changed: +x to -x)
Addresses:
Bug #351363: DBLoopTuner often cannot detect long running transactions
Bug #348874: OpenIDAssociations need to be garbage collected
We now use a SECURITY DEFINER wrapper around pg_stat_activity to allow DBLoopTuner to query this information even when not connected as a database superuser.
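The wrapper corresponds to the `activity()` function the looptuner query now selects from instead of `pg_stat_activity`. A minimal sketch of such a wrapper follows; the real definition lives in the Launchpad database schema, so treat the exact signature and body here as illustrative:

```sql
-- Illustrative sketch only: a SECURITY DEFINER function runs with the
-- privileges of its owner. If a superuser owns activity(), ordinary
-- database users calling it can see all rows of pg_stat_activity,
-- which they could not read directly.
CREATE OR REPLACE FUNCTION activity()
RETURNS SETOF pg_stat_activity
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT * FROM pg_stat_activity $$;
```

This is why DBLoopTuner can now inspect long-running transactions without connecting as a database superuser.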
The OpenIDAssociation and OpenIDConsumerA ssociation tables are now garbage collected.
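The pruning criterion the new tests exercise is simply that a row expires once `issued + lifetime` falls before "now". A minimal Python sketch of that predicate (the function name and signature are illustrative, not the actual pruner API):

```python
from time import time


def is_expired(issued, lifetime, now=None):
    """Return True if an association issued at `issued` (epoch seconds)
    with the given `lifetime` (seconds) has expired.

    Mirrors the SQL condition the pruners and tests use:
        issued + lifetime < now
    """
    if now is None:
        now = time()
    return issued + lifetime < now
```

The garbage collector deletes exactly the rows for which this predicate holds, which is why the test first counts expirable rows, runs the hourly jobs, and then asserts that none remain while unexpired rows survive.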
Scripts using the auth Store now connect as a sane database user.
When the test infrastructure is used to reset the Zopeless connection settings, ZStorm is reset and all Stores are closed. After changing the connection settings, Stores must be refetched, but they will then be connected as the correct database user.
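The looptuner part of the branch also tightens the blocking thresholds (replication lag 90s to 30s, long-running transactions 60 to 30 minutes). A rough sketch of the blocking decision with the new values; the names and function here are illustrative, not the real DBLoopTuner API:

```python
from datetime import timedelta

# New thresholds from this branch (illustrative constants).
ACCEPTABLE_REPLICATION_LAG = timedelta(seconds=30)  # was 90 seconds
LONG_RUNNING_TRANSACTION = 30 * 60                  # seconds; was 60*60


def should_block(replication_lag, longest_transaction_secs):
    """Return True if batch work should pause: either the slave
    databases are lagging too far behind, or some transaction has
    been open long enough to bloat replication queues."""
    return (replication_lag > ACCEPTABLE_REPLICATION_LAG
            or longest_transaction_secs > LONG_RUNNING_TRANSACTION)
```

In the real tuner the transaction check is driven by the `activity()` query shown in the diff, which finds backends whose `xact_start` is older than the threshold.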