Merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly into lp:bzr
Proposed by: John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Merge into: lp:bzr
Diff against target: None lines
To merge this branch: bzr merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Related bugs:
Reviewer: bzr-core (Pending)
Review via email: mp+11162@code.launchpad.net
Commit message
Description of the change
John A Meinel (jameinel) wrote:
Robert Collins (lifeless) wrote:
Conceptually great; I'm looking now.
The review merge is bong; I'm going to pull locally, sync up and get a clean diff.
Preview Diff
=== modified file 'Makefile'
--- Makefile	2009-08-03 20:38:39 +0000
+++ Makefile	2009-08-27 00:53:27 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006, 2007, 2008 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2008, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -40,8 +40,6 @@
 
 check-nodocs: extensions
 	$(PYTHON) -Werror -O ./bzr selftest -1v $(tests)
-	@echo "Running all tests with no locale."
-	LC_CTYPE= LANG=C LC_ALL= ./bzr selftest -1v $(tests) 2>&1 | sed -e 's/^/[ascii] /'
 
 # Run Python style checker (apt-get install pyflakes)
 #

=== modified file 'NEWS'
--- NEWS	2009-08-30 22:02:45 +0000
+++ NEWS	2009-09-03 21:04:22 +0000
@@ -6,6 +6,55 @@
 .. contents:: List of Releases
    :depth: 1
 
+In Development
+##############
+
+Compatibility Breaks
+********************
+
+New Features
+************
+
+Bug Fixes
+*********
+
+* ``bzr check`` in pack-0.92, 1.6 and 1.9 format repositories will no
+  longer report incorrect errors about ``Missing inventory ('TREE_ROOT', ...)``
+  (Robert Collins, #416732)
+
+* Don't restrict the command name used to run the test suite.
+  (Vincent Ladeuil, #419950)
+
+Improvements
+************
+
+Documentation
+*************
+
+API Changes
+***********
+
+* ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult``
+  subclasses - the same as python's unittest module. (Robert Collins)
+
+Internals
+*********
+
+* The ``bzrlib.lsprof`` module has a new class ``BzrProfiler`` which makes
+  profiling in some situations like callbacks and generators easier.
+  (Robert Collins)
+
+Testing
+*******
+
+* Passing ``--lsprof-tests -v`` to bzr selftest will cause lsprof output to
+  be output for every test. Note that this is very verbose! (Robert Collins)
+
+* Test parameterisation now does a shallow copy, not a deep copy of the test
+  to be parameterised. This is not expected to break external use of test
+  parameterisation, and is substantially faster. (Robert Collins)
+
+
 bzr 2.0rc2
 ##########
 
@@ -20,10 +69,34 @@
   revisions that are in the fallback repository. (Regressed in 2.0rc1).
   (John Arbash Meinel, #419241)
 
+* Fetches from 2a to 2a are now again requested in 'groupcompress' order.
+  Groups that are seen as 'underutilized' will be repacked on-the-fly.
+  This means that when the source is fully packed, there is minimal
+  overhead during the fetch, but if the source is poorly packed the result
+  is a fairly well packed repository (not as good as 'bzr pack' but
+  good-enough.) (Robert Collins, John Arbash Meinel, #402652)
+
 * Fix a segmentation fault when computing the ``merge_sort`` of a graph
   that has a ghost in the mainline ancestry.
   (John Arbash Meinel, #419241)
 
+* ``groupcompress`` sort order is now more stable, rather than relying on
+  ``topo_sort`` ordering. The implementation is now
+  ``KnownGraph.gc_sort``. (John Arbash Meinel)
+
+* Local data conversion will generate correct deltas. This is a critical
+  bugfix vs 2.0rc1, and all 2.0rc1 users should upgrade to 2.0rc2 before
+  converting repositories. (Robert Collins, #422849)
+
+* Network streams now decode adjacent records of the same type into a
+  single stream, reducing layering churn. (Robert Collins)
+
+Documentation
+*************
+
+* The main table of contents now provides links to the new Migration Docs
+  and Plugins Guide. (Ian Clatworthy)
+
 
 bzr 2.0rc1
 ##########
@@ -64,6 +137,9 @@
 Bug Fixes
 *********
 
+* Further tweaks to handling of ``bzr add`` messages about ignored files.
+  (Jason Spashett, #76616)
+
 * Fetches were being requested in 'groupcompress' order, but weren't
   recombining the groups. Thus they would 'fragment' to get the correct
   order, but not 'recombine' to actually benefit from it. Until we get
@@ -133,9 +209,6 @@
   classes changed to manage lock lifetime of the trees they open in a way
   consistent with reader-exclusive locks. (Robert Collins, #305006)
 
-Internals
-*********
-
 Testing
 *******
 
@@ -149,13 +222,29 @@
   conversion will commit too many copies a file.
   (Martin Pool, #415508)
 
+Improvements
+************
+
+* ``bzr push`` locally on windows will no longer give a locking error with
+  dirstate based formats. (Robert Collins)
+
+* ``bzr shelve`` and ``bzr unshelve`` now work on windows.
+  (Robert Collins, #305006)
+
 API Changes
 ***********
 
+* ``bzrlib.shelf_ui`` has had the ``from_args`` convenience methods of its
+  classes changed to manage lock lifetime of the trees they open in a way
+  consistent with reader-exclusive locks. (Robert Collins, #305006)
+
 * ``Tree.path_content_summary`` may return a size of None, when called on
   a tree with content filtering where the size of the canonical form
   cannot be cheaply determined. (Martin Pool)
 
+* When manually creating transport servers in test cases, a new helper
+  ``TestCase.start_server`` that registers a cleanup and starts the server
+  should be used. (Robert Collins)
 
 bzr 1.18
 ########
@@ -493,6 +582,17 @@
   ``countTestsCases``. (Robert Collins)
 
 
+bzr 1.17.1 (unreleased)
+#######################
+
+Bug Fixes
+*********
+
+* The optional ``_knit_load_data_pyx`` C extension was never being
+  imported. This caused significant slowdowns when reading data from
+  knit format repositories. (Andrew Bennetts, #405653)
+
+
 bzr 1.17 "So late it's brunch" 2009-07-20
 #########################################
 :Codename: so-late-its-brunch
@@ -991,6 +1091,9 @@
 Testing
 *******
 
+* ``make check`` no longer repeats the test run in ``LANG=C``.
+  (Martin Pool, #386180)
+
 * The number of cores is now correctly detected on OSX. (John Szakmeister)
 
 * The number of cores is also detected on Solaris and win32. (Vincent Ladeuil)
@@ -4971,7 +5074,7 @@
   checkouts. (Aaron Bentley, #182040)
 
 * Stop polluting /tmp when running selftest.
-  (Vincent Ladeuil, #123623)
+  (Vincent Ladeuil, #123363)
 
 * Switch from NFKC => NFC for normalization checks. NFC allows a few
   more characters which should be considered valid.

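The ``BzrProfiler`` noted in the Internals section replaces a single wrapped-function call with an explicit start()/stop() pair, so code that runs across callbacks or generators can still be measured. As a rough modern-stdlib analogue of that pattern (``StartStopProfiler`` and the use of ``cProfile`` here are illustrative stand-ins, not bzrlib's actual lsprof-based API):

```python
import cProfile
import io
import pstats

class StartStopProfiler:
    """Start/stop-style profiler, roughly the shape of bzrlib's BzrProfiler."""

    def start(self):
        # Begin recording all calls made from now until stop().
        self._profile = cProfile.Profile()
        self._profile.enable()

    def stop(self):
        # Stop recording and hand back a stats object for the interval.
        self._profile.disable()
        stats = pstats.Stats(self._profile, stream=io.StringIO())
        self._profile = None
        return stats

profiler = StartStopProfiler()
profiler.start()
sum(i * i for i in range(1000))   # arbitrary code being measured
stats = profiler.stop()
stats.sort_stats('cumulative')    # ready for stats.print_stats()
```

The key property, shared with the real class, is that nothing about the profiled code needs to be expressible as one function passed to a `profile()` helper.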
=== modified file 'bzr'
--- bzr	2009-08-11 03:02:56 +0000
+++ bzr	2009-08-28 05:11:10 +0000
@@ -23,7 +23,7 @@
 import warnings
 
 # update this on each release
-_script_version = (2, 0, 0)
+_script_version = (2, 1, 0)
 
 if __doc__ is None:
     print "bzr does not support python -OO."

=== modified file 'bzrlib/__init__.py'
--- bzrlib/__init__.py	2009-08-27 07:49:53 +0000
+++ bzrlib/__init__.py	2009-08-30 21:34:42 +0000
@@ -50,7 +50,7 @@
 # Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
 # releaselevel of 'dev' for unreleased under-development code.
 
-version_info = (2, 0, 0, 'candidate', 1)
+version_info = (2, 1, 0, 'dev', 0)
 
 # API compatibility version: bzrlib is currently API compatible with 1.15.
 api_minimum_version = (1, 17, 0)

=== modified file 'bzrlib/_known_graph_py.py'
--- bzrlib/_known_graph_py.py	2009-08-17 20:41:26 +0000
+++ bzrlib/_known_graph_py.py	2009-08-25 18:45:40 +0000
@@ -97,6 +97,10 @@
         return [node for node in self._nodes.itervalues()
                 if not node.parent_keys]
 
+    def _find_tips(self):
+        return [node for node in self._nodes.itervalues()
+                if not node.child_keys]
+
     def _find_gdfo(self):
         nodes = self._nodes
         known_parent_gdfos = {}
@@ -218,6 +222,51 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for node in tips:
+            if node.key.__class__ is str or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            prefix_tips.setdefault(prefix, []).append(node)
+
+        num_seen_children = dict.fromkeys(self._nodes, 0)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            pending = sorted(prefix_tips[prefix], key=lambda n:n.key,
+                             reverse=True)
+            while pending:
+                node = pending.pop()
+                if node.parent_keys is None:
+                    # Ghost node, skip it
+                    continue
+                result.append(node.key)
+                for parent_key in sorted(node.parent_keys, reverse=True):
+                    parent_node = self._nodes[parent_key]
+                    seen_children = num_seen_children[parent_key] + 1
+                    if seen_children == len(parent_node.child_keys):
+                        # All children have been processed, enqueue this parent
+                        pending.append(parent_node)
+                        # This has been queued up, stop tracking it
+                        del num_seen_children[parent_key]
+                    else:
+                        num_seen_children[parent_key] = seen_children
+        return result
+
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
         from bzrlib import tsort

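The pure-Python ``gc_sort`` added above can be hard to follow in diff form. The core idea fits in a short self-contained sketch: emit children before parents, breaking ties lexicographically so the order is identical on every machine. The graph representation below (a dict mapping key to a tuple of parent keys, all of which must be present) is a stand-in for bzrlib's ``KnownGraph``, and prefix grouping and ghost handling are omitted for brevity:

```python
def gc_sort_sketch(parent_map):
    """Return keys reverse-topologically, ties broken by sorted key order."""
    # Build child lists so we know when every child of a node has been emitted.
    children = {key: [] for key in parent_map}
    for key, parents in parent_map.items():
        for parent in parents:
            children[parent].append(key)
    # Tips are nodes with no children; process them smallest-key first.
    tips = [key for key, kids in children.items() if not kids]
    pending = sorted(tips, reverse=True)   # pop() takes the smallest key
    seen = {key: 0 for key in parent_map}
    result = []
    while pending:
        key = pending.pop()
        result.append(key)
        for parent in sorted(parent_map[key], reverse=True):
            seen[parent] += 1
            if seen[parent] == len(children[parent]):
                # All children emitted; the parent may now be emitted too.
                pending.append(parent)
    return result

# A small diamond: 'c' and 'd' both descend from 'b', which descends from 'a'.
graph = {'a': (), 'b': ('a',), 'c': ('b',), 'd': ('b',)}
print(gc_sort_sketch(graph))   # -> ['c', 'd', 'b', 'a']
```

Because ties are broken by key rather than by dict iteration order, the result is the "stable" ordering the docstring promises: the same graph always sorts the same way.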
=== modified file 'bzrlib/_known_graph_pyx.pyx'
--- bzrlib/_known_graph_pyx.pyx	2009-08-26 16:03:59 +0000
+++ bzrlib/_known_graph_pyx.pyx	2009-09-02 13:32:52 +0000
@@ -25,11 +25,18 @@
     ctypedef struct PyObject:
         pass
 
+    int PyString_CheckExact(object)
+
+    int PyObject_RichCompareBool(object, object, int)
+    int Py_LT
+
+    int PyTuple_CheckExact(object)
     object PyTuple_New(Py_ssize_t n)
     Py_ssize_t PyTuple_GET_SIZE(object t)
     PyObject * PyTuple_GET_ITEM(object t, Py_ssize_t o)
     void PyTuple_SET_ITEM(object t, Py_ssize_t o, object v)
 
+    int PyList_CheckExact(object)
     Py_ssize_t PyList_GET_SIZE(object l)
     PyObject * PyList_GET_ITEM(object l, Py_ssize_t o)
     int PyList_SetItem(object l, Py_ssize_t o, object l) except -1
@@ -108,14 +115,65 @@
         return <_KnownGraphNode>temp_node
 
 
-cdef _KnownGraphNode _get_parent(parents, Py_ssize_t pos):
+cdef _KnownGraphNode _get_tuple_node(tpl, Py_ssize_t pos):
     cdef PyObject *temp_node
-    cdef _KnownGraphNode node
 
-    temp_node = PyTuple_GET_ITEM(parents, pos)
+    temp_node = PyTuple_GET_ITEM(tpl, pos)
     return <_KnownGraphNode>temp_node
 
 
+def get_key(node):
+    cdef _KnownGraphNode real_node
+    real_node = node
+    return real_node.key
+
+
+cdef object _sort_list_nodes(object lst_or_tpl, int reverse):
+    """Sort a list of _KnownGraphNode objects.
+
+    If lst_or_tpl is a list, it is allowed to mutate in place. It may also
+    just return the input list if everything is already sorted.
+    """
+    cdef _KnownGraphNode node1, node2
+    cdef int do_swap, is_tuple
+    cdef Py_ssize_t length
+
+    is_tuple = PyTuple_CheckExact(lst_or_tpl)
+    if not (is_tuple or PyList_CheckExact(lst_or_tpl)):
+        raise TypeError('lst_or_tpl must be a list or tuple.')
+    length = len(lst_or_tpl)
+    if length == 0 or length == 1:
+        return lst_or_tpl
+    if length == 2:
+        if is_tuple:
+            node1 = _get_tuple_node(lst_or_tpl, 0)
+            node2 = _get_tuple_node(lst_or_tpl, 1)
+        else:
+            node1 = _get_list_node(lst_or_tpl, 0)
+            node2 = _get_list_node(lst_or_tpl, 1)
+        if reverse:
+            do_swap = PyObject_RichCompareBool(node1.key, node2.key, Py_LT)
+        else:
+            do_swap = PyObject_RichCompareBool(node2.key, node1.key, Py_LT)
+        if not do_swap:
+            return lst_or_tpl
+        if is_tuple:
+            return (node2, node1)
+        else:
+            # Swap 'in-place', since lists are mutable
+            Py_INCREF(node1)
+            PyList_SetItem(lst_or_tpl, 1, node1)
+            Py_INCREF(node2)
+            PyList_SetItem(lst_or_tpl, 0, node2)
+            return lst_or_tpl
+    # For all other sizes, we just use 'sorted()'
+    if is_tuple:
+        # Note that sorted() is just list(iterable).sort()
+        lst_or_tpl = list(lst_or_tpl)
+    lst_or_tpl.sort(key=get_key, reverse=reverse)
+    return lst_or_tpl
+
+
 cdef class _MergeSorter
 
 cdef class KnownGraph:
@@ -216,6 +274,19 @@
             PyList_Append(tails, node)
         return tails
 
+    def _find_tips(self):
+        cdef PyObject *temp_node
+        cdef _KnownGraphNode node
+        cdef Py_ssize_t pos
+
+        tips = []
+        pos = 0
+        while PyDict_Next(self._nodes, &pos, NULL, &temp_node):
+            node = <_KnownGraphNode>temp_node
+            if PyList_GET_SIZE(node.children) == 0:
+                PyList_Append(tips, node)
+        return tips
+
     def _find_gdfo(self):
         cdef _KnownGraphNode node
         cdef _KnownGraphNode child
@@ -322,7 +393,7 @@
                 continue
             if node.parents is not None and PyTuple_GET_SIZE(node.parents) > 0:
                 for pos from 0 <= pos < PyTuple_GET_SIZE(node.parents):
-                    parent_node = _get_parent(node.parents, pos)
+                    parent_node = _get_tuple_node(node.parents, pos)
                     last_item = last_item + 1
                     if last_item < PyList_GET_SIZE(pending):
                         Py_INCREF(parent_node) # SetItem steals a ref
@@ -397,6 +468,77 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        cdef PyObject *temp
+        cdef Py_ssize_t pos, last_item
+        cdef _KnownGraphNode node, node2, parent_node
+
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for pos from 0 <= pos < PyList_GET_SIZE(tips):
+            node = _get_list_node(tips, pos)
+            if PyString_CheckExact(node.key) or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            if temp == NULL:
+                prefix_tips[prefix] = [node]
+            else:
+                tip_nodes = <object>temp
+                PyList_Append(tip_nodes, node)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            assert temp != NULL
+            tip_nodes = <object>temp
+            pending = _sort_list_nodes(tip_nodes, 1)
+            last_item = PyList_GET_SIZE(pending) - 1
+            while last_item >= 0:
+                node = _get_list_node(pending, last_item)
+                last_item = last_item - 1
+                if node.parents is None:
+                    # Ghost
+                    continue
+                PyList_Append(result, node.key)
+                # Sorting the parent keys isn't strictly necessary for stable
+                # sorting of a given graph. But it does help minimize the
+                # differences between graphs
+                # For bzr.dev ancestry:
+                #   4.73ms no sort
+                #   7.73ms RichCompareBool sort
+                parents = _sort_list_nodes(node.parents, 1)
+                for pos from 0 <= pos < len(parents):
+                    if PyTuple_CheckExact(parents):
+                        parent_node = _get_tuple_node(parents, pos)
+                    else:
+                        parent_node = _get_list_node(parents, pos)
+                    # TODO: GraphCycle detection
+                    parent_node.seen = parent_node.seen + 1
+                    if (parent_node.seen
+                        == PyList_GET_SIZE(parent_node.children)):
+                        # All children have been processed, queue up this
+                        # parent
+                        last_item = last_item + 1
+                        if last_item < PyList_GET_SIZE(pending):
+                            Py_INCREF(parent_node) # SetItem steals a ref
+                            PyList_SetItem(pending, last_item, parent_node)
+                        else:
+                            PyList_Append(pending, parent_node)
+                        parent_node.seen = 0
+        return result
 
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
@@ -522,7 +664,7 @@
             raise RuntimeError('ghost nodes should not be pushed'
                                ' onto the stack: %s' % (node,))
         if PyTuple_GET_SIZE(node.parents) > 0:
-            parent_node = _get_parent(node.parents, 0)
+            parent_node = _get_tuple_node(node.parents, 0)
             ms_node.left_parent = parent_node
             if parent_node.parents is None: # left-hand ghost
                 ms_node.left_pending_parent = None
@@ -532,7 +674,7 @@
         if PyTuple_GET_SIZE(node.parents) > 1:
             ms_node.pending_parents = []
             for pos from 1 <= pos < PyTuple_GET_SIZE(node.parents):
-                parent_node = _get_parent(node.parents, pos)
+                parent_node = _get_tuple_node(node.parents, pos)
                 if parent_node.parents is None: # ghost
                     continue
                 PyList_Append(ms_node.pending_parents, parent_node)

=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py	2009-08-26 03:20:32 +0000
+++ bzrlib/builtins.py	2009-08-28 05:00:33 +0000
@@ -3382,6 +3382,8 @@
                      Option('lsprof-timed',
                             help='Generate lsprof output for benchmarked'
                                  ' sections of code.'),
+                     Option('lsprof-tests',
+                            help='Generate lsprof output for each test.'),
                      Option('cache-dir', type=str,
                             help='Cache intermediate benchmark output in this '
                                  'directory.'),
@@ -3428,7 +3430,7 @@
             first=False, list_only=False,
             randomize=None, exclude=None, strict=False,
             load_list=None, debugflag=None, starting_with=None, subunit=False,
-            parallel=None):
+            parallel=None, lsprof_tests=False):
         from bzrlib.tests import selftest
         import bzrlib.benchmarks as benchmarks
         from bzrlib.benchmarks import tree_creator
@@ -3468,6 +3470,7 @@
                           "transport": transport,
                           "test_suite_factory": test_suite_factory,
                           "lsprof_timed": lsprof_timed,
+                          "lsprof_tests": lsprof_tests,
                           "bench_history": benchfile,
                           "matching_tests_first": first,
                           "list_only": list_only,

=== modified file 'bzrlib/groupcompress.py'
--- bzrlib/groupcompress.py	2009-08-26 16:47:51 +0000
+++ bzrlib/groupcompress.py	2009-09-03 15:25:36 +0000
@@ -457,7 +457,6 @@
             # There are code paths that first extract as fulltext, and then
             # extract as storage_kind (smart fetch). So we don't break the
             # refcycle here, but instead in manager.get_record_stream()
-            # self._manager = None
         if storage_kind == 'fulltext':
             return self._bytes
         else:
@@ -469,6 +468,14 @@
 class _LazyGroupContentManager(object):
     """This manages a group of _LazyGroupCompressFactory objects."""
 
+    _max_cut_fraction = 0.75 # We allow a block to be trimmed to 75% of
+                             # current size, and still be considered
+                             # resuable
+    _full_block_size = 4*1024*1024
+    _full_mixed_block_size = 2*1024*1024
+    _full_enough_block_size = 3*1024*1024 # size at which we won't repack
+    _full_enough_mixed_block_size = 2*768*1024 # 1.5MB
+
     def __init__(self, block):
         self._block = block
         # We need to preserve the ordering
@@ -546,22 +553,23 @@
         # time (self._block._content) is a little expensive.
         self._block._ensure_content(self._last_byte)
 
-    def _check_rebuild_block(self):
+    def _check_rebuild_action(self):
         """Check to see if our block should be repacked."""
         total_bytes_used = 0
         last_byte_used = 0
         for factory in self._factories:
             total_bytes_used += factory._end - factory._start
-            last_byte_used = max(last_byte_used, factory._end)
-        # If we are using most of the bytes from the block, we have nothing
-        # else to check (currently more that 1/2)
+            if last_byte_used < factory._end:
+                last_byte_used = factory._end
+        # If we are using more than half of the bytes from the block, we have
+        # nothing else to check
         if total_bytes_used * 2 >= self._block._content_length:
-            return
-        # Can we just strip off the trailing bytes? If we are going to be
-        # transmitting more than 50% of the front of the content, go ahead
+            return None, last_byte_used, total_bytes_used
+        # We are using less than 50% of the content. Is the content we are
+        # using at the beginning of the block? If so, we can just trim the
+        # tail, rather than rebuilding from scratch.
         if total_bytes_used * 2 > last_byte_used:
-            self._trim_block(last_byte_used)
-            return
+            return 'trim', last_byte_used, total_bytes_used
 
         # We are using a small amount of the data, and it isn't just packed
         # nicely at the front, so rebuild the content.
@@ -574,7 +582,77 @@
         #       expanding many deltas into fulltexts, as well.
         #       If we build a cheap enough 'strip', then we could try a strip,
         #       if that expands the content, we then rebuild.
-        self._rebuild_block()
+        return 'rebuild', last_byte_used, total_bytes_used
+
+    def check_is_well_utilized(self):
+        """Is the current block considered 'well utilized'?
+
+        This is a bit of a heuristic, but it basically asks if the current
+        block considers itself to be a fully developed group, rather than just
+        a loose collection of data.
+        """
+        if len(self._factories) == 1:
+            # A block of length 1 is never considered 'well utilized' :)
+            return False
+        action, last_byte_used, total_bytes_used = self._check_rebuild_action()
+        block_size = self._block._content_length
+        if total_bytes_used < block_size * self._max_cut_fraction:
+            # This block wants to trim itself small enough that we want to
+            # consider it under-utilized.
+            return False
+        # TODO: This code is meant to be the twin of _insert_record_stream's
+        #       'start_new_block' logic. It would probably be better to factor
+        #       out that logic into a shared location, so that it stays
+        #       together better
+        # We currently assume a block is properly utilized whenever it is >75%
+        # of the size of a 'full' block. In normal operation, a block is
+        # considered full when it hits 4MB of same-file content. So any block
+        # >3MB is 'full enough'.
+        # The only time this isn't true is when a given block has large-object
+        # content. (a single file >4MB, etc.)
+        # Under these circumstances, we allow a block to grow to
+        # 2 x largest_content. Which means that if a given block had a large
+        # object, it may actually be under-utilized. However, given that this
+        # is 'pack-on-the-fly' it is probably reasonable to not repack large
+        # contet blobs on-the-fly.
+        if block_size >= self._full_enough_block_size:
+            return True
+        # If a block is <3MB, it still may be considered 'full' if it contains
+        # mixed content. The current rule is 2MB of mixed content is considered
+        # full. So check to see if this block contains mixed content, and
+        # set the threshold appropriately.
+        common_prefix = None
+        for factory in self._factories:
+            prefix = factory.key[:-1]
+            if common_prefix is None:
+                common_prefix = prefix
+            elif prefix != common_prefix:
+                # Mixed content, check the size appropriately
+                if block_size >= self._full_enough_mixed_block_size:
+                    return True
+                break
+        # The content failed both the mixed check and the single-content check
+        # so obviously it is not fully utilized
+        # TODO: there is one other constraint that isn't being checked
+        #       namely, that the entries in the block are in the appropriate
+        #       order. For example, you could insert the entries in exactly
+        #       reverse groupcompress order, and we would think that is ok.
+        #       (all the right objects are in one group, and it is fully
+        #       utilized, etc.) For now, we assume that case is rare,
+        #       especially since we should always fetch in 'groupcompress'
+        #       order.
+        return False
+
+    def _check_rebuild_block(self):
+        action, last_byte_used, total_bytes_used = self._check_rebuild_action()
+        if action is None:
+            return
+        if action == 'trim':
+            self._trim_block(last_byte_used)
+        elif action == 'rebuild':
+            self._rebuild_block()
+        else:
+            raise ValueError('unknown rebuild action: %r' % (action,))
 
     def _wire_bytes(self):
         """Return a byte stream suitable for transmitting over the wire."""
@@ -1570,6 +1648,7 @@
         block_length = None
         # XXX: TODO: remove this, it is just for safety checking for now
         inserted_keys = set()
+        reuse_this_block = reuse_blocks
         for record in stream:
             # Raise an error when a record is missing.
             if record.storage_kind == 'absent':
@@ -1583,10 +1662,20 @@
             if reuse_blocks:
                 # If the reuse_blocks flag is set, check to see if we can just
                 # copy a groupcompress block as-is.
+                # We only check on the first record (groupcompress-block) not
+                # on all of the (groupcompress-block-ref) entries.
+                # The reuse_this_block flag is then kept for as long as
+                if record.storage_kind == 'groupcompress-block':
+                    # Check to see if we really want to re-use this block
+                    insert_manager = record._manager
+                    reuse_this_block = insert_manager.check_is_well_utilized()
+            else:
+                reuse_this_block = False
+            if reuse_this_block:
+                # We still want to reuse this block
                 if record.storage_kind == 'groupcompress-block':
                     # Insert the raw block into the target repo
                     insert_manager = record._manager
-                    insert_manager._check_rebuild_block()
                     bytes = record._manager._block.to_bytes()
                     _, start, length = self._access.add_raw_records(
                         [(None, len(bytes))], bytes)[0]
@@ -1597,6 +1686,11 @@
                                       'groupcompress-block-ref'):
                     if insert_manager is None:
                         raise AssertionError('No insert_manager set')
+                    if insert_manager is not record._manager:
+                        raise AssertionError('insert_manager does not match'
+                            ' the current record, we cannot be positive'
+                            ' that the appropriate content was inserted.'
+                            )
                     value = "%d %d %d %d" % (block_start, block_length,
                                              record._start, record._end)
                     nodes = [(record.key, value, (record.parents,))]

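The ``check_is_well_utilized`` heuristic above reduces to a handful of size thresholds. A simplified standalone sketch may make the decision logic easier to see; the function signature and the idea of passing record prefixes as a plain list are inventions for illustration (the real method inspects the manager's factories), while the constants are the ones from the diff:

```python
# Thresholds lifted from _LazyGroupContentManager in the diff.
FULL_ENOUGH_BLOCK_SIZE = 3 * 1024 * 1024        # 3MB of same-file content
FULL_ENOUGH_MIXED_BLOCK_SIZE = 2 * 768 * 1024   # 1.5MB of mixed content
MAX_CUT_FRACTION = 0.75

def is_well_utilized(block_size, bytes_used, prefixes):
    """Decide whether a compressed group block is worth copying as-is.

    block_size: total block content length in bytes
    bytes_used: bytes actually referenced by the records being fetched
    prefixes:   one file-id prefix per record held in the block
    """
    if len(prefixes) == 1:
        # A block holding a single record is never 'well utilized'.
        return False
    if bytes_used < block_size * MAX_CUT_FRACTION:
        # We would want to trim away more than 25% of it: under-utilized.
        return False
    if block_size >= FULL_ENOUGH_BLOCK_SIZE:
        # Big enough to count as a fully developed same-file group.
        return True
    # Smaller blocks still count as full when they mix content from
    # several files and reach the lower mixed-content threshold.
    if len(set(prefixes)) > 1 and block_size >= FULL_ENOUGH_MIXED_BLOCK_SIZE:
        return True
    return False
```

Blocks that fail this check are the ones the fetch path rebuilds on the fly, which is how a fetch from a poorly packed source still produces a reasonably well packed target.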
723 | === modified file 'bzrlib/lsprof.py' |
724 | --- bzrlib/lsprof.py 2009-03-08 06:18:06 +0000 |
725 | +++ bzrlib/lsprof.py 2009-08-24 21:05:09 +0000 |
726 | @@ -13,45 +13,74 @@ |
727 | |
728 | __all__ = ['profile', 'Stats'] |
729 | |
730 | -_g_threadmap = {} |
731 | - |
732 | - |
733 | -def _thread_profile(f, *args, **kwds): |
734 | - # we lose the first profile point for a new thread in order to trampoline |
735 | - # a new Profile object into place |
736 | - global _g_threadmap |
737 | - thr = thread.get_ident() |
738 | - _g_threadmap[thr] = p = Profiler() |
739 | - # this overrides our sys.setprofile hook: |
740 | - p.enable(subcalls=True, builtins=True) |
741 | - |
742 | - |
743 | def profile(f, *args, **kwds): |
744 | """Run a function profile. |
745 | |
746 | Exceptions are not caught: If you need stats even when exceptions are to be |
747 | - raised, passing in a closure that will catch the exceptions and transform |
748 | - them appropriately for your driver function. |
749 | + raised, pass in a closure that will catch the exceptions and transform them |
750 | + appropriately for your driver function. |
751 | |
752 | :return: The functions return value and a stats object. |
753 | """ |
754 | - global _g_threadmap |
755 | - p = Profiler() |
756 | - p.enable(subcalls=True) |
757 | - threading.setprofile(_thread_profile) |
758 | + profiler = BzrProfiler() |
759 | + profiler.start() |
760 | try: |
761 | ret = f(*args, **kwds) |
762 | finally: |
763 | - p.disable() |
764 | - for pp in _g_threadmap.values(): |
765 | + stats = profiler.stop() |
766 | + return ret, stats |
767 | + |
768 | + |
769 | +class BzrProfiler(object): |
770 | + """Bzr utility wrapper around Profiler. |
771 | + |
772 | + For most uses the module level 'profile()' function will be suitable. |
773 | + However, when a simple wrapped function isn't available, profiling may |
774 | + be easier to accomplish using this class. |
775 | + |
776 | + To use it, create a BzrProfiler and call start() on it. Some arbitrary |
777 | + time later call stop() to stop profiling and retrieve the statistics |
778 | + from the code executed in the interim. |
779 | + """ |
780 | + |
781 | + def start(self): |
782 | + """Start profiling. |
783 | + |
784 | + This hooks into threading and will record all calls made until |
785 | + stop() is called. |
786 | + """ |
787 | + self._g_threadmap = {} |
788 | + self.p = Profiler() |
789 | + self.p.enable(subcalls=True) |
790 | + threading.setprofile(self._thread_profile) |
791 | + |
792 | + def stop(self): |
793 | + """Stop profiling. |
794 | + |
795 | + This unhooks from threading and cleans up the profiler, returning |
796 | + the gathered Stats object. |
797 | + |
798 | + :return: A bzrlib.lsprof.Stats object. |
799 | + """ |
800 | + self.p.disable() |
801 | + for pp in self._g_threadmap.values(): |
802 | pp.disable() |
803 | threading.setprofile(None) |
804 | + p = self.p |
805 | + self.p = None |
806 | + threads = {} |
807 | + for tid, pp in self._g_threadmap.items(): |
808 | + threads[tid] = Stats(pp.getstats(), {}) |
809 | + self._g_threadmap = None |
810 | + return Stats(p.getstats(), threads) |
811 | |
812 | - threads = {} |
813 | - for tid, pp in _g_threadmap.items(): |
814 | - threads[tid] = Stats(pp.getstats(), {}) |
815 | - _g_threadmap = {} |
816 | - return ret, Stats(p.getstats(), threads) |
817 | + def _thread_profile(self, f, *args, **kwds): |
818 | + # we lose the first profile point for a new thread in order to |
819 | + # trampoline a new Profile object into place |
820 | + thr = thread.get_ident() |
821 | + self._g_threadmap[thr] = p = Profiler() |
822 | + # this overrides our sys.setprofile hook: |
823 | + p.enable(subcalls=True, builtins=True) |
824 | |
825 | |
826 | class Stats(object): |
827 | |
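The new start()/stop() API above replaces the module-level `_g_threadmap` globals. As a rough illustration of that call shape — not the bzrlib code itself; `ProfilerHandle` is a hypothetical stand-in built on the stdlib `cProfile`, since the real `BzrProfiler` needs the lsprof extension:

```python
import cProfile
import io
import pstats


class ProfilerHandle(object):
    # Hypothetical stand-in mirroring the start()/stop() shape of the new
    # bzrlib.lsprof.BzrProfiler; it uses the stdlib cProfile so it runs
    # without the lsprof extension.

    def start(self):
        # Begin recording all calls made from this thread.
        self._profile = cProfile.Profile()
        self._profile.enable()

    def stop(self):
        # Stop recording and hand back a stats object, dropping our
        # reference so the handle can be started again.
        self._profile.disable()
        stats = pstats.Stats(self._profile, stream=io.StringIO())
        self._profile = None
        return stats


def work():
    return sum(i * i for i in range(1000))


profiler = ProfilerHandle()
profiler.start()
result = work()
stats = profiler.stop()
print(result)
```

The real `BzrProfiler` additionally hooks `threading.setprofile()`, so threads started inside the profiled region are captured as per-thread `Stats` objects attached to the returned stats.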
828 | === modified file 'bzrlib/repofmt/groupcompress_repo.py' |
829 | --- bzrlib/repofmt/groupcompress_repo.py 2009-08-24 19:34:13 +0000 |
830 | +++ bzrlib/repofmt/groupcompress_repo.py 2009-09-01 06:10:24 +0000 |
831 | @@ -932,7 +932,7 @@ |
832 | super(GroupCHKStreamSource, self).__init__(from_repository, to_format) |
833 | self._revision_keys = None |
834 | self._text_keys = None |
835 | - # self._text_fetch_order = 'unordered' |
836 | + self._text_fetch_order = 'groupcompress' |
837 | self._chk_id_roots = None |
838 | self._chk_p_id_roots = None |
839 | |
840 | @@ -949,7 +949,7 @@ |
841 | p_id_roots_set = set() |
842 | source_vf = self.from_repository.inventories |
843 | stream = source_vf.get_record_stream(inventory_keys, |
844 | - 'unordered', True) |
845 | + 'groupcompress', True) |
846 | for record in stream: |
847 | if record.storage_kind == 'absent': |
848 | if allow_absent: |
849 | |
850 | === modified file 'bzrlib/repository.py' |
851 | --- bzrlib/repository.py 2009-08-30 22:02:45 +0000 |
852 | +++ bzrlib/repository.py 2009-09-03 15:26:27 +0000 |
853 | @@ -3844,6 +3844,9 @@ |
854 | possible_trees.append((basis_id, cache[basis_id])) |
855 | basis_id, delta = self._get_delta_for_revision(tree, parent_ids, |
856 | possible_trees) |
857 | + revision = self.source.get_revision(current_revision_id) |
858 | + pending_deltas.append((basis_id, delta, |
859 | + current_revision_id, revision.parent_ids)) |
860 | if self._converting_to_rich_root: |
861 | self._revision_id_to_root_id[current_revision_id] = \ |
862 | tree.get_root_id() |
863 | @@ -3878,9 +3881,6 @@ |
864 | if entry.revision == file_revision: |
865 | texts_possibly_new_in_tree.remove(file_key) |
866 | text_keys.update(texts_possibly_new_in_tree) |
867 | - revision = self.source.get_revision(current_revision_id) |
868 | - pending_deltas.append((basis_id, delta, |
869 | - current_revision_id, revision.parent_ids)) |
870 | pending_revisions.append(revision) |
871 | cache[current_revision_id] = tree |
872 | basis_id = current_revision_id |
873 | |
874 | === modified file 'bzrlib/smart/repository.py' |
875 | --- bzrlib/smart/repository.py 2009-08-14 00:55:42 +0000 |
876 | +++ bzrlib/smart/repository.py 2009-09-02 22:29:55 +0000 |
877 | @@ -519,36 +519,92 @@ |
878 | yield pack_writer.end() |
879 | |
880 | |
881 | +class _ByteStreamDecoder(object): |
882 | + """Helper for _byte_stream_to_stream. |
883 | + |
884 | + Broadly this class has to unwrap two layers of iterators: |
885 | + (type, substream) |
886 | + (substream details) |
887 | + |
888 | + This is complicated by wishing to return type, iterator_for_type, but |
889 | + getting the data for iterator_for_type when we find out type: we can't |
890 | + simply pass a generator down to the NetworkRecordStream parser; instead |
891 | + we keep a little local state to seed each NetworkRecordStream instance, |
892 | + and gather the type that we'll be yielding. |
893 | + |
894 | + :ivar byte_stream: The byte stream being decoded. |
895 | + :ivar stream_decoder: A pack parser used to decode the bytestream |
896 | + :ivar current_type: The current type, used to join adjacent records of the |
897 | + same type into a single stream. |
898 | + :ivar first_bytes: The first bytes to give the next NetworkRecordStream. |
899 | + """ |
900 | + |
901 | + def __init__(self, byte_stream): |
902 | + """Create a _ByteStreamDecoder.""" |
903 | + self.stream_decoder = pack.ContainerPushParser() |
904 | + self.current_type = None |
905 | + self.first_bytes = None |
906 | + self.byte_stream = byte_stream |
907 | + |
908 | + def iter_stream_decoder(self): |
909 | + """Iterate the contents of the pack from stream_decoder.""" |
910 | + # dequeue pending items |
911 | + for record in self.stream_decoder.read_pending_records(): |
912 | + yield record |
913 | + # Pull bytes off the wire, decode them to records, yield those records. |
914 | + for bytes in self.byte_stream: |
915 | + self.stream_decoder.accept_bytes(bytes) |
916 | + for record in self.stream_decoder.read_pending_records(): |
917 | + yield record |
918 | + |
919 | + def iter_substream_bytes(self): |
920 | + if self.first_bytes is not None: |
921 | + yield self.first_bytes |
922 | + # If we run out of pack records, signal the outer layer to stop. |
923 | + self.first_bytes = None |
924 | + for record in self.iter_pack_records: |
925 | + record_names, record_bytes = record |
926 | + record_name, = record_names |
927 | + substream_type = record_name[0] |
928 | + if substream_type != self.current_type: |
929 | + # end of a substream, seed the next substream. |
930 | + self.current_type = substream_type |
931 | + self.first_bytes = record_bytes |
932 | + return |
933 | + yield record_bytes |
934 | + |
935 | + def record_stream(self): |
936 | + """Yield substream_type, substream from the byte stream.""" |
937 | + self.seed_state() |
938 | + # Make and consume sub generators, one per substream type: |
939 | + while self.first_bytes is not None: |
940 | + substream = NetworkRecordStream(self.iter_substream_bytes()) |
941 | + # after substream is fully consumed, self.current_type is set to |
942 | + # the next type, and self.first_bytes is set to the matching bytes. |
943 | + yield self.current_type, substream.read() |
944 | + |
945 | + def seed_state(self): |
946 | + """Prepare the _ByteStreamDecoder to decode from the pack stream.""" |
947 | + # Set a single generator we can use to get data from the pack stream. |
948 | + self.iter_pack_records = self.iter_stream_decoder() |
949 | + # Seed the very first subiterator with content; after this each one |
950 | + # seeds the next. |
951 | + list(self.iter_substream_bytes()) |
952 | + |
953 | + |
954 | def _byte_stream_to_stream(byte_stream): |
955 | """Convert a byte stream into a format and a stream. |
956 | |
957 | :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream. |
958 | :return: (RepositoryFormat, stream_generator) |
959 | """ |
960 | - stream_decoder = pack.ContainerPushParser() |
961 | - def record_stream(): |
962 | - """Closure to return the substreams.""" |
963 | - # May have fully parsed records already. |
964 | - for record in stream_decoder.read_pending_records(): |
965 | - record_names, record_bytes = record |
966 | - record_name, = record_names |
967 | - substream_type = record_name[0] |
968 | - substream = NetworkRecordStream([record_bytes]) |
969 | - yield substream_type, substream.read() |
970 | - for bytes in byte_stream: |
971 | - stream_decoder.accept_bytes(bytes) |
972 | - for record in stream_decoder.read_pending_records(): |
973 | - record_names, record_bytes = record |
974 | - record_name, = record_names |
975 | - substream_type = record_name[0] |
976 | - substream = NetworkRecordStream([record_bytes]) |
977 | - yield substream_type, substream.read() |
978 | + decoder = _ByteStreamDecoder(byte_stream) |
979 | for bytes in byte_stream: |
980 | - stream_decoder.accept_bytes(bytes) |
981 | - for record in stream_decoder.read_pending_records(max=1): |
982 | + decoder.stream_decoder.accept_bytes(bytes) |
983 | + for record in decoder.stream_decoder.read_pending_records(max=1): |
984 | record_names, src_format_name = record |
985 | src_format = network_format_registry.get(src_format_name) |
986 | - return src_format, record_stream() |
987 | + return src_format, decoder.record_stream() |
988 | |
989 | |
990 | class SmartServerRepositoryUnlock(SmartServerRepositoryRequest): |
991 | |
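The seeding dance in `_ByteStreamDecoder` — carrying `current_type`/`first_bytes` across sub-generators so that adjacent records of one type become a single substream — can be sketched in isolation. The names here are illustrative and plain lists stand in for `NetworkRecordStream`; the point is the pattern, not the bzrlib API:

```python
def group_adjacent(records):
    """Sketch of the seeded-generator pattern: yield (type, items) pairs,
    one per run of adjacent records sharing a type. Shared state carries
    the run boundary between sub-generators, just as the decoder carries
    current_type/first_bytes between NetworkRecordStream instances."""
    state = {'current_type': None, 'first': None}
    source = iter(records)

    def substream():
        # Start with the record the previous substream left for us.
        if state['first'] is not None:
            yield state['first']
        state['first'] = None
        for rtype, payload in source:
            if rtype != state['current_type']:
                # End of this run: seed the next substream and stop.
                state['current_type'] = rtype
                state['first'] = payload
                return
            yield payload

    # Seed: consume until the first record sets current_type/first.
    list(substream())
    while state['first'] is not None:
        yield state['current_type'], list(substream())


records = [('texts', 'a'), ('texts', 'b'), ('inventories', 'c'), ('texts', 'd')]
print(list(group_adjacent(records)))
```

In the sketch the tuple in the final `yield` is evaluated left to right, so the run's type is captured before `list(substream())` advances the state to the next run — the same ordering that lets `record_stream()` yield `self.current_type` alongside the substream.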
992 | === modified file 'bzrlib/tests/__init__.py' |
993 | --- bzrlib/tests/__init__.py 2009-08-24 20:30:18 +0000 |
994 | +++ bzrlib/tests/__init__.py 2009-08-28 21:05:31 +0000 |
995 | @@ -28,6 +28,7 @@ |
996 | |
997 | import atexit |
998 | import codecs |
999 | +from copy import copy |
1000 | from cStringIO import StringIO |
1001 | import difflib |
1002 | import doctest |
1003 | @@ -174,17 +175,47 @@ |
1004 | self._overall_start_time = time.time() |
1005 | self._strict = strict |
1006 | |
1007 | - def done(self): |
1008 | - # nb: called stopTestRun in the version of this that Python merged |
1009 | - # upstream, according to lifeless 20090803 |
1010 | + def stopTestRun(self): |
1011 | + run = self.testsRun |
1012 | + actionTaken = "Ran" |
1013 | + stopTime = time.time() |
1014 | + timeTaken = stopTime - self.startTime |
1015 | + self.printErrors() |
1016 | + self.stream.writeln(self.separator2) |
1017 | + self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken, |
1018 | + run, run != 1 and "s" or "", timeTaken)) |
1019 | + self.stream.writeln() |
1020 | + if not self.wasSuccessful(): |
1021 | + self.stream.write("FAILED (") |
1022 | + failed, errored = map(len, (self.failures, self.errors)) |
1023 | + if failed: |
1024 | + self.stream.write("failures=%d" % failed) |
1025 | + if errored: |
1026 | + if failed: self.stream.write(", ") |
1027 | + self.stream.write("errors=%d" % errored) |
1028 | + if self.known_failure_count: |
1029 | + if failed or errored: self.stream.write(", ") |
1030 | + self.stream.write("known_failure_count=%d" % |
1031 | + self.known_failure_count) |
1032 | + self.stream.writeln(")") |
1033 | + else: |
1034 | + if self.known_failure_count: |
1035 | + self.stream.writeln("OK (known_failures=%d)" % |
1036 | + self.known_failure_count) |
1037 | + else: |
1038 | + self.stream.writeln("OK") |
1039 | + if self.skip_count > 0: |
1040 | + skipped = self.skip_count |
1041 | + self.stream.writeln('%d test%s skipped' % |
1042 | + (skipped, skipped != 1 and "s" or "")) |
1043 | + if self.unsupported: |
1044 | + for feature, count in sorted(self.unsupported.items()): |
1045 | + self.stream.writeln("Missing feature '%s' skipped %d tests." % |
1046 | + (feature, count)) |
1047 | if self._strict: |
1048 | ok = self.wasStrictlySuccessful() |
1049 | else: |
1050 | ok = self.wasSuccessful() |
1051 | - if ok: |
1052 | - self.stream.write('tests passed\n') |
1053 | - else: |
1054 | - self.stream.write('tests failed\n') |
1055 | if TestCase._first_thread_leaker_id: |
1056 | self.stream.write( |
1057 | '%s is leaking threads among %d leaking tests.\n' % ( |
1058 | @@ -382,12 +413,12 @@ |
1059 | else: |
1060 | raise errors.BzrError("Unknown whence %r" % whence) |
1061 | |
1062 | - def finished(self): |
1063 | - pass |
1064 | - |
1065 | def report_cleaning_up(self): |
1066 | pass |
1067 | |
1068 | + def startTestRun(self): |
1069 | + self.startTime = time.time() |
1070 | + |
1071 | def report_success(self, test): |
1072 | pass |
1073 | |
1074 | @@ -420,15 +451,14 @@ |
1075 | self.pb.update_latency = 0 |
1076 | self.pb.show_transport_activity = False |
1077 | |
1078 | - def done(self): |
1079 | + def stopTestRun(self): |
1080 | # called when the tests that are going to run have run |
1081 | self.pb.clear() |
1082 | - super(TextTestResult, self).done() |
1083 | - |
1084 | - def finished(self): |
1085 | self.pb.finished() |
1086 | + super(TextTestResult, self).stopTestRun() |
1087 | |
1088 | - def report_starting(self): |
1089 | + def startTestRun(self): |
1090 | + super(TextTestResult, self).startTestRun() |
1091 | self.pb.update('[test 0/%d] Starting' % (self.num_tests)) |
1092 | |
1093 | def printErrors(self): |
1094 | @@ -513,7 +543,8 @@ |
1095 | result = a_string |
1096 | return result.ljust(final_width) |
1097 | |
1098 | - def report_starting(self): |
1099 | + def startTestRun(self): |
1100 | + super(VerboseTestResult, self).startTestRun() |
1101 | self.stream.write('running %d tests...\n' % self.num_tests) |
1102 | |
1103 | def report_test_start(self, test): |
1104 | @@ -577,88 +608,57 @@ |
1105 | descriptions=0, |
1106 | verbosity=1, |
1107 | bench_history=None, |
1108 | - list_only=False, |
1109 | strict=False, |
1110 | + result_decorators=None, |
1111 | ): |
1112 | + """Create a TextTestRunner. |
1113 | + |
1114 | + :param result_decorators: An optional list of decorators to apply |
1115 | + to the result object being used by the runner. Decorators are |
1116 | + applied left to right - the first element in the list is the |
1117 | + innermost decorator. |
1118 | + """ |
1119 | self.stream = unittest._WritelnDecorator(stream) |
1120 | self.descriptions = descriptions |
1121 | self.verbosity = verbosity |
1122 | self._bench_history = bench_history |
1123 | - self.list_only = list_only |
1124 | self._strict = strict |
1125 | + self._result_decorators = result_decorators or [] |
1126 | |
1127 | def run(self, test): |
1128 | "Run the given test case or test suite." |
1129 | - startTime = time.time() |
1130 | if self.verbosity == 1: |
1131 | result_class = TextTestResult |
1132 | elif self.verbosity >= 2: |
1133 | result_class = VerboseTestResult |
1134 | - result = result_class(self.stream, |
1135 | + original_result = result_class(self.stream, |
1136 | self.descriptions, |
1137 | self.verbosity, |
1138 | bench_history=self._bench_history, |
1139 | strict=self._strict, |
1140 | ) |
1141 | - result.stop_early = self.stop_on_failure |
1142 | - result.report_starting() |
1143 | - if self.list_only: |
1144 | - if self.verbosity >= 2: |
1145 | - self.stream.writeln("Listing tests only ...\n") |
1146 | - run = 0 |
1147 | - for t in iter_suite_tests(test): |
1148 | - self.stream.writeln("%s" % (t.id())) |
1149 | - run += 1 |
1150 | - return None |
1151 | - else: |
1152 | - try: |
1153 | - import testtools |
1154 | - except ImportError: |
1155 | - test.run(result) |
1156 | - else: |
1157 | - if isinstance(test, testtools.ConcurrentTestSuite): |
1158 | - # We need to catch bzr specific behaviors |
1159 | - test.run(BZRTransformingResult(result)) |
1160 | - else: |
1161 | - test.run(result) |
1162 | - run = result.testsRun |
1163 | - actionTaken = "Ran" |
1164 | - stopTime = time.time() |
1165 | - timeTaken = stopTime - startTime |
1166 | - result.printErrors() |
1167 | - self.stream.writeln(result.separator2) |
1168 | - self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken, |
1169 | - run, run != 1 and "s" or "", timeTaken)) |
1170 | - self.stream.writeln() |
1171 | - if not result.wasSuccessful(): |
1172 | - self.stream.write("FAILED (") |
1173 | - failed, errored = map(len, (result.failures, result.errors)) |
1174 | - if failed: |
1175 | - self.stream.write("failures=%d" % failed) |
1176 | - if errored: |
1177 | - if failed: self.stream.write(", ") |
1178 | - self.stream.write("errors=%d" % errored) |
1179 | - if result.known_failure_count: |
1180 | - if failed or errored: self.stream.write(", ") |
1181 | - self.stream.write("known_failure_count=%d" % |
1182 | - result.known_failure_count) |
1183 | - self.stream.writeln(")") |
1184 | - else: |
1185 | - if result.known_failure_count: |
1186 | - self.stream.writeln("OK (known_failures=%d)" % |
1187 | - result.known_failure_count) |
1188 | - else: |
1189 | - self.stream.writeln("OK") |
1190 | - if result.skip_count > 0: |
1191 | - skipped = result.skip_count |
1192 | - self.stream.writeln('%d test%s skipped' % |
1193 | - (skipped, skipped != 1 and "s" or "")) |
1194 | - if result.unsupported: |
1195 | - for feature, count in sorted(result.unsupported.items()): |
1196 | - self.stream.writeln("Missing feature '%s' skipped %d tests." % |
1197 | - (feature, count)) |
1198 | - result.finished() |
1199 | - return result |
1200 | + # Signal to result objects that honour the stop-early policy. |
1201 | + original_result.stop_early = self.stop_on_failure |
1202 | + result = original_result |
1203 | + for decorator in self._result_decorators: |
1204 | + result = decorator(result) |
1205 | + result.stop_early = self.stop_on_failure |
1206 | + try: |
1207 | + import testtools |
1208 | + except ImportError: |
1209 | + pass |
1210 | + else: |
1211 | + if isinstance(test, testtools.ConcurrentTestSuite): |
1212 | + # We need to catch bzr specific behaviors |
1213 | + result = BZRTransformingResult(result) |
1214 | + result.startTestRun() |
1215 | + try: |
1216 | + test.run(result) |
1217 | + finally: |
1218 | + result.stopTestRun() |
1219 | + # higher level code uses our extended protocol to determine |
1220 | + # what exit code to give. |
1221 | + return original_result |
1222 | |
1223 | |
1224 | def iter_suite_tests(suite): |
1225 | @@ -928,6 +928,18 @@ |
1226 | def _lock_broken(self, result): |
1227 | self._lock_actions.append(('broken', result)) |
1228 | |
1229 | + def start_server(self, transport_server, backing_server=None): |
1230 | + """Start transport_server for this test. |
1231 | + |
1232 | + This starts the server, registers a cleanup for it and permits the |
1233 | + server's urls to be used. |
1234 | + """ |
1235 | + if backing_server is None: |
1236 | + transport_server.setUp() |
1237 | + else: |
1238 | + transport_server.setUp(backing_server) |
1239 | + self.addCleanup(transport_server.tearDown) |
1240 | + |
1241 | def _ndiff_strings(self, a, b): |
1242 | """Return ndiff between two strings containing lines. |
1243 | |
1244 | @@ -2067,13 +2079,12 @@ |
1245 | if self.__readonly_server is None: |
1246 | if self.transport_readonly_server is None: |
1247 | # readonly decorator requested |
1248 | - # bring up the server |
1249 | self.__readonly_server = ReadonlyServer() |
1250 | - self.__readonly_server.setUp(self.get_vfs_only_server()) |
1251 | else: |
1252 | + # explicit readonly transport. |
1253 | self.__readonly_server = self.create_transport_readonly_server() |
1254 | - self.__readonly_server.setUp(self.get_vfs_only_server()) |
1255 | - self.addCleanup(self.__readonly_server.tearDown) |
1256 | + self.start_server(self.__readonly_server, |
1257 | + self.get_vfs_only_server()) |
1258 | return self.__readonly_server |
1259 | |
1260 | def get_readonly_url(self, relpath=None): |
1261 | @@ -2098,8 +2109,7 @@ |
1262 | """ |
1263 | if self.__vfs_server is None: |
1264 | self.__vfs_server = MemoryServer() |
1265 | - self.__vfs_server.setUp() |
1266 | - self.addCleanup(self.__vfs_server.tearDown) |
1267 | + self.start_server(self.__vfs_server) |
1268 | return self.__vfs_server |
1269 | |
1270 | def get_server(self): |
1271 | @@ -2112,19 +2122,13 @@ |
1272 | then the self.get_vfs_server is returned. |
1273 | """ |
1274 | if self.__server is None: |
1275 | - if self.transport_server is None or self.transport_server is self.vfs_transport_factory: |
1276 | - return self.get_vfs_only_server() |
1277 | + if (self.transport_server is None or self.transport_server is |
1278 | + self.vfs_transport_factory): |
1279 | + self.__server = self.get_vfs_only_server() |
1280 | else: |
1281 | # bring up a decorated means of access to the vfs only server. |
1282 | self.__server = self.transport_server() |
1283 | - try: |
1284 | - self.__server.setUp(self.get_vfs_only_server()) |
1285 | - except TypeError, e: |
1286 | - # This should never happen; the try:Except here is to assist |
1287 | - # developers having to update code rather than seeing an |
1288 | - # uninformative TypeError. |
1289 | - raise Exception, "Old server API in use: %s, %s" % (self.__server, e) |
1290 | - self.addCleanup(self.__server.tearDown) |
1291 | + self.start_server(self.__server, self.get_vfs_only_server()) |
1292 | return self.__server |
1293 | |
1294 | def _adjust_url(self, base, relpath): |
1295 | @@ -2263,9 +2267,8 @@ |
1296 | |
1297 | def make_smart_server(self, path): |
1298 | smart_server = server.SmartTCPServer_for_testing() |
1299 | - smart_server.setUp(self.get_server()) |
1300 | + self.start_server(smart_server, self.get_server()) |
1301 | remote_transport = get_transport(smart_server.get_url()).clone(path) |
1302 | - self.addCleanup(smart_server.tearDown) |
1303 | return remote_transport |
1304 | |
1305 | def make_branch_and_memory_tree(self, relpath, format=None): |
1306 | @@ -2472,8 +2475,7 @@ |
1307 | """ |
1308 | if self.__vfs_server is None: |
1309 | self.__vfs_server = self.vfs_transport_factory() |
1310 | - self.__vfs_server.setUp() |
1311 | - self.addCleanup(self.__vfs_server.tearDown) |
1312 | + self.start_server(self.__vfs_server) |
1313 | return self.__vfs_server |
1314 | |
1315 | def make_branch_and_tree(self, relpath, format=None): |
1316 | @@ -2486,6 +2488,15 @@ |
1317 | repository will also be accessed locally. Otherwise a lightweight |
1318 | checkout is created and returned. |
1319 | |
1320 | + We do this because we can't physically create a tree in the local |
1321 | + path, with a branch reference to the transport_factory url, and |
1322 | + a branch + repository in the vfs_transport, unless the vfs_transport |
1323 | + namespace is distinct from the local disk - the two branch objects |
1324 | + would collide. While we could construct a tree with its branch object |
1325 | + pointing at the transport_factory transport in memory, reopening it |
1326 | + would behave unexpectedly, and has in the past caused testing bugs |
1327 | + when we tried to do it that way. |
1328 | + |
1329 | :param format: The BzrDirFormat. |
1330 | :returns: the WorkingTree. |
1331 | """ |
1332 | @@ -2762,7 +2773,9 @@ |
1333 | strict=False, |
1334 | runner_class=None, |
1335 | suite_decorators=None, |
1336 | - stream=None): |
1337 | + stream=None, |
1338 | + result_decorators=None, |
1339 | + ): |
1340 | """Run a test suite for bzr selftest. |
1341 | |
1342 | :param runner_class: The class of runner to use. Must support the |
1343 | @@ -2783,8 +2796,8 @@ |
1344 | descriptions=0, |
1345 | verbosity=verbosity, |
1346 | bench_history=bench_history, |
1347 | - list_only=list_only, |
1348 | strict=strict, |
1349 | + result_decorators=result_decorators, |
1350 | ) |
1351 | runner.stop_on_failure=stop_on_failure |
1352 | # built in decorator factories: |
1353 | @@ -2805,10 +2818,15 @@ |
1354 | decorators.append(CountingDecorator) |
1355 | for decorator in decorators: |
1356 | suite = decorator(suite) |
1357 | - result = runner.run(suite) |
1358 | if list_only: |
1359 | + # Done after test suite decoration to allow randomisation etc |
1360 | + # to take effect, though that is of marginal benefit. |
1361 | + if verbosity >= 2: |
1362 | + stream.write("Listing tests only ...\n") |
1363 | + for t in iter_suite_tests(suite): |
1364 | + stream.write("%s\n" % (t.id())) |
1365 | return True |
1366 | - result.done() |
1367 | + result = runner.run(suite) |
1368 | if strict: |
1369 | return result.wasStrictlySuccessful() |
1370 | else: |
1371 | @@ -3131,7 +3149,7 @@ |
1372 | return result |
1373 | |
1374 | |
1375 | -class BZRTransformingResult(unittest.TestResult): |
1376 | +class ForwardingResult(unittest.TestResult): |
1377 | |
1378 | def __init__(self, target): |
1379 | unittest.TestResult.__init__(self) |
1380 | @@ -3143,6 +3161,27 @@ |
1381 | def stopTest(self, test): |
1382 | self.result.stopTest(test) |
1383 | |
1384 | + def startTestRun(self): |
1385 | + self.result.startTestRun() |
1386 | + |
1387 | + def stopTestRun(self): |
1388 | + self.result.stopTestRun() |
1389 | + |
1390 | + def addSkip(self, test, reason): |
1391 | + self.result.addSkip(test, reason) |
1392 | + |
1393 | + def addSuccess(self, test): |
1394 | + self.result.addSuccess(test) |
1395 | + |
1396 | + def addError(self, test, err): |
1397 | + self.result.addError(test, err) |
1398 | + |
1399 | + def addFailure(self, test, err): |
1400 | + self.result.addFailure(test, err) |
1401 | + |
1402 | + |
1403 | +class BZRTransformingResult(ForwardingResult): |
1404 | + |
1405 | def addError(self, test, err): |
1406 | feature = self._error_looks_like('UnavailableFeature: ', err) |
1407 | if feature is not None: |
1408 | @@ -3158,12 +3197,6 @@ |
1409 | else: |
1410 | self.result.addFailure(test, err) |
1411 | |
1412 | - def addSkip(self, test, reason): |
1413 | - self.result.addSkip(test, reason) |
1414 | - |
1415 | - def addSuccess(self, test): |
1416 | - self.result.addSuccess(test) |
1417 | - |
1418 | def _error_looks_like(self, prefix, err): |
1419 | """Deserialize exception and returns the stringify value.""" |
1420 | import subunit |
1421 | @@ -3181,6 +3214,38 @@ |
1422 | return value |
1423 | |
1424 | |
1425 | +class ProfileResult(ForwardingResult): |
1426 | + """Generate profiling data for all activity between start and success. |
1427 | + |
1428 | + The profile data is appended to the test's _benchcalls attribute and can |
1429 | + be accessed by the forwarded-to TestResult. |
1430 | + |
1431 | + While it might be cleaner to accumulate this in stopTest, addSuccess is |
1432 | + where our existing output support for lsprof is, and this class aims to |
1433 | + fit in with that: while it could be moved it's not necessary to accomplish |
1434 | + test profiling, nor would it be dramatically cleaner. |
1435 | + """ |
1436 | + |
1437 | + def startTest(self, test): |
1438 | + self.profiler = bzrlib.lsprof.BzrProfiler() |
1439 | + self.profiler.start() |
1440 | + ForwardingResult.startTest(self, test) |
1441 | + |
1442 | + def addSuccess(self, test): |
1443 | + stats = self.profiler.stop() |
1444 | + try: |
1445 | + calls = test._benchcalls |
1446 | + except AttributeError: |
1447 | + test._benchcalls = [] |
1448 | + calls = test._benchcalls |
1449 | + calls.append(((test.id(), "", ""), stats)) |
1450 | + ForwardingResult.addSuccess(self, test) |
1451 | + |
1452 | + def stopTest(self, test): |
1453 | + ForwardingResult.stopTest(self, test) |
1454 | + self.profiler = None |
1455 | + |
1456 | + |
1457 | # Controlled by "bzr selftest -E=..." option |
1458 | # Currently supported: |
1459 | # -Eallow_debug Will no longer clear debug.debug_flags() so it |
1460 | @@ -3208,6 +3273,7 @@ |
1461 | runner_class=None, |
1462 | suite_decorators=None, |
1463 | stream=None, |
1464 | + lsprof_tests=False, |
1465 | ): |
1466 | """Run the whole test suite under the enhanced runner""" |
1467 | # XXX: Very ugly way to do this... |
1468 | @@ -3242,6 +3308,9 @@ |
1469 | if starting_with: |
1470 | # But always filter as requested. |
1471 | suite = filter_suite_by_id_startswith(suite, starting_with) |
1472 | + result_decorators = [] |
1473 | + if lsprof_tests: |
1474 | + result_decorators.append(ProfileResult) |
1475 | return run_suite(suite, 'testbzr', verbose=verbose, pattern=pattern, |
1476 | stop_on_failure=stop_on_failure, |
1477 | transport=transport, |
1478 | @@ -3255,6 +3324,7 @@ |
1479 | runner_class=runner_class, |
1480 | suite_decorators=suite_decorators, |
1481 | stream=stream, |
1482 | + result_decorators=result_decorators, |
1483 | ) |
1484 | finally: |
1485 | default_transport = old_transport |
1486 | @@ -3416,6 +3486,206 @@ |
1487 | test_prefix_alias_registry.register('bp', 'bzrlib.plugins') |
1488 | |
1489 | |
1490 | +def _test_suite_testmod_names(): |
1491 | + """Return the standard list of test module names to test.""" |
1492 | + return [ |
1493 | + 'bzrlib.doc', |
1494 | + 'bzrlib.tests.blackbox', |
1495 | + 'bzrlib.tests.commands', |
1496 | + 'bzrlib.tests.per_branch', |
1497 | + 'bzrlib.tests.per_bzrdir', |
1498 | + 'bzrlib.tests.per_interrepository', |
1499 | + 'bzrlib.tests.per_intertree', |
1500 | + 'bzrlib.tests.per_inventory', |
1501 | + 'bzrlib.tests.per_interbranch', |
1502 | + 'bzrlib.tests.per_lock', |
1503 | + 'bzrlib.tests.per_transport', |
1504 | + 'bzrlib.tests.per_tree', |
1505 | + 'bzrlib.tests.per_pack_repository', |
1506 | + 'bzrlib.tests.per_repository', |
1507 | + 'bzrlib.tests.per_repository_chk', |
1508 | + 'bzrlib.tests.per_repository_reference', |
1509 | + 'bzrlib.tests.per_versionedfile', |
1510 | + 'bzrlib.tests.per_workingtree', |
1511 | + 'bzrlib.tests.test__annotator', |
1512 | + 'bzrlib.tests.test__chk_map', |
1513 | + 'bzrlib.tests.test__dirstate_helpers', |
1514 | + 'bzrlib.tests.test__groupcompress', |
1515 | + 'bzrlib.tests.test__known_graph', |
1516 | + 'bzrlib.tests.test__rio', |
1517 | + 'bzrlib.tests.test__walkdirs_win32', |
1518 | + 'bzrlib.tests.test_ancestry', |
1519 | + 'bzrlib.tests.test_annotate', |
1520 | + 'bzrlib.tests.test_api', |
1521 | + 'bzrlib.tests.test_atomicfile', |
1522 | + 'bzrlib.tests.test_bad_files', |
1523 | + 'bzrlib.tests.test_bencode', |
1524 | + 'bzrlib.tests.test_bisect_multi', |
1525 | + 'bzrlib.tests.test_branch', |
1526 | + 'bzrlib.tests.test_branchbuilder', |
1527 | + 'bzrlib.tests.test_btree_index', |
1528 | + 'bzrlib.tests.test_bugtracker', |
1529 | + 'bzrlib.tests.test_bundle', |
1530 | + 'bzrlib.tests.test_bzrdir', |
1531 | + 'bzrlib.tests.test__chunks_to_lines', |
1532 | + 'bzrlib.tests.test_cache_utf8', |
1533 | + 'bzrlib.tests.test_chk_map', |
1534 | + 'bzrlib.tests.test_chk_serializer', |
1535 | + 'bzrlib.tests.test_chunk_writer', |
1536 | + 'bzrlib.tests.test_clean_tree', |
1537 | + 'bzrlib.tests.test_commands', |
1538 | + 'bzrlib.tests.test_commit', |
1539 | + 'bzrlib.tests.test_commit_merge', |
1540 | + 'bzrlib.tests.test_config', |
1541 | + 'bzrlib.tests.test_conflicts', |
1542 | + 'bzrlib.tests.test_counted_lock', |
1543 | + 'bzrlib.tests.test_crash', |
1544 | + 'bzrlib.tests.test_decorators', |
1545 | + 'bzrlib.tests.test_delta', |
1546 | + 'bzrlib.tests.test_debug', |
1547 | + 'bzrlib.tests.test_deprecated_graph', |
1548 | + 'bzrlib.tests.test_diff', |
1549 | + 'bzrlib.tests.test_directory_service', |
1550 | + 'bzrlib.tests.test_dirstate', |
1551 | + 'bzrlib.tests.test_email_message', |
1552 | + 'bzrlib.tests.test_eol_filters', |
1553 | + 'bzrlib.tests.test_errors', |
1554 | + 'bzrlib.tests.test_export', |
1555 | + 'bzrlib.tests.test_extract', |
1556 | + 'bzrlib.tests.test_fetch', |
1557 | + 'bzrlib.tests.test_fifo_cache', |
1558 | + 'bzrlib.tests.test_filters', |
1559 | + 'bzrlib.tests.test_ftp_transport', |
1560 | + 'bzrlib.tests.test_foreign', |
1561 | + 'bzrlib.tests.test_generate_docs', |
1562 | + 'bzrlib.tests.test_generate_ids', |
1563 | + 'bzrlib.tests.test_globbing', |
1564 | + 'bzrlib.tests.test_gpg', |
1565 | + 'bzrlib.tests.test_graph', |
1566 | + 'bzrlib.tests.test_groupcompress', |
1567 | + 'bzrlib.tests.test_hashcache', |
1568 | + 'bzrlib.tests.test_help', |
1569 | + 'bzrlib.tests.test_hooks', |
1570 | + 'bzrlib.tests.test_http', |
1571 | + 'bzrlib.tests.test_http_response', |
1572 | + 'bzrlib.tests.test_https_ca_bundle', |
1573 | + 'bzrlib.tests.test_identitymap', |
1574 | + 'bzrlib.tests.test_ignores', |
1575 | + 'bzrlib.tests.test_index', |
1576 | + 'bzrlib.tests.test_info', |
1577 | + 'bzrlib.tests.test_inv', |
1578 | + 'bzrlib.tests.test_inventory_delta', |
1579 | + 'bzrlib.tests.test_knit', |
1580 | + 'bzrlib.tests.test_lazy_import', |
1581 | + 'bzrlib.tests.test_lazy_regex', |
1582 | + 'bzrlib.tests.test_lock', |
1583 | + 'bzrlib.tests.test_lockable_files', |
1584 | + 'bzrlib.tests.test_lockdir', |
1585 | + 'bzrlib.tests.test_log', |
1586 | + 'bzrlib.tests.test_lru_cache', |
1587 | + 'bzrlib.tests.test_lsprof', |
1588 | + 'bzrlib.tests.test_mail_client', |
1589 | + 'bzrlib.tests.test_memorytree', |
1590 | + 'bzrlib.tests.test_merge', |
1591 | + 'bzrlib.tests.test_merge3', |
1592 | + 'bzrlib.tests.test_merge_core', |
1593 | + 'bzrlib.tests.test_merge_directive', |
1594 | + 'bzrlib.tests.test_missing', |
1595 | + 'bzrlib.tests.test_msgeditor', |
1596 | + 'bzrlib.tests.test_multiparent', |
1597 | + 'bzrlib.tests.test_mutabletree', |
1598 | + 'bzrlib.tests.test_nonascii', |
1599 | + 'bzrlib.tests.test_options', |
1600 | + 'bzrlib.tests.test_osutils', |
1601 | + 'bzrlib.tests.test_osutils_encodings', |
1602 | + 'bzrlib.tests.test_pack', |
1603 | + 'bzrlib.tests.test_patch', |
1604 | + 'bzrlib.tests.test_patches', |
1605 | + 'bzrlib.tests.test_permissions', |
1606 | + 'bzrlib.tests.test_plugins', |
1607 | + 'bzrlib.tests.test_progress', |
1608 | + 'bzrlib.tests.test_read_bundle', |
1609 | + 'bzrlib.tests.test_reconcile', |
1610 | + 'bzrlib.tests.test_reconfigure', |
1611 | + 'bzrlib.tests.test_registry', |
1612 | + 'bzrlib.tests.test_remote', |
1613 | + 'bzrlib.tests.test_rename_map', |
1614 | + 'bzrlib.tests.test_repository', |
1615 | + 'bzrlib.tests.test_revert', |
1616 | + 'bzrlib.tests.test_revision', |
1617 | + 'bzrlib.tests.test_revisionspec', |
1618 | + 'bzrlib.tests.test_revisiontree', |
1619 | + 'bzrlib.tests.test_rio', |
1620 | + 'bzrlib.tests.test_rules', |
1621 | + 'bzrlib.tests.test_sampler', |
1622 | + 'bzrlib.tests.test_selftest', |
1623 | + 'bzrlib.tests.test_serializer', |
1624 | + 'bzrlib.tests.test_setup', |
1625 | + 'bzrlib.tests.test_sftp_transport', |
1626 | + 'bzrlib.tests.test_shelf', |
1627 | + 'bzrlib.tests.test_shelf_ui', |
1628 | + 'bzrlib.tests.test_smart', |
1629 | + 'bzrlib.tests.test_smart_add', |
1630 | + 'bzrlib.tests.test_smart_request', |
1631 | + 'bzrlib.tests.test_smart_transport', |
1632 | + 'bzrlib.tests.test_smtp_connection', |
1633 | + 'bzrlib.tests.test_source', |
1634 | + 'bzrlib.tests.test_ssh_transport', |
1635 | + 'bzrlib.tests.test_status', |
1636 | + 'bzrlib.tests.test_store', |
1637 | + 'bzrlib.tests.test_strace', |
1638 | + 'bzrlib.tests.test_subsume', |
1639 | + 'bzrlib.tests.test_switch', |
1640 | + 'bzrlib.tests.test_symbol_versioning', |
1641 | + 'bzrlib.tests.test_tag', |
1642 | + 'bzrlib.tests.test_testament', |
1643 | + 'bzrlib.tests.test_textfile', |
1644 | + 'bzrlib.tests.test_textmerge', |
1645 | + 'bzrlib.tests.test_timestamp', |
1646 | + 'bzrlib.tests.test_trace', |
1647 | + 'bzrlib.tests.test_transactions', |
1648 | + 'bzrlib.tests.test_transform', |
1649 | + 'bzrlib.tests.test_transport', |
1650 | + 'bzrlib.tests.test_transport_log', |
1651 | + 'bzrlib.tests.test_tree', |
1652 | + 'bzrlib.tests.test_treebuilder', |
1653 | + 'bzrlib.tests.test_tsort', |
1654 | + 'bzrlib.tests.test_tuned_gzip', |
1655 | + 'bzrlib.tests.test_ui', |
1656 | + 'bzrlib.tests.test_uncommit', |
1657 | + 'bzrlib.tests.test_upgrade', |
1658 | + 'bzrlib.tests.test_upgrade_stacked', |
1659 | + 'bzrlib.tests.test_urlutils', |
1660 | + 'bzrlib.tests.test_version', |
1661 | + 'bzrlib.tests.test_version_info', |
1662 | + 'bzrlib.tests.test_weave', |
1663 | + 'bzrlib.tests.test_whitebox', |
1664 | + 'bzrlib.tests.test_win32utils', |
1665 | + 'bzrlib.tests.test_workingtree', |
1666 | + 'bzrlib.tests.test_workingtree_4', |
1667 | + 'bzrlib.tests.test_wsgi', |
1668 | + 'bzrlib.tests.test_xml', |
1669 | + ] |
1670 | + |
1671 | + |
1672 | +def _test_suite_modules_to_doctest(): |
1673 | + """Return the list of modules to doctest.""" |
1674 | + return [ |
1675 | + 'bzrlib', |
1676 | + 'bzrlib.branchbuilder', |
1677 | + 'bzrlib.export', |
1678 | + 'bzrlib.inventory', |
1679 | + 'bzrlib.iterablefile', |
1680 | + 'bzrlib.lockdir', |
1681 | + 'bzrlib.merge3', |
1682 | + 'bzrlib.option', |
1683 | + 'bzrlib.symbol_versioning', |
1684 | + 'bzrlib.tests', |
1685 | + 'bzrlib.timestamp', |
1686 | + 'bzrlib.version_info_formats.format_custom', |
1687 | + ] |
1688 | + |
1689 | + |
1690 | def test_suite(keep_only=None, starting_with=None): |
1691 | """Build and return TestSuite for the whole of bzrlib. |
1692 | |
1693 | @@ -3427,184 +3697,6 @@ |
1694 | This function can be replaced if you need to change the default test |
1695 | suite on a global basis, but it is not encouraged. |
1696 | """ |
1697 | - testmod_names = [ |
1698 | - 'bzrlib.doc', |
1699 | - 'bzrlib.tests.blackbox', |
1700 | - 'bzrlib.tests.commands', |
1701 | - 'bzrlib.tests.per_branch', |
1702 | - 'bzrlib.tests.per_bzrdir', |
1703 | - 'bzrlib.tests.per_interrepository', |
1704 | - 'bzrlib.tests.per_intertree', |
1705 | - 'bzrlib.tests.per_inventory', |
1706 | - 'bzrlib.tests.per_interbranch', |
1707 | - 'bzrlib.tests.per_lock', |
1708 | - 'bzrlib.tests.per_transport', |
1709 | - 'bzrlib.tests.per_tree', |
1710 | - 'bzrlib.tests.per_pack_repository', |
1711 | - 'bzrlib.tests.per_repository', |
1712 | - 'bzrlib.tests.per_repository_chk', |
1713 | - 'bzrlib.tests.per_repository_reference', |
1714 | - 'bzrlib.tests.per_versionedfile', |
1715 | - 'bzrlib.tests.per_workingtree', |
1716 | - 'bzrlib.tests.test__annotator', |
1717 | - 'bzrlib.tests.test__chk_map', |
1718 | - 'bzrlib.tests.test__dirstate_helpers', |
1719 | - 'bzrlib.tests.test__groupcompress', |
1720 | - 'bzrlib.tests.test__known_graph', |
1721 | - 'bzrlib.tests.test__rio', |
1722 | - 'bzrlib.tests.test__walkdirs_win32', |
1723 | - 'bzrlib.tests.test_ancestry', |
1724 | - 'bzrlib.tests.test_annotate', |
1725 | - 'bzrlib.tests.test_api', |
1726 | - 'bzrlib.tests.test_atomicfile', |
1727 | - 'bzrlib.tests.test_bad_files', |
1728 | - 'bzrlib.tests.test_bencode', |
1729 | - 'bzrlib.tests.test_bisect_multi', |
1730 | - 'bzrlib.tests.test_branch', |
1731 | - 'bzrlib.tests.test_branchbuilder', |
1732 | - 'bzrlib.tests.test_btree_index', |
1733 | - 'bzrlib.tests.test_bugtracker', |
1734 | - 'bzrlib.tests.test_bundle', |
1735 | - 'bzrlib.tests.test_bzrdir', |
1736 | - 'bzrlib.tests.test__chunks_to_lines', |
1737 | - 'bzrlib.tests.test_cache_utf8', |
1738 | - 'bzrlib.tests.test_chk_map', |
1739 | - 'bzrlib.tests.test_chk_serializer', |
1740 | - 'bzrlib.tests.test_chunk_writer', |
1741 | - 'bzrlib.tests.test_clean_tree', |
1742 | - 'bzrlib.tests.test_commands', |
1743 | - 'bzrlib.tests.test_commit', |
1744 | - 'bzrlib.tests.test_commit_merge', |
1745 | - 'bzrlib.tests.test_config', |
1746 | - 'bzrlib.tests.test_conflicts', |
1747 | - 'bzrlib.tests.test_counted_lock', |
1748 | - 'bzrlib.tests.test_crash', |
1749 | - 'bzrlib.tests.test_decorators', |
1750 | - 'bzrlib.tests.test_delta', |
1751 | - 'bzrlib.tests.test_debug', |
1752 | - 'bzrlib.tests.test_deprecated_graph', |
1753 | - 'bzrlib.tests.test_diff', |
1754 | - 'bzrlib.tests.test_directory_service', |
1755 | - 'bzrlib.tests.test_dirstate', |
1756 | - 'bzrlib.tests.test_email_message', |
1757 | - 'bzrlib.tests.test_eol_filters', |
1758 | - 'bzrlib.tests.test_errors', |
1759 | - 'bzrlib.tests.test_export', |
1760 | - 'bzrlib.tests.test_extract', |
1761 | - 'bzrlib.tests.test_fetch', |
1762 | - 'bzrlib.tests.test_fifo_cache', |
1763 | - 'bzrlib.tests.test_filters', |
1764 | - 'bzrlib.tests.test_ftp_transport', |
1765 | - 'bzrlib.tests.test_foreign', |
1766 | - 'bzrlib.tests.test_generate_docs', |
1767 | - 'bzrlib.tests.test_generate_ids', |
1768 | - 'bzrlib.tests.test_globbing', |
1769 | - 'bzrlib.tests.test_gpg', |
1770 | - 'bzrlib.tests.test_graph', |
1771 | - 'bzrlib.tests.test_groupcompress', |
1772 | - 'bzrlib.tests.test_hashcache', |
1773 | - 'bzrlib.tests.test_help', |
1774 | - 'bzrlib.tests.test_hooks', |
1775 | - 'bzrlib.tests.test_http', |
1776 | - 'bzrlib.tests.test_http_response', |
1777 | - 'bzrlib.tests.test_https_ca_bundle', |
1778 | - 'bzrlib.tests.test_identitymap', |
1779 | - 'bzrlib.tests.test_ignores', |
1780 | - 'bzrlib.tests.test_index', |
1781 | - 'bzrlib.tests.test_info', |
1782 | - 'bzrlib.tests.test_inv', |
1783 | - 'bzrlib.tests.test_inventory_delta', |
1784 | - 'bzrlib.tests.test_knit', |
1785 | - 'bzrlib.tests.test_lazy_import', |
1786 | - 'bzrlib.tests.test_lazy_regex', |
1787 | - 'bzrlib.tests.test_lock', |
1788 | - 'bzrlib.tests.test_lockable_files', |
1789 | - 'bzrlib.tests.test_lockdir', |
1790 | - 'bzrlib.tests.test_log', |
1791 | - 'bzrlib.tests.test_lru_cache', |
1792 | - 'bzrlib.tests.test_lsprof', |
1793 | - 'bzrlib.tests.test_mail_client', |
1794 | - 'bzrlib.tests.test_memorytree', |
1795 | - 'bzrlib.tests.test_merge', |
1796 | - 'bzrlib.tests.test_merge3', |
1797 | - 'bzrlib.tests.test_merge_core', |
1798 | - 'bzrlib.tests.test_merge_directive', |
1799 | - 'bzrlib.tests.test_missing', |
1800 | - 'bzrlib.tests.test_msgeditor', |
1801 | - 'bzrlib.tests.test_multiparent', |
1802 | - 'bzrlib.tests.test_mutabletree', |
1803 | - 'bzrlib.tests.test_nonascii', |
1804 | - 'bzrlib.tests.test_options', |
1805 | - 'bzrlib.tests.test_osutils', |
1806 | - 'bzrlib.tests.test_osutils_encodings', |
1807 | - 'bzrlib.tests.test_pack', |
1808 | - 'bzrlib.tests.test_patch', |
1809 | - 'bzrlib.tests.test_patches', |
1810 | - 'bzrlib.tests.test_permissions', |
1811 | - 'bzrlib.tests.test_plugins', |
1812 | - 'bzrlib.tests.test_progress', |
1813 | - 'bzrlib.tests.test_read_bundle', |
1814 | - 'bzrlib.tests.test_reconcile', |
1815 | - 'bzrlib.tests.test_reconfigure', |
1816 | - 'bzrlib.tests.test_registry', |
1817 | - 'bzrlib.tests.test_remote', |
1818 | - 'bzrlib.tests.test_rename_map', |
1819 | - 'bzrlib.tests.test_repository', |
1820 | - 'bzrlib.tests.test_revert', |
1821 | - 'bzrlib.tests.test_revision', |
1822 | - 'bzrlib.tests.test_revisionspec', |
1823 | - 'bzrlib.tests.test_revisiontree', |
1824 | - 'bzrlib.tests.test_rio', |
1825 | - 'bzrlib.tests.test_rules', |
1826 | - 'bzrlib.tests.test_sampler', |
1827 | - 'bzrlib.tests.test_selftest', |
1828 | - 'bzrlib.tests.test_serializer', |
1829 | - 'bzrlib.tests.test_setup', |
1830 | - 'bzrlib.tests.test_sftp_transport', |
1831 | - 'bzrlib.tests.test_shelf', |
1832 | - 'bzrlib.tests.test_shelf_ui', |
1833 | - 'bzrlib.tests.test_smart', |
1834 | - 'bzrlib.tests.test_smart_add', |
1835 | - 'bzrlib.tests.test_smart_request', |
1836 | - 'bzrlib.tests.test_smart_transport', |
1837 | - 'bzrlib.tests.test_smtp_connection', |
1838 | - 'bzrlib.tests.test_source', |
1839 | - 'bzrlib.tests.test_ssh_transport', |
1840 | - 'bzrlib.tests.test_status', |
1841 | - 'bzrlib.tests.test_store', |
1842 | - 'bzrlib.tests.test_strace', |
1843 | - 'bzrlib.tests.test_subsume', |
1844 | - 'bzrlib.tests.test_switch', |
1845 | - 'bzrlib.tests.test_symbol_versioning', |
1846 | - 'bzrlib.tests.test_tag', |
1847 | - 'bzrlib.tests.test_testament', |
1848 | - 'bzrlib.tests.test_textfile', |
1849 | - 'bzrlib.tests.test_textmerge', |
1850 | - 'bzrlib.tests.test_timestamp', |
1851 | - 'bzrlib.tests.test_trace', |
1852 | - 'bzrlib.tests.test_transactions', |
1853 | - 'bzrlib.tests.test_transform', |
1854 | - 'bzrlib.tests.test_transport', |
1855 | - 'bzrlib.tests.test_transport_log', |
1856 | - 'bzrlib.tests.test_tree', |
1857 | - 'bzrlib.tests.test_treebuilder', |
1858 | - 'bzrlib.tests.test_tsort', |
1859 | - 'bzrlib.tests.test_tuned_gzip', |
1860 | - 'bzrlib.tests.test_ui', |
1861 | - 'bzrlib.tests.test_uncommit', |
1862 | - 'bzrlib.tests.test_upgrade', |
1863 | - 'bzrlib.tests.test_upgrade_stacked', |
1864 | - 'bzrlib.tests.test_urlutils', |
1865 | - 'bzrlib.tests.test_version', |
1866 | - 'bzrlib.tests.test_version_info', |
1867 | - 'bzrlib.tests.test_weave', |
1868 | - 'bzrlib.tests.test_whitebox', |
1869 | - 'bzrlib.tests.test_win32utils', |
1870 | - 'bzrlib.tests.test_workingtree', |
1871 | - 'bzrlib.tests.test_workingtree_4', |
1872 | - 'bzrlib.tests.test_wsgi', |
1873 | - 'bzrlib.tests.test_xml', |
1874 | - ] |
1875 | |
1876 | loader = TestUtil.TestLoader() |
1877 | |
1878 | @@ -3639,24 +3731,9 @@ |
1879 | suite = loader.suiteClass() |
1880 | |
1881 | # modules building their suite with loadTestsFromModuleNames |
1882 | - suite.addTest(loader.loadTestsFromModuleNames(testmod_names)) |
1883 | - |
1884 | - modules_to_doctest = [ |
1885 | - 'bzrlib', |
1886 | - 'bzrlib.branchbuilder', |
1887 | - 'bzrlib.export', |
1888 | - 'bzrlib.inventory', |
1889 | - 'bzrlib.iterablefile', |
1890 | - 'bzrlib.lockdir', |
1891 | - 'bzrlib.merge3', |
1892 | - 'bzrlib.option', |
1893 | - 'bzrlib.symbol_versioning', |
1894 | - 'bzrlib.tests', |
1895 | - 'bzrlib.timestamp', |
1896 | - 'bzrlib.version_info_formats.format_custom', |
1897 | - ] |
1898 | - |
1899 | - for mod in modules_to_doctest: |
1900 | + suite.addTest(loader.loadTestsFromModuleNames(_test_suite_testmod_names())) |
1901 | + |
1902 | + for mod in _test_suite_modules_to_doctest(): |
1903 | if not interesting_module(mod): |
1904 | # No tests to keep here, move along |
1905 | continue |
1906 | @@ -3803,8 +3880,7 @@ |
1907 | :param new_id: The id to assign to it. |
1908 | :return: The new test. |
1909 | """ |
1910 | - from copy import deepcopy |
1911 | - new_test = deepcopy(test) |
1912 | + new_test = copy(test) |
1913 | new_test.id = lambda: new_id |
1914 | return new_test |
1915 | |
1916 | |
1917 | === modified file 'bzrlib/tests/blackbox/test_filesystem_cicp.py' |
1918 | --- bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-04-06 08:17:53 +0000 |
1919 | +++ bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-08-26 09:06:02 +0000 |
1920 | @@ -216,12 +216,19 @@ |
1921 | |
1922 | |
1923 | class TestMisc(TestCICPBase): |
1924 | + |
1925 | def test_status(self): |
1926 | wt = self._make_mixed_case_tree() |
1927 | self.run_bzr('add') |
1928 | |
1929 | - self.check_output('added:\n CamelCaseParent/CamelCase\n lowercaseparent/lowercase\n', |
1930 | - 'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE') |
1931 | + self.check_output( |
1932 | + """added: |
1933 | + CamelCaseParent/ |
1934 | + CamelCaseParent/CamelCase |
1935 | + lowercaseparent/ |
1936 | + lowercaseparent/lowercase |
1937 | +""", |
1938 | + 'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE') |
1939 | |
1940 | def test_ci(self): |
1941 | wt = self._make_mixed_case_tree() |
1942 | |
1943 | === modified file 'bzrlib/tests/blackbox/test_info.py' |
1944 | --- bzrlib/tests/blackbox/test_info.py 2009-08-17 03:47:03 +0000 |
1945 | +++ bzrlib/tests/blackbox/test_info.py 2009-08-25 23:38:10 +0000 |
1946 | @@ -1328,6 +1328,10 @@ |
1947 | def test_info_locking_oslocks(self): |
1948 | if sys.platform == "win32": |
1949 | raise TestSkipped("don't use oslocks on win32 in unix manner") |
1950 | + # This test tests old (all-in-one, OS lock using) behaviour which |
1951 | + # simply cannot work on windows (and is indeed why we changed our |
1952 | + # design. As such, don't try to remove the thisFailsStrictLockCheck |
1953 | + # design). As such, don't try to remove the thisFailsStrictLockCheck |
1954 | self.thisFailsStrictLockCheck() |
1955 | |
1956 | tree = self.make_branch_and_tree('branch', |
1957 | |
1958 | === modified file 'bzrlib/tests/blackbox/test_push.py' |
1959 | --- bzrlib/tests/blackbox/test_push.py 2009-08-20 04:09:58 +0000 |
1960 | +++ bzrlib/tests/blackbox/test_push.py 2009-08-27 22:17:35 +0000 |
1961 | @@ -576,9 +576,7 @@ |
1962 | def setUp(self): |
1963 | tests.TestCaseWithTransport.setUp(self) |
1964 | self.memory_server = RedirectingMemoryServer() |
1965 | - self.memory_server.setUp() |
1966 | - self.addCleanup(self.memory_server.tearDown) |
1967 | - |
1968 | + self.start_server(self.memory_server) |
1969 | # Make the branch and tree that we'll be pushing. |
1970 | t = self.make_branch_and_tree('tree') |
1971 | self.build_tree(['tree/file']) |
1972 | |
1973 | === modified file 'bzrlib/tests/blackbox/test_selftest.py' |
1974 | --- bzrlib/tests/blackbox/test_selftest.py 2009-08-24 05:23:11 +0000 |
1975 | +++ bzrlib/tests/blackbox/test_selftest.py 2009-08-24 22:32:53 +0000 |
1976 | @@ -172,3 +172,7 @@ |
1977 | outputs_nothing(['selftest', '--list-only', '--exclude', 'selftest']) |
1978 | finally: |
1979 | tests.selftest = original_selftest |
1980 | + |
1981 | + def test_lsprof_tests(self): |
1982 | + params = self.get_params_passed_to_core('selftest --lsprof-tests') |
1983 | + self.assertEqual(True, params[1]["lsprof_tests"]) |
1984 | |
1985 | === modified file 'bzrlib/tests/blackbox/test_serve.py' |
1986 | --- bzrlib/tests/blackbox/test_serve.py 2009-07-20 11:27:05 +0000 |
1987 | +++ bzrlib/tests/blackbox/test_serve.py 2009-08-27 22:17:35 +0000 |
1988 | @@ -209,8 +209,7 @@ |
1989 | ssh_server = SFTPServer(StubSSHServer) |
1990 | # XXX: We *don't* want to override the default SSH vendor, so we set |
1991 | # _vendor to what _get_ssh_vendor returns. |
1992 | - ssh_server.setUp() |
1993 | - self.addCleanup(ssh_server.tearDown) |
1994 | + self.start_server(ssh_server) |
1995 | port = ssh_server._listener.port |
1996 | |
1997 | # Access the branch via a bzr+ssh URL. The BZR_REMOTE_PATH environment |
1998 | |
1999 | === modified file 'bzrlib/tests/blackbox/test_split.py' |
2000 | --- bzrlib/tests/blackbox/test_split.py 2009-06-08 02:02:08 +0000 |
2001 | +++ bzrlib/tests/blackbox/test_split.py 2009-08-27 21:48:33 +0000 |
2002 | @@ -31,7 +31,7 @@ |
2003 | wt.add(['b', 'b/c']) |
2004 | wt.commit('rev1') |
2005 | self.run_bzr('split a/b') |
2006 | - self.run_bzr_error(('.* is not versioned',), 'split q') |
2007 | + self.run_bzr_error(('.* is not versioned',), 'split q', working_dir='a') |
2008 | |
2009 | def test_split_repo_failure(self): |
2010 | repo = self.make_repository('branch', shared=True, format='knit') |
2011 | |
2012 | === modified file 'bzrlib/tests/http_utils.py' |
2013 | --- bzrlib/tests/http_utils.py 2009-05-04 14:48:21 +0000 |
2014 | +++ bzrlib/tests/http_utils.py 2009-08-27 22:17:35 +0000 |
2015 | @@ -133,8 +133,7 @@ |
2016 | """Get the server instance for the secondary transport.""" |
2017 | if self.__secondary_server is None: |
2018 | self.__secondary_server = self.create_transport_secondary_server() |
2019 | - self.__secondary_server.setUp() |
2020 | - self.addCleanup(self.__secondary_server.tearDown) |
2021 | + self.start_server(self.__secondary_server) |
2022 | return self.__secondary_server |
2023 | |
2024 | |
2025 | |
2026 | === modified file 'bzrlib/tests/per_branch/test_push.py' |
2027 | --- bzrlib/tests/per_branch/test_push.py 2009-08-14 00:55:42 +0000 |
2028 | +++ bzrlib/tests/per_branch/test_push.py 2009-08-27 22:17:35 +0000 |
2029 | @@ -394,8 +394,7 @@ |
2030 | # Create a smart server that publishes whatever the backing VFS server |
2031 | # does. |
2032 | self.smart_server = server.SmartTCPServer_for_testing() |
2033 | - self.smart_server.setUp(self.get_server()) |
2034 | - self.addCleanup(self.smart_server.tearDown) |
2035 | + self.start_server(self.smart_server, self.get_server()) |
2036 | # Make two empty branches, 'empty' and 'target'. |
2037 | self.empty_branch = self.make_branch('empty') |
2038 | self.make_branch('target') |
2039 | |
2040 | === modified file 'bzrlib/tests/per_pack_repository.py' |
2041 | --- bzrlib/tests/per_pack_repository.py 2009-08-14 00:55:42 +0000 |
2042 | +++ bzrlib/tests/per_pack_repository.py 2009-08-27 22:17:35 +0000 |
2043 | @@ -271,8 +271,7 @@ |
2044 | # failing to delete obsolete packs is not fatal |
2045 | format = self.get_format() |
2046 | server = fakenfs.FakeNFSServer() |
2047 | - server.setUp() |
2048 | - self.addCleanup(server.tearDown) |
2049 | + self.start_server(server) |
2050 | transport = get_transport(server.get_url()) |
2051 | bzrdir = self.get_format().initialize_on_transport(transport) |
2052 | repo = bzrdir.create_repository() |
2053 | @@ -1020,8 +1019,7 @@ |
2054 | # Create a smart server that publishes whatever the backing VFS server |
2055 | # does. |
2056 | self.smart_server = server.SmartTCPServer_for_testing() |
2057 | - self.smart_server.setUp(self.get_server()) |
2058 | - self.addCleanup(self.smart_server.tearDown) |
2059 | + self.start_server(self.smart_server, self.get_server()) |
2060 | # Log all HPSS calls into self.hpss_calls. |
2061 | client._SmartClient.hooks.install_named_hook( |
2062 | 'call', self.capture_hpss_call, None) |
2063 | |
2064 | === modified file 'bzrlib/tests/per_repository/test_repository.py' |
2065 | --- bzrlib/tests/per_repository/test_repository.py 2009-08-18 22:03:18 +0000 |
2066 | +++ bzrlib/tests/per_repository/test_repository.py 2009-08-27 22:17:35 +0000 |
2067 | @@ -823,9 +823,8 @@ |
2068 | be created at the given path.""" |
2069 | repo = self.make_repository(path, shared=shared) |
2070 | smart_server = server.SmartTCPServer_for_testing() |
2071 | - smart_server.setUp(self.get_server()) |
2072 | + self.start_server(smart_server, self.get_server()) |
2073 | remote_transport = get_transport(smart_server.get_url()).clone(path) |
2074 | - self.addCleanup(smart_server.tearDown) |
2075 | remote_bzrdir = bzrdir.BzrDir.open_from_transport(remote_transport) |
2076 | remote_repo = remote_bzrdir.open_repository() |
2077 | return remote_repo |
2078 | @@ -897,14 +896,6 @@ |
2079 | local_repo = local_bzrdir.open_repository() |
2080 | self.assertEqual(remote_backing_repo._format, local_repo._format) |
2081 | |
2082 | - # XXX: this helper probably belongs on TestCaseWithTransport |
2083 | - def make_smart_server(self, path): |
2084 | - smart_server = server.SmartTCPServer_for_testing() |
2085 | - smart_server.setUp(self.get_server()) |
2086 | - remote_transport = get_transport(smart_server.get_url()).clone(path) |
2087 | - self.addCleanup(smart_server.tearDown) |
2088 | - return remote_transport |
2089 | - |
2090 | def test_clone_to_hpss(self): |
2091 | pre_metadir_formats = [RepositoryFormat5(), RepositoryFormat6()] |
2092 | if self.repository_format in pre_metadir_formats: |
2093 | |
2094 | === modified file 'bzrlib/tests/per_workingtree/test_flush.py' |
2095 | --- bzrlib/tests/per_workingtree/test_flush.py 2009-07-31 17:42:29 +0000 |
2096 | +++ bzrlib/tests/per_workingtree/test_flush.py 2009-08-25 23:38:10 +0000 |
2097 | @@ -16,7 +16,9 @@ |
2098 | |
2099 | """Tests for WorkingTree.flush.""" |
2100 | |
2101 | +import sys |
2102 | from bzrlib import errors, inventory |
2103 | +from bzrlib.tests import TestSkipped |
2104 | from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree |
2105 | |
2106 | |
2107 | @@ -31,8 +33,14 @@ |
2108 | tree.unlock() |
2109 | |
2110 | def test_flush_when_inventory_is_modified(self): |
2111 | + if sys.platform == "win32": |
2112 | + raise TestSkipped("don't use oslocks on win32 in unix manner") |
2113 | # This takes a write lock on the source tree, then opens a second copy |
2114 | - # and tries to grab a read lock, which is a bit bogus |
2115 | + # and tries to grab a read lock. This works on Unix and is a reasonable |
2116 | + # way to detect when the file is actually written to, but it won't work |
2117 | + # (as a test) on Windows. It might be nice to instead stub out the |
2118 | + # functions used to write and that way do both less work and also be |
2119 | + # able to execute on Windows. |
2120 | self.thisFailsStrictLockCheck() |
2121 | # when doing a flush the inventory should be written if needed. |
2122 | # we test that by changing the inventory (using |
2123 | |
2124 | === modified file 'bzrlib/tests/per_workingtree/test_locking.py' |
2125 | --- bzrlib/tests/per_workingtree/test_locking.py 2009-07-31 17:42:29 +0000 |
2126 | +++ bzrlib/tests/per_workingtree/test_locking.py 2009-08-25 23:38:10 +0000 |
2127 | @@ -16,11 +16,14 @@ |
2128 | |
2129 | """Tests for the (un)lock interfaces on all working tree implemenations.""" |
2130 | |
2131 | +import sys |
2132 | + |
2133 | from bzrlib import ( |
2134 | branch, |
2135 | errors, |
2136 | lockdir, |
2137 | ) |
2138 | +from bzrlib.tests import TestSkipped |
2139 | from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree |
2140 | |
2141 | |
2142 | @@ -105,8 +108,14 @@ |
2143 | |
2144 | :param methodname: The lock method to use to establish locks. |
2145 | """ |
2146 | - # This write locks the local tree, and then grabs a read lock on a |
2147 | - # copy, which is bogus and the test just needs to be rewritten. |
2148 | + if sys.platform == "win32": |
2149 | + raise TestSkipped("don't use oslocks on win32 in unix manner") |
2150 | + # This helper takes a write lock on the source tree, then opens a |
2151 | + # second copy and tries to grab a read lock. This works on Unix and is |
2152 | + # a reasonable way to detect when the file is actually written to, but |
2153 | + # it won't work (as a test) on Windows. It might be nice to instead |
2154 | + # stub out the functions used to write and that way do both less work |
2155 | + # and also be able to execute on Windows. |
2156 | self.thisFailsStrictLockCheck() |
2157 | # when unlocking the last lock count from tree_write_lock, |
2158 | # the tree should do a flush(). |
2159 | |
2160 | === modified file 'bzrlib/tests/per_workingtree/test_set_root_id.py' |
2161 | --- bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-21 01:48:13 +0000 |
2162 | +++ bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-28 05:00:33 +0000 |
2163 | @@ -16,13 +16,18 @@ |
2164 | |
2165 | """Tests for WorkingTree.set_root_id""" |
2166 | |
2167 | +import sys |
2168 | + |
2169 | from bzrlib import inventory |
2170 | +from bzrlib.tests import TestSkipped |
2171 | from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree |
2172 | |
2173 | |
2174 | class TestSetRootId(TestCaseWithWorkingTree): |
2175 | |
2176 | def test_set_and_read_unicode(self): |
2177 | + if sys.platform == "win32": |
2178 | + raise TestSkipped("don't use oslocks on win32 in unix manner") |
2179 | # This test tests that setting the root doesn't flush, so it |
2180 | # deliberately tests concurrent access that isn't possible on windows. |
2181 | self.thisFailsStrictLockCheck() |
2182 | |
2183 | === modified file 'bzrlib/tests/test__known_graph.py' |
2184 | --- bzrlib/tests/test__known_graph.py 2009-08-26 16:03:59 +0000 |
2185 | +++ bzrlib/tests/test__known_graph.py 2009-09-02 13:32:52 +0000 |
2186 | @@ -768,3 +768,70 @@ |
2187 | }, |
2188 | 'E', |
2189 | []) |
2190 | + |
2191 | + |
2192 | +class TestKnownGraphStableReverseTopoSort(TestCaseWithKnownGraph): |
2193 | + """Test the sort order returned by gc_sort.""" |
2194 | + |
2195 | + def assertSorted(self, expected, parent_map): |
2196 | + graph = self.make_known_graph(parent_map) |
2197 | + value = graph.gc_sort() |
2198 | + if expected != value: |
2199 | + self.assertEqualDiff(pprint.pformat(expected), |
2200 | + pprint.pformat(value)) |
2201 | + |
2202 | + def test_empty(self): |
2203 | + self.assertSorted([], {}) |
2204 | + |
2205 | + def test_single(self): |
2206 | + self.assertSorted(['a'], {'a':()}) |
2207 | + self.assertSorted([('a',)], {('a',):()}) |
2208 | + self.assertSorted([('F', 'a')], {('F', 'a'):()}) |
2209 | + |
2210 | + def test_linear(self): |
2211 | + self.assertSorted(['c', 'b', 'a'], {'a':(), 'b':('a',), 'c':('b',)}) |
2212 | + self.assertSorted([('c',), ('b',), ('a',)], |
2213 | + {('a',):(), ('b',): (('a',),), ('c',): (('b',),)}) |
2214 | + self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a')], |
2215 | + {('F', 'a'):(), ('F', 'b'): (('F', 'a'),), |
2216 | + ('F', 'c'): (('F', 'b'),)}) |
2217 | + |
2218 | + def test_mixed_ancestries(self): |
2219 | + # Each prefix should be sorted separately |
2220 | + self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a'), |
2221 | + ('G', 'c'), ('G', 'b'), ('G', 'a'), |
2222 | + ('Q', 'c'), ('Q', 'b'), ('Q', 'a'), |
2223 | + ], |
2224 | + {('F', 'a'):(), ('F', 'b'): (('F', 'a'),), |
2225 | + ('F', 'c'): (('F', 'b'),), |
2226 | + ('G', 'a'):(), ('G', 'b'): (('G', 'a'),), |
2227 | + ('G', 'c'): (('G', 'b'),), |
2228 | + ('Q', 'a'):(), ('Q', 'b'): (('Q', 'a'),), |
2229 | + ('Q', 'c'): (('Q', 'b'),), |
2230 | + }) |
2231 | + |
2232 | + def test_stable_sorting(self): |
2233 | + # the sort order should be stable even when extra nodes are added |
2234 | + self.assertSorted(['b', 'c', 'a'], |
2235 | + {'a':(), 'b':('a',), 'c':('a',)}) |
2236 | + self.assertSorted(['b', 'c', 'd', 'a'], |
2237 | + {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)}) |
2238 | + self.assertSorted(['b', 'c', 'd', 'a'], |
2239 | + {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)}) |
2240 | + self.assertSorted(['Z', 'b', 'c', 'd', 'a'], |
2241 | + {'a':(), 'b':('a',), 'c':('a',), 'd':('a',), |
2242 | + 'Z':('a',)}) |
2243 | + self.assertSorted(['e', 'b', 'c', 'f', 'Z', 'd', 'a'], |
2244 | + {'a':(), 'b':('a',), 'c':('a',), 'd':('a',), |
2245 | + 'Z':('a',), |
2246 | + 'e':('b', 'c', 'd'), |
2247 | + 'f':('d', 'Z'), |
2248 | + }) |
2249 | + |
2250 | + def test_skip_ghost(self): |
2251 | + self.assertSorted(['b', 'c', 'a'], |
2252 | + {'a':(), 'b':('a', 'ghost'), 'c':('a',)}) |
2253 | + |
2254 | + def test_skip_mainline_ghost(self): |
2255 | + self.assertSorted(['b', 'c', 'a'], |
2256 | + {'a':(), 'b':('ghost', 'a'), 'c':('a',)}) |
2257 | |
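The new `gc_sort` tests above pin down an ordering: children before parents, ghost parents skipped, ties broken lexically so the result is stable when extra nodes are added. A minimal sketch reproducing that ordering (a Kahn-style reverse topological sort with a min-heap for ties — an illustration of the tested behaviour, not bzrlib's actual KnownGraph code):

```python
import heapq


def gc_sort_like(parent_map):
    """Return keys children-before-parents, ties broken by sorted key.

    parent_map maps each node to a tuple of parent keys; parents that are
    not themselves nodes (ghosts) are ignored, matching test_skip_ghost.
    """
    nodes = set(parent_map)
    # Count, for each node, how many of its children are not yet emitted.
    pending_children = dict.fromkeys(nodes, 0)
    for node, parents in parent_map.items():
        for parent in parents:
            if parent in nodes:  # skip ghost parents
                pending_children[parent] += 1
    # Start from the tips: nodes with no children at all.
    heap = [n for n, count in pending_children.items() if count == 0]
    heapq.heapify(heap)
    result = []
    while heap:
        node = heapq.heappop(heap)
        result.append(node)
        for parent in parent_map[node]:
            if parent not in nodes:
                continue
            pending_children[parent] -= 1
            if pending_children[parent] == 0:
                heapq.heappush(heap, parent)
    return result
```

Because ready nodes are drained through a min-heap, adding an unrelated node like `'Z'` only inserts it at its sorted position without reshuffling the rest, which is the stability property test_stable_sorting exercises.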
2258 | === modified file 'bzrlib/tests/test_bundle.py' |
2259 | --- bzrlib/tests/test_bundle.py 2009-08-04 14:10:09 +0000 |
2260 | +++ bzrlib/tests/test_bundle.py 2009-08-27 22:17:35 +0000 |
2261 | @@ -1830,9 +1830,8 @@ |
2262 | """ |
2263 | from bzrlib.tests.blackbox.test_push import RedirectingMemoryServer |
2264 | server = RedirectingMemoryServer() |
2265 | - server.setUp() |
2266 | + self.start_server(server) |
2267 | url = server.get_url() + 'infinite-loop' |
2268 | - self.addCleanup(server.tearDown) |
2269 | self.assertRaises(errors.NotABundle, read_mergeable_from_url, url) |
2270 | |
2271 | def test_smart_server_connection_reset(self): |
2272 | @@ -1841,8 +1840,7 @@ |
2273 | """ |
2274 | # Instantiate a server that will provoke a ConnectionReset |
2275 | sock_server = _DisconnectingTCPServer() |
2276 | - sock_server.setUp() |
2277 | - self.addCleanup(sock_server.tearDown) |
2278 | + self.start_server(sock_server) |
2279 | # We don't really care what the url is since the server will close the |
2280 | # connection without interpreting it |
2281 | url = sock_server.get_url() |
2282 | |
2283 | === modified file 'bzrlib/tests/test_crash.py' |
2284 | --- bzrlib/tests/test_crash.py 2009-08-20 04:45:48 +0000 |
2285 | +++ bzrlib/tests/test_crash.py 2009-08-28 12:38:01 +0000 |
2286 | @@ -18,20 +18,17 @@ |
2287 | from StringIO import StringIO |
2288 | import sys |
2289 | |
2290 | - |
2291 | -from bzrlib.crash import ( |
2292 | - report_bug, |
2293 | - _write_apport_report_to_file, |
2294 | +from bzrlib import ( |
2295 | + crash, |
2296 | + tests, |
2297 | ) |
2298 | -from bzrlib.tests import TestCase |
2299 | -from bzrlib.tests.features import ApportFeature |
2300 | - |
2301 | - |
2302 | -class TestApportReporting(TestCase): |
2303 | - |
2304 | - def setUp(self): |
2305 | - TestCase.setUp(self) |
2306 | - self.requireFeature(ApportFeature) |
2307 | + |
2308 | +from bzrlib.tests import features |
2309 | + |
2310 | + |
2311 | +class TestApportReporting(tests.TestCase): |
2312 | + |
2313 | + _test_needs_features = [features.ApportFeature] |
2314 | |
2315 | def test_apport_report_contents(self): |
2316 | try: |
2317 | @@ -39,19 +36,13 @@ |
2318 | except AssertionError, e: |
2319 | pass |
2320 | outf = StringIO() |
2321 | - _write_apport_report_to_file(sys.exc_info(), |
2322 | - outf) |
2323 | + crash._write_apport_report_to_file(sys.exc_info(), outf) |
2324 | report = outf.getvalue() |
2325 | |
2326 | - self.assertContainsRe(report, |
2327 | - '(?m)^BzrVersion:') |
2328 | + self.assertContainsRe(report, '(?m)^BzrVersion:') |
2329 | # should be in the traceback |
2330 | - self.assertContainsRe(report, |
2331 | - 'my error') |
2332 | - self.assertContainsRe(report, |
2333 | - 'AssertionError') |
2334 | - self.assertContainsRe(report, |
2335 | - 'test_apport_report_contents') |
2336 | + self.assertContainsRe(report, 'my error') |
2337 | + self.assertContainsRe(report, 'AssertionError') |
2338 | + self.assertContainsRe(report, 'test_apport_report_contents') |
2339 | # should also be in there |
2340 | - self.assertContainsRe(report, |
2341 | - '(?m)^CommandLine:.*selftest') |
2342 | + self.assertContainsRe(report, '(?m)^CommandLine:') |
2343 | |
2344 | === modified file 'bzrlib/tests/test_groupcompress.py' |
2345 | --- bzrlib/tests/test_groupcompress.py 2009-06-29 14:51:13 +0000 |
2346 | +++ bzrlib/tests/test_groupcompress.py 2009-09-03 15:26:27 +0000 |
2347 | @@ -538,7 +538,7 @@ |
2348 | 'as-requested', False)] |
2349 | self.assertEqual([('b',), ('a',), ('d',), ('c',)], keys) |
2350 | |
2351 | - def test_insert_record_stream_re_uses_blocks(self): |
2352 | + def test_insert_record_stream_reuses_blocks(self): |
2353 | vf = self.make_test_vf(True, dir='source') |
2354 | def grouped_stream(revision_ids, first_parents=()): |
2355 | parents = first_parents |
2356 | @@ -582,8 +582,14 @@ |
2357 | vf2 = self.make_test_vf(True, dir='target') |
2358 | # ordering in 'groupcompress' order, should actually swap the groups in |
2359 | # the target vf, but the groups themselves should not be disturbed. |
2360 | - vf2.insert_record_stream(vf.get_record_stream( |
2361 | - [(r,) for r in 'abcdefgh'], 'groupcompress', False)) |
2362 | + def small_size_stream(): |
2363 | + for record in vf.get_record_stream([(r,) for r in 'abcdefgh'], |
2364 | + 'groupcompress', False): |
2365 | + record._manager._full_enough_block_size = \ |
2366 | + record._manager._block._content_length |
2367 | + yield record |
2368 | + |
2369 | + vf2.insert_record_stream(small_size_stream()) |
2370 | stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'], |
2371 | 'groupcompress', False) |
2372 | vf2.writer.end() |
2373 | @@ -594,6 +600,44 @@ |
2374 | record._manager._block._z_content) |
2375 | self.assertEqual(8, num_records) |
2376 | |
2377 | + def test_insert_record_stream_packs_on_the_fly(self): |
2378 | + vf = self.make_test_vf(True, dir='source') |
2379 | + def grouped_stream(revision_ids, first_parents=()): |
2380 | + parents = first_parents |
2381 | + for revision_id in revision_ids: |
2382 | + key = (revision_id,) |
2383 | + record = versionedfile.FulltextContentFactory( |
2384 | + key, parents, None, |
2385 | + 'some content that is\n' |
2386 | + 'identical except for\n' |
2387 | + 'revision_id:%s\n' % (revision_id,)) |
2388 | + yield record |
2389 | + parents = (key,) |
2390 | + # One group, a-d |
2391 | + vf.insert_record_stream(grouped_stream(['a', 'b', 'c', 'd'])) |
2392 | + # Second group, e-h |
2393 | + vf.insert_record_stream(grouped_stream(['e', 'f', 'g', 'h'], |
2394 | + first_parents=(('d',),))) |
2395 | + # Now copy the blocks into another vf, and see that the |
2396 | + # insert_record_stream rebuilt a new block on-the-fly because of |
2397 | + # under-utilization |
2398 | + vf2 = self.make_test_vf(True, dir='target') |
2399 | + vf2.insert_record_stream(vf.get_record_stream( |
2400 | + [(r,) for r in 'abcdefgh'], 'groupcompress', False)) |
2401 | + stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'], |
2402 | + 'groupcompress', False) |
2403 | + vf2.writer.end() |
2404 | + num_records = 0 |
2405 | + # All of the records should be recombined into a single block |
2406 | + block = None |
2407 | + for record in stream: |
2408 | + num_records += 1 |
2409 | + if block is None: |
2410 | + block = record._manager._block |
2411 | + else: |
2412 | + self.assertIs(block, record._manager._block) |
2413 | + self.assertEqual(8, num_records) |
2414 | + |
2415 | def test__insert_record_stream_no_reuse_block(self): |
2416 | vf = self.make_test_vf(True, dir='source') |
2417 | def grouped_stream(revision_ids, first_parents=()): |
2418 | @@ -702,19 +746,128 @@ |
2419 | " 0 8', \(\(\('a',\),\),\)\)") |
2420 | |
2421 | |
2422 | +class StubGCVF(object): |
2423 | + def __init__(self, canned_get_blocks=None): |
2424 | + self._group_cache = {} |
2425 | + self._canned_get_blocks = canned_get_blocks or [] |
2426 | + def _get_blocks(self, read_memos): |
2427 | + return iter(self._canned_get_blocks) |
2428 | + |
2429 | + |
2430 | +class Test_BatchingBlockFetcher(TestCaseWithGroupCompressVersionedFiles): |
2431 | + """Simple whitebox unit tests for _BatchingBlockFetcher.""" |
2432 | + |
2433 | + def test_add_key_new_read_memo(self): |
2434 | + """Adding a key with an uncached read_memo new to this batch adds that |
2435 | + read_memo to the list of memos to fetch. |
2436 | + """ |
2437 | + # locations are: index_memo, ignored, parents, ignored |
2438 | + # where index_memo is: (idx, offset, len, factory_start, factory_end) |
2439 | + # and (idx, offset, size) is known as the 'read_memo', identifying the |
2440 | + # raw bytes needed. |
2441 | + read_memo = ('fake index', 100, 50) |
2442 | + locations = { |
2443 | + ('key',): (read_memo + (None, None), None, None, None)} |
2444 | + batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations) |
2445 | + total_size = batcher.add_key(('key',)) |
2446 | + self.assertEqual(50, total_size) |
2447 | + self.assertEqual([('key',)], batcher.keys) |
2448 | + self.assertEqual([read_memo], batcher.memos_to_get) |
2449 | + |
2450 | + def test_add_key_duplicate_read_memo(self): |
2451 | + """read_memos that occur multiple times in a batch will only be fetched |
2452 | + once. |
2453 | + """ |
2454 | + read_memo = ('fake index', 100, 50) |
2455 | + # Two keys, both sharing the same read memo (but different overall |
2456 | + # index_memos). |
2457 | + locations = { |
2458 | + ('key1',): (read_memo + (0, 1), None, None, None), |
2459 | + ('key2',): (read_memo + (1, 2), None, None, None)} |
2460 | + batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations) |
2461 | + total_size = batcher.add_key(('key1',)) |
2462 | + total_size = batcher.add_key(('key2',)) |
2463 | + self.assertEqual(50, total_size) |
2464 | + self.assertEqual([('key1',), ('key2',)], batcher.keys) |
2465 | + self.assertEqual([read_memo], batcher.memos_to_get) |
2466 | + |
2467 | + def test_add_key_cached_read_memo(self): |
2468 | + """Adding a key with a cached read_memo will not cause that read_memo |
2469 | + to be added to the list to fetch. |
2470 | + """ |
2471 | + read_memo = ('fake index', 100, 50) |
2472 | + gcvf = StubGCVF() |
2473 | + gcvf._group_cache[read_memo] = 'fake block' |
2474 | + locations = { |
2475 | + ('key',): (read_memo + (None, None), None, None, None)} |
2476 | + batcher = groupcompress._BatchingBlockFetcher(gcvf, locations) |
2477 | + total_size = batcher.add_key(('key',)) |
2478 | + self.assertEqual(0, total_size) |
2479 | + self.assertEqual([('key',)], batcher.keys) |
2480 | + self.assertEqual([], batcher.memos_to_get) |
2481 | + |
2482 | + def test_yield_factories_empty(self): |
2483 | + """An empty batch yields no factories.""" |
2484 | + batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), {}) |
2485 | + self.assertEqual([], list(batcher.yield_factories())) |
2486 | + |
2487 | + def test_yield_factories_calls_get_blocks(self): |
2488 | + """Uncached memos are retrieved via get_blocks.""" |
2489 | + read_memo1 = ('fake index', 100, 50) |
2490 | + read_memo2 = ('fake index', 150, 40) |
2491 | + gcvf = StubGCVF( |
2492 | + canned_get_blocks=[ |
2493 | + (read_memo1, groupcompress.GroupCompressBlock()), |
2494 | + (read_memo2, groupcompress.GroupCompressBlock())]) |
2495 | + locations = { |
2496 | + ('key1',): (read_memo1 + (None, None), None, None, None), |
2497 | + ('key2',): (read_memo2 + (None, None), None, None, None)} |
2498 | + batcher = groupcompress._BatchingBlockFetcher(gcvf, locations) |
2499 | + batcher.add_key(('key1',)) |
2500 | + batcher.add_key(('key2',)) |
2501 | + factories = list(batcher.yield_factories(full_flush=True)) |
2502 | + self.assertLength(2, factories) |
2503 | + keys = [f.key for f in factories] |
2504 | + kinds = [f.storage_kind for f in factories] |
2505 | + self.assertEqual([('key1',), ('key2',)], keys) |
2506 | + self.assertEqual(['groupcompress-block', 'groupcompress-block'], kinds) |
2507 | + |
2508 | + def test_yield_factories_flushing(self): |
2509 | + """yield_factories holds back on yielding results from the final block |
2510 | + unless passed full_flush=True. |
2511 | + """ |
2512 | + fake_block = groupcompress.GroupCompressBlock() |
2513 | + read_memo = ('fake index', 100, 50) |
2514 | + gcvf = StubGCVF() |
2515 | + gcvf._group_cache[read_memo] = fake_block |
2516 | + locations = { |
2517 | + ('key',): (read_memo + (None, None), None, None, None)} |
2518 | + batcher = groupcompress._BatchingBlockFetcher(gcvf, locations) |
2519 | + batcher.add_key(('key',)) |
2520 | + self.assertEqual([], list(batcher.yield_factories())) |
2521 | + factories = list(batcher.yield_factories(full_flush=True)) |
2522 | + self.assertLength(1, factories) |
2523 | + self.assertEqual(('key',), factories[0].key) |
2524 | + self.assertEqual('groupcompress-block', factories[0].storage_kind) |
2525 | + |
2526 | + |
2527 | class TestLazyGroupCompress(tests.TestCaseWithTransport): |
2528 | |
2529 | _texts = { |
2530 | ('key1',): "this is a text\n" |
2531 | - "with a reasonable amount of compressible bytes\n", |
2532 | + "with a reasonable amount of compressible bytes\n" |
2533 | + "which can be shared between various other texts\n", |
2534 | ('key2',): "another text\n" |
2535 | - "with a reasonable amount of compressible bytes\n", |
2536 | + "with a reasonable amount of compressible bytes\n" |
2537 | + "which can be shared between various other texts\n", |
2538 | ('key3',): "yet another text which won't be extracted\n" |
2539 | - "with a reasonable amount of compressible bytes\n", |
2540 | + "with a reasonable amount of compressible bytes\n" |
2541 | + "which can be shared between various other texts\n", |
2542 | ('key4',): "this will be extracted\n" |
2543 | "but references most of its bytes from\n" |
2544 | "yet another text which won't be extracted\n" |
2545 | - "with a reasonable amount of compressible bytes\n", |
2546 | + "with a reasonable amount of compressible bytes\n" |
2547 | + "which can be shared between various other texts\n", |
2548 | } |
2549 | def make_block(self, key_to_text): |
2550 | """Create a GroupCompressBlock, filling it with the given texts.""" |
2551 | @@ -732,6 +885,13 @@ |
2552 | start, end = locations[key] |
2553 | manager.add_factory(key, (), start, end) |
2554 | |
2555 | + def make_block_and_full_manager(self, texts): |
2556 | + locations, block = self.make_block(texts) |
2557 | + manager = groupcompress._LazyGroupContentManager(block) |
2558 | + for key in sorted(texts): |
2559 | + self.add_key_to_manager(key, locations, block, manager) |
2560 | + return block, manager |
2561 | + |
2562 | def test_get_fulltexts(self): |
2563 | locations, block = self.make_block(self._texts) |
2564 | manager = groupcompress._LazyGroupContentManager(block) |
2565 | @@ -788,8 +948,8 @@ |
2566 | header_len = int(header_len) |
2567 | block_len = int(block_len) |
2568 | self.assertEqual('groupcompress-block', storage_kind) |
2569 | - self.assertEqual(33, z_header_len) |
2570 | - self.assertEqual(25, header_len) |
2571 | + self.assertEqual(34, z_header_len) |
2572 | + self.assertEqual(26, header_len) |
2573 | self.assertEqual(len(block_bytes), block_len) |
2574 | z_header = rest[:z_header_len] |
2575 | header = zlib.decompress(z_header) |
2576 | @@ -829,13 +989,7 @@ |
2577 | self.assertEqual([('key1',), ('key4',)], result_order) |
2578 | |
2579 | def test__check_rebuild_no_changes(self): |
2580 | - locations, block = self.make_block(self._texts) |
2581 | - manager = groupcompress._LazyGroupContentManager(block) |
2582 | - # Request all the keys, which ensures that we won't rebuild |
2583 | - self.add_key_to_manager(('key1',), locations, block, manager) |
2584 | - self.add_key_to_manager(('key2',), locations, block, manager) |
2585 | - self.add_key_to_manager(('key3',), locations, block, manager) |
2586 | - self.add_key_to_manager(('key4',), locations, block, manager) |
2587 | + block, manager = self.make_block_and_full_manager(self._texts) |
2588 | manager._check_rebuild_block() |
2589 | self.assertIs(block, manager._block) |
2590 | |
2591 | @@ -866,3 +1020,50 @@ |
2592 | self.assertEqual(('key4',), record.key) |
2593 | self.assertEqual(self._texts[record.key], |
2594 | record.get_bytes_as('fulltext')) |
2595 | + |
2596 | + def test_check_is_well_utilized_all_keys(self): |
2597 | + block, manager = self.make_block_and_full_manager(self._texts) |
2598 | + self.assertFalse(manager.check_is_well_utilized()) |
2599 | + # Though we can fake it by changing the recommended minimum size |
2600 | + manager._full_enough_block_size = block._content_length |
2601 | + self.assertTrue(manager.check_is_well_utilized()) |
2602 | + # Setting it just above causes it to fail |
2603 | + manager._full_enough_block_size = block._content_length + 1 |
2604 | + self.assertFalse(manager.check_is_well_utilized()) |
2605 | + # Setting the mixed-block size doesn't do anything, because the content |
2606 | + # is considered to not be 'mixed' |
2607 | + manager._full_enough_mixed_block_size = block._content_length |
2608 | + self.assertFalse(manager.check_is_well_utilized()) |
2609 | + |
2610 | + def test_check_is_well_utilized_mixed_keys(self): |
2611 | + texts = {} |
2612 | + f1k1 = ('f1', 'k1') |
2613 | + f1k2 = ('f1', 'k2') |
2614 | + f2k1 = ('f2', 'k1') |
2615 | + f2k2 = ('f2', 'k2') |
2616 | + texts[f1k1] = self._texts[('key1',)] |
2617 | + texts[f1k2] = self._texts[('key2',)] |
2618 | + texts[f2k1] = self._texts[('key3',)] |
2619 | + texts[f2k2] = self._texts[('key4',)] |
2620 | + block, manager = self.make_block_and_full_manager(texts) |
2621 | + self.assertFalse(manager.check_is_well_utilized()) |
2622 | + manager._full_enough_block_size = block._content_length |
2623 | + self.assertTrue(manager.check_is_well_utilized()) |
2624 | + manager._full_enough_block_size = block._content_length + 1 |
2625 | + self.assertFalse(manager.check_is_well_utilized()) |
2626 | + manager._full_enough_mixed_block_size = block._content_length |
2627 | + self.assertTrue(manager.check_is_well_utilized()) |
2628 | + |
2629 | + def test_check_is_well_utilized_partial_use(self): |
2630 | + locations, block = self.make_block(self._texts) |
2631 | + manager = groupcompress._LazyGroupContentManager(block) |
2632 | + manager._full_enough_block_size = block._content_length |
2633 | + self.add_key_to_manager(('key1',), locations, block, manager) |
2634 | + self.add_key_to_manager(('key2',), locations, block, manager) |
2635 | + # Just using the content from key1 and 2 is not enough to be considered |
2636 | + # 'complete' |
2637 | + self.assertFalse(manager.check_is_well_utilized()) |
2638 | + # However if we add key3, then we have enough, as we only require 75% |
2639 | + # consumption |
2640 | + self.add_key_to_manager(('key4',), locations, block, manager) |
2641 | + self.assertTrue(manager.check_is_well_utilized()) |
2642 | |
2643 | === modified file 'bzrlib/tests/test_http.py' |
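The `_BatchingBlockFetcher` tests above verify that duplicate read_memos are only queued once and that cached memos are skipped entirely. As a rough standalone sketch of that dedup-while-batching idea (class and attribute names here are illustrative, not bzrlib's actual `_BatchingBlockFetcher` API):

```python
class BatchingFetcher:
    """Accumulate keys while deduplicating the (index, offset, size)
    read memos they point at, so each raw block is fetched at most once.
    Illustrative sketch only -- not bzrlib's _BatchingBlockFetcher."""

    def __init__(self, cached_blocks=None):
        self.keys = []
        self.memos_to_get = []
        self._seen = set()
        self._cache = cached_blocks or {}
        self.total_size = 0

    def add_key(self, key, read_memo):
        self.keys.append(key)
        if read_memo in self._seen:
            return self.total_size      # already queued in this batch
        self._seen.add(read_memo)
        if read_memo in self._cache:
            return self.total_size      # cached block: nothing to fetch
        self.memos_to_get.append(read_memo)
        self.total_size += read_memo[2] # size is the third element
        return self.total_size


f = BatchingFetcher(cached_blocks={('idx', 0, 10): 'block'})
f.add_key(('k1',), ('idx', 100, 50))
f.add_key(('k2',), ('idx', 100, 50))   # same memo: fetched once
f.add_key(('k3',), ('idx', 0, 10))     # cached: not fetched at all
print(f.memos_to_get)  # [('idx', 100, 50)]
print(f.total_size)    # 50
```

This mirrors what `test_add_key_duplicate_read_memo` and `test_add_key_cached_read_memo` assert: two keys sharing a memo cost one fetch, and a cached memo contributes nothing to the batch size.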
2644 | --- bzrlib/tests/test_http.py 2009-08-19 16:33:39 +0000 |
2645 | +++ bzrlib/tests/test_http.py 2009-08-27 22:17:35 +0000 |
2646 | @@ -304,7 +304,7 @@ |
2647 | |
2648 | server = http_server.HttpServer(BogusRequestHandler) |
2649 | try: |
2650 | - self.assertRaises(httplib.UnknownProtocol,server.setUp) |
2651 | + self.assertRaises(httplib.UnknownProtocol, server.setUp) |
2652 | except: |
2653 | server.tearDown() |
2654 | self.fail('HTTP Server creation did not raise UnknownProtocol') |
2655 | @@ -312,7 +312,7 @@ |
2656 | def test_force_invalid_protocol(self): |
2657 | server = http_server.HttpServer(protocol_version='HTTP/0.1') |
2658 | try: |
2659 | - self.assertRaises(httplib.UnknownProtocol,server.setUp) |
2660 | + self.assertRaises(httplib.UnknownProtocol, server.setUp) |
2661 | except: |
2662 | server.tearDown() |
2663 | self.fail('HTTP Server creation did not raise UnknownProtocol') |
2664 | @@ -320,8 +320,10 @@ |
2665 | def test_server_start_and_stop(self): |
2666 | server = http_server.HttpServer() |
2667 | server.setUp() |
2668 | - self.assertTrue(server._http_running) |
2669 | - server.tearDown() |
2670 | + try: |
2671 | + self.assertTrue(server._http_running) |
2672 | + finally: |
2673 | + server.tearDown() |
2674 | self.assertFalse(server._http_running) |
2675 | |
2676 | def test_create_http_server_one_zero(self): |
2677 | @@ -330,8 +332,7 @@ |
2678 | protocol_version = 'HTTP/1.0' |
2679 | |
2680 | server = http_server.HttpServer(RequestHandlerOneZero) |
2681 | - server.setUp() |
2682 | - self.addCleanup(server.tearDown) |
2683 | + self.start_server(server) |
2684 | self.assertIsInstance(server._httpd, http_server.TestingHTTPServer) |
2685 | |
2686 | def test_create_http_server_one_one(self): |
2687 | @@ -340,8 +341,7 @@ |
2688 | protocol_version = 'HTTP/1.1' |
2689 | |
2690 | server = http_server.HttpServer(RequestHandlerOneOne) |
2691 | - server.setUp() |
2692 | - self.addCleanup(server.tearDown) |
2693 | + self.start_server(server) |
2694 | self.assertIsInstance(server._httpd, |
2695 | http_server.TestingThreadingHTTPServer) |
2696 | |
2697 | @@ -352,8 +352,7 @@ |
2698 | |
2699 | server = http_server.HttpServer(RequestHandlerOneZero, |
2700 | protocol_version='HTTP/1.1') |
2701 | - server.setUp() |
2702 | - self.addCleanup(server.tearDown) |
2703 | + self.start_server(server) |
2704 | self.assertIsInstance(server._httpd, |
2705 | http_server.TestingThreadingHTTPServer) |
2706 | |
2707 | @@ -364,8 +363,7 @@ |
2708 | |
2709 | server = http_server.HttpServer(RequestHandlerOneOne, |
2710 | protocol_version='HTTP/1.0') |
2711 | - server.setUp() |
2712 | - self.addCleanup(server.tearDown) |
2713 | + self.start_server(server) |
2714 | self.assertIsInstance(server._httpd, |
2715 | http_server.TestingHTTPServer) |
2716 | |
2717 | @@ -431,8 +429,8 @@ |
2718 | def test_http_impl_urls(self): |
2719 | """There are servers which ask for particular clients to connect""" |
2720 | server = self._server() |
2721 | + server.setUp() |
2722 | try: |
2723 | - server.setUp() |
2724 | url = server.get_url() |
2725 | self.assertTrue(url.startswith('%s://' % self._qualified_prefix)) |
2726 | finally: |
2727 | @@ -544,8 +542,7 @@ |
2728 | |
2729 | def test_post_body_is_received(self): |
2730 | server = RecordingServer(expect_body_tail='end-of-body') |
2731 | - server.setUp() |
2732 | - self.addCleanup(server.tearDown) |
2733 | + self.start_server(server) |
2734 | scheme = self._qualified_prefix |
2735 | url = '%s://%s:%s/' % (scheme, server.host, server.port) |
2736 | http_transport = self._transport(url) |
2737 | @@ -780,8 +777,7 @@ |
2738 | |
2739 | def test_send_receive_bytes(self): |
2740 | server = RecordingServer(expect_body_tail='c') |
2741 | - server.setUp() |
2742 | - self.addCleanup(server.tearDown) |
2743 | + self.start_server(server) |
2744 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) |
2745 | sock.connect((server.host, server.port)) |
2746 | sock.sendall('abc') |
2747 | |
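A recurring change in this diff replaces manual `server.setUp()` / `self.addCleanup(server.tearDown)` pairs with a single `self.start_server(server)` call. The helper pattern boils down to the following hedged sketch (not bzrlib's real implementation):

```python
class TestCaseSketch:
    """Minimal sketch of the start_server() helper this diff adopts:
    starting the server and registering its teardown happen in one
    call, so a test cannot forget the cleanup. Illustrative only."""

    def __init__(self):
        self._cleanups = []

    def addCleanup(self, fn, *args):
        self._cleanups.append((fn, args))

    def start_server(self, server, backing=None):
        # Some servers (e.g. a smart server over a VFS) take a backing
        # server; plain ones do not.
        if backing is None:
            server.setUp()
        else:
            server.setUp(backing)
        self.addCleanup(server.tearDown)

    def run_cleanups(self):
        # Cleanups run in reverse registration order, like unittest's.
        while self._cleanups:
            fn, args = self._cleanups.pop()
            fn(*args)


class FakeServer:
    def __init__(self):
        self.running = False
    def setUp(self, backing=None):
        self.running = True
    def tearDown(self):
        self.running = False


case = TestCaseSketch()
srv = FakeServer()
case.start_server(srv)
print(srv.running)   # True
case.run_cleanups()
print(srv.running)   # False
```

The two-argument form corresponds to calls like `self.start_server(self.smart_server, self.get_server())` seen later in the diff.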
2748 | === modified file 'bzrlib/tests/test_lsprof.py' |
2749 | --- bzrlib/tests/test_lsprof.py 2009-03-23 14:59:43 +0000 |
2750 | +++ bzrlib/tests/test_lsprof.py 2009-08-24 21:05:09 +0000 |
2751 | @@ -92,3 +92,22 @@ |
2752 | self.stats.save(f) |
2753 | data1 = cPickle.load(open(f)) |
2754 | self.assertEqual(type(data1), bzrlib.lsprof.Stats) |
2755 | + |
2756 | + |
2757 | +class TestBzrProfiler(tests.TestCase): |
2758 | + |
2759 | + _test_needs_features = [LSProfFeature] |
2760 | + |
2761 | + def test_start_call_stuff_stop(self): |
2762 | + profiler = bzrlib.lsprof.BzrProfiler() |
2763 | + profiler.start() |
2764 | + try: |
2765 | + def a_function(): |
2766 | + pass |
2767 | + a_function() |
2768 | + finally: |
2769 | + stats = profiler.stop() |
2770 | + stats.freeze() |
2771 | + lines = [str(data) for data in stats.data] |
2772 | + lines = [line for line in lines if 'a_function' in line] |
2773 | + self.assertLength(1, lines) |
2774 | |
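The new `TestBzrProfiler.test_start_call_stuff_stop` exercises a start/try/finally/stop discipline: the profiler is always stopped even if the profiled code raises. A standalone sketch of that discipline using the stdlib `cProfile` rather than `bzrlib.lsprof` (helper name is an assumption for illustration):

```python
import cProfile
import io
import pstats

def profile_call(fn, *args, **kwargs):
    """Run fn under a profiler, guaranteeing the profiler is stopped
    in a finally block, and return (result, text report)."""
    prof = cProfile.Profile()
    prof.enable()
    try:
        result = fn(*args, **kwargs)
    finally:
        prof.disable()   # always stops, even if fn raised
    out = io.StringIO()
    pstats.Stats(prof, stream=out).print_stats()
    return result, out.getvalue()


def a_function():
    return 42

value, report = profile_call(a_function)
print(value)                   # 42
print('a_function' in report)  # the profiled call shows up in the stats
```

As in the bzrlib test, the check is simply that the profiled function's name appears in the frozen stats output.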
2775 | === modified file 'bzrlib/tests/test_remote.py' |
2776 | --- bzrlib/tests/test_remote.py 2009-08-27 05:22:14 +0000 |
2777 | +++ bzrlib/tests/test_remote.py 2009-08-30 21:34:42 +0000 |
2778 | @@ -1945,8 +1945,7 @@ |
2779 | def test_allows_new_revisions(self): |
2780 | """get_parent_map's results can be updated by commit.""" |
2781 | smart_server = server.SmartTCPServer_for_testing() |
2782 | - smart_server.setUp() |
2783 | - self.addCleanup(smart_server.tearDown) |
2784 | + self.start_server(smart_server) |
2785 | self.make_branch('branch') |
2786 | branch = Branch.open(smart_server.get_url() + '/branch') |
2787 | tree = branch.create_checkout('tree', lightweight=True) |
2788 | @@ -2781,8 +2780,7 @@ |
2789 | stacked_branch.set_stacked_on_url('../base') |
2790 | # start a server looking at this |
2791 | smart_server = server.SmartTCPServer_for_testing() |
2792 | - smart_server.setUp() |
2793 | - self.addCleanup(smart_server.tearDown) |
2794 | + self.start_server(smart_server) |
2795 | remote_bzrdir = BzrDir.open(smart_server.get_url() + '/stacked') |
2796 | # can get its branch and repository |
2797 | remote_branch = remote_bzrdir.open_branch() |
2798 | @@ -2943,8 +2941,7 @@ |
2799 | # Create a smart server that publishes whatever the backing VFS server |
2800 | # does. |
2801 | self.smart_server = server.SmartTCPServer_for_testing() |
2802 | - self.smart_server.setUp(self.get_server()) |
2803 | - self.addCleanup(self.smart_server.tearDown) |
2804 | + self.start_server(self.smart_server, self.get_server()) |
2805 | # Log all HPSS calls into self.hpss_calls. |
2806 | _SmartClient.hooks.install_named_hook( |
2807 | 'call', self.capture_hpss_call, None) |
2808 | |
2809 | === modified file 'bzrlib/tests/test_repository.py' |
2810 | --- bzrlib/tests/test_repository.py 2009-08-17 23:15:55 +0000 |
2811 | +++ bzrlib/tests/test_repository.py 2009-09-01 21:21:53 +0000 |
2812 | @@ -683,6 +683,28 @@ |
2813 | |
2814 | class Test2a(TestCaseWithTransport): |
2815 | |
2816 | + def test_fetch_combines_groups(self): |
2817 | + builder = self.make_branch_builder('source', format='2a') |
2818 | + builder.start_series() |
2819 | + builder.build_snapshot('1', None, [ |
2820 | + ('add', ('', 'root-id', 'directory', '')), |
2821 | + ('add', ('file', 'file-id', 'file', 'content\n'))]) |
2822 | + builder.build_snapshot('2', ['1'], [ |
2823 | + ('modify', ('file-id', 'content-2\n'))]) |
2824 | + builder.finish_series() |
2825 | + source = builder.get_branch() |
2826 | + target = self.make_repository('target', format='2a') |
2827 | + target.fetch(source.repository) |
2828 | + target.lock_read() |
2829 | + self.addCleanup(target.unlock) |
2830 | + details = target.texts._index.get_build_details( |
2831 | + [('file-id', '1',), ('file-id', '2',)]) |
2832 | + file_1_details = details[('file-id', '1')] |
2833 | + file_2_details = details[('file-id', '2')] |
2834 | + # The index, and what to read off disk, should be the same for both |
2835 | + # versions of the file. |
2836 | + self.assertEqual(file_1_details[0][:3], file_2_details[0][:3]) |
2837 | + |
2838 | def test_format_pack_compresses_True(self): |
2839 | repo = self.make_repository('repo', format='2a') |
2840 | self.assertTrue(repo._format.pack_compresses) |
2841 | |
2842 | === modified file 'bzrlib/tests/test_selftest.py' |
2843 | --- bzrlib/tests/test_selftest.py 2009-08-24 05:35:28 +0000 |
2844 | +++ bzrlib/tests/test_selftest.py 2009-08-26 23:25:28 +0000 |
2845 | @@ -687,6 +687,26 @@ |
2846 | self.assertEqual(url, t.clone('..').base) |
2847 | |
2848 | |
2849 | +class TestProfileResult(tests.TestCase): |
2850 | + |
2851 | + def test_profiles_tests(self): |
2852 | + self.requireFeature(test_lsprof.LSProfFeature) |
2853 | + terminal = unittest.TestResult() |
2854 | + result = tests.ProfileResult(terminal) |
2855 | + class Sample(tests.TestCase): |
2856 | + def a(self): |
2857 | + self.sample_function() |
2858 | + def sample_function(self): |
2859 | + pass |
2860 | + test = Sample("a") |
2861 | + test.attrs_to_keep = test.attrs_to_keep + ('_benchcalls',) |
2862 | + test.run(result) |
2863 | + self.assertLength(1, test._benchcalls) |
2864 | + # We must be able to unpack it as the test reporting code wants |
2865 | + (_, _, _), stats = test._benchcalls[0] |
2866 | + self.assertTrue(callable(stats.pprint)) |
2867 | + |
2868 | + |
2869 | class TestTestResult(tests.TestCase): |
2870 | |
2871 | def check_timing(self, test_case, expected_re): |
2872 | @@ -800,7 +820,7 @@ |
2873 | def test_known_failure(self): |
2874 | """A KnownFailure being raised should trigger several result actions.""" |
2875 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2876 | - def done(self): pass |
2877 | + def stopTestRun(self): pass |
2878 | def startTests(self): pass |
2879 | def report_test_start(self, test): pass |
2880 | def report_known_failure(self, test, err): |
2881 | @@ -854,7 +874,7 @@ |
2882 | def test_add_not_supported(self): |
2883 | """Test the behaviour of invoking addNotSupported.""" |
2884 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2885 | - def done(self): pass |
2886 | + def stopTestRun(self): pass |
2887 | def startTests(self): pass |
2888 | def report_test_start(self, test): pass |
2889 | def report_unsupported(self, test, feature): |
2890 | @@ -898,7 +918,7 @@ |
2891 | def test_unavailable_exception(self): |
2892 | """An UnavailableFeature being raised should invoke addNotSupported.""" |
2893 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2894 | - def done(self): pass |
2895 | + def stopTestRun(self): pass |
2896 | def startTests(self): pass |
2897 | def report_test_start(self, test): pass |
2898 | def addNotSupported(self, test, feature): |
2899 | @@ -981,11 +1001,14 @@ |
2900 | because of our use of global state. |
2901 | """ |
2902 | old_root = tests.TestCaseInTempDir.TEST_ROOT |
2903 | + old_leak = tests.TestCase._first_thread_leaker_id |
2904 | try: |
2905 | tests.TestCaseInTempDir.TEST_ROOT = None |
2906 | + tests.TestCase._first_thread_leaker_id = None |
2907 | return testrunner.run(test) |
2908 | finally: |
2909 | tests.TestCaseInTempDir.TEST_ROOT = old_root |
2910 | + tests.TestCase._first_thread_leaker_id = old_leak |
2911 | |
2912 | def test_known_failure_failed_run(self): |
2913 | # run a test that generates a known failure which should be printed in |
2914 | @@ -1031,6 +1054,20 @@ |
2915 | '\n' |
2916 | 'OK \\(known_failures=1\\)\n') |
2917 | |
2918 | + def test_result_decorator(self): |
2919 | + # decorate results |
2920 | + calls = [] |
2921 | + class LoggingDecorator(tests.ForwardingResult): |
2922 | + def startTest(self, test): |
2923 | + tests.ForwardingResult.startTest(self, test) |
2924 | + calls.append('start') |
2925 | + test = unittest.FunctionTestCase(lambda:None) |
2926 | + stream = StringIO() |
2927 | + runner = tests.TextTestRunner(stream=stream, |
2928 | + result_decorators=[LoggingDecorator]) |
2929 | + result = self.run_test_runner(runner, test) |
2930 | + self.assertLength(1, calls) |
2931 | + |
2932 | def test_skipped_test(self): |
2933 | # run a test that is skipped, and check the suite as a whole still |
2934 | # succeeds. |
2935 | @@ -1103,10 +1140,6 @@ |
2936 | self.assertContainsRe(out.getvalue(), |
2937 | r'(?m)^ this test never runs') |
2938 | |
2939 | - def test_not_applicable_demo(self): |
2940 | - # just so you can see it in the test output |
2941 | - raise tests.TestNotApplicable('this test is just a demonstation') |
2942 | - |
2943 | def test_unsupported_features_listed(self): |
2944 | """When unsupported features are encountered they are detailed.""" |
2945 | class Feature1(tests.Feature): |
2946 | @@ -1261,6 +1294,34 @@ |
2947 | self.assertContainsRe(log, 'this will be kept') |
2948 | self.assertEqual(log, test._log_contents) |
2949 | |
2950 | + def test_startTestRun(self): |
2951 | + """run should call result.startTestRun()""" |
2952 | + calls = [] |
2953 | + class LoggingDecorator(tests.ForwardingResult): |
2954 | + def startTestRun(self): |
2955 | + tests.ForwardingResult.startTestRun(self) |
2956 | + calls.append('startTestRun') |
2957 | + test = unittest.FunctionTestCase(lambda:None) |
2958 | + stream = StringIO() |
2959 | + runner = tests.TextTestRunner(stream=stream, |
2960 | + result_decorators=[LoggingDecorator]) |
2961 | + result = self.run_test_runner(runner, test) |
2962 | + self.assertLength(1, calls) |
2963 | + |
2964 | + def test_stopTestRun(self): |
2965 | + """run should call result.stopTestRun()""" |
2966 | + calls = [] |
2967 | + class LoggingDecorator(tests.ForwardingResult): |
2968 | + def stopTestRun(self): |
2969 | + tests.ForwardingResult.stopTestRun(self) |
2970 | + calls.append('stopTestRun') |
2971 | + test = unittest.FunctionTestCase(lambda:None) |
2972 | + stream = StringIO() |
2973 | + runner = tests.TextTestRunner(stream=stream, |
2974 | + result_decorators=[LoggingDecorator]) |
2975 | + result = self.run_test_runner(runner, test) |
2976 | + self.assertLength(1, calls) |
2977 | + |
2978 | |
2979 | class SampleTestCase(tests.TestCase): |
2980 | |
2981 | @@ -1480,6 +1541,7 @@ |
2982 | self.assertEqual((time.sleep, (0.003,), {}), self._benchcalls[1][0]) |
2983 | self.assertIsInstance(self._benchcalls[0][1], bzrlib.lsprof.Stats) |
2984 | self.assertIsInstance(self._benchcalls[1][1], bzrlib.lsprof.Stats) |
2985 | + del self._benchcalls[:] |
2986 | |
2987 | def test_knownFailure(self): |
2988 | """Self.knownFailure() should raise a KnownFailure exception.""" |
2989 | @@ -1742,16 +1804,16 @@ |
2990 | tree = self.make_branch_and_memory_tree('a') |
2991 | self.assertIsInstance(tree, bzrlib.memorytree.MemoryTree) |
2992 | |
2993 | - |
2994 | -class TestSFTPMakeBranchAndTree(test_sftp_transport.TestCaseWithSFTPServer): |
2995 | - |
2996 | - def test_make_tree_for_sftp_branch(self): |
2997 | - """Transports backed by local directories create local trees.""" |
2998 | - # NB: This is arguably a bug in the definition of make_branch_and_tree. |
2999 | + def test_make_tree_for_local_vfs_backed_transport(self): |
3000 | + # make_branch_and_tree has to use local branch and repositories |
3001 | + # when the vfs transport and local disk are colocated, even if |
3002 | + # a different transport is in use for url generation. |
3003 | + from bzrlib.transport.fakevfat import FakeVFATServer |
3004 | + self.transport_server = FakeVFATServer |
3005 | + self.assertFalse(self.get_url('t1').startswith('file://')) |
3006 | tree = self.make_branch_and_tree('t1') |
3007 | base = tree.bzrdir.root_transport.base |
3008 | - self.failIf(base.startswith('sftp'), |
3009 | - 'base %r is on sftp but should be local' % base) |
3010 | + self.assertStartsWith(base, 'file://') |
3011 | self.assertEquals(tree.bzrdir.root_transport, |
3012 | tree.branch.bzrdir.root_transport) |
3013 | self.assertEquals(tree.bzrdir.root_transport, |
3014 | @@ -1817,6 +1879,20 @@ |
3015 | self.assertNotContainsRe("Test.b", output.getvalue()) |
3016 | self.assertLength(2, output.readlines()) |
3017 | |
3018 | + def test_lsprof_tests(self): |
3019 | + self.requireFeature(test_lsprof.LSProfFeature) |
3020 | + calls = [] |
3021 | + class Test(object): |
3022 | + def __call__(test, result): |
3023 | + test.run(result) |
3024 | + def run(test, result): |
3025 | + self.assertIsInstance(result, tests.ForwardingResult) |
3026 | + calls.append("called") |
3027 | + def countTestCases(self): |
3028 | + return 1 |
3029 | + self.run_selftest(test_suite_factory=Test, lsprof_tests=True) |
3030 | + self.assertLength(1, calls) |
3031 | + |
3032 | def test_random(self): |
3033 | # test randomising by listing a number of tests. |
3034 | output_123 = self.run_selftest(test_suite_factory=self.factory, |
3035 | @@ -1877,8 +1953,8 @@ |
3036 | def test_transport_sftp(self): |
3037 | try: |
3038 | import bzrlib.transport.sftp |
3039 | - except ParamikoNotPresent: |
3040 | - raise TestSkipped("Paramiko not present") |
3041 | + except errors.ParamikoNotPresent: |
3042 | + raise tests.TestSkipped("Paramiko not present") |
3043 | self.check_transport_set(bzrlib.transport.sftp.SFTPAbsoluteServer) |
3044 | |
3045 | def test_transport_memory(self): |
3046 | @@ -2072,7 +2148,8 @@ |
3047 | return self.out, self.err |
3048 | |
3049 | |
3050 | -class TestRunBzrSubprocess(tests.TestCaseWithTransport): |
3051 | +class TestWithFakedStartBzrSubprocess(tests.TestCaseWithTransport): |
3052 | + """Base class for tests testing how we might run bzr.""" |
3053 | |
3054 | def setUp(self): |
3055 | tests.TestCaseWithTransport.setUp(self) |
3056 | @@ -2089,6 +2166,9 @@ |
3057 | 'working_dir':working_dir, 'allow_plugins':allow_plugins}) |
3058 | return self.next_subprocess |
3059 | |
3060 | + |
3061 | +class TestRunBzrSubprocess(TestWithFakedStartBzrSubprocess): |
3062 | + |
3063 | def assertRunBzrSubprocess(self, expected_args, process, *args, **kwargs): |
3064 | """Run run_bzr_subprocess with args and kwargs using a stubbed process. |
3065 | |
3066 | @@ -2157,6 +2237,32 @@ |
3067 | StubProcess(), '', allow_plugins=True) |
3068 | |
3069 | |
3070 | +class TestFinishBzrSubprocess(TestWithFakedStartBzrSubprocess): |
3071 | + |
3072 | + def test_finish_bzr_subprocess_with_error(self): |
3073 | + """finish_bzr_subprocess allows specification of the desired exit code. |
3074 | + """ |
3075 | + process = StubProcess(err="unknown command", retcode=3) |
3076 | + result = self.finish_bzr_subprocess(process, retcode=3) |
3077 | + self.assertEqual('', result[0]) |
3078 | + self.assertContainsRe(result[1], 'unknown command') |
3079 | + |
3080 | + def test_finish_bzr_subprocess_ignoring_retcode(self): |
3081 | + """finish_bzr_subprocess allows the exit code to be ignored.""" |
3082 | + process = StubProcess(err="unknown command", retcode=3) |
3083 | + result = self.finish_bzr_subprocess(process, retcode=None) |
3084 | + self.assertEqual('', result[0]) |
3085 | + self.assertContainsRe(result[1], 'unknown command') |
3086 | + |
3087 | + def test_finish_subprocess_with_unexpected_retcode(self): |
3088 | + """finish_bzr_subprocess raises self.failureException if the retcode is |
3089 | + not the expected one. |
3090 | + """ |
3091 | + process = StubProcess(err="unknown command", retcode=3) |
3092 | + self.assertRaises(self.failureException, self.finish_bzr_subprocess, |
3093 | + process) |
3094 | + |
3095 | + |
3096 | class _DontSpawnProcess(Exception): |
3097 | """A simple exception which just allows us to skip unnecessary steps""" |
3098 | |
3099 | @@ -2240,39 +2346,8 @@ |
3100 | self.assertEqual(['foo', 'current'], chdirs) |
3101 | |
3102 | |
3103 | -class TestBzrSubprocess(tests.TestCaseWithTransport): |
3104 | - |
3105 | - def test_start_and_stop_bzr_subprocess(self): |
3106 | - """We can start and perform other test actions while that process is |
3107 | - still alive. |
3108 | - """ |
3109 | - process = self.start_bzr_subprocess(['--version']) |
3110 | - result = self.finish_bzr_subprocess(process) |
3111 | - self.assertContainsRe(result[0], 'is free software') |
3112 | - self.assertEqual('', result[1]) |
3113 | - |
3114 | - def test_start_and_stop_bzr_subprocess_with_error(self): |
3115 | - """finish_bzr_subprocess allows specification of the desired exit code. |
3116 | - """ |
3117 | - process = self.start_bzr_subprocess(['--versionn']) |
3118 | - result = self.finish_bzr_subprocess(process, retcode=3) |
3119 | - self.assertEqual('', result[0]) |
3120 | - self.assertContainsRe(result[1], 'unknown command') |
3121 | - |
3122 | - def test_start_and_stop_bzr_subprocess_ignoring_retcode(self): |
3123 | - """finish_bzr_subprocess allows the exit code to be ignored.""" |
3124 | - process = self.start_bzr_subprocess(['--versionn']) |
3125 | - result = self.finish_bzr_subprocess(process, retcode=None) |
3126 | - self.assertEqual('', result[0]) |
3127 | - self.assertContainsRe(result[1], 'unknown command') |
3128 | - |
3129 | - def test_start_and_stop_bzr_subprocess_with_unexpected_retcode(self): |
3130 | - """finish_bzr_subprocess raises self.failureException if the retcode is |
3131 | - not the expected one. |
3132 | - """ |
3133 | - process = self.start_bzr_subprocess(['--versionn']) |
3134 | - self.assertRaises(self.failureException, self.finish_bzr_subprocess, |
3135 | - process) |
3136 | +class TestActuallyStartBzrSubprocess(tests.TestCaseWithTransport): |
3137 | + """Tests that really need to do things with an external bzr.""" |
3138 | |
3139 | def test_start_and_stop_bzr_subprocess_send_signal(self): |
3140 | """finish_bzr_subprocess raises self.failureException if the retcode is |
3141 | @@ -2286,14 +2361,6 @@ |
3142 | self.assertEqual('', result[0]) |
3143 | self.assertEqual('bzr: interrupted\n', result[1]) |
3144 | |
3145 | - def test_start_and_stop_working_dir(self): |
3146 | - cwd = osutils.getcwd() |
3147 | - self.make_branch_and_tree('one') |
3148 | - process = self.start_bzr_subprocess(['root'], working_dir='one') |
3149 | - result = self.finish_bzr_subprocess(process, universal_newlines=True) |
3150 | - self.assertEndsWith(result[0], 'one\n') |
3151 | - self.assertEqual('', result[1]) |
3152 | - |
3153 | |
3154 | class TestKnownFailure(tests.TestCase): |
3155 | |
3156 | @@ -2681,10 +2748,52 @@ |
3157 | |
3158 | class TestTestSuite(tests.TestCase): |
3159 | |
3160 | + def test__test_suite_testmod_names(self): |
3161 | + # Test that a plausible list of test module names are returned |
3162 | + # by _test_suite_testmod_names. |
3163 | + test_list = tests._test_suite_testmod_names() |
3164 | + self.assertSubset([ |
3165 | + 'bzrlib.tests.blackbox', |
3166 | + 'bzrlib.tests.per_transport', |
3167 | + 'bzrlib.tests.test_selftest', |
3168 | + ], |
3169 | + test_list) |
3170 | + |
3171 | + def test__test_suite_modules_to_doctest(self): |
3172 | + # Test that a plausible list of modules to doctest is returned |
3173 | + # by _test_suite_modules_to_doctest. |
3174 | + test_list = tests._test_suite_modules_to_doctest() |
3175 | + self.assertSubset([ |
3176 | + 'bzrlib.timestamp', |
3177 | + ], |
3178 | + test_list) |
3179 | + |
3180 | def test_test_suite(self): |
3181 | - # This test is slow - it loads the entire test suite to operate, so we |
3182 | - # do a single test with one test in each category |
3183 | - test_list = [ |
3184 | + # test_suite() loads the entire test suite to operate. To avoid this |
3185 | + # overhead, and yet still be confident that things are happening, |
3186 | + # we temporarily replace two functions used by test_suite with |
3187 | + # test doubles that supply a few sample tests to load, and check they |
3188 | + # are loaded. |
3189 | + calls = [] |
3190 | + def _test_suite_testmod_names(): |
3191 | + calls.append("testmod_names") |
3192 | + return [ |
3193 | + 'bzrlib.tests.blackbox.test_branch', |
3194 | + 'bzrlib.tests.per_transport', |
3195 | + 'bzrlib.tests.test_selftest', |
3196 | + ] |
3197 | + original_testmod_names = tests._test_suite_testmod_names |
3198 | + def _test_suite_modules_to_doctest(): |
3199 | + calls.append("modules_to_doctest") |
3200 | + return ['bzrlib.timestamp'] |
3201 | + orig_modules_to_doctest = tests._test_suite_modules_to_doctest |
3202 | + def restore_names(): |
3203 | + tests._test_suite_testmod_names = original_testmod_names |
3204 | + tests._test_suite_modules_to_doctest = orig_modules_to_doctest |
3205 | + self.addCleanup(restore_names) |
3206 | + tests._test_suite_testmod_names = _test_suite_testmod_names |
3207 | + tests._test_suite_modules_to_doctest = _test_suite_modules_to_doctest |
3208 | + expected_test_list = [ |
3209 | # testmod_names |
3210 | 'bzrlib.tests.blackbox.test_branch.TestBranch.test_branch', |
3211 | ('bzrlib.tests.per_transport.TransportTests' |
3212 | @@ -2695,13 +2804,16 @@ |
3213 | # plugins can't be tested that way since selftest may be run with |
3214 | # --no-plugins |
3215 | ] |
3216 | - suite = tests.test_suite(test_list) |
3217 | - self.assertEquals(test_list, _test_ids(suite)) |
3218 | + suite = tests.test_suite() |
3219 | + self.assertEqual(set(["testmod_names", "modules_to_doctest"]), |
3220 | + set(calls)) |
3221 | + self.assertSubset(expected_test_list, _test_ids(suite)) |
3222 | |
3223 | def test_test_suite_list_and_start(self): |
3224 | # We cannot test this at the same time as the main load, because we want |
3225 | - # to know that starting_with == None works. So a second full load is |
3226 | - # incurred. |
3227 | + # to know that starting_with == None works. So a second load is |
3228 | + # incurred - note that the starting_with parameter causes a partial load |
3229 | + # rather than a full load so this test should be pretty quick. |
3230 | test_list = ['bzrlib.tests.test_selftest.TestTestSuite.test_test_suite'] |
3231 | suite = tests.test_suite(test_list, |
3232 | ['bzrlib.tests.test_selftest.TestTestSuite']) |
3233 | @@ -2853,19 +2965,3 @@ |
3234 | self.verbosity) |
3235 | tests.run_suite(suite, runner_class=MyRunner, stream=StringIO()) |
3236 | self.assertLength(1, calls) |
3237 | - |
3238 | - def test_done(self): |
3239 | - """run_suite should call result.done()""" |
3240 | - self.calls = 0 |
3241 | - def one_more_call(): self.calls += 1 |
3242 | - def test_function(): |
3243 | - pass |
3244 | - test = unittest.FunctionTestCase(test_function) |
3245 | - class InstrumentedTestResult(tests.ExtendedTestResult): |
3246 | - def done(self): one_more_call() |
3247 | - class MyRunner(tests.TextTestRunner): |
3248 | - def run(self, test): |
3249 | - return InstrumentedTestResult(self.stream, self.descriptions, |
3250 | - self.verbosity) |
3251 | - tests.run_suite(test, runner_class=MyRunner, stream=StringIO()) |
3252 | - self.assertEquals(1, self.calls) |
3253 | |
3254 | === modified file 'bzrlib/tests/test_shelf.py' |
3255 | --- bzrlib/tests/test_shelf.py 2009-08-26 07:40:38 +0000 |
3256 | +++ bzrlib/tests/test_shelf.py 2009-08-28 05:00:33 +0000 |
3257 | @@ -476,6 +476,8 @@ |
3258 | def test_shelve_skips_added_root(self): |
3259 | """Skip adds of the root when iterating through shelvable changes.""" |
3260 | tree = self.make_branch_and_tree('tree') |
3261 | + tree.lock_tree_write() |
3262 | + self.addCleanup(tree.unlock) |
3263 | creator = shelf.ShelfCreator(tree, tree.basis_tree()) |
3264 | self.addCleanup(creator.finalize) |
3265 | self.assertEqual([], list(creator.iter_shelvable())) |
3266 | |
3267 | === modified file 'bzrlib/tests/test_smart.py' |
3268 | --- bzrlib/tests/test_smart.py 2009-08-17 23:15:55 +0000 |
3269 | +++ bzrlib/tests/test_smart.py 2009-09-03 15:26:27 +0000 |
3270 | @@ -36,6 +36,7 @@ |
3271 | smart, |
3272 | tests, |
3273 | urlutils, |
3274 | + versionedfile, |
3275 | ) |
3276 | from bzrlib.branch import Branch, BranchReferenceFormat |
3277 | import bzrlib.smart.branch |
3278 | @@ -87,8 +88,7 @@ |
3279 | if self._chroot_server is None: |
3280 | backing_transport = tests.TestCaseWithTransport.get_transport(self) |
3281 | self._chroot_server = chroot.ChrootServer(backing_transport) |
3282 | - self._chroot_server.setUp() |
3283 | - self.addCleanup(self._chroot_server.tearDown) |
3284 | + self.start_server(self._chroot_server) |
3285 | t = get_transport(self._chroot_server.get_url()) |
3286 | if relpath is not None: |
3287 | t = t.clone(relpath) |
3288 | @@ -113,6 +113,25 @@ |
3289 | return self.get_transport().get_smart_medium() |
3290 | |
3291 | |
3292 | +class TestByteStreamToStream(tests.TestCase): |
3293 | + |
3294 | + def test_repeated_substreams_same_kind_are_one_stream(self): |
3295 | + # Make a stream - an iterable of bytestrings. |
3296 | + stream = [('text', [versionedfile.FulltextContentFactory(('k1',), None, |
3297 | + None, 'foo')]),('text', [ |
3298 | + versionedfile.FulltextContentFactory(('k2',), None, None, 'bar')])] |
3299 | + fmt = bzrdir.format_registry.get('pack-0.92')().repository_format |
3300 | + bytes = smart.repository._stream_to_byte_stream(stream, fmt) |
3301 | + streams = [] |
3302 | + # Iterate the resulting iterable; checking that we get only one stream |
3303 | + # out. |
3304 | + fmt, stream = smart.repository._byte_stream_to_stream(bytes) |
3305 | + for kind, substream in stream: |
3306 | + streams.append((kind, list(substream))) |
3307 | + self.assertLength(1, streams) |
3308 | + self.assertLength(2, streams[0][1]) |
3309 | + |
3310 | + |
3311 | class TestSmartServerResponse(tests.TestCase): |
3312 | |
3313 | def test__eq__(self): |
3314 | |
3315 | === modified file 'bzrlib/tests/test_transport.py' |
3316 | --- bzrlib/tests/test_transport.py 2009-03-23 14:59:43 +0000 |
3317 | +++ bzrlib/tests/test_transport.py 2009-08-27 22:17:35 +0000 |
3318 | @@ -363,24 +363,22 @@ |
3319 | def test_abspath(self): |
3320 | # The abspath is always relative to the chroot_url. |
3321 | server = ChrootServer(get_transport('memory:///foo/bar/')) |
3322 | - server.setUp() |
3323 | + self.start_server(server) |
3324 | transport = get_transport(server.get_url()) |
3325 | self.assertEqual(server.get_url(), transport.abspath('/')) |
3326 | |
3327 | subdir_transport = transport.clone('subdir') |
3328 | self.assertEqual(server.get_url(), subdir_transport.abspath('/')) |
3329 | - server.tearDown() |
3330 | |
3331 | def test_clone(self): |
3332 | server = ChrootServer(get_transport('memory:///foo/bar/')) |
3333 | - server.setUp() |
3334 | + self.start_server(server) |
3335 | transport = get_transport(server.get_url()) |
3336 | # relpath from root and root path are the same |
3337 | relpath_cloned = transport.clone('foo') |
3338 | abspath_cloned = transport.clone('/foo') |
3339 | self.assertEqual(server, relpath_cloned.server) |
3340 | self.assertEqual(server, abspath_cloned.server) |
3341 | - server.tearDown() |
3342 | |
3343 | def test_chroot_url_preserves_chroot(self): |
3344 | """Calling get_transport on a chroot transport's base should produce a |
3345 | @@ -393,12 +391,11 @@ |
3346 | new_transport = get_transport(parent_url) |
3347 | """ |
3348 | server = ChrootServer(get_transport('memory:///path/subpath')) |
3349 | - server.setUp() |
3350 | + self.start_server(server) |
3351 | transport = get_transport(server.get_url()) |
3352 | new_transport = get_transport(transport.base) |
3353 | self.assertEqual(transport.server, new_transport.server) |
3354 | self.assertEqual(transport.base, new_transport.base) |
3355 | - server.tearDown() |
3356 | |
3357 | def test_urljoin_preserves_chroot(self): |
3358 | """Using urlutils.join(url, '..') on a chroot URL should not produce a |
3359 | @@ -410,11 +407,10 @@ |
3360 | new_transport = get_transport(parent_url) |
3361 | """ |
3362 | server = ChrootServer(get_transport('memory:///path/')) |
3363 | - server.setUp() |
3364 | + self.start_server(server) |
3365 | transport = get_transport(server.get_url()) |
3366 | self.assertRaises( |
3367 | InvalidURLJoin, urlutils.join, transport.base, '..') |
3368 | - server.tearDown() |
3369 | |
3370 | |
3371 | class ChrootServerTest(TestCase): |
3372 | @@ -428,7 +424,10 @@ |
3373 | backing_transport = MemoryTransport() |
3374 | server = ChrootServer(backing_transport) |
3375 | server.setUp() |
3376 | - self.assertTrue(server.scheme in _get_protocol_handlers().keys()) |
3377 | + try: |
3378 | + self.assertTrue(server.scheme in _get_protocol_handlers().keys()) |
3379 | + finally: |
3380 | + server.tearDown() |
3381 | |
3382 | def test_tearDown(self): |
3383 | backing_transport = MemoryTransport() |
3384 | @@ -441,8 +440,10 @@ |
3385 | backing_transport = MemoryTransport() |
3386 | server = ChrootServer(backing_transport) |
3387 | server.setUp() |
3388 | - self.assertEqual('chroot-%d:///' % id(server), server.get_url()) |
3389 | - server.tearDown() |
3390 | + try: |
3391 | + self.assertEqual('chroot-%d:///' % id(server), server.get_url()) |
3392 | + finally: |
3393 | + server.tearDown() |
3394 | |
3395 | |
3396 | class ReadonlyDecoratorTransportTest(TestCase): |
3397 | @@ -460,15 +461,12 @@ |
3398 | import bzrlib.transport.readonly as readonly |
3399 | # connect to '.' via http which is not listable |
3400 | server = HttpServer() |
3401 | - server.setUp() |
3402 | - try: |
3403 | - transport = get_transport('readonly+' + server.get_url()) |
3404 | - self.failUnless(isinstance(transport, |
3405 | - readonly.ReadonlyTransportDecorator)) |
3406 | - self.assertEqual(False, transport.listable()) |
3407 | - self.assertEqual(True, transport.is_readonly()) |
3408 | - finally: |
3409 | - server.tearDown() |
3410 | + self.start_server(server) |
3411 | + transport = get_transport('readonly+' + server.get_url()) |
3412 | + self.failUnless(isinstance(transport, |
3413 | + readonly.ReadonlyTransportDecorator)) |
3414 | + self.assertEqual(False, transport.listable()) |
3415 | + self.assertEqual(True, transport.is_readonly()) |
3416 | |
3417 | |
3418 | class FakeNFSDecoratorTests(TestCaseInTempDir): |
3419 | @@ -492,31 +490,24 @@ |
3420 | from bzrlib.tests.http_server import HttpServer |
3421 | # connect to '.' via http which is not listable |
3422 | server = HttpServer() |
3423 | - server.setUp() |
3424 | - try: |
3425 | - transport = self.get_nfs_transport(server.get_url()) |
3426 | - self.assertIsInstance( |
3427 | - transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator) |
3428 | - self.assertEqual(False, transport.listable()) |
3429 | - self.assertEqual(True, transport.is_readonly()) |
3430 | - finally: |
3431 | - server.tearDown() |
3432 | + self.start_server(server) |
3433 | + transport = self.get_nfs_transport(server.get_url()) |
3434 | + self.assertIsInstance( |
3435 | + transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator) |
3436 | + self.assertEqual(False, transport.listable()) |
3437 | + self.assertEqual(True, transport.is_readonly()) |
3438 | |
3439 | def test_fakenfs_server_default(self): |
3440 | # a FakeNFSServer() should bring up a local relpath server for itself |
3441 | import bzrlib.transport.fakenfs as fakenfs |
3442 | server = fakenfs.FakeNFSServer() |
3443 | - server.setUp() |
3444 | - try: |
3445 | - # the url should be decorated appropriately |
3446 | - self.assertStartsWith(server.get_url(), 'fakenfs+') |
3447 | - # and we should be able to get a transport for it |
3448 | - transport = get_transport(server.get_url()) |
3449 | - # which must be a FakeNFSTransportDecorator instance. |
3450 | - self.assertIsInstance( |
3451 | - transport, fakenfs.FakeNFSTransportDecorator) |
3452 | - finally: |
3453 | - server.tearDown() |
3454 | + self.start_server(server) |
3455 | + # the url should be decorated appropriately |
3456 | + self.assertStartsWith(server.get_url(), 'fakenfs+') |
3457 | + # and we should be able to get a transport for it |
3458 | + transport = get_transport(server.get_url()) |
3459 | + # which must be a FakeNFSTransportDecorator instance. |
3460 | + self.assertIsInstance(transport, fakenfs.FakeNFSTransportDecorator) |
3461 | |
3462 | def test_fakenfs_rename_semantics(self): |
3463 | # a FakeNFS transport must mangle the way rename errors occur to |
3464 | @@ -587,8 +578,7 @@ |
3465 | def setUp(self): |
3466 | super(TestTransportImplementation, self).setUp() |
3467 | self._server = self.transport_server() |
3468 | - self._server.setUp() |
3469 | - self.addCleanup(self._server.tearDown) |
3470 | + self.start_server(self._server) |
3471 | |
3472 | def get_transport(self, relpath=None): |
3473 | """Return a connected transport to the local directory. |
3474 | |
3475 | === modified file 'doc/_templates/index.html' |
3476 | --- doc/_templates/index.html 2009-07-22 14:36:38 +0000 |
3477 | +++ doc/_templates/index.html 2009-08-18 00:10:19 +0000 |
3478 | @@ -26,19 +26,17 @@ |
3479 | <p class="biglink"><a class="biglink" href="{{ pathto("en/upgrade-guide/index") }}">Upgrade Guide</a><br/> |
3480 | <span class="linkdescr">moving to Bazaar 2.x</span> |
3481 | </p> |
3482 | - <p class="biglink"><a class="biglink" href="{{ pathto("en/migration/index") }}">Migration Docs</a><br/> |
3483 | + <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/migration/en/">Migration Docs</a><br/> |
3484 | <span class="linkdescr">for refugees of other tools</span> |
3485 | </p> |
3486 | - <p class="biglink"><a class="biglink" href="{{ pathto("developers/index") }}">Developer Docs</a><br/> |
3487 | - <span class="linkdescr">polices and tools for giving back</span> |
3488 | + <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/plugins/en/">Plugins Guide</a><br/> |
3489 | + <span class="linkdescr">help on popular plugins</span> |
3490 | </p> |
3491 | </td></tr> |
3492 | </table> |
3493 | |
3494 | - <p>Other languages: |
3495 | - <a href="{{ pathto("index.es") }}">Spanish</a>, |
3496 | - <a href="{{ pathto("index.ru") }}">Russian</a> |
3497 | - </p> |
3498 | + <p>Keen to help? See the <a href="{{ pathto("developers/index") }}">Developer Docs</a> |
3499 | + for policies and tools on contributing code, tests and documentation.</p> |
3500 | |
3501 | |
3502 | <h2>Related Links</h2> |
3503 | @@ -59,4 +57,9 @@ |
3504 | </td></tr> |
3505 | </table> |
3506 | |
3507 | + <hr> |
3508 | + <p>Other languages: |
3509 | + <a href="{{ pathto("index.es") }}">Spanish</a>, |
3510 | + <a href="{{ pathto("index.ru") }}">Russian</a> |
3511 | + </p> |
3512 | {% endblock %} |
3513 | |
3514 | === modified file 'doc/contents.txt' |
3515 | --- doc/contents.txt 2009-07-22 13:41:01 +0000 |
3516 | +++ doc/contents.txt 2009-08-18 00:10:19 +0000 |
3517 | @@ -20,7 +20,6 @@ |
3518 | |
3519 | en/release-notes/index |
3520 | en/upgrade-guide/index |
3521 | - en/migration/index |
3522 | developers/index |
3523 | |
3524 | |
3525 | |
3526 | === modified file 'doc/developers/bug-handling.txt' |
3527 | --- doc/developers/bug-handling.txt 2009-08-24 00:29:31 +0000 |
3528 | +++ doc/developers/bug-handling.txt 2009-08-24 20:16:15 +0000 |
3529 | @@ -142,12 +142,8 @@ |
3530 | it's not a good idea for a developer to spend time reproducing the bug |
3531 | until they're going to work on it.) |
3532 | Triaged |
3533 | - This is an odd state - one we consider a bug in launchpad, as it really |
3534 | - means "Importance has been set". We use this to mean the same thing |
3535 | - as confirmed, and set no preference on whether Confirmed or Triaged are |
3536 | - used. Please do not change a "Confirmed" bug to "Triaged" or vice verca - |
3537 | - any reports we create or use will always search for both "Confirmed" and |
3538 | - "Triaged" or neither "Confirmed" nor "Triaged". |
3539 | + We don't use this status. If it is set, it means the same as |
3540 | + Confirmed. |
3541 | In Progress |
3542 | Someone has started working on this. |
3543 | Won't Fix |
3544 | |
3545 | === removed directory 'doc/en/migration' |
3546 | === removed file 'doc/en/migration/index.txt' |
3547 | --- doc/en/migration/index.txt 2009-07-22 13:41:01 +0000 |
3548 | +++ doc/en/migration/index.txt 1970-01-01 00:00:00 +0000 |
3549 | @@ -1,6 +0,0 @@ |
3550 | -Bazaar Migration Guide |
3551 | -====================== |
3552 | - |
3553 | -This guide is under development. For notes collected so far, see |
3554 | -http://bazaar-vcs.org/BzrMigration/. |
3555 | - |
This adds 'pack-on-the-fly' support for gc streaming.
1) It restores 'groupcompress' sorting for the requested inventories and texts.
2) It uses a heuristic that is approximately: if a given block is less than 75% the size of a 'fully utilized' block, then don't re-use the content directly, but schedule it to be packed into a new block. The specifics are in '_LazyGroupContentManager.check_is_well_utilized()'.
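The 75% heuristic can be sketched roughly as follows. This is a hypothetical simplification for illustration only — the function name, parameters, and threshold wiring here are invented, and the real decision logic lives in bzrlib's '_LazyGroupContentManager.check_is_well_utilized()':

```python
def should_repack_block(block_size, used_bytes, utilization_threshold=0.75):
    """Decide whether a group-compress block should be repacked on the fly.

    Illustrative sketch of the heuristic described above: if the bytes we
    actually need from a block fall below 75% of what a fully utilized
    block of that size would carry, don't reuse the block as-is; instead
    schedule its content to be inserted into a new, denser block.
    """
    return used_bytes < utilization_threshold * block_size

# A 4MB block of which only 2MB is still referenced is repacked:
print(should_repack_block(block_size=4 * 1024 * 1024,
                          used_bytes=2 * 1024 * 1024))  # prints True
```

The point of the threshold is to avoid churn: well-packed sources stream through unchanged (no extra CPU), while poorly-packed sources pay a one-time repacking cost during transfer.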
3) I did some real-world testing, and the results seem pretty good.
To start with, the copy of bzr.dev on Launchpad is currently very poorly packed, taking up >90MB of disk space for a single pack file. After branching that using bzr.dev, I get a 101MB repository locally. If I 'bzr pack', I end up with 39MB (30MB in .pack, and 8.8MB in indices)
101MB poorly-packed-from-lp
101MB post 'bzr.dev branch new-repo' (takes 1m0s locally)
39MB post 'bzr pack' (takes 2m0s locally)
I then tested the results of using the pack-on-the-fly code:
41MB post 'bzr-pack branch new-repo' (takes 1m43s locally)
41MB post 'bzr-pack branch new-repo new-repo2' (takes 1m0s)
Which means that pack-on-the-fly is working as we hoped it would. It
a) Gives almost as good of pack results as if we had issued 'bzr pack'
b) Takes a bit of extra time when the source is poorly packed (1m => 1m45s)
c) Takes no extra time when the source is already properly packed (1m => 1m)
4) Unfortunately this was built on top of bzr.dev, but we can land it there, and then cherrypick it back to 2.0. I'll still submit a merge request for 2.0.