Merge lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern into lp:~jameinel/bzr/2.1-static-tuple-btree
Proposed by: John A Meinel
Status: Merged
Merge reported by: John A Meinel
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern
Merge into: lp:~jameinel/bzr/2.1-static-tuple-btree
Diff against target: 4415 lines (66 files modified)
Makefile (+1/-1) NEWS (+207/-63) README (+35/-67) bzrlib/__init__.py (+1/-1) bzrlib/_bencode_pyx.pyx (+9/-1) bzrlib/_btree_serializer_pyx.pyx (+37/-10) bzrlib/_export_c_api.h (+5/-2) bzrlib/_import_c_api.h (+6/-3) bzrlib/_simple_set_pyx.pxd (+27/-3) bzrlib/_simple_set_pyx.pyx (+130/-113) bzrlib/_static_tuple_c.c (+192/-96) bzrlib/_static_tuple_py.py (+2/-0) bzrlib/branch.py (+8/-2) bzrlib/btree_index.py (+17/-9) bzrlib/builtins.py (+16/-11) bzrlib/bundle/apply_bundle.py (+2/-1) bzrlib/commands.py (+1/-1) bzrlib/decorators.py (+27/-0) bzrlib/diff.py (+9/-6) bzrlib/foreign.py (+1/-2) bzrlib/help_topics/__init__.py (+8/-26) bzrlib/help_topics/en/debug-flags.txt (+5/-3) bzrlib/index.py (+5/-3) bzrlib/lock.py (+21/-0) bzrlib/lockable_files.py (+2/-2) bzrlib/lockdir.py (+2/-0) bzrlib/merge.py (+10/-1) bzrlib/mutabletree.py (+11/-3) bzrlib/osutils.py (+9/-2) bzrlib/python-compat.h (+6/-0) bzrlib/reconfigure.py (+1/-3) bzrlib/remote.py (+10/-3) bzrlib/repofmt/pack_repo.py (+20/-6) bzrlib/repository.py (+13/-2) bzrlib/send.py (+2/-3) bzrlib/static_tuple.py (+25/-0) bzrlib/tests/__init__.py (+31/-1) bzrlib/tests/blackbox/test_merge.py (+1/-1) bzrlib/tests/blackbox/test_uncommit.py (+2/-2) bzrlib/tests/lock_helpers.py (+2/-0) bzrlib/tests/per_branch/test_locking.py (+2/-2) bzrlib/tests/per_repository/test_write_group.py (+1/-1) bzrlib/tests/per_repository_chk/test_supported.py (+35/-0) bzrlib/tests/per_uifactory/__init__.py (+148/-0) bzrlib/tests/test__simple_set.py (+161/-69) bzrlib/tests/test__static_tuple.py (+60/-6) bzrlib/tests/test_btree_index.py (+39/-0) bzrlib/tests/test_decorators.py (+33/-10) bzrlib/tests/test_diff.py (+46/-0) bzrlib/tests/test_index.py (+10/-1) bzrlib/tests/test_msgeditor.py (+1/-1) bzrlib/tests/test_mutabletree.py (+30/-9) bzrlib/tests/test_osutils.py (+43/-0) bzrlib/tests/test_reconfigure.py (+14/-1) bzrlib/tests/test_remote.py (+20/-0) bzrlib/tests/test_status.py (+1/-1) bzrlib/tests/test_ui.py (+54/-51) bzrlib/ui/__init__.py (+29/-4) 
bzrlib/ui/text.py (+11/-3) bzrlib/util/_bencode_py.py (+7/-0) bzrlib/workingtree.py (+4/-4) doc/developers/HACKING.txt (+13/-0) doc/developers/releasing.txt (+14/-3) doc/en/upgrade-guide/data_migration.txt (+52/-6) doc/en/user-guide/branching_a_project.txt (+5/-5) setup.py (+7/-2)
To merge this branch: bzr merge lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern
Related bugs:
Reviewer: Andrew Bennetts (community), status: Approve
Review via email: mp+13081@code.launchpad.net
Commit message
Description of the change
Revision history for this message
John A Meinel (jameinel) wrote:
Andrew Bennetts (spiv):
review: Approve
John A Meinel (jameinel) wrote:
Bumping this back to 'work-in-progress' for now.
Running more of the test suite has shown some failures: some really *random* failures on PQM that I cannot reproduce locally, but at least I did get some local failures that I can work on fixing.
Preview Diff
1 | === modified file 'Makefile' |
2 | --- Makefile 2009-10-01 02:46:17 +0000 |
3 | +++ Makefile 2009-10-15 18:31:14 +0000 |
4 | @@ -409,7 +409,7 @@ |
5 | $(MAKE) clean && \ |
6 | $(MAKE) && \ |
7 | bzr export $$expdir && \ |
8 | - cp bzrlib/*.c $$expdir/bzrlib/. && \ |
9 | + cp bzrlib/*.c bzrlib/*.h $$expdir/bzrlib/. && \ |
10 | tar cfz $$tarball -C $$expbasedir bzr-$$version && \ |
11 | gpg --detach-sign $$tarball && \ |
12 | rm -rf $$expbasedir |
13 | |
14 | === modified file 'NEWS' |
15 | --- NEWS 2009-10-06 15:58:12 +0000 |
16 | +++ NEWS 2009-10-15 18:31:14 +0000 |
17 | @@ -6,11 +6,113 @@ |
18 | :depth: 1 |
19 | |
20 | |
21 | -2.1.0 series (not released yet) |
22 | -############################### |
23 | - |
24 | -Compatibility Breaks |
25 | -******************** |
26 | +bzr 2.1.0b2 (not released yet) |
27 | +############################## |
28 | + |
29 | +:Codename: |
30 | +:2.1.0b2: ??? |
31 | + |
32 | + |
33 | +Compatibility Breaks |
34 | +******************** |
35 | + |
36 | +New Features |
37 | +************ |
38 | + |
39 | +Bug Fixes |
40 | +********* |
41 | + |
42 | +Improvements |
43 | +************ |
44 | + |
45 | +* When reading index files, we now use a ``StaticTuple`` rather than a |
46 | + plain ``tuple`` object. This generally gives a 20% decrease in peak |
47 | + memory, and can give a performance boost up to 40% on large projects. |
48 | + (John Arbash Meinel) |
49 | + |
50 | +Documentation |
51 | +************* |
52 | + |
53 | +API Changes |
54 | +*********** |
55 | + |
56 | +* ``UIFactory`` now has new ``show_error``, ``show_message`` and |
57 | + ``show_warning`` methods, which can be hooked by non-text UIs. |
58 | + (Martin Pool) |
59 | + |
60 | +Internals |
61 | +********* |
62 | + |
63 | +* Added ``bzrlib._simple_set_pyx``. This is a hybrid between a Set and a |
64 | + Dict (it only holds keys, but you can lookup the object located at a |
65 | + given key). It has significantly reduced memory consumption versus the |
66 | + builtin objects (1/2 the size of Set, 1/3rd the size of Dict). This is |
67 | + used as the interning structure for StaticTuple objects. |
68 | + (John Arbash Meinel) |
69 | + |
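The ``_simple_set_pyx`` entry above describes a structure that, unlike a plain set, can hand back the stored object equal to a given key, which is exactly what interning needs. A rough Python 3 sketch of that idea (the real structure is a hand-written C hash table; this class name is hypothetical and illustrative only):

```python
class SimpleSetSketch:
    """Illustrative stand-in for the C ``_simple_set`` idea: a set
    that stores only keys, but can return the canonical (already
    stored) object equal to a lookup key."""

    def __init__(self):
        self._table = {}

    def add(self, key):
        # Return the already-stored equal object if present,
        # otherwise store this one and return it.
        return self._table.setdefault(key, key)

    def __getitem__(self, key):
        return self._table[key]

    def __len__(self):
        return len(self._table)


s = SimpleSetSketch()
a = (1, 2)
b = (1, 2)
assert s.add(a) is a
assert s.add(b) is a   # b is discarded; the canonical object comes back
```

A dict-backed sketch cannot show the memory win (the point of the C version is storing one pointer per entry instead of a key/value pair), but it does show the add-and-return-canonical contract.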
70 | +* ``bzrlib._static_tuple_pyx.StaticTuple`` is now available and used by |
71 | + the btree index parser. This class functions similarly to ``tuple`` |
72 | + objects. However, it can only point at other ``StaticTuple`` instances |
73 | + or strings. This allows us to remove it from the garbage collector (it |
74 | + cannot be in a cycle), it also allows us to intern the objects. In |
75 | + testing, this can reduce peak memory by 20-40%, and significantly |
76 | + improve performance by removing objects from being inspected by the |
77 | + garbage collector. (John Arbash Meinel) |
78 | + |
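The StaticTuple entry above has two key properties: members are restricted to strings or other StaticTuples (so no reference cycles are possible), and equal instances can be interned to a single canonical object. A toy Python 3 sketch of those semantics (the class name and module-level intern table are hypothetical; the real type lives in ``_static_tuple_c``):

```python
_interned = {}

class StaticTupleSketch(tuple):
    """Toy stand-in for bzrlib's StaticTuple: immutable, restricted
    to str/StaticTupleSketch members, and supporting intern()."""

    def __new__(cls, *args):
        for a in args:
            if not isinstance(a, (str, StaticTupleSketch)):
                raise TypeError('only str or StaticTupleSketch allowed')
        return tuple.__new__(cls, args)

    def intern(self):
        # Return the canonical equal instance, registering self if new.
        return _interned.setdefault(self, self)


k1 = StaticTupleSketch('file-id', 'revision-id').intern()
k2 = StaticTupleSketch('file-id', 'revision-id').intern()
assert k1 is k2   # duplicates collapse to one object
```

The memory and GC benefits come from the C implementation (untracked by the garbage collector, compact layout); this sketch only models the type restriction and interning contract.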
79 | +Testing |
80 | +******* |
81 | + |
82 | + |
83 | +bzr 2.0.2 (not released yet) |
84 | +############################ |
85 | + |
86 | +:Codename: |
87 | +:2.0.2: ??? |
88 | + |
89 | +Compatibility Breaks |
90 | +******************** |
91 | + |
92 | +New Features |
93 | +************ |
94 | + |
95 | +Bug Fixes |
96 | +********* |
97 | + |
98 | +Improvements |
99 | +************ |
100 | + |
101 | +Documentation |
102 | +************* |
103 | + |
104 | +API Changes |
105 | +*********** |
106 | + |
107 | +Internals |
108 | +********* |
109 | + |
110 | +Testing |
111 | +******* |
112 | + |
113 | + |
114 | +bzr 2.1.0b1 |
115 | +########### |
116 | + |
117 | +:Codename: While the cat is away |
118 | +:2.1.0b1: 2009-10-14 |
119 | + |
120 | +This is the first development release in the new split "stable" and |
121 | +"development" series. As such, the release is a snapshot of bzr.dev |
122 | +without creating a release candidate first. This release includes a |
123 | +fair amount of internal changes, with deprecated code being removed, |
124 | +and several new feature developments. People looking for a stable code |
125 | +base with only bugfixes should focus on the 2.0.1 release. All bugfixes |
126 | +present in 2.0.1 are present in 2.1.0b1. |
127 | + |
128 | +Highlights include support for ``bzr+ssh://host/~/homedir`` style urls, |
129 | +finer control over the plugin search path via extended BZR_PLUGIN_PATH |
130 | +syntax, visible warnings when extension modules fail to load, and improved |
131 | +error handling during unlocking. |
132 | + |
133 | |
134 | New Features |
135 | ************ |
136 | @@ -37,6 +139,10 @@ |
137 | automatically benefit from this feature when ``bzr`` on the server is |
138 | upgraded. (Andrew Bennetts, #109143) |
139 | |
140 | +* Extensions can now be compiled if either Cython or Pyrex is available. |
141 | + Currently Pyrex is preferred, but that may change in the future. |
142 | + (Arkanes) |
143 | + |
144 | * Give more control on BZR_PLUGIN_PATH by providing a way to refer to or |
145 | disable the user, site and core plugin directories. |
146 | (Vincent Ladeuil, #412930, #316192, #145612) |
147 | @@ -56,54 +162,24 @@ |
148 | filename will issue a warning and skip over those files. |
149 | (Robert Collins, #3918) |
150 | |
151 | -* ``bzr check`` in pack-0.92, 1.6 and 1.9 format repositories will no |
152 | - longer report incorrect errors about ``Missing inventory ('TREE_ROOT', ...)`` |
153 | - (Robert Collins, #416732) |
154 | - |
155 | * ``bzr dpush`` now aborts if uncommitted changes (including pending merges) |
156 | are present in the working tree. The configuration option ``dpush_strict`` |
157 | can be used to set the default for this behavior. |
158 | (Vincent Ladeuil, #438158) |
159 | |
160 | -* ``bzr info -v`` on a 2a format still claimed that it was a "Development |
161 | - format" (John Arbash Meinel, #424392) |
162 | - |
163 | * ``bzr merge`` and ``bzr remove-tree`` now requires --force if pending |
164 | merges are present in the working tree. |
165 | (Vincent Ladeuil, #426344) |
166 | |
167 | -* bzr will attempt to authenticate with SSH servers that support |
168 | - ``keyboard-interactive`` auth but not ``password`` auth when using |
169 | - Paramiko. (Andrew Bennetts, #433846) |
170 | - |
171 | * Clearer message when Bazaar runs out of memory, instead of a ``MemoryError`` |
172 | traceback. (Martin Pool, #109115) |
173 | |
174 | -* Conversion to 2a will create a single pack for all the new revisions (as |
175 | - long as it ran without interruption). This improves both ``bzr upgrade`` |
176 | - and ``bzr pull`` or ``bzr merge`` from local branches in older formats. |
177 | - The autopack logic that occurs every 100 revisions during local |
178 | - conversions was not returning that pack's identifier, which resulted in |
179 | - the partial packs created during the conversion not being consolidated |
180 | - at the end of the conversion process. (Robert Collins, #423818) |
181 | - |
182 | * Don't give a warning on Windows when failing to import ``_readdir_pyx`` |
183 | as it is never built. (John Arbash Meinel, #430645) |
184 | |
185 | * Don't restrict the command name used to run the test suite. |
186 | (Vincent Ladeuil, #419950) |
187 | |
188 | -* Fetches from 2a to 2a are now again requested in 'groupcompress' order. |
189 | - Groups that are seen as 'underutilized' will be repacked on-the-fly. |
190 | - This means that when the source is fully packed, there is minimal |
191 | - overhead during the fetch, but if the source is poorly packed the result |
192 | - is a fairly well packed repository (not as good as 'bzr pack' but |
193 | - good-enough.) (Robert Collins, John Arbash Meinel, #402652) |
194 | - |
195 | -* Fixed fetches from a stacked branch on a smart server that were failing |
196 | - with some combinations of remote and local formats. This was causing |
197 | - "unknown object type identifier 60" errors. (Andrew Bennetts, #427736) |
198 | - |
199 | * ftp transports were built differently when the kerberos python module was |
200 | present leading to obscure failures related to ASCII/BINARY modes. |
201 | (Vincent Ladeuil, #443041) |
202 | @@ -111,23 +187,6 @@ |
203 | * Network streams now decode adjacent records of the same type into a |
204 | single stream, reducing layering churn. (Robert Collins) |
205 | |
206 | -* Make sure that we unlock the tree if we fail to create a TreeTransform |
207 | - object when doing a merge, and there is limbo, or pending-deletions |
208 | - directory. (Gary van der Merwe, #427773) |
209 | - |
210 | -* Occasional IndexError on renamed files have been fixed. Operations that |
211 | - set a full inventory in the working tree will now go via the |
212 | - apply_inventory_delta code path which is simpler and easier to |
213 | - understand than dirstates set_state_from_inventory method. This may |
214 | - have a small performance impact on operations built on _write_inventory, |
215 | - but such operations are already doing full tree scans, so no radical |
216 | - performance change should be observed. (Robert Collins, #403322) |
217 | - |
218 | -* Prevent some kinds of incomplete data from being committed to a 2a |
219 | - repository, such as revisions without inventories or inventories without |
220 | - chk_bytes root records. |
221 | - (Andrew Bennetts, #423506) |
222 | - |
223 | * PreviewTree behaves correctly when get_file_mtime is invoked on an unmodified |
224 | file. (Aaron Bentley, #251532) |
225 | |
226 | @@ -138,9 +197,6 @@ |
227 | domains or user ids embedding '.sig'. Now they can. |
228 | (Matthew Fuller, Vincent Ladeuil, #430868) |
229 | |
230 | -* When a file kind becomes unversionable after being added, a sensible |
231 | - error will be shown instead of a traceback. (Robert Collins, #438569) |
232 | - |
233 | Improvements |
234 | ************ |
235 | |
236 | @@ -153,6 +209,12 @@ |
237 | See also <https://answers.launchpad.net/bzr/+faq/703>. |
238 | (Martin Pool, #406113, #430529) |
239 | |
240 | +* Secondary errors that occur during Branch.unlock and Repository.unlock |
241 | + no longer obscure the original error. These methods now use a new |
242 | + decorator, ``only_raises``. This fixes many causes of |
243 | + ``TooManyConcurrentRequests`` and similar errors. |
244 | + (Andrew Bennetts, #429747) |
245 | + |
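The ``only_raises`` entry above describes a decorator that keeps cleanup methods such as ``unlock()`` from masking the original error: only the listed exception types propagate, anything else is swallowed. A minimal sketch of that behaviour (illustrative; the real bzrlib decorator also logs the suppressed exception):

```python
import functools

def only_raises(*allowed):
    """Sketch of the ``only_raises`` idea: exceptions not listed in
    *allowed* are suppressed, so an error raised during cleanup
    cannot replace the exception that triggered the cleanup."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except allowed:
                raise
            except Exception:
                pass  # a real implementation would log this
        return wrapper
    return decorator
```

Used as ``@only_raises(LockNotHeld, LockBroken)`` on an ``unlock`` method, a secondary failure inside it is discarded rather than raised on top of the original error.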
246 | Documentation |
247 | ************* |
248 | |
249 | @@ -164,21 +226,37 @@ |
250 | API Changes |
251 | *********** |
252 | |
253 | -* ``ProgressTask.note`` is deprecated. |
254 | - (Martin Pool) |
255 | - |
256 | * ``bzrlib.user_encoding`` has been removed; use |
257 | ``bzrlib.osutils.get_user_encoding`` instead. (Martin Pool) |
258 | |
259 | * ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult`` |
260 | subclasses - the same as python's unittest module. (Robert Collins) |
261 | + |
262 | +* ``diff._get_trees_to_diff`` has been renamed to |
263 | + ``diff.get_trees_and_branches_to_diff``. It is now a public API, and it |
264 | + returns the old and new branches. (Gary van der Merwe) |
265 | |
266 | * ``bzrlib.trace.log_error``, ``error`` and ``info`` have been deprecated. |
267 | (Martin Pool) |
268 | |
269 | +* ``MutableTree.has_changes()`` does not require a tree parameter anymore. It |
270 | + now defaults to comparing to the basis tree. It now checks for pending |
271 | + merges too. ``Merger.check_basis`` has been deprecated and replaced by the |
272 | + corresponding has_changes() calls. ``Merge.compare_basis``, |
273 | + ``Merger.file_revisions`` and ``Merger.ensure_revision_trees`` have also |
274 | + been deprecated. |
275 | + (Vincent Ladeuil, #440631) |
276 | + |
277 | +* ``ProgressTask.note`` is deprecated. |
278 | + (Martin Pool) |
279 | + |
280 | Internals |
281 | ********* |
282 | |
283 | +* Added ``-Drelock`` debug flag. It will ``note`` a message every time a |
284 | + repository or branch object is unlocked then relocked the same way. |
285 | + (Andrew Bennetts) |
286 | + |
287 | * ``BTreeLeafParser.extract_key`` has been tweaked slightly to reduce |
288 | mallocs while parsing the index (approx 3=>1 mallocs per key read). |
289 | This results in a 10% speedup while reading an index. |
290 | @@ -225,8 +303,16 @@ |
291 | present. (Vincent Ladeuil, #430749) |
292 | |
293 | |
294 | -bzr 2.0.1 (Not Released Yet) |
295 | -############################ |
296 | +bzr 2.0.1 |
297 | +######### |
298 | + |
299 | +:Codename: Stability First |
300 | +:2.0.1: 2009-10-14 |
301 | + |
302 | +The first of our new ongoing bugfix-only stable releases has arrived. It |
303 | +includes a collection of 12 bugfixes applied to bzr 2.0.0, but does not |
304 | +include any of the feature development in the 2.1.0 series. |
305 | + |
306 | |
307 | Bug Fixes |
308 | ********* |
309 | @@ -243,6 +329,18 @@ |
310 | with some combinations of remote and local formats. This was causing |
311 | "unknown object type identifier 60" errors. (Andrew Bennetts, #427736) |
312 | |
313 | +* Fixed ``ObjectNotLocked`` errors when doing some log and diff operations |
314 | + on branches via a smart server. (Andrew Bennetts, #389413) |
315 | + |
316 | +* Handle things like ``bzr add foo`` and ``bzr rm foo`` when the tree is |
317 | + at the root of a drive. ``osutils._cicp_canonical_relpath`` always |
318 | + assumed that ``abspath()`` returned a path that did not have a trailing |
319 | + ``/``, but that is not true when working at the root of the filesystem. |
320 | + (John Arbash Meinel, Jason Spashett, #322807) |
321 | + |
322 | +* Hide deprecation warnings for 'final' releases for python2.6. |
323 | + (John Arbash Meinel, #440062) |
324 | + |
325 | * Improve the time for ``bzr log DIR`` for 2a format repositories. |
326 | We had been using the same code path as for <2a formats, which required |
327 | iterating over all objects in all revisions. |
328 | @@ -260,12 +358,25 @@ |
329 | but such operations are already doing full tree scans, so no radical |
330 | performance change should be observed. (Robert Collins, #403322) |
331 | |
332 | -* When a file kind becomes unversionable after being added, a sensible |
333 | - error will be shown instead of a traceback. (Robert Collins, #438569) |
334 | - |
335 | * Retrieving file text or mtime from a _PreviewTree has good performance when |
336 | there are many changes. (Aaron Bentley) |
337 | |
338 | +* The CHK index pages now use an unlimited cache size. With a limited |
339 | + cache and a large project, the random access of chk pages could cause us |
340 | + to download the entire cix file many times. |
341 | + (John Arbash Meinel, #402623) |
342 | + |
343 | +* When a file kind becomes unversionable after being added, a sensible |
344 | + error will be shown instead of a traceback. (Robert Collins, #438569) |
345 | + |
346 | +Documentation |
347 | +************* |
348 | + |
349 | +* Improved README. (Ian Clatworthy) |
350 | + |
351 | +* Improved upgrade documentation for Launchpad branches. |
352 | + (Barry Warsaw) |
353 | + |
354 | |
355 | bzr 2.0.0 |
356 | ######### |
357 | @@ -10886,5 +10997,38 @@ |
358 | * Storage of local versions: init, add, remove, rm, info, log, |
359 | diff, status, etc. |
360 | |
361 | + |
362 | +bzr ?.?.? (not released yet) |
363 | +############################ |
364 | + |
365 | +:Codename: template |
366 | +:2.0.2: ??? |
367 | + |
368 | +Compatibility Breaks |
369 | +******************** |
370 | + |
371 | +New Features |
372 | +************ |
373 | + |
374 | +Bug Fixes |
375 | +********* |
376 | + |
377 | +Improvements |
378 | +************ |
379 | + |
380 | +Documentation |
381 | +************* |
382 | + |
383 | +API Changes |
384 | +*********** |
385 | + |
386 | +Internals |
387 | +********* |
388 | + |
389 | +Testing |
390 | +******* |
391 | + |
392 | + |
393 | + |
394 | .. |
395 | vim: tw=74 ft=rst ff=unix |
396 | |
397 | === modified file 'README' |
398 | --- README 2008-03-16 14:01:20 +0000 |
399 | +++ README 2009-10-15 18:31:14 +0000 |
400 | @@ -3,72 +3,44 @@ |
401 | ================= |
402 | |
403 | Bazaar (``bzr``) is a decentralized revision control system, designed to be |
404 | -easy for developers and end users alike. Bazaar is part of the GNU project to |
405 | -develop a complete free operating system. |
406 | - |
407 | -To install Bazaar from source, follow the instructions in the INSTALL |
408 | -file. Otherwise, you may want to check your distribution package manager |
409 | -for ready-to-install packages, or http://bazaar-vcs.org/DistroDownloads. |
410 | - |
411 | -To learn how to use Bazaar, check the documentation in the doc/ directory. |
412 | -Once installed, you can also run 'bzr help'. An always up-to-date and more |
413 | -complete set of documents can be found in the Bazaar website, at: |
414 | - |
415 | - http://bazaar-vcs.org/Documentation |
416 | +easy for developers and end users alike. Bazaar is part of the GNU project |
417 | +to develop a complete free operating system. |
418 | + |
419 | +To install Bazaar, follow the instructions given at |
420 | +http://bazaar-vcs.org/Download. Ready-to-install packages are available |
421 | +for most popular operating systems or you can install from source. |
422 | + |
423 | +To learn how to use Bazaar, see the official documentation at: |
424 | + |
425 | + http://doc.bazaar-vcs.org/en/ |
426 | + |
427 | +For additional training materials including screencasts and slides, |
428 | +visit our community wiki documentation page at: |
429 | + |
430 | + http://bazaar-vcs.org/Documentation/ |
431 | |
432 | Bazaar is written in Python, and is sponsored by Canonical Limited, the |
433 | founders of Ubuntu and Launchpad. Bazaar is Free Software, and is released |
434 | under the GNU General Public License. |
435 | |
436 | -Bazaar was formerly known as Bazaar-NG. It's the successor to ``baz``, a fork |
437 | -of GNU arch, but shares no code. (To upgrade from Baz, use the ``baz-import`` |
438 | -command in the bzrtools plugin.) |
439 | - |
440 | Bazaar highlights |
441 | ================= |
442 | |
443 | -* Easy to use and intuitive. |
444 | - |
445 | - Only five commands are needed to do all basic operations, and all |
446 | - commands have documentation accessible via 'bzr help command'. |
447 | - Bazaar's interface is also easy to learn for CVS and Subversion users. |
448 | - |
449 | -* Robust and reliable. |
450 | - |
451 | - Bazaar is developed under an extensive test suite. Branches can be |
452 | - checked and verified for integrity at any time, and revisions can be |
453 | - signed with PGP/GnuPG. |
454 | - |
455 | -* Publish branches with HTTP. |
456 | - |
457 | - Branches can be hosted on an HTTP server with no need for special |
458 | - software on the server side. Branches can be uploaded by bzr itself |
459 | - over SSH (SFTP), or with rsync. |
460 | - |
461 | -* Adapts to multiple environments. |
462 | - |
463 | - Bazaar runs on Linux and Windows, fully supports Unicode filenames, |
464 | - and suits different development models, including centralized. |
465 | - |
466 | -* Easily extended and customized. |
467 | - |
468 | - A rich Python interface is provided for extending and embedding, |
469 | - including a plugin interface. There are already many available plugins, |
470 | - most of them registered at http://bazaar-vcs.org/PluginRegistry. |
471 | - |
472 | -* Smart merging. |
473 | - |
474 | - Changes will never be merged more than once, conflicts will be |
475 | - minimized, and identical changes are dealt with well. |
476 | - |
477 | -* Vibrant and active community. |
478 | - |
479 | - Help with Bazaar is obtained easily, via the mailing list, or the IRC |
480 | - channel. |
481 | - |
482 | - |
483 | -Registration and Feedback |
484 | -========================= |
485 | +Bazaar directly supports both central version control (like cvs/svn) and |
486 | +distributed version control (like git/hg). Developers can organize their |
487 | +workspace in whichever way they want on a per project basis including: |
488 | + |
489 | +* checkouts (like svn) |
490 | +* feature branches (like hg) |
491 | +* shared working tree (like git). |
492 | + |
493 | +It also directly supports and encourages a large number of development best |
494 | +practices like refactoring and pre-commit regression testing. Users can |
495 | +choose between our command line tool and our cross-platform GUI application. |
496 | +For further details, see our website at http://bazaar-vcs.org/en. |
497 | + |
498 | +Feedback |
499 | +======== |
500 | |
501 | If you encounter any problems with Bazaar, need help understanding it, or would |
502 | like to offer suggestions or feedback, please get in touch with us: |
503 | @@ -76,7 +48,7 @@ |
504 | * Ask a question through our web support interface, at |
505 | https://answers.launchpad.net/bzr/ |
506 | |
507 | -* Report bugs at https://bugs.edge.launchpad.net/bzr/+filebug |
508 | +* Report bugs at https://bugs.launchpad.net/bzr/+filebug |
509 | |
510 | * Write to us at bazaar@lists.canonical.com |
511 | You can join the list at <https://lists.ubuntu.com/mailman/listinfo/bazaar>. |
512 | @@ -85,12 +57,8 @@ |
513 | |
514 | * Talk to us in irc://irc.ubuntu.com/bzr |
515 | |
516 | -* And see http://bazaar-vcs.org/BzrSupport for more. |
517 | - |
518 | -If you would like to help us improve Bazaar by telling us about yourself and |
519 | -what we could do better, please register and complete the online survey here: |
520 | -http://www.surveymonkey.com/s.aspx?sm=L94RvLswhKdktrxiHWiX3g_3d_3d. |
521 | -Registration is completely optional. |
522 | - |
523 | -Enjoy, |
524 | +Our mission is to make a version control tool that developers LOVE to use |
525 | +and that casual contributors feel confident with. Please let us know how |
526 | +we're going. |
527 | + |
528 | The Bazaar Team |
529 | |
530 | === modified file 'bzrlib/__init__.py' |
531 | --- bzrlib/__init__.py 2009-09-25 21:24:21 +0000 |
532 | +++ bzrlib/__init__.py 2009-10-15 18:31:14 +0000 |
533 | @@ -44,7 +44,7 @@ |
534 | # Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a |
535 | # releaselevel of 'dev' for unreleased under-development code. |
536 | |
537 | -version_info = (2, 1, 0, 'dev', 0) |
538 | +version_info = (2, 1, 0, 'dev', 2) |
539 | |
540 | # API compatibility version: bzrlib is currently API compatible with 1.15. |
541 | api_minimum_version = (2, 1, 0) |
542 | |
543 | === modified file 'bzrlib/_bencode_pyx.pyx' |
544 | --- bzrlib/_bencode_pyx.pyx 2009-06-05 01:48:32 +0000 |
545 | +++ bzrlib/_bencode_pyx.pyx 2009-10-15 18:31:14 +0000 |
546 | @@ -58,6 +58,13 @@ |
547 | void D_UPDATE_TAIL(Decoder, int n) |
548 | void E_UPDATE_TAIL(Encoder, int n) |
549 | |
550 | +# To maintain compatibility with older versions of pyrex, we have to use the |
551 | +# relative import here, rather than 'bzrlib._static_tuple_c' |
552 | +from _static_tuple_c cimport StaticTuple, StaticTuple_CheckExact, \ |
553 | + import_static_tuple_c |
554 | + |
555 | +import_static_tuple_c() |
556 | + |
557 | |
558 | cdef class Decoder: |
559 | """Bencode decoder""" |
560 | @@ -371,7 +378,8 @@ |
561 | self._encode_int(x) |
562 | elif PyLong_CheckExact(x): |
563 | self._encode_long(x) |
564 | - elif PyList_CheckExact(x) or PyTuple_CheckExact(x): |
565 | + elif (PyList_CheckExact(x) or PyTuple_CheckExact(x) |
566 | + or StaticTuple_CheckExact(x)): |
567 | self._encode_list(x) |
568 | elif PyDict_CheckExact(x): |
569 | self._encode_dict(x) |
570 | |
571 | === modified file 'bzrlib/_btree_serializer_pyx.pyx' |
572 | --- bzrlib/_btree_serializer_pyx.pyx 2009-10-08 15:44:41 +0000 |
573 | +++ bzrlib/_btree_serializer_pyx.pyx 2009-10-15 18:31:14 +0000 |
574 | @@ -173,7 +173,13 @@ |
575 | last - self._start))) |
576 | raise AssertionError(failure_string) |
577 | # capture the key string |
578 | - key_element = safe_interned_string_from_size(self._start, |
579 | + if (self.key_length == 1 |
580 | + and (temp_ptr - self._start) == 45 |
581 | + and strncmp(self._start, 'sha1:', 5) == 0): |
582 | + key_element = safe_string_from_size(self._start, |
583 | + temp_ptr - self._start) |
584 | + else: |
585 | + key_element = safe_interned_string_from_size(self._start, |
586 | temp_ptr - self._start) |
587 | # advance our pointer |
588 | self._start = temp_ptr + 1 |
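The hunk above makes interning conditional: a 45-byte ``sha1:`` key is essentially unique, so interning it would only grow the intern table for no sharing benefit, while all other key elements (file ids, revision ids) repeat heavily and are worth interning. A Python 3 sketch of that heuristic (the function name is hypothetical; the original is Pyrex/C and targets Python 2):

```python
import sys

def maybe_intern_key(key_element, key_length):
    """Sketch of the decision in the hunk above: skip interning for
    sha1-based keys ('sha1:' plus 40 hex digits, 45 bytes total),
    which almost never repeat; intern everything else."""
    if (key_length == 1 and len(key_element) == 45
            and key_element.startswith('sha1:')):
        return key_element            # unique; interning would be waste
    return sys.intern(key_element)    # repeated; share one copy
```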
589 | @@ -189,6 +195,7 @@ |
590 | cdef char *ref_ptr |
591 | cdef char *next_start |
592 | cdef int loop_counter |
593 | + cdef Py_ssize_t str_len |
594 | |
595 | self._start = self._cur_str |
596 | # Find the next newline |
597 | @@ -224,8 +231,20 @@ |
598 | # Invalid line |
599 | raise AssertionError("Failed to find the value area") |
600 | else: |
601 | - # capture the value string |
602 | - value = safe_string_from_size(temp_ptr + 1, last - temp_ptr - 1) |
603 | + # Because of how conversions were done, we ended up with *lots* of |
604 | + # values that are identical. These are all of the 0-length nodes |
605 | + # that are referred to by the TREE_ROOT (and likely some other |
606 | + # directory nodes.) For example, bzr has 25k references to |
607 | + # something like '12607215 328306 0 0', which ends up consuming 1MB |
608 | + # of memory, just for those strings. |
609 | + str_len = last - temp_ptr - 1 |
610 | + if (str_len > 4 |
611 | + and strncmp(" 0 0", last - 4, 4) == 0): |
612 | + # This drops peak mem for bzr.dev from 87.4MB => 86.2MB |
613 | + # For Launchpad 236MB => 232MB |
614 | + value = safe_interned_string_from_size(temp_ptr + 1, str_len) |
615 | + else: |
616 | + value = safe_string_from_size(temp_ptr + 1, str_len) |
617 | # shrink the references end point |
618 | last = temp_ptr |
619 | |
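The hunk above applies the same trade-off to values: strings ending in ``' 0 0'`` describe zero-length content records and recur thousands of times per index, so interning collapses them to one shared copy, while other values are left alone. A Python 3 sketch (function name hypothetical; the original is Pyrex/C on Python 2):

```python
import sys

def maybe_intern_value(value):
    """Sketch of the heuristic in the hunk above: values ending in
    ' 0 0' (zero-length record references) repeat heavily, so intern
    them; anything else is returned unchanged."""
    if len(value) > 4 and value.endswith(' 0 0'):
        return sys.intern(value)
    return value
```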
620 | @@ -332,8 +351,17 @@ |
621 | elif node_len < 3: |
622 | raise ValueError('Without ref_lists, we need at least 3 entries not: %s' |
623 | % len(node)) |
624 | - # I don't expect that we can do faster than string.join() |
625 | - string_key = '\0'.join(node[1])# <object>PyTuple_GET_ITEM_ptr_object(node, 1)) |
626 | + # TODO: We can probably do better than string.join(), namely |
627 | + # when key has only 1 item, we can just grab that string |
628 | + # And when there are 2 items, we could do a single malloc + len() + 1 |
629 | + # also, doing .join() requires a PyObject_GetAttrString call, which |
630 | + # we could also avoid. |
631 | + # TODO: Note that pyrex 0.9.6 generates fairly crummy code here, using the |
632 | + # python object interface, versus 0.9.8+ which uses a helper that |
633 | + # checks if this supports the sequence interface. |
634 | + # We *could* do more work on our own, and grab the actual items |
635 | + # lists. For now, just ask people to use a better compiler. :) |
636 | + string_key = '\0'.join(node[1]) |
637 | |
638 | # TODO: instead of using string joins, precompute the final string length, |
639 | # and then malloc a single string and copy everything in. |
640 | @@ -350,7 +378,7 @@ |
641 | refs_len = 0 |
642 | if have_reference_lists: |
643 | # Figure out how many bytes it will take to store the references |
644 | - ref_lists = node[3]# <object>PyTuple_GET_ITEM_ptr_object(node, 3) |
645 | + ref_lists = node[3] |
646 | next_len = len(ref_lists) # TODO: use a Py function |
647 | if next_len > 0: |
648 | # If there are no nodes, we don't need to do any work |
649 | @@ -374,8 +402,7 @@ |
650 | # We will need (len - 1) '\x00' characters to |
651 | # separate the reference key |
652 | refs_len = refs_len + (next_len - 1) |
653 | - for i from 0 <= i < next_len: |
654 | - ref_bit = reference[i] |
655 | + for ref_bit in reference: |
656 | if not PyString_CheckExact(ref_bit): |
657 | raise TypeError('We expect reference bits' |
658 | ' to be strings not: %s' |
659 | @@ -384,7 +411,7 @@ |
660 | |
661 | # So we have the (key NULL refs NULL value LF) |
662 | key_len = PyString_Size(string_key) |
663 | - val = node[2] # PyTuple_GET_ITEM_ptr_object(node, 2) |
664 | + val = node[2] |
665 | if not PyString_CheckExact(val): |
666 | raise TypeError('Expected a plain str for value not: %s' |
667 | % type(val)) |
668 | @@ -416,7 +443,7 @@ |
669 | if i != 0: |
670 | out[0] = c'\x00' |
671 | out = out + 1 |
672 | - ref_bit = reference[i] #PyTuple_GET_ITEM_ptr_object(reference, i) |
673 | + ref_bit = reference[i] |
674 | ref_bit_len = PyString_GET_SIZE(ref_bit) |
675 | memcpy(out, PyString_AS_STRING(ref_bit), ref_bit_len) |
676 | out = out + ref_bit_len |
677 | |
678 | === modified file 'bzrlib/_export_c_api.h' |
679 | --- bzrlib/_export_c_api.h 2009-10-02 02:21:12 +0000 |
680 | +++ bzrlib/_export_c_api.h 2009-10-15 18:31:14 +0000 |
681 | @@ -45,14 +45,17 @@ |
682 | PyObject *d = NULL; |
683 | PyObject *c_obj = NULL; |
684 | |
685 | - d = PyObject_GetAttrString(module, _C_API_NAME); |
686 | + /* (char *) is because python2.4 declares this api as 'char *' rather than |
687 | + * const char* which it really is. |
688 | + */ |
689 | + d = PyObject_GetAttrString(module, (char *)_C_API_NAME); |
690 | if (!d) { |
691 | PyErr_Clear(); |
692 | d = PyDict_New(); |
693 | if (!d) |
694 | goto bad; |
695 | Py_INCREF(d); |
696 | - if (PyModule_AddObject(module, _C_API_NAME, d) < 0) |
697 | + if (PyModule_AddObject(module, (char *)_C_API_NAME, d) < 0) |
698 | goto bad; |
699 | } |
700 | c_obj = PyCObject_FromVoidPtrAndDesc(func, signature, 0); |
701 | |
702 | === modified file 'bzrlib/_import_c_api.h' |
703 | --- bzrlib/_import_c_api.h 2009-10-01 21:34:36 +0000 |
704 | +++ bzrlib/_import_c_api.h 2009-10-15 18:31:14 +0000 |
705 | @@ -47,7 +47,10 @@ |
706 | PyObject *c_obj = NULL; |
707 | const char *desc = NULL; |
708 | |
709 | - d = PyObject_GetAttrString(module, _C_API_NAME); |
710 | + /* (char *) because Python2.4 defines this as (char *) rather than |
711 | + * (const char *) |
712 | + */ |
713 | + d = PyObject_GetAttrString(module, (char *)_C_API_NAME); |
714 | if (!d) { |
715 | // PyObject_GetAttrString sets an appropriate exception |
716 | goto bad; |
717 | @@ -94,7 +97,7 @@ |
718 | { |
719 | PyObject *type = NULL; |
720 | |
721 | - type = PyObject_GetAttrString(module, class_name); |
722 | + type = PyObject_GetAttrString(module, (char *)class_name); |
723 | if (!type) { |
724 | goto bad; |
725 | } |
726 | @@ -149,7 +152,7 @@ |
727 | struct type_description *cur_type; |
728 | int ret_code; |
729 | |
730 | - module = PyImport_ImportModule(module_name); |
731 | + module = PyImport_ImportModule((char *)module_name); |
732 | if (!module) |
733 | goto bad; |
734 | if (functions != NULL) { |
735 | |
736 | === modified file 'bzrlib/_simple_set_pyx.pxd' |
737 | --- bzrlib/_simple_set_pyx.pxd 2009-10-08 04:40:16 +0000 |
738 | +++ bzrlib/_simple_set_pyx.pxd 2009-10-15 18:31:13 +0000 |
739 | @@ -40,11 +40,36 @@ |
740 | (like a dict), but we also don't implement the complete list of 'set' |
741 | operations (difference, intersection, etc). |
742 | """ |
743 | + # Data structure definition: |
744 | + # This is a basic hash table using open addressing. |
745 | + # http://en.wikipedia.org/wiki/Open_addressing |
746 | + # Basically that means we keep an array of pointers to Python objects |
747 | + # (called a table). Each location in the array is called a 'slot'. |
748 | + # |
749 | + # An empty slot holds a NULL pointer, a slot where there was an item |
750 | + # which was then deleted will hold a pointer to _dummy, and a filled slot |
751 | + # points at the actual object which fills that slot. |
752 | + # |
753 | + # The table is always a power of two, and the default location where an |
754 | + # object is inserted is at hash(object) & (table_size - 1) |
755 | + # |
756 | + # If there is a collision, then we search for another location. The |
757 | + # specific algorithm is in _lookup. We search until we: |
758 | + # find the object |
759 | + # find an equivalent object (by tp_richcompare(obj1, obj2, Py_EQ)) |
760 | + # find a NULL slot |
761 | + # |
762 | + # When an object is deleted, we set its slot to _dummy. This way we don't |
763 | + # have to track whether there was a collision, and find the corresponding |
764 | + # keys. (The collision resolution algorithm makes that nearly impossible |
765 | + # anyway, because it depends on the upper bits of the hash.) |
766 | + # The main effect of this is that if we find _dummy, then we can insert |
767 | + # an object there, but we have to keep searching until we find NULL to |
768 | + # know that the object is not present elsewhere. |
769 | |
770 | cdef Py_ssize_t _used # active |
771 | cdef Py_ssize_t _fill # active + dummy |
772 | - cdef Py_ssize_t _mask # Table contains (mask+1) slots, a power |
773 | - # of 2 |
774 | + cdef Py_ssize_t _mask # Table contains (mask+1) slots, a power of 2 |
775 | cdef PyObject **_table # Pyrex/Cython doesn't support arrays to 'object' |
776 | # so we manage it manually |
777 | |
778 | @@ -57,7 +82,6 @@ |
779 | |
780 | # TODO: might want to export the C api here, though it is all available from |
781 | # the class object... |
782 | -cdef api object SimpleSet_Add(object self, object key) |
783 | cdef api SimpleSet SimpleSet_New() |
784 | cdef api object SimpleSet_Add(object self, object key) |
785 | cdef api int SimpleSet_Contains(object self, object key) except -1 |
786 | |
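The data-structure comment added to `_simple_set_pyx.pxd` above describes open addressing with a `_dummy` tombstone marking deleted slots. A minimal pure-Python sketch of the same slot states (this is an illustration with hypothetical names like `TinySet`, not the bzrlib implementation; it uses linear probing for brevity where the patch uses quadratic probing):

```python
_dummy = object()  # tombstone: marks a slot whose item was deleted

class TinySet:
    """Toy open-addressing set illustrating NULL/_dummy/filled slots."""

    def __init__(self, size=8):
        # size must be a power of two so '& mask' acts as modulo
        self._table = [None] * size
        self._mask = size - 1

    def _lookup(self, key):
        """Index of key if present, else index of the first usable slot."""
        i = hash(key) & self._mask
        free = None
        for n in range(len(self._table)):
            idx = (i + n) & self._mask
            cur = self._table[idx]
            if cur is None:
                # key is definitely absent; prefer an earlier tombstone
                return free if free is not None else idx
            if cur is _dummy:
                if free is None:
                    free = idx  # remember it, but keep searching for key
            elif cur == key:
                return idx
        raise RuntimeError('ran out of slots')

    def add(self, key):
        idx = self._lookup(key)
        cur = self._table[idx]
        if cur is None or cur is _dummy:
            self._table[idx] = key

    def discard(self, key):
        idx = self._lookup(key)
        cur = self._table[idx]
        if cur is None or cur is _dummy:
            return False
        self._table[idx] = _dummy  # tombstone keeps probe chains intact
        return True

    def __contains__(self, key):
        cur = self._table[self._lookup(key)]
        return cur is not None and cur is not _dummy
```

The key point mirrored from the comment: a lookup may *insert* at a `_dummy` slot, but must keep probing until it sees `None` before concluding the key is absent.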
787 | === modified file 'bzrlib/_simple_set_pyx.pyx' |
788 | --- bzrlib/_simple_set_pyx.pyx 2009-10-08 04:40:16 +0000 |
789 | +++ bzrlib/_simple_set_pyx.pyx 2009-10-15 18:31:13 +0000 |
790 | @@ -16,15 +16,16 @@ |
791 | |
792 | """Definition of a class that is similar to Set with some small changes.""" |
793 | |
794 | +cdef extern from "python-compat.h": |
795 | + pass |
796 | + |
797 | cdef extern from "Python.h": |
798 | ctypedef unsigned long size_t |
799 | - ctypedef long (*hashfunc)(PyObject*) |
800 | - ctypedef PyObject *(*richcmpfunc)(PyObject *, PyObject *, int) |
801 | + ctypedef long (*hashfunc)(PyObject*) except -1 |
802 | + ctypedef object (*richcmpfunc)(PyObject *, PyObject *, int) |
803 | ctypedef int (*visitproc)(PyObject *, void *) |
804 | ctypedef int (*traverseproc)(PyObject *, visitproc, void *) |
805 | int Py_EQ |
806 | - PyObject *Py_True |
807 | - PyObject *Py_NotImplemented |
808 | void Py_INCREF(PyObject *) |
809 | void Py_DECREF(PyObject *) |
810 | ctypedef struct PyTypeObject: |
811 | @@ -33,38 +34,54 @@ |
812 | traverseproc tp_traverse |
813 | |
814 | PyTypeObject *Py_TYPE(PyObject *) |
815 | + # Note: *Don't* use hash(), Pyrex 0.9.8.5 thinks it returns an 'int', and |
816 | + # thus silently truncates to 32-bits on 64-bit machines. |
817 | + long PyObject_Hash(PyObject *) except -1 |
818 | |
819 | void *PyMem_Malloc(size_t nbytes) |
820 | void PyMem_Free(void *) |
821 | void memset(void *, int, size_t) |
822 | |
823 | |
824 | +# Dummy is an object used to mark nodes that have been deleted. Since |
825 | +# collisions require us to move a node to an alternative location, if we just |
826 | +# set an entry to NULL on delete, we won't find any relocated nodes. |
827 | +# We have to use _dummy_obj because we need to keep a refcount to it, but we |
828 | +# also use _dummy as a pointer, because it avoids having to put <PyObject*> all |
829 | +# over the code base. |
830 | cdef object _dummy_obj |
831 | cdef PyObject *_dummy |
832 | _dummy_obj = object() |
833 | _dummy = <PyObject *>_dummy_obj |
834 | |
835 | |
836 | -cdef int _is_equal(PyObject *this, long this_hash, PyObject *other): |
837 | +cdef object _NotImplemented |
838 | +_NotImplemented = NotImplemented |
839 | + |
840 | + |
841 | +cdef int _is_equal(PyObject *this, long this_hash, PyObject *other) except -1: |
842 | cdef long other_hash |
843 | - cdef PyObject *res |
844 | |
845 | if this == other: |
846 | return 1 |
847 | - other_hash = Py_TYPE(other).tp_hash(other) |
848 | + other_hash = PyObject_Hash(other) |
849 | if other_hash != this_hash: |
850 | return 0 |
851 | + |
852 | + # This implements a subset of the PyObject_RichCompareBool functionality. |
853 | + # Namely it: |
854 | + # 1) Doesn't try to do anything with old-style classes |
855 | + # 2) Assumes that both objects have a tp_richcompare implementation, and |
856 | + # that if that is not enough to compare equal, then they are not |
857 | + # equal. (It doesn't try to cast them both to some intermediate form |
858 | + # that would compare equal.) |
859 | res = Py_TYPE(this).tp_richcompare(this, other, Py_EQ) |
860 | - if res == Py_True: |
861 | - Py_DECREF(res) |
862 | - return 1 |
863 | - if res == Py_NotImplemented: |
864 | - Py_DECREF(res) |
865 | + if res is _NotImplemented: |
866 | res = Py_TYPE(other).tp_richcompare(other, this, Py_EQ) |
867 | - if res == Py_True: |
868 | - Py_DECREF(res) |
869 | + if res is _NotImplemented: |
870 | + return 0 |
871 | + if res: |
872 | return 1 |
873 | - Py_DECREF(res) |
874 | return 0 |
875 | |
876 | |
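The `_is_equal` hunk above implements the described subset of `PyObject_RichCompareBool`: identity first, then a hash comparison, then `tp_richcompare` in both directions, treating `NotImplemented` as "not equal". In Python terms (a sketch assuming new-style objects that implement both `__hash__` and `__eq__`):

```python
def is_equal(this, this_hash, other):
    """Subset of rich-compare-for-equality, as in the patch's _is_equal."""
    if this is other:
        return True           # identical objects are trivially equal
    if hash(other) != this_hash:
        return False          # differing hashes can never be equal
    res = this.__eq__(other)  # tp_richcompare(this, other, Py_EQ)
    if res is NotImplemented:
        res = other.__eq__(this)  # try the reflected comparison
        if res is NotImplemented:
            return False      # no common ground: treat as unequal
    return bool(res)
```

Unlike full `PyObject_RichCompareBool`, this makes no attempt to coerce the two objects to a common intermediate type, which is exactly the simplification the comment in the diff calls out.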
877 | @@ -84,7 +101,6 @@ |
878 | """ |
879 | # Attributes are defined in the .pxd file |
880 | DEF DEFAULT_SIZE=1024 |
881 | - DEF PERTURB_SHIFT=5 |
882 | |
883 | def __init__(self): |
884 | cdef Py_ssize_t size, n_bytes |
885 | @@ -170,27 +186,25 @@ |
886 | as it makes a lot of assumptions about keys not already being present, |
887 | and there being no dummy entries. |
888 | """ |
889 | - cdef size_t i, perturb, mask |
890 | + cdef size_t i, n_lookup |
891 | cdef long the_hash |
892 | - cdef PyObject **table, **entry |
893 | + cdef PyObject **table, **slot |
894 | + cdef Py_ssize_t mask |
895 | |
896 | mask = self._mask |
897 | table = self._table |
898 | |
899 | - the_hash = Py_TYPE(key).tp_hash(key) |
900 | - i = the_hash & mask |
901 | - entry = &table[i] |
902 | - perturb = the_hash |
903 | - # Because we know that we made all items unique before, we can just |
904 | - # iterate as long as the target location is not empty, we don't have to |
905 | - # do any comparison, etc. |
906 | - while entry[0] != NULL: |
907 | - i = (i << 2) + i + perturb + 1 |
908 | - entry = &table[i & mask] |
909 | - perturb >>= PERTURB_SHIFT |
910 | - entry[0] = key |
911 | - self._fill += 1 |
912 | - self._used += 1 |
913 | + the_hash = PyObject_Hash(key) |
914 | + i = the_hash |
915 | + for n_lookup from 0 <= n_lookup <= <size_t>mask: # Don't loop forever |
916 | + slot = &table[i & mask] |
917 | + if slot[0] == NULL: |
918 | + slot[0] = key |
919 | + self._fill = self._fill + 1 |
920 | + self._used = self._used + 1 |
921 | + return 1 |
922 | + i = i + 1 + n_lookup |
923 | + raise RuntimeError('ran out of slots.') |
924 | |
925 | def _py_resize(self, min_used): |
926 | """Do not use this directly, it is only exposed for testing.""" |
927 | @@ -206,7 +220,7 @@ |
928 | :return: The new size of the internal table |
929 | """ |
930 | cdef Py_ssize_t new_size, n_bytes, remaining |
931 | - cdef PyObject **new_table, **old_table, **entry |
932 | + cdef PyObject **new_table, **old_table, **slot |
933 | |
934 | new_size = DEFAULT_SIZE |
935 | while new_size <= min_used and new_size > 0: |
936 | @@ -236,16 +250,16 @@ |
937 | |
938 | # Moving everything to the other table is refcount neutral, so we don't |
939 | # worry about it. |
940 | - entry = old_table |
941 | + slot = old_table |
942 | while remaining > 0: |
943 | - if entry[0] == NULL: # unused slot |
944 | + if slot[0] == NULL: # unused slot |
945 | pass |
946 | - elif entry[0] == _dummy: # dummy slot |
947 | - remaining -= 1 |
948 | + elif slot[0] == _dummy: # dummy slot |
949 | + remaining = remaining - 1 |
950 | else: # active slot |
951 | - remaining -= 1 |
952 | - self._insert_clean(entry[0]) |
953 | - entry += 1 |
954 | + remaining = remaining - 1 |
955 | + self._insert_clean(slot[0]) |
956 | + slot = slot + 1 |
957 | PyMem_Free(old_table) |
958 | return new_size |
959 | |
960 | @@ -262,20 +276,24 @@ |
961 | cdef PyObject **slot, *py_key |
962 | cdef int added |
963 | |
964 | + py_key = <PyObject *>key |
965 | + if (Py_TYPE(py_key).tp_richcompare == NULL |
966 | + or Py_TYPE(py_key).tp_hash == NULL): |
967 | + raise TypeError('Types added to SimpleSet must implement' |
968 | + ' both tp_richcompare and tp_hash') |
969 | added = 0 |
970 | # We need at least one empty slot |
971 | assert self._used < self._mask |
972 | slot = _lookup(self, key) |
973 | - py_key = <PyObject *>key |
974 | if (slot[0] == NULL): |
975 | Py_INCREF(py_key) |
976 | - self._fill += 1 |
977 | - self._used += 1 |
978 | + self._fill = self._fill + 1 |
979 | + self._used = self._used + 1 |
980 | slot[0] = py_key |
981 | added = 1 |
982 | elif (slot[0] == _dummy): |
983 | Py_INCREF(py_key) |
984 | - self._used += 1 |
985 | + self._used = self._used + 1 |
986 | slot[0] = py_key |
987 | added = 1 |
988 | # No else: clause. If _lookup returns a pointer to |
989 | @@ -291,11 +309,13 @@ |
990 | return retval |
991 | |
992 | def discard(self, key): |
993 | - """Remove key from the dict, whether it exists or not. |
994 | + """Remove key from the set, whether it exists or not. |
995 | |
996 | - :return: 0 if the item did not exist, 1 if it did |
997 | + :return: False if the item did not exist, True if it did |
998 | """ |
999 | - return self._discard(key) |
1000 | + if self._discard(key): |
1001 | + return True |
1002 | + return False |
1003 | |
1004 | cdef int _discard(self, key) except -1: |
1005 | cdef PyObject **slot, *py_key |
1006 | @@ -303,7 +323,7 @@ |
1007 | slot = _lookup(self, key) |
1008 | if slot[0] == NULL or slot[0] == _dummy: |
1009 | return 0 |
1010 | - self._used -= 1 |
1011 | + self._used = self._used - 1 |
1012 | Py_DECREF(slot[0]) |
1013 | slot[0] = _dummy |
1014 | # PySet uses the heuristic: If more than 1/5 are dummies, then resize |
1015 | @@ -320,16 +340,6 @@ |
1016 | self._resize(self._used * 2) |
1017 | return 1 |
1018 | |
1019 | - def __delitem__(self, key): |
1020 | - """Remove the given item from the dict. |
1021 | - |
1022 | - Raise a KeyError if the key was not present. |
1023 | - """ |
1024 | - cdef int exists |
1025 | - exists = self._discard(key) |
1026 | - if not exists: |
1027 | - raise KeyError('Key %s not present' % (key,)) |
1028 | - |
1029 | def __iter__(self): |
1030 | return _SimpleSet_iterator(self) |
1031 | |
1032 | @@ -353,7 +363,7 @@ |
1033 | |
1034 | def __next__(self): |
1035 | cdef Py_ssize_t mask, i |
1036 | - cdef PyObject **table |
1037 | + cdef PyObject *key |
1038 | |
1039 | if self.set is None: |
1040 | raise StopIteration |
1041 | @@ -361,21 +371,13 @@ |
1042 | # Force this exception to continue to be raised |
1043 | self._used = -1 |
1044 | raise RuntimeError("Set size changed during iteration") |
1045 | - i = self.pos |
1046 | - mask = self.set._mask |
1047 | - table = self.set._table |
1048 | - assert i >= 0 |
1049 | - while i <= mask and (table[i] == NULL or table[i] == _dummy): |
1050 | - i += 1 |
1051 | - self.pos = i + 1 |
1052 | - if i > mask: |
1053 | - # we walked to the end |
1054 | + if not SimpleSet_Next(self.set, &self.pos, &key): |
1055 | self.set = None |
1056 | raise StopIteration |
1057 | - # We must have found one |
1058 | - key = <object>(table[i]) |
1059 | - self.len -= 1 |
1060 | - return key |
1061 | + # we found something |
1062 | + the_key = <object>key # INCREF |
1063 | + self.len = self.len - 1 |
1064 | + return the_key |
1065 | |
1066 | def __length_hint__(self): |
1067 | if self.set is not None and self._used == self.set._used: |
1068 | @@ -411,51 +413,68 @@ |
1069 | |
1070 | :param key: An object we are looking up |
1071 | :param hash: The hash for key |
1072 | - :return: The location in self.table where key should be put |
1073 | - should never be NULL, but may reference a NULL (PyObject*) |
1074 | + :return: The location in self.table where key should be put. |
1075 | + location == NULL is an exception, but (*location) == NULL just |
1076 | + indicates the slot is empty and can be used. |
1077 | """ |
1078 | - # This is the heart of most functions, which is why it is pulled out as an |
1079 | - # cdef inline function. |
1080 | - cdef size_t i, perturb |
1081 | + # This uses Quadratic Probing: |
1082 | + # http://en.wikipedia.org/wiki/Quadratic_probing |
1083 | + # with c1 = c2 = 1/2 |
1084 | + # This leads to probe locations at: |
1085 | + # h0 = hash(k1) |
1086 | + # h1 = h0 + 1 |
1087 | + # h2 = h0 + 3 = h1 + 1 + 1 |
1088 | + # h3 = h0 + 6 = h2 + 1 + 2 |
1089 | + # h4 = h0 + 10 = h3 + 1 + 3 |
1090 | + # Note that all of these are '& mask', but that is computed *after* the |
1091 | + # offset. |
1092 | + # This differs from the algorithm used by Set and Dict, which effectively |
1093 | + # use double-hashing, and a step size that starts large, but dwindles to |
1094 | + # stepping one-by-one. |
1095 | + # This gives more 'locality' in that if you have a collision at offset X, |
1096 | + # the first fallback is X+1, which is fast to check. However, that means |
1097 | + # that an object w/ hash X+1 will also check there, and then X+2 next. |
1098 | + # However, for objects with differing hashes, their chains are different. |
1099 | + # The former checks X, X+1, X+3, ... the latter checks X+1, X+2, X+4, ... |
1100 | + # So different hashes diverge quickly. |
1101 | + # A bigger problem is that we *only* ever use the lowest bits of the hash |
1102 | + # So all integers (x + SIZE*N) will resolve into the same bucket, and all |
1103 | + # use the same collision resolution. We may want to try to find a way to |
1104 | + # incorporate the upper bits of the hash with quadratic probing. (For |
1105 | + # example, X, X+1, X+3+some_upper_bits, X+6+more_upper_bits, etc.) |
1106 | + cdef size_t i, n_lookup |
1107 | cdef Py_ssize_t mask |
1108 | cdef long key_hash |
1109 | - cdef long this_hash |
1110 | - cdef PyObject **table, **cur, **free_slot, *py_key |
1111 | + cdef PyObject **table, **slot, *cur, **free_slot, *py_key |
1112 | |
1113 | - key_hash = hash(key) |
1114 | + py_key = <PyObject *>key |
1115 | + # Note: avoid using hash(obj) because of a bug w/ pyrex 0.9.8.5 and 64-bit |
1116 | + # (it treats hash() as returning an 'int' rather than a 'long') |
1117 | + key_hash = PyObject_Hash(py_key) |
1118 | + i = <size_t>key_hash |
1119 | mask = self._mask |
1120 | table = self._table |
1121 | - i = key_hash & mask |
1122 | - cur = &table[i] |
1123 | - py_key = <PyObject *>key |
1124 | - if cur[0] == NULL: |
1125 | - # Found a blank spot, or found the exact key |
1126 | - return cur |
1127 | - if cur[0] == py_key: |
1128 | - return cur |
1129 | - if cur[0] == _dummy: |
1130 | - free_slot = cur |
1131 | - else: |
1132 | - if _is_equal(py_key, key_hash, cur[0]): |
1133 | - # Both py_key and cur[0] belong in this slot, return it |
1134 | - return cur |
1135 | - free_slot = NULL |
1136 | - # size_t is unsigned, hash is signed... |
1137 | - perturb = key_hash |
1138 | - while True: |
1139 | - i = (i << 2) + i + perturb + 1 |
1140 | - cur = &table[i & mask] |
1141 | - if cur[0] == NULL: # Found an empty spot |
1142 | - if free_slot: # Did we find a _dummy earlier? |
1143 | + free_slot = NULL |
1144 | + for n_lookup from 0 <= n_lookup <= <size_t>mask: # Don't loop forever |
1145 | + slot = &table[i & mask] |
1146 | + cur = slot[0] |
1147 | + if cur == NULL: |
1148 | + # Found a blank spot |
1149 | + if free_slot != NULL: |
1150 | + # Did we find an earlier _dummy entry? |
1151 | return free_slot |
1152 | else: |
1153 | - return cur |
1154 | - if (cur[0] == py_key # exact match |
1155 | - or _is_equal(py_key, key_hash, cur[0])): # Equivalent match |
1156 | - return cur |
1157 | - if (cur[0] == _dummy and free_slot == NULL): |
1158 | - free_slot = cur |
1159 | - perturb >>= PERTURB_SHIFT |
1160 | + return slot |
1161 | + if cur == py_key: |
1162 | + # Found an exact pointer to the key |
1163 | + return slot |
1164 | + if cur == _dummy: |
1165 | + if free_slot == NULL: |
1166 | + free_slot = slot |
1167 | + elif _is_equal(py_key, key_hash, cur): |
1168 | + # Both py_key and cur belong in this slot, return it |
1169 | + return slot |
1170 | + i = i + 1 + n_lookup |
1171 | raise AssertionError('should never get here') |
1172 | |
1173 | |
1174 | @@ -521,7 +540,6 @@ |
1175 | return _check_self(self)._used |
1176 | |
1177 | |
1178 | -# TODO: this should probably have direct tests, since it isn't used by __iter__ |
1179 | cdef api int SimpleSet_Next(object self, Py_ssize_t *pos, PyObject **key): |
1180 | """Walk over items in a SimpleSet. |
1181 | |
1182 | @@ -540,7 +558,7 @@ |
1183 | mask = true_self._mask |
1184 | table= true_self._table |
1185 | while (i <= mask and (table[i] == NULL or table[i] == _dummy)): |
1186 | - i += 1 |
1187 | + i = i + 1 |
1188 | pos[0] = i + 1 |
1189 | if (i > mask): |
1190 | return 0 # All done |
1191 | @@ -565,8 +583,7 @@ |
1192 | ret = visit(next_key, arg) |
1193 | if ret: |
1194 | return ret |
1195 | - |
1196 | - return 0; |
1197 | + return 0 |
1198 | |
1199 | # It is a little bit ugly to do this, but it works, and means that Meliae can |
1200 | # dump the total memory consumed by all child objects. |
1201 | |
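The probe sequence documented in `_lookup` above (offsets 0, 1, 3, 6, 10, ... — the triangular numbers, produced by `i = i + 1 + n_lookup`) has a useful property with a power-of-two table: the first `mask + 1` probes visit every slot exactly once, which is what makes the bounded `for n_lookup ... <= mask` loop safe. A small sketch demonstrating this (hypothetical helper name):

```python
def probe_sequence(h0, table_size):
    """Slots visited by 'i = i + 1 + n_lookup' quadratic probing."""
    assert table_size & (table_size - 1) == 0, "size must be a power of two"
    mask = table_size - 1
    i = h0
    slots = []
    for n_lookup in range(table_size):
        slots.append(i & mask)   # '& mask' is applied after the offset
        i = i + 1 + n_lookup     # offsets are triangular numbers
    return slots
```

For any starting hash, `sorted(probe_sequence(h, n)) == range(n)`: triangular numbers modulo a power of two form a permutation, so the probe chain cannot cycle before covering the whole table.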
1202 | === modified file 'bzrlib/_static_tuple_c.c' |
1203 | --- bzrlib/_static_tuple_c.c 2009-10-07 19:31:39 +0000 |
1204 | +++ bzrlib/_static_tuple_c.c 2009-10-15 18:31:14 +0000 |
1205 | @@ -20,12 +20,20 @@ |
1206 | */ |
1207 | #define STATIC_TUPLE_MODULE |
1208 | |
1209 | +#include <Python.h> |
1210 | +#include "python-compat.h" |
1211 | + |
1212 | #include "_static_tuple_c.h" |
1213 | #include "_export_c_api.h" |
1214 | + |
1215 | +/* Pyrex 0.9.6.4 exports _simple_set_pyx_api as |
1216 | + * import__simple_set_pyx(), while Pyrex 0.9.8.5 and Cython 0.11.3 export them |
1217 | + * as import_bzrlib___simple_set_pyx(). As such, we just #define one to be |
1218 | + * equivalent to the other in our internal code. |
1219 | + */ |
1220 | +#define import__simple_set_pyx import_bzrlib___simple_set_pyx |
1221 | #include "_simple_set_pyx_api.h" |
1222 | |
1223 | -#include "python-compat.h" |
1224 | - |
1225 | #if defined(__GNUC__) |
1226 | # define inline __inline__ |
1227 | #elif defined(_MSC_VER) |
1228 | @@ -74,7 +82,7 @@ |
1229 | static StaticTuple * |
1230 | StaticTuple_Intern(StaticTuple *self) |
1231 | { |
1232 | - PyObject *unique_key = NULL; |
1233 | + PyObject *canonical_tuple = NULL; |
1234 | |
1235 | if (_interned_tuples == NULL || _StaticTuple_is_interned(self)) { |
1236 | Py_INCREF(self); |
1237 | @@ -83,20 +91,18 @@ |
1238 | /* SimpleSet_Add returns whatever object is present at self |
1239 | * or the new object if it needs to add it. |
1240 | */ |
1241 | - unique_key = SimpleSet_Add(_interned_tuples, (PyObject *)self); |
1242 | - if (!unique_key) { |
1243 | - // Suppress any error and just return the object |
1244 | - PyErr_Clear(); |
1245 | - Py_INCREF(self); |
1246 | - return self; |
1247 | + canonical_tuple = SimpleSet_Add(_interned_tuples, (PyObject *)self); |
1248 | + if (!canonical_tuple) { |
1249 | + // Some sort of exception, propagate it. |
1250 | + return NULL; |
1251 | } |
1252 | - if (unique_key != (PyObject *)self) { |
1253 | - // There was already a key at that location |
1254 | - return (StaticTuple *)unique_key; |
1255 | + if (canonical_tuple != (PyObject *)self) { |
1256 | + // There was already a tuple with that value |
1257 | + return (StaticTuple *)canonical_tuple; |
1258 | } |
1259 | self->flags |= STATIC_TUPLE_INTERNED_FLAG; |
1260 | - // The two references in the dict do not count, so that the StaticTuple object |
1261 | - // does not become immortal just because it was interned. |
1262 | + // The two references in the dict do not count, so that the StaticTuple |
1263 | + // object does not become immortal just because it was interned. |
1264 | Py_REFCNT(self) -= 1; |
1265 | return self; |
1266 | } |
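`StaticTuple_Intern` above uses `SimpleSet_Add` to return the canonical instance already stored in `_interned_tuples`, or registers `self` as canonical when absent. The same pattern can be sketched with a plain Python dict (`intern_tuple` is a hypothetical name; note that unlike the C code, which deliberately drops a refcount so interned tuples are not immortal, the dict here keeps strong references):

```python
_interned = {}

def intern_tuple(t):
    """Return the canonical instance equal to t, registering t if new."""
    return _interned.setdefault(t, t)
```

Interning makes equality of interned values reducible to pointer identity, which is exactly the shortcut the `Py_EQ` fast path later in this diff relies on.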
1267 | @@ -169,7 +175,7 @@ |
1268 | |
1269 | |
1270 | static PyObject * |
1271 | -StaticTuple_new(PyTypeObject *type, PyObject *args, PyObject *kwds) |
1272 | +StaticTuple_new_constructor(PyTypeObject *type, PyObject *args, PyObject *kwds) |
1273 | { |
1274 | StaticTuple *self; |
1275 | PyObject *obj = NULL; |
1276 | @@ -187,7 +193,7 @@ |
1277 | if (len < 0 || len > 255) { |
1278 | /* Too big or too small */ |
1279 | PyErr_SetString(PyExc_ValueError, "StaticTuple.__init__(...)" |
1280 | - " takes from 0 to 255 key bits"); |
1281 | + " takes from 0 to 255 items"); |
1282 | return NULL; |
1283 | } |
1284 | self = (StaticTuple *)StaticTuple_New(len); |
1285 | @@ -199,8 +205,7 @@ |
1286 | if (!PyString_CheckExact(obj)) { |
1287 | if (!StaticTuple_CheckExact(obj)) { |
1288 | PyErr_SetString(PyExc_TypeError, "StaticTuple.__init__(...)" |
1289 | - " requires that all key bits are strings or StaticTuple."); |
1290 | - /* TODO: What is the proper way to dealloc ? */ |
1291 | + " requires that all items are strings or StaticTuple."); |
1292 | type->tp_dealloc((PyObject *)self); |
1293 | return NULL; |
1294 | } |
1295 | @@ -236,21 +241,21 @@ |
1296 | /* adapted from tuplehash(), is the specific hash value considered |
1297 | * 'stable'? |
1298 | */ |
1299 | - register long x, y; |
1300 | - Py_ssize_t len = self->size; |
1301 | - PyObject **p; |
1302 | - long mult = 1000003L; |
1303 | + register long x, y; |
1304 | + Py_ssize_t len = self->size; |
1305 | + PyObject **p; |
1306 | + long mult = 1000003L; |
1307 | |
1308 | #if STATIC_TUPLE_HAS_HASH |
1309 | if (self->hash != -1) { |
1310 | return self->hash; |
1311 | } |
1312 | #endif |
1313 | - x = 0x345678L; |
1314 | - p = self->items; |
1315 | + x = 0x345678L; |
1316 | + p = self->items; |
1317 | // TODO: We could set specific flags if we know that, for example, all the |
1318 | - // keys are strings. I haven't seen a real-world benefit to that yet, |
1319 | - // though. |
1320 | + // items are strings. I haven't seen a real-world benefit to that |
1321 | + // yet, though. |
1322 | while (--len >= 0) { |
1323 | y = PyObject_Hash(*p++); |
1324 | if (y == -1) /* failure */ |
1325 | @@ -259,18 +264,13 @@ |
1326 | /* the cast might truncate len; that doesn't change hash stability */ |
1327 | mult += (long)(82520L + len + len); |
1328 | } |
1329 | - x += 97531L; |
1330 | - if (x == -1) |
1331 | - x = -2; |
1332 | + x += 97531L; |
1333 | + if (x == -1) |
1334 | + x = -2; |
1335 | #if STATIC_TUPLE_HAS_HASH |
1336 | - if (self->hash != -1) { |
1337 | - if (self->hash != x) { |
1338 | - fprintf(stderr, "hash changed: %d => %d\n", self->hash, x); |
1339 | - } |
1340 | - } |
1341 | self->hash = x; |
1342 | #endif |
1343 | - return x; |
1344 | + return x; |
1345 | } |
1346 | |
1347 | static PyObject * |
1348 | @@ -281,25 +281,39 @@ |
1349 | |
1350 | vt = StaticTuple_as_tuple((StaticTuple *)v); |
1351 | if (vt == NULL) { |
1352 | - goto Done; |
1353 | + goto done; |
1354 | } |
1355 | if (!PyTuple_Check(wt)) { |
1356 | PyErr_BadInternalCall(); |
1357 | - result = NULL; |
1358 | - goto Done; |
1359 | + goto done; |
1360 | } |
1361 | /* Now we have 2 tuples to compare, do it */ |
1362 | result = PyTuple_Type.tp_richcompare(vt, wt, op); |
1363 | -Done: |
1364 | +done: |
1365 | Py_XDECREF(vt); |
1366 | return result; |
1367 | } |
1368 | |
1369 | +/** Compare two objects to determine if they are equivalent. |
1370 | + * The basic flow is as follows |
1371 | + * 1) First make sure that both objects are StaticTuple instances. If they |
1372 | + * aren't then cast self to a tuple, and have the tuple do the comparison. |
1373 | + * 2) Special case comparison to Py_None, because it happens to occur fairly |
1374 | + * often in the test suite. |
1375 | + * 3) Special case when v and w are the same pointer, as we know the answer |
1376 | + * to all queries without walking individual items. |
1377 | + * 4) For all operations, we then walk the items to find the first paired |
1378 | + * items that are not equal. |
1379 | + * 5) If all paired items are equal, we then compare the lengths of self |
1380 | + * and other to determine the result. |
1381 | + * 6) If an item differs, then we apply "op" to those last two items. (e.g. |
1382 | + * StaticTuple(A, B) > StaticTuple(A, C) iff B > C) |
1383 | + */ |
1384 | |
1385 | static PyObject * |
1386 | StaticTuple_richcompare(PyObject *v, PyObject *w, int op) |
1387 | { |
1388 | - StaticTuple *vk, *wk; |
1389 | + StaticTuple *v_st, *w_st; |
1390 | Py_ssize_t vlen, wlen, min_len, i; |
1391 | PyObject *v_obj, *w_obj; |
1392 | richcmpfunc string_richcompare; |
1393 | @@ -313,10 +327,10 @@ |
1394 | Py_INCREF(Py_NotImplemented); |
1395 | return Py_NotImplemented; |
1396 | } |
1397 | - vk = (StaticTuple *)v; |
1398 | + v_st = (StaticTuple *)v; |
1399 | if (StaticTuple_CheckExact(w)) { |
1400 | /* The most common case */ |
1401 | - wk = (StaticTuple*)w; |
1402 | + w_st = (StaticTuple*)w; |
1403 | } else if (PyTuple_Check(w)) { |
1404 | /* One of v or w is a tuple, so we go the 'slow' route and cast up to |
1405 | * tuples to compare. |
1406 | @@ -325,17 +339,19 @@ |
1407 | * We probably want to optimize comparing self to other when |
1408 | * other is a tuple. |
1409 | */ |
1410 | - return StaticTuple_richcompare_to_tuple(vk, w, op); |
1411 | + return StaticTuple_richcompare_to_tuple(v_st, w, op); |
1412 | } else if (w == Py_None) { |
1413 | // None is always less than the object |
1414 | - switch (op) { |
1415 | - case Py_NE:case Py_GT:case Py_GE: |
1416 | + switch (op) { |
1417 | + case Py_NE:case Py_GT:case Py_GE: |
1418 | Py_INCREF(Py_True); |
1419 | return Py_True; |
1420 | case Py_EQ:case Py_LT:case Py_LE: |
1421 | Py_INCREF(Py_False); |
1422 | return Py_False; |
1423 | - } |
1424 | + default: // Should never happen |
1425 | + return Py_NotImplemented; |
1426 | + } |
1427 | } else { |
1428 | /* We don't special case this comparison, we just let python handle |
1429 | * it. |
1430 | @@ -344,38 +360,49 @@ |
1431 | return Py_NotImplemented; |
1432 | } |
1433 | /* Now we know that we have 2 StaticTuple objects, so let's compare them. |
1434 | - * This code is somewhat borrowed from tuplerichcompare, except we know our |
1435 | + * This code is inspired from tuplerichcompare, except we know our |
1436 | * objects are limited in scope, so we can inline some comparisons. |
1437 | */ |
1438 | if (v == w) { |
1439 | /* Identical pointers, we can shortcut this easily. */ |
1440 | - switch (op) { |
1441 | - case Py_EQ:case Py_LE:case Py_GE: |
1442 | + switch (op) { |
1443 | + case Py_EQ:case Py_LE:case Py_GE: |
1444 | Py_INCREF(Py_True); |
1445 | return Py_True; |
1446 | - case Py_NE:case Py_LT:case Py_GT: |
1447 | + case Py_NE:case Py_LT:case Py_GT: |
1448 | Py_INCREF(Py_False); |
1449 | return Py_False; |
1450 | - } |
1451 | - } |
1452 | - /* TODO: if STATIC_TUPLE_INTERNED_FLAG is set on both objects and they are |
1453 | - * not the same pointer, then we know they aren't the same object |
1454 | - * without having to do sub-by-sub comparison. |
1455 | - */ |
1456 | + } |
1457 | + } |
1458 | + if (op == Py_EQ |
1459 | + && _StaticTuple_is_interned(v_st) |
1460 | + && _StaticTuple_is_interned(w_st)) |
1461 | + { |
1462 | + /* If both objects are interned, we know they are different if the |
1463 | + * pointer is not the same, which would have been handled by the |
1464 | + * previous if. No need to compare the entries. |
1465 | + */ |
1466 | + Py_INCREF(Py_False); |
1467 | + return Py_False; |
1468 | + } |
1469 | |
1470 | - /* It will be rare that we compare tuples of different lengths, so we don't |
1471 | - * start by optimizing the length comparision, same as the tuple code |
1472 | - * TODO: Interning may change this, because we'll be comparing lots of |
1473 | - * different StaticTuple objects in the intern dict |
1474 | + /* The only time we are likely to compare items of different lengths is in |
1475 | + * something like the interned_keys set. However, the hash is good enough |
1476 | + * that it is rare. Note that 'tuple_richcompare' also does not compare |
1477 | + * lengths here. |
1478 | */ |
1479 | - vlen = vk->size; |
1480 | - wlen = wk->size; |
1481 | - min_len = (vlen < wlen) ? vlen : wlen; |
1482 | + vlen = v_st->size; |
1483 | + wlen = w_st->size; |
1484 | + min_len = (vlen < wlen) ? vlen : wlen; |
1485 | string_richcompare = PyString_Type.tp_richcompare; |
1486 | for (i = 0; i < min_len; i++) { |
1487 | PyObject *result = NULL; |
1488 | - v_obj = StaticTuple_GET_ITEM(vk, i); |
1489 | - w_obj = StaticTuple_GET_ITEM(wk, i); |
1490 | + v_obj = StaticTuple_GET_ITEM(v_st, i); |
1491 | + w_obj = StaticTuple_GET_ITEM(w_st, i); |
1492 | + if (v_obj == w_obj) { |
1493 | + /* Shortcut case, these must be identical */ |
1494 | + continue; |
1495 | + } |
1496 | if (PyString_CheckExact(v_obj) && PyString_CheckExact(w_obj)) { |
1497 | result = string_richcompare(v_obj, w_obj, Py_EQ); |
1498 | } else if (StaticTuple_CheckExact(v_obj) && |
1499 | @@ -391,9 +418,15 @@ |
1500 | return NULL; /* There seems to be an error */ |
1501 | } |
1502 | if (result == Py_NotImplemented) { |
1503 | - PyErr_BadInternalCall(); |
1504 | Py_DECREF(result); |
1505 | - return NULL; |
1506 | + /* One side must have had a string and the other a StaticTuple. |
1507 | + * This clearly means that they are not equal. |
1508 | + */ |
1509 | + if (op == Py_EQ) { |
1510 | + Py_INCREF(Py_False); |
1511 | + return Py_False; |
1512 | + } |
1513 | + result = PyObject_RichCompare(v_obj, w_obj, Py_EQ); |
1514 | } |
1515 | if (result == Py_False) { |
1516 | /* This entry is not identical |
1517 | @@ -415,28 +448,28 @@ |
1518 | } |
1519 | Py_DECREF(result); |
1520 | } |
1521 | - if (i >= vlen || i >= wlen) { |
1522 | + if (i >= min_len) { |
1523 | /* We walked off one of the lists, but everything compared equal so |
1524 | * far. Just compare the size. |
1525 | */ |
1526 | - int cmp; |
1527 | - PyObject *res; |
1528 | - switch (op) { |
1529 | - case Py_LT: cmp = vlen < wlen; break; |
1530 | - case Py_LE: cmp = vlen <= wlen; break; |
1531 | - case Py_EQ: cmp = vlen == wlen; break; |
1532 | - case Py_NE: cmp = vlen != wlen; break; |
1533 | - case Py_GT: cmp = vlen > wlen; break; |
1534 | - case Py_GE: cmp = vlen >= wlen; break; |
1535 | - default: return NULL; /* cannot happen */ |
1536 | - } |
1537 | - if (cmp) |
1538 | - res = Py_True; |
1539 | - else |
1540 | - res = Py_False; |
1541 | - Py_INCREF(res); |
1542 | - return res; |
1543 | - } |
1544 | + int cmp; |
1545 | + PyObject *res; |
1546 | + switch (op) { |
1547 | + case Py_LT: cmp = vlen < wlen; break; |
1548 | + case Py_LE: cmp = vlen <= wlen; break; |
1549 | + case Py_EQ: cmp = vlen == wlen; break; |
1550 | + case Py_NE: cmp = vlen != wlen; break; |
1551 | + case Py_GT: cmp = vlen > wlen; break; |
1552 | + case Py_GE: cmp = vlen >= wlen; break; |
1553 | + default: return NULL; /* cannot happen */ |
1554 | + } |
1555 | + if (cmp) |
1556 | + res = Py_True; |
1557 | + else |
1558 | + res = Py_False; |
1559 | + Py_INCREF(res); |
1560 | + return res; |
1561 | + } |
1562 | /* The last item differs, shortcut the Py_NE case */ |
1563 | if (op == Py_NE) { |
1564 | Py_INCREF(Py_True); |
1565 | @@ -477,15 +510,22 @@ |
1566 | } |
1567 | |
1568 | static char StaticTuple__is_interned_doc[] = "_is_interned() => True/False\n" |
1569 | - "Check to see if this key has been interned.\n"; |
1570 | + "Check to see if this tuple has been interned.\n"; |
1571 | |
1572 | |
1573 | static PyObject * |
1574 | StaticTuple_item(StaticTuple *self, Py_ssize_t offset) |
1575 | { |
1576 | PyObject *obj; |
1577 | - if (offset < 0 || offset >= self->size) { |
1578 | - PyErr_SetString(PyExc_IndexError, "StaticTuple index out of range"); |
1579 | + /* We cast to (int) to avoid worrying about whether Py_ssize_t is a |
1580 | + * long long, etc. offsets should never be >2**31 anyway. |
1581 | + */ |
1582 | + if (offset < 0) { |
1583 | + PyErr_Format(PyExc_IndexError, "StaticTuple_item does not support" |
1584 | +            " negative indices: %d\n", (int)offset); |
     | +        return NULL; |
1585 | + } else if (offset >= self->size) { |
1586 | + PyErr_Format(PyExc_IndexError, "StaticTuple index out of range" |
1587 | + " %d >= %d", (int)offset, (int)self->size); |
1588 | return NULL; |
1589 | } |
1590 | obj = (PyObject *)self->items[offset]; |
1591 | @@ -519,9 +559,12 @@ |
1592 | |
1593 | static char StaticTuple_doc[] = |
1594 | "C implementation of a StaticTuple structure." |
1595 | - "\n This is used as StaticTuple(key_bit_1, key_bit_2, key_bit_3, ...)" |
1596 | - "\n This is similar to tuple, just less flexible in what it" |
1597 | - "\n supports, but also lighter memory consumption."; |
1598 | + "\n This is used as StaticTuple(item1, item2, item3)" |
1599 | + "\n This is similar to tuple, less flexible in what it" |
1600 | + "\n supports, but also lighter memory consumption." |
1601 | + "\n Note that the constructor mimics the () form of tuples" |
1602 | +    "\n     rather than the 'tuple()' constructor." |
1603 | +    "\n     e.g. StaticTuple(a, b) == (a, b) == tuple((a, b))"; |
1604 | |
1605 | static PyMethodDef StaticTuple_methods[] = { |
1606 | {"as_tuple", (PyCFunction)StaticTuple_as_tuple, METH_NOARGS, StaticTuple_as_tuple_doc}, |
1607 | @@ -542,6 +585,12 @@ |
1608 | 0, /* sq_contains */ |
1609 | }; |
1610 | |
1611 | +/* TODO: Implement StaticTuple_as_mapping. |
1612 | + * The only thing we really want to support from there is mp_subscript, |
1613 | + * so that we could support extended slicing (foo[::2]). Not worth it |
1614 | + * yet, though. |
1615 | + */ |
1616 | + |
1617 | |
1618 | PyTypeObject StaticTuple_Type = { |
1619 | PyObject_HEAD_INIT(NULL) |
1620 | @@ -561,7 +610,7 @@ |
1621 | (hashfunc)StaticTuple_hash, /* tp_hash */ |
1622 | 0, /* tp_call */ |
1623 | 0, /* tp_str */ |
1624 | - PyObject_GenericGetAttr, /* tp_getattro */ |
1625 | + 0, /* tp_getattro */ |
1626 | 0, /* tp_setattro */ |
1627 | 0, /* tp_as_buffer */ |
1628 | Py_TPFLAGS_DEFAULT, /* tp_flags*/ |
1629 | @@ -590,7 +639,7 @@ |
1630 | 0, /* tp_dictoffset */ |
1631 | 0, /* tp_init */ |
1632 | 0, /* tp_alloc */ |
1633 | - StaticTuple_new, /* tp_new */ |
1634 | + StaticTuple_new_constructor, /* tp_new */ |
1635 | }; |
1636 | |
1637 | |
1638 | @@ -646,11 +695,57 @@ |
1639 | } |
1640 | |
1641 | |
1642 | +static int |
1643 | +_workaround_pyrex_096(void) |
1644 | +{ |
1645 | + /* Work around an incompatibility in how pyrex 0.9.6 exports a module, |
1646 | + * versus how pyrex 0.9.8 and cython 0.11 export it. |
1647 | + * Namely 0.9.6 exports import__simple_set_pyx and tries to |
1648 | + * "import _simple_set_pyx" but it is available only as |
1649 | + * "import bzrlib._simple_set_pyx" |
1650 | + * It is a shame to hack up sys.modules, but that is what we've got to do. |
1651 | + */ |
1652 | + PyObject *sys_module = NULL, *modules = NULL, *set_module = NULL; |
1653 | + int retval = -1; |
1654 | + |
1655 | + /* Clear out the current ImportError exception, and try again. */ |
1656 | + PyErr_Clear(); |
1657 | + /* Note that this only seems to work if somewhere else imports |
1658 | + * bzrlib._simple_set_pyx before importing bzrlib._static_tuple_c |
1659 | + */ |
1660 | + set_module = PyImport_ImportModule("bzrlib._simple_set_pyx"); |
1661 | + if (set_module == NULL) { |
1662 | + // fprintf(stderr, "Failed to import bzrlib._simple_set_pyx\n"); |
1663 | + goto end; |
1664 | + } |
1665 | + /* Add the _simple_set_pyx into sys.modules at the appropriate location. */ |
1666 | + sys_module = PyImport_ImportModule("sys"); |
1667 | + if (sys_module == NULL) { |
1668 | + // fprintf(stderr, "Failed to import sys\n"); |
1669 | + goto end; |
1670 | + } |
1671 | + modules = PyObject_GetAttrString(sys_module, "modules"); |
1672 | + if (modules == NULL || !PyDict_Check(modules)) { |
1673 | + // fprintf(stderr, "Failed to find sys.modules\n"); |
1674 | + goto end; |
1675 | + } |
1676 | + PyDict_SetItemString(modules, "_simple_set_pyx", set_module); |
1677 | + /* Now that we have hacked it in, try the import again. */ |
1678 | + retval = import_bzrlib___simple_set_pyx(); |
1679 | +end: |
1680 | + Py_XDECREF(set_module); |
1681 | + Py_XDECREF(sys_module); |
1682 | + Py_XDECREF(modules); |
1683 | + return retval; |
1684 | +} |
1685 | + |
1686 | + |
1687 | PyMODINIT_FUNC |
1688 | init_static_tuple_c(void) |
1689 | { |
1690 | PyObject* m; |
1691 | |
1692 | + StaticTuple_Type.tp_getattro = PyObject_GenericGetAttr; |
1693 | if (PyType_Ready(&StaticTuple_Type) < 0) |
1694 | return; |
1695 | |
1696 | @@ -661,8 +756,9 @@ |
1697 | |
1698 | Py_INCREF(&StaticTuple_Type); |
1699 | PyModule_AddObject(m, "StaticTuple", (PyObject *)&StaticTuple_Type); |
1700 | - if (import_bzrlib___simple_set_pyx() == -1) { |
1701 | - // We failed to set up, stop early |
1702 | + if (import_bzrlib___simple_set_pyx() == -1 |
1703 | + && _workaround_pyrex_096() == -1) |
1704 | + { |
1705 | return; |
1706 | } |
1707 | setup_interned_tuples(m); |
1708 | |
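The `_workaround_pyrex_096` hack above registers an already-imported module under a second name in `sys.modules`, so that code compiled against the short name (`_simple_set_pyx`) can still resolve it. The same trick in plain Python (a sketch; the helper name is ours, only the bzrlib module names are real):

```python
import importlib
import sys

def alias_module(canonical_name, short_name):
    """Import a module by its full dotted path and also expose it in
    sys.modules under a short alias, so an 'import <short_name>' done
    by differently-compiled code still succeeds."""
    module = importlib.import_module(canonical_name)
    sys.modules[short_name] = module
    return module

# e.g. make a bare 'import parse' resolve to the stdlib 'urllib.parse'
mod = alias_module('urllib.parse', 'parse')
```

Because the import machinery consults `sys.modules` first, the alias entry is found before any filesystem search happens.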
1709 | === modified file 'bzrlib/_static_tuple_py.py' |
1710 | --- bzrlib/_static_tuple_py.py 2009-10-07 16:12:50 +0000 |
1711 | +++ bzrlib/_static_tuple_py.py 2009-10-15 18:31:14 +0000 |
1712 | @@ -52,6 +52,8 @@ |
1713 | return _interned_tuples.setdefault(self, self) |
1714 | |
1715 | |
1716 | +# Have to set it to None first, so that __new__ can determine whether |
1717 | +# the _empty_tuple singleton has been created yet or not. |
1718 | _empty_tuple = None |
1719 | _empty_tuple = StaticTuple() |
1720 | _interned_tuples = {} |
1721 | |
1722 | === modified file 'bzrlib/branch.py' |
1723 | --- bzrlib/branch.py 2009-08-19 18:04:49 +0000 |
1724 | +++ bzrlib/branch.py 2009-10-15 18:31:14 +0000 |
1725 | @@ -46,9 +46,10 @@ |
1726 | ) |
1727 | """) |
1728 | |
1729 | -from bzrlib.decorators import needs_read_lock, needs_write_lock |
1730 | +from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises |
1731 | from bzrlib.hooks import HookPoint, Hooks |
1732 | from bzrlib.inter import InterObject |
1733 | +from bzrlib.lock import _RelockDebugMixin |
1734 | from bzrlib import registry |
1735 | from bzrlib.symbol_versioning import ( |
1736 | deprecated_in, |
1737 | @@ -2079,7 +2080,7 @@ |
1738 | _legacy_formats[0].network_name(), _legacy_formats[0].__class__) |
1739 | |
1740 | |
1741 | -class BzrBranch(Branch): |
1742 | +class BzrBranch(Branch, _RelockDebugMixin): |
1743 | """A branch stored in the actual filesystem. |
1744 | |
1745 | Note that it's "local" in the context of the filesystem; it doesn't |
1746 | @@ -2131,6 +2132,8 @@ |
1747 | return self.control_files.is_locked() |
1748 | |
1749 | def lock_write(self, token=None): |
1750 | + if not self.is_locked(): |
1751 | + self._note_lock('w') |
1752 | # All-in-one needs to always unlock/lock. |
1753 | repo_control = getattr(self.repository, 'control_files', None) |
1754 | if self.control_files == repo_control or not self.is_locked(): |
1755 | @@ -2146,6 +2149,8 @@ |
1756 | raise |
1757 | |
1758 | def lock_read(self): |
1759 | + if not self.is_locked(): |
1760 | + self._note_lock('r') |
1761 | # All-in-one needs to always unlock/lock. |
1762 | repo_control = getattr(self.repository, 'control_files', None) |
1763 | if self.control_files == repo_control or not self.is_locked(): |
1764 | @@ -2160,6 +2165,7 @@ |
1765 | self.repository.unlock() |
1766 | raise |
1767 | |
1768 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
1769 | def unlock(self): |
1770 | try: |
1771 | self.control_files.unlock() |
1772 | |
1773 | === modified file 'bzrlib/btree_index.py' |
1774 | --- bzrlib/btree_index.py 2009-09-22 02:18:24 +0000 |
1775 | +++ bzrlib/btree_index.py 2009-10-15 18:31:13 +0000 |
1776 | @@ -163,6 +163,7 @@ |
1777 | node_refs, _ = self._check_key_ref_value(key, references, value) |
1778 | if key in self._nodes: |
1779 | raise errors.BadIndexDuplicateKey(key, self) |
1780 | + # TODO: StaticTuple |
1781 | self._nodes[key] = (node_refs, value) |
1782 | self._keys.add(key) |
1783 | if self._nodes_by_key is not None and self._key_length > 1: |
1784 | @@ -625,6 +626,7 @@ |
1785 | for line in lines[2:]: |
1786 | if line == '': |
1787 | break |
1788 | + # TODO: Switch to StaticTuple here. |
1789 | nodes.append(tuple(map(intern, line.split('\0')))) |
1790 | return nodes |
1791 | |
1792 | @@ -636,7 +638,7 @@ |
1793 | memory except when very large walks are done. |
1794 | """ |
1795 | |
1796 | - def __init__(self, transport, name, size): |
1797 | + def __init__(self, transport, name, size, unlimited_cache=False): |
1798 | """Create a B+Tree index object on the index name. |
1799 | |
1800 | :param transport: The transport to read data for the index from. |
1801 | @@ -646,6 +648,9 @@ |
1802 | the initial read (to read the root node header) can be done |
1803 | without over-reading even on empty indices, and on small indices |
1804 | allows single-IO to read the entire index. |
1805 | + :param unlimited_cache: If set to True, then instead of using an |
1806 | + LRUCache with size _NODE_CACHE_SIZE, we will use a dict and always |
1807 | + cache all leaf nodes. |
1808 | """ |
1809 | self._transport = transport |
1810 | self._name = name |
1811 | @@ -655,12 +660,15 @@ |
1812 | self._root_node = None |
1813 | # Default max size is 100,000 leave values |
1814 | self._leaf_value_cache = None # lru_cache.LRUCache(100*1000) |
1815 | - self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE) |
1816 | - # We could limit this, but even a 300k record btree has only 3k leaf |
1817 | - # nodes, and only 20 internal nodes. So the default of 100 nodes in an |
1818 | - # LRU would mean we always cache everything anyway, no need to pay the |
1819 | - # overhead of LRU |
1820 | - self._internal_node_cache = fifo_cache.FIFOCache(100) |
1821 | + if unlimited_cache: |
1822 | + self._leaf_node_cache = {} |
1823 | + self._internal_node_cache = {} |
1824 | + else: |
1825 | + self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE) |
1826 | + # We use a FIFO here just to prevent possible blowout. However, a |
1827 | + # 300k record btree has only 3k leaf nodes, and only 20 internal |
1828 | + # nodes. A value of 100 scales to ~100*100*100 = 1M records. |
1829 | + self._internal_node_cache = fifo_cache.FIFOCache(100) |
1830 | self._key_count = None |
1831 | self._row_lengths = None |
1832 | self._row_offsets = None # Start of each row, [-1] is the end |
1833 | @@ -698,9 +706,9 @@ |
1834 | if start_of_leaves is None: |
1835 | start_of_leaves = self._row_offsets[-2] |
1836 | if node_pos < start_of_leaves: |
1837 | - self._internal_node_cache.add(node_pos, node) |
1838 | + self._internal_node_cache[node_pos] = node |
1839 | else: |
1840 | - self._leaf_node_cache.add(node_pos, node) |
1841 | + self._leaf_node_cache[node_pos] = node |
1842 | found[node_pos] = node |
1843 | return found |
1844 | |
1845 | |
1846 | === modified file 'bzrlib/builtins.py' |
1847 | --- bzrlib/builtins.py 2009-09-24 19:51:37 +0000 |
1848 | +++ bzrlib/builtins.py 2009-10-15 18:31:14 +0000 |
1849 | @@ -431,7 +431,10 @@ |
1850 | for node in bt.iter_all_entries(): |
1851 | # Node is made up of: |
1852 | # (index, key, value, [references]) |
1853 | - self.outf.write('%s\n' % (node[1:],)) |
1854 | + refs_as_tuples = tuple([tuple([tuple(ref) for ref in ref_list]) |
1855 | + for ref_list in node[3]]) |
1856 | + as_tuple = (tuple(node[1]), node[2], refs_as_tuples) |
1857 | + self.outf.write('%s\n' % (as_tuple,)) |
1858 | |
1859 | |
1860 | class cmd_remove_tree(Command): |
1861 | @@ -461,8 +464,7 @@ |
1862 | raise errors.BzrCommandError("You cannot remove the working tree" |
1863 | " of a remote path") |
1864 | if not force: |
1865 | - if (working.has_changes(working.basis_tree()) |
1866 | - or len(working.get_parent_ids()) > 1): |
1867 | + if (working.has_changes()): |
1868 | raise errors.UncommittedChanges(working) |
1869 | |
1870 | working_path = working.bzrdir.root_transport.base |
1871 | @@ -1109,8 +1111,7 @@ |
1872 | else: |
1873 | revision_id = None |
1874 | if strict and tree is not None and revision_id is None: |
1875 | - if (tree.has_changes(tree.basis_tree()) |
1876 | - or len(tree.get_parent_ids()) > 1): |
1877 | + if (tree.has_changes()): |
1878 | raise errors.UncommittedChanges( |
1879 | tree, more='Use --no-strict to force the push.') |
1880 | if tree.last_revision() != tree.branch.last_revision(): |
1881 | @@ -1887,7 +1888,7 @@ |
1882 | @display_command |
1883 | def run(self, revision=None, file_list=None, diff_options=None, |
1884 | prefix=None, old=None, new=None, using=None): |
1885 | - from bzrlib.diff import _get_trees_to_diff, show_diff_trees |
1886 | + from bzrlib.diff import get_trees_and_branches_to_diff, show_diff_trees |
1887 | |
1888 | if (prefix is None) or (prefix == '0'): |
1889 | # diff -p0 format |
1890 | @@ -1907,9 +1908,10 @@ |
1891 | raise errors.BzrCommandError('bzr diff --revision takes exactly' |
1892 | ' one or two revision specifiers') |
1893 | |
1894 | - old_tree, new_tree, specific_files, extra_trees = \ |
1895 | - _get_trees_to_diff(file_list, revision, old, new, |
1896 | - apply_view=True) |
1897 | + (old_tree, new_tree, |
1898 | + old_branch, new_branch, |
1899 | + specific_files, extra_trees) = get_trees_and_branches_to_diff( |
1900 | + file_list, revision, old, new, apply_view=True) |
1901 | return show_diff_trees(old_tree, new_tree, sys.stdout, |
1902 | specific_files=specific_files, |
1903 | external_diff_options=diff_options, |
1904 | @@ -3663,7 +3665,7 @@ |
1905 | |
1906 | # die as quickly as possible if there are uncommitted changes |
1907 | if not force: |
1908 | - if tree.has_changes(basis_tree) or len(tree.get_parent_ids()) > 1: |
1909 | + if tree.has_changes(): |
1910 | raise errors.UncommittedChanges(tree) |
1911 | |
1912 | view_info = _get_view_info_for_change_reporter(tree) |
1913 | @@ -3720,7 +3722,10 @@ |
1914 | merger.other_rev_id) |
1915 | result.report(self.outf) |
1916 | return 0 |
1917 | - merger.check_basis(False) |
1918 | + if merger.this_basis is None: |
1919 | + raise errors.BzrCommandError( |
1920 | + "This branch has no commits." |
1921 | + " (perhaps you would prefer 'bzr pull')") |
1922 | if preview: |
1923 | return self._do_preview(merger, cleanups) |
1924 | elif interactive: |
1925 | |
1926 | === modified file 'bzrlib/bundle/apply_bundle.py' |
1927 | --- bzrlib/bundle/apply_bundle.py 2009-03-23 14:59:43 +0000 |
1928 | +++ bzrlib/bundle/apply_bundle.py 2009-10-15 18:31:14 +0000 |
1929 | @@ -56,7 +56,8 @@ |
1930 | change_reporter=change_reporter) |
1931 | merger.pp = pp |
1932 | merger.pp.next_phase() |
1933 | - merger.check_basis(check_clean, require_commits=False) |
1934 | + if check_clean and tree.has_changes(): |
1935 | +        raise errors.UncommittedChanges(tree) |
1936 | merger.other_rev_id = reader.target |
1937 | merger.other_tree = merger.revision_tree(reader.target) |
1938 | merger.other_basis = reader.target |
1939 | |
1940 | === modified file 'bzrlib/commands.py' |
1941 | --- bzrlib/commands.py 2009-09-17 07:16:12 +0000 |
1942 | +++ bzrlib/commands.py 2009-10-15 18:31:14 +0000 |
1943 | @@ -1097,7 +1097,7 @@ |
1944 | |
1945 | # Is this a final release version? If so, we should suppress warnings |
1946 | if bzrlib.version_info[3] == 'final': |
1947 | - suppress_deprecation_warnings(override=False) |
1948 | + suppress_deprecation_warnings(override=True) |
1949 | if argv is None: |
1950 | argv = osutils.get_unicode_argv() |
1951 | else: |
1952 | |
1953 | === modified file 'bzrlib/decorators.py' |
1954 | --- bzrlib/decorators.py 2009-03-23 14:59:43 +0000 |
1955 | +++ bzrlib/decorators.py 2009-10-15 18:31:14 +0000 |
1956 | @@ -24,6 +24,8 @@ |
1957 | |
1958 | import sys |
1959 | |
1960 | +from bzrlib import trace |
1961 | + |
1962 | |
1963 | def _get_parameters(func): |
1964 | """Recreate the parameters for a function using introspection. |
1965 | @@ -204,6 +206,31 @@ |
1966 | return write_locked |
1967 | |
1968 | |
1969 | +def only_raises(*errors): |
1970 | + """Make a decorator that will only allow the given error classes to be |
1971 | + raised. All other errors will be logged and then discarded. |
1972 | + |
1973 | + Typical use is something like:: |
1974 | + |
1975 | + @only_raises(LockNotHeld, LockBroken) |
1976 | + def unlock(self): |
1977 | + # etc |
1978 | + """ |
1979 | + def decorator(unbound): |
1980 | + def wrapped(*args, **kwargs): |
1981 | + try: |
1982 | + return unbound(*args, **kwargs) |
1983 | + except errors: |
1984 | + raise |
1985 | + except: |
1986 | + trace.mutter('Error suppressed by only_raises:') |
1987 | + trace.log_exception_quietly() |
1988 | + wrapped.__doc__ = unbound.__doc__ |
1989 | + wrapped.__name__ = unbound.__name__ |
1990 | + return wrapped |
1991 | + return decorator |
1992 | + |
1993 | + |
1994 | # Default is more functionality, 'bzr' the commandline will request fast |
1995 | # versions. |
1996 | needs_read_lock = _pretty_needs_read_lock |
1997 | |
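The new `only_raises` decorator can be exercised standalone. In this sketch `logging.exception` stands in for `trace.mutter` / `trace.log_exception_quietly`, and the bare `except:` is narrowed to `except Exception:`; the control flow otherwise mirrors the diff:

```python
import logging

def only_raises(*errors):
    """Allow only the given exception classes to propagate out of the
    decorated function; anything else is logged and swallowed."""
    def decorator(unbound):
        def wrapped(*args, **kwargs):
            try:
                return unbound(*args, **kwargs)
            except errors:
                raise
            except Exception:
                logging.exception('Error suppressed by only_raises:')
        wrapped.__doc__ = unbound.__doc__
        wrapped.__name__ = unbound.__name__
        return wrapped
    return decorator

class LockNotHeld(Exception):
    pass

@only_raises(LockNotHeld)
def unlock(fail_with=None):
    if fail_with is not None:
        raise fail_with
    return 'unlocked'
```

Errors on the allow-list propagate; everything else is logged and the call returns None, which is exactly the behaviour wanted for best-effort `unlock()` paths.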
1998 | === modified file 'bzrlib/diff.py' |
1999 | --- bzrlib/diff.py 2009-07-29 21:35:05 +0000 |
2000 | +++ bzrlib/diff.py 2009-10-15 18:31:13 +0000 |
2001 | @@ -277,8 +277,8 @@ |
2002 | new_abspath, e) |
2003 | |
2004 | |
2005 | -def _get_trees_to_diff(path_list, revision_specs, old_url, new_url, |
2006 | - apply_view=True): |
2007 | +def get_trees_and_branches_to_diff(path_list, revision_specs, old_url, new_url, |
2008 | + apply_view=True): |
2009 | """Get the trees and specific files to diff given a list of paths. |
2010 | |
2011 | This method works out the trees to be diff'ed and the files of |
2012 | @@ -299,9 +299,9 @@ |
2013 | if True and a view is set, apply the view or check that the paths |
2014 | are within it |
2015 | :returns: |
2016 | - a tuple of (old_tree, new_tree, specific_files, extra_trees) where |
2017 | - extra_trees is a sequence of additional trees to search in for |
2018 | - file-ids. |
2019 | + a tuple of (old_tree, new_tree, old_branch, new_branch, |
2020 | + specific_files, extra_trees) where extra_trees is a sequence of |
2021 | + additional trees to search in for file-ids. |
2022 | """ |
2023 | # Get the old and new revision specs |
2024 | old_revision_spec = None |
2025 | @@ -341,6 +341,7 @@ |
2026 | views.check_path_in_view(working_tree, relpath) |
2027 | specific_files.append(relpath) |
2028 | old_tree = _get_tree_to_diff(old_revision_spec, working_tree, branch) |
2029 | + old_branch = branch |
2030 | |
2031 | # Get the new location |
2032 | if new_url is None: |
2033 | @@ -354,6 +355,7 @@ |
2034 | specific_files.append(relpath) |
2035 | new_tree = _get_tree_to_diff(new_revision_spec, working_tree, branch, |
2036 | basis_is_default=working_tree is None) |
2037 | + new_branch = branch |
2038 | |
2039 | # Get the specific files (all files is None, no files is []) |
2040 | if make_paths_wt_relative and working_tree is not None: |
2041 | @@ -378,7 +380,8 @@ |
2042 | extra_trees = None |
2043 | if working_tree is not None and working_tree not in (old_tree, new_tree): |
2044 | extra_trees = (working_tree,) |
2045 | - return old_tree, new_tree, specific_files, extra_trees |
2046 | + return old_tree, new_tree, old_branch, new_branch, specific_files, extra_trees |
2047 | + |
2048 | |
2049 | def _get_tree_to_diff(spec, tree=None, branch=None, basis_is_default=True): |
2050 | if branch is None and tree is not None: |
2051 | |
2052 | === modified file 'bzrlib/foreign.py' |
2053 | --- bzrlib/foreign.py 2009-10-02 09:11:43 +0000 |
2054 | +++ bzrlib/foreign.py 2009-10-15 18:31:14 +0000 |
2055 | @@ -305,8 +305,7 @@ |
2056 | ).get_user_option_as_bool('dpush_strict') |
2057 | if strict is None: strict = True # default value |
2058 | if strict and source_wt is not None: |
2059 | - if (source_wt.has_changes(source_wt.basis_tree()) |
2060 | - or len(source_wt.get_parent_ids()) > 1): |
2061 | + if (source_wt.has_changes()): |
2062 | raise errors.UncommittedChanges( |
2063 | source_wt, more='Use --no-strict to force the push.') |
2064 | if source_wt.last_revision() != source_wt.branch.last_revision(): |
2065 | |
2066 | === modified file 'bzrlib/help_topics/__init__.py' |
2067 | --- bzrlib/help_topics/__init__.py 2009-06-19 09:06:56 +0000 |
2068 | +++ bzrlib/help_topics/__init__.py 2009-10-15 18:31:14 +0000 |
2069 | @@ -644,38 +644,20 @@ |
2070 | formats may also be introduced to improve performance and |
2071 | scalability. |
2072 | |
2073 | -Use the following guidelines to select a format (stopping |
2074 | -as soon as a condition is true): |
2075 | - |
2076 | -* If you are working on an existing project, use whatever |
2077 | - format that project is using. (Bazaar will do this for you |
2078 | - by default). |
2079 | - |
2080 | -* If you are using bzr-svn to interoperate with a Subversion |
2081 | - repository, use 1.14-rich-root. |
2082 | - |
2083 | -* If you are working on a project with big trees (5000+ paths) |
2084 | - or deep history (5000+ revisions), use 1.14. |
2085 | - |
2086 | -* Otherwise, use the default format - it is good enough for |
2087 | - most projects. |
2088 | - |
2089 | -If some of your developers are unable to use the most recent |
2090 | -version of Bazaar (due to distro package availability say), be |
2091 | -sure to adjust the guidelines above accordingly. For example, |
2092 | -you may need to select 1.9 instead of 1.14 if your project has |
2093 | -standardized on Bazaar 1.13.1 say. |
2094 | - |
2095 | -Note: Many of the currently supported formats have two variants: |
2096 | +The newest format, 2a, is highly recommended. If your |
2097 | +project is not using 2a, then you should suggest that |
2098 | +the project owner upgrade. |
2099 | + |
2100 | + |
2101 | +Note: Some of the older formats have two variants: |
2102 | a plain one and a rich-root one. The latter include an additional |
2103 | field about the root of the tree. There is no performance cost |
2104 | for using a rich-root format but you cannot easily merge changes |
2105 | from a rich-root format into a plain format. As a consequence, |
2106 | moving a project to a rich-root format takes some co-ordination |
2107 | in that all contributors need to upgrade their repositories |
2108 | -around the same time. (It is for this reason that we have delayed |
2109 | -making a rich-root format the default so far, though we will do |
2110 | -so at some appropriate time in the future.) |
2111 | +around the same time. 2a and all future formats will be |
2112 | +implicitly rich-root. |
2113 | |
2114 | See ``bzr help current-formats`` for the complete list of |
2115 | currently supported formats. See ``bzr help other-formats`` for |
2116 | |
2117 | === modified file 'bzrlib/help_topics/en/debug-flags.txt' |
2118 | --- bzrlib/help_topics/en/debug-flags.txt 2009-08-20 05:02:45 +0000 |
2119 | +++ bzrlib/help_topics/en/debug-flags.txt 2009-10-15 18:31:14 +0000 |
2120 | @@ -22,16 +22,18 @@ |
2121 | -Dhttp Trace http connections, requests and responses. |
2122 | -Dindex Trace major index operations. |
2123 | -Dknit Trace knit operations. |
2124 | --Dstrict_locks Trace when OS locks are potentially used in a non-portable |
2125 | - manner. |
2126 | -Dlock Trace when lockdir locks are taken or released. |
2127 | -Dprogress Trace progress bar operations. |
2128 | -Dmerge Emit information for debugging merges. |
2129 | -Dno_apport Don't use apport to report crashes. |
2130 | --Dunlock Some errors during unlock are treated as warnings. |
2131 | -Dpack Emit information about pack operations. |
2132 | +-Drelock Emit a message every time a branch or repository object is |
2133 | + unlocked then relocked the same way. |
2134 | -Dsftp Trace SFTP internals. |
2135 | -Dstream Trace fetch streams. |
2136 | +-Dstrict_locks Trace when OS locks are potentially used in a non-portable |
2137 | + manner. |
2138 | +-Dunlock Some errors during unlock are treated as warnings. |
2139 | -DIDS_never Never use InterDifferingSerializer when fetching. |
2140 | -DIDS_always Always use InterDifferingSerializer to fetch if appropriate |
2141 | for the format, even for non-local fetches. |
2142 | |
2143 | === modified file 'bzrlib/index.py' |
2144 | --- bzrlib/index.py 2009-10-08 15:44:41 +0000 |
2145 | +++ bzrlib/index.py 2009-10-15 18:31:14 +0000 |
2146 | @@ -1,4 +1,4 @@ |
2147 | -# Copyright (C) 2007, 2008 Canonical Ltd |
2148 | +# Copyright (C) 2007, 2008, 2009 Canonical Ltd |
2149 | # |
2150 | # This program is free software; you can redistribute it and/or modify |
2151 | # it under the terms of the GNU General Public License as published by |
2152 | @@ -40,7 +40,7 @@ |
2153 | debug, |
2154 | errors, |
2155 | ) |
2156 | -from bzrlib._static_tuple_c import StaticTuple |
2157 | +from bzrlib.static_tuple import StaticTuple |
2158 | |
2159 | _HEADER_READV = (0, 200) |
2160 | _OPTION_KEY_ELEMENTS = "key_elements=" |
2161 | @@ -203,7 +203,9 @@ |
2162 | if reference not in self._nodes: |
2163 | self._check_key(reference) |
2164 | absent_references.append(reference) |
2165 | + # TODO: StaticTuple |
2166 | node_refs.append(tuple(reference_list)) |
2167 | + # TODO: StaticTuple |
2168 | return tuple(node_refs), absent_references |
2169 | |
2170 | def add_node(self, key, value, references=()): |
2171 | @@ -369,7 +371,7 @@ |
2172 | suitable for production use. :XXX |
2173 | """ |
2174 | |
2175 | - def __init__(self, transport, name, size): |
2176 | + def __init__(self, transport, name, size, unlimited_cache=False): |
2177 | """Open an index called name on transport. |
2178 | |
2179 | :param transport: A bzrlib.transport.Transport. |
2180 | |
2181 | === modified file 'bzrlib/lock.py' |
2182 | --- bzrlib/lock.py 2009-07-31 16:51:48 +0000 |
2183 | +++ bzrlib/lock.py 2009-10-15 18:31:13 +0000 |
2184 | @@ -518,3 +518,24 @@ |
2185 | # We default to using the first available lock class. |
2186 | _lock_type, WriteLock, ReadLock = _lock_classes[0] |
2187 | |
2188 | + |
2189 | +class _RelockDebugMixin(object): |
2190 | + """Mixin support for -Drelock flag. |
2191 | + |
2192 | + Add this as a base class then call self._note_lock with 'r' or 'w' when |
2193 | + acquiring a read- or write-lock. If this object was previously locked (and |
2194 | + locked the same way), and -Drelock is set, then this will trace.note a |
2195 | + message about it. |
2196 | + """ |
2197 | + |
2198 | + _prev_lock = None |
2199 | + |
2200 | + def _note_lock(self, lock_type): |
2201 | + if 'relock' in debug.debug_flags and self._prev_lock == lock_type: |
2202 | + if lock_type == 'r': |
2203 | + type_name = 'read' |
2204 | + else: |
2205 | + type_name = 'write' |
2206 | + trace.note('%r was %s locked again', self, type_name) |
2207 | + self._prev_lock = lock_type |
2208 | + |
2209 | |
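The mixin's behaviour is easy to check in isolation: a note is emitted only when the new lock type matches the previous one. A standalone sketch (a list collects notes in place of `trace.note`, and the `-Drelock` flag check is omitted):

```python
class RelockDebugMixin:
    """Record a note when an object is re-locked the same way it was
    locked before (read after read, or write after write)."""

    _prev_lock = None

    def __init__(self):
        self.notes = []

    def _note_lock(self, lock_type):
        if self._prev_lock == lock_type:
            type_name = 'read' if lock_type == 'r' else 'write'
            self.notes.append('%r was %s locked again' % (self, type_name))
        self._prev_lock = lock_type

obj = RelockDebugMixin()
obj._note_lock('r')   # first lock: no note
obj._note_lock('r')   # read lock again: noted
obj._note_lock('w')   # different lock type: no note
obj._note_lock('w')   # write lock again: noted
```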
2210 | === modified file 'bzrlib/lockable_files.py' |
2211 | --- bzrlib/lockable_files.py 2009-07-27 05:39:01 +0000 |
2212 | +++ bzrlib/lockable_files.py 2009-10-15 18:31:14 +0000 |
2213 | @@ -32,8 +32,7 @@ |
2214 | """) |
2215 | |
2216 | from bzrlib.decorators import ( |
2217 | - needs_read_lock, |
2218 | - needs_write_lock, |
2219 | + only_raises, |
2220 | ) |
2221 | from bzrlib.symbol_versioning import ( |
2222 | deprecated_in, |
2223 | @@ -221,6 +220,7 @@ |
2224 | """Setup a write transaction.""" |
2225 | self._set_transaction(transactions.WriteTransaction()) |
2226 | |
2227 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2228 | def unlock(self): |
2229 | if not self._lock_mode: |
2230 | return lock.cant_unlock_not_held(self) |
2231 | |
2232 | === modified file 'bzrlib/lockdir.py' |
2233 | --- bzrlib/lockdir.py 2009-07-27 05:24:02 +0000 |
2234 | +++ bzrlib/lockdir.py 2009-10-15 18:31:14 +0000 |
2235 | @@ -112,6 +112,7 @@ |
2236 | lock, |
2237 | ) |
2238 | import bzrlib.config |
2239 | +from bzrlib.decorators import only_raises |
2240 | from bzrlib.errors import ( |
2241 | DirectoryNotEmpty, |
2242 | FileExists, |
2243 | @@ -286,6 +287,7 @@ |
2244 | info_bytes) |
2245 | return tmpname |
2246 | |
2247 | + @only_raises(LockNotHeld, LockBroken) |
2248 | def unlock(self): |
2249 | """Release a held lock |
2250 | """ |
2251 | |
2252 | === modified file 'bzrlib/merge.py' |
2253 | --- bzrlib/merge.py 2009-10-06 12:25:59 +0000 |
2254 | +++ bzrlib/merge.py 2009-10-15 18:31:14 +0000 |
2255 | @@ -35,6 +35,10 @@ |
2256 | ui, |
2257 | versionedfile |
2258 | ) |
2259 | +from bzrlib.symbol_versioning import ( |
2260 | + deprecated_in, |
2261 | + deprecated_method, |
2262 | + ) |
2263 | # TODO: Report back as changes are merged in |
2264 | |
2265 | |
2266 | @@ -226,6 +230,7 @@ |
2267 | revision_id = _mod_revision.ensure_null(revision_id) |
2268 | return branch, self.revision_tree(revision_id, branch) |
2269 | |
2270 | + @deprecated_method(deprecated_in((2, 1, 0))) |
2271 | def ensure_revision_trees(self): |
2272 | if self.this_revision_tree is None: |
2273 | self.this_basis_tree = self.revision_tree(self.this_basis) |
2274 | @@ -239,6 +244,7 @@ |
2275 | other_rev_id = self.other_basis |
2276 | self.other_tree = other_basis_tree |
2277 | |
2278 | + @deprecated_method(deprecated_in((2, 1, 0))) |
2279 | def file_revisions(self, file_id): |
2280 | self.ensure_revision_trees() |
2281 | def get_id(tree, file_id): |
2282 | @@ -252,6 +258,7 @@ |
2283 | trees = (self.this_basis_tree, self.other_tree) |
2284 | return [get_id(tree, file_id) for tree in trees] |
2285 | |
2286 | + @deprecated_method(deprecated_in((2, 1, 0))) |
2287 | def check_basis(self, check_clean, require_commits=True): |
2288 | if self.this_basis is None and require_commits is True: |
2289 | raise errors.BzrCommandError( |
2290 | @@ -262,6 +269,7 @@ |
2291 | if self.this_basis != self.this_rev_id: |
2292 | raise errors.UncommittedChanges(self.this_tree) |
2293 | |
2294 | + @deprecated_method(deprecated_in((2, 1, 0))) |
2295 | def compare_basis(self): |
2296 | try: |
2297 | basis_tree = self.revision_tree(self.this_tree.last_revision()) |
2298 | @@ -274,7 +282,8 @@ |
2299 | self.interesting_files = file_list |
2300 | |
2301 | def set_pending(self): |
2302 | - if not self.base_is_ancestor or not self.base_is_other_ancestor or self.other_rev_id is None: |
2303 | + if (not self.base_is_ancestor or not self.base_is_other_ancestor |
2304 | + or self.other_rev_id is None): |
2305 | return |
2306 | self._add_parent() |
2307 | |
2308 | |
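The merge.py hunks mark four methods with `@deprecated_method(deprecated_in((2, 1, 0)))`. A rough sketch of how such a decorator pair can work (these are simplified stand-ins, not `bzrlib.symbol_versioning`'s actual implementation):

```python
# Minimal model of a deprecated_method/deprecated_in pair: the decorator
# emits a DeprecationWarning naming the method and version, then delegates.
import functools
import warnings

def deprecated_in(version):
    """Render a version tuple like (2, 1, 0) as a deprecation string."""
    return 'deprecated in %d.%d.%d' % version

def deprecated_method(deprecation_string):
    def decorator(method):
        @functools.wraps(method)
        def wrapped(self, *args, **kwargs):
            warnings.warn('%s was %s' % (method.__name__, deprecation_string),
                          DeprecationWarning, stacklevel=2)
            return method(self, *args, **kwargs)
        return wrapped
    return decorator

class Merger(object):
    @deprecated_method(deprecated_in((2, 1, 0)))
    def compare_basis(self):
        return 'compared'
```

The method keeps working for existing callers while warning them, which is why the diff can deprecate `ensure_revision_trees`, `file_revisions`, `check_basis` and `compare_basis` without removing them.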
2309 | === modified file 'bzrlib/mutabletree.py' |
2310 | --- bzrlib/mutabletree.py 2009-10-06 12:25:59 +0000 |
2311 | +++ bzrlib/mutabletree.py 2009-10-15 18:31:14 +0000 |
2312 | @@ -233,12 +233,20 @@ |
2313 | raise NotImplementedError(self._gather_kinds) |
2314 | |
2315 | @needs_read_lock |
2316 | - def has_changes(self, from_tree): |
2317 | - """Quickly check that the tree contains at least one change. |
2318 | + def has_changes(self, _from_tree=None): |
2319 | + """Quickly check that the tree contains at least one committable change. |
2320 | + |
2321 | + :param _from_tree: tree to compare against to find changes (defaults |
2322 | + to the basis tree; intended to be used by tests). |
2323 | |
2324 | :return: True if a change is found. False otherwise |
2325 | """ |
2326 | - changes = self.iter_changes(from_tree) |
2327 | + # Check pending merges |
2328 | + if len(self.get_parent_ids()) > 1: |
2329 | + return True |
2330 | + if _from_tree is None: |
2331 | + _from_tree = self.basis_tree() |
2332 | + changes = self.iter_changes(_from_tree) |
2333 | try: |
2334 | change = changes.next() |
2335 | # Exclude root (talk about black magic... --vila 20090629) |
2336 | |
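The behavioural point of the `has_changes` hunk is that pending merges (more than one parent id) now count as changes, and the comparison tree defaults to the basis tree. A toy model of that control flow (the `FakeTree` class is a stand-in; the real method also excludes root-only changes, which is elided here):

```python
# Sketch of the new has_changes() logic from the mutabletree.py hunk above:
# check pending merges first, then fall back to iterating tree changes.
class FakeTree(object):
    def __init__(self, parent_ids, changed):
        self._parent_ids = parent_ids
        self._changed = changed

    def get_parent_ids(self):
        return self._parent_ids

    def basis_tree(self):
        return self

    def iter_changes(self, from_tree):
        return iter(self._changed)

    def has_changes(self, _from_tree=None):
        # A pending merge is a committable change even if no file content
        # differs from the basis.
        if len(self.get_parent_ids()) > 1:
            return True
        if _from_tree is None:
            _from_tree = self.basis_tree()
        changes = self.iter_changes(_from_tree)
        try:
            next(changes)
        except StopIteration:
            return False
        return True
```

This is what lets callers like `reconfigure._check` and `send` (later in this diff) drop their own `len(tree.get_parent_ids()) > 1` checks.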
2337 | === modified file 'bzrlib/osutils.py' |
2338 | --- bzrlib/osutils.py 2009-09-19 01:33:10 +0000 |
2339 | +++ bzrlib/osutils.py 2009-10-15 18:31:14 +0000 |
2340 | @@ -1132,7 +1132,14 @@ |
2341 | bit_iter = iter(rel.split('/')) |
2342 | for bit in bit_iter: |
2343 | lbit = bit.lower() |
2344 | - for look in _listdir(current): |
2345 | + try: |
2346 | + next_entries = _listdir(current) |
2347 | + except OSError: # enoent, eperm, etc |
2348 | + # We can't find this in the filesystem, so just append the |
2349 | + # remaining bits. |
2350 | + current = pathjoin(current, bit, *list(bit_iter)) |
2351 | + break |
2352 | + for look in next_entries: |
2353 | if lbit == look.lower(): |
2354 | current = pathjoin(current, look) |
2355 | break |
2356 | @@ -1142,7 +1149,7 @@ |
2357 | # the target of a move, for example). |
2358 | current = pathjoin(current, bit, *list(bit_iter)) |
2359 | break |
2360 | - return current[len(abs_base)+1:] |
2361 | + return current[len(abs_base):].lstrip('/') |
2362 | |
2363 | # XXX - TODO - we need better detection/integration of case-insensitive |
2364 | # file-systems; Linux often sees FAT32 devices (or NFS-mounted OSX |
2365 | |
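The osutils hunk makes case-insensitive path canonicalization tolerant of unlistable directories (`OSError` from `listdir`), and replaces the fixed `+1` slice offset with `lstrip('/')` so a root-ish base cannot eat the first character of the result. A simplified, self-contained version of that walk (hypothetical helper, not bzrlib's exact function):

```python
# Resolve each component of rel against base case-insensitively; if a
# directory cannot be listed or a component is not found, append the
# remaining bits verbatim instead of failing.
import os

def canonical_relpath(base, rel, _listdir=os.listdir):
    current = base
    bit_iter = iter(rel.split('/'))
    for bit in bit_iter:
        lbit = bit.lower()
        try:
            next_entries = _listdir(current)
        except OSError:  # ENOENT, EPERM, etc.
            # We can't inspect the filesystem here, so keep the remaining
            # bits as given.
            current = '/'.join([current, bit] + list(bit_iter))
            break
        for look in next_entries:
            if lbit == look.lower():
                current = '/'.join([current, look])
                break
        else:
            # Not found: the path may name something not yet created.
            current = '/'.join([current, bit] + list(bit_iter))
            break
    # lstrip('/') rather than a fixed +1 offset, so the result is correct
    # even when base itself ends in a separator.
    return current[len(base):].lstrip('/')
```

A fake `_listdir` makes the two fallback branches easy to exercise without touching the real filesystem.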
2366 | === modified file 'bzrlib/python-compat.h' |
2367 | --- bzrlib/python-compat.h 2009-09-30 18:00:42 +0000 |
2368 | +++ bzrlib/python-compat.h 2009-10-15 18:31:14 +0000 |
2369 | @@ -28,6 +28,9 @@ |
2370 | /* http://www.python.org/dev/peps/pep-0353/ */ |
2371 | #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) |
2372 | typedef int Py_ssize_t; |
2373 | + typedef Py_ssize_t (*lenfunc)(PyObject *); |
2374 | + typedef PyObject * (*ssizeargfunc)(PyObject *, Py_ssize_t); |
2375 | + typedef PyObject * (*ssizessizeargfunc)(PyObject *, Py_ssize_t, Py_ssize_t); |
2376 | #define PY_SSIZE_T_MAX INT_MAX |
2377 | #define PY_SSIZE_T_MIN INT_MIN |
2378 | #define PyInt_FromSsize_t(z) PyInt_FromLong(z) |
2379 | @@ -77,5 +80,8 @@ |
2380 | #ifndef Py_TYPE |
2381 | # define Py_TYPE(o) ((o)->ob_type) |
2382 | #endif |
2383 | +#ifndef Py_REFCNT |
2384 | +# define Py_REFCNT(o) ((o)->ob_refcnt) |
2385 | +#endif |
2386 | |
2387 | #endif /* _BZR_PYTHON_COMPAT_H */ |
2388 | |
2389 | === modified file 'bzrlib/reconfigure.py' |
2390 | --- bzrlib/reconfigure.py 2009-07-24 03:15:56 +0000 |
2391 | +++ bzrlib/reconfigure.py 2009-10-15 18:31:14 +0000 |
2392 | @@ -265,9 +265,7 @@ |
2393 | |
2394 | def _check(self): |
2395 | """Raise if reconfiguration would destroy local changes""" |
2396 | - if self._destroy_tree: |
2397 | - # XXX: What about pending merges ? -- vila 20090629 |
2398 | - if self.tree.has_changes(self.tree.basis_tree()): |
2399 | + if self._destroy_tree and self.tree.has_changes(): |
2400 | raise errors.UncommittedChanges(self.tree) |
2401 | if self._create_reference and self.local_branch is not None: |
2402 | reference_branch = branch.Branch.open(self._select_bind_location()) |
2403 | |
2404 | === modified file 'bzrlib/remote.py' |
2405 | --- bzrlib/remote.py 2009-10-02 05:43:41 +0000 |
2406 | +++ bzrlib/remote.py 2009-10-15 18:31:14 +0000 |
2407 | @@ -33,7 +33,7 @@ |
2408 | ) |
2409 | from bzrlib.branch import BranchReferenceFormat |
2410 | from bzrlib.bzrdir import BzrDir, RemoteBzrDirFormat |
2411 | -from bzrlib.decorators import needs_read_lock, needs_write_lock |
2412 | +from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises |
2413 | from bzrlib.errors import ( |
2414 | NoSuchRevision, |
2415 | SmartProtocolError, |
2416 | @@ -619,7 +619,7 @@ |
2417 | return self._custom_format._serializer |
2418 | |
2419 | |
2420 | -class RemoteRepository(_RpcHelper): |
2421 | +class RemoteRepository(_RpcHelper, lock._RelockDebugMixin): |
2422 | """Repository accessed over rpc. |
2423 | |
2424 | For the moment most operations are performed using local transport-backed |
2425 | @@ -949,6 +949,7 @@ |
2426 | def lock_read(self): |
2427 | # wrong eventually - want a local lock cache context |
2428 | if not self._lock_mode: |
2429 | + self._note_lock('r') |
2430 | self._lock_mode = 'r' |
2431 | self._lock_count = 1 |
2432 | self._unstacked_provider.enable_cache(cache_misses=True) |
2433 | @@ -974,6 +975,7 @@ |
2434 | |
2435 | def lock_write(self, token=None, _skip_rpc=False): |
2436 | if not self._lock_mode: |
2437 | + self._note_lock('w') |
2438 | if _skip_rpc: |
2439 | if self._lock_token is not None: |
2440 | if token != self._lock_token: |
2441 | @@ -1082,6 +1084,7 @@ |
2442 | else: |
2443 | raise errors.UnexpectedSmartServerResponse(response) |
2444 | |
2445 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2446 | def unlock(self): |
2447 | if not self._lock_count: |
2448 | return lock.cant_unlock_not_held(self) |
2449 | @@ -2081,7 +2084,7 @@ |
2450 | return self._custom_format.supports_set_append_revisions_only() |
2451 | |
2452 | |
2453 | -class RemoteBranch(branch.Branch, _RpcHelper): |
2454 | +class RemoteBranch(branch.Branch, _RpcHelper, lock._RelockDebugMixin): |
2455 | """Branch stored on a server accessed by HPSS RPC. |
2456 | |
2457 | At the moment most operations are mapped down to simple file operations. |
2458 | @@ -2318,6 +2321,7 @@ |
2459 | def lock_read(self): |
2460 | self.repository.lock_read() |
2461 | if not self._lock_mode: |
2462 | + self._note_lock('r') |
2463 | self._lock_mode = 'r' |
2464 | self._lock_count = 1 |
2465 | if self._real_branch is not None: |
2466 | @@ -2343,6 +2347,7 @@ |
2467 | |
2468 | def lock_write(self, token=None): |
2469 | if not self._lock_mode: |
2470 | + self._note_lock('w') |
2471 | # Lock the branch and repo in one remote call. |
2472 | remote_tokens = self._remote_lock_write(token) |
2473 | self._lock_token, self._repo_lock_token = remote_tokens |
2474 | @@ -2383,6 +2388,7 @@ |
2475 | return |
2476 | raise errors.UnexpectedSmartServerResponse(response) |
2477 | |
2478 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2479 | def unlock(self): |
2480 | try: |
2481 | self._lock_count -= 1 |
2482 | @@ -2428,6 +2434,7 @@ |
2483 | raise NotImplementedError(self.dont_leave_lock_in_place) |
2484 | self._leave_lock = False |
2485 | |
2486 | + @needs_read_lock |
2487 | def get_rev_id(self, revno, history=None): |
2488 | if revno == 0: |
2489 | return _mod_revision.NULL_REVISION |
2490 | |
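`RemoteRepository` and `RemoteBranch` now mix in `lock._RelockDebugMixin` and call `self._note_lock('r')`/`self._note_lock('w')` on first acquisition. The mixin's job, visible from the pack_repo hunks below, is to emit a note when an object that was fully unlocked gets locked again, guarded by the `-Drelock` debug flag. A self-contained approximation (`note` and `debug_flags` here are local stand-ins for `bzrlib.trace.note` and `bzrlib.debug.debug_flags`):

```python
# Toy model of the relock-debugging pattern used in this diff.
notes = []

def note(fmt, *args):
    notes.append(fmt % args)

debug_flags = set(['relock'])

class _RelockDebugMixin(object):
    """Log a note when an object is relocked after being fully unlocked."""
    _prev_lock = None

    def _note_lock(self, lock_type):
        if 'relock' in debug_flags and self._prev_lock == lock_type:
            if lock_type == 'r':
                type_name = 'read'
            else:
                type_name = 'write'
            note('%r was %s locked again', self, type_name)
        self._prev_lock = lock_type

class Repo(_RelockDebugMixin):
    def __init__(self):
        self._lock_mode = None

    def lock_read(self):
        if not self._lock_mode:
            self._note_lock('r')
            self._lock_mode = 'r'

    def unlock(self):
        self._lock_mode = None
```

The first lock is silent; only an unlock-then-relock of the same kind produces the "was read/write locked again" note, flagging code that repeatedly drops and reacquires a lock it could have held.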
2491 | === modified file 'bzrlib/repofmt/pack_repo.py' |
2492 | --- bzrlib/repofmt/pack_repo.py 2009-09-08 05:51:36 +0000 |
2493 | +++ bzrlib/repofmt/pack_repo.py 2009-10-15 18:31:13 +0000 |
2494 | @@ -54,7 +54,7 @@ |
2495 | revision as _mod_revision, |
2496 | ) |
2497 | |
2498 | -from bzrlib.decorators import needs_write_lock |
2499 | +from bzrlib.decorators import needs_write_lock, only_raises |
2500 | from bzrlib.btree_index import ( |
2501 | BTreeGraphIndex, |
2502 | BTreeBuilder, |
2503 | @@ -73,6 +73,7 @@ |
2504 | ) |
2505 | from bzrlib.trace import ( |
2506 | mutter, |
2507 | + note, |
2508 | warning, |
2509 | ) |
2510 | |
2511 | @@ -224,10 +225,14 @@ |
2512 | return self.index_name('text', name) |
2513 | |
2514 | def _replace_index_with_readonly(self, index_type): |
2515 | + unlimited_cache = False |
2516 | + if index_type == 'chk': |
2517 | + unlimited_cache = True |
2518 | setattr(self, index_type + '_index', |
2519 | self.index_class(self.index_transport, |
2520 | self.index_name(index_type, self.name), |
2521 | - self.index_sizes[self.index_offset(index_type)])) |
2522 | + self.index_sizes[self.index_offset(index_type)], |
2523 | + unlimited_cache=unlimited_cache)) |
2524 | |
2525 | |
2526 | class ExistingPack(Pack): |
2527 | @@ -1674,7 +1679,7 @@ |
2528 | txt_index = self._make_index(name, '.tix') |
2529 | sig_index = self._make_index(name, '.six') |
2530 | if self.chk_index is not None: |
2531 | - chk_index = self._make_index(name, '.cix') |
2532 | + chk_index = self._make_index(name, '.cix', unlimited_cache=True) |
2533 | else: |
2534 | chk_index = None |
2535 | result = ExistingPack(self._pack_transport, name, rev_index, |
2536 | @@ -1699,7 +1704,8 @@ |
2537 | txt_index = self._make_index(name, '.tix', resume=True) |
2538 | sig_index = self._make_index(name, '.six', resume=True) |
2539 | if self.chk_index is not None: |
2540 | - chk_index = self._make_index(name, '.cix', resume=True) |
2541 | + chk_index = self._make_index(name, '.cix', resume=True, |
2542 | + unlimited_cache=True) |
2543 | else: |
2544 | chk_index = None |
2545 | result = self.resumed_pack_factory(name, rev_index, inv_index, |
2546 | @@ -1735,7 +1741,7 @@ |
2547 | return self._index_class(self.transport, 'pack-names', None |
2548 | ).iter_all_entries() |
2549 | |
2550 | - def _make_index(self, name, suffix, resume=False): |
2551 | + def _make_index(self, name, suffix, resume=False, unlimited_cache=False): |
2552 | size_offset = self._suffix_offsets[suffix] |
2553 | index_name = name + suffix |
2554 | if resume: |
2555 | @@ -1744,7 +1750,8 @@ |
2556 | else: |
2557 | transport = self._index_transport |
2558 | index_size = self._names[name][size_offset] |
2559 | - return self._index_class(transport, index_name, index_size) |
2560 | + return self._index_class(transport, index_name, index_size, |
2561 | + unlimited_cache=unlimited_cache) |
2562 | |
2563 | def _max_pack_count(self, total_revisions): |
2564 | """Return the maximum number of packs to use for total revisions. |
2565 | @@ -2300,6 +2307,9 @@ |
2566 | if self._write_lock_count == 1: |
2567 | self._transaction = transactions.WriteTransaction() |
2568 | if not locked: |
2569 | + if 'relock' in debug.debug_flags and self._prev_lock == 'w': |
2570 | + note('%r was write locked again', self) |
2571 | + self._prev_lock = 'w' |
2572 | for repo in self._fallback_repositories: |
2573 | # Writes don't affect fallback repos |
2574 | repo.lock_read() |
2575 | @@ -2312,6 +2322,9 @@ |
2576 | else: |
2577 | self.control_files.lock_read() |
2578 | if not locked: |
2579 | + if 'relock' in debug.debug_flags and self._prev_lock == 'r': |
2580 | + note('%r was read locked again', self) |
2581 | + self._prev_lock = 'r' |
2582 | for repo in self._fallback_repositories: |
2583 | repo.lock_read() |
2584 | self._refresh_data() |
2585 | @@ -2345,6 +2358,7 @@ |
2586 | packer = ReconcilePacker(collection, packs, extension, revs) |
2587 | return packer.pack(pb) |
2588 | |
2589 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2590 | def unlock(self): |
2591 | if self._write_lock_count == 1 and self._write_group is not None: |
2592 | self.abort_write_group() |
2593 | |
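The `unlimited_cache=True` plumbing for `.cix` (chk) indices exists because chk keys are sha1-based, so leaf-page access is effectively random and a bounded LRU cache would thrash. A toy model of that constructor choice (`LRUCache` and `BTreeIndex` here are minimal stand-ins, not bzrlib's `lru_cache.LRUCache` or `BTreeGraphIndex`):

```python
# With unlimited_cache=True the leaf cache is a plain dict (parse each page
# once, keep it forever); otherwise a bounded LRU-style cache is used.
from collections import OrderedDict

class LRUCache(object):
    def __init__(self, max_cache):
        self._max = max_cache
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self._max:
            self._data.popitem(last=False)  # evict the oldest entry

    def __contains__(self, key):
        return key in self._data

class BTreeIndex(object):
    def __init__(self, unlimited_cache=False, max_cache=2):
        if unlimited_cache:
            self._leaf_node_cache = {}
        else:
            self._leaf_node_cache = LRUCache(max_cache)
```

This mirrors what `test_chk_bytes_are_fully_buffered` later in the diff asserts: `type(index._leaf_node_cache)` is `dict` for chk indices.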
2594 | === modified file 'bzrlib/repository.py' |
2595 | --- bzrlib/repository.py 2009-09-24 04:54:19 +0000 |
2596 | +++ bzrlib/repository.py 2009-10-15 18:31:14 +0000 |
2597 | @@ -49,7 +49,8 @@ |
2598 | from bzrlib.testament import Testament |
2599 | """) |
2600 | |
2601 | -from bzrlib.decorators import needs_read_lock, needs_write_lock |
2602 | +from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises |
2603 | +from bzrlib.lock import _RelockDebugMixin |
2604 | from bzrlib.inter import InterObject |
2605 | from bzrlib.inventory import ( |
2606 | Inventory, |
2607 | @@ -856,7 +857,7 @@ |
2608 | # Repositories |
2609 | |
2610 | |
2611 | -class Repository(object): |
2612 | +class Repository(_RelockDebugMixin): |
2613 | """Repository holding history for one or more branches. |
2614 | |
2615 | The repository holds and retrieves historical information including |
2616 | @@ -1381,6 +1382,7 @@ |
2617 | locked = self.is_locked() |
2618 | result = self.control_files.lock_write(token=token) |
2619 | if not locked: |
2620 | + self._note_lock('w') |
2621 | for repo in self._fallback_repositories: |
2622 | # Writes don't affect fallback repos |
2623 | repo.lock_read() |
2624 | @@ -1391,6 +1393,7 @@ |
2625 | locked = self.is_locked() |
2626 | self.control_files.lock_read() |
2627 | if not locked: |
2628 | + self._note_lock('r') |
2629 | for repo in self._fallback_repositories: |
2630 | repo.lock_read() |
2631 | self._refresh_data() |
2632 | @@ -1720,6 +1723,7 @@ |
2633 | self.start_write_group() |
2634 | return result |
2635 | |
2636 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2637 | def unlock(self): |
2638 | if (self.control_files._lock_count == 1 and |
2639 | self.control_files._lock_mode == 'w'): |
2640 | @@ -4315,6 +4319,13 @@ |
2641 | ): |
2642 | if versioned_file is None: |
2643 | continue |
2644 | + # TODO: key is often going to be a StaticTuple object |
2645 | + # I don't believe we can define a method by which |
2646 | + # (prefix,) + StaticTuple will work, though we could |
2647 | + # define a StaticTuple.sq_concat that would allow you to |
2648 | + # pass in either a tuple or a StaticTuple as the second |
2649 | + # object, so instead we could have: |
2650 | + # StaticTuple(prefix) + key here... |
2651 | missing_keys.update((prefix,) + key for key in |
2652 | versioned_file.get_missing_compression_parent_keys()) |
2653 | except NotImplementedError: |
2654 | |
2655 | === modified file 'bzrlib/send.py' |
2656 | --- bzrlib/send.py 2009-07-17 14:41:02 +0000 |
2657 | +++ bzrlib/send.py 2009-10-15 18:31:13 +0000 |
2658 | @@ -115,14 +115,13 @@ |
2659 | ).get_user_option_as_bool('send_strict') |
2660 | if strict is None: strict = True # default value |
2661 | if strict and tree is not None: |
2662 | - if (tree.has_changes(tree.basis_tree()) |
2663 | - or len(tree.get_parent_ids()) > 1): |
2664 | + if (tree.has_changes()): |
2665 | raise errors.UncommittedChanges( |
2666 | tree, more='Use --no-strict to force the send.') |
2667 | if tree.last_revision() != tree.branch.last_revision(): |
2668 | # The tree has lost sync with its branch, there is little |
2669 | # chance that the user is aware of it but he can still force |
2670 | - # the push with --no-strict |
2671 | + # the send with --no-strict |
2672 | raise errors.OutOfDateTree( |
2673 | tree, more='Use --no-strict to force the send.') |
2674 | revision_id = branch.last_revision() |
2675 | |
2676 | === added file 'bzrlib/static_tuple.py' |
2677 | --- bzrlib/static_tuple.py 1970-01-01 00:00:00 +0000 |
2678 | +++ bzrlib/static_tuple.py 2009-10-15 18:31:14 +0000 |
2679 | @@ -0,0 +1,25 @@ |
2680 | +# Copyright (C) 2009 Canonical Ltd |
2681 | +# |
2682 | +# This program is free software; you can redistribute it and/or modify |
2683 | +# it under the terms of the GNU General Public License as published by |
2684 | +# the Free Software Foundation; either version 2 of the License, or |
2685 | +# (at your option) any later version. |
2686 | +# |
2687 | +# This program is distributed in the hope that it will be useful, |
2688 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2689 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2690 | +# GNU General Public License for more details. |
2691 | +# |
2692 | +# You should have received a copy of the GNU General Public License |
2693 | +# along with this program; if not, write to the Free Software |
2694 | +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA |
2695 | + |
2696 | +"""Interface thunk for a StaticTuple implementation.""" |
2697 | + |
2698 | +try: |
2699 | + from bzrlib._static_tuple_c import StaticTuple |
2700 | +except ImportError, e: |
2701 | + from bzrlib import osutils |
2702 | + osutils.failed_to_load_extension(e) |
2703 | + from bzrlib._static_tuple_py import StaticTuple |
2704 | + |
2705 | |
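The new `bzrlib/static_tuple.py` follows bzr's standard interface-thunk pattern: try the compiled extension, report the failure once via `osutils.failed_to_load_extension`, then fall back to the pure-Python implementation. The same pattern, reduced to a testable sketch (module names and the `force_python` switch are hypothetical):

```python
# Import-with-fallback pattern: prefer the C extension, report once on
# failure, fall back to a pure-Python implementation.
def load_static_tuple(force_python=False, report_failure=lambda e: None):
    """Return a StaticTuple implementation, preferring the C extension.

    force_python simulates the extension being unavailable.
    """
    try:
        if force_python:
            raise ImportError('compiled extension not available')
        from _static_tuple_c import StaticTuple  # hypothetical C extension
    except ImportError as e:
        report_failure(e)  # bzr routes this to osutils.failed_to_load_extension
        StaticTuple = tuple  # stand-in for the pure-Python fallback
    return StaticTuple
```

Reporting the failure (instead of silently swallowing it) is the point of the `osutils.failed_to_load_extension(e)` call: users get told once that they are running the slower fallback.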
2706 | === modified file 'bzrlib/tests/__init__.py' |
2707 | --- bzrlib/tests/__init__.py 2009-10-08 14:59:54 +0000 |
2708 | +++ bzrlib/tests/__init__.py 2009-10-15 18:31:14 +0000 |
2709 | @@ -222,6 +222,10 @@ |
2710 | '%s is leaking threads among %d leaking tests.\n' % ( |
2711 | TestCase._first_thread_leaker_id, |
2712 | TestCase._leaking_threads_tests)) |
2713 | + # We don't report the main thread as an active one. |
2714 | + self.stream.write( |
2715 | + '%d non-main threads were left active in the end.\n' |
2716 | + % (TestCase._active_threads - 1)) |
2717 | |
2718 | def _extractBenchmarkTime(self, testCase): |
2719 | """Add a benchmark time for the current test case.""" |
2720 | @@ -846,7 +850,13 @@ |
2721 | active = threading.activeCount() |
2722 | leaked_threads = active - TestCase._active_threads |
2723 | TestCase._active_threads = active |
2724 | - if leaked_threads: |
2725 | + # If some tests make the number of threads *decrease*, we'll consider |
2726 | + # that they are just observing old threads dying, not aggressively |
2727 | + # killing random threads. So we don't report these tests as leaking. |
2728 | + # The risk is a false negative (a test sees two threads going away |
2729 | + # but leaks one), but that seems less likely than the false positives |
2730 | + # we avoid (a test sees threads going away and does not leak). |
2731 | + if leaked_threads > 0: |
2732 | TestCase._leaking_threads_tests += 1 |
2733 | if TestCase._first_thread_leaker_id is None: |
2734 | TestCase._first_thread_leaker_id = self.id() |
2735 | @@ -1146,6 +1156,25 @@ |
2736 | self.fail("Incorrect length: wanted %d, got %d for %r" % ( |
2737 | length, len(obj_with_len), obj_with_len)) |
2738 | |
2739 | + def assertLogsError(self, exception_class, func, *args, **kwargs): |
2740 | + """Assert that func(*args, **kwargs) quietly logs a specific exception. |
2741 | + """ |
2742 | + from bzrlib import trace |
2743 | + captured = [] |
2744 | + orig_log_exception_quietly = trace.log_exception_quietly |
2745 | + try: |
2746 | + def capture(): |
2747 | + orig_log_exception_quietly() |
2748 | + captured.append(sys.exc_info()) |
2749 | + trace.log_exception_quietly = capture |
2750 | + func(*args, **kwargs) |
2751 | + finally: |
2752 | + trace.log_exception_quietly = orig_log_exception_quietly |
2753 | + self.assertLength(1, captured) |
2754 | + err = captured[0][1] |
2755 | + self.assertIsInstance(err, exception_class) |
2756 | + return err |
2757 | + |
2758 | def assertPositive(self, val): |
2759 | """Assert that val is greater than 0.""" |
2760 | self.assertTrue(val > 0, 'expected a positive value, but got %s' % val) |
2761 | @@ -3661,6 +3690,7 @@ |
2762 | 'bzrlib.tests.per_repository', |
2763 | 'bzrlib.tests.per_repository_chk', |
2764 | 'bzrlib.tests.per_repository_reference', |
2765 | + 'bzrlib.tests.per_uifactory', |
2766 | 'bzrlib.tests.per_versionedfile', |
2767 | 'bzrlib.tests.per_workingtree', |
2768 | 'bzrlib.tests.test__annotator', |
2769 | |
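`assertLogsError`, added above, works by monkeypatching `trace.log_exception_quietly` to record `sys.exc_info()` while still calling through to the original, then restoring it in a `finally` block. The capture technique on its own (the `trace` class below is a stand-in for the `bzrlib.trace` module):

```python
# Standalone version of the assertLogsError capture technique: intercept the
# quiet-logging hook, record the active exception, restore the hook.
import sys

class trace(object):  # stand-in for the bzrlib.trace module
    @staticmethod
    def log_exception_quietly():
        pass

def capture_logged_error(func, *args, **kwargs):
    """Run func and return the exception it quietly logged via trace."""
    captured = []
    orig = trace.log_exception_quietly
    def capturing():
        orig()  # preserve the original behaviour
        captured.append(sys.exc_info())
    trace.log_exception_quietly = capturing
    try:
        func(*args, **kwargs)
    finally:
        trace.log_exception_quietly = orig
    assert len(captured) == 1, 'expected exactly one quietly-logged error'
    return captured[0][1]
```

This is what lets the per_branch and per_repository tests later in the diff switch from `assertRaises(TestPreventLocking, b.unlock)` to `assertLogsError(...)`: with `only_raises` in place, `unlock` no longer raises, it logs quietly instead.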
2770 | === modified file 'bzrlib/tests/blackbox/test_merge.py' |
2771 | --- bzrlib/tests/blackbox/test_merge.py 2009-09-09 15:43:52 +0000 |
2772 | +++ bzrlib/tests/blackbox/test_merge.py 2009-10-15 18:31:13 +0000 |
2773 | @@ -605,7 +605,7 @@ |
2774 | |
2775 | def test_merge_force(self): |
2776 | self.tree_a.commit('empty change to allow merge to run') |
2777 | - # Second merge on top if the uncommitted one |
2778 | + # Second merge on top of the uncommitted one |
2779 | self.run_bzr(['merge', '../a', '--force'], working_dir='b') |
2780 | |
2781 | |
2782 | |
2783 | === modified file 'bzrlib/tests/blackbox/test_uncommit.py' |
2784 | --- bzrlib/tests/blackbox/test_uncommit.py 2009-04-04 02:50:01 +0000 |
2785 | +++ bzrlib/tests/blackbox/test_uncommit.py 2009-10-15 18:31:14 +0000 |
2786 | @@ -233,14 +233,14 @@ |
2787 | tree3.commit('unchanged', rev_id='c3') |
2788 | |
2789 | wt.merge_from_branch(tree2.branch) |
2790 | - wt.merge_from_branch(tree3.branch) |
2791 | + wt.merge_from_branch(tree3.branch, force=True) |
2792 | wt.commit('merge b3, c3', rev_id='a3') |
2793 | |
2794 | tree2.commit('unchanged', rev_id='b4') |
2795 | tree3.commit('unchanged', rev_id='c4') |
2796 | |
2797 | wt.merge_from_branch(tree3.branch) |
2798 | - wt.merge_from_branch(tree2.branch) |
2799 | + wt.merge_from_branch(tree2.branch, force=True) |
2800 | wt.commit('merge b4, c4', rev_id='a4') |
2801 | |
2802 | self.assertEqual(['a4'], wt.get_parent_ids()) |
2803 | |
2804 | === modified file 'bzrlib/tests/lock_helpers.py' |
2805 | --- bzrlib/tests/lock_helpers.py 2009-04-15 07:30:34 +0000 |
2806 | +++ bzrlib/tests/lock_helpers.py 2009-10-15 18:31:14 +0000 |
2807 | @@ -17,6 +17,7 @@ |
2808 | """Helper functions/classes for testing locking""" |
2809 | |
2810 | from bzrlib import errors |
2811 | +from bzrlib.decorators import only_raises |
2812 | |
2813 | |
2814 | class TestPreventLocking(errors.LockError): |
2815 | @@ -68,6 +69,7 @@ |
2816 | return self._other.lock_write() |
2817 | raise TestPreventLocking('lock_write disabled') |
2818 | |
2819 | + @only_raises(errors.LockNotHeld, errors.LockBroken) |
2820 | def unlock(self): |
2821 | self._sequence.append((self._other_id, 'ul', self._allow_unlock)) |
2822 | if self._allow_unlock: |
2823 | |
2824 | === modified file 'bzrlib/tests/per_branch/test_locking.py' |
2825 | --- bzrlib/tests/per_branch/test_locking.py 2009-07-10 10:46:00 +0000 |
2826 | +++ bzrlib/tests/per_branch/test_locking.py 2009-10-15 18:31:14 +0000 |
2827 | @@ -139,7 +139,7 @@ |
2828 | try: |
2829 | self.assertTrue(b.is_locked()) |
2830 | self.assertTrue(b.repository.is_locked()) |
2831 | - self.assertRaises(TestPreventLocking, b.unlock) |
2832 | + self.assertLogsError(TestPreventLocking, b.unlock) |
2833 | if self.combined_control: |
2834 | self.assertTrue(b.is_locked()) |
2835 | else: |
2836 | @@ -183,7 +183,7 @@ |
2837 | try: |
2838 | self.assertTrue(b.is_locked()) |
2839 | self.assertTrue(b.repository.is_locked()) |
2840 | - self.assertRaises(TestPreventLocking, b.unlock) |
2841 | + self.assertLogsError(TestPreventLocking, b.unlock) |
2842 | self.assertTrue(b.is_locked()) |
2843 | self.assertTrue(b.repository.is_locked()) |
2844 | |
2845 | |
2846 | === modified file 'bzrlib/tests/per_repository/test_write_group.py' |
2847 | --- bzrlib/tests/per_repository/test_write_group.py 2009-09-08 06:25:26 +0000 |
2848 | +++ bzrlib/tests/per_repository/test_write_group.py 2009-10-15 18:31:14 +0000 |
2849 | @@ -84,7 +84,7 @@ |
2850 | # don't need a specific exception for now - this is |
2851 | # really to be sure it's used right, not for signalling |
2852 | # semantic information. |
2853 | - self.assertRaises(errors.BzrError, repo.unlock) |
2854 | + self.assertLogsError(errors.BzrError, repo.unlock) |
2855 | # after this error occurs, the repository is unlocked, and the write |
2856 | # group is gone. you've had your chance, and you blew it. ;-) |
2857 | self.assertFalse(repo.is_locked()) |
2858 | |
2859 | === modified file 'bzrlib/tests/per_repository_chk/test_supported.py' |
2860 | --- bzrlib/tests/per_repository_chk/test_supported.py 2009-09-08 06:25:26 +0000 |
2861 | +++ bzrlib/tests/per_repository_chk/test_supported.py 2009-10-15 18:31:14 +0000 |
2862 | @@ -17,8 +17,10 @@ |
2863 | """Tests for repositories that support CHK indices.""" |
2864 | |
2865 | from bzrlib import ( |
2866 | + btree_index, |
2867 | errors, |
2868 | osutils, |
2869 | + repository, |
2870 | ) |
2871 | from bzrlib.versionedfile import VersionedFiles |
2872 | from bzrlib.tests.per_repository_chk import TestCaseWithRepositoryCHK |
2873 | @@ -108,6 +110,39 @@ |
2874 | finally: |
2875 | repo.unlock() |
2876 | |
2877 | + def test_chk_bytes_are_fully_buffered(self): |
2878 | + repo = self.make_repository('.') |
2879 | + repo.lock_write() |
2880 | + self.addCleanup(repo.unlock) |
2881 | + repo.start_write_group() |
2882 | + try: |
2883 | + sha1, len, _ = repo.chk_bytes.add_lines((None,), |
2884 | + None, ["foo\n", "bar\n"], random_id=True) |
2885 | + self.assertEqual('4e48e2c9a3d2ca8a708cb0cc545700544efb5021', |
2886 | + sha1) |
2887 | + self.assertEqual( |
2888 | + set([('sha1:4e48e2c9a3d2ca8a708cb0cc545700544efb5021',)]), |
2889 | + repo.chk_bytes.keys()) |
2890 | + except: |
2891 | + repo.abort_write_group() |
2892 | + raise |
2893 | + else: |
2894 | + repo.commit_write_group() |
2895 | + # This may not always be correct if we change away from BTreeGraphIndex |
2896 | + # in the future. But for now, lets check that chk_bytes are fully |
2897 | + # buffered |
2898 | + index = repo.chk_bytes._index._graph_index._indices[0] |
2899 | + self.assertIsInstance(index, btree_index.BTreeGraphIndex) |
2900 | + self.assertIs(type(index._leaf_node_cache), dict) |
2901 | + # Re-opening the repository should also have a repo with everything |
2902 | + # fully buffered |
2903 | + repo2 = repository.Repository.open(self.get_url()) |
2904 | + repo2.lock_read() |
2905 | + self.addCleanup(repo2.unlock) |
2906 | + index = repo2.chk_bytes._index._graph_index._indices[0] |
2907 | + self.assertIsInstance(index, btree_index.BTreeGraphIndex) |
2908 | + self.assertIs(type(index._leaf_node_cache), dict) |
2909 | + |
2910 | |
2911 | class TestCommitWriteGroupIntegrityCheck(TestCaseWithRepositoryCHK): |
2912 | """Tests that commit_write_group prevents various kinds of invalid data |
2913 | |
2914 | === added directory 'bzrlib/tests/per_uifactory' |
2915 | === added file 'bzrlib/tests/per_uifactory/__init__.py' |
2916 | --- bzrlib/tests/per_uifactory/__init__.py 1970-01-01 00:00:00 +0000 |
2917 | +++ bzrlib/tests/per_uifactory/__init__.py 2009-10-15 18:31:14 +0000 |
2918 | @@ -0,0 +1,148 @@ |
2919 | +# Copyright (C) 2009 Canonical Ltd |
2920 | +# |
2921 | +# This program is free software; you can redistribute it and/or modify |
2922 | +# it under the terms of the GNU General Public License as published by |
2923 | +# the Free Software Foundation; either version 2 of the License, or |
2924 | +# (at your option) any later version. |
2925 | +# |
2926 | +# This program is distributed in the hope that it will be useful, |
2927 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of |
2928 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
2929 | +# GNU General Public License for more details. |
2930 | +# |
2931 | +# You should have received a copy of the GNU General Public License |
2932 | +# along with this program; if not, write to the Free Software |
2933 | +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA |
2934 | + |
2935 | +"""Tests run per UIFactory.""" |
2936 | + |
2937 | +# Testing UIFactories is a bit interesting because we require they all support a |
2938 | +# common interface, but the way they implement it can vary very widely. Between |
2939 | +# text, batch-mode, graphical and other potential UIFactories, the requirements |
2940 | +# to set up a factory, to make it respond to requests, and to simulate user |
2941 | +# input can vary a lot. |
2942 | +# |
2943 | +# We want tests that therefore allow for the evaluation of the result to vary |
2944 | +# per implementation, but we want to check that the supported facilities are |
2945 | +# the same across all UIFactories, unless they're specifically skipped. |
2946 | +# |
2947 | +# Our normal approach is to use test scenarios but that seems to just end up |
2948 | +# creating test-like objects inside the scenario. Therefore we fall back to |
2949 | +# the older method of putting the common tests in a mixin. |
2950 | +# |
2951 | +# Plugins that add new UIFactories can create their own subclasses. |
2952 | + |
2953 | + |
2954 | +from cStringIO import StringIO |
2955 | +import unittest |
2956 | + |
2957 | + |
2958 | +from bzrlib import ( |
2959 | + tests, |
2960 | + ui, |
2961 | + ) |
2962 | + |
2963 | + |
2964 | +class UIFactoryTestMixin(object): |
2965 | + """Common tests for UIFactories. |
2966 | + |
2967 | + These are supposed to be expressed with no assumptions about how the |
2968 | + UIFactory implements the method, only that it does implement them (or |
2969 | + fails cleanly), and that the concrete subclass will make arrangements to |
2970 | + build a factory and to examine its behaviour. |
2971 | + |
2972 | + Note that this is *not* a TestCase, because it can't be directly run, but |
2973 | + the concrete subclasses should be. |
2974 | + """ |
2975 | + |
2976 | + def test_note(self): |
2977 | + self.factory.note("a note to the user") |
2978 | + self._check_note("a note to the user") |
2979 | + |
2980 | + def test_show_error(self): |
2981 | + msg = 'an error occurred' |
2982 | + self.factory.show_error(msg) |
2983 | + self._check_show_error(msg) |
2984 | + |
2985 | + def test_show_message(self): |
2986 | + msg = 'a message' |
2987 | + self.factory.show_message(msg) |
2988 | + self._check_show_message(msg) |
2989 | + |
2990 | + def test_show_warning(self): |
2991 | + msg = 'a warning' |
2992 | + self.factory.show_warning(msg) |
2993 | + self._check_show_warning(msg) |
2994 | + |
2995 | + |
2996 | +class TestTextUIFactory(tests.TestCase, UIFactoryTestMixin): |
2997 | + |
2998 | + def setUp(self): |
2999 | + super(TestTextUIFactory, self).setUp() |
3000 | + self.stdin = StringIO() |
3001 | + self.stdout = StringIO() |
3002 | + self.stderr = StringIO() |
3003 | + self.factory = ui.text.TextUIFactory(self.stdin, self.stdout, |
3004 | + self.stderr) |
3005 | + |
3006 | + def _check_note(self, note_text): |
3007 | + self.assertEquals("%s\n" % note_text, |
3008 | + self.stdout.getvalue()) |
3009 | + |
3010 | + def _check_show_error(self, msg): |
3011 | + self.assertEquals("bzr: error: %s\n" % msg, |
3012 | + self.stderr.getvalue()) |
3013 | + self.assertEquals("", self.stdout.getvalue()) |
3014 | + |
3015 | + def _check_show_message(self, msg): |
3016 | + self.assertEquals("%s\n" % msg, |
3017 | + self.stdout.getvalue()) |
3018 | + self.assertEquals("", self.stderr.getvalue()) |
3019 | + |
3020 | + def _check_show_warning(self, msg): |
3021 | + self.assertEquals("bzr: warning: %s\n" % msg, |
3022 | + self.stderr.getvalue()) |
3023 | + self.assertEquals("", self.stdout.getvalue()) |
3024 | + |
3025 | + |
3026 | +class TestSilentUIFactory(tests.TestCase, UIFactoryTestMixin): |
3027 | + # discards output, therefore tests for output expect nothing |
3028 | + |
3029 | + def setUp(self): |
3030 | + super(TestSilentUIFactory, self).setUp() |
3031 | + self.factory = ui.SilentUIFactory() |
3032 | + |
3033 | + def _check_note(self, note_text): |
3034 | + # it's just discarded |
3035 | + pass |
3036 | + |
3037 | + def _check_show_error(self, msg): |
3038 | + pass |
3039 | + |
3040 | + def _check_show_message(self, msg): |
3041 | + pass |
3042 | + |
3043 | + def _check_show_warning(self, msg): |
3044 | + pass |
3045 | + |
3046 | + |
3047 | +class TestCannedInputUIFactory(tests.TestCase, UIFactoryTestMixin): |
3048 | + # discards output, reads input from variables |
3049 | + |
3050 | + def setUp(self): |
3051 | + super(TestCannedInputUIFactory, self).setUp() |
3052 | + self.factory = ui.CannedInputUIFactory([]) |
3053 | + |
3054 | + def _check_note(self, note_text): |
3055 | + pass |
3056 | + |
3057 | + def _check_show_error(self, msg): |
3058 | + pass |
3059 | + |
3060 | + def _check_show_message(self, msg): |
3061 | + pass |
3062 | + |
3063 | + def _check_show_warning(self, msg): |
3064 | + pass |
3065 | + |
3066 | + |
3067 | |
3068 | === modified file 'bzrlib/tests/test__simple_set.py' |
3069 | --- bzrlib/tests/test__simple_set.py 2009-10-08 04:35:01 +0000 |
3070 | +++ bzrlib/tests/test__simple_set.py 2009-10-15 18:31:14 +0000 |
3071 | @@ -30,6 +30,51 @@ |
3072 | _simple_set_pyx = None |
3073 | |
3074 | |
3075 | +class _Hashable(object): |
3076 | + """A simple object which has a fixed hash value. |
3077 | + |
3078 | + We could have used an 'int', but it turns out that Int objects don't |
3079 | + implement tp_richcompare... |
3080 | + """ |
3081 | + |
3082 | + def __init__(self, the_hash): |
3083 | + self.hash = the_hash |
3084 | + |
3085 | + def __hash__(self): |
3086 | + return self.hash |
3087 | + |
3088 | + def __eq__(self, other): |
3089 | + if not isinstance(other, _Hashable): |
3090 | + return NotImplemented |
3091 | + return other.hash == self.hash |
3092 | + |
3093 | + |
3094 | +class _BadSecondHash(_Hashable): |
3095 | + |
3096 | + def __init__(self, the_hash): |
3097 | + _Hashable.__init__(self, the_hash) |
3098 | + self._first = True |
3099 | + |
3100 | + def __hash__(self): |
3101 | + if self._first: |
3102 | + self._first = False |
3103 | + return self.hash |
3104 | + else: |
3105 | + raise ValueError('I can only be hashed once.') |
3106 | + |
3107 | + |
3108 | +class _BadCompare(_Hashable): |
3109 | + |
3110 | + def __eq__(self, other): |
3111 | + raise RuntimeError('I refuse to play nice') |
3112 | + |
3113 | + |
3114 | +class _NoImplementCompare(_Hashable): |
3115 | + |
3116 | + def __eq__(self, other): |
3117 | + return NotImplemented |
3118 | + |
3119 | + |
3120 | # Even though this is an extension, we don't permute the tests for a python |
3121 | # version. As the plain python version is just a dict or set |
3122 | |
3123 | @@ -62,6 +107,9 @@ |
3124 | def assertFillState(self, used, fill, mask, obj): |
3125 | self.assertEqual((used, fill, mask), (obj.used, obj.fill, obj.mask)) |
3126 | |
3127 | + def assertLookup(self, offset, value, obj, key): |
3128 | + self.assertEqual((offset, value), obj._test_lookup(key)) |
3129 | + |
3130 | def assertRefcount(self, count, obj): |
3131 | """Assert that the refcount for obj is what we expect. |
3132 | |
3133 | @@ -81,58 +129,88 @@ |
3134 | self.assertFillState(0, 0, 0x3ff, obj) |
3135 | |
3136 | def test__lookup(self): |
3137 | - # The tuple hash function is rather good at entropy. For all integers |
3138 | - # 0=>1023, hash((i,)) & 1023 maps to a unique output, and hash((i,j)) |
3139 | - # maps to all 1024 fields evenly. |
3140 | - # However, hash((c,d))& 1023 for characters has an uneven distribution |
3141 | - # of collisions, for example: |
3142 | - # ('a', 'a'), ('f', '4'), ('p', 'r'), ('q', '1'), ('F', 'T'), |
3143 | - # ('Q', 'Q'), ('V', 'd'), ('7', 'C') |
3144 | - # all collide @ 643 |
3145 | - obj = self.module.SimpleSet() |
3146 | - offset, val = obj._test_lookup(('a', 'a')) |
3147 | - self.assertEqual(643, offset) |
3148 | - self.assertEqual('<null>', val) |
3149 | - offset, val = obj._test_lookup(('f', '4')) |
3150 | - self.assertEqual(643, offset) |
3151 | - self.assertEqual('<null>', val) |
3152 | - offset, val = obj._test_lookup(('p', 'r')) |
3153 | - self.assertEqual(643, offset) |
3154 | - self.assertEqual('<null>', val) |
3155 | + # These are carefully chosen integers to force hash collisions in the |
3156 | + # algorithm, based on the initial set size of 1024 |
3157 | + obj = self.module.SimpleSet() |
3158 | + self.assertLookup(643, '<null>', obj, _Hashable(643)) |
3159 | + self.assertLookup(643, '<null>', obj, _Hashable(643 + 1024)) |
3160 | + self.assertLookup(643, '<null>', obj, _Hashable(643 + 50*1024)) |
3161 | + |
3162 | + def test__lookup_collision(self): |
3163 | + obj = self.module.SimpleSet() |
3164 | + k1 = _Hashable(643) |
3165 | + k2 = _Hashable(643 + 1024) |
3166 | + self.assertLookup(643, '<null>', obj, k1) |
3167 | + self.assertLookup(643, '<null>', obj, k2) |
3168 | + obj.add(k1) |
3169 | + self.assertLookup(643, k1, obj, k1) |
3170 | + self.assertLookup(644, '<null>', obj, k2) |
3171 | + |
3172 | + def test__lookup_after_resize(self): |
3173 | + obj = self.module.SimpleSet() |
3174 | + k1 = _Hashable(643) |
3175 | + k2 = _Hashable(643 + 1024) |
3176 | + obj.add(k1) |
3177 | + obj.add(k2) |
3178 | + self.assertLookup(643, k1, obj, k1) |
3179 | + self.assertLookup(644, k2, obj, k2) |
3180 | + obj._py_resize(2047) # resized to 2048 |
3181 | + self.assertEqual(2048, obj.mask + 1) |
3182 | + self.assertLookup(643, k1, obj, k1) |
3183 | + self.assertLookup(643+1024, k2, obj, k2) |
3184 | + obj._py_resize(1023) # resized back to 1024 |
3185 | + self.assertEqual(1024, obj.mask + 1) |
3186 | + self.assertLookup(643, k1, obj, k1) |
3187 | + self.assertLookup(644, k2, obj, k2) |
3188 | |
3189 | def test_get_set_del_with_collisions(self): |
3190 | obj = self.module.SimpleSet() |
3191 | - k1 = ('a', 'a') |
3192 | - k2 = ('f', '4') # collides |
3193 | - k3 = ('p', 'r') |
3194 | - k4 = ('q', '1') |
3195 | - self.assertEqual((643, '<null>'), obj._test_lookup(k1)) |
3196 | - self.assertEqual((643, '<null>'), obj._test_lookup(k2)) |
3197 | - self.assertEqual((643, '<null>'), obj._test_lookup(k3)) |
3198 | - self.assertEqual((643, '<null>'), obj._test_lookup(k4)) |
3199 | + |
3200 | + h1 = 643 |
3201 | + h2 = 643 + 1024 |
3202 | + h3 = 643 + 1024*50 |
3203 | + h4 = 643 + 1024*25 |
3204 | + h5 = 644 |
3205 | + h6 = 644 + 1024 |
3206 | + |
3207 | + k1 = _Hashable(h1) |
3208 | + k2 = _Hashable(h2) |
3209 | + k3 = _Hashable(h3) |
3210 | + k4 = _Hashable(h4) |
3211 | + k5 = _Hashable(h5) |
3212 | + k6 = _Hashable(h6) |
3213 | + self.assertLookup(643, '<null>', obj, k1) |
3214 | + self.assertLookup(643, '<null>', obj, k2) |
3215 | + self.assertLookup(643, '<null>', obj, k3) |
3216 | + self.assertLookup(643, '<null>', obj, k4) |
3217 | + self.assertLookup(644, '<null>', obj, k5) |
3218 | + self.assertLookup(644, '<null>', obj, k6) |
3219 | obj.add(k1) |
3220 | self.assertIn(k1, obj) |
3221 | self.assertNotIn(k2, obj) |
3222 | self.assertNotIn(k3, obj) |
3223 | self.assertNotIn(k4, obj) |
3224 | - self.assertEqual((643, k1), obj._test_lookup(k1)) |
3225 | - self.assertEqual((787, '<null>'), obj._test_lookup(k2)) |
3226 | - self.assertEqual((787, '<null>'), obj._test_lookup(k3)) |
3227 | - self.assertEqual((787, '<null>'), obj._test_lookup(k4)) |
3228 | + self.assertLookup(643, k1, obj, k1) |
3229 | + self.assertLookup(644, '<null>', obj, k2) |
3230 | + self.assertLookup(644, '<null>', obj, k3) |
3231 | + self.assertLookup(644, '<null>', obj, k4) |
3232 | + self.assertLookup(644, '<null>', obj, k5) |
3233 | + self.assertLookup(644, '<null>', obj, k6) |
3234 | self.assertIs(k1, obj[k1]) |
3235 | - obj.add(k2) |
3236 | + self.assertIs(k2, obj.add(k2)) |
3237 | self.assertIs(k2, obj[k2]) |
3238 | - self.assertEqual((643, k1), obj._test_lookup(k1)) |
3239 | - self.assertEqual((787, k2), obj._test_lookup(k2)) |
3240 | - self.assertEqual((660, '<null>'), obj._test_lookup(k3)) |
3241 | - # Even though k4 collides for the first couple of iterations, the hash |
3242 | - # perturbation uses the full width hash (not just the masked value), so |
3243 | - # it now diverges |
3244 | - self.assertEqual((180, '<null>'), obj._test_lookup(k4)) |
3245 | - self.assertEqual((643, k1), obj._test_lookup(('a', 'a'))) |
3246 | - self.assertEqual((787, k2), obj._test_lookup(('f', '4'))) |
3247 | - self.assertEqual((660, '<null>'), obj._test_lookup(('p', 'r'))) |
3248 | - self.assertEqual((180, '<null>'), obj._test_lookup(('q', '1'))) |
3249 | + self.assertLookup(643, k1, obj, k1) |
3250 | + self.assertLookup(644, k2, obj, k2) |
3251 | + self.assertLookup(646, '<null>', obj, k3) |
3252 | + self.assertLookup(646, '<null>', obj, k4) |
3253 | + self.assertLookup(645, '<null>', obj, k5) |
3254 | + self.assertLookup(645, '<null>', obj, k6) |
3255 | + self.assertLookup(643, k1, obj, _Hashable(h1)) |
3256 | + self.assertLookup(644, k2, obj, _Hashable(h2)) |
3257 | + self.assertLookup(646, '<null>', obj, _Hashable(h3)) |
3258 | + self.assertLookup(646, '<null>', obj, _Hashable(h4)) |
3259 | + self.assertLookup(645, '<null>', obj, _Hashable(h5)) |
3260 | + self.assertLookup(645, '<null>', obj, _Hashable(h6)) |
3261 | obj.add(k3) |
3262 | self.assertIs(k3, obj[k3]) |
3263 | self.assertIn(k1, obj) |
3264 | @@ -140,11 +218,11 @@ |
3265 | self.assertIn(k3, obj) |
3266 | self.assertNotIn(k4, obj) |
3267 | |
3268 | - del obj[k1] |
3269 | - self.assertEqual((643, '<dummy>'), obj._test_lookup(k1)) |
3270 | - self.assertEqual((787, k2), obj._test_lookup(k2)) |
3271 | - self.assertEqual((660, k3), obj._test_lookup(k3)) |
3272 | - self.assertEqual((643, '<dummy>'), obj._test_lookup(k4)) |
3273 | + obj.discard(k1) |
3274 | + self.assertLookup(643, '<dummy>', obj, k1) |
3275 | + self.assertLookup(644, k2, obj, k2) |
3276 | + self.assertLookup(646, k3, obj, k3) |
3277 | + self.assertLookup(643, '<dummy>', obj, k4) |
3278 | self.assertNotIn(k1, obj) |
3279 | self.assertIn(k2, obj) |
3280 | self.assertIn(k3, obj) |
3281 | @@ -179,7 +257,7 @@ |
3282 | self.assertRefcount(2, k1) |
3283 | self.assertRefcount(1, k2) |
3284 | # Deleting an entry should remove the fill, but not the used |
3285 | - del obj[k1] |
3286 | + obj.discard(k1) |
3287 | self.assertFillState(0, 1, 0x3ff, obj) |
3288 | self.assertRefcount(1, k1) |
3289 | k3 = tuple(['bar']) |
3290 | @@ -210,23 +288,6 @@ |
3291 | self.assertEqual(1, obj.discard(k3)) |
3292 | self.assertRefcount(1, k3) |
3293 | |
3294 | - def test__delitem__(self): |
3295 | - obj = self.module.SimpleSet() |
3296 | - k1 = tuple(['foo']) |
3297 | - k2 = tuple(['foo']) |
3298 | - k3 = tuple(['bar']) |
3299 | - self.assertRefcount(1, k1) |
3300 | - self.assertRefcount(1, k2) |
3301 | - self.assertRefcount(1, k3) |
3302 | - obj.add(k1) |
3303 | - self.assertRefcount(2, k1) |
3304 | - self.assertRaises(KeyError, obj.__delitem__, k3) |
3305 | - self.assertRefcount(1, k3) |
3306 | - obj.add(k3) |
3307 | - self.assertRefcount(2, k3) |
3308 | - del obj[k3] |
3309 | - self.assertRefcount(1, k3) |
3310 | - |
3311 | def test__resize(self): |
3312 | obj = self.module.SimpleSet() |
3313 | k1 = ('foo',) |
3314 | @@ -235,13 +296,13 @@ |
3315 | obj.add(k1) |
3316 | obj.add(k2) |
3317 | obj.add(k3) |
3318 | - del obj[k2] |
3319 | + obj.discard(k2) |
3320 | self.assertFillState(2, 3, 0x3ff, obj) |
3321 | self.assertEqual(1024, obj._py_resize(500)) |
3322 | # Doesn't change the size, but does change the content |
3323 | self.assertFillState(2, 2, 0x3ff, obj) |
3324 | obj.add(k2) |
3325 | - del obj[k3] |
3326 | + obj.discard(k3) |
3327 | self.assertFillState(2, 3, 0x3ff, obj) |
3328 | self.assertEqual(4096, obj._py_resize(4095)) |
3329 | self.assertFillState(2, 2, 0xfff, obj) |
3330 | @@ -250,13 +311,44 @@ |
3331 | self.assertNotIn(k3, obj) |
3332 | obj.add(k2) |
3333 | self.assertIn(k2, obj) |
3334 | - del obj[k2] |
3335 | + obj.discard(k2) |
3336 | self.assertEqual((591, '<dummy>'), obj._test_lookup(k2)) |
3337 | self.assertFillState(1, 2, 0xfff, obj) |
3338 | self.assertEqual(2048, obj._py_resize(1024)) |
3339 | self.assertFillState(1, 1, 0x7ff, obj) |
3340 | self.assertEqual((591, '<null>'), obj._test_lookup(k2)) |
3341 | |
3342 | + def test_second_hash_failure(self): |
3343 | + obj = self.module.SimpleSet() |
3344 | + k1 = _BadSecondHash(200) |
3345 | + k2 = _Hashable(200) |
3346 | + # Should only call hash() one time |
3347 | + obj.add(k1) |
3348 | + self.assertFalse(k1._first) |
3349 | + self.assertRaises(ValueError, obj.add, k2) |
3350 | + |
3351 | + def test_richcompare_failure(self): |
3352 | + obj = self.module.SimpleSet() |
3353 | + k1 = _Hashable(200) |
3354 | + k2 = _BadCompare(200) |
3355 | + obj.add(k1) |
3356 | + # Tries to compare with k1, fails |
3357 | + self.assertRaises(RuntimeError, obj.add, k2) |
3358 | + |
3359 | + def test_richcompare_not_implemented(self): |
3360 | + obj = self.module.SimpleSet() |
3361 | + # Even though their hashes are the same, tp_richcompare returns |
3362 | + # NotImplemented, which means we treat them as not equal |
3363 | + k1 = _NoImplementCompare(200) |
3364 | + k2 = _NoImplementCompare(200) |
3365 | + self.assertLookup(200, '<null>', obj, k1) |
3366 | + self.assertLookup(200, '<null>', obj, k2) |
3367 | + self.assertIs(k1, obj.add(k1)) |
3368 | + self.assertLookup(200, k1, obj, k1) |
3369 | + self.assertLookup(201, '<null>', obj, k2) |
3370 | + self.assertIs(k2, obj.add(k2)) |
3371 | + self.assertIs(k1, obj[k1]) |
3372 | + |
3373 | def test_add_and_remove_lots_of_items(self): |
3374 | obj = self.module.SimpleSet() |
3375 | chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890' |
3376 | @@ -295,5 +387,5 @@ |
3377 | # Set changed size |
3378 | self.assertRaises(RuntimeError, iterator.next) |
3379 | # And even removing an item still causes it to fail |
3380 | - del obj[k2] |
3381 | + obj.discard(k2) |
3382 | self.assertRaises(RuntimeError, iterator.next) |
3383 | |
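The fixed-hash trick driving the lookup tests above works against any open-addressed hash table, not just SimpleSet. A minimal sketch of the idea, where `FixedHash` is a hypothetical stand-in for the `_Hashable` helper in the diff:

```python
class FixedHash(object):
    """Hypothetical stand-in for _Hashable: a fixed, user-chosen hash."""

    def __init__(self, h):
        self.h = h

    def __hash__(self):
        return self.h

    def __eq__(self, other):
        return isinstance(other, FixedHash) and other.h == self.h


# With a table mask of 1023 (initial size 1024), all of these map to
# slot 643, which is how the tests force probe-sequence collisions.
k1 = FixedHash(643)
k2 = FixedHash(643 + 1024)
k3 = FixedHash(643 + 50 * 1024)
assert hash(k1) & 1023 == hash(k2) & 1023 == hash(k3) & 1023 == 643
```

Because the keys collide at the masked slot but compare unequal, each `add()` pushes later keys down the probe sequence, which is exactly what `assertLookup` verifies offset by offset.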
3384 | === modified file 'bzrlib/tests/test__static_tuple.py' |
3385 | --- bzrlib/tests/test__static_tuple.py 2009-10-07 19:34:22 +0000 |
3386 | +++ bzrlib/tests/test__static_tuple.py 2009-10-15 18:31:14 +0000 |
3387 | @@ -23,6 +23,7 @@ |
3388 | _static_tuple_py, |
3389 | errors, |
3390 | osutils, |
3391 | + static_tuple, |
3392 | tests, |
3393 | ) |
3394 | |
3395 | @@ -140,6 +141,13 @@ |
3396 | self.assertEqual('foo', k[0]) |
3397 | self.assertEqual('z', k[6]) |
3398 | self.assertEqual('z', k[-1]) |
3399 | + self.assertRaises(IndexError, k.__getitem__, 7) |
3400 | + self.assertRaises(IndexError, k.__getitem__, 256+7) |
3401 | + self.assertRaises(IndexError, k.__getitem__, 12024) |
3402 | + # Python's [] resolver handles the negative arguments, so we can't |
3403 | + # really test StaticTuple_item() with negative values. |
3404 | + self.assertRaises(TypeError, k.__getitem__, 'not-an-int') |
3405 | + self.assertRaises(TypeError, k.__getitem__, '5') |
3406 | |
3407 | def test_refcount(self): |
3408 | f = 'fo' + 'oo' |
3409 | @@ -201,13 +209,29 @@ |
3410 | self.assertTrue(k_small <= k_big) |
3411 | self.assertTrue(k_small < k_big) |
3412 | |
3413 | + def assertCompareNoRelation(self, k1, k2): |
3414 | + """Run the comparison operators, make sure they do something. |
3415 | + |
3416 | + However, we don't actually care what comes first or second. This is |
3417 | + stuff like cross-class comparisons. We don't want to segfault/raise an |
3418 | + exception, but we don't care about the sort order. |
3419 | + """ |
3420 | + self.assertFalse(k1 == k2) |
3421 | + self.assertTrue(k1 != k2) |
3422 | + # Do the comparison, but we don't care about the result |
3423 | + k1 >= k2 |
3424 | + k1 > k2 |
3425 | + k1 <= k2 |
3426 | + k1 < k2 |
3427 | + |
3428 | def test_compare_vs_none(self): |
3429 | k1 = self.module.StaticTuple('baz', 'bing') |
3430 | self.assertCompareDifferent(None, k1) |
3431 | - self.assertCompareDifferent(10, k1) |
3432 | - # Comparison with a string is poorly-defined, I seem to get failures |
3433 | - # regardless of which one comes first... |
3434 | - # self.assertCompareDifferent('baz', k1) |
3435 | + |
3436 | + def test_compare_cross_class(self): |
3437 | + k1 = self.module.StaticTuple('baz', 'bing') |
3438 | + self.assertCompareNoRelation(10, k1) |
3439 | + self.assertCompareNoRelation('baz', k1) |
3440 | |
3441 | def test_compare_all_different_same_width(self): |
3442 | k1 = self.module.StaticTuple('baz', 'bing') |
3443 | @@ -255,6 +279,16 @@ |
3444 | self.assertCompareEqual(k3, (k1, ('foo', 'bar'))) |
3445 | self.assertCompareEqual((k1, ('foo', 'bar')), k3) |
3446 | |
3447 | + def test_compare_mixed_depths(self): |
3448 | + stuple = self.module.StaticTuple |
3449 | + k1 = stuple(stuple('a',), stuple('b',)) |
3450 | + k2 = stuple(stuple(stuple('c',), stuple('d',)), |
3451 | + stuple('b',)) |
3452 | + # This requires comparing a StaticTuple to a 'string', and then |
3453 | + # interpreting that value in the next higher StaticTuple. This used to |
3454 | + # generate a PyErr_BadInternalCall. We now fall back to *something*. |
3455 | + self.assertCompareNoRelation(k1, k2) |
3456 | + |
3457 | def test_hash(self): |
3458 | k = self.module.StaticTuple('foo') |
3459 | self.assertEqual(hash(k), hash(('foo',))) |
3460 | @@ -276,12 +310,22 @@ |
3461 | k = self.module.StaticTuple('foo', 'bar', 'baz', 'bing') |
3462 | self.assertEqual(('foo', 'bar'), k[:2]) |
3463 | self.assertEqual(('baz',), k[2:-1]) |
3464 | + try: |
3465 | + val = k[::2] |
3466 | + except TypeError: |
3467 | + # C implementation raises a TypeError, we don't need the |
3468 | + # implementation yet, so allow this to pass |
3469 | + pass |
3470 | + else: |
3471 | + # Python implementation uses a regular Tuple, so make sure it gives |
3472 | + # the right result |
3473 | + self.assertEqual(('foo', 'baz'), val) |
3474 | |
3475 | def test_referents(self): |
3476 | # We implement tp_traverse so that things like 'meliae' can measure the |
3477 | # amount of referenced memory. Unfortunately gc.get_referents() first |
3478 | - # checks the IS_GC flag before it traverses anything. So there isn't a |
3479 | - # way to expose it that I can see. |
3480 | + # checks the IS_GC flag before it traverses anything. We could write a |
3481 | + # helper func, but that won't work for the generic implementation... |
3482 | self.requireFeature(Meliae) |
3483 | from meliae import scanner |
3484 | strs = ['foo', 'bar', 'baz', 'bing'] |
3485 | @@ -383,3 +427,13 @@ |
3486 | if self.module is _static_tuple_py: |
3487 | return |
3488 | self.assertIsNot(None, self.module._C_API) |
3489 | + |
3490 | + def test_static_tuple_thunk(self): |
3491 | + # Make sure the right implementation is available from |
3492 | + # bzrlib.static_tuple.StaticTuple. |
3493 | + if self.module is _static_tuple_py: |
3494 | + if CompiledStaticTuple.available(): |
3495 | + # We will be using the C version |
3496 | + return |
3497 | + self.assertIs(static_tuple.StaticTuple, |
3498 | + self.module.StaticTuple) |
3499 | |
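The `__getitem__` contract exercised above (IndexError for out-of-range indexes, TypeError for non-integer keys) can be sketched with a hypothetical tuple-like class. Note this sketch rejects all slices, whereas the real StaticTuple supports simple slices like `k[:2]` and only its C implementation rejects extended slices such as `k[::2]`:

```python
class MiniTuple(object):
    """Hypothetical tuple-like class; not bzrlib's StaticTuple."""

    def __init__(self, *items):
        self._items = items

    def __getitem__(self, index):
        if not isinstance(index, int):
            # Non-integer keys (strings, and in this sketch any slice)
            # are rejected with TypeError, as in the tests above.
            raise TypeError('index must be an int')
        return self._items[index]  # raises IndexError when out of range


k = MiniTuple('foo', 'bar', 'z')
assert k[0] == 'foo'
assert k[-1] == 'z'
```

Negative indexes reach `__getitem__` directly at the Python level, which is why the tests note that `StaticTuple_item()` itself cannot be probed with negative values from Python.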
3500 | === modified file 'bzrlib/tests/test_btree_index.py' |
3501 | --- bzrlib/tests/test_btree_index.py 2009-08-13 19:56:26 +0000 |
3502 | +++ bzrlib/tests/test_btree_index.py 2009-10-15 18:31:13 +0000 |
3503 | @@ -23,6 +23,8 @@ |
3504 | from bzrlib import ( |
3505 | btree_index, |
3506 | errors, |
3507 | + fifo_cache, |
3508 | + lru_cache, |
3509 | osutils, |
3510 | tests, |
3511 | ) |
3512 | @@ -1115,6 +1117,43 @@ |
3513 | self.assertEqual({}, parent_map) |
3514 | self.assertEqual(set([('one',), ('two',)]), missing_keys) |
3515 | |
3516 | + def test_supports_unlimited_cache(self): |
3517 | + builder = btree_index.BTreeBuilder(reference_lists=0, key_elements=1) |
3518 | + # We need enough nodes to cause a page split (so we have both an |
3519 | + # internal node and a couple of leaf nodes; 500 seems to be enough). |
3520 | + nodes = self.make_nodes(500, 1, 0) |
3521 | + for node in nodes: |
3522 | + builder.add_node(*node) |
3523 | + stream = builder.finish() |
3524 | + trans = get_transport(self.get_url()) |
3525 | + size = trans.put_file('index', stream) |
3526 | + index = btree_index.BTreeGraphIndex(trans, 'index', size) |
3527 | + self.assertEqual(500, index.key_count()) |
3528 | + # We have an internal node |
3529 | + self.assertEqual(2, len(index._row_lengths)) |
3530 | + # We have at least 2 leaf nodes |
3531 | + self.assertTrue(index._row_lengths[-1] >= 2) |
3532 | + self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache) |
3533 | + self.assertEqual(btree_index._NODE_CACHE_SIZE, |
3534 | + index._leaf_node_cache._max_cache) |
3535 | + self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache) |
3536 | + self.assertEqual(100, index._internal_node_cache._max_cache) |
3537 | + # No change if unlimited_cache=False is passed |
3538 | + index = btree_index.BTreeGraphIndex(trans, 'index', size, |
3539 | + unlimited_cache=False) |
3540 | + self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache) |
3541 | + self.assertEqual(btree_index._NODE_CACHE_SIZE, |
3542 | + index._leaf_node_cache._max_cache) |
3543 | + self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache) |
3544 | + self.assertEqual(100, index._internal_node_cache._max_cache) |
3545 | + index = btree_index.BTreeGraphIndex(trans, 'index', size, |
3546 | + unlimited_cache=True) |
3547 | + self.assertIsInstance(index._leaf_node_cache, dict) |
3548 | + self.assertIs(type(index._internal_node_cache), dict) |
3549 | + # Exercise the lookup code |
3550 | + entries = set(index.iter_entries([n[0] for n in nodes])) |
3551 | + self.assertEqual(500, len(entries)) |
3552 | + |
3553 | |
3554 | class TestBTreeNodes(BTreeTestCase): |
3555 | |
3556 | |
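The `unlimited_cache` flag tested above amounts to choosing a cache type at construction time: a bounded LRU cache by default, or a plain dict when the caller wants every node retained. A hypothetical sketch of that choice (`make_node_cache` and this tiny `LRUCache` are illustrative, not bzrlib's actual classes):

```python
from collections import OrderedDict


class LRUCache(object):
    """Tiny LRU cache sketch standing in for bzrlib's lru_cache.LRUCache."""

    def __init__(self, max_cache):
        self._max_cache = max_cache
        self._d = OrderedDict()

    def __setitem__(self, key, value):
        self._d.pop(key, None)
        self._d[key] = value
        while len(self._d) > self._max_cache:
            self._d.popitem(last=False)  # evict the oldest entry

    def __getitem__(self, key):
        return self._d[key]

    def __contains__(self, key):
        return key in self._d


def make_node_cache(unlimited_cache=False, size=100):
    # unlimited_cache=True keeps every node (a plain dict); otherwise a
    # bounded cache protects memory when reading very large indexes.
    return dict() if unlimited_cache else LRUCache(size)
```

This mirrors what the test asserts: the default index gets `LRUCache`/`FIFOCache` instances with fixed `_max_cache` sizes, while `unlimited_cache=True` swaps in plain dicts.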
3557 | === modified file 'bzrlib/tests/test_decorators.py' |
3558 | --- bzrlib/tests/test_decorators.py 2009-03-23 14:59:43 +0000 |
3559 | +++ bzrlib/tests/test_decorators.py 2009-10-15 18:31:14 +0000 |
3560 | @@ -23,11 +23,15 @@ |
3561 | from bzrlib.tests import TestCase |
3562 | |
3563 | |
3564 | -def create_decorator_sample(style, except_in_unlock=False): |
3565 | +class SampleUnlockError(Exception): |
3566 | + pass |
3567 | + |
3568 | + |
3569 | +def create_decorator_sample(style, unlock_error=None): |
3570 | """Create a DecoratorSample object, using specific lock operators. |
3571 | |
3572 | :param style: The type of lock decorators to use (fast/pretty/None) |
3573 | - :param except_in_unlock: If True, raise an exception during unlock |
3574 | + :param unlock_error: If specified, an error to raise from unlock. |
3575 | :return: An instantiated DecoratorSample object. |
3576 | """ |
3577 | |
3578 | @@ -58,10 +62,11 @@ |
3579 | def lock_write(self): |
3580 | self.actions.append('lock_write') |
3581 | |
3582 | + @decorators.only_raises(SampleUnlockError) |
3583 | def unlock(self): |
3584 | - if except_in_unlock: |
3585 | + if unlock_error: |
3586 | self.actions.append('unlock_fail') |
3587 | - raise KeyError('during unlock') |
3588 | + raise unlock_error |
3589 | else: |
3590 | self.actions.append('unlock') |
3591 | |
3592 | @@ -119,28 +124,28 @@ |
3593 | |
3594 | def test_read_lock_raises_original_error(self): |
3595 | sam = create_decorator_sample(self._decorator_style, |
3596 | - except_in_unlock=True) |
3597 | + unlock_error=SampleUnlockError()) |
3598 | self.assertRaises(TypeError, sam.fail_during_read) |
3599 | self.assertEqual(['lock_read', 'fail_during_read', 'unlock_fail'], |
3600 | sam.actions) |
3601 | |
3602 | def test_write_lock_raises_original_error(self): |
3603 | sam = create_decorator_sample(self._decorator_style, |
3604 | - except_in_unlock=True) |
3605 | + unlock_error=SampleUnlockError()) |
3606 | self.assertRaises(TypeError, sam.fail_during_write) |
3607 | self.assertEqual(['lock_write', 'fail_during_write', 'unlock_fail'], |
3608 | sam.actions) |
3609 | |
3610 | def test_read_lock_raises_unlock_error(self): |
3611 | sam = create_decorator_sample(self._decorator_style, |
3612 | - except_in_unlock=True) |
3613 | - self.assertRaises(KeyError, sam.frob) |
3614 | + unlock_error=SampleUnlockError()) |
3615 | + self.assertRaises(SampleUnlockError, sam.frob) |
3616 | self.assertEqual(['lock_read', 'frob', 'unlock_fail'], sam.actions) |
3617 | |
3618 | def test_write_lock_raises_unlock_error(self): |
3619 | sam = create_decorator_sample(self._decorator_style, |
3620 | - except_in_unlock=True) |
3621 | - self.assertRaises(KeyError, sam.bank, 'bar', biz='bing') |
3622 | + unlock_error=SampleUnlockError()) |
3623 | + self.assertRaises(SampleUnlockError, sam.bank, 'bar', biz='bing') |
3624 | self.assertEqual(['lock_write', ('bank', 'bar', 'bing'), |
3625 | 'unlock_fail'], sam.actions) |
3626 | |
3627 | @@ -276,3 +281,21 @@ |
3628 | finally: |
3629 | decorators.needs_read_lock = cur_read |
3630 | decorators.needs_write_lock = cur_write |
3631 | + |
3632 | + |
3633 | +class TestOnlyRaisesDecorator(TestCase): |
3634 | + |
3635 | + def raise_ZeroDivisionError(self): |
3636 | + 1/0 |
3637 | + |
3638 | + def test_raises_approved_error(self): |
3639 | + decorator = decorators.only_raises(ZeroDivisionError) |
3640 | + decorated_meth = decorator(self.raise_ZeroDivisionError) |
3641 | + self.assertRaises(ZeroDivisionError, decorated_meth) |
3642 | + |
3643 | + def test_quietly_logs_unapproved_errors(self): |
3644 | + decorator = decorators.only_raises(IOError) |
3645 | + decorated_meth = decorator(self.raise_ZeroDivisionError) |
3646 | + self.assertLogsError(ZeroDivisionError, decorated_meth) |
3647 | + |
3648 | + |
3649 | |
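The `only_raises` decorator used above lets an approved exception type propagate from `unlock` while suppressing (and logging) anything else, so an unlock failure cannot mask the original error. A minimal sketch of that behaviour, assuming the real `bzrlib.decorators.only_raises` may log differently:

```python
import functools
import logging


def only_raises(*errors):
    """Sketch of an only_raises-style decorator: approved exception
    types propagate, anything else is logged and suppressed."""
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except errors:
                raise  # approved type: let the caller see it
            except Exception:
                logging.exception('error suppressed by only_raises')
        return wrapper
    return decorate


@only_raises(ZeroDivisionError)
def boom():
    1 / 0  # approved error type: propagates to the caller


@only_raises(IOError)
def quiet():
    1 / 0  # not approved here: logged and swallowed, returns None
```

This matches the two test cases: `test_raises_approved_error` sees the exception, while `test_quietly_logs_unapproved_errors` only observes it in the log.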
3650 | === modified file 'bzrlib/tests/test_diff.py' |
3651 | --- bzrlib/tests/test_diff.py 2009-03-31 00:12:10 +0000 |
3652 | +++ bzrlib/tests/test_diff.py 2009-10-15 18:31:14 +0000 |
3653 | @@ -32,6 +32,7 @@ |
3654 | external_diff, |
3655 | internal_diff, |
3656 | show_diff_trees, |
3657 | + get_trees_and_branches_to_diff, |
3658 | ) |
3659 | from bzrlib.errors import BinaryFile, NoDiff, ExecutableMissing |
3660 | import bzrlib.osutils as osutils |
3661 | @@ -41,6 +42,8 @@ |
3662 | import bzrlib._patiencediff_py |
3663 | from bzrlib.tests import (Feature, TestCase, TestCaseWithTransport, |
3664 | TestCaseInTempDir, TestSkipped) |
3665 | +from bzrlib.revisiontree import RevisionTree |
3666 | +from bzrlib.revisionspec import RevisionSpec |
3667 | |
3668 | |
3669 | class _AttribFeature(Feature): |
3670 | @@ -1382,3 +1385,46 @@ |
3671 | self.assertTrue(os.path.samefile('tree/newname', new_path)) |
3672 | # make sure we can create files with the same parent directories |
3673 | diff_obj._prepare_files('file2-id', 'oldname2', 'newname2') |
3674 | + |
3675 | + |
3676 | +class TestGetTreesAndBranchesToDiff(TestCaseWithTransport): |
3677 | + |
3678 | + def test_basic(self): |
3679 | + tree = self.make_branch_and_tree('tree') |
3680 | + (old_tree, new_tree, |
3681 | + old_branch, new_branch, |
3682 | + specific_files, extra_trees) = \ |
3683 | + get_trees_and_branches_to_diff(['tree'], None, None, None) |
3684 | + |
3685 | + self.assertIsInstance(old_tree, RevisionTree) |
3687 | + self.assertEqual(_mod_revision.NULL_REVISION, old_tree.get_revision_id()) |
3688 | + self.assertEqual(tree.basedir, new_tree.basedir) |
3689 | + self.assertEqual(tree.branch.base, old_branch.base) |
3690 | + self.assertEqual(tree.branch.base, new_branch.base) |
3691 | + self.assertIs(None, specific_files) |
3692 | + self.assertIs(None, extra_trees) |
3693 | + |
3694 | + def test_with_rev_specs(self): |
3695 | + tree = self.make_branch_and_tree('tree') |
3696 | + self.build_tree_contents([('tree/file', 'oldcontent')]) |
3697 | + tree.add('file', 'file-id') |
3698 | + tree.commit('old tree', timestamp=0, rev_id="old-id") |
3699 | + self.build_tree_contents([('tree/file', 'newcontent')]) |
3700 | + tree.commit('new tree', timestamp=0, rev_id="new-id") |
3701 | + |
3702 | + revisions = [RevisionSpec.from_string('1'), |
3703 | + RevisionSpec.from_string('2')] |
3704 | + (old_tree, new_tree, |
3705 | + old_branch, new_branch, |
3706 | + specific_files, extra_trees) = \ |
3707 | + get_trees_and_branches_to_diff(['tree'], revisions, None, None) |
3708 | + |
3709 | + self.assertIsInstance(old_tree, RevisionTree) |
3710 | + self.assertEqual("old-id", old_tree.get_revision_id()) |
3711 | + self.assertIsInstance(new_tree, RevisionTree) |
3712 | + self.assertEqual("new-id", new_tree.get_revision_id()) |
3713 | + self.assertEqual(tree.branch.base, old_branch.base) |
3714 | + self.assertEqual(tree.branch.base, new_branch.base) |
3715 | + self.assertIs(None, specific_files) |
3716 | + self.assertEqual(tree.basedir, extra_trees[0].basedir) |
3717 | |
3718 | === modified file 'bzrlib/tests/test_index.py' |
3719 | --- bzrlib/tests/test_index.py 2009-08-13 19:56:26 +0000 |
3720 | +++ bzrlib/tests/test_index.py 2009-10-15 18:31:14 +0000 |
3721 | @@ -1,4 +1,4 @@ |
3722 | -# Copyright (C) 2007 Canonical Ltd |
3723 | +# Copyright (C) 2007, 2009 Canonical Ltd |
3724 | # |
3725 | # This program is free software; you can redistribute it and/or modify |
3726 | # it under the terms of the GNU General Public License as published by |
3727 | @@ -1006,6 +1006,15 @@ |
3728 | self.assertEqual(set(), missing_keys) |
3729 | self.assertEqual(set(), search_keys) |
3730 | |
3731 | + def test_supports_unlimited_cache(self): |
3732 | + builder = GraphIndexBuilder(0, key_elements=1) |
3733 | + stream = builder.finish() |
3734 | + trans = get_transport(self.get_url()) |
3735 | + size = trans.put_file('index', stream) |
3736 | + # It doesn't matter what unlimited_cache does here, just that it can be |
3737 | + # passed |
3738 | + index = GraphIndex(trans, 'index', size, unlimited_cache=True) |
3739 | + |
3740 | |
3741 | class TestCombinedGraphIndex(TestCaseWithMemoryTransport): |
3742 | |
3743 | |
3744 | === modified file 'bzrlib/tests/test_msgeditor.py' |
3745 | --- bzrlib/tests/test_msgeditor.py 2009-08-20 04:09:58 +0000 |
3746 | +++ bzrlib/tests/test_msgeditor.py 2009-10-15 18:31:14 +0000 |
3747 | @@ -93,7 +93,7 @@ |
3748 | tree3.commit('Feature Y, based on initial X work.', |
3749 | timestamp=1233285960, timezone=0) |
3750 | tree.merge_from_branch(tree2.branch) |
3751 | - tree.merge_from_branch(tree3.branch) |
3752 | + tree.merge_from_branch(tree3.branch, force=True) |
3753 | return tree |
3754 | |
3755 | def test_commit_template_pending_merges(self): |
3756 | |
3757 | === modified file 'bzrlib/tests/test_mutabletree.py' |
3758 | --- bzrlib/tests/test_mutabletree.py 2009-09-07 23:14:05 +0000 |
3759 | +++ bzrlib/tests/test_mutabletree.py 2009-10-15 18:31:13 +0000 |
3760 | @@ -19,15 +19,18 @@ |
3761 | Most functionality of MutableTree is tested as part of WorkingTree. |
3762 | """ |
3763 | |
3764 | -from bzrlib.tests import TestCase |
3765 | -from bzrlib.mutabletree import MutableTree, MutableTreeHooks |
3766 | - |
3767 | -class TestHooks(TestCase): |
3768 | +from bzrlib import ( |
3769 | + mutabletree, |
3770 | + tests, |
3771 | + ) |
3772 | + |
3773 | + |
3774 | +class TestHooks(tests.TestCase): |
3775 | |
3776 | def test_constructor(self): |
3777 | """Check that creating a MutableTreeHooks instance has the right |
3778 | defaults.""" |
3779 | - hooks = MutableTreeHooks() |
3780 | + hooks = mutabletree.MutableTreeHooks() |
3781 | self.assertTrue("start_commit" in hooks, |
3782 | "start_commit not in %s" % hooks) |
3783 | self.assertTrue("post_commit" in hooks, |
3784 | @@ -36,7 +39,25 @@ |
3785 | def test_installed_hooks_are_MutableTreeHooks(self): |
3786 | """The installed hooks object should be a MutableTreeHooks.""" |
3787 | # the installed hooks are saved in self._preserved_hooks. |
3788 | - self.assertIsInstance(self._preserved_hooks[MutableTree][1], |
3789 | - MutableTreeHooks) |
3790 | - |
3791 | - |
3792 | + self.assertIsInstance(self._preserved_hooks[mutabletree.MutableTree][1], |
3793 | + mutabletree.MutableTreeHooks) |
3794 | + |
3795 | + |
3796 | +class TestHasChanges(tests.TestCaseWithTransport): |
3797 | + |
3798 | + def setUp(self): |
3799 | + super(TestHasChanges, self).setUp() |
3800 | + self.tree = self.make_branch_and_tree('tree') |
3801 | + |
3802 | + def test_with_uncommitted_changes(self): |
3803 | + self.build_tree(['tree/file']) |
3804 | + self.tree.add('file') |
3805 | + self.assertTrue(self.tree.has_changes()) |
3806 | + |
3807 | + def test_with_pending_merges(self): |
3808 | + other_tree = self.tree.bzrdir.sprout('other').open_workingtree() |
3809 | + self.build_tree(['other/file']) |
3810 | + other_tree.add('file') |
3811 | + other_tree.commit('added file') |
3812 | + self.tree.merge_from_branch(other_tree.branch) |
3813 | + self.assertTrue(self.tree.has_changes()) |
3814 | |
3815 | === modified file 'bzrlib/tests/test_osutils.py' |
3816 | --- bzrlib/tests/test_osutils.py 2009-09-19 16:14:10 +0000 |
3817 | +++ bzrlib/tests/test_osutils.py 2009-10-15 18:31:14 +0000 |
3818 | @@ -457,6 +457,49 @@ |
3819 | self.failUnlessEqual('work/MixedCaseParent/nochild', actual) |
3820 | |
3821 | |
3822 | +class Test_CICPCanonicalRelpath(tests.TestCaseWithTransport): |
3823 | + |
3824 | + def assertRelpath(self, expected, base, path): |
3825 | + actual = osutils._cicp_canonical_relpath(base, path) |
3826 | + self.assertEqual(expected, actual) |
3827 | + |
3828 | + def test_simple(self): |
3829 | + self.build_tree(['MixedCaseName']) |
3830 | + base = osutils.realpath(self.get_transport('.').local_abspath('.')) |
3831 | + self.assertRelpath('MixedCaseName', base, 'mixedcAsename') |
3832 | + |
3833 | + def test_subdir_missing_tail(self): |
3834 | + self.build_tree(['MixedCaseParent/', 'MixedCaseParent/a_child']) |
3835 | + base = osutils.realpath(self.get_transport('.').local_abspath('.')) |
3836 | + self.assertRelpath('MixedCaseParent/a_child', base, |
3837 | + 'MixedCaseParent/a_child') |
3838 | + self.assertRelpath('MixedCaseParent/a_child', base, |
3839 | + 'MixedCaseParent/A_Child') |
3840 | + self.assertRelpath('MixedCaseParent/not_child', base, |
3841 | + 'MixedCaseParent/not_child') |
3842 | + |
3843 | + def test_at_root_slash(self): |
3844 | + # We can't test this on Windows, because it has a 'MIN_ABS_PATHLENGTH' |
3845 | + # check... |
3846 | + if osutils.MIN_ABS_PATHLENGTH > 1: |
3847 | + raise tests.TestSkipped('relpath requires %d chars' |
3848 | + % osutils.MIN_ABS_PATHLENGTH) |
3849 | + self.assertRelpath('foo', '/', '/foo') |
3850 | + |
3851 | + def test_at_root_drive(self): |
3852 | + if sys.platform != 'win32': |
3853 | + raise tests.TestNotApplicable('we can only test drive-letter relative' |
3854 | + ' paths on Windows where we have drive' |
3855 | + ' letters.') |
3856 | + # see bug #322807 |
3857 | + # The specific issue is that when at the root of a drive, 'abspath' |
3858 | + # returns "C:/" or just "/". However, the code assumes that abspath |
3859 | + # always returns something like "C:/foo" or "/foo" (no trailing slash). |
3860 | + self.assertRelpath('foo', 'C:/', 'C:/foo') |
3861 | + self.assertRelpath('foo', 'X:/', 'X:/foo') |
3862 | + self.assertRelpath('foo', 'X:/', 'X://foo') |
3863 | + |
3864 | + |
3865 | class TestPumpFile(tests.TestCase): |
3866 | """Test pumpfile method.""" |
3867 | |
3868 | |
3869 | === modified file 'bzrlib/tests/test_reconfigure.py' |
3870 | --- bzrlib/tests/test_reconfigure.py 2009-04-28 20:12:44 +0000 |
3871 | +++ bzrlib/tests/test_reconfigure.py 2009-10-15 18:31:14 +0000 |
3872 | @@ -1,4 +1,4 @@ |
3873 | -# Copyright (C) 2007 Canonical Ltd |
3874 | +# Copyright (C) 2007, 2008, 2009 Canonical Ltd |
3875 | # |
3876 | # This program is free software; you can redistribute it and/or modify |
3877 | # it under the terms of the GNU General Public License as published by |
3878 | @@ -44,6 +44,19 @@ |
3879 | self.assertRaises(errors.NoWorkingTree, workingtree.WorkingTree.open, |
3880 | 'tree') |
3881 | |
3882 | + def test_tree_with_pending_merge_to_branch(self): |
3883 | + tree = self.make_branch_and_tree('tree') |
3884 | + other_tree = tree.bzrdir.sprout('other').open_workingtree() |
3885 | + self.build_tree(['other/file']) |
3886 | + other_tree.add('file') |
3887 | + other_tree.commit('file added') |
3888 | + tree.merge_from_branch(other_tree.branch) |
3889 | + reconfiguration = reconfigure.Reconfigure.to_branch(tree.bzrdir) |
3890 | + self.assertRaises(errors.UncommittedChanges, reconfiguration.apply) |
3891 | + reconfiguration.apply(force=True) |
3892 | + self.assertRaises(errors.NoWorkingTree, workingtree.WorkingTree.open, |
3893 | + 'tree') |
3894 | + |
3895 | def test_branch_to_branch(self): |
3896 | branch = self.make_branch('branch') |
3897 | self.assertRaises(errors.AlreadyBranch, |
3898 | |
3899 | === modified file 'bzrlib/tests/test_remote.py' |
3900 | --- bzrlib/tests/test_remote.py 2009-09-24 05:31:23 +0000 |
3901 | +++ bzrlib/tests/test_remote.py 2009-10-15 18:31:14 +0000 |
3902 | @@ -2201,6 +2201,26 @@ |
3903 | repo.get_rev_id_for_revno, 5, (42, 'rev-foo')) |
3904 | self.assertFinished(client) |
3905 | |
3906 | + def test_branch_fallback_locking(self): |
3907 | + """RemoteBranch.get_rev_id takes a read lock, and tries to call the |
3908 | + get_rev_id_for_revno verb. If the verb is unknown the VFS fallback |
3909 | + will be invoked, which will fail if the repo is unlocked. |
3910 | + """ |
3911 | + self.setup_smart_server_with_call_log() |
3912 | + tree = self.make_branch_and_memory_tree('.') |
3913 | + tree.lock_write() |
3914 | + rev1 = tree.commit('First') |
3915 | + rev2 = tree.commit('Second') |
3916 | + tree.unlock() |
3917 | + branch = tree.branch |
3918 | + self.assertFalse(branch.is_locked()) |
3919 | + self.reset_smart_call_log() |
3920 | + verb = 'Repository.get_rev_id_for_revno' |
3921 | + self.disable_verb(verb) |
3922 | + self.assertEqual(rev1, branch.get_rev_id(1)) |
3923 | + self.assertLength(1, [call for call in self.hpss_calls if |
3924 | + call.call.method == verb]) |
3925 | + |
3926 | |
3927 | class TestRepositoryIsShared(TestRemoteRepository): |
3928 | |
3929 | |
3930 | === modified file 'bzrlib/tests/test_status.py' |
3931 | --- bzrlib/tests/test_status.py 2009-08-20 04:09:58 +0000 |
3932 | +++ bzrlib/tests/test_status.py 2009-10-15 18:31:13 +0000 |
3933 | @@ -53,7 +53,7 @@ |
3934 | tree2.commit('commit 3b', timestamp=1196796819, timezone=0) |
3935 | tree3.commit('commit 3c', timestamp=1196796819, timezone=0) |
3936 | tree.merge_from_branch(tree2.branch) |
3937 | - tree.merge_from_branch(tree3.branch) |
3938 | + tree.merge_from_branch(tree3.branch, force=True) |
3939 | return tree |
3940 | |
3941 | def test_multiple_pending(self): |
3942 | |
3943 | === modified file 'bzrlib/tests/test_ui.py' |
3944 | --- bzrlib/tests/test_ui.py 2009-09-24 08:56:52 +0000 |
3945 | +++ bzrlib/tests/test_ui.py 2009-10-15 18:31:14 +0000 |
3946 | @@ -54,7 +54,7 @@ |
3947 | ) |
3948 | |
3949 | |
3950 | -class UITests(tests.TestCase): |
3951 | +class TestTextUIFactory(tests.TestCase): |
3952 | |
3953 | def test_text_factory_ascii_password(self): |
3954 | ui = tests.TestUIFactory(stdin='secret\n', |
3955 | @@ -100,56 +100,6 @@ |
3956 | finally: |
3957 | pb.finished() |
3958 | |
3959 | - def test_progress_construction(self): |
3960 | - """TextUIFactory constructs the right progress view. |
3961 | - """ |
3962 | - for (file_class, term, pb, expected_pb_class) in ( |
3963 | - # on an xterm, either use them or not as the user requests, |
3964 | - # otherwise default on |
3965 | - (_TTYStringIO, 'xterm', 'none', NullProgressView), |
3966 | - (_TTYStringIO, 'xterm', 'text', TextProgressView), |
3967 | - (_TTYStringIO, 'xterm', None, TextProgressView), |
3968 | - # on a dumb terminal, again if there's explicit configuration do |
3969 | - # it, otherwise default off |
3970 | - (_TTYStringIO, 'dumb', 'none', NullProgressView), |
3971 | - (_TTYStringIO, 'dumb', 'text', TextProgressView), |
3972 | - (_TTYStringIO, 'dumb', None, NullProgressView), |
3973 | - # on a non-tty terminal, it's null regardless of $TERM |
3974 | - (StringIO, 'xterm', None, NullProgressView), |
3975 | - (StringIO, 'dumb', None, NullProgressView), |
3976 | - # however, it can still be forced on |
3977 | - (StringIO, 'dumb', 'text', TextProgressView), |
3978 | - ): |
3979 | - os.environ['TERM'] = term |
3980 | - if pb is None: |
3981 | - if 'BZR_PROGRESS_BAR' in os.environ: |
3982 | - del os.environ['BZR_PROGRESS_BAR'] |
3983 | - else: |
3984 | - os.environ['BZR_PROGRESS_BAR'] = pb |
3985 | - stdin = file_class('') |
3986 | - stderr = file_class() |
3987 | - stdout = file_class() |
3988 | - uif = make_ui_for_terminal(stdin, stdout, stderr) |
3989 | - self.assertIsInstance(uif, TextUIFactory, |
3990 | - "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,)) |
3991 | - self.assertIsInstance(uif.make_progress_view(), |
3992 | - expected_pb_class, |
3993 | - "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,)) |
3994 | - |
3995 | - def test_text_ui_non_terminal(self): |
3996 | - """Even on non-ttys, make_ui_for_terminal gives a text ui.""" |
3997 | - stdin = _NonTTYStringIO('') |
3998 | - stderr = _NonTTYStringIO() |
3999 | - stdout = _NonTTYStringIO() |
4000 | - for term_type in ['dumb', None, 'xterm']: |
4001 | - if term_type is None: |
4002 | - del os.environ['TERM'] |
4003 | - else: |
4004 | - os.environ['TERM'] = term_type |
4005 | - uif = make_ui_for_terminal(stdin, stdout, stderr) |
4006 | - self.assertIsInstance(uif, TextUIFactory, |
4007 | - 'TERM=%r' % (term_type,)) |
4008 | - |
4009 | def test_progress_note(self): |
4010 | stderr = StringIO() |
4011 | stdout = StringIO() |
4012 | @@ -304,6 +254,59 @@ |
4013 | pb.finished() |
4014 | |
4015 | |
4016 | +class UITests(tests.TestCase): |
4017 | + |
4018 | + def test_progress_construction(self): |
4019 | + """TextUIFactory constructs the right progress view. |
4020 | + """ |
4021 | + for (file_class, term, pb, expected_pb_class) in ( |
4022 | + # on an xterm, either use them or not as the user requests, |
4023 | + # otherwise default on |
4024 | + (_TTYStringIO, 'xterm', 'none', NullProgressView), |
4025 | + (_TTYStringIO, 'xterm', 'text', TextProgressView), |
4026 | + (_TTYStringIO, 'xterm', None, TextProgressView), |
4027 | + # on a dumb terminal, again if there's explicit configuration do |
4028 | + # it, otherwise default off |
4029 | + (_TTYStringIO, 'dumb', 'none', NullProgressView), |
4030 | + (_TTYStringIO, 'dumb', 'text', TextProgressView), |
4031 | + (_TTYStringIO, 'dumb', None, NullProgressView), |
4032 | + # on a non-tty terminal, it's null regardless of $TERM |
4033 | + (StringIO, 'xterm', None, NullProgressView), |
4034 | + (StringIO, 'dumb', None, NullProgressView), |
4035 | + # however, it can still be forced on |
4036 | + (StringIO, 'dumb', 'text', TextProgressView), |
4037 | + ): |
4038 | + os.environ['TERM'] = term |
4039 | + if pb is None: |
4040 | + if 'BZR_PROGRESS_BAR' in os.environ: |
4041 | + del os.environ['BZR_PROGRESS_BAR'] |
4042 | + else: |
4043 | + os.environ['BZR_PROGRESS_BAR'] = pb |
4044 | + stdin = file_class('') |
4045 | + stderr = file_class() |
4046 | + stdout = file_class() |
4047 | + uif = make_ui_for_terminal(stdin, stdout, stderr) |
4048 | + self.assertIsInstance(uif, TextUIFactory, |
4049 | + "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,)) |
4050 | + self.assertIsInstance(uif.make_progress_view(), |
4051 | + expected_pb_class, |
4052 | + "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,)) |
4053 | + |
4054 | + def test_text_ui_non_terminal(self): |
4055 | + """Even on non-ttys, make_ui_for_terminal gives a text ui.""" |
4056 | + stdin = _NonTTYStringIO('') |
4057 | + stderr = _NonTTYStringIO() |
4058 | + stdout = _NonTTYStringIO() |
4059 | + for term_type in ['dumb', None, 'xterm']: |
4060 | + if term_type is None: |
4061 | + del os.environ['TERM'] |
4062 | + else: |
4063 | + os.environ['TERM'] = term_type |
4064 | + uif = make_ui_for_terminal(stdin, stdout, stderr) |
4065 | + self.assertIsInstance(uif, TextUIFactory, |
4066 | + 'TERM=%r' % (term_type,)) |
4067 | + |
4068 | + |
4069 | class CLIUITests(TestCase): |
4070 | |
4071 | def test_cli_factory_deprecated(self): |
4072 | |
4073 | === modified file 'bzrlib/ui/__init__.py' |
4074 | --- bzrlib/ui/__init__.py 2009-07-22 07:34:08 +0000 |
4075 | +++ bzrlib/ui/__init__.py 2009-10-15 18:31:14 +0000 |
4076 | @@ -22,18 +22,18 @@ |
4077 | Several levels are supported, and you can also register new factories such as |
4078 | for a GUI. |
4079 | |
4080 | -UIFactory |
4081 | +bzrlib.ui.UIFactory |
4082 | Semi-abstract base class |
4083 | |
4084 | -SilentUIFactory |
4085 | +bzrlib.ui.SilentUIFactory |
4086 | Produces no output and cannot take any input; useful for programs using |
4087 | bzrlib in batch mode or for programs such as loggerhead. |
4088 | |
4089 | -CannedInputUIFactory |
4090 | +bzrlib.ui.CannedInputUIFactory |
4091 | For use in testing; the input values to be returned are provided |
4092 | at construction. |
4093 | |
4094 | -TextUIFactory |
4095 | +bzrlib.ui.text.TextUIFactory |
4096 | Standard text command-line interface, with stdin, stdout, stderr. |
4097 | May make more or less advanced use of them, eg in drawing progress bars, |
4098 | depending on the detected capabilities of the terminal. |
4099 | @@ -208,6 +208,22 @@ |
4100 | """ |
4101 | pass |
4102 | |
4103 | + def show_error(self, msg): |
4104 | + """Show an error message (not an exception) to the user. |
4105 | + |
4106 | + The message should not have an error prefix or trailing newline. That |
4107 | + will be added by the factory if appropriate. |
4108 | + """ |
4109 | + raise NotImplementedError(self.show_error) |
4110 | + |
4111 | + def show_message(self, msg): |
4112 | + """Show a message to the user.""" |
4113 | + raise NotImplementedError(self.show_message) |
4114 | + |
4115 | + def show_warning(self, msg): |
4116 | + """Show a warning to the user.""" |
4117 | + raise NotImplementedError(self.show_warning) |
4118 | + |
4119 | |
4120 | |
4121 | class CLIUIFactory(UIFactory): |
4122 | @@ -318,6 +334,15 @@ |
4123 | def get_username(self, prompt, **kwargs): |
4124 | return None |
4125 | |
4126 | + def show_error(self, msg): |
4127 | + pass |
4128 | + |
4129 | + def show_message(self, msg): |
4130 | + pass |
4131 | + |
4132 | + def show_warning(self, msg): |
4133 | + pass |
4134 | + |
4135 | |
4136 | class CannedInputUIFactory(SilentUIFactory): |
4137 | """A silent UI that return canned input.""" |
4138 | |
4139 | === modified file 'bzrlib/ui/text.py' |
4140 | --- bzrlib/ui/text.py 2009-08-06 02:23:37 +0000 |
4141 | +++ bzrlib/ui/text.py 2009-10-15 18:31:14 +0000 |
4142 | @@ -49,9 +49,6 @@ |
4143 | stdout=None, |
4144 | stderr=None): |
4145 | """Create a TextUIFactory. |
4146 | - |
4147 | - :param bar_type: The type of progress bar to create. Deprecated |
4148 | - and ignored; a TextProgressView is always used. |
4149 | """ |
4150 | super(TextUIFactory, self).__init__() |
4151 | # TODO: there's no good reason not to pass all three streams, maybe we |
4152 | @@ -176,6 +173,17 @@ |
4153 | self._progress_view.show_transport_activity(transport, |
4154 | direction, byte_count) |
4155 | |
4156 | + def show_error(self, msg): |
4157 | + self.clear_term() |
4158 | + self.stderr.write("bzr: error: %s\n" % msg) |
4159 | + |
4160 | + def show_message(self, msg): |
4161 | + self.note(msg) |
4162 | + |
4163 | + def show_warning(self, msg): |
4164 | + self.clear_term() |
4165 | + self.stderr.write("bzr: warning: %s\n" % msg) |
4166 | + |
4167 | def _progress_updated(self, task): |
4168 | """A task has been updated and wants to be displayed. |
4169 | """ |
4170 | |
4171 | === modified file 'bzrlib/util/_bencode_py.py' |
4172 | --- bzrlib/util/_bencode_py.py 2009-06-10 03:56:49 +0000 |
4173 | +++ bzrlib/util/_bencode_py.py 2009-10-15 18:31:14 +0000 |
4174 | @@ -154,6 +154,13 @@ |
4175 | encode_int(int(x), r) |
4176 | encode_func[BooleanType] = encode_bool |
4177 | |
4178 | +try: |
4179 | + from bzrlib._static_tuple_c import StaticTuple |
4180 | +except ImportError: |
4181 | + pass |
4182 | +else: |
4183 | + encode_func[StaticTuple] = encode_list |
4184 | + |
4185 | |
4186 | def bencode(x): |
4187 | r = [] |
4188 | |
4189 | === modified file 'bzrlib/workingtree.py' |
4190 | --- bzrlib/workingtree.py 2009-08-26 05:38:16 +0000 |
4191 | +++ bzrlib/workingtree.py 2009-10-15 18:31:14 +0000 |
4192 | @@ -896,7 +896,7 @@ |
4193 | |
4194 | @needs_write_lock # because merge pulls data into the branch. |
4195 | def merge_from_branch(self, branch, to_revision=None, from_revision=None, |
4196 | - merge_type=None): |
4197 | + merge_type=None, force=False): |
4198 | """Merge from a branch into this working tree. |
4199 | |
4200 | :param branch: The branch to merge from. |
4201 | @@ -911,9 +911,9 @@ |
4202 | merger = Merger(self.branch, this_tree=self, pb=pb) |
4203 | merger.pp = ProgressPhase("Merge phase", 5, pb) |
4204 | merger.pp.next_phase() |
4205 | - # check that there are no |
4206 | - # local alterations |
4207 | - merger.check_basis(check_clean=True, require_commits=False) |
4208 | + # check that there are no local alterations |
4209 | + if not force and self.has_changes(): |
4210 | + raise errors.UncommittedChanges(self) |
4211 | if to_revision is None: |
4212 | to_revision = _mod_revision.ensure_null(branch.last_revision()) |
4213 | merger.other_rev_id = to_revision |
4214 | |
4215 | === modified file 'doc/developers/HACKING.txt' |
4216 | --- doc/developers/HACKING.txt 2009-09-30 14:56:21 +0000 |
4217 | +++ doc/developers/HACKING.txt 2009-10-15 18:31:14 +0000 |
4218 | @@ -671,6 +671,19 @@ |
4219 | may not catch every case but it's still useful sometimes. |
4220 | |
4221 | |
4222 | +Cleanup methods |
4223 | +=============== |
4224 | + |
4225 | +Often when something has failed later code, including cleanups invoked |
4226 | +from ``finally`` blocks, will fail too. These secondary failures are |
4227 | +generally uninteresting compared to the original exception. So use the |
4228 | +``only_raises`` decorator (from ``bzrlib.decorators``) for methods that |
4229 | +are typically called in ``finally`` blocks, such as ``unlock`` methods. |
4230 | +For example, ``@only_raises(LockNotHeld, LockBroken)``. All errors that |
4231 | +are unlikely to be a knock-on failure from an previous failure should be |
4232 | +allowed. |
4233 | + |
4234 | + |
4235 | Factories |
4236 | ========= |
4237 | |
4238 | |
4239 | === modified file 'doc/developers/releasing.txt' |
4240 | --- doc/developers/releasing.txt 2009-09-16 08:59:26 +0000 |
4241 | +++ doc/developers/releasing.txt 2009-10-15 18:31:14 +0000 |
4242 | @@ -212,9 +212,20 @@ |
4243 | we have a releasable product. The next step is to make it generally |
4244 | available to the world. |
4245 | |
4246 | -#. Link from http://bazaar-vcs.org/Download to the tarball and signature. |
4247 | - |
4248 | -#. Announce on the `Bazaar home page <http://bazaar-vcs.org/>`_. |
4249 | +go to the release |
4250 | + |
4251 | +#. Within that release, upload the source tarball and zipfile and the GPG |
4252 | + signature. Or, if you prefer, use the |
4253 | + ``tools/packaging/lp-upload-release`` script to do this. |
4254 | + |
4255 | +#. Link from http://bazaar-vcs.org/SourceDownloads to the tarball and |
4256 | + signature. |
4257 | + |
4258 | +#. Announce on the `Bazaar website <http://bazaar-vcs.org/>`_. |
4259 | + This page is edited via the lp:bzr-website branch. (Changes |
4260 | + pushed to this branch are refreshed by a cron job on escudero.) |
4261 | + |
4262 | +#. Announce on the `Bazaar wiki <http://bazaar-vcs.org/Welcome>`_. |
4263 | |
4264 | #. Check that the documentation for this release is available in |
4265 | <http://doc.bazaar-vcs.org>. It should be automatically build when the |
4266 | |
4267 | === modified file 'doc/en/upgrade-guide/data_migration.txt' |
4268 | --- doc/en/upgrade-guide/data_migration.txt 2009-09-09 13:34:08 +0000 |
4269 | +++ doc/en/upgrade-guide/data_migration.txt 2009-10-15 18:31:13 +0000 |
4270 | @@ -30,7 +30,7 @@ |
4271 | * **upgrade** - migrate data to a different format. |
4272 | |
4273 | **reconcile** is rarely needed but it's good practice to run **check** |
4274 | -before and after runing **upgrade**. |
4275 | +before and after running **upgrade**. |
4276 | |
4277 | For detailed help on these commands, see the `Bazaar User Reference`_. |
4278 | |
4279 | @@ -40,7 +40,7 @@ |
4280 | Communicating with your community |
4281 | --------------------------------- |
4282 | |
4283 | -To enable a smooth transistion to the new format, you should: |
4284 | +To enable a smooth transition to the new format, you should: |
4285 | |
4286 | 1. Make one person responsible for migrating the trunk. |
4287 | |
4288 | @@ -97,22 +97,54 @@ |
4289 | Migrating branches on Launchpad |
4290 | ------------------------------- |
4291 | |
4292 | +You have two options for upgrading your Launchpad branches. You can either |
4293 | +upgrade them remotely or you can upgrade them locally and push the migrated |
4294 | +branch to Launchpad. We recommend the latter. Upgrading remotely currently |
4295 | +requires a fast, rock solid network connection to the Launchpad servers, and |
4296 | +any interruption in that connection can leave you with a partially upgraded |
4297 | +branch. The instructions below are the safest and often fastest way to |
4298 | +upgrade your Launchpad branches. |
4299 | + |
4300 | To allow isolation between public and private branches, Launchpad |
4301 | uses stacked branches rather than shared repositories as the core |
4302 | technology for efficient branch storage. The process for migrating |
4303 | to a new format for projects using Launchpad code hosting is therefore |
4304 | different to migrating a personal or in-house project. |
4305 | |
4306 | +In Launchpad, a project can define a *development series* and associate a |
4307 | +branch with that series. The branch then becomes the *focus of development* |
4308 | +and gets special treatment and a shortcut url. By default, if anybody |
4309 | +branches your project's focus of development and pushes changes back to |
4310 | +Launchpad, their branch will be stacked on your development focus branch. |
4311 | +Also, branches can be associated with other Launchpad artifacts such as bugs |
4312 | +and merge proposals. All of these things mean that upgrading your focus of |
4313 | +development branch is trickier. |
4314 | + |
4315 | Here are the steps to follow: |
4316 | |
4317 | -1. The nominated person grabs a copy of trunk and does the migration. |
4318 | +1. The nominated person grabs a copy of trunk and does the migration locally. |
4319 | |
4320 | 2. On Launchpad, unset the current trunk from being the development focus. |
4321 | (This *must* be done or the following step won't work as expected.) |
4322 | |
4323 | -3. Push the migrated trunk to Launchpad. |
4324 | - |
4325 | -4. Set it as the development focus. |
4326 | + 1. Go to your project's home page on Launchpad |
4327 | + |
4328 | + 2. Look for "XXX is the current focus of development" |
4329 | + |
4330 | + 3. Click on the edit (pencil) icon |
4331 | + |
4332 | + 4. Click on "Change details" in the portlet on the right |
4333 | + |
4334 | + 5. Scroll down to where it says "Branch: (Optional)" |
4335 | + |
4336 | + 6. Blank out this input field and click "Change" |
4337 | + |
4338 | +3. Push the migrated trunk to Launchpad. See below if you want your |
4339 | + new migrated development focus branch to have the same name as your old |
4340 | + pre-migration development focus branch. |
4341 | + |
4342 | +4. Set it as the development focus. Follow the instructions above but at step |
4343 | + 5, enter the name of the newly migrated branch you just pushed. |
4344 | |
4345 | 5. Ask users subscribed to the old trunk to subscribe to the new one. |
4346 | |
4347 | @@ -124,6 +156,20 @@ |
4348 | You are now ready to tell your community that the new trunk is available |
4349 | and to give them instructions on migrating any local branches they have. |
4350 | |
4351 | +If you want your new migrated development focus branch to have the same name |
4352 | +as your old pre-migration branch, you need to do a few extra things before you |
4353 | +establish the new development focus. |
4354 | + |
4355 | +1. Rename your old pre-migration branch; use something like |
4356 | + **foo-obsolete-do-not-use**. You will really not want to delete this |
4357 | + because there will be artifacts (bugs, merge proposals, etc.) associated |
4358 | + with it. |
4359 | + |
4360 | +2. Rename the new migrated branch to the pre-migration branch's old name. |
4361 | + |
4362 | +3. Re-establish the development focus branch using the new migrated branch's |
4363 | + new name (i.e. the old pre-migration branch's original name). |
4364 | + |
4365 | |
4366 | Migrating local branches after a central trunk has migrated |
4367 | ----------------------------------------------------------- |
4368 | |
4369 | === modified file 'doc/en/user-guide/branching_a_project.txt' |
4370 | --- doc/en/user-guide/branching_a_project.txt 2009-09-09 15:30:59 +0000 |
4371 | +++ doc/en/user-guide/branching_a_project.txt 2009-10-15 18:31:14 +0000 |
4372 | @@ -16,11 +16,11 @@ |
4373 | =========== ====================================================== |
4374 | Prefix Description |
4375 | =========== ====================================================== |
4376 | - file:// Access using the standard filesystem (default) |
4377 | - sftp:// Access using SFTP (most SSH servers provide SFTP). |
4378 | - bzr:// Fast access using the Bazaar smart server. |
4379 | - ftp:// Access using passive FTP. |
4380 | - http:// Read-only access to branches exported by a web server. |
4381 | + \file:// Access using the standard filesystem (default) |
4382 | + \sftp:// Access using SFTP (most SSH servers provide SFTP). |
4383 | + \bzr:// Fast access using the Bazaar smart server. |
4384 | + \ftp:// Access using passive FTP. |
4385 | + \http:// Read-only access to branches exported by a web server. |
4386 | =========== ====================================================== |
4387 | |
4388 | As indicated above, branches are identified using URLs with the |
4389 | |
4390 | === modified file 'setup.py' |
4391 | --- setup.py 2009-10-08 15:44:41 +0000 |
4392 | +++ setup.py 2009-10-15 18:31:14 +0000 |
4393 | @@ -167,7 +167,13 @@ |
4394 | from distutils.extension import Extension |
4395 | ext_modules = [] |
4396 | try: |
4397 | - from Pyrex.Distutils import build_ext |
4398 | + try: |
4399 | + from Pyrex.Distutils import build_ext |
4400 | + from Pyrex.Compiler.Version import version as pyrex_version |
4401 | + except ImportError: |
4402 | + print "No Pyrex, trying Cython..." |
4403 | + from Cython.Distutils import build_ext |
4404 | + from Cython.Compiler.Version import version as pyrex_version |
4405 | except ImportError: |
4406 | have_pyrex = False |
4407 | # try to build the extension from the prior generated source. |
4408 | @@ -180,7 +186,6 @@ |
4409 | from distutils.command.build_ext import build_ext |
4410 | else: |
4411 | have_pyrex = True |
4412 | - from Pyrex.Compiler.Version import version as pyrex_version |
4413 | |
4414 | |
4415 | class build_ext_if_possible(build_ext): |
This is a fairly minor update to my "btree uses StaticTuples" branch, though it is mostly orthogonal.
This changes the code that decides whether to intern strings. Specifically, it:
1) Doesn't intern key bits that start with 'sha1:'.
    a) There are *lots* of these (500k).
    b) They are rarely duplicated (there are no parent lists, and they are only duplicated in the chk maps).
    c) They are never a sub-part of a key. A revision_id may show up as (revision_id,) or as
       (file_id, revision_id); sha1: is never used like that [yet].
   This saves us 24 bytes per string in the string-interning dict. At 500k sha1s, that is 11MB.
   Note that at the moment we still intern the StaticTuple('sha1:...'). I may want to revisit
   that, since there are no parent lists. However, for something like fetch, we will load the key
   from the index and then load it again in the chk map, so I think interning is a net win.
2) Does intern some of the 'values' if they indicate the value is for a 'null content' record. We
   get a null content record for every directory entry. And since our upgrade to rich-root decided
   to add an entry for TREE_ROOT for every revision, we have *lots* of these.
   (I think something like 1/4 of all entries in the per-file graph were these root keys.)
The code comment mentions that when loading bzr.dev this saves 1.2MB, and for launchpad it saves
around 4MB, which was about 1.7% of peak memory. Not huge, but a fairly cheap win.
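The two heuristics above can be sketched roughly like this. This is a minimal illustration, not the actual bzrlib `_btree_serializer_pyx` implementation; the names (`maybe_intern`, `maybe_intern_value`, the `null:` marker) are invented for the example.

```python
# Sketch of the selective-interning heuristics described above.
# All names here are illustrative, not bzrlib's real API.

_interned = {}


def maybe_intern(bit):
    """Return a canonical copy of ``bit``, or ``bit`` itself for sha1 keys.

    sha1-based key bits are plentiful (~500k in a large repo) but almost
    never repeated, so keeping them in the intern dict would cost roughly
    24 bytes of overhead each for no sharing benefit.
    """
    if bit.startswith('sha1:'):
        # Skip: rarely duplicated, and never a sub-part of another key.
        return bit
    # dict.setdefault gives us poor-man's interning for arbitrary strings:
    # the first copy seen becomes the canonical one.
    return _interned.setdefault(bit, bit)


def maybe_intern_value(value):
    """Intern index 'values' only when they describe a null-content record.

    Directory entries (and the rich-root TREE_ROOT entry added for every
    revision) all share the same null-content value, so interning those
    few distinct strings collapses a large number of duplicates.
    """
    # 'null:' is a stand-in marker for this sketch; the real serializer
    # recognizes null-content records by inspecting the value itself.
    if value.startswith('null:'):
        return _interned.setdefault(value, value)
    return value
```

The key point is that interning is purely an optimization: whether a string goes through the intern dict or not, callers get back an equal string, so correctness never depends on the heuristic.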
This is a little bit of a layering inversion, because it adds domain-specific logic into the btree index, which should be considered a generic container. However, the memory savings are visible. It also isn't a correctness issue, as it only decides whether we intern() or not.