Merge lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern into lp:~jameinel/bzr/2.1-static-tuple-btree

Proposed by John A Meinel
Status: Merged
Merge reported by: John A Meinel
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern
Merge into: lp:~jameinel/bzr/2.1-static-tuple-btree
Diff against target: 4415 lines
66 files modified
Makefile (+1/-1)
NEWS (+207/-63)
README (+35/-67)
bzrlib/__init__.py (+1/-1)
bzrlib/_bencode_pyx.pyx (+9/-1)
bzrlib/_btree_serializer_pyx.pyx (+37/-10)
bzrlib/_export_c_api.h (+5/-2)
bzrlib/_import_c_api.h (+6/-3)
bzrlib/_simple_set_pyx.pxd (+27/-3)
bzrlib/_simple_set_pyx.pyx (+130/-113)
bzrlib/_static_tuple_c.c (+192/-96)
bzrlib/_static_tuple_py.py (+2/-0)
bzrlib/branch.py (+8/-2)
bzrlib/btree_index.py (+17/-9)
bzrlib/builtins.py (+16/-11)
bzrlib/bundle/apply_bundle.py (+2/-1)
bzrlib/commands.py (+1/-1)
bzrlib/decorators.py (+27/-0)
bzrlib/diff.py (+9/-6)
bzrlib/foreign.py (+1/-2)
bzrlib/help_topics/__init__.py (+8/-26)
bzrlib/help_topics/en/debug-flags.txt (+5/-3)
bzrlib/index.py (+5/-3)
bzrlib/lock.py (+21/-0)
bzrlib/lockable_files.py (+2/-2)
bzrlib/lockdir.py (+2/-0)
bzrlib/merge.py (+10/-1)
bzrlib/mutabletree.py (+11/-3)
bzrlib/osutils.py (+9/-2)
bzrlib/python-compat.h (+6/-0)
bzrlib/reconfigure.py (+1/-3)
bzrlib/remote.py (+10/-3)
bzrlib/repofmt/pack_repo.py (+20/-6)
bzrlib/repository.py (+13/-2)
bzrlib/send.py (+2/-3)
bzrlib/static_tuple.py (+25/-0)
bzrlib/tests/__init__.py (+31/-1)
bzrlib/tests/blackbox/test_merge.py (+1/-1)
bzrlib/tests/blackbox/test_uncommit.py (+2/-2)
bzrlib/tests/lock_helpers.py (+2/-0)
bzrlib/tests/per_branch/test_locking.py (+2/-2)
bzrlib/tests/per_repository/test_write_group.py (+1/-1)
bzrlib/tests/per_repository_chk/test_supported.py (+35/-0)
bzrlib/tests/per_uifactory/__init__.py (+148/-0)
bzrlib/tests/test__simple_set.py (+161/-69)
bzrlib/tests/test__static_tuple.py (+60/-6)
bzrlib/tests/test_btree_index.py (+39/-0)
bzrlib/tests/test_decorators.py (+33/-10)
bzrlib/tests/test_diff.py (+46/-0)
bzrlib/tests/test_index.py (+10/-1)
bzrlib/tests/test_msgeditor.py (+1/-1)
bzrlib/tests/test_mutabletree.py (+30/-9)
bzrlib/tests/test_osutils.py (+43/-0)
bzrlib/tests/test_reconfigure.py (+14/-1)
bzrlib/tests/test_remote.py (+20/-0)
bzrlib/tests/test_status.py (+1/-1)
bzrlib/tests/test_ui.py (+54/-51)
bzrlib/ui/__init__.py (+29/-4)
bzrlib/ui/text.py (+11/-3)
bzrlib/util/_bencode_py.py (+7/-0)
bzrlib/workingtree.py (+4/-4)
doc/developers/HACKING.txt (+13/-0)
doc/developers/releasing.txt (+14/-3)
doc/en/upgrade-guide/data_migration.txt (+52/-6)
doc/en/user-guide/branching_a_project.txt (+5/-5)
setup.py (+7/-2)
To merge this branch: bzr merge lp:~jameinel/bzr/2.1-static-tuple-btree-string-intern
Reviewer: Andrew Bennetts
Review type: community
Status: Approve
Review via email: mp+13081@code.launchpad.net
Revision history for this message
John A Meinel (jameinel) wrote :

This is a fairly minor update on top of my "btree uses StaticTuples" branch, though it is mostly orthogonal to that work.

This branch changes the code that decides which strings to intern. Specifically, it:

1) Doesn't intern key bits that start with 'sha1:'.
   a) There are *lots* of these (500k).
   b) They are rarely duplicated (they have no parent lists, and are only repeated in the chk maps).
   c) They are never a sub-part of a key. A revision_id may show up as (revision_id,) or as
      (file_id, revision_id); a 'sha1:' key is never used like that [yet].

  This saves us 24 bytes per string in the interned-strings dict. At 500k sha1s, that is about 11MB.
  Note that at the moment we still intern the StaticTuple('sha1:...'). I may want to revisit that,
  since there are no parent lists. However, for something like fetch, we will load the key from the
  index and then load it again in the chk map, so I think interning the tuple is a net win.

2) Does intern some of the 'values' when they indicate a 'null content' record. We get a null
   content record for every directory entry, and since our upgrade to rich-root decided to add an
   entry for TREE_ROOT for every revision, we have *lots* of these. (I think something like 1/4
   of all entries in the per-file graph were these root keys.)

   The code comment mentions that when loading bzr.dev this saves 1.2MB, and for Launchpad it saves
   around 4MB, which was right about 1.7% of peak memory. Not huge, but a fairly cheap win.

This is a bit of a layering inversion, because it adds domain-specific logic to the btree index code, when btree indexes should be considered generic containers. However, the memory savings are visible, and it isn't a correctness issue, since the logic only decides whether or not we intern().
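For illustration, here is a rough Python sketch of the two heuristics described above. The actual change is in the Pyrex module bzrlib/_btree_serializer_pyx.pyx (see the diff below) and works on raw character buffers; the helper names here are hypothetical:

    def _maybe_intern_key_element(key_length, element):
        # Key elements are interned by default, but bare sha1 keys ('sha1:'
        # plus 40 hex digits, 45 bytes total) only ever appear as the sole
        # element of a 1-tuple key and are rarely repeated, so skip them and
        # save the ~24 bytes of intern-dict overhead per string.
        if key_length == 1 and len(element) == 45 and element.startswith('sha1:'):
            return element
        return intern(element)

    def _maybe_intern_value(value):
        # Values are plain strings like '12607215 328306 0 0'; the ones for
        # zero-length ('null content') records all end in ' 0 0' and are
        # heavily duplicated (every directory / TREE_ROOT entry), so those
        # are worth interning.  (intern() here is the Python 2 builtin.)
        if len(value) > 4 and value.endswith(' 0 0'):
            return intern(value)
        return value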

Revision history for this message
Andrew Bennetts (spiv) :
review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

Bumping this back to 'work-in-progress' for now.

Running more of the test suite has shown some failures: some really *random* failures on PQM that I cannot reproduce locally, but at least I did get some local failures that I can work on fixing.

Preview Diff

=== modified file 'Makefile'
--- Makefile 2009-10-01 02:46:17 +0000
+++ Makefile 2009-10-15 18:31:14 +0000
@@ -409,7 +409,7 @@
409 $(MAKE) clean && \409 $(MAKE) clean && \
410 $(MAKE) && \410 $(MAKE) && \
411 bzr export $$expdir && \411 bzr export $$expdir && \
412 cp bzrlib/*.c $$expdir/bzrlib/. && \412 cp bzrlib/*.c bzrlib/*.h $$expdir/bzrlib/. && \
413 tar cfz $$tarball -C $$expbasedir bzr-$$version && \413 tar cfz $$tarball -C $$expbasedir bzr-$$version && \
414 gpg --detach-sign $$tarball && \414 gpg --detach-sign $$tarball && \
415 rm -rf $$expbasedir415 rm -rf $$expbasedir
416416
=== modified file 'NEWS'
--- NEWS 2009-10-06 15:58:12 +0000
+++ NEWS 2009-10-15 18:31:14 +0000
@@ -6,11 +6,113 @@
6 :depth: 16 :depth: 1
77
88
92.1.0 series (not released yet)9bzr 2.1.0b2 (not released yet)
10###############################10##############################
1111
12Compatibility Breaks12:Codename:
13********************13:2.1.0b2: ???
14
15
16Compatibility Breaks
17********************
18
19New Features
20************
21
22Bug Fixes
23*********
24
25Improvements
26************
27
28* When reading index files, we now use a ``StaticTuple`` rather than a
29 plain ``tuple`` object. This generally gives a 20% decrease in peak
30 memory, and can give a performance boost up to 40% on large projects.
31 (John Arbash Meinel)
32
33Documentation
34*************
35
36API Changes
37***********
38
39* ``UIFactory`` now has new ``show_error``, ``show_message`` and
40 ``show_warning`` methods, which can be hooked by non-text UIs.
41 (Martin Pool)
42
43Internals
44*********
45
46* Added ``bzrlib._simple_set_pyx``. This is a hybrid between a Set and a
47 Dict (it only holds keys, but you can lookup the object located at a
48 given key). It has significantly reduced memory consumption versus the
49 builtin objects (1/2 the size of Set, 1/3rd the size of Dict). This is
50 used as the interning structure for StaticTuple objects.
51 (John Arbash Meinel)
52
53* ``bzrlib._static_tuple_pyx.StaticTuple`` is now available and used by
54 the btree index parser. This class functions similarly to ``tuple``
55 objects. However, it can only point at other ``StaticTuple`` instances
56 or strings. This allows us to remove it from the garbage collector (it
57 cannot be in a cycle), it also allows us to intern the objects. In
58 testing, this can reduce peak memory by 20-40%, and significantly
59 improve performance by removing objects from being inspected by the
60 garbage collector. (John Arbash Meinel)
61
62Testing
63*******
64
65
66bzr 2.0.2 (not released yet)
67############################
68
69:Codename:
70:2.0.2: ???
71
72Compatibility Breaks
73********************
74
75New Features
76************
77
78Bug Fixes
79*********
80
81Improvements
82************
83
84Documentation
85*************
86
87API Changes
88***********
89
90Internals
91*********
92
93Testing
94*******
95
96
97bzr 2.1.0b1
98###########
99
100:Codename: While the cat is away
101:2.1.0b1: 2009-10-14
102
103This is the first development release in the new split "stable" and
104"development" series. As such, the release is a snapshot of bzr.dev
105without creating a release candidate first. This release includes a
106fair amount of internal changes, with deprecated code being removed,
107and several new feature developments. People looking for a stable code
108base with only bugfixes should focus on the 2.0.1 release. All bugfixes
109present in 2.0.1 are present in 2.1.0b1.
110
111Highlights include support for ``bzr+ssh://host/~/homedir`` style urls,
112finer control over the plugin search path via extended BZR_PLUGIN_PATH
113syntax, visible warnings when extension modules fail to load, and improved
114error handling during unlocking.
115
14116
15New Features117New Features
16************118************
@@ -37,6 +139,10 @@
37 automatically benefit from this feature when ``bzr`` on the server is139 automatically benefit from this feature when ``bzr`` on the server is
38 upgraded. (Andrew Bennetts, #109143)140 upgraded. (Andrew Bennetts, #109143)
39141
142* Extensions can now be compiled if either Cython or Pyrex is available.
143 Currently Pyrex is preferred, but that may change in the future.
144 (Arkanes)
145
40* Give more control on BZR_PLUGIN_PATH by providing a way to refer to or146* Give more control on BZR_PLUGIN_PATH by providing a way to refer to or
41 disable the user, site and core plugin directories.147 disable the user, site and core plugin directories.
42 (Vincent Ladeuil, #412930, #316192, #145612)148 (Vincent Ladeuil, #412930, #316192, #145612)
@@ -56,54 +162,24 @@
56 filename will issue a warning and skip over those files.162 filename will issue a warning and skip over those files.
57 (Robert Collins, #3918)163 (Robert Collins, #3918)
58164
59* ``bzr check`` in pack-0.92, 1.6 and 1.9 format repositories will no
60 longer report incorrect errors about ``Missing inventory ('TREE_ROOT', ...)``
61 (Robert Collins, #416732)
62
63* ``bzr dpush`` now aborts if uncommitted changes (including pending merges)165* ``bzr dpush`` now aborts if uncommitted changes (including pending merges)
64 are present in the working tree. The configuration option ``dpush_strict``166 are present in the working tree. The configuration option ``dpush_strict``
65 can be used to set the default for this behavior.167 can be used to set the default for this behavior.
66 (Vincent Ladeuil, #438158)168 (Vincent Ladeuil, #438158)
67169
68* ``bzr info -v`` on a 2a format still claimed that it was a "Development
69 format" (John Arbash Meinel, #424392)
70
71* ``bzr merge`` and ``bzr remove-tree`` now requires --force if pending170* ``bzr merge`` and ``bzr remove-tree`` now requires --force if pending
72 merges are present in the working tree.171 merges are present in the working tree.
73 (Vincent Ladeuil, #426344)172 (Vincent Ladeuil, #426344)
74173
75* bzr will attempt to authenticate with SSH servers that support
76 ``keyboard-interactive`` auth but not ``password`` auth when using
77 Paramiko. (Andrew Bennetts, #433846)
78
79* Clearer message when Bazaar runs out of memory, instead of a ``MemoryError``174* Clearer message when Bazaar runs out of memory, instead of a ``MemoryError``
80 traceback. (Martin Pool, #109115)175 traceback. (Martin Pool, #109115)
81176
82* Conversion to 2a will create a single pack for all the new revisions (as
83 long as it ran without interruption). This improves both ``bzr upgrade``
84 and ``bzr pull`` or ``bzr merge`` from local branches in older formats.
85 The autopack logic that occurs every 100 revisions during local
86 conversions was not returning that pack's identifier, which resulted in
87 the partial packs created during the conversion not being consolidated
88 at the end of the conversion process. (Robert Collins, #423818)
89
90* Don't give a warning on Windows when failing to import ``_readdir_pyx``177* Don't give a warning on Windows when failing to import ``_readdir_pyx``
91 as it is never built. (John Arbash Meinel, #430645)178 as it is never built. (John Arbash Meinel, #430645)
92179
93* Don't restrict the command name used to run the test suite.180* Don't restrict the command name used to run the test suite.
94 (Vincent Ladeuil, #419950)181 (Vincent Ladeuil, #419950)
95182
96* Fetches from 2a to 2a are now again requested in 'groupcompress' order.
97 Groups that are seen as 'underutilized' will be repacked on-the-fly.
98 This means that when the source is fully packed, there is minimal
99 overhead during the fetch, but if the source is poorly packed the result
100 is a fairly well packed repository (not as good as 'bzr pack' but
101 good-enough.) (Robert Collins, John Arbash Meinel, #402652)
102
103* Fixed fetches from a stacked branch on a smart server that were failing
104 with some combinations of remote and local formats. This was causing
105 "unknown object type identifier 60" errors. (Andrew Bennetts, #427736)
106
107* ftp transports were built differently when the kerberos python module was183* ftp transports were built differently when the kerberos python module was
108 present leading to obscure failures related to ASCII/BINARY modes.184 present leading to obscure failures related to ASCII/BINARY modes.
109 (Vincent Ladeuil, #443041)185 (Vincent Ladeuil, #443041)
@@ -111,23 +187,6 @@
111* Network streams now decode adjacent records of the same type into a187* Network streams now decode adjacent records of the same type into a
112 single stream, reducing layering churn. (Robert Collins)188 single stream, reducing layering churn. (Robert Collins)
113189
114* Make sure that we unlock the tree if we fail to create a TreeTransform
115 object when doing a merge, and there is limbo, or pending-deletions
116 directory. (Gary van der Merwe, #427773)
117
118* Occasional IndexError on renamed files have been fixed. Operations that
119 set a full inventory in the working tree will now go via the
120 apply_inventory_delta code path which is simpler and easier to
121 understand than dirstates set_state_from_inventory method. This may
122 have a small performance impact on operations built on _write_inventory,
123 but such operations are already doing full tree scans, so no radical
124 performance change should be observed. (Robert Collins, #403322)
125
126* Prevent some kinds of incomplete data from being committed to a 2a
127 repository, such as revisions without inventories or inventories without
128 chk_bytes root records.
129 (Andrew Bennetts, #423506)
130
131* PreviewTree behaves correctly when get_file_mtime is invoked on an unmodified190* PreviewTree behaves correctly when get_file_mtime is invoked on an unmodified
132 file. (Aaron Bentley, #251532)191 file. (Aaron Bentley, #251532)
133192
@@ -138,9 +197,6 @@
138 domains or user ids embedding '.sig'. Now they can.197 domains or user ids embedding '.sig'. Now they can.
139 (Matthew Fuller, Vincent Ladeuil, #430868)198 (Matthew Fuller, Vincent Ladeuil, #430868)
140199
141* When a file kind becomes unversionable after being added, a sensible
142 error will be shown instead of a traceback. (Robert Collins, #438569)
143
144Improvements200Improvements
145************201************
146202
@@ -153,6 +209,12 @@
153 See also <https://answers.launchpad.net/bzr/+faq/703>.209 See also <https://answers.launchpad.net/bzr/+faq/703>.
154 (Martin Pool, #406113, #430529)210 (Martin Pool, #406113, #430529)
155211
212* Secondary errors that occur during Branch.unlock and Repository.unlock
213 no longer obscure the original error. These methods now use a new
214 decorator, ``only_raises``. This fixes many causes of
215 ``TooManyConcurrentRequests`` and similar errors.
216 (Andrew Bennetts, #429747)
217
156Documentation218Documentation
157*************219*************
158220
@@ -164,21 +226,37 @@
164API Changes226API Changes
165***********227***********
166228
167* ``ProgressTask.note`` is deprecated.
168 (Martin Pool)
169
170* ``bzrlib.user_encoding`` has been removed; use229* ``bzrlib.user_encoding`` has been removed; use
171 ``bzrlib.osutils.get_user_encoding`` instead. (Martin Pool)230 ``bzrlib.osutils.get_user_encoding`` instead. (Martin Pool)
172231
173* ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult``232* ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult``
174 subclasses - the same as python's unittest module. (Robert Collins)233 subclasses - the same as python's unittest module. (Robert Collins)
234
235* ``diff._get_trees_to_diff`` has been renamed to
236 ``diff.get_trees_and_branches_to_diff``. It is now a public API, and it
237 returns the old and new branches. (Gary van der Merwe)
175238
176* ``bzrlib.trace.log_error``, ``error`` and ``info`` have been deprecated.239* ``bzrlib.trace.log_error``, ``error`` and ``info`` have been deprecated.
177 (Martin Pool)240 (Martin Pool)
178241
242* ``MutableTree.has_changes()`` does not require a tree parameter anymore. It
243 now defaults to comparing to the basis tree. It now checks for pending
244 merges too. ``Merger.check_basis`` has been deprecated and replaced by the
245 corresponding has_changes() calls. ``Merge.compare_basis``,
246 ``Merger.file_revisions`` and ``Merger.ensure_revision_trees`` have also
247 been deprecated.
248 (Vincent Ladeuil, #440631)
249
250* ``ProgressTask.note`` is deprecated.
251 (Martin Pool)
252
179Internals253Internals
180*********254*********
181255
256* Added ``-Drelock`` debug flag. It will ``note`` a message every time a
257 repository or branch object is unlocked then relocked the same way.
258 (Andrew Bennetts)
259
182* ``BTreeLeafParser.extract_key`` has been tweaked slightly to reduce260* ``BTreeLeafParser.extract_key`` has been tweaked slightly to reduce
183 mallocs while parsing the index (approx 3=>1 mallocs per key read).261 mallocs while parsing the index (approx 3=>1 mallocs per key read).
184 This results in a 10% speedup while reading an index.262 This results in a 10% speedup while reading an index.
@@ -225,8 +303,16 @@
225 present. (Vincent Ladeuil, #430749)303 present. (Vincent Ladeuil, #430749)
226304
227305
228bzr 2.0.1 (Not Released Yet)306bzr 2.0.1
229############################307#########
308
309:Codename: Stability First
310:2.0.1: 2009-10-14
311
312The first of our new ongoing bugfix-only stable releases has arrived. It
313includes a collection of 12 bugfixes applied to bzr 2.0.0, but does not
314include any of the feature development in the 2.1.0 series.
315
230316
231Bug Fixes317Bug Fixes
232*********318*********
@@ -243,6 +329,18 @@
243 with some combinations of remote and local formats. This was causing329 with some combinations of remote and local formats. This was causing
244 "unknown object type identifier 60" errors. (Andrew Bennetts, #427736)330 "unknown object type identifier 60" errors. (Andrew Bennetts, #427736)
245331
332* Fixed ``ObjectNotLocked`` errors when doing some log and diff operations
333 on branches via a smart server. (Andrew Bennetts, #389413)
334
335* Handle things like ``bzr add foo`` and ``bzr rm foo`` when the tree is
336 at the root of a drive. ``osutils._cicp_canonical_relpath`` always
337 assumed that ``abspath()`` returned a path that did not have a trailing
338 ``/``, but that is not true when working at the root of the filesystem.
339 (John Arbash Meinel, Jason Spashett, #322807)
340
341* Hide deprecation warnings for 'final' releases for python2.6.
342 (John Arbash Meinel, #440062)
343
246* Improve the time for ``bzr log DIR`` for 2a format repositories.344* Improve the time for ``bzr log DIR`` for 2a format repositories.
247 We had been using the same code path as for <2a formats, which required345 We had been using the same code path as for <2a formats, which required
248 iterating over all objects in all revisions.346 iterating over all objects in all revisions.
@@ -260,12 +358,25 @@
260 but such operations are already doing full tree scans, so no radical358 but such operations are already doing full tree scans, so no radical
261 performance change should be observed. (Robert Collins, #403322)359 performance change should be observed. (Robert Collins, #403322)
262360
263* When a file kind becomes unversionable after being added, a sensible
264 error will be shown instead of a traceback. (Robert Collins, #438569)
265
266* Retrieving file text or mtime from a _PreviewTree has good performance when361* Retrieving file text or mtime from a _PreviewTree has good performance when
267 there are many changes. (Aaron Bentley)362 there are many changes. (Aaron Bentley)
268363
364* The CHK index pages now use an unlimited cache size. With a limited
365 cache and a large project, the random access of chk pages could cause us
366 to download the entire cix file many times.
367 (John Arbash Meinel, #402623)
368
369* When a file kind becomes unversionable after being added, a sensible
370 error will be shown instead of a traceback. (Robert Collins, #438569)
371
372Documentation
373*************
374
375* Improved README. (Ian Clatworthy)
376
377* Improved upgrade documentation for Launchpad branches.
378 (Barry Warsaw)
379
269380
270bzr 2.0.0381bzr 2.0.0
271#########382#########
@@ -10886,5 +10997,38 @@
10886* Storage of local versions: init, add, remove, rm, info, log,10997* Storage of local versions: init, add, remove, rm, info, log,
10887 diff, status, etc.10998 diff, status, etc.
1088810999
11000
11001bzr ?.?.? (not released yet)
11002############################
11003
11004:Codename: template
11005:2.0.2: ???
11006
11007Compatibility Breaks
11008********************
11009
11010New Features
11011************
11012
11013Bug Fixes
11014*********
11015
11016Improvements
11017************
11018
11019Documentation
11020*************
11021
11022API Changes
11023***********
11024
11025Internals
11026*********
11027
11028Testing
11029*******
11030
11031
11032
10889..11033..
10890 vim: tw=74 ft=rst ff=unix11034 vim: tw=74 ft=rst ff=unix
1089111035
=== modified file 'README'
--- README 2008-03-16 14:01:20 +0000
+++ README 2009-10-15 18:31:14 +0000
@@ -3,72 +3,44 @@
3=================3=================
44
5Bazaar (``bzr``) is a decentralized revision control system, designed to be5Bazaar (``bzr``) is a decentralized revision control system, designed to be
6easy for developers and end users alike. Bazaar is part of the GNU project to6easy for developers and end users alike. Bazaar is part of the GNU project
7develop a complete free operating system.7to develop a complete free operating system.
88
9To install Bazaar from source, follow the instructions in the INSTALL9To install Bazaar, follow the instructions given at
10file. Otherwise, you may want to check your distribution package manager10http://bazaar-vcs.org/Download. Ready-to-install packages are available
11for ready-to-install packages, or http://bazaar-vcs.org/DistroDownloads.11for most popular operating systems or you can install from source.
1212
13To learn how to use Bazaar, check the documentation in the doc/ directory.13To learn how to use Bazaar, see the official documentation at:
14Once installed, you can also run 'bzr help'. An always up-to-date and more14
15complete set of documents can be found in the Bazaar website, at:15 http://doc.bazaar-vcs.org/en/
1616
17 http://bazaar-vcs.org/Documentation17For additional training materials including screencasts and slides,
18visit our community wiki documentation page at:
19
20 http://bazaar-vcs.org/Documentation/
1821
19Bazaar is written in Python, and is sponsored by Canonical Limited, the22Bazaar is written in Python, and is sponsored by Canonical Limited, the
20founders of Ubuntu and Launchpad. Bazaar is Free Software, and is released23founders of Ubuntu and Launchpad. Bazaar is Free Software, and is released
21under the GNU General Public License.24under the GNU General Public License.
2225
23Bazaar was formerly known as Bazaar-NG. It's the successor to ``baz``, a fork
24of GNU arch, but shares no code. (To upgrade from Baz, use the ``baz-import``
25command in the bzrtools plugin.)
26
27Bazaar highlights26Bazaar highlights
28=================27=================
2928
30* Easy to use and intuitive.29Bazaar directly supports both central version control (like cvs/svn) and
3130distributed version control (like git/hg). Developers can organize their
32 Only five commands are needed to do all basic operations, and all31workspace in whichever way they want on a per project basis including:
33 commands have documentation accessible via 'bzr help command'.32
34 Bazaar's interface is also easy to learn for CVS and Subversion users.33* checkouts (like svn)
3534* feature branches (like hg)
36* Robust and reliable.35* shared working tree (like git).
3736
38 Bazaar is developed under an extensive test suite. Branches can be37It also directly supports and encourages a large number of development best
39 checked and verified for integrity at any time, and revisions can be38practices like refactoring and pre-commit regression testing. Users can
40 signed with PGP/GnuPG.39choose between our command line tool and our cross-platform GUI application.
4140For further details, see our website at http://bazaar-vcs.org/en.
42* Publish branches with HTTP.41
4342Feedback
44 Branches can be hosted on an HTTP server with no need for special43========
45 software on the server side. Branches can be uploaded by bzr itself
46 over SSH (SFTP), or with rsync.
47
48* Adapts to multiple environments.
49
50 Bazaar runs on Linux and Windows, fully supports Unicode filenames,
51 and suits different development models, including centralized.
52
53* Easily extended and customized.
54
55 A rich Python interface is provided for extending and embedding,
56 including a plugin interface. There are already many available plugins,
57 most of them registered at http://bazaar-vcs.org/PluginRegistry.
58
59* Smart merging.
60
61 Changes will never be merged more than once, conflicts will be
62 minimized, and identical changes are dealt with well.
63
64* Vibrant and active community.
65
66 Help with Bazaar is obtained easily, via the mailing list, or the IRC
67 channel.
68
69
70Registration and Feedback
71=========================
7244
73If you encounter any problems with Bazaar, need help understanding it, or would45If you encounter any problems with Bazaar, need help understanding it, or would
74like to offer suggestions or feedback, please get in touch with us:46like to offer suggestions or feedback, please get in touch with us:
@@ -76,7 +48,7 @@
76* Ask a question through our web support interface, at 48* Ask a question through our web support interface, at
77 https://answers.launchpad.net/bzr/49 https://answers.launchpad.net/bzr/
7850
79* Report bugs at https://bugs.edge.launchpad.net/bzr/+filebug51* Report bugs at https://bugs.launchpad.net/bzr/+filebug
8052
81* Write to us at bazaar@lists.canonical.com 53* Write to us at bazaar@lists.canonical.com
82 You can join the list at <https://lists.ubuntu.com/mailman/listinfo/bazaar>.54 You can join the list at <https://lists.ubuntu.com/mailman/listinfo/bazaar>.
@@ -85,12 +57,8 @@
8557
86* Talk to us in irc://irc.ubuntu.com/bzr58* Talk to us in irc://irc.ubuntu.com/bzr
8759
88* And see http://bazaar-vcs.org/BzrSupport for more.60Our mission is to make a version control tool that developers LOVE to use
8961and that casual contributors feel confident with. Please let us know how
90If you would like to help us improve Bazaar by telling us about yourself and62we're going.
91what we could do better, please register and complete the online survey here:63
92http://www.surveymonkey.com/s.aspx?sm=L94RvLswhKdktrxiHWiX3g_3d_3d.
93Registration is completely optional.
94
95Enjoy,
96The Bazaar Team64The Bazaar Team
9765
=== modified file 'bzrlib/__init__.py'
--- bzrlib/__init__.py 2009-09-25 21:24:21 +0000
+++ bzrlib/__init__.py 2009-10-15 18:31:14 +0000
@@ -44,7 +44,7 @@
44# Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a44# Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
45# releaselevel of 'dev' for unreleased under-development code.45# releaselevel of 'dev' for unreleased under-development code.
4646
47version_info = (2, 1, 0, 'dev', 0)47version_info = (2, 1, 0, 'dev', 2)
4848
49# API compatibility version: bzrlib is currently API compatible with 1.15.49# API compatibility version: bzrlib is currently API compatible with 1.15.
50api_minimum_version = (2, 1, 0)50api_minimum_version = (2, 1, 0)
5151
=== modified file 'bzrlib/_bencode_pyx.pyx'
--- bzrlib/_bencode_pyx.pyx 2009-06-05 01:48:32 +0000
+++ bzrlib/_bencode_pyx.pyx 2009-10-15 18:31:14 +0000
@@ -58,6 +58,13 @@
58 void D_UPDATE_TAIL(Decoder, int n)58 void D_UPDATE_TAIL(Decoder, int n)
59 void E_UPDATE_TAIL(Encoder, int n)59 void E_UPDATE_TAIL(Encoder, int n)
6060
61# To maintain compatibility with older versions of pyrex, we have to use the
62# relative import here, rather than 'bzrlib._static_tuple_c'
63from _static_tuple_c cimport StaticTuple, StaticTuple_CheckExact, \
64 import_static_tuple_c
65
66import_static_tuple_c()
67
6168
62cdef class Decoder:69cdef class Decoder:
63 """Bencode decoder"""70 """Bencode decoder"""
@@ -371,7 +378,8 @@
371 self._encode_int(x)378 self._encode_int(x)
372 elif PyLong_CheckExact(x):379 elif PyLong_CheckExact(x):
373 self._encode_long(x)380 self._encode_long(x)
374 elif PyList_CheckExact(x) or PyTuple_CheckExact(x):381 elif (PyList_CheckExact(x) or PyTuple_CheckExact(x)
382 or StaticTuple_CheckExact(x)):
375 self._encode_list(x)383 self._encode_list(x)
376 elif PyDict_CheckExact(x):384 elif PyDict_CheckExact(x):
377 self._encode_dict(x)385 self._encode_dict(x)
378386
=== modified file 'bzrlib/_btree_serializer_pyx.pyx'
--- bzrlib/_btree_serializer_pyx.pyx 2009-10-08 15:44:41 +0000
+++ bzrlib/_btree_serializer_pyx.pyx 2009-10-15 18:31:14 +0000
@@ -173,7 +173,13 @@
173 last - self._start)))173 last - self._start)))
174 raise AssertionError(failure_string)174 raise AssertionError(failure_string)
175 # capture the key string175 # capture the key string
176 key_element = safe_interned_string_from_size(self._start,176 if (self.key_length == 1
177 and (temp_ptr - self._start) == 45
178 and strncmp(self._start, 'sha1:', 5) == 0):
179 key_element = safe_string_from_size(self._start,
180 temp_ptr - self._start)
181 else:
182 key_element = safe_interned_string_from_size(self._start,
177 temp_ptr - self._start)183 temp_ptr - self._start)
178 # advance our pointer184 # advance our pointer
179 self._start = temp_ptr + 1185 self._start = temp_ptr + 1
@@ -189,6 +195,7 @@
189 cdef char *ref_ptr195 cdef char *ref_ptr
190 cdef char *next_start196 cdef char *next_start
191 cdef int loop_counter197 cdef int loop_counter
198 cdef Py_ssize_t str_len
192199
193 self._start = self._cur_str200 self._start = self._cur_str
194 # Find the next newline201 # Find the next newline
@@ -224,8 +231,20 @@
224 # Invalid line231 # Invalid line
225 raise AssertionError("Failed to find the value area")232 raise AssertionError("Failed to find the value area")
226 else:233 else:
227 # capture the value string234 # Because of how conversions were done, we ended up with *lots* of
228 value = safe_string_from_size(temp_ptr + 1, last - temp_ptr - 1)235 # values that are identical. These are all of the 0-length nodes
236 # that are referred to by the TREE_ROOT (and likely some other
237 # directory nodes.) For example, bzr has 25k references to
238 # something like '12607215 328306 0 0', which ends up consuming 1MB
239 # of memory, just for those strings.
240 str_len = last - temp_ptr - 1
241 if (str_len > 4
242 and strncmp(" 0 0", last - 4, 4) == 0):
243 # This drops peak mem for bzr.dev from 87.4MB => 86.2MB
244 # For Launchpad 236MB => 232MB
245 value = safe_interned_string_from_size(temp_ptr + 1, str_len)
246 else:
247 value = safe_string_from_size(temp_ptr + 1, str_len)
229 # shrink the references end point248 # shrink the references end point
230 last = temp_ptr249 last = temp_ptr
231250
@@ -332,8 +351,17 @@
332 elif node_len < 3:351 elif node_len < 3:
333 raise ValueError('Without ref_lists, we need at least 3 entries not: %s'352 raise ValueError('Without ref_lists, we need at least 3 entries not: %s'
334 % len(node))353 % len(node))
335 # I don't expect that we can do faster than string.join()354 # TODO: We can probably do better than string.join(), namely
336 string_key = '\0'.join(node[1])# <object>PyTuple_GET_ITEM_ptr_object(node, 1))355 # when key has only 1 item, we can just grab that string
356 # And when there are 2 items, we could do a single malloc + len() + 1
357 # also, doing .join() requires a PyObject_GetAttrString call, which
358 # we could also avoid.
359 # TODO: Note that pyrex 0.9.6 generates fairly crummy code here, using the
360 # python object interface, versus 0.9.8+ which uses a helper that
361 # checks if this supports the sequence interface.
362 # We *could* do more work on our own, and grab the actual items
363 # lists. For now, just ask people to use a better compiler. :)
364 string_key = '\0'.join(node[1])
337365
338 # TODO: instead of using string joins, precompute the final string length,366 # TODO: instead of using string joins, precompute the final string length,
339 # and then malloc a single string and copy everything in.367 # and then malloc a single string and copy everything in.
@@ -350,7 +378,7 @@
350 refs_len = 0378 refs_len = 0
351 if have_reference_lists:379 if have_reference_lists:
352 # Figure out how many bytes it will take to store the references380 # Figure out how many bytes it will take to store the references
353 ref_lists = node[3]# <object>PyTuple_GET_ITEM_ptr_object(node, 3)381 ref_lists = node[3]
354 next_len = len(ref_lists) # TODO: use a Py function382 next_len = len(ref_lists) # TODO: use a Py function
355 if next_len > 0:383 if next_len > 0:
356 # If there are no nodes, we don't need to do any work384 # If there are no nodes, we don't need to do any work
@@ -374,8 +402,7 @@
374 # We will need (len - 1) '\x00' characters to402 # We will need (len - 1) '\x00' characters to
375 # separate the reference key403 # separate the reference key
376 refs_len = refs_len + (next_len - 1)404 refs_len = refs_len + (next_len - 1)
377 for i from 0 <= i < next_len:405 for ref_bit in reference:
378 ref_bit = reference[i]
379 if not PyString_CheckExact(ref_bit):406 if not PyString_CheckExact(ref_bit):
380 raise TypeError('We expect reference bits'407 raise TypeError('We expect reference bits'
381 ' to be strings not: %s'408 ' to be strings not: %s'
@@ -384,7 +411,7 @@
384411
385 # So we have the (key NULL refs NULL value LF)412 # So we have the (key NULL refs NULL value LF)
386 key_len = PyString_Size(string_key)413 key_len = PyString_Size(string_key)
387 val = node[2] # PyTuple_GET_ITEM_ptr_object(node, 2)414 val = node[2]
388 if not PyString_CheckExact(val):415 if not PyString_CheckExact(val):
389 raise TypeError('Expected a plain str for value not: %s'416 raise TypeError('Expected a plain str for value not: %s'
390 % type(val))417 % type(val))
@@ -416,7 +443,7 @@
416 if i != 0:443 if i != 0:
417 out[0] = c'\x00'444 out[0] = c'\x00'
418 out = out + 1445 out = out + 1
419 ref_bit = reference[i] #PyTuple_GET_ITEM_ptr_object(reference, i)446 ref_bit = reference[i]
420 ref_bit_len = PyString_GET_SIZE(ref_bit)447 ref_bit_len = PyString_GET_SIZE(ref_bit)
421 memcpy(out, PyString_AS_STRING(ref_bit), ref_bit_len)448 memcpy(out, PyString_AS_STRING(ref_bit), ref_bit_len)
422 out = out + ref_bit_len449 out = out + ref_bit_len
423450
=== modified file 'bzrlib/_export_c_api.h'
--- bzrlib/_export_c_api.h 2009-10-02 02:21:12 +0000
+++ bzrlib/_export_c_api.h 2009-10-15 18:31:14 +0000
@@ -45,14 +45,17 @@
45 PyObject *d = NULL;45 PyObject *d = NULL;
46 PyObject *c_obj = NULL;46 PyObject *c_obj = NULL;
4747
48 d = PyObject_GetAttrString(module, _C_API_NAME);48 /* (char *) is because python2.4 declares this api as 'char *' rather than
49 * const char* which it really is.
50 */
51 d = PyObject_GetAttrString(module, (char *)_C_API_NAME);
49 if (!d) {52 if (!d) {
50 PyErr_Clear();53 PyErr_Clear();
51 d = PyDict_New();54 d = PyDict_New();
52 if (!d)55 if (!d)
53 goto bad;56 goto bad;
54 Py_INCREF(d);57 Py_INCREF(d);
55 if (PyModule_AddObject(module, _C_API_NAME, d) < 0)58 if (PyModule_AddObject(module, (char *)_C_API_NAME, d) < 0)
56 goto bad;59 goto bad;
57 }60 }
58 c_obj = PyCObject_FromVoidPtrAndDesc(func, signature, 0);61 c_obj = PyCObject_FromVoidPtrAndDesc(func, signature, 0);
5962
=== modified file 'bzrlib/_import_c_api.h'
--- bzrlib/_import_c_api.h 2009-10-01 21:34:36 +0000
+++ bzrlib/_import_c_api.h 2009-10-15 18:31:14 +0000
@@ -47,7 +47,10 @@
47 PyObject *c_obj = NULL;47 PyObject *c_obj = NULL;
48 const char *desc = NULL;48 const char *desc = NULL;
4949
50 d = PyObject_GetAttrString(module, _C_API_NAME);50 /* (char *) because Python2.4 defines this as (char *) rather than
51 * (const char *)
52 */
53 d = PyObject_GetAttrString(module, (char *)_C_API_NAME);
51 if (!d) {54 if (!d) {
52 // PyObject_GetAttrString sets an appropriate exception55 // PyObject_GetAttrString sets an appropriate exception
53 goto bad;56 goto bad;
@@ -94,7 +97,7 @@
94{97{
95 PyObject *type = NULL;98 PyObject *type = NULL;
9699
97 type = PyObject_GetAttrString(module, class_name);100 type = PyObject_GetAttrString(module, (char *)class_name);
98 if (!type) {101 if (!type) {
99 goto bad;102 goto bad;
100 }103 }
@@ -149,7 +152,7 @@
149 struct type_description *cur_type;152 struct type_description *cur_type;
150 int ret_code;153 int ret_code;
151 154
152 module = PyImport_ImportModule(module_name);155 module = PyImport_ImportModule((char *)module_name);
153 if (!module)156 if (!module)
154 goto bad;157 goto bad;
155 if (functions != NULL) {158 if (functions != NULL) {
156159
=== modified file 'bzrlib/_simple_set_pyx.pxd'
--- bzrlib/_simple_set_pyx.pxd 2009-10-08 04:40:16 +0000
+++ bzrlib/_simple_set_pyx.pxd 2009-10-15 18:31:13 +0000
@@ -40,11 +40,36 @@
40 (like a dict), but we also don't implement the complete list of 'set'40 (like a dict), but we also don't implement the complete list of 'set'
41 operations (difference, intersection, etc).41 operations (difference, intersection, etc).
42 """42 """
43 # Data structure definition:
44 # This is a basic hash table using open addressing.
45 # http://en.wikipedia.org/wiki/Open_addressing
46 # Basically that means we keep an array of pointers to Python objects
47 # (called a table). Each location in the array is called a 'slot'.
48 #
49 # An empty slot holds a NULL pointer, a slot where there was an item
50 # which was then deleted will hold a pointer to _dummy, and a filled slot
51 # points at the actual object which fills that slot.
52 #
53 # The table is always a power of two, and the default location where an
54 # object is inserted is at hash(object) & (table_size - 1)
55 #
56 # If there is a collision, then we search for another location. The
57 # specific algorithm is in _lookup. We search until we:
58 # find the object
59 # find an equivalent object (by tp_richcompare(obj1, obj2, Py_EQ))
60 # find a NULL slot
61 #
62 # When an object is deleted, we set its slot to _dummy. this way we don't
63 # have to track whether there was a collision, and find the corresponding
64 # keys. (The collision resolution algorithm makes that nearly impossible
65 # anyway, because it depends on the upper bits of the hash.)
66 # The main effect of this, is that if we find _dummy, then we can insert
67 # an object there, but we have to keep searching until we find NULL to
68 # know that the object is not present elsewhere.
4369
44 cdef Py_ssize_t _used # active70 cdef Py_ssize_t _used # active
45 cdef Py_ssize_t _fill # active + dummy71 cdef Py_ssize_t _fill # active + dummy
46 cdef Py_ssize_t _mask # Table contains (mask+1) slots, a power72 cdef Py_ssize_t _mask # Table contains (mask+1) slots, a power of 2
47 # of 2
48 cdef PyObject **_table # Pyrex/Cython doesn't support arrays to 'object'73 cdef PyObject **_table # Pyrex/Cython doesn't support arrays to 'object'
49 # so we manage it manually74 # so we manage it manually
5075
@@ -57,7 +82,6 @@
5782
58# TODO: might want to export the C api here, though it is all available from83# TODO: might want to export the C api here, though it is all available from
59# the class object...84# the class object...
60cdef api object SimpleSet_Add(object self, object key)
61cdef api SimpleSet SimpleSet_New()85cdef api SimpleSet SimpleSet_New()
62cdef api object SimpleSet_Add(object self, object key)86cdef api object SimpleSet_Add(object self, object key)
63cdef api int SimpleSet_Contains(object self, object key) except -187cdef api int SimpleSet_Contains(object self, object key) except -1
6488
=== modified file 'bzrlib/_simple_set_pyx.pyx'
--- bzrlib/_simple_set_pyx.pyx 2009-10-08 04:40:16 +0000
+++ bzrlib/_simple_set_pyx.pyx 2009-10-15 18:31:13 +0000
@@ -16,15 +16,16 @@
1616
17"""Definition of a class that is similar to Set with some small changes."""17"""Definition of a class that is similar to Set with some small changes."""
1818
19cdef extern from "python-compat.h":
20 pass
21
19cdef extern from "Python.h":22cdef extern from "Python.h":
20 ctypedef unsigned long size_t23 ctypedef unsigned long size_t
21 ctypedef long (*hashfunc)(PyObject*)24 ctypedef long (*hashfunc)(PyObject*) except -1
22 ctypedef PyObject *(*richcmpfunc)(PyObject *, PyObject *, int)25 ctypedef object (*richcmpfunc)(PyObject *, PyObject *, int)
23 ctypedef int (*visitproc)(PyObject *, void *)26 ctypedef int (*visitproc)(PyObject *, void *)
24 ctypedef int (*traverseproc)(PyObject *, visitproc, void *)27 ctypedef int (*traverseproc)(PyObject *, visitproc, void *)
25 int Py_EQ28 int Py_EQ
26 PyObject *Py_True
27 PyObject *Py_NotImplemented
28 void Py_INCREF(PyObject *)29 void Py_INCREF(PyObject *)
29 void Py_DECREF(PyObject *)30 void Py_DECREF(PyObject *)
30 ctypedef struct PyTypeObject:31 ctypedef struct PyTypeObject:
@@ -33,38 +34,54 @@
33 traverseproc tp_traverse34 traverseproc tp_traverse
3435
35 PyTypeObject *Py_TYPE(PyObject *)36 PyTypeObject *Py_TYPE(PyObject *)
37 # Note: *Don't* use hash(), Pyrex 0.9.8.5 thinks it returns an 'int', and
38 # thus silently truncates to 32-bits on 64-bit machines.
39 long PyObject_Hash(PyObject *) except -1
36 40
37 void *PyMem_Malloc(size_t nbytes)41 void *PyMem_Malloc(size_t nbytes)
38 void PyMem_Free(void *)42 void PyMem_Free(void *)
39 void memset(void *, int, size_t)43 void memset(void *, int, size_t)
4044
4145
46# Dummy is an object used to mark nodes that have been deleted. Since
47# collisions require us to move a node to an alternative location, if we just
48# set an entry to NULL on delete, we won't find any relocated nodes.
49# We have to use _dummy_obj because we need to keep a refcount to it, but we
50# also use _dummy as a pointer, because it avoids having to put <PyObject*> all
51# over the code base.
42cdef object _dummy_obj52cdef object _dummy_obj
43cdef PyObject *_dummy53cdef PyObject *_dummy
44_dummy_obj = object()54_dummy_obj = object()
45_dummy = <PyObject *>_dummy_obj55_dummy = <PyObject *>_dummy_obj
4656
4757
48cdef int _is_equal(PyObject *this, long this_hash, PyObject *other):58cdef object _NotImplemented
59_NotImplemented = NotImplemented
60
61
62cdef int _is_equal(PyObject *this, long this_hash, PyObject *other) except -1:
49 cdef long other_hash63 cdef long other_hash
50 cdef PyObject *res
5164
52 if this == other:65 if this == other:
53 return 166 return 1
54 other_hash = Py_TYPE(other).tp_hash(other)67 other_hash = PyObject_Hash(other)
55 if other_hash != this_hash:68 if other_hash != this_hash:
56 return 069 return 0
70
71 # This implements a subset of the PyObject_RichCompareBool functionality.
72 # Namely it:
73 # 1) Doesn't try to do anything with old-style classes
74 # 2) Assumes that both objects have a tp_richcompare implementation, and
75 # that if that is not enough to compare equal, then they are not
76 # equal. (It doesn't try to cast them both to some intermediate form
77 # that would compare equal.)
57 res = Py_TYPE(this).tp_richcompare(this, other, Py_EQ)78 res = Py_TYPE(this).tp_richcompare(this, other, Py_EQ)
58 if res == Py_True:79 if res is _NotImplemented:
59 Py_DECREF(res)
60 return 1
61 if res == Py_NotImplemented:
62 Py_DECREF(res)
63 res = Py_TYPE(other).tp_richcompare(other, this, Py_EQ)80 res = Py_TYPE(other).tp_richcompare(other, this, Py_EQ)
64 if res == Py_True:81 if res is _NotImplemented:
65 Py_DECREF(res)82 return 0
83 if res:
66 return 184 return 1
67 Py_DECREF(res)
68 return 085 return 0
6986
7087
@@ -84,7 +101,6 @@
84 """101 """
85 # Attributes are defined in the .pxd file102 # Attributes are defined in the .pxd file
86 DEF DEFAULT_SIZE=1024103 DEF DEFAULT_SIZE=1024
87 DEF PERTURB_SHIFT=5
88104
89 def __init__(self):105 def __init__(self):
90 cdef Py_ssize_t size, n_bytes106 cdef Py_ssize_t size, n_bytes
@@ -170,27 +186,25 @@
170 as it makes a lot of assuptions about keys not already being present,186 as it makes a lot of assuptions about keys not already being present,
171 and there being no dummy entries.187 and there being no dummy entries.
172 """188 """
173 cdef size_t i, perturb, mask189 cdef size_t i, n_lookup
174 cdef long the_hash190 cdef long the_hash
175 cdef PyObject **table, **entry191 cdef PyObject **table, **slot
192 cdef Py_ssize_t mask
176193
177 mask = self._mask194 mask = self._mask
178 table = self._table195 table = self._table
179196
180 the_hash = Py_TYPE(key).tp_hash(key)197 the_hash = PyObject_Hash(key)
181 i = the_hash & mask198 i = the_hash
182 entry = &table[i]199 for n_lookup from 0 <= n_lookup <= <size_t>mask: # Don't loop forever
183 perturb = the_hash200 slot = &table[i & mask]
184 # Because we know that we made all items unique before, we can just201 if slot[0] == NULL:
185 # iterate as long as the target location is not empty, we don't have to202 slot[0] = key
186 # do any comparison, etc.203 self._fill = self._fill + 1
187 while entry[0] != NULL:204 self._used = self._used + 1
188 i = (i << 2) + i + perturb + 1205 return 1
189 entry = &table[i & mask]206 i = i + 1 + n_lookup
190 perturb >>= PERTURB_SHIFT207 raise RuntimeError('ran out of slots.')
191 entry[0] = key
192 self._fill += 1
193 self._used += 1
194208
195 def _py_resize(self, min_used):209 def _py_resize(self, min_used):
196 """Do not use this directly, it is only exposed for testing."""210 """Do not use this directly, it is only exposed for testing."""
@@ -206,7 +220,7 @@
206 :return: The new size of the internal table220 :return: The new size of the internal table
207 """221 """
208 cdef Py_ssize_t new_size, n_bytes, remaining222 cdef Py_ssize_t new_size, n_bytes, remaining
209 cdef PyObject **new_table, **old_table, **entry223 cdef PyObject **new_table, **old_table, **slot
210224
211 new_size = DEFAULT_SIZE225 new_size = DEFAULT_SIZE
212 while new_size <= min_used and new_size > 0:226 while new_size <= min_used and new_size > 0:
@@ -236,16 +250,16 @@
236250
237 # Moving everything to the other table is refcount neutral, so we don't251 # Moving everything to the other table is refcount neutral, so we don't
238 # worry about it.252 # worry about it.
239 entry = old_table253 slot = old_table
240 while remaining > 0:254 while remaining > 0:
241 if entry[0] == NULL: # unused slot255 if slot[0] == NULL: # unused slot
242 pass 256 pass
243 elif entry[0] == _dummy: # dummy slot257 elif slot[0] == _dummy: # dummy slot
244 remaining -= 1258 remaining = remaining - 1
245 else: # active slot259 else: # active slot
246 remaining -= 1260 remaining = remaining - 1
247 self._insert_clean(entry[0])261 self._insert_clean(slot[0])
248 entry += 1262 slot = slot + 1
249 PyMem_Free(old_table)263 PyMem_Free(old_table)
250 return new_size264 return new_size
251265
@@ -262,20 +276,24 @@
262 cdef PyObject **slot, *py_key276 cdef PyObject **slot, *py_key
263 cdef int added277 cdef int added
264278
279 py_key = <PyObject *>key
280 if (Py_TYPE(py_key).tp_richcompare == NULL
281 or Py_TYPE(py_key).tp_hash == NULL):
282 raise TypeError('Types added to SimpleSet must implement'
283 ' both tp_richcompare and tp_hash')
265 added = 0284 added = 0
266 # We need at least one empty slot285 # We need at least one empty slot
267 assert self._used < self._mask286 assert self._used < self._mask
268 slot = _lookup(self, key)287 slot = _lookup(self, key)
269 py_key = <PyObject *>key
270 if (slot[0] == NULL):288 if (slot[0] == NULL):
271 Py_INCREF(py_key)289 Py_INCREF(py_key)
272 self._fill += 1290 self._fill = self._fill + 1
273 self._used += 1291 self._used = self._used + 1
274 slot[0] = py_key292 slot[0] = py_key
275 added = 1293 added = 1
276 elif (slot[0] == _dummy):294 elif (slot[0] == _dummy):
277 Py_INCREF(py_key)295 Py_INCREF(py_key)
278 self._used += 1296 self._used = self._used + 1
279 slot[0] = py_key297 slot[0] = py_key
280 added = 1298 added = 1
281 # No else: clause. If _lookup returns a pointer to299 # No else: clause. If _lookup returns a pointer to
@@ -291,11 +309,13 @@
291 return retval309 return retval
292310
293 def discard(self, key):311 def discard(self, key):
294 """Remove key from the dict, whether it exists or not.312 """Remove key from the set, whether it exists or not.
295313
296 :return: 0 if the item did not exist, 1 if it did314 :return: False if the item did not exist, True if it did
297 """315 """
298 return self._discard(key)316 if self._discard(key):
317 return True
318 return False
299319
300 cdef int _discard(self, key) except -1:320 cdef int _discard(self, key) except -1:
301 cdef PyObject **slot, *py_key321 cdef PyObject **slot, *py_key
@@ -303,7 +323,7 @@
303 slot = _lookup(self, key)323 slot = _lookup(self, key)
304 if slot[0] == NULL or slot[0] == _dummy:324 if slot[0] == NULL or slot[0] == _dummy:
305 return 0325 return 0
306 self._used -= 1326 self._used = self._used - 1
307 Py_DECREF(slot[0])327 Py_DECREF(slot[0])
308 slot[0] = _dummy328 slot[0] = _dummy
309 # PySet uses the heuristic: If more than 1/5 are dummies, then resize329 # PySet uses the heuristic: If more than 1/5 are dummies, then resize
@@ -320,16 +340,6 @@
320 self._resize(self._used * 2)340 self._resize(self._used * 2)
321 return 1341 return 1
322342
323 def __delitem__(self, key):
324 """Remove the given item from the dict.
325
326 Raise a KeyError if the key was not present.
327 """
328 cdef int exists
329 exists = self._discard(key)
330 if not exists:
331 raise KeyError('Key %s not present' % (key,))
332
333 def __iter__(self):343 def __iter__(self):
334 return _SimpleSet_iterator(self)344 return _SimpleSet_iterator(self)
335345
@@ -353,7 +363,7 @@
353363
354 def __next__(self):364 def __next__(self):
355 cdef Py_ssize_t mask, i365 cdef Py_ssize_t mask, i
356 cdef PyObject **table366 cdef PyObject *key
357367
358 if self.set is None:368 if self.set is None:
359 raise StopIteration369 raise StopIteration
@@ -361,21 +371,13 @@
361 # Force this exception to continue to be raised371 # Force this exception to continue to be raised
362 self._used = -1372 self._used = -1
363 raise RuntimeError("Set size changed during iteration")373 raise RuntimeError("Set size changed during iteration")
364 i = self.pos374 if not SimpleSet_Next(self.set, &self.pos, &key):
365 mask = self.set._mask
366 table = self.set._table
367 assert i >= 0
368 while i <= mask and (table[i] == NULL or table[i] == _dummy):
369 i += 1
370 self.pos = i + 1
371 if i > mask:
372 # we walked to the end
373 self.set = None375 self.set = None
374 raise StopIteration376 raise StopIteration
375 # We must have found one377 # we found something
376 key = <object>(table[i])378 the_key = <object>key # INCREF
377 self.len -= 1379 self.len = self.len - 1
378 return key380 return the_key
379381
380 def __length_hint__(self):382 def __length_hint__(self):
381 if self.set is not None and self._used == self.set._used:383 if self.set is not None and self._used == self.set._used:
@@ -411,51 +413,68 @@
411413
412 :param key: An object we are looking up414 :param key: An object we are looking up
413 :param hash: The hash for key415 :param hash: The hash for key
414 :return: The location in self.table where key should be put416 :return: The location in self.table where key should be put.
415 should never be NULL, but may reference a NULL (PyObject*)417 location == NULL is an exception, but (*location) == NULL just
418 indicates the slot is empty and can be used.
416 """419 """
417 # This is the heart of most functions, which is why it is pulled out as an420 # This uses Quadratic Probing:
418 # cdef inline function.421 # http://en.wikipedia.org/wiki/Quadratic_probing
419 cdef size_t i, perturb422 # with c1 = c2 = 1/2
423 # This leads to probe locations at:
424 # h0 = hash(k1)
425 # h1 = h0 + 1
426 # h2 = h0 + 3 = h1 + 1 + 1
427 # h3 = h0 + 6 = h2 + 1 + 2
428 # h4 = h0 + 10 = h2 + 1 + 3
429 # Note that all of these are '& mask', but that is computed *after* the
430 # offset.
431 # This differs from the algorithm used by Set and Dict. Which, effectively,
432 # use double-hashing, and a step size that starts large, but dwindles to
433 # stepping one-by-one.
434 # This gives more 'locality' in that if you have a collision at offset X,
435 # the first fallback is X+1, which is fast to check. However, that means
436 # that an object w/ hash X+1 will also check there, and then X+2 next.
437 # However, for objects with differing hashes, their chains are different.
438 # The former checks X, X+1, X+3, ... the latter checks X+1, X+2, X+4, ...
439 # So different hashes diverge quickly.
440 # A bigger problem is that we *only* ever use the lowest bits of the hash
441 # So all integers (x + SIZE*N) will resolve into the same bucket, and all
442 # use the same collision resolution. We may want to try to find a way to
443 # incorporate the upper bits of the hash with quadratic probing. (For
444 # example, X, X+1, X+3+some_upper_bits, X+6+more_upper_bits, etc.)
445 cdef size_t i, n_lookup
420 cdef Py_ssize_t mask446 cdef Py_ssize_t mask
421 cdef long key_hash447 cdef long key_hash
422 cdef long this_hash448 cdef PyObject **table, **slot, *cur, **free_slot, *py_key
423 cdef PyObject **table, **cur, **free_slot, *py_key
424449
425 key_hash = hash(key)450 py_key = <PyObject *>key
451 # Note: avoid using hash(obj) because of a bug w/ pyrex 0.9.8.5 and 64-bit
452 # (it treats hash() as returning an 'int' rather than a 'long')
453 key_hash = PyObject_Hash(py_key)
454 i = <size_t>key_hash
426 mask = self._mask455 mask = self._mask
427 table = self._table456 table = self._table
428 i = key_hash & mask457 free_slot = NULL
429 cur = &table[i]458 for n_lookup from 0 <= n_lookup <= <size_t>mask: # Don't loop forever
430 py_key = <PyObject *>key459 slot = &table[i & mask]
431 if cur[0] == NULL:460 cur = slot[0]
432 # Found a blank spot, or found the exact key461 if cur == NULL:
433 return cur462 # Found a blank spot
434 if cur[0] == py_key:463 if free_slot != NULL:
435 return cur464 # Did we find an earlier _dummy entry?
436 if cur[0] == _dummy:
437 free_slot = cur
438 else:
439 if _is_equal(py_key, key_hash, cur[0]):
440 # Both py_key and cur[0] belong in this slot, return it
441 return cur
442 free_slot = NULL
443 # size_t is unsigned, hash is signed...
444 perturb = key_hash
445 while True:
446 i = (i << 2) + i + perturb + 1
447 cur = &table[i & mask]
448 if cur[0] == NULL: # Found an empty spot
449 if free_slot: # Did we find a _dummy earlier?
450 return free_slot465 return free_slot
451 else:466 else:
452 return cur467 return slot
453 if (cur[0] == py_key # exact match468 if cur == py_key:
454 or _is_equal(py_key, key_hash, cur[0])): # Equivalent match469 # Found an exact pointer to the key
455 return cur470 return slot
456 if (cur[0] == _dummy and free_slot == NULL):471 if cur == _dummy:
457 free_slot = cur472 if free_slot == NULL:
458 perturb >>= PERTURB_SHIFT473 free_slot = slot
474 elif _is_equal(py_key, key_hash, cur):
475 # Both py_key and cur belong in this slot, return it
476 return slot
477 i = i + 1 + n_lookup
459 raise AssertionError('should never get here')478 raise AssertionError('should never get here')
460479
461480
@@ -521,7 +540,6 @@
521 return _check_self(self)._used540 return _check_self(self)._used
522541
523542
524# TODO: this should probably have direct tests, since it isn't used by __iter__
525cdef api int SimpleSet_Next(object self, Py_ssize_t *pos, PyObject **key):543cdef api int SimpleSet_Next(object self, Py_ssize_t *pos, PyObject **key):
526 """Walk over items in a SimpleSet.544 """Walk over items in a SimpleSet.
527545
@@ -540,7 +558,7 @@
540 mask = true_self._mask558 mask = true_self._mask
541 table= true_self._table559 table= true_self._table
542 while (i <= mask and (table[i] == NULL or table[i] == _dummy)):560 while (i <= mask and (table[i] == NULL or table[i] == _dummy)):
543 i += 1561 i = i + 1
544 pos[0] = i + 1562 pos[0] = i + 1
545 if (i > mask):563 if (i > mask):
546 return 0 # All done564 return 0 # All done
@@ -565,8 +583,7 @@
565 ret = visit(next_key, arg)583 ret = visit(next_key, arg)
566 if ret:584 if ret:
567 return ret585 return ret
568586 return 0
569 return 0;
570587
571# It is a little bit ugly to do this, but it works, and means that Meliae can588# It is a little bit ugly to do this, but it works, and means that Meliae can
572# dump the total memory consumed by all child objects.589# dump the total memory consumed by all child objects.
573590
=== modified file 'bzrlib/_static_tuple_c.c'
--- bzrlib/_static_tuple_c.c 2009-10-07 19:31:39 +0000
+++ bzrlib/_static_tuple_c.c 2009-10-15 18:31:14 +0000
@@ -20,12 +20,20 @@
20 */20 */
21#define STATIC_TUPLE_MODULE21#define STATIC_TUPLE_MODULE
2222
23#include <Python.h>
24#include "python-compat.h"
25
23#include "_static_tuple_c.h"26#include "_static_tuple_c.h"
24#include "_export_c_api.h"27#include "_export_c_api.h"
28
29/* Pyrex 0.9.6.4 exports _simple_set_pyx_api as
30 * import__simple_set_pyx(), while Pyrex 0.9.8.5 and Cython 0.11.3 export them
31 * as import_bzrlib___simple_set_pyx(). As such, we just #define one to be
32 * equivalent to the other in our internal code.
33 */
34#define import__simple_set_pyx import_bzrlib___simple_set_pyx
25#include "_simple_set_pyx_api.h"35#include "_simple_set_pyx_api.h"
2636
27#include "python-compat.h"
28
29#if defined(__GNUC__)37#if defined(__GNUC__)
30# define inline __inline__38# define inline __inline__
31#elif defined(_MSC_VER)39#elif defined(_MSC_VER)
@@ -74,7 +82,7 @@
74static StaticTuple *82static StaticTuple *
75StaticTuple_Intern(StaticTuple *self)83StaticTuple_Intern(StaticTuple *self)
76{84{
77 PyObject *unique_key = NULL;85 PyObject *canonical_tuple = NULL;
7886
79 if (_interned_tuples == NULL || _StaticTuple_is_interned(self)) {87 if (_interned_tuples == NULL || _StaticTuple_is_interned(self)) {
80 Py_INCREF(self);88 Py_INCREF(self);
@@ -83,20 +91,18 @@
83 /* SimpleSet_Add returns whatever object is present at self91 /* SimpleSet_Add returns whatever object is present at self
84 * or the new object if it needs to add it.92 * or the new object if it needs to add it.
85 */93 */
86 unique_key = SimpleSet_Add(_interned_tuples, (PyObject *)self);94 canonical_tuple = SimpleSet_Add(_interned_tuples, (PyObject *)self);
87 if (!unique_key) {95 if (!canonical_tuple) {
88 // Suppress any error and just return the object96 // Some sort of exception, propagate it.
89 PyErr_Clear();97 return NULL;
90 Py_INCREF(self);
91 return self;
92 }98 }
93 if (unique_key != (PyObject *)self) {99 if (canonical_tuple != (PyObject *)self) {
94 // There was already a key at that location100 // There was already a tuple with that value
95 return (StaticTuple *)unique_key;101 return (StaticTuple *)canonical_tuple;
96 }102 }
97 self->flags |= STATIC_TUPLE_INTERNED_FLAG;103 self->flags |= STATIC_TUPLE_INTERNED_FLAG;
98 // The two references in the dict do not count, so that the StaticTuple object104 // The two references in the dict do not count, so that the StaticTuple
99 // does not become immortal just because it was interned.105 // object does not become immortal just because it was interned.
100 Py_REFCNT(self) -= 1;106 Py_REFCNT(self) -= 1;
101 return self;107 return self;
102}108}
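For reference, the Python-visible effect of the interning path above, using the StaticTuple exposed by the new bzrlib.static_tuple module (the item values are made up):

    from bzrlib.static_tuple import StaticTuple

    a = StaticTuple('file-id', 'rev-id')
    b = StaticTuple('file-id', 'rev-id')
    assert a == b and a is not b   # equal values, two separate objects
    a = a.intern()
    b = b.intern()
    assert a is b                  # both now point at the canonical copy

With this change an error from SimpleSet_Add is propagated to the caller instead of being silently swallowed.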
@@ -169,7 +175,7 @@
169175
170176
171static PyObject *177static PyObject *
172StaticTuple_new(PyTypeObject *type, PyObject *args, PyObject *kwds)178StaticTuple_new_constructor(PyTypeObject *type, PyObject *args, PyObject *kwds)
173{179{
174 StaticTuple *self;180 StaticTuple *self;
175 PyObject *obj = NULL;181 PyObject *obj = NULL;
@@ -187,7 +193,7 @@
187 if (len < 0 || len > 255) {193 if (len < 0 || len > 255) {
188 /* Too big or too small */194 /* Too big or too small */
189 PyErr_SetString(PyExc_ValueError, "StaticTuple.__init__(...)"195 PyErr_SetString(PyExc_ValueError, "StaticTuple.__init__(...)"
190 " takes from 0 to 255 key bits");196 " takes from 0 to 255 items");
191 return NULL;197 return NULL;
192 }198 }
193 self = (StaticTuple *)StaticTuple_New(len);199 self = (StaticTuple *)StaticTuple_New(len);
@@ -199,8 +205,7 @@
199 if (!PyString_CheckExact(obj)) {205 if (!PyString_CheckExact(obj)) {
200 if (!StaticTuple_CheckExact(obj)) {206 if (!StaticTuple_CheckExact(obj)) {
201 PyErr_SetString(PyExc_TypeError, "StaticTuple.__init__(...)"207 PyErr_SetString(PyExc_TypeError, "StaticTuple.__init__(...)"
202 " requires that all key bits are strings or StaticTuple.");208 " requires that all items are strings or StaticTuple.");
203 /* TODO: What is the proper way to dealloc ? */
204 type->tp_dealloc((PyObject *)self);209 type->tp_dealloc((PyObject *)self);
205 return NULL;210 return NULL;
206 }211 }
@@ -236,21 +241,21 @@
236 /* adapted from tuplehash(), is the specific hash value considered241 /* adapted from tuplehash(), is the specific hash value considered
237 * 'stable'?242 * 'stable'?
238 */243 */
239 register long x, y;244 register long x, y;
240 Py_ssize_t len = self->size;245 Py_ssize_t len = self->size;
241 PyObject **p;246 PyObject **p;
242 long mult = 1000003L;247 long mult = 1000003L;
243248
244#if STATIC_TUPLE_HAS_HASH249#if STATIC_TUPLE_HAS_HASH
245 if (self->hash != -1) {250 if (self->hash != -1) {
246 return self->hash;251 return self->hash;
247 }252 }
248#endif253#endif
249 x = 0x345678L;254 x = 0x345678L;
250 p = self->items;255 p = self->items;
251 // TODO: We could set specific flags if we know that, for example, all the256 // TODO: We could set specific flags if we know that, for example, all the
252 // keys are strings. I haven't seen a real-world benefit to that yet,257 // items are strings. I haven't seen a real-world benefit to that
253 // though.258 // yet, though.
254 while (--len >= 0) {259 while (--len >= 0) {
255 y = PyObject_Hash(*p++);260 y = PyObject_Hash(*p++);
256 if (y == -1) /* failure */261 if (y == -1) /* failure */
@@ -259,18 +264,13 @@
259 /* the cast might truncate len; that doesn't change hash stability */264 /* the cast might truncate len; that doesn't change hash stability */
260 mult += (long)(82520L + len + len);265 mult += (long)(82520L + len + len);
261 }266 }
262 x += 97531L;267 x += 97531L;
263 if (x == -1)268 if (x == -1)
264 x = -2;269 x = -2;
265#if STATIC_TUPLE_HAS_HASH270#if STATIC_TUPLE_HAS_HASH
266 if (self->hash != -1) {
267 if (self->hash != x) {
268 fprintf(stderr, "hash changed: %d => %d\n", self->hash, x);
269 }
270 }
271 self->hash = x;271 self->hash = x;
272#endif272#endif
273 return x;273 return x;
274}274}
275275
276static PyObject *276static PyObject *
@@ -281,25 +281,39 @@
281 281
282 vt = StaticTuple_as_tuple((StaticTuple *)v);282 vt = StaticTuple_as_tuple((StaticTuple *)v);
283 if (vt == NULL) {283 if (vt == NULL) {
284 goto Done;284 goto done;
285 }285 }
286 if (!PyTuple_Check(wt)) {286 if (!PyTuple_Check(wt)) {
287 PyErr_BadInternalCall();287 PyErr_BadInternalCall();
288 result = NULL;288 goto done;
289 goto Done;
290 }289 }
291 /* Now we have 2 tuples to compare, do it */290 /* Now we have 2 tuples to compare, do it */
292 result = PyTuple_Type.tp_richcompare(vt, wt, op);291 result = PyTuple_Type.tp_richcompare(vt, wt, op);
293Done:292done:
294 Py_XDECREF(vt);293 Py_XDECREF(vt);
295 return result;294 return result;
296}295}
297296
297/** Compare two objects to determine if they are equivalent.
298 * The basic flow is as follows
299 * 1) First make sure that both objects are StaticTuple instances. If they
300 * aren't then cast self to a tuple, and have the tuple do the comparison.
301 * 2) Special case comparison to Py_None, because it happens to occur fairly
302 * often in the test suite.
303 * 3) Special case when v and w are the same pointer. As we know the answer to
304 * all queries without walking individual items.
305 * 4) For all operations, we then walk the items to find the first paired
306 * items that are not equal.
307 * 5) If all items found are equal, we then check the length of self and
308 * other to determine equality.
309 * 6) If an item differs, then we apply "op" to those last two items. (eg.
310 * StaticTuple(A, B) > StaticTuple(A, C) iff B > C)
311 */
298312
299static PyObject *313static PyObject *
300StaticTuple_richcompare(PyObject *v, PyObject *w, int op)314StaticTuple_richcompare(PyObject *v, PyObject *w, int op)
301{315{
302 StaticTuple *vk, *wk;316 StaticTuple *v_st, *w_st;
303 Py_ssize_t vlen, wlen, min_len, i;317 Py_ssize_t vlen, wlen, min_len, i;
304 PyObject *v_obj, *w_obj;318 PyObject *v_obj, *w_obj;
305 richcmpfunc string_richcompare;319 richcmpfunc string_richcompare;
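A quick illustration of steps (3), (5) and (6) from the comment above, using the StaticTuple from bzrlib.static_tuple (a sketch, not part of the diff):

    from bzrlib.static_tuple import StaticTuple

    t = StaticTuple('A', 'B')
    assert t == t                                          # (3) same pointer
    assert StaticTuple('A',) < StaticTuple('A', 'B')       # (5) shared items equal, shorter sorts first
    assert StaticTuple('A', 'B') < StaticTuple('A', 'C')   # (6) first differing pair decides: 'B' < 'C'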
@@ -313,10 +327,10 @@
313 Py_INCREF(Py_NotImplemented);327 Py_INCREF(Py_NotImplemented);
314 return Py_NotImplemented;328 return Py_NotImplemented;
315 }329 }
316 vk = (StaticTuple *)v;330 v_st = (StaticTuple *)v;
317 if (StaticTuple_CheckExact(w)) {331 if (StaticTuple_CheckExact(w)) {
318 /* The most common case */332 /* The most common case */
319 wk = (StaticTuple*)w;333 w_st = (StaticTuple*)w;
320 } else if (PyTuple_Check(w)) {334 } else if (PyTuple_Check(w)) {
321 /* One of v or w is a tuple, so we go the 'slow' route and cast up to335 /* One of v or w is a tuple, so we go the 'slow' route and cast up to
322 * tuples to compare.336 * tuples to compare.
@@ -325,17 +339,19 @@
325 * We probably want to optimize comparing self to other when339 * We probably want to optimize comparing self to other when
326 * other is a tuple.340 * other is a tuple.
327 */341 */
328 return StaticTuple_richcompare_to_tuple(vk, w, op);342 return StaticTuple_richcompare_to_tuple(v_st, w, op);
329 } else if (w == Py_None) {343 } else if (w == Py_None) {
330 // None is always less than the object344 // None is always less than the object
331 switch (op) {345 switch (op) {
332 case Py_NE:case Py_GT:case Py_GE:346 case Py_NE:case Py_GT:case Py_GE:
333 Py_INCREF(Py_True);347 Py_INCREF(Py_True);
334 return Py_True;348 return Py_True;
335 case Py_EQ:case Py_LT:case Py_LE:349 case Py_EQ:case Py_LT:case Py_LE:
336 Py_INCREF(Py_False);350 Py_INCREF(Py_False);
337 return Py_False;351 return Py_False;
338 }352 default: // Should never happen
353 return Py_NotImplemented;
354 }
339 } else {355 } else {
340 /* We don't special case this comparison, we just let python handle356 /* We don't special case this comparison, we just let python handle
341 * it.357 * it.
@@ -344,38 +360,49 @@
344 return Py_NotImplemented;360 return Py_NotImplemented;
345 }361 }
346 /* Now we know that we have 2 StaticTuple objects, so let's compare them.362 /* Now we know that we have 2 StaticTuple objects, so let's compare them.
347 * This code is somewhat borrowed from tuplerichcompare, except we know our363 * This code is inspired by tuplerichcompare, except we know our
348 * objects are limited in scope, so we can inline some comparisons.364 * objects are limited in scope, so we can inline some comparisons.
349 */365 */
350 if (v == w) {366 if (v == w) {
351 /* Identical pointers, we can shortcut this easily. */367 /* Identical pointers, we can shortcut this easily. */
352 switch (op) {368 switch (op) {
353 case Py_EQ:case Py_LE:case Py_GE:369 case Py_EQ:case Py_LE:case Py_GE:
354 Py_INCREF(Py_True);370 Py_INCREF(Py_True);
355 return Py_True;371 return Py_True;
356 case Py_NE:case Py_LT:case Py_GT:372 case Py_NE:case Py_LT:case Py_GT:
357 Py_INCREF(Py_False);373 Py_INCREF(Py_False);
358 return Py_False;374 return Py_False;
359 }375 }
360 }376 }
361 /* TODO: if STATIC_TUPLE_INTERNED_FLAG is set on both objects and they are377 if (op == Py_EQ
362 * not the same pointer, then we know they aren't the same object378 && _StaticTuple_is_interned(v_st)
363 * without having to do sub-by-sub comparison.379 && _StaticTuple_is_interned(w_st))
364 */380 {
381 /* If both objects are interned, we know they are different if the
382 * pointer is not the same, which would have been handled by the
383 * previous if. No need to compare the entries.
384 */
385 Py_INCREF(Py_False);
386 return Py_False;
387 }
365388
366 /* It will be rare that we compare tuples of different lengths, so we don't389 /* The only time we are likely to compare items of different lengths is in
367 * start by optimizing the length comparision, same as the tuple code390 * something like the interned_keys set. However, the hash is good enough
368 * TODO: Interning may change this, because we'll be comparing lots of391 * that it is rare. Note that 'tuple_richcompare' also does not compare
369 * different StaticTuple objects in the intern dict392 * lengths here.
370 */393 */
371 vlen = vk->size;394 vlen = v_st->size;
372 wlen = wk->size;395 wlen = w_st->size;
373 min_len = (vlen < wlen) ? vlen : wlen;396 min_len = (vlen < wlen) ? vlen : wlen;
374 string_richcompare = PyString_Type.tp_richcompare;397 string_richcompare = PyString_Type.tp_richcompare;
375 for (i = 0; i < min_len; i++) {398 for (i = 0; i < min_len; i++) {
376 PyObject *result = NULL;399 PyObject *result = NULL;
377 v_obj = StaticTuple_GET_ITEM(vk, i);400 v_obj = StaticTuple_GET_ITEM(v_st, i);
378 w_obj = StaticTuple_GET_ITEM(wk, i);401 w_obj = StaticTuple_GET_ITEM(w_st, i);
402 if (v_obj == w_obj) {
403 /* Shortcut case, these must be identical */
404 continue;
405 }
379 if (PyString_CheckExact(v_obj) && PyString_CheckExact(w_obj)) {406 if (PyString_CheckExact(v_obj) && PyString_CheckExact(w_obj)) {
380 result = string_richcompare(v_obj, w_obj, Py_EQ);407 result = string_richcompare(v_obj, w_obj, Py_EQ);
381 } else if (StaticTuple_CheckExact(v_obj) &&408 } else if (StaticTuple_CheckExact(v_obj) &&
@@ -391,9 +418,15 @@
391 return NULL; /* There seems to be an error */418 return NULL; /* There seems to be an error */
392 }419 }
393 if (result == Py_NotImplemented) {420 if (result == Py_NotImplemented) {
394 PyErr_BadInternalCall();
395 Py_DECREF(result);421 Py_DECREF(result);
396 return NULL;422 /* One side must have had a string and the other a StaticTuple.
423 * This clearly means that they are not equal.
424 */
425 if (op == Py_EQ) {
426 Py_INCREF(Py_False);
427 return Py_False;
428 }
429 result = PyObject_RichCompare(v_obj, w_obj, Py_EQ);
397 }430 }
398 if (result == Py_False) {431 if (result == Py_False) {
399 /* This entry is not identical432 /* This entry is not identical
@@ -415,28 +448,28 @@
415 }448 }
416 Py_DECREF(result);449 Py_DECREF(result);
417 }450 }
418 if (i >= vlen || i >= wlen) {451 if (i >= min_len) {
419 /* We walked off one of the lists, but everything compared equal so452 /* We walked off one of the lists, but everything compared equal so
420 * far. Just compare the size.453 * far. Just compare the size.
421 */454 */
422 int cmp;455 int cmp;
423 PyObject *res;456 PyObject *res;
424 switch (op) {457 switch (op) {
425 case Py_LT: cmp = vlen < wlen; break;458 case Py_LT: cmp = vlen < wlen; break;
426 case Py_LE: cmp = vlen <= wlen; break;459 case Py_LE: cmp = vlen <= wlen; break;
427 case Py_EQ: cmp = vlen == wlen; break;460 case Py_EQ: cmp = vlen == wlen; break;
428 case Py_NE: cmp = vlen != wlen; break;461 case Py_NE: cmp = vlen != wlen; break;
429 case Py_GT: cmp = vlen > wlen; break;462 case Py_GT: cmp = vlen > wlen; break;
430 case Py_GE: cmp = vlen >= wlen; break;463 case Py_GE: cmp = vlen >= wlen; break;
431 default: return NULL; /* cannot happen */464 default: return NULL; /* cannot happen */
432 }465 }
433 if (cmp)466 if (cmp)
434 res = Py_True;467 res = Py_True;
435 else468 else
436 res = Py_False;469 res = Py_False;
437 Py_INCREF(res);470 Py_INCREF(res);
438 return res;471 return res;
439 }472 }
440 /* The last item differs, shortcut the Py_NE case */473 /* The last item differs, shortcut the Py_NE case */
441 if (op == Py_NE) {474 if (op == Py_NE) {
442 Py_INCREF(Py_True);475 Py_INCREF(Py_True);
@@ -477,15 +510,22 @@
477}510}
478511
479static char StaticTuple__is_interned_doc[] = "_is_interned() => True/False\n"512static char StaticTuple__is_interned_doc[] = "_is_interned() => True/False\n"
480 "Check to see if this key has been interned.\n";513 "Check to see if this tuple has been interned.\n";
481514
482515
483static PyObject *516static PyObject *
484StaticTuple_item(StaticTuple *self, Py_ssize_t offset)517StaticTuple_item(StaticTuple *self, Py_ssize_t offset)
485{518{
486 PyObject *obj;519 PyObject *obj;
487 if (offset < 0 || offset >= self->size) {520 /* We cast to (int) to avoid worrying about whether Py_ssize_t is a
488 PyErr_SetString(PyExc_IndexError, "StaticTuple index out of range");521 * long long, etc. offsets should never be >2**31 anyway.
522 */
523 if (offset < 0) {
524 PyErr_Format(PyExc_IndexError, "StaticTuple_item does not support"
525 " negative indices: %d\n", (int)offset);
526 } else if (offset >= self->size) {
527 PyErr_Format(PyExc_IndexError, "StaticTuple index out of range"
528 " %d >= %d", (int)offset, (int)self->size);
489 return NULL;529 return NULL;
490 }530 }
491 obj = (PyObject *)self->items[offset];531 obj = (PyObject *)self->items[offset];
@@ -519,9 +559,12 @@
519559
520static char StaticTuple_doc[] =560static char StaticTuple_doc[] =
521 "C implementation of a StaticTuple structure."561 "C implementation of a StaticTuple structure."
522 "\n This is used as StaticTuple(key_bit_1, key_bit_2, key_bit_3, ...)"562 "\n This is used as StaticTuple(item1, item2, item3)"
523 "\n This is similar to tuple, just less flexible in what it"563 "\n This is similar to tuple, less flexible in what it"
524 "\n supports, but also lighter memory consumption.";564 "\n supports, but also lighter memory consumption."
565 "\n Note that the constructor mimics the () form of tuples"
566 "\n Rather than the 'tuple()' constructor."
567 "\n eg. StaticTuple(a, b) == (a, b) == tuple((a, b))";
525568
526static PyMethodDef StaticTuple_methods[] = {569static PyMethodDef StaticTuple_methods[] = {
527 {"as_tuple", (PyCFunction)StaticTuple_as_tuple, METH_NOARGS, StaticTuple_as_tuple_doc},570 {"as_tuple", (PyCFunction)StaticTuple_as_tuple, METH_NOARGS, StaticTuple_as_tuple_doc},
@@ -542,6 +585,12 @@
542 0, /* sq_contains */585 0, /* sq_contains */
543};586};
544587
588/* TODO: Implement StaticTuple_as_mapping.
589 * The only thing we really want to support from there is mp_subscript,
590 * so that we could support extended slicing (foo[::2]). Not worth it
591 * yet, though.
592 */
593
545594
546PyTypeObject StaticTuple_Type = {595PyTypeObject StaticTuple_Type = {
547 PyObject_HEAD_INIT(NULL)596 PyObject_HEAD_INIT(NULL)
@@ -561,7 +610,7 @@
561 (hashfunc)StaticTuple_hash, /* tp_hash */610 (hashfunc)StaticTuple_hash, /* tp_hash */
562 0, /* tp_call */611 0, /* tp_call */
563 0, /* tp_str */612 0, /* tp_str */
564 PyObject_GenericGetAttr, /* tp_getattro */613 0, /* tp_getattro */
565 0, /* tp_setattro */614 0, /* tp_setattro */
566 0, /* tp_as_buffer */615 0, /* tp_as_buffer */
567 Py_TPFLAGS_DEFAULT, /* tp_flags*/616 Py_TPFLAGS_DEFAULT, /* tp_flags*/
@@ -590,7 +639,7 @@
590 0, /* tp_dictoffset */639 0, /* tp_dictoffset */
591 0, /* tp_init */640 0, /* tp_init */
592 0, /* tp_alloc */641 0, /* tp_alloc */
593 StaticTuple_new, /* tp_new */642 StaticTuple_new_constructor, /* tp_new */
594};643};
595644
596645
@@ -646,11 +695,57 @@
646}695}
647696
648697
698static int
699_workaround_pyrex_096(void)
700{
701 /* Work around an incompatibility in how pyrex 0.9.6 exports a module,
702 * versus how pyrex 0.9.8 and cython 0.11 export it.
703 * Namely 0.9.6 exports import__simple_set_pyx and tries to
704 * "import _simple_set_pyx" but it is available only as
705 * "import bzrlib._simple_set_pyx"
706 * It is a shame to hack up sys.modules, but that is what we've got to do.
707 */
708 PyObject *sys_module = NULL, *modules = NULL, *set_module = NULL;
709 int retval = -1;
710
711 /* Clear out the current ImportError exception, and try again. */
712 PyErr_Clear();
713 /* Note that this only seems to work if somewhere else imports
714 * bzrlib._simple_set_pyx before importing bzrlib._static_tuple_c
715 */
716 set_module = PyImport_ImportModule("bzrlib._simple_set_pyx");
717 if (set_module == NULL) {
718 // fprintf(stderr, "Failed to import bzrlib._simple_set_pyx\n");
719 goto end;
720 }
721 /* Add the _simple_set_pyx into sys.modules at the appropriate location. */
722 sys_module = PyImport_ImportModule("sys");
723 if (sys_module == NULL) {
724 // fprintf(stderr, "Failed to import sys\n");
725 goto end;
726 }
727 modules = PyObject_GetAttrString(sys_module, "modules");
728 if (modules == NULL || !PyDict_Check(modules)) {
729 // fprintf(stderr, "Failed to find sys.modules\n");
730 goto end;
731 }
732 PyDict_SetItemString(modules, "_simple_set_pyx", set_module);
733 /* Now that we have hacked it in, try the import again. */
734 retval = import_bzrlib___simple_set_pyx();
735end:
736 Py_XDECREF(set_module);
737 Py_XDECREF(sys_module);
738 Py_XDECREF(modules);
739 return retval;
740}
741
742
649PyMODINIT_FUNC743PyMODINIT_FUNC
650init_static_tuple_c(void)744init_static_tuple_c(void)
651{745{
652 PyObject* m;746 PyObject* m;
653747
748 StaticTuple_Type.tp_getattro = PyObject_GenericGetAttr;
654 if (PyType_Ready(&StaticTuple_Type) < 0)749 if (PyType_Ready(&StaticTuple_Type) < 0)
655 return;750 return;
656751
@@ -661,8 +756,9 @@
661756
662 Py_INCREF(&StaticTuple_Type);757 Py_INCREF(&StaticTuple_Type);
663 PyModule_AddObject(m, "StaticTuple", (PyObject *)&StaticTuple_Type);758 PyModule_AddObject(m, "StaticTuple", (PyObject *)&StaticTuple_Type);
664 if (import_bzrlib___simple_set_pyx() == -1) {759 if (import_bzrlib___simple_set_pyx() == -1
665 // We failed to set up, stop early760 && _workaround_pyrex_096() == -1)
761 {
666 return;762 return;
667 }763 }
668 setup_interned_tuples(m);764 setup_interned_tuples(m);
669765
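The _workaround_pyrex_096() hack above amounts to the following Python, shown here only to make the sys.modules aliasing explicit (illustrative, not part of the branch):

    import sys
    import bzrlib._simple_set_pyx as set_module

    # Modules generated by pyrex 0.9.6 look for a top-level '_simple_set_pyx',
    # so alias the package-qualified module under that name and retry the import.
    sys.modules['_simple_set_pyx'] = set_module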
=== modified file 'bzrlib/_static_tuple_py.py'
--- bzrlib/_static_tuple_py.py 2009-10-07 16:12:50 +0000
+++ bzrlib/_static_tuple_py.py 2009-10-15 18:31:14 +0000
@@ -52,6 +52,8 @@
52 return _interned_tuples.setdefault(self, self)52 return _interned_tuples.setdefault(self, self)
5353
5454
55# Have to set it to None first, so that __new__ can determine whether
56# the _empty_tuple singleton has been created yet or not.
55_empty_tuple = None57_empty_tuple = None
56_empty_tuple = StaticTuple()58_empty_tuple = StaticTuple()
57_interned_tuples = {}59_interned_tuples = {}
5860
=== modified file 'bzrlib/branch.py'
--- bzrlib/branch.py 2009-08-19 18:04:49 +0000
+++ bzrlib/branch.py 2009-10-15 18:31:14 +0000
@@ -46,9 +46,10 @@
46 )46 )
47""")47""")
4848
49from bzrlib.decorators import needs_read_lock, needs_write_lock49from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
50from bzrlib.hooks import HookPoint, Hooks50from bzrlib.hooks import HookPoint, Hooks
51from bzrlib.inter import InterObject51from bzrlib.inter import InterObject
52from bzrlib.lock import _RelockDebugMixin
52from bzrlib import registry53from bzrlib import registry
53from bzrlib.symbol_versioning import (54from bzrlib.symbol_versioning import (
54 deprecated_in,55 deprecated_in,
@@ -2079,7 +2080,7 @@
2079 _legacy_formats[0].network_name(), _legacy_formats[0].__class__)2080 _legacy_formats[0].network_name(), _legacy_formats[0].__class__)
20802081
20812082
2082class BzrBranch(Branch):2083class BzrBranch(Branch, _RelockDebugMixin):
2083 """A branch stored in the actual filesystem.2084 """A branch stored in the actual filesystem.
20842085
2085 Note that it's "local" in the context of the filesystem; it doesn't2086 Note that it's "local" in the context of the filesystem; it doesn't
@@ -2131,6 +2132,8 @@
2131 return self.control_files.is_locked()2132 return self.control_files.is_locked()
21322133
2133 def lock_write(self, token=None):2134 def lock_write(self, token=None):
2135 if not self.is_locked():
2136 self._note_lock('w')
2134 # All-in-one needs to always unlock/lock.2137 # All-in-one needs to always unlock/lock.
2135 repo_control = getattr(self.repository, 'control_files', None)2138 repo_control = getattr(self.repository, 'control_files', None)
2136 if self.control_files == repo_control or not self.is_locked():2139 if self.control_files == repo_control or not self.is_locked():
@@ -2146,6 +2149,8 @@
2146 raise2149 raise
21472150
2148 def lock_read(self):2151 def lock_read(self):
2152 if not self.is_locked():
2153 self._note_lock('r')
2149 # All-in-one needs to always unlock/lock.2154 # All-in-one needs to always unlock/lock.
2150 repo_control = getattr(self.repository, 'control_files', None)2155 repo_control = getattr(self.repository, 'control_files', None)
2151 if self.control_files == repo_control or not self.is_locked():2156 if self.control_files == repo_control or not self.is_locked():
@@ -2160,6 +2165,7 @@
2160 self.repository.unlock()2165 self.repository.unlock()
2161 raise2166 raise
21622167
2168 @only_raises(errors.LockNotHeld, errors.LockBroken)
2163 def unlock(self):2169 def unlock(self):
2164 try:2170 try:
2165 self.control_files.unlock()2171 self.control_files.unlock()
21662172
=== modified file 'bzrlib/btree_index.py'
--- bzrlib/btree_index.py 2009-09-22 02:18:24 +0000
+++ bzrlib/btree_index.py 2009-10-15 18:31:13 +0000
@@ -163,6 +163,7 @@
163 node_refs, _ = self._check_key_ref_value(key, references, value)163 node_refs, _ = self._check_key_ref_value(key, references, value)
164 if key in self._nodes:164 if key in self._nodes:
165 raise errors.BadIndexDuplicateKey(key, self)165 raise errors.BadIndexDuplicateKey(key, self)
166 # TODO: StaticTuple
166 self._nodes[key] = (node_refs, value)167 self._nodes[key] = (node_refs, value)
167 self._keys.add(key)168 self._keys.add(key)
168 if self._nodes_by_key is not None and self._key_length > 1:169 if self._nodes_by_key is not None and self._key_length > 1:
@@ -625,6 +626,7 @@
625 for line in lines[2:]:626 for line in lines[2:]:
626 if line == '':627 if line == '':
627 break628 break
629 # TODO: Switch to StaticTuple here.
628 nodes.append(tuple(map(intern, line.split('\0'))))630 nodes.append(tuple(map(intern, line.split('\0'))))
629 return nodes631 return nodes
630632
@@ -636,7 +638,7 @@
636 memory except when very large walks are done.638 memory except when very large walks are done.
637 """639 """
638640
639 def __init__(self, transport, name, size):641 def __init__(self, transport, name, size, unlimited_cache=False):
640 """Create a B+Tree index object on the index name.642 """Create a B+Tree index object on the index name.
641643
642 :param transport: The transport to read data for the index from.644 :param transport: The transport to read data for the index from.
@@ -646,6 +648,9 @@
646 the initial read (to read the root node header) can be done648 the initial read (to read the root node header) can be done
647 without over-reading even on empty indices, and on small indices649 without over-reading even on empty indices, and on small indices
648 allows single-IO to read the entire index.650 allows single-IO to read the entire index.
651 :param unlimited_cache: If set to True, then instead of using an
652 LRUCache with size _NODE_CACHE_SIZE, we will use a dict and always
653 cache all leaf nodes.
649 """654 """
650 self._transport = transport655 self._transport = transport
651 self._name = name656 self._name = name
@@ -655,12 +660,15 @@
655 self._root_node = None660 self._root_node = None
656 # Default max size is 100,000 leave values661 # Default max size is 100,000 leave values
657 self._leaf_value_cache = None # lru_cache.LRUCache(100*1000)662 self._leaf_value_cache = None # lru_cache.LRUCache(100*1000)
658 self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE)663 if unlimited_cache:
659 # We could limit this, but even a 300k record btree has only 3k leaf664 self._leaf_node_cache = {}
660 # nodes, and only 20 internal nodes. So the default of 100 nodes in an665 self._internal_node_cache = {}
661 # LRU would mean we always cache everything anyway, no need to pay the666 else:
662 # overhead of LRU667 self._leaf_node_cache = lru_cache.LRUCache(_NODE_CACHE_SIZE)
663 self._internal_node_cache = fifo_cache.FIFOCache(100)668 # We use a FIFO here just to prevent possible blowout. However, a
669 # 300k record btree has only 3k leaf nodes, and only 20 internal
670 # nodes. A value of 100 scales to ~100*100*100 = 1M records.
671 self._internal_node_cache = fifo_cache.FIFOCache(100)
664 self._key_count = None672 self._key_count = None
665 self._row_lengths = None673 self._row_lengths = None
666 self._row_offsets = None # Start of each row, [-1] is the end674 self._row_offsets = None # Start of each row, [-1] is the end
@@ -698,9 +706,9 @@
698 if start_of_leaves is None:706 if start_of_leaves is None:
699 start_of_leaves = self._row_offsets[-2]707 start_of_leaves = self._row_offsets[-2]
700 if node_pos < start_of_leaves:708 if node_pos < start_of_leaves:
701 self._internal_node_cache.add(node_pos, node)709 self._internal_node_cache[node_pos] = node
702 else:710 else:
703 self._leaf_node_cache.add(node_pos, node)711 self._leaf_node_cache[node_pos] = node
704 found[node_pos] = node712 found[node_pos] = node
705 return found713 return found
706714
707715
=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py 2009-09-24 19:51:37 +0000
+++ bzrlib/builtins.py 2009-10-15 18:31:14 +0000
@@ -431,7 +431,10 @@
431 for node in bt.iter_all_entries():431 for node in bt.iter_all_entries():
432 # Node is made up of:432 # Node is made up of:
433 # (index, key, value, [references])433 # (index, key, value, [references])
434 self.outf.write('%s\n' % (node[1:],))434 refs_as_tuples = tuple([tuple([tuple(ref) for ref in ref_list])
435 for ref_list in node[3]])
436 as_tuple = (tuple(node[1]), node[2], refs_as_tuples)
437 self.outf.write('%s\n' % (as_tuple,))
435438
436439
437class cmd_remove_tree(Command):440class cmd_remove_tree(Command):
@@ -461,8 +464,7 @@
461 raise errors.BzrCommandError("You cannot remove the working tree"464 raise errors.BzrCommandError("You cannot remove the working tree"
462 " of a remote path")465 " of a remote path")
463 if not force:466 if not force:
464 if (working.has_changes(working.basis_tree())467 if (working.has_changes()):
465 or len(working.get_parent_ids()) > 1):
466 raise errors.UncommittedChanges(working)468 raise errors.UncommittedChanges(working)
467469
468 working_path = working.bzrdir.root_transport.base470 working_path = working.bzrdir.root_transport.base
@@ -1109,8 +1111,7 @@
1109 else:1111 else:
1110 revision_id = None1112 revision_id = None
1111 if strict and tree is not None and revision_id is None:1113 if strict and tree is not None and revision_id is None:
1112 if (tree.has_changes(tree.basis_tree())1114 if (tree.has_changes()):
1113 or len(tree.get_parent_ids()) > 1):
1114 raise errors.UncommittedChanges(1115 raise errors.UncommittedChanges(
1115 tree, more='Use --no-strict to force the push.')1116 tree, more='Use --no-strict to force the push.')
1116 if tree.last_revision() != tree.branch.last_revision():1117 if tree.last_revision() != tree.branch.last_revision():
@@ -1887,7 +1888,7 @@
1887 @display_command1888 @display_command
1888 def run(self, revision=None, file_list=None, diff_options=None,1889 def run(self, revision=None, file_list=None, diff_options=None,
1889 prefix=None, old=None, new=None, using=None):1890 prefix=None, old=None, new=None, using=None):
1890 from bzrlib.diff import _get_trees_to_diff, show_diff_trees1891 from bzrlib.diff import get_trees_and_branches_to_diff, show_diff_trees
18911892
1892 if (prefix is None) or (prefix == '0'):1893 if (prefix is None) or (prefix == '0'):
1893 # diff -p0 format1894 # diff -p0 format
@@ -1907,9 +1908,10 @@
1907 raise errors.BzrCommandError('bzr diff --revision takes exactly'1908 raise errors.BzrCommandError('bzr diff --revision takes exactly'
1908 ' one or two revision specifiers')1909 ' one or two revision specifiers')
19091910
1910 old_tree, new_tree, specific_files, extra_trees = \1911 (old_tree, new_tree,
1911 _get_trees_to_diff(file_list, revision, old, new,1912 old_branch, new_branch,
1912 apply_view=True)1913 specific_files, extra_trees) = get_trees_and_branches_to_diff(
1914 file_list, revision, old, new, apply_view=True)
1913 return show_diff_trees(old_tree, new_tree, sys.stdout,1915 return show_diff_trees(old_tree, new_tree, sys.stdout,
1914 specific_files=specific_files,1916 specific_files=specific_files,
1915 external_diff_options=diff_options,1917 external_diff_options=diff_options,
@@ -3663,7 +3665,7 @@
36633665
3664 # die as quickly as possible if there are uncommitted changes3666 # die as quickly as possible if there are uncommitted changes
3665 if not force:3667 if not force:
3666 if tree.has_changes(basis_tree) or len(tree.get_parent_ids()) > 1:3668 if tree.has_changes():
3667 raise errors.UncommittedChanges(tree)3669 raise errors.UncommittedChanges(tree)
36683670
3669 view_info = _get_view_info_for_change_reporter(tree)3671 view_info = _get_view_info_for_change_reporter(tree)
@@ -3720,7 +3722,10 @@
3720 merger.other_rev_id)3722 merger.other_rev_id)
3721 result.report(self.outf)3723 result.report(self.outf)
3722 return 03724 return 0
3723 merger.check_basis(False)3725 if merger.this_basis is None:
3726 raise errors.BzrCommandError(
3727 "This branch has no commits."
3728 " (perhaps you would prefer 'bzr pull')")
3724 if preview:3729 if preview:
3725 return self._do_preview(merger, cleanups)3730 return self._do_preview(merger, cleanups)
3726 elif interactive:3731 elif interactive:
37273732
=== modified file 'bzrlib/bundle/apply_bundle.py'
--- bzrlib/bundle/apply_bundle.py 2009-03-23 14:59:43 +0000
+++ bzrlib/bundle/apply_bundle.py 2009-10-15 18:31:14 +0000
@@ -56,7 +56,8 @@
56 change_reporter=change_reporter)56 change_reporter=change_reporter)
57 merger.pp = pp57 merger.pp = pp
58 merger.pp.next_phase()58 merger.pp.next_phase()
59 merger.check_basis(check_clean, require_commits=False)59 if check_clean and tree.has_changes():
60 raise errors.UncommittedChanges(self)
60 merger.other_rev_id = reader.target61 merger.other_rev_id = reader.target
61 merger.other_tree = merger.revision_tree(reader.target)62 merger.other_tree = merger.revision_tree(reader.target)
62 merger.other_basis = reader.target63 merger.other_basis = reader.target
6364
=== modified file 'bzrlib/commands.py'
--- bzrlib/commands.py 2009-09-17 07:16:12 +0000
+++ bzrlib/commands.py 2009-10-15 18:31:14 +0000
@@ -1097,7 +1097,7 @@
10971097
1098 # Is this a final release version? If so, we should suppress warnings1098 # Is this a final release version? If so, we should suppress warnings
1099 if bzrlib.version_info[3] == 'final':1099 if bzrlib.version_info[3] == 'final':
1100 suppress_deprecation_warnings(override=False)1100 suppress_deprecation_warnings(override=True)
1101 if argv is None:1101 if argv is None:
1102 argv = osutils.get_unicode_argv()1102 argv = osutils.get_unicode_argv()
1103 else:1103 else:
11041104
=== modified file 'bzrlib/decorators.py'
--- bzrlib/decorators.py 2009-03-23 14:59:43 +0000
+++ bzrlib/decorators.py 2009-10-15 18:31:14 +0000
@@ -24,6 +24,8 @@
2424
25import sys25import sys
2626
27from bzrlib import trace
28
2729
28def _get_parameters(func):30def _get_parameters(func):
29 """Recreate the parameters for a function using introspection.31 """Recreate the parameters for a function using introspection.
@@ -204,6 +206,31 @@
204 return write_locked206 return write_locked
205207
206208
209def only_raises(*errors):
210 """Make a decorator that will only allow the given error classes to be
211 raised. All other errors will be logged and then discarded.
212
213 Typical use is something like::
214
215 @only_raises(LockNotHeld, LockBroken)
216 def unlock(self):
217 # etc
218 """
219 def decorator(unbound):
220 def wrapped(*args, **kwargs):
221 try:
222 return unbound(*args, **kwargs)
223 except errors:
224 raise
225 except:
226 trace.mutter('Error suppressed by only_raises:')
227 trace.log_exception_quietly()
228 wrapped.__doc__ = unbound.__doc__
229 wrapped.__name__ = unbound.__name__
230 return wrapped
231 return decorator
232
233
207# Default is more functionality, 'bzr' the commandline will request fast234# Default is more functionality, 'bzr' the commandline will request fast
208# versions.235# versions.
209needs_read_lock = _pretty_needs_read_lock236needs_read_lock = _pretty_needs_read_lock
210237
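A minimal sketch of the new decorator's behaviour (the class below is made up; the real users are the unlock() implementations elsewhere in this diff):

    from bzrlib import errors
    from bzrlib.decorators import only_raises

    class DemoLock(object):

        @only_raises(errors.LockNotHeld, errors.LockBroken)
        def unlock(self):
            # LockNotHeld/LockBroken would propagate; anything else is
            # logged via trace.log_exception_quietly() and suppressed.
            raise RuntimeError('disk error while unlocking')

    DemoLock().unlock()  # returns None instead of raising RuntimeError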
=== modified file 'bzrlib/diff.py'
--- bzrlib/diff.py 2009-07-29 21:35:05 +0000
+++ bzrlib/diff.py 2009-10-15 18:31:13 +0000
@@ -277,8 +277,8 @@
277 new_abspath, e)277 new_abspath, e)
278278
279279
280def _get_trees_to_diff(path_list, revision_specs, old_url, new_url,280def get_trees_and_branches_to_diff(path_list, revision_specs, old_url, new_url,
281 apply_view=True):281 apply_view=True):
282 """Get the trees and specific files to diff given a list of paths.282 """Get the trees and specific files to diff given a list of paths.
283283
284 This method works out the trees to be diff'ed and the files of284 This method works out the trees to be diff'ed and the files of
@@ -299,9 +299,9 @@
299 if True and a view is set, apply the view or check that the paths299 if True and a view is set, apply the view or check that the paths
300 are within it300 are within it
301 :returns:301 :returns:
302 a tuple of (old_tree, new_tree, specific_files, extra_trees) where302 a tuple of (old_tree, new_tree, old_branch, new_branch,
303 extra_trees is a sequence of additional trees to search in for303 specific_files, extra_trees) where extra_trees is a sequence of
304 file-ids.304 additional trees to search in for file-ids.
305 """305 """
306 # Get the old and new revision specs306 # Get the old and new revision specs
307 old_revision_spec = None307 old_revision_spec = None
@@ -341,6 +341,7 @@
341 views.check_path_in_view(working_tree, relpath)341 views.check_path_in_view(working_tree, relpath)
342 specific_files.append(relpath)342 specific_files.append(relpath)
343 old_tree = _get_tree_to_diff(old_revision_spec, working_tree, branch)343 old_tree = _get_tree_to_diff(old_revision_spec, working_tree, branch)
344 old_branch = branch
344345
345 # Get the new location346 # Get the new location
346 if new_url is None:347 if new_url is None:
@@ -354,6 +355,7 @@
354 specific_files.append(relpath)355 specific_files.append(relpath)
355 new_tree = _get_tree_to_diff(new_revision_spec, working_tree, branch,356 new_tree = _get_tree_to_diff(new_revision_spec, working_tree, branch,
356 basis_is_default=working_tree is None)357 basis_is_default=working_tree is None)
358 new_branch = branch
357359
358 # Get the specific files (all files is None, no files is [])360 # Get the specific files (all files is None, no files is [])
359 if make_paths_wt_relative and working_tree is not None:361 if make_paths_wt_relative and working_tree is not None:
@@ -378,7 +380,8 @@
378 extra_trees = None380 extra_trees = None
379 if working_tree is not None and working_tree not in (old_tree, new_tree):381 if working_tree is not None and working_tree not in (old_tree, new_tree):
380 extra_trees = (working_tree,)382 extra_trees = (working_tree,)
381 return old_tree, new_tree, specific_files, extra_trees383 return old_tree, new_tree, old_branch, new_branch, specific_files, extra_trees
384
382385
383def _get_tree_to_diff(spec, tree=None, branch=None, basis_is_default=True):386def _get_tree_to_diff(spec, tree=None, branch=None, basis_is_default=True):
384 if branch is None and tree is not None:387 if branch is None and tree is not None:
385388
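The renamed helper now returns the branches as well; a caller written against the new signature looks roughly like this (the path list is a placeholder):

    import sys
    from bzrlib.diff import get_trees_and_branches_to_diff, show_diff_trees

    (old_tree, new_tree,
     old_branch, new_branch,
     specific_files, extra_trees) = get_trees_and_branches_to_diff(
        ['README'], None, None, None, apply_view=True)
    show_diff_trees(old_tree, new_tree, sys.stdout,
                    specific_files=specific_files, extra_trees=extra_trees)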
=== modified file 'bzrlib/foreign.py'
--- bzrlib/foreign.py 2009-10-02 09:11:43 +0000
+++ bzrlib/foreign.py 2009-10-15 18:31:14 +0000
@@ -305,8 +305,7 @@
305 ).get_user_option_as_bool('dpush_strict')305 ).get_user_option_as_bool('dpush_strict')
306 if strict is None: strict = True # default value306 if strict is None: strict = True # default value
307 if strict and source_wt is not None:307 if strict and source_wt is not None:
308 if (source_wt.has_changes(source_wt.basis_tree())308 if (source_wt.has_changes()):
309 or len(source_wt.get_parent_ids()) > 1):
310 raise errors.UncommittedChanges(309 raise errors.UncommittedChanges(
311 source_wt, more='Use --no-strict to force the push.')310 source_wt, more='Use --no-strict to force the push.')
312 if source_wt.last_revision() != source_wt.branch.last_revision():311 if source_wt.last_revision() != source_wt.branch.last_revision():
313312
=== modified file 'bzrlib/help_topics/__init__.py'
--- bzrlib/help_topics/__init__.py 2009-06-19 09:06:56 +0000
+++ bzrlib/help_topics/__init__.py 2009-10-15 18:31:14 +0000
@@ -644,38 +644,20 @@
644formats may also be introduced to improve performance and644formats may also be introduced to improve performance and
645scalability.645scalability.
646646
647Use the following guidelines to select a format (stopping647The newest format, 2a, is highly recommended. If your
648as soon as a condition is true):648project is not using 2a, then you should suggest to the
649649project owner to upgrade.
650* If you are working on an existing project, use whatever650
651 format that project is using. (Bazaar will do this for you651
652 by default).652Note: Some of the older formats have two variants:
653
654* If you are using bzr-svn to interoperate with a Subversion
655 repository, use 1.14-rich-root.
656
657* If you are working on a project with big trees (5000+ paths)
658 or deep history (5000+ revisions), use 1.14.
659
660* Otherwise, use the default format - it is good enough for
661 most projects.
662
663If some of your developers are unable to use the most recent
664version of Bazaar (due to distro package availability say), be
665sure to adjust the guidelines above accordingly. For example,
666you may need to select 1.9 instead of 1.14 if your project has
667standardized on Bazaar 1.13.1 say.
668
669Note: Many of the currently supported formats have two variants:
670a plain one and a rich-root one. The latter include an additional653a plain one and a rich-root one. The latter include an additional
671field about the root of the tree. There is no performance cost654field about the root of the tree. There is no performance cost
672for using a rich-root format but you cannot easily merge changes655for using a rich-root format but you cannot easily merge changes
673from a rich-root format into a plain format. As a consequence,656from a rich-root format into a plain format. As a consequence,
674moving a project to a rich-root format takes some co-ordination657moving a project to a rich-root format takes some co-ordination
675in that all contributors need to upgrade their repositories658in that all contributors need to upgrade their repositories
676around the same time. (It is for this reason that we have delayed659around the same time. 2a and all future formats will be
677making a rich-root format the default so far, though we will do660implicitly rich-root.
678so at some appropriate time in the future.)
679661
680See ``bzr help current-formats`` for the complete list of662See ``bzr help current-formats`` for the complete list of
681currently supported formats. See ``bzr help other-formats`` for663currently supported formats. See ``bzr help other-formats`` for
682664
=== modified file 'bzrlib/help_topics/en/debug-flags.txt'
--- bzrlib/help_topics/en/debug-flags.txt 2009-08-20 05:02:45 +0000
+++ bzrlib/help_topics/en/debug-flags.txt 2009-10-15 18:31:14 +0000
@@ -22,16 +22,18 @@
22-Dhttp Trace http connections, requests and responses.22-Dhttp Trace http connections, requests and responses.
23-Dindex Trace major index operations.23-Dindex Trace major index operations.
24-Dknit Trace knit operations.24-Dknit Trace knit operations.
25-Dstrict_locks Trace when OS locks are potentially used in a non-portable
26 manner.
27-Dlock Trace when lockdir locks are taken or released.25-Dlock Trace when lockdir locks are taken or released.
28-Dprogress Trace progress bar operations.26-Dprogress Trace progress bar operations.
29-Dmerge Emit information for debugging merges.27-Dmerge Emit information for debugging merges.
30-Dno_apport Don't use apport to report crashes.28-Dno_apport Don't use apport to report crashes.
31-Dunlock Some errors during unlock are treated as warnings.
32-Dpack Emit information about pack operations.29-Dpack Emit information about pack operations.
30-Drelock Emit a message every time a branch or repository object is
31 unlocked then relocked the same way.
33-Dsftp Trace SFTP internals.32-Dsftp Trace SFTP internals.
34-Dstream Trace fetch streams.33-Dstream Trace fetch streams.
34-Dstrict_locks Trace when OS locks are potentially used in a non-portable
35 manner.
36-Dunlock Some errors during unlock are treated as warnings.
35-DIDS_never Never use InterDifferingSerializer when fetching.37-DIDS_never Never use InterDifferingSerializer when fetching.
36-DIDS_always Always use InterDifferingSerializer to fetch if appropriate38-DIDS_always Always use InterDifferingSerializer to fetch if appropriate
37 for the format, even for non-local fetches.39 for the format, even for non-local fetches.
3840
=== modified file 'bzrlib/index.py'
--- bzrlib/index.py 2009-10-08 15:44:41 +0000
+++ bzrlib/index.py 2009-10-15 18:31:14 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2007, 2008 Canonical Ltd1# Copyright (C) 2007, 2008, 2009 Canonical Ltd
2#2#
3# This program is free software; you can redistribute it and/or modify3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by4# it under the terms of the GNU General Public License as published by
@@ -40,7 +40,7 @@
40 debug,40 debug,
41 errors,41 errors,
42 )42 )
43from bzrlib._static_tuple_c import StaticTuple43from bzrlib.static_tuple import StaticTuple
4444
45_HEADER_READV = (0, 200)45_HEADER_READV = (0, 200)
46_OPTION_KEY_ELEMENTS = "key_elements="46_OPTION_KEY_ELEMENTS = "key_elements="
@@ -203,7 +203,9 @@
203 if reference not in self._nodes:203 if reference not in self._nodes:
204 self._check_key(reference)204 self._check_key(reference)
205 absent_references.append(reference)205 absent_references.append(reference)
206 # TODO: StaticTuple
206 node_refs.append(tuple(reference_list))207 node_refs.append(tuple(reference_list))
208 # TODO: StaticTuple
207 return tuple(node_refs), absent_references209 return tuple(node_refs), absent_references
208210
209 def add_node(self, key, value, references=()):211 def add_node(self, key, value, references=()):
@@ -369,7 +371,7 @@
369 suitable for production use. :XXX371 suitable for production use. :XXX
370 """372 """
371373
372 def __init__(self, transport, name, size):374 def __init__(self, transport, name, size, unlimited_cache=False):
373 """Open an index called name on transport.375 """Open an index called name on transport.
374376
375 :param transport: A bzrlib.transport.Transport.377 :param transport: A bzrlib.transport.Transport.
376378
=== modified file 'bzrlib/lock.py'
--- bzrlib/lock.py 2009-07-31 16:51:48 +0000
+++ bzrlib/lock.py 2009-10-15 18:31:13 +0000
@@ -518,3 +518,24 @@
518# We default to using the first available lock class.518# We default to using the first available lock class.
519_lock_type, WriteLock, ReadLock = _lock_classes[0]519_lock_type, WriteLock, ReadLock = _lock_classes[0]
520520
521
522class _RelockDebugMixin(object):
523 """Mixin support for -Drelock flag.
524
525 Add this as a base class then call self._note_lock with 'r' or 'w' when
526 acquiring a read- or write-lock. If this object was previously locked (and
527 locked the same way), and -Drelock is set, then this will trace.note a
528 message about it.
529 """
530
531 _prev_lock = None
532
533 def _note_lock(self, lock_type):
534 if 'relock' in debug.debug_flags and self._prev_lock == lock_type:
535 if lock_type == 'r':
536 type_name = 'read'
537 else:
538 type_name = 'write'
539 trace.note('%r was %s locked again', self, type_name)
540 self._prev_lock = lock_type
541
521542
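A rough sketch of the mixin in use (the class is hypothetical; real callers guard with is_locked() as shown in branch.py and remote.py elsewhere in this diff):

    from bzrlib import debug
    from bzrlib.lock import _RelockDebugMixin

    class DemoBranch(_RelockDebugMixin):

        def lock_read(self):
            self._note_lock('r')

    debug.debug_flags.add('relock')
    b = DemoBranch()
    b.lock_read()  # first acquisition: nothing is reported
    b.lock_read()  # same lock type again: trace.note('... was read locked again')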
=== modified file 'bzrlib/lockable_files.py'
--- bzrlib/lockable_files.py 2009-07-27 05:39:01 +0000
+++ bzrlib/lockable_files.py 2009-10-15 18:31:14 +0000
@@ -32,8 +32,7 @@
32""")32""")
3333
34from bzrlib.decorators import (34from bzrlib.decorators import (
35 needs_read_lock,35 only_raises,
36 needs_write_lock,
37 )36 )
38from bzrlib.symbol_versioning import (37from bzrlib.symbol_versioning import (
39 deprecated_in,38 deprecated_in,
@@ -221,6 +220,7 @@
221 """Setup a write transaction."""220 """Setup a write transaction."""
222 self._set_transaction(transactions.WriteTransaction())221 self._set_transaction(transactions.WriteTransaction())
223222
223 @only_raises(errors.LockNotHeld, errors.LockBroken)
224 def unlock(self):224 def unlock(self):
225 if not self._lock_mode:225 if not self._lock_mode:
226 return lock.cant_unlock_not_held(self)226 return lock.cant_unlock_not_held(self)
227227
=== modified file 'bzrlib/lockdir.py'
--- bzrlib/lockdir.py 2009-07-27 05:24:02 +0000
+++ bzrlib/lockdir.py 2009-10-15 18:31:14 +0000
@@ -112,6 +112,7 @@
112 lock,112 lock,
113 )113 )
114import bzrlib.config114import bzrlib.config
115from bzrlib.decorators import only_raises
115from bzrlib.errors import (116from bzrlib.errors import (
116 DirectoryNotEmpty,117 DirectoryNotEmpty,
117 FileExists,118 FileExists,
@@ -286,6 +287,7 @@
286 info_bytes)287 info_bytes)
287 return tmpname288 return tmpname
288289
290 @only_raises(LockNotHeld, LockBroken)
289 def unlock(self):291 def unlock(self):
290 """Release a held lock292 """Release a held lock
291 """293 """
292294
=== modified file 'bzrlib/merge.py'
--- bzrlib/merge.py 2009-10-06 12:25:59 +0000
+++ bzrlib/merge.py 2009-10-15 18:31:14 +0000
@@ -35,6 +35,10 @@
35 ui,35 ui,
36 versionedfile36 versionedfile
37 )37 )
38from bzrlib.symbol_versioning import (
39 deprecated_in,
40 deprecated_method,
41 )
38# TODO: Report back as changes are merged in42# TODO: Report back as changes are merged in
3943
4044
@@ -226,6 +230,7 @@
226 revision_id = _mod_revision.ensure_null(revision_id)230 revision_id = _mod_revision.ensure_null(revision_id)
227 return branch, self.revision_tree(revision_id, branch)231 return branch, self.revision_tree(revision_id, branch)
228232
233 @deprecated_method(deprecated_in((2, 1, 0)))
229 def ensure_revision_trees(self):234 def ensure_revision_trees(self):
230 if self.this_revision_tree is None:235 if self.this_revision_tree is None:
231 self.this_basis_tree = self.revision_tree(self.this_basis)236 self.this_basis_tree = self.revision_tree(self.this_basis)
@@ -239,6 +244,7 @@
239 other_rev_id = self.other_basis244 other_rev_id = self.other_basis
240 self.other_tree = other_basis_tree245 self.other_tree = other_basis_tree
241246
247 @deprecated_method(deprecated_in((2, 1, 0)))
242 def file_revisions(self, file_id):248 def file_revisions(self, file_id):
243 self.ensure_revision_trees()249 self.ensure_revision_trees()
244 def get_id(tree, file_id):250 def get_id(tree, file_id):
@@ -252,6 +258,7 @@
252 trees = (self.this_basis_tree, self.other_tree)258 trees = (self.this_basis_tree, self.other_tree)
253 return [get_id(tree, file_id) for tree in trees]259 return [get_id(tree, file_id) for tree in trees]
254260
261 @deprecated_method(deprecated_in((2, 1, 0)))
255 def check_basis(self, check_clean, require_commits=True):262 def check_basis(self, check_clean, require_commits=True):
256 if self.this_basis is None and require_commits is True:263 if self.this_basis is None and require_commits is True:
257 raise errors.BzrCommandError(264 raise errors.BzrCommandError(
@@ -262,6 +269,7 @@
262 if self.this_basis != self.this_rev_id:269 if self.this_basis != self.this_rev_id:
263 raise errors.UncommittedChanges(self.this_tree)270 raise errors.UncommittedChanges(self.this_tree)
264271
272 @deprecated_method(deprecated_in((2, 1, 0)))
265 def compare_basis(self):273 def compare_basis(self):
266 try:274 try:
267 basis_tree = self.revision_tree(self.this_tree.last_revision())275 basis_tree = self.revision_tree(self.this_tree.last_revision())
@@ -274,7 +282,8 @@
274 self.interesting_files = file_list282 self.interesting_files = file_list
275283
276 def set_pending(self):284 def set_pending(self):
277 if not self.base_is_ancestor or not self.base_is_other_ancestor or self.other_rev_id is None:285 if (not self.base_is_ancestor or not self.base_is_other_ancestor
286 or self.other_rev_id is None):
278 return287 return
279 self._add_parent()288 self._add_parent()
280289
281290
=== modified file 'bzrlib/mutabletree.py'
--- bzrlib/mutabletree.py 2009-10-06 12:25:59 +0000
+++ bzrlib/mutabletree.py 2009-10-15 18:31:14 +0000
@@ -233,12 +233,20 @@
233 raise NotImplementedError(self._gather_kinds)233 raise NotImplementedError(self._gather_kinds)
234234
235 @needs_read_lock235 @needs_read_lock
236 def has_changes(self, from_tree):236 def has_changes(self, _from_tree=None):
237 """Quickly check that the tree contains at least one change.237 """Quickly check that the tree contains at least one commitable change.
238
239 :param _from_tree: tree to compare against to find changes (defaults to
240 the basis tree and is intended to be used by tests).
238241
239 :return: True if a change is found. False otherwise242 :return: True if a change is found. False otherwise
240 """243 """
241 changes = self.iter_changes(from_tree)244 # Check pending merges
245 if len(self.get_parent_ids()) > 1:
246 return True
247 if _from_tree is None:
248 _from_tree = self.basis_tree()
249 changes = self.iter_changes(_from_tree)
242 try:250 try:
243 change = changes.next()251 change = changes.next()
244 # Exclude root (talk about black magic... --vila 20090629)252 # Exclude root (talk about black magic... --vila 20090629)
245253
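With pending merges now folded into has_changes(), callers collapse to a single check, as the simplified call sites in builtins.py and foreign.py above show. A sketch of the new calling convention (error handling elided):

    from bzrlib import errors, workingtree

    tree = workingtree.WorkingTree.open_containing('.')[0]
    if tree.has_changes():  # True for basis-tree differences or pending merges
        raise errors.UncommittedChanges(tree)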
=== modified file 'bzrlib/osutils.py'
--- bzrlib/osutils.py 2009-09-19 01:33:10 +0000
+++ bzrlib/osutils.py 2009-10-15 18:31:14 +0000
@@ -1132,7 +1132,14 @@
1132 bit_iter = iter(rel.split('/'))1132 bit_iter = iter(rel.split('/'))
1133 for bit in bit_iter:1133 for bit in bit_iter:
1134 lbit = bit.lower()1134 lbit = bit.lower()
1135 for look in _listdir(current):1135 try:
1136 next_entries = _listdir(current)
1137 except OSError: # enoent, eperm, etc
1138 # We can't find this in the filesystem, so just append the
1139 # remaining bits.
1140 current = pathjoin(current, bit, *list(bit_iter))
1141 break
1142 for look in next_entries:
1136 if lbit == look.lower():1143 if lbit == look.lower():
1137 current = pathjoin(current, look)1144 current = pathjoin(current, look)
1138 break1145 break
@@ -1142,7 +1149,7 @@
1142 # the target of a move, for example).1149 # the target of a move, for example).
1143 current = pathjoin(current, bit, *list(bit_iter))1150 current = pathjoin(current, bit, *list(bit_iter))
1144 break1151 break
1145 return current[len(abs_base)+1:]1152 return current[len(abs_base):].lstrip('/')
11461153
1147# XXX - TODO - we need better detection/integration of case-insensitive1154# XXX - TODO - we need better detection/integration of case-insensitive
1148# file-systems; Linux often sees FAT32 devices (or NFS-mounted OSX1155# file-systems; Linux often sees FAT32 devices (or NFS-mounted OSX
11491156
=== modified file 'bzrlib/python-compat.h'
--- bzrlib/python-compat.h 2009-09-30 18:00:42 +0000
+++ bzrlib/python-compat.h 2009-10-15 18:31:14 +0000
@@ -28,6 +28,9 @@
28/* http://www.python.org/dev/peps/pep-0353/ */28/* http://www.python.org/dev/peps/pep-0353/ */
29#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)29#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
30 typedef int Py_ssize_t;30 typedef int Py_ssize_t;
31 typedef Py_ssize_t (*lenfunc)(PyObject *);
32 typedef PyObject * (*ssizeargfunc)(PyObject *, Py_ssize_t);
33 typedef PyObject * (*ssizessizeargfunc)(PyObject *, Py_ssize_t, Py_ssize_t);
31 #define PY_SSIZE_T_MAX INT_MAX34 #define PY_SSIZE_T_MAX INT_MAX
32 #define PY_SSIZE_T_MIN INT_MIN35 #define PY_SSIZE_T_MIN INT_MIN
33 #define PyInt_FromSsize_t(z) PyInt_FromLong(z)36 #define PyInt_FromSsize_t(z) PyInt_FromLong(z)
@@ -77,5 +80,8 @@
77#ifndef Py_TYPE80#ifndef Py_TYPE
78# define Py_TYPE(o) ((o)->ob_type)81# define Py_TYPE(o) ((o)->ob_type)
79#endif82#endif
83#ifndef Py_REFCNT
84# define Py_REFCNT(o) ((o)->ob_refcnt)
85#endif
8086
81#endif /* _BZR_PYTHON_COMPAT_H */87#endif /* _BZR_PYTHON_COMPAT_H */
8288
=== modified file 'bzrlib/reconfigure.py'
--- bzrlib/reconfigure.py 2009-07-24 03:15:56 +0000
+++ bzrlib/reconfigure.py 2009-10-15 18:31:14 +0000
@@ -265,9 +265,7 @@
265265
266 def _check(self):266 def _check(self):
267 """Raise if reconfiguration would destroy local changes"""267 """Raise if reconfiguration would destroy local changes"""
268 if self._destroy_tree:268 if self._destroy_tree and self.tree.has_changes():
269 # XXX: What about pending merges ? -- vila 20090629
270 if self.tree.has_changes(self.tree.basis_tree()):
271 raise errors.UncommittedChanges(self.tree)269 raise errors.UncommittedChanges(self.tree)
272 if self._create_reference and self.local_branch is not None:270 if self._create_reference and self.local_branch is not None:
273 reference_branch = branch.Branch.open(self._select_bind_location())271 reference_branch = branch.Branch.open(self._select_bind_location())
274272
=== modified file 'bzrlib/remote.py'
--- bzrlib/remote.py 2009-10-02 05:43:41 +0000
+++ bzrlib/remote.py 2009-10-15 18:31:14 +0000
@@ -33,7 +33,7 @@
33)33)
34from bzrlib.branch import BranchReferenceFormat34from bzrlib.branch import BranchReferenceFormat
35from bzrlib.bzrdir import BzrDir, RemoteBzrDirFormat35from bzrlib.bzrdir import BzrDir, RemoteBzrDirFormat
36from bzrlib.decorators import needs_read_lock, needs_write_lock36from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
37from bzrlib.errors import (37from bzrlib.errors import (
38 NoSuchRevision,38 NoSuchRevision,
39 SmartProtocolError,39 SmartProtocolError,
@@ -619,7 +619,7 @@
619 return self._custom_format._serializer619 return self._custom_format._serializer
620620
621621
622class RemoteRepository(_RpcHelper):622class RemoteRepository(_RpcHelper, lock._RelockDebugMixin):
623 """Repository accessed over rpc.623 """Repository accessed over rpc.
624624
625 For the moment most operations are performed using local transport-backed625 For the moment most operations are performed using local transport-backed
@@ -949,6 +949,7 @@
949 def lock_read(self):949 def lock_read(self):
950 # wrong eventually - want a local lock cache context950 # wrong eventually - want a local lock cache context
951 if not self._lock_mode:951 if not self._lock_mode:
952 self._note_lock('r')
952 self._lock_mode = 'r'953 self._lock_mode = 'r'
953 self._lock_count = 1954 self._lock_count = 1
954 self._unstacked_provider.enable_cache(cache_misses=True)955 self._unstacked_provider.enable_cache(cache_misses=True)
@@ -974,6 +975,7 @@
974975
975 def lock_write(self, token=None, _skip_rpc=False):976 def lock_write(self, token=None, _skip_rpc=False):
976 if not self._lock_mode:977 if not self._lock_mode:
978 self._note_lock('w')
977 if _skip_rpc:979 if _skip_rpc:
978 if self._lock_token is not None:980 if self._lock_token is not None:
979 if token != self._lock_token:981 if token != self._lock_token:
@@ -1082,6 +1084,7 @@
1082 else:1084 else:
1083 raise errors.UnexpectedSmartServerResponse(response)1085 raise errors.UnexpectedSmartServerResponse(response)
10841086
1087 @only_raises(errors.LockNotHeld, errors.LockBroken)
1085 def unlock(self):1088 def unlock(self):
1086 if not self._lock_count:1089 if not self._lock_count:
1087 return lock.cant_unlock_not_held(self)1090 return lock.cant_unlock_not_held(self)
@@ -2081,7 +2084,7 @@
2081 return self._custom_format.supports_set_append_revisions_only()2084 return self._custom_format.supports_set_append_revisions_only()
20822085
20832086
2084class RemoteBranch(branch.Branch, _RpcHelper):2087class RemoteBranch(branch.Branch, _RpcHelper, lock._RelockDebugMixin):
2085 """Branch stored on a server accessed by HPSS RPC.2088 """Branch stored on a server accessed by HPSS RPC.
20862089
2087 At the moment most operations are mapped down to simple file operations.2090 At the moment most operations are mapped down to simple file operations.
@@ -2318,6 +2321,7 @@
2318 def lock_read(self):2321 def lock_read(self):
2319 self.repository.lock_read()2322 self.repository.lock_read()
2320 if not self._lock_mode:2323 if not self._lock_mode:
2324 self._note_lock('r')
2321 self._lock_mode = 'r'2325 self._lock_mode = 'r'
2322 self._lock_count = 12326 self._lock_count = 1
2323 if self._real_branch is not None:2327 if self._real_branch is not None:
@@ -2343,6 +2347,7 @@
23432347
2344 def lock_write(self, token=None):2348 def lock_write(self, token=None):
2345 if not self._lock_mode:2349 if not self._lock_mode:
2350 self._note_lock('w')
2346 # Lock the branch and repo in one remote call.2351 # Lock the branch and repo in one remote call.
2347 remote_tokens = self._remote_lock_write(token)2352 remote_tokens = self._remote_lock_write(token)
2348 self._lock_token, self._repo_lock_token = remote_tokens2353 self._lock_token, self._repo_lock_token = remote_tokens
@@ -2383,6 +2388,7 @@
2383 return2388 return
2384 raise errors.UnexpectedSmartServerResponse(response)2389 raise errors.UnexpectedSmartServerResponse(response)
23852390
2391 @only_raises(errors.LockNotHeld, errors.LockBroken)
2386 def unlock(self):2392 def unlock(self):
2387 try:2393 try:
2388 self._lock_count -= 12394 self._lock_count -= 1
@@ -2428,6 +2434,7 @@
2428 raise NotImplementedError(self.dont_leave_lock_in_place)2434 raise NotImplementedError(self.dont_leave_lock_in_place)
2429 self._leave_lock = False2435 self._leave_lock = False
24302436
2437 @needs_read_lock
2431 def get_rev_id(self, revno, history=None):2438 def get_rev_id(self, revno, history=None):
2432 if revno == 0:2439 if revno == 0:
2433 return _mod_revision.NULL_REVISION2440 return _mod_revision.NULL_REVISION
24342441
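The @only_raises(errors.LockNotHeld, errors.LockBroken) decorations above rely on the new only_raises helper added to bzrlib/decorators.py, which is outside this excerpt. As a rough sketch of the idea (not the actual bzrlib implementation): approved exception types propagate, and anything else raised during unlock is logged quietly so it cannot mask the error that triggered the unlock in the first place.

    from bzrlib import trace

    def only_raises(*approved_errors):
        """Sketch: let the listed exception types propagate, log the rest quietly."""
        def decorator(unbound):
            def wrapped(*args, **kwargs):
                try:
                    return unbound(*args, **kwargs)
                except approved_errors:
                    raise
                except Exception:
                    trace.log_exception_quietly()
            wrapped.__name__ = unbound.__name__
            wrapped.__doc__ = unbound.__doc__
            return wrapped
        return decorator
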
=== modified file 'bzrlib/repofmt/pack_repo.py'
--- bzrlib/repofmt/pack_repo.py 2009-09-08 05:51:36 +0000
+++ bzrlib/repofmt/pack_repo.py 2009-10-15 18:31:13 +0000
@@ -54,7 +54,7 @@
54 revision as _mod_revision,54 revision as _mod_revision,
55 )55 )
5656
57from bzrlib.decorators import needs_write_lock57from bzrlib.decorators import needs_write_lock, only_raises
58from bzrlib.btree_index import (58from bzrlib.btree_index import (
59 BTreeGraphIndex,59 BTreeGraphIndex,
60 BTreeBuilder,60 BTreeBuilder,
@@ -73,6 +73,7 @@
73 )73 )
74from bzrlib.trace import (74from bzrlib.trace import (
75 mutter,75 mutter,
76 note,
76 warning,77 warning,
77 )78 )
7879
@@ -224,10 +225,14 @@
224 return self.index_name('text', name)225 return self.index_name('text', name)
225226
226 def _replace_index_with_readonly(self, index_type):227 def _replace_index_with_readonly(self, index_type):
228 unlimited_cache = False
229 if index_type == 'chk':
230 unlimited_cache = True
227 setattr(self, index_type + '_index',231 setattr(self, index_type + '_index',
228 self.index_class(self.index_transport,232 self.index_class(self.index_transport,
229 self.index_name(index_type, self.name),233 self.index_name(index_type, self.name),
230 self.index_sizes[self.index_offset(index_type)]))234 self.index_sizes[self.index_offset(index_type)],
235 unlimited_cache=unlimited_cache))
231236
232237
233class ExistingPack(Pack):238class ExistingPack(Pack):
@@ -1674,7 +1679,7 @@
1674 txt_index = self._make_index(name, '.tix')1679 txt_index = self._make_index(name, '.tix')
1675 sig_index = self._make_index(name, '.six')1680 sig_index = self._make_index(name, '.six')
1676 if self.chk_index is not None:1681 if self.chk_index is not None:
1677 chk_index = self._make_index(name, '.cix')1682 chk_index = self._make_index(name, '.cix', unlimited_cache=True)
1678 else:1683 else:
1679 chk_index = None1684 chk_index = None
1680 result = ExistingPack(self._pack_transport, name, rev_index,1685 result = ExistingPack(self._pack_transport, name, rev_index,
@@ -1699,7 +1704,8 @@
1699 txt_index = self._make_index(name, '.tix', resume=True)1704 txt_index = self._make_index(name, '.tix', resume=True)
1700 sig_index = self._make_index(name, '.six', resume=True)1705 sig_index = self._make_index(name, '.six', resume=True)
1701 if self.chk_index is not None:1706 if self.chk_index is not None:
1702 chk_index = self._make_index(name, '.cix', resume=True)1707 chk_index = self._make_index(name, '.cix', resume=True,
1708 unlimited_cache=True)
1703 else:1709 else:
1704 chk_index = None1710 chk_index = None
1705 result = self.resumed_pack_factory(name, rev_index, inv_index,1711 result = self.resumed_pack_factory(name, rev_index, inv_index,
@@ -1735,7 +1741,7 @@
1735 return self._index_class(self.transport, 'pack-names', None1741 return self._index_class(self.transport, 'pack-names', None
1736 ).iter_all_entries()1742 ).iter_all_entries()
17371743
1738 def _make_index(self, name, suffix, resume=False):1744 def _make_index(self, name, suffix, resume=False, unlimited_cache=False):
1739 size_offset = self._suffix_offsets[suffix]1745 size_offset = self._suffix_offsets[suffix]
1740 index_name = name + suffix1746 index_name = name + suffix
1741 if resume:1747 if resume:
@@ -1744,7 +1750,8 @@
1744 else:1750 else:
1745 transport = self._index_transport1751 transport = self._index_transport
1746 index_size = self._names[name][size_offset]1752 index_size = self._names[name][size_offset]
1747 return self._index_class(transport, index_name, index_size)1753 return self._index_class(transport, index_name, index_size,
1754 unlimited_cache=unlimited_cache)
17481755
1749 def _max_pack_count(self, total_revisions):1756 def _max_pack_count(self, total_revisions):
1750 """Return the maximum number of packs to use for total revisions.1757 """Return the maximum number of packs to use for total revisions.
@@ -2300,6 +2307,9 @@
2300 if self._write_lock_count == 1:2307 if self._write_lock_count == 1:
2301 self._transaction = transactions.WriteTransaction()2308 self._transaction = transactions.WriteTransaction()
2302 if not locked:2309 if not locked:
2310 if 'relock' in debug.debug_flags and self._prev_lock == 'w':
2311 note('%r was write locked again', self)
2312 self._prev_lock = 'w'
2303 for repo in self._fallback_repositories:2313 for repo in self._fallback_repositories:
2304 # Writes don't affect fallback repos2314 # Writes don't affect fallback repos
2305 repo.lock_read()2315 repo.lock_read()
@@ -2312,6 +2322,9 @@
2312 else:2322 else:
2313 self.control_files.lock_read()2323 self.control_files.lock_read()
2314 if not locked:2324 if not locked:
2325 if 'relock' in debug.debug_flags and self._prev_lock == 'r':
2326 note('%r was read locked again', self)
2327 self._prev_lock = 'r'
2315 for repo in self._fallback_repositories:2328 for repo in self._fallback_repositories:
2316 repo.lock_read()2329 repo.lock_read()
2317 self._refresh_data()2330 self._refresh_data()
@@ -2345,6 +2358,7 @@
2345 packer = ReconcilePacker(collection, packs, extension, revs)2358 packer = ReconcilePacker(collection, packs, extension, revs)
2346 return packer.pack(pb)2359 return packer.pack(pb)
23472360
2361 @only_raises(errors.LockNotHeld, errors.LockBroken)
2348 def unlock(self):2362 def unlock(self):
2349 if self._write_lock_count == 1 and self._write_group is not None:2363 if self._write_lock_count == 1 and self._write_group is not None:
2350 self.abort_write_group()2364 self.abort_write_group()
23512365
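The unlimited_cache flag threaded through _make_index above ends up deciding which node caches the BTreeGraphIndex uses. The constructor change itself lives in bzrlib/btree_index.py (not shown in this excerpt); a hedged sketch of the selection, using the cache classes and values that test_supports_unlimited_cache later in this diff asserts:

    from bzrlib import fifo_cache, lru_cache

    def _choose_node_caches(unlimited_cache, node_cache_size):
        # Sketch only: chk pages are numerous and read once, so never evict them.
        if unlimited_cache:
            return dict(), dict()          # leaf cache, internal-node cache
        leaf_cache = lru_cache.LRUCache(max_cache=node_cache_size)
        internal_cache = fifo_cache.FIFOCache(max_cache=100)
        return leaf_cache, internal_cache
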
=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py 2009-09-24 04:54:19 +0000
+++ bzrlib/repository.py 2009-10-15 18:31:14 +0000
@@ -49,7 +49,8 @@
49from bzrlib.testament import Testament49from bzrlib.testament import Testament
50""")50""")
5151
52from bzrlib.decorators import needs_read_lock, needs_write_lock52from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
53from bzrlib.lock import _RelockDebugMixin
53from bzrlib.inter import InterObject54from bzrlib.inter import InterObject
54from bzrlib.inventory import (55from bzrlib.inventory import (
55 Inventory,56 Inventory,
@@ -856,7 +857,7 @@
856# Repositories857# Repositories
857858
858859
859class Repository(object):860class Repository(_RelockDebugMixin):
860 """Repository holding history for one or more branches.861 """Repository holding history for one or more branches.
861862
862 The repository holds and retrieves historical information including863 The repository holds and retrieves historical information including
@@ -1381,6 +1382,7 @@
1381 locked = self.is_locked()1382 locked = self.is_locked()
1382 result = self.control_files.lock_write(token=token)1383 result = self.control_files.lock_write(token=token)
1383 if not locked:1384 if not locked:
1385 self._note_lock('w')
1384 for repo in self._fallback_repositories:1386 for repo in self._fallback_repositories:
1385 # Writes don't affect fallback repos1387 # Writes don't affect fallback repos
1386 repo.lock_read()1388 repo.lock_read()
@@ -1391,6 +1393,7 @@
1391 locked = self.is_locked()1393 locked = self.is_locked()
1392 self.control_files.lock_read()1394 self.control_files.lock_read()
1393 if not locked:1395 if not locked:
1396 self._note_lock('r')
1394 for repo in self._fallback_repositories:1397 for repo in self._fallback_repositories:
1395 repo.lock_read()1398 repo.lock_read()
1396 self._refresh_data()1399 self._refresh_data()
@@ -1720,6 +1723,7 @@
1720 self.start_write_group()1723 self.start_write_group()
1721 return result1724 return result
17221725
1726 @only_raises(errors.LockNotHeld, errors.LockBroken)
1723 def unlock(self):1727 def unlock(self):
1724 if (self.control_files._lock_count == 1 and1728 if (self.control_files._lock_count == 1 and
1725 self.control_files._lock_mode == 'w'):1729 self.control_files._lock_mode == 'w'):
@@ -4315,6 +4319,13 @@
4315 ):4319 ):
4316 if versioned_file is None:4320 if versioned_file is None:
4317 continue4321 continue
4322 # TODO: key is often going to be a StaticTuple object
4323 # I don't believe we can define a method by which
4324 # (prefix,) + StaticTuple will work, though we could
4325 # define a StaticTuple.sq_concat that would allow you to
4326 # pass in either a tuple or a StaticTuple as the second
4327 # object, so instead we could have:
4328 # StaticTuple(prefix) + key here...
4318 missing_keys.update((prefix,) + key for key in4329 missing_keys.update((prefix,) + key for key in
4319 versioned_file.get_missing_compression_parent_keys())4330 versioned_file.get_missing_compression_parent_keys())
4320 except NotImplementedError:4331 except NotImplementedError:
43214332
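Repository (and the remote objects earlier in this diff) now mixes in lock._RelockDebugMixin and calls self._note_lock('r') / self._note_lock('w') when a lock is first taken. The mixin itself is added in bzrlib/lock.py, which this excerpt does not show; a sketch of what such a helper plausibly looks like, following the inline 'relock' debug-flag pattern visible in the pack_repo.py hunk above:

    from bzrlib import debug
    from bzrlib.trace import note

    class _RelockDebugMixin(object):
        """Sketch: note when an object is re-locked in the same mode."""

        _prev_lock = None

        def _note_lock(self, lock_type):
            if 'relock' in debug.debug_flags and self._prev_lock == lock_type:
                if lock_type == 'r':
                    type_name = 'read'
                else:
                    type_name = 'write'
                note('%r was %s locked again', self, type_name)
            self._prev_lock = lock_type
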
=== modified file 'bzrlib/send.py'
--- bzrlib/send.py 2009-07-17 14:41:02 +0000
+++ bzrlib/send.py 2009-10-15 18:31:13 +0000
@@ -115,14 +115,13 @@
115 ).get_user_option_as_bool('send_strict')115 ).get_user_option_as_bool('send_strict')
116 if strict is None: strict = True # default value116 if strict is None: strict = True # default value
117 if strict and tree is not None:117 if strict and tree is not None:
118 if (tree.has_changes(tree.basis_tree())118 if (tree.has_changes()):
119 or len(tree.get_parent_ids()) > 1):
120 raise errors.UncommittedChanges(119 raise errors.UncommittedChanges(
121 tree, more='Use --no-strict to force the send.')120 tree, more='Use --no-strict to force the send.')
122 if tree.last_revision() != tree.branch.last_revision():121 if tree.last_revision() != tree.branch.last_revision():
123 # The tree has lost sync with its branch, there is little122 # The tree has lost sync with its branch, there is little
124 # chance that the user is aware of it but he can still force123 # chance that the user is aware of it but he can still force
125 # the push with --no-strict124 # the send with --no-strict
126 raise errors.OutOfDateTree(125 raise errors.OutOfDateTree(
127 tree, more='Use --no-strict to force the send.')126 tree, more='Use --no-strict to force the send.')
128 revision_id = branch.last_revision()127 revision_id = branch.last_revision()
129128
=== added file 'bzrlib/static_tuple.py'
--- bzrlib/static_tuple.py 1970-01-01 00:00:00 +0000
+++ bzrlib/static_tuple.py 2009-10-15 18:31:14 +0000
@@ -0,0 +1,25 @@
1# Copyright (C) 2009 Canonical Ltd
2#
3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by
5# the Free Software Foundation; either version 2 of the License, or
6# (at your option) any later version.
7#
8# This program is distributed in the hope that it will be useful,
9# but WITHOUT ANY WARRANTY; without even the implied warranty of
10# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11# GNU General Public License for more details.
12#
13# You should have received a copy of the GNU General Public License
14# along with this program; if not, write to the Free Software
15# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
16
17"""Interface thunk for a StaticTuple implementation."""
18
19try:
20 from bzrlib._static_tuple_c import StaticTuple
21except ImportError, e:
22 from bzrlib import osutils
23 osutils.failed_to_load_extension(e)
24 from bzrlib._static_tuple_py import StaticTuple
25
026
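Callers are expected to import StaticTuple from this thunk rather than from a specific implementation, so they transparently get the C extension when it compiled and the pure-Python fallback otherwise. A short usage illustration (the key values here are made up):

    from bzrlib.static_tuple import StaticTuple

    key = StaticTuple('file-id', 'revision-id')
    # StaticTuples hash and compare like plain tuples, per the tests below.
    assert key == ('file-id', 'revision-id')
    assert hash(key) == hash(('file-id', 'revision-id'))
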
=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2009-10-08 14:59:54 +0000
+++ bzrlib/tests/__init__.py 2009-10-15 18:31:14 +0000
@@ -222,6 +222,10 @@
222 '%s is leaking threads among %d leaking tests.\n' % (222 '%s is leaking threads among %d leaking tests.\n' % (
223 TestCase._first_thread_leaker_id,223 TestCase._first_thread_leaker_id,
224 TestCase._leaking_threads_tests))224 TestCase._leaking_threads_tests))
225 # We don't report the main thread as an active one.
226 self.stream.write(
227 '%d non-main threads were left active in the end.\n'
228 % (TestCase._active_threads - 1))
225229
226 def _extractBenchmarkTime(self, testCase):230 def _extractBenchmarkTime(self, testCase):
227 """Add a benchmark time for the current test case."""231 """Add a benchmark time for the current test case."""
@@ -846,7 +850,13 @@
846 active = threading.activeCount()850 active = threading.activeCount()
847 leaked_threads = active - TestCase._active_threads851 leaked_threads = active - TestCase._active_threads
848 TestCase._active_threads = active852 TestCase._active_threads = active
849 if leaked_threads:853 # If some tests make the number of threads *decrease*, we'll consider
854         # that they are just observing old threads dying, rather than
855         # aggressively killing random threads, so we don't report them as
856         # leaking. The risk is a false negative (a test sees 2 threads go away
857         # but leaks one), but that seems less likely than the false positives
858         # we would otherwise report (a test sees threads go away and leaks none).
859 if leaked_threads > 0:
850 TestCase._leaking_threads_tests += 1860 TestCase._leaking_threads_tests += 1
851 if TestCase._first_thread_leaker_id is None:861 if TestCase._first_thread_leaker_id is None:
852 TestCase._first_thread_leaker_id = self.id()862 TestCase._first_thread_leaker_id = self.id()
@@ -1146,6 +1156,25 @@
1146 self.fail("Incorrect length: wanted %d, got %d for %r" % (1156 self.fail("Incorrect length: wanted %d, got %d for %r" % (
1147 length, len(obj_with_len), obj_with_len))1157 length, len(obj_with_len), obj_with_len))
11481158
1159 def assertLogsError(self, exception_class, func, *args, **kwargs):
1160 """Assert that func(*args, **kwargs) quietly logs a specific exception.
1161 """
1162 from bzrlib import trace
1163 captured = []
1164 orig_log_exception_quietly = trace.log_exception_quietly
1165 try:
1166 def capture():
1167 orig_log_exception_quietly()
1168 captured.append(sys.exc_info())
1169 trace.log_exception_quietly = capture
1170 func(*args, **kwargs)
1171 finally:
1172 trace.log_exception_quietly = orig_log_exception_quietly
1173 self.assertLength(1, captured)
1174 err = captured[0][1]
1175 self.assertIsInstance(err, exception_class)
1176 return err
1177
1149 def assertPositive(self, val):1178 def assertPositive(self, val):
1150 """Assert that val is greater than 0."""1179 """Assert that val is greater than 0."""
1151 self.assertTrue(val > 0, 'expected a positive value, but got %s' % val)1180 self.assertTrue(val > 0, 'expected a positive value, but got %s' % val)
@@ -3661,6 +3690,7 @@
3661 'bzrlib.tests.per_repository',3690 'bzrlib.tests.per_repository',
3662 'bzrlib.tests.per_repository_chk',3691 'bzrlib.tests.per_repository_chk',
3663 'bzrlib.tests.per_repository_reference',3692 'bzrlib.tests.per_repository_reference',
3693 'bzrlib.tests.per_uifactory',
3664 'bzrlib.tests.per_versionedfile',3694 'bzrlib.tests.per_versionedfile',
3665 'bzrlib.tests.per_workingtree',3695 'bzrlib.tests.per_workingtree',
3666 'bzrlib.tests.test__annotator',3696 'bzrlib.tests.test__annotator',
36673697
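assertLogsError pairs with the only_raises decorator: it temporarily wraps trace.log_exception_quietly, runs the callable, asserts that exactly one exception of the given class was logged, and returns it for further inspection. Typical use, as seen in the per_branch and per_repository tests later in this diff:

    # err is the logged exception instance, so callers can assert on its details.
    err = self.assertLogsError(errors.BzrError, repo.unlock)
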
=== modified file 'bzrlib/tests/blackbox/test_merge.py'
--- bzrlib/tests/blackbox/test_merge.py 2009-09-09 15:43:52 +0000
+++ bzrlib/tests/blackbox/test_merge.py 2009-10-15 18:31:13 +0000
@@ -605,7 +605,7 @@
605605
606 def test_merge_force(self):606 def test_merge_force(self):
607 self.tree_a.commit('empty change to allow merge to run')607 self.tree_a.commit('empty change to allow merge to run')
608 # Second merge on top if the uncommitted one608 # Second merge on top of the uncommitted one
609 self.run_bzr(['merge', '../a', '--force'], working_dir='b')609 self.run_bzr(['merge', '../a', '--force'], working_dir='b')
610610
611611
612612
=== modified file 'bzrlib/tests/blackbox/test_uncommit.py'
--- bzrlib/tests/blackbox/test_uncommit.py 2009-04-04 02:50:01 +0000
+++ bzrlib/tests/blackbox/test_uncommit.py 2009-10-15 18:31:14 +0000
@@ -233,14 +233,14 @@
233 tree3.commit('unchanged', rev_id='c3')233 tree3.commit('unchanged', rev_id='c3')
234234
235 wt.merge_from_branch(tree2.branch)235 wt.merge_from_branch(tree2.branch)
236 wt.merge_from_branch(tree3.branch)236 wt.merge_from_branch(tree3.branch, force=True)
237 wt.commit('merge b3, c3', rev_id='a3')237 wt.commit('merge b3, c3', rev_id='a3')
238238
239 tree2.commit('unchanged', rev_id='b4')239 tree2.commit('unchanged', rev_id='b4')
240 tree3.commit('unchanged', rev_id='c4')240 tree3.commit('unchanged', rev_id='c4')
241241
242 wt.merge_from_branch(tree3.branch)242 wt.merge_from_branch(tree3.branch)
243 wt.merge_from_branch(tree2.branch)243 wt.merge_from_branch(tree2.branch, force=True)
244 wt.commit('merge b4, c4', rev_id='a4')244 wt.commit('merge b4, c4', rev_id='a4')
245245
246 self.assertEqual(['a4'], wt.get_parent_ids())246 self.assertEqual(['a4'], wt.get_parent_ids())
247247
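The added force=True arguments reflect a behaviour change elsewhere in this branch (in MutableTree.merge_from_branch, not shown in this excerpt): merging into a working tree that already has an uncommitted pending merge is now refused unless forced. The call pattern is simply:

    wt.merge_from_branch(tree2.branch)               # first merge is fine
    wt.merge_from_branch(tree3.branch, force=True)   # tree already has a pending merge
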
=== modified file 'bzrlib/tests/lock_helpers.py'
--- bzrlib/tests/lock_helpers.py 2009-04-15 07:30:34 +0000
+++ bzrlib/tests/lock_helpers.py 2009-10-15 18:31:14 +0000
@@ -17,6 +17,7 @@
17"""Helper functions/classes for testing locking"""17"""Helper functions/classes for testing locking"""
1818
19from bzrlib import errors19from bzrlib import errors
20from bzrlib.decorators import only_raises
2021
2122
22class TestPreventLocking(errors.LockError):23class TestPreventLocking(errors.LockError):
@@ -68,6 +69,7 @@
68 return self._other.lock_write()69 return self._other.lock_write()
69 raise TestPreventLocking('lock_write disabled')70 raise TestPreventLocking('lock_write disabled')
7071
72 @only_raises(errors.LockNotHeld, errors.LockBroken)
71 def unlock(self):73 def unlock(self):
72 self._sequence.append((self._other_id, 'ul', self._allow_unlock))74 self._sequence.append((self._other_id, 'ul', self._allow_unlock))
73 if self._allow_unlock:75 if self._allow_unlock:
7476
=== modified file 'bzrlib/tests/per_branch/test_locking.py'
--- bzrlib/tests/per_branch/test_locking.py 2009-07-10 10:46:00 +0000
+++ bzrlib/tests/per_branch/test_locking.py 2009-10-15 18:31:14 +0000
@@ -139,7 +139,7 @@
139 try:139 try:
140 self.assertTrue(b.is_locked())140 self.assertTrue(b.is_locked())
141 self.assertTrue(b.repository.is_locked())141 self.assertTrue(b.repository.is_locked())
142 self.assertRaises(TestPreventLocking, b.unlock)142 self.assertLogsError(TestPreventLocking, b.unlock)
143 if self.combined_control:143 if self.combined_control:
144 self.assertTrue(b.is_locked())144 self.assertTrue(b.is_locked())
145 else:145 else:
@@ -183,7 +183,7 @@
183 try:183 try:
184 self.assertTrue(b.is_locked())184 self.assertTrue(b.is_locked())
185 self.assertTrue(b.repository.is_locked())185 self.assertTrue(b.repository.is_locked())
186 self.assertRaises(TestPreventLocking, b.unlock)186 self.assertLogsError(TestPreventLocking, b.unlock)
187 self.assertTrue(b.is_locked())187 self.assertTrue(b.is_locked())
188 self.assertTrue(b.repository.is_locked())188 self.assertTrue(b.repository.is_locked())
189189
190190
=== modified file 'bzrlib/tests/per_repository/test_write_group.py'
--- bzrlib/tests/per_repository/test_write_group.py 2009-09-08 06:25:26 +0000
+++ bzrlib/tests/per_repository/test_write_group.py 2009-10-15 18:31:14 +0000
@@ -84,7 +84,7 @@
84 # don't need a specific exception for now - this is84 # don't need a specific exception for now - this is
85 # really to be sure it's used right, not for signalling85 # really to be sure it's used right, not for signalling
86 # semantic information.86 # semantic information.
87 self.assertRaises(errors.BzrError, repo.unlock)87 self.assertLogsError(errors.BzrError, repo.unlock)
88 # after this error occurs, the repository is unlocked, and the write88 # after this error occurs, the repository is unlocked, and the write
89 # group is gone. you've had your chance, and you blew it. ;-)89 # group is gone. you've had your chance, and you blew it. ;-)
90 self.assertFalse(repo.is_locked())90 self.assertFalse(repo.is_locked())
9191
=== modified file 'bzrlib/tests/per_repository_chk/test_supported.py'
--- bzrlib/tests/per_repository_chk/test_supported.py 2009-09-08 06:25:26 +0000
+++ bzrlib/tests/per_repository_chk/test_supported.py 2009-10-15 18:31:14 +0000
@@ -17,8 +17,10 @@
17"""Tests for repositories that support CHK indices."""17"""Tests for repositories that support CHK indices."""
1818
19from bzrlib import (19from bzrlib import (
20 btree_index,
20 errors,21 errors,
21 osutils,22 osutils,
23 repository,
22 )24 )
23from bzrlib.versionedfile import VersionedFiles25from bzrlib.versionedfile import VersionedFiles
24from bzrlib.tests.per_repository_chk import TestCaseWithRepositoryCHK26from bzrlib.tests.per_repository_chk import TestCaseWithRepositoryCHK
@@ -108,6 +110,39 @@
108 finally:110 finally:
109 repo.unlock()111 repo.unlock()
110112
113 def test_chk_bytes_are_fully_buffered(self):
114 repo = self.make_repository('.')
115 repo.lock_write()
116 self.addCleanup(repo.unlock)
117 repo.start_write_group()
118 try:
119 sha1, len, _ = repo.chk_bytes.add_lines((None,),
120 None, ["foo\n", "bar\n"], random_id=True)
121 self.assertEqual('4e48e2c9a3d2ca8a708cb0cc545700544efb5021',
122 sha1)
123 self.assertEqual(
124 set([('sha1:4e48e2c9a3d2ca8a708cb0cc545700544efb5021',)]),
125 repo.chk_bytes.keys())
126 except:
127 repo.abort_write_group()
128 raise
129 else:
130 repo.commit_write_group()
131 # This may not always be correct if we change away from BTreeGraphIndex
132         # in the future. But for now, let's check that chk_bytes are fully
133 # buffered
134 index = repo.chk_bytes._index._graph_index._indices[0]
135 self.assertIsInstance(index, btree_index.BTreeGraphIndex)
136 self.assertIs(type(index._leaf_node_cache), dict)
137 # Re-opening the repository should also have a repo with everything
138 # fully buffered
139 repo2 = repository.Repository.open(self.get_url())
140 repo2.lock_read()
141 self.addCleanup(repo2.unlock)
142 index = repo2.chk_bytes._index._graph_index._indices[0]
143 self.assertIsInstance(index, btree_index.BTreeGraphIndex)
144 self.assertIs(type(index._leaf_node_cache), dict)
145
111146
112class TestCommitWriteGroupIntegrityCheck(TestCaseWithRepositoryCHK):147class TestCommitWriteGroupIntegrityCheck(TestCaseWithRepositoryCHK):
113 """Tests that commit_write_group prevents various kinds of invalid data148 """Tests that commit_write_group prevents various kinds of invalid data
114149
=== added directory 'bzrlib/tests/per_uifactory'
=== added file 'bzrlib/tests/per_uifactory/__init__.py'
--- bzrlib/tests/per_uifactory/__init__.py 1970-01-01 00:00:00 +0000
+++ bzrlib/tests/per_uifactory/__init__.py 2009-10-15 18:31:14 +0000
@@ -0,0 +1,148 @@
1# Copyright (C) 2009 Canonical Ltd
2#
3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by
5# the Free Software Foundation; either version 2 of the License, or
6# (at your option) any later version.
7#
8# This program is distributed in the hope that it will be useful,
9# but WITHOUT ANY WARRANTY; without even the implied warranty of
10# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11# GNU General Public License for more details.
12#
13# You should have received a copy of the GNU General Public License
14# along with this program; if not, write to the Free Software
15# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
16
17"""Tests run per UIFactory."""
18
19# Testing UIFactories is a bit interesting because we require they all support a
20# common interface, but the way they implement it can vary very widely. Between
21# text, batch-mode, graphical and other potential UIFactories, the requirements
22# to set up a factory, to make it respond to requests, and to simulate user
23# input can vary a lot.
24#
25# We want tests that therefore allow for the evaluation of the result to vary
26# per implementation, but we want to check that the supported facilities are
27 # the same across all UIFactories, unless they're specifically skipped.
28#
29# Our normal approach is to use test scenarios but that seems to just end up
30# creating test-like objects inside the scenario. Therefore we fall back to
31# the older method of putting the common tests in a mixin.
32#
33 # Plugins that add new UIFactories can create their own subclasses.
34
35
36from cStringIO import StringIO
37import unittest
38
39
40from bzrlib import (
41 tests,
42 ui,
43 )
44
45
46class UIFactoryTestMixin(object):
47 """Common tests for UIFactories.
48
49 These are supposed to be expressed with no assumptions about how the
50 UIFactory implements the method, only that it does implement them (or
51 fails cleanly), and that the concrete subclass will make arrangements to
52 build a factory and to examine its behaviour.
53
54 Note that this is *not* a TestCase, because it can't be directly run, but
55 the concrete subclasses should be.
56 """
57
58 def test_note(self):
59 self.factory.note("a note to the user")
60 self._check_note("a note to the user")
61
62 def test_show_error(self):
63 msg = 'an error occurred'
64 self.factory.show_error(msg)
65 self._check_show_error(msg)
66
67 def test_show_message(self):
68 msg = 'a message'
69 self.factory.show_message(msg)
70 self._check_show_message(msg)
71
72 def test_show_warning(self):
73 msg = 'a warning'
74 self.factory.show_warning(msg)
75 self._check_show_warning(msg)
76
77
78class TestTextUIFactory(tests.TestCase, UIFactoryTestMixin):
79
80 def setUp(self):
81 super(TestTextUIFactory, self).setUp()
82 self.stdin = StringIO()
83 self.stdout = StringIO()
84 self.stderr = StringIO()
85 self.factory = ui.text.TextUIFactory(self.stdin, self.stdout,
86 self.stderr)
87
88 def _check_note(self, note_text):
89 self.assertEquals("%s\n" % note_text,
90 self.stdout.getvalue())
91
92 def _check_show_error(self, msg):
93 self.assertEquals("bzr: error: %s\n" % msg,
94 self.stderr.getvalue())
95 self.assertEquals("", self.stdout.getvalue())
96
97 def _check_show_message(self, msg):
98 self.assertEquals("%s\n" % msg,
99 self.stdout.getvalue())
100 self.assertEquals("", self.stderr.getvalue())
101
102 def _check_show_warning(self, msg):
103 self.assertEquals("bzr: warning: %s\n" % msg,
104 self.stderr.getvalue())
105 self.assertEquals("", self.stdout.getvalue())
106
107
108class TestSilentUIFactory(tests.TestCase, UIFactoryTestMixin):
109 # discards output, therefore tests for output expect nothing
110
111 def setUp(self):
112 super(TestSilentUIFactory, self).setUp()
113 self.factory = ui.SilentUIFactory()
114
115 def _check_note(self, note_text):
116 # it's just discarded
117 pass
118
119 def _check_show_error(self, msg):
120 pass
121
122 def _check_show_message(self, msg):
123 pass
124
125 def _check_show_warning(self, msg):
126 pass
127
128
129class TestCannedInputUIFactory(tests.TestCase, UIFactoryTestMixin):
130 # discards output, reads input from variables
131
132 def setUp(self):
133 super(TestCannedInputUIFactory, self).setUp()
134 self.factory = ui.CannedInputUIFactory([])
135
136 def _check_note(self, note_text):
137 pass
138
139 def _check_show_error(self, msg):
140 pass
141
142 def _check_show_message(self, msg):
143 pass
144
145 def _check_show_warning(self, msg):
146 pass
147
148
0149
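As the module comments note, plugins that provide their own UIFactory can reuse the mixin by supplying setUp and the _check_* hooks. A hedged sketch of such a subclass; MyPluginUIFactory and its recording attributes are invented for illustration:

    class TestMyPluginUIFactory(tests.TestCase, UIFactoryTestMixin):

        def setUp(self):
            super(TestMyPluginUIFactory, self).setUp()
            self.factory = MyPluginUIFactory()  # hypothetical plugin factory

        def _check_note(self, note_text):
            # However the plugin records notes; a list attribute is assumed here.
            self.assertEqual([note_text], self.factory.notes)

        def _check_show_error(self, msg):
            self.assertEqual([msg], self.factory.errors)

        def _check_show_message(self, msg):
            self.assertEqual([msg], self.factory.messages)

        def _check_show_warning(self, msg):
            self.assertEqual([msg], self.factory.warnings)
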
=== modified file 'bzrlib/tests/test__simple_set.py'
--- bzrlib/tests/test__simple_set.py 2009-10-08 04:35:01 +0000
+++ bzrlib/tests/test__simple_set.py 2009-10-15 18:31:14 +0000
@@ -30,6 +30,51 @@
30 _simple_set_pyx = None30 _simple_set_pyx = None
3131
3232
33class _Hashable(object):
34 """A simple object which has a fixed hash value.
35
36 We could have used an 'int', but it turns out that Int objects don't
37 implement tp_richcompare...
38 """
39
40 def __init__(self, the_hash):
41 self.hash = the_hash
42
43 def __hash__(self):
44 return self.hash
45
46 def __eq__(self, other):
47 if not isinstance(other, _Hashable):
48 return NotImplemented
49 return other.hash == self.hash
50
51
52class _BadSecondHash(_Hashable):
53
54 def __init__(self, the_hash):
55 _Hashable.__init__(self, the_hash)
56 self._first = True
57
58 def __hash__(self):
59 if self._first:
60 self._first = False
61 return self.hash
62 else:
63 raise ValueError('I can only be hashed once.')
64
65
66class _BadCompare(_Hashable):
67
68 def __eq__(self, other):
69 raise RuntimeError('I refuse to play nice')
70
71
72class _NoImplementCompare(_Hashable):
73
74 def __eq__(self, other):
75 return NotImplemented
76
77
33# Even though this is an extension, we don't permute the tests for a python78# Even though this is an extension, we don't permute the tests for a python
34# version. As the plain python version is just a dict or set79# version. As the plain python version is just a dict or set
3580
@@ -62,6 +107,9 @@
62 def assertFillState(self, used, fill, mask, obj):107 def assertFillState(self, used, fill, mask, obj):
63 self.assertEqual((used, fill, mask), (obj.used, obj.fill, obj.mask))108 self.assertEqual((used, fill, mask), (obj.used, obj.fill, obj.mask))
64109
110 def assertLookup(self, offset, value, obj, key):
111 self.assertEqual((offset, value), obj._test_lookup(key))
112
65 def assertRefcount(self, count, obj):113 def assertRefcount(self, count, obj):
66 """Assert that the refcount for obj is what we expect.114 """Assert that the refcount for obj is what we expect.
67115
@@ -81,58 +129,88 @@
81 self.assertFillState(0, 0, 0x3ff, obj)129 self.assertFillState(0, 0, 0x3ff, obj)
82130
83 def test__lookup(self):131 def test__lookup(self):
84 # The tuple hash function is rather good at entropy. For all integers132 # These are carefully chosen integers to force hash collisions in the
85 # 0=>1023, hash((i,)) & 1023 maps to a unique output, and hash((i,j))133 # algorithm, based on the initial set size of 1024
86 # maps to all 1024 fields evenly.134 obj = self.module.SimpleSet()
87 # However, hash((c,d))& 1023 for characters has an uneven distribution135 self.assertLookup(643, '<null>', obj, _Hashable(643))
88 # of collisions, for example:136 self.assertLookup(643, '<null>', obj, _Hashable(643 + 1024))
89 # ('a', 'a'), ('f', '4'), ('p', 'r'), ('q', '1'), ('F', 'T'),137 self.assertLookup(643, '<null>', obj, _Hashable(643 + 50*1024))
90 # ('Q', 'Q'), ('V', 'd'), ('7', 'C')138
91 # all collide @ 643139 def test__lookup_collision(self):
92 obj = self.module.SimpleSet()140 obj = self.module.SimpleSet()
93 offset, val = obj._test_lookup(('a', 'a'))141 k1 = _Hashable(643)
94 self.assertEqual(643, offset)142 k2 = _Hashable(643 + 1024)
95 self.assertEqual('<null>', val)143 self.assertLookup(643, '<null>', obj, k1)
96 offset, val = obj._test_lookup(('f', '4'))144 self.assertLookup(643, '<null>', obj, k2)
97 self.assertEqual(643, offset)145 obj.add(k1)
98 self.assertEqual('<null>', val)146 self.assertLookup(643, k1, obj, k1)
99 offset, val = obj._test_lookup(('p', 'r'))147 self.assertLookup(644, '<null>', obj, k2)
100 self.assertEqual(643, offset)148
101 self.assertEqual('<null>', val)149 def test__lookup_after_resize(self):
150 obj = self.module.SimpleSet()
151 k1 = _Hashable(643)
152 k2 = _Hashable(643 + 1024)
153 obj.add(k1)
154 obj.add(k2)
155 self.assertLookup(643, k1, obj, k1)
156 self.assertLookup(644, k2, obj, k2)
157 obj._py_resize(2047) # resized to 2048
158 self.assertEqual(2048, obj.mask + 1)
159 self.assertLookup(643, k1, obj, k1)
160 self.assertLookup(643+1024, k2, obj, k2)
161 obj._py_resize(1023) # resized back to 1024
162 self.assertEqual(1024, obj.mask + 1)
163 self.assertLookup(643, k1, obj, k1)
164 self.assertLookup(644, k2, obj, k2)
102165
103 def test_get_set_del_with_collisions(self):166 def test_get_set_del_with_collisions(self):
104 obj = self.module.SimpleSet()167 obj = self.module.SimpleSet()
105 k1 = ('a', 'a')168
106 k2 = ('f', '4') # collides169 h1 = 643
107 k3 = ('p', 'r')170 h2 = 643 + 1024
108 k4 = ('q', '1')171 h3 = 643 + 1024*50
109 self.assertEqual((643, '<null>'), obj._test_lookup(k1))172 h4 = 643 + 1024*25
110 self.assertEqual((643, '<null>'), obj._test_lookup(k2))173 h5 = 644
111 self.assertEqual((643, '<null>'), obj._test_lookup(k3))174 h6 = 644 + 1024
112 self.assertEqual((643, '<null>'), obj._test_lookup(k4))175
176 k1 = _Hashable(h1)
177 k2 = _Hashable(h2)
178 k3 = _Hashable(h3)
179 k4 = _Hashable(h4)
180 k5 = _Hashable(h5)
181 k6 = _Hashable(h6)
182 self.assertLookup(643, '<null>', obj, k1)
183 self.assertLookup(643, '<null>', obj, k2)
184 self.assertLookup(643, '<null>', obj, k3)
185 self.assertLookup(643, '<null>', obj, k4)
186 self.assertLookup(644, '<null>', obj, k5)
187 self.assertLookup(644, '<null>', obj, k6)
113 obj.add(k1)188 obj.add(k1)
114 self.assertIn(k1, obj)189 self.assertIn(k1, obj)
115 self.assertNotIn(k2, obj)190 self.assertNotIn(k2, obj)
116 self.assertNotIn(k3, obj)191 self.assertNotIn(k3, obj)
117 self.assertNotIn(k4, obj)192 self.assertNotIn(k4, obj)
118 self.assertEqual((643, k1), obj._test_lookup(k1))193 self.assertLookup(643, k1, obj, k1)
119 self.assertEqual((787, '<null>'), obj._test_lookup(k2))194 self.assertLookup(644, '<null>', obj, k2)
120 self.assertEqual((787, '<null>'), obj._test_lookup(k3))195 self.assertLookup(644, '<null>', obj, k3)
121 self.assertEqual((787, '<null>'), obj._test_lookup(k4))196 self.assertLookup(644, '<null>', obj, k4)
197 self.assertLookup(644, '<null>', obj, k5)
198 self.assertLookup(644, '<null>', obj, k6)
122 self.assertIs(k1, obj[k1])199 self.assertIs(k1, obj[k1])
123 obj.add(k2)200 self.assertIs(k2, obj.add(k2))
124 self.assertIs(k2, obj[k2])201 self.assertIs(k2, obj[k2])
125 self.assertEqual((643, k1), obj._test_lookup(k1))202 self.assertLookup(643, k1, obj, k1)
126 self.assertEqual((787, k2), obj._test_lookup(k2))203 self.assertLookup(644, k2, obj, k2)
127 self.assertEqual((660, '<null>'), obj._test_lookup(k3))204 self.assertLookup(646, '<null>', obj, k3)
128 # Even though k4 collides for the first couple of iterations, the hash205 self.assertLookup(646, '<null>', obj, k4)
129 # perturbation uses the full width hash (not just the masked value), so206 self.assertLookup(645, '<null>', obj, k5)
130 # it now diverges207 self.assertLookup(645, '<null>', obj, k6)
131 self.assertEqual((180, '<null>'), obj._test_lookup(k4))208 self.assertLookup(643, k1, obj, _Hashable(h1))
132 self.assertEqual((643, k1), obj._test_lookup(('a', 'a')))209 self.assertLookup(644, k2, obj, _Hashable(h2))
133 self.assertEqual((787, k2), obj._test_lookup(('f', '4')))210 self.assertLookup(646, '<null>', obj, _Hashable(h3))
134 self.assertEqual((660, '<null>'), obj._test_lookup(('p', 'r')))211 self.assertLookup(646, '<null>', obj, _Hashable(h4))
135 self.assertEqual((180, '<null>'), obj._test_lookup(('q', '1')))212 self.assertLookup(645, '<null>', obj, _Hashable(h5))
213 self.assertLookup(645, '<null>', obj, _Hashable(h6))
136 obj.add(k3)214 obj.add(k3)
137 self.assertIs(k3, obj[k3])215 self.assertIs(k3, obj[k3])
138 self.assertIn(k1, obj)216 self.assertIn(k1, obj)
@@ -140,11 +218,11 @@
140 self.assertIn(k3, obj)218 self.assertIn(k3, obj)
141 self.assertNotIn(k4, obj)219 self.assertNotIn(k4, obj)
142220
143 del obj[k1]221 obj.discard(k1)
144 self.assertEqual((643, '<dummy>'), obj._test_lookup(k1))222 self.assertLookup(643, '<dummy>', obj, k1)
145 self.assertEqual((787, k2), obj._test_lookup(k2))223 self.assertLookup(644, k2, obj, k2)
146 self.assertEqual((660, k3), obj._test_lookup(k3))224 self.assertLookup(646, k3, obj, k3)
147 self.assertEqual((643, '<dummy>'), obj._test_lookup(k4))225 self.assertLookup(643, '<dummy>', obj, k4)
148 self.assertNotIn(k1, obj)226 self.assertNotIn(k1, obj)
149 self.assertIn(k2, obj)227 self.assertIn(k2, obj)
150 self.assertIn(k3, obj)228 self.assertIn(k3, obj)
@@ -179,7 +257,7 @@
179 self.assertRefcount(2, k1)257 self.assertRefcount(2, k1)
180 self.assertRefcount(1, k2)258 self.assertRefcount(1, k2)
181 # Deleting an entry should remove the fill, but not the used259 # Deleting an entry should remove the fill, but not the used
182 del obj[k1]260 obj.discard(k1)
183 self.assertFillState(0, 1, 0x3ff, obj)261 self.assertFillState(0, 1, 0x3ff, obj)
184 self.assertRefcount(1, k1)262 self.assertRefcount(1, k1)
185 k3 = tuple(['bar'])263 k3 = tuple(['bar'])
@@ -210,23 +288,6 @@
210 self.assertEqual(1, obj.discard(k3))288 self.assertEqual(1, obj.discard(k3))
211 self.assertRefcount(1, k3)289 self.assertRefcount(1, k3)
212290
213 def test__delitem__(self):
214 obj = self.module.SimpleSet()
215 k1 = tuple(['foo'])
216 k2 = tuple(['foo'])
217 k3 = tuple(['bar'])
218 self.assertRefcount(1, k1)
219 self.assertRefcount(1, k2)
220 self.assertRefcount(1, k3)
221 obj.add(k1)
222 self.assertRefcount(2, k1)
223 self.assertRaises(KeyError, obj.__delitem__, k3)
224 self.assertRefcount(1, k3)
225 obj.add(k3)
226 self.assertRefcount(2, k3)
227 del obj[k3]
228 self.assertRefcount(1, k3)
229
230 def test__resize(self):291 def test__resize(self):
231 obj = self.module.SimpleSet()292 obj = self.module.SimpleSet()
232 k1 = ('foo',)293 k1 = ('foo',)
@@ -235,13 +296,13 @@
235 obj.add(k1)296 obj.add(k1)
236 obj.add(k2)297 obj.add(k2)
237 obj.add(k3)298 obj.add(k3)
238 del obj[k2]299 obj.discard(k2)
239 self.assertFillState(2, 3, 0x3ff, obj)300 self.assertFillState(2, 3, 0x3ff, obj)
240 self.assertEqual(1024, obj._py_resize(500))301 self.assertEqual(1024, obj._py_resize(500))
241 # Doesn't change the size, but does change the content302 # Doesn't change the size, but does change the content
242 self.assertFillState(2, 2, 0x3ff, obj)303 self.assertFillState(2, 2, 0x3ff, obj)
243 obj.add(k2)304 obj.add(k2)
244 del obj[k3]305 obj.discard(k3)
245 self.assertFillState(2, 3, 0x3ff, obj)306 self.assertFillState(2, 3, 0x3ff, obj)
246 self.assertEqual(4096, obj._py_resize(4095))307 self.assertEqual(4096, obj._py_resize(4095))
247 self.assertFillState(2, 2, 0xfff, obj)308 self.assertFillState(2, 2, 0xfff, obj)
@@ -250,13 +311,44 @@
250 self.assertNotIn(k3, obj)311 self.assertNotIn(k3, obj)
251 obj.add(k2)312 obj.add(k2)
252 self.assertIn(k2, obj)313 self.assertIn(k2, obj)
253 del obj[k2]314 obj.discard(k2)
254 self.assertEqual((591, '<dummy>'), obj._test_lookup(k2))315 self.assertEqual((591, '<dummy>'), obj._test_lookup(k2))
255 self.assertFillState(1, 2, 0xfff, obj)316 self.assertFillState(1, 2, 0xfff, obj)
256 self.assertEqual(2048, obj._py_resize(1024))317 self.assertEqual(2048, obj._py_resize(1024))
257 self.assertFillState(1, 1, 0x7ff, obj)318 self.assertFillState(1, 1, 0x7ff, obj)
258 self.assertEqual((591, '<null>'), obj._test_lookup(k2))319 self.assertEqual((591, '<null>'), obj._test_lookup(k2))
259320
321 def test_second_hash_failure(self):
322 obj = self.module.SimpleSet()
323 k1 = _BadSecondHash(200)
324 k2 = _Hashable(200)
325 # Should only call hash() one time
326 obj.add(k1)
327 self.assertFalse(k1._first)
328 self.assertRaises(ValueError, obj.add, k2)
329
330 def test_richcompare_failure(self):
331 obj = self.module.SimpleSet()
332 k1 = _Hashable(200)
333 k2 = _BadCompare(200)
334 obj.add(k1)
335 # Tries to compare with k1, fails
336 self.assertRaises(RuntimeError, obj.add, k2)
337
338 def test_richcompare_not_implemented(self):
339 obj = self.module.SimpleSet()
340 # Even though their hashes are the same, tp_richcompare returns
341 # NotImplemented, which means we treat them as not equal
342 k1 = _NoImplementCompare(200)
343 k2 = _NoImplementCompare(200)
344 self.assertLookup(200, '<null>', obj, k1)
345 self.assertLookup(200, '<null>', obj, k2)
346 self.assertIs(k1, obj.add(k1))
347 self.assertLookup(200, k1, obj, k1)
348 self.assertLookup(201, '<null>', obj, k2)
349 self.assertIs(k2, obj.add(k2))
350 self.assertIs(k1, obj[k1])
351
260 def test_add_and_remove_lots_of_items(self):352 def test_add_and_remove_lots_of_items(self):
261 obj = self.module.SimpleSet()353 obj = self.module.SimpleSet()
262 chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890'354 chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890'
@@ -295,5 +387,5 @@
295 # Set changed size387 # Set changed size
296 self.assertRaises(RuntimeError, iterator.next)388 self.assertRaises(RuntimeError, iterator.next)
297 # And even removing an item still causes it to fail389 # And even removing an item still causes it to fail
298 del obj[k2]390 obj.discard(k2)
299 self.assertRaises(RuntimeError, iterator.next)391 self.assertRaises(RuntimeError, iterator.next)
300392
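The particular integers used above (643, 643 + 1024, 643 + 50 * 1024, ...) are chosen so the collisions are easy to verify by hand: the initial table has 1024 slots, so the probe mask is 0x3ff and any hashes that differ by a multiple of 1024 start at the same slot. A small standalone check of that arithmetic:

    mask = 0x3ff                      # 1024 slots - 1
    assert 643 & mask == 643
    assert (643 + 1024) & mask == 643
    assert (643 + 50 * 1024) & mask == 643
    # Once slot 643 is occupied, a colliding key probes onward and, per the
    # tests above, is reported at slot 644.
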
=== modified file 'bzrlib/tests/test__static_tuple.py'
--- bzrlib/tests/test__static_tuple.py 2009-10-07 19:34:22 +0000
+++ bzrlib/tests/test__static_tuple.py 2009-10-15 18:31:14 +0000
@@ -23,6 +23,7 @@
23 _static_tuple_py,23 _static_tuple_py,
24 errors,24 errors,
25 osutils,25 osutils,
26 static_tuple,
26 tests,27 tests,
27 )28 )
2829
@@ -140,6 +141,13 @@
140 self.assertEqual('foo', k[0])141 self.assertEqual('foo', k[0])
141 self.assertEqual('z', k[6])142 self.assertEqual('z', k[6])
142 self.assertEqual('z', k[-1])143 self.assertEqual('z', k[-1])
144 self.assertRaises(IndexError, k.__getitem__, 7)
145 self.assertRaises(IndexError, k.__getitem__, 256+7)
146 self.assertRaises(IndexError, k.__getitem__, 12024)
147 # Python's [] resolver handles the negative arguments, so we can't
148 # really test StaticTuple_item() with negative values.
149 self.assertRaises(TypeError, k.__getitem__, 'not-an-int')
150 self.assertRaises(TypeError, k.__getitem__, '5')
143151
144 def test_refcount(self):152 def test_refcount(self):
145 f = 'fo' + 'oo'153 f = 'fo' + 'oo'
@@ -201,13 +209,29 @@
201 self.assertTrue(k_small <= k_big)209 self.assertTrue(k_small <= k_big)
202 self.assertTrue(k_small < k_big)210 self.assertTrue(k_small < k_big)
203211
212 def assertCompareNoRelation(self, k1, k2):
213 """Run the comparison operators, make sure they do something.
214
215 However, we don't actually care what comes first or second. This is
216 stuff like cross-class comparisons. We don't want to segfault/raise an
217 exception, but we don't care about the sort order.
218 """
219 self.assertFalse(k1 == k2)
220 self.assertTrue(k1 != k2)
221 # Do the comparison, but we don't care about the result
222 k1 >= k2
223 k1 > k2
224 k1 <= k2
225 k1 < k2
226
204 def test_compare_vs_none(self):227 def test_compare_vs_none(self):
205 k1 = self.module.StaticTuple('baz', 'bing')228 k1 = self.module.StaticTuple('baz', 'bing')
206 self.assertCompareDifferent(None, k1)229 self.assertCompareDifferent(None, k1)
207 self.assertCompareDifferent(10, k1)230
208 # Comparison with a string is poorly-defined, I seem to get failures231 def test_compare_cross_class(self):
209 # regardless of which one comes first...232 k1 = self.module.StaticTuple('baz', 'bing')
210 # self.assertCompareDifferent('baz', k1)233 self.assertCompareNoRelation(10, k1)
234 self.assertCompareNoRelation('baz', k1)
211235
212 def test_compare_all_different_same_width(self):236 def test_compare_all_different_same_width(self):
213 k1 = self.module.StaticTuple('baz', 'bing')237 k1 = self.module.StaticTuple('baz', 'bing')
@@ -255,6 +279,16 @@
255 self.assertCompareEqual(k3, (k1, ('foo', 'bar')))279 self.assertCompareEqual(k3, (k1, ('foo', 'bar')))
256 self.assertCompareEqual((k1, ('foo', 'bar')), k3)280 self.assertCompareEqual((k1, ('foo', 'bar')), k3)
257281
282 def test_compare_mixed_depths(self):
283 stuple = self.module.StaticTuple
284 k1 = stuple(stuple('a',), stuple('b',))
285 k2 = stuple(stuple(stuple('c',), stuple('d',)),
286 stuple('b',))
287 # This requires comparing a StaticTuple to a 'string', and then
288 # interpreting that value in the next higher StaticTuple. This used to
289         # generate a PyErr_BadInternalCall. We now fall back to *something*.
290 self.assertCompareNoRelation(k1, k2)
291
258 def test_hash(self):292 def test_hash(self):
259 k = self.module.StaticTuple('foo')293 k = self.module.StaticTuple('foo')
260 self.assertEqual(hash(k), hash(('foo',)))294 self.assertEqual(hash(k), hash(('foo',)))
@@ -276,12 +310,22 @@
276 k = self.module.StaticTuple('foo', 'bar', 'baz', 'bing')310 k = self.module.StaticTuple('foo', 'bar', 'baz', 'bing')
277 self.assertEqual(('foo', 'bar'), k[:2])311 self.assertEqual(('foo', 'bar'), k[:2])
278 self.assertEqual(('baz',), k[2:-1])312 self.assertEqual(('baz',), k[2:-1])
313 try:
314 val = k[::2]
315 except TypeError:
316 # C implementation raises a TypeError, we don't need the
317 # implementation yet, so allow this to pass
318 pass
319 else:
320 # Python implementation uses a regular Tuple, so make sure it gives
321 # the right result
322 self.assertEqual(('foo', 'baz'), val)
279323
280 def test_referents(self):324 def test_referents(self):
281 # We implement tp_traverse so that things like 'meliae' can measure the325 # We implement tp_traverse so that things like 'meliae' can measure the
282 # amount of referenced memory. Unfortunately gc.get_referents() first326 # amount of referenced memory. Unfortunately gc.get_referents() first
283 # checks the IS_GC flag before it traverses anything. So there isn't a327 # checks the IS_GC flag before it traverses anything. We could write a
284 # way to expose it that I can see.328 # helper func, but that won't work for the generic implementation...
285 self.requireFeature(Meliae)329 self.requireFeature(Meliae)
286 from meliae import scanner330 from meliae import scanner
287 strs = ['foo', 'bar', 'baz', 'bing']331 strs = ['foo', 'bar', 'baz', 'bing']
@@ -383,3 +427,13 @@
383 if self.module is _static_tuple_py:427 if self.module is _static_tuple_py:
384 return428 return
385 self.assertIsNot(None, self.module._C_API)429 self.assertIsNot(None, self.module._C_API)
430
431 def test_static_tuple_thunk(self):
432 # Make sure the right implementation is available from
433 # bzrlib.static_tuple.StaticTuple.
434 if self.module is _static_tuple_py:
435 if CompiledStaticTuple.available():
436 # We will be using the C version
437 return
438 self.assertIs(static_tuple.StaticTuple,
439 self.module.StaticTuple)
386440
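assertCompareNoRelation pins down the contract for comparisons with no meaningful ordering (cross-type, or mixed-depth nesting): equality is False, inequality is True, and the ordering operators complete without raising, in whichever direction they resolve. A minimal illustration against the pure-Python implementation:

    from bzrlib._static_tuple_py import StaticTuple

    k = StaticTuple('baz', 'bing')
    assert not (k == 10) and (k != 10)
    # Ordering is unspecified but must not raise (Python 2 permits the comparison).
    k < 10
    k > 10
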
=== modified file 'bzrlib/tests/test_btree_index.py'
--- bzrlib/tests/test_btree_index.py 2009-08-13 19:56:26 +0000
+++ bzrlib/tests/test_btree_index.py 2009-10-15 18:31:13 +0000
@@ -23,6 +23,8 @@
23from bzrlib import (23from bzrlib import (
24 btree_index,24 btree_index,
25 errors,25 errors,
26 fifo_cache,
27 lru_cache,
26 osutils,28 osutils,
27 tests,29 tests,
28 )30 )
@@ -1115,6 +1117,43 @@
1115 self.assertEqual({}, parent_map)1117 self.assertEqual({}, parent_map)
1116 self.assertEqual(set([('one',), ('two',)]), missing_keys)1118 self.assertEqual(set([('one',), ('two',)]), missing_keys)
11171119
1120 def test_supports_unlimited_cache(self):
1121 builder = btree_index.BTreeBuilder(reference_lists=0, key_elements=1)
1122 # We need enough nodes to cause a page split (so we have both an
1123 # internal node and a couple leaf nodes. 500 seems to be enough.)
1124 nodes = self.make_nodes(500, 1, 0)
1125 for node in nodes:
1126 builder.add_node(*node)
1127 stream = builder.finish()
1128 trans = get_transport(self.get_url())
1129 size = trans.put_file('index', stream)
1130 index = btree_index.BTreeGraphIndex(trans, 'index', size)
1131 self.assertEqual(500, index.key_count())
1132 # We have an internal node
1133 self.assertEqual(2, len(index._row_lengths))
1134 # We have at least 2 leaf nodes
1135 self.assertTrue(index._row_lengths[-1] >= 2)
1136 self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache)
1137 self.assertEqual(btree_index._NODE_CACHE_SIZE,
1138 index._leaf_node_cache._max_cache)
1139 self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache)
1140 self.assertEqual(100, index._internal_node_cache._max_cache)
1141 # No change if unlimited_cache=False is passed
1142 index = btree_index.BTreeGraphIndex(trans, 'index', size,
1143 unlimited_cache=False)
1144 self.assertIsInstance(index._leaf_node_cache, lru_cache.LRUCache)
1145 self.assertEqual(btree_index._NODE_CACHE_SIZE,
1146 index._leaf_node_cache._max_cache)
1147 self.assertIsInstance(index._internal_node_cache, fifo_cache.FIFOCache)
1148 self.assertEqual(100, index._internal_node_cache._max_cache)
1149 index = btree_index.BTreeGraphIndex(trans, 'index', size,
1150 unlimited_cache=True)
1151 self.assertIsInstance(index._leaf_node_cache, dict)
1152 self.assertIs(type(index._internal_node_cache), dict)
1153 # Exercise the lookup code
1154 entries = set(index.iter_entries([n[0] for n in nodes]))
1155 self.assertEqual(500, len(entries))
1156
11181157
1119class TestBTreeNodes(BTreeTestCase):1158class TestBTreeNodes(BTreeTestCase):
11201159
11211160
=== modified file 'bzrlib/tests/test_decorators.py'
--- bzrlib/tests/test_decorators.py 2009-03-23 14:59:43 +0000
+++ bzrlib/tests/test_decorators.py 2009-10-15 18:31:14 +0000
@@ -23,11 +23,15 @@
23from bzrlib.tests import TestCase23from bzrlib.tests import TestCase
2424
2525
26def create_decorator_sample(style, except_in_unlock=False):26class SampleUnlockError(Exception):
27 pass
28
29
30def create_decorator_sample(style, unlock_error=None):
27 """Create a DecoratorSample object, using specific lock operators.31 """Create a DecoratorSample object, using specific lock operators.
2832
29 :param style: The type of lock decorators to use (fast/pretty/None)33 :param style: The type of lock decorators to use (fast/pretty/None)
30 :param except_in_unlock: If True, raise an exception during unlock34 :param unlock_error: If specified, an error to raise from unlock.
31 :return: An instantiated DecoratorSample object.35 :return: An instantiated DecoratorSample object.
32 """36 """
3337
@@ -58,10 +62,11 @@
58 def lock_write(self):62 def lock_write(self):
59 self.actions.append('lock_write')63 self.actions.append('lock_write')
6064
65 @decorators.only_raises(SampleUnlockError)
61 def unlock(self):66 def unlock(self):
62 if except_in_unlock:67 if unlock_error:
63 self.actions.append('unlock_fail')68 self.actions.append('unlock_fail')
64 raise KeyError('during unlock')69 raise unlock_error
65 else:70 else:
66 self.actions.append('unlock')71 self.actions.append('unlock')
6772
@@ -119,28 +124,28 @@
119124
120 def test_read_lock_raises_original_error(self):125 def test_read_lock_raises_original_error(self):
121 sam = create_decorator_sample(self._decorator_style,126 sam = create_decorator_sample(self._decorator_style,
122 except_in_unlock=True)127 unlock_error=SampleUnlockError())
123 self.assertRaises(TypeError, sam.fail_during_read)128 self.assertRaises(TypeError, sam.fail_during_read)
124 self.assertEqual(['lock_read', 'fail_during_read', 'unlock_fail'],129 self.assertEqual(['lock_read', 'fail_during_read', 'unlock_fail'],
125 sam.actions)130 sam.actions)
126131
127 def test_write_lock_raises_original_error(self):132 def test_write_lock_raises_original_error(self):
128 sam = create_decorator_sample(self._decorator_style,133 sam = create_decorator_sample(self._decorator_style,
129 except_in_unlock=True)134 unlock_error=SampleUnlockError())
130 self.assertRaises(TypeError, sam.fail_during_write)135 self.assertRaises(TypeError, sam.fail_during_write)
131 self.assertEqual(['lock_write', 'fail_during_write', 'unlock_fail'],136 self.assertEqual(['lock_write', 'fail_during_write', 'unlock_fail'],
132 sam.actions)137 sam.actions)
133138
134 def test_read_lock_raises_unlock_error(self):139 def test_read_lock_raises_unlock_error(self):
135 sam = create_decorator_sample(self._decorator_style,140 sam = create_decorator_sample(self._decorator_style,
136 except_in_unlock=True)141 unlock_error=SampleUnlockError())
137 self.assertRaises(KeyError, sam.frob)142 self.assertRaises(SampleUnlockError, sam.frob)
138 self.assertEqual(['lock_read', 'frob', 'unlock_fail'], sam.actions)143 self.assertEqual(['lock_read', 'frob', 'unlock_fail'], sam.actions)
139144
140 def test_write_lock_raises_unlock_error(self):145 def test_write_lock_raises_unlock_error(self):
141 sam = create_decorator_sample(self._decorator_style,146 sam = create_decorator_sample(self._decorator_style,
142 except_in_unlock=True)147 unlock_error=SampleUnlockError())
143 self.assertRaises(KeyError, sam.bank, 'bar', biz='bing')148 self.assertRaises(SampleUnlockError, sam.bank, 'bar', biz='bing')
144 self.assertEqual(['lock_write', ('bank', 'bar', 'bing'),149 self.assertEqual(['lock_write', ('bank', 'bar', 'bing'),
145 'unlock_fail'], sam.actions)150 'unlock_fail'], sam.actions)
146151
@@ -276,3 +281,21 @@
276 finally:281 finally:
277 decorators.needs_read_lock = cur_read282 decorators.needs_read_lock = cur_read
278 decorators.needs_write_lock = cur_write283 decorators.needs_write_lock = cur_write
284
285
286class TestOnlyRaisesDecorator(TestCase):
287
288 def raise_ZeroDivisionError(self):
289 1/0
290
291 def test_raises_approved_error(self):
292 decorator = decorators.only_raises(ZeroDivisionError)
293 decorated_meth = decorator(self.raise_ZeroDivisionError)
294 self.assertRaises(ZeroDivisionError, decorated_meth)
295
296 def test_quietly_logs_unapproved_errors(self):
297 decorator = decorators.only_raises(IOError)
298 decorated_meth = decorator(self.raise_ZeroDivisionError)
299 self.assertLogsError(ZeroDivisionError, decorated_meth)
300
301
279302
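
The tests above exercise the new ``only_raises`` decorator from
``bzrlib.decorators``. As a rough sketch of the idea only (not the bzrlib
implementation, which logs through ``bzrlib.trace`` rather than the standard
``logging`` module), such a decorator can be written as::

    import logging

    def only_raises(*errors):
        """Allow only the listed exception classes to escape the method."""
        def decorator(unbound):
            def wrapped(*args, **kwargs):
                try:
                    return unbound(*args, **kwargs)
                except errors:
                    raise
                except Exception:
                    # Anything else is logged quietly and swallowed, so a
                    # cleanup failure cannot mask the original exception.
                    logging.exception('error suppressed by only_raises')
            wrapped.__name__ = unbound.__name__
            wrapped.__doc__ = unbound.__doc__
            return wrapped
        return decorator
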
=== modified file 'bzrlib/tests/test_diff.py'
--- bzrlib/tests/test_diff.py 2009-03-31 00:12:10 +0000
+++ bzrlib/tests/test_diff.py 2009-10-15 18:31:14 +0000
@@ -32,6 +32,7 @@
32 external_diff,32 external_diff,
33 internal_diff,33 internal_diff,
34 show_diff_trees,34 show_diff_trees,
35 get_trees_and_branches_to_diff,
35 )36 )
36from bzrlib.errors import BinaryFile, NoDiff, ExecutableMissing37from bzrlib.errors import BinaryFile, NoDiff, ExecutableMissing
37import bzrlib.osutils as osutils38import bzrlib.osutils as osutils
@@ -41,6 +42,8 @@
41import bzrlib._patiencediff_py42import bzrlib._patiencediff_py
42from bzrlib.tests import (Feature, TestCase, TestCaseWithTransport,43from bzrlib.tests import (Feature, TestCase, TestCaseWithTransport,
43 TestCaseInTempDir, TestSkipped)44 TestCaseInTempDir, TestSkipped)
45from bzrlib.revisiontree import RevisionTree
46from bzrlib.revisionspec import RevisionSpec
4447
4548
46class _AttribFeature(Feature):49class _AttribFeature(Feature):
@@ -1382,3 +1385,46 @@
1382 self.assertTrue(os.path.samefile('tree/newname', new_path))1385 self.assertTrue(os.path.samefile('tree/newname', new_path))
1383 # make sure we can create files with the same parent directories1386 # make sure we can create files with the same parent directories
1384 diff_obj._prepare_files('file2-id', 'oldname2', 'newname2')1387 diff_obj._prepare_files('file2-id', 'oldname2', 'newname2')
1388
1389
1390class TestGetTreesAndBranchesToDiff(TestCaseWithTransport):
1391
1392 def test_basic(self):
1393 tree = self.make_branch_and_tree('tree')
1394 (old_tree, new_tree,
1395 old_branch, new_branch,
1396 specific_files, extra_trees) = \
1397 get_trees_and_branches_to_diff(['tree'], None, None, None)
1398
1399 self.assertIsInstance(old_tree, RevisionTree)
1401 self.assertEqual(_mod_revision.NULL_REVISION, old_tree.get_revision_id())
1402 self.assertEqual(tree.basedir, new_tree.basedir)
1403 self.assertEqual(tree.branch.base, old_branch.base)
1404 self.assertEqual(tree.branch.base, new_branch.base)
1405 self.assertIs(None, specific_files)
1406 self.assertIs(None, extra_trees)
1407
1408 def test_with_rev_specs(self):
1409 tree = self.make_branch_and_tree('tree')
1410 self.build_tree_contents([('tree/file', 'oldcontent')])
1411 tree.add('file', 'file-id')
1412 tree.commit('old tree', timestamp=0, rev_id="old-id")
1413 self.build_tree_contents([('tree/file', 'newcontent')])
1414 tree.commit('new tree', timestamp=0, rev_id="new-id")
1415
1416 revisions = [RevisionSpec.from_string('1'),
1417 RevisionSpec.from_string('2')]
1418 (old_tree, new_tree,
1419 old_branch, new_branch,
1420 specific_files, extra_trees) = \
1421 get_trees_and_branches_to_diff(['tree'], revisions, None, None)
1422
1423 self.assertIsInstance(old_tree, RevisionTree)
1424 self.assertEqual("old-id", old_tree.get_revision_id())
1425 self.assertIsInstance(new_tree, RevisionTree)
1426 self.assertEqual("new-id", new_tree.get_revision_id())
1427 self.assertEqual(tree.branch.base, old_branch.base)
1428 self.assertEqual(tree.branch.base, new_branch.base)
1429 self.assertIs(None, specific_files)
1430 self.assertEqual(tree.basedir, extra_trees[0].basedir)
13851431
=== modified file 'bzrlib/tests/test_index.py'
--- bzrlib/tests/test_index.py 2009-08-13 19:56:26 +0000
+++ bzrlib/tests/test_index.py 2009-10-15 18:31:14 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2007 Canonical Ltd1# Copyright (C) 2007, 2009 Canonical Ltd
2#2#
3# This program is free software; you can redistribute it and/or modify3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by4# it under the terms of the GNU General Public License as published by
@@ -1006,6 +1006,15 @@
1006 self.assertEqual(set(), missing_keys)1006 self.assertEqual(set(), missing_keys)
1007 self.assertEqual(set(), search_keys)1007 self.assertEqual(set(), search_keys)
10081008
1009 def test_supports_unlimited_cache(self):
1010 builder = GraphIndexBuilder(0, key_elements=1)
1011 stream = builder.finish()
1012 trans = get_transport(self.get_url())
1013 size = trans.put_file('index', stream)
1014 # It doesn't matter what unlimited_cache does here, just that it can be
1015 # passed
1016 index = GraphIndex(trans, 'index', size, unlimited_cache=True)
1017
10091018
1010class TestCombinedGraphIndex(TestCaseWithMemoryTransport):1019class TestCombinedGraphIndex(TestCaseWithMemoryTransport):
10111020
10121021
=== modified file 'bzrlib/tests/test_msgeditor.py'
--- bzrlib/tests/test_msgeditor.py 2009-08-20 04:09:58 +0000
+++ bzrlib/tests/test_msgeditor.py 2009-10-15 18:31:14 +0000
@@ -93,7 +93,7 @@
93 tree3.commit('Feature Y, based on initial X work.',93 tree3.commit('Feature Y, based on initial X work.',
94 timestamp=1233285960, timezone=0)94 timestamp=1233285960, timezone=0)
95 tree.merge_from_branch(tree2.branch)95 tree.merge_from_branch(tree2.branch)
96 tree.merge_from_branch(tree3.branch)96 tree.merge_from_branch(tree3.branch, force=True)
97 return tree97 return tree
9898
99 def test_commit_template_pending_merges(self):99 def test_commit_template_pending_merges(self):
100100
=== modified file 'bzrlib/tests/test_mutabletree.py'
--- bzrlib/tests/test_mutabletree.py 2009-09-07 23:14:05 +0000
+++ bzrlib/tests/test_mutabletree.py 2009-10-15 18:31:13 +0000
@@ -19,15 +19,18 @@
19Most functionality of MutableTree is tested as part of WorkingTree.19Most functionality of MutableTree is tested as part of WorkingTree.
20"""20"""
2121
22from bzrlib.tests import TestCase22from bzrlib import (
23from bzrlib.mutabletree import MutableTree, MutableTreeHooks23 mutabletree,
2424 tests,
25class TestHooks(TestCase):25 )
26
27
28class TestHooks(tests.TestCase):
2629
27 def test_constructor(self):30 def test_constructor(self):
28 """Check that creating a MutableTreeHooks instance has the right31 """Check that creating a MutableTreeHooks instance has the right
29 defaults."""32 defaults."""
30 hooks = MutableTreeHooks()33 hooks = mutabletree.MutableTreeHooks()
31 self.assertTrue("start_commit" in hooks,34 self.assertTrue("start_commit" in hooks,
32 "start_commit not in %s" % hooks)35 "start_commit not in %s" % hooks)
33 self.assertTrue("post_commit" in hooks,36 self.assertTrue("post_commit" in hooks,
@@ -36,7 +39,25 @@
36 def test_installed_hooks_are_MutableTreeHooks(self):39 def test_installed_hooks_are_MutableTreeHooks(self):
37 """The installed hooks object should be a MutableTreeHooks."""40 """The installed hooks object should be a MutableTreeHooks."""
38 # the installed hooks are saved in self._preserved_hooks.41 # the installed hooks are saved in self._preserved_hooks.
39 self.assertIsInstance(self._preserved_hooks[MutableTree][1],42 self.assertIsInstance(self._preserved_hooks[mutabletree.MutableTree][1],
40 MutableTreeHooks)43 mutabletree.MutableTreeHooks)
4144
4245
46class TestHasChanges(tests.TestCaseWithTransport):
47
48 def setUp(self):
49 super(TestHasChanges, self).setUp()
50 self.tree = self.make_branch_and_tree('tree')
51
52 def test_with_uncommitted_changes(self):
53 self.build_tree(['tree/file'])
54 self.tree.add('file')
55 self.assertTrue(self.tree.has_changes())
56
57 def test_with_pending_merges(self):
58 other_tree = self.tree.bzrdir.sprout('other').open_workingtree()
59 self.build_tree(['other/file'])
60 other_tree.add('file')
61 other_tree.commit('added file')
62 self.tree.merge_from_branch(other_tree.branch)
63 self.assertTrue(self.tree.has_changes())
4364
=== modified file 'bzrlib/tests/test_osutils.py'
--- bzrlib/tests/test_osutils.py 2009-09-19 16:14:10 +0000
+++ bzrlib/tests/test_osutils.py 2009-10-15 18:31:14 +0000
@@ -457,6 +457,49 @@
457 self.failUnlessEqual('work/MixedCaseParent/nochild', actual)457 self.failUnlessEqual('work/MixedCaseParent/nochild', actual)
458458
459459
460class Test_CICPCanonicalRelpath(tests.TestCaseWithTransport):
461
462 def assertRelpath(self, expected, base, path):
463 actual = osutils._cicp_canonical_relpath(base, path)
464 self.assertEqual(expected, actual)
465
466 def test_simple(self):
467 self.build_tree(['MixedCaseName'])
468 base = osutils.realpath(self.get_transport('.').local_abspath('.'))
469 self.assertRelpath('MixedCaseName', base, 'mixedcAsename')
470
471 def test_subdir_missing_tail(self):
472 self.build_tree(['MixedCaseParent/', 'MixedCaseParent/a_child'])
473 base = osutils.realpath(self.get_transport('.').local_abspath('.'))
474 self.assertRelpath('MixedCaseParent/a_child', base,
475 'MixedCaseParent/a_child')
476 self.assertRelpath('MixedCaseParent/a_child', base,
477 'MixedCaseParent/A_Child')
478 self.assertRelpath('MixedCaseParent/not_child', base,
479 'MixedCaseParent/not_child')
480
481 def test_at_root_slash(self):
482 # We can't test this on Windows, because it has a 'MIN_ABS_PATHLENGTH'
483 # check...
484 if osutils.MIN_ABS_PATHLENGTH > 1:
485 raise tests.TestSkipped('relpath requires %d chars'
486 % osutils.MIN_ABS_PATHLENGTH)
487 self.assertRelpath('foo', '/', '/foo')
488
489 def test_at_root_drive(self):
490 if sys.platform != 'win32':
491 raise tests.TestNotApplicable('we can only test drive-letter relative'
492 ' paths on Windows where we have drive'
493 ' letters.')
494 # see bug #322807
495 # The specific issue is that when at the root of a drive, 'abspath'
496 # returns "C:/" or just "/". However, the code assumes that abspath
497 # always returns something like "C:/foo" or "/foo" (no trailing slash).
498 self.assertRelpath('foo', 'C:/', 'C:/foo')
499 self.assertRelpath('foo', 'X:/', 'X:/foo')
500 self.assertRelpath('foo', 'X:/', 'X://foo')
501
502
460class TestPumpFile(tests.TestCase):503class TestPumpFile(tests.TestCase):
461 """Test pumpfile method."""504 """Test pumpfile method."""
462505
463506
=== modified file 'bzrlib/tests/test_reconfigure.py'
--- bzrlib/tests/test_reconfigure.py 2009-04-28 20:12:44 +0000
+++ bzrlib/tests/test_reconfigure.py 2009-10-15 18:31:14 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2007 Canonical Ltd1# Copyright (C) 2007, 2008, 2009 Canonical Ltd
2#2#
3# This program is free software; you can redistribute it and/or modify3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by4# it under the terms of the GNU General Public License as published by
@@ -44,6 +44,19 @@
44 self.assertRaises(errors.NoWorkingTree, workingtree.WorkingTree.open,44 self.assertRaises(errors.NoWorkingTree, workingtree.WorkingTree.open,
45 'tree')45 'tree')
4646
47 def test_tree_with_pending_merge_to_branch(self):
48 tree = self.make_branch_and_tree('tree')
49 other_tree = tree.bzrdir.sprout('other').open_workingtree()
50 self.build_tree(['other/file'])
51 other_tree.add('file')
52 other_tree.commit('file added')
53 tree.merge_from_branch(other_tree.branch)
54 reconfiguration = reconfigure.Reconfigure.to_branch(tree.bzrdir)
55 self.assertRaises(errors.UncommittedChanges, reconfiguration.apply)
56 reconfiguration.apply(force=True)
57 self.assertRaises(errors.NoWorkingTree, workingtree.WorkingTree.open,
58 'tree')
59
47 def test_branch_to_branch(self):60 def test_branch_to_branch(self):
48 branch = self.make_branch('branch')61 branch = self.make_branch('branch')
49 self.assertRaises(errors.AlreadyBranch,62 self.assertRaises(errors.AlreadyBranch,
5063
=== modified file 'bzrlib/tests/test_remote.py'
--- bzrlib/tests/test_remote.py 2009-09-24 05:31:23 +0000
+++ bzrlib/tests/test_remote.py 2009-10-15 18:31:14 +0000
@@ -2201,6 +2201,26 @@
2201 repo.get_rev_id_for_revno, 5, (42, 'rev-foo'))2201 repo.get_rev_id_for_revno, 5, (42, 'rev-foo'))
2202 self.assertFinished(client)2202 self.assertFinished(client)
22032203
2204 def test_branch_fallback_locking(self):
2205 """RemoteBranch.get_rev_id takes a read lock, and tries to call the
2206 get_rev_id_for_revno verb. If the verb is unknown the VFS fallback
2207 will be invoked, which will fail if the repo is unlocked.
2208 """
2209 self.setup_smart_server_with_call_log()
2210 tree = self.make_branch_and_memory_tree('.')
2211 tree.lock_write()
2212 rev1 = tree.commit('First')
2213 rev2 = tree.commit('Second')
2214 tree.unlock()
2215 branch = tree.branch
2216 self.assertFalse(branch.is_locked())
2217 self.reset_smart_call_log()
2218 verb = 'Repository.get_rev_id_for_revno'
2219 self.disable_verb(verb)
2220 self.assertEqual(rev1, branch.get_rev_id(1))
2221 self.assertLength(1, [call for call in self.hpss_calls if
2222 call.call.method == verb])
2223
22042224
2205class TestRepositoryIsShared(TestRemoteRepository):2225class TestRepositoryIsShared(TestRemoteRepository):
22062226
22072227
=== modified file 'bzrlib/tests/test_status.py'
--- bzrlib/tests/test_status.py 2009-08-20 04:09:58 +0000
+++ bzrlib/tests/test_status.py 2009-10-15 18:31:13 +0000
@@ -53,7 +53,7 @@
53 tree2.commit('commit 3b', timestamp=1196796819, timezone=0)53 tree2.commit('commit 3b', timestamp=1196796819, timezone=0)
54 tree3.commit('commit 3c', timestamp=1196796819, timezone=0)54 tree3.commit('commit 3c', timestamp=1196796819, timezone=0)
55 tree.merge_from_branch(tree2.branch)55 tree.merge_from_branch(tree2.branch)
56 tree.merge_from_branch(tree3.branch)56 tree.merge_from_branch(tree3.branch, force=True)
57 return tree57 return tree
5858
59 def test_multiple_pending(self):59 def test_multiple_pending(self):
6060
=== modified file 'bzrlib/tests/test_ui.py'
--- bzrlib/tests/test_ui.py 2009-09-24 08:56:52 +0000
+++ bzrlib/tests/test_ui.py 2009-10-15 18:31:14 +0000
@@ -54,7 +54,7 @@
54 )54 )
5555
5656
57class UITests(tests.TestCase):57class TestTextUIFactory(tests.TestCase):
5858
59 def test_text_factory_ascii_password(self):59 def test_text_factory_ascii_password(self):
60 ui = tests.TestUIFactory(stdin='secret\n',60 ui = tests.TestUIFactory(stdin='secret\n',
@@ -100,56 +100,6 @@
100 finally:100 finally:
101 pb.finished()101 pb.finished()
102102
103 def test_progress_construction(self):
104 """TextUIFactory constructs the right progress view.
105 """
106 for (file_class, term, pb, expected_pb_class) in (
107 # on an xterm, either use them or not as the user requests,
108 # otherwise default on
109 (_TTYStringIO, 'xterm', 'none', NullProgressView),
110 (_TTYStringIO, 'xterm', 'text', TextProgressView),
111 (_TTYStringIO, 'xterm', None, TextProgressView),
112 # on a dumb terminal, again if there's explicit configuration do
113 # it, otherwise default off
114 (_TTYStringIO, 'dumb', 'none', NullProgressView),
115 (_TTYStringIO, 'dumb', 'text', TextProgressView),
116 (_TTYStringIO, 'dumb', None, NullProgressView),
117 # on a non-tty terminal, it's null regardless of $TERM
118 (StringIO, 'xterm', None, NullProgressView),
119 (StringIO, 'dumb', None, NullProgressView),
120 # however, it can still be forced on
121 (StringIO, 'dumb', 'text', TextProgressView),
122 ):
123 os.environ['TERM'] = term
124 if pb is None:
125 if 'BZR_PROGRESS_BAR' in os.environ:
126 del os.environ['BZR_PROGRESS_BAR']
127 else:
128 os.environ['BZR_PROGRESS_BAR'] = pb
129 stdin = file_class('')
130 stderr = file_class()
131 stdout = file_class()
132 uif = make_ui_for_terminal(stdin, stdout, stderr)
133 self.assertIsInstance(uif, TextUIFactory,
134 "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,))
135 self.assertIsInstance(uif.make_progress_view(),
136 expected_pb_class,
137 "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,))
138
139 def test_text_ui_non_terminal(self):
140 """Even on non-ttys, make_ui_for_terminal gives a text ui."""
141 stdin = _NonTTYStringIO('')
142 stderr = _NonTTYStringIO()
143 stdout = _NonTTYStringIO()
144 for term_type in ['dumb', None, 'xterm']:
145 if term_type is None:
146 del os.environ['TERM']
147 else:
148 os.environ['TERM'] = term_type
149 uif = make_ui_for_terminal(stdin, stdout, stderr)
150 self.assertIsInstance(uif, TextUIFactory,
151 'TERM=%r' % (term_type,))
152
153 def test_progress_note(self):103 def test_progress_note(self):
154 stderr = StringIO()104 stderr = StringIO()
155 stdout = StringIO()105 stdout = StringIO()
@@ -304,6 +254,59 @@
304 pb.finished()254 pb.finished()
305255
306256
257class UITests(tests.TestCase):
258
259 def test_progress_construction(self):
260 """TextUIFactory constructs the right progress view.
261 """
262 for (file_class, term, pb, expected_pb_class) in (
263 # on an xterm, either use them or not as the user requests,
264 # otherwise default on
265 (_TTYStringIO, 'xterm', 'none', NullProgressView),
266 (_TTYStringIO, 'xterm', 'text', TextProgressView),
267 (_TTYStringIO, 'xterm', None, TextProgressView),
268 # on a dumb terminal, again if there's explicit configuration do
269 # it, otherwise default off
270 (_TTYStringIO, 'dumb', 'none', NullProgressView),
271 (_TTYStringIO, 'dumb', 'text', TextProgressView),
272 (_TTYStringIO, 'dumb', None, NullProgressView),
273 # on a non-tty terminal, it's null regardless of $TERM
274 (StringIO, 'xterm', None, NullProgressView),
275 (StringIO, 'dumb', None, NullProgressView),
276 # however, it can still be forced on
277 (StringIO, 'dumb', 'text', TextProgressView),
278 ):
279 os.environ['TERM'] = term
280 if pb is None:
281 if 'BZR_PROGRESS_BAR' in os.environ:
282 del os.environ['BZR_PROGRESS_BAR']
283 else:
284 os.environ['BZR_PROGRESS_BAR'] = pb
285 stdin = file_class('')
286 stderr = file_class()
287 stdout = file_class()
288 uif = make_ui_for_terminal(stdin, stdout, stderr)
289 self.assertIsInstance(uif, TextUIFactory,
290 "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,))
291 self.assertIsInstance(uif.make_progress_view(),
292 expected_pb_class,
293 "TERM=%s BZR_PROGRESS_BAR=%s uif=%r" % (term, pb, uif,))
294
295 def test_text_ui_non_terminal(self):
296 """Even on non-ttys, make_ui_for_terminal gives a text ui."""
297 stdin = _NonTTYStringIO('')
298 stderr = _NonTTYStringIO()
299 stdout = _NonTTYStringIO()
300 for term_type in ['dumb', None, 'xterm']:
301 if term_type is None:
302 del os.environ['TERM']
303 else:
304 os.environ['TERM'] = term_type
305 uif = make_ui_for_terminal(stdin, stdout, stderr)
306 self.assertIsInstance(uif, TextUIFactory,
307 'TERM=%r' % (term_type,))
308
309
307class CLIUITests(TestCase):310class CLIUITests(TestCase):
308311
309 def test_cli_factory_deprecated(self):312 def test_cli_factory_deprecated(self):
310313
=== modified file 'bzrlib/ui/__init__.py'
--- bzrlib/ui/__init__.py 2009-07-22 07:34:08 +0000
+++ bzrlib/ui/__init__.py 2009-10-15 18:31:14 +0000
@@ -22,18 +22,18 @@
22Several levels are supported, and you can also register new factories such as22Several levels are supported, and you can also register new factories such as
23for a GUI.23for a GUI.
2424
25UIFactory25bzrlib.ui.UIFactory
26 Semi-abstract base class26 Semi-abstract base class
2727
28SilentUIFactory28bzrlib.ui.SilentUIFactory
29 Produces no output and cannot take any input; useful for programs using29 Produces no output and cannot take any input; useful for programs using
30 bzrlib in batch mode or for programs such as loggerhead.30 bzrlib in batch mode or for programs such as loggerhead.
3131
32CannedInputUIFactory32bzrlib.ui.CannedInputUIFactory
33 For use in testing; the input values to be returned are provided 33 For use in testing; the input values to be returned are provided
34 at construction.34 at construction.
3535
36TextUIFactory36bzrlib.ui.text.TextUIFactory
37 Standard text command-line interface, with stdin, stdout, stderr.37 Standard text command-line interface, with stdin, stdout, stderr.
38 May make more or less advanced use of them, eg in drawing progress bars,38 May make more or less advanced use of them, eg in drawing progress bars,
39 depending on the detected capabilities of the terminal.39 depending on the detected capabilities of the terminal.
@@ -208,6 +208,22 @@
208 """208 """
209 pass209 pass
210210
211 def show_error(self, msg):
212 """Show an error message (not an exception) to the user.
213
214 The message should not have an error prefix or trailing newline. That
215 will be added by the factory if appropriate.
216 """
217 raise NotImplementedError(self.show_error)
218
219 def show_message(self, msg):
220 """Show a message to the user."""
221 raise NotImplementedError(self.show_message)
222
223 def show_warning(self, msg):
224 """Show a warning to the user."""
225 raise NotImplementedError(self.show_warning)
226
211227
212228
213class CLIUIFactory(UIFactory):229class CLIUIFactory(UIFactory):
@@ -318,6 +334,15 @@
318 def get_username(self, prompt, **kwargs):334 def get_username(self, prompt, **kwargs):
319 return None335 return None
320336
337 def show_error(self, msg):
338 pass
339
340 def show_message(self, msg):
341 pass
342
343 def show_warning(self, msg):
344 pass
345
321346
322class CannedInputUIFactory(SilentUIFactory):347class CannedInputUIFactory(SilentUIFactory):
323 """A silent UI that return canned input."""348 """A silent UI that return canned input."""
324349
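
The new ``show_error``/``show_message``/``show_warning`` methods give callers
one place to send user-facing messages instead of writing to stderr directly.
A hypothetical call site (``ui_factory`` is the module-level factory bzrlib
already exposes) would look like::

    from bzrlib import ui

    ui.ui_factory.show_warning('this operation may take a long time')
    ui.ui_factory.show_error('could not contact the server')

With a ``TextUIFactory`` these end up on stderr with the usual
``bzr: warning:`` and ``bzr: error:`` prefixes; a ``SilentUIFactory``
discards them.
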
=== modified file 'bzrlib/ui/text.py'
--- bzrlib/ui/text.py 2009-08-06 02:23:37 +0000
+++ bzrlib/ui/text.py 2009-10-15 18:31:14 +0000
@@ -49,9 +49,6 @@
49 stdout=None,49 stdout=None,
50 stderr=None):50 stderr=None):
51 """Create a TextUIFactory.51 """Create a TextUIFactory.
52
53 :param bar_type: The type of progress bar to create. Deprecated
54 and ignored; a TextProgressView is always used.
55 """52 """
56 super(TextUIFactory, self).__init__()53 super(TextUIFactory, self).__init__()
57 # TODO: there's no good reason not to pass all three streams, maybe we54 # TODO: there's no good reason not to pass all three streams, maybe we
@@ -176,6 +173,17 @@
176 self._progress_view.show_transport_activity(transport,173 self._progress_view.show_transport_activity(transport,
177 direction, byte_count)174 direction, byte_count)
178175
176 def show_error(self, msg):
177 self.clear_term()
178 self.stderr.write("bzr: error: %s\n" % msg)
179
180 def show_message(self, msg):
181 self.note(msg)
182
183 def show_warning(self, msg):
184 self.clear_term()
185 self.stderr.write("bzr: warning: %s\n" % msg)
186
179 def _progress_updated(self, task):187 def _progress_updated(self, task):
180 """A task has been updated and wants to be displayed.188 """A task has been updated and wants to be displayed.
181 """189 """
182190
=== modified file 'bzrlib/util/_bencode_py.py'
--- bzrlib/util/_bencode_py.py 2009-06-10 03:56:49 +0000
+++ bzrlib/util/_bencode_py.py 2009-10-15 18:31:14 +0000
@@ -154,6 +154,13 @@
154 encode_int(int(x), r)154 encode_int(int(x), r)
155 encode_func[BooleanType] = encode_bool155 encode_func[BooleanType] = encode_bool
156156
157try:
158 from bzrlib._static_tuple_c import StaticTuple
159except ImportError:
160 pass
161else:
162 encode_func[StaticTuple] = encode_list
163
157164
158def bencode(x):165def bencode(x):
159 r = []166 r = []
160167
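
Registering ``StaticTuple`` with ``encode_func`` means bencoding treats it
exactly like a list, so keys no longer have to be copied into plain tuples
first. A small illustration (assuming the compiled ``_static_tuple_c``
extension is importable; the registration above is skipped otherwise)::

    from bzrlib.util._bencode_py import bencode
    from bzrlib._static_tuple_c import StaticTuple

    key = StaticTuple('file-id', 'revision-id')
    # a StaticTuple key bencodes to the same bytes as the equivalent list
    assert bencode([key]) == bencode([['file-id', 'revision-id']])
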
=== modified file 'bzrlib/workingtree.py'
--- bzrlib/workingtree.py 2009-08-26 05:38:16 +0000
+++ bzrlib/workingtree.py 2009-10-15 18:31:14 +0000
@@ -896,7 +896,7 @@
896896
897 @needs_write_lock # because merge pulls data into the branch.897 @needs_write_lock # because merge pulls data into the branch.
898 def merge_from_branch(self, branch, to_revision=None, from_revision=None,898 def merge_from_branch(self, branch, to_revision=None, from_revision=None,
899 merge_type=None):899 merge_type=None, force=False):
900 """Merge from a branch into this working tree.900 """Merge from a branch into this working tree.
901901
902 :param branch: The branch to merge from.902 :param branch: The branch to merge from.
@@ -911,9 +911,9 @@
911 merger = Merger(self.branch, this_tree=self, pb=pb)911 merger = Merger(self.branch, this_tree=self, pb=pb)
912 merger.pp = ProgressPhase("Merge phase", 5, pb)912 merger.pp = ProgressPhase("Merge phase", 5, pb)
913 merger.pp.next_phase()913 merger.pp.next_phase()
914 # check that there are no914 # check that there are no local alterations
915 # local alterations915 if not force and self.has_changes():
916 merger.check_basis(check_clean=True, require_commits=False)916 raise errors.UncommittedChanges(self)
917 if to_revision is None:917 if to_revision is None:
918 to_revision = _mod_revision.ensure_null(branch.last_revision())918 to_revision = _mod_revision.ensure_null(branch.last_revision())
919 merger.other_rev_id = to_revision919 merger.other_rev_id = to_revision
920920
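
With this change ``merge_from_branch`` raises ``UncommittedChanges`` up front
instead of going through ``check_basis``, and callers opt out explicitly with
``force=True``. A hypothetical caller that wants the merge to proceed anyway
(``tree`` and ``other_branch`` are placeholders) would do::

    from bzrlib import errors

    try:
        tree.merge_from_branch(other_branch)
    except errors.UncommittedChanges:
        # the tree has local changes or pending merges; merge regardless
        tree.merge_from_branch(other_branch, force=True)
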
=== modified file 'doc/developers/HACKING.txt'
--- doc/developers/HACKING.txt 2009-09-30 14:56:21 +0000
+++ doc/developers/HACKING.txt 2009-10-15 18:31:14 +0000
@@ -671,6 +671,19 @@
671 may not catch every case but it's still useful sometimes.671 may not catch every case but it's still useful sometimes.
672672
673673
674Cleanup methods
675===============
676
677Often when something has failed, later code, including cleanups invoked
678from ``finally`` blocks, will fail too. These secondary failures are
679generally uninteresting compared to the original exception. So use the
680``only_raises`` decorator (from ``bzrlib.decorators``) for methods that
681are typically called in ``finally`` blocks, such as ``unlock`` methods.
682For example, ``@only_raises(LockNotHeld, LockBroken)``. All errors that
683are unlikely to be a knock-on failure from a previous failure should be
684allowed.
685
686
674Factories687Factories
675=========688=========
676689
677690
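
Concretely, the guidance above amounts to decorating cleanup-style methods
like so (sketch only; ``LockNotHeld`` and ``LockBroken`` are existing
``bzrlib.errors`` classes, and ``self._lock`` is a placeholder)::

    from bzrlib.decorators import only_raises
    from bzrlib.errors import LockBroken, LockNotHeld

    class ExampleLockable(object):

        @only_raises(LockNotHeld, LockBroken)
        def unlock(self):
            # Failures other than the two approved lock errors are logged
            # quietly and suppressed, so they cannot mask an exception that
            # is already propagating from the locked region.
            self._lock.release()
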
=== modified file 'doc/developers/releasing.txt'
--- doc/developers/releasing.txt 2009-09-16 08:59:26 +0000
+++ doc/developers/releasing.txt 2009-10-15 18:31:14 +0000
@@ -212,9 +212,20 @@
212we have a releasable product. The next step is to make it generally212we have a releasable product. The next step is to make it generally
213available to the world.213available to the world.
214214
215#. Link from http://bazaar-vcs.org/Download to the tarball and signature.215go to the release
216216
217#. Announce on the `Bazaar home page <http://bazaar-vcs.org/>`_.217#. Within that release, upload the source tarball and zipfile and the GPG
218 signature. Or, if you prefer, use the
219 ``tools/packaging/lp-upload-release`` script to do this.
220
221#. Link from http://bazaar-vcs.org/SourceDownloads to the tarball and
222 signature.
223
224#. Announce on the `Bazaar website <http://bazaar-vcs.org/>`_.
225 This page is edited via the lp:bzr-website branch. (Changes
226 pushed to this branch are refreshed by a cron job on escudero.)
227
228#. Announce on the `Bazaar wiki <http://bazaar-vcs.org/Welcome>`_.
218229
219#. Check that the documentation for this release is available in230#. Check that the documentation for this release is available in
220 <http://doc.bazaar-vcs.org>. It should be automatically build when the231 <http://doc.bazaar-vcs.org>. It should be automatically build when the
221232
=== modified file 'doc/en/upgrade-guide/data_migration.txt'
--- doc/en/upgrade-guide/data_migration.txt 2009-09-09 13:34:08 +0000
+++ doc/en/upgrade-guide/data_migration.txt 2009-10-15 18:31:13 +0000
@@ -30,7 +30,7 @@
30* **upgrade** - migrate data to a different format.30* **upgrade** - migrate data to a different format.
3131
32**reconcile** is rarely needed but it's good practice to run **check**32**reconcile** is rarely needed but it's good practice to run **check**
33before and after runing **upgrade**.33before and after running **upgrade**.
3434
35For detailed help on these commands, see the `Bazaar User Reference`_.35For detailed help on these commands, see the `Bazaar User Reference`_.
3636
@@ -40,7 +40,7 @@
40Communicating with your community40Communicating with your community
41---------------------------------41---------------------------------
4242
43To enable a smooth transistion to the new format, you should:43To enable a smooth transition to the new format, you should:
4444
451. Make one person responsible for migrating the trunk.451. Make one person responsible for migrating the trunk.
4646
@@ -97,22 +97,54 @@
97Migrating branches on Launchpad97Migrating branches on Launchpad
98-------------------------------98-------------------------------
9999
100You have two options for upgrading your Launchpad branches. You can either
101upgrade them remotely or you can upgrade them locally and push the migrated
102branch to Launchpad. We recommend the latter. Upgrading remotely currently
103requires a fast, rock solid network connection to the Launchpad servers, and
104any interruption in that connection can leave you with a partially upgraded
105branch. The instructions below are the safest and often fastest way to
106upgrade your Launchpad branches.
107
100To allow isolation between public and private branches, Launchpad108To allow isolation between public and private branches, Launchpad
101uses stacked branches rather than shared repositories as the core109uses stacked branches rather than shared repositories as the core
102technology for efficient branch storage. The process for migrating110technology for efficient branch storage. The process for migrating
103to a new format for projects using Launchpad code hosting is therefore111to a new format for projects using Launchpad code hosting is therefore
104different to migrating a personal or in-house project.112different to migrating a personal or in-house project.
105113
114In Launchpad, a project can define a *development series* and associate a
115branch with that series. The branch then becomes the *focus of development*
116and gets special treatment and a shortcut url. By default, if anybody
117branches your project's focus of development and pushes changes back to
118Launchpad, their branch will be stacked on your development focus branch.
119Also, branches can be associated with other Launchpad artifacts such as bugs
120and merge proposals. All of these things mean that upgrading your focus of
121development branch is trickier.
122
106Here are the steps to follow:123Here are the steps to follow:
107124
1081. The nominated person grabs a copy of trunk and does the migration.1251. The nominated person grabs a copy of trunk and does the migration locally.
109126
1102. On Launchpad, unset the current trunk from being the development focus.1272. On Launchpad, unset the current trunk from being the development focus.
111 (This *must* be done or the following step won't work as expected.)128 (This *must* be done or the following step won't work as expected.)
112129
1133. Push the migrated trunk to Launchpad.130 1. Go to your project's home page on Launchpad
114131
1154. Set it as the development focus.132 2. Look for "XXX is the current focus of development"
133
134 3. Click on the edit (pencil) icon
135
136 4. Click on "Change details" in the portlet on the right
137
138 5. Scroll down to where it says "Branch: (Optional)"
139
140 6. Blank out this input field and click "Change"
141
1423. Push the migrated trunk to Launchpad. See below if you want your
143 new migrated development focus branch to have the same name as your old
144 pre-migration development focus branch.
145
1464. Set it as the development focus. Follow the instructions above but at step
147 5, enter the name of the newly migrated branch you just pushed.
116148
1175. Ask users subscribed to the old trunk to subscribe to the new one.1495. Ask users subscribed to the old trunk to subscribe to the new one.
118150
@@ -124,6 +156,20 @@
124You are now ready to tell your community that the new trunk is available156You are now ready to tell your community that the new trunk is available
125and to give them instructions on migrating any local branches they have.157and to give them instructions on migrating any local branches they have.
126158
159If you want your new migrated development focus branch to have the same name
160as your old pre-migration branch, you need to do a few extra things before you
161establish the new development focus.
162
1631. Rename your old pre-migration branch; use something like
164 **foo-obsolete-do-not-use**. You really do not want to delete it,
165 because there will be artifacts (bugs, merge proposals, etc.) associated
166 with it.
167
1682. Rename the new migrated branch to the pre-migration branch's old name.
169
1703. Re-establish the development focus branch using the new migrated branch's
171 new name (i.e. the old pre-migration branch's original name).
172
127173
128Migrating local branches after a central trunk has migrated174Migrating local branches after a central trunk has migrated
129-----------------------------------------------------------175-----------------------------------------------------------
130176
=== modified file 'doc/en/user-guide/branching_a_project.txt'
--- doc/en/user-guide/branching_a_project.txt 2009-09-09 15:30:59 +0000
+++ doc/en/user-guide/branching_a_project.txt 2009-10-15 18:31:14 +0000
@@ -16,11 +16,11 @@
16 =========== ======================================================16 =========== ======================================================
17 Prefix Description17 Prefix Description
18 =========== ======================================================18 =========== ======================================================
19 file:// Access using the standard filesystem (default)19 \file:// Access using the standard filesystem (default)
20 sftp:// Access using SFTP (most SSH servers provide SFTP).20 \sftp:// Access using SFTP (most SSH servers provide SFTP).
21 bzr:// Fast access using the Bazaar smart server.21 \bzr:// Fast access using the Bazaar smart server.
22 ftp:// Access using passive FTP.22 \ftp:// Access using passive FTP.
23 http:// Read-only access to branches exported by a web server.23 \http:// Read-only access to branches exported by a web server.
24 =========== ======================================================24 =========== ======================================================
2525
26As indicated above, branches are identified using URLs with the26As indicated above, branches are identified using URLs with the
2727
=== modified file 'setup.py'
--- setup.py 2009-10-08 15:44:41 +0000
+++ setup.py 2009-10-15 18:31:14 +0000
@@ -167,7 +167,13 @@
167from distutils.extension import Extension167from distutils.extension import Extension
168ext_modules = []168ext_modules = []
169try:169try:
170 from Pyrex.Distutils import build_ext170 try:
171 from Pyrex.Distutils import build_ext
172 from Pyrex.Compiler.Version import version as pyrex_version
173 except ImportError:
174 print "No Pyrex, trying Cython..."
175 from Cython.Distutils import build_ext
176 from Cython.Compiler.Version import version as pyrex_version
171except ImportError:177except ImportError:
172 have_pyrex = False178 have_pyrex = False
173 # try to build the extension from the prior generated source.179 # try to build the extension from the prior generated source.
@@ -180,7 +186,6 @@
180 from distutils.command.build_ext import build_ext186 from distutils.command.build_ext import build_ext
181else:187else:
182 have_pyrex = True188 have_pyrex = True
183 from Pyrex.Compiler.Version import version as pyrex_version
184189
185190
186class build_ext_if_possible(build_ext):191class build_ext_if_possible(build_ext):
