Merge lp:~spiv/bzr/better-news-merge into lp:bzr
Status: Superseded
Proposed branch: lp:~spiv/bzr/better-news-merge
Merge into: lp:bzr
Diff against target: 778 lines (+361/-88), 14 files modified, has conflicts
  NEWS (+57/-0)
  bzrlib/cleanup.py (+2/-2)
  bzrlib/help_topics/__init__.py (+5/-0)
  bzrlib/help_topics/en/location-alias.txt (+19/-0)
  bzrlib/log.py (+1/-1)
  bzrlib/merge.py (+54/-30)
  bzrlib/plugins/news_merge/__init__.py (+22/-2)
  bzrlib/plugins/news_merge/news_merge.py (+92/-21)
  bzrlib/plugins/news_merge/parser.py (+5/-1)
  bzrlib/push.py (+6/-0)
  bzrlib/repofmt/pack_repo.py (+32/-27)
  bzrlib/tests/__init__.py (+17/-0)
  bzrlib/tests/per_tree/test_inv.py (+25/-2)
  bzrlib/tree.py (+24/-2)
  Text conflict in NEWS
  Text conflict in bzrlib/merge.py
  Text conflict in bzrlib/tree.py
To merge this branch: bzr merge lp:~spiv/bzr/better-news-merge
Related bugs:
Reviewer: bzr-core (review pending)
Review via email: mp+19169@code.launchpad.net
This proposal has been superseded by a proposal from 2010-02-13.
Commit message
Description of the change
Andrew Bennetts (spiv) wrote:
Robert Collins (lifeless) wrote:
The patch seems to have conflicts and a lot of noise :(
-Rob
Andrew Bennetts (spiv) wrote:
Oh, I wrote the code against 2.1, but proposed against lp:bzr. Hmm.
- 4815. By Andrew Bennetts: Read section order from template file named in config.
- 4816. By Andrew Bennetts: Add a XXX comment for future improvement.
- 4817. By Andrew Bennetts: First steps towards a better NEWS file parser.
- 4818. By Andrew Bennetts: Rename test.
- 4819. By Andrew Bennetts: Merge lp:bzr.
- 4820. By Andrew Bennetts: Merge object-3way-merge.
- 4821. By Andrew Bennetts: Possibly working news_merge built on a richer structure than just lines.
- 4822. By Andrew Bennetts: Fix some simple bugs.
Unmerged revisions
- 4822. By Andrew Bennetts: Fix some simple bugs.
- 4821. By Andrew Bennetts: Possibly working news_merge built on a richer structure than just lines.
- 4820. By Andrew Bennetts: Merge object-3way-merge.
- 4819. By Andrew Bennetts: Merge lp:bzr.
- 4818. By Andrew Bennetts: Rename test.
- 4817. By Andrew Bennetts: First steps towards a better NEWS file parser.
- 4816. By Andrew Bennetts: Add a XXX comment for future improvement.
- 4815. By Andrew Bennetts: Read section order from template file named in config.
- 4814. By Andrew Bennetts: Teach news_merge to handle conflicts involving section headings as well as bullets.
- 4813. By Andrew Bennetts: Add some simple mutters so that it's easy to tell if news_merge has been triggered.
Preview Diff
1 | === modified file 'NEWS' |
2 | --- NEWS 2010-02-12 04:33:05 +0000 |
3 | +++ NEWS 2010-02-12 12:16:24 +0000 |
4 | @@ -103,6 +103,7 @@ |
5 | * Fix "AttributeError in Inter1and2Helper" during fetch. |
6 | (Martin Pool, #513432) |
7 | |
8 | +<<<<<<< TREE |
9 | * Fix ``log`` to better check ancestors even if merged revisions are involved. |
10 | (Vincent Ladeuil, #476293) |
11 | |
12 | @@ -145,6 +146,15 @@ |
13 | |
14 | Testing |
15 | ******* |
16 | +======= |
17 | +* Ignore ``KeyError`` from ``remove_index`` during ``_abort_write_group`` |
18 | + in a pack repository, which can happen harmlessly if the abort occurs during |
19 | + finishing the write group. Also use ``bzrlib.cleanup`` so that any |
20 | + other errors that occur while aborting the individual packs won't be |
21 | + hidden by secondary failures when removing the corresponding indices. |
22 | + (Andrew Bennetts, #423015) |
23 | + |
24 | +>>>>>>> MERGE-SOURCE |
25 | * Using the ``bzrlib.chk_map`` module from within multiple threads at the |
26 | same time was broken due to race conditions with a module level page |
27 | cache. This shows up as a KeyError in the ``bzrlib.lru_cache`` code with |
28 | @@ -157,6 +167,30 @@ |
29 | * The new ``merge_file_content`` should now be ok with tests to avoid |
30 | regressions. |
31 | (Vincent Ladeuil, #515597) |
32 | + |
33 | +Improvements |
34 | +************ |
35 | + |
36 | +* The ``news_merge`` plugin is now smarter. It can resolve conflicts |
37 | + involving section headings as well as bullet points. |
38 | + (Andrew Bennetts) |
39 | + |
40 | +Internals |
41 | +********* |
42 | + |
43 | +* Use ``bzrlib.cleanup`` rather than less robust ``try``/``finally`` |
44 | + blocks in several places in ``bzrlib.merge``. This avoids masking prior |
45 | + errors when errors like ``ImmortalPendingDeletion`` occur during cleanup |
46 | + in ``do_merge``. |
47 | + (Andrew Bennetts, #517275) |
48 | + |
49 | +API Changes |
50 | +*********** |
51 | + |
52 | +* The ``remove_index`` method of |
53 | + ``bzrlib.repofmt.pack_repo.AggregateIndex`` no longer takes a ``pack`` |
54 | + argument. This argument was always ignored. |
55 | + (Andrew Bennetts, #423015) |
56 | |
57 | bzr 2.1.0rc2 |
58 | ############ |
59 | @@ -416,6 +450,29 @@ |
60 | (Martin Pool) |
61 | |
62 | |
63 | +bzr 2.0.5 (not released yet) |
64 | +############################ |
65 | + |
66 | +:Codename: |
67 | +:2.0.5: |
68 | + |
69 | +Bug Fixes |
70 | +********* |
71 | + |
72 | +* Handle renames correctly when there are files or directories that |
73 | + differ only in case. (Chris Jones, Martin Pool, #368931) |
74 | + |
75 | +* If ``bzr push --create-prefix`` triggers an unexpected ``NoSuchFile`` |
76 | + error, report that error rather than failing with an unhelpful |
77 | + ``UnboundLocalError``. |
78 | + (Andrew Bennetts, #423563) |
79 | + |
80 | +Documentation |
81 | +************* |
82 | + |
83 | +* Added ``location-alias`` help topic. |
84 | + (Andrew Bennetts, #337834) |
85 | + |
86 | bzr 2.0.4 |
87 | ######### |
88 | |
89 | |
90 | === modified file 'bzrlib/cleanup.py' |
91 | --- bzrlib/cleanup.py 2010-01-13 23:16:20 +0000 |
92 | +++ bzrlib/cleanup.py 2010-02-12 12:16:23 +0000 |
93 | @@ -31,9 +31,9 @@ |
94 | If you want to be certain that the first, and only the first, error is raised, |
95 | then use:: |
96 | |
97 | - operation = OperationWithCleanups(lambda operation: do_something()) |
98 | + operation = OperationWithCleanups(do_something) |
99 | operation.add_cleanup(cleanup_something) |
100 | - operation.run() |
101 | + operation.run_simple() |
102 | |
103 | This is more inconvenient (because you need to make every try block a |
104 | function), but will ensure that the first error encountered is the one raised, |
105 | |
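The ``bzrlib/cleanup.py`` docstring change above recommends ``run_simple`` so that the first error wins over errors raised by cleanups. A minimal self-contained sketch of that pattern (this toy ``OperationWithCleanups`` is illustrative only, not bzrlib's actual implementation, which logs rather than silently discards cleanup errors):

```python
class OperationWithCleanups:
    """Toy stand-in: run a function, then run registered cleanups,
    ensuring a cleanup error never masks the operation's own error."""

    def __init__(self, func):
        self.func = func
        self.cleanups = []

    def add_cleanup(self, cleanup, *args, **kwargs):
        self.cleanups.append((cleanup, args, kwargs))

    def run_simple(self, *args, **kwargs):
        try:
            return self.func(*args, **kwargs)
        finally:
            # Run cleanups in reverse order; swallow their errors so the
            # first (operation) error is the one that propagates.
            for cleanup, c_args, c_kwargs in reversed(self.cleanups):
                try:
                    cleanup(*c_args, **c_kwargs)
                except Exception:
                    pass


def boom():
    raise ValueError("operation failed")


def bad_cleanup():
    raise RuntimeError("cleanup failed")


# Demo: even though the cleanup also fails, the operation's error survives.
op = OperationWithCleanups(boom)
op.add_cleanup(bad_cleanup)
try:
    op.run_simple()
    first_error = None
except Exception as err:
    first_error = err
```

This is exactly the property the merge.py hunks below rely on: an ``ImmortalPendingDeletion`` raised while unlocking no longer hides the original failure.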
106 | === modified file 'bzrlib/help_topics/__init__.py' |
107 | --- bzrlib/help_topics/__init__.py 2010-02-04 16:06:36 +0000 |
108 | +++ bzrlib/help_topics/__init__.py 2010-02-12 12:16:23 +0000 |
109 | @@ -269,6 +269,9 @@ |
110 | bzr+ssh://remote@shell.example.com/~/myproject/trunk |
111 | |
112 | would refer to ``/home/remote/myproject/trunk``. |
113 | + |
114 | +Many commands that accept URLs also accept location aliases too. See |
115 | +::doc:`location-alias-help`. |
116 | """ |
117 | |
118 | return out |
119 | @@ -758,6 +761,8 @@ |
120 | 'Types of conflicts and what to do about them') |
121 | topic_registry.register('debug-flags', _load_from_file, |
122 | 'Options to show or record debug information') |
123 | +topic_registry.register('location-alias', _load_from_file, |
124 | + 'Aliases for remembered locations') |
125 | topic_registry.register('log-formats', _load_from_file, |
126 | 'Details on the logging formats available') |
127 | |
128 | |
129 | === added file 'bzrlib/help_topics/en/location-alias.txt' |
130 | --- bzrlib/help_topics/en/location-alias.txt 1970-01-01 00:00:00 +0000 |
131 | +++ bzrlib/help_topics/en/location-alias.txt 2010-02-12 12:16:23 +0000 |
132 | @@ -0,0 +1,19 @@ |
133 | +Location aliases |
134 | +================ |
135 | + |
136 | +Bazaar defines several aliases for locations associated with a branch. These |
137 | +can be used with most commands that expect a location, such as `bzr push`. |
138 | + |
139 | +The aliases are:: |
140 | + |
141 | + :parent the parent of this branch |
142 | + :submit the submit branch for this branch |
143 | + :public the public location of this branch |
144 | + :bound the branch this branch is bound to, for bound branches |
145 | + :push the saved location used for `bzr push` with no arguments |
146 | + :this this branch |
147 | + |
148 | +For example, to push to the parent location:: |
149 | + |
150 | + bzr push :parent |
151 | + |
152 | |
153 | === modified file 'bzrlib/log.py' |
154 | --- bzrlib/log.py 2010-02-04 16:06:36 +0000 |
155 | +++ bzrlib/log.py 2010-02-12 12:16:23 +0000 |
156 | @@ -1424,7 +1424,7 @@ |
157 | """ |
158 | # Revision comes directly from a foreign repository |
159 | if isinstance(rev, foreign.ForeignRevision): |
160 | - return rev.mapping.vcs.show_foreign_revid(rev.foreign_revid) |
161 | + return self._format_properties(rev.mapping.vcs.show_foreign_revid(rev.foreign_revid)) |
162 | |
163 | # Imported foreign revision revision ids always contain : |
164 | if not ":" in rev.revision_id: |
165 | |
166 | === modified file 'bzrlib/merge.py' |
167 | --- bzrlib/merge.py 2010-02-09 19:04:02 +0000 |
168 | +++ bzrlib/merge.py 2010-02-12 12:16:23 +0000 |
169 | @@ -37,6 +37,7 @@ |
170 | ui, |
171 | versionedfile |
172 | ) |
173 | +from bzrlib.cleanup import OperationWithCleanups |
174 | from bzrlib.symbol_versioning import ( |
175 | deprecated_in, |
176 | deprecated_method, |
177 | @@ -46,11 +47,10 @@ |
178 | |
179 | def transform_tree(from_tree, to_tree, interesting_ids=None): |
180 | from_tree.lock_tree_write() |
181 | - try: |
182 | - merge_inner(from_tree.branch, to_tree, from_tree, ignore_zero=True, |
183 | - interesting_ids=interesting_ids, this_tree=from_tree) |
184 | - finally: |
185 | - from_tree.unlock() |
186 | + operation = OperationWithCleanups(merge_inner) |
187 | + operation.add_cleanup(from_tree.unlock) |
188 | + operation.run_simple(from_tree.branch, to_tree, from_tree, |
189 | + ignore_zero=True, interesting_ids=interesting_ids, this_tree=from_tree) |
190 | |
191 | |
192 | class MergeHooks(hooks.Hooks): |
193 | @@ -455,6 +455,7 @@ |
194 | def _add_parent(self): |
195 | new_parents = self.this_tree.get_parent_ids() + [self.other_rev_id] |
196 | new_parent_trees = [] |
197 | + operation = OperationWithCleanups(self.this_tree.set_parent_trees) |
198 | for revision_id in new_parents: |
199 | try: |
200 | tree = self.revision_tree(revision_id) |
201 | @@ -462,14 +463,9 @@ |
202 | tree = None |
203 | else: |
204 | tree.lock_read() |
205 | + operation.add_cleanup(tree.unlock) |
206 | new_parent_trees.append((revision_id, tree)) |
207 | - try: |
208 | - self.this_tree.set_parent_trees(new_parent_trees, |
209 | - allow_leftmost_as_ghost=True) |
210 | - finally: |
211 | - for _revision_id, tree in new_parent_trees: |
212 | - if tree is not None: |
213 | - tree.unlock() |
214 | + operation.run_simple(new_parent_trees, allow_leftmost_as_ghost=True) |
215 | |
216 | def set_other(self, other_revision, possible_transports=None): |
217 | """Set the revision and tree to merge from. |
218 | @@ -626,7 +622,8 @@ |
219 | change_reporter=self.change_reporter, |
220 | **kwargs) |
221 | |
222 | - def _do_merge_to(self, merge): |
223 | + def _do_merge_to(self): |
224 | + merge = self.make_merger() |
225 | if self.other_branch is not None: |
226 | self.other_branch.update_references(self.this_branch) |
227 | merge.do_merge() |
228 | @@ -646,26 +643,19 @@ |
229 | sub_tree.branch.repository.revision_tree(base_revision) |
230 | sub_merge.base_rev_id = base_revision |
231 | sub_merge.do_merge() |
232 | + return merge |
233 | |
234 | def do_merge(self): |
235 | + operation = OperationWithCleanups(self._do_merge_to) |
236 | self.this_tree.lock_tree_write() |
237 | - try: |
238 | - if self.base_tree is not None: |
239 | - self.base_tree.lock_read() |
240 | - try: |
241 | - if self.other_tree is not None: |
242 | - self.other_tree.lock_read() |
243 | - try: |
244 | - merge = self.make_merger() |
245 | - self._do_merge_to(merge) |
246 | - finally: |
247 | - if self.other_tree is not None: |
248 | - self.other_tree.unlock() |
249 | - finally: |
250 | - if self.base_tree is not None: |
251 | - self.base_tree.unlock() |
252 | - finally: |
253 | - self.this_tree.unlock() |
254 | + operation.add_cleanup(self.this_tree.unlock) |
255 | + if self.base_tree is not None: |
256 | + self.base_tree.lock_read() |
257 | + operation.add_cleanup(self.base_tree.unlock) |
258 | + if self.other_tree is not None: |
259 | + self.other_tree.lock_read() |
260 | + operation.add_cleanup(self.other_tree.unlock) |
261 | + merge = operation.run_simple() |
262 | if len(merge.cooked_conflicts) == 0: |
263 | if not self.ignore_zero and not trace.is_quiet(): |
264 | trace.note("All changes applied successfully.") |
265 | @@ -765,10 +755,26 @@ |
266 | warnings.warn("pb argument to Merge3Merger is deprecated") |
267 | |
268 | def do_merge(self): |
269 | + operation = OperationWithCleanups(self._do_merge) |
270 | + operation.add_cleanup(self.pb.clear) |
271 | self.this_tree.lock_tree_write() |
272 | + operation.add_cleanup(self.this_tree.unlock) |
273 | self.base_tree.lock_read() |
274 | + operation.add_cleanup(self.base_tree.unlock) |
275 | self.other_tree.lock_read() |
276 | + operation.add_cleanup(self.other_tree.unlock) |
277 | + operation.run() |
278 | + |
279 | + def _do_merge(self, operation): |
280 | + self.tt = transform.TreeTransform(self.this_tree, self.pb) |
281 | + operation.add_cleanup(self.tt.finalize) |
282 | + self.pp.next_phase() |
283 | + self._compute_transform() |
284 | + self.pp.next_phase() |
285 | + results = self.tt.apply(no_conflicts=True) |
286 | + self.write_modified(results) |
287 | try: |
288 | +<<<<<<< TREE |
289 | self.tt = transform.TreeTransform(self.this_tree, None) |
290 | try: |
291 | self._compute_transform() |
292 | @@ -784,16 +790,34 @@ |
293 | self.other_tree.unlock() |
294 | self.base_tree.unlock() |
295 | self.this_tree.unlock() |
296 | +======= |
297 | + self.this_tree.add_conflicts(self.cooked_conflicts) |
298 | + except errors.UnsupportedOperation: |
299 | + pass |
300 | +>>>>>>> MERGE-SOURCE |
301 | |
302 | def make_preview_transform(self): |
303 | + operation = OperationWithCleanups(self._make_preview_transform) |
304 | + operation.add_cleanup(self.pb.clear) |
305 | self.base_tree.lock_read() |
306 | + operation.add_cleanup(self.base_tree.unlock) |
307 | self.other_tree.lock_read() |
308 | + operation.add_cleanup(self.other_tree.unlock) |
309 | + return operation.run_simple() |
310 | + |
311 | + def _make_preview_transform(self): |
312 | self.tt = transform.TransformPreview(self.this_tree) |
313 | +<<<<<<< TREE |
314 | try: |
315 | self._compute_transform() |
316 | finally: |
317 | self.other_tree.unlock() |
318 | self.base_tree.unlock() |
319 | +======= |
320 | + self.pp.next_phase() |
321 | + self._compute_transform() |
322 | + self.pp.next_phase() |
323 | +>>>>>>> MERGE-SOURCE |
324 | return self.tt |
325 | |
326 | def _compute_transform(self): |
327 | |
328 | === modified file 'bzrlib/plugins/news_merge/__init__.py' |
329 | --- bzrlib/plugins/news_merge/__init__.py 2010-01-28 17:27:16 +0000 |
330 | +++ bzrlib/plugins/news_merge/__init__.py 2010-02-12 12:16:24 +0000 |
331 | @@ -26,10 +26,30 @@ |
332 | The news_merge_files config option takes a list of file paths, separated by |
333 | commas. |
334 | |
335 | +The basic approach is that this plugin parses the NEWS file into a simple |
336 | +series of versions, with sections of bullets in those versions. Sections |
337 | +contain a sorted set of bullets, and sections within a version also have a |
338 | +fixed order (see the template at the bottom of NEWS). The plugin merges |
339 | +additions and deletions to the set of bullets (and sections of bullets), then |
340 | +sorts the contents of these sets and turns them back into a series of lines of |
341 | +text. |
342 | + |
343 | Limitations: |
344 | |
345 | -* if there's a conflict in more than just bullet points, this doesn't yet know |
346 | - how to resolve that, so bzr will fallback to the default line-based merge. |
347 | +* invisible whitespace in blank lines is not tracked, so is discarded. (i.e. |
348 | + [newline, space, newline] is collapsed to just [newline, newline]) |
349 | + |
350 | +* empty sections are generally deleted, even if they were present in the |
351 | + originals. |
352 | + |
353 | +* modified sections will typically be reordered to match the standard order (as |
354 | + shown in the template at the bottom of NEWS). |
355 | + |
356 | +* if there's a conflict that involves more than simple sections of bullets, |
357 | + this plugin doesn't know how to handle that. e.g. a conflict in preamble |
358 | + text describing a new version, or sufficiently many conflicts that the |
359 | + matcher thinks a conflict spans a version heading. bzr's builtin merge logic |
360 | + will be tried instead. |
361 | """ |
362 | |
363 | # Since we are a built-in plugin we share the bzrlib version |
364 | |
365 | === modified file 'bzrlib/plugins/news_merge/news_merge.py' |
366 | --- bzrlib/plugins/news_merge/news_merge.py 2010-01-28 18:05:44 +0000 |
367 | +++ bzrlib/plugins/news_merge/news_merge.py 2010-02-12 12:16:24 +0000 |
368 | @@ -18,11 +18,23 @@ |
369 | |
370 | |
371 | from bzrlib.plugins.news_merge.parser import simple_parse |
372 | -from bzrlib import merge, merge3 |
373 | +from bzrlib import merge, merge3, trace |
374 | |
375 | |
376 | magic_marker = '|NEWS-MERGE-MAGIC-MARKER|' |
377 | |
378 | +# The order sections are supposed to appear in. See the template at the |
379 | +# bottom of NEWS. None is a placeholder for an unseen section heading. |
380 | +canonical_section_order = [ |
381 | + None, 'Compatibility Breaks', 'New Features', 'Bug Fixes', 'Improvements', |
382 | + 'Documentation', 'API Changes', 'Internals', 'Testing'] |
383 | + |
384 | +def section_sort_key(section): |
385 | + try: |
386 | + return canonical_section_order.index(section) |
387 | + except ValueError: |
388 | + return 99 |
389 | + |
390 | |
391 | class NewsMerger(merge.ConfigurableFileMerger): |
392 | """Merge bzr NEWS files.""" |
393 | @@ -36,6 +48,7 @@ |
394 | points, so we can simply take a set of bullet points, determine which |
395 | bullets to add and which to remove, sort, and reserialize. |
396 | """ |
397 | + trace.mutter('news_merge triggered') |
398 | # Transform the different versions of the NEWS file into a bunch of |
399 | # text lines where each line matches one part of the overall |
400 | # structure, e.g. a heading or bullet. |
401 | @@ -49,32 +62,90 @@ |
402 | for group in m3.merge_groups(): |
403 | if group[0] == 'conflict': |
404 | _, base, a, b = group |
405 | - # Are all the conflicting lines bullets? If so, we can merge |
406 | - # this. |
407 | - for line_set in [base, a, b]: |
408 | - for line in line_set: |
409 | - if not line.startswith('bullet'): |
410 | - # Something else :( |
411 | - # Maybe the default merge can cope. |
412 | - return 'not_applicable', None |
413 | - # Calculate additions and deletions. |
414 | - new_in_a = set(a).difference(base) |
415 | - new_in_b = set(b).difference(base) |
416 | - all_new = new_in_a.union(new_in_b) |
417 | - deleted_in_a = set(base).difference(a) |
418 | - deleted_in_b = set(base).difference(b) |
419 | - # Combine into the final set of bullet points. |
420 | - final = all_new.difference(deleted_in_a).difference( |
421 | - deleted_in_b) |
422 | - # Sort, and emit. |
423 | - final = sorted(final, key=sort_key) |
424 | - result_lines.extend(final) |
425 | + # Are all the conflicting lines bullets or sections? If so, we |
426 | + # can merge this. |
427 | + try: |
428 | + base_sections = munged_lines_to_section_dict(base) |
429 | + a_sections = munged_lines_to_section_dict(a) |
430 | + b_sections = munged_lines_to_section_dict(b) |
431 | + except MergeTooHard: |
432 | + # Something else :( |
433 | + # Maybe the default merge can cope. |
434 | + trace.mutter('news_merge giving up') |
435 | + return 'not_applicable', None |
436 | + |
437 | + # Basically, for every section present in any version, call |
438 | + # merge_bullets (passing an empty set for versions missing |
439 | + # that section), and if the resulting set of bullets is not |
440 | + # empty, emit the section heading and the sorted set of |
441 | + # bullets. |
442 | + all_sections = set( |
443 | + base_sections.keys() + a_sections.keys() + |
444 | + b_sections.keys()) |
445 | + sections_in_order = sorted(all_sections, key=section_sort_key) |
446 | + for section in sections_in_order: |
447 | + bullets = merge_bullets( |
448 | + base_sections.get(section, set()), |
449 | + a_sections.get(section, set()), |
450 | + b_sections.get(section, set())) |
451 | + if bullets: |
452 | + # Emit section heading (if any), then sorted bullets. |
453 | + if section is not None: |
454 | + result_lines.append( |
455 | + 'section%s%s\n%s' |
456 | + % (magic_marker, section, '*' * len(section))) |
457 | + final = sorted(bullets, key=sort_key) |
458 | + result_lines.extend(final) |
459 | else: |
460 | result_lines.extend(group[1]) |
461 | # Transform the merged elements back into real blocks of lines. |
462 | + trace.mutter('news_merge giving up') |
463 | return 'success', list(fakelines_to_blocks(result_lines)) |
464 | |
465 | |
466 | +def merge_bullets(base_bullets, a_bullets, b_bullets): |
467 | + # Calculate additions and deletions. |
468 | + new_in_a = a_bullets.difference(base_bullets) |
469 | + new_in_b = b_bullets.difference(base_bullets) |
470 | + all_new = new_in_a.union(new_in_b) |
471 | + deleted_in_a = base_bullets.difference(a_bullets) |
472 | + deleted_in_b = base_bullets.difference(b_bullets) |
473 | + # Combine into the final set of bullet points. |
474 | + final = all_new.difference(deleted_in_a).difference(deleted_in_b) |
475 | + return final |
476 | + |
477 | + |
478 | +class MergeTooHard(Exception): |
479 | + pass |
480 | + |
481 | + |
482 | +def munged_lines_to_section_dict(lines): |
483 | + """Takes a sequence of munged lines, and returns a dict mapping section to |
484 | + a set of bullets. |
485 | + |
486 | + :param lines: a sequence of munged lines |
487 | + :raises MergeTooHard: when lines contain anything other than sections or |
488 | + bullets |
489 | + :returns: a dict of section name -> set of munged bullet lines. Any |
490 | + bullets encounted before a section will have a name of None. |
491 | + """ |
492 | + section = None |
493 | + section_dict = {} |
494 | + for line in lines: |
495 | + if line.startswith('section'): |
496 | + section = line[len('section') + len(magic_marker):] |
497 | + section = section.split('\n', 1)[0] |
498 | + elif line.startswith('bullet'): |
499 | + try: |
500 | + bullets = section_dict[section] |
501 | + except KeyError: |
502 | + bullets = section_dict[section] = set() |
503 | + bullets.add(line) |
504 | + else: |
505 | + raise MergeTooHard() |
506 | + return section_dict |
507 | + |
508 | + |
509 | def blocks_to_fakelines(blocks): |
510 | for kind, text in blocks: |
511 | yield '%s%s%s' % (kind, magic_marker, text) |
512 | |
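The set arithmetic at the heart of the new merger can be exercised on its own. This sketch copies ``merge_bullets`` and ``section_sort_key`` from the patch above (verbatim logic, standalone for illustration):

```python
def merge_bullets(base_bullets, a_bullets, b_bullets):
    """Three-way merge of bullet sets: keep everything added on either
    side, drop anything deleted on either side."""
    new_in_a = a_bullets.difference(base_bullets)
    new_in_b = b_bullets.difference(base_bullets)
    all_new = new_in_a.union(new_in_b)
    deleted_in_a = base_bullets.difference(a_bullets)
    deleted_in_b = base_bullets.difference(b_bullets)
    # Combine into the final set of bullet points.
    return all_new.difference(deleted_in_a).difference(deleted_in_b)


# The order sections are supposed to appear in; None is a placeholder for
# bullets seen before any section heading.
canonical_section_order = [
    None, 'Compatibility Breaks', 'New Features', 'Bug Fixes', 'Improvements',
    'Documentation', 'API Changes', 'Internals', 'Testing']


def section_sort_key(section):
    # Unknown headings sort after all the canonical ones.
    try:
        return canonical_section_order.index(section)
    except ValueError:
        return 99
```

For example, a bullet added on each side survives, while a bullet deleted on one side is dropped even if the other side left it alone.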
513 | === modified file 'bzrlib/plugins/news_merge/parser.py' |
514 | --- bzrlib/plugins/news_merge/parser.py 2010-01-18 07:00:11 +0000 |
515 | +++ bzrlib/plugins/news_merge/parser.py 2010-02-12 12:16:23 +0000 |
516 | @@ -24,6 +24,7 @@ |
517 | simple_parse's docstring). |
518 | """ |
519 | |
520 | +import re |
521 | |
522 | def simple_parse(content): |
523 | """Returns blocks, where each block is a 2-tuple (kind, text). |
524 | @@ -31,7 +32,10 @@ |
525 | :kind: one of 'heading', 'release', 'section', 'empty' or 'text'. |
526 | :text: a str, including newlines. |
527 | """ |
528 | - blocks = content.split('\n\n') |
529 | + # Split on blank lines. |
530 | + # XXX: this loses any invisible whitespace, which can cause unexpected |
531 | + # changes to strip the whitespace out of those blank lines. |
532 | + blocks = re.split('\n *\n', content) |
533 | for block in blocks: |
534 | if block.startswith('###'): |
535 | # First line is ###...: Top heading |
536 | |
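The parser change replaces a plain ``'\n\n'`` split with a regex so that blank lines containing invisible whitespace still separate blocks. A quick illustration (the sample text here is made up):

```python
import re

content = "Bug Fixes\n*********\n \n* a bullet\n"

# A "blank" line containing a stray space defeats a plain split on '\n\n'
# (there is no literal '\n\n' in the text) ...
assert content.split('\n\n') == [content]

# ... but the regex from the patch treats space-only lines as blank too,
# so the heading and the bullet become separate blocks.
blocks = re.split('\n *\n', content)
```

As the XXX comment in the patch notes, the trade-off is that the whitespace inside those blank lines is discarded rather than preserved.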
537 | === modified file 'bzrlib/push.py' |
538 | --- bzrlib/push.py 2009-12-25 13:47:23 +0000 |
539 | +++ bzrlib/push.py 2010-02-12 12:16:23 +0000 |
540 | @@ -110,6 +110,12 @@ |
541 | "\nYou may supply --create-prefix to create all" |
542 | " leading parent directories." |
543 | % location) |
544 | + # This shouldn't occur (because create_prefix is true, so |
545 | + # create_clone_on_transport should be catching NoSuchFile and |
546 | + # creating the missing directories) but if it does the original |
547 | + # NoSuchFile error will be more informative than an |
548 | + # UnboundLocalError for br_to. |
549 | + raise |
550 | except errors.TooManyRedirections: |
551 | raise errors.BzrCommandError("Too many redirections trying " |
552 | "to make %s." % location) |
553 | |
554 | === modified file 'bzrlib/repofmt/pack_repo.py' |
555 | --- bzrlib/repofmt/pack_repo.py 2010-01-29 10:59:12 +0000 |
556 | +++ bzrlib/repofmt/pack_repo.py 2010-02-12 12:16:23 +0000 |
557 | @@ -24,6 +24,7 @@ |
558 | |
559 | from bzrlib import ( |
560 | chk_map, |
561 | + cleanup, |
562 | debug, |
563 | graph, |
564 | osutils, |
565 | @@ -646,11 +647,10 @@ |
566 | del self.combined_index._indices[:] |
567 | self.add_callback = None |
568 | |
569 | - def remove_index(self, index, pack): |
570 | + def remove_index(self, index): |
571 | """Remove index from the indices used to answer queries. |
572 | |
573 | :param index: An index from the pack parameter. |
574 | - :param pack: A Pack instance. |
575 | """ |
576 | del self.index_to_pack[index] |
577 | self.combined_index._indices.remove(index) |
578 | @@ -1840,14 +1840,22 @@ |
579 | self._remove_pack_indices(pack) |
580 | self.packs.remove(pack) |
581 | |
582 | - def _remove_pack_indices(self, pack): |
583 | - """Remove the indices for pack from the aggregated indices.""" |
584 | - self.revision_index.remove_index(pack.revision_index, pack) |
585 | - self.inventory_index.remove_index(pack.inventory_index, pack) |
586 | - self.text_index.remove_index(pack.text_index, pack) |
587 | - self.signature_index.remove_index(pack.signature_index, pack) |
588 | - if self.chk_index is not None: |
589 | - self.chk_index.remove_index(pack.chk_index, pack) |
590 | + def _remove_pack_indices(self, pack, ignore_missing=False): |
591 | + """Remove the indices for pack from the aggregated indices. |
592 | + |
593 | + :param ignore_missing: Suppress KeyErrors from calling remove_index. |
594 | + """ |
595 | + for index_type in Pack.index_definitions.keys(): |
596 | + attr_name = index_type + '_index' |
597 | + aggregate_index = getattr(self, attr_name) |
598 | + if aggregate_index is not None: |
599 | + pack_index = getattr(pack, attr_name) |
600 | + try: |
601 | + aggregate_index.remove_index(pack_index) |
602 | + except KeyError: |
603 | + if ignore_missing: |
604 | + continue |
605 | + raise |
606 | |
607 | def reset(self): |
608 | """Clear all cached data.""" |
609 | @@ -2091,24 +2099,21 @@ |
610 | # FIXME: just drop the transient index. |
611 | # forget what names there are |
612 | if self._new_pack is not None: |
613 | - try: |
614 | - self._new_pack.abort() |
615 | - finally: |
616 | - # XXX: If we aborted while in the middle of finishing the write |
617 | - # group, _remove_pack_indices can fail because the indexes are |
618 | - # already gone. If they're not there we shouldn't fail in this |
619 | - # case. -- mbp 20081113 |
620 | - self._remove_pack_indices(self._new_pack) |
621 | - self._new_pack = None |
622 | + operation = cleanup.OperationWithCleanups(self._new_pack.abort) |
623 | + operation.add_cleanup(setattr, self, '_new_pack', None) |
624 | + # If we aborted while in the middle of finishing the write |
625 | + # group, _remove_pack_indices could fail because the indexes are |
626 | + # already gone. But they're not there we shouldn't fail in this |
627 | + # case, so we pass ignore_missing=True. |
628 | + operation.add_cleanup(self._remove_pack_indices, self._new_pack, |
629 | + ignore_missing=True) |
630 | + operation.run_simple() |
631 | for resumed_pack in self._resumed_packs: |
632 | - try: |
633 | - resumed_pack.abort() |
634 | - finally: |
635 | - # See comment in previous finally block. |
636 | - try: |
637 | - self._remove_pack_indices(resumed_pack) |
638 | - except KeyError: |
639 | - pass |
640 | + operation = cleanup.OperationWithCleanups(resumed_pack.abort) |
641 | + # See comment in previous finally block. |
642 | + operation.add_cleanup(self._remove_pack_indices, resumed_pack, |
643 | + ignore_missing=True) |
644 | + operation.run_simple() |
645 | del self._resumed_packs[:] |
646 | |
647 | def _remove_resumed_pack_indices(self): |
648 | |
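The ``ignore_missing`` flag added to ``_remove_pack_indices`` follows a small generic pattern: during normal removal a missing index is a real bug and the ``KeyError`` should propagate, but while aborting a half-finished write group the indices may legitimately be gone already. A reduced sketch of just that pattern (names here are illustrative, not bzrlib API):

```python
def remove_entries(table, keys, ignore_missing=False):
    """Delete keys from table; optionally tolerate already-missing keys."""
    for key in keys:
        try:
            del table[key]
        except KeyError:
            if ignore_missing:
                # Harmless during an abort: the entry was already removed.
                continue
            raise


# Abort-style call: 'chk' is already gone, and that's fine.
table = {'revision': 1, 'inventory': 2}
remove_entries(table, ['revision', 'chk'], ignore_missing=True)

# Strict call: a missing key is still an error.
try:
    remove_entries({'a': 1}, ['missing'])
    strict_raised = False
except KeyError:
    strict_raised = True
```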
649 | === modified file 'bzrlib/tests/__init__.py' |
650 | --- bzrlib/tests/__init__.py 2010-02-11 09:27:55 +0000 |
651 | +++ bzrlib/tests/__init__.py 2010-02-12 12:16:24 +0000 |
652 | @@ -4372,6 +4372,23 @@ |
653 | CaseInsensitiveFilesystemFeature = _CaseInsensitiveFilesystemFeature() |
654 | |
655 | |
656 | +class _CaseSensitiveFilesystemFeature(Feature): |
657 | + |
658 | + def _probe(self): |
659 | + if CaseInsCasePresFilenameFeature.available(): |
660 | + return False |
661 | + elif CaseInsensitiveFilesystemFeature.available(): |
662 | + return False |
663 | + else: |
664 | + return True |
665 | + |
666 | + def feature_name(self): |
667 | + return 'case-sensitive filesystem' |
668 | + |
669 | +# new coding style is for feature instances to be lowercase |
670 | +case_sensitive_filesystem_feature = _CaseSensitiveFilesystemFeature() |
671 | + |
672 | + |
673 | # Kept for compatibility, use bzrlib.tests.features.subunit instead |
674 | SubUnitFeature = _CompatabilityThunkFeature( |
675 | deprecated_in((2,1,0)), |
676 | |
677 | === modified file 'bzrlib/tests/per_tree/test_inv.py' |
678 | --- bzrlib/tests/per_tree/test_inv.py 2009-07-10 07:14:02 +0000 |
679 | +++ bzrlib/tests/per_tree/test_inv.py 2010-02-12 12:16:23 +0000 |
680 | @@ -1,4 +1,4 @@ |
681 | -# Copyright (C) 2007, 2008 Canonical Ltd |
682 | +# Copyright (C) 2007, 2008, 2010 Canonical Ltd |
683 | # |
684 | # This program is free software; you can redistribute it and/or modify |
685 | # it under the terms of the GNU General Public License as published by |
686 | @@ -23,7 +23,10 @@ |
687 | from bzrlib import ( |
688 | tests, |
689 | ) |
690 | -from bzrlib.tests import per_tree |
691 | +from bzrlib.tests import ( |
692 | + features, |
693 | + per_tree, |
694 | + ) |
695 | from bzrlib.mutabletree import MutableTree |
696 | from bzrlib.tests import SymlinkFeature, TestSkipped |
697 | from bzrlib.transform import _PreviewTree |
698 | @@ -131,6 +134,8 @@ |
699 | work_tree.add(['dir', 'dir/file']) |
700 | if commit: |
701 | work_tree.commit('commit 1') |
702 | + # XXX: this isn't actually guaranteed to return the class we want to |
703 | + # test -- mbp 2010-02-12 |
704 | return work_tree |
705 | |
706 | def test_canonical_path(self): |
707 | @@ -163,3 +168,21 @@ |
708 | work_tree = self._make_canonical_test_tree() |
709 | self.assertEqual('dir/None', |
710 | work_tree.get_canonical_inventory_path('Dir/None')) |
711 | + |
712 | + def test_canonical_tree_name_mismatch(self): |
713 | + # see <https://bugs.edge.launchpad.net/bzr/+bug/368931> |
714 | + # some of the trees we want to use can only exist on a disk, not in |
715 | + # memory - therefore we can only test this if the filesystem is |
716 | + # case-sensitive. |
717 | + self.requireFeature(tests.case_sensitive_filesystem_feature) |
718 | + work_tree = self.make_branch_and_tree('.') |
719 | + self.build_tree(['test/', 'test/file', 'Test']) |
720 | + work_tree.add(['test/', 'test/file', 'Test']) |
721 | + |
722 | + test_tree = self._convert_tree(work_tree) |
723 | + test_tree.lock_read() |
724 | + self.addCleanup(test_tree.unlock) |
725 | + |
726 | + self.assertEqual(['test', 'test/file', 'Test', 'test/foo', 'Test/foo'], |
727 | + test_tree.get_canonical_inventory_paths( |
728 | + ['test', 'test/file', 'Test', 'test/foo', 'Test/foo'])) |
729 | |
730 | === modified file 'bzrlib/tree.py' |
731 | --- bzrlib/tree.py 2010-02-10 21:36:32 +0000 |
732 | +++ bzrlib/tree.py 2010-02-12 12:16:24 +0000 |
733 | @@ -1,4 +1,8 @@ |
734 | +<<<<<<< TREE |
735 | # Copyright (C) 2005-2010 Canonical Ltd |
736 | +======= |
737 | +# Copyright (C) 2005, 2009, 2010 Canonical Ltd |
738 | +>>>>>>> MERGE-SOURCE |
739 | # |
740 | # This program is free software; you can redistribute it and/or modify |
741 | # it under the terms of the GNU General Public License as published by |
742 | @@ -405,16 +409,34 @@ |
743 | bit_iter = iter(path.split("/")) |
744 | for elt in bit_iter: |
745 | lelt = elt.lower() |
746 | + new_path = None |
747 | for child in self.iter_children(cur_id): |
748 | try: |
749 | + # XXX: it seem like if the child is known to be in the |
750 | + # tree, we shouldn't need to go from its id back to |
751 | + # its path -- mbp 2010-02-11 |
752 | + # |
753 | + # XXX: it seems like we could be more efficient |
754 | + # by just directly looking up the original name and |
755 | + # only then searching all children; also by not |
756 | + # chopping paths so much. -- mbp 2010-02-11 |
757 | child_base = os.path.basename(self.id2path(child)) |
758 | - if child_base.lower() == lelt: |
759 | + if (child_base == elt): |
760 | + # if we found an exact match, we can stop now; if |
761 | + # we found an approximate match we need to keep |
762 | + # searching because there might be an exact match |
763 | + # later. |
764 | cur_id = child |
765 | - cur_path = osutils.pathjoin(cur_path, child_base) |
766 | + new_path = osutils.pathjoin(cur_path, child_base) |
767 | break |
768 | + elif child_base.lower() == lelt: |
769 | + cur_id = child |
770 | + new_path = osutils.pathjoin(cur_path, child_base) |
771 | except NoSuchId: |
772 | # before a change is committed we can see this error... |
773 | continue |
774 | + if new_path: |
775 | + cur_path = new_path |
776 | else: |
777 | # got to the end of this directory and no entries matched. |
778 | # Return what matched so far, plus the rest as specified. |
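The ``tree.py`` hunk above changes the child search to prefer an exact-case match, while still remembering a case-insensitive match in case no exact one turns up. The selection rule in isolation (a simplified sketch over plain name strings, ignoring the file-id/path bookkeeping; like the patch, a later case-insensitive match overwrites an earlier one):

```python
def pick_child(children, elt):
    """Return the child name matching elt, preferring exact case;
    fall back to a case-insensitive match; None if nothing matches."""
    lelt = elt.lower()
    match = None
    for child in children:
        if child == elt:
            # An exact match wins immediately.
            return child
        elif child.lower() == lelt:
            # Remember the approximate match, but keep scanning because an
            # exact match might still appear later.
            match = child
    return match
```

This is why, in the new ``test_canonical_tree_name_mismatch`` test, looking up ``'Test'`` in a directory containing both ``test`` and ``Test`` resolves to ``Test`` rather than whichever entry was iterated first.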
This patch makes the news_merge plugin more capable. Hopefully the changes to comments and docstrings in the patch explain it well enough, but in short it makes the plugin able to cope with conflicts that span section headings (like 'Bug Fixes'), whereas before it only dealt with conflicts between bullet points within a section. See the comments and code for the details (and limitations and tradeoffs).
With this change I expect news_merge will handle the vast majority of NEWS merges.
A minor orthogonal change adds a couple of trivial mutters to let a reader of ~/.bzr.log know when news_merge has been used.
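The grouping step that makes section-spanning conflicts tractable can also be exercised standalone. This copies ``munged_lines_to_section_dict`` from the patch, lightly adapted (``setdefault`` instead of the try/except over the section dict):

```python
magic_marker = '|NEWS-MERGE-MAGIC-MARKER|'


class MergeTooHard(Exception):
    """Raised when the conflict contains something other than sections
    or bullets, so the default merge should be tried instead."""


def munged_lines_to_section_dict(lines):
    """Map section name -> set of munged bullet lines.

    Bullets seen before any section heading go under the key None; any
    other kind of line means the conflict is too hard for this plugin.
    """
    section = None
    section_dict = {}
    for line in lines:
        if line.startswith('section'):
            # The heading text sits between the marker and the underline.
            section = line[len('section') + len(magic_marker):]
            section = section.split('\n', 1)[0]
        elif line.startswith('bullet'):
            section_dict.setdefault(section, set()).add(line)
        else:
            raise MergeTooHard()
    return section_dict


sections = munged_lines_to_section_dict([
    'section' + magic_marker + 'Bug Fixes\n*********',
    'bullet' + magic_marker + '* a fix\n',
])
```

Raising ``MergeTooHard`` on anything unexpected is what produces the 'not_applicable' fallback described in the limitations list.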