Merge lp:~jameinel/bzr/1.19-bug-402778 into lp:~bzr/bzr/trunk-old

Proposed by John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/1.19-bug-402778
Merge into: lp:~bzr/bzr/trunk-old
Diff against target: 351 lines
To merge this branch: bzr merge lp:~jameinel/bzr/1.19-bug-402778
Reviewer Review Type Date Requested Status
Robert Collins (community) Approve
Review via email: mp+10053@code.launchpad.net

This proposal supersedes a proposal from 2009-08-11.

Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

This fixes two bugs found as part of bug #402778.

InterDifferingSerializer was failing when the target was a stacked branch. It had two primary bugs:

1) The inner loop was using the wrong variable, which meant that it was generating a delta for one revision, and then inserting it as the delta for a different revision.
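
The loop-variable bug can be sketched in isolation. This is a minimal illustration of the shape of the bug, not the actual bzrlib code; all names here are hypothetical:

```python
trees = {'rev-a': 'tree-a', 'rev-b': 'tree-b'}

def deltas_buggy(trees, tree='tree-a'):
    # Buggy shape: the body uses a stale variable ('tree') left over from
    # earlier code instead of the tree the loop is actually visiting, so
    # every revision gets a delta computed from the wrong tree.
    result = {}
    for rev_id, parent_tree in trees.items():
        result[rev_id] = 'delta-for-' + tree  # wrong: ignores parent_tree
    return result

def deltas_fixed(trees):
    # Fixed shape: derive each delta from the loop variable itself.
    return {rev_id: 'delta-for-' + t for rev_id, t in trees.items()}

assert deltas_buggy(trees)['rev-b'] == 'delta-for-tree-a'  # mismatched
assert deltas_fixed(trees)['rev-b'] == 'delta-for-tree-b'
```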

2) It was trying to fill in parent inventories for all parents, even if some parents were ghosts in the source.
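
The fix leans on the fact that `get_parent_map` only returns entries for revisions the source actually has, so iterating its result naturally skips ghosts. A hedged sketch of that filtering pattern (the helper and the fake map are hypothetical, not bzrlib API):

```python
def present_parents(parent_ids, get_parent_map):
    """Return only the parent ids the source repository really has.

    Assumes get_parent_map omits ghosts from its result, which is how
    bzrlib's Repository.get_parent_map behaves.
    """
    return set(get_parent_map(parent_ids))

# A fake source that knows 'base' but not the ghost parent:
fake_map = lambda ids: {i: ('null:',) for i in ids if i != 'ghost'}

assert present_parents({'base', 'ghost'}, fake_map) == {'base'}
```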

I updated the test that Andrew added. I had to add content to the revisions, because otherwise you can't tell whether the added inventories are valid. And I added a ghost parent, to make sure that we don't try to fill in ghosts that we don't have.

Revision history for this message
Robert Collins (lifeless) wrote : Posted in a previous version of this proposal

On Tue, 2009-08-11 at 17:35 +0000, John A Meinel wrote:
> @@ -3816,10 +3818,13 @@
> parent_ids.difference_update(revision_ids)
> parent_ids.discard(_mod_revision.NULL_REVISION)
> parent_map = self.source.get_parent_map(parent_ids)
> - for parent_tree in self.source.revision_trees(parent_ids):
> - basis_id, delta = self._get_delta_for_revision(tree, parent_ids, basis_id, cache)
> + # we iterate over parent_map and not parent_ids because we don't
> + # want to try copying any revision which is a ghost
> + for parent_tree in self.source.revision_trees(parent_map.keys()):

This might be slightly cheaper as
+ for parent_tree in self.source.revision_trees(parent_map):
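
The suggestion works because iterating a dict yields its keys directly; in Python 2, `parent_map.keys()` first materializes a separate list of keys, which the direct iteration avoids. A tiny sketch with made-up keys:

```python
parent_map = {'left': ('base',), 'right': ('base',)}

# Iterating the mapping itself visits the same keys as .keys(),
# without building the intermediate list Python 2's keys() creates.
assert set(parent_map) == set(parent_map.keys())
assert sorted(parent_map) == ['left', 'right']
```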

I'd prefer to see separate tests for ghosts and inventory validity.

So: tweak, however that is expressed...
 review +1

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

This is a follow up from Robert's review.

(original)
1) This fixes InterDifferingSerializer to properly transmit parent inventories for stacked branches. (Rather than passing the last-tree inventory and claiming it was the parent inventory.)
2) Also fixes IDS to ignore ghost parents when filling in parent inventories.

(updated)
1) Changes test_pack_repository => per_pack_repository. Andrew mentioned this, I liked the change.

2) Adds a conversion to 2a format to the inter-repository tests, which shows that even with the original fix, things were still broken for 2a formats.

3) Adds a direct test that "Repository.pack()" preserves all inventory texts, even when we don't have a revision associated with them.

I'm pretty sure this closes 2+ bugs marked as critical for 2.0.

Revision history for this message
Robert Collins (lifeless) wrote :

 review: +1

The use of vf.keys() in the gc pack code - might want to check that
reconcile() doesn't use revision selectors - as in that case you will
copy too many inventories [if all tests are passing this probably isn't
the case]

-Rob

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

Robert Collins wrote:
> Review: Approve
> review: +1
>
> The use of vf.keys() in the gc pack code - might want to check that
> reconcile() doesn't use revision selectors - as in that case you will
> copy too many inventories [if all tests are passing this probably isn't
> the case]
>
> -Rob
>

Reconcile has its own implementation of _copy_inventory_texts:

    def _copy_inventory_texts(self):
        source_vf, target_vf = self._build_vfs('inventory', True, True)
        self._copy_stream(source_vf, target_vf, self.revision_keys,
                          'inventories', self._get_filtered_inv_stream, 2)
        if source_vf.keys() != self.revision_keys:
            self._data_changed = True

Which is saying "I'm going to prune down to the minimum, and if the
source's number of keys is different than the minimum, then things have
changed."
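
The invariant John describes can be stated with hypothetical keys: a stacked repository keeps the inventories of its revisions' parents even though it does not store those parents' revisions, so the two key sets legitimately differ:

```python
# Hypothetical keys, illustrating the stacked-repository invariant.
revision_keys = {('third',)}
inventory_keys = {('second',), ('third',)}  # parent inventory kept for stacking

# reconcile's check treats any difference as "data changed", which fires
# for a stacked repo even when nothing is actually wrong:
data_changed = inventory_keys != revision_keys
assert data_changed
```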

This means that reconcile on stacked branches is probably slightly wrong.

But anyway, this change obviously doesn't affect reconcile :).

John
=:->

Revision history for this message
Robert Collins (lifeless) wrote :

On Wed, 2009-08-12 at 20:48 +0000, John A Meinel wrote:

> Which is saying "I'm going to prune down to the minimum, and if the
> source's number of keys is different than the minimum, then things have
> changed."
>
> This means that reconcile on stacked branches is probably slightly wrong.
>
> But anyway, this change obviously doesn't affect reconcile :).

Ok. So reconcile needs to keep the parent inventories too. I smell
another branch coming up :).

-Rob

Revision history for this message
John A Meinel (jameinel) wrote :

Robert Collins wrote:
> On Wed, 2009-08-12 at 20:48 +0000, John A Meinel wrote:
>
>> Which is saying "I'm going to prune down to the minimum, and if the
>> source's number of keys is different than the minimum, then things have
>> changed."
>>
>> This means that reconcile on stacked branches is probably slightly wrong.
>>
>> But anyway, this change obviously doesn't affect reconcile :).
>
> Ok. So reconcile needs to keep the parent inventories too. I smell
> another branch coming up :).
>
> -Rob
>

Note that in that case 1.9 format repos are probably also vulnerable,
as I'm pretty sure Packer uses "self._revision_keys" directly, and
doesn't try to also include parent inventories.
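
The failure mode being predicted here can be sketched with set arithmetic on hypothetical keys: a packer that copies only `revision_keys` silently drops the parent inventory a stacked branch depends on:

```python
# Hypothetical keys: a stacked repo holding revision 'tip' plus the
# inventory of its parent (for which it has no revision).
revision_keys = {('tip',)}
all_inventory_keys = {('tip',), ('parent',)}

# Copying only the inventories matching revision_keys loses ('parent',):
kept = all_inventory_keys & revision_keys
assert kept == {('tip',)}
assert ('parent',) not in kept  # the stacking basis inventory is gone
```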

John
=:->


Revision history for this message
Robert Collins (lifeless) wrote :

On Wed, 2009-08-12 at 21:15 +0000, John A Meinel wrote:
>
> > Ok. So reconcile needs to keep the parent inventories too. I smell
> > another branch coming up :).
> >
> > -Rob
> >
>
> Note that in that case 1.9 format repos are probably also
> vulnerable,
> as I'm pretty sure Packer uses "self._revision_keys" directly, and
> doesn't try to also include parent inventories.

exactly :(. Want me to file the bug?

Preview Diff

=== modified file 'NEWS'
--- NEWS 2009-08-12 02:59:14 +0000
+++ NEWS 2009-08-12 21:35:09 +0000
@@ -27,6 +27,11 @@
 * Further tweaks to handling of ``bzr add`` messages about ignored files.
   (Jason Spashett, #76616)
 
+* Properly handle fetching into a stacked branch while converting the
+  data, especially when there are also ghosts. The code was filling in
+  parent inventories incorrectly, and also not handling when one of the
+  parents was a ghost. (John Arbash Meinel, #402778, #412198)
+
 Improvements
 ************
 
 
=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py 2009-08-04 04:36:34 +0000
+++ bzrlib/repofmt/groupcompress_repo.py 2009-08-12 21:35:09 +0000
@@ -410,7 +410,18 @@
 
     def _copy_inventory_texts(self):
         source_vf, target_vf = self._build_vfs('inventory', True, True)
-        self._copy_stream(source_vf, target_vf, self.revision_keys,
+        # It is not sufficient to just use self.revision_keys, as stacked
+        # repositories can have more inventories than they have revisions.
+        # One alternative would be to do something with
+        # get_parent_map(self.revision_keys), but that shouldn't be any faster
+        # than this.
+        inventory_keys = source_vf.keys()
+        missing_inventories = set(self.revision_keys).difference(inventory_keys)
+        if missing_inventories:
+            missing_inventories = sorted(missing_inventories)
+            raise ValueError('We are missing inventories for revisions: %s'
+                % (missing_inventories,))
+        self._copy_stream(source_vf, target_vf, inventory_keys,
                           'inventories', self._get_filtered_inv_stream, 2)
 
     def _copy_chk_texts(self):
@@ -1110,7 +1121,7 @@
 
 class RepositoryFormat2a(RepositoryFormatCHK2):
     """A CHK repository that uses the bencode revision serializer.
-    
+
     This is the same as RepositoryFormatCHK2 but with a public name.
     """
 
 
=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py 2009-08-11 02:45:36 +0000
+++ bzrlib/repository.py 2009-08-12 21:35:09 +0000
@@ -3812,6 +3812,8 @@
         # for the new revisions that we are about to insert. We do this
         # before adding the revisions so that no revision is added until
         # all the inventories it may depend on are added.
+        # Note that this is overzealous, as we may have fetched these in an
+        # earlier batch.
         parent_ids = set()
         revision_ids = set()
         for revision in pending_revisions:
@@ -3820,10 +3822,13 @@
         parent_ids.difference_update(revision_ids)
         parent_ids.discard(_mod_revision.NULL_REVISION)
         parent_map = self.source.get_parent_map(parent_ids)
-        for parent_tree in self.source.revision_trees(parent_ids):
-            basis_id, delta = self._get_delta_for_revision(tree, parent_ids, basis_id, cache)
+        # we iterate over parent_map and not parent_ids because we don't
+        # want to try copying any revision which is a ghost
+        for parent_tree in self.source.revision_trees(parent_map):
             current_revision_id = parent_tree.get_revision_id()
             parents_parents = parent_map[current_revision_id]
+            basis_id, delta = self._get_delta_for_revision(parent_tree,
+                parents_parents, basis_id, cache)
             self.target.add_inventory_by_delta(
                 basis_id, delta, current_revision_id, parents_parents)
         # insert signatures and revisions
 
=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2009-08-04 11:40:59 +0000
+++ bzrlib/tests/__init__.py 2009-08-12 21:35:09 +0000
@@ -3386,6 +3386,7 @@
         'bzrlib.tests.per_lock',
         'bzrlib.tests.per_transport',
         'bzrlib.tests.per_tree',
+        'bzrlib.tests.per_pack_repository',
         'bzrlib.tests.per_repository',
         'bzrlib.tests.per_repository_chk',
         'bzrlib.tests.per_repository_reference',
@@ -3480,7 +3481,6 @@
         'bzrlib.tests.test_osutils',
         'bzrlib.tests.test_osutils_encodings',
         'bzrlib.tests.test_pack',
-        'bzrlib.tests.test_pack_repository',
         'bzrlib.tests.test_patch',
         'bzrlib.tests.test_patches',
         'bzrlib.tests.test_permissions',
 
=== modified file 'bzrlib/tests/per_interrepository/__init__.py'
--- bzrlib/tests/per_interrepository/__init__.py 2009-07-10 06:46:10 +0000
+++ bzrlib/tests/per_interrepository/__init__.py 2009-08-12 21:35:09 +0000
@@ -68,7 +68,12 @@
 
 def default_test_list():
     """Generate the default list of interrepo permutations to test."""
-    from bzrlib.repofmt import knitrepo, pack_repo, weaverepo
+    from bzrlib.repofmt import (
+        groupcompress_repo,
+        knitrepo,
+        pack_repo,
+        weaverepo,
+        )
     result = []
     # test the default InterRepository between format 6 and the current
     # default format.
@@ -111,6 +116,9 @@
     result.append((InterDifferingSerializer,
                    pack_repo.RepositoryFormatKnitPack1(),
                    pack_repo.RepositoryFormatKnitPack6RichRoot()))
+    result.append((InterDifferingSerializer,
+                   pack_repo.RepositoryFormatKnitPack6RichRoot(),
+                   groupcompress_repo.RepositoryFormat2a()))
     return result
 
 
=== modified file 'bzrlib/tests/per_interrepository/test_fetch.py'
--- bzrlib/tests/per_interrepository/test_fetch.py 2009-07-10 06:46:10 +0000
+++ bzrlib/tests/per_interrepository/test_fetch.py 2009-08-12 21:35:09 +0000
@@ -132,17 +132,23 @@
         altered by all revisions it contains, which means that it needs both
         the inventory for any revision it has, and the inventories of all that
         revision's parents.
+
+        However, we should also skip any revisions which are ghosts in the
+        parents.
         """
-        to_repo = self.make_to_repository('to')
-        if not to_repo._format.supports_external_lookups:
+        if not self.repository_format_to.supports_external_lookups:
             raise TestNotApplicable("Need stacking support in the target.")
         builder = self.make_branch_builder('branch')
         builder.start_series()
         builder.build_snapshot('base', None, [
-            ('add', ('', 'root-id', 'directory', ''))])
-        builder.build_snapshot('left', ['base'], [])
-        builder.build_snapshot('right', ['base'], [])
-        builder.build_snapshot('merge', ['left', 'right'], [])
+            ('add', ('', 'root-id', 'directory', '')),
+            ('add', ('file', 'file-id', 'file', 'content\n'))])
+        builder.build_snapshot('left', ['base'], [
+            ('modify', ('file-id', 'left content\n'))])
+        builder.build_snapshot('right', ['base'], [
+            ('modify', ('file-id', 'right content\n'))])
+        builder.build_snapshot('merge', ['left', 'right'], [
+            ('modify', ('file-id', 'left and right content\n'))])
         builder.finish_series()
         branch = builder.get_branch()
         repo = self.make_to_repository('trunk')
@@ -161,6 +167,57 @@
         self.assertEqual(
             set([('left',), ('right',), ('merge',)]),
             unstacked_repo.inventories.keys())
+        # And the basis inventories have been copied correctly
+        trunk.lock_read()
+        self.addCleanup(trunk.unlock)
+        left_tree, right_tree = trunk.repository.revision_trees(
+            ['left', 'right'])
+        stacked_branch.lock_read()
+        self.addCleanup(stacked_branch.unlock)
+        (stacked_left_tree,
+         stacked_right_tree) = stacked_branch.repository.revision_trees(
+            ['left', 'right'])
+        self.assertEqual(left_tree.inventory, stacked_left_tree.inventory)
+        self.assertEqual(right_tree.inventory, stacked_right_tree.inventory)
+
+    def test_fetch_across_stacking_boundary_ignores_ghost(self):
+        if not self.repository_format_to.supports_external_lookups:
+            raise TestNotApplicable("Need stacking support in the target.")
+        to_repo = self.make_to_repository('to')
+        builder = self.make_branch_builder('branch')
+        builder.start_series()
+        builder.build_snapshot('base', None, [
+            ('add', ('', 'root-id', 'directory', '')),
+            ('add', ('file', 'file-id', 'file', 'content\n'))])
+        builder.build_snapshot('second', ['base'], [
+            ('modify', ('file-id', 'second content\n'))])
+        builder.build_snapshot('third', ['second', 'ghost'], [
+            ('modify', ('file-id', 'third content\n'))])
+        builder.finish_series()
+        branch = builder.get_branch()
+        repo = self.make_to_repository('trunk')
+        trunk = repo.bzrdir.create_branch()
+        trunk.repository.fetch(branch.repository, 'second')
+        repo = self.make_to_repository('stacked')
+        stacked_branch = repo.bzrdir.create_branch()
+        stacked_branch.set_stacked_on_url(trunk.base)
+        stacked_branch.repository.fetch(branch.repository, 'third')
+        unstacked_repo = stacked_branch.bzrdir.open_repository()
+        unstacked_repo.lock_read()
+        self.addCleanup(unstacked_repo.unlock)
+        self.assertFalse(unstacked_repo.has_revision('second'))
+        self.assertFalse(unstacked_repo.has_revision('ghost'))
+        self.assertEqual(
+            set([('second',), ('third',)]),
+            unstacked_repo.inventories.keys())
+        # And the basis inventories have been copied correctly
+        trunk.lock_read()
+        self.addCleanup(trunk.unlock)
+        second_tree = trunk.repository.revision_tree('second')
+        stacked_branch.lock_read()
+        self.addCleanup(stacked_branch.unlock)
+        stacked_second_tree = stacked_branch.repository.revision_tree('second')
+        self.assertEqual(second_tree.inventory, stacked_second_tree.inventory)
 
     def test_fetch_missing_basis_text(self):
         """If fetching a delta, we should die if a basis is not present."""
@@ -276,8 +333,12 @@
         to_repo = self.make_to_repository('to')
         to_repo.fetch(from_tree.branch.repository)
         recorded_inv_sha1 = to_repo.get_inventory_sha1('foo-id')
-        xml = to_repo.get_inventory_xml('foo-id')
-        computed_inv_sha1 = osutils.sha_string(xml)
+        to_repo.lock_read()
+        self.addCleanup(to_repo.unlock)
+        stream = to_repo.inventories.get_record_stream([('foo-id',)],
+            'unordered', True)
+        bytes = stream.next().get_bytes_as('fulltext')
+        computed_inv_sha1 = osutils.sha_string(bytes)
         self.assertEqual(computed_inv_sha1, recorded_inv_sha1)
 
 
 
=== renamed file 'bzrlib/tests/test_pack_repository.py' => 'bzrlib/tests/per_pack_repository.py'
--- bzrlib/tests/test_pack_repository.py 2009-08-11 05:26:57 +0000
+++ bzrlib/tests/per_pack_repository.py 2009-08-12 21:35:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2008 Canonical Ltd
+# Copyright (C) 2008, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -42,7 +42,7 @@
     pack_repo,
     groupcompress_repo,
     )
-from bzrlib.repofmt.groupcompress_repo import RepositoryFormatCHK1
+from bzrlib.repofmt.groupcompress_repo import RepositoryFormat2a
 from bzrlib.smart import (
     client,
     server,
@@ -84,7 +84,7 @@
         """Packs reuse deltas."""
         format = self.get_format()
         repo = self.make_repository('.', format=format)
-        if isinstance(format.repository_format, RepositoryFormatCHK1):
+        if isinstance(format.repository_format, RepositoryFormat2a):
            # TODO: This is currently a workaround. CHK format repositories
            # ignore the 'deltas' flag, but during conversions, we can't
            # do unordered delta fetches. Remove this clause once we
@@ -295,6 +295,41 @@
         self.assertEqual(1, len(list(index.iter_all_entries())))
         self.assertEqual(2, len(tree.branch.repository.all_revision_ids()))
 
+    def test_pack_preserves_all_inventories(self):
+        # This is related to bug:
+        # https://bugs.launchpad.net/bzr/+bug/412198
+        # Stacked repositories need to keep the inventory for parents, even
+        # after a pack operation. However, it is harder to test that, then just
+        # test that all inventory texts are preserved.
+        format = self.get_format()
+        builder = self.make_branch_builder('source', format=format)
+        builder.start_series()
+        builder.build_snapshot('A-id', None, [
+            ('add', ('', 'root-id', 'directory', None))])
+        builder.build_snapshot('B-id', None, [
+            ('add', ('file', 'file-id', 'file', 'B content\n'))])
+        builder.build_snapshot('C-id', None, [
+            ('modify', ('file-id', 'C content\n'))])
+        builder.finish_series()
+        b = builder.get_branch()
+        b.lock_read()
+        self.addCleanup(b.unlock)
+        repo = self.make_repository('repo', shared=True, format=format)
+        repo.lock_write()
+        self.addCleanup(repo.unlock)
+        repo.fetch(b.repository, revision_id='B-id')
+        inv = b.repository.iter_inventories(['C-id']).next()
+        repo.start_write_group()
+        repo.add_inventory('C-id', inv, ['B-id'])
+        repo.commit_write_group()
+        self.assertEqual([('A-id',), ('B-id',), ('C-id',)],
+            sorted(repo.inventories.keys()))
+        repo.pack()
+        self.assertEqual([('A-id',), ('B-id',), ('C-id',)],
+            sorted(repo.inventories.keys()))
+        # Content should be preserved as well
+        self.assertEqual(inv, repo.iter_inventories(['C-id']).next())
+
     def test_pack_layout(self):
         # Test that the ordering of revisions in pack repositories is
         # tip->ancestor
@@ -311,7 +346,7 @@
         # revision access tends to be tip->ancestor, so ordering that way on
         # disk is a good idea.
         for _1, key, val, refs in pack.revision_index.iter_all_entries():
-            if type(format.repository_format) is RepositoryFormatCHK1:
+            if type(format.repository_format) is RepositoryFormat2a:
                 # group_start, group_len, internal_start, internal_len
                 pos = map(int, val.split())
             else:
@@ -589,7 +624,7 @@
 
     def make_write_ready_repo(self):
         format = self.get_format()
-        if isinstance(format.repository_format, RepositoryFormatCHK1):
+        if isinstance(format.repository_format, RepositoryFormat2a):
             raise TestNotApplicable("No missing compression parents")
         repo = self.make_repository('.', format=format)
         repo.lock_write()
@@ -808,7 +843,7 @@
             matching_format_name = 'pack-0.92-subtree'
         else:
             if repo._format.supports_chks:
-                matching_format_name = 'development6-rich-root'
+                matching_format_name = '2a'
             else:
                 matching_format_name = 'rich-root-pack'
             mismatching_format_name = 'pack-0.92'
@@ -841,7 +876,7 @@
         else:
             if repo.supports_rich_root():
                 if repo._format.supports_chks:
-                    matching_format_name = 'development6-rich-root'
+                    matching_format_name = '2a'
                 else:
                     matching_format_name = 'rich-root-pack'
                 mismatching_format_name = 'pack-0.92-subtree'
@@ -1062,9 +1097,9 @@
              "(bzr 1.9)\n",
              format_supports_external_lookups=True,
              index_class=BTreeGraphIndex),
-        dict(format_name='development6-rich-root',
-             format_string='Bazaar development format - group compression '
-                 'and chk inventory (needs bzr.dev from 1.14)\n',
+        dict(format_name='2a',
+             format_string="Bazaar repository format 2a "
+                 "(needs bzr 1.16 or later)\n",
              format_supports_external_lookups=True,
              index_class=BTreeGraphIndex),
         ]