Merge lp:~abentley/bzr/skip-dupes into lp:~bzr/bzr/trunk-old

Proposed by Aaron Bentley
Status: Superseded
Proposed branch: lp:~abentley/bzr/skip-dupes
Merge into: lp:~bzr/bzr/trunk-old
Diff against target: 38 lines
To merge this branch: bzr merge lp:~abentley/bzr/skip-dupes
Reviewer: Robert Collins (community)
Review status: Disapprove
Review via email: mp+7846@code.launchpad.net

This proposal has been superseded by a proposal from 2009-06-25.

Aaron Bentley (abentley) wrote:

Hi all,

There are several issues related to Launchpad's current problems with
pulling data from incompletely-reconciled repositories. For one thing,
we're incorrectly predicting which records need to be inserted. For
another, we're inserting records that we already have, which wastes disk
space, and causes us to raise exceptions if the record metadata differs.

This patch allows bzr to avoid installing any records which are already
present, which seems like the most robust fix. It would also be nice to
investigate why we're sending too many records.

Aaron

Robert Collins (lifeless) wrote:

On Wed, 2009-06-24 at 14:50 +0000, Aaron Bentley wrote:
> Aaron Bentley has proposed merging lp:~abentley/bzr/skip-dupes into lp:bzr.
>
> Requested reviews:
> bzr-core (bzr-core)
>
> Hi all,
>
> There are several issues related to Launchpad's current problems with
> pulling data from incompletely-reconciled repositories. For one thing,
> we're incorrectly predicting which records need to be inserted. For
> another, we're inserting records that we already have, which wastes disk
> space, and causes us to raise exceptions if the record metadata differs.
>
> This patch allows bzr to avoid installing any records which are already
> present, which seems like the most robust fix. It would also be nice to
> investigate why we're sending too many records.

 review -1
There are two consequences of the line you've added:
 - we'll look up everything we are fetching in the target repository
 - incorrect data in one location will be sticky there and no warning
will be given

From a performance point of view, looking up things we don't have is
pretty much worst case: we'll have to hit every index, and read to the
bottom page, most of the time, to show we don't have it. We got
significant performance wins by removing or batching similar calls in
knit.py over the last couple of years. We could address the incorrect
data being sticky with no warning by emitting a warning rather than just
continuing, but that won't fix the performance impact.

Now, add_records in the same file *does* do a lookup for the same data,
but it does it in a batch, which means that we don't run the risk of
cache thrashing on large pulls, *and* it's controllable via the random_id
flag: we'll hit every backing index once and only once, and we don't pay
that overhead at all during commit.
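
As a rough illustration of the batched lookup being described, here is a
minimal sketch (the helper name is invented; it assumes a GraphIndex-style
_get_entries(keys) whose entry tuples carry the key in position 1, as in the
preview diff below):

def filter_absent(index, records):
    """Drop records whose keys the target index already has (sketch only)."""
    records = list(records)  # assumes the stream can be buffered for the demo
    keys = [record.key for record in records]
    # One batched index query instead of one lookup per record.
    present = set(entry[1] for entry in index._get_entries(keys))
    return [record for record in records if record.key not in present]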

I'd also _really_ prefer for us to just fix the data rather than working
around it in push and pull. The more we can work on the basis of the
data being correct, the leaner and faster our code can be.

https://bugs.edge.launchpad.net/bzr/+bug/390563 is dealing with too much
data being sent in push/pull operations and may well have a significant
bearing on what you're observing here.

-Rob

review: Disapprove
Aaron Bentley (abentley) wrote:

Robert Collins wrote:
> review -1
> There are two consequences of the line you've added:
> - we'll lookup everything we are fetching in the target repository

We were already looking up everything we were fetching in the target
repository.

> - incorrect data in one location will be sticky there and no warning
> will be given

We were already ignoring irrelevant incorrect data most of the time.
With this change, we would ignore it all the time.

> From a performance point of view, looking up things we don't have is
> pretty much worst case: we'll have to hit every index, and read to the
> bottom page, most of the time, to show we don't have it. We got
> significant performance wins by removing or batching similar calls in
> knit.py over the last couple of years.

When fetching the last 100 revisions of bzr.dev into a treeless branch,
my patched version was actually faster, in our standard best-of-3 test:

skip-dupes:
real 0m4.431s
user 0m3.808s
sys 0m0.196s

bzr.dev:
real 0m4.439s
user 0m3.788s
sys 0m0.208s

I conclude that pull times are noisy enough that the inefficiency in my
version is not observable. Performance is not a concern.

> Now, add_records in the same file, *does* do a lookup for the same data,
> but it does it in a batch which means that we don't run the risk of
> cache thrashing on large pulls, *and* its controllable via the random_id
> flag: we'll hit every backing index once and only once, and we don't pay
> that overhead at all during commit.

I'd love to do a batched lookup in
GroupCompressVersionedFiles._insert_record_stream, but the API makes
that very difficult. It would be nice if the API told us, possibly in
batches, what records it was about to send.

I'm happy to disable the dupe-skipping code when random_id is True.
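
A sketch of what that gating might look like (illustrative only: it assumes a
random_id flag is threaded through to the insertion loop, and it reuses the
_get_entries() call from the proposed diff; the helper name is hypothetical):

def skip_present_records(vf, stream, random_id=False):
    """Yield only records the target versioned file does not already index."""
    for record in stream:
        # random_id means the caller guarantees new keys, so no lookup needed.
        if not random_id and list(vf._index._get_entries([record.key])):
            continue  # already present in the target; don't re-insert it
        yield record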

> I'd also _really_ prefer for us to just fix the data rather than working
> around it in push and pull.

I'm not really sure what you mean. In my case, my repository is
reconciled, and I'm trying to pull correct data from
lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
random launchpad developer can reasonably be expected to reconcile the
master copy of lp:~launchpad-pqm/launchpad/devel.

Because push and pull are acting as a barrier, preventing pure and
impure repositories from interoperating, we have to delay reconciling
lp:~launchpad-pqm/launchpad/devel. If we were to reconcile it now, that
could break all other launchpad devs until they, too, reconciled.

Because push and pull act as a barrier and reconcile takes 3+ days to
run, we would have to stop all PQM commits for 3+ days in order to
reconcile lp:~launchpad-pqm/launchpad/devel. If we were to take a
mirror of lp:~launchpad-pqm/launchpad/devel and reconcile that, we might
not be able to merge in the commits that had happened while reconcile
was running, even though those commits would be correct data.

> The more we can work on the basis of the
> data being correct, the leaner and faster our code can be.

Yes, but this was not any kind of carefully-designed, systematic test.
The only reason it happens is because we're not correctly determining
which records to send. A...

Robert Collins (lifeless) wrote:

On Thu, 2009-06-25 at 01:03 +0000, Aaron Bentley wrote:
> Robert Collins wrote:
> > review -1
> > There are two consequences of the line you've added:
> > - we'll lookup everything we are fetching in the target repository
>
> We were already looking up everything we were fetching in the target
> repository.

With a different access pattern, yes. However, that's arguably a bug; we
shouldn't need to look up CHK pages at all, as all their ids are
effectively random.

> > - incorrect data in one location will be sticky there and no warning
> > will be given
>
> We were already ignoring irrelevant incorrect data most of the time.
> With this change, we would ignore it all the time.

We were? Could you enlarge on this? We do ignore data that we already
have - is that what you mean?

> > From a performance point of view, looking up things we don't have is
> > pretty much worst case: we'll have to hit every index, and read to the
> > bottom page, most of the time, to show we don't have it. We got
> > significant performance wins by removing or batching similar calls in
> > knit.py over the last couple of years.
>
> When fetching the last 100 revisions of bzr.dev into a treeless branch,
> my patched version was actually faster, in our standard best-of-3 test:
>
> skip-dupes:
> real 0m4.431s
> user 0m3.808s
> sys 0m0.196s
>
> bzr.dev:
> real 0m4.439s
> user 0m3.788s
> sys 0m0.208s
>
> I conclude that pull times are noisy enough that the inefficiency in my
> version is not observable. Performance is not a concern.

It's not at that test's scale. However, larger pulls on larger trees may
well show different results; bzr itself is small on the scale of
projects that are using bzr now.

> > Now, add_records in the same file, *does* do a lookup for the same data,
> > but it does it in a batch which means that we don't run the risk of
> > cache thrashing on large pulls, *and* its controllable via the random_id
> > flag: we'll hit every backing index once and only once, and we don't pay
> > that overhead at all during commit.
>
> I'd love to do a batched lookup in
> GroupCompressVersionedFiles._insert_record_stream, but the API makes
> that very difficult. It would be nice if the API told us, possibly in
> batches, what records it was about to send.

Another way to structure this is not to error when add_records detects a
duplicate: just don't index it. The content will be redundant in the
group it's in, but not looked up from there. (And if the group is empty
we could discard it.) That would mean that only a single lookup happens
and would address the performance concerns I have.
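
A sketch of the shape of that alternative (assumptions: add_records receives
node tuples whose first element is the key, as the test in the preview diff
suggests, and accepts a random_id flag; the wrapper name is invented):

def add_records_ignoring_duplicates(index, nodes, random_id=False):
    """Index only nodes whose keys are not already present (sketch only)."""
    nodes = list(nodes)
    if not random_id:
        # Single batched presence check, so each backing index is hit once.
        keys = [node[0] for node in nodes]
        present = set(entry[1] for entry in index._get_entries(keys))
        # Duplicate content stays in its group but is never indexed.
        nodes = [node for node in nodes if node[0] not in present]
    if nodes:
        index.add_records(nodes, random_id=random_id)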

> I'm happy to disable the dupe-skipping code when random_id is True.
>
> > I'd also _really_ prefer for us to just fix the data rather than working
> > around it in push and pull.
>
> I'm not really sure what you mean. In my case, my repository is
> reconciled, and I'm trying to pull correct data from
> lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
> random launchpad developer can reasonably be expected to reconcile the
> master copy of lp:~launchpad-pqm/launchpad/devel.

The master copy was meant...

Aaron Bentley (abentley) wrote:

Robert Collins wrote:
> On Thu, 2009-06-25 at 01:03 +0000, Aaron Bentley wrote:
>> We were already looking up everything we were fetching in the target
>> repository.
>
> With a different access pattern, yes. However thats arguably a bug, we
> shouldn't need to look up CHK pages at all, as all their ids are
> effectively random.

So you're saying that we should be looking up all records except CHK
records? Do you mean that their ids vary across repositories? I did
not think that was the case.

>>> - incorrect data in one location will be sticky there and no warning
>>> will be given
>> We were already ignoring irrelevant incorrect data most of the time.
>> With this change, we would ignore it all the time.
>
> We were? Could you enlarge on this. We do ignore data that we already
> have - is that what you mean?

That is what I mean.

>> I conclude that pull times are noisy enough that the inefficiency in my
>> version is not observable. Performance is not a concern.
>
> Its not in that test/scale. However, larger pulls on larger trees may
> well show different results. bzr itself is small in the scale of
> projects that are using bzr now.

I've based my statements about performance on benchmarking. Please base
your statements about performance on benchmarking.

>> I'd love to do a batched lookup in
>> GroupCompressVersionedFiles._insert_record_stream, but the API makes
>> that very difficult. It would be nice if the API told us, possibly in
>> batches, what records it was about to send.
>
> Another way to structure this is to not error when add_records detects a
> duplicate: just don't index it. The content will be redundant in the
> group its in, but not looked up from there.

I thought you wanted repository indices to be fully reconstructible from
the data in the repository itself. Having two entries for a node, with
one deliberately omitted from the index, is not a situation we could
reconstruct from the repository alone.

> That would mean that only a single lookup happens
> and would address the performance concerns I have.

I'd be willing to fix it that way.

>> lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
>> random launchpad developer can reasonably be expected to reconcile the
>> master copy of lp:~launchpad-pqm/launchpad/devel.
>
> The master copy was meant to be reconciled *before* migrating;

It was. Then bad commits were pushed into it:
<email address hidden>
<email address hidden>
<email address hidden>

> and all
> user repositories were also meant to be reconciled.

Mine was. That's what made its versions of those revisions incompatible
with the versions in launchpad trunk.

>> Because push and pull act as a barrier and reconcile takes 3+ days to
>> run, we would have to stop all PQM commits for 3+ days in order to
>> reconcile lp:~launchpad-pqm/launchpad/devel. If we were to take a
>> mirror of lp:~launchpad-pqm/launchpad/devel and reconcile that, we might
>> not be able to merge in the commits that had happened while reconcile
>> was ru...

Robert Collins (lifeless) wrote:

One further note that I think is relevant.

John, Vincent and I are closing in on a fix for bug 350563. Once this is
fixed, duplicate texts won't be pulled across and I think your headaches
with fetch() will go away. The current status is that there is a branch
(lp:~lifeless/bzr/bug-350563) which has a partial fix - one that should
cover all the cases you're running into in launchpad, but won't cover
some more esoteric cases to do with the internal structure of CHK page
layout.

I'd appreciate it if you could try pulling with this branch and see if
it stops you having errors.

-Rob

Preview Diff

=== modified file 'bzrlib/groupcompress.py'
--- bzrlib/groupcompress.py 2009-06-23 15:27:50 +0000
+++ bzrlib/groupcompress.py 2009-06-24 23:35:15 +0000
@@ -1468,6 +1468,9 @@
         # XXX: TODO: remove this, it is just for safety checking for now
         inserted_keys = set()
         for record in stream:
+            # Avoid inserting records that are already present.
+            if len(list(self._index._get_entries([record.key]))) > 0:
+                continue
             # Raise an error when a record is missing.
             if record.storage_kind == 'absent':
                 raise errors.RevisionNotPresent(record.key, self)

=== modified file 'bzrlib/tests/test_groupcompress.py'
--- bzrlib/tests/test_groupcompress.py 2009-06-22 18:30:08 +0000
+++ bzrlib/tests/test_groupcompress.py 2009-06-24 23:35:16 +0000
@@ -658,6 +658,21 @@
             frozenset([('parent-1',), ('parent-2',)]),
             index.get_missing_parents())

+    def test_avoid_redundant_inserts(self):
+        """Should not insert a record that is already present."""
+        source = self.make_test_vf(True, dir='source')
+        source.add_lines(('a',), (), ['lines\n'])
+        target = self.make_test_vf(True, dir='target')
+        _add_records = target._index.add_records
+        added_nodes = []
+        def add_records(nodes, *args, **kwargs):
+            added_nodes.append(nodes)
+            return _add_records(nodes, *args, **kwargs)
+        target._index.add_records = add_records
+        for x in range(2):
+            target.insert_record_stream(source.get_record_stream(
+                [('a',)], 'unordered', False))
+        self.assertEqual(1, len(added_nodes))

 class TestLazyGroupCompress(tests.TestCaseWithTransport):
39