Code review comment for lp:~jameinel/bzr/1.16-commit-fulltext

Robert Collins (lifeless) wrote:

On Wed, 2009-06-03 at 03:00 +0000, John A Meinel wrote:
>
> Not trivially.
>
> 1) It is an incompatible api change to insert_record_stream

Yes. Doing this before 2.0 would be better than doing it later.

> 2) It requires setting up a FulltextContentFactory and passing in a
> stream of 1 entry just to add a text, which isn't particularly nice.

record_iter_changes would pass a generator into
texts.insert_record_stream.

e.g.

  text_details = self.repository.texts.insert_record_stream(
      self._ric_texts, ...)
  for details in text_details:
      [...]
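
For concreteness, self._ric_texts there would be a generator along these
lines (just a sketch: _pending_texts is a made-up name, and I'm assuming
FulltextContentFactory keeps its (key, parents, sha1, text) signature
with a None sha1 being acceptable):

  from bzrlib.versionedfile import FulltextContentFactory

  def _ric_texts(self):
      # One record per changed file; sha1 is left as None on the
      # assumption that insert_record_stream computes it.
      # _pending_texts is a made-up name for the (key, parents, text)
      # tuples commit has collected.
      for file_key, parent_keys, text in self._pending_texts:
          yield FulltextContentFactory(file_key, parent_keys, None, text)

The call above would then pass self._ric_texts().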

> 3) It requires adding lots of parameters like 'nostore_sha', and
> 'random_id', etc, onto insert_record_stream

or onto the factory. I'm not sure offhand where is best.

> 4) It requires rewriting the internals of
> KnitVersionedFiles.insert_record_stream to *not* thunk back to
> self.add_lines(chunks_to_lines(record.get_bytes_as('chunked')))

This is fairly straightforward: move add_lines to call
self.insert_record_stream appropriately - I did that for GCVF and it
worked well.
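
Roughly (a sketch only, with the extra add_lines parameters and the
return-value details elided, and assuming insert_record_stream learns to
yield per-text details):

  from bzrlib.versionedfile import FulltextContentFactory

  def add_lines(self, key, parents, lines):
      # Simplified shim: wrap the single text as a one-record stream
      # and let insert_record_stream do the real work, instead of
      # having insert_record_stream thunk back through add_lines.
      record = FulltextContentFactory(key, parents, None, ''.join(lines))
      return list(self.insert_record_stream([record]))[0]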

> 5) nostore_sha especially doesn't fit with the theology of
> insert_record_stream. It is really only applicable to a single text,
> and insert_record_stream is really designed around many texts. Wedging
> new parameters onto a function where they don't really fit doesn't
> seem *better*.

Agreed; so perhaps an attribute on the factory.
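
e.g. (sketch only - nothing reads nostore_sha today; commit would set it
just on the records it cares about):

  from bzrlib.versionedfile import FulltextContentFactory

  def _one_text_record(key, parents, text, expected_sha):
      record = FulltextContentFactory(key, parents, None, text)
      # Hypothetical attribute: tells the inserter not to store the
      # text (and to say so in the yielded details) when its sha1
      # matches this value.
      record.nostore_sha = expected_sha
      return record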

> 6) As for VF surface area, there is at least a default implementation
> that simply thunks over to .add_lines() for those that don't strictly
> care about memory performance. (And thus works fine for Weaves, etc.)

Well, I want to delete add_lines as it is.

> In theory we could try to layer it so that we had an 'ongoing' stream,
> and 'yield' texts to be inserted as we find them. But that really
> doesn't fit 'nostore_sha' since that also needs to be passed in, and
> needs to raise an exception which breaks the stream.

I'd yield data per record.
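
Something like this shape (a sketch of the proposal, not current
behaviour; _add_fulltext is a stand-in for the real storage path):

  from bzrlib import osutils

  def insert_record_stream(self, stream):
      for record in stream:
          text = record.get_bytes_as('fulltext')
          sha1 = osutils.sha_string(text)
          if getattr(record, 'nostore_sha', None) == sha1:
              # Text already present: report it and keep consuming the
              # stream rather than raising and aborting everything.
              yield record.key, sha1, False
              continue
          self._add_fulltext(record.key, record.parents, text)
          yield record.key, sha1, True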

> Also, I thought we *wanted* commit for groupcompress to not have to do
> deltas, and if we stream the texts in, we would spend a modest amount
> of time getting poor compression between text files. (Note that we
> were already spending that time to compute the delta index, but I have
> a patch which fixes that.)

It would be good to actually measure that; the first commit, after all,
suffers hugely because every page in the CHKMap is add_text'd separately.

> I can understand wanting to shrink the api. If you really push on it,
> I'm willing to deprecate .add_lines() and write a .add_chunks() that
> is meant to replace it (since you can .add_chunks(lines) and
> .add_chunks([text])). However, chunks fits slightly worse for knits,
> since the Content code and annotation and deltas need lines anyway,
> and Groupcompress wants fulltexts...
>
> So if you push hard, I'll try to find the time to do it. But this was
> *much* easier.

I think we're at the point of maturity in bzr where it makes sense to
spend a small amount of time asking 'what's the cleanest way to do X',
and then talk about how to get there.

At the moment, expanding VF's API doesn't seem desirable, or the best
way to be tackling the problem. I think there should be precisely one
way to add texts to a VF, and that should be as small and fast as we can
make it.

-Rob
