Merge lp:~jameinel/bzr/1.16-commit-fulltext into lp:~bzr/bzr/trunk-old

Proposed by John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/1.16-commit-fulltext
Merge into: lp:~bzr/bzr/trunk-old
Diff against target: 507 lines
To merge this branch: bzr merge lp:~jameinel/bzr/1.16-commit-fulltext
Reviewer           Review Type    Date Requested    Status
Andrew Bennetts                                      Approve
Review via email: mp+7080@code.launchpad.net

This proposal supersedes a proposal from 2009-06-02.

Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

This branch adds a new API, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed and was expedient.

The main effect is to change 'bzr commit' to use file.read() rather than file.readlines(), and then to pass that on to VF.add_text() rather than VF.add_lines().

It also removes some of the code paths that were causing us to copy the in-memory structures repeatedly. It doesn't completely remove the need for a list of lines during Knit.commit(), but it *does* remove the need during --dev6 commit.
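
The shape of the change on the commit side is roughly this (condensed
from the repository.py hunks in the preview diff below):

    # before: materialize a list of lines, only for the VF to join them again
    lines = file_obj.readlines()
    ie.text_sha1, ie.text_size = self._add_text_to_weave(
        ie.file_id, lines, heads, nostore_sha)

    # after: keep the content as a single string all the way down
    text = file_obj.read()
    ie.text_sha1, ie.text_size = self._add_text_to_weave(
        ie.file_id, text, heads, nostore_sha)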

To test this, I created a 90MB file, consisting mostly of 20-byte strings with no final newline. I then did:
rm -rf .bzr; bzr init --format=X; bzr add; time bzr commit -Dmemory -m "bigfile"
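
The exact generator script isn't part of this proposal; something along
these lines produces a matching file:

    # ~90MB of 20-byte lines (19 chars + '\n'), with no final newline
    with open('bigfile', 'wb') as f:
        f.write('\n'.join(['0123456789abcdefghi'] * (90 * 1024 * 1024 // 20)))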

For --pack-0.92:
pre 469,748 kB, 5.554s
post 360,836 kB, 4.789s

For --development6-rich-root:
pre 589,732 kB, 7.785s
post 348,796 kB, 5.803s

So it is both faster and smaller, though I still need to explore why --dev6 isn't more memory-friendly. It seems to be because of the DeltaIndex structures that are part of Groupcompress blocks. It might be worthwhile to optimize for not creating those on the first insert into a new group. (For the 90MB file, it seems to allocate 97MB for the 'loose' index, and then packs that into a 134MB index that has empty slots.)

Revision history for this message
Robert Collins (lifeless) wrote : Posted in a previous version of this proposal

On Tue, 2009-06-02 at 20:50 +0000, John A Meinel wrote:
> John A Meinel has proposed merging lp:~jameinel/bzr/1.16-commit-fulltext into lp:bzr.
>
> Requested reviews:
> bzr-core (bzr-core)
>
> This branch adds a new api, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed, and was expedient.

Is it at all possible to use insert_record_stream?

I'd really like to shrink the VF surface area, not increase it.

-Rob

Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

Robert Collins wrote:
> On Tue, 2009-06-02 at 20:50 +0000, John A Meinel wrote:
>> John A Meinel has proposed merging lp:~jameinel/bzr/1.16-commit-fulltext into lp:bzr.
>>
>> Requested reviews:
>> bzr-core (bzr-core)
>>
>> This branch adds a new api, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed, and was expedient.
>
> Is it at all possible to use insert_record_stream?
>
> I'd really like to shrink the VF surface area, not increase it.
>
> -Rob
>

Not trivially.

1) It is an incompatible API change to insert_record_stream.

2) It requires setting up a FulltextContentFactory and passing in a
stream of 1 entry just to add a text, which isn't particularly nice (see
the sketch after this list).

3) It requires adding lots of parameters like 'nostore_sha' and
'random_id' onto insert_record_stream.

4) It requires rewriting the internals of
KnitVersionedFiles.insert_record_stream to *not* thunk back to
self.add_lines(chunks_to_lines(record.get_bytes_as('chunked')))

5) nostore_sha especially doesn't fit with the theology of
insert_record_stream. It is really only applicable to a single text, and
insert_record_stream is really designed around many texts. Wedging new
parameters onto a function where it doesn't really fit doesn't seem
*better*.

6) As for VF surface area, there is at least a default implementation
that simply thunks over to .add_lines() for those that don't strictly
care about memory performance. (And thus works fine for Weaves, etc.)
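
For reference on point 2, the single-text path via insert_record_stream
comes out looking roughly like this (it is essentially what the
groupcompress implementation in the preview diff below does internally):

    # wrap one text in a factory, push a 1-entry stream, unwrap the result
    record = FulltextContentFactory(key, parents, None, text)
    sha1 = list(self._insert_record_stream([record], random_id=random_id,
                                           nostore_sha=nostore_sha))[0]
    return sha1, len(text), None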

In theory we could try to layer it so that we had an 'ongoing' stream,
and 'yield' texts to be inserted as we find them. But that really
doesn't fit 'nostore_sha' since that also needs to be passed in, and
needs to raise an exception which breaks the stream.
Also, I thought we *wanted* commit for groupcompress to not have to do
deltas, and if we stream the texts in, we would spend a modest amount of
time getting poor compression between text files. (Note that we were
already spending that time to compute the delta index, but I have a
patch which fixes that.)

I can understand wanting to shrink the API. If you really push on it,
I'm willing to deprecate .add_lines() and write an .add_chunks() that is
meant to replace it (since you can .add_chunks(lines) and
.add_chunks([text])). However, chunks fit slightly worse for knits,
since the Content code and annotation and deltas all need lines anyway,
and Groupcompress wants fulltexts...
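
For concreteness, a hypothetical signature (not something this patch
adds):

    def add_chunks(self, key, parents, chunks, nostore_sha=None,
                   random_id=False):
        """Add a text supplied as an iterable of byte strings of any
        layout; add_chunks(lines) and add_chunks([text]) both work."""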

So if you push hard, I'll try to find the time to do it. But this was
*much* easier.

John
=:->

Revision history for this message
Robert Collins (lifeless) wrote : Posted in a previous version of this proposal

On Wed, 2009-06-03 at 03:00 +0000, John A Meinel wrote:
>
> Not trivially.
>
> 1) It is an incompatible api change to insert_record_stream

Yes. Doing this before 2.0 would be better than doing it later.

> 2) It requires setting up a FulltextContextFactory and passing in a
> stream of 1 entry just to add a text, which isn't particularly nice.

record_iter_changes would pass a generator into
texts.insert_record_stream, e.g.:

    text_details = self.repository.texts.insert_record_stream(
        self._ric_texts, ...)
    for details in text_details:
        [...]
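
where _ric_texts would be a generator yielding one record per changed
file. A hypothetical sketch (the names here, like _pending_texts, are
invented for illustration, not code from this branch):

    def _ric_texts(self):
        # One FulltextContentFactory per text gathered by
        # record_iter_changes.
        for file_id, parent_keys, text in self._pending_texts:
            yield versionedfile.FulltextContentFactory(
                (file_id, self._new_revision_id), parent_keys, None, text)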

> 3) It requires adding lots of parameters like 'nostore_sha', and
> 'random_id', etc, onto insert_record_stream

or onto the factory. I'm not sure offhand where is best.

> 4) It requires rewriting the internals of
> KnitVersionedFiles.insert_record_stream to *not* thunk back to
> self.add_lines(chunks_to_lines(record.get_bytes_as('chunked')))

This is fairly straightforward: move add_lines to call
self.insert_record_stream appropriately - I did that for GCVF and it
worked well.

> 5) nostore_sha especially doesn't fit with the theology of
> insert_record_stream. It is really only applicable to a single text,
> and
> insert_record_stream is really designed around many texts. Wedging new
> parameters onto a function where it doesn't really fit doesn't seem
> *better*.

Agreed; so perhaps an attribute on the factory.

> 6) As for VF surface area, there is at least a default implementation
> that simply thunks over to .add_lines() for those that don't strictly
> care about memory performance. (And thus works fine for Weaves, etc.)

Well, I want to delete add_lines as it is.

> In theory we could try to layer it so that we had an 'ongoing' stream,
> and 'yield' texts to be inserted as we find them. But that really
> doesn't fit 'nostore_sha' since that also needs to be passed in, and
> needs to raise an exception which breaks the stream.

I'd yield data per record.

> Also, I thought we *wanted* commit for groupcompress to not have to do
> deltas, and if we stream the texts in, we would spend a modest amount
> of
> time getting poor compression between text files. (Note that we were
> already spending that time to compute the delta index, but I have a
> patch which fixes that.)

It would be good to measure actually... first commit after all suffers
hugely because every page in the CHKMap is add_text'd separately.

> I can understand wanting to shrink the api. If you really push on it,
> I'm willing to deprecate .add_lines() and write a .add_chunks() that
> is
> meant to replace it. (since you can .add_chunks(lines) and
> .add_chunks([text])) However, chunks fits slightly worse for knits,
> since the Content code and annotation and deltas needs lines anyway,
> and
> Groupcompress wants fulltexts...
>
> So if you push hard, I'll try to find the time to do it. But this was
> *much* easier.

I think we're at the point of maturity in bzr that it makes sense to
spend a small amount of time saying 'what's the cleanest way to do X',
and then talk about how to get there.

At the moment, expanding VF's API doesn't seem desirable, or the best
way to be tackling the problem. I think there should be precisely one
way to add texts to a VF, and that should be as small and fast as we can
make it.

-Rob

Revision history for this message
John A Meinel (jameinel) wrote : Posted in a previous version of this proposal

...

> I think we're at the point of maturity in bzr that it makes sense to
> spend a small amount of time saying 'what's the cleanest way to do X',
> and then talk about how to get there.
>
> At the moment, expanding VF's API doesn't seem desirable, or the best
> way to be tackling the problem. I think there should be precisely one
> way to add texts to a VF, and that should be as small and fast as we can
> make it.
>
> -Rob
>

We're also blocking on a fairly significant win *today* because of a
potential desire to rewrite a lot of code to make something slightly
cleaner. (Which is something that has been a misfeature of the bzr
project for a *long* time.)

I'm not saying we shouldn't do this, I'm just pointing out the issue.

*For now* I don't feel like rewriting the entire insert_record_stream
stack just to get this in. So I'll leave this pending for now. (More
important is to actually get GC stacking working over bzr+ssh, etc.)

I'm also not sure that getting rid of the "add_this_text_to_the_repo"
method is really a net win. Having to write code like:

    vf.get_record_stream([one_key], 'unordered',
        True).next().get_bytes_as('fulltext')

just to get a single text out is ugly. Not to mention prone to raising
bad exceptions like "AbsentContentFactory has no attribute
.get_bytes_as()", rather than something sane like "NoSuchRevision".
Having to do the same thing during *insert* is just as ugly.

I know you wanted to push people towards multi requests, and I
understand why. I'm not sure that completely removing the convenience
functions is a complete solution, though.

John
=:->

Revision history for this message
Robert Collins (lifeless) wrote : Posted in a previous version of this proposal

On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:

Meta: I'm really confused vis-a-vis reviews and blocking. All I've done
here is *ask* your opinion on reusing insert_record_stream and provided
answers to some of the technical issues you see with that. I haven't set
a review status of veto or resubmit - and I don't think I've signalled
in any way that I would. So I don't know why you're feeling blocked.

> > I think we're at the point of maturity in bzr that it makes sense to
> > spend a small amount of time saying 'what's the cleanest way to do X',
> > and then talk about how to get there.
> >
> > At the moment, expanding VF's API doesn't seem desirable, or the best
> > way to be tackling the problem. I think there should be precisely one
> > way to add texts to a VF, and that should be as small and fast as we can
> > make it.
> >
> > -Rob
> >
>
> We're also blocking on a fairly significant win *today* because of a
> potential desire to rewrite a lot of code to make something slightly
> cleaner. (Which is something that has been a misfeature of the bzr
> project for a *long* time.)

I think we often ask the question - and that's important. Sometimes the
answer is 'yes, we should fix the deep issue' and sometimes it's 'let's
do it with the least possible changes'. Some things do get stuck, and
that's a shame - I've had that happen to concepts I've proposed, and
seen it happen to other people's ideas.

> I'm not saying we shouldn't do this, I'm just pointing out the issue.
>
> *For now* I don't feel like rewriting the entire insert_record_stream
> stack just to get this in. So I'll leave this pending for now. (More
> important is to actually get GC stacking working over bzr+ssh, etc.)

I think it would be a good idea to make the new method private then,
because of the open question hanging over it.

> I'm also not sure that getting rid of the "add_this_text_to_the_repo" is
> really a net win. Having to write code like:
> vf.get_record_stream([one_key], 'unordered',
> True).next().get_bytes_as('fulltext')
>
> just to get a single text out is ugly. Not to mention prone to raising
> bad exceptions like "AbsentContentFactory has no attribute
> .get_bytes_as()", rather than something sane like "NoSuchRevision".
> Having to do the same thing during *insert* is just as ugly.

And yet, single read/single write methods are terrible for networking,
and commit over the network is something we currently support - but
can't make even vaguely fast until commit no longer uses add_text_*. With
respect to exceptions, we actually do want different exceptions at
different places, so I think it has on balance cleaned some stuff up, in
fact.

> I know you wanted to push people towards multi requests, and I
> understand why. I'm not sure that completely removing the convenience
> functions is a complete solution, though.

I'd like us to get to the point where the core code doesn't do network
hostile things. Beyond that - well, I'm ok if plugins and library users
want to shoot themselves in the foot.

-Rob

Revision history for this message
Ian Clatworthy (ian-clatworthy) wrote : Posted in a previous version of this proposal

Robert Collins wrote:
> On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:
>
>
>> We're also blocking on a fairly significant win *today* because of a
>> potential desire to rewrite a lot of code to make something slightly
>> cleaner. (Which is something that has been a misfeature of the bzr
>> project for a *long* time.)
>>

I agree this is a problem that we need to sort out. I occasionally put
and leave useful code in plugins simply because it can take weeks of
effort/debate to get APIs extended in bzrlib. If it only takes a few
hours to write the methods in the first place, it's more productive for
me to just leave the code out of the core and cut-and-paste it when I
need it again.

> I think we often ask the question - and that's important. Sometimes the
> answer is 'yes, we should fix the deep issue' and sometimes it's 'let's
> do it with the least possible changes'. Some things do get stuck, and
> that's a shame - I've had that happen to concepts I've proposed, and
> seen it happen to other people's ideas.
>
>

I agree it's really important to ask the questions. That's the whole
point of reviews.

>> *For now* I don't feel like rewriting the entire insert_record_stream
>> stack just to get this in. So I'll leave this pending for now. (More
>> important is to actually get GC stacking working over bzr+ssh, etc.)
>>
>
> I think it would be a good idea to make the new method private then,
> because of the open question hanging over it.
>
>

That sounds like a reasonable compromise. The other way to look at the
problem though is this:

  "Is this new API a step forward with medium-to-long term value?"

> I'd like us to get to the point where the core code doesn't do network
> hostile things. Beyond that - well, I'm ok if plugins and library users
> want to shoot themselves in the foot.
>

Right. But there are genuine use cases for having easy-to-use,
appropriate-locally-only APIs, e.g. import tools. I see no problems with
having such APIs *provided* the docstrings point the reader to more
network-friendly alternatives.

FWIW, if John's proposed API is faster than the current commonly-used
one, then it sounds like a one-or-two line change to fast-import for me
to take advantage of it. I appreciate that you want fast-import moving
towards using CommitBuilder instead of its own CommitImporter class but
that's a much bigger change (and it's some time away).

Ian C.

Revision history for this message
Robert Collins (lifeless) wrote : Posted in a previous version of this proposal

On Thu, 2009-06-04 at 04:09 +0000, Ian Clatworthy wrote:
> Robert Collins wrote:
> > On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:
> >
> >
> >> We're also blocking on a fairly significant win *today* because of a
> >> potential desire to rewrite a lot of code to make something slightly
> >> cleaner. (Which is something that has been a misfeature of the bzr
> >> project for a *long* time.)
> >>
>
> I agree this is a problem that we need to sort out. I occasionally put
> and leave useful code in plugins simply because it can take weeks of
> effort/debate to get APIs extended in bzrlib. If it only takes a few
> hours to write the methods in the first place, it's more productive for
> me to just leave the code out of the core and cut-and-paste it when I
> need it again.

We don't have a good place for experiments 'in core'. And one possible
answer is that we don't need one - that's what we have plugins for. For
instance, I note that your revno cache got rewritten to be significantly
different as you learnt more about the problem. I think this is healthy,
as long as you don't get blocked.

> >> *For now* I don't feel like rewriting the entire insert_record_stream
> >> stack just to get this in. So I'll leave this pending for now. (More
> >> important is to actually get GC stacking working over bzr+ssh, etc.)
> >>
> >
> > I think it would be a good idea to make the new method private then,
> > because of the open question hanging over it.
> >
> >
>
> That sounds like a reasonable compromise. The other way to look at the
> problem though is this:
>
> "Is this new API a step forward with medium-to-long term value?"

I think that's what the design aspect of the review seeks to answer; but
it's often hard to tell.

> > I'd like us to get to the point where the core code doesn't do network
> > hostile things. Beyond that - well, I'm ok if plugins and library users
> > want to shoot themselves in the foot.
> >
>
> Right. But there are genuine use cases for having easy-to-use,
> appropriate-locally-only APIs, e.g. import tools. I see no problems with
> having such APIs *provided* the docstrings point the reader to more
> network-friendly alternatives.

In this particular case I'd like to have them as adapters, such as:

    # module-level helper in bzrlib/versionedfile.py
    def add_text(versioned_files, bytes, ...):
        for details in versioned_files.insert_record_stream(
                [FulltextContentFactory(bytes, ...)]):
            return details

or whatever. That would separate them cleanly from the core API, prevent
them varying per implementation (easing testing) and make them not the
default way of working.

> FWIW, if John's proposed API is faster than the current commonly-used
> one, then it sounds like a one-or-two line change to fast-import for me
> to take advantage of it. I appreciate that you want fast-import moving
> towards using CommitBuilder instead of its own CommitImporter class but
> that's a much bigger change (and it's some time away).

I think it would be fine to use a private method in fast-import:
fast-import is trying for maximum speed, and you are keeping a close eye
on it.

-Rob

Revision history for this message
John A Meinel (jameinel) wrote :

This patch builds on the previous one, and changes things to:
1) VF.add_text becomes VF._add_text
2) Simplify the API a lot, since it is meant to be used by a limited crowd. I got rid of the parent_texts parameter, etc.; 'commit' never used it, and neither do CHK repositories, so it doesn't really make sense to carry it around as legacy.

This is mostly about making 'commit' hold less data in memory. I think peak usage is right around 3x the size of a large file (depending on text/binary, long/short lines, no eol, etc.). We are still too high for memory on GC repos, but that is the delta index code, and my other patch handles that.

I haven't really performance tested this by itself, because not computing the delta index makes such a large impact.
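
The base-class default stays trivial for implementations that don't care
about memory performance; it just thunks over to add_lines() (this is
from the versionedfile.py hunk in the preview diff below):

    def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
        # Inefficient but correct for any backend: split the text back
        # into lines and use the existing line-based entry point.
        return self.add_lines(key, parents, osutils.split_lines(text),
                              nostore_sha=nostore_sha, random_id=random_id,
                              check_content=True)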

Revision history for this message
John A Meinel (jameinel) wrote :

...
> I haven't really performance tested this by itself, because not computing the
> delta index makes such a large impact.

For committing a 90MB file of short text lines with no final eol, this changes

1.9 469MB 5.7s
dev6 590MB 7.4s

to

1.9 361MB 5.0s
dev6 348MB 5.6s

with the delta-index patch also proposed, this goes further to:

dev6 216MB 4.0s

Revision history for this message
John A Meinel (jameinel) wrote :

To give a slightly better summary, here are results for a 'wide' tree as well, not just a single large file.

single 90MB file
        old            no-delta-index   commit-fulltext
1.9     496MB   5.7s   470MB   5.7s     361MB   5.0s
dev6    590MB   7.4s   457MB   5.8s     216MB   4.0s

initial commit of mysql (140 MB of texts)
        old            no-delta-index   commit-fulltext
1.9      44MB  12.3s    45MB  12.1s      45MB  11.5s
dev6     52MB  16.1s    48MB  15.6s      47MB  14.2s

Getting away from line-based processing improves speed by ~10% (15.6s => 14.2s,
a 1.4s saving), on top of ~10% for not computing the delta index.

The larger the files, the bigger the impact, but it does indeed have a
measurable effect on initial commit times.

Revision history for this message
Andrew Bennetts (spiv) wrote :

I have a couple of comments, but basically:

 vote approve

John A Meinel wrote:
[...]
> +    def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
> +        """See VersionedFiles.add_text()."""
> +        self._index._check_write_ok()
> +        self._check_add(key, None, random_id, check_content=False)
> +        if text.__class__ is not str:
> +            raise errors.BzrBadParameterUnicode("text")

Well... 'text' might not be unicode either ;)

(I'm not really concerned, but ideally perhaps that exception would take the
param value or type as well as param name...)

[...]
> @@ -1597,7 +1615,7 @@
>                  if refs:
>                      for ref in refs:
>                          if ref:
> -                            raise KnitCorrupt(self,
> +                            raise errors.KnitCorrupt(self,

Thanks for fixing this. pyflakes reports two other undefined names in this
file:

bzrlib/groupcompress.py:1543: undefined name 'progress'
bzrlib/groupcompress.py:1689: undefined name 'RevisionNotPresent'

Now would be a good time to fix those too :)

> === modified file 'bzrlib/knit.py'
[...]
> +        line_bytes = ''.join(lines)
>          return self._add(key, lines, parents,
> -            parent_texts, left_matching_blocks, nostore_sha, random_id)
> +            parent_texts, left_matching_blocks, nostore_sha, random_id,
> +            line_bytes=line_bytes)

You pass lines *and* line_bytes? That looks odd. I'm pretty sure it's
intentional, but it's worth pointing this out and explaining the rationale
in _add's docstring, I think.

> +    def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
> +        """See VersionedFiles.add_text()."""
> +        self._index._check_write_ok()
> +        self._check_add(key, None, random_id, check_content=False)
> +        if text.__class__ is not str:
> +            raise errors.BzrBadParameterUnicode("text")
> +        if parents is None:
> +            # The caller might pass None if there is no graph data, but kndx
> +            # indexes can't directly store that, so we give them
> +            # an empty tuple instead.
> +            parents = ()
> +        return self._add(key, None, parents,
> +            None, None, nostore_sha, random_id,
> +            line_bytes=text)

Hmm. It would be nice if this wasn't largely the same as _add_text in
groupcompress.py.

[...]
> -        if lines:
> -            if lines[-1][-1] != '\n':
> -                # copy the contents of lines.
> +        no_eol = False
> +        # Note: line_bytes is not modified to add a newline, that is tracked
> +        # via the no_eol flag. 'lines' *is* modified, because that is the
> +        # general values needed by the Content code.

Gosh I hate our no_eol flag... :)

[...]
> @@ -1920,21 +1949,16 @@
>          function spends less time resizing the final string.
>          :return: (len, a StringIO instance with the raw data ready to read.)
>          """
> -        # Note: using a string copy here increases memory pressure with e.g.
> -        # ISO's, but it is about 3 seconds faster on a 1.2Ghz intel machine
> -        # when doing the ...

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

Andrew Bennetts wrote:
> Review: Approve
> I have a couple of comments, but basically:
>
> vote approve
>
> John A Meinel wrote:
> [...]
>> +    def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
>> +        """See VersionedFiles.add_text()."""
>> +        self._index._check_write_ok()
>> +        self._check_add(key, None, random_id, check_content=False)
>> +        if text.__class__ is not str:
>> +            raise errors.BzrBadParameterUnicode("text")
>
> Well... 'text' might not be unicode either ;)
>
> (I'm not really concerned, but ideally perhaps that exception would take the
> param value or type as well as param name...)

Sure. It was all that "_check_add" was doing, but _check_add does it by
iterating over the list of lines.

>
> [...]
>> @@ -1597,7 +1615,7 @@
>>                  if refs:
>>                      for ref in refs:
>>                          if ref:
>> -                            raise KnitCorrupt(self,
>> +                            raise errors.KnitCorrupt(self,
>
> Thanks for fixing this. pyflakes reports two other undefined names in this
> file:
>
> bzrlib/groupcompress.py:1543: undefined name 'progress'
> bzrlib/groupcompress.py:1689: undefined name 'RevisionNotPresent'
>
> Now would be a good time to fix those too :)

Done

Though of course it would be better to have tests that provoke the
behavior.... :)

>> === modified file 'bzrlib/knit.py'
> [...]
>> +        line_bytes = ''.join(lines)
>>          return self._add(key, lines, parents,
>> -            parent_texts, left_matching_blocks, nostore_sha, random_id)
>> +            parent_texts, left_matching_blocks, nostore_sha, random_id,
>> +            line_bytes=line_bytes)
>
> You pass lines *and* line_bytes? That looks odd. I'm pretty sure it's
> intentional, but it's worth pointing this out and explaining the rationale
> in _add's docstring, I think.

Sure. It is because there are places that want lines (Content objects
require them, and our "diff" code requires them as well), and the commit
code would rather have line_bytes (because writing a single string to
the gzip code is quite a bit faster than a bunch of short ones).

And this way, we have both types accessible when we need them, rather
than "re-casting" from whatever we have into the other type, and then
multiplying memory *again*.

I'll doc it better.
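
Something along these lines for _add's docstring, say:

    :param lines: The content as a list of '\n'-terminated strings, as
        needed by the Content and annotation code.
    :param line_bytes: The same content as a single string; callers must
        ensure line_bytes == ''.join(lines). Accepting both avoids
        re-joining or re-splitting, and the extra memory copy either
        implies.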

>
>> +    def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
>> +        """See VersionedFiles.add_text()."""
>> +        self._index._check_write_ok()
>> +        self._check_add(key, None, random_id, check_content=False)
>> +        if text.__class__ is not str:
>> +            raise errors.BzrBadParameterUnicode("text")
>> +        if parents is None:
>> +            # The caller might pass None if there is no graph data, but kndx
>> +            # indexes can't directly store that, so we give them
>> +            # an empty tuple instead.
>> +            parents = ()
>> +        return self._add(key, None, parents,
>> +            None, None, nostore_sha, random_id,
>> +            line_bytes=text)
>
> Hmm. It would be nice if th...

Revision history for this message
Andrew Bennetts (spiv) wrote :

John A Meinel wrote:
> Andrew Bennetts wrote:
[...]
> > bzrlib/groupcompress.py:1543: undefined name 'progress'
> > bzrlib/groupcompress.py:1689: undefined name 'RevisionNotPresent'
> >
> > Now would be a good time to fix those too :)
>
> Done
>
> Though of course it would be better to have tests that provoke the
> behavior.... :)

Be my guest ;)

I'm happy to take an obvious improvement though.

[...]
> > Hmm. It would be nice if this wasn't largely the same as _add_text in
> > groupcompress.py.
>
> All but the final "self._add" versus "record = FulltextContentFactory,
> self._insert...".
>
> It didn't seem like quite enough duplication to worry about. If you
> really prefer I can refactor.

It's up to you. I can see that it would be a little awkward to refactor
(it's not obvious to me that the common logic really belongs in the
VersionedFiles base class), and I agree about not “quite enough” to require
it.

So, “it would be nice”, but that's all :)

> >> === modified file 'bzrlib/tests/test_versionedfile.py'
> > [...]
> >
> > The conditionals and repetition here are a bit ugly, but it'll do. Ideally
> > this would be done with the proper test parameterisation tools, though.
[...]
> So it is at least better, even if it isn't perfect.
>
> I suppose we could add a test permutation that tried to turn all calls
> to "add_lines" into "_add_text" and run against those. I don't feel it
> is quite worth that.

No, it probably isn't worth it. Thanks for taking a look.

-Andrew.

Preview Diff

1=== modified file 'bzrlib/groupcompress.py'
2--- bzrlib/groupcompress.py 2009-06-10 03:56:49 +0000
3+++ bzrlib/groupcompress.py 2009-06-16 02:36:16 +0000
4@@ -1008,6 +1008,24 @@
5 nostore_sha=nostore_sha))[0]
6 return sha1, length, None
7
8+ def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
9+ """See VersionedFiles.add_text()."""
10+ self._index._check_write_ok()
11+ self._check_add(key, None, random_id, check_content=False)
12+ if text.__class__ is not str:
13+ raise errors.BzrBadParameterUnicode("text")
14+ if parents is None:
15+ # The caller might pass None if there is no graph data, but kndx
16+ # indexes can't directly store that, so we give them
17+ # an empty tuple instead.
18+ parents = ()
19+ # double handling for now. Make it work until then.
20+ length = len(text)
21+ record = FulltextContentFactory(key, parents, None, text)
22+ sha1 = list(self._insert_record_stream([record], random_id=random_id,
23+ nostore_sha=nostore_sha))[0]
24+ return sha1, length, None
25+
26 def add_fallback_versioned_files(self, a_versioned_files):
27 """Add a source of texts for texts not present in this knit.
28
29@@ -1613,7 +1631,7 @@
30 if refs:
31 for ref in refs:
32 if ref:
33- raise KnitCorrupt(self,
34+ raise errors.KnitCorrupt(self,
35 "attempt to add node with parents "
36 "in parentless index.")
37 refs = ()
38
39=== modified file 'bzrlib/knit.py'
40--- bzrlib/knit.py 2009-06-10 03:56:49 +0000
41+++ bzrlib/knit.py 2009-06-16 02:36:16 +0000
42@@ -909,18 +909,35 @@
43 # indexes can't directly store that, so we give them
44 # an empty tuple instead.
45 parents = ()
46+ line_bytes = ''.join(lines)
47 return self._add(key, lines, parents,
48- parent_texts, left_matching_blocks, nostore_sha, random_id)
49+ parent_texts, left_matching_blocks, nostore_sha, random_id,
50+ line_bytes=line_bytes)
51+
52+ def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
53+ """See VersionedFiles.add_text()."""
54+ self._index._check_write_ok()
55+ self._check_add(key, None, random_id, check_content=False)
56+ if text.__class__ is not str:
57+ raise errors.BzrBadParameterUnicode("text")
58+ if parents is None:
59+ # The caller might pass None if there is no graph data, but kndx
60+ # indexes can't directly store that, so we give them
61+ # an empty tuple instead.
62+ parents = ()
63+ return self._add(key, None, parents,
64+ None, None, nostore_sha, random_id,
65+ line_bytes=text)
66
67 def _add(self, key, lines, parents, parent_texts,
68- left_matching_blocks, nostore_sha, random_id):
69+ left_matching_blocks, nostore_sha, random_id,
70+ line_bytes):
71 """Add a set of lines on top of version specified by parents.
72
73 Any versions not present will be converted into ghosts.
74 """
75 # first thing, if the content is something we don't need to store, find
76 # that out.
77- line_bytes = ''.join(lines)
78 digest = sha_string(line_bytes)
79 if nostore_sha == digest:
80 raise errors.ExistingContent
81@@ -947,13 +964,22 @@
82
83 text_length = len(line_bytes)
84 options = []
85- if lines:
86- if lines[-1][-1] != '\n':
87- # copy the contents of lines.
88+ no_eol = False
89+ # Note: line_bytes is not modified to add a newline, that is tracked
90+ # via the no_eol flag. 'lines' *is* modified, because that is the
91+ # general values needed by the Content code.
92+ if line_bytes and line_bytes[-1] != '\n':
93+ options.append('no-eol')
94+ no_eol = True
95+ # Copy the existing list, or create a new one
96+ if lines is None:
97+ lines = osutils.split_lines(line_bytes)
98+ else:
99 lines = lines[:]
100- options.append('no-eol')
101- lines[-1] = lines[-1] + '\n'
102- line_bytes += '\n'
103+ # Replace the last line with one that ends in a final newline
104+ lines[-1] = lines[-1] + '\n'
105+ if lines is None:
106+ lines = osutils.split_lines(line_bytes)
107
108 for element in key[:-1]:
109 if type(element) != str:
110@@ -965,7 +991,7 @@
111 # Knit hunks are still last-element only
112 version_id = key[-1]
113 content = self._factory.make(lines, version_id)
114- if 'no-eol' in options:
115+ if no_eol:
116 # Hint to the content object that its text() call should strip the
117 # EOL.
118 content._should_strip_eol = True
119@@ -986,8 +1012,11 @@
120 if self._factory.__class__ is KnitPlainFactory:
121 # Use the already joined bytes saving iteration time in
122 # _record_to_data.
123+ dense_lines = [line_bytes]
124+ if no_eol:
125+ dense_lines.append('\n')
126 size, bytes = self._record_to_data(key, digest,
127- lines, [line_bytes])
128+ lines, dense_lines)
129 else:
130 # get mixed annotation + content and feed it into the
131 # serialiser.
132@@ -1920,21 +1949,16 @@
133 function spends less time resizing the final string.
134 :return: (len, a StringIO instance with the raw data ready to read.)
135 """
136- # Note: using a string copy here increases memory pressure with e.g.
137- # ISO's, but it is about 3 seconds faster on a 1.2Ghz intel machine
138- # when doing the initial commit of a mozilla tree. RBC 20070921
139- bytes = ''.join(chain(
140- ["version %s %d %s\n" % (key[-1],
141- len(lines),
142- digest)],
143- dense_lines or lines,
144- ["end %s\n" % key[-1]]))
145- if type(bytes) != str:
146- raise AssertionError(
147- 'data must be plain bytes was %s' % type(bytes))
148+ chunks = ["version %s %d %s\n" % (key[-1], len(lines), digest)]
149+ chunks.extend(dense_lines or lines)
150+ chunks.append("end %s\n" % key[-1])
151+ for chunk in chunks:
152+ if type(chunk) != str:
153+ raise AssertionError(
154+ 'data must be plain bytes was %s' % type(chunk))
155 if lines and lines[-1][-1] != '\n':
156 raise ValueError('corrupt lines value %r' % lines)
157- compressed_bytes = tuned_gzip.bytes_to_gzip(bytes)
158+ compressed_bytes = tuned_gzip.chunks_to_gzip(chunks)
159 return len(compressed_bytes), compressed_bytes
160
161 def _split_header(self, line):
162
163=== modified file 'bzrlib/repository.py'
164--- bzrlib/repository.py 2009-06-12 01:11:00 +0000
165+++ bzrlib/repository.py 2009-06-16 02:36:16 +0000
166@@ -494,12 +494,12 @@
167 ie.executable = content_summary[2]
168 file_obj, stat_value = tree.get_file_with_stat(ie.file_id, path)
169 try:
170- lines = file_obj.readlines()
171+ text = file_obj.read()
172 finally:
173 file_obj.close()
174 try:
175 ie.text_sha1, ie.text_size = self._add_text_to_weave(
176- ie.file_id, lines, heads, nostore_sha)
177+ ie.file_id, text, heads, nostore_sha)
178 # Let the caller know we generated a stat fingerprint.
179 fingerprint = (ie.text_sha1, stat_value)
180 except errors.ExistingContent:
181@@ -517,8 +517,7 @@
182 # carry over:
183 ie.revision = parent_entry.revision
184 return self._get_delta(ie, basis_inv, path), False, None
185- lines = []
186- self._add_text_to_weave(ie.file_id, lines, heads, None)
187+ self._add_text_to_weave(ie.file_id, '', heads, None)
188 elif kind == 'symlink':
189 current_link_target = content_summary[3]
190 if not store:
191@@ -532,8 +531,7 @@
192 ie.symlink_target = parent_entry.symlink_target
193 return self._get_delta(ie, basis_inv, path), False, None
194 ie.symlink_target = current_link_target
195- lines = []
196- self._add_text_to_weave(ie.file_id, lines, heads, None)
197+ self._add_text_to_weave(ie.file_id, '', heads, None)
198 elif kind == 'tree-reference':
199 if not store:
200 if content_summary[3] != parent_entry.reference_revision:
201@@ -544,8 +542,7 @@
202 ie.revision = parent_entry.revision
203 return self._get_delta(ie, basis_inv, path), False, None
204 ie.reference_revision = content_summary[3]
205- lines = []
206- self._add_text_to_weave(ie.file_id, lines, heads, None)
207+ self._add_text_to_weave(ie.file_id, '', heads, None)
208 else:
209 raise NotImplementedError('unknown kind')
210 ie.revision = self._new_revision_id
211@@ -745,7 +742,7 @@
212 entry.executable = True
213 else:
214 entry.executable = False
215- if (carry_over_possible and
216+ if (carry_over_possible and
217 parent_entry.executable == entry.executable):
218 # Check the file length, content hash after reading
219 # the file.
220@@ -754,12 +751,12 @@
221 nostore_sha = None
222 file_obj, stat_value = tree.get_file_with_stat(file_id, change[1][1])
223 try:
224- lines = file_obj.readlines()
225+ text = file_obj.read()
226 finally:
227 file_obj.close()
228 try:
229 entry.text_sha1, entry.text_size = self._add_text_to_weave(
230- file_id, lines, heads, nostore_sha)
231+ file_id, text, heads, nostore_sha)
232 yield file_id, change[1][1], (entry.text_sha1, stat_value)
233 except errors.ExistingContent:
234 # No content change against a carry_over parent
235@@ -774,7 +771,7 @@
236 parent_entry.symlink_target == entry.symlink_target):
237 carried_over = True
238 else:
239- self._add_text_to_weave(change[0], [], heads, None)
240+ self._add_text_to_weave(change[0], '', heads, None)
241 elif kind == 'directory':
242 if carry_over_possible:
243 carried_over = True
244@@ -782,7 +779,7 @@
245 # Nothing to set on the entry.
246 # XXX: split into the Root and nonRoot versions.
247 if change[1][1] != '' or self.repository.supports_rich_root():
248- self._add_text_to_weave(change[0], [], heads, None)
249+ self._add_text_to_weave(change[0], '', heads, None)
250 elif kind == 'tree-reference':
251 if not self.repository._format.supports_tree_reference:
252 # This isn't quite sane as an error, but we shouldn't
253@@ -797,7 +794,7 @@
254 parent_entry.reference_revision == reference_revision):
255 carried_over = True
256 else:
257- self._add_text_to_weave(change[0], [], heads, None)
258+ self._add_text_to_weave(change[0], '', heads, None)
259 else:
260 raise AssertionError('unknown kind %r' % kind)
261 if not carried_over:
262@@ -818,17 +815,11 @@
263 self._require_root_change(tree)
264 self.basis_delta_revision = basis_revision_id
265
266- def _add_text_to_weave(self, file_id, new_lines, parents, nostore_sha):
267- # Note: as we read the content directly from the tree, we know its not
268- # been turned into unicode or badly split - but a broken tree
269- # implementation could give us bad output from readlines() so this is
270- # not a guarantee of safety. What would be better is always checking
271- # the content during test suite execution. RBC 20070912
272- parent_keys = tuple((file_id, parent) for parent in parents)
273- return self.repository.texts.add_lines(
274- (file_id, self._new_revision_id), parent_keys, new_lines,
275- nostore_sha=nostore_sha, random_id=self.random_revid,
276- check_content=False)[0:2]
277+ def _add_text_to_weave(self, file_id, new_text, parents, nostore_sha):
278+ parent_keys = tuple([(file_id, parent) for parent in parents])
279+ return self.repository.texts._add_text(
280+ (file_id, self._new_revision_id), parent_keys, new_text,
281+ nostore_sha=nostore_sha, random_id=self.random_revid)[0:2]
282
283
284 class RootCommitBuilder(CommitBuilder):
285
286=== modified file 'bzrlib/tests/test_tuned_gzip.py'
287--- bzrlib/tests/test_tuned_gzip.py 2009-03-23 14:59:43 +0000
288+++ bzrlib/tests/test_tuned_gzip.py 2009-06-16 02:36:17 +0000
289@@ -85,3 +85,28 @@
290 self.assertEqual('', stream.read())
291 # and it should be new member time in the stream.
292 self.failUnless(myfile._new_member)
293+
294+
295+class TestToGzip(TestCase):
296+
297+ def assertToGzip(self, chunks):
298+ bytes = ''.join(chunks)
299+ gzfromchunks = tuned_gzip.chunks_to_gzip(chunks)
300+ gzfrombytes = tuned_gzip.bytes_to_gzip(bytes)
301+ self.assertEqual(gzfrombytes, gzfromchunks)
302+ decoded = tuned_gzip.GzipFile(fileobj=StringIO(gzfromchunks)).read()
303+ self.assertEqual(bytes, decoded)
304+
305+ def test_single_chunk(self):
306+ self.assertToGzip(['a modest chunk\nwith some various\nbits\n'])
307+
308+ def test_simple_text(self):
309+ self.assertToGzip(['some\n', 'strings\n', 'to\n', 'process\n'])
310+
311+ def test_large_chunks(self):
312+ self.assertToGzip(['a large string\n'*1024])
313+ self.assertToGzip(['a large string\n']*1024)
314+
315+ def test_enormous_chunks(self):
316+ self.assertToGzip(['a large string\n'*1024*256])
317+ self.assertToGzip(['a large string\n']*1024*256)
318
319=== modified file 'bzrlib/tests/test_versionedfile.py'
320--- bzrlib/tests/test_versionedfile.py 2009-05-01 18:09:24 +0000
321+++ bzrlib/tests/test_versionedfile.py 2009-06-16 02:36:17 +0000
322@@ -1471,6 +1471,58 @@
323 self.addCleanup(lambda:self.cleanup(files))
324 return files
325
326+ def test_add_lines(self):
327+ f = self.get_versionedfiles()
328+ if self.key_length == 1:
329+ key0 = ('r0',)
330+ key1 = ('r1',)
331+ key2 = ('r2',)
332+ keyf = ('foo',)
333+ else:
334+ key0 = ('fid', 'r0')
335+ key1 = ('fid', 'r1')
336+ key2 = ('fid', 'r2')
337+ keyf = ('fid', 'foo')
338+ f.add_lines(key0, [], ['a\n', 'b\n'])
339+ if self.graph:
340+ f.add_lines(key1, [key0], ['b\n', 'c\n'])
341+ else:
342+ f.add_lines(key1, [], ['b\n', 'c\n'])
343+ keys = f.keys()
344+ self.assertTrue(key0 in keys)
345+ self.assertTrue(key1 in keys)
346+ records = []
347+ for record in f.get_record_stream([key0, key1], 'unordered', True):
348+ records.append((record.key, record.get_bytes_as('fulltext')))
349+ records.sort()
350+ self.assertEqual([(key0, 'a\nb\n'), (key1, 'b\nc\n')], records)
351+
352+ def test__add_text(self):
353+ f = self.get_versionedfiles()
354+ if self.key_length == 1:
355+ key0 = ('r0',)
356+ key1 = ('r1',)
357+ key2 = ('r2',)
358+ keyf = ('foo',)
359+ else:
360+ key0 = ('fid', 'r0')
361+ key1 = ('fid', 'r1')
362+ key2 = ('fid', 'r2')
363+ keyf = ('fid', 'foo')
364+ f._add_text(key0, [], 'a\nb\n')
365+ if self.graph:
366+ f._add_text(key1, [key0], 'b\nc\n')
367+ else:
368+ f._add_text(key1, [], 'b\nc\n')
369+ keys = f.keys()
370+ self.assertTrue(key0 in keys)
371+ self.assertTrue(key1 in keys)
372+ records = []
373+ for record in f.get_record_stream([key0, key1], 'unordered', True):
374+ records.append((record.key, record.get_bytes_as('fulltext')))
375+ records.sort()
376+ self.assertEqual([(key0, 'a\nb\n'), (key1, 'b\nc\n')], records)
377+
378 def test_annotate(self):
379 files = self.get_versionedfiles()
380 self.get_diamond_files(files)
381@@ -1520,7 +1572,7 @@
382 trailing_eol=trailing_eol, nograph=not self.graph,
383 left_only=left_only, nokeys=nokeys)
384
385- def test_add_lines_nostoresha(self):
386+ def _add_content_nostoresha(self, add_lines):
387 """When nostore_sha is supplied using old content raises."""
388 vf = self.get_versionedfiles()
389 empty_text = ('a', [])
390@@ -1528,7 +1580,12 @@
391 sample_text_no_nl = ('c', ["foo\n", "bar"])
392 shas = []
393 for version, lines in (empty_text, sample_text_nl, sample_text_no_nl):
394- sha, _, _ = vf.add_lines(self.get_simple_key(version), [], lines)
395+ if add_lines:
396+ sha, _, _ = vf.add_lines(self.get_simple_key(version), [],
397+ lines)
398+ else:
399+ sha, _, _ = vf._add_text(self.get_simple_key(version), [],
400+ ''.join(lines))
401 shas.append(sha)
402 # we now have a copy of all the lines in the vf.
403 for sha, (version, lines) in zip(
404@@ -1537,10 +1594,19 @@
405 self.assertRaises(errors.ExistingContent,
406 vf.add_lines, new_key, [], lines,
407 nostore_sha=sha)
408+ self.assertRaises(errors.ExistingContent,
409+ vf._add_text, new_key, [], ''.join(lines),
410+ nostore_sha=sha)
411 # and no new version should have been added.
412 record = vf.get_record_stream([new_key], 'unordered', True).next()
413 self.assertEqual('absent', record.storage_kind)
414
415+ def test_add_lines_nostoresha(self):
416+ self._add_content_nostoresha(add_lines=True)
417+
418+ def test__add_text_nostoresha(self):
419+ self._add_content_nostoresha(add_lines=False)
420+
421 def test_add_lines_return(self):
422 files = self.get_versionedfiles()
423 # save code by using the stock data insertion helper.
424
425=== modified file 'bzrlib/tuned_gzip.py'
426--- bzrlib/tuned_gzip.py 2009-03-23 14:59:43 +0000
427+++ bzrlib/tuned_gzip.py 2009-06-16 02:36:16 +0000
428@@ -52,6 +52,18 @@
429 width=-zlib.MAX_WBITS, mem=zlib.DEF_MEM_LEVEL,
430 crc32=zlib.crc32):
431 """Create a gzip file containing bytes and return its content."""
432+ return chunks_to_gzip([bytes])
433+
434+
435+def chunks_to_gzip(chunks, factory=zlib.compressobj,
436+ level=zlib.Z_DEFAULT_COMPRESSION, method=zlib.DEFLATED,
437+ width=-zlib.MAX_WBITS, mem=zlib.DEF_MEM_LEVEL,
438+ crc32=zlib.crc32):
439+ """Create a gzip file containing chunks and return its content.
440+
441+ :param chunks: An iterable of strings. Each string can have arbitrary
442+ layout.
443+ """
444 result = [
445 '\037\213' # self.fileobj.write('\037\213') # magic header
446 '\010' # self.fileobj.write('\010') # compression method
447@@ -69,11 +81,17 @@
448 # using a compressobj avoids a small header and trailer that the compress()
449 # utility function adds.
450 compress = factory(level, method, width, mem, 0)
451- result.append(compress.compress(bytes))
452+ crc = 0
453+ total_len = 0
454+ for chunk in chunks:
455+ crc = crc32(chunk, crc)
456+ total_len += len(chunk)
457+ zbytes = compress.compress(chunk)
458+ if zbytes:
459+ result.append(zbytes)
460 result.append(compress.flush())
461- result.append(struct.pack("<L", LOWU32(crc32(bytes))))
462 # size may exceed 2GB, or even 4GB
463- result.append(struct.pack("<L", LOWU32(len(bytes))))
464+ result.append(struct.pack("<LL", LOWU32(crc), LOWU32(total_len)))
465 return ''.join(result)
466
467
468
469=== modified file 'bzrlib/versionedfile.py'
470--- bzrlib/versionedfile.py 2009-06-10 03:56:49 +0000
471+++ bzrlib/versionedfile.py 2009-06-16 02:36:17 +0000
472@@ -829,6 +829,36 @@
473 """
474 raise NotImplementedError(self.add_lines)
475
476+ def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
477+ """Add a text to the store.
478+
479+ This is a private function for use by CommitBuilder.
480+
481+ :param key: The key tuple of the text to add. If the last element is
482+ None, a CHK string will be generated during the addition.
483+ :param parents: The parents key tuples of the text to add.
484+ :param text: A string containing the text to be committed.
485+ :param nostore_sha: Raise ExistingContent and do not add the lines to
486+ the versioned file if the digest of the lines matches this.
487+ :param random_id: If True a random id has been selected rather than
488+ an id determined by some deterministic process such as a converter
489+ from a foreign VCS. When True the backend may choose not to check
490+ for uniqueness of the resulting key within the versioned file, so
491+ this should only be done when the result is expected to be unique
492+ anyway.
493+ :param check_content: If True, the lines supplied are verified to be
494+ bytestrings that are correctly formed lines.
495+ :return: The text sha1, the number of bytes in the text, and an opaque
496+ representation of the inserted version which can be provided
497+ back to future _add_text calls in the parent_texts dictionary.
498+ """
499+ # The default implementation just thunks over to .add_lines(),
500+ # inefficient, but it works.
501+ return self.add_lines(key, parents, osutils.split_lines(text),
502+ nostore_sha=nostore_sha,
503+ random_id=random_id,
504+ check_content=True)
505+
506 def add_mpdiffs(self, records):
507 """Add mpdiffs to this VersionedFile.
508