Merge lp:~abentley/bzr/skip-dupes into lp:~bzr/bzr/trunk-old
Status: Merged
Merged at revision: not available
Proposed branch: lp:~abentley/bzr/skip-dupes
Merge into: lp:~bzr/bzr/trunk-old
Diff against target: 350 lines
To merge this branch: bzr merge lp:~abentley/bzr/skip-dupes
Related bugs:

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Robert Collins (community) | | | Approve |

Review via email: mp+7957@code.launchpad.net
This proposal supersedes a proposal from 2009-06-25.
Aaron Bentley (abentley) wrote (posted in a previous version of this proposal):
Robert Collins (lifeless) wrote (posted in a previous version of this proposal):
On Wed, 2009-06-24 at 14:50 +0000, Aaron Bentley wrote:
> Aaron Bentley has proposed merging lp:~abentley/bzr/skip-dupes into lp:bzr.
>
> Requested reviews:
> bzr-core (bzr-core)
>
>
> Hi all,
>
> There are several issues related to Launchpad's current problems with
> pulling data from incompletely-reconciled repositories. For one thing,
> we're incorrectly predicting which records need to be inserted. For
> another, we're inserting records that we already have, which wastes disk
> space, and causes us to raise exceptions if the record metadata differs.
>
> This patch allows bzr to avoid installing any records which are already
> present, which seems like the most robust fix. It would also be nice to
> investigate why we're sending too many records.
review -1
There are two consequences of the line you've added:
- we'll lookup everything we are fetching in the target repository
- incorrect data in one location will be sticky there and no warning
will be given
From a performance point of view, looking up things we don't have is
pretty much worst case: we'll have to hit every index, and read to the
bottom page, most of the time, to show we don't have it. We got
significant performance wins by removing or batching similar calls in
knit.py over the last couple of years. We could address the incorrect
data being sticky and not warning by doing a warning rather than just
continuing, however it won't fix the performance impact.
Now, add_records in the same file *does* do a lookup for the same data,
but it does it in a batch, which means that we don't run the risk of
cache thrashing on large pulls, *and* it's controllable via the random_id
flag: we'll hit every backing index once and only once, and we don't pay
that overhead at all during commit.
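Robert's performance point is about index access patterns. A minimal sketch (a hypothetical `CombinedIndex`, not bzrlib's actual classes) of why per-record lookups of absent keys are the worst case, while a batched query amortises the cost across backing indexes:

```python
# Hypothetical sketch (not bzrlib's real API) of the access-pattern point:
# with several backing pack indexes, probing for keys one at a time touches
# every backing index per missing key, while a batched query touches each
# backing index once for the whole key set.

class CombinedIndex:
    def __init__(self, backing_indexes):
        self._backing = backing_indexes  # dict-like per-pack indexes
        self.probes = 0                  # backing-index probes performed

    def contains_key(self, key):
        # Per-key lookup: a missing key scans every backing index.
        for idx in self._backing:
            self.probes += 1
            if key in idx:
                return True
        return False

    def present_keys(self, keys):
        # Batched lookup: one probe per backing index for the whole set.
        present = set()
        for idx in self._backing:
            self.probes += 1
            present.update(k for k in keys if k in idx)
        return present


index = CombinedIndex([{('a',): 'v1'}, {('b',): 'v2'}, {}])

# Looking up three missing keys one at a time: 3 keys * 3 indexes = 9 probes.
for key in [('x',), ('y',), ('z',)]:
    index.contains_key(key)
per_key_probes = index.probes

index.probes = 0
index.present_keys({('x',), ('y',), ('z',)})  # 3 probes total
```

The gap widens with the number of packs and the number of records fetched, which is why batching (and skipping the check entirely when `random_id` guarantees novelty) matters on large pulls.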
I'd also _really_ prefer for us to just fix the data rather than working
around it in push and pull. The more we can work on the basis of the
data being correct, the leaner and faster our code can be.
https:/
data being sent in push/pull operations and may well have significant
bearing on what you're observing here.
-Rob
Aaron Bentley (abentley) wrote (posted in a previous version of this proposal):
Robert Collins wrote:
> review -1
> There are two consequences of the line you've added:
> - we'll lookup everything we are fetching in the target repository
We were already looking up everything we were fetching in the target
repository.
> - incorrect data in one location will be sticky there and no warning
> will be given
We were already ignoring irrelevant incorrect data most of the time.
With this change, we would ignore it all the time.
> From a performance point of view, looking up things we don't have is
> pretty much worst case: we'll have to hit every index, and read to the
> bottom page, most of the time, to show we don't have it. We got
> significant performance wins by removing or batching similar calls in
> knit.py over the last couple of years.
When fetching the last 100 revisions of bzr.dev into a treeless branch,
my patched version was actually faster, in our standard best-of-3 test:
skip-dupes:
real 0m4.431s
user 0m3.808s
sys 0m0.196s
bzr.dev:
real 0m4.439s
user 0m3.788s
sys 0m0.208s
I conclude that pull times are noisy enough that the inefficiency in my
version is not observable. Performance is not a concern.
> Now, add_records in the same file, *does* do a lookup for the same data,
> but it does it in a batch which means that we don't run the risk of
> cache thrashing on large pulls, *and* its controllable via the random_id
> flag: we'll hit every backing index once and only once, and we don't pay
> that overhead at all during commit.
I'd love to do a batched lookup in
GroupCompressVe
that very difficult. It would be nice if the API told us, possibly in
batches, what records it was about to send.
I'm happy to disable the dupe-skipping code when random_id is True.
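The batched API Aaron wishes for could look something like the sketch below (all names are hypothetical; bzrlib's real `insert_record_stream` works on record objects, not bare key/value pairs): the stream is consumed in chunks, presence is checked once per chunk, and duplicates are dropped before insertion.

```python
# Hypothetical sketch of a record stream that announces upcoming keys in
# batches, so presence can be checked once per batch rather than once per
# record. A plain dict stands in for the target index.

def batched(stream, size):
    """Group an iterable of records into lists of at most `size`."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def insert_record_stream(index, stream, batch_size=100):
    inserted = []
    for batch in batched(stream, batch_size):
        keys = [key for key, _ in batch]
        present = {k for k in keys if k in index}   # one pass per batch
        for key, value in batch:
            if key in present:
                continue                            # skip duplicate records
            index[key] = value
            inserted.append(key)
    return inserted

index = {('a',): 'old'}
stream = iter([(('a',), 'dup'), (('b',), 'new'), (('c',), 'new')])
result = insert_record_stream(index, stream, batch_size=2)
```

Here the duplicate `('a',)` record is skipped without ever being re-indexed, and the existing entry is left untouched.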
> I'd also _really_ prefer for us to just fix the data rather than working
> around it in push and pull.
I'm not really sure what you mean. In my case, my repository is
reconciled, and I'm trying to pull correct data from
lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
random launchpad developer can reasonably be expected to reconcile the
master copy of lp:~launchpad-pqm/launchpad/devel.
Because push and pull are acting as a barrier, preventing pure and
impure repositories from interoperating, we have to delay reconciling
lp:~launchpad-pqm/launchpad/devel. If we were to reconcile it now, that
could break all other launchpad devs until they, too, reconciled.
Because push and pull act as a barrier and reconcile takes 3+ days to
run, we would have to stop all PQM commits for 3+ days in order to
reconcile lp:~launchpad-pqm/launchpad/devel. If we were to take a
mirror of lp:~launchpad-pqm/launchpad/devel and reconcile that, we might
not be able to merge in the commits that had happened while reconcile
was running, even though those commits would be correct data.
> The more we can work on the basis of the
> data being correct, the leaner and faster our code can be.
Yes, but this was not any kind of carefully-designed, systematic test.
The only reason it happens is because we're not correctly determining
which records to send. A...
Robert Collins (lifeless) wrote (posted in a previous version of this proposal):
On Thu, 2009-06-25 at 01:03 +0000, Aaron Bentley wrote:
>
> Robert Collins wrote:
> > review -1
> > There are two consequences of the line you've added:
> > - we'll lookup everything we are fetching in the target repository
>
> We were already looking up everything we were fetching in the target
> repository.
With a different access pattern, yes. However, that's arguably a bug: we
shouldn't need to look up CHK pages at all, as all their ids are
effectively random.
> > - incorrect data in one location will be sticky there and no warning
> > will be given
>
> We were already ignoring irrelevant incorrect data most of the time.
> With this change, we would ignore it all the time.
We were? Could you enlarge on this? We do ignore data that we already
have - is that what you mean?
> > From a performance point of view, looking up things we don't have is
> > pretty much worst case: we'll have to hit every index, and read to the
> > bottom page, most of the time, to show we don't have it. We got
> > significant performance wins by removing or batching similar calls in
> > knit.py over the last couple of years.
>
> When fetching the last 100 revisions of bzr.dev into a treeless branch,
> my patched version was actually faster, in our standard best-of-3 test:
>
> skip-dupes:
> real 0m4.431s
> user 0m3.808s
> sys 0m0.196s
>
> bzr.dev:
> real 0m4.439s
> user 0m3.788s
> sys 0m0.208s
>
> I conclude that pull times are noisy enough that the inefficiency in my
> version is not observable. Performance is not a concern.
It's not at that test scale. However, larger pulls on larger trees may
well show different results. bzr itself is small on the scale of
projects that are using bzr now.
> > Now, add_records in the same file, *does* do a lookup for the same data,
> > but it does it in a batch which means that we don't run the risk of
> > cache thrashing on large pulls, *and* its controllable via the random_id
> > flag: we'll hit every backing index once and only once, and we don't pay
> > that overhead at all during commit.
>
> I'd love to do a batched lookup in
> GroupCompressVe
> that very difficult. It would be nice if the API told us, possibly in
> batches, what records it was about to send.
Another way to structure this is to not error when add_records detects a
duplicate: just don't index it. The content will be redundant in the
group it's in, but not looked up from there. (And if the group is empty
we could discard it.) That would mean that only a single lookup happens
and would address the performance concerns I have.
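Robert's suggestion, skipping a detected duplicate instead of erroring, can be sketched as follows. This is a simplified, hypothetical stand-in for bzrlib's `_GCGraphIndex.add_records` (a plain dict as the index, `ValueError` in place of `KnitCorrupt`, the `warnings` module in place of `trace.warning`):

```python
# Sketch: when a record to be added is already present, drop it from the
# batch instead of raising; warn (or raise, per a flag) only if the
# existing details disagree with the incoming ones.

import warnings

def add_records(index, new_records, inconsistency_fatal=True):
    """index and new_records both map key -> details tuples."""
    to_add = dict(new_records)
    for key, details in new_records.items():
        if key in index:                  # presence check for the whole batch
            if index[key] != details:
                msg = "inconsistent details in skipped record: %r" % (key,)
                if inconsistency_fatal:
                    raise ValueError(msg)
                warnings.warn(msg)
            del to_add[key]               # duplicate: do not re-index it
    index.update(to_add)
    return to_add

index = {('b',): ('42 32 0 8', ())}
added = add_records(index, {('a',): ('10 0 0 0', ()),
                            ('b',): ('42 32 0 8', ())})
# only ('a',) is newly indexed; the consistent duplicate ('b',) is skipped
```

An inconsistent duplicate still raises by default, which matches the eventual shape of the patch: the check happens once, duplicates leave no second index entry, and the fatal/non-fatal behaviour is a policy knob.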
> I'm happy to disable the dupe-skipping code when random_id is True.
>
> > I'd also _really_ prefer for us to just fix the data rather than working
> > around it in push and pull.
>
> I'm not really sure what you mean. In my case, my repository is
> reconciled, and I'm trying to pull correct data from
> lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
> random launchpad developer can reasonably be expected to reconcile the
> master copy of lp:~launchpad-pqm/launchpad/devel.
The master copy was meant to be reconciled *before* migrating; and all
user repositories were also meant to be reconciled.
Aaron Bentley (abentley) wrote (posted in a previous version of this proposal):
Robert Collins wrote:
> On Thu, 2009-06-25 at 01:03 +0000, Aaron Bentley wrote:
>> We were already looking up everything we were fetching in the target
>> repository.
>
> With a different access pattern, yes. However thats arguably a bug, we
> shouldn't need to look up CHK pages at all, as all their ids are
> effectively random.
So you're saying that we should be looking up all records except CHK
records? Do you mean that their ids vary across repositories? I did
not think that was the case.
>>> - incorrect data in one location will be sticky there and no warning
>>> will be given
>> We were already ignoring irrelevant incorrect data most of the time.
>> With this change, we would ignore it all the time.
>
> We were? Could you enlarge on this. We do ignore data that we already
> have - is that what you mean?
That is what I mean.
>> I conclude that pull times are noisy enough that the inefficiency in my
>> version is not observable. Performance is not a concern.
>
> Its not in that test/scale. However, larger pulls on larger trees may
> well show different results. bzr itself is small in the scale of
> projects that are using bzr now.
I've based my statements about performance on benchmarking. Please base
your statements about performance on benchmarking.
>> I'd love to do a batched lookup in
>> GroupCompressVe
>> that very difficult. It would be nice if the API told us, possibly in
>> batches, what records it was about to send.
>
> Another way to structure this is to not error when add_records detects a
> duplicate: just don't index it. The content will be redundant in the
> group its in, but not looked up from there.
I thought you wanted repository indices to be fully reconstructible from
the data in the repository itself. Having two entries for a node, with
one deliberately omitted from the index is not a situation we could
reconstruct from the repository alone.
> That would mean that only a single lookup happens
> and would address the performance concerns I have.
I'd be willing to fix it that way.
>> lp:~launchpad-pqm/launchpad/devel. This bug prevented that, and no
>> random launchpad developer can reasonably be expected to reconcile the
>> master copy of lp:~launchpad-pqm/launchpad/devel.
>
> The master copy was meant to be reconciled *before* migrating;
It was. Then bad commits were pushed into it:
<email address hidden>
<email address hidden>
<email address hidden>
> and all
> user repositories were also meant to be reconciled.
Mine was. That's what made its versions of those revisions incompatible
with the versions in launchpad trunk.
>> Because push and pull act as a barrier and reconcile takes 3+ days to
>> run, we would have to stop all PQM commits for 3+ days in order to
>> reconcile lp:~launchpad-pqm/launchpad/devel. If we were to take a
>> mirror of lp:~launchpad-pqm/launchpad/devel and reconcile that, we might
>> not be able to merge in the commits that had happened while reconcile
>> was running, even though those commits would be correct data.
Robert Collins (lifeless) wrote (posted in a previous version of this proposal):
One further note that is relevant I think.
John, Vincent and I are closing in on a fix for bug 350563. Once this is
fixed, duplicate texts won't be pulled across and I think your headaches
with fetch() will go away. The current status is that there is a branch
(lp:~lifeless/bzr/bug-350563) which has a partial fix - one that should
cover all the cases you're running into in launchpad, but won't cover
some more esoteric cases to do with the internal structure of CHK page
layout.
I'd appreciate it if you could try pulling with this branch and see if
it stops you having errors.
-Rob
Aaron Bentley (abentley) wrote (posted in a previous version of this proposal):
Per Robert's request, this revised version handles duplicates by not adding the records to the index.
It turns out that this was already the behaviour, except that it would raise an exception if the duplicate record was inconsistent.
So this patch changes the error to a warning, which is also in line with Robert's wishes. It also adds the key of the inconsistent record to the message.
Robert Collins (lifeless) wrote (posted in a previous version of this proposal):
Thank you for making these changes. I was still nervous about things
going wrong - stepping back from paranoia isn't easy.
If you can do the following, I'll feel comfortable that things won't go
badly wrong *and* get ignored by users.
Add some sort of flag to the state of the GroupCompressVersionedFiles
object indicating 'trace or raise'. Then in the GCPackRepository
__init__, set the flag as follows:
revisions: raise
inventories: trace
texts: trace
signatures: trace
chk_bytes: trace
This will mean that if the revisions graph is wrong we stop cold - which
is important because that affects fetch. The other graphs don't affect
fetch and are consequently a lower priority, IMO.
-Rob
Aaron Bentley (abentley) wrote:
Here is a new version, revised per Robert's request, so that inconsistent unwanted revisions are fatal, while all others generate a warning.
Robert Collins (lifeless) wrote:
review +1
Other tests for '2a' are in the test_repository.py file; you might like
to put your new one in the Test2a class in there for consistency. Other
than that optional change - looks good, please land.
-Rob
Aaron Bentley (abentley) wrote:
> review +1
>
> Other tests for '2a' are in the test_repository.py file; you might like
> to put your new one in the Test2a class in there for consistency. Other
> than that optional change - looks good, please land.
Cool, thanks.
I think having a test_groupcompr
Preview Diff
=== modified file 'NEWS'
--- NEWS 2009-06-26 23:28:46 +0000
+++ NEWS 2009-06-29 15:35:12 +0000
@@ -50,6 +50,9 @@
 * ``bzr ls DIR --from-root`` now shows only things in DIR, not everything.
   (Ian Clatworthy)
 
+* Fetch between repositories does not error if they have inconsistent data
+  that should be irrelevant to the fetch operation. (Aaron Bentley)
+
 * Progress bars are now suppressed again when the environment variable
   ``BZR_PROGRESS_BAR`` is set to ``none``.
   (Martin Pool, #339385)
 
=== modified file 'bzrlib/groupcompress.py'
--- bzrlib/groupcompress.py 2009-06-23 15:27:50 +0000
+++ bzrlib/groupcompress.py 2009-06-29 15:35:12 +0000
@@ -942,7 +942,7 @@
         self.endpoint = endpoint
 
 
-def make_pack_factory(graph, delta, keylength):
+def make_pack_factory(graph, delta, keylength, inconsistency_fatal=True):
     """Create a factory for creating a pack based groupcompress.
 
     This is only functional enough to run interface tests, it doesn't try to
@@ -963,7 +963,8 @@
         writer = pack.ContainerWriter(stream.write)
         writer.begin()
         index = _GCGraphIndex(graph_index, lambda:True, parents=parents,
-            add_callback=graph_index.add_nodes)
+            add_callback=graph_index.add_nodes,
+            inconsistency_fatal=inconsistency_fatal)
         access = knit._DirectPackAccess({})
         access.set_writer(writer, graph_index, (transport, 'newpack'))
         result = GroupCompressVersionedFiles(index, access, delta)
@@ -1610,7 +1611,8 @@
     """Mapper from GroupCompressVersionedFiles needs into GraphIndex storage."""
 
     def __init__(self, graph_index, is_locked, parents=True,
-                 add_callback=None, track_external_parent_refs=False):
+                 add_callback=None, track_external_parent_refs=False,
+                 inconsistency_fatal=True):
         """Construct a _GCGraphIndex on a graph_index.
 
         :param graph_index: An implementation of bzrlib.index.GraphIndex.
@@ -1624,12 +1626,17 @@
         :param track_external_parent_refs: As keys are added, keep track of the
             keys they reference, so that we can query get_missing_parents(),
             etc.
+        :param inconsistency_fatal: When asked to add records that are already
+            present, and the details are inconsistent with the existing
+            record, raise an exception instead of warning (and skipping the
+            record).
         """
         self._add_callback = add_callback
         self._graph_index = graph_index
         self._parents = parents
         self.has_graph = parents
         self._is_locked = is_locked
+        self._inconsistency_fatal = inconsistency_fatal
         if track_external_parent_refs:
             self._key_dependencies = knit._KeyRefs()
         else:
@@ -1671,8 +1678,14 @@
             present_nodes = self._get_entries(keys)
             for (index, key, value, node_refs) in present_nodes:
                 if node_refs != keys[key][1]:
-                    raise errors.KnitCorrupt(self, "inconsistent details in add_records"
-                        ": %s %s" % ((value, node_refs), keys[key]))
+                    details = '%s %s %s' % (key, (value, node_refs), keys[key])
+                    if self._inconsistency_fatal:
+                        raise errors.KnitCorrupt(self, "inconsistent details"
+                                                 " in add_records: %s" %
+                                                 details)
+                    else:
+                        trace.warning("inconsistent details in skipped"
+                                      " record: %s", details)
                 del keys[key]
                 changed = True
         if changed:
 
=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py 2009-06-26 09:24:34 +0000
+++ bzrlib/repofmt/groupcompress_repo.py 2009-06-29 15:35:12 +0000
@@ -622,7 +622,8 @@
         self.inventories = GroupCompressVersionedFiles(
             _GCGraphIndex(self._pack_collection.inventory_index.combined_index,
                 add_callback=self._pack_collection.inventory_index.add_callback,
-                parents=True, is_locked=self.is_locked),
+                parents=True, is_locked=self.is_locked,
+                inconsistency_fatal=False),
             access=self._pack_collection.inventory_index.data_access)
         self.revisions = GroupCompressVersionedFiles(
             _GCGraphIndex(self._pack_collection.revision_index.combined_index,
@@ -634,19 +635,22 @@
         self.signatures = GroupCompressVersionedFiles(
             _GCGraphIndex(self._pack_collection.signature_index.combined_index,
                 add_callback=self._pack_collection.signature_index.add_callback,
-                parents=False, is_locked=self.is_locked),
+                parents=False, is_locked=self.is_locked,
+                inconsistency_fatal=False),
             access=self._pack_collection.signature_index.data_access,
             delta=False)
         self.texts = GroupCompressVersionedFiles(
             _GCGraphIndex(self._pack_collection.text_index.combined_index,
                 add_callback=self._pack_collection.text_index.add_callback,
-                parents=True, is_locked=self.is_locked),
+                parents=True, is_locked=self.is_locked,
+                inconsistency_fatal=False),
             access=self._pack_collection.text_index.data_access)
         # No parents, individual CHK pages don't have specific ancestry
         self.chk_bytes = GroupCompressVersionedFiles(
             _GCGraphIndex(self._pack_collection.chk_index.combined_index,
                 add_callback=self._pack_collection.chk_index.add_callback,
-                parents=False, is_locked=self.is_locked),
+                parents=False, is_locked=self.is_locked,
+                inconsistency_fatal=False),
             access=self._pack_collection.chk_index.data_access)
         # True when the repository object is 'write locked' (as opposed to the
         # physical lock only taken out around changes to the pack-names list.)
 
=== modified file 'bzrlib/tests/test_config.py'
--- bzrlib/tests/test_config.py 2009-06-10 03:56:49 +0000
+++ bzrlib/tests/test_config.py 2009-06-29 15:35:12 +0000
@@ -18,7 +18,6 @@
 """Tests for finding and reading the bzr config file[s]."""
 # import system imports here
 from cStringIO import StringIO
-import getpass
 import os
 import sys
 
 
=== modified file 'bzrlib/tests/test_groupcompress.py'
--- bzrlib/tests/test_groupcompress.py 2009-06-22 18:30:08 +0000
+++ bzrlib/tests/test_groupcompress.py 2009-06-29 15:35:12 +0000
@@ -25,6 +25,7 @@
     index as _mod_index,
     osutils,
     tests,
+    trace,
     versionedfile,
     )
 from bzrlib.osutils import sha_string
@@ -474,11 +475,12 @@
 class TestCaseWithGroupCompressVersionedFiles(tests.TestCaseWithTransport):
 
     def make_test_vf(self, create_graph, keylength=1, do_cleanup=True,
-                     dir='.'):
+                     dir='.', inconsistency_fatal=True):
         t = self.get_transport(dir)
         t.ensure_base()
         vf = groupcompress.make_pack_factory(graph=create_graph,
-            delta=False, keylength=keylength)(t)
+            delta=False, keylength=keylength,
+            inconsistency_fatal=inconsistency_fatal)(t)
         if do_cleanup:
             self.addCleanup(groupcompress.cleanup_pack_group, vf)
         return vf
@@ -658,6 +660,47 @@
             frozenset([('parent-1',), ('parent-2',)]),
             index.get_missing_parents())
 
+    def make_source_with_b(self, a_parent, path):
+        source = self.make_test_vf(True, dir=path)
+        source.add_lines(('a',), (), ['lines\n'])
+        if a_parent:
+            b_parents = (('a',),)
+        else:
+            b_parents = ()
+        source.add_lines(('b',), b_parents, ['lines\n'])
+        return source
+
+    def do_inconsistent_inserts(self, inconsistency_fatal):
+        target = self.make_test_vf(True, dir='target',
+                                   inconsistency_fatal=inconsistency_fatal)
+        for x in range(2):
+            source = self.make_source_with_b(x==1, 'source%s' % x)
+            target.insert_record_stream(source.get_record_stream(
+                [('b',)], 'unordered', False))
+
+    def test_inconsistent_redundant_inserts_warn(self):
+        """Should not insert a record that is already present."""
+        warnings = []
+        def warning(template, args):
+            warnings.append(template % args)
+        _trace_warning = trace.warning
+        trace.warning = warning
+        try:
+            self.do_inconsistent_inserts(inconsistency_fatal=False)
+        finally:
+            trace.warning = _trace_warning
+        self.assertEqual(["inconsistent details in skipped record: ('b',)"
+                          " ('42 32 0 8', ((),)) ('74 32 0 8', ((('a',),),))"],
+                         warnings)
+
+    def test_inconsistent_redundant_inserts_raises(self):
+        e = self.assertRaises(errors.KnitCorrupt, self.do_inconsistent_inserts,
+                              inconsistency_fatal=True)
+        self.assertContainsRe(str(e), "Knit.* corrupt: inconsistent details"
+                              " in add_records:"
+                              " \('b',\) \('42 32 0 8', \(\(\),\)\) \('74 32"
+                              " 0 8', \(\(\('a',\),\),\)\)")
+
 
 class TestLazyGroupCompress(tests.TestCaseWithTransport):
 
 
=== modified file 'bzrlib/tests/test_repository.py'
--- bzrlib/tests/test_repository.py 2009-06-26 09:24:34 +0000
+++ bzrlib/tests/test_repository.py 2009-06-29 15:35:12 +0000
@@ -797,6 +797,14 @@
         self.assertEqual(257, len(full_chk_records))
         self.assertSubset(simple_chk_records, full_chk_records)
 
+    def test_inconsistency_fatal(self):
+        repo = self.make_repository('repo', format='2a')
+        self.assertTrue(repo.revisions._index._inconsistency_fatal)
+        self.assertFalse(repo.texts._index._inconsistency_fatal)
+        self.assertFalse(repo.inventories._index._inconsistency_fatal)
+        self.assertFalse(repo.signatures._index._inconsistency_fatal)
+        self.assertFalse(repo.chk_bytes._index._inconsistency_fatal)
+
 
 class TestKnitPackStreamSource(tests.TestCaseWithMemoryTransport):
 
 
=== modified file 'bzrlib/tests/test_ui.py'
--- bzrlib/tests/test_ui.py 2009-06-17 05:16:48 +0000
+++ bzrlib/tests/test_ui.py 2009-06-29 15:35:12 +0000
@@ -23,21 +23,15 @@
 import sys
 import time
 
-import bzrlib
-import bzrlib.errors as errors
+from bzrlib import (
+    errors,
+    tests,
+    ui as _mod_ui,
+    )
 from bzrlib.symbol_versioning import (
     deprecated_in,
     )
-from bzrlib.tests import (
-    TestCase,
-    TestUIFactory,
-    StringIOWrapper,
-    )
 from bzrlib.tests.test_progress import _TTYStringIO
-from bzrlib.ui import (
-    CLIUIFactory,
-    SilentUIFactory,
-    )
 from bzrlib.ui.text import (
     NullProgressView,
     TextProgressView,
@@ -45,10 +39,10 @@
     )
 
 
-class UITests(TestCase):
+class UITests(tests.TestCase):
 
     def test_silent_factory(self):
-        ui = SilentUIFactory()
+        ui = _mod_ui.SilentUIFactory()
         stdout = StringIO()
         self.assertEqual(None,
                          self.apply_redirected(None, stdout, stdout,
@@ -62,8 +56,9 @@
         self.assertEqual('', stdout.getvalue())
 
     def test_text_factory_ascii_password(self):
-        ui = TestUIFactory(stdin='secret\n', stdout=StringIOWrapper(),
-                           stderr=StringIOWrapper())
+        ui = tests.TestUIFactory(stdin='secret\n',
+                                 stdout=tests.StringIOWrapper(),
+                                 stderr=tests.StringIOWrapper())
         pb = ui.nested_progress_bar()
         try:
             self.assertEqual('secret',
@@ -84,9 +79,9 @@
         We can't predict what encoding users will have for stdin, so we force
         it to utf8 to test that we transport the password correctly.
         """
-        ui = TestUIFactory(stdin=u'baz\u1234'.encode('utf8'),
-                           stdout=StringIOWrapper(),
-                           stderr=StringIOWrapper())
+        ui = tests.TestUIFactory(stdin=u'baz\u1234'.encode('utf8'),
+                                 stdout=tests.StringIOWrapper(),
+                                 stderr=tests.StringIOWrapper())
         ui.stderr.encoding = ui.stdout.encoding = ui.stdin.encoding = 'utf8'
         pb = ui.nested_progress_bar()
         try:
@@ -194,11 +189,11 @@
         self.assertEqual('', factory.stdin.readline())
 
     def test_silent_ui_getbool(self):
-        factory = SilentUIFactory()
+        factory = _mod_ui.SilentUIFactory()
         self.assert_get_bool_acceptance_of_user_input(factory)
 
     def test_silent_factory_prompts_silently(self):
-        factory = SilentUIFactory()
+        factory = _mod_ui.SilentUIFactory()
         stdout = StringIO()
         factory.stdin = StringIO("y\n")
         self.assertEqual(True,
@@ -222,7 +217,8 @@
     def test_text_factory_prompts_and_clears(self):
         # a get_boolean call should clear the pb before prompting
         out = _TTYStringIO()
-        factory = TextUIFactory(stdin=StringIO("yada\ny\n"), stdout=out, stderr=out)
+        factory = TextUIFactory(stdin=StringIO("yada\ny\n"),
+                                stdout=out, stderr=out)
         pb = factory.nested_progress_bar()
         pb.show_bar = False
         pb.show_spinner = False
@@ -253,7 +249,7 @@
         pb.finished()
 
     def test_silent_ui_getusername(self):
-        factory = SilentUIFactory()
+        factory = _mod_ui.SilentUIFactory()
         factory.stdin = StringIO("someuser\n\n")
         factory.stdout = StringIO()
         factory.stderr = StringIO()
@@ -279,8 +275,9 @@
         self.assertEqual('', factory.stdin.readline())
 
     def test_text_ui_getusername_utf8(self):
-        ui = TestUIFactory(stdin=u'someuser\u1234'.encode('utf8'),
-                           stdout=StringIOWrapper(), stderr=StringIOWrapper())
+        ui = tests.TestUIFactory(stdin=u'someuser\u1234'.encode('utf8'),
+                                 stdout=tests.StringIOWrapper(),
+                                 stderr=tests.StringIOWrapper())
         ui.stderr.encoding = ui.stdout.encoding = ui.stdin.encoding = "utf8"
         pb = ui.nested_progress_bar()
         try:
@@ -295,7 +292,7 @@
         pb.finished()
 
 
-class TestTextProgressView(TestCase):
+class TestTextProgressView(tests.TestCase):
     """Tests for text display of progress bars.
     """
     # XXX: These might be a bit easier to write if the rendering and
Hi all,
There are several issues related to Launchpad's current problems with
pulling data from incompletely-reconciled repositories. For one thing,
we're incorrectly predicting which records need to be inserted. For
another, we're inserting records that we already have, which wastes disk
space, and causes us to raise exceptions if the record metadata differs.
This patch allows bzr to avoid installing any records which are already
present, which seems like the most robust fix. It would also be nice to
investigate why we're sending too many records.
Aaron