Merge lp:~jameinel/bzr-builddeb/unicode-author-508251 into lp:bzr-builddeb

Proposed by John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr-builddeb/unicode-author-508251
Merge into: lp:bzr-builddeb
Diff against target: 195 lines (+90/-8)
3 files modified
import_dsc.py (+2/-1)
tests/test_util.py (+63/-0)
util.py (+25/-7)
To merge this branch: bzr merge lp:~jameinel/bzr-builddeb/unicode-author-508251
Reviewer Review Type Date Requested Status
Bzr-builddeb-hackers Pending
Review via email: mp+19662@code.launchpad.net
Revision history for this message
John A Meinel (jameinel) wrote :

This is a basic fix for bug #508251. Specifically it:

1) Tries to decode using utf-8; if that fails, it falls back to iso-8859-1. For now it also mutters the string it failed to decode. (This might get a bit noisy, but it lets you know if there are issues with a given import.)

2) Applies this to both author decoding *and* to the commit message. I think the author stuff hid the fact that the commit message was also broken. Basically, find_extra_authors decodes everything before bzr was going to get a chance at it. And bzr was always decoding 'message' as bzrlib.user_encoding, which I assume was always utf-8 for the import machine. Arguably it was succeeding 'by accident', rather than by design.

3) Changes 'find_thanks()' to allow names to start with a Unicode character, rather than requiring strictly A-Z. If you want, I can bring back "author[0].isupper()" or something like that. Looking at the regex, if I said "Thanks to my cat" it seems reasonable to have 'deb-thanks': ['my cat'] even though it wasn't "Mr Cat". The "Thanks to" and "thank you" seem to be a decent filter, without having to worry about the exact name. If you want this changed to something else, just let me know.
I can restore the original behavior and change the tests, but it seemed reasonable to allow non-ascii as the first letter of someone's name. Given this changelog entry:
    - Translators: Vital Khilko (be), Vladimir Petkov (bg), Hendrik
      Brandt (de), Kostas Papadimas (el), Adam Weinberger (en_CA), Francisco
      Javier F. Serrador (es), Ilkka Tuohela (fi), Ignacio Casal Quinteiro
      (gl), Ankit Patel (gu), Luca Ferretti (it), Takeshi AIHANA (ja),
      Žygimantas Beručka (lt), Øivind Hoel (nb), Reinout van Schouwen (nl),
      Øivind Hoel (no), Evandro Fernandes Giovanini (pt_BR), Слободан Д.
      Средојевић (sr), Theppitak Karoonboonyanan (th), Clytie Siddall (vi),
      Funda Wang (zh_CN)

At least three of those people have non-ASCII first letters (Žygimantas, Øivind, etc.).

4) I also made sure to run this locally against 'gnome-panel', which was one of the failing imports. It has certainly gotten a lot farther, and I've checked that it has run into a few of these mixed-encoding sections. Note that this assumes each changelog block uses a single encoding (for the purposes of the commit message), but that seems reasonable: dapper/debian/changelog switches back and forth between iso-8859-1 in some blocks and utf-8 in others.
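
The fallback in (1) and the relaxed matching in (3) can be sketched as follows. This is a modern-Python sketch with a deliberately simplified "thanks" pattern, not the patch's full find_thanks regex:

```python
import re

def safe_decode(raw):
    """Decode bytes, trying UTF-8 first and falling back to ISO-8859-1."""
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        # ISO-8859-1 accepts any byte, so this decode cannot raise.
        return raw.decode('iso-8859-1')

# Valid UTF-8 decodes as UTF-8; invalid UTF-8 falls back to ISO-8859-1.
assert safe_decode(b'\xc3\xa7') == u'\xe7'  # UTF-8 c-cedilla
assert safe_decode(b'\xe7') == u'\xe7'      # ISO-8859-1 c-cedilla

# With \w and re.UNICODE instead of a strict [A-Z], a thanked name may
# start with any letter, including a non-ASCII capital.
thanks_re = re.compile(r"[tT]hanks?\s+to\s+(\w+)", re.UNICODE)
match = thanks_re.search(safe_decode(b'Thanks to \xc5\xbdygimantas'))
assert match.group(1) == u'\u017dygimantas'
```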

Revision history for this message
James Westby (james-w) wrote :

On Thu, 18 Feb 2010 22:17:12 -0000, John A Meinel <email address hidden> wrote:
> This is a basic fix for bug #508251. Specifically it:

Thanks.

> 1) Tries to decode using utf-8; if that fails, it falls back to iso-8859-1. For now it also mutters the string it failed to decode. (This might get a bit noisy, but it lets you know if there are issues with a given import.)

It will still cause failures if it can't be decoded as
iso-8859-1 either; is that what we want at this stage?

> 4) I also made sure to run this locally against 'gnome-panel', which was one of the failing imports. It has certainly gotten a lot farther, and I've checked that it has run into a few of these mixed-encoding sections. Note that this assumes each changelog block uses a single encoding (for the purposes of the commit message), but that seems reasonable: dapper/debian/changelog switches back and forth between iso-8859-1 in some blocks and utf-8 in others.

Thanks, I'll apply this once you tell me that this test didn't discover
any problems with the change (it obviously isn't blocked on any other
issues that might be found.)

Thanks,

James

Revision history for this message
John A Meinel (jameinel) wrote :


James Westby wrote:
> On Thu, 18 Feb 2010 22:17:12 -0000, John A Meinel <email address hidden> wrote:
>> This is a basic fix for bug #508251. Specifically it:
>
> Thanks.
>
>> 1) Tries to decode using utf-8; if that fails, it falls back to iso-8859-1. For now it also mutters the string it failed to decode. (This might get a bit noisy, but it lets you know if there are issues with a given import.)
>
> It will still cause failures if it can't be decoded as
> iso-8859-1 either; is that what we want at this stage?

iso-8859-1 can decode all possible 8-bit sequences. Possibly
incorrectly, but every byte maps to a Unicode code point in iso-8859-1.
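
This property is easy to verify; a quick sketch:

```python
# ISO-8859-1 maps each byte 0x00-0xFF directly to the code point
# U+0000-U+00FF, so decoding any byte string always succeeds.
all_bytes = bytes(range(256))
text = all_bytes.decode('iso-8859-1')
assert len(text) == 256
assert all(ord(ch) == i for i, ch in enumerate(text))
```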

>
>> 4) I also made sure to run this locally against 'gnome-panel', which was one of the failing imports. It has certainly gotten a lot farther, and I've checked that it has run into a few of these mixed-encoding sections. Note that this assumes each changelog block uses a single encoding (for the purposes of the commit message), but that seems reasonable: dapper/debian/changelog switches back and forth between iso-8859-1 in some blocks and utf-8 in others.
>
> Thanks, I'll apply this once you tell me that this test didn't discover
> any problems with the change (it obviously isn't blocked on any other
> issues that might be found.)
>
> Thanks,
>
> James
>

The import succeeded. I don't have a way to tell the fidelity of the
result, etc.

I'm slightly concerned that a new import will give different results to
an old import (based on now finding an author that wasn't found before).
But I don't think the import system uses deterministic ids, so it should
be fine.

John
=:->

Revision history for this message
James Westby (james-w) wrote :

On Fri, 19 Feb 2010 08:40:47 -0600, John Arbash Meinel <email address hidden> wrote:
> iso-8859-1 can decode all possible 8-bit sequences. Possibly
> incorrectly, but every byte maps to a Unicode code point in iso-8859-1.

Ok, so we'll have nonsense on occasion, but no failures. Sounds
reasonable to me.
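
For illustration, the "nonsense but no failures" case looks like this, using Windows-1252 curly quotes as a hypothetical input (an assumption chosen for the example, not taken from the actual imports):

```python
# 0x93/0x94 are curly quotes in Windows-1252. They are invalid as UTF-8,
# so the fallback kicks in -- and ISO-8859-1 silently maps them to C1
# control characters rather than the quotes the author intended.
raw = b'\x93quoted\x94'
try:
    raw.decode('utf-8')
    assert False, 'expected UnicodeDecodeError'
except UnicodeDecodeError:
    pass
decoded = raw.decode('iso-8859-1')
assert decoded == u'\x93quoted\x94'  # decodes, but to the wrong characters
```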

> The import succeeded. I don't have a way to tell the fidelity of the
> result, etc.

> I'm slightly concerned that a new import will give different results to
> an old import (based on now finding an author that wasn't found before).
> But I don't think the import system uses deterministic ids, so it should
> be fine.

It doesn't use deterministic ids. I don't think it matters much that
there will be differences; there's not a lot we can do about that.

Thanks,

James

Revision history for this message
James Westby (james-w) wrote :

Oh, I'll merge this once I've finished the change I'm currently working
on.

Thanks,

James

Preview Diff

=== modified file 'import_dsc.py'
--- import_dsc.py	2010-02-12 19:58:29 +0000
+++ import_dsc.py	2010-02-18 22:17:11 +0000
@@ -76,6 +76,7 @@
     get_snapshot_revision,
     open_file_via_transport,
     open_transport,
+    safe_decode,
     subprocess_setup,
     )
 
@@ -1251,7 +1252,7 @@
         time_tuple = rfc822.parsedate_tz(raw_timestamp)
         if time_tuple is not None:
             timestamp = (time.mktime(time_tuple[:9]), time_tuple[9])
-        author = cl.author.decode("utf-8")
+        author = safe_decode(cl.author)
         versions = self._get_safe_versions_from_changelog(cl)
         assert not self.has_version(version), \
             "Trying to import version %s again" % str(version)

=== modified file 'tests/test_util.py'
--- tests/test_util.py	2010-02-12 16:41:15 +0000
+++ tests/test_util.py	2010-02-18 22:17:11 +0000
@@ -45,6 +45,7 @@
     move_file_if_different,
     get_parent_dir,
     recursive_copy,
+    safe_decode,
     strip_changelog_message,
     suite_to_distribution,
     tarball_name,
@@ -84,6 +85,19 @@
         self.failUnlessExists('a/f')
 
 
+class SafeDecodeTests(TestCase):
+
+    def assertSafeDecode(self, expected, val):
+        self.assertEqual(expected, safe_decode(val))
+
+    def test_utf8(self):
+        self.assertSafeDecode(u'ascii', 'ascii')
+        self.assertSafeDecode(u'\xe7', '\xc3\xa7')
+
+    def test_iso_8859_1(self):
+        self.assertSafeDecode(u'\xe7', '\xe7')
+
+
 cl_block1 = """\
 bzr-builddeb (0.17) unstable; urgency=low
 
@@ -467,6 +481,22 @@
         self.assertEqual([u"A. Hacker", u"B. Hacker"], authors)
         self.assertEqual([unicode]*len(authors), map(type, authors))
 
+    def test_find_extra_authors_utf8(self):
+        changes = ["  * Do foo", "", "  [ \xc3\xa1. Hacker ]", "  * Do bar", "",
+                   "  [ \xc3\xa7. Hacker ]", "  [ A. Hacker}"]
+        authors = find_extra_authors(changes)
+        self.assertEqual([u"\xe1. Hacker", u"\xe7. Hacker"], authors)
+        self.assertEqual([unicode]*len(authors), map(type, authors))
+
+    def test_find_extra_authors_iso_8859_1(self):
+        # We try to treat lines as utf-8, but if that fails to decode, we fall
+        # back to iso-8859-1
+        changes = ["  * Do foo", "", "  [ \xe1. Hacker ]", "  * Do bar", "",
+                   "  [ \xe7. Hacker ]", "  [ A. Hacker}"]
+        authors = find_extra_authors(changes)
+        self.assertEqual([u"\xe1. Hacker", u"\xe7. Hacker"], authors)
+        self.assertEqual([unicode]*len(authors), map(type, authors))
+
     def test_find_extra_authors_no_changes(self):
         authors = find_extra_authors([])
         self.assertEqual([], authors)
@@ -504,6 +534,8 @@
         self.assert_thanks_is(changes, [u"A. Hacker <ahacker@example.com>"])
         changes = ["  * Thanks to Adeodato Sim\xc3\x83\xc2\xb3"]
         self.assert_thanks_is(changes, [u"Adeodato Sim\xc3\xb3"])
+        changes = ["  * Thanks to \xc3\x81deodato Sim\xc3\x83\xc2\xb3"]
+        self.assert_thanks_is(changes, [u"\xc1deodato Sim\xc3\xb3"])
 
     def test_find_bugs_fixed_no_changes(self):
         self.assertEqual([], find_bugs_fixed([], None, _lplib=MockLaunchpad()))
@@ -582,6 +614,37 @@
         self.assertEqual(find_bugs_fixed(changes, wt.branch,
                 _lplib=MockLaunchpad()), bugs)
 
+    def assertUnicodeCommitInfo(self, changes):
+        wt = self.make_branch_and_tree(".")
+        changelog = Changelog()
+        author = "J. Maintainer <maint@example.com>"
+        changelog.new_block(changes=changes, author=author)
+        message, authors, thanks, bugs = \
+            get_commit_info_from_changelog(changelog, wt.branch,
+                _lplib=MockLaunchpad())
+        self.assertEqual(u'[ \xc1. Hacker ]\n'
+                         u'* First ch\xe1nge, LP: #12345\n'
+                         u'* Second change, thanks to \xde. Hacker',
+                         message)
+        self.assertEqual([author, u'\xc1. Hacker'], authors)
+        self.assertEqual(unicode, type(authors[0]))
+        self.assertEqual([u'\xde. Hacker'], thanks)
+        self.assertEqual(['https://launchpad.net/bugs/12345 fixed'], bugs)
+
+    def test_get_commit_info_utf8(self):
+        changes = ["  [ \xc3\x81. Hacker ]",
+                   "  * First ch\xc3\xa1nge, LP: #12345",
+                   "  * Second change, thanks to \xc3\x9e. Hacker"]
+        self.assertUnicodeCommitInfo(changes)
+
+    def test_get_commit_info_iso_8859_1(self):
+        # Changelogs aren't always well-formed UTF-8, so we fall back to
+        # iso-8859-1 if we fail to decode utf-8.
+        changes = ["  [ \xc1. Hacker ]",
+                   "  * First ch\xe1nge, LP: #12345",
+                   "  * Second change, thanks to \xde. Hacker"]
+        self.assertUnicodeCommitInfo(changes)
+
 
 class MockLaunchpad(object):
 
 
=== modified file 'util.py'
--- util.py	2010-02-10 13:43:44 +0000
+++ util.py	2010-02-18 22:17:11 +0000
@@ -56,6 +56,24 @@
     )
 
 
+def safe_decode(s):
+    """Decode a string into a Unicode value."""
+    if isinstance(s, unicode):  # Already unicode
+        mutter('safe_decode() called on an already-unicode string: %r' % (s,))
+        return s
+    try:
+        return s.decode('utf-8')
+    except UnicodeDecodeError, e:
+        mutter('safe_decode(%r) falling back to iso-8859-1' % (s,))
+        # TODO: Looking at BeautifulSoup it seems to use 'chardet' to try to
+        #       guess the encoding of a given text stream. We might want to
+        #       take a closer look at that.
+        # TODO: Another possibility would be to make the fallback encoding
+        #       configurable, possibly exposed as a command-line flag, for now,
+        #       this seems 'good enough'.
+        return s.decode('iso-8859-1')
+
+
 def recursive_copy(fromdir, todir):
     """Copy the contents of fromdir to todir.
 
@@ -392,13 +410,13 @@
 
 
 def find_extra_authors(changes):
-    extra_author_re = re.compile(r"\s*\[([^\]]+)]\s*", re.UNICODE)
+    extra_author_re = re.compile(r"\s*\[([^\]]+)]\s*")
     authors = []
     for change in changes:
         # Parse out any extra authors.
-        match = extra_author_re.match(change.decode("utf-8"))
+        match = extra_author_re.match(change)
         if match is not None:
-            new_author = match.group(1).strip()
+            new_author = safe_decode(match.group(1).strip())
             already_included = False
             for author in authors:
                 if author.startswith(new_author):
@@ -411,11 +429,11 @@
 
 def find_thanks(changes):
     thanks_re = re.compile(r"[tT]hank(?:(?:s)|(?:you))(?:\s*to)?"
-            "((?:\s+(?:(?:[A-Z]\.)|(?:[A-Z]\w+(?:-[A-Z]\w+)*)))+"
+            "((?:\s+(?:(?:\w\.)|(?:\w+(?:-\w+)*)))+"
             "(?:\s+<[^@>]+@[^@>]+>)?)",
             re.UNICODE)
     thanks = []
-    changes_str = " ".join(changes).decode("utf-8")
+    changes_str = safe_decode(" ".join(changes))
     for match in thanks_re.finditer(changes_str):
         if thanks is None:
             thanks = []
@@ -446,12 +464,12 @@
     bugs = []
     if changelog._blocks:
         block = changelog._blocks[0]
-        authors = [block.author.decode("utf-8")]
+        authors = [safe_decode(block.author)]
         changes = strip_changelog_message(block.changes())
         authors += find_extra_authors(changes)
         bugs = find_bugs_fixed(changes, branch, _lplib=_lplib)
         thanks = find_thanks(changes)
-        message = "\n".join(changes).replace("\r", "")
+        message = safe_decode("\n".join(changes).replace("\r", ""))
     return (message, authors, thanks, bugs)
