Merge lp:~chromium-team/chromium-browser/chromium-translations-tools.head into lp:chromium-browser

Proposed by Samuel Ramos
Status: Rejected
Rejected by: Nathan Teodosio
Proposed branch: lp:~chromium-team/chromium-browser/chromium-translations-tools.head
Merge into: lp:chromium-browser
Diff against target: 3433 lines (+3409/-0)
5 files modified
chromium2pot.py (+2610/-0)
create-patches.sh (+185/-0)
desktop2gettext.py (+378/-0)
update-inspector.py (+149/-0)
update-pot.sh (+87/-0)
To merge this branch: bzr merge lp:~chromium-team/chromium-browser/chromium-translations-tools.head
Reviewer Review Type Date Requested Status
Chromium team Pending
Review via email: mp+461443@code.launchpad.net

Unmerged revisions

121. By Chad Miller

Handle GRD partial files.

Ignore "external" references, which are usually images.

120. By Ken VanDine

Handle the latest grd format

119. By Micah Gersten

* Temporarily work around multiple bg locales in generated_resources

118. By Cris Dywan

* Add a temporary workaround for a new type that is not yet used

117. By Fabien Tassin

* Add some helper scripts

116. By Fabien Tassin

* When updating common.gypi, fold 'locales' by size (instead of by groups of 10)
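
A size-based fold like the one this revision describes can be sketched as follows. This is an illustrative helper under assumed behavior (greedily pack locale names onto lines up to a width limit), not the actual update script:

```python
def fold_by_size(items, width=76):
    """Greedily pack items onto space-joined lines no wider than `width`."""
    lines, cur = [], ""
    for item in items:
        cand = item if not cur else cur + " " + item
        if cur and len(cand) > width:
            lines.append(cur)
            cur = item
        else:
            cur = cand
    if cur:
        lines.append(cur)
    return lines
```

Folding by size instead of by groups of 10 keeps each generated line near the width limit, so adding one locale no longer reflows every following group in common.gypi.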

115. By Fabien Tassin

* Fix a regression introduced by the new fake-bidi pseudo locale
  (see https://sites.google.com/a/chromium.org/dev/Home/fake-bidi and
   http://code.google.com/p/chromium/issues/detail?id=73052)

114. By Fabien Tassin

* Add a --map-template-names knob to handle renamed templates
  in some branches

113. By Fabien Tassin

* Add support for 'string-enum' and 'int-enum' outside of 'group' policies
  (needed since http://codereview.chromium.org/7287001/ landed)

112. By Fabien Tassin

* Move all new xtb files to third_party/launchpad_translations (relative to $SRC)

Preview Diff

=== added file 'chromium2pot.py'
--- chromium2pot.py 1970-01-01 00:00:00 +0000
+++ chromium2pot.py 2024-02-28 12:58:47 +0000
@@ -0,0 +1,2610 @@
1#!/usr/bin/python
2# -*- coding: utf-8 -*-
3
4# (c) 2010-2011, Fabien Tassin <fta@ubuntu.com>
5
6# Convert grd/xtb files into pot/po for integration into the Launchpad
7# translation system
8
9## grd files contain the strings for the 'pot' file(s).
10## Keys are alphabetical (IDS_XXX).
11# Sources:
12# - $SRC/chrome/app/*.grd
13# - $SRC/webkit/glue/*.grd
14
15## xtb files are referenced by the grd files. They contain the translated
16## strings for our 'po' files. Keys are numerical (64-bit ids).
17# Sources:
18# - $SRC/chrome/app/resources/*.xtb
19# - $SRC/webkit/glue/resources/*.xtb
20# and for launchpad contributed strings that already landed:
21# - $SRC/third_party/launchpad_translations/*.xtb
22
23## the mapping between those keys is done using FingerPrint()
24## [ taken from grit ] on a stripped version of the untranslated string
25
26## grd files contain a lot of <if expr="..."> (python-like) conditions.
27## Evaluate those expressions but only skip strings with a lang restriction.
28## For all other conditions (os, defines), simply expose them so translators
29## know when a given string is expected.
30
31## TODO: handle <message translateable="false">
32
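
The condition handling described above can be sketched in a few lines (a minimal Python 3 illustration of the approach; the full implementation is the EvalConditions class below):

```python
import re

def eval_grd_condition(expr, defines=("use_titlecase",), lang="fr", os_name="linux2"):
    """Evaluate a grd <if expr="..."> condition.

    pp_ifdef('x') is textually replaced by True/False depending on `defines`,
    then the remaining python-like expression is evaluated with 'lang' and
    'os' bound, mirroring EvalConditions.eval below.
    """
    expr = re.sub(r"pp_ifdef\('(.*?)'\)", lambda m: str(m.group(1) in defines), expr)
    return eval(expr, {"__builtins__": {}}, {"lang": lang, "os": os_name})
```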
33import os, sys, shutil, re, getopt, codecs, urllib
34from xml.dom import minidom
35from xml.sax.saxutils import unescape
36from datetime import datetime
37from difflib import unified_diff
38import textwrap, filecmp, json
39
40lang_mapping = {
41 'no': 'nb', # 'no' is obsolete and the more specific 'nb' (Norwegian Bokmal)
42 # and 'nn' (Norwegian Nynorsk) are preferred.
43 'pt-PT': 'pt'
44}
45
46####
47# vanilla from $SRC/tools/grit/grit/extern/FP.py (r10982)
48# See svn log http://src.chromium.org/svn/trunk/src/tools/grit/grit/extern/FP.py
49
50# Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
51# Use of this source code is governed by a BSD-style license that can be
52# found in the LICENSE file.
53
54try:
55 import hashlib
56 _new_md5 = hashlib.md5
57except ImportError:
58 import md5
59 _new_md5 = md5.new
60
61def UnsignedFingerPrint(str, encoding='utf-8'):
62 """Generate a 64-bit fingerprint by taking the first half of the md5
63 of the string."""
64 hex128 = _new_md5(str).hexdigest()
65 int64 = long(hex128[:16], 16)
66 return int64
67
68def FingerPrint(str, encoding='utf-8'):
69 fp = UnsignedFingerPrint(str, encoding=encoding)
70 # interpret fingerprint as signed longs
71 if fp & 0x8000000000000000L:
72 fp = - ((~fp & 0xFFFFFFFFFFFFFFFFL) + 1)
73 return fp
74####
75
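
For reference, the same fingerprint can be computed in Python 3, where the `long` type and the `L` literal suffix used above no longer exist. This is an illustrative re-implementation of the scheme, not part of the proposed branch:

```python
import hashlib

def fingerprint(s: str) -> int:
    """First 64 bits of the md5 digest, reinterpreted as a signed integer
    (the same scheme as grit's FP.py above)."""
    unsigned = int(hashlib.md5(s.encode("utf-8")).hexdigest()[:16], 16)
    # Treat the top bit as a two's-complement sign bit.
    if unsigned & 0x8000000000000000:
        return unsigned - 0x10000000000000000
    return unsigned
```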
76class EvalConditions:
 77 """ A class allowing an <if expr="xx"/> to be evaluated, based on a list of defines
 78 and a dict of local variables.
79 As of Chromium 10:
80 - the known defines are:
81 [ 'chromeos', '_google_chrome', 'toolkit_views', 'touchui', 'use_titlecase' ]
82 On Linux, only [ 'use_titlecase' ] is set.
83 - the known variables are:
84 'os' ('linux2' on Linux)
85 'lang'
86 See http://src.chromium.org/svn/trunk/src/build/common.gypi
87 """
88
89 def eval(self, expression, defines = [ 'use_titlecase' ], vars = { 'os': "linux2" }):
90
91 def pp_ifdef(match):
92 return str(match.group(1) in defines)
93
94 # evaluate all ifdefs
95 expression = re.sub(r"pp_ifdef\('(.*?)'\)", pp_ifdef, expression)
96 # evaluate the whole expression using the vars dict
97 vars['__builtins__'] = { 'True': True, 'False': False } # prevent eval from using the real current globals
98 return eval(expression, vars)
99
100 def lang_eval(self, expression, lang):
101 """ only evaluate the expression against the lang, ignore all defines and other variables.
102 This is needed to ignore a string that has lang restrictions (numerals, plurals, ..) but
 103 still keep it even if its OS or defines don't match the local platform.
104 """
105 conditions = [ x for x in re.split(r'\s+(and|or)\s+', expression) if x.find('lang') >= 0 ]
106 if len(conditions) == 0:
107 return True
108 assert len(conditions) == 1, "Expression '%s' has multiple lang conditions" % expression
109 vars = { 'lang': lang, '__builtins__': { 'True': True, 'False': False } }
110 return eval(conditions[0], vars)
111
112 def test(self):
113 data = [
114 { 'expr': "lang == 'ar'",
115 'vars': { 'lang': 'ar' },
116 'result': True
117 },
118 { 'expr': "lang == 'ar'",
119 'vars': { 'lang': 'fr' },
120 'result': False
121 },
122 { 'expr': "lang in ['ar', 'ro', 'lv']",
123 'vars': { 'lang': 'ar' },
124 'result': True
125 },
126 { 'expr': "lang in ['ar', 'ro', 'lv']",
127 'vars': { 'lang': 'pt-BR' },
128 'result': False
129 },
130 { 'expr': "lang not in ['ar', 'ro', 'lv']",
131 'vars': { 'lang': 'ar' },
132 'result': False
133 },
134 { 'expr': "lang not in ['ar', 'ro', 'lv']",
135 'vars': { 'lang': 'no' },
136 'result': True
137 },
138 { 'expr': "os != 'linux2' and os != 'darwin' and os.find('bsd') == -1",
139 'vars': { 'lang': 'no', 'os': 'bsdos' },
140 'result': False,
141 'lresult': True # no lang restriction in 'expr', so 'no' is ok
142 },
143 { 'expr': "os != 'linux2' and os != 'darwin' and os.find('bsd') > -1",
144 'vars': { 'lang': 'no', 'os': 'bsdos' },
145 'result': True,
146 },
147 { 'expr': "not pp_ifdef('chromeos')",
148 'vars': { 'lang': 'no' },
149 'defines': [],
150 'result': True,
151 },
152 { 'expr': "not pp_ifdef('chromeos')",
153 'vars': { 'lang': 'no' },
154 'defines': [ 'chromeos' ],
155 'result': False,
156 'lresult': True # no lang restriction in 'expr', so 'no' is ok
157 },
158 { 'expr': "pp_ifdef('_google_chrome') and (os == 'darwin')",
159 'vars': { 'lang': 'no', 'os': 'linux2' },
160 'defines': [ 'chromeos' ],
161 'result': False,
162 'lresult': True # no lang restriction in 'expr', so 'no' is ok
163 },
164 { 'expr': "pp_ifdef('_google_chrome') and (os == 'darwin')",
165 'vars': { 'lang': 'no', 'os': 'darwin' },
166 'defines': [ '_google_chrome' ],
167 'result': True
168 },
169 { 'expr': "not pp_ifdef('chromeos') and pp_ifdef('_google_chrome') and 'pt-PT' == lang",
170 'vars': { 'lang': 'pt-PT', 'os': 'darwin' },
171 'defines': [ '_google_chrome' ],
172 'result': True
173 },
174 { 'expr': "not pp_ifdef('chromeos') and pp_ifdef('_google_chrome') and 'pt-PT' == lang",
175 'vars': { 'lang': 'pt-PT', 'os': 'darwin' },
176 'defines': [ ],
177 'result': False,
178 'lresult': True
179 },
180 ]
181 i = -1
182 for d in data:
183 i += 1
184 defines = d['defines'] if 'defines' in d else []
185 vars = d['vars'] if 'vars' in d else {}
186 lvars = vars.copy() # make a copy because eval modifies it
187 res = self.eval(d['expr'], defines = defines, vars = lvars)
188 assert res == d['result'], "FAILED %d: expr: \"%s\" returned %s with vars = %s and defines = %s" % \
189 (i, d['expr'], repr(res), repr(vars), repr(defines))
190 print "All %d tests passed for EvalConditions.eval()" % (i + 1)
191 i = -1
192 for d in data:
193 i += 1
 194 assert 'lang' in d['vars'], "All tests must have a 'lang' in 'vars', test %d doesn't: %s" % (i, repr(d))
195 res = self.lang_eval(d['expr'], lang = d['vars']['lang'])
196 expected = d['lresult'] if 'lresult' in d else d['result']
197 assert res == expected, "FAILED %d: expr: \"%s\" returned %s with lang = %s for the lang_eval test" % \
198 (i, d['expr'], repr(res), d['vars']['lang'])
199 print "All %d tests passed for EvalConditions.lang_eval()" % (i + 1)
200
201class StringCvt:
202 """ A class converting grit formatted strings to gettext back and forth.
203 The idea is to always have:
204 a/ grd2gettext(xtb2gettext(s)) == s
205 b/ xtb2gettext(s) produces a string that the msgfmt checker likes and
206 that makes sense to translators
207 c/ grd2gettext(s) produces a string acceptable by upstream
208 """
209
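
Property a/ can be illustrated on the placeholder mapping alone, using the same regexes as the class (a minimal sketch, not the full conversion):

```python
import re

def ph_to_gettext(s):
    # '<ph name="FOO"/>' -> '%{FOO}'
    return re.sub(r'<ph name="(.*?)"/>', r'%{\1}', s)

def gettext_to_ph(s):
    # '%{FOO}' -> '<ph name="FOO"/>'
    return re.sub(r'%{(.*?)}', r'<ph name="\1"/>', s)
```

Round-tripping a string through both directions returns the original, which is what keeps Launchpad po exports mergeable back into upstream xtb files.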
210 def xtb2gettext(self, string):
211 """ parse the xtb (xml encoded) string and convert it to a gettext string """
212
213 def fold(string):
214 return textwrap.wrap(string, break_long_words=False, width=76, drop_whitespace=False,
215 expand_tabs=False, replace_whitespace=False, break_on_hyphens=False)
216
217 s = string.replace('\\n', '\\\\n')
218 # escape all single '\' (not followed by 'n')
219 s = re.sub(r'(?<!\\)(\\[^n\\\\])', r'\\\1', s)
220 # remove all xml encodings
221 s = self.unescape_xml(s)
222 # replace '<ph name="FOO"/>' by '%{FOO}'
223 s = re.sub(r'<ph name="(.*?)"/>', r'%{\1}', s)
224 # fold
225 # 1/ fold at \n
226 # 2/ fold each part at ~76 char
227 v = []
228 ll = s.split('\n')
229 sz = len(ll)
230 if sz > 1:
231 i = 0
232 for l in ll:
233 i += 1
234 if i == sz:
235 v.extend(fold(l))
236 else:
237 v.extend(fold(l + '\\n'))
238 else:
239 v.extend(fold(ll[0]))
240 if len(v) > 1:
241 v[:0] = [ '' ]
242 s = '"' + '"\n"'.join(v) + '"'
243 return s
244
245 def decode_xml_entities(self, string):
246 def replace_xmlent(match):
247 if match.group(1)[:1] == 'x':
248 return unichr(int("0" + match.group(1), 16))
249 else:
250 return unichr(int(match.group(1)))
251
252 return re.sub(r'&#(x\w+|\d+);', replace_xmlent, string)
253
254 def unescape_xml(self, string):
255 string = unescape(string).replace('&quot;', '\\"').replace('&apos;', "'")
256 string = self.decode_xml_entities(string)
257 return string
258
259 def grd2gettext(self, string):
260 """ parse the string returned from minidom and convert it to a gettext string.
 261 This is similar to xtb2gettext but minidom has its own magic for encoding
262 """
263 return self.xtb2gettext(string)
264
265 def gettext2xtb(self, string):
266 """ parse the gettext string and convert it to an xtb (xml encoded) string. """
267 u = []
268 for s in string.split(u'\n'):
269 # remove the enclosing double quotes
270 u.append(s[1:][:-1])
271 s = u"".join(u)
272
273 # encode the xml special chars
274 s = s.replace("&", "&amp;") # must be first!
275 s = s.replace("<", "&lt;")
276 s = s.replace(">", "&gt;")
277 s = s.replace('\\"', "&quot;")
278 # special case, html comments
 279 s = re.sub(r'&lt;!--(.*?)--&gt;', r'<!--\1-->', s, flags=re.S)
280 # replace non-ascii by &#xxx; codes
281 # s = s.encode("ascii", "xmlcharrefreplace")
282 # replace '%{FOO}' by '<ph name="FOO"/>'
283 s = re.sub(r'%{(.*?)}', r'<ph name="\1"/>', s)
284 # unquote \\n and \\\\n
285 s = re.sub(r'(?<!\\)\\n', r'\n', s)
286 # unquote all control chars
287 s = re.sub(r'\\\\([^\\])', r'\\\1', s)
288
289 # launchpad seems to always quote tabs
290 s = s.replace("\\t", "\t")
291 return s
292
293 def test(self):
294 # unit tests
295 data = [
296 # tab
297 { 'id': '0',
298 'xtb': u'foo bar',
299 'po': u'"foo bar"' },
300 { 'id': '1',
301 'xtb': u'foo\tbar',
302 'po': u'"foo\tbar"' },
303 # &amp;
304 { 'id': '6779164083355903755',
305 'xtb': u'Supprime&amp;r',
306 'po': u'"Supprime&r"' },
307 # &quot;
308 { 'id': '4194570336751258953',
309 'xtb': u'Activer la fonction &quot;taper pour cliquer&quot;',
310 'po': u'"Activer la fonction \\"taper pour cliquer\\""' },
311 # &lt; / &gt;
312 { 'id': '7615851733760445951',
313 'xtb': u'&lt;aucun cookie sélectionné&gt;',
314 'po': u'"<aucun cookie sélectionné>"' },
315 # <ph name="FOO"/>
316 { 'id': '5070288309321689174',
317 'xtb': u'<ph name="EXTENSION_NAME"/> :',
318 'po': u'"%{EXTENSION_NAME} :"' },
319 { 'id': '1467071896935429871',
320 'xtb': u'Téléchargement de la mise à jour du système : <ph name="PERCENT"/>% terminé',
321 'po': u'"Téléchargement de la mise à jour du système : %{PERCENT}% terminé"' },
322 # line folding
323 { 'id': '1526811905352917883',
324 'xtb': u'Une nouvelle tentative de connexion avec SSL 3.0 a dû être effectuée. Cette opération indique généralement que le serveur utilise un logiciel très ancien et qu\'il est susceptible de présenter d\'autres problèmes de sécurité.',
325 'po': u'""\n"Une nouvelle tentative de connexion avec SSL 3.0 a dû être effectuée. Cette "\n"opération indique généralement que le serveur utilise un logiciel très "\n"ancien et qu\'il est susceptible de présenter d\'autres problèmes de sécurité."' },
326 { 'id': '7999229196265990314',
327 'xtb': u'Les fichiers suivants ont été créés :\n\nExtension : <ph name="EXTENSION_FILE"/>\nFichier de clé : <ph name="KEY_FILE"/>\n\nConservez votre fichier de clé en lieu sûr. Vous en aurez besoin lors de la création de nouvelles versions de l\'extension.',
328 'po': u'""\n"Les fichiers suivants ont été créés :\\n"\n"\\n"\n"Extension : %{EXTENSION_FILE}\\n"\n"Fichier de clé : %{KEY_FILE}\\n"\n"\\n"\n"Conservez votre fichier de clé en lieu sûr. Vous en aurez besoin lors de la "\n"création de nouvelles versions de l\'extension."' },
329 # quoted LF
330 { 'id': '4845656988780854088',
331 'xtb': u'Synchroniser uniquement les paramètres et\\ndonnées qui ont changé depuis la dernière connexion\\n(requiert votre mot de passe précédent)',
332 'po': u'""\n"Synchroniser uniquement les paramètres et\\\\ndonnées qui ont changé depuis la"\n" dernière connexion\\\\n(requiert votre mot de passe précédent)"' },
333 { 'id': '1761265592227862828', # lang: 'el'
334 'xtb': u'Συγχρονισμός όλων των ρυθμίσεων και των δεδομένων\\n (ενδέχεται να διαρκέσει ορισμένο χρονικό διάστημα)',
335 'po': u'""\n"Συγχρονισμός όλων των ρυθμίσεων και των δεδομένων\\\\n (ενδέχεται να διαρκέσει"\n" ορισμένο χρονικό διάστημα)"' },
336 { 'id': '1768211415369530011', # lang: 'de'
337 'xtb': u'Folgende Anwendung wird gestartet, wenn Sie diese Anforderung akzeptieren:\\n\\n <ph name="APPLICATION"/>',
338 'po': u'""\n"Folgende Anwendung wird gestartet, wenn Sie diese Anforderung "\n"akzeptieren:\\\\n\\\\n %{APPLICATION}"' },
339 # weird controls
340 { 'id': '5107325588313356747', # lang: 'es-419'
341 'xtb': u'Para ocultar el acceso a este programa, debes desinstalarlo. Para ello, utiliza\\n<ph name="CONTROL_PANEL_APPLET_NAME"/> del Panel de control.\\n\¿Deseas iniciar <ph name="CONTROL_PANEL_APPLET_NAME"/>?',
342 'po': u'""\n"Para ocultar el acceso a este programa, debes desinstalarlo. Para ello, "\n"utiliza\\\\n%{CONTROL_PANEL_APPLET_NAME} del Panel de control.\\\\n\\\\¿Deseas "\n"iniciar %{CONTROL_PANEL_APPLET_NAME}?"' }
343 ]
344
345 for string in data:
346 s = u"<x>" + string['xtb'] + u"</x>"
347 s = s.encode('ascii', 'xmlcharrefreplace')
348 dom = minidom.parseString(s)
349 s = dom.firstChild.toxml()[3:][:-4]
350 e = self.grd2gettext(s)
351 if e != string['po']:
352 assert False, "grd2gettext() failed for id " + string['id'] + \
353 ". \nExpected: " + repr(string['po']) + "\nGot: " + repr(e)
354 e = self.xtb2gettext(string['xtb'])
355 if e != string['po']:
356 assert False, "xtb2gettext() failed for id " + string['id'] + \
357 ". \nExpected: " + repr(string['po']) + "\nGot: " + repr(e)
358 u = self.gettext2xtb(e)
359 if u != string['xtb']:
360 assert False, "gettext2xtb() failed for id " + string['id'] + \
361 ". \nExpected: " + repr(string['xtb']) + "\nGot: " + repr(u)
362 print string['id'] + " ok"
363
364 # more tests with only po to xtb to test some weird launchpad po exports
365 data2 = [
366 { 'id': '1768211415369530011', # lang: 'de'
367 'po': u'""\n"Folgende Anwendung wird gestartet, wenn Sie diese Anforderung akzeptieren:\\\\"\n"n\\\\n %{APPLICATION}"',
368 'xtb': u'Folgende Anwendung wird gestartet, wenn Sie diese Anforderung akzeptieren:\\n\\n <ph name="APPLICATION"/>' },
369 ]
370 for string in data2:
371 u = self.gettext2xtb(string['po'])
372 if u != string['xtb']:
373 assert False, "gettext2xtb() failed for id " + string['id'] + \
374 ". \nExpected: " + repr(string['xtb']) + "\nGot: " + repr(u)
375 print string['id'] + " ok"
376
377######
378
379class PotFile(dict):
380 """
381 Read and write gettext pot files
382 """
383
384 def __init__(self, filename, date = None, debug = False, branch_name = "default", branch_dir = os.getcwd()):
385 self.debug = debug
386 self.lang = None
387 self.filename = filename
388 self.tfile = filename + ".new"
389 self.branch_dir = branch_dir
390 self.branch_name = branch_name
391 self.template_date = date
392 self.translation_date = "YEAR-MO-DA HO:MI+ZONE"
393 self.is_pot = True
394 self.fd = None
395 self.fd_mode = "rb"
396 if self.template_date is None:
397 self.template_date = datetime.utcnow().strftime("%Y-%m-%d %H:%M+0000")
398 self.strings = []
399
400 def add_string(self, id, comment, string, translation = "", origin = None):
401 self.strings.append({ 'id': id, 'comment': comment, 'string': string,
402 'origin': origin, 'translation': translation })
403
404 def replace_file_if_newer(self):
405 filename = os.path.join(self.branch_dir, self.filename) if self.branch_dir is not None \
406 else self.filename
407 tfile = os.path.join(self.branch_dir, self.tfile) if self.branch_dir is not None \
408 else self.tfile
409 if os.path.isfile(filename) and filecmp.cmp(filename, tfile) == 1:
410 os.unlink(tfile)
411 return 0
412 else:
413 os.rename(tfile, filename)
414 return 1
415
416 def get_mtime(self, file):
417 rfile = os.path.join(self.branch_dir, file)
418 if self.debug:
419 print "getmtime(%s) [%s]" % (file, os.path.abspath(rfile))
420 return os.path.getmtime(rfile)
421
422 def open(self, mode = "rb", filename = None):
423 if filename is not None:
424 self.filename = filename
425 self.tfile = filename + ".new"
426 rfile = os.path.join(self.branch_dir, self.filename)
427 rtfile = os.path.join(self.branch_dir, self.tfile)
428 if self.fd is not None:
429 self.close()
430 self.fd_mode = mode
431 if mode.find("r") != -1:
432 if self.debug:
433 print "open %s [mode=%s] from branch '%s' [%s]" % (self.filename, mode, self.branch_name, os.path.abspath(rfile))
434 self.fd = codecs.open(rfile, mode, encoding="utf-8")
435 else:
436 if self.debug:
437 print "open %s [mode=%s] from branch '%s' [%s]" % (self.tfile, mode, self.branch_name, os.path.abspath(rtfile))
438 self.fd = codecs.open(rtfile, mode, encoding="utf-8")
439
440 def close(self):
441 self.fd.close()
442 self.fd = None
443 if self.fd_mode.find("w") != -1:
444 return self.replace_file_if_newer()
445
446 def read_string(self):
447 string = {}
448 cur = None
449 while 1:
450 s = self.fd.readline()
451 if len(s) == 0 or s == "\n":
452 break # EOF or end of block
453 if s.rfind('\n') == len(s) - 1:
454 s = s[:-1] # chomp
455 if s.find("# ") == 0 or s == "#": # translator-comment
456 if 'comment' not in string:
457 string['comment'] = ''
458 string['comment'] += s[2:]
459 continue
460 if s.find("#:") == 0: # reference
461 if 'reference' not in string:
462 string['reference'] = ''
463 string['reference'] += s[2:]
464 if s[2:].find(" id: ") == 0:
465 string['id'] = s[7:].split(' ')[0]
466 continue
467 if s.find("#.") == 0: # extracted-comments
468 if 'extracted' not in string:
469 string['extracted'] = ''
470 string['extracted'] += s[2:]
471 if s[2:].find(" - condition: ") == 0:
472 if 'conditions' not in string:
473 string['conditions'] = []
474 string['conditions'].append(s[16:])
475 continue
476 if s.find("#~") == 0: # obsolete messages
477 continue
478 if s.find("#") == 0: # something else
479 print "%s not expected. Skip" % repr(s)
480 continue # not supported/expected
481 if s.find("msgid ") == 0:
482 cur = "string"
483 if cur not in string:
484 string[cur] = u""
485 else:
486 string[cur] += "\n"
487 string[cur] += s[6:]
488 continue
489 if s.find("msgstr ") == 0:
490 cur = "translation"
491 if cur not in string:
492 string[cur] = u""
493 else:
494 string[cur] += "\n"
495 string[cur] += s[7:]
496 continue
497 if s.find('"') == 0:
498 if cur is None:
499 print "'%s' not expected here. Skip" % s
500 continue
501 string[cur] += "\n" + s
502 continue
503 print "'%s' not expected here. Skip" % s
504 return None if string == {} else string
505
506 def write(self, string):
507 self.fd.write(string)
508
509 def write_header(self):
510 lang_team = "LANGUAGE <LL@li.org>" if self.is_pot else "%s <%s@li.org>" % (self.lang, self.lang)
511 lang_str = "template" if self.is_pot else "for lang '%s'" % self.lang
512 date = "YEAR-MO-DA HO:MI+ZONE" if self.is_pot else \
513 datetime.fromtimestamp(self.translation_date).strftime("%Y-%m-%d %H:%M+0000")
514 self.write("# Chromium Translations %s.\n"
515 "# Copyright (C) 2010-2011 Fabien Tassin\n"
516 "# This file is distributed under the same license as the chromium-browser package.\n"
517 "# Fabien Tassin <fta@ubuntu.com>, 2010-2011.\n"
518 "#\n" % lang_str)
519 # FIXME: collect contributors (can LP export them?)
520 self.write('msgid ""\n'
521 'msgstr ""\n'
522 '"Project-Id-Version: chromium-browser.head\\n"\n'
523 '"Report-Msgid-Bugs-To: https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+filebug\\n"\n'
524 '"POT-Creation-Date: %s\\n"\n'
525 '"PO-Revision-Date: %s\\n"\n'
526 '"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n"\n'
527 '"Language-Team: %s\\n"\n'
528 '"MIME-Version: 1.0\\n"\n'
529 '"Content-Type: text/plain; charset=UTF-8\\n"\n'
530 '"Content-Transfer-Encoding: 8bit\\n"\n\n' % \
531 (datetime.fromtimestamp(self.template_date).strftime("%Y-%m-%d %H:%M+0000"),
532 date, lang_team))
533
534 def write_footer(self):
535 pass
536
537 def write_all_strings(self):
538 for string in self.strings:
539 self.write(u"#. %s\n" % u"\n#. ".join(string['comment'].split("\n")))
540 self.write(u"#: id: %s (used in the following branches: %s)\n" % \
541 (string['id'], ", ".join(string['origin'])))
542 self.write(u'msgid %s\n' % StringCvt().xtb2gettext(string['string']))
543 self.write(u'msgstr %s\n\n' % StringCvt().xtb2gettext(string['translation']))
544
545 def export_file(self, directory = None, filename = None):
546 self.open(mode = "wb", filename = filename)
547 self.write_header()
548 self.write_all_strings()
549 self.write_footer()
550 return self.close()
551
552 def import_file(self):
553 self.mtime = self.get_mtime(self.filename)
554 self.open()
555 while 1:
556 string = self.read_string()
557 if string is None:
558 break
559 self.strings.append(string)
560 self.close()
561
562 def pack_comment(self, data):
563 comment = ""
564 for ent in sorted(data, lambda x,y: cmp(x['code'], y['code'])):
565 comment += "%s\n- description: %s\n" % (ent['code'], ent['desc'])
566 if ent['test'] is not None:
567 comment += "- condition: %s\n" % ent['test']
568 comment = comment[:-1] # strip trailing \n
569 return comment
570
571 def get_origins(self, data):
572 o = []
573 for ent in sorted(data, lambda x,y: cmp(x['code'], y['code'])):
574 for origin in ent['origin']:
575 if origin not in o:
576 o.append(origin)
577 return o
578
579 def import_grd(self, grd):
580 imported = 0
581 for id in sorted(grd.supported_ids.keys()):
582 if 'ids' not in grd.supported_ids[id]:
583 continue
584 comment = self.pack_comment(grd.supported_ids[id]['ids'])
585 string = grd.supported_ids[id]['ids'][0]['val']
586 origin = self.get_origins(grd.supported_ids[id]['ids'])
587 self.strings.append({ 'id': id, 'comment': comment, 'string': string,
588 'origin': origin, 'translation': '' })
589 imported += 1
590 if self.debug:
591 print "imported %d strings from the grd template" % imported
592
593class PoFile(PotFile):
594 """
595 Read and write gettext po files
596 """
597
598 def __init__(self, lang, filename, template, date = None, debug = None,
599 branch_name = "default", branch_dir = os.getcwd()):
600 super(PoFile, self).__init__(filename, date = template.template_date, debug = debug,
601 branch_name = branch_name, branch_dir = branch_dir)
602 self.template = template
603 self.lang = lang
604 self.translation_date = date
605 self.is_pot = False
606
607 def import_xtb(self, xtb):
608 # only import strings present in the current template
609 imported = 0
610 for id in sorted(xtb.template.supported_ids.keys()):
611 if 'ids' not in xtb.template.supported_ids[id]:
612 continue
613 translation = xtb.strings[id] if id in xtb.strings else ""
614 comment = self.template.pack_comment(xtb.template.supported_ids[id]['ids'])
615 string = xtb.template.supported_ids[id]['ids'][0]['val']
616 origin = self.get_origins(xtb.template.supported_ids[id]['ids'])
617 self.add_string(id, comment, string, translation, origin)
618 imported += 1
619 if self.debug:
620 print "imported %d translations for lang %s from xtb into po %s" % (imported, self.lang, self.filename)
621
622class GrdFile(PotFile):
623 """
624 Read a Grit GRD file (write is not supported)
625 """
626 def __init__(self, filename, date = None, lang_mapping = None, debug = None,
627 branch_name = "default", branch_dir = os.getcwd()):
628 super(GrdFile, self).__init__(filename, date = date, debug = debug,
629 branch_name = branch_name, branch_dir = branch_dir)
630 self.lang_mapping = lang_mapping
631 self.mapped_langs = {}
632 self.supported_langs = {}
633 self.supported_ids = {}
634 self.supported_ids_counts = {}
635 self.translated_strings = {}
636 self.stats = {} # per lang
637 self.debug = debug
638 self._PH_REGEXP = re.compile('(<ph name=")([^"]*)("/>)')
639
640 def open(self):
641 pass
642
643 def close(self):
644 pass
645
646 def write_header(self):
647 raise Exception("Not implemented!")
648
649 def write_footer(self):
650 raise Exception("Not implemented!")
651
652 def write_all_strings(self):
653 raise Exception("Not implemented!")
654
655 def export_file(self, directory = None, filename = None, global_langs = None, langs = None):
656 fdi = codecs.open(self.filename, 'rb', encoding="utf-8")
657 fdo = codecs.open(filename, 'wb', encoding="utf-8")
658 # can't use minidom here as the file is manually generated and the
659 # output will create big diffs. parse the source file line by line
660 # and insert our xtb in the <translations> section. Also insert new
661 # langs in the <outputs> section (with type="data_package" or type="js_map_format").
 662 # Leave everything else untouched
663 tr_found = False
664 tr_saved = []
665 tr_has_ifs = False
666 tr_skipping_if_not = False
667 pak_found = False
668 pak_saved = []
669 # langs, sorted by their xtb names
670 our_langs = map(lambda x: x[0],
671 sorted(map(lambda x: (x, self.mapped_langs[x]['xtb_file']),
672 self.mapped_langs),
673 key = lambda x: x[1])) # d'oh!
674 if langs is None:
675 langs = our_langs[:]
676 for line in fdi.readlines():
677 if re.match(r'.*?<output filename=".*?" type="(data_package|js_map_format)"', line):
678 pak_found = True
679 pak_saved.append(line)
680 continue
681 if line.find('</outputs>') > 0:
682 pak_found = False
683 ours = global_langs[:]
684 chunks = {}
685 c = None
686 pak_if = None
687 pak_is_in_if = False
688 for l in pak_saved:
689 if l.find("<!-- ") > 0:
690 c = l
691 continue
692 if l.find("<if ") > -1:
693 c = l if c is None else c + l
694 tr_has_ifs = True
695 pak_is_in_if = True
696 continue
697 if l.find("</if>") > -1:
698 c = l if c is None else c + l
699 pak_is_in_if = False
700 continue
701 m = re.match(r'.*?<output filename="(.*?)_([^_\.]+)\.(pak|js)" type="(data_package|js_map_format)" lang="(.*?)" />', l)
702 if m is not None:
703 x = { 'name': m.group(1), 'ext': m.group(3), 'lang': m.group(5), 'file_lang': m.group(2),
704 'type': m.group(4), 'in_if': pak_is_in_if, 'line': l }
705 if c is not None:
706 x['comment'] = c
707 c = None
708 k = m.group(2) if m.group(2) != 'nb' else 'no'
709 chunks[k] = x
710 else:
711 if c is None:
712 c = l
713 else:
714 c += l
715 is_in_if = False
716 for lang in sorted(chunks.keys()):
717 tlang = lang if lang != 'no' else 'nb'
718 while len(ours) > 0 and ((ours[0] == 'nb' and 'no' < tlang) or (ours[0] != 'nb' and ours[0] < tlang)):
719 if ours[0] in chunks:
720 ours = ours[1:]
721 continue
722 if tr_has_ifs and is_in_if is False:
723 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
724 f = "%s_%s.%s" % (chunks[lang]['name'], ours[0], chunks[lang]['ext'])
725 fdo.write(' %s<output filename="%s" type="%s" lang="%s" />\n' % \
726 (' ' if tr_has_ifs else '', f, chunks[lang]['type'], ours[0]))
727 is_in_if = True
728 if tr_has_ifs and chunks[lang]['in_if'] is False:
729 if 'comment' not in chunks[lang] or chunks[lang]['comment'].find('</if>') == -1:
730 fdo.write(' </if>\n')
731 is_in_if = False
732 ours = ours[1:]
733 if 'comment' in chunks[lang]:
734 for s in chunks[lang]['comment'].split('\n')[:-1]:
735 if chunks[lang]['in_if'] is True and is_in_if and s.find('<if ') > -1:
736 continue
737 if s.find('<!-- No translations available. -->') > -1:
738 continue
739 fdo.write(s + '\n')
740 fdo.write(chunks[lang]['line'])
741 if ours[0] == tlang:
742 ours = ours[1:]
743 is_in_if = chunks[lang]['in_if']
744 if len(chunks.keys()) > 0:
745 while len(ours) > 0:
746 f = "%s_%s.%s" % (chunks[lang]['name'], ours[0], chunks[lang]['ext'])
747 if tr_has_ifs and is_in_if is False:
748 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
749 fdo.write(' %s<output filename="%s" type="data_package" lang="%s" />\n' % \
750 (' ' if tr_has_ifs else '', f, ours[0]))
751 is_in_if = True
752 ours = ours[1:]
753 if tr_has_ifs and is_in_if:
754 fdo.write(' </if>\n')
755 is_in_if = False
756 if c is not None:
757 for s in c.split('\n')[:-1]:
758 if s.find('<!-- No translations available. -->') > -1:
759 continue
760 if s.find('</if>') > -1:
761 continue
762 fdo.write(s + '\n')
763 if line.find('<translations>') > 0:
764 fdo.write(line)
765 tr_found = True
766 continue
767 if line.find('</translations>') > 0:
768 tr_found = False
769 ours = our_langs[:]
770 chunks = {}
771 obsolete = []
772 c = None
773 tr_if = None
774 tr_is_in_if = False
775 for l in tr_saved:
776 if l.find("</if>") > -1:
777 if tr_skipping_if_not:
778 tr_skipping_if_not = False
779 continue
780 tr_is_in_if = False
781 continue
782 if tr_skipping_if_not:
783 continue
784 if l.find("<!-- ") > 0:
785 c = l if c is None else c + l
786 continue
787 if l.find("<if ") > -1:
788 m = re.match(r'.*?<if expr="not pp_ifdef\(\'use_third_party_translations\'\)"', l)
789 if m is not None:
790 tr_skipping_if_not = True
791 continue
792 tr_has_ifs = True
793 tr_is_in_if = True
794 continue
795 m = re.match(r'.*?<file path=".*_([^_]+)\.xtb" lang="(.*?)"', l)
796 if m is not None:
797 tlang = m.group(2)
798 if m.group(1) == 'iw':
799 tlang = m.group(1)
800 x = { 'lang': tlang, 'line': l, 'in_if': tr_is_in_if }
801 if c is not None:
802 x['comment'] = c
803 c = None
804 chunks[tlang] = x
805 if tlang not in langs and tlang not in map(lambda t: self.mapped_langs[t]['grit'], langs):
806 obsolete.append(tlang)
807 else:
808 if c is None:
809 c = l
810 else:
811 c += l
812 is_in_if = False
813 # Do we want <if/> in the <translations/> block? (they are only mandatory in the <outputs/> block)
814 want_ifs_in_translations = False
815 for lang in sorted(chunks.keys()):
816 while len(ours) > 0 and self.mapped_langs[ours[0]]['xtb_file'] < lang.replace('@', '-'):
817 if ours[0] not in self.supported_langs:
818 if self.debug:
819 print "Skipped export of lang '%s' (most probably a 'po' file without any translated strings)" % ours[0]
820 ours = ours[1:]
821 continue
822 if ours[0] in obsolete:
823 if self.debug:
824 print "Skipped export of lang '%s' (now obsolete)" % ours[0]
825 ours = ours[1:]
826 continue
827 f = os.path.relpath(self.supported_langs[ours[0]], os.path.dirname(self.filename))
828 if want_ifs_in_translations and tr_has_ifs and is_in_if is False:
829 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
830 is_in_if = True
831 fdo.write(' %s<file path="%s" lang="%s" />\n' %
832 (' ' if (is_in_if or want_ifs_in_translations) and tr_has_ifs else '', f, ours[0]))
833 if tr_has_ifs and chunks[lang]['in_if'] is False:
834 if want_ifs_in_translations:
835 fdo.write(' </if>\n')
836 is_in_if = False
837 ours = ours[1:]
838 if 'comment' in chunks[lang]:
839 for s in chunks[lang]['comment'].split('\n')[:-1]:
840 if chunks[lang]['in_if'] is True and is_in_if and s.find('<if ') > -1:
841 continue
842 if s.find('<!-- No translations available. -->') > -1:
843 continue
844 fdo.write(s + '\n')
845 if lang not in obsolete:
846 fdo.write(chunks[lang]['line'])
847 ours = ours[1:]
848 is_in_if = chunks[lang]['in_if']
849 while len(ours) > 0:
850 if ours[0] in self.supported_langs:
851 f = os.path.relpath(self.supported_langs[ours[0]], os.path.dirname(self.filename))
852 if want_ifs_in_translations and tr_has_ifs and is_in_if is False:
853 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
854 is_in_if = True
855 fdo.write(' %s<file path="%s" lang="%s" />\n' %
856 (' ' if (is_in_if or want_ifs_in_translations) and tr_has_ifs else '', f, ours[0]))
857 elif self.debug:
858 print "Skipped lang %s with no translated strings" % ours[0]
859 ours = ours[1:]
860
861 if is_in_if and want_ifs_in_translations:
862 fdo.write(' </if>\n')
863 is_in_if = False
864 if c is not None:
865 for s in c.split('\n')[:-1]:
866 if s.find('<!-- No translations available. -->') > -1:
867 continue
868 if s.find('</if>') > -1:
869 continue
870 fdo.write(s + '\n')
871 if tr_found:
872 tr_saved.append(line)
873 continue
874 if pak_found:
875 pak_saved.append(line)
876 continue
877 fdo.write(line)
878 fdi.close()
879 fdo.close()
880
881 def uc(self, match):
882 return match.group(2).upper()
883
884 def uc_name(self, match):
885 return match.group(1) + match.group(2).upper() + match.group(3)
886
887 def is_string_valid_for_lang(self, id, lang):
888 ok = False
889 for string in self.supported_ids[id]['ids']:
890 if string['test'] is not None:
891 ok |= EvalConditions().lang_eval(string['test'], lang)
892 if ok:
893 break
894 else:
895 ok = True
896 break
897 return ok
898
899 def get_supported_strings_count(self, lang):
900 # need to ignore strings for which this lang is not wanted in the <if> conditions
901 if lang in self.supported_ids_counts:
902 return self.supported_ids_counts[lang]['count'], self.supported_ids_counts[lang]['skipped']
903 count = 0
904 skipped = 0
905 for id in self.supported_ids:
906 ok = self.is_string_valid_for_lang(id, lang)
907 if ok:
908 count += 1
909 else:
910 skipped += 1
911 assert count + skipped == len(self.supported_ids.keys())
912 self.supported_ids_counts[lang] = { 'count': count, 'skipped': skipped }
913 return count, skipped
914
915 def get_supported_langs(self):
916 return sorted(self.supported_langs.keys())
917
918 def get_supported_lang_filenames(self):
919 """ return the list of (xtb) filenames sorted by langs (so it's
920 possible to zip() it) """
921 return map(lambda l: self.supported_langs[l], sorted(self.supported_langs.keys()))
922
923 def update_stats(self, lang, translated_upstream = 0, obsolete = 0,
924 new = 0, updated = 0, skipped_lang = 0, mandatory_linux = 0):
925 if lang not in self.stats:
926 self.stats[lang] = { 'translated_upstream': 0, 'skipped_lang': 0,
927 'obsolete': 0, 'new': 0, 'updated': 0,
928 'mandatory_linux': 0 }
929 self.stats[lang]['translated_upstream'] += translated_upstream - updated
930 self.stats[lang]['obsolete'] += obsolete
931 self.stats[lang]['new'] += new
932 self.stats[lang]['updated'] += updated
933 self.stats[lang]['skipped_lang'] += skipped_lang
934 self.stats[lang]['mandatory_linux'] += mandatory_linux
935
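The counter bookkeeping in update_stats() above can be modelled as a small standalone helper. This is a sketch only (the name `accumulate_stats` is mine); it keeps the same dict-of-counters shape but deliberately omits the `translated_upstream - updated` adjustment the real method applies. Shown in Python 3:

```python
def accumulate_stats(stats, lang, **deltas):
    """Accumulate per-lang counters, creating a zeroed record on first use."""
    rec = stats.setdefault(lang, {'translated_upstream': 0, 'obsolete': 0,
                                  'new': 0, 'updated': 0, 'skipped_lang': 0,
                                  'mandatory_linux': 0})
    for key, value in deltas.items():
        rec[key] += value  # an unknown counter name raises KeyError, catching typos
    return rec

stats = {}
accumulate_stats(stats, 'fr', new=2)
accumulate_stats(stats, 'fr', new=1, obsolete=1)
```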
936 def merge_template(self, template, newer_preferred = True):
937 """ merge strings from 'template' into self (the master template).
938 If the string differs, prefer the new one when newer_preferred is set """
939 for id in template.supported_ids:
940 if id not in self.supported_ids:
941 if self.debug:
942 print "merged code %s (id %s) from branch '%s' from %s" % \
943 (template.supported_ids[id]['ids'][0]['code'], id,
944 template.supported_ids[id]['ids'][0]['origin'][0], template.filename)
945 self.supported_ids[id] = template.supported_ids[id]
946 else:
947 for ent in template.supported_ids[id]['ids']:
948 found = False
949 for ent2 in self.supported_ids[id]['ids']:
950 if ent2['code'] != ent['code']:
951 continue
952 found = True
953 ent2['origin'].append(ent['origin'][0])
954 if ent['test'] != ent2['test'] or \
955 ent['desc'] != ent2['desc']:
956 if newer_preferred:
957 ent2['test'] = ent['test']
958 ent2['desc'] = ent['desc']
959 if not found:
960 if self.debug:
961 print "adding new ids code '%s' from branch '%s' for string id %s" % \
962 (ent['code'], template.supported_ids[id]['ids'][0]['origin'][0], id)
963 self.supported_ids[id]['ids'].append(ent)
964
965 def add_translation(self, lang, id, translation):
966 if id not in self.supported_ids:
967 if self.debug:
968 print "*warn* obsolete string id %s for lang %s" % (id, lang)
969 return
970 self.supported_ids[id]['lang'][lang] = translation
971
972 def merge_translations(self, lang, xtb, master_xtb = None, newer_preferred = True):
973 if lang not in self.supported_langs:
974 self.supported_langs[lang] = xtb.filename
975 for id in xtb.strings:
976 if id not in self.supported_ids:
977 # d'oh!! obsolete translation?
978 self.update_stats(lang, obsolete = 1)
979 continue
980 if not self.is_string_valid_for_lang(id, lang):
981 # string not wanted for that lang, skipped
982 continue
983 if 'lang' not in self.supported_ids[id]:
984 self.supported_ids[id]['lang'] = {}
985 if lang in self.supported_ids[id]['lang']:
986 # already have a translation for this string
987 if newer_preferred and xtb.strings[id] != self.supported_ids[id]['lang'][lang]:
988 self.supported_ids[id]['lang'][lang] = xtb.strings[id]
989 else:
990 self.update_stats(lang, translated_upstream = 1)
991 self.supported_ids[id]['lang'][lang] = xtb.strings[id]
992 if master_xtb is not None:
993 master_xtb.strings[id] = xtb.strings[id]
994
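The merge policy of merge_translations() above reduces to a simple rule: new ids are always taken, and conflicting ids are overwritten only when `newer_preferred` is set. A minimal model of that rule (names are mine, not the script's; stats are simplified), in Python 3:

```python
def merge_translation_maps(master, incoming, newer_preferred=True):
    """Merge id -> text maps, preferring `incoming` on conflicts only when
    newer_preferred is set; returns a new dict plus simple stats."""
    merged = dict(master)
    stats = {'new': 0, 'updated': 0}
    for sid, text in incoming.items():
        if sid not in merged:
            merged[sid] = text
            stats['new'] += 1
        elif newer_preferred and merged[sid] != text:
            merged[sid] = text
            stats['updated'] += 1
    return merged, stats
```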
995 def read_string(self, node, test = None):
996 desc = node.getAttribute('desc')
997 name = node.getAttribute('name')
998 if not node.firstChild:
999 # no string? weird. Skip. (e.g. IDS_LOAD_STATE_IDLE)
1000 return
1001
1002 # Get a/ the full string from the grd, b/ its transformation
1003 # into the smaller version found in xtb files (val) and c/ another into
1004 # something suitable for the 64bit key generator (kval)
1005
1006 orig_val = "".join([ n.toxml() for n in node.childNodes ])
1007
1008 # encode the value to create the 64bit ID needed for the xtb mapping.
1009 #
1010 # grd: 'f&amp;oo &quot;<ph name="IDS_xX">$1<ex>blabla</ex></ph>&quot; bar'
1011 # xtb: 'f&amp;oo &quot;<ph name="IDS_XX"/>&quot; bar'
1012 # but the string used to create the 64bit id is only 'f&oo "IDS_XX" bar'.
1013 # Also, the final value must be positive, while FingerPrint() returns
1014 # a signed long. Of course, none of this is documented...
1015
1016 # grd->xtb
1017 for x in node.getElementsByTagName('ph'):
1018 while x.hasChildNodes():
1019 x.removeChild(x.childNodes[0])
1020 val = "".join([ n.toxml() for n in node.childNodes ]).strip()
1021 # xtb->id
1022 kval = StringCvt().decode_xml_entities(unescape(self._PH_REGEXP.sub(self.uc, val))).encode('utf-8')
1023 kval = kval.replace('&quot;', '"') # not replaced by unescape()
1024
1025 val = self._PH_REGEXP.sub(self.uc_name, val)
1026 val = val.encode("ascii", "xmlcharrefreplace").strip().encode('utf-8')
1027
1028 # finally, create the 64bit ID
1029 id = str(FingerPrint(kval) & 0x7fffffffffffffffL)
1030
1031 if val == '':
1032 # unexpected <message/> block with attributes but no value; skip it
1033 return
1034
1035 if id not in self.supported_ids:
1036 self.supported_ids[id] = { 'ids': [] }
1037 self.supported_ids[id]['ids'].append({ 'code': name, 'desc': desc,
1038 'val': val, 'test': test,
1039 'origin': [ self.branch_name ] })
1040
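The last step of read_string() above turns the key string into a positive 64-bit xtb id. Grit's real FingerPrint() is not shown in this diff; the stand-in below assumes a signed 64-bit value taken from the first 8 bytes of an MD5 digest (an illustration only, not Grit's actual hash), and exists mainly to show the sign-bit masking. Python 3 sketch:

```python
import hashlib
import struct

def fingerprint64(text):
    # Stand-in for Grit's FingerPrint(): a signed 64-bit value derived
    # from the first 8 bytes of an MD5 digest (assumed, illustrative).
    (signed,) = struct.unpack('>q',
                              hashlib.md5(text.encode('utf-8')).digest()[:8])
    return signed

def xtb_id(kval):
    # xtb ids must be positive decimal strings, hence the sign-bit mask,
    # mirroring `FingerPrint(kval) & 0x7fffffffffffffffL` above.
    return str(fingerprint64(kval) & 0x7FFFFFFFFFFFFFFF)
```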
1041 def read_strings(self, node, test = None):
1042 for n in node.childNodes:
1043 if n.nodeName == '#text' or n.nodeName == '#comment':
1044 # comments, skip
1045 continue
1046 if n.nodeName == 'message':
1047 self.read_string(n, test)
1048 continue
1049 if n.nodeName == 'if':
1050 expr = n.getAttribute('expr')
1051 if expr is not None and test is not None:
1052 assert False, "nested <if> not supported"
1053 self.read_strings(n, expr)
1054 continue
1055 if n.nodeName == 'part':
1056 f = n.getAttribute('file')
1057 qualified_file = os.path.join(os.path.dirname(self.filename), f)
1058 self.import_file(override_filename=qualified_file)
1059 continue
1060 raise Exception("unknown tag (<%s> type %s): ''%s''" % \
1061 (n.nodeName, n.nodeType, n.toxml()))
1062
1063 def import_json_file(self, filename):
1064 # despite its name, this file is definitely not a json file.
1065 # It's a python object dumped into a file, which makes it far easier to
1066 # parse because there's no extra unescaping to do on the strings. It also
1067 # means we can't use the json module
1068 rfile = os.path.join(self.branch_dir, filename)
1069 if self.debug:
1070 print "parse_json('%s') [%s]" % (filename, rfile)
1071 fd = open(rfile, "rb")
1072 data = fd.read()
1073 fd.close()
1074 vars = { '__builtins__': { 'True': True, 'False': False } } # prevent eval from using the real current globals
1075 data = eval(data, vars)
1076 # Check if this is a format we support
1077 if 'policy_definitions' in data and len(data['policy_definitions']) > 0 and \
1078 'caption' not in data['policy_definitions'][0]:
1079 # most probably Chromium v9. It used 'annotations' instead of 'caption'
1080 # Not worth supporting that, all the strings we need in v9 are already in
1081 # the grd file. Skip this json file
1082 if self.debug:
1083 print "Found older unsupported json format. Skipped"
1084 return
1085 if 'messages' in data:
1086 for msg in data['messages']:
1087 self.read_policy('IDS_POLICY_' + msg.upper(),
1088 data['messages'][msg]['desc'],
1089 data['messages'][msg]['text'])
1090 if 'policy_definitions' in data:
1091 for policy in data['policy_definitions']:
1092 name = 'IDS_POLICY_' + policy['name'].upper()
1093 if policy['type'] in [ 'main', 'int', 'string', 'list', 'string-enum', 'int-enum', 'string-enum-list' ]:
1094 # caption
1095 self.read_policy(name + '_CAPTION',
1096 "Caption of the '%s' policy." % policy['name'],
1097 policy['caption'])
1098 # label (optional)
1099 if 'label' in policy:
1100 self.read_policy(name + '_LABEL',
1101 "Label of the '%s' policy." % policy['name'],
1102 policy['label'])
1103 # desc
1104 self.read_policy(name + '_DESC',
1105 "Description of the '%s' policy." % policy['name'],
1106 policy['desc'])
1107 if policy['type'] in [ 'string-enum', 'int-enum', 'string-enum-list' ]:
1108 for item in policy['items']:
1109 self.read_policy('IDS_POLICY_ENUM_' + item['name'].upper().replace(' ', '_') + '_CAPTION',
1110 "Label in a '%s' dropdown menu for selecting '%s'" % \
1111 (policy['name'], item['name']),
1112 item['caption'])
1113 continue
1114 if policy['type'] == 'group':
1115 # group caption
1116 self.read_policy(name + '_CAPTION',
1117 "Caption of the group of '%s' related policies." % name,
1118 policy['caption'])
1119 # group label (optional)
1120 if 'label' in policy:
1121 self.read_policy(name + '_LABEL',
1122 "Label of the group of '%s' related policies." % name,
1123 policy['label'])
1124 # group desc
1125 self.read_policy(name + '_DESC',
1126 "Description of the group of '%s' related policies." % name,
1127 policy['desc'])
1128 for spolicy in policy['policies']:
1129 sname = 'IDS_POLICY_' + spolicy['name'].upper()
1130 # desc
1131 self.read_policy(sname + '_DESC',
1132 "Description of the '%s' policy." % spolicy['name'],
1133 spolicy['desc'])
1134 # label (optional)
1135 if 'label' in spolicy:
1136 self.read_policy(sname + '_LABEL',
1137 "Label of the '%s' policy." % spolicy['name'],
1138 spolicy['label'])
1139 # caption
1140 self.read_policy(sname + '_CAPTION',
1141 "Caption of the '%s' policy." % spolicy['name'],
1142 spolicy['caption'])
1143 if spolicy['type'] in [ 'int-enum', 'string-enum' ]:
1144 # only caption
1145 for item in spolicy['items']:
1146 self.read_policy('IDS_POLICY_ENUM_' + item['name'].upper() + '_CAPTION',
1147 "Label in a '%s' dropdown menu for selecting a '%s' of '%s'" % \
1148 (policy['name'], spolicy['name'], item['name']),
1149 item['caption'])
1150 continue
1151 # The new type is not yet being used: http://code.google.com/p/chromium/issues/detail?id=108992
1152 if policy['type'] == 'external':
1153 continue
1154 if policy['type'] == 'dict':
1155 continue
1156
1157 assert False, "Policy type '%s' not supported while parsing %s" % (policy['type'], rfile)
1158
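As the comment above notes, the "json" policy file is really a Python literal, which is why import_json_file() eval()s it with neutered `__builtins__`. In Python 3, `ast.literal_eval` is a safer route for the same job, assuming the file stays literal-only (dicts, lists, strings, booleans). A sketch with a made-up sample payload:

```python
import ast

# Hypothetical fragment in the policy_templates file's literal format.
SAMPLE = """{
  'policy_definitions': [
    { 'name': 'HomepageLocation', 'type': 'string',
      'caption': 'Home page URL', 'desc': '...' },
  ],
  'messages': { 'ok': { 'desc': 'd', 'text': 'OK' } },
}"""

def load_policy_literals(text):
    # ast.literal_eval only accepts literal nodes, so unlike a bare
    # eval() it cannot execute arbitrary code from the file.
    return ast.literal_eval(text)
```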
1159 def read_policy(self, name, desc, text):
1160 xml = '<x><message name="%s" desc="%s">\n%s\n</message></x>' % (name, desc, text)
1161 dom = minidom.parseString(xml)
1162 self.read_strings(dom.getElementsByTagName('x')[0])
1163
1164 def _add_xtb(self, node):
1165 if node.nodeName != 'file':
1166 return
1167 path = node.getAttribute('path')
1168 m = re.match(r'.*_([^_]+)\.xtb', path)
1169 flang = m.group(1)
1170 lang = node.getAttribute('lang')
1171 tlang = lang
1172 if self.lang_mapping is not None and lang in self.lang_mapping:
1173 if self.debug:
1174 print "# mapping lang '%s' to '%s'" % (lang, self.lang_mapping[lang])
1175 tlang = self.lang_mapping[lang]
1176 tlang = tlang.replace('-', '_')
1177 self.supported_langs[lang] = os.path.normpath(os.path.join(os.path.dirname(self.filename), path))
1178 self.translated_strings[lang] = {}
1179 glang = lang
1180 if flang == 'iw':
1181 glang = flang
1182 #assert lang not in self.mapped_langs, "'%s' already in self.mapped_langs" % lang
1183 if lang not in self.mapped_langs:
1184 self.mapped_langs[lang] = { 'xtb_file': flang, 'grit': glang, 'gettext': tlang }
1185
1186 def import_file(self, override_filename=None):
1187 if override_filename:
1188 assert self.branch_dir
1189 filename = os.path.join(self.branch_dir, override_filename)
1190 else:
1191 filename = os.path.join(self.branch_dir, self.filename) if self.branch_dir is not None \
1192 else self.filename
1193 self.supported_langs = {}
1194 self.mtime = self.get_mtime(self.filename)
1195 if self.debug:
1196 print "minidom.parse(%s)" % filename
1197 dom = minidom.parse(filename)
1198 grits = dom.getElementsByTagName('grit')
1199 if not grits:
1200 grits = dom.getElementsByTagName('grit-part')
1201 grit = grits[0]
1202 for node in grit.childNodes:
1203 if node.nodeName == '#text' or node.nodeName == '#comment':
1204 # comments, skip
1205 continue
1206 if node.nodeName == 'outputs':
1207 # skip, nothing for us here
1208 continue
1209 if node.nodeName == 'translations':
1210 # collect the supported langs by scanning the list of xtb files
1211 for n in node.childNodes:
1212 if n.nodeName == 'if':
1213 for nn in n.childNodes:
1214 self._add_xtb(nn)
1215 continue
1216 self._add_xtb(n)
1217 continue
1218 if node.nodeName == 'release':
1219 for n in node.childNodes:
1220 if n.nodeName == '#text' or n.nodeName == '#comment':
1221 # comments, skip
1222 continue
1223 if n.nodeName == 'includes':
1224 # skip, nothing for us here
1225 continue
1226 if n.nodeName == 'structures':
1227 for sn in n.childNodes:
1228 if sn.nodeName != 'structure':
1229 continue
1230 type = sn.getAttribute('type')
1231 if type == 'dialog':
1232 # nothing for us here
1233 continue
1234 name = sn.getAttribute('name')
1235 file = sn.getAttribute('file')
1236 if type == 'policy_template_metafile':
1237 # included file containing the strings that are usually in the <messages> tree.
1238 fname = os.path.normpath(os.path.join(os.path.dirname(self.filename), file))
1239 self.import_json_file(fname)
1240 continue
1241 else:
1242 if self.debug:
1243 print "unknown <structure> type found ('%s') in %s" % (type, self.filename)
1244 continue
1245 if n.nodeName == 'messages':
1246 self.read_strings(n)
1247 continue
1248 print "unknown tag (<%s> type %s): ''%s''" % (n.nodeName, n.nodeType, n.toxml())
1249 continue
1250 print "unknown tag (<%s> type %s): ''%s''" % (node.nodeName, node.nodeType, node.toxml())
1251
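GrdFile.import_file() above repeatedly skips the '#text' and '#comment' nodes that minidom interleaves between elements. That filtering pattern can be factored into one generator (a sketch; `element_children` is my name, not the script's):

```python
from xml.dom import minidom

def element_children(node):
    # Yield only real element children, skipping the '#text' and
    # '#comment' nodes minidom inserts between elements.
    for child in node.childNodes:
        if child.nodeName in ('#text', '#comment'):
            continue
        yield child

DOC = '<grit><!-- note --><outputs/><translations/><release/></grit>'
names = [n.nodeName for n in element_children(
    minidom.parseString(DOC).documentElement)]
```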
1252 class XtbFile(PoFile):
1253 """
1254 Read and write a Grit XTB file
1255 """
1256
1257 def __init__(self, lang, filename, grd, date = None, debug = None,
1258 branch_name = "default", branch_dir = os.getcwd()):
1259 super(XtbFile, self).__init__(lang, filename, grd, date = date, debug = debug,
1260 branch_name = branch_name, branch_dir = branch_dir)
1261 self.template = grd
1262 self.strings = {}
1263 self.strings_updated = 0
1264 self.strings_new = 0
1265 self.strings_order = [] # needed to recreate xtb files in a similar order :(
1266
1267 def add_translation(self, id, string):
1268 assert id in self.template.supported_ids, "'%s' is not in supported_ids (file=%s)" % (id, self.filename)
1269 while string[-1:] == '\n' and self.template.supported_ids[id]['ids'][0]['val'][-1:] != '\n':
1270 # prevent the `msgid' and `msgstr' entries do not both end with '\n' error
1271 if self.debug:
1272 print "Found unwanted \\n at the end of translation id " + id + " lang " + self.lang + ". Dropped"
1273 string = string[:-1]
1274 while string[0] == '\n' and self.template.supported_ids[id]['ids'][0]['val'][0] != '\n':
1275 # prevent the `msgid' and `msgstr' entries do not both begin with '\n' error
1276 if self.debug:
1277 print "Found unwanted \\n at the beginning of translation id " + id + " lang " + self.lang + ". Dropped"
1278 string = string[1:]
1279 self.strings[id] = string
1280 self.strings_order.append(id)
1281
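The edge-newline normalization in add_translation() above can be sketched on its own. This version uses slicing (`[-1:]`, `[:1]`) instead of indexing so it also tolerates a translation that strips down to the empty string; names are mine. Python 3:

```python
def match_edge_newlines(reference, translation):
    # Drop leading/trailing '\n' from `translation` wherever `reference`
    # (the source string) has none; gettext rejects entries whose
    # msgid/msgstr '\n' edges disagree.
    while translation[-1:] == '\n' and reference[-1:] != '\n':
        translation = translation[:-1]
    while translation[:1] == '\n' and reference[:1] != '\n':
        translation = translation[1:]
    return translation
```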
1282 def write_header(self):
1283 self.write('<?xml version="1.0" ?>\n')
1284 self.write('<!DOCTYPE translationbundle>\n')
1285 self.write('<translationbundle lang="%s">\n' % \
1286 self.template.mapped_langs[self.lang]['grit'])
1287
1288 def write_footer(self):
1289 self.write('</translationbundle>')
1290
1291 def write_all_strings(self):
1292 for id in self.strings_order:
1293 if id in self.strings:
1294 self.write('<translation id="%s">%s</translation>\n' % \
1295 (id, self.strings[id]))
1296 for id in sorted(self.strings.keys()):
1297 if id in self.strings_order:
1298 continue
1299 self.write('<translation id="%s">%s</translation>\n' % \
1300 (id, self.strings[id]))
1301
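write_all_strings() above emits ids in the upstream xtb's original order first, then any remaining ids sorted, so regenerated files stay diff-friendly. The ordering rule in isolation (a sketch, my naming):

```python
def ordered_ids(strings, original_order):
    # First the ids in their original upstream xtb order, then any ids
    # not present in the upstream file, sorted for a stable output.
    known = [sid for sid in original_order if sid in strings]
    extra = sorted(sid for sid in strings if sid not in set(original_order))
    return known + extra
```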
1302 def import_po(self, po):
1303 for string in po.strings:
1304 if string['string'] == '':
1305 continue
1306 self.add_string(string['id'], string['extracted'],
1307 string['string'], string['translation'])
1308
1309 def import_file(self):
1310 self.open()
1311 file = self.fd.read() # *sigh*
1312 self.close()
1313 imported = 0
1314 for m in re.finditer('<translation id="(.*?)">(.*?)</translation>',
1315 file, re.S):
1316 if m.group(1) not in self.template.supported_ids:
1317 if self.debug:
1318 print "found a translation for obsolete string id %s in upstream xtb %s" % (m.group(1), self.filename)
1319 continue
1320 self.add_translation(m.group(1), m.group(2))
1321 imported += 1
1322 for m in re.finditer('<translationbundle lang="(.*?)">', file):
1323 lang = m.group(1)
1324 if self.lang in self.template.mapped_langs:
1325 assert self.template.mapped_langs[self.lang]['grit'] == lang, \
1326 "bad lang mapping for '%s' while importing %s, expected '%s'" % \
1327 (lang, self.filename, self.template.mapped_langs[self.lang]['grit'])
1328 else:
1329 tlang = lang
1330 if self.template.lang_mapping is not None and lang in self.template.lang_mapping:
1331 if self.debug:
1332 print "# mapping lang '%s' to '%s'" % (lang, self.template.lang_mapping[lang])
1333 tlang = self.template.lang_mapping[lang]
1334 tlang = tlang.replace('-', '_')
1335 flang = lang.replace('@', '-')
1336 self.template.mapped_langs[lang] = { 'xtb_file': flang, 'grit': lang, 'gettext': tlang }
1337 if self.debug:
1338 print "imported %d strings from the xtb file into lang '%s'" % (imported, self.lang)
1339 self.mtime = self.get_mtime(self.filename)
1340
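XtbFile.import_file() above parses the xtb with two regexes rather than an XML parser; the `re.S` flag is what lets `.*?` span multi-line translation bodies. A self-contained Python 3 sketch of the same two patterns against a made-up xtb snippet:

```python
import re

XTB = '''<?xml version="1.0" ?>
<!DOCTYPE translationbundle>
<translationbundle lang="fr">
<translation id="123">Bonjour</translation>
<translation id="456">Au
revoir</translation>
</translationbundle>'''

def parse_xtb(text):
    # Same two patterns as import_file() above; re.S (DOTALL) lets '.*?'
    # cross the newlines inside multi-line translation bodies.
    lang = re.search(r'<translationbundle lang="(.*?)">', text).group(1)
    strings = dict(re.findall(r'<translation id="(.*?)">(.*?)</translation>',
                              text, re.S))
    return lang, strings
```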
1341###
1342
1343 class Converter(dict):
1344 """
1345 Given a grd template and its xtb translations,
1346 a/ exports gettext pot template and po translations,
1347 possibly by merging grd/xtb files from multiple branches
1348 or
1349 b/ imports and merges some gettext po translations,
1350 and exports xtb translations
1351 """
1352
1353 def __init__(self, template_filename, lang_mapping = None, date = None, debug = False,
1354 template_mapping = {}, html_output = False, branches = None):
1355 self.debug = debug
1356 self.translations = {}
1357 self.errors = 0
1358 self.template_changes = 0
1359 self.translations_changes = 0
1360 self.lang_mapping = lang_mapping
1361 self.template_mapping = template_mapping
1362 self.file_mapping = {}
1363 self.html_output = html_output
1364 self.stats = {}
1365 self.branches = branches if branches is not None else [ { 'branch': 'default', 'dir': os.getcwd(), 'grd': template_filename } ]
1366
1367 # read a grd template from a file
1368 self.template = GrdFile(self.branches[0]['grd'], date, lang_mapping = self.lang_mapping, debug = self.debug,
1369 branch_name = self.branches[0]['branch'], branch_dir = self.branches[0]['dir'])
1370 self.file_mapping['grd'] = { 'src': self.branches[0]['grd'],
1371 'branches': { self.branches[0]['branch']: self.branches[0]['dir'] } }
1372 if 'mapped_grd' in self.branches[0]:
1373 self.file_mapping['grd']['mapped_grd'] = self.branches[0]['mapped_grd']
1374 self.template.import_file()
1375 self.template_pot = None
1376 for lang, file in zip(self.template.get_supported_langs(),
1377 self.template.get_supported_lang_filenames()):
1378 try:
1379 # also read all the xtb files referenced by this grd template
1380 rfile = os.path.join(self.branches[0]['dir'] , file)
1381 xtb = XtbFile(lang, file, self.template, date = self.template.get_mtime(file), debug = self.debug,
1382 branch_name = self.branches[0]['branch'], branch_dir = self.branches[0]['dir'])
1383 xtb.import_file()
1384
1385 self.file_mapping['lang_' + lang] = { 'src': file,
1386 'branches': { self.branches[0]['branch']: self.branches[0]['dir'] } }
1387 self.stats[lang] = { 'strings': self.template.get_supported_strings_count(lang),
1388 'translated_upstream': 0,
1389 'changed_in_gettext': 0,
1390 'rejected': 0
1391 }
1392 self.template.merge_translations(lang, xtb)
1393 self.translations[lang] = xtb
1394 except Exception, e:
1395 print "Skipping a XTB file, ERROR while importing xtb %s from grd file %s: %s" % (file, self.branches[0]['grd'], str(e))
1396
1397 # read other grd templates
1398 if len(self.branches) > 1:
1399 for branch in self.branches[1:]:
1400 if self.debug:
1401 print "merging %s from branch '%s' from %s" % (branch['grd'], branch['branch'], branch['dir'])
1402 template = GrdFile(branch['grd'], date, lang_mapping = self.lang_mapping, debug = self.debug,
1403 branch_name = branch['branch'], branch_dir = branch['dir'])
1404 self.file_mapping['grd']['branches'][branch['branch']] = branch['dir']
1405 template.import_file()
1406 self.template.merge_template(template, newer_preferred = False)
1407 for lang, file in zip(template.get_supported_langs(),
1408 template.get_supported_lang_filenames()):
1409 xtb = XtbFile(lang, file, self.template, date = template.get_mtime(file), debug = self.debug,
1410 branch_name = branch['branch'], branch_dir = branch['dir'])
1411 if 'lang_' + lang not in self.file_mapping:
1412 self.file_mapping['lang_' + lang] = { 'src': file, 'branches': {} }
1413 self.file_mapping['lang_' + lang]['branches'][branch['branch']] = branch['dir']
1414 # TODO: stats
1415 xtb.import_file()
1416 if lang not in self.translations:
1417 if self.debug:
1418 print "Add lang '%s' as master xtb for alt branch '%s'" % (lang, branch['branch'])
1419 self.translations[lang] = xtb
1420 self.template.merge_translations(lang, xtb, master_xtb = self.translations[lang],
1421 newer_preferred = False)
1422
1423 def export_gettext_files(self, directory):
1424 fname = self.file_mapping['grd']['mapped_grd'] \
1425 if 'mapped_grd' in self.file_mapping['grd'] else self.template.filename
1426 name = os.path.splitext(os.path.basename(fname))[0]
1427 if directory is not None:
1428 directory = os.path.join(directory, name)
1429 if not os.path.isdir(directory):
1430 os.makedirs(directory, 0755)
1431 filename = os.path.join(directory, name + ".pot")
1432 else:
1433 filename = os.path.splitext(fname)[0] + ".pot"
1434 # create a pot template and merge the grd strings into it
1435 self.template_pot = PotFile(filename, date = self.template.mtime, debug = self.debug)
1436 self.template_pot.import_grd(self.template)
1437 # write it to a file
1438 self.template_changes += self.template_pot.export_file(directory = directory)
1439
1440 # do the same for all langs (xtb -> po)
1441 for lang in self.translations:
1442 gtlang = self.template.mapped_langs[lang]['gettext']
1443 file = os.path.join(os.path.dirname(filename), gtlang + ".po")
1444 po = PoFile(gtlang, file, self.template_pot,
1445 date = self.translations[lang].translation_date, debug = self.debug)
1446 po.import_xtb(self.translations[lang])
1447 self.translations_changes += po.export_file(directory)
1448
1449 def export_grit_xtb_file(self, lang, directory):
1450 name = os.path.splitext(os.path.basename(self.template.filename))[0]
1451 file = os.path.join(directory, os.path.basename(self.template.supported_langs[lang]))
1452 if len(self.translations[lang].strings.keys()) > 0:
1453 if 'lang_' + lang in self.file_mapping:
1454 self.file_mapping['lang_' + lang]['dst'] = file
1455 else:
1456 self.file_mapping['lang_' + lang] = { 'src': None, 'dst': file }
1457 self.translations[lang].export_file(filename = file)
1458
1459 def export_grit_files(self, directory, langs):
1460 grd_dst = os.path.join(directory, os.path.basename(self.template.filename))
1461 if len(self.translations.keys()) == 0:
1462 if self.debug:
1463 print "no translation at all, nothing to export here (template: %s)" % self.template.filename
1464 return
1465 if not os.path.isdir(directory):
1466 os.makedirs(directory, 0755)
1467 # 'langs' may contain langs for which this template no longer has translations.
1468 # Those need to be dropped from the grd file
1469 self.template.export_file(filename = grd_dst, global_langs = langs, langs = self.translations.keys())
1470 self.file_mapping['grd']['dst'] = grd_dst
1471 self.file_mapping['grd']['dir'] = directory[:-len(os.path.dirname(self.template.filename)) - 1]
1472 for lang in self.translations:
1473 prefix = self.template.supported_langs[lang]
1474 fdirectory = os.path.normpath(os.path.join(self.file_mapping['grd']['dir'], os.path.dirname(prefix)))
1475 if not os.path.isdir(fdirectory):
1476 os.makedirs(fdirectory, 0755)
1477 self.export_grit_xtb_file(lang, fdirectory)
1478
1479 def get_supported_strings_count(self):
1480 return len(self.template.supported_ids.keys())
1481
1482 def compare_translations(self, old, new, id, lang):
1483 # strip leading and trailing whitespaces from the upstream strings
1484 # (this should be done upstream)
1485 old = old.strip()
1486 if old != new:
1487 s = self.template.supported_ids[id]['ids'][0]['val'] if 'ids' in self.template.supported_ids[id] else "<none?>"
1488 if self.debug:
1489 print "Found a different translation for id %s in lang '%s':\n string: \"%s\"\n " \
1490 "upstream: \"%s\"\n launchpad: \"%s\"\n" % (id, lang, s, old, new)
1491 return old == new
1492
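import_gettext_po_file() below rejects a translation when its `<ph name="..."/>` placeholders do not match the upstream string's. The check splits on the placeholder pattern with a capture group (so the markers survive the split) and diffs the two sets. A standalone sketch of that check (function name is mine):

```python
import re

PH = re.compile(r'(<ph name=".*?"/>)')

def placeholder_mismatch(upstream, translation):
    # Keep only the pieces that are whole <ph .../> markers on each side,
    # then report placeholders the translation lost or invented.
    uvars = [e for e in PH.split(upstream) if PH.fullmatch(e)]
    tvars = [e for e in PH.split(translation) if PH.fullmatch(e)]
    lost = sorted(set(uvars) - set(tvars))
    created = sorted(set(tvars) - set(uvars))
    return lost, created
```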
1493 def import_gettext_po_file(self, lang, filename):
1494 """ import a single lang file into the current translations set,
1495 matching the current template. Could be useful to merge the upstream
1496 and launchpad translations, or to merge strings from another project
1497 (like webkit) """
1498 po = PoFile(self.template.mapped_langs[lang]['gettext'], filename, self.template,
1499 date = self.template.get_mtime(filename), debug = self.debug)
1500 po.import_file()
1501 # no need to continue if there are no translations in this po
1502 translated_count = 0
1503 for s in po.strings:
1504 if s['string'] != '""' and s['translation'] != '""':
1505 translated_count += 1
1506 if translated_count == 0:
1507 if self.debug:
1508 print "No translation found for lang %s in %s" % (lang, filename)
1509 return
1510 if lang not in self.translations:
1511 # assuming the filename should be third_party/launchpad_translations/<template_name>_<lang>.xtb
1512 # (relative to $SRC), we need it relatively to the grd directory
1513 tname = os.path.splitext(os.path.basename(self.template.filename))[0]
1514 f = os.path.normpath(os.path.join('third_party/launchpad_translations',
1515 tname + '_' + self.template.mapped_langs[lang]['xtb_file'] + '.xtb'))
1516 self.translations[lang] = XtbFile(lang, f, self.template, date = po.mtime, debug = self.debug)
1517 self.template.supported_langs[lang] = f # *sigh*
1518
1519 lp669831_skipped = 0
1520 for string in po.strings:
1521 if 'id' not in string:
1522 continue # PO header
1523 id = string['id']
1524 if id in self.template.supported_ids:
1525 if 'conditions' in string:
1526 # test the lang against all those conditions. If at least one passes, we need
1527 # the string
1528 found = False
1529 for c in string['conditions']:
1530 found |= EvalConditions().lang_eval(c, lang)
1531 if found is False:
1532 self.template.update_stats(lang, skipped_lang = 1)
1533 if self.debug:
1534 print "Skipped string (lang condition) for %s/%s: %s" % \
1535 (os.path.splitext(os.path.basename(self.template.filename))[0],
1536 lang, repr(string))
1537 continue
1538 # workaround bug https://bugs.launchpad.net/rosetta/+bug/669831
1539 ustring = StringCvt().gettext2xtb(string['string'])
1540 gt_translation = string['translation'][1:-1].replace('"\n"', '')
1541 string['translation'] = StringCvt().gettext2xtb(string['translation'])
1542 if string['translation'] != "":
1543 while string['translation'][-1:] == '\n' and ustring[-1:] != '\n':
1544 # prevent the `msgid' and `msgstr' entries do not both end with '\n' error
1545 if self.debug:
1546 print "Found unwanted \\n at the end of translation id " + id + " lang " + self.lang + ". Dropped"
1547 string['translation'] = string['translation'][:-1]
1548 while string['translation'][0] == '\n' and ustring[0] != '\n':
1549 # prevent the `msgid' and `msgstr' entries do not both begin with '\n' error
1550 if self.debug:
1551 print "Found unwanted \\n at the begin of translation id " + id + " lang " + self.lang + ". Dropped"
1552 string['translation'] = string['translation'][1:]
1553 grit_str = StringCvt().decode_xml_entities(self.template.supported_ids[id]['ids'][0]['val'])
1554 if False and 'ids' in self.template.supported_ids[id] and \
1555 ustring != grit_str:
1556 # the string for this id is no longer the same, skip it
1557 lp669831_skipped += 1
1558 if self.debug:
1559 print "lp669831_skipped:\n lp: '%s'\n grd: '%s'" % (ustring, grit_str)
1560 continue
1561 # check for xml errors when '<' or '>' are in the string
1562 if string['translation'].find('<') >= 0 or \
1563 string['translation'].find('>') >= 0:
1564 try:
1565 # try to parse it with minidom (it's slow!!), and skip if it fails
1566 s = u"<x>" + string['translation'] + u"</x>"
1567 dom = minidom.parseString(s.encode('utf-8'))
1568 except Exception as inst:
1569 print "Parse error in '%s/%s' for id %s. Skipped.\n%s\n%s" % \
1570 (os.path.splitext(os.path.basename(self.template.filename))[0], lang, id,
1571 repr(string['translation']), inst)
1572 continue
1573 # if the upstream string is not empty, but the contributed string is, keep
1574 # the upstream string untouched
1575 if string['translation'] == '':
1576 continue
1577 # check if we have the same variables in both the upstream string and its
1578 # translation. Otherwise, complain and reject the translation
1579 if 'ids' in self.template.supported_ids[id]:
1580 uvars = sorted([e for e in re.split('(<ph name=".*?"/>)', self.template.supported_ids[id]['ids'][0]['val']) \
1581 if re.match('^<ph name=".*?"/>$', e)])
1582 tvars = sorted([e for e in re.split('(<ph name=".*?"/>)', string['translation'])\
1583 if re.match('^<ph name=".*?"/>$', e)])
1584 lostvars = list(set(uvars).difference(set(tvars)))
1585 createdvars = list(set(tvars).difference(set(uvars)))
1586 if len(lostvars) or len(createdvars):
1587 template = os.path.splitext(os.path.basename(self.template.filename))[0].replace('_', '-')
1588 self.errors += 1
1589 if self.html_output:
1590 print "<div class='error'>[<a id='pherr-%s-%d' href='javascript:toggle(\"pherr-%s-%d\");'>+</a>] " \
1591 "<b>ERROR</b>: Found mismatching placeholder variables in string id %s of <b>%s</b> lang <b>%s</b>" % \
1592 (template, self.errors, template, self.errors, id, template, lang)
1593 else:
1594 print "ERROR: Found mismatching placeholder variables in string id %s of %s/%s:" % \
1595 (id, template, lang)
1596 url = 'https://translations.launchpad.net/chromium-browser/translations/+pots/%s/%s/+translate?batch=10&show=all&search=%s' % \
1597 (template, self.template.mapped_langs[lang]['gettext'], urllib.quote(gt_translation.encode('utf-8')))
1598 if self.html_output:
1599 print "<div id='pherr-%s-%d-t' style='display: none'>\n" \
1600 "<fieldset><legend>Details</legend><p><ul>" % (template, self.errors)
1601 print "<li> <a href='%s'>this string in Launchpad</a>\n" % url
1602 if len(lostvars):
1603 print " <li> expected but not found: <code>%s</code>" % " ".join([ re.sub(r'<ph name="(.*?)"/>', r'%{\1}', s) for s in lostvars ])
1604 if len(createdvars):
1605 print " <li> found but not expected: <code>%s</code>" % " ".join([ re.sub(r'<ph name="(.*?)"/>', r'%{\1}', s) for s in createdvars ])
1606 print "</ul><table border='1'>" \
1607 "<tr><th rowspan='2'>GetText</th><th>template</th><td><code>%s</code></td></tr>\n" \
1608 "<tr><th>translation</th><td><code>%s</code></td></tr>\n" \
1609 "<tr><th rowspan='2'>Grit</th><th>template</th><td><code>%s</code></td></tr>\n" \
1610 "<tr><th>translation</th><td><code>%s</code></td></tr>\n" \
1611 "</table><p> => <b>translation skipped</b>\n" % \
1612 (string['string'][1:-1].replace('"\n"', '').replace('<', '&lt;').replace('>', '&gt;'),
1613 gt_translation.replace('<', '&lt;').replace('>', '&gt;'),
1614 self.template.supported_ids[id]['ids'][0]['val'].replace('<', '&lt;').replace('>', '&gt;'),
1615 string['translation'].replace('<', '&lt;').replace('>', '&gt;'))
1616 print "</fieldset></div></div>"
1617 else:
1618 if len(lostvars):
1619 print " - expected but not found: " + " ".join(lostvars)
1620 if len(createdvars):
1621 print " - found but not expected: " + " ".join(createdvars)
1622 print " string: '%s'\n translation: '%s'\n gettext: '%s'\n url: %s\n => translation skipped\n" % \
1623 (self.template.supported_ids[id]['ids'][0]['val'], string['translation'], gt_translation, url)
1624 continue
1625 # check if the translated string is the same
1626 if 'lang' in self.template.supported_ids[id] and \
1627 lang in self.template.supported_ids[id]['lang']:
1628 # compare
1629 if self.compare_translations(self.template.supported_ids[id]['lang'][lang],
1630 string['translation'], id, lang):
1631 continue # it's the same
1632 if id in self.translations[lang].strings:
1633 # already added from a previously merged gettext po file
1634 if self.debug:
1635 print "already added from a previously merged gettext po file for" + \
1636 " template %s %s id %s in lang %s: %s" % \
1637 (self.template.branch_name, self.template.filename,
1638 id, lang, repr(string['translation']))
1639 # compare
1640 if self.compare_translations(self.translations[lang].strings[id],
1641 string['translation'], id, lang):
1642 continue # it's the same
1643 # update it..
1644 if self.debug:
1645 print "updated string for template %s %s id %s in lang %s: %s" % \
1646 (self.template.branch_name, self.template.filename, id, lang,
1647 repr(string['translation']))
1648 self.template.update_stats(lang, updated = 1)
1649 self.translations[lang].strings[id] = string['translation']
1650 self.translations[lang].strings_updated += 1
1651 elif id in self.translations[lang].strings:
1652 # already added from a previously merged gettext po file
1653 if self.debug:
1654 print "already added from a previously merged gettext po file for" + \
1655 " template %s %s id %s in lang %s: %s" % \
1656 (self.template.branch_name, self.template.filename,
1657 id, lang, repr(string['translation']))
1658 # compare
1659 if self.compare_translations(self.translations[lang].strings[id],
1660 string['translation'], id, lang):
1661 continue # it's the same
1662 # update it..
1663 self.translations[lang].strings[id] = string['translation']
1664 else:
1665 # add
1666 if self.debug:
1667 print "add new string for template %s %s id %s in lang %s: %s" % \
1668 (self.template.branch_name, self.template.filename,
1669 id, lang, repr(string['translation']))
1670 self.template.update_stats(lang, new = 1)
1671 self.translations[lang].strings[id] = string['translation']
1672 self.translations[lang].strings_new += 1
1673 if self.debug and lp669831_skipped > 0:
1674 print "lp669831: skipped %s bogus/obsolete strings from %s" % \
1675 (lp669831_skipped, filename[filename[:filename.rfind('/')].rfind('/') + 1:])
1676
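The merge loop above rejects a translation whenever its `<ph name="..."/>` placeholder variables differ from the upstream string's, using a regex split plus set differences. A standalone sketch of that comparison (the function name and sample strings are illustrative, not part of the script):

```python
import re

PH_SPLIT = r'(<ph name=".*?"/>)'

def placeholder_mismatch(upstream, translation):
    """Return (expected-but-missing, found-but-unexpected) placeholder lists."""
    def placeholders(s):
        # re.split with a capturing group keeps the <ph/> tokens in the result
        return [e for e in re.split(PH_SPLIT, s)
                if re.match(r'^<ph name=".*?"/>$', e)]
    uvars = set(placeholders(upstream))
    tvars = set(placeholders(translation))
    return sorted(uvars - tvars), sorted(tvars - uvars)

placeholder_mismatch('Open <ph name="URL"/> in <ph name="APP"/>',
                     'Ouvrir <ph name="URL"/>')
# → (['<ph name="APP"/>'], [])
```

When either list is non-empty the script reports the mismatch (optionally as HTML) and skips the translation rather than shipping a string with broken variables.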
1677 def import_gettext_po_files(self, directory):
1678 fname = self.file_mapping['grd']['mapped_grd'] \
1679 if 'mapped_grd' in self.file_mapping['grd'] else self.template.filename
1680 template_name = os.path.splitext(os.path.basename(fname))[0]
1681 directory = os.path.join(directory, template_name)
1682 if not os.path.isdir(directory):
1683 if self.debug:
1684 print "WARN: Launchpad didn't export anything for template '%s' [%s]" % (template_name, directory)
1685 return
1686 for file in os.listdir(directory):
1687 base, ext = os.path.splitext(file)
1688 if ext != ".po":
1689 continue
1690 # 'base' is a gettext lang, map it
1691 lang = None
1692 for l in self.template.mapped_langs:
1693 if base == self.template.mapped_langs[l]['gettext']:
1694 lang = l
1695 break
1696 if lang is None: # most probably a new lang, map back
1697 lang = base.replace('_', '-')
1698 for l in self.lang_mapping:
1699 if lang == self.lang_mapping[l]:
1700 lang = l
1701 break
1702 flang = lang.replace('@', '-')
1703 self.template.mapped_langs[lang] = { 'xtb_file': flang, 'grit': lang, 'gettext': base }
1704 self.import_gettext_po_file(lang, os.path.join(directory, file))
1705 # remove from the supported langs list all langs with no translated strings
1706 # (to catch either empty 'po' files exported by Launchpad, or 'po' files
1707 # containing only obsolete or too new strings for this branch)
1708 dropped = []
1709 for lang in self.translations:
1710 if len(self.translations[lang].strings.keys()) == 0:
1711 if self.debug:
1712 print "no translation found for template '%s' and lang '%s'. lang removed from the supported lang list" % \
1713 (os.path.splitext(os.path.basename(self.template.filename))[0], lang)
1714 del(self.template.supported_langs[lang])
1715 dropped.append(lang)
1716 for lang in dropped:
1717 del(self.translations[lang])
1718
1719 def copy_grit_files(self, directory):
1720 fname = self.file_mapping['grd']['mapped_grd'] \
1721 if 'mapped_grd' in self.file_mapping['grd'] else self.template.filename
1722 dst = os.path.join(directory, os.path.dirname(fname))
1723 if not os.path.isdir(dst):
1724 os.makedirs(dst, 0755)
1725 shutil.copy2(fname, dst)
1726 for lang in self.template.supported_langs:
1727 dst = os.path.join(directory, os.path.dirname(self.translations[lang].filename))
1728 if not os.path.isdir(dst):
1729 os.makedirs(dst, 0755)
1730 shutil.copy2(self.translations[lang].filename, dst)
1731
1732 def create_patches(self, directory):
1733 if not os.path.isdir(directory):
1734 os.makedirs(directory, 0755)
1735 template_name = os.path.splitext(os.path.basename(self.template.filename))[0]
1736 patch = codecs.open(os.path.join(directory, "translations-" + template_name + ".patch"),
1737 "wb", encoding="utf-8")
1738 for e in sorted(self.file_mapping.keys()):
1739 if 'dst' not in self.file_mapping[e]:
1740 self.file_mapping[e]['dst'] = None
1741 if self.file_mapping[e]['src'] is not None and \
1742 self.file_mapping[e]['dst'] is not None and \
1743 filecmp.cmp(self.file_mapping[e]['src'], self.file_mapping[e]['dst']) == True:
1744 continue # files are the same
1745
1746 if self.file_mapping[e]['src'] is not None:
1747 fromfile = "old/" + self.file_mapping[e]['src']
1748 tofile = "new/" + self.file_mapping[e]['src']
1749 fromdate = datetime.fromtimestamp(self.template.get_mtime(
1750 self.file_mapping[e]['src'])).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1751 fromlines = codecs.open(self.file_mapping[e]['src'], 'rb', encoding="utf-8").readlines()
1752 else:
1753 fromfile = "old/" + self.file_mapping[e]['dst'][len(self.file_mapping['grd']['dir']) + 1:]
1754 tofile = "new/" + self.file_mapping[e]['dst'][len(self.file_mapping['grd']['dir']) + 1:]
1755 fromdate = datetime.fromtimestamp(0).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1756 fromlines = ""
1757 if self.file_mapping[e]['dst'] is not None:
1758 todate = datetime.fromtimestamp(self.template.get_mtime(
1759 self.file_mapping[e]['dst'])).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1760 tolines = codecs.open(self.file_mapping[e]['dst'], 'rb', encoding="utf-8").readlines()
1761 else:
1762 todate = datetime.fromtimestamp(0).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1763 tolines = ""
1764 diff = unified_diff(fromlines, tolines, fromfile, tofile,
1765 fromdate, todate, n=3)
1766 patch.write("diff -Nur %s %s\n" % (fromfile, tofile))
1767 s = ''.join(diff)
1768 # fix the diff so that older patch (< 2.6) doesn't fail on new files
1769 s = re.sub(r'@@ -1,0 ', '@@ -0,0 ', s)
1770 # ...and make sure patch is able to detect that files are being removed
1771 s = re.sub(r'(@@ \S+) \+1,0 @@', '\\1 +0,0 @@', s)
1772 patch.writelines(s)
1773 if s[-1:] != '\n':
1774 patch.write("\n\\ No newline at end of file\n")
1775 patch.close()
1776
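create_patches post-processes the difflib output so that older versions of patch(1) cope with file creation and removal. A standalone sketch of those two hunk-header rewrites (note: modern difflib already emits `-0,0` for an empty source range, so the first substitution matters for diffs produced by older difflib versions):

```python
import re

def fix_hunk_headers(diff_text):
    # new files: older patch(1) expects "@@ -0,0" rather than "@@ -1,0"
    diff_text = re.sub(r'@@ -1,0 ', '@@ -0,0 ', diff_text)
    # removed files: normalize "+1,0" to "+0,0" so patch(1) detects the removal
    diff_text = re.sub(r'(@@ \S+) \+1,0 @@', r'\1 +0,0 @@', diff_text)
    return diff_text

fix_hunk_headers('@@ -1,0 +1,3 @@')   # → '@@ -0,0 +1,3 @@'
fix_hunk_headers('@@ -1,3 +1,0 @@')   # → '@@ -1,3 +0,0 @@'
```

Ordinary hunks (non-zero ranges on both sides) pass through unchanged.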
1777 def update_supported_langs_in_grd(self, grd_in, grd_out, langs):
1778 fdi = codecs.open(grd_in, 'rb', encoding="utf-8")
1779 fdo = codecs.open(grd_out, 'wb', encoding="utf-8")
1780 # can't use minidom here as the file is manually generated and the
1781 # output will create big diffs. parse the source file line by line
1782 # and insert new langs in the <outputs> section (with type="data_package"
1783 # or type="js_map_format"). Leave everything else untouched
1784 # FIXME: this is mostly a copy of GrdFile::export_file()
1785 pak_found = False
1786 pak_saved = []
1787 has_ifs = False
1788 for line in fdi.readlines():
1789 if re.match(r'.*?<output filename=".*?" type="(data_package|js_map_format)"', line):
1790 pak_found = True
1791 pak_saved.append(line)
1792 continue
1793 if line.find('</outputs>') > 0:
1794 pak_found = False
1795 ours = langs[:]
1796 chunks = {}
1797 c = None
1798 pak_if = None
1799 pak_is_in_if = False
1800 for l in pak_saved:
1801 if l.find("<!-- ") > 0:
1802 c = l
1803 continue
1804 if l.find("<if ") > -1:
1805 c = l if c is None else c + l
1806 has_ifs = True
1807 pak_is_in_if = True
1808 continue
1809 if l.find("</if>") > -1:
1810 c = l if c is None else c + l
1811 pak_is_in_if = False
1812 continue
1813 m = re.match(r'.*?<output filename="(.*?)_([^_\.]+)\.(pak|js)" type="(data_package|js_map_format)" lang="(.*?)" />', l)
1814 if m is not None:
1815 x = { 'name': m.group(1), 'ext': m.group(3), 'lang': m.group(5), 'file_lang': m.group(2),
1816 'type': m.group(4), 'in_if': pak_is_in_if, 'line': l }
1817 if c is not None:
1818 x['comment'] = c
1819 c = None
1820 k = m.group(2) if m.group(2) != 'nb' else 'no'
1821 chunks[k] = x
1822 else:
1823 if c is None:
1824 c = l
1825 else:
1826 c += l
1827 is_in_if = False
1828 for lang in sorted(chunks.keys()):
1829 tlang = lang if lang != 'no' else 'nb'
1830 while len(ours) > 0 and ((ours[0] == 'nb' and 'no' < tlang) or (ours[0] != 'nb' and ours[0] < tlang)):
1831 if ours[0] in chunks:
1832 ours = ours[1:]
1833 continue
1834 if has_ifs and is_in_if is False:
1835 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
1836 f = "%s_%s.%s" % (chunks[lang]['name'], ours[0], chunks[lang]['ext'])
1837 fdo.write(' %s<output filename="%s" type="%s" lang="%s" />\n' % \
1838 (' ' if has_ifs else '', f, chunks[lang]['type'], ours[0]))
1839 is_in_if = True
1840 if has_ifs and chunks[lang]['in_if'] is False:
1841 if 'comment' not in chunks[lang] or chunks[lang]['comment'].find('</if>') == -1:
1842 fdo.write(' </if>\n')
1843 is_in_if = False
1844 ours = ours[1:]
1845 if 'comment' in chunks[lang]:
1846 for s in chunks[lang]['comment'].split('\n')[:-1]:
1847 if chunks[lang]['in_if'] is True and is_in_if and s.find('<if ') > -1:
1848 continue
1849 if s.find('<!-- No translations available. -->') > -1:
1850 continue
1851 fdo.write(s + '\n')
1852 fdo.write(chunks[lang]['line'])
1853 if ours[0] == tlang:
1854 ours = ours[1:]
1855 is_in_if = chunks[lang]['in_if']
1856 if len(chunks.keys()) > 0:
1857 while len(ours) > 0:
1858 f = "%s_%s.%s" % (chunks[lang]['name'], ours[0], chunks[lang]['ext'])
1859 if has_ifs and is_in_if is False:
1860 fdo.write(' <if expr="pp_ifdef(\'use_third_party_translations\')">\n')
1861 fdo.write(' %s<output filename="%s" type="data_package" lang="%s" />\n' % \
1862 (' ' if has_ifs else '', f, ours[0]))
1863 is_in_if = True
1864 ours = ours[1:]
1865 if has_ifs and is_in_if:
1866 fdo.write(' </if>\n')
1867 is_in_if = False
1868 if c is not None:
1869 for s in c.split('\n')[:-1]:
1870 if s.find('<!-- No translations available. -->') > -1:
1871 continue
1872 if s.find('</if>') > -1:
1873 continue
1874 fdo.write(s + '\n')
1875 if pak_found:
1876 pak_saved.append(line)
1877 continue
1878 fdo.write(line)
1879 fdi.close()
1880 fdo.close()
1881
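update_supported_langs_in_grd recognizes the per-locale `<output>` entries with the regex above. As a standalone illustration of how the five capture groups split a filename (the filename is a made-up example in the grd style):

```python
import re

OUTPUT_RE = (r'.*?<output filename="(.*?)_([^_\.]+)\.(pak|js)"'
             r' type="(data_package|js_map_format)" lang="(.*?)" />')

line = '  <output filename="generated_resources_fr.pak" type="data_package" lang="fr" />'
m = re.match(OUTPUT_RE, line)
# the lazy (.*?) backtracks until the rest of the pattern fits, so the split
# lands on the last underscore: the name keeps its internal underscores and
# the second group is the locale embedded in the filename
chunk = {'name': m.group(1), 'file_lang': m.group(2), 'ext': m.group(3),
         'type': m.group(4), 'lang': m.group(5)}
# chunk['name'] == 'generated_resources', chunk['file_lang'] == 'fr'
```

The method keys these chunks by locale (mapping 'nb' back to 'no') and then merges the sorted list of Launchpad locales into the `<outputs>` section, wrapping new entries in the use_third_party_translations `<if>` when the file already uses such conditionals.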
1882 def create_build_gyp_patch(self, directory, build_gyp_file, other_grd_files, nlangs,
1883 whitelisted_new_langs = None):
1884 # read the list of langs supported upstream
1885 fd = open(build_gyp_file, "r")
1886 data = fd.read()
1887 fd.close()
1888 r = data[data.find("'locales':"):]
1889 olangs = sorted(re.findall("'(.*?)'", r[r.find('['):r.find(']')]))
1890 # check for an optional use_third_party_translations list of locales
1891 tpt = data.find('use_third_party_translations==1')
1892 if tpt > 0:
1893 tpt += data[tpt:].find("'locales':")
1894 r = data[tpt:]
1895 tptlangs = sorted(re.findall("'(.*?)'", r[r.find('['):r.find(']')]))
1896 if nlangs == sorted(tptlangs + olangs):
1897 return tptlangs
1898 else:
1899 if nlangs == olangs:
1900 return []
1901 # check if we need to only activate some whitelisted new langs
1902 xlangs = None
1903 nnlangs = [ x for x in nlangs if x not in olangs ]
1904 if whitelisted_new_langs is not None:
1905 if tpt > 0:
1906 nlangs = [ x for x in nlangs if x not in olangs and x in whitelisted_new_langs ]
1907 else:
1908 xlangs = [ x for x in nlangs if x not in olangs and x not in whitelisted_new_langs ]
1909 nlangs = [ x for x in nlangs if x in olangs or x in whitelisted_new_langs ]
1910 elif tpt > 0:
1911 nlangs = [ x for x in nlangs if x not in olangs ]
1912
1913 # we need a patch
1914 if tpt > 0:
1915 pos = tpt + data[tpt:].find('[')
1916 end = data[:pos + 1]
1917 ndata = end[:]
1918 else:
1919 pos = data.find("'locales':")
1920 begin = data[pos:]
1921 end = data[:pos + begin.find('\n')]
1922 ndata = end[:]
1923 end = data[pos + data[pos:].find(']'):]
1924
1925 # list of langs, folded
1926 if len(nlangs) > 9:
1927 ndata += '\n' + \
1928 '\n'.join(textwrap.wrap("'" + "', '".join(nlangs) + "'",
1929 break_long_words=False, width=76,
1930 drop_whitespace=False,
1931 expand_tabs=False,
1932 replace_whitespace=False,
1933 initial_indent=' ',
1934 subsequent_indent=' ',
1935 break_on_hyphens=False)) + '\n '
1936 else:
1937 ndata += "'%s'" % "', '".join(nlangs)
1938
1939 ndata += end
1940
1941 # write the patch
1942 fromfile = "old/" + build_gyp_file
1943 tofile = "new/" + build_gyp_file
1944 fromdate = datetime.fromtimestamp(self.template.get_mtime(build_gyp_file)).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1945 fromlines = [ x for x in re.split('(.*\n?)', data) if x != '' ]
1946 todate = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1947 tolines = [ x for x in re.split('(.*\n?)', ndata) if x != '' ]
1948 patch = codecs.open(os.path.join(directory, "build.patch"), "wb", encoding="utf-8")
1949 diff = unified_diff(fromlines, tolines, fromfile, tofile, fromdate, todate, n=3)
1950 patch.write("diff -Nur %s %s\n" % (fromfile, tofile))
1951 patch.writelines(''.join(diff))
1952
1953 for grd in other_grd_files:
1954 grd_out = os.path.join(directory, os.path.basename(grd))
1955 self.update_supported_langs_in_grd(grd, grd_out, langs)
1956 if filecmp.cmp(grd, grd_out) == True:
1957 os.unlink(grd_out)
1958 continue # files are the same
1959 # add it to the patch
1960 fromfile = "old/" + grd
1961 tofile = "new/" + grd
1962 fromdate = datetime.fromtimestamp(self.template.get_mtime(grd)).strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1963 fromlines = codecs.open(grd, 'rb', encoding="utf-8").readlines()
1964 todate = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f000 +0000")
1965 tolines = codecs.open(grd_out, 'rb', encoding="utf-8").readlines()
1966 diff = unified_diff(fromlines, tolines, fromfile, tofile, fromdate, todate, n=3)
1967 patch.write("diff -Nur %s %s\n" % (fromfile, tofile))
1968 patch.writelines(''.join(diff))
1969 os.unlink(grd_out)
1970 patch.close()
1971 return nnlangs
1972
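create_build_gyp_patch folds the quoted gyp 'locales' list by line width (with textwrap) once it exceeds nine entries, instead of emitting one very long line. A sketch of that folding (the helper name and indent widths are illustrative; the script hard-codes its own indentation):

```python
import textwrap

def fold_locales(locales, width=76):
    # quote and comma-join, gyp-style: 'de', 'en-US', 'fr'
    quoted = "'" + "', '".join(locales) + "'"
    if len(locales) <= 9:
        return quoted
    # fold long lists by line width, never breaking inside a locale name
    return '\n' + '\n'.join(textwrap.wrap(
        quoted, width=width, break_long_words=False, break_on_hyphens=False,
        drop_whitespace=False, expand_tabs=False, replace_whitespace=False,
        initial_indent=' ' * 10, subsequent_indent=' ' * 10)) + '\n '

fold_locales(['de', 'en-US', 'fr'])   # → "'de', 'en-US', 'fr'"
```

break_on_hyphens=False matters here: without it textwrap would happily split hyphenated locales such as 'en-US' across lines.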
1973def usage():
1974 print """
1975Usage: %s [options] [grd_file [more_grd_files]]
1976
1977 Convert Chromium translation files (grd/xtb) into gettext files (pot/po) and back
1978
1979 options could be:
1980 -d | --debug debug mode
1981 -v | --verbose verbose mode
1982 -h | --help this help screen
1983
1984 --export-gettext dir
1985 export pot/po gettext files to dir
1986
1987 --import-gettext dir[,dir2][...]
1988 import gettext pot/po files from those directories.
1989 Directories must be ordered from the oldest to
1990 the freshest. Only strings different from the grit
1991 (upstream) translations are considered.
1992
1993 --import-grit-branch name:dir:grd1[,grd2][...]
1994 import the Grit files for this branch from this
1995 directory. --import-grit-branch could be used several
1996 times, and then, branches must be specified from the
1997 freshest (trunk) to the most stable ones.
1998 The default value is trunk:<cwd>
1999 Note: must not be used along with --export-grit
2000
2001 --export-grit dir
2002 export grd/xtb grit files to dir
2003
2004 --copy-grit dir copy the src grit files containing strings to dir
2005 (useful to create diffs after --export-grit)
2006
2007 --whitelisted-new-langs lang1[,lang2][..]
2008 comma separated list of new langs that have to be enabled
2009 (assuming they have some strings translated). The default
2010 is to enable all new langs, but for stable builds, a good
2011 enough coverage is preferred
2012
2013 --create-patches dir
2014 create unified patches per template in dir
2015 (only useful after --export-grit)
2016
2017 --build-gyp-file file
2018 location of the build/common.gypi file, used only
2019 with --create-patches to add all new langs
2020 for which we merged translated strings
2021
2022 --other-grd-files file1[,file2][..]
2023 comma separated list of grd files to also patch
2024 to add new langs for (see --build-gyp-file)
2025
2026 --html-output produce some nice HTML output (on stdout)
2027
2028 --json-branches-info file
2029 location of a json file containing the url, revision
2030 and last date of both the upstream branch and
2031 Launchpad export. Optionally used in the html output
2032
2033 --map-template-names new1=old1[,new2=old2][...]
2034 comma separated list of template names mappings.
2035 It is useful when upstream renames a grd file in a branch,
2036 to preserve the old name in gettext for the older branches
2037
2038 --landable-templates template1[,template2][...]
2039 comma separated list of templates that are landable upstream
2040 for all langs
2041
2042 --unlandable-templates template1[,template2][...]
2043 comma separated list of templates that are not landable upstream,
2044 even for new langs
2045
2046 --test-strcvt run the grit2gettext2grit checker
2047 --test-conditions run the conditions evaluation checker
2048
2049""" % sys.argv[0].rpartition('/')[2]
2050
2051if '__main__' == __name__:
2052 sys.stdout = codecs.getwriter('utf8')(sys.stdout)
2053 try:
2054 opts, args = getopt.getopt(sys.argv[1:], "dhv",
2055 [ "test-strcvt", "test-conditions", "debug", "verbose", "help", "copy-grit=",
2056 "import-grit-branch=", "export-gettext=", "import-gettext=", "export-grit=",
2057 "create-patches=", "build-gyp-file=", "other-grd-files=",
2058 "landable-templates=", "unlandable-templates=", "map-template-names=",
2059 "whitelisted-new-langs=", "html-output", "json-branches-info=" ])
2060 except getopt.GetoptError, err:
2061 print str(err)
2062 usage()
2063 sys.exit(2)
2064
2065 verbose = False
2066 debug = False
2067 html_output = False
2068 outdir = None
2069 indir = None
2070 export_gettext = None
2071 import_gettext = None
2072 export_grit = None
2073 copy_grit = None
2074 create_patches = None
2075 build_gyp_file = None
2076 json_info = None
2077 other_grd_files = []
2078 whitelisted_new_langs = None
2079 templatenames_mapping = {}
2080 landable_templates = []
2081 unlandable_templates = []
2082 branches = None
2083 nbranches = []
2084 for o, a in opts:
2085 if o in ("-v", "--verbose"):
2086 verbose = True
2087 elif o in ("-h", "--help"):
2088 usage()
2089 sys.exit()
2090 elif o in ("-d", "--debug"):
2091 debug = True
2092 elif o == "--import-grit-branch":
2093 if branches is None:
2094 branches = {}
2095 branch, dir, grds = a.split(':')
2096 for grd in grds.split(','):
2097 name = os.path.basename(grd)
2098 if name not in branches:
2099 branches[name] = []
2100 branches[name].append({ 'branch': branch, 'dir': dir, 'grd': grd })
2101 if branch not in nbranches:
2102 nbranches.append(branch)
2103 elif o == "--export-gettext":
2104 export_gettext = a
2105 elif o == "--import-gettext":
2106 import_gettext = a.split(",")
2107 elif o == "--export-grit":
2108 export_grit = a
2109 elif o == "--copy-grit":
2110 copy_grit = a
2111 elif o == "--whitelisted-new-langs":
2112 whitelisted_new_langs = a.split(",")
2113 elif o == "--create-patches":
2114 create_patches = a
2115 elif o == "--build-gyp-file":
2116 build_gyp_file = a
2117 elif o == "--other-grd-files":
2118 other_grd_files = a.split(',')
2119 elif o == "--html-output":
2120 html_output = True
2121 elif o == "--json-branches-info":
2122 json_info = a
2123 elif o == "--landable-templates":
2124 landable_templates = a.split(",")
2125 elif o == "--unlandable-templates":
2126 unlandable_templates = a.split(",")
2127 elif o == "--map-template-names":
2128 for c in a.split(','):
2129 x = c.split('=')
2130 templatenames_mapping[x[0]] = x[1]
2131 elif o == "--test-strcvt":
2132 StringCvt().test()
2133 sys.exit()
2134 elif o == "--test-conditions":
2135 EvalConditions().test()
2136 sys.exit()
2137 else:
2138 assert False, "unhandled option"
2139
2140 if branches is None and len(args) != 0:
2141 branches = {}
2142 for arg in args:
2143 branches[os.path.basename(arg)] = [ { 'branch': 'default', 'dir': os.getcwd(), 'grd': arg } ]
2144 if branches is None:
2145 print "Please specify at least one grd file or use --import-grit-branch"
2146 usage()
2147 sys.exit(2)
2148
2149 # re-map the templates, if needed
2150 for grd in templatenames_mapping.keys():
2151 new = os.path.basename(grd)
2152 old = os.path.basename(templatenames_mapping[grd])
2153 if new in branches:
2154 if old not in branches:
2155 branches[old] = []
2156 for branch in branches[new]:
2157 branch['mapped_grd'] = old
2158 branches[old].extend(branches[new])
2159 # re-sort the branches
2160 branches[old] = sorted(branches[old], lambda x,y: cmp(nbranches.index(x['branch']), nbranches.index(y['branch'])))
2161 del(branches[new])
2162
2163 if html_output:
2164 print """\
2165<html>
2166<head><meta charset="UTF-8">
2167</head><body>
2168<style type="text/css">
2169body {
2170 font-family: UbuntuBeta,Ubuntu,"Bitstream Vera Sans","DejaVu Sans",Tahoma,sans-serif;
2171}
2172div#legend {
2173 float: left;
2174}
2175fieldset {
2176 border-width: 1px;
2177 border-color: #f0f0f0;
2178}
2179div#legend fieldset, div#branches fieldset {
2180 border-width: 0px;
2181}
2182legend {
2183 font-size: 80%;
2184}
2185div#branches {
2186 float: left;
2187 padding-left: 40px;
2188}
2189div#branches td {
2190 padding-right: 5px;
2191}
2192div#stats {
2193 padding-top: 5px;
2194 clear: both;
2195}
2196a {
2197 text-decoration: none;
2198}
2199a.l:link, a.l:visited {
2200 color: black;
2201}
2202.error {
2203 font-size: 90%;
2204}
2205div.error a {
2206 font-family: monospace;
2207 font-size: 120%;
2208}
2209table {
2210 border-collapse: collapse;
2211 border-spacing: 1px;
2212 font-size: 0.9em;
2213}
2214th {
2215 font-weight: bold;
2216 color: #666;
2217 padding-right: 5px;
2218}
2219th, td {
2220 border: 1px #d2d2d2;
2221 border-style: solid;
2222 padding-left: 4px;
2223 padding-top: 0px;
2224 padding-bottom: 0px;
2225}
2226td.d {
2227 font-size: 90%;
2228 text-align: right;
2229}
2230td.n {
2231 background: #FFA;
2232}
2233.lang {
2234 font-weight: bold;
2235 padding-left: 0.5em;
2236 padding-right: 0.5em;
2237 white-space: nowrap;
2238}
2239.progress_bar {
2240 width: 100px; overflow: hidden; position: relative; padding: 0px;
2241}
2242.pb_label {
2243 text-align: center; width: 100%;
2244 position: absolute; z-index: 1001; left: 4px; top: -2px; color: white; font-size: 0.7em;
2245}
2246.pb_label2 {
2247 text-align: center; width: 100%;
2248 position: absolute; z-index: 1000; left: 5px; top: -1px; color: black; font-size: 0.7em;
2249}
2250.green_gradient {
2251 height: 1em; position: relative; float: left;
2252 background: #00ff00;
2253 background: -moz-linear-gradient(top, #00ff00, #007700);
2254 background: -webkit-gradient(linear, left top, left bottom, from(#00ff00), to(#007700));
2255 filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr='#00ff00', EndColorStr='#007700', GradientType=0);
2256}
2257.red_gradient {
2258 height: 1em; position: relative; float: left;
2259 background: #ff8888;
2260 background: -moz-linear-gradient(top, #ff8888, #771111);
2261 background: -webkit-gradient(linear, left top, left bottom, from(#ff8888), to(#771111));
2262 filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr='#ff8888', EndColorStr='#771111', GradientType=0);
2263}
2264.blue_gradient {
2265 height: 1em; position: relative; float: left;
2266 background: #62b0dd;
2267 background: -moz-linear-gradient(top, #62b0dd, #1f3d4a);
2268 background: -webkit-gradient(linear, left top, left bottom, from(#62b0dd), to(#1f3d4a));
2269 filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr='#62b0dd', EndColorStr='#1f3d4a', GradientType=0);
2270}
2271.purple_gradient {
2272 height: 1em; position: relative; float: left;
2273 background: #b8a4ba;
2274 background: -moz-linear-gradient(top, #b8a4ba, #5c3765);
2275 background: -webkit-gradient(linear, left top, left bottom, from(#b8a4ba), to(#5c3765));
2276 filter: progid:DXImageTransform.Microsoft.Gradient(StartColorStr='#b8a4ba', EndColorStr='#5c3765', GradientType=0);
2277}
2278</style>
2279<script type="text/javascript" language="javascript">
2280function progress_bar(where, red, green, purple, blue) {
2281 var total = green + red + blue + purple;
2282 if (total == 0)
2283 total = 1;
2284 var d = document.getElementById(where);
2285 var v = 100 * (1 - (red / total));
2286 if (total != 1) {
2287 d.innerHTML += '<div class="pb_label">' + v.toFixed(1) + "%</div>";
2288 d.innerHTML += '<div class="pb_label2">' + v.toFixed(1) + "%</div>";
2289 }
2290 else
2291 d.style.width = "25px";
2292 var pgreen = parseInt(100 * green / total);
2293 var pblue = parseInt(100 * blue / total);
2294 var ppurple = parseInt(100 * purple / total);
2295 var pred = parseInt(100 * red / total);
2296 if (pgreen + pblue + ppurple + pred != 100) {
2297 if (red > 0)
2298 pred = 100 - pgreen - pblue - ppurple;
2299 else if (purple > 0)
2300 ppurple = 100 - pgreen - pblue;
2301 else if (blue > 0)
2302 pblue = 100 - pgreen;
2303 else
2304 pgreen = 100;
2305 }
2306 if (green > 0)
2307 d.innerHTML += '<div class="green_gradient" style="width:' + pgreen + '%;"></div>';
2308 if (blue > 0)
2309 d.innerHTML += '<div class="blue_gradient" style="width:' + pblue + '%;"></div>';
2310 if (purple > 0)
2311 d.innerHTML += '<div class="purple_gradient" style="width:' + ppurple + '%;"></div>';
2312 if (red > 0)
2313 d.innerHTML += '<div class="red_gradient" style="width:' + pred + '%;"></div>';
2314 return true;
2315}
2316
2317function toggle(e) {
2318 var elt = document.getElementById(e + "-t");
2319 var text = document.getElementById(e);
2320 if (elt.style.display == "block") {
2321 elt.style.display = "none";
2322 text.innerHTML = "+";
2323 }
2324 else {
2325 elt.style.display = "block";
2326 text.innerHTML = "-";
2327 }
2328}
2329
2330function time_delta(date, e) {
2331 var now = new Date();
2332 var d = new Date(date);
2333 var delta = (now - d) / 1000;
2334 var elt = document.getElementById(e);
2335 if (delta >= 3600) {
2336 var h = parseInt(delta / 3600);
2337 var m = parseInt((delta - h * 3600) / 60);
2338 elt.innerHTML = '(' + h + 'h ' + m + 'min ago)';
2339 return;
2340 }
2341 if (delta >= 60) {
2342 var m = parseInt(delta / 60);
2343 elt.innerHTML = '(' + m + 'min ago)';
2344 return;
2345 }
2346 elt.innerHTML = '(seconds ago)';
2347}
2348
2349</script>
2350"""
2351
2352 prefix = os.path.commonprefix([ branches[x][0]['grd'] for x in branches.keys() ])
2353 changes = 0
2354 langs = []
2355 mapped_langs = {}
2356 cvts = {}
2357 for grd in branches.keys():
2358 cvts[grd] = Converter(branches[grd][0]['grd'],
2359 lang_mapping = lang_mapping,
2360 template_mapping = templatenames_mapping,
2361 debug = debug,
2362 html_output = html_output,
2363 branches = branches[grd])
2364
2365 if cvts[grd].get_supported_strings_count() == 0:
2366 if debug:
2367 print "no string found in %s" % grd
2368 if export_grit is not None and copy_grit is None:
2369 directory = os.path.join(export_grit, os.path.dirname(branches[grd][0]['grd'])[len(prefix):])
2370 if not os.path.isdir(directory):
2371 os.makedirs(directory, 0755)
2372 shutil.copy2(branches[grd][0]['grd'], directory)
2373 continue
2374
2375 if copy_grit is not None:
2376 cvts[grd].copy_grit_files(copy_grit)
2377
2378 if import_gettext is not None:
2379 for directory in import_gettext:
2380 cvts[grd].import_gettext_po_files(directory)
2381 langs.extend(cvts[grd].translations.keys())
2382
2383 if export_gettext is not None:
2384 cvts[grd].export_gettext_files(export_gettext)
2385 changes += cvts[grd].template_changes + cvts[grd].translations_changes
2386
2387 # as we need to add all supported langs to the <outputs> section of all grd files,
2388 # we have to wait for all the 'po' files to be imported and merged before we export
2389 # the grit files and create the patches.
2390
2391 # supported langs
2392 langs.append('en-US') # special case, it's not translated, but needs to be here
2393 for lang in [ 'no' ]: # workaround for cases like the infamous no->nb mapping
2394 while lang in langs:
2395 langs.remove(lang)
2396 langs.append(lang_mapping[lang])
2397 r = {}
2398 langs = sorted([ r.setdefault(e, e) for e in langs if e not in r ])
2399
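The r.setdefault comprehension above is a dedup-then-sort; a small sketch of the same idiom (dedup_sorted is our name):

```python
def dedup_sorted(items):
    # Same trick as above: the dict records items already seen, the
    # comprehension keeps each one once, and sorted() orders the result.
    r = {}
    return sorted([r.setdefault(e, e) for e in items if e not in r])
```

For example, dedup_sorted(['fr', 'de', 'fr', 'nb']) yields ['de', 'fr', 'nb'].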
2400 for grd in branches.keys():
2401 if export_grit is not None:
2402 cvts[grd].export_grit_files(os.path.join(export_grit, os.path.dirname(branches[grd][0]['grd'])[len(prefix):]), langs)
2403 for lang in cvts[grd].template.mapped_langs:
2404 mapped_langs[lang] = cvts[grd].template.mapped_langs[lang]['gettext']
2405 if create_patches is not None:
2406 cvts[grd].create_patches(create_patches)
2407
2408 # patch the build/common.gypi file if we have to
2409 nlangs = None
2410 if create_patches is not None and build_gyp_file is not None:
2411 nlangs = cvts[branches.keys()[0]].create_build_gyp_patch(create_patches, build_gyp_file, other_grd_files, langs,
2412 whitelisted_new_langs)
2413
2414 if create_patches is None:
2415 # no need to display the stats
2416 exit(1 if changes > 0 else 0)
2417
2418 # display some stats
2419 html_js = ""
2420 if html_output:
2421 print """
2422<p>
2423<div>
2424<div id="legend">
2425<fieldset><legend>Legend</legend>
2426<table border="0">
2427<tr><td><div id='green_l' class='progress_bar'></td><td>translated upstream</td></tr>
2428<tr><td><div id='blue_l' class='progress_bar'></td><td>translations updated in Launchpad</td></tr>
2429<tr><td><div id='purple_l' class='progress_bar'></td><td>translated in Launchpad</td></tr>
2430<tr><td><div id='red_l' class='progress_bar'></td><td>untranslated</td></tr>
2431</table>
2432</fieldset>
2433</div>
2434"""
2435 html_js += "progress_bar('%s', %d, %d, %d, %d);\n" % ('green_l', 0, 1, 0, 0)
2436 html_js += "progress_bar('%s', %d, %d, %d, %d);\n" % ('blue_l', 0, 0, 0, 1)
2437 html_js += "progress_bar('%s', %d, %d, %d, %d);\n" % ('purple_l', 0, 0, 1, 0)
2438 html_js += "progress_bar('%s', %d, %d, %d, %d);\n" % ('red_l', 1, 0, 0, 0)
2439 if json_info:
2440 now = datetime.utcfromtimestamp(os.path.getmtime(json_info)).strftime("%a %b %e %H:%M:%S UTC %Y")
2441 binfo = json.loads(open(json_info, "r").read())
2442 print """
2443<div id="branches">
2444<fieldset><legend>Last update info</legend>
2445<table border="0">
2446<tr><th>Branch</th><th>Revision</th><th>Date</th></tr>
2447<tr><td><a href="%s">Upstream</a></td><td>r%s</td><td>%s <em id='em-u'></em> </td></tr>
2448<tr><td><a href="%s">Launchpad export</a></td><td>r%s</td><td>%s <em id='em-lp'></em> </td></tr>
2449<tr><td>This page</a></td><td>-</td><td>%s <em id='em-now'></em> </td></tr>
2450</table>
2451</fieldset>
2452</div>
2453""" % (binfo['upstream']['url'], binfo['upstream']['revision'], binfo['upstream']['date'],
2454 binfo['launchpad-export']['url'], binfo['launchpad-export']['revision'],
2455 binfo['launchpad-export']['date'], now)
2456 html_js += "time_delta('%s', '%s');\n" % (binfo['upstream']['date'], 'em-u')
2457 html_js += "time_delta('%s', '%s');\n" % (binfo['launchpad-export']['date'], 'em-lp')
2458 html_js += "time_delta('%s', '%s');\n" % (now, 'em-now')
2459 print """
2460<div id="stats">
2461<table border="0">
2462<tr><th rowspan="2">Rank</th><th rowspan="2">Lang</th><th colspan='5'>TOTAL</th><th colspan='5'>"""
2463 print ("</th><th colspan='5'>".join([ "%s (<a href='http://git.chromium.org/gitweb/?p=chromium.git;a=history;f=%s;hb=HEAD'>+</a>)" \
2464 % (os.path.splitext(grd)[0], branches[grd][0]['grd']) \
2465 for grd in sorted(branches.keys()) ])) + "</th></tr><tr>"
2466 j = 0
2467 for grd in [ 'TOTAL' ] + sorted(branches.keys()):
2468 print """
2469<th>Status</th>
2470<th><div id='%s_t%d' class='progress_bar'></th>
2471<th><div id='%s_t%d' class='progress_bar'></th>
2472<th><div id='%s_t%d' class='progress_bar'></th>
2473<th><div id='%s_t%d' class='progress_bar'></th>""" % ('red', j, 'green', j, 'purple', j, 'blue', j)
2474 html_js += "progress_bar('%s_t%d', %d, %d, %d, %d);\n" % ('green', j, 0, 1, 0, 0)
2475 html_js += "progress_bar('%s_t%d', %d, %d, %d, %d);\n" % ('blue', j, 0, 0, 0, 1)
2476 html_js += "progress_bar('%s_t%d', %d, %d, %d, %d);\n" % ('purple', j, 0, 0, 1, 0)
2477 html_js += "progress_bar('%s_t%d', %d, %d, %d, %d);\n" % ('red', j, 1, 0, 0, 0)
2478 j += 1
2479 print "</tr>"
2480 else:
2481 print """\
2482 +----------------------- % translated
2483 | +----------------- untranslated
2484 | | +------------ translated upstream
2485 | | | +------- translated in Launchpad
2486 | | | | +-- translations updated in Launchpad
2487 | | | | |
2488 V V V V V"""
2489 print "-- lang -- " + \
2490 ' '.join([ (" %s " % os.path.splitext(grd)[0]).center(25, "-") \
2491 for grd in [ 'TOTAL' ] + sorted(branches.keys()) ])
2492 totals = {}
2493 for lang in langs:
2494 klang = lang
2495 if lang == 'nb':
2496 klang = 'no'
2497 totals[klang] = { 'total': 0, 'missing': 0, 'translated_upstream': 0, 'new': 0, 'updated': 0, 'lskipped': 0 }
2498 for grd in branches.keys():
2499 tot, lskipped = cvts[grd].template.get_supported_strings_count(klang)
2500 totals[klang]['lskipped'] += lskipped
2501 totals[klang]['total'] += tot
2502 totals[klang]['missing'] += tot
2503 if klang in cvts[grd].template.stats:
2504 totals[klang]['missing'] -= cvts[grd].template.stats[klang]['translated_upstream'] + \
2505 cvts[grd].template.stats[klang]['new'] + cvts[grd].template.stats[klang]['updated']
2506 totals[klang]['translated_upstream'] += cvts[grd].template.stats[klang]['translated_upstream']
2507 totals[klang]['new'] += cvts[grd].template.stats[klang]['new']
2508 totals[klang]['updated'] += cvts[grd].template.stats[klang]['updated']
2509
2510 rank = 0
2511 p_rank = 0
2512 p_score = -1
2513 t_landable = 0
2514 for lang in sorted(totals, lambda x, y: cmp("%05d %05d %s" % (totals[x]['missing'], totals[x]['total'] - totals[x]['updated'] - totals[x]['new'], x),
2515 "%05d %05d %s" % (totals[y]['missing'], totals[y]['total'] - totals[y]['updated'] - totals[y]['new'], y))):
2516 if lang == 'en-US':
2517 continue
2518 rank += 1
2519 if p_score != totals[lang]['missing']:
2520 p_score = totals[lang]['missing']
2521 p_rank = rank
2522 rlang = lang
2523 if lang in lang_mapping:
2524 rlang = lang_mapping[lang]
2525 if html_output:
2526 s = "<tr><td>%s</td><td class='lang'><a class='l' href='%s'>%s</a></td>" % \
2527 ("#%d" % p_rank, 'https://translations.launchpad.net/chromium-browser/translations/+lang/' + \
2528 mapped_langs[lang], rlang)
2529 s += "<td><div id='%s' class='progress_bar'></div></td>" % rlang
2530 s += "<td class='d'>%d</td><td class='d'>%d</td><td class='d'>%d</td><td class='d'>%d</td>" % \
2531 (totals[lang]['missing'], totals[lang]['translated_upstream'],
2532 totals[lang]['new'], totals[lang]['updated'])
2533 html_js += "progress_bar('%s', %d, %d, %d, %d);\n" % \
2534 (rlang, totals[lang]['missing'], totals[lang]['translated_upstream'],
2535 totals[lang]['new'], totals[lang]['updated'])
2536 else:
2537 s = "%-3s %-6s " % ("#%d" % p_rank, rlang)
2538 s += "%3d%% %4d %4d %4d %4d" % \
2539 (100.0 * float(totals[lang]['total'] - totals[lang]['missing']) / float(totals[lang]['total']),
2540 totals[lang]['missing'], totals[lang]['translated_upstream'],
2541 totals[lang]['new'], totals[lang]['updated'])
2542 j = 0
2543 for grd in sorted(branches.keys()):
2544 j += 1
2545 tplt = os.path.splitext(grd)[0].replace('_', '-')
2546 total, lskipped = cvts[grd].template.get_supported_strings_count(lang)
2547 if lang in cvts[grd].template.stats:
2548 missing = total - cvts[grd].template.stats[lang]['translated_upstream'] - \
2549 cvts[grd].template.stats[lang]['new'] - cvts[grd].template.stats[lang]['updated']
2550 if html_output:
2551 if len(unlandable_templates) == 0 and len(landable_templates) == 0:
2552 landable = False
2553 else:
2554 landable = (nlangs is not None and lang in nlangs and tplt not in unlandable_templates) or \
2555 (nlangs is not None and lang not in nlangs and tplt in landable_templates)
2556 if landable:
2557 t_landable += cvts[grd].template.stats[lang]['new'] + cvts[grd].template.stats[lang]['updated']
2558 s += "<td><div id='%s_%d' class='progress_bar'></div></td>" % (rlang, j)
2559 s += "<td class='d'>%d</td><td class='d'>%d</td><td class='d%s'>%d</td><td class='d%s'>%d</td>" % \
2560 (missing,
2561 cvts[grd].template.stats[lang]['translated_upstream'],
2562 " n" if landable and cvts[grd].template.stats[lang]['new'] > 0 else "",
2563 cvts[grd].template.stats[lang]['new'],
2564 " n" if landable and cvts[grd].template.stats[lang]['updated'] > 0 else "",
2565 cvts[grd].template.stats[lang]['updated'])
2566 html_js += "progress_bar('%s_%d', %d, %d, %d, %d);\n" % \
2567 (rlang, j, missing,
2568 cvts[grd].template.stats[lang]['translated_upstream'],
2569 cvts[grd].template.stats[lang]['new'],
2570 cvts[grd].template.stats[lang]['updated'])
2571 else:
2572 if float(total) > 0:
2573 pct = 100.0 * float(total - missing) / float(total)
2574 else:
2575 pct = 0
2576 s += " %3d%% %4d %4d %4d %4d" % \
2577 (pct, missing,
2578 cvts[grd].template.stats[lang]['translated_upstream'],
2579 cvts[grd].template.stats[lang]['new'],
2580 cvts[grd].template.stats[lang]['updated'])
2581 else:
2582 if html_output:
2583 s += "<td><div id='%s_%d' class='progress_bar'></div></td>" % (rlang, j)
2584 s += "<td class='d'>%d</td><td class='d'>%d</td><td class='d'>%d</td><td class='d'>%d</td>" % \
2585 (total, 0, 0, 0)
2586 html_js += "progress_bar('%s_%d', %d, %d, %d, %d);\n" % \
2587 (rlang, j, total, 0, 0, 0)
2588 else:
2589 s += " %3d%% %4d %4d %4d %4d" % (0, total, 0, 0, 0)
2590 if html_output:
2591 s += "</tr>"
2592 print s
2593 if html_output:
2594 landable_sum = ""
2595 if t_landable > 0:
2596 landable_sum = """<p>
2597<div name='landable'>
2598<table border="0"><tr><td class="d n">%d strings are landable upstream</td></tr></table></div>
2599""" % t_landable
2600 print """\
2601</table>
2602%s</div>
2603</div>
2604<script type="text/javascript" language="javascript">
2605%s
2606</script>
2607</body>
2608</html>""" % (landable_sum, html_js)
2609 exit(1 if changes > 0 else 0)
2610
02611
=== added file 'create-patches.sh'
--- create-patches.sh 1970-01-01 00:00:00 +0000
+++ create-patches.sh 2024-02-28 12:58:47 +0000
@@ -0,0 +1,185 @@
1#!/bin/sh
2
3# Create the translation patches (grit format) based on the last
4# translation export (gettext format)
5# (c) 2010-2011, Fabien Tassin <fta@ubuntu.com>
6
7# location of chromium2pot.py (lp:~chromium-team/chromium-browser/chromium-translations-tools.head)
8# (must already exist)
9BIN_DIR=/data/bot/chromium-translations-tools.head
10
11# Launchpad translation export (must already exist, will be pulled here)
12LPE_DIR=/data/bot/upstream/chromium-translations-exports.head
13
14# local svn branches (updated by drobotik)
15SRC_TRUNK_DIR=/data/bot/upstream/chromium-browser.svn/src
16SRC_DEV_DIR=/data/bot/upstream/chromium-dev.svn/src
17SRC_BETA_DIR=/data/bot/upstream/chromium-beta.svn/src
18SRC_STABLE_DIR=/data/bot/upstream/chromium-stable.svn/src
19
20####
21
22OUT_DIR=$(mktemp -d)
23
24# List of options per branch
25NEW_OPTS="--map-template-names ui/base/strings/ui_strings.grd=ui/base/strings/app_strings.grd"
26OLD_OPTS=""
27
28OPTS_TRUNK=$NEW_OPTS
29OPTS_DEV=$NEW_OPTS
30OPTS_BETA=$NEW_OPTS
31OPTS_STABLE=$OLD_OPTS
32
33# List of templates to send to Launchpad
34NEW_TEMPLATES=$(cat - <<EOF
35ui/base/strings/ui_strings.grd
36EOF
37)
38OLD_TEMPLATES=$(cat - <<EOF
39ui/base/strings/app_strings.grd
40EOF
41)
42TEMPLATES_TRUNK=$NEW_TEMPLATES
43TEMPLATES_DEV=$NEW_TEMPLATES
44TEMPLATES_BETA=$NEW_TEMPLATES
45TEMPLATES_STABLE=$OLD_TEMPLATES
46
47TEMPLATES=$(cat - <<EOF
48chrome/app/chromium_strings.grd
49chrome/app/generated_resources.grd
50chrome/app/policy/policy_templates.grd
51webkit/glue/inspector_strings.grd
52webkit/glue/webkit_strings.grd
53EOF
54)
55
56# List of other templates that are updated for new langs but are
57# not sent to Launchpad
58NEW_OTHER_TEMPLATES=$(cat - <<EOF
59ui/base/strings/app_locale_settings.grd
60EOF
61)
62OLD_OTHER_TEMPLATES=$(cat - <<EOF
63app/resources/app_locale_settings.grd
64EOF
65)
66OTHER_TEMPLATES_TRUNK=$NEW_OTHER_TEMPLATES
67OTHER_TEMPLATES_DEV=$NEW_OTHER_TEMPLATES
68OTHER_TEMPLATES_BETA=$NEW_OTHER_TEMPLATES
69OTHER_TEMPLATES_STABLE=$NEW_OTHER_TEMPLATES
70# Common to all branches
71OTHER_TEMPLATES=$(cat - <<EOF
72chrome/app/resources/locale_settings.grd
73chrome/app/resources/locale_settings_linux.grd
74chrome/app/resources/locale_settings_cros.grd
75EOF
76)
77
78######
79
80TEMPLATES=$(echo $TEMPLATES | tr '[ \n]' ' ')
81
82(cd $LPE_DIR ; bzr pull -q)
83
84space_list () {
85 local V1="$1"
86 local V2="$2"
87 echo "$V1 $V2" | tr '[ \n]' ' ' | sed -e 's/ $//'
88}
89
90comma_list () {
91 local V1="$1"
92 local V2="$2"
93
94 echo "$V1 $V2" | tr '[ \n]' ',' | sed -e 's/,$//'
95}
96
97get_branches_info () {
98 local BRANCH=$1
99 local SRC_DIR=$2
100 local JSON=$3
101
102 # upstream url, revision & last change date
103 UURL=$(cd $SRC_DIR; svn info | grep '^URL: ' | sed -e 's/.*: //')
104 UREV="$(cd $SRC_DIR; svn info | grep '^Last Changed Rev:' | sed -e 's/.*: //') ($(cut -d= -f2 $SRC_DIR/chrome/VERSION | sed -e 's,$,.,' | tr -d '\n' | sed -e 's/.$//'))"
105 UDATE=$(date -d "$(cd $SRC_DIR; svn info | grep '^Last Changed Date:' | sed -e 's/.*: //')" --utc)
106
107 # Launchpad url, revision & last change date
108 LURL=$(cd $LPE_DIR; bzr info | grep 'parent branch' | sed -e 's,.*: bzr+ssh://bazaar,https://code,')
109 LREV=$(cd $LPE_DIR; bzr revno)
110 LDATE=$(date -d "$(cd $LPE_DIR; bzr info -v | grep 'latest revision' | sed -e 's/.*: //')" --utc)
111
112 cat - <<EOF > $JSON
113{
114 "upstream": {
115 "revision": "$UREV",
116 "url": "$UURL",
117 "date": "$UDATE"
118 },
119 "launchpad-export": {
120 "revision": $LREV,
121 "url": "$LURL",
122 "date": "$LDATE"
123 }
124}
125EOF
126}
127
128create_patch () {
129 local BRANCH=$1
130 local SRC_DIR=$2
131 local BRANCH_OPTS="$3"
132 local TEMPLATES="$4"
133 local OTHER_TEMPLATES="$5"
134
135 LOG=converter-output.html
136 DLOG=converter-diffstat.html
137
138 set -e
139 cd $SRC_DIR
140 mkdir -p $OUT_DIR/$BRANCH/new
141
142 get_branches_info $BRANCH $SRC_DIR $OUT_DIR/$BRANCH/new/revisions.json
143
144 local OPTS=""
145 if [ $BRANCH = trunk ] ; then
146 OPTS="--landable-templates chromium-strings,inspector-strings --unlandable-templates policy-templates"
147 fi
148
149 # Generate the new files, using the new template and the translations exported by launchpad
150 $BIN_DIR/chromium2pot.py \
151 --html-output \
152 --json-branches-info $OUT_DIR/$BRANCH/new/revisions.json \
153 --create-patches $OUT_DIR/$BRANCH/new/patches \
154 --import-gettext $LPE_DIR \
155 --export-grit $OUT_DIR/$BRANCH/new/patched-files \
156 --build-gyp-file build/common.gypi \
157 --other-grd-files $OTHER_TEMPLATES \
158 $OPTS \
159 $BRANCH_OPTS \
160 $TEMPLATES >> $OUT_DIR/$BRANCH/new/$LOG 2>&1
161 echo >> $OUT_DIR/$BRANCH/new/$LOG
162
163 ( cd "$OUT_DIR/$BRANCH/new/patches" ; for i in * ; do mv $i $i.txt ; done )
164 echo "<pre>" > $OUT_DIR/$BRANCH/new/$DLOG
165 ( cd "$OUT_DIR/$BRANCH/new" ; find patches -type f | xargs --verbose -n 1 diffstat -p 1 >> $DLOG 2>&1 )
166 perl -i -pe 's,^(diffstat -p 1 )(\S+)(.*),$1<a href="$2">$2</a>$3,;' $OUT_DIR/$BRANCH/new/$DLOG
167 echo "</pre>" >> $OUT_DIR/$BRANCH/new/$DLOG
168
169 # get the old files
170 mkdir $OUT_DIR/$BRANCH/old
171 lftp -e "lcd $OUT_DIR/$BRANCH/old; cd public_html/chromium/translations/$BRANCH; mirror; quit" sftp://people.ubuntu.com > /dev/null 2>&1
172 set +e
173 (cd $OUT_DIR/$BRANCH ; diff -Nur old new > diff.patch; cd old ; patch -p 1 < ../diff.patch > /dev/null 2>&1 )
174 set -e
175
176 lftp -e "lcd $OUT_DIR/$BRANCH/old; cd public_html/chromium/translations/$BRANCH; mirror --delete -R; quit" sftp://people.ubuntu.com > /dev/null 2>&1
177 set +e
178}
179
180create_patch "trunk" $SRC_TRUNK_DIR "$OPTS_TRUNK" "$(space_list "$TEMPLATES" "$TEMPLATES_TRUNK")" $(comma_list "$OTHER_TEMPLATES_TRUNK" "$OTHER_TEMPLATES")
181create_patch "dev" $SRC_DEV_DIR "$OPTS_DEV" "$(space_list "$TEMPLATES" "$TEMPLATES_DEV")" $(comma_list "$OTHER_TEMPLATES_DEV" "$OTHER_TEMPLATES")
182create_patch "beta" $SRC_BETA_DIR "$OPTS_BETA" "$(space_list "$TEMPLATES" "$TEMPLATES_BETA")" $(comma_list "$OTHER_TEMPLATES_BETA" "$OTHER_TEMPLATES")
183create_patch "stable" $SRC_STABLE_DIR "$OPTS_STABLE" "$(space_list "$TEMPLATES" "$TEMPLATES_STABLE")" $(comma_list "$OTHER_TEMPLATES_STABLE" "$OTHER_TEMPLATES")
184
185rm -rf $OUT_DIR
0186
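space_list and comma_list above flatten the newline-separated template lists into single-line arguments for chromium2pot.py; a hedged Python equivalent of the two helpers (the names mirror the shell functions, but this port is ours):

```python
def space_list(*chunks):
    # Collapse newlines and repeated spaces, join with single spaces.
    return " ".join(" ".join(chunks).split())

def comma_list(*chunks):
    # Same, but comma-separated (used for --other-grd-files).
    return ",".join(" ".join(chunks).split())
```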
=== added file 'desktop2gettext.py'
--- desktop2gettext.py 1970-01-01 00:00:00 +0000
+++ desktop2gettext.py 2024-02-28 12:58:47 +0000
@@ -0,0 +1,378 @@
1#!/usr/bin/python
2# -*- coding: utf-8 -*-
3
4# (c) 2010-2011, Fabien Tassin <fta@ubuntu.com>
5
6# Convert a desktop file to Gettext files and back
7
8import sys, getopt, os, codecs, re, time
9from datetime import datetime
10
11class DesktopFile(dict):
12 """ Read and write a desktop file """
13 def __init__(self, desktop = None, src_pkg = None, verbose = False):
14 self.changed = False
15 self.data = []
16 self.headers = {}
17 self.template = {}
18 self.translations = {}
19 self.src_pkg = src_pkg
20 self.mtime = None
21 self.verbose = verbose
22 if desktop is not None:
23 self.read_desktop_file(desktop)
24
25 def read_desktop_file(self, filename):
26 self.data = []
27 self.template = {}
28 self.mtime = os.path.getmtime(filename)
29 fd = codecs.open(filename, "rb", encoding="utf-8")
30 section = None
31 for line in fd.readlines():
32 m = re.match(r'^\[(.*?)\]', line)
33 if m is not None:
34 section = m.group(1)
35 assert section not in self.template, "Duplicate section [%s]" % section
36 self.template[section] = {}
37 m = re.match(r'^(Name|GenericName|Comment)(\[\S+\]|)=(.*)', line)
38 if m is None:
39 self.data.append(line)
40 continue
41 assert section is not None, "Found a '%s' outside a section" % m.group(1)
42 entry = m.group(1)
43 value = m.group(3)
44 if m.group(2) == "":
45 # master string
46 self.data.append(line)
47 assert entry not in self.template[section], \
48 "Duplicate entry '%s' in section [%s]" % (entry, section)
49 self.template[section][entry] = value
50 if value not in self.translations:
51 self.translations[value] = {}
52 else:
53 # translation
54 lang = m.group(2)[1:-1]
55 assert entry in self.template[section], \
56 "Translation found for lang '%s' in section [%s] before master entry '%s'" % \
57 (lang, section, entry)
58 string = self.template[section][entry]
59 if lang not in self.translations[string]:
60 self.translations[string][lang] = value
61 fd.close()
62
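read_desktop_file above relies on a (Name|GenericName|Comment)(\[lang\]|)=value regex to tell master strings from translations; a standalone sketch of that split (parse_entry and the sample lines are ours):

```python
import re

_ENTRY = re.compile(r'^(Name|GenericName|Comment)(\[\S+\]|)=(.*)')

def parse_entry(line):
    # Return (key, lang or None, value); None for non-localizable lines.
    m = _ENTRY.match(line)
    if m is None:
        return None
    lang = m.group(2)[1:-1] if m.group(2) else None
    return m.group(1), lang, m.group(3)
```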
63 def dump(self):
64 for section in sorted(self.template.keys()):
65 print "[%s]:" % section
66 for entry in sorted(self.template[section].keys()):
67 print " '%s': '%s'" % (entry, self.template[section][entry])
68 print
69 for string in sorted(self.translations.keys()):
70 print "'%s':" % string
71 for lang in sorted(self.translations[string].keys()):
72 print " '%s' => '%s'" % (lang, self.translations[string][lang])
73
74 def write_desktop(self, file):
75 fd = codecs.open(file, "wb", encoding="utf-8")
76 for ent in self.data:
77 fd.write(ent)
78 m = re.match(r'^(Name|GenericName|Comment)=(.*)', ent)
79 if m is None:
80 continue
81 k = m.group(2)
82 if k not in self.translations:
83 continue
84 for lang in sorted(self.translations[k].keys()):
85 fd.write("%s[%s]=%s\n" % (m.group(1), lang, self.translations[k][lang]))
86
87 def write_gettext_header(self, fd, mtime = None, last_translator = None, lang_team = None):
88 mtime = "YEAR-MO-DA HO:MI+ZONE" if mtime is None else \
89 datetime.fromtimestamp(mtime).strftime("%Y-%m-%d %H:%M+0000")
90 last_translator = "FULL NAME <EMAIL@ADDRESS>" if last_translator is None else \
91 last_translator
92 lang_team = "LANGUAGE <LL@li.org>" if lang_team is None else lang_team
93 fd.write("""\
94# %s desktop file.
95# Copyright (C) 2010-2011 Fabien Tassin
96# This file is distributed under the same license as the %s package.
97# Fabien Tassin <fta@ubuntu.com>, 2010-2011.
98#
99msgid ""
100msgstr ""
101"Project-Id-Version: %s\\n"
102"Report-Msgid-Bugs-To: https://bugs.launchpad.net/ubuntu/+source/%s/+filebug\\n"
103"POT-Creation-Date: %s\\n"
104"PO-Revision-Date: %s\\n"
105"Last-Translator: %s\\n"
106"Language-Team: %s\\n"
107"MIME-Version: 1.0\\n"
108"Content-Type: text/plain; charset=UTF-8\\n"
109"Content-Transfer-Encoding: 8bit\\n"
110
111""" % (self.src_pkg, self.src_pkg, self.src_pkg, self.src_pkg,
112 datetime.fromtimestamp(self.mtime).strftime("%Y-%m-%d %H:%M+0000"),
113 mtime, last_translator, lang_team))
114
115 def write_gettext_string(self, fd, string, translation = None, usage = None):
116 if translation is None:
117 translation = ""
118 if usage is not None:
119 fd.write("#. %s\n" % usage)
120 fd.write('msgid "%s"\nmsgstr "%s"\n\n' % (string, translation))
121
122 def get_usage(self, string):
123 res = []
124 for section in sorted(self.template.keys()):
125 for entry in sorted(self.template[section].keys()):
126 if self.template[section][entry] == string:
127 res.append("[%s] %s" % (section, entry))
128 return ", ".join(res)
129
130 def read_gettext_string(self, fd):
131 string = {}
132 cur = None
133 while 1:
134 s = fd.readline()
135 if len(s) == 0 or s == "\n":
136 break # EOF or end of block
137 if s.rfind('\n') == len(s) - 1:
138 s = s[:-1] # chomp
139 if s.find("# ") == 0 or s == "#": # translator-comment
140 if 'comment' not in string:
141 string['comment'] = ''
142 string['comment'] += s[2:]
143 continue
144 if s.find("#:") == 0: # reference
145 if 'reference' not in string:
146 string['reference'] = ''
147 string['reference'] += s[2:]
148 if s[2:].find(" id: ") == 0:
149 string['id'] = s[7:].split(' ')[0]
150 continue
151 if s.find("#.") == 0: # extracted-comments
152 if 'extracted' not in string:
153 string['extracted'] = ''
154 string['extracted'] += s[2:]
155 if s[2:].find(" - condition: ") == 0:
156 if 'conditions' not in string:
157 string['conditions'] = []
158 string['conditions'].append(s[16:])
159 continue
160 if s.find("#~") == 0: # obsolete messages
161 continue
162 if s.find("#") == 0: # something else
163 print "%s not expected. Skip" % repr(s)
164 continue # not supported/expected
165 if s.find("msgid ") == 0:
166 cur = "string"
167 if cur not in string:
168 string[cur] = u""
169 else:
170 string[cur] += "\n"
171 string[cur] += s[6:]
172 continue
173 if s.find("msgstr ") == 0:
174 cur = "translation"
175 if cur not in string:
176 string[cur] = u""
177 else:
178 string[cur] += "\n"
179 string[cur] += s[7:]
180 continue
181 if s.find('"') == 0:
182 if cur is None:
183 print "'%s' not expected here. Skip" % s
184 continue
185 string[cur] += "\n" + s
186 continue
187 print "'%s' not expected here. Skip" % s
188 return None if string == {} else string
189
190 def merge_gettext_string(self, string, lang):
191 msg = string['string'][1:-1]
192 if msg == "": # header
193 self.headers[lang] = {}
194 map(lambda x: self.headers[lang].setdefault(x.split(": ")[0], x.split(": ")[1]),
195 string['translation'][4:-3].replace('\\n"\n"', '\n').replace('"\n"', '').split('\n'))
196 return
197 if msg not in self.translations:
198 return # obsolete string
199 translation = string['translation'][1:-1]
200 if translation == "": # no translation
201 return
202 if lang not in self.translations[msg]:
203 if self.verbose:
204 print "merge translation for lang '%s' string '%s'" % (lang, msg)
205 self.translations[msg][lang] = translation
206 elif self.translations[msg][lang] != translation:
207 if self.verbose:
208 print "update translation for lang '%s' string '%s'" % (lang, msg)
209 self.translations[msg][lang] = translation
210
211 def read_gettext_file(self, filename):
212 fd = codecs.open(filename, "rb", encoding="utf-8")
213 strings = []
214 while 1:
215 string = self.read_gettext_string(fd)
216 if string is None:
217 break
218 strings.append(string)
219 fd.close()
220 return strings
221
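read_gettext_string above handles comments, multi-line strings and obsolete entries; for the simple single-line case, the msgid/msgstr pairing it collects can be sketched with one regex pass (a deliberate simplification that ignores escapes and continuation lines):

```python
import re

def parse_po_pairs(text):
    # Extract (msgid, msgstr) tuples from single-line PO entries.
    return re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', text)
```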
222 def get_langs(self):
223 langs = []
224 for st in self.translations:
225 for lang in self.translations[st]:
226 if lang not in langs:
227 langs.append(lang)
228 return sorted(langs)
229
230 def export_gettext_file(self, directory, template_name = "desktop_file", lang = None):
231 filename = os.path.join(directory, "%s.pot" % template_name if lang is None else "%s.po" % lang)
232 # if there's already a file with this name, compare its content and only update it
233 # when it's needed
234 update = False
235 if not os.path.exists(filename):
236 update = True
237 else:
238 # compare the strings
239 strings = self.read_gettext_file(filename)[1:]
240 old = sorted(map(lambda x: x['string'][1:-1], strings))
241 new = sorted(self.translations.keys())
242 if old != new:
243 update = True
244 if self.verbose:
245 print "strings differ for %s. Update" % filename
246 if not update:
247 # compare string descriptions
248 old = map(lambda x: x['extracted'][1:],
249 sorted(strings, lambda a, b: cmp(a['string'][1:-1], b['string'][1:-1])))
250 new = map(lambda x: self.get_usage(x), sorted(self.translations.keys()))
251 if old != new:
252 update = True
253 if self.verbose:
254 print "string descriptions differ for %s. Update" % filename
255 if not update and lang is not None:
256 # compare translations
257 old = map(lambda x: x['translation'][1:-1],
258 sorted(strings, lambda a, b: cmp(a['string'][1:-1], b['string'][1:-1])))
259 new = map(lambda x: self.translations[x][lang] if lang in self.translations[x] else "",
260 sorted(self.translations.keys()))
261 if old != new:
262 update = True
263 if self.verbose:
264 print "translations differ for %s. Update" % filename
265 if not update:
266 return
267 if self.verbose:
268 print "update %s" % filename
269 fd = codecs.open(filename, 'wb', encoding='utf-8')
270 last_translator = None
271 lang_team = None
272 if lang is None:
273 self.mtime = time.time()
274 mtime = None
275 else:
276 mtime = time.time()
277 if lang in self.headers and 'Last-Translator' in self.headers[lang]:
278 last_translator = self.headers[lang]['Last-Translator']
279 if lang in self.headers and 'Language-Team' in self.headers[lang]:
280 lang_team = self.headers[lang]['Language-Team']
281 else:
282 lang_team = "%s <%s@li.org>" % (lang, lang)
283 self.write_gettext_header(fd, mtime = mtime, last_translator = last_translator, lang_team = lang_team)
284 for string in sorted(self.translations.keys()):
285 val = self.translations[string][lang] \
286 if lang is not None and lang in self.translations[string] else None
287 self.write_gettext_string(fd, string, val, usage = self.get_usage(string))
288 fd.close()
289 self.changed = True
290
291 def export_gettext_files(self, directory):
292 if not os.path.isdir(directory):
293 os.makedirs(directory, 0755)
294 self.export_gettext_file(directory)
295 for lang in self.get_langs():
296 self.export_gettext_file(directory, lang = lang)
297
298 def import_gettext_files(self, directory):
299 """ Import strings from gettext 'po' files, ignore the 'pot'.
300 Only merge strings matching our desktop file """
301 assert os.path.isdir(directory)
302 for file in os.listdir(directory):
303 lang, ext = os.path.splitext(file)
304 if ext != '.po':
305 continue
306 for string in self.read_gettext_file(os.path.join(directory, file)):
307 self.merge_gettext_string(string, lang)
308
309def usage():
310 appname = sys.argv[0].rpartition('/')[2]
311 print """
312Usage: %s [options] --import-desktop master.desktop
313 [--export-gettext to-launchpad]
314 [--import-gettext from-launchpad]
315 [--export-desktop improved.desktop]
316 [--project-name somename]
317
318 Convert a desktop file to Gettext files and back
319
320 options can be:
321 -v | --verbose verbose mode
322 --import-desktop file master desktop file (mandatory)
323 --import-gettext dir GetText files to merge
324 --export-gettext dir merged GetText files
325 --export-desktop file improved desktop file
326 --project-name name project or source package name
327
328""" % appname
329
330if '__main__' == __name__:
331 sys.stdout = codecs.getwriter('utf8')(sys.stdout)
332 try:
333 opts, args = getopt.getopt(sys.argv[1:], "dhv",
334 [ "verbose", "project-name=",
335 "import-desktop=", "export-desktop=",
336 "import-gettext=", "export-gettext=" ])
337 except getopt.GetoptError, err:
338 print str(err)
339 usage()
340 sys.exit(2)
341
342 verbose = False
343 desktop_in = None
344 desktop_out = None
345 gettext_in = None
346 gettext_out = None
347 project_name = "misconfigured-project"
348 for o, a in opts:
349 if o in ("-v", "--verbose"):
350 verbose = True
351 elif o in ("-h", "--help"):
352 usage()
353 sys.exit()
354 elif o == "--project-name":
355 project_name = a
356 elif o == "--import-desktop":
357 desktop_in = a
358 elif o == "--export-desktop":
359 desktop_out = a
360 elif o == "--import-gettext":
361 gettext_in = a
362 elif o == "--export-gettext":
363 gettext_out = a
364
365 if desktop_in is None:
366 print "Error: --import-desktop is mandatory"
367 usage()
368 sys.exit(2)
369
370 df = DesktopFile(desktop_in, src_pkg = project_name, verbose = verbose)
371 if gettext_in is not None:
372 df.import_gettext_files(gettext_in)
373 if gettext_out is not None:
374 df.export_gettext_files(gettext_out)
375 if desktop_out is not None:
376 if desktop_in != desktop_out or df.changed:
377 df.write_desktop(desktop_out)
378 exit(1 if df.changed else 0)
0379
=== added file 'update-inspector.py'
--- update-inspector.py 1970-01-01 00:00:00 +0000
+++ update-inspector.py 2024-02-28 12:58:47 +0000
@@ -0,0 +1,149 @@
1#!/usr/bin/python
2# -*- coding: utf-8 -*-
3
4# (c) 2010, Fabien Tassin <fta@ubuntu.com>
5
6# Helper to merge the localizedStrings.js strings from the WebKit inspector into
7# the inspector_strings.grd Grit template
8
9import os, re, sys, codecs
10from optparse import OptionParser
11
12class JS2Grit:
13 def __init__(self, grd = None, js = None):
14 self.js_file = None
15 self.js_strings = []
16 self.grd_file = None
17 self.order = []
18 self.data = {}
19 self.missing = []
20 self.obsolete = []
21 self.merged = []
22 if grd is not None:
23 self.import_grd(grd)
24 if js is not None:
25 self.import_js(js)
26
27 def xml2js(self, s):
28 '''
29 '<ph name="ERRORS_COUNT">%1$d<ex>2</ex></ph> errors, <ph name="WARNING_COUNT">%2$d<ex>1</ex></ph> warnings'
30 => '%d errors, %d warnings'
31 '''
32 s2 = re.sub('<ex>.*?</ex>', '', s)
33 s2 = re.sub('<ph name=".*?">%\d+\$(.*?)</ph>', r'%\1', s2)
34 s2 = re.sub('&lt;', '<', s2)
35 s2 = re.sub('&gt;', '>', s2)
36 s2 = re.sub('\'\'\'', '', s2)
37 return re.sub('<ph name=".*?">(.*?)</ph>', r'\1', s2)
38
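xml2js above converts Grit <ph> placeholders back into printf-style ones; a self-contained sketch of the same substitution chain (xml_to_js is our name; the substitutions are copied from the method):

```python
import re

def xml_to_js(s):
    # Drop <ex> examples, rewrite numbered <ph> to %d/%s, unescape brackets.
    s = re.sub('<ex>.*?</ex>', '', s)
    s = re.sub(r'<ph name=".*?">%\d+\$(.*?)</ph>', r'%\1', s)
    s = s.replace('&lt;', '<').replace('&gt;', '>').replace("'''", '')
    return re.sub('<ph name=".*?">(.*?)</ph>', r'\1', s)
```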
39 def js2xml(self, s):
40 '''
41 '%d errors, %d warnings'
42 => '<ph name="XXX">%1$d<ex>XXX</ex></ph> errors, <ph name="XXX">%2$d<ex>XXX</ex></ph> warnings'
43 '''
44 s = re.sub('<', '&lt;', s)
45 s = re.sub('>', '&gt;', s)
46 s = re.sub('^ ', '\'\'\' ', s)
47 phs = [ x for x in re.split(r'(%\d*\.?\d*[dfs])', s) if x.find('%') == 0 and x.find('%%') != 0 ]
48 if len(phs) > 1:
49 phs = re.split(r'(%\d*\.?\d*[dfs])', s)
50 j = 1
51 for i, part in enumerate(phs):
52 if part.find('%') == 0 and part.find('%%') != 0:
53 phs[i] = '<ph name="XXX">%%%d$%s<ex>XXX</ex></ph>' % (j, part[1:])
54 j += 1
55 elif len(phs) == 1:
56 phs = re.split(r'(%\d*\.?\d*[dfs])', s)
57 for i, part in enumerate(phs):
58 if part.find('%') == 0 and part.find('%%') != 0:
59 phs[i] = '<ph name="XXX">%s<ex>XXX</ex></ph>' % part
60 else:
61 return s
62 return ''.join(phs)
63
64 def import_grd(self, file):
65 self.order = []
66 self.data = {}
67 self.grd_file = file
68 fd = codecs.open(file, 'rb', encoding='utf-8')
69 file = fd.read()
70 for s in re.finditer('<message name="(.*?)" desc="(.*?)">\n\s+(.*?)\n\s+</message>', file, re.S):
71 key = self.xml2js(s.group(3))
72 self.order.append(key)
73 self.data[key] = { 'code': s.group(1), 'desc': s.group(2), 'string': s.group(3) }
74 fd.close()
75 return self.order, self.data
76
77 def import_js(self, file):
78 self.js_strings = []
79 self.js_file = file
80 fd = codecs.open(file, 'rb', encoding='utf-16')
81 file = fd.read()
82 for s in re.finditer('localizedStrings\["(.*?)"\] = "(.*?)";', file, re.S):
83 self.js_strings.append(s.group(1))
84 fd.close()
85 return self.js_strings
86
87 def merge_strings(self):
88 self.merged = []
89 self.missing = [ s for s in self.js_strings if s not in self.order ]
90 self.obsolete = [ s for s in self.order if s not in self.js_strings ]
91 for s in self.js_strings:
92 if s in self.order:
93 self.merged.append(self.data[s])
94 else:
95 self.merged.append({ 'code': 'IDS_XXX', 'desc': 'XXX', 'string': self.js2xml(s), 'key': s })
96
97 def get_new_strings_count(self):
98 return len(self.missing)
99
100 def get_obsolete_strings_count(self):
101 return len(self.obsolete)
102
103 def get_strings_count(self):
104 return len(self.js_strings)
105
106 def export_grd(self, grd):
107 fdi = codecs.open(self.grd_file, 'rb', encoding='utf-8')
108 data = fdi.read()
109 fdi.close()
110
111 fdo = codecs.open(grd, 'wb', encoding='utf-8')
112
113 # copy the header
114 pos = data.find('<messages')
115 pos += data[pos:].find('\n') + 1
116 fdo.write(data[:pos])
117
118 # write the merged strings
119 for s in self.merged:
120 if 'key' in s and s['key'] != s['string']:
121 fdo.write(" <!-- XXX: '%s' -->\n" % s['key'])
122 fdo.write(' <message name="%s" desc="%s">\n %s\n </message>\n' % \
123 (s['code'], s['desc'], s['string']))
124 # copy the footer
125 pos = data.find('</messages>')
126 pos -= pos - data[:pos].rfind('\n') - 1
127 fdo.write(data[pos:])
128 fdo.close()
129
130if '__main__' == __name__:
131 sys.stdout = codecs.getwriter('utf8')(sys.stdout)
132
133 parser = OptionParser(usage = 'Usage: %prog --grd inspector_strings.grd --js localizedStrings.js -o foo.grd')
134 parser.add_option("-j", "--js", dest="js",
135 help="read js strings from FILE", metavar="FILE")
136 parser.add_option("-g", "--grd", dest="grd",
137 help="read grd template from FILE", metavar="FILE")
138 parser.add_option("-o", "--output", dest="output",
139 help="write merged grd template to FILE", metavar="FILE")
140 (options, args) = parser.parse_args()
141
142 if options.grd is None or options.js is None or options.output is None:
143 parser.error("One of --grd, --js or --output is missing")
144 js2grd = JS2Grit(grd = options.grd, js = options.js)
145 print "Found %d strings in the js file" % js2grd.get_strings_count()
146 js2grd.merge_strings()
147 js2grd.export_grd(options.output)
148 print "Merged %d new strings, dropped %d obsolete strings" % \
149 (js2grd.get_new_strings_count(), js2grd.get_obsolete_strings_count())
150
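The placeholder conversion done by `js2xml` above can be exercised in isolation. Below is a minimal standalone sketch (the function name `wrap_placeholders` is hypothetical; it mirrors the placeholder-numbering logic of `js2xml` without the entity escaping or leading-space handling) that wraps printf-style specifiers in GRIT `<ph>` placeholders:

```python
import re

# Hypothetical sketch mirroring js2xml's placeholder logic: split on
# printf-style specifiers (%d, %s, %f, with optional width/precision),
# then wrap each in a GRIT <ph> element. When more than one specifier
# is present, each gets an explicit argument position (%1$d, %2$d, ...).
def wrap_placeholders(s):
    parts = re.split(r'(%\d*\.?\d*[dfs])', s)
    specs = [p for p in parts if p.startswith('%') and not p.startswith('%%')]
    if not specs:
        return s
    j = 1
    for i, part in enumerate(parts):
        if part.startswith('%') and not part.startswith('%%'):
            if len(specs) > 1:
                # numbered positions let translators reorder arguments
                parts[i] = '<ph name="XXX">%%%d$%s<ex>XXX</ex></ph>' % (j, part[1:])
                j += 1
            else:
                parts[i] = '<ph name="XXX">%s<ex>XXX</ex></ph>' % part
    return ''.join(parts)
```

The explicit `%1$d`-style positions matter because GRIT translations may legitimately reorder the arguments, which plain `%d` cannot express.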
=== added file 'update-pot.sh'
--- update-pot.sh 1970-01-01 00:00:00 +0000
+++ update-pot.sh 2024-02-28 12:58:47 +0000
@@ -0,0 +1,87 @@
1#!/bin/sh
2
3# Update the gettext bzr branch (imported by lp rosetta)
4# based on a merge of all the templates in the 4 chromium
5# channels.
6# (c) 2010-2011, Fabien Tassin <fta@ubuntu.com>
7
8PROJECT=chromium-browser
9PKG_DIR=/data/bot/chromium-browser.head
10BIN_DIR=/data/bot/chromium-translations-tools.head
11OUT_DIR=/data/bot/upstream/chromium-translations.head
12LPE_DIR=/data/bot/upstream/chromium-translations-exports.head
13
14SRC_TRUNK_DIR=/data/bot/upstream/chromium-browser.svn/src
15SRC_DEV_DIR=/data/bot/upstream/chromium-dev.svn/src
16SRC_BETA_DIR=/data/bot/upstream/chromium-beta.svn/src
17SRC_STABLE_DIR=/data/bot/upstream/chromium-stable.svn/src
18
19######
20
21NEW_TEMPLATES="chrome/app/chromium_strings.grd,chrome/app/generated_resources.grd,ui/base/strings/ui_strings.grd,chrome/app/policy/policy_templates.grd,webkit/glue/inspector_strings.grd,webkit/glue/webkit_strings.grd"
22TEMPLATES="chrome/app/chromium_strings.grd,chrome/app/generated_resources.grd,ui/base/strings/app_strings.grd,chrome/app/policy/policy_templates.grd,webkit/glue/inspector_strings.grd,webkit/glue/webkit_strings.grd"
23
24OPTS="--map-template-names ui/base/strings/ui_strings.grd=ui/base/strings/app_strings.grd"
25IMPORT="--import-gettext $OUT_DIR,$LPE_DIR"
26BRANCH_TRUNK="--import-grit-branch trunk:$SRC_TRUNK_DIR:$NEW_TEMPLATES"
27BRANCH_DEV="--import-grit-branch dev:$SRC_DEV_DIR:$NEW_TEMPLATES"
28BRANCH_BETA="--import-grit-branch beta:$SRC_BETA_DIR:$NEW_TEMPLATES"
29BRANCH_STABLE="--import-grit-branch stable:$SRC_STABLE_DIR:$TEMPLATES"
30
31BRANCHES="$BRANCH_TRUNK $BRANCH_DEV $BRANCH_BETA $BRANCH_STABLE"
32
33(cd $LPE_DIR ; bzr pull -q)
34(cd $BIN_DIR ; bzr pull -q)
35
36cd $SRC_TRUNK_DIR
37$BIN_DIR/chromium2pot.py $BRANCHES $IMPORT $OPTS --export-gettext $OUT_DIR $NEW_TEMPLATES
38RET=$?
39cd $OUT_DIR
40set -e
41for f in */*.pot */*.po ; do
42 msgfmt -c $f
43done
44set +e
45rm -f messages.mo
46
47# desktop file
48DF_DIR="desktop_file"
49DF=$PROJECT.desktop
50DF_ARGS="-v --import-desktop $DF_DIR/$DF --project-name $PROJECT"
51if [ ! -d $DF_DIR ] ; then
52 mkdir $DF_DIR
53 RET=1
54fi
55if [ ! -e $DF_DIR/$DF ] ; then # no desktop file yet
56 cp -va $PKG_DIR/debian/$DF $DF_DIR
57 $BIN_DIR/desktop2gettext.py $DF_ARGS --export-gettext $DF_DIR
58 bzr add $DF_DIR
59 RET=1
60fi
61if [ -d $LPE_DIR/$DF_DIR ] ; then
62 $BIN_DIR/desktop2gettext.py $DF_ARGS --import-gettext $LPE_DIR/$DF_DIR --export-gettext $DF_DIR --export-desktop $DF_DIR/$DF
63 R=$?
64 if [ $R = 1 ] ; then
65 RET=1
66 fi
67 diff -u $PKG_DIR/debian/$DF $DF_DIR/$DF
68fi
69
70if [ $RET = 0 ] ; then
71 # no changes
72 exit 0
73fi
74
75REV=$(svn info $SRC_TRUNK_DIR | grep 'Last Changed Rev:' | cut -d' ' -f4)
76VERSION=$(cut -d= -f2 $SRC_TRUNK_DIR/chrome/VERSION | sed -e 's,$,.,' | tr -d '\n' | sed -e 's/.$//')
77MSG="Strings update for $VERSION r$REV"
78
79if [ "Z$1" != Z ] ; then
80 MSG=$1
81fi
82
83cd $OUT_DIR
84bzr add
85bzr commit -q -m "* $MSG"
86bzr push -q > /dev/null
87exit 0
