Merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1 into lp:ubuntu/raring/sphinx
- Raring (13.04)
- 1.1.3+dfsg-7ubuntu1
- Merge into raring
Status: | Merged |
---|---|
Merge reported by: | Dmitry Shachnev |
Merged at revision: | not available |
Proposed branch: | lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1 |
Merge into: | lp:ubuntu/raring/sphinx |
Diff against target: |
3328 lines (+2631/-451) 25 files modified
.pc/applied-patches (+3/-1) .pc/fix_literal_block_warning.diff/sphinx/environment.py (+1807/-0) .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py (+0/-345) .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py (+345/-0) .pc/parallel_2to3.diff/setup.py (+209/-0) debian/changelog (+33/-2) debian/jstest/run-tests (+1/-1) debian/patches/fix_literal_block_warning.diff (+17/-0) debian/patches/fix_manpages_generation_with_new_docutils.diff (+0/-33) debian/patches/initialize_autodoc.diff (+1/-1) debian/patches/l10n_fixes.diff (+49/-33) debian/patches/manpage_writer_docutils_0.10_api.diff (+31/-0) debian/patches/parallel_2to3.diff (+31/-0) debian/patches/python3_test_build_dir.diff (+1/-1) debian/patches/series (+3/-1) debian/patches/show_more_stack_frames.diff (+1/-1) debian/patches/sort_stopwords.diff (+2/-2) debian/patches/sphinxcontrib_namespace.diff (+1/-1) debian/patches/unversioned_grammar_pickle.diff (+1/-1) debian/rules (+8/-3) debian/source/options (+1/-0) debian/tests/control (+3/-0) debian/tests/sphinx-doc (+20/-0) setup.py (+17/-0) sphinx/environment.py (+46/-25) |
To merge this branch: | bzr merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1 |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Daniel Holbach (community) | Approve | ||
Ubuntu branches | Pending | ||
Review via email: mp+148869@code.launchpad.net |
Commit message
Description of the change
Looks like upstream is not going to release 1.3 anytime soon, so this branch merges some recent Debian changes and backports a minor fix from upstream.
This also includes a "final" version of the patch for bug 1068493, which will make it possible to SRU it.
sphinx (1.1.3+dfsg-7ubuntu1) raring; urgency=low
* Merge with Debian experimental. Remaining Ubuntu changes:
- Switch to dh_python2.
- debian/rules: export NO_PKG_MANGLE=1 in order to not have translations
stripped.
- debian/control: Drop the build-dependency on python-whoosh.
- debian/control: Add "XS-Testsuite: autopkgtest" header.
* debian/patches/fix_manpages_generation_with_new_docutils.diff:
dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff.
* debian/patches/fix_literal_block_warning.diff: backport upstream fix for
false-positive "Literal block expected; none found." warnings when
building l10n projects.
-- Dmitry Shachnev <email address hidden> Sat, 16 Feb 2013 14:51:12 +0400
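The changelog entry above enables DEP-8 autopkgtests (the diff adds debian/tests/control and debian/tests/sphinx-doc). The actual test script is not reproduced in this excerpt, so the following is only a hypothetical sketch of what such a smoke test typically does — build a throwaway project and fail loudly on a non-zero exit status; the function names here are assumptions, not the real script:

```python
# Hypothetical sketch of an autopkgtest-style smoke test for Sphinx.
# The real debian/tests/sphinx-doc script is not shown in this merge
# proposal excerpt; names and structure here are assumptions.
import subprocess


def sphinx_build_cmd(srcdir, outdir, builder="html"):
    """Command line for a plain sphinx-build of srcdir into outdir."""
    return ["sphinx-build", "-b", builder, srcdir, outdir]


def run_smoke_test(srcdir, outdir):
    # check_call raises CalledProcessError on a non-zero exit status,
    # which is exactly what an autopkgtest wants: fail loudly.
    subprocess.check_call(sphinx_build_cmd(srcdir, outdir))
```

A DEP-8 runner would invoke something like this against a minimal sphinx-quickstart project shipped with the test.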
- 42. By Dmitry Shachnev

  Releasing version 1.1.3+dfsg-7ubuntu1.
Daniel Holbach (dholbach) wrote:
Unfortunately it FTBFS because of a segfault for some reason: https:/
Dmitry Shachnev (mitya57) wrote:
Segfaulting again? :-(
/me looks at python-webkit suspiciously, but will ask buildd admins to
investigate the problem tomorrow.
Dmitry Shachnev (mitya57) wrote:
[Commenting on right MP this time]
I can't reproduce the crash in the up-to-date raring pbuilder chroot. William Grant suggested that I try a chroot from https:/
Preview Diff
1 | === modified file '.pc/applied-patches' |
2 | --- .pc/applied-patches 2012-11-27 19:20:44 +0000 |
3 | +++ .pc/applied-patches 2013-02-16 12:25:23 +0000 |
4 | @@ -8,8 +8,10 @@ |
5 | fix_nepali_po.diff |
6 | pygments_byte_strings.diff |
7 | fix_shorthandoff.diff |
8 | -fix_manpages_generation_with_new_docutils.diff |
9 | test_build_html_rb.diff |
10 | sort_stopwords.diff |
11 | support_python_3.3.diff |
12 | l10n_fixes.diff |
13 | +manpage_writer_docutils_0.10_api.diff |
14 | +parallel_2to3.diff |
15 | +fix_literal_block_warning.diff |
16 | |
17 | === added directory '.pc/fix_literal_block_warning.diff' |
18 | === added directory '.pc/fix_literal_block_warning.diff/sphinx' |
19 | === added file '.pc/fix_literal_block_warning.diff/sphinx/environment.py' |
20 | --- .pc/fix_literal_block_warning.diff/sphinx/environment.py 1970-01-01 00:00:00 +0000 |
21 | +++ .pc/fix_literal_block_warning.diff/sphinx/environment.py 2013-02-16 12:25:23 +0000 |
22 | @@ -0,0 +1,1807 @@ |
23 | +# -*- coding: utf-8 -*- |
24 | +""" |
25 | + sphinx.environment |
26 | + ~~~~~~~~~~~~~~~~~~ |
27 | + |
28 | + Global creation environment. |
29 | + |
30 | + :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. |
31 | + :license: BSD, see LICENSE for details. |
32 | +""" |
33 | + |
34 | +import re |
35 | +import os |
36 | +import sys |
37 | +import time |
38 | +import types |
39 | +import codecs |
40 | +import imghdr |
41 | +import string |
42 | +import unicodedata |
43 | +import cPickle as pickle |
44 | +from os import path |
45 | +from glob import glob |
46 | +from itertools import izip, groupby |
47 | + |
48 | +from docutils import nodes |
49 | +from docutils.io import FileInput, NullOutput |
50 | +from docutils.core import Publisher |
51 | +from docutils.utils import Reporter, relative_path, new_document, \ |
52 | + get_source_line |
53 | +from docutils.readers import standalone |
54 | +from docutils.parsers.rst import roles, directives, Parser as RSTParser |
55 | +from docutils.parsers.rst.languages import en as english |
56 | +from docutils.parsers.rst.directives.html import MetaBody |
57 | +from docutils.writers import UnfilteredWriter |
58 | +from docutils.transforms import Transform |
59 | +from docutils.transforms.parts import ContentsFilter |
60 | + |
61 | +from sphinx import addnodes |
62 | +from sphinx.util import url_re, get_matching_docs, docname_join, split_into, \ |
63 | + FilenameUniqDict |
64 | +from sphinx.util.nodes import clean_astext, make_refnode, extract_messages, \ |
65 | + WarningStream |
66 | +from sphinx.util.osutil import movefile, SEP, ustrftime, find_catalog |
67 | +from sphinx.util.matching import compile_matchers |
68 | +from sphinx.util.pycompat import all, class_types |
69 | +from sphinx.util.websupport import is_commentable |
70 | +from sphinx.errors import SphinxError, ExtensionError |
71 | +from sphinx.locale import _, init as init_locale |
72 | +from sphinx.versioning import add_uids, merge_doctrees |
73 | + |
74 | +fs_encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() |
75 | + |
76 | +orig_role_function = roles.role |
77 | +orig_directive_function = directives.directive |
78 | + |
79 | +class ElementLookupError(Exception): pass |
80 | + |
81 | + |
82 | +default_settings = { |
83 | + 'embed_stylesheet': False, |
84 | + 'cloak_email_addresses': True, |
85 | + 'pep_base_url': 'http://www.python.org/dev/peps/', |
86 | + 'rfc_base_url': 'http://tools.ietf.org/html/', |
87 | + 'input_encoding': 'utf-8-sig', |
88 | + 'doctitle_xform': False, |
89 | + 'sectsubtitle_xform': False, |
90 | + 'halt_level': 5, |
91 | +} |
92 | + |
93 | +# This is increased every time an environment attribute is added |
94 | +# or changed to properly invalidate pickle files. |
95 | +ENV_VERSION = 41 |
96 | + |
97 | + |
98 | +default_substitutions = set([ |
99 | + 'version', |
100 | + 'release', |
101 | + 'today', |
102 | +]) |
103 | + |
104 | +dummy_reporter = Reporter('', 4, 4) |
105 | + |
106 | +versioning_conditions = { |
107 | + 'none': False, |
108 | + 'text': nodes.TextElement, |
109 | + 'commentable': is_commentable, |
110 | +} |
111 | + |
112 | + |
113 | +class NoUri(Exception): |
114 | + """Raised by get_relative_uri if there is no URI available.""" |
115 | + pass |
116 | + |
117 | + |
118 | +class DefaultSubstitutions(Transform): |
119 | + """ |
120 | + Replace some substitutions if they aren't defined in the document. |
121 | + """ |
122 | + # run before the default Substitutions |
123 | + default_priority = 210 |
124 | + |
125 | + def apply(self): |
126 | + config = self.document.settings.env.config |
127 | + # only handle those not otherwise defined in the document |
128 | + to_handle = default_substitutions - set(self.document.substitution_defs) |
129 | + for ref in self.document.traverse(nodes.substitution_reference): |
130 | + refname = ref['refname'] |
131 | + if refname in to_handle: |
132 | + text = config[refname] |
133 | + if refname == 'today' and not text: |
134 | + # special handling: can also specify a strftime format |
135 | + text = ustrftime(config.today_fmt or _('%B %d, %Y')) |
136 | + ref.replace_self(nodes.Text(text, text)) |
137 | + |
138 | + |
139 | +class MoveModuleTargets(Transform): |
140 | + """ |
141 | + Move module targets that are the first thing in a section to the section |
142 | + title. |
143 | + |
144 | + XXX Python specific |
145 | + """ |
146 | + default_priority = 210 |
147 | + |
148 | + def apply(self): |
149 | + for node in self.document.traverse(nodes.target): |
150 | + if not node['ids']: |
151 | + continue |
152 | + if (node.has_key('ismod') and |
153 | + node.parent.__class__ is nodes.section and |
154 | + # index 0 is the section title node |
155 | + node.parent.index(node) == 1): |
156 | + node.parent['ids'][0:0] = node['ids'] |
157 | + node.parent.remove(node) |
158 | + |
159 | + |
160 | +class HandleCodeBlocks(Transform): |
161 | + """ |
162 | + Several code block related transformations. |
163 | + """ |
164 | + default_priority = 210 |
165 | + |
166 | + def apply(self): |
167 | + # move doctest blocks out of blockquotes |
168 | + for node in self.document.traverse(nodes.block_quote): |
169 | + if all(isinstance(child, nodes.doctest_block) for child |
170 | + in node.children): |
171 | + node.replace_self(node.children) |
172 | + # combine successive doctest blocks |
173 | + #for node in self.document.traverse(nodes.doctest_block): |
174 | + # if node not in node.parent.children: |
175 | + # continue |
176 | + # parindex = node.parent.index(node) |
177 | + # while len(node.parent) > parindex+1 and \ |
178 | + # isinstance(node.parent[parindex+1], nodes.doctest_block): |
179 | + # node[0] = nodes.Text(node[0] + '\n\n' + |
180 | + # node.parent[parindex+1][0]) |
181 | + # del node.parent[parindex+1] |
182 | + |
183 | + |
184 | +class SortIds(Transform): |
185 | + """ |
186 | + Sort section IDs so that the "id[0-9]+" one comes last. |
187 | + """ |
188 | + default_priority = 261 |
189 | + |
190 | + def apply(self): |
191 | + for node in self.document.traverse(nodes.section): |
192 | + if len(node['ids']) > 1 and node['ids'][0].startswith('id'): |
193 | + node['ids'] = node['ids'][1:] + [node['ids'][0]] |
194 | + |
195 | + |
196 | +class CitationReferences(Transform): |
197 | + """ |
198 | + Replace citation references by pending_xref nodes before the default |
199 | + docutils transform tries to resolve them. |
200 | + """ |
201 | + default_priority = 619 |
202 | + |
203 | + def apply(self): |
204 | + for citnode in self.document.traverse(nodes.citation_reference): |
205 | + cittext = citnode.astext() |
206 | + refnode = addnodes.pending_xref(cittext, reftype='citation', |
207 | + reftarget=cittext, refwarn=True) |
208 | + refnode.line = citnode.line or citnode.parent.line |
209 | + refnode += nodes.Text('[' + cittext + ']') |
210 | + citnode.parent.replace(citnode, refnode) |
211 | + |
212 | + |
213 | +class Locale(Transform): |
214 | + """ |
215 | + Replace translatable nodes with their translated doctree. |
216 | + """ |
217 | + default_priority = 0 |
218 | + def apply(self): |
219 | + env = self.document.settings.env |
220 | + settings, source = self.document.settings, self.document['source'] |
221 | + # XXX check if this is reliable |
222 | + assert source.startswith(env.srcdir) |
223 | + docname = path.splitext(relative_path(env.srcdir, source))[0] |
224 | + textdomain = find_catalog(docname, |
225 | + self.document.settings.gettext_compact) |
226 | + |
227 | + # fetch translations |
228 | + dirs = [path.join(env.srcdir, directory) |
229 | + for directory in env.config.locale_dirs] |
230 | + catalog, has_catalog = init_locale(dirs, env.config.language, |
231 | + textdomain) |
232 | + if not has_catalog: |
233 | + return |
234 | + |
235 | + parser = RSTParser() |
236 | + |
237 | + for node, msg in extract_messages(self.document): |
238 | + msgstr = catalog.gettext(msg) |
239 | + # XXX add marker to untranslated parts |
240 | + if not msgstr or msgstr == msg: # as-of-yet untranslated |
241 | + continue |
242 | + |
243 | + patch = new_document(source, settings) |
244 | + parser.parse(msgstr, patch) |
245 | + patch = patch[0] |
246 | + # XXX doctest and other block markup |
247 | + if not isinstance(patch, nodes.paragraph): |
248 | + continue # skip for now |
249 | + |
250 | + # auto-numbered foot note reference should use original 'ids'. |
251 | + is_autonumber_footnote_ref = lambda node: \ |
252 | + isinstance(node, nodes.footnote_reference) \ |
253 | + and node.get('auto') == 1 |
254 | + old_foot_refs = node.traverse(is_autonumber_footnote_ref) |
255 | + new_foot_refs = patch.traverse(is_autonumber_footnote_ref) |
256 | + if len(old_foot_refs) != len(new_foot_refs): |
257 | + env.warn_node('inconsistent footnote references in ' |
258 | + 'translated message', node) |
259 | + for old, new in zip(old_foot_refs, new_foot_refs): |
260 | + new['ids'] = old['ids'] |
261 | + self.document.autofootnote_refs.remove(old) |
262 | + self.document.note_autofootnote_ref(new) |
263 | + |
264 | + # reference should use original 'refname'. |
265 | + # * reference target ".. _Python: ..." is not translatable. |
266 | + # * section refname is not translatable. |
267 | + # * inline reference "`Python <...>`_" has no 'refname'. |
268 | + is_refnamed_ref = lambda node: \ |
269 | + isinstance(node, nodes.reference) \ |
270 | + and 'refname' in node |
271 | + old_refs = node.traverse(is_refnamed_ref) |
272 | + new_refs = patch.traverse(is_refnamed_ref) |
273 | + applied_refname_map = {} |
274 | + if len(old_refs) != len(new_refs): |
275 | + env.warn_node('inconsistent references in ' |
276 | + 'translated message', node) |
277 | + for new in new_refs: |
278 | + if new['refname'] in applied_refname_map: |
279 | + # 2nd appearance of the reference |
280 | + new['refname'] = applied_refname_map[new['refname']] |
281 | + elif old_refs: |
282 | + # 1st appearance of the reference in old_refs |
283 | + old = old_refs.pop(0) |
284 | + refname = old['refname'] |
285 | + new['refname'] = refname |
286 | + applied_refname_map[new['refname']] = refname |
287 | + else: |
288 | + # the reference is not found in old_refs |
289 | + applied_refname_map[new['refname']] = new['refname'] |
290 | + |
291 | + self.document.note_refname(new) |
292 | + |
293 | + for child in patch.children: # update leaves |
294 | + child.parent = node |
295 | + node.children = patch.children |
296 | + |
297 | + |
298 | +class SphinxStandaloneReader(standalone.Reader): |
299 | + """ |
300 | + Add our own transforms. |
301 | + """ |
302 | + transforms = [Locale, CitationReferences, DefaultSubstitutions, |
303 | + MoveModuleTargets, HandleCodeBlocks, SortIds] |
304 | + |
305 | + def get_transforms(self): |
306 | + return standalone.Reader.get_transforms(self) + self.transforms |
307 | + |
308 | + |
309 | +class SphinxDummyWriter(UnfilteredWriter): |
310 | + supported = ('html',) # needed to keep "meta" nodes |
311 | + |
312 | + def translate(self): |
313 | + pass |
314 | + |
315 | + |
316 | +class SphinxContentsFilter(ContentsFilter): |
317 | + """ |
318 | + Used with BuildEnvironment.add_toc_from() to discard cross-file links |
319 | + within table-of-contents link nodes. |
320 | + """ |
321 | + def visit_pending_xref(self, node): |
322 | + text = node.astext() |
323 | + self.parent.append(nodes.literal(text, text)) |
324 | + raise nodes.SkipNode |
325 | + |
326 | + def visit_image(self, node): |
327 | + raise nodes.SkipNode |
328 | + |
329 | + |
330 | +class BuildEnvironment: |
331 | + """ |
332 | + The environment in which the ReST files are translated. |
333 | + Stores an inventory of cross-file targets and provides doctree |
334 | + transformations to resolve links to them. |
335 | + """ |
336 | + |
337 | + # --------- ENVIRONMENT PERSISTENCE ---------------------------------------- |
338 | + |
339 | + @staticmethod |
340 | + def frompickle(config, filename): |
341 | + picklefile = open(filename, 'rb') |
342 | + try: |
343 | + env = pickle.load(picklefile) |
344 | + finally: |
345 | + picklefile.close() |
346 | + if env.version != ENV_VERSION: |
347 | + raise IOError('env version not current') |
348 | + env.config.values = config.values |
349 | + return env |
350 | + |
351 | + def topickle(self, filename): |
352 | + # remove unpicklable attributes |
353 | + warnfunc = self._warnfunc |
354 | + self.set_warnfunc(None) |
355 | + values = self.config.values |
356 | + del self.config.values |
357 | + domains = self.domains |
358 | + del self.domains |
359 | + # first write to a temporary file, so that if dumping fails, |
360 | + # the existing environment won't be overwritten |
361 | + picklefile = open(filename + '.tmp', 'wb') |
362 | + # remove potentially pickling-problematic values from config |
363 | + for key, val in vars(self.config).items(): |
364 | + if key.startswith('_') or \ |
365 | + isinstance(val, types.ModuleType) or \ |
366 | + isinstance(val, types.FunctionType) or \ |
367 | + isinstance(val, class_types): |
368 | + del self.config[key] |
369 | + try: |
370 | + pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL) |
371 | + finally: |
372 | + picklefile.close() |
373 | + movefile(filename + '.tmp', filename) |
374 | + # reset attributes |
375 | + self.domains = domains |
376 | + self.config.values = values |
377 | + self.set_warnfunc(warnfunc) |
378 | + |
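The topickle/frompickle pair in the hunk above uses a classic persistence pattern: dump to a temporary file first so a failed dump never clobbers the existing environment, atomically move it into place, and reject on load any pickle whose version does not match ENV_VERSION. A minimal stand-alone sketch of that pattern, using a plain dict for the environment and os.replace in place of sphinx.util.osutil.movefile:

```python
import os
import pickle

ENV_VERSION = 41  # bumped whenever the pickled layout changes


def save_env(env, filename):
    # Dump to a temp file first: if pickling fails midway, the
    # previously saved environment on disk is left untouched.
    tmp = filename + '.tmp'
    with open(tmp, 'wb') as f:
        pickle.dump(env, f, pickle.HIGHEST_PROTOCOL)
    os.replace(tmp, filename)  # atomic rename on POSIX


def load_env(filename):
    with open(filename, 'rb') as f:
        env = pickle.load(f)
    if env.get('version') != ENV_VERSION:
        # Stale on-disk format: the caller should rebuild from scratch.
        raise IOError('env version not current')
    return env
```

The version check is why the diff's comment says ENV_VERSION "is increased every time an environment attribute is added or changed": a mismatch simply invalidates the cache rather than risking attribute errors on a stale object.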
379 | + # --------- ENVIRONMENT INITIALIZATION ------------------------------------- |
380 | + |
381 | + def __init__(self, srcdir, doctreedir, config): |
382 | + self.doctreedir = doctreedir |
383 | + self.srcdir = srcdir |
384 | + self.config = config |
385 | + |
386 | + # the method of doctree versioning; see set_versioning_method |
387 | + self.versioning_condition = None |
388 | + |
389 | + # the application object; only set while update() runs |
390 | + self.app = None |
391 | + |
392 | + # all the registered domains, set by the application |
393 | + self.domains = {} |
394 | + |
395 | + # the docutils settings for building |
396 | + self.settings = default_settings.copy() |
397 | + self.settings['env'] = self |
398 | + |
399 | + # the function to write warning messages with |
400 | + self._warnfunc = None |
401 | + |
402 | + # this is to invalidate old pickles |
403 | + self.version = ENV_VERSION |
404 | + |
405 | + # make this a set for faster testing |
406 | + self._nitpick_ignore = set(self.config.nitpick_ignore) |
407 | + |
408 | + # All "docnames" here are /-separated and relative and exclude |
409 | + # the source suffix. |
410 | + |
411 | + self.found_docs = set() # contains all existing docnames |
412 | + self.all_docs = {} # docname -> mtime at the time of build |
413 | + # contains all built docnames |
414 | + self.dependencies = {} # docname -> set of dependent file |
415 | + # names, relative to documentation root |
416 | + self.reread_always = set() # docnames to re-read unconditionally on |
417 | + # next build |
418 | + |
419 | + # File metadata |
420 | + self.metadata = {} # docname -> dict of metadata items |
421 | + |
422 | + # TOC inventory |
423 | + self.titles = {} # docname -> title node |
424 | + self.longtitles = {} # docname -> title node; only different if |
425 | + # set differently with title directive |
426 | + self.tocs = {} # docname -> table of contents nodetree |
427 | + self.toc_num_entries = {} # docname -> number of real entries |
428 | + # used to determine when to show the TOC |
429 | + # in a sidebar (don't show if it's only one item) |
430 | + self.toc_secnumbers = {} # docname -> dict of sectionid -> number |
431 | + |
432 | + self.toctree_includes = {} # docname -> list of toctree includefiles |
433 | + self.files_to_rebuild = {} # docname -> set of files |
434 | + # (containing its TOCs) to rebuild too |
435 | + self.glob_toctrees = set() # docnames that have :glob: toctrees |
436 | + self.numbered_toctrees = set() # docnames that have :numbered: toctrees |
437 | + |
438 | + # domain-specific inventories, here to be pickled |
439 | + self.domaindata = {} # domainname -> domain-specific dict |
440 | + |
441 | + # Other inventories |
442 | + self.citations = {} # citation name -> docname, labelid |
443 | + self.indexentries = {} # docname -> list of |
444 | + # (type, string, target, aliasname) |
445 | + self.versionchanges = {} # version -> list of (type, docname, |
446 | + # lineno, module, descname, content) |
447 | + |
448 | + # these map absolute path -> (docnames, unique filename) |
449 | + self.images = FilenameUniqDict() |
450 | + self.dlfiles = FilenameUniqDict() |
451 | + |
452 | + # temporary data storage while reading a document |
453 | + self.temp_data = {} |
454 | + |
455 | + def set_warnfunc(self, func): |
456 | + self._warnfunc = func |
457 | + self.settings['warning_stream'] = WarningStream(func) |
458 | + |
459 | + def set_versioning_method(self, method): |
460 | + """This sets the doctree versioning method for this environment. |
461 | + |
462 | + Versioning methods are a builder property; only builders with the same |
463 | + versioning method can share the same doctree directory. Therefore, we |
464 | + raise an exception if the user tries to use an environment with an |
465 | + incompatible versioning method. |
466 | + """ |
467 | + if method not in versioning_conditions: |
468 | + raise ValueError('invalid versioning method: %r' % method) |
469 | + condition = versioning_conditions[method] |
470 | + if self.versioning_condition not in (None, condition): |
471 | + raise SphinxError('This environment is incompatible with the ' |
472 | + 'selected builder, please choose another ' |
473 | + 'doctree directory.') |
474 | + self.versioning_condition = condition |
475 | + |
476 | + def warn(self, docname, msg, lineno=None): |
477 | + # strange argument order is due to backwards compatibility |
478 | + self._warnfunc(msg, (docname, lineno)) |
479 | + |
480 | + def warn_node(self, msg, node): |
481 | + self._warnfunc(msg, '%s:%s' % get_source_line(node)) |
482 | + |
483 | + def clear_doc(self, docname): |
484 | + """Remove all traces of a source file in the inventory.""" |
485 | + if docname in self.all_docs: |
486 | + self.all_docs.pop(docname, None) |
487 | + self.reread_always.discard(docname) |
488 | + self.metadata.pop(docname, None) |
489 | + self.dependencies.pop(docname, None) |
490 | + self.titles.pop(docname, None) |
491 | + self.longtitles.pop(docname, None) |
492 | + self.tocs.pop(docname, None) |
493 | + self.toc_secnumbers.pop(docname, None) |
494 | + self.toc_num_entries.pop(docname, None) |
495 | + self.toctree_includes.pop(docname, None) |
496 | + self.indexentries.pop(docname, None) |
497 | + self.glob_toctrees.discard(docname) |
498 | + self.numbered_toctrees.discard(docname) |
499 | + self.images.purge_doc(docname) |
500 | + self.dlfiles.purge_doc(docname) |
501 | + |
502 | + for subfn, fnset in self.files_to_rebuild.items(): |
503 | + fnset.discard(docname) |
504 | + if not fnset: |
505 | + del self.files_to_rebuild[subfn] |
506 | + for key, (fn, _) in self.citations.items(): |
507 | + if fn == docname: |
508 | + del self.citations[key] |
509 | + for version, changes in self.versionchanges.items(): |
510 | + new = [change for change in changes if change[1] != docname] |
511 | + changes[:] = new |
512 | + |
513 | + for domain in self.domains.values(): |
514 | + domain.clear_doc(docname) |
515 | + |
516 | + def doc2path(self, docname, base=True, suffix=None): |
517 | + """Return the filename for the document name. |
518 | + |
519 | + If *base* is True, return absolute path under self.srcdir. |
520 | + If *base* is None, return relative path to self.srcdir. |
521 | + If *base* is a path string, return absolute path under that. |
522 | + If *suffix* is not None, add it instead of config.source_suffix. |
523 | + """ |
524 | + docname = docname.replace(SEP, path.sep) |
525 | + suffix = suffix or self.config.source_suffix |
526 | + if base is True: |
527 | + return path.join(self.srcdir, docname) + suffix |
528 | + elif base is None: |
529 | + return docname + suffix |
530 | + else: |
531 | + return path.join(base, docname) + suffix |
532 | + |
533 | + def relfn2path(self, filename, docname=None): |
534 | + """Return paths to a file referenced from a document, relative to |
535 | + documentation root and absolute. |
536 | + |
537 | + Absolute filenames are relative to the source dir, while relative |
538 | + filenames are relative to the dir of the containing document. |
539 | + """ |
540 | + if filename.startswith('/') or filename.startswith(os.sep): |
541 | + rel_fn = filename[1:] |
542 | + else: |
543 | + docdir = path.dirname(self.doc2path(docname or self.docname, |
544 | + base=None)) |
545 | + rel_fn = path.join(docdir, filename) |
546 | + try: |
547 | + return rel_fn, path.join(self.srcdir, rel_fn) |
548 | + except UnicodeDecodeError: |
549 | + # the source directory is a bytestring with non-ASCII characters; |
550 | + # let's try to encode the rel_fn in the file system encoding |
551 | + enc_rel_fn = rel_fn.encode(sys.getfilesystemencoding()) |
552 | + return rel_fn, path.join(self.srcdir, enc_rel_fn) |
553 | + |
554 | + def find_files(self, config): |
555 | + """Find all source files in the source dir and put them in |
556 | + self.found_docs. |
557 | + """ |
558 | + matchers = compile_matchers( |
559 | + config.exclude_patterns[:] + |
560 | + config.exclude_trees + |
561 | + [d + config.source_suffix for d in config.unused_docs] + |
562 | + ['**/' + d for d in config.exclude_dirnames] + |
563 | + ['**/_sources', '.#*'] |
564 | + ) |
565 | + self.found_docs = set(get_matching_docs( |
566 | + self.srcdir, config.source_suffix, exclude_matchers=matchers)) |
567 | + |
568 | + def get_outdated_files(self, config_changed): |
569 | + """Return (added, changed, removed) sets.""" |
570 | + # clear all files no longer present |
571 | + removed = set(self.all_docs) - self.found_docs |
572 | + |
573 | + added = set() |
574 | + changed = set() |
575 | + |
576 | + if config_changed: |
577 | + # config values affect e.g. substitutions |
578 | + added = self.found_docs |
579 | + else: |
580 | + for docname in self.found_docs: |
581 | + if docname not in self.all_docs: |
582 | + added.add(docname) |
583 | + continue |
584 | + # if the doctree file is not there, rebuild |
585 | + if not path.isfile(self.doc2path(docname, self.doctreedir, |
586 | + '.doctree')): |
587 | + changed.add(docname) |
588 | + continue |
589 | + # check the "reread always" list |
590 | + if docname in self.reread_always: |
591 | + changed.add(docname) |
592 | + continue |
593 | + # check the mtime of the document |
594 | + mtime = self.all_docs[docname] |
595 | + newmtime = path.getmtime(self.doc2path(docname)) |
596 | + if newmtime > mtime: |
597 | + changed.add(docname) |
598 | + continue |
599 | + # finally, check the mtime of dependencies |
600 | + for dep in self.dependencies.get(docname, ()): |
601 | + try: |
602 | + # this will do the right thing when dep is absolute too |
603 | + deppath = path.join(self.srcdir, dep) |
604 | + if not path.isfile(deppath): |
605 | + changed.add(docname) |
606 | + break |
607 | + depmtime = path.getmtime(deppath) |
608 | + if depmtime > mtime: |
609 | + changed.add(docname) |
610 | + break |
611 | + except EnvironmentError: |
612 | + # give it another chance |
613 | + changed.add(docname) |
614 | + break |
615 | + |
616 | + return added, changed, removed |
617 | + |
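get_outdated_files in the hunk above classifies documents by comparing each source's mtime on disk against the mtime recorded at the last build. A reduced sketch of the same added/changed/removed logic (the doctree-file, reread_always, and dependency checks are omitted for brevity):

```python
from os import path


def get_outdated(found_docs, built_mtimes, doc_path):
    """Reduced version of BuildEnvironment.get_outdated_files:
    found_docs   -- docnames currently present in the source tree
    built_mtimes -- docname -> mtime recorded at the last build
    doc_path     -- callable mapping a docname to its source file path
    """
    # docs that were built before but no longer exist in the source tree
    removed = set(built_mtimes) - set(found_docs)
    added, changed = set(), set()
    for doc in found_docs:
        if doc not in built_mtimes:
            added.add(doc)                  # never built before
        elif path.getmtime(doc_path(doc)) > built_mtimes[doc]:
            changed.add(doc)                # source newer than last build
    return added, changed, removed
```

The real method additionally treats a missing .doctree cache file or a newer dependency as "changed", and rereads everything when an env-affecting config value changed.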
618 | + def update(self, config, srcdir, doctreedir, app=None): |
619 | + """(Re-)read all files new or changed since last update. |
620 | + |
621 | + Returns a summary, the total count of documents to reread and an |
622 | + iterator that yields docnames as it processes them. Store all |
623 | + environment docnames in the canonical format (ie using SEP as a |
624 | + separator in place of os.path.sep). |
625 | + """ |
626 | + config_changed = False |
627 | + if self.config is None: |
628 | + msg = '[new config] ' |
629 | + config_changed = True |
630 | + else: |
631 | + # check if a config value was changed that affects how |
632 | + # doctrees are read |
633 | + for key, descr in config.values.iteritems(): |
634 | + if descr[1] != 'env': |
635 | + continue |
636 | + if self.config[key] != config[key]: |
637 | + msg = '[config changed] ' |
638 | + config_changed = True |
639 | + break |
640 | + else: |
641 | + msg = '' |
642 | + # this value is not covered by the above loop because it is handled |
643 | + # specially by the config class |
644 | + if self.config.extensions != config.extensions: |
645 | + msg = '[extensions changed] ' |
646 | + config_changed = True |
647 | + # the source and doctree directories may have been relocated |
648 | + self.srcdir = srcdir |
649 | + self.doctreedir = doctreedir |
650 | + self.find_files(config) |
651 | + self.config = config |
652 | + |
653 | + added, changed, removed = self.get_outdated_files(config_changed) |
654 | + |
655 | + # allow user intervention as well |
656 | + for docs in app.emit('env-get-outdated', self, added, changed, removed): |
657 | + changed.update(set(docs) & self.found_docs) |
658 | + |
659 | + # if files were added or removed, all documents with globbed toctrees |
660 | + # must be reread |
661 | + if added or removed: |
662 | + # ... but not those that already were removed |
663 | + changed.update(self.glob_toctrees & self.found_docs) |
664 | + |
665 | + msg += '%s added, %s changed, %s removed' % (len(added), len(changed), |
666 | + len(removed)) |
667 | + |
668 | + def update_generator(): |
669 | + self.app = app |
670 | + |
671 | + # clear all files no longer present |
672 | + for docname in removed: |
673 | + if app: |
674 | + app.emit('env-purge-doc', self, docname) |
675 | + self.clear_doc(docname) |
676 | + |
677 | + # read all new and changed files |
678 | + for docname in sorted(added | changed): |
679 | + yield docname |
680 | + self.read_doc(docname, app=app) |
681 | + |
682 | + if config.master_doc not in self.all_docs: |
683 | + self.warn(None, 'master file %s not found' % |
684 | + self.doc2path(config.master_doc)) |
685 | + |
686 | + self.app = None |
687 | + if app: |
688 | + app.emit('env-updated', self) |
689 | + |
690 | + return msg, len(added | changed), update_generator() |
691 | + |
692 | + def check_dependents(self, already): |
693 | + to_rewrite = self.assign_section_numbers() |
694 | + for docname in to_rewrite: |
695 | + if docname not in already: |
696 | + yield docname |
697 | + |
698 | + # --------- SINGLE FILE READING -------------------------------------------- |
699 | + |
700 | + def warn_and_replace(self, error): |
701 | + """Custom decoding error handler that warns and replaces.""" |
702 | + linestart = error.object.rfind('\n', 0, error.start) |
703 | + lineend = error.object.find('\n', error.start) |
704 | + if lineend == -1: lineend = len(error.object) |
705 | + lineno = error.object.count('\n', 0, error.start) + 1 |
706 | + self.warn(self.docname, 'undecodable source characters, ' |
707 | + 'replacing with "?": %r' % |
708 | + (error.object[linestart+1:error.start] + '>>>' + |
709 | + error.object[error.start:error.end] + '<<<' + |
710 | + error.object[error.end:lineend]), lineno) |
711 | + return (u'?', error.end) |
712 | + |
713 | + def lookup_domain_element(self, type, name): |
714 | + """Lookup a markup element (directive or role), given its name which can |
715 | + be a full name (with domain). |
716 | + """ |
717 | + name = name.lower() |
718 | + # explicit domain given? |
719 | + if ':' in name: |
720 | + domain_name, name = name.split(':', 1) |
721 | + if domain_name in self.domains: |
722 | + domain = self.domains[domain_name] |
723 | + element = getattr(domain, type)(name) |
724 | + if element is not None: |
725 | + return element, [] |
726 | + # else look in the default domain |
727 | + else: |
728 | + def_domain = self.temp_data.get('default_domain') |
729 | + if def_domain is not None: |
730 | + element = getattr(def_domain, type)(name) |
731 | + if element is not None: |
732 | + return element, [] |
733 | + # always look in the std domain |
734 | + element = getattr(self.domains['std'], type)(name) |
735 | + if element is not None: |
736 | + return element, [] |
737 | + raise ElementLookupError |
738 | + |
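The lookup order in `lookup_domain_element` (explicit `domain:name` prefix, then the document's default domain, then the `std` domain) can be sketched with plain dicts standing in for the real `Domain` instances — the registry contents below are purely illustrative:

```python
# Hypothetical per-domain registries standing in for self.domains;
# the real objects are Domain instances, not dicts.
domains = {
    'py': {'function': 'py-function-directive'},
    'std': {'option': 'std-option-directive'},
}

def lookup_domain_element(name, default_domain=None):
    """Resolve 'domain:name' explicitly, else try the default domain,
    always falling back to the 'std' domain."""
    name = name.lower()
    if ':' in name:
        domain_name, name = name.split(':', 1)
        element = domains.get(domain_name, {}).get(name)
        if element is not None:
            return element
    elif default_domain is not None:
        element = domains[default_domain].get(name)
        if element is not None:
            return element
    element = domains['std'].get(name)
    if element is not None:
        return element
    raise LookupError(name)
```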
739 | + def patch_lookup_functions(self): |
740 | + """Monkey-patch directive and role dispatch, so that domain-specific |
741 | + markup takes precedence. |
742 | + """ |
743 | + def directive(name, lang_module, document): |
744 | + try: |
745 | + return self.lookup_domain_element('directive', name) |
746 | + except ElementLookupError: |
747 | + return orig_directive_function(name, lang_module, document) |
748 | + |
749 | + def role(name, lang_module, lineno, reporter): |
750 | + try: |
751 | + return self.lookup_domain_element('role', name) |
752 | + except ElementLookupError: |
753 | + return orig_role_function(name, lang_module, lineno, reporter) |
754 | + |
755 | + directives.directive = directive |
756 | + roles.role = role |
757 | + |
758 | + def read_doc(self, docname, src_path=None, save_parsed=True, app=None): |
759 | + """Parse a file and add/update inventory entries for the doctree. |
760 | + |
761 | + If srcpath is given, read from a different source file. |
762 | + """ |
763 | + # remove all inventory entries for that file |
764 | + if app: |
765 | + app.emit('env-purge-doc', self, docname) |
766 | + |
767 | + self.clear_doc(docname) |
768 | + |
769 | + if src_path is None: |
770 | + src_path = self.doc2path(docname) |
771 | + |
772 | + self.temp_data['docname'] = docname |
773 | + # defaults to the global default, but can be re-set in a document |
774 | + self.temp_data['default_domain'] = \ |
775 | + self.domains.get(self.config.primary_domain) |
776 | + |
777 | + self.settings['input_encoding'] = self.config.source_encoding |
778 | + self.settings['trim_footnote_reference_space'] = \ |
779 | + self.config.trim_footnote_reference_space |
780 | + self.settings['gettext_compact'] = self.config.gettext_compact |
781 | + |
782 | + self.patch_lookup_functions() |
783 | + |
784 | + if self.config.default_role: |
785 | + role_fn, messages = roles.role(self.config.default_role, english, |
786 | + 0, dummy_reporter) |
787 | + if role_fn: |
788 | + roles._roles[''] = role_fn |
789 | + else: |
790 | + self.warn(docname, 'default role %s not found' % |
791 | + self.config.default_role) |
792 | + |
793 | + codecs.register_error('sphinx', self.warn_and_replace) |
794 | + |
795 | + class SphinxSourceClass(FileInput): |
796 | + def __init__(self_, *args, **kwds): |
797 | + # don't call sys.exit() on IOErrors |
798 | + kwds['handle_io_errors'] = False |
799 | + FileInput.__init__(self_, *args, **kwds) |
800 | + |
801 | + def decode(self_, data): |
802 | + if isinstance(data, unicode): |
803 | + return data |
804 | + return data.decode(self_.encoding, 'sphinx') |
805 | + |
806 | + def read(self_): |
807 | + data = FileInput.read(self_) |
808 | + if app: |
809 | + arg = [data] |
810 | + app.emit('source-read', docname, arg) |
811 | + data = arg[0] |
812 | + if self.config.rst_epilog: |
813 | + data = data + '\n' + self.config.rst_epilog + '\n' |
814 | + if self.config.rst_prolog: |
815 | + data = self.config.rst_prolog + '\n' + data |
816 | + return data |
817 | + |
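The `rst_prolog`/`rst_epilog` handling in `SphinxSourceClass.read` above is order-sensitive: the epilog is appended before the prolog is prepended. A standalone sketch (function name is mine):

```python
def wrap_source(data, rst_prolog=None, rst_epilog=None):
    # Same order as SphinxSourceClass.read above: append the epilog
    # first, then prepend the prolog.
    if rst_epilog:
        data = data + '\n' + rst_epilog + '\n'
    if rst_prolog:
        data = rst_prolog + '\n' + data
    return data
```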
818 | + # publish manually |
819 | + pub = Publisher(reader=SphinxStandaloneReader(), |
820 | + writer=SphinxDummyWriter(), |
821 | + source_class=SphinxSourceClass, |
822 | + destination_class=NullOutput) |
823 | + pub.set_components(None, 'restructuredtext', None) |
824 | + pub.process_programmatic_settings(None, self.settings, None) |
825 | + pub.set_source(None, src_path.encode(fs_encoding)) |
826 | + pub.set_destination(None, None) |
827 | + try: |
828 | + pub.publish() |
829 | + doctree = pub.document |
830 | + except UnicodeError, err: |
831 | + raise SphinxError(str(err)) |
832 | + |
833 | + # post-processing |
834 | + self.filter_messages(doctree) |
835 | + self.process_dependencies(docname, doctree) |
836 | + self.process_images(docname, doctree) |
837 | + self.process_downloads(docname, doctree) |
838 | + self.process_metadata(docname, doctree) |
839 | + self.process_refonly_bullet_lists(docname, doctree) |
840 | + self.create_title_from(docname, doctree) |
841 | + self.note_indexentries_from(docname, doctree) |
842 | + self.note_citations_from(docname, doctree) |
843 | + self.build_toc_from(docname, doctree) |
844 | + for domain in self.domains.itervalues(): |
845 | + domain.process_doc(self, docname, doctree) |
846 | + |
847 | + # allow extension-specific post-processing |
848 | + if app: |
849 | + app.emit('doctree-read', doctree) |
850 | + |
851 | + # store time of build, for outdated files detection |
852 | + self.all_docs[docname] = time.time() |
853 | + |
854 | + if self.versioning_condition: |
855 | + # get old doctree |
856 | + try: |
857 | + f = open(self.doc2path(docname, |
858 | + self.doctreedir, '.doctree'), 'rb') |
859 | + try: |
860 | + old_doctree = pickle.load(f) |
861 | + finally: |
862 | + f.close() |
863 | + except EnvironmentError: |
864 | + old_doctree = None |
865 | + |
866 | + # add uids for versioning |
867 | + if old_doctree is None: |
868 | + list(add_uids(doctree, self.versioning_condition)) |
869 | + else: |
870 | + list(merge_doctrees( |
871 | + old_doctree, doctree, self.versioning_condition)) |
872 | + |
873 | + # make it picklable |
874 | + doctree.reporter = None |
875 | + doctree.transformer = None |
876 | + doctree.settings.warning_stream = None |
877 | + doctree.settings.env = None |
878 | + doctree.settings.record_dependencies = None |
879 | + for metanode in doctree.traverse(MetaBody.meta): |
880 | + # docutils' meta nodes aren't picklable because the class is nested |
881 | + metanode.__class__ = addnodes.meta |
882 | + |
883 | + # cleanup |
884 | + self.temp_data.clear() |
885 | + |
886 | + if save_parsed: |
887 | + # save the parsed doctree |
888 | + doctree_filename = self.doc2path(docname, self.doctreedir, |
889 | + '.doctree') |
890 | + dirname = path.dirname(doctree_filename) |
891 | + if not path.isdir(dirname): |
892 | + os.makedirs(dirname) |
893 | + f = open(doctree_filename, 'wb') |
894 | + try: |
895 | + pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL) |
896 | + finally: |
897 | + f.close() |
898 | + else: |
899 | + return doctree |
900 | + |
901 | + # utilities to use while reading a document |
902 | + |
903 | + @property |
904 | + def docname(self): |
905 | + """Backwards compatible alias.""" |
906 | + return self.temp_data['docname'] |
907 | + |
908 | + @property |
909 | + def currmodule(self): |
910 | + """Backwards compatible alias.""" |
911 | + return self.temp_data.get('py:module') |
912 | + |
913 | + @property |
914 | + def currclass(self): |
915 | + """Backwards compatible alias.""" |
916 | + return self.temp_data.get('py:class') |
917 | + |
918 | + def new_serialno(self, category=''): |
919 | + """Return a serial number, e.g. for index entry targets.""" |
920 | + key = category + 'serialno' |
921 | + cur = self.temp_data.get(key, 0) |
922 | + self.temp_data[key] = cur + 1 |
923 | + return cur |
924 | + |
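`new_serialno` is a small per-category counter kept in the per-document `temp_data` dict (cleared at the end of `read_doc`), used e.g. for unique index entry target ids. The same logic as a standalone sketch:

```python
temp_data = {}  # cleared per document, like self.temp_data

def new_serialno(category=''):
    # Per-category counter: returns 0, 1, 2, ... for each category.
    key = category + 'serialno'
    cur = temp_data.get(key, 0)
    temp_data[key] = cur + 1
    return cur
```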
925 | + def note_dependency(self, filename): |
926 | + self.dependencies.setdefault(self.docname, set()).add(filename) |
927 | + |
928 | + def note_reread(self): |
929 | + self.reread_always.add(self.docname) |
930 | + |
931 | + def note_versionchange(self, type, version, node, lineno): |
932 | + self.versionchanges.setdefault(version, []).append( |
933 | + (type, self.temp_data['docname'], lineno, |
934 | + self.temp_data.get('py:module'), |
935 | + self.temp_data.get('object'), node.astext())) |
936 | + |
937 | + # post-processing of read doctrees |
938 | + |
939 | + def filter_messages(self, doctree): |
940 | + """Filter system messages from a doctree.""" |
941 | + filterlevel = self.config.keep_warnings and 2 or 5 |
942 | + for node in doctree.traverse(nodes.system_message): |
943 | + if node['level'] < filterlevel: |
944 | + node.parent.remove(node) |
945 | + |
946 | + |
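Note on `filter_messages`: `keep_warnings and 2 or 5` is the pre-conditional-expression idiom for picking the cutoff level. A sketch with list input rather than a doctree, to make the effect of the two cutoffs visible:

```python
def filter_messages(messages, keep_warnings):
    # docutils system_message levels: 1=info, 2=warning, 3=error, 4=severe.
    # With keep_warnings the cutoff is 2, so only info messages are
    # dropped; the cutoff of 5 otherwise drops every system message.
    filterlevel = keep_warnings and 2 or 5
    return [m for m in messages if m['level'] >= filterlevel]
```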
947 | + def process_dependencies(self, docname, doctree): |
948 | + """Process docutils-generated dependency info.""" |
949 | + cwd = os.getcwd() |
950 | + frompath = path.join(path.normpath(self.srcdir), 'dummy') |
951 | + deps = doctree.settings.record_dependencies |
952 | + if not deps: |
953 | + return |
954 | + for dep in deps.list: |
955 | + # the dependency path is relative to the working dir, so get |
956 | + # one relative to the srcdir |
957 | + relpath = relative_path(frompath, |
958 | + path.normpath(path.join(cwd, dep))) |
959 | + self.dependencies.setdefault(docname, set()).add(relpath) |
960 | + |
961 | + def process_downloads(self, docname, doctree): |
962 | + """Process downloadable file paths. """ |
963 | + for node in doctree.traverse(addnodes.download_reference): |
964 | + targetname = node['reftarget'] |
965 | + rel_filename, filename = self.relfn2path(targetname, docname) |
966 | + self.dependencies.setdefault(docname, set()).add(rel_filename) |
967 | + if not os.access(filename, os.R_OK): |
968 | + self.warn_node('download file not readable: %s' % filename, |
969 | + node) |
970 | + continue |
971 | + uniquename = self.dlfiles.add_file(docname, filename) |
972 | + node['filename'] = uniquename |
973 | + |
974 | + def process_images(self, docname, doctree): |
975 | + """Process and rewrite image URIs.""" |
976 | + for node in doctree.traverse(nodes.image): |
977 | + # Map the mimetype to the corresponding image. The writer may |
978 | + # choose the best image from these candidates. The special key * is |
979 | + # set if there is only single candidate to be used by a writer. |
980 | + # The special key ? is set for nonlocal URIs. |
981 | + node['candidates'] = candidates = {} |
982 | + imguri = node['uri'] |
983 | + if imguri.find('://') != -1: |
984 | + self.warn_node('nonlocal image URI found: %s' % imguri, node) |
985 | + candidates['?'] = imguri |
986 | + continue |
987 | + rel_imgpath, full_imgpath = self.relfn2path(imguri, docname) |
988 | + # set imgpath as default URI |
989 | + node['uri'] = rel_imgpath |
990 | + if rel_imgpath.endswith(os.extsep + '*'): |
991 | + for filename in glob(full_imgpath): |
992 | + new_imgpath = relative_path(self.srcdir, filename) |
993 | + if filename.lower().endswith('.pdf'): |
994 | + candidates['application/pdf'] = new_imgpath |
995 | + elif filename.lower().endswith('.svg'): |
996 | + candidates['image/svg+xml'] = new_imgpath |
997 | + else: |
998 | + try: |
999 | + f = open(filename, 'rb') |
1000 | + try: |
1001 | + imgtype = imghdr.what(f) |
1002 | + finally: |
1003 | + f.close() |
1004 | + except (OSError, IOError), err: |
1005 | + self.warn_node('image file %s not readable: %s' % |
1006 | + (filename, err), node) |
1007 | + if imgtype: |
1008 | + candidates['image/' + imgtype] = new_imgpath |
1009 | + else: |
1010 | + candidates['*'] = rel_imgpath |
1011 | + # map image paths to unique image names (so that they can be put |
1012 | + # into a single directory) |
1013 | + for imgpath in candidates.itervalues(): |
1014 | + self.dependencies.setdefault(docname, set()).add(imgpath) |
1015 | + if not os.access(path.join(self.srcdir, imgpath), os.R_OK): |
1016 | + self.warn_node('image file not readable: %s' % imgpath, |
1017 | + node) |
1018 | + continue |
1019 | + self.images.add_file(docname, imgpath) |
1020 | + |
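For reference, the `candidates` dict that `process_images` builds for a `figure.*` glob maps MIME types to alternative files, letting each writer pick the format it can render. A simplified, extension-only sketch (the real code sniffs raster formats with `imghdr` instead of trusting extensions):

```python
def image_candidates(paths):
    """Group alternative files for one image by MIME type,
    extension-based for brevity."""
    candidates = {}
    for path in paths:
        lower = path.lower()
        if lower.endswith('.pdf'):
            candidates['application/pdf'] = path
        elif lower.endswith('.svg'):
            candidates['image/svg+xml'] = path
        elif lower.endswith('.png'):
            candidates['image/png'] = path
    return candidates

result = image_candidates(['fig.pdf', 'fig.svg'])
```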
1021 | + def process_metadata(self, docname, doctree): |
1022 | + """Process the docinfo part of the doctree as metadata. |
1023 | + |
1024 | + Keep processing minimal -- just return what docutils says. |
1025 | + """ |
1026 | + self.metadata[docname] = md = {} |
1027 | + try: |
1028 | + docinfo = doctree[0] |
1029 | + except IndexError: |
1030 | + # probably an empty document |
1031 | + return |
1032 | + if docinfo.__class__ is not nodes.docinfo: |
1033 | + # nothing to see here |
1034 | + return |
1035 | + for node in docinfo: |
1036 | + # nodes are multiply inherited... |
1037 | + if isinstance(node, nodes.authors): |
1038 | + md['authors'] = [author.astext() for author in node] |
1039 | + elif isinstance(node, nodes.TextElement): # e.g. author |
1040 | + md[node.__class__.__name__] = node.astext() |
1041 | + else: |
1042 | + name, body = node |
1043 | + md[name.astext()] = body.astext() |
1044 | + del doctree[0] |
1045 | + |
1046 | + def process_refonly_bullet_lists(self, docname, doctree): |
1047 | + """Change refonly bullet lists to use compact_paragraphs. |
1048 | + |
1049 | + Specifically implemented for 'Indices and Tables' section, which looks |
1050 | + odd when html_compact_lists is false. |
1051 | + """ |
1052 | + if self.config.html_compact_lists: |
1053 | + return |
1054 | + |
1055 | + class RefOnlyListChecker(nodes.GenericNodeVisitor): |
1056 | + """Raise `nodes.NodeFound` if non-simple list item is encountered. |
1057 | + |
1058 | + Here 'simple' means a list item containing only a paragraph with a |
1059 | + single reference in it. |
1060 | + """ |
1061 | + |
1062 | + def default_visit(self, node): |
1063 | + raise nodes.NodeFound |
1064 | + |
1065 | + def visit_bullet_list(self, node): |
1066 | + pass |
1067 | + |
1068 | + def visit_list_item(self, node): |
1069 | + children = [] |
1070 | + for child in node.children: |
1071 | + if not isinstance(child, nodes.Invisible): |
1072 | + children.append(child) |
1073 | + if len(children) != 1: |
1074 | + raise nodes.NodeFound |
1075 | + if not isinstance(children[0], nodes.paragraph): |
1076 | + raise nodes.NodeFound |
1077 | + para = children[0] |
1078 | + if len(para) != 1: |
1079 | + raise nodes.NodeFound |
1080 | + if not isinstance(para[0], addnodes.pending_xref): |
1081 | + raise nodes.NodeFound |
1082 | + raise nodes.SkipChildren |
1083 | + |
1084 | + def invisible_visit(self, node): |
1085 | + """Invisible nodes should be ignored.""" |
1086 | + pass |
1087 | + |
1088 | + def check_refonly_list(node): |
1089 | + """Check for list with only references in it.""" |
1090 | + visitor = RefOnlyListChecker(doctree) |
1091 | + try: |
1092 | + node.walk(visitor) |
1093 | + except nodes.NodeFound: |
1094 | + return False |
1095 | + else: |
1096 | + return True |
1097 | + |
1098 | + for node in doctree.traverse(nodes.bullet_list): |
1099 | + if check_refonly_list(node): |
1100 | + for item in node.traverse(nodes.list_item): |
1101 | + para = item[0] |
1102 | + ref = para[0] |
1103 | + compact_para = addnodes.compact_paragraph() |
1104 | + compact_para += ref |
1105 | + item.replace(para, compact_para) |
1106 | + |
1107 | + def create_title_from(self, docname, document): |
1108 | + """Add a title node to the document (just copy the first section title), |
1109 | + and store that title in the environment. |
1110 | + """ |
1111 | + titlenode = nodes.title() |
1112 | + longtitlenode = titlenode |
1113 | + # explicit title set with title directive; use this only for |
1114 | + # the <title> tag in HTML output |
1115 | + if document.has_key('title'): |
1116 | + longtitlenode = nodes.title() |
1117 | + longtitlenode += nodes.Text(document['title']) |
1118 | + # look for first section title and use that as the title |
1119 | + for node in document.traverse(nodes.section): |
1120 | + visitor = SphinxContentsFilter(document) |
1121 | + node[0].walkabout(visitor) |
1122 | + titlenode += visitor.get_entry_text() |
1123 | + break |
1124 | + else: |
1125 | + # document has no title |
1126 | + titlenode += nodes.Text('<no title>') |
1127 | + self.titles[docname] = titlenode |
1128 | + self.longtitles[docname] = longtitlenode |
1129 | + |
1130 | + def note_indexentries_from(self, docname, document): |
1131 | + entries = self.indexentries[docname] = [] |
1132 | + for node in document.traverse(addnodes.index): |
1133 | + entries.extend(node['entries']) |
1134 | + |
1135 | + def note_citations_from(self, docname, document): |
1136 | + for node in document.traverse(nodes.citation): |
1137 | + label = node[0].astext() |
1138 | + if label in self.citations: |
1139 | + self.warn_node('duplicate citation %s, ' % label + |
1140 | + 'other instance in %s' % self.doc2path( |
1141 | + self.citations[label][0]), node) |
1142 | + self.citations[label] = (docname, node['ids'][0]) |
1143 | + |
1144 | + def note_toctree(self, docname, toctreenode): |
1145 | + """Note a TOC tree directive in a document and gather information about |
1146 | + file relations from it. |
1147 | + """ |
1148 | + if toctreenode['glob']: |
1149 | + self.glob_toctrees.add(docname) |
1150 | + if toctreenode.get('numbered'): |
1151 | + self.numbered_toctrees.add(docname) |
1152 | + includefiles = toctreenode['includefiles'] |
1153 | + for includefile in includefiles: |
1154 | + # note that if the included file is rebuilt, this one must be |
1155 | + # too (since the TOC of the included file could have changed) |
1156 | + self.files_to_rebuild.setdefault(includefile, set()).add(docname) |
1157 | + self.toctree_includes.setdefault(docname, []).extend(includefiles) |
1158 | + |
1159 | + def build_toc_from(self, docname, document): |
1160 | + """Build a TOC from the doctree and store it in the inventory.""" |
1161 | + numentries = [0] # nonlocal again... |
1162 | + |
1163 | + try: |
1164 | + maxdepth = int(self.metadata[docname].get('tocdepth', 0)) |
1165 | + except ValueError: |
1166 | + maxdepth = 0 |
1167 | + |
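The `maxdepth` parsing above tolerates bad input because `:tocdepth:` arrives from the docinfo as a string. As a standalone sketch (function name is mine):

```python
def get_tocdepth(metadata):
    # Treat a missing or non-integer ':tocdepth:' value as 0,
    # meaning no depth limit on the generated TOC.
    try:
        return int(metadata.get('tocdepth', 0))
    except ValueError:
        return 0
```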
1168 | + def traverse_in_section(node, cls): |
1169 | + """Like traverse(), but stay within the same section.""" |
1170 | + result = [] |
1171 | + if isinstance(node, cls): |
1172 | + result.append(node) |
1173 | + for child in node.children: |
1174 | + if isinstance(child, nodes.section): |
1175 | + continue |
1176 | + result.extend(traverse_in_section(child, cls)) |
1177 | + return result |
1178 | + |
1179 | + def build_toc(node, depth=1): |
1180 | + entries = [] |
1181 | + for sectionnode in node: |
1182 | + # find all toctree nodes in this section and add them |
1183 | + # to the toc (just copying the toctree node which is then |
1184 | + # resolved in self.get_and_resolve_doctree) |
1185 | + if isinstance(sectionnode, addnodes.only): |
1186 | + onlynode = addnodes.only(expr=sectionnode['expr']) |
1187 | + blist = build_toc(sectionnode, depth) |
1188 | + if blist: |
1189 | + onlynode += blist.children |
1190 | + entries.append(onlynode) |
1191 | + if not isinstance(sectionnode, nodes.section): |
1192 | + for toctreenode in traverse_in_section(sectionnode, |
1193 | + addnodes.toctree): |
1194 | + item = toctreenode.copy() |
1195 | + entries.append(item) |
1196 | + # important: do the inventory stuff |
1197 | + self.note_toctree(docname, toctreenode) |
1198 | + continue |
1199 | + title = sectionnode[0] |
1200 | + # copy the contents of the section title, but without references |
1201 | + # and unnecessary stuff |
1202 | + visitor = SphinxContentsFilter(document) |
1203 | + title.walkabout(visitor) |
1204 | + nodetext = visitor.get_entry_text() |
1205 | + if not numentries[0]: |
1206 | + # for the very first toc entry, don't add an anchor |
1207 | + # as it is the file's title anyway |
1208 | + anchorname = '' |
1209 | + else: |
1210 | + anchorname = '#' + sectionnode['ids'][0] |
1211 | + numentries[0] += 1 |
1212 | + # make these nodes: |
1213 | + # list_item -> compact_paragraph -> reference |
1214 | + reference = nodes.reference( |
1215 | + '', '', internal=True, refuri=docname, |
1216 | + anchorname=anchorname, *nodetext) |
1217 | + para = addnodes.compact_paragraph('', '', reference) |
1218 | + item = nodes.list_item('', para) |
1219 | + if maxdepth == 0 or depth < maxdepth: |
1220 | + item += build_toc(sectionnode, depth+1) |
1221 | + entries.append(item) |
1222 | + if entries: |
1223 | + return nodes.bullet_list('', *entries) |
1224 | + return [] |
1225 | + toc = build_toc(document) |
1226 | + if toc: |
1227 | + self.tocs[docname] = toc |
1228 | + else: |
1229 | + self.tocs[docname] = nodes.bullet_list('') |
1230 | + self.toc_num_entries[docname] = numentries[0] |
1231 | + |
1232 | + def get_toc_for(self, docname, builder): |
1233 | + """Return a TOC nodetree -- for use on the same page only!""" |
1234 | + try: |
1235 | + toc = self.tocs[docname].deepcopy() |
1236 | + except KeyError: |
1237 | + # the document does not exist anymore: return a dummy node that |
1238 | + # renders to nothing |
1239 | + return nodes.paragraph() |
1240 | + self.process_only_nodes(toc, builder, docname) |
1241 | + for node in toc.traverse(nodes.reference): |
1242 | + node['refuri'] = node['anchorname'] or '#' |
1243 | + return toc |
1244 | + |
1245 | + def get_toctree_for(self, docname, builder, collapse, **kwds): |
1246 | + """Return the global TOC nodetree.""" |
1247 | + doctree = self.get_doctree(self.config.master_doc) |
1248 | + toctrees = [] |
1249 | + if 'includehidden' not in kwds: |
1250 | + kwds['includehidden'] = True |
1251 | + if 'maxdepth' not in kwds: |
1252 | + kwds['maxdepth'] = 0 |
1253 | + kwds['collapse'] = collapse |
1254 | + for toctreenode in doctree.traverse(addnodes.toctree): |
1255 | + toctree = self.resolve_toctree(docname, builder, toctreenode, |
1256 | + prune=True, **kwds) |
1257 | + toctrees.append(toctree) |
1258 | + if not toctrees: |
1259 | + return None |
1260 | + result = toctrees[0] |
1261 | + for toctree in toctrees[1:]: |
1262 | + result.extend(toctree.children) |
1263 | + return result |
1264 | + |
1265 | + def get_domain(self, domainname): |
1266 | + """Return the domain instance with the specified name. |
1267 | + |
1268 | + Raises an ExtensionError if the domain is not registered. |
1269 | + """ |
1270 | + try: |
1271 | + return self.domains[domainname] |
1272 | + except KeyError: |
1273 | + raise ExtensionError('Domain %r is not registered' % domainname) |
1274 | + |
1275 | + # --------- RESOLVING REFERENCES AND TOCTREES ------------------------------ |
1276 | + |
1277 | + def get_doctree(self, docname): |
1278 | + """Read the doctree for a file from the pickle and return it.""" |
1279 | + doctree_filename = self.doc2path(docname, self.doctreedir, '.doctree') |
1280 | + f = open(doctree_filename, 'rb') |
1281 | + try: |
1282 | + doctree = pickle.load(f) |
1283 | + finally: |
1284 | + f.close() |
1285 | + doctree.settings.env = self |
1286 | + doctree.reporter = Reporter(self.doc2path(docname), 2, 5, |
1287 | + stream=WarningStream(self._warnfunc)) |
1288 | + return doctree |
1289 | + |
1290 | + |
1291 | + def get_and_resolve_doctree(self, docname, builder, doctree=None, |
1292 | + prune_toctrees=True): |
1293 | + """Read the doctree from the pickle, resolve cross-references and |
1294 | + toctrees and return it. |
1295 | + """ |
1296 | + if doctree is None: |
1297 | + doctree = self.get_doctree(docname) |
1298 | + |
1299 | + # resolve all pending cross-references |
1300 | + self.resolve_references(doctree, docname, builder) |
1301 | + |
1302 | + # now, resolve all toctree nodes |
1303 | + for toctreenode in doctree.traverse(addnodes.toctree): |
1304 | + result = self.resolve_toctree(docname, builder, toctreenode, |
1305 | + prune=prune_toctrees) |
1306 | + if result is None: |
1307 | + toctreenode.replace_self([]) |
1308 | + else: |
1309 | + toctreenode.replace_self(result) |
1310 | + |
1311 | + return doctree |
1312 | + |
1313 | + def resolve_toctree(self, docname, builder, toctree, prune=True, maxdepth=0, |
1314 | + titles_only=False, collapse=False, includehidden=False): |
1315 | + """Resolve a *toctree* node into individual bullet lists with titles |
1316 | + as items, returning None (if no containing titles are found) or |
1317 | + a new node. |
1318 | + |
1319 | + If *prune* is True, the tree is pruned to *maxdepth*, or if that is 0, |
1320 | + to the value of the *maxdepth* option on the *toctree* node. |
1321 | + If *titles_only* is True, only toplevel document titles will be in the |
1322 | + resulting tree. |
1323 | + If *collapse* is True, all branches not containing docname will |
1324 | + be collapsed. |
1325 | + """ |
1326 | + if toctree.get('hidden', False) and not includehidden: |
1327 | + return None |
1328 | + |
1329 | + def _walk_depth(node, depth, maxdepth): |
1330 | + """Utility: Cut a TOC at a specified depth.""" |
1331 | + |
1332 | + # For reading this function, it is useful to keep in mind the node |
1333 | + # structure of a toctree (using HTML-like node names for brevity): |
1334 | + # |
1335 | + # <ul> |
1336 | + # <li> |
1337 | + # <p><a></p> |
1338 | + # <p><a></p> |
1339 | + # ... |
1340 | + # <ul> |
1341 | + # ... |
1342 | + # </ul> |
1343 | + # </li> |
1344 | + # </ul> |
1345 | + |
1346 | + for subnode in node.children[:]: |
1347 | + if isinstance(subnode, (addnodes.compact_paragraph, |
1348 | + nodes.list_item)): |
1349 | + # for <p> and <li>, just indicate the depth level and |
1350 | + # recurse to children |
1351 | + subnode['classes'].append('toctree-l%d' % (depth-1)) |
1352 | + _walk_depth(subnode, depth, maxdepth) |
1353 | + |
1354 | + elif isinstance(subnode, nodes.bullet_list): |
1355 | + # for <ul>, determine if the depth is too large or if the |
1356 | + # entry is to be collapsed |
1357 | + if maxdepth > 0 and depth > maxdepth: |
1358 | + subnode.parent.replace(subnode, []) |
1359 | + else: |
1360 | + # to find out what to collapse, *first* walk subitems, |
1361 | + # since that determines which children point to the |
1362 | + # current page |
1363 | + _walk_depth(subnode, depth+1, maxdepth) |
1364 | + # cull sub-entries whose parents aren't 'current' |
1365 | + if (collapse and depth > 1 and |
1366 | + 'iscurrent' not in subnode.parent): |
1367 | + subnode.parent.remove(subnode) |
1368 | + |
1369 | + elif isinstance(subnode, nodes.reference): |
1370 | + # for <a>, identify which entries point to the current |
1371 | + # document and therefore may not be collapsed |
1372 | + if subnode['refuri'] == docname: |
1373 | + if not subnode['anchorname']: |
1374 | + # give the whole branch a 'current' class |
1375 | + # (useful for styling it differently) |
1376 | + branchnode = subnode |
1377 | + while branchnode: |
1378 | + branchnode['classes'].append('current') |
1379 | + branchnode = branchnode.parent |
1380 | + # mark the list_item as "on current page" |
1381 | + if subnode.parent.parent.get('iscurrent'): |
1382 | + # but only if it's not already done |
1383 | + return |
1384 | + while subnode: |
1385 | + subnode['iscurrent'] = True |
1386 | + subnode = subnode.parent |
1387 | + |
1388 | + def _entries_from_toctree(toctreenode, parents, |
1389 | + separate=False, subtree=False): |
1390 | + """Return TOC entries for a toctree node.""" |
1391 | + refs = [(e[0], str(e[1])) for e in toctreenode['entries']] |
1392 | + entries = [] |
1393 | + for (title, ref) in refs: |
1394 | + try: |
1395 | + refdoc = None |
1396 | + if url_re.match(ref): |
1397 | + reference = nodes.reference('', '', internal=False, |
1398 | + refuri=ref, anchorname='', |
1399 | + *[nodes.Text(title)]) |
1400 | + para = addnodes.compact_paragraph('', '', reference) |
1401 | + item = nodes.list_item('', para) |
1402 | + toc = nodes.bullet_list('', item) |
1403 | + elif ref == 'self': |
1404 | + # 'self' refers to the document from which this |
1405 | + # toctree originates |
1406 | + ref = toctreenode['parent'] |
1407 | + if not title: |
1408 | + title = clean_astext(self.titles[ref]) |
1409 | + reference = nodes.reference('', '', internal=True, |
1410 | + refuri=ref, |
1411 | + anchorname='', |
1412 | + *[nodes.Text(title)]) |
1413 | + para = addnodes.compact_paragraph('', '', reference) |
1414 | + item = nodes.list_item('', para) |
1415 | + # don't show subitems |
1416 | + toc = nodes.bullet_list('', item) |
1417 | + else: |
1418 | + if ref in parents: |
1419 | + self.warn(ref, 'circular toctree references ' |
1420 | + 'detected, ignoring: %s <- %s' % |
1421 | + (ref, ' <- '.join(parents))) |
1422 | + continue |
1423 | + refdoc = ref |
1424 | + toc = self.tocs[ref].deepcopy() |
1425 | + self.process_only_nodes(toc, builder, ref) |
1426 | + if title and toc.children and len(toc.children) == 1: |
1427 | + child = toc.children[0] |
1428 | + for refnode in child.traverse(nodes.reference): |
1429 | + if refnode['refuri'] == ref and \ |
1430 | + not refnode['anchorname']: |
1431 | + refnode.children = [nodes.Text(title)] |
1432 | + if not toc.children: |
1433 | + # empty toc means: no titles will show up in the toctree |
1434 | + self.warn_node( |
1435 | + 'toctree contains reference to document %r that ' |
1436 | + 'doesn\'t have a title: no link will be generated' |
1437 | + % ref, toctreenode) |
1438 | + except KeyError: |
1439 | + # this is raised if the included file does not exist |
1440 | + self.warn_node( |
1441 | + 'toctree contains reference to nonexisting document %r' |
1442 | + % ref, toctreenode) |
1443 | + else: |
1444 | + # if titles_only is given, only keep the main title and |
1445 | + # sub-toctrees |
1446 | + if titles_only: |
1447 | + # delete everything but the toplevel title(s) |
1448 | + # and toctrees |
1449 | + for toplevel in toc: |
1450 | + # nodes with length 1 don't have any children anyway |
1451 | + if len(toplevel) > 1: |
1452 | + subtrees = toplevel.traverse(addnodes.toctree) |
1453 | + toplevel[1][:] = subtrees |
1454 | + # resolve all sub-toctrees |
1455 | + for toctreenode in toc.traverse(addnodes.toctree): |
1456 | + if not (toctreenode.get('hidden', False) |
1457 | + and not includehidden): |
1458 | + i = toctreenode.parent.index(toctreenode) + 1 |
1459 | + for item in _entries_from_toctree( |
1460 | + toctreenode, [refdoc] + parents, |
1461 | + subtree=True): |
1462 | + toctreenode.parent.insert(i, item) |
1463 | + i += 1 |
1464 | + toctreenode.parent.remove(toctreenode) |
1465 | + if separate: |
1466 | + entries.append(toc) |
1467 | + else: |
1468 | + entries.extend(toc.children) |
1469 | + if not subtree and not separate: |
1470 | + ret = nodes.bullet_list() |
1471 | + ret += entries |
1472 | + return [ret] |
1473 | + return entries |
1474 | + |
1475 | + maxdepth = maxdepth or toctree.get('maxdepth', -1) |
1476 | + if not titles_only and toctree.get('titlesonly', False): |
1477 | + titles_only = True |
1478 | + |
1479 | + # NOTE: previously, this was separate=True, but that leads to artificial |
1480 | + # separation when two or more toctree entries form a logical unit, so |
1481 | + # separating mode is no longer used -- it's kept here for history's sake |
1482 | + tocentries = _entries_from_toctree(toctree, [], separate=False) |
1483 | + if not tocentries: |
1484 | + return None |
1485 | + |
1486 | + newnode = addnodes.compact_paragraph('', '', *tocentries) |
1487 | + newnode['toctree'] = True |
1488 | + |
1489 | + # prune the tree to maxdepth and replace titles, also set level classes |
1490 | + _walk_depth(newnode, 1, prune and maxdepth or 0) |
1491 | + |
1492 | + # set the target paths in the toctrees (they are not known at TOC |
1493 | + # generation time) |
1494 | + for refnode in newnode.traverse(nodes.reference): |
1495 | + if not url_re.match(refnode['refuri']): |
1496 | + refnode['refuri'] = builder.get_relative_uri( |
1497 | + docname, refnode['refuri']) + refnode['anchorname'] |
1498 | + return newnode |
1499 | + |
1500 | + def resolve_references(self, doctree, fromdocname, builder): |
1501 | + for node in doctree.traverse(addnodes.pending_xref): |
1502 | + contnode = node[0].deepcopy() |
1503 | + newnode = None |
1504 | + |
1505 | + typ = node['reftype'] |
1506 | + target = node['reftarget'] |
1507 | + refdoc = node.get('refdoc', fromdocname) |
1508 | + domain = None |
1509 | + |
1510 | + try: |
1511 | + if 'refdomain' in node and node['refdomain']: |
1512 | + # let the domain try to resolve the reference |
1513 | + try: |
1514 | + domain = self.domains[node['refdomain']] |
1515 | + except KeyError: |
1516 | + raise NoUri |
1517 | + newnode = domain.resolve_xref(self, fromdocname, builder, |
1518 | + typ, target, node, contnode) |
1519 | + # really hardwired reference types |
1520 | + elif typ == 'doc': |
1521 | + # directly reference to document by source name; |
1522 | + # can be absolute or relative |
1523 | + docname = docname_join(refdoc, target) |
1524 | + if docname in self.all_docs: |
1525 | + if node['refexplicit']: |
1526 | + # reference with explicit title |
1527 | + caption = node.astext() |
1528 | + else: |
1529 | + caption = clean_astext(self.titles[docname]) |
1530 | + innernode = nodes.emphasis(caption, caption) |
1531 | + newnode = nodes.reference('', '', internal=True) |
1532 | + newnode['refuri'] = builder.get_relative_uri( |
1533 | + fromdocname, docname) |
1534 | + newnode.append(innernode) |
1535 | + elif typ == 'citation': |
1536 | + docname, labelid = self.citations.get(target, ('', '')) |
1537 | + if docname: |
1538 | + newnode = make_refnode(builder, fromdocname, docname, |
1539 | + labelid, contnode) |
1540 | + # no new node found? try the missing-reference event |
1541 | + if newnode is None: |
1542 | + newnode = builder.app.emit_firstresult( |
1543 | + 'missing-reference', self, node, contnode) |
1544 | + # still not found? warn if in nit-picky mode |
1545 | + if newnode is None: |
1546 | + self._warn_missing_reference( |
1547 | + fromdocname, typ, target, node, domain) |
1548 | + except NoUri: |
1549 | + newnode = contnode |
1550 | + node.replace_self(newnode or contnode) |
1551 | + |
1552 | + # remove only-nodes that do not belong to our builder |
1553 | + self.process_only_nodes(doctree, builder, fromdocname) |
1554 | + |
1555 | + # allow custom references to be resolved |
1556 | + builder.app.emit('doctree-resolved', doctree, fromdocname) |
1557 | + |
1558 | + def _warn_missing_reference(self, fromdoc, typ, target, node, domain): |
1559 | + warn = node.get('refwarn') |
1560 | + if self.config.nitpicky: |
1561 | + warn = True |
1562 | + if self._nitpick_ignore: |
1563 | + dtype = domain and '%s:%s' % (domain.name, typ) or typ |
1564 | + if (dtype, target) in self._nitpick_ignore: |
1565 | + warn = False |
1566 | + if not warn: |
1567 | + return |
1568 | + if domain and typ in domain.dangling_warnings: |
1569 | + msg = domain.dangling_warnings[typ] |
1570 | + elif typ == 'doc': |
1571 | + msg = 'unknown document: %(target)s' |
1572 | + elif typ == 'citation': |
1573 | + msg = 'citation not found: %(target)s' |
1574 | + elif node.get('refdomain', 'std') != 'std': |
1575 | + msg = '%s:%s reference target not found: %%(target)s' % \ |
1576 | + (node['refdomain'], typ) |
1577 | + else: |
1578 | + msg = '%s reference target not found: %%(target)s' % typ |
1579 | + self.warn_node(msg % {'target': target}, node) |
1580 | + |
1581 | + def process_only_nodes(self, doctree, builder, fromdocname=None): |
1582 | + # A comment on the comment() nodes being inserted: replacing by [] would |
1583 | + # result in a "Losing ids" exception if there is a target node before |
1584 | + # the only node, so we make sure docutils can transfer the id to |
1585 | + # something, even if it's just a comment and will lose the id anyway... |
1586 | + for node in doctree.traverse(addnodes.only): |
1587 | + try: |
1588 | + ret = builder.tags.eval_condition(node['expr']) |
1589 | + except Exception, err: |
1590 | + self.warn_node('exception while evaluating only ' |
1591 | + 'directive expression: %s' % err, node) |
1592 | + node.replace_self(node.children or nodes.comment()) |
1593 | + else: |
1594 | + if ret: |
1595 | + node.replace_self(node.children or nodes.comment()) |
1596 | + else: |
1597 | + node.replace_self(nodes.comment()) |
1598 | + |
1599 | + def assign_section_numbers(self): |
1600 | + """Assign a section number to each heading under a numbered toctree.""" |
1601 | + # a list of all docnames whose section numbers changed |
1602 | + rewrite_needed = [] |
1603 | + |
1604 | + old_secnumbers = self.toc_secnumbers |
1605 | + self.toc_secnumbers = {} |
1606 | + |
1607 | + def _walk_toc(node, secnums, depth, titlenode=None): |
1608 | + # titlenode is the title of the document, it will get assigned a |
1609 | + # secnumber too, so that it shows up in next/prev/parent rellinks |
1610 | + for subnode in node.children: |
1611 | + if isinstance(subnode, nodes.bullet_list): |
1612 | + numstack.append(0) |
1613 | + _walk_toc(subnode, secnums, depth-1, titlenode) |
1614 | + numstack.pop() |
1615 | + titlenode = None |
1616 | + elif isinstance(subnode, nodes.list_item): |
1617 | + _walk_toc(subnode, secnums, depth, titlenode) |
1618 | + titlenode = None |
1619 | + elif isinstance(subnode, addnodes.only): |
1620 | + # at this stage we don't know yet which sections are going |
1621 | + # to be included; just include all of them, even if it leads |
1622 | + # to gaps in the numbering |
1623 | + _walk_toc(subnode, secnums, depth, titlenode) |
1624 | + titlenode = None |
1625 | + elif isinstance(subnode, addnodes.compact_paragraph): |
1626 | + numstack[-1] += 1 |
1627 | + if depth > 0: |
1628 | + number = tuple(numstack) |
1629 | + else: |
1630 | + number = None |
1631 | + secnums[subnode[0]['anchorname']] = \ |
1632 | + subnode[0]['secnumber'] = number |
1633 | + if titlenode: |
1634 | + titlenode['secnumber'] = number |
1635 | + titlenode = None |
1636 | + elif isinstance(subnode, addnodes.toctree): |
1637 | + _walk_toctree(subnode, depth) |
1638 | + |
1639 | + def _walk_toctree(toctreenode, depth): |
1640 | + if depth == 0: |
1641 | + return |
1642 | + for (title, ref) in toctreenode['entries']: |
1643 | + if url_re.match(ref) or ref == 'self': |
1644 | + # don't mess with those |
1645 | + continue |
1646 | + if ref in self.tocs: |
1647 | + secnums = self.toc_secnumbers[ref] = {} |
1648 | + _walk_toc(self.tocs[ref], secnums, depth, |
1649 | + self.titles.get(ref)) |
1650 | + if secnums != old_secnumbers.get(ref): |
1651 | + rewrite_needed.append(ref) |
1652 | + |
1653 | + for docname in self.numbered_toctrees: |
1654 | + doctree = self.get_doctree(docname) |
1655 | + for toctreenode in doctree.traverse(addnodes.toctree): |
1656 | + depth = toctreenode.get('numbered', 0) |
1657 | + if depth: |
1658 | + # every numbered toctree gets new numbering |
1659 | + numstack = [0] |
1660 | + _walk_toctree(toctreenode, depth) |
1661 | + |
1662 | + return rewrite_needed |
1663 | + |
1664 | + def create_index(self, builder, group_entries=True, |
1665 | + _fixre=re.compile(r'(.*) ([(][^()]*[)])')): |
1666 | + """Create the real index from the collected index entries.""" |
1667 | + new = {} |
1668 | + |
1669 | + def add_entry(word, subword, link=True, dic=new): |
1670 | + entry = dic.get(word) |
1671 | + if not entry: |
1672 | + dic[word] = entry = [[], {}] |
1673 | + if subword: |
1674 | + add_entry(subword, '', link=link, dic=entry[1]) |
1675 | + elif link: |
1676 | + try: |
1677 | + uri = builder.get_relative_uri('genindex', fn) + '#' + tid |
1678 | + except NoUri: |
1679 | + pass |
1680 | + else: |
1681 | + entry[0].append((main, uri)) |
1682 | + |
1683 | + for fn, entries in self.indexentries.iteritems(): |
1684 | + # new entry types must be listed in directives/other.py! |
1685 | + for type, value, tid, main in entries: |
1686 | + try: |
1687 | + if type == 'single': |
1688 | + try: |
1689 | + entry, subentry = split_into(2, 'single', value) |
1690 | + except ValueError: |
1691 | + entry, = split_into(1, 'single', value) |
1692 | + subentry = '' |
1693 | + add_entry(entry, subentry) |
1694 | + elif type == 'pair': |
1695 | + first, second = split_into(2, 'pair', value) |
1696 | + add_entry(first, second) |
1697 | + add_entry(second, first) |
1698 | + elif type == 'triple': |
1699 | + first, second, third = split_into(3, 'triple', value) |
1700 | + add_entry(first, second+' '+third) |
1701 | + add_entry(second, third+', '+first) |
1702 | + add_entry(third, first+' '+second) |
1703 | + elif type == 'see': |
1704 | + first, second = split_into(2, 'see', value) |
1705 | + add_entry(first, _('see %s') % second, link=False) |
1706 | + elif type == 'seealso': |
1707 | + first, second = split_into(2, 'see', value) |
1708 | + add_entry(first, _('see also %s') % second, link=False) |
1709 | + else: |
1710 | + self.warn(fn, 'unknown index entry type %r' % type) |
1711 | + except ValueError, err: |
1712 | + self.warn(fn, str(err)) |
1713 | + |
1714 | + # sort the index entries; put all symbols at the front, even those |
1715 | + # following the letters in ASCII, this is where the chr(127) comes from |
1716 | + def keyfunc(entry, lcletters=string.ascii_lowercase + '_'): |
1717 | + lckey = unicodedata.normalize('NFD', entry[0].lower()) |
1718 | + if lckey[0:1] in lcletters: |
1719 | + return chr(127) + lckey |
1720 | + return lckey |
1721 | + newlist = new.items() |
1722 | + newlist.sort(key=keyfunc) |
1723 | + |
1724 | + if group_entries: |
1725 | + # fixup entries: transform |
1726 | + # func() (in module foo) |
1727 | + # func() (in module bar) |
1728 | + # into |
1729 | + # func() |
1730 | + # (in module foo) |
1731 | + # (in module bar) |
1732 | + oldkey = '' |
1733 | + oldsubitems = None |
1734 | + i = 0 |
1735 | + while i < len(newlist): |
1736 | + key, (targets, subitems) = newlist[i] |
1737 | + # cannot move if it has subitems; structure gets too complex |
1738 | + if not subitems: |
1739 | + m = _fixre.match(key) |
1740 | + if m: |
1741 | + if oldkey == m.group(1): |
1742 | + # prefixes match: add entry as subitem of the |
1743 | + # previous entry |
1744 | + oldsubitems.setdefault(m.group(2), [[], {}])[0].\ |
1745 | + extend(targets) |
1746 | + del newlist[i] |
1747 | + continue |
1748 | + oldkey = m.group(1) |
1749 | + else: |
1750 | + oldkey = key |
1751 | + oldsubitems = subitems |
1752 | + i += 1 |
1753 | + |
1754 | + # group the entries by letter |
1755 | + def keyfunc2(item, letters=string.ascii_uppercase + '_'): |
1756 | + # hack: mutating the subitems dicts to a list in the keyfunc |
1757 | + k, v = item |
1758 | + v[1] = sorted((si, se) for (si, (se, void)) in v[1].iteritems()) |
1759 | + # now calculate the key |
1760 | + letter = unicodedata.normalize('NFD', k[0])[0].upper() |
1761 | + if letter in letters: |
1762 | + return letter |
1763 | + else: |
1764 | + # get all other symbols under one heading |
1765 | + return 'Symbols' |
1766 | + return [(key, list(group)) |
1767 | + for (key, group) in groupby(newlist, keyfunc2)] |
1768 | + |
1769 | + def collect_relations(self): |
1770 | + relations = {} |
1771 | + getinc = self.toctree_includes.get |
1772 | + def collect(parents, parents_set, docname, previous, next): |
1773 | + # circular relationship? |
1774 | + if docname in parents_set: |
1775 | + # we will warn about this in resolve_toctree() |
1776 | + return |
1777 | + includes = getinc(docname) |
1778 | + # previous |
1779 | + if not previous: |
1780 | + # if no previous sibling, go to parent |
1781 | + previous = parents[0][0] |
1782 | + else: |
1783 | + # else, go to previous sibling, or if it has children, to |
1784 | + # the last of its children, or if that has children, to the |
1785 | + # last of those, and so forth |
1786 | + while 1: |
1787 | + previncs = getinc(previous) |
1788 | + if previncs: |
1789 | + previous = previncs[-1] |
1790 | + else: |
1791 | + break |
1792 | + # next |
1793 | + if includes: |
1794 | + # if it has children, go to first of them |
1795 | + next = includes[0] |
1796 | + elif next: |
1797 | + # else, if next sibling, go to it |
1798 | + pass |
1799 | + else: |
1800 | + # else, go to the next sibling of the parent, if present, |
1801 | + # else the grandparent's sibling, if present, and so forth |
1802 | + for parname, parindex in parents: |
1803 | + parincs = getinc(parname) |
1804 | + if parincs and parindex + 1 < len(parincs): |
1805 | + next = parincs[parindex+1] |
1806 | + break |
1807 | + # else it will stay None |
1808 | + # same for children |
1809 | + if includes: |
1810 | + for subindex, args in enumerate(izip(includes, |
1811 | + [None] + includes, |
1812 | + includes[1:] + [None])): |
1813 | + collect([(docname, subindex)] + parents, |
1814 | + parents_set.union([docname]), *args) |
1815 | + relations[docname] = [parents[0][0], previous, next] |
1816 | + collect([(None, 0)], set(), self.config.master_doc, None, None) |
1817 | + return relations |
1818 | + |
1819 | + def check_consistency(self): |
1820 | + """Do consistency checks.""" |
1821 | + for docname in sorted(self.all_docs): |
1822 | + if docname not in self.files_to_rebuild: |
1823 | + if docname == self.config.master_doc: |
1824 | + # the master file is not included anywhere ;) |
1825 | + continue |
1826 | + if 'orphan' in self.metadata[docname]: |
1827 | + continue |
1828 | + self.warn(docname, 'document isn\'t included in any toctree') |
1829 | + |
1830 | |
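The `keyfunc` in `create_index` above sorts index entries so that all symbol entries come before letters: entries starting with an ASCII letter or underscore get a `chr(127)` prefix, which sorts after every printable ASCII symbol. A minimal standalone Python 3 sketch of that trick (the sample entries are illustrative, not from the package):

```python
import string
import unicodedata

def keyfunc(entry, lcletters=string.ascii_lowercase + '_'):
    # Normalize and lowercase the entry; prefix letter-initial keys with
    # chr(127) so they sort after every ASCII symbol key.
    lckey = unicodedata.normalize('NFD', entry.lower())
    if lckey[0:1] in lcletters:
        return chr(127) + lckey
    return lckey

entries = ['zeta', '__init__', 'alpha', '@decorator']
print(sorted(entries, key=keyfunc))
```

Here `@decorator` sorts first because its key keeps the raw `@`, while the underscore-initial and letter-initial entries are pushed behind the `chr(127)` barrier and then ordered among themselves.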
1831 | === removed directory '.pc/fix_manpages_generation_with_new_docutils.diff' |
1832 | === removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx' |
1833 | === removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers' |
1834 | === removed file '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py' |
1835 | --- .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py 2012-10-22 20:20:35 +0000 |
1836 | +++ .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py 1970-01-01 00:00:00 +0000 |
1837 | @@ -1,345 +0,0 @@ |
1838 | -# -*- coding: utf-8 -*- |
1839 | -""" |
1840 | - sphinx.writers.manpage |
1841 | - ~~~~~~~~~~~~~~~~~~~~~~ |
1842 | - |
1843 | - Manual page writer, extended for Sphinx custom nodes. |
1844 | - |
1845 | - :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. |
1846 | - :license: BSD, see LICENSE for details. |
1847 | -""" |
1848 | - |
1849 | -from docutils import nodes |
1850 | -try: |
1851 | - from docutils.writers.manpage import MACRO_DEF, Writer, \ |
1852 | - Translator as BaseTranslator |
1853 | - has_manpage_writer = True |
1854 | -except ImportError: |
1855 | - # define the classes in any case, sphinx.application needs it |
1856 | - Writer = BaseTranslator = object |
1857 | - has_manpage_writer = False |
1858 | - |
1859 | -from sphinx import addnodes |
1860 | -from sphinx.locale import admonitionlabels, versionlabels, _ |
1861 | -from sphinx.util.osutil import ustrftime |
1862 | - |
1863 | - |
1864 | -class ManualPageWriter(Writer): |
1865 | - def __init__(self, builder): |
1866 | - Writer.__init__(self) |
1867 | - self.builder = builder |
1868 | - |
1869 | - def translate(self): |
1870 | - visitor = ManualPageTranslator(self.builder, self.document) |
1871 | - self.visitor = visitor |
1872 | - self.document.walkabout(visitor) |
1873 | - self.output = visitor.astext() |
1874 | - |
1875 | - |
1876 | -class ManualPageTranslator(BaseTranslator): |
1877 | - """ |
1878 | - Custom translator. |
1879 | - """ |
1880 | - |
1881 | - def __init__(self, builder, *args, **kwds): |
1882 | - BaseTranslator.__init__(self, *args, **kwds) |
1883 | - self.builder = builder |
1884 | - |
1885 | - self.in_productionlist = 0 |
1886 | - |
1887 | - # first title is the manpage title |
1888 | - self.section_level = -1 |
1889 | - |
1890 | - # docinfo set by man_pages config value |
1891 | - self._docinfo['title'] = self.document.settings.title |
1892 | - self._docinfo['subtitle'] = self.document.settings.subtitle |
1893 | - if self.document.settings.authors: |
1894 | - # don't set it if no author given |
1895 | - self._docinfo['author'] = self.document.settings.authors |
1896 | - self._docinfo['manual_section'] = self.document.settings.section |
1897 | - |
1898 | - # docinfo set by other config values |
1899 | - self._docinfo['title_upper'] = self._docinfo['title'].upper() |
1900 | - if builder.config.today: |
1901 | - self._docinfo['date'] = builder.config.today |
1902 | - else: |
1903 | - self._docinfo['date'] = ustrftime(builder.config.today_fmt |
1904 | - or _('%B %d, %Y')) |
1905 | - self._docinfo['copyright'] = builder.config.copyright |
1906 | - self._docinfo['version'] = builder.config.version |
1907 | - self._docinfo['manual_group'] = builder.config.project |
1908 | - |
1909 | - # since self.append_header() is never called, need to do this here |
1910 | - self.body.append(MACRO_DEF) |
1911 | - |
1912 | - # overwritten -- added quotes around all .TH arguments |
1913 | - def header(self): |
1914 | - tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" |
1915 | - " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n" |
1916 | - ".SH NAME\n" |
1917 | - "%(title)s \- %(subtitle)s\n") |
1918 | - return tmpl % self._docinfo |
1919 | - |
1920 | - def visit_start_of_file(self, node): |
1921 | - pass |
1922 | - def depart_start_of_file(self, node): |
1923 | - pass |
1924 | - |
1925 | - def visit_desc(self, node): |
1926 | - self.visit_definition_list(node) |
1927 | - def depart_desc(self, node): |
1928 | - self.depart_definition_list(node) |
1929 | - |
1930 | - def visit_desc_signature(self, node): |
1931 | - self.visit_definition_list_item(node) |
1932 | - self.visit_term(node) |
1933 | - def depart_desc_signature(self, node): |
1934 | - self.depart_term(node) |
1935 | - |
1936 | - def visit_desc_addname(self, node): |
1937 | - pass |
1938 | - def depart_desc_addname(self, node): |
1939 | - pass |
1940 | - |
1941 | - def visit_desc_type(self, node): |
1942 | - pass |
1943 | - def depart_desc_type(self, node): |
1944 | - pass |
1945 | - |
1946 | - def visit_desc_returns(self, node): |
1947 | - self.body.append(' -> ') |
1948 | - def depart_desc_returns(self, node): |
1949 | - pass |
1950 | - |
1951 | - def visit_desc_name(self, node): |
1952 | - pass |
1953 | - def depart_desc_name(self, node): |
1954 | - pass |
1955 | - |
1956 | - def visit_desc_parameterlist(self, node): |
1957 | - self.body.append('(') |
1958 | - self.first_param = 1 |
1959 | - def depart_desc_parameterlist(self, node): |
1960 | - self.body.append(')') |
1961 | - |
1962 | - def visit_desc_parameter(self, node): |
1963 | - if not self.first_param: |
1964 | - self.body.append(', ') |
1965 | - else: |
1966 | - self.first_param = 0 |
1967 | - def depart_desc_parameter(self, node): |
1968 | - pass |
1969 | - |
1970 | - def visit_desc_optional(self, node): |
1971 | - self.body.append('[') |
1972 | - def depart_desc_optional(self, node): |
1973 | - self.body.append(']') |
1974 | - |
1975 | - def visit_desc_annotation(self, node): |
1976 | - pass |
1977 | - def depart_desc_annotation(self, node): |
1978 | - pass |
1979 | - |
1980 | - def visit_desc_content(self, node): |
1981 | - self.visit_definition(node) |
1982 | - def depart_desc_content(self, node): |
1983 | - self.depart_definition(node) |
1984 | - |
1985 | - def visit_refcount(self, node): |
1986 | - self.body.append(self.defs['emphasis'][0]) |
1987 | - def depart_refcount(self, node): |
1988 | - self.body.append(self.defs['emphasis'][1]) |
1989 | - |
1990 | - def visit_versionmodified(self, node): |
1991 | - self.visit_paragraph(node) |
1992 | - text = versionlabels[node['type']] % node['version'] |
1993 | - if len(node): |
1994 | - text += ': ' |
1995 | - else: |
1996 | - text += '.' |
1997 | - self.body.append(text) |
1998 | - def depart_versionmodified(self, node): |
1999 | - self.depart_paragraph(node) |
2000 | - |
2001 | - def visit_termsep(self, node): |
2002 | - self.body.append(', ') |
2003 | - raise nodes.SkipNode |
2004 | - |
2005 | - # overwritten -- we don't want source comments to show up |
2006 | - def visit_comment(self, node): |
2007 | - raise nodes.SkipNode |
2008 | - |
2009 | - # overwritten -- added ensure_eol() |
2010 | - def visit_footnote(self, node): |
2011 | - self.ensure_eol() |
2012 | - BaseTranslator.visit_footnote(self, node) |
2013 | - |
2014 | - # overwritten -- handle footnotes rubric |
2015 | - def visit_rubric(self, node): |
2016 | - self.ensure_eol() |
2017 | - if len(node.children) == 1: |
2018 | - rubtitle = node.children[0].astext() |
2019 | - if rubtitle in ('Footnotes', _('Footnotes')): |
2020 | - self.body.append('.SH ' + self.deunicode(rubtitle).upper() + |
2021 | - '\n') |
2022 | - raise nodes.SkipNode |
2023 | - else: |
2024 | - self.body.append('.sp\n') |
2025 | - def depart_rubric(self, node): |
2026 | - pass |
2027 | - |
2028 | - def visit_seealso(self, node): |
2029 | - self.visit_admonition(node) |
2030 | - def depart_seealso(self, node): |
2031 | - self.depart_admonition(node) |
2032 | - |
2033 | - # overwritten -- use our own label translations |
2034 | - def visit_admonition(self, node, name=None): |
2035 | - if name: |
2036 | - self.body.append('.IP %s\n' % |
2037 | - self.deunicode(admonitionlabels.get(name, name))) |
2038 | - |
2039 | - def visit_productionlist(self, node): |
2040 | - self.ensure_eol() |
2041 | - names = [] |
2042 | - self.in_productionlist += 1 |
2043 | - self.body.append('.sp\n.nf\n') |
2044 | - for production in node: |
2045 | - names.append(production['tokenname']) |
2046 | - maxlen = max(len(name) for name in names) |
2047 | - for production in node: |
2048 | - if production['tokenname']: |
2049 | - lastname = production['tokenname'].ljust(maxlen) |
2050 | - self.body.append(self.defs['strong'][0]) |
2051 | - self.body.append(self.deunicode(lastname)) |
2052 | - self.body.append(self.defs['strong'][1]) |
2053 | - self.body.append(' ::= ') |
2054 | - else: |
2055 | - self.body.append('%s ' % (' '*len(lastname))) |
2056 | - production.walkabout(self) |
2057 | - self.body.append('\n') |
2058 | - self.body.append('\n.fi\n') |
2059 | - self.in_productionlist -= 1 |
2060 | - raise nodes.SkipNode |
2061 | - |
2062 | - def visit_production(self, node): |
2063 | - pass |
2064 | - def depart_production(self, node): |
2065 | - pass |
2066 | - |
2067 | - # overwritten -- don't emit a warning for images |
2068 | - def visit_image(self, node): |
2069 | - if 'alt' in node.attributes: |
2070 | - self.body.append(_('[image: %s]') % node['alt'] + '\n') |
2071 | - self.body.append(_('[image]') + '\n') |
2072 | - raise nodes.SkipNode |
2073 | - |
2074 | - # overwritten -- don't visit inner marked up nodes |
2075 | - def visit_reference(self, node): |
2076 | - self.body.append(self.defs['reference'][0]) |
2077 | - self.body.append(node.astext()) |
2078 | - self.body.append(self.defs['reference'][1]) |
2079 | - |
2080 | - uri = node.get('refuri', '') |
2081 | - if uri.startswith('mailto:') or uri.startswith('http:') or \ |
2082 | - uri.startswith('https:') or uri.startswith('ftp:'): |
2083 | - # if configured, put the URL after the link |
2084 | - if self.builder.config.man_show_urls and \ |
2085 | - node.astext() != uri: |
2086 | - if uri.startswith('mailto:'): |
2087 | - uri = uri[7:] |
2088 | - self.body.extend([ |
2089 | - ' <', |
2090 | - self.defs['strong'][0], uri, self.defs['strong'][1], |
2091 | - '>']) |
2092 | - raise nodes.SkipNode |
2093 | - |
2094 | - def visit_centered(self, node): |
2095 | - self.ensure_eol() |
2096 | - self.body.append('.sp\n.ce\n') |
2097 | - def depart_centered(self, node): |
2098 | - self.body.append('\n.ce 0\n') |
2099 | - |
2100 | - def visit_compact_paragraph(self, node): |
2101 | - pass |
2102 | - def depart_compact_paragraph(self, node): |
2103 | - pass |
2104 | - |
2105 | - def visit_highlightlang(self, node): |
2106 | - pass |
2107 | - def depart_highlightlang(self, node): |
2108 | - pass |
2109 | - |
2110 | - def visit_download_reference(self, node): |
2111 | - pass |
2112 | - def depart_download_reference(self, node): |
2113 | - pass |
2114 | - |
2115 | - def visit_toctree(self, node): |
2116 | - raise nodes.SkipNode |
2117 | - |
2118 | - def visit_index(self, node): |
2119 | - raise nodes.SkipNode |
2120 | - |
2121 | - def visit_tabular_col_spec(self, node): |
2122 | - raise nodes.SkipNode |
2123 | - |
2124 | - def visit_glossary(self, node): |
2125 | - pass |
2126 | - def depart_glossary(self, node): |
2127 | - pass |
2128 | - |
2129 | - def visit_acks(self, node): |
2130 | - self.ensure_eol() |
2131 | - self.body.append(', '.join(n.astext() |
2132 | - for n in node.children[0].children) + '.') |
2133 | - self.body.append('\n') |
2134 | - raise nodes.SkipNode |
2135 | - |
2136 | - def visit_hlist(self, node): |
2137 | - self.visit_bullet_list(node) |
2138 | - def depart_hlist(self, node): |
2139 | - self.depart_bullet_list(node) |
2140 | - |
2141 | - def visit_hlistcol(self, node): |
2142 | - pass |
2143 | - def depart_hlistcol(self, node): |
2144 | - pass |
2145 | - |
2146 | - def visit_literal_emphasis(self, node): |
2147 | - return self.visit_emphasis(node) |
2148 | - def depart_literal_emphasis(self, node): |
2149 | - return self.depart_emphasis(node) |
2150 | - |
2151 | - def visit_abbreviation(self, node): |
2152 | - pass |
2153 | - def depart_abbreviation(self, node): |
2154 | - pass |
2155 | - |
2156 | - # overwritten: handle section titles better than in 0.6 release |
2157 | - def visit_title(self, node): |
2158 | - if isinstance(node.parent, addnodes.seealso): |
2159 | - self.body.append('.IP "') |
2160 | - return |
2161 | - elif isinstance(node.parent, nodes.section): |
2162 | - if self.section_level == 0: |
2163 | - # skip the document title |
2164 | - raise nodes.SkipNode |
2165 | - elif self.section_level == 1: |
2166 | - self.body.append('.SH %s\n' % |
2167 | - self.deunicode(node.astext().upper())) |
2168 | - raise nodes.SkipNode |
2169 | - return BaseTranslator.visit_title(self, node) |
2170 | - def depart_title(self, node): |
2171 | - if isinstance(node.parent, addnodes.seealso): |
2172 | - self.body.append('"\n') |
2173 | - return |
2174 | - return BaseTranslator.depart_title(self, node) |
2175 | - |
2176 | - def visit_raw(self, node): |
2177 | - if 'manpage' in node.get('format', '').split(): |
2178 | - self.body.append(node.astext()) |
2179 | - raise nodes.SkipNode |
2180 | - |
2181 | - def unknown_visit(self, node): |
2182 | - raise NotImplementedError('Unknown node: ' + node.__class__.__name__) |
2183 | |
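The `header()` method in the translator above quotes every `.TH` argument, which keeps titles or dates containing spaces intact as single groff arguments. A standalone sketch of that template, filled with made-up docinfo values (the `sphinx-build` metadata below is illustrative, not taken from this package):

```python
# Hypothetical docinfo values; in Sphinx these come from the man_pages
# config value and other builder config settings.
docinfo = {
    'title': 'sphinx-build',
    'title_upper': 'SPHINX-BUILD',
    'subtitle': 'Sphinx documentation generator',
    'manual_section': 1,
    'date': 'February 16, 2013',
    'version': '1.1.3',
    'manual_group': 'Sphinx',
}

# Same template shape as the header() method: quoted .TH arguments,
# then a NAME section with "title \- subtitle".
tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
        " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
        ".SH NAME\n"
        "%(title)s \\- %(subtitle)s\n")
print(tmpl % docinfo)
```

Without the quotes, a date like `February 16, 2013` would be split into several `.TH` arguments by groff, shifting the version and group fields.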
2184 | === added directory '.pc/manpage_writer_docutils_0.10_api.diff' |
2185 | === added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx' |
2186 | === added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers' |
2187 | === added file '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py' |
2188 | --- .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 1970-01-01 00:00:00 +0000 |
2189 | +++ .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 2013-02-16 12:25:23 +0000 |
2190 | @@ -0,0 +1,345 @@ |
2191 | +# -*- coding: utf-8 -*- |
2192 | +""" |
2193 | + sphinx.writers.manpage |
2194 | + ~~~~~~~~~~~~~~~~~~~~~~ |
2195 | + |
2196 | + Manual page writer, extended for Sphinx custom nodes. |
2197 | + |
2198 | + :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. |
2199 | + :license: BSD, see LICENSE for details. |
2200 | +""" |
2201 | + |
2202 | +from docutils import nodes |
2203 | +try: |
2204 | + from docutils.writers.manpage import MACRO_DEF, Writer, \ |
2205 | + Translator as BaseTranslator |
2206 | + has_manpage_writer = True |
2207 | +except ImportError: |
2208 | + # define the classes in any case, sphinx.application needs it |
2209 | + Writer = BaseTranslator = object |
2210 | + has_manpage_writer = False |
2211 | + |
2212 | +from sphinx import addnodes |
2213 | +from sphinx.locale import admonitionlabels, versionlabels, _ |
2214 | +from sphinx.util.osutil import ustrftime |
2215 | + |
2216 | + |
2217 | +class ManualPageWriter(Writer): |
2218 | + def __init__(self, builder): |
2219 | + Writer.__init__(self) |
2220 | + self.builder = builder |
2221 | + |
2222 | + def translate(self): |
2223 | + visitor = ManualPageTranslator(self.builder, self.document) |
2224 | + self.visitor = visitor |
2225 | + self.document.walkabout(visitor) |
2226 | + self.output = visitor.astext() |
2227 | + |
2228 | + |
2229 | +class ManualPageTranslator(BaseTranslator): |
2230 | + """ |
2231 | + Custom translator. |
2232 | + """ |
2233 | + |
2234 | + def __init__(self, builder, *args, **kwds): |
2235 | + BaseTranslator.__init__(self, *args, **kwds) |
2236 | + self.builder = builder |
2237 | + |
2238 | + self.in_productionlist = 0 |
2239 | + |
2240 | + # first title is the manpage title |
2241 | + self.section_level = -1 |
2242 | + |
2243 | + # docinfo set by man_pages config value |
2244 | + self._docinfo['title'] = self.document.settings.title |
2245 | + self._docinfo['subtitle'] = self.document.settings.subtitle |
2246 | + if self.document.settings.authors: |
2247 | + # don't set it if no author given |
2248 | + self._docinfo['author'] = self.document.settings.authors |
2249 | + self._docinfo['manual_section'] = self.document.settings.section |
2250 | + |
2251 | + # docinfo set by other config values |
2252 | + self._docinfo['title_upper'] = self._docinfo['title'].upper() |
2253 | + if builder.config.today: |
2254 | + self._docinfo['date'] = builder.config.today |
2255 | + else: |
2256 | + self._docinfo['date'] = ustrftime(builder.config.today_fmt |
2257 | + or _('%B %d, %Y')) |
2258 | + self._docinfo['copyright'] = builder.config.copyright |
2259 | + self._docinfo['version'] = builder.config.version |
2260 | + self._docinfo['manual_group'] = builder.config.project |
2261 | + |
2262 | + # since self.append_header() is never called, need to do this here |
2263 | + self.body.append(MACRO_DEF) |
2264 | + |
2265 | + # overwritten -- added quotes around all .TH arguments |
2266 | + def header(self): |
2267 | + tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" |
2268 | + " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n" |
2269 | + ".SH NAME\n" |
2270 | + "%(title)s \- %(subtitle)s\n") |
2271 | + return tmpl % self._docinfo |
2272 | + |
2273 | + def visit_start_of_file(self, node): |
2274 | + pass |
2275 | + def depart_start_of_file(self, node): |
2276 | + pass |
2277 | + |
2278 | + def visit_desc(self, node): |
2279 | + self.visit_definition_list(node) |
2280 | + def depart_desc(self, node): |
2281 | + self.depart_definition_list(node) |
2282 | + |
2283 | + def visit_desc_signature(self, node): |
2284 | + self.visit_definition_list_item(node) |
2285 | + self.visit_term(node) |
2286 | + def depart_desc_signature(self, node): |
2287 | + self.depart_term(node) |
2288 | + |
2289 | + def visit_desc_addname(self, node): |
2290 | + pass |
2291 | + def depart_desc_addname(self, node): |
2292 | + pass |
2293 | + |
2294 | + def visit_desc_type(self, node): |
2295 | + pass |
2296 | + def depart_desc_type(self, node): |
2297 | + pass |
2298 | + |
2299 | + def visit_desc_returns(self, node): |
2300 | + self.body.append(' -> ') |
2301 | + def depart_desc_returns(self, node): |
2302 | + pass |
2303 | + |
2304 | + def visit_desc_name(self, node): |
2305 | + pass |
2306 | + def depart_desc_name(self, node): |
2307 | + pass |
2308 | + |
2309 | + def visit_desc_parameterlist(self, node): |
2310 | + self.body.append('(') |
2311 | + self.first_param = 1 |
2312 | + def depart_desc_parameterlist(self, node): |
2313 | + self.body.append(')') |
2314 | + |
2315 | + def visit_desc_parameter(self, node): |
2316 | + if not self.first_param: |
2317 | + self.body.append(', ') |
2318 | + else: |
2319 | + self.first_param = 0 |
2320 | + def depart_desc_parameter(self, node): |
2321 | + pass |
2322 | + |
2323 | + def visit_desc_optional(self, node): |
2324 | + self.body.append('[') |
2325 | + def depart_desc_optional(self, node): |
2326 | + self.body.append(']') |
2327 | + |
2328 | + def visit_desc_annotation(self, node): |
2329 | + pass |
2330 | + def depart_desc_annotation(self, node): |
2331 | + pass |
2332 | + |
2333 | + def visit_desc_content(self, node): |
2334 | + self.visit_definition(node) |
2335 | + def depart_desc_content(self, node): |
2336 | + self.depart_definition(node) |
2337 | + |
2338 | + def visit_refcount(self, node): |
2339 | + self.body.append(self.defs['emphasis'][0]) |
2340 | + def depart_refcount(self, node): |
2341 | + self.body.append(self.defs['emphasis'][1]) |
2342 | + |
2343 | + def visit_versionmodified(self, node): |
2344 | + self.visit_paragraph(node) |
2345 | + text = versionlabels[node['type']] % node['version'] |
2346 | + if len(node): |
2347 | + text += ': ' |
2348 | + else: |
2349 | + text += '.' |
2350 | + self.body.append(text) |
2351 | + def depart_versionmodified(self, node): |
2352 | + self.depart_paragraph(node) |
2353 | + |
2354 | + def visit_termsep(self, node): |
2355 | + self.body.append(', ') |
2356 | + raise nodes.SkipNode |
2357 | + |
2358 | + # overwritten -- we don't want source comments to show up |
2359 | + def visit_comment(self, node): |
2360 | + raise nodes.SkipNode |
2361 | + |
2362 | + # overwritten -- added ensure_eol() |
2363 | + def visit_footnote(self, node): |
2364 | + self.ensure_eol() |
2365 | + BaseTranslator.visit_footnote(self, node) |
2366 | + |
2367 | + # overwritten -- handle footnotes rubric |
2368 | + def visit_rubric(self, node): |
2369 | + self.ensure_eol() |
2370 | + if len(node.children) == 1: |
2371 | + rubtitle = node.children[0].astext() |
2372 | + if rubtitle in ('Footnotes', _('Footnotes')): |
2373 | + self.body.append('.SH ' + self.deunicode(rubtitle).upper() + |
2374 | + '\n') |
2375 | + raise nodes.SkipNode |
2376 | + else: |
2377 | + self.body.append('.sp\n') |
2378 | + def depart_rubric(self, node): |
2379 | + pass |
2380 | + |
2381 | + def visit_seealso(self, node): |
2382 | + self.visit_admonition(node) |
2383 | + def depart_seealso(self, node): |
2384 | + self.depart_admonition(node) |
2385 | + |
2386 | + # overwritten -- use our own label translations |
2387 | + def visit_admonition(self, node, name=None): |
2388 | + if name: |
2389 | + self.body.append('.IP %s\n' % |
2390 | + self.deunicode(admonitionlabels.get(name, name))) |
2391 | + |
2392 | + def visit_productionlist(self, node): |
2393 | + self.ensure_eol() |
2394 | + names = [] |
2395 | + self.in_productionlist += 1 |
2396 | + self.body.append('.sp\n.nf\n') |
2397 | + for production in node: |
2398 | + names.append(production['tokenname']) |
2399 | + maxlen = max(len(name) for name in names) |
2400 | + for production in node: |
2401 | + if production['tokenname']: |
2402 | + lastname = production['tokenname'].ljust(maxlen) |
2403 | + self.body.append(self.defs['strong'][0]) |
2404 | + self.body.append(self.deunicode(lastname)) |
2405 | + self.body.append(self.defs['strong'][1]) |
2406 | + self.body.append(' ::= ') |
2407 | + else: |
2408 | + self.body.append('%s ' % (' '*len(lastname))) |
2409 | + production.walkabout(self) |
2410 | + self.body.append('\n') |
2411 | + self.body.append('\n.fi\n') |
2412 | + self.in_productionlist -= 1 |
2413 | + raise nodes.SkipNode |
2414 | + |
2415 | + def visit_production(self, node): |
2416 | + pass |
2417 | + def depart_production(self, node): |
2418 | + pass |
2419 | + |
2420 | + # overwritten -- don't emit a warning for images |
2421 | + def visit_image(self, node): |
2422 | + if 'alt' in node.attributes: |
2423 | + self.body.append(_('[image: %s]') % node['alt'] + '\n') |
2424 | + self.body.append(_('[image]') + '\n') |
2425 | + raise nodes.SkipNode |
2426 | + |
2427 | + # overwritten -- don't visit inner marked up nodes |
2428 | + def visit_reference(self, node): |
2429 | + self.body.append(self.defs['reference'][0]) |
2430 | + self.body.append(node.astext()) |
2431 | + self.body.append(self.defs['reference'][1]) |
2432 | + |
2433 | + uri = node.get('refuri', '') |
2434 | + if uri.startswith('mailto:') or uri.startswith('http:') or \ |
2435 | + uri.startswith('https:') or uri.startswith('ftp:'): |
2436 | + # if configured, put the URL after the link |
2437 | + if self.builder.config.man_show_urls and \ |
2438 | + node.astext() != uri: |
2439 | + if uri.startswith('mailto:'): |
2440 | + uri = uri[7:] |
2441 | + self.body.extend([ |
2442 | + ' <', |
2443 | + self.defs['strong'][0], uri, self.defs['strong'][1], |
2444 | + '>']) |
2445 | + raise nodes.SkipNode |
2446 | + |
2447 | + def visit_centered(self, node): |
2448 | + self.ensure_eol() |
2449 | + self.body.append('.sp\n.ce\n') |
2450 | + def depart_centered(self, node): |
2451 | + self.body.append('\n.ce 0\n') |
2452 | + |
2453 | + def visit_compact_paragraph(self, node): |
2454 | + pass |
2455 | + def depart_compact_paragraph(self, node): |
2456 | + pass |
2457 | + |
2458 | + def visit_highlightlang(self, node): |
2459 | + pass |
2460 | + def depart_highlightlang(self, node): |
2461 | + pass |
2462 | + |
2463 | + def visit_download_reference(self, node): |
2464 | + pass |
2465 | + def depart_download_reference(self, node): |
2466 | + pass |
2467 | + |
2468 | + def visit_toctree(self, node): |
2469 | + raise nodes.SkipNode |
2470 | + |
2471 | + def visit_index(self, node): |
2472 | + raise nodes.SkipNode |
2473 | + |
2474 | + def visit_tabular_col_spec(self, node): |
2475 | + raise nodes.SkipNode |
2476 | + |
2477 | + def visit_glossary(self, node): |
2478 | + pass |
2479 | + def depart_glossary(self, node): |
2480 | + pass |
2481 | + |
2482 | + def visit_acks(self, node): |
2483 | + self.ensure_eol() |
2484 | + self.body.append(', '.join(n.astext() |
2485 | + for n in node.children[0].children) + '.') |
2486 | + self.body.append('\n') |
2487 | + raise nodes.SkipNode |
2488 | + |
2489 | + def visit_hlist(self, node): |
2490 | + self.visit_bullet_list(node) |
2491 | + def depart_hlist(self, node): |
2492 | + self.depart_bullet_list(node) |
2493 | + |
2494 | + def visit_hlistcol(self, node): |
2495 | + pass |
2496 | + def depart_hlistcol(self, node): |
2497 | + pass |
2498 | + |
2499 | + def visit_literal_emphasis(self, node): |
2500 | + return self.visit_emphasis(node) |
2501 | + def depart_literal_emphasis(self, node): |
2502 | + return self.depart_emphasis(node) |
2503 | + |
2504 | + def visit_abbreviation(self, node): |
2505 | + pass |
2506 | + def depart_abbreviation(self, node): |
2507 | + pass |
2508 | + |
2509 | + # overwritten: handle section titles better than in 0.6 release |
2510 | + def visit_title(self, node): |
2511 | + if isinstance(node.parent, addnodes.seealso): |
2512 | + self.body.append('.IP "') |
2513 | + return |
2514 | + elif isinstance(node.parent, nodes.section): |
2515 | + if self.section_level == 0: |
2516 | + # skip the document title |
2517 | + raise nodes.SkipNode |
2518 | + elif self.section_level == 1: |
2519 | + self.body.append('.SH %s\n' % |
2520 | + self.deunicode(node.astext().upper())) |
2521 | + raise nodes.SkipNode |
2522 | + return BaseTranslator.visit_title(self, node) |
2523 | + def depart_title(self, node): |
2524 | + if isinstance(node.parent, addnodes.seealso): |
2525 | + self.body.append('"\n') |
2526 | + return |
2527 | + return BaseTranslator.depart_title(self, node) |
2528 | + |
2529 | + def visit_raw(self, node): |
2530 | + if 'manpage' in node.get('format', '').split(): |
2531 | + self.body.append(node.astext()) |
2532 | + raise nodes.SkipNode |
2533 | + |
2534 | + def unknown_visit(self, node): |
2535 | + raise NotImplementedError('Unknown node: ' + node.__class__.__name__) |
2536 | |
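The `visit_reference` handler in the manpage writer above can be condensed into a small standalone sketch of its URL-suffix rule: when `man_show_urls` is enabled and the link text differs from the target, the URI is appended after the link (with any `mailto:` prefix stripped). The function name and plain-string interface here are hypothetical; the real writer appends roff markup fragments to `self.body` instead.

```python
def append_url(text, uri, show_urls=True):
    """Return the link text, optionally followed by ' <uri>'.

    Mirrors the manpage writer's rule: only web/mail/ftp URIs qualify,
    and the suffix is skipped when the text already equals the URI.
    """
    schemes = ('mailto:', 'http:', 'https:', 'ftp:')
    if uri.startswith(schemes) and show_urls and text != uri:
        if uri.startswith('mailto:'):
            uri = uri[len('mailto:'):]
        return '%s <%s>' % (text, uri)
    return text
```

For example, `append_url('Sphinx', 'http://sphinx.pocoo.org/')` yields the text followed by the URI in angle brackets, while a bare URL used as its own link text is left untouched.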
2537 | === added directory '.pc/parallel_2to3.diff' |
2538 | === added file '.pc/parallel_2to3.diff/setup.py' |
2539 | --- .pc/parallel_2to3.diff/setup.py 1970-01-01 00:00:00 +0000 |
2540 | +++ .pc/parallel_2to3.diff/setup.py 2013-02-16 12:25:23 +0000 |
2541 | @@ -0,0 +1,209 @@ |
2542 | +# -*- coding: utf-8 -*- |
2543 | +try: |
2544 | + from setuptools import setup, find_packages |
2545 | +except ImportError: |
2546 | + raise |
2547 | + import distribute_setup |
2548 | + distribute_setup.use_setuptools() |
2549 | + from setuptools import setup, find_packages |
2550 | + |
2551 | +import os |
2552 | +import sys |
2553 | +from distutils import log |
2554 | + |
2555 | +import sphinx |
2556 | + |
2557 | +long_desc = ''' |
2558 | +Sphinx is a tool that makes it easy to create intelligent and beautiful |
2559 | +documentation for Python projects (or other documents consisting of multiple |
2560 | +reStructuredText sources), written by Georg Brandl. It was originally created |
2561 | +for the new Python documentation, and has excellent facilities for Python |
2562 | +project documentation, but C/C++ is supported as well, and more languages are |
2563 | +planned. |
2564 | + |
2565 | +Sphinx uses reStructuredText as its markup language, and many of its strengths |
2566 | +come from the power and straightforwardness of reStructuredText and its parsing |
2567 | +and translating suite, the Docutils. |
2568 | + |
2569 | +Among its features are the following: |
2570 | + |
2571 | +* Output formats: HTML (including derivative formats such as HTML Help, Epub |
2572 | + and Qt Help), plain text, manual pages and LaTeX or direct PDF output |
2573 | + using rst2pdf |
2574 | +* Extensive cross-references: semantic markup and automatic links |
2575 | + for functions, classes, glossary terms and similar pieces of information |
2576 | +* Hierarchical structure: easy definition of a document tree, with automatic |
2577 | + links to siblings, parents and children |
2578 | +* Automatic indices: general index as well as a module index |
2579 | +* Code handling: automatic highlighting using the Pygments highlighter |
2580 | +* Flexible HTML output using the Jinja 2 templating engine |
2581 | +* Various extensions are available, e.g. for automatic testing of snippets |
2582 | + and inclusion of appropriately formatted docstrings |
2583 | +* Setuptools integration |
2584 | + |
2585 | +A development egg can be found `here |
2586 | +<http://bitbucket.org/birkenfeld/sphinx/get/tip.gz#egg=Sphinx-dev>`_. |
2587 | +''' |
2588 | + |
2589 | +requires = ['Pygments>=1.2', 'Jinja2>=2.3', 'docutils>=0.7'] |
2590 | + |
2591 | +if sys.version_info < (2, 4): |
2592 | + print('ERROR: Sphinx requires at least Python 2.4 to run.') |
2593 | + sys.exit(1) |
2594 | + |
2595 | +if sys.version_info < (2, 5): |
2596 | + # Python 2.4's distutils doesn't automatically install an egg-info, |
2597 | + # so an existing docutils install won't be detected -- in that case, |
2598 | + # remove the dependency from setup.py |
2599 | + try: |
2600 | + import docutils |
2601 | + if int(docutils.__version__[2]) < 4: |
2602 | + raise ValueError('docutils not recent enough') |
2603 | + except: |
2604 | + pass |
2605 | + else: |
2606 | + del requires[-1] |
2607 | + |
2608 | + # The uuid module is new in the stdlib in 2.5 |
2609 | + requires.append('uuid>=1.30') |
2610 | + |
2611 | + |
2612 | +# Provide a "compile_catalog" command that also creates the translated |
2613 | +# JavaScript files if Babel is available. |
2614 | + |
2615 | +cmdclass = {} |
2616 | + |
2617 | +try: |
2618 | + from babel.messages.pofile import read_po |
2619 | + from babel.messages.frontend import compile_catalog |
2620 | + try: |
2621 | + from simplejson import dump |
2622 | + except ImportError: |
2623 | + from json import dump |
2624 | +except ImportError: |
2625 | + pass |
2626 | +else: |
2627 | + class compile_catalog_plusjs(compile_catalog): |
2628 | + """ |
2629 | + An extended command that writes all message strings that occur in |
2630 | + JavaScript files to a JavaScript file along with the .mo file. |
2631 | + |
2632 | + Unfortunately, babel's setup command isn't built very extensible, so |
2633 | + most of the run() code is duplicated here. |
2634 | + """ |
2635 | + |
2636 | + def run(self): |
2637 | + compile_catalog.run(self) |
2638 | + |
2639 | + po_files = [] |
2640 | + js_files = [] |
2641 | + |
2642 | + if not self.input_file: |
2643 | + if self.locale: |
2644 | + po_files.append((self.locale, |
2645 | + os.path.join(self.directory, self.locale, |
2646 | + 'LC_MESSAGES', |
2647 | + self.domain + '.po'))) |
2648 | + js_files.append(os.path.join(self.directory, self.locale, |
2649 | + 'LC_MESSAGES', |
2650 | + self.domain + '.js')) |
2651 | + else: |
2652 | + for locale in os.listdir(self.directory): |
2653 | + po_file = os.path.join(self.directory, locale, |
2654 | + 'LC_MESSAGES', |
2655 | + self.domain + '.po') |
2656 | + if os.path.exists(po_file): |
2657 | + po_files.append((locale, po_file)) |
2658 | + js_files.append(os.path.join(self.directory, locale, |
2659 | + 'LC_MESSAGES', |
2660 | + self.domain + '.js')) |
2661 | + else: |
2662 | + po_files.append((self.locale, self.input_file)) |
2663 | + if self.output_file: |
2664 | + js_files.append(self.output_file) |
2665 | + else: |
2666 | + js_files.append(os.path.join(self.directory, self.locale, |
2667 | + 'LC_MESSAGES', |
2668 | + self.domain + '.js')) |
2669 | + |
2670 | + for js_file, (locale, po_file) in zip(js_files, po_files): |
2671 | + infile = open(po_file, 'r') |
2672 | + try: |
2673 | + catalog = read_po(infile, locale) |
2674 | + finally: |
2675 | + infile.close() |
2676 | + |
2677 | + if catalog.fuzzy and not self.use_fuzzy: |
2678 | + continue |
2679 | + |
2680 | + log.info('writing JavaScript strings in catalog %r to %r', |
2681 | + po_file, js_file) |
2682 | + |
2683 | + jscatalog = {} |
2684 | + for message in catalog: |
2685 | + if any(x[0].endswith('.js') for x in message.locations): |
2686 | + msgid = message.id |
2687 | + if isinstance(msgid, (list, tuple)): |
2688 | + msgid = msgid[0] |
2689 | + jscatalog[msgid] = message.string |
2690 | + |
2691 | + outfile = open(js_file, 'wb') |
2692 | + try: |
2693 | + outfile.write('Documentation.addTranslations('); |
2694 | + dump(dict( |
2695 | + messages=jscatalog, |
2696 | + plural_expr=catalog.plural_expr, |
2697 | + locale=str(catalog.locale) |
2698 | + ), outfile) |
2699 | + outfile.write(');') |
2700 | + finally: |
2701 | + outfile.close() |
2702 | + |
2703 | + cmdclass['compile_catalog'] = compile_catalog_plusjs |
2704 | + |
2705 | + |
2706 | +setup( |
2707 | + name='Sphinx', |
2708 | + version=sphinx.__version__, |
2709 | + url='http://sphinx.pocoo.org/', |
2710 | + download_url='http://pypi.python.org/pypi/Sphinx', |
2711 | + license='BSD', |
2712 | + author='Georg Brandl', |
2713 | + author_email='georg@python.org', |
2714 | + description='Python documentation generator', |
2715 | + long_description=long_desc, |
2716 | + zip_safe=False, |
2717 | + classifiers=[ |
2718 | + 'Development Status :: 5 - Production/Stable', |
2719 | + 'Environment :: Console', |
2720 | + 'Environment :: Web Environment', |
2721 | + 'Intended Audience :: Developers', |
2722 | + 'Intended Audience :: Education', |
2723 | + 'License :: OSI Approved :: BSD License', |
2724 | + 'Operating System :: OS Independent', |
2725 | + 'Programming Language :: Python', |
2726 | + 'Programming Language :: Python :: 2', |
2727 | + 'Programming Language :: Python :: 3', |
2728 | + 'Topic :: Documentation', |
2729 | + 'Topic :: Text Processing', |
2730 | + 'Topic :: Utilities', |
2731 | + ], |
2732 | + platforms='any', |
2733 | + packages=find_packages(exclude=['custom_fixers', 'test']), |
2734 | + include_package_data=True, |
2735 | + entry_points={ |
2736 | + 'console_scripts': [ |
2737 | + 'sphinx-build = sphinx:main', |
2738 | + 'sphinx-quickstart = sphinx.quickstart:main', |
2739 | + 'sphinx-apidoc = sphinx.apidoc:main', |
2740 | + 'sphinx-autogen = sphinx.ext.autosummary.generate:main', |
2741 | + ], |
2742 | + 'distutils.commands': [ |
2743 | + 'build_sphinx = sphinx.setup_command:BuildDoc', |
2744 | + ], |
2745 | + }, |
2746 | + install_requires=requires, |
2747 | + cmdclass=cmdclass, |
2748 | + use_2to3=True, |
2749 | + use_2to3_fixers=['custom_fixers'], |
2750 | +) |
2751 | |
2752 | === modified file 'debian/changelog' |
2753 | --- debian/changelog 2012-11-27 19:20:44 +0000 |
2754 | +++ debian/changelog 2013-02-16 12:25:23 +0000 |
2755 | @@ -1,3 +1,27 @@ |
2756 | +sphinx (1.1.3+dfsg-7ubuntu1) raring; urgency=low |
2757 | + |
2758 | + * Merge with Debian experimental. Remaining Ubuntu changes: |
2759 | + - Switch to dh_python2. |
2760 | + - debian/rules: export NO_PKG_MANGLE=1 in order to not have translations |
2761 | + stripped. |
2762 | + - debian/control: Drop the build-dependency on python-whoosh. |
2763 | + - debian/control: Add "XS-Testsuite: autopkgtest" header. |
2764 | + * debian/patches/fix_manpages_generation_with_new_docutils.diff: |
2765 | + dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff. |
2766 | + * debian/patches/fix_literal_block_warning.diff: add patch to avoid |
2767 | + false-positive "Literal block expected; none found." warnings when |
2768 | + building l10n projects. |
2769 | + |
2770 | + -- Dmitry Shachnev <mitya57@ubuntu.com> Sat, 16 Feb 2013 14:51:12 +0400 |
2771 | + |
2772 | +sphinx (1.1.3+dfsg-7) experimental; urgency=low |
2773 | + |
2774 | + * Backport upstream patch to fix compatibility with Docutils 0.10. |
2775 | + * Run 2to3 in parallel. |
2776 | + * Add DEP-8 tests for the documentation package. |
2777 | + |
2778 | + -- Jakub Wilk <jwilk@debian.org> Wed, 19 Dec 2012 10:53:51 +0100 |
2779 | + |
2780 | sphinx (1.1.3+dfsg-5ubuntu1) raring; urgency=low |
2781 | |
2782 | * Merge with Debian packaging SVN. |
2783 | @@ -14,17 +38,24 @@ |
2784 | |
2785 | -- Dmitry Shachnev <mitya57@ubuntu.com> Tue, 27 Nov 2012 19:20:44 +0400 |
2786 | |
2787 | -sphinx (1.1.3+dfsg-6) UNRELEASED; urgency=low |
2788 | +sphinx (1.1.3+dfsg-6) experimental; urgency=low |
2789 | |
2790 | [ Jakub Wilk ] |
2791 | * DEP-8 tests: remove “Features: no-build-needed”; it's the default now. |
2792 | * Bump standards version to 3.9.4; no changes needed. |
2793 | + * Pass -a to xvfb-run, so that it tries to get a free server number. |
2794 | + * Rebuild MO files from source. |
2795 | + + Update debian/rules. |
2796 | + + Add the rebuilt files to extend-diff-ignore. |
2797 | + * Make synopses in the patch header start with a lowercase letter and not |
2798 | + end with a full stop. |
2799 | |
2800 | [ Dmitry Shachnev ] |
2801 | * debian/patches/l10n_fixes.diff: fix crashes and not working external |
2802 | links in l10n mode (closes: #691719). |
2803 | + * debian/patches/sort_stopwords.diff: mark as applied upstream. |
2804 | |
2805 | - -- Jakub Wilk <jwilk@debian.org> Tue, 13 Nov 2012 22:36:10 +0100 |
2806 | + -- Jakub Wilk <jwilk@debian.org> Sat, 08 Dec 2012 14:38:19 +0100 |
2807 | |
2808 | sphinx (1.1.3+dfsg-5) experimental; urgency=low |
2809 | |
2810 | |
2811 | === modified file 'debian/jstest/run-tests' |
2812 | --- debian/jstest/run-tests 2012-03-12 12:18:37 +0000 |
2813 | +++ debian/jstest/run-tests 2013-02-16 12:25:23 +0000 |
2814 | @@ -26,7 +26,7 @@ |
2815 | if __name__ == '__main__': |
2816 | if not os.getenv('DISPLAY'): |
2817 | raise RuntimeError('These tests requires access to an X server') |
2818 | - build_directory = os.path.join(os.path.dirname(__file__), '..', '..', 'build', 'html') |
2819 | + [build_directory] = sys.argv[1:] |
2820 | build_directory = os.path.abspath(build_directory) |
2821 | n_failures = 0 |
2822 | for testcase in t1, t2, t3: |
2823 | |
2824 | === added file 'debian/patches/fix_literal_block_warning.diff' |
2825 | --- debian/patches/fix_literal_block_warning.diff 1970-01-01 00:00:00 +0000 |
2826 | +++ debian/patches/fix_literal_block_warning.diff 2013-02-16 12:25:23 +0000 |
2827 | @@ -0,0 +1,17 @@ |
2828 | +Description: avoid false-positive warnings about missing literal block |
2829 | +Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/e2338c4fcf86 |
2830 | +Last-Update: 2013-02-16 |
2831 | + |
2832 | +--- a/sphinx/environment.py 2013-02-16 15:24:05.699388441 +0400 |
2833 | ++++ b/sphinx/environment.py 2013-02-16 15:24:05.691388443 +0400 |
2834 | +@@ -218,6 +218,10 @@ |
2835 | + if not msgstr or msgstr == msg: # as-of-yet untranslated |
2836 | + continue |
2837 | + |
2838 | ++ # Avoid "Literal block expected; none found." warnings. |
2839 | ++ if msgstr.strip().endswith('::'): |
2840 | ++ msgstr += '\n\n dummy literal' |
2841 | ++ |
2842 | + patch = new_document(source, settings) |
2843 | + parser.parse(msgstr, patch) |
2844 | + patch = patch[0] |
2845 | |
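The core of fix_literal_block_warning.diff above is a one-line guard: if a translated message ends with `::` (which tells docutils a literal block must follow), a dummy literal block is appended before re-parsing, so the parser never warns about a missing one. A minimal sketch of that idea, with a hypothetical function name:

```python
def pad_literal_block(msgstr):
    """Append a dummy literal block if the message announces one with '::'.

    Avoids docutils' "Literal block expected; none found." warning when
    parsing a translated paragraph in isolation.
    """
    if msgstr.strip().endswith('::'):
        msgstr += '\n\n   dummy literal'
    return msgstr
```

Messages that do not end with `::` pass through unchanged.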
2846 | === removed file 'debian/patches/fix_manpages_generation_with_new_docutils.diff' |
2847 | --- debian/patches/fix_manpages_generation_with_new_docutils.diff 2012-10-22 20:20:35 +0000 |
2848 | +++ debian/patches/fix_manpages_generation_with_new_docutils.diff 1970-01-01 00:00:00 +0000 |
2849 | @@ -1,33 +0,0 @@ |
2850 | -Description: Fix build failure with Docutils 0.9 |
2851 | -Bug: https://bitbucket.org/birkenfeld/sphinx/issue/998/docutils-010-will-break-sphinx-manpage |
2852 | -Author: Toshio Kuratomi <a.badger@gmail.com> |
2853 | -Last-Update: 2012-10-22 |
2854 | - |
2855 | -diff -up a/sphinx/writers/manpage.py b/sphinx/writers/manpage.py |
2856 | ---- a/sphinx/writers/manpage.py 2011-11-01 00:38:44.000000000 -0700 |
2857 | -+++ b/sphinx/writers/manpage.py 2012-08-21 12:38:33.380808202 -0700 |
2858 | -@@ -72,6 +72,11 @@ class ManualPageTranslator(BaseTranslato |
2859 | - # since self.append_header() is never called, need to do this here |
2860 | - self.body.append(MACRO_DEF) |
2861 | - |
2862 | -+ # Overwrite admonition label translations with our own |
2863 | -+ for label, translation in admonitionlabels.items(): |
2864 | -+ self.language.labels[label] = self.deunicode(translation) |
2865 | -+ |
2866 | -+ |
2867 | - # overwritten -- added quotes around all .TH arguments |
2868 | - def header(self): |
2869 | - tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" |
2870 | -@@ -193,12 +198,6 @@ class ManualPageTranslator(BaseTranslato |
2871 | - def depart_seealso(self, node): |
2872 | - self.depart_admonition(node) |
2873 | - |
2874 | -- # overwritten -- use our own label translations |
2875 | -- def visit_admonition(self, node, name=None): |
2876 | -- if name: |
2877 | -- self.body.append('.IP %s\n' % |
2878 | -- self.deunicode(admonitionlabels.get(name, name))) |
2879 | -- |
2880 | - def visit_productionlist(self, node): |
2881 | - self.ensure_eol() |
2882 | - names = [] |
2883 | |
2884 | === modified file 'debian/patches/initialize_autodoc.diff' |
2885 | --- debian/patches/initialize_autodoc.diff 2012-02-05 19:33:59 +0000 |
2886 | +++ debian/patches/initialize_autodoc.diff 2013-02-16 12:25:23 +0000 |
2887 | @@ -1,4 +1,4 @@ |
2888 | -Description: Make sphinx-autogen initialize the sphinx.ext.autodoc module. |
2889 | +Description: make sphinx-autogen initialize the sphinx.ext.autodoc module |
2890 | Author: Jakub Wilk <jwilk@debian.org> |
2891 | Bug-Debian: http://bugs.debian.org/611078 |
2892 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/618 |
2893 | |
2894 | === modified file 'debian/patches/l10n_fixes.diff' |
2895 | --- debian/patches/l10n_fixes.diff 2012-11-27 19:20:44 +0000 |
2896 | +++ debian/patches/l10n_fixes.diff 2013-02-16 12:25:23 +0000 |
2897 | @@ -1,15 +1,14 @@ |
2898 | -Description: Fix l10n build of text containing footnotes |
2899 | - Based on initial patch by Cristophe Simonis and modifications by Takayuki Shimizukawa |
2900 | - (upstream pull request #86). |
2901 | -Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955/cant-build-html-with-footnotes-when-using |
2902 | +Description: fix l10n build of text containing footnotes or external links |
2903 | +Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955 |
2904 | Bug-Debian: http://bugs.debian.org/691719 |
2905 | +Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/b7b808e468 |
2906 | + and https://bitbucket.org/birkenfeld/sphinx/commits/870a91ca86 |
2907 | Author: Takayuki Shimizukawa <shimizukawa@gmail.com> |
2908 | -Last-Update: 2012-11-27 |
2909 | +Last-Update: 2012-12-08 |
2910 | |
2911 | -=== modified file 'sphinx/environment.py' |
2912 | --- a/sphinx/environment.py 2012-03-12 12:18:37 +0000 |
2913 | -+++ b/sphinx/environment.py 2012-11-27 14:05:36 +0000 |
2914 | -@@ -213,16 +213,44 @@ |
2915 | ++++ b/sphinx/environment.py 2012-12-07 12:22:33 +0000 |
2916 | +@@ -213,16 +213,61 @@ |
2917 | parser = RSTParser() |
2918 | |
2919 | for node, msg in extract_messages(self.document): |
2920 | @@ -26,31 +25,48 @@ |
2921 | if not isinstance(patch, nodes.paragraph): |
2922 | continue # skip for now |
2923 | + |
2924 | -+ footnote_refs = [r for r in node.children |
2925 | -+ if isinstance(r, nodes.footnote_reference) |
2926 | -+ and r.get('auto') == 1] |
2927 | -+ refs = [r for r in node.children if isinstance(r, nodes.reference)] |
2928 | -+ |
2929 | -+ for i, child in enumerate(patch.children): # update leaves |
2930 | -+ if isinstance(child, nodes.footnote_reference) \ |
2931 | -+ and child.get('auto') == 1: |
2932 | -+ # use original 'footnote_reference' object. |
2933 | -+ # this object is already registered in self.document.autofootnote_refs |
2934 | -+ patch.children[i] = footnote_refs.pop(0) |
2935 | -+ # Some duplicated footnote_reference in msgstr causes |
2936 | -+ # IndexError in .pop(0). That is invalid msgstr. |
2937 | -+ |
2938 | -+ elif isinstance(child, nodes.reference): |
2939 | -+ # reference should use original 'refname'. |
2940 | -+ # * reference target ".. _Python: ..." is not translatable. |
2941 | -+ # * section refname is not translatable. |
2942 | -+ # * inline reference "`Python <...>`_" has no 'refname'. |
2943 | -+ if refs and 'refname' in refs[0]: |
2944 | -+ refname = child['refname'] = refs.pop(0)['refname'] |
2945 | -+ self.document.refnames.setdefault( |
2946 | -+ refname, []).append(child) |
2947 | -+ # if number of reference nodes had been changed, that |
2948 | -+ # would often generate unknown link target warning. |
2949 | ++ # auto-numbered foot note reference should use original 'ids'. |
2950 | ++ is_autonumber_footnote_ref = lambda node: \ |
2951 | ++ isinstance(node, nodes.footnote_reference) \ |
2952 | ++ and node.get('auto') == 1 |
2953 | ++ old_foot_refs = node.traverse(is_autonumber_footnote_ref) |
2954 | ++ new_foot_refs = patch.traverse(is_autonumber_footnote_ref) |
2955 | ++ if len(old_foot_refs) != len(new_foot_refs): |
2956 | ++ env.warn_node('inconsistent footnote references in ' |
2957 | ++ 'translated message', node) |
2958 | ++ for old, new in zip(old_foot_refs, new_foot_refs): |
2959 | ++ new['ids'] = old['ids'] |
2960 | ++ self.document.autofootnote_refs.remove(old) |
2961 | ++ self.document.note_autofootnote_ref(new) |
2962 | ++ |
2963 | ++ # reference should use original 'refname'. |
2964 | ++ # * reference target ".. _Python: ..." is not translatable. |
2965 | ++ # * section refname is not translatable. |
2966 | ++ # * inline reference "`Python <...>`_" has no 'refname'. |
2967 | ++ is_refnamed_ref = lambda node: \ |
2968 | ++ isinstance(node, nodes.reference) \ |
2969 | ++ and 'refname' in node |
2970 | ++ old_refs = node.traverse(is_refnamed_ref) |
2971 | ++ new_refs = patch.traverse(is_refnamed_ref) |
2972 | ++ applied_refname_map = {} |
2973 | ++ if len(old_refs) != len(new_refs): |
2974 | ++ env.warn_node('inconsistent references in ' |
2975 | ++ 'translated message', node) |
2976 | ++ for new in new_refs: |
2977 | ++ if new['refname'] in applied_refname_map: |
2978 | ++ # 2nd appearance of the reference |
2979 | ++ new['refname'] = applied_refname_map[new['refname']] |
2980 | ++ elif old_refs: |
2981 | ++ # 1st appearance of the reference in old_refs |
2982 | ++ old = old_refs.pop(0) |
2983 | ++ refname = old['refname'] |
2984 | ++ new['refname'] = refname |
2985 | ++ applied_refname_map[new['refname']] = refname |
2986 | ++ else: |
2987 | ++ # the reference is not found in old_refs |
2988 | ++ applied_refname_map[new['refname']] = new['refname'] |
2989 | ++ |
2990 | ++ self.document.note_refname(new) |
2991 | + |
2992 | for child in patch.children: # update leaves |
2993 | child.parent = node |
2994 | |
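The footnote half of l10n_fixes.diff above pairs each auto-numbered footnote reference in the translated paragraph with its counterpart in the original, copying over the original `ids` and warning on a count mismatch. A hedged sketch of that pairing logic, using plain dicts in place of docutils nodes (the real code also re-registers the new nodes via `note_autofootnote_ref`):

```python
def transfer_footnote_ids(old_refs, new_refs, warn=print):
    """Give translated footnote references the ids of the originals.

    old_refs/new_refs are the auto-numbered footnote references found by
    traversing the original and the translated paragraph, in order.
    """
    if len(old_refs) != len(new_refs):
        warn('inconsistent footnote references in translated message')
    for old, new in zip(old_refs, new_refs):
        new['ids'] = old['ids']
    return new_refs
```

Because `zip` stops at the shorter list, a mismatched translation still gets as many ids transferred as possible after the warning.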
2995 | === added file 'debian/patches/manpage_writer_docutils_0.10_api.diff' |
2996 | --- debian/patches/manpage_writer_docutils_0.10_api.diff 1970-01-01 00:00:00 +0000 |
2997 | +++ debian/patches/manpage_writer_docutils_0.10_api.diff 2013-02-16 12:25:23 +0000 |
2998 | @@ -0,0 +1,31 @@ |
2999 | +Description: port manpage writer to docutils 0.10 API |
3000 | +Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/ffb145b7884f |
3001 | +Last-Update: 2012-12-18 |
3002 | + |
3003 | +--- a/sphinx/writers/manpage.py |
3004 | ++++ b/sphinx/writers/manpage.py |
3005 | +@@ -72,6 +72,11 @@ |
3006 | + # since self.append_header() is never called, need to do this here |
3007 | + self.body.append(MACRO_DEF) |
3008 | + |
3009 | ++ # Overwrite admonition label translations with our own |
3010 | ++ for label, translation in admonitionlabels.items(): |
3011 | ++ self.language.labels[label] = self.deunicode(translation) |
3012 | ++ |
3013 | ++ |
3014 | + # overwritten -- added quotes around all .TH arguments |
3015 | + def header(self): |
3016 | + tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" |
3017 | +@@ -193,12 +198,6 @@ |
3018 | + def depart_seealso(self, node): |
3019 | + self.depart_admonition(node) |
3020 | + |
3021 | +- # overwritten -- use our own label translations |
3022 | +- def visit_admonition(self, node, name=None): |
3023 | +- if name: |
3024 | +- self.body.append('.IP %s\n' % |
3025 | +- self.deunicode(admonitionlabels.get(name, name))) |
3026 | +- |
3027 | + def visit_productionlist(self, node): |
3028 | + self.ensure_eol() |
3029 | + names = [] |
3030 | |
3031 | === added file 'debian/patches/parallel_2to3.diff' |
3032 | --- debian/patches/parallel_2to3.diff 1970-01-01 00:00:00 +0000 |
3033 | +++ debian/patches/parallel_2to3.diff 2013-02-16 12:25:23 +0000 |
3034 | @@ -0,0 +1,31 @@ |
3035 | +Description: run 2to3 in parallel |
3036 | +Author: Jakub Wilk <jwilk@debian.org> |
3037 | +Forwarded: not-needed |
3038 | +Last-Update: 2012-12-18 |
3039 | + |
3040 | +--- a/setup.py |
3041 | ++++ b/setup.py |
3042 | +@@ -67,6 +67,23 @@ |
3043 | + # The uuid module is new in the stdlib in 2.5 |
3044 | + requires.append('uuid>=1.30') |
3045 | + |
3046 | ++if sys.version_info >= (3,): |
3047 | ++ |
3048 | ++ num_processes = 1 |
3049 | ++ for option in os.environ.get('DEB_BUILD_OPTIONS', '').split(): |
3050 | ++ if option.startswith('parallel='): |
3051 | ++ num_processes = int(option.split('=', 1)[1]) |
3052 | ++ if num_processes > 1: |
3053 | ++ import lib2to3.refactor |
3054 | ++ class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool): |
3055 | ++ def refactor(self, items, write=False, doctests_only=False): |
3056 | ++ return lib2to3.refactor.MultiprocessRefactoringTool.refactor( |
3057 | ++ self, items, |
3058 | ++ write=write, |
3059 | ++ doctests_only=doctests_only, |
3060 | ++ num_processes=num_processes |
3061 | ++ ) |
3062 | ++ lib2to3.refactor.RefactoringTool = RefactoringTool |
3063 | + |
3064 | + # Provide a "compile_catalog" command that also creates the translated |
3065 | + # JavaScript files if Babel is available. |
3066 | |
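The job-count detection used by parallel_2to3.diff above can be exercised on its own: Debian's `DEB_BUILD_OPTIONS` is a space-separated list of options, and `parallel=N` selects the number of 2to3 worker processes. A self-contained sketch (the helper name is made up; the patch inlines this loop in setup.py):

```python
import os


def parallel_jobs(environ=os.environ):
    """Parse DEB_BUILD_OPTIONS (e.g. 'nocheck parallel=4') for a job count."""
    num_processes = 1
    for option in environ.get('DEB_BUILD_OPTIONS', '').split():
        if option.startswith('parallel='):
            num_processes = int(option.split('=', 1)[1])
    return num_processes
```

When the result is greater than 1, the patch swaps in `lib2to3.refactor.MultiprocessRefactoringTool` so the conversion fans out across that many processes.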
3067 | === modified file 'debian/patches/python3_test_build_dir.diff' |
3068 | --- debian/patches/python3_test_build_dir.diff 2012-02-14 00:13:35 +0000 |
3069 | +++ debian/patches/python3_test_build_dir.diff 2013-02-16 12:25:23 +0000 |
3070 | @@ -1,4 +1,4 @@ |
3071 | -Description: Fix build directory for test runner. |
3072 | +Description: fix build directory for test runner |
3073 | Hardcode Python 3 build directory in the test runner to the one that Debian |
3074 | package uses. |
3075 | Author: Jakub Wilk <jwilk@debian.org> |
3076 | |
3077 | === modified file 'debian/patches/series' |
3078 | --- debian/patches/series 2012-11-27 19:20:44 +0000 |
3079 | +++ debian/patches/series 2013-02-16 12:25:23 +0000 |
3080 | @@ -8,8 +8,10 @@ |
3081 | fix_nepali_po.diff |
3082 | pygments_byte_strings.diff |
3083 | fix_shorthandoff.diff |
3084 | -fix_manpages_generation_with_new_docutils.diff |
3085 | test_build_html_rb.diff |
3086 | sort_stopwords.diff |
3087 | support_python_3.3.diff |
3088 | l10n_fixes.diff |
3089 | +manpage_writer_docutils_0.10_api.diff |
3090 | +parallel_2to3.diff |
3091 | +fix_literal_block_warning.diff |
3092 | |
3093 | === modified file 'debian/patches/show_more_stack_frames.diff' |
3094 | --- debian/patches/show_more_stack_frames.diff 2011-11-20 15:56:50 +0000 |
3095 | +++ debian/patches/show_more_stack_frames.diff 2013-02-16 12:25:23 +0000 |
3096 | @@ -1,4 +1,4 @@ |
3097 | -Description: When Sphinx crashes, show 10 stack frames (instead of a single one). |
3098 | +Description: when Sphinx crashes, show 10 stack frames (instead of a single one) |
3099 | Normally, when Sphinx crashes, it doesn't display full stack trace (as other |
3100 | Python application do by default), but only a single one; rest of the stack |
3101 | trace is stored into a temporary file. Such behaviour is undesired in some |
3102 | |
3103 | === modified file 'debian/patches/sort_stopwords.diff' |
3104 | --- debian/patches/sort_stopwords.diff 2012-11-27 19:20:44 +0000 |
3105 | +++ debian/patches/sort_stopwords.diff 2013-02-16 12:25:23 +0000 |
3106 | @@ -2,8 +2,8 @@ |
3107 | The order of stopwords in searchtools.js would be random if hash randomization |
3108 | was enabled, breaking dh_sphinxdoc. This patch makes the order deterministic. |
3109 | Author: Jakub Wilk <jwilk@debian.org> |
3110 | -Applied-Upstream: https://bitbucket.org/birkenfeld/sphinx/changeset/6cf5320e65 |
3111 | -Last-Update: 2012-11-10 |
3112 | +Forwarded: yes, https://bitbucket.org/birkenfeld/sphinx/commits/6cf5320e65 |
3113 | +Last-Update: 2012-12-08 |
3114 | |
3115 | --- a/sphinx/search/__init__.py |
3116 | +++ b/sphinx/search/__init__.py |
3117 | |
3118 | === modified file 'debian/patches/sphinxcontrib_namespace.diff' |
3119 | --- debian/patches/sphinxcontrib_namespace.diff 2012-02-03 13:52:49 +0000 |
3120 | +++ debian/patches/sphinxcontrib_namespace.diff 2013-02-16 12:25:23 +0000 |
3121 | @@ -1,4 +1,4 @@ |
3122 | -Description: Create namespace package ‘sphinxcontrib’. |
3123 | +Description: create namespace package ‘sphinxcontrib’ |
3124 | Create namespace package ‘sphinxcontrib’. This allows python-sphinxcontrib.* |
3125 | packages, both those using dh_python2 and those using python-support, to be |
3126 | co-importable. |
3127 | |
3128 | === modified file 'debian/patches/unversioned_grammar_pickle.diff' |
3129 | --- debian/patches/unversioned_grammar_pickle.diff 2011-09-28 17:20:22 +0000 |
3130 | +++ debian/patches/unversioned_grammar_pickle.diff 2013-02-16 12:25:23 +0000 |
3131 | @@ -1,4 +1,4 @@ |
3132 | -Description: Don't embed Python version in filename of grammar pickle. |
3133 | +Description: don't embed Python version in filename of grammar pickle |
3134 | Author: Jakub Wilk <jwilk@debian.org> |
3135 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/641 |
3136 | Forwarded: not-needed |
3137 | |
3138 | === modified file 'debian/rules' |
3139 | --- debian/rules 2012-11-27 19:20:44 +0000 |
3140 | +++ debian/rules 2013-02-16 12:25:23 +0000 |
3141 | @@ -23,6 +23,10 @@ |
3142 | python_all = pyversions -r | tr ' ' '\n' | xargs -t -I {} env {} |
3143 | python3_all = py3versions -r | tr ' ' '\n' | xargs -t -I {} env {} |
3144 | |
3145 | +ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" "" |
3146 | +msgfmt_options = -c |
3147 | +endif |
3148 | + |
3149 | build-arch: |
3150 | |
3151 | build-indep build: build-stamp |
3152 | @@ -37,12 +41,13 @@ |
3153 | python ./sphinx-build.py -b man doc build/man |
3154 | python setup.py build --build-lib build/py2/ |
3155 | python3 setup.py build --build-lib build/py3/ |
3156 | -ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS))) |
3157 | - find sphinx/locale/ -name '*.po' | xargs -t -I {} msgfmt -o /dev/null -c {} |
3158 | + find sphinx/locale/ -name '*.po' | sed -e 's/[.]po$$//' \ |
3159 | + | xargs -t -I {} msgfmt $(msgfmt_options) {}.po -o {}.mo |
3160 | +ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" "" |
3161 | $(python_all) tests/run.py --verbose --no-skip |
3162 | $(python3_all) tests/run.py --verbose |
3163 | cd build/py3/ && rm -rf tests/ sphinx/pycode/Grammar.pickle |
3164 | - xvfb-run --auto-servernum ./debian/jstest/run-tests |
3165 | + xvfb-run -a ./debian/jstest/run-tests build/html/ |
3166 | endif |
3167 | touch build-stamp |
3168 | |
3169 | |
3170 | === modified file 'debian/source/options' |
3171 | --- debian/source/options 2012-02-05 19:33:59 +0000 |
3172 | +++ debian/source/options 2013-02-16 12:25:23 +0000 |
3173 | @@ -1,1 +1,2 @@ |
3174 | extend-diff-ignore = "^[^/]*[.]egg-info/" |
3175 | +extend-diff-ignore = "^sphinx/locale/[^/]+/LC_MESSAGES/sphinx[.]mo$" |
3176 | |
3177 | === modified file 'debian/tests/control' |
3178 | --- debian/tests/control 2012-11-27 19:20:44 +0000 |
3179 | +++ debian/tests/control 2013-02-16 12:25:23 +0000 |
3180 | @@ -3,3 +3,6 @@ |
3181 | |
3182 | Tests: python3-sphinx |
3183 | Depends: python3-sphinx, python3-nose |
3184 | + |
3185 | +Tests: sphinx-doc |
3186 | +Depends: sphinx-doc, python, python-webkit, xvfb |
3187 | |
3188 | === added file 'debian/tests/sphinx-doc' |
3189 | --- debian/tests/sphinx-doc 1970-01-01 00:00:00 +0000 |
3190 | +++ debian/tests/sphinx-doc 2013-02-16 12:25:23 +0000 |
3191 | @@ -0,0 +1,20 @@ |
3192 | +#!/bin/sh |
3193 | +set -e -u |
3194 | +cp -r debian/jstest "$ADTTMP/" |
3195 | +cd "$ADTTMP" |
3196 | +for python in python python3 |
3197 | +do |
3198 | + for format in rst html |
3199 | + do |
3200 | + [ "$(readlink -f /usr/share/doc/$python-sphinx/$format)" = "$(readlink -f /usr/share/doc/sphinx-doc/$format)" ] |
3201 | + done |
3202 | +done |
3203 | +run_js_tests='jstest/run-tests /usr/share/doc/sphinx-doc/html/' |
3204 | +if [ -n "${DISPLAY:-}" ] |
3205 | +then |
3206 | + $run_js_tests |
3207 | +else |
3208 | + xvfb-run -a $run_js_tests |
3209 | +fi |
3210 | + |
3211 | +# vim:ts=4 sw=4 et |
3212 | |
3213 | === modified file 'setup.py' |
3214 | --- setup.py 2012-03-30 23:32:16 +0000 |
3215 | +++ setup.py 2013-02-16 12:25:23 +0000 |
3216 | @@ -67,6 +67,23 @@ |
3217 | # The uuid module is new in the stdlib in 2.5 |
3218 | requires.append('uuid>=1.30') |
3219 | |
3220 | +if sys.version_info >= (3,): |
3221 | + |
3222 | + num_processes = 1 |
3223 | + for option in os.environ.get('DEB_BUILD_OPTIONS', '').split(): |
3224 | + if option.startswith('parallel='): |
3225 | + num_processes = int(option.split('=', 1)[1]) |
3226 | + if num_processes > 1: |
3227 | + import lib2to3.refactor |
3228 | + class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool): |
3229 | + def refactor(self, items, write=False, doctests_only=False): |
3230 | + return lib2to3.refactor.MultiprocessRefactoringTool.refactor( |
3231 | + self, items, |
3232 | + write=write, |
3233 | + doctests_only=doctests_only, |
3234 | + num_processes=num_processes |
3235 | + ) |
3236 | + lib2to3.refactor.RefactoringTool = RefactoringTool |
3237 | |
3238 | # Provide a "compile_catalog" command that also creates the translated |
3239 | # JavaScript files if Babel is available. |
3240 | |
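Reviewer note on parallel_2to3.diff: the setup.py hunk above reads the `parallel=N` token from `DEB_BUILD_OPTIONS` and, when N > 1, swaps lib2to3's default RefactoringTool for its multiprocess variant. The option parsing on its own can be sketched like this (the function name is illustrative, not part of the patch):

```python
import os

def parallel_jobs(environ=os.environ):
    """Return the job count requested via DEB_BUILD_OPTIONS, defaulting to 1.

    Mirrors the loop in the setup.py hunk: the last parallel=N token wins.
    """
    jobs = 1
    for option in environ.get('DEB_BUILD_OPTIONS', '').split():
        if option.startswith('parallel='):
            jobs = int(option.split('=', 1)[1])
    return jobs
```

With `DEB_BUILD_OPTIONS="nocheck parallel=4"` this returns 4; with the variable unset it falls back to 1, matching the patch's single-process default.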
3241 | === modified file 'sphinx/environment.py' |
3242 | --- sphinx/environment.py 2012-11-27 19:20:44 +0000 |
3243 | +++ sphinx/environment.py 2013-02-16 12:25:23 +0000 |
3244 | @@ -218,6 +218,10 @@ |
3245 | if not msgstr or msgstr == msg: # as-of-yet untranslated |
3246 | continue |
3247 | |
3248 | + # Avoid "Literal block expected; none found." warnings. |
3249 | + if msgstr.strip().endswith('::'): |
3250 | + msgstr += '\n\n dummy literal' |
3251 | + |
3252 | patch = new_document(source, settings) |
3253 | parser.parse(msgstr, patch) |
3254 | patch = patch[0] |
3255 | @@ -225,31 +229,48 @@ |
3256 | if not isinstance(patch, nodes.paragraph): |
3257 | continue # skip for now |
3258 | |
3259 | - footnote_refs = [r for r in node.children |
3260 | - if isinstance(r, nodes.footnote_reference) |
3261 | - and r.get('auto') == 1] |
3262 | - refs = [r for r in node.children if isinstance(r, nodes.reference)] |
3263 | - |
3264 | - for i, child in enumerate(patch.children): # update leaves |
3265 | - if isinstance(child, nodes.footnote_reference) \ |
3266 | - and child.get('auto') == 1: |
3267 | - # use original 'footnote_reference' object. |
3268 | - # this object is already registered in self.document.autofootnote_refs |
3269 | - patch.children[i] = footnote_refs.pop(0) |
3270 | - # Some duplicated footnote_reference in msgstr causes |
3271 | - # IndexError in .pop(0). That is invalid msgstr. |
3272 | - |
3273 | - elif isinstance(child, nodes.reference): |
3274 | - # reference should use original 'refname'. |
3275 | - # * reference target ".. _Python: ..." is not translatable. |
3276 | - # * section refname is not translatable. |
3277 | - # * inline reference "`Python <...>`_" has no 'refname'. |
3278 | - if refs and 'refname' in refs[0]: |
3279 | - refname = child['refname'] = refs.pop(0)['refname'] |
3280 | - self.document.refnames.setdefault( |
3281 | - refname, []).append(child) |
3282 | - # if number of reference nodes had been changed, that |
3283 | - # would often generate unknown link target warning. |
3284 | + # auto-numbered foot note reference should use original 'ids'. |
3285 | + is_autonumber_footnote_ref = lambda node: \ |
3286 | + isinstance(node, nodes.footnote_reference) \ |
3287 | + and node.get('auto') == 1 |
3288 | + old_foot_refs = node.traverse(is_autonumber_footnote_ref) |
3289 | + new_foot_refs = patch.traverse(is_autonumber_footnote_ref) |
3290 | + if len(old_foot_refs) != len(new_foot_refs): |
3291 | + env.warn_node('inconsistent footnote references in ' |
3292 | + 'translated message', node) |
3293 | + for old, new in zip(old_foot_refs, new_foot_refs): |
3294 | + new['ids'] = old['ids'] |
3295 | + self.document.autofootnote_refs.remove(old) |
3296 | + self.document.note_autofootnote_ref(new) |
3297 | + |
3298 | + # reference should use original 'refname'. |
3299 | + # * reference target ".. _Python: ..." is not translatable. |
3300 | + # * section refname is not translatable. |
3301 | + # * inline reference "`Python <...>`_" has no 'refname'. |
3302 | + is_refnamed_ref = lambda node: \ |
3303 | + isinstance(node, nodes.reference) \ |
3304 | + and 'refname' in node |
3305 | + old_refs = node.traverse(is_refnamed_ref) |
3306 | + new_refs = patch.traverse(is_refnamed_ref) |
3307 | + applied_refname_map = {} |
3308 | + if len(old_refs) != len(new_refs): |
3309 | + env.warn_node('inconsistent references in ' |
3310 | + 'translated message', node) |
3311 | + for new in new_refs: |
3312 | + if new['refname'] in applied_refname_map: |
3313 | + # 2nd appearance of the reference |
3314 | + new['refname'] = applied_refname_map[new['refname']] |
3315 | + elif old_refs: |
3316 | + # 1st appearance of the reference in old_refs |
3317 | + old = old_refs.pop(0) |
3318 | + refname = old['refname'] |
3319 | + new['refname'] = refname |
3320 | + applied_refname_map[new['refname']] = refname |
3321 | + else: |
3322 | + # the reference is not found in old_refs |
3323 | + applied_refname_map[new['refname']] = new['refname'] |
3324 | + |
3325 | + self.document.note_refname(new) |
3326 | |
3327 | for child in patch.children: # update leaves |
3328 | child.parent = node |
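Reviewer note on the first environment.py hunk: that is the minor upstream backport mentioned in the description (fix_literal_block_warning.diff). A translated message ending in `::` announces a literal block that the message catalog itself cannot carry, so a dummy block is appended before docutils parses it. As a standalone sketch (in the patch this runs inline, without a helper function):

```python
def pad_literal_marker(msgstr):
    # A trailing '::' promises a literal block; appending a dummy one keeps
    # docutils from warning "Literal block expected; none found."
    if msgstr.strip().endswith('::'):
        msgstr += '\n\n   dummy literal'
    return msgstr
```

The dummy block is discarded later when only the first parsed paragraph is patched back into the document tree, so it never reaches the output.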
Good work! Uploaded.