Merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1 into lp:ubuntu/raring/sphinx
- Raring (13.04)
- 1.1.3+dfsg-7ubuntu1
- Merge into raring
Status: Merged
Merge reported by: Dmitry Shachnev
Merged at revision: not available
Proposed branch: lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1
Merge into: lp:ubuntu/raring/sphinx
Diff against target: 3328 lines (+2631/-451), 25 files modified
- .pc/applied-patches (+3/-1)
- .pc/fix_literal_block_warning.diff/sphinx/environment.py (+1807/-0)
- .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py (+0/-345)
- .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py (+345/-0)
- .pc/parallel_2to3.diff/setup.py (+209/-0)
- debian/changelog (+33/-2)
- debian/jstest/run-tests (+1/-1)
- debian/patches/fix_literal_block_warning.diff (+17/-0)
- debian/patches/fix_manpages_generation_with_new_docutils.diff (+0/-33)
- debian/patches/initialize_autodoc.diff (+1/-1)
- debian/patches/l10n_fixes.diff (+49/-33)
- debian/patches/manpage_writer_docutils_0.10_api.diff (+31/-0)
- debian/patches/parallel_2to3.diff (+31/-0)
- debian/patches/python3_test_build_dir.diff (+1/-1)
- debian/patches/series (+3/-1)
- debian/patches/show_more_stack_frames.diff (+1/-1)
- debian/patches/sort_stopwords.diff (+2/-2)
- debian/patches/sphinxcontrib_namespace.diff (+1/-1)
- debian/patches/unversioned_grammar_pickle.diff (+1/-1)
- debian/rules (+8/-3)
- debian/source/options (+1/-0)
- debian/tests/control (+3/-0)
- debian/tests/sphinx-doc (+20/-0)
- setup.py (+17/-0)
- sphinx/environment.py (+46/-25)
To merge this branch: bzr merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Daniel Holbach (community) | | | Approve
Ubuntu branches | | | Pending

Review via email: mp+148869@code.launchpad.net
Commit message
Description of the change
It looks like upstream is not going to release 1.3 anytime soon, so this branch merges some recent Debian changes and backports a minor fix from upstream.
This also includes a "final" version of the patch for bug 1068493, which will make it possible to SRU it.
sphinx (1.1.3+dfsg-7ubuntu1)
* Merge with Debian experimental. Remaining Ubuntu changes:
- Switch to dh_python2.
- debian/rules: export NO_PKG_MANGLE=1 so that translations are not
stripped.
- debian/control: Drop the build-dependency on python-whoosh.
- debian/control: Add "XS-Testsuite: autopkgtest" header.
* debian/patches/fix_manpages_generation_with_new_docutils.diff:
dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff.
* debian/patches/fix_literal_block_warning.diff: backport upstream fix for
false-positive "Literal block expected; none found." warnings when
building l10n projects.
-- Dmitry Shachnev <email address hidden> Sat, 16 Feb 2013 14:51:12 +0400
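The "XS-Testsuite: autopkgtest" header mentioned above hooks the package into DEP-8 testing, and the diff adds a debian/tests/control (3 lines) plus a debian/tests/sphinx-doc test script (20 lines). A minimal control stanza of that shape might look like the following sketch; the exact field values are illustrative, not copied from the branch:

```
Tests: sphinx-doc
Depends: @
```

Here "@" is the DEP-8 shorthand for depending on all binary packages built from the source, so the test runs against the packages as actually built.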
42. By Dmitry Shachnev

Releasing version 1.1.3+dfsg-7ubuntu1.
Daniel Holbach (dholbach) wrote:
Unfortunately it FTBFS because of a segfault for some reason: https:/
Dmitry Shachnev (mitya57) wrote:
Segfaulting again? :-(
/me looks at python-webkit suspiciously, but will ask buildd admins to
investigate the problem tomorrow.
Dmitry Shachnev (mitya57) wrote:
[Commenting on right MP this time]
I can't reproduce the crash in an up-to-date raring pbuilder chroot. William Grant suggested that I try a chroot from https:/
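The sphinx/environment.py changes in the diff below sit on top of Sphinx's pickle-based environment cache, which rejects stale caches via the ENV_VERSION check visible in the diff. A minimal sketch of that version-check pattern, using a toy class rather than Sphinx's real file-based loader:

```python
import pickle

# Bumped whenever environment attributes change, so caches written
# by an older layout are rejected instead of deserialized wrongly.
ENV_VERSION = 41


class BuildEnvironment(object):
    """Stripped-down stand-in for sphinx.environment.BuildEnvironment."""

    def __init__(self):
        self.version = ENV_VERSION


def frompickle(data):
    """Load a cached environment, refusing mismatched versions."""
    env = pickle.loads(data)
    if env.version != ENV_VERSION:
        raise IOError('env version not current')
    return env


cached = pickle.dumps(BuildEnvironment())
restored = frompickle(cached)
print(restored.version)  # -> 41
```

When the check fails, Sphinx simply rebuilds the environment from scratch, which is why version bumps are cheap but safe.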
Preview Diff
1 | === modified file '.pc/applied-patches' | |||
2 | --- .pc/applied-patches 2012-11-27 19:20:44 +0000 | |||
3 | +++ .pc/applied-patches 2013-02-16 12:25:23 +0000 | |||
4 | @@ -8,8 +8,10 @@ | |||
5 | 8 | fix_nepali_po.diff | 8 | fix_nepali_po.diff |
6 | 9 | pygments_byte_strings.diff | 9 | pygments_byte_strings.diff |
7 | 10 | fix_shorthandoff.diff | 10 | fix_shorthandoff.diff |
8 | 11 | fix_manpages_generation_with_new_docutils.diff | ||
9 | 12 | test_build_html_rb.diff | 11 | test_build_html_rb.diff |
10 | 13 | sort_stopwords.diff | 12 | sort_stopwords.diff |
11 | 14 | support_python_3.3.diff | 13 | support_python_3.3.diff |
12 | 15 | l10n_fixes.diff | 14 | l10n_fixes.diff |
13 | 15 | manpage_writer_docutils_0.10_api.diff | ||
14 | 16 | parallel_2to3.diff | ||
15 | 17 | fix_literal_block_warning.diff | ||
16 | 16 | 18 | ||
17 | === added directory '.pc/fix_literal_block_warning.diff' | |||
18 | === added directory '.pc/fix_literal_block_warning.diff/sphinx' | |||
19 | === added file '.pc/fix_literal_block_warning.diff/sphinx/environment.py' | |||
20 | --- .pc/fix_literal_block_warning.diff/sphinx/environment.py 1970-01-01 00:00:00 +0000 | |||
21 | +++ .pc/fix_literal_block_warning.diff/sphinx/environment.py 2013-02-16 12:25:23 +0000 | |||
22 | @@ -0,0 +1,1807 @@ | |||
23 | 1 | # -*- coding: utf-8 -*- | ||
24 | 2 | """ | ||
25 | 3 | sphinx.environment | ||
26 | 4 | ~~~~~~~~~~~~~~~~~~ | ||
27 | 5 | |||
28 | 6 | Global creation environment. | ||
29 | 7 | |||
30 | 8 | :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. | ||
31 | 9 | :license: BSD, see LICENSE for details. | ||
32 | 10 | """ | ||
33 | 11 | |||
34 | 12 | import re | ||
35 | 13 | import os | ||
36 | 14 | import sys | ||
37 | 15 | import time | ||
38 | 16 | import types | ||
39 | 17 | import codecs | ||
40 | 18 | import imghdr | ||
41 | 19 | import string | ||
42 | 20 | import unicodedata | ||
43 | 21 | import cPickle as pickle | ||
44 | 22 | from os import path | ||
45 | 23 | from glob import glob | ||
46 | 24 | from itertools import izip, groupby | ||
47 | 25 | |||
48 | 26 | from docutils import nodes | ||
49 | 27 | from docutils.io import FileInput, NullOutput | ||
50 | 28 | from docutils.core import Publisher | ||
51 | 29 | from docutils.utils import Reporter, relative_path, new_document, \ | ||
52 | 30 | get_source_line | ||
53 | 31 | from docutils.readers import standalone | ||
54 | 32 | from docutils.parsers.rst import roles, directives, Parser as RSTParser | ||
55 | 33 | from docutils.parsers.rst.languages import en as english | ||
56 | 34 | from docutils.parsers.rst.directives.html import MetaBody | ||
57 | 35 | from docutils.writers import UnfilteredWriter | ||
58 | 36 | from docutils.transforms import Transform | ||
59 | 37 | from docutils.transforms.parts import ContentsFilter | ||
60 | 38 | |||
61 | 39 | from sphinx import addnodes | ||
62 | 40 | from sphinx.util import url_re, get_matching_docs, docname_join, split_into, \ | ||
63 | 41 | FilenameUniqDict | ||
64 | 42 | from sphinx.util.nodes import clean_astext, make_refnode, extract_messages, \ | ||
65 | 43 | WarningStream | ||
66 | 44 | from sphinx.util.osutil import movefile, SEP, ustrftime, find_catalog | ||
67 | 45 | from sphinx.util.matching import compile_matchers | ||
68 | 46 | from sphinx.util.pycompat import all, class_types | ||
69 | 47 | from sphinx.util.websupport import is_commentable | ||
70 | 48 | from sphinx.errors import SphinxError, ExtensionError | ||
71 | 49 | from sphinx.locale import _, init as init_locale | ||
72 | 50 | from sphinx.versioning import add_uids, merge_doctrees | ||
73 | 51 | |||
74 | 52 | fs_encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() | ||
75 | 53 | |||
76 | 54 | orig_role_function = roles.role | ||
77 | 55 | orig_directive_function = directives.directive | ||
78 | 56 | |||
79 | 57 | class ElementLookupError(Exception): pass | ||
80 | 58 | |||
81 | 59 | |||
82 | 60 | default_settings = { | ||
83 | 61 | 'embed_stylesheet': False, | ||
84 | 62 | 'cloak_email_addresses': True, | ||
85 | 63 | 'pep_base_url': 'http://www.python.org/dev/peps/', | ||
86 | 64 | 'rfc_base_url': 'http://tools.ietf.org/html/', | ||
87 | 65 | 'input_encoding': 'utf-8-sig', | ||
88 | 66 | 'doctitle_xform': False, | ||
89 | 67 | 'sectsubtitle_xform': False, | ||
90 | 68 | 'halt_level': 5, | ||
91 | 69 | } | ||
92 | 70 | |||
93 | 71 | # This is increased every time an environment attribute is added | ||
94 | 72 | # or changed to properly invalidate pickle files. | ||
95 | 73 | ENV_VERSION = 41 | ||
96 | 74 | |||
97 | 75 | |||
98 | 76 | default_substitutions = set([ | ||
99 | 77 | 'version', | ||
100 | 78 | 'release', | ||
101 | 79 | 'today', | ||
102 | 80 | ]) | ||
103 | 81 | |||
104 | 82 | dummy_reporter = Reporter('', 4, 4) | ||
105 | 83 | |||
106 | 84 | versioning_conditions = { | ||
107 | 85 | 'none': False, | ||
108 | 86 | 'text': nodes.TextElement, | ||
109 | 87 | 'commentable': is_commentable, | ||
110 | 88 | } | ||
111 | 89 | |||
112 | 90 | |||
113 | 91 | class NoUri(Exception): | ||
114 | 92 | """Raised by get_relative_uri if there is no URI available.""" | ||
115 | 93 | pass | ||
116 | 94 | |||
117 | 95 | |||
118 | 96 | class DefaultSubstitutions(Transform): | ||
119 | 97 | """ | ||
120 | 98 | Replace some substitutions if they aren't defined in the document. | ||
121 | 99 | """ | ||
122 | 100 | # run before the default Substitutions | ||
123 | 101 | default_priority = 210 | ||
124 | 102 | |||
125 | 103 | def apply(self): | ||
126 | 104 | config = self.document.settings.env.config | ||
127 | 105 | # only handle those not otherwise defined in the document | ||
128 | 106 | to_handle = default_substitutions - set(self.document.substitution_defs) | ||
129 | 107 | for ref in self.document.traverse(nodes.substitution_reference): | ||
130 | 108 | refname = ref['refname'] | ||
131 | 109 | if refname in to_handle: | ||
132 | 110 | text = config[refname] | ||
133 | 111 | if refname == 'today' and not text: | ||
134 | 112 | # special handling: can also specify a strftime format | ||
135 | 113 | text = ustrftime(config.today_fmt or _('%B %d, %Y')) | ||
136 | 114 | ref.replace_self(nodes.Text(text, text)) | ||
137 | 115 | |||
138 | 116 | |||
139 | 117 | class MoveModuleTargets(Transform): | ||
140 | 118 | """ | ||
141 | 119 | Move module targets that are the first thing in a section to the section | ||
142 | 120 | title. | ||
143 | 121 | |||
144 | 122 | XXX Python specific | ||
145 | 123 | """ | ||
146 | 124 | default_priority = 210 | ||
147 | 125 | |||
148 | 126 | def apply(self): | ||
149 | 127 | for node in self.document.traverse(nodes.target): | ||
150 | 128 | if not node['ids']: | ||
151 | 129 | continue | ||
152 | 130 | if (node.has_key('ismod') and | ||
153 | 131 | node.parent.__class__ is nodes.section and | ||
154 | 132 | # index 0 is the section title node | ||
155 | 133 | node.parent.index(node) == 1): | ||
156 | 134 | node.parent['ids'][0:0] = node['ids'] | ||
157 | 135 | node.parent.remove(node) | ||
158 | 136 | |||
159 | 137 | |||
160 | 138 | class HandleCodeBlocks(Transform): | ||
161 | 139 | """ | ||
162 | 140 | Several code block related transformations. | ||
163 | 141 | """ | ||
164 | 142 | default_priority = 210 | ||
165 | 143 | |||
166 | 144 | def apply(self): | ||
167 | 145 | # move doctest blocks out of blockquotes | ||
168 | 146 | for node in self.document.traverse(nodes.block_quote): | ||
169 | 147 | if all(isinstance(child, nodes.doctest_block) for child | ||
170 | 148 | in node.children): | ||
171 | 149 | node.replace_self(node.children) | ||
172 | 150 | # combine successive doctest blocks | ||
173 | 151 | #for node in self.document.traverse(nodes.doctest_block): | ||
174 | 152 | # if node not in node.parent.children: | ||
175 | 153 | # continue | ||
176 | 154 | # parindex = node.parent.index(node) | ||
177 | 155 | # while len(node.parent) > parindex+1 and \ | ||
178 | 156 | # isinstance(node.parent[parindex+1], nodes.doctest_block): | ||
179 | 157 | # node[0] = nodes.Text(node[0] + '\n\n' + | ||
180 | 158 | # node.parent[parindex+1][0]) | ||
181 | 159 | # del node.parent[parindex+1] | ||
182 | 160 | |||
183 | 161 | |||
184 | 162 | class SortIds(Transform): | ||
185 | 163 | """ | ||
186 | 164 | Sort secion IDs so that the "id[0-9]+" one comes last. | ||
187 | 165 | """ | ||
188 | 166 | default_priority = 261 | ||
189 | 167 | |||
190 | 168 | def apply(self): | ||
191 | 169 | for node in self.document.traverse(nodes.section): | ||
192 | 170 | if len(node['ids']) > 1 and node['ids'][0].startswith('id'): | ||
193 | 171 | node['ids'] = node['ids'][1:] + [node['ids'][0]] | ||
194 | 172 | |||
195 | 173 | |||
196 | 174 | class CitationReferences(Transform): | ||
197 | 175 | """ | ||
198 | 176 | Replace citation references by pending_xref nodes before the default | ||
199 | 177 | docutils transform tries to resolve them. | ||
200 | 178 | """ | ||
201 | 179 | default_priority = 619 | ||
202 | 180 | |||
203 | 181 | def apply(self): | ||
204 | 182 | for citnode in self.document.traverse(nodes.citation_reference): | ||
205 | 183 | cittext = citnode.astext() | ||
206 | 184 | refnode = addnodes.pending_xref(cittext, reftype='citation', | ||
207 | 185 | reftarget=cittext, refwarn=True) | ||
208 | 186 | refnode.line = citnode.line or citnode.parent.line | ||
209 | 187 | refnode += nodes.Text('[' + cittext + ']') | ||
210 | 188 | citnode.parent.replace(citnode, refnode) | ||
211 | 189 | |||
212 | 190 | |||
213 | 191 | class Locale(Transform): | ||
214 | 192 | """ | ||
215 | 193 | Replace translatable nodes with their translated doctree. | ||
216 | 194 | """ | ||
217 | 195 | default_priority = 0 | ||
218 | 196 | def apply(self): | ||
219 | 197 | env = self.document.settings.env | ||
220 | 198 | settings, source = self.document.settings, self.document['source'] | ||
221 | 199 | # XXX check if this is reliable | ||
222 | 200 | assert source.startswith(env.srcdir) | ||
223 | 201 | docname = path.splitext(relative_path(env.srcdir, source))[0] | ||
224 | 202 | textdomain = find_catalog(docname, | ||
225 | 203 | self.document.settings.gettext_compact) | ||
226 | 204 | |||
227 | 205 | # fetch translations | ||
228 | 206 | dirs = [path.join(env.srcdir, directory) | ||
229 | 207 | for directory in env.config.locale_dirs] | ||
230 | 208 | catalog, has_catalog = init_locale(dirs, env.config.language, | ||
231 | 209 | textdomain) | ||
232 | 210 | if not has_catalog: | ||
233 | 211 | return | ||
234 | 212 | |||
235 | 213 | parser = RSTParser() | ||
236 | 214 | |||
237 | 215 | for node, msg in extract_messages(self.document): | ||
238 | 216 | msgstr = catalog.gettext(msg) | ||
239 | 217 | # XXX add marker to untranslated parts | ||
240 | 218 | if not msgstr or msgstr == msg: # as-of-yet untranslated | ||
241 | 219 | continue | ||
242 | 220 | |||
243 | 221 | patch = new_document(source, settings) | ||
244 | 222 | parser.parse(msgstr, patch) | ||
245 | 223 | patch = patch[0] | ||
246 | 224 | # XXX doctest and other block markup | ||
247 | 225 | if not isinstance(patch, nodes.paragraph): | ||
248 | 226 | continue # skip for now | ||
249 | 227 | |||
250 | 228 | # auto-numbered foot note reference should use original 'ids'. | ||
251 | 229 | is_autonumber_footnote_ref = lambda node: \ | ||
252 | 230 | isinstance(node, nodes.footnote_reference) \ | ||
253 | 231 | and node.get('auto') == 1 | ||
254 | 232 | old_foot_refs = node.traverse(is_autonumber_footnote_ref) | ||
255 | 233 | new_foot_refs = patch.traverse(is_autonumber_footnote_ref) | ||
256 | 234 | if len(old_foot_refs) != len(new_foot_refs): | ||
257 | 235 | env.warn_node('inconsistent footnote references in ' | ||
258 | 236 | 'translated message', node) | ||
259 | 237 | for old, new in zip(old_foot_refs, new_foot_refs): | ||
260 | 238 | new['ids'] = old['ids'] | ||
261 | 239 | self.document.autofootnote_refs.remove(old) | ||
262 | 240 | self.document.note_autofootnote_ref(new) | ||
263 | 241 | |||
264 | 242 | # reference should use original 'refname'. | ||
265 | 243 | # * reference target ".. _Python: ..." is not translatable. | ||
266 | 244 | # * section refname is not translatable. | ||
267 | 245 | # * inline reference "`Python <...>`_" has no 'refname'. | ||
268 | 246 | is_refnamed_ref = lambda node: \ | ||
269 | 247 | isinstance(node, nodes.reference) \ | ||
270 | 248 | and 'refname' in node | ||
271 | 249 | old_refs = node.traverse(is_refnamed_ref) | ||
272 | 250 | new_refs = patch.traverse(is_refnamed_ref) | ||
273 | 251 | applied_refname_map = {} | ||
274 | 252 | if len(old_refs) != len(new_refs): | ||
275 | 253 | env.warn_node('inconsistent references in ' | ||
276 | 254 | 'translated message', node) | ||
277 | 255 | for new in new_refs: | ||
278 | 256 | if new['refname'] in applied_refname_map: | ||
279 | 257 | # 2nd appearance of the reference | ||
280 | 258 | new['refname'] = applied_refname_map[new['refname']] | ||
281 | 259 | elif old_refs: | ||
282 | 260 | # 1st appearance of the reference in old_refs | ||
283 | 261 | old = old_refs.pop(0) | ||
284 | 262 | refname = old['refname'] | ||
285 | 263 | new['refname'] = refname | ||
286 | 264 | applied_refname_map[new['refname']] = refname | ||
287 | 265 | else: | ||
288 | 266 | # the reference is not found in old_refs | ||
289 | 267 | applied_refname_map[new['refname']] = new['refname'] | ||
290 | 268 | |||
291 | 269 | self.document.note_refname(new) | ||
292 | 270 | |||
293 | 271 | for child in patch.children: # update leaves | ||
294 | 272 | child.parent = node | ||
295 | 273 | node.children = patch.children | ||
296 | 274 | |||
297 | 275 | |||
298 | 276 | class SphinxStandaloneReader(standalone.Reader): | ||
299 | 277 | """ | ||
300 | 278 | Add our own transforms. | ||
301 | 279 | """ | ||
302 | 280 | transforms = [Locale, CitationReferences, DefaultSubstitutions, | ||
303 | 281 | MoveModuleTargets, HandleCodeBlocks, SortIds] | ||
304 | 282 | |||
305 | 283 | def get_transforms(self): | ||
306 | 284 | return standalone.Reader.get_transforms(self) + self.transforms | ||
307 | 285 | |||
308 | 286 | |||
309 | 287 | class SphinxDummyWriter(UnfilteredWriter): | ||
310 | 288 | supported = ('html',) # needed to keep "meta" nodes | ||
311 | 289 | |||
312 | 290 | def translate(self): | ||
313 | 291 | pass | ||
314 | 292 | |||
315 | 293 | |||
316 | 294 | class SphinxContentsFilter(ContentsFilter): | ||
317 | 295 | """ | ||
318 | 296 | Used with BuildEnvironment.add_toc_from() to discard cross-file links | ||
319 | 297 | within table-of-contents link nodes. | ||
320 | 298 | """ | ||
321 | 299 | def visit_pending_xref(self, node): | ||
322 | 300 | text = node.astext() | ||
323 | 301 | self.parent.append(nodes.literal(text, text)) | ||
324 | 302 | raise nodes.SkipNode | ||
325 | 303 | |||
326 | 304 | def visit_image(self, node): | ||
327 | 305 | raise nodes.SkipNode | ||
328 | 306 | |||
329 | 307 | |||
330 | 308 | class BuildEnvironment: | ||
331 | 309 | """ | ||
332 | 310 | The environment in which the ReST files are translated. | ||
333 | 311 | Stores an inventory of cross-file targets and provides doctree | ||
334 | 312 | transformations to resolve links to them. | ||
335 | 313 | """ | ||
336 | 314 | |||
337 | 315 | # --------- ENVIRONMENT PERSISTENCE ---------------------------------------- | ||
338 | 316 | |||
339 | 317 | @staticmethod | ||
340 | 318 | def frompickle(config, filename): | ||
341 | 319 | picklefile = open(filename, 'rb') | ||
342 | 320 | try: | ||
343 | 321 | env = pickle.load(picklefile) | ||
344 | 322 | finally: | ||
345 | 323 | picklefile.close() | ||
346 | 324 | if env.version != ENV_VERSION: | ||
347 | 325 | raise IOError('env version not current') | ||
348 | 326 | env.config.values = config.values | ||
349 | 327 | return env | ||
350 | 328 | |||
351 | 329 | def topickle(self, filename): | ||
352 | 330 | # remove unpicklable attributes | ||
353 | 331 | warnfunc = self._warnfunc | ||
354 | 332 | self.set_warnfunc(None) | ||
355 | 333 | values = self.config.values | ||
356 | 334 | del self.config.values | ||
357 | 335 | domains = self.domains | ||
358 | 336 | del self.domains | ||
359 | 337 | # first write to a temporary file, so that if dumping fails, | ||
360 | 338 | # the existing environment won't be overwritten | ||
361 | 339 | picklefile = open(filename + '.tmp', 'wb') | ||
362 | 340 | # remove potentially pickling-problematic values from config | ||
363 | 341 | for key, val in vars(self.config).items(): | ||
364 | 342 | if key.startswith('_') or \ | ||
365 | 343 | isinstance(val, types.ModuleType) or \ | ||
366 | 344 | isinstance(val, types.FunctionType) or \ | ||
367 | 345 | isinstance(val, class_types): | ||
368 | 346 | del self.config[key] | ||
369 | 347 | try: | ||
370 | 348 | pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL) | ||
371 | 349 | finally: | ||
372 | 350 | picklefile.close() | ||
373 | 351 | movefile(filename + '.tmp', filename) | ||
374 | 352 | # reset attributes | ||
375 | 353 | self.domains = domains | ||
376 | 354 | self.config.values = values | ||
377 | 355 | self.set_warnfunc(warnfunc) | ||
378 | 356 | |||
379 | 357 | # --------- ENVIRONMENT INITIALIZATION ------------------------------------- | ||
380 | 358 | |||
381 | 359 | def __init__(self, srcdir, doctreedir, config): | ||
382 | 360 | self.doctreedir = doctreedir | ||
383 | 361 | self.srcdir = srcdir | ||
384 | 362 | self.config = config | ||
385 | 363 | |||
386 | 364 | # the method of doctree versioning; see set_versioning_method | ||
387 | 365 | self.versioning_condition = None | ||
388 | 366 | |||
389 | 367 | # the application object; only set while update() runs | ||
390 | 368 | self.app = None | ||
391 | 369 | |||
392 | 370 | # all the registered domains, set by the application | ||
393 | 371 | self.domains = {} | ||
394 | 372 | |||
395 | 373 | # the docutils settings for building | ||
396 | 374 | self.settings = default_settings.copy() | ||
397 | 375 | self.settings['env'] = self | ||
398 | 376 | |||
399 | 377 | # the function to write warning messages with | ||
400 | 378 | self._warnfunc = None | ||
401 | 379 | |||
402 | 380 | # this is to invalidate old pickles | ||
403 | 381 | self.version = ENV_VERSION | ||
404 | 382 | |||
405 | 383 | # make this a set for faster testing | ||
406 | 384 | self._nitpick_ignore = set(self.config.nitpick_ignore) | ||
407 | 385 | |||
408 | 386 | # All "docnames" here are /-separated and relative and exclude | ||
409 | 387 | # the source suffix. | ||
410 | 388 | |||
411 | 389 | self.found_docs = set() # contains all existing docnames | ||
412 | 390 | self.all_docs = {} # docname -> mtime at the time of build | ||
413 | 391 | # contains all built docnames | ||
414 | 392 | self.dependencies = {} # docname -> set of dependent file | ||
415 | 393 | # names, relative to documentation root | ||
416 | 394 | self.reread_always = set() # docnames to re-read unconditionally on | ||
417 | 395 | # next build | ||
418 | 396 | |||
419 | 397 | # File metadata | ||
420 | 398 | self.metadata = {} # docname -> dict of metadata items | ||
421 | 399 | |||
422 | 400 | # TOC inventory | ||
423 | 401 | self.titles = {} # docname -> title node | ||
424 | 402 | self.longtitles = {} # docname -> title node; only different if | ||
425 | 403 | # set differently with title directive | ||
426 | 404 | self.tocs = {} # docname -> table of contents nodetree | ||
427 | 405 | self.toc_num_entries = {} # docname -> number of real entries | ||
428 | 406 | # used to determine when to show the TOC | ||
429 | 407 | # in a sidebar (don't show if it's only one item) | ||
430 | 408 | self.toc_secnumbers = {} # docname -> dict of sectionid -> number | ||
431 | 409 | |||
432 | 410 | self.toctree_includes = {} # docname -> list of toctree includefiles | ||
433 | 411 | self.files_to_rebuild = {} # docname -> set of files | ||
434 | 412 | # (containing its TOCs) to rebuild too | ||
435 | 413 | self.glob_toctrees = set() # docnames that have :glob: toctrees | ||
436 | 414 | self.numbered_toctrees = set() # docnames that have :numbered: toctrees | ||
437 | 415 | |||
438 | 416 | # domain-specific inventories, here to be pickled | ||
439 | 417 | self.domaindata = {} # domainname -> domain-specific dict | ||
440 | 418 | |||
441 | 419 | # Other inventories | ||
442 | 420 | self.citations = {} # citation name -> docname, labelid | ||
443 | 421 | self.indexentries = {} # docname -> list of | ||
444 | 422 | # (type, string, target, aliasname) | ||
445 | 423 | self.versionchanges = {} # version -> list of (type, docname, | ||
446 | 424 | # lineno, module, descname, content) | ||
447 | 425 | |||
448 | 426 | # these map absolute path -> (docnames, unique filename) | ||
449 | 427 | self.images = FilenameUniqDict() | ||
450 | 428 | self.dlfiles = FilenameUniqDict() | ||
451 | 429 | |||
452 | 430 | # temporary data storage while reading a document | ||
453 | 431 | self.temp_data = {} | ||
454 | 432 | |||
455 | 433 | def set_warnfunc(self, func): | ||
456 | 434 | self._warnfunc = func | ||
457 | 435 | self.settings['warning_stream'] = WarningStream(func) | ||
458 | 436 | |||
459 | 437 | def set_versioning_method(self, method): | ||
460 | 438 | """This sets the doctree versioning method for this environment. | ||
461 | 439 | |||
462 | 440 | Versioning methods are a builder property; only builders with the same | ||
463 | 441 | versioning method can share the same doctree directory. Therefore, we | ||
464 | 442 | raise an exception if the user tries to use an environment with an | ||
465 | 443 | incompatible versioning method. | ||
466 | 444 | """ | ||
467 | 445 | if method not in versioning_conditions: | ||
468 | 446 | raise ValueError('invalid versioning method: %r' % method) | ||
469 | 447 | condition = versioning_conditions[method] | ||
470 | 448 | if self.versioning_condition not in (None, condition): | ||
471 | 449 | raise SphinxError('This environment is incompatible with the ' | ||
472 | 450 | 'selected builder, please choose another ' | ||
473 | 451 | 'doctree directory.') | ||
474 | 452 | self.versioning_condition = condition | ||
475 | 453 | |||
476 | 454 | def warn(self, docname, msg, lineno=None): | ||
477 | 455 | # strange argument order is due to backwards compatibility | ||
478 | 456 | self._warnfunc(msg, (docname, lineno)) | ||
479 | 457 | |||
480 | 458 | def warn_node(self, msg, node): | ||
481 | 459 | self._warnfunc(msg, '%s:%s' % get_source_line(node)) | ||
482 | 460 | |||
483 | 461 | def clear_doc(self, docname): | ||
484 | 462 | """Remove all traces of a source file in the inventory.""" | ||
485 | 463 | if docname in self.all_docs: | ||
486 | 464 | self.all_docs.pop(docname, None) | ||
487 | 465 | self.reread_always.discard(docname) | ||
488 | 466 | self.metadata.pop(docname, None) | ||
489 | 467 | self.dependencies.pop(docname, None) | ||
490 | 468 | self.titles.pop(docname, None) | ||
491 | 469 | self.longtitles.pop(docname, None) | ||
492 | 470 | self.tocs.pop(docname, None) | ||
493 | 471 | self.toc_secnumbers.pop(docname, None) | ||
494 | 472 | self.toc_num_entries.pop(docname, None) | ||
495 | 473 | self.toctree_includes.pop(docname, None) | ||
496 | 474 | self.indexentries.pop(docname, None) | ||
497 | 475 | self.glob_toctrees.discard(docname) | ||
498 | 476 | self.numbered_toctrees.discard(docname) | ||
499 | 477 | self.images.purge_doc(docname) | ||
500 | 478 | self.dlfiles.purge_doc(docname) | ||
501 | 479 | |||
502 | 480 | for subfn, fnset in self.files_to_rebuild.items(): | ||
503 | 481 | fnset.discard(docname) | ||
504 | 482 | if not fnset: | ||
505 | 483 | del self.files_to_rebuild[subfn] | ||
506 | 484 | for key, (fn, _) in self.citations.items(): | ||
507 | 485 | if fn == docname: | ||
508 | 486 | del self.citations[key] | ||
509 | 487 | for version, changes in self.versionchanges.items(): | ||
510 | 488 | new = [change for change in changes if change[1] != docname] | ||
511 | 489 | changes[:] = new | ||
512 | 490 | |||
513 | 491 | for domain in self.domains.values(): | ||
514 | 492 | domain.clear_doc(docname) | ||
515 | 493 | |||
516 | 494 | def doc2path(self, docname, base=True, suffix=None): | ||
517 | 495 | """Return the filename for the document name. | ||
518 | 496 | |||
519 | 497 | If *base* is True, return absolute path under self.srcdir. | ||
520 | 498 | If *base* is None, return relative path to self.srcdir. | ||
521 | 499 | If *base* is a path string, return absolute path under that. | ||
522 | 500 | If *suffix* is not None, add it instead of config.source_suffix. | ||
523 | 501 | """ | ||
524 | 502 | docname = docname.replace(SEP, path.sep) | ||
525 | 503 | suffix = suffix or self.config.source_suffix | ||
526 | 504 | if base is True: | ||
527 | 505 | return path.join(self.srcdir, docname) + suffix | ||
528 | 506 | elif base is None: | ||
529 | 507 | return docname + suffix | ||
530 | 508 | else: | ||
531 | 509 | return path.join(base, docname) + suffix | ||
532 | 510 | |||
533 | 511 | def relfn2path(self, filename, docname=None): | ||
534 | 512 | """Return paths to a file referenced from a document, relative to | ||
535 | 513 | documentation root and absolute. | ||
536 | 514 | |||
537 | 515 | Absolute filenames are relative to the source dir, while relative | ||
538 | 516 | filenames are relative to the dir of the containing document. | ||
539 | 517 | """ | ||
540 | 518 | if filename.startswith('/') or filename.startswith(os.sep): | ||
541 | 519 | rel_fn = filename[1:] | ||
542 | 520 | else: | ||
543 | 521 | docdir = path.dirname(self.doc2path(docname or self.docname, | ||
544 | 522 | base=None)) | ||
545 | 523 | rel_fn = path.join(docdir, filename) | ||
546 | 524 | try: | ||
547 | 525 | return rel_fn, path.join(self.srcdir, rel_fn) | ||
548 | 526 | except UnicodeDecodeError: | ||
549 | 527 | # the source directory is a bytestring with non-ASCII characters; | ||
550 | 528 | # let's try to encode the rel_fn in the file system encoding | ||
551 | 529 | enc_rel_fn = rel_fn.encode(sys.getfilesystemencoding()) | ||
552 | 530 | return rel_fn, path.join(self.srcdir, enc_rel_fn) | ||
553 | 531 | |||
554 | 532 | def find_files(self, config): | ||
555 | 533 | """Find all source files in the source dir and put them in | ||
556 | 534 | self.found_docs. | ||
557 | 535 | """ | ||
558 | 536 | matchers = compile_matchers( | ||
559 | 537 | config.exclude_patterns[:] + | ||
560 | 538 | config.exclude_trees + | ||
561 | 539 | [d + config.source_suffix for d in config.unused_docs] + | ||
562 | 540 | ['**/' + d for d in config.exclude_dirnames] + | ||
563 | 541 | ['**/_sources', '.#*'] | ||
564 | 542 | ) | ||
565 | 543 | self.found_docs = set(get_matching_docs( | ||
566 | 544 | self.srcdir, config.source_suffix, exclude_matchers=matchers)) | ||
567 | 545 | |||
568 | 546 | def get_outdated_files(self, config_changed): | ||
569 | 547 | """Return (added, changed, removed) sets.""" | ||
570 | 548 | # clear all files no longer present | ||
571 | 549 | removed = set(self.all_docs) - self.found_docs | ||
572 | 550 | |||
573 | 551 | added = set() | ||
574 | 552 | changed = set() | ||
575 | 553 | |||
576 | 554 | if config_changed: | ||
577 | 555 | # config values affect e.g. substitutions | ||
578 | 556 | added = self.found_docs | ||
579 | 557 | else: | ||
580 | 558 | for docname in self.found_docs: | ||
581 | 559 | if docname not in self.all_docs: | ||
582 | 560 | added.add(docname) | ||
583 | 561 | continue | ||
584 | 562 | # if the doctree file is not there, rebuild | ||
585 | 563 | if not path.isfile(self.doc2path(docname, self.doctreedir, | ||
586 | 564 | '.doctree')): | ||
587 | 565 | changed.add(docname) | ||
588 | 566 | continue | ||
589 | 567 | # check the "reread always" list | ||
590 | 568 | if docname in self.reread_always: | ||
591 | 569 | changed.add(docname) | ||
592 | 570 | continue | ||
593 | 571 | # check the mtime of the document | ||
594 | 572 | mtime = self.all_docs[docname] | ||
595 | 573 | newmtime = path.getmtime(self.doc2path(docname)) | ||
596 | 574 | if newmtime > mtime: | ||
597 | 575 | changed.add(docname) | ||
598 | 576 | continue | ||
599 | 577 | # finally, check the mtime of dependencies | ||
600 | 578 | for dep in self.dependencies.get(docname, ()): | ||
601 | 579 | try: | ||
602 | 580 | # this will do the right thing when dep is absolute too | ||
603 | 581 | deppath = path.join(self.srcdir, dep) | ||
604 | 582 | if not path.isfile(deppath): | ||
605 | 583 | changed.add(docname) | ||
606 | 584 | break | ||
607 | 585 | depmtime = path.getmtime(deppath) | ||
608 | 586 | if depmtime > mtime: | ||
609 | 587 | changed.add(docname) | ||
610 | 588 | break | ||
611 | 589 | except EnvironmentError: | ||
612 | 590 | # give it another chance | ||
613 | 591 | changed.add(docname) | ||
614 | 592 | break | ||
615 | 593 | |||
616 | 594 | return added, changed, removed | ||
617 | 595 | |||
    def update(self, config, srcdir, doctreedir, app=None):
        """(Re-)read all files new or changed since last update.

        Returns a summary, the total count of documents to reread and an
        iterator that yields docnames as it processes them.  Store all
        environment docnames in the canonical format (ie using SEP as a
        separator in place of os.path.sep).
        """
        config_changed = False
        if self.config is None:
            msg = '[new config] '
            config_changed = True
        else:
            # check if a config value was changed that affects how
            # doctrees are read
            for key, descr in config.values.iteritems():
                if descr[1] != 'env':
                    continue
                if self.config[key] != config[key]:
                    msg = '[config changed] '
                    config_changed = True
                    break
            else:
                msg = ''
            # this value is not covered by the above loop because it is
            # handled specially by the config class
            if self.config.extensions != config.extensions:
                msg = '[extensions changed] '
                config_changed = True
        # the source and doctree directories may have been relocated
        self.srcdir = srcdir
        self.doctreedir = doctreedir
        self.find_files(config)
        self.config = config

        added, changed, removed = self.get_outdated_files(config_changed)

        # allow user intervention as well
        for docs in app.emit('env-get-outdated', self, added, changed, removed):
            changed.update(set(docs) & self.found_docs)

        # if files were added or removed, all documents with globbed toctrees
        # must be reread
        if added or removed:
            # ... but not those that already were removed
            changed.update(self.glob_toctrees & self.found_docs)

        msg += '%s added, %s changed, %s removed' % (len(added), len(changed),
                                                     len(removed))

        def update_generator():
            self.app = app

            # clear all files no longer present
            for docname in removed:
                if app:
                    app.emit('env-purge-doc', self, docname)
                self.clear_doc(docname)

            # read all new and changed files
            for docname in sorted(added | changed):
                yield docname
                self.read_doc(docname, app=app)

            if config.master_doc not in self.all_docs:
                self.warn(None, 'master file %s not found' %
                          self.doc2path(config.master_doc))

            self.app = None
            if app:
                app.emit('env-updated', self)

        return msg, len(added | changed), update_generator()

    def check_dependents(self, already):
        to_rewrite = self.assign_section_numbers()
        for docname in to_rewrite:
            if docname not in already:
                yield docname

    # --------- SINGLE FILE READING --------------------------------------------

    def warn_and_replace(self, error):
        """Custom decoding error handler that warns and replaces."""
        linestart = error.object.rfind('\n', 0, error.start)
        lineend = error.object.find('\n', error.start)
        if lineend == -1: lineend = len(error.object)
        lineno = error.object.count('\n', 0, error.start) + 1
        self.warn(self.docname, 'undecodable source characters, '
                  'replacing with "?": %r' %
                  (error.object[linestart+1:error.start] + '>>>' +
                   error.object[error.start:error.end] + '<<<' +
                   error.object[error.end:lineend]), lineno)
        return (u'?', error.end)

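warn_and_replace follows the standard `codecs` error-handler contract: it receives a `UnicodeDecodeError` and returns a `(replacement, resume_position)` pair. A pared-down Python 3 sketch of the same mechanism (the handler name `demo-replace` and the `print` reporting are illustrative; Sphinx registers its handler as `'sphinx'` and routes the message through its warning machinery):

```python
import codecs

def warn_and_replace(error):
    # report the offending bytes, then resume decoding with a '?' placeholder
    bad = error.object[error.start:error.end]
    print('undecodable bytes %r at offset %d' % (bad, error.start))
    return (u'?', error.end)

codecs.register_error('demo-replace', warn_and_replace)

text = b'caf\xe9'.decode('utf-8', 'demo-replace')  # '\xe9' alone is invalid UTF-8
```

Here `text` becomes `'caf?'`: decoding proceeds normally until the bad byte, the handler substitutes `'?'`, and decoding resumes at `error.end`.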
    def lookup_domain_element(self, type, name):
        """Lookup a markup element (directive or role), given its name which
        can be a full name (with domain).
        """
        name = name.lower()
        # explicit domain given?
        if ':' in name:
            domain_name, name = name.split(':', 1)
            if domain_name in self.domains:
                domain = self.domains[domain_name]
                element = getattr(domain, type)(name)
                if element is not None:
                    return element, []
        # else look in the default domain
        else:
            def_domain = self.temp_data.get('default_domain')
            if def_domain is not None:
                element = getattr(def_domain, type)(name)
                if element is not None:
                    return element, []
        # always look in the std domain
        element = getattr(self.domains['std'], type)(name)
        if element is not None:
            return element, []
        raise ElementLookupError

    def patch_lookup_functions(self):
        """Monkey-patch directive and role dispatch, so that domain-specific
        markup takes precedence.
        """
        def directive(name, lang_module, document):
            try:
                return self.lookup_domain_element('directive', name)
            except ElementLookupError:
                return orig_directive_function(name, lang_module, document)

        def role(name, lang_module, lineno, reporter):
            try:
                return self.lookup_domain_element('role', name)
            except ElementLookupError:
                return orig_role_function(name, lang_module, lineno, reporter)

        directives.directive = directive
        roles.role = role

    def read_doc(self, docname, src_path=None, save_parsed=True, app=None):
        """Parse a file and add/update inventory entries for the doctree.

        If srcpath is given, read from a different source file.
        """
        # remove all inventory entries for that file
        if app:
            app.emit('env-purge-doc', self, docname)

        self.clear_doc(docname)

        if src_path is None:
            src_path = self.doc2path(docname)

        self.temp_data['docname'] = docname
        # defaults to the global default, but can be re-set in a document
        self.temp_data['default_domain'] = \
            self.domains.get(self.config.primary_domain)

        self.settings['input_encoding'] = self.config.source_encoding
        self.settings['trim_footnote_reference_space'] = \
            self.config.trim_footnote_reference_space
        self.settings['gettext_compact'] = self.config.gettext_compact

        self.patch_lookup_functions()

        if self.config.default_role:
            role_fn, messages = roles.role(self.config.default_role, english,
                                           0, dummy_reporter)
            if role_fn:
                roles._roles[''] = role_fn
            else:
                self.warn(docname, 'default role %s not found' %
                          self.config.default_role)

        codecs.register_error('sphinx', self.warn_and_replace)

        class SphinxSourceClass(FileInput):
            def __init__(self_, *args, **kwds):
                # don't call sys.exit() on IOErrors
                kwds['handle_io_errors'] = False
                FileInput.__init__(self_, *args, **kwds)

            def decode(self_, data):
                if isinstance(data, unicode):
                    return data
                return data.decode(self_.encoding, 'sphinx')

            def read(self_):
                data = FileInput.read(self_)
                if app:
                    arg = [data]
                    app.emit('source-read', docname, arg)
                    data = arg[0]
                if self.config.rst_epilog:
                    data = data + '\n' + self.config.rst_epilog + '\n'
                if self.config.rst_prolog:
                    data = self.config.rst_prolog + '\n' + data
                return data

        # publish manually
        pub = Publisher(reader=SphinxStandaloneReader(),
                        writer=SphinxDummyWriter(),
                        source_class=SphinxSourceClass,
                        destination_class=NullOutput)
        pub.set_components(None, 'restructuredtext', None)
        pub.process_programmatic_settings(None, self.settings, None)
        pub.set_source(None, src_path.encode(fs_encoding))
        pub.set_destination(None, None)
        try:
            pub.publish()
            doctree = pub.document
        except UnicodeError, err:
            raise SphinxError(str(err))

        # post-processing
        self.filter_messages(doctree)
        self.process_dependencies(docname, doctree)
        self.process_images(docname, doctree)
        self.process_downloads(docname, doctree)
        self.process_metadata(docname, doctree)
        self.process_refonly_bullet_lists(docname, doctree)
        self.create_title_from(docname, doctree)
        self.note_indexentries_from(docname, doctree)
        self.note_citations_from(docname, doctree)
        self.build_toc_from(docname, doctree)
        for domain in self.domains.itervalues():
            domain.process_doc(self, docname, doctree)

        # allow extension-specific post-processing
        if app:
            app.emit('doctree-read', doctree)

        # store time of build, for outdated files detection
        self.all_docs[docname] = time.time()

        if self.versioning_condition:
            # get old doctree
            try:
                f = open(self.doc2path(docname,
                                       self.doctreedir, '.doctree'), 'rb')
                try:
                    old_doctree = pickle.load(f)
                finally:
                    f.close()
            except EnvironmentError:
                old_doctree = None

            # add uids for versioning
            if old_doctree is None:
                list(add_uids(doctree, self.versioning_condition))
            else:
                list(merge_doctrees(
                    old_doctree, doctree, self.versioning_condition))

        # make it picklable
        doctree.reporter = None
        doctree.transformer = None
        doctree.settings.warning_stream = None
        doctree.settings.env = None
        doctree.settings.record_dependencies = None
        for metanode in doctree.traverse(MetaBody.meta):
            # docutils' meta nodes aren't picklable because the class is nested
            metanode.__class__ = addnodes.meta

        # cleanup
        self.temp_data.clear()

        if save_parsed:
            # save the parsed doctree
            doctree_filename = self.doc2path(docname, self.doctreedir,
                                             '.doctree')
            dirname = path.dirname(doctree_filename)
            if not path.isdir(dirname):
                os.makedirs(dirname)
            f = open(doctree_filename, 'wb')
            try:
                pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
            finally:
                f.close()
        else:
            return doctree

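Before pickling the doctree, read_doc nulls out the attributes that cannot be serialized (the reporter, transformer, and open streams) directly on the object. A Python 3 sketch of the same "strip unpicklable helpers before dumping" idea, expressed via `__getstate__` instead of in-place nulling (an alternative pattern, not what the code above does; `Tree` is a hypothetical stand-in for a doctree):

```python
import pickle

class Tree(object):
    def __init__(self, data):
        self.data = data
        self.reporter = lambda msg: msg   # a function attribute: not picklable

    def __getstate__(self):
        # drop the unpicklable helper before serializing
        state = self.__dict__.copy()
        state['reporter'] = None
        return state

blob = pickle.dumps(Tree('doc'), pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
```

Without the `__getstate__` hook (or the manual nulling the method above performs), `pickle.dumps` would raise on the lambda attribute.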
    # utilities to use while reading a document

    @property
    def docname(self):
        """Backwards compatible alias."""
        return self.temp_data['docname']

    @property
    def currmodule(self):
        """Backwards compatible alias."""
        return self.temp_data.get('py:module')

    @property
    def currclass(self):
        """Backwards compatible alias."""
        return self.temp_data.get('py:class')

    def new_serialno(self, category=''):
        """Return a serial number, e.g. for index entry targets."""
        key = category + 'serialno'
        cur = self.temp_data.get(key, 0)
        self.temp_data[key] = cur + 1
        return cur

    def note_dependency(self, filename):
        self.dependencies.setdefault(self.docname, set()).add(filename)

    def note_reread(self):
        self.reread_always.add(self.docname)

    def note_versionchange(self, type, version, node, lineno):
        self.versionchanges.setdefault(version, []).append(
            (type, self.temp_data['docname'], lineno,
             self.temp_data.get('py:module'),
             self.temp_data.get('object'), node.astext()))

    # post-processing of read doctrees

    def filter_messages(self, doctree):
        """Filter system messages from a doctree."""
        filterlevel = self.config.keep_warnings and 2 or 5
        for node in doctree.traverse(nodes.system_message):
            if node['level'] < filterlevel:
                node.parent.remove(node)

    def process_dependencies(self, docname, doctree):
        """Process docutils-generated dependency info."""
        cwd = os.getcwd()
        frompath = path.join(path.normpath(self.srcdir), 'dummy')
        deps = doctree.settings.record_dependencies
        if not deps:
            return
        for dep in deps.list:
            # the dependency path is relative to the working dir, so get
            # one relative to the srcdir
            relpath = relative_path(frompath,
                                    path.normpath(path.join(cwd, dep)))
            self.dependencies.setdefault(docname, set()).add(relpath)

    def process_downloads(self, docname, doctree):
        """Process downloadable file paths."""
        for node in doctree.traverse(addnodes.download_reference):
            targetname = node['reftarget']
            rel_filename, filename = self.relfn2path(targetname, docname)
            self.dependencies.setdefault(docname, set()).add(rel_filename)
            if not os.access(filename, os.R_OK):
                self.warn_node('download file not readable: %s' % filename,
                               node)
                continue
            uniquename = self.dlfiles.add_file(docname, filename)
            node['filename'] = uniquename

    def process_images(self, docname, doctree):
        """Process and rewrite image URIs."""
        for node in doctree.traverse(nodes.image):
            # Map the mimetype to the corresponding image.  The writer may
            # choose the best image from these candidates.  The special key *
            # is set if there is only single candidate to be used by a writer.
            # The special key ? is set for nonlocal URIs.
            node['candidates'] = candidates = {}
            imguri = node['uri']
            if imguri.find('://') != -1:
                self.warn_node('nonlocal image URI found: %s' % imguri, node)
                candidates['?'] = imguri
                continue
            rel_imgpath, full_imgpath = self.relfn2path(imguri, docname)
            # set imgpath as default URI
            node['uri'] = rel_imgpath
            if rel_imgpath.endswith(os.extsep + '*'):
                for filename in glob(full_imgpath):
                    new_imgpath = relative_path(self.srcdir, filename)
                    if filename.lower().endswith('.pdf'):
                        candidates['application/pdf'] = new_imgpath
                    elif filename.lower().endswith('.svg'):
                        candidates['image/svg+xml'] = new_imgpath
                    else:
                        try:
                            f = open(filename, 'rb')
                            try:
                                imgtype = imghdr.what(f)
                            finally:
                                f.close()
                        except (OSError, IOError), err:
                            self.warn_node('image file %s not readable: %s' %
                                           (filename, err), node)
                        if imgtype:
                            candidates['image/' + imgtype] = new_imgpath
            else:
                candidates['*'] = rel_imgpath
            # map image paths to unique image names (so that they can be put
            # into a single directory)
            for imgpath in candidates.itervalues():
                self.dependencies.setdefault(docname, set()).add(imgpath)
                if not os.access(path.join(self.srcdir, imgpath), os.R_OK):
                    self.warn_node('image file not readable: %s' % imgpath,
                                   node)
                    continue
                self.images.add_file(docname, imgpath)

    def process_metadata(self, docname, doctree):
        """Process the docinfo part of the doctree as metadata.

        Keep processing minimal -- just return what docutils says.
        """
        self.metadata[docname] = md = {}
        try:
            docinfo = doctree[0]
        except IndexError:
            # probably an empty document
            return
        if docinfo.__class__ is not nodes.docinfo:
            # nothing to see here
            return
        for node in docinfo:
            # nodes are multiply inherited...
            if isinstance(node, nodes.authors):
                md['authors'] = [author.astext() for author in node]
            elif isinstance(node, nodes.TextElement):  # e.g. author
                md[node.__class__.__name__] = node.astext()
            else:
                name, body = node
                md[name.astext()] = body.astext()
        del doctree[0]

    def process_refonly_bullet_lists(self, docname, doctree):
        """Change refonly bullet lists to use compact_paragraphs.

        Specifically implemented for 'Indices and Tables' section, which looks
        odd when html_compact_lists is false.
        """
        if self.config.html_compact_lists:
            return

        class RefOnlyListChecker(nodes.GenericNodeVisitor):
            """Raise `nodes.NodeFound` if non-simple list item is encountered.

            Here 'simple' means a list item containing only a paragraph with a
            single reference in it.
            """

            def default_visit(self, node):
                raise nodes.NodeFound

            def visit_bullet_list(self, node):
                pass

            def visit_list_item(self, node):
                children = []
                for child in node.children:
                    if not isinstance(child, nodes.Invisible):
                        children.append(child)
                if len(children) != 1:
                    raise nodes.NodeFound
                if not isinstance(children[0], nodes.paragraph):
                    raise nodes.NodeFound
                para = children[0]
                if len(para) != 1:
                    raise nodes.NodeFound
                if not isinstance(para[0], addnodes.pending_xref):
                    raise nodes.NodeFound
                raise nodes.SkipChildren

            def invisible_visit(self, node):
                """Invisible nodes should be ignored."""
                pass

        def check_refonly_list(node):
            """Check for list with only references in it."""
            visitor = RefOnlyListChecker(doctree)
            try:
                node.walk(visitor)
            except nodes.NodeFound:
                return False
            else:
                return True

        for node in doctree.traverse(nodes.bullet_list):
            if check_refonly_list(node):
                for item in node.traverse(nodes.list_item):
                    para = item[0]
                    ref = para[0]
                    compact_para = addnodes.compact_paragraph()
                    compact_para += ref
                    item.replace(para, compact_para)

    def create_title_from(self, docname, document):
        """Add a title node to the document (just copy the first section
        title), and store that title in the environment.
        """
        titlenode = nodes.title()
        longtitlenode = titlenode
        # explicit title set with title directive; use this only for
        # the <title> tag in HTML output
        if document.has_key('title'):
            longtitlenode = nodes.title()
            longtitlenode += nodes.Text(document['title'])
        # look for first section title and use that as the title
        for node in document.traverse(nodes.section):
            visitor = SphinxContentsFilter(document)
            node[0].walkabout(visitor)
            titlenode += visitor.get_entry_text()
            break
        else:
            # document has no title
            titlenode += nodes.Text('<no title>')
        self.titles[docname] = titlenode
        self.longtitles[docname] = longtitlenode

    def note_indexentries_from(self, docname, document):
        entries = self.indexentries[docname] = []
        for node in document.traverse(addnodes.index):
            entries.extend(node['entries'])

    def note_citations_from(self, docname, document):
        for node in document.traverse(nodes.citation):
            label = node[0].astext()
            if label in self.citations:
                self.warn_node('duplicate citation %s, ' % label +
                               'other instance in %s' % self.doc2path(
                                   self.citations[label][0]), node)
            self.citations[label] = (docname, node['ids'][0])

    def note_toctree(self, docname, toctreenode):
        """Note a TOC tree directive in a document and gather information about
        file relations from it.
        """
        if toctreenode['glob']:
            self.glob_toctrees.add(docname)
        if toctreenode.get('numbered'):
            self.numbered_toctrees.add(docname)
        includefiles = toctreenode['includefiles']
        for includefile in includefiles:
            # note that if the included file is rebuilt, this one must be
            # too (since the TOC of the included file could have changed)
            self.files_to_rebuild.setdefault(includefile, set()).add(docname)
        self.toctree_includes.setdefault(docname, []).extend(includefiles)

    def build_toc_from(self, docname, document):
        """Build a TOC from the doctree and store it in the inventory."""
        numentries = [0]  # nonlocal again...

        try:
            maxdepth = int(self.metadata[docname].get('tocdepth', 0))
        except ValueError:
            maxdepth = 0

        def traverse_in_section(node, cls):
            """Like traverse(), but stay within the same section."""
            result = []
            if isinstance(node, cls):
                result.append(node)
            for child in node.children:
                if isinstance(child, nodes.section):
                    continue
                result.extend(traverse_in_section(child, cls))
            return result

        def build_toc(node, depth=1):
            entries = []
            for sectionnode in node:
                # find all toctree nodes in this section and add them
                # to the toc (just copying the toctree node which is then
                # resolved in self.get_and_resolve_doctree)
                if isinstance(sectionnode, addnodes.only):
                    onlynode = addnodes.only(expr=sectionnode['expr'])
                    blist = build_toc(sectionnode, depth)
                    if blist:
                        onlynode += blist.children
                        entries.append(onlynode)
                if not isinstance(sectionnode, nodes.section):
                    for toctreenode in traverse_in_section(sectionnode,
                                                           addnodes.toctree):
                        item = toctreenode.copy()
                        entries.append(item)
                        # important: do the inventory stuff
                        self.note_toctree(docname, toctreenode)
                    continue
                title = sectionnode[0]
                # copy the contents of the section title, but without
                # references and unnecessary stuff
                visitor = SphinxContentsFilter(document)
                title.walkabout(visitor)
                nodetext = visitor.get_entry_text()
                if not numentries[0]:
                    # for the very first toc entry, don't add an anchor
                    # as it is the file's title anyway
                    anchorname = ''
                else:
                    anchorname = '#' + sectionnode['ids'][0]
                numentries[0] += 1
                # make these nodes:
                # list_item -> compact_paragraph -> reference
                reference = nodes.reference(
                    '', '', internal=True, refuri=docname,
                    anchorname=anchorname, *nodetext)
                para = addnodes.compact_paragraph('', '', reference)
                item = nodes.list_item('', para)
                if maxdepth == 0 or depth < maxdepth:
                    item += build_toc(sectionnode, depth+1)
                entries.append(item)
            if entries:
                return nodes.bullet_list('', *entries)
            return []
        toc = build_toc(document)
        if toc:
            self.tocs[docname] = toc
        else:
            self.tocs[docname] = nodes.bullet_list('')
        self.toc_num_entries[docname] = numentries[0]

1232 | 1210 | def get_toc_for(self, docname, builder): | ||
1233 | 1211 | """Return a TOC nodetree -- for use on the same page only!""" | ||
1234 | 1212 | try: | ||
1235 | 1213 | toc = self.tocs[docname].deepcopy() | ||
1236 | 1214 | except KeyError: | ||
1237 | 1215 | # the document does not exist anymore: return a dummy node that | ||
1238 | 1216 | # renders to nothing | ||
1239 | 1217 | return nodes.paragraph() | ||
1240 | 1218 | self.process_only_nodes(toc, builder, docname) | ||
1241 | 1219 | for node in toc.traverse(nodes.reference): | ||
1242 | 1220 | node['refuri'] = node['anchorname'] or '#' | ||
1243 | 1221 | return toc | ||
1244 | 1222 | |||
1245 | 1223 | def get_toctree_for(self, docname, builder, collapse, **kwds): | ||
1246 | 1224 | """Return the global TOC nodetree.""" | ||
1247 | 1225 | doctree = self.get_doctree(self.config.master_doc) | ||
1248 | 1226 | toctrees = [] | ||
1249 | 1227 | if 'includehidden' not in kwds: | ||
1250 | 1228 | kwds['includehidden'] = True | ||
1251 | 1229 | if 'maxdepth' not in kwds: | ||
1252 | 1230 | kwds['maxdepth'] = 0 | ||
1253 | 1231 | kwds['collapse'] = collapse | ||
1254 | 1232 | for toctreenode in doctree.traverse(addnodes.toctree): | ||
1255 | 1233 | toctree = self.resolve_toctree(docname, builder, toctreenode, | ||
1256 | 1234 | prune=True, **kwds) | ||
1257 | 1235 | toctrees.append(toctree) | ||
1258 | 1236 | if not toctrees: | ||
1259 | 1237 | return None | ||
1260 | 1238 | result = toctrees[0] | ||
1261 | 1239 | for toctree in toctrees[1:]: | ||
1262 | 1240 | result.extend(toctree.children) | ||
1263 | 1241 | return result | ||
1264 | 1242 | |||
1265 | 1243 | def get_domain(self, domainname): | ||
1266 | 1244 | """Return the domain instance with the specified name. | ||
1267 | 1245 | |||
1268 | 1246 | Raises an ExtensionError if the domain is not registered. | ||
1269 | 1247 | """ | ||
1270 | 1248 | try: | ||
1271 | 1249 | return self.domains[domainname] | ||
1272 | 1250 | except KeyError: | ||
1273 | 1251 | raise ExtensionError('Domain %r is not registered' % domainname) | ||
1274 | 1252 | |||
1275 | 1253 | # --------- RESOLVING REFERENCES AND TOCTREES ------------------------------ | ||
1276 | 1254 | |||
1277 | 1255 | def get_doctree(self, docname): | ||
1278 | 1256 | """Read the doctree for a file from the pickle and return it.""" | ||
1279 | 1257 | doctree_filename = self.doc2path(docname, self.doctreedir, '.doctree') | ||
1280 | 1258 | f = open(doctree_filename, 'rb') | ||
1281 | 1259 | try: | ||
1282 | 1260 | doctree = pickle.load(f) | ||
1283 | 1261 | finally: | ||
1284 | 1262 | f.close() | ||
1285 | 1263 | doctree.settings.env = self | ||
1286 | 1264 | doctree.reporter = Reporter(self.doc2path(docname), 2, 5, | ||
1287 | 1265 | stream=WarningStream(self._warnfunc)) | ||
1288 | 1266 | return doctree | ||
1289 | 1267 | |||
1290 | 1268 | |||
    def get_and_resolve_doctree(self, docname, builder, doctree=None,
                                prune_toctrees=True):
        """Read the doctree from the pickle, resolve cross-references and
        toctrees and return it.
        """
        if doctree is None:
            doctree = self.get_doctree(docname)

        # resolve all pending cross-references
        self.resolve_references(doctree, docname, builder)

        # now, resolve all toctree nodes
        for toctreenode in doctree.traverse(addnodes.toctree):
            result = self.resolve_toctree(docname, builder, toctreenode,
                                          prune=prune_toctrees)
            if result is None:
                toctreenode.replace_self([])
            else:
                toctreenode.replace_self(result)

        return doctree

    def resolve_toctree(self, docname, builder, toctree, prune=True, maxdepth=0,
                        titles_only=False, collapse=False, includehidden=False):
        """Resolve a *toctree* node into individual bullet lists with titles
        as items, returning None (if no containing titles are found) or
        a new node.

        If *prune* is True, the tree is pruned to *maxdepth*, or if that is 0,
        to the value of the *maxdepth* option on the *toctree* node.
        If *titles_only* is True, only toplevel document titles will be in the
        resulting tree.
        If *collapse* is True, all branches not containing docname will
        be collapsed.
        """
        if toctree.get('hidden', False) and not includehidden:
            return None

        def _walk_depth(node, depth, maxdepth):
            """Utility: Cut a TOC at a specified depth."""

            # For reading this function, it is useful to keep in mind the node
            # structure of a toctree (using HTML-like node names for brevity):
            #
            # <ul>
            #   <li>
            #     <p><a></p>
            #     <p><a></p>
            #     ...
            #     <ul>
            #       ...
            #     </ul>
            #   </li>
            # </ul>

            for subnode in node.children[:]:
                if isinstance(subnode, (addnodes.compact_paragraph,
                                        nodes.list_item)):
                    # for <p> and <li>, just indicate the depth level and
                    # recurse to children
                    subnode['classes'].append('toctree-l%d' % (depth-1))
                    _walk_depth(subnode, depth, maxdepth)

                elif isinstance(subnode, nodes.bullet_list):
                    # for <ul>, determine if the depth is too large or if the
                    # entry is to be collapsed
                    if maxdepth > 0 and depth > maxdepth:
                        subnode.parent.replace(subnode, [])
                    else:
                        # to find out what to collapse, *first* walk subitems,
                        # since that determines which children point to the
                        # current page
                        _walk_depth(subnode, depth+1, maxdepth)
                        # cull sub-entries whose parents aren't 'current'
                        if (collapse and depth > 1 and
                            'iscurrent' not in subnode.parent):
                            subnode.parent.remove(subnode)

                elif isinstance(subnode, nodes.reference):
                    # for <a>, identify which entries point to the current
                    # document and therefore may not be collapsed
                    if subnode['refuri'] == docname:
                        if not subnode['anchorname']:
                            # give the whole branch a 'current' class
                            # (useful for styling it differently)
                            branchnode = subnode
                            while branchnode:
                                branchnode['classes'].append('current')
                                branchnode = branchnode.parent
                        # mark the list_item as "on current page"
                        if subnode.parent.parent.get('iscurrent'):
                            # but only if it's not already done
                            return
                        while subnode:
                            subnode['iscurrent'] = True
                            subnode = subnode.parent

        def _entries_from_toctree(toctreenode, parents,
                                  separate=False, subtree=False):
            """Return TOC entries for a toctree node."""
            refs = [(e[0], str(e[1])) for e in toctreenode['entries']]
            entries = []
            for (title, ref) in refs:
                try:
                    refdoc = None
                    if url_re.match(ref):
                        reference = nodes.reference('', '', internal=False,
                                                    refuri=ref, anchorname='',
                                                    *[nodes.Text(title)])
                        para = addnodes.compact_paragraph('', '', reference)
                        item = nodes.list_item('', para)
                        toc = nodes.bullet_list('', item)
                    elif ref == 'self':
                        # 'self' refers to the document from which this
                        # toctree originates
                        ref = toctreenode['parent']
                        if not title:
                            title = clean_astext(self.titles[ref])
                        reference = nodes.reference('', '', internal=True,
                                                    refuri=ref,
                                                    anchorname='',
                                                    *[nodes.Text(title)])
                        para = addnodes.compact_paragraph('', '', reference)
                        item = nodes.list_item('', para)
                        # don't show subitems
                        toc = nodes.bullet_list('', item)
                    else:
                        if ref in parents:
                            self.warn(ref, 'circular toctree references '
                                      'detected, ignoring: %s <- %s' %
                                      (ref, ' <- '.join(parents)))
                            continue
                        refdoc = ref
                        toc = self.tocs[ref].deepcopy()
                        self.process_only_nodes(toc, builder, ref)
                        if title and toc.children and len(toc.children) == 1:
                            child = toc.children[0]
                            for refnode in child.traverse(nodes.reference):
                                if refnode['refuri'] == ref and \
                                       not refnode['anchorname']:
                                    refnode.children = [nodes.Text(title)]
                    if not toc.children:
                        # empty toc means: no titles will show up in the toctree
                        self.warn_node(
                            'toctree contains reference to document %r that '
                            'doesn\'t have a title: no link will be generated'
                            % ref, toctreenode)
                except KeyError:
                    # this is raised if the included file does not exist
                    self.warn_node(
                        'toctree contains reference to nonexisting document %r'
                        % ref, toctreenode)
                else:
                    # if titles_only is given, only keep the main title and
                    # sub-toctrees
                    if titles_only:
                        # delete everything but the toplevel title(s)
                        # and toctrees
                        for toplevel in toc:
                            # nodes with length 1 don't have any children anyway
                            if len(toplevel) > 1:
                                subtrees = toplevel.traverse(addnodes.toctree)
                                toplevel[1][:] = subtrees
                    # resolve all sub-toctrees
                    for toctreenode in toc.traverse(addnodes.toctree):
                        if not (toctreenode.get('hidden', False)
                                and not includehidden):
                            i = toctreenode.parent.index(toctreenode) + 1
                            for item in _entries_from_toctree(
                                    toctreenode, [refdoc] + parents,
                                    subtree=True):
                                toctreenode.parent.insert(i, item)
                                i += 1
                            toctreenode.parent.remove(toctreenode)
                    if separate:
                        entries.append(toc)
                    else:
                        entries.extend(toc.children)
            if not subtree and not separate:
                ret = nodes.bullet_list()
                ret += entries
                return [ret]
            return entries

        maxdepth = maxdepth or toctree.get('maxdepth', -1)
        if not titles_only and toctree.get('titlesonly', False):
            titles_only = True

        # NOTE: previously, this was separate=True, but that leads to artificial
        # separation when two or more toctree entries form a logical unit, so
        # separating mode is no longer used -- it's kept here for history's sake
        tocentries = _entries_from_toctree(toctree, [], separate=False)
        if not tocentries:
            return None

        newnode = addnodes.compact_paragraph('', '', *tocentries)
        newnode['toctree'] = True

        # prune the tree to maxdepth and replace titles, also set level classes
        _walk_depth(newnode, 1, prune and maxdepth or 0)

        # set the target paths in the toctrees (they are not known at TOC
        # generation time)
        for refnode in newnode.traverse(nodes.reference):
            if not url_re.match(refnode['refuri']):
                refnode['refuri'] = builder.get_relative_uri(
                    docname, refnode['refuri']) + refnode['anchorname']
        return newnode

    def resolve_references(self, doctree, fromdocname, builder):
        for node in doctree.traverse(addnodes.pending_xref):
            contnode = node[0].deepcopy()
            newnode = None

            typ = node['reftype']
            target = node['reftarget']
            refdoc = node.get('refdoc', fromdocname)
            domain = None

            try:
                if 'refdomain' in node and node['refdomain']:
                    # let the domain try to resolve the reference
                    try:
                        domain = self.domains[node['refdomain']]
                    except KeyError:
                        raise NoUri
                    newnode = domain.resolve_xref(self, fromdocname, builder,
                                                  typ, target, node, contnode)
                # really hardwired reference types
                elif typ == 'doc':
                    # directly reference to document by source name;
                    # can be absolute or relative
                    docname = docname_join(refdoc, target)
                    if docname in self.all_docs:
                        if node['refexplicit']:
                            # reference with explicit title
                            caption = node.astext()
                        else:
                            caption = clean_astext(self.titles[docname])
                        innernode = nodes.emphasis(caption, caption)
                        newnode = nodes.reference('', '', internal=True)
                        newnode['refuri'] = builder.get_relative_uri(
                            fromdocname, docname)
                        newnode.append(innernode)
                elif typ == 'citation':
                    docname, labelid = self.citations.get(target, ('', ''))
                    if docname:
                        newnode = make_refnode(builder, fromdocname, docname,
                                               labelid, contnode)
                # no new node found? try the missing-reference event
                if newnode is None:
                    newnode = builder.app.emit_firstresult(
                        'missing-reference', self, node, contnode)
                    # still not found? warn if in nit-picky mode
                    if newnode is None:
                        self._warn_missing_reference(
                            fromdocname, typ, target, node, domain)
            except NoUri:
                newnode = contnode
            node.replace_self(newnode or contnode)

        # remove only-nodes that do not belong to our builder
        self.process_only_nodes(doctree, builder, fromdocname)

        # allow custom references to be resolved
        builder.app.emit('doctree-resolved', doctree, fromdocname)

    def _warn_missing_reference(self, fromdoc, typ, target, node, domain):
        warn = node.get('refwarn')
        if self.config.nitpicky:
            warn = True
            if self._nitpick_ignore:
                dtype = domain and '%s:%s' % (domain.name, typ) or typ
                if (dtype, target) in self._nitpick_ignore:
                    warn = False
        if not warn:
            return
        if domain and typ in domain.dangling_warnings:
            msg = domain.dangling_warnings[typ]
        elif typ == 'doc':
            msg = 'unknown document: %(target)s'
        elif typ == 'citation':
            msg = 'citation not found: %(target)s'
        elif node.get('refdomain', 'std') != 'std':
            msg = '%s:%s reference target not found: %%(target)s' % \
                  (node['refdomain'], typ)
        else:
            msg = '%s reference target not found: %%(target)s' % typ
        self.warn_node(msg % {'target': target}, node)

    def process_only_nodes(self, doctree, builder, fromdocname=None):
        # A comment on the comment() nodes being inserted: replacing by [] would
        # result in a "Losing ids" exception if there is a target node before
        # the only node, so we make sure docutils can transfer the id to
        # something, even if it's just a comment and will lose the id anyway...
        for node in doctree.traverse(addnodes.only):
            try:
                ret = builder.tags.eval_condition(node['expr'])
            except Exception, err:
                self.warn_node('exception while evaluating only '
                               'directive expression: %s' % err, node)
                node.replace_self(node.children or nodes.comment())
            else:
                if ret:
                    node.replace_self(node.children or nodes.comment())
                else:
                    node.replace_self(nodes.comment())

    def assign_section_numbers(self):
        """Assign a section number to each heading under a numbered toctree."""
        # a list of all docnames whose section numbers changed
        rewrite_needed = []

        old_secnumbers = self.toc_secnumbers
        self.toc_secnumbers = {}

        def _walk_toc(node, secnums, depth, titlenode=None):
            # titlenode is the title of the document, it will get assigned a
            # secnumber too, so that it shows up in next/prev/parent rellinks
            for subnode in node.children:
                if isinstance(subnode, nodes.bullet_list):
                    numstack.append(0)
                    _walk_toc(subnode, secnums, depth-1, titlenode)
                    numstack.pop()
                    titlenode = None
                elif isinstance(subnode, nodes.list_item):
                    _walk_toc(subnode, secnums, depth, titlenode)
                    titlenode = None
                elif isinstance(subnode, addnodes.only):
                    # at this stage we don't know yet which sections are going
                    # to be included; just include all of them, even if it leads
                    # to gaps in the numbering
                    _walk_toc(subnode, secnums, depth, titlenode)
                    titlenode = None
                elif isinstance(subnode, addnodes.compact_paragraph):
                    numstack[-1] += 1
                    if depth > 0:
                        number = tuple(numstack)
                    else:
                        number = None
                    secnums[subnode[0]['anchorname']] = \
                        subnode[0]['secnumber'] = number
                    if titlenode:
                        titlenode['secnumber'] = number
                        titlenode = None
                elif isinstance(subnode, addnodes.toctree):
                    _walk_toctree(subnode, depth)

        def _walk_toctree(toctreenode, depth):
            if depth == 0:
                return
            for (title, ref) in toctreenode['entries']:
                if url_re.match(ref) or ref == 'self':
                    # don't mess with those
                    continue
                if ref in self.tocs:
                    secnums = self.toc_secnumbers[ref] = {}
                    _walk_toc(self.tocs[ref], secnums, depth,
                              self.titles.get(ref))
                    if secnums != old_secnumbers.get(ref):
                        rewrite_needed.append(ref)

        for docname in self.numbered_toctrees:
            doctree = self.get_doctree(docname)
            for toctreenode in doctree.traverse(addnodes.toctree):
                depth = toctreenode.get('numbered', 0)
                if depth:
                    # every numbered toctree gets new numbering
                    numstack = [0]
                    _walk_toctree(toctreenode, depth)

        return rewrite_needed

    def create_index(self, builder, group_entries=True,
                     _fixre=re.compile(r'(.*) ([(][^()]*[)])')):
        """Create the real index from the collected index entries."""
        new = {}

        def add_entry(word, subword, link=True, dic=new):
            entry = dic.get(word)
            if not entry:
                dic[word] = entry = [[], {}]
            if subword:
                add_entry(subword, '', link=link, dic=entry[1])
            elif link:
                try:
                    uri = builder.get_relative_uri('genindex', fn) + '#' + tid
                except NoUri:
                    pass
                else:
                    entry[0].append((main, uri))

        for fn, entries in self.indexentries.iteritems():
            # new entry types must be listed in directives/other.py!
            for type, value, tid, main in entries:
                try:
                    if type == 'single':
                        try:
                            entry, subentry = split_into(2, 'single', value)
                        except ValueError:
                            entry, = split_into(1, 'single', value)
                            subentry = ''
                        add_entry(entry, subentry)
                    elif type == 'pair':
                        first, second = split_into(2, 'pair', value)
                        add_entry(first, second)
                        add_entry(second, first)
                    elif type == 'triple':
                        first, second, third = split_into(3, 'triple', value)
                        add_entry(first, second+' '+third)
                        add_entry(second, third+', '+first)
                        add_entry(third, first+' '+second)
                    elif type == 'see':
                        first, second = split_into(2, 'see', value)
                        add_entry(first, _('see %s') % second, link=False)
                    elif type == 'seealso':
                        first, second = split_into(2, 'see', value)
                        add_entry(first, _('see also %s') % second, link=False)
                    else:
                        self.warn(fn, 'unknown index entry type %r' % type)
                except ValueError, err:
                    self.warn(fn, str(err))

        # sort the index entries; put all symbols at the front, even those
        # following the letters in ASCII, this is where the chr(127) comes from
        def keyfunc(entry, lcletters=string.ascii_lowercase + '_'):
            lckey = unicodedata.normalize('NFD', entry[0].lower())
            if lckey[0:1] in lcletters:
                return chr(127) + lckey
            return lckey
        newlist = new.items()
        newlist.sort(key=keyfunc)

        if group_entries:
            # fixup entries: transform
            #   func() (in module foo)
            #   func() (in module bar)
            # into
            #   func()
            #     (in module foo)
            #     (in module bar)
            oldkey = ''
            oldsubitems = None
            i = 0
            while i < len(newlist):
                key, (targets, subitems) = newlist[i]
                # cannot move if it has subitems; structure gets too complex
                if not subitems:
                    m = _fixre.match(key)
                    if m:
                        if oldkey == m.group(1):
                            # prefixes match: add entry as subitem of the
                            # previous entry
                            oldsubitems.setdefault(m.group(2), [[], {}])[0].\
                                extend(targets)
                            del newlist[i]
                            continue
                        oldkey = m.group(1)
                    else:
                        oldkey = key
                oldsubitems = subitems
                i += 1

        # group the entries by letter
        def keyfunc2(item, letters=string.ascii_uppercase + '_'):
            # hack: mutating the subitems dicts to a list in the keyfunc
            k, v = item
            v[1] = sorted((si, se) for (si, (se, void)) in v[1].iteritems())
            # now calculate the key
            letter = unicodedata.normalize('NFD', k[0])[0].upper()
            if letter in letters:
                return letter
            else:
                # get all other symbols under one heading
                return 'Symbols'
        return [(key, list(group))
                for (key, group) in groupby(newlist, keyfunc2)]

    def collect_relations(self):
        relations = {}
        getinc = self.toctree_includes.get
        def collect(parents, parents_set, docname, previous, next):
            # circular relationship?
            if docname in parents_set:
                # we will warn about this in resolve_toctree()
                return
            includes = getinc(docname)
            # previous
            if not previous:
                # if no previous sibling, go to parent
                previous = parents[0][0]
            else:
                # else, go to previous sibling, or if it has children, to
                # the last of its children, or if that has children, to the
                # last of those, and so forth
                while 1:
                    previncs = getinc(previous)
                    if previncs:
                        previous = previncs[-1]
                    else:
                        break
            # next
            if includes:
                # if it has children, go to first of them
                next = includes[0]
            elif next:
                # else, if next sibling, go to it
                pass
            else:
                # else, go to the next sibling of the parent, if present,
                # else the grandparent's sibling, if present, and so forth
                for parname, parindex in parents:
                    parincs = getinc(parname)
                    if parincs and parindex + 1 < len(parincs):
                        next = parincs[parindex+1]
                        break
                # else it will stay None
            # same for children
            if includes:
                for subindex, args in enumerate(izip(includes,
                                                     [None] + includes,
                                                     includes[1:] + [None])):
                    collect([(docname, subindex)] + parents,
                            parents_set.union([docname]), *args)
            relations[docname] = [parents[0][0], previous, next]
        collect([(None, 0)], set(), self.config.master_doc, None, None)
        return relations

    def check_consistency(self):
        """Do consistency checks."""
        for docname in sorted(self.all_docs):
            if docname not in self.files_to_rebuild:
                if docname == self.config.master_doc:
                    # the master file is not included anywhere ;)
                    continue
                if 'orphan' in self.metadata[docname]:
                    continue
                self.warn(docname, 'document isn\'t included in any toctree')

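The chr(127) sort-key trick used by `create_index()` above can be tried in isolation. This is a minimal standalone sketch (the entry names are invented for illustration): keys starting with an ASCII letter or underscore are prefixed with chr(127), which compares greater than every printable symbol, so symbol entries float to the front of the sorted index.

```python
import string
import unicodedata

def keyfunc(entry, lcletters=string.ascii_lowercase + '_'):
    # normalize accented characters, then push letter-initial keys
    # behind chr(127) so every symbol sorts before them
    lckey = unicodedata.normalize('NFD', entry[0].lower())
    if lckey[0:1] in lcletters:
        return chr(127) + lckey
    return lckey

entries = [('zeta', None), ('$HOME', None), ('alpha', None), ('|pipe|', None)]
entries.sort(key=keyfunc)
print([name for name, _v in entries])
# -> ['$HOME', '|pipe|', 'alpha', 'zeta']
```

Without the prefix, `$HOME` (ASCII 36) would sort before the letters anyway, but `|pipe|` (ASCII 124) would land after `zeta`; the prefix is what gathers *all* symbols at the front.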
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff'
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx'
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers'
=== removed file '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py'
--- .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py	2012-10-22 20:20:35 +0000
+++ .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py	1970-01-01 00:00:00 +0000
@@ -1,345 +0,0 @@
# -*- coding: utf-8 -*-
"""
    sphinx.writers.manpage
    ~~~~~~~~~~~~~~~~~~~~~~

    Manual page writer, extended for Sphinx custom nodes.

    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from docutils import nodes
try:
    from docutils.writers.manpage import MACRO_DEF, Writer, \
         Translator as BaseTranslator
    has_manpage_writer = True
except ImportError:
    # define the classes in any case, sphinx.application needs it
    Writer = BaseTranslator = object
    has_manpage_writer = False

from sphinx import addnodes
from sphinx.locale import admonitionlabels, versionlabels, _
from sphinx.util.osutil import ustrftime


class ManualPageWriter(Writer):
    def __init__(self, builder):
        Writer.__init__(self)
        self.builder = builder

    def translate(self):
        visitor = ManualPageTranslator(self.builder, self.document)
        self.visitor = visitor
        self.document.walkabout(visitor)
        self.output = visitor.astext()


class ManualPageTranslator(BaseTranslator):
    """
    Custom translator.
    """

    def __init__(self, builder, *args, **kwds):
        BaseTranslator.__init__(self, *args, **kwds)
        self.builder = builder

        self.in_productionlist = 0

        # first title is the manpage title
        self.section_level = -1

        # docinfo set by man_pages config value
        self._docinfo['title'] = self.document.settings.title
        self._docinfo['subtitle'] = self.document.settings.subtitle
        if self.document.settings.authors:
            # don't set it if no author given
            self._docinfo['author'] = self.document.settings.authors
        self._docinfo['manual_section'] = self.document.settings.section

        # docinfo set by other config values
        self._docinfo['title_upper'] = self._docinfo['title'].upper()
        if builder.config.today:
            self._docinfo['date'] = builder.config.today
        else:
            self._docinfo['date'] = ustrftime(builder.config.today_fmt
                                              or _('%B %d, %Y'))
        self._docinfo['copyright'] = builder.config.copyright
        self._docinfo['version'] = builder.config.version
        self._docinfo['manual_group'] = builder.config.project

        # since self.append_header() is never called, need to do this here
        self.body.append(MACRO_DEF)

    # overwritten -- added quotes around all .TH arguments
    def header(self):
        tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
                " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
                ".SH NAME\n"
                "%(title)s \- %(subtitle)s\n")
        return tmpl % self._docinfo

    def visit_start_of_file(self, node):
        pass
    def depart_start_of_file(self, node):
        pass

    def visit_desc(self, node):
        self.visit_definition_list(node)
    def depart_desc(self, node):
        self.depart_definition_list(node)

    def visit_desc_signature(self, node):
        self.visit_definition_list_item(node)
        self.visit_term(node)
    def depart_desc_signature(self, node):
        self.depart_term(node)

    def visit_desc_addname(self, node):
        pass
    def depart_desc_addname(self, node):
        pass

    def visit_desc_type(self, node):
        pass
    def depart_desc_type(self, node):
        pass

    def visit_desc_returns(self, node):
        self.body.append(' -> ')
    def depart_desc_returns(self, node):
        pass

    def visit_desc_name(self, node):
        pass
    def depart_desc_name(self, node):
        pass

    def visit_desc_parameterlist(self, node):
        self.body.append('(')
        self.first_param = 1
    def depart_desc_parameterlist(self, node):
        self.body.append(')')

    def visit_desc_parameter(self, node):
        if not self.first_param:
            self.body.append(', ')
        else:
            self.first_param = 0
    def depart_desc_parameter(self, node):
        pass

    def visit_desc_optional(self, node):
        self.body.append('[')
    def depart_desc_optional(self, node):
        self.body.append(']')

    def visit_desc_annotation(self, node):
        pass
    def depart_desc_annotation(self, node):
        pass

    def visit_desc_content(self, node):
        self.visit_definition(node)
    def depart_desc_content(self, node):
        self.depart_definition(node)

    def visit_refcount(self, node):
        self.body.append(self.defs['emphasis'][0])
    def depart_refcount(self, node):
        self.body.append(self.defs['emphasis'][1])

    def visit_versionmodified(self, node):
        self.visit_paragraph(node)
        text = versionlabels[node['type']] % node['version']
        if len(node):
            text += ': '
        else:
            text += '.'
        self.body.append(text)
    def depart_versionmodified(self, node):
        self.depart_paragraph(node)

    def visit_termsep(self, node):
        self.body.append(', ')
        raise nodes.SkipNode

    # overwritten -- we don't want source comments to show up
    def visit_comment(self, node):
        raise nodes.SkipNode

    # overwritten -- added ensure_eol()
    def visit_footnote(self, node):
        self.ensure_eol()
2012 | 175 | BaseTranslator.visit_footnote(self, node) | ||
2013 | 176 | |||
2014 | 177 | # overwritten -- handle footnotes rubric | ||
2015 | 178 | def visit_rubric(self, node): | ||
2016 | 179 | self.ensure_eol() | ||
2017 | 180 | if len(node.children) == 1: | ||
2018 | 181 | rubtitle = node.children[0].astext() | ||
2019 | 182 | if rubtitle in ('Footnotes', _('Footnotes')): | ||
2020 | 183 | self.body.append('.SH ' + self.deunicode(rubtitle).upper() + | ||
2021 | 184 | '\n') | ||
2022 | 185 | raise nodes.SkipNode | ||
2023 | 186 | else: | ||
2024 | 187 | self.body.append('.sp\n') | ||
2025 | 188 | def depart_rubric(self, node): | ||
2026 | 189 | pass | ||
2027 | 190 | |||
2028 | 191 | def visit_seealso(self, node): | ||
2029 | 192 | self.visit_admonition(node) | ||
2030 | 193 | def depart_seealso(self, node): | ||
2031 | 194 | self.depart_admonition(node) | ||
2032 | 195 | |||
2033 | 196 | # overwritten -- use our own label translations | ||
2034 | 197 | def visit_admonition(self, node, name=None): | ||
2035 | 198 | if name: | ||
2036 | 199 | self.body.append('.IP %s\n' % | ||
2037 | 200 | self.deunicode(admonitionlabels.get(name, name))) | ||
2038 | 201 | |||
2039 | 202 | def visit_productionlist(self, node): | ||
2040 | 203 | self.ensure_eol() | ||
2041 | 204 | names = [] | ||
2042 | 205 | self.in_productionlist += 1 | ||
2043 | 206 | self.body.append('.sp\n.nf\n') | ||
2044 | 207 | for production in node: | ||
2045 | 208 | names.append(production['tokenname']) | ||
2046 | 209 | maxlen = max(len(name) for name in names) | ||
2047 | 210 | for production in node: | ||
2048 | 211 | if production['tokenname']: | ||
2049 | 212 | lastname = production['tokenname'].ljust(maxlen) | ||
2050 | 213 | self.body.append(self.defs['strong'][0]) | ||
2051 | 214 | self.body.append(self.deunicode(lastname)) | ||
2052 | 215 | self.body.append(self.defs['strong'][1]) | ||
2053 | 216 | self.body.append(' ::= ') | ||
2054 | 217 | else: | ||
2055 | 218 | self.body.append('%s ' % (' '*len(lastname))) | ||
2056 | 219 | production.walkabout(self) | ||
2057 | 220 | self.body.append('\n') | ||
2058 | 221 | self.body.append('\n.fi\n') | ||
2059 | 222 | self.in_productionlist -= 1 | ||
2060 | 223 | raise nodes.SkipNode | ||
2061 | 224 | |||
2062 | 225 | def visit_production(self, node): | ||
2063 | 226 | pass | ||
2064 | 227 | def depart_production(self, node): | ||
2065 | 228 | pass | ||
2066 | 229 | |||
2067 | 230 | # overwritten -- don't emit a warning for images | ||
2068 | 231 | def visit_image(self, node): | ||
2069 | 232 | if 'alt' in node.attributes: | ||
2070 | 233 | self.body.append(_('[image: %s]') % node['alt'] + '\n') | ||
2071 | 234 | self.body.append(_('[image]') + '\n') | ||
2072 | 235 | raise nodes.SkipNode | ||
2073 | 236 | |||
2074 | 237 | # overwritten -- don't visit inner marked up nodes | ||
2075 | 238 | def visit_reference(self, node): | ||
2076 | 239 | self.body.append(self.defs['reference'][0]) | ||
2077 | 240 | self.body.append(node.astext()) | ||
2078 | 241 | self.body.append(self.defs['reference'][1]) | ||
2079 | 242 | |||
2080 | 243 | uri = node.get('refuri', '') | ||
2081 | 244 | if uri.startswith('mailto:') or uri.startswith('http:') or \ | ||
2082 | 245 | uri.startswith('https:') or uri.startswith('ftp:'): | ||
2083 | 246 | # if configured, put the URL after the link | ||
2084 | 247 | if self.builder.config.man_show_urls and \ | ||
2085 | 248 | node.astext() != uri: | ||
2086 | 249 | if uri.startswith('mailto:'): | ||
2087 | 250 | uri = uri[7:] | ||
2088 | 251 | self.body.extend([ | ||
2089 | 252 | ' <', | ||
2090 | 253 | self.defs['strong'][0], uri, self.defs['strong'][1], | ||
2091 | 254 | '>']) | ||
2092 | 255 | raise nodes.SkipNode | ||
2093 | 256 | |||
2094 | 257 | def visit_centered(self, node): | ||
2095 | 258 | self.ensure_eol() | ||
2096 | 259 | self.body.append('.sp\n.ce\n') | ||
2097 | 260 | def depart_centered(self, node): | ||
2098 | 261 | self.body.append('\n.ce 0\n') | ||
2099 | 262 | |||
2100 | 263 | def visit_compact_paragraph(self, node): | ||
2101 | 264 | pass | ||
2102 | 265 | def depart_compact_paragraph(self, node): | ||
2103 | 266 | pass | ||
2104 | 267 | |||
2105 | 268 | def visit_highlightlang(self, node): | ||
2106 | 269 | pass | ||
2107 | 270 | def depart_highlightlang(self, node): | ||
2108 | 271 | pass | ||
2109 | 272 | |||
2110 | 273 | def visit_download_reference(self, node): | ||
2111 | 274 | pass | ||
2112 | 275 | def depart_download_reference(self, node): | ||
2113 | 276 | pass | ||
2114 | 277 | |||
2115 | 278 | def visit_toctree(self, node): | ||
2116 | 279 | raise nodes.SkipNode | ||
2117 | 280 | |||
2118 | 281 | def visit_index(self, node): | ||
2119 | 282 | raise nodes.SkipNode | ||
2120 | 283 | |||
2121 | 284 | def visit_tabular_col_spec(self, node): | ||
2122 | 285 | raise nodes.SkipNode | ||
2123 | 286 | |||
2124 | 287 | def visit_glossary(self, node): | ||
2125 | 288 | pass | ||
2126 | 289 | def depart_glossary(self, node): | ||
2127 | 290 | pass | ||
2128 | 291 | |||
2129 | 292 | def visit_acks(self, node): | ||
2130 | 293 | self.ensure_eol() | ||
2131 | 294 | self.body.append(', '.join(n.astext() | ||
2132 | 295 | for n in node.children[0].children) + '.') | ||
2133 | 296 | self.body.append('\n') | ||
2134 | 297 | raise nodes.SkipNode | ||
2135 | 298 | |||
2136 | 299 | def visit_hlist(self, node): | ||
2137 | 300 | self.visit_bullet_list(node) | ||
2138 | 301 | def depart_hlist(self, node): | ||
2139 | 302 | self.depart_bullet_list(node) | ||
2140 | 303 | |||
2141 | 304 | def visit_hlistcol(self, node): | ||
2142 | 305 | pass | ||
2143 | 306 | def depart_hlistcol(self, node): | ||
2144 | 307 | pass | ||
2145 | 308 | |||
2146 | 309 | def visit_literal_emphasis(self, node): | ||
2147 | 310 | return self.visit_emphasis(node) | ||
2148 | 311 | def depart_literal_emphasis(self, node): | ||
2149 | 312 | return self.depart_emphasis(node) | ||
2150 | 313 | |||
2151 | 314 | def visit_abbreviation(self, node): | ||
2152 | 315 | pass | ||
2153 | 316 | def depart_abbreviation(self, node): | ||
2154 | 317 | pass | ||
2155 | 318 | |||
2156 | 319 | # overwritten: handle section titles better than in 0.6 release | ||
2157 | 320 | def visit_title(self, node): | ||
2158 | 321 | if isinstance(node.parent, addnodes.seealso): | ||
2159 | 322 | self.body.append('.IP "') | ||
2160 | 323 | return | ||
2161 | 324 | elif isinstance(node.parent, nodes.section): | ||
2162 | 325 | if self.section_level == 0: | ||
2163 | 326 | # skip the document title | ||
2164 | 327 | raise nodes.SkipNode | ||
2165 | 328 | elif self.section_level == 1: | ||
2166 | 329 | self.body.append('.SH %s\n' % | ||
2167 | 330 | self.deunicode(node.astext().upper())) | ||
2168 | 331 | raise nodes.SkipNode | ||
2169 | 332 | return BaseTranslator.visit_title(self, node) | ||
2170 | 333 | def depart_title(self, node): | ||
2171 | 334 | if isinstance(node.parent, addnodes.seealso): | ||
2172 | 335 | self.body.append('"\n') | ||
2173 | 336 | return | ||
2174 | 337 | return BaseTranslator.depart_title(self, node) | ||
2175 | 338 | |||
2176 | 339 | def visit_raw(self, node): | ||
2177 | 340 | if 'manpage' in node.get('format', '').split(): | ||
2178 | 341 | self.body.append(node.astext()) | ||
2179 | 342 | raise nodes.SkipNode | ||
2180 | 343 | |||
2181 | 344 | def unknown_visit(self, node): | ||
2182 | 345 | raise NotImplementedError('Unknown node: ' + node.__class__.__name__) | ||
2183 | 346 | 0 | ||
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff'
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx'
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers'
=== added file '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py'
--- .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 1970-01-01 00:00:00 +0000
+++ .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 2013-02-16 12:25:23 +0000
@@ -0,0 +1,345 @@
+# -*- coding: utf-8 -*-
+"""
+    sphinx.writers.manpage
+    ~~~~~~~~~~~~~~~~~~~~~~
+
+    Manual page writer, extended for Sphinx custom nodes.
+
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+from docutils import nodes
+try:
+    from docutils.writers.manpage import MACRO_DEF, Writer, \
+         Translator as BaseTranslator
+    has_manpage_writer = True
+except ImportError:
+    # define the classes in any case, sphinx.application needs it
+    Writer = BaseTranslator = object
+    has_manpage_writer = False
+
+from sphinx import addnodes
+from sphinx.locale import admonitionlabels, versionlabels, _
+from sphinx.util.osutil import ustrftime
+
+
+class ManualPageWriter(Writer):
+    def __init__(self, builder):
+        Writer.__init__(self)
+        self.builder = builder
+
+    def translate(self):
+        visitor = ManualPageTranslator(self.builder, self.document)
+        self.visitor = visitor
+        self.document.walkabout(visitor)
+        self.output = visitor.astext()
+
+
+class ManualPageTranslator(BaseTranslator):
+    """
+    Custom translator.
+    """
+
+    def __init__(self, builder, *args, **kwds):
+        BaseTranslator.__init__(self, *args, **kwds)
+        self.builder = builder
+
+        self.in_productionlist = 0
+
+        # first title is the manpage title
+        self.section_level = -1
+
+        # docinfo set by man_pages config value
+        self._docinfo['title'] = self.document.settings.title
+        self._docinfo['subtitle'] = self.document.settings.subtitle
+        if self.document.settings.authors:
+            # don't set it if no author given
+            self._docinfo['author'] = self.document.settings.authors
+        self._docinfo['manual_section'] = self.document.settings.section
+
+        # docinfo set by other config values
+        self._docinfo['title_upper'] = self._docinfo['title'].upper()
+        if builder.config.today:
+            self._docinfo['date'] = builder.config.today
+        else:
+            self._docinfo['date'] = ustrftime(builder.config.today_fmt
+                                              or _('%B %d, %Y'))
+        self._docinfo['copyright'] = builder.config.copyright
+        self._docinfo['version'] = builder.config.version
+        self._docinfo['manual_group'] = builder.config.project
+
+        # since self.append_header() is never called, need to do this here
+        self.body.append(MACRO_DEF)
+
+    # overwritten -- added quotes around all .TH arguments
+    def header(self):
+        tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
+                " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
+                ".SH NAME\n"
+                "%(title)s \- %(subtitle)s\n")
+        return tmpl % self._docinfo
+
+    def visit_start_of_file(self, node):
+        pass
+    def depart_start_of_file(self, node):
+        pass
+
+    def visit_desc(self, node):
+        self.visit_definition_list(node)
+    def depart_desc(self, node):
+        self.depart_definition_list(node)
+
+    def visit_desc_signature(self, node):
+        self.visit_definition_list_item(node)
+        self.visit_term(node)
+    def depart_desc_signature(self, node):
+        self.depart_term(node)
+
+    def visit_desc_addname(self, node):
+        pass
+    def depart_desc_addname(self, node):
+        pass
+
+    def visit_desc_type(self, node):
+        pass
+    def depart_desc_type(self, node):
+        pass
+
+    def visit_desc_returns(self, node):
+        self.body.append(' -> ')
+    def depart_desc_returns(self, node):
+        pass
+
+    def visit_desc_name(self, node):
+        pass
+    def depart_desc_name(self, node):
+        pass
+
+    def visit_desc_parameterlist(self, node):
+        self.body.append('(')
+        self.first_param = 1
+    def depart_desc_parameterlist(self, node):
+        self.body.append(')')
+
+    def visit_desc_parameter(self, node):
+        if not self.first_param:
+            self.body.append(', ')
+        else:
+            self.first_param = 0
+    def depart_desc_parameter(self, node):
+        pass
+
+    def visit_desc_optional(self, node):
+        self.body.append('[')
+    def depart_desc_optional(self, node):
+        self.body.append(']')
+
+    def visit_desc_annotation(self, node):
+        pass
+    def depart_desc_annotation(self, node):
+        pass
+
+    def visit_desc_content(self, node):
+        self.visit_definition(node)
+    def depart_desc_content(self, node):
+        self.depart_definition(node)
+
+    def visit_refcount(self, node):
+        self.body.append(self.defs['emphasis'][0])
+    def depart_refcount(self, node):
+        self.body.append(self.defs['emphasis'][1])
+
+    def visit_versionmodified(self, node):
+        self.visit_paragraph(node)
+        text = versionlabels[node['type']] % node['version']
+        if len(node):
+            text += ': '
+        else:
+            text += '.'
+        self.body.append(text)
+    def depart_versionmodified(self, node):
+        self.depart_paragraph(node)
+
+    def visit_termsep(self, node):
+        self.body.append(', ')
+        raise nodes.SkipNode
+
+    # overwritten -- we don't want source comments to show up
+    def visit_comment(self, node):
+        raise nodes.SkipNode
+
+    # overwritten -- added ensure_eol()
+    def visit_footnote(self, node):
+        self.ensure_eol()
+        BaseTranslator.visit_footnote(self, node)
+
+    # overwritten -- handle footnotes rubric
+    def visit_rubric(self, node):
+        self.ensure_eol()
+        if len(node.children) == 1:
+            rubtitle = node.children[0].astext()
+            if rubtitle in ('Footnotes', _('Footnotes')):
+                self.body.append('.SH ' + self.deunicode(rubtitle).upper() +
+                                 '\n')
+                raise nodes.SkipNode
+        else:
+            self.body.append('.sp\n')
+    def depart_rubric(self, node):
+        pass
+
+    def visit_seealso(self, node):
+        self.visit_admonition(node)
+    def depart_seealso(self, node):
+        self.depart_admonition(node)
+
+    # overwritten -- use our own label translations
+    def visit_admonition(self, node, name=None):
+        if name:
+            self.body.append('.IP %s\n' %
+                             self.deunicode(admonitionlabels.get(name, name)))
+
+    def visit_productionlist(self, node):
+        self.ensure_eol()
+        names = []
+        self.in_productionlist += 1
+        self.body.append('.sp\n.nf\n')
+        for production in node:
+            names.append(production['tokenname'])
+        maxlen = max(len(name) for name in names)
+        for production in node:
+            if production['tokenname']:
+                lastname = production['tokenname'].ljust(maxlen)
+                self.body.append(self.defs['strong'][0])
+                self.body.append(self.deunicode(lastname))
+                self.body.append(self.defs['strong'][1])
+                self.body.append(' ::= ')
+            else:
+                self.body.append('%s ' % (' '*len(lastname)))
+            production.walkabout(self)
+            self.body.append('\n')
+        self.body.append('\n.fi\n')
+        self.in_productionlist -= 1
+        raise nodes.SkipNode
+
+    def visit_production(self, node):
+        pass
+    def depart_production(self, node):
+        pass
+
+    # overwritten -- don't emit a warning for images
+    def visit_image(self, node):
+        if 'alt' in node.attributes:
+            self.body.append(_('[image: %s]') % node['alt'] + '\n')
+        self.body.append(_('[image]') + '\n')
+        raise nodes.SkipNode
+
+    # overwritten -- don't visit inner marked up nodes
+    def visit_reference(self, node):
+        self.body.append(self.defs['reference'][0])
+        self.body.append(node.astext())
+        self.body.append(self.defs['reference'][1])
+
+        uri = node.get('refuri', '')
+        if uri.startswith('mailto:') or uri.startswith('http:') or \
+               uri.startswith('https:') or uri.startswith('ftp:'):
+            # if configured, put the URL after the link
+            if self.builder.config.man_show_urls and \
+                   node.astext() != uri:
+                if uri.startswith('mailto:'):
+                    uri = uri[7:]
+                self.body.extend([
+                    ' <',
+                    self.defs['strong'][0], uri, self.defs['strong'][1],
+                    '>'])
+        raise nodes.SkipNode
+
+    def visit_centered(self, node):
+        self.ensure_eol()
+        self.body.append('.sp\n.ce\n')
+    def depart_centered(self, node):
+        self.body.append('\n.ce 0\n')
+
+    def visit_compact_paragraph(self, node):
+        pass
+    def depart_compact_paragraph(self, node):
+        pass
+
+    def visit_highlightlang(self, node):
+        pass
+    def depart_highlightlang(self, node):
+        pass
+
+    def visit_download_reference(self, node):
+        pass
+    def depart_download_reference(self, node):
+        pass
+
+    def visit_toctree(self, node):
+        raise nodes.SkipNode
+
+    def visit_index(self, node):
+        raise nodes.SkipNode
+
+    def visit_tabular_col_spec(self, node):
+        raise nodes.SkipNode
+
+    def visit_glossary(self, node):
+        pass
+    def depart_glossary(self, node):
+        pass
+
+    def visit_acks(self, node):
+        self.ensure_eol()
+        self.body.append(', '.join(n.astext()
+                                   for n in node.children[0].children) + '.')
+        self.body.append('\n')
+        raise nodes.SkipNode
+
+    def visit_hlist(self, node):
+        self.visit_bullet_list(node)
+    def depart_hlist(self, node):
+        self.depart_bullet_list(node)
+
+    def visit_hlistcol(self, node):
+        pass
+    def depart_hlistcol(self, node):
+        pass
+
+    def visit_literal_emphasis(self, node):
+        return self.visit_emphasis(node)
+    def depart_literal_emphasis(self, node):
+        return self.depart_emphasis(node)
+
+    def visit_abbreviation(self, node):
+        pass
+    def depart_abbreviation(self, node):
+        pass
+
+    # overwritten: handle section titles better than in 0.6 release
+    def visit_title(self, node):
+        if isinstance(node.parent, addnodes.seealso):
+            self.body.append('.IP "')
+            return
+        elif isinstance(node.parent, nodes.section):
+            if self.section_level == 0:
+                # skip the document title
+                raise nodes.SkipNode
+            elif self.section_level == 1:
+                self.body.append('.SH %s\n' %
+                                 self.deunicode(node.astext().upper()))
+                raise nodes.SkipNode
+        return BaseTranslator.visit_title(self, node)
+    def depart_title(self, node):
+        if isinstance(node.parent, addnodes.seealso):
+            self.body.append('"\n')
+            return
+        return BaseTranslator.depart_title(self, node)
+
+    def visit_raw(self, node):
+        if 'manpage' in node.get('format', '').split():
+            self.body.append(node.astext())
+        raise nodes.SkipNode
+
+    def unknown_visit(self, node):
+        raise NotImplementedError('Unknown node: ' + node.__class__.__name__)
=== added directory '.pc/parallel_2to3.diff'
=== added file '.pc/parallel_2to3.diff/setup.py'
--- .pc/parallel_2to3.diff/setup.py 1970-01-01 00:00:00 +0000
+++ .pc/parallel_2to3.diff/setup.py 2013-02-16 12:25:23 +0000
@@ -0,0 +1,209 @@
+# -*- coding: utf-8 -*-
+try:
+    from setuptools import setup, find_packages
+except ImportError:
+    raise
+    import distribute_setup
+    distribute_setup.use_setuptools()
+    from setuptools import setup, find_packages
+
+import os
+import sys
+from distutils import log
+
+import sphinx
+
+long_desc = '''
+Sphinx is a tool that makes it easy to create intelligent and beautiful
+documentation for Python projects (or other documents consisting of multiple
+reStructuredText sources), written by Georg Brandl.  It was originally created
+for the new Python documentation, and has excellent facilities for Python
+project documentation, but C/C++ is supported as well, and more languages are
+planned.
+
+Sphinx uses reStructuredText as its markup language, and many of its strengths
+come from the power and straightforwardness of reStructuredText and its parsing
+and translating suite, the Docutils.
+
+Among its features are the following:
+
+* Output formats: HTML (including derivative formats such as HTML Help, Epub
+  and Qt Help), plain text, manual pages and LaTeX or direct PDF output
+  using rst2pdf
+* Extensive cross-references: semantic markup and automatic links
+  for functions, classes, glossary terms and similar pieces of information
+* Hierarchical structure: easy definition of a document tree, with automatic
+  links to siblings, parents and children
+* Automatic indices: general index as well as a module index
+* Code handling: automatic highlighting using the Pygments highlighter
+* Flexible HTML output using the Jinja 2 templating engine
+* Various extensions are available, e.g. for automatic testing of snippets
+  and inclusion of appropriately formatted docstrings
+* Setuptools integration
+
+A development egg can be found `here
+<http://bitbucket.org/birkenfeld/sphinx/get/tip.gz#egg=Sphinx-dev>`_.
+'''
+
+requires = ['Pygments>=1.2', 'Jinja2>=2.3', 'docutils>=0.7']
+
+if sys.version_info < (2, 4):
+    print('ERROR: Sphinx requires at least Python 2.4 to run.')
+    sys.exit(1)
+
+if sys.version_info < (2, 5):
+    # Python 2.4's distutils doesn't automatically install an egg-info,
+    # so an existing docutils install won't be detected -- in that case,
+    # remove the dependency from setup.py
+    try:
+        import docutils
+        if int(docutils.__version__[2]) < 4:
+            raise ValueError('docutils not recent enough')
+    except:
+        pass
+    else:
+        del requires[-1]
+
+    # The uuid module is new in the stdlib in 2.5
+    requires.append('uuid>=1.30')
+
+
+# Provide a "compile_catalog" command that also creates the translated
+# JavaScript files if Babel is available.
+
+cmdclass = {}
+
+try:
+    from babel.messages.pofile import read_po
+    from babel.messages.frontend import compile_catalog
+    try:
+        from simplejson import dump
+    except ImportError:
+        from json import dump
+except ImportError:
+    pass
+else:
+    class compile_catalog_plusjs(compile_catalog):
+        """
+        An extended command that writes all message strings that occur in
+        JavaScript files to a JavaScript file along with the .mo file.
+
+        Unfortunately, babel's setup command isn't built very extensible, so
+        most of the run() code is duplicated here.
+        """
+
+        def run(self):
+            compile_catalog.run(self)
+
+            po_files = []
+            js_files = []
+
+            if not self.input_file:
+                if self.locale:
+                    po_files.append((self.locale,
+                                     os.path.join(self.directory, self.locale,
+                                                  'LC_MESSAGES',
+                                                  self.domain + '.po')))
+                    js_files.append(os.path.join(self.directory, self.locale,
+                                                 'LC_MESSAGES',
+                                                 self.domain + '.js'))
+                else:
+                    for locale in os.listdir(self.directory):
+                        po_file = os.path.join(self.directory, locale,
+                                               'LC_MESSAGES',
+                                               self.domain + '.po')
+                        if os.path.exists(po_file):
+                            po_files.append((locale, po_file))
+                            js_files.append(os.path.join(self.directory, locale,
+                                                         'LC_MESSAGES',
+                                                         self.domain + '.js'))
+            else:
+                po_files.append((self.locale, self.input_file))
+                if self.output_file:
+                    js_files.append(self.output_file)
+                else:
+                    js_files.append(os.path.join(self.directory, self.locale,
+                                                 'LC_MESSAGES',
+                                                 self.domain + '.js'))
+
+            for js_file, (locale, po_file) in zip(js_files, po_files):
+                infile = open(po_file, 'r')
+                try:
+                    catalog = read_po(infile, locale)
+                finally:
+                    infile.close()
+
+                if catalog.fuzzy and not self.use_fuzzy:
+                    continue
+
+                log.info('writing JavaScript strings in catalog %r to %r',
+                         po_file, js_file)
+
+                jscatalog = {}
+                for message in catalog:
+                    if any(x[0].endswith('.js') for x in message.locations):
+                        msgid = message.id
+                        if isinstance(msgid, (list, tuple)):
+                            msgid = msgid[0]
+                        jscatalog[msgid] = message.string
+
+                outfile = open(js_file, 'wb')
+                try:
+                    outfile.write('Documentation.addTranslations(');
+                    dump(dict(
+                        messages=jscatalog,
+                        plural_expr=catalog.plural_expr,
+                        locale=str(catalog.locale)
+                    ), outfile)
+                    outfile.write(');')
+                finally:
+                    outfile.close()
+
+    cmdclass['compile_catalog'] = compile_catalog_plusjs
+
+
+setup(
+    name='Sphinx',
+    version=sphinx.__version__,
+    url='http://sphinx.pocoo.org/',
+    download_url='http://pypi.python.org/pypi/Sphinx',
+    license='BSD',
+    author='Georg Brandl',
+    author_email='georg@python.org',
+    description='Python documentation generator',
+    long_description=long_desc,
+    zip_safe=False,
+    classifiers=[
+        'Development Status :: 5 - Production/Stable',
+        'Environment :: Console',
+        'Environment :: Web Environment',
+        'Intended Audience :: Developers',
+        'Intended Audience :: Education',
+        'License :: OSI Approved :: BSD License',
+        'Operating System :: OS Independent',
+        'Programming Language :: Python',
+        'Programming Language :: Python :: 2',
+        'Programming Language :: Python :: 3',
+        'Topic :: Documentation',
+        'Topic :: Text Processing',
+        'Topic :: Utilities',
+    ],
+    platforms='any',
+    packages=find_packages(exclude=['custom_fixers', 'test']),
+    include_package_data=True,
+    entry_points={
+        'console_scripts': [
+            'sphinx-build = sphinx:main',
+            'sphinx-quickstart = sphinx.quickstart:main',
+            'sphinx-apidoc = sphinx.apidoc:main',
2740 | 199 | 'sphinx-autogen = sphinx.ext.autosummary.generate:main', | ||
2741 | 200 | ], | ||
2742 | 201 | 'distutils.commands': [ | ||
2743 | 202 | 'build_sphinx = sphinx.setup_command:BuildDoc', | ||
2744 | 203 | ], | ||
2745 | 204 | }, | ||
2746 | 205 | install_requires=requires, | ||
2747 | 206 | cmdclass=cmdclass, | ||
2748 | 207 | use_2to3=True, | ||
2749 | 208 | use_2to3_fixers=['custom_fixers'], | ||
2750 | 209 | ) | ||
2751 | 0 | 210 | ||
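The `compile_catalog_plusjs` command above serializes each Babel catalog as a JSON payload wrapped in a `Documentation.addTranslations(...)` call, which Sphinx's searchtools JavaScript consumes in the browser. A minimal sketch of that wrapping (the helper name `write_js_catalog` is hypothetical; the real command additionally filters messages by `.js` source locations):

```python
import json

def write_js_catalog(messages, locale, plural_expr='(n != 1)'):
    # Wrap the message dict the same way Sphinx's generated
    # <domain>.js files do, so Documentation.addTranslations()
    # can consume it client-side.
    payload = {
        'messages': messages,
        'plural_expr': plural_expr,
        'locale': locale,
    }
    return 'Documentation.addTranslations(%s);' % json.dumps(payload)

js = write_js_catalog({'Search': 'Suche'}, 'de')
```

The payload between the parentheses is plain JSON, which is why `dump()` from a JSON serializer suffices in the setup.py code above.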
2752 | === modified file 'debian/changelog' | |||
2753 | --- debian/changelog 2012-11-27 19:20:44 +0000 | |||
2754 | +++ debian/changelog 2013-02-16 12:25:23 +0000 | |||
2755 | @@ -1,3 +1,27 @@ | |||
2756 | 1 | sphinx (1.1.3+dfsg-7ubuntu1) raring; urgency=low | ||
2757 | 2 | |||
2758 | 3 | * Merge with Debian experimental. Remaining Ubuntu changes: | ||
2759 | 4 | - Switch to dh_python2. | ||
2760 | 5 | - debian/rules: export NO_PKG_MANGLE=1 in order to not have translations | ||
2761 | 6 | stripped. | ||
2762 | 7 | - debian/control: Drop the build-dependency on python-whoosh. | ||
2763 | 8 | - debian/control: Add "XS-Testsuite: autopkgtest" header. | ||
2764 | 9 | * debian/patches/fix_manpages_generation_with_new_docutils.diff: | ||
2765 | 10 | dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff. | ||
2766 | 11 | * debian/patches/fix_literal_block_warning.diff: add patch to avoid | ||
2767 | 12 | false-positive "Literal block expected; none found." warnings when | ||
2768 | 13 | building l10n projects. | ||
2769 | 14 | |||
2770 | 15 | -- Dmitry Shachnev <mitya57@ubuntu.com> Sat, 16 Feb 2013 14:51:12 +0400 | ||
2771 | 16 | |||
2772 | 17 | sphinx (1.1.3+dfsg-7) experimental; urgency=low | ||
2773 | 18 | |||
2774 | 19 | * Backport upstream patch to fix compatibility with Docutils 0.10. | ||
2775 | 20 | * Run 2to3 in parallel. | ||
2776 | 21 | * Add DEP-8 tests for the documentation package. | ||
2777 | 22 | |||
2778 | 23 | -- Jakub Wilk <jwilk@debian.org> Wed, 19 Dec 2012 10:53:51 +0100 | ||
2779 | 24 | |||
2780 | 1 | sphinx (1.1.3+dfsg-5ubuntu1) raring; urgency=low | 25 | sphinx (1.1.3+dfsg-5ubuntu1) raring; urgency=low |
2781 | 2 | 26 | ||
2782 | 3 | * Merge with Debian packaging SVN. | 27 | * Merge with Debian packaging SVN. |
2783 | @@ -14,17 +38,24 @@ | |||
2784 | 14 | 38 | ||
2785 | 15 | -- Dmitry Shachnev <mitya57@ubuntu.com> Tue, 27 Nov 2012 19:20:44 +0400 | 39 | -- Dmitry Shachnev <mitya57@ubuntu.com> Tue, 27 Nov 2012 19:20:44 +0400 |
2786 | 16 | 40 | ||
2788 | 17 | sphinx (1.1.3+dfsg-6) UNRELEASED; urgency=low | 41 | sphinx (1.1.3+dfsg-6) experimental; urgency=low |
2789 | 18 | 42 | ||
2790 | 19 | [ Jakub Wilk ] | 43 | [ Jakub Wilk ] |
2791 | 20 | * DEP-8 tests: remove “Features: no-build-needed”; it's the default now. | 44 | * DEP-8 tests: remove “Features: no-build-needed”; it's the default now. |
2792 | 21 | * Bump standards version to 3.9.4; no changes needed. | 45 | * Bump standards version to 3.9.4; no changes needed. |
2793 | 46 | * Pass -a to xvfb-run, so that it tries to get a free server number. | ||
2794 | 47 | * Rebuild MO files from source. | ||
2795 | 48 | + Update debian/rules. | ||
2796 | 49 | + Add the rebuilt files to extend-diff-ignore. | ||
2797 | 50 | * Make synopses in the patch header start with a lowercase letter and not | ||
2798 | 51 | end with a full stop. | ||
2799 | 22 | 52 | ||
2800 | 23 | [ Dmitry Shachnev ] | 53 | [ Dmitry Shachnev ] |
2801 | 24 | * debian/patches/l10n_fixes.diff: fix crashes and not working external | 54 | * debian/patches/l10n_fixes.diff: fix crashes and not working external |
2802 | 25 | links in l10n mode (closes: #691719). | 55 | links in l10n mode (closes: #691719). |
2803 | 56 | * debian/patches/sort_stopwords.diff: mark as applied upstream. | ||
2804 | 26 | 57 | ||
2806 | 27 | -- Jakub Wilk <jwilk@debian.org> Tue, 13 Nov 2012 22:36:10 +0100 | 58 | -- Jakub Wilk <jwilk@debian.org> Sat, 08 Dec 2012 14:38:19 +0100 |
2807 | 28 | 59 | ||
2808 | 29 | sphinx (1.1.3+dfsg-5) experimental; urgency=low | 60 | sphinx (1.1.3+dfsg-5) experimental; urgency=low |
2809 | 30 | 61 | ||
2810 | 31 | 62 | ||
2811 | === modified file 'debian/jstest/run-tests' | |||
2812 | --- debian/jstest/run-tests 2012-03-12 12:18:37 +0000 | |||
2813 | +++ debian/jstest/run-tests 2013-02-16 12:25:23 +0000 | |||
2814 | @@ -26,7 +26,7 @@ | |||
2815 | 26 | if __name__ == '__main__': | 26 | if __name__ == '__main__': |
2816 | 27 | if not os.getenv('DISPLAY'): | 27 | if not os.getenv('DISPLAY'): |
2817 | 28 | raise RuntimeError('These tests requires access to an X server') | 28 | raise RuntimeError('These tests requires access to an X server') |
2819 | 29 | build_directory = os.path.join(os.path.dirname(__file__), '..', '..', 'build', 'html') | 29 | [build_directory] = sys.argv[1:] |
2820 | 30 | build_directory = os.path.abspath(build_directory) | 30 | build_directory = os.path.abspath(build_directory) |
2821 | 31 | n_failures = 0 | 31 | n_failures = 0 |
2822 | 32 | for testcase in t1, t2, t3: | 32 | for testcase in t1, t2, t3: |
2823 | 33 | 33 | ||
2824 | === added file 'debian/patches/fix_literal_block_warning.diff' | |||
2825 | --- debian/patches/fix_literal_block_warning.diff 1970-01-01 00:00:00 +0000 | |||
2826 | +++ debian/patches/fix_literal_block_warning.diff 2013-02-16 12:25:23 +0000 | |||
2827 | @@ -0,0 +1,17 @@ | |||
2828 | 1 | Description: avoid false-positive warnings about missing literal block | ||
2829 | 2 | Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/e2338c4fcf86 | ||
2830 | 3 | Last-Update: 2013-02-16 | ||
2831 | 4 | |||
2832 | 5 | --- a/sphinx/environment.py 2013-02-16 15:24:05.699388441 +0400 | ||
2833 | 6 | +++ b/sphinx/environment.py 2013-02-16 15:24:05.691388443 +0400 | ||
2834 | 7 | @@ -218,6 +218,10 @@ | ||
2835 | 8 | if not msgstr or msgstr == msg: # as-of-yet untranslated | ||
2836 | 9 | continue | ||
2837 | 10 | |||
2838 | 11 | + # Avoid "Literal block expected; none found." warnings. | ||
2839 | 12 | + if msgstr.strip().endswith('::'): | ||
2840 | 13 | + msgstr += '\n\n dummy literal' | ||
2841 | 14 | + | ||
2842 | 15 | patch = new_document(source, settings) | ||
2843 | 16 | parser.parse(msgstr, patch) | ||
2844 | 17 | patch = patch[0] | ||
2845 | 0 | 18 | ||
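The workaround in this patch is small enough to demonstrate standalone: in reST, a paragraph ending with `::` announces a literal block, so when a translated msgstr keeps the marker but the literal block itself is a separate (untranslated) message, docutils warns "Literal block expected; none found." Appending a dummy indented block satisfies the parser. A sketch with a hypothetical function name:

```python
def pad_literal_marker(msgstr):
    # A trailing '::' promises a literal block; when the block is
    # not part of this translated message, append a dummy one so
    # docutils does not emit a false-positive warning.
    if msgstr.strip().endswith('::'):
        msgstr += '\n\n dummy literal'
    return msgstr
```

The dummy text never reaches the output: only the first parsed node (`patch[0]`, the paragraph) is kept by the surrounding code.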
2846 | === removed file 'debian/patches/fix_manpages_generation_with_new_docutils.diff' | |||
2847 | --- debian/patches/fix_manpages_generation_with_new_docutils.diff 2012-10-22 20:20:35 +0000 | |||
2848 | +++ debian/patches/fix_manpages_generation_with_new_docutils.diff 1970-01-01 00:00:00 +0000 | |||
2849 | @@ -1,33 +0,0 @@ | |||
2850 | 1 | Description: Fix build failure with Docutils 0.9 | ||
2851 | 2 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/998/docutils-010-will-break-sphinx-manpage | ||
2852 | 3 | Author: Toshio Kuratomi <a.badger@gmail.com> | ||
2853 | 4 | Last-Update: 2012-10-22 | ||
2854 | 5 | |||
2855 | 6 | diff -up a/sphinx/writers/manpage.py b/sphinx/writers/manpage.py | ||
2856 | 7 | --- a/sphinx/writers/manpage.py 2011-11-01 00:38:44.000000000 -0700 | ||
2857 | 8 | +++ b/sphinx/writers/manpage.py 2012-08-21 12:38:33.380808202 -0700 | ||
2858 | 9 | @@ -72,6 +72,11 @@ class ManualPageTranslator(BaseTranslato | ||
2859 | 10 | # since self.append_header() is never called, need to do this here | ||
2860 | 11 | self.body.append(MACRO_DEF) | ||
2861 | 12 | |||
2862 | 13 | + # Overwrite admonition label translations with our own | ||
2863 | 14 | + for label, translation in admonitionlabels.items(): | ||
2864 | 15 | + self.language.labels[label] = self.deunicode(translation) | ||
2865 | 16 | + | ||
2866 | 17 | + | ||
2867 | 18 | # overwritten -- added quotes around all .TH arguments | ||
2868 | 19 | def header(self): | ||
2869 | 20 | tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" | ||
2870 | 21 | @@ -193,12 +198,6 @@ class ManualPageTranslator(BaseTranslato | ||
2871 | 22 | def depart_seealso(self, node): | ||
2872 | 23 | self.depart_admonition(node) | ||
2873 | 24 | |||
2874 | 25 | - # overwritten -- use our own label translations | ||
2875 | 26 | - def visit_admonition(self, node, name=None): | ||
2876 | 27 | - if name: | ||
2877 | 28 | - self.body.append('.IP %s\n' % | ||
2878 | 29 | - self.deunicode(admonitionlabels.get(name, name))) | ||
2879 | 30 | - | ||
2880 | 31 | def visit_productionlist(self, node): | ||
2881 | 32 | self.ensure_eol() | ||
2882 | 33 | names = [] | ||
2883 | 34 | 0 | ||
2884 | === modified file 'debian/patches/initialize_autodoc.diff' | |||
2885 | --- debian/patches/initialize_autodoc.diff 2012-02-05 19:33:59 +0000 | |||
2886 | +++ debian/patches/initialize_autodoc.diff 2013-02-16 12:25:23 +0000 | |||
2887 | @@ -1,4 +1,4 @@ | |||
2889 | 1 | Description: Make sphinx-autogen initialize the sphinx.ext.autodoc module. | 1 | Description: make sphinx-autogen initialize the sphinx.ext.autodoc module |
2890 | 2 | Author: Jakub Wilk <jwilk@debian.org> | 2 | Author: Jakub Wilk <jwilk@debian.org> |
2891 | 3 | Bug-Debian: http://bugs.debian.org/611078 | 3 | Bug-Debian: http://bugs.debian.org/611078 |
2892 | 4 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/618 | 4 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/618 |
2893 | 5 | 5 | ||
2894 | === modified file 'debian/patches/l10n_fixes.diff' | |||
2895 | --- debian/patches/l10n_fixes.diff 2012-11-27 19:20:44 +0000 | |||
2896 | +++ debian/patches/l10n_fixes.diff 2013-02-16 12:25:23 +0000 | |||
2897 | @@ -1,15 +1,14 @@ | |||
2902 | 1 | Description: Fix l10n build of text containing footnotes | 1 | Description: fix l10n build of text containing footnotes or external links |
2903 | 2 | Based on initial patch by Cristophe Simonis and modifications by Takayuki Shimizukawa | 2 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955 |
2900 | 3 | (upstream pull request #86). | ||
2901 | 4 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955/cant-build-html-with-footnotes-when-using | ||
2904 | 5 | Bug-Debian: http://bugs.debian.org/691719 | 3 | Bug-Debian: http://bugs.debian.org/691719 |
2905 | 4 | Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/b7b808e468 | ||
2906 | 5 | and https://bitbucket.org/birkenfeld/sphinx/commits/870a91ca86 | ||
2907 | 6 | Author: Takayuki Shimizukawa <shimizukawa@gmail.com> | 6 | Author: Takayuki Shimizukawa <shimizukawa@gmail.com> |
2909 | 7 | Last-Update: 2012-11-27 | 7 | Last-Update: 2012-12-08 |
2910 | 8 | 8 | ||
2911 | 9 | === modified file 'sphinx/environment.py' | ||
2912 | 10 | --- a/sphinx/environment.py 2012-03-12 12:18:37 +0000 | 9 | --- a/sphinx/environment.py 2012-03-12 12:18:37 +0000 |
2915 | 11 | +++ b/sphinx/environment.py 2012-11-27 14:05:36 +0000 | 10 | +++ b/sphinx/environment.py 2012-12-07 12:22:33 +0000 |
2916 | 12 | @@ -213,16 +213,44 @@ | 11 | @@ -213,16 +213,61 @@ |
2917 | 13 | parser = RSTParser() | 12 | parser = RSTParser() |
2918 | 14 | 13 | ||
2919 | 15 | for node, msg in extract_messages(self.document): | 14 | for node, msg in extract_messages(self.document): |
2920 | @@ -26,31 +25,48 @@ | |||
2921 | 26 | if not isinstance(patch, nodes.paragraph): | 25 | if not isinstance(patch, nodes.paragraph): |
2922 | 27 | continue # skip for now | 26 | continue # skip for now |
2923 | 28 | + | 27 | + |
2949 | 29 | + footnote_refs = [r for r in node.children | 28 | + # auto-numbered foot note reference should use original 'ids'. |
2950 | 30 | + if isinstance(r, nodes.footnote_reference) | 29 | + is_autonumber_footnote_ref = lambda node: \ |
2951 | 31 | + and r.get('auto') == 1] | 30 | + isinstance(node, nodes.footnote_reference) \ |
2952 | 32 | + refs = [r for r in node.children if isinstance(r, nodes.reference)] | 31 | + and node.get('auto') == 1 |
2953 | 33 | + | 32 | + old_foot_refs = node.traverse(is_autonumber_footnote_ref) |
2954 | 34 | + for i, child in enumerate(patch.children): # update leaves | 33 | + new_foot_refs = patch.traverse(is_autonumber_footnote_ref) |
2955 | 35 | + if isinstance(child, nodes.footnote_reference) \ | 34 | + if len(old_foot_refs) != len(new_foot_refs): |
2956 | 36 | + and child.get('auto') == 1: | 35 | + env.warn_node('inconsistent footnote references in ' |
2957 | 37 | + # use original 'footnote_reference' object. | 36 | + 'translated message', node) |
2958 | 38 | + # this object is already registered in self.document.autofootnote_refs | 37 | + for old, new in zip(old_foot_refs, new_foot_refs): |
2959 | 39 | + patch.children[i] = footnote_refs.pop(0) | 38 | + new['ids'] = old['ids'] |
2960 | 40 | + # Some duplicated footnote_reference in msgstr causes | 39 | + self.document.autofootnote_refs.remove(old) |
2961 | 41 | + # IndexError in .pop(0). That is invalid msgstr. | 40 | + self.document.note_autofootnote_ref(new) |
2962 | 42 | + | 41 | + |
2963 | 43 | + elif isinstance(child, nodes.reference): | 42 | + # reference should use original 'refname'. |
2964 | 44 | + # reference should use original 'refname'. | 43 | + # * reference target ".. _Python: ..." is not translatable. |
2965 | 45 | + # * reference target ".. _Python: ..." is not translatable. | 44 | + # * section refname is not translatable. |
2966 | 46 | + # * section refname is not translatable. | 45 | + # * inline reference "`Python <...>`_" has no 'refname'. |
2967 | 47 | + # * inline reference "`Python <...>`_" has no 'refname'. | 46 | + is_refnamed_ref = lambda node: \ |
2968 | 48 | + if refs and 'refname' in refs[0]: | 47 | + isinstance(node, nodes.reference) \ |
2969 | 49 | + refname = child['refname'] = refs.pop(0)['refname'] | 48 | + and 'refname' in node |
2970 | 50 | + self.document.refnames.setdefault( | 49 | + old_refs = node.traverse(is_refnamed_ref) |
2971 | 51 | + refname, []).append(child) | 50 | + new_refs = patch.traverse(is_refnamed_ref) |
2972 | 52 | + # if number of reference nodes had been changed, that | 51 | + applied_refname_map = {} |
2973 | 53 | + # would often generate unknown link target warning. | 52 | + if len(old_refs) != len(new_refs): |
2974 | 53 | + env.warn_node('inconsistent references in ' | ||
2975 | 54 | + 'translated message', node) | ||
2976 | 55 | + for new in new_refs: | ||
2977 | 56 | + if new['refname'] in applied_refname_map: | ||
2978 | 57 | + # 2nd appearance of the reference | ||
2979 | 58 | + new['refname'] = applied_refname_map[new['refname']] | ||
2980 | 59 | + elif old_refs: | ||
2981 | 60 | + # 1st appearance of the reference in old_refs | ||
2982 | 61 | + old = old_refs.pop(0) | ||
2983 | 62 | + refname = old['refname'] | ||
2984 | 63 | + new['refname'] = refname | ||
2985 | 64 | + applied_refname_map[new['refname']] = refname | ||
2986 | 65 | + else: | ||
2987 | 66 | + # the reference is not found in old_refs | ||
2988 | 67 | + applied_refname_map[new['refname']] = new['refname'] | ||
2989 | 68 | + | ||
2990 | 69 | + self.document.note_refname(new) | ||
2991 | 54 | + | 70 | + |
2992 | 55 | for child in patch.children: # update leaves | 71 | for child in patch.children: # update leaves |
2993 | 56 | child.parent = node | 72 | child.parent = node |
2994 | 57 | 73 | ||
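The pairing logic in the hunk above can be illustrated with plain dicts standing in for docutils nodes (a hypothetical sketch; the real code walks nodes with `traverse()` and also updates the document's autofootnote registry):

```python
def remap_footnote_ids(old_refs, new_refs, warn):
    # Translated paragraphs are re-parsed, producing fresh
    # footnote_reference nodes; copying the original 'ids' onto
    # the new nodes keeps auto-numbering and backlinks intact.
    # A length mismatch means the msgstr dropped or duplicated a
    # footnote marker, which only merits a warning.
    if len(old_refs) != len(new_refs):
        warn('inconsistent footnote references in translated message')
    for old, new in zip(old_refs, new_refs):
        new['ids'] = old['ids']

warnings = []
old_refs = [{'ids': ['id1']}, {'ids': ['id2']}]
new_refs = [{'ids': []}]
remap_footnote_ids(old_refs, new_refs, warnings.append)
```

Because `zip()` stops at the shorter list, a malformed msgstr degrades to a warning instead of the `IndexError` that the previous `.pop(0)`-based implementation could raise.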
2995 | === added file 'debian/patches/manpage_writer_docutils_0.10_api.diff' | |||
2996 | --- debian/patches/manpage_writer_docutils_0.10_api.diff 1970-01-01 00:00:00 +0000 | |||
2997 | +++ debian/patches/manpage_writer_docutils_0.10_api.diff 2013-02-16 12:25:23 +0000 | |||
2998 | @@ -0,0 +1,31 @@ | |||
2999 | 1 | Description: port manpage writer to docutils 0.10 API | ||
3000 | 2 | Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/ffb145b7884f | ||
3001 | 3 | Last-Update: 2012-12-18 | ||
3002 | 4 | |||
3003 | 5 | --- a/sphinx/writers/manpage.py | ||
3004 | 6 | +++ b/sphinx/writers/manpage.py | ||
3005 | 7 | @@ -72,6 +72,11 @@ | ||
3006 | 8 | # since self.append_header() is never called, need to do this here | ||
3007 | 9 | self.body.append(MACRO_DEF) | ||
3008 | 10 | |||
3009 | 11 | + # Overwrite admonition label translations with our own | ||
3010 | 12 | + for label, translation in admonitionlabels.items(): | ||
3011 | 13 | + self.language.labels[label] = self.deunicode(translation) | ||
3012 | 14 | + | ||
3013 | 15 | + | ||
3014 | 16 | # overwritten -- added quotes around all .TH arguments | ||
3015 | 17 | def header(self): | ||
3016 | 18 | tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\"" | ||
3017 | 19 | @@ -193,12 +198,6 @@ | ||
3018 | 20 | def depart_seealso(self, node): | ||
3019 | 21 | self.depart_admonition(node) | ||
3020 | 22 | |||
3021 | 23 | - # overwritten -- use our own label translations | ||
3022 | 24 | - def visit_admonition(self, node, name=None): | ||
3023 | 25 | - if name: | ||
3024 | 26 | - self.body.append('.IP %s\n' % | ||
3025 | 27 | - self.deunicode(admonitionlabels.get(name, name))) | ||
3026 | 28 | - | ||
3027 | 29 | def visit_productionlist(self, node): | ||
3028 | 30 | self.ensure_eol() | ||
3029 | 31 | names = [] | ||
3030 | 0 | 32 | ||
3031 | === added file 'debian/patches/parallel_2to3.diff' | |||
3032 | --- debian/patches/parallel_2to3.diff 1970-01-01 00:00:00 +0000 | |||
3033 | +++ debian/patches/parallel_2to3.diff 2013-02-16 12:25:23 +0000 | |||
3034 | @@ -0,0 +1,31 @@ | |||
3035 | 1 | Description: run 2to3 in parallel | ||
3036 | 2 | Author: Jakub Wilk <jwilk@debian.org> | ||
3037 | 3 | Forwarded: not-needed | ||
3038 | 4 | Last-Update: 2012-12-18 | ||
3039 | 5 | |||
3040 | 6 | --- a/setup.py | ||
3041 | 7 | +++ b/setup.py | ||
3042 | 8 | @@ -67,6 +67,23 @@ | ||
3043 | 9 | # The uuid module is new in the stdlib in 2.5 | ||
3044 | 10 | requires.append('uuid>=1.30') | ||
3045 | 11 | |||
3046 | 12 | +if sys.version_info >= (3,): | ||
3047 | 13 | + | ||
3048 | 14 | + num_processes = 1 | ||
3049 | 15 | + for option in os.environ.get('DEB_BUILD_OPTIONS', '').split(): | ||
3050 | 16 | + if option.startswith('parallel='): | ||
3051 | 17 | + num_processes = int(option.split('=', 1)[1]) | ||
3052 | 18 | + if num_processes > 1: | ||
3053 | 19 | + import lib2to3.refactor | ||
3054 | 20 | + class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool): | ||
3055 | 21 | + def refactor(self, items, write=False, doctests_only=False): | ||
3056 | 22 | + return lib2to3.refactor.MultiprocessRefactoringTool.refactor( | ||
3057 | 23 | + self, items, | ||
3058 | 24 | + write=write, | ||
3059 | 25 | + doctests_only=doctests_only, | ||
3060 | 26 | + num_processes=num_processes | ||
3061 | 27 | + ) | ||
3062 | 28 | + lib2to3.refactor.RefactoringTool = RefactoringTool | ||
3063 | 29 | |||
3064 | 30 | # Provide a "compile_catalog" command that also creates the translated | ||
3065 | 31 | # JavaScript files if Babel is available. | ||
3066 | 0 | 32 | ||
3067 | === modified file 'debian/patches/python3_test_build_dir.diff' | |||
3068 | --- debian/patches/python3_test_build_dir.diff 2012-02-14 00:13:35 +0000 | |||
3069 | +++ debian/patches/python3_test_build_dir.diff 2013-02-16 12:25:23 +0000 | |||
3070 | @@ -1,4 +1,4 @@ | |||
3072 | 1 | Description: Fix build directory for test runner. | 1 | Description: fix build directory for test runner |
3073 | 2 | Hardcode Python 3 build directory in the test runner to the one that Debian | 2 | Hardcode Python 3 build directory in the test runner to the one that Debian |
3074 | 3 | package uses. | 3 | package uses. |
3075 | 4 | Author: Jakub Wilk <jwilk@debian.org> | 4 | Author: Jakub Wilk <jwilk@debian.org> |
3076 | 5 | 5 | ||
3077 | === modified file 'debian/patches/series' | |||
3078 | --- debian/patches/series 2012-11-27 19:20:44 +0000 | |||
3079 | +++ debian/patches/series 2013-02-16 12:25:23 +0000 | |||
3080 | @@ -8,8 +8,10 @@ | |||
3081 | 8 | fix_nepali_po.diff | 8 | fix_nepali_po.diff |
3082 | 9 | pygments_byte_strings.diff | 9 | pygments_byte_strings.diff |
3083 | 10 | fix_shorthandoff.diff | 10 | fix_shorthandoff.diff |
3084 | 11 | fix_manpages_generation_with_new_docutils.diff | ||
3085 | 12 | test_build_html_rb.diff | 11 | test_build_html_rb.diff |
3086 | 13 | sort_stopwords.diff | 12 | sort_stopwords.diff |
3087 | 14 | support_python_3.3.diff | 13 | support_python_3.3.diff |
3088 | 15 | l10n_fixes.diff | 14 | l10n_fixes.diff |
3089 | 15 | manpage_writer_docutils_0.10_api.diff | ||
3090 | 16 | parallel_2to3.diff | ||
3091 | 17 | fix_literal_block_warning.diff | ||
3092 | 16 | 18 | ||
3093 | === modified file 'debian/patches/show_more_stack_frames.diff' | |||
3094 | --- debian/patches/show_more_stack_frames.diff 2011-11-20 15:56:50 +0000 | |||
3095 | +++ debian/patches/show_more_stack_frames.diff 2013-02-16 12:25:23 +0000 | |||
3096 | @@ -1,4 +1,4 @@ | |||
3098 | 1 | Description: When Sphinx crashes, show 10 stack frames (instead of a single one). | 1 | Description: when Sphinx crashes, show 10 stack frames (instead of a single one) |
3099 | 2 | Normally, when Sphinx crashes, it doesn't display full stack trace (as other | 2 | Normally, when Sphinx crashes, it doesn't display full stack trace (as other |
3100 | 3 | Python application do by default), but only a single one; rest of the stack | 3 | Python application do by default), but only a single one; rest of the stack |
3101 | 4 | trace is stored into a temporary file. Such behaviour is undesired in some | 4 | trace is stored into a temporary file. Such behaviour is undesired in some |
3102 | 5 | 5 | ||
3103 | === modified file 'debian/patches/sort_stopwords.diff' | |||
3104 | --- debian/patches/sort_stopwords.diff 2012-11-27 19:20:44 +0000 | |||
3105 | +++ debian/patches/sort_stopwords.diff 2013-02-16 12:25:23 +0000 | |||
3106 | @@ -2,8 +2,8 @@ | |||
3107 | 2 | The order of stopwords in searchtools.js would be random if hash randomization | 2 | The order of stopwords in searchtools.js would be random if hash randomization |
3108 | 3 | was enabled, breaking dh_sphinxdoc. This patch makes the order deterministic. | 3 | was enabled, breaking dh_sphinxdoc. This patch makes the order deterministic. |
3109 | 4 | Author: Jakub Wilk <jwilk@debian.org> | 4 | Author: Jakub Wilk <jwilk@debian.org> |
3112 | 5 | Applied-Upstream: https://bitbucket.org/birkenfeld/sphinx/changeset/6cf5320e65 | 5 | Forwarded: yes, https://bitbucket.org/birkenfeld/sphinx/commits/6cf5320e65 |
3113 | 6 | Last-Update: 2012-11-10 | 6 | Last-Update: 2012-12-08 |
3114 | 7 | 7 | ||
3115 | 8 | --- a/sphinx/search/__init__.py | 8 | --- a/sphinx/search/__init__.py |
3116 | 9 | +++ b/sphinx/search/__init__.py | 9 | +++ b/sphinx/search/__init__.py |
3117 | 10 | 10 | ||
3118 | === modified file 'debian/patches/sphinxcontrib_namespace.diff' | |||
3119 | --- debian/patches/sphinxcontrib_namespace.diff 2012-02-03 13:52:49 +0000 | |||
3120 | +++ debian/patches/sphinxcontrib_namespace.diff 2013-02-16 12:25:23 +0000 | |||
3121 | @@ -1,4 +1,4 @@ | |||
3123 | 1 | Description: Create namespace package ‘sphinxcontrib’. | 1 | Description: create namespace package ‘sphinxcontrib’ |
3124 | 2 | Create namespace package ‘sphinxcontrib’. This allows python-sphinxcontrib.* | 2 | Create namespace package ‘sphinxcontrib’. This allows python-sphinxcontrib.* |
3125 | 3 | packages, both those using dh_python2 and those using python-support, to be | 3 | packages, both those using dh_python2 and those using python-support, to be |
3126 | 4 | co-importable. | 4 | co-importable. |
3127 | 5 | 5 | ||
3128 | === modified file 'debian/patches/unversioned_grammar_pickle.diff' | |||
3129 | --- debian/patches/unversioned_grammar_pickle.diff 2011-09-28 17:20:22 +0000 | |||
3130 | +++ debian/patches/unversioned_grammar_pickle.diff 2013-02-16 12:25:23 +0000 | |||
3131 | @@ -1,4 +1,4 @@ | |||
3133 | 1 | Description: Don't embed Python version in filename of grammar pickle. | 1 | Description: don't embed Python version in filename of grammar pickle |
3134 | 2 | Author: Jakub Wilk <jwilk@debian.org> | 2 | Author: Jakub Wilk <jwilk@debian.org> |
3135 | 3 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/641 | 3 | Bug: https://bitbucket.org/birkenfeld/sphinx/issue/641 |
3136 | 4 | Forwarded: not-needed | 4 | Forwarded: not-needed |
3137 | 5 | 5 | ||
3138 | === modified file 'debian/rules' | |||
3139 | --- debian/rules 2012-11-27 19:20:44 +0000 | |||
3140 | +++ debian/rules 2013-02-16 12:25:23 +0000 | |||
3141 | @@ -23,6 +23,10 @@ | |||
3142 | 23 | python_all = pyversions -r | tr ' ' '\n' | xargs -t -I {} env {} | 23 | python_all = pyversions -r | tr ' ' '\n' | xargs -t -I {} env {} |
3143 | 24 | python3_all = py3versions -r | tr ' ' '\n' | xargs -t -I {} env {} | 24 | python3_all = py3versions -r | tr ' ' '\n' | xargs -t -I {} env {} |
3144 | 25 | 25 | ||
3145 | 26 | ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" "" | ||
3146 | 27 | msgfmt_options = -c | ||
3147 | 28 | endif | ||
3148 | 29 | |||
3149 | 26 | build-arch: | 30 | build-arch: |
3150 | 27 | 31 | ||
3151 | 28 | build-indep build: build-stamp | 32 | build-indep build: build-stamp |
3152 | @@ -37,12 +41,13 @@ | |||
3153 | 37 | python ./sphinx-build.py -b man doc build/man | 41 | python ./sphinx-build.py -b man doc build/man |
3154 | 38 | python setup.py build --build-lib build/py2/ | 42 | python setup.py build --build-lib build/py2/ |
3155 | 39 | python3 setup.py build --build-lib build/py3/ | 43 | python3 setup.py build --build-lib build/py3/ |
3158 | 40 | ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS))) | 44 | find sphinx/locale/ -name '*.po' | sed -e 's/[.]po$$//' \ |
3159 | 41 | find sphinx/locale/ -name '*.po' | xargs -t -I {} msgfmt -o /dev/null -c {} | 45 | | xargs -t -I {} msgfmt $(msgfmt_options) {}.po -o {}.mo |
3160 | 46 | ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" "" | ||
3161 | 42 | $(python_all) tests/run.py --verbose --no-skip | 47 | $(python_all) tests/run.py --verbose --no-skip |
3162 | 43 | $(python3_all) tests/run.py --verbose | 48 | $(python3_all) tests/run.py --verbose |
3163 | 44 | cd build/py3/ && rm -rf tests/ sphinx/pycode/Grammar.pickle | 49 | cd build/py3/ && rm -rf tests/ sphinx/pycode/Grammar.pickle |
3165 | 45 | xvfb-run --auto-servernum ./debian/jstest/run-tests | 50 | xvfb-run -a ./debian/jstest/run-tests build/html/ |
3166 | 46 | endif | 51 | endif |
3167 | 47 | touch build-stamp | 52 | touch build-stamp |
3168 | 48 | 53 | ||
3169 | 49 | 54 | ||
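The new debian/rules snippet rebuilds each `.mo` from its `.po` by first stripping the suffix with sed, then handing the stem to msgfmt via xargs. The pipeline can be exercised in isolation (`msgfmt` is replaced by `echo` in this sketch so it runs without gettext installed):

```shell
# Derive the catalog stem, then build the msgfmt command line;
# 'echo msgfmt' stands in for the real invocation here.
po_stems() {
    printf '%s\n' "$@" | sed -e 's/[.]po$//'
}

po_stems sphinx/locale/de/LC_MESSAGES/sphinx.po \
    | xargs -I {} echo msgfmt -c {}.po -o {}.mo
```

Keeping `-c` (strict checking) behind the `nocheck` guard, as the rules file now does via `$(msgfmt_options)`, means `DEB_BUILD_OPTIONS=nocheck` still rebuilds the MO files but skips the validation pass.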
3170 | === modified file 'debian/source/options' | |||
3171 | --- debian/source/options 2012-02-05 19:33:59 +0000 | |||
3172 | +++ debian/source/options 2013-02-16 12:25:23 +0000 | |||
3173 | @@ -1,1 +1,2 @@ | |||
3174 | 1 | extend-diff-ignore = "^[^/]*[.]egg-info/" | 1 | extend-diff-ignore = "^[^/]*[.]egg-info/" |
3175 | 2 | extend-diff-ignore = "^sphinx/locale/[^/]+/LC_MESSAGES/sphinx[.]mo$" | ||
3176 | 2 | 3 | ||
3177 | === modified file 'debian/tests/control' | |||
3178 | --- debian/tests/control 2012-11-27 19:20:44 +0000 | |||
3179 | +++ debian/tests/control 2013-02-16 12:25:23 +0000 | |||
3180 | @@ -3,3 +3,6 @@ | |||
3181 | 3 | 3 | ||
3182 | 4 | Tests: python3-sphinx | 4 | Tests: python3-sphinx |
3183 | 5 | Depends: python3-sphinx, python3-nose | 5 | Depends: python3-sphinx, python3-nose |
3184 | 6 | |||
3185 | 7 | Tests: sphinx-doc | ||
3186 | 8 | Depends: sphinx-doc, python, python-webkit, xvfb | ||
3187 | 6 | 9 | ||
3188 | === added file 'debian/tests/sphinx-doc' | |||
3189 | --- debian/tests/sphinx-doc 1970-01-01 00:00:00 +0000 | |||
3190 | +++ debian/tests/sphinx-doc 2013-02-16 12:25:23 +0000 | |||
3191 | @@ -0,0 +1,20 @@ | |||
3192 | 1 | #!/bin/sh | ||
3193 | 2 | set -e -u | ||
3194 | 3 | cp -r debian/jstest "$ADTTMP/" | ||
3195 | 4 | cd "$ADTTMP" | ||
3196 | 5 | for python in python python3 | ||
3197 | 6 | do | ||
3198 | 7 | for format in rst html | ||
3199 | 8 | do | ||
3200 | 9 | [ "$(readlink -f /usr/share/doc/$python-sphinx/$format)" = "$(readlink -f /usr/share/doc/sphinx-doc/$format)" ] | ||
3201 | 10 | done | ||
3202 | 11 | done | ||
3203 | 12 | run_js_tests='jstest/run-tests /usr/share/doc/sphinx-doc/html/' | ||
3204 | 13 | if [ -n "${DISPLAY:-}" ] | ||
3205 | 14 | then | ||
3206 | 15 | $run_js_tests | ||
3207 | 16 | else | ||
3208 | 17 | xvfb-run -a $run_js_tests | ||
3209 | 18 | fi | ||
3210 | 19 | |||
3211 | 20 | # vim:ts=4 sw=4 et | ||
3212 | 0 | 21 | ||
3213 | === modified file 'setup.py' | |||
3214 | --- setup.py 2012-03-30 23:32:16 +0000 | |||
3215 | +++ setup.py 2013-02-16 12:25:23 +0000 | |||
3216 | @@ -67,6 +67,23 @@ | |||
3217 | 67 | # The uuid module is new in the stdlib in 2.5 | 67 | # The uuid module is new in the stdlib in 2.5 |
3218 | 68 | requires.append('uuid>=1.30') | 68 | requires.append('uuid>=1.30') |
3219 | 69 | 69 | ||
3220 | 70 | if sys.version_info >= (3,): | ||
3221 | 71 | |||
3222 | 72 | num_processes = 1 | ||
3223 | 73 | for option in os.environ.get('DEB_BUILD_OPTIONS', '').split(): | ||
3224 | 74 | if option.startswith('parallel='): | ||
3225 | 75 | num_processes = int(option.split('=', 1)[1]) | ||
3226 | 76 | if num_processes > 1: | ||
3227 | 77 | import lib2to3.refactor | ||
3228 | 78 | class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool): | ||
3229 | 79 | def refactor(self, items, write=False, doctests_only=False): | ||
3230 | 80 | return lib2to3.refactor.MultiprocessRefactoringTool.refactor( | ||
3231 | 81 | self, items, | ||
3232 | 82 | write=write, | ||
3233 | 83 | doctests_only=doctests_only, | ||
3234 | 84 | num_processes=num_processes | ||
3235 | 85 | ) | ||
3236 | 86 | lib2to3.refactor.RefactoringTool = RefactoringTool | ||
3237 | 70 | 87 | ||
3238 | 71 | # Provide a "compile_catalog" command that also creates the translated | 88 | # Provide a "compile_catalog" command that also creates the translated |
3239 | 72 | # JavaScript files if Babel is available. | 89 | # JavaScript files if Babel is available. |
3240 | 73 | 90 | ||
=== modified file 'sphinx/environment.py'
--- sphinx/environment.py	2012-11-27 19:20:44 +0000
+++ sphinx/environment.py	2013-02-16 12:25:23 +0000
@@ -218,6 +218,10 @@
             if not msgstr or msgstr == msg: # as-of-yet untranslated
                 continue
 
+            # Avoid "Literal block expected; none found." warnings.
+            if msgstr.strip().endswith('::'):
+                msgstr += '\n\n   dummy literal'
+
             patch = new_document(source, settings)
             parser.parse(msgstr, patch)
             patch = patch[0]
@@ -225,31 +229,48 @@
             if not isinstance(patch, nodes.paragraph):
                 continue # skip for now
 
-            footnote_refs = [r for r in node.children
-                             if isinstance(r, nodes.footnote_reference)
-                             and r.get('auto') == 1]
-            refs = [r for r in node.children if isinstance(r, nodes.reference)]
-
-            for i, child in enumerate(patch.children): # update leaves
-                if isinstance(child, nodes.footnote_reference) \
-                   and child.get('auto') == 1:
-                    # use original 'footnote_reference' object.
-                    # this object is already registered in self.document.autofootnote_refs
-                    patch.children[i] = footnote_refs.pop(0)
-                    # Some duplicated footnote_reference in msgstr causes
-                    # IndexError in .pop(0). That is invalid msgstr.
-
-                elif isinstance(child, nodes.reference):
-                    # reference should use original 'refname'.
-                    # * reference target ".. _Python: ..." is not translatable.
-                    # * section refname is not translatable.
-                    # * inline reference "`Python <...>`_" has no 'refname'.
-                    if refs and 'refname' in refs[0]:
-                        refname = child['refname'] = refs.pop(0)['refname']
-                        self.document.refnames.setdefault(
-                            refname, []).append(child)
-                    # if number of reference nodes had been changed, that
-                    # would often generate unknown link target warning.
+            # auto-numbered foot note reference should use original 'ids'.
+            is_autonumber_footnote_ref = lambda node: \
+                isinstance(node, nodes.footnote_reference) \
+                and node.get('auto') == 1
+            old_foot_refs = node.traverse(is_autonumber_footnote_ref)
+            new_foot_refs = patch.traverse(is_autonumber_footnote_ref)
+            if len(old_foot_refs) != len(new_foot_refs):
+                env.warn_node('inconsistent footnote references in '
+                              'translated message', node)
+            for old, new in zip(old_foot_refs, new_foot_refs):
+                new['ids'] = old['ids']
+                self.document.autofootnote_refs.remove(old)
+                self.document.note_autofootnote_ref(new)
+
+            # reference should use original 'refname'.
+            # * reference target ".. _Python: ..." is not translatable.
+            # * section refname is not translatable.
+            # * inline reference "`Python <...>`_" has no 'refname'.
+            is_refnamed_ref = lambda node: \
+                isinstance(node, nodes.reference) \
+                and 'refname' in node
+            old_refs = node.traverse(is_refnamed_ref)
+            new_refs = patch.traverse(is_refnamed_ref)
+            applied_refname_map = {}
+            if len(old_refs) != len(new_refs):
+                env.warn_node('inconsistent references in '
+                              'translated message', node)
+            for new in new_refs:
+                if new['refname'] in applied_refname_map:
+                    # 2nd appearance of the reference
+                    new['refname'] = applied_refname_map[new['refname']]
+                elif old_refs:
+                    # 1st appearance of the reference in old_refs
+                    old = old_refs.pop(0)
+                    refname = old['refname']
+                    new['refname'] = refname
+                    applied_refname_map[new['refname']] = refname
+                else:
+                    # the reference is not found in old_refs
+                    applied_refname_map[new['refname']] = new['refname']
+
+                self.document.note_refname(new)
 
             for child in patch.children: # update leaves
                 child.parent = node
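The first environment.py hunk fixes the "Literal block expected; none found." warning: in reST, a paragraph ending in `::` announces a following indented literal block, but a translated message contains only the paragraph, so docutils warns when the block is missing. The check from the diff can be exercised in isolation; the helper name here is hypothetical, and the exact indentation of the dummy block is an assumption:

```python
def pad_literal_block(msgstr):
    """Append a dummy indented literal block when a translated reST
    message ends in '::', so parsing it standalone does not trigger
    docutils' "Literal block expected; none found." warning."""
    if msgstr.strip().endswith('::'):
        msgstr += '\n\n   dummy literal'
    return msgstr


print(pad_literal_block('For example::'))    # paragraph plus dummy block appended
print(pad_literal_block('Plain sentence.'))  # returned unchanged
```

The dummy block is thrown away later: only the parsed paragraph node (`patch[0]`) is kept, so the placeholder text never reaches the output.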
Good work! Uploaded.