Merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1 into lp:ubuntu/raring/sphinx

Proposed by Dmitry Shachnev
Status: Merged
Merge reported by: Dmitry Shachnev
Merged at revision: not available
Proposed branch: lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1
Merge into: lp:ubuntu/raring/sphinx
Diff against target: 3328 lines (+2631/-451)
25 files modified
.pc/applied-patches (+3/-1)
.pc/fix_literal_block_warning.diff/sphinx/environment.py (+1807/-0)
.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py (+0/-345)
.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py (+345/-0)
.pc/parallel_2to3.diff/setup.py (+209/-0)
debian/changelog (+33/-2)
debian/jstest/run-tests (+1/-1)
debian/patches/fix_literal_block_warning.diff (+17/-0)
debian/patches/fix_manpages_generation_with_new_docutils.diff (+0/-33)
debian/patches/initialize_autodoc.diff (+1/-1)
debian/patches/l10n_fixes.diff (+49/-33)
debian/patches/manpage_writer_docutils_0.10_api.diff (+31/-0)
debian/patches/parallel_2to3.diff (+31/-0)
debian/patches/python3_test_build_dir.diff (+1/-1)
debian/patches/series (+3/-1)
debian/patches/show_more_stack_frames.diff (+1/-1)
debian/patches/sort_stopwords.diff (+2/-2)
debian/patches/sphinxcontrib_namespace.diff (+1/-1)
debian/patches/unversioned_grammar_pickle.diff (+1/-1)
debian/rules (+8/-3)
debian/source/options (+1/-0)
debian/tests/control (+3/-0)
debian/tests/sphinx-doc (+20/-0)
setup.py (+17/-0)
sphinx/environment.py (+46/-25)
To merge this branch: bzr merge lp:~mitya57/ubuntu/raring/sphinx/1.1.3+dfsg-7ubuntu1
Reviewer Review Type Date Requested Status
Daniel Holbach (community) Approve
Ubuntu branches Pending
Review via email: mp+148869@code.launchpad.net

Description of the change

It looks like upstream is not going to release 1.3 anytime soon, so this branch merges some recent Debian changes and backports a minor fix from upstream.

This also includes a "final" version of the patch for bug 1068493, which will make it possible to SRU it.

sphinx (1.1.3+dfsg-7ubuntu1) raring; urgency=low

  * Merge with Debian experimental. Remaining Ubuntu changes:
    - Switch to dh_python2.
    - debian/rules: export NO_PKG_MANGLE=1 in order to not have translations
      stripped.
    - debian/control: Drop the build-dependency on python-whoosh.
    - debian/control: Add "XS-Testsuite: autopkgtest" header.
  * debian/patches/fix_manpages_generation_with_new_docutils.diff:
    dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff.
  * debian/patches/fix_literal_block_warning.diff: add patch to avoid
    false-positive "Literal block expected; none found." warnings when
    building l10n projects.

 -- Dmitry Shachnev <email address hidden> Sat, 16 Feb 2013 14:51:12 +0400

42. By Dmitry Shachnev

Releasing version 1.1.3+dfsg-7ubuntu1.

Revision history for this message
Daniel Holbach (dholbach) wrote :

Good work! Uploaded.

review: Approve
Revision history for this message
Daniel Holbach (dholbach) wrote :
Revision history for this message
Dmitry Shachnev (mitya57) wrote :

Segfaulting again? :-(

/me looks at python-webkit suspiciously, but will ask buildd admins to
investigate the problem tomorrow.

Revision history for this message
Dmitry Shachnev (mitya57) wrote :

[Commenting on the right MP this time]

I can't reproduce the crash in the up-to-date raring pbuilder chroot. William Grant suggested that I try the chroot from https://launchpad.net/api/devel/ubuntu/raring/i386; I will do that later.

Preview Diff

=== modified file '.pc/applied-patches'
--- .pc/applied-patches 2012-11-27 19:20:44 +0000
+++ .pc/applied-patches 2013-02-16 12:25:23 +0000
@@ -8,8 +8,10 @@
 fix_nepali_po.diff
 pygments_byte_strings.diff
 fix_shorthandoff.diff
-fix_manpages_generation_with_new_docutils.diff
 test_build_html_rb.diff
 sort_stopwords.diff
 support_python_3.3.diff
 l10n_fixes.diff
+manpage_writer_docutils_0.10_api.diff
+parallel_2to3.diff
+fix_literal_block_warning.diff
=== added directory '.pc/fix_literal_block_warning.diff'
=== added directory '.pc/fix_literal_block_warning.diff/sphinx'
=== added file '.pc/fix_literal_block_warning.diff/sphinx/environment.py'
--- .pc/fix_literal_block_warning.diff/sphinx/environment.py 1970-01-01 00:00:00 +0000
+++ .pc/fix_literal_block_warning.diff/sphinx/environment.py 2013-02-16 12:25:23 +0000
@@ -0,0 +1,1807 @@
1# -*- coding: utf-8 -*-
2"""
3 sphinx.environment
4 ~~~~~~~~~~~~~~~~~~
5
6 Global creation environment.
7
8 :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
9 :license: BSD, see LICENSE for details.
10"""
11
12import re
13import os
14import sys
15import time
16import types
17import codecs
18import imghdr
19import string
20import unicodedata
21import cPickle as pickle
22from os import path
23from glob import glob
24from itertools import izip, groupby
25
26from docutils import nodes
27from docutils.io import FileInput, NullOutput
28from docutils.core import Publisher
29from docutils.utils import Reporter, relative_path, new_document, \
30 get_source_line
31from docutils.readers import standalone
32from docutils.parsers.rst import roles, directives, Parser as RSTParser
33from docutils.parsers.rst.languages import en as english
34from docutils.parsers.rst.directives.html import MetaBody
35from docutils.writers import UnfilteredWriter
36from docutils.transforms import Transform
37from docutils.transforms.parts import ContentsFilter
38
39from sphinx import addnodes
40from sphinx.util import url_re, get_matching_docs, docname_join, split_into, \
41 FilenameUniqDict
42from sphinx.util.nodes import clean_astext, make_refnode, extract_messages, \
43 WarningStream
44from sphinx.util.osutil import movefile, SEP, ustrftime, find_catalog
45from sphinx.util.matching import compile_matchers
46from sphinx.util.pycompat import all, class_types
47from sphinx.util.websupport import is_commentable
48from sphinx.errors import SphinxError, ExtensionError
49from sphinx.locale import _, init as init_locale
50from sphinx.versioning import add_uids, merge_doctrees
51
52fs_encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
53
54orig_role_function = roles.role
55orig_directive_function = directives.directive
56
57class ElementLookupError(Exception): pass
58
59
60default_settings = {
61 'embed_stylesheet': False,
62 'cloak_email_addresses': True,
63 'pep_base_url': 'http://www.python.org/dev/peps/',
64 'rfc_base_url': 'http://tools.ietf.org/html/',
65 'input_encoding': 'utf-8-sig',
66 'doctitle_xform': False,
67 'sectsubtitle_xform': False,
68 'halt_level': 5,
69}
70
71# This is increased every time an environment attribute is added
72# or changed to properly invalidate pickle files.
73ENV_VERSION = 41
74
75
76default_substitutions = set([
77 'version',
78 'release',
79 'today',
80])
81
82dummy_reporter = Reporter('', 4, 4)
83
84versioning_conditions = {
85 'none': False,
86 'text': nodes.TextElement,
87 'commentable': is_commentable,
88}
89
90
91class NoUri(Exception):
92 """Raised by get_relative_uri if there is no URI available."""
93 pass
94
95
96class DefaultSubstitutions(Transform):
97 """
98 Replace some substitutions if they aren't defined in the document.
99 """
100 # run before the default Substitutions
101 default_priority = 210
102
103 def apply(self):
104 config = self.document.settings.env.config
105 # only handle those not otherwise defined in the document
106 to_handle = default_substitutions - set(self.document.substitution_defs)
107 for ref in self.document.traverse(nodes.substitution_reference):
108 refname = ref['refname']
109 if refname in to_handle:
110 text = config[refname]
111 if refname == 'today' and not text:
112 # special handling: can also specify a strftime format
113 text = ustrftime(config.today_fmt or _('%B %d, %Y'))
114 ref.replace_self(nodes.Text(text, text))
115
116
117class MoveModuleTargets(Transform):
118 """
119 Move module targets that are the first thing in a section to the section
120 title.
121
122 XXX Python specific
123 """
124 default_priority = 210
125
126 def apply(self):
127 for node in self.document.traverse(nodes.target):
128 if not node['ids']:
129 continue
130 if (node.has_key('ismod') and
131 node.parent.__class__ is nodes.section and
132 # index 0 is the section title node
133 node.parent.index(node) == 1):
134 node.parent['ids'][0:0] = node['ids']
135 node.parent.remove(node)
136
137
138class HandleCodeBlocks(Transform):
139 """
140 Several code block related transformations.
141 """
142 default_priority = 210
143
144 def apply(self):
145 # move doctest blocks out of blockquotes
146 for node in self.document.traverse(nodes.block_quote):
147 if all(isinstance(child, nodes.doctest_block) for child
148 in node.children):
149 node.replace_self(node.children)
150 # combine successive doctest blocks
151 #for node in self.document.traverse(nodes.doctest_block):
152 # if node not in node.parent.children:
153 # continue
154 # parindex = node.parent.index(node)
155 # while len(node.parent) > parindex+1 and \
156 # isinstance(node.parent[parindex+1], nodes.doctest_block):
157 # node[0] = nodes.Text(node[0] + '\n\n' +
158 # node.parent[parindex+1][0])
159 # del node.parent[parindex+1]
160
161
162class SortIds(Transform):
163 """
164 Sort secion IDs so that the "id[0-9]+" one comes last.
165 """
166 default_priority = 261
167
168 def apply(self):
169 for node in self.document.traverse(nodes.section):
170 if len(node['ids']) > 1 and node['ids'][0].startswith('id'):
171 node['ids'] = node['ids'][1:] + [node['ids'][0]]
172
173
174class CitationReferences(Transform):
175 """
176 Replace citation references by pending_xref nodes before the default
177 docutils transform tries to resolve them.
178 """
179 default_priority = 619
180
181 def apply(self):
182 for citnode in self.document.traverse(nodes.citation_reference):
183 cittext = citnode.astext()
184 refnode = addnodes.pending_xref(cittext, reftype='citation',
185 reftarget=cittext, refwarn=True)
186 refnode.line = citnode.line or citnode.parent.line
187 refnode += nodes.Text('[' + cittext + ']')
188 citnode.parent.replace(citnode, refnode)
189
190
191class Locale(Transform):
192 """
193 Replace translatable nodes with their translated doctree.
194 """
195 default_priority = 0
196 def apply(self):
197 env = self.document.settings.env
198 settings, source = self.document.settings, self.document['source']
199 # XXX check if this is reliable
200 assert source.startswith(env.srcdir)
201 docname = path.splitext(relative_path(env.srcdir, source))[0]
202 textdomain = find_catalog(docname,
203 self.document.settings.gettext_compact)
204
205 # fetch translations
206 dirs = [path.join(env.srcdir, directory)
207 for directory in env.config.locale_dirs]
208 catalog, has_catalog = init_locale(dirs, env.config.language,
209 textdomain)
210 if not has_catalog:
211 return
212
213 parser = RSTParser()
214
215 for node, msg in extract_messages(self.document):
216 msgstr = catalog.gettext(msg)
217 # XXX add marker to untranslated parts
218 if not msgstr or msgstr == msg: # as-of-yet untranslated
219 continue
220
221 patch = new_document(source, settings)
222 parser.parse(msgstr, patch)
223 patch = patch[0]
224 # XXX doctest and other block markup
225 if not isinstance(patch, nodes.paragraph):
226 continue # skip for now
227
228 # auto-numbered foot note reference should use original 'ids'.
229 is_autonumber_footnote_ref = lambda node: \
230 isinstance(node, nodes.footnote_reference) \
231 and node.get('auto') == 1
232 old_foot_refs = node.traverse(is_autonumber_footnote_ref)
233 new_foot_refs = patch.traverse(is_autonumber_footnote_ref)
234 if len(old_foot_refs) != len(new_foot_refs):
235 env.warn_node('inconsistent footnote references in '
236 'translated message', node)
237 for old, new in zip(old_foot_refs, new_foot_refs):
238 new['ids'] = old['ids']
239 self.document.autofootnote_refs.remove(old)
240 self.document.note_autofootnote_ref(new)
241
242 # reference should use original 'refname'.
243 # * reference target ".. _Python: ..." is not translatable.
244 # * section refname is not translatable.
245 # * inline reference "`Python <...>`_" has no 'refname'.
246 is_refnamed_ref = lambda node: \
247 isinstance(node, nodes.reference) \
248 and 'refname' in node
249 old_refs = node.traverse(is_refnamed_ref)
250 new_refs = patch.traverse(is_refnamed_ref)
251 applied_refname_map = {}
252 if len(old_refs) != len(new_refs):
253 env.warn_node('inconsistent references in '
254 'translated message', node)
255 for new in new_refs:
256 if new['refname'] in applied_refname_map:
257 # 2nd appearance of the reference
258 new['refname'] = applied_refname_map[new['refname']]
259 elif old_refs:
260 # 1st appearance of the reference in old_refs
261 old = old_refs.pop(0)
262 refname = old['refname']
263 new['refname'] = refname
264 applied_refname_map[new['refname']] = refname
265 else:
266 # the reference is not found in old_refs
267 applied_refname_map[new['refname']] = new['refname']
268
269 self.document.note_refname(new)
270
271 for child in patch.children: # update leaves
272 child.parent = node
273 node.children = patch.children
274
275
276class SphinxStandaloneReader(standalone.Reader):
277 """
278 Add our own transforms.
279 """
280 transforms = [Locale, CitationReferences, DefaultSubstitutions,
281 MoveModuleTargets, HandleCodeBlocks, SortIds]
282
283 def get_transforms(self):
284 return standalone.Reader.get_transforms(self) + self.transforms
285
286
287class SphinxDummyWriter(UnfilteredWriter):
288 supported = ('html',) # needed to keep "meta" nodes
289
290 def translate(self):
291 pass
292
293
294class SphinxContentsFilter(ContentsFilter):
295 """
296 Used with BuildEnvironment.add_toc_from() to discard cross-file links
297 within table-of-contents link nodes.
298 """
299 def visit_pending_xref(self, node):
300 text = node.astext()
301 self.parent.append(nodes.literal(text, text))
302 raise nodes.SkipNode
303
304 def visit_image(self, node):
305 raise nodes.SkipNode
306
307
308class BuildEnvironment:
309 """
310 The environment in which the ReST files are translated.
311 Stores an inventory of cross-file targets and provides doctree
312 transformations to resolve links to them.
313 """
314
315 # --------- ENVIRONMENT PERSISTENCE ----------------------------------------
316
317 @staticmethod
318 def frompickle(config, filename):
319 picklefile = open(filename, 'rb')
320 try:
321 env = pickle.load(picklefile)
322 finally:
323 picklefile.close()
324 if env.version != ENV_VERSION:
325 raise IOError('env version not current')
326 env.config.values = config.values
327 return env
328
329 def topickle(self, filename):
330 # remove unpicklable attributes
331 warnfunc = self._warnfunc
332 self.set_warnfunc(None)
333 values = self.config.values
334 del self.config.values
335 domains = self.domains
336 del self.domains
337 # first write to a temporary file, so that if dumping fails,
338 # the existing environment won't be overwritten
339 picklefile = open(filename + '.tmp', 'wb')
340 # remove potentially pickling-problematic values from config
341 for key, val in vars(self.config).items():
342 if key.startswith('_') or \
343 isinstance(val, types.ModuleType) or \
344 isinstance(val, types.FunctionType) or \
345 isinstance(val, class_types):
346 del self.config[key]
347 try:
348 pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
349 finally:
350 picklefile.close()
351 movefile(filename + '.tmp', filename)
352 # reset attributes
353 self.domains = domains
354 self.config.values = values
355 self.set_warnfunc(warnfunc)
356
357 # --------- ENVIRONMENT INITIALIZATION -------------------------------------
358
359 def __init__(self, srcdir, doctreedir, config):
360 self.doctreedir = doctreedir
361 self.srcdir = srcdir
362 self.config = config
363
364 # the method of doctree versioning; see set_versioning_method
365 self.versioning_condition = None
366
367 # the application object; only set while update() runs
368 self.app = None
369
370 # all the registered domains, set by the application
371 self.domains = {}
372
373 # the docutils settings for building
374 self.settings = default_settings.copy()
375 self.settings['env'] = self
376
377 # the function to write warning messages with
378 self._warnfunc = None
379
380 # this is to invalidate old pickles
381 self.version = ENV_VERSION
382
383 # make this a set for faster testing
384 self._nitpick_ignore = set(self.config.nitpick_ignore)
385
386 # All "docnames" here are /-separated and relative and exclude
387 # the source suffix.
388
389 self.found_docs = set() # contains all existing docnames
390 self.all_docs = {} # docname -> mtime at the time of build
391 # contains all built docnames
392 self.dependencies = {} # docname -> set of dependent file
393 # names, relative to documentation root
394 self.reread_always = set() # docnames to re-read unconditionally on
395 # next build
396
397 # File metadata
398 self.metadata = {} # docname -> dict of metadata items
399
400 # TOC inventory
401 self.titles = {} # docname -> title node
402 self.longtitles = {} # docname -> title node; only different if
403 # set differently with title directive
404 self.tocs = {} # docname -> table of contents nodetree
405 self.toc_num_entries = {} # docname -> number of real entries
406 # used to determine when to show the TOC
407 # in a sidebar (don't show if it's only one item)
408 self.toc_secnumbers = {} # docname -> dict of sectionid -> number
409
410 self.toctree_includes = {} # docname -> list of toctree includefiles
411 self.files_to_rebuild = {} # docname -> set of files
412 # (containing its TOCs) to rebuild too
413 self.glob_toctrees = set() # docnames that have :glob: toctrees
414 self.numbered_toctrees = set() # docnames that have :numbered: toctrees
415
416 # domain-specific inventories, here to be pickled
417 self.domaindata = {} # domainname -> domain-specific dict
418
419 # Other inventories
420 self.citations = {} # citation name -> docname, labelid
421 self.indexentries = {} # docname -> list of
422 # (type, string, target, aliasname)
423 self.versionchanges = {} # version -> list of (type, docname,
424 # lineno, module, descname, content)
425
426 # these map absolute path -> (docnames, unique filename)
427 self.images = FilenameUniqDict()
428 self.dlfiles = FilenameUniqDict()
429
430 # temporary data storage while reading a document
431 self.temp_data = {}
432
433 def set_warnfunc(self, func):
434 self._warnfunc = func
435 self.settings['warning_stream'] = WarningStream(func)
436
437 def set_versioning_method(self, method):
438 """This sets the doctree versioning method for this environment.
439
440 Versioning methods are a builder property; only builders with the same
441 versioning method can share the same doctree directory. Therefore, we
442 raise an exception if the user tries to use an environment with an
443 incompatible versioning method.
444 """
445 if method not in versioning_conditions:
446 raise ValueError('invalid versioning method: %r' % method)
447 condition = versioning_conditions[method]
448 if self.versioning_condition not in (None, condition):
449 raise SphinxError('This environment is incompatible with the '
450 'selected builder, please choose another '
451 'doctree directory.')
452 self.versioning_condition = condition
453
454 def warn(self, docname, msg, lineno=None):
455 # strange argument order is due to backwards compatibility
456 self._warnfunc(msg, (docname, lineno))
457
458 def warn_node(self, msg, node):
459 self._warnfunc(msg, '%s:%s' % get_source_line(node))
460
461 def clear_doc(self, docname):
462 """Remove all traces of a source file in the inventory."""
463 if docname in self.all_docs:
464 self.all_docs.pop(docname, None)
465 self.reread_always.discard(docname)
466 self.metadata.pop(docname, None)
467 self.dependencies.pop(docname, None)
468 self.titles.pop(docname, None)
469 self.longtitles.pop(docname, None)
470 self.tocs.pop(docname, None)
471 self.toc_secnumbers.pop(docname, None)
472 self.toc_num_entries.pop(docname, None)
473 self.toctree_includes.pop(docname, None)
474 self.indexentries.pop(docname, None)
475 self.glob_toctrees.discard(docname)
476 self.numbered_toctrees.discard(docname)
477 self.images.purge_doc(docname)
478 self.dlfiles.purge_doc(docname)
479
480 for subfn, fnset in self.files_to_rebuild.items():
481 fnset.discard(docname)
482 if not fnset:
483 del self.files_to_rebuild[subfn]
484 for key, (fn, _) in self.citations.items():
485 if fn == docname:
486 del self.citations[key]
487 for version, changes in self.versionchanges.items():
488 new = [change for change in changes if change[1] != docname]
489 changes[:] = new
490
491 for domain in self.domains.values():
492 domain.clear_doc(docname)
493
494 def doc2path(self, docname, base=True, suffix=None):
495 """Return the filename for the document name.
496
497 If *base* is True, return absolute path under self.srcdir.
498 If *base* is None, return relative path to self.srcdir.
499 If *base* is a path string, return absolute path under that.
500 If *suffix* is not None, add it instead of config.source_suffix.
501 """
502 docname = docname.replace(SEP, path.sep)
503 suffix = suffix or self.config.source_suffix
504 if base is True:
505 return path.join(self.srcdir, docname) + suffix
506 elif base is None:
507 return docname + suffix
508 else:
509 return path.join(base, docname) + suffix
510
511 def relfn2path(self, filename, docname=None):
512 """Return paths to a file referenced from a document, relative to
513 documentation root and absolute.
514
515 Absolute filenames are relative to the source dir, while relative
516 filenames are relative to the dir of the containing document.
517 """
518 if filename.startswith('/') or filename.startswith(os.sep):
519 rel_fn = filename[1:]
520 else:
521 docdir = path.dirname(self.doc2path(docname or self.docname,
522 base=None))
523 rel_fn = path.join(docdir, filename)
524 try:
525 return rel_fn, path.join(self.srcdir, rel_fn)
526 except UnicodeDecodeError:
527 # the source directory is a bytestring with non-ASCII characters;
528 # let's try to encode the rel_fn in the file system encoding
529 enc_rel_fn = rel_fn.encode(sys.getfilesystemencoding())
530 return rel_fn, path.join(self.srcdir, enc_rel_fn)
531
532 def find_files(self, config):
533 """Find all source files in the source dir and put them in
534 self.found_docs.
535 """
536 matchers = compile_matchers(
537 config.exclude_patterns[:] +
538 config.exclude_trees +
539 [d + config.source_suffix for d in config.unused_docs] +
540 ['**/' + d for d in config.exclude_dirnames] +
541 ['**/_sources', '.#*']
542 )
543 self.found_docs = set(get_matching_docs(
544 self.srcdir, config.source_suffix, exclude_matchers=matchers))
545
546 def get_outdated_files(self, config_changed):
547 """Return (added, changed, removed) sets."""
548 # clear all files no longer present
549 removed = set(self.all_docs) - self.found_docs
550
551 added = set()
552 changed = set()
553
554 if config_changed:
555 # config values affect e.g. substitutions
556 added = self.found_docs
557 else:
558 for docname in self.found_docs:
559 if docname not in self.all_docs:
560 added.add(docname)
561 continue
562 # if the doctree file is not there, rebuild
563 if not path.isfile(self.doc2path(docname, self.doctreedir,
564 '.doctree')):
565 changed.add(docname)
566 continue
567 # check the "reread always" list
568 if docname in self.reread_always:
569 changed.add(docname)
570 continue
571 # check the mtime of the document
572 mtime = self.all_docs[docname]
573 newmtime = path.getmtime(self.doc2path(docname))
574 if newmtime > mtime:
575 changed.add(docname)
576 continue
577 # finally, check the mtime of dependencies
578 for dep in self.dependencies.get(docname, ()):
579 try:
580 # this will do the right thing when dep is absolute too
581 deppath = path.join(self.srcdir, dep)
582 if not path.isfile(deppath):
583 changed.add(docname)
584 break
585 depmtime = path.getmtime(deppath)
586 if depmtime > mtime:
587 changed.add(docname)
588 break
589 except EnvironmentError:
590 # give it another chance
591 changed.add(docname)
592 break
593
594 return added, changed, removed
595
596 def update(self, config, srcdir, doctreedir, app=None):
597 """(Re-)read all files new or changed since last update.
598
599 Returns a summary, the total count of documents to reread and an
600 iterator that yields docnames as it processes them. Store all
601 environment docnames in the canonical format (ie using SEP as a
602 separator in place of os.path.sep).
603 """
604 config_changed = False
605 if self.config is None:
606 msg = '[new config] '
607 config_changed = True
608 else:
609 # check if a config value was changed that affects how
610 # doctrees are read
611 for key, descr in config.values.iteritems():
612 if descr[1] != 'env':
613 continue
614 if self.config[key] != config[key]:
615 msg = '[config changed] '
616 config_changed = True
617 break
618 else:
619 msg = ''
620 # this value is not covered by the above loop because it is handled
621 # specially by the config class
622 if self.config.extensions != config.extensions:
623 msg = '[extensions changed] '
624 config_changed = True
625 # the source and doctree directories may have been relocated
626 self.srcdir = srcdir
627 self.doctreedir = doctreedir
628 self.find_files(config)
629 self.config = config
630
631 added, changed, removed = self.get_outdated_files(config_changed)
632
633 # allow user intervention as well
634 for docs in app.emit('env-get-outdated', self, added, changed, removed):
635 changed.update(set(docs) & self.found_docs)
636
637 # if files were added or removed, all documents with globbed toctrees
638 # must be reread
639 if added or removed:
640 # ... but not those that already were removed
641 changed.update(self.glob_toctrees & self.found_docs)
642
643 msg += '%s added, %s changed, %s removed' % (len(added), len(changed),
644 len(removed))
645
646 def update_generator():
647 self.app = app
648
649 # clear all files no longer present
650 for docname in removed:
651 if app:
652 app.emit('env-purge-doc', self, docname)
653 self.clear_doc(docname)
654
655 # read all new and changed files
656 for docname in sorted(added | changed):
657 yield docname
658 self.read_doc(docname, app=app)
659
660 if config.master_doc not in self.all_docs:
661 self.warn(None, 'master file %s not found' %
662 self.doc2path(config.master_doc))
663
664 self.app = None
665 if app:
666 app.emit('env-updated', self)
667
668 return msg, len(added | changed), update_generator()
669
670 def check_dependents(self, already):
671 to_rewrite = self.assign_section_numbers()
672 for docname in to_rewrite:
673 if docname not in already:
674 yield docname
675
676 # --------- SINGLE FILE READING --------------------------------------------
677
678 def warn_and_replace(self, error):
679 """Custom decoding error handler that warns and replaces."""
680 linestart = error.object.rfind('\n', 0, error.start)
681 lineend = error.object.find('\n', error.start)
682 if lineend == -1: lineend = len(error.object)
683 lineno = error.object.count('\n', 0, error.start) + 1
684 self.warn(self.docname, 'undecodable source characters, '
685 'replacing with "?": %r' %
686 (error.object[linestart+1:error.start] + '>>>' +
687 error.object[error.start:error.end] + '<<<' +
688 error.object[error.end:lineend]), lineno)
689 return (u'?', error.end)
690
691 def lookup_domain_element(self, type, name):
692 """Lookup a markup element (directive or role), given its name which can
693 be a full name (with domain).
694 """
695 name = name.lower()
696 # explicit domain given?
697 if ':' in name:
698 domain_name, name = name.split(':', 1)
699 if domain_name in self.domains:
700 domain = self.domains[domain_name]
701 element = getattr(domain, type)(name)
702 if element is not None:
703 return element, []
704 # else look in the default domain
705 else:
706 def_domain = self.temp_data.get('default_domain')
707 if def_domain is not None:
708 element = getattr(def_domain, type)(name)
709 if element is not None:
710 return element, []
711 # always look in the std domain
712 element = getattr(self.domains['std'], type)(name)
713 if element is not None:
714 return element, []
715 raise ElementLookupError
716
717 def patch_lookup_functions(self):
718 """Monkey-patch directive and role dispatch, so that domain-specific
719 markup takes precedence.
720 """
721 def directive(name, lang_module, document):
722 try:
723 return self.lookup_domain_element('directive', name)
724 except ElementLookupError:
725 return orig_directive_function(name, lang_module, document)
726
727 def role(name, lang_module, lineno, reporter):
728 try:
729 return self.lookup_domain_element('role', name)
730 except ElementLookupError:
731 return orig_role_function(name, lang_module, lineno, reporter)
732
733 directives.directive = directive
734 roles.role = role
735
736 def read_doc(self, docname, src_path=None, save_parsed=True, app=None):
737 """Parse a file and add/update inventory entries for the doctree.
738
739 If srcpath is given, read from a different source file.
740 """
741 # remove all inventory entries for that file
742 if app:
743 app.emit('env-purge-doc', self, docname)
744
745 self.clear_doc(docname)
746
747 if src_path is None:
748 src_path = self.doc2path(docname)
749
750 self.temp_data['docname'] = docname
751 # defaults to the global default, but can be re-set in a document
752 self.temp_data['default_domain'] = \
753 self.domains.get(self.config.primary_domain)
754
755 self.settings['input_encoding'] = self.config.source_encoding
756 self.settings['trim_footnote_reference_space'] = \
757 self.config.trim_footnote_reference_space
758 self.settings['gettext_compact'] = self.config.gettext_compact
759
760 self.patch_lookup_functions()
761
762 if self.config.default_role:
763 role_fn, messages = roles.role(self.config.default_role, english,
764 0, dummy_reporter)
765 if role_fn:
766 roles._roles[''] = role_fn
767 else:
768 self.warn(docname, 'default role %s not found' %
769 self.config.default_role)
770
771 codecs.register_error('sphinx', self.warn_and_replace)
772
773 class SphinxSourceClass(FileInput):
774 def __init__(self_, *args, **kwds):
775 # don't call sys.exit() on IOErrors
776 kwds['handle_io_errors'] = False
777 FileInput.__init__(self_, *args, **kwds)
778
779 def decode(self_, data):
780 if isinstance(data, unicode):
781 return data
782 return data.decode(self_.encoding, 'sphinx')
783
784 def read(self_):
785 data = FileInput.read(self_)
786 if app:
787 arg = [data]
788 app.emit('source-read', docname, arg)
789 data = arg[0]
790 if self.config.rst_epilog:
791 data = data + '\n' + self.config.rst_epilog + '\n'
792 if self.config.rst_prolog:
793 data = self.config.rst_prolog + '\n' + data
794 return data
795
796 # publish manually
797 pub = Publisher(reader=SphinxStandaloneReader(),
798 writer=SphinxDummyWriter(),
799 source_class=SphinxSourceClass,
800 destination_class=NullOutput)
801 pub.set_components(None, 'restructuredtext', None)
802 pub.process_programmatic_settings(None, self.settings, None)
803 pub.set_source(None, src_path.encode(fs_encoding))
804 pub.set_destination(None, None)
805 try:
806 pub.publish()
807 doctree = pub.document
808 except UnicodeError, err:
809 raise SphinxError(str(err))
810
811 # post-processing
812 self.filter_messages(doctree)
813 self.process_dependencies(docname, doctree)
814 self.process_images(docname, doctree)
815 self.process_downloads(docname, doctree)
816 self.process_metadata(docname, doctree)
817 self.process_refonly_bullet_lists(docname, doctree)
818 self.create_title_from(docname, doctree)
819 self.note_indexentries_from(docname, doctree)
820 self.note_citations_from(docname, doctree)
821 self.build_toc_from(docname, doctree)
822 for domain in self.domains.itervalues():
823 domain.process_doc(self, docname, doctree)
824
825 # allow extension-specific post-processing
826 if app:
827 app.emit('doctree-read', doctree)
828
829 # store time of build, for outdated files detection
830 self.all_docs[docname] = time.time()
831
832 if self.versioning_condition:
833 # get old doctree
834 try:
835 f = open(self.doc2path(docname,
836 self.doctreedir, '.doctree'), 'rb')
837 try:
838 old_doctree = pickle.load(f)
839 finally:
840 f.close()
841 except EnvironmentError:
842 old_doctree = None
843
844 # add uids for versioning
845 if old_doctree is None:
846 list(add_uids(doctree, self.versioning_condition))
847 else:
848 list(merge_doctrees(
849 old_doctree, doctree, self.versioning_condition))
850
851 # make it picklable
852 doctree.reporter = None
853 doctree.transformer = None
854 doctree.settings.warning_stream = None
855 doctree.settings.env = None
856 doctree.settings.record_dependencies = None
857 for metanode in doctree.traverse(MetaBody.meta):
858 # docutils' meta nodes aren't picklable because the class is nested
859 metanode.__class__ = addnodes.meta
860
861 # cleanup
862 self.temp_data.clear()
863
864 if save_parsed:
865 # save the parsed doctree
866 doctree_filename = self.doc2path(docname, self.doctreedir,
867 '.doctree')
868 dirname = path.dirname(doctree_filename)
869 if not path.isdir(dirname):
870 os.makedirs(dirname)
871 f = open(doctree_filename, 'wb')
872 try:
873 pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
874 finally:
875 f.close()
876 else:
877 return doctree
878
879 # utilities to use while reading a document
880
881 @property
882 def docname(self):
883 """Backwards compatible alias."""
884 return self.temp_data['docname']
885
886 @property
887 def currmodule(self):
888 """Backwards compatible alias."""
889 return self.temp_data.get('py:module')
890
891 @property
892 def currclass(self):
893 """Backwards compatible alias."""
894 return self.temp_data.get('py:class')
895
896 def new_serialno(self, category=''):
897 """Return a serial number, e.g. for index entry targets."""
898 key = category + 'serialno'
899 cur = self.temp_data.get(key, 0)
900 self.temp_data[key] = cur + 1
901 return cur
902
903 def note_dependency(self, filename):
904 self.dependencies.setdefault(self.docname, set()).add(filename)
905
906 def note_reread(self):
907 self.reread_always.add(self.docname)
908
909 def note_versionchange(self, type, version, node, lineno):
910 self.versionchanges.setdefault(version, []).append(
911 (type, self.temp_data['docname'], lineno,
912 self.temp_data.get('py:module'),
913 self.temp_data.get('object'), node.astext()))
914
915 # post-processing of read doctrees
916
917 def filter_messages(self, doctree):
918 """Filter system messages from a doctree."""
919 filterlevel = self.config.keep_warnings and 2 or 5
920 for node in doctree.traverse(nodes.system_message):
921 if node['level'] < filterlevel:
922 node.parent.remove(node)
923
924
925 def process_dependencies(self, docname, doctree):
926 """Process docutils-generated dependency info."""
927 cwd = os.getcwd()
928 frompath = path.join(path.normpath(self.srcdir), 'dummy')
929 deps = doctree.settings.record_dependencies
930 if not deps:
931 return
932 for dep in deps.list:
933 # the dependency path is relative to the working dir, so get
934 # one relative to the srcdir
935 relpath = relative_path(frompath,
936 path.normpath(path.join(cwd, dep)))
937 self.dependencies.setdefault(docname, set()).add(relpath)
938
939 def process_downloads(self, docname, doctree):
940 """Process downloadable file paths. """
941 for node in doctree.traverse(addnodes.download_reference):
942 targetname = node['reftarget']
943 rel_filename, filename = self.relfn2path(targetname, docname)
944 self.dependencies.setdefault(docname, set()).add(rel_filename)
945 if not os.access(filename, os.R_OK):
946 self.warn_node('download file not readable: %s' % filename,
947 node)
948 continue
949 uniquename = self.dlfiles.add_file(docname, filename)
950 node['filename'] = uniquename
951
952 def process_images(self, docname, doctree):
953 """Process and rewrite image URIs."""
954 for node in doctree.traverse(nodes.image):
955 # Map the mimetype to the corresponding image. The writer may
956 # choose the best image from these candidates. The special key * is
957 # set if there is only single candidate to be used by a writer.
958 # The special key ? is set for nonlocal URIs.
959 node['candidates'] = candidates = {}
960 imguri = node['uri']
961 if imguri.find('://') != -1:
962 self.warn_node('nonlocal image URI found: %s' % imguri, node)
963 candidates['?'] = imguri
964 continue
965 rel_imgpath, full_imgpath = self.relfn2path(imguri, docname)
966 # set imgpath as default URI
967 node['uri'] = rel_imgpath
968 if rel_imgpath.endswith(os.extsep + '*'):
969 for filename in glob(full_imgpath):
970 new_imgpath = relative_path(self.srcdir, filename)
971 if filename.lower().endswith('.pdf'):
972 candidates['application/pdf'] = new_imgpath
973 elif filename.lower().endswith('.svg'):
974 candidates['image/svg+xml'] = new_imgpath
975 else:
976 try:
977 f = open(filename, 'rb')
978 try:
979 imgtype = imghdr.what(f)
980 finally:
981 f.close()
982 except (OSError, IOError), err:
983 self.warn_node('image file %s not readable: %s' %
984 (filename, err), node)
985 if imgtype:
986 candidates['image/' + imgtype] = new_imgpath
987 else:
988 candidates['*'] = rel_imgpath
989 # map image paths to unique image names (so that they can be put
990 # into a single directory)
991 for imgpath in candidates.itervalues():
992 self.dependencies.setdefault(docname, set()).add(imgpath)
993 if not os.access(path.join(self.srcdir, imgpath), os.R_OK):
994 self.warn_node('image file not readable: %s' % imgpath,
995 node)
996 continue
997 self.images.add_file(docname, imgpath)
998
999 def process_metadata(self, docname, doctree):
1000 """Process the docinfo part of the doctree as metadata.
1001
1002 Keep processing minimal -- just return what docutils says.
1003 """
1004 self.metadata[docname] = md = {}
1005 try:
1006 docinfo = doctree[0]
1007 except IndexError:
1008 # probably an empty document
1009 return
1010 if docinfo.__class__ is not nodes.docinfo:
1011 # nothing to see here
1012 return
1013 for node in docinfo:
1014 # nodes are multiply inherited...
1015 if isinstance(node, nodes.authors):
1016 md['authors'] = [author.astext() for author in node]
1017 elif isinstance(node, nodes.TextElement): # e.g. author
1018 md[node.__class__.__name__] = node.astext()
1019 else:
1020 name, body = node
1021 md[name.astext()] = body.astext()
1022 del doctree[0]
1023
1024 def process_refonly_bullet_lists(self, docname, doctree):
1025 """Change refonly bullet lists to use compact_paragraphs.
1026
1027 Specifically implemented for 'Indices and Tables' section, which looks
1028 odd when html_compact_lists is false.
1029 """
1030 if self.config.html_compact_lists:
1031 return
1032
1033 class RefOnlyListChecker(nodes.GenericNodeVisitor):
1034 """Raise `nodes.NodeFound` if non-simple list item is encountered.
1035
1036 Here 'simple' means a list item containing only a paragraph with a
1037 single reference in it.
1038 """
1039
1040 def default_visit(self, node):
1041 raise nodes.NodeFound
1042
1043 def visit_bullet_list(self, node):
1044 pass
1045
1046 def visit_list_item(self, node):
1047 children = []
1048 for child in node.children:
1049 if not isinstance(child, nodes.Invisible):
1050 children.append(child)
1051 if len(children) != 1:
1052 raise nodes.NodeFound
1053 if not isinstance(children[0], nodes.paragraph):
1054 raise nodes.NodeFound
1055 para = children[0]
1056 if len(para) != 1:
1057 raise nodes.NodeFound
1058 if not isinstance(para[0], addnodes.pending_xref):
1059 raise nodes.NodeFound
1060 raise nodes.SkipChildren
1061
1062 def invisible_visit(self, node):
1063 """Invisible nodes should be ignored."""
1064 pass
1065
1066 def check_refonly_list(node):
1067 """Check for list with only references in it."""
1068 visitor = RefOnlyListChecker(doctree)
1069 try:
1070 node.walk(visitor)
1071 except nodes.NodeFound:
1072 return False
1073 else:
1074 return True
1075
1076 for node in doctree.traverse(nodes.bullet_list):
1077 if check_refonly_list(node):
1078 for item in node.traverse(nodes.list_item):
1079 para = item[0]
1080 ref = para[0]
1081 compact_para = addnodes.compact_paragraph()
1082 compact_para += ref
1083 item.replace(para, compact_para)
1084
1085 def create_title_from(self, docname, document):
1086 """Add a title node to the document (just copy the first section title),
1087 and store that title in the environment.
1088 """
1089 titlenode = nodes.title()
1090 longtitlenode = titlenode
1091 # explicit title set with title directive; use this only for
1092 # the <title> tag in HTML output
1093 if document.has_key('title'):
1094 longtitlenode = nodes.title()
1095 longtitlenode += nodes.Text(document['title'])
1096 # look for first section title and use that as the title
1097 for node in document.traverse(nodes.section):
1098 visitor = SphinxContentsFilter(document)
1099 node[0].walkabout(visitor)
1100 titlenode += visitor.get_entry_text()
1101 break
1102 else:
1103 # document has no title
1104 titlenode += nodes.Text('<no title>')
1105 self.titles[docname] = titlenode
1106 self.longtitles[docname] = longtitlenode
1107
1108 def note_indexentries_from(self, docname, document):
1109 entries = self.indexentries[docname] = []
1110 for node in document.traverse(addnodes.index):
1111 entries.extend(node['entries'])
1112
1113 def note_citations_from(self, docname, document):
1114 for node in document.traverse(nodes.citation):
1115 label = node[0].astext()
1116 if label in self.citations:
1117 self.warn_node('duplicate citation %s, ' % label +
1118 'other instance in %s' % self.doc2path(
1119 self.citations[label][0]), node)
1120 self.citations[label] = (docname, node['ids'][0])
1121
1122 def note_toctree(self, docname, toctreenode):
1123 """Note a TOC tree directive in a document and gather information about
1124 file relations from it.
1125 """
1126 if toctreenode['glob']:
1127 self.glob_toctrees.add(docname)
1128 if toctreenode.get('numbered'):
1129 self.numbered_toctrees.add(docname)
1130 includefiles = toctreenode['includefiles']
1131 for includefile in includefiles:
1132 # note that if the included file is rebuilt, this one must be
1133 # too (since the TOC of the included file could have changed)
1134 self.files_to_rebuild.setdefault(includefile, set()).add(docname)
1135 self.toctree_includes.setdefault(docname, []).extend(includefiles)
1136
1137 def build_toc_from(self, docname, document):
1138 """Build a TOC from the doctree and store it in the inventory."""
1139 numentries = [0] # nonlocal again...
1140
1141 try:
1142 maxdepth = int(self.metadata[docname].get('tocdepth', 0))
1143 except ValueError:
1144 maxdepth = 0
1145
1146 def traverse_in_section(node, cls):
1147 """Like traverse(), but stay within the same section."""
1148 result = []
1149 if isinstance(node, cls):
1150 result.append(node)
1151 for child in node.children:
1152 if isinstance(child, nodes.section):
1153 continue
1154 result.extend(traverse_in_section(child, cls))
1155 return result
1156
1157 def build_toc(node, depth=1):
1158 entries = []
1159 for sectionnode in node:
1160 # find all toctree nodes in this section and add them
1161 # to the toc (just copying the toctree node which is then
1162 # resolved in self.get_and_resolve_doctree)
1163 if isinstance(sectionnode, addnodes.only):
1164 onlynode = addnodes.only(expr=sectionnode['expr'])
1165 blist = build_toc(sectionnode, depth)
1166 if blist:
1167 onlynode += blist.children
1168 entries.append(onlynode)
1169 if not isinstance(sectionnode, nodes.section):
1170 for toctreenode in traverse_in_section(sectionnode,
1171 addnodes.toctree):
1172 item = toctreenode.copy()
1173 entries.append(item)
1174 # important: do the inventory stuff
1175 self.note_toctree(docname, toctreenode)
1176 continue
1177 title = sectionnode[0]
1178 # copy the contents of the section title, but without references
1179 # and unnecessary stuff
1180 visitor = SphinxContentsFilter(document)
1181 title.walkabout(visitor)
1182 nodetext = visitor.get_entry_text()
1183 if not numentries[0]:
1184 # for the very first toc entry, don't add an anchor
1185 # as it is the file's title anyway
1186 anchorname = ''
1187 else:
1188 anchorname = '#' + sectionnode['ids'][0]
1189 numentries[0] += 1
1190 # make these nodes:
1191 # list_item -> compact_paragraph -> reference
1192 reference = nodes.reference(
1193 '', '', internal=True, refuri=docname,
1194 anchorname=anchorname, *nodetext)
1195 para = addnodes.compact_paragraph('', '', reference)
1196 item = nodes.list_item('', para)
1197 if maxdepth == 0 or depth < maxdepth:
1198 item += build_toc(sectionnode, depth+1)
1199 entries.append(item)
1200 if entries:
1201 return nodes.bullet_list('', *entries)
1202 return []
1203 toc = build_toc(document)
1204 if toc:
1205 self.tocs[docname] = toc
1206 else:
1207 self.tocs[docname] = nodes.bullet_list('')
1208 self.toc_num_entries[docname] = numentries[0]
1209
1210 def get_toc_for(self, docname, builder):
1211 """Return a TOC nodetree -- for use on the same page only!"""
1212 try:
1213 toc = self.tocs[docname].deepcopy()
1214 except KeyError:
1215 # the document does not exist anymore: return a dummy node that
1216 # renders to nothing
1217 return nodes.paragraph()
1218 self.process_only_nodes(toc, builder, docname)
1219 for node in toc.traverse(nodes.reference):
1220 node['refuri'] = node['anchorname'] or '#'
1221 return toc
1222
1223 def get_toctree_for(self, docname, builder, collapse, **kwds):
1224 """Return the global TOC nodetree."""
1225 doctree = self.get_doctree(self.config.master_doc)
1226 toctrees = []
1227 if 'includehidden' not in kwds:
1228 kwds['includehidden'] = True
1229 if 'maxdepth' not in kwds:
1230 kwds['maxdepth'] = 0
1231 kwds['collapse'] = collapse
1232 for toctreenode in doctree.traverse(addnodes.toctree):
1233 toctree = self.resolve_toctree(docname, builder, toctreenode,
1234 prune=True, **kwds)
1235 toctrees.append(toctree)
1236 if not toctrees:
1237 return None
1238 result = toctrees[0]
1239 for toctree in toctrees[1:]:
1240 result.extend(toctree.children)
1241 return result
1242
1243 def get_domain(self, domainname):
1244 """Return the domain instance with the specified name.
1245
1246 Raises an ExtensionError if the domain is not registered.
1247 """
1248 try:
1249 return self.domains[domainname]
1250 except KeyError:
1251 raise ExtensionError('Domain %r is not registered' % domainname)
1252
1253 # --------- RESOLVING REFERENCES AND TOCTREES ------------------------------
1254
1255 def get_doctree(self, docname):
1256 """Read the doctree for a file from the pickle and return it."""
1257 doctree_filename = self.doc2path(docname, self.doctreedir, '.doctree')
1258 f = open(doctree_filename, 'rb')
1259 try:
1260 doctree = pickle.load(f)
1261 finally:
1262 f.close()
1263 doctree.settings.env = self
1264 doctree.reporter = Reporter(self.doc2path(docname), 2, 5,
1265 stream=WarningStream(self._warnfunc))
1266 return doctree
1267
1268
1269 def get_and_resolve_doctree(self, docname, builder, doctree=None,
1270 prune_toctrees=True):
1271 """Read the doctree from the pickle, resolve cross-references and
1272 toctrees and return it.
1273 """
1274 if doctree is None:
1275 doctree = self.get_doctree(docname)
1276
1277 # resolve all pending cross-references
1278 self.resolve_references(doctree, docname, builder)
1279
1280 # now, resolve all toctree nodes
1281 for toctreenode in doctree.traverse(addnodes.toctree):
1282 result = self.resolve_toctree(docname, builder, toctreenode,
1283 prune=prune_toctrees)
1284 if result is None:
1285 toctreenode.replace_self([])
1286 else:
1287 toctreenode.replace_self(result)
1288
1289 return doctree
1290
1291 def resolve_toctree(self, docname, builder, toctree, prune=True, maxdepth=0,
1292 titles_only=False, collapse=False, includehidden=False):
1293 """Resolve a *toctree* node into individual bullet lists with titles
1294 as items, returning None (if no containing titles are found) or
1295 a new node.
1296
1297 If *prune* is True, the tree is pruned to *maxdepth*, or if that is 0,
1298 to the value of the *maxdepth* option on the *toctree* node.
1299 If *titles_only* is True, only toplevel document titles will be in the
1300 resulting tree.
1301 If *collapse* is True, all branches not containing docname will
1302 be collapsed.
1303 """
1304 if toctree.get('hidden', False) and not includehidden:
1305 return None
1306
1307 def _walk_depth(node, depth, maxdepth):
1308 """Utility: Cut a TOC at a specified depth."""
1309
1310 # For reading this function, it is useful to keep in mind the node
1311 # structure of a toctree (using HTML-like node names for brevity):
1312 #
1313 # <ul>
1314 # <li>
1315 # <p><a></p>
1316 # <p><a></p>
1317 # ...
1318 # <ul>
1319 # ...
1320 # </ul>
1321 # </li>
1322 # </ul>
1323
1324 for subnode in node.children[:]:
1325 if isinstance(subnode, (addnodes.compact_paragraph,
1326 nodes.list_item)):
1327 # for <p> and <li>, just indicate the depth level and
1328 # recurse to children
1329 subnode['classes'].append('toctree-l%d' % (depth-1))
1330 _walk_depth(subnode, depth, maxdepth)
1331
1332 elif isinstance(subnode, nodes.bullet_list):
1333 # for <ul>, determine if the depth is too large or if the
1334 # entry is to be collapsed
1335 if maxdepth > 0 and depth > maxdepth:
1336 subnode.parent.replace(subnode, [])
1337 else:
1338 # to find out what to collapse, *first* walk subitems,
1339 # since that determines which children point to the
1340 # current page
1341 _walk_depth(subnode, depth+1, maxdepth)
1342 # cull sub-entries whose parents aren't 'current'
1343 if (collapse and depth > 1 and
1344 'iscurrent' not in subnode.parent):
1345 subnode.parent.remove(subnode)
1346
1347 elif isinstance(subnode, nodes.reference):
1348 # for <a>, identify which entries point to the current
1349 # document and therefore may not be collapsed
1350 if subnode['refuri'] == docname:
1351 if not subnode['anchorname']:
1352 # give the whole branch a 'current' class
1353 # (useful for styling it differently)
1354 branchnode = subnode
1355 while branchnode:
1356 branchnode['classes'].append('current')
1357 branchnode = branchnode.parent
1358 # mark the list_item as "on current page"
1359 if subnode.parent.parent.get('iscurrent'):
1360 # but only if it's not already done
1361 return
1362 while subnode:
1363 subnode['iscurrent'] = True
1364 subnode = subnode.parent
1365
1366 def _entries_from_toctree(toctreenode, parents,
1367 separate=False, subtree=False):
1368 """Return TOC entries for a toctree node."""
1369 refs = [(e[0], str(e[1])) for e in toctreenode['entries']]
1370 entries = []
1371 for (title, ref) in refs:
1372 try:
1373 refdoc = None
1374 if url_re.match(ref):
1375 reference = nodes.reference('', '', internal=False,
1376 refuri=ref, anchorname='',
1377 *[nodes.Text(title)])
1378 para = addnodes.compact_paragraph('', '', reference)
1379 item = nodes.list_item('', para)
1380 toc = nodes.bullet_list('', item)
1381 elif ref == 'self':
1382 # 'self' refers to the document from which this
1383 # toctree originates
1384 ref = toctreenode['parent']
1385 if not title:
1386 title = clean_astext(self.titles[ref])
1387 reference = nodes.reference('', '', internal=True,
1388 refuri=ref,
1389 anchorname='',
1390 *[nodes.Text(title)])
1391 para = addnodes.compact_paragraph('', '', reference)
1392 item = nodes.list_item('', para)
1393 # don't show subitems
1394 toc = nodes.bullet_list('', item)
1395 else:
1396 if ref in parents:
1397 self.warn(ref, 'circular toctree references '
1398 'detected, ignoring: %s <- %s' %
1399 (ref, ' <- '.join(parents)))
1400 continue
1401 refdoc = ref
1402 toc = self.tocs[ref].deepcopy()
1403 self.process_only_nodes(toc, builder, ref)
1404 if title and toc.children and len(toc.children) == 1:
1405 child = toc.children[0]
1406 for refnode in child.traverse(nodes.reference):
1407 if refnode['refuri'] == ref and \
1408 not refnode['anchorname']:
1409 refnode.children = [nodes.Text(title)]
1410 if not toc.children:
1411 # empty toc means: no titles will show up in the toctree
1412 self.warn_node(
1413 'toctree contains reference to document %r that '
1414 'doesn\'t have a title: no link will be generated'
1415 % ref, toctreenode)
1416 except KeyError:
1417 # this is raised if the included file does not exist
1418 self.warn_node(
1419 'toctree contains reference to nonexisting document %r'
1420 % ref, toctreenode)
1421 else:
1422 # if titles_only is given, only keep the main title and
1423 # sub-toctrees
1424 if titles_only:
1425 # delete everything but the toplevel title(s)
1426 # and toctrees
1427 for toplevel in toc:
1428 # nodes with length 1 don't have any children anyway
1429 if len(toplevel) > 1:
1430 subtrees = toplevel.traverse(addnodes.toctree)
1431 toplevel[1][:] = subtrees
1432 # resolve all sub-toctrees
1433 for toctreenode in toc.traverse(addnodes.toctree):
1434 if not (toctreenode.get('hidden', False)
1435 and not includehidden):
1436 i = toctreenode.parent.index(toctreenode) + 1
1437 for item in _entries_from_toctree(
1438 toctreenode, [refdoc] + parents,
1439 subtree=True):
1440 toctreenode.parent.insert(i, item)
1441 i += 1
1442 toctreenode.parent.remove(toctreenode)
1443 if separate:
1444 entries.append(toc)
1445 else:
1446 entries.extend(toc.children)
1447 if not subtree and not separate:
1448 ret = nodes.bullet_list()
1449 ret += entries
1450 return [ret]
1451 return entries
1452
1453 maxdepth = maxdepth or toctree.get('maxdepth', -1)
1454 if not titles_only and toctree.get('titlesonly', False):
1455 titles_only = True
1456
1457 # NOTE: previously, this was separate=True, but that leads to artificial
1458 # separation when two or more toctree entries form a logical unit, so
1459 # separating mode is no longer used -- it's kept here for history's sake
1460 tocentries = _entries_from_toctree(toctree, [], separate=False)
1461 if not tocentries:
1462 return None
1463
1464 newnode = addnodes.compact_paragraph('', '', *tocentries)
1465 newnode['toctree'] = True
1466
1467 # prune the tree to maxdepth and replace titles, also set level classes
1468 _walk_depth(newnode, 1, prune and maxdepth or 0)
1469
1470 # set the target paths in the toctrees (they are not known at TOC
1471 # generation time)
1472 for refnode in newnode.traverse(nodes.reference):
1473 if not url_re.match(refnode['refuri']):
1474 refnode['refuri'] = builder.get_relative_uri(
1475 docname, refnode['refuri']) + refnode['anchorname']
1476 return newnode
1477
1478 def resolve_references(self, doctree, fromdocname, builder):
1479 for node in doctree.traverse(addnodes.pending_xref):
1480 contnode = node[0].deepcopy()
1481 newnode = None
1482
1483 typ = node['reftype']
1484 target = node['reftarget']
1485 refdoc = node.get('refdoc', fromdocname)
1486 domain = None
1487
1488 try:
1489 if 'refdomain' in node and node['refdomain']:
1490 # let the domain try to resolve the reference
1491 try:
1492 domain = self.domains[node['refdomain']]
1493 except KeyError:
1494 raise NoUri
1495 newnode = domain.resolve_xref(self, fromdocname, builder,
1496 typ, target, node, contnode)
1497 # really hardwired reference types
1498 elif typ == 'doc':
1499 # directly reference to document by source name;
1500 # can be absolute or relative
1501 docname = docname_join(refdoc, target)
1502 if docname in self.all_docs:
1503 if node['refexplicit']:
1504 # reference with explicit title
1505 caption = node.astext()
1506 else:
1507 caption = clean_astext(self.titles[docname])
1508 innernode = nodes.emphasis(caption, caption)
1509 newnode = nodes.reference('', '', internal=True)
1510 newnode['refuri'] = builder.get_relative_uri(
1511 fromdocname, docname)
1512 newnode.append(innernode)
1513 elif typ == 'citation':
1514 docname, labelid = self.citations.get(target, ('', ''))
1515 if docname:
1516 newnode = make_refnode(builder, fromdocname, docname,
1517 labelid, contnode)
1518 # no new node found? try the missing-reference event
1519 if newnode is None:
1520 newnode = builder.app.emit_firstresult(
1521 'missing-reference', self, node, contnode)
1522 # still not found? warn if in nit-picky mode
1523 if newnode is None:
1524 self._warn_missing_reference(
1525 fromdocname, typ, target, node, domain)
1526 except NoUri:
1527 newnode = contnode
1528 node.replace_self(newnode or contnode)
1529
1530 # remove only-nodes that do not belong to our builder
1531 self.process_only_nodes(doctree, builder, fromdocname)
1532
1533 # allow custom references to be resolved
1534 builder.app.emit('doctree-resolved', doctree, fromdocname)
1535
1536 def _warn_missing_reference(self, fromdoc, typ, target, node, domain):
1537 warn = node.get('refwarn')
1538 if self.config.nitpicky:
1539 warn = True
1540 if self._nitpick_ignore:
1541 dtype = domain and '%s:%s' % (domain.name, typ) or typ
1542 if (dtype, target) in self._nitpick_ignore:
1543 warn = False
1544 if not warn:
1545 return
1546 if domain and typ in domain.dangling_warnings:
1547 msg = domain.dangling_warnings[typ]
1548 elif typ == 'doc':
1549 msg = 'unknown document: %(target)s'
1550 elif typ == 'citation':
1551 msg = 'citation not found: %(target)s'
1552 elif node.get('refdomain', 'std') != 'std':
1553 msg = '%s:%s reference target not found: %%(target)s' % \
1554 (node['refdomain'], typ)
1555 else:
1556 msg = '%s reference target not found: %%(target)s' % typ
1557 self.warn_node(msg % {'target': target}, node)
1558
1559 def process_only_nodes(self, doctree, builder, fromdocname=None):
1560 # A comment on the comment() nodes being inserted: replacing by [] would
1561 # result in a "Losing ids" exception if there is a target node before
1562 # the only node, so we make sure docutils can transfer the id to
1563 # something, even if it's just a comment and will lose the id anyway...
1564 for node in doctree.traverse(addnodes.only):
1565 try:
1566 ret = builder.tags.eval_condition(node['expr'])
1567 except Exception, err:
1568 self.warn_node('exception while evaluating only '
1569 'directive expression: %s' % err, node)
1570 node.replace_self(node.children or nodes.comment())
1571 else:
1572 if ret:
1573 node.replace_self(node.children or nodes.comment())
1574 else:
1575 node.replace_self(nodes.comment())
1576
1577 def assign_section_numbers(self):
1578 """Assign a section number to each heading under a numbered toctree."""
1579 # a list of all docnames whose section numbers changed
1580 rewrite_needed = []
1581
1582 old_secnumbers = self.toc_secnumbers
1583 self.toc_secnumbers = {}
1584
1585 def _walk_toc(node, secnums, depth, titlenode=None):
1586 # titlenode is the title of the document, it will get assigned a
1587 # secnumber too, so that it shows up in next/prev/parent rellinks
1588 for subnode in node.children:
1589 if isinstance(subnode, nodes.bullet_list):
1590 numstack.append(0)
1591 _walk_toc(subnode, secnums, depth-1, titlenode)
1592 numstack.pop()
1593 titlenode = None
1594 elif isinstance(subnode, nodes.list_item):
1595 _walk_toc(subnode, secnums, depth, titlenode)
1596 titlenode = None
1597 elif isinstance(subnode, addnodes.only):
1598 # at this stage we don't know yet which sections are going
1599 # to be included; just include all of them, even if it leads
1600 # to gaps in the numbering
1601 _walk_toc(subnode, secnums, depth, titlenode)
1602 titlenode = None
1603 elif isinstance(subnode, addnodes.compact_paragraph):
1604 numstack[-1] += 1
1605 if depth > 0:
1606 number = tuple(numstack)
1607 else:
1608 number = None
1609 secnums[subnode[0]['anchorname']] = \
1610 subnode[0]['secnumber'] = number
1611 if titlenode:
1612 titlenode['secnumber'] = number
1613 titlenode = None
1614 elif isinstance(subnode, addnodes.toctree):
1615 _walk_toctree(subnode, depth)
1616
1617 def _walk_toctree(toctreenode, depth):
1618 if depth == 0:
1619 return
1620 for (title, ref) in toctreenode['entries']:
1621 if url_re.match(ref) or ref == 'self':
1622 # don't mess with those
1623 continue
1624 if ref in self.tocs:
1625 secnums = self.toc_secnumbers[ref] = {}
1626 _walk_toc(self.tocs[ref], secnums, depth,
1627 self.titles.get(ref))
1628 if secnums != old_secnumbers.get(ref):
1629 rewrite_needed.append(ref)
1630
1631 for docname in self.numbered_toctrees:
1632 doctree = self.get_doctree(docname)
1633 for toctreenode in doctree.traverse(addnodes.toctree):
1634 depth = toctreenode.get('numbered', 0)
1635 if depth:
1636 # every numbered toctree gets new numbering
1637 numstack = [0]
1638 _walk_toctree(toctreenode, depth)
1639
1640 return rewrite_needed
1641
1642 def create_index(self, builder, group_entries=True,
1643 _fixre=re.compile(r'(.*) ([(][^()]*[)])')):
1644 """Create the real index from the collected index entries."""
1645 new = {}
1646
1647 def add_entry(word, subword, link=True, dic=new):
1648 entry = dic.get(word)
1649 if not entry:
1650 dic[word] = entry = [[], {}]
1651 if subword:
1652 add_entry(subword, '', link=link, dic=entry[1])
1653 elif link:
1654 try:
1655 uri = builder.get_relative_uri('genindex', fn) + '#' + tid
1656 except NoUri:
1657 pass
1658 else:
1659 entry[0].append((main, uri))
1660
1661 for fn, entries in self.indexentries.iteritems():
1662 # new entry types must be listed in directives/other.py!
1663 for type, value, tid, main in entries:
1664 try:
1665 if type == 'single':
1666 try:
1667 entry, subentry = split_into(2, 'single', value)
1668 except ValueError:
1669 entry, = split_into(1, 'single', value)
1670 subentry = ''
1671 add_entry(entry, subentry)
1672 elif type == 'pair':
1673 first, second = split_into(2, 'pair', value)
1674 add_entry(first, second)
1675 add_entry(second, first)
1676 elif type == 'triple':
1677 first, second, third = split_into(3, 'triple', value)
1678 add_entry(first, second+' '+third)
1679 add_entry(second, third+', '+first)
1680 add_entry(third, first+' '+second)
1681 elif type == 'see':
1682 first, second = split_into(2, 'see', value)
1683 add_entry(first, _('see %s') % second, link=False)
1684 elif type == 'seealso':
1685 first, second = split_into(2, 'see', value)
1686 add_entry(first, _('see also %s') % second, link=False)
1687 else:
1688 self.warn(fn, 'unknown index entry type %r' % type)
1689 except ValueError, err:
1690 self.warn(fn, str(err))
1691
1692 # sort the index entries; put all symbols at the front, even those
1693 # following the letters in ASCII, this is where the chr(127) comes from
1694 def keyfunc(entry, lcletters=string.ascii_lowercase + '_'):
1695 lckey = unicodedata.normalize('NFD', entry[0].lower())
1696 if lckey[0:1] in lcletters:
1697 return chr(127) + lckey
1698 return lckey
1699 newlist = new.items()
1700 newlist.sort(key=keyfunc)
1701
1702 if group_entries:
1703 # fixup entries: transform
1704 # func() (in module foo)
1705 # func() (in module bar)
1706 # into
1707 # func()
1708 # (in module foo)
1709 # (in module bar)
1710 oldkey = ''
1711 oldsubitems = None
1712 i = 0
1713 while i < len(newlist):
1714 key, (targets, subitems) = newlist[i]
1715 # cannot move if it has subitems; structure gets too complex
1716 if not subitems:
1717 m = _fixre.match(key)
1718 if m:
1719 if oldkey == m.group(1):
1720 # prefixes match: add entry as subitem of the
1721 # previous entry
1722 oldsubitems.setdefault(m.group(2), [[], {}])[0].\
1723 extend(targets)
1724 del newlist[i]
1725 continue
1726 oldkey = m.group(1)
1727 else:
1728 oldkey = key
1729 oldsubitems = subitems
1730 i += 1
1731
1732 # group the entries by letter
1733 def keyfunc2(item, letters=string.ascii_uppercase + '_'):
1734 # hack: mutating the subitems dicts to a list in the keyfunc
1735 k, v = item
1736 v[1] = sorted((si, se) for (si, (se, void)) in v[1].iteritems())
1737 # now calculate the key
1738 letter = unicodedata.normalize('NFD', k[0])[0].upper()
1739 if letter in letters:
1740 return letter
1741 else:
1742 # get all other symbols under one heading
1743 return 'Symbols'
1744 return [(key, list(group))
1745 for (key, group) in groupby(newlist, keyfunc2)]
1746
1747 def collect_relations(self):
1748 relations = {}
1749 getinc = self.toctree_includes.get
1750 def collect(parents, parents_set, docname, previous, next):
1751 # circular relationship?
1752 if docname in parents_set:
1753 # we will warn about this in resolve_toctree()
1754 return
1755 includes = getinc(docname)
1756 # previous
1757 if not previous:
1758 # if no previous sibling, go to parent
1759 previous = parents[0][0]
1760 else:
1761 # else, go to previous sibling, or if it has children, to
1762 # the last of its children, or if that has children, to the
1763 # last of those, and so forth
1764 while 1:
1765 previncs = getinc(previous)
1766 if previncs:
1767 previous = previncs[-1]
1768 else:
1769 break
1770 # next
1771 if includes:
1772 # if it has children, go to first of them
1773 next = includes[0]
1774 elif next:
1775 # else, if next sibling, go to it
1776 pass
1777 else:
1778 # else, go to the next sibling of the parent, if present,
1779 # else the grandparent's sibling, if present, and so forth
1780 for parname, parindex in parents:
1781 parincs = getinc(parname)
1782 if parincs and parindex + 1 < len(parincs):
1783 next = parincs[parindex+1]
1784 break
1785 # else it will stay None
1786 # same for children
1787 if includes:
1788 for subindex, args in enumerate(izip(includes,
1789 [None] + includes,
1790 includes[1:] + [None])):
1791 collect([(docname, subindex)] + parents,
1792 parents_set.union([docname]), *args)
1793 relations[docname] = [parents[0][0], previous, next]
1794 collect([(None, 0)], set(), self.config.master_doc, None, None)
1795 return relations
1796
1797 def check_consistency(self):
1798 """Do consistency checks."""
1799 for docname in sorted(self.all_docs):
1800 if docname not in self.files_to_rebuild:
1801 if docname == self.config.master_doc:
1802 # the master file is not included anywhere ;)
1803 continue
1804 if 'orphan' in self.metadata[docname]:
1805 continue
1806 self.warn(docname, 'document isn\'t included in any toctree')
1807
01808
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff'
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx'
=== removed directory '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers'
=== removed file '.pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py'
--- .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py 2012-10-22 20:20:35 +0000
+++ .pc/fix_manpages_generation_with_new_docutils.diff/sphinx/writers/manpage.py 1970-01-01 00:00:00 +0000
@@ -1,345 +0,0 @@
1# -*- coding: utf-8 -*-
2"""
3 sphinx.writers.manpage
4 ~~~~~~~~~~~~~~~~~~~~~~
5
6 Manual page writer, extended for Sphinx custom nodes.
7
8 :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
9 :license: BSD, see LICENSE for details.
10"""
11
12from docutils import nodes
13try:
14 from docutils.writers.manpage import MACRO_DEF, Writer, \
15 Translator as BaseTranslator
16 has_manpage_writer = True
17except ImportError:
18 # define the classes in any case, sphinx.application needs it
19 Writer = BaseTranslator = object
20 has_manpage_writer = False
21
22from sphinx import addnodes
23from sphinx.locale import admonitionlabels, versionlabels, _
24from sphinx.util.osutil import ustrftime
25
26
27class ManualPageWriter(Writer):
28 def __init__(self, builder):
29 Writer.__init__(self)
30 self.builder = builder
31
32 def translate(self):
33 visitor = ManualPageTranslator(self.builder, self.document)
34 self.visitor = visitor
35 self.document.walkabout(visitor)
36 self.output = visitor.astext()
37
38
39class ManualPageTranslator(BaseTranslator):
40 """
41 Custom translator.
42 """
43
44 def __init__(self, builder, *args, **kwds):
45 BaseTranslator.__init__(self, *args, **kwds)
46 self.builder = builder
47
48 self.in_productionlist = 0
49
50 # first title is the manpage title
51 self.section_level = -1
52
53 # docinfo set by man_pages config value
54 self._docinfo['title'] = self.document.settings.title
55 self._docinfo['subtitle'] = self.document.settings.subtitle
56 if self.document.settings.authors:
57 # don't set it if no author given
58 self._docinfo['author'] = self.document.settings.authors
59 self._docinfo['manual_section'] = self.document.settings.section
60
61 # docinfo set by other config values
62 self._docinfo['title_upper'] = self._docinfo['title'].upper()
63 if builder.config.today:
64 self._docinfo['date'] = builder.config.today
65 else:
66 self._docinfo['date'] = ustrftime(builder.config.today_fmt
67 or _('%B %d, %Y'))
68 self._docinfo['copyright'] = builder.config.copyright
69 self._docinfo['version'] = builder.config.version
70 self._docinfo['manual_group'] = builder.config.project
71
72 # since self.append_header() is never called, need to do this here
73 self.body.append(MACRO_DEF)
74
75 # overwritten -- added quotes around all .TH arguments
76 def header(self):
77 tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
78 " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
79 ".SH NAME\n"
80 "%(title)s \- %(subtitle)s\n")
81 return tmpl % self._docinfo
82
83 def visit_start_of_file(self, node):
84 pass
85 def depart_start_of_file(self, node):
86 pass
87
88 def visit_desc(self, node):
89 self.visit_definition_list(node)
90 def depart_desc(self, node):
91 self.depart_definition_list(node)
92
93 def visit_desc_signature(self, node):
94 self.visit_definition_list_item(node)
95 self.visit_term(node)
96 def depart_desc_signature(self, node):
97 self.depart_term(node)
98
99 def visit_desc_addname(self, node):
100 pass
101 def depart_desc_addname(self, node):
102 pass
103
104 def visit_desc_type(self, node):
105 pass
106 def depart_desc_type(self, node):
107 pass
108
109 def visit_desc_returns(self, node):
110 self.body.append(' -> ')
111 def depart_desc_returns(self, node):
112 pass
113
114 def visit_desc_name(self, node):
115 pass
116 def depart_desc_name(self, node):
117 pass
118
119 def visit_desc_parameterlist(self, node):
120 self.body.append('(')
121 self.first_param = 1
122 def depart_desc_parameterlist(self, node):
123 self.body.append(')')
124
125 def visit_desc_parameter(self, node):
126 if not self.first_param:
127 self.body.append(', ')
128 else:
129 self.first_param = 0
130 def depart_desc_parameter(self, node):
131 pass
132
133 def visit_desc_optional(self, node):
134 self.body.append('[')
135 def depart_desc_optional(self, node):
136 self.body.append(']')
137
138 def visit_desc_annotation(self, node):
139 pass
140 def depart_desc_annotation(self, node):
141 pass
142
143 def visit_desc_content(self, node):
144 self.visit_definition(node)
145 def depart_desc_content(self, node):
146 self.depart_definition(node)
147
148 def visit_refcount(self, node):
149 self.body.append(self.defs['emphasis'][0])
150 def depart_refcount(self, node):
151 self.body.append(self.defs['emphasis'][1])
152
153 def visit_versionmodified(self, node):
154 self.visit_paragraph(node)
155 text = versionlabels[node['type']] % node['version']
156 if len(node):
157 text += ': '
158 else:
159 text += '.'
160 self.body.append(text)
161 def depart_versionmodified(self, node):
162 self.depart_paragraph(node)
163
164 def visit_termsep(self, node):
165 self.body.append(', ')
166 raise nodes.SkipNode
167
168 # overwritten -- we don't want source comments to show up
169 def visit_comment(self, node):
170 raise nodes.SkipNode
171
172 # overwritten -- added ensure_eol()
173 def visit_footnote(self, node):
174 self.ensure_eol()
175 BaseTranslator.visit_footnote(self, node)
176
177 # overwritten -- handle footnotes rubric
178 def visit_rubric(self, node):
179 self.ensure_eol()
180 if len(node.children) == 1:
181 rubtitle = node.children[0].astext()
182 if rubtitle in ('Footnotes', _('Footnotes')):
183 self.body.append('.SH ' + self.deunicode(rubtitle).upper() +
184 '\n')
185 raise nodes.SkipNode
186 else:
187 self.body.append('.sp\n')
188 def depart_rubric(self, node):
189 pass
190
191 def visit_seealso(self, node):
192 self.visit_admonition(node)
193 def depart_seealso(self, node):
194 self.depart_admonition(node)
195
196 # overwritten -- use our own label translations
197 def visit_admonition(self, node, name=None):
198 if name:
199 self.body.append('.IP %s\n' %
200 self.deunicode(admonitionlabels.get(name, name)))
201
202 def visit_productionlist(self, node):
203 self.ensure_eol()
204 names = []
205 self.in_productionlist += 1
206 self.body.append('.sp\n.nf\n')
207 for production in node:
208 names.append(production['tokenname'])
209 maxlen = max(len(name) for name in names)
210 for production in node:
211 if production['tokenname']:
212 lastname = production['tokenname'].ljust(maxlen)
213 self.body.append(self.defs['strong'][0])
214 self.body.append(self.deunicode(lastname))
215 self.body.append(self.defs['strong'][1])
216 self.body.append(' ::= ')
217 else:
218 self.body.append('%s ' % (' '*len(lastname)))
219 production.walkabout(self)
220 self.body.append('\n')
221 self.body.append('\n.fi\n')
222 self.in_productionlist -= 1
223 raise nodes.SkipNode
224
225 def visit_production(self, node):
226 pass
227 def depart_production(self, node):
228 pass
229
230 # overwritten -- don't emit a warning for images
231 def visit_image(self, node):
232 if 'alt' in node.attributes:
233 self.body.append(_('[image: %s]') % node['alt'] + '\n')
234 self.body.append(_('[image]') + '\n')
235 raise nodes.SkipNode
236
237 # overwritten -- don't visit inner marked up nodes
238 def visit_reference(self, node):
239 self.body.append(self.defs['reference'][0])
240 self.body.append(node.astext())
241 self.body.append(self.defs['reference'][1])
242
243 uri = node.get('refuri', '')
244 if uri.startswith('mailto:') or uri.startswith('http:') or \
245 uri.startswith('https:') or uri.startswith('ftp:'):
246 # if configured, put the URL after the link
247 if self.builder.config.man_show_urls and \
248 node.astext() != uri:
249 if uri.startswith('mailto:'):
250 uri = uri[7:]
251 self.body.extend([
252 ' <',
253 self.defs['strong'][0], uri, self.defs['strong'][1],
254 '>'])
255 raise nodes.SkipNode
256
257 def visit_centered(self, node):
258 self.ensure_eol()
259 self.body.append('.sp\n.ce\n')
260 def depart_centered(self, node):
261 self.body.append('\n.ce 0\n')
262
263 def visit_compact_paragraph(self, node):
264 pass
265 def depart_compact_paragraph(self, node):
266 pass
267
268 def visit_highlightlang(self, node):
269 pass
270 def depart_highlightlang(self, node):
271 pass
272
273 def visit_download_reference(self, node):
274 pass
275 def depart_download_reference(self, node):
276 pass
277
278 def visit_toctree(self, node):
279 raise nodes.SkipNode
280
281 def visit_index(self, node):
282 raise nodes.SkipNode
283
284 def visit_tabular_col_spec(self, node):
285 raise nodes.SkipNode
286
287 def visit_glossary(self, node):
288 pass
289 def depart_glossary(self, node):
290 pass
291
292 def visit_acks(self, node):
293 self.ensure_eol()
294 self.body.append(', '.join(n.astext()
295 for n in node.children[0].children) + '.')
296 self.body.append('\n')
297 raise nodes.SkipNode
298
299 def visit_hlist(self, node):
300 self.visit_bullet_list(node)
301 def depart_hlist(self, node):
302 self.depart_bullet_list(node)
303
304 def visit_hlistcol(self, node):
305 pass
306 def depart_hlistcol(self, node):
307 pass
308
309 def visit_literal_emphasis(self, node):
310 return self.visit_emphasis(node)
311 def depart_literal_emphasis(self, node):
312 return self.depart_emphasis(node)
313
314 def visit_abbreviation(self, node):
315 pass
316 def depart_abbreviation(self, node):
317 pass
318
319 # overwritten: handle section titles better than in 0.6 release
320 def visit_title(self, node):
321 if isinstance(node.parent, addnodes.seealso):
322 self.body.append('.IP "')
323 return
324 elif isinstance(node.parent, nodes.section):
325 if self.section_level == 0:
326 # skip the document title
327 raise nodes.SkipNode
328 elif self.section_level == 1:
329 self.body.append('.SH %s\n' %
330 self.deunicode(node.astext().upper()))
331 raise nodes.SkipNode
332 return BaseTranslator.visit_title(self, node)
333 def depart_title(self, node):
334 if isinstance(node.parent, addnodes.seealso):
335 self.body.append('"\n')
336 return
337 return BaseTranslator.depart_title(self, node)
338
339 def visit_raw(self, node):
340 if 'manpage' in node.get('format', '').split():
341 self.body.append(node.astext())
342 raise nodes.SkipNode
343
344 def unknown_visit(self, node):
345 raise NotImplementedError('Unknown node: ' + node.__class__.__name__)
3460
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff'
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx'
=== added directory '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers'
=== added file '.pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py'
--- .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 1970-01-01 00:00:00 +0000
+++ .pc/manpage_writer_docutils_0.10_api.diff/sphinx/writers/manpage.py 2013-02-16 12:25:23 +0000
@@ -0,0 +1,345 @@
1# -*- coding: utf-8 -*-
2"""
3 sphinx.writers.manpage
4 ~~~~~~~~~~~~~~~~~~~~~~
5
6 Manual page writer, extended for Sphinx custom nodes.
7
8 :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
9 :license: BSD, see LICENSE for details.
10"""
11
12from docutils import nodes
13try:
14 from docutils.writers.manpage import MACRO_DEF, Writer, \
15 Translator as BaseTranslator
16 has_manpage_writer = True
17except ImportError:
18 # define the classes in any case, sphinx.application needs it
19 Writer = BaseTranslator = object
20 has_manpage_writer = False
21
22from sphinx import addnodes
23from sphinx.locale import admonitionlabels, versionlabels, _
24from sphinx.util.osutil import ustrftime
25
26
27class ManualPageWriter(Writer):
28 def __init__(self, builder):
29 Writer.__init__(self)
30 self.builder = builder
31
32 def translate(self):
33 visitor = ManualPageTranslator(self.builder, self.document)
34 self.visitor = visitor
35 self.document.walkabout(visitor)
36 self.output = visitor.astext()
37
38
39class ManualPageTranslator(BaseTranslator):
40 """
41 Custom translator.
42 """
43
44 def __init__(self, builder, *args, **kwds):
45 BaseTranslator.__init__(self, *args, **kwds)
46 self.builder = builder
47
48 self.in_productionlist = 0
49
50 # first title is the manpage title
51 self.section_level = -1
52
53 # docinfo set by man_pages config value
54 self._docinfo['title'] = self.document.settings.title
55 self._docinfo['subtitle'] = self.document.settings.subtitle
56 if self.document.settings.authors:
57 # don't set it if no author given
58 self._docinfo['author'] = self.document.settings.authors
59 self._docinfo['manual_section'] = self.document.settings.section
60
61 # docinfo set by other config values
62 self._docinfo['title_upper'] = self._docinfo['title'].upper()
63 if builder.config.today:
64 self._docinfo['date'] = builder.config.today
65 else:
66 self._docinfo['date'] = ustrftime(builder.config.today_fmt
67 or _('%B %d, %Y'))
68 self._docinfo['copyright'] = builder.config.copyright
69 self._docinfo['version'] = builder.config.version
70 self._docinfo['manual_group'] = builder.config.project
71
72 # since self.append_header() is never called, need to do this here
73 self.body.append(MACRO_DEF)
74
75 # overwritten -- added quotes around all .TH arguments
76 def header(self):
77 tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
78 " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
79 ".SH NAME\n"
80 "%(title)s \- %(subtitle)s\n")
81 return tmpl % self._docinfo
82
83 def visit_start_of_file(self, node):
84 pass
85 def depart_start_of_file(self, node):
86 pass
87
88 def visit_desc(self, node):
89 self.visit_definition_list(node)
90 def depart_desc(self, node):
91 self.depart_definition_list(node)
92
93 def visit_desc_signature(self, node):
94 self.visit_definition_list_item(node)
95 self.visit_term(node)
96 def depart_desc_signature(self, node):
97 self.depart_term(node)
98
99 def visit_desc_addname(self, node):
100 pass
101 def depart_desc_addname(self, node):
102 pass
103
104 def visit_desc_type(self, node):
105 pass
106 def depart_desc_type(self, node):
107 pass
108
109 def visit_desc_returns(self, node):
110 self.body.append(' -> ')
111 def depart_desc_returns(self, node):
112 pass
113
114 def visit_desc_name(self, node):
115 pass
116 def depart_desc_name(self, node):
117 pass
118
119 def visit_desc_parameterlist(self, node):
120 self.body.append('(')
121 self.first_param = 1
122 def depart_desc_parameterlist(self, node):
123 self.body.append(')')
124
125 def visit_desc_parameter(self, node):
126 if not self.first_param:
127 self.body.append(', ')
128 else:
129 self.first_param = 0
130 def depart_desc_parameter(self, node):
131 pass
132
133 def visit_desc_optional(self, node):
134 self.body.append('[')
135 def depart_desc_optional(self, node):
136 self.body.append(']')
137
138 def visit_desc_annotation(self, node):
139 pass
140 def depart_desc_annotation(self, node):
141 pass
142
143 def visit_desc_content(self, node):
144 self.visit_definition(node)
145 def depart_desc_content(self, node):
146 self.depart_definition(node)
147
148 def visit_refcount(self, node):
149 self.body.append(self.defs['emphasis'][0])
150 def depart_refcount(self, node):
151 self.body.append(self.defs['emphasis'][1])
152
153 def visit_versionmodified(self, node):
154 self.visit_paragraph(node)
155 text = versionlabels[node['type']] % node['version']
156 if len(node):
157 text += ': '
158 else:
159 text += '.'
160 self.body.append(text)
161 def depart_versionmodified(self, node):
162 self.depart_paragraph(node)
163
164 def visit_termsep(self, node):
165 self.body.append(', ')
166 raise nodes.SkipNode
167
168 # overwritten -- we don't want source comments to show up
169 def visit_comment(self, node):
170 raise nodes.SkipNode
171
172 # overwritten -- added ensure_eol()
173 def visit_footnote(self, node):
174 self.ensure_eol()
175 BaseTranslator.visit_footnote(self, node)
176
177 # overwritten -- handle footnotes rubric
178 def visit_rubric(self, node):
179 self.ensure_eol()
180 if len(node.children) == 1:
181 rubtitle = node.children[0].astext()
182 if rubtitle in ('Footnotes', _('Footnotes')):
183 self.body.append('.SH ' + self.deunicode(rubtitle).upper() +
184 '\n')
185 raise nodes.SkipNode
186 else:
187 self.body.append('.sp\n')
188 def depart_rubric(self, node):
189 pass
190
191 def visit_seealso(self, node):
192 self.visit_admonition(node)
193 def depart_seealso(self, node):
194 self.depart_admonition(node)
195
196 # overwritten -- use our own label translations
197 def visit_admonition(self, node, name=None):
198 if name:
199 self.body.append('.IP %s\n' %
200 self.deunicode(admonitionlabels.get(name, name)))
201
202 def visit_productionlist(self, node):
203 self.ensure_eol()
204 names = []
205 self.in_productionlist += 1
206 self.body.append('.sp\n.nf\n')
207 for production in node:
208 names.append(production['tokenname'])
209 maxlen = max(len(name) for name in names)
210 for production in node:
211 if production['tokenname']:
212 lastname = production['tokenname'].ljust(maxlen)
213 self.body.append(self.defs['strong'][0])
214 self.body.append(self.deunicode(lastname))
215 self.body.append(self.defs['strong'][1])
216 self.body.append(' ::= ')
217 else:
218 self.body.append('%s ' % (' '*len(lastname)))
219 production.walkabout(self)
220 self.body.append('\n')
221 self.body.append('\n.fi\n')
222 self.in_productionlist -= 1
223 raise nodes.SkipNode
224
225 def visit_production(self, node):
226 pass
227 def depart_production(self, node):
228 pass
229
230 # overwritten -- don't emit a warning for images
231 def visit_image(self, node):
232 if 'alt' in node.attributes:
233 self.body.append(_('[image: %s]') % node['alt'] + '\n')
234 self.body.append(_('[image]') + '\n')
235 raise nodes.SkipNode
236
237 # overwritten -- don't visit inner marked up nodes
238 def visit_reference(self, node):
239 self.body.append(self.defs['reference'][0])
240 self.body.append(node.astext())
241 self.body.append(self.defs['reference'][1])
242
243 uri = node.get('refuri', '')
244 if uri.startswith('mailto:') or uri.startswith('http:') or \
245 uri.startswith('https:') or uri.startswith('ftp:'):
246 # if configured, put the URL after the link
247 if self.builder.config.man_show_urls and \
248 node.astext() != uri:
249 if uri.startswith('mailto:'):
250 uri = uri[7:]
251 self.body.extend([
252 ' <',
253 self.defs['strong'][0], uri, self.defs['strong'][1],
254 '>'])
255 raise nodes.SkipNode
256
257 def visit_centered(self, node):
258 self.ensure_eol()
259 self.body.append('.sp\n.ce\n')
260 def depart_centered(self, node):
261 self.body.append('\n.ce 0\n')
262
263 def visit_compact_paragraph(self, node):
264 pass
265 def depart_compact_paragraph(self, node):
266 pass
267
268 def visit_highlightlang(self, node):
269 pass
270 def depart_highlightlang(self, node):
271 pass
272
273 def visit_download_reference(self, node):
274 pass
275 def depart_download_reference(self, node):
276 pass
277
278 def visit_toctree(self, node):
279 raise nodes.SkipNode
280
281 def visit_index(self, node):
282 raise nodes.SkipNode
283
284 def visit_tabular_col_spec(self, node):
285 raise nodes.SkipNode
286
287 def visit_glossary(self, node):
288 pass
289 def depart_glossary(self, node):
290 pass
291
292 def visit_acks(self, node):
293 self.ensure_eol()
294 self.body.append(', '.join(n.astext()
295 for n in node.children[0].children) + '.')
296 self.body.append('\n')
297 raise nodes.SkipNode
298
299 def visit_hlist(self, node):
300 self.visit_bullet_list(node)
301 def depart_hlist(self, node):
302 self.depart_bullet_list(node)
303
304 def visit_hlistcol(self, node):
305 pass
306 def depart_hlistcol(self, node):
307 pass
308
309 def visit_literal_emphasis(self, node):
310 return self.visit_emphasis(node)
311 def depart_literal_emphasis(self, node):
312 return self.depart_emphasis(node)
313
314 def visit_abbreviation(self, node):
315 pass
316 def depart_abbreviation(self, node):
317 pass
318
319 # overwritten: handle section titles better than in 0.6 release
320 def visit_title(self, node):
321 if isinstance(node.parent, addnodes.seealso):
322 self.body.append('.IP "')
323 return
324 elif isinstance(node.parent, nodes.section):
325 if self.section_level == 0:
326 # skip the document title
327 raise nodes.SkipNode
328 elif self.section_level == 1:
329 self.body.append('.SH %s\n' %
330 self.deunicode(node.astext().upper()))
331 raise nodes.SkipNode
332 return BaseTranslator.visit_title(self, node)
333 def depart_title(self, node):
334 if isinstance(node.parent, addnodes.seealso):
335 self.body.append('"\n')
336 return
337 return BaseTranslator.depart_title(self, node)
338
339 def visit_raw(self, node):
340 if 'manpage' in node.get('format', '').split():
341 self.body.append(node.astext())
342 raise nodes.SkipNode
343
344 def unknown_visit(self, node):
345 raise NotImplementedError('Unknown node: ' + node.__class__.__name__)
0346
=== added directory '.pc/parallel_2to3.diff'
=== added file '.pc/parallel_2to3.diff/setup.py'
--- .pc/parallel_2to3.diff/setup.py 1970-01-01 00:00:00 +0000
+++ .pc/parallel_2to3.diff/setup.py 2013-02-16 12:25:23 +0000
@@ -0,0 +1,209 @@
1# -*- coding: utf-8 -*-
2try:
3 from setuptools import setup, find_packages
4except ImportError:
5 raise
6 import distribute_setup
7 distribute_setup.use_setuptools()
8 from setuptools import setup, find_packages
9
10import os
11import sys
12from distutils import log
13
14import sphinx
15
16long_desc = '''
17Sphinx is a tool that makes it easy to create intelligent and beautiful
18documentation for Python projects (or other documents consisting of multiple
19reStructuredText sources), written by Georg Brandl. It was originally created
20for the new Python documentation, and has excellent facilities for Python
21project documentation, but C/C++ is supported as well, and more languages are
22planned.
23
24Sphinx uses reStructuredText as its markup language, and many of its strengths
25come from the power and straightforwardness of reStructuredText and its parsing
26and translating suite, the Docutils.
27
28Among its features are the following:
29
30* Output formats: HTML (including derivative formats such as HTML Help, Epub
31 and Qt Help), plain text, manual pages and LaTeX or direct PDF output
32 using rst2pdf
33* Extensive cross-references: semantic markup and automatic links
34 for functions, classes, glossary terms and similar pieces of information
35* Hierarchical structure: easy definition of a document tree, with automatic
36 links to siblings, parents and children
37* Automatic indices: general index as well as a module index
38* Code handling: automatic highlighting using the Pygments highlighter
39* Flexible HTML output using the Jinja 2 templating engine
40* Various extensions are available, e.g. for automatic testing of snippets
41 and inclusion of appropriately formatted docstrings
42* Setuptools integration
43
44A development egg can be found `here
45<http://bitbucket.org/birkenfeld/sphinx/get/tip.gz#egg=Sphinx-dev>`_.
46'''
47
48requires = ['Pygments>=1.2', 'Jinja2>=2.3', 'docutils>=0.7']
49
50if sys.version_info < (2, 4):
51 print('ERROR: Sphinx requires at least Python 2.4 to run.')
52 sys.exit(1)
53
54if sys.version_info < (2, 5):
55 # Python 2.4's distutils doesn't automatically install an egg-info,
56 # so an existing docutils install won't be detected -- in that case,
57 # remove the dependency from setup.py
58 try:
59 import docutils
60 if int(docutils.__version__[2]) < 4:
61 raise ValueError('docutils not recent enough')
62 except:
63 pass
64 else:
65 del requires[-1]
66
67 # The uuid module is new in the stdlib in 2.5
68 requires.append('uuid>=1.30')
69
70
71# Provide a "compile_catalog" command that also creates the translated
72# JavaScript files if Babel is available.
73
74cmdclass = {}
75
76try:
77 from babel.messages.pofile import read_po
78 from babel.messages.frontend import compile_catalog
79 try:
80 from simplejson import dump
81 except ImportError:
82 from json import dump
83except ImportError:
84 pass
85else:
86 class compile_catalog_plusjs(compile_catalog):
87 """
88 An extended command that writes all message strings that occur in
89 JavaScript files to a JavaScript file along with the .mo file.
90
91 Unfortunately, babel's setup command isn't built very extensible, so
92 most of the run() code is duplicated here.
93 """
94
95 def run(self):
96 compile_catalog.run(self)
97
98 po_files = []
99 js_files = []
100
101 if not self.input_file:
102 if self.locale:
103 po_files.append((self.locale,
104 os.path.join(self.directory, self.locale,
105 'LC_MESSAGES',
106 self.domain + '.po')))
107 js_files.append(os.path.join(self.directory, self.locale,
108 'LC_MESSAGES',
109 self.domain + '.js'))
110 else:
111 for locale in os.listdir(self.directory):
112 po_file = os.path.join(self.directory, locale,
113 'LC_MESSAGES',
114 self.domain + '.po')
115 if os.path.exists(po_file):
116 po_files.append((locale, po_file))
117 js_files.append(os.path.join(self.directory, locale,
118 'LC_MESSAGES',
119 self.domain + '.js'))
120 else:
121 po_files.append((self.locale, self.input_file))
122 if self.output_file:
123 js_files.append(self.output_file)
124 else:
125 js_files.append(os.path.join(self.directory, self.locale,
126 'LC_MESSAGES',
127 self.domain + '.js'))
128
129 for js_file, (locale, po_file) in zip(js_files, po_files):
130 infile = open(po_file, 'r')
131 try:
132 catalog = read_po(infile, locale)
133 finally:
134 infile.close()
135
136 if catalog.fuzzy and not self.use_fuzzy:
137 continue
138
139 log.info('writing JavaScript strings in catalog %r to %r',
140 po_file, js_file)
141
142 jscatalog = {}
143 for message in catalog:
144 if any(x[0].endswith('.js') for x in message.locations):
145 msgid = message.id
146 if isinstance(msgid, (list, tuple)):
147 msgid = msgid[0]
148 jscatalog[msgid] = message.string
149
150 outfile = open(js_file, 'wb')
151 try:
152 outfile.write('Documentation.addTranslations(');
153 dump(dict(
154 messages=jscatalog,
155 plural_expr=catalog.plural_expr,
156 locale=str(catalog.locale)
157 ), outfile)
158 outfile.write(');')
159 finally:
160 outfile.close()
161
162 cmdclass['compile_catalog'] = compile_catalog_plusjs
163
164
165setup(
166 name='Sphinx',
167 version=sphinx.__version__,
168 url='http://sphinx.pocoo.org/',
169 download_url='http://pypi.python.org/pypi/Sphinx',
170 license='BSD',
171 author='Georg Brandl',
172 author_email='georg@python.org',
173 description='Python documentation generator',
174 long_description=long_desc,
175 zip_safe=False,
176 classifiers=[
177 'Development Status :: 5 - Production/Stable',
178 'Environment :: Console',
179 'Environment :: Web Environment',
180 'Intended Audience :: Developers',
181 'Intended Audience :: Education',
182 'License :: OSI Approved :: BSD License',
183 'Operating System :: OS Independent',
184 'Programming Language :: Python',
185 'Programming Language :: Python :: 2',
186 'Programming Language :: Python :: 3',
187 'Topic :: Documentation',
188 'Topic :: Text Processing',
189 'Topic :: Utilities',
190 ],
191 platforms='any',
192 packages=find_packages(exclude=['custom_fixers', 'test']),
193 include_package_data=True,
194 entry_points={
195 'console_scripts': [
196 'sphinx-build = sphinx:main',
197 'sphinx-quickstart = sphinx.quickstart:main',
198 'sphinx-apidoc = sphinx.apidoc:main',
199 'sphinx-autogen = sphinx.ext.autosummary.generate:main',
200 ],
201 'distutils.commands': [
202 'build_sphinx = sphinx.setup_command:BuildDoc',
203 ],
204 },
205 install_requires=requires,
206 cmdclass=cmdclass,
207 use_2to3=True,
208 use_2to3_fixers=['custom_fixers'],
209)
0210
=== modified file 'debian/changelog'
--- debian/changelog 2012-11-27 19:20:44 +0000
+++ debian/changelog 2013-02-16 12:25:23 +0000
@@ -1,3 +1,27 @@
+sphinx (1.1.3+dfsg-7ubuntu1) raring; urgency=low
+
+  * Merge with Debian experimental. Remaining Ubuntu changes:
+    - Switch to dh_python2.
+    - debian/rules: export NO_PKG_MANGLE=1 in order to not have translations
+      stripped.
+    - debian/control: Drop the build-dependency on python-whoosh.
+    - debian/control: Add "XS-Testsuite: autopkgtest" header.
+  * debian/patches/fix_manpages_generation_with_new_docutils.diff:
+    dropped, applied in Debian as manpage_writer_docutils_0.10_api.diff.
+  * debian/patches/fix_literal_block_warning.diff: add patch to avoid
+    false-positive "Literal block expected; none found." warnings when
+    building l10n projects.
+
+ -- Dmitry Shachnev <mitya57@ubuntu.com>  Sat, 16 Feb 2013 14:51:12 +0400
+
+sphinx (1.1.3+dfsg-7) experimental; urgency=low
+
+  * Backport upstream patch for fix compatibility with Docutils 0.10.
+  * Run 2to3 in parallel.
+  * Add DEP-8 tests for the documentation package.
+
+ -- Jakub Wilk <jwilk@debian.org>  Wed, 19 Dec 2012 10:53:51 +0100
+
 sphinx (1.1.3+dfsg-5ubuntu1) raring; urgency=low

   * Merge with Debian packaging SVN.
@@ -14,17 +38,24 @@

  -- Dmitry Shachnev <mitya57@ubuntu.com>  Tue, 27 Nov 2012 19:20:44 +0400

-sphinx (1.1.3+dfsg-6) UNRELEASED; urgency=low
+sphinx (1.1.3+dfsg-6) experimental; urgency=low

   [ Jakub Wilk ]
   * DEP-8 tests: remove “Features: no-build-needed”; it's the default now.
   * Bump standards version to 3.9.4; no changes needed.
+  * Pass -a to xvfb-run, so that it tries to get a free server number.
+  * Rebuild MO files from source.
+    + Update debian/rules.
+    + Add the rebuilt files to extend-diff-ignore.
+  * Make synopses in the patch header start with a lowercase latter and not
+    end with a full stop.

   [ Dmitry Shachnev ]
   * debian/patches/l10n_fixes.diff: fix crashes and not working external
     links in l10n mode (closes: #691719).
+  * debian/patches/sort_stopwords.diff: mark as applied upstream.

- -- Jakub Wilk <jwilk@debian.org>  Tue, 13 Nov 2012 22:36:10 +0100
+ -- Jakub Wilk <jwilk@debian.org>  Sat, 08 Dec 2012 14:38:19 +0100

 sphinx (1.1.3+dfsg-5) experimental; urgency=low

=== modified file 'debian/jstest/run-tests'
--- debian/jstest/run-tests 2012-03-12 12:18:37 +0000
+++ debian/jstest/run-tests 2013-02-16 12:25:23 +0000
@@ -26,7 +26,7 @@
 if __name__ == '__main__':
     if not os.getenv('DISPLAY'):
         raise RuntimeError('These tests requires access to an X server')
-    build_directory = os.path.join(os.path.dirname(__file__), '..', '..', 'build', 'html')
+    [build_directory] = sys.argv[1:]
     build_directory = os.path.abspath(build_directory)
     n_failures = 0
     for testcase in t1, t2, t3:
3333
=== added file 'debian/patches/fix_literal_block_warning.diff'
--- debian/patches/fix_literal_block_warning.diff 1970-01-01 00:00:00 +0000
+++ debian/patches/fix_literal_block_warning.diff 2013-02-16 12:25:23 +0000
@@ -0,0 +1,17 @@
1Description: avoid false-positive warnings about missing literal block
2Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/e2338c4fcf86
3Last-Update: 2013-02-16
4
5--- a/sphinx/environment.py 2013-02-16 15:24:05.699388441 +0400
6+++ b/sphinx/environment.py 2013-02-16 15:24:05.691388443 +0400
7@@ -218,6 +218,10 @@
8 if not msgstr or msgstr == msg: # as-of-yet untranslated
9 continue
10
11+ # Avoid "Literal block expected; none found." warnings.
12+ if msgstr.strip().endswith('::'):
13+ msgstr += '\n\n dummy literal'
14+
15 patch = new_document(source, settings)
16 parser.parse(msgstr, patch)
17 patch = patch[0]
018
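
For context, the docutils behaviour behind this patch can be reproduced with a minimal sketch (assuming only a stock docutils install; nothing here is shipped by the package):

    # Minimal sketch of the warning that fix_literal_block_warning.diff avoids.
    from docutils.core import publish_doctree

    msgstr = 'See the following example::'   # a translation ending with '::'
    publish_doctree(msgstr)    # warns on stderr: "Literal block expected; none found."

    msgstr += '\n\n   dummy literal'         # append an indented dummy block, as the patch does
    publish_doctree(msgstr)    # parses without the warning

The dummy block never reaches the output, because the surrounding Sphinx code only keeps the first child of the freshly parsed document (patch = patch[0]).
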
=== removed file 'debian/patches/fix_manpages_generation_with_new_docutils.diff'
--- debian/patches/fix_manpages_generation_with_new_docutils.diff 2012-10-22 20:20:35 +0000
+++ debian/patches/fix_manpages_generation_with_new_docutils.diff 1970-01-01 00:00:00 +0000
@@ -1,33 +0,0 @@
1Description: Fix build failure with Docutils 0.9
2Bug: https://bitbucket.org/birkenfeld/sphinx/issue/998/docutils-010-will-break-sphinx-manpage
3Author: Toshio Kuratomi <a.badger@gmail.com>
4Last-Update: 2012-10-22
5
6diff -up a/sphinx/writers/manpage.py b/sphinx/writers/manpage.py
7--- a/sphinx/writers/manpage.py 2011-11-01 00:38:44.000000000 -0700
8+++ b/sphinx/writers/manpage.py 2012-08-21 12:38:33.380808202 -0700
9@@ -72,6 +72,11 @@ class ManualPageTranslator(BaseTranslato
10 # since self.append_header() is never called, need to do this here
11 self.body.append(MACRO_DEF)
12
13+ # Overwrite admonition label translations with our own
14+ for label, translation in admonitionlabels.items():
15+ self.language.labels[label] = self.deunicode(translation)
16+
17+
18 # overwritten -- added quotes around all .TH arguments
19 def header(self):
20 tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
21@@ -193,12 +198,6 @@ class ManualPageTranslator(BaseTranslato
22 def depart_seealso(self, node):
23 self.depart_admonition(node)
24
25- # overwritten -- use our own label translations
26- def visit_admonition(self, node, name=None):
27- if name:
28- self.body.append('.IP %s\n' %
29- self.deunicode(admonitionlabels.get(name, name)))
30-
31 def visit_productionlist(self, node):
32 self.ensure_eol()
33 names = []
340
=== modified file 'debian/patches/initialize_autodoc.diff'
--- debian/patches/initialize_autodoc.diff 2012-02-05 19:33:59 +0000
+++ debian/patches/initialize_autodoc.diff 2013-02-16 12:25:23 +0000
@@ -1,4 +1,4 @@
1Description: Make sphinx-autogen initialize the sphinx.ext.autodoc module.1Description: make sphinx-autogen initialize the sphinx.ext.autodoc module
2Author: Jakub Wilk <jwilk@debian.org>2Author: Jakub Wilk <jwilk@debian.org>
3Bug-Debian: http://bugs.debian.org/6110783Bug-Debian: http://bugs.debian.org/611078
4Bug: https://bitbucket.org/birkenfeld/sphinx/issue/6184Bug: https://bitbucket.org/birkenfeld/sphinx/issue/618
55
=== modified file 'debian/patches/l10n_fixes.diff'
--- debian/patches/l10n_fixes.diff 2012-11-27 19:20:44 +0000
+++ debian/patches/l10n_fixes.diff 2013-02-16 12:25:23 +0000
@@ -1,15 +1,14 @@
-Description: Fix l10n build of text containing footnotes
- Based on initial patch by Cristophe Simonis and modifications by Takayuki Shimizukawa
- (upstream pull request #86).
-Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955/cant-build-html-with-footnotes-when-using
+Description: fix l10n build of text containing footnotes or external links
+Bug: https://bitbucket.org/birkenfeld/sphinx/issue/955
 Bug-Debian: http://bugs.debian.org/691719
+Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/b7b808e468
+ and https://bitbucket.org/birkenfeld/sphinx/commits/870a91ca86
 Author: Takayuki Shimizukawa <shimizukawa@gmail.com>
-Last-Update: 2012-11-27
+Last-Update: 2012-12-08

-=== modified file 'sphinx/environment.py'
 --- a/sphinx/environment.py 2012-03-12 12:18:37 +0000
-+++ b/sphinx/environment.py 2012-11-27 14:05:36 +0000
-@@ -213,16 +213,44 @@
++++ b/sphinx/environment.py 2012-12-07 12:22:33 +0000
+@@ -213,16 +213,61 @@
          parser = RSTParser()

          for node, msg in extract_messages(self.document):
@@ -26,31 +25,48 @@
              if not isinstance(patch, nodes.paragraph):
                  continue # skip for now
 +
-+            footnote_refs = [r for r in node.children
-+                             if isinstance(r, nodes.footnote_reference)
-+                             and r.get('auto') == 1]
-+            refs = [r for r in node.children if isinstance(r, nodes.reference)]
-+
-+            for i, child in enumerate(patch.children): # update leaves
-+                if isinstance(child, nodes.footnote_reference) \
-+                        and child.get('auto') == 1:
-+                    # use original 'footnote_reference' object.
-+                    # this object is already registered in self.document.autofootnote_refs
-+                    patch.children[i] = footnote_refs.pop(0)
-+                    # Some duplicated footnote_reference in msgstr causes
-+                    # IndexError in .pop(0). That is invalid msgstr.
++            # auto-numbered foot note reference should use original 'ids'.
++            is_autonumber_footnote_ref = lambda node: \
++                isinstance(node, nodes.footnote_reference) \
++                and node.get('auto') == 1
++            old_foot_refs = node.traverse(is_autonumber_footnote_ref)
++            new_foot_refs = patch.traverse(is_autonumber_footnote_ref)
++            if len(old_foot_refs) != len(new_foot_refs):
++                env.warn_node('inconsistent footnote references in '
++                              'translated message', node)
++            for old, new in zip(old_foot_refs, new_foot_refs):
++                new['ids'] = old['ids']
++                self.document.autofootnote_refs.remove(old)
++                self.document.note_autofootnote_ref(new)
 +
-+                elif isinstance(child, nodes.reference):
-+                    # reference should use original 'refname'.
-+                    # * reference target ".. _Python: ..." is not translatable.
-+                    # * section refname is not translatable.
-+                    # * inline reference "`Python <...>`_" has no 'refname'.
-+                    if refs and 'refname' in refs[0]:
-+                        refname = child['refname'] = refs.pop(0)['refname']
-+                        self.document.refnames.setdefault(
-+                            refname, []).append(child)
-+                    # if number of reference nodes had been changed, that
-+                    # would often generate unknown link target warning.
++            # reference should use original 'refname'.
++            # * reference target ".. _Python: ..." is not translatable.
++            # * section refname is not translatable.
++            # * inline reference "`Python <...>`_" has no 'refname'.
++            is_refnamed_ref = lambda node: \
++                isinstance(node, nodes.reference) \
++                and 'refname' in node
++            old_refs = node.traverse(is_refnamed_ref)
++            new_refs = patch.traverse(is_refnamed_ref)
++            applied_refname_map = {}
++            if len(old_refs) != len(new_refs):
++                env.warn_node('inconsistent references in '
++                              'translated message', node)
++            for new in new_refs:
++                if new['refname'] in applied_refname_map:
++                    # 2nd appearance of the reference
++                    new['refname'] = applied_refname_map[new['refname']]
++                elif old_refs:
++                    # 1st appearance of the reference in old_refs
++                    old = old_refs.pop(0)
++                    refname = old['refname']
++                    new['refname'] = refname
++                    applied_refname_map[new['refname']] = refname
++                else:
++                    # the reference is not found in old_refs
++                    applied_refname_map[new['refname']] = new['refname']
++
++                self.document.note_refname(new)
 +
              for child in patch.children: # update leaves
                  child.parent = node

=== added file 'debian/patches/manpage_writer_docutils_0.10_api.diff'
--- debian/patches/manpage_writer_docutils_0.10_api.diff 1970-01-01 00:00:00 +0000
+++ debian/patches/manpage_writer_docutils_0.10_api.diff 2013-02-16 12:25:23 +0000
@@ -0,0 +1,31 @@
1Description: port manpage writer to docutils 0.10 API
2Origin: upstream, https://bitbucket.org/birkenfeld/sphinx/commits/ffb145b7884f
3Last-Update: 2012-12-18
4
5--- a/sphinx/writers/manpage.py
6+++ b/sphinx/writers/manpage.py
7@@ -72,6 +72,11 @@
8 # since self.append_header() is never called, need to do this here
9 self.body.append(MACRO_DEF)
10
11+ # Overwrite admonition label translations with our own
12+ for label, translation in admonitionlabels.items():
13+ self.language.labels[label] = self.deunicode(translation)
14+
15+
16 # overwritten -- added quotes around all .TH arguments
17 def header(self):
18 tmpl = (".TH \"%(title_upper)s\" \"%(manual_section)s\""
19@@ -193,12 +198,6 @@
20 def depart_seealso(self, node):
21 self.depart_admonition(node)
22
23- # overwritten -- use our own label translations
24- def visit_admonition(self, node, name=None):
25- if name:
26- self.body.append('.IP %s\n' %
27- self.deunicode(admonitionlabels.get(name, name)))
28-
29 def visit_productionlist(self, node):
30 self.ensure_eol()
31 names = []
032
=== added file 'debian/patches/parallel_2to3.diff'
--- debian/patches/parallel_2to3.diff 1970-01-01 00:00:00 +0000
+++ debian/patches/parallel_2to3.diff 2013-02-16 12:25:23 +0000
@@ -0,0 +1,31 @@
1Description: run 2to3 in parallel
2Author: Jakub Wilk <jwilk@debian.org>
3Forwarded: not-needed
4Last-Update: 2012-12-18
5
6--- a/setup.py
7+++ b/setup.py
8@@ -67,6 +67,23 @@
9 # The uuid module is new in the stdlib in 2.5
10 requires.append('uuid>=1.30')
11
12+if sys.version_info >= (3,):
13+
14+ num_processes = 1
15+ for option in os.environ.get('DEB_BUILD_OPTIONS', '').split():
16+ if option.startswith('parallel='):
17+ num_processes = int(option.split('=', 1)[1])
18+ if num_processes > 1:
19+ import lib2to3.refactor
20+ class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool):
21+ def refactor(self, items, write=False, doctests_only=False):
22+ return lib2to3.refactor.MultiprocessRefactoringTool.refactor(
23+ self, items,
24+ write=write,
25+ doctests_only=doctests_only,
26+ num_processes=num_processes
27+ )
28+ lib2to3.refactor.RefactoringTool = RefactoringTool
29
30 # Provide a "compile_catalog" command that also creates the translated
31 # JavaScript files if Babel is available.
032
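
The parallelisation relies on a stock lib2to3 facility rather than on anything Debian-specific: MultiprocessRefactoringTool accepts a num_processes argument that the plain RefactoringTool does not, and during the build the worker count is taken from DEB_BUILD_OPTIONS=parallel=N. A standalone sketch of that mechanism follows; the 'py3src' path and the count of 4 are illustrative assumptions only, not values used by the package:

    # Standalone sketch of the lib2to3 mechanism that parallel_2to3.diff hooks into.
    from lib2to3.refactor import MultiprocessRefactoringTool, get_fixers_from_package

    fixers = get_fixers_from_package('lib2to3.fixes')
    tool = MultiprocessRefactoringTool(fixers)
    # Plain RefactoringTool.refactor() has no num_processes argument; the
    # multiprocess variant distributes the file list over worker processes.
    tool.refactor(['py3src'], write=True, num_processes=4)
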
=== modified file 'debian/patches/python3_test_build_dir.diff'
--- debian/patches/python3_test_build_dir.diff 2012-02-14 00:13:35 +0000
+++ debian/patches/python3_test_build_dir.diff 2013-02-16 12:25:23 +0000
@@ -1,4 +1,4 @@
-Description: Fix build directory for test runner.
+Description: fix build directory for test runner
  Hardcode Python 3 build directory in the test runner to the one that Debian
  package uses.
 Author: Jakub Wilk <jwilk@debian.org>

=== modified file 'debian/patches/series'
--- debian/patches/series 2012-11-27 19:20:44 +0000
+++ debian/patches/series 2013-02-16 12:25:23 +0000
@@ -8,8 +8,10 @@
 fix_nepali_po.diff
 pygments_byte_strings.diff
 fix_shorthandoff.diff
-fix_manpages_generation_with_new_docutils.diff
 test_build_html_rb.diff
 sort_stopwords.diff
 support_python_3.3.diff
 l10n_fixes.diff
+manpage_writer_docutils_0.10_api.diff
+parallel_2to3.diff
+fix_literal_block_warning.diff

=== modified file 'debian/patches/show_more_stack_frames.diff'
--- debian/patches/show_more_stack_frames.diff 2011-11-20 15:56:50 +0000
+++ debian/patches/show_more_stack_frames.diff 2013-02-16 12:25:23 +0000
@@ -1,4 +1,4 @@
-Description: When Sphinx crashes, show 10 stack frames (instead of a single one).
+Description: when Sphinx crashes, show 10 stack frames (instead of a single one)
  Normally, when Sphinx crashes, it doesn't display full stack trace (as other
  Python application do by default), but only a single one; rest of the stack
  trace is stored into a temporary file. Such behaviour is undesired in some

=== modified file 'debian/patches/sort_stopwords.diff'
--- debian/patches/sort_stopwords.diff 2012-11-27 19:20:44 +0000
+++ debian/patches/sort_stopwords.diff 2013-02-16 12:25:23 +0000
@@ -2,8 +2,8 @@
2 The order of stopwords in searchtools.js would be random if hash randomization2 The order of stopwords in searchtools.js would be random if hash randomization
3 was enabled, breaking dh_sphinxdoc. This patch makes the order deterministic.3 was enabled, breaking dh_sphinxdoc. This patch makes the order deterministic.
4Author: Jakub Wilk <jwilk@debian.org>4Author: Jakub Wilk <jwilk@debian.org>
5Applied-Upstream: https://bitbucket.org/birkenfeld/sphinx/changeset/6cf5320e655Forwarded: yes, https://bitbucket.org/birkenfeld/sphinx/commits/6cf5320e65
6Last-Update: 2012-11-106Last-Update: 2012-12-08
77
8--- a/sphinx/search/__init__.py8--- a/sphinx/search/__init__.py
9+++ b/sphinx/search/__init__.py9+++ b/sphinx/search/__init__.py
1010
=== modified file 'debian/patches/sphinxcontrib_namespace.diff'
--- debian/patches/sphinxcontrib_namespace.diff 2012-02-03 13:52:49 +0000
+++ debian/patches/sphinxcontrib_namespace.diff 2013-02-16 12:25:23 +0000
@@ -1,4 +1,4 @@
-Description: Create namespace package ‘sphinxcontrib’.
+Description: create namespace package ‘sphinxcontrib’
  Create namespace package ‘sphinxcontrib’. This allows python-sphinxcontrib.*
  packages, both those using dh_python2 and those using python-support, to be
  co-importable.

=== modified file 'debian/patches/unversioned_grammar_pickle.diff'
--- debian/patches/unversioned_grammar_pickle.diff 2011-09-28 17:20:22 +0000
+++ debian/patches/unversioned_grammar_pickle.diff 2013-02-16 12:25:23 +0000
@@ -1,4 +1,4 @@
-Description: Don't embed Python version in filename of grammar pickle.
+Description: don't embed Python version in filename of grammar pickle
 Author: Jakub Wilk <jwilk@debian.org>
 Bug: https://bitbucket.org/birkenfeld/sphinx/issue/641
 Forwarded: not-needed

=== modified file 'debian/rules'
--- debian/rules 2012-11-27 19:20:44 +0000
+++ debian/rules 2013-02-16 12:25:23 +0000
@@ -23,6 +23,10 @@
 python_all = pyversions -r | tr ' ' '\n' | xargs -t -I {} env {}
 python3_all = py3versions -r | tr ' ' '\n' | xargs -t -I {} env {}

+ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" ""
+msgfmt_options = -c
+endif
+
 build-arch:

 build-indep build: build-stamp
@@ -37,12 +41,13 @@
 	python ./sphinx-build.py -b man doc build/man
 	python setup.py build --build-lib build/py2/
 	python3 setup.py build --build-lib build/py3/
-ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
-	find sphinx/locale/ -name '*.po' | xargs -t -I {} msgfmt -o /dev/null -c {}
+	find sphinx/locale/ -name '*.po' | sed -e 's/[.]po$$//' \
+		| xargs -t -I {} msgfmt $(msgfmt_options) {}.po -o {}.mo
+ifeq "$(filter nocheck,$(DEB_BUILD_OPTIONS))" ""
 	$(python_all) tests/run.py --verbose --no-skip
 	$(python3_all) tests/run.py --verbose
 	cd build/py3/ && rm -rf tests/ sphinx/pycode/Grammar.pickle
-	xvfb-run --auto-servernum ./debian/jstest/run-tests
+	xvfb-run -a ./debian/jstest/run-tests build/html/
 endif
 	touch build-stamp

=== modified file 'debian/source/options'
--- debian/source/options 2012-02-05 19:33:59 +0000
+++ debian/source/options 2013-02-16 12:25:23 +0000
@@ -1,1 +1,2 @@
 extend-diff-ignore = "^[^/]*[.]egg-info/"
+extend-diff-ignore = "^sphinx/locale/[^/]+/LC_MESSAGES/sphinx[.]mo$"

=== modified file 'debian/tests/control'
--- debian/tests/control 2012-11-27 19:20:44 +0000
+++ debian/tests/control 2013-02-16 12:25:23 +0000
@@ -3,3 +3,6 @@
 
 Tests: python3-sphinx
 Depends: python3-sphinx, python3-nose
+
+Tests: sphinx-doc
+Depends: sphinx-doc, python, python-webkit, xvfb

=== added file 'debian/tests/sphinx-doc'
--- debian/tests/sphinx-doc 1970-01-01 00:00:00 +0000
+++ debian/tests/sphinx-doc 2013-02-16 12:25:23 +0000
@@ -0,0 +1,20 @@
+#!/bin/sh
+set -e -u
+cp -r debian/jstest "$ADTTMP/"
+cd "$ADTTMP"
+for python in python python3
+do
+    for format in rst html
+    do
+        [ "$(readlink -f /usr/share/doc/$python-sphinx/$format)" = "$(readlink -f /usr/share/doc/sphinx-doc/$format)" ]
+    done
+done
+run_js_tests='jstest/run-tests /usr/share/doc/sphinx-doc/html/'
+if [ -n "${DISPLAY:-}" ]
+then
+    $run_js_tests
+else
+    xvfb-run -a $run_js_tests
+fi
+
+# vim:ts=4 sw=4 et

=== modified file 'setup.py'
--- setup.py 2012-03-30 23:32:16 +0000
+++ setup.py 2013-02-16 12:25:23 +0000
@@ -67,6 +67,23 @@
     # The uuid module is new in the stdlib in 2.5
     requires.append('uuid>=1.30')
 
+if sys.version_info >= (3,):
+
+    num_processes = 1
+    for option in os.environ.get('DEB_BUILD_OPTIONS', '').split():
+        if option.startswith('parallel='):
+            num_processes = int(option.split('=', 1)[1])
+    if num_processes > 1:
+        import lib2to3.refactor
+        class RefactoringTool(lib2to3.refactor.MultiprocessRefactoringTool):
+            def refactor(self, items, write=False, doctests_only=False):
+                return lib2to3.refactor.MultiprocessRefactoringTool.refactor(
+                    self, items,
+                    write=write,
+                    doctests_only=doctests_only,
+                    num_processes=num_processes
+                )
+        lib2to3.refactor.RefactoringTool = RefactoringTool
 
 # Provide a "compile_catalog" command that also creates the translated
 # JavaScript files if Babel is available.

=== modified file 'sphinx/environment.py'
--- sphinx/environment.py 2012-11-27 19:20:44 +0000
+++ sphinx/environment.py 2013-02-16 12:25:23 +0000
@@ -218,6 +218,10 @@
             if not msgstr or msgstr == msg: # as-of-yet untranslated
                 continue
 
+            # Avoid "Literal block expected; none found." warnings.
+            if msgstr.strip().endswith('::'):
+                msgstr += '\n\n dummy literal'
+
             patch = new_document(source, settings)
             parser.parse(msgstr, patch)
             patch = patch[0]
@@ -225,31 +229,48 @@
             if not isinstance(patch, nodes.paragraph):
                 continue # skip for now
 
-            footnote_refs = [r for r in node.children
-                             if isinstance(r, nodes.footnote_reference)
-                             and r.get('auto') == 1]
-            refs = [r for r in node.children if isinstance(r, nodes.reference)]
-
-            for i, child in enumerate(patch.children): # update leaves
-                if isinstance(child, nodes.footnote_reference) \
-                   and child.get('auto') == 1:
-                    # use original 'footnote_reference' object.
-                    # this object is already registered in self.document.autofootnote_refs
-                    patch.children[i] = footnote_refs.pop(0)
-                    # Some duplicated footnote_reference in msgstr causes
-                    # IndexError in .pop(0). That is invalid msgstr.
-
-                elif isinstance(child, nodes.reference):
-                    # reference should use original 'refname'.
-                    # * reference target ".. _Python: ..." is not translatable.
-                    # * section refname is not translatable.
-                    # * inline reference "`Python <...>`_" has no 'refname'.
-                    if refs and 'refname' in refs[0]:
-                        refname = child['refname'] = refs.pop(0)['refname']
-                        self.document.refnames.setdefault(
-                            refname, []).append(child)
-                    # if number of reference nodes had been changed, that
-                    # would often generate unknown link target warning.
+            # auto-numbered foot note reference should use original 'ids'.
+            is_autonumber_footnote_ref = lambda node: \
+                isinstance(node, nodes.footnote_reference) \
+                and node.get('auto') == 1
+            old_foot_refs = node.traverse(is_autonumber_footnote_ref)
+            new_foot_refs = patch.traverse(is_autonumber_footnote_ref)
+            if len(old_foot_refs) != len(new_foot_refs):
+                env.warn_node('inconsistent footnote references in '
+                              'translated message', node)
+            for old, new in zip(old_foot_refs, new_foot_refs):
+                new['ids'] = old['ids']
+                self.document.autofootnote_refs.remove(old)
+                self.document.note_autofootnote_ref(new)
+
+            # reference should use original 'refname'.
+            # * reference target ".. _Python: ..." is not translatable.
+            # * section refname is not translatable.
+            # * inline reference "`Python <...>`_" has no 'refname'.
+            is_refnamed_ref = lambda node: \
+                isinstance(node, nodes.reference) \
+                and 'refname' in node
+            old_refs = node.traverse(is_refnamed_ref)
+            new_refs = patch.traverse(is_refnamed_ref)
+            applied_refname_map = {}
+            if len(old_refs) != len(new_refs):
+                env.warn_node('inconsistent references in '
+                              'translated message', node)
+            for new in new_refs:
+                if new['refname'] in applied_refname_map:
+                    # 2nd appearance of the reference
+                    new['refname'] = applied_refname_map[new['refname']]
+                elif old_refs:
+                    # 1st appearance of the reference in old_refs
+                    old = old_refs.pop(0)
+                    refname = old['refname']
+                    new['refname'] = refname
+                    applied_refname_map[new['refname']] = refname
+                else:
+                    # the reference is not found in old_refs
+                    applied_refname_map[new['refname']] = new['refname']
+
+                self.document.note_refname(new)
 
             for child in patch.children: # update leaves
                 child.parent = node
