diff -Nru python-testscenarios-0.2/debian/changelog python-testscenarios-0.4/debian/changelog --- python-testscenarios-0.2/debian/changelog 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/changelog 2014-07-18 07:17:50.000000000 +0000 @@ -1,3 +1,80 @@ +python-testscenarios (0.4-2ubuntu3) precise; urgency=low + + * No-change backport to precise + + -- Lars Butler (larsbutler) Fri, 18 Jul 2014 07:17:20 +0000 + +python-testscenarios (0.4-2ubuntu2) trusty; urgency=medium + + * Build-Depend on python3-all. + + -- Barry Warsaw Thu, 16 Jan 2014 11:45:36 -0500 + +python-testscenarios (0.4-2ubuntu1) trusty; urgency=low + + * Merge from Debian unstable. Remaining changes: + - Add python3 support. + + -- Chuck Short Mon, 28 Oct 2013 11:08:21 -0400 + +python-testscenarios (0.4-2) unstable; urgency=low + + * Using debian-unstable as packaging branch. + * Fixed python -> python-all build-depends. + + -- Thomas Goirand Mon, 22 Jul 2013 15:43:59 +0800 + +python-testscenarios (0.4-1) unstable; urgency=low + + * New upstream release. + * Package is now team maintained in PKG OpenStack. + * debian/copyright now using 1.0 parsable format. + * Now using 3.0 (quilt) source format. + * Using debhelper and compat 9. + * Using dh_python2 and not CDBS anymore. + * Added VCS fields. + * Added build-depends: python-setuptools. + * Added a watch file. + + -- Thomas Goirand Sat, 20 Jul 2013 17:29:59 +0000 + +python-testscenarios (0.4-0ubuntu1) saucy; urgency=low + + * New upstream version. + * debian/patches/fix-python3-ubuntu.patch: Dropped. + * debian/control: Add python-setuptools and python3-setuptools. + + -- Chuck Short Thu, 19 Sep 2013 10:00:10 -0400 + +python-testscenarios (0.3-0ubuntu2) saucy; urgency=low + + * Build for python2/python3. + * debian/patches/fix-python3-ubuntu.patch: Fix building for + python3.3 + * Switch to debhelper. + + -- Chuck Short Thu, 13 Jun 2013 08:05:19 -0500 + +python-testscenarios (0.3-0ubuntu1) raring; urgency=low + + * New upstream release. + * debian/rules: Run testsuite during build. + + -- Chuck Short Tue, 08 Jan 2013 13:20:20 -0600 + +python-testscenarios (0.2-1.1) unstable; urgency=low + + * Non-maintainer upload. + * Convert to dh_python2 (Closes: #617035). + + -- Andrea Colangelo Wed, 26 Jun 2013 11:15:38 +0200 + +python-testscenarios (0.2-1ubuntu1) raring; urgency=low + + * Convert to dh_python2 + + -- Chuck Short Sun, 06 Jan 2013 12:05:48 -0600 + python-testscenarios (0.2-1) unstable; urgency=low * New upstream release. @@ -9,3 +86,4 @@ * New upstream release. 
Closes: #561644 -- Robert Collins Sat, 19 Dec 2009 14:20:58 +1100 + diff -Nru python-testscenarios-0.2/debian/compat python-testscenarios-0.4/debian/compat --- python-testscenarios-0.2/debian/compat 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/compat 2014-01-16 16:45:19.000000000 +0000 @@ -1 +1 @@ -6 +9 diff -Nru python-testscenarios-0.2/debian/control python-testscenarios-0.4/debian/control --- python-testscenarios-0.2/debian/control 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/control 2014-01-16 16:45:28.000000000 +0000 @@ -1,26 +1,41 @@ Source: python-testscenarios -Maintainer: Robert Collins +Maintainer: Ubuntu Developers +XSBC-Original-Maintainer: PKG OpenStack +Uploaders: Robert Collins , Section: python Priority: optional -Standards-Version: 3.8.1 -Build-Depends-Indep: - python-central (>= 0.6.7) +Standards-Version: 3.9.4 Build-Depends: - cdbs (>= 0.4.51), - debhelper (>= 6.0.4), - python (>= 2.4), - python-testtools + debhelper (>= 8), + python (>= 2.6.6-3~), + python-setuptools, + python3-setuptools, + python-testtools, + python3-all, + python3-testtools XS-Python-Version: all +Vcs-Browser: http://anonscm.debian.org/gitweb/?p=openstack/python-testscenarios.git +Vcs-Git: git://anonscm.debian.org/openstack/python-testscenarios.git Homepage: https://launchpad.net/testscenarios Package: python-testscenarios Architecture: all -XB-Python-Version: ${python:Versions} -Depends: ${python:Depends}, - ${misc:Depends}, - python-testtools +Depends: python-testtools, ${misc:Depends}, ${python:Depends} Provides: ${python:Provides} -Description: Dependency injection for Python unittest tests +Description: Dependency injection for Python unittest tests (python2) + testscenarios provides clean dependency injection for python unittest style + tests. This can be used for interface testing (testing many implementations + via a single test suite) or for classic dependency injection (provide tests + with dependencies externally to the test code itself, allowing easy testing + in different situations). + +Package: python3-testscenarios +Architecture: all +Depends: ${python3:Depends}, + ${misc:Depends}, + python3-testtools +Provides: ${python3:Provides} +Description: Dependency injection for Python unittest tests (python3) testscenarios provides clean dependency injection for python unittest style tests. This can be used for interface testing (testing many implementations via a single test suite) or for classic dependency injection (provide tests diff -Nru python-testscenarios-0.2/debian/copyright python-testscenarios-0.4/debian/copyright --- python-testscenarios-0.2/debian/copyright 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/copyright 2014-01-16 16:45:19.000000000 +0000 @@ -1,17 +1,70 @@ -This is python-testscenarios, packaged for debian by Robert Collins. +Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ +Upstream-Name: testscenarios +Upstream-Contact: Robert Collins +Source: https://pypi.python.org/pypi/testscenarios -Homepage is https://launchpad.net/testscenarios +Files: debian/* +Copyright: (c) 2009, Robert Collins +License: BSD-3-clauses-or-Apache-2.0 -Copyright (c) 2009 Robert Collins -Copyright (c) 2009 Martin Pool +Files: * +Copyright: (c) 2009, Robert Collins + (c) 2009, Martin Pool +License: BSD-3-clauses-or-Apache-2.0 -Licensed under either the Apache License, Version 2.0 or the BSD 3-clause -license at the users choice. 
A copy of both licenses are available in the -project source as Apache-2.0 and BSD. You may not use this file except in -compliance with one of these two licences. - -Unless required by applicable law or agreed to in writing, software -distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT -WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -license you chose for the specific language governing permissions and -limitations under that license. +License: BSD-3-clauses-or-Apache-2.0 + Licensed under either the Apache License, Version 2.0 or the BSD 3-clause + license at the users choice. A copy of both licenses are available in the + project source as Apache-2.0 and BSD. You may not use this file except in + compliance with one of these two licences. + . + Unless required by applicable law or agreed to in writing, software + distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT + WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + license you chose for the specific language governing permissions and + limitations under that license. + . + BSD-license: + . + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + . + 1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + . + 2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + . + 3. Neither the name of Robert Collins nor the names of Subunit contributors + may be used to endorse or promote products derived from this software without + specific prior written permission. + . + THIS SOFTWARE IS PROVIDED BY ROBERT COLLINS AND SUBUNIT CONTRIBUTORS "AS IS" + AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE + DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY + DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; + LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON + ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS + SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + . + Apache-2.0-license: + . + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + . + http://www.apache.org/licenses/LICENSE-2.0 + . + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + . + On Debian-based systems the full text of the Apache version 2.0 license + can be found in `/usr/share/common-licenses/Apache-2.0'. 
diff -Nru python-testscenarios-0.2/debian/gbp.conf python-testscenarios-0.4/debian/gbp.conf --- python-testscenarios-0.2/debian/gbp.conf 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/debian/gbp.conf 2014-01-16 16:45:19.000000000 +0000 @@ -0,0 +1,7 @@ +[DEFAULT] +upstream-branch = upstream-unstable +debian-branch = debian-unstable +pristine-tar = True + +[git-buildpackage] +export-dir = ../build-area/ diff -Nru python-testscenarios-0.2/debian/pycompat python-testscenarios-0.4/debian/pycompat --- python-testscenarios-0.2/debian/pycompat 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/pycompat 1970-01-01 00:00:00.000000000 +0000 @@ -1 +0,0 @@ -2 diff -Nru python-testscenarios-0.2/debian/rules python-testscenarios-0.4/debian/rules --- python-testscenarios-0.2/debian/rules 2014-07-19 23:30:35.000000000 +0000 +++ python-testscenarios-0.4/debian/rules 2014-01-16 16:45:19.000000000 +0000 @@ -1,5 +1,25 @@ #!/usr/bin/make -f -include /usr/share/cdbs/1/rules/debhelper.mk -DEB_PYTHON_SYSTEM = pycentral -include /usr/share/cdbs/1/class/python-distutils.mk +PYTHONS:=$(shell pyversions -vr) +PYTHON3S:=$(shell py3versions -vr) + +%: + dh $@ --with python2,python3 + +override_dh_auto_build: + set -e && for pyvers in $(PYTHONS); do \ + python$$pyvers setup.py build; \ + done + set -e && for pyvers in $(PYTHON3S); do \ + python$$pyvers setup.py build; \ + done + +override_dh_auto_install: + set -e && for pyvers in $(PYTHONS); do \ + python$$pyvers setup.py install --install-layout=deb \ + --root $(CURDIR)/debian/python-testscenarios;\ + done + set -e && for pyvers in $(PYTHON3S); do \ + python$$pyvers setup.py install --install-layout=deb \ + --root $(CURDIR)/debian/python3-testscenarios;\ + done diff -Nru python-testscenarios-0.2/debian/source/format python-testscenarios-0.4/debian/source/format --- python-testscenarios-0.2/debian/source/format 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/debian/source/format 2014-01-16 16:45:19.000000000 +0000 @@ -0,0 +1 @@ +3.0 (quilt) diff -Nru python-testscenarios-0.2/debian/watch python-testscenarios-0.4/debian/watch --- python-testscenarios-0.2/debian/watch 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/debian/watch 2014-01-16 16:45:19.000000000 +0000 @@ -0,0 +1,3 @@ +version=3 + +http://pypi.python.org/packages/source/t/testscenarios/testscenarios-(.+).tar.gz diff -Nru python-testscenarios-0.2/lib/testscenarios/__init__.py python-testscenarios-0.4/lib/testscenarios/__init__.py --- python-testscenarios-0.2/lib/testscenarios/__init__.py 2010-02-01 04:47:49.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/__init__.py 2013-01-26 19:11:10.000000000 +0000 @@ -38,20 +38,30 @@ # established at this point, and setup.py will use a version of next-$(revno). # If the releaselevel is 'final', then the tarball will be major.minor.micro. # Otherwise it is major.minor.micro~$(revno). 
-__version__ = (0, 2, 0, 'final', 0) +__version__ = (0, 4, 0, 'final', 0) __all__ = [ 'TestWithScenarios', + 'WithScenarios', 'apply_scenario', 'apply_scenarios', 'generate_scenarios', + 'load_tests_apply_scenarios', + 'multiply_scenarios', + 'per_module_scenarios', ] import unittest -from testscenarios.scenarios import apply_scenario, generate_scenarios -from testscenarios.testcase import TestWithScenarios +from testscenarios.scenarios import ( + apply_scenario, + generate_scenarios, + load_tests_apply_scenarios, + multiply_scenarios, + per_module_scenarios, + ) +from testscenarios.testcase import TestWithScenarios, WithScenarios def test_suite(): diff -Nru python-testscenarios-0.2/lib/testscenarios/scenarios.py python-testscenarios-0.4/lib/testscenarios/scenarios.py --- python-testscenarios-0.2/lib/testscenarios/scenarios.py 2010-02-01 04:45:48.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/scenarios.py 2013-01-26 18:54:26.000000000 +0000 @@ -2,6 +2,7 @@ # dependency injection ('scenarios') by tests. # # Copyright (c) 2009, Robert Collins +# Copyright (c) 2010, 2011 Martin Pool # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. A copy of both licenses are available in the @@ -18,15 +19,22 @@ 'apply_scenario', 'apply_scenarios', 'generate_scenarios', + 'load_tests_apply_scenarios', + 'multiply_scenarios', ] +from itertools import ( + chain, + product, + ) +import sys import unittest from testtools.testcase import clone_test_with_new_id from testtools import iterate_tests -def apply_scenario((name, parameters), test): +def apply_scenario(scenario, test): """Apply scenario to test. :param scenario: A tuple (name, parameters) to apply to the test. The test @@ -35,6 +43,7 @@ :param test: The test to apply the scenario to. This test is unaltered. :return: A new test cloned from test, with the scenario applied. """ + name, parameters = scenario scenario_suffix = '(' + name + ')' newtest = clone_test_with_new_id(test, test.id() + scenario_suffix) @@ -42,7 +51,7 @@ if test_desc is not None: newtest_desc = "%(test_desc)s %(scenario_suffix)s" % vars() newtest.shortDescription = (lambda: newtest_desc) - for key, value in parameters.iteritems(): + for key, value in parameters.items(): setattr(newtest, key, value) return newtest @@ -76,3 +85,83 @@ yield newtest else: yield test + + +def load_tests_apply_scenarios(*params): + """Adapter test runner load hooks to call generate_scenarios. + + If this is referenced by the `load_tests` attribute of a module, then + testloaders that implement this protocol will automatically arrange for + the scenarios to be expanded. This can be used instead of using + TestWithScenarios. + + Two different calling conventions for load_tests have been used, and this + function should support both. Python 2.7 passes (loader, standard_tests, + pattern), and bzr used (standard_tests, module, loader). + + :param loader: A TestLoader. + :param standard_test: The test objects found in this module before + multiplication. + """ + if getattr(params[0], 'suiteClass', None) is not None: + loader, standard_tests, pattern = params + else: + standard_tests, module, loader = params + result = loader.suiteClass() + result.addTests(generate_scenarios(standard_tests)) + return result + + +def multiply_scenarios(*scenarios): + """Multiply two or more iterables of scenarios. + + It is safe to pass scenario generators or iterators. 
+ + :returns: A list of compound scenarios: the cross-product of all + scenarios, with the names concatenated and the parameters + merged together. + """ + result = [] + scenario_lists = map(list, scenarios) + for combination in product(*scenario_lists): + names, parameters = zip(*combination) + scenario_name = ','.join(names) + scenario_parameters = {} + for parameter in parameters: + scenario_parameters.update(parameter) + result.append((scenario_name, scenario_parameters)) + return result + + +def per_module_scenarios(attribute_name, modules): + """Generate scenarios for available implementation modules. + + This is typically used when there is a subsystem implemented, for + example, in both Python and C, and we want to apply the same tests to + both, but the C module may sometimes not be available. + + Note: if the module can't be loaded, the sys.exc_info() tuple for the + exception raised during import of the module is used instead of the module + object. A common idiom is to check in setUp for that and raise a skip or + error for that case. No special helpers are supplied in testscenarios as + yet. + + :param attribute_name: A name to be set in the scenario parameter + dictionary (and thence onto the test instance) pointing to the + implementation module (or import exception) for this scenario. + + :param modules: An iterable of (short_name, module_name), where + the short name is something like 'python' to put in the + scenario name, and the long name is a fully-qualified Python module + name. + """ + scenarios = [] + for short_name, module_name in modules: + try: + mod = __import__(module_name, {}, {}, ['']) + except: + mod = sys.exc_info() + scenarios.append(( + short_name, + {attribute_name: mod})) + return scenarios diff -Nru python-testscenarios-0.2/lib/testscenarios/testcase.py python-testscenarios-0.4/lib/testscenarios/testcase.py --- python-testscenarios-0.2/lib/testscenarios/testcase.py 2009-12-19 03:11:46.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/testcase.py 2012-04-04 10:44:48.000000000 +0000 @@ -16,6 +16,7 @@ __all__ = [ 'TestWithScenarios', + 'WithScenarios', ] import unittest @@ -24,16 +25,18 @@ from testscenarios.scenarios import generate_scenarios -class TestWithScenarios(unittest.TestCase): - """A TestCase with support for scenarios via a scenarios attribute. - - When a test object which is an instance of TestWithScenarios is run, - and there is a non-empty scenarios attribute on the object, the test is - multiplied by the run method into one test per scenario. For this to work - reliably the TestWithScenarios.run method must not be overriden in a - subclass (or overridden compatibly with TestWithScenarios). +_doc = """ + When a test object which inherits from WithScenarios is run, and there is a + non-empty scenarios attribute on the object, the test is multiplied by the + run method into one test per scenario. For this to work reliably the + WithScenarios.run method must not be overriden in a subclass (or overridden + compatibly with WithScenarios). """ +class WithScenarios(object): + __doc__ = """A mixin for TestCase with support for declarative scenarios. 
+ """ + _doc + def _get_scenarios(self): return getattr(self, 'scenarios', None) @@ -50,7 +53,7 @@ for test in generate_scenarios(self): test.debug() else: - return super(TestWithScenarios, self).debug() + return super(WithScenarios, self).debug() def run(self, result=None): scenarios = self._get_scenarios() @@ -59,4 +62,9 @@ test.run(result) return else: - return super(TestWithScenarios, self).run(result) + return super(WithScenarios, self).run(result) + + +class TestWithScenarios(WithScenarios, unittest.TestCase): + __doc__ = """Unittest TestCase with support for declarative scenarios. + """ + _doc diff -Nru python-testscenarios-0.2/lib/testscenarios/tests/__init__.py python-testscenarios-0.4/lib/testscenarios/tests/__init__.py --- python-testscenarios-0.2/lib/testscenarios/tests/__init__.py 2009-12-19 03:11:46.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/tests/__init__.py 2012-04-04 10:37:30.000000000 +0000 @@ -38,5 +38,6 @@ test_mod_names = [prefix + test_module for test_module in test_modules] standard_tests.addTests(loader.loadTestsFromNames(test_mod_names)) doctest.set_unittest_reportflags(doctest.REPORT_ONLY_FIRST_FAILURE) - standard_tests.addTest(doctest.DocFileSuite("../../../README")) - return standard_tests + standard_tests.addTest( + doctest.DocFileSuite("../../../README", optionflags=doctest.ELLIPSIS)) + return loader.suiteClass(testscenarios.generate_scenarios(standard_tests)) diff -Nru python-testscenarios-0.2/lib/testscenarios/tests/test_scenarios.py python-testscenarios-0.4/lib/testscenarios/tests/test_scenarios.py --- python-testscenarios-0.2/lib/testscenarios/tests/test_scenarios.py 2010-02-01 04:45:48.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/tests/test_scenarios.py 2012-04-04 10:00:58.000000000 +0000 @@ -2,6 +2,7 @@ # dependency injection ('scenarios') by tests. # # Copyright (c) 2009, Robert Collins +# Copyright (c) 2010, 2011 Martin Pool # # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause # license at the users choice. 
A copy of both licenses are available in the @@ -21,6 +22,8 @@ apply_scenario, apply_scenarios, generate_scenarios, + load_tests_apply_scenarios, + multiply_scenarios, ) import testtools from testtools.tests.helpers import LoggingResult @@ -171,3 +174,88 @@ tests = list(apply_scenarios(ReferenceTest.scenarios, test)) self.assertEqual([('demo', {})], ReferenceTest.scenarios) self.assertEqual(ReferenceTest.scenarios, tests[0].scenarios) + + +class TestLoadTests(testtools.TestCase): + + class SampleTest(unittest.TestCase): + def test_nothing(self): + pass + scenarios = [ + ('a', {}), + ('b', {}), + ] + + def test_load_tests_apply_scenarios(self): + suite = load_tests_apply_scenarios( + unittest.TestLoader(), + [self.SampleTest('test_nothing')], + None) + result_tests = list(testtools.iterate_tests(suite)) + self.assertEquals( + 2, + len(result_tests), + result_tests) + + def test_load_tests_apply_scenarios_old_style(self): + """Call load_tests in the way used by bzr.""" + suite = load_tests_apply_scenarios( + [self.SampleTest('test_nothing')], + self.__class__.__module__, + unittest.TestLoader(), + ) + result_tests = list(testtools.iterate_tests(suite)) + self.assertEquals( + 2, + len(result_tests), + result_tests) + + +class TestMultiplyScenarios(testtools.TestCase): + + def test_multiply_scenarios(self): + def factory(name): + for i in 'ab': + yield i, {name: i} + scenarios = multiply_scenarios(factory('p'), factory('q')) + self.assertEqual([ + ('a,a', dict(p='a', q='a')), + ('a,b', dict(p='a', q='b')), + ('b,a', dict(p='b', q='a')), + ('b,b', dict(p='b', q='b')), + ], + scenarios) + + def test_multiply_many_scenarios(self): + def factory(name): + for i in 'abc': + yield i, {name: i} + scenarios = multiply_scenarios(factory('p'), factory('q'), + factory('r'), factory('t')) + self.assertEqual( + 3**4, + len(scenarios), + scenarios) + self.assertEqual( + 'a,a,a,a', + scenarios[0][0]) + + +class TestPerModuleScenarios(testtools.TestCase): + + def test_per_module_scenarios(self): + """Generate scenarios for available modules""" + s = testscenarios.scenarios.per_module_scenarios( + 'the_module', [ + ('Python', 'testscenarios'), + ('unittest', 'unittest'), + ('nonexistent', 'nonexistent'), + ]) + self.assertEqual('nonexistent', s[-1][0]) + self.assertIsInstance(s[-1][1]['the_module'], tuple) + s[-1][1]['the_module'] = None + self.assertEqual(s, [ + ('Python', {'the_module': testscenarios}), + ('unittest', {'the_module': unittest}), + ('nonexistent', {'the_module': None}), + ]) diff -Nru python-testscenarios-0.2/lib/testscenarios/tests/test_testcase.py python-testscenarios-0.4/lib/testscenarios/tests/test_testcase.py --- python-testscenarios-0.2/lib/testscenarios/tests/test_testcase.py 2009-12-19 03:11:46.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios/tests/test_testcase.py 2012-04-04 10:41:22.000000000 +0000 @@ -17,13 +17,25 @@ import unittest import testscenarios +import testtools from testtools.tests.helpers import LoggingResult -class TestTestWithScenarios(unittest.TestCase): +class TestTestWithScenarios(testtools.TestCase): + + scenarios = testscenarios.scenarios.per_module_scenarios( + 'impl', (('unittest', 'unittest'), ('unittest2', 'unittest2'))) + + @property + def Implementation(self): + if isinstance(self.impl, tuple): + self.skipTest('import failed - module not installed?') + class Implementation(testscenarios.WithScenarios, self.impl.TestCase): + pass + return Implementation def test_no_scenarios_no_error(self): - class ReferenceTest(testscenarios.TestWithScenarios): + 
class ReferenceTest(self.Implementation): def test_pass(self): pass test = ReferenceTest("test_pass") @@ -33,7 +45,7 @@ self.assertEqual(1, result.testsRun) def test_with_one_scenario_one_run(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [('demo', {})] def test_pass(self): pass @@ -48,7 +60,7 @@ log[0][1].id()) def test_with_two_scenarios_two_run(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [('1', {}), ('2', {})] def test_pass(self): pass @@ -66,7 +78,7 @@ log[4][1].id()) def test_attributes_set(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [ ('1', {'foo': 1, 'bar': 2}), ('2', {'foo': 2, 'bar': 4})] @@ -80,7 +92,7 @@ self.assertEqual(2, result.testsRun) def test_scenarios_attribute_cleared(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [ ('1', {'foo': 1, 'bar': 2}), ('2', {'foo': 2, 'bar': 4})] @@ -97,14 +109,14 @@ self.assertEqual(None, log[4][1].scenarios) def test_countTestCases_no_scenarios(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): def test_check_foo(self): pass test = ReferenceTest("test_check_foo") self.assertEqual(1, test.countTestCases()) def test_countTestCases_empty_scenarios(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [] def test_check_foo(self): pass @@ -112,7 +124,7 @@ self.assertEqual(1, test.countTestCases()) def test_countTestCases_1_scenarios(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [('1', {'foo': 1, 'bar': 2})] def test_check_foo(self): pass @@ -120,7 +132,7 @@ self.assertEqual(1, test.countTestCases()) def test_countTestCases_2_scenarios(self): - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [ ('1', {'foo': 1, 'bar': 2}), ('2', {'foo': 2, 'bar': 4})] @@ -131,7 +143,7 @@ def test_debug_2_scenarios(self): log = [] - class ReferenceTest(testscenarios.TestWithScenarios): + class ReferenceTest(self.Implementation): scenarios = [ ('1', {'foo': 1, 'bar': 2}), ('2', {'foo': 2, 'bar': 4})] diff -Nru python-testscenarios-0.2/lib/testscenarios.egg-info/dependency_links.txt python-testscenarios-0.4/lib/testscenarios.egg-info/dependency_links.txt --- python-testscenarios-0.2/lib/testscenarios.egg-info/dependency_links.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios.egg-info/dependency_links.txt 2013-01-27 00:32:59.000000000 +0000 @@ -0,0 +1 @@ + diff -Nru python-testscenarios-0.2/lib/testscenarios.egg-info/PKG-INFO python-testscenarios-0.4/lib/testscenarios.egg-info/PKG-INFO --- python-testscenarios-0.2/lib/testscenarios.egg-info/PKG-INFO 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios.egg-info/PKG-INFO 2013-01-27 00:32:59.000000000 +0000 @@ -0,0 +1,335 @@ +Metadata-Version: 1.1 +Name: testscenarios +Version: 0.4 +Summary: Testscenarios, a pyunit extension for dependency injection +Home-page: https://launchpad.net/testscenarios +Author: Robert Collins +Author-email: robertc@robertcollins.net +License: UNKNOWN +Description: ***************************************************************** + testscenarios: extensions to python unittest to support 
scenarios + ***************************************************************** + + Copyright (c) 2009, Robert Collins + + Licensed under either the Apache License, Version 2.0 or the BSD 3-clause + license at the users choice. A copy of both licenses are available in the + project source as Apache-2.0 and BSD. You may not use this file except in + compliance with one of these two licences. + + Unless required by applicable law or agreed to in writing, software + distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT + WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + license you chose for the specific language governing permissions and + limitations under that license. + + + testscenarios provides clean dependency injection for python unittest style + tests. This can be used for interface testing (testing many implementations via + a single test suite) or for classic dependency injection (provide tests with + dependencies externally to the test code itself, allowing easy testing in + different situations). + + Dependencies + ============ + + * Python 2.4+ + * testtools + + + Why TestScenarios + ================= + + Standard Python unittest.py provides on obvious method for running a single + test_foo method with two (or more) scenarios: by creating a mix-in that + provides the functions, objects or settings that make up the scenario. This is + however limited and unsatisfying. Firstly, when two projects are cooperating + on a test suite (for instance, a plugin to a larger project may want to run + the standard tests for a given interface on its implementation), then it is + easy for them to get out of sync with each other: when the list of TestCase + classes to mix-in with changes, the plugin will either fail to run some tests + or error trying to run deleted tests. Secondly, its not as easy to work with + runtime-created-subclasses (a way of dealing with the aforementioned skew) + because they require more indirection to locate the source of the test, and will + often be ignored by e.g. pyflakes pylint etc. + + It is the intent of testscenarios to make dynamically running a single test + in multiple scenarios clear, easy to debug and work with even when the list + of scenarios is dynamically generated. + + + Defining Scenarios + ================== + + A **scenario** is a tuple of a string name for the scenario, and a dict of + parameters describing the scenario. The name is appended to the test name, and + the parameters are made available to the test instance when it's run. + + Scenarios are presented in **scenario lists** which are typically Python lists + but may be any iterable. + + + Getting Scenarios applied + ========================= + + At its heart the concept is simple. For a given test object with a list of + scenarios we prepare a new test object for each scenario. This involves: + + * Clone the test to a new test with a new id uniquely distinguishing it. + * Apply the scenario to the test by setting each key, value in the scenario + as attributes on the test object. + + There are some complicating factors around making this happen seamlessly. These + factors are in two areas: + + * Choosing what scenarios to use. (See Setting Scenarios For A Test). + * Getting the multiplication to happen. + + Subclasssing + ++++++++++++ + + If you can subclass TestWithScenarios, then the ``run()`` method in + TestWithScenarios will take care of test multiplication. 
It will at test + execution act as a generator causing multiple tests to execute. For this to + work reliably TestWithScenarios must be first in the MRO and you cannot + override run() or __call__. This is the most robust method, in the sense + that any test runner or test loader that obeys the python unittest protocol + will run all your scenarios. + + Manual generation + +++++++++++++++++ + + If you cannot subclass TestWithScenarios (e.g. because you are using + TwistedTestCase, or TestCaseWithResources, or any one of a number of other + useful test base classes, or need to override run() or __call__ yourself) then + you can cause scenario application to happen later by calling + ``testscenarios.generate_scenarios()``. For instance:: + + >>> import unittest + >>> try: + ... from StringIO import StringIO + ... except ImportError: + ... from io import StringIO + >>> from testscenarios.scenarios import generate_scenarios + + This can work with loaders and runners from the standard library, or possibly other + implementations:: + + >>> loader = unittest.TestLoader() + >>> test_suite = unittest.TestSuite() + >>> runner = unittest.TextTestRunner(stream=StringIO()) + + >>> mytests = loader.loadTestsFromNames(['doc.test_sample']) + >>> test_suite.addTests(generate_scenarios(mytests)) + >>> runner.run(test_suite) + + + Testloaders + +++++++++++ + + Some test loaders support hooks like ``load_tests`` and ``test_suite``. + Ensuring your tests have had scenario application done through these hooks can + be a good idea - it means that external test runners (which support these hooks + like ``nose``, ``trial``, ``tribunal``) will still run your scenarios. (Of + course, if you are using the subclassing approach this is already a surety). + With ``load_tests``:: + + >>> def load_tests(standard_tests, module, loader): + ... result = loader.suiteClass() + ... result.addTests(generate_scenarios(standard_tests)) + ... return result + + as a convenience, this is available in ``load_tests_apply_scenarios``, so a + module using scenario tests need only say :: + + >>> from testscenarios import load_tests_apply_scenarios as load_tests + + Python 2.7 and greater support a different calling convention for `load_tests`` + . `load_tests_apply_scenarios` + copes with both. + + With ``test_suite``:: + + >>> def test_suite(): + ... loader = TestLoader() + ... tests = loader.loadTestsFromName(__name__) + ... result = loader.suiteClass() + ... result.addTests(generate_scenarios(tests)) + ... return result + + + Setting Scenarios for a test + ============================ + + A sample test using scenarios can be found in the doc/ folder. + + See `pydoc testscenarios` for details. + + On the TestCase + +++++++++++++++ + + You can set a scenarios attribute on the test case:: + + >>> class MyTest(unittest.TestCase): + ... + ... scenarios = [ + ... ('scenario1', dict(param=1)), + ... ('scenario2', dict(param=2)),] + + This provides the main interface by which scenarios are found for a given test. + Subclasses will inherit the scenarios (unless they override the attribute). + + After loading + +++++++++++++ + + Test scenarios can also be generated arbitrarily later, as long as the test has + not yet run. Simply replace (or alter, but be aware that many tests may share a + single scenarios attribute) the scenarios attribute. For instance in this + example some third party tests are extended to run with a custom scenario. :: + + >>> import testtools + >>> class TestTransport: + ... 
"""Hypothetical test case for bzrlib transport tests""" + ... pass + ... + >>> stock_library_tests = unittest.TestLoader().loadTestsFromNames( + ... ['doc.test_sample']) + ... + >>> for test in testtools.iterate_tests(stock_library_tests): + ... if isinstance(test, TestTransport): + ... test.scenarios = test.scenarios + [my_vfs_scenario] + ... + >>> suite = unittest.TestSuite() + >>> suite.addTests(generate_scenarios(stock_library_tests)) + + Generated tests don't have a ``scenarios`` list, because they don't normally + require any more expansion. However, you can add a ``scenarios`` list back on + to them, and then run them through ``generate_scenarios`` again to generate the + cross product of tests. :: + + >>> class CrossProductDemo(unittest.TestCase): + ... scenarios = [('scenario_0_0', {}), + ... ('scenario_0_1', {})] + ... def test_foo(self): + ... return + ... + >>> suite = unittest.TestSuite() + >>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo"))) + >>> for test in testtools.iterate_tests(suite): + ... test.scenarios = [ + ... ('scenario_1_0', {}), + ... ('scenario_1_1', {})] + ... + >>> suite2 = unittest.TestSuite() + >>> suite2.addTests(generate_scenarios(suite)) + >>> print(suite2.countTestCases()) + 4 + + Dynamic Scenarios + +++++++++++++++++ + + A common use case is to have the list of scenarios be dynamic based on plugins + and available libraries. An easy way to do this is to provide a global scope + scenarios somewhere relevant to the tests that will use it, and then that can + be customised, or dynamically populate your scenarios from a registry etc. + For instance:: + + >>> hash_scenarios = [] + >>> try: + ... from hashlib import md5 + ... except ImportError: + ... pass + ... else: + ... hash_scenarios.append(("md5", dict(hash=md5))) + >>> try: + ... from hashlib import sha1 + ... except ImportError: + ... pass + ... else: + ... hash_scenarios.append(("sha1", dict(hash=sha1))) + ... + >>> class TestHashContract(unittest.TestCase): + ... + ... scenarios = hash_scenarios + ... + >>> class TestHashPerformance(unittest.TestCase): + ... + ... scenarios = hash_scenarios + + + Forcing Scenarios + +++++++++++++++++ + + The ``apply_scenarios`` function can be useful to apply scenarios to a test + that has none applied. ``apply_scenarios`` is the workhorse for + ``generate_scenarios``, except it takes the scenarios passed in rather than + introspecting the test object to determine the scenarios. The + ``apply_scenarios`` function does not reset the test scenarios attribute, + allowing it to be used to layer scenarios without affecting existing scenario + selection. + + + Generating Scenarios + ==================== + + Some functions (currently one :-) are available to ease generation of scenario + lists for common situations. + + Testing Per Implementation Module + +++++++++++++++++++++++++++++++++ + + It is reasonably common to have multiple Python modules that provide the same + capabilities and interface, and to want apply the same tests to all of them. + + In some cases, not all of the statically defined implementations will be able + to be used in a particular testing environment. For example, there may be both + a C and a pure-Python implementation of a module. You want to test the C + module if it can be loaded, but also to have the tests pass if the C module has + not been compiled. + + The ``per_module_scenarios`` function generates a scenario for each named + module. 
The module object of the imported module is set in the supplied + attribute name of the resulting scenario. + Modules which raise ``ImportError`` during import will have the + ``sys.exc_info()`` of the exception set instead of the module object. Tests + can check for the attribute being a tuple to decide what to do (e.g. to skip). + + Note that for the test to be valid, all access to the module under test must go + through the relevant attribute of the test object. If one of the + implementations is also directly imported by the test module or any other, + testscenarios will not magically stop it being used. + + + Advice on Writing Scenarios + =========================== + + If a parameterised test is because of a bug run without being parameterized, + it should fail rather than running with defaults, because this can hide bugs. + + + Producing Scenarios + =================== + + The `multiply_scenarios` function produces the cross-product of the scenarios + passed in:: + + >>> from testscenarios.scenarios import multiply_scenarios + >>> + >>> scenarios = multiply_scenarios( + ... [('scenario1', dict(param1=1)), ('scenario2', dict(param1=2))], + ... [('scenario2', dict(param2=1))], + ... ) + >>> scenarios == [('scenario1,scenario2', {'param2': 1, 'param1': 1}), + ... ('scenario2,scenario2', {'param2': 1, 'param1': 2})] + True + +Platform: UNKNOWN +Classifier: Development Status :: 6 - Mature +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: BSD License +Classifier: License :: OSI Approved :: Apache Software License +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Topic :: Software Development :: Quality Assurance +Classifier: Topic :: Software Development :: Testing diff -Nru python-testscenarios-0.2/lib/testscenarios.egg-info/requires.txt python-testscenarios-0.4/lib/testscenarios.egg-info/requires.txt --- python-testscenarios-0.2/lib/testscenarios.egg-info/requires.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios.egg-info/requires.txt 2013-01-27 00:32:59.000000000 +0000 @@ -0,0 +1 @@ +testtools \ No newline at end of file diff -Nru python-testscenarios-0.2/lib/testscenarios.egg-info/SOURCES.txt python-testscenarios-0.4/lib/testscenarios.egg-info/SOURCES.txt --- python-testscenarios-0.2/lib/testscenarios.egg-info/SOURCES.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios.egg-info/SOURCES.txt 2013-01-27 00:32:59.000000000 +0000 @@ -0,0 +1,25 @@ +.bzrignore +Apache-2.0 +BSD +COPYING +GOALS +HACKING +MANIFEST.in +Makefile +NEWS +README +setup.py +doc/__init__.py +doc/example.py +doc/test_sample.py +lib/testscenarios/__init__.py +lib/testscenarios/scenarios.py +lib/testscenarios/testcase.py +lib/testscenarios.egg-info/PKG-INFO +lib/testscenarios.egg-info/SOURCES.txt +lib/testscenarios.egg-info/dependency_links.txt +lib/testscenarios.egg-info/requires.txt +lib/testscenarios.egg-info/top_level.txt +lib/testscenarios/tests/__init__.py +lib/testscenarios/tests/test_scenarios.py +lib/testscenarios/tests/test_testcase.py \ No newline at end of file diff -Nru python-testscenarios-0.2/lib/testscenarios.egg-info/top_level.txt python-testscenarios-0.4/lib/testscenarios.egg-info/top_level.txt --- python-testscenarios-0.2/lib/testscenarios.egg-info/top_level.txt 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/lib/testscenarios.egg-info/top_level.txt 2013-01-27 
00:32:59.000000000 +0000 @@ -0,0 +1 @@ +testscenarios diff -Nru python-testscenarios-0.2/NEWS python-testscenarios-0.4/NEWS --- python-testscenarios-0.2/NEWS 2010-02-01 04:47:32.000000000 +0000 +++ python-testscenarios-0.4/NEWS 2013-01-26 19:10:51.000000000 +0000 @@ -6,17 +6,44 @@ IN DEVELOPMENT ~~~~~~~~~~~~~~ +0.4 +~~~ + +IMPROVEMENTS +------------ + +* Python 3.2 support added. (Robert Collins) + +0.3 +~~~ + +CHANGES +------- + +* New function ``per_module_scenarios`` for tests that should be applied across + multiple modules providing the same interface, some of which may not be + available at run time. (Martin Pool) + +* ``TestWithScenarios`` is now backed by a mixin - WithScenarios - which can be + mixed into different unittest implementations more cleanly (e.g. unittest2). + (James Polley, Robert Collins) + 0.2 ~~~ -CHANGES: +CHANGES +------- * Adjust the cloned tests ``shortDescription`` if one is present. (Ben Finney) +* Provide a load_tests implementation for easy use, and multiply_scenarios to + create the cross product of scenarios. (Martin Pool) + 0.1 ~~~ -CHANGES: +CHANGES +------- * Created project. The primary interfaces are ``testscenarios.TestWithScenarios`` and @@ -27,11 +54,3 @@ Also various presentation and language touchups. (Martin Pool) (Adjusted to use doctest directly, and to not print the demo runners output to stderror during make check - Robert Collins) - -IMPROVEMENTS: - -BUG FIXES: - -API CHANGES: - -INTERNALS: diff -Nru python-testscenarios-0.2/PKG-INFO python-testscenarios-0.4/PKG-INFO --- python-testscenarios-0.2/PKG-INFO 2010-02-01 05:05:55.000000000 +0000 +++ python-testscenarios-0.4/PKG-INFO 2013-01-27 00:32:59.000000000 +0000 @@ -1,6 +1,6 @@ -Metadata-Version: 1.0 +Metadata-Version: 1.1 Name: testscenarios -Version: 0.2 +Version: 0.4 Summary: Testscenarios, a pyunit extension for dependency injection Home-page: https://launchpad.net/testscenarios Author: Robert Collins @@ -10,18 +10,18 @@ testscenarios: extensions to python unittest to support scenarios ***************************************************************** - Copyright (c) 2009, Robert Collins - - Licensed under either the Apache License, Version 2.0 or the BSD 3-clause - license at the users choice. A copy of both licenses are available in the - project source as Apache-2.0 and BSD. You may not use this file except in - compliance with one of these two licences. - - Unless required by applicable law or agreed to in writing, software - distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT - WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - license you chose for the specific language governing permissions and - limitations under that license. + Copyright (c) 2009, Robert Collins + + Licensed under either the Apache License, Version 2.0 or the BSD 3-clause + license at the users choice. A copy of both licenses are available in the + project source as Apache-2.0 and BSD. You may not use this file except in + compliance with one of these two licences. + + Unless required by applicable law or agreed to in writing, software + distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT + WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + license you chose for the specific language governing permissions and + limitations under that license. testscenarios provides clean dependency injection for python unittest style @@ -77,20 +77,20 @@ * Clone the test to a new test with a new id uniquely distinguishing it. 
* Apply the scenario to the test by setting each key, value in the scenario - as attributes on the test object. + as attributes on the test object. There are some complicating factors around making this happen seamlessly. These factors are in two areas: * Choosing what scenarios to use. (See Setting Scenarios For A Test). - * Getting the multiplication to happen. + * Getting the multiplication to happen. Subclasssing ++++++++++++ If you can subclass TestWithScenarios, then the ``run()`` method in TestWithScenarios will take care of test multiplication. It will at test - execution act as a generator causing multiple tests to execute. For this to + execution act as a generator causing multiple tests to execute. For this to work reliably TestWithScenarios must be first in the MRO and you cannot override run() or __call__. This is the most robust method, in the sense that any test runner or test loader that obeys the python unittest protocol @@ -101,25 +101,28 @@ If you cannot subclass TestWithScenarios (e.g. because you are using TwistedTestCase, or TestCaseWithResources, or any one of a number of other - useful test base classes, or need to override run() or __call__ yourself) then + useful test base classes, or need to override run() or __call__ yourself) then you can cause scenario application to happen later by calling ``testscenarios.generate_scenarios()``. For instance:: - >>> import unittest - >>> import StringIO - >>> from testscenarios.scenarios import generate_scenarios + >>> import unittest + >>> try: + ... from StringIO import StringIO + ... except ImportError: + ... from io import StringIO + >>> from testscenarios.scenarios import generate_scenarios This can work with loaders and runners from the standard library, or possibly other implementations:: - >>> loader = unittest.TestLoader() - >>> test_suite = unittest.TestSuite() - >>> runner = unittest.TextTestRunner(stream=StringIO.StringIO()) - - >>> mytests = loader.loadTestsFromNames(['doc.test_sample']) - >>> test_suite.addTests(generate_scenarios(mytests)) - >>> runner.run(test_suite) - + >>> loader = unittest.TestLoader() + >>> test_suite = unittest.TestSuite() + >>> runner = unittest.TextTestRunner(stream=StringIO()) + + >>> mytests = loader.loadTestsFromNames(['doc.test_sample']) + >>> test_suite.addTests(generate_scenarios(mytests)) + >>> runner.run(test_suite) + Testloaders +++++++++++ @@ -131,19 +134,28 @@ course, if you are using the subclassing approach this is already a surety). With ``load_tests``:: - >>> def load_tests(standard_tests, module, loader): - ... result = loader.suiteClass() - ... result.addTests(generate_scenarios(standard_tests)) - ... return result + >>> def load_tests(standard_tests, module, loader): + ... result = loader.suiteClass() + ... result.addTests(generate_scenarios(standard_tests)) + ... return result + + as a convenience, this is available in ``load_tests_apply_scenarios``, so a + module using scenario tests need only say :: + + >>> from testscenarios import load_tests_apply_scenarios as load_tests + + Python 2.7 and greater support a different calling convention for `load_tests`` + . `load_tests_apply_scenarios` + copes with both. With ``test_suite``:: - >>> def test_suite(): - ... loader = TestLoader() - ... tests = loader.loadTestsFromName(__name__) - ... result = loader.suiteClass() - ... result.addTests(generate_scenarios(tests)) - ... return result + >>> def test_suite(): + ... loader = TestLoader() + ... tests = loader.loadTestsFromName(__name__) + ... 
result = loader.suiteClass() + ... result.addTests(generate_scenarios(tests)) + ... return result Setting Scenarios for a test @@ -158,11 +170,11 @@ You can set a scenarios attribute on the test case:: - >>> class MyTest(unittest.TestCase): - ... - ... scenarios = [ - ... ('scenario1', dict(param=1)), - ... ('scenario2', dict(param=2)),] + >>> class MyTest(unittest.TestCase): + ... + ... scenarios = [ + ... ('scenario1', dict(param=1)), + ... ('scenario2', dict(param=2)),] This provides the main interface by which scenarios are found for a given test. Subclasses will inherit the scenarios (unless they override the attribute). @@ -175,43 +187,43 @@ single scenarios attribute) the scenarios attribute. For instance in this example some third party tests are extended to run with a custom scenario. :: - >>> import testtools - >>> class TestTransport: - ... """Hypothetical test case for bzrlib transport tests""" - ... pass - ... - >>> stock_library_tests = unittest.TestLoader().loadTestsFromNames( - ... ['doc.test_sample']) - ... - >>> for test in testtools.iterate_tests(stock_library_tests): - ... if isinstance(test, TestTransport): - ... test.scenarios = test.scenarios + [my_vfs_scenario] - ... - >>> suite = unittest.TestSuite() - >>> suite.addTests(generate_scenarios(stock_library_tests)) + >>> import testtools + >>> class TestTransport: + ... """Hypothetical test case for bzrlib transport tests""" + ... pass + ... + >>> stock_library_tests = unittest.TestLoader().loadTestsFromNames( + ... ['doc.test_sample']) + ... + >>> for test in testtools.iterate_tests(stock_library_tests): + ... if isinstance(test, TestTransport): + ... test.scenarios = test.scenarios + [my_vfs_scenario] + ... + >>> suite = unittest.TestSuite() + >>> suite.addTests(generate_scenarios(stock_library_tests)) Generated tests don't have a ``scenarios`` list, because they don't normally require any more expansion. However, you can add a ``scenarios`` list back on to them, and then run them through ``generate_scenarios`` again to generate the cross product of tests. :: - >>> class CrossProductDemo(unittest.TestCase): - ... scenarios = [('scenario_0_0', {}), - ... ('scenario_0_1', {})] - ... def test_foo(self): - ... return - ... - >>> suite = unittest.TestSuite() - >>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo"))) - >>> for test in testtools.iterate_tests(suite): - ... test.scenarios = [ - ... ('scenario_1_0', {}), - ... ('scenario_1_1', {})] - ... - >>> suite2 = unittest.TestSuite() - >>> suite2.addTests(generate_scenarios(suite)) - >>> print suite2.countTestCases() - 4 + >>> class CrossProductDemo(unittest.TestCase): + ... scenarios = [('scenario_0_0', {}), + ... ('scenario_0_1', {})] + ... def test_foo(self): + ... return + ... + >>> suite = unittest.TestSuite() + >>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo"))) + >>> for test in testtools.iterate_tests(suite): + ... test.scenarios = [ + ... ('scenario_1_0', {}), + ... ('scenario_1_1', {})] + ... + >>> suite2 = unittest.TestSuite() + >>> suite2.addTests(generate_scenarios(suite)) + >>> print(suite2.countTestCases()) + 4 Dynamic Scenarios +++++++++++++++++ @@ -222,27 +234,27 @@ be customised, or dynamically populate your scenarios from a registry etc. For instance:: - >>> hash_scenarios = [] - >>> try: - ... from hashlib import md5 - ... except ImportError: - ... pass - ... else: - ... hash_scenarios.append(("md5", dict(hash=md5))) - >>> try: - ... from hashlib import sha1 - ... except ImportError: - ... pass - ... else: - ... 
hash_scenarios.append(("sha1", dict(hash=sha1))) - ... - >>> class TestHashContract(unittest.TestCase): - ... - ... scenarios = hash_scenarios - ... - >>> class TestHashPerformance(unittest.TestCase): - ... - ... scenarios = hash_scenarios + >>> hash_scenarios = [] + >>> try: + ... from hashlib import md5 + ... except ImportError: + ... pass + ... else: + ... hash_scenarios.append(("md5", dict(hash=md5))) + >>> try: + ... from hashlib import sha1 + ... except ImportError: + ... pass + ... else: + ... hash_scenarios.append(("sha1", dict(hash=sha1))) + ... + >>> class TestHashContract(unittest.TestCase): + ... + ... scenarios = hash_scenarios + ... + >>> class TestHashPerformance(unittest.TestCase): + ... + ... scenarios = hash_scenarios Forcing Scenarios @@ -257,12 +269,60 @@ selection. + Generating Scenarios + ==================== + + Some functions (currently one :-) are available to ease generation of scenario + lists for common situations. + + Testing Per Implementation Module + +++++++++++++++++++++++++++++++++ + + It is reasonably common to have multiple Python modules that provide the same + capabilities and interface, and to want apply the same tests to all of them. + + In some cases, not all of the statically defined implementations will be able + to be used in a particular testing environment. For example, there may be both + a C and a pure-Python implementation of a module. You want to test the C + module if it can be loaded, but also to have the tests pass if the C module has + not been compiled. + + The ``per_module_scenarios`` function generates a scenario for each named + module. The module object of the imported module is set in the supplied + attribute name of the resulting scenario. + Modules which raise ``ImportError`` during import will have the + ``sys.exc_info()`` of the exception set instead of the module object. Tests + can check for the attribute being a tuple to decide what to do (e.g. to skip). + + Note that for the test to be valid, all access to the module under test must go + through the relevant attribute of the test object. If one of the + implementations is also directly imported by the test module or any other, + testscenarios will not magically stop it being used. + + Advice on Writing Scenarios =========================== If a parameterised test is because of a bug run without being parameterized, it should fail rather than running with defaults, because this can hide bugs. + + Producing Scenarios + =================== + + The `multiply_scenarios` function produces the cross-product of the scenarios + passed in:: + + >>> from testscenarios.scenarios import multiply_scenarios + >>> + >>> scenarios = multiply_scenarios( + ... [('scenario1', dict(param1=1)), ('scenario2', dict(param1=2))], + ... [('scenario2', dict(param2=1))], + ... ) + >>> scenarios == [('scenario1,scenario2', {'param2': 1, 'param1': 1}), + ... 
('scenario2,scenario2', {'param2': 1, 'param1': 2})] + True + Platform: UNKNOWN Classifier: Development Status :: 6 - Mature Classifier: Intended Audience :: Developers @@ -270,5 +330,6 @@ Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Software Development :: Quality Assurance Classifier: Topic :: Software Development :: Testing diff -Nru python-testscenarios-0.2/README python-testscenarios-0.4/README --- python-testscenarios-0.2/README 2009-12-19 03:11:46.000000000 +0000 +++ python-testscenarios-0.4/README 2013-01-26 18:58:06.000000000 +0000 @@ -98,7 +98,10 @@ ``testscenarios.generate_scenarios()``. For instance:: >>> import unittest - >>> import StringIO + >>> try: + ... from StringIO import StringIO + ... except ImportError: + ... from io import StringIO >>> from testscenarios.scenarios import generate_scenarios This can work with loaders and runners from the standard library, or possibly other @@ -106,12 +109,12 @@ >>> loader = unittest.TestLoader() >>> test_suite = unittest.TestSuite() - >>> runner = unittest.TextTestRunner(stream=StringIO.StringIO()) + >>> runner = unittest.TextTestRunner(stream=StringIO()) >>> mytests = loader.loadTestsFromNames(['doc.test_sample']) >>> test_suite.addTests(generate_scenarios(mytests)) >>> runner.run(test_suite) - + Testloaders +++++++++++ @@ -128,6 +131,15 @@ ... result.addTests(generate_scenarios(standard_tests)) ... return result +as a convenience, this is available in ``load_tests_apply_scenarios``, so a +module using scenario tests need only say :: + + >>> from testscenarios import load_tests_apply_scenarios as load_tests + +Python 2.7 and greater support a different calling convention for `load_tests`` +. `load_tests_apply_scenarios` +copes with both. + With ``test_suite``:: >>> def test_suite(): @@ -202,7 +214,7 @@ ... >>> suite2 = unittest.TestSuite() >>> suite2.addTests(generate_scenarios(suite)) - >>> print suite2.countTestCases() + >>> print(suite2.countTestCases()) 4 Dynamic Scenarios @@ -249,8 +261,56 @@ selection. +Generating Scenarios +==================== + +Some functions (currently one :-) are available to ease generation of scenario +lists for common situations. + +Testing Per Implementation Module ++++++++++++++++++++++++++++++++++ + +It is reasonably common to have multiple Python modules that provide the same +capabilities and interface, and to want apply the same tests to all of them. + +In some cases, not all of the statically defined implementations will be able +to be used in a particular testing environment. For example, there may be both +a C and a pure-Python implementation of a module. You want to test the C +module if it can be loaded, but also to have the tests pass if the C module has +not been compiled. + +The ``per_module_scenarios`` function generates a scenario for each named +module. The module object of the imported module is set in the supplied +attribute name of the resulting scenario. +Modules which raise ``ImportError`` during import will have the +``sys.exc_info()`` of the exception set instead of the module object. Tests +can check for the attribute being a tuple to decide what to do (e.g. to skip). + +Note that for the test to be valid, all access to the module under test must go +through the relevant attribute of the test object. 
If one of the +implementations is also directly imported by the test module or any other, +testscenarios will not magically stop it being used. + + Advice on Writing Scenarios =========================== If a parameterised test is because of a bug run without being parameterized, it should fail rather than running with defaults, because this can hide bugs. + + +Producing Scenarios +=================== + +The `multiply_scenarios` function produces the cross-product of the scenarios +passed in:: + + >>> from testscenarios.scenarios import multiply_scenarios + >>> + >>> scenarios = multiply_scenarios( + ... [('scenario1', dict(param1=1)), ('scenario2', dict(param1=2))], + ... [('scenario2', dict(param2=1))], + ... ) + >>> scenarios == [('scenario1,scenario2', {'param2': 1, 'param1': 1}), + ... ('scenario2,scenario2', {'param2': 1, 'param1': 2})] + True diff -Nru python-testscenarios-0.2/setup.cfg python-testscenarios-0.4/setup.cfg --- python-testscenarios-0.2/setup.cfg 1970-01-01 00:00:00.000000000 +0000 +++ python-testscenarios-0.4/setup.cfg 2013-01-27 00:32:59.000000000 +0000 @@ -0,0 +1,5 @@ +[egg_info] +tag_build = +tag_date = 0 +tag_svn_revision = 0 + diff -Nru python-testscenarios-0.2/setup.py python-testscenarios-0.4/setup.py --- python-testscenarios-0.2/setup.py 2010-02-01 05:05:37.000000000 +0000 +++ python-testscenarios-0.4/setup.py 2013-01-26 19:11:00.000000000 +0000 @@ -1,12 +1,12 @@ #!/usr/bin/env python -from distutils.core import setup +from setuptools import setup import os.path -description = file(os.path.join(os.path.dirname(__file__), 'README'), 'rb').read() +description = open(os.path.join(os.path.dirname(__file__), 'README'), 'rt').read() setup(name="testscenarios", - version="0.2", + version="0.4", description="Testscenarios, a pyunit extension for dependency injection", long_description=description, maintainer="Robert Collins", @@ -21,7 +21,11 @@ 'License :: OSI Approved :: Apache Software License', 'Operating System :: OS Independent', 'Programming Language :: Python', + 'Programming Language :: Python :: 3', 'Topic :: Software Development :: Quality Assurance', 'Topic :: Software Development :: Testing', ], + install_requires = [ + 'testtools', + ] )
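Note for readers unfamiliar with the library being packaged: a minimal sketch of the 0.4 API that the README/PKG-INFO hunks above document is given below. It only uses names that appear in those hunks (``TestWithScenarios``, the ``scenarios`` attribute, ``load_tests_apply_scenarios``); the module, class and parameter names are invented for illustration and are not shipped by this package::

    # Hypothetical test module (e.g. test_addition.py); assumes Python 2.7+
    # or 3.x with testscenarios installed.
    import unittest

    import testscenarios
    # Exposing load_tests lets scenario-unaware loaders expand scenarios at
    # load time; TestWithScenarios.run() also multiplies at run time.
    from testscenarios import load_tests_apply_scenarios as load_tests


    class TestAddition(testscenarios.TestWithScenarios):
        # Each scenario is (name, parameters); the parameters are set as
        # attributes on a cloned test before it runs.
        scenarios = [
            ('small', dict(a=1, b=2, expected=3)),
            ('large', dict(a=10**6, b=1, expected=10**6 + 1)),
        ]

        def test_add(self):
            self.assertEqual(self.expected, self.a + self.b)


    if __name__ == '__main__':
        # Runs two tests: test_add(small) and test_add(large).
        unittest.main()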
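The cross-product behaviour of the ``multiply_scenarios`` helper documented in the README hunk above can be shown in a short sketch; the scenario names and parameters here are invented for the example::

    from testscenarios.scenarios import multiply_scenarios

    colours = [('red', dict(colour='red')), ('blue', dict(colour='blue'))]
    sizes = [('small', dict(size=1)), ('big', dict(size=10))]

    # Four compound scenarios: names joined with ',' and parameter
    # dictionaries merged.
    combined = multiply_scenarios(colours, sizes)
    assert [name for name, _ in combined] == [
        'red,small', 'red,big', 'blue,small', 'blue,big']
    assert combined[0][1] == {'colour': 'red', 'size': 1}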
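Similarly, a sketch of ``per_module_scenarios`` as described above: it imports each named module and, on failure, stores the ``sys.exc_info()`` tuple instead, so the test can decide to skip. The module list (``json`` plus ``simplejson``, which may not be installed) is illustrative only, and ``skipTest`` assumes Python 2.7+::

    import testscenarios
    from testscenarios.scenarios import per_module_scenarios


    class TestJsonImplementations(testscenarios.TestWithScenarios):
        # 'impl' ends up as either the imported module or, if the import
        # failed, the sys.exc_info() tuple captured by per_module_scenarios.
        scenarios = per_module_scenarios('impl', [
            ('json', 'json'),
            ('simplejson', 'simplejson'),
        ])

        def setUp(self):
            super(TestJsonImplementations, self).setUp()
            if isinstance(self.impl, tuple):
                self.skipTest('implementation module not importable')

        def test_roundtrip(self):
            data = {'answer': 42}
            self.assertEqual(data, self.impl.loads(self.impl.dumps(data)))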