Merge lp:~pwlars/lava-test/list into lp:lava-test/0.0

Proposed by Paul Larson
Status: Merged
Merged at revision: 14
Proposed branch: lp:~pwlars/lava-test/list
Merge into: lp:lava-test/0.0
Diff against target: 165 lines (+109/-3)
5 files modified
abrek/builtins.py (+40/-0)
abrek/config.py (+12/-0)
abrek/testdef.py (+2/-2)
tests/__init__.py (+2/-1)
tests/test_builtins.py (+53/-0)
To merge this branch: bzr merge lp:~pwlars/lava-test/list
Reviewer: Paul Larson (community)
Review status: Needs Resubmitting
Review via email: mp+29512@code.launchpad.net

Description of the change

These two commands are pretty straightforward - one to list the tests that have been defined, another to list the tests that have been installed. I'm not sure how to go about testing them, since they print directly to stdout, unless I separate out a function to generate the list. Considering how simple they are right now, and the limited usefulness of such a function in other places, I think that might be overkill. But if you disagree, or have a better idea for how to unit test this, I'm all ears.

Thanks,
Paul Larson

lp:~pwlars/lava-test/list updated
16. By Paul Larson

Add a command to list saved results

Revision history for this message
Paul Larson (pwlars) wrote :

Updated to also add a command to list saved results, since that's also small and related.

Revision history for this message
James Westby (james-w) wrote :

On Thu, 08 Jul 2010 21:50:58 -0000, Paul Larson <email address hidden> wrote:
> Paul Larson has proposed merging lp:~pwlars/abrek/list into lp:abrek.
>
> Requested reviews:
> Linaro Infrastructure (arm-infrastructure)
>
>
> These two commands are pretty straightforward - one to list the tests that have been defined, another to list the tests that have been installed. Not sure how to go about testing them since it prints it directly, unless I separate out a function to generate the list. Considering how simple they are right now, and the limited usefulness of such a function in other places, I think that might be overkill. But if you disagree, or have a better idea for how to unit test this, I'm all ears.

So, while I agree there isn't a lot of value in testing these things,
the danger is that you make it harder for yourself to add tests later when
you want to, as there is more and more to do every time you add something
to them.

Separating out functions to list the tests etc. could be useful, and
would mean that you could have simpler tests for what is actually
printed.
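
Something like this hypothetical helper, say, which the cmd class would
then just print (a sketch, not anything in the branch):

    import os

    def list_installed(installdir):
        """Return the names of installed tests, or [] if none are installed."""
        try:
            return sorted(os.listdir(installdir))
        except OSError:
            return []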

As for testing the commands themselves there are a couple of things you
can do. One is to pass in to the command a file object that they write
to, which is normally arranged to be sys.stdout, but can be a StringIO
or something in a test. The other is to have test methods that exec the
commands in a subprocess and capture the output. This is slower, but
gives you end-to-end coverage.
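
A minimal sketch of the first approach, using a hypothetical cmd_example
rather than any real abrek command (Python 2, to match the codebase):

    import sys
    import unittest
    import StringIO

    class cmd_example(object):
        """Toy command whose output stream can be injected."""
        def __init__(self, out=None):
            # Defaults to sys.stdout, but a test can pass in a double.
            self.out = out or sys.stdout

        def run(self, argv):
            self.out.write("Known tests:\n")
            self.out.write("stream\n")

    class ExampleTest(unittest.TestCase):
        def test_output_is_captured(self):
            fake = StringIO.StringIO()
            cmd_example(out=fake).run([])
            self.assertTrue("stream" in fake.getvalue())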

Some union of all three approaches would probably suit you as things
grow.

  * Functions to do the work, separate from the UI classes. This allows
    you to test logic with targeted unit tests.

  * Tests of the command logic using test doubles for things like
    sys.stdout.

  * Tests that actually exec the commands and check that they did
    something. As these are the slowest and the hardest to debug you
    don't want to use them too much, but by this point there is little
    left to test except the fact that the command is available and
    actually does something, so they can be little more than smoke
    tests (see the sketch just after this list).
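
A sketch of that third kind, assuming the commands end up exposed through
an "abrek" console script with a "list-tests" subcommand (that spelling is
a guess):

    import subprocess
    import unittest

    class SmokeTests(unittest.TestCase):
        def test_list_tests_runs(self):
            # Assumes an "abrek" executable with a "list-tests" subcommand
            # is on PATH; adjust to however the commands are really wired up.
            proc = subprocess.Popen(["abrek", "list-tests"],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            self.assertEqual(0, proc.returncode)
            self.assertTrue("Known tests:" in out)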

Your actual code looks fine to me, though things such as "walk_packages"
aren't immediately obvious to me, so I would have reached for tests to
ensure that it did what I thought.
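
For reference, walk_packages just yields (importer, module_name, is_pkg)
tuples for everything it finds under the given search path; a quick
illustration against the stdlib logging package:

    from pkgutil import walk_packages
    import logging

    # Yields one tuple per module/package found under logging's path,
    # e.g. "config" and "handlers".
    for importer, name, ispkg in walk_packages(logging.__path__):
        print name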

Thanks,

James

Revision history for this message
Paul Larson (pwlars) wrote :

> * Tests of the command logic using test doubles for things like
> sys.stdout.
I was able to do something like this pretty easily for testing the command to list known tests. However, writing a test for listing the installed tests and the results is something I'm having trouble seeing past. Obviously it's unlikely that tests would be installed or results would exist at the time of running the unittest, and I really don't want to rely on the actual default directories. I'd much rather have the config object return a fake path that I can pre-fill with test data. I don't know of a way to do that though. Is it just because of how I handled the config object?

Revision history for this message
James Westby (james-w) wrote :

> > * Tests of the command logic using test doubles for things like
> > sys.stdout.
> I was able to something for this pretty easily for testing the command to list
> known tests. However writing a test for listing the installed tests and the
> results is something I'm having trouble seeing past. Obviously it's unlikely
> that tests would be installed or results would exist at the time of running
> the unittest, and I really don't want to rely on the actual default
> directories. I'd much rather have the config object return a fake path, that
> I can pre-fill with test data. I don't know of a way to do that though. Is
> it just because of how I handled the config object?

Having a global config makes this tricky.

If you are not forking a subprocess to run these commands then you can do
something like replace AbrekConfig with an object with known values.

If you are forking then you can set the env to point $HOME to a tempdir
that you create, containing config files with the desired contents.
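
A rough sketch of the forking variant, assuming an "abrek" console script,
a "list-results" subcommand, and a default data layout under
$HOME/.local/share (all assumptions; the in-process set_config route is
what the branch ends up taking below):

    import os
    import shutil
    import subprocess
    import tempfile
    import unittest

    class ForkedConfigTest(unittest.TestCase):
        def setUp(self):
            # Fake $HOME pre-filled with one result. The
            # .local/share/abrek/results layout is a guess at where the
            # default config looks; adjust to match the real basedata.
            self.home = tempfile.mkdtemp()
            resultsdir = os.path.join(self.home, ".local", "share",
                                      "abrek", "results")
            os.makedirs(os.path.join(resultsdir, "myresult000"))

        def tearDown(self):
            shutil.rmtree(self.home)

        def test_list_results(self):
            # "abrek list-results" is an assumed CLI spelling of cmd_list_results.
            env = dict(os.environ, HOME=self.home)
            proc = subprocess.Popen(["abrek", "list-results"], env=env,
                                    stdout=subprocess.PIPE)
            out = proc.communicate()[0]
            self.assertTrue("myresult000" in out)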

Thanks,

James

lp:~pwlars/lava-test/list updated
17. By Paul Larson

Handle errors gracefully when the directories for results and installed
tests don't even exist

18. By Paul Larson

improved config handling, thanks jamesw

19. By Paul Larson

Added tests for listing installed/known tests, and results

Revision history for this message
Paul Larson (pwlars) wrote :

Thanks for the help! I've updated the branch with tests for the three commands that I added.

review: Needs Resubmitting

Preview Diff

=== modified file 'abrek/builtins.py'
--- abrek/builtins.py 2010-07-06 11:36:06 +0000
+++ abrek/builtins.py 2010-07-09 21:51:42 +0000
@@ -1,3 +1,4 @@
+import os
 import sys
 
 import abrek.command
@@ -82,3 +83,42 @@
         except Exception as strerror:
             print "Test uninstall error: %s" % strerror
             sys.exit(1)
+
+class cmd_list_installed(abrek.command.AbrekCmd):
+    """
+    List tests that are currently installed
+    """
+    def run(self, argv):
+        from abrek.config import get_config
+        config = get_config()
+        print "Installed tests:"
+        try:
+            for dir in os.listdir(config.installdir):
+                print dir
+        except OSError:
+            print "No tests installed"
+
+class cmd_list_tests(abrek.command.AbrekCmd):
+    """
+    List all known tests
+    """
+    def run(self, argv):
+        from abrek import test_definitions
+        from pkgutil import walk_packages
+        print "Known tests:"
+        for importer, mod, ispkg in walk_packages(test_definitions.__path__):
+            print mod
+
+class cmd_list_results(abrek.command.AbrekCmd):
+    """
+    List results of previous runs
+    """
+    def run(self, argv):
+        from abrek.config import get_config
+        config = get_config()
+        print "Saved results:"
+        try:
+            for dir in os.listdir(config.resultsdir):
+                print dir
+        except OSError:
+            print "No results found"

=== modified file 'abrek/config.py'
--- abrek/config.py 2010-06-21 19:45:49 +0000
+++ abrek/config.py 2010-07-09 21:51:42 +0000
@@ -11,3 +11,15 @@
         self.installdir = os.path.join(basedata, 'abrek', 'installed-tests')
         self.resultsdir = os.path.join(basedata, 'abrek', 'results')
 
+_config = None
+
+def get_config():
+    global _config
+    if _config is not None:
+        return _config
+    return AbrekConfig()
+
+def set_config(config):
+    global _config
+    _config = config
+

=== modified file 'abrek/testdef.py'
--- abrek/testdef.py 2010-06-29 16:47:51 +0000
+++ abrek/testdef.py 2010-07-09 21:51:42 +0000
@@ -7,7 +7,7 @@
 from commands import getstatusoutput
 from datetime import datetime
 
-import abrek.config
+from abrek.config import get_config
 from abrek.utils import geturl, write_file
 
 class AbrekTest(object):
@@ -25,7 +25,7 @@
     """
     def __init__(self, testname, version="", installer=None, runner=None,
                  parser=None):
-        self.config = abrek.config.AbrekConfig()
+        self.config = get_config()
         self.testname = testname
         self.version = version
         self.installer = installer

=== modified file 'tests/__init__.py'
--- tests/__init__.py 2010-06-28 20:23:02 +0000
+++ tests/__init__.py 2010-07-09 21:51:42 +0000
@@ -1,7 +1,8 @@
 import unittest
 
 def test_suite():
-    module_names = ['tests.test_abrekcmd',
+    module_names = ['tests.test_builtins',
+                    'tests.test_abrekcmd',
                     'tests.test_abrektestinstaller',
                     'tests.test_abrektestrunner']
     loader = unittest.TestLoader()

=== added file 'tests/test_builtins.py'
--- tests/test_builtins.py 1970-01-01 00:00:00 +0000
+++ tests/test_builtins.py 2010-07-09 21:51:42 +0000
@@ -0,0 +1,53 @@
+import os
+import shutil
+import sys
+import tempfile
+import unittest
+import StringIO
+
+import abrek.builtins
+from abrek.config import set_config
+
+class FakeOutputTests(unittest.TestCase):
+    def setUp(self):
+        self.origstdout = sys.stdout
+        sys.stdout = self.fakestdout = StringIO.StringIO()
+
+    def tearDown(self):
+        sys.stdout = self.origstdout
+
+class ListKnown(FakeOutputTests):
+    def test_list_tests(self):
+        cmd = abrek.builtins.cmd_list_tests()
+        cmd.run("argv_junk")
+        self.assertTrue("stream" in self.fakestdout.getvalue())
+
+class FakeConfigTests(FakeOutputTests):
+    def setUp(self):
+        super(FakeConfigTests, self).setUp()
+        class fakeconfig:
+            def __init__(self, basedir):
+                self.configdir = os.path.join(basedir, "config")
+                self.installdir = os.path.join(basedir, "install")
+                self.resultsdir = os.path.join(basedir, "results")
+        self.tmpdir = tempfile.mkdtemp()
+        self.config = fakeconfig(self.tmpdir)
+        set_config(self.config)
+
+    def tearDown(self):
+        super(FakeConfigTests, self).tearDown()
+        shutil.rmtree(self.tmpdir)
+
+    def test_list_installed(self):
+        test_name="test_list_installed000"
+        os.makedirs(os.path.join(self.config.installdir, test_name))
+        cmd = abrek.builtins.cmd_list_installed()
+        cmd.run("argv_junk")
+        self.assertTrue(test_name in self.fakestdout.getvalue())
+
+    def test_list_results(self):
+        result_name = "test_list_results000"
+        os.makedirs(os.path.join(self.config.resultsdir, result_name))
+        cmd = abrek.builtins.cmd_list_results()
+        cmd.run("argv_junk")
+        self.assertTrue(result_name in self.fakestdout.getvalue())
