Merge lp:~yamahata/glance/lp802883 into lp:~hudson-openstack/glance/trunk
Status: Merged
Merged at revision: 152
Proposed branch: lp:~yamahata/glance/lp802883
Merge into: lp:~hudson-openstack/glance/trunk
Diff against target: 48 lines (+6/-3), 3 files modified:
  Authors (+1/-0), run_tests.py (+4/-2), tests/unit/test_config.py (+1/-1)
To merge this branch: bzr merge lp:~yamahata/glance/lp802883
Related bugs: (none)

Reviews:
Christopher MacGown (community): Approve
Jay Pipes (community): Approve
Brian Lamar (community): Needs Information

Review via email: mp+66092@code.launchpad.net
Commit message
run_tests.py: allow nose plugins so that extra command line options are accepted
With this change, run_tests.sh will accept extra command line options, e.g.:
run_tests.sh [-V] --with-coverage
run_tests.sh [-V] --pudb --pudb-failure (with nosepudb installed)
Other unit test plugins also become available.
Description of the change
There are three changes:
- fix a unit test that would be broken by adding an extra command
line option
- fix run_tests.py, which would be broken by adding an extra command
line option
- lastly, make run_tests.py accept the default plugins.
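The third change corresponds to the nose configuration fragment below (a sketch mirroring the branch's diff, using nose's Python 2 era config/core API):

```python
import os
import sys

from nose import config, core

# Without the plugins= argument, nose's option parser only knows its
# built-in options and rejects anything a plugin would add (--pudb,
# --with-coverage, ...). DefaultPluginManager loads the registered
# nose plugins so their command line options are accepted.
c = config.Config(stream=sys.stdout,
                  env=os.environ,
                  verbosity=3,
                  plugins=core.DefaultPluginManager())
```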
Details
- tests/unit/
The first test in test_config specifies no options, but it checks
sys.argv when the default parameter is used, so it fails when any
parameter is passed to run_tests.py. Pass an empty list to the parser
explicitly so that the test passes independently of sys.argv.
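The idea can be sketched like this (a simplified, hypothetical stand-in for glance's config.parse_options; only the cli_args behavior matters here):

```python
import optparse

def parse_options(parser, cli_args=None):
    # Hypothetical stand-in for glance's config.parse_options: when
    # cli_args is None, optparse falls back to sys.argv[1:], which is
    # what made the unit test sensitive to the runner's extra options.
    options, args = parser.parse_args(cli_args)
    return vars(options), args

parser = optparse.OptionParser()
parser.add_option('--verbose', action='store_true', default=False)

# Passing [] explicitly makes the result independent of sys.argv, so
# the test no longer fails when run_tests.py gets plugin options.
parsed_options, args = parse_options(parser, [])
print(parsed_options)  # {'verbose': False}
```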
- run_tests.py: make run_tests.py work.
Without this patch, the following exception occurs.
Traceback (most recent call last):
File "run_tests.py", line 280, in <module>
sys.exit(not core.run(config=c, testRunner=runner))
File "/home/
ackages/
return TestProgram(*arg, **kw).success
File "/home/
ackages/
File "/usr/lib/
File "/home/
ackages/
result = self.testRunner
File "/home/
ackages/
result = self._makeResult()
File "run_tests.py", line 268, in _makeResult
self.config)
File "run_tests.py", line 183, in __init__
if colorizer.
File "run_tests.py", line 92, in supported
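The failure and its fix can be sketched as follows (hypothetical names; note that this sketch restores the captured stream afterwards, whereas the branch itself keeps sys.__stdout__ installed, as the preview diff shows):

```python
import sys

class _FakeAnsiColorizer(object):
    # Hypothetical stand-in for run_tests.py's _AnsiColorizer.
    @classmethod
    def supported(cls):
        # The real check ends up calling isatty() on the stream; under
        # nose's output capture, sys.stdout may be a StringIO without a
        # working isatty(), which is roughly what raised the exception.
        return hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()

def pick_colorizer(colorizers):
    # Sketch of the fix: remember the (possibly captured) stdout, then
    # run the terminal check against the real terminal, sys.__stdout__.
    stdout = sys.stdout
    sys.stdout = sys.__stdout__
    try:
        for colorizer in colorizers:
            if colorizer.supported():
                return colorizer
        return None
    finally:
        # Restore whatever the test runner had installed (this sketch's
        # choice; the branch leaves the real stdout in place).
        sys.stdout = stdout
```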
- run_tests.py: make the test runner accept plugins
With this changeset, useful nose plugins become available for unit
tests, so we can use a debugger with, say, --pdb, --pudb, ...
Isaku Yamahata (yamahata) wrote:
On Tue, Jun 28, 2011 at 11:39:30PM -0000, Brian Lamar wrote:
> Can you give an example of what options you're passing to run_tests.py to get the failure and what the failure is? I can run /run_tests.sh -V tests.unit.
What I want is 'run_tests.sh [-V] --pudb --pudb-failure' with nosepudb
installed. With that, when a test fails or an unexpected exception is raised,
the debugger is automatically activated and we can investigate what went
wrong.
It is possible with nova's run_tests.py, but not with glance's.
So I made it work with three fixes. It's quite useful and harms nothing.
--
yamahata
Brian Lamar (blamar) wrote:
Hey, I'm on board with your other changes, but I'm not certain that they make sense separately. Maybe they do and I just don't understand :) Can you give me a command I can run right now that this specific proposal will make work?
Thanks!
Isaku Yamahata (yamahata) wrote:
On Wed, Jun 29, 2011 at 04:45:58PM -0000, Brian Lamar wrote:
> Hey, I'm on board with your other changes, but I'm not certain that they make sense separately. Maybe they do and I just don't understand :) Can you give me a command I can run right now that this specific proposal will make work?
All three fixes are necessary to make 'run_tests.sh --pudb --pudb-failure'
work, so no single patch makes sense on its own.
I'm quite fine with putting the three patches into a single branch and marking
the other two reports invalid, as long as the fixes are accepted.
I reported three bugs separately just because they're logically
different and I thought it would help code review. But apparently you don't think so.
--
yamahata
Brian Lamar (blamar) wrote:
Absolutely, the current review system doesn't allow for grouping co-requisite branches. Can you please merge them together and invalidate the other two? Thank you
Jay Pipes (jaypipes) wrote:
> Absolutely, the current review system doesn't allow for grouping co-requisite
> branches. Can you please merge them together and invalidate the other two?
> Thank you
Hmm, yes it does :) You can set a pre-requisite branch when submitting a merge proposal by clicking the Extra button on the screen and entering in the branch to be a pre-requisite. You can chain dependent merge branches that way.
-jay
Jay Pipes (jaypipes) wrote:
> On Wed, Jun 29, 2011 at 04:45:58PM -0000, Brian Lamar wrote:
> > Hey, I'm on board with your other changes, but I'm not certain that they
> make sense separately. Maybe they do and I just don't understand :) Can you
> give me a command I can run right now that this specific proposal will make
> work?
>
> All three fixes are necessary to make 'test_run.sh --pudb --pudb-failure'
> work. So each single patch doesn't make sense separately.
> I'm quite fine with putting three patches into single branch and marking
> other two report invalid as long as the fixes are accepted.
Actually, I ran into the same thing when trying to do --with-coverage... so I know this is an issue.
-jay
Brian Lamar (blamar) wrote:
Hey Jay, I'm aware that pre-requisites work, but are co-dependent branches allowed? Basically Branch A depends on Branch B *and* Branch B depends on Branch A?
https:/
https:/
https:/
These three changes don't fix anything individually, but as a whole they seem to make run_tests.sh work? Maybe I'm an outlier here, but since this branch doesn't actually fix anything (as in, this line of code doesn't make anything *work* that I can tell), I'd rather have 1 branch than 3...in this case.
Isaku Yamahata (yamahata) wrote:
I think we all agree that these fixes should go into the glance tree,
but we don't agree on how to get them there.
(To be honest, I don't care what the right procedure in Launchpad is,
as long as the changes are accepted.)
So, to make progress, I merged the other two fixes into this branch
and marked the other two bug reports invalid.
Now this branch includes all three fixes.
https:/
So all that's left is to merge the branch into glance trunk.
If you want me to go another way, please tell me how or what I should do,
and I'll do so.
--
yamahata
Jay Pipes (jaypipes) wrote:
> Hey Jay, I'm aware that pre-requisites work, but are co-dependent branches
> allowed? Basically Branch A depends on Branch B *and* Branch B depends on
> Branch A?
Gotcha, sorry, I misunderstood...
> https:/
> https:/
> https:/
>
> These three changes don't fix anything individually, but as a whole they seem
> to make run_tests.sh work? Maybe I'm an outlier here, but since this branch
> doesn't actually fix anything (as in, this line of code doesn't make anything
> *work* that I can tell), I'd rather have 1 branch than 3...in this case.
Yes, I would prefer a single branch containing all three bug fixes...
-jay
Jay Pipes (jaypipes) wrote:
> Yes, I would prefer a single branch containing all three bug fixes...
And it looks like Yamahata did just that :)
-jay
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.
running test
running egg_info
creating glance.egg-info
writing glance.
writing top-level names to glance.
writing dependency_links to glance.
writing manifest file 'glance.
reading manifest file 'glance.
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.
running build_ext
We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_
test_bad_
test_delete_image (tests.
test_delete_
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_
test_show_
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised if ...
Isaku Yamahata (yamahata) wrote:
On Fri, Jul 01, 2011 at 04:37:22PM -0000, OpenStack Hudson wrote:
> =======
> FAIL: test_authors_
> -------
> Traceback (most recent call last):
> File "/tmp/tmpL3XuZd
> '%r not listed in Authors' % missing)
> AssertionError: set([u'<email address hidden>']) not listed in Authors
I fixed it by adding myself to Authors.
--
yamahata
Christopher MacGown (0x44):
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.
running test
running egg_info
creating glance.egg-info
writing glance.
writing top-level names to glance.
writing dependency_links to glance.
writing manifest file 'glance.
reading manifest file 'glance.
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.
running build_ext
We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... FAIL
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_
test_bad_
test_delete_image (tests.
test_delete_
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_
test_show_
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised i...
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.
running test
running egg_info
creating glance.egg-info
writing glance.
writing top-level names to glance.
writing dependency_links to glance.
writing manifest file 'glance.
reading manifest file 'glance.
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.
running build_ext
We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... FAIL
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_
test_bad_
test_delete_image (tests.
test_delete_
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_
test_show_
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised i...
Isaku Yamahata (yamahata) wrote:
On Wed, Jul 06, 2011 at 03:03:28PM -0000, OpenStack Hudson wrote:
> =======
> FAIL: We test the following sequential series of actions:
> -------
> Traceback (most recent call last):
> File "/tmp/tmp2Jc4bU
> self.start_
> File "/tmp/tmp2Jc4bU
> self.wait_
> File "/tmp/tmp2Jc4bU
> self.assertFals
> AssertionError: Failed to start servers.
>
> -------
The test passes for me. It looks like the server didn't start up within 3 seconds?
--
yamahata
Isaku Yamahata (yamahata) wrote:
On Wed, Jul 06, 2011 at 03:12:38PM -0000, OpenStack Hudson wrote:
> =======
> FAIL: Ensure marker and limit query params work
> -------
> Traceback (most recent call last):
> File "/tmp/tmpJjk6OQ
> self.assertEqua
> AssertionError: '{"images": []}' != 'Traceback (most recent call last):\n File "/usr/lib/
All tests pass for me after merging glance trunk (revno 151).
The output above shows that curl received the following traceback.
ECONNREFUSED means that nothing was listening on the glance registry port,
so the glance image server couldn't connect to the glance registry server.
My guess is, tests.utils.
tests.functiona
condition. So the testing machine is somewhat overloaded unluckily
whe...
Jay Pipes (jaypipes) wrote:
I'm pulling this to my local laptop to see if I can reproduce. I'm a bit skeptical about the port race condition, as this hasn't happened before, but you never know! :)
Jay Pipes (jaypipes) wrote:
I ran the tests four times; three times they passed, and once this happened
(reproduced locally with ./run_tests.sh -V):
=======
FAIL: test_filtered_
-------
Traceback (most recent call last):
File "/home/
self.
File "/home/
self.
File "/home/
self.
AssertionError: Failed to start servers.
=======
FAIL: test_ordered_images (tests.
-------
Traceback (most recent call last):
File "/home/
self.
File "/home/
self.
File "/home/
self.
AssertionError: Failed to start servers.
Running ./run_tests.sh -N I get absolutely no output:
jpipes@
So... something is amiss, and this branch seems to be causing it. I'm going to do some more investigating.
Jay Pipes (jaypipes) wrote:
I've reproduced this on trunk, too. :(
Seems that it only happens the *first* time run_tests.sh -V is run. After that, things seem to work every time... I'm going to look to see if we're forgetting to clean up the old pids or call stop_servers() in one of the tests...
-jay
Jay Pipes (jaypipes) wrote:
OK, turns out that a recent-ish change that made configure_db() call migrate.db_sync() was slowing down server startup. This caused the somewhat random "Failed to start servers" test failures, because the timeout was too short.
Rather than increase the timeout, I reverted the commit that made migration run on startup (we go back to the Nova way of making the user run nova-manage db_sync to migrate the registry database). Because of this change, I was able to reset the registry database used in most functional tests to the in-memory SQLite database. This alone decreased the test run time from 50 seconds to 30 seconds or less on most runs...
I've pushed the changes to lp:~jaypipes/glance/bug802883 and am going to merge propose that. If the tests all pass, it'll merge into trunk and I'll close this proposal as Merged...
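The timing interplay Jay describes can be sketched with a generic port-polling loop (a hypothetical helper, not the functional tests' actual start_servers/wait logic): if server startup, slowed by a db_sync on boot, takes longer than the fixed timeout, the loop gives up and the test reports a start failure.

```python
import socket
import time

def wait_for_port(host, port, timeout=3.0, interval=0.05):
    # Poll until something accepts connections on (host, port), or give
    # up after `timeout` seconds. A fixed timeout like this is exactly
    # what a slow startup (e.g. running migrations on boot) can exceed,
    # producing intermittent "Failed to start servers" failures.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```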
Isaku Yamahata (yamahata) wrote:
I ran the tests repeatedly, which explains why I didn't hit the issue.
The setup change and an unclean db sound much more plausible.
--
yamahata
Preview Diff
=== modified file 'Authors'
--- Authors	2011-06-27 14:37:40 +0000
+++ Authors	2011-07-08 10:04:13 +0000
@@ -6,6 +6,7 @@
 Donal Lafferty <donal.lafferty@citrix.com>
 Eldar Nugaev <enugaev@griddynamics.com>
 Ewan Mellor <ewan.mellor@citrix.com>
+Isaku Yamahata <yamahata@valinux.co.jp>
 Jay Pipes <jaypipes@gmail.com>
 Jinwoo 'Joseph' Suh <jsuh@isi.edu>
 Josh Kearney <josh@jk0.org>

=== modified file 'run_tests.py'
--- run_tests.py	2011-06-28 14:37:31 +0000
+++ run_tests.py	2011-07-08 10:04:13 +0000
@@ -180,7 +180,8 @@
         self._last_case = None
         self.colorizer = None
         # NOTE(vish, tfukushima): reset stdout for the terminal check
-        stdout = sys.__stdout__
+        stdout = sys.stdout
+        sys.stdout = sys.__stdout__
         for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]:
             if colorizer.supported():
                 self.colorizer = colorizer(self.stream)
@@ -281,7 +282,8 @@
 
     c = config.Config(stream=sys.stdout,
                       env=os.environ,
-                      verbosity=3)
+                      verbosity=3,
+                      plugins=core.DefaultPluginManager())
 
     runner = GlanceTestRunner(stream=c.stream,
                               verbosity=c.verbosity,

=== modified file 'tests/unit/test_config.py'
--- tests/unit/test_config.py	2011-06-11 18:18:22 +0000
+++ tests/unit/test_config.py	2011-07-08 10:04:13 +0000
@@ -40,7 +40,7 @@
         # of typed values
         parser = optparse.OptionParser()
         config.add_common_options(parser)
-        parsed_options, args = config.parse_options(parser)
+        parsed_options, args = config.parse_options(parser, [])
 
         expected_options = {'verbose': False, 'debug': False,
                             'config_file': None}