Merge lp:~yamahata/glance/lp802883 into lp:~hudson-openstack/glance/trunk

Proposed by Isaku Yamahata
Status: Merged
Merged at revision: 152
Proposed branch: lp:~yamahata/glance/lp802883
Merge into: lp:~hudson-openstack/glance/trunk
Diff against target: 48 lines (+6/-3)
3 files modified
Authors (+1/-0)
run_tests.py (+4/-2)
tests/unit/test_config.py (+1/-1)
To merge this branch: bzr merge lp:~yamahata/glance/lp802883
Reviewer Review Type Date Requested Status
Christopher MacGown (community) Approve
Jay Pipes (community) Approve
Brian Lamar (community) Needs Information
Review via email: mp+66092@code.launchpad.net

Commit message

run_tests.py: allow nose plugins so that it accepts extra command line options

With this change, run_tests.sh accepts extra command line options,
for example:
run_tests.sh [-V] --with-coverage
run_tests.sh [-V] --pudb --pudb-failure (with nosepudb installed)

Other unit test plugins also become available.

Description of the change

There are three changes:
- fix a unit test that would be broken by adding extra command
  line options
- fix run_tests.py, which would be broken by adding extra command
  line options
- lastly, make run_tests.py accept the default plugins.

Details
- tests/unit/test_config.py fails when extra options are passed to run_tests.py
The first test in test_config.TestConfig.test_parse_options covers the
 case where no options are specified. But it reads sys.argv when the
 default parameter is used, so it fails whenever any option is passed
 to run_tests.py. The fix is to pass an empty list to the parser
 explicitly, making the test independent of sys.argv.
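The failure mode and the fix can be reproduced with plain optparse. This is an illustrative sketch, not Glance's actual config code; the `--pudb` option here simply stands in for any nose plugin option the parser does not know about:

```python
import optparse
import sys

parser = optparse.OptionParser()
parser.add_option("--verbose", action="store_true", default=False)

# Simulate run_tests.py having been launched with a plugin option
# that this parser does not recognize.
sys.argv = ["run_tests.py", "--pudb"]

# Default behaviour: parse_args() reads sys.argv[1:], so the unknown
# option makes optparse bail out (parser.error() calls sys.exit).
try:
    parser.parse_args()
except SystemExit:
    failed = True
else:
    failed = False

# The fix: pass an explicit argument list, decoupling the test from
# whatever was on the real command line.
options, args = parser.parse_args([])
```

With an explicit empty list, the parse succeeds regardless of sys.argv, which is exactly what the one-line change to test_config.py relies on.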

- run_tests.py: make run_tests.py work when extra options are passed.
Without this patch, the following exception occurs:

  Traceback (most recent call last):
     File "run_tests.py", line 280, in <module>
       sys.exit(not core.run(config=c, testRunner=runner))
     File "/home/yamahata/openstack/src/glance/my/.glance-venv/lib/python2.6/site-packages/nose/core.py", line 283, in run
       return TestProgram(*arg, **kw).success
     File "/home/yamahata/openstack/src/glance/my/.glance-venv/lib/python2.6/site-packages/nose/core.py", line 118, in __init__
       **extra_args)
     File "/usr/lib/python2.6/unittest.py", line 817, in __init__
       self.runTests()
     File "/home/yamahata/openstack/src/glance/my/.glance-venv/lib/python2.6/site-packages/nose/core.py", line 197, in runTests
       result = self.testRunner.run(self.test)
     File "/home/yamahata/openstack/src/glance/my/.glance-venv/lib/python2.6/site-packages/nose/core.py", line 59, in run
       result = self._makeResult()
     File "run_tests.py", line 268, in _makeResult
       self.config)
     File "run_tests.py", line 183, in __init__
       if colorizer.supported():
     File "run_tests.py", line 92, in supported
       curses.setupterm()

- run_tests.py: make the test runner accept plugins
With this changeset, useful plugins become available
 for unit tests. For example, we can use a debugger with
 options such as --pdb, --pudb, ...

Revision history for this message
Brian Lamar (blamar) wrote :

Can you give an example of what options you're passing to run_tests.py to get the failure, and what the failure is? I can run ./run_tests.sh -V tests.unit.test_config and everything works fine... but I'm not certain what other options I would pass... Thanks!

review: Needs Information
Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Tue, Jun 28, 2011 at 11:39:30PM -0000, Brian Lamar wrote:
> Can you give an example of what options you're passing to run_tests.py to get the failure and what the failure is? I can run /run_tests.sh -V tests.unit.test_config and everything works fine...but I'm not certain what option options I would pass... Thanks!

What I want is run_tests.sh [-V] --pudb --pudb-failure with nosepudb
installed. With that, when a test fails or an unexpected exception is
raised, the debugger is activated automatically and we can investigate
what went wrong.

It is possible with nova's run_tests.py, but not with glance's.
So I made it work with three fixes. It's quite useful and harms nothing.
--
yamahata

Revision history for this message
Brian Lamar (blamar) wrote :

Hey, I'm on board with your other changes, but I'm not certain that they make sense separately. Maybe they do and I just don't understand :) Can you give me a command I can run right now that this specific proposal will make work?

Thanks!

Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Wed, Jun 29, 2011 at 04:45:58PM -0000, Brian Lamar wrote:
> Hey, I'm on board with your other changes, but I'm not certain that they make sense separately. Maybe they do and I just don't understand :) Can you give me a command I can run right now that this specific proposal will make work?

All three fixes are necessary to make 'run_tests.sh --pudb --pudb-failure'
work, so no single patch makes sense on its own.
I'm quite fine with putting the three patches into a single branch and
marking the other two bug reports invalid, as long as the fixes are accepted.

I reported three bugs separately just because they're logically
different, and I thought it would help code review. But it seems you don't think so.
--
yamahata

Revision history for this message
Brian Lamar (blamar) wrote :

Absolutely, the current review system doesn't allow for grouping co-requisite branches. Can you please merge them together and invalidate the other two? Thank you

Revision history for this message
Jay Pipes (jaypipes) wrote :

> Absolutely, the current review system doesn't allow for grouping co-requisite
> branches. Can you please merge them together and invalidate the other two?
> Thank you

Hmm, yes it does :) You can set a pre-requisite branch when submitting a merge proposal by clicking the Extra button on the screen and entering in the branch to be a pre-requisite. You can chain dependent merge branches that way.

-jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

> On Wed, Jun 29, 2011 at 04:45:58PM -0000, Brian Lamar wrote:
> > Hey, I'm on board with your other changes, but I'm not certain that they
> make sense separately. Maybe they do and I just don't understand :) Can you
> give me a command I can run right now that this specific proposal will make
> work?
>
> All three fixes are necessary to make 'test_run.sh --pudb --pudb-failure'
> work. So each single patch doesn't make sense separately.
> I'm quite fine with putting three patches into single branch and marking
> other two report invalid as long as the fixes are accepted.

Actually, I ran into the same thing when trying to do --with-coverage... so I know this is an issue.

-jay

review: Approve
Revision history for this message
Brian Lamar (blamar) wrote :

Hey Jay, I'm aware that pre-requisites work, but are co-dependent branches allowed? Basically Branch A depends on Branch B *and* Branch B depends on Branch A?

https://code.launchpad.net/~yamahata/glance/lp802883/+merge/66092
https://code.launchpad.net/~yamahata/glance/lp802885/+merge/66093
https://code.launchpad.net/~yamahata/glance/lp802878/+merge/66090

These three changes don't fix anything individually, but as a whole they seem to make run_tests.sh work? Maybe I'm an outlier here, but since this branch doesn't actually fix anything (as in, this line of code doesn't make anything *work* that I can tell), I'd rather have 1 branch than 3...in this case.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

I think we all agree that these fixes should go into the glance tree,
but we don't agree on how to get there.
(To be honest, I don't care what the right procedure in Launchpad is,
as long as the changes are accepted.)

So, to make progress, I merged the other two fixes into this branch
and marked the other two bug reports invalid.
Now this branch includes all three fixes:
https://code.launchpad.net/~yamahata/glance/lp802883/+merge/66092
All that remains is to merge the branch into glance trunk.

If you want me to do it another way, please tell me what I should do,
and I'll do so.
--
yamahata

Revision history for this message
Jay Pipes (jaypipes) wrote :

> Hey Jay, I'm aware that pre-requisites work, but are co-dependent branches
> allowed? Basically Branch A depends on Branch B *and* Branch B depends on
> Branch A?

Gotcha, sorry, I misunderstood...

> https://code.launchpad.net/~yamahata/glance/lp802883/+merge/66092
> https://code.launchpad.net/~yamahata/glance/lp802885/+merge/66093
> https://code.launchpad.net/~yamahata/glance/lp802878/+merge/66090
>
> These three changes don't fix anything individually, but as a whole they seem
> to make run_tests.sh work? Maybe I'm an outlier here, but since this branch
> doesn't actually fix anything (as in, this line of code doesn't make anything
> *work* that I can tell), I'd rather have 1 branch than 3...in this case.

Yes, I would prefer a single branch containing all three bug fixes...

-jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

> Yes, I would prefer a single branch containing all three bug fixes...

And it looks like Yamahata did just that :)

-jay

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.

running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext

We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_container_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_bad_disk_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_image (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-meta-checksum ... ok
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_image_basic (tests.unit.test_api.TestGlanceAPI) ... ok
test_show_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised if ...

Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Fri, Jul 01, 2011 at 04:37:22PM -0000, OpenStack Hudson wrote:
> ======================================================================
> FAIL: test_authors_up_to_date (tests.unit.test_misc.AuthorsTestCase)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/tmp/tmpL3XuZd/tests/unit/test_misc.py", line 75, in test_authors_up_to_date
> '%r not listed in Authors' % missing)
> AssertionError: set([u'<email address hidden>']) not listed in Authors

I fixed it by adding myself to the Authors file.

--
yamahata

Revision history for this message
Christopher MacGown (0x44) :
review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.

running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext

We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... FAIL
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_container_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_bad_disk_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_image (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-meta-checksum ... ok
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_image_basic (tests.unit.test_api.TestGlanceAPI) ... ok
test_show_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised i...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/glance/lp802883 into lp:glance failed. Below is the output from the failed tests.

running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext

We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... FAIL
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
We test the following sequential series of actions: ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_container_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_bad_disk_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_image (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-meta-checksum ... ok
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_image_basic (tests.unit.test_api.TestGlanceAPI) ... ok
test_show_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised i...

Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Wed, Jul 06, 2011 at 03:03:28PM -0000, OpenStack Hudson wrote:
> ======================================================================
> FAIL: We test the following sequential series of actions:
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/tmp/tmp2Jc4bU/tests/functional/test_httplib2_api.py", line 67, in test_get_head_simple_post
> self.start_servers()
> File "/tmp/tmp2Jc4bU/tests/functional/__init__.py", line 311, in start_servers
> self.wait_for_servers()
> File "/tmp/tmp2Jc4bU/tests/functional/__init__.py", line 345, in wait_for_servers
> self.assertFalse(True, "Failed to start servers.")
> AssertionError: Failed to start servers.
>
> ----------------------------------------------------------------------

The test passes for me. It looks like the server didn't start up within 3 seconds?

--
yamahata

Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Wed, Jul 06, 2011 at 03:12:38PM -0000, OpenStack Hudson wrote:

> ======================================================================
> FAIL: Ensure marker and limit query params work
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/tmp/tmpJjk6OQ/tests/functional/test_curl_api.py", line 1017, in test_limited_images
> self.assertEqual('{"images": []}', out.strip())
> AssertionError: '{"images": []}' != 'Traceback (most recent call last):\n File "/usr/lib/python2.6/dist-packages/eventlet/wsgi.py", line 336, in handle_one_response\n result = self.application(self.environ, start_response)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 147, in __call__\n resp = self.call_func(req, *args, **self.kwargs)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 208, in call_func\n return self.func(req, *args, **kwargs)\n File "/tmp/tmpJjk6OQ/glance/common/wsgi.py", line 113, in __call__\n response = req.get_response(self.application)\n File "/usr/lib/pymodules/python2.6/webob/request.py", line 1053, in get_response\n application, catch_exc_info=False)\n File "/usr/lib/pymodules/python2.6/webob/request.py", line 1022, in call_application\n app_iter = application(self.environ, start_response)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__\n return resp(environ, start_response)\n File "/usr/local/lib/python2.6/dist-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py", line 131, in __call__\n response = self.app(environ, start_response)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__\n return resp(environ, start_response)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 147, in __call__\n resp = self.call_func(req, *args, **self.kwargs)\n File "/usr/lib/pymodules/python2.6/webob/dec.py", line 208, in call_func\n return self.func(req, *args, **kwargs)\n File "/tmp/tmpJjk6OQ/glance/common/wsgi.py", line 311, in __call__\n request, **action_args)\n File "/tmp/tmpJjk6OQ/glance/common/wsgi.py", line 328, in dispatch\n return method(*args, **kwargs)\n File "/tmp/tmpJjk6OQ/glance/api/v1/images.py", line 99, in index\n images = registry.get_images_list(self.options, **params)\n File "/tmp/tmpJjk6OQ/glance/registry/__init__.py", line 37, in get_images_list\n return c.get_images(**kwargs)\n File "/tmp/tmpJjk6OQ/glance/registry/client.py", line 58, in get_images\n res = 
self.do_request("GET", "/images", params=params)\n File "/tmp/tmpJjk6OQ/glance/common/client.py", line 148, in do_request\n "server. Got error: %s" % e)\nClientConnectionError: Unable to connect to server. Got error: [Errno 111] ECONNREFUSED'

All tests pass for me after merging glance trunk (revno 151).

The traceback above is what curl received.
ECONNREFUSED means that nothing was listening on the glance registry's
port, so the glance API server couldn't connect to the glance registry
server.

My guess is that tests.utils.get_unused_port() and
tests.functional.FunctionalTest.ping_server() are very prone to race
conditions. So the testing machine was somewhat overloaded unluckily
whe...

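The helper under suspicion, tests.utils.get_unused_port(), typically follows the bind-to-port-0-then-close pattern. The sketch below is an illustrative reimplementation of that pattern, not Glance's actual code, showing why it is inherently racy:

```python
import socket

def get_unused_port():
    # Ask the kernel for any free port by binding to port 0, then
    # close the socket and hand the port number back to the caller.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    port = sock.getsockname()[1]
    sock.close()
    # Race: the port is only known-free at close() time.  Anything
    # else on the machine may bind it before the server under test
    # does, which would produce exactly the ECONNREFUSED seen above.
    return port

port = get_unused_port()
```

The check (close) and the use (the test server's later bind) are two separate steps, so nothing guarantees the port is still free in between.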

Revision history for this message
Jay Pipes (jaypipes) wrote :

I'm pulling this to my local laptop to see if I can reproduce. I'm a bit skeptical about the port race condition, as this hasn't happened before, but you never know! :)

Revision history for this message
Jay Pipes (jaypipes) wrote :

I ran the tests four times. 3 times they passed, one time this happened:

Reproduced the failure locally with ./run_tests.sh -V:
======================================================================
FAIL: test_filtered_images (tests.functional.test_curl_api.TestCurlApi)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpipes/repos/glance/lp802883/tests/functional/test_curl_api.py", line 793, in test_filtered_images
    self.start_servers()
  File "/home/jpipes/repos/glance/lp802883/tests/functional/__init__.py", line 311, in start_servers
    self.wait_for_servers()
  File "/home/jpipes/repos/glance/lp802883/tests/functional/__init__.py", line 345, in wait_for_servers
    self.assertFalse(True, "Failed to start servers.")
AssertionError: Failed to start servers.

======================================================================
FAIL: test_ordered_images (tests.functional.test_curl_api.TestCurlApi)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpipes/repos/glance/lp802883/tests/functional/test_curl_api.py", line 1121, in test_ordered_images
    self.start_servers()
  File "/home/jpipes/repos/glance/lp802883/tests/functional/__init__.py", line 311, in start_servers
    self.wait_for_servers()
  File "/home/jpipes/repos/glance/lp802883/tests/functional/__init__.py", line 345, in wait_for_servers
    self.assertFalse(True, "Failed to start servers.")
AssertionError: Failed to start servers.

Running ./run_tests.sh -N I get absolutely no output:

jpipes@serialcoder:~/repos/glance/lp802883$ ./run_tests.sh -N

So.. something is amiss that this branch is causing. I'm going to do some more investigating.

Revision history for this message
Jay Pipes (jaypipes) wrote :

I've reproduced this on trunk, too. :(

Seems that it only happens the *first* time run_tests.sh -V is run. After that, things seem to work every time... I'm going to look to see if we're forgetting to clean up the old pids or call stop_servers() in one of the tests...

-jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, turns out that a recent-ish change that made configure_db() call migrate.db_sync() was causing startup times for the servers to slow down. This caused the somewhat random failure to start servers test failures because the timeout was too short.

Rather than increase the timeout, I reverted the commit that made migration run on startup (we go back to the Nova way of making the user run nova-manage db_sync to migrate the registry database). Because of this change, I was able to reset the registry database used in most functional tests to the in-memory SQLite database. This alone decreased the test run time from 50 seconds to 30 seconds or less on most runs...

I've pushed the changes to lp:~jaypipes/glance/bug802883 and am going to merge propose that. If the tests all pass, it'll merge into trunk and I'll close this proposal as Merged...

Revision history for this message
Isaku Yamahata (yamahata) wrote :

I ran the tests repeatedly, which explains why I didn't hit the issue.
The setup change and an unclean db sound much more plausible.
--
yamahata

Preview Diff

=== modified file 'Authors'
--- Authors 2011-06-27 14:37:40 +0000
+++ Authors 2011-07-08 10:04:13 +0000
@@ -6,6 +6,7 @@
 Donal Lafferty <donal.lafferty@citrix.com>
 Eldar Nugaev <enugaev@griddynamics.com>
 Ewan Mellor <ewan.mellor@citrix.com>
+Isaku Yamahata <yamahata@valinux.co.jp>
 Jay Pipes <jaypipes@gmail.com>
 Jinwoo 'Joseph' Suh <jsuh@isi.edu>
 Josh Kearney <josh@jk0.org>

=== modified file 'run_tests.py'
--- run_tests.py 2011-06-28 14:37:31 +0000
+++ run_tests.py 2011-07-08 10:04:13 +0000
@@ -180,7 +180,8 @@
         self._last_case = None
         self.colorizer = None
         # NOTE(vish, tfukushima): reset stdout for the terminal check
-        stdout = sys.__stdout__
+        stdout = sys.stdout
+        sys.stdout = sys.__stdout__
         for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]:
             if colorizer.supported():
                 self.colorizer = colorizer(self.stream)
@@ -281,7 +282,8 @@

     c = config.Config(stream=sys.stdout,
                       env=os.environ,
-                      verbosity=3)
+                      verbosity=3,
+                      plugins=core.DefaultPluginManager())

     runner = GlanceTestRunner(stream=c.stream,
                               verbosity=c.verbosity,

=== modified file 'tests/unit/test_config.py'
--- tests/unit/test_config.py 2011-06-11 18:18:22 +0000
+++ tests/unit/test_config.py 2011-07-08 10:04:13 +0000
@@ -40,7 +40,7 @@
         # of typed values
         parser = optparse.OptionParser()
         config.add_common_options(parser)
-        parsed_options, args = config.parse_options(parser)
+        parsed_options, args = config.parse_options(parser, [])

         expected_options = {'verbose': False, 'debug': False,
                             'config_file': None}
