Merge lp:~justin-fathomdb/nova/bug724623 into lp:~hudson-openstack/nova/trunk

Proposed by justinsb
Status: Superseded
Proposed branch: lp:~justin-fathomdb/nova/bug724623
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~justin-fathomdb/nova/test-openstack-api-servers
Diff against target: 160 lines (+90/-44)
3 files modified
nova/db/sqlalchemy/api.py (+10/-0)
nova/known_bugs.py.THIS (+35/-0)
nova/tests/integrated/test_servers.py (+45/-44)
To merge this branch: bzr merge lp:~justin-fathomdb/nova/bug724623
Reviewer Review Type Date Requested Status
Jay Pipes (community) Disapprove
Nova Core security contacts Pending
Review via email: mp+51227@code.launchpad.net

This proposal has been superseded by a proposal from 2011-03-08.

Description of the change

Fixes bug 724623: server metadata could not actually be created, because it was never written to the DB. Also adds unit tests that exercise this functionality through the OpenStack API.
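For context, the fix teaches instance_create to convert metadata passed in the values dict into InstanceMetadata model objects before creating the Instance. The input looks roughly like this (a sketch inferred from the diff below, not code from the branch):

values = {
    'display_name': 'test-server',
    # Each item becomes an InstanceMetadata row via metadata_ref.update(...)
    'metadata': [{'key': 'key_0', 'value': 'value_0'}],
}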

Revision history for this message
Jay Pipes (jaypipes) wrote :

I'm not a fan of the known_bugs.py file in the prerequisite branch. Would it be possible to just fix the bug and propose that for merging, instead of having that large pre-req branch? Also, you've got print statements in the test case below.

review: Disapprove
Revision history for this message
justinsb (justin-fathomdb) wrote :

Hi Jay - this branch is at the end of a long dependency chain, so I didn't
expect anyone to get to it so fast!!

Certainly I could "back-port" the bugfix, but I don't think it's necessarily
a blocker for anyone so I'm not sure it's worth the effort (?)

However, the 'known_bugs' thing is more interesting.... that's my experiment
in whether this approach works for separating out bugs (as I find them) from
the bug fixes. In this case, I found a random bug while I was writing the
test case: "In the OS API, when creating a server, the metadata is not
returned". I wanted the test case to have the logic, but equally I can't
actually test it until the bug is fixed. I'm not sure it's a high priority
bug at the moment (it's more of a TODO than a bug!) Can you think of a
better approach? Do I dare open this up to the mailing list?

I am guilty of putting print statements in my tests - must remember not to
do that!!


Revision history for this message
Jay Pipes (jaypipes) wrote :

On Thu, Feb 24, 2011 at 10:26 PM, justinsb <email address hidden> wrote:
> Hi Jay - this branch is at the end of a long dependency chain, so I didn't
> expect anyone to get to it so fast!!

You can never tell which one will be reviewed first ;)

> Certainly I could "back-port" the bugfix, but I don't think it's necessarily
> a blocker for anyone so I'm not sure it's worth the effort (?)

Sure, understood.

> However, the 'known_bugs' thing is more interesting.... that's my experiment
> in whether this approach works for separating out bugs (as I find them) from
> the bug fixes.

That's what the LP bugs system is for. I'm really not in favour of
adding a file that tracks known bugs. A similar concept is a file that
disables certain tests on certain builds. It just never works in the
long run, as things always get out of sync. I think it's best to just
use the Launchpad bugs system to track your fixes and make sure that
LP is always as up to date as possible and linked to the branches you
make that fix the bugs. You've been doing a great job of this so far,
so I was a bit surprised to see you change tactics here :)

> In this case, I found a random bug while I was writing the
> test case: "In the OS API, when creating a server, the metadata is not
> returned".  I wanted the test case to have the logic, but equally I can't
> actually test it until the bug is fixed.  I'm not sure it's a high priority
> bug at the moment (it's more of a TODO than a bug!)  Can you think of a
> better approach?  Do I dare open this up to mailing list?

OK, so this happens quite a bit. This is the way I deal with it:

When you run into the bug:
* If it is a bug ONLY in code that you have just added in your branch,
then just fix the bug
* If it is a bug that is reproducible in trunk (i.e. not *just* the
topic branch you're working in):
  (1) File the bug in Launchpad
  (2) Do not fix the bug in your topic branch
  (3) If you want to assign yourself to the bug you just filed, then:
    a. Branch a bugfix branch from your local *trunk* branch (NOT your
topic branch)
    b. Add a test case to your bugfix branch that will trigger the bug
    c. Patch code to fix the bug and pass the test case
    d. Push to LP with --fixes=lp:XXXX where XXXX is the bug number
    e. Propose for merging your bugfix branch into trunk
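In practice, steps (a) through (e) look something like this (a sketch; the bug number and branch names are placeholders, and note that --fixes is actually given at commit time):

bzr branch trunk bug123456
cd bug123456
# ...add a test case that triggers the bug, then fix the code...
bzr commit --fixes lp:123456 -m "Fix bug 123456"
bzr push lp:~you/nova/bug123456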

At this point, return to working on your original topic branch and
continue coding. If you *must* have the fix for the bug you just
reported, then do:

bzr shelve --all
bzr merge ../bugXXX && bzr commit -m "Merge fix for XXX"
bzr unshelve 1

If you don't absolutely have to have the fix for bug XXX in your topic
branch, don't merge it, since there's really no need to...

Hope this helps explain a little bit how I like to work on things that
can have dependencies between them. In git, it's a little easier,
since you can cherry-pick a bugfix revision from another branch
without having to shelve/merge/unshelve, but in Bazaar, the above is
the way to go about this all...
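For example, roughly (the branch name and sha are placeholders):

git checkout topic-branch
# abc1234 is the sha of the bugfix commit on the other branch
git cherry-pick abc1234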

> I am guilty of putting print statements in my tests - must remember not to
> do that!!

I do it all the time, no worries :)

-jay

Revision history for this message
justinsb (justin-fathomdb) wrote :

Hi Jay,

Thanks for outlining the process, but I think it only applies when I'm fixing a bug at the same time / am able to fix it myself. The goal of the known_bugs file is to cover the case where I'm not fixing the bug at the same time. Some examples:

* When I'm doing some of my 'advanced' unit tests, I'd rather use fake_carrot than RabbitMQ, but there was an issue in fake_carrot that turned out to be way beyond my knowledge of the queuing stuff to solve. Vish fixed it quickly, but I had to fall back to using RabbitMQ while the fix was in progress.

* The authentication for OpenStack is currently 'unusual', and not the username and password you might expect. If the tests control for that with a known_bugs flag, then once the behaviour is fixed the flag can be removed and the tests will continue to pass. I can't just fix this bug myself, because the fix has to go through "the process".

* Some of my unit tests bring up a WSGI server, but I haven't yet figured out how to shut it down cleanly, so I have to recycle the same WSGI instance across tests. If someone who knows WSGI fixes this, I'd like to shut the server down cleanly. My unit test code could also give the bug fixer a head start on diagnosing the problem (once the flag is removed, they'll see the failing tests immediately).

I agree that known_bugs is a bit ugly, but I think it has some upsides in terms of keeping this information neatly in one place in the code. I definitely expect that we'd file bugs alongside adding known_bugs flags - this isn't intended to be an end-run around that. I'm very open to better alternatives!
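To make the mechanism concrete, a test would consult the flag roughly like this (a sketch only - the test body and names are illustrative, not code from the branch):

import unittest

# Stand-in for `from nova import known_bugs`; the real list lives in
# nova/known_bugs.py (see the diff below).
KNOWN_BUGS = ['server_create_does_not_return_metadata']


class ServerMetadataTest(unittest.TestCase):
    def test_create_server_returns_metadata(self):
        metadata = {'key_0': 'value_0'}
        # Stand-in for created_server = self.api.post_server(...)
        created_server = {'id': 'fake-id'}

        self.assertTrue(created_server['id'])
        if 'server_create_does_not_return_metadata' not in KNOWN_BUGS:
            # Degraded while the bug is open: whoever fixes the bug
            # removes the flag, this assertion runs again, and the
            # test fails until the fix is correct.
            self.assertEqual(metadata, created_server.get('metadata'))


if __name__ == '__main__':
    unittest.main()

The point is that the rest of the test still runs; only the known-broken assertion is degraded.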

Justin

Revision history for this message
Jay Pipes (jaypipes) wrote :

> Thanks for outlining the process, but I think this only applies when I'm
> fixing a bug at the same time / am able to fix it myself.

I said the opposite, actually:

* If it is a bug that is reproducible in trunk (i.e. not *just* the
topic branch you're working in):
  (1) File the bug in Launchpad
  (2) Do not fix the bug in your topic branch
  (3) If you want to assign yourself to the bug you just filed, then:
    a. Branch a bugfix branch from your local *trunk* branch (NOT your topic branch)
    b. Add a test case to your bugfix branch that will trigger the bug
    c. Patch code to fix the bug and pass the test case
    d. Push to LP with --fixes=lp:XXXX where XXXX is the bug number
    e. Propose for merging your bugfix branch into trunk

> The goal of the
> known_bugs file is to cope when I'm not fixing a bug at the same time. Some
> examples:
>
> * When I'm doing some of my 'advanced' unit tests, I'd rather use fake_carrot
> than RabbitMQ, but there was an issue in it which turned out to be way beyond
> my knowledge of the queuing stuff to solve (Vish fixed it). Though Vish fixed
> it quickly, I had to fall back to using RabbitMQ while the fix was in-
> progress.

Then you should simply file a bug. There is no reason to have the known_bugs.py file IMO. That is what Launchpad and all of its tracking goodness is for...

> * The authentication for OpenStack is currently 'unusual', and not the
> username and password you might expect. By controlling that with a known_bugs
> flag, once this is fixed the flag can be removed and then the tests will
> continue to pass. I can't just fix this bug, because it has to go through
> "the process".

The known_bugs flag doesn't "control" anything. The above scenario just means that your work depends on a fix for a bug. If you don't want that to delay you, you have the option of either fixing the bug if possible or nagging someone to fix it. Disabling a test case that happens to fail when you change code behaviour to depend on a potential bug fix or community-decided change to, say, the API, is not an option. This only leads to long and painful lists of dependent branches that all come crumbling down when the bug is fixed in a way that you didn't foresee. Or at least, that's been my experience. Having a file that disables tests is just not a good future-proof way to "push through" the inconvenience of a bug fix holding up code you want to get merged. I agree it's a pain, but it's a necessary pain and I believe adding the known_bugs.py file makes the source tree vulnerable to bugs via procrastination since the test is disabled and "hidden" from impacting automated testing.

Believe me, Justin, I understand the frustration that comes with having to wait for bug fixes. I do. Based on my experience in other projects, I just feel this particular patch is not a workable solution. But, it's just my opinion. I'll respect the wishes of other contributors if they feel the known_bugs solution is one that should be promoted.

> * Some of my unit tests bring up a WSGI server, but I haven't currently
> figured out how to shut it down cleanly, so I need to recycle the same WSGI
> instance across tests. If som...


Revision history for this message
justinsb (justin-fathomdb) wrote :

Jay, I see and fully respect your view. If I find a bug, I should fix it or get it fixed!

I think though the choice isn't between fixing a bug and not fixing a bug; it's between writing a test and not writing a test, or between writing a hacked test and writing a good test (with the hack controlled by a known_bugs flag). To take the easiest example, authentication is 'confused' right now, so for now I set all the values to the same string, so that it doesn't matter which permutation is used. But that's a terrible unit test! The choice isn't whether or not I fix it (I'd like to fix it...); the choice is whether I write a good unit test or a bad one.

To be clear - I'm not talking about disabling a test outright, I'm talking about degrading a test. So, authentication will step back to using the same value for all three. We'll use real RabbitMQ instead of fake_carrot. We'll share WSGI services instead of cleanly restarting them.

Of course, this might not be evident from the current branch, but that's definitely the intent.

Right now, if I find a bug, my natural inclination is to work around it, not fix it. I certainly am not inclined to write a unit test that fails, because by definition I can't get that merged. I should open a bug, and I should fix it, but I'm not always going to want to break my flow and jump into a completely different system.

Also, when we have dedicated QA people, I'd like them to write unit tests that fail even if they can't fix it.

But at the same time, I see your point of view. It's complicated (but I think it's a complex problem). It does let us avoid fixing things (but that's sort of the point, really). It's not obvious that I'm going to get the test right for the case when the bug is fixed.

Not an easy one... might be time to share this debate with the mailing list.

lp:~justin-fathomdb/nova/bug724623 updated
729. By justinsb

Merged with upstream

730. By justinsb

Removed the bizarre/bazaar known_bugs.py.THIS

Unmerged revisions

730. By justinsb

Removed the bizarre/bazaar known_bugs.py.THIS

729. By justinsb

Merged with upstream

728. By justinsb

Merge with upstream

727. By justinsb

Merge with rate-limiting-suppression

726. By justinsb

Merged with testing branch

725. By justinsb

Remove the bug we're working on

724. By justinsb

Map dictionary to DB objects

723. By justinsb

Add integrated unit tests for server metadata. Also add KNOWN_BUGS list, so that we can test bugs separately from fixing them

722. By justinsb

Created tests for OpenStack compute API. Merged with metadata hotfix.

721. By justinsb

Re-remove the problematic sqlalchemy fix that somehow made it back into this branch

Preview Diff

=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-03-08 03:12:29 +0000
+++ nova/db/sqlalchemy/api.py 2011-03-08 03:12:30 +0000
@@ -676,6 +676,16 @@
     context - request context object
     values - dict containing column values.
     """
+
+    metadata = values.get('metadata')
+    metadata_refs = []
+    if metadata:
+        for metadata_item in metadata:
+            metadata_ref = models.InstanceMetadata()
+            metadata_ref.update(metadata_item)
+            metadata_refs.append(metadata_ref)
+    values['metadata'] = metadata_refs
+
     instance_ref = models.Instance()
     instance_ref.update(values)

=== added file 'nova/known_bugs.py.THIS'
--- nova/known_bugs.py.THIS 1970-01-01 00:00:00 +0000
+++ nova/known_bugs.py.THIS 2011-03-08 03:12:30 +0000
@@ -0,0 +1,35 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 Justin Santa Barbara
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+"""
+This file contains flag variables that track known bugs.
+
+The idea is that when we find a bug, we would like to have a test case for it,
+but we aren't necessarily in a position to fix it immediately. So, we record
+a marker in here: KNOWN_BUGS.append('server_create_does_not_return_metadata')
+Then we can write a test, but not fail the test, because it's a known bug.
+We should also open a bug at the same time.
+
+When the person comes to fix that bug, they remove the marker; the test fails;
+they fix the bug; they commit.
+
+This also makes it easy to track known bugs - they should all be in this file.
+"""
+
+KNOWN_BUGS = []
+
+# In the OS API, when creating a server, the metadata is not returned
+KNOWN_BUGS.append('server_create_does_not_return_metadata')

=== modified file 'nova/tests/integrated/test_servers.py'
--- nova/tests/integrated/test_servers.py 2011-03-08 03:12:29 +0000
+++ nova/tests/integrated/test_servers.py 2011-03-08 03:12:30 +0000
@@ -166,50 +166,51 @@

         return server

-    # TODO(justinsb): Enable this unit test when the metadata bug is fixed
-    # def test_create_server_with_metadata(self):
-    #     """Creates a server with metadata"""
-    #
-    #     # Build the server data gradually, checking errors along the way
-    #     server = self._build_minimal_create_server_request()
-    #
-    #     for metadata_count in range(30):
-    #         metadata = {}
-    #         for i in range(metadata_count):
-    #             metadata['key_%s' % i] = 'value_%s' % i
-    #         server['metadata'] = metadata
-    #
-    #         post = {'server': server}
-    #         created_server = self.api.post_server(post)
-    #         LOG.debug("created_server: %s" % created_server)
-    #         self.assertTrue(created_server['id'])
-    #         created_server_id = created_server['id']
-    #         # Reenable when bug fixed
-    #         # self.assertEqual(metadata, created_server.get('metadata'))
-    #
-    #         # Check it's there
-    #         found_server = self.api.get_server(created_server_id)
-    #         self.assertEqual(created_server_id, found_server['id'])
-    #         self.assertEqual(metadata, found_server.get('metadata'))
-    #
-    #         # The server should also be in the all-servers details list
-    #         servers = self.api.get_servers(detail=True)
-    #         server_map = dict((server['id'], server) for server in servers)
-    #         found_server = server_map.get(created_server_id)
-    #         self.assertTrue(found_server)
-    #         # Details do include metadata
-    #         self.assertEqual(metadata, found_server.get('metadata'))
-    #
-    #         # The server should also be in the all-servers summary list
-    #         servers = self.api.get_servers(detail=False)
-    #         server_map = dict((server['id'], server) for server in servers)
-    #         found_server = server_map.get(created_server_id)
-    #         self.assertTrue(found_server)
-    #         # Summary should not include metadata
-    #         self.assertFalse(found_server.get('metadata'))
-    #
-    #         # Cleanup
-    #         self._delete_server(created_server_id)
+    def test_create_server_with_metadata(self):
+        """Creates a server with metadata"""
+
+        # Build the server data gradually, checking errors along the way
+        server = self._build_minimal_create_server_request()
+
+        for metadata_count in range(0, 30, 5):
+            metadata = {}
+            for i in range(metadata_count):
+                metadata['key_%s' % i] = 'value_%s' % i
+            server['metadata'] = metadata
+
+            post = {'server': server}
+            created_server = self.api.post_server(post)
+            LOG.debug("created_server: %s" % created_server)
+            self.assertTrue(created_server['id'])
+            created_server_id = created_server['id']
+            # Reenable when this is implemented
+            # self.assertEqual(metadata, created_server.get('metadata'))
+
+            # Check it's there
+            found_server = self.api.get_server(created_server_id)
+            self.assertEqual(created_server_id, found_server['id'])
+            self.assertEqual(metadata, found_server.get('metadata'))
+
+            # The server should also be in the all-servers details list
+            servers = self.api.get_servers(detail=True)
+            server_map = dict((server['id'], server) for server in servers)
+            found_server = server_map.get(created_server_id)
+            self.assertTrue(found_server)
+            LOG.debug("found server: %s" % found_server)
+            # Details do include metadata
+            self.assertEqual(metadata, found_server.get('metadata'))
+
+            # The server should also be in the all-servers summary list
+            servers = self.api.get_servers(detail=False)
+            server_map = dict((server['id'], server) for server in servers)
+            found_server = server_map.get(created_server_id)
+            self.assertTrue(found_server)
+            LOG.debug("found server: %s" % found_server)
+            # Summary should not include metadata
+            self.assertFalse(found_server.get('metadata'))
+
+            # Cleanup
+            self._delete_server(created_server_id)


 if __name__ == "__main__":