Code review comment for lp:~tr3buchet/nova/multi_nic

Trey Morris (tr3buchet) wrote:

> Getting a number of tests being Skipped. Is this intentional? When will these
> get resolved? I'm hesitant to put broken stuff in trunk without knowing when
> this fix will be coming.

yes it is. I've got pressure to get this merged. I/we aren't responsible for all the different hypervisors/APIs; hence having lieutenants. Multi-nic actually breaks some functionality in them, so of course it will also break some of their associated tests. We could go into each hypervisor/API, make sure it is prepared to work with or without multi-nic before merging, and then remove any shims after, but that could take ages and would be hard to manage. Instead we're basically taking a "push this with bugs and skipped tests" approach. This allows the rest of the network planning/development to start immediately, and the remaining fixes can be done in parallel by the parties responsible for (or requiring use of) the hypervisors/APIs which are broken. I'm not going to say it's ideal, but it works, and at the same time it forces a clear division of labor.

> test_run_with_snapshot freezes completely on me. Once that's fixed up I can
> continue testing.

skipped! As near as I can tell without going all the way down the rabbit hole, there is some underlying rpc stuff not being stubbed correctly, so it just waits and waits for a response. This should not cause API tests to fail. You mentioned earlier that they weren't unit tests; I think this is a problem. We should be able to develop in one area without breaking a bunch of (seemingly) unrelated tests. I think there are also 10 different network_info fake data structures floating around the tests. This kind of thing is bad juju.
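For reference, a minimal sketch of the kind of stub that should keep it from hanging, assuming the stubout pattern the other tests already use (the canned return value is made up):

  from nova import rpc

  def fake_rpc_call(context, topic, msg):
      # return a canned response immediately instead of
      # blocking on the queue waiting for a consumer
      return {'result': 'fake'}

  # in the test's setUp:
  self.stubs.Set(rpc, 'call', fake_rpc_call)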

> Questions:
>
> 1. If we remove a network, what happens to instances currently using it?

Good one! Let's see, what it does now, in pseudocode:

  def remove_network(network):
      if network['project_id']:
          raise   # network still has an associated project
      else:
          delete(network)

This wasn't written to handle flat networks, which don't ever have associated projects. So you'd have instances with virtual_interfaces that have associated fixed_ips. When you delete the network, the row in the networks table would go away (be set to DELETED), but the fixed_ips would still exist and be associated with everything. How do you see this working, best case scenario? I can see doing something like checking whether the network has any allocated fixed_ips and failing if so, as in the sketch below. What about the fixed IPs themselves, should those go away? For the DHCP and vlan managers we'd also have to reconfigure the network host associated with that network (it doesn't really make any difference for the hosts in FlatManager). I don't think the network delete functionality is fully functional yet. Maybe outside the scope of multi-nic?
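To make the question concrete, here's roughly the guard I'm picturing. This is a sketch only: network_has_allocated_ips is a hypothetical db helper that doesn't exist yet, and the other db calls are from memory:

  from nova import db

  def delete_network(context, network_id):
      network = db.network_get(context, network_id)
      if network['project_id']:
          raise Exception(_('network %(network_id)s is still associated '
                            'with a project') % locals())
      # hypothetical guard: refuse to delete while any of this
      # network's fixed ips are still allocated to instances
      if db.network_has_allocated_ips(context, network_id):
          raise Exception(_('network %(network_id)s still has allocated '
                            'fixed ips') % locals())
      db.network_delete_safe(context, network_id)

That still punts on whether the fixed_ips rows should go away and on reconfiguring the network host for the DHCP/vlan managers.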

> Fixes:
>
> * General: _("message") should use named values and not just %s %d. Even if
> there's only 1 value.

Need a bit more context; by this do you mean:
LOG.debug(_("message %(var1)s %(var2)s") % locals())
or
LOG.debug(_("message %s %s"), var1, var2)

> * General: Comments should be properly formed sentences. Start with capital
> letter and have proper punctuation. (I'm looking at you +2440-2469, but many
> other places as well)

These still exist? I went through and fixed all the docstrings per bcwalden's comments above.

> +286 Good question. I think the idea of ProjectID will live outside of Zones
> in Auth. So perhaps this method will need to work at that layer?

yeah, it'll have to. This will be an issue with IPv6 as well: the portion of the IPv6 address that comes from the tenant id will be the same across zones. The problem is that if it exists in every zone, how do we determine where we want to add the network? It's easier with an instance, which only exists in one zone.

> +302 Is this something that will need to span Zones? If so, it'll need to be
> added to novaclient.

We should be able to move floating IPs around across huddles. I'm not sure of all the consequences of that statement; it may not be possible with the way IPs are partitioned in datacenters and whatnot, ex: having an IP float from one datacenter to another. I guess I'm saying I don't know what the buoyancy of these floating IPs is. I think floating IPs should be in novaclient.

> +327 Should say which one it's using vs. "the first"

fixed!

> +1035 No way to update() a virtual interface? Only delete()/add()?

added!

> +2155 "5 attempts ..." seems kinda rigid?

I just drew a line in the sand; it's arbitrary. Better ideas? 10? 25? Go until it finds one?
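One way to soften the line in the sand would be making the cap a flag, something like the sketch below; the flag name and the two helpers are hypothetical:

  from nova import flags

  FLAGS = flags.FLAGS
  flags.DEFINE_integer('create_unique_mac_address_attempts', 5,
                       'number of tries to generate an unused mac address')

  def generate_mac_address():
      for _ in xrange(FLAGS.create_unique_mac_address_attempts):
          mac = _random_mac()        # hypothetical helper
          if not _mac_in_use(mac):   # hypothetical helper
              return mac
      raise Exception(_('no unique mac address available'))

That keeps 5 as the default but lets a deployment crank it up without a code change.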

> +3824, et al ... I don't really like these fakes that have conditional logic
> in them. I'd rather see specific functions for each case/test. Sooner or later
> we'll be debugging the fakes and not the underlying code.

I can't say I'm a fan of it either. I feel like we're already debugging the fakes... Can you suggest a better route?
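If I follow you, the alternative is one dumb fake per case, with each test stubbing in exactly the one it needs, rather than one smart fake that branches. A sketch, with all the names made up:

  def fake_nw_info_single(*args, **kwargs):
      # fixed data for the one-network case, no branching
      return [(network_one, info_one)]

  def fake_nw_info_multi(*args, **kwargs):
      # fixed data for the multi-network case
      return [(network_one, info_one), (network_two, info_two)]

  # each test installs only the fake it needs:
  self.stubs.Set(network_api, 'get_instance_nw_info', fake_nw_info_single)

It's more fakes overall, but each one is trivially inspectable.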

> Comments:
>
> * There are lots of dependencies/expectations of fix-ups by other groups.
> Perhaps these should be tagged differently to make it easier for people to
> find them? TODO(tr3buchet) is a little generic.

anything specific? A lot of this code is going to be refactored very soon after multi-nic lands. There are kind of a lot of them now that I grep for it... I have a feeling you are referring to:
nova/auth/manager.py
638: # TODO(tr3buchet): not sure what you guys plan on doing with this
pertaining to vpn and the vlan manager. I guess that was just a general "hey!" type thing. I can try to clean these up some, but I don't know what to do short of sending emails to a bunch of people.

> * It would really be handy to have a breakdown of the flags and what their
> purpose is. How would I set this up?

Which flags are you referring to? The few flags that are associated with multi-nic should be deprecated; you shouldn't need any specific flags. If you take a look at my first post on this page, I've got a couple of network create examples that show how to create networks for different things. You can also get the docstring output from the command, which shows the args. Basically, once you've got network(s), you wait for them to be picked up by hosts, and once that happens you're all set for multi-nic.

> ... phew. Ok, let's start with that :)

no kidding!
