Merge lp:~vishvananda/nova/network-refactor into lp:~hudson-openstack/nova/trunk
| Status: | Merged |
|---|---|
| Approved by: | Vish Ishaya |
| Approved revision: | 220 |
| Merge reported by: | OpenStack Infra |
| Merged at revision: | not available |
| Proposed branch: | lp:~vishvananda/nova/network-refactor |
| Merge into: | lp:~hudson-openstack/nova/trunk |
| Diff against target: | 1588 lines (+623/-385), 14 files modified |
| To merge this branch: | bzr merge lp:~vishvananda/nova/network-refactor |
| Related bugs: | |
| Related blueprints: | Refactor networking to use its own binary (Undefined) |

Files modified:

- bin/nova-dhcpbridge (+12/-10)
- bin/nova-manage (+11/-12)
- bin/nova-network (+5/-1)
- nova/auth/manager.py (+31/-108)
- nova/compute/service.py (+12/-7)
- nova/endpoint/cloud.py (+99/-77)
- nova/network/exception.py (+1/-1)
- nova/network/model.py (+51/-94)
- nova/network/service.py (+203/-8)
- nova/network/vpn.py (+116/-0)
- nova/rpc.py (+2/-2)
- nova/tests/auth_unittest.py (+0/-14)
- nova/tests/network_unittest.py (+70/-44)
- nova/virt/libvirt_conn.py (+10/-7)
| Reviewer | Review Type | Status |
|---|---|---|
| Jay Pipes | community | Needs Fixing |
| Eric Day | community | Approve |

Review via email: mp+31808@code.launchpad.net
Commit message
Make network its own worker! This separates the network logic from the api server, allowing us to have multiple network controllers. There is a lot of stuff in networking that is ugly and should be modified with the datamodel changes. I've attempted not to mess with those things too much to keep the changeset small (ha!).
Description of the change
Make network its own worker!
This separates the network logic from the api server, allowing us to have multiple network controllers. There is a lot of stuff in networking that is ugly and should be modified with the datamodel changes. I've attempted not to mess with those things too much to keep the changeset small (ha!).
One important change is that each project is associated with a particular network host. Each network host has a pool of ips to assign. This seems very appropriate for the one-vlan-per-project model, but may have to be re-examined with multi-cluster networking in mind. The limitation may not be necessary in the flat networking model, and get_network_topic could perhaps run through the scheduler and go to the least busy network worker.
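The project-to-host association described above can be sketched as follows. This is an illustrative simplification, not the branch's actual code: the in-memory dict stands in for the Redis datastore, and the host name `netnode1` is invented. The `network.<host>` topic format mirrors the `'%s.%s' % (FLAGS.network_topic, host)` construction visible in the diff's `_get_network_topic`.

```python
# Stand-in for the datastore mapping each project to its network host.
_project_hosts = {}

NETWORK_TOPIC = 'network'


def set_network_host(project_id, host):
    """Assign a project to a network host; the first assignment wins."""
    return _project_hosts.setdefault(project_id, host)


def get_network_topic(project_id, default_host='netnode1'):
    """Return the message topic for the project's network worker.

    In the branch this lookup is an rpc.call to set_network_host when
    no host has been assigned yet; here we assign a default inline.
    """
    host = _project_hosts.get(project_id)
    if host is None:
        host = set_network_host(project_id, default_host)
    return '%s.%s' % (NETWORK_TOPIC, host)
```

Once a project is pinned to a host, every network operation for that project is cast to the same worker's topic, which is what makes the per-project ip pool consistent.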
There is one outstanding issue with vpns: a vpn isn't allocated to a project until set_network_host is called, which requires that the network host actually be running. This means that using nova-manage project zip is no longer guaranteed to give you a zip file that includes vpn credentials.
Everything else should be working. I've done quite a bit of testing on my own, rewrote and fixed a few of the existing tests. More tests need to be written, but I wanted to get this up and merged as soon as possible since it touches a lot of files.
Hopefully I have some time in our next sprint to devote to improving test coverage of all parts of the system. We are in need of it.
Please look over the code and try it out. I haven't done extensive testing with flat networking, but I believe it will be very helpful for most deployments. It currently relies on injecting network data and just bridges into whatever bridge is specified in the flag.
It should be easy to extend the capability to include a network that uses dhcp without vlans, or configures switches, sets up ebtables, etc.
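The extension point for this is visible in bin/nova-network, which dispatches on a flag: `service.type_to_class(FLAGS.network_type).create()`. A minimal sketch of that dispatch, with illustrative (empty) class bodies, assuming only the flat and vlan types from this branch:

```python
class FlatNetworkService(object):
    """Bridges instances into a preconfigured bridge (no dhcp, no vlans)."""


class VlanNetworkService(object):
    """One vlan, bridge, and dhcp range per project."""


def type_to_class(network_type):
    """Map a network_type flag value to its service class."""
    mapping = {
        'flat': FlatNetworkService,
        'vlan': VlanNetworkService,
    }
    try:
        return mapping[network_type]
    except KeyError:
        raise ValueError('unknown network type: %s' % network_type)
```

Adding a dhcp-without-vlans mode, switch configuration, or ebtables setup would then just be a new service class plus a mapping entry, with no changes to the binary itself.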
Reviews and comments welcome.
Jay Pipes (jaypipes) wrote:
Why not /network/model.py? This would match /compute/model.py.
Eric Day (eday) wrote:
There is already a network/model.py, but it may be that the classes in networkdata.py should move into model.py too.
Vish Ishaya (vishvananda) wrote:
This is the natural place for it to go, but unfortunately that brings up the perennial circular import problem. AuthManager needs access to the NetworkData class to handle requests for the project's vpn ip and port when it is generating credentials. Other classes in network/model.py need access to auth_manager to list projects, etc., so auth/manager.py imports users => network/model.py imports manager fails.

It could theoretically go in an auth/model.py class, since it does relate to users specifically, but I feel like it has more to do with the network than the user. Any suggestions?
Jay Pipes (jaypipes) wrote:
Yeah, so I've run into similar issues in prototyping new datastore code. Is the real problem the fact that the Auth objects like Project aren't derived from BasicModel like all the other objects in the system are? If we could have things that are *models* (i.e. plain old data records with some ability to "associate" with each other) all derive from a single class (BasicModel) this problem would go away. This really is a problem with having the models contain business logic (the Rails way...)
-jay
Jay Pipes (jaypipes) wrote:
Per conversation on IRC, I understand the reasons why this is necessary, and approve this merge as-is. Moving from ActiveRecord to a DataMapper pattern is a much bigger discussion :)
Vish Ishaya (vishvananda) wrote:
I went ahead and updated this to use vpn.py for the moment. Hopefully this
will be cleaned up in the datamodel refactor.
Vish
Eric Day (eday) wrote:
I would also probably rename NetworkData class to VPN or something for consistency, but I don't want to hold this up from getting in any longer. It sounds like we'll be refactoring these data models soon enough and can get it then. :)
Jay Pipes (jaypipes) wrote:
So...a good chat with Michael on IRC, and the circular import issue has to do with using from nova.auth import manager instead of import nova.auth.
The Vpn class can go back into /network/model.py by changing the import statement. Tested this locally and it works.
-jay
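The distinction Jay describes can be shown with a small runnable sketch. The package and module names here (`pkgdemo`, `auth`, `net`) are invented for illustration, not the actual nova layout: a whole-module import (`import pkgdemo.net`) only binds the module object and defers attribute lookup to call time, so a circular import completes; a `from pkgdemo.auth import <name>` inside the cycle would fail, because `auth` is still mid-import when that line runs.

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway two-module package with a circular import.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkgdemo'))
modules = {
    '__init__.py': '',
    # auth.py imports the net module as a whole; pkgdemo.net is only
    # dereferenced later, at call time, so the cycle is harmless.
    'auth.py': textwrap.dedent('''\
        import pkgdemo.net

        def vpn_for(project):
            return pkgdemo.net.lookup(project)
    '''),
    # net.py imports auth back, completing the cycle. A from-import of
    # a name defined in auth.py would fail at this point, since auth
    # has not finished executing yet.
    'net.py': textwrap.dedent('''\
        import pkgdemo.auth

        def lookup(project):
            return 'vpn-for-%s' % project
    '''),
}
for name, body in modules.items():
    with open(os.path.join(root, 'pkgdemo', name), 'w') as f:
        f.write(body)

sys.path.insert(0, root)
import pkgdemo.auth  # succeeds despite the circular dependency
```

This is why switching the import statement, as described above, lets the Vpn class live in /network/model.py without tripping the cycle.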
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~vishvananda/nova/network-refactor into lp:nova failed. Below is the output from the failed tests.

(The individual test names were truncated in the archived page; the affected suites were AccessTestCase, ApiEc2TestCase, AuthTestCase, CloudTestCase, and ComputeConnec…)
- 219. By Vish Ishaya: merged trunk
- 220. By Vish Ishaya: fix search/replace error
Preview Diff
1 | === modified file 'bin/nova-dhcpbridge' |
2 | --- bin/nova-dhcpbridge 2010-07-28 23:11:02 +0000 |
3 | +++ bin/nova-dhcpbridge 2010-08-06 21:33:45 +0000 |
4 | @@ -35,32 +35,34 @@ |
5 | from nova import flags |
6 | from nova import rpc |
7 | from nova import utils |
8 | -from nova.compute import linux_net |
9 | -from nova.compute import network |
10 | - |
11 | +from nova.network import linux_net |
12 | +from nova.network import model |
13 | +from nova.network import service |
14 | |
15 | FLAGS = flags.FLAGS |
16 | |
17 | |
18 | def add_lease(mac, ip, hostname, interface): |
19 | if FLAGS.fake_rabbit: |
20 | - network.lease_ip(ip) |
21 | + service.VlanNetworkService().lease_ip(ip) |
22 | else: |
23 | - rpc.cast(FLAGS.cloud_topic, {"method": "lease_ip", |
24 | - "args" : {"address": ip}}) |
25 | + rpc.cast("%s.%s" (FLAGS.network_topic, FLAGS.node_name), |
26 | + {"method": "lease_ip", |
27 | + "args" : {"fixed_ip": ip}}) |
28 | |
29 | def old_lease(mac, ip, hostname, interface): |
30 | logging.debug("Adopted old lease or got a change of mac/hostname") |
31 | |
32 | def del_lease(mac, ip, hostname, interface): |
33 | if FLAGS.fake_rabbit: |
34 | - network.release_ip(ip) |
35 | + service.VlanNetworkService().release_ip(ip) |
36 | else: |
37 | - rpc.cast(FLAGS.cloud_topic, {"method": "release_ip", |
38 | - "args" : {"address": ip}}) |
39 | + rpc.cast("%s.%s" (FLAGS.network_topic, FLAGS.node_name), |
40 | + {"method": "release_ip", |
41 | + "args" : {"fixed_ip": ip}}) |
42 | |
43 | def init_leases(interface): |
44 | - net = network.get_network_by_interface(interface) |
45 | + net = model.get_network_by_interface(interface) |
46 | res = "" |
47 | for host_name in net.hosts: |
48 | res += "%s\n" % linux_net.hostDHCP(net, host_name, net.hosts[host_name]) |
49 | |
50 | === modified file 'bin/nova-manage' |
51 | --- bin/nova-manage 2010-07-19 18:19:26 +0000 |
52 | +++ bin/nova-manage 2010-08-06 21:33:45 +0000 |
53 | @@ -29,16 +29,12 @@ |
54 | from nova import utils |
55 | from nova.auth import manager |
56 | from nova.compute import model |
57 | -from nova.compute import network |
58 | from nova.cloudpipe import pipelib |
59 | from nova.endpoint import cloud |
60 | |
61 | |
62 | FLAGS = flags.FLAGS |
63 | |
64 | -class NetworkCommands(object): |
65 | - def restart(self): |
66 | - network.restart_nets() |
67 | |
68 | class VpnCommands(object): |
69 | def __init__(self): |
70 | @@ -170,6 +166,13 @@ |
71 | arguments: name""" |
72 | self.manager.delete_project(name) |
73 | |
74 | + def environment(self, project_id, user_id, filename='novarc'): |
75 | + """exports environment variables to an sourcable file |
76 | + arguments: project_id user_id [filename='novarc]""" |
77 | + rc = self.manager.get_environment_rc(project_id, user_id) |
78 | + with open(filename, 'w') as f: |
79 | + f.write(rc) |
80 | + |
81 | def list(self): |
82 | """lists all projects |
83 | arguments: <none>""" |
84 | @@ -182,14 +185,11 @@ |
85 | self.manager.remove_from_project(user, project) |
86 | |
87 | def zip(self, project_id, user_id, filename='nova.zip'): |
88 | - """exports credentials for user to a zip file |
89 | + """exports credentials for project to a zip file |
90 | arguments: project_id user_id [filename='nova.zip]""" |
91 | - project = self.manager.get_project(project_id) |
92 | - if project: |
93 | - with open(filename, 'w') as f: |
94 | - f.write(project.get_credentials(user_id)) |
95 | - else: |
96 | - print "Project %s doesn't exist" % project |
97 | + zip = self.manager.get_credentials(project_id, user_id) |
98 | + with open(filename, 'w') as f: |
99 | + f.write(zip) |
100 | |
101 | |
102 | def usage(script_name): |
103 | @@ -197,7 +197,6 @@ |
104 | |
105 | |
106 | categories = [ |
107 | - ('network', NetworkCommands), |
108 | ('user', UserCommands), |
109 | ('project', ProjectCommands), |
110 | ('role', RoleCommands), |
111 | |
112 | === modified file 'bin/nova-network' |
113 | --- bin/nova-network 2010-07-27 00:14:28 +0000 |
114 | +++ bin/nova-network 2010-08-06 21:33:45 +0000 |
115 | @@ -21,12 +21,16 @@ |
116 | Twistd daemon for the nova network nodes. |
117 | """ |
118 | |
119 | +from nova import flags |
120 | from nova import twistd |
121 | + |
122 | from nova.network import service |
123 | |
124 | +FLAGS = flags.FLAGS |
125 | + |
126 | |
127 | if __name__ == '__main__': |
128 | twistd.serve(__file__) |
129 | |
130 | if __name__ == '__builtin__': |
131 | - application = service.NetworkService.create() |
132 | + application = service.type_to_class(FLAGS.network_type).create() |
133 | |
134 | === modified file 'nova/auth/manager.py' |
135 | --- nova/auth/manager.py 2010-08-04 06:17:35 +0000 |
136 | +++ nova/auth/manager.py 2010-08-06 21:33:45 +0000 |
137 | @@ -36,6 +36,7 @@ |
138 | from nova import utils |
139 | from nova.auth import ldapdriver # for flags |
140 | from nova.auth import signer |
141 | +from nova.network import vpn |
142 | |
143 | FLAGS = flags.FLAGS |
144 | |
145 | @@ -50,13 +51,14 @@ |
146 | 'Roles that apply to all projects') |
147 | |
148 | |
149 | -flags.DEFINE_bool('use_vpn', True, 'Support per-project vpns') |
150 | flags.DEFINE_string('credentials_template', |
151 | utils.abspath('auth/novarc.template'), |
152 | 'Template for creating users rc file') |
153 | flags.DEFINE_string('vpn_client_template', |
154 | utils.abspath('cloudpipe/client.ovpn.template'), |
155 | 'Template for creating users vpn file') |
156 | +flags.DEFINE_string('credential_vpn_file', 'nova-vpn.conf', |
157 | + 'Filename of certificate in credentials zip') |
158 | flags.DEFINE_string('credential_key_file', 'pk.pem', |
159 | 'Filename of private key in credentials zip') |
160 | flags.DEFINE_string('credential_cert_file', 'cert.pem', |
161 | @@ -64,19 +66,11 @@ |
162 | flags.DEFINE_string('credential_rc_file', 'novarc', |
163 | 'Filename of rc in credentials zip') |
164 | |
165 | -flags.DEFINE_integer('vpn_start_port', 1000, |
166 | - 'Start port for the cloudpipe VPN servers') |
167 | -flags.DEFINE_integer('vpn_end_port', 2000, |
168 | - 'End port for the cloudpipe VPN servers') |
169 | - |
170 | flags.DEFINE_string('credential_cert_subject', |
171 | '/C=US/ST=California/L=MountainView/O=AnsoLabs/' |
172 | 'OU=NovaDev/CN=%s-%s', |
173 | 'Subject for certificate for users') |
174 | |
175 | -flags.DEFINE_string('vpn_ip', '127.0.0.1', |
176 | - 'Public IP for the cloudpipe VPN servers') |
177 | - |
178 | flags.DEFINE_string('auth_driver', 'nova.auth.ldapdriver.FakeLdapDriver', |
179 | 'Driver that auth manager uses') |
180 | |
181 | @@ -228,86 +222,6 @@ |
182 | self.member_ids) |
183 | |
184 | |
185 | -class NoMorePorts(exception.Error): |
186 | - pass |
187 | - |
188 | - |
189 | -class Vpn(datastore.BasicModel): |
190 | - """Manages vpn ips and ports for projects""" |
191 | - def __init__(self, project_id): |
192 | - self.project_id = project_id |
193 | - super(Vpn, self).__init__() |
194 | - |
195 | - @property |
196 | - def identifier(self): |
197 | - """Identifier used for key in redis""" |
198 | - return self.project_id |
199 | - |
200 | - @classmethod |
201 | - def create(cls, project_id): |
202 | - """Creates a vpn for project |
203 | - |
204 | - This method finds a free ip and port and stores the associated |
205 | - values in the datastore. |
206 | - """ |
207 | - # TODO(vish): get list of vpn ips from redis |
208 | - port = cls.find_free_port_for_ip(FLAGS.vpn_ip) |
209 | - vpn = cls(project_id) |
210 | - # save ip for project |
211 | - vpn['project'] = project_id |
212 | - vpn['ip'] = FLAGS.vpn_ip |
213 | - vpn['port'] = port |
214 | - vpn.save() |
215 | - return vpn |
216 | - |
217 | - @classmethod |
218 | - def find_free_port_for_ip(cls, ip): |
219 | - """Finds a free port for a given ip from the redis set""" |
220 | - # TODO(vish): these redis commands should be generalized and |
221 | - # placed into a base class. Conceptually, it is |
222 | - # similar to an association, but we are just |
223 | - # storing a set of values instead of keys that |
224 | - # should be turned into objects. |
225 | - redis = datastore.Redis.instance() |
226 | - key = 'ip:%s:ports' % ip |
227 | - # TODO(vish): these ports should be allocated through an admin |
228 | - # command instead of a flag |
229 | - if (not redis.exists(key) and |
230 | - not redis.exists(cls._redis_association_name('ip', ip))): |
231 | - for i in range(FLAGS.vpn_start_port, FLAGS.vpn_end_port + 1): |
232 | - redis.sadd(key, i) |
233 | - |
234 | - port = redis.spop(key) |
235 | - if not port: |
236 | - raise NoMorePorts() |
237 | - return port |
238 | - |
239 | - @classmethod |
240 | - def num_ports_for_ip(cls, ip): |
241 | - """Calculates the number of free ports for a given ip""" |
242 | - return datastore.Redis.instance().scard('ip:%s:ports' % ip) |
243 | - |
244 | - @property |
245 | - def ip(self): |
246 | - """The ip assigned to the project""" |
247 | - return self['ip'] |
248 | - |
249 | - @property |
250 | - def port(self): |
251 | - """The port assigned to the project""" |
252 | - return int(self['port']) |
253 | - |
254 | - def save(self): |
255 | - """Saves the association to the given ip""" |
256 | - self.associate_with('ip', self.ip) |
257 | - super(Vpn, self).save() |
258 | - |
259 | - def destroy(self): |
260 | - """Cleans up datastore and adds port back to pool""" |
261 | - self.unassociate_with('ip', self.ip) |
262 | - datastore.Redis.instance().sadd('ip:%s:ports' % self.ip, self.port) |
263 | - super(Vpn, self).destroy() |
264 | - |
265 | |
266 | class AuthManager(object): |
267 | """Manager Singleton for dealing with Users, Projects, and Keypairs |
268 | @@ -585,8 +499,6 @@ |
269 | description, |
270 | member_users) |
271 | if project_dict: |
272 | - if FLAGS.use_vpn: |
273 | - Vpn.create(project_dict['id']) |
274 | return Project(**project_dict) |
275 | |
276 | def add_to_project(self, user, project): |
277 | @@ -623,10 +535,10 @@ |
278 | @return: A tuple containing (ip, port) or None, None if vpn has |
279 | not been allocated for user. |
280 | """ |
281 | - vpn = Vpn.lookup(Project.safe_id(project)) |
282 | - if not vpn: |
283 | - return None, None |
284 | - return (vpn.ip, vpn.port) |
285 | + network_data = vpn.NetworkData.lookup(Project.safe_id(project)) |
286 | + if not network_data: |
287 | + raise exception.NotFound('project network data has not been set') |
288 | + return (network_data.ip, network_data.port) |
289 | |
290 | def delete_project(self, project): |
291 | """Deletes a project""" |
292 | @@ -757,25 +669,27 @@ |
293 | rc = self.__generate_rc(user.access, user.secret, pid) |
294 | private_key, signed_cert = self._generate_x509_cert(user.id, pid) |
295 | |
296 | - vpn = Vpn.lookup(pid) |
297 | - if not vpn: |
298 | - raise exception.Error("No vpn data allocated for project %s" % |
299 | - project.name) |
300 | - configfile = open(FLAGS.vpn_client_template,"r") |
301 | - s = string.Template(configfile.read()) |
302 | - configfile.close() |
303 | - config = s.substitute(keyfile=FLAGS.credential_key_file, |
304 | - certfile=FLAGS.credential_cert_file, |
305 | - ip=vpn.ip, |
306 | - port=vpn.port) |
307 | - |
308 | tmpdir = tempfile.mkdtemp() |
309 | zf = os.path.join(tmpdir, "temp.zip") |
310 | zippy = zipfile.ZipFile(zf, 'w') |
311 | zippy.writestr(FLAGS.credential_rc_file, rc) |
312 | zippy.writestr(FLAGS.credential_key_file, private_key) |
313 | zippy.writestr(FLAGS.credential_cert_file, signed_cert) |
314 | - zippy.writestr("nebula-client.conf", config) |
315 | + |
316 | + network_data = vpn.NetworkData.lookup(pid) |
317 | + if network_data: |
318 | + configfile = open(FLAGS.vpn_client_template,"r") |
319 | + s = string.Template(configfile.read()) |
320 | + configfile.close() |
321 | + config = s.substitute(keyfile=FLAGS.credential_key_file, |
322 | + certfile=FLAGS.credential_cert_file, |
323 | + ip=network_data.ip, |
324 | + port=network_data.port) |
325 | + zippy.writestr(FLAGS.credential_vpn_file, config) |
326 | + else: |
327 | + logging.warn("No vpn data for project %s" % |
328 | + pid) |
329 | + |
330 | zippy.writestr(FLAGS.ca_file, crypto.fetch_ca(user.id)) |
331 | zippy.close() |
332 | with open(zf, 'rb') as f: |
333 | @@ -784,6 +698,15 @@ |
334 | shutil.rmtree(tmpdir) |
335 | return buffer |
336 | |
337 | + def get_environment_rc(self, user, project=None): |
338 | + """Get credential zip for user in project""" |
339 | + if not isinstance(user, User): |
340 | + user = self.get_user(user) |
341 | + if project is None: |
342 | + project = user.id |
343 | + pid = Project.safe_id(project) |
344 | + return self.__generate_rc(user.access, user.secret, pid) |
345 | + |
346 | def __generate_rc(self, access, secret, pid): |
347 | """Generate rc file for user""" |
348 | rc = open(FLAGS.credentials_template).read() |
349 | |
350 | === modified file 'nova/compute/service.py' |
351 | --- nova/compute/service.py 2010-07-28 23:11:02 +0000 |
352 | +++ nova/compute/service.py 2010-08-06 21:33:45 +0000 |
353 | @@ -39,9 +39,9 @@ |
354 | from nova import utils |
355 | from nova.compute import disk |
356 | from nova.compute import model |
357 | -from nova.compute import network |
358 | from nova.compute import power_state |
359 | from nova.compute.instance_types import INSTANCE_TYPES |
360 | +from nova.network import service as network_service |
361 | from nova.objectstore import image # for image_path flag |
362 | from nova.virt import connection as virt_connection |
363 | from nova.volume import service as volume_service |
364 | @@ -117,12 +117,17 @@ |
365 | """ launch a new instance with specified options """ |
366 | logging.debug("Starting instance %s..." % (instance_id)) |
367 | inst = self.instdir.get(instance_id) |
368 | - if not FLAGS.simple_network: |
369 | - # TODO: Get the real security group of launch in here |
370 | - security_group = "default" |
371 | - net = network.BridgedNetwork.get_network_for_project(inst['user_id'], |
372 | - inst['project_id'], |
373 | - security_group).express() |
374 | + # TODO: Get the real security group of launch in here |
375 | + security_group = "default" |
376 | + # NOTE(vish): passing network type allows us to express the |
377 | + # network without making a call to network to find |
378 | + # out which type of network to setup |
379 | + network_service.setup_compute_network( |
380 | + inst.get('network_type', 'vlan'), |
381 | + inst['user_id'], |
382 | + inst['project_id'], |
383 | + security_group) |
384 | + |
385 | inst['node_name'] = FLAGS.node_name |
386 | inst.save() |
387 | # TODO(vish) check to make sure the availability zone matches |
388 | |
389 | === modified file 'nova/endpoint/cloud.py' |
390 | --- nova/endpoint/cloud.py 2010-07-31 01:32:12 +0000 |
391 | +++ nova/endpoint/cloud.py 2010-08-06 21:33:45 +0000 |
392 | @@ -36,11 +36,11 @@ |
393 | from nova.auth import rbac |
394 | from nova.auth import manager |
395 | from nova.compute import model |
396 | -from nova.compute import network |
397 | from nova.compute.instance_types import INSTANCE_TYPES |
398 | -from nova.compute import service as compute_service |
399 | from nova.endpoint import images |
400 | -from nova.volume import service as volume_service |
401 | +from nova.network import service as network_service |
402 | +from nova.network import model as network_model |
403 | +from nova.volume import service |
404 | |
405 | |
406 | FLAGS = flags.FLAGS |
407 | @@ -64,7 +64,6 @@ |
408 | """ |
409 | def __init__(self): |
410 | self.instdir = model.InstanceDirectory() |
411 | - self.network = network.PublicNetworkController() |
412 | self.setup() |
413 | |
414 | @property |
415 | @@ -76,7 +75,7 @@ |
416 | def volumes(self): |
417 | """ returns a list of all volumes """ |
418 | for volume_id in datastore.Redis.instance().smembers("volumes"): |
419 | - volume = volume_service.get_volume(volume_id) |
420 | + volume = service.get_volume(volume_id) |
421 | yield volume |
422 | |
423 | def __str__(self): |
424 | @@ -222,7 +221,7 @@ |
425 | callback=_complete) |
426 | return d |
427 | |
428 | - except users.UserError, e: |
429 | + except manager.UserError as e: |
430 | raise |
431 | |
432 | @rbac.allow('all') |
433 | @@ -308,7 +307,7 @@ |
434 | |
435 | def _get_address(self, context, public_ip): |
436 | # FIXME(vish) this should move into network.py |
437 | - address = self.network.get_host(public_ip) |
438 | + address = network_model.PublicAddress.lookup(public_ip) |
439 | if address and (context.user.is_admin() or address['project_id'] == context.project.id): |
440 | return address |
441 | raise exception.NotFound("Address at ip %s not found" % public_ip) |
442 | @@ -330,7 +329,7 @@ |
443 | raise exception.NotFound('Instance %s could not be found' % instance_id) |
444 | |
445 | def _get_volume(self, context, volume_id): |
446 | - volume = volume_service.get_volume(volume_id) |
447 | + volume = service.get_volume(volume_id) |
448 | if context.user.is_admin() or volume['project_id'] == context.project.id: |
449 | return volume |
450 | raise exception.NotFound('Volume %s could not be found' % volume_id) |
451 | @@ -417,7 +416,7 @@ |
452 | 'code': instance.get('state', 0), |
453 | 'name': instance.get('state_description', 'pending') |
454 | } |
455 | - i['public_dns_name'] = self.network.get_public_ip_for_instance( |
456 | + i['public_dns_name'] = network_model.get_public_ip_for_instance( |
457 | i['instance_id']) |
458 | i['private_dns_name'] = instance.get('private_dns_name', None) |
459 | if not i['public_dns_name']: |
460 | @@ -452,10 +451,10 @@ |
461 | |
462 | def format_addresses(self, context): |
463 | addresses = [] |
464 | - for address in self.network.host_objs: |
465 | + for address in network_model.PublicAddress.all(): |
466 | # TODO(vish): implement a by_project iterator for addresses |
467 | if (context.user.is_admin() or |
468 | - address['project_id'] == self.project.id): |
469 | + address['project_id'] == context.project.id): |
470 | address_rv = { |
471 | 'public_ip': address['address'], |
472 | 'instance_id' : address.get('instance_id', 'free') |
473 | @@ -470,41 +469,63 @@ |
474 | return {'addressesSet': addresses} |
475 | |
476 | @rbac.allow('netadmin') |
477 | + @defer.inlineCallbacks |
478 | def allocate_address(self, context, **kwargs): |
479 | - address = self.network.allocate_ip( |
480 | - context.user.id, context.project.id, 'public') |
481 | - return defer.succeed({'addressSet': [{'publicIp' : address}]}) |
482 | + network_topic = yield self._get_network_topic(context) |
483 | + alloc_result = yield rpc.call(network_topic, |
484 | + {"method": "allocate_elastic_ip", |
485 | + "args": {"user_id": context.user.id, |
486 | + "project_id": context.project.id}}) |
487 | + public_ip = alloc_result['result'] |
488 | + defer.returnValue({'addressSet': [{'publicIp' : public_ip}]}) |
489 | |
490 | @rbac.allow('netadmin') |
491 | + @defer.inlineCallbacks |
492 | def release_address(self, context, public_ip, **kwargs): |
493 | - self.network.deallocate_ip(public_ip) |
494 | - return defer.succeed({'releaseResponse': ["Address released."]}) |
495 | + # NOTE(vish): Should we make sure this works? |
496 | + network_topic = yield self._get_network_topic(context) |
497 | + rpc.cast(network_topic, |
498 | + {"method": "deallocate_elastic_ip", |
499 | + "args": {"elastic_ip": public_ip}}) |
500 | + defer.returnValue({'releaseResponse': ["Address released."]}) |
501 | |
502 | @rbac.allow('netadmin') |
503 | - def associate_address(self, context, instance_id, **kwargs): |
504 | + @defer.inlineCallbacks |
505 | + def associate_address(self, context, instance_id, public_ip, **kwargs): |
506 | instance = self._get_instance(context, instance_id) |
507 | - self.network.associate_address( |
508 | - kwargs['public_ip'], |
509 | - instance['private_dns_name'], |
510 | - instance_id) |
511 | - return defer.succeed({'associateResponse': ["Address associated."]}) |
512 | + address = self._get_address(context, public_ip) |
513 | + network_topic = yield self._get_network_topic(context) |
514 | + rpc.cast(network_topic, |
515 | + {"method": "associate_elastic_ip", |
516 | + "args": {"elastic_ip": address['address'], |
517 | + "fixed_ip": instance['private_dns_name'], |
518 | + "instance_id": instance['instance_id']}}) |
519 | + defer.returnValue({'associateResponse': ["Address associated."]}) |
520 | |
521 | @rbac.allow('netadmin') |
522 | + @defer.inlineCallbacks |
523 | def disassociate_address(self, context, public_ip, **kwargs): |
524 | address = self._get_address(context, public_ip) |
525 | - self.network.disassociate_address(public_ip) |
526 | - # TODO - Strip the IP from the instance |
527 | - return defer.succeed({'disassociateResponse': ["Address disassociated."]}) |
528 | - |
529 | - def release_ip(self, context, private_ip, **kwargs): |
530 | - self.network.release_ip(private_ip) |
531 | - return defer.succeed({'releaseResponse': ["Address released."]}) |
532 | - |
533 | - def lease_ip(self, context, private_ip, **kwargs): |
534 | - self.network.lease_ip(private_ip) |
535 | - return defer.succeed({'leaseResponse': ["Address leased."]}) |
536 | + network_topic = yield self._get_network_topic(context) |
537 | + rpc.cast(network_topic, |
538 | + {"method": "disassociate_elastic_ip", |
539 | + "args": {"elastic_ip": address['address']}}) |
540 | + defer.returnValue({'disassociateResponse': ["Address disassociated."]}) |
541 | + |
542 | + @defer.inlineCallbacks |
543 | + def _get_network_topic(self, context): |
544 | + """Retrieves the network host for a project""" |
545 | + host = network_service.get_host_for_project(context.project.id) |
546 | + if not host: |
547 | + result = yield rpc.call(FLAGS.network_topic, |
548 | + {"method": "set_network_host", |
549 | + "args": {"user_id": context.user.id, |
550 | + "project_id": context.project.id}}) |
551 | + host = result['result'] |
552 | + defer.returnValue('%s.%s' %(FLAGS.network_topic, host)) |
553 | |
554 | @rbac.allow('projectmanager', 'sysadmin') |
555 | + @defer.inlineCallbacks |
556 | def run_instances(self, context, **kwargs): |
557 | # make sure user can access the image |
558 | # vpn image is private so it doesn't show up on lists |
559 | @@ -536,15 +557,20 @@ |
560 | raise exception.ApiError('Key Pair %s not found' % |
561 | kwargs['key_name']) |
562 | key_data = key_pair.public_key |
563 | + network_topic = yield self._get_network_topic(context) |
564 | # TODO: Get the real security group of launch in here |
565 | security_group = "default" |
566 | - if FLAGS.simple_network: |
567 | - bridge_name = FLAGS.simple_network_bridge |
568 | - else: |
569 | - net = network.BridgedNetwork.get_network_for_project( |
570 | - context.user.id, context.project.id, security_group) |
571 | - bridge_name = net['bridge_name'] |
572 | for num in range(int(kwargs['max_count'])): |
573 | + vpn = False |
574 | + if image_id == FLAGS.vpn_image_id: |
575 | + vpn = True |
576 | + allocate_result = yield rpc.call(network_topic, |
577 | + {"method": "allocate_fixed_ip", |
578 | + "args": {"user_id": context.user.id, |
579 | + "project_id": context.project.id, |
580 | + "security_group": security_group, |
581 | + "vpn": vpn}}) |
582 | + allocate_data = allocate_result['result'] |
583 | inst = self.instdir.new() |
584 | inst['image_id'] = image_id |
585 | inst['kernel_id'] = kernel_id |
586 | @@ -557,24 +583,11 @@ |
587 | inst['key_name'] = kwargs.get('key_name', '') |
588 | inst['user_id'] = context.user.id |
589 | inst['project_id'] = context.project.id |
590 | - inst['mac_address'] = utils.generate_mac() |
591 | inst['ami_launch_index'] = num |
592 | - inst['bridge_name'] = bridge_name |
593 | - if FLAGS.simple_network: |
594 | - address = network.allocate_simple_ip() |
595 | - else: |
596 | - if inst['image_id'] == FLAGS.vpn_image_id: |
597 | - address = network.allocate_vpn_ip( |
598 | - inst['user_id'], |
599 | - inst['project_id'], |
600 | - mac=inst['mac_address']) |
601 | - else: |
602 | - address = network.allocate_ip( |
603 | - inst['user_id'], |
604 | - inst['project_id'], |
605 | - mac=inst['mac_address']) |
606 | - inst['private_dns_name'] = str(address) |
607 | - # TODO: allocate expresses on the router node |
608 | + inst['security_group'] = security_group |
609 | + for (key, value) in allocate_data.iteritems(): |
610 | + inst[key] = value |
611 | + |
612 | inst.save() |
613 | rpc.cast(FLAGS.compute_topic, |
614 | {"method": "run_instance", |
615 | @@ -582,40 +595,49 @@ |
616 | logging.debug("Casting to node for %s's instance with IP of %s" % |
617 | (context.user.name, inst['private_dns_name'])) |
618 | # TODO: Make Network figure out the network name from ip. |
619 | - return defer.succeed(self._format_instances( |
620 | - context, reservation_id)) |
621 | + defer.returnValue(self._format_instances(context, reservation_id)) |
622 | |
623 | @rbac.allow('projectmanager', 'sysadmin') |
624 | + @defer.inlineCallbacks |
625 | def terminate_instances(self, context, instance_id, **kwargs): |
626 | logging.debug("Going to start terminating instances") |
627 | + network_topic = yield self._get_network_topic(context) |
628 | for i in instance_id: |
629 | logging.debug("Going to try and terminate %s" % i) |
630 | try: |
631 | instance = self._get_instance(context, i) |
632 | except exception.NotFound: |
633 | - logging.warning("Instance %s was not found during terminate" % i) |
634 | + logging.warning("Instance %s was not found during terminate" |
635 | + % i) |
636 | continue |
637 | - try: |
638 | - self.network.disassociate_address( |
639 | - instance.get('public_dns_name', 'bork')) |
640 | - except: |
641 | - pass |
642 | - if instance.get('private_dns_name', None): |
643 | - logging.debug("Deallocating address %s" % instance.get('private_dns_name', None)) |
644 | - if FLAGS.simple_network: |
645 | - network.deallocate_simple_ip(instance.get('private_dns_name', None)) |
646 | - else: |
647 | - try: |
648 | - self.network.deallocate_ip(instance.get('private_dns_name', None)) |
649 | - except Exception, _err: |
650 | - pass |
651 | - if instance.get('node_name', 'unassigned') != 'unassigned': #It's also internal default |
652 | + elastic_ip = network_model.get_public_ip_for_instance(i) |
653 | + if elastic_ip: |
654 | + logging.debug("Disassociating address %s" % elastic_ip) |
655 | + # NOTE(vish): Right now we don't really care if the ip is |
656 | + # disassociated. We may need to worry about |
657 | + # checking this later. Perhaps in the scheduler? |
658 | + rpc.cast(network_topic, |
659 | + {"method": "disassociate_elastic_ip", |
660 | + "args": {"elastic_ip": elastic_ip}}) |
661 | + |
662 | + fixed_ip = instance.get('private_dns_name', None) |
663 | + if fixed_ip: |
664 | + logging.debug("Deallocating address %s" % fixed_ip) |
665 | + # NOTE(vish): Right now we don't really care if the ip is |
666 | + # actually removed. We may need to worry about |
667 | + # checking this later. Perhaps in the scheduler? |
668 | + rpc.cast(network_topic, |
669 | + {"method": "deallocate_fixed_ip", |
670 | + "args": {"fixed_ip": fixed_ip}}) |
671 | + |
672 | + if instance.get('node_name', 'unassigned') != 'unassigned': |
673 | + # NOTE(joshua?): It's also internal default |
674 | rpc.cast('%s.%s' % (FLAGS.compute_topic, instance['node_name']), |
675 | - {"method": "terminate_instance", |
676 | - "args" : {"instance_id": i}}) |
677 | + {"method": "terminate_instance", |
678 | + "args": {"instance_id": i}}) |
679 | else: |
680 | instance.destroy() |
681 | - return defer.succeed(True) |
682 | + defer.returnValue(True) |
683 | |
684 | @rbac.allow('projectmanager', 'sysadmin') |
685 | def reboot_instances(self, context, instance_id, **kwargs): |
686 | |
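The cloud.py hunk above replaces the inline `network.allocate_*` calls with a single `rpc.call` to the network worker, whose result dict is merged key-by-key onto the instance record. A minimal pure-Python sketch of that merge pattern (`fake_rpc_call` and the literal values are hypothetical stand-ins, not the real `nova.rpc` API):

```python
# Sketch of the allocation-merge pattern: the API server no longer computes
# network settings itself; it receives a dict from the network worker and
# copies every key onto the instance record.

def fake_rpc_call(topic, message):
    # Stand-in for nova.rpc.call, which in this branch returns
    # {"result": <allocate_data>} over the message queue.
    assert message["method"] == "allocate_fixed_ip"
    return {"result": {"mac_address": "02:16:3e:00:00:01",
                       "private_dns_name": "10.0.0.5",
                       "bridge_name": "br100"}}

def run_instance(network_topic, user_id, project_id):
    allocate_result = fake_rpc_call(network_topic,
                                    {"method": "allocate_fixed_ip",
                                     "args": {"user_id": user_id,
                                              "project_id": project_id}})
    inst = {"user_id": user_id, "project_id": project_id}
    # Merge whatever the network worker decided into the instance record,
    # keeping the API server agnostic about network-type specifics.
    for key, value in allocate_result["result"].items():
        inst[key] = value
    return inst

inst = run_instance("network", "fake_user", "fake_project")
```

The point of the indirection is that `FlatNetworkService` and `VlanNetworkService` can return different key sets without the API server caring.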
687 | === renamed file 'nova/compute/exception.py' => 'nova/network/exception.py' |
688 | --- nova/compute/exception.py 2010-07-15 23:13:48 +0000 |
689 | +++ nova/network/exception.py 2010-08-06 21:33:45 +0000 |
690 | @@ -17,7 +17,7 @@ |
691 | # under the License. |
692 | |
693 | """ |
694 | -Exceptions for Compute Node errors, mostly network addressing. |
695 | +Exceptions for network errors. |
696 | """ |
697 | |
698 | from nova.exception import Error |
699 | |
700 | === renamed file 'nova/compute/linux_net.py' => 'nova/network/linux_net.py' |
701 | === renamed file 'nova/compute/network.py' => 'nova/network/model.py' |
702 | --- nova/compute/network.py 2010-07-27 21:16:49 +0000 |
703 | +++ nova/network/model.py 2010-08-06 21:33:45 +0000 |
704 | @@ -17,7 +17,7 @@ |
705 | # under the License. |
706 | |
707 | """ |
708 | -Classes for network control, including VLANs, DHCP, and IP allocation. |
709 | +Model Classes for network control, including VLANs, DHCP, and IP allocation. |
710 | """ |
711 | |
712 | import IPy |
713 | @@ -26,12 +26,12 @@ |
714 | import time |
715 | |
716 | from nova import datastore |
717 | -from nova import exception |
718 | +from nova import exception as nova_exception |
719 | from nova import flags |
720 | from nova import utils |
721 | from nova.auth import manager |
722 | -from nova.compute import exception as compute_exception |
723 | -from nova.compute import linux_net |
724 | +from nova.network import exception |
725 | +from nova.network import linux_net |
726 | |
727 | |
728 | FLAGS = flags.FLAGS |
729 | @@ -53,26 +53,6 @@ |
730 | flags.DEFINE_integer('cloudpipe_start_port', 12000, |
731 | 'Starting port for mapped CloudPipe external ports') |
732 | |
733 | -flags.DEFINE_boolean('simple_network', False, |
734 | - 'Use simple networking instead of vlans') |
735 | -flags.DEFINE_string('simple_network_bridge', 'br100', |
736 | - 'Bridge for simple network instances') |
737 | -flags.DEFINE_list('simple_network_ips', ['192.168.0.2'], |
738 | - 'Available ips for simple network') |
739 | -flags.DEFINE_string('simple_network_template', |
740 | - utils.abspath('compute/interfaces.template'), |
741 | - 'Template file for simple network') |
742 | -flags.DEFINE_string('simple_network_netmask', '255.255.255.0', |
743 | - 'Netmask for simple network') |
744 | -flags.DEFINE_string('simple_network_network', '192.168.0.0', |
745 | - 'Network for simple network') |
746 | -flags.DEFINE_string('simple_network_gateway', '192.168.0.1', |
747 | - 'Broadcast for simple network') |
748 | -flags.DEFINE_string('simple_network_broadcast', '192.168.0.255', |
749 | - 'Broadcast for simple network') |
750 | -flags.DEFINE_string('simple_network_dns', '8.8.4.4', |
751 | - 'Dns for simple network') |
752 | - |
753 | logging.getLogger().setLevel(logging.DEBUG) |
754 | |
755 | |
756 | @@ -156,7 +136,6 @@ |
757 | |
758 | # CLEANUP: |
759 | # TODO(ja): Save the IPs at the top of each subnet for cloudpipe vpn clients |
760 | -# TODO(ja): use singleton for usermanager instead of self.manager in vlanpool et al |
761 | # TODO(ja): does vlanpool "keeper" need to know the min/max - shouldn't FLAGS always win? |
762 | # TODO(joshua): Save the IPs at the top of each subnet for cloudpipe vpn clients |
763 | |
764 | @@ -241,7 +220,7 @@ |
765 | for idx in range(self.num_static_ips, len(self.network)-(1 + FLAGS.cnt_vpn_clients)): |
766 | address = str(self.network[idx]) |
767 | if not address in self.hosts.keys(): |
768 | - yield str(address) |
769 | + yield address |
770 | |
771 | @property |
772 | def num_static_ips(self): |
773 | @@ -253,7 +232,7 @@ |
774 | self._add_host(user_id, project_id, address, mac) |
775 | self.express(address=address) |
776 | return address |
777 | - raise compute_exception.NoMoreAddresses("Project %s with network %s" % |
778 | + raise exception.NoMoreAddresses("Project %s with network %s" % |
779 | (project_id, str(self.network))) |
780 | |
781 | def lease_ip(self, ip_str): |
782 | @@ -261,7 +240,7 @@ |
783 | |
784 | def release_ip(self, ip_str): |
785 | if not ip_str in self.assigned: |
786 | - raise compute_exception.AddressNotAllocated() |
787 | + raise exception.AddressNotAllocated() |
788 | self.deexpress(address=ip_str) |
789 | self._rem_host(ip_str) |
790 | |
791 | @@ -349,14 +328,14 @@ |
792 | logging.debug("Not launching dnsmasq: no hosts.") |
793 | self.express_cloudpipe() |
794 | |
795 | - def allocate_vpn_ip(self, mac): |
796 | + def allocate_vpn_ip(self, user_id, project_id, mac): |
797 | address = str(self.network[2]) |
798 | - self._add_host(self['user_id'], self['project_id'], address, mac) |
799 | + self._add_host(user_id, project_id, address, mac) |
800 | self.express(address=address) |
801 | return address |
802 | |
803 | def express_cloudpipe(self): |
804 | - private_ip = self.network[2] |
805 | + private_ip = str(self.network[2]) |
806 | linux_net.confirm_rule("FORWARD -d %s -p udp --dport 1194 -j ACCEPT" |
807 | % (private_ip, )) |
808 | linux_net.confirm_rule("PREROUTING -t nat -d %s -p udp --dport %s -j DNAT --to %s:1194" |
809 | @@ -394,6 +373,7 @@ |
810 | addr.save() |
811 | return addr |
812 | |
813 | + |
814 | DEFAULT_PORTS = [("tcp",80), ("tcp",22), ("udp",1194), ("tcp",443)] |
815 | class PublicNetworkController(BaseNetwork): |
816 | override_type = 'network' |
817 | @@ -420,12 +400,6 @@ |
818 | for address in self.assigned: |
819 | yield PublicAddress(address) |
820 | |
821 | - def get_public_ip_for_instance(self, instance_id): |
822 | - # FIXME: this should be a lookup - iteration won't scale |
823 | - for address_record in self.host_objs: |
824 | - if address_record.get('instance_id', 'available') == instance_id: |
825 | - return address_record['address'] |
826 | - |
827 | def get_host(self, host): |
828 | if host in self.assigned: |
829 | return PublicAddress(host) |
830 | @@ -439,16 +413,20 @@ |
831 | PublicAddress(host).destroy() |
832 | datastore.Redis.instance().hdel(self._hosts_key, host) |
833 | |
834 | + def deallocate_ip(self, ip_str): |
835 | + # NOTE(vish): cleanup is now done on release by the parent class |
836 | + self.release_ip(ip_str) |
837 | + |
838 | def associate_address(self, public_ip, private_ip, instance_id): |
839 | if not public_ip in self.assigned: |
840 | - raise compute_exception.AddressNotAllocated() |
841 | + raise exception.AddressNotAllocated() |
842 | # TODO(joshua): Keep an index going both ways |
843 | for addr in self.host_objs: |
844 | if addr.get('private_ip', None) == private_ip: |
845 | - raise compute_exception.AddressAlreadyAssociated() |
846 | + raise exception.AddressAlreadyAssociated() |
847 | addr = self.get_host(public_ip) |
848 | if addr.get('private_ip', 'available') != 'available': |
849 | - raise compute_exception.AddressAlreadyAssociated() |
850 | + raise exception.AddressAlreadyAssociated() |
851 | addr['private_ip'] = private_ip |
852 | addr['instance_id'] = instance_id |
853 | addr.save() |
854 | @@ -456,10 +434,10 @@ |
855 | |
856 | def disassociate_address(self, public_ip): |
857 | if not public_ip in self.assigned: |
858 | - raise compute_exception.AddressNotAllocated() |
859 | + raise exception.AddressNotAllocated() |
860 | addr = self.get_host(public_ip) |
861 | if addr.get('private_ip', 'available') == 'available': |
862 | - raise compute_exception.AddressNotAssociated() |
863 | + raise exception.AddressNotAssociated() |
864 | self.deexpress(address=public_ip) |
865 | addr['private_ip'] = 'available' |
866 | addr['instance_id'] = 'available' |
867 | @@ -535,63 +513,42 @@ |
868 | return vlan |
869 | else: |
870 | return Vlan.create(project_id, vnum) |
871 | - raise compute_exception.AddressNotAllocated("Out of VLANs") |
872 | - |
873 | -def get_network_by_interface(iface, security_group='default'): |
874 | - vlan = iface.rpartition("br")[2] |
875 | - return get_project_network(Vlan.dict_by_vlan().get(vlan), security_group) |
876 | + raise exception.AddressNotAllocated("Out of VLANs") |
877 | + |
878 | +def get_project_network(project_id, security_group='default'): |
879 | + """ get a project's private network, allocating one if needed """ |
880 | + project = manager.AuthManager().get_project(project_id) |
881 | + if not project: |
882 | + raise nova_exception.NotFound("Project %s doesn't exist." % project_id) |
883 | + manager_id = project.project_manager_id |
884 | + return DHCPNetwork.get_network_for_project(manager_id, |
885 | + project.id, |
886 | + security_group) |
887 | + |
888 | |
889 | def get_network_by_address(address): |
890 | + # TODO(vish): This is completely the wrong way to do this, but |
891 | + # I'm getting the network binary working before I |
892 | + # tackle doing this the right way. |
893 | logging.debug("Get Network By Address: %s" % address) |
894 | for project in manager.AuthManager().get_projects(): |
895 | net = get_project_network(project.id) |
896 | if address in net.assigned: |
897 | logging.debug("Found %s in %s" % (address, project.id)) |
898 | return net |
899 | - raise compute_exception.AddressNotAllocated() |
900 | - |
901 | -def allocate_simple_ip(): |
902 | - redis = datastore.Redis.instance() |
903 | - if not redis.exists('ips') and not len(redis.keys('instances:*')): |
904 | - for address in FLAGS.simple_network_ips: |
905 | - redis.sadd('ips', address) |
906 | - address = redis.spop('ips') |
907 | - if not address: |
908 | - raise exception.NoMoreAddresses() |
909 | - return address |
910 | - |
911 | -def deallocate_simple_ip(address): |
912 | - datastore.Redis.instance().sadd('ips', address) |
913 | - |
914 | - |
915 | -def allocate_vpn_ip(user_id, project_id, mac): |
916 | - return get_project_network(project_id).allocate_vpn_ip(mac) |
917 | - |
918 | -def allocate_ip(user_id, project_id, mac): |
919 | - return get_project_network(project_id).allocate_ip(user_id, project_id, mac) |
920 | - |
921 | -def deallocate_ip(address): |
922 | - return get_network_by_address(address).deallocate_ip(address) |
923 | - |
924 | -def release_ip(address): |
925 | - return get_network_by_address(address).release_ip(address) |
926 | - |
927 | -def lease_ip(address): |
928 | - return get_network_by_address(address).lease_ip(address) |
929 | - |
930 | -def get_project_network(project_id, security_group='default'): |
931 | - """ get a project's private network, allocating one if needed """ |
932 | - # TODO(todd): It looks goofy to get a project from a UserManager. |
933 | - # Refactor to still use the LDAP backend, but not User specific. |
934 | - project = manager.AuthManager().get_project(project_id) |
935 | - if not project: |
936 | - raise exception.Error("Project %s doesn't exist, uhoh." % |
937 | - project_id) |
938 | - return DHCPNetwork.get_network_for_project(project.project_manager_id, |
939 | - project.id, security_group) |
940 | - |
941 | - |
942 | -def restart_nets(): |
943 | - """ Ensure the network for each user is enabled""" |
944 | - for project in manager.AuthManager().get_projects(): |
945 | - get_project_network(project.id).express() |
946 | + raise exception.AddressNotAllocated() |
947 | + |
948 | + |
949 | +def get_network_by_interface(iface, security_group='default'): |
950 | + vlan = iface.rpartition("br")[2] |
951 | + project_id = Vlan.dict_by_vlan().get(vlan) |
952 | + return get_project_network(project_id, security_group) |
953 | + |
954 | + |
955 | + |
956 | +def get_public_ip_for_instance(instance_id): |
957 | + # FIXME: this should be a lookup - iteration won't scale |
958 | + for address_record in PublicAddress.all(): |
959 | + if address_record.get('instance_id', 'available') == instance_id: |
960 | + return address_record['address'] |
961 | + |
962 | |
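The relocated `get_public_ip_for_instance` still carries its FIXME about linear iteration over every `PublicAddress` record. One later fix would be a two-way index maintained at associate/disassociate time; a minimal dict-based sketch of the idea (class and method names are hypothetical, not part of this branch):

```python
class ElasticIpIndex:
    """Toy two-way index: public ip <-> instance id, O(1) in both directions."""

    def __init__(self):
        self._ip_to_instance = {}
        self._instance_to_ip = {}

    def associate(self, public_ip, instance_id):
        self._ip_to_instance[public_ip] = instance_id
        self._instance_to_ip[instance_id] = public_ip

    def disassociate(self, public_ip):
        # Remove both directions so neither map goes stale.
        instance_id = self._ip_to_instance.pop(public_ip, None)
        if instance_id is not None:
            self._instance_to_ip.pop(instance_id, None)

    def public_ip_for_instance(self, instance_id):
        # Constant-time replacement for the linear scan in the FIXME.
        return self._instance_to_ip.get(instance_id)

index = ElasticIpIndex()
index.associate("1.2.3.4", "i-00000001")
```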
963 | === modified file 'nova/network/service.py' |
964 | --- nova/network/service.py 2010-07-27 00:14:28 +0000 |
965 | +++ nova/network/service.py 2010-08-06 21:33:45 +0000 |
966 | @@ -20,16 +20,211 @@ |
967 | Network Nodes are responsible for allocating ips and setting up network |
968 | """ |
969 | |
970 | -import logging |
971 | - |
972 | +from nova import datastore |
973 | from nova import flags |
974 | from nova import service |
975 | - |
976 | +from nova import utils |
977 | +from nova.auth import manager |
978 | +from nova.exception import NotFound |
979 | +from nova.network import exception |
980 | +from nova.network import model |
981 | +from nova.network import vpn |
982 | |
983 | FLAGS = flags.FLAGS |
984 | |
985 | -class NetworkService(service.Service): |
986 | - """Allocates ips and sets up networks""" |
987 | - |
988 | - def __init__(self): |
989 | - logging.debug("Network node working") |
990 | +flags.DEFINE_string('network_type', |
991 | + 'flat', |
992 | + 'Service Class for Networking') |
993 | +flags.DEFINE_string('flat_network_bridge', 'br100', |
994 | + 'Bridge for simple network instances') |
995 | +flags.DEFINE_list('flat_network_ips', |
996 | + ['192.168.0.2','192.168.0.3','192.168.0.4'], |
997 | + 'Available ips for simple network') |
998 | +flags.DEFINE_string('flat_network_network', '192.168.0.0', |
999 | + 'Network for simple network') |
1000 | +flags.DEFINE_string('flat_network_netmask', '255.255.255.0', |
1001 | + 'Netmask for simple network') |
1002 | +flags.DEFINE_string('flat_network_gateway', '192.168.0.1', |
1003 | +flags.DEFINE_string('flat_network_gateway', '192.168.0.1', |
1004 | +flags.DEFINE_string('flat_network_broadcast', '192.168.0.255', |
1004a | + 'Gateway for simple network') |
1005 | + 'Broadcast for simple network') |
1006 | +flags.DEFINE_string('flat_network_dns', '8.8.4.4', |
1007 | + 'Dns for simple network') |
1008 | + |
1009 | +def type_to_class(network_type): |
1010 | + if network_type == 'flat': |
1011 | + return FlatNetworkService |
1012 | + elif network_type == 'vlan': |
1013 | + return VlanNetworkService |
1014 | + raise NotFound("Couldn't find %s network type" % network_type) |
1015 | + |
1016 | + |
1017 | +def setup_compute_network(network_type, user_id, project_id, security_group): |
1018 | + srv = type_to_class(network_type) |
1019 | + srv.setup_compute_network(user_id, project_id, security_group) |
1020 | + |
1021 | + |
1022 | +def get_host_for_project(project_id): |
1023 | + redis = datastore.Redis.instance() |
1024 | + return redis.get(_host_key(project_id)) |
1025 | + |
1026 | + |
1027 | +def _host_key(project_id): |
1028 | + return "network_host:%s" % project_id |
1029 | + |
1030 | + |
1031 | +class BaseNetworkService(service.Service): |
1032 | + """Implements common network service functionality |
1033 | + |
1034 | + This class must be subclassed. |
1035 | + """ |
1036 | + def __init__(self, *args, **kwargs): |
1037 | + self.network = model.PublicNetworkController() |
1038 | + |
1039 | + def set_network_host(self, user_id, project_id, *args, **kwargs): |
1040 | + """Safely sets the host of the projects network""" |
1041 | + redis = datastore.Redis.instance() |
1042 | + key = _host_key(project_id) |
1043 | + if redis.setnx(key, FLAGS.node_name): |
1044 | + self._on_set_network_host(user_id, project_id, |
1045 | + security_group='default', |
1046 | + *args, **kwargs) |
1047 | + return FLAGS.node_name |
1048 | + else: |
1049 | + return redis.get(key) |
1050 | + |
1051 | + def allocate_fixed_ip(self, user_id, project_id, |
1052 | + security_group='default', |
1053 | + *args, **kwargs): |
1054 | + """Subclass implements getting fixed ip from the pool""" |
1055 | + raise NotImplementedError() |
1056 | + |
1057 | + def deallocate_fixed_ip(self, fixed_ip, *args, **kwargs): |
1058 | + """Subclass implements return of ip to the pool""" |
1059 | + raise NotImplementedError() |
1060 | + |
1061 | + def _on_set_network_host(self, user_id, project_id, |
1062 | + *args, **kwargs): |
1063 | + """Called when this host becomes the host for a project""" |
1064 | + pass |
1065 | + |
1066 | + @classmethod |
1067 | + def setup_compute_network(self, user_id, project_id, security_group, |
1068 | + *args, **kwargs): |
1069 | + """Sets up matching network for compute hosts""" |
1070 | + raise NotImplementedError() |
1071 | + |
1072 | + def allocate_elastic_ip(self, user_id, project_id): |
1073 | + """Gets an elastic ip from the pool""" |
1074 | + # NOTE(vish): Replicating earlier decision to use 'public' as |
1075 | + # mac address name, although this should probably |
1076 | + # be done inside of the PublicNetworkController |
1077 | + return self.network.allocate_ip(user_id, project_id, 'public') |
1078 | + |
1079 | + def associate_elastic_ip(self, elastic_ip, fixed_ip, instance_id): |
1080 | + """Associates an elastic ip to a fixed ip""" |
1081 | + self.network.associate_address(elastic_ip, fixed_ip, instance_id) |
1082 | + |
1083 | + def disassociate_elastic_ip(self, elastic_ip): |
1084 | + """Disassociates an elastic ip""" |
1085 | + self.network.disassociate_address(elastic_ip) |
1086 | + |
1087 | + def deallocate_elastic_ip(self, elastic_ip): |
1088 | + """Returns an elastic ip to the pool""" |
1089 | + self.network.deallocate_ip(elastic_ip) |
1090 | + |
1091 | + |
1092 | +class FlatNetworkService(BaseNetworkService): |
1093 | + """Basic network where no vlans are used""" |
1094 | + |
1095 | + @classmethod |
1096 | + def setup_compute_network(self, user_id, project_id, security_group, |
1097 | + *args, **kwargs): |
1098 | + """Network is created manually""" |
1099 | + pass |
1100 | + |
1101 | + def allocate_fixed_ip(self, user_id, project_id, |
1102 | + security_group='default', |
1103 | + *args, **kwargs): |
1104 | + """Gets a fixed ip from the pool |
1105 | + |
1106 | + Flat network just grabs the next available ip from the pool |
1107 | + """ |
1108 | + # NOTE(vish): Some automation could be done here. For example, |
1109 | + # creating the flat_network_bridge and setting up |
1110 | + # a gateway. This is all done manually atm |
1111 | + redis = datastore.Redis.instance() |
1112 | + if not redis.exists('ips') and not len(redis.keys('instances:*')): |
1113 | + for fixed_ip in FLAGS.flat_network_ips: |
1114 | + redis.sadd('ips', fixed_ip) |
1115 | + fixed_ip = redis.spop('ips') |
1116 | + if not fixed_ip: |
1117 | + raise exception.NoMoreAddresses() |
1118 | + return {'inject_network': True, |
1119 | + 'network_type': FLAGS.network_type, |
1120 | + 'mac_address': utils.generate_mac(), |
1121 | + 'private_dns_name': str(fixed_ip), |
1122 | + 'bridge_name': FLAGS.flat_network_bridge, |
1123 | + 'network_network': FLAGS.flat_network_network, |
1124 | + 'network_netmask': FLAGS.flat_network_netmask, |
1125 | + 'network_gateway': FLAGS.flat_network_gateway, |
1126 | + 'network_broadcast': FLAGS.flat_network_broadcast, |
1127 | + 'network_dns': FLAGS.flat_network_dns} |
1128 | + |
1129 | + def deallocate_fixed_ip(self, fixed_ip, *args, **kwargs): |
1130 | + """Returns an ip to the pool""" |
1131 | + datastore.Redis.instance().sadd('ips', fixed_ip) |
1132 | + |
1133 | +class VlanNetworkService(BaseNetworkService): |
1134 | + """Vlan network with dhcp""" |
1135 | + # NOTE(vish): A lot of the interactions with network/model.py can be |
1136 | + # simplified and improved. Also there it may be useful |
1137 | + # to support vlans separately from dhcp, instead of having |
1138 | + # both of them together in this class. |
1139 | + def allocate_fixed_ip(self, user_id, project_id, |
1140 | + security_group='default', |
1141 | + vpn=False, *args, **kwargs): |
1142 | + """Gets a fixed ip from the pool """ |
1143 | + mac = utils.generate_mac() |
1144 | + net = model.get_project_network(project_id) |
1145 | + if vpn: |
1146 | + fixed_ip = net.allocate_vpn_ip(user_id, project_id, mac) |
1147 | + else: |
1148 | + fixed_ip = net.allocate_ip(user_id, project_id, mac) |
1149 | + return {'network_type': FLAGS.network_type, |
1150 | + 'bridge_name': net['bridge_name'], |
1151 | + 'mac_address': mac, |
1152 | + 'private_dns_name' : fixed_ip} |
1153 | + |
1154 | + def deallocate_fixed_ip(self, fixed_ip, |
1155 | + *args, **kwargs): |
1156 | + """Returns an ip to the pool""" |
1157 | + return model.get_network_by_address(fixed_ip).deallocate_ip(fixed_ip) |
1158 | + |
1159 | + def lease_ip(self, address): |
1160 | + return model.get_network_by_address(address).lease_ip(address) |
1161 | + |
1162 | + def release_ip(self, address): |
1163 | + return model.get_network_by_address(address).release_ip(address) |
1164 | + |
1165 | + def restart_nets(self): |
1166 | + """Ensure the network for each user is enabled""" |
1167 | + for project in manager.AuthManager().get_projects(): |
1168 | + model.get_project_network(project.id).express() |
1169 | + |
1170 | + def _on_set_network_host(self, user_id, project_id, |
1171 | + *args, **kwargs): |
1172 | + """Called when this host becomes the host for a project""" |
1173 | + vpn.NetworkData.create(project_id) |
1174 | + |
1175 | + @classmethod |
1176 | + def setup_compute_network(self, user_id, project_id, security_group, |
1177 | + *args, **kwargs): |
1178 | + """Sets up matching network for compute hosts""" |
1179 | + # NOTE(vish): Use BridgedNetwork instead of DHCPNetwork because |
1180 | + # we don't want to run dnsmasq on the client machines |
1181 | + net = model.BridgedNetwork.get_network_for_project( |
1182 | + user_id, |
1183 | + project_id, |
1184 | + security_group) |
1185 | + net.express() |
1186 | |
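`set_network_host` above relies on Redis SETNX to make the per-project host election atomic: the first worker to claim a project wins, and every later caller reads back the winner. A pure-Python simulation of that first-writer-wins semantic (the dict stands in for Redis; this is a sketch, not the real `nova.datastore` API):

```python
class FakeRedis:
    """Just enough of the Redis API (setnx/get) to show the election."""

    def __init__(self):
        self._data = {}

    def setnx(self, key, value):
        # Like Redis SETNX: stores only if absent, returns True for the
        # first writer and False for everyone after.
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)

def set_network_host(redis, project_id, node_name):
    key = "network_host:%s" % project_id
    if redis.setnx(key, node_name):
        return node_name          # this worker now owns the project
    return redis.get(key)         # somebody else already claimed it

redis = FakeRedis()
first = set_network_host(redis, "proj1", "host-a")
second = set_network_host(redis, "proj1", "host-b")
```

Because SETNX is atomic on the Redis server, two network workers racing on the same project cannot both believe they own it, which is what makes the "each project is associated with a particular network host" model safe without extra locking.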
1187 | === added file 'nova/network/vpn.py' |
1188 | --- nova/network/vpn.py 1970-01-01 00:00:00 +0000 |
1189 | +++ nova/network/vpn.py 2010-08-06 21:33:45 +0000 |
1190 | @@ -0,0 +1,116 @@ |
1191 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1192 | + |
1193 | +# Copyright 2010 United States Government as represented by the |
1194 | +# Administrator of the National Aeronautics and Space Administration. |
1195 | +# All Rights Reserved. |
1196 | +# |
1197 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1198 | +# not use this file except in compliance with the License. You may obtain |
1199 | +# a copy of the License at |
1200 | +# |
1201 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1202 | +# |
1203 | +# Unless required by applicable law or agreed to in writing, software |
1204 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1205 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1206 | +# License for the specific language governing permissions and limitations |
1207 | +# under the License. |
1208 | + |
1209 | +"""Network Data for projects""" |
1210 | + |
1211 | +from nova import datastore |
1212 | +from nova import exception |
1213 | +from nova import flags |
1214 | +from nova import utils |
1215 | + |
1216 | +FLAGS = flags.FLAGS |
1217 | + |
1218 | + |
1219 | +flags.DEFINE_string('vpn_ip', utils.get_my_ip(), |
1220 | + 'Public IP for the cloudpipe VPN servers') |
1221 | +flags.DEFINE_integer('vpn_start_port', 1000, |
1222 | + 'Start port for the cloudpipe VPN servers') |
1223 | +flags.DEFINE_integer('vpn_end_port', 2000, |
1224 | + 'End port for the cloudpipe VPN servers') |
1225 | + |
1226 | +class NoMorePorts(exception.Error): |
1227 | + pass |
1228 | + |
1229 | + |
1230 | +class NetworkData(datastore.BasicModel): |
1231 | + """Manages network host, and vpn ip and port for projects""" |
1232 | + def __init__(self, project_id): |
1233 | + self.project_id = project_id |
1234 | + super(NetworkData, self).__init__() |
1235 | + |
1236 | + @property |
1237 | + def identifier(self): |
1238 | + """Identifier used for key in redis""" |
1239 | + return self.project_id |
1240 | + |
1241 | + @classmethod |
1242 | + def create(cls, project_id): |
1243 | + """Creates a vpn for project |
1244 | + |
1245 | + This method finds a free ip and port and stores the associated |
1246 | + values in the datastore. |
1247 | + """ |
1248 | + # TODO(vish): will we ever need multiple ips per host? |
1249 | + port = cls.find_free_port_for_ip(FLAGS.vpn_ip) |
1250 | + network_data = cls(project_id) |
1251 | + # save ip for project |
1252 | + network_data['host'] = FLAGS.node_name |
1253 | + network_data['project'] = project_id |
1254 | + network_data['ip'] = FLAGS.vpn_ip |
1255 | + network_data['port'] = port |
1256 | + network_data.save() |
1257 | + return network_data |
1258 | + |
1259 | + @classmethod |
1260 | + def find_free_port_for_ip(cls, ip): |
1261 | + """Finds a free port for a given ip from the redis set""" |
1262 | + # TODO(vish): these redis commands should be generalized and |
1263 | + # placed into a base class. Conceptually, it is |
1264 | + # similar to an association, but we are just |
1265 | + # storing a set of values instead of keys that |
1266 | + # should be turned into objects. |
1267 | + redis = datastore.Redis.instance() |
1268 | + key = 'ip:%s:ports' % ip |
1269 | + # TODO(vish): these ports should be allocated through an admin |
1270 | + # command instead of a flag |
1271 | + if (not redis.exists(key) and |
1272 | + not redis.exists(cls._redis_association_name('ip', ip))): |
1273 | + for i in range(FLAGS.vpn_start_port, FLAGS.vpn_end_port + 1): |
1274 | + redis.sadd(key, i) |
1275 | + |
1276 | + port = redis.spop(key) |
1277 | + if not port: |
1278 | + raise NoMorePorts() |
1279 | + return port |
1280 | + |
1281 | + @classmethod |
1282 | + def num_ports_for_ip(cls, ip): |
1283 | + """Calculates the number of free ports for a given ip""" |
1284 | + return datastore.Redis.instance().scard('ip:%s:ports' % ip) |
1285 | + |
1286 | + @property |
1287 | + def ip(self): |
1288 | + """The ip assigned to the project""" |
1289 | + return self['ip'] |
1290 | + |
1291 | + @property |
1292 | + def port(self): |
1293 | + """The port assigned to the project""" |
1294 | + return int(self['port']) |
1295 | + |
1296 | + def save(self): |
1297 | + """Saves the association to the given ip""" |
1298 | + self.associate_with('ip', self.ip) |
1299 | + super(NetworkData, self).save() |
1300 | + |
1301 | + def destroy(self): |
1302 | + """Cleans up datastore and adds port back to pool""" |
1303 | + self.unassociate_with('ip', self.ip) |
1304 | + datastore.Redis.instance().sadd('ip:%s:ports' % self.ip, self.port) |
1305 | + super(NetworkData, self).destroy() |
1306 | + |
1307 | |
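`find_free_port_for_ip` in the new vpn.py seeds a Redis set with the whole vpn port range on first use and then SPOPs members until the pool is empty, with `destroy` adding ports back. The same pool discipline in plain Python (a set replaces the Redis key; `NoMorePorts` mirrors the exception above):

```python
class NoMorePorts(Exception):
    pass

class PortPool:
    """Seed-once, pop-until-empty pool, mirroring the Redis SADD/SPOP usage."""

    def __init__(self, start, end):
        self._start, self._end = start, end
        self._seeded = False
        self._ports = set()

    def allocate(self):
        if not self._seeded:
            # Lazily seed the full range, like the first-call SADD loop.
            self._ports.update(range(self._start, self._end + 1))
            self._seeded = True
        if not self._ports:
            raise NoMorePorts()
        return self._ports.pop()

    def release(self, port):
        # NetworkData.destroy adds the port back to the pool the same way.
        self._ports.add(port)

pool = PortPool(1000, 1002)
ports = [pool.allocate() for _ in range(3)]
```

One wrinkle the real code has to handle that this sketch glosses over: Redis SPOP returns the port as a string, which is why the `port` property above casts with `int()`.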
1308 | === modified file 'nova/rpc.py' |
1309 | --- nova/rpc.py 2010-07-28 23:11:02 +0000 |
1310 | +++ nova/rpc.py 2010-08-06 21:33:45 +0000 |
1311 | @@ -238,12 +238,12 @@ |
1312 | exchange=msg_id, |
1313 | auto_delete=True, |
1314 | exchange_type="direct", |
1315 | - routing_key=msg_id, |
1316 | - durable=False) |
1317 | + routing_key=msg_id) |
1318 | consumer.register_callback(generic_response) |
1319 | |
1320 | publisher = messaging.Publisher(connection=Connection.instance(), |
1321 | exchange=FLAGS.control_exchange, |
1322 | + durable=False, |
1323 | exchange_type="topic", |
1324 | routing_key=topic) |
1325 | publisher.send(message) |
1326 | |
1327 | === modified file 'nova/tests/auth_unittest.py' |
1328 | --- nova/tests/auth_unittest.py 2010-08-05 23:56:23 +0000 |
1329 | +++ nova/tests/auth_unittest.py 2010-08-06 21:33:45 +0000 |
1330 | @@ -187,20 +187,6 @@ |
1331 | self.manager.remove_role('test1', 'sysadmin') |
1332 | self.assertFalse(project.has_role('test1', 'sysadmin')) |
1333 | |
1334 | - def test_212_vpn_ip_and_port_looks_valid(self): |
1335 | - project = self.manager.get_project('testproj') |
-        self.assert_(project.vpn_ip)
-        self.assert_(project.vpn_port >= FLAGS.vpn_start_port)
-        self.assert_(project.vpn_port <= FLAGS.vpn_end_port)
-
-    def test_213_too_many_vpns(self):
-        vpns = []
-        for i in xrange(manager.Vpn.num_ports_for_ip(FLAGS.vpn_ip)):
-            vpns.append(manager.Vpn.create("vpnuser%s" % i))
-        self.assertRaises(manager.NoMorePorts, manager.Vpn.create, "boom")
-        for vpn in vpns:
-            vpn.destroy()
-
     def test_214_can_retrieve_project_by_user(self):
         project = self.manager.create_project('testproj2', 'test2', 'Another test project', ['test2'])
         self.assert_(len(self.manager.get_projects()) > 1)

=== modified file 'nova/tests/network_unittest.py'
--- nova/tests/network_unittest.py	2010-07-28 23:11:02 +0000
+++ nova/tests/network_unittest.py	2010-08-06 21:33:45 +0000
@@ -24,8 +24,10 @@
 from nova import test
 from nova import utils
 from nova.auth import manager
-from nova.compute import network
-from nova.compute.exception import NoMoreAddresses
+from nova.network import model
+from nova.network import service
+from nova.network import vpn
+from nova.network.exception import NoMoreAddresses

 FLAGS = flags.FLAGS

@@ -52,7 +54,8 @@
         self.projects.append(self.manager.create_project(name,
                                                          'netuser',
                                                          name))
-        self.network = network.PublicNetworkController()
+        self.network = model.PublicNetworkController()
+        self.service = service.VlanNetworkService()

     def tearDown(self):
         super(NetworkTestCase, self).tearDown()
@@ -66,16 +69,17 @@
         self.assertTrue(IPy.IP(address) in pubnet)
         self.assertTrue(IPy.IP(address) in self.network.network)

-    def test_allocate_deallocate_ip(self):
-        address = network.allocate_ip(
-            self.user.id, self.projects[0].id, utils.generate_mac())
+    def test_allocate_deallocate_fixed_ip(self):
+        result = yield self.service.allocate_fixed_ip(
+            self.user.id, self.projects[0].id)
+        address = result['private_dns_name']
+        mac = result['mac_address']
         logging.debug("Was allocated %s" % (address))
-        net = network.get_project_network(self.projects[0].id, "default")
+        net = model.get_project_network(self.projects[0].id, "default")
         self.assertEqual(True, is_in_project(address, self.projects[0].id))
-        mac = utils.generate_mac()
         hostname = "test-host"
         self.dnsmasq.issue_ip(mac, address, hostname, net.bridge_name)
-        rv = network.deallocate_ip(address)
+        rv = self.service.deallocate_fixed_ip(address)

         # Doesn't go away until it's dhcp released
         self.assertEqual(True, is_in_project(address, self.projects[0].id))
@@ -84,15 +88,18 @@
         self.assertEqual(False, is_in_project(address, self.projects[0].id))

     def test_range_allocation(self):
-        mac = utils.generate_mac()
-        secondmac = utils.generate_mac()
         hostname = "test-host"
-        address = network.allocate_ip(
-            self.user.id, self.projects[0].id, mac)
-        secondaddress = network.allocate_ip(
-            self.user, self.projects[1].id, secondmac)
-        net = network.get_project_network(self.projects[0].id, "default")
-        secondnet = network.get_project_network(self.projects[1].id, "default")
+        result = yield self.service.allocate_fixed_ip(
+            self.user.id, self.projects[0].id)
+        mac = result['mac_address']
+        address = result['private_dns_name']
+        result = yield self.service.allocate_fixed_ip(
+            self.user, self.projects[1].id)
+        secondmac = result['mac_address']
+        secondaddress = result['private_dns_name']
+
+        net = model.get_project_network(self.projects[0].id, "default")
+        secondnet = model.get_project_network(self.projects[1].id, "default")

         self.assertEqual(True, is_in_project(address, self.projects[0].id))
         self.assertEqual(True, is_in_project(secondaddress, self.projects[1].id))
@@ -103,46 +110,64 @@
         self.dnsmasq.issue_ip(secondmac, secondaddress,
                               hostname, secondnet.bridge_name)

-        rv = network.deallocate_ip(address)
+        rv = self.service.deallocate_fixed_ip(address)
         self.dnsmasq.release_ip(mac, address, hostname, net.bridge_name)
         self.assertEqual(False, is_in_project(address, self.projects[0].id))

         # First address release shouldn't affect the second
         self.assertEqual(True, is_in_project(secondaddress, self.projects[1].id))

-        rv = network.deallocate_ip(secondaddress)
+        rv = self.service.deallocate_fixed_ip(secondaddress)
         self.dnsmasq.release_ip(secondmac, secondaddress,
                                 hostname, secondnet.bridge_name)
         self.assertEqual(False, is_in_project(secondaddress, self.projects[1].id))

     def test_subnet_edge(self):
-        secondaddress = network.allocate_ip(self.user.id, self.projects[0].id,
-                                            utils.generate_mac())
+        result = yield self.service.allocate_fixed_ip(self.user.id,
+                                                      self.projects[0].id)
+        firstaddress = result['private_dns_name']
         hostname = "toomany-hosts"
         for i in range(1, 5):
             project_id = self.projects[i].id
-            mac = utils.generate_mac()
-            mac2 = utils.generate_mac()
-            mac3 = utils.generate_mac()
-            address = network.allocate_ip(
-                self.user, project_id, mac)
-            address2 = network.allocate_ip(
-                self.user, project_id, mac2)
-            address3 = network.allocate_ip(
-                self.user, project_id, mac3)
+            result = yield self.service.allocate_fixed_ip(
+                self.user, project_id)
+            mac = result['mac_address']
+            address = result['private_dns_name']
+            result = yield self.service.allocate_fixed_ip(
+                self.user, project_id)
+            mac2 = result['mac_address']
+            address2 = result['private_dns_name']
+            result = yield self.service.allocate_fixed_ip(
+                self.user, project_id)
+            mac3 = result['mac_address']
+            address3 = result['private_dns_name']
             self.assertEqual(False, is_in_project(address, self.projects[0].id))
             self.assertEqual(False, is_in_project(address2, self.projects[0].id))
             self.assertEqual(False, is_in_project(address3, self.projects[0].id))
-            rv = network.deallocate_ip(address)
-            rv = network.deallocate_ip(address2)
-            rv = network.deallocate_ip(address3)
-            net = network.get_project_network(project_id, "default")
+            rv = self.service.deallocate_fixed_ip(address)
+            rv = self.service.deallocate_fixed_ip(address2)
+            rv = self.service.deallocate_fixed_ip(address3)
+            net = model.get_project_network(project_id, "default")
             self.dnsmasq.release_ip(mac, address, hostname, net.bridge_name)
             self.dnsmasq.release_ip(mac2, address2, hostname, net.bridge_name)
             self.dnsmasq.release_ip(mac3, address3, hostname, net.bridge_name)
-        net = network.get_project_network(self.projects[0].id, "default")
-        rv = network.deallocate_ip(secondaddress)
-        self.dnsmasq.release_ip(mac, secondaddress, hostname, net.bridge_name)
+        net = model.get_project_network(self.projects[0].id, "default")
+        rv = self.service.deallocate_fixed_ip(firstaddress)
+        self.dnsmasq.release_ip(mac, firstaddress, hostname, net.bridge_name)
+
+    def test_212_vpn_ip_and_port_looks_valid(self):
+        vpn.NetworkData.create(self.projects[0].id)
+        self.assert_(self.projects[0].vpn_ip)
+        self.assert_(self.projects[0].vpn_port >= FLAGS.vpn_start_port)
+        self.assert_(self.projects[0].vpn_port <= FLAGS.vpn_end_port)
+
+    def test_too_many_vpns(self):
+        vpns = []
+        for i in xrange(vpn.NetworkData.num_ports_for_ip(FLAGS.vpn_ip)):
+            vpns.append(vpn.NetworkData.create("vpnuser%s" % i))
+        self.assertRaises(vpn.NoMorePorts, vpn.NetworkData.create, "boom")
+        for network_datum in vpns:
+            network_datum.destroy()

     def test_release_before_deallocate(self):
         pass
@@ -169,7 +194,7 @@
                    NUM_RESERVED_VPN_IPS)
         usable addresses
         """
-        net = network.get_project_network(self.projects[0].id, "default")
+        net = model.get_project_network(self.projects[0].id, "default")

         # Determine expected number of available IP addresses
         num_static_ips = net.num_static_ips
@@ -183,22 +208,23 @@
         macs = {}
         addresses = {}
         for i in range(0, (num_available_ips - 1)):
-            macs[i] = utils.generate_mac()
-            addresses[i] = network.allocate_ip(self.user.id, self.projects[0].id, macs[i])
+            result = yield self.service.allocate_fixed_ip(self.user.id, self.projects[0].id)
+            macs[i] = result['mac_address']
+            addresses[i] = result['private_dns_name']
             self.dnsmasq.issue_ip(macs[i], addresses[i], hostname, net.bridge_name)

-        self.assertRaises(NoMoreAddresses, network.allocate_ip, self.user.id, self.projects[0].id, utils.generate_mac())
+        self.assertFailure(self.service.allocate_fixed_ip(self.user.id, self.projects[0].id), NoMoreAddresses)

         for i in range(0, (num_available_ips - 1)):
-            rv = network.deallocate_ip(addresses[i])
+            rv = self.service.deallocate_fixed_ip(addresses[i])
             self.dnsmasq.release_ip(macs[i], addresses[i], hostname, net.bridge_name)

 def is_in_project(address, project_id):
-    return address in network.get_project_network(project_id).list_addresses()
+    return address in model.get_project_network(project_id).list_addresses()

 def _get_project_addresses(project_id):
     project_addresses = []
-    for addr in network.get_project_network(project_id).list_addresses():
+    for addr in model.get_project_network(project_id).list_addresses():
         project_addresses.append(addr)
     return project_addresses


=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py	2010-08-04 06:17:35 +0000
+++ nova/virt/libvirt_conn.py	2010-08-06 21:33:45 +0000
@@ -46,6 +46,9 @@
 flags.DEFINE_string('libvirt_xml_template',
                     utils.abspath('compute/libvirt.xml.template'),
                     'Libvirt XML Template')
+flags.DEFINE_string('injected_network_template',
+                    utils.abspath('compute/interfaces.template'),
+                    'Template file for injected network')

 flags.DEFINE_string('libvirt_type',
                     'kvm',
@@ -205,14 +208,14 @@

         key = data['key_data']
         net = None
-        if FLAGS.simple_network:
-            with open(FLAGS.simple_network_template) as f:
+        if data.get('inject_network', False):
+            with open(FLAGS.injected_network_template) as f:
                 net = f.read() % {'address': data['private_dns_name'],
-                                  'network': FLAGS.simple_network_network,
-                                  'netmask': FLAGS.simple_network_netmask,
-                                  'gateway': FLAGS.simple_network_gateway,
-                                  'broadcast': FLAGS.simple_network_broadcast,
-                                  'dns': FLAGS.simple_network_dns}
+                                  'network': data['network_network'],
+                                  'netmask': data['network_netmask'],
+                                  'gateway': data['network_gateway'],
+                                  'broadcast': data['network_broadcast'],
+                                  'dns': data['network_dns']}
         if key or net:
             logging.info('Injecting data into image %s', data['image_id'])
             yield disk.inject_data(basepath('disk-raw'), key, net, execute=execute)
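The refactored tests above unpack a result dict (`private_dns_name`, `mac_address`) from `allocate_fixed_ip` and raise `NoMoreAddresses` when the pool runs dry. A minimal, self-contained sketch of that allocate/deallocate contract (the `FakeNetworkService` class and its address pool are hypothetical stand-ins, not Nova's real classes):

```python
class NoMoreAddresses(Exception):
    """Raised when the project's fixed-IP pool is exhausted."""


class FakeNetworkService(object):
    """Toy model of the service contract the tests exercise."""

    def __init__(self, pool):
        self.pool = list(pool)   # available addresses, FIFO
        self.allocated = {}      # address -> (user_id, project_id)

    def allocate_fixed_ip(self, user_id, project_id):
        if not self.pool:
            raise NoMoreAddresses()
        address = self.pool.pop(0)
        self.allocated[address] = (user_id, project_id)
        # Mirrors the result dict the tests unpack.
        return {'private_dns_name': address,
                'mac_address': '02:16:3e:00:00:%02x' % len(self.allocated)}

    def deallocate_fixed_ip(self, address):
        del self.allocated[address]
        self.pool.append(address)


svc = FakeNetworkService(['10.0.0.%d' % i for i in range(2, 5)])
result = svc.allocate_fixed_ip('netuser', 'testproj')
print(result['private_dns_name'])  # 10.0.0.2
```

Note that in the real service the caller no longer passes a MAC; generating it on the network worker (as the diff does by returning `mac_address` in the result) keeps address and MAC assignment in one place.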
Looks great! I have some concern about introducing some lag from the extra roundtrip to look up the network topic, but this should be fine for now. The only small thing would be the name of the network/networkdata.py file and class. Since this is a model for VPN, what about just network/vpn.py? I guess the network/network* redundancy is the thing that bugs me. :)
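The "extra roundtrip" mentioned above refers to the design in the proposal: each project is pinned to a network host, so callers must first resolve which worker's topic to send RPC to. A toy sketch of that topic-lookup idea (the function, mapping, and topic format here are hypothetical illustrations, not Nova's actual implementation):

```python
def get_network_topic(project_hosts, project_id, base_topic='network'):
    """Return the message-queue topic for the network worker that owns
    project_id.  The dict lookup stands in for the extra lookup (e.g. a
    datastore or RPC call) that adds the roundtrip discussed above."""
    host = project_hosts[project_id]
    return '%s.%s' % (base_topic, host)


# Two projects pinned to two different network workers.
hosts = {'testproj': 'nethost1', 'testproj2': 'nethost2'}
print(get_network_topic(hosts, 'testproj'))   # network.nethost1
print(get_network_topic(hosts, 'testproj2'))  # network.nethost2
```

As the proposal notes, a flat networking model may not need this pinning at all, and the lookup could instead go through a scheduler that picks the least busy worker.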