Merge lp:~soren/nova/iptables-security-groups into lp:~hudson-openstack/nova/trunk
- iptables-security-groups
- Merge into trunk
Status: Merged
Approved by: Soren Hansen
Approved revision: 452
Merged at revision: 533
Proposed branch: lp:~soren/nova/iptables-security-groups
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 847 lines (+541/-88), 8 files modified
  - nova/api/ec2/cloud.py (+4/-11)
  - nova/compute/api.py (+57/-0)
  - nova/compute/manager.py (+10/-3)
  - nova/db/api.py (+7/-0)
  - nova/db/sqlalchemy/api.py (+40/-2)
  - nova/network/linux_net.py (+2/-0)
  - nova/tests/test_virt.py (+96/-4)
  - nova/virt/libvirt_conn.py (+325/-68)
To merge this branch: bzr merge lp:~soren/nova/iptables-security-groups
Related bugs:
Related blueprints:

Reviewers:
  - Jay Pipes (community): Approve
  - Eric Day (community): Approve
  - Vish Ishaya (community): Approve

Review via email: mp+43767@code.launchpad.net
Commit message
Description of the change
Add a new firewall backend for libvirt, based on iptables.
Soren Hansen (soren) wrote:
Jay Pipes (jaypipes) wrote:
I really, really like this work. Needs to be merged with trunk and ported for eventlet. Other than that, I think it's great stuff.
Jay Pipes (jaypipes) wrote:
Coolness :) Great work, Soren.
Vish Ishaya (vishvananda) wrote:
Minor issue:
171     options(
172 +   options(

joinedload_all will load all of the objects in the chain, so lines 171 and 179 are redundant and can be removed.
Also, have you tested with --use_nova_chains as well?
Finally, do you think that iptables security groups are strictly superior to nwfilter security groups? Should we continue to support the other driver or deprecate it?
Eric Day (eday) wrote:
48: line is not needed
69, 91: should use self.db.<...>, not just db.<...> for consistency
Soren Hansen (soren) wrote:
> Minor issue:
>
> 171     options(
> 172 +   options(
>
> joinedload_all will load all of the objects in the chain so lines 171 and 179
> are redundant and can be removed.
Good call. Fixed.
> Also, have you tested with --use_nova_chains as well?
Uh.. No. Not at all. I've completely missed that. :( I'll do so and set this mp to WiP.
> Finally, do you think that iptables security groups are strictly
> superior to nwfilter security groups?
Right now, yes. No doubt. Bug 659135 basically makes nwfilter non-functional. NWFilter has other problems as well (its attempts to avoid being racy are somewhat misguided, for instance).
> Should we continue to support the other driver or deprecate it?
Temporary deprecation might make sense, yeah. I know I won't be using it and I wouldn't recommend it to anyone.
Soren Hansen (soren) wrote:
> 48: line is not needed
Fixed.
> 69, 91: should use self.db.<...>, not just db.<...> for consistency
Fixed, although I have to admit that it's a bit of a mystery to me what purpose the db instance attribute serves.
Soren Hansen (soren) wrote:
> Also, have you tested with --use_nova_chains as well?
I'm having trouble understanding that code, to be honest.
If use_nova_chains is enabled (I don't understand why it's configurable?), we do this:
At the end of FLAGS.input_chain (which I can't see getting DEFINE'd anywhere?) we add a nova_input chain.
At the end of FORWARD, we add a nova_forward chain.
At the end of OUTPUT, we add a nova_output chain.
At the end of nat:PREROUTING, we add nova_prerouting.
At the end of nat:POSTROUTING, we add nova_postrouting and nova_snatting (in that order).
At the end of nat:OUTPUT, we add nova_output.
If use_nova_chains isn't enabled, we add a new chain, called SNATTING, to the end of nat:POSTROUTING.
Regardless of the use_nova_chains setting, we now add a rule to the SNATTING chain, even though AFAICT it only exists if use_nova_chains is False?
In nova.network.
If use_nova_chains is enabled, we create nova_forward (again?) and add it to the end of FORWARD.
We then prepend the ACCEPT rules for the bridge to the FORWARD chain (rather than the nova_forward chain?).
I'm at a bit of a loss as to where to integrate my stuff. What I need is a place to put some forwarding rules that get applied before the ACCEPT rules for the bridge. I could simply leave it where it is in the current patch (which is at the very top of the FORWARD chain), and that would totally work with use_nova_chains as far as I can see, but I'm not sure if that's what is intended in the use_nova_chains paradigm?
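The chain wiring walked through above can be sketched as pure data plus the iptables commands it implies. This is a hypothetical model built from the description in this thread (the `NOVA_CHAIN_WIRING` table and `wiring_commands` helper are illustrative names, not actual linux_net.py code, and the FLAGS.input_chain case is shown simply as INPUT):

```python
# Hypothetical model of the use_nova_chains wiring described above.
# (table, built-in chain) -> nova chains appended at the end, in order.
NOVA_CHAIN_WIRING = {
    ("filter", "INPUT"): ["nova_input"],
    ("filter", "FORWARD"): ["nova_forward"],
    ("filter", "OUTPUT"): ["nova_output"],
    ("nat", "PREROUTING"): ["nova_prerouting"],
    ("nat", "POSTROUTING"): ["nova_postrouting", "nova_snatting"],
    ("nat", "OUTPUT"): ["nova_output"],
}


def wiring_commands(wiring):
    """Render the wiring as the iptables invocations that would create it."""
    cmds = []
    for (table, chain), nova_chains in sorted(wiring.items()):
        for nova_chain in nova_chains:
            # -N creates the nova chain; -A appends a jump to it at the
            # end of the built-in chain, preserving the listed order.
            cmds.append("iptables -t %s -N %s" % (table, nova_chain))
            cmds.append("iptables -t %s -A %s -j %s" % (table, chain, nova_chain))
    return cmds
```

Laying the wiring out as data makes the ordering constraint visible: nova_postrouting is always jumped to before nova_snatting in nat:POSTROUTING.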
Vish Ishaya (vishvananda) wrote:
Some discussion inline.
On Jan 7, 2011, at 4:31 AM, Soren Hansen wrote:
>> Also, have you tested with --use_nova_chains as well?
>
> I'm having trouble understanding that code, to be honest.
>
> If use_nova_chains is enabled (I don't understand why it's configurable?), we do this:
> At the end of FLAGS.input_chain (which I can't see getting DEFINE'd anywhere?) we add a nova_input chain.
> At the end of FORWARD, we add a nova_forward chain.
> At the end of OUTPUT, we add a nova_output chain.
> At the end of nat:PREROUTING, we add nova_prerouting.
> At the end of nat:POSTROUTING, we add nova_postrouting and nova_snatting (in that order).
> At the end of nat:OUTPUT, we add nova_output.
>
> If use_nova_chains isn't enabled, we add a new chain, called SNATTING, to the end of nat:POSTROUTING.
splitting postrouting into two chains allows us to ensure the right ordering for the postrouting rules.
>
> Regardless of the use_nova_chains setting, we now add a rule to the SNATTING chain, even though AFAICT it only exists if use_nova_chains is False?
if nova chains is true, the rule will be rewritten in confirm_rule and it will be added to nova_snatting instead
>
> In nova.network.
>
> If use_nova_chains is enabled, we create nova_forward (again?) and add it to the end of FORWARD.
It creates nova_forward if it hasn't been created yet
>
> We then prepend the ACCEPT rules for the bridge to the FORWARD chain (rather than the nova_forward chain?).
once again the rule will go into the nova_forward chain
>
> I'm at a bit of a loss as to where to integrate my stuff. What I need is a place to put some forwarding rules that get applied before the ACCEPT rules for the bridge. I could simply just leave it where it is in the current patch (which is at the very top of the FORWARD chain), and that would totally work with use_nova_chains as far as I can see, but I'm not sure if that's what is indended in the use_nova_chains paradigm?
The idea here is that all of the rules go into nova-specific chains instead of the default chains, so that sysadmins can add and remove rules by dealing with the shorter default chains. It is mostly so that the chains remain relatively clean, and so that all of the rules added by nova are in one place and easy to flush/remove if necessary. For example, we tend to have a pretty locked-down set of rules, so we have custom exceptions for a bunch of non-nova services. There have been many cases where there are extraneous rules due to a misconfigured flag or some such. Once the flag is changed, iptables -D doesn't delete the old rule. With use_nova_chains you can just flush the nova_xxxx chain and restart the network worker instead of manually removing bad rules from the input chain.
I don't know that your code needs to follow the same pattern, although it could. I just wanted to make sure it wouldn't break with use_nova_chains, since we use it. If all of your rules are in forward, you could have a conditional based on --use_nova_chains that ensures the chain exists and then adds them to nova-forward instead. Or you could have a specific new nova_xxxxx chain before n...
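The rule rewriting Vish refers to can be illustrated with a small sketch. The chain-name mapping and the function body below are assumptions based on this thread, not the actual `_confirm_rule` logic in nova/network/linux_net.py:

```python
# Illustrative sketch: with use_nova_chains on, a rule aimed at a
# built-in (or SNATTING) chain is redirected into the matching nova_*
# chain before being installed.  The mapping is an assumption based on
# the discussion above, not the real linux_net.py table.
CHAIN_REWRITES = {
    "INPUT": "nova_input",
    "FORWARD": "nova_forward",
    "OUTPUT": "nova_output",
    "SNATTING": "nova_snatting",
}


def confirm_rule(chain, rule, use_nova_chains):
    """Return the (chain, rule) pair that would actually be installed."""
    if use_nova_chains:
        chain = CHAIN_REWRITES.get(chain, chain)
    return chain, rule
```

This is why the SNATTING rule discussed above still works with use_nova_chains enabled: the rewrite sends it to nova_snatting instead.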
Vish Ishaya (vishvananda) wrote:
On Jan 7, 2011, at 4:31 AM, Soren Hansen wrote:
>> Also, have you tested with --use_nova_chains as well?
>
> I'm having trouble understanding that code, to be honest.
>
> If use_nova_chains is enabled (I don't understand why it's configurable?), we do this:
Configurable flag is a legacy holdover. It was initially there because the chains were being created by a separate script in tools, so I wanted it to work in dev mode without running the script. I think it is strictly cleaner with use_nova_chains set to true, so perhaps I should submit a patch that removes the flag. The rewriting of confirm_rule will look a bit strange without the flag though :).
Vish
Soren Hansen (soren) wrote:
2011/1/7 Vish Ishaya <email address hidden>:
>>> Also, have you tested with --use_nova_chains as well?
>> I'm having trouble understanding that code, to be honest.
>>
>> If use_nova_chains is enabled (I don't understand why it's
>> configurable?), we do this:
>> At the end of FLAGS.input_chain (which I can't see getting DEFINE'd anywhere?) we add a nova_input chain.
>> At the end of FORWARD, we add a nova_forward chain.
>> At the end of OUTPUT, we add a nova_output chain.
>> At the end of nat:PREROUTING, we add nova_prerouting.
>> At the end of nat:POSTROUTING, we add nova_postrouting and nova_snatting (in that order).
>> At the end of nat:OUTPUT, we add nova_output.
>>
>> If use_nova_chains isn't enabled, we add a new chain, called SNATTING, to the end of nat:POSTROUTING.
> splitting postrouting into two chains allows us to ensure the right ordering for the postrouting rules.
Ok. I can relate to that. I just don't understand why that split is
conditional then? That feeds back into my "why is this configurable?"
question, I guess :)
>> Regardless of the use_nova_chains setting, we now add a rule to the SNATTING chain, even though AFAICT it only exists if use_nova_chains is False?
> if nova chains is true, the rule will be rewritten in confirm_rule and it will be added to nova_snatting instead
Oh, ok! I completely missed the change to confirm_rule.
>> In nova.network.
>> If use_nova_chains is enabled, we create nova_forward (again?) and add it to the end of FORWARD.
> It creates nova_forward if it hasn't been created yet
I just didn't see when that would be the case, but ok :)
>> We then prepend the ACCEPT rules for the bridge to the FORWARD chain (rather than the nova_forward chain?).
> once again the rule will go into the nova_forward chain
Right, right, because of _confirm_rule. I completely missed that.
>> I'm at a bit of a loss as to where to integrate my stuff. What I need
>> is a place to put some forwarding rules that get applied before the
>> ACCEPT rules for the bridge. I could simply just leave it where it is
>> in the current patch (which is at the very top of the FORWARD chain),
>> and that would totally work with use_nova_chains as far as I can see,
>> but I'm not sure if that's what is intended in the use_nova_chains
>> paradigm?
> The idea here is that all of the rules go into nova specific chains
> instead of the default chains so that sysadmins can add and remove
>> rules by dealing with the shorter default chains. It is mostly so
> that the chains remain relatively clean.
Right, I understand the motivation behind it. I'm following sort of the
same pattern in my stuff where everything is packaged away in separate
chains, all named "nova-<something>".
I think consolidating our iptables stuff is a good idea. However, IMHO
my approach to iptables handling is better :) iptables (the command line
tool, not subsystem) is racy and unnecessarily costly. Each call to
iptables makes it load the entire table into memory, make the change,
and put it back. In the meantime, it could have been altered by
another invocation of iptables. iptables-save and iptables-restore are
perfectly...
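The save/modify/restore approach Soren advocates can be sketched as a pure function over iptables-save output. This mirrors the shape of the `modify_rules` method exercised in nova/tests/test_virt.py, but the body here is a simplified assumption, not the patch's implementation:

```python
def modify_rules(current_lines, nova_rules):
    """Splice nova's rules in ahead of any pre-existing -A rules in the
    iptables-save dump, so they run before the bridge's blanket ACCEPT
    rules.  The whole result is then fed to iptables-restore in one
    call, sidestepping the read-modify-write race of repeated
    per-rule iptables invocations."""
    out = []
    inserted = False
    for line in current_lines:
        # Chain declarations (':...') and headers pass through untouched;
        # insert just before the first rule line.
        if not inserted and line.startswith('-A'):
            out.extend(nova_rules)
            inserted = True
        out.append(line)
    if not inserted:
        out.extend(nova_rules)
    return out
```

Because iptables-restore swaps in the entire table at once, a concurrent iptables invocation cannot interleave between the read and the write, which is the race the per-rule approach suffers from.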
Vish Ishaya (vishvananda) wrote:
Agreed on the cactus refactor. Looks like it is working for now, so we should merge this.
Vish
Vish Ishaya (vishvananda):
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/virt/
Vish Ishaya (vishvananda) wrote:
Looks like you need another merge
Soren Hansen (soren) wrote:
Done.
OpenStack Infra (hudson-openstack) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~soren/nova/iptables-security-groups into lp:nova failed. Below is the output from the failed tests.
(Test output truncated in the capture; the suites listed were TrialTestCase, AdminAPITest, APITest, TestLimiter, TestFaults, FlavorsTest, GlanceImageServ…, ImageController…, LocalImageServi…, LimiterTest, WSGIAppProxyTest, WSGIAppTest, and ServersTest, with most individual test names cut off.)
OpenStack Infra (hudson-openstack) wrote : | # |
The attempt to merge lp:~soren/nova/iptables-security-groups into lp:nova failed, with the same truncated test output as the previous run.
- 452. By Soren Hansen

  Create LibvirtConnection directly, rather than going through libvirt_conn.get_connection. This should remove the dependency on libvirt for tests.
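The r452 change can be illustrated in miniature: the firewall takes a zero-argument callable that returns a connection, rather than a live connection, so tests can inject a fake without ever importing libvirt. The class bodies below are simplified sketches, not the actual libvirt_conn.py code (nwfilterDefineXML is the libvirt Python binding's method name):

```python
class NWFilterFirewall(object):
    """Simplified sketch: hold a connection *getter*, not a connection,
    so libvirt is only touched when a filter is actually defined."""

    def __init__(self, get_connection):
        self._get_connection = get_connection

    def define_filter(self, xml):
        # Resolve the connection lazily, at call time.
        return self._get_connection().nwfilterDefineXML(xml)


class FakeLibvirtConnection(object):
    """Test double: records defined filters instead of talking to libvirt."""

    def __init__(self):
        self.defined = []

    def nwfilterDefineXML(self, xml):
        self.defined.append(xml)
        return True
```

This matches the shape of the test change in the diff below, where NWFilterFirewall is constructed with `lambda: self.fake_libvirt_connection`.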
Preview Diff
1 | === modified file 'nova/api/ec2/cloud.py' |
2 | --- nova/api/ec2/cloud.py 2011-01-07 23:22:52 +0000 |
3 | +++ nova/api/ec2/cloud.py 2011-01-10 10:38:10 +0000 |
4 | @@ -133,15 +133,6 @@ |
5 | result[key] = [line] |
6 | return result |
7 | |
8 | - def _trigger_refresh_security_group(self, context, security_group): |
9 | - nodes = set([instance['host'] for instance in security_group.instances |
10 | - if instance['host'] is not None]) |
11 | - for node in nodes: |
12 | - rpc.cast(context, |
13 | - '%s.%s' % (FLAGS.compute_topic, node), |
14 | - {"method": "refresh_security_group", |
15 | - "args": {"security_group_id": security_group.id}}) |
16 | - |
17 | def get_metadata(self, address): |
18 | ctxt = context.get_admin_context() |
19 | instance_ref = self.compute_api.get_all(ctxt, fixed_ip=address) |
20 | @@ -419,7 +410,8 @@ |
21 | match = False |
22 | if match: |
23 | db.security_group_rule_destroy(context, rule['id']) |
24 | - self._trigger_refresh_security_group(context, security_group) |
25 | + self.compute_api.trigger_security_group_rules_refresh(context, |
26 | + security_group['id']) |
27 | return True |
28 | raise exception.ApiError(_("No rule for the specified parameters.")) |
29 | |
30 | @@ -444,7 +436,8 @@ |
31 | |
32 | security_group_rule = db.security_group_rule_create(context, values) |
33 | |
34 | - self._trigger_refresh_security_group(context, security_group) |
35 | + self.compute_api.trigger_security_group_rules_refresh(context, |
36 | + security_group['id']) |
37 | |
38 | return True |
39 | |
40 | |
41 | === modified file 'nova/compute/api.py' |
42 | --- nova/compute/api.py 2011-01-07 14:46:17 +0000 |
43 | +++ nova/compute/api.py 2011-01-10 10:38:10 +0000 |
44 | @@ -185,6 +185,9 @@ |
45 | "args": {"topic": FLAGS.compute_topic, |
46 | "instance_id": instance_id}}) |
47 | |
48 | + for group_id in security_groups: |
49 | + self.trigger_security_group_members_refresh(elevated, group_id) |
50 | + |
51 | return instances |
52 | |
53 | def ensure_default_security_group(self, context): |
54 | @@ -204,6 +207,60 @@ |
55 | 'project_id': context.project_id} |
56 | db.security_group_create(context, values) |
57 | |
58 | + def trigger_security_group_rules_refresh(self, context, security_group_id): |
59 | + """Called when a rule is added to or removed from a security_group""" |
60 | + |
61 | + security_group = self.db.security_group_get(context, security_group_id) |
62 | + |
63 | + hosts = set() |
64 | + for instance in security_group['instances']: |
65 | + if instance['host'] is not None: |
66 | + hosts.add(instance['host']) |
67 | + |
68 | + for host in hosts: |
69 | + rpc.cast(context, |
70 | + self.db.queue_get_for(context, FLAGS.compute_topic, host), |
71 | + {"method": "refresh_security_group_rules", |
72 | + "args": {"security_group_id": security_group.id}}) |
73 | + |
74 | + def trigger_security_group_members_refresh(self, context, group_id): |
75 | + """Called when a security group gains a new or loses a member |
76 | + |
77 | + Sends an update request to each compute node for whom this is |
78 | + relevant.""" |
79 | + |
80 | + # First, we get the security group rules that reference this group as |
81 | + # the grantee.. |
82 | + security_group_rules = \ |
83 | + self.db.security_group_rule_get_by_security_group_grantee( |
84 | + context, |
85 | + group_id) |
86 | + |
87 | + # ..then we distill the security groups to which they belong.. |
88 | + security_groups = set() |
89 | + for rule in security_group_rules: |
90 | + security_groups.add(rule['parent_group_id']) |
91 | + |
92 | + # ..then we find the instances that are members of these groups.. |
93 | + instances = set() |
94 | + for security_group in security_groups: |
95 | + for instance in security_group['instances']: |
96 | + instances.add(instance['id']) |
97 | + |
98 | + # ...then we find the hosts where they live... |
99 | + hosts = set() |
100 | + for instance in instances: |
101 | + if instance['host']: |
102 | + hosts.add(instance['host']) |
103 | + |
104 | + # ...and finally we tell these nodes to refresh their view of this |
105 | + # particular security group. |
106 | + for host in hosts: |
107 | + rpc.cast(context, |
108 | + self.db.queue_get_for(context, FLAGS.compute_topic, host), |
109 | + {"method": "refresh_security_group_members", |
110 | + "args": {"security_group_id": group_id}}) |
111 | + |
112 | def update(self, context, instance_id, **kwargs): |
113 | """Updates the instance in the datastore. |
114 | |
115 | |
116 | === modified file 'nova/compute/manager.py' |
117 | --- nova/compute/manager.py 2011-01-07 14:46:17 +0000 |
118 | +++ nova/compute/manager.py 2011-01-10 10:38:10 +0000 |
119 | @@ -137,9 +137,16 @@ |
120 | host) |
121 | |
122 | @exception.wrap_exception |
123 | - def refresh_security_group(self, context, security_group_id, **_kwargs): |
124 | - """This call passes stright through to the virtualization driver.""" |
125 | - self.driver.refresh_security_group(security_group_id) |
126 | + def refresh_security_group_rules(self, context, |
127 | + security_group_id, **_kwargs): |
128 | + """This call passes straight through to the virtualization driver.""" |
129 | + return self.driver.refresh_security_group_rules(security_group_id) |
130 | + |
131 | + @exception.wrap_exception |
132 | + def refresh_security_group_members(self, context, |
133 | + security_group_id, **_kwargs): |
134 | + """This call passes straight through to the virtualization driver.""" |
135 | + return self.driver.refresh_security_group_members(security_group_id) |
136 | |
137 | @exception.wrap_exception |
138 | def run_instance(self, context, instance_id, **_kwargs): |
139 | |
140 | === modified file 'nova/db/api.py' |
141 | --- nova/db/api.py 2011-01-04 17:07:09 +0000 |
142 | +++ nova/db/api.py 2011-01-10 10:38:10 +0000 |
143 | @@ -772,6 +772,13 @@ |
144 | security_group_id) |
145 | |
146 | |
147 | +def security_group_rule_get_by_security_group_grantee(context, |
148 | + security_group_id): |
149 | + """Get all rules that grant access to the given security group.""" |
150 | + return IMPL.security_group_rule_get_by_security_group_grantee(context, |
151 | + security_group_id) |
152 | + |
153 | + |
154 | def security_group_rule_destroy(context, security_group_rule_id): |
155 | """Deletes a security group rule.""" |
156 | return IMPL.security_group_rule_destroy(context, security_group_rule_id) |
157 | |
158 | === modified file 'nova/db/sqlalchemy/api.py' |
159 | --- nova/db/sqlalchemy/api.py 2011-01-07 01:19:22 +0000 |
160 | +++ nova/db/sqlalchemy/api.py 2011-01-10 10:38:10 +0000 |
161 | @@ -650,7 +650,7 @@ |
162 | if is_admin_context(context): |
163 | result = session.query(models.Instance).\ |
164 | options(joinedload_all('fixed_ip.floating_ips')).\ |
165 | - options(joinedload('security_groups')).\ |
166 | + options(joinedload_all('security_groups.rules')).\ |
167 | options(joinedload('volumes')).\ |
168 | filter_by(id=instance_id).\ |
169 | filter_by(deleted=can_read_deleted(context)).\ |
170 | @@ -658,7 +658,7 @@ |
171 | elif is_user_context(context): |
172 | result = session.query(models.Instance).\ |
173 | options(joinedload_all('fixed_ip.floating_ips')).\ |
174 | - options(joinedload('security_groups')).\ |
175 | + options(joinedload_all('security_groups.rules')).\ |
176 | options(joinedload('volumes')).\ |
177 | filter_by(project_id=context.project_id).\ |
178 | filter_by(id=instance_id).\ |
179 | @@ -1579,6 +1579,44 @@ |
180 | |
181 | |
182 | @require_context |
183 | +def security_group_rule_get_by_security_group(context, security_group_id, |
184 | + session=None): |
185 | + if not session: |
186 | + session = get_session() |
187 | + if is_admin_context(context): |
188 | + result = session.query(models.SecurityGroupIngressRule).\ |
189 | + filter_by(deleted=can_read_deleted(context)).\ |
190 | + filter_by(parent_group_id=security_group_id).\ |
191 | + all() |
192 | + else: |
193 | + # TODO(vish): Join to group and check for project_id |
194 | + result = session.query(models.SecurityGroupIngressRule).\ |
195 | + filter_by(deleted=False).\ |
196 | + filter_by(parent_group_id=security_group_id).\ |
197 | + all() |
198 | + return result |
199 | + |
200 | + |
201 | +@require_context |
202 | +def security_group_rule_get_by_security_group_grantee(context, |
203 | + security_group_id, |
204 | + session=None): |
205 | + if not session: |
206 | + session = get_session() |
207 | + if is_admin_context(context): |
208 | + result = session.query(models.SecurityGroupIngressRule).\ |
209 | + filter_by(deleted=can_read_deleted(context)).\ |
210 | + filter_by(group_id=security_group_id).\ |
211 | + all() |
212 | + else: |
213 | + result = session.query(models.SecurityGroupIngressRule).\ |
214 | + filter_by(deleted=False).\ |
215 | + filter_by(group_id=security_group_id).\ |
216 | + all() |
217 | + return result |
218 | + |
219 | + |
220 | +@require_context |
221 | def security_group_rule_create(context, values): |
222 | security_group_rule_ref = models.SecurityGroupIngressRule() |
223 | security_group_rule_ref.update(values) |
224 | |
225 | === modified file 'nova/network/linux_net.py' |
226 | --- nova/network/linux_net.py 2011-01-04 05:23:35 +0000 |
227 | +++ nova/network/linux_net.py 2011-01-10 10:38:10 +0000 |
228 | @@ -209,6 +209,8 @@ |
229 | |
230 | _confirm_rule("FORWARD", "--in-interface %s -j ACCEPT" % bridge) |
231 | _confirm_rule("FORWARD", "--out-interface %s -j ACCEPT" % bridge) |
232 | + _execute("sudo iptables -N nova-local", check_exit_code=False) |
233 | + _confirm_rule("FORWARD", "-j nova-local") |
234 | |
235 | |
236 | def get_dhcp_hosts(context, network_id): |
237 | |
238 | === modified file 'nova/tests/test_virt.py' |
239 | --- nova/tests/test_virt.py 2010-12-28 01:37:04 +0000 |
240 | +++ nova/tests/test_virt.py 2011-01-10 10:38:10 +0000 |
241 | @@ -208,8 +208,99 @@ |
242 | self.manager.delete_user(self.user) |
243 | |
244 | |
245 | +class IptablesFirewallTestCase(test.TestCase): |
246 | + def setUp(self): |
247 | + super(IptablesFirewallTestCase, self).setUp() |
248 | + |
249 | + self.manager = manager.AuthManager() |
250 | + self.user = self.manager.create_user('fake', 'fake', 'fake', |
251 | + admin=True) |
252 | + self.project = self.manager.create_project('fake', 'fake', 'fake') |
253 | + self.context = context.RequestContext('fake', 'fake') |
254 | + self.network = utils.import_object(FLAGS.network_manager) |
255 | + self.fw = libvirt_conn.IptablesFirewallDriver() |
256 | + |
257 | + def tearDown(self): |
258 | + self.manager.delete_project(self.project) |
259 | + self.manager.delete_user(self.user) |
260 | + super(IptablesFirewallTestCase, self).tearDown() |
261 | + |
262 | + def _p(self, *args, **kwargs): |
263 | + if 'iptables-restore' in args: |
264 | + print ' '.join(args), kwargs['stdin'] |
265 | + if 'iptables-save' in args: |
266 | + return |
267 | + |
268 | + in_rules = [ |
269 | + '# Generated by iptables-save v1.4.4 on Mon Dec 6 11:54:13 2010', |
270 | + '*filter', |
271 | + ':INPUT ACCEPT [969615:281627771]', |
272 | + ':FORWARD ACCEPT [0:0]', |
273 | + ':OUTPUT ACCEPT [915599:63811649]', |
274 | + ':nova-block-ipv4 - [0:0]', |
275 | + '-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT ', |
276 | + '-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT ', |
277 | + '-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT ', |
278 | + '-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT ', |
279 | + '-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED' |
280 | + ',ESTABLISHED -j ACCEPT ', |
281 | + '-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT ', |
282 | + '-A FORWARD -i virbr0 -o virbr0 -j ACCEPT ', |
283 | + '-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable ', |
284 | + '-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable ', |
285 | + 'COMMIT', |
286 | + '# Completed on Mon Dec 6 11:54:13 2010' |
287 | + ] |
288 | + |
289 | + def test_static_filters(self): |
290 | + self.fw.execute = self._p |
291 | + instance_ref = db.instance_create(self.context, |
292 | + {'user_id': 'fake', |
293 | + 'project_id': 'fake'}) |
294 | + ip = '10.11.12.13' |
295 | + |
296 | + network_ref = db.project_get_network(self.context, |
297 | + 'fake') |
298 | + |
299 | + fixed_ip = {'address': ip, |
300 | + 'network_id': network_ref['id']} |
301 | + |
302 | + admin_ctxt = context.get_admin_context() |
303 | + db.fixed_ip_create(admin_ctxt, fixed_ip) |
304 | + db.fixed_ip_update(admin_ctxt, ip, {'allocated': True, |
305 | + 'instance_id': instance_ref['id']}) |
306 | + |
307 | + secgroup = db.security_group_create(admin_ctxt, |
308 | + {'user_id': 'fake', |
309 | + 'project_id': 'fake', |
310 | + 'name': 'testgroup', |
311 | + 'description': 'test group'}) |
312 | + |
313 | + db.security_group_rule_create(admin_ctxt, |
314 | + {'parent_group_id': secgroup['id'], |
315 | + 'protocol': 'tcp', |
316 | + 'from_port': 80, |
317 | + 'to_port': 81, |
318 | + 'cidr': '192.168.10.0/24'}) |
319 | + |
320 | + db.instance_add_security_group(admin_ctxt, instance_ref['id'], |
321 | + secgroup['id']) |
322 | + instance_ref = db.instance_get(admin_ctxt, instance_ref['id']) |
323 | + |
324 | + self.fw.add_instance(instance_ref) |
325 | + |
326 | + out_rules = self.fw.modify_rules(self.in_rules) |
327 | + |
328 | + in_rules = filter(lambda l: not l.startswith('#'), self.in_rules) |
329 | + for rule in in_rules: |
330 | + if not 'nova' in rule: |
331 | + self.assertTrue(rule in out_rules, |
332 | + 'Rule went missing: %s' % rule) |
333 | + |
334 | + print '\n'.join(out_rules) |
335 | + |
336 | + |
337 | class NWFilterTestCase(test.TestCase): |
338 | - |
339 | def setUp(self): |
340 | super(NWFilterTestCase, self).setUp() |
341 | |
342 | @@ -224,7 +315,8 @@ |
343 | |
344 | self.fake_libvirt_connection = Mock() |
345 | |
346 | - self.fw = libvirt_conn.NWFilterFirewall(self.fake_libvirt_connection) |
347 | + self.fw = libvirt_conn.NWFilterFirewall( |
348 | + lambda: self.fake_libvirt_connection) |
349 | |
350 | def tearDown(self): |
351 | self.manager.delete_project(self.project) |
352 | @@ -337,7 +429,7 @@ |
353 | self.security_group.id) |
354 | instance = db.instance_get(self.context, inst_id) |
355 | |
356 | - self.fw.setup_base_nwfilters() |
357 | - self.fw.setup_nwfilters_for_instance(instance) |
358 | + self.fw.setup_basic_filtering(instance) |
359 | + self.fw.prepare_instance_filter(instance) |
360 | _ensure_all_called() |
361 | self.teardown_security_group() |
362 | |
363 | === modified file 'nova/virt/libvirt_conn.py' |
364 | --- nova/virt/libvirt_conn.py 2011-01-04 05:26:41 +0000 |
365 | +++ nova/virt/libvirt_conn.py 2011-01-10 10:38:10 +0000 |
366 | @@ -86,6 +86,9 @@ |
367 | flags.DEFINE_bool('allow_project_net_traffic', |
368 | True, |
369 | 'Whether to allow in project network traffic') |
370 | +flags.DEFINE_string('firewall_driver', |
371 | + 'nova.virt.libvirt_conn.IptablesFirewallDriver', |
372 | + 'Firewall driver (defaults to iptables)') |
373 | |
374 | |
375 | def get_connection(read_only): |
376 | @@ -125,16 +128,24 @@ |
377 | self._wrapped_conn = None |
378 | self.read_only = read_only |
379 | |
380 | + self.nwfilter = NWFilterFirewall(self._get_connection) |
381 | + |
382 | + if not FLAGS.firewall_driver: |
383 | + self.firewall_driver = self.nwfilter |
384 | + self.nwfilter.handle_security_groups = True |
385 | + else: |
386 | + self.firewall_driver = utils.import_object(FLAGS.firewall_driver) |
387 | + |
388 | def init_host(self): |
389 | - NWFilterFirewall(self._conn).setup_base_nwfilters() |
390 | + pass |
391 | |
392 | - @property |
393 | - def _conn(self): |
394 | + def _get_connection(self): |
395 | if not self._wrapped_conn or not self._test_connection(): |
396 | LOG.debug(_('Connecting to libvirt: %s'), self.libvirt_uri) |
397 | self._wrapped_conn = self._connect(self.libvirt_uri, |
398 | self.read_only) |
399 | return self._wrapped_conn |
400 | + _conn = property(_get_connection) |
401 | |
402 | def _test_connection(self): |
403 | try: |
404 | @@ -351,10 +362,13 @@ |
405 | instance['id'], |
406 | power_state.NOSTATE, |
407 | 'launching') |
408 | - NWFilterFirewall(self._conn).setup_nwfilters_for_instance(instance) |
409 | + |
410 | + self.nwfilter.setup_basic_filtering(instance) |
411 | + self.firewall_driver.prepare_instance_filter(instance) |
412 | self._create_image(instance, xml) |
413 | self._conn.createXML(xml, 0) |
414 | LOG.debug(_("instance %s: is running"), instance['name']) |
415 | + self.firewall_driver.apply_instance_filter(instance) |
416 | |
417 | timer = utils.LoopingCall(f=None) |
418 | |
419 | @@ -693,18 +707,55 @@ |
420 | domain = self._conn.lookupByName(instance_name) |
421 | return domain.interfaceStats(interface) |
422 | |
423 | - def refresh_security_group(self, security_group_id): |
424 | - fw = NWFilterFirewall(self._conn) |
425 | - fw.ensure_security_group_filter(security_group_id) |
426 | - |
427 | - |
428 | -class NWFilterFirewall(object): |
429 | + def refresh_security_group_rules(self, security_group_id): |
430 | + self.firewall_driver.refresh_security_group_rules(security_group_id) |
431 | + |
432 | + def refresh_security_group_members(self, security_group_id): |
433 | + self.firewall_driver.refresh_security_group_members(security_group_id) |
434 | + |
435 | + |
436 | +class FirewallDriver(object): |
437 | + def prepare_instance_filter(self, instance): |
438 | + """Prepare filters for the instance. |
439 | + |
440 | + At this point, the instance isn't running yet.""" |
441 | + raise NotImplementedError() |
442 | + |
443 | + def apply_instance_filter(self, instance): |
444 | + """Apply instance filter. |
445 | + |
446 | + Once this method returns, the instance should be firewalled |
447 | + appropriately. This method should, as far as possible, be a |
448 | + no-op: it is vastly preferable to set everything up in |
449 | + prepare_instance_filter. |
450 | + """ |
451 | + raise NotImplementedError() |
452 | + |
453 | + def refresh_security_group_rules(self, security_group_id): |
454 | + """Refresh security group rules from data store |
455 | + |
456 | + Gets called when a rule has been added to or removed from |
457 | + the security group.""" |
458 | + raise NotImplementedError() |
459 | + |
460 | + def refresh_security_group_members(self, security_group_id): |
461 | + """Refresh security group members from data store |
462 | + |
463 | + Gets called when an instance gets added to or removed from |
464 | + the security group.""" |
465 | + raise NotImplementedError() |
466 | + |
467 | + |
468 | +class NWFilterFirewall(FirewallDriver): |
469 | """ |
470 | This class implements a network filtering mechanism versatile |
471 | enough for EC2 style Security Group filtering by leveraging |
472 | libvirt's nwfilter. |
473 | |
474 | First, all instances get a filter ("nova-base-filter") applied. |
475 | + This filter provides some basic security such as protection against |
476 | + MAC spoofing, IP spoofing, and ARP spoofing. |
477 | + |
478 | This filter drops all incoming ipv4 and ipv6 connections. |
479 | Outgoing connections are never blocked. |
480 | |
481 | @@ -738,38 +789,79 @@ |
482 | |
483 | (*) This sentence brought to you by the redundancy department of |
484 | redundancy. |
485 | + |
486 | """ |
487 | |
488 | def __init__(self, get_connection): |
489 | - self._conn = get_connection |
490 | - |
491 | - nova_base_filter = '''<filter name='nova-base' chain='root'> |
492 | - <uuid>26717364-50cf-42d1-8185-29bf893ab110</uuid> |
493 | - <filterref filter='no-mac-spoofing'/> |
494 | - <filterref filter='no-ip-spoofing'/> |
495 | - <filterref filter='no-arp-spoofing'/> |
496 | - <filterref filter='allow-dhcp-server'/> |
497 | - <filterref filter='nova-allow-dhcp-server'/> |
498 | - <filterref filter='nova-base-ipv4'/> |
499 | - <filterref filter='nova-base-ipv6'/> |
500 | - </filter>''' |
501 | - |
502 | - nova_dhcp_filter = '''<filter name='nova-allow-dhcp-server' chain='ipv4'> |
503 | - <uuid>891e4787-e5c0-d59b-cbd6-41bc3c6b36fc</uuid> |
504 | - <rule action='accept' direction='out' |
505 | - priority='100'> |
506 | - <udp srcipaddr='0.0.0.0' |
507 | - dstipaddr='255.255.255.255' |
508 | - srcportstart='68' |
509 | - dstportstart='67'/> |
510 | - </rule> |
511 | - <rule action='accept' direction='in' |
512 | - priority='100'> |
513 | - <udp srcipaddr='$DHCPSERVER' |
514 | - srcportstart='67' |
515 | - dstportstart='68'/> |
516 | - </rule> |
517 | - </filter>''' |
518 | + self._libvirt_get_connection = get_connection |
519 | + self.static_filters_configured = False |
520 | + self.handle_security_groups = False |
521 | + |
522 | + def _get_connection(self): |
523 | + return self._libvirt_get_connection() |
524 | + _conn = property(_get_connection) |
525 | + |
526 | + def nova_dhcp_filter(self): |
527 | + """The standard allow-dhcp-server filter is an <ip> one, so it uses |
528 | + ebtables to allow traffic through. Without a corresponding rule in |
529 | + iptables, it'll get blocked anyway.""" |
530 | + |
531 | + return '''<filter name='nova-allow-dhcp-server' chain='ipv4'> |
532 | + <uuid>891e4787-e5c0-d59b-cbd6-41bc3c6b36fc</uuid> |
533 | + <rule action='accept' direction='out' |
534 | + priority='100'> |
535 | + <udp srcipaddr='0.0.0.0' |
536 | + dstipaddr='255.255.255.255' |
537 | + srcportstart='68' |
538 | + dstportstart='67'/> |
539 | + </rule> |
540 | + <rule action='accept' direction='in' |
541 | + priority='100'> |
542 | + <udp srcipaddr='$DHCPSERVER' |
543 | + srcportstart='67' |
544 | + dstportstart='68'/> |
545 | + </rule> |
546 | + </filter>''' |
547 | + |
548 | + def setup_basic_filtering(self, instance): |
549 | + """Set up basic filtering (MAC, IP, and ARP spoofing protection)""" |
550 | + logging.info('called setup_basic_filtering in nwfilter') |
551 | + |
552 | + if self.handle_security_groups: |
553 | + # No point in setting up a filter set that we'll be overriding |
554 | + # anyway. |
555 | + return |
556 | + |
557 | + logging.info('ensuring static filters') |
558 | + self._ensure_static_filters() |
559 | + |
560 | + instance_filter_name = self._instance_filter_name(instance) |
561 | + self._define_filter(self._filter_container(instance_filter_name, |
562 | + ['nova-base'])) |
563 | + |
564 | + def _ensure_static_filters(self): |
565 | + if self.static_filters_configured: |
566 | + return |
567 | + |
568 | + self._define_filter(self._filter_container('nova-base', |
569 | + ['no-mac-spoofing', |
570 | + 'no-ip-spoofing', |
571 | + 'no-arp-spoofing', |
572 | + 'allow-dhcp-server'])) |
573 | + self._define_filter(self.nova_base_ipv4_filter) |
574 | + self._define_filter(self.nova_base_ipv6_filter) |
575 | + self._define_filter(self.nova_dhcp_filter) |
576 | + self._define_filter(self.nova_vpn_filter) |
577 | + if FLAGS.allow_project_net_traffic: |
578 | + self._define_filter(self.nova_project_filter) |
579 | + |
580 | + self.static_filters_configured = True |
581 | + |
582 | + def _filter_container(self, name, filters): |
583 | + xml = '''<filter name='%s' chain='root'>%s</filter>''' % ( |
584 | + name, |
585 | + ''.join(["<filterref filter='%s'/>" % (f,) for f in filters])) |
586 | + return xml |
587 | |
588 | nova_vpn_filter = '''<filter name='nova-vpn' chain='root'> |
589 | <uuid>2086015e-cf03-11df-8c5d-080027c27973</uuid> |
590 | @@ -783,7 +875,7 @@ |
591 | retval = "<filter name='nova-base-ipv4' chain='ipv4'>" |
592 | for protocol in ['tcp', 'udp', 'icmp']: |
593 | for direction, action, priority in [('out', 'accept', 399), |
594 | - ('inout', 'drop', 400)]: |
595 | + ('in', 'drop', 400)]: |
596 | retval += """<rule action='%s' direction='%s' priority='%d'> |
597 | <%s /> |
598 | </rule>""" % (action, direction, |
599 | @@ -795,7 +887,7 @@ |
600 | retval = "<filter name='nova-base-ipv6' chain='ipv6'>" |
601 | for protocol in ['tcp', 'udp', 'icmp']: |
602 | for direction, action, priority in [('out', 'accept', 399), |
603 | - ('inout', 'drop', 400)]: |
604 | + ('in', 'drop', 400)]: |
605 | retval += """<rule action='%s' direction='%s' priority='%d'> |
606 | <%s-ipv6 /> |
607 | </rule>""" % (action, direction, |
608 | @@ -819,43 +911,49 @@ |
609 | # execute in a native thread and block current greenthread until done |
610 | tpool.execute(self._conn.nwfilterDefineXML, xml) |
611 | |
612 | - def setup_base_nwfilters(self): |
613 | - self._define_filter(self.nova_base_ipv4_filter) |
614 | - self._define_filter(self.nova_base_ipv6_filter) |
615 | - self._define_filter(self.nova_dhcp_filter) |
616 | - self._define_filter(self.nova_base_filter) |
617 | - self._define_filter(self.nova_vpn_filter) |
618 | - if FLAGS.allow_project_net_traffic: |
619 | - self._define_filter(self.nova_project_filter) |
620 | - |
621 | - def setup_nwfilters_for_instance(self, instance): |
622 | + def prepare_instance_filter(self, instance): |
623 | """ |
624 | Creates an NWFilter for the given instance. In the process, |
625 | it makes sure the filters for the security groups as well as |
626 | the base filter are all in place. |
627 | """ |
628 | |
629 | - nwfilter_xml = ("<filter name='nova-instance-%s' " |
630 | - "chain='root'>\n") % instance['name'] |
631 | - |
632 | if instance['image_id'] == FLAGS.vpn_image_id: |
633 | - nwfilter_xml += " <filterref filter='nova-vpn' />\n" |
634 | + base_filter = 'nova-vpn' |
635 | else: |
636 | - nwfilter_xml += " <filterref filter='nova-base' />\n" |
637 | + base_filter = 'nova-base' |
638 | + |
639 | + instance_filter_name = self._instance_filter_name(instance) |
640 | + instance_secgroup_filter_name = '%s-secgroup' % (instance_filter_name,) |
641 | + instance_filter_children = [base_filter, instance_secgroup_filter_name] |
642 | + instance_secgroup_filter_children = ['nova-base-ipv4', |
643 | + 'nova-base-ipv6', |
644 | + 'nova-allow-dhcp-server'] |
645 | + |
646 | + ctxt = context.get_admin_context() |
647 | |
648 | if FLAGS.allow_project_net_traffic: |
649 | - nwfilter_xml += " <filterref filter='nova-project' />\n" |
650 | - |
651 | - for security_group in instance.security_groups: |
652 | - self.ensure_security_group_filter(security_group['id']) |
653 | - |
654 | - nwfilter_xml += (" <filterref filter='nova-secgroup-%d' " |
655 | - "/>\n") % security_group['id'] |
656 | - nwfilter_xml += "</filter>" |
657 | - |
658 | - self._define_filter(nwfilter_xml) |
659 | - |
660 | - def ensure_security_group_filter(self, security_group_id): |
661 | + instance_filter_children += ['nova-project'] |
662 | + |
663 | + for security_group in db.security_group_get_by_instance(ctxt, |
664 | + instance['id']): |
665 | + |
666 | + self.refresh_security_group_rules(security_group['id']) |
667 | + |
668 | + instance_secgroup_filter_children += [('nova-secgroup-%s' % |
669 | + security_group['id'])] |
670 | + |
671 | + self._define_filter( |
672 | + self._filter_container(instance_secgroup_filter_name, |
673 | + instance_secgroup_filter_children)) |
674 | + |
675 | + self._define_filter( |
676 | + self._filter_container(instance_filter_name, |
677 | + instance_filter_children)) |
678 | + |
679 | + return |
680 | + |
681 | + def refresh_security_group_rules(self, security_group_id): |
682 | return self._define_filter( |
683 | self.security_group_to_nwfilter_xml(security_group_id)) |
684 | |
685 | @@ -886,3 +984,162 @@ |
686 | xml = "<filter name='nova-secgroup-%s' chain='ipv4'>%s</filter>" % \ |
687 | (security_group_id, rule_xml,) |
688 | return xml |
689 | + |
690 | + def _instance_filter_name(self, instance): |
691 | + return 'nova-instance-%s' % instance['name'] |
692 | + |
693 | + |
694 | +class IptablesFirewallDriver(FirewallDriver): |
695 | + def __init__(self, execute=None): |
696 | + self.execute = execute or utils.execute |
697 | + self.instances = set() |
698 | + |
699 | + def apply_instance_filter(self, instance): |
700 | + """No-op. Everything is done in prepare_instance_filter""" |
701 | + pass |
702 | + |
703 | + def remove_instance(self, instance): |
704 | + self.instances.remove(instance) |
705 | + |
706 | + def add_instance(self, instance): |
707 | + self.instances.add(instance) |
708 | + |
709 | + def prepare_instance_filter(self, instance): |
710 | + self.add_instance(instance) |
711 | + self.apply_ruleset() |
712 | + |
713 | + def apply_ruleset(self): |
714 | + current_filter, _ = self.execute('sudo iptables-save -t filter') |
715 | + current_lines = current_filter.split('\n') |
716 | + new_filter = self.modify_rules(current_lines) |
717 | + self.execute('sudo iptables-restore', |
718 | + process_input='\n'.join(new_filter)) |
719 | + |
720 | + def modify_rules(self, current_lines): |
721 | + ctxt = context.get_admin_context() |
722 | + # Remove any trace of nova rules. |
723 | + new_filter = filter(lambda l: 'nova-' not in l, current_lines) |
724 | + |
725 | + seen_chains = False |
726 | + for rules_index in range(len(new_filter)): |
727 | + if not seen_chains: |
728 | + if new_filter[rules_index].startswith(':'): |
729 | + seen_chains = True |
730 | + elif seen_chains: |
731 | + if not new_filter[rules_index].startswith(':'): |
732 | + break |
733 | + |
734 | + our_chains = [':nova-ipv4-fallback - [0:0]'] |
735 | + our_rules = ['-A nova-ipv4-fallback -j DROP'] |
736 | + |
737 | + our_chains += [':nova-local - [0:0]'] |
738 | + our_rules += ['-A FORWARD -j nova-local'] |
739 | + |
740 | + security_groups = set() |
741 | + # Add our chains |
742 | + # First, we add instance chains and rules |
743 | + for instance in self.instances: |
744 | + chain_name = self._instance_chain_name(instance) |
745 | + ip_address = self._ip_for_instance(instance) |
746 | + |
747 | + our_chains += [':%s - [0:0]' % chain_name] |
748 | + |
749 | + # Jump to the per-instance chain |
750 | + our_rules += ['-A nova-local -d %s -j %s' % (ip_address, |
751 | + chain_name)] |
752 | + |
753 | + # Always drop invalid packets |
754 | + our_rules += ['-A %s -m state --state ' |
755 | + 'INVALID -j DROP' % (chain_name,)] |
756 | + |
757 | + # Allow established connections |
758 | + our_rules += ['-A %s -m state --state ' |
759 | + 'ESTABLISHED,RELATED -j ACCEPT' % (chain_name,)] |
760 | + |
761 | + # Jump to each security group chain in turn |
762 | + for security_group in \ |
763 | + db.security_group_get_by_instance(ctxt, |
764 | + instance['id']): |
765 | + security_groups.add(security_group) |
766 | + |
767 | + sg_chain_name = self._security_group_chain_name(security_group) |
768 | + |
769 | + our_rules += ['-A %s -j %s' % (chain_name, sg_chain_name)] |
770 | + |
771 | + # Allow DHCP responses |
772 | + dhcp_server = self._dhcp_server_for_instance(instance) |
773 | + our_rules += ['-A %s -s %s -p udp --sport 67 ' |
774 | + '--dport 68 -j ACCEPT' % (chain_name, dhcp_server)] |
775 | + |
776 | + # If nothing matches, jump to the fallback chain |
777 | + our_rules += ['-A %s -j nova-ipv4-fallback' % (chain_name,)] |
778 | + |
779 | + # then, security group chains and rules |
780 | + for security_group in security_groups: |
781 | + chain_name = self._security_group_chain_name(security_group) |
782 | + our_chains += [':%s - [0:0]' % chain_name] |
783 | + |
784 | + rules = \ |
785 | + db.security_group_rule_get_by_security_group(ctxt, |
786 | + security_group['id']) |
787 | + |
788 | + for rule in rules: |
789 | + logging.info('%r', rule) |
790 | + args = ['-A', chain_name, '-p', rule.protocol] |
791 | + |
792 | + if rule.cidr: |
793 | + args += ['-s', rule.cidr] |
794 | + else: |
795 | + # Eventually, a mechanism to grant access for security |
796 | + # groups will turn up here. It'll use ipsets. |
797 | + continue |
798 | + |
799 | + if rule.protocol in ['udp', 'tcp']: |
800 | + if rule.from_port == rule.to_port: |
801 | + args += ['--dport', '%s' % (rule.from_port,)] |
802 | + else: |
803 | + args += ['-m', 'multiport', |
804 | + '--dports', '%s:%s' % (rule.from_port, |
805 | + rule.to_port)] |
806 | + elif rule.protocol == 'icmp': |
807 | + icmp_type = rule.from_port |
808 | + icmp_code = rule.to_port |
809 | + |
810 | + if icmp_type == '-1': |
811 | + icmp_type_arg = None |
812 | + else: |
813 | + icmp_type_arg = '%s' % icmp_type |
814 | + if not icmp_code == '-1': |
815 | + icmp_type_arg += '/%s' % icmp_code |
816 | + |
817 | + if icmp_type_arg: |
818 | + args += ['-m', 'icmp', '--icmp-type', icmp_type_arg] |
819 | + |
820 | + args += ['-j ACCEPT'] |
821 | + our_rules += [' '.join(args)] |
822 | + |
823 | + new_filter[rules_index:rules_index] = our_rules |
824 | + new_filter[rules_index:rules_index] = our_chains |
825 | + logging.info('new_filter: %s', '\n'.join(new_filter)) |
826 | + return new_filter |
827 | + |
828 | + def refresh_security_group_members(self, security_group): |
829 | + pass |
830 | + |
831 | + def refresh_security_group_rules(self, security_group): |
832 | + self.apply_ruleset() |
833 | + |
834 | + def _security_group_chain_name(self, security_group): |
835 | + return 'nova-sg-%s' % (security_group['id'],) |
836 | + |
837 | + def _instance_chain_name(self, instance): |
838 | + return 'nova-inst-%s' % (instance['id'],) |
839 | + |
840 | + def _ip_for_instance(self, instance): |
841 | + return db.instance_get_fixed_address(context.get_admin_context(), |
842 | + instance['id']) |
843 | + |
844 | + def _dhcp_server_for_instance(self, instance): |
845 | + network = db.project_get_network(context.get_admin_context(), |
846 | + instance['project_id']) |
847 | + return network['gateway'] |
Oh, I should mention that granting access to another security group (i.e. saying that instances that are members of security group A are allowed to access ports Y-Z on instances in security group W) doesn't work right now. It's not super hard, but I need to work a bit on the kernel support for it.
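For the group-to-group grants mentioned here, a hypothetical sketch of what ipset-backed rule generation could look like follows. The helper name, the set-naming scheme, and the dict-shaped rule are all invented for illustration (this is not code from the branch), and note that ipset command/match syntax varies between ipset versions:

```python
def rules_for_group_grant(chain_name, rule, member_ips):
    """Return (ipset_commands, iptables_rule) for a group-to-group grant.

    Hypothetical: keeps the granting group's member IPs in an ipset and
    matches on it with iptables' set match, so membership changes only
    require updating the set, not rewriting the ruleset.
    """
    set_name = 'nova-sg-%s' % rule['group_id']
    # Older ipset releases spell this 'ipset -N <name> iphash';
    # newer ones use 'ipset create <name> hash:ip'.
    ipset_cmds = ['ipset -N %s iphash' % set_name]
    ipset_cmds += ['ipset -A %s %s' % (set_name, ip) for ip in member_ips]

    args = ['-A', chain_name, '-p', rule['protocol'],
            '-m', 'set', '--match-set', set_name, 'src']
    if rule['protocol'] in ('tcp', 'udp'):
        args += ['--dport', '%s:%s' % (rule['from_port'], rule['to_port'])]
    args += ['-j', 'ACCEPT']
    return ipset_cmds, ' '.join(args)
```

On older kernels the set match was spelled `-m set --set <name> src`, which is presumably the kernel-support work alluded to above.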