Merge lp:~rackspace-titan/nova/ram-limits into lp:~hudson-openstack/nova/trunk
Status: | Merged |
---|---|
Approved by: | Brian Waldon |
Approved revision: | 1079 |
Merged at revision: | 1112 |
Proposed branch: | lp:~rackspace-titan/nova/ram-limits |
Merge into: | lp:~hudson-openstack/nova/trunk |
Diff against target: | 660 lines (+284/-85), 9 files modified:
bin/nova-manage (+5/-1)
nova/api/openstack/limits.py (+3/-1)
nova/api/openstack/views/limits.py (+28/-6)
nova/compute/api.py (+10/-5)
nova/db/api.py (+1/-1)
nova/db/sqlalchemy/api.py (+5/-4)
nova/quota.py (+78/-52)
nova/tests/api/openstack/test_limits.py (+87/-2)
nova/tests/test_quota.py (+67/-13) |
To merge this branch: | bzr merge lp:~rackspace-titan/nova/ram-limits |
Related bugs: | (none) |
Related blueprints: |
Openstack API 1.1 Finalization (Essential)
Enable ram limits in nova (Undefined)
|
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Vish Ishaya (community) | Approve | ||
Dan Prince (community) | Approve | ||
Brian Waldon (community) | Approve | ||
Review via email: mp+61510@code.launchpad.net |
Commit message
Description of the change
Several changes designed to bring the openstack api 1.1 closer to spec
- add ram limits to the nova compute quotas
- enable injected file limits and injected file size limits to be overridden in the quota database table
- expose quota limits as absolute limits in the openstack api 1.1 limits resource
- add support for controlling 'unlimited' quotas to nova-manage
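The 'unlimited' handling in nova-manage can be sketched as follows; the helper names here are illustrative stand-ins, with the logic condensed from the bin/nova-manage hunk in the preview diff:

```python
def normalize_quota_value(value):
    """Map the CLI string 'unlimited' to None, which the db stores as NULL."""
    if isinstance(value, str) and value.lower() == 'unlimited':
        return None
    return value


def format_quota_line(key, value):
    """Render a stored None back as 'unlimited' when printing quotas."""
    if value is None:
        value = 'unlimited'
    return '%s: %s' % (key, value)
```

With this, setting a quota to the string 'unlimited' writes NULL, and listing quotas prints `ram: unlimited` rather than `ram: None`.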
Mark Washenberger (markwash) wrote : | # |
I agree with all of those. But for
> 119: the name ram_count is a bit awkward. Change it to ram_total (or something better)?
I can't see a way to clean this up that is in scope here.
The others, I have corrected. Let me know if you see any other issues.
Brian Waldon (bcwaldon) wrote : | # |
> I agree with all of those. But for
>
> > 119: the name ram_count is a bit awkward. Change it to ram_total (or
> something better)?
>
> I can't see a way to clean this up that is in scope here.
>
> The others, I have corrected. Let me know if you see any other issues.
Looks good. I assume you meant to refer to my comments about line 139 above...
Mark Washenberger (markwash) wrote : | # |
> Looks good. I assume you meant to refer to my comments about line 139 above...
Yes. Sorry, that was what I meant.
Dan Prince (dan-prince) wrote : | # |
Hey Mark.
This code works; however, if an end user actually hits the limit, the error message (returned in the quota exception) is somewhat misleading.
If I do something like this:
I have 1 node:
root@nova1:~# nova list
+----+----------+--------+-----------+------------+
| ID | Name     | Status | Public IP | Private IP |
+----+----------+--------+-----------+------------+
| 1  | Server 1 | ACTIVE |           | 172.19.1.2 |
+----+----------+--------+-----------+------------+
I set my limits really low:
root@nova1:~# nova-manage project quota admin ram 1000
metadata_items: 128
instances: 10
injected_
injected_files: 5
volumes: 10
gigabytes: 1000
cores: 20
ram: 1000
floating_ips: 10
I then get an exception that looks like this:
root@nova1:~# nova boot --flavor 1 --image 3 test
InstanceLimitEx
---
Can we update the exception handling in the compute API so that this error message is related to the actual limit that is being hit?
Dan Prince (dan-prince) wrote : | # |
The error I get on the EC2 side is a bit different:
root@nova1:~# euca-run-instances -t m1.small -k test ami-00000003
InstanceLimitEx
--
In both cases (using either the OSAPI or the EC2 API) I am hitting a RAM limit.
Dan Prince (dan-prince) wrote : | # |
Never mind on the EC2 error being different. The reason it didn't match was because I used 'm1.small' instead of 'm1.tiny' (which would have matched flavor ID 1 in my installation).
We just need to fix the quota error message when a RAM limit is hit.
Mark Washenberger (markwash) wrote : | # |
Nice catch! It doesn't look like I caused this misbehavior, but this seems as good a time as any to fix it. So I did.
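The fix Mark refers to (visible in the nova/compute/api.py hunk of the preview diff) boils down to choosing the message based on whether any headroom remains; a standalone sketch:

```python
def instance_quota_message(remaining_instances):
    """Pick a quota-exceeded message that never reports a negative count.

    When the project is already over quota, the remaining count can be
    zero or negative, so the '%s more instances' wording would print a
    misleading (negative) number.
    """
    if remaining_instances <= 0:
        return ("Instance quota exceeded. You cannot run any "
                "more instances of this type.")
    return ("Instance quota exceeded. You can only run %s "
            "more instances of this type." % remaining_instances)
```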
Dan Prince (dan-prince) wrote : | # |
Hi Mark,
Okay. Your latest commit resolves the issue w/ the negative numbers in the error messages returned from the API when you hit a quota. Thanks!
---
Couple more things:
1) Why did you choose not to implement absolute limits for the v1.0 view (/v1.0/limits). I believe it is in the SPEC? I'd really like to have the absolute limits on the v1.0 call as well.
2) The HTTP return codes when you actually hit a quota limit are a weird mix. I get a 500-level error code when I hit a RAM limit (the one you are adding in this merge prop). Just for fun I tried another one for the personality limit and I got an HTTP 400 error code. Looking at the SPEC again, it looks like we should actually be returning an HTTP 413 'overLimit' error code. Perhaps this is something to handle outside the scope of this merge prop, but I thought it worth mentioning since it would make sense that all the limit-exceeded quotas should probably return a similar HTTP error code.
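For reference, the v1.1 view builder in this branch maps internal quota names to spec-named absolute limits roughly like this (condensed from `_build_absolute_limits` in the preview diff; `None` means unlimited and is simply omitted from the response):

```python
def build_absolute_limits(absolute_limits):
    """Translate internal quota keys to OSAPI 1.1 absolute-limit names."""
    limit_names = {
        "ram": ["maxTotalRAMSize"],
        "instances": ["maxTotalInstances"],
        "cores": ["maxTotalCores"],
        "metadata_items": ["maxServerMeta", "maxImageMeta"],
        "injected_files": ["maxPersonality"],
        "injected_file_content_bytes": ["maxPersonalitySize"],
    }
    limits = {}
    for name, value in absolute_limits.items():
        # skip unknown keys, and skip None (unlimited) values entirely
        if name in limit_names and value is not None:
            for api_name in limit_names[name]:
                limits[api_name] = value
    return limits
```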
Mark Washenberger (markwash) wrote : | # |
>
> 1) Why did you choose not to implement absolute limits for the v1.0 view
> (/v1.0/limits). I believe it is in the SPEC? I'd really like to have the
> absolute limits on the v1.0 call as well.
For some reason I thought the 1.0 api was out of scope for our current task--but that doesn't really make any sense to me now. I will try to add these limits to 1.0 now.
>
> 2) The HTTP return codes when you actually hit a quota limit are a weird mix.
> I get a 500 level error code when I hit a RAM limit (the one you are adding
> this this merge prop). Just for fun I tried another one for the personality
> limit and I got an HTTP 400 error code. Looking at the SPEC again it looks
> like we should actually be returning a HTTP 413 'overLimit' error code.
> Perhaps this is something to handle outside the scope of this merge prop but I
> thought it worth mentioning since it would make sense that all the limit
> exceeded quotas should probably return a similar HTTP error code.
I agree that this is a problem. Like you suggested, I would like for us to do a review of these error conditions and address them in a separate bug and branch. I think that makes sense especially since the discrepancy you mentioned is not introduced in my branch.
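For that follow-up branch, a 413 fault body could look roughly like this; the element names ('overLimit', 'retryAfter') follow the OSAPI 1.1 fault convention, but treat this shape as an assumption for illustration rather than code from this branch:

```python
def over_limit_fault(message, retry_after=None):
    """Sketch of a spec-shaped 'overLimit' fault body for HTTP 413."""
    body = {
        "overLimit": {
            "code": 413,
            "message": message,
        },
    }
    # retryAfter is optional; include it only when a retry hint is known
    if retry_after is not None:
        body["overLimit"]["retryAfter"] = retry_after
    return body
```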
Dan Prince (dan-prince) wrote : | # |
Thanks Mark.
Looks good. I filed a ticket for the 413 issue here: https:/
Chris Behrens (cbehrens) wrote : | # |
I just see a minor style thing that's different than most of the rest of nova when dictionaries are created. Take the 'limit_names' dict around line 59 for example. If matching the rest of nova, it would look like this:
limit_names = {"ram": ["maxTotalRAMSize"], ...}
I see this a number of other places. (Personally, I like your style, but I feel I should point out how the nova code looks..)
Mark Washenberger (markwash) wrote : | # |
Thanks for checking!
Based on my grepping, my style occurs 327 times in the nova libraries, and the other style occurs 410 times. Relatively even in my judgement. I also do not see a reference to this style choice either way in HACKING. Finally, I prefer my style because adding something to the end of the dict is a one line diff, whereas in the other style it is often 2 lines of change.
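The one-line-diff argument is easiest to see side by side (both dicts are equal; only the layout differs):

```python
# Style used in this branch: one entry per line with a trailing comma.
# Appending a new entry adds exactly one line to a diff.
quota_a = {
    "instances": 10,
    "cores": 20,
}

# The other common nova style: the closing brace rides the last entry.
# Appending an entry rewrites that last line, so the diff touches two lines.
quota_b = {"instances": 10,
           "cores": 20}
```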
Trey Morris (tr3buchet) wrote : | # |
May be a bit off here, but is there any chance we could have '0' mean 'unlimited'? This way there'd be no type checking and there would never be any converting to None and back. I don't know about you guys, but when I see an integer limit of 0 I usually assume it means no limit.
Mark Washenberger (markwash) wrote : | # |
Trey,
Right now the database and the config flags disagree (the flags say -1 == unlimited, the db says NULL == unlimited). After reading your comment, that seems more and more like a wart that should be eliminated.
However, I am a bit uncomfortable with making 0 the standard for unlimited, since it does seem like a valid limit in some cases. Suppose a deployer doesn't like floating ips and thus literally wants to limit them to zero for projects? There are also future quota cases where 0 might make sense.
How do you feel about standardizing on -1 instead of 0? And having that reflected in both the flags and the db entries?
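Standardizing on -1 in the flags while keeping NULL in the db could be normalized at the boundary; a minimal sketch, assuming -1 is adopted as the flag-side sentinel (the constant and helper names are mine):

```python
UNLIMITED = -1  # proposed flag-side sentinel


def flag_to_db_value(flag_value):
    """Convert the -1 flag sentinel to the db's NULL (None).

    0 stays a genuine limit of zero, so a deployer can still forbid
    floating ips outright without it meaning 'unlimited'.
    """
    if flag_value == UNLIMITED:
        return None
    return flag_value


def is_unlimited(db_value):
    """In the db, NULL/None means no limit."""
    return db_value is None
```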
OpenStack Infra (hudson-openstack) wrote : | # |
The attempt to merge lp:~rackspace-titan/nova/ram-limits into lp:nova failed. Below is the output from the failed tests.
AccountsTest, AdminAPITest, APITest, TestFunctional, TestLimiter, LimiterTest, ActionExtensionTest, TestFaults, ... (individual test results truncated in the captured output)
1079. By Mark Washenberger: pep8 fixes
Brian Waldon (bcwaldon) wrote : | # |
Re-approving since Mark fixed the pep8 issues
Preview Diff
1 | === modified file 'bin/nova-manage' |
2 | --- bin/nova-manage 2011-05-25 15:58:17 +0000 |
3 | +++ bin/nova-manage 2011-05-25 19:53:25 +0000 |
4 | @@ -417,12 +417,16 @@ |
5 | arguments: project_id [key] [value]""" |
6 | ctxt = context.get_admin_context() |
7 | if key: |
8 | + if value.lower() == 'unlimited': |
9 | + value = None |
10 | try: |
11 | db.quota_update(ctxt, project_id, key, value) |
12 | except exception.ProjectQuotaNotFound: |
13 | db.quota_create(ctxt, project_id, key, value) |
14 | - project_quota = quota.get_quota(ctxt, project_id) |
15 | + project_quota = quota.get_project_quotas(ctxt, project_id) |
16 | for key, value in project_quota.iteritems(): |
17 | + if value is None: |
18 | + value = 'unlimited' |
19 | print '%s: %s' % (key, value) |
20 | |
21 | def remove(self, project_id, user_id): |
22 | |
23 | === modified file 'nova/api/openstack/limits.py' |
24 | --- nova/api/openstack/limits.py 2011-05-03 16:25:19 +0000 |
25 | +++ nova/api/openstack/limits.py 2011-05-25 19:53:25 +0000 |
26 | @@ -30,6 +30,7 @@ |
27 | |
28 | from webob.dec import wsgify |
29 | |
30 | +from nova import quota |
31 | from nova import wsgi |
32 | from nova.api.openstack import common |
33 | from nova.api.openstack import faults |
34 | @@ -64,7 +65,8 @@ |
35 | """ |
36 | Return all global and rate limit information. |
37 | """ |
38 | - abs_limits = {} |
39 | + context = req.environ['nova.context'] |
40 | + abs_limits = quota.get_project_quotas(context, context.project_id) |
41 | rate_limits = req.environ.get("nova.limits", []) |
42 | |
43 | builder = self._get_view_builder(req) |
44 | |
45 | === modified file 'nova/api/openstack/views/limits.py' |
46 | --- nova/api/openstack/views/limits.py 2011-05-06 18:13:27 +0000 |
47 | +++ nova/api/openstack/views/limits.py 2011-05-25 19:53:25 +0000 |
48 | @@ -45,6 +45,34 @@ |
49 | |
50 | return output |
51 | |
52 | + def _build_absolute_limits(self, absolute_limits): |
53 | + """Builder for absolute limits |
54 | + |
55 | + absolute_limits should be given as a dict of limits. |
56 | + For example: {"ram": 512, "gigabytes": 1024}. |
57 | + |
58 | + """ |
59 | + limit_names = { |
60 | + "ram": ["maxTotalRAMSize"], |
61 | + "instances": ["maxTotalInstances"], |
62 | + "cores": ["maxTotalCores"], |
63 | + "metadata_items": ["maxServerMeta", "maxImageMeta"], |
64 | + "injected_files": ["maxPersonality"], |
65 | + "injected_file_content_bytes": ["maxPersonalitySize"], |
66 | + } |
67 | + limits = {} |
68 | + for name, value in absolute_limits.iteritems(): |
69 | + if name in limit_names and value is not None: |
70 | + for name in limit_names[name]: |
71 | + limits[name] = value |
72 | + return limits |
73 | + |
74 | + def _build_rate_limits(self, rate_limits): |
75 | + raise NotImplementedError() |
76 | + |
77 | + def _build_rate_limit(self, rate_limit): |
78 | + raise NotImplementedError() |
79 | + |
80 | |
81 | class ViewBuilderV10(ViewBuilder): |
82 | """Openstack API v1.0 limits view builder.""" |
83 | @@ -63,9 +91,6 @@ |
84 | "resetTime": rate_limit["resetTime"], |
85 | } |
86 | |
87 | - def _build_absolute_limits(self, absolute_limit): |
88 | - return {} |
89 | - |
90 | |
91 | class ViewBuilderV11(ViewBuilder): |
92 | """Openstack API v1.1 limits view builder.""" |
93 | @@ -104,6 +129,3 @@ |
94 | "unit": rate_limit["unit"], |
95 | "next-available": rate_limit["resetTime"], |
96 | } |
97 | - |
98 | - def _build_absolute_limits(self, absolute_limit): |
99 | - return {} |
100 | |
101 | === modified file 'nova/compute/api.py' |
102 | --- nova/compute/api.py 2011-05-24 22:51:29 +0000 |
103 | +++ nova/compute/api.py 2011-05-25 19:53:25 +0000 |
104 | @@ -95,14 +95,15 @@ |
105 | """ |
106 | if injected_files is None: |
107 | return |
108 | - limit = quota.allowed_injected_files(context) |
109 | + limit = quota.allowed_injected_files(context, len(injected_files)) |
110 | if len(injected_files) > limit: |
111 | raise quota.QuotaError(code="OnsetFileLimitExceeded") |
112 | path_limit = quota.allowed_injected_file_path_bytes(context) |
113 | - content_limit = quota.allowed_injected_file_content_bytes(context) |
114 | for path, content in injected_files: |
115 | if len(path) > path_limit: |
116 | raise quota.QuotaError(code="OnsetFilePathLimitExceeded") |
117 | + content_limit = quota.allowed_injected_file_content_bytes( |
118 | + context, len(content)) |
119 | if len(content) > content_limit: |
120 | raise quota.QuotaError(code="OnsetFileContentLimitExceeded") |
121 | |
122 | @@ -149,9 +150,13 @@ |
123 | pid = context.project_id |
124 | LOG.warn(_("Quota exceeeded for %(pid)s," |
125 | " tried to run %(min_count)s instances") % locals()) |
126 | - raise quota.QuotaError(_("Instance quota exceeded. You can only " |
127 | - "run %s more instances of this type.") % |
128 | - num_instances, "InstanceLimitExceeded") |
129 | + if num_instances <= 0: |
130 | + message = _("Instance quota exceeded. You cannot run any " |
131 | + "more instances of this type.") |
132 | + else: |
133 | + message = _("Instance quota exceeded. You can only run %s " |
134 | + "more instances of this type.") % num_instances |
135 | + raise quota.QuotaError(message, "InstanceLimitExceeded") |
136 | |
137 | self._check_metadata_properties_quota(context, metadata) |
138 | self._check_injected_file_quota(context, injected_files) |
139 | |
140 | === modified file 'nova/db/api.py' |
141 | --- nova/db/api.py 2011-05-11 18:32:28 +0000 |
142 | +++ nova/db/api.py 2011-05-25 19:53:25 +0000 |
143 | @@ -403,7 +403,7 @@ |
144 | |
145 | |
146 | def instance_data_get_for_project(context, project_id): |
147 | - """Get (instance_count, core_count) for project.""" |
148 | + """Get (instance_count, total_cores, total_ram) for project.""" |
149 | return IMPL.instance_data_get_for_project(context, project_id) |
150 | |
151 | |
152 | |
153 | === modified file 'nova/db/sqlalchemy/api.py' |
154 | --- nova/db/sqlalchemy/api.py 2011-05-16 22:36:42 +0000 |
155 | +++ nova/db/sqlalchemy/api.py 2011-05-25 19:53:25 +0000 |
156 | @@ -803,12 +803,13 @@ |
157 | def instance_data_get_for_project(context, project_id): |
158 | session = get_session() |
159 | result = session.query(func.count(models.Instance.id), |
160 | - func.sum(models.Instance.vcpus)).\ |
161 | + func.sum(models.Instance.vcpus), |
162 | + func.sum(models.Instance.memory_mb)).\ |
163 | filter_by(project_id=project_id).\ |
164 | filter_by(deleted=False).\ |
165 | first() |
166 | # NOTE(vish): convert None to 0 |
167 | - return (result[0] or 0, result[1] or 0) |
168 | + return (result[0] or 0, result[1] or 0, result[2] or 0) |
169 | |
170 | |
171 | @require_context |
172 | @@ -1499,7 +1500,7 @@ |
173 | ################### |
174 | |
175 | |
176 | -@require_admin_context |
177 | +@require_context |
178 | def quota_get(context, project_id, resource, session=None): |
179 | if not session: |
180 | session = get_session() |
181 | @@ -1513,7 +1514,7 @@ |
182 | return result |
183 | |
184 | |
185 | -@require_admin_context |
186 | +@require_context |
187 | def quota_get_all_by_project(context, project_id): |
188 | session = get_session() |
189 | result = {'project_id': project_id} |
190 | |
191 | === modified file 'nova/quota.py' |
192 | --- nova/quota.py 2011-05-11 18:01:41 +0000 |
193 | +++ nova/quota.py 2011-05-25 19:53:25 +0000 |
194 | @@ -28,6 +28,8 @@ |
195 | 'number of instances allowed per project') |
196 | flags.DEFINE_integer('quota_cores', 20, |
197 | 'number of instance cores allowed per project') |
198 | +flags.DEFINE_integer('quota_ram', 50 * 1024, |
199 | + 'megabytes of instance ram allowed per project') |
200 | flags.DEFINE_integer('quota_volumes', 10, |
201 | 'number of volumes allowed per project') |
202 | flags.DEFINE_integer('quota_gigabytes', 1000, |
203 | @@ -44,14 +46,28 @@ |
204 | 'number of bytes allowed per injected file path') |
205 | |
206 | |
207 | -def get_quota(context, project_id): |
208 | - rval = {'instances': FLAGS.quota_instances, |
209 | - 'cores': FLAGS.quota_cores, |
210 | - 'volumes': FLAGS.quota_volumes, |
211 | - 'gigabytes': FLAGS.quota_gigabytes, |
212 | - 'floating_ips': FLAGS.quota_floating_ips, |
213 | - 'metadata_items': FLAGS.quota_metadata_items} |
214 | - |
215 | +def _get_default_quotas(): |
216 | + defaults = { |
217 | + 'instances': FLAGS.quota_instances, |
218 | + 'cores': FLAGS.quota_cores, |
219 | + 'ram': FLAGS.quota_ram, |
220 | + 'volumes': FLAGS.quota_volumes, |
221 | + 'gigabytes': FLAGS.quota_gigabytes, |
222 | + 'floating_ips': FLAGS.quota_floating_ips, |
223 | + 'metadata_items': FLAGS.quota_metadata_items, |
224 | + 'injected_files': FLAGS.quota_max_injected_files, |
225 | + 'injected_file_content_bytes': |
226 | + FLAGS.quota_max_injected_file_content_bytes, |
227 | + } |
228 | + # -1 in the quota flags means unlimited |
229 | + for key in defaults.keys(): |
230 | + if defaults[key] == -1: |
231 | + defaults[key] = None |
232 | + return defaults |
233 | + |
234 | + |
235 | +def get_project_quotas(context, project_id): |
236 | + rval = _get_default_quotas() |
237 | quota = db.quota_get_all_by_project(context, project_id) |
238 | for key in rval.keys(): |
239 | if key in quota: |
240 | @@ -65,71 +81,81 @@ |
241 | return quota - used |
242 | |
243 | |
244 | -def allowed_instances(context, num_instances, instance_type): |
245 | - """Check quota and return min(num_instances, allowed_instances).""" |
246 | +def allowed_instances(context, requested_instances, instance_type): |
247 | + """Check quota and return min(requested_instances, allowed_instances).""" |
248 | project_id = context.project_id |
249 | context = context.elevated() |
250 | - num_cores = num_instances * instance_type['vcpus'] |
251 | - used_instances, used_cores = db.instance_data_get_for_project(context, |
252 | - project_id) |
253 | - quota = get_quota(context, project_id) |
254 | - allowed_instances = _get_request_allotment(num_instances, used_instances, |
255 | + requested_cores = requested_instances * instance_type['vcpus'] |
256 | + requested_ram = requested_instances * instance_type['memory_mb'] |
257 | + usage = db.instance_data_get_for_project(context, project_id) |
258 | + used_instances, used_cores, used_ram = usage |
259 | + quota = get_project_quotas(context, project_id) |
260 | + allowed_instances = _get_request_allotment(requested_instances, |
261 | + used_instances, |
262 | quota['instances']) |
263 | - allowed_cores = _get_request_allotment(num_cores, used_cores, |
264 | + allowed_cores = _get_request_allotment(requested_cores, used_cores, |
265 | quota['cores']) |
266 | + allowed_ram = _get_request_allotment(requested_ram, used_ram, quota['ram']) |
267 | allowed_instances = min(allowed_instances, |
268 | - int(allowed_cores // instance_type['vcpus'])) |
269 | - return min(num_instances, allowed_instances) |
270 | - |
271 | - |
272 | -def allowed_volumes(context, num_volumes, size): |
273 | - """Check quota and return min(num_volumes, allowed_volumes).""" |
274 | + allowed_cores // instance_type['vcpus'], |
275 | + allowed_ram // instance_type['memory_mb']) |
276 | + return min(requested_instances, allowed_instances) |
277 | + |
278 | + |
279 | +def allowed_volumes(context, requested_volumes, size): |
280 | + """Check quota and return min(requested_volumes, allowed_volumes).""" |
281 | project_id = context.project_id |
282 | context = context.elevated() |
283 | size = int(size) |
284 | - num_gigabytes = num_volumes * size |
285 | + requested_gigabytes = requested_volumes * size |
286 | used_volumes, used_gigabytes = db.volume_data_get_for_project(context, |
287 | project_id) |
288 | - quota = get_quota(context, project_id) |
289 | - allowed_volumes = _get_request_allotment(num_volumes, used_volumes, |
290 | + quota = get_project_quotas(context, project_id) |
291 | + allowed_volumes = _get_request_allotment(requested_volumes, used_volumes, |
292 | quota['volumes']) |
293 | - allowed_gigabytes = _get_request_allotment(num_gigabytes, used_gigabytes, |
294 | + allowed_gigabytes = _get_request_allotment(requested_gigabytes, |
295 | + used_gigabytes, |
296 | quota['gigabytes']) |
297 | allowed_volumes = min(allowed_volumes, |
298 | int(allowed_gigabytes // size)) |
299 | - return min(num_volumes, allowed_volumes) |
300 | - |
301 | - |
302 | -def allowed_floating_ips(context, num_floating_ips): |
303 | - """Check quota and return min(num_floating_ips, allowed_floating_ips).""" |
304 | + return min(requested_volumes, allowed_volumes) |
305 | + |
306 | + |
307 | +def allowed_floating_ips(context, requested_floating_ips): |
308 | + """Check quota and return min(requested, allowed) floating ips.""" |
309 | project_id = context.project_id |
310 | context = context.elevated() |
311 | used_floating_ips = db.floating_ip_count_by_project(context, project_id) |
312 | - quota = get_quota(context, project_id) |
313 | - allowed_floating_ips = _get_request_allotment(num_floating_ips, |
314 | + quota = get_project_quotas(context, project_id) |
315 | + allowed_floating_ips = _get_request_allotment(requested_floating_ips, |
316 | used_floating_ips, |
317 | quota['floating_ips']) |
318 | - return min(num_floating_ips, allowed_floating_ips) |
319 | - |
320 | - |
321 | -def allowed_metadata_items(context, num_metadata_items): |
322 | - """Check quota; return min(num_metadata_items,allowed_metadata_items).""" |
323 | - project_id = context.project_id |
324 | - context = context.elevated() |
325 | - quota = get_quota(context, project_id) |
326 | - allowed_metadata_items = _get_request_allotment(num_metadata_items, 0, |
327 | - quota['metadata_items']) |
328 | - return min(num_metadata_items, allowed_metadata_items) |
329 | - |
330 | - |
331 | -def allowed_injected_files(context): |
332 | + return min(requested_floating_ips, allowed_floating_ips) |
333 | + |
334 | + |
335 | +def _calculate_simple_quota(context, resource, requested): |
336 | + """Check quota for resource; return min(requested, allowed).""" |
337 | + quota = get_project_quotas(context, context.project_id) |
338 | + allowed = _get_request_allotment(requested, 0, quota[resource]) |
339 | + return min(requested, allowed) |
340 | + |
341 | + |
342 | +def allowed_metadata_items(context, requested_metadata_items): |
343 | + """Return the number of metadata items allowed.""" |
344 | + return _calculate_simple_quota(context, 'metadata_items', |
345 | + requested_metadata_items) |
346 | + |
347 | + |
348 | +def allowed_injected_files(context, requested_injected_files): |
349 | """Return the number of injected files allowed.""" |
350 | - return FLAGS.quota_max_injected_files |
351 | - |
352 | - |
353 | -def allowed_injected_file_content_bytes(context): |
354 | + return _calculate_simple_quota(context, 'injected_files', |
355 | + requested_injected_files) |
356 | + |
357 | + |
358 | +def allowed_injected_file_content_bytes(context, requested_bytes): |
359 | """Return the number of bytes allowed per injected file content.""" |
360 | - return FLAGS.quota_max_injected_file_content_bytes |
361 | + resource = 'injected_file_content_bytes' |
362 | + return _calculate_simple_quota(context, resource, requested_bytes) |
363 | |
364 | |
365 | def allowed_injected_file_path_bytes(context): |
366 | |
367 | === modified file 'nova/tests/api/openstack/test_limits.py' |
368 | --- nova/tests/api/openstack/test_limits.py 2011-04-30 11:14:20 +0000 |
369 | +++ nova/tests/api/openstack/test_limits.py 2011-05-25 19:53:25 +0000 |
370 | @@ -27,6 +27,7 @@ |
371 | |
372 | from xml.dom.minidom import parseString |
373 | |
374 | +import nova.context |
375 | from nova.api.openstack import limits |
376 | |
377 | |
378 | @@ -47,6 +48,13 @@ |
379 | self.time = 0.0 |
380 | self.stubs = stubout.StubOutForTesting() |
381 | self.stubs.Set(limits.Limit, "_get_time", self._get_time) |
382 | + self.absolute_limits = {} |
383 | + |
384 | + def stub_get_project_quotas(context, project_id): |
385 | + return self.absolute_limits |
386 | + |
387 | + self.stubs.Set(nova.quota, "get_project_quotas", |
388 | + stub_get_project_quotas) |
389 | |
390 | def tearDown(self): |
391 | """Run after each test.""" |
392 | @@ -75,6 +83,8 @@ |
393 | "action": "index", |
394 | "controller": "", |
395 | }) |
396 | + context = nova.context.RequestContext('testuser', 'testproject') |
397 | + request.environ["nova.context"] = context |
398 | return request |
399 | |
400 | def _populate_limits(self, request): |
401 | @@ -86,6 +96,18 @@ |
402 | request.environ["nova.limits"] = _limits |
403 | return request |
404 | |
405 | + def _setup_absolute_limits(self): |
406 | + self.absolute_limits = { |
407 | + 'instances': 5, |
408 | + 'cores': 8, |
409 | + 'ram': 2 ** 13, |
410 | + 'volumes': 21, |
411 | + 'gigabytes': 34, |
412 | + 'metadata_items': 55, |
413 | + 'injected_files': 89, |
414 | + 'injected_file_content_bytes': 144, |
415 | + } |
416 | + |
417 | def test_empty_index_json(self): |
418 | """Test getting empty limit details in JSON.""" |
419 | request = self._get_index_request() |
420 | @@ -103,6 +125,7 @@ |
421 | """Test getting limit details in JSON.""" |
422 | request = self._get_index_request() |
423 | request = self._populate_limits(request) |
424 | + self._setup_absolute_limits() |
425 | response = request.get_response(self.controller) |
426 | expected = { |
427 | "limits": { |
428 | @@ -124,7 +147,15 @@ |
429 | "remaining": 5, |
430 | "unit": "HOUR", |
431 | }], |
432 | - "absolute": {}, |
433 | + "absolute": { |
434 | + "maxTotalInstances": 5, |
435 | + "maxTotalCores": 8, |
436 | + "maxTotalRAMSize": 2 ** 13, |
437 | + "maxServerMeta": 55, |
438 | + "maxImageMeta": 55, |
439 | + "maxPersonality": 89, |
440 | + "maxPersonalitySize": 144, |
441 | + }, |
442 | }, |
443 | } |
444 | body = json.loads(response.body) |
445 | @@ -188,6 +219,8 @@ |
446 | "action": "index", |
447 | "controller": "", |
448 | }) |
449 | + context = nova.context.RequestContext('testuser', 'testproject') |
450 | + request.environ["nova.context"] = context |
451 | return request |
452 | |
453 | def _populate_limits(self, request): |
454 | @@ -218,6 +251,11 @@ |
455 | """Test getting limit details in JSON.""" |
456 | request = self._get_index_request() |
457 | request = self._populate_limits(request) |
458 | + self.absolute_limits = { |
459 | + 'ram': 512, |
460 | + 'instances': 5, |
461 | + 'cores': 21, |
462 | + } |
463 | response = request.get_response(self.controller) |
464 | expected = { |
465 | "limits": { |
466 | @@ -257,12 +295,59 @@ |
467 | }, |
468 | |
469 | ], |
470 | - "absolute": {}, |
471 | + "absolute": { |
472 | + "maxTotalRAMSize": 512, |
473 | + "maxTotalInstances": 5, |
474 | + "maxTotalCores": 21, |
475 | + }, |
476 | }, |
477 | } |
478 | body = json.loads(response.body) |
479 | self.assertEqual(expected, body) |
480 | |
481 | + def _test_index_absolute_limits_json(self, expected): |
482 | + request = self._get_index_request() |
483 | + response = request.get_response(self.controller) |
484 | + body = json.loads(response.body) |
485 | + self.assertEqual(expected, body['limits']['absolute']) |
486 | + |
487 | + def test_index_ignores_extra_absolute_limits_json(self): |
488 | + self.absolute_limits = {'unknown_limit': 9001} |
489 | + self._test_index_absolute_limits_json({}) |
490 | + |
491 | + def test_index_absolute_ram_json(self): |
492 | + self.absolute_limits = {'ram': 1024} |
493 | + self._test_index_absolute_limits_json({'maxTotalRAMSize': 1024}) |
494 | + |
495 | + def test_index_absolute_cores_json(self): |
496 | + self.absolute_limits = {'cores': 17} |
497 | + self._test_index_absolute_limits_json({'maxTotalCores': 17}) |
498 | + |
499 | + def test_index_absolute_instances_json(self): |
500 | + self.absolute_limits = {'instances': 19} |
501 | + self._test_index_absolute_limits_json({'maxTotalInstances': 19}) |
502 | + |
503 | + def test_index_absolute_metadata_json(self): |
504 | + # NOTE: both server metadata and image metadata are overloaded |
505 | + # into metadata_items |
506 | + self.absolute_limits = {'metadata_items': 23} |
507 | + expected = { |
508 | + 'maxServerMeta': 23, |
509 | + 'maxImageMeta': 23, |
510 | + } |
511 | + self._test_index_absolute_limits_json(expected) |
512 | + |
513 | + def test_index_absolute_injected_files(self): |
514 | + self.absolute_limits = { |
515 | + 'injected_files': 17, |
516 | + 'injected_file_content_bytes': 86753, |
517 | + } |
518 | + expected = { |
519 | + 'maxPersonality': 17, |
520 | + 'maxPersonalitySize': 86753, |
521 | + } |
522 | + self._test_index_absolute_limits_json(expected) |
523 | + |
524 | |
525 | class LimitMiddlewareTest(BaseLimitTestSuite): |
526 | """ |
527 | |
528 | === modified file 'nova/tests/test_quota.py' |
529 | --- nova/tests/test_quota.py 2011-05-09 18:44:39 +0000 |
530 | +++ nova/tests/test_quota.py 2011-05-25 19:53:25 +0000 |
531 | @@ -104,6 +104,10 @@ |
532 | num_instances = quota.allowed_instances(self.context, 100, |
533 | self._get_instance_type('m1.small')) |
534 | self.assertEqual(num_instances, 10) |
535 | + db.quota_create(self.context, self.project.id, 'ram', 3 * 2048) |
536 | + num_instances = quota.allowed_instances(self.context, 100, |
537 | + self._get_instance_type('m1.small')) |
538 | + self.assertEqual(num_instances, 3) |
539 | |
540 | # metadata_items |
541 | too_many_items = FLAGS.quota_metadata_items + 1000 |
542 | @@ -120,7 +124,8 @@ |
543 | |
544 | def test_unlimited_instances(self): |
545 | FLAGS.quota_instances = 2 |
546 | - FLAGS.quota_cores = 1000 |
547 | + FLAGS.quota_ram = -1 |
548 | + FLAGS.quota_cores = -1 |
549 | instance_type = self._get_instance_type('m1.small') |
550 | num_instances = quota.allowed_instances(self.context, 100, |
551 | instance_type) |
552 | @@ -133,8 +138,25 @@ |
553 | instance_type) |
554 | self.assertEqual(num_instances, 101) |
555 | |
556 | + def test_unlimited_ram(self): |
557 | + FLAGS.quota_instances = -1 |
558 | + FLAGS.quota_ram = 2 * 2048 |
559 | + FLAGS.quota_cores = -1 |
560 | + instance_type = self._get_instance_type('m1.small') |
561 | + num_instances = quota.allowed_instances(self.context, 100, |
562 | + instance_type) |
563 | + self.assertEqual(num_instances, 2) |
564 | + db.quota_create(self.context, self.project.id, 'ram', None) |
565 | + num_instances = quota.allowed_instances(self.context, 100, |
566 | + instance_type) |
567 | + self.assertEqual(num_instances, 100) |
568 | + num_instances = quota.allowed_instances(self.context, 101, |
569 | + instance_type) |
570 | + self.assertEqual(num_instances, 101) |
571 | + |
572 | def test_unlimited_cores(self): |
573 | - FLAGS.quota_instances = 1000 |
574 | + FLAGS.quota_instances = -1 |
575 | + FLAGS.quota_ram = -1 |
576 | FLAGS.quota_cores = 2 |
577 | instance_type = self._get_instance_type('m1.small') |
578 | num_instances = quota.allowed_instances(self.context, 100, |
579 | @@ -150,7 +172,7 @@ |
580 | |
581 | def test_unlimited_volumes(self): |
582 | FLAGS.quota_volumes = 10 |
583 | - FLAGS.quota_gigabytes = 1000 |
584 | + FLAGS.quota_gigabytes = -1 |
585 | volumes = quota.allowed_volumes(self.context, 100, 1) |
586 | self.assertEqual(volumes, 10) |
587 | db.quota_create(self.context, self.project.id, 'volumes', None) |
588 | @@ -160,7 +182,7 @@ |
589 | self.assertEqual(volumes, 101) |
590 | |
591 | def test_unlimited_gigabytes(self): |
592 | - FLAGS.quota_volumes = 1000 |
593 | + FLAGS.quota_volumes = -1 |
594 | FLAGS.quota_gigabytes = 10 |
595 | volumes = quota.allowed_volumes(self.context, 100, 1) |
596 | self.assertEqual(volumes, 10) |
597 | @@ -274,10 +296,47 @@ |
598 | image_id='fake', |
599 | metadata=metadata) |
600 | |
601 | - def test_allowed_injected_files(self): |
602 | - self.assertEqual( |
603 | - quota.allowed_injected_files(self.context), |
604 | - FLAGS.quota_max_injected_files) |
605 | + def test_default_allowed_injected_files(self): |
606 | + FLAGS.quota_max_injected_files = 55 |
607 | + self.assertEqual(quota.allowed_injected_files(self.context, 100), 55) |
608 | + |
609 | + def test_overridden_allowed_injected_files(self): |
610 | + FLAGS.quota_max_injected_files = 5 |
611 | + db.quota_create(self.context, self.project.id, 'injected_files', 77) |
612 | + self.assertEqual(quota.allowed_injected_files(self.context, 100), 77) |
613 | + |
614 | + def test_unlimited_default_allowed_injected_files(self): |
615 | + FLAGS.quota_max_injected_files = -1 |
616 | + self.assertEqual(quota.allowed_injected_files(self.context, 100), 100) |
617 | + |
618 | + def test_unlimited_db_allowed_injected_files(self): |
619 | + FLAGS.quota_max_injected_files = 5 |
620 | + db.quota_create(self.context, self.project.id, 'injected_files', None) |
621 | + self.assertEqual(quota.allowed_injected_files(self.context, 100), 100) |
622 | + |
623 | + def test_default_allowed_injected_file_content_bytes(self): |
624 | + FLAGS.quota_max_injected_file_content_bytes = 12345 |
625 | + limit = quota.allowed_injected_file_content_bytes(self.context, 23456) |
626 | + self.assertEqual(limit, 12345) |
627 | + |
628 | + def test_overridden_allowed_injected_file_content_bytes(self): |
629 | + FLAGS.quota_max_injected_file_content_bytes = 12345 |
630 | + db.quota_create(self.context, self.project.id, |
631 | + 'injected_file_content_bytes', 5678) |
632 | + limit = quota.allowed_injected_file_content_bytes(self.context, 23456) |
633 | + self.assertEqual(limit, 5678) |
634 | + |
635 | + def test_unlimited_default_allowed_injected_file_content_bytes(self): |
636 | + FLAGS.quota_max_injected_file_content_bytes = -1 |
637 | + limit = quota.allowed_injected_file_content_bytes(self.context, 23456) |
638 | + self.assertEqual(limit, 23456) |
639 | + |
640 | + def test_unlimited_db_allowed_injected_file_content_bytes(self): |
641 | + FLAGS.quota_max_injected_file_content_bytes = 12345 |
642 | + db.quota_create(self.context, self.project.id, |
643 | + 'injected_file_content_bytes', None) |
644 | + limit = quota.allowed_injected_file_content_bytes(self.context, 23456) |
645 | + self.assertEqual(limit, 23456) |
646 | |
647 | def _create_with_injected_files(self, files): |
648 | api = compute.API(image_service=self.StubImageService()) |
649 | @@ -304,11 +363,6 @@ |
650 | self.assertRaises(quota.QuotaError, |
651 | self._create_with_injected_files, files) |
652 | |
653 | - def test_allowed_injected_file_content_bytes(self): |
654 | - self.assertEqual( |
655 | - quota.allowed_injected_file_content_bytes(self.context), |
656 | - FLAGS.quota_max_injected_file_content_bytes) |
657 | - |
658 | def test_max_injected_file_content_bytes(self): |
659 | max = FLAGS.quota_max_injected_file_content_bytes |
660 | content = ''.join(['a' for i in xrange(max)]) |
Great work, guys. I found some style issues:
119: the name ram_count is a bit awkward. Change it to ram_total (or something better)?
139: Is there no cleaner way to accomplish this?
198: The function name in nova/quota.py "get_quota" should be made plural (get_project_quotas?). It even uses a db api method that refers to multiple quotas. Thoughts?
216: Can you change num_ram and num_cores to required_ram and required_cores? It would help with function readability. Same goes for num_gigabytes in the allowed_volumes function.