Merge lp:~cbehrens/nova/servers-search into lp:~hudson-openstack/nova/trunk
- servers-search
- Merge into trunk
Status: Merged
Approved by: Josh Kearney
Approved revision: 1305
Merged at revision: 1410
Proposed branch: lp:~cbehrens/nova/servers-search
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1621 lines (+1110/-135), 12 files modified
  nova/api/ec2/cloud.py (+30/-12)
  nova/api/openstack/common.py (+33/-0)
  nova/api/openstack/servers.py (+80/-17)
  nova/api/openstack/views/servers.py (+1/-15)
  nova/compute/api.py (+57/-31)
  nova/db/api.py (+16/-10)
  nova/db/sqlalchemy/api.py (+149/-44)
  nova/db/sqlalchemy/models.py (+1/-0)
  nova/tests/api/openstack/fakes.py (+6/-2)
  nova/tests/api/openstack/test_servers.py (+275/-2)
  nova/tests/test_compute.py (+457/-1)
  nova/tests/test_metadata.py (+5/-1)
To merge this branch: bzr merge lp:~cbehrens/nova/servers-search
Related bugs: (none)
Related blueprints: Search API (Undefined)
Reviewer | Review Type | Status
---|---|---
Brian Waldon | community | Approve
Rick Harris | community | Approve
Brian Lamar | community | Abstain
Jay Pipes | community | Needs Fixing

Review via email: mp+67765@code.launchpad.net
Commit message
Description of the change
This adds the server search capabilities defined in the OS API v1.1 spec, plus more for admins.
For users, flavor=, image=, status=, and name= can be specified. name= supports regular expression matching.
Most other options are ignored. (Options outside of the spec like 'recurse_zones' and 'reservation_id' also still work.)
If admin_api is enabled and the context is an admin: along with the above, one can specify ip= and ip6=, which will do regular expression matching. Also, any other 'Instance' column name can be specified, so you can do regexp matching there as well. Unknown Instance columns are ignored.
Also fixes up fixed_ip=, returning a 404 instead of a 500 error, and handles this properly with zone recursion as well.
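As a hypothetical illustration of the query interface described above (the host, version prefix, and exact patterns are assumptions based on this description, not taken from the spec):

```
GET /v1.1/servers/detail?name=^web-.*&status=ACTIVE&flavor=1
```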
Jay Pipes (jaypipes) wrote:
Hi Chris!
Few little comments...
190 + # See if a valid search option was passed in.
191 + # Ignore unknown search options for possible forward compatibility.
192 + # Raise an exception if more than 1 search option is specified
193 + found_opt = None
194 + for opt in exclusive_opts:
195 + v = search_
196 + if v:
197 + if found_opt is None:
198 + found_opt = opt
199 + else:
200 + LOG.error(_("More than 1 mutually exclusive "
201 + "search option specified (%(found_opt)s and "
202 + "%(opt)s were both specified") % locals())
203 + raise exception.
204 + "More than 1 mutually exclusive "
205 + "search option specified (%(found_opt)s and "
206 + "%(opt)s were both specified") % locals())
Might be more succinctly written as:
found_opt = None
found_opts = [opt for opt in exclusive_opts
if search_
if len(found_opts) > 1:
found_opt_str = ", ".join(found_opts)
msg = _("More than 1 mutually exclusive "
"search option specified: %(found_opt_str)s")
% locals())
logger.
raise exception.
if found_opts:
found_opt = found_opts[0]
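A complete, runnable form of that sketch might look like the following; the function name and ValueError are hypothetical stand-ins for the truncated identifiers above:

```python
def check_exclusive_options(search_opts, exclusive_opts):
    """Return the single exclusive search option present, or None.

    ValueError stands in for the truncated nova exception class in
    the snippet above; the real code raises a nova exception.
    """
    found_opts = [opt for opt in exclusive_opts if search_opts.get(opt)]
    if len(found_opts) > 1:
        found_opt_str = ", ".join(found_opts)
        raise ValueError("More than 1 mutually exclusive search option "
                         "specified: %s" % found_opt_str)
    return found_opts[0] if found_opts else None
```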
I recognize (and agree with your decision) not to do regexp matching via the database. Not only is it not portable, it's not any more efficient to do that at the database level (still requires a scan of all pre-restricted rows anyway...).
However this method signature:
+def instance_
Looked strange, perhaps a copy/paste error? Should ipv6_regexp be name_regexp?
Cheers!
jay
Brian Lamar (blamar) wrote:
> I recognize (and agree with your decision) not to do regexp matching via the database. Not only is
> it not portable, it's not any more efficient to do that at the database level (still requires a
> scan of all pre-restricted rows anyway...).
Regular expressions are more expensive than LIKE matches (which, in their own right, are pretty expensive). Do we really want operators doing complex regexes? At that point we should be putting our data into a purpose-built search indexing solution like Lucene/
Jay Pipes (jaypipes) wrote:
> > I recognize (and agree with your decision) not to do regexp matching via the
> database. Not only is
> > it not portable, it's not any more efficient to do that at the database
> level (still requires a
> > scan of all pre-restricted rows anyway...).
>
> Regular expressions are more expensive than LIKE matches (which in their own
> right, are pretty expensive).
Actually, this is incorrect. LIKE '%something%' and column REGEXP 'someregexp' will produce identical query execution plans. The complexity of the REGEXP determines whether or not a simple string match such as '%something%' would be computationally more expensive to execute per row than a compiled regexp match.
> Do we really want operators doing complex
> regexs? At that point we should be putting our data into a purpose-built
> search indexing solution like Lucene/
> what they're good at.
Lucene/
I understand your concern about the regexp inefficiency. Just saying that it's not that much less efficient than doing a REGEXP or LIKE '%something%' expression in SQL. The same loop and match process is occurring in Python code versus C code. The problem is that not all DBs support the REGEXP operator...
Just my two cents,
-jay
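Jay's portability point can be demonstrated with SQLite, which parses the REGEXP operator but ships with no implementation of it; the application has to register one itself. A minimal sketch using the stdlib sqlite3 module (table and column names are illustrative):

```python
import re
import sqlite3

conn = sqlite3.connect(':memory:')
# SQLite maps "X REGEXP Y" to a call of the user function regexp(Y, X);
# without registering it, any REGEXP query raises an OperationalError.
conn.create_function(
    'REGEXP', 2,
    lambda pattern, value: re.search(pattern, value or '') is not None)

conn.execute('CREATE TABLE instances (display_name TEXT)')
conn.executemany('INSERT INTO instances VALUES (?)',
                 [('web-1',), ('db-1',)])

# Both predicates force a full scan of the (unindexed) rows:
like_rows = conn.execute(
    "SELECT display_name FROM instances "
    "WHERE display_name LIKE 'web%'").fetchall()
regexp_rows = conn.execute(
    "SELECT display_name FROM instances "
    "WHERE display_name REGEXP '^web-'").fetchall()
assert like_rows == regexp_rows == [('web-1',)]
```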
Chris Behrens (cbehrens) wrote:
Brian:
Yeah, I'm aware of everything in your first comment. I started out using 'like', but I ditched it for a few reasons.
1) Some of the things I'm matching against are not stored in the database. IPv6 addresses and the instance 'name', in particular. So, I'd have to build search functions in Python that can parse '%..%'. That... or I'd end up supporting regular expressions for some queries and the SQL 'like' format for others.
2) Related to #1 in a way, what if the backend ends up not being SQL at all?
3) 'like' does a full table scan anyway and regular expressions are a little more flexible.
That said, using 'like' is going to be more efficient because I think it'll result in fewer 'join's. But, really, there are way too many joins in all of the previously existing instance-getting code anyway. I duplicated what was used previously, but I don't think all of them are necessary. Someone really needs to go through and do an audit on all of these DB calls and find a way to not have to join stuff that isn't going to be used by the caller. I'd rather that not have to be done as a part of this merge... as that'll be a huge task. :)
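Point 1 above can be sketched as follows: filtering on a value that isn't stored in the database has to happen in Python regardless of backend (the helper name and dict shape here are hypothetical, not nova's actual code):

```python
import re

def filter_instances_by_name(instances, name_pattern):
    """Filter instance records by a regex on a value that may be
    computed rather than stored in the database."""
    name_re = re.compile(name_pattern)
    return [inst for inst in instances
            if name_re.search(inst['display_name'])]
```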
Chris Behrens (cbehrens) wrote:
Jay: Ah, thanks! I'll get those fixed up.
Brian Waldon (bcwaldon) wrote:
Is it necessary to have a separate db api method for each query string parameter? This seems like it could cause unnecessary growth in that module.
This doesn't seem to match the latest v1.1 spec. Is there a reason for the divergence?
Chris Behrens (cbehrens) wrote:
> Is it necessary to have a separate db api method for each query string
> parameter? This seems like it could cause unnecessary growth in that module.
>
> This doesn't seem to match the latest v1.1 spec. Is there a reason for the
> divergence?
Will check the spec.
As far as the separate db api methods... just to clarify... did you mean in db/api or actually in compute/api? Or both? Mostly everything can use the search "by_column" method I added. That tries to generically support columns by doing a 'getattr' on the Instance class for the column value to match. But, searching by 'ip', 'ipv6', and 'name' had to be separate because:
1) 'ip' matches against the FixedIp and FloatingIP tables, instead..
2) 'ipv6' isn't stored in the database
3) 'name' isn't stored in the database either... it's a @property of the class and a getattr() doesn't return the value directly... it returns a Property instance IIRC.
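Point 3 (getattr on the class returning a property object rather than the value) can be demonstrated with a minimal stand-in class; this is an illustration, not nova's actual Instance model:

```python
class FakeInstance(object):
    def __init__(self, display_name):
        self.display_name = display_name

    @property
    def name(self):
        # Computed on the fly, never stored in the database.
        return 'instance-%s' % self.display_name

# getattr on the *class* returns the property descriptor itself:
assert isinstance(getattr(FakeInstance, 'name'), property)
# Only getattr on an *instance* invokes the getter:
assert getattr(FakeInstance('web1'), 'name') == 'instance-web1'
```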
Brian Lamar (blamar) wrote:
> > > I recognize (and agree with your decision) not to do regexp matching via
> the
> > database. Not only is
> > > it not portable, it's not any more efficient to do that at the database
> > level (still requires a
> > > scan of all pre-restricted rows anyway...).
> >
> > Regular expressions are more expensive than LIKE matches (which in their own
> > right, are pretty expensive).
>
> Actually, this is incorrect. LIKE '%something%' and column REGEXP 'someregexp'
> will produce identical query execution plans. The complexity of the REGEXP
> determines whether or not a simple string match such as '%something%' would be
> computationally more expensive to execute per row than a compiled regexp
> match.
>
> > Do we really want operators doing complex
> > regexs? At that point we should be putting our data into a purpose-built
> > search indexing solution like Lucene/
> that's
> > what they're good at.
>
> Lucene/
> happening here is looking for a particular pattern in a short string. The
> solution presented here is flexible enough to query for various IP(v6) and
> name patterns without having to set up a separate fulltext indexing server for
> this kind of thing, which I think would be overboard.
>
> I understand your concern about the regexp inefficiency. Just saying that it's
> not that much less efficient than doing a REGEXP or LIKE '%something%'
> expression in SQL. The same loop and match process is occurring in Python code
> versus C code. The problem is that not all DBs support the REGEXP operator...
>
> Just my two cents,
> -jay
Wow, just gonna have to agree to disagree then... I'm really positive it's going to be tons more efficient to do simple wildcard matching vs. fully-fledged perl-compatible regular expressions. I'm all about not putting specialized code into projects and I see this as being search-specialized code. I'm going to bow out now because I don't want to further this discussion on a merge-prop (sorry Chris!!).
> 1) Some of the things I'm matching against are not stored in the database. IPv6 addresses and the
> instance 'name', in particular. So, I'd have to build search functions in python that can parse
> '%..%'. That.. or I'd end up supporting regular expressions for some queries and the sql 'like'
> format for others.
Yeah, I can't believe that v6 addresses aren't stored in the DB. :) Good point.
> 2) Related to #1 in a way, what if the backend ends up not being SQL at all?
Heh, good luck with this one, but the point I'm trying to make is compatible with this as well. I use ElasticSearch with some of my CouchDB projects and there are tons of indexers for all types of datastores.
> 3) 'like' does a full table scan anyway and regular expressions are a little more flexible.
Absolutely, I just feel that we should be leaving the database to do the data-storage, a search-indexer to do searching, and Python to tie everything together.
Going to abstain from this merge prop, my apologies for cluttering it with conversations that most assuredly should have happened at the Blueprint level.
Chris Behrens (cbehrens) wrote:
blamar: No problem, I understand. And FWIW, it's not 'pcre'.. it's just the python 're' module.. only slightly better performance-wise, probably. :)
Chris Behrens (cbehrens) wrote:
Ok. Per discussion with bcwaldon, I'm going to make the "name" query string option map to searching by display name. The spec says name=serverName, and I got clarification that serverName means what the user sees... which is the Instance.
The spec doesn't mention 'instance_name' (the Instance.name property in nova), but admins will want that. So, I'll make the "instance_name" option map to what "name" did, which was search by this instance name. instance_name will have admin_api and admin context checking.
I'll remove searching by 'server_name='. It's always NULL in my tables, anyway, and it's not clear what this is really supposed to be used for (I haven't looked in the code, honestly). It's not mentioned in the spec, and it doesn't seem to be an immediate need... so...
Chris Behrens (cbehrens) wrote:
After reviewing section 4.1.1 in the v1.1 API spec regarding 'list', I went ahead and added some of the other search options like 'image', 'flavor', and 'status'... along with making the previous changes that were discussed. I.e.:
name matches display_name
instance_name is admin API/admin context only
I also made ip and ip6 admin-only due to it not being in the spec.
Unit tests have been added for OS API
Brian Waldon (bcwaldon) wrote:
A branch recently went in that replaced all returns of fault-wrapped webob exceptions with raising of non-wrapped webob exceptions. Can you convert any of the "return faults.Fault(...)" to "raise ..."?
I think we may want to take the approach of ignoring unsupported query params rather than raising an exception (that's how Glance does it). We should be liberal in what we accept. A lot of the _get_items / _items code can get cleaned back up if we don't need to explicitly check filter params. Same goes for check_option_
Power states belong to the compute layer, and server statuses belong to the OSAPI. I think we should remove any OSAPI status-related code from the power_states module and place it in either api.openstack.
What is the 'fresh' query filter? I just started seeing that in novaclient requests.
Are there future plans to implement changes-since? I can see it is an accepted filter, just not respected.
In the DB API, why is there a _get_all_
Can you change _get_all_by_column to _get_all_
I don't see it necessary to list OSAPI-specific filters in the compute api. Can we fix that known_params?
I really want to be able to sort by multiple parameters, and I think most people using the API would expect to be able to do that. How hard would it be to support this?
Don't get me wrong here, I'm really excited to get this feature in. Having done this in Glance already, I know this isn't a small undertaking :)
Chris Behrens (cbehrens) wrote:
> A branch recently went in that replaced all returns of fault-wrapped webob
> exceptions with raising of non-wrapped webob exceptions. Can you convert any
> of the "return faults.Fault(...)" to "raise ..."?
Aha.. that explains something. :) Sure.
> I think we may want to take the approach of ignoring unsupported query params
> rather than raising an exception (that's how Glance does it). We should be
> liberal in what we accept. A lot of the _get_items / _items code can get
> cleaned back up if we don't need to explicitly check filter params. Same goes
> for check_option_
Ok. I was thinking about v1.1 and trying to 'guess' at what I should do here based on the possible response codes to the query. I agree with what you say, though. I'd rather allow and ignore search options we don't understand. That will clean things up slightly. I still need to check for permissions on certain search items that should be admin-api only, but I can cut the 2 lists of options down to 1.. (list of options that require admin)
>
> Power states belong to the compute layer, and server statuses belong to the
> OSAPI. I think we should remove any OSAPI status-related code from the
> power_states module and place it in either api.openstack.
> api.openstack.
I had the same debate. I was going to move it to api.openstack.
That or I could leave it the way it is. :) Changing it probably makes sense so there's clear separation.
>
> What is the 'fresh' query filter? I just started seeing that in novaclient
> requests.
Honestly, I have no clue. :-/ But I know that novaclient sends it. I figured it was v1.0 equiv of 'changes-since' that's in v1.1.
>
> Are there future plans to implement changes-since? I can see it is an accepted
> filter, just not respected.
Yeah, good question. I'm already going beyond the scope of the story that was given to us for the sprint, and that one seemed a little more difficult. Wasn't sure if compute.api should filter that.. or let OS API filter on the return to the client. I thought maybe titan would have a better idea of how that should be done. ;)
>
> In the DB API, why is there a _get_all_
> _get_all_
> the name filter?
Let me double check that. I saw a problem early on with this, because 'name' is a @property in the Instance class and I seemed to be getting back a property object with getattr. I bet I was doing testing with a class and not an instance of a class at the time, however. I bet this can be consolidated.
> ...
Brian Waldon (bcwaldon) wrote:
> > Power states belong to the compute layer, and server statuses belong to the
> > OSAPI. I think we should remove any OSAPI status-related code from the
> > power_states module and place it in either api.openstack.
> > api.openstack.
>
> I had the same debate. I was going to move it to api.openstack.
> I realized that compute.api needs to send the 'status' to search by to child
> zones. This means that compute.api would end up making a call into
> api.openstack.
> and that didn't feel quite right. I think I can solve that differently. I
> can make the OS API pass the 'status' to search for through to compute.api
> (along with 'state' which is the translation of status to power state).
> compute.api will ignore 'status' and use 'state' but it'll pass 'status' on to
> novaclient to talk to zones.
>
> That or I could leave it the way it is. :) Changing it probably makes sense
> so there's clear separation.
Yeah, I really don't want the compute api (the power states module) to depend on concepts defined in the openstack api.
> > Are there future plans to implement changes-since? I can see it is an
> accepted
> > filter, just not respected.
>
> Yeah, good question. I'm already going beyond the scope of the story that was
> given to us for the sprint, and that one seemed a little more difficult.
> Wasn't sure if compute.api should filter that.. or let OS API filter on the
> return to the client. I thought maybe titan would have a better idea of how
> that should be done. ;)
We got it :)
> > I don't see it necessary to list OSAPI-specific filters in the compute api.
> > Can we fix that known_params?
>
> Well, I need a table to map the search parameter to the function to call with
> its arguments. In the initial merge prop, I didn't have this as I used
> getattr(self, '_get_all_by_%s' % opt_name).. and called that function. The
> problem is that the generic 'column' ones need an extra argument. The
> original merge prop had a list of those search options that could be done
> generically. I could change it back... but it'd also require moving all of
> the _get_all_by* calls back outside of 'def get_all' so I could use
> getattr(self, ..) again.
>
> Hm.. did that make sense? We can chat on IRC to clarify if needed.
What bothers me is the inclusion of non-filter query params (like changes-since). If those would just go away I would be more okay with it.
> > I really want to be able to sort by multiple parameters, and I think most
> > people using the API would expect to be able to do that. How hard would it
> be
> > to support this?
>
> Yeah, I hear you. The story I was given says to throw an error if multiple
> are given, so that's the main reason why I didn't code support for it. :) My
> instinct is to say it's rather difficult... but when I really start to think
> about it, I'm not sure it's so bad. I can think about it some more while
> doing the other fixups.
I don't mean to toot my own horn here, but look at glance and see if that approach would work.
Chris Behrens (cbehrens) wrote:
Alright. This is almost a rewrite compared to the original merge prop now. I updated the description up at the top to better reflect what this ends up adding.
Fire away.. :)
Ed Leafe (ed-leafe) wrote:
Only got through the first 700 lines of the diff so far, but here's what I have:
Lines 134-143 can be more efficiently written:
def power_states_
"""Map the server status string to a list of power states"""
low_status = status.
power_states = [val for key, val in _STATUS_
if key.lower() == low_status
and val is not None]
return power_states
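A self-contained version of the comprehension Ed sketches might look like this. The numeric power-state values below are illustrative placeholders, not nova's real constants, and the map is a subset of the _STATUS_MAP added in the diff:

```python
# Illustrative power state -> OSAPI status map (subset; values are
# placeholders standing in for nova.compute.power_state constants).
_STATUS_MAP = {
    None: 'BUILD',
    0: 'BUILD',      # NOSTATE
    1: 'ACTIVE',     # RUNNING
    2: 'ACTIVE',     # BLOCKED
    7: 'SUSPENDED',  # SUSPENDED
}

def power_states_from_status(status):
    """Map the server status string to a list of power states."""
    low_status = status.lower()
    return [state for state, mapped in _STATUS_MAP.items()
            if state is not None and mapped.lower() == low_status]
```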
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The method '_check_
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Line 643: What is this supposed to be doing? Did you maybe mean to copy the 'filters' dict to another name?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Lines 660-664 can be more efficiently written:
for filter_name in query_filters:
query_prefix = _exact_
Chris Behrens (cbehrens) wrote:
Ed:
Ok re: _check_
643: I meant to make a copy of it to use going forward.. I just reassigned it to same variable name. I'm making a copy because I'm modifying it and I can't have it affect the caller.
660-664: Good call... makes sense.
Chris Behrens (cbehrens) wrote:
Updated. Added a comment around #643 to clarify the .copy().
Brian Waldon (bcwaldon) wrote:
This looks great, Mr. Behrens. I like the db-api implementation a lot more. I've got just a few comments:
I noticed you linked a lazy load bug to this. Can you explain how this branch addresses that?
164: Instead of using an instance variable, can you implement this with constant tuples that are passed into _remove_
217: I'm not crazy about this method name. How about just _get_servers?
215: This is only called once, so does it make sense to break it out into its own method? I think it all fits under _servers_
392: There's a pattern here that is repeated a few times. Maybe you can create a mapping dictionary for converting key names
419: This behavior will also happen through OSAPI, right? I'm not so sure we want to support special cases like this. It's a totally valid request to filter by fixed_ip and name where no results are returned.
480: This behavior seems odd. I'm thinking we should just return an empty list. What do you think?
Chris Behrens (cbehrens) wrote:
lazy load: The fix is actually described in the thread with the bug, but: OS API accesses Instance[
164: Sure
217: Ok.
215: I'd originally broken it out because I split the caller into the 1.0/1.1 Controllers, so it had 2 references. I'll merge it back in, since that's not the case anymore.
392: I didn't use a mapping table because the flipping needs to happen in a certain order (since I'm renaming name -> display_name, but instance_name to name; doing those in reverse order would clobber the 'name' if both exist in the query). But I know how I can solve that.
419: Yeah, I did this because I'm not sure 'fixed_ip' should be used via OS API. One should use 'ip' instead. If it's for EC2 only, fixed_ip this way is a lot more efficient query since it looks up via FixedIps table directly with no joins to find the entry. Decisions like these are difficult, as I'm not entirely sure what's most desirable to the community...but generally there is some sort of reason to the method of my madness. :) I'm cool with changing it.
480: I feel this way, too.. I was just trying to match the original behavior in single zone. I guess if EC2 should really return an error in this case, the error return should happen in the ec2 controller if it gets an empty list back.
Chris Behrens (cbehrens) wrote:
Ok, I think all of those issues are addressed.
Brian Waldon (bcwaldon) wrote:
Thanks for addressing those issues. Code looks good to me, now, but I'm seeing one test failure:
=======
FAIL: test_get_
-------
Traceback (most recent call last):
File "/Users/
self.
AssertionError: 500 != 200
-------
...
(nova.api.
-------
Rick Harris (rconradharris) wrote:
Tests pass (for me), and code looks good. Really nice work here Chris.
Chris Behrens (cbehrens) wrote:
bcwaldon: Issue is this:
[15:51:38] <comstud> 1173 self.assertEqua
[15:51:39] <comstud> 1174 [power_
The list is in reverse order in your test and I'm assuming order here. I've just wrapped those lists with set() so it does a set comparison. Try now. :)
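The fix reduces to wrapping both sides of the assertion in set(), which makes the comparison independent of ordering (the values below are stand-ins for the power-state lists in the actual test):

```python
expected_states = [1, 2]   # order assumed by the test
returned_states = [2, 1]   # order actually produced (e.g. dict iteration)

# A direct list comparison is order-sensitive and fails here:
assert expected_states != returned_states
# A set comparison ignores ordering, so the test no longer assumes order:
assert set(expected_states) == set(returned_states)
```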
Brian Waldon (bcwaldon) wrote:
Yep, all good now. Thank you, Dr. Behrens!
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/api/
text conflict in nova/tests/
Chris Behrens (cbehrens) wrote:
Merged trunk and resolved 2 conflicts. Need the approves again.
Preview Diff
1 | === modified file 'nova/api/ec2/cloud.py' |
2 | --- nova/api/ec2/cloud.py 2011-08-09 12:47:47 +0000 |
3 | +++ nova/api/ec2/cloud.py 2011-08-09 20:27:18 +0000 |
4 | @@ -213,8 +213,9 @@ |
5 | |
6 | def _get_mpi_data(self, context, project_id): |
7 | result = {} |
8 | + search_opts = {'project_id': project_id} |
9 | for instance in self.compute_api.get_all(context, |
10 | - project_id=project_id): |
11 | + search_opts=search_opts): |
12 | if instance['fixed_ips']: |
13 | line = '%s slots=%d' % (instance['fixed_ips'][0]['address'], |
14 | instance['vcpus']) |
15 | @@ -264,8 +265,13 @@ |
16 | |
17 | def get_metadata(self, address): |
18 | ctxt = context.get_admin_context() |
19 | - instance_ref = self.compute_api.get_all(ctxt, fixed_ip=address) |
20 | - if instance_ref is None: |
21 | + search_opts = {'fixed_ip': address} |
22 | + try: |
23 | + instance_ref = self.compute_api.get_all(ctxt, |
24 | + search_opts=search_opts) |
25 | + except exception.NotFound: |
26 | + instance_ref = None |
27 | + if not instance_ref: |
28 | return None |
29 | |
30 | # This ensures that all attributes of the instance |
31 | @@ -1086,11 +1092,16 @@ |
32 | return result |
33 | |
34 | def describe_instances(self, context, **kwargs): |
35 | - return self._format_describe_instances(context, **kwargs) |
36 | + # Optional DescribeInstances argument |
37 | + instance_id = kwargs.get('instance_id', None) |
38 | + return self._format_describe_instances(context, |
39 | + instance_id=instance_id) |
40 | |
41 | def describe_instances_v6(self, context, **kwargs): |
42 | - kwargs['use_v6'] = True |
43 | - return self._format_describe_instances(context, **kwargs) |
44 | + # Optional DescribeInstancesV6 argument |
45 | + instance_id = kwargs.get('instance_id', None) |
46 | + return self._format_describe_instances(context, |
47 | + instance_id=instance_id, use_v6=True) |
48 | |
49 | def _format_describe_instances(self, context, **kwargs): |
50 | return {'reservationSet': self._format_instances(context, **kwargs)} |
51 | @@ -1152,7 +1163,8 @@ |
52 | result['groupSet'] = CloudController._convert_to_set( |
53 | security_group_names, 'groupId') |
54 | |
55 | - def _format_instances(self, context, instance_id=None, **kwargs): |
56 | + def _format_instances(self, context, instance_id=None, use_v6=False, |
57 | + **search_opts): |
58 | # TODO(termie): this method is poorly named as its name does not imply |
59 | # that it will be making a variety of database calls |
60 | # rather than simply formatting a bunch of instances that |
61 | @@ -1163,11 +1175,17 @@ |
62 | instances = [] |
63 | for ec2_id in instance_id: |
64 | internal_id = ec2utils.ec2_id_to_id(ec2_id) |
65 | - instance = self.compute_api.get(context, |
66 | - instance_id=internal_id) |
67 | + try: |
68 | + instance = self.compute_api.get(context, internal_id) |
69 | + except exception.NotFound: |
70 | + continue |
71 | instances.append(instance) |
72 | else: |
73 | - instances = self.compute_api.get_all(context, **kwargs) |
74 | + try: |
75 | + instances = self.compute_api.get_all(context, |
76 | + search_opts=search_opts) |
77 | + except exception.NotFound: |
78 | + instances = [] |
79 | for instance in instances: |
80 | if not context.is_admin: |
81 | if instance['image_ref'] == str(FLAGS.vpn_image_id): |
82 | @@ -1189,7 +1207,7 @@ |
83 | fixed_addr = fixed['address'] |
84 | if fixed['floating_ips']: |
85 | floating_addr = fixed['floating_ips'][0]['address'] |
86 | - if fixed['network'] and 'use_v6' in kwargs: |
87 | + if fixed['network'] and use_v6: |
88 | i['dnsNameV6'] = ipv6.to_global( |
89 | fixed['network']['cidr_v6'], |
90 | fixed['virtual_interface']['address'], |
91 | @@ -1326,7 +1344,7 @@ |
92 | 'AvailabilityZone'), |
93 | block_device_mapping=kwargs.get('block_device_mapping', {})) |
94 | return self._format_run_instances(context, |
95 | - instances[0]['reservation_id']) |
96 | + reservation_id=instances[0]['reservation_id']) |
97 | |
98 | def _do_instance(self, action, context, ec2_id): |
99 | instance_id = ec2utils.ec2_id_to_id(ec2_id) |
100 | |
101 | === modified file 'nova/api/openstack/common.py' |
102 | --- nova/api/openstack/common.py 2011-08-09 16:33:13 +0000 |
103 | +++ nova/api/openstack/common.py 2011-08-09 20:27:18 +0000 |
104 | @@ -27,6 +27,7 @@ |
105 | from nova import log as logging |
106 | from nova import quota |
107 | from nova.api.openstack import wsgi |
108 | +from nova.compute import power_state as compute_power_state |
109 | |
110 | |
111 | LOG = logging.getLogger('nova.api.openstack.common') |
112 | @@ -37,6 +38,38 @@ |
113 | XML_NS_V11 = 'http://docs.openstack.org/compute/api/v1.1' |
114 | |
115 | |
116 | +_STATUS_MAP = { |
117 | + None: 'BUILD', |
118 | + compute_power_state.NOSTATE: 'BUILD', |
119 | + compute_power_state.RUNNING: 'ACTIVE', |
120 | + compute_power_state.BLOCKED: 'ACTIVE', |
121 | + compute_power_state.SUSPENDED: 'SUSPENDED', |
122 | + compute_power_state.PAUSED: 'PAUSED', |
123 | + compute_power_state.SHUTDOWN: 'SHUTDOWN', |
124 | + compute_power_state.SHUTOFF: 'SHUTOFF', |
125 | + compute_power_state.CRASHED: 'ERROR', |
126 | + compute_power_state.FAILED: 'ERROR', |
127 | + compute_power_state.BUILDING: 'BUILD', |
128 | +} |
129 | + |
130 | + |
131 | +def status_from_power_state(power_state): |
132 | + """Map the power state to the server status string""" |
133 | + return _STATUS_MAP[power_state] |
134 | + |
135 | + |
136 | +def power_states_from_status(status): |
137 | + """Map the server status string to a list of power states""" |
138 | + power_states = [] |
139 | + for power_state, status_map in _STATUS_MAP.iteritems(): |
140 | + # Skip the 'None' state |
141 | + if power_state is None: |
142 | + continue |
143 | + if status.lower() == status_map.lower(): |
144 | + power_states.append(power_state) |
145 | + return power_states |
146 | + |
147 | + |
148 | def get_pagination_params(request): |
149 | """Return marker, limit tuple from request. |
150 | |
151 | |
152 | === modified file 'nova/api/openstack/servers.py' |
153 | --- nova/api/openstack/servers.py 2011-08-09 16:33:13 +0000 |
154 | +++ nova/api/openstack/servers.py 2011-08-09 20:27:18 +0000 |
155 | @@ -44,7 +44,7 @@ |
156 | |
157 | |
158 | class Controller(object): |
159 | - """ The Server API controller for the OpenStack API """ |
160 | + """ The Server API base controller class for the OpenStack API """ |
161 | |
162 | def __init__(self): |
163 | self.compute_api = compute.API() |
164 | @@ -53,17 +53,21 @@ |
165 | def index(self, req): |
166 | """ Returns a list of server names and ids for a given user """ |
167 | try: |
168 | - servers = self._items(req, is_detail=False) |
169 | + servers = self._get_servers(req, is_detail=False) |
170 | except exception.Invalid as err: |
171 | return exc.HTTPBadRequest(explanation=str(err)) |
172 | + except exception.NotFound: |
173 | + return exc.HTTPNotFound() |
174 | return servers |
175 | |
176 | def detail(self, req): |
177 | """ Returns a list of server details for a given user """ |
178 | try: |
179 | - servers = self._items(req, is_detail=True) |
180 | + servers = self._get_servers(req, is_detail=True) |
181 | except exception.Invalid as err: |
182 | return exc.HTTPBadRequest(explanation=str(err)) |
183 | + except exception.NotFound as err: |
184 | + return exc.HTTPNotFound() |
185 | return servers |
186 | |
187 | def _build_view(self, req, instance, is_detail=False): |
188 | @@ -75,22 +79,55 @@ |
189 | def _action_rebuild(self, info, request, instance_id): |
190 | raise NotImplementedError() |
191 | |
192 | - def _items(self, req, is_detail): |
193 | - """Returns a list of servers for a given user. |
194 | - |
195 | - builder - the response model builder |
196 | + def _get_servers(self, req, is_detail): |
197 | + """Returns a list of servers, taking into account any search |
198 | + options specified. |
199 | """ |
200 | - query_str = req.str_GET |
201 | - reservation_id = query_str.get('reservation_id') |
202 | - project_id = query_str.get('project_id') |
203 | - fixed_ip = query_str.get('fixed_ip') |
204 | - recurse_zones = utils.bool_from_str(query_str.get('recurse_zones')) |
205 | + |
206 | + search_opts = {} |
207 | + search_opts.update(req.str_GET) |
208 | + |
209 | + context = req.environ['nova.context'] |
210 | + remove_invalid_options(context, search_opts, |
211 | + self._get_server_search_options()) |
212 | + |
213 | + # Convert recurse_zones into a boolean |
214 | + search_opts['recurse_zones'] = utils.bool_from_str( |
215 | + search_opts.get('recurse_zones', False)) |
216 | + |
217 | + # If search by 'status', we need to convert it to 'state' |
218 | + # If the status is unknown, bail. |
219 | + # Leave 'state' in search_opts so compute can pass it on to |
220 | + # child zones.. |
221 | + if 'status' in search_opts: |
222 | + status = search_opts['status'] |
223 | + search_opts['state'] = common.power_states_from_status(status) |
224 | + if len(search_opts['state']) == 0: |
225 | + reason = _('Invalid server status: %(status)s') % locals() |
226 | + LOG.error(reason) |
227 | + raise exception.InvalidInput(reason=reason) |
228 | + |
229 | + # By default, compute's get_all() will return deleted instances. |
230 | + # If an admin hasn't specified a 'deleted' search option, we need |
231 | + # to filter out deleted instances by setting the filter ourselves. |
232 | + # ... Unless 'changes-since' is specified, because 'changes-since' |
233 | + # should return recently deleted instances according to the API spec. |
234 | + |
235 | + if 'deleted' not in search_opts: |
236 | + # Admin hasn't specified deleted filter |
237 | + if 'changes-since' not in search_opts: |
238 | + # No 'changes-since', so we need to find non-deleted servers |
239 | + search_opts['deleted'] = False |
240 | + else: |
241 | + # This is the default, but just in case.. |
242 | + search_opts['deleted'] = True |
243 | + |
244 | instance_list = self.compute_api.get_all( |
245 | - req.environ['nova.context'], |
246 | - reservation_id=reservation_id, |
247 | - project_id=project_id, |
248 | - fixed_ip=fixed_ip, |
249 | - recurse_zones=recurse_zones) |
250 | + context, search_opts=search_opts) |
251 | + |
252 | + # FIXME(comstud): 'changes-since' is not fully implemented. Where |
253 | + # should this be filtered? |
254 | + |
255 | limited_list = self._limit_items(instance_list, req) |
256 | servers = [self._build_view(req, inst, is_detail)['server'] |
257 | for inst in limited_list] |
258 | @@ -506,6 +543,7 @@ |
259 | |
260 | |
261 | class ControllerV10(Controller): |
262 | + """v1.0 OpenStack API controller""" |
263 | |
264 | @scheduler_api.redirect_handler |
265 | def delete(self, req, id): |
266 | @@ -568,8 +606,13 @@ |
267 | """ Determine the admin password for a server on creation """ |
268 | return self.helper._get_server_admin_password_old_style(server) |
269 | |
270 | + def _get_server_search_options(self): |
271 | + """Return server search options allowed by non-admin""" |
272 | + return 'reservation_id', 'fixed_ip', 'name', 'recurse_zones' |
273 | + |
274 | |
275 | class ControllerV11(Controller): |
276 | + """v1.1 OpenStack API controller""" |
277 | |
278 | @scheduler_api.redirect_handler |
279 | def delete(self, req, id): |
280 | @@ -742,6 +785,11 @@ |
281 | """ Determine the admin password for a server on creation """ |
282 | return self.helper._get_server_admin_password_new_style(server) |
283 | |
284 | + def _get_server_search_options(self): |
285 | + """Return server search options allowed by non-admin""" |
286 | + return ('reservation_id', 'name', 'recurse_zones', |
287 | + 'status', 'image', 'flavor', 'changes-since') |
288 | + |
289 | |
290 | class HeadersSerializer(wsgi.ResponseHeadersSerializer): |
291 | |
292 | @@ -920,3 +968,18 @@ |
293 | deserializer = wsgi.RequestDeserializer(body_deserializers) |
294 | |
295 | return wsgi.Resource(controller, deserializer, serializer) |
296 | + |
297 | + |
298 | +def remove_invalid_options(context, search_options, allowed_search_options): |
299 | + """Remove search options that are not valid for non-admin API/context""" |
300 | + if FLAGS.allow_admin_api and context.is_admin: |
301 | + # Allow all options |
302 | + return |
303 | + # Otherwise, strip out all unknown options |
304 | + unknown_options = [opt for opt in search_options |
305 | + if opt not in allowed_search_options] |
306 | + unk_opt_str = ", ".join(unknown_options) |
307 | + log_msg = _("Removing options '%(unk_opt_str)s' from query") % locals() |
308 | + LOG.debug(log_msg) |
309 | + for opt in unknown_options: |
310 | + search_options.pop(opt, None) |
311 | |
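Reviewer note: `remove_invalid_options` is the gate that makes the admin-only filters (`ip=`, `ip6=`, arbitrary Instance columns) invisible to regular users. A self-contained sketch of the same stripping logic, with `is_privileged` standing in for the `FLAGS.allow_admin_api and context.is_admin` check:

```python
def remove_invalid_options(search_options, allowed_search_options,
                           is_privileged):
    """Strip unknown search options in place, unless the caller is
    an admin with the admin API enabled (is_privileged stands in
    for FLAGS.allow_admin_api and context.is_admin)."""
    if is_privileged:
        # Admins may filter on anything, including raw Instance columns.
        return
    # Build the list first so we don't mutate the dict while iterating.
    unknown = [opt for opt in search_options
               if opt not in allowed_search_options]
    for opt in unknown:
        search_options.pop(opt, None)
```

For a non-admin, the combination of admin-only and unknown options from the tests below (`name=foo&ip=10.*&status=active&unknown_option=meow`) is reduced to just the user-allowed subset before `compute_api.get_all()` ever sees it.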
312 | === modified file 'nova/api/openstack/views/servers.py' |
313 | --- nova/api/openstack/views/servers.py 2011-07-29 16:28:02 +0000 |
314 | +++ nova/api/openstack/views/servers.py 2011-08-09 20:27:18 +0000 |
315 | @@ -20,7 +20,6 @@ |
316 | import os |
317 | |
318 | from nova import exception |
319 | -from nova.compute import power_state |
320 | import nova.compute |
321 | import nova.context |
322 | from nova.api.openstack import common |
323 | @@ -61,24 +60,11 @@ |
324 | |
325 | def _build_detail(self, inst): |
326 | """Returns a detailed model of a server.""" |
327 | - power_mapping = { |
328 | - None: 'BUILD', |
329 | - power_state.NOSTATE: 'BUILD', |
330 | - power_state.RUNNING: 'ACTIVE', |
331 | - power_state.BLOCKED: 'ACTIVE', |
332 | - power_state.SUSPENDED: 'SUSPENDED', |
333 | - power_state.PAUSED: 'PAUSED', |
334 | - power_state.SHUTDOWN: 'SHUTDOWN', |
335 | - power_state.SHUTOFF: 'SHUTOFF', |
336 | - power_state.CRASHED: 'ERROR', |
337 | - power_state.FAILED: 'ERROR', |
338 | - power_state.BUILDING: 'BUILD', |
339 | - } |
340 | |
341 | inst_dict = { |
342 | 'id': inst['id'], |
343 | 'name': inst['display_name'], |
344 | - 'status': power_mapping[inst.get('state')]} |
345 | + 'status': common.status_from_power_state(inst.get('state'))} |
346 | |
347 | ctxt = nova.context.get_admin_context() |
348 | compute_api = nova.compute.API() |
349 | |
350 | === modified file 'nova/compute/api.py' |
351 | --- nova/compute/api.py 2011-08-09 15:27:56 +0000 |
352 | +++ nova/compute/api.py 2011-08-09 20:27:18 +0000 |
353 | @@ -19,6 +19,7 @@ |
354 | """Handles all requests relating to instances (guest vms).""" |
355 | |
356 | import eventlet |
357 | +import novaclient |
358 | import re |
359 | import time |
360 | |
361 | @@ -712,59 +713,84 @@ |
362 | """ |
363 | return self.get(context, instance_id) |
364 | |
365 | - def get_all(self, context, project_id=None, reservation_id=None, |
366 | - fixed_ip=None, recurse_zones=False): |
367 | + def get_all(self, context, search_opts=None): |
368 | """Get all instances filtered by one of the given parameters. |
369 | |
370 | If there is no filter and the context is an admin, it will retrieve |
371 | all instances in the system. |
372 | """ |
373 | |
374 | - if reservation_id is not None: |
375 | + if search_opts is None: |
376 | + search_opts = {} |
377 | + |
378 | + LOG.debug(_("Searching by: %s") % str(search_opts)) |
379 | + |
380 | + # Fixups for the DB call |
381 | + filters = {} |
382 | + |
383 | + def _remap_flavor_filter(flavor_id): |
384 | + instance_type = self.db.instance_type_get_by_flavor_id( |
385 | + context, flavor_id) |
386 | + filters['instance_type_id'] = instance_type['id'] |
387 | + |
388 | + def _remap_fixed_ip_filter(fixed_ip): |
389 | + # Turn fixed_ip into a regexp match. Since '.' matches |
390 | + # any character, we need to use regexp escaping for it. |
391 | + filters['ip'] = '^%s$' % fixed_ip.replace('.', '\\.') |
392 | + |
393 | + # search_option to filter_name mapping. |
394 | + filter_mapping = { |
395 | + 'image': 'image_ref', |
396 | + 'name': 'display_name', |
397 | + 'instance_name': 'name', |
398 | + 'recurse_zones': None, |
399 | + 'flavor': _remap_flavor_filter, |
400 | + 'fixed_ip': _remap_fixed_ip_filter} |
401 | + |
402 | + # copy from search_opts, doing various remappings as necessary |
403 | + for opt, value in search_opts.iteritems(): |
404 | + # Do remappings. |
405 | + # Values not in the filter_mapping table are copied as-is. |
406 | + # If remapping is None, option is not copied |
407 | + # If the remapping is a string, it is the filter_name to use |
408 | + try: |
409 | + remap_object = filter_mapping[opt] |
410 | + except KeyError: |
411 | + filters[opt] = value |
412 | + else: |
413 | + if remap_object: |
414 | + if isinstance(remap_object, basestring): |
415 | + filters[remap_object] = value |
416 | + else: |
417 | + remap_object(value) |
418 | + |
419 | + recurse_zones = search_opts.get('recurse_zones', False) |
420 | + if 'reservation_id' in filters: |
421 | recurse_zones = True |
422 | - instances = self.db.instance_get_all_by_reservation( |
423 | - context, reservation_id) |
424 | - elif fixed_ip is not None: |
425 | - try: |
426 | - instances = self.db.fixed_ip_get_instance(context, fixed_ip) |
427 | - except exception.FloatingIpNotFound, e: |
428 | - if not recurse_zones: |
429 | - raise |
430 | - instances = None |
431 | - elif project_id or not context.is_admin: |
432 | - if not context.project_id: |
433 | - instances = self.db.instance_get_all_by_user( |
434 | - context, context.user_id) |
435 | - else: |
436 | - if project_id is None: |
437 | - project_id = context.project_id |
438 | - instances = self.db.instance_get_all_by_project( |
439 | - context, project_id) |
440 | - else: |
441 | - instances = self.db.instance_get_all(context) |
442 | |
443 | - if instances is None: |
444 | - instances = [] |
445 | - elif not isinstance(instances, list): |
446 | - instances = [instances] |
447 | + instances = self.db.instance_get_all_by_filters(context, filters) |
448 | |
449 | if not recurse_zones: |
450 | return instances |
451 | |
452 | + # Recurse zones. Need admin context for this. Send along |
453 | + # the un-modified search options we received.. |
454 | admin_context = context.elevated() |
455 | children = scheduler_api.call_zone_method(admin_context, |
456 | "list", |
457 | + errors_to_ignore=[novaclient.exceptions.NotFound], |
458 | novaclient_collection_name="servers", |
459 | - reservation_id=reservation_id, |
460 | - project_id=project_id, |
461 | - fixed_ip=fixed_ip, |
462 | - recurse_zones=True) |
463 | + search_opts=search_opts) |
464 | |
465 | for zone, servers in children: |
466 | + # 'servers' can be None if a 404 was returned by a zone |
467 | + if servers is None: |
468 | + continue |
469 | for server in servers: |
470 | # Results are ready to send to user. No need to scrub. |
471 | server._info['_is_precooked'] = True |
472 | instances.append(server._info) |
473 | + |
474 | return instances |
475 | |
476 | def _cast_compute_message(self, method, context, instance_id, host=None, |
477 | |
478 | === modified file 'nova/db/api.py' |
479 | --- nova/db/api.py 2011-08-08 21:33:03 +0000 |
480 | +++ nova/db/api.py 2011-08-09 20:27:18 +0000 |
481 | @@ -387,15 +387,6 @@ |
482 | return IMPL.fixed_ip_get_by_virtual_interface(context, vif_id) |
483 | |
484 | |
485 | -def fixed_ip_get_instance(context, address): |
486 | - """Get an instance for a fixed ip by address.""" |
487 | - return IMPL.fixed_ip_get_instance(context, address) |
488 | - |
489 | - |
490 | -def fixed_ip_get_instance_v6(context, address): |
491 | - return IMPL.fixed_ip_get_instance_v6(context, address) |
492 | - |
493 | - |
494 | def fixed_ip_get_network(context, address): |
495 | """Get a network for a fixed ip by address.""" |
496 | return IMPL.fixed_ip_get_network(context, address) |
497 | @@ -500,6 +491,11 @@ |
498 | return IMPL.instance_get_all(context) |
499 | |
500 | |
501 | +def instance_get_all_by_filters(context, filters): |
502 | + """Get all instances that match all filters.""" |
503 | + return IMPL.instance_get_all_by_filters(context, filters) |
504 | + |
505 | + |
506 | def instance_get_active_by_window(context, begin, end=None): |
507 | """Get instances active during a certain time window.""" |
508 | return IMPL.instance_get_active_by_window(context, begin, end) |
509 | @@ -521,10 +517,20 @@ |
510 | |
511 | |
512 | def instance_get_all_by_reservation(context, reservation_id): |
513 | - """Get all instance belonging to a reservation.""" |
514 | + """Get all instances belonging to a reservation.""" |
515 | return IMPL.instance_get_all_by_reservation(context, reservation_id) |
516 | |
517 | |
518 | +def instance_get_by_fixed_ip(context, address): |
519 | + """Get an instance for a fixed ip by address.""" |
520 | + return IMPL.instance_get_by_fixed_ip(context, address) |
521 | + |
522 | + |
523 | +def instance_get_by_fixed_ipv6(context, address): |
524 | + """Get an instance for a fixed ip by IPv6 address.""" |
525 | + return IMPL.instance_get_by_fixed_ipv6(context, address) |
526 | + |
527 | + |
528 | def instance_get_fixed_addresses(context, instance_id): |
529 | """Get the fixed ip address of an instance.""" |
530 | return IMPL.instance_get_fixed_addresses(context, instance_id) |
531 | |
532 | === modified file 'nova/db/sqlalchemy/api.py' |
533 | --- nova/db/sqlalchemy/api.py 2011-08-09 15:27:56 +0000 |
534 | +++ nova/db/sqlalchemy/api.py 2011-08-09 20:27:18 +0000 |
535 | @@ -18,6 +18,7 @@ |
536 | """ |
537 | Implementation of SQLAlchemy backend. |
538 | """ |
539 | +import re |
540 | import warnings |
541 | |
542 | from nova import block_device |
543 | @@ -829,28 +830,6 @@ |
544 | return rv |
545 | |
546 | |
547 | -@require_context |
548 | -def fixed_ip_get_instance(context, address): |
549 | - fixed_ip_ref = fixed_ip_get_by_address(context, address) |
550 | - return fixed_ip_ref.instance |
551 | - |
552 | - |
553 | -@require_context |
554 | -def fixed_ip_get_instance_v6(context, address): |
555 | - session = get_session() |
556 | - |
557 | - # convert IPv6 address to mac |
558 | - mac = ipv6.to_mac(address) |
559 | - |
560 | - # get virtual interface |
561 | - vif_ref = virtual_interface_get_by_address(context, mac) |
562 | - |
563 | - # look up instance based on instance_id from vif row |
564 | - result = session.query(models.Instance).\ |
565 | - filter_by(id=vif_ref['instance_id']) |
566 | - return result |
567 | - |
568 | - |
569 | @require_admin_context |
570 | def fixed_ip_get_network(context, address): |
571 | fixed_ip_ref = fixed_ip_get_by_address(context, address) |
572 | @@ -1169,6 +1148,114 @@ |
573 | all() |
574 | |
575 | |
576 | +@require_context |
577 | +def instance_get_all_by_filters(context, filters): |
578 | + """Return instances that match all filters. Deleted instances |
579 | + will be returned by default, unless there's a filter that says |
580 | + otherwise""" |
581 | + |
582 | + def _regexp_filter_by_ipv6(instance, filter_re): |
583 | + for interface in instance['virtual_interfaces']: |
584 | + fixed_ipv6 = interface.get('fixed_ipv6') |
585 | + if fixed_ipv6 and filter_re.match(fixed_ipv6): |
586 | + return True |
587 | + return False |
588 | + |
589 | + def _regexp_filter_by_ip(instance, filter_re): |
590 | + for interface in instance['virtual_interfaces']: |
591 | + for fixed_ip in interface['fixed_ips']: |
592 | + if not fixed_ip or not fixed_ip['address']: |
593 | + continue |
594 | + if filter_re.match(fixed_ip['address']): |
595 | + return True |
596 | + for floating_ip in fixed_ip.get('floating_ips', []): |
597 | + if not floating_ip or not floating_ip['address']: |
598 | + continue |
599 | + if filter_re.match(floating_ip['address']): |
600 | + return True |
601 | + return False |
602 | + |
603 | + def _regexp_filter_by_column(instance, filter_name, filter_re): |
604 | + try: |
605 | + v = getattr(instance, filter_name) |
606 | + except AttributeError: |
607 | + return True |
608 | + if v and filter_re.match(str(v)): |
609 | + return True |
610 | + return False |
611 | + |
612 | + def _exact_match_filter(query, column, value): |
613 | + """Do exact match against a column. value to match can be a list |
614 | + so you can match any value in the list. |
615 | + """ |
616 | + if isinstance(value, list): |
617 | + column_attr = getattr(models.Instance, column) |
618 | + return query.filter(column_attr.in_(value)) |
619 | + else: |
620 | + filter_dict = {} |
621 | + filter_dict[column] = value |
622 | + return query.filter_by(**filter_dict) |
623 | + |
624 | + session = get_session() |
625 | + query_prefix = session.query(models.Instance).\ |
626 | + options(joinedload_all('fixed_ips.floating_ips')).\ |
627 | + options(joinedload_all('virtual_interfaces.network')).\ |
628 | + options(joinedload_all( |
629 | + 'virtual_interfaces.fixed_ips.floating_ips')).\ |
630 | + options(joinedload('security_groups')).\ |
631 | + options(joinedload_all('fixed_ips.network')).\ |
632 | + options(joinedload('metadata')).\ |
633 | + options(joinedload('instance_type')) |
634 | + |
635 | + # Make a copy of the filters dictionary to use going forward, as we'll |
636 | + # be modifying it and we shouldn't affect the caller's use of it. |
637 | + filters = filters.copy() |
638 | + |
639 | + if not context.is_admin: |
640 | + # If we're not admin context, add appropriate filter.. |
641 | + if context.project_id: |
642 | + filters['project_id'] = context.project_id |
643 | + else: |
644 | + filters['user_id'] = context.user_id |
645 | + |
646 | + # Filters for exact matches that we can do along with the SQL query... |
647 | + # For other filters that don't match this, we will do regexp matching |
648 | + exact_match_filter_names = ['project_id', 'user_id', 'image_ref', |
649 | + 'state', 'instance_type_id', 'deleted'] |
650 | + |
651 | + query_filters = [key for key in filters.iterkeys() |
652 | + if key in exact_match_filter_names] |
653 | + |
654 | + for filter_name in query_filters: |
655 | + # Do the matching and remove the filter from the dictionary |
656 | + # so we don't try it again below.. |
657 | + query_prefix = _exact_match_filter(query_prefix, filter_name, |
658 | + filters.pop(filter_name)) |
659 | + |
660 | + instances = query_prefix.all() |
661 | + |
662 | + if not instances: |
663 | + return [] |
664 | + |
665 | + # Now filter on everything else for regexp matching.. |
666 | + # For filters not in the list, we'll attempt to use the filter_name |
667 | + # as a column name in Instance.. |
668 | + regexp_filter_funcs = {'ip6': _regexp_filter_by_ipv6, |
669 | + 'ip': _regexp_filter_by_ip} |
670 | + |
671 | + for filter_name in filters.iterkeys(): |
672 | + filter_func = regexp_filter_funcs.get(filter_name, None) |
673 | + filter_re = re.compile(str(filters[filter_name])) |
674 | + if filter_func: |
675 | + filter_l = lambda instance: filter_func(instance, filter_re) |
676 | + else: |
677 | + filter_l = lambda instance: _regexp_filter_by_column(instance, |
678 | + filter_name, filter_re) |
679 | + instances = filter(filter_l, instances) |
680 | + |
681 | + return instances |
682 | + |
683 | + |
684 | @require_admin_context |
685 | def instance_get_active_by_window(context, begin, end=None): |
686 | """Return instances that were continuously active over the given window""" |
687 | @@ -1237,30 +1324,48 @@ |
688 | @require_context |
689 | def instance_get_all_by_reservation(context, reservation_id): |
690 | session = get_session() |
691 | + query = session.query(models.Instance).\ |
692 | + filter_by(reservation_id=reservation_id).\ |
693 | + options(joinedload_all('fixed_ips.floating_ips')).\ |
694 | + options(joinedload('virtual_interfaces')).\ |
695 | + options(joinedload('security_groups')).\ |
696 | + options(joinedload_all('fixed_ips.network')).\ |
697 | + options(joinedload('metadata')).\ |
698 | + options(joinedload('instance_type')) |
699 | |
700 | if is_admin_context(context): |
701 | - return session.query(models.Instance).\ |
702 | - options(joinedload_all('fixed_ips.floating_ips')).\ |
703 | - options(joinedload('virtual_interfaces')).\ |
704 | - options(joinedload('security_groups')).\ |
705 | - options(joinedload_all('fixed_ips.network')).\ |
706 | - options(joinedload('metadata')).\ |
707 | - options(joinedload('instance_type')).\ |
708 | - filter_by(reservation_id=reservation_id).\ |
709 | - filter_by(deleted=can_read_deleted(context)).\ |
710 | - all() |
711 | + return query.\ |
712 | + filter_by(deleted=can_read_deleted(context)).\ |
713 | + all() |
714 | elif is_user_context(context): |
715 | - return session.query(models.Instance).\ |
716 | - options(joinedload_all('fixed_ips.floating_ips')).\ |
717 | - options(joinedload('virtual_interfaces')).\ |
718 | - options(joinedload('security_groups')).\ |
719 | - options(joinedload_all('fixed_ips.network')).\ |
720 | - options(joinedload('metadata')).\ |
721 | - options(joinedload('instance_type')).\ |
722 | - filter_by(project_id=context.project_id).\ |
723 | - filter_by(reservation_id=reservation_id).\ |
724 | - filter_by(deleted=False).\ |
725 | - all() |
726 | + return query.\ |
727 | + filter_by(project_id=context.project_id).\ |
728 | + filter_by(deleted=False).\ |
729 | + all() |
730 | + |
731 | + |
732 | +@require_context |
733 | +def instance_get_by_fixed_ip(context, address): |
734 | + """Return instance ref by exact match of FixedIP""" |
735 | + fixed_ip_ref = fixed_ip_get_by_address(context, address) |
736 | + return fixed_ip_ref.instance |
737 | + |
738 | + |
739 | +@require_context |
740 | +def instance_get_by_fixed_ipv6(context, address): |
741 | + """Return instance ref by exact match of IPv6""" |
742 | + session = get_session() |
743 | + |
744 | + # convert IPv6 address to mac |
745 | + mac = ipv6.to_mac(address) |
746 | + |
747 | + # get virtual interface |
748 | + vif_ref = virtual_interface_get_by_address(context, mac) |
749 | + |
750 | + # look up instance based on instance_id from vif row |
751 | + result = session.query(models.Instance).\ |
752 | + filter_by(id=vif_ref['instance_id']) |
753 | + return result |
754 | |
755 | |
756 | @require_admin_context |
757 | @@ -1302,7 +1407,7 @@ |
758 | network_refs = network_get_all_by_instance(context, instance_id) |
759 | # compile a list of cidr_v6 prefixes sorted by network id |
760 | prefixes = [ref.cidr_v6 for ref in |
761 | - sorted(network_refs, key=lambda ref: ref.id)] |
762 | + sorted(network_refs, key=lambda ref: ref.id)] |
763 | # get vifs associated with instance |
764 | vif_refs = virtual_interface_get_by_instance(context, instance_ref.id) |
765 | # compile list of the mac_addresses for vifs sorted by network id |
766 | |
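Reviewer note: `instance_get_all_by_filters` splits filtering into two phases — exact-match filters are pushed into the SQL query, and everything left over is applied as a regexp over the returned rows. A sketch of the regexp phase over plain dicts (hypothetical instance records), mirroring `_regexp_filter_by_column`, including its behavior of letting instances through when the named column doesn't exist:

```python
import re


def regexp_filter(instances, filters):
    """Post-SQL phase of instance_get_all_by_filters, sketched over
    plain dicts: each remaining filter is compiled as a regexp and
    matched against the instance column of the same name."""
    for name, pattern in filters.items():
        filter_re = re.compile(str(pattern))

        def matches(inst, name=name, filter_re=filter_re):
            if name not in inst:
                # Unknown column: keep the instance (mirrors the
                # AttributeError branch returning True).
                return True
            value = inst[name]
            return bool(value) and filter_re.match(str(value)) is not None

        instances = [inst for inst in instances if matches(inst)]
    return instances
```

Note that `re.match` anchors only at the start of the string, which is why the compute layer anchors `fixed_ip` searches with an explicit `^...$`.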
767 | === modified file 'nova/db/sqlalchemy/models.py' |
768 | --- nova/db/sqlalchemy/models.py 2011-08-03 11:31:10 +0000 |
769 | +++ nova/db/sqlalchemy/models.py 2011-08-09 20:27:18 +0000 |
770 | @@ -180,6 +180,7 @@ |
771 | image_ref = Column(String(255)) |
772 | kernel_id = Column(String(255)) |
773 | ramdisk_id = Column(String(255)) |
774 | + server_name = Column(String(255)) |
775 | |
776 | # image_ref = Column(Integer, ForeignKey('images.id'), nullable=True) |
777 | # kernel_id = Column(Integer, ForeignKey('images.id'), nullable=True) |
778 | |
779 | === modified file 'nova/tests/api/openstack/fakes.py' |
780 | --- nova/tests/api/openstack/fakes.py 2011-08-02 22:57:24 +0000 |
781 | +++ nova/tests/api/openstack/fakes.py 2011-08-09 20:27:18 +0000 |
782 | @@ -71,14 +71,18 @@ |
783 | return self.application |
784 | |
785 | |
786 | -def wsgi_app(inner_app10=None, inner_app11=None, fake_auth=True): |
787 | +def wsgi_app(inner_app10=None, inner_app11=None, fake_auth=True, |
788 | + fake_auth_context=None): |
789 | if not inner_app10: |
790 | inner_app10 = openstack.APIRouterV10() |
791 | if not inner_app11: |
792 | inner_app11 = openstack.APIRouterV11() |
793 | |
794 | if fake_auth: |
795 | - ctxt = context.RequestContext('fake', 'fake') |
796 | + if fake_auth_context is not None: |
797 | + ctxt = fake_auth_context |
798 | + else: |
799 | + ctxt = context.RequestContext('fake', 'fake') |
800 | api10 = openstack.FaultWrapper(wsgi.InjectContext(ctxt, |
801 | limits.RateLimitingMiddleware(inner_app10))) |
802 | api11 = openstack.FaultWrapper(wsgi.InjectContext(ctxt, |
803 | |
804 | === modified file 'nova/tests/api/openstack/test_servers.py' |
805 | --- nova/tests/api/openstack/test_servers.py 2011-08-09 04:47:16 +0000 |
806 | +++ nova/tests/api/openstack/test_servers.py 2011-08-09 20:27:18 +0000 |
807 | @@ -236,7 +236,8 @@ |
808 | fakes.stub_out_key_pair_funcs(self.stubs) |
809 | fakes.stub_out_image_service(self.stubs) |
810 | self.stubs.Set(utils, 'gen_uuid', fake_gen_uuid) |
811 | - self.stubs.Set(nova.db.api, 'instance_get_all', return_servers) |
812 | + self.stubs.Set(nova.db.api, 'instance_get_all_by_filters', |
813 | + return_servers) |
814 | self.stubs.Set(nova.db.api, 'instance_get', return_server_by_id) |
815 | self.stubs.Set(nova.db, 'instance_get_by_uuid', |
816 | return_server_by_uuid) |
817 | @@ -1098,6 +1099,277 @@ |
818 | self.assertEqual(res.status_int, 400) |
819 | self.assertTrue(res.body.find('marker param') > -1) |
820 | |
821 | + def test_get_servers_with_bad_option_v1_0(self): |
822 | + # 1.0 API ignores unknown options |
823 | + def fake_get_all(compute_self, context, search_opts=None): |
824 | + return [stub_instance(100)] |
825 | + |
826 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
827 | + |
828 | + req = webob.Request.blank('/v1.0/servers?unknownoption=whee') |
829 | + res = req.get_response(fakes.wsgi_app()) |
830 | + self.assertEqual(res.status_int, 200) |
831 | + servers = json.loads(res.body)['servers'] |
832 | + self.assertEqual(len(servers), 1) |
833 | + self.assertEqual(servers[0]['id'], 100) |
834 | + |
835 | + def test_get_servers_with_bad_option_v1_1(self): |
836 | + # 1.1 API also ignores unknown options |
837 | + def fake_get_all(compute_self, context, search_opts=None): |
838 | + return [stub_instance(100)] |
839 | + |
840 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
841 | + |
842 | + req = webob.Request.blank('/v1.1/servers?unknownoption=whee') |
843 | + res = req.get_response(fakes.wsgi_app()) |
844 | + self.assertEqual(res.status_int, 200) |
845 | + servers = json.loads(res.body)['servers'] |
846 | + self.assertEqual(len(servers), 1) |
847 | + self.assertEqual(servers[0]['id'], 100) |
848 | + |
849 | + def test_get_servers_allows_image_v1_1(self): |
850 | + def fake_get_all(compute_self, context, search_opts=None): |
851 | + self.assertNotEqual(search_opts, None) |
852 | + self.assertTrue('image' in search_opts) |
853 | + self.assertEqual(search_opts['image'], '12345') |
854 | + return [stub_instance(100)] |
855 | + |
856 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
857 | + self.flags(allow_admin_api=False) |
858 | + |
859 | + req = webob.Request.blank('/v1.1/servers?image=12345') |
860 | + res = req.get_response(fakes.wsgi_app()) |
861 | + # The following assert will fail if either of the asserts in |
862 | + # fake_get_all() fail |
863 | + self.assertEqual(res.status_int, 200) |
864 | + servers = json.loads(res.body)['servers'] |
865 | + self.assertEqual(len(servers), 1) |
866 | + self.assertEqual(servers[0]['id'], 100) |
867 | + |
868 | + def test_get_servers_allows_flavor_v1_1(self): |
869 | + def fake_get_all(compute_self, context, search_opts=None): |
870 | + self.assertNotEqual(search_opts, None) |
871 | + self.assertTrue('flavor' in search_opts) |
872 | + # flavor is an integer ID |
873 | + self.assertEqual(search_opts['flavor'], '12345') |
874 | + return [stub_instance(100)] |
875 | + |
876 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
877 | + self.flags(allow_admin_api=False) |
878 | + |
879 | + req = webob.Request.blank('/v1.1/servers?flavor=12345') |
880 | + res = req.get_response(fakes.wsgi_app()) |
881 | + # The following assert will fail if either of the asserts in |
882 | + # fake_get_all() fail |
883 | + self.assertEqual(res.status_int, 200) |
884 | + servers = json.loads(res.body)['servers'] |
885 | + self.assertEqual(len(servers), 1) |
886 | + self.assertEqual(servers[0]['id'], 100) |
887 | + |
888 | + def test_get_servers_allows_status_v1_1(self): |
889 | + def fake_get_all(compute_self, context, search_opts=None): |
890 | + self.assertNotEqual(search_opts, None) |
891 | + self.assertTrue('state' in search_opts) |
892 | + self.assertEqual(set(search_opts['state']), |
893 | + set([power_state.RUNNING, power_state.BLOCKED])) |
894 | + return [stub_instance(100)] |
895 | + |
896 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
897 | + self.flags(allow_admin_api=False) |
898 | + |
899 | + req = webob.Request.blank('/v1.1/servers?status=active') |
900 | + res = req.get_response(fakes.wsgi_app()) |
901 | + # The following assert will fail if either of the asserts in |
902 | + # fake_get_all() fail |
903 | + self.assertEqual(res.status_int, 200) |
904 | + servers = json.loads(res.body)['servers'] |
905 | + self.assertEqual(len(servers), 1) |
906 | + self.assertEqual(servers[0]['id'], 100) |
907 | + |
908 | + def test_get_servers_invalid_status_v1_1(self): |
909 | + """Test getting servers by invalid status""" |
910 | + |
911 | + self.flags(allow_admin_api=False) |
912 | + |
913 | + req = webob.Request.blank('/v1.1/servers?status=running') |
914 | + res = req.get_response(fakes.wsgi_app()) |
915 | + # The following assert will fail if either of the asserts in |
916 | + # fake_get_all() fail |
917 | + self.assertEqual(res.status_int, 400) |
918 | + self.assertTrue(res.body.find('Invalid server status') > -1) |
919 | + |
920 | + def test_get_servers_allows_name_v1_1(self): |
921 | + def fake_get_all(compute_self, context, search_opts=None): |
922 | + self.assertNotEqual(search_opts, None) |
923 | + self.assertTrue('name' in search_opts) |
924 | + self.assertEqual(search_opts['name'], 'whee.*') |
925 | + return [stub_instance(100)] |
926 | + |
927 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
928 | + self.flags(allow_admin_api=False) |
929 | + |
930 | + req = webob.Request.blank('/v1.1/servers?name=whee.*') |
931 | + res = req.get_response(fakes.wsgi_app()) |
932 | + # The following assert will fail if either of the asserts in |
933 | + # fake_get_all() fail |
934 | + self.assertEqual(res.status_int, 200) |
935 | + servers = json.loads(res.body)['servers'] |
936 | + self.assertEqual(len(servers), 1) |
937 | + self.assertEqual(servers[0]['id'], 100) |
938 | + |
939 | + def test_get_servers_unknown_or_admin_options1(self): |
940 | + """Test getting servers by admin-only or unknown options. |
941 | + This tests when admin_api is off. Make sure the admin and |
942 | + unknown options are stripped before they get to |
943 | + compute_api.get_all() |
944 | + """ |
945 | + |
946 | + self.flags(allow_admin_api=False) |
947 | + |
948 | + def fake_get_all(compute_self, context, search_opts=None): |
949 | + self.assertNotEqual(search_opts, None) |
950 | + # Allowed by user |
951 | + self.assertTrue('name' in search_opts) |
952 | + self.assertTrue('status' in search_opts) |
953 | + # Allowed only by admins with admin API on |
954 | + self.assertFalse('ip' in search_opts) |
955 | + self.assertFalse('unknown_option' in search_opts) |
956 | + return [stub_instance(100)] |
957 | + |
958 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
959 | + |
960 | + query_str = "name=foo&ip=10.*&status=active&unknown_option=meow" |
961 | + req = webob.Request.blank('/v1.1/servers?%s' % query_str) |
962 | + # Request admin context |
963 | + context = nova.context.RequestContext('testuser', 'testproject', |
964 | + is_admin=True) |
965 | + res = req.get_response(fakes.wsgi_app(fake_auth_context=context)) |
966 | + # The following assert will fail if either of the asserts in |
967 | + # fake_get_all() fail |
968 | + self.assertEqual(res.status_int, 200) |
969 | + servers = json.loads(res.body)['servers'] |
970 | + self.assertEqual(len(servers), 1) |
971 | + self.assertEqual(servers[0]['id'], 100) |
972 | + |
973 | + def test_get_servers_unknown_or_admin_options2(self): |
974 | + """Test getting servers by admin-only or unknown options. |
975 | + This tests when admin_api is on, but context is a user. |
976 | + Make sure the admin and unknown options are stripped before |
977 | + they get to compute_api.get_all() |
978 | + """ |
979 | + |
980 | + self.flags(allow_admin_api=True) |
981 | + |
982 | + def fake_get_all(compute_self, context, search_opts=None): |
983 | + self.assertNotEqual(search_opts, None) |
984 | + # Allowed by user |
985 | + self.assertTrue('name' in search_opts) |
986 | + self.assertTrue('status' in search_opts) |
987 | + # Allowed only by admins with admin API on |
988 | + self.assertFalse('ip' in search_opts) |
989 | + self.assertFalse('unknown_option' in search_opts) |
990 | + return [stub_instance(100)] |
991 | + |
992 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
993 | + |
994 | + query_str = "name=foo&ip=10.*&status=active&unknown_option=meow" |
995 | + req = webob.Request.blank('/v1.1/servers?%s' % query_str) |
996 | + # Request a non-admin (user) context |
997 | + context = nova.context.RequestContext('testuser', 'testproject', |
998 | + is_admin=False) |
999 | + res = req.get_response(fakes.wsgi_app(fake_auth_context=context)) |
1000 | + # The following assert will fail if any of the asserts in |
1001 | + # fake_get_all() fail |
1002 | + self.assertEqual(res.status_int, 200) |
1003 | + servers = json.loads(res.body)['servers'] |
1004 | + self.assertEqual(len(servers), 1) |
1005 | + self.assertEqual(servers[0]['id'], 100) |
1006 | + |
1007 | + def test_get_servers_unknown_or_admin_options3(self): |
1008 | + """Test getting servers by admin-only or unknown options. |
1009 | + This tests when admin_api is on and context is admin. |
1010 | + All options should be passed through to compute_api.get_all() |
1011 | + """ |
1012 | + |
1013 | + self.flags(allow_admin_api=True) |
1014 | + |
1015 | + def fake_get_all(compute_self, context, search_opts=None): |
1016 | + self.assertNotEqual(search_opts, None) |
1017 | + # Allowed by user |
1018 | + self.assertTrue('name' in search_opts) |
1019 | + self.assertTrue('status' in search_opts) |
1020 | + # Allowed only by admins with admin API on |
1021 | + self.assertTrue('ip' in search_opts) |
1022 | + self.assertTrue('unknown_option' in search_opts) |
1023 | + return [stub_instance(100)] |
1024 | + |
1025 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
1026 | + |
1027 | + query_str = "name=foo&ip=10.*&status=active&unknown_option=meow" |
1028 | + req = webob.Request.blank('/v1.1/servers?%s' % query_str) |
1029 | + # Request admin context |
1030 | + context = nova.context.RequestContext('testuser', 'testproject', |
1031 | + is_admin=True) |
1032 | + res = req.get_response(fakes.wsgi_app(fake_auth_context=context)) |
1033 | + # The following assert will fail if any of the asserts in |
1034 | + # fake_get_all() fail |
1035 | + self.assertEqual(res.status_int, 200) |
1036 | + servers = json.loads(res.body)['servers'] |
1037 | + self.assertEqual(len(servers), 1) |
1038 | + self.assertEqual(servers[0]['id'], 100) |
1039 | + |
1040 | + def test_get_servers_admin_allows_ip_v1_1(self): |
1041 | + """Test getting servers by ip with admin_api enabled and |
1042 | + admin context |
1043 | + """ |
1044 | + self.flags(allow_admin_api=True) |
1045 | + |
1046 | + def fake_get_all(compute_self, context, search_opts=None): |
1047 | + self.assertNotEqual(search_opts, None) |
1048 | + self.assertTrue('ip' in search_opts) |
1049 | + self.assertEqual(search_opts['ip'], '10\..*') |
1050 | + return [stub_instance(100)] |
1051 | + |
1052 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
1053 | + |
1054 | + req = webob.Request.blank('/v1.1/servers?ip=10\..*') |
1055 | + # Request admin context |
1056 | + context = nova.context.RequestContext('testuser', 'testproject', |
1057 | + is_admin=True) |
1058 | + res = req.get_response(fakes.wsgi_app(fake_auth_context=context)) |
1059 | + # The following assert will fail if any of the asserts in |
1060 | + # fake_get_all() fail |
1061 | + self.assertEqual(res.status_int, 200) |
1062 | + servers = json.loads(res.body)['servers'] |
1063 | + self.assertEqual(len(servers), 1) |
1064 | + self.assertEqual(servers[0]['id'], 100) |
1065 | + |
1066 | + def test_get_servers_admin_allows_ip6_v1_1(self): |
1067 | + """Test getting servers by ip6 with admin_api enabled and |
1068 | + admin context |
1069 | + """ |
1070 | + self.flags(allow_admin_api=True) |
1071 | + |
1072 | + def fake_get_all(compute_self, context, search_opts=None): |
1073 | + self.assertNotEqual(search_opts, None) |
1074 | + self.assertTrue('ip6' in search_opts) |
1075 | + self.assertEqual(search_opts['ip6'], 'ffff.*') |
1076 | + return [stub_instance(100)] |
1077 | + |
1078 | + self.stubs.Set(nova.compute.API, 'get_all', fake_get_all) |
1079 | + |
1080 | + req = webob.Request.blank('/v1.1/servers?ip6=ffff.*') |
1081 | + # Request admin context |
1082 | + context = nova.context.RequestContext('testuser', 'testproject', |
1083 | + is_admin=True) |
1084 | + res = req.get_response(fakes.wsgi_app(fake_auth_context=context)) |
1085 | + # The following assert will fail if any of the asserts in |
1086 | + # fake_get_all() fail |
1087 | + self.assertEqual(res.status_int, 200) |
1088 | + servers = json.loads(res.body)['servers'] |
1089 | + self.assertEqual(len(servers), 1) |
1090 | + self.assertEqual(servers[0]['id'], 100) |
1091 | + |
1092 | def _setup_for_create_instance(self): |
1093 | """Shared implementation for tests below that create instance""" |
1094 | def instance_create(context, inst): |
1095 | @@ -1665,6 +1937,7 @@ |
1096 | def test_get_all_server_details_v1_0(self): |
1097 | req = webob.Request.blank('/v1.0/servers/detail') |
1098 | res = req.get_response(fakes.wsgi_app()) |
1099 | + self.assertEqual(res.status_int, 200) |
1100 | res_dict = json.loads(res.body) |
1101 | |
1102 | for i, s in enumerate(res_dict['servers']): |
1103 | @@ -1720,7 +1993,7 @@ |
1104 | return [stub_instance(i, 'fake', 'fake', None, None, i % 2) |
1105 | for i in xrange(5)] |
1106 | |
1107 | - self.stubs.Set(nova.db.api, 'instance_get_all_by_project', |
1108 | + self.stubs.Set(nova.db.api, 'instance_get_all_by_filters', |
1109 | return_servers_with_host) |
1110 | |
1111 | req = webob.Request.blank('/v1.0/servers/detail') |
1112 | |
1113 | === modified file 'nova/tests/test_compute.py' |
1114 | --- nova/tests/test_compute.py 2011-08-05 05:06:22 +0000 |
1115 | +++ nova/tests/test_compute.py 2011-08-09 20:27:18 +0000 |
1116 | @@ -26,6 +26,7 @@ |
1117 | from nova import context |
1118 | from nova import db |
1119 | from nova.db.sqlalchemy import models |
1120 | +from nova.db.sqlalchemy import api as sqlalchemy_api |
1121 | from nova import exception |
1122 | from nova import flags |
1123 | import nova.image.fake |
1124 | @@ -73,8 +74,11 @@ |
1125 | |
1126 | self.stubs.Set(nova.image.fake._FakeImageService, 'show', fake_show) |
1127 | |
1128 | - def _create_instance(self, params={}): |
1129 | + def _create_instance(self, params=None): |
1130 | """Create a test instance""" |
1131 | + |
1132 | + if params is None: |
1133 | + params = {} |
1134 | inst = {} |
1135 | inst['image_ref'] = 1 |
1136 | inst['reservation_id'] = 'r-fakeres' |
1137 | @@ -864,6 +868,458 @@ |
1138 | self.assertEqual(len(instances), 1) |
1139 | self.assertEqual(power_state.SHUTOFF, instances[0]['state']) |
1140 | |
1141 | + def test_get_all_by_name_regexp(self): |
1142 | + """Test searching instances by name (display_name)""" |
1143 | + c = context.get_admin_context() |
1144 | + instance_id1 = self._create_instance({'display_name': 'woot'}) |
1145 | + instance_id2 = self._create_instance({ |
1146 | + 'display_name': 'woo', |
1147 | + 'id': 20}) |
1148 | + instance_id3 = self._create_instance({ |
1149 | + 'display_name': 'not-woot', |
1150 | + 'id': 30}) |
1151 | + |
1152 | + instances = self.compute_api.get_all(c, |
1153 | + search_opts={'name': 'woo.*'}) |
1154 | + self.assertEqual(len(instances), 2) |
1155 | + instance_ids = [instance.id for instance in instances] |
1156 | + self.assertTrue(instance_id1 in instance_ids) |
1157 | + self.assertTrue(instance_id2 in instance_ids) |
1158 | + |
1159 | + instances = self.compute_api.get_all(c, |
1160 | + search_opts={'name': 'woot.*'}) |
1161 | + instance_ids = [instance.id for instance in instances] |
1162 | + self.assertEqual(len(instances), 1) |
1163 | + self.assertTrue(instance_id1 in instance_ids) |
1164 | + |
1165 | + instances = self.compute_api.get_all(c, |
1166 | + search_opts={'name': '.*oot.*'}) |
1167 | + self.assertEqual(len(instances), 2) |
1168 | + instance_ids = [instance.id for instance in instances] |
1169 | + self.assertTrue(instance_id1 in instance_ids) |
1170 | + self.assertTrue(instance_id3 in instance_ids) |
1171 | + |
1172 | + instances = self.compute_api.get_all(c, |
1173 | + search_opts={'name': 'n.*'}) |
1174 | + self.assertEqual(len(instances), 1) |
1175 | + instance_ids = [instance.id for instance in instances] |
1176 | + self.assertTrue(instance_id3 in instance_ids) |
1177 | + |
1178 | + instances = self.compute_api.get_all(c, |
1179 | + search_opts={'name': 'noth.*'}) |
1180 | + self.assertEqual(len(instances), 0) |
1181 | + |
1182 | + db.instance_destroy(c, instance_id1) |
1183 | + db.instance_destroy(c, instance_id2) |
1184 | + db.instance_destroy(c, instance_id3) |
1185 | + |
1186 | + def test_get_all_by_instance_name_regexp(self): |
1187 | + """Test searching instances by name""" |
1188 | + self.flags(instance_name_template='instance-%d') |
1189 | + |
1190 | + c = context.get_admin_context() |
1191 | + instance_id1 = self._create_instance() |
1192 | + instance_id2 = self._create_instance({'id': 2}) |
1193 | + instance_id3 = self._create_instance({'id': 10}) |
1194 | + |
1195 | + instances = self.compute_api.get_all(c, |
1196 | + search_opts={'instance_name': 'instance.*'}) |
1197 | + self.assertEqual(len(instances), 3) |
1198 | + |
1199 | + instances = self.compute_api.get_all(c, |
1200 | + search_opts={'instance_name': '.*\-\d$'}) |
1201 | + self.assertEqual(len(instances), 2) |
1202 | + instance_ids = [instance.id for instance in instances] |
1203 | + self.assertTrue(instance_id1 in instance_ids) |
1204 | + self.assertTrue(instance_id2 in instance_ids) |
1205 | + |
1206 | + instances = self.compute_api.get_all(c, |
1207 | + search_opts={'instance_name': 'i.*2'}) |
1208 | + self.assertEqual(len(instances), 1) |
1209 | + self.assertEqual(instances[0].id, instance_id2) |
1210 | + |
1211 | + db.instance_destroy(c, instance_id1) |
1212 | + db.instance_destroy(c, instance_id2) |
1213 | + db.instance_destroy(c, instance_id3) |
1214 | + |
1215 | + def test_get_by_fixed_ip(self): |
1216 | + """Test getting 1 instance by Fixed IP""" |
1217 | + c = context.get_admin_context() |
1218 | + instance_id1 = self._create_instance() |
1219 | + instance_id2 = self._create_instance({'id': 20}) |
1220 | + instance_id3 = self._create_instance({'id': 30}) |
1221 | + |
1222 | + vif_ref1 = db.virtual_interface_create(c, |
1223 | + {'address': '12:34:56:78:90:12', |
1224 | + 'instance_id': instance_id1, |
1225 | + 'network_id': 1}) |
1226 | + vif_ref2 = db.virtual_interface_create(c, |
1227 | + {'address': '90:12:34:56:78:90', |
1228 | + 'instance_id': instance_id2, |
1229 | + 'network_id': 1}) |
1230 | + |
1231 | + db.fixed_ip_create(c, |
1232 | + {'address': '1.1.1.1', |
1233 | + 'instance_id': instance_id1, |
1234 | + 'virtual_interface_id': vif_ref1['id']}) |
1235 | + db.fixed_ip_create(c, |
1236 | + {'address': '1.1.2.1', |
1237 | + 'instance_id': instance_id2, |
1238 | + 'virtual_interface_id': vif_ref2['id']}) |
1239 | + |
1240 | + # regex not allowed |
1241 | + instances = self.compute_api.get_all(c, |
1242 | + search_opts={'fixed_ip': '.*'}) |
1243 | + self.assertEqual(len(instances), 0) |
1244 | + |
1245 | + instances = self.compute_api.get_all(c, |
1246 | + search_opts={'fixed_ip': '1.1.3.1'}) |
1247 | + self.assertEqual(len(instances), 0) |
1248 | + |
1249 | + instances = self.compute_api.get_all(c, |
1250 | + search_opts={'fixed_ip': '1.1.1.1'}) |
1251 | + self.assertEqual(len(instances), 1) |
1252 | + self.assertEqual(instances[0].id, instance_id1) |
1253 | + |
1254 | + instances = self.compute_api.get_all(c, |
1255 | + search_opts={'fixed_ip': '1.1.2.1'}) |
1256 | + self.assertEqual(len(instances), 1) |
1257 | + self.assertEqual(instances[0].id, instance_id2) |
1258 | + |
1259 | + db.virtual_interface_delete(c, vif_ref1['id']) |
1260 | + db.virtual_interface_delete(c, vif_ref2['id']) |
1261 | + db.instance_destroy(c, instance_id1) |
1262 | + db.instance_destroy(c, instance_id2) |
1263 | + |
1264 | + def test_get_all_by_ip_regexp(self): |
1265 | + """Test searching by Floating and Fixed IP""" |
1266 | + c = context.get_admin_context() |
1267 | + instance_id1 = self._create_instance({'display_name': 'woot'}) |
1268 | + instance_id2 = self._create_instance({ |
1269 | + 'display_name': 'woo', |
1270 | + 'id': 20}) |
1271 | + instance_id3 = self._create_instance({ |
1272 | + 'display_name': 'not-woot', |
1273 | + 'id': 30}) |
1274 | + |
1275 | + vif_ref1 = db.virtual_interface_create(c, |
1276 | + {'address': '12:34:56:78:90:12', |
1277 | + 'instance_id': instance_id1, |
1278 | + 'network_id': 1}) |
1279 | + vif_ref2 = db.virtual_interface_create(c, |
1280 | + {'address': '90:12:34:56:78:90', |
1281 | + 'instance_id': instance_id2, |
1282 | + 'network_id': 1}) |
1283 | + vif_ref3 = db.virtual_interface_create(c, |
1284 | + {'address': '34:56:78:90:12:34', |
1285 | + 'instance_id': instance_id3, |
1286 | + 'network_id': 1}) |
1287 | + |
1288 | + db.fixed_ip_create(c, |
1289 | + {'address': '1.1.1.1', |
1290 | + 'instance_id': instance_id1, |
1291 | + 'virtual_interface_id': vif_ref1['id']}) |
1292 | + db.fixed_ip_create(c, |
1293 | + {'address': '1.1.2.1', |
1294 | + 'instance_id': instance_id2, |
1295 | + 'virtual_interface_id': vif_ref2['id']}) |
1296 | + fix_addr = db.fixed_ip_create(c, |
1297 | + {'address': '1.1.3.1', |
1298 | + 'instance_id': instance_id3, |
1299 | + 'virtual_interface_id': vif_ref3['id']}) |
1300 | + fix_ref = db.fixed_ip_get_by_address(c, fix_addr) |
1301 | + flo_ref = db.floating_ip_create(c, |
1302 | + {'address': '10.0.0.2', |
1303 | + 'fixed_ip_id': fix_ref['id']}) |
1304 | + |
1305 | + # ends up matching 2nd octet here.. so all 3 match |
1306 | + instances = self.compute_api.get_all(c, |
1307 | + search_opts={'ip': '.*\.1'}) |
1308 | + self.assertEqual(len(instances), 3) |
1309 | + |
1310 | + instances = self.compute_api.get_all(c, |
1311 | + search_opts={'ip': '1.*'}) |
1312 | + self.assertEqual(len(instances), 3) |
1313 | + |
1314 | + instances = self.compute_api.get_all(c, |
1315 | + search_opts={'ip': '.*\.1.\d+$'}) |
1316 | + self.assertEqual(len(instances), 1) |
1317 | + instance_ids = [instance.id for instance in instances] |
1318 | + self.assertTrue(instance_id1 in instance_ids) |
1319 | + |
1320 | + instances = self.compute_api.get_all(c, |
1321 | + search_opts={'ip': '.*\.2.+'}) |
1322 | + self.assertEqual(len(instances), 1) |
1323 | + self.assertEqual(instances[0].id, instance_id2) |
1324 | + |
1325 | + instances = self.compute_api.get_all(c, |
1326 | + search_opts={'ip': '10.*'}) |
1327 | + self.assertEqual(len(instances), 1) |
1328 | + self.assertEqual(instances[0].id, instance_id3) |
1329 | + |
1330 | + db.virtual_interface_delete(c, vif_ref1['id']) |
1331 | + db.virtual_interface_delete(c, vif_ref2['id']) |
1332 | + db.virtual_interface_delete(c, vif_ref3['id']) |
1333 | + db.floating_ip_destroy(c, '10.0.0.2') |
1334 | + db.instance_destroy(c, instance_id1) |
1335 | + db.instance_destroy(c, instance_id2) |
1336 | + db.instance_destroy(c, instance_id3) |
1337 | + |
1338 | + def test_get_all_by_ipv6_regexp(self): |
1339 | + """Test searching by IPv6 address""" |
1340 | + |
1341 | + c = context.get_admin_context() |
1342 | + instance_id1 = self._create_instance({'display_name': 'woot'}) |
1343 | + instance_id2 = self._create_instance({ |
1344 | + 'display_name': 'woo', |
1345 | + 'id': 20}) |
1346 | + instance_id3 = self._create_instance({ |
1347 | + 'display_name': 'not-woot', |
1348 | + 'id': 30}) |
1349 | + |
1350 | + vif_ref1 = db.virtual_interface_create(c, |
1351 | + {'address': '12:34:56:78:90:12', |
1352 | + 'instance_id': instance_id1, |
1353 | + 'network_id': 1}) |
1354 | + vif_ref2 = db.virtual_interface_create(c, |
1355 | + {'address': '90:12:34:56:78:90', |
1356 | + 'instance_id': instance_id2, |
1357 | + 'network_id': 1}) |
1358 | + vif_ref3 = db.virtual_interface_create(c, |
1359 | + {'address': '34:56:78:90:12:34', |
1360 | + 'instance_id': instance_id3, |
1361 | + 'network_id': 1}) |
1362 | + |
1363 | + # This will create IPv6 addresses of: |
1364 | + # 1: fd00::1034:56ff:fe78:9012 |
1365 | + # 20: fd00::9212:34ff:fe56:7890 |
1366 | + # 30: fd00::3656:78ff:fe90:1234 |
1367 | + |
1368 | + instances = self.compute_api.get_all(c, |
1369 | + search_opts={'ip6': '.*1034.*'}) |
1370 | + self.assertEqual(len(instances), 1) |
1371 | + self.assertEqual(instances[0].id, instance_id1) |
1372 | + |
1373 | + instances = self.compute_api.get_all(c, |
1374 | + search_opts={'ip6': '^fd00.*'}) |
1375 | + self.assertEqual(len(instances), 3) |
1376 | + instance_ids = [instance.id for instance in instances] |
1377 | + self.assertTrue(instance_id1 in instance_ids) |
1378 | + self.assertTrue(instance_id2 in instance_ids) |
1379 | + self.assertTrue(instance_id3 in instance_ids) |
1380 | + |
1381 | + instances = self.compute_api.get_all(c, |
1382 | + search_opts={'ip6': '^.*12.*34.*'}) |
1383 | + self.assertEqual(len(instances), 2) |
1384 | + instance_ids = [instance.id for instance in instances] |
1385 | + self.assertTrue(instance_id2 in instance_ids) |
1386 | + self.assertTrue(instance_id3 in instance_ids) |
1387 | + |
1388 | + db.virtual_interface_delete(c, vif_ref1['id']) |
1389 | + db.virtual_interface_delete(c, vif_ref2['id']) |
1390 | + db.virtual_interface_delete(c, vif_ref3['id']) |
1391 | + db.instance_destroy(c, instance_id1) |
1392 | + db.instance_destroy(c, instance_id2) |
1393 | + db.instance_destroy(c, instance_id3) |
1394 | + |
1395 | + def test_get_all_by_multiple_options_at_once(self): |
1396 | + """Test searching by multiple options at once""" |
1397 | + c = context.get_admin_context() |
1398 | + instance_id1 = self._create_instance({'display_name': 'woot'}) |
1399 | + instance_id2 = self._create_instance({ |
1400 | + 'display_name': 'woo', |
1401 | + 'id': 20}) |
1402 | + instance_id3 = self._create_instance({ |
1403 | + 'display_name': 'not-woot', |
1404 | + 'id': 30}) |
1405 | + |
1406 | + vif_ref1 = db.virtual_interface_create(c, |
1407 | + {'address': '12:34:56:78:90:12', |
1408 | + 'instance_id': instance_id1, |
1409 | + 'network_id': 1}) |
1410 | + vif_ref2 = db.virtual_interface_create(c, |
1411 | + {'address': '90:12:34:56:78:90', |
1412 | + 'instance_id': instance_id2, |
1413 | + 'network_id': 1}) |
1414 | + vif_ref3 = db.virtual_interface_create(c, |
1415 | + {'address': '34:56:78:90:12:34', |
1416 | + 'instance_id': instance_id3, |
1417 | + 'network_id': 1}) |
1418 | + |
1419 | + db.fixed_ip_create(c, |
1420 | + {'address': '1.1.1.1', |
1421 | + 'instance_id': instance_id1, |
1422 | + 'virtual_interface_id': vif_ref1['id']}) |
1423 | + db.fixed_ip_create(c, |
1424 | + {'address': '1.1.2.1', |
1425 | + 'instance_id': instance_id2, |
1426 | + 'virtual_interface_id': vif_ref2['id']}) |
1427 | + fix_addr = db.fixed_ip_create(c, |
1428 | + {'address': '1.1.3.1', |
1429 | + 'instance_id': instance_id3, |
1430 | + 'virtual_interface_id': vif_ref3['id']}) |
1431 | + fix_ref = db.fixed_ip_get_by_address(c, fix_addr) |
1432 | + flo_ref = db.floating_ip_create(c, |
1433 | + {'address': '10.0.0.2', |
1434 | + 'fixed_ip_id': fix_ref['id']}) |
1435 | + |
1436 | + # ip ends up matching 2nd octet here.. so all 3 match ip |
1437 | + # but 'name' only matches one |
1438 | + instances = self.compute_api.get_all(c, |
1439 | + search_opts={'ip': '.*\.1', 'name': 'not.*'}) |
1440 | + self.assertEqual(len(instances), 1) |
1441 | + self.assertEqual(instances[0].id, instance_id3) |
1442 | + |
1443 | + # ip ends up matching any ip with a '2' in it, so instances |
1444 | + # 2 and 3 match, but 'name' should only match #2 |
1446 | + instances = self.compute_api.get_all(c, |
1447 | + search_opts={'ip': '.*2', 'name': '^woo.*'}) |
1448 | + self.assertEqual(len(instances), 1) |
1449 | + self.assertEqual(instances[0].id, instance_id2) |
1450 | + |
1451 | + # same as above, but no match on name (name matches |
1452 | + # instance_id1, but the ip query doesn't) |
1453 | + instances = self.compute_api.get_all(c, |
1454 | + search_opts={'ip': '.*2.*', 'name': '^woot.*'}) |
1455 | + self.assertEqual(len(instances), 0) |
1456 | + |
1457 | + # ip matches all 3... ipv6 matches #2+#3...name matches #3 |
1458 | + instances = self.compute_api.get_all(c, |
1459 | + search_opts={'ip': '.*\.1', |
1460 | + 'name': 'not.*', |
1461 | + 'ip6': '^.*12.*34.*'}) |
1462 | + self.assertEqual(len(instances), 1) |
1463 | + self.assertEqual(instances[0].id, instance_id3) |
1464 | + |
1465 | + db.virtual_interface_delete(c, vif_ref1['id']) |
1466 | + db.virtual_interface_delete(c, vif_ref2['id']) |
1467 | + db.virtual_interface_delete(c, vif_ref3['id']) |
1468 | + db.floating_ip_destroy(c, '10.0.0.2') |
1469 | + db.instance_destroy(c, instance_id1) |
1470 | + db.instance_destroy(c, instance_id2) |
1471 | + db.instance_destroy(c, instance_id3) |
1472 | + |
1473 | + def test_get_all_by_image(self): |
1474 | + """Test searching instances by image""" |
1475 | + |
1476 | + c = context.get_admin_context() |
1477 | + instance_id1 = self._create_instance({'image_ref': '1234'}) |
1478 | + instance_id2 = self._create_instance({ |
1479 | + 'id': 2, |
1480 | + 'image_ref': '4567'}) |
1481 | + instance_id3 = self._create_instance({ |
1482 | + 'id': 10, |
1483 | + 'image_ref': '4567'}) |
1484 | + |
1485 | + instances = self.compute_api.get_all(c, |
1486 | + search_opts={'image': '123'}) |
1487 | + self.assertEqual(len(instances), 0) |
1488 | + |
1489 | + instances = self.compute_api.get_all(c, |
1490 | + search_opts={'image': '1234'}) |
1491 | + self.assertEqual(len(instances), 1) |
1492 | + self.assertEqual(instances[0].id, instance_id1) |
1493 | + |
1494 | + instances = self.compute_api.get_all(c, |
1495 | + search_opts={'image': '4567'}) |
1496 | + self.assertEqual(len(instances), 2) |
1497 | + instance_ids = [instance.id for instance in instances] |
1498 | + self.assertTrue(instance_id2 in instance_ids) |
1499 | + self.assertTrue(instance_id3 in instance_ids) |
1500 | + |
1501 | + # Test passing a list as search arg |
1502 | + instances = self.compute_api.get_all(c, |
1503 | + search_opts={'image': ['1234', '4567']}) |
1504 | + self.assertEqual(len(instances), 3) |
1505 | + |
1506 | + db.instance_destroy(c, instance_id1) |
1507 | + db.instance_destroy(c, instance_id2) |
1508 | + db.instance_destroy(c, instance_id3) |
1509 | + |
1510 | + def test_get_all_by_flavor(self): |
1511 | + """Test searching instances by flavor""" |
1512 | + |
1513 | + c = context.get_admin_context() |
1514 | + instance_id1 = self._create_instance({'instance_type_id': 1}) |
1515 | + instance_id2 = self._create_instance({ |
1516 | + 'id': 2, |
1517 | + 'instance_type_id': 2}) |
1518 | + instance_id3 = self._create_instance({ |
1519 | + 'id': 10, |
1520 | + 'instance_type_id': 2}) |
1521 | + |
1522 | + # NOTE(comstud): Migrations set up the instance_types table |
1523 | + # for us. Therefore, we assume the following is true for |
1524 | + # these tests: |
1525 | + # instance_type_id 1 == flavor 3 |
1526 | + # instance_type_id 2 == flavor 1 |
1527 | + # instance_type_id 3 == flavor 4 |
1528 | + # instance_type_id 4 == flavor 5 |
1529 | + # instance_type_id 5 == flavor 2 |
1530 | + |
1531 | + instances = self.compute_api.get_all(c, |
1532 | + search_opts={'flavor': 5}) |
1533 | + self.assertEqual(len(instances), 0) |
1534 | + |
1535 | + self.assertRaises(exception.FlavorNotFound, |
1536 | + self.compute_api.get_all, |
1537 | + c, search_opts={'flavor': 99}) |
1538 | + |
1539 | + instances = self.compute_api.get_all(c, |
1540 | + search_opts={'flavor': 3}) |
1541 | + self.assertEqual(len(instances), 1) |
1542 | + self.assertEqual(instances[0].id, instance_id1) |
1543 | + |
1544 | + instances = self.compute_api.get_all(c, |
1545 | + search_opts={'flavor': 1}) |
1546 | + self.assertEqual(len(instances), 2) |
1547 | + instance_ids = [instance.id for instance in instances] |
1548 | + self.assertTrue(instance_id2 in instance_ids) |
1549 | + self.assertTrue(instance_id3 in instance_ids) |
1550 | + |
1551 | + db.instance_destroy(c, instance_id1) |
1552 | + db.instance_destroy(c, instance_id2) |
1553 | + db.instance_destroy(c, instance_id3) |
1554 | + |
1555 | + def test_get_all_by_state(self): |
1556 | + """Test searching instances by state""" |
1557 | + |
1558 | + c = context.get_admin_context() |
1559 | + instance_id1 = self._create_instance({'state': power_state.SHUTDOWN}) |
1560 | + instance_id2 = self._create_instance({ |
1561 | + 'id': 2, |
1562 | + 'state': power_state.RUNNING}) |
1563 | + instance_id3 = self._create_instance({ |
1564 | + 'id': 10, |
1565 | + 'state': power_state.RUNNING}) |
1566 | + |
1567 | + instances = self.compute_api.get_all(c, |
1568 | + search_opts={'state': power_state.SUSPENDED}) |
1569 | + self.assertEqual(len(instances), 0) |
1570 | + |
1571 | + instances = self.compute_api.get_all(c, |
1572 | + search_opts={'state': power_state.SHUTDOWN}) |
1573 | + self.assertEqual(len(instances), 1) |
1574 | + self.assertEqual(instances[0].id, instance_id1) |
1575 | + |
1576 | + instances = self.compute_api.get_all(c, |
1577 | + search_opts={'state': power_state.RUNNING}) |
1578 | + self.assertEqual(len(instances), 2) |
1579 | + instance_ids = [instance.id for instance in instances] |
1580 | + self.assertTrue(instance_id2 in instance_ids) |
1581 | + self.assertTrue(instance_id3 in instance_ids) |
1582 | + |
1583 | + # Test passing a list as search arg |
1584 | + instances = self.compute_api.get_all(c, |
1585 | + search_opts={'state': [power_state.SHUTDOWN, |
1586 | + power_state.RUNNING]}) |
1587 | + self.assertEqual(len(instances), 3) |
1588 | + |
1589 | + db.instance_destroy(c, instance_id1) |
1590 | + db.instance_destroy(c, instance_id2) |
1591 | + db.instance_destroy(c, instance_id3) |
1592 | + |
1593 | @staticmethod |
1594 | def _parse_db_block_device_mapping(bdm_ref): |
1595 | attr_list = ('delete_on_termination', 'device_name', 'no_device', |
1596 | |
1597 | === modified file 'nova/tests/test_metadata.py' |
1598 | --- nova/tests/test_metadata.py 2011-07-23 07:57:04 +0000 |
1599 | +++ nova/tests/test_metadata.py 2011-08-09 20:27:18 +0000 |
1600 | @@ -43,17 +43,21 @@ |
1601 | 'reservation_id': 'r-xxxxxxxx', |
1602 | 'user_data': '', |
1603 | 'image_ref': 7, |
1604 | + 'fixed_ips': [], |
1605 | 'root_device_name': '/dev/sda1', |
1606 | 'hostname': 'test'}) |
1607 | |
1608 | def instance_get(*args, **kwargs): |
1609 | return self.instance |
1610 | |
1611 | + def instance_get_list(*args, **kwargs): |
1612 | + return [self.instance] |
1613 | + |
1614 | def floating_get(*args, **kwargs): |
1615 | return '99.99.99.99' |
1616 | |
1617 | self.stubs.Set(api, 'instance_get', instance_get) |
1618 | - self.stubs.Set(api, 'fixed_ip_get_instance', instance_get) |
1619 | + self.stubs.Set(api, 'instance_get_all_by_filters', instance_get_list) |
1620 | self.stubs.Set(api, 'instance_get_floating_address', floating_get) |
1621 | self.app = metadatarequesthandler.MetadataRequestHandler() |
1622 |
This seems like a pretty inefficient way to do filtering if all we're looking for is wildcard matching. Why return everything when SQLAlchemy should successfully provide filtering across multiple backends? Can this be done using something like...
query.filter(Instance.display_name.like('%test%'))
...rather than returning each row and doing a regex?
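For illustration, the LIKE-based approach the reviewer suggests could look like the sketch below. The model, column, and session setup here are invented stand-ins, not Nova's actual schema; the point is only that a `LIKE` filter is evaluated by the database, so only matching rows are returned, instead of fetching every row and applying `re.search()` in Python:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Instance(Base):
    """Minimal stand-in for Nova's Instance model (illustrative only)."""
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    display_name = Column(String)


engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Same display names the branch's tests use
    session.add_all([Instance(display_name=n)
                     for n in ('woot', 'woo', 'not-woot')])
    session.commit()

    # The LIKE pattern is evaluated inside the database, so only the
    # matching rows ('woot' and 'not-woot') cross the wire.
    matches = session.query(Instance).filter(
        Instance.display_name.like('%oot%')).all()
    names = sorted(i.display_name for i in matches)
```

One trade-off worth noting: `LIKE` only supports the `%` and `_` wildcards, not full regular expressions, and regex operators (e.g. MySQL's `REGEXP`) are not portable across backends, which is presumably why the branch does the regex matching in Python.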