Merge lp:~vladimir.p/nova/vsa into lp:~hudson-openstack/nova/trunk

Proposed by Vladimir Popovski
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1328
Merged at revision: 1502
Proposed branch: lp:~vladimir.p/nova/vsa
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4900 lines (+4498/-13)
29 files modified
bin/nova-manage (+477/-1)
bin/nova-vsa (+49/-0)
nova/api/openstack/contrib/virtual_storage_arrays.py (+606/-0)
nova/db/api.py (+35/-1)
nova/db/sqlalchemy/api.py (+102/-0)
nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py (+75/-0)
nova/db/sqlalchemy/migration.py (+1/-0)
nova/db/sqlalchemy/models.py (+34/-1)
nova/exception.py (+12/-0)
nova/flags.py (+12/-0)
nova/quota.py (+3/-2)
nova/scheduler/vsa.py (+535/-0)
nova/tests/api/openstack/contrib/test_vsa.py (+450/-0)
nova/tests/api/openstack/test_extensions.py (+1/-0)
nova/tests/scheduler/test_vsa_scheduler.py (+641/-0)
nova/tests/test_vsa.py (+182/-0)
nova/tests/test_vsa_volumes.py (+136/-0)
nova/virt/libvirt.xml.template (+3/-1)
nova/virt/libvirt/connection.py (+5/-0)
nova/volume/api.py (+14/-4)
nova/volume/driver.py (+272/-0)
nova/volume/manager.py (+78/-0)
nova/volume/volume_types.py (+40/-3)
nova/vsa/__init__.py (+18/-0)
nova/vsa/api.py (+411/-0)
nova/vsa/connection.py (+25/-0)
nova/vsa/fake.py (+22/-0)
nova/vsa/manager.py (+179/-0)
nova/vsa/utils.py (+80/-0)
To merge this branch: bzr merge lp:~vladimir.p/nova/vsa
Reviewer Review Type Date Requested Status
Matt Dietz (community) Abstain
Brian Lamar (community) Approve
Vish Ishaya (community) Approve
Soren Hansen (community) Approve
Review via email: mp+72983@code.launchpad.net

Description of the change

Virtual Storage Array (VSA) feature.
- new Virtual Storage Array (VSA) objects / OS API extensions / APIs / CLIs
- new schedulers for selecting nodes with particular volume capabilities
- new special volume driver
- report volume capabilities
- some fixes for volume types

Revision history for this message
Vladimir Popovski (vladimir.p) wrote :

I re-submitted the VSA merge proposal after removing the drive_type-related code and merging it with volume types/metadata.

I believe most (or actually all?) of your previous comments have been addressed.

In case you would like to take a look at the previous proposal:
https://code.launchpad.net/~vladimir.p/nova/vsa/+merge/68987

I really appreciate your feedback! I hope we will be able to land it tomorrow.

lp:~vladimir.p/nova/vsa updated
1325. By Vladimir Popovski

added debug prints for scheduler

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Nicely done, Vladimir. I just see a couple of issues:

573 === added directory 'nova/CA/newcerts'
574 === removed directory 'nova/CA/newcerts'
575 === added file 'nova/CA/newcerts/.placeholder'
576 === removed file 'nova/CA/newcerts/.placeholder'
577 === added directory 'nova/CA/private'
578 === removed directory 'nova/CA/private'
579 === added file 'nova/CA/private/.placeholder'
580 === removed file 'nova/CA/private/.placeholder'
581 === added directory 'nova/CA/projects'
582 === removed directory 'nova/CA/projects'
583 === added file 'nova/CA/projects/.gitignore'
584 --- nova/CA/projects/.gitignore 1970-01-01 00:00:00 +0000
585 +++ nova/CA/projects/.gitignore 2011-08-26 18:16:26 +0000
586 @@ -0,0 +1,1 @@
587 +*
588
589 === removed file 'nova/CA/projects/.gitignore'
590 --- nova/CA/projects/.gitignore 2010-07-28 08:32:40 +0000
591 +++ nova/CA/projects/.gitignore 1970-01-01 00:00:00 +0000
592 @@ -1,1 +0,0 @@
593 -*
594
595 === added file 'nova/CA/projects/.placeholder'
596 === removed file 'nova/CA/projects/.placeholder'
597 === added directory 'nova/CA/reqs'
598 === removed directory 'nova/CA/reqs'
599 === added file 'nova/CA/reqs/.gitignore'
600 --- nova/CA/reqs/.gitignore 1970-01-01 00:00:00 +0000
601 +++ nova/CA/reqs/.gitignore 2011-08-26 18:16:26 +0000
602 @@ -0,0 +1,1 @@
603 +*
604
605 === removed file 'nova/CA/reqs/.gitignore'
606 --- nova/CA/reqs/.gitignore 2010-07-28 08:32:40 +0000
607 +++ nova/CA/reqs/.gitignore 1970-01-01 00:00:00 +0000
608 @@ -1,1 +0,0 @@
609 -*
610
611 === added file 'nova/CA/reqs/.placeholder'
612 === removed file 'nova/CA/reqs/.placeholder'
613 === added file 'nova/api/openstack/contrib/virtual_storage_arrays.py'

Some weird stuff going on here. It might be nice to recreate the branch to get rid of these strange issues, i.e. start with trunk, copy all of the new files in, and put it in one commit.

4006 + def create_volumes(self, context, request_spec, availability_zone):
4007 + LOG.info(_("create_volumes called with req=%(request_spec)s, "\
4008 + "availability_zone=%(availability_zone)s"), locals())
4009 +

This doesn't look like it is actually implemented. It seems to just log its arguments, so perhaps remove it for now?
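One conventional alternative to a log-only stub like the one Vish flags is to fail loudly until the logic exists. This is a hedged sketch of that pattern, not the change that actually landed (revision 1327 simply removed the method):

```python
class VsaScheduler(object):
    """Minimal stand-in for the scheduler class under discussion."""

    def create_volumes(self, context, request_spec, availability_zone=None):
        # Raise instead of logging the arguments and silently returning,
        # so callers notice immediately that nothing was scheduled.
        raise NotImplementedError("create_volumes is not implemented yet")
```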

----

Great job separating out special cases from the code. It seems that there is no problem running without nova-vsa. I feel like we need some clear documentation that the vsa code is Experimental and is not a supported core component. I think a note that prints out when you start nova-vsa and a note in the vsa folder in docstrings would be good enough.
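A startup note of the kind suggested here could be as small as the following. This is a hypothetical sketch; the names (`EXPERIMENTAL_MSG`, `warn_experimental`) are illustrative and are not what revision 1327 added:

```python
import warnings

# Assumed wording, paraphrasing the review comment above.
EXPERIMENTAL_MSG = (
    "nova-vsa is EXPERIMENTAL and is not a supported core component. "
    "A VSA-capable image is required; see the nova/vsa docstrings.")


def warn_experimental():
    """Emit the experimental-status note when nova-vsa starts up."""
    warnings.warn(EXPERIMENTAL_MSG, stacklevel=2)
```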

It would also be good to mention in that note that a specific image is needed to use the functionality exposed, and other storage providers are invited to make images. I also think it is very important for this to have a generic image implementation that is open source. Even if it is just a silly command line version.

In general I'm worried about adding something to the project that is specific to a vendor, but since you guys have put so much effort into doing this the right way, I'm willing to assume that you will continue to generalize and make it possible for others to use this code.

Rest of nova-core, does that seem ok?

review: Needs Fixing
Revision history for this message
Brian Waldon (bcwaldon) wrote :

I definitely agree with you, Vish. I would, however, like to see some more documentation about this feature before we accept it.

Revision history for this message
Soren Hansen (soren) wrote :

> Nicely done, Vladimir. I just see a couple of issues:
>
> 573 === added directory 'nova/CA/newcerts'
> 574 === removed directory 'nova/CA/newcerts'
> 575 === added file 'nova/CA/newcerts/.placeholder'
> 576 === removed file 'nova/CA/newcerts/.placeholder'
> 577 === added directory 'nova/CA/private'
> 578 === removed directory 'nova/CA/private'
> 579 === added file 'nova/CA/private/.placeholder'
> 580 === removed file 'nova/CA/private/.placeholder'
> 581 === added directory 'nova/CA/projects'
> 582 === removed directory 'nova/CA/projects'
> 583 === added file 'nova/CA/projects/.gitignore'
[...]
> Some weird stuff going on here. It might be nice to recreate the branch to get
> rid of these strange issues, i.e. start with trunk, copy all of the new files
> in, and put it in one commit.

No need. Just do something like:
bzr revert -r -150 nova/CA/newcerts/.placeholder nova/CA/private/.placeholder nova/CA/projects/.*

-150 picked at random. It just needed to be a revision from before they got removed.

bzr assigns a file id to a file when you create it. If you rename the file, it keeps this file id. If you remove a file and later create another one with the same name, bzr considers it a new file and assigns a new file id to it. If you don't know this, you run into annoying, frustrating situations like this one. However, it lets you e.g. rename a file from "bar" to "baz" and rename another from "foo" to "bar", and bzr actually knows that this is what you did, instead of thinking you deleted "foo", created a brand new file "baz", and made some really funky changes to "bar". :)

bzr revert resurrects those files (with their ids) from the given revision.
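The rename-vs-recreate distinction Soren describes can be modelled in a few lines. This is a toy illustration of the behavior, not bzr's actual implementation:

```python
import itertools


class FileIdTracker:
    """Toy model of the file-id behavior described above."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self.ids = {}  # path -> file id

    def add(self, path):
        # A newly created path always gets a fresh id.
        self.ids[path] = next(self._next_id)

    def remove(self, path):
        del self.ids[path]

    def rename(self, old, new):
        # The id travels with the file, so history follows renames.
        self.ids[new] = self.ids.pop(old)


t = FileIdTracker()
t.add("foo")            # foo gets id 1
t.rename("foo", "bar")  # bar keeps id 1
t.remove("bar")
t.add("bar")            # a brand new file: id 2, history is disconnected
```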

> In general I'm worried about adding something to the project that is specific
> to a vendor, but since you guys have put so much effort into doing this the
> right way, I'm willing to assume that you will continue to generalize and make
> it possible for others to use this code.

I understand where you're coming from with this concern. However, this seems like a really good driver that others can expand on.

> Rest of nova-core, does that seem ok?

I'm ok with it. I don't see us gaining anything by having this live outside of Nova itself. And, I like the code and would love to see others doing similar stuff building on top of this rather than building something crappy themselves.

Approved (assuming you do the quick bzr thing above).

review: Approve
Revision history for this message
Christopher MacGown (0x44) wrote :

I would also like to see the format strings include the names within the string, the way we format log strings elsewhere in the codebase.
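The convention Christopher is referring to looks like this. A small sketch of the pattern, not a quote from the codebase:

```python
vsa_id = 42
status = "creating"

# Positional substitution: translators cannot reorder the arguments.
positional = "VSA %s is %s" % (vsa_id, status)

# Named substitution, as used for log strings elsewhere in nova; the
# mapping keys make the string self-documenting and reorderable.
named = "VSA %(vsa_id)s is %(status)s" % locals()

assert positional == named == "VSA 42 is creating"
```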

lp:~vladimir.p/nova/vsa updated
1326. By Vladimir Popovski

reverted CA files

1327. By Vladimir Popovski

removed create_volumes, added log & doc comment about experimental code

Revision history for this message
Matt Dietz (cerberus) wrote :

At this point, shouldn't this wait for Essex?

review: Needs Information
Revision history for this message
Vladimir Popovski (vladimir.p) wrote :

If it is possible, we would like to have it in Diablo. The initial proposal
was posted in July; if it is postponed to Essex, it will not be released
until April.
It would be great if other storage vendors were aware of this feature and
could use it (or part of it) as a reference. Putting it into Diablo will
increase its visibility.
Thanks.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Based on discussion in irc and bug filed for docs, I'm ok with this going in.

review: Approve
Revision history for this message
Brian Lamar (blamar) wrote :

I'm of the opinion that this is much better than the previous version (thank you for that!) and that we should approve this and then decide on it getting into Essex or Diablo.

So basically, I'm approving this for trunk, but you'll have to get a FFE from ttx to get this into Diablo if I remember correctly.

Thanks for your hard work on this.

review: Approve
Revision history for this message
Vish Ishaya (vishvananda) wrote :

This was granted an FFE on the condition that it was approved by nova-core by today (Friday), so I think it is good to go in.

lp:~vladimir.p/nova/vsa updated
1328. By Vladimir Popovski

changed format string in nova-manage

Revision history for this message
Matt Dietz (cerberus) wrote :

Reconciled in os-dev. I'm going to abstain. Looks like there are enough reviews.

review: Abstain

Preview Diff

1=== modified file 'bin/nova-manage'
2--- bin/nova-manage 2011-08-24 21:01:33 +0000
3+++ bin/nova-manage 2011-08-26 22:09:26 +0000
4@@ -53,6 +53,7 @@
5 CLI interface for nova management.
6 """
7
8+import ast
9 import gettext
10 import glob
11 import json
12@@ -85,11 +86,13 @@
13 from nova import rpc
14 from nova import utils
15 from nova import version
16+from nova import vsa
17 from nova.api.ec2 import ec2utils
18 from nova.auth import manager
19 from nova.cloudpipe import pipelib
20 from nova.compute import instance_types
21 from nova.db import migration
22+from nova.volume import volume_types
23
24 FLAGS = flags.FLAGS
25 flags.DECLARE('fixed_range', 'nova.network.manager')
26@@ -1097,6 +1100,477 @@
27 self.list()
28
29
30+class VsaCommands(object):
31+ """Methods for dealing with VSAs"""
32+
33+ def __init__(self, *args, **kwargs):
34+ self.manager = manager.AuthManager()
35+ self.vsa_api = vsa.API()
36+ self.context = context.get_admin_context()
37+
38+ self._format_str_vsa = "%(id)-5s %(vsa_id)-15s %(name)-25s "\
39+ "%(type)-10s %(vcs)-6s %(drives)-9s %(stat)-10s "\
40+ "%(az)-10s %(time)-10s"
41+ self._format_str_volume = "\t%(id)-4s %(name)-15s %(size)-5s "\
42+ "%(stat)-10s %(att)-20s %(time)s"
43+ self._format_str_drive = "\t%(id)-4s %(name)-15s %(size)-5s "\
44+ "%(stat)-10s %(host)-20s %(type)-4s %(tname)-10s %(time)s"
45+ self._format_str_instance = "\t%(id)-4s %(name)-10s %(dname)-20s "\
46+ "%(image)-12s %(type)-10s %(fl_ip)-15s %(fx_ip)-15s "\
47+ "%(stat)-10s %(host)-15s %(time)s"
48+
49+ def _print_vsa_header(self):
50+ print self._format_str_vsa %\
51+ dict(id=_('ID'),
52+ vsa_id=_('vsa_id'),
53+ name=_('displayName'),
54+ type=_('vc_type'),
55+ vcs=_('vc_cnt'),
56+ drives=_('drive_cnt'),
57+ stat=_('status'),
58+ az=_('AZ'),
59+ time=_('createTime'))
60+
61+ def _print_vsa(self, vsa):
62+ print self._format_str_vsa %\
63+ dict(id=vsa['id'],
64+ vsa_id=vsa['name'],
65+ name=vsa['display_name'],
66+ type=vsa['vsa_instance_type'].get('name', None),
67+ vcs=vsa['vc_count'],
68+ drives=vsa['vol_count'],
69+ stat=vsa['status'],
70+ az=vsa['availability_zone'],
71+ time=str(vsa['created_at']))
72+
73+ def _print_volume_header(self):
74+ print _(' === Volumes ===')
75+ print self._format_str_volume %\
76+ dict(id=_('ID'),
77+ name=_('name'),
78+ size=_('size'),
79+ stat=_('status'),
80+ att=_('attachment'),
81+ time=_('createTime'))
82+
83+ def _print_volume(self, vol):
84+ print self._format_str_volume %\
85+ dict(id=vol['id'],
86+ name=vol['display_name'] or vol['name'],
87+ size=vol['size'],
88+ stat=vol['status'],
89+ att=vol['attach_status'],
90+ time=str(vol['created_at']))
91+
92+ def _print_drive_header(self):
93+ print _(' === Drives ===')
94+ print self._format_str_drive %\
95+ dict(id=_('ID'),
96+ name=_('name'),
97+ size=_('size'),
98+ stat=_('status'),
99+ host=_('host'),
100+ type=_('type'),
101+ tname=_('typeName'),
102+ time=_('createTime'))
103+
104+ def _print_drive(self, drive):
105+ if drive['volume_type_id'] is not None and drive.get('volume_type'):
106+ drive_type_name = drive['volume_type'].get('name')
107+ else:
108+ drive_type_name = ''
109+
110+ print self._format_str_drive %\
111+ dict(id=drive['id'],
112+ name=drive['display_name'],
113+ size=drive['size'],
114+ stat=drive['status'],
115+ host=drive['host'],
116+ type=drive['volume_type_id'],
117+ tname=drive_type_name,
118+ time=str(drive['created_at']))
119+
120+ def _print_instance_header(self):
121+ print _(' === Instances ===')
122+ print self._format_str_instance %\
123+ dict(id=_('ID'),
124+ name=_('name'),
125+ dname=_('disp_name'),
126+ image=_('image'),
127+ type=_('type'),
128+ fl_ip=_('floating_IP'),
129+ fx_ip=_('fixed_IP'),
130+ stat=_('status'),
131+ host=_('host'),
132+ time=_('createTime'))
133+
134+ def _print_instance(self, vc):
135+
136+ fixed_addr = None
137+ floating_addr = None
138+ if vc['fixed_ips']:
139+ fixed = vc['fixed_ips'][0]
140+ fixed_addr = fixed['address']
141+ if fixed['floating_ips']:
142+ floating_addr = fixed['floating_ips'][0]['address']
143+ floating_addr = floating_addr or fixed_addr
144+
145+ print self._format_str_instance %\
146+ dict(id=vc['id'],
147+ name=ec2utils.id_to_ec2_id(vc['id']),
148+ dname=vc['display_name'],
149+ image=('ami-%08x' % int(vc['image_ref'])),
150+ type=vc['instance_type']['name'],
151+ fl_ip=floating_addr,
152+ fx_ip=fixed_addr,
153+ stat=vc['state_description'],
154+ host=vc['host'],
155+ time=str(vc['created_at']))
156+
157+ def _list(self, context, vsas, print_drives=False,
158+ print_volumes=False, print_instances=False):
159+ if vsas:
160+ self._print_vsa_header()
161+
162+ for vsa in vsas:
163+ self._print_vsa(vsa)
164+ vsa_id = vsa.get('id')
165+
166+ if print_instances:
167+ instances = self.vsa_api.get_all_vsa_instances(context, vsa_id)
168+ if instances:
169+ print
170+ self._print_instance_header()
171+ for instance in instances:
172+ self._print_instance(instance)
173+ print
174+
175+ if print_drives:
176+ drives = self.vsa_api.get_all_vsa_drives(context, vsa_id)
177+ if drives:
178+ self._print_drive_header()
179+ for drive in drives:
180+ self._print_drive(drive)
181+ print
182+
183+ if print_volumes:
184+ volumes = self.vsa_api.get_all_vsa_volumes(context, vsa_id)
185+ if volumes:
186+ self._print_volume_header()
187+ for volume in volumes:
188+ self._print_volume(volume)
189+ print
190+
191+ @args('--storage', dest='storage',
192+ metavar="[{'drive_name': 'type', 'num_drives': N, 'size': M},..]",
193+ help='Initial storage allocation for VSA')
194+ @args('--name', dest='name', metavar="<name>", help='VSA name')
195+ @args('--description', dest='description', metavar="<description>",
196+ help='VSA description')
197+ @args('--vc', dest='vc_count', metavar="<number>", help='Number of VCs')
198+ @args('--instance_type', dest='instance_type_name', metavar="<name>",
199+ help='Instance type name')
200+ @args('--image', dest='image_name', metavar="<name>", help='Image name')
201+ @args('--shared', dest='shared', action="store_true", default=False,
202+ help='Use shared drives')
203+ @args('--az', dest='az', metavar="<zone:host>", help='Availability zone')
204+ @args('--user', dest="user_id", metavar='<User name>',
205+ help='User name')
206+ @args('--project', dest="project_id", metavar='<Project name>',
207+ help='Project name')
208+ def create(self, storage='[]', name=None, description=None, vc_count=1,
209+ instance_type_name=None, image_name=None, shared=None,
210+ az=None, user_id=None, project_id=None):
211+ """Create a VSA."""
212+
213+ if project_id is None:
214+ try:
215+ project_id = os.getenv("EC2_ACCESS_KEY").split(':')[1]
216+ except Exception as exc:
217+ print _("Failed to retrieve project id: %(exc)s") % locals()
218+ raise
219+
220+ if user_id is None:
221+ try:
222+ project = self.manager.get_project(project_id)
223+ user_id = project.project_manager_id
224+ except Exception as exc:
225+ print _("Failed to retrieve user info: %(exc)s") % locals()
226+ raise
227+
228+ is_admin = self.manager.is_admin(user_id)
229+ ctxt = context.RequestContext(user_id, project_id, is_admin)
230+ if not is_admin and \
231+ not self.manager.is_project_member(user_id, project_id):
232+ msg = _("%(user_id)s must be an admin or a "
233+ "member of %(project_id)s")
234+ LOG.warn(msg % locals())
235+ raise ValueError(msg % locals())
236+
237+ # Sanity check for storage string
238+ storage_list = []
239+ if storage is not None:
240+ try:
241+ storage_list = ast.literal_eval(storage)
242+ except:
243+ print _("Invalid string format %s") % storage
244+ raise
245+
246+ for node in storage_list:
247+ if ('drive_name' not in node) or ('num_drives' not in node):
248+ print _("Invalid string format for element %s. "
249+ "Expecting keys 'drive_name' & 'num_drives'") \
250+ % str(node)
251+ raise KeyError
252+
253+ if instance_type_name == '':
254+ instance_type_name = None
255+ instance_type = instance_types.get_instance_type_by_name(
256+ instance_type_name)
257+
258+ if image_name == '':
259+ image_name = None
260+
261+ if shared in [None, False, "--full_drives"]:
262+ shared = False
263+ elif shared in [True, "--shared"]:
264+ shared = True
265+ else:
266+ raise ValueError(_('Shared parameter should be set either to '\
267+ '--shared or --full_drives'))
268+
269+ values = {
270+ 'display_name': name,
271+ 'display_description': description,
272+ 'vc_count': int(vc_count),
273+ 'instance_type': instance_type,
274+ 'image_name': image_name,
275+ 'availability_zone': az,
276+ 'storage': storage_list,
277+ 'shared': shared,
278+ }
279+
280+ result = self.vsa_api.create(ctxt, **values)
281+ self._list(ctxt, [result])
282+
283+ @args('--id', dest='vsa_id', metavar="<vsa_id>", help='VSA ID')
284+ @args('--name', dest='name', metavar="<name>", help='VSA name')
285+ @args('--description', dest='description', metavar="<description>",
286+ help='VSA description')
287+ @args('--vc', dest='vc_count', metavar="<number>", help='Number of VCs')
288+ def update(self, vsa_id, name=None, description=None, vc_count=None):
289+ """Updates name/description of vsa and number of VCs."""
290+
291+ values = {}
292+ if name is not None:
293+ values['display_name'] = name
294+ if description is not None:
295+ values['display_description'] = description
296+ if vc_count is not None:
297+ values['vc_count'] = int(vc_count)
298+
299+ vsa_id = ec2utils.ec2_id_to_id(vsa_id)
300+ result = self.vsa_api.update(self.context, vsa_id=vsa_id, **values)
301+ self._list(self.context, [result])
302+
303+ @args('--id', dest='vsa_id', metavar="<vsa_id>", help='VSA ID')
304+ def delete(self, vsa_id):
305+ """Delete a VSA."""
306+ vsa_id = ec2utils.ec2_id_to_id(vsa_id)
307+ self.vsa_api.delete(self.context, vsa_id)
308+
309+ @args('--id', dest='vsa_id', metavar="<vsa_id>",
310+ help='VSA ID (optional)')
311+ @args('--all', dest='all', action="store_true", default=False,
312+ help='Show all available details')
313+ @args('--drives', dest='drives', action="store_true",
314+ help='Include drive-level details')
315+ @args('--volumes', dest='volumes', action="store_true",
316+ help='Include volume-level details')
317+ @args('--instances', dest='instances', action="store_true",
318+ help='Include instance-level details')
319+ def list(self, vsa_id=None, all=False,
320+ drives=False, volumes=False, instances=False):
321+ """Describe all available VSAs (or particular one)."""
322+
323+ vsas = []
324+ if vsa_id is not None:
325+ internal_id = ec2utils.ec2_id_to_id(vsa_id)
326+ vsa = self.vsa_api.get(self.context, internal_id)
327+ vsas.append(vsa)
328+ else:
329+ vsas = self.vsa_api.get_all(self.context)
330+
331+ if all:
332+ drives = volumes = instances = True
333+
334+ self._list(self.context, vsas, drives, volumes, instances)
335+
336+ def update_capabilities(self):
337+ """Forces a capabilities update on all nova-volume nodes."""
338+
339+ rpc.fanout_cast(context.get_admin_context(),
340+ FLAGS.volume_topic,
341+ {"method": "notification",
342+ "args": {"event": "startup"}})
343+
344+
345+class VsaDriveTypeCommands(object):
346+ """Methods for dealing with VSA drive types"""
347+
348+ def __init__(self, *args, **kwargs):
349+ super(VsaDriveTypeCommands, self).__init__(*args, **kwargs)
350+ self.context = context.get_admin_context()
351+ self._drive_type_template = '%s_%sGB_%sRPM'
352+
353+ def _list(self, drives):
354+ format_str = "%-5s %-30s %-10s %-10s %-10s %-20s %-10s %s"
355+ if len(drives):
356+ print format_str %\
357+ (_('ID'),
358+ _('name'),
359+ _('type'),
360+ _('size_gb'),
361+ _('rpm'),
362+ _('capabilities'),
363+ _('visible'),
364+ _('createTime'))
365+
366+ for name, vol_type in drives.iteritems():
367+ drive = vol_type.get('extra_specs')
368+ print format_str %\
369+ (str(vol_type['id']),
370+ drive['drive_name'],
371+ drive['drive_type'],
372+ drive['drive_size'],
373+ drive['drive_rpm'],
374+ drive.get('capabilities', ''),
375+ str(drive.get('visible', '')),
376+ str(vol_type['created_at']))
377+
378+ @args('--type', dest='type', metavar="<type>",
379+ help='Drive type (SATA, SAS, SSD, etc.)')
380+ @args('--size', dest='size_gb', metavar="<gb>", help='Drive size in GB')
381+ @args('--rpm', dest='rpm', metavar="<rpm>", help='RPM')
382+ @args('--capabilities', dest='capabilities', default=None,
383+ metavar="<string>", help='Different capabilities')
384+ @args('--hide', dest='hide', action="store_true", default=False,
385+ help='Show or hide drive')
386+ @args('--name', dest='name', metavar="<name>", help='Drive name')
387+ def create(self, type, size_gb, rpm, capabilities=None,
388+ hide=False, name=None):
389+ """Create drive type."""
390+
391+ hide = True if hide in [True, "True", "--hide", "hide"] else False
392+
393+ if name is None:
394+ name = self._drive_type_template % (type, size_gb, rpm)
395+
396+ extra_specs = {'type': 'vsa_drive',
397+ 'drive_name': name,
398+ 'drive_type': type,
399+ 'drive_size': size_gb,
400+ 'drive_rpm': rpm,
401+ 'visible': True,
402+ }
403+ if hide:
404+ extra_specs['visible'] = False
405+
406+ if capabilities is not None and capabilities != '':
407+ extra_specs['capabilities'] = capabilities
408+
409+ volume_types.create(self.context, name, extra_specs)
410+ result = volume_types.get_volume_type_by_name(self.context, name)
411+ self._list({name: result})
412+
413+ @args('--name', dest='name', metavar="<name>", help='Drive name')
414+ @args('--purge', action="store_true", dest='purge', default=False,
415+ help='purge record from database')
416+ def delete(self, name, purge):
417+ """Marks drive types as deleted (optionally purging the record)"""
418+ try:
419+ if purge:
420+ volume_types.purge(self.context, name)
421+ verb = "purged"
422+ else:
423+ volume_types.destroy(self.context, name)
424+ verb = "deleted"
425+ except exception.ApiError:
426+ print "Valid volume type name is required"
427+ sys.exit(1)
428+ except exception.DBError, e:
429+ print "DB Error: %s" % e
430+ sys.exit(2)
431+ except:
432+ sys.exit(3)
433+ else:
434+ print "%s %s" % (name, verb)
435+
436+ @args('--all', dest='all', action="store_true", default=False,
437+ help='Show all drives (including invisible)')
438+ @args('--name', dest='name', metavar="<name>",
439+ help='Show only specified drive')
440+ def list(self, all=False, name=None):
441+ """Describe all available VSA drive types (or particular one)."""
442+
443+ all = False if all in ["--all", False, "False"] else True
444+
445+ search_opts = {'extra_specs': {'type': 'vsa_drive'}}
446+ if name is not None:
447+ search_opts['extra_specs']['name'] = name
448+
449+ if all == False:
450+ search_opts['extra_specs']['visible'] = '1'
451+
452+ drives = volume_types.get_all_types(self.context,
453+ search_opts=search_opts)
454+ self._list(drives)
455+
456+ @args('--name', dest='name', metavar="<name>", help='Drive name')
457+ @args('--type', dest='type', metavar="<type>",
458+ help='Drive type (SATA, SAS, SSD, etc.)')
459+ @args('--size', dest='size_gb', metavar="<gb>", help='Drive size in GB')
460+ @args('--rpm', dest='rpm', metavar="<rpm>", help='RPM')
461+ @args('--capabilities', dest='capabilities', default=None,
462+ metavar="<string>", help='Different capabilities')
463+ @args('--visible', dest='visible',
464+ metavar="<show|hide>", help='Show or hide drive')
465+ def update(self, name, type=None, size_gb=None, rpm=None,
466+ capabilities=None, visible=None):
467+ """Update drive type."""
468+
469+ volume_type = volume_types.get_volume_type_by_name(self.context, name)
470+
471+ extra_specs = {'type': 'vsa_drive'}
472+
473+ if type:
474+ extra_specs['drive_type'] = type
475+
476+ if size_gb:
477+ extra_specs['drive_size'] = size_gb
478+
479+ if rpm:
480+ extra_specs['drive_rpm'] = rpm
481+
482+ if capabilities:
483+ extra_specs['capabilities'] = capabilities
484+
485+ if visible is not None:
486+ if visible in ["show", True, "True"]:
487+ extra_specs['visible'] = True
488+ elif visible in ["hide", False, "False"]:
489+ extra_specs['visible'] = False
490+ else:
491+ raise ValueError(_('visible parameter should be set to '\
492+ 'show or hide'))
493+
494+ db.api.volume_type_extra_specs_update_or_create(self.context,
495+ volume_type['id'],
496+ extra_specs)
497+ result = volume_types.get_volume_type_by_name(self.context, name)
498+ self._list({name: result})
499+
500+
501 class VolumeCommands(object):
502 """Methods for dealing with a cloud in an odd state"""
503
504@@ -1483,6 +1957,7 @@
505 ('agent', AgentBuildCommands),
506 ('config', ConfigCommands),
507 ('db', DbCommands),
508+ ('drive', VsaDriveTypeCommands),
509 ('fixed', FixedIpCommands),
510 ('flavor', InstanceTypeCommands),
511 ('floating', FloatingIpCommands),
512@@ -1498,7 +1973,8 @@
513 ('version', VersionCommands),
514 ('vm', VmCommands),
515 ('volume', VolumeCommands),
516- ('vpn', VpnCommands)]
517+ ('vpn', VpnCommands),
518+ ('vsa', VsaCommands)]
519
520
521 def lazy_match(name, key_value_tuples):
522
523=== added file 'bin/nova-vsa'
524--- bin/nova-vsa 1970-01-01 00:00:00 +0000
525+++ bin/nova-vsa 2011-08-26 22:09:26 +0000
526@@ -0,0 +1,49 @@
527+#!/usr/bin/env python
528+# vim: tabstop=4 shiftwidth=4 softtabstop=4
529+
530+# Copyright (c) 2011 Zadara Storage Inc.
531+# Copyright (c) 2011 OpenStack LLC.
532+#
533+#
534+# Licensed under the Apache License, Version 2.0 (the "License"); you may
535+# not use this file except in compliance with the License. You may obtain
536+# a copy of the License at
537+#
538+# http://www.apache.org/licenses/LICENSE-2.0
539+#
540+# Unless required by applicable law or agreed to in writing, software
541+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
542+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
543+# License for the specific language governing permissions and limitations
544+# under the License.
545+
546+"""Starter script for Nova VSA."""
547+
548+import eventlet
549+eventlet.monkey_patch()
550+
551+import os
552+import sys
553+
554+# If ../nova/__init__.py exists, add ../ to Python search path, so that
555+# it will override what happens to be installed in /usr/(local/)lib/python...
556+possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
557+ os.pardir,
558+ os.pardir))
559+if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
560+ sys.path.insert(0, possible_topdir)
561+
562+
563+from nova import flags
564+from nova import log as logging
565+from nova import service
566+from nova import utils
567+
568+if __name__ == '__main__':
569+ utils.default_flagfile()
570+ flags.FLAGS(sys.argv)
571+ logging.setup()
572+ utils.monkey_patch()
573+ server = service.Service.create(binary='nova-vsa')
574+ service.serve(server)
575+ service.wait()
576
577=== added file 'nova/api/openstack/contrib/virtual_storage_arrays.py'
578--- nova/api/openstack/contrib/virtual_storage_arrays.py 1970-01-01 00:00:00 +0000
579+++ nova/api/openstack/contrib/virtual_storage_arrays.py 2011-08-26 22:09:26 +0000
580@@ -0,0 +1,606 @@
581+# vim: tabstop=4 shiftwidth=4 softtabstop=4
582+
583+# Copyright (c) 2011 Zadara Storage Inc.
584+# Copyright (c) 2011 OpenStack LLC.
585+#
586+# Licensed under the Apache License, Version 2.0 (the "License"); you may
587+# not use this file except in compliance with the License. You may obtain
588+# a copy of the License at
589+#
590+# http://www.apache.org/licenses/LICENSE-2.0
591+#
592+# Unless required by applicable law or agreed to in writing, software
593+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
594+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
595+# License for the specific language governing permissions and limitations
596+# under the License.
597+
598+"""The virtual storage array extension."""
599+
600+
601+from webob import exc
602+
603+from nova import vsa
604+from nova import volume
605+from nova import compute
606+from nova import network
607+from nova import db
608+from nova import quota
609+from nova import exception
610+from nova import log as logging
611+from nova.api.openstack import common
612+from nova.api.openstack import extensions
613+from nova.api.openstack import faults
614+from nova.api.openstack import wsgi
615+from nova.api.openstack import servers
616+from nova.api.openstack.contrib import volumes
617+from nova.compute import instance_types
618+
619+from nova import flags
620+FLAGS = flags.FLAGS
621+
622+LOG = logging.getLogger("nova.api.vsa")
623+
624+
625+def _vsa_view(context, vsa, details=False, instances=None):
626+ """Map keys for vsa summary/detailed view."""
627+ d = {}
628+
629+ d['id'] = vsa.get('id')
630+ d['name'] = vsa.get('name')
631+ d['displayName'] = vsa.get('display_name')
632+ d['displayDescription'] = vsa.get('display_description')
633+
634+ d['createTime'] = vsa.get('created_at')
635+ d['status'] = vsa.get('status')
636+
637+ if 'vsa_instance_type' in vsa:
638+ d['vcType'] = vsa['vsa_instance_type'].get('name', None)
639+ else:
640+ d['vcType'] = vsa['instance_type_id']
641+
642+ d['vcCount'] = vsa.get('vc_count')
643+ d['driveCount'] = vsa.get('vol_count')
644+
645+ d['ipAddress'] = None
646+ for instance in instances:
647+ fixed_addr = None
648+ floating_addr = None
649+ if instance['fixed_ips']:
650+ fixed = instance['fixed_ips'][0]
651+ fixed_addr = fixed['address']
652+ if fixed['floating_ips']:
653+ floating_addr = fixed['floating_ips'][0]['address']
654+
655+ if floating_addr:
656+ d['ipAddress'] = floating_addr
657+ break
658+ else:
659+ d['ipAddress'] = d['ipAddress'] or fixed_addr
660+
661+ return d
662+
663+
664+class VsaController(object):
665+ """The Virtual Storage Array API controller for the OpenStack API."""
666+
667+ _serialization_metadata = {
668+ 'application/xml': {
669+ "attributes": {
670+ "vsa": [
671+ "id",
672+ "name",
673+ "displayName",
674+ "displayDescription",
675+ "createTime",
676+ "status",
677+ "vcType",
678+ "vcCount",
679+ "driveCount",
680+ "ipAddress",
681+ ]}}}
682+
683+ def __init__(self):
684+ self.vsa_api = vsa.API()
685+ self.compute_api = compute.API()
686+ self.network_api = network.API()
687+ super(VsaController, self).__init__()
688+
689+ def _get_instances_by_vsa_id(self, context, id):
690+ return self.compute_api.get_all(context,
691+ search_opts={'metadata': dict(vsa_id=str(id))})
692+
693+ def _items(self, req, details):
694+ """Return summary or detailed list of VSAs."""
695+ context = req.environ['nova.context']
696+ vsas = self.vsa_api.get_all(context)
697+ limited_list = common.limited(vsas, req)
698+
699+ vsa_list = []
700+ for vsa in limited_list:
701+ instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
702+ vsa_list.append(_vsa_view(context, vsa, details, instances))
703+ return {'vsaSet': vsa_list}
704+
705+ def index(self, req):
706+ """Return a short list of VSAs."""
707+ return self._items(req, details=False)
708+
709+ def detail(self, req):
710+ """Return a detailed list of VSAs."""
711+ return self._items(req, details=True)
712+
713+ def show(self, req, id):
714+ """Return data about the given VSA."""
715+ context = req.environ['nova.context']
716+
717+ try:
718+ vsa = self.vsa_api.get(context, vsa_id=id)
719+ except exception.NotFound:
720+ return faults.Fault(exc.HTTPNotFound())
721+
722+ instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
723+ return {'vsa': _vsa_view(context, vsa, True, instances)}
724+
725+ def create(self, req, body):
726+ """Create a new VSA."""
727+ context = req.environ['nova.context']
728+
729+ if not body or 'vsa' not in body:
730+ LOG.debug(_("No body provided"), context=context)
731+ return faults.Fault(exc.HTTPUnprocessableEntity())
732+
733+ vsa = body['vsa']
734+
735+ display_name = vsa.get('displayName')
736+ vc_type = vsa.get('vcType', FLAGS.default_vsa_instance_type)
737+ try:
738+ instance_type = instance_types.get_instance_type_by_name(vc_type)
739+ except exception.NotFound:
740+ return faults.Fault(exc.HTTPNotFound())
741+
742+ LOG.audit(_("Create VSA %(display_name)s of type %(vc_type)s"),
743+ locals(), context=context)
744+
745+ args = dict(display_name=display_name,
746+ display_description=vsa.get('displayDescription'),
747+ instance_type=instance_type,
748+ storage=vsa.get('storage'),
749+ shared=vsa.get('shared'),
750+ availability_zone=vsa.get('placement', {}).\
751+ get('AvailabilityZone'))
752+
753+ vsa = self.vsa_api.create(context, **args)
754+
755+ instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
756+ return {'vsa': _vsa_view(context, vsa, True, instances)}
757+
758+ def delete(self, req, id):
759+ """Delete a VSA."""
760+ context = req.environ['nova.context']
761+
762+ LOG.audit(_("Delete VSA with id: %s"), id, context=context)
763+
764+ try:
765+ self.vsa_api.delete(context, vsa_id=id)
766+ except exception.NotFound:
767+ return faults.Fault(exc.HTTPNotFound())
768+
769+ def associate_address(self, req, id, body):
770+ """ /zadr-vsa/{vsa_id}/associate_address
771+        auto or manually associate an IP address with the VSA
772+ """
773+ context = req.environ['nova.context']
774+
775+ if body is None:
776+ ip = 'auto'
777+ else:
778+ ip = body.get('ipAddress', 'auto')
779+
780+ LOG.audit(_("Associate address %(ip)s to VSA %(id)s"),
781+ locals(), context=context)
782+
783+ try:
784+ instances = self._get_instances_by_vsa_id(context, id)
785+ if instances is None or len(instances) == 0:
786+ return faults.Fault(exc.HTTPNotFound())
787+
788+ for instance in instances:
789+ self.network_api.allocate_for_instance(context, instance,
790+ vpn=False)
791+ # Placeholder
792+ return
793+
794+ except exception.NotFound:
795+ return faults.Fault(exc.HTTPNotFound())
796+
797+ def disassociate_address(self, req, id, body):
798+ """ /zadr-vsa/{vsa_id}/disassociate_address
799+ auto or manually associate an IP to VSA
800+ """
801+ context = req.environ['nova.context']
802+
803+ if body is None:
804+ ip = 'auto'
805+ else:
806+ ip = body.get('ipAddress', 'auto')
807+
808+ LOG.audit(_("Disassociate address from VSA %(id)s"),
809+ locals(), context=context)
810+ # Placeholder
811+
812+
813+class VsaVolumeDriveController(volumes.VolumeController):
814+ """The base class for VSA volumes & drives.
815+
816+    A child resource of the VSA object. Allows operations on
817+    volumes and drives created to/from a particular VSA.
818+
819+ """
820+
821+ _serialization_metadata = {
822+ 'application/xml': {
823+ "attributes": {
824+ "volume": [
825+ "id",
826+ "name",
827+ "status",
828+ "size",
829+ "availabilityZone",
830+ "createdAt",
831+ "displayName",
832+ "displayDescription",
833+ "vsaId",
834+ ]}}}
835+
836+ def __init__(self):
837+ self.volume_api = volume.API()
838+ self.vsa_api = vsa.API()
839+ super(VsaVolumeDriveController, self).__init__()
840+
841+ def _translation(self, context, vol, vsa_id, details):
842+ if details:
843+ translation = volumes._translate_volume_detail_view
844+ else:
845+ translation = volumes._translate_volume_summary_view
846+
847+ d = translation(context, vol)
848+ d['vsaId'] = vsa_id
849+ d['name'] = vol['name']
850+ return d
851+
852+ def _check_volume_ownership(self, context, vsa_id, id):
853+ obj = self.object
854+ try:
855+ volume_ref = self.volume_api.get(context, volume_id=id)
856+ except exception.NotFound:
857+ LOG.error(_("%(obj)s with ID %(id)s not found"), locals())
858+ raise
859+
860+ own_vsa_id = self.volume_api.get_volume_metadata_value(volume_ref,
861+ self.direction)
862+ if own_vsa_id != vsa_id:
863+ LOG.error(_("%(obj)s with ID %(id)s belongs to VSA %(own_vsa_id)s"\
864+ " and not to VSA %(vsa_id)s."), locals())
865+ raise exception.Invalid()
866+
867+ def _items(self, req, vsa_id, details):
868+        """Return summary or detailed list of volumes for a particular VSA."""
869+ context = req.environ['nova.context']
870+
871+ vols = self.volume_api.get_all(context,
872+ search_opts={'metadata': {self.direction: str(vsa_id)}})
873+ limited_list = common.limited(vols, req)
874+
875+ res = [self._translation(context, vol, vsa_id, details) \
876+ for vol in limited_list]
877+
878+ return {self.objects: res}
879+
880+ def index(self, req, vsa_id):
881+        """Return a short list of volumes created from a particular VSA."""
882+ LOG.audit(_("Index. vsa_id=%(vsa_id)s"), locals())
883+ return self._items(req, vsa_id, details=False)
884+
885+ def detail(self, req, vsa_id):
886+        """Return a detailed list of volumes created from a particular VSA."""
887+ LOG.audit(_("Detail. vsa_id=%(vsa_id)s"), locals())
888+ return self._items(req, vsa_id, details=True)
889+
890+ def create(self, req, vsa_id, body):
891+ """Create a new volume from VSA."""
892+ LOG.audit(_("Create. vsa_id=%(vsa_id)s, body=%(body)s"), locals())
893+ context = req.environ['nova.context']
894+
895+ if not body:
896+ return faults.Fault(exc.HTTPUnprocessableEntity())
897+
898+ vol = body[self.object]
899+ size = vol['size']
900+ LOG.audit(_("Create volume of %(size)s GB from VSA ID %(vsa_id)s"),
901+ locals(), context=context)
902+ try:
903+ # create is supported for volumes only (drives created through VSA)
904+ volume_type = self.vsa_api.get_vsa_volume_type(context)
905+ except exception.NotFound:
906+ return faults.Fault(exc.HTTPNotFound())
907+
908+ new_volume = self.volume_api.create(context,
909+ size,
910+ None,
911+ vol.get('displayName'),
912+ vol.get('displayDescription'),
913+ volume_type=volume_type,
914+ metadata=dict(from_vsa_id=str(vsa_id)))
915+
916+ return {self.object: self._translation(context, new_volume,
917+ vsa_id, True)}
918+
919+ def update(self, req, vsa_id, id, body):
920+ """Update a volume."""
921+ context = req.environ['nova.context']
922+
923+ try:
924+ self._check_volume_ownership(context, vsa_id, id)
925+ except exception.NotFound:
926+ return faults.Fault(exc.HTTPNotFound())
927+ except exception.Invalid:
928+ return faults.Fault(exc.HTTPBadRequest())
929+
930+ vol = body[self.object]
931+ updatable_fields = [{'displayName': 'display_name'},
932+ {'displayDescription': 'display_description'},
933+ {'status': 'status'},
934+ {'providerLocation': 'provider_location'},
935+ {'providerAuth': 'provider_auth'}]
936+ changes = {}
937+ for field in updatable_fields:
938+ key = field.keys()[0]
939+ val = field[key]
940+ if key in vol:
941+ changes[val] = vol[key]
942+
943+ obj = self.object
944+ LOG.audit(_("Update %(obj)s with id: %(id)s, changes: %(changes)s"),
945+ locals(), context=context)
946+
947+ try:
948+ self.volume_api.update(context, volume_id=id, fields=changes)
949+ except exception.NotFound:
950+ return faults.Fault(exc.HTTPNotFound())
951+ return exc.HTTPAccepted()
952+
953+ def delete(self, req, vsa_id, id):
954+ """Delete a volume."""
955+ context = req.environ['nova.context']
956+
957+ LOG.audit(_("Delete. vsa_id=%(vsa_id)s, id=%(id)s"), locals())
958+
959+ try:
960+ self._check_volume_ownership(context, vsa_id, id)
961+ except exception.NotFound:
962+ return faults.Fault(exc.HTTPNotFound())
963+ except exception.Invalid:
964+ return faults.Fault(exc.HTTPBadRequest())
965+
966+ return super(VsaVolumeDriveController, self).delete(req, id)
967+
968+ def show(self, req, vsa_id, id):
969+ """Return data about the given volume."""
970+ context = req.environ['nova.context']
971+
972+ LOG.audit(_("Show. vsa_id=%(vsa_id)s, id=%(id)s"), locals())
973+
974+ try:
975+ self._check_volume_ownership(context, vsa_id, id)
976+ except exception.NotFound:
977+ return faults.Fault(exc.HTTPNotFound())
978+ except exception.Invalid:
979+ return faults.Fault(exc.HTTPBadRequest())
980+
981+ return super(VsaVolumeDriveController, self).show(req, id)
982+
983+
984+class VsaVolumeController(VsaVolumeDriveController):
985+    """The VSA volume API controller for the OpenStack API.
986+
987+    A child resource of the VSA object. Allows operations on volumes
988+    created by a particular VSA.
989+
990+ """
991+
992+ def __init__(self):
993+ self.direction = 'from_vsa_id'
994+ self.objects = 'volumes'
995+ self.object = 'volume'
996+ super(VsaVolumeController, self).__init__()
997+
998+
999+class VsaDriveController(VsaVolumeDriveController):
1000+    """The VSA Drive API controller for the OpenStack API.
1001+
1002+    A child resource of the VSA object. Allows operations on drives
1003+    created for a particular VSA.
1004+
1005+ """
1006+
1007+ def __init__(self):
1008+ self.direction = 'to_vsa_id'
1009+ self.objects = 'drives'
1010+ self.object = 'drive'
1011+ super(VsaDriveController, self).__init__()
1012+
1013+ def create(self, req, vsa_id, body):
1014+ """Create a new drive for VSA. Should be done through VSA APIs"""
1015+ return faults.Fault(exc.HTTPBadRequest())
1016+
1017+ def update(self, req, vsa_id, id, body):
1018+ """Update a drive. Should be done through VSA APIs"""
1019+ return faults.Fault(exc.HTTPBadRequest())
1020+
1021+ def delete(self, req, vsa_id, id):
1022+        """Delete a drive. Should be done through VSA APIs"""
1023+ return faults.Fault(exc.HTTPBadRequest())
1024+
1025+
1026+class VsaVPoolController(object):
1027+ """The vPool VSA API controller for the OpenStack API."""
1028+
1029+ _serialization_metadata = {
1030+ 'application/xml': {
1031+ "attributes": {
1032+ "vpool": [
1033+ "id",
1034+ "vsaId",
1035+ "name",
1036+ "displayName",
1037+ "displayDescription",
1038+ "driveCount",
1039+ "driveIds",
1040+ "protection",
1041+ "stripeSize",
1042+ "stripeWidth",
1043+ "createTime",
1044+ "status",
1045+ ]}}}
1046+
1047+ def __init__(self):
1048+ self.vsa_api = vsa.API()
1049+ super(VsaVPoolController, self).__init__()
1050+
1051+ def index(self, req, vsa_id):
1052+        """Return a short list of vpools created from a particular VSA."""
1053+ return {'vpools': []}
1054+
1055+ def create(self, req, vsa_id, body):
1056+ """Create a new vPool for VSA."""
1057+ return faults.Fault(exc.HTTPBadRequest())
1058+
1059+ def update(self, req, vsa_id, id, body):
1060+ """Update vPool parameters."""
1061+ return faults.Fault(exc.HTTPBadRequest())
1062+
1063+ def delete(self, req, vsa_id, id):
1064+ """Delete a vPool."""
1065+ return faults.Fault(exc.HTTPBadRequest())
1066+
1067+ def show(self, req, vsa_id, id):
1068+ """Return data about the given vPool."""
1069+ return faults.Fault(exc.HTTPBadRequest())
1070+
1071+
1072+class VsaVCController(servers.ControllerV11):
1073+ """The VSA Virtual Controller API controller for the OpenStack API."""
1074+
1075+ def __init__(self):
1076+ self.vsa_api = vsa.API()
1077+ self.compute_api = compute.API()
1078+ self.vsa_id = None # VP-TODO: temporary ugly hack
1079+ super(VsaVCController, self).__init__()
1080+
1081+ def _get_servers(self, req, is_detail):
1082+ """Returns a list of servers, taking into account any search
1083+ options specified.
1084+ """
1085+
1086+ if self.vsa_id is None:
1087+            return super(VsaVCController, self)._get_servers(req, is_detail)
1088+
1089+ context = req.environ['nova.context']
1090+
1091+ search_opts = {'metadata': dict(vsa_id=str(self.vsa_id))}
1092+ instance_list = self.compute_api.get_all(
1093+ context, search_opts=search_opts)
1094+
1095+ limited_list = self._limit_items(instance_list, req)
1096+ servers = [self._build_view(req, inst, is_detail)['server']
1097+ for inst in limited_list]
1098+ return dict(servers=servers)
1099+
1100+ def index(self, req, vsa_id):
1101+        """Return the list of instances for a particular VSA."""
1102+
1103+ LOG.audit(_("Index instances for VSA %s"), vsa_id)
1104+
1105+ self.vsa_id = vsa_id # VP-TODO: temporary ugly hack
1106+ result = super(VsaVCController, self).detail(req)
1107+ self.vsa_id = None
1108+ return result
1109+
1110+ def create(self, req, vsa_id, body):
1111+ """Create a new instance for VSA."""
1112+ return faults.Fault(exc.HTTPBadRequest())
1113+
1114+ def update(self, req, vsa_id, id, body):
1115+ """Update VSA instance."""
1116+ return faults.Fault(exc.HTTPBadRequest())
1117+
1118+ def delete(self, req, vsa_id, id):
1119+ """Delete VSA instance."""
1120+ return faults.Fault(exc.HTTPBadRequest())
1121+
1122+ def show(self, req, vsa_id, id):
1123+ """Return data about the given instance."""
1124+ return super(VsaVCController, self).show(req, id)
1125+
1126+
1127+class Virtual_storage_arrays(extensions.ExtensionDescriptor):
1128+
1129+ def get_name(self):
1130+ return "VSAs"
1131+
1132+ def get_alias(self):
1133+ return "zadr-vsa"
1134+
1135+ def get_description(self):
1136+ return "Virtual Storage Arrays support"
1137+
1138+ def get_namespace(self):
1139+ return "http://docs.openstack.org/ext/vsa/api/v1.1"
1140+
1141+ def get_updated(self):
1142+ return "2011-08-25T00:00:00+00:00"
1143+
1144+ def get_resources(self):
1145+ resources = []
1146+ res = extensions.ResourceExtension(
1147+ 'zadr-vsa',
1148+ VsaController(),
1149+ collection_actions={'detail': 'GET'},
1150+ member_actions={'add_capacity': 'POST',
1151+ 'remove_capacity': 'POST',
1152+ 'associate_address': 'POST',
1153+ 'disassociate_address': 'POST'})
1154+ resources.append(res)
1155+
1156+ res = extensions.ResourceExtension('volumes',
1157+ VsaVolumeController(),
1158+ collection_actions={'detail': 'GET'},
1159+ parent=dict(
1160+ member_name='vsa',
1161+ collection_name='zadr-vsa'))
1162+ resources.append(res)
1163+
1164+ res = extensions.ResourceExtension('drives',
1165+ VsaDriveController(),
1166+ collection_actions={'detail': 'GET'},
1167+ parent=dict(
1168+ member_name='vsa',
1169+ collection_name='zadr-vsa'))
1170+ resources.append(res)
1171+
1172+ res = extensions.ResourceExtension('vpools',
1173+ VsaVPoolController(),
1174+ parent=dict(
1175+ member_name='vsa',
1176+ collection_name='zadr-vsa'))
1177+ resources.append(res)
1178+
1179+ res = extensions.ResourceExtension('instances',
1180+ VsaVCController(),
1181+ parent=dict(
1182+ member_name='vsa',
1183+ collection_name='zadr-vsa'))
1184+ resources.append(res)
1185+
1186+ return resources
1187
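Reviewer note: the extension maps snake_case DB columns to the camelCase keys the OS API returns. A minimal, hypothetical sketch of the mapping performed by `_vsa_view` above (a plain dict stands in for the DB record; not the actual extension code):

```python
# Hypothetical stand-in for _vsa_view's key mapping; a plain dict
# replaces the SQLAlchemy record used in the real extension.
def vsa_view(vsa):
    return {
        'id': vsa.get('id'),
        'displayName': vsa.get('display_name'),
        'displayDescription': vsa.get('display_description'),
        'status': vsa.get('status'),
        'vcCount': vsa.get('vc_count'),
        'driveCount': vsa.get('vol_count'),  # drives are backed by BE volumes
    }

view = vsa_view({'id': 1, 'display_name': 'my-vsa', 'status': 'created',
                 'vc_count': 2, 'vol_count': 4})
```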
1188=== modified file 'nova/db/api.py'
1189--- nova/db/api.py 2011-08-25 16:14:44 +0000
1190+++ nova/db/api.py 2011-08-26 22:09:26 +0000
1191@@ -49,7 +49,8 @@
1192 'Template string to be used to generate instance names')
1193 flags.DEFINE_string('snapshot_name_template', 'snapshot-%08x',
1194 'Template string to be used to generate snapshot names')
1195-
1196+flags.DEFINE_string('vsa_name_template', 'vsa-%08x',
1197+ 'Template string to be used to generate VSA names')
1198
1199 IMPL = utils.LazyPluggable(FLAGS['db_backend'],
1200 sqlalchemy='nova.db.sqlalchemy.api')
1201@@ -1512,3 +1513,36 @@
1202 key/value pairs specified in the extra specs dict argument"""
1203 IMPL.volume_type_extra_specs_update_or_create(context, volume_type_id,
1204 extra_specs)
1205+
1206+
1207+####################
1208+
1209+
1210+def vsa_create(context, values):
1211+ """Creates Virtual Storage Array record."""
1212+ return IMPL.vsa_create(context, values)
1213+
1214+
1215+def vsa_update(context, vsa_id, values):
1216+ """Updates Virtual Storage Array record."""
1217+ return IMPL.vsa_update(context, vsa_id, values)
1218+
1219+
1220+def vsa_destroy(context, vsa_id):
1221+ """Deletes Virtual Storage Array record."""
1222+ return IMPL.vsa_destroy(context, vsa_id)
1223+
1224+
1225+def vsa_get(context, vsa_id):
1226+ """Get Virtual Storage Array record by ID."""
1227+ return IMPL.vsa_get(context, vsa_id)
1228+
1229+
1230+def vsa_get_all(context):
1231+ """Get all Virtual Storage Array records."""
1232+ return IMPL.vsa_get_all(context)
1233+
1234+
1235+def vsa_get_all_by_project(context, project_id):
1236+ """Get all Virtual Storage Array records by project ID."""
1237+ return IMPL.vsa_get_all_by_project(context, project_id)
1238
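Reviewer note: the new `vsa_name_template` flag follows the same printf-style convention as the existing instance and snapshot templates, so VSA ids render as zero-padded hex. A quick sketch using the default format string added above:

```python
# Default template added by this branch; integer ids render as
# zero-padded hexadecimal names via printf-style formatting.
vsa_name_template = 'vsa-%08x'

name_1 = vsa_name_template % 1
name_255 = vsa_name_template % 255
```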
1239=== modified file 'nova/db/sqlalchemy/api.py'
1240--- nova/db/sqlalchemy/api.py 2011-08-25 16:14:44 +0000
1241+++ nova/db/sqlalchemy/api.py 2011-08-26 22:09:26 +0000
1242@@ -3837,3 +3837,105 @@
1243 "deleted": 0})
1244 spec_ref.save(session=session)
1245 return specs
1246+
1247+
1248+ ####################
1249+
1250+
1251+@require_admin_context
1252+def vsa_create(context, values):
1253+ """
1254+ Creates Virtual Storage Array record.
1255+ """
1256+ try:
1257+ vsa_ref = models.VirtualStorageArray()
1258+ vsa_ref.update(values)
1259+ vsa_ref.save()
1260+ except Exception, e:
1261+ raise exception.DBError(e)
1262+ return vsa_ref
1263+
1264+
1265+@require_admin_context
1266+def vsa_update(context, vsa_id, values):
1267+ """
1268+ Updates Virtual Storage Array record.
1269+ """
1270+ session = get_session()
1271+ with session.begin():
1272+ vsa_ref = vsa_get(context, vsa_id, session=session)
1273+ vsa_ref.update(values)
1274+ vsa_ref.save(session=session)
1275+ return vsa_ref
1276+
1277+
1278+@require_admin_context
1279+def vsa_destroy(context, vsa_id):
1280+ """
1281+ Deletes Virtual Storage Array record.
1282+ """
1283+ session = get_session()
1284+ with session.begin():
1285+ session.query(models.VirtualStorageArray).\
1286+ filter_by(id=vsa_id).\
1287+ update({'deleted': True,
1288+ 'deleted_at': utils.utcnow(),
1289+ 'updated_at': literal_column('updated_at')})
1290+
1291+
1292+@require_context
1293+def vsa_get(context, vsa_id, session=None):
1294+ """
1295+ Get Virtual Storage Array record by ID.
1296+ """
1297+ if not session:
1298+ session = get_session()
1299+ result = None
1300+
1301+ if is_admin_context(context):
1302+ result = session.query(models.VirtualStorageArray).\
1303+ options(joinedload('vsa_instance_type')).\
1304+ filter_by(id=vsa_id).\
1305+ filter_by(deleted=can_read_deleted(context)).\
1306+ first()
1307+ elif is_user_context(context):
1308+ result = session.query(models.VirtualStorageArray).\
1309+ options(joinedload('vsa_instance_type')).\
1310+ filter_by(project_id=context.project_id).\
1311+ filter_by(id=vsa_id).\
1312+ filter_by(deleted=False).\
1313+ first()
1314+ if not result:
1315+ raise exception.VirtualStorageArrayNotFound(id=vsa_id)
1316+
1317+ return result
1318+
1319+
1320+@require_admin_context
1321+def vsa_get_all(context):
1322+ """
1323+ Get all Virtual Storage Array records.
1324+ """
1325+ session = get_session()
1326+ return session.query(models.VirtualStorageArray).\
1327+ options(joinedload('vsa_instance_type')).\
1328+ filter_by(deleted=can_read_deleted(context)).\
1329+ all()
1330+
1331+
1332+@require_context
1333+def vsa_get_all_by_project(context, project_id):
1334+ """
1335+ Get all Virtual Storage Array records by project ID.
1336+ """
1337+ authorize_project_context(context, project_id)
1338+
1339+ session = get_session()
1340+ return session.query(models.VirtualStorageArray).\
1341+ options(joinedload('vsa_instance_type')).\
1342+ filter_by(project_id=project_id).\
1343+ filter_by(deleted=can_read_deleted(context)).\
1344+ all()
1345+
1346+
1347+ ####################
1348
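Reviewer note: `vsa_destroy` soft-deletes, flagging the row rather than removing it, so reads filter on `deleted`. A self-contained sketch of that pattern using stdlib sqlite3 (table and column names mirror the schema; this is illustrative, not the actual SQLAlchemy code):

```python
import sqlite3
from datetime import datetime

# In-memory sketch of the soft-delete pattern used by vsa_destroy:
# rows are flagged, not removed, and live queries filter deleted = 0.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE virtual_storage_arrays ('
             'id INTEGER PRIMARY KEY, display_name TEXT, '
             'deleted INTEGER DEFAULT 0, deleted_at TEXT)')
conn.execute("INSERT INTO virtual_storage_arrays (display_name) "
             "VALUES ('vsa-demo')")

def vsa_destroy(conn, vsa_id):
    # Mark the row deleted with a timestamp instead of issuing DELETE.
    conn.execute('UPDATE virtual_storage_arrays '
                 'SET deleted = 1, deleted_at = ? WHERE id = ?',
                 (datetime.utcnow().isoformat(), vsa_id))

vsa_destroy(conn, 1)
live = conn.execute('SELECT COUNT(*) FROM virtual_storage_arrays '
                    'WHERE deleted = 0').fetchone()[0]
total = conn.execute('SELECT COUNT(*) FROM '
                     'virtual_storage_arrays').fetchone()[0]
```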
1349=== added file 'nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py'
1350--- nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py 1970-01-01 00:00:00 +0000
1351+++ nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py 2011-08-26 22:09:26 +0000
1352@@ -0,0 +1,75 @@
1353+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1354+
1355+# Copyright (c) 2011 Zadara Storage Inc.
1356+# Copyright (c) 2011 OpenStack LLC.
1357+#
1358+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1359+# not use this file except in compliance with the License. You may obtain
1360+# a copy of the License at
1361+#
1362+# http://www.apache.org/licenses/LICENSE-2.0
1363+#
1364+# Unless required by applicable law or agreed to in writing, software
1365+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1366+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1367+# License for the specific language governing permissions and limitations
1368+# under the License.
1369+
1370+from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table
1371+from sqlalchemy import Text, Boolean, ForeignKey
1372+
1373+from nova import log as logging
1374+
1375+meta = MetaData()
1376+
1377+#
1378+# New Tables
1379+#
1380+
1381+virtual_storage_arrays = Table('virtual_storage_arrays', meta,
1382+ Column('created_at', DateTime(timezone=False)),
1383+ Column('updated_at', DateTime(timezone=False)),
1384+ Column('deleted_at', DateTime(timezone=False)),
1385+ Column('deleted', Boolean(create_constraint=True, name=None)),
1386+ Column('id', Integer(), primary_key=True, nullable=False),
1387+ Column('display_name',
1388+ String(length=255, convert_unicode=False, assert_unicode=None,
1389+ unicode_error=None, _warn_on_bytestring=False)),
1390+ Column('display_description',
1391+ String(length=255, convert_unicode=False, assert_unicode=None,
1392+ unicode_error=None, _warn_on_bytestring=False)),
1393+ Column('project_id',
1394+ String(length=255, convert_unicode=False, assert_unicode=None,
1395+ unicode_error=None, _warn_on_bytestring=False)),
1396+ Column('availability_zone',
1397+ String(length=255, convert_unicode=False, assert_unicode=None,
1398+ unicode_error=None, _warn_on_bytestring=False)),
1399+ Column('instance_type_id', Integer(), nullable=False),
1400+ Column('image_ref',
1401+ String(length=255, convert_unicode=False, assert_unicode=None,
1402+ unicode_error=None, _warn_on_bytestring=False)),
1403+ Column('vc_count', Integer(), nullable=False),
1404+ Column('vol_count', Integer(), nullable=False),
1405+ Column('status',
1406+ String(length=255, convert_unicode=False, assert_unicode=None,
1407+ unicode_error=None, _warn_on_bytestring=False)),
1408+ )
1409+
1410+
1411+def upgrade(migrate_engine):
1412+ # Upgrade operations go here. Don't create your own engine;
1413+ # bind migrate_engine to your metadata
1414+ meta.bind = migrate_engine
1415+
1416+ try:
1417+ virtual_storage_arrays.create()
1418+ except Exception:
1419+        logging.info(repr(virtual_storage_arrays))
1420+ logging.exception('Exception while creating table')
1421+ raise
1422+
1423+
1424+def downgrade(migrate_engine):
1425+ meta.bind = migrate_engine
1426+
1427+ virtual_storage_arrays.drop()
1428
1429=== modified file 'nova/db/sqlalchemy/migration.py'
1430--- nova/db/sqlalchemy/migration.py 2011-08-24 23:41:14 +0000
1431+++ nova/db/sqlalchemy/migration.py 2011-08-26 22:09:26 +0000
1432@@ -64,6 +64,7 @@
1433 'users', 'user_project_association',
1434 'user_project_role_association',
1435 'user_role_association',
1436+ 'virtual_storage_arrays',
1437 'volumes', 'volume_metadata',
1438 'volume_types', 'volume_type_extra_specs'):
1439 assert table in meta.tables
1440
1441=== modified file 'nova/db/sqlalchemy/models.py'
1442--- nova/db/sqlalchemy/models.py 2011-08-24 16:10:28 +0000
1443+++ nova/db/sqlalchemy/models.py 2011-08-26 22:09:26 +0000
1444@@ -250,6 +250,32 @@
1445 # 'shutdown', 'shutoff', 'crashed'])
1446
1447
1448+class VirtualStorageArray(BASE, NovaBase):
1449+ """
1450+ Represents a virtual storage array supplying block storage to instances.
1451+ """
1452+ __tablename__ = 'virtual_storage_arrays'
1453+
1454+ id = Column(Integer, primary_key=True, autoincrement=True)
1455+
1456+ @property
1457+ def name(self):
1458+ return FLAGS.vsa_name_template % self.id
1459+
1460+ # User editable field for display in user-facing UIs
1461+ display_name = Column(String(255))
1462+ display_description = Column(String(255))
1463+
1464+ project_id = Column(String(255))
1465+ availability_zone = Column(String(255))
1466+
1467+ instance_type_id = Column(Integer, ForeignKey('instance_types.id'))
1468+ image_ref = Column(String(255))
1469+ vc_count = Column(Integer, default=0) # number of requested VC instances
1470+ vol_count = Column(Integer, default=0) # total number of BE volumes
1471+ status = Column(String(255))
1472+
1473+
1474 class InstanceActions(BASE, NovaBase):
1475 """Represents a guest VM's actions and results"""
1476 __tablename__ = "instance_actions"
1477@@ -279,6 +305,12 @@
1478 primaryjoin='and_(Instance.instance_type_id == '
1479 'InstanceTypes.id)')
1480
1481+ vsas = relationship(VirtualStorageArray,
1482+ backref=backref('vsa_instance_type', uselist=False),
1483+ foreign_keys=id,
1484+ primaryjoin='and_(VirtualStorageArray.instance_type_id'
1485+ ' == InstanceTypes.id)')
1486+
1487
1488 class Volume(BASE, NovaBase):
1489 """Represents a block storage device that can be attached to a vm."""
1490@@ -848,7 +880,8 @@
1491 SecurityGroupInstanceAssociation, AuthToken, User,
1492 Project, Certificate, ConsolePool, Console, Zone,
1493 VolumeMetadata, VolumeTypes, VolumeTypeExtraSpecs,
1494- AgentBuild, InstanceMetadata, InstanceTypeExtraSpecs, Migration)
1495+ AgentBuild, InstanceMetadata, InstanceTypeExtraSpecs, Migration,
1496+ VirtualStorageArray)
1497 engine = create_engine(FLAGS.sql_connection, echo=False)
1498 for model in models:
1499 model.metadata.create_all(engine)
1500
1501=== modified file 'nova/exception.py'
1502--- nova/exception.py 2011-08-24 16:10:28 +0000
1503+++ nova/exception.py 2011-08-26 22:09:26 +0000
1504@@ -783,6 +783,18 @@
1505 message = _("Could not load paste app '%(name)s' from %(path)s")
1506
1507
1508+class VSANovaAccessParamNotFound(Invalid):
1509+ message = _("Nova access parameters were not specified.")
1510+
1511+
1512+class VirtualStorageArrayNotFound(NotFound):
1513+ message = _("Virtual Storage Array %(id)d could not be found.")
1514+
1515+
1516+class VirtualStorageArrayNotFoundByName(NotFound):
1517+ message = _("Virtual Storage Array %(name)s could not be found.")
1518+
1519+
1520 class CannotResizeToSameSize(NovaException):
1521 message = _("When resizing, instances must change size!")
1522
1523
1524=== modified file 'nova/flags.py'
1525--- nova/flags.py 2011-08-23 19:06:25 +0000
1526+++ nova/flags.py 2011-08-26 22:09:26 +0000
1527@@ -292,6 +292,7 @@
1528 in the form "http://127.0.0.1:8000"')
1529 DEFINE_string('ajax_console_proxy_port',
1530 8000, 'port that ajax_console_proxy binds')
1531+DEFINE_string('vsa_topic', 'vsa', 'the topic that nova-vsa service listens on')
1532 DEFINE_bool('verbose', False, 'show debug output')
1533 DEFINE_boolean('fake_rabbit', False, 'use a fake rabbit')
1534 DEFINE_bool('fake_network', False,
1535@@ -371,6 +372,17 @@
1536 'Manager for volume')
1537 DEFINE_string('scheduler_manager', 'nova.scheduler.manager.SchedulerManager',
1538 'Manager for scheduler')
1539+DEFINE_string('vsa_manager', 'nova.vsa.manager.VsaManager',
1540+ 'Manager for vsa')
1541+DEFINE_string('vc_image_name', 'vc_image',
1542+              'the VC image ID (for a VC image that exists in the Glance DB)')
1543+# VSA constants and enums
1544+DEFINE_string('default_vsa_instance_type', 'm1.small',
1545+ 'default instance type for VSA instances')
1546+DEFINE_integer('max_vcs_in_vsa', 32,
1547+               'maximum VCs in a VSA')
1548+DEFINE_integer('vsa_part_size_gb', 100,
1549+ 'default partition size for shared capacity')
1550
1551 # The service to use for image search and retrieval
1552 DEFINE_string('image_service', 'nova.image.glance.GlanceImageService',
1553
1554=== modified file 'nova/quota.py'
1555--- nova/quota.py 2011-08-17 07:41:17 +0000
1556+++ nova/quota.py 2011-08-26 22:09:26 +0000
1557@@ -116,8 +116,9 @@
1558 allowed_gigabytes = _get_request_allotment(requested_gigabytes,
1559 used_gigabytes,
1560 quota['gigabytes'])
1561- allowed_volumes = min(allowed_volumes,
1562- int(allowed_gigabytes // size))
1563+ if size != 0:
1564+ allowed_volumes = min(allowed_volumes,
1565+ int(allowed_gigabytes // size))
1566 return min(requested_volumes, allowed_volumes)
1567
1568
1569
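Reviewer note: the quota change above guards the per-size volume cap against zero-sized requests (VSA drive requests can carry size 0). A hedged, standalone restatement of that logic (the function name is illustrative, not the one in nova/quota.py):

```python
def allowed_volume_count(requested, allowed, allowed_gigabytes, size):
    # Mirror of the patched logic: only apply the gigabyte-derived cap
    # when the request has a non-zero size, avoiding division by zero.
    if size != 0:
        allowed = min(allowed, int(allowed_gigabytes // size))
    return min(requested, allowed)

capped = allowed_volume_count(5, 10, 30, 20)   # 30 GB quota // 20 GB -> cap of 1
uncapped = allowed_volume_count(5, 10, 30, 0)  # size 0: size cap skipped
```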
1570=== added file 'nova/scheduler/vsa.py'
1571--- nova/scheduler/vsa.py 1970-01-01 00:00:00 +0000
1572+++ nova/scheduler/vsa.py 2011-08-26 22:09:26 +0000
1573@@ -0,0 +1,535 @@
1574+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1575+
1576+# Copyright (c) 2011 Zadara Storage Inc.
1577+# Copyright (c) 2011 OpenStack LLC.
1578+#
1579+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1580+# not use this file except in compliance with the License. You may obtain
1581+# a copy of the License at
1582+#
1583+# http://www.apache.org/licenses/LICENSE-2.0
1584+#
1585+# Unless required by applicable law or agreed to in writing, software
1586+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1587+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1588+# License for the specific language governing permissions and limitations
1589+# under the License.
1590+
1591+"""
1592+VSA Simple Scheduler
1593+"""
1594+
1595+from nova import context
1596+from nova import db
1597+from nova import flags
1598+from nova import log as logging
1599+from nova import rpc
1600+from nova import utils
1601+from nova.scheduler import driver
1602+from nova.scheduler import simple
1603+from nova.vsa.api import VsaState
1604+from nova.volume import volume_types
1605+
1606+LOG = logging.getLogger('nova.scheduler.vsa')
1607+
1608+FLAGS = flags.FLAGS
1609+flags.DEFINE_integer('drive_type_approx_capacity_percent', 10,
1610+ 'The percentage range for capacity comparison')
1611+flags.DEFINE_integer('vsa_unique_hosts_per_alloc', 10,
1612+ 'The number of unique hosts per storage allocation')
1613+flags.DEFINE_boolean('vsa_select_unique_drives', True,
1614+ 'Allow selection of same host for multiple drives')
1615+
1616+
1617+def BYTES_TO_GB(num_bytes):
1618+    return num_bytes >> 30
1619+
1620+
1621+def GB_TO_BYTES(gb):
1622+ return gb << 30
1623+
1624+
1625+class VsaScheduler(simple.SimpleScheduler):
1626+ """Implements Scheduler for volume placement."""
1627+
1628+ def __init__(self, *args, **kwargs):
1629+ super(VsaScheduler, self).__init__(*args, **kwargs)
1630+ self._notify_all_volume_hosts("startup")
1631+
1632+ def _notify_all_volume_hosts(self, event):
1633+ rpc.fanout_cast(context.get_admin_context(),
1634+ FLAGS.volume_topic,
1635+ {"method": "notification",
1636+ "args": {"event": event}})
1637+
1638+ def _qosgrp_match(self, drive_type, qos_values):
1639+
1640+ def _compare_names(str1, str2):
1641+ return str1.lower() == str2.lower()
1642+
1643+ def _compare_sizes_approxim(cap_capacity, size):
1644+ cap_capacity = BYTES_TO_GB(int(cap_capacity))
1645+ size = int(size)
1646+ size_perc = size * \
1647+ FLAGS.drive_type_approx_capacity_percent / 100
1648+
1649+ return cap_capacity >= size - size_perc and \
1650+ cap_capacity <= size + size_perc
1651+
1652+ # Add more entries for additional comparisons
1653+ compare_list = [{'cap1': 'DriveType',
1654+ 'cap2': 'type',
1655+ 'cmp_func': _compare_names},
1656+ {'cap1': 'DriveCapacity',
1657+ 'cap2': 'size',
1658+ 'cmp_func': _compare_sizes_approxim}]
1659+
1660+ for cap in compare_list:
1661+ if cap['cap1'] in qos_values.keys() and \
1662+ cap['cap2'] in drive_type.keys() and \
1663+ cap['cmp_func'] is not None and \
1664+ cap['cmp_func'](qos_values[cap['cap1']],
1665+ drive_type[cap['cap2']]):
1666+ pass
1667+ else:
1668+ return False
1669+ return True
1670+
1671+ def _get_service_states(self):
1672+ return self.zone_manager.service_states
1673+
1674+ def _filter_hosts(self, topic, request_spec, host_list=None):
1675+
1676+ LOG.debug(_("_filter_hosts: %(request_spec)s"), locals())
1677+
1678+ drive_type = request_spec['drive_type']
1679+ LOG.debug(_("Filter hosts for drive type %s"), drive_type['name'])
1680+
1681+ if host_list is None:
1682+ host_list = self._get_service_states().iteritems()
1683+
1684+ filtered_hosts = [] # returns list of (hostname, capability_dict)
1685+ for host, host_dict in host_list:
1686+ for service_name, service_dict in host_dict.iteritems():
1687+ if service_name != topic:
1688+ continue
1689+
1690+ qos_info = service_dict.get('drive_qos_info', {})
1691+ for qosgrp, qos_values in qos_info.iteritems():
1692+ if self._qosgrp_match(drive_type, qos_values):
1693+ if qos_values['AvailableCapacity'] > 0:
1694+ filtered_hosts.append((host, qos_info))
1695+ else:
1696+ LOG.debug(_("Host %s has no free capacity. Skip"),
1697+ host)
1698+ break
1699+
1700+ host_names = [item[0] for item in filtered_hosts]
1701+ LOG.debug(_("Filter hosts: %s"), host_names)
1702+ return filtered_hosts
1703+
1704+ def _allowed_to_use_host(self, host, selected_hosts, unique):
1705+ if not unique or \
1706+ host not in [item[0] for item in selected_hosts]:
1707+ return True
1708+ else:
1709+ return False
1710+
1711+ def _add_hostcap_to_list(self, selected_hosts, host, cap):
1712+ if host not in [item[0] for item in selected_hosts]:
1713+ selected_hosts.append((host, cap))
1714+
1715+ def host_selection_algorithm(self, request_spec, all_hosts,
1716+ selected_hosts, unique):
1717+ """Must override this method for VSA scheduler to work."""
1718+ raise NotImplementedError(_("Must implement host selection mechanism"))
1719+
1720+ def _select_hosts(self, request_spec, all_hosts, selected_hosts=None):
1721+
1722+ if selected_hosts is None:
1723+ selected_hosts = []
1724+
1725+ host = None
1726+ if len(selected_hosts) >= FLAGS.vsa_unique_hosts_per_alloc:
1727+ # try to select from already selected hosts only
1728+ LOG.debug(_("Maximum number of hosts selected (%d)"),
1729+ len(selected_hosts))
1730+ unique = False
1731+ (host, qos_cap) = self.host_selection_algorithm(request_spec,
1732+ selected_hosts,
1733+ selected_hosts,
1734+ unique)
1735+
1736+ LOG.debug(_("Selected excessive host %(host)s"), locals())
1737+ else:
1738+ unique = FLAGS.vsa_select_unique_drives
1739+
1740+ if host is None:
1741+ # if we've not tried yet (# of sel hosts < max) - unique=True
1742+ # or failed to select from selected_hosts - unique=False
1743+ # select from all hosts
1744+ (host, qos_cap) = self.host_selection_algorithm(request_spec,
1745+ all_hosts,
1746+ selected_hosts,
1747+ unique)
1748+ if host is None:
1749+ raise driver.WillNotSchedule(_("No available hosts"))
1750+
1751+ return (host, qos_cap)
1752+
1753+ def _provision_volume(self, context, vol, vsa_id, availability_zone):
1754+
1755+ if availability_zone is None:
1756+ availability_zone = FLAGS.storage_availability_zone
1757+
1758+ now = utils.utcnow()
1759+ options = {
1760+ 'size': vol['size'],
1761+ 'user_id': context.user_id,
1762+ 'project_id': context.project_id,
1763+ 'snapshot_id': None,
1764+ 'availability_zone': availability_zone,
1765+ 'status': "creating",
1766+ 'attach_status': "detached",
1767+ 'display_name': vol['name'],
1768+ 'display_description': vol['description'],
1769+ 'volume_type_id': vol['volume_type_id'],
1770+ 'metadata': dict(to_vsa_id=vsa_id),
1771+ 'host': vol['host'],
1772+ 'scheduled_at': now
1773+ }
1774+
1775+ size = vol['size']
1776+ host = vol['host']
1777+ name = vol['name']
1778+ LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\
1779+ "host %(host)s"), locals())
1780+
1781+ volume_ref = db.volume_create(context, options)
1782+ rpc.cast(context,
1783+ db.queue_get_for(context, "volume", vol['host']),
1784+ {"method": "create_volume",
1785+ "args": {"volume_id": volume_ref['id'],
1786+ "snapshot_id": None}})
1787+
1788+ def _check_host_enforcement(self, context, availability_zone):
1789+ if (availability_zone
1790+ and ':' in availability_zone
1791+ and context.is_admin):
1792+ zone, _x, host = availability_zone.partition(':')
1793+ service = db.service_get_by_args(context.elevated(), host,
1794+ 'nova-volume')
1795+ if not self.service_is_up(service):
1796+ raise driver.WillNotSchedule(_("Host %s not available") % host)
1797+
1798+ return host
1799+ else:
1800+ return None
1801+
1802+ def _assign_hosts_to_volumes(self, context, volume_params, forced_host):
1803+
1804+ prev_volume_type_id = None
1805+ request_spec = {}
1806+ selected_hosts = []
1807+
1808+ LOG.debug(_("volume_params %(volume_params)s"), locals())
1809+
1810+ i = 1
1811+ for vol in volume_params:
1812+ name = vol['name']
1813+ LOG.debug(_("%(i)d: Volume %(name)s"), locals())
1814+ i += 1
1815+
1816+ if forced_host:
1817+ vol['host'] = forced_host
1818+ vol['capabilities'] = None
1819+ continue
1820+
1821+ volume_type_id = vol['volume_type_id']
1822+ request_spec['size'] = vol['size']
1823+
1824+ if prev_volume_type_id is None or\
1825+ prev_volume_type_id != volume_type_id:
1826+ # generate list of hosts for this drive type
1827+
1828+ volume_type = volume_types.get_volume_type(context,
1829+ volume_type_id)
1830+ drive_type = {
1831+ 'name': volume_type['extra_specs'].get('drive_name'),
1832+ 'type': volume_type['extra_specs'].get('drive_type'),
1833+ 'size': int(volume_type['extra_specs'].get('drive_size')),
1834+ 'rpm': volume_type['extra_specs'].get('drive_rpm'),
1835+ }
1836+ request_spec['drive_type'] = drive_type
1837+
1838+ all_hosts = self._filter_hosts("volume", request_spec)
1839+ prev_volume_type_id = volume_type_id
1840+
1841+ (host, qos_cap) = self._select_hosts(request_spec,
1842+ all_hosts, selected_hosts)
1843+ vol['host'] = host
1844+ vol['capabilities'] = qos_cap
1845+ self._consume_resource(qos_cap, vol['size'], -1)
1846+
1847+ def schedule_create_volumes(self, context, request_spec,
1848+ availability_zone=None, *_args, **_kwargs):
1849+ """Picks hosts for hosting multiple volumes."""
1850+
1851+ num_volumes = request_spec.get('num_volumes')
1852+ LOG.debug(_("Attempting to spawn %(num_volumes)d volume(s)") %
1853+ locals())
1854+
1855+ vsa_id = request_spec.get('vsa_id')
1856+ volume_params = request_spec.get('volumes')
1857+
1858+ host = self._check_host_enforcement(context, availability_zone)
1859+
1860+ try:
1861+ self._print_capabilities_info()
1862+
1863+ self._assign_hosts_to_volumes(context, volume_params, host)
1864+
1865+ for vol in volume_params:
1866+ self._provision_volume(context, vol, vsa_id, availability_zone)
1867+ except Exception:
1868+ if vsa_id:
1869+ db.vsa_update(context, vsa_id, dict(status=VsaState.FAILED))
1870+
1871+ for vol in volume_params:
1872+ if 'capabilities' in vol:
1873+ self._consume_resource(vol['capabilities'],
1874+ vol['size'], 1)
1875+ raise
1876+
1877+ return None
1878+
1879+ def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
1880+ """Picks the best host based on requested drive type capability."""
1881+ volume_ref = db.volume_get(context, volume_id)
1882+
1883+ host = self._check_host_enforcement(context,
1884+ volume_ref['availability_zone'])
1885+ if host:
1886+ now = utils.utcnow()
1887+ db.volume_update(context, volume_id, {'host': host,
1888+ 'scheduled_at': now})
1889+ return host
1890+
1891+ volume_type_id = volume_ref['volume_type_id']
1892+ if volume_type_id:
1893+ volume_type = volume_types.get_volume_type(context, volume_type_id)
1894+
1895+ if volume_type_id is None or\
1896+ volume_types.is_vsa_volume(volume_type_id, volume_type):
1897+
1898+ LOG.debug(_("Volume %d: using generic scheduling"), volume_ref['id'])
1899+ return super(VsaScheduler, self).schedule_create_volume(context,
1900+ volume_id, *_args, **_kwargs)
1901+
1902+ self._print_capabilities_info()
1903+
1904+ drive_type = {
1905+ 'name': volume_type['extra_specs'].get('drive_name'),
1906+ 'type': volume_type['extra_specs'].get('drive_type'),
1907+ 'size': int(volume_type['extra_specs'].get('drive_size')),
1908+ 'rpm': volume_type['extra_specs'].get('drive_rpm'),
1909+ }
1910+
1911+ LOG.debug(_("Spawning volume %(volume_id)s with drive type "\
1912+ "%(drive_type)s"), locals())
1913+
1914+ request_spec = {'size': volume_ref['size'],
1915+ 'drive_type': drive_type}
1916+ hosts = self._filter_hosts("volume", request_spec)
1917+
1918+ try:
1919+ (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)
1920+ except Exception:
1921+ if volume_ref['to_vsa_id']:
1922+ db.vsa_update(context, volume_ref['to_vsa_id'],
1923+ dict(status=VsaState.FAILED))
1924+ raise
1925+
1926+ if host:
1927+ now = utils.utcnow()
1928+ db.volume_update(context, volume_id, {'host': host,
1929+ 'scheduled_at': now})
1930+ self._consume_resource(qos_cap, volume_ref['size'], -1)
1931+ return host
1932+
1933+ def _consume_full_drive(self, qos_values, direction):
1934+ qos_values['FullDrive']['NumFreeDrives'] += direction
1935+ qos_values['FullDrive']['NumOccupiedDrives'] -= direction
1936+
1937+ def _consume_partition(self, qos_values, size, direction):
1938+
1939+ if qos_values['PartitionDrive']['PartitionSize'] != 0:
1940+ partition_size = qos_values['PartitionDrive']['PartitionSize']
1941+ else:
1942+ partition_size = size
1943+ part_per_drive = qos_values['DriveCapacity'] / partition_size
1944+
1945+ if direction == -1 and \
1946+ qos_values['PartitionDrive']['NumFreePartitions'] == 0:
1947+
1948+ self._consume_full_drive(qos_values, direction)
1949+ qos_values['PartitionDrive']['NumFreePartitions'] += \
1950+ part_per_drive
1951+
1952+ qos_values['PartitionDrive']['NumFreePartitions'] += direction
1953+ qos_values['PartitionDrive']['NumOccupiedPartitions'] -= direction
1954+
1955+ if direction == 1 and \
1956+ qos_values['PartitionDrive']['NumFreePartitions'] >= \
1957+ part_per_drive:
1958+
1959+ self._consume_full_drive(qos_values, direction)
1960+ qos_values['PartitionDrive']['NumFreePartitions'] -= \
1961+ part_per_drive
1962+
1963+ def _consume_resource(self, qos_values, size, direction):
1964+ if qos_values is None:
1965+ LOG.debug(_("No capability selected for volume of size %(size)s"),
1966+ locals())
1967+ return
1968+
1969+ if size == 0: # full drive match
1970+ qos_values['AvailableCapacity'] += direction * \
1971+ qos_values['DriveCapacity']
1972+ self._consume_full_drive(qos_values, direction)
1973+ else:
1974+ qos_values['AvailableCapacity'] += direction * GB_TO_BYTES(size)
1975+ self._consume_partition(qos_values, GB_TO_BYTES(size), direction)
1976+ return
1977+
1978+ def _print_capabilities_info(self):
1979+ host_list = self._get_service_states().iteritems()
1980+ for host, host_dict in host_list:
1981+ for service_name, service_dict in host_dict.iteritems():
1982+ if service_name != "volume":
1983+ continue
1984+
1985+ LOG.info(_("Host %s:"), host)
1986+
1987+ qos_info = service_dict.get('drive_qos_info', {})
1988+ for qosgrp, qos_values in qos_info.iteritems():
1989+ total = qos_values['TotalDrives']
1990+ used = qos_values['FullDrive']['NumOccupiedDrives']
1991+ free = qos_values['FullDrive']['NumFreeDrives']
1992+ avail = BYTES_TO_GB(qos_values['AvailableCapacity'])
1993+
1994+ LOG.info(_("\tDrive %(qosgrp)-25s: total %(total)2s, "\
1995+ "used %(used)2s, free %(free)2s. Available "\
1996+ "capacity %(avail)-5s"), locals())
1997+
1998+
1999+class VsaSchedulerLeastUsedHost(VsaScheduler):
2000+ """
2001+ Implements a VSA scheduler that selects the host with the least used
2002+ capacity of a particular drive type.
2003+ """
2004+
2005+ def __init__(self, *args, **kwargs):
2006+ super(VsaSchedulerLeastUsedHost, self).__init__(*args, **kwargs)
2007+
2008+ def host_selection_algorithm(self, request_spec, all_hosts,
2009+ selected_hosts, unique):
2010+ size = request_spec['size']
2011+ drive_type = request_spec['drive_type']
2012+ best_host = None
2013+ best_qoscap = None
2014+ best_cap = None
2015+ min_used = 0
2016+
2017+ for (host, capabilities) in all_hosts:
2018+
2019+ has_enough_capacity = False
2020+ used_capacity = 0
2021+ for qosgrp, qos_values in capabilities.iteritems():
2022+
2023+ used_capacity = used_capacity + qos_values['TotalCapacity'] \
2024+ - qos_values['AvailableCapacity']
2025+
2026+ if self._qosgrp_match(drive_type, qos_values):
2027+ # we found required qosgroup
2028+
2029+ if size == 0: # full drive match
2030+ if qos_values['FullDrive']['NumFreeDrives'] > 0:
2031+ has_enough_capacity = True
2032+ matched_qos = qos_values
2033+ else:
2034+ break
2035+ else:
2036+ if qos_values['AvailableCapacity'] >= size and \
2037+ (qos_values['PartitionDrive'][
2038+ 'NumFreePartitions'] > 0 or \
2039+ qos_values['FullDrive']['NumFreeDrives'] > 0):
2040+ has_enough_capacity = True
2041+ matched_qos = qos_values
2042+ else:
2043+ break
2044+
2045+ if has_enough_capacity and \
2046+ self._allowed_to_use_host(host,
2047+ selected_hosts,
2048+ unique) and \
2049+ (best_host is None or used_capacity < min_used):
2050+
2051+ min_used = used_capacity
2052+ best_host = host
2053+ best_qoscap = matched_qos
2054+ best_cap = capabilities
2055+
2056+ if best_host:
2057+ self._add_hostcap_to_list(selected_hosts, best_host, best_cap)
2058+ min_used = BYTES_TO_GB(min_used)
2059+ LOG.debug(_("\t LeastUsedHost: Best host: %(best_host)s. "\
2060+ "(used capacity %(min_used)s)"), locals())
2061+ return (best_host, best_qoscap)
2062+
2063+
2064+class VsaSchedulerMostAvailCapacity(VsaScheduler):
2065+ """
2066+ Implements a VSA scheduler that selects the host with the most available
2067+ capacity of a particular drive type.
2068+ """
2069+
2070+ def __init__(self, *args, **kwargs):
2071+ super(VsaSchedulerMostAvailCapacity, self).__init__(*args, **kwargs)
2072+
2073+ def host_selection_algorithm(self, request_spec, all_hosts,
2074+ selected_hosts, unique):
2075+ size = request_spec['size']
2076+ drive_type = request_spec['drive_type']
2077+ best_host = None
2078+ best_qoscap = None
2079+ best_cap = None
2080+ max_avail = 0
2081+
2082+ for (host, capabilities) in all_hosts:
2083+ for qosgrp, qos_values in capabilities.iteritems():
2084+ if self._qosgrp_match(drive_type, qos_values):
2085+ # we found required qosgroup
2086+
2087+ if size == 0: # full drive match
2088+ available = qos_values['FullDrive']['NumFreeDrives']
2089+ else:
2090+ available = qos_values['AvailableCapacity']
2091+
2092+ if available > max_avail and \
2093+ self._allowed_to_use_host(host,
2094+ selected_hosts,
2095+ unique):
2096+ max_avail = available
2097+ best_host = host
2098+ best_qoscap = qos_values
2099+ best_cap = capabilities
2100+ break # go to the next host
2101+
2102+ if best_host:
2103+ self._add_hostcap_to_list(selected_hosts, best_host, best_cap)
2104+ type_str = "drives" if size == 0 else "bytes"
2105+ LOG.debug(_("\t MostAvailCap: Best host: %(best_host)s. "\
2106+ "(available %(max_avail)s %(type_str)s)"), locals())
2107+
2108+ return (best_host, best_qoscap)
2109
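The scheduler above compares capacities in whole gigabytes (via 30-bit shifts) and accepts a host whose advertised drive capacity falls within `drive_type_approx_capacity_percent` of the requested size. A minimal standalone sketch of that matching rule, with the flag replaced by an illustrative constant and free functions mirroring `BYTES_TO_GB`/`_compare_sizes_approxim` (all names here are for illustration, not part of the patch):

```python
# Illustrative stand-in for the drive_type_approx_capacity_percent flag.
APPROX_PERCENT = 10


def bytes_to_gb(num_bytes):
    # Same conversion as BYTES_TO_GB in the scheduler: shift by 30 bits.
    return num_bytes >> 30


def gb_to_bytes(gb):
    return gb << 30


def capacity_matches(cap_capacity_bytes, requested_gb, percent=APPROX_PERCENT):
    """True if the drive capacity is within +/- percent of the request."""
    cap_gb = bytes_to_gb(int(cap_capacity_bytes))
    tolerance = requested_gb * percent // 100
    return requested_gb - tolerance <= cap_gb <= requested_gb + tolerance
```

With the default 10% window, a 95 GB drive matches a 100 GB request while an 80 GB drive does not.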
2110=== added file 'nova/tests/api/openstack/contrib/test_vsa.py'
2111--- nova/tests/api/openstack/contrib/test_vsa.py 1970-01-01 00:00:00 +0000
2112+++ nova/tests/api/openstack/contrib/test_vsa.py 2011-08-26 22:09:26 +0000
2113@@ -0,0 +1,450 @@
2114+# Copyright 2011 OpenStack LLC.
2115+# All Rights Reserved.
2116+#
2117+# Licensed under the Apache License, Version 2.0 (the "License"); you may
2118+# not use this file except in compliance with the License. You may obtain
2119+# a copy of the License at
2120+#
2121+# http://www.apache.org/licenses/LICENSE-2.0
2122+#
2123+# Unless required by applicable law or agreed to in writing, software
2124+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
2125+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
2126+# License for the specific language governing permissions and limitations
2127+# under the License.
2128+
2129+import json
2130+import stubout
2131+import unittest
2132+import webob
2133+
2134+from nova import context
2135+from nova import db
2136+from nova import exception
2137+from nova import flags
2138+from nova import log as logging
2139+from nova import test
2140+from nova import volume
2141+from nova import vsa
2142+from nova.api import openstack
2143+from nova.tests.api.openstack import fakes
2144+import nova.wsgi
2145+
2146+from nova.api.openstack.contrib.virtual_storage_arrays import _vsa_view
2147+
2148+FLAGS = flags.FLAGS
2149+
2150+LOG = logging.getLogger('nova.tests.api.openstack.vsa')
2151+
2152+last_param = {}
2153+
2154+
2155+def _get_default_vsa_param():
2156+ return {
2157+ 'display_name': 'Test_VSA_name',
2158+ 'display_description': 'Test_VSA_description',
2159+ 'vc_count': 1,
2160+ 'instance_type': 'm1.small',
2161+ 'instance_type_id': 5,
2162+ 'image_name': None,
2163+ 'availability_zone': None,
2164+ 'storage': [],
2165+ 'shared': False
2166+ }
2167+
2168+
2169+def stub_vsa_create(self, context, **param):
2170+ global last_param
2171+ LOG.debug(_("_create: param=%s"), param)
2172+ param['id'] = 123
2173+ param['name'] = 'Test name'
2174+ param['instance_type_id'] = 5
2175+ last_param = param
2176+ return param
2177+
2178+
2179+def stub_vsa_delete(self, context, vsa_id):
2180+ global last_param
2181+ last_param = dict(vsa_id=vsa_id)
2182+
2183+ LOG.debug(_("_delete: %s"), locals())
2184+ if vsa_id != '123':
2185+ raise exception.NotFound
2186+
2187+
2188+def stub_vsa_get(self, context, vsa_id):
2189+ global last_param
2190+ last_param = dict(vsa_id=vsa_id)
2191+
2192+ LOG.debug(_("_get: %s"), locals())
2193+ if vsa_id != '123':
2194+ raise exception.NotFound
2195+
2196+ param = _get_default_vsa_param()
2197+ param['id'] = vsa_id
2198+ return param
2199+
2200+
2201+def stub_vsa_get_all(self, context):
2202+ LOG.debug(_("_get_all: %s"), locals())
2203+ param = _get_default_vsa_param()
2204+ param['id'] = 123
2205+ return [param]
2206+
2207+
2208+class VSAApiTest(test.TestCase):
2209+ def setUp(self):
2210+ super(VSAApiTest, self).setUp()
2211+ self.stubs = stubout.StubOutForTesting()
2212+ fakes.FakeAuthManager.reset_fake_data()
2213+ fakes.FakeAuthDatabase.data = {}
2214+ fakes.stub_out_networking(self.stubs)
2215+ fakes.stub_out_rate_limiting(self.stubs)
2216+ fakes.stub_out_auth(self.stubs)
2217+ self.stubs.Set(vsa.api.API, "create", stub_vsa_create)
2218+ self.stubs.Set(vsa.api.API, "delete", stub_vsa_delete)
2219+ self.stubs.Set(vsa.api.API, "get", stub_vsa_get)
2220+ self.stubs.Set(vsa.api.API, "get_all", stub_vsa_get_all)
2221+
2222+ self.context = context.get_admin_context()
2223+
2224+ def tearDown(self):
2225+ self.stubs.UnsetAll()
2226+ super(VSAApiTest, self).tearDown()
2227+
2228+ def test_vsa_create(self):
2229+ global last_param
2230+ last_param = {}
2231+
2232+ vsa = {"displayName": "VSA Test Name",
2233+ "displayDescription": "VSA Test Desc"}
2234+ body = dict(vsa=vsa)
2235+ req = webob.Request.blank('/v1.1/777/zadr-vsa')
2236+ req.method = 'POST'
2237+ req.body = json.dumps(body)
2238+ req.headers['content-type'] = 'application/json'
2239+
2240+ resp = req.get_response(fakes.wsgi_app())
2241+ self.assertEqual(resp.status_int, 200)
2242+
2243+ # Compare if parameters were correctly passed to stub
2244+ self.assertEqual(last_param['display_name'], "VSA Test Name")
2245+ self.assertEqual(last_param['display_description'], "VSA Test Desc")
2246+
2247+ resp_dict = json.loads(resp.body)
2248+ self.assertTrue('vsa' in resp_dict)
2249+ self.assertEqual(resp_dict['vsa']['displayName'], vsa['displayName'])
2250+ self.assertEqual(resp_dict['vsa']['displayDescription'],
2251+ vsa['displayDescription'])
2252+
2253+ def test_vsa_create_no_body(self):
2254+ req = webob.Request.blank('/v1.1/777/zadr-vsa')
2255+ req.method = 'POST'
2256+ req.body = json.dumps({})
2257+ req.headers['content-type'] = 'application/json'
2258+
2259+ resp = req.get_response(fakes.wsgi_app())
2260+ self.assertEqual(resp.status_int, 422)
2261+
2262+ def test_vsa_delete(self):
2263+ global last_param
2264+ last_param = {}
2265+
2266+ vsa_id = 123
2267+ req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
2268+ req.method = 'DELETE'
2269+
2270+ resp = req.get_response(fakes.wsgi_app())
2271+ self.assertEqual(resp.status_int, 200)
2272+ self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
2273+
2274+ def test_vsa_delete_invalid_id(self):
2275+ global last_param
2276+ last_param = {}
2277+
2278+ vsa_id = 234
2279+ req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
2280+ req.method = 'DELETE'
2281+
2282+ resp = req.get_response(fakes.wsgi_app())
2283+ self.assertEqual(resp.status_int, 404)
2284+ self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
2285+
2286+ def test_vsa_show(self):
2287+ global last_param
2288+ last_param = {}
2289+
2290+ vsa_id = 123
2291+ req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
2292+ req.method = 'GET'
2293+ resp = req.get_response(fakes.wsgi_app())
2294+ self.assertEqual(resp.status_int, 200)
2295+ self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
2296+
2297+ resp_dict = json.loads(resp.body)
2298+ self.assertTrue('vsa' in resp_dict)
2299+ self.assertEqual(resp_dict['vsa']['id'], str(vsa_id))
2300+
2301+ def test_vsa_show_invalid_id(self):
2302+ global last_param
2303+ last_param = {}
2304+
2305+ vsa_id = 234
2306+ req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
2307+ req.method = 'GET'
2308+ resp = req.get_response(fakes.wsgi_app())
2309+ self.assertEqual(resp.status_int, 404)
2310+ self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
2311+
2312+ def test_vsa_index(self):
2313+ req = webob.Request.blank('/v1.1/777/zadr-vsa')
2314+ req.method = 'GET'
2315+ resp = req.get_response(fakes.wsgi_app())
2316+ self.assertEqual(resp.status_int, 200)
2317+
2318+ resp_dict = json.loads(resp.body)
2319+
2320+ self.assertTrue('vsaSet' in resp_dict)
2321+ resp_vsas = resp_dict['vsaSet']
2322+ self.assertEqual(len(resp_vsas), 1)
2323+
2324+ resp_vsa = resp_vsas.pop()
2325+ self.assertEqual(resp_vsa['id'], 123)
2326+
2327+ def test_vsa_detail(self):
2328+ req = webob.Request.blank('/v1.1/777/zadr-vsa/detail')
2329+ req.method = 'GET'
2330+ resp = req.get_response(fakes.wsgi_app())
2331+ self.assertEqual(resp.status_int, 200)
2332+
2333+ resp_dict = json.loads(resp.body)
2334+
2335+ self.assertTrue('vsaSet' in resp_dict)
2336+ resp_vsas = resp_dict['vsaSet']
2337+ self.assertEqual(len(resp_vsas), 1)
2338+
2339+ resp_vsa = resp_vsas.pop()
2340+ self.assertEqual(resp_vsa['id'], 123)
2341+
2342+
2343+def _get_default_volume_param():
2344+ return {
2345+ 'id': 123,
2346+ 'status': 'available',
2347+ 'size': 100,
2348+ 'availability_zone': 'nova',
2349+ 'created_at': None,
2350+ 'attach_status': 'detached',
2351+ 'name': 'vol name',
2352+ 'display_name': 'Default vol name',
2353+ 'display_description': 'Default vol description',
2354+ 'volume_type_id': 1,
2355+ 'volume_metadata': [],
2356+ }
2357+
2358+
2359+def stub_get_vsa_volume_type(self, context):
2360+ return {'id': 1,
2361+ 'name': 'VSA volume type',
2362+ 'extra_specs': {'type': 'vsa_volume'}}
2363+
2364+
2365+def stub_volume_create(self, context, size, snapshot_id, name, description,
2366+ **param):
2367+ LOG.debug(_("_create: size=%s"), size)
2368+ vol = _get_default_volume_param()
2369+ vol['size'] = size
2370+ vol['display_name'] = name
2371+ vol['display_description'] = description
2372+ return vol
2373+
2374+
2375+def stub_volume_update(self, context, **param):
2376+ LOG.debug(_("_volume_update: param=%s"), param)
2377+ pass
2378+
2379+
2380+def stub_volume_delete(self, context, **param):
2381+ LOG.debug(_("_volume_delete: param=%s"), param)
2382+ pass
2383+
2384+
2385+def stub_volume_get(self, context, volume_id):
2386+ LOG.debug(_("_volume_get: volume_id=%s"), volume_id)
2387+ vol = _get_default_volume_param()
2388+ vol['id'] = volume_id
2389+ meta = {'key': 'from_vsa_id', 'value': '123'}
2390+ if volume_id == '345':
2391+ meta = {'key': 'to_vsa_id', 'value': '123'}
2392+ vol['volume_metadata'].append(meta)
2393+ return vol
2394+
2395+
2396+def stub_volume_get_notfound(self, context, volume_id):
2397+ raise exception.NotFound
2398+
2399+
2400+def stub_volume_get_all(self, context, search_opts):
2401+ vol = stub_volume_get(self, context, '123')
2402+ vol['metadata'] = search_opts['metadata']
2403+ return [vol]
2404+
2405+
2406+def return_vsa(context, vsa_id):
2407+ return {'id': vsa_id}
2408+
2409+
2410+class VSAVolumeApiTest(test.TestCase):
2411+
2412+ def setUp(self, test_obj=None, test_objs=None):
2413+ super(VSAVolumeApiTest, self).setUp()
2414+ self.stubs = stubout.StubOutForTesting()
2415+ fakes.FakeAuthManager.reset_fake_data()
2416+ fakes.FakeAuthDatabase.data = {}
2417+ fakes.stub_out_networking(self.stubs)
2418+ fakes.stub_out_rate_limiting(self.stubs)
2419+ fakes.stub_out_auth(self.stubs)
2420+ self.stubs.Set(nova.db.api, 'vsa_get', return_vsa)
2421+ self.stubs.Set(vsa.api.API, "get_vsa_volume_type",
2422+ stub_get_vsa_volume_type)
2423+
2424+ self.stubs.Set(volume.api.API, "update", stub_volume_update)
2425+ self.stubs.Set(volume.api.API, "delete", stub_volume_delete)
2426+ self.stubs.Set(volume.api.API, "get", stub_volume_get)
2427+ self.stubs.Set(volume.api.API, "get_all", stub_volume_get_all)
2428+
2429+ self.context = context.get_admin_context()
2430+ self.test_obj = test_obj if test_obj else "volume"
2431+ self.test_objs = test_objs if test_objs else "volumes"
2432+
2433+ def tearDown(self):
2434+ self.stubs.UnsetAll()
2435+ super(VSAVolumeApiTest, self).tearDown()
2436+
2437+ def test_vsa_volume_create(self):
2438+ self.stubs.Set(volume.api.API, "create", stub_volume_create)
2439+
2440+ vol = {"size": 100,
2441+ "displayName": "VSA Volume Test Name",
2442+ "displayDescription": "VSA Volume Test Desc"}
2443+ body = {self.test_obj: vol}
2444+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
2445+ req.method = 'POST'
2446+ req.body = json.dumps(body)
2447+ req.headers['content-type'] = 'application/json'
2448+ resp = req.get_response(fakes.wsgi_app())
2449+
2450+ if self.test_obj == "volume":
2451+ self.assertEqual(resp.status_int, 200)
2452+
2453+ resp_dict = json.loads(resp.body)
2454+ self.assertTrue(self.test_obj in resp_dict)
2455+ self.assertEqual(resp_dict[self.test_obj]['size'],
2456+ vol['size'])
2457+ self.assertEqual(resp_dict[self.test_obj]['displayName'],
2458+ vol['displayName'])
2459+ self.assertEqual(resp_dict[self.test_obj]['displayDescription'],
2460+ vol['displayDescription'])
2461+ else:
2462+ self.assertEqual(resp.status_int, 400)
2463+
2464+ def test_vsa_volume_create_no_body(self):
2465+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
2466+ req.method = 'POST'
2467+ req.body = json.dumps({})
2468+ req.headers['content-type'] = 'application/json'
2469+
2470+ resp = req.get_response(fakes.wsgi_app())
2471+ if self.test_obj == "volume":
2472+ self.assertEqual(resp.status_int, 422)
2473+ else:
2474+ self.assertEqual(resp.status_int, 400)
2475+
2476+ def test_vsa_volume_index(self):
2477+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
2478+ resp = req.get_response(fakes.wsgi_app())
2479+ self.assertEqual(resp.status_int, 200)
2480+
2481+ def test_vsa_volume_detail(self):
2482+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/detail' % \
2483+ self.test_objs)
2484+ resp = req.get_response(fakes.wsgi_app())
2485+ self.assertEqual(resp.status_int, 200)
2486+
2487+ def test_vsa_volume_show(self):
2488+ obj_num = 234 if self.test_objs == "volumes" else 345
2489+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
2490+ (self.test_objs, obj_num))
2491+ resp = req.get_response(fakes.wsgi_app())
2492+ self.assertEqual(resp.status_int, 200)
2493+
2494+ def test_vsa_volume_show_no_vsa_assignment(self):
2495+ req = webob.Request.blank('/v1.1/777/zadr-vsa/4/%s/333' % \
2496+ (self.test_objs))
2497+ resp = req.get_response(fakes.wsgi_app())
2498+ self.assertEqual(resp.status_int, 400)
2499+
2500+ def test_vsa_volume_show_no_volume(self):
2501+ self.stubs.Set(volume.api.API, "get", stub_volume_get_notfound)
2502+
2503+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/333' % \
2504+ (self.test_objs))
2505+ resp = req.get_response(fakes.wsgi_app())
2506+ self.assertEqual(resp.status_int, 404)
2507+
2508+ def test_vsa_volume_update(self):
2509+ obj_num = 234 if self.test_objs == "volumes" else 345
2510+ update = {"status": "available",
2511+ "displayName": "Test Display name"}
2512+ body = {self.test_obj: update}
2513+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
2514+ (self.test_objs, obj_num))
2515+ req.method = 'PUT'
2516+ req.body = json.dumps(body)
2517+ req.headers['content-type'] = 'application/json'
2518+
2519+ resp = req.get_response(fakes.wsgi_app())
2520+ if self.test_obj == "volume":
2521+ self.assertEqual(resp.status_int, 202)
2522+ else:
2523+ self.assertEqual(resp.status_int, 400)
2524+
2525+ def test_vsa_volume_delete(self):
2526+ obj_num = 234 if self.test_objs == "volumes" else 345
2527+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
2528+ (self.test_objs, obj_num))
2529+ req.method = 'DELETE'
2530+ resp = req.get_response(fakes.wsgi_app())
2531+ if self.test_obj == "volume":
2532+ self.assertEqual(resp.status_int, 202)
2533+ else:
2534+ self.assertEqual(resp.status_int, 400)
2535+
2536+ def test_vsa_volume_delete_no_vsa_assignment(self):
2537+ req = webob.Request.blank('/v1.1/777/zadr-vsa/4/%s/333' % \
2538+ (self.test_objs))
2539+ req.method = 'DELETE'
2540+ resp = req.get_response(fakes.wsgi_app())
2541+ self.assertEqual(resp.status_int, 400)
2542+
2543+ def test_vsa_volume_delete_no_volume(self):
2544+ self.stubs.Set(volume.api.API, "get", stub_volume_get_notfound)
2545+
2546+ req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/333' % \
2547+ (self.test_objs))
2548+ req.method = 'DELETE'
2549+ resp = req.get_response(fakes.wsgi_app())
2550+ if self.test_obj == "volume":
2551+ self.assertEqual(resp.status_int, 404)
2552+ else:
2553+ self.assertEqual(resp.status_int, 400)
2554+
2555+
2556+class VSADriveApiTest(VSAVolumeApiTest):
2557+ def setUp(self):
2558+ super(VSADriveApiTest, self).setUp(test_obj="drive",
2559+ test_objs="drives")
2560+
2561+ def tearDown(self):
2562+ self.stubs.UnsetAll()
2563+ super(VSADriveApiTest, self).tearDown()
2564
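The `VsaSchedulerLeastUsedHost` class in the scheduler file earlier in this diff picks, among hosts whose QoS group matches the drive type and has enough capacity, the host with the smallest used capacity. A reduced, self-contained sketch of just that selection step (the host dictionary and helper name are invented for illustration; the real code also applies the QoS-group and uniqueness filters):

```python
def least_used_host(hosts):
    """Return the name of the host with the least used capacity.

    ``hosts`` maps host name -> dict with 'TotalCapacity' and
    'AvailableCapacity' in bytes, mirroring the capability fields the
    scheduler reads. Returns None for an empty input.
    """
    best_host = None
    min_used = None
    for host, caps in hosts.items():
        used = caps['TotalCapacity'] - caps['AvailableCapacity']
        if best_host is None or used < min_used:
            best_host = host
            min_used = used
    return best_host
```

The `MostAvailCapacity` variant is the mirror image: it maximizes free drives (full-drive requests) or available bytes instead of minimizing used capacity.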
2565=== modified file 'nova/tests/api/openstack/test_extensions.py'
2566--- nova/tests/api/openstack/test_extensions.py 2011-08-24 21:30:40 +0000
2567+++ nova/tests/api/openstack/test_extensions.py 2011-08-26 22:09:26 +0000
2568@@ -95,6 +95,7 @@
2569 "Quotas",
2570 "Rescue",
2571 "SecurityGroups",
2572+ "VSAs",
2573 "VirtualInterfaces",
2574 "Volumes",
2575 "VolumeTypes",
2576
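The `_consume_resource`/`_consume_partition` pair in the scheduler adjusts per-QoS-group counters with a `direction` of -1 (allocate) or +1 (release): when no free partitions remain on an allocate, one free drive is carved into partitions, and when a whole drive's worth of partitions is freed, a drive returns to the free pool. A simplified sketch of that bookkeeping, assuming a flat capability dict and invented scenario values (the real code nests these fields under 'FullDrive'/'PartitionDrive'):

```python
def consume_partition(qos, size, direction):
    """Adjust partition counters; direction is -1 to allocate, +1 to free."""
    part_size = qos['PartitionSize'] or size
    parts_per_drive = qos['DriveCapacity'] // part_size

    if direction == -1 and qos['NumFreePartitions'] == 0:
        # No free partitions left: carve one free drive into partitions.
        qos['NumFreeDrives'] -= 1
        qos['NumFreePartitions'] += parts_per_drive

    qos['NumFreePartitions'] += direction
    qos['NumOccupiedPartitions'] -= direction

    if direction == 1 and qos['NumFreePartitions'] >= parts_per_drive:
        # A whole drive's worth of partitions freed: reassemble the drive.
        qos['NumFreeDrives'] += 1
        qos['NumFreePartitions'] -= parts_per_drive
```

Allocating then releasing one partition leaves the counters exactly where they started, which is what lets the scheduler roll back reservations on failure by replaying with the opposite direction.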
2577=== added file 'nova/tests/scheduler/test_vsa_scheduler.py'
2578--- nova/tests/scheduler/test_vsa_scheduler.py 1970-01-01 00:00:00 +0000
2579+++ nova/tests/scheduler/test_vsa_scheduler.py 2011-08-26 22:09:26 +0000
2580@@ -0,0 +1,641 @@
2581+# Copyright 2011 OpenStack LLC.
2582+# All Rights Reserved.
2583+#
2584+# Licensed under the Apache License, Version 2.0 (the "License"); you may
2585+# not use this file except in compliance with the License. You may obtain
2586+# a copy of the License at
2587+#
2588+# http://www.apache.org/licenses/LICENSE-2.0
2589+#
2590+# Unless required by applicable law or agreed to in writing, software
2591+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
2592+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
2593+# License for the specific language governing permissions and limitations
2594+# under the License.
2595+
2596+import stubout
2597+
2598+import nova
2599+
2600+from nova import context
2601+from nova import db
2602+from nova import exception
2603+from nova import flags
2604+from nova import log as logging
2605+from nova import test
2606+from nova import utils
2607+from nova.volume import volume_types
2608+
2609+from nova.scheduler import vsa as vsa_sched
2610+from nova.scheduler import driver
2611+
2612+FLAGS = flags.FLAGS
2613+LOG = logging.getLogger('nova.tests.scheduler.vsa')
2614+
2615+scheduled_volumes = []
2616+scheduled_volume = {}
2617+global_volume = {}
2618+
2619+
2620+class FakeVsaLeastUsedScheduler(
2621+ vsa_sched.VsaSchedulerLeastUsedHost):
2622+ # No need to stub anything at the moment
2623+ pass
2624+
2625+
2626+class FakeVsaMostAvailCapacityScheduler(
2627+ vsa_sched.VsaSchedulerMostAvailCapacity):
2628+ # No need to stub anything at the moment
2629+ pass
2630+
2631+
2632+class VsaSchedulerTestCase(test.TestCase):
2633+
2634+ def _get_vol_creation_request(self, num_vols, drive_ix, size=0):
2635+ volume_params = []
2636+ for i in range(num_vols):
2637+
2638+ name = 'name_' + str(i)
2639+ try:
2640+ volume_types.create(self.context, name,
2641+ extra_specs={'type': 'vsa_drive',
2642+ 'drive_name': name,
2643+ 'drive_type': 'type_' + str(drive_ix),
2644+ 'drive_size': 1 + 100 * (drive_ix)})
2645+ self.created_types_lst.append(name)
2646+ except exception.ApiError:
2647+ # type is already created
2648+ pass
2649+
2650+ volume_type = volume_types.get_volume_type_by_name(self.context,
2651+ name)
2652+ volume = {'size': size,
2653+ 'snapshot_id': None,
2654+ 'name': 'vol_' + str(i),
2655+ 'description': None,
2656+ 'volume_type_id': volume_type['id']}
2657+ volume_params.append(volume)
2658+
2659+ return {'num_volumes': len(volume_params),
2660+ 'vsa_id': 123,
2661+ 'volumes': volume_params}
2662+
2663+ def _generate_default_service_states(self):
2664+ service_states = {}
2665+ for i in range(self.host_num):
2666+ host = {}
2667+ hostname = 'host_' + str(i)
2668+ if hostname in self.exclude_host_list:
2669+ continue
2670+
2671+ host['volume'] = {'timestamp': utils.utcnow(),
2672+ 'drive_qos_info': {}}
2673+
2674+ for j in range(self.drive_type_start_ix,
2675+ self.drive_type_start_ix + self.drive_type_num):
2676+ dtype = {}
2677+ dtype['Name'] = 'name_' + str(j)
2678+ dtype['DriveType'] = 'type_' + str(j)
2679+ dtype['TotalDrives'] = 2 * (self.init_num_drives + i)
2680+ dtype['DriveCapacity'] = vsa_sched.GB_TO_BYTES(1 + 100 * j)
2681+ dtype['TotalCapacity'] = dtype['TotalDrives'] * \
2682+ dtype['DriveCapacity']
2683+ dtype['AvailableCapacity'] = (dtype['TotalDrives'] - i) * \
2684+ dtype['DriveCapacity']
2685+ dtype['DriveRpm'] = 7200
2686+ dtype['DifCapable'] = 0
2687+ dtype['SedCapable'] = 0
2688+ dtype['PartitionDrive'] = {
2689+ 'PartitionSize': 0,
2690+ 'NumOccupiedPartitions': 0,
2691+ 'NumFreePartitions': 0}
2692+ dtype['FullDrive'] = {
2693+ 'NumFreeDrives': dtype['TotalDrives'] - i,
2694+ 'NumOccupiedDrives': i}
2695+ host['volume']['drive_qos_info'][dtype['Name']] = dtype
2696+
2697+ service_states[hostname] = host
2698+
2699+ return service_states
2700+
2701+ def _print_service_states(self):
2702+ for host, host_val in self.service_states.iteritems():
2703+ LOG.info(_("Host %s"), host)
2704+ total_used = 0
2705+ total_available = 0
2706+ qos = host_val['volume']['drive_qos_info']
2707+
2708+ for k, d in qos.iteritems():
2709+ LOG.info("\t%s: type %s: drives (used %2d, total %2d) "\
2710+ "size %3d, total %4d, used %4d, avail %d",
2711+ k, d['DriveType'],
2712+ d['FullDrive']['NumOccupiedDrives'], d['TotalDrives'],
2713+ vsa_sched.BYTES_TO_GB(d['DriveCapacity']),
2714+ vsa_sched.BYTES_TO_GB(d['TotalCapacity']),
2715+ vsa_sched.BYTES_TO_GB(d['TotalCapacity'] - \
2716+ d['AvailableCapacity']),
2717+ vsa_sched.BYTES_TO_GB(d['AvailableCapacity']))
2718+
2719+ total_used += vsa_sched.BYTES_TO_GB(d['TotalCapacity'] - \
2720+ d['AvailableCapacity'])
2721+ total_available += vsa_sched.BYTES_TO_GB(
2722+ d['AvailableCapacity'])
2723+ LOG.info("Host %s: used %d, avail %d",
2724+ host, total_used, total_available)
2725+
2726+ def _set_service_states(self, host_num,
2727+ drive_type_start_ix, drive_type_num,
2728+ init_num_drives=10,
2729+ exclude_host_list=[]):
2730+ self.host_num = host_num
2731+ self.drive_type_start_ix = drive_type_start_ix
2732+ self.drive_type_num = drive_type_num
2733+ self.exclude_host_list = exclude_host_list
2734+ self.init_num_drives = init_num_drives
2735+ self.service_states = self._generate_default_service_states()
2736+
2737+ def _get_service_states(self):
2738+ return self.service_states
2739+
2740+ def _fake_get_service_states(self):
2741+ return self._get_service_states()
2742+
2743+ def _fake_provision_volume(self, context, vol, vsa_id, availability_zone):
2744+ global scheduled_volumes
2745+ scheduled_volumes.append(dict(vol=vol,
2746+ vsa_id=vsa_id,
2747+ az=availability_zone))
2748+ name = vol['name']
2749+ host = vol['host']
2750+ LOG.debug(_("Test: provision vol %(name)s on host %(host)s"),
2751+ locals())
2752+ LOG.debug(_("\t vol=%(vol)s"), locals())
2753+ pass
2754+
2755+ def _fake_vsa_update(self, context, vsa_id, values):
2756+ LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\
2757+ "values=%(values)s"), locals())
2758+ pass
2759+
2760+ def _fake_volume_create(self, context, options):
2761+ LOG.debug(_("Test: Volume create: %s"), options)
2762+ options['id'] = 123
2763+ global global_volume
2764+ global_volume = options
2765+ return options
2766+
2767+ def _fake_volume_get(self, context, volume_id):
2768+ LOG.debug(_("Test: Volume get request: id=%(volume_id)s"), locals())
2769+ global global_volume
2770+ global_volume['id'] = volume_id
2771+ global_volume['availability_zone'] = None
2772+ return global_volume
2773+
2774+ def _fake_volume_update(self, context, volume_id, values):
2775+ LOG.debug(_("Test: Volume update request: id=%(volume_id)s "\
2776+ "values=%(values)s"), locals())
2777+ global scheduled_volume
2778+ scheduled_volume = {'id': volume_id, 'host': values['host']}
2779+ pass
2780+
2781+ def _fake_service_get_by_args(self, context, host, binary):
2782+ return "service"
2783+
2784+ def _fake_service_is_up_True(self, service):
2785+ return True
2786+
2787+ def _fake_service_is_up_False(self, service):
2788+ return False
2789+
2790+ def setUp(self, sched_class=None):
2791+ super(VsaSchedulerTestCase, self).setUp()
2792+ self.stubs = stubout.StubOutForTesting()
2793+ self.context = context.get_admin_context()
2794+
2795+ if sched_class is None:
2796+ self.sched = FakeVsaLeastUsedScheduler()
2797+ else:
2798+ self.sched = sched_class
2799+
2800+ self.host_num = 10
2801+ self.drive_type_num = 5
2802+
2803+ self.stubs.Set(self.sched,
2804+ '_get_service_states', self._fake_get_service_states)
2805+ self.stubs.Set(self.sched,
2806+ '_provision_volume', self._fake_provision_volume)
2807+ self.stubs.Set(nova.db, 'vsa_update', self._fake_vsa_update)
2808+
2809+ self.stubs.Set(nova.db, 'volume_get', self._fake_volume_get)
2810+ self.stubs.Set(nova.db, 'volume_update', self._fake_volume_update)
2811+
2812+ self.created_types_lst = []
2813+
2814+ def tearDown(self):
2815+ for name in self.created_types_lst:
2816+ volume_types.purge(self.context, name)
2817+
2818+ self.stubs.UnsetAll()
2819+ super(VsaSchedulerTestCase, self).tearDown()
2820+
2821+ def test_vsa_sched_create_volumes_simple(self):
2822+ global scheduled_volumes
2823+ scheduled_volumes = []
2824+ self._set_service_states(host_num=10,
2825+ drive_type_start_ix=0,
2826+ drive_type_num=5,
2827+ init_num_drives=10,
2828+ exclude_host_list=['host_1', 'host_3'])
2829+ prev = self._generate_default_service_states()
2830+ request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
2831+
2832+ self.sched.schedule_create_volumes(self.context,
2833+ request_spec,
2834+ availability_zone=None)
2835+
2836+ self.assertEqual(len(scheduled_volumes), 3)
2837+ self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_0')
2838+ self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_2')
2839+ self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_4')
2840+
2841+ cur = self._get_service_states()
2842+ for host in ['host_0', 'host_2', 'host_4']:
2843+ cur_dtype = cur[host]['volume']['drive_qos_info']['name_2']
2844+ prev_dtype = prev[host]['volume']['drive_qos_info']['name_2']
2845+ self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
2846+ self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
2847+ prev_dtype['FullDrive']['NumFreeDrives'] - 1)
2848+ self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
2849+ prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
2850+
2851+ def test_vsa_sched_no_drive_type(self):
2852+ self._set_service_states(host_num=10,
2853+ drive_type_start_ix=0,
2854+ drive_type_num=5,
2855+ init_num_drives=1)
2856+ request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=6)
2857+ self.assertRaises(driver.WillNotSchedule,
2858+ self.sched.schedule_create_volumes,
2859+ self.context,
2860+ request_spec,
2861+ availability_zone=None)
2862+
2863+ def test_vsa_sched_not_enough_drives(self):
2864+ global scheduled_volumes
2865+ scheduled_volumes = []
2866+
2867+ self._set_service_states(host_num=3,
2868+ drive_type_start_ix=0,
2869+ drive_type_num=1,
2870+ init_num_drives=0)
2871+ prev = self._generate_default_service_states()
2872+ request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=0)
2873+
2874+ self.assertRaises(driver.WillNotSchedule,
2875+ self.sched.schedule_create_volumes,
2876+ self.context,
2877+ request_spec,
2878+ availability_zone=None)
2879+
2880+ # check that everything was returned
2881+ cur = self._get_service_states()
2882+ for k, v in prev.iteritems():
2883+ self.assertEqual(prev[k]['volume']['drive_qos_info'],
2884+ cur[k]['volume']['drive_qos_info'])
2885+
2886+ def test_vsa_sched_wrong_topic(self):
2887+ self._set_service_states(host_num=1,
2888+ drive_type_start_ix=0,
2889+ drive_type_num=5,
2890+ init_num_drives=1)
2891+ states = self._get_service_states()
2892+ new_states = {}
2893+ new_states['host_0'] = {'compute': states['host_0']['volume']}
2894+ self.service_states = new_states
2895+ request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
2896+
2897+ self.assertRaises(driver.WillNotSchedule,
2898+ self.sched.schedule_create_volumes,
2899+ self.context,
2900+ request_spec,
2901+ availability_zone=None)
2902+
2903+ def test_vsa_sched_provision_volume(self):
2904+ global global_volume
2905+ global_volume = {}
2906+ self._set_service_states(host_num=1,
2907+ drive_type_start_ix=0,
2908+ drive_type_num=1,
2909+ init_num_drives=1)
2910+ request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
2911+
2912+ self.stubs.UnsetAll()
2913+ self.stubs.Set(self.sched,
2914+ '_get_service_states', self._fake_get_service_states)
2915+ self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create)
2916+
2917+ self.sched.schedule_create_volumes(self.context,
2918+ request_spec,
2919+ availability_zone=None)
2920+
2921+ self.assertEqual(request_spec['volumes'][0]['name'],
2922+ global_volume['display_name'])
2923+
2924+ def test_vsa_sched_no_free_drives(self):
2925+ self._set_service_states(host_num=1,
2926+ drive_type_start_ix=0,
2927+ drive_type_num=1,
2928+ init_num_drives=1)
2929+ request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
2930+
2931+ self.sched.schedule_create_volumes(self.context,
2932+ request_spec,
2933+ availability_zone=None)
2934+
2935+ cur = self._get_service_states()
2936+ cur_dtype = cur['host_0']['volume']['drive_qos_info']['name_0']
2937+ self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'], 1)
2938+
2939+ new_request = self._get_vol_creation_request(num_vols=1, drive_ix=0)
2940+
2941+ self.sched.schedule_create_volumes(self.context,
2942+ request_spec,
2943+ availability_zone=None)
2944+ self._print_service_states()
2945+
2946+ self.assertRaises(driver.WillNotSchedule,
2947+ self.sched.schedule_create_volumes,
2948+ self.context,
2949+ new_request,
2950+ availability_zone=None)
2951+
2952+ def test_vsa_sched_forced_host(self):
2953+ global scheduled_volumes
2954+ scheduled_volumes = []
2955+
2956+ self._set_service_states(host_num=10,
2957+ drive_type_start_ix=0,
2958+ drive_type_num=5,
2959+ init_num_drives=10)
2960+
2961+ request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
2962+
2963+ self.assertRaises(exception.HostBinaryNotFound,
2964+ self.sched.schedule_create_volumes,
2965+ self.context,
2966+ request_spec,
2967+ availability_zone="nova:host_5")
2968+
2969+ self.stubs.Set(nova.db,
2970+ 'service_get_by_args', self._fake_service_get_by_args)
2971+ self.stubs.Set(self.sched,
2972+ 'service_is_up', self._fake_service_is_up_False)
2973+
2974+ self.assertRaises(driver.WillNotSchedule,
2975+ self.sched.schedule_create_volumes,
2976+ self.context,
2977+ request_spec,
2978+ availability_zone="nova:host_5")
2979+
2980+ self.stubs.Set(self.sched,
2981+ 'service_is_up', self._fake_service_is_up_True)
2982+
2983+ self.sched.schedule_create_volumes(self.context,
2984+ request_spec,
2985+ availability_zone="nova:host_5")
2986+
2987+ self.assertEqual(len(scheduled_volumes), 3)
2988+ self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_5')
2989+ self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_5')
2990+ self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_5')
2991+
2992+ def test_vsa_sched_create_volumes_partition(self):
2993+ global scheduled_volumes
2994+ scheduled_volumes = []
2995+ self._set_service_states(host_num=5,
2996+ drive_type_start_ix=0,
2997+ drive_type_num=5,
2998+ init_num_drives=1,
2999+ exclude_host_list=['host_0', 'host_2'])
3000+ prev = self._generate_default_service_states()
3001+ request_spec = self._get_vol_creation_request(num_vols=3,
3002+ drive_ix=3,
3003+ size=50)
3004+ self.sched.schedule_create_volumes(self.context,
3005+ request_spec,
3006+ availability_zone=None)
3007+
3008+ self.assertEqual(len(scheduled_volumes), 3)
3009+ self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_1')
3010+ self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_3')
3011+ self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_4')
3012+
3013+ cur = self._get_service_states()
3014+ for host in ['host_1', 'host_3', 'host_4']:
3015+ cur_dtype = cur[host]['volume']['drive_qos_info']['name_3']
3016+ prev_dtype = prev[host]['volume']['drive_qos_info']['name_3']
3017+
3018+ self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
3019+ self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
3020+ prev_dtype['FullDrive']['NumFreeDrives'] - 1)
3021+ self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
3022+ prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
3023+
3024+ self.assertEqual(prev_dtype['PartitionDrive']
3025+ ['NumOccupiedPartitions'], 0)
3026+ self.assertEqual(cur_dtype['PartitionDrive']
3027+ ['NumOccupiedPartitions'], 1)
3028+ self.assertEqual(cur_dtype['PartitionDrive']
3029+ ['NumFreePartitions'], 5)
3030+
3031+ self.assertEqual(prev_dtype['PartitionDrive']
3032+ ['NumFreePartitions'], 0)
3033+ self.assertEqual(prev_dtype['PartitionDrive']
3034+ ['PartitionSize'], 0)
3035+
3036+ def test_vsa_sched_create_single_volume_az(self):
3037+ global scheduled_volume
3038+ scheduled_volume = {}
3039+
3040+ def _fake_volume_get_az(context, volume_id):
3041+ LOG.debug(_("Test: Volume get: id=%(volume_id)s"), locals())
3042+ return {'id': volume_id, 'availability_zone': 'nova:host_3'}
3043+
3044+ self.stubs.Set(nova.db, 'volume_get', _fake_volume_get_az)
3045+ self.stubs.Set(nova.db,
3046+ 'service_get_by_args', self._fake_service_get_by_args)
3047+ self.stubs.Set(self.sched,
3048+ 'service_is_up', self._fake_service_is_up_True)
3049+
3050+ host = self.sched.schedule_create_volume(self.context,
3051+ 123, availability_zone=None)
3052+
3053+ self.assertEqual(host, 'host_3')
3054+ self.assertEqual(scheduled_volume['id'], 123)
3055+ self.assertEqual(scheduled_volume['host'], 'host_3')
3056+
3057+ def test_vsa_sched_create_single_non_vsa_volume(self):
3058+ global scheduled_volume
3059+ scheduled_volume = {}
3060+
3061+ global global_volume
3062+ global_volume = {}
3063+ global_volume['volume_type_id'] = None
3064+
3065+ self.assertRaises(driver.NoValidHost,
3066+ self.sched.schedule_create_volume,
3067+ self.context,
3068+ 123,
3069+ availability_zone=None)
3070+
3071+ def test_vsa_sched_create_single_volume(self):
3072+ global scheduled_volume
3073+ scheduled_volume = {}
3074+ self._set_service_states(host_num=10,
3075+ drive_type_start_ix=0,
3076+ drive_type_num=5,
3077+ init_num_drives=10,
3078+ exclude_host_list=['host_0', 'host_1'])
3079+ prev = self._generate_default_service_states()
3080+
3081+ global global_volume
3082+ global_volume = {}
3083+
3084+ drive_ix = 2
3085+ name = 'name_' + str(drive_ix)
3086+ volume_types.create(self.context, name,
3087+ extra_specs={'type': 'vsa_drive',
3088+ 'drive_name': name,
3089+ 'drive_type': 'type_' + str(drive_ix),
3090+ 'drive_size': 1 + 100 * (drive_ix)})
3091+ self.created_types_lst.append(name)
3092+ volume_type = volume_types.get_volume_type_by_name(self.context, name)
3093+
3094+ global_volume['volume_type_id'] = volume_type['id']
3095+ global_volume['size'] = 0
3096+
3097+ host = self.sched.schedule_create_volume(self.context,
3098+ 123, availability_zone=None)
3099+
3100+ self.assertEqual(host, 'host_2')
3101+ self.assertEqual(scheduled_volume['id'], 123)
3102+ self.assertEqual(scheduled_volume['host'], 'host_2')
3103+
3104+
3105+class VsaSchedulerTestCaseMostAvail(VsaSchedulerTestCase):
3106+
3107+ def setUp(self):
3108+ super(VsaSchedulerTestCaseMostAvail, self).setUp(
3109+ FakeVsaMostAvailCapacityScheduler())
3110+
3111+ def tearDown(self):
3112+ self.stubs.UnsetAll()
3113+ super(VsaSchedulerTestCaseMostAvail, self).tearDown()
3114+
3115+ def test_vsa_sched_create_single_volume(self):
3116+ global scheduled_volume
3117+ scheduled_volume = {}
3118+ self._set_service_states(host_num=10,
3119+ drive_type_start_ix=0,
3120+ drive_type_num=5,
3121+ init_num_drives=10,
3122+ exclude_host_list=['host_0', 'host_1'])
3123+ prev = self._generate_default_service_states()
3124+
3125+ global global_volume
3126+ global_volume = {}
3127+
3128+ drive_ix = 2
3129+ name = 'name_' + str(drive_ix)
3130+ volume_types.create(self.context, name,
3131+ extra_specs={'type': 'vsa_drive',
3132+ 'drive_name': name,
3133+ 'drive_type': 'type_' + str(drive_ix),
3134+ 'drive_size': 1 + 100 * (drive_ix)})
3135+ self.created_types_lst.append(name)
3136+ volume_type = volume_types.get_volume_type_by_name(self.context, name)
3137+
3138+ global_volume['volume_type_id'] = volume_type['id']
3139+ global_volume['size'] = 0
3140+
3141+ host = self.sched.schedule_create_volume(self.context,
3142+ 123, availability_zone=None)
3143+
3144+ self.assertEqual(host, 'host_9')
3145+ self.assertEqual(scheduled_volume['id'], 123)
3146+ self.assertEqual(scheduled_volume['host'], 'host_9')
3147+
3148+ def test_vsa_sched_create_volumes_simple(self):
3149+ global scheduled_volumes
3150+ scheduled_volumes = []
3151+ self._set_service_states(host_num=10,
3152+ drive_type_start_ix=0,
3153+ drive_type_num=5,
3154+ init_num_drives=10,
3155+ exclude_host_list=['host_1', 'host_3'])
3156+ prev = self._generate_default_service_states()
3157+ request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
3158+
3159+ self._print_service_states()
3160+
3161+ self.sched.schedule_create_volumes(self.context,
3162+ request_spec,
3163+ availability_zone=None)
3164+
3165+ self.assertEqual(len(scheduled_volumes), 3)
3166+ self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_9')
3167+ self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_8')
3168+ self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_7')
3169+
3170+ cur = self._get_service_states()
3171+ for host in ['host_9', 'host_8', 'host_7']:
3172+ cur_dtype = cur[host]['volume']['drive_qos_info']['name_2']
3173+ prev_dtype = prev[host]['volume']['drive_qos_info']['name_2']
3174+ self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
3175+ self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
3176+ prev_dtype['FullDrive']['NumFreeDrives'] - 1)
3177+ self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
3178+ prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
3179+
3180+ def test_vsa_sched_create_volumes_partition(self):
3181+ global scheduled_volumes
3182+ scheduled_volumes = []
3183+ self._set_service_states(host_num=5,
3184+ drive_type_start_ix=0,
3185+ drive_type_num=5,
3186+ init_num_drives=1,
3187+ exclude_host_list=['host_0', 'host_2'])
3188+ prev = self._generate_default_service_states()
3189+ request_spec = self._get_vol_creation_request(num_vols=3,
3190+ drive_ix=3,
3191+ size=50)
3192+ self.sched.schedule_create_volumes(self.context,
3193+ request_spec,
3194+ availability_zone=None)
3195+
3196+ self.assertEqual(len(scheduled_volumes), 3)
3197+ self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_4')
3198+ self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_3')
3199+ self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_1')
3200+
3201+ cur = self._get_service_states()
3202+ for host in ['host_1', 'host_3', 'host_4']:
3203+ cur_dtype = cur[host]['volume']['drive_qos_info']['name_3']
3204+ prev_dtype = prev[host]['volume']['drive_qos_info']['name_3']
3205+
3206+ self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
3207+ self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
3208+ prev_dtype['FullDrive']['NumFreeDrives'] - 1)
3209+ self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
3210+ prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
3211+
3212+ self.assertEqual(prev_dtype['PartitionDrive']
3213+ ['NumOccupiedPartitions'], 0)
3214+ self.assertEqual(cur_dtype['PartitionDrive']
3215+ ['NumOccupiedPartitions'], 1)
3216+ self.assertEqual(cur_dtype['PartitionDrive']
3217+ ['NumFreePartitions'], 5)
3218+ self.assertEqual(prev_dtype['PartitionDrive']
3219+ ['NumFreePartitions'], 0)
3220+ self.assertEqual(prev_dtype['PartitionDrive']
3221+ ['PartitionSize'], 0)
3222
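Review note: the scheduler tests above build per-host capability reports keyed by drive-type name (see `_generate_default_service_states`). A standalone sketch of that `drive_qos_info` shape and of how a least-used-host choice falls out of it — the data values are hypothetical; only the dict layout mirrors the tests:

```python
# Sketch of the per-host capability report the VSA scheduler tests
# generate, plus a least-used selection over it. Hypothetical data.
GB = 1024 ** 3

def make_host(free_drives, occupied_drives, drive_gb=100):
    total = free_drives + occupied_drives
    return {'volume': {'drive_qos_info': {'name_0': {
        'DriveType': 'type_0',
        'TotalDrives': total,
        'DriveCapacity': drive_gb * GB,
        'AvailableCapacity': free_drives * drive_gb * GB,
        'FullDrive': {'NumFreeDrives': free_drives,
                      'NumOccupiedDrives': occupied_drives}}}}}

service_states = {'host_0': make_host(10, 0),
                  'host_1': make_host(8, 2),
                  'host_2': make_host(9, 1)}

def least_used(states, dtype='name_0'):
    # Pick the host with the fewest occupied drives of this type.
    return min(states,
               key=lambda h: states[h]['volume']['drive_qos_info']
                                      [dtype]['FullDrive']['NumOccupiedDrives'])

print(least_used(service_states))  # host_0 has no occupied drives
```

The most-available-capacity variant would instead maximize `AvailableCapacity`, which is why `VsaSchedulerTestCaseMostAvail` expects the high-numbered (least loaded by capacity) hosts first.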
3223=== added file 'nova/tests/test_vsa.py'
3224--- nova/tests/test_vsa.py 1970-01-01 00:00:00 +0000
3225+++ nova/tests/test_vsa.py 2011-08-26 22:09:26 +0000
3226@@ -0,0 +1,182 @@
3227+# Copyright 2011 OpenStack LLC.
3228+# All Rights Reserved.
3229+#
3230+# Licensed under the Apache License, Version 2.0 (the "License"); you may
3231+# not use this file except in compliance with the License. You may obtain
3232+# a copy of the License at
3233+#
3234+# http://www.apache.org/licenses/LICENSE-2.0
3235+#
3236+# Unless required by applicable law or agreed to in writing, software
3237+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
3238+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
3239+# License for the specific language governing permissions and limitations
3240+# under the License.
3241+
3242+import base64
3243+import stubout
3244+
3245+from xml.etree import ElementTree
3246+from xml.etree.ElementTree import Element, SubElement
3247+
3248+from nova import context
3249+from nova import db
3250+from nova import exception
3251+from nova import flags
3252+from nova import log as logging
3253+from nova import test
3254+from nova import vsa
3255+from nova import volume
3256+from nova.volume import volume_types
3257+from nova.vsa import utils as vsa_utils
3258+
3259+import nova.image.fake
3260+
3261+FLAGS = flags.FLAGS
3262+LOG = logging.getLogger('nova.tests.vsa')
3263+
3264+
3265+class VsaTestCase(test.TestCase):
3266+
3267+ def setUp(self):
3268+ super(VsaTestCase, self).setUp()
3269+ self.stubs = stubout.StubOutForTesting()
3270+ self.vsa_api = vsa.API()
3271+ self.volume_api = volume.API()
3272+
3273+ FLAGS.quota_volumes = 100
3274+ FLAGS.quota_gigabytes = 10000
3275+
3276+ self.context = context.get_admin_context()
3277+
3278+ volume_types.create(self.context,
3279+ 'SATA_500_7200',
3280+ extra_specs={'type': 'vsa_drive',
3281+ 'drive_name': 'SATA_500_7200',
3282+ 'drive_type': 'SATA',
3283+ 'drive_size': '500',
3284+ 'drive_rpm': '7200'})
3285+
3286+ def fake_show_by_name(meh, context, name):
3287+ if name == 'wrong_image_name':
3288+ LOG.debug(_("Test: Emulate wrong VSA name. Raise"))
3289+ raise exception.ImageNotFound
3290+ return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
3291+
3292+ self.stubs.Set(nova.image.fake._FakeImageService,
3293+ 'show_by_name',
3294+ fake_show_by_name)
3295+
3296+ def tearDown(self):
3297+ self.stubs.UnsetAll()
3298+ super(VsaTestCase, self).tearDown()
3299+
3300+ def test_vsa_create_delete_defaults(self):
3301+ param = {'display_name': 'VSA name test'}
3302+ vsa_ref = self.vsa_api.create(self.context, **param)
3303+ self.assertEqual(vsa_ref['display_name'], param['display_name'])
3304+ self.vsa_api.delete(self.context, vsa_ref['id'])
3305+
3306+ def test_vsa_create_delete_check_in_db(self):
3307+ vsa_list1 = self.vsa_api.get_all(self.context)
3308+ vsa_ref = self.vsa_api.create(self.context)
3309+ vsa_list2 = self.vsa_api.get_all(self.context)
3310+ self.assertEqual(len(vsa_list2), len(vsa_list1) + 1)
3311+
3312+ self.vsa_api.delete(self.context, vsa_ref['id'])
3313+ vsa_list3 = self.vsa_api.get_all(self.context)
3314+ self.assertEqual(len(vsa_list3), len(vsa_list2) - 1)
3315+
3316+ def test_vsa_create_delete_high_vc_count(self):
3317+ param = {'vc_count': FLAGS.max_vcs_in_vsa + 1}
3318+ vsa_ref = self.vsa_api.create(self.context, **param)
3319+ self.assertEqual(vsa_ref['vc_count'], FLAGS.max_vcs_in_vsa)
3320+ self.vsa_api.delete(self.context, vsa_ref['id'])
3321+
3322+ def test_vsa_create_wrong_image_name(self):
3323+ param = {'image_name': 'wrong_image_name'}
3324+ self.assertRaises(exception.ApiError,
3325+ self.vsa_api.create, self.context, **param)
3326+
3327+ def test_vsa_create_db_error(self):
3328+
3329+ def fake_vsa_create(context, options):
3330+ LOG.debug(_("Test: Emulate DB error. Raise"))
3331+ raise exception.Error
3332+
3333+ self.stubs.Set(nova.db.api, 'vsa_create', fake_vsa_create)
3334+ self.assertRaises(exception.ApiError,
3335+ self.vsa_api.create, self.context)
3336+
3337+ def test_vsa_create_wrong_storage_params(self):
3338+ vsa_list1 = self.vsa_api.get_all(self.context)
3339+ param = {'storage': [{'stub': 1}]}
3340+ self.assertRaises(exception.ApiError,
3341+ self.vsa_api.create, self.context, **param)
3342+ vsa_list2 = self.vsa_api.get_all(self.context)
3343+ self.assertEqual(len(vsa_list2), len(vsa_list1))
3344+
3345+ param = {'storage': [{'drive_name': 'wrong name'}]}
3346+ self.assertRaises(exception.ApiError,
3347+ self.vsa_api.create, self.context, **param)
3348+
3349+ def test_vsa_create_with_storage(self, multi_vol_creation=True):
3350+ """Test creation of VSA with back-end storage."""
3351+
3352+ FLAGS.vsa_multi_vol_creation = multi_vol_creation
3353+
3354+ param = {'storage': [{'drive_name': 'SATA_500_7200',
3355+ 'num_drives': 3}]}
3356+ vsa_ref = self.vsa_api.create(self.context, **param)
3357+ self.assertEqual(vsa_ref['vol_count'], 3)
3358+ self.vsa_api.delete(self.context, vsa_ref['id'])
3359+
3360+ param = {'storage': [{'drive_name': 'SATA_500_7200',
3361+ 'num_drives': 3}],
3362+ 'shared': True}
3363+ vsa_ref = self.vsa_api.create(self.context, **param)
3364+ self.assertEqual(vsa_ref['vol_count'], 15)
3365+ self.vsa_api.delete(self.context, vsa_ref['id'])
3366+
3367+ def test_vsa_create_with_storage_single_volumes(self):
3368+ self.test_vsa_create_with_storage(multi_vol_creation=False)
3369+
3370+ def test_vsa_update(self):
3371+ vsa_ref = self.vsa_api.create(self.context)
3372+
3373+ param = {'vc_count': FLAGS.max_vcs_in_vsa + 1}
3374+ vsa_ref = self.vsa_api.update(self.context, vsa_ref['id'], **param)
3375+ self.assertEqual(vsa_ref['vc_count'], FLAGS.max_vcs_in_vsa)
3376+
3377+ param = {'vc_count': 2}
3378+ vsa_ref = self.vsa_api.update(self.context, vsa_ref['id'], **param)
3379+ self.assertEqual(vsa_ref['vc_count'], 2)
3380+
3381+ self.vsa_api.delete(self.context, vsa_ref['id'])
3382+
3383+ def test_vsa_generate_user_data(self):
3384+
3385+ FLAGS.vsa_multi_vol_creation = False
3386+ param = {'display_name': 'VSA name test',
3387+ 'display_description': 'VSA desc test',
3388+ 'vc_count': 2,
3389+ 'storage': [{'drive_name': 'SATA_500_7200',
3390+ 'num_drives': 3}]}
3391+ vsa_ref = self.vsa_api.create(self.context, **param)
3392+ volumes = self.vsa_api.get_all_vsa_drives(self.context,
3393+ vsa_ref['id'])
3394+
3395+ user_data = vsa_utils.generate_user_data(vsa_ref, volumes)
3396+ user_data = base64.b64decode(user_data)
3397+
3398+ LOG.debug(_("Test: user_data = %s"), user_data)
3399+
3400+ elem = ElementTree.fromstring(user_data)
3401+ self.assertEqual(elem.findtext('name'),
3402+ param['display_name'])
3403+ self.assertEqual(elem.findtext('description'),
3404+ param['display_description'])
3405+ self.assertEqual(elem.findtext('vc_count'),
3406+ str(param['vc_count']))
3407+
3408+ self.vsa_api.delete(self.context, vsa_ref['id'])
3409
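Review note: `test_vsa_generate_user_data` round-trips the VSA description through base64-encoded XML. A standalone sketch of the same decode-and-check pattern — the XML payload below is hypothetical, shaped after the three fields the test asserts on (`name`, `description`, `vc_count`):

```python
import base64
from xml.etree import ElementTree

# Hypothetical user_data payload, shaped after the fields the test checks.
xml = (b"<vsa><name>VSA name test</name>"
       b"<description>VSA desc test</description>"
       b"<vc_count>2</vc_count></vsa>")
user_data = base64.b64encode(xml)

# Decode and verify, as the test does after generate_user_data().
elem = ElementTree.fromstring(base64.b64decode(user_data))
assert elem.findtext('name') == 'VSA name test'
assert elem.findtext('vc_count') == '2'
print(elem.findtext('description'))  # -> VSA desc test
```

Note that `findtext` always returns a string, which is why the test compares `vc_count` against `str(param['vc_count'])` rather than the integer.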
3410=== added file 'nova/tests/test_vsa_volumes.py'
3411--- nova/tests/test_vsa_volumes.py 1970-01-01 00:00:00 +0000
3412+++ nova/tests/test_vsa_volumes.py 2011-08-26 22:09:26 +0000
3413@@ -0,0 +1,136 @@
3414+# Copyright 2011 OpenStack LLC.
3415+# All Rights Reserved.
3416+#
3417+# Licensed under the Apache License, Version 2.0 (the "License"); you may
3418+# not use this file except in compliance with the License. You may obtain
3419+# a copy of the License at
3420+#
3421+# http://www.apache.org/licenses/LICENSE-2.0
3422+#
3423+# Unless required by applicable law or agreed to in writing, software
3424+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
3425+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
3426+# License for the specific language governing permissions and limitations
3427+# under the License.
3428+
3429+import stubout
3430+
3431+from nova import exception
3432+from nova import flags
3433+from nova import vsa
3434+from nova import volume
3435+from nova import db
3436+from nova import context
3437+from nova import test
3438+from nova import log as logging
3439+import nova.image.fake
3440+
3441+FLAGS = flags.FLAGS
3442+LOG = logging.getLogger('nova.tests.vsa.volumes')
3443+
3444+
3445+class VsaVolumesTestCase(test.TestCase):
3446+
3447+ def setUp(self):
3448+ super(VsaVolumesTestCase, self).setUp()
3449+ self.stubs = stubout.StubOutForTesting()
3450+ self.vsa_api = vsa.API()
3451+ self.volume_api = volume.API()
3452+ self.context = context.get_admin_context()
3453+
3454+ self.default_vol_type = self.vsa_api.get_vsa_volume_type(self.context)
3455+
3456+ def fake_show_by_name(meh, context, name):
3457+ return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
3458+
3459+ self.stubs.Set(nova.image.fake._FakeImageService,
3460+ 'show_by_name',
3461+ fake_show_by_name)
3462+
3463+ param = {'display_name': 'VSA name test'}
3464+ vsa_ref = self.vsa_api.create(self.context, **param)
3465+ self.vsa_id = vsa_ref['id']
3466+
3467+ def tearDown(self):
3468+ if self.vsa_id:
3469+ self.vsa_api.delete(self.context, self.vsa_id)
3470+ self.stubs.UnsetAll()
3471+ super(VsaVolumesTestCase, self).tearDown()
3472+
3473+ def _default_volume_param(self):
3474+ return {
3475+ 'size': 1,
3476+ 'snapshot_id': None,
3477+ 'name': 'Test volume name',
3478+ 'description': 'Test volume desc name',
3479+ 'volume_type': self.default_vol_type,
3480+ 'metadata': {'from_vsa_id': self.vsa_id}
3481+ }
3482+
3483+ def _get_all_volumes_by_vsa(self):
3484+ return self.volume_api.get_all(self.context,
3485+ search_opts={'metadata': {"from_vsa_id": str(self.vsa_id)}})
3486+
3487+ def test_vsa_volume_create_delete(self):
3488+ """Check if volume is properly created and deleted."""
3489+ volume_param = self._default_volume_param()
3490+ volume_ref = self.volume_api.create(self.context, **volume_param)
3491+
3492+ self.assertEqual(volume_ref['display_name'],
3493+ volume_param['name'])
3494+ self.assertEqual(volume_ref['display_description'],
3495+ volume_param['description'])
3496+ self.assertEqual(volume_ref['size'],
3497+ volume_param['size'])
3498+ self.assertEqual(volume_ref['status'],
3499+ 'creating')
3500+
3501+ vols2 = self._get_all_volumes_by_vsa()
3502+ self.assertEqual(1, len(vols2))
3503+ volume_ref = vols2[0]
3504+
3505+ self.assertEqual(volume_ref['display_name'],
3506+ volume_param['name'])
3507+ self.assertEqual(volume_ref['display_description'],
3508+ volume_param['description'])
3509+ self.assertEqual(volume_ref['size'],
3510+ volume_param['size'])
3511+ self.assertEqual(volume_ref['status'],
3512+ 'creating')
3513+
3514+ self.volume_api.update(self.context,
3515+ volume_ref['id'], {'status': 'available'})
3516+ self.volume_api.delete(self.context, volume_ref['id'])
3517+
3518+ vols3 = self._get_all_volumes_by_vsa()
3519+ self.assertEqual(1, len(vols3))
3520+ volume_ref = vols3[0]
3521+ self.assertEqual(volume_ref['status'],
3522+ 'deleting')
3523+
3524+ def test_vsa_volume_delete_nonavail_volume(self):
3525+ """Check volume deletion in a non-available state."""
3526+ volume_param = self._default_volume_param()
3527+ volume_ref = self.volume_api.create(self.context, **volume_param)
3528+
3529+ self.volume_api.update(self.context,
3530+ volume_ref['id'], {'status': 'in-use'})
3531+ self.assertRaises(exception.ApiError,
3532+ self.volume_api.delete,
3533+ self.context, volume_ref['id'])
3534+
3535+ def test_vsa_volume_delete_vsa_with_volumes(self):
3536+ """Check VSA deletion when it still has volumes."""
3537+
3538+ vols1 = self._get_all_volumes_by_vsa()
3539+ for i in range(3):
3540+ volume_param = self._default_volume_param()
3541+ volume_ref = self.volume_api.create(self.context, **volume_param)
3542+
3543+ vols2 = self._get_all_volumes_by_vsa()
3544+ self.assertEqual(len(vols1) + 3, len(vols2))
3545+
3546+ self.vsa_api.delete(self.context, self.vsa_id)
3547+
3548+ vols3 = self._get_all_volumes_by_vsa()
3549+ self.assertEqual(len(vols1), len(vols3))
3550
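These tests tie volumes to their VSA by writing a `from_vsa_id` metadata key at create time and filtering on it via `search_opts`. A minimal sketch of that lookup over plain dicts (hypothetical records standing in for the real DB models):

```python
def volumes_for_vsa(volumes, vsa_id):
    """Filter volume records whose metadata ties them to a VSA.

    Mirrors the search_opts={'metadata': {'from_vsa_id': ...}} lookup
    used by _get_all_volumes_by_vsa; `volumes` is a list of dicts here.
    """
    wanted = str(vsa_id)
    return [v for v in volumes
            if v.get('metadata', {}).get('from_vsa_id') == wanted]

vols = [
    {'name': 'vol-1', 'metadata': {'from_vsa_id': '7'}},
    {'name': 'vol-2', 'metadata': {'from_vsa_id': '8'}},
    {'name': 'vol-3', 'metadata': {}},
]
assert [v['name'] for v in volumes_for_vsa(vols, 7)] == ['vol-1']
```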
3551=== modified file 'nova/virt/libvirt.xml.template'
3552--- nova/virt/libvirt.xml.template 2011-08-12 21:23:10 +0000
3553+++ nova/virt/libvirt.xml.template 2011-08-26 22:09:26 +0000
3554@@ -135,7 +135,9 @@
3555 <interface type='bridge'>
3556 <source bridge='${nic.bridge_name}'/>
3557 <mac address='${nic.mac_address}'/>
3558- <!-- <model type='virtio'/> CANT RUN virtio network right now -->
3559+#if $getVar('use_virtio_for_bridges', True)
3560+ <model type='virtio'/>
3561+#end if
3562 <filterref filter="nova-instance-${name}-${nic.id}">
3563 <parameter name="IP" value="${nic.ip_address}" />
3564 <parameter name="DHCPSERVER" value="${nic.dhcp_server}" />
3565
3566=== modified file 'nova/virt/libvirt/connection.py'
3567--- nova/virt/libvirt/connection.py 2011-08-23 05:17:51 +0000
3568+++ nova/virt/libvirt/connection.py 2011-08-26 22:09:26 +0000
3569@@ -135,6 +135,9 @@
3570 None,
3571 'The default format a local_volume will be formatted with '
3572 'on creation.')
3573+flags.DEFINE_bool('libvirt_use_virtio_for_bridges',
3574+ False,
3575+ 'Use virtio for bridge interfaces')
3576
3577
3578 def get_connection(read_only):
3579@@ -1083,6 +1086,8 @@
3580 'ebs_root': ebs_root,
3581 'local_device': local_device,
3582 'volumes': block_device_mapping,
3583+ 'use_virtio_for_bridges':
3584+ FLAGS.libvirt_use_virtio_for_bridges,
3585 'ephemerals': ephemerals}
3586
3587 root_device_name = driver.block_device_info_get_root(block_device_info)
3588
3589=== modified file 'nova/volume/api.py'
3590--- nova/volume/api.py 2011-08-24 03:22:27 +0000
3591+++ nova/volume/api.py 2011-08-26 22:09:26 +0000
3592@@ -132,7 +132,7 @@
3593 for i in volume.get('volume_metadata'):
3594 volume_metadata[i['key']] = i['value']
3595
3596- for k, v in searchdict:
3597+ for k, v in searchdict.iteritems():
3598 if k not in volume_metadata.keys()\
3599 or volume_metadata[k] != v:
3600 return False
3601@@ -141,6 +141,7 @@
3602 # search_option to filter_name mapping.
3603 filter_mapping = {'metadata': _check_metadata_match}
3604
3605+ result = []
3606 for volume in volumes:
3607 # go over all filters in the list
3608 for opt, values in search_opts.iteritems():
3609@@ -150,10 +151,10 @@
3610 # no such filter - ignore it, go to next filter
3611 continue
3612 else:
3613- if filter_func(volume, values) == False:
3614- # if one of conditions didn't match - remove
3615- volumes.remove(volume)
3616+ if filter_func(volume, values):
3617+ result.append(volume)
3618 break
3619+ volumes = result
3620 return volumes
3621
3622 def get_snapshot(self, context, snapshot_id):
3623@@ -255,3 +256,12 @@
3624
3625 self.db.volume_metadata_update(context, volume_id, _metadata, True)
3626 return _metadata
3627+
3628+ def get_volume_metadata_value(self, volume, key):
3629+ """Get value of particular metadata key."""
3630+ metadata = volume.get('volume_metadata')
3631+ if metadata:
3632+ for i in volume['volume_metadata']:
3633+ if i['key'] == key:
3634+ return i['value']
3635+ return None
3636
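The `volume/api.py` hunk above fixes a classic pitfall: calling `list.remove()` while iterating the same list skips the element that slides into the removed slot. A small demonstration of the failure mode and the accumulate-into-a-new-list fix:

```python
def filter_buggy(items, keep):
    # removing from a list while iterating it skips the next element
    for item in items:
        if not keep(item):
            items.remove(item)
    return items

def filter_fixed(items, keep):
    # build a new list instead of mutating the one being iterated
    return [item for item in items if keep(item)]

data = [1, 2, 3, 4]
# drop everything: the buggy version leaves survivors behind
assert filter_buggy(list(data), lambda x: False) == [2, 4]
assert filter_fixed(list(data), lambda x: False) == []
```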
3637=== modified file 'nova/volume/driver.py'
3638--- nova/volume/driver.py 2011-08-25 22:22:51 +0000
3639+++ nova/volume/driver.py 2011-08-26 22:09:26 +0000
3640@@ -22,11 +22,13 @@
3641
3642 import time
3643 import os
3644+from xml.etree import ElementTree
3645
3646 from nova import exception
3647 from nova import flags
3648 from nova import log as logging
3649 from nova import utils
3650+from nova.volume import volume_types
3651
3652
3653 LOG = logging.getLogger("nova.volume.driver")
3654@@ -212,6 +214,11 @@
3655 """Make sure volume is exported."""
3656 raise NotImplementedError()
3657
3658+ def get_volume_stats(self, refresh=False):
3659+ """Return the current state of the volume service. If 'refresh' is
3660+ True, run the update first."""
3661+ return None
3662+
3663
3664 class AOEDriver(VolumeDriver):
3665 """Implements AOE specific volume commands."""
3666@@ -802,3 +809,268 @@
3667 if match:
3668 matches.append(entry)
3669 return matches
3670+
3671+
3672+class ZadaraBEDriver(ISCSIDriver):
3673+ """Performs actions to configure Zadara BE module."""
3674+
3675+ def _is_vsa_volume(self, volume):
3676+ return volume_types.is_vsa_volume(volume['volume_type_id'])
3677+
3678+ def _is_vsa_drive(self, volume):
3679+ return volume_types.is_vsa_drive(volume['volume_type_id'])
3680+
3681+ def _not_vsa_volume_or_drive(self, volume):
3682+ """Returns True if volume is not VSA BE volume."""
3683+ if not volume_types.is_vsa_object(volume['volume_type_id']):
3684+ LOG.debug(_("\tVolume %s is NOT VSA volume"), volume['name'])
3685+ return True
3686+ else:
3687+ return False
3688+
3689+ def check_for_setup_error(self):
3690+ """No setup necessary for Zadara BE."""
3691+ pass
3692+
3693+ # Volume Driver methods
3694+ def create_volume(self, volume):
3695+ """Creates BE volume."""
3696+ if self._not_vsa_volume_or_drive(volume):
3697+ return super(ZadaraBEDriver, self).create_volume(volume)
3698+
3699+ if self._is_vsa_volume(volume):
3700+ LOG.debug(_("\tFE VSA Volume %s creation - do nothing"),
3701+ volume['name'])
3702+ return
3703+
3704+ if int(volume['size']) == 0:
3705+ sizestr = '0' # indicates full-partition
3706+ else:
3707+ sizestr = '%s' % (int(volume['size']) << 30) # size in bytes
3708+
3709+ # Set the qos-str to default type sas
3710+ qosstr = 'SAS_1000'
3711+ volume_type = volume_types.get_volume_type(None,
3712+ volume['volume_type_id'])
3713+ if volume_type is not None:
3714+ qosstr = volume_type['extra_specs']['drive_type'] + \
3715+ ("_%s" % volume_type['extra_specs']['drive_size'])
3716+
3717+ vsa_id = None
3718+ for i in volume.get('volume_metadata'):
3719+ if i['key'] == 'to_vsa_id':
3720+ vsa_id = i['value']
3721+ break
3722+
3723+ try:
3724+ self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
3725+ 'create_qospart',
3726+ '--qos', qosstr,
3727+ '--pname', volume['name'],
3728+ '--psize', sizestr,
3729+ '--vsaid', vsa_id,
3730+ run_as_root=True,
3731+ check_exit_code=0)
3732+ except exception.ProcessExecutionError:
3733+ LOG.debug(_("VSA BE create_volume for %s failed"), volume['name'])
3734+ raise
3735+
3736+ LOG.debug(_("VSA BE create_volume for %s succeeded"), volume['name'])
3737+
3738+ def delete_volume(self, volume):
3739+ """Deletes BE volume."""
3740+ if self._not_vsa_volume_or_drive(volume):
3741+ return super(ZadaraBEDriver, self).delete_volume(volume)
3742+
3743+ if self._is_vsa_volume(volume):
3744+ LOG.debug(_("\tFE VSA Volume %s deletion - do nothing"),
3745+ volume['name'])
3746+ return
3747+
3748+ try:
3749+ self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
3750+ 'delete_partition',
3751+ '--pname', volume['name'],
3752+ run_as_root=True,
3753+ check_exit_code=0)
3754+ except exception.ProcessExecutionError:
3755+ LOG.debug(_("VSA BE delete_volume for %s failed"), volume['name'])
3756+ return
3757+
3758+ LOG.debug(_("VSA BE delete_volume for %s succeeded"), volume['name'])
3759+
3760+ def local_path(self, volume):
3761+ if self._not_vsa_volume_or_drive(volume):
3762+ return super(ZadaraBEDriver, self).local_path(volume)
3763+
3764+ if self._is_vsa_volume(volume):
3765+ LOG.debug(_("\tFE VSA Volume %s local path call - call discover"),
3766+ volume['name'])
3767+ return super(ZadaraBEDriver, self).discover_volume(None, volume)
3768+
3769+ raise exception.Error(_("local_path not supported"))
3770+
3771+ def ensure_export(self, context, volume):
3772+ """ensure BE export for a volume"""
3773+ if self._not_vsa_volume_or_drive(volume):
3774+ return super(ZadaraBEDriver, self).ensure_export(context, volume)
3775+
3776+ if self._is_vsa_volume(volume):
3777+ LOG.debug(_("\tFE VSA Volume %s ensure export - do nothing"),
3778+ volume['name'])
3779+ return
3780+
3781+ try:
3782+ iscsi_target = self.db.volume_get_iscsi_target_num(context,
3783+ volume['id'])
3784+ except exception.NotFound:
3785+ LOG.info(_("Skipping ensure_export. No iscsi_target " +
3786+ "provisioned for volume: %d"), volume['id'])
3787+ return
3788+
3789+ try:
3790+ ret = self._common_be_export(context, volume, iscsi_target)
3791+ except exception.ProcessExecutionError:
3792+ return
3793+ return ret
3794+
3795+ def create_export(self, context, volume):
3796+ """create BE export for a volume"""
3797+ if self._not_vsa_volume_or_drive(volume):
3798+ return super(ZadaraBEDriver, self).create_export(context, volume)
3799+
3800+ if self._is_vsa_volume(volume):
3801+ LOG.debug(_("\tFE VSA Volume %s create export - do nothing"),
3802+ volume['name'])
3803+ return
3804+
3805+ self._ensure_iscsi_targets(context, volume['host'])
3806+ iscsi_target = self.db.volume_allocate_iscsi_target(context,
3807+ volume['id'],
3808+ volume['host'])
3809+ try:
3810+ ret = self._common_be_export(context, volume, iscsi_target)
3811+ except Exception:
3812+ raise exception.ProcessExecutionError()
3813+ return ret
3814+
3815+ def remove_export(self, context, volume):
3816+ """Removes BE export for a volume."""
3817+ if self._not_vsa_volume_or_drive(volume):
3818+ return super(ZadaraBEDriver, self).remove_export(context, volume)
3819+
3820+ if self._is_vsa_volume(volume):
3821+ LOG.debug(_("\tFE VSA Volume %s remove export - do nothing"),
3822+ volume['name'])
3823+ return
3824+
3825+ try:
3826+ iscsi_target = self.db.volume_get_iscsi_target_num(context,
3827+ volume['id'])
3828+ except exception.NotFound:
3829+ LOG.info(_("Skipping remove_export. No iscsi_target " +
3830+ "provisioned for volume: %d"), volume['id'])
3831+ return
3832+
3833+ try:
3834+ self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
3835+ 'remove_export',
3836+ '--pname', volume['name'],
3837+ '--tid', iscsi_target,
3838+ run_as_root=True,
3839+ check_exit_code=0)
3840+ except exception.ProcessExecutionError:
3841+ LOG.debug(_("VSA BE remove_export for %s failed"), volume['name'])
3842+ return
3843+
3844+ def create_snapshot(self, snapshot):
3845+ """Nothing required for snapshot"""
3846+ if self._not_vsa_volume_or_drive(snapshot):
3847+ return super(ZadaraBEDriver, self).create_snapshot(snapshot)
3848+
3849+ pass
3850+
3851+ def delete_snapshot(self, snapshot):
3852+ """Nothing required to delete a snapshot"""
3853+ if self._not_vsa_volume_or_drive(snapshot):
3854+ return super(ZadaraBEDriver, self).delete_snapshot(snapshot)
3855+
3856+ pass
3857+
3858+ # Internal BE Volume methods
3859+ def _common_be_export(self, context, volume, iscsi_target):
3860+ """
3861+ Common logic that asks zadara_sncfg to setup iSCSI target/lun for
3862+ this volume
3863+ """
3864+ (out, err) = self._sync_exec(
3865+ '/var/lib/zadara/bin/zadara_sncfg',
3866+ 'create_export',
3867+ '--pname', volume['name'],
3868+ '--tid', iscsi_target,
3869+ run_as_root=True,
3870+ check_exit_code=0)
3871+
3872+ result_xml = ElementTree.fromstring(out)
3873+ response_node = result_xml.find("Sn")
3874+ if response_node is None:
3875+ msg = _("Malformed response from zadara_sncfg")
3876+ raise exception.Error(msg)
3877+
3878+ sn_ip = response_node.findtext("SnIp")
3879+ sn_iqn = response_node.findtext("IqnName")
3880+ iscsi_portal = sn_ip + ":3260," + ("%s" % iscsi_target)
3881+
3882+ model_update = {}
3883+ model_update['provider_location'] = ("%s %s" %
3884+ (iscsi_portal,
3885+ sn_iqn))
3886+ return model_update
3887+
3888+ def _get_qosgroup_summary(self):
3889+ """gets the list of qosgroups from Zadara BE"""
3890+ try:
3891+ (out, err) = self._sync_exec(
3892+ '/var/lib/zadara/bin/zadara_sncfg',
3893+ 'get_qosgroups_xml',
3894+ run_as_root=True,
3895+ check_exit_code=0)
3896+ except exception.ProcessExecutionError:
3897+ LOG.debug(_("Failed to retrieve QoS info"))
3898+ return {}
3899+
3900+ qos_groups = {}
3901+ result_xml = ElementTree.fromstring(out)
3902+ for element in result_xml.findall('QosGroup'):
3903+ qos_group = {}
3904+ # get the name of the group.
3905+ # If we cannot find it, forget this element
3906+ group_name = element.findtext("Name")
3907+ if not group_name:
3908+ continue
3909+
3910+ # loop through all child nodes & fill up attributes of this group
3911+ for child in element.getchildren():
3912+ # two types of elements - property of qos-group & sub property
3913+ # classify them accordingly
3914+ if child.text:
3915+ qos_group[child.tag] = int(child.text) \
3916+ if child.text.isdigit() else child.text
3917+ else:
3918+ subelement = {}
3919+ for subchild in child.getchildren():
3920+ subelement[subchild.tag] = int(subchild.text) \
3921+ if subchild.text.isdigit() else subchild.text
3922+ qos_group[child.tag] = subelement
3923+
3924+ # Now add this group to the master qos_groups
3925+ qos_groups[group_name] = qos_group
3926+
3927+ return qos_groups
3928+
3929+ def get_volume_stats(self, refresh=False):
3930+ """Return the current state of the volume service. If 'refresh' is
3931+ True, run the update first."""
3932+
3933+ drive_info = self._get_qosgroup_summary()
3934+ return {'drive_qos_info': drive_info}
3935
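The `_get_qosgroup_summary` method above walks zadara_sncfg's XML output with ElementTree. A standalone sketch of the same parsing logic, run against a made-up sample document (the real zadara_sncfg schema is not shown in this branch, so the element names below beyond `QosGroup`/`Name` are illustrative):

```python
from xml.etree import ElementTree

# Hypothetical zadara_sncfg-style output; the real schema may differ.
SAMPLE = """
<Response>
  <QosGroup>
    <Name>SAS_1000</Name>
    <AvailableCapacity>500</AvailableCapacity>
    <Drive>
      <Type>SAS</Type>
      <Size>1000</Size>
    </Drive>
  </QosGroup>
</Response>
"""

def parse_qos_groups(xml_text):
    """Flatten <QosGroup> elements into nested dicts, ints where possible."""
    groups = {}
    for element in ElementTree.fromstring(xml_text).findall('QosGroup'):
        name = element.findtext('Name')
        if not name:
            continue  # no group name - forget this element
        group = {}
        for child in element:
            if child.text and child.text.strip():
                # plain property of the qos-group
                text = child.text.strip()
                group[child.tag] = int(text) if text.isdigit() else text
            else:
                # container element: collect its sub-properties
                group[child.tag] = {
                    sub.tag: int(sub.text) if sub.text.isdigit() else sub.text
                    for sub in child}
        groups[name] = group
    return groups

groups = parse_qos_groups(SAMPLE)
assert groups['SAS_1000']['AvailableCapacity'] == 500
assert groups['SAS_1000']['Drive'] == {'Type': 'SAS', 'Size': 1000}
```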
3936=== modified file 'nova/volume/manager.py'
3937--- nova/volume/manager.py 2011-06-02 21:23:05 +0000
3938+++ nova/volume/manager.py 2011-08-26 22:09:26 +0000
3939@@ -48,7 +48,9 @@
3940 from nova import flags
3941 from nova import log as logging
3942 from nova import manager
3943+from nova import rpc
3944 from nova import utils
3945+from nova.volume import volume_types
3946
3947
3948 LOG = logging.getLogger('nova.volume.manager')
3949@@ -60,6 +62,8 @@
3950 'Driver to use for volume creation')
3951 flags.DEFINE_boolean('use_local_volumes', True,
3952 'if True, will not discover local volumes')
3953+flags.DEFINE_boolean('volume_force_update_capabilities', False,
3954+ 'if True will force update capabilities on each check')
3955
3956
3957 class VolumeManager(manager.SchedulerDependentManager):
3958@@ -74,6 +78,7 @@
3959 # NOTE(vish): Implementation specific db handling is done
3960 # by the driver.
3961 self.driver.db = self.db
3962+ self._last_volume_stats = []
3963
3964 def init_host(self):
3965 """Do any initialization that needs to be run if this is a
3966@@ -123,6 +128,7 @@
3967 except Exception:
3968 self.db.volume_update(context,
3969 volume_ref['id'], {'status': 'error'})
3970+ self._notify_vsa(context, volume_ref, 'error')
3971 raise
3972
3973 now = utils.utcnow()
3974@@ -130,8 +136,29 @@
3975 volume_ref['id'], {'status': 'available',
3976 'launched_at': now})
3977 LOG.debug(_("volume %s: created successfully"), volume_ref['name'])
3978+ self._notify_vsa(context, volume_ref, 'available')
3979+ self._reset_stats()
3980 return volume_id
3981
3982+ def _notify_vsa(self, context, volume_ref, status):
3983+ if volume_ref['volume_type_id'] is None:
3984+ return
3985+
3986+ if volume_types.is_vsa_drive(volume_ref['volume_type_id']):
3987+ vsa_id = None
3988+ for i in volume_ref.get('volume_metadata'):
3989+ if i['key'] == 'to_vsa_id':
3990+ vsa_id = int(i['value'])
3991+ break
3992+
3993+ if vsa_id:
3994+ rpc.cast(context,
3995+ FLAGS.vsa_topic,
3996+ {"method": "vsa_volume_created",
3997+ "args": {"vol_id": volume_ref['id'],
3998+ "vsa_id": vsa_id,
3999+ "status": status}})
4000+
4001 def delete_volume(self, context, volume_id):
4002 """Deletes and unexports volume."""
4003 context = context.elevated()
4004@@ -141,6 +168,7 @@
4005 if volume_ref['host'] != self.host:
4006 raise exception.Error(_("Volume is not local to this node"))
4007
4008+ self._reset_stats()
4009 try:
4010 LOG.debug(_("volume %s: removing export"), volume_ref['name'])
4011 self.driver.remove_export(context, volume_ref)
4012@@ -231,3 +259,53 @@
4013 instance_ref = self.db.instance_get(context, instance_id)
4014 for volume in instance_ref['volumes']:
4015 self.driver.check_for_export(context, volume['id'])
4016+
4017+ def periodic_tasks(self, context=None):
4018+ """Tasks to be run at a periodic interval."""
4019+
4020+ error_list = []
4021+ try:
4022+ self._report_driver_status()
4023+ except Exception as ex:
4024+ LOG.warning(_("Error during report_driver_status(): %s"),
4025+ unicode(ex))
4026+ error_list.append(ex)
4027+
4028+ super(VolumeManager, self).periodic_tasks(context)
4029+
4030+ return error_list
4031+
4032+ def _volume_stats_changed(self, stat1, stat2):
4033+ if FLAGS.volume_force_update_capabilities:
4034+ return True
4035+ if len(stat1) != len(stat2):
4036+ return True
4037+ for (k, v) in stat1.iteritems():
4038+ if (k, v) not in stat2.iteritems():
4039+ return True
4040+ return False
4041+
4042+ def _report_driver_status(self):
4043+ volume_stats = self.driver.get_volume_stats(refresh=True)
4044+ if volume_stats:
4045+ LOG.info(_("Checking volume capabilities"))
4046+
4047+ if self._volume_stats_changed(self._last_volume_stats,
4048+ volume_stats):
4049+ LOG.info(_("New capabilities found: %s"), volume_stats)
4050+ self._last_volume_stats = volume_stats
4051+
4052+ # This will grab info about the host and queue it
4053+ # to be sent to the Schedulers.
4054+ self.update_service_capabilities(self._last_volume_stats)
4055+ else:
4056+ # avoid repeating fanouts
4057+ self.update_service_capabilities(None)
4058+
4059+ def _reset_stats(self):
4060+ LOG.info(_("Clear capabilities"))
4061+ self._last_volume_stats = []
4062+
4063+ def notification(self, context, event):
4064+ LOG.info(_("Notification {%s} received"), event)
4065+ self._reset_stats()
4066
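`_volume_stats_changed` above decides whether the manager should re-publish capabilities to the schedulers. A standalone sketch of the same predicate (using `dict.get` for the membership test, which differs from the original only when a stored value is literally `None`):

```python
def volume_stats_changed(old, new, force=False):
    """True when capabilities need to be re-broadcast to the schedulers.

    Mirrors VolumeManager._volume_stats_changed: a forced update, a size
    difference, or any (key, value) pair present in `old` but not in
    `new` counts as a change.
    """
    if force:  # stands in for FLAGS.volume_force_update_capabilities
        return True
    if len(old) != len(new):
        return True
    return any(new.get(key) != value for key, value in old.items())

assert not volume_stats_changed({'a': 1}, {'a': 1})
assert volume_stats_changed({'a': 1}, {'a': 2})
assert volume_stats_changed({'a': 1}, {'a': 1, 'b': 2})
assert volume_stats_changed({'a': 1}, {'a': 1}, force=True)
```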
4067=== modified file 'nova/volume/volume_types.py'
4068--- nova/volume/volume_types.py 2011-08-24 17:16:20 +0000
4069+++ nova/volume/volume_types.py 2011-08-26 22:09:26 +0000
4070@@ -100,20 +100,22 @@
4071 continue
4072 else:
4073 if filter_func(type_args, values):
4074- # if one of conditions didn't match - remove
4075 result[type_name] = type_args
4076 break
4077 vol_types = result
4078 return vol_types
4079
4080
4081-def get_volume_type(context, id):
4082+def get_volume_type(ctxt, id):
4083 """Retrieves single volume type by id."""
4084 if id is None:
4085 raise exception.InvalidVolumeType(volume_type=id)
4086
4087+ if ctxt is None:
4088+ ctxt = context.get_admin_context()
4089+
4090 try:
4091- return db.volume_type_get(context, id)
4092+ return db.volume_type_get(ctxt, id)
4093 except exception.DBError:
4094 raise exception.ApiError(_("Unknown volume type: %s") % id)
4095
4096@@ -127,3 +129,38 @@
4097 return db.volume_type_get_by_name(context, name)
4098 except exception.DBError:
4099 raise exception.ApiError(_("Unknown volume type: %s") % name)
4100+
4101+
4102+def is_key_value_present(volume_type_id, key, value, volume_type=None):
4103+ if volume_type_id is None:
4104+ return False
4105+
4106+ if volume_type is None:
4107+ volume_type = get_volume_type(context.get_admin_context(),
4108+ volume_type_id)
4109+ if volume_type.get('extra_specs') is None or\
4110+ volume_type['extra_specs'].get(key) != value:
4111+ return False
4112+ else:
4113+ return True
4114+
4115+
4116+def is_vsa_drive(volume_type_id, volume_type=None):
4117+ return is_key_value_present(volume_type_id,
4118+ 'type', 'vsa_drive', volume_type)
4119+
4120+
4121+def is_vsa_volume(volume_type_id, volume_type=None):
4122+ return is_key_value_present(volume_type_id,
4123+ 'type', 'vsa_volume', volume_type)
4124+
4125+
4126+def is_vsa_object(volume_type_id):
4127+ if volume_type_id is None:
4128+ return False
4129+
4130+ volume_type = get_volume_type(context.get_admin_context(),
4131+ volume_type_id)
4132+
4133+ return is_vsa_drive(volume_type_id, volume_type) or\
4134+ is_vsa_volume(volume_type_id, volume_type)
4135
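The new `is_key_value_present` helper reduces the `is_vsa_*` checks to a single extra_specs lookup. A simplified sketch that takes the volume_type dict directly instead of looking it up in the DB:

```python
def is_key_value_present(volume_type, key, value):
    """Check an extra_specs key against an expected value.

    Simplified from volume_types.is_key_value_present: the caller passes
    the volume_type dict here rather than a volume_type_id.
    """
    if not volume_type:
        return False
    extra_specs = volume_type.get('extra_specs')
    return bool(extra_specs) and extra_specs.get(key) == value

vsa_drive = {'extra_specs': {'type': 'vsa_drive', 'drive_type': 'SAS'}}
plain = {'extra_specs': {}}

assert is_key_value_present(vsa_drive, 'type', 'vsa_drive')
assert not is_key_value_present(plain, 'type', 'vsa_drive')
assert not is_key_value_present(None, 'type', 'vsa_drive')
```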
4136=== added directory 'nova/vsa'
4137=== added file 'nova/vsa/__init__.py'
4138--- nova/vsa/__init__.py 1970-01-01 00:00:00 +0000
4139+++ nova/vsa/__init__.py 2011-08-26 22:09:26 +0000
4140@@ -0,0 +1,18 @@
4141+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4142+
4143+# Copyright (c) 2011 Zadara Storage Inc.
4144+# Copyright (c) 2011 OpenStack LLC.
4145+#
4146+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4147+# not use this file except in compliance with the License. You may obtain
4148+# a copy of the License at
4149+#
4150+# http://www.apache.org/licenses/LICENSE-2.0
4151+#
4152+# Unless required by applicable law or agreed to in writing, software
4153+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4154+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4155+# License for the specific language governing permissions and limitations
4156+# under the License.
4157+
4158+from nova.vsa.api import API
4159
4160=== added file 'nova/vsa/api.py'
4161--- nova/vsa/api.py 1970-01-01 00:00:00 +0000
4162+++ nova/vsa/api.py 2011-08-26 22:09:26 +0000
4163@@ -0,0 +1,411 @@
4164+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4165+
4166+# Copyright (c) 2011 Zadara Storage Inc.
4167+# Copyright (c) 2011 OpenStack LLC.
4168+#
4169+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4170+# not use this file except in compliance with the License. You may obtain
4171+# a copy of the License at
4172+#
4173+# http://www.apache.org/licenses/LICENSE-2.0
4174+#
4175+# Unless required by applicable law or agreed to in writing, software
4176+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4177+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4178+# License for the specific language governing permissions and limitations
4179+# under the License.
4180+
4181+"""
4182+Handles all requests relating to Virtual Storage Arrays (VSAs).
4183+
4184+Experimental code. Requires a special VSA image.
4185+For assistance and guidelines please contact
4186+ Zadara Storage Inc. and the OpenStack community.
4187+"""
4188+
4189+import sys
4190+
4191+from nova import compute
4192+from nova import db
4193+from nova import exception
4194+from nova import flags
4195+from nova import log as logging
4196+from nova import rpc
4197+from nova import volume
4198+from nova.compute import instance_types
4199+from nova.db import base
4200+from nova.volume import volume_types
4201+
4202+
4203+class VsaState:
4204+ CREATING = 'creating' # VSA creating (not ready yet)
4205+ LAUNCHING = 'launching' # Launching VCs (all BE volumes were created)
4206+ CREATED = 'created' # VSA fully created and ready for use
4207+ PARTIAL = 'partial' # Some BE drives were allocated
4208+ FAILED = 'failed' # Some BE storage allocations failed
4209+ DELETING = 'deleting' # VSA started the deletion procedure
4210+
4211+
4212+FLAGS = flags.FLAGS
4213+flags.DEFINE_string('vsa_ec2_access_key', None,
4214+ 'EC2 access key used by VSA for accessing nova')
4215+flags.DEFINE_string('vsa_ec2_user_id', None,
4216+ 'User ID used by VSA for accessing nova')
4217+flags.DEFINE_boolean('vsa_multi_vol_creation', True,
4218+ 'Ask scheduler to create multiple volumes in one call')
4219+flags.DEFINE_string('vsa_volume_type_name', 'VSA volume type',
4220+ 'Name of volume type associated with FE VSA volumes')
4221+
4222+LOG = logging.getLogger('nova.vsa')
4223+
4224+
4225+class API(base.Base):
4226+ """API for interacting with the VSA manager."""
4227+
4228+ def __init__(self, compute_api=None, volume_api=None, **kwargs):
4229+ self.compute_api = compute_api or compute.API()
4230+ self.volume_api = volume_api or volume.API()
4231+ super(API, self).__init__(**kwargs)
4232+
4233+ def _check_volume_type_correctness(self, vol_type):
4234+ if vol_type.get('extra_specs') is None or\
4235+ vol_type['extra_specs'].get('type') != 'vsa_drive' or\
4236+ vol_type['extra_specs'].get('drive_type') is None or\
4237+ vol_type['extra_specs'].get('drive_size') is None:
4238+
4239+ raise exception.ApiError(_("Invalid drive type %s")
4240+ % vol_type['name'])
4241+
4242+ def _get_default_vsa_instance_type(self):
4243+ return instance_types.get_instance_type_by_name(
4244+ FLAGS.default_vsa_instance_type)
4245+
4246+ def _check_storage_parameters(self, context, vsa_name, storage,
4247+ shared, first_index=0):
4248+ """
4249+ Translates storage array of disks to the list of volumes
4250+ :param storage: List of dictionaries with following keys:
4251+ disk_name, num_disks, size
4252+ :param shared: Specifies if storage is dedicated or shared.
4253+ For shared storage disks split into partitions
4254+ """
4255+ volume_params = []
4256+ for node in storage:
4257+
4258+ name = node.get('drive_name', None)
4259+ num_disks = node.get('num_drives', 1)
4260+
4261+ if name is None:
4262+ raise exception.ApiError(_("No drive_name param found in %s")
4263+ % node)
4264+ try:
4265+ vol_type = volume_types.get_volume_type_by_name(context, name)
4266+ except exception.NotFound:
4267+ raise exception.ApiError(_("Invalid drive type name %s")
4268+ % name)
4269+
4270+ self._check_volume_type_correctness(vol_type)
4271+
4272+ # if size field present - override disk size specified in DB
4273+ size = int(node.get('size',
4274+ vol_type['extra_specs'].get('drive_size')))
4275+
4276+ if shared:
4277+ part_size = FLAGS.vsa_part_size_gb
4278+ total_capacity = num_disks * size
4279+ num_volumes = total_capacity / part_size
4280+ size = part_size
4281+ else:
4282+ num_volumes = num_disks
4283+ size = 0 # special handling for full drives
4284+
4285+ for i in range(num_volumes):
4286+ volume_name = "drive-%03d" % first_index
4287+ first_index += 1
4288+ volume_desc = 'BE volume for VSA %s type %s' % \
4289+ (vsa_name, name)
4290+ volume = {
4291+ 'size': size,
4292+ 'name': volume_name,
4293+ 'description': volume_desc,
4294+ 'volume_type_id': vol_type['id'],
4295+ }
4296+ volume_params.append(volume)
4297+
4298+ return volume_params
4299+
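`_check_storage_parameters` above turns a storage spec into per-volume allocations: shared storage is carved into `FLAGS.vsa_part_size_gb`-sized partitions, while dedicated storage gets one full-drive volume (size 0) per disk. A sketch of just that arithmetic, with the partition size passed in explicitly (the flag's default value is not shown in this diff):

```python
def plan_volumes(num_disks, disk_size_gb, shared, part_size_gb=10):
    """Compute (num_volumes, per_volume_size_gb) the way the VSA API does.

    part_size_gb stands in for FLAGS.vsa_part_size_gb (value assumed).
    Shared storage is split into fixed-size partitions; dedicated
    storage allocates one volume per disk, with size 0 meaning
    "whole drive".
    """
    if shared:
        total_capacity = num_disks * disk_size_gb
        return total_capacity // part_size_gb, part_size_gb
    return num_disks, 0

# 3 shared 100 GB disks, 10 GB partitions -> 30 volumes of 10 GB each
assert plan_volumes(3, 100, shared=True) == (30, 10)
# 3 dedicated disks -> 3 full-drive volumes
assert plan_volumes(3, 100, shared=False) == (3, 0)
```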
4300+ def create(self, context, display_name='', display_description='',
4301+ vc_count=1, instance_type=None, image_name=None,
4302+ availability_zone=None, storage=[], shared=None):
4303+ """
4304+ Provision VSA instance with corresponding compute instances
4305+ and associated volumes
4306+ :param storage: List of dictionaries with following keys:
4307+ disk_name, num_disks, size
4308+ :param shared: Specifies if storage is dedicated or shared.
4309+ For shared storage disks split into partitions
4310+ """
4311+
4312+ LOG.info(_("*** Experimental VSA code ***"))
4313+
4314+ if vc_count > FLAGS.max_vcs_in_vsa:
4315+ LOG.warning(_("Requested number of VCs (%d) is too high."\
4316+ " Setting to default"), vc_count)
4317+ vc_count = FLAGS.max_vcs_in_vsa
4318+
4319+ if instance_type is None:
4320+ instance_type = self._get_default_vsa_instance_type()
4321+
4322+ if availability_zone is None:
4323+ availability_zone = FLAGS.storage_availability_zone
4324+
4325+ if storage is None:
4326+ storage = []
4327+
4328+ if shared is None or shared == 'False' or shared == False:
4329+ shared = False
4330+ else:
4331+ shared = True
4332+
4333+ # check if image is ready before starting any work
4334+ if image_name is None:
4335+ image_name = FLAGS.vc_image_name
4336+ try:
4337+ image_service = self.compute_api.image_service
4338+ vc_image = image_service.show_by_name(context, image_name)
4339+ vc_image_href = vc_image['id']
4340+ except exception.ImageNotFound:
4341+ raise exception.ApiError(_("Failed to find configured image %s")
4342+ % image_name)
4343+
4344+ options = {
4345+ 'display_name': display_name,
4346+ 'display_description': display_description,
4347+ 'project_id': context.project_id,
4348+ 'availability_zone': availability_zone,
4349+ 'instance_type_id': instance_type['id'],
4350+ 'image_ref': vc_image_href,
4351+ 'vc_count': vc_count,
4352+ 'status': VsaState.CREATING,
4353+ }
4354+ LOG.info(_("Creating VSA: %s") % options)
4355+
4356+ # create DB entry for VSA instance
4357+ try:
4358+ vsa_ref = self.db.vsa_create(context, options)
4359+ except exception.Error:
4360+ raise exception.ApiError(unicode(sys.exc_info()[1]))
4361+ vsa_id = vsa_ref['id']
4362+ vsa_name = vsa_ref['name']
4363+
4364+ # check storage parameters
4365+ try:
4366+ volume_params = self._check_storage_parameters(context, vsa_name,
4367+ storage, shared)
4368+ except exception.ApiError:
4369+ self.db.vsa_destroy(context, vsa_id)
4370+ raise exception.ApiError(_("Error in storage parameters: %s")
4371+ % storage)
4372+
4373+ # after creating DB entry, re-check and set some defaults
4374+ updates = {}
4375+ if (not hasattr(vsa_ref, 'display_name') or
4376+ vsa_ref.display_name is None or
4377+ vsa_ref.display_name == ''):
4378+ updates['display_name'] = display_name = vsa_name
4379+ updates['vol_count'] = len(volume_params)
4380+ vsa_ref = self.update(context, vsa_id, **updates)
4381+
4382+ # create volumes
4383+ if FLAGS.vsa_multi_vol_creation:
4384+ if len(volume_params) > 0:
4385+ request_spec = {
4386+ 'num_volumes': len(volume_params),
4387+ 'vsa_id': str(vsa_id),
4388+ 'volumes': volume_params,
4389+ }
4390+
4391+ rpc.cast(context,
4392+ FLAGS.scheduler_topic,
4393+ {"method": "create_volumes",
4394+ "args": {"topic": FLAGS.volume_topic,
4395+ "request_spec": request_spec,
4396+ "availability_zone": availability_zone}})
4397+ else:
4398+ # create BE volumes one-by-one
4399+ for vol in volume_params:
4400+ try:
4401+ vol_name = vol['name']
4402+ vol_size = vol['size']
4403+ vol_type_id = vol['volume_type_id']
4404+ LOG.debug(_("VSA ID %(vsa_id)d %(vsa_name)s: Create "\
4405+ "volume %(vol_name)s, %(vol_size)d GB, "\
4406+ "type %(vol_type_id)s"), locals())
4407+
4408+ vol_type = volume_types.get_volume_type(context,
4409+ vol['volume_type_id'])
4410+
4411+ vol_ref = self.volume_api.create(context,
4412+ vol_size,
4413+ None,
4414+ vol_name,
4415+ vol['description'],
4416+ volume_type=vol_type,
4417+ metadata=dict(to_vsa_id=str(vsa_id)),
4418+ availability_zone=availability_zone)
4419+ except:
4420+ self.update_vsa_status(context, vsa_id,
4421+ status=VsaState.PARTIAL)
4422+ raise
4423+
4424+ if len(volume_params) == 0:
4425+ # No BE volumes - ask VSA manager to start VCs
4426+ rpc.cast(context,
4427+ FLAGS.vsa_topic,
4428+ {"method": "create_vsa",
4429+ "args": {"vsa_id": str(vsa_id)}})
4430+
4431+ return vsa_ref
4432+
4433+ def update_vsa_status(self, context, vsa_id, status):
4434+ updates = dict(status=status)
4435+ LOG.info(_("VSA ID %(vsa_id)d: Update VSA status to %(status)s"),
4436+ locals())
4437+ return self.update(context, vsa_id, **updates)
4438+
4439+ def update(self, context, vsa_id, **kwargs):
4440+ """Updates the VSA instance in the datastore.
4441+
4442+ :param context: The security context
4443+ :param vsa_id: ID of the VSA instance to update
4444+ :param kwargs: All additional keyword args are treated
4445+ as data fields of the instance to be
4446+ updated
4447+
4448+ :returns: None
4449+ """
4450+ LOG.info(_("VSA ID %(vsa_id)d: Update VSA call"), locals())
4451+
4452+ updatable_fields = ['status', 'vc_count', 'vol_count',
4453+ 'display_name', 'display_description']
4454+ changes = {}
4455+ for field in updatable_fields:
4456+ if field in kwargs:
4457+ changes[field] = kwargs[field]
4458+
4459+ vc_count = kwargs.get('vc_count', None)
4460+ if vc_count is not None:
4461+ # VP-TODO: This request may want to update number of VCs
4462+ # Get number of current VCs and add/delete VCs appropriately
4463+ vsa = self.get(context, vsa_id)
4464+ vc_count = int(vc_count)
4465+ if vc_count > FLAGS.max_vcs_in_vsa:
4466+ LOG.warning(_("Requested number of VCs (%d) is too high."\
4467+ " Setting to default"), vc_count)
4468+ vc_count = FLAGS.max_vcs_in_vsa
4469+
4470+ if vsa['vc_count'] != vc_count:
4471+ self.update_num_vcs(context, vsa, vc_count)
4472+ changes['vc_count'] = vc_count
4473+
4474+ return self.db.vsa_update(context, vsa_id, changes)
4475+
4476+ def update_num_vcs(self, context, vsa, vc_count):
4477+ vsa_name = vsa['name']
4478+ old_vc_count = int(vsa['vc_count'])
4479+ if vc_count > old_vc_count:
4480+ add_cnt = vc_count - old_vc_count
4481+ LOG.debug(_("Adding %(add_cnt)s VCs to VSA %(vsa_name)s."),
4482+ locals())
4483+ # VP-TODO: actual code for adding new VCs
4484+
4485+ elif vc_count < old_vc_count:
4486+ del_cnt = old_vc_count - vc_count
4487+ LOG.debug(_("Deleting %(del_cnt)s VCs from VSA %(vsa_name)s."),
4488+ locals())
4489+ # VP-TODO: actual code for deleting extra VCs
4490+
4491+ def _force_volume_delete(self, ctxt, volume):
4492+ """Delete a volume, bypassing the check that it must be available."""
4493+ host = volume['host']
4494+ if not host:
4495+ # Deleting volume from database and skipping rpc.
4496+ self.db.volume_destroy(ctxt, volume['id'])
4497+ return
4498+
4499+ rpc.cast(ctxt,
4500+ self.db.queue_get_for(ctxt, FLAGS.volume_topic, host),
4501+ {"method": "delete_volume",
4502+ "args": {"volume_id": volume['id']}})
4503+
4504+ def delete_vsa_volumes(self, context, vsa_id, direction,
4505+ force_delete=True):
4506+ if direction == "FE":
4507+ volumes = self.get_all_vsa_volumes(context, vsa_id)
4508+ else:
4509+ volumes = self.get_all_vsa_drives(context, vsa_id)
4510+
4511+ for volume in volumes:
4512+ try:
4513+ vol_name = volume['name']
4514+ LOG.info(_("VSA ID %(vsa_id)s: Deleting %(direction)s "\
4515+ "volume %(vol_name)s"), locals())
4516+ self.volume_api.delete(context, volume['id'])
4517+ except exception.ApiError:
4518+ LOG.info(_("Unable to delete volume %s"), volume['name'])
4519+ if force_delete:
4520+ LOG.info(_("VSA ID %(vsa_id)s: Forced delete. "\
4521+ "%(direction)s volume %(vol_name)s"), locals())
4522+ self._force_volume_delete(context, volume)
4523+
4524+ def delete(self, context, vsa_id):
4525+ """Terminate a VSA instance."""
4526+ LOG.info(_("Going to try to terminate VSA ID %s"), vsa_id)
4527+
4528+ # Delete all FrontEnd and BackEnd volumes
4529+ self.delete_vsa_volumes(context, vsa_id, "FE", force_delete=True)
4530+ self.delete_vsa_volumes(context, vsa_id, "BE", force_delete=True)
4531+
4532+ # Delete all VC instances
4533+ instances = self.compute_api.get_all(context,
4534+ search_opts={'metadata': dict(vsa_id=str(vsa_id))})
4535+ for instance in instances:
4536+ name = instance['name']
4537+ LOG.debug(_("VSA ID %(vsa_id)s: Delete instance %(name)s"),
4538+ locals())
4539+ self.compute_api.delete(context, instance['id'])
4540+
4541+ # Delete VSA instance
4542+ self.db.vsa_destroy(context, vsa_id)
4543+
4544+ def get(self, context, vsa_id):
4545+ rv = self.db.vsa_get(context, vsa_id)
4546+ return rv
4547+
4548+ def get_all(self, context):
4549+ if context.is_admin:
4550+ return self.db.vsa_get_all(context)
4551+ return self.db.vsa_get_all_by_project(context, context.project_id)
4552+
4553+ def get_vsa_volume_type(self, context):
4554+ name = FLAGS.vsa_volume_type_name
4555+ try:
4556+ vol_type = volume_types.get_volume_type_by_name(context, name)
4557+ except exception.NotFound:
4558+ volume_types.create(context, name,
4559+ extra_specs=dict(type='vsa_volume'))
4560+ vol_type = volume_types.get_volume_type_by_name(context, name)
4561+
4562+ return vol_type
4563+
4564+ def get_all_vsa_instances(self, context, vsa_id):
4565+ return self.compute_api.get_all(context,
4566+ search_opts={'metadata': dict(vsa_id=str(vsa_id))})
4567+
4568+ def get_all_vsa_volumes(self, context, vsa_id):
4569+ return self.volume_api.get_all(context,
4570+ search_opts={'metadata': dict(from_vsa_id=str(vsa_id))})
4571+
4572+ def get_all_vsa_drives(self, context, vsa_id):
4573+ return self.volume_api.get_all(context,
4574+ search_opts={'metadata': dict(to_vsa_id=str(vsa_id))})
4575
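The multi-volume path in `create()` above packs every BE volume into a single `request_spec` and casts it to the scheduler in one RPC. A minimal sketch of that payload, assuming the field names used in the diff (`build_request_spec` is an illustrative helper, not part of Nova):

```python
# Hypothetical helper mirroring the request_spec dict built in
# VsaApi.create() when FLAGS.vsa_multi_vol_creation is enabled.

def build_request_spec(vsa_id, volume_params):
    """Describe all BE volumes of a VSA in one scheduler request."""
    return {
        'num_volumes': len(volume_params),
        'vsa_id': str(vsa_id),
        'volumes': volume_params,
    }

# Sample (hypothetical) per-volume parameters as produced by
# _check_storage_parameters.
volume_params = [
    {'name': 'drive-000001', 'size': 10, 'volume_type_id': 1,
     'description': 'BE drive'},
    {'name': 'drive-000002', 'size': 20, 'volume_type_id': 1,
     'description': 'BE drive'},
]
spec = build_request_spec(123, volume_params)
print(spec['num_volumes'])  # 2
```

Casting one spec instead of N per-volume requests lets the scheduler place all drives with a global view of host capabilities, which is the point of the new VSA scheduler in this branch.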
4576=== added file 'nova/vsa/connection.py'
4577--- nova/vsa/connection.py 1970-01-01 00:00:00 +0000
4578+++ nova/vsa/connection.py 2011-08-26 22:09:26 +0000
4579@@ -0,0 +1,25 @@
4580+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4581+
4582+# Copyright (c) 2011 Zadara Storage Inc.
4583+# Copyright (c) 2011 OpenStack LLC.
4584+#
4585+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4586+# not use this file except in compliance with the License. You may obtain
4587+# a copy of the License at
4588+#
4589+# http://www.apache.org/licenses/LICENSE-2.0
4590+#
4591+# Unless required by applicable law or agreed to in writing, software
4592+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4593+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4594+# License for the specific language governing permissions and limitations
4595+# under the License.
4596+
4597+"""Abstraction of the underlying connection to VC."""
4598+
4599+from nova.vsa import fake
4600+
4601+
4602+def get_connection():
4603+ # Return an object that is able to talk to VCs
4604+ return fake.FakeVcConnection()
4605
4606=== added file 'nova/vsa/fake.py'
4607--- nova/vsa/fake.py 1970-01-01 00:00:00 +0000
4608+++ nova/vsa/fake.py 2011-08-26 22:09:26 +0000
4609@@ -0,0 +1,22 @@
4610+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4611+
4612+# Copyright (c) 2011 Zadara Storage Inc.
4613+# Copyright (c) 2011 OpenStack LLC.
4614+#
4615+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4616+# not use this file except in compliance with the License. You may obtain
4617+# a copy of the License at
4618+#
4619+# http://www.apache.org/licenses/LICENSE-2.0
4620+#
4621+# Unless required by applicable law or agreed to in writing, software
4622+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4623+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4624+# License for the specific language governing permissions and limitations
4625+# under the License.
4626+
4627+
4628+class FakeVcConnection(object):
4629+
4630+ def init_host(self, host):
4631+ pass
4632
4633=== added file 'nova/vsa/manager.py'
4634--- nova/vsa/manager.py 1970-01-01 00:00:00 +0000
4635+++ nova/vsa/manager.py 2011-08-26 22:09:26 +0000
4636@@ -0,0 +1,179 @@
4637+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4638+
4639+# Copyright (c) 2011 Zadara Storage Inc.
4640+# Copyright (c) 2011 OpenStack LLC.
4641+#
4642+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4643+# not use this file except in compliance with the License. You may obtain
4644+# a copy of the License at
4645+#
4646+# http://www.apache.org/licenses/LICENSE-2.0
4647+#
4648+# Unless required by applicable law or agreed to in writing, software
4649+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4650+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4651+# License for the specific language governing permissions and limitations
4652+# under the License.
4653+
4654+"""
4655+Handles all processes relating to Virtual Storage Arrays (VSA).
4656+
4657+**Related Flags**
4658+:vsa_driver:  Driver to use for controlling VSAs
4659+"""
4660+
4661+from nova import compute
4662+from nova import exception
4663+from nova import flags
4664+from nova import log as logging
4665+from nova import manager
4666+from nova import volume
4667+from nova import utils
4668+from nova import vsa
4669+from nova.compute import instance_types
4670+from nova.vsa import utils as vsa_utils
4671+from nova.vsa.api import VsaState
4672+
4673+FLAGS = flags.FLAGS
4674+flags.DEFINE_string('vsa_driver', 'nova.vsa.connection.get_connection',
4675+ 'Driver to use for controlling VSAs')
4676+
4677+LOG = logging.getLogger('nova.vsa.manager')
4678+
4679+
4680+class VsaManager(manager.SchedulerDependentManager):
4681+ """Manages Virtual Storage Arrays (VSAs)."""
4682+
4683+ def __init__(self, vsa_driver=None, *args, **kwargs):
4684+ if not vsa_driver:
4685+ vsa_driver = FLAGS.vsa_driver
4686+ self.driver = utils.import_object(vsa_driver)
4687+ self.compute_manager = utils.import_object(FLAGS.compute_manager)
4688+
4689+ self.compute_api = compute.API()
4690+ self.volume_api = volume.API()
4691+ self.vsa_api = vsa.API()
4692+
4693+ if FLAGS.vsa_ec2_user_id is None or \
4694+ FLAGS.vsa_ec2_access_key is None:
4695+ raise exception.VSANovaAccessParamNotFound()
4696+
4697+ super(VsaManager, self).__init__(*args, **kwargs)
4698+
4699+ def init_host(self):
4700+ self.driver.init_host(host=self.host)
4701+ super(VsaManager, self).init_host()
4702+
4703+ @exception.wrap_exception()
4704+ def create_vsa(self, context, vsa_id):
4705+ """Called by the API when no BE volumes were assigned."""
4706+ LOG.debug(_("Create call received for VSA %s"), vsa_id)
4707+
4708+ vsa_id = int(vsa_id) # just in case
4709+
4710+ try:
4711+ vsa = self.vsa_api.get(context, vsa_id)
4712+ except Exception as ex:
4713+ msg = _("Failed to find VSA %(vsa_id)d") % locals()
4714+ LOG.exception(msg)
4715+ return
4716+
4717+ return self._start_vcs(context, vsa)
4718+
4719+ @exception.wrap_exception()
4720+ def vsa_volume_created(self, context, vol_id, vsa_id, status):
4721+ """Callback invoked when a BE volume has been created."""
4722+ LOG.debug(_("VSA ID %(vsa_id)s: Drive %(vol_id)s created. "\
4723+ "Status %(status)s"), locals())
4724+ vsa_id = int(vsa_id) # just in case
4725+
4726+ # Get all volumes for this VSA
4727+ # check if any of them still in creating phase
4728+ drives = self.vsa_api.get_all_vsa_drives(context, vsa_id)
4729+ for drive in drives:
4730+ if drive['status'] == 'creating':
4731+ vol_name = drive['name']
4732+ vol_disp_name = drive['display_name']
4733+ LOG.debug(_("Drive %(vol_name)s (%(vol_disp_name)s) still "\
4734+ "in creating phase - wait"), locals())
4735+ return
4736+
4737+ try:
4738+ vsa = self.vsa_api.get(context, vsa_id)
4739+ except Exception as ex:
4740+ msg = _("Failed to find VSA %(vsa_id)d") % locals()
4741+ LOG.exception(msg)
4742+ return
4743+
4744+ if len(drives) != vsa['vol_count']:
4745+ cvol_real = len(drives)
4746+ cvol_exp = vsa['vol_count']
4747+ LOG.debug(_("VSA ID %(vsa_id)d: Not all volumes are created "\
4748+ "(%(cvol_real)d of %(cvol_exp)d)"), locals())
4749+ return
4750+
4751+ # all volumes created (successfully or not)
4752+ return self._start_vcs(context, vsa, drives)
4753+
4754+ def _start_vcs(self, context, vsa, drives=()):  # immutable default
4755+ """Start VCs for VSA."""
4756+
4757+ vsa_id = vsa['id']
4758+ if vsa['status'] == VsaState.CREATING:
4759+ self.vsa_api.update_vsa_status(context, vsa_id,
4760+ VsaState.LAUNCHING)
4761+ else:
4762+ return
4763+
4764+ # in _separate_ loop go over all volumes and mark as "attached"
4765+ has_failed_volumes = False
4766+ for drive in drives:
4767+ vol_name = drive['name']
4768+ vol_disp_name = drive['display_name']
4769+ status = drive['status']
4770+ LOG.info(_("VSA ID %(vsa_id)d: Drive %(vol_name)s "\
4771+ "(%(vol_disp_name)s) is in %(status)s state"),
4772+ locals())
4773+ if status == 'available':
4774+ try:
4775+ # self.volume_api.update(context, volume['id'],
4776+ # dict(attach_status="attached"))
4777+ pass
4778+ except Exception as ex:
4779+ msg = _("Failed to update attach status for volume "
4780+ "%(vol_name)s. %(ex)s") % locals()
4781+ LOG.exception(msg)
4782+ else:
4783+ has_failed_volumes = True
4784+
4785+ if has_failed_volumes:
4786+ LOG.info(_("VSA ID %(vsa_id)d: Delete all BE volumes"), locals())
4787+ self.vsa_api.delete_vsa_volumes(context, vsa_id, "BE", True)
4788+ self.vsa_api.update_vsa_status(context, vsa_id,
4789+ VsaState.FAILED)
4790+ return
4791+
4792+ # create user-data record for VC
4793+ storage_data = vsa_utils.generate_user_data(vsa, drives)
4794+
4795+ instance_type = instance_types.get_instance_type(
4796+ vsa['instance_type_id'])
4797+
4798+ # now start the VC instance
4799+
4800+ vc_count = vsa['vc_count']
4801+ LOG.info(_("VSA ID %(vsa_id)d: Start %(vc_count)d instances"),
4802+ locals())
4803+ vc_instances = self.compute_api.create(context,
4804+ instance_type, # vsa['vsa_instance_type'],
4805+ vsa['image_ref'],
4806+ min_count=1,
4807+ max_count=vc_count,
4808+ display_name='vc-' + vsa['display_name'],
4809+ display_description='VC for VSA ' + vsa['display_name'],
4810+ availability_zone=vsa['availability_zone'],
4811+ user_data=storage_data,
4812+ metadata=dict(vsa_id=str(vsa_id)))
4813+
4814+ self.vsa_api.update_vsa_status(context, vsa_id,
4815+ VsaState.CREATED)
4816
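The gating logic spread across `vsa_volume_created()` and `_start_vcs()` above can be summarized: wait while any BE drive is still `creating` or not all drives exist yet, launch VCs only when every drive is `available`, and otherwise tear down and mark the VSA FAILED. An illustrative sketch of that decision (not Nova code; the function name is made up):

```python
# Hypothetical condensation of the drive-status gate in
# VsaManager.vsa_volume_created / _start_vcs.

def volume_creation_progress(drives, expected_count):
    """Return the next action for the VSA given its BE drive states."""
    if any(d['status'] == 'creating' for d in drives):
        return 'wait'            # a drive is still being built
    if len(drives) != expected_count:
        return 'wait'            # not every volume exists yet
    if all(d['status'] == 'available' for d in drives):
        return 'start_vcs'       # all drives healthy: boot the VCs
    return 'fail_vsa'            # some drive ended in error: delete BE
                                 # volumes and mark the VSA FAILED

print(volume_creation_progress([{'status': 'available'}], 1))  # start_vcs
```

Because the callback fires once per created volume, expressing the check as a pure function of the drive list keeps it idempotent: repeated callbacks simply return `'wait'` until the last drive settles.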
4817=== added file 'nova/vsa/utils.py'
4818--- nova/vsa/utils.py 1970-01-01 00:00:00 +0000
4819+++ nova/vsa/utils.py 2011-08-26 22:09:26 +0000
4820@@ -0,0 +1,80 @@
4821+# vim: tabstop=4 shiftwidth=4 softtabstop=4
4822+
4823+# Copyright (c) 2011 Zadara Storage Inc.
4824+# Copyright (c) 2011 OpenStack LLC.
4825+#
4826+# Licensed under the Apache License, Version 2.0 (the "License"); you may
4827+# not use this file except in compliance with the License. You may obtain
4828+# a copy of the License at
4829+#
4830+# http://www.apache.org/licenses/LICENSE-2.0
4831+#
4832+# Unless required by applicable law or agreed to in writing, software
4833+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
4834+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
4835+# License for the specific language governing permissions and limitations
4836+# under the License.
4837+
4838+import base64
4839+from xml.etree import ElementTree
4840+
4841+from nova import flags
4842+
4843+FLAGS = flags.FLAGS
4844+
4845+
4846+def generate_user_data(vsa, volumes):
4847+ SubElement = ElementTree.SubElement
4848+
4849+ e_vsa = ElementTree.Element("vsa")
4850+
4851+ e_vsa_detail = SubElement(e_vsa, "id")
4852+ e_vsa_detail.text = str(vsa['id'])
4853+ e_vsa_detail = SubElement(e_vsa, "name")
4854+ e_vsa_detail.text = vsa['display_name']
4855+ e_vsa_detail = SubElement(e_vsa, "description")
4856+ e_vsa_detail.text = vsa['display_description']
4857+ e_vsa_detail = SubElement(e_vsa, "vc_count")
4858+ e_vsa_detail.text = str(vsa['vc_count'])
4859+
4860+ e_vsa_detail = SubElement(e_vsa, "auth_user")
4861+ e_vsa_detail.text = FLAGS.vsa_ec2_user_id
4862+ e_vsa_detail = SubElement(e_vsa, "auth_access_key")
4863+ e_vsa_detail.text = FLAGS.vsa_ec2_access_key
4864+
4865+ e_volumes = SubElement(e_vsa, "volumes")
4866+ for volume in volumes:
4867+
4868+ loc = volume['provider_location']
4869+ if loc is None:
4870+ ip = ''
4871+ iscsi_iqn = ''
4872+ iscsi_portal = ''
4873+ else:
4874+ (iscsi_target, _sep, iscsi_iqn) = loc.partition(" ")
4875+ (ip, iscsi_portal) = iscsi_target.split(":", 1)
4876+
4877+ e_vol = SubElement(e_volumes, "volume")
4878+ e_vol_detail = SubElement(e_vol, "id")
4879+ e_vol_detail.text = str(volume['id'])
4880+ e_vol_detail = SubElement(e_vol, "name")
4881+ e_vol_detail.text = volume['name']
4882+ e_vol_detail = SubElement(e_vol, "display_name")
4883+ e_vol_detail.text = volume['display_name']
4884+ e_vol_detail = SubElement(e_vol, "size_gb")
4885+ e_vol_detail.text = str(volume['size'])
4886+ e_vol_detail = SubElement(e_vol, "status")
4887+ e_vol_detail.text = volume['status']
4888+ e_vol_detail = SubElement(e_vol, "ip")
4889+ e_vol_detail.text = ip
4890+ e_vol_detail = SubElement(e_vol, "iscsi_iqn")
4891+ e_vol_detail.text = iscsi_iqn
4892+ e_vol_detail = SubElement(e_vol, "iscsi_portal")
4893+ e_vol_detail.text = iscsi_portal
4894+ e_vol_detail = SubElement(e_vol, "lun")
4895+ e_vol_detail.text = '0'
4896+ e_vol_detail = SubElement(e_vol, "sn_host")
4897+ e_vol_detail.text = volume['host']
4898+
4899+ _xml = ElementTree.tostring(e_vsa)
4900+ return base64.b64encode(_xml)
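`generate_user_data()` above emits base64-encoded XML that the VC instance receives as EC2 user-data, with each volume's `provider_location` split into IP, portal, and IQN. A sketch of the consumer side, assuming the `"<ip>:<portal> <iqn>"` location format used in the code (the sample values are hypothetical):

```python
import base64
from xml.etree import ElementTree


def parse_provider_location(loc):
    """Split 'ip:portal iqn' into parts, mirroring generate_user_data."""
    if loc is None:
        return '', '', ''
    iscsi_target, _sep, iscsi_iqn = loc.partition(' ')
    ip, iscsi_portal = iscsi_target.split(':', 1)
    return ip, iscsi_iqn, iscsi_portal


ip, iqn, portal = parse_provider_location(
    '10.0.0.5:3260,1 iqn.2011-08.org.example:vol-000001')
print(ip, portal)  # 10.0.0.5 3260,1

# Decoding the user-data inside the VC: base64 -> XML -> fields.
sample = base64.b64encode(b'<vsa><id>123</id><vc_count>2</vc_count></vsa>')
root = ElementTree.fromstring(base64.b64decode(sample))
print(root.findtext('id'))  # 123
```

Base64 is used because EC2-style user-data must survive transport as opaque text; the VC decodes it once at boot to discover its drives and Nova credentials.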