Merge lp:~vladimir.p/nova/vsa into lp:~hudson-openstack/nova/trunk

Proposed by Vladimir Popovski
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1328
Merged at revision: 1502
Proposed branch: lp:~vladimir.p/nova/vsa
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 4900 lines (+4498/-13)
29 files modified
bin/nova-manage (+477/-1)
bin/nova-vsa (+49/-0)
nova/api/openstack/contrib/virtual_storage_arrays.py (+606/-0)
nova/db/api.py (+35/-1)
nova/db/sqlalchemy/api.py (+102/-0)
nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py (+75/-0)
nova/db/sqlalchemy/migration.py (+1/-0)
nova/db/sqlalchemy/models.py (+34/-1)
nova/exception.py (+12/-0)
nova/flags.py (+12/-0)
nova/quota.py (+3/-2)
nova/scheduler/vsa.py (+535/-0)
nova/tests/api/openstack/contrib/test_vsa.py (+450/-0)
nova/tests/api/openstack/test_extensions.py (+1/-0)
nova/tests/scheduler/test_vsa_scheduler.py (+641/-0)
nova/tests/test_vsa.py (+182/-0)
nova/tests/test_vsa_volumes.py (+136/-0)
nova/virt/libvirt.xml.template (+3/-1)
nova/virt/libvirt/connection.py (+5/-0)
nova/volume/api.py (+14/-4)
nova/volume/driver.py (+272/-0)
nova/volume/manager.py (+78/-0)
nova/volume/volume_types.py (+40/-3)
nova/vsa/__init__.py (+18/-0)
nova/vsa/api.py (+411/-0)
nova/vsa/connection.py (+25/-0)
nova/vsa/fake.py (+22/-0)
nova/vsa/manager.py (+179/-0)
nova/vsa/utils.py (+80/-0)
To merge this branch: bzr merge lp:~vladimir.p/nova/vsa
Reviewer Review Type Date Requested Status
Matt Dietz (community) Abstain
Brian Lamar (community) Approve
Vish Ishaya (community) Approve
Soren Hansen (community) Approve
Review via email: mp+72983@code.launchpad.net

Description of the change

Virtual Storage Array (VSA) feature.
- new Virtual Storage Array (VSA) objects / OS API extensions / APIs / CLIs
- new schedulers for selecting nodes with particular volume capabilities
- a new special-purpose volume driver
- reporting of volume capabilities
- assorted fixes for volume types
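The node-selection idea behind the new schedulers can be sketched roughly as follows. This is a minimal illustration only: the function and the capability keys used here are invented for explanation and are not the actual `nova/scheduler/vsa.py` interface.

```python
# Hypothetical sketch of capability-based volume-node selection.
# Hosts periodically report capabilities; the scheduler filters them
# against the requested drive type and size.

def filter_hosts_by_capability(hosts, required_type, required_gb):
    """Pick hosts advertising enough free capacity of the required drive type."""
    selected = []
    for host, caps in hosts.items():
        # "drive_qos_info" and its fields are assumed names for illustration.
        for drive in caps.get("drive_qos_info", {}).values():
            if (drive["DriveType"] == required_type
                    and drive["AvailableCapacity"] >= required_gb):
                selected.append(host)
                break
    return selected

hosts = {
    "node1": {"drive_qos_info": {
        "SATA_1TB": {"DriveType": "SATA", "AvailableCapacity": 800}}},
    "node2": {"drive_qos_info": {
        "SAS_300GB": {"DriveType": "SAS", "AvailableCapacity": 250}}},
}
print(filter_hosts_by_capability(hosts, "SATA", 500))  # ['node1']
```

In the branch itself, the reported capabilities come from the nova-volume nodes (see the `update_capabilities` fanout cast added to nova-manage in the diff below).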

Revision history for this message
Vladimir Popovski (vladimir.p) wrote :

I re-submitted the VSA merge proposal after removing the drive_type-related code and merging it with volume types/metadata.

I believe most (hopefully all) of your previous comments have been addressed.

In case you would like to take a look at the previous proposal:
https://code.launchpad.net/~vladimir.p/nova/vsa/+merge/68987

I really appreciate your feedback! I hope we will be able to land it tomorrow.

lp:~vladimir.p/nova/vsa updated
1325. By Vladimir Popovski

added debug prints for scheduler

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Nicely done vladimir. I just see a couple of issues:

573 === added directory 'nova/CA/newcerts'
574 === removed directory 'nova/CA/newcerts'
575 === added file 'nova/CA/newcerts/.placeholder'
576 === removed file 'nova/CA/newcerts/.placeholder'
577 === added directory 'nova/CA/private'
578 === removed directory 'nova/CA/private'
579 === added file 'nova/CA/private/.placeholder'
580 === removed file 'nova/CA/private/.placeholder'
581 === added directory 'nova/CA/projects'
582 === removed directory 'nova/CA/projects'
583 === added file 'nova/CA/projects/.gitignore'
584 --- nova/CA/projects/.gitignore 1970-01-01 00:00:00 +0000
585 +++ nova/CA/projects/.gitignore 2011-08-26 18:16:26 +0000
586 @@ -0,0 +1,1 @@
587 +*
588
589 === removed file 'nova/CA/projects/.gitignore'
590 --- nova/CA/projects/.gitignore 2010-07-28 08:32:40 +0000
591 +++ nova/CA/projects/.gitignore 1970-01-01 00:00:00 +0000
592 @@ -1,1 +0,0 @@
593 -*
594
595 === added file 'nova/CA/projects/.placeholder'
596 === removed file 'nova/CA/projects/.placeholder'
597 === added directory 'nova/CA/reqs'
598 === removed directory 'nova/CA/reqs'
599 === added file 'nova/CA/reqs/.gitignore'
600 --- nova/CA/reqs/.gitignore 1970-01-01 00:00:00 +0000
601 +++ nova/CA/reqs/.gitignore 2011-08-26 18:16:26 +0000
602 @@ -0,0 +1,1 @@
603 +*
604
605 === removed file 'nova/CA/reqs/.gitignore'
606 --- nova/CA/reqs/.gitignore 2010-07-28 08:32:40 +0000
607 +++ nova/CA/reqs/.gitignore 1970-01-01 00:00:00 +0000
608 @@ -1,1 +0,0 @@
609 -*
610
611 === added file 'nova/CA/reqs/.placeholder'
612 === removed file 'nova/CA/reqs/.placeholder'
613 === added file 'nova/api/openstack/contrib/virtual_storage_arrays.py'

Some weird stuff going on here. It might be nice to recreate the branch to get rid of these strange issues, i.e. start with trunk, copy all of the new files in, and put it in one commit.

4006 + def create_volumes(self, context, request_spec, availability_zone):
4007 + LOG.info(_("create_volumes called with req=%(request_spec)s, "\
4008 + "availability_zone=%(availability_zone)s"), locals())
4009 +

This doesn't look like it is actually implemented; it seems to just log, so perhaps remove it for now?

----

Great job separating out the special cases from the code. It seems that there is no problem running without nova-vsa. I feel like we need some clear documentation that the vsa code is experimental and is not a supported core component. I think a note that prints out when you start nova-vsa, plus a note in the vsa folder docstrings, would be good enough.

It would also be good to mention in that note that a specific image is needed to use the functionality exposed, and that other storage providers are invited to make images. I also think it is very important for this to have a generic image implementation that is open source, even if it is just a simple command-line version.

In general I'm worried about adding something to the project that is specific to a vendor, but since you guys have put so much effort into doing this the right way, I'm willing to assume that you will continue to generalize and make it possible for others to use this code.

Rest of nova-cor...


review: Needs Fixing
Revision history for this message
Brian Waldon (bcwaldon) wrote :

I definitely agree with you, Vish. I would, however, like to see some more documentation about this feature before we accept it.

Revision history for this message
Soren Hansen (soren) wrote :

> Nicely done vladimir. I just see a couple of issues:
>
> 573 === added directory 'nova/CA/newcerts'
> 574 === removed directory 'nova/CA/newcerts'
> 575 === added file 'nova/CA/newcerts/.placeholder'
> 576 === removed file 'nova/CA/newcerts/.placeholder'
> 577 === added directory 'nova/CA/private'
> 578 === removed directory 'nova/CA/private'
> 579 === added file 'nova/CA/private/.placeholder'
> 580 === removed file 'nova/CA/private/.placeholder'
> 581 === added directory 'nova/CA/projects'
> 582 === removed directory 'nova/CA/projects'
> 583 === added file 'nova/CA/projects/.gitignore'
[...]
> Some weird stuff going here. Might be nice to recreate the branch to get rid
> of these strange issues. i.e. start with trunk and just copy all of the new
> files in. and put it in one commit.

No need. Just do something like:
bzr revert -r -150 nova/CA/newcerts/.placeholder nova/CA/private/.placeholder nova/CA/projects/.*

-150 picked at random. It just needed to be a revision from before they got removed.

bzr assigns a file id to a file when you create it. If you rename the file, it keeps this file id. If you remove a file and later create another one with the same name, bzr considers it a new file and assigns a new file id to it. If you don't know this, you run into annoying, frustrating situations like this one. However, it also lets you e.g. rename a file from "bar" to "baz" and rename another from "foo" to "bar", and bzr actually knows that this is what you did, instead of thinking you deleted "foo", created a brand-new file "baz", and made some really funky changes to "bar". :)

bzr revert resurrects those files (with their ids) from the given revision.
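The file-id behaviour described above can be seen in a throwaway branch (an illustrative transcript, assuming a working bzr install; not part of this merge):

```
bzr init demo && cd demo
echo a > foo && bzr add foo && bzr commit -m "add foo"     # foo gets file id A
bzr rm foo && bzr commit -m "remove foo"
echo b > foo && bzr add foo && bzr commit -m "re-add foo"  # foo now has a new file id B
# As far as bzr is concerned, the re-added foo is unrelated to the original.
# Resurrect the original file (with its original id) from a pre-removal revision:
bzr revert -r 1 foo
```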

> In general I'm worried about adding something to the project that is specific
> to a vendor, but since you guys have put so much effort into doing this the
> right way, I'm willing to assume that you will continue to generalize and make
> it possible for others to use this code.

I understand where you're coming from with this concern. However, this seems like a really good driver that others can expand on.

> Rest of nova-core, does that seem ok?

I'm ok with it. I don't see us gaining anything by having this live outside of Nova itself. And, I like the code and would love to see others doing similar stuff building on top of this rather than building something crappy themselves.

Approved (assuming you do the quick bzr thing above).

review: Approve
Revision history for this message
Christopher MacGown (0x44) wrote :

I would also like to see the format strings include the field names within the string, the way we have formatted log strings elsewhere in the codebase.
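The convention being requested can be shown with a small sketch (the message text here is invented for illustration): mapping-style `%(name)s` placeholders carry their field names inside the string, unlike positional `%s`.

```python
# Named vs. positional printf-style formatting, as used for log strings.

vsa_id = 42
status = "creating"

# Positional style: the string alone doesn't say what the values are,
# and translators cannot reorder them.
msg_positional = "VSA %s entered state %s" % (vsa_id, status)

# Named style: each placeholder carries its field name, so the string is
# self-describing and the values can be safely reordered in translations.
msg_named = "VSA %(vsa_id)s entered state %(status)s" % locals()

print(msg_named)  # VSA 42 entered state creating
```

This is why the nova-manage code in the diff below formats with `dict(...)` and `locals()` rather than positional tuples.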

lp:~vladimir.p/nova/vsa updated
1326. By Vladimir Popovski

reverted CA files

1327. By Vladimir Popovski

removed create_volumes, added log & doc comment about experimental code

Revision history for this message
Matt Dietz (cerberus) wrote :

At this point, shouldn't this wait for Essex?

review: Needs Information
Revision history for this message
Vladimir Popovski (vladimir.p) wrote :

If possible, we would like to have it in Diablo. The initial proposal was posted back in July; if this is postponed to Essex, it will not be released until April.
It would also be great for other storage vendors to be aware of this feature and to use it (or part of it) as a reference. Putting it into Diablo will increase its visibility.
Thanks.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Based on discussion in irc and bug filed for docs, I'm ok with this going in.

review: Approve
Revision history for this message
Brian Lamar (blamar) wrote :

I'm of the opinion that this is much better than the previous version (thank you for that!) and that we should approve this and then decide on it getting into Essex or Diablo.

So basically, I'm approving this for trunk, but if I remember correctly you'll have to get an FFE from ttx to get this into Diablo.

Thanks for your hard work on this.

review: Approve
Revision history for this message
Vish Ishaya (vishvananda) wrote :

This was granted an FFE on the condition that it was approved by nova-core by today (Friday), so I think it is good to go in.

lp:~vladimir.p/nova/vsa updated
1328. By Vladimir Popovski

changed format string in nova-manage

Revision history for this message
Matt Dietz (cerberus) wrote :

Reconciled in os-dev. I'm going to abstain, since it looks like there are enough reviews.

review: Abstain

Preview Diff

=== modified file 'bin/nova-manage'
--- bin/nova-manage 2011-08-24 21:01:33 +0000
+++ bin/nova-manage 2011-08-26 22:09:26 +0000
@@ -53,6 +53,7 @@
 CLI interface for nova management.
 """
 
+import ast
 import gettext
 import glob
 import json
@@ -85,11 +86,13 @@
 from nova import rpc
 from nova import utils
 from nova import version
+from nova import vsa
 from nova.api.ec2 import ec2utils
 from nova.auth import manager
 from nova.cloudpipe import pipelib
 from nova.compute import instance_types
 from nova.db import migration
+from nova.volume import volume_types
 
 FLAGS = flags.FLAGS
 flags.DECLARE('fixed_range', 'nova.network.manager')
@@ -1097,6 +1100,477 @@
         self.list()
 
 
1103class VsaCommands(object):
1104 """Methods for dealing with VSAs"""
1105
1106 def __init__(self, *args, **kwargs):
1107 self.manager = manager.AuthManager()
1108 self.vsa_api = vsa.API()
1109 self.context = context.get_admin_context()
1110
1111 self._format_str_vsa = "%(id)-5s %(vsa_id)-15s %(name)-25s "\
1112 "%(type)-10s %(vcs)-6s %(drives)-9s %(stat)-10s "\
1113 "%(az)-10s %(time)-10s"
1114 self._format_str_volume = "\t%(id)-4s %(name)-15s %(size)-5s "\
1115 "%(stat)-10s %(att)-20s %(time)s"
1116 self._format_str_drive = "\t%(id)-4s %(name)-15s %(size)-5s "\
1117 "%(stat)-10s %(host)-20s %(type)-4s %(tname)-10s %(time)s"
1118 self._format_str_instance = "\t%(id)-4s %(name)-10s %(dname)-20s "\
1119 "%(image)-12s %(type)-10s %(fl_ip)-15s %(fx_ip)-15s "\
1120 "%(stat)-10s %(host)-15s %(time)s"
1121
1122 def _print_vsa_header(self):
1123 print self._format_str_vsa %\
1124 dict(id=_('ID'),
1125 vsa_id=_('vsa_id'),
1126 name=_('displayName'),
1127 type=_('vc_type'),
1128 vcs=_('vc_cnt'),
1129 drives=_('drive_cnt'),
1130 stat=_('status'),
1131 az=_('AZ'),
1132 time=_('createTime'))
1133
1134 def _print_vsa(self, vsa):
1135 print self._format_str_vsa %\
1136 dict(id=vsa['id'],
1137 vsa_id=vsa['name'],
1138 name=vsa['display_name'],
1139 type=vsa['vsa_instance_type'].get('name', None),
1140 vcs=vsa['vc_count'],
1141 drives=vsa['vol_count'],
1142 stat=vsa['status'],
1143 az=vsa['availability_zone'],
1144 time=str(vsa['created_at']))
1145
1146 def _print_volume_header(self):
1147 print _(' === Volumes ===')
1148 print self._format_str_volume %\
1149 dict(id=_('ID'),
1150 name=_('name'),
1151 size=_('size'),
1152 stat=_('status'),
1153 att=_('attachment'),
1154 time=_('createTime'))
1155
1156 def _print_volume(self, vol):
1157 print self._format_str_volume %\
1158 dict(id=vol['id'],
1159 name=vol['display_name'] or vol['name'],
1160 size=vol['size'],
1161 stat=vol['status'],
1162 att=vol['attach_status'],
1163 time=str(vol['created_at']))
1164
1165 def _print_drive_header(self):
1166 print _(' === Drives ===')
1167 print self._format_str_drive %\
1168 dict(id=_('ID'),
1169 name=_('name'),
1170 size=_('size'),
1171 stat=_('status'),
1172 host=_('host'),
1173 type=_('type'),
1174 tname=_('typeName'),
1175 time=_('createTime'))
1176
1177 def _print_drive(self, drive):
1178 if drive['volume_type_id'] is not None and drive.get('volume_type'):
1179 drive_type_name = drive['volume_type'].get('name')
1180 else:
1181 drive_type_name = ''
1182
1183 print self._format_str_drive %\
1184 dict(id=drive['id'],
1185 name=drive['display_name'],
1186 size=drive['size'],
1187 stat=drive['status'],
1188 host=drive['host'],
1189 type=drive['volume_type_id'],
1190 tname=drive_type_name,
1191 time=str(drive['created_at']))
1192
1193 def _print_instance_header(self):
1194 print _(' === Instances ===')
1195 print self._format_str_instance %\
1196 dict(id=_('ID'),
1197 name=_('name'),
1198 dname=_('disp_name'),
1199 image=_('image'),
1200 type=_('type'),
1201 fl_ip=_('floating_IP'),
1202 fx_ip=_('fixed_IP'),
1203 stat=_('status'),
1204 host=_('host'),
1205 time=_('createTime'))
1206
1207 def _print_instance(self, vc):
1208
1209 fixed_addr = None
1210 floating_addr = None
1211 if vc['fixed_ips']:
1212 fixed = vc['fixed_ips'][0]
1213 fixed_addr = fixed['address']
1214 if fixed['floating_ips']:
1215 floating_addr = fixed['floating_ips'][0]['address']
1216 floating_addr = floating_addr or fixed_addr
1217
1218 print self._format_str_instance %\
1219 dict(id=vc['id'],
1220 name=ec2utils.id_to_ec2_id(vc['id']),
1221 dname=vc['display_name'],
1222 image=('ami-%08x' % int(vc['image_ref'])),
1223 type=vc['instance_type']['name'],
1224 fl_ip=floating_addr,
1225 fx_ip=fixed_addr,
1226 stat=vc['state_description'],
1227 host=vc['host'],
1228 time=str(vc['created_at']))
1229
1230 def _list(self, context, vsas, print_drives=False,
1231 print_volumes=False, print_instances=False):
1232 if vsas:
1233 self._print_vsa_header()
1234
1235 for vsa in vsas:
1236 self._print_vsa(vsa)
1237 vsa_id = vsa.get('id')
1238
1239 if print_instances:
1240 instances = self.vsa_api.get_all_vsa_instances(context, vsa_id)
1241 if instances:
1242 print
1243 self._print_instance_header()
1244 for instance in instances:
1245 self._print_instance(instance)
1246 print
1247
1248 if print_drives:
1249 drives = self.vsa_api.get_all_vsa_drives(context, vsa_id)
1250 if drives:
1251 self._print_drive_header()
1252 for drive in drives:
1253 self._print_drive(drive)
1254 print
1255
1256 if print_volumes:
1257 volumes = self.vsa_api.get_all_vsa_volumes(context, vsa_id)
1258 if volumes:
1259 self._print_volume_header()
1260 for volume in volumes:
1261 self._print_volume(volume)
1262 print
1263
1264 @args('--storage', dest='storage',
1265 metavar="[{'drive_name': 'type', 'num_drives': N, 'size': M},..]",
1266 help='Initial storage allocation for VSA')
1267 @args('--name', dest='name', metavar="<name>", help='VSA name')
1268 @args('--description', dest='description', metavar="<description>",
1269 help='VSA description')
1270 @args('--vc', dest='vc_count', metavar="<number>", help='Number of VCs')
1271 @args('--instance_type', dest='instance_type_name', metavar="<name>",
1272 help='Instance type name')
1273 @args('--image', dest='image_name', metavar="<name>", help='Image name')
1274 @args('--shared', dest='shared', action="store_true", default=False,
1275 help='Use shared drives')
1276 @args('--az', dest='az', metavar="<zone:host>", help='Availability zone')
1277 @args('--user', dest="user_id", metavar='<User name>',
1278 help='User name')
1279 @args('--project', dest="project_id", metavar='<Project name>',
1280 help='Project name')
1281 def create(self, storage='[]', name=None, description=None, vc_count=1,
1282 instance_type_name=None, image_name=None, shared=None,
1283 az=None, user_id=None, project_id=None):
1284 """Create a VSA."""
1285
1286 if project_id is None:
1287 try:
1288 project_id = os.getenv("EC2_ACCESS_KEY").split(':')[1]
1289 except Exception as exc:
1290 print _("Failed to retrieve project id: %(exc)s") % exc
1291 raise
1292
1293 if user_id is None:
1294 try:
1295 project = self.manager.get_project(project_id)
1296 user_id = project.project_manager_id
1297 except Exception as exc:
1298 print _("Failed to retrieve user info: %(exc)s") % exc
1299 raise
1300
1301 is_admin = self.manager.is_admin(user_id)
1302 ctxt = context.RequestContext(user_id, project_id, is_admin)
1303 if not is_admin and \
1304 not self.manager.is_project_member(user_id, project_id):
1305 msg = _("%(user_id)s must be an admin or a "
1306 "member of %(project_id)s")
1307 LOG.warn(msg % locals())
1308 raise ValueError(msg % locals())
1309
1310 # Sanity check for storage string
1311 storage_list = []
1312 if storage is not None:
1313 try:
1314 storage_list = ast.literal_eval(storage)
1315 except:
1316 print _("Invalid string format %s") % storage
1317 raise
1318
1319 for node in storage_list:
1320 if ('drive_name' not in node) or ('num_drives' not in node):
1321 print (_("Invalid string format for element %s. " \
1322 "Expecting keys 'drive_name' & 'num_drives'"),
1323 str(node))
1324 raise KeyError
1325
1326 if instance_type_name == '':
1327 instance_type_name = None
1328 instance_type = instance_types.get_instance_type_by_name(
1329 instance_type_name)
1330
1331 if image_name == '':
1332 image_name = None
1333
1334 if shared in [None, False, "--full_drives"]:
1335 shared = False
1336 elif shared in [True, "--shared"]:
1337 shared = True
1338 else:
1339 raise ValueError(_('Shared parameter should be set either to "\
1340 "--shared or --full_drives'))
1341
1342 values = {
1343 'display_name': name,
1344 'display_description': description,
1345 'vc_count': int(vc_count),
1346 'instance_type': instance_type,
1347 'image_name': image_name,
1348 'availability_zone': az,
1349 'storage': storage_list,
1350 'shared': shared,
1351 }
1352
1353 result = self.vsa_api.create(ctxt, **values)
1354 self._list(ctxt, [result])
1355
1356 @args('--id', dest='vsa_id', metavar="<vsa_id>", help='VSA ID')
1357 @args('--name', dest='name', metavar="<name>", help='VSA name')
1358 @args('--description', dest='description', metavar="<description>",
1359 help='VSA description')
1360 @args('--vc', dest='vc_count', metavar="<number>", help='Number of VCs')
1361 def update(self, vsa_id, name=None, description=None, vc_count=None):
1362 """Updates name/description of vsa and number of VCs."""
1363
1364 values = {}
1365 if name is not None:
1366 values['display_name'] = name
1367 if description is not None:
1368 values['display_description'] = description
1369 if vc_count is not None:
1370 values['vc_count'] = int(vc_count)
1371
1372 vsa_id = ec2utils.ec2_id_to_id(vsa_id)
1373 result = self.vsa_api.update(self.context, vsa_id=vsa_id, **values)
1374 self._list(self.context, [result])
1375
1376 @args('--id', dest='vsa_id', metavar="<vsa_id>", help='VSA ID')
1377 def delete(self, vsa_id):
1378 """Delete a VSA."""
1379 vsa_id = ec2utils.ec2_id_to_id(vsa_id)
1380 self.vsa_api.delete(self.context, vsa_id)
1381
1382 @args('--id', dest='vsa_id', metavar="<vsa_id>",
1383 help='VSA ID (optional)')
1384 @args('--all', dest='all', action="store_true", default=False,
1385 help='Show all available details')
1386 @args('--drives', dest='drives', action="store_true",
1387 help='Include drive-level details')
1388 @args('--volumes', dest='volumes', action="store_true",
1389 help='Include volume-level details')
1390 @args('--instances', dest='instances', action="store_true",
1391 help='Include instance-level details')
1392 def list(self, vsa_id=None, all=False,
1393 drives=False, volumes=False, instances=False):
1394 """Describe all available VSAs (or particular one)."""
1395
1396 vsas = []
1397 if vsa_id is not None:
1398 internal_id = ec2utils.ec2_id_to_id(vsa_id)
1399 vsa = self.vsa_api.get(self.context, internal_id)
1400 vsas.append(vsa)
1401 else:
1402 vsas = self.vsa_api.get_all(self.context)
1403
1404 if all:
1405 drives = volumes = instances = True
1406
1407 self._list(self.context, vsas, drives, volumes, instances)
1408
1409 def update_capabilities(self):
1410 """Forces updates capabilities on all nova-volume nodes."""
1411
1412 rpc.fanout_cast(context.get_admin_context(),
1413 FLAGS.volume_topic,
1414 {"method": "notification",
1415 "args": {"event": "startup"}})
1416
1417
1418class VsaDriveTypeCommands(object):
1419 """Methods for dealing with VSA drive types"""
1420
1421 def __init__(self, *args, **kwargs):
1422 super(VsaDriveTypeCommands, self).__init__(*args, **kwargs)
1423 self.context = context.get_admin_context()
1424 self._drive_type_template = '%s_%sGB_%sRPM'
1425
1426 def _list(self, drives):
1427 format_str = "%-5s %-30s %-10s %-10s %-10s %-20s %-10s %s"
1428 if len(drives):
1429 print format_str %\
1430 (_('ID'),
1431 _('name'),
1432 _('type'),
1433 _('size_gb'),
1434 _('rpm'),
1435 _('capabilities'),
1436 _('visible'),
1437 _('createTime'))
1438
1439 for name, vol_type in drives.iteritems():
1440 drive = vol_type.get('extra_specs')
1441 print format_str %\
1442 (str(vol_type['id']),
1443 drive['drive_name'],
1444 drive['drive_type'],
1445 drive['drive_size'],
1446 drive['drive_rpm'],
1447 drive.get('capabilities', ''),
1448 str(drive.get('visible', '')),
1449 str(vol_type['created_at']))
1450
1451 @args('--type', dest='type', metavar="<type>",
1452 help='Drive type (SATA, SAS, SSD, etc.)')
1453 @args('--size', dest='size_gb', metavar="<gb>", help='Drive size in GB')
1454 @args('--rpm', dest='rpm', metavar="<rpm>", help='RPM')
1455 @args('--capabilities', dest='capabilities', default=None,
1456 metavar="<string>", help='Different capabilities')
1457 @args('--hide', dest='hide', action="store_true", default=False,
1458 help='Show or hide drive')
1459 @args('--name', dest='name', metavar="<name>", help='Drive name')
1460 def create(self, type, size_gb, rpm, capabilities=None,
1461 hide=False, name=None):
1462 """Create drive type."""
1463
1464 hide = True if hide in [True, "True", "--hide", "hide"] else False
1465
1466 if name is None:
1467 name = self._drive_type_template % (type, size_gb, rpm)
1468
1469 extra_specs = {'type': 'vsa_drive',
1470 'drive_name': name,
1471 'drive_type': type,
1472 'drive_size': size_gb,
1473 'drive_rpm': rpm,
1474 'visible': True,
1475 }
1476 if hide:
1477 extra_specs['visible'] = False
1478
1479 if capabilities is not None and capabilities != '':
1480 extra_specs['capabilities'] = capabilities
1481
1482 volume_types.create(self.context, name, extra_specs)
1483 result = volume_types.get_volume_type_by_name(self.context, name)
1484 self._list({name: result})
1485
1486 @args('--name', dest='name', metavar="<name>", help='Drive name')
1487 @args('--purge', action="store_true", dest='purge', default=False,
1488 help='purge record from database')
1489 def delete(self, name, purge):
1490 """Marks instance types / flavors as deleted"""
1491 try:
1492 if purge:
1493 volume_types.purge(self.context, name)
1494 verb = "purged"
1495 else:
1496 volume_types.destroy(self.context, name)
1497 verb = "deleted"
1498 except exception.ApiError:
1499 print "Valid volume type name is required"
1500 sys.exit(1)
1501 except exception.DBError, e:
1502 print "DB Error: %s" % e
1503 sys.exit(2)
1504 except:
1505 sys.exit(3)
1506 else:
1507 print "%s %s" % (name, verb)
1508
1509 @args('--all', dest='all', action="store_true", default=False,
1510 help='Show all drives (including invisible)')
1511 @args('--name', dest='name', metavar="<name>",
1512 help='Show only specified drive')
1513 def list(self, all=False, name=None):
1514 """Describe all available VSA drive types (or particular one)."""
1515
1516 all = False if all in ["--all", False, "False"] else True
1517
1518 search_opts = {'extra_specs': {'type': 'vsa_drive'}}
1519 if name is not None:
1520 search_opts['extra_specs']['name'] = name
1521
1522 if all == False:
1523 search_opts['extra_specs']['visible'] = '1'
1524
1525 drives = volume_types.get_all_types(self.context,
1526 search_opts=search_opts)
1527 self._list(drives)
1528
1529 @args('--name', dest='name', metavar="<name>", help='Drive name')
1530 @args('--type', dest='type', metavar="<type>",
1531 help='Drive type (SATA, SAS, SSD, etc.)')
1532 @args('--size', dest='size_gb', metavar="<gb>", help='Drive size in GB')
1533 @args('--rpm', dest='rpm', metavar="<rpm>", help='RPM')
1534 @args('--capabilities', dest='capabilities', default=None,
1535 metavar="<string>", help='Different capabilities')
1536 @args('--visible', dest='visible',
1537 metavar="<show|hide>", help='Show or hide drive')
1538 def update(self, name, type=None, size_gb=None, rpm=None,
1539 capabilities=None, visible=None):
1540 """Update drive type."""
1541
1542 volume_type = volume_types.get_volume_type_by_name(self.context, name)
1543
1544 extra_specs = {'type': 'vsa_drive'}
1545
1546 if type:
1547 extra_specs['drive_type'] = type
1548
1549 if size_gb:
1550 extra_specs['drive_size'] = size_gb
1551
1552 if rpm:
1553 extra_specs['drive_rpm'] = rpm
1554
1555 if capabilities:
1556 extra_specs['capabilities'] = capabilities
1557
1558 if visible is not None:
1559 if visible in ["show", True, "True"]:
1560 extra_specs['visible'] = True
1561 elif visible in ["hide", False, "False"]:
1562 extra_specs['visible'] = False
1563 else:
1564 raise ValueError(_('visible parameter should be set to '\
1565 'show or hide'))
1566
1567 db.api.volume_type_extra_specs_update_or_create(self.context,
1568 volume_type['id'],
1569 extra_specs)
1570 result = volume_types.get_volume_type_by_name(self.context, name)
1571 self._list({name: result})
1572
1573
 class VolumeCommands(object):
     """Methods for dealing with a cloud in an odd state"""
 
@@ -1483,6 +1957,7 @@
     ('agent', AgentBuildCommands),
     ('config', ConfigCommands),
     ('db', DbCommands),
+    ('drive', VsaDriveTypeCommands),
     ('fixed', FixedIpCommands),
     ('flavor', InstanceTypeCommands),
     ('floating', FloatingIpCommands),
@@ -1498,7 +1973,8 @@
     ('version', VersionCommands),
     ('vm', VmCommands),
     ('volume', VolumeCommands),
-    ('vpn', VpnCommands)]
+    ('vpn', VpnCommands),
+    ('vsa', VsaCommands)]
 
 
 def lazy_match(name, key_value_tuples):
 
=== added file 'bin/nova-vsa'
--- bin/nova-vsa 1970-01-01 00:00:00 +0000
+++ bin/nova-vsa 2011-08-26 22:09:26 +0000
@@ -0,0 +1,49 @@
1#!/usr/bin/env python
2# vim: tabstop=4 shiftwidth=4 softtabstop=4
3
4# Copyright (c) 2011 Zadara Storage Inc.
5# Copyright (c) 2011 OpenStack LLC.
6#
7#
8# Licensed under the Apache License, Version 2.0 (the "License"); you may
9# not use this file except in compliance with the License. You may obtain
10# a copy of the License at
11#
12# http://www.apache.org/licenses/LICENSE-2.0
13#
14# Unless required by applicable law or agreed to in writing, software
15# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
16# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
17# License for the specific language governing permissions and limitations
18# under the License.
19
20"""Starter script for Nova VSA."""
21
22import eventlet
23eventlet.monkey_patch()
24
25import os
26import sys
27
28# If ../nova/__init__.py exists, add ../ to Python search path, so that
29# it will override what happens to be installed in /usr/(local/)lib/python...
30possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
31 os.pardir,
32 os.pardir))
33if os.path.exists(os.path.join(possible_topdir, 'nova', '__init__.py')):
34 sys.path.insert(0, possible_topdir)
35
36
37from nova import flags
38from nova import log as logging
39from nova import service
40from nova import utils
41
42if __name__ == '__main__':
43 utils.default_flagfile()
44 flags.FLAGS(sys.argv)
45 logging.setup()
46 utils.monkey_patch()
47 server = service.Service.create(binary='nova-vsa')
48 service.serve(server)
49 service.wait()
=== added file 'nova/api/openstack/contrib/virtual_storage_arrays.py'
--- nova/api/openstack/contrib/virtual_storage_arrays.py 1970-01-01 00:00:00 +0000
+++ nova/api/openstack/contrib/virtual_storage_arrays.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,606 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18""" The virtul storage array extension"""
19
20
21from webob import exc
22
23from nova import vsa
24from nova import volume
25from nova import compute
26from nova import network
27from nova import db
28from nova import quota
29from nova import exception
30from nova import log as logging
31from nova.api.openstack import common
32from nova.api.openstack import extensions
33from nova.api.openstack import faults
34from nova.api.openstack import wsgi
35from nova.api.openstack import servers
36from nova.api.openstack.contrib import volumes
37from nova.compute import instance_types
38
39from nova import flags
40FLAGS = flags.FLAGS
41
42LOG = logging.getLogger("nova.api.vsa")
43
44
45def _vsa_view(context, vsa, details=False, instances=None):
46 """Map keys for vsa summary/detailed view."""
47 d = {}
48
49 d['id'] = vsa.get('id')
50 d['name'] = vsa.get('name')
51 d['displayName'] = vsa.get('display_name')
52 d['displayDescription'] = vsa.get('display_description')
53
54 d['createTime'] = vsa.get('created_at')
55 d['status'] = vsa.get('status')
56
57 if 'vsa_instance_type' in vsa:
58 d['vcType'] = vsa['vsa_instance_type'].get('name', None)
59 else:
60 d['vcType'] = vsa['instance_type_id']
61
62 d['vcCount'] = vsa.get('vc_count')
63 d['driveCount'] = vsa.get('vol_count')
64
65 d['ipAddress'] = None
66 for instance in instances:
67 fixed_addr = None
68 floating_addr = None
69 if instance['fixed_ips']:
70 fixed = instance['fixed_ips'][0]
71 fixed_addr = fixed['address']
72 if fixed['floating_ips']:
73 floating_addr = fixed['floating_ips'][0]['address']
74
75 if floating_addr:
76 d['ipAddress'] = floating_addr
77 break
78 else:
79 d['ipAddress'] = d['ipAddress'] or fixed_addr
80
81 return d
82
83
84class VsaController(object):
85 """The Virtual Storage Array API controller for the OpenStack API."""
86
87 _serialization_metadata = {
88 'application/xml': {
89 "attributes": {
90 "vsa": [
91 "id",
92 "name",
93 "displayName",
94 "displayDescription",
95 "createTime",
96 "status",
97 "vcType",
98 "vcCount",
99 "driveCount",
100 "ipAddress",
101 ]}}}
102
103 def __init__(self):
104 self.vsa_api = vsa.API()
105 self.compute_api = compute.API()
106 self.network_api = network.API()
107 super(VsaController, self).__init__()
108
109 def _get_instances_by_vsa_id(self, context, id):
110 return self.compute_api.get_all(context,
111 search_opts={'metadata': dict(vsa_id=str(id))})
112
113 def _items(self, req, details):
114 """Return summary or detailed list of VSAs."""
115 context = req.environ['nova.context']
116 vsas = self.vsa_api.get_all(context)
117 limited_list = common.limited(vsas, req)
118
119 vsa_list = []
120 for vsa in limited_list:
121 instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
122 vsa_list.append(_vsa_view(context, vsa, details, instances))
123 return {'vsaSet': vsa_list}
124
125 def index(self, req):
126 """Return a short list of VSAs."""
127 return self._items(req, details=False)
128
129 def detail(self, req):
130 """Return a detailed list of VSAs."""
131 return self._items(req, details=True)
132
133 def show(self, req, id):
134 """Return data about the given VSA."""
135 context = req.environ['nova.context']
136
137 try:
138 vsa = self.vsa_api.get(context, vsa_id=id)
139 except exception.NotFound:
140 return faults.Fault(exc.HTTPNotFound())
141
142 instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
143 return {'vsa': _vsa_view(context, vsa, True, instances)}
144
145 def create(self, req, body):
146 """Create a new VSA."""
147 context = req.environ['nova.context']
148
149 if not body or 'vsa' not in body:
150 LOG.debug(_("No body provided"), context=context)
151 return faults.Fault(exc.HTTPUnprocessableEntity())
152
153 vsa = body['vsa']
154
155 display_name = vsa.get('displayName')
156 vc_type = vsa.get('vcType', FLAGS.default_vsa_instance_type)
157 try:
158 instance_type = instance_types.get_instance_type_by_name(vc_type)
159 except exception.NotFound:
160 return faults.Fault(exc.HTTPNotFound())
161
162 LOG.audit(_("Create VSA %(display_name)s of type %(vc_type)s"),
163 locals(), context=context)
164
165 args = dict(display_name=display_name,
166 display_description=vsa.get('displayDescription'),
167 instance_type=instance_type,
168 storage=vsa.get('storage'),
169 shared=vsa.get('shared'),
170 availability_zone=vsa.get('placement', {}).\
171 get('AvailabilityZone'))
172
173 vsa = self.vsa_api.create(context, **args)
174
175 instances = self._get_instances_by_vsa_id(context, vsa.get('id'))
176 return {'vsa': _vsa_view(context, vsa, True, instances)}
177
178 def delete(self, req, id):
179 """Delete a VSA."""
180 context = req.environ['nova.context']
181
182 LOG.audit(_("Delete VSA with id: %s"), id, context=context)
183
184 try:
185 self.vsa_api.delete(context, vsa_id=id)
186 except exception.NotFound:
187 return faults.Fault(exc.HTTPNotFound())
188
189 def associate_address(self, req, id, body):
190 """ /zadr-vsa/{vsa_id}/associate_address
191    auto or manually associate an IP with the VSA
192 """
193 context = req.environ['nova.context']
194
195 if body is None:
196 ip = 'auto'
197 else:
198 ip = body.get('ipAddress', 'auto')
199
200 LOG.audit(_("Associate address %(ip)s to VSA %(id)s"),
201 locals(), context=context)
202
203 try:
204 instances = self._get_instances_by_vsa_id(context, id)
205 if instances is None or len(instances) == 0:
206 return faults.Fault(exc.HTTPNotFound())
207
208 for instance in instances:
209 self.network_api.allocate_for_instance(context, instance,
210 vpn=False)
211 # Placeholder
212 return
213
214 except exception.NotFound:
215 return faults.Fault(exc.HTTPNotFound())
216
217 def disassociate_address(self, req, id, body):
218 """ /zadr-vsa/{vsa_id}/disassociate_address
219    auto or manually disassociate an IP from the VSA
220 """
221 context = req.environ['nova.context']
222
223 if body is None:
224 ip = 'auto'
225 else:
226 ip = body.get('ipAddress', 'auto')
227
228 LOG.audit(_("Disassociate address from VSA %(id)s"),
229 locals(), context=context)
230 # Placeholder
231
232
233class VsaVolumeDriveController(volumes.VolumeController):
234 """The base class for VSA volumes & drives.
235
236 A child resource of the VSA object. Allows operations with
237 volumes and drives created to/from particular VSA
238
239 """
240
241 _serialization_metadata = {
242 'application/xml': {
243 "attributes": {
244 "volume": [
245 "id",
246 "name",
247 "status",
248 "size",
249 "availabilityZone",
250 "createdAt",
251 "displayName",
252 "displayDescription",
253 "vsaId",
254 ]}}}
255
256 def __init__(self):
257 self.volume_api = volume.API()
258 self.vsa_api = vsa.API()
259 super(VsaVolumeDriveController, self).__init__()
260
261 def _translation(self, context, vol, vsa_id, details):
262 if details:
263 translation = volumes._translate_volume_detail_view
264 else:
265 translation = volumes._translate_volume_summary_view
266
267 d = translation(context, vol)
268 d['vsaId'] = vsa_id
269 d['name'] = vol['name']
270 return d
271
272 def _check_volume_ownership(self, context, vsa_id, id):
273 obj = self.object
274 try:
275 volume_ref = self.volume_api.get(context, volume_id=id)
276 except exception.NotFound:
277 LOG.error(_("%(obj)s with ID %(id)s not found"), locals())
278 raise
279
280 own_vsa_id = self.volume_api.get_volume_metadata_value(volume_ref,
281 self.direction)
282 if own_vsa_id != vsa_id:
283 LOG.error(_("%(obj)s with ID %(id)s belongs to VSA %(own_vsa_id)s"\
284 " and not to VSA %(vsa_id)s."), locals())
285 raise exception.Invalid()
286
287 def _items(self, req, vsa_id, details):
288 """Return summary or detailed list of volumes for particular VSA."""
289 context = req.environ['nova.context']
290
291 vols = self.volume_api.get_all(context,
292 search_opts={'metadata': {self.direction: str(vsa_id)}})
293 limited_list = common.limited(vols, req)
294
295 res = [self._translation(context, vol, vsa_id, details) \
296 for vol in limited_list]
297
298 return {self.objects: res}
299
300 def index(self, req, vsa_id):
301 """Return a short list of volumes created from particular VSA."""
302 LOG.audit(_("Index. vsa_id=%(vsa_id)s"), locals())
303 return self._items(req, vsa_id, details=False)
304
305 def detail(self, req, vsa_id):
306 """Return a detailed list of volumes created from particular VSA."""
307 LOG.audit(_("Detail. vsa_id=%(vsa_id)s"), locals())
308 return self._items(req, vsa_id, details=True)
309
310 def create(self, req, vsa_id, body):
311 """Create a new volume from VSA."""
312 LOG.audit(_("Create. vsa_id=%(vsa_id)s, body=%(body)s"), locals())
313 context = req.environ['nova.context']
314
315 if not body:
316 return faults.Fault(exc.HTTPUnprocessableEntity())
317
318 vol = body[self.object]
319 size = vol['size']
320 LOG.audit(_("Create volume of %(size)s GB from VSA ID %(vsa_id)s"),
321 locals(), context=context)
322 try:
323 # create is supported for volumes only (drives created through VSA)
324 volume_type = self.vsa_api.get_vsa_volume_type(context)
325 except exception.NotFound:
326 return faults.Fault(exc.HTTPNotFound())
327
328 new_volume = self.volume_api.create(context,
329 size,
330 None,
331 vol.get('displayName'),
332 vol.get('displayDescription'),
333 volume_type=volume_type,
334 metadata=dict(from_vsa_id=str(vsa_id)))
335
336 return {self.object: self._translation(context, new_volume,
337 vsa_id, True)}
338
339 def update(self, req, vsa_id, id, body):
340 """Update a volume."""
341 context = req.environ['nova.context']
342
343 try:
344 self._check_volume_ownership(context, vsa_id, id)
345 except exception.NotFound:
346 return faults.Fault(exc.HTTPNotFound())
347 except exception.Invalid:
348 return faults.Fault(exc.HTTPBadRequest())
349
350 vol = body[self.object]
351 updatable_fields = [{'displayName': 'display_name'},
352 {'displayDescription': 'display_description'},
353 {'status': 'status'},
354 {'providerLocation': 'provider_location'},
355 {'providerAuth': 'provider_auth'}]
356 changes = {}
357 for field in updatable_fields:
358 key = field.keys()[0]
359 val = field[key]
360 if key in vol:
361 changes[val] = vol[key]
362
363 obj = self.object
364 LOG.audit(_("Update %(obj)s with id: %(id)s, changes: %(changes)s"),
365 locals(), context=context)
366
367 try:
368 self.volume_api.update(context, volume_id=id, fields=changes)
369 except exception.NotFound:
370 return faults.Fault(exc.HTTPNotFound())
371 return exc.HTTPAccepted()
372
373 def delete(self, req, vsa_id, id):
374 """Delete a volume."""
375 context = req.environ['nova.context']
376
377 LOG.audit(_("Delete. vsa_id=%(vsa_id)s, id=%(id)s"), locals())
378
379 try:
380 self._check_volume_ownership(context, vsa_id, id)
381 except exception.NotFound:
382 return faults.Fault(exc.HTTPNotFound())
383 except exception.Invalid:
384 return faults.Fault(exc.HTTPBadRequest())
385
386 return super(VsaVolumeDriveController, self).delete(req, id)
387
388 def show(self, req, vsa_id, id):
389 """Return data about the given volume."""
390 context = req.environ['nova.context']
391
392 LOG.audit(_("Show. vsa_id=%(vsa_id)s, id=%(id)s"), locals())
393
394 try:
395 self._check_volume_ownership(context, vsa_id, id)
396 except exception.NotFound:
397 return faults.Fault(exc.HTTPNotFound())
398 except exception.Invalid:
399 return faults.Fault(exc.HTTPBadRequest())
400
401 return super(VsaVolumeDriveController, self).show(req, id)
402
403
404class VsaVolumeController(VsaVolumeDriveController):
405    """The VSA volume API controller for the OpenStack API.
406
407 A child resource of the VSA object. Allows operations with volumes created
408 by particular VSA
409
410 """
411
412 def __init__(self):
413 self.direction = 'from_vsa_id'
414 self.objects = 'volumes'
415 self.object = 'volume'
416 super(VsaVolumeController, self).__init__()
417
418
419class VsaDriveController(VsaVolumeDriveController):
420    """The VSA Drive API controller for the OpenStack API.
421
422 A child resource of the VSA object. Allows operations with drives created
423 for particular VSA
424
425 """
426
427 def __init__(self):
428 self.direction = 'to_vsa_id'
429 self.objects = 'drives'
430 self.object = 'drive'
431 super(VsaDriveController, self).__init__()
432
433 def create(self, req, vsa_id, body):
434 """Create a new drive for VSA. Should be done through VSA APIs"""
435 return faults.Fault(exc.HTTPBadRequest())
436
437 def update(self, req, vsa_id, id, body):
438 """Update a drive. Should be done through VSA APIs"""
439 return faults.Fault(exc.HTTPBadRequest())
440
441 def delete(self, req, vsa_id, id):
442        """Delete a drive. Should be done through VSA APIs"""
443 return faults.Fault(exc.HTTPBadRequest())
444
445
446class VsaVPoolController(object):
447 """The vPool VSA API controller for the OpenStack API."""
448
449 _serialization_metadata = {
450 'application/xml': {
451 "attributes": {
452 "vpool": [
453 "id",
454 "vsaId",
455 "name",
456 "displayName",
457 "displayDescription",
458 "driveCount",
459 "driveIds",
460 "protection",
461 "stripeSize",
462 "stripeWidth",
463 "createTime",
464 "status",
465 ]}}}
466
467 def __init__(self):
468 self.vsa_api = vsa.API()
469 super(VsaVPoolController, self).__init__()
470
471 def index(self, req, vsa_id):
472 """Return a short list of vpools created from particular VSA."""
473 return {'vpools': []}
474
475 def create(self, req, vsa_id, body):
476 """Create a new vPool for VSA."""
477 return faults.Fault(exc.HTTPBadRequest())
478
479 def update(self, req, vsa_id, id, body):
480 """Update vPool parameters."""
481 return faults.Fault(exc.HTTPBadRequest())
482
483 def delete(self, req, vsa_id, id):
484 """Delete a vPool."""
485 return faults.Fault(exc.HTTPBadRequest())
486
487 def show(self, req, vsa_id, id):
488 """Return data about the given vPool."""
489 return faults.Fault(exc.HTTPBadRequest())
490
491
492class VsaVCController(servers.ControllerV11):
493 """The VSA Virtual Controller API controller for the OpenStack API."""
494
495 def __init__(self):
496 self.vsa_api = vsa.API()
497 self.compute_api = compute.API()
498 self.vsa_id = None # VP-TODO: temporary ugly hack
499 super(VsaVCController, self).__init__()
500
501 def _get_servers(self, req, is_detail):
502 """Returns a list of servers, taking into account any search
503 options specified.
504 """
505
506 if self.vsa_id is None:
507            return super(VsaVCController, self)._get_servers(req, is_detail)
508
509 context = req.environ['nova.context']
510
511 search_opts = {'metadata': dict(vsa_id=str(self.vsa_id))}
512 instance_list = self.compute_api.get_all(
513 context, search_opts=search_opts)
514
515 limited_list = self._limit_items(instance_list, req)
516 servers = [self._build_view(req, inst, is_detail)['server']
517 for inst in limited_list]
518 return dict(servers=servers)
519
520 def index(self, req, vsa_id):
521 """Return list of instances for particular VSA."""
522
523 LOG.audit(_("Index instances for VSA %s"), vsa_id)
524
525 self.vsa_id = vsa_id # VP-TODO: temporary ugly hack
526 result = super(VsaVCController, self).detail(req)
527 self.vsa_id = None
528 return result
529
530 def create(self, req, vsa_id, body):
531 """Create a new instance for VSA."""
532 return faults.Fault(exc.HTTPBadRequest())
533
534 def update(self, req, vsa_id, id, body):
535 """Update VSA instance."""
536 return faults.Fault(exc.HTTPBadRequest())
537
538 def delete(self, req, vsa_id, id):
539 """Delete VSA instance."""
540 return faults.Fault(exc.HTTPBadRequest())
541
542 def show(self, req, vsa_id, id):
543 """Return data about the given instance."""
544 return super(VsaVCController, self).show(req, id)
545
546
547class Virtual_storage_arrays(extensions.ExtensionDescriptor):
548
549 def get_name(self):
550 return "VSAs"
551
552 def get_alias(self):
553 return "zadr-vsa"
554
555 def get_description(self):
556 return "Virtual Storage Arrays support"
557
558 def get_namespace(self):
559 return "http://docs.openstack.org/ext/vsa/api/v1.1"
560
561 def get_updated(self):
562 return "2011-08-25T00:00:00+00:00"
563
564 def get_resources(self):
565 resources = []
566 res = extensions.ResourceExtension(
567 'zadr-vsa',
568 VsaController(),
569 collection_actions={'detail': 'GET'},
570 member_actions={'add_capacity': 'POST',
571 'remove_capacity': 'POST',
572 'associate_address': 'POST',
573 'disassociate_address': 'POST'})
574 resources.append(res)
575
576 res = extensions.ResourceExtension('volumes',
577 VsaVolumeController(),
578 collection_actions={'detail': 'GET'},
579 parent=dict(
580 member_name='vsa',
581 collection_name='zadr-vsa'))
582 resources.append(res)
583
584 res = extensions.ResourceExtension('drives',
585 VsaDriveController(),
586 collection_actions={'detail': 'GET'},
587 parent=dict(
588 member_name='vsa',
589 collection_name='zadr-vsa'))
590 resources.append(res)
591
592 res = extensions.ResourceExtension('vpools',
593 VsaVPoolController(),
594 parent=dict(
595 member_name='vsa',
596 collection_name='zadr-vsa'))
597 resources.append(res)
598
599 res = extensions.ResourceExtension('instances',
600 VsaVCController(),
601 parent=dict(
602 member_name='vsa',
603 collection_name='zadr-vsa'))
604 resources.append(res)
605
606 return resources
0607
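The address-selection loop in `_vsa_view` above encodes a small policy: return the first floating IP found across the VSA's instances, otherwise fall back to the first fixed IP seen. A standalone sketch of that policy (the instance dicts below are simplified stand-ins for the structures the controller actually receives):

```python
def pick_vsa_address(instances):
    """Prefer the first floating IP across all instances; else first fixed IP."""
    chosen = None
    for instance in instances:
        fixed_ips = instance.get('fixed_ips') or []
        if not fixed_ips:
            continue
        fixed = fixed_ips[0]
        if fixed.get('floating_ips'):
            # A floating IP wins immediately, mirroring the `break` above.
            return fixed['floating_ips'][0]['address']
        # Remember the first fixed IP as a fallback, as the `or` above does.
        chosen = chosen or fixed['address']
    return chosen
```

Note that like the original loop, instances without fixed IPs are simply skipped rather than treated as errors.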
=== modified file 'nova/db/api.py'
--- nova/db/api.py 2011-08-25 16:14:44 +0000
+++ nova/db/api.py 2011-08-26 22:09:26 +0000
@@ -49,7 +49,8 @@
49 'Template string to be used to generate instance names')49 'Template string to be used to generate instance names')
50flags.DEFINE_string('snapshot_name_template', 'snapshot-%08x',50flags.DEFINE_string('snapshot_name_template', 'snapshot-%08x',
51 'Template string to be used to generate snapshot names')51 'Template string to be used to generate snapshot names')
5252flags.DEFINE_string('vsa_name_template', 'vsa-%08x',
53 'Template string to be used to generate VSA names')
5354
54IMPL = utils.LazyPluggable(FLAGS['db_backend'],55IMPL = utils.LazyPluggable(FLAGS['db_backend'],
55 sqlalchemy='nova.db.sqlalchemy.api')56 sqlalchemy='nova.db.sqlalchemy.api')
@@ -1512,3 +1513,36 @@
1512 key/value pairs specified in the extra specs dict argument"""1513 key/value pairs specified in the extra specs dict argument"""
1513 IMPL.volume_type_extra_specs_update_or_create(context, volume_type_id,1514 IMPL.volume_type_extra_specs_update_or_create(context, volume_type_id,
1514 extra_specs)1515 extra_specs)
1516
1517
1518####################
1519
1520
1521def vsa_create(context, values):
1522 """Creates Virtual Storage Array record."""
1523 return IMPL.vsa_create(context, values)
1524
1525
1526def vsa_update(context, vsa_id, values):
1527 """Updates Virtual Storage Array record."""
1528 return IMPL.vsa_update(context, vsa_id, values)
1529
1530
1531def vsa_destroy(context, vsa_id):
1532 """Deletes Virtual Storage Array record."""
1533 return IMPL.vsa_destroy(context, vsa_id)
1534
1535
1536def vsa_get(context, vsa_id):
1537 """Get Virtual Storage Array record by ID."""
1538 return IMPL.vsa_get(context, vsa_id)
1539
1540
1541def vsa_get_all(context):
1542 """Get all Virtual Storage Array records."""
1543 return IMPL.vsa_get_all(context)
1544
1545
1546def vsa_get_all_by_project(context, project_id):
1547 """Get all Virtual Storage Array records by project ID."""
1548 return IMPL.vsa_get_all_by_project(context, project_id)
15151549
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-08-25 16:14:44 +0000
+++ nova/db/sqlalchemy/api.py 2011-08-26 22:09:26 +0000
@@ -3837,3 +3837,105 @@
3837 "deleted": 0})3837 "deleted": 0})
3838 spec_ref.save(session=session)3838 spec_ref.save(session=session)
3839 return specs3839 return specs
3840
3841
3842 ####################
3843
3844
3845@require_admin_context
3846def vsa_create(context, values):
3847 """
3848 Creates Virtual Storage Array record.
3849 """
3850 try:
3851 vsa_ref = models.VirtualStorageArray()
3852 vsa_ref.update(values)
3853 vsa_ref.save()
3854 except Exception, e:
3855 raise exception.DBError(e)
3856 return vsa_ref
3857
3858
3859@require_admin_context
3860def vsa_update(context, vsa_id, values):
3861 """
3862 Updates Virtual Storage Array record.
3863 """
3864 session = get_session()
3865 with session.begin():
3866 vsa_ref = vsa_get(context, vsa_id, session=session)
3867 vsa_ref.update(values)
3868 vsa_ref.save(session=session)
3869 return vsa_ref
3870
3871
3872@require_admin_context
3873def vsa_destroy(context, vsa_id):
3874 """
3875 Deletes Virtual Storage Array record.
3876 """
3877 session = get_session()
3878 with session.begin():
3879 session.query(models.VirtualStorageArray).\
3880 filter_by(id=vsa_id).\
3881 update({'deleted': True,
3882 'deleted_at': utils.utcnow(),
3883 'updated_at': literal_column('updated_at')})
3884
3885
3886@require_context
3887def vsa_get(context, vsa_id, session=None):
3888 """
3889 Get Virtual Storage Array record by ID.
3890 """
3891 if not session:
3892 session = get_session()
3893 result = None
3894
3895 if is_admin_context(context):
3896 result = session.query(models.VirtualStorageArray).\
3897 options(joinedload('vsa_instance_type')).\
3898 filter_by(id=vsa_id).\
3899 filter_by(deleted=can_read_deleted(context)).\
3900 first()
3901 elif is_user_context(context):
3902 result = session.query(models.VirtualStorageArray).\
3903 options(joinedload('vsa_instance_type')).\
3904 filter_by(project_id=context.project_id).\
3905 filter_by(id=vsa_id).\
3906 filter_by(deleted=False).\
3907 first()
3908 if not result:
3909 raise exception.VirtualStorageArrayNotFound(id=vsa_id)
3910
3911 return result
3912
3913
3914@require_admin_context
3915def vsa_get_all(context):
3916 """
3917 Get all Virtual Storage Array records.
3918 """
3919 session = get_session()
3920 return session.query(models.VirtualStorageArray).\
3921 options(joinedload('vsa_instance_type')).\
3922 filter_by(deleted=can_read_deleted(context)).\
3923 all()
3924
3925
3926@require_context
3927def vsa_get_all_by_project(context, project_id):
3928 """
3929 Get all Virtual Storage Array records by project ID.
3930 """
3931 authorize_project_context(context, project_id)
3932
3933 session = get_session()
3934 return session.query(models.VirtualStorageArray).\
3935 options(joinedload('vsa_instance_type')).\
3936 filter_by(project_id=project_id).\
3937 filter_by(deleted=can_read_deleted(context)).\
3938 all()
3939
3940
3941 ####################
38403942
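`vsa_destroy` above follows Nova's soft-delete convention: the row is flagged `deleted` and timestamped rather than removed, so reads that filter on `deleted` stop seeing it while the record survives for auditing. A minimal standalone illustration of the pattern using sqlite3 (table reduced to the relevant columns):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE virtual_storage_arrays '
             '(id INTEGER PRIMARY KEY, status TEXT, '
             'deleted INTEGER DEFAULT 0, deleted_at TEXT)')
conn.execute("INSERT INTO virtual_storage_arrays (id, status) "
             "VALUES (1, 'created')")

def vsa_destroy(conn, vsa_id):
    """Soft-delete: UPDATE the deleted flag instead of issuing a DELETE."""
    conn.execute('UPDATE virtual_storage_arrays '
                 'SET deleted = 1, deleted_at = ? WHERE id = ?',
                 (datetime.now(timezone.utc).isoformat(), vsa_id))

vsa_destroy(conn, 1)
# Live queries exclude soft-deleted rows...
live = conn.execute('SELECT COUNT(*) FROM virtual_storage_arrays '
                    'WHERE deleted = 0').fetchone()[0]
# ...but the row itself is retained.
total = conn.execute('SELECT COUNT(*) FROM virtual_storage_arrays').fetchone()[0]
```

This is also why `vsa_get` and `vsa_get_all*` above filter on `deleted` (via `can_read_deleted`) rather than assuming destroyed rows are gone.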
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py'
--- nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/043_add_vsa_data.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,75 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table
19from sqlalchemy import Text, Boolean, ForeignKey
20
21from nova import log as logging
22
23meta = MetaData()
24
25#
26# New Tables
27#
28
29virtual_storage_arrays = Table('virtual_storage_arrays', meta,
30 Column('created_at', DateTime(timezone=False)),
31 Column('updated_at', DateTime(timezone=False)),
32 Column('deleted_at', DateTime(timezone=False)),
33 Column('deleted', Boolean(create_constraint=True, name=None)),
34 Column('id', Integer(), primary_key=True, nullable=False),
35 Column('display_name',
36 String(length=255, convert_unicode=False, assert_unicode=None,
37 unicode_error=None, _warn_on_bytestring=False)),
38 Column('display_description',
39 String(length=255, convert_unicode=False, assert_unicode=None,
40 unicode_error=None, _warn_on_bytestring=False)),
41 Column('project_id',
42 String(length=255, convert_unicode=False, assert_unicode=None,
43 unicode_error=None, _warn_on_bytestring=False)),
44 Column('availability_zone',
45 String(length=255, convert_unicode=False, assert_unicode=None,
46 unicode_error=None, _warn_on_bytestring=False)),
47 Column('instance_type_id', Integer(), nullable=False),
48 Column('image_ref',
49 String(length=255, convert_unicode=False, assert_unicode=None,
50 unicode_error=None, _warn_on_bytestring=False)),
51 Column('vc_count', Integer(), nullable=False),
52 Column('vol_count', Integer(), nullable=False),
53 Column('status',
54 String(length=255, convert_unicode=False, assert_unicode=None,
55 unicode_error=None, _warn_on_bytestring=False)),
56 )
57
58
59def upgrade(migrate_engine):
60 # Upgrade operations go here. Don't create your own engine;
61 # bind migrate_engine to your metadata
62 meta.bind = migrate_engine
63
64 try:
65 virtual_storage_arrays.create()
66 except Exception:
67        logging.info(repr(virtual_storage_arrays))
68 logging.exception('Exception while creating table')
69 raise
70
71
72def downgrade(migrate_engine):
73 meta.bind = migrate_engine
74
75 virtual_storage_arrays.drop()
076

=== modified file 'nova/db/sqlalchemy/migration.py'
--- nova/db/sqlalchemy/migration.py 2011-08-24 23:41:14 +0000
+++ nova/db/sqlalchemy/migration.py 2011-08-26 22:09:26 +0000
@@ -64,6 +64,7 @@
64 'users', 'user_project_association',64 'users', 'user_project_association',
65 'user_project_role_association',65 'user_project_role_association',
66 'user_role_association',66 'user_role_association',
67 'virtual_storage_arrays',
67 'volumes', 'volume_metadata',68 'volumes', 'volume_metadata',
68 'volume_types', 'volume_type_extra_specs'):69 'volume_types', 'volume_type_extra_specs'):
69 assert table in meta.tables70 assert table in meta.tables
7071
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-08-24 16:10:28 +0000
+++ nova/db/sqlalchemy/models.py 2011-08-26 22:09:26 +0000
@@ -250,6 +250,32 @@
250 # 'shutdown', 'shutoff', 'crashed'])250 # 'shutdown', 'shutoff', 'crashed'])
251251
252252
253class VirtualStorageArray(BASE, NovaBase):
254 """
255 Represents a virtual storage array supplying block storage to instances.
256 """
257 __tablename__ = 'virtual_storage_arrays'
258
259 id = Column(Integer, primary_key=True, autoincrement=True)
260
261 @property
262 def name(self):
263 return FLAGS.vsa_name_template % self.id
264
265 # User editable field for display in user-facing UIs
266 display_name = Column(String(255))
267 display_description = Column(String(255))
268
269 project_id = Column(String(255))
270 availability_zone = Column(String(255))
271
272 instance_type_id = Column(Integer, ForeignKey('instance_types.id'))
273 image_ref = Column(String(255))
274 vc_count = Column(Integer, default=0) # number of requested VC instances
275 vol_count = Column(Integer, default=0) # total number of BE volumes
276 status = Column(String(255))
277
278
253class InstanceActions(BASE, NovaBase):279class InstanceActions(BASE, NovaBase):
254 """Represents a guest VM's actions and results"""280 """Represents a guest VM's actions and results"""
255 __tablename__ = "instance_actions"281 __tablename__ = "instance_actions"
@@ -279,6 +305,12 @@
279 primaryjoin='and_(Instance.instance_type_id == '305 primaryjoin='and_(Instance.instance_type_id == '
280 'InstanceTypes.id)')306 'InstanceTypes.id)')
281307
308 vsas = relationship(VirtualStorageArray,
309 backref=backref('vsa_instance_type', uselist=False),
310 foreign_keys=id,
311 primaryjoin='and_(VirtualStorageArray.instance_type_id'
312 ' == InstanceTypes.id)')
313
282314
283class Volume(BASE, NovaBase):315class Volume(BASE, NovaBase):
284 """Represents a block storage device that can be attached to a vm."""316 """Represents a block storage device that can be attached to a vm."""
@@ -848,7 +880,8 @@
848 SecurityGroupInstanceAssociation, AuthToken, User,880 SecurityGroupInstanceAssociation, AuthToken, User,
849 Project, Certificate, ConsolePool, Console, Zone,881 Project, Certificate, ConsolePool, Console, Zone,
850 VolumeMetadata, VolumeTypes, VolumeTypeExtraSpecs,882 VolumeMetadata, VolumeTypes, VolumeTypeExtraSpecs,
851 AgentBuild, InstanceMetadata, InstanceTypeExtraSpecs, Migration)883 AgentBuild, InstanceMetadata, InstanceTypeExtraSpecs, Migration,
884 VirtualStorageArray)
852 engine = create_engine(FLAGS.sql_connection, echo=False)885 engine = create_engine(FLAGS.sql_connection, echo=False)
853 for model in models:886 for model in models:
854 model.metadata.create_all(engine)887 model.metadata.create_all(engine)
855888
=== modified file 'nova/exception.py'
--- nova/exception.py 2011-08-24 16:10:28 +0000
+++ nova/exception.py 2011-08-26 22:09:26 +0000
@@ -783,6 +783,18 @@
783 message = _("Could not load paste app '%(name)s' from %(path)s")783 message = _("Could not load paste app '%(name)s' from %(path)s")
784784
785785
786class VSANovaAccessParamNotFound(Invalid):
787 message = _("Nova access parameters were not specified.")
788
789
790class VirtualStorageArrayNotFound(NotFound):
791 message = _("Virtual Storage Array %(id)d could not be found.")
792
793
794class VirtualStorageArrayNotFoundByName(NotFound):
795 message = _("Virtual Storage Array %(name)s could not be found.")
796
797
786class CannotResizeToSameSize(NovaException):798class CannotResizeToSameSize(NovaException):
787 message = _("When resizing, instances must change size!")799 message = _("When resizing, instances must change size!")
788800
789801
=== modified file 'nova/flags.py'
--- nova/flags.py 2011-08-23 19:06:25 +0000
+++ nova/flags.py 2011-08-26 22:09:26 +0000
@@ -292,6 +292,7 @@
292 in the form "http://127.0.0.1:8000"')292 in the form "http://127.0.0.1:8000"')
293DEFINE_string('ajax_console_proxy_port',293DEFINE_string('ajax_console_proxy_port',
294 8000, 'port that ajax_console_proxy binds')294 8000, 'port that ajax_console_proxy binds')
295DEFINE_string('vsa_topic', 'vsa', 'the topic that nova-vsa service listens on')
295DEFINE_bool('verbose', False, 'show debug output')296DEFINE_bool('verbose', False, 'show debug output')
296DEFINE_boolean('fake_rabbit', False, 'use a fake rabbit')297DEFINE_boolean('fake_rabbit', False, 'use a fake rabbit')
297DEFINE_bool('fake_network', False,298DEFINE_bool('fake_network', False,
@@ -371,6 +372,17 @@
371 'Manager for volume')372 'Manager for volume')
372DEFINE_string('scheduler_manager', 'nova.scheduler.manager.SchedulerManager',373DEFINE_string('scheduler_manager', 'nova.scheduler.manager.SchedulerManager',
373 'Manager for scheduler')374 'Manager for scheduler')
375DEFINE_string('vsa_manager', 'nova.vsa.manager.VsaManager',
376 'Manager for vsa')
377DEFINE_string('vc_image_name', 'vc_image',
378              'the VC image ID (for a VC image that exists in the Glance DB)')
379# VSA constants and enums
380DEFINE_string('default_vsa_instance_type', 'm1.small',
381 'default instance type for VSA instances')
382DEFINE_integer('max_vcs_in_vsa', 32,
383               'maximum VCs in a VSA')
384DEFINE_integer('vsa_part_size_gb', 100,
385 'default partition size for shared capacity')
374386
375# The service to use for image search and retrieval387# The service to use for image search and retrieval
376DEFINE_string('image_service', 'nova.image.glance.GlanceImageService',388DEFINE_string('image_service', 'nova.image.glance.GlanceImageService',
377389
=== modified file 'nova/quota.py'
--- nova/quota.py 2011-08-17 07:41:17 +0000
+++ nova/quota.py 2011-08-26 22:09:26 +0000
@@ -116,8 +116,9 @@
116 allowed_gigabytes = _get_request_allotment(requested_gigabytes,116 allowed_gigabytes = _get_request_allotment(requested_gigabytes,
117 used_gigabytes,117 used_gigabytes,
118 quota['gigabytes'])118 quota['gigabytes'])
119 allowed_volumes = min(allowed_volumes,119 if size != 0:
120 int(allowed_gigabytes // size))120 allowed_volumes = min(allowed_volumes,
121 int(allowed_gigabytes // size))
121 return min(requested_volumes, allowed_volumes)122 return min(requested_volumes, allowed_volumes)
122123
123124
124125
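The guard added to nova/quota.py above exists because VSA bookkeeping can request volumes with a zero size, and the old unconditional `allowed_gigabytes // size` would raise ZeroDivisionError. A simplified sketch of the resulting allotment logic, with `_get_request_allotment` reduced to a plain remaining-quota calculation for illustration:

```python
def allowed_volume_count(requested_volumes, requested_gigabytes,
                         used_volumes, used_gigabytes, quota, size):
    """Return how many volumes the request may create under quota."""
    def _get_request_allotment(requested, used, total):
        # Simplified: remaining headroom under the quota.
        return total - used

    allowed_volumes = _get_request_allotment(requested_volumes, used_volumes,
                                             quota['volumes'])
    allowed_gigabytes = _get_request_allotment(requested_gigabytes,
                                               used_gigabytes,
                                               quota['gigabytes'])
    # The fix above: only cap by the gigabyte quota for non-zero sizes,
    # so zero-sized (bookkeeping) volumes don't divide by zero.
    if size != 0:
        allowed_volumes = min(allowed_volumes,
                              int(allowed_gigabytes // size))
    return min(requested_volumes, allowed_volumes)
```

With a 40 GB quota and 10 GB volumes, a request for 5 volumes is capped at 4; with size 0, only the volume-count quota applies.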
=== added file 'nova/scheduler/vsa.py'
--- nova/scheduler/vsa.py 1970-01-01 00:00:00 +0000
+++ nova/scheduler/vsa.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,535 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""
19VSA Simple Scheduler
20"""
21
22from nova import context
23from nova import db
24from nova import flags
25from nova import log as logging
26from nova import rpc
27from nova import utils
28from nova.scheduler import driver
29from nova.scheduler import simple
30from nova.vsa.api import VsaState
31from nova.volume import volume_types
32
33LOG = logging.getLogger('nova.scheduler.vsa')
34
35FLAGS = flags.FLAGS
36flags.DEFINE_integer('drive_type_approx_capacity_percent', 10,
37 'The percentage range for capacity comparison')
38flags.DEFINE_integer('vsa_unique_hosts_per_alloc', 10,
39 'The number of unique hosts per storage allocation')
40flags.DEFINE_boolean('vsa_select_unique_drives', True,
 41 'If True, select a distinct host for each drive')
42
43
44def BYTES_TO_GB(bytes):
45 return bytes >> 30
46
47
48def GB_TO_BYTES(gb):
49 return gb << 30
50
51
52class VsaScheduler(simple.SimpleScheduler):
53 """Implements Scheduler for volume placement."""
54
55 def __init__(self, *args, **kwargs):
56 super(VsaScheduler, self).__init__(*args, **kwargs)
57 self._notify_all_volume_hosts("startup")
58
59 def _notify_all_volume_hosts(self, event):
60 rpc.fanout_cast(context.get_admin_context(),
61 FLAGS.volume_topic,
62 {"method": "notification",
63 "args": {"event": event}})
64
65 def _qosgrp_match(self, drive_type, qos_values):
66
67 def _compare_names(str1, str2):
68 return str1.lower() == str2.lower()
69
70 def _compare_sizes_approxim(cap_capacity, size):
71 cap_capacity = BYTES_TO_GB(int(cap_capacity))
72 size = int(size)
73 size_perc = size * \
74 FLAGS.drive_type_approx_capacity_percent / 100
75
76 return cap_capacity >= size - size_perc and \
77 cap_capacity <= size + size_perc
78
79 # Add more entries for additional comparisons
80 compare_list = [{'cap1': 'DriveType',
81 'cap2': 'type',
82 'cmp_func': _compare_names},
83 {'cap1': 'DriveCapacity',
84 'cap2': 'size',
85 'cmp_func': _compare_sizes_approxim}]
86
87 for cap in compare_list:
88 if cap['cap1'] in qos_values.keys() and \
89 cap['cap2'] in drive_type.keys() and \
90 cap['cmp_func'] is not None and \
91 cap['cmp_func'](qos_values[cap['cap1']],
92 drive_type[cap['cap2']]):
93 pass
94 else:
95 return False
96 return True
97
98 def _get_service_states(self):
99 return self.zone_manager.service_states
100
101 def _filter_hosts(self, topic, request_spec, host_list=None):
102
103 LOG.debug(_("_filter_hosts: %(request_spec)s"), locals())
104
105 drive_type = request_spec['drive_type']
106 LOG.debug(_("Filter hosts for drive type %s"), drive_type['name'])
107
108 if host_list is None:
109 host_list = self._get_service_states().iteritems()
110
111 filtered_hosts = [] # returns list of (hostname, capability_dict)
112 for host, host_dict in host_list:
113 for service_name, service_dict in host_dict.iteritems():
114 if service_name != topic:
115 continue
116
117 gos_info = service_dict.get('drive_qos_info', {})
118 for qosgrp, qos_values in gos_info.iteritems():
119 if self._qosgrp_match(drive_type, qos_values):
120 if qos_values['AvailableCapacity'] > 0:
121 filtered_hosts.append((host, gos_info))
122 else:
123 LOG.debug(_("Host %s has no free capacity. Skip"),
124 host)
125 break
126
127 host_names = [item[0] for item in filtered_hosts]
128 LOG.debug(_("Filter hosts: %s"), host_names)
129 return filtered_hosts
130
131 def _allowed_to_use_host(self, host, selected_hosts, unique):
132 if unique == False or \
133 host not in [item[0] for item in selected_hosts]:
134 return True
135 else:
136 return False
137
138 def _add_hostcap_to_list(self, selected_hosts, host, cap):
139 if host not in [item[0] for item in selected_hosts]:
140 selected_hosts.append((host, cap))
141
142 def host_selection_algorithm(self, request_spec, all_hosts,
143 selected_hosts, unique):
144 """Must override this method for VSA scheduler to work."""
145 raise NotImplementedError(_("Must implement host selection mechanism"))
146
147 def _select_hosts(self, request_spec, all_hosts, selected_hosts=None):
148
149 if selected_hosts is None:
150 selected_hosts = []
151
152 host = None
153 if len(selected_hosts) >= FLAGS.vsa_unique_hosts_per_alloc:
154 # try to select from already selected hosts only
155 LOG.debug(_("Maximum number of hosts selected (%d)"),
156 len(selected_hosts))
157 unique = False
158 (host, qos_cap) = self.host_selection_algorithm(request_spec,
159 selected_hosts,
160 selected_hosts,
161 unique)
162
163 LOG.debug(_("Selected excessive host %(host)s"), locals())
164 else:
165 unique = FLAGS.vsa_select_unique_drives
166
167 if host is None:
168 # if we've not tried yet (# of sel hosts < max) - unique=True
169 # or failed to select from selected_hosts - unique=False
170 # select from all hosts
171 (host, qos_cap) = self.host_selection_algorithm(request_spec,
172 all_hosts,
173 selected_hosts,
174 unique)
175 if host is None:
176 raise driver.WillNotSchedule(_("No available hosts"))
177
178 return (host, qos_cap)
179
180 def _provision_volume(self, context, vol, vsa_id, availability_zone):
181
182 if availability_zone is None:
183 availability_zone = FLAGS.storage_availability_zone
184
185 now = utils.utcnow()
186 options = {
187 'size': vol['size'],
188 'user_id': context.user_id,
189 'project_id': context.project_id,
190 'snapshot_id': None,
191 'availability_zone': availability_zone,
192 'status': "creating",
193 'attach_status': "detached",
194 'display_name': vol['name'],
195 'display_description': vol['description'],
196 'volume_type_id': vol['volume_type_id'],
197 'metadata': dict(to_vsa_id=vsa_id),
198 'host': vol['host'],
199 'scheduled_at': now
200 }
201
202 size = vol['size']
203 host = vol['host']
204 name = vol['name']
205 LOG.debug(_("Provision volume %(name)s of size %(size)s GB on "\
206 "host %(host)s"), locals())
207
208 volume_ref = db.volume_create(context, options)
209 rpc.cast(context,
210 db.queue_get_for(context, "volume", vol['host']),
211 {"method": "create_volume",
212 "args": {"volume_id": volume_ref['id'],
213 "snapshot_id": None}})
214
215 def _check_host_enforcement(self, context, availability_zone):
216 if (availability_zone
217 and ':' in availability_zone
218 and context.is_admin):
219 zone, _x, host = availability_zone.partition(':')
220 service = db.service_get_by_args(context.elevated(), host,
221 'nova-volume')
222 if not self.service_is_up(service):
223 raise driver.WillNotSchedule(_("Host %s not available") % host)
224
225 return host
226 else:
227 return None
228
229 def _assign_hosts_to_volumes(self, context, volume_params, forced_host):
230
231 prev_volume_type_id = None
232 request_spec = {}
233 selected_hosts = []
234
235 LOG.debug(_("volume_params %(volume_params)s") % locals())
236
237 i = 1
238 for vol in volume_params:
239 name = vol['name']
240 LOG.debug(_("%(i)d: Volume %(name)s"), locals())
241 i += 1
242
243 if forced_host:
244 vol['host'] = forced_host
245 vol['capabilities'] = None
246 continue
247
248 volume_type_id = vol['volume_type_id']
249 request_spec['size'] = vol['size']
250
251 if prev_volume_type_id is None or\
252 prev_volume_type_id != volume_type_id:
253 # generate list of hosts for this drive type
254
255 volume_type = volume_types.get_volume_type(context,
256 volume_type_id)
257 drive_type = {
258 'name': volume_type['extra_specs'].get('drive_name'),
259 'type': volume_type['extra_specs'].get('drive_type'),
260 'size': int(volume_type['extra_specs'].get('drive_size')),
261 'rpm': volume_type['extra_specs'].get('drive_rpm'),
262 }
263 request_spec['drive_type'] = drive_type
264
265 all_hosts = self._filter_hosts("volume", request_spec)
266 prev_volume_type_id = volume_type_id
267
268 (host, qos_cap) = self._select_hosts(request_spec,
269 all_hosts, selected_hosts)
270 vol['host'] = host
271 vol['capabilities'] = qos_cap
272 self._consume_resource(qos_cap, vol['size'], -1)
273
274 def schedule_create_volumes(self, context, request_spec,
275 availability_zone=None, *_args, **_kwargs):
276 """Picks hosts for hosting multiple volumes."""
277
278 num_volumes = request_spec.get('num_volumes')
279 LOG.debug(_("Attempting to spawn %(num_volumes)d volume(s)") %
280 locals())
281
282 vsa_id = request_spec.get('vsa_id')
283 volume_params = request_spec.get('volumes')
284
285 host = self._check_host_enforcement(context, availability_zone)
286
287 try:
288 self._print_capabilities_info()
289
290 self._assign_hosts_to_volumes(context, volume_params, host)
291
292 for vol in volume_params:
293 self._provision_volume(context, vol, vsa_id, availability_zone)
294 except:
295 if vsa_id:
296 db.vsa_update(context, vsa_id, dict(status=VsaState.FAILED))
297
298 for vol in volume_params:
299 if 'capabilities' in vol:
300 self._consume_resource(vol['capabilities'],
301 vol['size'], 1)
302 raise
303
304 return None
305
306 def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
307 """Picks the best host based on requested drive type capability."""
308 volume_ref = db.volume_get(context, volume_id)
309
310 host = self._check_host_enforcement(context,
311 volume_ref['availability_zone'])
312 if host:
313 now = utils.utcnow()
314 db.volume_update(context, volume_id, {'host': host,
315 'scheduled_at': now})
316 return host
317
318 volume_type_id = volume_ref['volume_type_id']
319 if volume_type_id:
320 volume_type = volume_types.get_volume_type(context, volume_type_id)
321
322 if volume_type_id is None or\
323 volume_types.is_vsa_volume(volume_type_id, volume_type):
324
325 LOG.debug(_("Non-VSA volume %d"), volume_ref['id'])
326 return super(VsaScheduler, self).schedule_create_volume(context,
327 volume_id, *_args, **_kwargs)
328
329 self._print_capabilities_info()
330
331 drive_type = {
332 'name': volume_type['extra_specs'].get('drive_name'),
333 'type': volume_type['extra_specs'].get('drive_type'),
334 'size': int(volume_type['extra_specs'].get('drive_size')),
335 'rpm': volume_type['extra_specs'].get('drive_rpm'),
336 }
337
338 LOG.debug(_("Spawning volume %(volume_id)s with drive type "\
339 "%(drive_type)s"), locals())
340
341 request_spec = {'size': volume_ref['size'],
342 'drive_type': drive_type}
343 hosts = self._filter_hosts("volume", request_spec)
344
345 try:
346 (host, qos_cap) = self._select_hosts(request_spec, all_hosts=hosts)
347 except:
348 if volume_ref['to_vsa_id']:
349 db.vsa_update(context, volume_ref['to_vsa_id'],
350 dict(status=VsaState.FAILED))
351 raise
352
353 if host:
354 now = utils.utcnow()
355 db.volume_update(context, volume_id, {'host': host,
356 'scheduled_at': now})
357 self._consume_resource(qos_cap, volume_ref['size'], -1)
358 return host
359
360 def _consume_full_drive(self, qos_values, direction):
361 qos_values['FullDrive']['NumFreeDrives'] += direction
362 qos_values['FullDrive']['NumOccupiedDrives'] -= direction
363
364 def _consume_partition(self, qos_values, size, direction):
365
366 if qos_values['PartitionDrive']['PartitionSize'] != 0:
367 partition_size = qos_values['PartitionDrive']['PartitionSize']
368 else:
369 partition_size = size
370 part_per_drive = qos_values['DriveCapacity'] / partition_size
371
372 if direction == -1 and \
373 qos_values['PartitionDrive']['NumFreePartitions'] == 0:
374
375 self._consume_full_drive(qos_values, direction)
376 qos_values['PartitionDrive']['NumFreePartitions'] += \
377 part_per_drive
378
379 qos_values['PartitionDrive']['NumFreePartitions'] += direction
380 qos_values['PartitionDrive']['NumOccupiedPartitions'] -= direction
381
382 if direction == 1 and \
383 qos_values['PartitionDrive']['NumFreePartitions'] >= \
384 part_per_drive:
385
386 self._consume_full_drive(qos_values, direction)
387 qos_values['PartitionDrive']['NumFreePartitions'] -= \
388 part_per_drive
389
390 def _consume_resource(self, qos_values, size, direction):
391 if qos_values is None:
392 LOG.debug(_("No capability selected for volume of size %(size)s"),
393 locals())
394 return
395
396 if size == 0: # full drive match
397 qos_values['AvailableCapacity'] += direction * \
398 qos_values['DriveCapacity']
399 self._consume_full_drive(qos_values, direction)
400 else:
401 qos_values['AvailableCapacity'] += direction * GB_TO_BYTES(size)
402 self._consume_partition(qos_values, GB_TO_BYTES(size), direction)
403 return
404
405 def _print_capabilities_info(self):
406 host_list = self._get_service_states().iteritems()
407 for host, host_dict in host_list:
408 for service_name, service_dict in host_dict.iteritems():
409 if service_name != "volume":
410 continue
411
412 LOG.info(_("Host %s:"), host)
413
414 gos_info = service_dict.get('drive_qos_info', {})
415 for qosgrp, qos_values in gos_info.iteritems():
416 total = qos_values['TotalDrives']
417 used = qos_values['FullDrive']['NumOccupiedDrives']
418 free = qos_values['FullDrive']['NumFreeDrives']
419 avail = BYTES_TO_GB(qos_values['AvailableCapacity'])
420
421 LOG.info(_("\tDrive %(qosgrp)-25s: total %(total)2s, "\
422 "used %(used)2s, free %(free)2s. Available "\
423 "capacity %(avail)-5s"), locals())
424
425
426class VsaSchedulerLeastUsedHost(VsaScheduler):
427 """
428 Implements VSA scheduler to select the host with least used capacity
429 of particular type.
430 """
431
432 def __init__(self, *args, **kwargs):
433 super(VsaSchedulerLeastUsedHost, self).__init__(*args, **kwargs)
434
435 def host_selection_algorithm(self, request_spec, all_hosts,
436 selected_hosts, unique):
437 size = request_spec['size']
438 drive_type = request_spec['drive_type']
439 best_host = None
440 best_qoscap = None
441 best_cap = None
442 min_used = 0
443
444 for (host, capabilities) in all_hosts:
445
446 has_enough_capacity = False
447 used_capacity = 0
448 for qosgrp, qos_values in capabilities.iteritems():
449
450 used_capacity = used_capacity + qos_values['TotalCapacity'] \
451 - qos_values['AvailableCapacity']
452
453 if self._qosgrp_match(drive_type, qos_values):
454 # we found required qosgroup
455
456 if size == 0: # full drive match
457 if qos_values['FullDrive']['NumFreeDrives'] > 0:
458 has_enough_capacity = True
459 matched_qos = qos_values
460 else:
461 break
462 else:
463 if qos_values['AvailableCapacity'] >= size and \
464 (qos_values['PartitionDrive'][
465 'NumFreePartitions'] > 0 or \
466 qos_values['FullDrive']['NumFreeDrives'] > 0):
467 has_enough_capacity = True
468 matched_qos = qos_values
469 else:
470 break
471
472 if has_enough_capacity and \
473 self._allowed_to_use_host(host,
474 selected_hosts,
475 unique) and \
476 (best_host is None or used_capacity < min_used):
477
478 min_used = used_capacity
479 best_host = host
480 best_qoscap = matched_qos
481 best_cap = capabilities
482
483 if best_host:
484 self._add_hostcap_to_list(selected_hosts, best_host, best_cap)
485 min_used = BYTES_TO_GB(min_used)
486 LOG.debug(_("\t LeastUsedHost: Best host: %(best_host)s. "\
487 "(used capacity %(min_used)s)"), locals())
488 return (best_host, best_qoscap)
489
490
491class VsaSchedulerMostAvailCapacity(VsaScheduler):
492 """
493 Implements VSA scheduler to select the host with most available capacity
494 of one particular type.
495 """
496
497 def __init__(self, *args, **kwargs):
498 super(VsaSchedulerMostAvailCapacity, self).__init__(*args, **kwargs)
499
500 def host_selection_algorithm(self, request_spec, all_hosts,
501 selected_hosts, unique):
502 size = request_spec['size']
503 drive_type = request_spec['drive_type']
504 best_host = None
505 best_qoscap = None
506 best_cap = None
507 max_avail = 0
508
509 for (host, capabilities) in all_hosts:
510 for qosgrp, qos_values in capabilities.iteritems():
511 if self._qosgrp_match(drive_type, qos_values):
512 # we found required qosgroup
513
514 if size == 0: # full drive match
515 available = qos_values['FullDrive']['NumFreeDrives']
516 else:
517 available = qos_values['AvailableCapacity']
518
519 if available > max_avail and \
520 self._allowed_to_use_host(host,
521 selected_hosts,
522 unique):
523 max_avail = available
524 best_host = host
525 best_qoscap = qos_values
526 best_cap = capabilities
527 break # go to the next host
528
529 if best_host:
530 self._add_hostcap_to_list(selected_hosts, best_host, best_cap)
531 type_str = "drives" if size == 0 else "bytes"
532 LOG.debug(_("\t MostAvailCap: Best host: %(best_host)s. "\
533 "(available %(max_avail)s %(type_str)s)"), locals())
534
535 return (best_host, best_qoscap)
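Reviewer note on the capacity matching in this file: `_qosgrp_match` treats a drive as matching when its reported capacity lies within `drive_type_approx_capacity_percent` of the requested size, after converting the byte count to GB with a 30-bit shift. A self-contained sketch of that window check (function names here are illustrative, not nova APIs):

```python
def bytes_to_gb(num_bytes):
    # Same conversion as the scheduler's BYTES_TO_GB helper.
    return num_bytes >> 30

def capacity_matches(cap_capacity_bytes, requested_gb, percent=10):
    # Accept capacities within +/- percent of the requested size,
    # mirroring _compare_sizes_approxim and its default 10% window.
    cap_gb = bytes_to_gb(int(cap_capacity_bytes))
    delta = requested_gb * percent // 100
    return requested_gb - delta <= cap_gb <= requested_gb + delta
```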
=== added file 'nova/tests/api/openstack/contrib/test_vsa.py'
--- nova/tests/api/openstack/contrib/test_vsa.py 1970-01-01 00:00:00 +0000
+++ nova/tests/api/openstack/contrib/test_vsa.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,450 @@
1# Copyright 2011 OpenStack LLC.
2# All Rights Reserved.
3#
4# Licensed under the Apache License, Version 2.0 (the "License"); you may
5# not use this file except in compliance with the License. You may obtain
6# a copy of the License at
7#
8# http://www.apache.org/licenses/LICENSE-2.0
9#
10# Unless required by applicable law or agreed to in writing, software
11# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13# License for the specific language governing permissions and limitations
14# under the License.
15
16import json
17import stubout
18import unittest
19import webob
20
21from nova import context
22from nova import db
23from nova import exception
24from nova import flags
25from nova import log as logging
26from nova import test
27from nova import volume
28from nova import vsa
29from nova.api import openstack
30from nova.tests.api.openstack import fakes
31import nova.wsgi
32
33from nova.api.openstack.contrib.virtual_storage_arrays import _vsa_view
34
35FLAGS = flags.FLAGS
36
37LOG = logging.getLogger('nova.tests.api.openstack.vsa')
38
39last_param = {}
40
41
42def _get_default_vsa_param():
43 return {
44 'display_name': 'Test_VSA_name',
45 'display_description': 'Test_VSA_description',
46 'vc_count': 1,
47 'instance_type': 'm1.small',
48 'instance_type_id': 5,
49 'image_name': None,
50 'availability_zone': None,
51 'storage': [],
52 'shared': False
53 }
54
55
56def stub_vsa_create(self, context, **param):
57 global last_param
58 LOG.debug(_("_create: param=%s"), param)
59 param['id'] = 123
60 param['name'] = 'Test name'
61 param['instance_type_id'] = 5
62 last_param = param
63 return param
64
65
66def stub_vsa_delete(self, context, vsa_id):
67 global last_param
68 last_param = dict(vsa_id=vsa_id)
69
70 LOG.debug(_("_delete: %s"), locals())
71 if vsa_id != '123':
72 raise exception.NotFound
73
74
75def stub_vsa_get(self, context, vsa_id):
76 global last_param
77 last_param = dict(vsa_id=vsa_id)
78
79 LOG.debug(_("_get: %s"), locals())
80 if vsa_id != '123':
81 raise exception.NotFound
82
83 param = _get_default_vsa_param()
84 param['id'] = vsa_id
85 return param
86
87
88def stub_vsa_get_all(self, context):
89 LOG.debug(_("_get_all: %s"), locals())
90 param = _get_default_vsa_param()
91 param['id'] = 123
92 return [param]
93
94
95class VSAApiTest(test.TestCase):
96 def setUp(self):
97 super(VSAApiTest, self).setUp()
98 self.stubs = stubout.StubOutForTesting()
99 fakes.FakeAuthManager.reset_fake_data()
100 fakes.FakeAuthDatabase.data = {}
101 fakes.stub_out_networking(self.stubs)
102 fakes.stub_out_rate_limiting(self.stubs)
103 fakes.stub_out_auth(self.stubs)
104 self.stubs.Set(vsa.api.API, "create", stub_vsa_create)
105 self.stubs.Set(vsa.api.API, "delete", stub_vsa_delete)
106 self.stubs.Set(vsa.api.API, "get", stub_vsa_get)
107 self.stubs.Set(vsa.api.API, "get_all", stub_vsa_get_all)
108
109 self.context = context.get_admin_context()
110
111 def tearDown(self):
112 self.stubs.UnsetAll()
113 super(VSAApiTest, self).tearDown()
114
115 def test_vsa_create(self):
116 global last_param
117 last_param = {}
118
119 vsa = {"displayName": "VSA Test Name",
120 "displayDescription": "VSA Test Desc"}
121 body = dict(vsa=vsa)
122 req = webob.Request.blank('/v1.1/777/zadr-vsa')
123 req.method = 'POST'
124 req.body = json.dumps(body)
125 req.headers['content-type'] = 'application/json'
126
127 resp = req.get_response(fakes.wsgi_app())
128 self.assertEqual(resp.status_int, 200)
129
130 # Compare if parameters were correctly passed to stub
131 self.assertEqual(last_param['display_name'], "VSA Test Name")
132 self.assertEqual(last_param['display_description'], "VSA Test Desc")
133
134 resp_dict = json.loads(resp.body)
135 self.assertTrue('vsa' in resp_dict)
136 self.assertEqual(resp_dict['vsa']['displayName'], vsa['displayName'])
137 self.assertEqual(resp_dict['vsa']['displayDescription'],
138 vsa['displayDescription'])
139
140 def test_vsa_create_no_body(self):
141 req = webob.Request.blank('/v1.1/777/zadr-vsa')
142 req.method = 'POST'
143 req.body = json.dumps({})
144 req.headers['content-type'] = 'application/json'
145
146 resp = req.get_response(fakes.wsgi_app())
147 self.assertEqual(resp.status_int, 422)
148
149 def test_vsa_delete(self):
150 global last_param
151 last_param = {}
152
153 vsa_id = 123
154 req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
155 req.method = 'DELETE'
156
157 resp = req.get_response(fakes.wsgi_app())
158 self.assertEqual(resp.status_int, 200)
159 self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
160
161 def test_vsa_delete_invalid_id(self):
162 global last_param
163 last_param = {}
164
165 vsa_id = 234
166 req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
167 req.method = 'DELETE'
168
169 resp = req.get_response(fakes.wsgi_app())
170 self.assertEqual(resp.status_int, 404)
171 self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
172
173 def test_vsa_show(self):
174 global last_param
175 last_param = {}
176
177 vsa_id = 123
178 req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
179 req.method = 'GET'
180 resp = req.get_response(fakes.wsgi_app())
181 self.assertEqual(resp.status_int, 200)
182 self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
183
184 resp_dict = json.loads(resp.body)
185 self.assertTrue('vsa' in resp_dict)
186 self.assertEqual(resp_dict['vsa']['id'], str(vsa_id))
187
188 def test_vsa_show_invalid_id(self):
189 global last_param
190 last_param = {}
191
192 vsa_id = 234
193 req = webob.Request.blank('/v1.1/777/zadr-vsa/%d' % vsa_id)
194 req.method = 'GET'
195 resp = req.get_response(fakes.wsgi_app())
196 self.assertEqual(resp.status_int, 404)
197 self.assertEqual(str(last_param['vsa_id']), str(vsa_id))
198
199 def test_vsa_index(self):
200 req = webob.Request.blank('/v1.1/777/zadr-vsa')
201 req.method = 'GET'
202 resp = req.get_response(fakes.wsgi_app())
203 self.assertEqual(resp.status_int, 200)
204
205 resp_dict = json.loads(resp.body)
206
207 self.assertTrue('vsaSet' in resp_dict)
208 resp_vsas = resp_dict['vsaSet']
209 self.assertEqual(len(resp_vsas), 1)
210
211 resp_vsa = resp_vsas.pop()
212 self.assertEqual(resp_vsa['id'], 123)
213
214 def test_vsa_detail(self):
215 req = webob.Request.blank('/v1.1/777/zadr-vsa/detail')
216 req.method = 'GET'
217 resp = req.get_response(fakes.wsgi_app())
218 self.assertEqual(resp.status_int, 200)
219
220 resp_dict = json.loads(resp.body)
221
222 self.assertTrue('vsaSet' in resp_dict)
223 resp_vsas = resp_dict['vsaSet']
224 self.assertEqual(len(resp_vsas), 1)
225
226 resp_vsa = resp_vsas.pop()
227 self.assertEqual(resp_vsa['id'], 123)
228
229
230def _get_default_volume_param():
231 return {
232 'id': 123,
233 'status': 'available',
234 'size': 100,
235 'availability_zone': 'nova',
236 'created_at': None,
237 'attach_status': 'detached',
238 'name': 'vol name',
239 'display_name': 'Default vol name',
240 'display_description': 'Default vol description',
241 'volume_type_id': 1,
242 'volume_metadata': [],
243 }
244
245
246def stub_get_vsa_volume_type(self, context):
247 return {'id': 1,
248 'name': 'VSA volume type',
249 'extra_specs': {'type': 'vsa_volume'}}
250
251
252def stub_volume_create(self, context, size, snapshot_id, name, description,
253 **param):
254 LOG.debug(_("_create: param=%s"), size)
255 vol = _get_default_volume_param()
256 vol['size'] = size
257 vol['display_name'] = name
258 vol['display_description'] = description
259 return vol
260
261
262def stub_volume_update(self, context, **param):
263 LOG.debug(_("_volume_update: param=%s"), param)
264 pass
265
266
267def stub_volume_delete(self, context, **param):
268 LOG.debug(_("_volume_delete: param=%s"), param)
269 pass
270
271
272def stub_volume_get(self, context, volume_id):
273 LOG.debug(_("_volume_get: volume_id=%s"), volume_id)
274 vol = _get_default_volume_param()
275 vol['id'] = volume_id
276 meta = {'key': 'from_vsa_id', 'value': '123'}
277 if volume_id == '345':
278 meta = {'key': 'to_vsa_id', 'value': '123'}
279 vol['volume_metadata'].append(meta)
280 return vol
281
282
283def stub_volume_get_notfound(self, context, volume_id):
284 raise exception.NotFound
285
286
287def stub_volume_get_all(self, context, search_opts):
288 vol = stub_volume_get(self, context, '123')
289 vol['metadata'] = search_opts['metadata']
290 return [vol]
291
292
293def return_vsa(context, vsa_id):
294 return {'id': vsa_id}
295
296
297class VSAVolumeApiTest(test.TestCase):
298
299 def setUp(self, test_obj=None, test_objs=None):
300 super(VSAVolumeApiTest, self).setUp()
301 self.stubs = stubout.StubOutForTesting()
302 fakes.FakeAuthManager.reset_fake_data()
303 fakes.FakeAuthDatabase.data = {}
304 fakes.stub_out_networking(self.stubs)
305 fakes.stub_out_rate_limiting(self.stubs)
306 fakes.stub_out_auth(self.stubs)
307 self.stubs.Set(nova.db.api, 'vsa_get', return_vsa)
308 self.stubs.Set(vsa.api.API, "get_vsa_volume_type",
309 stub_get_vsa_volume_type)
310
311 self.stubs.Set(volume.api.API, "update", stub_volume_update)
312 self.stubs.Set(volume.api.API, "delete", stub_volume_delete)
313 self.stubs.Set(volume.api.API, "get", stub_volume_get)
314 self.stubs.Set(volume.api.API, "get_all", stub_volume_get_all)
315
316 self.context = context.get_admin_context()
317 self.test_obj = test_obj if test_obj else "volume"
318 self.test_objs = test_objs if test_objs else "volumes"
319
320 def tearDown(self):
321 self.stubs.UnsetAll()
322 super(VSAVolumeApiTest, self).tearDown()
323
324 def test_vsa_volume_create(self):
325 self.stubs.Set(volume.api.API, "create", stub_volume_create)
326
327 vol = {"size": 100,
328 "displayName": "VSA Volume Test Name",
329 "displayDescription": "VSA Volume Test Desc"}
330 body = {self.test_obj: vol}
331 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
332 req.method = 'POST'
333 req.body = json.dumps(body)
334 req.headers['content-type'] = 'application/json'
335 resp = req.get_response(fakes.wsgi_app())
336
337 if self.test_obj == "volume":
338 self.assertEqual(resp.status_int, 200)
339
340 resp_dict = json.loads(resp.body)
341 self.assertTrue(self.test_obj in resp_dict)
342 self.assertEqual(resp_dict[self.test_obj]['size'],
343 vol['size'])
344 self.assertEqual(resp_dict[self.test_obj]['displayName'],
345 vol['displayName'])
346 self.assertEqual(resp_dict[self.test_obj]['displayDescription'],
347 vol['displayDescription'])
348 else:
349 self.assertEqual(resp.status_int, 400)
350
351 def test_vsa_volume_create_no_body(self):
352 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
353 req.method = 'POST'
354 req.body = json.dumps({})
355 req.headers['content-type'] = 'application/json'
356
357 resp = req.get_response(fakes.wsgi_app())
358 if self.test_obj == "volume":
359 self.assertEqual(resp.status_int, 422)
360 else:
361 self.assertEqual(resp.status_int, 400)
362
363 def test_vsa_volume_index(self):
364 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s' % self.test_objs)
365 resp = req.get_response(fakes.wsgi_app())
366 self.assertEqual(resp.status_int, 200)
367
368 def test_vsa_volume_detail(self):
369 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/detail' % \
370 self.test_objs)
371 resp = req.get_response(fakes.wsgi_app())
372 self.assertEqual(resp.status_int, 200)
373
374 def test_vsa_volume_show(self):
375 obj_num = 234 if self.test_objs == "volumes" else 345
376 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
377 (self.test_objs, obj_num))
378 resp = req.get_response(fakes.wsgi_app())
379 self.assertEqual(resp.status_int, 200)
380
381 def test_vsa_volume_show_no_vsa_assignment(self):
382 req = webob.Request.blank('/v1.1/777/zadr-vsa/4/%s/333' % \
383 (self.test_objs))
384 resp = req.get_response(fakes.wsgi_app())
385 self.assertEqual(resp.status_int, 400)
386
387 def test_vsa_volume_show_no_volume(self):
388 self.stubs.Set(volume.api.API, "get", stub_volume_get_notfound)
389
390 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/333' % \
391 (self.test_objs))
392 resp = req.get_response(fakes.wsgi_app())
393 self.assertEqual(resp.status_int, 404)
394
395 def test_vsa_volume_update(self):
396 obj_num = 234 if self.test_objs == "volumes" else 345
397 update = {"status": "available",
398 "displayName": "Test Display name"}
399 body = {self.test_obj: update}
400 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
401 (self.test_objs, obj_num))
402 req.method = 'PUT'
403 req.body = json.dumps(body)
404 req.headers['content-type'] = 'application/json'
405
406 resp = req.get_response(fakes.wsgi_app())
407 if self.test_obj == "volume":
408 self.assertEqual(resp.status_int, 202)
409 else:
410 self.assertEqual(resp.status_int, 400)
411
412 def test_vsa_volume_delete(self):
413 obj_num = 234 if self.test_objs == "volumes" else 345
414 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/%s' % \
415 (self.test_objs, obj_num))
416 req.method = 'DELETE'
417 resp = req.get_response(fakes.wsgi_app())
418 if self.test_obj == "volume":
419 self.assertEqual(resp.status_int, 202)
420 else:
421 self.assertEqual(resp.status_int, 400)
422
423 def test_vsa_volume_delete_no_vsa_assignment(self):
424 req = webob.Request.blank('/v1.1/777/zadr-vsa/4/%s/333' % \
425 (self.test_objs))
426 req.method = 'DELETE'
427 resp = req.get_response(fakes.wsgi_app())
428 self.assertEqual(resp.status_int, 400)
429
430 def test_vsa_volume_delete_no_volume(self):
431 self.stubs.Set(volume.api.API, "get", stub_volume_get_notfound)
432
433 req = webob.Request.blank('/v1.1/777/zadr-vsa/123/%s/333' % \
434 (self.test_objs))
435 req.method = 'DELETE'
436 resp = req.get_response(fakes.wsgi_app())
437 if self.test_obj == "volume":
438 self.assertEqual(resp.status_int, 404)
439 else:
440 self.assertEqual(resp.status_int, 400)
441
442
443class VSADriveApiTest(VSAVolumeApiTest):
444 def setUp(self):
445 super(VSADriveApiTest, self).setUp(test_obj="drive",
446 test_objs="drives")
447
448 def tearDown(self):
449 self.stubs.UnsetAll()
450 super(VSADriveApiTest, self).tearDown()
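Reviewer note: `VSADriveApiTest` above reruns every `VSAVolumeApiTest` method against the `drives` sub-resource simply by passing different `test_obj`/`test_objs` names to `setUp`. A minimal sketch of the same parametrize-by-subclass pattern (resource names here are illustrative):

```python
import unittest

class ResourceUrlTest(unittest.TestCase):
    # The base class runs its assertions for "volume"; subclasses override.
    resource = 'volume'

    def test_collection_url(self):
        url = '/v1.1/777/zadr-vsa/123/%ss' % self.resource
        self.assertTrue(url.endswith('/' + self.resource + 's'))

class DriveUrlTest(ResourceUrlTest):
    # Inherits test_collection_url unchanged; only the resource differs.
    resource = 'drive'
```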
=== modified file 'nova/tests/api/openstack/test_extensions.py'
--- nova/tests/api/openstack/test_extensions.py 2011-08-24 21:30:40 +0000
+++ nova/tests/api/openstack/test_extensions.py 2011-08-26 22:09:26 +0000
@@ -95,6 +95,7 @@
         "Quotas",
         "Rescue",
         "SecurityGroups",
+        "VSAs",
         "VirtualInterfaces",
         "Volumes",
         "VolumeTypes",
=== added file 'nova/tests/scheduler/test_vsa_scheduler.py'
--- nova/tests/scheduler/test_vsa_scheduler.py 1970-01-01 00:00:00 +0000
+++ nova/tests/scheduler/test_vsa_scheduler.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,641 @@
1# Copyright 2011 OpenStack LLC.
2# All Rights Reserved.
3#
4# Licensed under the Apache License, Version 2.0 (the "License"); you may
5# not use this file except in compliance with the License. You may obtain
6# a copy of the License at
7#
8# http://www.apache.org/licenses/LICENSE-2.0
9#
10# Unless required by applicable law or agreed to in writing, software
11# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13# License for the specific language governing permissions and limitations
14# under the License.
15
16import stubout
17
18import nova
19
20from nova import context
21from nova import db
22from nova import exception
23from nova import flags
24from nova import log as logging
25from nova import test
26from nova import utils
27from nova.volume import volume_types
28
29from nova.scheduler import vsa as vsa_sched
30from nova.scheduler import driver
31
32FLAGS = flags.FLAGS
33LOG = logging.getLogger('nova.tests.scheduler.vsa')
34
35scheduled_volumes = []
36scheduled_volume = {}
37global_volume = {}
38
39
40class FakeVsaLeastUsedScheduler(
41 vsa_sched.VsaSchedulerLeastUsedHost):
42 # No need to stub anything at the moment
43 pass
44
45
46class FakeVsaMostAvailCapacityScheduler(
47 vsa_sched.VsaSchedulerMostAvailCapacity):
48 # No need to stub anything at the moment
49 pass
50
51
52class VsaSchedulerTestCase(test.TestCase):
53
54 def _get_vol_creation_request(self, num_vols, drive_ix, size=0):
55 volume_params = []
56 for i in range(num_vols):
57
58 name = 'name_' + str(i)
59 try:
60 volume_types.create(self.context, name,
61 extra_specs={'type': 'vsa_drive',
62 'drive_name': name,
63 'drive_type': 'type_' + str(drive_ix),
64 'drive_size': 1 + 100 * (drive_ix)})
65 self.created_types_lst.append(name)
66 except exception.ApiError:
67 # type is already created
68 pass
69
70 volume_type = volume_types.get_volume_type_by_name(self.context,
71 name)
72 volume = {'size': size,
73 'snapshot_id': None,
74 'name': 'vol_' + str(i),
75 'description': None,
76 'volume_type_id': volume_type['id']}
77 volume_params.append(volume)
78
79 return {'num_volumes': len(volume_params),
80 'vsa_id': 123,
81 'volumes': volume_params}
82
83 def _generate_default_service_states(self):
84 service_states = {}
85 for i in range(self.host_num):
86 host = {}
87 hostname = 'host_' + str(i)
88 if hostname in self.exclude_host_list:
89 continue
90
91 host['volume'] = {'timestamp': utils.utcnow(),
92 'drive_qos_info': {}}
93
94 for j in range(self.drive_type_start_ix,
95 self.drive_type_start_ix + self.drive_type_num):
96 dtype = {}
97 dtype['Name'] = 'name_' + str(j)
98 dtype['DriveType'] = 'type_' + str(j)
99 dtype['TotalDrives'] = 2 * (self.init_num_drives + i)
100 dtype['DriveCapacity'] = vsa_sched.GB_TO_BYTES(1 + 100 * j)
101 dtype['TotalCapacity'] = dtype['TotalDrives'] * \
102 dtype['DriveCapacity']
103 dtype['AvailableCapacity'] = (dtype['TotalDrives'] - i) * \
104 dtype['DriveCapacity']
105 dtype['DriveRpm'] = 7200
106 dtype['DifCapable'] = 0
107 dtype['SedCapable'] = 0
108 dtype['PartitionDrive'] = {
109 'PartitionSize': 0,
110 'NumOccupiedPartitions': 0,
111 'NumFreePartitions': 0}
112 dtype['FullDrive'] = {
113 'NumFreeDrives': dtype['TotalDrives'] - i,
114 'NumOccupiedDrives': i}
115 host['volume']['drive_qos_info'][dtype['Name']] = dtype
116
117 service_states[hostname] = host
118
119 return service_states
120
121 def _print_service_states(self):
122 for host, host_val in self.service_states.iteritems():
123 LOG.info(_("Host %s"), host)
124 total_used = 0
125 total_available = 0
126 qos = host_val['volume']['drive_qos_info']
127
128 for k, d in qos.iteritems():
129 LOG.info("\t%s: type %s: drives (used %2d, total %2d) "\
130 "size %3d, total %4d, used %4d, avail %d",
131 k, d['DriveType'],
132 d['FullDrive']['NumOccupiedDrives'], d['TotalDrives'],
133 vsa_sched.BYTES_TO_GB(d['DriveCapacity']),
134 vsa_sched.BYTES_TO_GB(d['TotalCapacity']),
135 vsa_sched.BYTES_TO_GB(d['TotalCapacity'] - \
136 d['AvailableCapacity']),
137 vsa_sched.BYTES_TO_GB(d['AvailableCapacity']))
138
139 total_used += vsa_sched.BYTES_TO_GB(d['TotalCapacity'] - \
140 d['AvailableCapacity'])
141 total_available += vsa_sched.BYTES_TO_GB(
142 d['AvailableCapacity'])
143 LOG.info("Host %s: used %d, avail %d",
144 host, total_used, total_available)
145
146 def _set_service_states(self, host_num,
147 drive_type_start_ix, drive_type_num,
148 init_num_drives=10,
149 exclude_host_list=[]):
150 self.host_num = host_num
151 self.drive_type_start_ix = drive_type_start_ix
152 self.drive_type_num = drive_type_num
153 self.exclude_host_list = exclude_host_list
154 self.init_num_drives = init_num_drives
155 self.service_states = self._generate_default_service_states()
156
157 def _get_service_states(self):
158 return self.service_states
159
160 def _fake_get_service_states(self):
161 return self._get_service_states()
162
163 def _fake_provision_volume(self, context, vol, vsa_id, availability_zone):
164 global scheduled_volumes
165 scheduled_volumes.append(dict(vol=vol,
166 vsa_id=vsa_id,
167 az=availability_zone))
168 name = vol['name']
169 host = vol['host']
170 LOG.debug(_("Test: provision vol %(name)s on host %(host)s"),
171 locals())
172 LOG.debug(_("\t vol=%(vol)s"), locals())
173 pass
174
175 def _fake_vsa_update(self, context, vsa_id, values):
176 LOG.debug(_("Test: VSA update request: vsa_id=%(vsa_id)s "\
177 "values=%(values)s"), locals())
178 pass
179
180 def _fake_volume_create(self, context, options):
181 LOG.debug(_("Test: Volume create: %s"), options)
182 options['id'] = 123
183 global global_volume
184 global_volume = options
185 return options
186
187 def _fake_volume_get(self, context, volume_id):
188 LOG.debug(_("Test: Volume get request: id=%(volume_id)s"), locals())
189 global global_volume
190 global_volume['id'] = volume_id
191 global_volume['availability_zone'] = None
192 return global_volume
193
194 def _fake_volume_update(self, context, volume_id, values):
195 LOG.debug(_("Test: Volume update request: id=%(volume_id)s "\
196 "values=%(values)s"), locals())
197 global scheduled_volume
198 scheduled_volume = {'id': volume_id, 'host': values['host']}
199 pass
200
201 def _fake_service_get_by_args(self, context, host, binary):
202 return "service"
203
204 def _fake_service_is_up_True(self, service):
205 return True
206
207 def _fake_service_is_up_False(self, service):
208 return False
209
210 def setUp(self, sched_class=None):
211 super(VsaSchedulerTestCase, self).setUp()
212 self.stubs = stubout.StubOutForTesting()
213 self.context = context.get_admin_context()
214
215 if sched_class is None:
216 self.sched = FakeVsaLeastUsedScheduler()
217 else:
218 self.sched = sched_class
219
220 self.host_num = 10
221 self.drive_type_num = 5
222
223 self.stubs.Set(self.sched,
224 '_get_service_states', self._fake_get_service_states)
225 self.stubs.Set(self.sched,
226 '_provision_volume', self._fake_provision_volume)
227 self.stubs.Set(nova.db, 'vsa_update', self._fake_vsa_update)
228
229 self.stubs.Set(nova.db, 'volume_get', self._fake_volume_get)
230 self.stubs.Set(nova.db, 'volume_update', self._fake_volume_update)
231
232 self.created_types_lst = []
233
234 def tearDown(self):
235 for name in self.created_types_lst:
236 volume_types.purge(self.context, name)
237
238 self.stubs.UnsetAll()
239 super(VsaSchedulerTestCase, self).tearDown()
240
241 def test_vsa_sched_create_volumes_simple(self):
242 global scheduled_volumes
243 scheduled_volumes = []
244 self._set_service_states(host_num=10,
245 drive_type_start_ix=0,
246 drive_type_num=5,
247 init_num_drives=10,
248 exclude_host_list=['host_1', 'host_3'])
249 prev = self._generate_default_service_states()
250 request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
251
252 self.sched.schedule_create_volumes(self.context,
253 request_spec,
254 availability_zone=None)
255
256 self.assertEqual(len(scheduled_volumes), 3)
257 self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_0')
258 self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_2')
259 self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_4')
260
261 cur = self._get_service_states()
262 for host in ['host_0', 'host_2', 'host_4']:
263 cur_dtype = cur[host]['volume']['drive_qos_info']['name_2']
264 prev_dtype = prev[host]['volume']['drive_qos_info']['name_2']
265 self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
266 self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
267 prev_dtype['FullDrive']['NumFreeDrives'] - 1)
268 self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
269 prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
270
271 def test_vsa_sched_no_drive_type(self):
272 self._set_service_states(host_num=10,
273 drive_type_start_ix=0,
274 drive_type_num=5,
275 init_num_drives=1)
276 request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=6)
277 self.assertRaises(driver.WillNotSchedule,
278 self.sched.schedule_create_volumes,
279 self.context,
280 request_spec,
281 availability_zone=None)
282
283 def test_vsa_sched_no_enough_drives(self):
284 global scheduled_volumes
285 scheduled_volumes = []
286
287 self._set_service_states(host_num=3,
288 drive_type_start_ix=0,
289 drive_type_num=1,
290 init_num_drives=0)
291 prev = self._generate_default_service_states()
292 request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=0)
293
294 self.assertRaises(driver.WillNotSchedule,
295 self.sched.schedule_create_volumes,
296 self.context,
297 request_spec,
298 availability_zone=None)
299
300	        # check that all reserved capacity was returned to the hosts
301 cur = self._get_service_states()
302 for k, v in prev.iteritems():
303 self.assertEqual(prev[k]['volume']['drive_qos_info'],
304 cur[k]['volume']['drive_qos_info'])
305
306 def test_vsa_sched_wrong_topic(self):
307 self._set_service_states(host_num=1,
308 drive_type_start_ix=0,
309 drive_type_num=5,
310 init_num_drives=1)
311 states = self._get_service_states()
312 new_states = {}
313 new_states['host_0'] = {'compute': states['host_0']['volume']}
314 self.service_states = new_states
315 request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
316
317 self.assertRaises(driver.WillNotSchedule,
318 self.sched.schedule_create_volumes,
319 self.context,
320 request_spec,
321 availability_zone=None)
322
323 def test_vsa_sched_provision_volume(self):
324 global global_volume
325 global_volume = {}
326 self._set_service_states(host_num=1,
327 drive_type_start_ix=0,
328 drive_type_num=1,
329 init_num_drives=1)
330 request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
331
332 self.stubs.UnsetAll()
333 self.stubs.Set(self.sched,
334 '_get_service_states', self._fake_get_service_states)
335 self.stubs.Set(nova.db, 'volume_create', self._fake_volume_create)
336
337 self.sched.schedule_create_volumes(self.context,
338 request_spec,
339 availability_zone=None)
340
341 self.assertEqual(request_spec['volumes'][0]['name'],
342 global_volume['display_name'])
343
344 def test_vsa_sched_no_free_drives(self):
345 self._set_service_states(host_num=1,
346 drive_type_start_ix=0,
347 drive_type_num=1,
348 init_num_drives=1)
349 request_spec = self._get_vol_creation_request(num_vols=1, drive_ix=0)
350
351 self.sched.schedule_create_volumes(self.context,
352 request_spec,
353 availability_zone=None)
354
355 cur = self._get_service_states()
356 cur_dtype = cur['host_0']['volume']['drive_qos_info']['name_0']
357 self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'], 1)
358
359 new_request = self._get_vol_creation_request(num_vols=1, drive_ix=0)
360
361 self.sched.schedule_create_volumes(self.context,
362 request_spec,
363 availability_zone=None)
364 self._print_service_states()
365
366 self.assertRaises(driver.WillNotSchedule,
367 self.sched.schedule_create_volumes,
368 self.context,
369 new_request,
370 availability_zone=None)
371
372 def test_vsa_sched_forced_host(self):
373 global scheduled_volumes
374 scheduled_volumes = []
375
376 self._set_service_states(host_num=10,
377 drive_type_start_ix=0,
378 drive_type_num=5,
379 init_num_drives=10)
380
381 request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
382
383 self.assertRaises(exception.HostBinaryNotFound,
384 self.sched.schedule_create_volumes,
385 self.context,
386 request_spec,
387 availability_zone="nova:host_5")
388
389 self.stubs.Set(nova.db,
390 'service_get_by_args', self._fake_service_get_by_args)
391 self.stubs.Set(self.sched,
392 'service_is_up', self._fake_service_is_up_False)
393
394 self.assertRaises(driver.WillNotSchedule,
395 self.sched.schedule_create_volumes,
396 self.context,
397 request_spec,
398 availability_zone="nova:host_5")
399
400 self.stubs.Set(self.sched,
401 'service_is_up', self._fake_service_is_up_True)
402
403 self.sched.schedule_create_volumes(self.context,
404 request_spec,
405 availability_zone="nova:host_5")
406
407 self.assertEqual(len(scheduled_volumes), 3)
408 self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_5')
409 self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_5')
410 self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_5')
411
412 def test_vsa_sched_create_volumes_partition(self):
413 global scheduled_volumes
414 scheduled_volumes = []
415 self._set_service_states(host_num=5,
416 drive_type_start_ix=0,
417 drive_type_num=5,
418 init_num_drives=1,
419 exclude_host_list=['host_0', 'host_2'])
420 prev = self._generate_default_service_states()
421 request_spec = self._get_vol_creation_request(num_vols=3,
422 drive_ix=3,
423 size=50)
424 self.sched.schedule_create_volumes(self.context,
425 request_spec,
426 availability_zone=None)
427
428 self.assertEqual(len(scheduled_volumes), 3)
429 self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_1')
430 self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_3')
431 self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_4')
432
433 cur = self._get_service_states()
434 for host in ['host_1', 'host_3', 'host_4']:
435 cur_dtype = cur[host]['volume']['drive_qos_info']['name_3']
436 prev_dtype = prev[host]['volume']['drive_qos_info']['name_3']
437
438 self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
439 self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
440 prev_dtype['FullDrive']['NumFreeDrives'] - 1)
441 self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
442 prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
443
444 self.assertEqual(prev_dtype['PartitionDrive']
445 ['NumOccupiedPartitions'], 0)
446 self.assertEqual(cur_dtype['PartitionDrive']
447 ['NumOccupiedPartitions'], 1)
448 self.assertEqual(cur_dtype['PartitionDrive']
449 ['NumFreePartitions'], 5)
450
451 self.assertEqual(prev_dtype['PartitionDrive']
452 ['NumFreePartitions'], 0)
453 self.assertEqual(prev_dtype['PartitionDrive']
454 ['PartitionSize'], 0)
455
456 def test_vsa_sched_create_single_volume_az(self):
457 global scheduled_volume
458 scheduled_volume = {}
459
460 def _fake_volume_get_az(context, volume_id):
461 LOG.debug(_("Test: Volume get: id=%(volume_id)s"), locals())
462 return {'id': volume_id, 'availability_zone': 'nova:host_3'}
463
464 self.stubs.Set(nova.db, 'volume_get', _fake_volume_get_az)
465 self.stubs.Set(nova.db,
466 'service_get_by_args', self._fake_service_get_by_args)
467 self.stubs.Set(self.sched,
468 'service_is_up', self._fake_service_is_up_True)
469
470 host = self.sched.schedule_create_volume(self.context,
471 123, availability_zone=None)
472
473 self.assertEqual(host, 'host_3')
474 self.assertEqual(scheduled_volume['id'], 123)
475 self.assertEqual(scheduled_volume['host'], 'host_3')
476
477 def test_vsa_sched_create_single_non_vsa_volume(self):
478 global scheduled_volume
479 scheduled_volume = {}
480
481 global global_volume
482 global_volume = {}
483 global_volume['volume_type_id'] = None
484
485 self.assertRaises(driver.NoValidHost,
486 self.sched.schedule_create_volume,
487 self.context,
488 123,
489 availability_zone=None)
490
491 def test_vsa_sched_create_single_volume(self):
492 global scheduled_volume
493 scheduled_volume = {}
494 self._set_service_states(host_num=10,
495 drive_type_start_ix=0,
496 drive_type_num=5,
497 init_num_drives=10,
498 exclude_host_list=['host_0', 'host_1'])
499 prev = self._generate_default_service_states()
500
501 global global_volume
502 global_volume = {}
503
504 drive_ix = 2
505 name = 'name_' + str(drive_ix)
506 volume_types.create(self.context, name,
507 extra_specs={'type': 'vsa_drive',
508 'drive_name': name,
509 'drive_type': 'type_' + str(drive_ix),
510 'drive_size': 1 + 100 * (drive_ix)})
511 self.created_types_lst.append(name)
512 volume_type = volume_types.get_volume_type_by_name(self.context, name)
513
514 global_volume['volume_type_id'] = volume_type['id']
515 global_volume['size'] = 0
516
517 host = self.sched.schedule_create_volume(self.context,
518 123, availability_zone=None)
519
520 self.assertEqual(host, 'host_2')
521 self.assertEqual(scheduled_volume['id'], 123)
522 self.assertEqual(scheduled_volume['host'], 'host_2')
523
524
525class VsaSchedulerTestCaseMostAvail(VsaSchedulerTestCase):
526
527 def setUp(self):
528 super(VsaSchedulerTestCaseMostAvail, self).setUp(
529 FakeVsaMostAvailCapacityScheduler())
530
531 def tearDown(self):
532 self.stubs.UnsetAll()
533 super(VsaSchedulerTestCaseMostAvail, self).tearDown()
534
535 def test_vsa_sched_create_single_volume(self):
536 global scheduled_volume
537 scheduled_volume = {}
538 self._set_service_states(host_num=10,
539 drive_type_start_ix=0,
540 drive_type_num=5,
541 init_num_drives=10,
542 exclude_host_list=['host_0', 'host_1'])
543 prev = self._generate_default_service_states()
544
545 global global_volume
546 global_volume = {}
547
548 drive_ix = 2
549 name = 'name_' + str(drive_ix)
550 volume_types.create(self.context, name,
551 extra_specs={'type': 'vsa_drive',
552 'drive_name': name,
553 'drive_type': 'type_' + str(drive_ix),
554 'drive_size': 1 + 100 * (drive_ix)})
555 self.created_types_lst.append(name)
556 volume_type = volume_types.get_volume_type_by_name(self.context, name)
557
558 global_volume['volume_type_id'] = volume_type['id']
559 global_volume['size'] = 0
560
561 host = self.sched.schedule_create_volume(self.context,
562 123, availability_zone=None)
563
564 self.assertEqual(host, 'host_9')
565 self.assertEqual(scheduled_volume['id'], 123)
566 self.assertEqual(scheduled_volume['host'], 'host_9')
567
568 def test_vsa_sched_create_volumes_simple(self):
569 global scheduled_volumes
570 scheduled_volumes = []
571 self._set_service_states(host_num=10,
572 drive_type_start_ix=0,
573 drive_type_num=5,
574 init_num_drives=10,
575 exclude_host_list=['host_1', 'host_3'])
576 prev = self._generate_default_service_states()
577 request_spec = self._get_vol_creation_request(num_vols=3, drive_ix=2)
578
579 self._print_service_states()
580
581 self.sched.schedule_create_volumes(self.context,
582 request_spec,
583 availability_zone=None)
584
585 self.assertEqual(len(scheduled_volumes), 3)
586 self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_9')
587 self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_8')
588 self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_7')
589
590 cur = self._get_service_states()
591 for host in ['host_9', 'host_8', 'host_7']:
592 cur_dtype = cur[host]['volume']['drive_qos_info']['name_2']
593 prev_dtype = prev[host]['volume']['drive_qos_info']['name_2']
594 self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
595 self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
596 prev_dtype['FullDrive']['NumFreeDrives'] - 1)
597 self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
598 prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
599
600 def test_vsa_sched_create_volumes_partition(self):
601 global scheduled_volumes
602 scheduled_volumes = []
603 self._set_service_states(host_num=5,
604 drive_type_start_ix=0,
605 drive_type_num=5,
606 init_num_drives=1,
607 exclude_host_list=['host_0', 'host_2'])
608 prev = self._generate_default_service_states()
609 request_spec = self._get_vol_creation_request(num_vols=3,
610 drive_ix=3,
611 size=50)
612 self.sched.schedule_create_volumes(self.context,
613 request_spec,
614 availability_zone=None)
615
616 self.assertEqual(len(scheduled_volumes), 3)
617 self.assertEqual(scheduled_volumes[0]['vol']['host'], 'host_4')
618 self.assertEqual(scheduled_volumes[1]['vol']['host'], 'host_3')
619 self.assertEqual(scheduled_volumes[2]['vol']['host'], 'host_1')
620
621 cur = self._get_service_states()
622 for host in ['host_1', 'host_3', 'host_4']:
623 cur_dtype = cur[host]['volume']['drive_qos_info']['name_3']
624 prev_dtype = prev[host]['volume']['drive_qos_info']['name_3']
625
626 self.assertEqual(cur_dtype['DriveType'], prev_dtype['DriveType'])
627 self.assertEqual(cur_dtype['FullDrive']['NumFreeDrives'],
628 prev_dtype['FullDrive']['NumFreeDrives'] - 1)
629 self.assertEqual(cur_dtype['FullDrive']['NumOccupiedDrives'],
630 prev_dtype['FullDrive']['NumOccupiedDrives'] + 1)
631
632 self.assertEqual(prev_dtype['PartitionDrive']
633 ['NumOccupiedPartitions'], 0)
634 self.assertEqual(cur_dtype['PartitionDrive']
635 ['NumOccupiedPartitions'], 1)
636 self.assertEqual(cur_dtype['PartitionDrive']
637 ['NumFreePartitions'], 5)
638 self.assertEqual(prev_dtype['PartitionDrive']
639 ['NumFreePartitions'], 0)
640 self.assertEqual(prev_dtype['PartitionDrive']
641 ['PartitionSize'], 0)
0642
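The fake service states built by `_generate_default_service_states` above encode a simple capacity model: per drive type, a total drive count, a per-drive capacity derived via `GB_TO_BYTES`, and free/occupied counters that the schedulers decrement and increment. A standalone sketch of that bookkeeping (helper and field names mirror the test code; the real helpers live in `nova/scheduler/vsa.py`):

```python
# Sketch of the per-host, per-drive-type capacity model used by the
# fake service states in the scheduler tests. Field names follow the
# dicts built in _generate_default_service_states.

def gb_to_bytes(gb):
    # mirrors vsa_sched.GB_TO_BYTES
    return gb * 1024 * 1024 * 1024


def make_drive_type(name_ix, host_ix, init_num_drives):
    # Host with index host_ix starts with host_ix drives of every
    # type already occupied, exactly as in the tests above.
    total_drives = 2 * (init_num_drives + host_ix)
    drive_capacity = gb_to_bytes(1 + 100 * name_ix)
    return {
        'Name': 'name_%d' % name_ix,
        'DriveType': 'type_%d' % name_ix,
        'TotalDrives': total_drives,
        'DriveCapacity': drive_capacity,
        'TotalCapacity': total_drives * drive_capacity,
        'AvailableCapacity': (total_drives - host_ix) * drive_capacity,
        'FullDrive': {'NumFreeDrives': total_drives - host_ix,
                      'NumOccupiedDrives': host_ix},
    }


# e.g. drive type 2 on host_1 with init_num_drives=10
dtype = make_drive_type(name_ix=2, host_ix=1, init_num_drives=10)
```

This is why `test_vsa_sched_create_volumes_simple` can assert that `NumFreeDrives` drops by exactly one per scheduled volume: provisioning a full drive moves one unit from free to occupied without touching `TotalDrives`.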
=== added file 'nova/tests/test_vsa.py'
--- nova/tests/test_vsa.py 1970-01-01 00:00:00 +0000
+++ nova/tests/test_vsa.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,182 @@
1# Copyright 2011 OpenStack LLC.
2# All Rights Reserved.
3#
4# Licensed under the Apache License, Version 2.0 (the "License"); you may
5# not use this file except in compliance with the License. You may obtain
6# a copy of the License at
7#
8# http://www.apache.org/licenses/LICENSE-2.0
9#
10# Unless required by applicable law or agreed to in writing, software
11# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13# License for the specific language governing permissions and limitations
14# under the License.
15
16import base64
17import stubout
18
19from xml.etree import ElementTree
20from xml.etree.ElementTree import Element, SubElement
21
22from nova import context
23from nova import db
24from nova import exception
25from nova import flags
26from nova import log as logging
27from nova import test
28from nova import vsa
29from nova import volume
30from nova.volume import volume_types
31from nova.vsa import utils as vsa_utils
32
33import nova.image.fake
34
35FLAGS = flags.FLAGS
36LOG = logging.getLogger('nova.tests.vsa')
37
38
39class VsaTestCase(test.TestCase):
40
41 def setUp(self):
42 super(VsaTestCase, self).setUp()
43 self.stubs = stubout.StubOutForTesting()
44 self.vsa_api = vsa.API()
45 self.volume_api = volume.API()
46
47 FLAGS.quota_volumes = 100
48 FLAGS.quota_gigabytes = 10000
49
50 self.context = context.get_admin_context()
51
52 volume_types.create(self.context,
53 'SATA_500_7200',
54 extra_specs={'type': 'vsa_drive',
55 'drive_name': 'SATA_500_7200',
56 'drive_type': 'SATA',
57 'drive_size': '500',
58 'drive_rpm': '7200'})
59
60 def fake_show_by_name(meh, context, name):
61 if name == 'wrong_image_name':
62 LOG.debug(_("Test: Emulate wrong VSA name. Raise"))
63 raise exception.ImageNotFound
64 return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
65
66 self.stubs.Set(nova.image.fake._FakeImageService,
67 'show_by_name',
68 fake_show_by_name)
69
70 def tearDown(self):
71 self.stubs.UnsetAll()
72 super(VsaTestCase, self).tearDown()
73
74 def test_vsa_create_delete_defaults(self):
75 param = {'display_name': 'VSA name test'}
76 vsa_ref = self.vsa_api.create(self.context, **param)
77 self.assertEqual(vsa_ref['display_name'], param['display_name'])
78 self.vsa_api.delete(self.context, vsa_ref['id'])
79
80 def test_vsa_create_delete_check_in_db(self):
81 vsa_list1 = self.vsa_api.get_all(self.context)
82 vsa_ref = self.vsa_api.create(self.context)
83 vsa_list2 = self.vsa_api.get_all(self.context)
84 self.assertEqual(len(vsa_list2), len(vsa_list1) + 1)
85
86 self.vsa_api.delete(self.context, vsa_ref['id'])
87 vsa_list3 = self.vsa_api.get_all(self.context)
88 self.assertEqual(len(vsa_list3), len(vsa_list2) - 1)
89
90 def test_vsa_create_delete_high_vc_count(self):
91 param = {'vc_count': FLAGS.max_vcs_in_vsa + 1}
92 vsa_ref = self.vsa_api.create(self.context, **param)
93 self.assertEqual(vsa_ref['vc_count'], FLAGS.max_vcs_in_vsa)
94 self.vsa_api.delete(self.context, vsa_ref['id'])
95
96 def test_vsa_create_wrong_image_name(self):
97 param = {'image_name': 'wrong_image_name'}
98 self.assertRaises(exception.ApiError,
99 self.vsa_api.create, self.context, **param)
100
101 def test_vsa_create_db_error(self):
102
103 def fake_vsa_create(context, options):
104 LOG.debug(_("Test: Emulate DB error. Raise"))
105 raise exception.Error
106
107 self.stubs.Set(nova.db.api, 'vsa_create', fake_vsa_create)
108 self.assertRaises(exception.ApiError,
109 self.vsa_api.create, self.context)
110
111 def test_vsa_create_wrong_storage_params(self):
112 vsa_list1 = self.vsa_api.get_all(self.context)
113 param = {'storage': [{'stub': 1}]}
114 self.assertRaises(exception.ApiError,
115 self.vsa_api.create, self.context, **param)
116 vsa_list2 = self.vsa_api.get_all(self.context)
117 self.assertEqual(len(vsa_list2), len(vsa_list1))
118
119 param = {'storage': [{'drive_name': 'wrong name'}]}
120 self.assertRaises(exception.ApiError,
121 self.vsa_api.create, self.context, **param)
122
123 def test_vsa_create_with_storage(self, multi_vol_creation=True):
124	        """Test creation of a VSA with back-end storage."""
125
126 FLAGS.vsa_multi_vol_creation = multi_vol_creation
127
128 param = {'storage': [{'drive_name': 'SATA_500_7200',
129 'num_drives': 3}]}
130 vsa_ref = self.vsa_api.create(self.context, **param)
131 self.assertEqual(vsa_ref['vol_count'], 3)
132 self.vsa_api.delete(self.context, vsa_ref['id'])
133
134 param = {'storage': [{'drive_name': 'SATA_500_7200',
135 'num_drives': 3}],
136 'shared': True}
137 vsa_ref = self.vsa_api.create(self.context, **param)
138 self.assertEqual(vsa_ref['vol_count'], 15)
139 self.vsa_api.delete(self.context, vsa_ref['id'])
140
141 def test_vsa_create_with_storage_single_volumes(self):
142 self.test_vsa_create_with_storage(multi_vol_creation=False)
143
144 def test_vsa_update(self):
145 vsa_ref = self.vsa_api.create(self.context)
146
147 param = {'vc_count': FLAGS.max_vcs_in_vsa + 1}
148 vsa_ref = self.vsa_api.update(self.context, vsa_ref['id'], **param)
149 self.assertEqual(vsa_ref['vc_count'], FLAGS.max_vcs_in_vsa)
150
151 param = {'vc_count': 2}
152 vsa_ref = self.vsa_api.update(self.context, vsa_ref['id'], **param)
153 self.assertEqual(vsa_ref['vc_count'], 2)
154
155 self.vsa_api.delete(self.context, vsa_ref['id'])
156
157 def test_vsa_generate_user_data(self):
158
159 FLAGS.vsa_multi_vol_creation = False
160 param = {'display_name': 'VSA name test',
161 'display_description': 'VSA desc test',
162 'vc_count': 2,
163 'storage': [{'drive_name': 'SATA_500_7200',
164 'num_drives': 3}]}
165 vsa_ref = self.vsa_api.create(self.context, **param)
166 volumes = self.vsa_api.get_all_vsa_drives(self.context,
167 vsa_ref['id'])
168
169 user_data = vsa_utils.generate_user_data(vsa_ref, volumes)
170 user_data = base64.b64decode(user_data)
171
172 LOG.debug(_("Test: user_data = %s"), user_data)
173
174 elem = ElementTree.fromstring(user_data)
175 self.assertEqual(elem.findtext('name'),
176 param['display_name'])
177 self.assertEqual(elem.findtext('description'),
178 param['display_description'])
179 self.assertEqual(elem.findtext('vc_count'),
180 str(param['vc_count']))
181
182 self.vsa_api.delete(self.context, vsa_ref['id'])
0183
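`test_vsa_generate_user_data` above only exercises `generate_user_data` through its output: a base64-encoded XML document whose `name`, `description` and `vc_count` elements are checked with `findtext`. A minimal sketch of the same round trip, assuming a hypothetical payload shape (the real implementation in `nova/vsa/utils.py` also embeds the volume list):

```python
import base64
from xml.etree import ElementTree
from xml.etree.ElementTree import Element, SubElement


def generate_user_data(vsa, vc_count):
    # Hypothetical minimal payload mirroring the fields asserted in
    # the test; not the actual nova/vsa/utils.py implementation.
    root = Element('vsa')
    SubElement(root, 'name').text = vsa['display_name']
    SubElement(root, 'description').text = vsa['display_description']
    SubElement(root, 'vc_count').text = str(vc_count)
    return base64.b64encode(ElementTree.tostring(root))


encoded = generate_user_data({'display_name': 'VSA name test',
                              'display_description': 'VSA desc test'},
                             vc_count=2)
# Consumer side: decode and parse, as the test does.
elem = ElementTree.fromstring(base64.b64decode(encoded))
```

Note that `findtext` returns strings, which is why the test compares `vc_count` against `str(param['vc_count'])` rather than the integer.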
=== added file 'nova/tests/test_vsa_volumes.py'
--- nova/tests/test_vsa_volumes.py 1970-01-01 00:00:00 +0000
+++ nova/tests/test_vsa_volumes.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,136 @@
1# Copyright 2011 OpenStack LLC.
2# All Rights Reserved.
3#
4# Licensed under the Apache License, Version 2.0 (the "License"); you may
5# not use this file except in compliance with the License. You may obtain
6# a copy of the License at
7#
8# http://www.apache.org/licenses/LICENSE-2.0
9#
10# Unless required by applicable law or agreed to in writing, software
11# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13# License for the specific language governing permissions and limitations
14# under the License.
15
16import stubout
17
18from nova import exception
19from nova import flags
20from nova import vsa
21from nova import volume
22from nova import db
23from nova import context
24from nova import test
25from nova import log as logging
26import nova.image.fake
27
28FLAGS = flags.FLAGS
29LOG = logging.getLogger('nova.tests.vsa.volumes')
30
31
32class VsaVolumesTestCase(test.TestCase):
33
34 def setUp(self):
35 super(VsaVolumesTestCase, self).setUp()
36 self.stubs = stubout.StubOutForTesting()
37 self.vsa_api = vsa.API()
38 self.volume_api = volume.API()
39 self.context = context.get_admin_context()
40
41 self.default_vol_type = self.vsa_api.get_vsa_volume_type(self.context)
42
43 def fake_show_by_name(meh, context, name):
44 return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}}
45
46 self.stubs.Set(nova.image.fake._FakeImageService,
47 'show_by_name',
48 fake_show_by_name)
49
50 param = {'display_name': 'VSA name test'}
51 vsa_ref = self.vsa_api.create(self.context, **param)
52 self.vsa_id = vsa_ref['id']
53
54 def tearDown(self):
55 if self.vsa_id:
56 self.vsa_api.delete(self.context, self.vsa_id)
57 self.stubs.UnsetAll()
58 super(VsaVolumesTestCase, self).tearDown()
59
60 def _default_volume_param(self):
61 return {
62 'size': 1,
63 'snapshot_id': None,
64 'name': 'Test volume name',
65 'description': 'Test volume desc name',
66 'volume_type': self.default_vol_type,
67 'metadata': {'from_vsa_id': self.vsa_id}
68 }
69
70 def _get_all_volumes_by_vsa(self):
71 return self.volume_api.get_all(self.context,
72 search_opts={'metadata': {"from_vsa_id": str(self.vsa_id)}})
73
74 def test_vsa_volume_create_delete(self):
75	        """Check that a volume is properly created and deleted."""
76 volume_param = self._default_volume_param()
77 volume_ref = self.volume_api.create(self.context, **volume_param)
78
79 self.assertEqual(volume_ref['display_name'],
80 volume_param['name'])
81 self.assertEqual(volume_ref['display_description'],
82 volume_param['description'])
83 self.assertEqual(volume_ref['size'],
84 volume_param['size'])
85 self.assertEqual(volume_ref['status'],
86 'creating')
87
88 vols2 = self._get_all_volumes_by_vsa()
89 self.assertEqual(1, len(vols2))
90 volume_ref = vols2[0]
91
92 self.assertEqual(volume_ref['display_name'],
93 volume_param['name'])
94 self.assertEqual(volume_ref['display_description'],
95 volume_param['description'])
96 self.assertEqual(volume_ref['size'],
97 volume_param['size'])
98 self.assertEqual(volume_ref['status'],
99 'creating')
100
101 self.volume_api.update(self.context,
102 volume_ref['id'], {'status': 'available'})
103 self.volume_api.delete(self.context, volume_ref['id'])
104
105 vols3 = self._get_all_volumes_by_vsa()
106	        self.assertEqual(1, len(vols3))
107 volume_ref = vols3[0]
108 self.assertEqual(volume_ref['status'],
109 'deleting')
110
111 def test_vsa_volume_delete_nonavail_volume(self):
112	        """Check deletion of a volume that is not in the available state."""
113 volume_param = self._default_volume_param()
114 volume_ref = self.volume_api.create(self.context, **volume_param)
115
116 self.volume_api.update(self.context,
117 volume_ref['id'], {'status': 'in-use'})
118 self.assertRaises(exception.ApiError,
119 self.volume_api.delete,
120 self.context, volume_ref['id'])
121
122 def test_vsa_volume_delete_vsa_with_volumes(self):
123	        """Check deletion of a VSA that still has volumes."""
124
125 vols1 = self._get_all_volumes_by_vsa()
126 for i in range(3):
127 volume_param = self._default_volume_param()
128 volume_ref = self.volume_api.create(self.context, **volume_param)
129
130 vols2 = self._get_all_volumes_by_vsa()
131 self.assertEqual(len(vols1) + 3, len(vols2))
132
133 self.vsa_api.delete(self.context, self.vsa_id)
134
135 vols3 = self._get_all_volumes_by_vsa()
136 self.assertEqual(len(vols1), len(vols3))
0137
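`_get_all_volumes_by_vsa` above filters volumes by metadata via `search_opts={'metadata': {...}}`; the matching loop in `nova/volume/api.py` (whose dict iteration is fixed in this branch, `searchdict.iteritems()` instead of `searchdict`) flattens the volume's key/value rows and requires every search pair to match. A standalone sketch of that matching logic, using `.items()` for Python 3:

```python
def volume_matches(volume_metadata_rows, searchdict):
    # Flatten the list of {key, value} metadata rows into a dict,
    # then require every search key/value pair to be present.
    volume_metadata = {}
    for row in volume_metadata_rows:
        volume_metadata[row['key']] = row['value']
    # Iterating the dict directly would yield only keys, so the
    # "k, v" unpacking needs items()/iteritems() - the bug fixed
    # in nova/volume/api.py by this branch.
    for k, v in searchdict.items():
        if k not in volume_metadata or volume_metadata[k] != v:
            return False
    return True


rows = [{'key': 'from_vsa_id', 'value': '123'}]
```

Note the tests pass `str(self.vsa_id)` in the search dict: metadata values are stored as strings, so an integer search value would never match.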
=== modified file 'nova/virt/libvirt.xml.template'
--- nova/virt/libvirt.xml.template 2011-08-12 21:23:10 +0000
+++ nova/virt/libvirt.xml.template 2011-08-26 22:09:26 +0000
@@ -135,7 +135,9 @@
     <interface type='bridge'>
         <source bridge='${nic.bridge_name}'/>
         <mac address='${nic.mac_address}'/>
-        <!-- <model type='virtio'/> CANT RUN virtio network right now -->
+#if $getVar('use_virtio_for_bridges', True)
+        <model type='virtio'/>
+#end if
         <filterref filter="nova-instance-${name}-${nic.id}">
             <parameter name="IP" value="${nic.ip_address}" />
             <parameter name="DHCPSERVER" value="${nic.dhcp_server}" />

=== modified file 'nova/virt/libvirt/connection.py'
--- nova/virt/libvirt/connection.py 2011-08-23 05:17:51 +0000
+++ nova/virt/libvirt/connection.py 2011-08-26 22:09:26 +0000
@@ -135,6 +135,9 @@
                     None,
                     'The default format a local_volume will be formatted with '
                     'on creation.')
+flags.DEFINE_bool('libvirt_use_virtio_for_bridges',
+                  False,
+                  'Use virtio for bridge interfaces')


 def get_connection(read_only):
@@ -1083,6 +1086,8 @@
                 'ebs_root': ebs_root,
                 'local_device': local_device,
                 'volumes': block_device_mapping,
+                'use_virtio_for_bridges':
+                    FLAGS.libvirt_use_virtio_for_bridges,
                 'ephemerals': ephemerals}

     root_device_name = driver.block_device_info_get_root(block_device_info)
=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-08-24 03:22:27 +0000
+++ nova/volume/api.py 2011-08-26 22:09:26 +0000
@@ -132,7 +132,7 @@
                 for i in volume.get('volume_metadata'):
                     volume_metadata[i['key']] = i['value']

-                for k, v in searchdict:
+                for k, v in searchdict.iteritems():
                     if k not in volume_metadata.keys()\
                        or volume_metadata[k] != v:
                         return False
@@ -141,6 +141,7 @@
             # search_option to filter_name mapping.
             filter_mapping = {'metadata': _check_metadata_match}

+            result = []
             for volume in volumes:
                 # go over all filters in the list
                 for opt, values in search_opts.iteritems():
@@ -150,10 +151,10 @@
                         # no such filter - ignore it, go to next filter
                         continue
                     else:
-                        if filter_func(volume, values) == False:
-                            # if one of conditions didn't match - remove
-                            volumes.remove(volume)
+                        if filter_func(volume, values):
+                            result.append(volume)
                             break
+            volumes = result
         return volumes

     def get_snapshot(self, context, snapshot_id):
@@ -255,3 +256,12 @@

         self.db.volume_metadata_update(context, volume_id, _metadata, True)
         return _metadata
+
+    def get_volume_metadata_value(self, volume, key):
+        """Get value of particular metadata key."""
+        metadata = volume.get('volume_metadata')
+        if metadata:
+            for i in volume['volume_metadata']:
+                if i['key'] == key:
+                    return i['value']
+        return None
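Reviewer note: the `get_all` change above stops removing items from `volumes` while iterating over it and builds a `result` list instead. A standalone sketch of the intended selection semantics (helper name hypothetical; written in modern Python for brevity):

```python
def filter_volumes_by_metadata(volumes, searchdict):
    """Return volumes whose metadata contains every (key, value) in searchdict.

    Builds a new result list rather than mutating the list being iterated,
    which is the bug the diff above fixes.
    """
    def matches(volume):
        metadata = dict((i['key'], i['value'])
                        for i in volume.get('volume_metadata') or [])
        return all(metadata.get(k) == v for k, v in searchdict.items())

    return [volume for volume in volumes if matches(volume)]


vols = [
    {'id': 1, 'volume_metadata': [{'key': 'to_vsa_id', 'value': '7'}]},
    {'id': 2, 'volume_metadata': []},
]
# only the volume carrying to_vsa_id == '7' survives the filter
print(filter_volumes_by_metadata(vols, {'to_vsa_id': '7'}))
```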
=== modified file 'nova/volume/driver.py'
--- nova/volume/driver.py 2011-08-25 22:22:51 +0000
+++ nova/volume/driver.py 2011-08-26 22:09:26 +0000
@@ -22,11 +22,13 @@

 import time
 import os
+from xml.etree import ElementTree

 from nova import exception
 from nova import flags
 from nova import log as logging
 from nova import utils
+from nova.volume import volume_types


 LOG = logging.getLogger("nova.volume.driver")
@@ -212,6 +214,11 @@
         """Make sure volume is exported."""
         raise NotImplementedError()

+    def get_volume_stats(self, refresh=False):
+        """Return the current state of the volume service. If 'refresh' is
+        True, run the update first."""
+        return None
+

 class AOEDriver(VolumeDriver):
     """Implements AOE specific volume commands."""
@@ -802,3 +809,268 @@
         if match:
             matches.append(entry)
         return matches
812
813
814class ZadaraBEDriver(ISCSIDriver):
815 """Performs actions to configure Zadara BE module."""
816
817 def _is_vsa_volume(self, volume):
818 return volume_types.is_vsa_volume(volume['volume_type_id'])
819
820 def _is_vsa_drive(self, volume):
821 return volume_types.is_vsa_drive(volume['volume_type_id'])
822
823 def _not_vsa_volume_or_drive(self, volume):
824 """Returns True if volume is not VSA BE volume."""
825 if not volume_types.is_vsa_object(volume['volume_type_id']):
826 LOG.debug(_("\tVolume %s is NOT VSA volume"), volume['name'])
827 return True
828 else:
829 return False
830
831 def check_for_setup_error(self):
832 """No setup necessary for Zadara BE."""
833 pass
834
835 """ Volume Driver methods """
836 def create_volume(self, volume):
837 """Creates BE volume."""
838 if self._not_vsa_volume_or_drive(volume):
839 return super(ZadaraBEDriver, self).create_volume(volume)
840
841 if self._is_vsa_volume(volume):
842 LOG.debug(_("\tFE VSA Volume %s creation - do nothing"),
843 volume['name'])
844 return
845
846 if int(volume['size']) == 0:
847 sizestr = '0' # indicates full-partition
848 else:
849 sizestr = '%s' % (int(volume['size']) << 30) # size in bytes
850
851 # Set the qos-str to default type sas
852 qosstr = 'SAS_1000'
853 volume_type = volume_types.get_volume_type(None,
854 volume['volume_type_id'])
855 if volume_type is not None:
856 qosstr = volume_type['extra_specs']['drive_type'] + \
857 ("_%s" % volume_type['extra_specs']['drive_size'])
858
859 vsa_id = None
860 for i in volume.get('volume_metadata'):
861 if i['key'] == 'to_vsa_id':
862 vsa_id = i['value']
863 break
864
865 try:
866 self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
867 'create_qospart',
868 '--qos', qosstr,
869 '--pname', volume['name'],
870 '--psize', sizestr,
871 '--vsaid', vsa_id,
872 run_as_root=True,
873 check_exit_code=0)
874 except exception.ProcessExecutionError:
875 LOG.debug(_("VSA BE create_volume for %s failed"), volume['name'])
876 raise
877
878 LOG.debug(_("VSA BE create_volume for %s succeeded"), volume['name'])
879
880 def delete_volume(self, volume):
881 """Deletes BE volume."""
882 if self._not_vsa_volume_or_drive(volume):
883 return super(ZadaraBEDriver, self).delete_volume(volume)
884
885 if self._is_vsa_volume(volume):
886 LOG.debug(_("\tFE VSA Volume %s deletion - do nothing"),
887 volume['name'])
888 return
889
890 try:
891 self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
892 'delete_partition',
893 '--pname', volume['name'],
894 run_as_root=True,
895 check_exit_code=0)
896 except exception.ProcessExecutionError:
897 LOG.debug(_("VSA BE delete_volume for %s failed"), volume['name'])
898 return
899
900        LOG.debug(_("VSA BE delete_volume for %s succeeded"), volume['name'])
901
902 def local_path(self, volume):
903 if self._not_vsa_volume_or_drive(volume):
904 return super(ZadaraBEDriver, self).local_path(volume)
905
906 if self._is_vsa_volume(volume):
907 LOG.debug(_("\tFE VSA Volume %s local path call - call discover"),
908 volume['name'])
909 return super(ZadaraBEDriver, self).discover_volume(None, volume)
910
911 raise exception.Error(_("local_path not supported"))
912
913 def ensure_export(self, context, volume):
914 """ensure BE export for a volume"""
915 if self._not_vsa_volume_or_drive(volume):
916 return super(ZadaraBEDriver, self).ensure_export(context, volume)
917
918 if self._is_vsa_volume(volume):
919 LOG.debug(_("\tFE VSA Volume %s ensure export - do nothing"),
920 volume['name'])
921 return
922
923 try:
924 iscsi_target = self.db.volume_get_iscsi_target_num(context,
925 volume['id'])
926 except exception.NotFound:
927 LOG.info(_("Skipping ensure_export. No iscsi_target " +
928 "provisioned for volume: %d"), volume['id'])
929 return
930
931 try:
932 ret = self._common_be_export(context, volume, iscsi_target)
933 except exception.ProcessExecutionError:
934 return
935 return ret
936
937 def create_export(self, context, volume):
938 """create BE export for a volume"""
939 if self._not_vsa_volume_or_drive(volume):
940 return super(ZadaraBEDriver, self).create_export(context, volume)
941
942 if self._is_vsa_volume(volume):
943 LOG.debug(_("\tFE VSA Volume %s create export - do nothing"),
944 volume['name'])
945 return
946
947 self._ensure_iscsi_targets(context, volume['host'])
948 iscsi_target = self.db.volume_allocate_iscsi_target(context,
949 volume['id'],
950 volume['host'])
951 try:
952 ret = self._common_be_export(context, volume, iscsi_target)
953 except:
954 raise exception.ProcessExecutionError
955 return ret
956
957 def remove_export(self, context, volume):
958 """Removes BE export for a volume."""
959 if self._not_vsa_volume_or_drive(volume):
960 return super(ZadaraBEDriver, self).remove_export(context, volume)
961
962 if self._is_vsa_volume(volume):
963 LOG.debug(_("\tFE VSA Volume %s remove export - do nothing"),
964 volume['name'])
965 return
966
967 try:
968 iscsi_target = self.db.volume_get_iscsi_target_num(context,
969 volume['id'])
970 except exception.NotFound:
971 LOG.info(_("Skipping remove_export. No iscsi_target " +
972 "provisioned for volume: %d"), volume['id'])
973 return
974
975 try:
976 self._sync_exec('/var/lib/zadara/bin/zadara_sncfg',
977 'remove_export',
978 '--pname', volume['name'],
979 '--tid', iscsi_target,
980 run_as_root=True,
981 check_exit_code=0)
982 except exception.ProcessExecutionError:
983 LOG.debug(_("VSA BE remove_export for %s failed"), volume['name'])
984 return
985
986    def create_snapshot(self, snapshot):
987        """Nothing required for snapshot"""
988        if self._not_vsa_volume_or_drive(snapshot):
989            return super(ZadaraBEDriver, self).create_snapshot(snapshot)
990
991        pass
992
993    def delete_snapshot(self, snapshot):
994        """Nothing required to delete a snapshot"""
995        if self._not_vsa_volume_or_drive(snapshot):
996            return super(ZadaraBEDriver, self).delete_snapshot(snapshot)
997
998        pass
999
1000 """ Internal BE Volume methods """
1001 def _common_be_export(self, context, volume, iscsi_target):
1002 """
1003 Common logic that asks zadara_sncfg to setup iSCSI target/lun for
1004 this volume
1005 """
1006 (out, err) = self._sync_exec(
1007 '/var/lib/zadara/bin/zadara_sncfg',
1008 'create_export',
1009 '--pname', volume['name'],
1010 '--tid', iscsi_target,
1011 run_as_root=True,
1012 check_exit_code=0)
1013
1014 result_xml = ElementTree.fromstring(out)
1015 response_node = result_xml.find("Sn")
1016 if response_node is None:
1017 msg = "Malformed response from zadara_sncfg"
1018 raise exception.Error(msg)
1019
1020 sn_ip = response_node.findtext("SnIp")
1021 sn_iqn = response_node.findtext("IqnName")
1022 iscsi_portal = sn_ip + ":3260," + ("%s" % iscsi_target)
1023
1024 model_update = {}
1025 model_update['provider_location'] = ("%s %s" %
1026 (iscsi_portal,
1027 sn_iqn))
1028 return model_update
1029
1030 def _get_qosgroup_summary(self):
1031 """gets the list of qosgroups from Zadara BE"""
1032 try:
1033 (out, err) = self._sync_exec(
1034 '/var/lib/zadara/bin/zadara_sncfg',
1035 'get_qosgroups_xml',
1036 run_as_root=True,
1037 check_exit_code=0)
1038 except exception.ProcessExecutionError:
1039 LOG.debug(_("Failed to retrieve QoS info"))
1040 return {}
1041
1042 qos_groups = {}
1043 result_xml = ElementTree.fromstring(out)
1044 for element in result_xml.findall('QosGroup'):
1045 qos_group = {}
1046 # get the name of the group.
1047 # If we cannot find it, forget this element
1048 group_name = element.findtext("Name")
1049 if not group_name:
1050 continue
1051
1052 # loop through all child nodes & fill up attributes of this group
1053 for child in element.getchildren():
1054 # two types of elements - property of qos-group & sub property
1055 # classify them accordingly
1056 if child.text:
1057 qos_group[child.tag] = int(child.text) \
1058 if child.text.isdigit() else child.text
1059 else:
1060 subelement = {}
1061 for subchild in child.getchildren():
1062 subelement[subchild.tag] = int(subchild.text) \
1063 if subchild.text.isdigit() else subchild.text
1064 qos_group[child.tag] = subelement
1065
1066 # Now add this group to the master qos_groups
1067 qos_groups[group_name] = qos_group
1068
1069 return qos_groups
1070
1071 def get_volume_stats(self, refresh=False):
1072 """Return the current state of the volume service. If 'refresh' is
1073 True, run the update first."""
1074
1075 drive_info = self._get_qosgroup_summary()
1076 return {'drive_qos_info': drive_info}
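Reviewer note: `_get_qosgroup_summary` flattens the XML emitted by `zadara_sncfg get_qosgroups_xml` into nested dicts keyed by group name. A self-contained sketch of that parsing (the sample XML below is invented for illustration, and the whitespace check is slightly hardened compared with the driver's `if child.text:`):

```python
from xml.etree import ElementTree

SAMPLE = """
<Groups>
  <QosGroup>
    <Name>SAS_1000</Name>
    <AvailableCapacity>500</AvailableCapacity>
    <DriveInfo>
      <Type>SAS</Type>
      <Size>1000</Size>
    </DriveInfo>
  </QosGroup>
</Groups>
"""


def parse_qos_groups(xml_text):
    """Mirror the driver's parsing: numeric leaf text becomes int, elements
    with children become nested dicts, and groups are keyed by Name."""
    qos_groups = {}
    for element in ElementTree.fromstring(xml_text).findall('QosGroup'):
        group_name = element.findtext('Name')
        if not group_name:
            continue  # skip groups without a name, as the driver does
        group = {}
        for child in element:
            if child.text and child.text.strip():
                text = child.text.strip()
                group[child.tag] = int(text) if text.isdigit() else text
            else:
                # element has sub-elements: collect them into a nested dict
                group[child.tag] = {
                    sub.tag: int(sub.text) if sub.text.isdigit() else sub.text
                    for sub in child}
        qos_groups[group_name] = group
    return qos_groups
```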
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py 2011-06-02 21:23:05 +0000
+++ nova/volume/manager.py 2011-08-26 22:09:26 +0000
@@ -48,7 +48,9 @@
 from nova import flags
 from nova import log as logging
 from nova import manager
+from nova import rpc
 from nova import utils
+from nova.volume import volume_types


 LOG = logging.getLogger('nova.volume.manager')
@@ -60,6 +62,8 @@
                     'Driver to use for volume creation')
 flags.DEFINE_boolean('use_local_volumes', True,
                      'if True, will not discover local volumes')
+flags.DEFINE_boolean('volume_force_update_capabilities', False,
+                     'if True will force update capabilities on each check')


 class VolumeManager(manager.SchedulerDependentManager):
@@ -74,6 +78,7 @@
         # NOTE(vish): Implementation specific db handling is done
         #             by the driver.
         self.driver.db = self.db
+        self._last_volume_stats = []

     def init_host(self):
         """Do any initialization that needs to be run if this is a
@@ -123,6 +128,7 @@
         except Exception:
             self.db.volume_update(context,
                                   volume_ref['id'], {'status': 'error'})
+            self._notify_vsa(context, volume_ref, 'error')
             raise

         now = utils.utcnow()
@@ -130,8 +136,29 @@
                               volume_ref['id'], {'status': 'available',
                                                  'launched_at': now})
         LOG.debug(_("volume %s: created successfully"), volume_ref['name'])
+        self._notify_vsa(context, volume_ref, 'available')
+        self._reset_stats()
         return volume_id

+    def _notify_vsa(self, context, volume_ref, status):
+        if volume_ref['volume_type_id'] is None:
+            return
+
+        if volume_types.is_vsa_drive(volume_ref['volume_type_id']):
+            vsa_id = None
+            for i in volume_ref.get('volume_metadata'):
+                if i['key'] == 'to_vsa_id':
+                    vsa_id = int(i['value'])
+                    break
+
+            if vsa_id:
+                rpc.cast(context,
+                         FLAGS.vsa_topic,
+                         {"method": "vsa_volume_created",
+                          "args": {"vol_id": volume_ref['id'],
+                                   "vsa_id": vsa_id,
+                                   "status": status}})
+
     def delete_volume(self, context, volume_id):
         """Deletes and unexports volume."""
         context = context.elevated()
@@ -141,6 +168,7 @@
         if volume_ref['host'] != self.host:
             raise exception.Error(_("Volume is not local to this node"))

+        self._reset_stats()
         try:
             LOG.debug(_("volume %s: removing export"), volume_ref['name'])
             self.driver.remove_export(context, volume_ref)
@@ -231,3 +259,53 @@
         instance_ref = self.db.instance_get(context, instance_id)
         for volume in instance_ref['volumes']:
             self.driver.check_for_export(context, volume['id'])
+
+    def periodic_tasks(self, context=None):
+        """Tasks to be run at a periodic interval."""
+
+        error_list = []
+        try:
+            self._report_driver_status()
+        except Exception as ex:
+            LOG.warning(_("Error during report_driver_status(): %s"),
+                        unicode(ex))
+            error_list.append(ex)
+
+        super(VolumeManager, self).periodic_tasks(context)
+
+        return error_list
+
+    def _volume_stats_changed(self, stat1, stat2):
+        if FLAGS.volume_force_update_capabilities:
+            return True
+        if len(stat1) != len(stat2):
+            return True
+        for (k, v) in stat1.iteritems():
+            if (k, v) not in stat2.iteritems():
+                return True
+        return False
+
+    def _report_driver_status(self):
+        volume_stats = self.driver.get_volume_stats(refresh=True)
+        if volume_stats:
+            LOG.info(_("Checking volume capabilities"))
+
+            if self._volume_stats_changed(self._last_volume_stats,
+                                          volume_stats):
+                LOG.info(_("New capabilities found: %s"), volume_stats)
+                self._last_volume_stats = volume_stats
+
+                # This will grab info about the host and queue it
+                # to be sent to the Schedulers.
+                self.update_service_capabilities(self._last_volume_stats)
+            else:
+                # avoid repeating fanouts
+                self.update_service_capabilities(None)
+
+    def _reset_stats(self):
+        LOG.info(_("Clear capabilities"))
+        self._last_volume_stats = []
+
+    def notification(self, context, event):
+        LOG.info(_("Notification {%s} received"), event)
+        self._reset_stats()
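Reviewer note: `_report_driver_status` only fans capabilities out to the schedulers when `_volume_stats_changed` detects a difference (or the force flag is set). The comparison can be sketched on plain dicts (helper name hypothetical; the flag is modeled as a parameter):

```python
def volume_stats_changed(old, new, force_update=False):
    """Return True when new stats differ from the last reported ones.

    Mirrors VolumeManager._volume_stats_changed: the force-update flag
    short-circuits to True, otherwise compare sizes and key/value pairs.
    """
    if force_update:
        return True
    if len(old) != len(new):
        return True
    # same size: changed iff any key/value pair disagrees
    return any(new.get(k) != v for k, v in old.items())
```

When nothing changed, the manager reports `None` instead of re-sending the same capabilities, avoiding repeated fanout casts.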
=== modified file 'nova/volume/volume_types.py'
--- nova/volume/volume_types.py 2011-08-24 17:16:20 +0000
+++ nova/volume/volume_types.py 2011-08-26 22:09:26 +0000
@@ -100,20 +100,22 @@
                     continue
                 else:
                     if filter_func(type_args, values):
-                        # if one of conditions didn't match - remove
                         result[type_name] = type_args
                         break
         vol_types = result
     return vol_types


-def get_volume_type(context, id):
+def get_volume_type(ctxt, id):
     """Retrieves single volume type by id."""
     if id is None:
         raise exception.InvalidVolumeType(volume_type=id)

+    if ctxt is None:
+        ctxt = context.get_admin_context()
+
     try:
-        return db.volume_type_get(context, id)
+        return db.volume_type_get(ctxt, id)
     except exception.DBError:
         raise exception.ApiError(_("Unknown volume type: %s") % id)

@@ -127,3 +129,38 @@
         return db.volume_type_get_by_name(context, name)
     except exception.DBError:
         raise exception.ApiError(_("Unknown volume type: %s") % name)
+
+
+def is_key_value_present(volume_type_id, key, value, volume_type=None):
+    if volume_type_id is None:
+        return False
+
+    if volume_type is None:
+        volume_type = get_volume_type(context.get_admin_context(),
+                                      volume_type_id)
+    if volume_type.get('extra_specs') is None or\
+       volume_type['extra_specs'].get(key) != value:
+        return False
+    else:
+        return True
+
+
+def is_vsa_drive(volume_type_id, volume_type=None):
+    return is_key_value_present(volume_type_id,
+                                'type', 'vsa_drive', volume_type)
+
+
+def is_vsa_volume(volume_type_id, volume_type=None):
+    return is_key_value_present(volume_type_id,
+                                'type', 'vsa_volume', volume_type)
+
+
+def is_vsa_object(volume_type_id):
+    if volume_type_id is None:
+        return False
+
+    volume_type = get_volume_type(context.get_admin_context(),
+                                  volume_type_id)
+
+    return is_vsa_drive(volume_type_id, volume_type) or\
+           is_vsa_volume(volume_type_id, volume_type)
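Reviewer note: `is_key_value_present` boils down to an `extra_specs` lookup. With the DB access stubbed out, the predicate behaves like this sketch (a volume-type dict is passed directly; names hypothetical):

```python
def is_key_value_present(volume_type, key, value):
    """True when the volume type carries extra_specs[key] == value.

    Simplified from volume_types.is_key_value_present: the lookup of the
    volume type by id is stubbed out and a dict is passed directly.
    """
    if volume_type is None:
        return False
    extra_specs = volume_type.get('extra_specs') or {}
    return extra_specs.get(key) == value


def is_vsa_drive(volume_type):
    # back-end drive types are tagged with extra_specs['type'] == 'vsa_drive'
    return is_key_value_present(volume_type, 'type', 'vsa_drive')


def is_vsa_volume(volume_type):
    # front-end VSA volumes are tagged with extra_specs['type'] == 'vsa_volume'
    return is_key_value_present(volume_type, 'type', 'vsa_volume')
```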
=== added directory 'nova/vsa'
=== added file 'nova/vsa/__init__.py'
--- nova/vsa/__init__.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/__init__.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,18 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18from nova.vsa.api import API
=== added file 'nova/vsa/api.py'
--- nova/vsa/api.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/api.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,411 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""
19Handles all requests relating to Virtual Storage Arrays (VSAs).
20
21Experimental code. Requires special VSA image.
22For assistance and guidelines please contact
23 Zadara Storage Inc & the OpenStack community
24"""
25
26import sys
27
28from nova import compute
29from nova import db
30from nova import exception
31from nova import flags
32from nova import log as logging
33from nova import rpc
34from nova import volume
35from nova.compute import instance_types
36from nova.db import base
37from nova.volume import volume_types
38
39
40class VsaState:
41 CREATING = 'creating' # VSA creating (not ready yet)
42 LAUNCHING = 'launching' # Launching VCs (all BE volumes were created)
43 CREATED = 'created' # VSA fully created and ready for use
44 PARTIAL = 'partial' # Some BE drives were allocated
45 FAILED = 'failed' # Some BE storage allocations failed
46 DELETING = 'deleting' # VSA started the deletion procedure
47
48
49FLAGS = flags.FLAGS
50flags.DEFINE_string('vsa_ec2_access_key', None,
51 'EC2 access key used by VSA for accessing nova')
52flags.DEFINE_string('vsa_ec2_user_id', None,
53 'User ID used by VSA for accessing nova')
54flags.DEFINE_boolean('vsa_multi_vol_creation', True,
55 'Ask scheduler to create multiple volumes in one call')
56flags.DEFINE_string('vsa_volume_type_name', 'VSA volume type',
57 'Name of volume type associated with FE VSA volumes')
58
59LOG = logging.getLogger('nova.vsa')
60
61
62class API(base.Base):
63 """API for interacting with the VSA manager."""
64
65 def __init__(self, compute_api=None, volume_api=None, **kwargs):
66 self.compute_api = compute_api or compute.API()
67 self.volume_api = volume_api or volume.API()
68 super(API, self).__init__(**kwargs)
69
70 def _check_volume_type_correctness(self, vol_type):
71 if vol_type.get('extra_specs') == None or\
72 vol_type['extra_specs'].get('type') != 'vsa_drive' or\
73 vol_type['extra_specs'].get('drive_type') == None or\
74 vol_type['extra_specs'].get('drive_size') == None:
75
76 raise exception.ApiError(_("Invalid drive type %s")
77 % vol_type['name'])
78
79 def _get_default_vsa_instance_type(self):
80 return instance_types.get_instance_type_by_name(
81 FLAGS.default_vsa_instance_type)
82
83 def _check_storage_parameters(self, context, vsa_name, storage,
84 shared, first_index=0):
85 """
86 Translates storage array of disks to the list of volumes
87 :param storage: List of dictionaries with following keys:
88                        drive_name, num_drives, size
89 :param shared: Specifies if storage is dedicated or shared.
90 For shared storage disks split into partitions
91 """
92 volume_params = []
93 for node in storage:
94
95 name = node.get('drive_name', None)
96 num_disks = node.get('num_drives', 1)
97
98 if name is None:
99 raise exception.ApiError(_("No drive_name param found in %s")
100 % node)
101 try:
102 vol_type = volume_types.get_volume_type_by_name(context, name)
103 except exception.NotFound:
104 raise exception.ApiError(_("Invalid drive type name %s")
105 % name)
106
107 self._check_volume_type_correctness(vol_type)
108
109 # if size field present - override disk size specified in DB
110 size = int(node.get('size',
111 vol_type['extra_specs'].get('drive_size')))
112
113 if shared:
114 part_size = FLAGS.vsa_part_size_gb
115 total_capacity = num_disks * size
116 num_volumes = total_capacity / part_size
117 size = part_size
118 else:
119 num_volumes = num_disks
120 size = 0 # special handling for full drives
121
122 for i in range(num_volumes):
123 volume_name = "drive-%03d" % first_index
124 first_index += 1
125 volume_desc = 'BE volume for VSA %s type %s' % \
126 (vsa_name, name)
127 volume = {
128 'size': size,
129 'name': volume_name,
130 'description': volume_desc,
131 'volume_type_id': vol_type['id'],
132 }
133 volume_params.append(volume)
134
135 return volume_params
136
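Reviewer note: the shared-storage arithmetic in `_check_storage_parameters` above (partitions = disks × size / partition size, with size 0 as the full-drive sentinel) can be exercised in isolation. In this sketch the helper name is hypothetical and `part_size_gb` stands in for `FLAGS.vsa_part_size_gb`:

```python
def plan_partitions(num_disks, disk_size_gb, shared, part_size_gb=100,
                    first_index=0):
    """Illustrative version of the VSA storage translation: for shared
    storage, disks are split into fixed-size partitions; for dedicated
    storage, each disk becomes one full-drive volume (size 0 sentinel)."""
    if shared:
        num_volumes = (num_disks * disk_size_gb) // part_size_gb
        size = part_size_gb
    else:
        num_volumes = num_disks
        size = 0  # special handling for full drives
    return [{'name': 'drive-%03d' % (first_index + i), 'size': size}
            for i in range(num_volumes)]
```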
137 def create(self, context, display_name='', display_description='',
138 vc_count=1, instance_type=None, image_name=None,
139 availability_zone=None, storage=[], shared=None):
140 """
141 Provision VSA instance with corresponding compute instances
142 and associated volumes
143 :param storage: List of dictionaries with following keys:
144                        drive_name, num_drives, size
145 :param shared: Specifies if storage is dedicated or shared.
146 For shared storage disks split into partitions
147 """
148
149 LOG.info(_("*** Experimental VSA code ***"))
150
151 if vc_count > FLAGS.max_vcs_in_vsa:
152 LOG.warning(_("Requested number of VCs (%d) is too high."\
153 " Setting to default"), vc_count)
154 vc_count = FLAGS.max_vcs_in_vsa
155
156 if instance_type is None:
157 instance_type = self._get_default_vsa_instance_type()
158
159 if availability_zone is None:
160 availability_zone = FLAGS.storage_availability_zone
161
162 if storage is None:
163 storage = []
164
165 if shared is None or shared == 'False' or shared == False:
166 shared = False
167 else:
168 shared = True
169
170 # check if image is ready before starting any work
171 if image_name is None:
172 image_name = FLAGS.vc_image_name
173 try:
174 image_service = self.compute_api.image_service
175 vc_image = image_service.show_by_name(context, image_name)
176 vc_image_href = vc_image['id']
177 except exception.ImageNotFound:
178 raise exception.ApiError(_("Failed to find configured image %s")
179 % image_name)
180
181 options = {
182 'display_name': display_name,
183 'display_description': display_description,
184 'project_id': context.project_id,
185 'availability_zone': availability_zone,
186 'instance_type_id': instance_type['id'],
187 'image_ref': vc_image_href,
188 'vc_count': vc_count,
189 'status': VsaState.CREATING,
190 }
191 LOG.info(_("Creating VSA: %s") % options)
192
193 # create DB entry for VSA instance
194 try:
195 vsa_ref = self.db.vsa_create(context, options)
196 except exception.Error:
197 raise exception.ApiError(_(sys.exc_info()[1]))
198 vsa_id = vsa_ref['id']
199 vsa_name = vsa_ref['name']
200
201 # check storage parameters
202 try:
203 volume_params = self._check_storage_parameters(context, vsa_name,
204 storage, shared)
205 except exception.ApiError:
206 self.db.vsa_destroy(context, vsa_id)
207 raise exception.ApiError(_("Error in storage parameters: %s")
208 % storage)
209
210 # after creating DB entry, re-check and set some defaults
211 updates = {}
212 if (not hasattr(vsa_ref, 'display_name') or
213 vsa_ref.display_name is None or
214 vsa_ref.display_name == ''):
215 updates['display_name'] = display_name = vsa_name
216 updates['vol_count'] = len(volume_params)
217 vsa_ref = self.update(context, vsa_id, **updates)
218
219 # create volumes
220 if FLAGS.vsa_multi_vol_creation:
221 if len(volume_params) > 0:
222 request_spec = {
223 'num_volumes': len(volume_params),
224 'vsa_id': str(vsa_id),
225 'volumes': volume_params,
226 }
227
228 rpc.cast(context,
229 FLAGS.scheduler_topic,
230 {"method": "create_volumes",
231 "args": {"topic": FLAGS.volume_topic,
232 "request_spec": request_spec,
233 "availability_zone": availability_zone}})
234 else:
235 # create BE volumes one-by-one
236 for vol in volume_params:
237 try:
238 vol_name = vol['name']
239 vol_size = vol['size']
240 vol_type_id = vol['volume_type_id']
241 LOG.debug(_("VSA ID %(vsa_id)d %(vsa_name)s: Create "\
242 "volume %(vol_name)s, %(vol_size)d GB, "\
243 "type %(vol_type_id)s"), locals())
244
245 vol_type = volume_types.get_volume_type(context,
246 vol['volume_type_id'])
247
248 vol_ref = self.volume_api.create(context,
249 vol_size,
250 None,
251 vol_name,
252 vol['description'],
253 volume_type=vol_type,
254 metadata=dict(to_vsa_id=str(vsa_id)),
255 availability_zone=availability_zone)
256 except:
257 self.update_vsa_status(context, vsa_id,
258 status=VsaState.PARTIAL)
259 raise
260
261 if len(volume_params) == 0:
262 # No BE volumes - ask VSA manager to start VCs
263 rpc.cast(context,
264 FLAGS.vsa_topic,
265 {"method": "create_vsa",
266 "args": {"vsa_id": str(vsa_id)}})
267
268 return vsa_ref
269
270 def update_vsa_status(self, context, vsa_id, status):
271 updates = dict(status=status)
272 LOG.info(_("VSA ID %(vsa_id)d: Update VSA status to %(status)s"),
273 locals())
274 return self.update(context, vsa_id, **updates)
275
276 def update(self, context, vsa_id, **kwargs):
277 """Updates the VSA instance in the datastore.
278
279 :param context: The security context
280 :param vsa_id: ID of the VSA instance to update
281 :param kwargs: All additional keyword args are treated
282 as data fields of the instance to be
283 updated
284
285 :returns: None
286 """
287 LOG.info(_("VSA ID %(vsa_id)d: Update VSA call"), locals())
288
289 updatable_fields = ['status', 'vc_count', 'vol_count',
290 'display_name', 'display_description']
291 changes = {}
292 for field in updatable_fields:
293 if field in kwargs:
294 changes[field] = kwargs[field]
295
296 vc_count = kwargs.get('vc_count', None)
297 if vc_count is not None:
298 # VP-TODO: This request may want to update number of VCs
299 # Get number of current VCs and add/delete VCs appropriately
300 vsa = self.get(context, vsa_id)
301 vc_count = int(vc_count)
302 if vc_count > FLAGS.max_vcs_in_vsa:
303 LOG.warning(_("Requested number of VCs (%d) is too high."\
304 " Setting to default"), vc_count)
305 vc_count = FLAGS.max_vcs_in_vsa
306
307 if vsa['vc_count'] != vc_count:
308 self.update_num_vcs(context, vsa, vc_count)
309 changes['vc_count'] = vc_count
310
311 return self.db.vsa_update(context, vsa_id, changes)
312
313 def update_num_vcs(self, context, vsa, vc_count):
314 vsa_name = vsa['name']
315 old_vc_count = int(vsa['vc_count'])
316 if vc_count > old_vc_count:
317 add_cnt = vc_count - old_vc_count
318 LOG.debug(_("Adding %(add_cnt)s VCs to VSA %(vsa_name)s."),
319 locals())
320 # VP-TODO: actual code for adding new VCs
321
322 elif vc_count < old_vc_count:
323 del_cnt = old_vc_count - vc_count
324 LOG.debug(_("Deleting %(del_cnt)s VCs from VSA %(vsa_name)s."),
325 locals())
326 # VP-TODO: actual code for deleting extra VCs
327
328 def _force_volume_delete(self, ctxt, volume):
329 """Delete a volume, bypassing the check that it must be available."""
330 host = volume['host']
331 if not host:
332 # Deleting volume from database and skipping rpc.
333 self.db.volume_destroy(ctxt, volume['id'])
334 return
335
336 rpc.cast(ctxt,
337 self.db.queue_get_for(ctxt, FLAGS.volume_topic, host),
338 {"method": "delete_volume",
339 "args": {"volume_id": volume['id']}})
340
341 def delete_vsa_volumes(self, context, vsa_id, direction,
342 force_delete=True):
343 if direction == "FE":
344 volumes = self.get_all_vsa_volumes(context, vsa_id)
345 else:
346 volumes = self.get_all_vsa_drives(context, vsa_id)
347
348 for volume in volumes:
349 try:
350 vol_name = volume['name']
351 LOG.info(_("VSA ID %(vsa_id)s: Deleting %(direction)s "\
352 "volume %(vol_name)s"), locals())
353 self.volume_api.delete(context, volume['id'])
354 except exception.ApiError:
355 LOG.info(_("Unable to delete volume %s"), volume['name'])
356 if force_delete:
357 LOG.info(_("VSA ID %(vsa_id)s: Forced delete. "\
358 "%(direction)s volume %(vol_name)s"), locals())
359 self._force_volume_delete(context, volume)
360
361 def delete(self, context, vsa_id):
362 """Terminate a VSA instance."""
363 LOG.info(_("Going to try to terminate VSA ID %s"), vsa_id)
364
365 # Delete all FrontEnd and BackEnd volumes
366 self.delete_vsa_volumes(context, vsa_id, "FE", force_delete=True)
367 self.delete_vsa_volumes(context, vsa_id, "BE", force_delete=True)
368
369 # Delete all VC instances
370 instances = self.compute_api.get_all(context,
371 search_opts={'metadata': dict(vsa_id=str(vsa_id))})
372 for instance in instances:
373 name = instance['name']
374 LOG.debug(_("VSA ID %(vsa_id)s: Delete instance %(name)s"),
375 locals())
376 self.compute_api.delete(context, instance['id'])
377
378 # Delete VSA instance
379 self.db.vsa_destroy(context, vsa_id)
380
381 def get(self, context, vsa_id):
382 rv = self.db.vsa_get(context, vsa_id)
383 return rv
384
385 def get_all(self, context):
386 if context.is_admin:
387 return self.db.vsa_get_all(context)
388 return self.db.vsa_get_all_by_project(context, context.project_id)
389
390 def get_vsa_volume_type(self, context):
391 name = FLAGS.vsa_volume_type_name
392 try:
393 vol_type = volume_types.get_volume_type_by_name(context, name)
394 except exception.NotFound:
395 volume_types.create(context, name,
396 extra_specs=dict(type='vsa_volume'))
397 vol_type = volume_types.get_volume_type_by_name(context, name)
398
399 return vol_type
400
401 def get_all_vsa_instances(self, context, vsa_id):
402 return self.compute_api.get_all(context,
403 search_opts={'metadata': dict(vsa_id=str(vsa_id))})
404
405 def get_all_vsa_volumes(self, context, vsa_id):
406 return self.volume_api.get_all(context,
407 search_opts={'metadata': dict(from_vsa_id=str(vsa_id))})
408
409 def get_all_vsa_drives(self, context, vsa_id):
410 return self.volume_api.get_all(context,
411 search_opts={'metadata': dict(to_vsa_id=str(vsa_id))})
0412
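The API above links related objects to a VSA purely through metadata keys: BE drives are tagged `to_vsa_id`, FE volumes `from_vsa_id`, and VC instances `vsa_id`, which the `get_all_vsa_*` helpers then pass as `search_opts={'metadata': {...}}`. A minimal sketch of that filtering convention, using plain dicts in place of the real `volume_api`/`compute_api` search (the helper name here is illustrative, not Nova's):

```python
def search_by_metadata(items, key, value):
    """Return items whose metadata contains key=value, mimicking the
    search_opts={'metadata': {...}} filters used by the VSA API above."""
    return [item for item in items
            if item.get('metadata', {}).get(key) == value]


volumes = [
    {'id': 1, 'metadata': {'to_vsa_id': '7'}},    # BE drive owned by VSA 7
    {'id': 2, 'metadata': {'from_vsa_id': '7'}},  # FE volume exported by VSA 7
    {'id': 3, 'metadata': {'to_vsa_id': '9'}},    # drive of another VSA
]

drives = search_by_metadata(volumes, 'to_vsa_id', '7')
fe_volumes = search_by_metadata(volumes, 'from_vsa_id', '7')
```

Note that the VSA id is always stringified (`str(vsa_id)`) before being stored, so lookups must compare against the string form as well.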
=== added file 'nova/vsa/connection.py'
--- nova/vsa/connection.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/connection.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,25 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""Abstraction of the underlying connection to VC."""
19
20from nova.vsa import fake
21
22
23def get_connection():
24 # Return an object that is able to talk to VCs
25 return fake.FakeVcConnection()
026
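`get_connection` is resolved at runtime via the `vsa_driver` flag and `utils.import_object`, so the fake driver can be swapped for a real one without code changes. A rough, simplified sketch of how such a dotted-path loader behaves (this is not Nova's actual `import_object`, which also handles plain module imports and import errors):

```python
import importlib


def import_object(import_str):
    """Simplified sketch: import the module portion of a dotted path
    like 'nova.vsa.connection.get_connection', fetch the final
    attribute, and call it if it is callable (factory or class)."""
    module_name, _, attr = import_str.rpartition('.')
    obj = getattr(importlib.import_module(module_name), attr)
    return obj() if callable(obj) else obj
```

Under this scheme `import_object('nova.vsa.connection.get_connection')` would hand back whatever the factory returns, i.e. a `FakeVcConnection` instance here.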
=== added file 'nova/vsa/fake.py'
--- nova/vsa/fake.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/fake.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,22 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18
19class FakeVcConnection(object):
20
21 def init_host(self, host):
22 pass
023
=== added file 'nova/vsa/manager.py'
--- nova/vsa/manager.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/manager.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,179 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""
19Handles all processes relating to Virtual Storage Arrays (VSA).
20
21**Related Flags**
22:vsa_driver:  Driver to use for controlling VSAs
23"""
24
25from nova import compute
26from nova import exception
27from nova import flags
28from nova import log as logging
29from nova import manager
30from nova import volume
31from nova import vsa
32from nova import utils
33from nova.compute import instance_types
34from nova.vsa import utils as vsa_utils
35from nova.vsa.api import VsaState
36
37FLAGS = flags.FLAGS
38flags.DEFINE_string('vsa_driver', 'nova.vsa.connection.get_connection',
39 'Driver to use for controlling VSAs')
40
41LOG = logging.getLogger('nova.vsa.manager')
42
43
44class VsaManager(manager.SchedulerDependentManager):
45 """Manages Virtual Storage Arrays (VSAs)."""
46
47 def __init__(self, vsa_driver=None, *args, **kwargs):
48 if not vsa_driver:
49 vsa_driver = FLAGS.vsa_driver
50 self.driver = utils.import_object(vsa_driver)
51 self.compute_manager = utils.import_object(FLAGS.compute_manager)
52
53 self.compute_api = compute.API()
54 self.volume_api = volume.API()
55 self.vsa_api = vsa.API()
56
57 if FLAGS.vsa_ec2_user_id is None or \
58 FLAGS.vsa_ec2_access_key is None:
59 raise exception.VSANovaAccessParamNotFound()
60
61 super(VsaManager, self).__init__(*args, **kwargs)
62
63 def init_host(self):
64 self.driver.init_host(host=self.host)
65 super(VsaManager, self).init_host()
66
67 @exception.wrap_exception()
68 def create_vsa(self, context, vsa_id):
69 """Called by API if there were no BE volumes assigned"""
70 LOG.debug(_("Create call received for VSA %s"), vsa_id)
71
72 vsa_id = int(vsa_id) # just in case
73
74 try:
75 vsa = self.vsa_api.get(context, vsa_id)
76 except Exception:
77 msg = _("Failed to find VSA %(vsa_id)d") % locals()
78 LOG.exception(msg)
79 return
80
81 return self._start_vcs(context, vsa)
82
83 @exception.wrap_exception()
84 def vsa_volume_created(self, context, vol_id, vsa_id, status):
85 """Callback for volume creations"""
86 LOG.debug(_("VSA ID %(vsa_id)s: Drive %(vol_id)s created. "\
87 "Status %(status)s"), locals())
88 vsa_id = int(vsa_id) # just in case
89
90 # Get all volumes for this VSA
91 # check if any of them still in creating phase
92 drives = self.vsa_api.get_all_vsa_drives(context, vsa_id)
93 for drive in drives:
94 if drive['status'] == 'creating':
95 vol_name = drive['name']
96 vol_disp_name = drive['display_name']
97 LOG.debug(_("Drive %(vol_name)s (%(vol_disp_name)s) still "\
98 "in creating phase - wait"), locals())
99 return
100
101 try:
102 vsa = self.vsa_api.get(context, vsa_id)
103 except Exception:
104 msg = _("Failed to find VSA %(vsa_id)d") % locals()
105 LOG.exception(msg)
106 return
107
108 if len(drives) != vsa['vol_count']:
109 cvol_real = len(drives)
110 cvol_exp = vsa['vol_count']
111 LOG.debug(_("VSA ID %(vsa_id)d: Not all volumes are created "\
112 "(%(cvol_real)d of %(cvol_exp)d)"), locals())
113 return
114
115 # all volumes created (successfully or not)
116 return self._start_vcs(context, vsa, drives)
117
118 def _start_vcs(self, context, vsa, drives=None):
119 """Start VCs for VSA."""
120 drives = drives or []
121 vsa_id = vsa['id']
122 if vsa['status'] == VsaState.CREATING:
123 self.vsa_api.update_vsa_status(context, vsa_id,
124 VsaState.LAUNCHING)
125 else:
126 return
127
128 # In a separate loop, go over all volumes and mark them as "attached"
129 has_failed_volumes = False
130 for drive in drives:
131 vol_name = drive['name']
132 vol_disp_name = drive['display_name']
133 status = drive['status']
134 LOG.info(_("VSA ID %(vsa_id)d: Drive %(vol_name)s "\
135 "(%(vol_disp_name)s) is in %(status)s state"),
136 locals())
137 if status == 'available':
138 try:
139 # self.volume_api.update(context, volume['id'],
140 # dict(attach_status="attached"))
141 pass
142 except Exception as ex:
143 msg = _("Failed to update attach status for volume "
144 "%(vol_name)s. %(ex)s") % locals()
145 LOG.exception(msg)
146 else:
147 has_failed_volumes = True
148
149 if has_failed_volumes:
150 LOG.info(_("VSA ID %(vsa_id)d: Delete all BE volumes"), locals())
151 self.vsa_api.delete_vsa_volumes(context, vsa_id, "BE", True)
152 self.vsa_api.update_vsa_status(context, vsa_id,
153 VsaState.FAILED)
154 return
155
156 # create user-data record for VC
157 storage_data = vsa_utils.generate_user_data(vsa, drives)
158
159 instance_type = instance_types.get_instance_type(
160 vsa['instance_type_id'])
161
162 # now start the VC instance
163
164 vc_count = vsa['vc_count']
165 LOG.info(_("VSA ID %(vsa_id)d: Start %(vc_count)d instances"),
166 locals())
167 vc_instances = self.compute_api.create(context,
168 instance_type, # vsa['vsa_instance_type'],
169 vsa['image_ref'],
170 min_count=1,
171 max_count=vc_count,
172 display_name='vc-' + vsa['display_name'],
173 display_description='VC for VSA ' + vsa['display_name'],
174 availability_zone=vsa['availability_zone'],
175 user_data=storage_data,
176 metadata=dict(vsa_id=str(vsa_id)))
177
178 self.vsa_api.update_vsa_status(context, vsa_id,
179 VsaState.CREATED)
0180
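The `vsa_volume_created` callback above applies two gates before launching VCs: it returns early while any drive is still in the `creating` state, and again until the number of drives matches the VSA's `vol_count`. A small sketch of that decision, condensed into one predicate (the function name is mine, not the manager's):

```python
def ready_to_start_vcs(drives, expected_count):
    """True once every expected drive exists and none is still being
    created, mirroring the two early returns in vsa_volume_created."""
    if any(d['status'] == 'creating' for d in drives):
        return False
    return len(drives) == expected_count


# Drives that finished in 'error' still count toward the total: per the
# "all volumes created (successfully or not)" comment, _start_vcs is the
# one that inspects statuses and fails the VSA if any drive is bad.
ok = ready_to_start_vcs([{'status': 'available'},
                         {'status': 'error'}], expected_count=2)
```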
=== added file 'nova/vsa/utils.py'
--- nova/vsa/utils.py 1970-01-01 00:00:00 +0000
+++ nova/vsa/utils.py 2011-08-26 22:09:26 +0000
@@ -0,0 +1,80 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright (c) 2011 Zadara Storage Inc.
4# Copyright (c) 2011 OpenStack LLC.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18import base64
19from xml.etree import ElementTree
20
21from nova import flags
22
23FLAGS = flags.FLAGS
24
25
26def generate_user_data(vsa, volumes):
27 SubElement = ElementTree.SubElement
28
29 e_vsa = ElementTree.Element("vsa")
30
31 e_vsa_detail = SubElement(e_vsa, "id")
32 e_vsa_detail.text = str(vsa['id'])
33 e_vsa_detail = SubElement(e_vsa, "name")
34 e_vsa_detail.text = vsa['display_name']
35 e_vsa_detail = SubElement(e_vsa, "description")
36 e_vsa_detail.text = vsa['display_description']
37 e_vsa_detail = SubElement(e_vsa, "vc_count")
38 e_vsa_detail.text = str(vsa['vc_count'])
39
40 e_vsa_detail = SubElement(e_vsa, "auth_user")
41 e_vsa_detail.text = FLAGS.vsa_ec2_user_id
42 e_vsa_detail = SubElement(e_vsa, "auth_access_key")
43 e_vsa_detail.text = FLAGS.vsa_ec2_access_key
44
45 e_volumes = SubElement(e_vsa, "volumes")
46 for volume in volumes:
47
48 loc = volume['provider_location']
49 if loc is None:
50 ip = ''
51 iscsi_iqn = ''
52 iscsi_portal = ''
53 else:
54 (iscsi_target, _sep, iscsi_iqn) = loc.partition(" ")
55 (ip, iscsi_portal) = iscsi_target.split(":", 1)
56
57 e_vol = SubElement(e_volumes, "volume")
58 e_vol_detail = SubElement(e_vol, "id")
59 e_vol_detail.text = str(volume['id'])
60 e_vol_detail = SubElement(e_vol, "name")
61 e_vol_detail.text = volume['name']
62 e_vol_detail = SubElement(e_vol, "display_name")
63 e_vol_detail.text = volume['display_name']
64 e_vol_detail = SubElement(e_vol, "size_gb")
65 e_vol_detail.text = str(volume['size'])
66 e_vol_detail = SubElement(e_vol, "status")
67 e_vol_detail.text = volume['status']
68 e_vol_detail = SubElement(e_vol, "ip")
69 e_vol_detail.text = ip
70 e_vol_detail = SubElement(e_vol, "iscsi_iqn")
71 e_vol_detail.text = iscsi_iqn
72 e_vol_detail = SubElement(e_vol, "iscsi_portal")
73 e_vol_detail.text = iscsi_portal
74 e_vol_detail = SubElement(e_vol, "lun")
75 e_vol_detail.text = '0'
76 e_vol_detail = SubElement(e_vol, "sn_host")
77 e_vol_detail.text = volume['host']
78
79 _xml = ElementTree.tostring(e_vsa)
80 return base64.b64encode(_xml)