Merge lp:~justin-fathomdb/nova/san into lp:~hudson-openstack/nova/trunk

Proposed by justinsb
Status: Merged
Approved by: Vish Ishaya
Approved revision: 652
Merged at revision: 656
Proposed branch: lp:~justin-fathomdb/nova/san
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 444 lines (+378/-6)
5 files modified
.mailmap (+1/-0)
nova/utils.py (+36/-0)
nova/volume/driver.py (+5/-5)
nova/volume/manager.py (+1/-1)
nova/volume/san.py (+335/-0)
To merge this branch: bzr merge lp:~justin-fathomdb/nova/san
Reviewer Review Type Date Requested Status
Devin Carlen (community) Approve
Vish Ishaya (community) Approve
justinsb (community) Needs Resubmitting
Review via email: mp+48633@code.launchpad.net

Description of the change

Added support for 'SAN'-style volumes. The big difference with a SAN is that the iSCSI target won't normally run on the same host as the volume service.
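For illustration, a volume node using the new Solaris driver might be configured with flags along these lines. The `san_*` flag names come from the diff below; the `--volume_driver` flag and all the values shown are illustrative assumptions, not part of this patch:

```shell
# Hypothetical nova-volume flagfile entries; san_* flags are defined in
# nova/volume/san.py, the driver path and values here are examples only.
--volume_driver=nova.volume.san.SolarisISCSIDriver
--san_ip=192.168.0.10
--san_login=admin
--san_privatekey=/etc/nova/san_rsa
--san_thin_provision=true
```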

Revision history for this message
Vish Ishaya (vishvananda) wrote :

This looks pretty awesome. It seems like ssh_execute could live in utils.py, since it is likely useful for other components as well.

107 + # TODO(justinsb): store in volume, remerge with generic iSCSI code
108 + host = FLAGS.san_ip

Perhaps we should have an extra field for ip for all of the iscsi drivers. That way the host name doesn't have to be resolvable from the compute host, and we can get rid of the ugly --iscsi_ip_prefix flag. I'm ok with this happening in another patch though.

The long list of commented setup commands should be moved into the docstring or perhaps fleshed out and moved into docs/

review: Needs Fixing
Revision history for this message
Devin Carlen (devcamcar) wrote :

A nit:

405 + # I'm suspicious of this...
406 + # ...other SSH clients have buffering issues with this approach

Please use the NOTE(myname): prefix and avoid first person in comments so that the community knows who is providing the information here.

review: Needs Fixing
Revision history for this message
justinsb (justin-fathomdb) wrote :

Thanks Vish & Devin.

Devin: I formatted the note as you suggested.

Vish: I moved ssh_execute into utils. It'll end up there in the end, and this way change tracking is easier. I do want to introduce SSH connection pooling in future.
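Connection pooling along those lines could look like the sketch below. `ConnectionPool` and its `connect` factory are hypothetical names, not part of this patch; a real version would also need locking and liveness checks before handing a cached connection back out:

```python
# Hypothetical per-host SSH connection pool sketch (not in this patch).
# The pool caches one live connection per key, e.g. (host, login), and
# `connect` is any factory that returns a connected client for that key.
class ConnectionPool:
    def __init__(self, connect):
        self._connect = connect  # factory: key -> connection object
        self._pool = {}

    def get(self, key):
        # Reuse the cached connection for this key, or open a new one.
        if key not in self._pool:
            self._pool[key] = self._connect(key)
        return self._pool[key]

    def close_all(self):
        # Close every cached connection and empty the pool.
        for conn in self._pool.values():
            conn.close()
        self._pool.clear()
```

With paramiko, the factory would wrap the `_connect_to_ssh` logic from the diff, so repeated `_run_ssh` calls stop paying a full SSH handshake each time.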

I moved the comments into a docstring, though I don't know how they will look once the docs are generated. Once this is locked down, I definitely agree this should be moved into docs.

Finally, I totally agree that the iSCSI target address vs san_ip vs iscsi_ip_prefix thing is ugly. I'm not actually sure what iscsi_ip_prefix is for (I had to rewrite the discovery function anyway, and I didn't need it). But long term, I think that a single volume controller may be managing multiple SAN nodes, and so I'd like to fix this properly at that stage. It will need a new attribute in the volume model, so I stayed away from it initially to keep the patch small. So I'd like to defer this to another patch if that's OK!

review: Needs Resubmitting
Revision history for this message
Devin Carlen (devcamcar) wrote :

Cool, I'll do a proper review in a bit. I'm ok with deferring to another patch as long as we create a bug or blueprint (whichever is more appropriate) to document what limitations there are in this proposed implementation.

Revision history for this message
justinsb (justin-fathomdb) wrote :

I'm not sure there's actually a limitation in the current approach - I think it should work at least as well as the current OpenISCSI target code. But there's some code messiness in the handling of the iSCSI target IP; happily the future direction of the SAN code is likely to evolve to fix this anyway. Not sure if we should do a bug/blueprint in this case (or what would really go in it if we did!)

Revision history for this message
Vish Ishaya (vishvananda) wrote :

lgtm. We can do another patch for putting the ip of the target into the db.

review: Approve
Revision history for this message
Devin Carlen (devcamcar) wrote :

lgtm

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~justin-fathomdb/nova/san into lp:nova failed. Below is the output from the failed tests.

/bin/sh: /var/lib/hudson/test_nova.sh: not found

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~justin-fathomdb/nova/san into lp:nova failed. Below is the output from the failed tests.

AdminAPITest
    test_admin_disabled ok
    test_admin_enabled ok
APITest
    test_exceptions_are_converted_to_faults ok
Test
    test_authorize_token ok
    test_authorize_user ok
    test_bad_token ok
    test_bad_user ok
    test_no_user ok
    test_token_expiry ok
TestLimiter
    test_authorize_token ok
TestFaults
    test_fault_parts ok
    test_raise ok
    test_retry_header ok
FlavorsTest
    test_get_flavor_by_id ok
    test_get_flavor_list ok
GlanceImageServiceTest
    test_create ok
    test_create_and_show_non_existing_image ok
    test_delete ok
    test_update ok
ImageControllerWithGlanceServiceTest
    test_get_image_details ok
    test_get_image_index ok
LocalImageServiceTest
    test_create ok
    test_create_and_show_non_existing_image ok
    test_delete ok
    test_update ok
LimiterTest
    test_minute ok
    test_one_per_period ok
    test_second ok
    test_users_get_separate_buckets ok
    test_we_can_go_indefinitely_if_we_spread_out_requests ok
WSGIAppProxyTest
    test_200 ok
    test_403 ok
    test_failure ok
WSGIAppTest
    test_escaping ok
    test_good_urls ok
    test_invalid_methods ok
    test_invalid_urls ok
    test_response_to_delays ok
ServersTest
    test_create_backup_schedules ok
    test_create_instance ok
    test_delete_backup_schedules ok
    test_delete_server_instance ok
    test_get_all_server_details ok
    test...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~justin-fathomdb/nova/san into lp:nova failed. Below is the output from the failed tests.

AdminAPITest
    test_admin_disabled ok
    test_admin_enabled ok
APITest
    test_exceptions_are_converted_to_faults ok
Test
    test_authorize_token ok
    test_authorize_user ok
    test_bad_token ok
    test_bad_user ok
    test_no_user ok
    test_token_expiry ok
TestLimiter
    test_authorize_token ok
TestFaults
    test_fault_parts ok
    test_raise ok
    test_retry_header ok
FlavorsTest
    test_get_flavor_by_id ok
    test_get_flavor_list ok
GlanceImageServiceTest
    test_create ok
    test_create_and_show_non_existing_image ok
    test_delete ok
    test_update ok
ImageControllerWithGlanceServiceTest
    test_get_image_details ok
    test_get_image_index ok
LocalImageServiceTest
    test_create ok
    test_create_and_show_non_existing_image ok
    test_delete ok
    test_update ok
LimiterTest
    test_minute ok
    test_one_per_period ok
    test_second ok
    test_users_get_separate_buckets ok
    test_we_can_go_indefinitely_if_we_spread_out_requests ok
WSGIAppProxyTest
    test_200 ok
    test_403 ok
    test_failure ok
WSGIAppTest
    test_escaping ok
    test_good_urls ok
    test_invalid_methods ok
    test_invalid_urls ok
    test_response_to_delays ok
ServersTest
    test_create_backup_schedules ok
    test_create_instance ok
    test_delete_backup_schedules ok
    test_delete_server_instance ok
    test_get_all_server_details ok
    test...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Preview Diff

1=== modified file '.mailmap'
2--- .mailmap 2011-01-18 19:34:29 +0000
3+++ .mailmap 2011-02-09 19:30:31 +0000
4@@ -33,3 +33,4 @@
5 <corywright@gmail.com> <cory.wright@rackspace.com>
6 <ant@openstack.org> <amesserl@rackspace.com>
7 <chiradeep@cloud.com> <chiradeep@chiradeep-lt2>
8+<justin@fathomdb.com> <superstack@superstack.org>
9
10=== modified file 'nova/utils.py'
11--- nova/utils.py 2011-01-27 19:52:10 +0000
12+++ nova/utils.py 2011-02-09 19:30:31 +0000
13@@ -152,6 +152,42 @@
14 return result
15
16
17+def ssh_execute(ssh, cmd, process_input=None,
18+ addl_env=None, check_exit_code=True):
19+ LOG.debug(_("Running cmd (SSH): %s"), cmd)
20+ if addl_env:
21+ raise exception.Error("Environment not supported over SSH")
22+
23+ if process_input:
24+ # This is (probably) fixable if we need it...
25+ raise exception.Error("process_input not supported over SSH")
26+
27+ stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(cmd)
28+ channel = stdout_stream.channel
29+
30+ #stdin.write('process_input would go here')
31+ #stdin.flush()
32+
33+ # NOTE(justinsb): This seems suspicious...
34+ # ...other SSH clients have buffering issues with this approach
35+ stdout = stdout_stream.read()
36+ stderr = stderr_stream.read()
37+ stdin_stream.close()
38+
39+ exit_status = channel.recv_exit_status()
40+
41+ # exit_status == -1 if no exit code was returned
42+ if exit_status != -1:
43+ LOG.debug(_("Result was %s") % exit_status)
44+ if check_exit_code and exit_status != 0:
45+ raise exception.ProcessExecutionError(exit_code=exit_status,
46+ stdout=stdout,
47+ stderr=stderr,
48+ cmd=cmd)
49+
50+ return (stdout, stderr)
51+
52+
53 def abspath(s):
54 return os.path.join(os.path.dirname(__file__), s)
55
56
57=== modified file 'nova/volume/driver.py'
58--- nova/volume/driver.py 2011-01-19 02:50:21 +0000
59+++ nova/volume/driver.py 2011-02-09 19:30:31 +0000
60@@ -294,8 +294,10 @@
61 self._execute("sudo ietadm --op delete --tid=%s" %
62 iscsi_target)
63
64- def _get_name_and_portal(self, volume_name, host):
65+ def _get_name_and_portal(self, volume):
66 """Gets iscsi name and portal from volume name and host."""
67+ volume_name = volume['name']
68+ host = volume['host']
69 (out, _err) = self._execute("sudo iscsiadm -m discovery -t "
70 "sendtargets -p %s" % host)
71 for target in out.splitlines():
72@@ -307,8 +309,7 @@
73
74 def discover_volume(self, volume):
75 """Discover volume on a remote host."""
76- iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'],
77- volume['host'])
78+ iscsi_name, iscsi_portal = self._get_name_and_portal(volume)
79 self._execute("sudo iscsiadm -m node -T %s -p %s --login" %
80 (iscsi_name, iscsi_portal))
81 self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
82@@ -319,8 +320,7 @@
83
84 def undiscover_volume(self, volume):
85 """Undiscover volume on a remote host."""
86- iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'],
87- volume['host'])
88+ iscsi_name, iscsi_portal = self._get_name_and_portal(volume)
89 self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
90 "-n node.startup -v manual" %
91 (iscsi_name, iscsi_portal))
92
93=== modified file 'nova/volume/manager.py'
94--- nova/volume/manager.py 2011-01-21 21:10:26 +0000
95+++ nova/volume/manager.py 2011-02-09 19:30:31 +0000
96@@ -87,7 +87,7 @@
97 if volume['status'] in ['available', 'in-use']:
98 self.driver.ensure_export(ctxt, volume)
99 else:
100- LOG.info(_("volume %s: skipping export"), volume_ref['name'])
101+ LOG.info(_("volume %s: skipping export"), volume['name'])
102
103 def create_volume(self, context, volume_id):
104 """Creates and exports the volume."""
105
106=== added file 'nova/volume/san.py'
107--- nova/volume/san.py 1970-01-01 00:00:00 +0000
108+++ nova/volume/san.py 2011-02-09 19:30:31 +0000
109@@ -0,0 +1,335 @@
110+# vim: tabstop=4 shiftwidth=4 softtabstop=4
111+
112+# Copyright 2011 Justin Santa Barbara
113+# All Rights Reserved.
114+#
115+# Licensed under the Apache License, Version 2.0 (the "License"); you may
116+# not use this file except in compliance with the License. You may obtain
117+# a copy of the License at
118+#
119+# http://www.apache.org/licenses/LICENSE-2.0
120+#
121+# Unless required by applicable law or agreed to in writing, software
122+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
123+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
124+# License for the specific language governing permissions and limitations
125+# under the License.
126+"""
127+Drivers for san-stored volumes.
128+The unique thing about a SAN is that we don't expect that we can run the volume
129+ controller on the SAN hardware. We expect to access it over SSH or some API.
130+"""
131+
132+import os
133+import paramiko
134+
135+from nova import exception
136+from nova import flags
137+from nova import log as logging
138+from nova.utils import ssh_execute
139+from nova.volume.driver import ISCSIDriver
140+
141+LOG = logging.getLogger("nova.volume.driver")
142+FLAGS = flags.FLAGS
143+flags.DEFINE_boolean('san_thin_provision', 'true',
144+ 'Use thin provisioning for SAN volumes?')
145+flags.DEFINE_string('san_ip', '',
146+ 'IP address of SAN controller')
147+flags.DEFINE_string('san_login', 'admin',
148+ 'Username for SAN controller')
149+flags.DEFINE_string('san_password', '',
150+ 'Password for SAN controller')
151+flags.DEFINE_string('san_privatekey', '',
152+ 'Filename of private key to use for SSH authentication')
153+
154+
155+class SanISCSIDriver(ISCSIDriver):
156+ """ Base class for SAN-style storage volumes
157+ (storage providers we access over SSH)"""
158+ #Override because SAN ip != host ip
159+ def _get_name_and_portal(self, volume):
160+ """Gets iscsi name and portal from volume name and host."""
161+ volume_name = volume['name']
162+
163+ # TODO(justinsb): store in volume, remerge with generic iSCSI code
164+ host = FLAGS.san_ip
165+
166+ (out, _err) = self._execute("sudo iscsiadm -m discovery -t "
167+ "sendtargets -p %s" % host)
168+
169+ location = None
170+ find_iscsi_name = self._build_iscsi_target_name(volume)
171+ for target in out.splitlines():
172+ if find_iscsi_name in target:
173+ (location, _sep, iscsi_name) = target.partition(" ")
174+ break
175+ if not location:
176+ raise exception.Error(_("Could not find iSCSI export "
177+ " for volume %s") %
178+ volume_name)
179+
180+ iscsi_portal = location.split(",")[0]
181+ LOG.debug("iscsi_name=%s, iscsi_portal=%s" %
182+ (iscsi_name, iscsi_portal))
183+ return (iscsi_name, iscsi_portal)
184+
185+ def _build_iscsi_target_name(self, volume):
186+ return "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])
187+
188+ # discover_volume is still OK
189+ # undiscover_volume is still OK
190+
191+ def _connect_to_ssh(self):
192+ ssh = paramiko.SSHClient()
193+ #TODO(justinsb): We need a better SSH key policy
194+ ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
195+ if FLAGS.san_password:
196+ ssh.connect(FLAGS.san_ip,
197+ username=FLAGS.san_login,
198+ password=FLAGS.san_password)
199+ elif FLAGS.san_privatekey:
200+ privatekeyfile = os.path.expanduser(FLAGS.san_privatekey)
201+ # It sucks that paramiko doesn't support DSA keys
202+ privatekey = paramiko.RSAKey.from_private_key_file(privatekeyfile)
203+ ssh.connect(FLAGS.san_ip,
204+ username=FLAGS.san_login,
205+ pkey=privatekey)
206+ else:
207+ raise exception.Error("Specify san_password or san_privatekey")
208+ return ssh
209+
210+ def _run_ssh(self, command, check_exit_code=True):
211+ #TODO(justinsb): SSH connection caching (?)
212+ ssh = self._connect_to_ssh()
213+
214+ #TODO(justinsb): Reintroduce the retry hack
215+ ret = ssh_execute(ssh, command, check_exit_code=check_exit_code)
216+
217+ ssh.close()
218+
219+ return ret
220+
221+ def ensure_export(self, context, volume):
222+ """Synchronously recreates an export for a logical volume."""
223+ pass
224+
225+ def create_export(self, context, volume):
226+ """Exports the volume."""
227+ pass
228+
229+ def remove_export(self, context, volume):
230+ """Removes an export for a logical volume."""
231+ pass
232+
233+ def check_for_setup_error(self):
234+ """Returns an error if prerequisites aren't met"""
235+ if not (FLAGS.san_password or FLAGS.san_privatekey):
236+ raise exception.Error("Specify san_password or san_privatekey")
237+
238+ if not (FLAGS.san_ip):
239+ raise exception.Error("san_ip must be set")
240+
241+
242+def _collect_lines(data):
243+ """ Split lines from data into an array, trimming them """
244+ matches = []
245+ for line in data.splitlines():
246+ match = line.strip()
247+ matches.append(match)
248+
249+ return matches
250+
251+
252+def _get_prefixed_values(data, prefix):
253+ """Collect lines which start with prefix; with trimming"""
254+ matches = []
255+ for line in data.splitlines():
256+ line = line.strip()
257+ if line.startswith(prefix):
258+ match = line[len(prefix):]
259+ match = match.strip()
260+ matches.append(match)
261+
262+ return matches
263+
264+
265+class SolarisISCSIDriver(SanISCSIDriver):
266+ """Executes commands relating to Solaris-hosted ISCSI volumes.
267+ Basic setup for a Solaris iSCSI server:
268+ pkg install storage-server SUNWiscsit
269+ svcadm enable stmf
270+ svcadm enable -r svc:/network/iscsi/target:default
271+ pfexec itadm create-tpg e1000g0 ${MYIP}
272+ pfexec itadm create-target -t e1000g0
273+
274+ Then grant the user that will be logging on lots of permissions.
275+ I'm not sure exactly which though:
276+ zfs allow justinsb create,mount,destroy rpool
277+ usermod -P'File System Management' justinsb
278+ usermod -P'Primary Administrator' justinsb
279+
280+ Also make sure you can login using san_login & san_password/san_privatekey
281+ """
282+
283+ def _view_exists(self, luid):
284+ cmd = "pfexec /usr/sbin/stmfadm list-view -l %s" % (luid)
285+ (out, _err) = self._run_ssh(cmd,
286+ check_exit_code=False)
287+ if "no views found" in out:
288+ return False
289+
290+ if "View Entry:" in out:
291+ return True
292+
293+ raise exception.Error("Cannot parse list-view output: %s" % (out))
294+
295+ def _get_target_groups(self):
296+ """Gets list of target groups from host."""
297+ (out, _err) = self._run_ssh("pfexec /usr/sbin/stmfadm list-tg")
298+ matches = _get_prefixed_values(out, 'Target group: ')
299+ LOG.debug("target_groups=%s" % matches)
300+ return matches
301+
302+ def _target_group_exists(self, target_group_name):
303+ return target_group_name not in self._get_target_groups()
304+
305+ def _get_target_group_members(self, target_group_name):
306+ (out, _err) = self._run_ssh("pfexec /usr/sbin/stmfadm list-tg -v %s" %
307+ (target_group_name))
308+ matches = _get_prefixed_values(out, 'Member: ')
309+ LOG.debug("members of %s=%s" % (target_group_name, matches))
310+ return matches
311+
312+ def _is_target_group_member(self, target_group_name, iscsi_target_name):
313+ return iscsi_target_name in (
314+ self._get_target_group_members(target_group_name))
315+
316+ def _get_iscsi_targets(self):
317+ cmd = ("pfexec /usr/sbin/itadm list-target | "
318+ "awk '{print $1}' | grep -v ^TARGET")
319+ (out, _err) = self._run_ssh(cmd)
320+ matches = _collect_lines(out)
321+ LOG.debug("_get_iscsi_targets=%s" % (matches))
322+ return matches
323+
324+ def _iscsi_target_exists(self, iscsi_target_name):
325+ return iscsi_target_name in self._get_iscsi_targets()
326+
327+ def _build_zfs_poolname(self, volume):
328+ #TODO(justinsb): rpool should be configurable
329+ zfs_poolname = 'rpool/%s' % (volume['name'])
330+ return zfs_poolname
331+
332+ def create_volume(self, volume):
333+ """Creates a volume."""
334+ if int(volume['size']) == 0:
335+ sizestr = '100M'
336+ else:
337+ sizestr = '%sG' % volume['size']
338+
339+ zfs_poolname = self._build_zfs_poolname(volume)
340+
341+ thin_provision_arg = '-s' if FLAGS.san_thin_provision else ''
342+ # Create a zfs volume
343+ self._run_ssh("pfexec /usr/sbin/zfs create %s -V %s %s" %
344+ (thin_provision_arg,
345+ sizestr,
346+ zfs_poolname))
347+
348+ def _get_luid(self, volume):
349+ zfs_poolname = self._build_zfs_poolname(volume)
350+
351+ cmd = ("pfexec /usr/sbin/sbdadm list-lu | "
352+ "grep -w %s | awk '{print $1}'" %
353+ (zfs_poolname))
354+
355+ (stdout, _stderr) = self._run_ssh(cmd)
356+
357+ luid = stdout.strip()
358+ return luid
359+
360+ def _is_lu_created(self, volume):
361+ luid = self._get_luid(volume)
362+ return luid
363+
364+ def delete_volume(self, volume):
365+ """Deletes a volume."""
366+ zfs_poolname = self._build_zfs_poolname(volume)
367+ self._run_ssh("pfexec /usr/sbin/zfs destroy %s" %
368+ (zfs_poolname))
369+
370+ def local_path(self, volume):
371+ # TODO(justinsb): Is this needed here?
372+ escaped_group = FLAGS.volume_group.replace('-', '--')
373+ escaped_name = volume['name'].replace('-', '--')
374+ return "/dev/mapper/%s-%s" % (escaped_group, escaped_name)
375+
376+ def ensure_export(self, context, volume):
377+ """Synchronously recreates an export for a logical volume."""
378+ #TODO(justinsb): On bootup, this is called for every volume.
379+ # It then runs ~5 SSH commands for each volume,
380+ # most of which fetch the same info each time
381+ # This makes initial start stupid-slow
382+ self._do_export(volume, force_create=False)
383+
384+ def create_export(self, context, volume):
385+ self._do_export(volume, force_create=True)
386+
387+ def _do_export(self, volume, force_create):
388+ # Create a Logical Unit (LU) backed by the zfs volume
389+ zfs_poolname = self._build_zfs_poolname(volume)
390+
391+ if force_create or not self._is_lu_created(volume):
392+ cmd = ("pfexec /usr/sbin/sbdadm create-lu /dev/zvol/rdsk/%s" %
393+ (zfs_poolname))
394+ self._run_ssh(cmd)
395+
396+ luid = self._get_luid(volume)
397+ iscsi_name = self._build_iscsi_target_name(volume)
398+ target_group_name = 'tg-%s' % volume['name']
399+
400+ # Create a iSCSI target, mapped to just this volume
401+ if force_create or not self._target_group_exists(target_group_name):
402+ self._run_ssh("pfexec /usr/sbin/stmfadm create-tg %s" %
403+ (target_group_name))
404+
405+ # Yes, we add the initiator before we create it!
406+ # Otherwise, it complains that the target is already active
407+ if force_create or not self._is_target_group_member(target_group_name,
408+ iscsi_name):
409+ self._run_ssh("pfexec /usr/sbin/stmfadm add-tg-member -g %s %s" %
410+ (target_group_name, iscsi_name))
411+ if force_create or not self._iscsi_target_exists(iscsi_name):
412+ self._run_ssh("pfexec /usr/sbin/itadm create-target -n %s" %
413+ (iscsi_name))
414+ if force_create or not self._view_exists(luid):
415+ self._run_ssh("pfexec /usr/sbin/stmfadm add-view -t %s %s" %
416+ (target_group_name, luid))
417+
418+ def remove_export(self, context, volume):
419+ """Removes an export for a logical volume."""
420+
421+ # This is the reverse of _do_export
422+ luid = self._get_luid(volume)
423+ iscsi_name = self._build_iscsi_target_name(volume)
424+ target_group_name = 'tg-%s' % volume['name']
425+
426+ if self._view_exists(luid):
427+ self._run_ssh("pfexec /usr/sbin/stmfadm remove-view -l %s -a" %
428+ (luid))
429+
430+ if self._iscsi_target_exists(iscsi_name):
431+ self._run_ssh("pfexec /usr/sbin/stmfadm offline-target %s" %
432+ (iscsi_name))
433+ self._run_ssh("pfexec /usr/sbin/itadm delete-target %s" %
434+ (iscsi_name))
435+
436+ # We don't delete the tg-member; we delete the whole tg!
437+
438+ if self._target_group_exists(target_group_name):
439+ self._run_ssh("pfexec /usr/sbin/stmfadm delete-tg %s" %
440+ (target_group_name))
441+
442+ if self._is_lu_created(volume):
443+ self._run_ssh("pfexec /usr/sbin/sbdadm delete-lu %s" %
444+ (luid))