Merge lp:~justin-fathomdb/nova/san into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: Vish Ishaya
Approved revision: 652
Merged at revision: 656
Proposed branch: lp:~justin-fathomdb/nova/san
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 444 lines (+378/-6), 5 files modified:
- .mailmap (+1/-0)
- nova/utils.py (+36/-0)
- nova/volume/driver.py (+5/-5)
- nova/volume/manager.py (+1/-1)
- nova/volume/san.py (+335/-0)
To merge this branch: bzr merge lp:~justin-fathomdb/nova/san
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Devin Carlen (community) | Approve | | |
| Vish Ishaya (community) | Approve | | |
| justinsb (community) | Needs Resubmitting | | |

Review via email: mp+48633@code.launchpad.net
Commit message
Description of the change
Added support for 'SAN' style volumes. The key difference with a SAN is that the iSCSI target does not normally run on the same host as the volume service.
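That distinction can be sketched in a few lines: the generic iSCSI driver discovers targets against the volume's own host, while a SAN driver must point discovery at a separately configured controller address (`FLAGS.san_ip` in the patch). The helper below is purely illustrative, not code from the branch:

```python
# Illustrative sketch only: where iSCSI target discovery is directed
# in each driver style. `san_ip` stands in for the patch's FLAGS.san_ip.

def pick_portal_host(volume, san_ip=None):
    """Return the host that should be queried for iSCSI targets.

    Generic iSCSI driver: the volume service runs on the same host that
    exports the target, so volume['host'] is the portal host.
    SAN driver: the target lives on external hardware, so a separately
    configured controller address is used instead.
    """
    if san_ip:
        # SAN style: the target is not on the volume service's host.
        return san_ip
    # Classic style: target is co-located with the volume service.
    return volume['host']
```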
Devin Carlen (devcamcar) wrote:
A nit:
405 + # I'm suspicious of this...
406 + # ...other SSH clients have buffering issues with this approach
Please use the NOTE(myname): prefix and avoid first person in comments so that the community knows who is providing the information here.
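Applied to the quoted hunk, the convention looks like this (the `read_stream` wrapper is added here only so the snippet is self-contained; the NOTE text is the reworded comment that landed in the diff below):

```python
import io

# Convention: a NOTE(name) prefix attributes the remark to its author,
# and the comment body avoids first person.

def read_stream(stream):
    # NOTE(justinsb): This seems suspicious...
    # ...other SSH clients have buffering issues with this approach
    return stream.read()

# Usage with any file-like object:
text = read_stream(io.StringIO("hello"))
```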
justinsb (justin-fathomdb) wrote:
Thanks Vish & Devin.
Devin: I formatted the note as you suggested.
Vish: I moved ssh_execute into utils. It'll end up there in the end, and this way change tracking is easier. I do want to introduce SSH connection pooling in future.
I moved the comments into a docstring, though I don't know how they will look once the docs are generated. Once this is locked down, I definitely agree this should be moved into docs.
Finally, I totally agree that the iSCSI target address vs san_ip vs iscsi_ip_prefix thing is ugly. I'm not actually sure what iscsi_ip_prefix is for (I had to rewrite the discovery function anyway, and I didn't need it). But long term, I think a single volume controller may be managing multiple SAN nodes, so I'd like to fix this properly at that stage. It will need a new attribute in the volume model, which is why I stayed away from it initially to keep the patch small. I'd like to defer this to another patch if that's OK!
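The SSH connection pooling mentioned above as future work could look roughly like the sketch below. Everything in it (the class name, the injected `connect` callable) is hypothetical; the branch itself opens and closes a fresh connection per command in `_run_ssh`:

```python
class SSHConnectionPool(object):
    """Hypothetical sketch of per-host SSH connection reuse.

    `connect` is any callable returning a connected client (e.g. a
    wrapper around paramiko.SSHClient); it is injected so the pooling
    logic stays testable without a live SAN.
    """

    def __init__(self, connect):
        self._connect = connect
        self._clients = {}

    def get(self, ip, username):
        """Return a cached client for (ip, username), connecting once."""
        key = (ip, username)
        if key not in self._clients:
            self._clients[key] = self._connect(ip, username)
        return self._clients[key]

    def close_all(self):
        """Close and forget every pooled connection."""
        for client in self._clients.values():
            client.close()
        self._clients.clear()
```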
Devin Carlen (devcamcar) wrote:
Cool, I'll do a proper review in a bit. I'm OK with deferring to another patch as long as we create a bug or blueprint (whichever is more appropriate) to document the limitations of this proposed implementation.
justinsb (justin-fathomdb) wrote:
I'm not sure there's actually a limitation in the current approach; I think it should work at least as well as the current Open-iSCSI target code. There is some messiness in the handling of the iSCSI target IP, but happily the future direction of the SAN code is likely to fix this anyway. I'm not sure we should file a bug/blueprint in this case (or what would really go in it if we did!)
Vish Ishaya (vishvananda) wrote:
lgtm. We can do another patch for putting the ip of the target into the db.
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~justin-fathomdb/nova/san into lp:nova failed. Below is the output from the failed tests.
/bin/sh: /var/lib/
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~justin-fathomdb/nova/san into lp:nova failed. Below is the output from the failed tests.
AdminAPITest
test_
test_
APITest
test_
Test
test_
test_
test_bad_token ok
test_bad_user ok
test_no_user ok
test_
TestLimiter
test_
TestFaults
test_
test_raise ok
test_
FlavorsTest
test_
test_
GlanceImageServ
test_create ok
test_
test_delete ok
test_update ok
ImageController
test_
test_
LocalImageServi
test_create ok
test_
test_delete ok
test_update ok
LimiterTest
test_minute ok
test_
test_second ok
test_
test_
WSGIAppProxyTest
test_200 ok
test_403 ok
test_failure ok
WSGIAppTest
test_escaping ok
test_good_urls ok
test_
test_
test_
ServersTest
test_
test_
test_
test_
test_
test...
OpenStack Infra (hudson-openstack) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
Preview Diff
1 | === modified file '.mailmap' | |||
2 | --- .mailmap 2011-01-18 19:34:29 +0000 | |||
3 | +++ .mailmap 2011-02-09 19:30:31 +0000 | |||
4 | @@ -33,3 +33,4 @@ | |||
5 | 33 | <corywright@gmail.com> <cory.wright@rackspace.com> | 33 | <corywright@gmail.com> <cory.wright@rackspace.com> |
6 | 34 | <ant@openstack.org> <amesserl@rackspace.com> | 34 | <ant@openstack.org> <amesserl@rackspace.com> |
7 | 35 | <chiradeep@cloud.com> <chiradeep@chiradeep-lt2> | 35 | <chiradeep@cloud.com> <chiradeep@chiradeep-lt2> |
8 | 36 | <justin@fathomdb.com> <superstack@superstack.org> | ||
9 | 36 | 37 | ||
10 | === modified file 'nova/utils.py' | |||
11 | --- nova/utils.py 2011-01-27 19:52:10 +0000 | |||
12 | +++ nova/utils.py 2011-02-09 19:30:31 +0000 | |||
13 | @@ -152,6 +152,42 @@ | |||
14 | 152 | return result | 152 | return result |
15 | 153 | 153 | ||
16 | 154 | 154 | ||
17 | 155 | def ssh_execute(ssh, cmd, process_input=None, | ||
18 | 156 | addl_env=None, check_exit_code=True): | ||
19 | 157 | LOG.debug(_("Running cmd (SSH): %s"), cmd) | ||
20 | 158 | if addl_env: | ||
21 | 159 | raise exception.Error("Environment not supported over SSH") | ||
22 | 160 | |||
23 | 161 | if process_input: | ||
24 | 162 | # This is (probably) fixable if we need it... | ||
25 | 163 | raise exception.Error("process_input not supported over SSH") | ||
26 | 164 | |||
27 | 165 | stdin_stream, stdout_stream, stderr_stream = ssh.exec_command(cmd) | ||
28 | 166 | channel = stdout_stream.channel | ||
29 | 167 | |||
30 | 168 | #stdin.write('process_input would go here') | ||
31 | 169 | #stdin.flush() | ||
32 | 170 | |||
33 | 171 | # NOTE(justinsb): This seems suspicious... | ||
34 | 172 | # ...other SSH clients have buffering issues with this approach | ||
35 | 173 | stdout = stdout_stream.read() | ||
36 | 174 | stderr = stderr_stream.read() | ||
37 | 175 | stdin_stream.close() | ||
38 | 176 | |||
39 | 177 | exit_status = channel.recv_exit_status() | ||
40 | 178 | |||
41 | 179 | # exit_status == -1 if no exit code was returned | ||
42 | 180 | if exit_status != -1: | ||
43 | 181 | LOG.debug(_("Result was %s") % exit_status) | ||
44 | 182 | if check_exit_code and exit_status != 0: | ||
45 | 183 | raise exception.ProcessExecutionError(exit_code=exit_status, | ||
46 | 184 | stdout=stdout, | ||
47 | 185 | stderr=stderr, | ||
48 | 186 | cmd=cmd) | ||
49 | 187 | |||
50 | 188 | return (stdout, stderr) | ||
51 | 189 | |||
52 | 190 | |||
53 | 155 | def abspath(s): | 191 | def abspath(s): |
54 | 156 | return os.path.join(os.path.dirname(__file__), s) | 192 | return os.path.join(os.path.dirname(__file__), s) |
55 | 157 | 193 | ||
56 | 158 | 194 | ||
57 | === modified file 'nova/volume/driver.py' | |||
58 | --- nova/volume/driver.py 2011-01-19 02:50:21 +0000 | |||
59 | +++ nova/volume/driver.py 2011-02-09 19:30:31 +0000 | |||
60 | @@ -294,8 +294,10 @@ | |||
61 | 294 | self._execute("sudo ietadm --op delete --tid=%s" % | 294 | self._execute("sudo ietadm --op delete --tid=%s" % |
62 | 295 | iscsi_target) | 295 | iscsi_target) |
63 | 296 | 296 | ||
65 | 297 | def _get_name_and_portal(self, volume_name, host): | 297 | def _get_name_and_portal(self, volume): |
66 | 298 | """Gets iscsi name and portal from volume name and host.""" | 298 | """Gets iscsi name and portal from volume name and host.""" |
67 | 299 | volume_name = volume['name'] | ||
68 | 300 | host = volume['host'] | ||
69 | 299 | (out, _err) = self._execute("sudo iscsiadm -m discovery -t " | 301 | (out, _err) = self._execute("sudo iscsiadm -m discovery -t " |
70 | 300 | "sendtargets -p %s" % host) | 302 | "sendtargets -p %s" % host) |
71 | 301 | for target in out.splitlines(): | 303 | for target in out.splitlines(): |
72 | @@ -307,8 +309,7 @@ | |||
73 | 307 | 309 | ||
74 | 308 | def discover_volume(self, volume): | 310 | def discover_volume(self, volume): |
75 | 309 | """Discover volume on a remote host.""" | 311 | """Discover volume on a remote host.""" |
78 | 310 | iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'], | 312 | iscsi_name, iscsi_portal = self._get_name_and_portal(volume) |
77 | 311 | volume['host']) | ||
79 | 312 | self._execute("sudo iscsiadm -m node -T %s -p %s --login" % | 313 | self._execute("sudo iscsiadm -m node -T %s -p %s --login" % |
80 | 313 | (iscsi_name, iscsi_portal)) | 314 | (iscsi_name, iscsi_portal)) |
81 | 314 | self._execute("sudo iscsiadm -m node -T %s -p %s --op update " | 315 | self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
82 | @@ -319,8 +320,7 @@ | |||
83 | 319 | 320 | ||
84 | 320 | def undiscover_volume(self, volume): | 321 | def undiscover_volume(self, volume): |
85 | 321 | """Undiscover volume on a remote host.""" | 322 | """Undiscover volume on a remote host.""" |
88 | 322 | iscsi_name, iscsi_portal = self._get_name_and_portal(volume['name'], | 323 | iscsi_name, iscsi_portal = self._get_name_and_portal(volume) |
87 | 323 | volume['host']) | ||
89 | 324 | self._execute("sudo iscsiadm -m node -T %s -p %s --op update " | 324 | self._execute("sudo iscsiadm -m node -T %s -p %s --op update " |
90 | 325 | "-n node.startup -v manual" % | 325 | "-n node.startup -v manual" % |
91 | 326 | (iscsi_name, iscsi_portal)) | 326 | (iscsi_name, iscsi_portal)) |
92 | 327 | 327 | ||
93 | === modified file 'nova/volume/manager.py' | |||
94 | --- nova/volume/manager.py 2011-01-21 21:10:26 +0000 | |||
95 | +++ nova/volume/manager.py 2011-02-09 19:30:31 +0000 | |||
96 | @@ -87,7 +87,7 @@ | |||
97 | 87 | if volume['status'] in ['available', 'in-use']: | 87 | if volume['status'] in ['available', 'in-use']: |
98 | 88 | self.driver.ensure_export(ctxt, volume) | 88 | self.driver.ensure_export(ctxt, volume) |
99 | 89 | else: | 89 | else: |
101 | 90 | LOG.info(_("volume %s: skipping export"), volume_ref['name']) | 90 | LOG.info(_("volume %s: skipping export"), volume['name']) |
102 | 91 | 91 | ||
103 | 92 | def create_volume(self, context, volume_id): | 92 | def create_volume(self, context, volume_id): |
104 | 93 | """Creates and exports the volume.""" | 93 | """Creates and exports the volume.""" |
105 | 94 | 94 | ||
106 | === added file 'nova/volume/san.py' | |||
107 | --- nova/volume/san.py 1970-01-01 00:00:00 +0000 | |||
108 | +++ nova/volume/san.py 2011-02-09 19:30:31 +0000 | |||
109 | @@ -0,0 +1,335 @@ | |||
110 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
111 | 2 | |||
112 | 3 | # Copyright 2011 Justin Santa Barbara | ||
113 | 4 | # All Rights Reserved. | ||
114 | 5 | # | ||
115 | 6 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
116 | 7 | # not use this file except in compliance with the License. You may obtain | ||
117 | 8 | # a copy of the License at | ||
118 | 9 | # | ||
119 | 10 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
120 | 11 | # | ||
121 | 12 | # Unless required by applicable law or agreed to in writing, software | ||
122 | 13 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
123 | 14 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
124 | 15 | # License for the specific language governing permissions and limitations | ||
125 | 16 | # under the License. | ||
126 | 17 | """ | ||
127 | 18 | Drivers for san-stored volumes. | ||
128 | 19 | The unique thing about a SAN is that we don't expect that we can run the volume | ||
129 | 20 | controller on the SAN hardware. We expect to access it over SSH or some API. | ||
130 | 21 | """ | ||
131 | 22 | |||
132 | 23 | import os | ||
133 | 24 | import paramiko | ||
134 | 25 | |||
135 | 26 | from nova import exception | ||
136 | 27 | from nova import flags | ||
137 | 28 | from nova import log as logging | ||
138 | 29 | from nova.utils import ssh_execute | ||
139 | 30 | from nova.volume.driver import ISCSIDriver | ||
140 | 31 | |||
141 | 32 | LOG = logging.getLogger("nova.volume.driver") | ||
142 | 33 | FLAGS = flags.FLAGS | ||
143 | 34 | flags.DEFINE_boolean('san_thin_provision', 'true', | ||
144 | 35 | 'Use thin provisioning for SAN volumes?') | ||
145 | 36 | flags.DEFINE_string('san_ip', '', | ||
146 | 37 | 'IP address of SAN controller') | ||
147 | 38 | flags.DEFINE_string('san_login', 'admin', | ||
148 | 39 | 'Username for SAN controller') | ||
149 | 40 | flags.DEFINE_string('san_password', '', | ||
150 | 41 | 'Password for SAN controller') | ||
151 | 42 | flags.DEFINE_string('san_privatekey', '', | ||
152 | 43 | 'Filename of private key to use for SSH authentication') | ||
153 | 44 | |||
154 | 45 | |||
155 | 46 | class SanISCSIDriver(ISCSIDriver): | ||
156 | 47 | """ Base class for SAN-style storage volumes | ||
157 | 48 | (storage providers we access over SSH)""" | ||
158 | 49 | #Override because SAN ip != host ip | ||
159 | 50 | def _get_name_and_portal(self, volume): | ||
160 | 51 | """Gets iscsi name and portal from volume name and host.""" | ||
161 | 52 | volume_name = volume['name'] | ||
162 | 53 | |||
163 | 54 | # TODO(justinsb): store in volume, remerge with generic iSCSI code | ||
164 | 55 | host = FLAGS.san_ip | ||
165 | 56 | |||
166 | 57 | (out, _err) = self._execute("sudo iscsiadm -m discovery -t " | ||
167 | 58 | "sendtargets -p %s" % host) | ||
168 | 59 | |||
169 | 60 | location = None | ||
170 | 61 | find_iscsi_name = self._build_iscsi_target_name(volume) | ||
171 | 62 | for target in out.splitlines(): | ||
172 | 63 | if find_iscsi_name in target: | ||
173 | 64 | (location, _sep, iscsi_name) = target.partition(" ") | ||
174 | 65 | break | ||
175 | 66 | if not location: | ||
176 | 67 | raise exception.Error(_("Could not find iSCSI export " | ||
177 | 68 | " for volume %s") % | ||
178 | 69 | volume_name) | ||
179 | 70 | |||
180 | 71 | iscsi_portal = location.split(",")[0] | ||
181 | 72 | LOG.debug("iscsi_name=%s, iscsi_portal=%s" % | ||
182 | 73 | (iscsi_name, iscsi_portal)) | ||
183 | 74 | return (iscsi_name, iscsi_portal) | ||
184 | 75 | |||
185 | 76 | def _build_iscsi_target_name(self, volume): | ||
186 | 77 | return "%s%s" % (FLAGS.iscsi_target_prefix, volume['name']) | ||
187 | 78 | |||
188 | 79 | # discover_volume is still OK | ||
189 | 80 | # undiscover_volume is still OK | ||
190 | 81 | |||
191 | 82 | def _connect_to_ssh(self): | ||
192 | 83 | ssh = paramiko.SSHClient() | ||
193 | 84 | #TODO(justinsb): We need a better SSH key policy | ||
194 | 85 | ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) | ||
195 | 86 | if FLAGS.san_password: | ||
196 | 87 | ssh.connect(FLAGS.san_ip, | ||
197 | 88 | username=FLAGS.san_login, | ||
198 | 89 | password=FLAGS.san_password) | ||
199 | 90 | elif FLAGS.san_privatekey: | ||
200 | 91 | privatekeyfile = os.path.expanduser(FLAGS.san_privatekey) | ||
201 | 92 | # It sucks that paramiko doesn't support DSA keys | ||
202 | 93 | privatekey = paramiko.RSAKey.from_private_key_file(privatekeyfile) | ||
203 | 94 | ssh.connect(FLAGS.san_ip, | ||
204 | 95 | username=FLAGS.san_login, | ||
205 | 96 | pkey=privatekey) | ||
206 | 97 | else: | ||
207 | 98 | raise exception.Error("Specify san_password or san_privatekey") | ||
208 | 99 | return ssh | ||
209 | 100 | |||
210 | 101 | def _run_ssh(self, command, check_exit_code=True): | ||
211 | 102 | #TODO(justinsb): SSH connection caching (?) | ||
212 | 103 | ssh = self._connect_to_ssh() | ||
213 | 104 | |||
214 | 105 | #TODO(justinsb): Reintroduce the retry hack | ||
215 | 106 | ret = ssh_execute(ssh, command, check_exit_code=check_exit_code) | ||
216 | 107 | |||
217 | 108 | ssh.close() | ||
218 | 109 | |||
219 | 110 | return ret | ||
220 | 111 | |||
221 | 112 | def ensure_export(self, context, volume): | ||
222 | 113 | """Synchronously recreates an export for a logical volume.""" | ||
223 | 114 | pass | ||
224 | 115 | |||
225 | 116 | def create_export(self, context, volume): | ||
226 | 117 | """Exports the volume.""" | ||
227 | 118 | pass | ||
228 | 119 | |||
229 | 120 | def remove_export(self, context, volume): | ||
230 | 121 | """Removes an export for a logical volume.""" | ||
231 | 122 | pass | ||
232 | 123 | |||
233 | 124 | def check_for_setup_error(self): | ||
234 | 125 | """Returns an error if prerequisites aren't met""" | ||
235 | 126 | if not (FLAGS.san_password or FLAGS.san_privatekey): | ||
236 | 127 | raise exception.Error("Specify san_password or san_privatekey") | ||
237 | 128 | |||
238 | 129 | if not (FLAGS.san_ip): | ||
239 | 130 | raise exception.Error("san_ip must be set") | ||
240 | 131 | |||
241 | 132 | |||
242 | 133 | def _collect_lines(data): | ||
243 | 134 | """ Split lines from data into an array, trimming them """ | ||
244 | 135 | matches = [] | ||
245 | 136 | for line in data.splitlines(): | ||
246 | 137 | match = line.strip() | ||
247 | 138 | matches.append(match) | ||
248 | 139 | |||
249 | 140 | return matches | ||
250 | 141 | |||
251 | 142 | |||
252 | 143 | def _get_prefixed_values(data, prefix): | ||
253 | 144 | """Collect lines which start with prefix; with trimming""" | ||
254 | 145 | matches = [] | ||
255 | 146 | for line in data.splitlines(): | ||
256 | 147 | line = line.strip() | ||
257 | 148 | if line.startswith(prefix): | ||
258 | 149 | match = line[len(prefix):] | ||
259 | 150 | match = match.strip() | ||
260 | 151 | matches.append(match) | ||
261 | 152 | |||
262 | 153 | return matches | ||
263 | 154 | |||
264 | 155 | |||
265 | 156 | class SolarisISCSIDriver(SanISCSIDriver): | ||
266 | 157 | """Executes commands relating to Solaris-hosted ISCSI volumes. | ||
267 | 158 | Basic setup for a Solaris iSCSI server: | ||
268 | 159 | pkg install storage-server SUNWiscsit | ||
269 | 160 | svcadm enable stmf | ||
270 | 161 | svcadm enable -r svc:/network/iscsi/target:default | ||
271 | 162 | pfexec itadm create-tpg e1000g0 ${MYIP} | ||
272 | 163 | pfexec itadm create-target -t e1000g0 | ||
273 | 164 | |||
274 | 165 | Then grant the user that will be logging on lots of permissions. | ||
275 | 166 | I'm not sure exactly which though: | ||
276 | 167 | zfs allow justinsb create,mount,destroy rpool | ||
277 | 168 | usermod -P'File System Management' justinsb | ||
278 | 169 | usermod -P'Primary Administrator' justinsb | ||
279 | 170 | |||
280 | 171 | Also make sure you can login using san_login & san_password/san_privatekey | ||
281 | 172 | """ | ||
282 | 173 | |||
283 | 174 | def _view_exists(self, luid): | ||
284 | 175 | cmd = "pfexec /usr/sbin/stmfadm list-view -l %s" % (luid) | ||
285 | 176 | (out, _err) = self._run_ssh(cmd, | ||
286 | 177 | check_exit_code=False) | ||
287 | 178 | if "no views found" in out: | ||
288 | 179 | return False | ||
289 | 180 | |||
290 | 181 | if "View Entry:" in out: | ||
291 | 182 | return True | ||
292 | 183 | |||
293 | 184 | raise exception.Error("Cannot parse list-view output: %s" % (out)) | ||
294 | 185 | |||
295 | 186 | def _get_target_groups(self): | ||
296 | 187 | """Gets list of target groups from host.""" | ||
297 | 188 | (out, _err) = self._run_ssh("pfexec /usr/sbin/stmfadm list-tg") | ||
298 | 189 | matches = _get_prefixed_values(out, 'Target group: ') | ||
299 | 190 | LOG.debug("target_groups=%s" % matches) | ||
300 | 191 | return matches | ||
301 | 192 | |||
302 | 193 | def _target_group_exists(self, target_group_name): | ||
303 | 194 | return target_group_name in self._get_target_groups() | ||
304 | 195 | |||
305 | 196 | def _get_target_group_members(self, target_group_name): | ||
306 | 197 | (out, _err) = self._run_ssh("pfexec /usr/sbin/stmfadm list-tg -v %s" % | ||
307 | 198 | (target_group_name)) | ||
308 | 199 | matches = _get_prefixed_values(out, 'Member: ') | ||
309 | 200 | LOG.debug("members of %s=%s" % (target_group_name, matches)) | ||
310 | 201 | return matches | ||
311 | 202 | |||
312 | 203 | def _is_target_group_member(self, target_group_name, iscsi_target_name): | ||
313 | 204 | return iscsi_target_name in ( | ||
314 | 205 | self._get_target_group_members(target_group_name)) | ||
315 | 206 | |||
316 | 207 | def _get_iscsi_targets(self): | ||
317 | 208 | cmd = ("pfexec /usr/sbin/itadm list-target | " | ||
318 | 209 | "awk '{print $1}' | grep -v ^TARGET") | ||
319 | 210 | (out, _err) = self._run_ssh(cmd) | ||
320 | 211 | matches = _collect_lines(out) | ||
321 | 212 | LOG.debug("_get_iscsi_targets=%s" % (matches)) | ||
322 | 213 | return matches | ||
323 | 214 | |||
324 | 215 | def _iscsi_target_exists(self, iscsi_target_name): | ||
325 | 216 | return iscsi_target_name in self._get_iscsi_targets() | ||
326 | 217 | |||
327 | 218 | def _build_zfs_poolname(self, volume): | ||
328 | 219 | #TODO(justinsb): rpool should be configurable | ||
329 | 220 | zfs_poolname = 'rpool/%s' % (volume['name']) | ||
330 | 221 | return zfs_poolname | ||
331 | 222 | |||
332 | 223 | def create_volume(self, volume): | ||
333 | 224 | """Creates a volume.""" | ||
334 | 225 | if int(volume['size']) == 0: | ||
335 | 226 | sizestr = '100M' | ||
336 | 227 | else: | ||
337 | 228 | sizestr = '%sG' % volume['size'] | ||
338 | 229 | |||
339 | 230 | zfs_poolname = self._build_zfs_poolname(volume) | ||
340 | 231 | |||
341 | 232 | thin_provision_arg = '-s' if FLAGS.san_thin_provision else '' | ||
342 | 233 | # Create a zfs volume | ||
343 | 234 | self._run_ssh("pfexec /usr/sbin/zfs create %s -V %s %s" % | ||
344 | 235 | (thin_provision_arg, | ||
345 | 236 | sizestr, | ||
346 | 237 | zfs_poolname)) | ||
347 | 238 | |||
348 | 239 | def _get_luid(self, volume): | ||
349 | 240 | zfs_poolname = self._build_zfs_poolname(volume) | ||
350 | 241 | |||
351 | 242 | cmd = ("pfexec /usr/sbin/sbdadm list-lu | " | ||
352 | 243 | "grep -w %s | awk '{print $1}'" % | ||
353 | 244 | (zfs_poolname)) | ||
354 | 245 | |||
355 | 246 | (stdout, _stderr) = self._run_ssh(cmd) | ||
356 | 247 | |||
357 | 248 | luid = stdout.strip() | ||
358 | 249 | return luid | ||
359 | 250 | |||
360 | 251 | def _is_lu_created(self, volume): | ||
361 | 252 | luid = self._get_luid(volume) | ||
362 | 253 | return luid | ||
363 | 254 | |||
364 | 255 | def delete_volume(self, volume): | ||
365 | 256 | """Deletes a volume.""" | ||
366 | 257 | zfs_poolname = self._build_zfs_poolname(volume) | ||
367 | 258 | self._run_ssh("pfexec /usr/sbin/zfs destroy %s" % | ||
368 | 259 | (zfs_poolname)) | ||
369 | 260 | |||
370 | 261 | def local_path(self, volume): | ||
371 | 262 | # TODO(justinsb): Is this needed here? | ||
372 | 263 | escaped_group = FLAGS.volume_group.replace('-', '--') | ||
373 | 264 | escaped_name = volume['name'].replace('-', '--') | ||
374 | 265 | return "/dev/mapper/%s-%s" % (escaped_group, escaped_name) | ||
375 | 266 | |||
376 | 267 | def ensure_export(self, context, volume): | ||
377 | 268 | """Synchronously recreates an export for a logical volume.""" | ||
378 | 269 | #TODO(justinsb): On bootup, this is called for every volume. | ||
379 | 270 | # It then runs ~5 SSH commands for each volume, | ||
380 | 271 | # most of which fetch the same info each time | ||
381 | 272 | # This makes initial start stupid-slow | ||
382 | 273 | self._do_export(volume, force_create=False) | ||
383 | 274 | |||
384 | 275 | def create_export(self, context, volume): | ||
385 | 276 | self._do_export(volume, force_create=True) | ||
386 | 277 | |||
387 | 278 | def _do_export(self, volume, force_create): | ||
388 | 279 | # Create a Logical Unit (LU) backed by the zfs volume | ||
389 | 280 | zfs_poolname = self._build_zfs_poolname(volume) | ||
390 | 281 | |||
391 | 282 | if force_create or not self._is_lu_created(volume): | ||
392 | 283 | cmd = ("pfexec /usr/sbin/sbdadm create-lu /dev/zvol/rdsk/%s" % | ||
393 | 284 | (zfs_poolname)) | ||
394 | 285 | self._run_ssh(cmd) | ||
395 | 286 | |||
396 | 287 | luid = self._get_luid(volume) | ||
397 | 288 | iscsi_name = self._build_iscsi_target_name(volume) | ||
398 | 289 | target_group_name = 'tg-%s' % volume['name'] | ||
399 | 290 | |||
400 | 291 | # Create a iSCSI target, mapped to just this volume | ||
401 | 292 | if force_create or not self._target_group_exists(target_group_name): | ||
402 | 293 | self._run_ssh("pfexec /usr/sbin/stmfadm create-tg %s" % | ||
403 | 294 | (target_group_name)) | ||
404 | 295 | |||
405 | 296 | # Yes, we add the initiator before we create it! | ||
406 | 297 | # Otherwise, it complains that the target is already active | ||
407 | 298 | if force_create or not self._is_target_group_member(target_group_name, | ||
408 | 299 | iscsi_name): | ||
409 | 300 | self._run_ssh("pfexec /usr/sbin/stmfadm add-tg-member -g %s %s" % | ||
410 | 301 | (target_group_name, iscsi_name)) | ||
411 | 302 | if force_create or not self._iscsi_target_exists(iscsi_name): | ||
412 | 303 | self._run_ssh("pfexec /usr/sbin/itadm create-target -n %s" % | ||
413 | 304 | (iscsi_name)) | ||
414 | 305 | if force_create or not self._view_exists(luid): | ||
415 | 306 | self._run_ssh("pfexec /usr/sbin/stmfadm add-view -t %s %s" % | ||
416 | 307 | (target_group_name, luid)) | ||
417 | 308 | |||
418 | 309 | def remove_export(self, context, volume): | ||
419 | 310 | """Removes an export for a logical volume.""" | ||
420 | 311 | |||
421 | 312 | # This is the reverse of _do_export | ||
422 | 313 | luid = self._get_luid(volume) | ||
423 | 314 | iscsi_name = self._build_iscsi_target_name(volume) | ||
424 | 315 | target_group_name = 'tg-%s' % volume['name'] | ||
425 | 316 | |||
426 | 317 | if self._view_exists(luid): | ||
427 | 318 | self._run_ssh("pfexec /usr/sbin/stmfadm remove-view -l %s -a" % | ||
428 | 319 | (luid)) | ||
429 | 320 | |||
430 | 321 | if self._iscsi_target_exists(iscsi_name): | ||
431 | 322 | self._run_ssh("pfexec /usr/sbin/stmfadm offline-target %s" % | ||
432 | 323 | (iscsi_name)) | ||
433 | 324 | self._run_ssh("pfexec /usr/sbin/itadm delete-target %s" % | ||
434 | 325 | (iscsi_name)) | ||
435 | 326 | |||
436 | 327 | # We don't delete the tg-member; we delete the whole tg! | ||
437 | 328 | |||
438 | 329 | if self._target_group_exists(target_group_name): | ||
439 | 330 | self._run_ssh("pfexec /usr/sbin/stmfadm delete-tg %s" % | ||
440 | 331 | (target_group_name)) | ||
441 | 332 | |||
442 | 333 | if self._is_lu_created(volume): | ||
443 | 334 | self._run_ssh("pfexec /usr/sbin/sbdadm delete-lu %s" % | ||
444 | 335 | (luid)) |
This looks pretty awesome. It seems like ssh_execute could live in utils.py, since it is likely useful for other components as well.
107 + # TODO(justinsb): store in volume, remerge with generic iSCSI code
108 + host = FLAGS.san_ip
Perhaps we should have an extra field for the IP for all of the iSCSI drivers. That way the host name doesn't have to be resolvable from the compute host, and we can get rid of the ugly --iscsi_ip_prefix flag. I'm OK with this happening in another patch though.
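The per-volume target address suggested here could be sketched as below. The field and function names are hypothetical, since the actual model change was deferred to a later patch; the idea is simply to record the portal at export time so discovery no longer depends on resolving volume['host'] or on --iscsi_ip_prefix:

```python
def record_export(volume, target_ip, iscsi_name):
    """Hypothetical: remember where a volume's iSCSI target lives.

    Stores the portal and IQN on the volume record itself, so a later
    discover_volume can read them back directly.
    """
    volume['target_portal'] = '%s:3260,1' % target_ip
    volume['target_iqn'] = iscsi_name
    return volume


def get_name_and_portal(volume):
    """Counterpart lookup using the stored fields (no discovery call)."""
    return (volume['target_iqn'], volume['target_portal'])
```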
The long list of commented setup commands should be moved into the docstring, or perhaps fleshed out and moved into docs/.