Merge lp:~justin-fathomdb/nova/justinsb-hpsan into lp:~hudson-openstack/nova/trunk

Proposed by justinsb
Status: Superseded
Proposed branch: lp:~justin-fathomdb/nova/justinsb-hpsan
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1207 lines (+972/-54) (has conflicts)
11 files modified
nova/api/openstack/zones.py (+80/-0)
nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py (+51/-0)
nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py (+61/-0)
nova/db/sqlalchemy/migrate_repo/versions/006_add_provider_data_to_volumes.py (+72/-0)
nova/db/sqlalchemy/models.py (+3/-0)
nova/tests/api/openstack/test_common.py (+161/-0)
nova/tests/api/openstack/test_zones.py (+141/-0)
nova/volume/driver.py (+120/-24)
nova/volume/manager.py (+10/-1)
nova/volume/san.py (+258/-29)
plugins/xenserver/xenapi/etc/xapi.d/plugins/agent (+15/-0)
Conflict adding file nova/api/openstack/zones.py.  Moved existing file to nova/api/openstack/zones.py.moved.
Conflict adding file nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py.  Moved existing file to nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py.moved.
Conflict adding file nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py.  Moved existing file to nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py.moved.
Conflict adding file nova/tests/api/openstack/test_common.py.  Moved existing file to nova/tests/api/openstack/test_common.py.moved.
Conflict adding file nova/tests/api/openstack/test_zones.py.  Moved existing file to nova/tests/api/openstack/test_zones.py.moved.
Text conflict in nova/volume/manager.py
Text conflict in plugins/xenserver/xenapi/etc/xapi.d/plugins/agent
To merge this branch: bzr merge lp:~justin-fathomdb/nova/justinsb-hpsan
Reviewer Review Type Date Requested Status
Jay Pipes (community) Needs Fixing
Review via email: mp+49143@code.launchpad.net

This proposal has been superseded by a proposal from 2011-02-18.

Description of the change

Support HP/LeftHand SANs. We control the SAN by SSHing in and issuing CLIQ commands. This branch also improves the way iSCSI volumes are mounted: the iSCSI connection info is stored in the volume entity and used in preference to doing discovery. CHAP authentication is also supported.

CHAP support is necessary so that the attach-volume command does not require an export on the SAN device. If it did, the attach command would have to hit both the volume controller and the compute controller, and that would be complex.
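In the diff below, the CHAP credentials are persisted in the volume's `provider_auth` field as a single space-separated string (`'CHAP <username> <secret>'`) and split back apart at attach time. A minimal sketch of that round trip (the helper names here are illustrative, not part of the actual driver):

```python
def build_provider_auth(username, secret):
    """Serialize CHAP credentials in the space-separated form the
    driver stores in the volume's provider_auth column."""
    return "CHAP %s %s" % (username, secret)


def parse_provider_auth(provider_auth):
    """Split the stored string back into (method, username, secret),
    mirroring the auth.split() call in _get_iscsi_properties."""
    auth_method, auth_username, auth_secret = provider_auth.split()
    return auth_method, auth_username, auth_secret


stored = build_provider_auth("chapuser", "s3cret")
print(parse_provider_auth(stored))
```

Because the whole triple lives in one string column, no schema change beyond the `provider_auth` column added by migration 006 is needed.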

658. By justinsb

Fixed debug strings for i18n

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi! As I said on IRC, not much I can review as far as the volume stuff goes, so here is some general feedback :)

1 === added file 'nova/db/sqlalchemy/migrate_repo/versions/003_cactus.py'

As you mentioned on IRC, I think it would be best to name this something like 003_add_provider_columns.py instead of 003_cactus.py, to make the actual migration file more descriptive of its contents...

74 + # Create tables
75 + for table in (
76 + #certificates, consoles, console_pools, instance_actions
77 + ):
78 + try:
79 + table.create()
80 + except Exception:
81 + logging.info(repr(table))
82 + logging.exception('Exception while creating table')
83 + raise
84 +
85 + # Alter column types
86 + # auth_tokens.c.user_id.alter(type=String(length=255,
87 + # convert_unicode=False,
88 + # assert_unicode=None,
89 + # unicode_error=None,
90 + # _warn_on_bytestring=False))

I recognize you probably commented that stuff out thinking that we are only using a single 003_cactus.py migration file, but if that migration file is renamed to something more descriptive (and we can have more than one migration file per release), obviously I think you should remove all that commented-out code :)

Missed i18n in these locations:

139 + LOG.warn("ISCSI provider_location not stored, using discovery")
175 + LOG.debug("ISCSI Discovery: Found %s" % (location))
552 + raise exception.Error("local_path not supported")

For 175 above, please use the following instead:

LOG.debug(_("ISCSI Discovery: Found %s"), location)

Not sure if you meant to i18n this or not :)

457 + message = ("Unexpected number of virtual ips for cluster %s. "
458 + "Result=%s" %
459 + (cluster_name, ElementTree.tostring(cluster_xml)))

But if so, should be:

message = _("Unexpected number of virtual ips for cluster %(cluster)s. "
            "Result=%(result)s") % {'cluster': cluster_name,
                                    'result': ElementTree.tostring(cluster_xml)}
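For background on why the placement of the `%` matters here: the untranslated format string is the message-catalog lookup key, so interpolation has to happen outside the translation call, after the lookup. A minimal sketch, using `gettext.gettext` directly as a stand-in for nova's installed `_()` builtin:

```python
import gettext

_ = gettext.gettext  # stand-in for nova's _() builtin

cluster_name = 'cluster1'
result = '<response/>'

# Wrong: the string is interpolated *before* _() sees it, so the
# catalog is asked to translate "...cluster cluster1..." -- a string
# that will never appear as a msgid in any .po file.
bad = _("Unexpected number of virtual ips for cluster %(cluster)s. "
        "Result=%(result)s" % {'cluster': cluster_name, 'result': result})

# Right: look up the constant format string first, then interpolate.
good = _("Unexpected number of virtual ips for cluster %(cluster)s. "
         "Result=%(result)s") % {'cluster': cluster_name, 'result': result}

print(good)
```

With no translation catalog installed both forms produce the same text, which is why the bug is easy to miss in testing; it only surfaces once translations exist.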

Other than those little nits above, the only other advice I could give would be to maybe sprinkle a few LOG.debug()s around HpSanISCSIDriver methods and maybe add a bit of docstring documentation to HpSanISCSIDriver that gives the rough flow of the commands executed against the volume controller?
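A sketch of the kind of docstring-plus-debug-logging being asked for here (the class shape and flow description are illustrative, not the actual HpSanISCSIDriver code):

```python
import logging

LOG = logging.getLogger(__name__)


class HpSanISCSIDriver(object):
    """Executes volume operations against an HP/LeftHand SAN.

    Rough command flow (illustrative): each operation opens an SSH
    session to the SAN appliance, runs the corresponding CLIQ command
    (e.g. createVolume, assignVolumeToServer, deleteVolume), and
    parses the XML response for status and provider details.
    """

    def create_volume(self, volume):
        # Deferred %-style formatting: args are only interpolated if
        # DEBUG is enabled.
        LOG.debug("Creating volume %s of size %sGB",
                  volume['name'], volume['size'])
        # ... SSH to the SAN and run the CLIQ createVolume command ...
        return volume['name']
```

The docstring gives a reviewer the command flow without reading every method, and the per-method debug lines make SSH/CLIQ failures much easier to localize.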

Cheers, and thanks for a great contribution!
jay

review: Needs Fixing
659. By justinsb

i18n fixes

660. By justinsb

Merge branch 'master' into hpsan

Conflicts:
 nova/volume/manager.py

661. By justinsb

Renamed migration 003 to 006 (likely to be right number, I think)

662. By justinsb

Documentation, debug output & formatting improvements

Unmerged revisions

662. By justinsb

Documentation, debug output & formatting improvements

661. By justinsb

Renamed migration 003 to 006 (likely to be right number, I think)

660. By justinsb

Merge branch 'master' into hpsan

Conflicts:
 nova/volume/manager.py

659. By justinsb

i18n fixes

658. By justinsb

Fixed debug strings for i18n

657. By justinsb

Improved handling of missing project_id

656. By justinsb

Fix pylint/pep8 issues. Also store provider_location for Solaris iSCSI provider

655. By justinsb

Fixed whitespace for PEP8 compliance

654. By justinsb

Merge branch 'master' into hpsan

Conflicts:
 nova/volume/san.py

653. By justinsb

Small bugfixes from testing. Now works.

Preview Diff

1=== modified file 'nova/api/openstack/servers.py'
2=== added file 'nova/api/openstack/zones.py'
3--- nova/api/openstack/zones.py 1970-01-01 00:00:00 +0000
4+++ nova/api/openstack/zones.py 2011-02-18 18:41:49 +0000
5@@ -0,0 +1,80 @@
6+# Copyright 2010 OpenStack LLC.
7+# All Rights Reserved.
8+#
9+# Licensed under the Apache License, Version 2.0 (the "License"); you may
10+# not use this file except in compliance with the License. You may obtain
11+# a copy of the License at
12+#
13+# http://www.apache.org/licenses/LICENSE-2.0
14+#
15+# Unless required by applicable law or agreed to in writing, software
16+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
17+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
18+# License for the specific language governing permissions and limitations
19+# under the License.
20+
21+import common
22+import logging
23+
24+from nova import flags
25+from nova import wsgi
26+from nova import db
27+
28+
29+FLAGS = flags.FLAGS
30+
31+
32+def _filter_keys(item, keys):
33+ """
34+ Filters all model attributes except for keys
35+ item is a dict
36+
37+ """
38+ return dict((k, v) for k, v in item.iteritems() if k in keys)
39+
40+
41+def _scrub_zone(zone):
42+ return _filter_keys(zone, ('id', 'api_url'))
43+
44+
45+class Controller(wsgi.Controller):
46+
47+ _serialization_metadata = {
48+ 'application/xml': {
49+ "attributes": {
50+ "zone": ["id", "api_url"]}}}
51+
52+ def index(self, req):
53+ """Return all zones in brief"""
54+ items = db.zone_get_all(req.environ['nova.context'])
55+ items = common.limited(items, req)
56+ items = [_scrub_zone(item) for item in items]
57+ return dict(zones=items)
58+
59+ def detail(self, req):
60+ """Return all zones in detail"""
61+ return self.index(req)
62+
63+ def show(self, req, id):
64+ """Return data about the given zone id"""
65+ zone_id = int(id)
66+ zone = db.zone_get(req.environ['nova.context'], zone_id)
67+ return dict(zone=_scrub_zone(zone))
68+
69+ def delete(self, req, id):
70+ zone_id = int(id)
71+ db.zone_delete(req.environ['nova.context'], zone_id)
72+ return {}
73+
74+ def create(self, req):
75+ context = req.environ['nova.context']
76+ env = self._deserialize(req.body, req)
77+ zone = db.zone_create(context, env["zone"])
78+ return dict(zone=_scrub_zone(zone))
79+
80+ def update(self, req, id):
81+ context = req.environ['nova.context']
82+ env = self._deserialize(req.body, req)
83+ zone_id = int(id)
84+ zone = db.zone_update(context, zone_id, env["zone"])
85+ return dict(zone=_scrub_zone(zone))
86
87=== renamed file 'nova/api/openstack/zones.py' => 'nova/api/openstack/zones.py.moved'
88=== modified file 'nova/compute/api.py'
89=== modified file 'nova/compute/manager.py'
90=== modified file 'nova/db/api.py'
91=== added file 'nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py'
92--- nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py 1970-01-01 00:00:00 +0000
93+++ nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py 2011-02-18 18:41:49 +0000
94@@ -0,0 +1,51 @@
95+# vim: tabstop=4 shiftwidth=4 softtabstop=4
96+
97+# Copyright 2011 OpenStack LLC
98+# All Rights Reserved.
99+#
100+# Licensed under the Apache License, Version 2.0 (the "License"); you may
101+# not use this file except in compliance with the License. You may obtain
102+# a copy of the License at
103+#
104+# http://www.apache.org/licenses/LICENSE-2.0
105+#
106+# Unless required by applicable law or agreed to in writing, software
107+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
108+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
109+# License for the specific language governing permissions and limitations
110+# under the License.
111+
112+from sqlalchemy import *
113+from migrate import *
114+
115+from nova import log as logging
116+
117+
118+meta = MetaData()
119+
120+
121+networks = Table('networks', meta,
122+ Column('id', Integer(), primary_key=True, nullable=False),
123+ )
124+
125+
126+#
127+# New Tables
128+#
129+
130+
131+#
132+# Tables to alter
133+#
134+
135+networks_label = Column(
136+ 'label',
137+ String(length=255, convert_unicode=False, assert_unicode=None,
138+ unicode_error=None, _warn_on_bytestring=False))
139+
140+
141+def upgrade(migrate_engine):
142+ # Upgrade operations go here. Don't create your own engine;
143+ # bind migrate_engine to your metadata
144+ meta.bind = migrate_engine
145+ networks.create_column(networks_label)
146
147=== renamed file 'nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py' => 'nova/db/sqlalchemy/migrate_repo/versions/003_add_label_to_networks.py.moved'
148=== added file 'nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py'
149--- nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py 1970-01-01 00:00:00 +0000
150+++ nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py 2011-02-18 18:41:49 +0000
151@@ -0,0 +1,61 @@
152+# Copyright 2010 OpenStack LLC.
153+# All Rights Reserved.
154+#
155+# Licensed under the Apache License, Version 2.0 (the "License"); you may
156+# not use this file except in compliance with the License. You may obtain
157+# a copy of the License at
158+#
159+# http://www.apache.org/licenses/LICENSE-2.0
160+#
161+# Unless required by applicable law or agreed to in writing, software
162+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
163+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
164+# License for the specific language governing permissions and limitations
165+# under the License.
166+
167+from sqlalchemy import *
168+from migrate import *
169+
170+from nova import log as logging
171+
172+
173+meta = MetaData()
174+
175+
176+#
177+# New Tables
178+#
179+zones = Table('zones', meta,
180+ Column('created_at', DateTime(timezone=False)),
181+ Column('updated_at', DateTime(timezone=False)),
182+ Column('deleted_at', DateTime(timezone=False)),
183+ Column('deleted', Boolean(create_constraint=True, name=None)),
184+ Column('id', Integer(), primary_key=True, nullable=False),
185+ Column('api_url',
186+ String(length=255, convert_unicode=False, assert_unicode=None,
187+ unicode_error=None, _warn_on_bytestring=False)),
188+ Column('username',
189+ String(length=255, convert_unicode=False, assert_unicode=None,
190+ unicode_error=None, _warn_on_bytestring=False)),
191+ Column('password',
192+ String(length=255, convert_unicode=False, assert_unicode=None,
193+ unicode_error=None, _warn_on_bytestring=False)),
194+ )
195+
196+
197+#
198+# Tables to alter
199+#
200+
201+# (none currently)
202+
203+
204+def upgrade(migrate_engine):
205+ # Upgrade operations go here. Don't create your own engine;
206+ # bind migrate_engine to your metadata
207+ meta.bind = migrate_engine
208+ for table in (zones, ):
209+ try:
210+ table.create()
211+ except Exception:
212+ logging.info(repr(table))
213
214=== renamed file 'nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py' => 'nova/db/sqlalchemy/migrate_repo/versions/004_add_zone_tables.py.moved'
215=== added file 'nova/db/sqlalchemy/migrate_repo/versions/006_add_provider_data_to_volumes.py'
216--- nova/db/sqlalchemy/migrate_repo/versions/006_add_provider_data_to_volumes.py 1970-01-01 00:00:00 +0000
217+++ nova/db/sqlalchemy/migrate_repo/versions/006_add_provider_data_to_volumes.py 2011-02-18 18:41:49 +0000
218@@ -0,0 +1,72 @@
219+# vim: tabstop=4 shiftwidth=4 softtabstop=4
220+
221+# Copyright 2011 Justin Santa Barbara.
222+# All Rights Reserved.
223+#
224+# Licensed under the Apache License, Version 2.0 (the "License"); you may
225+# not use this file except in compliance with the License. You may obtain
226+# a copy of the License at
227+#
228+# http://www.apache.org/licenses/LICENSE-2.0
229+#
230+# Unless required by applicable law or agreed to in writing, software
231+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
232+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
233+# License for the specific language governing permissions and limitations
234+# under the License.
235+
236+from sqlalchemy import *
237+from migrate import *
238+
239+from nova import log as logging
240+
241+
242+meta = MetaData()
243+
244+
245+# Table stub-definitions
246+# Just for the ForeignKey and column creation to succeed, these are not the
247+# actual definitions of instances or services.
248+#
249+volumes = Table('volumes', meta,
250+ Column('id', Integer(), primary_key=True, nullable=False),
251+ )
252+
253+
254+#
255+# New Tables
256+#
257+# None
258+
259+#
260+# Tables to alter
261+#
262+# None
263+
264+#
265+# Columns to add to existing tables
266+#
267+
268+volumes_provider_location = Column('provider_location',
269+ String(length=256,
270+ convert_unicode=False,
271+ assert_unicode=None,
272+ unicode_error=None,
273+ _warn_on_bytestring=False))
274+
275+volumes_provider_auth = Column('provider_auth',
276+ String(length=256,
277+ convert_unicode=False,
278+ assert_unicode=None,
279+ unicode_error=None,
280+ _warn_on_bytestring=False))
281+
282+
283+def upgrade(migrate_engine):
284+ # Upgrade operations go here. Don't create your own engine;
285+ # bind migrate_engine to your metadata
286+ meta.bind = migrate_engine
287+
288+ # Add columns to existing tables
289+ volumes.create_column(volumes_provider_location)
290+ volumes.create_column(volumes_provider_auth)
291
292=== modified file 'nova/db/sqlalchemy/models.py'
293--- nova/db/sqlalchemy/models.py 2011-02-17 21:39:03 +0000
294+++ nova/db/sqlalchemy/models.py 2011-02-18 18:41:49 +0000
295@@ -243,6 +243,9 @@
296 display_name = Column(String(255))
297 display_description = Column(String(255))
298
299+ provider_location = Column(String(255))
300+ provider_auth = Column(String(255))
301+
302
303 class Quota(BASE, NovaBase):
304 """Represents quota overrides for a project."""
305
306=== modified file 'nova/network/manager.py'
307=== added file 'nova/tests/api/openstack/test_common.py'
308--- nova/tests/api/openstack/test_common.py 1970-01-01 00:00:00 +0000
309+++ nova/tests/api/openstack/test_common.py 2011-02-18 18:41:49 +0000
310@@ -0,0 +1,161 @@
311+# vim: tabstop=4 shiftwidth=4 softtabstop=4
312+
313+# Copyright 2010 OpenStack LLC.
314+# All Rights Reserved.
315+#
316+# Licensed under the Apache License, Version 2.0 (the "License"); you may
317+# not use this file except in compliance with the License. You may obtain
318+# a copy of the License at
319+#
320+# http://www.apache.org/licenses/LICENSE-2.0
321+#
322+# Unless required by applicable law or agreed to in writing, software
323+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
324+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
325+# License for the specific language governing permissions and limitations
326+# under the License.
327+
328+"""
329+Test suites for 'common' code used throughout the OpenStack HTTP API.
330+"""
331+
332+import unittest
333+
334+from webob import Request
335+
336+from nova.api.openstack.common import limited
337+
338+
339+class LimiterTest(unittest.TestCase):
340+ """
341+ Unit tests for the `nova.api.openstack.common.limited` method which takes
342+ in a list of items and, depending on the 'offset' and 'limit' GET params,
343+ returns a subset or complete set of the given items.
344+ """
345+
346+ def setUp(self):
347+ """
348+ Run before each test.
349+ """
350+ self.tiny = range(1)
351+ self.small = range(10)
352+ self.medium = range(1000)
353+ self.large = range(10000)
354+
355+ def test_limiter_offset_zero(self):
356+ """
357+ Test offset key works with 0.
358+ """
359+ req = Request.blank('/?offset=0')
360+ self.assertEqual(limited(self.tiny, req), self.tiny)
361+ self.assertEqual(limited(self.small, req), self.small)
362+ self.assertEqual(limited(self.medium, req), self.medium)
363+ self.assertEqual(limited(self.large, req), self.large[:1000])
364+
365+ def test_limiter_offset_medium(self):
366+ """
367+ Test offset key works with a medium sized number.
368+ """
369+ req = Request.blank('/?offset=10')
370+ self.assertEqual(limited(self.tiny, req), [])
371+ self.assertEqual(limited(self.small, req), self.small[10:])
372+ self.assertEqual(limited(self.medium, req), self.medium[10:])
373+ self.assertEqual(limited(self.large, req), self.large[10:1010])
374+
375+ def test_limiter_offset_over_max(self):
376+ """
377+ Test offset key works with a number over 1000 (max_limit).
378+ """
379+ req = Request.blank('/?offset=1001')
380+ self.assertEqual(limited(self.tiny, req), [])
381+ self.assertEqual(limited(self.small, req), [])
382+ self.assertEqual(limited(self.medium, req), [])
383+ self.assertEqual(limited(self.large, req), self.large[1001:2001])
384+
385+ def test_limiter_offset_blank(self):
386+ """
387+ Test offset key works with a blank offset.
388+ """
389+ req = Request.blank('/?offset=')
390+ self.assertEqual(limited(self.tiny, req), self.tiny)
391+ self.assertEqual(limited(self.small, req), self.small)
392+ self.assertEqual(limited(self.medium, req), self.medium)
393+ self.assertEqual(limited(self.large, req), self.large[:1000])
394+
395+ def test_limiter_offset_bad(self):
396+ """
397+ Test offset key works with a BAD offset.
398+ """
399+ req = Request.blank(u'/?offset=\u0020aa')
400+ self.assertEqual(limited(self.tiny, req), self.tiny)
401+ self.assertEqual(limited(self.small, req), self.small)
402+ self.assertEqual(limited(self.medium, req), self.medium)
403+ self.assertEqual(limited(self.large, req), self.large[:1000])
404+
405+ def test_limiter_nothing(self):
406+ """
407+ Test request with no offset or limit
408+ """
409+ req = Request.blank('/')
410+ self.assertEqual(limited(self.tiny, req), self.tiny)
411+ self.assertEqual(limited(self.small, req), self.small)
412+ self.assertEqual(limited(self.medium, req), self.medium)
413+ self.assertEqual(limited(self.large, req), self.large[:1000])
414+
415+ def test_limiter_limit_zero(self):
416+ """
417+ Test limit of zero.
418+ """
419+ req = Request.blank('/?limit=0')
420+ self.assertEqual(limited(self.tiny, req), self.tiny)
421+ self.assertEqual(limited(self.small, req), self.small)
422+ self.assertEqual(limited(self.medium, req), self.medium)
423+ self.assertEqual(limited(self.large, req), self.large[:1000])
424+
425+ def test_limiter_limit_medium(self):
426+ """
427+ Test limit of 10.
428+ """
429+ req = Request.blank('/?limit=10')
430+ self.assertEqual(limited(self.tiny, req), self.tiny)
431+ self.assertEqual(limited(self.small, req), self.small)
432+ self.assertEqual(limited(self.medium, req), self.medium[:10])
433+ self.assertEqual(limited(self.large, req), self.large[:10])
434+
435+ def test_limiter_limit_over_max(self):
436+ """
437+ Test limit of 3000.
438+ """
439+ req = Request.blank('/?limit=3000')
440+ self.assertEqual(limited(self.tiny, req), self.tiny)
441+ self.assertEqual(limited(self.small, req), self.small)
442+ self.assertEqual(limited(self.medium, req), self.medium)
443+ self.assertEqual(limited(self.large, req), self.large[:1000])
444+
445+ def test_limiter_limit_and_offset(self):
446+ """
447+ Test request with both limit and offset.
448+ """
449+ items = range(2000)
450+ req = Request.blank('/?offset=1&limit=3')
451+ self.assertEqual(limited(items, req), items[1:4])
452+ req = Request.blank('/?offset=3&limit=0')
453+ self.assertEqual(limited(items, req), items[3:1003])
454+ req = Request.blank('/?offset=3&limit=1500')
455+ self.assertEqual(limited(items, req), items[3:1003])
456+ req = Request.blank('/?offset=3000&limit=10')
457+ self.assertEqual(limited(items, req), [])
458+
459+ def test_limiter_custom_max_limit(self):
460+ """
461+ Test a max_limit other than 1000.
462+ """
463+ items = range(2000)
464+ req = Request.blank('/?offset=1&limit=3')
465+ self.assertEqual(limited(items, req, max_limit=2000), items[1:4])
466+ req = Request.blank('/?offset=3&limit=0')
467+ self.assertEqual(limited(items, req, max_limit=2000), items[3:])
468+ req = Request.blank('/?offset=3&limit=2500')
469+ self.assertEqual(limited(items, req, max_limit=2000), items[3:])
470+ req = Request.blank('/?offset=3000&limit=10')
471+ self.assertEqual(limited(items, req, max_limit=2000), [])
472
473=== renamed file 'nova/tests/api/openstack/test_common.py' => 'nova/tests/api/openstack/test_common.py.moved'
474=== added file 'nova/tests/api/openstack/test_zones.py'
475--- nova/tests/api/openstack/test_zones.py 1970-01-01 00:00:00 +0000
476+++ nova/tests/api/openstack/test_zones.py 2011-02-18 18:41:49 +0000
477@@ -0,0 +1,141 @@
478+# Copyright 2010 OpenStack LLC.
479+# All Rights Reserved.
480+#
481+# Licensed under the Apache License, Version 2.0 (the "License"); you may
482+# not use this file except in compliance with the License. You may obtain
483+# a copy of the License at
484+#
485+# http://www.apache.org/licenses/LICENSE-2.0
486+#
487+# Unless required by applicable law or agreed to in writing, software
488+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
489+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
490+# License for the specific language governing permissions and limitations
491+# under the License.
492+
493+import unittest
494+
495+import stubout
496+import webob
497+import json
498+
499+import nova.db
500+from nova import context
501+from nova import flags
502+from nova.api.openstack import zones
503+from nova.tests.api.openstack import fakes
504+
505+
506+FLAGS = flags.FLAGS
507+FLAGS.verbose = True
508+
509+
510+def zone_get(context, zone_id):
511+ return dict(id=1, api_url='http://foo.com', username='bob',
512+ password='xxx')
513+
514+
515+def zone_create(context, values):
516+ zone = dict(id=1)
517+ zone.update(values)
518+ return zone
519+
520+
521+def zone_update(context, zone_id, values):
522+ zone = dict(id=zone_id, api_url='http://foo.com', username='bob',
523+ password='xxx')
524+ zone.update(values)
525+ return zone
526+
527+
528+def zone_delete(context, zone_id):
529+ pass
530+
531+
532+def zone_get_all(context):
533+ return [
534+ dict(id=1, api_url='http://foo.com', username='bob',
535+ password='xxx'),
536+ dict(id=2, api_url='http://blah.com', username='alice',
537+ password='qwerty')
538+ ]
539+
540+
541+class ZonesTest(unittest.TestCase):
542+ def setUp(self):
543+ self.stubs = stubout.StubOutForTesting()
544+ fakes.FakeAuthManager.auth_data = {}
545+ fakes.FakeAuthDatabase.data = {}
546+ fakes.stub_out_networking(self.stubs)
547+ fakes.stub_out_rate_limiting(self.stubs)
548+ fakes.stub_out_auth(self.stubs)
549+
550+ self.allow_admin = FLAGS.allow_admin_api
551+ FLAGS.allow_admin_api = True
552+
553+ self.stubs.Set(nova.db, 'zone_get', zone_get)
554+ self.stubs.Set(nova.db, 'zone_get_all', zone_get_all)
555+ self.stubs.Set(nova.db, 'zone_update', zone_update)
556+ self.stubs.Set(nova.db, 'zone_create', zone_create)
557+ self.stubs.Set(nova.db, 'zone_delete', zone_delete)
558+
559+ def tearDown(self):
560+ self.stubs.UnsetAll()
561+ FLAGS.allow_admin_api = self.allow_admin
562+
563+ def test_get_zone_list(self):
564+ req = webob.Request.blank('/v1.0/zones')
565+ res = req.get_response(fakes.wsgi_app())
566+ res_dict = json.loads(res.body)
567+
568+ self.assertEqual(res.status_int, 200)
569+ self.assertEqual(len(res_dict['zones']), 2)
570+
571+ def test_get_zone_by_id(self):
572+ req = webob.Request.blank('/v1.0/zones/1')
573+ res = req.get_response(fakes.wsgi_app())
574+ res_dict = json.loads(res.body)
575+
576+ self.assertEqual(res_dict['zone']['id'], 1)
577+ self.assertEqual(res_dict['zone']['api_url'], 'http://foo.com')
578+ self.assertFalse('password' in res_dict['zone'])
579+ self.assertEqual(res.status_int, 200)
580+
581+ def test_zone_delete(self):
582+ req = webob.Request.blank('/v1.0/zones/1')
583+ res = req.get_response(fakes.wsgi_app())
584+
585+ self.assertEqual(res.status_int, 200)
586+
587+ def test_zone_create(self):
588+ body = dict(zone=dict(api_url='http://blah.zoo', username='fred',
589+ password='fubar'))
590+ req = webob.Request.blank('/v1.0/zones')
591+ req.method = 'POST'
592+ req.body = json.dumps(body)
593+
594+ res = req.get_response(fakes.wsgi_app())
595+ res_dict = json.loads(res.body)
596+
597+ self.assertEqual(res.status_int, 200)
598+ self.assertEqual(res_dict['zone']['id'], 1)
599+ self.assertEqual(res_dict['zone']['api_url'], 'http://blah.zoo')
600+ self.assertFalse('username' in res_dict['zone'])
601+
602+ def test_zone_update(self):
603+ body = dict(zone=dict(username='zeb', password='sneaky'))
604+ req = webob.Request.blank('/v1.0/zones/1')
605+ req.method = 'PUT'
606+ req.body = json.dumps(body)
607+
608+ res = req.get_response(fakes.wsgi_app())
609+ res_dict = json.loads(res.body)
610+
611+ self.assertEqual(res.status_int, 200)
612+ self.assertEqual(res_dict['zone']['id'], 1)
613+ self.assertEqual(res_dict['zone']['api_url'], 'http://foo.com')
614+ self.assertFalse('username' in res_dict['zone'])
615+
616+
617+if __name__ == '__main__':
618+ unittest.main()
619
620=== renamed file 'nova/tests/api/openstack/test_zones.py' => 'nova/tests/api/openstack/test_zones.py.moved'
621=== modified file 'nova/utils.py'
622=== modified file 'nova/virt/xenapi/vmops.py'
623=== modified file 'nova/virt/xenapi_conn.py'
624=== modified file 'nova/volume/driver.py'
625--- nova/volume/driver.py 2011-02-04 17:04:55 +0000
626+++ nova/volume/driver.py 2011-02-18 18:41:49 +0000
627@@ -21,6 +21,7 @@
628 """
629
630 import time
631+import os
632
633 from nova import exception
634 from nova import flags
635@@ -36,6 +37,8 @@
636 'Which device to export the volumes on')
637 flags.DEFINE_string('num_shell_tries', 3,
638 'number of times to attempt to run flakey shell commands')
639+flags.DEFINE_string('num_iscsi_scan_tries', 3,
640+ 'number of times to rescan iSCSI target to find volume')
641 flags.DEFINE_integer('num_shelves',
642 100,
643 'Number of vblade shelves')
644@@ -294,40 +297,133 @@
645 self._execute("sudo ietadm --op delete --tid=%s" %
646 iscsi_target)
647
648- def _get_name_and_portal(self, volume):
649- """Gets iscsi name and portal from volume name and host."""
650+ def _do_iscsi_discovery(self, volume):
651+ #TODO(justinsb): Deprecate discovery and use stored info
652+ #NOTE(justinsb): Discovery won't work with CHAP-secured targets (?)
653+ LOG.warn(_("ISCSI provider_location not stored, using discovery"))
654+
655 volume_name = volume['name']
656- host = volume['host']
657+
658 (out, _err) = self._execute("sudo iscsiadm -m discovery -t "
659- "sendtargets -p %s" % host)
660+ "sendtargets -p %s" % (volume['host']))
661 for target in out.splitlines():
662 if FLAGS.iscsi_ip_prefix in target and volume_name in target:
663- (location, _sep, iscsi_name) = target.partition(" ")
664- break
665- iscsi_portal = location.split(",")[0]
666- return (iscsi_name, iscsi_portal)
667+ return target
668+ return None
669+
670+ def _get_iscsi_properties(self, volume):
671+ """Gets iscsi configuration, ideally from saved information in the
672+ volume entity, but falling back to discovery if need be."""
673+
674+ properties = {}
675+
676+ location = volume['provider_location']
677+
678+ if location:
679+ # provider_location is the same format as iSCSI discovery output
680+ properties['target_discovered'] = False
681+ else:
682+ location = self._do_iscsi_discovery(volume)
683+
684+ if not location:
685+ raise exception.Error(_("Could not find iSCSI export "
686+ " for volume %s") %
687+ (volume['name']))
688+
689+ LOG.debug(_("ISCSI Discovery: Found %s") % (location))
690+ properties['target_discovered'] = True
691+
692+ (iscsi_target, _sep, iscsi_name) = location.partition(" ")
693+
694+ iscsi_portal = iscsi_target.split(",")[0]
695+
696+ properties['target_iqn'] = iscsi_name
697+ properties['target_portal'] = iscsi_portal
698+
699+ auth = volume['provider_auth']
700+
701+ if auth:
702+ (auth_method, auth_username, auth_secret) = auth.split()
703+
704+ properties['auth_method'] = auth_method
705+ properties['auth_username'] = auth_username
706+ properties['auth_password'] = auth_secret
707+
708+ return properties
709+
710+ def _run_iscsiadm(self, iscsi_properties, iscsi_command):
711+ command = ("sudo iscsiadm -m node -T %s -p %s %s" %
712+ (iscsi_properties['target_iqn'],
713+ iscsi_properties['target_portal'],
714+ iscsi_command))
715+ (out, err) = self._execute(command)
716+ LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
717+ (iscsi_command, out, err))
718+ return (out, err)
719+
720+ def _iscsiadm_update(self, iscsi_properties, property_key, property_value):
721+ iscsi_command = ("--op update -n %s -v %s" %
722+ (property_key, property_value))
723+ return self._run_iscsiadm(iscsi_properties, iscsi_command)
724
725 def discover_volume(self, volume):
726 """Discover volume on a remote host."""
727- iscsi_name, iscsi_portal = self._get_name_and_portal(volume)
728- self._execute("sudo iscsiadm -m node -T %s -p %s --login" %
729- (iscsi_name, iscsi_portal))
730- self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
731- "-n node.startup -v automatic" %
732- (iscsi_name, iscsi_portal))
733- return "/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % (iscsi_portal,
734- iscsi_name)
735+ iscsi_properties = self._get_iscsi_properties(volume)
736+
737+ if not iscsi_properties['target_discovered']:
738+ self._run_iscsiadm(iscsi_properties, "--op new")
739+
740+ if iscsi_properties.get('auth_method'):
741+ self._iscsiadm_update(iscsi_properties,
742+ "node.session.auth.authmethod",
743+ iscsi_properties['auth_method'])
744+ self._iscsiadm_update(iscsi_properties,
745+ "node.session.auth.username",
746+ iscsi_properties['auth_username'])
747+ self._iscsiadm_update(iscsi_properties,
748+ "node.session.auth.password",
749+ iscsi_properties['auth_password'])
750+
751+ self._run_iscsiadm(iscsi_properties, "--login")
752+
753+ self._iscsiadm_update(iscsi_properties, "node.startup", "automatic")
754+
755+ mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" %
756+ (iscsi_properties['target_portal'],
757+ iscsi_properties['target_iqn']))
758+
759+ # The /dev/disk/by-path/... node is not always present immediately
760+ # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
761+ tries = 0
762+ while not os.path.exists(mount_device):
763+ if tries >= FLAGS.num_iscsi_scan_tries:
764+ raise exception.Error(_("iSCSI device not found at %s") %
765+ (mount_device))
766+
 767+                LOG.warn(_("iSCSI volume not yet found at: %(mount_device)s. "
768+ "Will rescan & retry. Try number: %(tries)s") %
769+ locals())
770+
771+ # The rescan isn't documented as being necessary(?), but it helps
772+ self._run_iscsiadm(iscsi_properties, "--rescan")
773+
774+ tries = tries + 1
775+ if not os.path.exists(mount_device):
776+ time.sleep(tries ** 2)
777+
778+ if tries != 0:
779+ LOG.debug(_("Found iSCSI node %(mount_device)s "
780+ "(after %(tries)s rescans)") %
781+ locals())
782+
783+ return mount_device
784
785 def undiscover_volume(self, volume):
786 """Undiscover volume on a remote host."""
787- iscsi_name, iscsi_portal = self._get_name_and_portal(volume)
788- self._execute("sudo iscsiadm -m node -T %s -p %s --op update "
789- "-n node.startup -v manual" %
790- (iscsi_name, iscsi_portal))
791- self._execute("sudo iscsiadm -m node -T %s -p %s --logout " %
792- (iscsi_name, iscsi_portal))
793- self._execute("sudo iscsiadm -m node --op delete "
794- "--targetname %s" % iscsi_name)
795+ iscsi_properties = self._get_iscsi_properties(volume)
796+ self._iscsiadm_update(iscsi_properties, "node.startup", "manual")
797+ self._run_iscsiadm(iscsi_properties, "--logout")
798+ self._run_iscsiadm(iscsi_properties, "--op delete")
799
800
801 class FakeISCSIDriver(ISCSIDriver):
802
803=== modified file 'nova/volume/manager.py'
804--- nova/volume/manager.py 2011-02-15 18:19:52 +0000
805+++ nova/volume/manager.py 2011-02-18 18:41:49 +0000
806@@ -107,11 +107,20 @@
807 vol_size = volume_ref['size']
808 LOG.debug(_("volume %(vol_name)s: creating lv of"
809 " size %(vol_size)sG") % locals())
810- self.driver.create_volume(volume_ref)
811+ db_update = self.driver.create_volume(volume_ref)
812+ if db_update:
813+ self.db.volume_update(context, volume_ref['id'], db_update)
814
815 LOG.debug(_("volume %s: creating export"), volume_ref['name'])
816+<<<<<<< TREE
817 self.driver.create_export(context, volume_ref)
818 except Exception:
819+=======
820+ db_update = self.driver.create_export(context, volume_ref)
821+ if db_update:
822+ self.db.volume_update(context, volume_ref['id'], db_update)
823+ except Exception:
824+>>>>>>> MERGE-SOURCE
825 self.db.volume_update(context,
826 volume_ref['id'], {'status': 'error'})
827 raise
828
829=== modified file 'nova/volume/san.py'
830--- nova/volume/san.py 2011-02-09 19:25:18 +0000
831+++ nova/volume/san.py 2011-02-18 18:41:49 +0000
832@@ -23,6 +23,8 @@
833 import os
834 import paramiko
835
836+from xml.etree import ElementTree
837+
838 from nova import exception
839 from nova import flags
840 from nova import log as logging
 841@@ -41,37 +43,17 @@
 842                     'Password for SAN controller')
 843 flags.DEFINE_string('san_privatekey', '',
 844                     'Filename of private key to use for SSH authentication')
 845+flags.DEFINE_string('san_clustername', '',
 846+                    'Cluster name to use for creating volumes')
 847+flags.DEFINE_integer('san_ssh_port', 22,
 848+                    'SSH port to use with SAN')
 849+flags.DEFINE_boolean('san_thin_provision', True,
 850+                     'Use thin provisioning for SAN volumes?')
849
850
851 class SanISCSIDriver(ISCSIDriver):
852 """ Base class for SAN-style storage volumes
853 (storage providers we access over SSH)"""
854- #Override because SAN ip != host ip
855- def _get_name_and_portal(self, volume):
856- """Gets iscsi name and portal from volume name and host."""
857- volume_name = volume['name']
858-
859- # TODO(justinsb): store in volume, remerge with generic iSCSI code
860- host = FLAGS.san_ip
861-
862- (out, _err) = self._execute("sudo iscsiadm -m discovery -t "
863- "sendtargets -p %s" % host)
864-
865- location = None
866- find_iscsi_name = self._build_iscsi_target_name(volume)
867- for target in out.splitlines():
868- if find_iscsi_name in target:
869- (location, _sep, iscsi_name) = target.partition(" ")
870- break
871- if not location:
872- raise exception.Error(_("Could not find iSCSI export "
873- " for volume %s") %
874- volume_name)
875-
876- iscsi_portal = location.split(",")[0]
877- LOG.debug("iscsi_name=%s, iscsi_portal=%s" %
878- (iscsi_name, iscsi_portal))
879- return (iscsi_name, iscsi_portal)
880
881 def _build_iscsi_target_name(self, volume):
882 return "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])
883@@ -85,6 +65,7 @@
884 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
885 if FLAGS.san_password:
886 ssh.connect(FLAGS.san_ip,
887+ port=FLAGS.san_ssh_port,
888 username=FLAGS.san_login,
889 password=FLAGS.san_password)
890 elif FLAGS.san_privatekey:
891@@ -92,10 +73,11 @@
892 # It sucks that paramiko doesn't support DSA keys
893 privatekey = paramiko.RSAKey.from_private_key_file(privatekeyfile)
894 ssh.connect(FLAGS.san_ip,
895+ port=FLAGS.san_ssh_port,
896 username=FLAGS.san_login,
897 pkey=privatekey)
898 else:
899- raise exception.Error("Specify san_password or san_privatekey")
900+ raise exception.Error(_("Specify san_password or san_privatekey"))
901 return ssh
902
903 def _run_ssh(self, command, check_exit_code=True):
904@@ -124,10 +106,10 @@
905 def check_for_setup_error(self):
906 """Returns an error if prerequisites aren't met"""
907 if not (FLAGS.san_password or FLAGS.san_privatekey):
908- raise exception.Error("Specify san_password or san_privatekey")
909+ raise exception.Error(_("Specify san_password or san_privatekey"))
910
911 if not (FLAGS.san_ip):
912- raise exception.Error("san_ip must be set")
913+ raise exception.Error(_("san_ip must be set"))
914
915
916 def _collect_lines(data):
917@@ -306,6 +288,17 @@
918 self._run_ssh("pfexec /usr/sbin/stmfadm add-view -t %s %s" %
919 (target_group_name, luid))
920
 921+        # TODO(justinsb): Is this always 1? Does it matter?
922+ iscsi_portal_interface = '1'
923+ iscsi_portal = FLAGS.san_ip + ":3260," + iscsi_portal_interface
924+
925+ db_update = {}
926+ db_update['provider_location'] = ("%s %s" %
927+ (iscsi_portal,
928+ iscsi_name))
929+
930+ return db_update
931+
932 def remove_export(self, context, volume):
933 """Removes an export for a logical volume."""
934
935@@ -333,3 +326,239 @@
936 if self._is_lu_created(volume):
937 self._run_ssh("pfexec /usr/sbin/sbdadm delete-lu %s" %
938 (luid))
939+
940+
941+class HpSanISCSIDriver(SanISCSIDriver):
942+ """Executes commands relating to HP/Lefthand SAN ISCSI volumes.
943+ We use the CLIQ interface, over SSH.
944+
945+ Rough overview of CLIQ commands used:
946+ CLIQ createVolume (creates the volume)
947+ CLIQ getVolumeInfo (to discover the IQN etc)
948+ CLIQ getClusterInfo (to discover the iSCSI target IP address)
949+ CLIQ assignVolumeChap (exports it with CHAP security)
950+
951+ The 'trick' here is that the HP SAN enforces security by default, so
952+ normally a volume mount would need both to configure the SAN in the volume
953+ layer and do the mount on the compute layer. Multi-layer operations are
954+ not catered for at the moment in the nova architecture, so instead we
955+ share the volume using CHAP at volume creation time. Then the mount need
956+ only use those CHAP credentials, so can take place exclusively in the
957+ compute layer"""
958+
959+ def _cliq_run(self, verb, cliq_args):
960+ """Runs a CLIQ command over SSH, without doing any result parsing"""
961+ cliq_arg_strings = []
962+ for k, v in cliq_args.items():
963+ cliq_arg_strings.append(" %s=%s" % (k, v))
964+ cmd = verb + ''.join(cliq_arg_strings)
965+
966+ return self._run_ssh(cmd)
967+
968+ def _cliq_run_xml(self, verb, cliq_args, check_cliq_result=True):
969+ """Runs a CLIQ command over SSH, parsing and checking the output"""
970+ cliq_args['output'] = 'XML'
971+ (out, _err) = self._cliq_run(verb, cliq_args)
972+
973+ LOG.debug(_("CLIQ command returned %s"), out)
974+
975+ result_xml = ElementTree.fromstring(out)
976+ if check_cliq_result:
977+ response_node = result_xml.find("response")
978+ if response_node is None:
979+ msg = (_("Malformed response to CLIQ command "
980+ "%(verb)s %(cliq_args)s. Result=%(out)s") %
981+ locals())
982+ raise exception.Error(msg)
983+
984+ result_code = response_node.attrib.get("result")
985+
986+ if result_code != "0":
 987+                msg = (_("Error running CLIQ command %(verb)s %(cliq_args)s. "
 988+                         "Result=%(out)s") %
989+ locals())
990+ raise exception.Error(msg)
991+
992+ return result_xml
993+
994+ def _cliq_get_cluster_info(self, cluster_name):
995+ """Queries for info about the cluster (including IP)"""
996+ cliq_args = {}
997+ cliq_args['clusterName'] = cluster_name
998+ cliq_args['searchDepth'] = '1'
999+ cliq_args['verbose'] = '0'
1000+
1001+ result_xml = self._cliq_run_xml("getClusterInfo", cliq_args)
1002+
1003+ return result_xml
1004+
1005+ def _cliq_get_cluster_vip(self, cluster_name):
1006+ """Gets the IP on which a cluster shares iSCSI volumes"""
1007+ cluster_xml = self._cliq_get_cluster_info(cluster_name)
1008+
1009+ vips = []
1010+ for vip in cluster_xml.findall("response/cluster/vip"):
1011+ vips.append(vip.attrib.get('ipAddress'))
1012+
1013+ if len(vips) == 1:
1014+ return vips[0]
1015+
1016+ _xml = ElementTree.tostring(cluster_xml)
1017+        msg = (_("Unexpected number of virtual IPs for cluster "
1018+                 "%(cluster_name)s. Result=%(_xml)s") %
1019+ locals())
1020+ raise exception.Error(msg)
1021+
1022+ def _cliq_get_volume_info(self, volume_name):
1023+ """Gets the volume info, including IQN"""
1024+ cliq_args = {}
1025+ cliq_args['volumeName'] = volume_name
1026+ result_xml = self._cliq_run_xml("getVolumeInfo", cliq_args)
1027+
1028+ # Result looks like this:
1029+ #<gauche version="1.0">
1030+ # <response description="Operation succeeded." name="CliqSuccess"
1031+ # processingTime="87" result="0">
1032+ # <volume autogrowPages="4" availability="online" blockSize="1024"
1033+ # bytesWritten="0" checkSum="false" clusterName="Cluster01"
1034+ # created="2011-02-08T19:56:53Z" deleting="false" description=""
1035+ # groupName="Group01" initialQuota="536870912" isPrimary="true"
1036+ # iscsiIqn="iqn.2003-10.com.lefthandnetworks:group01:25366:vol-b"
1037+ # maxSize="6865387257856" md5="9fa5c8b2cca54b2948a63d833097e1ca"
1038+ # minReplication="1" name="vol-b" parity="0" replication="2"
1039+ # reserveQuota="536870912" scratchQuota="4194304"
1040+ # serialNumber="9fa5c8b2cca54b2948a63d833097e1ca0000000000006316"
1041+ # size="1073741824" stridePages="32" thinProvision="true">
1042+ # <status description="OK" value="2"/>
1043+ # <permission access="rw"
1044+ # authGroup="api-34281B815713B78-(trimmed)51ADD4B7030853AA7"
1045+ # chapName="chapusername" chapRequired="true" id="25369"
1046+ # initiatorSecret="" iqn="" iscsiEnabled="true"
1047+ # loadBalance="true" targetSecret="supersecret"/>
1048+ # </volume>
1049+ # </response>
1050+ #</gauche>
1051+
1052+ # Flatten the nodes into a dictionary; use prefixes to avoid collisions
1053+ volume_attributes = {}
1054+
1055+ volume_node = result_xml.find("response/volume")
1056+ for k, v in volume_node.attrib.items():
1057+ volume_attributes["volume." + k] = v
1058+
1059+ status_node = volume_node.find("status")
1060+        if status_node is not None:
1061+ for k, v in status_node.attrib.items():
1062+ volume_attributes["status." + k] = v
1063+
1064+ # We only consider the first permission node
1065+ permission_node = volume_node.find("permission")
1066+        if permission_node is not None:
1067+            for k, v in permission_node.attrib.items():
1068+ volume_attributes["permission." + k] = v
1069+
1070+ LOG.debug(_("Volume info: %(volume_name)s => %(volume_attributes)s") %
1071+ locals())
1072+ return volume_attributes
1073+
1074+ def create_volume(self, volume):
1075+ """Creates a volume."""
1076+ cliq_args = {}
1077+ cliq_args['clusterName'] = FLAGS.san_clustername
1078+        # TODO(justinsb): Should we default to inheriting thinProvision?
1079+ cliq_args['thinProvision'] = '1' if FLAGS.san_thin_provision else '0'
1080+ cliq_args['volumeName'] = volume['name']
1081+ if int(volume['size']) == 0:
1082+ cliq_args['size'] = '100MB'
1083+ else:
1084+ cliq_args['size'] = '%sGB' % volume['size']
1085+
1086+ self._cliq_run_xml("createVolume", cliq_args)
1087+
1088+ volume_info = self._cliq_get_volume_info(volume['name'])
1089+ cluster_name = volume_info['volume.clusterName']
1090+ iscsi_iqn = volume_info['volume.iscsiIqn']
1091+
1092+        # TODO(justinsb): Is this always 1? Does it matter?
1093+ cluster_interface = '1'
1094+
1095+ cluster_vip = self._cliq_get_cluster_vip(cluster_name)
1096+ iscsi_portal = cluster_vip + ":3260," + cluster_interface
1097+
1098+ db_update = {}
1099+ db_update['provider_location'] = ("%s %s" %
1100+ (iscsi_portal,
1101+ iscsi_iqn))
1102+
1103+ return db_update
1104+
1105+ def delete_volume(self, volume):
1106+ """Deletes a volume."""
1107+ cliq_args = {}
1108+ cliq_args['volumeName'] = volume['name']
1109+ cliq_args['prompt'] = 'false' # Don't confirm
1110+
1111+ self._cliq_run_xml("deleteVolume", cliq_args)
1112+
1113+ def local_path(self, volume):
1114+ # TODO(justinsb): Is this needed here?
1115+ raise exception.Error(_("local_path not supported"))
1116+
1117+ def ensure_export(self, context, volume):
1118+ """Synchronously recreates an export for a logical volume."""
1119+ return self._do_export(context, volume, force_create=False)
1120+
1121+ def create_export(self, context, volume):
1122+ return self._do_export(context, volume, force_create=True)
1123+
1124+ def _do_export(self, context, volume, force_create):
1125+ """Supports ensure_export and create_export"""
1126+ volume_info = self._cliq_get_volume_info(volume['name'])
1127+
1128+ is_shared = 'permission.authGroup' in volume_info
1129+
1130+ db_update = {}
1131+
1132+ should_export = False
1133+
1134+ if force_create or not is_shared:
1135+ should_export = True
1136+ # Check that we have a project_id
1137+ project_id = volume['project_id']
1138+ if not project_id:
1139+ project_id = context.project_id
1140+
1141+ if project_id:
1142+                # TODO(justinsb): Use a real per-project password here
1143+ chap_username = 'proj_' + project_id
1144+ # HP/Lefthand requires that the password be >= 12 characters
1145+ chap_password = 'project_secret_' + project_id
1146+ else:
1147+ msg = (_("Could not determine project for volume %s, "
1148+ "can't export") %
1149+ (volume['name']))
1150+ if force_create:
1151+ raise exception.Error(msg)
1152+ else:
1153+ LOG.warn(msg)
1154+ should_export = False
1155+
1156+ if should_export:
1157+ cliq_args = {}
1158+ cliq_args['volumeName'] = volume['name']
1159+ cliq_args['chapName'] = chap_username
1160+ cliq_args['targetSecret'] = chap_password
1161+
1162+ self._cliq_run_xml("assignVolumeChap", cliq_args)
1163+
1164+ db_update['provider_auth'] = ("CHAP %s %s" %
1165+ (chap_username, chap_password))
1166+
1167+ return db_update
1168+
1169+ def remove_export(self, context, volume):
1170+ """Removes an export for a logical volume."""
1171+ cliq_args = {}
1172+ cliq_args['volumeName'] = volume['name']
1173+
1174+ self._cliq_run_xml("unassignVolume", cliq_args)
1175
1176=== modified file 'plugins/xenserver/xenapi/etc/xapi.d/plugins/agent'
1177--- plugins/xenserver/xenapi/etc/xapi.d/plugins/agent 2011-02-17 22:09:26 +0000
1178+++ plugins/xenserver/xenapi/etc/xapi.d/plugins/agent 2011-02-18 18:41:49 +0000
1179@@ -91,6 +91,7 @@
1180 return resp
1181
1182
1183+<<<<<<< TREE
1184 @jsonify
1185 def resetnetwork(self, arg_dict):
1186     """Writes a request to xenstore that tells the agent
1187@@ -102,6 +103,20 @@
1188 xenstore.write_record(self, arg_dict)
1189
1190
1191+=======
1192+@jsonify
1193+def resetnetwork(self, arg_dict):
1194+ """
1195+    Writes a request to xenstore that tells the agent to reset networking.
1196+
1197+ """
1198+ arg_dict['value'] = json.dumps({'name': 'resetnetwork', 'value': ''})
1199+ request_id = arg_dict['id']
1200+ arg_dict['path'] = "data/host/%s" % request_id
1201+ xenstore.write_record(self, arg_dict)
1202+
1203+
1204+>>>>>>> MERGE-SOURCE
1205 def _wait_for_agent(self, request_id, arg_dict):
1206 """Periodically checks xenstore for a response from the agent.
1207 The request is always written to 'data/host/{id}', and