Merge ~zioproto/ubuntu/+source/nova:stable/newton into ~ubuntu-server-dev/ubuntu/+source/nova:stable/newton

Proposed by Saverio Proto on 2017-04-24
Status: Merged
Merged at revision: 87a6a8ffb1955236db1f2bae72d06e589f046fe6
Proposed branch: ~zioproto/ubuntu/+source/nova:stable/newton
Merge into: ~ubuntu-server-dev/ubuntu/+source/nova:stable/newton
Diff against target: 4326 lines (+1483/-438)
46 files modified
AUTHORS (+5/-0)
ChangeLog (+29/-0)
PKG-INFO (+1/-1)
debian/changelog (+6/-0)
nova.egg-info/PKG-INFO (+1/-1)
nova.egg-info/SOURCES.txt (+2/-0)
nova.egg-info/pbr.json (+1/-1)
nova.egg-info/requires.txt (+1/-1)
nova/api/openstack/compute/server_external_events.py (+3/-3)
nova/compute/manager.py (+39/-10)
nova/db/sqlalchemy/api.py (+1/-0)
nova/exception.py (+6/-0)
nova/exception_wrapper.py (+3/-0)
nova/locale/cs/LC_MESSAGES/nova.po (+2/-5)
nova/locale/de/LC_MESSAGES/nova.po (+2/-5)
nova/locale/es/LC_MESSAGES/nova.po (+2/-5)
nova/locale/fr/LC_MESSAGES/nova.po (+2/-5)
nova/locale/it/LC_MESSAGES/nova.po (+2/-5)
nova/locale/ja/LC_MESSAGES/nova.po (+2/-5)
nova/locale/ko_KR/LC_MESSAGES/nova.po (+2/-5)
nova/locale/pt_BR/LC_MESSAGES/nova.po (+2/-5)
nova/locale/ru/LC_MESSAGES/nova.po (+2/-5)
nova/locale/tr_TR/LC_MESSAGES/nova.po (+2/-5)
nova/locale/zh_CN/LC_MESSAGES/nova.po (+288/-7)
nova/locale/zh_TW/LC_MESSAGES/nova.po (+2/-5)
nova/network/neutronv2/api.py (+11/-9)
nova/objects/resource_provider.py (+33/-3)
nova/scheduler/client/report.py (+197/-87)
nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml (+4/-4)
nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml (+0/-27)
nova/tests/functional/api/openstack/placement/test_report_client.py (+12/-0)
nova/tests/functional/db/test_resource_provider.py (+90/-4)
nova/tests/unit/api/openstack/compute/test_serversV21.py (+2/-1)
nova/tests/unit/compute/test_compute.py (+46/-0)
nova/tests/unit/compute/test_compute_mgr.py (+25/-8)
nova/tests/unit/db/test_db_api.py (+4/-0)
nova/tests/unit/network/test_neutronv2.py (+105/-32)
nova/tests/unit/scheduler/client/test_report.py (+411/-129)
nova/tests/unit/test_exception.py (+1/-0)
nova/tests/unit/virt/libvirt/test_driver.py (+36/-44)
nova/tests/unit/virt/test_driver.py (+17/-0)
nova/virt/driver.py (+9/-2)
nova/virt/libvirt/driver.py (+50/-8)
releasenotes/notes/bug-1673569-cve-2017-7214-2d7644b356015c93.yaml (+8/-0)
releasenotes/notes/live-migration-progress-known-issue-20176f49da4d3c91.yaml (+13/-0)
requirements.txt (+1/-1)
Reviewer: James Page (review requested 2017-04-24, status: Pending)
Review via email: mp+323018@code.launchpad.net

Description of the Change

Nova version bump to 14.0.5 for stable/newton. The upstream changes pulled in by this point release are listed in the ChangeLog hunk of the diff below.


Preview Diff

diff --git a/AUTHORS b/AUTHORS
index 3c5ca47..197ca31 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -126,6 +126,7 @@ Armando Migliaccio <armando.migliaccio@eu.citrix.com>
 Arnaud Legendre <alegendre@vmware.com>
 Arnaud Legendre <arnaudleg@gmail.com>
 Arnaud Morin <arnaud.morin@corp.ovh.com>
+Arne Recknagel <arecknag@de.ibm.com>
 Artom Lifshitz <alifshit@redhat.com>
 Artur Malinowski <artur.malinowski@intel.com>
 Arvind Somya <asomya@cisco.com>
@@ -703,6 +704,7 @@ Matt Fischer <matt@mattfischer.com>
 Matt Joyce <matt.joyce@cloudscaling.com>
 Matt Odden <mrodden@us.ibm.com>
 Matt Rabe <mdrabe@us.ibm.com>
+Matt Riedemann <mriedem.os@gmail.com>
 Matt Riedemann <mriedem@us.ibm.com>
 Matt Stephenson <mattstep@mattstep.net>
 Matt Thompson <mattt@defunct.ca>
@@ -733,6 +735,7 @@ Michael Still <mikal@stillhq.com>
 Michael Turek <mjturek@linux.vnet.ibm.com>
 Michael Wilson <geekinutah@gmail.com>
 Michael Wurtz <michael.wurtz@ibm.com>
+Michal <mpryc@redhat.com>
 Michal Dulko <michal.dulko@intel.com>
 Michal Pryc <mpryc@redhat.com>
 Miguel Lavalle <malavall@us.ibm.com>
@@ -827,6 +830,7 @@ Pranali Deore <pranali11.deore@nttdata.com>
 PranaliDeore <pranali11.deore@nttdata.com>
 Pranav Salunke <dguitarbite@gmail.com>
 Pranav Salunke <pranav@aptira.com>
+Prateek Arora <parora@redhat.com>
 Praveen Yalagandula <ypraveen@gmail.com>
 Prem Karat <prem.karat@linux.vnet.ibm.com>
 Przemyslaw Czesnowicz <przemyslaw.czesnowicz@intel.com>
@@ -1290,6 +1294,7 @@ lrqrun <lrqrun@gmail.com>
 lvdongbing <dongbing.lv@kylin-cloud.com>
 lyanchih <lyan.h@inwinstack.com>
 m.benchchaoui@cloudbau.de <m.benchchaoui@cloudbau.de>
+m4cr0v <m4cr0v@gmail.com>
 maqi <maqi@cmss.chinamobile.com>
 mark.sturdevant <mark.sturdevant@hpe.com>
 mathieu-rohon <mathieu.rohon@gmail.com>
diff --git a/ChangeLog b/ChangeLog
index 2289ac9..8eab1d5 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,6 +1,21 @@
 CHANGES
 =======

+14.0.5
+------
+
+* Add release note for CVE-2017-7214
+* do not include context to exception notification
+* Fix exception message formatting error in test
+* Fix invalid exception mock for InvalidNUMANodesNumber
+* Fix s390 "connector not found" issue
+* Fix spice channel type
+* Updated from global requirements
+* Add release note for live_migration_progress_timeout issue
+* Ignore deleted services in minimum version calculation
+* Imported Translations from Zanata
+* Fresh resource provider in RT must have generation 0
+
 14.0.4
 ------

@@ -10,9 +25,11 @@ CHANGES
 * Skip test_stamp_pattern in cells v1 job
 * Prepare for using standard python tests
 * Allow None for block_device_mapping_v2.boot_index
+* libvirt: Remove redundant bdm serial mangling and saving during swap_volume
 * Catch error and log warning when not able to update mtimes
 * libvirt: Limit destroying disks during cleanup to spawn
 * libvirt: fix nova can't delete the instance with nvram
+* Pre-load info_cache when handling external events and handle NotFound
 * libvirt: Use the mirror element to detect job completion
 * libvirt: Mock is_job_complete in test_driver
 * libvirt: Fix BlockDevice.wait_for_job when qemu reports no job
@@ -22,16 +39,27 @@ CHANGES
 * Raise DeviceNotFound detaching volume from persistent domain
 * libvirt: Detach volumes from a domain before detaching any encryptors
 * Ensure we mark baremetal links as phy links
+* Do not post allocations that are zero
+* placement: Do not save 0-valued inventory
+* [placement] Enforce min_unit, max_unit and step_size
+* Correct wrong max_unit in placement inventory
+* placement: genericize on resource providers
+* placement: refactor instance translate function
+* placement: refactor translate from node to dict
+* Removal of tests with different result depending on testing env
 * Let nova-manage cell_v2 commands use transport_url from CONF
 * Make simple_cell_setup fully idempotent
 * Make placement client keep trying to connect
 * Handle Unauthorized exception in report client's safe_connect()
+* Handle ImageNotFound exception during instance backup
 * Guestfs handle no passwd or group in image
 * libvirt: Improve _is_booted_from_volume implementation
 * libvirt: Delete duplicate check when live-migrating
 * Fix cold migration with qcow2 ephemeral disks
+* Catch VolumeEncryptionNotSupported during spawn
 * Provide an online data migration to cleanup orphaned build requests
 * Don't apply multi-queue to SRIOV ports
+* libvirt: Acquire TCP ports for console during live migration
 * libvirt: prepare domain XML update for serial ports
 * libvirt: do not return serial address if disabled on destination
 * Fix BDM JSON-Schema validation
@@ -41,6 +69,7 @@
 ------

 * Catch ImageNotAuthorized during boot instance
+* fix for auth during live-migration
 * Fix crashing during guest config with pci_devices=None
 * Bump prlimit cpu time for qemu from 2 to 8
 * Don't trace on ImageNotFound in delete_image_on_error
diff --git a/PKG-INFO b/PKG-INFO
index 0dfed1d..aaa5e3a 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: nova
-Version: 14.0.4
+Version: 14.0.5
 Summary: Cloud computing fabric controller
 Home-page: http://docs.openstack.org/developer/nova/
 Author: OpenStack
diff --git a/debian/changelog b/debian/changelog
index d934cf6..aad93fc 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+nova (2:14.0.5-0ubuntu1) UNRELEASED; urgency=medium
+
+ * New upstream point release for OpenStack Newton
+
+ -- Saverio Proto <saverio.proto@switch.ch> Mon, 24 Apr 2017 08:22:54 +0000
+
 nova (2:14.0.4-0ubuntu1.2) yakkety; urgency=medium

 * d/p/libvirt-set-vlan-tag-for-macvtap.patch: Pick dependent patch
diff --git a/nova.egg-info/PKG-INFO b/nova.egg-info/PKG-INFO
index 0dfed1d..aaa5e3a 100644
--- a/nova.egg-info/PKG-INFO
+++ b/nova.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: nova
-Version: 14.0.4
+Version: 14.0.5
 Summary: Cloud computing fabric controller
 Home-page: http://docs.openstack.org/developer/nova/
 Author: OpenStack
diff --git a/nova.egg-info/SOURCES.txt b/nova.egg-info/SOURCES.txt
index 5f00a54..72b1f25 100644
--- a/nova.egg-info/SOURCES.txt
+++ b/nova.egg-info/SOURCES.txt
@@ -2720,6 +2720,7 @@ releasenotes/notes/bp-virtuozzo-rescue-support-a0f69357a93e5e92.yaml
 releasenotes/notes/bug-1559026-47c3fa3468d66b07.yaml
 releasenotes/notes/bug-1635446-newton-2351fe93f9af67e5.yaml
 releasenotes/notes/bug-1662699-06203e7262e02aa6.yaml
+releasenotes/notes/bug-1673569-cve-2017-7214-2d7644b356015c93.yaml
 releasenotes/notes/bug_1632723-2a4bd74e4a942a06.yaml
 releasenotes/notes/cell-id-db-sync-nova-manage-8504b54dd115a2e9.yaml
 releasenotes/notes/cells-discover-hosts-06a3079ba687e092.yaml
@@ -2795,6 +2796,7 @@ releasenotes/notes/libvirt_hardware_policy_from_libosinfo-19e261851d1ad93a.yaml
 releasenotes/notes/libvirt_ppc64le_hugepage_support-b9fd39cf20c8e91d.yaml
 releasenotes/notes/list-invalid-status-af07af378728bc57.yaml
 releasenotes/notes/list-server-bad-status-fix-7db504b38c8d732f.yaml
+releasenotes/notes/live-migration-progress-known-issue-20176f49da4d3c91.yaml
 releasenotes/notes/live_migration_uri-dependent-on-virt_type-595c46c2310f45c3.yaml
 releasenotes/notes/lock_policy-75bea372036acbd5.yaml
 releasenotes/notes/min-required-libvirt-b948948949669b02.yaml
diff --git a/nova.egg-info/pbr.json b/nova.egg-info/pbr.json
index 58fcfde..6a0ab56 100644
--- a/nova.egg-info/pbr.json
+++ b/nova.egg-info/pbr.json
@@ -1 +1 @@
-{"is_release": true, "git_version": "642caf0"}
\ No newline at end of file
+{"git_version": "c2c91ce", "is_release": true}
\ No newline at end of file
diff --git a/nova.egg-info/requires.txt b/nova.egg-info/requires.txt
index cc1c89b..ee9e996 100644
--- a/nova.egg-info/requires.txt
+++ b/nova.egg-info/requires.txt
@@ -27,7 +27,7 @@ python-glanceclient!=2.4.0,>=2.3.0
 requests>=2.10.0
 six>=1.9.0
 stevedore>=1.16.0
-setuptools!=24.0.0,>=16.0
+setuptools!=24.0.0,!=34.0.0,!=34.0.1,!=34.0.2,!=34.0.3,!=34.1.0,!=34.1.1,!=34.2.0,!=34.3.0,>=16.0
 websockify>=0.8.0
 oslo.cache>=1.5.0
 oslo.concurrency>=3.8.0
diff --git a/nova/api/openstack/compute/server_external_events.py b/nova/api/openstack/compute/server_external_events.py
index 2ed3399..2789b66 100644
--- a/nova/api/openstack/compute/server_external_events.py
+++ b/nova/api/openstack/compute/server_external_events.py
@@ -65,11 +65,11 @@ class ServerExternalEventsController(wsgi.Controller):
 instance = instances.get(event.instance_uuid)
 if not instance:
 try:
- # Load migration_context here in a single DB operation
- # because we need it later on
+ # Load migration_context and info_cache here in a single DB
+ # operation because we need them later on
 instance = objects.Instance.get_by_uuid(
 context, event.instance_uuid,
- expected_attrs=['migration_context'])
+ expected_attrs=['migration_context', 'info_cache'])
 instances[event.instance_uuid] = instance
 except exception.InstanceNotFound:
 LOG.debug('Dropping event %(name)s:%(tag)s for unknown '
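The hunk above widens expected_attrs so the instance arrives with everything the later event handling touches, in a single DB round trip. A minimal sketch of the pattern (hypothetical helper name; the objects.Instance API is the one shown in the hunk):

    from nova import objects

    def _get_instance_for_event(context, instance_uuid):
        # Eager-load the attributes the event handlers need later;
        # lazy-loading them afterwards would cost a second DB call and
        # can fail outside of a database-connected service.
        return objects.Instance.get_by_uuid(
            context, instance_uuid,
            expected_attrs=['migration_context', 'info_cache'])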
diff --git a/nova/compute/manager.py b/nova/compute/manager.py
index fb58012..7b403e9 100644
--- a/nova/compute/manager.py
+++ b/nova/compute/manager.py
@@ -1968,7 +1968,8 @@ class ComputeManager(manager.Manager):
 exception.ImageUnacceptable,
 exception.InvalidDiskInfo,
 exception.InvalidDiskFormat,
- exception.SignatureVerificationError) as e:
+ exception.SignatureVerificationError,
+ exception.VolumeEncryptionNotSupported) as e:
 self._notify_about_instance_usage(context, instance,
 'create.error', fault=e)
 raise exception.BuildAbortException(instance_uuid=instance.uuid,
@@ -3149,7 +3150,12 @@ class ComputeManager(manager.Manager):
 image_id = image['id']
 LOG.debug("Deleting image %s", image_id,
 instance=instance)
- self.image_api.delete(context, image_id)
+ try:
+ self.image_api.delete(context, image_id)
+ except exception.ImageNotFound:
+ LOG.info(_LI("Failed to find image %(image_id)s to "
+ "delete"), {'image_id': image_id},
+ instance=instance)

 @wrap_exception()
 @reverts_task_state
@@ -4859,7 +4865,11 @@ class ComputeManager(manager.Manager):
 old_cinfo = jsonutils.loads(bdm['connection_info'])
 if old_cinfo and 'serial' not in old_cinfo:
 old_cinfo['serial'] = old_volume_id
- new_cinfo['serial'] = old_cinfo['serial']
+ # NOTE(lyarwood): serial is not always present in the returned
+ # connection_info so set it if it is missing as we do in
+ # DriverVolumeBlockDevice.attach().
+ if 'serial' not in new_cinfo:
+ new_cinfo['serial'] = new_volume_id
 return (old_cinfo, new_cinfo)

 def _swap_volume(self, context, instance, bdm, connector,
@@ -4874,6 +4884,10 @@ class ComputeManager(manager.Manager):
 connector,
 instance,
 bdm)
+ # NOTE(lyarwood): The Libvirt driver, the only virt driver
+ # currently implementing swap_volume, will modify the contents of
+ # new_cinfo when connect_volume is called. This is then saved to
+ # the BDM in swap_volume for future use outside of this flow.
 LOG.debug("swap_volume: Calling driver volume swap with "
 "connection infos: new: %(new_cinfo)s; "
 "old: %(old_cinfo)s",
@@ -4881,6 +4895,9 @@ class ComputeManager(manager.Manager):
 contex=context, instance=instance)
 self.driver.swap_volume(old_cinfo, new_cinfo, instance, mountpoint,
 resize_to)
+ LOG.debug("swap_volume: Driver volume swap returned, new "
+ "connection_info is now : %(new_cinfo)s",
+ {'new_cinfo': new_cinfo})
 except Exception:
 failed = True
 with excutils.save_and_reraise_exception():
@@ -4909,8 +4926,13 @@ class ComputeManager(manager.Manager):
 self.volume_api.terminate_connection(context,
 conn_volume,
 connector)
- # If Cinder initiated the swap, it will keep
- # the original ID
+ # NOTE(lyarwood): The following call to
+ # os-migrate-volume-completion returns a dict containing
+ # save_volume_id, this volume id has two possible values :
+ # 1. old_volume_id if we are migrating (retyping) volumes
+ # 2. new_volume_id if we are swapping between two existing volumes
+ # This volume id is later used to update the volume_id and
+ # connection_info['serial'] of the BDM.
 comp_ret = self.volume_api.migrate_volume_completion(
 context,
 old_volume_id,
@@ -4949,9 +4971,10 @@ class ComputeManager(manager.Manager):
 new_volume_id,
 resize_to)

+ # NOTE(lyarwood): Update the BDM with the modified new_cinfo and
+ # correct volume_id returned by Cinder.
 save_volume_id = comp_ret['save_volume_id']
-
- # Update bdm
+ new_cinfo['serial'] = save_volume_id
 values = {
 'connection_info': jsonutils.dumps(new_cinfo),
 'source_type': 'volume',
@@ -6690,9 +6713,15 @@ class ComputeManager(manager.Manager):
 {'event': event.key, 'error': six.text_type(e)},
 instance=instance)
 elif event.name == 'network-vif-deleted':
- self._process_instance_vif_deleted_event(context,
- instance,
- event.tag)
+ try:
+ self._process_instance_vif_deleted_event(context,
+ instance,
+ event.tag)
+ except exception.NotFound as e:
+ LOG.info(_LI('Failed to process external instance event '
+ '%(event)s due to: %(error)s'),
+ {'event': event.key, 'error': six.text_type(e)},
+ instance=instance)
 else:
 self._process_instance_event(instance, event)

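Two of the manager hunks apply the same defensive pattern: a resource that is already gone during cleanup is logged, not fatal. A condensed sketch of the backup-rotation case (hypothetical method name; exception, LOG and self.image_api as in the hunk):

    def _delete_backup_image(self, context, instance, image_id):
        # Best-effort delete: the backup image may have been removed out
        # of band, which should not abort the rest of the rotation.
        try:
            self.image_api.delete(context, image_id)
        except exception.ImageNotFound:
            LOG.info("Image %s was already deleted, skipping", image_id,
                     instance=instance)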
diff --git a/nova/db/sqlalchemy/api.py b/nova/db/sqlalchemy/api.py
index 5527c0e..29baac5 100644
--- a/nova/db/sqlalchemy/api.py
+++ b/nova/db/sqlalchemy/api.py
@@ -453,6 +453,7 @@ def service_get_minimum_version(context, binaries):
 models.Service.binary,
 func.min(models.Service.version)).\
 filter(models.Service.binary.in_(binaries)).\
+ filter(models.Service.deleted == 0).\
 filter(models.Service.forced_down == false()).\
 group_by(models.Service.binary)
 return dict(min_versions)
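The one-line DB fix excludes soft-deleted service rows, which otherwise pin the computed minimum to a version that no live service actually runs. A toy illustration of the effect (plain Python over hypothetical rows):

    services = [
        {'binary': 'nova-compute', 'version': 16, 'deleted': 0},
        {'binary': 'nova-compute', 'version': 9, 'deleted': 1},  # soft-deleted
    ]
    # Without the deleted == 0 filter the minimum would be 9, wrongly
    # gating features that require a newer minimum service version.
    min_version = min(s['version'] for s in services if s['deleted'] == 0)
    assert min_version == 16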
diff --git a/nova/exception.py b/nova/exception.py
index c9e5212..5006175 100644
--- a/nova/exception.py
+++ b/nova/exception.py
@@ -2140,6 +2140,12 @@ class InvalidAllocationCapacityExceeded(InvalidInventory):
 "amount would exceed the capacity.")


+class InvalidAllocationConstraintsViolated(InvalidInventory):
+ msg_fmt = _("Unable to create allocation for '%(resource_class)s' on "
+ "resource provider '%(resource_provider)s'. The requested "
+ "amount would violate inventory constraints.")
+
+
 class UnsupportedPointerModelRequested(Invalid):
 msg_fmt = _("Pointer model '%(model)s' requested is not supported by "
 "host.")
diff --git a/nova/exception_wrapper.py b/nova/exception_wrapper.py
index 5b74c3b..5051b83 100644
--- a/nova/exception_wrapper.py
+++ b/nova/exception_wrapper.py
@@ -86,6 +86,9 @@ def _get_call_dict(function, self, context, *args, **kw):
 # self can't be serialized and shouldn't be in the
 # payload
 call_dict.pop('self', None)
+ # NOTE(gibi) remove context as well as it contains sensitive information
+ # and it can also contain circular references
+ call_dict.pop('context', None)
 return _cleanse_dict(call_dict)


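The wrapper now drops the request context from the notification payload for the same reasons self is dropped: it does not serialize cleanly and can carry credentials. A simplified sketch, using the stdlib inspect.getcallargs in place of nova's own argument helper (an assumption, not the actual call site):

    import inspect

    def _get_call_dict(function, self, context, *args, **kw):
        call_dict = inspect.getcallargs(function, self, context, *args, **kw)
        call_dict.pop('self', None)     # not serializable
        call_dict.pop('context', None)  # sensitive, possibly cyclic
        return call_dict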
diff --git a/nova/locale/cs/LC_MESSAGES/nova.po b/nova/locale/cs/LC_MESSAGES/nova.po
index 954f4ab..d369753 100644
--- a/nova/locale/cs/LC_MESSAGES/nova.po
+++ b/nova/locale/cs/LC_MESSAGES/nova.po
@@ -10,9 +10,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4234,9 +4234,6 @@ msgstr "libguestfs je nainstalováno ale nelze použít (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs není nainstalováno (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "chyba libvirt při žádání informací o práci s blokem."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "značka [%s] nenalezena"
diff --git a/nova/locale/de/LC_MESSAGES/nova.po b/nova/locale/de/LC_MESSAGES/nova.po
index 4af7307..36ad13e 100644
--- a/nova/locale/de/LC_MESSAGES/nova.po
+++ b/nova/locale/de/LC_MESSAGES/nova.po
@@ -13,9 +13,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4760,9 +4760,6 @@ msgstr "libguestfs installiert, aber nicht benutzbar (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs ist nicht installiert (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "libvirt-Fehler beim Anfordern der blockjob-Informationen."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "Marker [%s] nicht gefunden"
diff --git a/nova/locale/es/LC_MESSAGES/nova.po b/nova/locale/es/LC_MESSAGES/nova.po
index ab71042..8def410 100644
--- a/nova/locale/es/LC_MESSAGES/nova.po
+++ b/nova/locale/es/LC_MESSAGES/nova.po
@@ -14,9 +14,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4664,9 +4664,6 @@ msgstr "libguestfs está instalado pero no puede ser usado (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs no está nstalado (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "error de libvirt al solicitar información de blockjob."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "no se ha encontrado el marcador [%s]"
diff --git a/nova/locale/fr/LC_MESSAGES/nova.po b/nova/locale/fr/LC_MESSAGES/nova.po
index 86f72f6..9bd1a96 100644
--- a/nova/locale/fr/LC_MESSAGES/nova.po
+++ b/nova/locale/fr/LC_MESSAGES/nova.po
@@ -25,9 +25,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4676,9 +4676,6 @@ msgstr "libguestfs est installé mais n'est pas utilisable (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs n'est pas installé (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "Erreur de libvirt lors de la demande des informations de blockjob."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "le marqueur [%s] est introuvable"
diff --git a/nova/locale/it/LC_MESSAGES/nova.po b/nova/locale/it/LC_MESSAGES/nova.po
index c4b4dfb..d8dbf19 100644
--- a/nova/locale/it/LC_MESSAGES/nova.po
+++ b/nova/locale/it/LC_MESSAGES/nova.po
@@ -11,9 +11,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4632,9 +4632,6 @@ msgstr "libguestfs installato, ma non utilizzabile (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs non è installato (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "errore libvirt durante la richiesta delle informazioni blockjob."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "indicatore [%s] non trovato"
diff --git a/nova/locale/ja/LC_MESSAGES/nova.po b/nova/locale/ja/LC_MESSAGES/nova.po
index dee81c9..38e5cac 100644
--- a/nova/locale/ja/LC_MESSAGES/nova.po
+++ b/nova/locale/ja/LC_MESSAGES/nova.po
@@ -11,9 +11,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4612,9 +4612,6 @@ msgstr "libguestfs はインストールされていますが、使用できません (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs がインストールされていません (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "blockjob 情報を要求しているときに libvirt エラーが発生しました。"
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "マーカー [%s] が見つかりません"
diff --git a/nova/locale/ko_KR/LC_MESSAGES/nova.po b/nova/locale/ko_KR/LC_MESSAGES/nova.po
index 21aa3a7..363ecad 100644
--- a/nova/locale/ko_KR/LC_MESSAGES/nova.po
+++ b/nova/locale/ko_KR/LC_MESSAGES/nova.po
@@ -11,9 +11,9 @@
 # Jongwon Lee <tothebinaryworld@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4743,9 +4743,6 @@ msgstr "libguestfs가 설치되었지만 사용할 수 없음(%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs가 설치되지 않음(%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "블록 작업 정보를 요청하는 중에 libvirt 오류 발생."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "마커 [%s]을(를) 찾을 수 없음"
diff --git a/nova/locale/pt_BR/LC_MESSAGES/nova.po b/nova/locale/pt_BR/LC_MESSAGES/nova.po
index e78a688..c695e7d 100644
--- a/nova/locale/pt_BR/LC_MESSAGES/nova.po
+++ b/nova/locale/pt_BR/LC_MESSAGES/nova.po
@@ -17,9 +17,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4601,9 +4601,6 @@ msgstr "libguestfs instalado, mas não utilizável (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs não está instalado (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "Erro de libvirt ao solicitar informações de blockjob."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "marcador [%s] não localizado"
diff --git a/nova/locale/ru/LC_MESSAGES/nova.po b/nova/locale/ru/LC_MESSAGES/nova.po
index 5c01ad6..70c845d 100644
--- a/nova/locale/ru/LC_MESSAGES/nova.po
+++ b/nova/locale/ru/LC_MESSAGES/nova.po
@@ -14,9 +14,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4570,9 +4570,6 @@ msgstr "libguestfs установлена, но ее невозможно использовать (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "Не установлена libguestfs (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "Ошибка libvirt при запросе информации о blockjob."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "маркер [%s] не найден"
diff --git a/nova/locale/tr_TR/LC_MESSAGES/nova.po b/nova/locale/tr_TR/LC_MESSAGES/nova.po
index b9a2bbc..8affc82 100644
--- a/nova/locale/tr_TR/LC_MESSAGES/nova.po
+++ b/nova/locale/tr_TR/LC_MESSAGES/nova.po
@@ -8,9 +8,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -3700,9 +3700,6 @@ msgstr "libguestfs kurulu ama kullanılabilir değil (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs kurulu değil (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "blockjob bilgisi istenirken libvirt hatası."
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr " [%s] göstergesi bulunamadı"
diff --git a/nova/locale/zh_CN/LC_MESSAGES/nova.po b/nova/locale/zh_CN/LC_MESSAGES/nova.po
index eac80b5..de47550 100644
--- a/nova/locale/zh_CN/LC_MESSAGES/nova.po
+++ b/nova/locale/zh_CN/LC_MESSAGES/nova.po
@@ -30,16 +30,17 @@
 # English translations for nova.
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 # zzxwill <zzxwill@gmail.com>, 2016. #zanata
+# TigerFang <tigerfun@126.com>, 2017. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"PO-Revision-Date: 2016-08-23 02:11+0000\n"
-"Last-Translator: zzxwill <zzxwill@gmail.com>\n"
+"PO-Revision-Date: 2017-02-27 03:31+0000\n"
+"Last-Translator: TigerFang <tigerfun@126.com>\n"
 "Language: zh-CN\n"
 "Language-Team: Chinese (China)\n"
 "Plural-Forms: nplurals=1; plural=0\n"
@@ -126,6 +127,13 @@ msgid "%(name)s has more than %(max_length)s characters."
 msgstr "%(name)s 包含的字符超过 %(max_length)s 个。"

 #, python-format
+msgid ""
+"%(operation)s is not supported in conjunction with the current %(option)s "
+"setting. Please refer to the nova config-reference."
+msgstr ""
+"在设置了当前的配置项%(option)s时不支持%(operation)s。请参考nova配置手册。"
+
+#, python-format
 msgid "%(path)s is not on local storage: %(reason)s"
 msgstr "%(path)s 没有在本地存储器上:%(reason)s"

@@ -246,6 +254,10 @@ msgid "Action: '%(action)s', calling method: %(meth)s, body: %(body)s"
 msgstr "操作:“%(action)s”,调用方法:%(meth)s,主体:%(body)s"

 #, python-format
+msgid "Active live migration for instance %(instance_id)s not found"
+msgstr "无法找到针对实例%(instance_id)s的正在进行的实时迁移"
+
+#, python-format
 msgid "Add metadata failed for aggregate %(id)s after %(retries)s retries"
 msgstr "在%(retries)s尝试后,为聚合%(id)s 添加元数据"

@@ -304,6 +316,16 @@ msgid "All hosts are already mapped to cell(s), exiting."
 msgstr "所有主机已映射至单元,正在退出。"

 #, python-format
+msgid "Allocation for resource provider '%(rp_uuid)s' that does not exist."
+msgstr "分配给不存在的资源提供者 '%(rp_uuid)s' 。"
+
+#, python-format
+msgid ""
+"Allocation of class '%(class)s' for resource provider '%(rp_uuid)s' invalid: "
+"%(error)s"
+msgstr "为资源提供者'%(rp_uuid)s' 分配类型'%(class)s'失败,详情如下:%(error)s"
+
+#, python-format
 msgid "An invalid 'name' value was provided. The name must be: %(reason)s"
 msgstr "提供了无效“name”值。name 必须为:%(reason)s"

@@ -313,6 +335,9 @@ msgstr "发生了一个未知的错误. 请重试你的请求."
 msgid "An unknown exception occurred."
 msgstr "发生未知异常。"

+msgid "Another thread concurrently updated the data. Please retry your update"
+msgstr "另外一个进程也在同时更新数据。请您重试"
+
 msgid "Anti-affinity instance group policy was violated."
 msgstr "违反反亲和力实例组策略。"

@@ -332,6 +357,10 @@ msgstr ""
 "和内存。"

 #, python-format
+msgid "Attaching interfaces is not supported for instance %(instance)s."
+msgstr "无法向实例%(instance)s挂载接口。"
+
+#, python-format
 msgid ""
 "Attempt to consume PCI device %(compute_node_id)s:%(address)s from empty pool"
 msgstr "尝试从空池子中消费PCI设备%(compute_node_id)s:%(address)s"
@@ -347,6 +376,10 @@ msgid "Bad Request - Invalid Parameters"
 msgstr "错误请求——参数无效"

 #, python-format
+msgid "Bad inventory %(class)s for resource provider %(rp_uuid)s: %(error)s"
+msgstr "资源提供者%(rp_uuid)s的存量%(class)s无效,详情如下:%(error)s"
+
+#, python-format
 msgid "Bad mac for to_global_ipv6: %s"
 msgstr "错误的to_global_ipv6 mac:%s"

@@ -428,6 +461,9 @@ msgstr "块设备映射无效:未能获取快照 %(id)s。"
 msgid "Block Device Mapping is Invalid: failed to get volume %(id)s."
 msgstr "块设备映射无效:未能获取卷 %(id)s。"

+msgid "Block device tags are not yet supported."
+msgstr "暂不支持块设备标签。"
+
 msgid "Block migration can not be used with shared storage."
 msgstr "块存储迁移无法在共享存储使用"

@@ -479,6 +515,25 @@ msgstr "CPU 编号 %(cpuset)s 未分配为任何节点"
 msgid "CPU pinning is not supported by the host: %(reason)s"
 msgstr "主机不支持CPU 绑定:%(reason)s"

+#, python-format
+msgid "CPU set to pin %(requested)s must be a subset of free CPU set %(free)s"
+msgstr "要绑定的CPU值%(requested)s必须是可用CPU值%(free)s的子集"
+
+#, python-format
+msgid ""
+"CPU set to pin %(requested)s must be a subset of known CPU set %(cpuset)s"
+msgstr "要绑定的CPU值%(requested)s必须是已知CPU值%(cpuset)s的子集"
+
+#, python-format
+msgid ""
+"CPU set to unpin %(requested)s must be a subset of known CPU set %(cpuset)s"
+msgstr "取消绑定的CPU值%(requested)s必须是已知的CPU值%(cpuset)s的子集"
+
+#, python-format
+msgid ""
+"CPU set to unpin %(requested)s must be a subset of pinned CPU set %(pinned)s"
+msgstr "取消绑定的CPU值%(requested)s必须是已经绑定的CPU值%(pinned)s的子集"
+
 msgid "Can not add access to a public flavor."
 msgstr "不能添加访问到公共云主机类型。"

@@ -496,6 +551,9 @@ msgstr "只能再多运行 %s 个此类型的实例。"
 msgid "Can't detach root device volume"
 msgstr "不能断开root设备卷"

+msgid "Can't force to a non-provided destination"
+msgstr "无法强制到一个未指定的目标"
+
 msgid "Can't resize a disk to 0 GB."
 msgstr "不能调整磁盘到0GB."

@@ -666,6 +724,10 @@ msgid "Class %(class_name)s could not be found: %(exception)s"
 msgstr "找不到类 %(class_name)s :异常 %(exception)s"

 #, python-format
+msgid "Client exception during Migration Pre check: %(reason)s"
+msgstr "在迁移预检查的过程中出现客户端错误,原因是:%(reason)s"
+
+#, python-format
 msgid ""
 "Command Not supported. Please use Ironic command %(cmd)s to perform this "
 "action."
@@ -949,6 +1011,14 @@ msgid ""
 "the project %(project_id)s."
 msgstr "网络uuid %(network_uuid)s不存在或者没有分配到项目%(project_id)s。"

+msgid ""
+"Ephemeral disks requested are larger than the instance type allows. If no "
+"size is given in one block device mapping, flavor ephemeral size will be "
+"used."
+msgstr ""
+"请求的临时磁盘大小比实例类型允许值要大。在一个块存储设备进行映射时,如果磁盘"
+"的大小没有给出,则会使用配置类型中的大小值。"
+
 #, python-format
 msgid "Error attempting to run %(method)s"
 msgstr "尝试运行 %(method)s 时出错"
@@ -1213,6 +1283,10 @@ msgstr "无法终止云主机:%(reason)s"
 msgid "Failed to unplug vif %s"
 msgstr "拔除 vif %s 失败"

+#, python-format
+msgid "Failed to unplug virtual interface: %(reason)s"
+msgstr "移除虚拟接口失败:%(reason)s"
+
 msgid "Failure prepping block device."
 msgstr "准备块设备失败。"

@@ -1422,6 +1496,9 @@ msgstr "剩余%(type)s %(free).02f %(unit)s<请求%(requested)d %(unit)s"
 msgid "Group not valid. Reason: %(reason)s"
 msgstr "组无效。原因:%(reason)s"

+msgid "Guest agent is not enabled for the instance"
+msgstr "实例中的客户代理没有启用"
+
 msgid "Guest does not have a console available."
 msgstr "访客没有可用控制台。"

@@ -1736,6 +1813,9 @@ msgstr "实例已处于救援模式:%s"
 msgid "Instance is not a member of specified network"
 msgstr "实例并不是指定网络的成员"

+msgid "Instance network is not ready yet"
+msgstr "实例的网络尚未就绪"
+
 msgid "Instance recreate is not supported."
 msgstr "不支持实例重创建"

@@ -1840,6 +1920,9 @@ msgstr "控制台类型 %(console_type)s 无效"
 msgid "Invalid content type %(content_type)s."
 msgstr "无效的内容类型 %(content_type)s。"

+msgid "Invalid data supplied to HashRing.get_hosts."
+msgstr "对 HashRing.get_hosts 提供的数据无效。"
+
 #, python-format
 msgid "Invalid datetime string: %(reason)s"
 msgstr "日期字符串无效: %(reason)s"
@@ -1875,6 +1958,9 @@ msgstr "请求中无效固定IP地址%s"
 msgid "Invalid floating IP %s in request"
 msgstr "请求中无效浮动IP %s"

+msgid "Invalid hosts supplied when building HashRing."
+msgstr "在创建HashRing时提供的主机不正确。"
+
 #, python-format
 msgid "Invalid id: %(instance_id)s (expecting \"i-...\")"
 msgstr "无效id:%(instance_id)s (期望 \"i-...\")"
@@ -1911,6 +1997,15 @@ msgid "Invalid instance image."
 msgstr "无效实例镜像。"

 #, python-format
+msgid ""
+"Invalid inventory for '%(resource_class)s' on resource provider "
+"'%(resource_provider)s'. The reserved value is greater than or equal to "
+"total."
+msgstr ""
+"资源提供者'%(resource_provider)s'上的'%(resource_class)s'清单无效。预留值大于"
+"或者等于总值。"
+
+#, python-format
 msgid "Invalid is_public filter [%s]"
 msgstr "is_public 过滤器 [%s] 无效"

@@ -1937,6 +2032,10 @@ msgid "Invalid metadata: %(reason)s"
 msgstr "元数据无效: %(reason)s"

 #, python-format
+msgid "Invalid microversion: %(error)s"
+msgstr "微转化无效详情如下:%(error)s"
+
+#, python-format
 msgid "Invalid minDisk filter [%s]"
 msgstr "minDisk 过滤器 [%s] 无效"

@@ -1998,6 +2097,9 @@ msgstr "开始时间无效。开始时间不能出现在结束时间之后。"
 msgid "Invalid state of instance files on shared storage"
 msgstr "共享存储器上实例文件的状态无效"

+msgid "Invalid status value"
+msgstr "状态值无效"
+
 msgid "Invalid target_lun"
 msgstr "无效 target_lun"

@@ -2051,6 +2153,23 @@ msgid "Invalid volume_size."
 msgstr "无效volume_size."

 #, python-format
+msgid "Inventory changed while attempting to allocate: %(error)s"
+msgstr "在进行分配时存量发现改变,详情如下:%(error)s"
+
+#, python-format
+msgid ""
+"Inventory for '%(resource_class)s' on resource provider "
+"'%(resource_provider)s' invalid."
+msgstr "资源提供者'%(resource_provider)s'上的'%(resource_class)s'清单无效。"
+
+#, python-format
+msgid ""
+"Inventory for '%(resource_classes)s' on resource provider "
+"'%(resource_provider)s' in use."
+msgstr ""
+"资源提供者'%(resource_provider)s'上的'%(resource_classes)s'清单正在使用中。"
+
+#, python-format
 msgid "Ironic node uuid not supplied to driver for instance %s."
 msgstr "Ironic节点uuid不提供实例 %s的驱动。"

@@ -2069,6 +2188,10 @@ msgid ""
 msgstr "在外部网络%(network_uuid)s创建一个接口是不允许的"

 #, python-format
+msgid "JSON does not validate: %(error)s"
+msgstr "未验证的JSON文件,详情如下:%(error)s"
+
+#, python-format
 msgid ""
 "Kernel/Ramdisk image is too large: %(vdi_size)d bytes, max %(max_size)d bytes"
 msgstr "内核/内存盘镜像太大:%(vdi_size)d 字节,最大 %(max_size)d 字节"
@@ -2104,6 +2227,9 @@ msgstr "密钥对必须是字符串,并且长度在1到255个字符"
 msgid "Last %s nova syslog entries:-"
 msgstr "最近的nova 系统日志nova syslog 输出 %s:-"

+msgid "Libguestfs does not have permission to read host kernel."
+msgstr "Libguestfs没有权限读取主机内核"
+
 msgid "Limit"
 msgstr "限制"

@@ -2143,6 +2269,10 @@ msgid ""
 msgstr "使用 API V2.25 进行的实时迁移要求所有 Mitaka 升级完成才可用。"

 #, python-format
+msgid "Malformed JSON: %(error)s"
+msgstr "JSON格式错误,详情如下:%(error)s"
+
+#, python-format
 msgid "Malformed message body: %(reason)s"
 msgstr "错误格式的消息体: %(reason)s"

@@ -2186,6 +2316,10 @@ msgstr "超过端口的最大数"
 msgid "Maximum number of security groups or rules exceeded"
 msgstr "已超过最大安全组数或最大规则数"

+#, python-format
+msgid "Maximum number of serial port exceeds %(allowed)d for %(virt_type)s"
+msgstr "超出最大串行端口限制,允许值为%(allowed)d,请求值为%(virt_type)s"
+
 msgid "Metadata item was not found"
 msgstr "元数据项目未找到"

@@ -2400,6 +2534,9 @@ msgstr "网络驱动程序不支持此功能。"
 msgid "Network host %(host)s has zero fixed IPs in network %(network_id)s."
 msgstr "网络主机 %(host)s 在网络 %(network_id)s 中没有固定 IP。"

+msgid "Network interface tags are not yet supported."
+msgstr "暂不支持网络接口标签。"
+
 #, python-format
 msgid ""
 "Network must be disassociated from project %(project_id)s before it can be "
@@ -2418,6 +2555,9 @@ msgstr "网络需要关联的 port_security_enabled 和子网,以便应用安全组。"
 msgid "Network set host failed for network %(network_id)s."
 msgstr "网络为网络%(network_id)s设置主机失败。"

+msgid "Networking client is experiencing an unauthorized exception."
+msgstr "网络客户端正在发生一项非授权的异常。"
+
 msgid "New volume must be detached in order to swap."
 msgstr "为了进行交换,新卷必须断开。"

@@ -2442,6 +2582,10 @@ msgstr "在协议nnection_info中没有access_url。不能验证协议"
 msgid "No agent-build associated with id %(id)s."
 msgstr "不存在任何与标识 %(id)s 关联的代理构建。"

+#, python-format
+msgid "No allocations for consumer '%(consumer_uuid)s'"
+msgstr "消耗者'%(consumer_uuid)s'没有被分配的资源"
+
 msgid "No cell given in routing path."
 msgstr "在路由路径中未给定单元。"

@@ -2604,6 +2748,12 @@ msgid ""
 "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s are free."
 msgstr "并非 PF %(compute_node_id)s:%(address)s 的所有虚拟功能都可用。"

+msgid "Not all aggregates have been migrated to the API database"
+msgstr "迁移到API数据库的主机组不完整"
+
+msgid "Not all flavors have been migrated to the API database"
+msgstr "迁移到API数据库的配置信息不完整"
+
 msgid "Not an rbd snapshot"
 msgstr "不是 rbd 快照"

@@ -2662,6 +2812,10 @@ msgstr "旧卷绑定到一个不同的实例。"
 msgid "One or more hosts already in availability zone(s) %s"
 msgstr "在可用区域%s中,已经有一个或多个主机"

+#, python-format
+msgid "Only %(type)s is provided"
+msgstr "只提供了%(type)s"
+
 msgid "Only administrators may list deleted instances"
 msgstr "仅管理员可列示已删除的实例"

@@ -2793,6 +2947,10 @@ msgid "Plugin version mismatch (Expected %(exp)s, got %(got)s)"
 msgstr "插件版本不匹配 (预期 %(exp)s,获取 %(got)s)"

 #, python-format
+msgid "Pointer model '%(model)s' requested is not supported by host."
+msgstr "主机不支持请求的指针模型'%(model)s'"
+
+#, python-format
 msgid "Policy doesn't allow %(action)s to be performed."
 msgstr "政策不允许 %(action)s 被执行。"

@@ -2826,6 +2984,10 @@ msgid "Port id %(port_id)s could not be found."
 msgstr "找不到端口标识 %(port_id)s。"

 #, python-format
+msgid "Port update failed for port %(port_id)s: %(reason)s"
+msgstr "对端口%(port_id)s的更新失败,原因是:%(reason)s"
+
+#, python-format
 msgid "Project %(project_id)s could not be found."
 msgstr "项目 %(project_id)s 没有找到。"

@@ -2934,6 +3096,17 @@ msgid "Quota usage for project %(project_id)s could not be found."
 msgstr "找不到项目 %(project_id)s 的配额使用量。"

 #, python-format
+msgid ""
+"Quota usage refresh of resource %(resource)s for project %(project_id)s, "
+"user %(user_id)s, is not allowed. The allowed resources are %(syncable)s."
+msgstr ""
+"不允许对项目%(project_id)s中用户%(user_id)s的资源%(resource)s配额使用率进行刷"
+"新。允许使用的资源是%(syncable)s。"
+
+msgid "RPC is pinned to old version"
+msgstr "RPC版本过老"
+
+#, python-format
 msgid "Reached maximum number of retries trying to unplug VBD %s"
 msgstr "已达到尝试拔出 VBD %s 的最大重试次数"

@@ -3032,6 +3205,13 @@ msgstr "不允许将云主机类型的磁盘大小缩减为零。"
 msgid "Resource could not be found."
 msgstr "资源没有找到。"

+#, python-format
+msgid "Resource provider '%(rp_uuid)s' not found: %(error)s"
+msgstr "无法找到资源提供者 '%(rp_uuid)s' ,详情如下:%(error)s"
+
+msgid "Resource provider has allocations."
+msgstr "资源提供者已经分配。"
+
 msgid "Resumed"
 msgstr "已恢复"

@@ -3143,6 +3323,10 @@ msgid "Security group with rule %(rule_id)s not found."
 msgstr "带有规则 %(rule_id)s 的安全组没有找到。"

 #, python-format
+msgid "Server %(server_id)s has no tag '%(tag)s'"
+msgstr "服务器%(server_id)s上未发现标签'%(tag)s'"
+
+#, python-format
 msgid "Server disk was unable to be resized because: %(reason)s"
 msgstr "由于: %(reason)s,实例磁盘空间不能修改"

@@ -3237,6 +3421,14 @@ msgstr "分类大小超过分类关键值大小"
 msgid "Sort key supplied was not valid."
 msgstr "提供的排序键无效。"

+#, python-format
+msgid ""
+"Specified Fixed IP '%(addr)s' cannot be used with port '%(port)s': the two "
+"cannot be specified together."
+msgstr ""
+"指定的固定IP地址'%(addr)s'无法使用在接口'%(port)s'上,原因是:这两者无法同时"
+"指定。"
+
 msgid "Specified fixed address not assigned to instance"
 msgstr "指定的固定IP地址没有分配给实例"

@@ -3276,6 +3468,28 @@ msgid "Table"
 msgstr "表"

 #, python-format
+msgid ""
+"Tag '%(tag)s' is invalid. It must be a string without characters '/' and "
+"','. Validation error message: %(err)s"
+msgstr ""
+"标签 '%(tag)s' 无效。它必须是字符串,且不能包含‘/’和‘,’。验证错误相关的情况"
+"为:%(err)s"
+
+#, python-format
+msgid "Tag '%(tag)s' is too long. Maximum length of a tag is %(length)d"
+msgstr "标签'%(tag)s'过长。最大的标签长度是%(length)d"
+
+#, python-format
+msgid "Tags %(tags)s are too long. Maximum length of a tag is %(length)d"
+msgstr "标签%(tags)s过长。标签的最大长度为%(length)d"
+
+#, python-format
+msgid ""
+"Tags '%s' are invalid. Each tag must be a string without characters '/' and "
+"','."
+msgstr "标签 '%s'无效。标签必须是不包含‘/’和‘,’的字符串。"
+
+#, python-format
 msgid "Task %(task_name)s is already running on host %(host)s"
 msgstr "任务 %(task_name)s 已在主机 %(host)s 上运行"

@@ -3327,10 +3541,24 @@ msgstr "在后台,缺省的PBM策略不存在。"
 msgid "The firewall filter for %s does not exist"
 msgstr "%s 的防火墙过滤器不存在"

+#, python-format
+msgid ""
+"The fixed IP associated with port %(port_id)s is not compatible with the "
+"host."
+msgstr "端口%(port_id)s对应的固定IP地址无法与主机适配。"
+
 msgid "The floating IP request failed with a BadRequest"
 msgstr "由于坏的请求,浮动IP请求失败"

 #, python-format
+msgid ""
+"The format of the option 'reserved_huge_pages' is invalid. (found "
+"'%(conf)s') Please refer to the nova config-reference."
+msgstr ""
+"配置项'reserved_huge_pages'的格式不正确。(配置内容为'%(conf)s')请参照nova配"
+"置说明手册。"
+
+#, python-format
 msgid "The group %(group_name)s must be configured with an id."
 msgstr "组%(group_name)s必须配置一个id。"

@@ -3351,6 +3579,10 @@ msgid "The key %s is required in all file system descriptions."
 msgstr "在所有文件系统描述中,关键字%s是必须的。"

 #, python-format
+msgid "The media type %(bad_type)s is not supported, use %(good_type)s"
+msgstr "不支持%(bad_type)s这个媒体类型,请使用%(good_type)s"
+
+#, python-format
 msgid ""
 "The metadata for this location will not work with this module %(module)s. "
 "%(reason)s."
@@ -3360,6 +3592,9 @@ msgstr "这个定位的元数据与模块%(module)s不能正常工作。%(reason)s。"
 msgid "The method %(method_name)s is not implemented."
 msgstr "方法%(method_name)s没有实现。"

+msgid "The method specified is not allowed for this resource."
+msgstr "指定的方法无权使用在这个资源上。"
+
 #, python-format
 msgid "The module %(module)s is misconfigured: %(reason)s."
 msgstr "模块%(module)s配置错误:%(reason)s。"
@@ -3381,17 +3616,39 @@ msgid ""
 msgstr ""
 "网络范围不够大,无法容纳 %(num_networks)s 个网络。网络大小为 %(network_size)s"

+msgid "The networks quota is disabled"
+msgstr "网络配额被禁用"
+
 #, python-format
 msgid "The number of defined ports: %(ports)d is over the limit: %(quota)d"
 msgstr "定义的端口数量:%(ports)d 超过限制:%(quota)d"

+#, python-format
+msgid ""
+"The number of tags exceeded the per-server limit %(max)d. The number of tags "
+"in request is %(count)d."
+msgstr "标签数量超过了保护限制%(max)d。请求的标签数量是%(count)d。"
+
+#, python-format
+msgid "The number of tags exceeded the per-server limit %d"
+msgstr "标签数量超过了保护限制,上限数量为%d"
+
 msgid "The only partition should be partition 1."
 msgstr "唯一的分区应该是分区1。"

 #, python-format
+msgid ""
+"The property 'numa_nodes' cannot be '%(nodes)s'. It must be a number greater "
+"than 0"
+msgstr " 'numa_nodes' 的属性不能是'%(nodes)s'。它必须是一个比0大的数"
+
+#, python-format
 msgid "The provided RNG device path: (%(path)s) is not present on the host."
 msgstr "主机上不存在提供的RNG设备路径:(%(path)s)。"

+msgid "The regex for server name is incorrect"
+msgstr "服务器名称的正则表达式不正确"
+
 msgid "The request body can't be empty"
 msgstr "请求题不能为空"

@@ -3583,6 +3840,10 @@ msgid "UUID is required to delete Neutron Networks"
 msgstr "删除Neutron网络需要UUID"

 #, python-format
+msgid "Unable to allocate inventory: %(error)s"
+msgstr "无法确定存量,详情如下: %(error)s"
+
+#, python-format
 msgid ""
 "Unable to associate floating IP %(address)s to any fixed IPs for instance "
 "%(id)s. Instance has no fixed IPv4 addresses to associate."
@@ -3601,6 +3862,10 @@ msgstr ""
 msgid "Unable to authenticate Ironic client."
 msgstr "不能认证Ironic客户端。"

+#, python-format
+msgid "Unable to automatically allocate a network for project %(project_id)s"
+msgstr "无法为项目%(project_id)s自动分配一个网络"
+
 msgid ""
 "Unable to claim IP for VPN instances, ensure it isn't running, and try again "
 "in a few minutes"
@@ -3619,6 +3884,14 @@ msgid "Unable to convert image to raw: %(exp)s"
 msgstr "无法将镜像转换为原始格式:%(exp)s"

 #, python-format
+msgid ""
+"Unable to create allocation for '%(resource_class)s' on resource provider "
+"'%(resource_provider)s'. The requested amount would exceed the capacity."
+msgstr ""
+"无法在资源提供者'%(resource_provider)s'上分配'%(resource_class)s'。请求的数量"
+"已经超过可用的总量。"
+
+#, python-format
 msgid "Unable to delete system group '%s'"
 msgstr "无法删除系统组“%s”"

@@ -3972,6 +4245,11 @@ msgid ""
 msgstr ""
 "API不支持版本%(req_ver)s。小版本号是%(min_ver)s,大版本号是%(max_ver)s。"

+#, python-format
+msgid ""
+"Version of %(name)s %(min_ver)s %(max_ver)s intersects with another versions."
+msgstr "%(name)s %(min_ver)s %(max_ver)s的版本会取代其它版本。"
+
 msgid "Virtual Interface creation failed"
 msgstr "虚拟接口创建失败"

@@ -4140,6 +4418,9 @@ msgstr "没有浮动 IP。"
 msgid "admin password can't be changed on existing disk"
 msgstr "无法在现有磁盘上更改管理员密码"

+msgid "admin required"
+msgstr "需要管理员身份"
+
 msgid "aggregate deleted"
 msgstr "删除的聚合"

@@ -4164,6 +4445,9 @@ msgstr "连接信息:%s"
 msgid "connecting to: %(host)s:%(port)s"
 msgstr "连接到:%(host)s:%(port)s"

+msgid "content-type header required"
+msgstr "需要content-type头文件信息"
+
 #, python-format
 msgid "destination is %(target_cell)s but routing_path is %(routing_path)s"
 msgstr "目标为 %(target_cell)s,但是 routing_path 为 %(routing_path)s"
@@ -4293,9 +4577,6 @@ msgstr "libguestfs安装了,但是不可用(%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "libguestfs没有安装 (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "当请求blockjob 信息时,libvirt出错"
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "没有找到标记 [%s]"
diff --git a/nova/locale/zh_TW/LC_MESSAGES/nova.po b/nova/locale/zh_TW/LC_MESSAGES/nova.po
index 3913043..c1fc35c 100644
--- a/nova/locale/zh_TW/LC_MESSAGES/nova.po
+++ b/nova/locale/zh_TW/LC_MESSAGES/nova.po
@@ -10,9 +10,9 @@
 # Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
 msgid ""
 msgstr ""
-"Project-Id-Version: nova 14.0.2.dev16\n"
+"Project-Id-Version: nova 14.0.4.dev59\n"
 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
-"POT-Creation-Date: 2016-10-23 09:20+0000\n"
+"POT-Creation-Date: 2017-02-18 03:18+0000\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
@@ -4276,9 +4276,6 @@ msgstr "libguestfs 已安裝,但卻無法使用 (%s)"
 msgid "libguestfs is not installed (%s)"
 msgstr "未安裝 libguestfs (%s)"

-msgid "libvirt error while requesting blockjob info."
-msgstr "要求 blockjob 資訊時發生 libVirt 錯誤。"
-
 #, python-format
 msgid "marker [%s] not found"
 msgstr "找不到標記 [%s]"
diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index 5c41c87..9eac724 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -1284,7 +1284,7 @@ class API(base_api.NetworkAPI):
 return network_model.NetworkInfo.hydrate(nw_info)

 def _gather_port_ids_and_networks(self, context, instance, networks=None,
- port_ids=None):
+ port_ids=None, neutron=None):
 """Return an instance's complete list of port_ids and networks."""

 if ((networks is None and port_ids is not None) or
@@ -1303,7 +1303,7 @@ class API(base_api.NetworkAPI):
 if networks is None:
 networks = self._get_available_networks(context,
 instance.project_id,
- net_ids)
+ net_ids, neutron)
 # an interface was added/removed from instance.
 else:

@@ -2050,8 +2050,8 @@ class API(base_api.NetworkAPI):
 network_IPs.append(fixed)
 return network_IPs

- def _nw_info_get_subnets(self, context, port, network_IPs):
- subnets = self._get_subnets_from_port(context, port)
+ def _nw_info_get_subnets(self, context, port, network_IPs, client=None):
+ subnets = self._get_subnets_from_port(context, port, client)
 for subnet in subnets:
 subnet['ips'] = [fixed_ip for fixed_ip in network_IPs
 if fixed_ip.is_in_subnet(subnet)]
@@ -2168,7 +2168,7 @@ class API(base_api.NetworkAPI):
 current_neutron_ports = data.get('ports', [])
 nw_info_refresh = networks is None and port_ids is None
 networks, port_ids = self._gather_port_ids_and_networks(
- context, instance, networks, port_ids)
+ context, instance, networks, port_ids, client)
 nw_info = network_model.NetworkInfo()

 if preexisting_port_ids is None:
@@ -2193,7 +2193,7 @@ class API(base_api.NetworkAPI):
 current_neutron_port)
 subnets = self._nw_info_get_subnets(context,
 current_neutron_port,
- network_IPs)
+ network_IPs, client)

 devname = "tap" + current_neutron_port['id']
 devname = devname[:network_model.NIC_NAME_LEN]
@@ -2226,7 +2226,7 @@ class API(base_api.NetworkAPI):

 return nw_info


- def _get_subnets_from_port(self, context, port):
+ def _get_subnets_from_port(self, context, port, client=None):
 """Return the subnets for a given port."""

 fixed_ips = port['fixed_ips']
@@ -2237,8 +2237,10 @@ class API(base_api.NetworkAPI):
 # related to the port. To avoid this, the method returns here.
 if not fixed_ips:
 return []
+ if not client:
+ client = get_client(context)
 search_opts = {'id': [ip['subnet_id'] for ip in fixed_ips]}
- data = get_client(context).list_subnets(**search_opts)
+ data = client.list_subnets(**search_opts)
 ipam_subnets = data.get('subnets', [])
 subnets = []

@@ -2252,7 +2254,7 @@ class API(base_api.NetworkAPI):
 # attempt to populate DHCP server field
 search_opts = {'network_id': subnet['network_id'],
 'device_owner': 'network:dhcp'}
- data = get_client(context).list_ports(**search_opts)
+ data = client.list_ports(**search_opts)
 dhcp_ports = data.get('ports', [])
 for p in dhcp_ports:
 for ip_pair in p['fixed_ips']:
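Every neutronv2 hunk threads one client object through the nw_info helpers instead of calling get_client(context) once per subnet and DHCP-port lookup, so the caller controls which authenticated client (and token) services the whole refresh. The core of the change, reduced to one helper (keyword defaults keep existing callers working):

    def _get_subnets_from_port(self, context, port, client=None):
        # Fall back to a fresh client only when the caller did not pass
        # one; the refresh path now shares a single client throughout.
        if not client:
            client = get_client(context)
        search_opts = {'id': [ip['subnet_id'] for ip in port['fixed_ips']]}
        return client.list_subnets(**search_opts).get('subnets', [])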
1394diff --git a/nova/objects/resource_provider.py b/nova/objects/resource_provider.py
1395index 2e02ede..3036a4f 100644
1396--- a/nova/objects/resource_provider.py
1397+++ b/nova/objects/resource_provider.py
1398@@ -657,9 +657,14 @@ def _check_capacity_exceeded(conn, allocs):
1399 the inventories involved having their capacity exceeded.
1400
1401 Raises an InvalidAllocationCapacityExceeded exception if any inventory
1402- would be exhausted by the allocation. If no inventories would be exceeded
1403- by the allocation, the function returns a list of `ResourceProvider`
1404- objects that contain the generation at the time of the check.
1405+ would be exhausted by the allocation. Raises an
1406+ InvalidAllocationConstraintsViolated exception if any of the `step_size`,
1407+ `min_unit` or `max_unit` constraints in an inventory will be violated
1408+ by any one of the allocations.
1409+
1410+ If no inventories would be exceeded or violated by the allocations, the
1411+ function returns a list of `ResourceProvider` objects that contain the
1412+ generation at the time of the check.
1413
1414 :param conn: SQLalchemy Connection object to use
1415 :param allocs: List of `Allocation` objects to check
1416@@ -718,6 +723,9 @@ def _check_capacity_exceeded(conn, allocs):
1417 _INV_TBL.c.total,
1418 _INV_TBL.c.reserved,
1419 _INV_TBL.c.allocation_ratio,
1420+ _INV_TBL.c.min_unit,
1421+ _INV_TBL.c.max_unit,
1422+ _INV_TBL.c.step_size,
1423 usage.c.used,
1424 ]
1425
1426@@ -749,6 +757,28 @@ def _check_capacity_exceeded(conn, allocs):
1427 usage = usage_map[key]
1428 amount_needed = alloc.used
1429 allocation_ratio = usage['allocation_ratio']
1430+ min_unit = usage['min_unit']
1431+ max_unit = usage['max_unit']
1432+ step_size = usage['step_size']
1433+
1434+ # check min_unit, max_unit, step_size
1435+ if (amount_needed < min_unit or amount_needed > max_unit or
1436+ amount_needed % step_size != 0):
1437+ LOG.warning(
1438+ _LW("Allocation for %(rc)s on resource provider %(rp)s "
1439+ "violates min_unit, max_unit, or step_size. "
1440+ "Requested: %(requested)s, min_unit: %(min_unit)s, "
1441+ "max_unit: %(max_unit)s, step_size: %(step_size)s"),
1442+ {'rc': alloc.resource_class,
1443+ 'rp': rp_uuid,
1444+ 'requested': amount_needed,
1445+ 'min_unit': min_unit,
1446+ 'max_unit': max_unit,
1447+ 'step_size': step_size})
1448+ raise exception.InvalidAllocationConstraintsViolated(
1449+ resource_class=alloc.resource_class,
1450+ resource_provider=rp_uuid)
1451+
1452 # usage["used"] can be returned as None
1453 used = usage['used'] or 0
1454 capacity = (usage['total'] - usage['reserved']) * allocation_ratio
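The new constraint check is self-contained enough to restate on its own. A runnable sketch of just the unit-constraint test added to `_check_capacity_exceeded` (the real code also checks capacity and raises nova exceptions):

    def violates_unit_constraints(amount, min_unit, max_unit, step_size):
        return (amount < min_unit or
                amount > max_unit or
                amount % step_size != 0)

    # Example DISK_GB inventory: min_unit=5, max_unit=100, step_size=5
    assert violates_unit_constraints(4, 5, 100, 5)       # below min_unit
    assert violates_unit_constraints(101, 5, 100, 5)     # above max_unit
    assert violates_unit_constraints(12, 5, 100, 5)      # 12 % 5 != 0
    assert not violates_unit_constraints(25, 5, 100, 5)  # allowed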
1455diff --git a/nova/scheduler/client/report.py b/nova/scheduler/client/report.py
1456index 141ec37..dbfecdc 100644
1457--- a/nova/scheduler/client/report.py
1458+++ b/nova/scheduler/client/report.py
1459@@ -14,6 +14,7 @@
1460 # under the License.
1461
1462 import functools
1463+import re
1464 import time
1465
1466 from keystoneauth1 import exceptions as ks_exc
1467@@ -32,6 +33,8 @@ LOG = logging.getLogger(__name__)
1468 VCPU = fields.ResourceClass.VCPU
1469 MEMORY_MB = fields.ResourceClass.MEMORY_MB
1470 DISK_GB = fields.ResourceClass.DISK_GB
1471+_RE_INV_IN_USE = re.compile("Inventory for (.+) on resource provider "
1472+ "(.+) in use")
1473 WARN_EVERY = 10
1474
1475
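The compiled `_RE_INV_IN_USE` pattern is consumed later in this diff by `_extract_inventory_in_use`. A quick runnable check of what it captures; the response body below is illustrative, not a verbatim placement error:

    import re
    _RE_INV_IN_USE = re.compile("Inventory for (.+) on resource provider "
                                "(.+) in use")
    body = ("update conflict: Inventory for VCPU,MEMORY_MB on resource "
            "provider 4242-fake-rp in use")
    m = _RE_INV_IN_USE.search(body)
    print(m.group(1))  # -> VCPU,MEMORY_MB (the in-use resource classes)
    print(m.group(2))  # -> 4242-fake-rp (the provider)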
1476@@ -74,6 +77,80 @@ def safe_connect(f):
1477 return wrapper
1478
1479
1480+def _compute_node_to_inventory_dict(compute_node):
1481+ """Given a supplied `objects.ComputeNode` object, return a dict, keyed
1482+ """Given an `objects.ComputeNode` object, return a dict, keyed
1483+ by resource class, of inventory information.
1484+ :param compute_node: `objects.ComputeNode` object to translate
1485+ """
1486+ result = {}
1487+
1488+ # NOTE(jaypipes): Ironic virt driver will return 0 values for vcpus,
1489+ # memory_mb and disk_gb if the Ironic node is not available/operable
1490+ if compute_node.vcpus > 0:
1491+ result[VCPU] = {
1492+ 'total': compute_node.vcpus,
1493+ 'reserved': 0,
1494+ 'min_unit': 1,
1495+ 'max_unit': compute_node.vcpus,
1496+ 'step_size': 1,
1497+ 'allocation_ratio': compute_node.cpu_allocation_ratio,
1498+ }
1499+ if compute_node.memory_mb > 0:
1500+ result[MEMORY_MB] = {
1501+ 'total': compute_node.memory_mb,
1502+ 'reserved': CONF.reserved_host_memory_mb,
1503+ 'min_unit': 1,
1504+ 'max_unit': compute_node.memory_mb,
1505+ 'step_size': 1,
1506+ 'allocation_ratio': compute_node.ram_allocation_ratio,
1507+ }
1508+ if compute_node.local_gb > 0:
1509+ result[DISK_GB] = {
1510+ 'total': compute_node.local_gb,
1511+ 'reserved': CONF.reserved_host_disk_mb * 1024,
1512+ 'min_unit': 1,
1513+ 'max_unit': compute_node.local_gb,
1514+ 'step_size': 1,
1515+ 'allocation_ratio': compute_node.disk_allocation_ratio,
1516+ }
1517+ return result
1518+
1519+
1520+def _instance_to_allocations_dict(instance):
1521+ """Given an `objects.Instance` object, return a dict, keyed by resource
1522+ class of the amount used by the instance.
1523+ class, of the amounts used by the instance.
1524+ :param instance: `objects.Instance` object to translate
1525+ """
1526+ # NOTE(danms): Boot-from-volume instances consume no local disk
1527+ is_bfv = compute_utils.is_volume_backed_instance(instance._context,
1528+ instance)
1529+ disk = ((0 if is_bfv else instance.flavor.root_gb) +
1530+ instance.flavor.swap +
1531+ instance.flavor.ephemeral_gb)
1532+ alloc_dict = {
1533+ MEMORY_MB: instance.flavor.memory_mb,
1534+ VCPU: instance.flavor.vcpus,
1535+ DISK_GB: disk,
1536+ }
1537+ # Remove any zero allocations.
1538+ return {key: val for key, val in alloc_dict.items() if val}
1539+
1540+
1541+def _extract_inventory_in_use(body):
1542+ """Given an HTTP response body, extract the resource classes that were
1543+ still in use when we tried to delete inventory.
1544+
1545+ :returns: String of resource classes or None if there was no InventoryInUse
1546+ error in the response body.
1547+ """
1548+ match = _RE_INV_IN_USE.search(body)
1549+ if match:
1550+ return match.group(1)
1551+ return None
1552+
1553+
1554 class SchedulerReportClient(object):
1555 """Client class for updating the scheduler."""
1556
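Of the three module-level helpers added above, `_instance_to_allocations_dict` carries the most logic: boot-from-volume instances consume no local disk, and zero-valued entries are filtered out. A runnable condensation with plain dicts standing in for `objects.Instance` (illustrative only):

    def instance_to_allocations(flavor, is_volume_backed):
        # Boot-from-volume instances consume no local disk.
        disk = ((0 if is_volume_backed else flavor['root_gb']) +
                flavor['swap'] + flavor['ephemeral_gb'])
        alloc = {
            'MEMORY_MB': flavor['memory_mb'],
            'VCPU': flavor['vcpus'],
            'DISK_GB': disk,
        }
        # Zero-valued entries are dropped so placement never records an
        # allocation of zero (e.g. DISK_GB for a volume-backed instance).
        return {rc: amount for rc, amount in alloc.items() if amount}

    flavor = {'memory_mb': 2048, 'vcpus': 2, 'root_gb': 20,
              'swap': 0, 'ephemeral_gb': 0}
    print(instance_to_allocations(flavor, is_volume_backed=True))
    # -> {'MEMORY_MB': 2048, 'VCPU': 2}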
1557@@ -174,7 +251,7 @@ class SchedulerReportClient(object):
1558 return objects.ResourceProvider(
1559 uuid=uuid,
1560 name=name,
1561- generation=1,
1562+ generation=0,
1563 )
1564 elif resp.status_code == 409:
1565 # Another thread concurrently created a resource provider with the
1566@@ -225,89 +302,71 @@ class SchedulerReportClient(object):
1567 self._resource_providers[uuid] = rp
1568 return rp
1569
1570- def _compute_node_inventory(self, compute_node):
1571- inventories = {
1572- 'VCPU': {
1573- 'total': compute_node.vcpus,
1574- 'reserved': 0,
1575- 'min_unit': 1,
1576- 'max_unit': 1,
1577- 'step_size': 1,
1578- 'allocation_ratio': compute_node.cpu_allocation_ratio,
1579- },
1580- 'MEMORY_MB': {
1581- 'total': compute_node.memory_mb,
1582- 'reserved': CONF.reserved_host_memory_mb,
1583- 'min_unit': 1,
1584- 'max_unit': 1,
1585- 'step_size': 1,
1586- 'allocation_ratio': compute_node.ram_allocation_ratio,
1587- },
1588- 'DISK_GB': {
1589- 'total': compute_node.local_gb,
1590- 'reserved': CONF.reserved_host_disk_mb * 1024,
1591- 'min_unit': 1,
1592- 'max_unit': 1,
1593- 'step_size': 1,
1594- 'allocation_ratio': compute_node.disk_allocation_ratio,
1595- },
1596- }
1597- data = {
1598- 'inventories': inventories,
1599- }
1600- return data
1601-
1602- def _get_inventory(self, compute_node):
1603- url = '/resource_providers/%s/inventories' % compute_node.uuid
1604+ def _get_inventory(self, rp_uuid):
1605+ url = '/resource_providers/%s/inventories' % rp_uuid
1606 result = self.get(url)
1607 if not result:
1608 return {'inventories': {}}
1609 return result.json()
1610
1611- def _update_inventory_attempt(self, compute_node):
1612- """Update the inventory for this compute node if needed.
1613-
1614- :param compute_node: The objects.ComputeNode for the operation
1615- :returns: True if the inventory was updated (or did not need to be),
1616- False otherwise.
1617+ def _get_inventory_and_update_provider_generation(self, rp_uuid):
1618+ """Helper method that retrieves the current inventory for the supplied
1619+ resource provider according to the placement API. If the cached
1620+ generation of the resource provider is not the same as the generation
1621+ returned from the placement API, we update the cached generation.
1622 """
1623- data = self._compute_node_inventory(compute_node)
1624- curr = self._get_inventory(compute_node)
1625+ curr = self._get_inventory(rp_uuid)
1626
1627 # Update our generation immediately, if possible. Even if there
1628 # are no inventories we should always have a generation but let's
1629 # be careful.
1630 server_gen = curr.get('resource_provider_generation')
1631 if server_gen:
1632- my_rp = self._resource_providers[compute_node.uuid]
1633+ my_rp = self._resource_providers[rp_uuid]
1634 if server_gen != my_rp.generation:
1635 LOG.debug('Updating our resource provider generation '
1636 'from %(old)i to %(new)i',
1637 {'old': my_rp.generation,
1638 'new': server_gen})
1639 my_rp.generation = server_gen
1640+ return curr
1641+
1642+ def _update_inventory_attempt(self, rp_uuid, inv_data):
1643+ """Update the inventory for this resource provider if needed.
1644+
1645+ :param rp_uuid: The resource provider UUID for the operation
1646+ :param inv_data: The new inventory for the resource provider
1647+ :returns: True if the inventory was updated (or did not need to be),
1648+ False otherwise.
1649+ """
1650+ curr = self._get_inventory_and_update_provider_generation(rp_uuid)
1651
1652 # Check to see if we need to update placement's view
1653- if data['inventories'] == curr.get('inventories', {}):
1654+ if inv_data == curr.get('inventories', {}):
1655 return True
1656
1657- data['resource_provider_generation'] = (
1658- self._resource_providers[compute_node.uuid].generation)
1659- url = '/resource_providers/%s/inventories' % compute_node.uuid
1660- result = self.put(url, data)
1661+ cur_rp_gen = self._resource_providers[rp_uuid].generation
1662+ payload = {
1663+ 'resource_provider_generation': cur_rp_gen,
1664+ 'inventories': inv_data,
1665+ }
1666+ url = '/resource_providers/%s/inventories' % rp_uuid
1667+ result = self.put(url, payload)
1668 if result.status_code == 409:
1669 LOG.info(_LI('Inventory update conflict for %s'),
1670- compute_node.uuid)
1671+ rp_uuid)
1672 # Invalidate our cache and re-fetch the resource provider
1673 # to be sure to get the latest generation.
1674- del self._resource_providers[compute_node.uuid]
1675- self._ensure_resource_provider(compute_node.uuid,
1676- compute_node.hypervisor_hostname)
1677+ del self._resource_providers[rp_uuid]
1678+ # NOTE(jaypipes): We don't need to pass a name parameter to
1679+ # _ensure_resource_provider() because we know the resource provider
1680+ # record already exists. We're just reloading the record here.
1681+ self._ensure_resource_provider(rp_uuid)
1682 return False
1683 elif not result:
1684- LOG.warning(_LW('Failed to update inventory for '
1685+ LOG.warning(_LW('Failed to update inventory for resource provider '
1686 '%(uuid)s: %(status)i %(text)s'),
1687- {'uuid': compute_node.uuid,
1688+ {'uuid': rp_uuid,
1689 'status': result.status_code,
1690 'text': result.text})
1691 return False
1692@@ -315,9 +374,9 @@ class SchedulerReportClient(object):
1693 if result.status_code != 200:
1694 LOG.info(
1695 _LI('Received unexpected response code %(code)i while '
1696- 'trying to update inventory for compute node %(uuid)s'
1697+ 'trying to update inventory for resource provider %(uuid)s'
1698 ': %(text)s'),
1699- {'uuid': compute_node.uuid,
1700+ {'uuid': rp_uuid,
1701 'code': result.status_code,
1702 'text': result.text})
1703 return False
1704@@ -326,15 +385,15 @@ class SchedulerReportClient(object):
1705 updated_inventories_result = result.json()
1706 new_gen = updated_inventories_result['resource_provider_generation']
1707
1708- self._resource_providers[compute_node.uuid].generation = new_gen
1709+ self._resource_providers[rp_uuid].generation = new_gen
1710 LOG.debug('Updated inventory for %s at generation %i' % (
1711- compute_node.uuid, new_gen))
1712+ rp_uuid, new_gen))
1713 return True
1714
1715 @safe_connect
1716- def _update_inventory(self, compute_node):
1717+ def _update_inventory(self, rp_uuid, inv_data):
1718 for attempt in (1, 2, 3):
1719- if compute_node.uuid not in self._resource_providers:
1720+ if rp_uuid not in self._resource_providers:
1721 # NOTE(danms): Either we failed to fetch/create the RP
1722 # on our first attempt, or a previous attempt had to
1723 # invalidate the cache, and we were unable to refresh
1724@@ -342,11 +401,71 @@ class SchedulerReportClient(object):
1725 LOG.warning(_LW(
1726 'Unable to refresh my resource provider record'))
1727 return False
1728- if self._update_inventory_attempt(compute_node):
1729+ if self._update_inventory_attempt(rp_uuid, inv_data):
1730 return True
1731 time.sleep(1)
1732 return False
1733
1734+ @safe_connect
1735+ def _delete_inventory(self, rp_uuid):
1736+ """Deletes all inventory records for a resource provider with the
1737+ supplied UUID.
1738+ """
1739+ curr = self._get_inventory_and_update_provider_generation(rp_uuid)
1740+
1741+ # Check to see if we need to update placement's view
1742+ if not curr.get('inventories', {}):
1743+ msg = "No inventory to delete from resource provider %s."
1744+ LOG.debug(msg, rp_uuid)
1745+ return
1746+
1747+ msg = _LI("Compute node %s reported no inventory but previous "
1748+ "inventory was detected. Deleting existing inventory "
1749+ "records.")
1750+ LOG.info(msg, rp_uuid)
1751+
1752+ url = '/resource_providers/%s/inventories' % rp_uuid
1753+ cur_rp_gen = self._resource_providers[rp_uuid].generation
1754+ payload = {
1755+ 'resource_provider_generation': cur_rp_gen,
1756+ 'inventories': {},
1757+ }
1758+ r = self.put(url, payload)
1759+ if r.status_code == 200:
1760+ # Update our view of the generation for next time
1761+ updated_inv = r.json()
1762+ new_gen = updated_inv['resource_provider_generation']
1763+
1764+ self._resource_providers[rp_uuid].generation = new_gen
1765+ msg_args = {
1766+ 'rp_uuid': rp_uuid,
1767+ 'generation': new_gen,
1768+ }
1769+ LOG.info(_LI('Deleted all inventory for resource provider '
1770+ '%(rp_uuid)s at generation %(generation)i'),
1771+ msg_args)
1772+ return
1773+ elif r.status_code == 409:
1774+ rc_str = _extract_inventory_in_use(r.text)
1775+ if rc_str is not None:
1776+ msg = _LW("We cannot delete inventory %(rc_str)s for resource "
1777+ "provider %(rp_uuid)s because the inventory is "
1778+ "in use.")
1779+ msg_args = {
1780+ 'rp_uuid': rp_uuid,
1781+ 'rc_str': rc_str,
1782+ }
1783+ LOG.warning(msg, msg_args)
1784+ return
1785+
1786+ msg = _LE("Failed to delete inventory for resource provider "
1787+ "%(rp_uuid)s. Got error response: %(err)s")
1788+ msg_args = {
1789+ 'rp_uuid': rp_uuid,
1790+ 'err': r.text,
1791+ }
1792+ LOG.error(msg, msg_args)
1793+
1794 def update_resource_stats(self, compute_node):
1795 """Creates or updates stats for the supplied compute node.
1796
1797@@ -355,39 +474,30 @@ class SchedulerReportClient(object):
1798 compute_node.save()
1799 self._ensure_resource_provider(compute_node.uuid,
1800 compute_node.hypervisor_hostname)
1801- self._update_inventory(compute_node)
1802-
1803- def _allocations(self, instance):
1804- # NOTE(danms): Boot-from-volume instances consume no local disk
1805- is_bfv = compute_utils.is_volume_backed_instance(instance._context,
1806- instance)
1807- disk = ((0 if is_bfv else instance.flavor.root_gb) +
1808- instance.flavor.swap +
1809- instance.flavor.ephemeral_gb)
1810- return {
1811- MEMORY_MB: instance.flavor.memory_mb,
1812- VCPU: instance.flavor.vcpus,
1813- DISK_GB: disk,
1814- }
1815+ inv_data = _compute_node_to_inventory_dict(compute_node)
1816+ if inv_data:
1817+ self._update_inventory(compute_node.uuid, inv_data)
1818+ else:
1819+ self._delete_inventory(compute_node.uuid)
1820
1821- def _get_allocations_for_instance(self, compute_node, instance):
1822+ def _get_allocations_for_instance(self, rp_uuid, instance):
1823 url = '/allocations/%s' % instance.uuid
1824 resp = self.get(url)
1825 if not resp:
1826 return {}
1827 else:
1828 # NOTE(cdent): This trims to just the allocations being
1829- # used on this compute node. In the future when there
1830+ # used on this resource provider. In the future when there
1831 # are shared resources there might be other providers.
1832 return resp.json()['allocations'].get(
1833- compute_node.uuid, {}).get('resources', {})
1834+ rp_uuid, {}).get('resources', {})
1835
1836 @safe_connect
1837- def _allocate_for_instance(self, compute_node, instance):
1838+ def _allocate_for_instance(self, rp_uuid, instance):
1839 url = '/allocations/%s' % instance.uuid
1840
1841- my_allocations = self._allocations(instance)
1842- current_allocations = self._get_allocations_for_instance(compute_node,
1843+ my_allocations = _instance_to_allocations_dict(instance)
1844+ current_allocations = self._get_allocations_for_instance(rp_uuid,
1845 instance)
1846 if current_allocations == my_allocations:
1847 allocstr = ','.join(['%s=%s' % (k, v)
1848@@ -400,7 +510,7 @@ class SchedulerReportClient(object):
1849 'allocations': [
1850 {
1851 'resource_provider': {
1852- 'uuid': compute_node.uuid,
1853+ 'uuid': rp_uuid,
1854 },
1855 'resources': my_allocations,
1856 },
1857@@ -438,13 +548,13 @@ class SchedulerReportClient(object):
1858
1859 def update_instance_allocation(self, compute_node, instance, sign):
1860 if sign > 0:
1861- self._allocate_for_instance(compute_node, instance)
1862+ self._allocate_for_instance(compute_node.uuid, instance)
1863 else:
1864 self._delete_allocation_for_instance(instance.uuid)
1865
1866 @safe_connect
1867- def _get_allocations(self, compute_node):
1868- url = '/resource_providers/%s/allocations' % compute_node.uuid
1869+ def _get_allocations(self, rp_uuid):
1870+ url = '/resource_providers/%s/allocations' % rp_uuid
1871 resp = self.get(url)
1872 if not resp:
1873 return {}
1874@@ -452,7 +562,7 @@ class SchedulerReportClient(object):
1875 return resp.json()['allocations']
1876
1877 def remove_deleted_instances(self, compute_node, instance_uuids):
1878- allocations = self._get_allocations(compute_node)
1879+ allocations = self._get_allocations(compute_node.uuid)
1880 if allocations is None:
1881 allocations = {}
1882
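The report.py rework above leans on placement's generation-based optimistic concurrency: every inventory PUT carries the generation the client last saw, and a 409 means another writer won the race. A sketch of the retry shape, assuming a `client` object exposing get/put calls and a generation cache; all names here are illustrative stand-ins for the real `SchedulerReportClient`:

    import time

    def update_inventory_with_retry(client, rp_uuid, inv_data, attempts=3):
        for _ in range(attempts):
            gen = client.get_cached_generation(rp_uuid)
            resp = client.put('/resource_providers/%s/inventories' % rp_uuid,
                              {'resource_provider_generation': gen,
                               'inventories': inv_data})
            if resp.status_code == 200:
                # Remember the new generation for the next update.
                new_gen = resp.json()['resource_provider_generation']
                client.set_cached_generation(rp_uuid, new_gen)
                return True
            if resp.status_code == 409:
                # Conflict: our cached generation is stale. Refresh the
                # provider record and try again.
                client.refresh_provider(rp_uuid)
                time.sleep(1)
                continue
            return False
        return False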
1883diff --git a/nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml b/nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml
1884index adc9c31..7b84a2d 100644
1885--- a/nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml
1886+++ b/nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml
1887@@ -263,28 +263,28 @@ tests:
1888 request_headers:
1889 content-type: application/json
1890 data:
1891- # TODO(cdent): This format is going to go out of date because
1892- # of other changes
1893 resource_provider_generation: 0
1894 inventories:
1895 VCPU:
1896 total: 32
1897+ max_unit: 32
1898 DISK_GB:
1899 total: 10
1900+ max_unit: 10
1901
1902 - name: set inventory on rp2
1903 PUT: /resource_providers/fcfa516a-abbe-45d1-8152-d5225d82e596/inventories
1904 request_headers:
1905 content-type: application/json
1906 data:
1907- # TODO(cdent): This format is going to go out of date because
1908- # of other changes
1909 resource_provider_generation: 0
1910 inventories:
1911 VCPU:
1912 total: 16
1913+ max_unit: 16
1914 DISK_GB:
1915 total: 20
1916+ max_unit: 20
1917 status: 200
1918
1919 - name: put allocations on both those providers one
1920diff --git a/nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml b/nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml
1921index 113c515..367f79a 100644
1922--- a/nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml
1923+++ b/nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml
1924@@ -43,30 +43,3 @@ tests:
1925 - name: delete that one
1926 DELETE: /resource_providers/$ENVIRON['RP_UUID']
1927 status: 204
1928-
1929-# These next three are expected to fail on many mysql
1930-# installations. It works with the local in-RAM sqlite
1931-# test database.
1932-- name: four byte utf8 smiley
1933- xfail: true
1934- POST: /resource_providers
1935- request_headers:
1936- content-type: application/json
1937- data:
1938- name: "\U0001F601"
1939- uuid: $ENVIRON['RP_UUID']
1940- status: 201
1941- response_headers:
1942- location: //resource_providers/[a-f0-9-]+/
1943-
1944-- name: get that wide resource provider
1945- xfail: true
1946- GET: $LOCATION
1947- response_json_paths:
1948- $.name: "\U0001F601"
1949-
1950-- name: query by wide name
1951- xfail: true
1952- GET: /resource_providers?name=%F0%9F%98%81
1953- response_json_paths:
1954- $.resource_providers[0].name: "\U0001F601"
1955diff --git a/nova/tests/functional/api/openstack/placement/test_report_client.py b/nova/tests/functional/api/openstack/placement/test_report_client.py
1956index 1ccb971..bdd8d09 100644
1957--- a/nova/tests/functional/api/openstack/placement/test_report_client.py
1958+++ b/nova/tests/functional/api/openstack/placement/test_report_client.py
1959@@ -140,3 +140,15 @@ class SchedulerReportClientTests(test.TestCase):
1960 usage_data = resp.json()['usages']
1961 vcpu_data = usage_data[res_class]
1962 self.assertEqual(0, vcpu_data)
1963+
1964+ # Trigger the reporting client to delete all inventory by setting
1965+ # the compute node's CPU, RAM and disk amounts to 0.
1966+ self.compute_node.vcpus = 0
1967+ self.compute_node.memory_mb = 0
1968+ self.compute_node.local_gb = 0
1969+ self.client.update_resource_stats(self.compute_node)
1970+
1971+ # Check there are no more inventory records
1972+ resp = self.client.get(inventory_url)
1973+ inventory_data = resp.json()['inventories']
1974+ self.assertEqual({}, inventory_data)
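The new assertions drive `update_resource_stats` down its delete branch: a node reporting all-zero resources yields an empty inventory dict, which is the signal to delete rather than update. A runnable condensation of that decision (totals only; the real dicts also carry reserved/min_unit/max_unit/step_size/allocation_ratio as shown earlier in this diff):

    def node_to_inventory(vcpus, memory_mb, local_gb):
        inv = {}
        if vcpus > 0:
            inv['VCPU'] = {'total': vcpus}
        if memory_mb > 0:
            inv['MEMORY_MB'] = {'total': memory_mb}
        if local_gb > 0:
            inv['DISK_GB'] = {'total': local_gb}
        return inv

    assert node_to_inventory(0, 0, 0) == {}          # -> _delete_inventory
    assert 'VCPU' in node_to_inventory(8, 2048, 40)  # -> _update_inventory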
1975diff --git a/nova/tests/functional/db/test_resource_provider.py b/nova/tests/functional/db/test_resource_provider.py
1976index 1774195..b0db2dc 100644
1977--- a/nova/tests/functional/db/test_resource_provider.py
1978+++ b/nova/tests/functional/db/test_resource_provider.py
1979@@ -639,6 +639,7 @@ class TestAllocationListCreateDelete(ResourceProviderBaseCase):
1980 If this fails, we get a KeyError at create_all()
1981 """
1982
1983+ max_unit = 10
1984 consumer_uuid = uuidsentinel.consumer
1985 consumer_uuid2 = uuidsentinel.consumer2
1986
1987@@ -657,12 +658,13 @@ class TestAllocationListCreateDelete(ResourceProviderBaseCase):
1988
1989 inv = objects.Inventory(resource_provider=rp1,
1990 resource_class=rp1_class,
1991- total=1024)
1992+ total=1024, max_unit=max_unit)
1993 inv.obj_set_defaults()
1994
1995 inv2 = objects.Inventory(resource_provider=rp1,
1996- resource_class=rp2_class,
1997- total=255, reserved=2)
1998+ resource_class=rp2_class,
1999+ total=255, reserved=2,
2000+ max_unit=max_unit)
2001 inv2.obj_set_defaults()
2002 inv_list = objects.InventoryList(objects=[inv, inv2])
2003 rp1.set_inventory(inv_list)
2004@@ -698,6 +700,7 @@ class TestAllocationListCreateDelete(ResourceProviderBaseCase):
2005 allocation_list.create_all()
2006
2007 def test_allocation_list_create(self):
2008+ max_unit = 10
2009 consumer_uuid = uuidsentinel.consumer
2010
2011 # Create two resource providers
2012@@ -753,7 +756,21 @@ class TestAllocationListCreateDelete(ResourceProviderBaseCase):
2013 inv_list = objects.InventoryList(objects=[inv])
2014 rp2.set_inventory(inv_list)
2015
2016- # Now the allocations will work.
2017+ # Now the allocations will still fail because max_unit is 1
2018+ self.assertRaises(exception.InvalidAllocationConstraintsViolated,
2019+ allocation_list.create_all)
2020+ inv1 = objects.Inventory(resource_provider=rp1,
2021+ resource_class=rp1_class,
2022+ total=1024, max_unit=max_unit)
2023+ inv1.obj_set_defaults()
2024+ rp1.set_inventory(objects.InventoryList(objects=[inv1]))
2025+ inv2 = objects.Inventory(resource_provider=rp2,
2026+ resource_class=rp2_class,
2027+ total=255, reserved=2, max_unit=max_unit)
2028+ inv2.obj_set_defaults()
2029+ rp2.set_inventory(objects.InventoryList(objects=[inv2]))
2030+
2031+ # Now we can finally allocate.
2032 allocation_list.create_all()
2033
2034 # Check that those allocations changed usage on each
2035@@ -797,6 +814,75 @@ class TestAllocationListCreateDelete(ResourceProviderBaseCase):
2036 self.assertEqual(0, rp1_usage[0].usage)
2037 self.assertEqual(0, rp2_usage[0].usage)
2038
2039+ def _make_rp_and_inventory(self, **kwargs):
2040+ # Create one resource provider and set some inventory
2041+ rp_name = uuidsentinel.rp_name
2042+ rp_uuid = uuidsentinel.rp_uuid
2043+ rp = objects.ResourceProvider(
2044+ self.context, name=rp_name, uuid=rp_uuid)
2045+ rp.create()
2046+ inv = objects.Inventory(resource_provider=rp,
2047+ total=1024, allocation_ratio=1,
2048+ reserved=0, **kwargs)
2049+ inv.obj_set_defaults()
2050+ rp.set_inventory(objects.InventoryList(objects=[inv]))
2051+ return rp
2052+
2053+ def _validate_usage(self, rp, usage):
2054+ rp_usage = objects.UsageList.get_all_by_resource_provider_uuid(
2055+ self.context, rp.uuid)
2056+ self.assertEqual(usage, rp_usage[0].usage)
2057+
2058+ def _check_create_allocations(self, inventory_kwargs,
2059+ bad_used, good_used):
2060+ consumer_uuid = uuidsentinel.consumer
2061+ rp_class = fields.ResourceClass.DISK_GB
2062+ rp = self._make_rp_and_inventory(resource_class=rp_class,
2063+ **inventory_kwargs)
2064+
2065+ # First try an amount that violates the inventory constraint under test
2066+ allocation = objects.Allocation(resource_provider=rp,
2067+ consumer_id=consumer_uuid,
2068+ resource_class=rp_class,
2069+ used=bad_used)
2070+ allocation_list = objects.AllocationList(self.context,
2071+ objects=[allocation])
2072+ self.assertRaises(exception.InvalidAllocationConstraintsViolated,
2073+ allocation_list.create_all)
2074+
2075+ # Retry with an amount that satisfies the constraint
2076+ allocation.used = good_used
2077+ allocation_list = objects.AllocationList(self.context,
2078+ objects=[allocation])
2079+ allocation_list.create_all()
2080+
2081+ # check usage
2082+ self._validate_usage(rp, allocation.used)
2083+
2084+ def test_create_all_step_size(self):
2085+ bad_used = 4
2086+ good_used = 5
2087+ inventory_kwargs = {'max_unit': 9999, 'step_size': 5}
2088+
2089+ self._check_create_allocations(inventory_kwargs,
2090+ bad_used, good_used)
2091+
2092+ def test_create_all_min_unit(self):
2093+ bad_used = 4
2094+ good_used = 5
2095+ inventory_kwargs = {'max_unit': 9999, 'min_unit': 5}
2096+
2097+ self._check_create_allocations(inventory_kwargs,
2098+ bad_used, good_used)
2099+
2100+ def test_create_all_max_unit(self):
2101+ bad_used = 5
2102+ good_used = 3
2103+ inventory_kwargs = {'max_unit': 3}
2104+
2105+ self._check_create_allocations(inventory_kwargs,
2106+ bad_used, good_used)
2107+
2108
2109 class UsageListTestCase(ResourceProviderBaseCase):
2110
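One subtlety these tests pin down: the `step_size` check is a plain modulo, so valid amounts are exact multiples of `step_size`, not offsets from `min_unit`. A runnable illustration:

    def allowed(amount, min_unit=3, max_unit=9999, step_size=5):
        return (min_unit <= amount <= max_unit) and amount % step_size == 0

    print([n for n in range(1, 21) if allowed(n)])  # -> [5, 10, 15, 20]
    # 3 and 8 satisfy min_unit but fail the modulo test, so they are
    # rejected even though they fall inside [min_unit, max_unit].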
2111diff --git a/nova/tests/unit/api/openstack/compute/test_serversV21.py b/nova/tests/unit/api/openstack/compute/test_serversV21.py
2112index a427da1..95180a5 100644
2113--- a/nova/tests/unit/api/openstack/compute/test_serversV21.py
2114+++ b/nova/tests/unit/api/openstack/compute/test_serversV21.py
2115@@ -3243,7 +3243,7 @@ class ServersControllerCreateTest(test.TestCase):
2116
2117 @mock.patch.object(compute_api.API, 'create',
2118 side_effect=exception.ImageNotAuthorized(
2119- project_id=FAKE_UUID))
2120+ image_id=FAKE_UUID))
2121 def test_create_instance_with_image_not_authorized(self,
2122 mock_create):
2123 self.assertRaises(webob.exc.HTTPBadRequest,
2124@@ -3266,6 +3266,7 @@ class ServersControllerCreateTest(test.TestCase):
2125
2126 @mock.patch.object(compute_api.API, 'create',
2127 side_effect=exception.InvalidNUMANodesNumber(
2128+ nodes='-1',
2129 details=''))
2130 def test_create_instance_raise_invalid_numa_nodes(self, mock_create):
2131 self.assertRaises(webob.exc.HTTPBadRequest,
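Both test fixes above pass the kwargs the exception's message format actually requires (`image_id` for ImageNotAuthorized, `nodes` for InvalidNUMANodesNumber). Nova exceptions build their message by %-formatting a `msg_fmt` with the supplied kwargs, so a missing key breaks message construction. A toy stub showing why; the format string is a close paraphrase of nova's, and the class here deliberately omits nova's fallback handling:

    class FakeNovaException(Exception):
        msg_fmt = "An unknown exception occurred."
        def __init__(self, **kwargs):
            super(FakeNovaException, self).__init__(self.msg_fmt % kwargs)

    class ImageNotAuthorized(FakeNovaException):
        msg_fmt = "Not authorized for image %(image_id)s."

    print(ImageNotAuthorized(image_id='abc123'))   # formats cleanly
    try:
        ImageNotAuthorized(project_id='abc123')    # wrong kwarg, as before
    except KeyError as exc:
        print('missing format key: %s' % exc)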
2132diff --git a/nova/tests/unit/compute/test_compute.py b/nova/tests/unit/compute/test_compute.py
2133index 601a4f6..02d0598 100644
2134--- a/nova/tests/unit/compute/test_compute.py
2135+++ b/nova/tests/unit/compute/test_compute.py
2136@@ -3303,6 +3303,52 @@ class ComputeTestCase(BaseTestCase):
2137 self.assertEqual(state_dict['power_state'],
2138 instances[0]['power_state'])
2139
2140+ @mock.patch('nova.image.api.API.get_all')
2141+ @mock.patch('nova.image.api.API.delete')
2142+ def test_rotate_backups(self, mock_delete, mock_get_all_images):
2143+ instance = self._create_fake_instance_obj()
2144+ instance_uuid = instance['uuid']
2145+ fake_images = [{
2146+ 'id': uuids.image_id_1,
2147+ 'name': 'fake_name_1',
2148+ 'status': 'active',
2149+ 'properties': {'kernel_id': uuids.kernel_id_1,
2150+ 'ramdisk_id': uuids.ramdisk_id_1,
2151+ 'image_type': 'backup',
2152+ 'backup_type': 'daily',
2153+ 'instance_uuid': instance_uuid},
2154+ },
2155+ {
2156+ 'id': uuids.image_id_2,
2157+ 'name': 'fake_name_2',
2158+ 'status': 'active',
2159+ 'properties': {'kernel_id': uuids.kernel_id_2,
2160+ 'ramdisk_id': uuids.ramdisk_id_2,
2161+ 'image_type': 'backup',
2162+ 'backup_type': 'daily',
2163+ 'instance_uuid': instance_uuid},
2164+ },
2165+ {
2166+ 'id': uuids.image_id_3,
2167+ 'name': 'fake_name_3',
2168+ 'status': 'active',
2169+ 'properties': {'kernel_id': uuids.kernel_id_3,
2170+ 'ramdisk_id': uuids.ramdisk_id_3,
2171+ 'image_type': 'backup',
2172+ 'backup_type': 'daily',
2173+ 'instance_uuid': instance_uuid},
2174+ }]
2175+
2176+ mock_get_all_images.return_value = fake_images
2177+
2178+ mock_delete.side_effect = (exception.ImageNotFound(
2179+ image_id=uuids.image_id_1), None)
2180+
2181+ self.compute._rotate_backups(self.context, instance=instance,
2182+ backup_type='daily',
2183+ rotation=1)
2184+ self.assertEqual(2, mock_delete.call_count)
2185+
2186 def test_console_output(self):
2187 # Make sure we can get console output from instance.
2188 instance = self._create_fake_instance_obj()
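The new test pins down that `_rotate_backups` keeps going when the image service reports a backup already gone: with three daily backups and `rotation=1`, two deletes are attempted, and the first raising ImageNotFound does not abort the second. A sketch of that tolerance; this is not nova's actual method, and the selection/ordering of excess images is elided:

    class ImageNotFound(Exception):
        """Stub for nova.exception.ImageNotFound."""

    def rotate_backups(image_api, context, excess_images):
        deleted = 0
        for image in excess_images:
            try:
                image_api.delete(context, image['id'])
            except ImageNotFound:
                # Already removed out from under us; keep rotating the rest.
                continue
            deleted += 1
        return deleted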
2189diff --git a/nova/tests/unit/compute/test_compute_mgr.py b/nova/tests/unit/compute/test_compute_mgr.py
2190index 235c46d..694f014 100755
2191--- a/nova/tests/unit/compute/test_compute_mgr.py
2192+++ b/nova/tests/unit/compute/test_compute_mgr.py
2193@@ -1883,7 +1883,7 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2194 'delete_on_termination': True,
2195 'connection_info': '{"foo": "bar"}'})
2196 comp_ret = {'save_volume_id': old_volume_id}
2197- new_info = {"foo": "bar"}
2198+ new_info = {"foo": "bar", "serial": old_volume_id}
2199 swap_volume_mock.return_value = (comp_ret, new_info)
2200 volume_connector_mock.return_value = {}
2201 update_bdm_mock.return_value = fake_bdm
2202@@ -1893,7 +1893,7 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2203 fake_instance.fake_instance_obj(self.context,
2204 **{'uuid': uuids.instance}))
2205 update_values = {'no_device': False,
2206- 'connection_info': u'{"foo": "bar"}',
2207+ 'connection_info': jsonutils.dumps(new_info),
2208 'volume_id': old_volume_id,
2209 'source_type': u'volume',
2210 'snapshot_id': None,
2211@@ -2147,7 +2147,10 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2212 objects.Instance(id=1, uuid=uuids.instance_1),
2213 objects.Instance(id=2, uuid=uuids.instance_2,
2214 info_cache=info_cache),
2215- objects.Instance(id=3, uuid=uuids.instance_3)]
2216+ objects.Instance(id=3, uuid=uuids.instance_3),
2217+ # instance_4 doesn't have info_cache set so it will be lazy-loaded
2218+ # and blow up with an InstanceNotFound error.
2219+ objects.Instance(id=4, uuid=uuids.instance_4)]
2220 events = [
2221 objects.InstanceExternalEvent(name='network-changed',
2222 tag='tag1',
2223@@ -2157,10 +2160,17 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2224 tag='2'),
2225 objects.InstanceExternalEvent(name='network-vif-plugged',
2226 instance_uuid=uuids.instance_3,
2227- tag='tag3')]
2228-
2229- # Make sure all the three events are handled despite the exceptions in
2230- # processing events 1 and 2
2231+ tag='tag3'),
2232+ objects.InstanceExternalEvent(name='network-vif-deleted',
2233+ instance_uuid=uuids.instance_4,
2234+ tag='tag4'),
2235+ ]
2236+
2237+ # Make sure all four events are handled despite the exceptions in
2238+ # processing events 1, 2, and 4.
2239+ @mock.patch.object(instances[3], 'obj_load_attr',
2240+ side_effect=exception.InstanceNotFound(
2241+ instance_id=uuids.instance_4))
2242 @mock.patch.object(manager.base_net_api,
2243 'update_instance_cache_with_nw_info')
2244 @mock.patch.object(self.compute.driver, 'detach_interface',
2245@@ -2170,7 +2180,8 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2246 instance_uuid=uuids.instance_1))
2247 @mock.patch.object(self.compute, '_process_instance_event')
2248 def do_test(_process_instance_event, get_instance_nw_info,
2249- detach_interface, update_instance_cache_with_nw_info):
2250+ detach_interface, update_instance_cache_with_nw_info,
2251+ obj_load_attr):
2252 self.compute.external_instance_event(self.context,
2253 instances, events)
2254 get_instance_nw_info.assert_called_once_with(self.context,
2255@@ -2183,6 +2194,7 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase):
2256 detach_interface.assert_called_once_with(instances[1], vif2)
2257 _process_instance_event.assert_called_once_with(instances[2],
2258 events[2])
2259+ obj_load_attr.assert_called_once_with('info_cache')
2260 do_test()
2261
2262 def test_cancel_all_events(self):
2263@@ -3970,6 +3982,11 @@ class ComputeManagerBuildInstanceTestCase(test.NoDBTestCase):
2264 self._test_build_and_run_spawn_exceptions(
2265 exception.SignatureVerificationError(reason=""))
2266
2267+ def test_build_and_run_volume_encryption_not_supported(self):
2268+ self._test_build_and_run_spawn_exceptions(
2269+ exception.VolumeEncryptionNotSupported(volume_type="fake",
2270+ volume_id=uuids.volume_id))
2271+
2272 def _test_build_and_run_spawn_exceptions(self, exc):
2273 with test.nested(
2274 mock.patch.object(self.compute.driver, 'spawn',
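The added instance_4/`network-vif-deleted` case, with `obj_load_attr` rigged to raise InstanceNotFound, checks the same property as the earlier three: a failure while handling one event must not stop the remaining events from being dispatched. The defensive shape, as a sketch (the real manager catches specific exceptions per event type rather than a bare Exception):

    def dispatch_events(handle_one, instance_event_pairs, log):
        for instance, event in instance_event_pairs:
            try:
                handle_one(instance, event)
            except Exception:
                # Log and continue; later events must still be processed.
                log.exception('handling %s for %s failed',
                              event, getattr(instance, 'uuid', '?'))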
2275diff --git a/nova/tests/unit/db/test_db_api.py b/nova/tests/unit/db/test_db_api.py
2276index a4483c4..926486a 100644
2277--- a/nova/tests/unit/db/test_db_api.py
2278+++ b/nova/tests/unit/db/test_db_api.py
2279@@ -3481,6 +3481,10 @@ class ServiceTestCase(test.TestCase, ModelsObjectComparatorMixin):
2280 self._create_service({'version': 3,
2281 'host': 'host2',
2282 'binary': 'compute'})
2283+ self._create_service({'version': 0,
2284+ 'host': 'host0',
2285+ 'binary': 'compute',
2286+ 'deleted': 1})
2287 self.assertEqual({'compute': 2},
2288 db.service_get_minimum_version(self.ctxt,
2289 ['compute']))
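The added fixture (a deleted 'compute' service at version 0) pins down that soft-deleted rows are excluded when computing the minimum service version; otherwise the stale row would drag the answer down to 0. Equivalent logic over plain rows, as a runnable sketch:

    def minimum_versions(rows, binaries):
        out = {}
        for binary in binaries:
            versions = [r['version'] for r in rows
                        if r['binary'] == binary and not r['deleted']]
            if versions:
                out[binary] = min(versions)
        return out

    rows = [{'binary': 'compute', 'version': 2, 'deleted': 0},
            {'binary': 'compute', 'version': 3, 'deleted': 0},
            {'binary': 'compute', 'version': 0, 'deleted': 1}]
    print(minimum_versions(rows, ['compute']))  # -> {'compute': 2}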
2290diff --git a/nova/tests/unit/network/test_neutronv2.py b/nova/tests/unit/network/test_neutronv2.py
2291index 69c2f5a..f2e7abc 100644
2292--- a/nova/tests/unit/network/test_neutronv2.py
2293+++ b/nova/tests/unit/network/test_neutronv2.py
2294@@ -800,11 +800,6 @@ class TestNeutronv2Base(test.TestCase):
2295
2296 class TestNeutronv2(TestNeutronv2Base):
2297
2298- def setUp(self):
2299- super(TestNeutronv2, self).setUp()
2300- neutronapi.get_client(mox.IgnoreArg()).MultipleTimes().AndReturn(
2301- self.moxed_client)
2302-
2303 def test_get_instance_nw_info_1(self):
2304 # Test to get one port in one network and subnet.
2305 neutronapi.get_client(mox.IgnoreArg(),
2306@@ -1023,6 +1018,8 @@ class TestNeutronv2(TestNeutronv2Base):
2307 api.db.instance_info_cache_update(
2308 mox.IgnoreArg(),
2309 self.instance['uuid'], mox.IgnoreArg()).AndReturn(fake_info_cache)
2310+ neutronapi.get_client(mox.IgnoreArg(), admin=True).AndReturn(
2311+ self.moxed_client)
2312 self.moxed_client.list_ports(
2313 tenant_id=self.instance['project_id'],
2314 device_id=self.instance['uuid']).AndReturn(
2315@@ -1030,9 +1027,6 @@ class TestNeutronv2(TestNeutronv2Base):
2316 self.moxed_client.list_networks(
2317 id=[self.port_data1[0]['network_id']]).AndReturn(
2318 {'networks': self.nets1})
2319- neutronapi.get_client(mox.IgnoreArg(),
2320- admin=True).MultipleTimes().AndReturn(
2321- self.moxed_client)
2322
2323 net_info_cache = []
2324 for port in self.port_data3:
2325@@ -1095,10 +1089,12 @@ class TestNeutronv2(TestNeutronv2Base):
2326
2327 def test_allocate_for_instance_1(self):
2328 # Allocate one port in one network env.
2329+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2330 self._allocate_for_instance(1)
2331
2332 def test_allocate_for_instance_2(self):
2333 # Allocate one port in two networks env.
2334+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2335 api = self._stub_allocate_for_instance(net_idx=2)
2336 self.assertRaises(exception.NetworkAmbiguous,
2337 api.allocate_for_instance,
2338@@ -1106,17 +1102,20 @@ class TestNeutronv2(TestNeutronv2Base):
2339
2340 def test_allocate_for_instance_accepts_macs_kwargs_None(self):
2341 # The macs kwarg should be accepted as None.
2342+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2343 self._allocate_for_instance(1, macs=None)
2344
2345 def test_allocate_for_instance_accepts_macs_kwargs_set(self):
2346 # The macs kwarg should be accepted, as a set, the
2347 # _allocate_for_instance helper checks that the mac is used to create a
2348 # port.
2349+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2350 self._allocate_for_instance(1, macs=set(['ab:cd:ef:01:23:45']))
2351
2352 def test_allocate_for_instance_with_mac_added_to_port(self):
2353 requested_networks = objects.NetworkRequestList(
2354 objects=[objects.NetworkRequest(port_id=uuids.portid_1)])
2355+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2356 # NOTE(johngarbutt) we override the provided mac with a new one
2357 self._allocate_for_instance(net_idx=1,
2358 requested_networks=requested_networks,
2359@@ -1127,6 +1126,7 @@ class TestNeutronv2(TestNeutronv2Base):
2360 def test_allocate_for_instance_accepts_only_portid(self):
2361 # Make sure allocate_for_instance works when only a portid is provided
2362 self._returned_nw_info = self.port_data1
2363+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2364 result = self._allocate_for_instance(
2365 requested_networks=objects.NetworkRequestList(
2366 objects=[objects.NetworkRequest(port_id=uuids.portid_1,
2367@@ -1152,6 +1152,7 @@ class TestNeutronv2(TestNeutronv2Base):
2368 objects = [
2369 objects.NetworkRequest(network_id=self.nets2[1]['id']),
2370 objects.NetworkRequest(port_id=uuids.portid_1)])
2371+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2372 api = self._stub_allocate_for_instance(
2373 net_idx=2, requested_networks=requested_networks,
2374 macs=set(['my_mac1']),
2375@@ -1173,6 +1174,7 @@ class TestNeutronv2(TestNeutronv2Base):
2376 requested_networks = objects.NetworkRequestList(
2377 objects=[objects.NetworkRequest(network_id=self.nets2[1]['id']),
2378 objects.NetworkRequest(network_id=self.nets2[0]['id'])])
2379+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2380 api = self._stub_allocate_for_instance(
2381 net_idx=2, requested_networks=requested_networks,
2382 macs=set(['my_mac2']),
2383@@ -1192,11 +1194,13 @@ class TestNeutronv2(TestNeutronv2Base):
2384 requested_networks = objects.NetworkRequestList(
2385 objects=[objects.NetworkRequest(network_id=self.nets2[1]['id']),
2386 objects.NetworkRequest(network_id=self.nets2[0]['id'])])
2387+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2388 self._allocate_for_instance(
2389 net_idx=2, requested_networks=requested_networks,
2390 macs=set(['my_mac2', 'my_mac1']))
2391
2392 def test_allocate_for_instance_without_requested_networks(self):
2393+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2394 api = self._stub_allocate_for_instance(net_idx=3)
2395 self.assertRaises(exception.NetworkAmbiguous,
2396 api.allocate_for_instance,
2397@@ -1211,6 +1215,7 @@ class TestNeutronv2(TestNeutronv2Base):
2398 objects=[objects.NetworkRequest(network_id=net['id'])
2399 for net in (self.nets3[0], self.nets3[2], self.nets3[1])])
2400 requested_networks[0].tag = 'foo'
2401+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2402 self._allocate_for_instance(net_idx=2,
2403 requested_networks=requested_networks)
2404 self.assertEqual(2, len(self._vifs_created))
2405@@ -1229,6 +1234,7 @@ class TestNeutronv2(TestNeutronv2Base):
2406 requested_networks = objects.NetworkRequestList(
2407 objects=[objects.NetworkRequest(network_id=net['id'])
2408 for net in (self.nets3[1], self.nets3[0], self.nets3[2])])
2409+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2410 self._allocate_for_instance(net_idx=3,
2411 requested_networks=requested_networks)
2412
2413@@ -1238,6 +1244,7 @@ class TestNeutronv2(TestNeutronv2Base):
2414 # able to associate the default security group to the port
2415 # requested to be created. We expect an exception to be
2416 # raised.
2417+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2418 self.assertRaises(exception.SecurityGroupCannotBeApplied,
2419 self._allocate_for_instance, net_idx=4,
2420 _break='post_list_extensions')
2421@@ -1246,6 +1253,7 @@ class TestNeutronv2(TestNeutronv2Base):
2422 requested_networks = objects.NetworkRequestList(
2423 objects=[objects.NetworkRequest(
2424 network_id=uuids.non_existent_uuid)])
2425+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2426 api = self._stub_allocate_for_instance(net_idx=9,
2427 requested_networks=requested_networks,
2428 _break='post_list_networks')
2429@@ -1259,12 +1267,14 @@ class TestNeutronv2(TestNeutronv2Base):
2430 requested_networks = objects.NetworkRequestList(
2431 objects=[objects.NetworkRequest(network_id=self.nets1[0]['id'],
2432 address='10.0.1.0')])
2433+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2434 self._allocate_for_instance(net_idx=1,
2435 requested_networks=requested_networks)
2436
2437 def test_allocate_for_instance_with_requested_networks_with_port(self):
2438 requested_networks = objects.NetworkRequestList(
2439 objects=[objects.NetworkRequest(port_id=uuids.portid_1)])
2440+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2441 self._allocate_for_instance(net_idx=1,
2442 requested_networks=requested_networks)
2443
2444@@ -1277,6 +1287,7 @@ class TestNeutronv2(TestNeutronv2Base):
2445 tenant_id=self.instance.project_id,
2446 shared=False).AndReturn(
2447 {'networks': model.NetworkInfo([])})
2448+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2449 self.moxed_client.list_networks(shared=True).AndReturn(
2450 {'networks': model.NetworkInfo([])})
2451 self.mox.ReplayAll()
2452@@ -1302,6 +1313,7 @@ class TestNeutronv2(TestNeutronv2Base):
2453 requested_networks = objects.NetworkRequestList(
2454 objects=[objects.NetworkRequest(network_id=net['id'])
2455 for net in (self.nets2[0], self.nets2[1])])
2456+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2457 self.moxed_client.list_networks(
2458 id=[uuids.my_netid1, uuids.my_netid2]).AndReturn(
2459 {'networks': self.nets2})
2460@@ -1367,6 +1379,7 @@ class TestNeutronv2(TestNeutronv2Base):
2461 requested_networks = objects.NetworkRequestList(
2462 objects=[objects.NetworkRequest(network_id=net['id'])
2463 for net in (self.nets2[0], self.nets2[1])])
2464+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2465 self.moxed_client.list_networks(
2466 id=[uuids.my_netid1, uuids.my_netid2]).AndReturn(
2467 {'networks': self.nets2})
2468@@ -1392,6 +1405,7 @@ class TestNeutronv2(TestNeutronv2Base):
2469 self.instance = fake_instance.fake_instance_obj(self.context,
2470 **self.instance)
2471 api = neutronapi.API()
2472+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2473 self.mox.StubOutWithMock(api, '_get_available_networks')
2474 # Make sure we get an empty list and then bail out of the rest
2475 # of the function
2476@@ -1413,6 +1427,7 @@ class TestNeutronv2(TestNeutronv2Base):
2477 # allocated during _that_ run.
2478 new_port = {'id': uuids.fake}
2479 self._returned_nw_info = self.port_data1 + [new_port]
2480+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2481 nw_info = self._allocate_for_instance()
2482 self.assertEqual([new_port], nw_info)
2483
2484@@ -1420,6 +1435,7 @@ class TestNeutronv2(TestNeutronv2Base):
2485 # If a port is already in use, an exception should be raised.
2486 requested_networks = objects.NetworkRequestList(
2487 objects=[objects.NetworkRequest(port_id=uuids.portid_1)])
2488+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2489 api = self._stub_allocate_for_instance(
2490 requested_networks=requested_networks,
2491 _break='pre_list_networks',
2492@@ -1432,6 +1448,7 @@ class TestNeutronv2(TestNeutronv2Base):
2493 # If a port is not found, an exception should be raised.
2494 requested_networks = objects.NetworkRequestList(
2495 objects=[objects.NetworkRequest(port_id=uuids.non_existent_uuid)])
2496+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2497 api = self._stub_allocate_for_instance(
2498 requested_networks=requested_networks,
2499 _break='pre_list_networks')
2500@@ -1443,6 +1460,7 @@ class TestNeutronv2(TestNeutronv2Base):
2501 self.tenant_id = 'invalid_id'
2502 requested_networks = objects.NetworkRequestList(
2503 objects=[objects.NetworkRequest(port_id=uuids.portid_1)])
2504+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2505 api = self._stub_allocate_for_instance(
2506 requested_networks=requested_networks,
2507 _break='pre_list_networks')
2508@@ -1456,6 +1474,7 @@ class TestNeutronv2(TestNeutronv2Base):
2509 """
2510 self.instance = fake_instance.fake_instance_obj(self.context,
2511 **self.instance)
2512+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2513 # no networks in the tenant
2514 self.moxed_client.list_networks(
2515 tenant_id=self.instance.project_id,
2516@@ -1476,6 +1495,7 @@ class TestNeutronv2(TestNeutronv2Base):
2517 """
2518 self.instance = fake_instance.fake_instance_obj(self.context,
2519 **self.instance)
2520+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2521 # network found in the tenant
2522 self.moxed_client.list_networks(
2523 tenant_id=self.instance.project_id,
2524@@ -1497,12 +1517,14 @@ class TestNeutronv2(TestNeutronv2Base):
2525 """
2526 admin_ctx = context.RequestContext('userid', uuids.my_tenant,
2527 is_admin=True)
2528+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2529 api = self._stub_allocate_for_instance(net_idx=8)
2530 api.allocate_for_instance(admin_ctx, self.instance)
2531
2532 def test_allocate_for_instance_with_external_shared_net(self):
2533 """Only one network is available, it's external and shared."""
2534 ctx = context.RequestContext('userid', uuids.my_tenant)
2535+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2536 api = self._stub_allocate_for_instance(net_idx=10)
2537 api.allocate_for_instance(ctx, self.instance)
2538
2539@@ -1529,6 +1551,7 @@ class TestNeutronv2(TestNeutronv2Base):
2540 'admin_state_up': True,
2541 'fixed_ips': [],
2542 'mac_address': 'fake_mac', })
2543+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2544 self.moxed_client.list_ports(
2545 device_id=self.instance.uuid).AndReturn(
2546 {'ports': ret_data})
2547@@ -1591,6 +1614,7 @@ class TestNeutronv2(TestNeutronv2Base):
2548 **self.instance)
2549 mock_preexisting.return_value = []
2550 port_data = self.port_data1
2551+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2552 self.moxed_client.list_ports(
2553 device_id=self.instance.uuid).AndReturn(
2554 {'ports': port_data})
2555@@ -1617,15 +1641,15 @@ class TestNeutronv2(TestNeutronv2Base):
2556 self.instance['info_cache'] = self._fake_instance_info_cache(
2557 net_info_cache, self.instance['uuid'])
2558 api = neutronapi.API()
2559- neutronapi.get_client(mox.IgnoreArg(), admin=True).AndReturn(
2560+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(
2561 self.moxed_client)
2562 self.moxed_client.list_ports(
2563 tenant_id=self.instance['project_id'],
2564 device_id=self.instance['uuid']).AndReturn(
2565 {'ports': port_data[1:]})
2566- neutronapi.get_client(mox.IgnoreArg()).MultipleTimes().AndReturn(
2567- self.moxed_client)
2568 net_ids = [port['network_id'] for port in port_data]
2569+ neutronapi.get_client(mox.IgnoreArg(), admin=True).AndReturn(
2570+ self.moxed_client)
2571 self.moxed_client.list_networks(id=net_ids).AndReturn(
2572 {'networks': nets})
2573 float_data = number == 1 and self.float_data1 or self.float_data2
2574@@ -1662,11 +1686,13 @@ class TestNeutronv2(TestNeutronv2Base):
2575
2576 def test_list_ports(self):
2577 search_opts = {'parm': 'value'}
2578+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2579 self.moxed_client.list_ports(**search_opts)
2580 self.mox.ReplayAll()
2581 neutronapi.API().list_ports(self.context, **search_opts)
2582
2583 def test_show_port(self):
2584+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2585 self.moxed_client.show_port('foo').AndReturn(
2586 {'port': self.port_data1[0]})
2587 self.mox.ReplayAll()
2588@@ -1676,6 +1702,7 @@ class TestNeutronv2(TestNeutronv2Base):
2589 requested_networks = [(uuids.my_netid1, None, None, None),
2590 (uuids.my_netid2, None, None, None)]
2591 ids = [uuids.my_netid1, uuids.my_netid2]
2592+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2593 self.moxed_client.list_networks(
2594 id=mox.SameElementsAs(ids)).AndReturn(
2595 {'networks': self.nets2})
2596@@ -1693,6 +1720,7 @@ class TestNeutronv2(TestNeutronv2Base):
2597 requested_networks = [(uuids.my_netid1, None, None, None),
2598 (uuids.my_netid2, None, None, None)]
2599 ids = [uuids.my_netid1, uuids.my_netid2]
2600+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2601 self.moxed_client.list_networks(
2602 id=mox.SameElementsAs(ids)).AndReturn(
2603 {'networks': self.nets2})
2604@@ -1705,6 +1733,7 @@ class TestNeutronv2(TestNeutronv2Base):
2605
2606 def test_validate_networks_ex_1(self):
2607 requested_networks = [(uuids.my_netid1, None, None, None)]
2608+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2609 self.moxed_client.list_networks(
2610 id=mox.SameElementsAs([uuids.my_netid1])).AndReturn(
2611 {'networks': self.nets1})
2612@@ -1726,6 +1755,7 @@ class TestNeutronv2(TestNeutronv2Base):
2613 (uuids.my_netid2, None, None, None),
2614 (uuids.my_netid3, None, None, None)]
2615 ids = [uuids.my_netid1, uuids.my_netid2, uuids.my_netid3]
2616+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2617 self.moxed_client.list_networks(
2618 id=mox.SameElementsAs(ids)).AndReturn(
2619 {'networks': self.nets1})
2620@@ -1744,7 +1774,7 @@ class TestNeutronv2(TestNeutronv2Base):
2621 objects=[objects.NetworkRequest(network_id=uuids.my_netid1),
2622 objects.NetworkRequest(network_id=uuids.my_netid1)])
2623 ids = [uuids.my_netid1, uuids.my_netid1]
2624-
2625+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2626 self.moxed_client.list_networks(
2627 id=mox.SameElementsAs(ids)).AndReturn(
2628 {'networks': self.nets1})
2629@@ -1763,6 +1793,8 @@ class TestNeutronv2(TestNeutronv2Base):
2630 requested_networks = objects.NetworkRequestList(
2631 objects=[objects.NetworkRequest(network_id=net['id'])
2632 for net in (self.nets6[0], self.nets6[1])])
2633+ neutronapi.get_client(mox.IgnoreArg()).MultipleTimes().AndReturn(
2634+ self.moxed_client)
2635 self._allocate_for_instance(net_idx=6,
2636 requested_networks=requested_networks)
2637
2638@@ -1771,6 +1803,8 @@ class TestNeutronv2(TestNeutronv2Base):
2639 requested_networks = objects.NetworkRequestList(
2640 objects=[objects.NetworkRequest(port_id=port['id'])
2641 for port in (self.port_data1[0], self.port_data3[0])])
2642+ neutronapi.get_client(mox.IgnoreArg()).MultipleTimes().AndReturn(
2643+ self.moxed_client)
2644 self._allocate_for_instance(net_idx=6,
2645 requested_networks=requested_networks)
2646
2647@@ -1781,11 +1815,14 @@ class TestNeutronv2(TestNeutronv2Base):
2648 objects.NetworkRequest(port_id=self.port_data1[0]['id']),
2649 objects.NetworkRequest(network_id=uuids.my_netid2),
2650 objects.NetworkRequest(port_id=self.port_data3[0]['id'])])
2651+ neutronapi.get_client(mox.IgnoreArg()).MultipleTimes().AndReturn(
2652+ self.moxed_client)
2653 self._allocate_for_instance(net_idx=7,
2654 requested_networks=requested_networks)
2655
2656 def test_validate_networks_not_specified(self):
2657 requested_networks = objects.NetworkRequestList(objects=[])
2658+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2659 self.moxed_client.list_networks(
2660 tenant_id=self.context.project_id,
2661 shared=False).AndReturn(
2662@@ -1809,11 +1846,10 @@ class TestNeutronv2(TestNeutronv2Base):
2663 port_id=uuids.portid_1)])
2664
2665 PortNotFound = exceptions.PortNotFoundClient()
2666+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2667 self.moxed_client.show_port(requested_networks[0].port_id).AndRaise(
2668 PortNotFound)
2669 self.mox.ReplayAll()
2670- # Expected call from setUp.
2671- neutronapi.get_client(None)
2672 api = neutronapi.API()
2673 self.assertRaises(exception.PortNotFound,
2674 api.validate_networks,
2675@@ -1830,11 +1866,10 @@ class TestNeutronv2(TestNeutronv2Base):
2676 port_id=fake_port_id)])
2677
2678 NeutronNotFound = exceptions.NeutronClientException(status_code=0)
2679+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2680 self.moxed_client.show_port(requested_networks[0].port_id).AndRaise(
2681 NeutronNotFound)
2682 self.mox.ReplayAll()
2683- # Expected call from setUp.
2684- neutronapi.get_client(None)
2685 api = neutronapi.API()
2686 exc = self.assertRaises(exception.NovaException,
2687 api.validate_networks,
2688@@ -1847,6 +1882,7 @@ class TestNeutronv2(TestNeutronv2Base):
2689 def test_validate_networks_port_in_use(self):
2690 requested_networks = objects.NetworkRequestList(
2691 objects=[objects.NetworkRequest(port_id=self.port_data3[0]['id'])])
2692+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2693 self.moxed_client.show_port(self.port_data3[0]['id']).\
2694 AndReturn({'port': self.port_data3[0]})
2695
2696@@ -1864,6 +1900,7 @@ class TestNeutronv2(TestNeutronv2Base):
2697
2698 requested_networks = objects.NetworkRequestList(
2699 objects=[objects.NetworkRequest(port_id=port_a['id'])])
2700+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2701 self.moxed_client.show_port(port_a['id']).AndReturn({'port': port_a})
2702
2703 self.mox.ReplayAll()
2704@@ -1877,6 +1914,7 @@ class TestNeutronv2(TestNeutronv2Base):
2705 requested_networks = objects.NetworkRequestList(
2706 objects=[objects.NetworkRequest(network_id='his_netid4')])
2707 ids = ['his_netid4']
2708+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2709 self.moxed_client.list_networks(
2710 id=mox.SameElementsAs(ids)).AndReturn(
2711 {'networks': self.nets4})
2712@@ -1901,6 +1939,7 @@ class TestNeutronv2(TestNeutronv2Base):
2713 requested_networks = objects.NetworkRequestList(
2714 objects=[objects.NetworkRequest(port_id=port_a['id']),
2715 objects.NetworkRequest(port_id=port_b['id'])])
2716+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2717 self.moxed_client.show_port(port_a['id']).AndReturn(
2718 {'port': port_a})
2719 self.moxed_client.show_port(port_b['id']).AndReturn(
2720@@ -1924,6 +1963,7 @@ class TestNeutronv2(TestNeutronv2Base):
2721 requested_networks = objects.NetworkRequestList(
2722 objects=[objects.NetworkRequest(port_id=port_a['id']),
2723 objects.NetworkRequest(port_id=port_b['id'])])
2724+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2725 self.moxed_client.show_port(port_a['id']).AndReturn({'port': port_a})
2726 self.moxed_client.show_port(port_b['id']).AndReturn({'port': port_b})
2727 self.mox.ReplayAll()
2728@@ -1939,6 +1979,7 @@ class TestNeutronv2(TestNeutronv2Base):
2729 objects=[objects.NetworkRequest(network_id=uuids.my_netid1),
2730 objects.NetworkRequest(network_id=uuids.my_netid2)])
2731 ids = [uuids.my_netid1, uuids.my_netid2]
2732+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2733 self.moxed_client.list_networks(
2734 id=mox.SameElementsAs(ids)).AndReturn(
2735 {'networks': self.nets2})
2736@@ -1963,6 +2004,7 @@ class TestNeutronv2(TestNeutronv2Base):
2737 requested_networks = objects.NetworkRequestList(
2738 objects=[objects.NetworkRequest(network_id=uuids.my_netid1),
2739 objects.NetworkRequest(port_id=port_b['id'])])
2740+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2741 self.moxed_client.show_port(port_b['id']).AndReturn({'port': port_b})
2742 ids = [uuids.my_netid1]
2743 self.moxed_client.list_networks(
2744@@ -1988,6 +2030,7 @@ class TestNeutronv2(TestNeutronv2Base):
2745 port_b['device_owner'] = None
2746 requested_networks = objects.NetworkRequestList(
2747 objects=[objects.NetworkRequest(port_id=port_b['id'])])
2748+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2749 self.moxed_client.show_port(port_b['id']).AndReturn({'port': port_b})
2750 self.mox.ReplayAll()
2751 api = neutronapi.API()
2752@@ -2003,6 +2046,7 @@ class TestNeutronv2(TestNeutronv2Base):
2753 objects=[objects.NetworkRequest(network_id=uuids.my_netid1),
2754 objects.NetworkRequest(network_id=uuids.my_netid2)])
2755 ids = [uuids.my_netid1, uuids.my_netid2]
2756+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2757 self.moxed_client.list_networks(
2758 id=mox.SameElementsAs(ids)).AndReturn(
2759 {'networks': self.nets2})
2760@@ -2026,6 +2070,7 @@ class TestNeutronv2(TestNeutronv2Base):
2761 objects=[objects.NetworkRequest(network_id=uuids.my_netid1),
2762 objects.NetworkRequest(network_id=uuids.my_netid2)])
2763 ids = [uuids.my_netid1, uuids.my_netid2]
2764+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2765 self.moxed_client.list_networks(
2766 id=mox.SameElementsAs(ids)).AndReturn(
2767 {'networks': self.nets2})
2768@@ -2051,6 +2096,7 @@ class TestNeutronv2(TestNeutronv2Base):
2769 requested_networks = objects.NetworkRequestList(
2770 objects=[objects.NetworkRequest(port_id=port_a['id']),
2771 objects.NetworkRequest(port_id=port_b['id'])])
2772+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2773 self.moxed_client.show_port(port_a['id']).AndReturn({'port': port_a})
2774 self.moxed_client.show_port(port_b['id']).AndReturn({'port': port_b})
2775
2776@@ -2065,6 +2111,7 @@ class TestNeutronv2(TestNeutronv2Base):
2777 if port_data is None:
2778 port_data = self.port_data2
2779 address = self.port_address
2780+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2781 self.moxed_client.list_ports(
2782 fixed_ips=MyComparator('ip_address=%s' % address)).AndReturn(
2783 {'ports': port_data})
2784@@ -2094,6 +2141,7 @@ class TestNeutronv2(TestNeutronv2Base):
2785 def _get_available_networks(self, prv_nets, pub_nets,
2786 req_ids=None, context=None):
2787 api = neutronapi.API()
2788+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2789 nets = prv_nets + pub_nets
2790 if req_ids:
2791 mox_list_params = {'id': req_ids}
2792@@ -2140,6 +2188,7 @@ class TestNeutronv2(TestNeutronv2Base):
2793 def test_get_floating_ip_pools(self):
2794 api = neutronapi.API()
2795 search_opts = {'router:external': True}
2796+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2797 self.moxed_client.list_networks(**search_opts).\
2798 AndReturn({'networks': [self.fip_pool, self.fip_pool_nova]})
2799 self.mox.ReplayAll()
2800@@ -2175,6 +2224,7 @@ class TestNeutronv2(TestNeutronv2Base):
2801 fip_id = fip_data['id']
2802 net_id = fip_data['floating_network_id']
2803 address = fip_data['floating_ip_address']
2804+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2805 if by_address:
2806 self.moxed_client.list_floatingips(floating_ip_address=address).\
2807 AndReturn({'floatingips': [fip_data]})
2808@@ -2213,6 +2263,7 @@ class TestNeutronv2(TestNeutronv2Base):
2809 def test_get_floating_ip_by_address_not_found(self):
2810 api = neutronapi.API()
2811 address = self.fip_unassociated['floating_ip_address']
2812+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2813 self.moxed_client.list_floatingips(floating_ip_address=address).\
2814 AndReturn({'floatingips': []})
2815 self.mox.ReplayAll()
2816@@ -2224,6 +2275,7 @@ class TestNeutronv2(TestNeutronv2Base):
2817 api = neutronapi.API()
2818 NeutronNotFound = exceptions.NeutronClientException(status_code=404)
2819 floating_ip_id = self.fip_unassociated['id']
2820+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2821 self.moxed_client.show_floatingip(floating_ip_id).\
2822 AndRaise(NeutronNotFound)
2823 self.mox.ReplayAll()
2824@@ -2235,6 +2287,7 @@ class TestNeutronv2(TestNeutronv2Base):
2825 api = neutronapi.API()
2826 NeutronNotFound = exceptions.NeutronClientException(status_code=0)
2827 floating_ip_id = self.fip_unassociated['id']
2828+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2829 self.moxed_client.show_floatingip(floating_ip_id).\
2830 AndRaise(NeutronNotFound)
2831 self.mox.ReplayAll()
2832@@ -2245,6 +2298,7 @@ class TestNeutronv2(TestNeutronv2Base):
2833 def test_get_floating_ip_by_address_multiple_found(self):
2834 api = neutronapi.API()
2835 address = self.fip_unassociated['floating_ip_address']
2836+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2837 self.moxed_client.list_floatingips(floating_ip_address=address).\
2838 AndReturn({'floatingips': [self.fip_unassociated] * 2})
2839 self.mox.ReplayAll()
2840@@ -2255,6 +2309,7 @@ class TestNeutronv2(TestNeutronv2Base):
2841 def test_get_floating_ips_by_project(self):
2842 api = neutronapi.API()
2843 project_id = self.context.project_id
2844+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2845 self.moxed_client.list_floatingips(tenant_id=project_id).\
2846 AndReturn({'floatingips': [self.fip_unassociated,
2847 self.fip_associated]})
2848@@ -2276,6 +2331,7 @@ class TestNeutronv2(TestNeutronv2Base):
2849 associated=False):
2850 api = neutronapi.API()
2851 address = fip_data['floating_ip_address']
2852+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2853 self.moxed_client.list_floatingips(floating_ip_address=address).\
2854 AndReturn({'floatingips': [fip_data]})
2855 if associated:
2856@@ -2304,6 +2360,7 @@ class TestNeutronv2(TestNeutronv2Base):
2857 search_opts = {'router:external': True,
2858 'fields': 'id',
2859 'name': pool_name}
2860+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2861 self.moxed_client.list_networks(**search_opts).\
2862 AndReturn({'networks': [self.fip_pool]})
2863 self.moxed_client.create_floatingip(
2864@@ -2320,6 +2377,7 @@ class TestNeutronv2(TestNeutronv2Base):
2865 search_opts = {'router:external': True,
2866 'fields': 'id',
2867 'name': pool_name}
2868+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2869 self.moxed_client.list_networks(**search_opts).\
2870 AndReturn({'networks': [self.fip_pool]})
2871 self.moxed_client.create_floatingip(
2872@@ -2336,6 +2394,7 @@ class TestNeutronv2(TestNeutronv2Base):
2873 search_opts = {'router:external': True,
2874 'fields': 'id',
2875 'name': pool_name}
2876+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2877 self.moxed_client.list_networks(**search_opts).\
2878 AndReturn({'networks': [self.fip_pool]})
2879 self.moxed_client.create_floatingip(
2880@@ -2351,6 +2410,7 @@ class TestNeutronv2(TestNeutronv2Base):
2881 search_opts = {'router:external': True,
2882 'fields': 'id',
2883 'id': pool_id}
2884+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2885 self.moxed_client.list_networks(**search_opts).\
2886 AndReturn({'networks': [self.fip_pool]})
2887 self.moxed_client.create_floatingip(
2888@@ -2367,6 +2427,7 @@ class TestNeutronv2(TestNeutronv2Base):
2889 search_opts = {'router:external': True,
2890 'fields': 'id',
2891 'name': pool_name}
2892+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2893 self.moxed_client.list_networks(**search_opts).\
2894 AndReturn({'networks': [self.fip_pool_nova]})
2895 self.moxed_client.create_floatingip(
2896@@ -2380,7 +2441,7 @@ class TestNeutronv2(TestNeutronv2Base):
2897 api = neutronapi.API()
2898 address = self.fip_unassociated['floating_ip_address']
2899 fip_id = self.fip_unassociated['id']
2900-
2901+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2902 self.moxed_client.list_floatingips(floating_ip_address=address).\
2903 AndReturn({'floatingips': [self.fip_unassociated]})
2904 self.moxed_client.delete_floatingip(fip_id)
2905@@ -2392,7 +2453,7 @@ class TestNeutronv2(TestNeutronv2Base):
2906 address = self.fip_unassociated['floating_ip_address']
2907 fip_id = self.fip_unassociated['id']
2908 floating_ip = {'address': address}
2909-
2910+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2911 self.moxed_client.list_floatingips(floating_ip_address=address).\
2912 AndReturn({'floatingips': [self.fip_unassociated]})
2913 self.moxed_client.delete_floatingip(fip_id)
2914@@ -2406,7 +2467,7 @@ class TestNeutronv2(TestNeutronv2Base):
2915 fip_id = self.fip_unassociated['id']
2916 floating_ip = {'address': address}
2917 instance = self._fake_instance_object(self.instance)
2918-
2919+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2920 self.moxed_client.list_floatingips(floating_ip_address=address).\
2921 AndReturn({'floatingips': [self.fip_unassociated]})
2922 self.moxed_client.delete_floatingip(fip_id)
2923@@ -2418,7 +2479,7 @@ class TestNeutronv2(TestNeutronv2Base):
2924 def test_release_floating_ip_associated(self):
2925 api = neutronapi.API()
2926 address = self.fip_associated['floating_ip_address']
2927-
2928+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2929 self.moxed_client.list_floatingips(floating_ip_address=address).\
2930 AndReturn({'floatingips': [self.fip_associated]})
2931 self.mox.ReplayAll()
2932@@ -2446,6 +2507,8 @@ class TestNeutronv2(TestNeutronv2Base):
2933
2934 search_opts = {'device_owner': 'compute:nova',
2935 'device_id': instance.uuid}
2936+
2937+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2938 self.moxed_client.list_ports(**search_opts).\
2939 AndReturn({'ports': [self.port_data2[1]]})
2940 self.moxed_client.list_floatingips(floating_ip_address=address).\
2941@@ -2468,6 +2531,7 @@ class TestNeutronv2(TestNeutronv2Base):
2942
2943 search_opts = {'device_owner': 'compute:nova',
2944 'device_id': self.instance2['uuid']}
2945+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2946 self.moxed_client.list_ports(**search_opts).\
2947 AndReturn({'ports': [self.port_data2[0]]})
2948 self.moxed_client.list_floatingips(floating_ip_address=address).\
2949@@ -2496,6 +2560,7 @@ class TestNeutronv2(TestNeutronv2Base):
2950
2951 search_opts = {'device_owner': 'compute:nova',
2952 'device_id': self.instance['uuid']}
2953+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2954 self.moxed_client.list_ports(**search_opts).\
2955 AndReturn({'ports': [self.port_data2[0]]})
2956
2957@@ -2509,7 +2574,7 @@ class TestNeutronv2(TestNeutronv2Base):
2958 api = neutronapi.API()
2959 address = self.fip_associated['floating_ip_address']
2960 fip_id = self.fip_associated['id']
2961-
2962+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2963 self.moxed_client.list_floatingips(floating_ip_address=address).\
2964 AndReturn({'floatingips': [self.fip_associated]})
2965 self.moxed_client.update_floatingip(
2966@@ -2525,6 +2590,7 @@ class TestNeutronv2(TestNeutronv2Base):
2967 self._setup_mock_for_refresh_cache(api, [instance])
2968 network_id = uuids.my_netid1
2969 search_opts = {'network_id': network_id}
2970+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2971 self.moxed_client.list_subnets(
2972 **search_opts).AndReturn({'subnets': self.subnet_data_n})
2973
2974@@ -2558,6 +2624,7 @@ class TestNeutronv2(TestNeutronv2Base):
2975 search_opts = {'device_id': self.instance['uuid'],
2976 'device_owner': zone,
2977 'fixed_ips': 'ip_address=%s' % address}
2978+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2979 self.moxed_client.list_ports(
2980 **search_opts).AndReturn({'ports': self.port_data1})
2981 port_req_body = {
2982@@ -2577,6 +2644,7 @@ class TestNeutronv2(TestNeutronv2Base):
2983 def test_list_floating_ips_without_l3_support(self):
2984 api = neutronapi.API()
2985 NeutronNotFound = exceptions.NotFound()
2986+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2987 self.moxed_client.list_floatingips(
2988 fixed_ip_address='1.1.1.1', port_id=1).AndRaise(NeutronNotFound)
2989 self.mox.ReplayAll()
2990@@ -2592,6 +2660,7 @@ class TestNeutronv2(TestNeutronv2Base):
2991 'id': 'port-id',
2992 }
2993 api = neutronapi.API()
2994+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
2995 self.mox.StubOutWithMock(api, '_get_floating_ips_by_fixed_and_port')
2996 api._get_floating_ips_by_fixed_and_port(
2997 self.moxed_client, '1.1.1.1', 'port-id').AndReturn(
2998@@ -2614,10 +2683,10 @@ class TestNeutronv2(TestNeutronv2Base):
2999 fake_ips = [model.IP(x['ip_address']) for x in fake_port['fixed_ips']]
3000 api = neutronapi.API()
3001 self.mox.StubOutWithMock(api, '_get_subnets_from_port')
3002- api._get_subnets_from_port(self.context, fake_port).AndReturn(
3003+ api._get_subnets_from_port(
3004+ self.context, fake_port, None).AndReturn(
3005 [fake_subnet])
3006 self.mox.ReplayAll()
3007- neutronapi.get_client(uuids.fake)
3008 subnets = api._nw_info_get_subnets(self.context, fake_port, fake_ips)
3009 self.assertEqual(1, len(subnets))
3010 self.assertEqual(1, len(subnets[0]['ips']))
3011@@ -2634,6 +2703,7 @@ class TestNeutronv2(TestNeutronv2Base):
3012 fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant',
3013 'mtu': 9000}]
3014 api = neutronapi.API()
3015+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3016 self.mox.ReplayAll()
3017 neutronapi.get_client(uuids.fake)
3018 net, iid = api._nw_info_build_network(fake_port, fake_nets,
3019@@ -2688,6 +2758,7 @@ class TestNeutronv2(TestNeutronv2Base):
3020 fake_subnets = [model.Subnet(cidr='1.0.0.0/8')]
3021 fake_nets = [{'id': 'net-id2', 'name': 'foo', 'tenant_id': 'tenant'}]
3022 api = neutronapi.API()
3023+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3024 self.mox.ReplayAll()
3025 neutronapi.get_client(uuids.fake)
3026 net, iid = api._nw_info_build_network(fake_port, fake_nets,
3027@@ -2709,6 +2780,7 @@ class TestNeutronv2(TestNeutronv2Base):
3028 fake_subnets = [model.Subnet(cidr='1.0.0.0/8')]
3029 fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant'}]
3030 api = neutronapi.API()
3031+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3032 self.mox.ReplayAll()
3033 neutronapi.get_client(uuids.fake)
3034 net, iid = api._nw_info_build_network(fake_port, fake_nets,
3035@@ -2737,6 +2809,7 @@ class TestNeutronv2(TestNeutronv2Base):
3036 fake_subnets = [model.Subnet(cidr='1.0.0.0/8')]
3037 fake_nets = [{'id': 'net-id', 'name': 'foo', 'tenant_id': 'tenant'}]
3038 api = neutronapi.API()
3039+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3040 self.mox.ReplayAll()
3041 neutronapi.get_client(uuids.fake)
3042 net, iid = api._nw_info_build_network(fake_port, fake_nets,
3043@@ -2852,8 +2925,8 @@ class TestNeutronv2(TestNeutronv2Base):
3044 'tenant_id': uuids.fake,
3045 }
3046 ]
3047- neutronapi.get_client(mox.IgnoreArg(), admin=True).MultipleTimes(
3048- ).AndReturn(self.moxed_client)
3049+ neutronapi.get_client(mox.IgnoreArg(), admin=True).AndReturn(
3050+ self.moxed_client)
3051 self.moxed_client.list_ports(
3052 tenant_id=uuids.fake, device_id=uuids.instance).AndReturn(
3053 {'ports': fake_ports})
3054@@ -2867,13 +2940,13 @@ class TestNeutronv2(TestNeutronv2Base):
3055 self.moxed_client, '1.1.1.1', requested_port['id']).AndReturn(
3056 [{'floating_ip_address': '10.0.0.1'}])
3057 for requested_port in requested_ports:
3058- api._get_subnets_from_port(self.context, requested_port
3059- ).AndReturn(fake_subnets)
3060+ api._get_subnets_from_port(self.context, requested_port,
3061+ self.moxed_client).AndReturn(
3062+ fake_subnets)
3063
3064 self.mox.StubOutWithMock(api, '_get_preexisting_port_ids')
3065 api._get_preexisting_port_ids(fake_inst).AndReturn(['port5'])
3066 self.mox.ReplayAll()
3067- neutronapi.get_client(uuids.fake)
3068 fake_inst.info_cache = objects.InstanceInfoCache.new(
3069 self.context, uuids.instance)
3070 fake_inst.info_cache.network_info = model.NetworkInfo.hydrate([])
3071@@ -2963,8 +3036,8 @@ class TestNeutronv2(TestNeutronv2Base):
3072 ]
3073 fake_subnets = [model.Subnet(cidr='1.0.0.0/8')]
3074
3075- neutronapi.get_client(mox.IgnoreArg(), admin=True).MultipleTimes(
3076- ).AndReturn(self.moxed_client)
3077+ neutronapi.get_client(mox.IgnoreArg(), admin=True).AndReturn(
3078+ self.moxed_client)
3079 self.moxed_client.list_ports(
3080 tenant_id=uuids.fake, device_id=uuids.instance).AndReturn(
3081 {'ports': fake_ports})
3082@@ -2976,7 +3049,6 @@ class TestNeutronv2(TestNeutronv2Base):
3083 mock_nw_info_get_subnets.return_value = fake_subnets
3084
3085 self.mox.ReplayAll()
3086- neutronapi.get_client(uuids.fake)
3087
3088 nw_infos = api._build_network_info_model(
3089 self.context, fake_inst)
3090@@ -2990,7 +3062,7 @@ class TestNeutronv2(TestNeutronv2Base):
3091 subnet_data1[0]['host_routes'] = [
3092 {'destination': '192.168.0.0/24', 'nexthop': '1.0.0.10'}
3093 ]
3094-
3095+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3096 self.moxed_client.list_subnets(
3097 id=[port_data['fixed_ips'][0]['subnet_id']]
3098 ).AndReturn({'subnets': subnet_data1})
3099@@ -3010,6 +3082,7 @@ class TestNeutronv2(TestNeutronv2Base):
3100
3101 def test_get_all_empty_list_networks(self):
3102 api = neutronapi.API()
3103+ neutronapi.get_client(mox.IgnoreArg()).AndReturn(self.moxed_client)
3104 self.moxed_client.list_networks().AndReturn({'networks': []})
3105 self.mox.ReplayAll()
3106 networks = api.get_all(self.context)
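The change repeated throughout the test_neutronv2.py hunks above is mechanical: nova/network/neutronv2/api.py now obtains a Neutron client once per public API call and threads it down to its helpers (note the new third argument to _get_subnets_from_port), so each mox-based test records exactly one get_client expectation before ReplayAll() instead of stubbing it with MultipleTimes(). A minimal sketch of the same "fetch the client once" contract, expressed with plain mock rather than mox; list_networks here is an illustrative stand-in, not the module's real code:

import mock


def list_networks(get_client, context, ids):
    # Fetch the client once and reuse it for the whole call, mirroring
    # the neutronv2/api.py change these tests track.
    client = get_client(context)
    return client.list_networks(id=ids)['networks']


get_client = mock.Mock()
get_client.return_value.list_networks.return_value = {
    'networks': [{'id': 'net-1'}]}
nets = list_networks(get_client, mock.sentinel.ctx, ['net-1'])
assert nets == [{'id': 'net-1'}]
# What mox enforces via strict record/replay is expressed here as an
# explicit single-call assertion.
get_client.assert_called_once_with(mock.sentinel.ctx)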
3107diff --git a/nova/tests/unit/scheduler/client/test_report.py b/nova/tests/unit/scheduler/client/test_report.py
3108index a85bf82..7af78e4 100644
3109--- a/nova/tests/unit/scheduler/client/test_report.py
3110+++ b/nova/tests/unit/scheduler/client/test_report.py
3111@@ -11,11 +11,13 @@
3112 # under the License.
3113
3114 import mock
3115+import six
3116
3117 from keystoneauth1 import exceptions as ks_exc
3118
3119 import nova.conf
3120 from nova import context
3121+from nova import exception
3122 from nova import objects
3123 from nova.objects import base as obj_base
3124 from nova.scheduler.client import report
3125@@ -120,6 +122,16 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3126 super(SchedulerReportClientTestCase, self).setUp()
3127 self.context = context.get_admin_context()
3128 self.ks_sess_mock = mock.Mock()
3129+ self.compute_node = objects.ComputeNode(
3130+ uuid=uuids.compute_node,
3131+ hypervisor_hostname='foo',
3132+ vcpus=8,
3133+ cpu_allocation_ratio=16.0,
3134+ memory_mb=1024,
3135+ ram_allocation_ratio=1.5,
3136+ local_gb=10,
3137+ disk_allocation_ratio=1.0,
3138+ )
3139
3140 with test.nested(
3141 mock.patch('keystoneauth1.session.Session',
3142@@ -304,7 +316,7 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3143 expected_provider = objects.ResourceProvider(
3144 uuid=uuid,
3145 name=name,
3146- generation=1,
3147+ generation=0,
3148 )
3149 expected_url = '/resource_providers'
3150 self.ks_sess_mock.post.assert_called_once_with(
3151@@ -372,8 +384,6 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3152 self.assertFalse(result)
3153
3154 def test_compute_node_inventory(self):
3155- # This is for making sure we only check once the I/O so we can directly
3156- # call this helper method for the next tests.
3157 uuid = uuids.compute_node
3158 name = 'computehost'
3159 compute_node = objects.ComputeNode(uuid=uuid,
3160@@ -384,20 +394,18 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3161 ram_allocation_ratio=1.5,
3162 local_gb=10,
3163 disk_allocation_ratio=1.0)
3164- rp = objects.ResourceProvider(uuid=uuid, name=name, generation=42)
3165- self.client._resource_providers[uuid] = rp
3166
3167 self.flags(reserved_host_memory_mb=1000)
3168 self.flags(reserved_host_disk_mb=2000)
3169
3170- result = self.client._compute_node_inventory(compute_node)
3171+ result = report._compute_node_to_inventory_dict(compute_node)
3172
3173- expected_inventories = {
3174+ expected = {
3175 'VCPU': {
3176 'total': compute_node.vcpus,
3177 'reserved': 0,
3178 'min_unit': 1,
3179- 'max_unit': 1,
3180+ 'max_unit': compute_node.vcpus,
3181 'step_size': 1,
3182 'allocation_ratio': compute_node.cpu_allocation_ratio,
3183 },
3184@@ -405,7 +413,7 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3185 'total': compute_node.memory_mb,
3186 'reserved': CONF.reserved_host_memory_mb,
3187 'min_unit': 1,
3188- 'max_unit': 1,
3189+ 'max_unit': compute_node.memory_mb,
3190 'step_size': 1,
3191 'allocation_ratio': compute_node.ram_allocation_ratio,
3192 },
3193@@ -413,77 +421,343 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3194 'total': compute_node.local_gb,
3195 'reserved': CONF.reserved_host_disk_mb * 1024,
3196 'min_unit': 1,
3197- 'max_unit': 1,
3198+ 'max_unit': compute_node.local_gb,
3199 'step_size': 1,
3200 'allocation_ratio': compute_node.disk_allocation_ratio,
3201 },
3202 }
3203- expected = {
3204- 'inventories': expected_inventories,
3205- }
3206 self.assertEqual(expected, result)
3207
3208+ def test_compute_node_inventory_empty(self):
3209+ uuid = uuids.compute_node
3210+ name = 'computehost'
3211+ compute_node = objects.ComputeNode(uuid=uuid,
3212+ hypervisor_hostname=name,
3213+ vcpus=0,
3214+ cpu_allocation_ratio=16.0,
3215+ memory_mb=0,
3216+ ram_allocation_ratio=1.5,
3217+ local_gb=0,
3218+ disk_allocation_ratio=1.0)
3219+ result = report._compute_node_to_inventory_dict(compute_node)
3220+ self.assertEqual({}, result)
3221+
3222+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3223+ '_ensure_resource_provider')
3224+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3225+ '_delete_inventory')
3226+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3227+ '_update_inventory')
3228+ @mock.patch('nova.objects.ComputeNode.save')
3229+ def test_update_resource_stats(self, mock_save, mock_ui, mock_delete,
3230+ mock_erp):
3231+ cn = self.compute_node
3232+ self.client.update_resource_stats(cn)
3233+ mock_save.assert_called_once_with()
3234+ mock_erp.assert_called_once_with(cn.uuid, cn.hypervisor_hostname)
3235+ expected_inv_data = {
3236+ 'VCPU': {
3237+ 'total': 8,
3238+ 'reserved': 0,
3239+ 'min_unit': 1,
3240+ 'max_unit': 8,
3241+ 'step_size': 1,
3242+ 'allocation_ratio': 16.0,
3243+ },
3244+ 'MEMORY_MB': {
3245+ 'total': 1024,
3246+ 'reserved': 512,
3247+ 'min_unit': 1,
3248+ 'max_unit': 1024,
3249+ 'step_size': 1,
3250+ 'allocation_ratio': 1.5,
3251+ },
3252+ 'DISK_GB': {
3253+ 'total': 10,
3254+ 'reserved': 0,
3255+ 'min_unit': 1,
3256+ 'max_unit': 10,
3257+ 'step_size': 1,
3258+ 'allocation_ratio': 1.0,
3259+ },
3260+ }
3261+ mock_ui.assert_called_once_with(
3262+ cn.uuid,
3263+ expected_inv_data,
3264+ )
3265+ self.assertFalse(mock_delete.called)
3266+
3267+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3268+ '_ensure_resource_provider')
3269+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3270+ '_delete_inventory')
3271+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3272+ '_update_inventory')
3273+ @mock.patch('nova.objects.ComputeNode.save')
3274+ def test_update_resource_stats_no_inv(self, mock_save, mock_ui,
3275+ mock_delete, mock_erp):
3276+ """Ensure that if there are no inventory records, that we call
3277+ _delete_inventory() instead of _update_inventory().
3278+ """
3279+ cn = self.compute_node
3280+ cn.vcpus = 0
3281+ cn.memory_mb = 0
3282+ cn.local_gb = 0
3283+ self.client.update_resource_stats(cn)
3284+ mock_save.assert_called_once_with()
3285+ mock_erp.assert_called_once_with(cn.uuid, cn.hypervisor_hostname)
3286+ mock_delete.assert_called_once_with(cn.uuid)
3287+ self.assertFalse(mock_ui.called)
3288+
3289+ @mock.patch('nova.scheduler.client.report._extract_inventory_in_use')
3290+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3291+ 'put')
3292+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3293+ 'get')
3294+ def test_delete_inventory_already_no_inventory(self, mock_get, mock_put,
3295+ mock_extract):
3296+ cn = self.compute_node
3297+ rp = objects.ResourceProvider(uuid=cn.uuid, generation=42)
3298+ # Make sure the ResourceProvider is cached so we avoid calling the API
3299+ self.client._resource_providers[cn.uuid] = rp
3300+
3301+ mock_get.return_value.json.return_value = {
3302+ 'resource_provider_generation': 1,
3303+ 'inventories': {
3304+ }
3305+ }
3306+ result = self.client._delete_inventory(cn.uuid)
3307+ self.assertIsNone(result)
3308+ self.assertFalse(mock_put.called)
3309+ self.assertFalse(mock_extract.called)
3310+ new_gen = self.client._resource_providers[cn.uuid].generation
3311+ self.assertEqual(1, new_gen)
3312+
3313+ @mock.patch('nova.scheduler.client.report._extract_inventory_in_use')
3314+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3315+ 'put')
3316+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3317+ 'get')
3318+ def test_delete_inventory(self, mock_get, mock_put, mock_extract):
3319+ cn = self.compute_node
3320+ rp = objects.ResourceProvider(uuid=cn.uuid, generation=42)
3321+ # Make sure the ResourceProvider is cached so we avoid calling the API
3322+ self.client._resource_providers[cn.uuid] = rp
3323+
3324+ mock_get.return_value.json.return_value = {
3325+ 'resource_provider_generation': 1,
3326+ 'inventories': {
3327+ 'VCPU': {'total': 16},
3328+ 'MEMORY_MB': {'total': 1024},
3329+ 'DISK_GB': {'total': 10},
3330+ }
3331+ }
3332+ mock_put.return_value.status_code = 200
3333+ mock_put.return_value.json.return_value = {
3334+ 'resource_provider_generation': 44,
3335+ 'inventories': {
3336+ }
3337+ }
3338+ result = self.client._delete_inventory(cn.uuid)
3339+ self.assertIsNone(result)
3340+ self.assertFalse(mock_extract.called)
3341+ new_gen = self.client._resource_providers[cn.uuid].generation
3342+ self.assertEqual(44, new_gen)
3343+
3344+ @mock.patch.object(report.LOG, 'warning')
3345+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3346+ 'put')
3347 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3348 'get')
3349+ def test_delete_inventory_inventory_in_use(self, mock_get, mock_put,
3350+ mock_warn):
3351+ cn = self.compute_node
3352+ rp = objects.ResourceProvider(uuid=cn.uuid, generation=42)
3353+ # Make sure the ResourceProvider is cached so we avoid calling the API
3354+ self.client._resource_providers[cn.uuid] = rp
3355+
3356+ mock_get.return_value.json.return_value = {
3357+ 'resource_provider_generation': 1,
3358+ 'inventories': {
3359+ 'VCPU': {'total': 16},
3360+ 'MEMORY_MB': {'total': 1024},
3361+ 'DISK_GB': {'total': 10},
3362+ }
3363+ }
3364+ mock_put.return_value.status_code = 409
3365+ rc_str = "VCPU, MEMORY_MB"
3366+ in_use_exc = exception.InventoryInUse(
3367+ resource_classes=rc_str,
3368+ resource_provider=cn.uuid,
3369+ )
3370+ fault_text = """
3371+409 Conflict
3372+
3373+There was a conflict when trying to complete your request.
3374+
3375+ update conflict: %s
3376+ """ % six.text_type(in_use_exc)
3377+ mock_put.return_value.text = fault_text
3378+ mock_put.return_value.json.return_value = {
3379+ 'resource_provider_generation': 44,
3380+ 'inventories': {
3381+ }
3382+ }
3383+ result = self.client._delete_inventory(cn.uuid)
3384+ self.assertIsNone(result)
3385+ self.assertTrue(mock_warn.called)
3386+
3387+ @mock.patch.object(report.LOG, 'error')
3388+ @mock.patch.object(report.LOG, 'warning')
3389 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3390 'put')
3391 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3392- '_compute_node_inventory')
3393- def test_update_inventory(self, mock_inv, mock_put, mock_get):
3394+ 'get')
3395+ def test_delete_inventory_inventory_error(self, mock_get, mock_put,
3396+ mock_warn, mock_error):
3397+ cn = self.compute_node
3398+ rp = objects.ResourceProvider(uuid=cn.uuid, generation=42)
3399+ # Make sure the ResourceProvider is cached so we avoid calling the API
3400+ self.client._resource_providers[cn.uuid] = rp
3401+
3402+ mock_get.return_value.json.return_value = {
3403+ 'resource_provider_generation': 1,
3404+ 'inventories': {
3405+ 'VCPU': {'total': 16},
3406+ 'MEMORY_MB': {'total': 1024},
3407+ 'DISK_GB': {'total': 10},
3408+ }
3409+ }
3410+ mock_put.return_value.status_code = 409
3411+ mock_put.return_value.text = (
3412+ 'There was a failure'
3413+ )
3414+ mock_put.return_value.json.return_value = {
3415+ 'resource_provider_generation': 44,
3416+ 'inventories': {
3417+ }
3418+ }
3419+ result = self.client._delete_inventory(cn.uuid)
3420+ self.assertIsNone(result)
3421+ self.assertFalse(mock_warn.called)
3422+ self.assertTrue(mock_error.called)
3423+
3424+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3425+ 'get')
3426+ @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3427+ 'put')
3428+ def test_update_inventory(self, mock_put, mock_get):
3429 # _update_inventory_attempt() returns True only when the existing
3430 # values were successfully created or updated
3431 uuid = uuids.compute_node
3432- compute_node = objects.ComputeNode(uuid=uuid,
3433- hypervisor_hostname='foo')
3434+ compute_node = self.compute_node
3435 rp = objects.ResourceProvider(uuid=uuid, name='foo', generation=42)
3436 # Make sure the ResourceProvider is cached so we avoid calling the API
3437 self.client._resource_providers[uuid] = rp
3438
3439- mock_inv.return_value = {'inventories': []}
3440 mock_get.return_value.json.return_value = {
3441 'resource_provider_generation': 43,
3442- 'inventories': {'VCPU': {'total': 16}},
3443+ 'inventories': {
3444+ 'VCPU': {'total': 16},
3445+ 'MEMORY_MB': {'total': 1024},
3446+ 'DISK_GB': {'total': 10},
3447+ }
3448 }
3449 mock_put.return_value.status_code = 200
3450 mock_put.return_value.json.return_value = {
3451 'resource_provider_generation': 44,
3452- 'inventories': {'VCPU': {'total': 16}},
3453+ 'inventories': {
3454+ 'VCPU': {'total': 16},
3455+ 'MEMORY_MB': {'total': 1024},
3456+ 'DISK_GB': {'total': 10},
3457+ }
3458 }
3459
3460- result = self.client._update_inventory_attempt(compute_node)
3461+ inv_data = report._compute_node_to_inventory_dict(compute_node)
3462+ result = self.client._update_inventory_attempt(
3463+ compute_node.uuid, inv_data
3464+ )
3465 self.assertTrue(result)
3466
3467 exp_url = '/resource_providers/%s/inventories' % uuid
3468 mock_get.assert_called_once_with(exp_url)
3469- # Called with the newly-found generation from the existing inventory
3470- self.assertEqual(43,
3471- mock_inv.return_value['resource_provider_generation'])
3472 # Updated with the new inventory from the PUT call
3473 self.assertEqual(44, rp.generation)
3474- mock_put.assert_called_once_with(exp_url, mock_inv.return_value)
3475+ expected = {
3476+ # Called with the newly-found generation from the existing
3477+ # inventory
3478+ 'resource_provider_generation': 43,
3479+ 'inventories': {
3480+ 'VCPU': {
3481+ 'total': 8,
3482+ 'reserved': 0,
3483+ 'min_unit': 1,
3484+ 'max_unit': compute_node.vcpus,
3485+ 'step_size': 1,
3486+ 'allocation_ratio': compute_node.cpu_allocation_ratio,
3487+ },
3488+ 'MEMORY_MB': {
3489+ 'total': 1024,
3490+ 'reserved': CONF.reserved_host_memory_mb,
3491+ 'min_unit': 1,
3492+ 'max_unit': compute_node.memory_mb,
3493+ 'step_size': 1,
3494+ 'allocation_ratio': compute_node.ram_allocation_ratio,
3495+ },
3496+ 'DISK_GB': {
3497+ 'total': 10,
3498+ 'reserved': CONF.reserved_host_disk_mb * 1024,
3499+ 'min_unit': 1,
3500+ 'max_unit': compute_node.local_gb,
3501+ 'step_size': 1,
3502+ 'allocation_ratio': compute_node.disk_allocation_ratio,
3503+ },
3504+ }
3505+ }
3506+ mock_put.assert_called_once_with(exp_url, expected)
3507
3508 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3509 'get')
3510 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3511 'put')
3512- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3513- '_compute_node_inventory')
3514- def test_update_inventory_no_update(self, mock_inv, mock_put, mock_get):
3515+ def test_update_inventory_no_update(self, mock_put, mock_get):
3516 uuid = uuids.compute_node
3517- compute_node = objects.ComputeNode(uuid=uuid,
3518- hypervisor_hostname='foo')
3519+ compute_node = self.compute_node
3520 rp = objects.ResourceProvider(uuid=uuid, name='foo', generation=42)
3521 self.client._resource_providers[uuid] = rp
3522- mock_inv.return_value = {'inventories': {
3523- 'VCPU': {'total': 8},
3524- }}
3525 mock_get.return_value.json.return_value = {
3526 'resource_provider_generation': 43,
3527 'inventories': {
3528- 'VCPU': {'total': 8}
3529+ 'VCPU': {
3530+ 'total': 8,
3531+ 'reserved': 0,
3532+ 'min_unit': 1,
3533+ 'max_unit': compute_node.vcpus,
3534+ 'step_size': 1,
3535+ 'allocation_ratio': compute_node.cpu_allocation_ratio,
3536+ },
3537+ 'MEMORY_MB': {
3538+ 'total': 1024,
3539+ 'reserved': CONF.reserved_host_memory_mb,
3540+ 'min_unit': 1,
3541+ 'max_unit': compute_node.memory_mb,
3542+ 'step_size': 1,
3543+ 'allocation_ratio': compute_node.ram_allocation_ratio,
3544+ },
3545+ 'DISK_GB': {
3546+ 'total': 10,
3547+ 'reserved': CONF.reserved_host_disk_mb * 1024,
3548+ 'min_unit': 1,
3549+ 'max_unit': compute_node.local_gb,
3550+ 'step_size': 1,
3551+ 'allocation_ratio': compute_node.disk_allocation_ratio,
3552+ },
3553 }
3554 }
3555- result = self.client._update_inventory_attempt(compute_node)
3556+ inv_data = report._compute_node_to_inventory_dict(compute_node)
3557+ result = self.client._update_inventory_attempt(
3558+ compute_node.uuid, inv_data
3559+ )
3560 self.assertTrue(result)
3561 exp_url = '/resource_providers/%s/inventories' % uuid
3562 mock_get.assert_called_once_with(exp_url)
3563@@ -497,54 +771,51 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3564 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3565 'put')
3566 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3567- '_compute_node_inventory')
3568- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3569 '_ensure_resource_provider')
3570- def test_update_inventory_conflicts(self, mock_ensure, mock_inv,
3571+ def test_update_inventory_conflicts(self, mock_ensure,
3572 mock_put, mock_get):
3573 # _update_inventory_attempt() returns True only when the existing
3574 # values were successfully created or updated
3575 uuid = uuids.compute_node
3576- compute_node = objects.ComputeNode(uuid=uuid,
3577- hypervisor_hostname='foo')
3578+ compute_node = self.compute_node
3579 rp = objects.ResourceProvider(uuid=uuid, name='foo', generation=42)
3580 # Make sure the ResourceProvider exists for preventing to call the API
3581 self.client._resource_providers[uuid] = rp
3582
3583- mock_inv.return_value = {'inventories': [{'resource_class': 'VCPU'}]}
3584 mock_get.return_value = {}
3585 mock_put.return_value.status_code = 409
3586
3587- result = self.client._update_inventory_attempt(compute_node)
3588+ inv_data = report._compute_node_to_inventory_dict(compute_node)
3589+ result = self.client._update_inventory_attempt(
3590+ compute_node.uuid, inv_data
3591+ )
3592 self.assertFalse(result)
3593
3594 # Invalidated the cache
3595 self.assertNotIn(uuid, self.client._resource_providers)
3596 # Refreshed our resource provider
3597- mock_ensure.assert_called_once_with(uuid, 'foo')
3598+ mock_ensure.assert_called_once_with(uuid)
3599
3600 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3601 '_get_inventory')
3602 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3603 'put')
3604- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3605- '_compute_node_inventory')
3606- def test_update_inventory_unknown_response(self, mock_inv,
3607- mock_put, mock_get):
3608+ def test_update_inventory_unknown_response(self, mock_put, mock_get):
3609 # _update_inventory_attempt() returns True only when the existing
3610 # values were successfully created or updated
3611 uuid = uuids.compute_node
3612- compute_node = objects.ComputeNode(uuid=uuid,
3613- hypervisor_hostname='foo')
3614+ compute_node = self.compute_node
3615 rp = objects.ResourceProvider(uuid=uuid, name='foo', generation=42)
3616 # Make sure the ResourceProvider exists for preventing to call the API
3617 self.client._resource_providers[uuid] = rp
3618
3619- mock_inv.return_value = {'inventories': [{'resource_class': 'VCPU'}]}
3620 mock_get.return_value = {}
3621 mock_put.return_value.status_code = 234
3622
3623- result = self.client._update_inventory_attempt(compute_node)
3624+ inv_data = report._compute_node_to_inventory_dict(compute_node)
3625+ result = self.client._update_inventory_attempt(
3626+ compute_node.uuid, inv_data
3627+ )
3628 self.assertFalse(result)
3629
3630 # No cache invalidation
3631@@ -554,20 +825,15 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3632 '_get_inventory')
3633 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3634 'put')
3635- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3636- '_compute_node_inventory')
3637- def test_update_inventory_failed(self, mock_inv,
3638- mock_put, mock_get):
3639+ def test_update_inventory_failed(self, mock_put, mock_get):
3640 # _update_inventory_attempt() returns True only when the existing
3641 # values were successfully created or updated
3642 uuid = uuids.compute_node
3643- compute_node = objects.ComputeNode(uuid=uuid,
3644- hypervisor_hostname='foo')
3645+ compute_node = self.compute_node
3646 rp = objects.ResourceProvider(uuid=uuid, name='foo', generation=42)
3647 # Make sure the ResourceProvider exists for preventing to call the API
3648 self.client._resource_providers[uuid] = rp
3649
3650- mock_inv.return_value = {'inventories': [{'resource_class': 'VCPU'}]}
3651 mock_get.return_value = {}
3652 try:
3653 mock_put.return_value.__nonzero__.return_value = False
3654@@ -575,7 +841,10 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3655 # Thanks py3
3656 mock_put.return_value.__bool__.return_value = False
3657
3658- result = self.client._update_inventory_attempt(compute_node)
3659+ inv_data = report._compute_node_to_inventory_dict(compute_node)
3660+ result = self.client._update_inventory_attempt(
3661+ compute_node.uuid, inv_data
3662+ )
3663 self.assertFalse(result)
3664
3665 # No cache invalidation
3666@@ -595,7 +864,9 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3667 mock_update.side_effect = (False, True)
3668
3669 self.client._resource_providers[cn.uuid] = True
3670- result = self.client._update_inventory(cn)
3671+ result = self.client._update_inventory(
3672+ cn.uuid, mock.sentinel.inv_data
3673+ )
3674 self.assertTrue(result)
3675
3676 # Only slept once
3677@@ -614,43 +885,29 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3678 mock_update.side_effect = (False, False, False)
3679
3680 self.client._resource_providers[cn.uuid] = True
3681- result = self.client._update_inventory(cn)
3682+ result = self.client._update_inventory(
3683+ cn.uuid, mock.sentinel.inv_data
3684+ )
3685 self.assertFalse(result)
3686
3687 # Slept three times
3688 mock_sleep.assert_has_calls([mock.call(1), mock.call(1), mock.call(1)])
3689
3690 # Three attempts to update
3691- mock_update.assert_has_calls([mock.call(cn), mock.call(cn),
3692- mock.call(cn)])
3693+ mock_update.assert_has_calls([
3694+ mock.call(cn.uuid, mock.sentinel.inv_data),
3695+ mock.call(cn.uuid, mock.sentinel.inv_data),
3696+ mock.call(cn.uuid, mock.sentinel.inv_data),
3697+ ])
3698
3699 # Slept three times
3700 mock_sleep.assert_has_calls([mock.call(1), mock.call(1), mock.call(1)])
3701
3702- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3703- '_ensure_resource_provider')
3704- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3705- '_update_inventory_attempt')
3706- def test_update_resource_stats_rp_fail(self, mock_ui, mock_erp):
3707- cn = mock.MagicMock()
3708- self.client.update_resource_stats(cn)
3709- cn.save.assert_called_once_with()
3710- mock_erp.assert_called_once_with(cn.uuid, cn.hypervisor_hostname)
3711- self.assertFalse(mock_ui.called)
3712
3713- @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3714- '_ensure_resource_provider')
3715- @mock.patch.object(objects.ComputeNode, 'save')
3716- def test_update_resource_stats_saves(self, mock_save, mock_ensure):
3717- cn = objects.ComputeNode(context=self.context,
3718- uuid=uuids.compute_node,
3719- hypervisor_hostname='host1')
3720- self.client.update_resource_stats(cn)
3721- mock_save.assert_called_once_with()
3722- mock_ensure.assert_called_once_with(uuids.compute_node, 'host1')
3723+class TestAllocations(SchedulerReportClientTestCase):
3724
3725 @mock.patch('nova.compute.utils.is_volume_backed_instance')
3726- def test_allocations(self, mock_vbi):
3727+ def test_instance_to_allocations_dict(self, mock_vbi):
3728 mock_vbi.return_value = False
3729 inst = objects.Instance(
3730 uuid=uuids.inst,
3731@@ -659,15 +916,16 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3732 ephemeral_gb=100,
3733 memory_mb=1024,
3734 vcpus=2))
3735+ result = report._instance_to_allocations_dict(inst)
3736 expected = {
3737 'MEMORY_MB': 1024,
3738 'VCPU': 2,
3739 'DISK_GB': 111,
3740 }
3741- self.assertEqual(expected, self.client._allocations(inst))
3742+ self.assertEqual(expected, result)
3743
3744 @mock.patch('nova.compute.utils.is_volume_backed_instance')
3745- def test_allocations_boot_from_volume(self, mock_vbi):
3746+ def test_instance_to_allocations_dict_boot_from_volume(self, mock_vbi):
3747 mock_vbi.return_value = True
3748 inst = objects.Instance(
3749 uuid=uuids.inst,
3750@@ -676,38 +934,61 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3751 ephemeral_gb=100,
3752 memory_mb=1024,
3753 vcpus=2))
3754+ result = report._instance_to_allocations_dict(inst)
3755 expected = {
3756 'MEMORY_MB': 1024,
3757 'VCPU': 2,
3758 'DISK_GB': 101,
3759 }
3760- self.assertEqual(expected, self.client._allocations(inst))
3761+ self.assertEqual(expected, result)
3762+
3763+ @mock.patch('nova.compute.utils.is_volume_backed_instance')
3764+ def test_instance_to_allocations_dict_zero_disk(self, mock_vbi):
3765+ mock_vbi.return_value = True
3766+ inst = objects.Instance(
3767+ uuid=uuids.inst,
3768+ flavor=objects.Flavor(root_gb=10,
3769+ swap=0,
3770+ ephemeral_gb=0,
3771+ memory_mb=1024,
3772+ vcpus=2))
3773+ result = report._instance_to_allocations_dict(inst)
3774+ expected = {
3775+ 'MEMORY_MB': 1024,
3776+ 'VCPU': 2,
3777+ }
3778+ self.assertEqual(expected, result)
3779
3780 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3781 'put')
3782 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3783 'get')
3784- def test_update_instance_allocation_new(self, mock_get, mock_put):
3785+ @mock.patch('nova.scheduler.client.report.'
3786+ '_instance_to_allocations_dict')
3787+ def test_update_instance_allocation_new(self, mock_a, mock_get,
3788+ mock_put):
3789 cn = objects.ComputeNode(uuid=uuids.cn)
3790 inst = objects.Instance(uuid=uuids.inst)
3791 mock_get.return_value.json.return_value = {'allocations': {}}
3792- with mock.patch.object(self.client, '_allocations') as mock_a:
3793- expected = {
3794- 'allocations': [
3795- {'resource_provider': {'uuid': cn.uuid},
3796- 'resources': mock_a.return_value}]
3797- }
3798- self.client.update_instance_allocation(cn, inst, 1)
3799- mock_put.assert_called_once_with(
3800- '/allocations/%s' % inst.uuid,
3801- expected)
3802- self.assertTrue(mock_get.called)
3803+ expected = {
3804+ 'allocations': [
3805+ {'resource_provider': {'uuid': cn.uuid},
3806+ 'resources': mock_a.return_value}]
3807+ }
3808+ self.client.update_instance_allocation(cn, inst, 1)
3809+ mock_put.assert_called_once_with(
3810+ '/allocations/%s' % inst.uuid,
3811+ expected)
3812+ self.assertTrue(mock_get.called)
3813
3814 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3815 'put')
3816 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3817 'get')
3818- def test_update_instance_allocation_existing(self, mock_get, mock_put):
3819+ @mock.patch('nova.scheduler.client.report.'
3820+ '_instance_to_allocations_dict')
3821+ def test_update_instance_allocation_existing(self, mock_a, mock_get,
3822+ mock_put):
3823 cn = objects.ComputeNode(uuid=uuids.cn)
3824 inst = objects.Instance(uuid=uuids.inst)
3825 mock_get.return_value.json.return_value = {'allocations': {
3826@@ -719,33 +1000,33 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3827 }
3828 }}
3829 }
3830- with mock.patch.object(self.client, '_allocations') as mock_a:
3831- mock_a.return_value = {
3832- 'DISK_GB': 123,
3833- 'MEMORY_MB': 456,
3834- }
3835- self.client.update_instance_allocation(cn, inst, 1)
3836- self.assertFalse(mock_put.called)
3837- mock_get.assert_called_once_with(
3838- '/allocations/%s' % inst.uuid)
3839+ mock_a.return_value = {
3840+ 'DISK_GB': 123,
3841+ 'MEMORY_MB': 456,
3842+ }
3843+ self.client.update_instance_allocation(cn, inst, 1)
3844+ self.assertFalse(mock_put.called)
3845+ mock_get.assert_called_once_with(
3846+ '/allocations/%s' % inst.uuid)
3847
3848 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3849 'get')
3850 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3851 'put')
3852+ @mock.patch('nova.scheduler.client.report.'
3853+ '_instance_to_allocations_dict')
3854 @mock.patch.object(report.LOG, 'warning')
3855- def test_update_instance_allocation_new_failed(self, mock_warn, mock_put,
3856- mock_get):
3857+ def test_update_instance_allocation_new_failed(self, mock_warn, mock_a,
3858+ mock_put, mock_get):
3859 cn = objects.ComputeNode(uuid=uuids.cn)
3860 inst = objects.Instance(uuid=uuids.inst)
3861- with mock.patch.object(self.client, '_allocations'):
3862- try:
3863- mock_put.return_value.__nonzero__.return_value = False
3864- except AttributeError:
3865- # NOTE(danms): LOL @ py3
3866- mock_put.return_value.__bool__.return_value = False
3867- self.client.update_instance_allocation(cn, inst, 1)
3868- self.assertTrue(mock_warn.called)
3869+ try:
3870+ mock_put.return_value.__nonzero__.return_value = False
3871+ except AttributeError:
3872+ # NOTE(danms): LOL @ py3
3873+ mock_put.return_value.__bool__.return_value = False
3874+ self.client.update_instance_allocation(cn, inst, 1)
3875+ self.assertTrue(mock_warn.called)
3876
3877 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3878 'delete')
3879@@ -775,8 +1056,10 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3880 'delete')
3881 @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
3882 'get')
3883- def test_remove_deleted_instances(
3884- self, mock_get, mock_delete):
3885+ @mock.patch('nova.scheduler.client.report.'
3886+ '_instance_to_allocations_dict')
3887+ def test_remove_deleted_instances(self, mock_a, mock_get,
3888+ mock_delete):
3889 cn = objects.ComputeNode(uuid=uuids.cn)
3890 inst1 = objects.Instance(uuid=uuids.inst1)
3891 inst2 = objects.Instance(uuid=uuids.inst2)
3892@@ -796,11 +1079,10 @@ class SchedulerReportClientTestCase(test.NoDBTestCase):
3893 inst3 = {'uuid': 'foo'}
3894
3895 mock_delete.return_value = True
3896- with mock.patch.object(self.client, '_allocations'):
3897- self.client.remove_deleted_instances(cn, [inst3])
3898- mock_get.assert_called_once_with(
3899- '/resource_providers/%s/allocations' % cn.uuid)
3900- expected_calls = [
3901- mock.call('/allocations/%s' % inst1.uuid),
3902- mock.call('/allocations/%s' % inst2.uuid)]
3903- mock_delete.assert_has_calls(expected_calls, any_order=True)
3904+ self.client.remove_deleted_instances(cn, [inst3])
3905+ mock_get.assert_called_once_with(
3906+ '/resource_providers/%s/allocations' % cn.uuid)
3907+ expected_calls = [
3908+ mock.call('/allocations/%s' % inst1.uuid),
3909+ mock.call('/allocations/%s' % inst2.uuid)]
3910+ mock_delete.assert_has_calls(expected_calls, any_order=True)
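The refactor these test_report.py hunks track moves the inventory and allocation math out of SchedulerReportClient methods into module-level pure functions, _compute_node_to_inventory_dict and _instance_to_allocations_dict, which is why the tests can now call them without touching the client or its session. A standalone sketch of the behaviour the expected dicts above pin down; the namedtuple stand-ins and the default reserved values (512 MB memory, 0 MB disk, matching the flags used in the tests) are illustrative only:

import collections
import math

ComputeNode = collections.namedtuple(
    'ComputeNode', 'vcpus cpu_allocation_ratio memory_mb '
    'ram_allocation_ratio local_gb disk_allocation_ratio')
Flavor = collections.namedtuple(
    'Flavor', 'root_gb swap ephemeral_gb memory_mb vcpus')
Instance = collections.namedtuple('Instance', 'flavor')


def compute_node_to_inventory_dict(cn, reserved_host_memory_mb=512,
                                   reserved_host_disk_mb=0):
    # Resources with a zero total are omitted entirely, which is what
    # lets update_resource_stats route to _delete_inventory().
    inv = {}
    if cn.vcpus > 0:
        inv['VCPU'] = {'total': cn.vcpus, 'reserved': 0, 'min_unit': 1,
                       'max_unit': cn.vcpus, 'step_size': 1,
                       'allocation_ratio': cn.cpu_allocation_ratio}
    if cn.memory_mb > 0:
        inv['MEMORY_MB'] = {'total': cn.memory_mb,
                            'reserved': reserved_host_memory_mb,
                            'min_unit': 1, 'max_unit': cn.memory_mb,
                            'step_size': 1,
                            'allocation_ratio': cn.ram_allocation_ratio}
    if cn.local_gb > 0:
        inv['DISK_GB'] = {'total': cn.local_gb,
                          # mirrors the * 1024 value asserted above
                          'reserved': reserved_host_disk_mb * 1024,
                          'min_unit': 1, 'max_unit': cn.local_gb,
                          'step_size': 1,
                          'allocation_ratio': cn.disk_allocation_ratio}
    return inv


def instance_to_allocations_dict(inst, is_volume_backed=False):
    # Swap is tracked in MB on the flavor; placement counts whole GBs,
    # and the root disk is skipped for volume-backed instances.
    disk_gb = inst.flavor.ephemeral_gb + int(
        math.ceil(inst.flavor.swap / 1024.0))
    if not is_volume_backed:
        disk_gb += inst.flavor.root_gb
    allocations = {'MEMORY_MB': inst.flavor.memory_mb,
                   'VCPU': inst.flavor.vcpus}
    if disk_gb > 0:
        allocations['DISK_GB'] = disk_gb
    return allocations


cn = ComputeNode(vcpus=8, cpu_allocation_ratio=16.0, memory_mb=1024,
                 ram_allocation_ratio=1.5, local_gb=10,
                 disk_allocation_ratio=1.0)
assert compute_node_to_inventory_dict(cn)['VCPU']['max_unit'] == 8
assert compute_node_to_inventory_dict(
    cn._replace(vcpus=0, memory_mb=0, local_gb=0)) == {}

inst = Instance(Flavor(root_gb=10, swap=1023, ephemeral_gb=100,
                       memory_mb=1024, vcpus=2))
assert instance_to_allocations_dict(inst)['DISK_GB'] == 111
assert instance_to_allocations_dict(
    inst, is_volume_backed=True)['DISK_GB'] == 101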
3911diff --git a/nova/tests/unit/test_exception.py b/nova/tests/unit/test_exception.py
3912index a9bada1..55478a6 100644
3913--- a/nova/tests/unit/test_exception.py
3914+++ b/nova/tests/unit/test_exception.py
3915@@ -61,6 +61,7 @@ class WrapExceptionTestCase(test.NoDBTestCase):
3916 self.assertEqual(3, notification.payload['args']['extra'])
3917 for key in ['exception', 'args']:
3918 self.assertIn(key, notification.payload.keys())
3919+ self.assertNotIn('context', notification.payload['args'].keys())
3920
3921 self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
3922 notification = fake_notifier.VERSIONED_NOTIFICATIONS[0]
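This one-line assertion is the regression guard for the exception_wrapper change earlier in this diff (and for the bug-1673569 / CVE-2017-7214 release note): the request context, which can carry authentication details, must be stripped from the args recorded in legacy error notifications. A minimal sketch of that scrubbing step, with names chosen for illustration only:

def scrub_notification_args(call_kwargs):
    # Drop the request context before the call arguments are serialized
    # into an error notification payload; it can contain auth tokens
    # that must not end up in logs.
    return {k: v for k, v in call_kwargs.items() if k != 'context'}


payload_args = scrub_notification_args({'context': object(), 'extra': 3})
assert 'context' not in payload_args
assert payload_args['extra'] == 3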
3923diff --git a/nova/tests/unit/virt/libvirt/test_driver.py b/nova/tests/unit/virt/libvirt/test_driver.py
3924index 0c8d6c0..69dc8d8 100644
3925--- a/nova/tests/unit/virt/libvirt/test_driver.py
3926+++ b/nova/tests/unit/virt/libvirt/test_driver.py
3927@@ -3584,18 +3584,25 @@ class LibvirtConnTestCase(test.NoDBTestCase):
3928 vconfig.LibvirtConfigMemoryBalloon)
3929
3930 self.assertEqual(cfg.devices[4].target_name, "com.redhat.spice.0")
3931+ self.assertEqual(cfg.devices[4].type, 'spicevmc')
3932 self.assertEqual(cfg.devices[5].type, "spice")
3933 self.assertEqual(cfg.devices[6].type, "qxl")
3934
3935+ @mock.patch.object(host.Host, 'get_guest')
3936+ @mock.patch.object(libvirt_driver.LibvirtDriver,
3937+ '_get_serial_ports_from_guest')
3938 @mock.patch('nova.console.serial.acquire_port')
3939 @mock.patch('nova.virt.hardware.get_number_of_serial_ports',
3940 return_value=1)
3941 @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',)
3942 def test_create_serial_console_devices_based_on_arch(self, mock_get_arch,
3943- mock_get_port_number,
3944- mock_acquire_port):
3945+ mock_get_port_number,
3946+ mock_acquire_port,
3947+ mock_ports,
3948+ mock_guest):
3949 self.flags(enabled=True, group='serial_console')
3950 drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
3951+ instance = objects.Instance(**self.test_instance)
3952
3953 expected = {arch.X86_64: vconfig.LibvirtConfigGuestSerial,
3954 arch.S390: vconfig.LibvirtConfigGuestConsole,
3955@@ -3604,19 +3611,22 @@ class LibvirtConnTestCase(test.NoDBTestCase):
3956 for guest_arch, device_type in expected.items():
3957 mock_get_arch.return_value = guest_arch
3958 guest = vconfig.LibvirtConfigGuest()
3959- drvr._create_serial_console_devices(guest, instance=None,
3960+ drvr._create_serial_console_devices(guest, instance=instance,
3961 flavor={}, image_meta={})
3962 self.assertEqual(1, len(guest.devices))
3963 console_device = guest.devices[0]
3964 self.assertIsInstance(console_device, device_type)
3965 self.assertEqual("tcp", console_device.type)
3966
3967+ @mock.patch.object(host.Host, 'get_guest')
3968+ @mock.patch.object(libvirt_driver.LibvirtDriver,
3969+ '_get_serial_ports_from_guest')
3970 @mock.patch('nova.virt.hardware.get_number_of_serial_ports',
3971 return_value=4)
3972 @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
3973 side_effect=[arch.X86_64, arch.S390, arch.S390X])
3974 def test_create_serial_console_devices_with_limit_exceeded_based_on_arch(
3975- self, mock_get_arch, mock_get_port_number):
3976+ self, mock_get_arch, mock_get_port_number, mock_ports, mock_guest):
3977 self.flags(enabled=True, group='serial_console')
3978 self.flags(virt_type="qemu", group='libvirt')
3979 flavor = 'fake_flavor'
3980@@ -4116,6 +4126,7 @@ class LibvirtConnTestCase(test.NoDBTestCase):
3981
3982 self.assertEqual(cfg.devices[4].type, "tablet")
3983 self.assertEqual(cfg.devices[5].target_name, "com.redhat.spice.0")
3984+ self.assertEqual(cfg.devices[5].type, 'spicevmc')
3985 self.assertEqual(cfg.devices[6].type, "vnc")
3986 self.assertEqual(cfg.devices[7].type, "spice")
3987
3988@@ -7839,24 +7850,27 @@ class LibvirtConnTestCase(test.NoDBTestCase):
3989 drvr._get_volume_config)
3990 self.assertEqual(target_xml, config)
3991
3992+ @mock.patch.object(libvirt_driver.LibvirtDriver,
3993+ '_get_serial_ports_from_guest')
3994 @mock.patch.object(fakelibvirt.virDomain, "migrateToURI2")
3995 @mock.patch.object(fakelibvirt.virDomain, "XMLDesc")
3996 def test_live_migration_update_serial_console_xml(self, mock_xml,
3997- mock_migrate):
3998+ mock_migrate, mock_get):
3999 self.compute = importutils.import_object(CONF.compute_manager)
4000 instance_ref = self.test_instance
4001
4002 xml_tmpl = ("<domain type='kvm'>"
4003 "<devices>"
4004 "<console type='tcp'>"
4005- "<source mode='bind' host='{addr}' service='10000'/>"
4006+ "<source mode='bind' host='{addr}' service='{port}'/>"
4007+ "<target type='serial' port='0'/>"
4008 "</console>"
4009 "</devices>"
4010 "</domain>")
4011
4012- initial_xml = xml_tmpl.format(addr='9.0.0.1')
4013+ initial_xml = xml_tmpl.format(addr='9.0.0.1', port='10100')
4014
4015- target_xml = xml_tmpl.format(addr='9.0.0.12')
4016+ target_xml = xml_tmpl.format(addr='9.0.0.12', port='10200')
4017 target_xml = etree.tostring(etree.fromstring(target_xml))
4018
4019 # Preparing mocks
4020@@ -7871,7 +7885,8 @@ class LibvirtConnTestCase(test.NoDBTestCase):
4021 serial_listen_addr='9.0.0.12',
4022 target_connect_addr=None,
4023 bdms=[],
4024- block_migration=False)
4025+ block_migration=False,
4026+ serial_listen_ports=[10200])
4027 dom = fakelibvirt.virDomain
4028 guest = libvirt_guest.Guest(dom)
4029 drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
4030@@ -14586,18 +14601,12 @@ class LibvirtConnTestCase(test.NoDBTestCase):
4031
4032 @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume')
4033 @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._swap_volume')
4034- @mock.patch('nova.objects.block_device.BlockDeviceMapping.'
4035- 'get_by_volume_and_instance')
4036 @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_config')
4037 @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._connect_volume')
4038 @mock.patch('nova.virt.libvirt.host.Host.get_guest')
4039- def _test_swap_volume_driver_bdm_save(self, get_guest,
4040- connect_volume, get_volume_config,
4041- get_by_volume_and_instance,
4042- swap_volume,
4043- disconnect_volume,
4044- volume_save,
4045- source_type):
4046+ def _test_swap_volume_driver(self, get_guest, connect_volume,
4047+ get_volume_config, swap_volume,
4048+ disconnect_volume, source_type):
4049 conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI())
4050 instance = objects.Instance(**self.test_instance)
4051 old_connection_info = {'driver_volume_type': 'fake',
4052@@ -14626,16 +14635,6 @@ class LibvirtConnTestCase(test.NoDBTestCase):
4053 get_volume_config.return_value = mock.MagicMock(
4054 source_path='/fake-new-volume')
4055
4056- bdm = objects.BlockDeviceMapping(self.context,
4057- **fake_block_device.FakeDbBlockDeviceDict(
4058- {'id': 2, 'instance_uuid': uuids.instance,
4059- 'device_name': '/dev/vdb',
4060- 'source_type': source_type,
4061- 'destination_type': 'volume',
4062- 'volume_id': 'fake-volume-id-2',
4063- 'boot_index': 0}))
4064- get_by_volume_and_instance.return_value = bdm
4065-
4066 conn.swap_volume(old_connection_info, new_connection_info, instance,
4067 '/dev/vdb', 1)
4068
4069@@ -14645,22 +14644,15 @@ class LibvirtConnTestCase(test.NoDBTestCase):
4070 swap_volume.assert_called_once_with(guest, 'vdb',
4071 '/fake-new-volume', 1)
4072 disconnect_volume.assert_called_once_with(old_connection_info, 'vdb')
4073- volume_save.assert_called_once_with()
4074-
4075- @mock.patch('nova.virt.block_device.DriverVolumeBlockDevice.save')
4076- def test_swap_volume_driver_bdm_save_source_is_volume(self, volume_save):
4077- self._test_swap_volume_driver_bdm_save(volume_save=volume_save,
4078- source_type='volume')
4079-
4080- @mock.patch('nova.virt.block_device.DriverImageBlockDevice.save')
4081- def test_swap_volume_driver_bdm_save_source_is_image(self, volume_save):
4082- self._test_swap_volume_driver_bdm_save(volume_save=volume_save,
4083- source_type='image')
4084-
4085- @mock.patch('nova.virt.block_device.DriverSnapshotBlockDevice.save')
4086- def test_swap_volume_driver_bdm_save_source_is_snapshot(self, volume_save):
4087- self._test_swap_volume_driver_bdm_save(volume_save=volume_save,
4088- source_type='snapshot')
4089+
4090+ def test_swap_volume_driver_source_is_volume(self):
4091+ self._test_swap_volume_driver(source_type='volume')
4092+
4093+ def test_swap_volume_driver_source_is_image(self):
4094+ self._test_swap_volume_driver(source_type='image')
4095+
4096+ def test_swap_volume_driver_source_is_snapshot(self):
4097+ self._test_swap_volume_driver(source_type='snapshot')
4098
4099 @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete')
4100 def _test_live_snapshot(self, mock_is_job_complete,
4101diff --git a/nova/tests/unit/virt/test_driver.py b/nova/tests/unit/virt/test_driver.py
4102index aa7478c..041ff3e 100644
4103--- a/nova/tests/unit/virt/test_driver.py
4104+++ b/nova/tests/unit/virt/test_driver.py
4105@@ -13,6 +13,8 @@
4106 # License for the specific language governing permissions and limitations
4107 # under the License.
4108
4109+import mock
4110+from os_brick.initiator import connector
4111 from oslo_config import fixture as fixture_config
4112
4113 from nova import test
4114@@ -29,6 +31,13 @@ class FakeDriver2(FakeDriver):
4115 pass
4116
4117
4118+class FailDriver(object):
4119+ def __init__(self):
4120+ self.connector = connector.InitiatorConnector.factory(
4121+ 'UNSUPPORTED', None
4122+ )
4123+
4124+
4125 class ToDriverRegistryTestCase(test.NoDBTestCase):
4126
4127 def assertDriverInstance(self, inst, class_, *args, **kwargs):
4128@@ -59,6 +68,14 @@ class ToDriverRegistryTestCase(test.NoDBTestCase):
4129 FakeDriver2, 'arg1', 'arg2', param1='value1',
4130 param2='value2')
4131
4132+ @mock.patch.object(connector.InitiatorConnector, "factory")
4133+ def test_driver_dict_from_config_exception(self, mocked_factory):
4134+ mocked_factory.side_effect = ValueError
4135+ registry = driver.driver_dict_from_config([
4136+ 'fail=nova.tests.unit.virt.test_driver.FailDriver',
4137+ ])
4138+ self.assertEqual({}, registry)
4139+
4140
4141 class DriverMethodTestCase(test.NoDBTestCase):
4142
4143diff --git a/nova/virt/driver.py b/nova/virt/driver.py
4144index 1ca49a4..7bd4464 100644
4145--- a/nova/virt/driver.py
4146+++ b/nova/virt/driver.py
4147@@ -41,8 +41,15 @@ def driver_dict_from_config(named_driver_config, *args, **kwargs):
4148 for driver_str in named_driver_config:
4149 driver_type, _sep, driver = driver_str.partition('=')
4150 driver_class = importutils.import_class(driver)
4151- driver_registry[driver_type] = driver_class(*args, **kwargs)
4152-
4153+ try:
4154+ driver_registry[driver_type] = driver_class(*args, **kwargs)
4155+ except ValueError:
4156+ # NOTE(arne_r):
4158+ # stable/newton cannot enforce os_brick versions that include
4159+ # the InvalidConnectorProtocol exception. Since that exception
4160+ # inherits from ValueError, this fix is still compatible with it.
4160+ LOG.debug('Unable to load volume driver %s. It is not '
4161+ 'supported on this host.', driver_type)
4162 return driver_registry
4163
4164
4165diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
4166index 7bd6d28..fe78d04 100644
4167--- a/nova/virt/libvirt/driver.py
4168+++ b/nova/virt/libvirt/driver.py
4169@@ -1239,20 +1239,19 @@ class LibvirtDriver(driver.ComputeDriver):
4170 CONF.libvirt.virt_type, disk_dev),
4171 'type': 'disk',
4172 }
4173+ # NOTE (lyarwood): new_connection_info will be modified by the
4174+ # following _connect_volume call down into the volume drivers. The
4175+ # majority of the volume drivers will add a device_path that is in turn
4176+ # used by _get_volume_config to set the source_path of the
4177+ # LibvirtConfigGuestDisk object it returns. We do not explicitly save
4178+ # this to the BDM here as the upper compute swap_volume method will
4179+ # eventually do this for us.
4180 self._connect_volume(new_connection_info, disk_info)
4181 conf = self._get_volume_config(new_connection_info, disk_info)
4182 if not conf.source_path:
4183 self._disconnect_volume(new_connection_info, disk_dev)
4184 raise NotImplementedError(_("Swap only supports host devices"))
4185
4186- # Save updates made in connection_info when connect_volume was called
4187- volume_id = new_connection_info.get('serial')
4188- bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
4189- nova_context.get_admin_context(), volume_id, instance.uuid)
4190- driver_bdm = driver_block_device.convert_volume(bdm)
4191- driver_bdm['connection_info'] = new_connection_info
4192- driver_bdm.save()
4193-
4194 self._swap_volume(guest, disk_dev, conf.source_path, resize_to)
4195 self._disconnect_volume(old_connection_info, disk_dev)
4196
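
Taken together with the test changes above, the effect is that the libvirt driver no longer fetches and saves the BlockDeviceMapping itself; the compute manager's swap_volume persists the mutated new_connection_info once, whatever the source_type. A simplified, runnable sketch of that division of labour (all classes below are stand-ins, not nova objects, and the compute manager in reality serializes connection_info before saving):

    class FakeBDM(object):
        def __init__(self):
            self.connection_info = None
            self.saved = False

        def save(self):
            self.saved = True

    class FakeLibvirtDriver(object):
        def _connect_volume(self, connection_info):
            # Volume drivers mutate connection_info in place, e.g. by
            # adding a device_path; nothing is persisted down here.
            connection_info['data'] = {'device_path': '/dev/vdb'}

        def swap_volume(self, old_info, new_info):
            self._connect_volume(new_info)
            # ... blockjob copy/pivot and disconnect of old_info elided ...

    class FakeComputeManager(object):
        def __init__(self):
            self.driver = FakeLibvirtDriver()

        def swap_volume(self, bdm, old_info, new_info):
            self.driver.swap_volume(old_info, new_info)
            # Single point of persistence, shared by volume, image and
            # snapshot source types alike.
            bdm.connection_info = new_info
            bdm.save()

    bdm = FakeBDM()
    FakeComputeManager().swap_volume(bdm, {}, {})
    assert bdm.saved and bdm.connection_info['data']['device_path']
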
4197@@ -4081,6 +4080,19 @@ class LibvirtDriver(driver.ComputeDriver):
4198 guest_arch = libvirt_utils.get_arch(image_meta)
4199
4200 if CONF.serial_console.enabled:
4201+ try:
4202+ # TODO(sahid): the guest param of this method should
4203+ # be renamed as guest_cfg then guest_obj to guest.
4204+ guest_obj = self._host.get_guest(instance)
4205+ if list(self._get_serial_ports_from_guest(guest_obj)):
4206+ # Serial port are already configured for instance that
4207+ # means we are in a context of migration.
4208+ return
4209+ except exception.InstanceNotFound:
4210+ LOG.debug(
4211+ "Instance does not exist yet on libvirt, we can "
4212+ "safely pass on looking for already defined serial "
4213+ "ports in its domain XML", instance=instance)
4214 num_ports = hardware.get_number_of_serial_ports(
4215 flavor, image_meta)
4216
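
The intent of the early return above: when a domain already exists for the instance and has serial ports defined, we are on the incoming-migration path and must not acquire new ports; InstanceNotFound marks a fresh spawn, where port setup proceeds as before. A runnable sketch of just that branch logic, with stand-in host and exception types (get_guest here returns the port list directly, collapsing nova's separate _get_serial_ports_from_guest step):

    class InstanceNotFound(Exception):
        pass

    class FakeHost(object):
        def __init__(self, ports):
            self._ports = ports  # None means no domain defined yet

        def get_guest(self, instance):
            if self._ports is None:
                raise InstanceNotFound()
            return self._ports

    def should_configure_serial_ports(host, instance):
        try:
            if list(host.get_guest(instance)):
                # Ports already in the domain XML: migration context,
                # keep the existing ones.
                return False
        except InstanceNotFound:
            # No domain yet: a fresh spawn, configure ports as usual.
            pass
        return True

    assert should_configure_serial_ports(FakeHost(None), 'inst')
    assert not should_configure_serial_ports(FakeHost([('h', 1)]), 'inst')
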
4217@@ -4520,6 +4532,7 @@ class LibvirtDriver(driver.ComputeDriver):
4218 if (CONF.spice.enabled and CONF.spice.agent_enabled and
4219 virt_type not in ('lxc', 'uml', 'xen')):
4220 channel = vconfig.LibvirtConfigGuestChannel()
4221+ channel.type = 'spicevmc'
4222 channel.target_name = "com.redhat.spice.0"
4223 guest.add_device(channel)
4224
4225@@ -5950,12 +5963,25 @@ class LibvirtDriver(driver.ComputeDriver):
4226 libvirt.VIR_MIGRATE_TUNNELLED != 0):
4227 params.pop('migrate_disks')
4228
4229+ # TODO(sahid): This should be in
4230+ # post_live_migration_at_source, but there is no way to
4231+ # retrieve the ports acquired on the host for the guest at
4232+ # that step. Since the domain is going to be removed from
4233+ # libvirt on the source host after migration, we back up the
4234+ # serial ports so we can release them if all went well.
4235+ serial_ports = []
4236+ if CONF.serial_console.enabled:
4237+ serial_ports = list(self._get_serial_ports_from_guest(guest))
4238+
4239 guest.migrate(self._live_migration_uri(dest),
4240 migrate_uri=migrate_uri,
4241 flags=migration_flags,
4242 params=params,
4243 domain_xml=new_xml_str,
4244 bandwidth=CONF.libvirt.live_migration_bandwidth)
4245+
4246+ for hostname, port in serial_ports:
4247+ serial_console.release_port(host=hostname, port=port)
4248 except Exception as e:
4249 with excutils.save_and_reraise_exception():
4250 LOG.error(_LE("Live Migration failure: %s"), e,
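
The TODO above describes a real ordering constraint: once guest.migrate() succeeds, the domain is gone from the source libvirt, so the ports must be captured beforehand and released only on success (a failure raises out of the try block and skips the release). A runnable sketch of that ordering, with stand-ins for the guest's port list and the serial_console module:

    released = []

    def release_port(host, port):
        released.append((host, port))

    def migrate_with_serial_ports(serial_ports_from_guest, do_migrate):
        # Snapshot the ports *before* migrating; afterwards the source
        # domain no longer exists and they could not be recovered.
        serial_ports = list(serial_ports_from_guest)
        do_migrate()  # raising here skips the release below
        for hostname, port in serial_ports:
            release_port(host=hostname, port=port)

    migrate_with_serial_ports([('127.0.0.1', 10000)], lambda: None)
    assert released == [('127.0.0.1', 10000)]
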
4251@@ -6448,6 +6474,13 @@ class LibvirtDriver(driver.ComputeDriver):
4252 is_shared_instance_path = True
4253 if migrate_data:
4254 is_shared_instance_path = migrate_data.is_shared_instance_path
4255+ if (migrate_data.obj_attr_is_set("serial_listen_ports")
4256+ and migrate_data.serial_listen_ports):
4257+ # Release the serial ports reserved at the destination.
4258+ for port in migrate_data.serial_listen_ports:
4259+ serial_console.release_port(
4260+ host=migrate_data.serial_listen_addr, port=port)
4261+
4262 if not is_shared_instance_path:
4263 instance_dir = libvirt_utils.get_instance_path_at_destination(
4264 instance, migrate_data)
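
The obj_attr_is_set() guard matters during rolling upgrades: a source compute still running an older version never sets serial_listen_ports on the migrate_data object, and the rollback must tolerate that. A sketch with a minimal stand-in for the versioned object:

    class FakeMigrateData(object):
        def __init__(self, **fields):
            self._fields = fields

        def obj_attr_is_set(self, name):
            return name in self._fields

        def __getattr__(self, name):
            return self._fields[name]

    def rollback_release(migrate_data, release_port):
        if (migrate_data.obj_attr_is_set('serial_listen_ports')
                and migrate_data.serial_listen_ports):
            # Hand back the ports reserved at the destination.
            for port in migrate_data.serial_listen_ports:
                release_port(host=migrate_data.serial_listen_addr,
                             port=port)

    freed = []
    rollback_release(
        FakeMigrateData(serial_listen_addr='10.0.0.2',
                        serial_listen_ports=[10000, 10001]),
        lambda host, port: freed.append((host, port)))
    assert freed == [('10.0.0.2', 10000), ('10.0.0.2', 10001)]
    rollback_release(FakeMigrateData(), None)  # unset field: no-op
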
4265@@ -6579,6 +6612,15 @@ class LibvirtDriver(driver.ComputeDriver):
4266 CONF.libvirt.live_migration_inbound_addr
4267 migrate_data.supported_perf_events = self._supported_perf_events
4268
4269+ migrate_data.serial_listen_ports = []
4270+ if CONF.serial_console.enabled:
4271+ num_ports = hardware.get_number_of_serial_ports(
4272+ instance.flavor, instance.image_meta)
4273+ for port in six.moves.range(num_ports):
4274+ migrate_data.serial_listen_ports.append(
4275+ serial_console.acquire_port(
4276+ migrate_data.serial_listen_addr))
4277+
4278 for vol in block_device_mapping:
4279 connection_info = vol['connection_info']
4280 if connection_info.get('serial'):
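
This is the acquisition half of the pair: during pre-live-migration the destination reserves one listen port per configured serial port and records them in migrate_data, which is exactly what the rollback sketch above releases. A compressed illustration (acquire_port here just counts upward, where nova's serial_console module reserves real listen ports):

    import itertools

    _ports = itertools.count(10000)

    def acquire_port(host):
        return next(_ports)

    def reserve_serial_ports(serial_listen_addr, num_ports):
        serial_listen_ports = []
        for _ in range(num_ports):
            serial_listen_ports.append(acquire_port(serial_listen_addr))
        return serial_listen_ports

    assert reserve_serial_ports('10.0.0.2', 2) == [10000, 10001]
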
4281diff --git a/releasenotes/notes/bug-1673569-cve-2017-7214-2d7644b356015c93.yaml b/releasenotes/notes/bug-1673569-cve-2017-7214-2d7644b356015c93.yaml
4282new file mode 100644
4283index 0000000..30a7e29
4284--- /dev/null
4285+++ b/releasenotes/notes/bug-1673569-cve-2017-7214-2d7644b356015c93.yaml
4286@@ -0,0 +1,8 @@
4287+---
4288+prelude: >
4289+ This release includes fixes for security vulnerabilities.
4290+security:
4291+ - |
4292+ [CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets
4293+
4294+ * `Bug 1673569 <https://bugs.launchpad.net/nova/+bug/1673569>`_
4295diff --git a/releasenotes/notes/live-migration-progress-known-issue-20176f49da4d3c91.yaml b/releasenotes/notes/live-migration-progress-known-issue-20176f49da4d3c91.yaml
4296new file mode 100644
4297index 0000000..62a51d8
4298--- /dev/null
4299+++ b/releasenotes/notes/live-migration-progress-known-issue-20176f49da4d3c91.yaml
4300@@ -0,0 +1,13 @@
4301+---
4302+issues:
4303+ - |
4304+ The live-migration progress timeout controlled by the configuration option
4305+ ``[libvirt]/live_migration_progress_timeout`` has been discovered to
4306+ frequently cause live-migrations to fail with a progress timeout error,
4307+ even though the live-migration is still making good progress.
4308+ To minimize problems caused by these checks, we recommend setting the
4309+ value to 0, which disables the progress timeout entirely. (This has been
4310+ made the default in Ocata and Pike.)
4311+ To control when a live-migration fails with a timeout error, use
4312+ ``[libvirt]/live_migration_completion_timeout`` and
4313+ ``[libvirt]/live_migration_downtime`` instead.
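
For operators, the recommendation in this note translates to a nova.conf stanza like the following on compute hosts; the two timeout values shown are the Newton defaults and are given only for orientation, not as tuning advice:

    [libvirt]
    # Disable the unreliable progress-based abort (the Ocata/Pike default).
    live_migration_progress_timeout = 0
    # Bound migrations with these knobs instead.
    live_migration_completion_timeout = 800
    live_migration_downtime = 500
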
4314diff --git a/requirements.txt b/requirements.txt
4315index 4dbe720..6c68b91 100644
4316--- a/requirements.txt
4317+++ b/requirements.txt
4318@@ -32,7 +32,7 @@ python-glanceclient!=2.4.0,>=2.3.0 # Apache-2.0
4319 requests>=2.10.0 # Apache-2.0
4320 six>=1.9.0 # MIT
4321 stevedore>=1.16.0 # Apache-2.0
4322-setuptools!=24.0.0,>=16.0 # PSF/ZPL
4323+setuptools!=24.0.0,!=34.0.0,!=34.0.1,!=34.0.2,!=34.0.3,!=34.1.0,!=34.1.1,!=34.2.0,!=34.3.0,>=16.0 # PSF/ZPL
4324 websockify>=0.8.0 # LGPLv3
4325 oslo.cache>=1.5.0 # Apache-2.0
4326 oslo.concurrency>=3.8.0 # Apache-2.0
