Merge ~lucaskanashiro/ubuntu/+source/fence-agents:backport_ibmz_agent_bionic into ubuntu/+source/fence-agents:ubuntu/bionic-devel
Status: Merged
Merged at revision: 2a5fdcf7272b12ef2c86344390c85f6f9e4e5e76
Proposed branch: ~lucaskanashiro/ubuntu/+source/fence-agents:backport_ibmz_agent_bionic
Merge into: ubuntu/+source/fence-agents:ubuntu/bionic-devel
Diff against target: 677 lines (+615/-10), 5 files modified:
- debian/changelog (+11/-0)
- debian/copyright (+15/-9)
- debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch (+587/-0)
- debian/patches/series (+1/-0)
- debian/rules (+1/-1)
Related bugs:
Reviewers:
- Lucas Kanashiro (community): Approve
- Christian Ehrhardt (community): Needs Fixing
Review via email: mp+398276@code.launchpad.net
Commit message
Description of the change
Backport IBM LPAR fence agent (LP: #1889070). The intent of this MP is to review the work done so far before sharing a PPA containing the proposed package with IBM for testing. Because of that, the SRU paperwork is not done yet; once we get approval from IBM I will do it.
PPA:
https:/
autopkgtest result:
autopkgtest [12:09:31]: @@@@@@@
fence-dummy PASS
Lucas Kanashiro (lucaskanashiro) wrote:
Thanks for the review Christian.
I tried to backport the very same commit from upstream without changes, but I believe you are right regarding the test: we can drop it and add a note. About the copyright change, in my opinion we should keep it and, as you mentioned, advertise it in d/copyright as well. Does that sound good to you?
Christian Ehrhardt (paelzer) wrote:
> I tried to backport the very same commit from upstream without changes but I believe you are right regarding the test, we can drop it and add a note. About the copyright change, in my opinion, we should keep it and as you mentioned advertise it in d/copyright as well. Does that sound good to you?
yes, WFM
Lucas Kanashiro (lucaskanashiro) wrote:
The package was tested by IBM and has been uploaded:
$ git push pkg upload/
Enumerating objects: 26, done.
Counting objects: 100% (26/26), done.
Delta compression using up to 32 threads
Compressing objects: 100% (19/19), done.
Writing objects: 100% (19/19), 8.28 KiB | 2.07 MiB/s, done.
Total 19 (delta 13), reused 0 (delta 0)
To ssh://git.
* [new tag] upload/
dput ubuntu ../fence-
Checking signature on .changes
gpg: ../fence-
Checking signature on .dsc
gpg: ../fence-
Uploading to ubuntu (via ftp to upload.ubuntu.com):
Uploading fence-agents_
Uploading fence-agents_
Uploading fence-agents_
Successfully uploaded packages.
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog |
2 | index 7412a79..568d5d5 100644 |
3 | --- a/debian/changelog |
4 | +++ b/debian/changelog |
5 | @@ -1,3 +1,14 @@ |
6 | +fence-agents (4.0.25-2ubuntu1.3) bionic; urgency=medium |
7 | + |
8 | + * Backport upstream patch to add agent for IBM z LPARs (LP: #1889070). |
9 | + - d/p/0013-Add-fence-agent-for-IBM-z-LPARs.patch |
10 | + - d/copyright: add the IBM z LPARs agent. |
11 | + - d/rules: skip fence_ibmz in dh_auto_test. The test metadata file was |
12 | + removed from the upstream patch and it makes "make check" fail for |
13 | + fence_ibmz. |
14 | + |
15 | + -- Lucas Kanashiro <kanashiro@ubuntu.com> Wed, 10 Feb 2021 16:39:54 -0300 |
16 | + |
17 | fence-agents (4.0.25-2ubuntu1.2) bionic; urgency=medium |
18 | |
19 | * fence_aws backport from Focal (LP: #1894323): |
20 | diff --git a/debian/copyright b/debian/copyright |
21 | index a70b5d8..ba34532 100644 |
22 | --- a/debian/copyright |
23 | +++ b/debian/copyright |
24 | @@ -19,6 +19,17 @@ License: GPL-2+ |
25 | On Debian systems, the complete text of the GNU General |
26 | Public License version 2 can be found in "/usr/share/common-licenses/GPL-2". |
27 | |
28 | +License: LGPL-2.1+ |
29 | + This library is free software; you can redistribute it and/or |
30 | + modify it under the terms of the GNU Lesser General Public |
31 | + License as published by the Free Software Foundation; either |
32 | + version 2.1 of the License, or (at your option) any later version. |
33 | + . |
34 | + This library is distributed in the hope that it will be useful, |
35 | + but WITHOUT ANY WARRANTY; without even the implied warranty of |
36 | + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU |
37 | + Lesser General Public License for more details. |
38 | + |
39 | Files: debian/* |
40 | Copyright: 2011 Andres Rodriguez <andreserl@ubuntu.com> |
41 | License: GPL-2+ |
42 | @@ -53,16 +64,11 @@ License: GPL-2+ |
43 | Files: fence/agents/zvm/fence_zvm.* |
44 | Copyright: (C) 2012 Sine Nomine Associates |
45 | License: LGPL-2.1+ |
46 | - This library is free software; you can redistribute it and/or |
47 | - modify it under the terms of the GNU Lesser General Public |
48 | - License as published by the Free Software Foundation; either |
49 | - version 2.1 of the License, or (at your option) any later version. |
50 | - . |
51 | - This library is distributed in the hope that it will be useful, |
52 | - but WITHOUT ANY WARRANTY; without even the implied warranty of |
53 | - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU |
54 | - Lesser General Public License for more details. |
55 | |
56 | Files: fence/man/fence_ifmib.8 |
57 | Copyright: 2008 Ross Vandegrift |
58 | License: GPL-2+ |
59 | + |
60 | +Files: fence/agents/ibmz/fence_ibmz.py |
61 | +Copyright: 2020 IBM Corp. |
62 | +License: LGPL-2.1+ |
63 | diff --git a/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch b/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch |
64 | new file mode 100644 |
65 | index 0000000..113fa1f |
66 | --- /dev/null |
67 | +++ b/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch |
68 | @@ -0,0 +1,587 @@ |
69 | +From: Paulo de Rezende Pinatti <ppinatti@linux.ibm.com> |
70 | +Date: Thu, 16 Jul 2020 21:37:12 +0200 |
71 | +Subject: Add fence agent for IBM z LPARs |
72 | + |
73 | +This agent manages IBM z LPARs via the HMC Web Services |
74 | +REST API. |
75 | + |
76 | +Signed-off-by: Paulo de Rezende Pinatti <ppinatti@linux.ibm.com> |
77 | + |
78 | +Origin: backport, https://github.com/ClusterLabs/fence-agents/commit/74d415bf |
79 | +Bug-Ubuntu: https://bugs.launchpad.net/ubuntu-z-systems/+bug/1889070 |
80 | +Reviewed-By: Lucas Kanashiro <kanashiro@ubuntu.com> |
81 | + |
82 | +The addition of test/data/metadata/fence_ibmz.xml was dropped because it does |
83 | +not work during build time, it tries to connect to non-localhost. The changes |
84 | +made to the fence-agensts.spec.in were also removed since here we do not care |
85 | +about RPM packages. |
86 | + |
87 | +--- |
88 | + doc/COPYRIGHT | 4 + |
89 | + fence/agents/ibmz/fence_ibmz.py | 542 +++++++++++++++++++++++++++++++++++++ |
90 | + 2 files changed, 546 insertions(+) |
91 | + create mode 100644 fence/agents/ibmz/fence_ibmz.py |
92 | + |
93 | +diff --git a/doc/COPYRIGHT b/doc/COPYRIGHT |
94 | +index 8124c53..f1e8353 100644 |
95 | +--- a/doc/COPYRIGHT |
96 | ++++ b/doc/COPYRIGHT |
97 | +@@ -5,6 +5,10 @@ Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved. |
98 | + |
99 | + Exceptions: |
100 | + |
101 | ++fence/agents/ibmz/*: |
102 | ++ Copyright (c) 2020 IBM Corp. |
103 | ++ Contributed by Paulo de Rezende Pinatti <ppinatti at linux.ibm.com> |
104 | ++ |
105 | + fence/agents/hds_cb/*: |
106 | + Copyright (C) 2012 Matthew Clark. |
107 | + Author: Matthew Clark <mattjclark0407 at hotmail.com> |
108 | +diff --git a/fence/agents/ibmz/fence_ibmz.py b/fence/agents/ibmz/fence_ibmz.py |
109 | +new file mode 100644 |
110 | +index 0000000..d3ac550 |
111 | +--- /dev/null |
112 | ++++ b/fence/agents/ibmz/fence_ibmz.py |
113 | +@@ -0,0 +1,542 @@ |
114 | ++#!@PYTHON@ -tt |
115 | ++ |
116 | ++# Copyright (c) 2020 IBM Corp. |
117 | ++# |
118 | ++# This library is free software; you can redistribute it and/or |
119 | ++# modify it under the terms of the GNU Lesser General Public |
120 | ++# License as published by the Free Software Foundation; either |
121 | ++# version 2.1 of the License, or (at your option) any later version. |
122 | ++# |
123 | ++# This library is distributed in the hope that it will be useful, |
124 | ++# but WITHOUT ANY WARRANTY; without even the implied warranty of |
125 | ++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU |
126 | ++# Lesser General Public License for more details. |
127 | ++# |
128 | ++# You should have received a copy of the GNU Lesser General Public |
129 | ++# License along with this library. If not, see |
130 | ++# <http://www.gnu.org/licenses/>. |
131 | ++ |
132 | ++import atexit |
133 | ++import logging |
134 | ++import time |
135 | ++import sys |
136 | ++ |
137 | ++import requests |
138 | ++from requests.packages import urllib3 |
139 | ++ |
140 | ++sys.path.append("@FENCEAGENTSLIBDIR@") |
141 | ++from fencing import * |
142 | ++from fencing import fail_usage, run_delay, EC_GENERIC_ERROR |
143 | ++ |
144 | ++DEFAULT_POWER_TIMEOUT = '300' |
145 | ++ERROR_NOT_FOUND = ("{obj_type} {obj_name} not found in this HMC. " |
146 | ++ "Attention: names are case-sensitive.") |
147 | ++ |
148 | ++class ApiClientError(Exception): |
149 | ++ """ |
150 | ++ Base exception for all API Client related errors. |
151 | ++ """ |
152 | ++ |
153 | ++class ApiClientRequestError(ApiClientError): |
154 | ++ """ |
155 | ++ Raised when an API request ends in error |
156 | ++ """ |
157 | ++ |
158 | ++ def __init__(self, req_method, req_uri, status, reason, message): |
159 | ++ self.req_method = req_method |
160 | ++ self.req_uri = req_uri |
161 | ++ self.status = status |
162 | ++ self.reason = reason |
163 | ++ self.message = message |
164 | ++ super(ApiClientRequestError, self).__init__() |
165 | ++ |
166 | ++ def __str__(self): |
167 | ++ return ( |
168 | ++ "API request failed, details:\n" |
169 | ++ "HTTP Request : {req_method} {req_uri}\n" |
170 | ++ "HTTP Response status: {status}\n" |
171 | ++ "Error reason: {reason}\n" |
172 | ++ "Error message: {message}\n".format( |
173 | ++ req_method=self.req_method, req_uri=self.req_uri, |
174 | ++ status=self.status, reason=self.reason, message=self.message) |
175 | ++ ) |
176 | ++ |
177 | ++class APIClient(object): |
178 | ++ DEFAULT_CONFIG = { |
179 | ++ # how many connection-related errors to retry on |
180 | ++ 'connect_retries': 3, |
181 | ++ # how many times to retry on read errors (after request was sent to the |
182 | ++ # server) |
183 | ++ 'read_retries': 3, |
184 | ++ # http methods that should be retried |
185 | ++ 'method_whitelist': ['HEAD', 'GET', 'OPTIONS'], |
186 | ++ # limit of redirects to perform to avoid loops |
187 | ++ 'redirect': 5, |
188 | ++ # how long to wait while establishing a connection |
189 | ++ 'connect_timeout': 30, |
190 | ++ # how long to wait for asynchronous operations (jobs) to complete |
191 | ++ 'operation_timeout': 900, |
192 | ++ # how long to wait between bytes sent by the remote side |
193 | ++ 'read_timeout': 300, |
194 | ++ # default API port |
195 | ++ 'port': 6794, |
196 | ++ # validate ssl certificates |
197 | ++ 'ssl_verify': False |
198 | ++ } |
199 | ++ LABEL_BY_OP_MODE = { |
200 | ++ 'classic': { |
201 | ++ 'nodes': 'logical-partitions', |
202 | ++ 'state-on': 'operating', |
203 | ++ 'start': 'load', |
204 | ++ 'stop': 'deactivate' |
205 | ++ }, |
206 | ++ 'dpm': { |
207 | ++ 'nodes': 'partitions', |
208 | ++ 'state-on': 'active', |
209 | ++ 'start': 'start', |
210 | ++ 'stop': 'stop' |
211 | ++ } |
212 | ++ } |
213 | ++ def __init__(self, host, user, passwd, config=None): |
214 | ++ self.host = host |
215 | ++ if not passwd: |
216 | ++ raise ValueError('Password cannot be empty') |
217 | ++ self.passwd = passwd |
218 | ++ if not user: |
219 | ++ raise ValueError('Username cannot be empty') |
220 | ++ self.user = user |
221 | ++ self._cpc_cache = {} |
222 | ++ self._session = None |
223 | ++ self._config = self.DEFAULT_CONFIG.copy() |
224 | ++ # apply user defined values |
225 | ++ if config: |
226 | ++ self._config.update(config) |
227 | ++ |
228 | ++ def _create_session(self): |
229 | ++ """ |
230 | ++ Create a new requests session and apply config values |
231 | ++ """ |
232 | ++ session = requests.Session() |
233 | ++ retry_obj = urllib3.Retry( |
234 | ++ # setting a total is necessary to cover SSL related errors |
235 | ++ total=max(self._config['connect_retries'], |
236 | ++ self._config['read_retries']), |
237 | ++ connect=self._config['connect_retries'], |
238 | ++ read=self._config['read_retries'], |
239 | ++ method_whitelist=self._config['method_whitelist'], |
240 | ++ redirect=self._config['redirect'] |
241 | ++ ) |
242 | ++ session.mount('http://', requests.adapters.HTTPAdapter( |
243 | ++ max_retries=retry_obj)) |
244 | ++ session.mount('https://', requests.adapters.HTTPAdapter( |
245 | ++ max_retries=retry_obj)) |
246 | ++ return session |
247 | ++ |
248 | ++ def _get_mode_labels(self, cpc): |
249 | ++ """ |
250 | ++ Return the map of labels that corresponds to the cpc operation mode |
251 | ++ """ |
252 | ++ if self.is_dpm_enabled(cpc): |
253 | ++ return self.LABEL_BY_OP_MODE['dpm'] |
254 | ++ return self.LABEL_BY_OP_MODE['classic'] |
255 | ++ |
256 | ++ def _partition_switch_power(self, cpc, partition, action): |
257 | ++ """ |
258 | ++ Perform the API request to start (power on) or stop (power off) the |
259 | ++ target partition and wait for the job to finish. |
260 | ++ """ |
261 | ++ # retrieve partition's uri |
262 | ++ label_map = self._get_mode_labels(cpc) |
263 | ++ resp = self._request('get', '{}/{}?name={}'.format( |
264 | ++ self._cpc_cache[cpc]['object-uri'], label_map['nodes'], partition)) |
265 | ++ |
266 | ++ if not resp[label_map['nodes']]: |
267 | ++ raise ValueError(ERROR_NOT_FOUND.format( |
268 | ++ obj_type='LPAR/Partition', obj_name=partition)) |
269 | ++ |
270 | ++ part_uri = resp[label_map['nodes']][0]['object-uri'] |
271 | ++ |
272 | ++ # in dpm mode the request must have empty body |
273 | ++ if self.is_dpm_enabled(cpc): |
274 | ++ body = None |
275 | ++ # in classic mode we make sure the operation is executed |
276 | ++ # even if the partition is already on |
277 | ++ else: |
278 | ++ body = {'force': True} |
279 | ++ # when powering on the partition must be activated first |
280 | ++ if action == 'start': |
281 | ++ op_uri = '{}/operations/activate'.format(part_uri) |
282 | ++ job_resp = self._request( |
283 | ++ 'post', op_uri, body=body, valid_codes=[202]) |
284 | ++ # always wait for activate otherwise the load (start) |
285 | ++ # operation will fail |
286 | ++ if self._config['operation_timeout'] == 0: |
287 | ++ timeout = self.DEFAULT_CONFIG['operation_timeout'] |
288 | ++ else: |
289 | ++ timeout = self._config['operation_timeout'] |
290 | ++ logging.debug( |
291 | ++ 'waiting for activate (timeout %s secs)', timeout) |
292 | ++ self._wait_for_job('post', op_uri, job_resp['job-uri'], |
293 | ++ timeout=timeout) |
294 | ++ |
295 | ++ # trigger the start job |
296 | ++ op_uri = '{}/operations/{}'.format(part_uri, label_map[action]) |
297 | ++ job_resp = self._request('post', op_uri, body=body, valid_codes=[202]) |
298 | ++ if self._config['operation_timeout'] == 0: |
299 | ++ return |
300 | ++ logging.debug('waiting for %s (timeout %s secs)', |
301 | ++ label_map[action], self._config['operation_timeout']) |
302 | ++ self._wait_for_job('post', op_uri, job_resp['job-uri'], |
303 | ++ timeout=self._config['operation_timeout']) |
304 | ++ |
305 | ++ def _request(self, method, uri, body=None, headers=None, valid_codes=None): |
306 | ++ """ |
307 | ++ Perform a request to the HMC API |
308 | ++ """ |
309 | ++ assert method in ('delete', 'head', 'get', 'post', 'put') |
310 | ++ |
311 | ++ url = 'https://{host}:{port}{uri}'.format( |
312 | ++ host=self.host, port=self._config['port'], uri=uri) |
313 | ++ if not headers: |
314 | ++ headers = {} |
315 | ++ |
316 | ++ if self._session is None: |
317 | ++ raise ValueError('You need to log on first') |
318 | ++ method = getattr(self._session, method) |
319 | ++ timeout = ( |
320 | ++ self._config['connect_timeout'], self._config['read_timeout']) |
321 | ++ response = method(url, json=body, headers=headers, |
322 | ++ verify=self._config['ssl_verify'], timeout=timeout) |
323 | ++ |
324 | ++ if valid_codes and response.status_code not in valid_codes: |
325 | ++ reason = '(no reason)' |
326 | ++ message = '(no message)' |
327 | ++ if response.headers.get('content-type') == 'application/json': |
328 | ++ try: |
329 | ++ json_resp = response.json() |
330 | ++ except ValueError: |
331 | ++ pass |
332 | ++ else: |
333 | ++ reason = json_resp.get('reason', reason) |
334 | ++ message = json_resp.get('message', message) |
335 | ++ else: |
336 | ++ message = '{}...'.format(response.text[:500]) |
337 | ++ raise ApiClientRequestError( |
338 | ++ response.request.method, response.request.url, |
339 | ++ response.status_code, reason, message) |
340 | ++ |
341 | ++ if response.status_code == 204: |
342 | ++ return dict() |
343 | ++ try: |
344 | ++ json_resp = response.json() |
345 | ++ except ValueError: |
346 | ++ raise ApiClientRequestError( |
347 | ++ response.request.method, response.request.url, |
348 | ++ response.status_code, '(no reason)', |
349 | ++ 'Invalid JSON content in response') |
350 | ++ |
351 | ++ return json_resp |
352 | ++ |
353 | ++ def _update_cpc_cache(self, cpc_props): |
354 | ++ self._cpc_cache[cpc_props['name']] = { |
355 | ++ 'object-uri': cpc_props['object-uri'], |
356 | ++ 'dpm-enabled': cpc_props.get('dpm-enabled', False) |
357 | ++ } |
358 | ++ |
359 | ++ def _wait_for_job(self, req_method, req_uri, job_uri, timeout): |
360 | ++ """ |
361 | ++ Perform API requests to check for job status until it has completed |
362 | ++ or the specified timeout is reached |
363 | ++ """ |
364 | ++ op_timeout = time.time() + timeout |
365 | ++ while time.time() < op_timeout: |
366 | ++ job_resp = self._request("get", job_uri) |
367 | ++ if job_resp['status'] == 'complete': |
368 | ++ if job_resp['job-status-code'] in (200, 201, 204): |
369 | ++ return |
370 | ++ raise ApiClientRequestError( |
371 | ++ req_method, req_uri, |
372 | ++ job_resp.get('job-status-code', '(no status)'), |
373 | ++ job_resp.get('job-reason-code', '(no reason)'), |
374 | ++ job_resp.get('job-results', {}).get( |
375 | ++ 'message', '(no message)') |
376 | ++ ) |
377 | ++ time.sleep(1) |
378 | ++ raise ApiClientError('Timed out while waiting for job completion') |
379 | ++ |
380 | ++ def cpc_list(self): |
381 | ++ """ |
382 | ++ Return a list of CPCs in the format {'name': 'cpc-name', 'status': |
383 | ++ 'operating'} |
384 | ++ """ |
385 | ++ list_resp = self._request("get", "/api/cpcs", valid_codes=[200]) |
386 | ++ ret = [] |
387 | ++ for cpc_props in list_resp['cpcs']: |
388 | ++ self._update_cpc_cache(cpc_props) |
389 | ++ ret.append({ |
390 | ++ 'name': cpc_props['name'], 'status': cpc_props['status']}) |
391 | ++ return ret |
392 | ++ |
393 | ++ def is_dpm_enabled(self, cpc): |
394 | ++ """ |
395 | ++ Return True if CPC is in DPM mode, False for classic mode |
396 | ++ """ |
397 | ++ if cpc in self._cpc_cache: |
398 | ++ return self._cpc_cache[cpc]['dpm-enabled'] |
399 | ++ list_resp = self._request("get", "/api/cpcs?name={}".format(cpc), |
400 | ++ valid_codes=[200]) |
401 | ++ if not list_resp['cpcs']: |
402 | ++ raise ValueError(ERROR_NOT_FOUND.format( |
403 | ++ obj_type='CPC', obj_name=cpc)) |
404 | ++ self._update_cpc_cache(list_resp['cpcs'][0]) |
405 | ++ return self._cpc_cache[cpc]['dpm-enabled'] |
406 | ++ |
407 | ++ def logon(self): |
408 | ++ """ |
409 | ++ Open a session with the HMC API and store its ID |
410 | ++ """ |
411 | ++ self._session = self._create_session() |
412 | ++ logon_body = {"userid": self.user, "password": self.passwd} |
413 | ++ logon_resp = self._request("post", "/api/sessions", body=logon_body, |
414 | ++ valid_codes=[200, 201]) |
415 | ++ self._session.headers["X-API-Session"] = logon_resp['api-session'] |
416 | ++ |
417 | ++ def logoff(self): |
418 | ++ """ |
419 | ++ Close/delete the HMC API session |
420 | ++ """ |
421 | ++ if self._session is None: |
422 | ++ return |
423 | ++ self._request("delete", "/api/sessions/this-session", |
424 | ++ valid_codes=[204]) |
425 | ++ self._cpc_cache = {} |
426 | ++ self._session = None |
427 | ++ |
428 | ++ def partition_list(self, cpc): |
429 | ++ """ |
430 | ++ Return a list of partitions in the format {'name': 'part-name', |
431 | ++ 'status': 'on'} |
432 | ++ """ |
433 | ++ label_map = self._get_mode_labels(cpc) |
434 | ++ list_resp = self._request('get', '{}/{}'.format( |
435 | ++ self._cpc_cache[cpc]['object-uri'], label_map['nodes'])) |
436 | ++ status_map = {label_map['state-on']: 'on'} |
437 | ++ return [{'name': part['name'], |
438 | ++ 'status': status_map.get(part['status'].lower(), 'off')} |
439 | ++ for part in list_resp[label_map['nodes']]] |
440 | ++ |
441 | ++ def partition_start(self, cpc, partition): |
442 | ++ """ |
443 | ++ Power on a partition |
444 | ++ """ |
445 | ++ self._partition_switch_power(cpc, partition, 'start') |
446 | ++ |
447 | ++ def partition_status(self, cpc, partition): |
448 | ++ """ |
449 | ++ Return the current status of a partition (on or off) |
450 | ++ """ |
451 | ++ label_map = self._get_mode_labels(cpc) |
452 | ++ |
453 | ++ resp = self._request('get', '{}/{}?name={}'.format( |
454 | ++ self._cpc_cache[cpc]['object-uri'], label_map['nodes'], partition)) |
455 | ++ if not resp[label_map['nodes']]: |
456 | ++ raise ValueError(ERROR_NOT_FOUND.format( |
457 | ++ obj_type='LPAR/Partition', obj_name=partition)) |
458 | ++ part_props = resp[label_map['nodes']][0] |
459 | ++ if part_props['status'].lower() == label_map['state-on']: |
460 | ++ return 'on' |
461 | ++ return 'off' |
462 | ++ |
463 | ++ def partition_stop(self, cpc, partition): |
464 | ++ """ |
465 | ++ Power off a partition |
466 | ++ """ |
467 | ++ self._partition_switch_power(cpc, partition, 'stop') |
468 | ++ |
469 | ++def parse_plug(options): |
470 | ++ """ |
471 | ++ Extract cpc and partition from specified plug value |
472 | ++ """ |
473 | ++ try: |
474 | ++ cpc, partition = options['--plug'].strip().split('/', 1) |
475 | ++ except ValueError: |
476 | ++ fail_usage('Please specify nodename in format cpc/partition') |
477 | ++ cpc = cpc.strip() |
478 | ++ if not cpc or not partition: |
479 | ++ fail_usage('Please specify nodename in format cpc/partition') |
480 | ++ return cpc, partition |
481 | ++ |
482 | ++def get_power_status(conn, options): |
483 | ++ logging.debug('executing get_power_status') |
484 | ++ status = conn.partition_status(*parse_plug(options)) |
485 | ++ return status |
486 | ++ |
487 | ++def set_power_status(conn, options): |
488 | ++ logging.debug('executing set_power_status') |
489 | ++ if options['--action'] == 'on': |
490 | ++ conn.partition_start(*parse_plug(options)) |
491 | ++ elif options['--action'] == 'off': |
492 | ++ conn.partition_stop(*parse_plug(options)) |
493 | ++ else: |
494 | ++ fail_usage('Invalid action {}'.format(options['--action'])) |
495 | ++ |
496 | ++def get_outlet_list(conn, options): |
497 | ++ logging.debug('executing get_outlet_list') |
498 | ++ result = {} |
499 | ++ for cpc in conn.cpc_list(): |
500 | ++ for part in conn.partition_list(cpc['name']): |
501 | ++ result['{}/{}'.format(cpc['name'], part['name'])] = ( |
502 | ++ part['name'], part['status']) |
503 | ++ return result |
504 | ++ |
505 | ++def disconnect(conn): |
506 | ++ """ |
507 | ++ Close the API session |
508 | ++ """ |
509 | ++ try: |
510 | ++ conn.logoff() |
511 | ++ except Exception as exc: |
512 | ++ logging.exception('Logoff failed: ') |
513 | ++ sys.exit(str(exc)) |
514 | ++ |
515 | ++def set_opts(): |
516 | ++ """ |
517 | ++ Define the options supported by this agent |
518 | ++ """ |
519 | ++ device_opt = [ |
520 | ++ "ipaddr", |
521 | ++ "ipport", |
522 | ++ "login", |
523 | ++ "passwd", |
524 | ++ "port", |
525 | ++ "connect_retries", |
526 | ++ "connect_timeout", |
527 | ++ "operation_timeout", |
528 | ++ "read_retries", |
529 | ++ "read_timeout", |
530 | ++ "ssl_secure", |
531 | ++ ] |
532 | ++ |
533 | ++ all_opt["ipport"]["default"] = APIClient.DEFAULT_CONFIG['port'] |
534 | ++ all_opt["power_timeout"]["default"] = DEFAULT_POWER_TIMEOUT |
535 | ++ port_desc = ("Physical plug id in the format cpc-name/partition-name " |
536 | ++ "(case-sensitive)") |
537 | ++ all_opt["port"]["shortdesc"] = port_desc |
538 | ++ all_opt["port"]["help"] = ( |
539 | ++ "-n, --plug=[id] {}".format(port_desc)) |
540 | ++ all_opt["connect_retries"] = { |
541 | ++ "getopt" : ":", |
542 | ++ "longopt" : "connect-retries", |
543 | ++ "help" : "--connect-retries=[number] How many times to " |
544 | ++ "retry on connection errors", |
545 | ++ "default" : APIClient.DEFAULT_CONFIG['connect_retries'], |
546 | ++ "type" : "integer", |
547 | ++ "required" : "0", |
548 | ++ "shortdesc" : "How many times to retry on connection errors", |
549 | ++ "order" : 2 |
550 | ++ } |
551 | ++ all_opt["read_retries"] = { |
552 | ++ "getopt" : ":", |
553 | ++ "longopt" : "read-retries", |
554 | ++ "help" : "--read-retries=[number] How many times to " |
555 | ++ "retry on errors related to reading from server", |
556 | ++ "default" : APIClient.DEFAULT_CONFIG['read_retries'], |
557 | ++ "type" : "integer", |
558 | ++ "required" : "0", |
559 | ++ "shortdesc" : "How many times to retry on read errors", |
560 | ++ "order" : 2 |
561 | ++ } |
562 | ++ all_opt["connect_timeout"] = { |
563 | ++ "getopt" : ":", |
564 | ++ "longopt" : "connect-timeout", |
565 | ++ "help" : "--connect-timeout=[seconds] How long to wait to " |
566 | ++ "establish a connection", |
567 | ++ "default" : APIClient.DEFAULT_CONFIG['connect_timeout'], |
568 | ++ "type" : "second", |
569 | ++ "required" : "0", |
570 | ++ "shortdesc" : "How long to wait to establish a connection", |
571 | ++ "order" : 2 |
572 | ++ } |
573 | ++ all_opt["operation_timeout"] = { |
574 | ++ "getopt" : ":", |
575 | ++ "longopt" : "operation-timeout", |
576 | ++ "help" : "--operation-timeout=[seconds] How long to wait for " |
577 | ++ "power operation to complete (0 = do not wait)", |
578 | ++ "default" : APIClient.DEFAULT_CONFIG['operation_timeout'], |
579 | ++ "type" : "second", |
580 | ++ "required" : "0", |
581 | ++ "shortdesc" : "How long to wait for power operation to complete", |
582 | ++ "order" : 2 |
583 | ++ } |
584 | ++ all_opt["read_timeout"] = { |
585 | ++ "getopt" : ":", |
586 | ++ "longopt" : "read-timeout", |
587 | ++ "help" : "--read-timeout=[seconds] How long to wait " |
588 | ++ "to read data from server", |
589 | ++ "default" : APIClient.DEFAULT_CONFIG['read_timeout'], |
590 | ++ "type" : "second", |
591 | ++ "required" : "0", |
592 | ++ "shortdesc" : "How long to wait for server data", |
593 | ++ "order" : 2 |
594 | ++ } |
595 | ++ return device_opt |
596 | ++ |
597 | ++def main(): |
598 | ++ """ |
599 | ++ Agent entry point |
600 | ++ """ |
601 | ++ # register exit handler used by pacemaker |
602 | ++ atexit.register(atexit_handler) |
603 | ++ |
604 | ++ # prepare accepted options |
605 | ++ device_opt = set_opts() |
606 | ++ |
607 | ++ # parse options provided on input |
608 | ++ options = check_input(device_opt, process_input(device_opt)) |
609 | ++ |
610 | ++ docs = { |
611 | ++ "shortdesc": "Fence agent for IBM z LPARs", |
612 | ++ "longdesc": ( |
613 | ++ "fence_ibmz is a power fencing agent which uses the HMC Web " |
614 | ++ "Services API to fence IBM z LPARs."), |
615 | ++ "vendorurl": "http://www.ibm.com" |
616 | ++ } |
617 | ++ show_docs(options, docs) |
618 | ++ |
619 | ++ run_delay(options) |
620 | ++ |
621 | ++ # set underlying library's logging and ssl config according to specified |
622 | ++ # options |
623 | ++ requests_log = logging.getLogger("urllib3") |
624 | ++ requests_log.propagate = True |
625 | ++ if "--verbose" in options: |
626 | ++ requests_log.setLevel(logging.DEBUG) |
627 | ++ if "--ssl-secure" not in options: |
628 | ++ urllib3.disable_warnings( |
629 | ++ category=urllib3.exceptions.InsecureRequestWarning) |
630 | ++ |
631 | ++ hmc_address = options["--ip"] |
632 | ++ hmc_userid = options["--username"] |
633 | ++ hmc_password = options["--password"] |
634 | ++ config = { |
635 | ++ 'connect_retries': int(options['--connect-retries']), |
636 | ++ 'read_retries': int(options['--read-retries']), |
637 | ++ 'operation_timeout': int(options['--operation-timeout']), |
638 | ++ 'connect_timeout': int(options['--connect-timeout']), |
639 | ++ 'read_timeout': int(options['--read-timeout']), |
640 | ++ 'port': int(options['--ipport']), |
641 | ++ 'ssl_verify': bool('--ssl-secure' in options), |
642 | ++ } |
643 | ++ try: |
644 | ++ conn = APIClient(hmc_address, hmc_userid, hmc_password, config) |
645 | ++ conn.logon() |
646 | ++ atexit.register(disconnect, conn) |
647 | ++ result = fence_action(conn, options, set_power_status, |
648 | ++ get_power_status, get_outlet_list) |
649 | ++ except Exception: |
650 | ++ logging.exception('Exception occurred: ') |
651 | ++ result = EC_GENERIC_ERROR |
652 | ++ sys.exit(result) |
653 | ++ |
654 | ++if __name__ == "__main__": |
655 | ++ main() |
656 | diff --git a/debian/patches/series b/debian/patches/series |
657 | index df80fa4..075cb3b 100644 |
658 | --- a/debian/patches/series |
659 | +++ b/debian/patches/series |
660 | @@ -10,3 +10,4 @@ lp1865523-05-fence_scsi-fix-python3-encoding.patch |
661 | lp1865523-06-fence_scsi-fixes-around-node-id.patch |
662 | lp1865523-07-fence-metadata-update.xml |
663 | lp1894323-01-fence_aws-new-agent.patch |
664 | +0013-Add-fence-agent-for-IBM-z-LPARs.patch |
665 | diff --git a/debian/rules b/debian/rules |
666 | index 551e252..3d92f95 100755 |
667 | --- a/debian/rules |
668 | +++ b/debian/rules |
669 | @@ -47,7 +47,7 @@ override_dh_missing: |
670 | |
671 | override_dh_auto_test: |
672 | # disable testing for ovh as it tries to access SOAP service on www.ovh.com |
673 | - dh_auto_test -- TEST_TARGET_SKIP=ovh/fence_ovh |
674 | + dh_auto_test -- TEST_TARGET_SKIP="ovh/fence_ovh ibmz/fence_ibmz" |
675 | |
676 | override_dh_python2: |
677 | dh_python2 |
If we disable the test (which is OK), why do we backport and then carry tests/data/metadata/fence_ibmz.xml?
Maybe not adding this test xml would even spare you the need to skip it?
If we update doc/COPYRIGHT, don't we also need to update d/copyright?
I'm not good at these licensing things, but it comes to mind when seeing one touched but not the other.
Maybe add notes for the backport to the patch header:
- dropped .spec because ...
- dropped test because ...
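Such notes are conventionally recorded DEP-3-style in the patch header, after the standard fields. A sketch of how they could look (the Origin and Bug-Ubuntu fields are taken from the patch in this MP; the note wording below is only illustrative):

```
Origin: backport, https://github.com/ClusterLabs/fence-agents/commit/74d415bf
Bug-Ubuntu: https://bugs.launchpad.net/ubuntu-z-systems/+bug/1889070

Backport notes:
- dropped tests/data/metadata/fence_ibmz.xml: the test tries to connect
  to a non-localhost address and fails at build time
- dropped the fence-agents.spec.in changes: the RPM spec file is not
  used for the Debian/Ubuntu package
```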
The rest looks like a good SRU; while it adds new features, that should be OK as this is for HW exploitation. In addition, it just adds one other agent without touching (= regressing) the others.