Merge ~lucaskanashiro/ubuntu/+source/fence-agents:backport_ibmz_agent_bionic into ubuntu/+source/fence-agents:ubuntu/bionic-devel
Status: Merged
Merged at revision: 2a5fdcf7272b12ef2c86344390c85f6f9e4e5e76
Proposed branch: ~lucaskanashiro/ubuntu/+source/fence-agents:backport_ibmz_agent_bionic
Merge into: ubuntu/+source/fence-agents:ubuntu/bionic-devel

Diff against target: 677 lines (+615/-10), 5 files modified:
- debian/changelog (+11/-0)
- debian/copyright (+15/-9)
- debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch (+587/-0)
- debian/patches/series (+1/-0)
- debian/rules (+1/-1)

Reviewer | Status
---|---
Lucas Kanashiro (community) | Approve
Christian Ehrhardt (community) | Needs Fixing
Review via email:
Commit message
Description of the change
Backport the IBM z LPAR fence agent (LP: #1889070). The intent of this MP is to review the work done so far before sharing a PPA containing the proposed package with IBM for testing. Because of that, the SRU paperwork is not done yet; once we get approval from IBM I will take care of it.
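For readers skimming the diff: the agent identifies a node by a plug value of the form cpc-name/partition-name. A standalone Python sketch of that parsing convention (an illustrative helper mirroring the patch's parse_plug(); not part of the package):

```python
def parse_plug(plug):
    """Split a plug value into (cpc, partition).

    Both parts are case-sensitive; raises SystemExit on a malformed
    value, mirroring fail_usage() in the fence agent.
    """
    try:
        cpc, partition = plug.strip().split('/', 1)
    except ValueError:
        raise SystemExit('Please specify nodename in format cpc/partition')
    cpc = cpc.strip()
    if not cpc or not partition:
        raise SystemExit('Please specify nodename in format cpc/partition')
    return cpc, partition

print(parse_plug('CPC1/lpar01'))
```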
PPA:
https:/
autopkgtest result:
autopkgtest [12:09:31]: @@@@@@@
fence-dummy PASS
Lucas Kanashiro (lucaskanashiro) wrote:
Thanks for the review, Christian.
I tried to backport the very same commit from upstream without changes, but I believe you are right about the test: we can drop it and add a note. As for the copyright change, in my opinion we should keep it and, as you mentioned, advertise it in d/copyright as well. Does that sound good to you?
Christian Ehrhardt (paelzer) wrote:
> I tried to backport the very same commit from upstream without changes but I believe you are right regarding the test, we can drop it and add a note. About the copyright change, in my opinion, we should keep it and as you mentioned advertise it in d/copyright as well. Does that sound good to you?
yes, WFM
Lucas Kanashiro (lucaskanashiro) wrote:
The package was tested by IBM and has been uploaded:
$ git push pkg upload/
Enumerating objects: 26, done.
Counting objects: 100% (26/26), done.
Delta compression using up to 32 threads
Compressing objects: 100% (19/19), done.
Writing objects: 100% (19/19), 8.28 KiB | 2.07 MiB/s, done.
Total 19 (delta 13), reused 0 (delta 0)
To ssh://git.
* [new tag] upload/
dput ubuntu ../fence-
Checking signature on .changes
gpg: ../fence-
Checking signature on .dsc
gpg: ../fence-
Uploading to ubuntu (via ftp to upload.ubuntu.com):
Uploading fence-agents_
Uploading fence-agents_
Uploading fence-agents_
Successfully uploaded packages.
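For context on what the uploaded agent does at runtime: the patched APIClient triggers asynchronous HMC jobs and polls the job URI until completion or timeout. A minimal stdlib-only sketch of that polling pattern, with the HTTP request abstracted into a plain callable (names here are illustrative, not the agent's actual API):

```python
import time

def wait_for_job(get_status, timeout, poll_interval=1):
    """Poll get_status() until the job completes or the timeout elapses.

    Mirrors the _wait_for_job() pattern in the backported agent: a job
    is done when its status is 'complete', and only HTTP-like status
    codes 200/201/204 count as success.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = get_status()
        if resp['status'] == 'complete':
            if resp.get('job-status-code') in (200, 201, 204):
                return resp
            raise RuntimeError(
                'job failed: {}'.format(resp.get('job-reason-code')))
        time.sleep(poll_interval)
    raise TimeoutError('Timed out while waiting for job completion')

# Simulated job that completes on the second poll:
polls = iter([{'status': 'running'},
              {'status': 'complete', 'job-status-code': 200}])
print(wait_for_job(lambda: next(polls), timeout=5, poll_interval=0))
```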
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog | |||
2 | index 7412a79..568d5d5 100644 | |||
3 | --- a/debian/changelog | |||
4 | +++ b/debian/changelog | |||
5 | @@ -1,3 +1,14 @@ | |||
6 | 1 | fence-agents (4.0.25-2ubuntu1.3) bionic; urgency=medium | ||
7 | 2 | |||
8 | 3 | * Backport upstream patch to add agent for IBM z LPARs (LP: #1889070). | ||
9 | 4 | - d/p/0013-Add-fence-agent-for-IBM-z-LPARs.patch | ||
10 | 5 | - d/copyright: add the IBM z LPARs agent. | ||
11 | 6 | - d/rules: skip fence_ibmz in dh_auto_test. The test metadata file was | ||
12 | 7 | removed from the upstream patch and it makes "make check" fail for | ||
13 | 8 | fence_ibmz. | ||
14 | 9 | |||
15 | 10 | -- Lucas Kanashiro <kanashiro@ubuntu.com> Wed, 10 Feb 2021 16:39:54 -0300 | ||
16 | 11 | |||
17 | 1 | fence-agents (4.0.25-2ubuntu1.2) bionic; urgency=medium | 12 | fence-agents (4.0.25-2ubuntu1.2) bionic; urgency=medium |
18 | 2 | 13 | ||
19 | 3 | * fence_aws backport from Focal (LP: #1894323): | 14 | * fence_aws backport from Focal (LP: #1894323): |
20 | diff --git a/debian/copyright b/debian/copyright | |||
21 | index a70b5d8..ba34532 100644 | |||
22 | --- a/debian/copyright | |||
23 | +++ b/debian/copyright | |||
24 | @@ -19,6 +19,17 @@ License: GPL-2+ | |||
25 | 19 | On Debian systems, the complete text of the GNU General | 19 | On Debian systems, the complete text of the GNU General |
26 | 20 | Public License version 2 can be found in "/usr/share/common-licenses/GPL-2". | 20 | Public License version 2 can be found in "/usr/share/common-licenses/GPL-2". |
27 | 21 | 21 | ||
28 | 22 | License: LGPL-2.1+ | ||
29 | 23 | This library is free software; you can redistribute it and/or | ||
30 | 24 | modify it under the terms of the GNU Lesser General Public | ||
31 | 25 | License as published by the Free Software Foundation; either | ||
32 | 26 | version 2.1 of the License, or (at your option) any later version. | ||
33 | 27 | . | ||
34 | 28 | This library is distributed in the hope that it will be useful, | ||
35 | 29 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
36 | 30 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | ||
37 | 31 | Lesser General Public License for more details. | ||
38 | 32 | |||
39 | 22 | Files: debian/* | 33 | Files: debian/* |
40 | 23 | Copyright: 2011 Andres Rodriguez <andreserl@ubuntu.com> | 34 | Copyright: 2011 Andres Rodriguez <andreserl@ubuntu.com> |
41 | 24 | License: GPL-2+ | 35 | License: GPL-2+ |
42 | @@ -53,16 +64,11 @@ License: GPL-2+ | |||
43 | 53 | Files: fence/agents/zvm/fence_zvm.* | 64 | Files: fence/agents/zvm/fence_zvm.* |
44 | 54 | Copyright: (C) 2012 Sine Nomine Associates | 65 | Copyright: (C) 2012 Sine Nomine Associates |
45 | 55 | License: LGPL-2.1+ | 66 | License: LGPL-2.1+ |
46 | 56 | This library is free software; you can redistribute it and/or | ||
47 | 57 | modify it under the terms of the GNU Lesser General Public | ||
48 | 58 | License as published by the Free Software Foundation; either | ||
49 | 59 | version 2.1 of the License, or (at your option) any later version. | ||
50 | 60 | . | ||
51 | 61 | This library is distributed in the hope that it will be useful, | ||
52 | 62 | but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
53 | 63 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | ||
54 | 64 | Lesser General Public License for more details. | ||
55 | 65 | 67 | ||
56 | 66 | Files: fence/man/fence_ifmib.8 | 68 | Files: fence/man/fence_ifmib.8 |
57 | 67 | Copyright: 2008 Ross Vandegrift | 69 | Copyright: 2008 Ross Vandegrift |
58 | 68 | License: GPL-2+ | 70 | License: GPL-2+ |
59 | 71 | |||
60 | 72 | Files: fence/agents/ibmz/fence_ibmz.py | ||
61 | 73 | Copyright: 2020 IBM Corp. | ||
62 | 74 | License: LGPL-2.1+ | ||
63 | diff --git a/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch b/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch | |||
64 | 69 | new file mode 100644 | 75 | new file mode 100644 |
65 | index 0000000..113fa1f | |||
66 | --- /dev/null | |||
67 | +++ b/debian/patches/0013-Add-fence-agent-for-IBM-z-LPARs.patch | |||
68 | @@ -0,0 +1,587 @@ | |||
69 | 1 | From: Paulo de Rezende Pinatti <ppinatti@linux.ibm.com> | ||
70 | 2 | Date: Thu, 16 Jul 2020 21:37:12 +0200 | ||
71 | 3 | Subject: Add fence agent for IBM z LPARs | ||
72 | 4 | |||
73 | 5 | This agent manages IBM z LPARs via the HMC Web Services | ||
74 | 6 | REST API. | ||
75 | 7 | |||
76 | 8 | Signed-off-by: Paulo de Rezende Pinatti <ppinatti@linux.ibm.com> | ||
77 | 9 | |||
78 | 10 | Origin: backport, https://github.com/ClusterLabs/fence-agents/commit/74d415bf | ||
79 | 11 | Bug-Ubuntu: https://bugs.launchpad.net/ubuntu-z-systems/+bug/1889070 | ||
80 | 12 | Reviewed-By: Lucas Kanashiro <kanashiro@ubuntu.com> | ||
81 | 13 | |||
82 | 14 | The addition of test/data/metadata/fence_ibmz.xml was dropped because it does | ||
83 | 15 | not work during build time, it tries to connect to non-localhost. The changes | ||
84 | 16 | made to the fence-agents.spec.in were also removed since here we do not care | ||
85 | 17 | about RPM packages. | ||
86 | 18 | |||
87 | 19 | --- | ||
88 | 20 | doc/COPYRIGHT | 4 + | ||
89 | 21 | fence/agents/ibmz/fence_ibmz.py | 542 +++++++++++++++++++++++++++++++++++++ | ||
90 | 22 | 2 files changed, 546 insertions(+) | ||
91 | 23 | create mode 100644 fence/agents/ibmz/fence_ibmz.py | ||
92 | 24 | |||
93 | 25 | diff --git a/doc/COPYRIGHT b/doc/COPYRIGHT | ||
94 | 26 | index 8124c53..f1e8353 100644 | ||
95 | 27 | --- a/doc/COPYRIGHT | ||
96 | 28 | +++ b/doc/COPYRIGHT | ||
97 | 29 | @@ -5,6 +5,10 @@ Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved. | ||
98 | 30 | |||
99 | 31 | Exceptions: | ||
100 | 32 | |||
101 | 33 | +fence/agents/ibmz/*: | ||
102 | 34 | + Copyright (c) 2020 IBM Corp. | ||
103 | 35 | + Contributed by Paulo de Rezende Pinatti <ppinatti at linux.ibm.com> | ||
104 | 36 | + | ||
105 | 37 | fence/agents/hds_cb/*: | ||
106 | 38 | Copyright (C) 2012 Matthew Clark. | ||
107 | 39 | Author: Matthew Clark <mattjclark0407 at hotmail.com> | ||
108 | 40 | diff --git a/fence/agents/ibmz/fence_ibmz.py b/fence/agents/ibmz/fence_ibmz.py | ||
109 | 41 | new file mode 100644 | ||
110 | 42 | index 0000000..d3ac550 | ||
111 | 43 | --- /dev/null | ||
112 | 44 | +++ b/fence/agents/ibmz/fence_ibmz.py | ||
113 | 45 | @@ -0,0 +1,542 @@ | ||
114 | 46 | +#!@PYTHON@ -tt | ||
115 | 47 | + | ||
116 | 48 | +# Copyright (c) 2020 IBM Corp. | ||
117 | 49 | +# | ||
118 | 50 | +# This library is free software; you can redistribute it and/or | ||
119 | 51 | +# modify it under the terms of the GNU Lesser General Public | ||
120 | 52 | +# License as published by the Free Software Foundation; either | ||
121 | 53 | +# version 2.1 of the License, or (at your option) any later version. | ||
122 | 54 | +# | ||
123 | 55 | +# This library is distributed in the hope that it will be useful, | ||
124 | 56 | +# but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
125 | 57 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU | ||
126 | 58 | +# Lesser General Public License for more details. | ||
127 | 59 | +# | ||
128 | 60 | +# You should have received a copy of the GNU Lesser General Public | ||
129 | 61 | +# License along with this library. If not, see | ||
130 | 62 | +# <http://www.gnu.org/licenses/>. | ||
131 | 63 | + | ||
132 | 64 | +import atexit | ||
133 | 65 | +import logging | ||
134 | 66 | +import time | ||
135 | 67 | +import sys | ||
136 | 68 | + | ||
137 | 69 | +import requests | ||
138 | 70 | +from requests.packages import urllib3 | ||
139 | 71 | + | ||
140 | 72 | +sys.path.append("@FENCEAGENTSLIBDIR@") | ||
141 | 73 | +from fencing import * | ||
142 | 74 | +from fencing import fail_usage, run_delay, EC_GENERIC_ERROR | ||
143 | 75 | + | ||
144 | 76 | +DEFAULT_POWER_TIMEOUT = '300' | ||
145 | 77 | +ERROR_NOT_FOUND = ("{obj_type} {obj_name} not found in this HMC. " | ||
146 | 78 | + "Attention: names are case-sensitive.") | ||
147 | 79 | + | ||
148 | 80 | +class ApiClientError(Exception): | ||
149 | 81 | + """ | ||
150 | 82 | + Base exception for all API Client related errors. | ||
151 | 83 | + """ | ||
152 | 84 | + | ||
153 | 85 | +class ApiClientRequestError(ApiClientError): | ||
154 | 86 | + """ | ||
155 | 87 | + Raised when an API request ends in error | ||
156 | 88 | + """ | ||
157 | 89 | + | ||
158 | 90 | + def __init__(self, req_method, req_uri, status, reason, message): | ||
159 | 91 | + self.req_method = req_method | ||
160 | 92 | + self.req_uri = req_uri | ||
161 | 93 | + self.status = status | ||
162 | 94 | + self.reason = reason | ||
163 | 95 | + self.message = message | ||
164 | 96 | + super(ApiClientRequestError, self).__init__() | ||
165 | 97 | + | ||
166 | 98 | + def __str__(self): | ||
167 | 99 | + return ( | ||
168 | 100 | + "API request failed, details:\n" | ||
169 | 101 | + "HTTP Request : {req_method} {req_uri}\n" | ||
170 | 102 | + "HTTP Response status: {status}\n" | ||
171 | 103 | + "Error reason: {reason}\n" | ||
172 | 104 | + "Error message: {message}\n".format( | ||
173 | 105 | + req_method=self.req_method, req_uri=self.req_uri, | ||
174 | 106 | + status=self.status, reason=self.reason, message=self.message) | ||
175 | 107 | + ) | ||
176 | 108 | + | ||
177 | 109 | +class APIClient(object): | ||
178 | 110 | + DEFAULT_CONFIG = { | ||
179 | 111 | + # how many connection-related errors to retry on | ||
180 | 112 | + 'connect_retries': 3, | ||
181 | 113 | + # how many times to retry on read errors (after request was sent to the | ||
182 | 114 | + # server) | ||
183 | 115 | + 'read_retries': 3, | ||
184 | 116 | + # http methods that should be retried | ||
185 | 117 | + 'method_whitelist': ['HEAD', 'GET', 'OPTIONS'], | ||
186 | 118 | + # limit of redirects to perform to avoid loops | ||
187 | 119 | + 'redirect': 5, | ||
188 | 120 | + # how long to wait while establishing a connection | ||
189 | 121 | + 'connect_timeout': 30, | ||
190 | 122 | + # how long to wait for asynchronous operations (jobs) to complete | ||
191 | 123 | + 'operation_timeout': 900, | ||
192 | 124 | + # how long to wait between bytes sent by the remote side | ||
193 | 125 | + 'read_timeout': 300, | ||
194 | 126 | + # default API port | ||
195 | 127 | + 'port': 6794, | ||
196 | 128 | + # validate ssl certificates | ||
197 | 129 | + 'ssl_verify': False | ||
198 | 130 | + } | ||
199 | 131 | + LABEL_BY_OP_MODE = { | ||
200 | 132 | + 'classic': { | ||
201 | 133 | + 'nodes': 'logical-partitions', | ||
202 | 134 | + 'state-on': 'operating', | ||
203 | 135 | + 'start': 'load', | ||
204 | 136 | + 'stop': 'deactivate' | ||
205 | 137 | + }, | ||
206 | 138 | + 'dpm': { | ||
207 | 139 | + 'nodes': 'partitions', | ||
208 | 140 | + 'state-on': 'active', | ||
209 | 141 | + 'start': 'start', | ||
210 | 142 | + 'stop': 'stop' | ||
211 | 143 | + } | ||
212 | 144 | + } | ||
213 | 145 | + def __init__(self, host, user, passwd, config=None): | ||
214 | 146 | + self.host = host | ||
215 | 147 | + if not passwd: | ||
216 | 148 | + raise ValueError('Password cannot be empty') | ||
217 | 149 | + self.passwd = passwd | ||
218 | 150 | + if not user: | ||
219 | 151 | + raise ValueError('Username cannot be empty') | ||
220 | 152 | + self.user = user | ||
221 | 153 | + self._cpc_cache = {} | ||
222 | 154 | + self._session = None | ||
223 | 155 | + self._config = self.DEFAULT_CONFIG.copy() | ||
224 | 156 | + # apply user defined values | ||
225 | 157 | + if config: | ||
226 | 158 | + self._config.update(config) | ||
227 | 159 | + | ||
228 | 160 | + def _create_session(self): | ||
229 | 161 | + """ | ||
230 | 162 | + Create a new requests session and apply config values | ||
231 | 163 | + """ | ||
232 | 164 | + session = requests.Session() | ||
233 | 165 | + retry_obj = urllib3.Retry( | ||
234 | 166 | + # setting a total is necessary to cover SSL related errors | ||
235 | 167 | + total=max(self._config['connect_retries'], | ||
236 | 168 | + self._config['read_retries']), | ||
237 | 169 | + connect=self._config['connect_retries'], | ||
238 | 170 | + read=self._config['read_retries'], | ||
239 | 171 | + method_whitelist=self._config['method_whitelist'], | ||
240 | 172 | + redirect=self._config['redirect'] | ||
241 | 173 | + ) | ||
242 | 174 | + session.mount('http://', requests.adapters.HTTPAdapter( | ||
243 | 175 | + max_retries=retry_obj)) | ||
244 | 176 | + session.mount('https://', requests.adapters.HTTPAdapter( | ||
245 | 177 | + max_retries=retry_obj)) | ||
246 | 178 | + return session | ||
247 | 179 | + | ||
248 | 180 | + def _get_mode_labels(self, cpc): | ||
249 | 181 | + """ | ||
250 | 182 | + Return the map of labels that corresponds to the cpc operation mode | ||
251 | 183 | + """ | ||
252 | 184 | + if self.is_dpm_enabled(cpc): | ||
253 | 185 | + return self.LABEL_BY_OP_MODE['dpm'] | ||
254 | 186 | + return self.LABEL_BY_OP_MODE['classic'] | ||
255 | 187 | + | ||
256 | 188 | + def _partition_switch_power(self, cpc, partition, action): | ||
257 | 189 | + """ | ||
258 | 190 | + Perform the API request to start (power on) or stop (power off) the | ||
259 | 191 | + target partition and wait for the job to finish. | ||
260 | 192 | + """ | ||
261 | 193 | + # retrieve partition's uri | ||
262 | 194 | + label_map = self._get_mode_labels(cpc) | ||
263 | 195 | + resp = self._request('get', '{}/{}?name={}'.format( | ||
264 | 196 | + self._cpc_cache[cpc]['object-uri'], label_map['nodes'], partition)) | ||
265 | 197 | + | ||
266 | 198 | + if not resp[label_map['nodes']]: | ||
267 | 199 | + raise ValueError(ERROR_NOT_FOUND.format( | ||
268 | 200 | + obj_type='LPAR/Partition', obj_name=partition)) | ||
269 | 201 | + | ||
270 | 202 | + part_uri = resp[label_map['nodes']][0]['object-uri'] | ||
271 | 203 | + | ||
272 | 204 | + # in dpm mode the request must have empty body | ||
273 | 205 | + if self.is_dpm_enabled(cpc): | ||
274 | 206 | + body = None | ||
275 | 207 | + # in classic mode we make sure the operation is executed | ||
276 | 208 | + # even if the partition is already on | ||
277 | 209 | + else: | ||
278 | 210 | + body = {'force': True} | ||
279 | 211 | + # when powering on the partition must be activated first | ||
280 | 212 | + if action == 'start': | ||
281 | 213 | + op_uri = '{}/operations/activate'.format(part_uri) | ||
282 | 214 | + job_resp = self._request( | ||
283 | 215 | + 'post', op_uri, body=body, valid_codes=[202]) | ||
284 | 216 | + # always wait for activate otherwise the load (start) | ||
285 | 217 | + # operation will fail | ||
286 | 218 | + if self._config['operation_timeout'] == 0: | ||
287 | 219 | + timeout = self.DEFAULT_CONFIG['operation_timeout'] | ||
288 | 220 | + else: | ||
289 | 221 | + timeout = self._config['operation_timeout'] | ||
290 | 222 | + logging.debug( | ||
291 | 223 | + 'waiting for activate (timeout %s secs)', timeout) | ||
292 | 224 | + self._wait_for_job('post', op_uri, job_resp['job-uri'], | ||
293 | 225 | + timeout=timeout) | ||
294 | 226 | + | ||
295 | 227 | + # trigger the start job | ||
296 | 228 | + op_uri = '{}/operations/{}'.format(part_uri, label_map[action]) | ||
297 | 229 | + job_resp = self._request('post', op_uri, body=body, valid_codes=[202]) | ||
298 | 230 | + if self._config['operation_timeout'] == 0: | ||
299 | 231 | + return | ||
300 | 232 | + logging.debug('waiting for %s (timeout %s secs)', | ||
301 | 233 | + label_map[action], self._config['operation_timeout']) | ||
302 | 234 | + self._wait_for_job('post', op_uri, job_resp['job-uri'], | ||
303 | 235 | + timeout=self._config['operation_timeout']) | ||
304 | 236 | + | ||
305 | 237 | + def _request(self, method, uri, body=None, headers=None, valid_codes=None): | ||
306 | 238 | + """ | ||
307 | 239 | + Perform a request to the HMC API | ||
308 | 240 | + """ | ||
309 | 241 | + assert method in ('delete', 'head', 'get', 'post', 'put') | ||
310 | 242 | + | ||
311 | 243 | + url = 'https://{host}:{port}{uri}'.format( | ||
312 | 244 | + host=self.host, port=self._config['port'], uri=uri) | ||
313 | 245 | + if not headers: | ||
314 | 246 | + headers = {} | ||
315 | 247 | + | ||
316 | 248 | + if self._session is None: | ||
317 | 249 | + raise ValueError('You need to log on first') | ||
318 | 250 | + method = getattr(self._session, method) | ||
319 | 251 | + timeout = ( | ||
320 | 252 | + self._config['connect_timeout'], self._config['read_timeout']) | ||
321 | 253 | + response = method(url, json=body, headers=headers, | ||
322 | 254 | + verify=self._config['ssl_verify'], timeout=timeout) | ||
323 | 255 | + | ||
324 | 256 | + if valid_codes and response.status_code not in valid_codes: | ||
325 | 257 | + reason = '(no reason)' | ||
326 | 258 | + message = '(no message)' | ||
327 | 259 | + if response.headers.get('content-type') == 'application/json': | ||
328 | 260 | + try: | ||
329 | 261 | + json_resp = response.json() | ||
330 | 262 | + except ValueError: | ||
331 | 263 | + pass | ||
332 | 264 | + else: | ||
333 | 265 | + reason = json_resp.get('reason', reason) | ||
334 | 266 | + message = json_resp.get('message', message) | ||
335 | 267 | + else: | ||
336 | 268 | + message = '{}...'.format(response.text[:500]) | ||
337 | 269 | + raise ApiClientRequestError( | ||
338 | 270 | + response.request.method, response.request.url, | ||
339 | 271 | + response.status_code, reason, message) | ||
340 | 272 | + | ||
341 | 273 | + if response.status_code == 204: | ||
342 | 274 | + return dict() | ||
343 | 275 | + try: | ||
344 | 276 | + json_resp = response.json() | ||
345 | 277 | + except ValueError: | ||
346 | 278 | + raise ApiClientRequestError( | ||
347 | 279 | + response.request.method, response.request.url, | ||
348 | 280 | + response.status_code, '(no reason)', | ||
349 | 281 | + 'Invalid JSON content in response') | ||
350 | 282 | + | ||
351 | 283 | + return json_resp | ||
352 | 284 | + | ||
353 | 285 | + def _update_cpc_cache(self, cpc_props): | ||
354 | 286 | + self._cpc_cache[cpc_props['name']] = { | ||
355 | 287 | + 'object-uri': cpc_props['object-uri'], | ||
356 | 288 | + 'dpm-enabled': cpc_props.get('dpm-enabled', False) | ||
357 | 289 | + } | ||
358 | 290 | + | ||
359 | 291 | + def _wait_for_job(self, req_method, req_uri, job_uri, timeout): | ||
360 | 292 | + """ | ||
361 | 293 | + Perform API requests to check for job status until it has completed | ||
362 | 294 | + or the specified timeout is reached | ||
363 | 295 | + """ | ||
364 | 296 | + op_timeout = time.time() + timeout | ||
365 | 297 | + while time.time() < op_timeout: | ||
366 | 298 | + job_resp = self._request("get", job_uri) | ||
367 | 299 | + if job_resp['status'] == 'complete': | ||
368 | 300 | + if job_resp['job-status-code'] in (200, 201, 204): | ||
369 | 301 | + return | ||
370 | 302 | + raise ApiClientRequestError( | ||
371 | 303 | + req_method, req_uri, | ||
372 | 304 | + job_resp.get('job-status-code', '(no status)'), | ||
373 | 305 | + job_resp.get('job-reason-code', '(no reason)'), | ||
374 | 306 | + job_resp.get('job-results', {}).get( | ||
375 | 307 | + 'message', '(no message)') | ||
376 | 308 | + ) | ||
377 | 309 | + time.sleep(1) | ||
378 | 310 | + raise ApiClientError('Timed out while waiting for job completion') | ||
379 | 311 | + | ||
380 | 312 | + def cpc_list(self): | ||
381 | 313 | + """ | ||
382 | 314 | + Return a list of CPCs in the format {'name': 'cpc-name', 'status': | ||
383 | 315 | + 'operating'} | ||
384 | 316 | + """ | ||
385 | 317 | + list_resp = self._request("get", "/api/cpcs", valid_codes=[200]) | ||
386 | 318 | + ret = [] | ||
387 | 319 | + for cpc_props in list_resp['cpcs']: | ||
388 | 320 | + self._update_cpc_cache(cpc_props) | ||
389 | 321 | + ret.append({ | ||
390 | 322 | + 'name': cpc_props['name'], 'status': cpc_props['status']}) | ||
391 | 323 | + return ret | ||
392 | 324 | + | ||
393 | 325 | + def is_dpm_enabled(self, cpc): | ||
394 | 326 | + """ | ||
395 | 327 | + Return True if CPC is in DPM mode, False for classic mode | ||
396 | 328 | + """ | ||
397 | 329 | + if cpc in self._cpc_cache: | ||
398 | 330 | + return self._cpc_cache[cpc]['dpm-enabled'] | ||
399 | 331 | + list_resp = self._request("get", "/api/cpcs?name={}".format(cpc), | ||
400 | 332 | + valid_codes=[200]) | ||
401 | 333 | + if not list_resp['cpcs']: | ||
402 | 334 | + raise ValueError(ERROR_NOT_FOUND.format( | ||
403 | 335 | + obj_type='CPC', obj_name=cpc)) | ||
404 | 336 | + self._update_cpc_cache(list_resp['cpcs'][0]) | ||
405 | 337 | + return self._cpc_cache[cpc]['dpm-enabled'] | ||
406 | 338 | + | ||
407 | 339 | + def logon(self): | ||
408 | 340 | + """ | ||
409 | 341 | + Open a session with the HMC API and store its ID | ||
410 | 342 | + """ | ||
411 | 343 | + self._session = self._create_session() | ||
412 | 344 | + logon_body = {"userid": self.user, "password": self.passwd} | ||
413 | 345 | + logon_resp = self._request("post", "/api/sessions", body=logon_body, | ||
414 | 346 | + valid_codes=[200, 201]) | ||
415 | 347 | + self._session.headers["X-API-Session"] = logon_resp['api-session'] | ||
416 | 348 | + | ||
417 | 349 | + def logoff(self): | ||
418 | 350 | + """ | ||
419 | 351 | + Close/delete the HMC API session | ||
420 | 352 | + """ | ||
421 | 353 | + if self._session is None: | ||
422 | 354 | + return | ||
423 | 355 | + self._request("delete", "/api/sessions/this-session", | ||
424 | 356 | + valid_codes=[204]) | ||
425 | 357 | + self._cpc_cache = {} | ||
426 | 358 | + self._session = None | ||
427 | 359 | + | ||
428 | 360 | + def partition_list(self, cpc): | ||
429 | 361 | + """ | ||
430 | 362 | + Return a list of partitions in the format {'name': 'part-name', | ||
431 | 363 | + 'status': 'on'} | ||
432 | 364 | + """ | ||
433 | 365 | + label_map = self._get_mode_labels(cpc) | ||
434 | 366 | + list_resp = self._request('get', '{}/{}'.format( | ||
435 | 367 | + self._cpc_cache[cpc]['object-uri'], label_map['nodes'])) | ||
436 | 368 | + status_map = {label_map['state-on']: 'on'} | ||
437 | 369 | + return [{'name': part['name'], | ||
438 | 370 | + 'status': status_map.get(part['status'].lower(), 'off')} | ||
439 | 371 | + for part in list_resp[label_map['nodes']]] | ||
440 | 372 | + | ||
441 | 373 | + def partition_start(self, cpc, partition): | ||
442 | 374 | + """ | ||
443 | 375 | + Power on a partition | ||
444 | 376 | + """ | ||
445 | 377 | + self._partition_switch_power(cpc, partition, 'start') | ||
446 | 378 | + | ||
447 | 379 | + def partition_status(self, cpc, partition): | ||
448 | 380 | + """ | ||
449 | 381 | + Return the current status of a partition (on or off) | ||
450 | 382 | + """ | ||
451 | 383 | + label_map = self._get_mode_labels(cpc) | ||
452 | 384 | + | ||
453 | 385 | + resp = self._request('get', '{}/{}?name={}'.format( | ||
454 | 386 | + self._cpc_cache[cpc]['object-uri'], label_map['nodes'], partition)) | ||
455 | 387 | + if not resp[label_map['nodes']]: | ||
456 | 388 | + raise ValueError(ERROR_NOT_FOUND.format( | ||
457 | 389 | + obj_type='LPAR/Partition', obj_name=partition)) | ||
458 | 390 | + part_props = resp[label_map['nodes']][0] | ||
459 | 391 | + if part_props['status'].lower() == label_map['state-on']: | ||
460 | 392 | + return 'on' | ||
461 | 393 | + return 'off' | ||
462 | 394 | + | ||
463 | 395 | + def partition_stop(self, cpc, partition): | ||
464 | 396 | + """ | ||
465 | 397 | + Power off a partition | ||
466 | 398 | + """ | ||
467 | 399 | + self._partition_switch_power(cpc, partition, 'stop') | ||
468 | 400 | + | ||
469 | 401 | +def parse_plug(options): | ||
470 | 402 | + """ | ||
471 | 403 | + Extract cpc and partition from specified plug value | ||
472 | 404 | + """ | ||
473 | 405 | + try: | ||
474 | 406 | + cpc, partition = options['--plug'].strip().split('/', 1) | ||
475 | 407 | + except ValueError: | ||
476 | 408 | + fail_usage('Please specify nodename in format cpc/partition') | ||
477 | 409 | + cpc = cpc.strip() | ||
478 | 410 | + if not cpc or not partition: | ||
479 | 411 | + fail_usage('Please specify nodename in format cpc/partition') | ||
480 | 412 | + return cpc, partition | ||
481 | 413 | + | ||
482 | 414 | +def get_power_status(conn, options): | ||
483 | 415 | + logging.debug('executing get_power_status') | ||
484 | 416 | + status = conn.partition_status(*parse_plug(options)) | ||
485 | 417 | + return status | ||
486 | 418 | + | ||
487 | 419 | +def set_power_status(conn, options): | ||
488 | 420 | + logging.debug('executing set_power_status') | ||
489 | 421 | + if options['--action'] == 'on': | ||
490 | 422 | + conn.partition_start(*parse_plug(options)) | ||
491 | 423 | + elif options['--action'] == 'off': | ||
492 | 424 | + conn.partition_stop(*parse_plug(options)) | ||
493 | 425 | + else: | ||
494 | 426 | + fail_usage('Invalid action {}'.format(options['--action'])) | ||
495 | 427 | + | ||
496 | 428 | +def get_outlet_list(conn, options): | ||
497 | 429 | + logging.debug('executing get_outlet_list') | ||
498 | 430 | + result = {} | ||
499 | 431 | + for cpc in conn.cpc_list(): | ||
500 | 432 | + for part in conn.partition_list(cpc['name']): | ||
501 | 433 | + result['{}/{}'.format(cpc['name'], part['name'])] = ( | ||
502 | 434 | + part['name'], part['status']) | ||
503 | 435 | + return result | ||
504 | 436 | + | ||
505 | 437 | +def disconnect(conn): | ||
506 | 438 | + """ | ||
507 | 439 | + Close the API session | ||
508 | 440 | + """ | ||
509 | 441 | + try: | ||
510 | 442 | + conn.logoff() | ||
511 | 443 | + except Exception as exc: | ||
512 | 444 | + logging.exception('Logoff failed: ') | ||
513 | 445 | + sys.exit(str(exc)) | ||
514 | 446 | + | ||
515 | 447 | +def set_opts(): | ||
516 | 448 | + """ | ||
517 | 449 | + Define the options supported by this agent | ||
518 | 450 | + """ | ||
519 | 451 | + device_opt = [ | ||
520 | 452 | + "ipaddr", | ||
521 | 453 | + "ipport", | ||
522 | 454 | + "login", | ||
523 | 455 | + "passwd", | ||
524 | 456 | + "port", | ||
525 | 457 | + "connect_retries", | ||
526 | 458 | + "connect_timeout", | ||
527 | 459 | + "operation_timeout", | ||
528 | 460 | + "read_retries", | ||
529 | 461 | + "read_timeout", | ||
530 | 462 | + "ssl_secure", | ||
531 | 463 | + ] | ||
532 | 464 | + | ||
533 | 465 | + all_opt["ipport"]["default"] = APIClient.DEFAULT_CONFIG['port'] | ||
534 | 466 | + all_opt["power_timeout"]["default"] = DEFAULT_POWER_TIMEOUT | ||
535 | 467 | + port_desc = ("Physical plug id in the format cpc-name/partition-name " | ||
536 | 468 | + "(case-sensitive)") | ||
537 | 469 | + all_opt["port"]["shortdesc"] = port_desc | ||
538 | 470 | + all_opt["port"]["help"] = ( | ||
539 | 471 | + "-n, --plug=[id] {}".format(port_desc)) | ||
540 | 472 | + all_opt["connect_retries"] = { | ||
541 | 473 | + "getopt" : ":", | ||
542 | 474 | + "longopt" : "connect-retries", | ||
543 | 475 | + "help" : "--connect-retries=[number] How many times to " | ||
544 | 476 | + "retry on connection errors", | ||
545 | 477 | + "default" : APIClient.DEFAULT_CONFIG['connect_retries'], | ||
546 | 478 | + "type" : "integer", | ||
547 | 479 | + "required" : "0", | ||
548 | 480 | + "shortdesc" : "How many times to retry on connection errors", | ||
549 | 481 | + "order" : 2 | ||
550 | 482 | + } | ||
551 | 483 | + all_opt["read_retries"] = { | ||
+        "getopt" : ":",
+        "longopt" : "read-retries",
+        "help" : "--read-retries=[number] How many times to "
+                 "retry on errors related to reading from server",
+        "default" : APIClient.DEFAULT_CONFIG['read_retries'],
+        "type" : "integer",
+        "required" : "0",
+        "shortdesc" : "How many times to retry on read errors",
+        "order" : 2
+    }
+    all_opt["connect_timeout"] = {
+        "getopt" : ":",
+        "longopt" : "connect-timeout",
+        "help" : "--connect-timeout=[seconds] How long to wait to "
+                 "establish a connection",
+        "default" : APIClient.DEFAULT_CONFIG['connect_timeout'],
+        "type" : "second",
+        "required" : "0",
+        "shortdesc" : "How long to wait to establish a connection",
+        "order" : 2
+    }
+    all_opt["operation_timeout"] = {
+        "getopt" : ":",
+        "longopt" : "operation-timeout",
+        "help" : "--operation-timeout=[seconds] How long to wait for "
+                 "power operation to complete (0 = do not wait)",
+        "default" : APIClient.DEFAULT_CONFIG['operation_timeout'],
+        "type" : "second",
+        "required" : "0",
+        "shortdesc" : "How long to wait for power operation to complete",
+        "order" : 2
+    }
+    all_opt["read_timeout"] = {
+        "getopt" : ":",
+        "longopt" : "read-timeout",
+        "help" : "--read-timeout=[seconds] How long to wait "
+                 "to read data from server",
+        "default" : APIClient.DEFAULT_CONFIG['read_timeout'],
+        "type" : "second",
+        "required" : "0",
+        "shortdesc" : "How long to wait for server data",
+        "order" : 2
+    }
+    return device_opt
+
+def main():
+    """
+    Agent entry point
+    """
+    # register exit handler used by pacemaker
+    atexit.register(atexit_handler)
+
+    # prepare accepted options
+    device_opt = set_opts()
+
+    # parse options provided on input
+    options = check_input(device_opt, process_input(device_opt))
+
+    docs = {
+        "shortdesc": "Fence agent for IBM z LPARs",
+        "longdesc": (
+            "fence_ibmz is a power fencing agent which uses the HMC Web "
+            "Services API to fence IBM z LPARs."),
+        "vendorurl": "http://www.ibm.com"
+    }
+    show_docs(options, docs)
+
+    run_delay(options)
+
+    # set underlying library's logging and ssl config according to specified
+    # options
+    requests_log = logging.getLogger("urllib3")
+    requests_log.propagate = True
+    if "--verbose" in options:
+        requests_log.setLevel(logging.DEBUG)
+    if "--ssl-secure" not in options:
+        urllib3.disable_warnings(
+            category=urllib3.exceptions.InsecureRequestWarning)
+
+    hmc_address = options["--ip"]
+    hmc_userid = options["--username"]
+    hmc_password = options["--password"]
+    config = {
+        'connect_retries': int(options['--connect-retries']),
+        'read_retries': int(options['--read-retries']),
+        'operation_timeout': int(options['--operation-timeout']),
+        'connect_timeout': int(options['--connect-timeout']),
+        'read_timeout': int(options['--read-timeout']),
+        'port': int(options['--ipport']),
+        'ssl_verify': bool('--ssl-secure' in options),
+    }
+    try:
+        conn = APIClient(hmc_address, hmc_userid, hmc_password, config)
+        conn.logon()
+        atexit.register(disconnect, conn)
+        result = fence_action(conn, options, set_power_status,
+                              get_power_status, get_outlet_list)
+    except Exception:
+        logging.exception('Exception occurred: ')
+        result = EC_GENERIC_ERROR
+    sys.exit(result)
+
+if __name__ == "__main__":
+    main()
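The config dict built in main() above coerces the fencing library's string-valued options into the types APIClient expects. A minimal standalone sketch of that mapping (the helper name `options_to_config` and the example values are hypothetical, not part of the patch):

```python
def options_to_config(options):
    """Coerce string-valued fencing options into a typed config dict.

    Mirrors the mapping done in the agent's main(): retries and
    timeouts become ints, and SSL verification is enabled only when
    --ssl-secure was passed (flag options appear as dict keys).
    """
    return {
        'connect_retries': int(options['--connect-retries']),
        'read_retries': int(options['--read-retries']),
        'operation_timeout': int(options['--operation-timeout']),
        'connect_timeout': int(options['--connect-timeout']),
        'read_timeout': int(options['--read-timeout']),
        'port': int(options['--ipport']),
        'ssl_verify': bool('--ssl-secure' in options),
    }

# Example values only; real defaults come from APIClient.DEFAULT_CONFIG.
example = {
    '--connect-retries': '1', '--read-retries': '2',
    '--operation-timeout': '900', '--connect-timeout': '30',
    '--read-timeout': '120', '--ipport': '6794',
}
print(options_to_config(example)['ssl_verify'])  # prints "False"
```

Note that because `--ssl-secure` is absent from the example options, certificate verification is off, which is why the agent also silences urllib3's InsecureRequestWarning in that case.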
diff --git a/debian/patches/series b/debian/patches/series
index df80fa4..075cb3b 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -10,3 +10,4 @@ lp1865523-05-fence_scsi-fix-python3-encoding.patch
 lp1865523-06-fence_scsi-fixes-around-node-id.patch
 lp1865523-07-fence-metadata-update.xml
 lp1894323-01-fence_aws-new-agent.patch
+0013-Add-fence-agent-for-IBM-z-LPARs.patch
diff --git a/debian/rules b/debian/rules
index 551e252..3d92f95 100755
--- a/debian/rules
+++ b/debian/rules
@@ -47,7 +47,7 @@ override_dh_missing:
 
 override_dh_auto_test:
 	# disable testing for ovh as it tries to access SOAP service on www.ovh.com
-	dh_auto_test -- TEST_TARGET_SKIP=ovh/fence_ovh
+	dh_auto_test -- TEST_TARGET_SKIP="ovh/fence_ovh ibmz/fence_ibmz"
 
 override_dh_python2:
 	dh_python2
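The d/rules change extends TEST_TARGET_SKIP to a quoted, space-separated list. As an illustration only (the upstream Makefile's actual skip logic may differ), such a list can be matched against test targets like this:

```shell
# Hypothetical sketch: filter test targets against a space-separated skip list,
# as a Makefile consuming TEST_TARGET_SKIP might do.
TARGETS="ovh/fence_ovh ibmz/fence_ibmz dummy/fence_dummy"
TEST_TARGET_SKIP="ovh/fence_ovh ibmz/fence_ibmz"
for t in $TARGETS; do
    # pad both sides with spaces so we only match whole target names
    case " $TEST_TARGET_SKIP " in
        *" $t "*) echo "skip $t" ;;
        *)        echo "run $t" ;;
    esac
done
```

With the skip list above, only fence_dummy's test would run, which matches the autopkgtest output in the description ("fence-dummy PASS").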
If we disable the test (which is ok), why do we backport and then carry tests/data/metadata/fence_ibmz.xml?
Maybe not adding this test xml would even spare you from having to skip it?
If we update doc/COPYRIGHT, don't we also need to update d/copyright?
I'm not great at those licensing things, but it comes to mind seeing you touch one but not the other.
Maybe add notes for the backport in the patch header:
- dropped .spec because ...
- dropped test because ...
The rest looks like a good SRU; while it adds new features, that should be ok as it is HW exploitation. And in addition it just adds one other agent without touching (= regressing) the others.