Merge ~ogayot/curtin:nvme-o-tcp-storageconfig into curtin:master

Proposed by Olivier Gayot
Status: Merged
Merged at revision: 237053d9d18916dd72cf861280474d4df0e9fd24
Proposed branch: ~ogayot/curtin:nvme-o-tcp-storageconfig
Merge into: curtin:master
Diff against target: 652 lines (+387/-5)
11 files modified
curtin/block/deps.py (+3/-0)
curtin/block/schemas.py (+18/-0)
curtin/commands/block_meta.py (+6/-0)
curtin/commands/curthooks.py (+61/-1)
curtin/storage_config.py (+49/-4)
doc/topics/storage.rst (+42/-0)
tests/data/probert_storage_bogus_wwn.json (+43/-0)
tests/data/probert_storage_nvme_multipath.json (+43/-0)
tests/data/probert_storage_nvme_uuid.json (+43/-0)
tests/unittests/test_curthooks.py (+77/-0)
tests/unittests/test_storage_config.py (+2/-0)
Reviewer Review Type Date Requested Status
Server Team CI bot continuous-integration Approve
Dan Bungert Approve
Michael Hudson-Doyle Approve
Review via email: mp+458446@code.launchpad.net

Commit message

block: initial support for NVMe over TCP

Description of the change

block: initial support for NVMe over TCP

This MP adds partial support for NVMe over TCP.

In the storage configuration, NVMe drives can now have an "nvme_controller" property, holding the identifier of an existing nvme_controller object (a new section type), e.g.:

```
- type: disk
  id: disk-nvme0n1
  path: /dev/nvme0n1
  nvme_controller: nvme-controller-nvme0

- type: disk
  id: disk-nvme1n1
  path: /dev/nvme1n1
  nvme_controller: nvme-controller-nvme1

- type: nvme_controller
  id: nvme-controller-nvme0
  transport: pcie

- type: nvme_controller
  id: nvme-controller-nvme1
  transport: tcp
  tcp_port: 4420
  tcp_addr: 1.2.3.4
```
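When curtin generates a storage configuration from probert data, the controller reference on each disk is derived from the namespace device path. A minimal sketch of that mapping, mirroring the regex added to `BlockdevParser` in this MP:

```python
import re
from typing import Optional


def controller_id_for(devname: str) -> Optional[str]:
    """Map an NVMe namespace block device (e.g. /dev/nvme0n1) to the id
    of its nvme_controller action, as curtin's BlockdevParser does."""
    match = re.fullmatch(r'/dev/(?P<ctrler>nvme\d+)n\d', devname)
    if match is None:
        # Not an NVMe namespace device: no controller reference is added.
        return None
    return f'nvme-controller-{match["ctrler"]}'


print(controller_id_for('/dev/nvme0n1'))  # nvme-controller-nvme0
print(controller_id_for('/dev/sda'))      # None
```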

If an nvme_controller section with transport=tcp is present in the storage config, curtin will install nvme-stas (and nvme-cli) and configure the service so that the remote drives can be discovered and made available when the target system boots.
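Concretely, for the TCP controller in the example above, curtin writes an nvme-stas configuration to /etc/stas/stafd-curtin.conf in the target (and symlinks stafd.conf to it, backing up any existing file), with content along these lines:

```
[Controllers]
controller = transport=tcp;traddr=1.2.3.4;trsvcid=4420
```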

Current limitations:
* For the target system to boot correctly, only non-critical partitions (e.g., /home) may currently be placed on remote storage. The plan for the next iteration is to support placing the rootfs (i.e., /) on remote NVMe drives, while keeping the /boot and /boot/efi partitions on local storage.
* If an nvme_controller section is present in the storage configuration, curtin will install nvme-stas and nvme-cli on the target system, even if the section denotes the use of PCIe (i.e., local storage).
* Curtin itself will not automatically append the _netdev option when a given mount uses remote storage, so the storage configuration is expected to specify the option explicitly, e.g.:

```
- type: mount
  path: /home
  device: ...
  options: defaults,_netdev
  id: mount-2
```
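The directive generation added to curthooks in this MP can be sketched as follows (simplified from the actual `get_nvme_stas_controller_directives` helper; error handling for malformed sections is omitted):

```python
from typing import Set


def stas_controller_directives(cfg: dict) -> Set[str]:
    """Return the "controller = ..." directives for the [Controllers]
    section of an nvme-stas configuration file, one per NVMe controller
    using the TCP transport in the storage configuration."""
    directives = set()
    storage = cfg.get('storage')
    if not isinstance(storage, dict):
        return directives
    config = storage.get('config')
    if not config or config == 'disabled':
        return directives
    for item in config:
        # Only TCP controllers need an nvme-stas directive; PCIe is local.
        if item['type'] != 'nvme_controller' or item['transport'] != 'tcp':
            continue
        props = {
            'transport': 'tcp',
            'traddr': item['tcp_addr'],
            'trsvcid': item['tcp_port'],
        }
        directives.add(
            'controller = ' + ';'.join(f'{k}={v}' for k, v in props.items()))
    return directives


cfg = {'storage': {'config': [
    {'type': 'nvme_controller', 'transport': 'tcp',
     'tcp_addr': '1.2.3.4', 'tcp_port': 4420},
    {'type': 'nvme_controller', 'transport': 'pcie'},
]}}
print(stas_controller_directives(cfg))
# → {'controller = transport=tcp;traddr=1.2.3.4;trsvcid=4420'}
```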

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:c0c824da7af9280c26e2396f76829ab36753f21a

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/215/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/215/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/215//rebuild

review: Needs Fixing (continuous-integration)
Dan Bungert (dbungert) wrote :

As we are adjusting the config format, we want to update doc/topics/storage.rst as well to document the new storage command.

As it's early days for nvme-stas support, I suggest you mark that documentation as experimental, similar to how the ZFS features are marked.

Please either include that doc change in this MP, or land it first, so that we don't forget to do the doc update.

More of a review later.

review: Needs Fixing
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:a2a0e7be18c6b7e46d63d4aaf61c230874429556

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/217/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/217/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/217//rebuild

review: Needs Fixing (continuous-integration)
Michael Hudson-Doyle (mwhudson) wrote :

This looks fine, thanks! All the Jenkins results seem to have been deleted, so I'm not sure what CI is unhappy about.

review: Approve
Olivier Gayot (ogayot) wrote :

Pushed a change for the typo. Suggestions welcome about the {"type": "nvme_controller", "transport": "pcie"} problem :)

Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:fe96b7b5828c22df8623355e7b7cb7348b5fbf31

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/219/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/219/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/219//rebuild

review: Needs Fixing (continuous-integration)
Olivier Gayot (ogayot) wrote :

Fixed the py3-flake8 issues as well.

Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:d939953958ad4e9728fc4ed5c4b69744605e568f

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/220/
Executed test runs:
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/220/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/220//rebuild

review: Needs Fixing (continuous-integration)
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:c9cb81fa99cafe9f2fb9a6239581b113ea9ee41a

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/221/
Executed test runs:
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/221/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/221//rebuild

review: Needs Fixing (continuous-integration)
Michael Hudson-Doyle (mwhudson) wrote :

Just need to set a commit message I think.

Dan Bungert (dbungert) wrote :

LGTM. Jenkins test retriggers aren't working for me at the moment, you may want to rebase anyway and force-push so it can pick up the results.

review: Approve
Olivier Gayot (ogayot) wrote :

> LGTM. Jenkins test retriggers aren't working for me at the moment, you may
> want to rebase anyway and force-push so it can pick up the results.

Thanks! They don't work for me either...

I'll rebase then. Originally, I didn't set a commit message because I didn't want the commits to be squashed by the merge process. But red is bad I guess.

Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)

Preview Diff

1diff --git a/curtin/block/deps.py b/curtin/block/deps.py
2index 8a310b6..e5370b6 100644
3--- a/curtin/block/deps.py
4+++ b/curtin/block/deps.py
5@@ -69,6 +69,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
6 'lvm_partition': ['lvm2'],
7 'lvm_volgroup': ['lvm2'],
8 'ntfs': ['ntfs-3g'],
9+ 'nvme_controller': ['nvme-cli', 'nvme-stas'],
10 'raid': ['mdadm'],
11 'reiserfs': ['reiserfsprogs'],
12 'xfs': ['xfsprogs'],
13@@ -89,6 +90,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
14 'lvm_partition': ['lvm2'],
15 'lvm_volgroup': ['lvm2'],
16 'ntfs': [],
17+ 'nvme_controller': [],
18 'raid': ['mdadm'],
19 'reiserfs': [],
20 'xfs': ['xfsprogs'],
21@@ -109,6 +111,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
22 'lvm_partition': ['lvm2'],
23 'lvm_volgroup': ['lvm2'],
24 'ntfs': [],
25+ 'nvme_controller': [],
26 'raid': ['mdadm'],
27 'reiserfs': [],
28 'xfs': ['xfsprogs'],
29diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py
30index 6a5c5b4..503e870 100644
31--- a/curtin/block/schemas.py
32+++ b/curtin/block/schemas.py
33@@ -144,6 +144,7 @@ DISK = {
34 'minimum': 0,
35 'maximum': 1
36 },
37+ 'nvme_controller': {'$ref': '#/definitions/ref_id'},
38 },
39 }
40 DM_CRYPT = {
41@@ -275,6 +276,23 @@ MOUNT = {
42 'pattern': r'[0-9]'},
43 },
44 }
45+NVME = {
46+ '$schema': 'http://json-schema.org/draft-07/schema#',
47+ 'name': 'CURTIN-NVME',
48+ 'title': 'curtin storage configuration for NVMe controllers',
49+ 'description': ('Declarative syntax for specifying NVMe controllers.'),
50+ 'definitions': definitions,
51+ 'required': ['id', 'type', 'transport'],
52+ 'type': 'object',
53+ 'additionalProperties': False,
54+ 'properties': {
55+ 'id': {'$ref': '#/definitions/id'},
56+ 'type': {'const': 'nvme_controller'},
57+ 'transport': {'type': 'string'},
58+ 'tcp_port': {'type': 'integer'},
59+ 'tcp_addr': {'type': 'string'},
60+ },
61+}
62 PARTITION = {
63 '$schema': 'http://json-schema.org/draft-07/schema#',
64 'name': 'CURTIN-PARTITION',
65diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
66index 8ba7a55..9fde9c6 100644
67--- a/curtin/commands/block_meta.py
68+++ b/curtin/commands/block_meta.py
69@@ -2026,6 +2026,11 @@ def zpool_handler(info, storage_config, context):
70 zfs_properties=fs_properties)
71
72
73+def nvme_controller_handler(info, storage_config, context):
74+ '''Handle the NVMe Controller storage section. This is currently a no-op,
75+ the section is handled in curthooks.'''
76+
77+
78 def zfs_handler(info, storage_config, context):
79 """
80 Create a zfs filesystem
81@@ -2215,6 +2220,7 @@ def meta_custom(args):
82 'bcache': bcache_handler,
83 'zfs': zfs_handler,
84 'zpool': zpool_handler,
85+ 'nvme_controller': nvme_controller_handler,
86 }
87
88 if args.testmode:
89diff --git a/curtin/commands/curthooks.py b/curtin/commands/curthooks.py
90index 4be2cb4..d84e9ec 100644
91--- a/curtin/commands/curthooks.py
92+++ b/curtin/commands/curthooks.py
93@@ -1,14 +1,16 @@
94 # This file is part of curtin. See LICENSE file for copyright and license info.
95
96 import copy
97+import contextlib
98 import glob
99 import os
100+import pathlib
101 import platform
102 import re
103 import sys
104 import shutil
105 import textwrap
106-from typing import List, Tuple
107+from typing import List, Set, Tuple
108
109 from curtin import config
110 from curtin import block
111@@ -1498,6 +1500,58 @@ def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian):
112 data=None, target=target)
113
114
115+def get_nvme_stas_controller_directives(cfg) -> Set[str]:
116+ """Parse the storage configuration and return a set of "controller ="
117+ directives to write in the [Controllers] section of a nvme-stas
118+ configuration file."""
119+ directives = set()
120+ if 'storage' not in cfg or not isinstance(cfg['storage'], dict):
121+ return directives
122+ storage = cfg['storage']
123+ if 'config' not in storage or storage['config'] == 'disabled':
124+ return directives
125+ config = storage['config']
126+ for item in config:
127+ if item['type'] != 'nvme_controller':
128+ continue
129+ if item['transport'] != 'tcp':
130+ continue
131+ controller_props = {
132+ 'transport': 'tcp',
133+ 'traddr': item["tcp_addr"],
134+ 'trsvcid': item["tcp_port"],
135+ }
136+
137+ props_str = ';'.join([f'{k}={v}' for k, v in controller_props.items()])
138+ directives.add(f'controller = {props_str}')
139+
140+ return directives
141+
142+
143+def configure_nvme_stas(cfg, target):
144+ """If any NVMe controller using the TCP transport is present in the storage
145+ configuration, create a nvme-stas configuration so that the remote drives
146+ can be made available at boot."""
147+ controllers = get_nvme_stas_controller_directives(cfg)
148+
149+ if not controllers:
150+ return
151+
152+ LOG.info('NVMe-over-TCP configuration found'
153+ ' , writing nvme-stas configuration')
154+ target = pathlib.Path(target)
155+ stas_dir = target / 'etc' / 'stas'
156+ stas_dir.mkdir(parents=True, exist_ok=True)
157+ with (stas_dir / 'stafd-curtin.conf').open('w', encoding='utf-8') as fh:
158+ print('[Controllers]', file=fh)
159+ for controller in controllers:
160+ print(controller, file=fh)
161+
162+ with contextlib.suppress(FileNotFoundError):
163+ (stas_dir / 'stafd.conf').replace(stas_dir / '.stafd.conf.bak')
164+ (stas_dir / 'stafd.conf').symlink_to('stafd-curtin.conf')
165+
166+
167 def handle_cloudconfig(cfg, base_dir=None):
168 """write cloud-init configuration files into base_dir.
169
170@@ -1760,6 +1814,12 @@ def builtin_curthooks(cfg, target, state):
171 description="configuring raid (mdadm) service"):
172 configure_mdadm(cfg, state_etcd, target, osfamily=osfamily)
173
174+ with events.ReportEventStack(
175+ name=stack_prefix + '/configuring-nvme-stas-service',
176+ reporting_enabled=True, level="INFO",
177+ description="configuring NVMe STorage Appliance Services"):
178+ configure_nvme_stas(cfg, target)
179+
180 if osfamily == DISTROS.debian:
181 with events.ReportEventStack(
182 name=stack_prefix + '/installing-kernel',
183diff --git a/curtin/storage_config.py b/curtin/storage_config.py
184index af7b6f3..dae89f4 100644
185--- a/curtin/storage_config.py
186+++ b/curtin/storage_config.py
187@@ -50,6 +50,8 @@ STORAGE_CONFIG_TYPES = {
188 'bcache': StorageConfig(type='bcache', schema=schemas.BCACHE),
189 'dasd': StorageConfig(type='dasd', schema=schemas.DASD),
190 'disk': StorageConfig(type='disk', schema=schemas.DISK),
191+ 'nvme_controller': StorageConfig(type='nvme_controller',
192+ schema=schemas.NVME),
193 'dm_crypt': StorageConfig(type='dm_crypt', schema=schemas.DM_CRYPT),
194 'format': StorageConfig(type='format', schema=schemas.FORMAT),
195 'lvm_partition': StorageConfig(type='lvm_partition',
196@@ -159,12 +161,13 @@ def _stype_to_deps(stype):
197 depends_keys = {
198 'bcache': {'backing_device', 'cache_device'},
199 'dasd': set(),
200- 'disk': set(),
201+ 'disk': {'nvme_controller'},
202 'dm_crypt': {'volume'},
203 'format': {'volume'},
204 'lvm_partition': {'volgroup'},
205 'lvm_volgroup': {'devices'},
206 'mount': {'device'},
207+ 'nvme_controller': set(),
208 'partition': {'device'},
209 'raid': {'devices', 'spare_devices', 'container'},
210 'zfs': {'pool'},
211@@ -184,6 +187,7 @@ def _stype_to_order_key(stype):
212 'lvm_partition': {'name'},
213 'lvm_volgroup': {'name'},
214 'mount': {'path'},
215+ 'nvme_controller': default_sort,
216 'partition': {'number'},
217 'raid': default_sort,
218 'zfs': {'volume'},
219@@ -204,7 +208,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
220 'bcache': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
221 'partition', 'raid'},
222 'dasd': {},
223- 'disk': {'dasd'},
224+ 'disk': {'dasd', 'nvme_controller'},
225 'dm_crypt': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
226 'partition', 'raid'},
227 'format': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
228@@ -212,6 +216,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
229 'lvm_partition': {'lvm_volgroup'},
230 'lvm_volgroup': {'bcache', 'disk', 'dm_crypt', 'partition', 'raid'},
231 'mount': {'format'},
232+ 'nvme_controller': {},
233 'partition': {'bcache', 'disk', 'raid', 'partition'},
234 'raid': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
235 'partition', 'raid'},
236@@ -231,7 +236,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
237 if source_type not in depends:
238 raise ValueError('Invalid source_type: %s' % source_type)
239 if dep_type not in depends:
240- raise ValueError('Invalid type in depedency: %s' % dep_type)
241+ raise ValueError('Invalid type in dependency: %s' % dep_type)
242
243 source_deps = depends[source_type]
244 result = dep_type in source_deps
245@@ -753,6 +758,11 @@ class BlockdevParser(ProbertParser):
246 entry['ptable'] = ptype
247 else:
248 entry['ptable'] = schemas._ptable_unsupported
249+
250+ match = re.fullmatch(r'/dev/(?P<ctrler>nvme\d+)n\d', devname)
251+ if match is not None:
252+ entry['nvme_controller'] = f'nvme-controller-{match["ctrler"]}'
253+
254 return entry
255
256 if entry['type'] == 'partition':
257@@ -1174,6 +1184,39 @@ class MountParser(ProbertParser):
258 return (configs, errors)
259
260
261+class NVMeParser(ProbertParser):
262+
263+ probe_data_key = 'nvme'
264+
265+ def asdict(self, ctrler_id: str, ctrler_props):
266+ action = {
267+ 'type': 'nvme_controller',
268+ 'id': f'nvme-controller-{ctrler_id}',
269+ 'transport': ctrler_props['NVME_TRTYPE'],
270+ }
271+ if action['transport'] == 'tcp':
272+ action['tcp_addr'] = ctrler_props['NVME_TRADDR']
273+ action['tcp_port'] = int(ctrler_props['NVME_TRSVCID'])
274+
275+ return action
276+
277+ def parse(self):
278+ """ parse probert 'nvme' data format """
279+
280+ errors = []
281+ configs = []
282+ for ctrler_id, ctrler_props in self.class_data.items():
283+ entry = self.asdict(ctrler_id, ctrler_props)
284+ if entry:
285+ try:
286+ validate_config(entry)
287+ except ValueError as e:
288+ errors.append(e)
289+ continue
290+ configs.append(entry)
291+ return configs, errors
292+
293+
294 class ZfsParser(ProbertParser):
295
296 probe_data_key = 'zfs'
297@@ -1318,6 +1361,7 @@ def extract_storage_config(probe_data, strict=False):
298 'lvm': LvmParser,
299 'raid': RaidParser,
300 'mount': MountParser,
301+ 'nvme': NVMeParser,
302 'zfs': ZfsParser,
303 }
304 configs = []
305@@ -1339,11 +1383,12 @@ def extract_storage_config(probe_data, strict=False):
306 raids = [cfg for cfg in configs if cfg.get('type') == 'raid']
307 dmcrypts = [cfg for cfg in configs if cfg.get('type') == 'dm_crypt']
308 mounts = [cfg for cfg in configs if cfg.get('type') == 'mount']
309+ nvmes = [cfg for cfg in configs if cfg.get('type') == 'nvme_controller']
310 bcache = [cfg for cfg in configs if cfg.get('type') == 'bcache']
311 zpool = [cfg for cfg in configs if cfg.get('type') == 'zpool']
312 zfs = [cfg for cfg in configs if cfg.get('type') == 'zfs']
313
314- ordered = (dasd + disk + part + format + lvols + lparts + raids +
315+ ordered = (nvmes + dasd + disk + part + format + lvols + lparts + raids +
316 dmcrypts + mounts + bcache + zpool + zfs)
317
318 final_config = {'storage': {'version': 2, 'config': ordered}}
319diff --git a/doc/topics/storage.rst b/doc/topics/storage.rst
320index 97e900d..7650c4d 100644
321--- a/doc/topics/storage.rst
322+++ b/doc/topics/storage.rst
323@@ -71,6 +71,7 @@ commands include:
324 - Bcache Command (``bcache``)
325 - Zpool Command (``zpool``) **Experimental**
326 - ZFS Command (``zfs``)) **Experimental**
327+- NVMe Controller Command (``nvme_controller``) **Experimental**
328 - Device "Command" (``device``)
329
330 Any action that refers to a block device (so things like ``partition``
331@@ -331,6 +332,11 @@ configuration dictionary. Currently the value is informational only.
332 Curtin already detects whether disks are part of a multipath and selects
333 one member path to operate upon.
334
335+**nvme_controller**: *<NVMe controller id>*
336+
337+If the disk is a NVMe SSD, the ``nvme_controller`` key can be set to the
338+identifier of a ``nvme_controller`` object. This will help to determine the
339+type of transport used (e.g., PCIe vs TCP).
340
341 **Config Example**::
342
343@@ -1205,6 +1211,42 @@ passed to the ZFS dataset creation command.
344 canmount: noauto
345 mountpoint: /
346
347+NVMe Controller Command
348+~~~~~~~~~~~~~~~~~~~~~~~
349+NVMe Controller Commands (and NVMe over TCP support in general) are
350+**experimental**.
351+
352+The nvme_controller command describes how to communicate with a given NVMe
353+controller.
354+
355+**transport**: *pcie, tcp*
356+
357+The ``transport`` key specifies whether the communication with the NVMe
358+controller operates over PCIe or over TCP. Other transports like RDMA and FC
359+(aka. Fiber Channel) are not supported at the moment.
360+
361+**tcp_addr**: *<ip address>*
362+
363+The ``tcp_addr`` key specifies the IP where the NVMe controller can be reached.
364+This key is only meaningful in conjunction with ``transport: tcp``.
365+
366+**tcp_port**: *port*
367+
368+The ``tcp_port`` key specifies the TCP port where the NVMe controller can be
369+reached. This key is only meaningful in conjunction with ``transport: tcp``.
370+
371+**Config Example**::
372+
373+ - type: nvme_controller
374+ id: nvme-controller-nvme0
375+ transport: pcie
376+
377+ - type: nvme_controller
378+ id: nvme-controller-nvme1
379+ transport: tcp
380+ tcp_addr: 172.16.82.78
381+ tcp_port: 4420
382+
383 Device "Command"
384 ~~~~~~~~~~~~~~~~
385
386diff --git a/tests/data/probert_storage_bogus_wwn.json b/tests/data/probert_storage_bogus_wwn.json
387index b3211fd..d817515 100644
388--- a/tests/data/probert_storage_bogus_wwn.json
389+++ b/tests/data/probert_storage_bogus_wwn.json
390@@ -1254,5 +1254,48 @@
391 "bcache": {
392 "backing": {},
393 "caching": {}
394+ },
395+ "nvme": {
396+ "nvme0": {
397+ "DEVNAME": "/dev/nvme0",
398+ "DEVPATH": "/devices/pci0000:00/0000:00:1c.4/0000:04:00.0/nvme/nvme0",
399+ "MAJOR": "238",
400+ "MINOR": "0",
401+ "NVME_TRTYPE": "pcie",
402+ "SUBSYSTEM": "nvme",
403+ "attrs": {
404+ "address": "0000:04:00.0",
405+ "cntlid": "5",
406+ "cntrltype": "io",
407+ "dctype": "none",
408+ "dev": "238:0",
409+ "device": null,
410+ "firmware_rev": "2B2QEXM7",
411+ "hmb": "1",
412+ "kato": "0",
413+ "model": "SAMSUNG SSD 970 EVO Plus 500GB",
414+ "numa_node": "0",
415+ "power/async": "disabled",
416+ "power/autosuspend_delay_ms": null,
417+ "power/control": "auto",
418+ "power/pm_qos_latency_tolerance_us": "100000",
419+ "power/runtime_active_kids": "0",
420+ "power/runtime_active_time": "0",
421+ "power/runtime_enabled": "disabled",
422+ "power/runtime_status": "unsupported",
423+ "power/runtime_suspended_time": "0",
424+ "power/runtime_usage": "0",
425+ "queue_count": "9",
426+ "rescan_controller": null,
427+ "reset_controller": null,
428+ "serial": "S4EVNJ0N203359W",
429+ "sqsize": "1023",
430+ "state": "live",
431+ "subsysnqn": "nqn.1994-11.com.samsung:nvme:970M.2:S4EVNJ0N203359W",
432+ "subsystem": "nvme",
433+ "transport": "pcie",
434+ "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
435+ }
436+ }
437 }
438 }
439diff --git a/tests/data/probert_storage_nvme_multipath.json b/tests/data/probert_storage_nvme_multipath.json
440index 56a761d..9718368 100644
441--- a/tests/data/probert_storage_nvme_multipath.json
442+++ b/tests/data/probert_storage_nvme_multipath.json
443@@ -306,5 +306,48 @@
444 "uevent": "MAJOR=259\nMINOR=4\nDEVNAME=nvme0n1p3\nDEVTYPE=partition\nPARTN=3"
445 }
446 }
447+ },
448+ "nvme": {
449+ "nvme0": {
450+ "DEVNAME": "/dev/nvme0",
451+ "DEVPATH": "/devices/pci0000:00/0000:00:1d.0/0000:03:00.0/nvme/nvme0",
452+ "MAJOR": "238",
453+ "MINOR": "0",
454+ "NVME_TRTYPE": "pcie",
455+ "SUBSYSTEM": "nvme",
456+ "attrs": {
457+ "address": "0000:03:00.0",
458+ "cntlid": "5",
459+ "cntrltype": "io",
460+ "dctype": "none",
461+ "dev": "238:0",
462+ "device": null,
463+ "firmware_rev": "GPJA0B3Q",
464+ "hmb": "1",
465+ "kato": "0",
466+ "model": "SAMSUNG MZPLL3T2HAJQ-00005",
467+ "numa_node": "0",
468+ "power/async": "disabled",
469+ "power/autosuspend_delay_ms": null,
470+ "power/control": "auto",
471+ "power/pm_qos_latency_tolerance_us": "100000",
472+ "power/runtime_active_kids": "0",
473+ "power/runtime_active_time": "0",
474+ "power/runtime_enabled": "disabled",
475+ "power/runtime_status": "unsupported",
476+ "power/runtime_suspended_time": "0",
477+ "power/runtime_usage": "0",
478+ "queue_count": "9",
479+ "rescan_controller": null,
480+ "reset_controller": null,
481+ "serial": "S4CCNE0M300015",
482+ "sqsize": "1023",
483+ "state": "live",
484+ "subsysnqn": "nqn.1994-11.com.samsung:nvme:MZPLL3T2HAJQ-00005M.2:S64DMZ0T351601T ",
485+ "subsystem": "nvme",
486+ "transport": "pcie",
487+ "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
488+ }
489+ }
490 }
491 }
492diff --git a/tests/data/probert_storage_nvme_uuid.json b/tests/data/probert_storage_nvme_uuid.json
493index c54239b..d93dffc 100644
494--- a/tests/data/probert_storage_nvme_uuid.json
495+++ b/tests/data/probert_storage_nvme_uuid.json
496@@ -306,5 +306,48 @@
497 "uevent": "MAJOR=259\nMINOR=4\nDEVNAME=nvme0n1p3\nDEVTYPE=partition\nPARTN=3"
498 }
499 }
500+ },
501+ "nvme": {
502+ "nvme0": {
503+ "DEVNAME": "/dev/nvme0",
504+ "DEVPATH": "/devices/pci0000:00/0000:00:1d.0/0000:03:00.0/nvme/nvme0",
505+ "MAJOR": "238",
506+ "MINOR": "0",
507+ "NVME_TRTYPE": "pcie",
508+ "SUBSYSTEM": "nvme",
509+ "attrs": {
510+ "address": "0000:03:00.0",
511+ "cntlid": "5",
512+ "cntrltype": "io",
513+ "dctype": "none",
514+ "dev": "238:0",
515+ "device": null,
516+ "firmware_rev": "GPJA0B3Q",
517+ "hmb": "1",
518+ "kato": "0",
519+ "model": "SAMSUNG MZPLL3T2HAJQ-00005",
520+ "numa_node": "0",
521+ "power/async": "disabled",
522+ "power/autosuspend_delay_ms": null,
523+ "power/control": "auto",
524+ "power/pm_qos_latency_tolerance_us": "100000",
525+ "power/runtime_active_kids": "0",
526+ "power/runtime_active_time": "0",
527+ "power/runtime_enabled": "disabled",
528+ "power/runtime_status": "unsupported",
529+ "power/runtime_suspended_time": "0",
530+ "power/runtime_usage": "0",
531+ "queue_count": "9",
532+ "rescan_controller": null,
533+ "reset_controller": null,
534+ "serial": "S4CCNE0M300015",
535+ "sqsize": "1023",
536+ "state": "live",
537+ "subsysnqn": "nqn.1994-11.com.samsung:nvme:MZPLL3T2HAJQ-00005M.2:S64DMZ0T351601T ",
538+ "subsystem": "nvme",
539+ "transport": "pcie",
540+ "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
541+ }
542+ }
543 }
544 }
545diff --git a/tests/unittests/test_curthooks.py b/tests/unittests/test_curthooks.py
546index 0728260..e615b38 100644
547--- a/tests/unittests/test_curthooks.py
548+++ b/tests/unittests/test_curthooks.py
549@@ -2017,6 +2017,83 @@ class TestCurthooksGrubDebconf(CiTestCase):
550 self.m_debconf.assert_called_with(expectedcfg, target)
551
552
553+class TestCurthooksNVMeStas(CiTestCase):
554+ def test_get_nvme_stas_controller_directives__no_nvme_controller(self):
555+ self.assertFalse(curthooks.get_nvme_stas_controller_directives({
556+ "storage": {
557+ "config": [
558+ {"type": "partition"},
559+ {"type": "mount"},
560+ {"type": "disk"},
561+ ],
562+ },
563+ }))
564+
565+ def test_get_nvme_stas_controller_directives__pcie_controller(self):
566+ self.assertFalse(curthooks.get_nvme_stas_controller_directives({
567+ "storage": {
568+ "config": [
569+ {"type": "nvme_controller", "transport": "pcie"},
570+ ],
571+ },
572+ }))
573+
574+ def test_get_nvme_stas_controller_directives__tcp_controller(self):
575+ expected = {"controller = transport=tcp;traddr=1.2.3.4;trsvcid=1111"}
576+
577+ result = curthooks.get_nvme_stas_controller_directives({
578+ "storage": {
579+ "config": [
580+ {
581+ "type": "nvme_controller",
582+ "transport": "tcp",
583+ "tcp_addr": "1.2.3.4",
584+ "tcp_port": "1111",
585+ },
586+ ],
587+ },
588+ })
589+ self.assertEqual(expected, result)
590+
591+ def test_get_nvme_stas_controller_directives__three_nvme_controllers(self):
592+ expected = {"controller = transport=tcp;traddr=1.2.3.4;trsvcid=1111",
593+ "controller = transport=tcp;traddr=4.5.6.7;trsvcid=1212"}
594+
595+ result = curthooks.get_nvme_stas_controller_directives({
596+ "storage": {
597+ "config": [
598+ {
599+ "type": "nvme_controller",
600+ "transport": "tcp",
601+ "tcp_addr": "1.2.3.4",
602+ "tcp_port": "1111",
603+ }, {
604+ "type": "nvme_controller",
605+ "transport": "tcp",
606+ "tcp_addr": "4.5.6.7",
607+ "tcp_port": "1212",
608+ }, {
609+ "type": "nvme_controller",
610+ "transport": "pcie",
611+ },
612+ ],
613+ },
614+ })
615+ self.assertEqual(expected, result)
616+
617+ def test_get_nvme_stas_controller_directives__empty_conf(self):
618+ self.assertFalse(curthooks.get_nvme_stas_controller_directives({}))
619+ self.assertFalse(curthooks.get_nvme_stas_controller_directives(
620+ {"storage": False}))
621+ self.assertFalse(curthooks.get_nvme_stas_controller_directives(
622+ {"storage": {}}))
623+ self.assertFalse(curthooks.get_nvme_stas_controller_directives({
624+ "storage": {
625+ "config": "disabled",
626+ },
627+ }))
628+
629+
630 class TestUefiFindGrubDeviceIds(CiTestCase):
631
632 def _sconfig(self, cfg):
633diff --git a/tests/unittests/test_storage_config.py b/tests/unittests/test_storage_config.py
634index caaac29..7b0f68c 100644
635--- a/tests/unittests/test_storage_config.py
636+++ b/tests/unittests/test_storage_config.py
637@@ -1087,6 +1087,7 @@ class TestExtractStorageConfig(CiTestCase):
638 'serial': 'SAMSUNG MZPLL3T2HAJQ-00005_S4CCNE0M300015',
639 'type': 'disk',
640 'wwn': 'eui.344343304d3000150025384500000004',
641+ 'nvme_controller': 'nvme-controller-nvme0',
642 }
643 self.assertEqual(1, len(disks))
644 self.assertEqual(expected_dict, disks[0])
645@@ -1104,6 +1105,7 @@ class TestExtractStorageConfig(CiTestCase):
646 'serial': 'SAMSUNG MZPLL3T2HAJQ-00005_S4CCNE0M300015',
647 'type': 'disk',
648 'wwn': 'uuid.344343304d3000150025384500000004',
649+ 'nvme_controller': 'nvme-controller-nvme0',
650 }
651 self.assertEqual(1, len(disks))
652 self.assertEqual(expected_dict, disks[0])
