Merge ~ogayot/curtin:nvme-o-tcp-storageconfig into curtin:master

Proposed by Olivier Gayot
Status: Merged
Merged at revision: 237053d9d18916dd72cf861280474d4df0e9fd24
Proposed branch: ~ogayot/curtin:nvme-o-tcp-storageconfig
Merge into: curtin:master
Diff against target: 652 lines (+387/-5)
11 files modified
curtin/block/deps.py (+3/-0)
curtin/block/schemas.py (+18/-0)
curtin/commands/block_meta.py (+6/-0)
curtin/commands/curthooks.py (+61/-1)
curtin/storage_config.py (+49/-4)
doc/topics/storage.rst (+42/-0)
tests/data/probert_storage_bogus_wwn.json (+43/-0)
tests/data/probert_storage_nvme_multipath.json (+43/-0)
tests/data/probert_storage_nvme_uuid.json (+43/-0)
tests/unittests/test_curthooks.py (+77/-0)
tests/unittests/test_storage_config.py (+2/-0)
Reviewer Review Type Date Requested Status
Server Team CI bot continuous-integration Approve
Dan Bungert Approve
Michael Hudson-Doyle Approve
Review via email: mp+458446@code.launchpad.net

Commit message

block: initial support for NVMe over TCP

Description of the change

block: initial support for NVMe over TCP

This MP adds partial support for NVMe over TCP.

In the storage configuration, NVMe drives can now have an "nvme_controller" property, holding the identifier of an existing nvme_controller object (a new action type), e.g.:

```
- type: disk
  id: disk-nvme0n1
  path: /dev/nvme0n1
  nvme_controller: nvme-controller-nvme0

- type: disk
  id: disk-nvme1n1
  path: /dev/nvme1n1
  nvme_controller: nvme-controller-nvme1

- type: nvme_controller
  id: nvme-controller-nvme0
  transport: pcie

- type: nvme_controller
  id: nvme-controller-nvme1
  transport: tcp
  tcp_port: 4420
  tcp_addr: 1.2.3.4
```

If an nvme_controller section with transport=tcp is present in the storage config, curtin will install nvme-stas (and nvme-cli) and configure the service so that the drives can be discovered and made available when the target system boots.
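
For illustration, with the nvme-controller-nvme1 section above, the nvme-stas configuration generated by the curthooks change in this MP would look roughly as follows (a sketch; the file is written to /etc/stas/stafd-curtin.conf and /etc/stas/stafd.conf becomes a symlink pointing at it):

```
[Controllers]
controller = transport=tcp;traddr=1.2.3.4;trsvcid=4420
```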

Current limitations:
* For the target system to boot correctly, we only support placing non-critical partitions (e.g., /home) on remote storage. For the next iteration, the plan is to support placing the rootfs (i.e., /) on remote NVMe drives, while preserving the /boot and /boot/efi partitions on local storage.
* If an nvme_controller section is present in the storage configuration, curtin will end up installing nvme-stas and nvme-cli on the target system, even if the nvme_controller section denotes the use of PCIe (local storage).
* Curtin itself will not automatically append the _netdev option if a given mount uses remote storage, so we would expect the storage configuration to specify the option, e.g.:

```
- type: mount
  path: /home
  device: ...
  options: defaults,_netdev
  id: mount-2
```

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:c0c824da7af9280c26e2396f76829ab36753f21a

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/215/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/215/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/215/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/215//rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Dan Bungert (dbungert) wrote :

Since we are adjusting the config format, we should update doc/topics/storage.rst as well to document the new storage command.

Since nvme-stas support is still in its early days, I suggest marking that documentation as experimental, similar to how the ZFS features are marked.

Please either include that doc change in this MP, or land it first, so that we don't forget to do the doc update.

More of a review later.

review: Needs Fixing
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:a2a0e7be18c6b7e46d63d4aaf61c230874429556

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/217/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/217/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/217/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/217//rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

This looks fine, thanks! All the Jenkins results seem to have been deleted, so I'm not sure what CI is unhappy about.

review: Approve
Revision history for this message
Olivier Gayot (ogayot) :
Revision history for this message
Olivier Gayot (ogayot) wrote :

Pushed a change for the typo. Suggestions welcome about the {"type": "nvme_controller", "transport": "pcie"} problem :)

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:fe96b7b5828c22df8623355e7b7cb7348b5fbf31

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/219/
Executed test runs:
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/219/
    FAILURE: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/219/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/219//rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Olivier Gayot (ogayot) wrote :

Fixed the py3-flake8 issues as well.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:d939953958ad4e9728fc4ed5c4b69744605e568f

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/220/
Executed test runs:
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/220/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/220/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/220//rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:c9cb81fa99cafe9f2fb9a6239581b113ea9ee41a

No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https://code.launchpad.net/~ogayot/curtin/+git/curtin/+merge/458446/+edit-commit-message

https://jenkins.canonical.com/server-team/job/curtin-ci/221/
Executed test runs:
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-amd64/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-arm64/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-ppc64el/221/
    SUCCESS: https://jenkins.canonical.com/server-team/job/curtin-ci/nodes=metal-s390x/221/

Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/curtin-ci/221//rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Michael Hudson-Doyle (mwhudson) wrote :

Just need to set a commit message I think.

Revision history for this message
Dan Bungert (dbungert) wrote :

LGTM. Jenkins test retriggers aren't working for me at the moment, you may want to rebase anyway and force-push so it can pick up the results.

review: Approve
Revision history for this message
Olivier Gayot (ogayot) wrote :

> LGTM. Jenkins test retriggers aren't working for me at the moment, you may
> want to rebase anyway and force-push so it can pick up the results.

Thanks! They don't work for me either...

I'll rebase then. Originally, I didn't set a commit message because I didn't want the commits to be squashed by the merge process. But red is bad I guess.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :
review: Approve (continuous-integration)

Preview Diff

diff --git a/curtin/block/deps.py b/curtin/block/deps.py
index 8a310b6..e5370b6 100644
--- a/curtin/block/deps.py
+++ b/curtin/block/deps.py
@@ -69,6 +69,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
             'lvm_partition': ['lvm2'],
             'lvm_volgroup': ['lvm2'],
             'ntfs': ['ntfs-3g'],
+            'nvme_controller': ['nvme-cli', 'nvme-stas'],
             'raid': ['mdadm'],
             'reiserfs': ['reiserfsprogs'],
             'xfs': ['xfsprogs'],
@@ -89,6 +90,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
             'lvm_partition': ['lvm2'],
             'lvm_volgroup': ['lvm2'],
             'ntfs': [],
+            'nvme_controller': [],
             'raid': ['mdadm'],
             'reiserfs': [],
             'xfs': ['xfsprogs'],
@@ -109,6 +111,7 @@ def detect_required_packages_mapping(osfamily=DISTROS.debian):
             'lvm_partition': ['lvm2'],
             'lvm_volgroup': ['lvm2'],
             'ntfs': [],
+            'nvme_controller': [],
             'raid': ['mdadm'],
             'reiserfs': [],
             'xfs': ['xfsprogs'],
diff --git a/curtin/block/schemas.py b/curtin/block/schemas.py
index 6a5c5b4..503e870 100644
--- a/curtin/block/schemas.py
+++ b/curtin/block/schemas.py
@@ -144,6 +144,7 @@ DISK = {
             'minimum': 0,
             'maximum': 1
         },
+        'nvme_controller': {'$ref': '#/definitions/ref_id'},
     },
 }
 DM_CRYPT = {
@@ -275,6 +276,23 @@ MOUNT = {
                     'pattern': r'[0-9]'},
     },
 }
+NVME = {
+    '$schema': 'http://json-schema.org/draft-07/schema#',
+    'name': 'CURTIN-NVME',
+    'title': 'curtin storage configuration for NVMe controllers',
+    'description': ('Declarative syntax for specifying NVMe controllers.'),
+    'definitions': definitions,
+    'required': ['id', 'type', 'transport'],
+    'type': 'object',
+    'additionalProperties': False,
+    'properties': {
+        'id': {'$ref': '#/definitions/id'},
+        'type': {'const': 'nvme_controller'},
+        'transport': {'type': 'string'},
+        'tcp_port': {'type': 'integer'},
+        'tcp_addr': {'type': 'string'},
+    },
+}
 PARTITION = {
     '$schema': 'http://json-schema.org/draft-07/schema#',
     'name': 'CURTIN-PARTITION',
diff --git a/curtin/commands/block_meta.py b/curtin/commands/block_meta.py
index 8ba7a55..9fde9c6 100644
--- a/curtin/commands/block_meta.py
+++ b/curtin/commands/block_meta.py
@@ -2026,6 +2026,11 @@ def zpool_handler(info, storage_config, context):
                        zfs_properties=fs_properties)
 
 
+def nvme_controller_handler(info, storage_config, context):
+    '''Handle the NVMe Controller storage section. This is currently a no-op,
+    the section is handled in curthooks.'''
+
+
 def zfs_handler(info, storage_config, context):
     """
     Create a zfs filesystem
@@ -2215,6 +2220,7 @@ def meta_custom(args):
         'bcache': bcache_handler,
         'zfs': zfs_handler,
         'zpool': zpool_handler,
+        'nvme_controller': nvme_controller_handler,
     }
 
     if args.testmode:
diff --git a/curtin/commands/curthooks.py b/curtin/commands/curthooks.py
index 4be2cb4..d84e9ec 100644
--- a/curtin/commands/curthooks.py
+++ b/curtin/commands/curthooks.py
@@ -1,14 +1,16 @@
 # This file is part of curtin. See LICENSE file for copyright and license info.
 
 import copy
+import contextlib
 import glob
 import os
+import pathlib
 import platform
 import re
 import sys
 import shutil
 import textwrap
-from typing import List, Tuple
+from typing import List, Set, Tuple
 
 from curtin import config
 from curtin import block
@@ -1498,6 +1500,58 @@ def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian):
                 data=None, target=target)
 
 
+def get_nvme_stas_controller_directives(cfg) -> Set[str]:
+    """Parse the storage configuration and return a set of "controller ="
+    directives to write in the [Controllers] section of a nvme-stas
+    configuration file."""
+    directives = set()
+    if 'storage' not in cfg or not isinstance(cfg['storage'], dict):
+        return directives
+    storage = cfg['storage']
+    if 'config' not in storage or storage['config'] == 'disabled':
+        return directives
+    config = storage['config']
+    for item in config:
+        if item['type'] != 'nvme_controller':
+            continue
+        if item['transport'] != 'tcp':
+            continue
+        controller_props = {
+            'transport': 'tcp',
+            'traddr': item["tcp_addr"],
+            'trsvcid': item["tcp_port"],
+        }
+
+        props_str = ';'.join([f'{k}={v}' for k, v in controller_props.items()])
+        directives.add(f'controller = {props_str}')
+
+    return directives
+
+
+def configure_nvme_stas(cfg, target):
+    """If any NVMe controller using the TCP transport is present in the storage
+    configuration, create a nvme-stas configuration so that the remote drives
+    can be made available at boot."""
+    controllers = get_nvme_stas_controller_directives(cfg)
+
+    if not controllers:
+        return
+
+    LOG.info('NVMe-over-TCP configuration found'
+             ' , writing nvme-stas configuration')
+    target = pathlib.Path(target)
+    stas_dir = target / 'etc' / 'stas'
+    stas_dir.mkdir(parents=True, exist_ok=True)
+    with (stas_dir / 'stafd-curtin.conf').open('w', encoding='utf-8') as fh:
+        print('[Controllers]', file=fh)
+        for controller in controllers:
+            print(controller, file=fh)
+
+    with contextlib.suppress(FileNotFoundError):
+        (stas_dir / 'stafd.conf').replace(stas_dir / '.stafd.conf.bak')
+    (stas_dir / 'stafd.conf').symlink_to('stafd-curtin.conf')
+
+
 def handle_cloudconfig(cfg, base_dir=None):
     """write cloud-init configuration files into base_dir.
 
@@ -1760,6 +1814,12 @@ def builtin_curthooks(cfg, target, state):
                 description="configuring raid (mdadm) service"):
             configure_mdadm(cfg, state_etcd, target, osfamily=osfamily)
 
+    with events.ReportEventStack(
+            name=stack_prefix + '/configuring-nvme-stas-service',
+            reporting_enabled=True, level="INFO",
+            description="configuring NVMe STorage Appliance Services"):
+        configure_nvme_stas(cfg, target)
+
     if osfamily == DISTROS.debian:
         with events.ReportEventStack(
             name=stack_prefix + '/installing-kernel',
diff --git a/curtin/storage_config.py b/curtin/storage_config.py
index af7b6f3..dae89f4 100644
--- a/curtin/storage_config.py
+++ b/curtin/storage_config.py
@@ -50,6 +50,8 @@ STORAGE_CONFIG_TYPES = {
     'bcache': StorageConfig(type='bcache', schema=schemas.BCACHE),
     'dasd': StorageConfig(type='dasd', schema=schemas.DASD),
     'disk': StorageConfig(type='disk', schema=schemas.DISK),
+    'nvme_controller': StorageConfig(type='nvme_controller',
+                                     schema=schemas.NVME),
     'dm_crypt': StorageConfig(type='dm_crypt', schema=schemas.DM_CRYPT),
     'format': StorageConfig(type='format', schema=schemas.FORMAT),
     'lvm_partition': StorageConfig(type='lvm_partition',
@@ -159,12 +161,13 @@ def _stype_to_deps(stype):
     depends_keys = {
         'bcache': {'backing_device', 'cache_device'},
         'dasd': set(),
-        'disk': set(),
+        'disk': {'nvme_controller'},
         'dm_crypt': {'volume'},
         'format': {'volume'},
         'lvm_partition': {'volgroup'},
         'lvm_volgroup': {'devices'},
         'mount': {'device'},
+        'nvme_controller': set(),
         'partition': {'device'},
         'raid': {'devices', 'spare_devices', 'container'},
         'zfs': {'pool'},
@@ -184,6 +187,7 @@ def _stype_to_order_key(stype):
         'lvm_partition': {'name'},
         'lvm_volgroup': {'name'},
         'mount': {'path'},
+        'nvme_controller': default_sort,
         'partition': {'number'},
         'raid': default_sort,
         'zfs': {'volume'},
@@ -204,7 +208,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
         'bcache': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
                    'partition', 'raid'},
         'dasd': {},
-        'disk': {'dasd'},
+        'disk': {'dasd', 'nvme_controller'},
         'dm_crypt': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
                      'partition', 'raid'},
         'format': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
@@ -212,6 +216,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
         'lvm_partition': {'lvm_volgroup'},
         'lvm_volgroup': {'bcache', 'disk', 'dm_crypt', 'partition', 'raid'},
         'mount': {'format'},
+        'nvme_controller': {},
         'partition': {'bcache', 'disk', 'raid', 'partition'},
         'raid': {'bcache', 'disk', 'dm_crypt', 'lvm_partition',
                  'partition', 'raid'},
@@ -231,7 +236,7 @@ def _validate_dep_type(source_id, dep_key, dep_id, sconfig):
     if source_type not in depends:
         raise ValueError('Invalid source_type: %s' % source_type)
     if dep_type not in depends:
-        raise ValueError('Invalid type in depedency: %s' % dep_type)
+        raise ValueError('Invalid type in dependency: %s' % dep_type)
 
     source_deps = depends[source_type]
     result = dep_type in source_deps
@@ -753,6 +758,11 @@ class BlockdevParser(ProbertParser):
                 entry['ptable'] = ptype
             else:
                 entry['ptable'] = schemas._ptable_unsupported
+
+            match = re.fullmatch(r'/dev/(?P<ctrler>nvme\d+)n\d', devname)
+            if match is not None:
+                entry['nvme_controller'] = f'nvme-controller-{match["ctrler"]}'
+
             return entry
 
         if entry['type'] == 'partition':
@@ -1174,6 +1184,39 @@ class MountParser(ProbertParser):
         return (configs, errors)
 
 
+class NVMeParser(ProbertParser):
+
+    probe_data_key = 'nvme'
+
+    def asdict(self, ctrler_id: str, ctrler_props):
+        action = {
+            'type': 'nvme_controller',
+            'id': f'nvme-controller-{ctrler_id}',
+            'transport': ctrler_props['NVME_TRTYPE'],
+        }
+        if action['transport'] == 'tcp':
+            action['tcp_addr'] = ctrler_props['NVME_TRADDR']
+            action['tcp_port'] = int(ctrler_props['NVME_TRSVCID'])
+
+        return action
+
+    def parse(self):
+        """ parse probert 'nvme' data format """
+
+        errors = []
+        configs = []
+        for ctrler_id, ctrler_props in self.class_data.items():
+            entry = self.asdict(ctrler_id, ctrler_props)
+            if entry:
+                try:
+                    validate_config(entry)
+                except ValueError as e:
+                    errors.append(e)
+                    continue
+                configs.append(entry)
+        return configs, errors
+
+
 class ZfsParser(ProbertParser):
 
     probe_data_key = 'zfs'
@@ -1318,6 +1361,7 @@ def extract_storage_config(probe_data, strict=False):
         'lvm': LvmParser,
         'raid': RaidParser,
         'mount': MountParser,
+        'nvme': NVMeParser,
         'zfs': ZfsParser,
     }
     configs = []
@@ -1339,11 +1383,12 @@ def extract_storage_config(probe_data, strict=False):
     raids = [cfg for cfg in configs if cfg.get('type') == 'raid']
     dmcrypts = [cfg for cfg in configs if cfg.get('type') == 'dm_crypt']
     mounts = [cfg for cfg in configs if cfg.get('type') == 'mount']
+    nvmes = [cfg for cfg in configs if cfg.get('type') == 'nvme_controller']
     bcache = [cfg for cfg in configs if cfg.get('type') == 'bcache']
     zpool = [cfg for cfg in configs if cfg.get('type') == 'zpool']
     zfs = [cfg for cfg in configs if cfg.get('type') == 'zfs']
 
-    ordered = (dasd + disk + part + format + lvols + lparts + raids +
+    ordered = (nvmes + dasd + disk + part + format + lvols + lparts + raids +
                dmcrypts + mounts + bcache + zpool + zfs)
 
     final_config = {'storage': {'version': 2, 'config': ordered}}
diff --git a/doc/topics/storage.rst b/doc/topics/storage.rst
index 97e900d..7650c4d 100644
--- a/doc/topics/storage.rst
+++ b/doc/topics/storage.rst
@@ -71,6 +71,7 @@ commands include:
 - Bcache Command (``bcache``)
 - Zpool Command (``zpool``) **Experimental**
 - ZFS Command (``zfs``)) **Experimental**
+- NVMe Controller Command (``nvme_controller``) **Experimental**
 - Device "Command" (``device``)
 
 Any action that refers to a block device (so things like ``partition``
@@ -331,6 +332,11 @@ configuration dictionary. Currently the value is informational only.
 Curtin already detects whether disks are part of a multipath and selects
 one member path to operate upon.
 
+**nvme_controller**: *<NVMe controller id>*
+
+If the disk is a NVMe SSD, the ``nvme_controller`` key can be set to the
+identifier of a ``nvme_controller`` object. This will help to determine the
+type of transport used (e.g., PCIe vs TCP).
 
 **Config Example**::
 
@@ -1205,6 +1211,42 @@ passed to the ZFS dataset creation command.
       canmount: noauto
       mountpoint: /
 
+NVMe Controller Command
+~~~~~~~~~~~~~~~~~~~~~~~
+NVMe Controller Commands (and NVMe over TCP support in general) are
+**experimental**.
+
+The nvme_controller command describes how to communicate with a given NVMe
+controller.
+
+**transport**: *pcie, tcp*
+
+The ``transport`` key specifies whether the communication with the NVMe
+controller operates over PCIe or over TCP. Other transports like RDMA and FC
+(aka. Fiber Channel) are not supported at the moment.
+
+**tcp_addr**: *<ip address>*
+
+The ``tcp_addr`` key specifies the IP where the NVMe controller can be reached.
+This key is only meaningful in conjunction with ``transport: tcp``.
+
+**tcp_port**: *port*
+
+The ``tcp_port`` key specifies the TCP port where the NVMe controller can be
+reached. This key is only meaningful in conjunction with ``transport: tcp``.
+
+**Config Example**::
+
+  - type: nvme_controller
+    id: nvme-controller-nvme0
+    transport: pcie
+
+  - type: nvme_controller
+    id: nvme-controller-nvme1
+    transport: tcp
+    tcp_addr: 172.16.82.78
+    tcp_port: 4420
+
 Device "Command"
 ~~~~~~~~~~~~~~~~
 
diff --git a/tests/data/probert_storage_bogus_wwn.json b/tests/data/probert_storage_bogus_wwn.json
index b3211fd..d817515 100644
--- a/tests/data/probert_storage_bogus_wwn.json
+++ b/tests/data/probert_storage_bogus_wwn.json
@@ -1254,5 +1254,48 @@
     "bcache": {
         "backing": {},
         "caching": {}
+    },
+    "nvme": {
+        "nvme0": {
+            "DEVNAME": "/dev/nvme0",
+            "DEVPATH": "/devices/pci0000:00/0000:00:1c.4/0000:04:00.0/nvme/nvme0",
+            "MAJOR": "238",
+            "MINOR": "0",
+            "NVME_TRTYPE": "pcie",
+            "SUBSYSTEM": "nvme",
+            "attrs": {
+                "address": "0000:04:00.0",
+                "cntlid": "5",
+                "cntrltype": "io",
+                "dctype": "none",
+                "dev": "238:0",
+                "device": null,
+                "firmware_rev": "2B2QEXM7",
+                "hmb": "1",
+                "kato": "0",
+                "model": "SAMSUNG SSD 970 EVO Plus 500GB",
+                "numa_node": "0",
+                "power/async": "disabled",
+                "power/autosuspend_delay_ms": null,
+                "power/control": "auto",
+                "power/pm_qos_latency_tolerance_us": "100000",
+                "power/runtime_active_kids": "0",
+                "power/runtime_active_time": "0",
+                "power/runtime_enabled": "disabled",
+                "power/runtime_status": "unsupported",
+                "power/runtime_suspended_time": "0",
+                "power/runtime_usage": "0",
+                "queue_count": "9",
+                "rescan_controller": null,
+                "reset_controller": null,
+                "serial": "S4EVNJ0N203359W",
+                "sqsize": "1023",
+                "state": "live",
+                "subsysnqn": "nqn.1994-11.com.samsung:nvme:970M.2:S4EVNJ0N203359W",
+                "subsystem": "nvme",
+                "transport": "pcie",
+                "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
+            }
+        }
     }
 }
diff --git a/tests/data/probert_storage_nvme_multipath.json b/tests/data/probert_storage_nvme_multipath.json
index 56a761d..9718368 100644
--- a/tests/data/probert_storage_nvme_multipath.json
+++ b/tests/data/probert_storage_nvme_multipath.json
@@ -306,5 +306,48 @@
                 "uevent": "MAJOR=259\nMINOR=4\nDEVNAME=nvme0n1p3\nDEVTYPE=partition\nPARTN=3"
             }
         }
+    },
+    "nvme": {
+        "nvme0": {
+            "DEVNAME": "/dev/nvme0",
+            "DEVPATH": "/devices/pci0000:00/0000:00:1d.0/0000:03:00.0/nvme/nvme0",
+            "MAJOR": "238",
+            "MINOR": "0",
+            "NVME_TRTYPE": "pcie",
+            "SUBSYSTEM": "nvme",
+            "attrs": {
+                "address": "0000:03:00.0",
+                "cntlid": "5",
+                "cntrltype": "io",
+                "dctype": "none",
+                "dev": "238:0",
+                "device": null,
+                "firmware_rev": "GPJA0B3Q",
+                "hmb": "1",
+                "kato": "0",
+                "model": "SAMSUNG MZPLL3T2HAJQ-00005",
+                "numa_node": "0",
+                "power/async": "disabled",
+                "power/autosuspend_delay_ms": null,
+                "power/control": "auto",
+                "power/pm_qos_latency_tolerance_us": "100000",
+                "power/runtime_active_kids": "0",
+                "power/runtime_active_time": "0",
+                "power/runtime_enabled": "disabled",
+                "power/runtime_status": "unsupported",
+                "power/runtime_suspended_time": "0",
+                "power/runtime_usage": "0",
+                "queue_count": "9",
+                "rescan_controller": null,
+                "reset_controller": null,
+                "serial": "S4CCNE0M300015",
+                "sqsize": "1023",
+                "state": "live",
+                "subsysnqn": "nqn.1994-11.com.samsung:nvme:MZPLL3T2HAJQ-00005M.2:S64DMZ0T351601T ",
+                "subsystem": "nvme",
+                "transport": "pcie",
+                "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
+            }
+        }
     }
 }
diff --git a/tests/data/probert_storage_nvme_uuid.json b/tests/data/probert_storage_nvme_uuid.json
index c54239b..d93dffc 100644
--- a/tests/data/probert_storage_nvme_uuid.json
+++ b/tests/data/probert_storage_nvme_uuid.json
@@ -306,5 +306,48 @@
                 "uevent": "MAJOR=259\nMINOR=4\nDEVNAME=nvme0n1p3\nDEVTYPE=partition\nPARTN=3"
             }
         }
+    },
+    "nvme": {
+        "nvme0": {
+            "DEVNAME": "/dev/nvme0",
+            "DEVPATH": "/devices/pci0000:00/0000:00:1d.0/0000:03:00.0/nvme/nvme0",
+            "MAJOR": "238",
+            "MINOR": "0",
+            "NVME_TRTYPE": "pcie",
+            "SUBSYSTEM": "nvme",
+            "attrs": {
+                "address": "0000:03:00.0",
+                "cntlid": "5",
+                "cntrltype": "io",
+                "dctype": "none",
+                "dev": "238:0",
+                "device": null,
+                "firmware_rev": "GPJA0B3Q",
+                "hmb": "1",
+                "kato": "0",
+                "model": "SAMSUNG MZPLL3T2HAJQ-00005",
+                "numa_node": "0",
+                "power/async": "disabled",
+                "power/autosuspend_delay_ms": null,
+                "power/control": "auto",
+                "power/pm_qos_latency_tolerance_us": "100000",
+                "power/runtime_active_kids": "0",
+                "power/runtime_active_time": "0",
+                "power/runtime_enabled": "disabled",
+                "power/runtime_status": "unsupported",
+                "power/runtime_suspended_time": "0",
+                "power/runtime_usage": "0",
+                "queue_count": "9",
+                "rescan_controller": null,
+                "reset_controller": null,
+                "serial": "S4CCNE0M300015",
+                "sqsize": "1023",
+                "state": "live",
+                "subsysnqn": "nqn.1994-11.com.samsung:nvme:MZPLL3T2HAJQ-00005M.2:S64DMZ0T351601T ",
+                "subsystem": "nvme",
+                "transport": "pcie",
+                "uevent": "MAJOR=238\nMINOR=0\nDEVNAME=nvme0\nNVME_TRTYPE=pcie"
+            }
+        }
     }
 }
diff --git a/tests/unittests/test_curthooks.py b/tests/unittests/test_curthooks.py
index 0728260..e615b38 100644
--- a/tests/unittests/test_curthooks.py
+++ b/tests/unittests/test_curthooks.py
@@ -2017,6 +2017,83 @@ class TestCurthooksGrubDebconf(CiTestCase):
         self.m_debconf.assert_called_with(expectedcfg, target)
 
 
+class TestCurthooksNVMeStas(CiTestCase):
+    def test_get_nvme_stas_controller_directives__no_nvme_controller(self):
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives({
+            "storage": {
+                "config": [
+                    {"type": "partition"},
+                    {"type": "mount"},
+                    {"type": "disk"},
+                ],
+            },
+        }))
+
+    def test_get_nvme_stas_controller_directives__pcie_controller(self):
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives({
+            "storage": {
+                "config": [
+                    {"type": "nvme_controller", "transport": "pcie"},
+                ],
+            },
+        }))
+
+    def test_get_nvme_stas_controller_directives__tcp_controller(self):
+        expected = {"controller = transport=tcp;traddr=1.2.3.4;trsvcid=1111"}
+
+        result = curthooks.get_nvme_stas_controller_directives({
+            "storage": {
+                "config": [
+                    {
+                        "type": "nvme_controller",
+                        "transport": "tcp",
+                        "tcp_addr": "1.2.3.4",
+                        "tcp_port": "1111",
+                    },
+                ],
+            },
+        })
+        self.assertEqual(expected, result)
+
+    def test_get_nvme_stas_controller_directives__three_nvme_controllers(self):
+        expected = {"controller = transport=tcp;traddr=1.2.3.4;trsvcid=1111",
+                    "controller = transport=tcp;traddr=4.5.6.7;trsvcid=1212"}
+
+        result = curthooks.get_nvme_stas_controller_directives({
+            "storage": {
+                "config": [
+                    {
+                        "type": "nvme_controller",
+                        "transport": "tcp",
+                        "tcp_addr": "1.2.3.4",
+                        "tcp_port": "1111",
+                    }, {
+                        "type": "nvme_controller",
+                        "transport": "tcp",
+                        "tcp_addr": "4.5.6.7",
+                        "tcp_port": "1212",
+                    }, {
+                        "type": "nvme_controller",
+                        "transport": "pcie",
+                    },
+                ],
+            },
+        })
+        self.assertEqual(expected, result)
+
+    def test_get_nvme_stas_controller_directives__empty_conf(self):
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives({}))
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives(
+            {"storage": False}))
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives(
+            {"storage": {}}))
+        self.assertFalse(curthooks.get_nvme_stas_controller_directives({
+            "storage": {
+                "config": "disabled",
+            },
+        }))
+
+
 class TestUefiFindGrubDeviceIds(CiTestCase):
 
     def _sconfig(self, cfg):
diff --git a/tests/unittests/test_storage_config.py b/tests/unittests/test_storage_config.py
index caaac29..7b0f68c 100644
--- a/tests/unittests/test_storage_config.py
+++ b/tests/unittests/test_storage_config.py
@@ -1087,6 +1087,7 @@ class TestExtractStorageConfig(CiTestCase):
             'serial': 'SAMSUNG MZPLL3T2HAJQ-00005_S4CCNE0M300015',
             'type': 'disk',
             'wwn': 'eui.344343304d3000150025384500000004',
+            'nvme_controller': 'nvme-controller-nvme0',
         }
         self.assertEqual(1, len(disks))
         self.assertEqual(expected_dict, disks[0])
@@ -1104,6 +1105,7 @@ class TestExtractStorageConfig(CiTestCase):
             'serial': 'SAMSUNG MZPLL3T2HAJQ-00005_S4CCNE0M300015',
             'type': 'disk',
             'wwn': 'uuid.344343304d3000150025384500000004',
+            'nvme_controller': 'nvme-controller-nvme0',
         }
         self.assertEqual(1, len(disks))
         self.assertEqual(expected_dict, disks[0])
