Code review comment for ~dbungert/curtin:fs_pass_one

Ryan Harper (raharper) wrote :

I've been using this:

https://paste.ubuntu.com/p/x2TQ7d2j9v/

which does a better job of collecting boot time (it captures the systemd-analyze output at shutdown, after boot has fully settled, instead of during boot).
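
For anyone curious, the general shape of a shutdown-time capture (a sketch only, not the contents of the paste; the unit name and output path are placeholders) is a oneshot service whose ExecStop= runs during shutdown:

# record the settled boot timing when the unit is stopped at shutdown
cat > /etc/systemd/system/capture-boot-timing.service <<'EOF'
[Unit]
Description=Capture systemd-analyze output at shutdown
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/bin/sh -c 'systemd-analyze > /var/log/systemd-analyze.out'

[Install]
WantedBy=multi-user.target
EOF
systemctl enable capture-boot-timing.service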

I run TestSimple, which uses block-meta simple; it defaults to a single root partition spanning the entire disk. That lets us scale the rootfs size from the 10GB default up to whatever we want, since fsck time is somewhat related to the total size of the filesystem being checked.

This loop is what I've used to collect data:

# 1TB, 100GB, 10GB rootfs sizes
for size in 1000 100 10; do
    # 50 tries per size
    for t in $(seq -w 1 50); do
        # this is where the disks and artifacts will be kept; the IO to/from
        # the virtual disks will happen against this mountpoint
        topdir=/hogshead/blackmetal/curtin/baseline_${size}/${t}
        # run in parallel for some variance in IO patterns;
        # the filter will select just the <Release>TestSimple test cases;
        # both variables must reach the runner's environment, so pass them
        # on the command line rather than as plain shell assignments
        CURTIN_VMTEST_TOPDIR=${topdir} \
        CURTIN_VMTEST_ROOT_DISK_SIZE_GB=${size} \
            ./tools/jenkins-runner -p12 --filter=conf_file=examples/tests/simple.yaml
    done
done

Each run leaves a systemd-analyze capture under its collect directory, e.g.:

grep . -r baseline_10/01/XenialTestSimple/collect/systemd-analyze.out
Startup finished in 3.207s (kernel) + 4.459s (userspace) = 7.666s
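
Aggregating those is straightforward; a hedged sketch (the sed/awk pipeline is mine, the paths follow the layout above) that reports the mean total boot time per rootfs size:

for size in 10 100 1000; do
    # pull the total ("= 7.666s") out of every capture for this size
    sed -n 's/.* = \([0-9.]*\)s$/\1/p' \
        baseline_${size}/*/XenialTestSimple/collect/systemd-analyze.out |
        awk -v size=${size} '{ sum += $1; n++ }
            END { printf "%sGB: %d runs, mean %.3fs\n", size, n, sum / n }'
done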

For now, I've been looking only at total boot time and the rootfs fsck; the
simple test has just one filesystem, but in scenarios where we fsck several
filesystems, each check will add to the total boot time.
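
To isolate the per-filesystem fsck contribution in those scenarios, one option (an assumption on my part, not something the collect scripts necessarily capture today) is to record per-unit startup times as well:

# per-unit startup times; fsck shows up as systemd-fsck-*.service entries
systemd-analyze blame | grep -i fsck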

I've done testing against a high-speed backend (xfs over NVMe in raid0); the
slowest storage I have access to is zfs over a 5-disk raidz2.

I'm still processing the data but should have some results later this week.
