Merge ~rodsmith/plainbox-provider-checkbox:update-disk-read-performance-thresholds into plainbox-provider-checkbox:master

Proposed by Rod Smith
Status: Merged
Approved by: Rod Smith
Approved revision: 9179cb98372f454eeccba983d29da45bb615da01
Merged at revision: f71686e8e4f137e2380459d37f0f808adb43013a
Proposed branch: ~rodsmith/plainbox-provider-checkbox:update-disk-read-performance-thresholds
Merge into: plainbox-provider-checkbox:master
Diff against target: 40 lines (+14/-0)
1 file modified
bin/disk_read_performance_test (+14/-0)
Reviewers:
Rod Smith: Approve
Jonathan Cave (community): Approve
Jeff Lane: Approve
Review via email: mp+381120@code.launchpad.net

Commit message

Update disk_read_performance_test with new device types & more modern minimum acceptable speeds.

Description of the change

Added device types for NVDIMM, SSD, ATA, and SCSI devices. The latter two were already identified by the script but fell back to the default (15 MB/s) pass criterion; the first two require new detection code.
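
As a quick way to see the properties the new detection code keys on, the relevant udev and sysfs values can be inspected by hand (sda below is just an example device name):

    # ID_BUS reports the bus type (ata, scsi, usb, ...); the script maps it
    # to a disk_type:
    udevadm info --name /dev/sda --query property | grep ID_BUS
    # The sysfs rotational flag distinguishes SSDs (0) from spinning HDDs (1):
    cat /sys/block/sda/queue/rotational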

This MR also sets new pass criteria, or updates existing ones, for NVMe, NVDIMM, MDADM, ATA, SCSI, and SSD devices, based on data from a semi-random sample of over 50 devices; see:

https://docs.google.com/spreadsheets/d/183H8rf0YpmpjI4OVr9pjG-CQvoBkWqE7qCsiwtduuiQ/edit#gid=0

Criteria were set a little lower than the lowest observed value for most devices. The exception is HDD (ATA and SCSI) devices, whose criteria were set above the pathetic level of a couple of very old disks but a bit below the lowest value observed for more modern hardware. I set the NVDIMM value to match the NVMe value because I have access to just one machine with NVDIMM hardware, and its average performance seemed similar to NVMe levels.
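
For anyone wanting to sanity-check a device against these thresholds by hand, a buffered-read figure of the same general kind can be obtained with hdparm (assuming the device under test is /dev/sda; the script's own measurement method may differ in detail):

    # Timed buffered disk reads; compare the reported MB/sec figure against
    # the MIN_BUF_READ value for the detected disk type.
    sudo hdparm -t /dev/sda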

Jeff Lane (bladernr) wrote:

LGTM. I'd suggest at least letting Devices know this changed, but it'll sit in Dev for a while and they can test it and change if necessary.

review: Approve
Jonathan Cave (jocave) wrote:

Looking at the git log for this file, there have been a number of commits over the past six months or so that modify these values one way or the other, so I have some reservations that the values are being "experimented" with.

However, your spreadsheet is the most authoritative source I have seen for identifying reasonable limits, which makes me think this should be OK.

The only other suggestion I can think of would be to modify the script/tests so that there was a job with an easily parseable value for the speed (and no other output, for example), such that a script to parse C3 could be written and you wouldn't need to produce such a spreadsheet manually again. Just an idea.
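
As a purely illustrative sketch (nothing like this exists in the script today), the job could emit one "<device> <type> <MB/sec>" line per disk, and a C3-side collector would then be trivial:

    # speeds.txt, aggregated from submissions, e.g.:
    #   sda ssd 412.7
    #   nvme0n1 nvme 1843.2
    # Average the observed speed per disk type:
    awk '{ sum[$2] += $3; n[$2]++ } END { for (t in sum) print t, sum[t] / n[t] }' speeds.txt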

review: Approve
Rod Smith (rodsmith) wrote:

Thanks, Jonathan. I've fired off an e-mail to Maciej and Jonathan, who have made the most recent commits altering these values, so as to solicit their input.

Your suggestion to create output that's easily parsed from C3 by an outside script is a good one, but I think that's a task for another commit. I'll give it some thought.

Rod Smith (rodsmith) wrote:

s/e-mail to Maciej and Jonathan/e-mail to Maciej and Sylvain/

...although I did cc: you, Jonathan.

Sylvain Pineau (sylvain-pineau) wrote:

Sorry for the delay (and thanks for the email detailing all the changes).

I'd keep the original nvme/mdadm values. We've found IoT devices with such setups and surprisingly low speeds.

Rod Smith (rodsmith) wrote:

OK, I've reverted the changes to nvme and mdadm, but kept everything else. Based on earlier reviews, I'll go ahead and merge this variant.

review: Approve

Preview Diff

diff --git a/bin/disk_read_performance_test b/bin/disk_read_performance_test
index 8d6d6a0..bcf1c75 100755
--- a/bin/disk_read_performance_test
+++ b/bin/disk_read_performance_test
@@ -13,6 +13,10 @@ for disk in $@; do
 
     disk_type=`udevadm info --name /dev/$disk --query property | grep "ID_BUS" | awk '{gsub(/ID_BUS=/," ")}{printf $1}'`
     dev_path=`udevadm info --name /dev/$disk --query property | grep "DEVPATH" | awk '{gsub(/DEVPATH=/," ")}{printf $1}'`
+    # /sys/block/$disk/queue/rotational was added with Linux 2.6.29. If file is
+    # not present, test below will fail & disk will be considered an HDD, not
+    # an SSD.
+    rotational=`cat /sys/block/$disk/queue/rotational`
     if [[ $dev_path =~ dm ]]; then
         disk_type="devmapper"
     fi
@@ -25,6 +29,12 @@ for disk in $@; do
     if [[ $dev_path =~ mmc ]]; then
         disk_type="mmc"
     fi
+    if [[ $dev_path =~ pmem ]]; then
+        disk_type="nvdimm"
+    fi
+    if [[ ($disk_type == "scsi" || $disk_type == "ata") && $rotational == 0 ]]; then
+        disk_type="ssd"
+    fi
     if [ -z "$disk_type" ]; then
         echo "ERROR: disk type not recognized"
         exit 1
@@ -49,7 +59,11 @@ for disk in $@; do
         "ide" ) MIN_BUF_READ=40;;
         "mmc" ) MIN_BUF_READ=$DEFAULT_BUF_READ;;
         "nvme" ) MIN_BUF_READ=200;;
+        "nvdimm" ) MIN_BUF_READ=500;;
         "mdadm" ) MIN_BUF_READ=80;;
+        "ata" ) MIN_BUF_READ=100;;
+        "scsi" ) MIN_BUF_READ=100;;
+        "ssd" ) MIN_BUF_READ=200;;
         * ) MIN_BUF_READ=$DEFAULT_BUF_READ;;
     esac
     echo "INFO: $disk_type: Using $MIN_BUF_READ MB/sec as the minimum throughput speed"
