Performance benchmarking
Many benchmark tools are available; this post goes through some of the most commonly used ones.
Disk performance
- IOPS: I/O operations per second
- Throughput or bandwidth: the amount of data transferred per second
- Latency: the time required to perform an I/O operation
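As a rough sanity check, the three are related: for a fixed block size, throughput ≈ IOPS × block size. For example, 20,000 IOPS at a 4 KiB block size works out to about 78 MiB/s:
$ echo $((20000 * 4 / 1024)) MiB/s
78 MiB/s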
hdparm
hdparm is a tool for getting and setting the low-level parameters of SATA/PATA/SAS and older IDE storage devices. It also includes a simple benchmark mode, invoked with the -t flag, which can be run even on a disk with mounted partitions:
$ sudo hdparm -t /dev/nvme0n1
/dev/nvme0n1:
Timing buffered disk reads: 10014 MB in 3.00 seconds = 3337.66 MB/sec
*** The "Timing buffered disk reads" figure shows how fast the disk can sustain sequential reads without any filesystem overhead and without help from prior caching, since hdparm flushes the buffer cache before the test.
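For comparison, the -T flag times reads straight from the Linux buffer cache instead of the disk, which gives a memory/cache bandwidth baseline; both timings can be collected in one run:
$ sudo hdparm -Tt /dev/nvme0n1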
Using dd for disk performance tests is not recommended, because dd performs only a sequential write, which is not a realistic workload. To evaluate your servers, use fio or bonnie++ instead.
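A minimal bonnie++ sketch for the alternative mentioned above (the target directory is an example; -s should be roughly twice the machine's RAM so the page cache cannot hold the whole file):
## -d: directory to test in, -s: total file size, -n 0: skip the small-file creation test, -u: user to run as
$ sudo bonnie++ -d /mnt/test -s 16g -n 0 -u root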
GeekBench
GeekBench is another complete test suite for Linux. It automatically runs a series of test suites on your system and produces a full set of results. The latest Linux release can be downloaded from the GeekBench website.
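A minimal sketch of a command-line run, assuming the GeekBench 6 Linux tarball has already been downloaded (the archive and binary names depend on the exact version):
$ tar xf Geekbench-6.*-Linux.tar.gz
$ cd Geekbench-6.*-Linux
$ ./geekbench6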
sysbench[1]
## step 1, prepare benchmark dir and files
## the --file-total-size flag indicates the total size of the test files to be created locally
$ mkdir temp
$ cd temp
$ sysbench fileio --file-total-size=50G prepare
## step 2, execute benchmark
$ sysbench --file-total-size=50G --file-test-mode=rndrw --file-extra-flags=direct fileio run
The --file-extra-flags=direct option tells sysbench to use direct I/O, which bypasses the page cache. This ensures that the test interacts with the disk and not with main memory.
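If a tool does not offer a direct I/O option, the page cache can also be dropped manually right before a run so that reads actually hit the disk (this is system-wide, so use it with care):
## flush dirty pages, then drop the page cache, dentries and inodes
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches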
--file-test-mode accepts the following workloads (a run with a different mode is sketched after the list):
seqwr → sequential write
seqrewr → sequential rewrite
seqrd → sequential read
rndrd → random read
rndwr → random write
rndrw → combined random read/write
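For example, a pure sequential read run with an explicit duration and thread count (--time and --threads are general sysbench options; the values here are only illustrative):
$ sysbench fileio --file-total-size=50G --file-test-mode=seqrd --file-extra-flags=direct --time=60 --threads=4 run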
## step 3, clean up
$ sysbench fileio cleanup
fio
fio stands for Flexible I/O Tester and refers to a tool for measuring I/O performance. With fio, we can set our workload to get the type of measurement we want.
$ sudo fio --filename=/dev/sda --readonly --rw=read --name=TEST --runtime=3
The --filename flag lets us test a block device, such as /dev/sda, or just a regular file
--runtime=3 limits the run to three seconds
--name sets the test (job) name
The --rw flag accepts the following options (a fuller example follows the list):
read → sequential reads
write → sequential writes
randread → random reads
randwrite → random writes
rw, readwrite → mixed sequential reads and writes
randrw → mixed random reads and writes
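A fuller sketch against a regular file rather than a raw device, using a mixed random workload; the path and values are only illustrative, libaio must be available, and --direct=1 needs a filesystem that supports O_DIRECT:
$ fio --name=randrw-test --filename=/mnt/test/fio-testfile --size=1G \
      --rw=randrw --rwmixread=75 --bs=4k --ioengine=libaio --direct=1 \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting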
A fio test can also be defined in a job file, for example:
$ cat test.fio
[global]
runtime=3
name=TEST
rw=read
[job1]
filename=/dev/nvme0n1
$ fio --readonly test.fio
iozone test
$ iozone -+m <conf_file> -i 0 -w -+n -c -C -e -s 4g -r 1024k -t 8
Actual results:
gluster-block:
  Children see throughput for 8 initial writers = 87671.01 kB/sec
  Children see throughput for 8 readers = 133351.47 kB/sec
gluster-fuse:
  Children see throughput for 8 initial writers = 939760.11 kB/sec
  Children see throughput for 8 readers = 791956.57 kB/sec
gluster-nfs:
  Children see throughput for 8 initial writers = 989822.15 kB/sec
  Children see throughput for 8 readers = 1203338.91 kB/sec
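The -+m <conf_file> argument above points at iozone's client list for distributed (cluster) mode: one line per client, giving the client hostname, the working directory on that client, and the path to the iozone binary there. A minimal sketch, with example hostnames and paths:
$ cat conf_file
client1  /mnt/glusterfs/iozone-work  /usr/bin/iozone
client2  /mnt/glusterfs/iozone-work  /usr/bin/iozone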
PCI Switch, CPU and GPU Direct server topology[2]
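A quick way to inspect how the GPUs, NICs and CPUs are attached to the PCIe switches on such a server (assuming the NVIDIA driver, pciutils and numactl are installed):
## connection matrix between GPUs/NICs: PIX/PXB means the path stays on PCIe switches, NODE/SYS means it crosses the CPU host bridge or the inter-socket link
$ nvidia-smi topo -m
## PCI device tree, showing the switch hierarchy
$ lspci -tv
## NUMA layout of CPUs and memory
$ numactl --hardware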
Reference
<reference/>