Performance benchmarking
== DISK performance ==
Using ''dd'' for disk performance tests is not recommended, because ''dd'' performs a sequential write, which is not a realistic workload. To evaluate your servers, use ''fio'' or ''bonnie++'' instead. Disk performance is typically described by three metrics:
* ''[[IOPS]]'': I/O operations per second
* ''Throughput or bandwidth'': the amount of data transferred per second (related to IOPS and block size, as shown below)
* ''Latency'': the time required to perform an I/O operation
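For a fixed request size, the first two metrics are tied together: throughput is roughly IOPS multiplied by the block size. A quick back-of-the-envelope check (the IOPS figure is only an example):<syntaxhighlight lang="bash">
## 20000 IOPS at a 4 KiB block size ≈ 80000 KiB/s, i.e. roughly 78 MiB/s
$ echo "$((20000 * 4)) KiB/s"
80000 KiB/s
</syntaxhighlight>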
=== ''hdparm'' ===
''hdparm'' is a tool for collecting and manipulating the low-level parameters of SATA/PATA/SAS and older IDE storage devices. It also contains a [[benchmark]] mode we can invoke with the ''-t'' flag, even on a disk with mounted partitions:<syntaxhighlight lang="bash">
$ sudo hdparm -t /dev/nvme0n1
/dev/nvme0n1:
 Timing buffered disk reads: 10014 MB in 3.00 seconds = 3337.66 MB/sec
## the figure shows how fast the disk can sustain sequential reads, with no filesystem
## overhead and no prior caching, since hdparm flushes its cache before the test
</syntaxhighlight>
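''hdparm'' also has a ''-T'' flag that times cached reads, which mostly measures memory and system throughput rather than the disk itself; running both flags together makes it easy to see how much the cache inflates the numbers. A minimal sketch (the device path is just an example):<syntaxhighlight lang="bash">
## -T: cached reads (memory/system throughput), -t: buffered disk reads
$ sudo hdparm -tT /dev/nvme0n1
</syntaxhighlight>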
=== bonnie++ ===
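''bonnie++'' benchmarks both throughput and small-file operations against a scratch directory. A minimal sketch (the directory, size, and user are placeholders; the file size should exceed installed RAM so the page cache cannot absorb the whole test):<syntaxhighlight lang="bash">
## -d: scratch directory on the filesystem under test
## -s: total size of the test data (larger than RAM to defeat caching)
## -n 0: skip the small-file creation tests
## -u: user to run as when invoked via sudo/root
$ sudo bonnie++ -d /mnt/test -s 16G -n 0 -u nobody
</syntaxhighlight>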
=== sysbench ===
<syntaxhighlight lang="bash"> | |||
## step 1, prepare benchmark dir and files | |||
## –file-total-size flag indicates the total size of test files to be created locally. | |||
$mkdir temp | |||
$ cd temp | |||
$ sysbench fileio --file-total-size=50G prepare | |||
## step 2, Execute benchmark | |||
sysbench --file-total-size=50G --file-test-mode=rndrw --file-extra-flags=direct fileio run | |||
–file-extra-flags=direct option tells sysbench to use direct I/O, which will bypass the page cache. This choice ensures that the test interacts with the disk and not with the main memory | |||
–file-test-mode accepts the following workloads: | |||
seqwr → sequential write | |||
seqrewr → sequential rewrite | |||
seqrd → sequential read | |||
rndrd → random read | |||
rndwr → random write | |||
rndrw → combined random read/write | |||
## step 3, clean up | |||
$ sysbench fileio cleanup | |||
</syntaxhighlight> | |||
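The run can also be time-boxed and parallelized with the general ''--time'' and ''--threads'' options (older sysbench releases spell these ''--max-time'' and ''--num-threads''). A sketch of a 60-second random-read run with four worker threads:<syntaxhighlight lang="bash">
## hypothetical variant: four threads of direct-I/O random reads for 60 seconds
$ sysbench --file-total-size=50G --file-test-mode=rndrd --file-extra-flags=direct \
    --threads=4 --time=60 fileio run
</syntaxhighlight>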
=== fio ===
''fio'' stands for Flexible I/O Tester, a tool for measuring I/O performance. '''With ''fio'', we can define the [[workload]] to get exactly the type of measurement we want.'''<syntaxhighlight lang="bash">
$ sudo fio --filename=/dev/sda --readonly --rw=read --name=TEST --runtime=3
## --filename lets us test a block device, such as /dev/sda, or a regular file
## --runtime=3 limits the run to three seconds
## --name sets the job name
## --rw accepts the following workloads:
##   read      → sequential reads
##   write     → sequential writes
##   randread  → random reads
##   randwrite → random writes
##   rw, readwrite → mixed sequential reads and writes
##   randrw    → mixed random reads and writes
</syntaxhighlight>A ''fio'' [[test]] can also be defined in a job file, for example:<syntaxhighlight lang="bash">
$ cat test.fio
[global] | |||
runtime=3 | |||
name=TEST | |||
rw=read | |||
[job1] | |||
filename=/dev/nvme0n1 | |||
$ fio --readonly test.fio | |||
</syntaxhighlight> | |||
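The example above reads sequentially from the raw device; to measure random-access [[IOPS]] without touching the page cache, ''fio'' can instead target a scratch file with direct I/O and a deeper queue. A minimal sketch (file path, size, queue depth, and runtime are illustrative; the ''libaio'' engine assumes Linux):<syntaxhighlight lang="bash">
## random 4 KiB reads against a 1 GiB scratch file, bypassing the page cache
$ fio --name=randread-test --filename=/tmp/fio-testfile --rw=randread --bs=4k \
      --size=1G --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=30 --time_based --group_reporting
</syntaxhighlight>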
=== iozone test ===
<syntaxhighlight lang="bash">
$ iozone -+m <conf_file> -i 0 -w -+n -c -C -e -s 4g -r 1024k -t 8

## actual results:
gluster-block:
  Children see throughput for 8 initial writers =   87671.01 kB/sec
  Children see throughput for 8 readers         =  133351.47 kB/sec
gluster-fuse:
  Children see throughput for 8 initial writers =  939760.11 kB/sec
  Children see throughput for 8 readers         =  791956.57 kB/sec
gluster-nfs:
  Children see throughput for 8 initial writers =  989822.15 kB/sec
  Children see throughput for 8 readers         = 1203338.91 kB/sec
</syntaxhighlight>
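For reference, ''-i 0'' selects only the write/rewrite test, ''-t 8'' runs it with eight parallel workers, and ''-+m'' points iozone at a cluster configuration file listing the client nodes; the reader figures above presumably come from an additional read pass. A single-node sketch using the same flags plus ''-i 1'' (read/re-read):<syntaxhighlight lang="bash">
## write/rewrite and read/re-read throughput: 8 threads, 4 GiB files, 1 MiB records
$ iozone -i 0 -i 1 -c -e -s 4g -r 1024k -t 8
</syntaxhighlight>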
PCI Switch, CPU and GPU Direct server topology[1]
== References ==
<references />