Performance Optimization
Fine-Tune Linux Kernel Parameters
Linux is known for its flexibility and customizability, which allow system administrators to adjust its behavior to achieve top performance by fine-tuning its kernel parameters.[1] In general, runtime tuning for benchmarking can be done with the sysctl -w command, while permanent changes are made by editing /etc/sysctl.conf and applying them with sysctl -p.
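The two approaches above can be sketched as follows (run as root; the parameter name is chosen only as an example):

```shell
# Runtime change (lost on reboot) - useful for benchmarking a setting:
sysctl -w net.ipv4.tcp_fin_timeout=30

# Permanent change: add the line to /etc/sysctl.conf (or a file under
# /etc/sysctl.d/), then reload the settings:
echo "net.ipv4.tcp_fin_timeout = 30" >> /etc/sysctl.conf
sysctl -p

# Read back the current value to verify:
sysctl net.ipv4.tcp_fin_timeout
```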
The HPCMATE DLS system provides pre-configured kernel-level optimization, and UCM (Universal Cluster Manager) provides real-time kernel-level optimization via real-time system monitoring.
TCP/IP Network Parameters
Tuning the TCP/IP stack parameters in Linux offers powerful possibilities for improved network performance.
Key parameter | Description | Recommendation |
---|---|---|
net.ipv4.tcp_fin_timeout | Controls how long a connection stays in the FIN-WAIT-2 state after it is gracefully closed. Lowering this value (default: 60 seconds) frees up resources faster, which is particularly useful for servers handling many short connections. | 20-30 seconds |
net.ipv4.tcp_tw_reuse | Allows the reuse of sockets in the TIME-WAIT state. On busy servers, enabling this parameter helps manage high loads more efficiently by recycling connections more quickly. | 1 (enable) |
net.ipv4.tcp_max_syn_backlog | Controls the maximum number of connection requests the server can queue while waiting for the three-way handshake to complete. Increasing this value helps prevent new connections from being dropped when the server is under heavy load. | Start by doubling the default and adjust based on your server's traffic load |
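The recommendations in this table can be collected into a single sysctl configuration fragment. This is a sketch, not a prescription: 2048 is only an illustrative backlog value, assuming a default of 1024 on the system.

```shell
# /etc/sysctl.d/90-tcp-tuning.conf
# Apply with: sysctl -p /etc/sysctl.d/90-tcp-tuning.conf

# Free FIN-WAIT-2 sockets after 30s instead of the 60s default
net.ipv4.tcp_fin_timeout = 30

# Allow reuse of TIME-WAIT sockets for new outgoing connections
net.ipv4.tcp_tw_reuse = 1

# Roughly double a typical default SYN backlog; tune to your traffic load
net.ipv4.tcp_max_syn_backlog = 2048
```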
File System Parameters
The file system determines how data is stored, retrieved, and organized on your hard drives or SSDs, and the kernel's writeback parameters control when modified data is flushed to it.
Databases (Write-Heavy): A lower vm.dirty_background_ratio ensures writes are flushed frequently, reducing the potential for significant data loss in case of issues. A slightly increased vm.dirty_ratio might reduce stalls experienced by the database when large amounts of data need to be written quickly.
Web Servers (Read-Heavy): Here, you might increase vm.dirty_ratio to prioritize buffering read operations for greater speed, as losing some recent writes from users due to a crash is often less of a concern than having slow website loading times.
Key parameter | Description | Recommendation |
---|---|---|
vm.dirty_background_ratio | Controls the percentage of system memory at which "dirty" (modified but not yet written to disk) pages trigger the system to start writing them to disk in the background. | Workload-dependent; lower it for write-heavy workloads such as databases (see above) |
vm.dirty_ratio | Determines the maximum percentage of system memory that can be filled with dirty pages before processes trying to write are forced to pause and flush data to disk themselves. | Workload-dependent; raise it slightly to reduce write stalls (see above) |
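The two workload profiles described above might translate into a fragment like the following. The percentages are illustrative assumptions, not values from this document; defaults vary by distribution, so check yours with sysctl before changing anything.

```shell
# /etc/sysctl.d/90-writeback.conf
# Apply with: sysctl -p /etc/sysctl.d/90-writeback.conf

# Write-heavy database host: start background flushing early, but leave
# headroom before writers are forced to block
vm.dirty_background_ratio = 5
vm.dirty_ratio = 30

# Read-heavy web server (alternative profile, kept commented out):
# tolerate more buffered dirty data before stalling writing processes
#vm.dirty_background_ratio = 10
#vm.dirty_ratio = 40
```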
Memory Management Parameters
Adjusting the `vm.swappiness` and `vm.overcommit_memory` parameters allows system administrators to fine-tune how the system uses RAM and swap space.
Key parameter | Description | Recommendation |
---|---|---|
vm.swappiness | Controls the kernel's tendency to swap memory to disk. `vm.swappiness` can have a value between 0 and 100: a lower value reduces the system's use of swap, preferring to keep more data in RAM, while a higher value makes the system more inclined to use the swap space. | For servers, a lower value is often preferred so that applications remain in RAM for faster access, unless the system is running out of memory |
vm.overcommit_memory | Controls the kernel's policy towards memory overcommitment. The setting can be 0 (heuristic overcommit handling), 1 (always overcommit), or 2 (don't overcommit). The default (0) lets the kernel estimate the amount of memory available and overcommit to a certain extent, which suits most scenarios. Setting 1 allows unlimited overcommitment, which can be useful in environments where applications are expected to request more memory than they actually use. Setting 2 makes the kernel strict about memory allocation, which can prevent out-of-memory scenarios but might restrict application performance. | 0 (the default) for most scenarios; 2 to strictly prevent overcommit |
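As a sketch of applying these memory-management settings on a typical server: the swappiness value of 10 below is an illustrative assumption consistent with the "lower value for servers" guidance above, not a value mandated by this document.

```shell
# /etc/sysctl.d/90-memory.conf
# Apply with: sysctl -p /etc/sysctl.d/90-memory.conf

# Prefer keeping application data in RAM; swap only under memory pressure
vm.swappiness = 10

# Keep the default heuristic overcommit policy (0);
# change to 2 only if strict memory accounting is required
vm.overcommit_memory = 0
```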