Performance Tweaks
Revision as of 16:10, 8 February 2015
Introduction
This page is intended to be a collection of various performance tips/tweaks to help you get the most from your virtual servers.
KVM
USB Tablet Device
Disabling the USB tablet device in Windows VMs can reduce idle CPU usage and context switches. This can be done in the GUI. You can use vmmouse to keep the pointer in sync (load the drivers inside your VM). [1]
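If you prefer the command line over the GUI, the same setting can be toggled with qm on the Proxmox host; the VM ID 100 below is a placeholder:

```
# Disable the USB tablet device for VM 100 (placeholder ID)
qm set 100 --tablet 0
```

Set the option back to 1 to re-enable the device.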
VirtIO
Use VirtIO for disk and network for best performance.
- Linux has the drivers built in
- Windows requires drivers which can be obtained here: Fedora VirtIO Drivers
- FreeBSD has the drivers available in ports. See the virtio section in FreeBSD Guest Notes.
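As a sketch, both the disk bus and the NIC model can be set to VirtIO from the host command line with qm; the VM ID, storage name, and bridge below are placeholders:

```
# Attach an existing disk volume on the VirtIO block bus
qm set 100 --virtio0 local:vm-100-disk-1
# Use the VirtIO NIC model on the first network interface
qm set 100 --net0 virtio,bridge=vmbr0
```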
Disk Cache
Note: the information below is based on raw volumes; other volume formats may behave differently.
cache=none
The host does no caching; the guest disk cache mode is writeback. This seems to give the best performance and is the default in Proxmox 2.X. Warning: as with writeback, you can lose data on power failure. If the guest kernel is older than 2.6.37, use the barrier option in the guest's fstab to avoid filesystem corruption on power failure.
This mode causes qemu-kvm to interact with the disk image file or block device with O_DIRECT semantics, so the host page cache is bypassed and I/O happens directly between the qemu-kvm userspace buffers and the storage device. Because the actual storage device may report a write as completed when it has merely been placed in its write queue, the guest's virtual storage adapter is informed that there is a writeback cache, so the guest is expected to send flush commands as needed to manage data integrity. Performance-wise, this is equivalent to direct access to your host's disk.
cache=writethrough
The host does read caching; the guest disk cache mode is writethrough. Writethrough performs an fsync for each write, so it is the most secure cache mode: you cannot lose data. It is also the slowest. This mode causes qemu-kvm to interact with the disk image file or block device with O_DSYNC semantics, where writes are reported as completed only when the data has been committed to the storage device. The host page cache is used in what can be termed a writethrough caching mode. The guest's virtual storage adapter is informed that there is no writeback cache, so the guest does not need to send flush commands to manage data integrity. The storage behaves as if there is a writethrough cache.
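The per-write durability that writethrough provides can be imitated from userspace with dd: conv=fsync makes dd call fsync() before exiting, so the data is on stable storage when the command returns (the file name and size here are arbitrary):

```shell
# Write 16 MiB, then fsync before exiting -- roughly the guarantee
# writethrough gives for every guest write.
dd if=/dev/zero of=/tmp/cache-test.img bs=1M count=16 conv=fsync 2>/dev/null
# Confirm the full 16 MiB (16777216 bytes) landed on disk.
stat -c%s /tmp/cache-test.img
```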
cache=directsync
The host does no caching; the guest disk cache mode is writethrough. As with writethrough, an fsync is performed for each write.
cache=writeback
The host does read/write caching; the guest disk cache mode is writeback. Warning: you can lose data on power failure. If the guest kernel is older than 2.6.37, use the barrier option in the guest's fstab to avoid filesystem corruption on power failure.
This mode causes qemu-kvm to interact with the disk image file or block device with neither O_DSYNC nor O_DIRECT semantics: the host page cache is used, writes are reported to the guest as completed as soon as they are placed in the host page cache, and normal page cache management handles commitment to the storage device. Additionally, the guest's virtual storage adapter is informed of the writeback cache, so the guest is expected to send flush commands as needed to manage data integrity. Analogous to a RAID controller with RAM cache.
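For guests with kernels older than 2.6.37, write barriers can be enabled explicitly via the barrier mount option in the guest's /etc/fstab; a sketch, where the device name and filesystem are placeholders for your guest's actual root device:

```
# /etc/fstab inside the guest -- barrier=1 enables write barriers
/dev/vda1  /  ext4  defaults,barrier=1  0  1
```

The same option works for ext3; ext4 enables barriers by default on recent kernels.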
- Caching adds extra data copies and bus traffic, which can make it perform worse.
- For read caching: add more memory to your guests instead; their own buffer caches already do the job.
- cache=directsync and cache=writethrough can still be fast if, for example, you have a SAN or a RAID controller with a battery-backed cache.
- Avoid using cache=directsync and cache=writethrough with qcow2 files.
Some interesting articles:
- barriers: http://monolight.cc/2011/06/barriers-caches-filesystems/
- cache modes and fsync: http://www.ilsistemista.net/index.php/virtualization/23-kvm-storage-performance-and-cache-settings-on-red-hat-enterprise-linux-62.html?start=2
Application Specific Tweaks
Microsoft SQL Server
Use raw and not qcow2
Consider using a raw image or a raw partition for the partition holding Microsoft SQL database files, because qcow2 can be very slow under this type of load.
Trace Flag T8038
Setting the trace flag -T8038 will drastically reduce the number of context switches when running SQL Server 2005 or 2008.
To change the trace flag:
- Open the SQL server Configuration Manager
- Open the properties for the SQL service typically named MSSQLSERVER
- Go to the advanced tab
- Append ;-T8038 to the end of the startup parameters option
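After appending the flag, the startup parameters value ends with ;-T8038. A hypothetical example follows; the -d/-e/-l paths are illustrative placeholders (shown elided), and the existing entries must be left untouched:

```
-dC:\...\master.mdf;-eC:\...\ERRORLOG;-lC:\...\mastlog.ldf;-T8038
```

Restart the SQL Server service for the flag to take effect.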
For additional references see: Proxmox forum
VZdump
By default vzdump is limited to 10000 KB/s (about 10 MB/s). This is why some users are confused by slow backup transfers on Proxmox even with a 1 Gbps NIC installed. To increase the vzdump speed, edit:
# nano /etc/vzdump.conf
Find the bwlimit line (commented out by default; the built-in default is 10000):
#bwlimit: KBPS
Change it to any value you like; for example, to raise the limit to 50 MB/s:
bwlimit: 50000
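The edit can also be scripted; the sketch below runs against a temporary file so it is safe to try anywhere (on a real Proxmox node you would point it at /etc/vzdump.conf):

```shell
# Demonstrate the edit on a temporary stand-in for /etc/vzdump.conf.
conf=$(mktemp)
printf 'bwlimit: 10000\n' > "$conf"

# Replace the bwlimit line (commented out or not) with the new value.
sed -i 's/^#\?bwlimit:.*/bwlimit: 50000/' "$conf"

# Show the result: bwlimit: 50000
grep '^bwlimit' "$conf"
```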