Command line tools - PVE 3.x
Introduction
This page lists the most important Proxmox VE and Debian command line tools. All CLI tools also have manual pages.
KVM specific
qm
qm - qemu/kvm manager - see Manual: qm and Qm manual
OpenVZ specific
vzctl
vzctl - utility to control an OpenVZ container - see Vzctl manual
vztop
vztop - display top CPU processes
user_beancounters
cat /proc/user_beancounters
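Each resource row in that file ends with a failcnt column; a quick way to spot resources that have hit their limits is to filter on that last field (a sketch, assuming the standard user_beancounters layout with two header lines):

```shell
# Print only the beancounter rows whose failcnt (last column) is non-zero.
# Must be run on the OpenVZ host, typically as root.
awk 'NR > 2 && $NF > 0' /proc/user_beancounters
```

If this prints nothing, no container has exceeded any of its resource limits since the counters were last reset.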
vzlist
- example:
vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost.fantinibakery.com
       102        121 running   10.100.100.18   mediawiki.fantinibakery.com
       114         49 running   -               fbc14.fantinibakery.com
From PVE 3.0 onwards, the display will be:
vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost
       102        121 running   10.100.100.18   mediawiki
       114         49 running   -               fbc14
- The fields available for selective display are: ctid, nproc, status, ip, hostname.
- All are case sensitive and are used with the options -H (no header) and -o field1,field2,...
- The binary is at: /usr/sbin/vzlist
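For example, combining the fields and options described above (must be run on the OpenVZ host):

```shell
# Show only CTID and hostname for running containers, without the header row:
vzlist -H -o ctid,hostname

# List all containers (including stopped ones), sorted by hostname:
vzlist -a -s hostname
```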
USAGE
Usage: vzlist [-a | -S] [-n] [-H] [-o field[,field...] | -1] [-s [-]field]
              [-h pattern] [-N pattern] [-d pattern] [CTID [CTID ...]]
       vzlist -L | --list
Options:
  -a, --all            list all containers
  -S, --stopped        list stopped containers
  -n, --name           display containers' names
  -H, --no-header      suppress columns header
  -t, --no-trim        do not trim long values
  -j, --json           output in JSON format
  -o, --output         output only specified fields
  -1                   synonym for -H -octid
  -s, --sort           sort by the specified field ('-field' to reverse sort order)
  -h, --hostname       filter CTs by hostname pattern
  -N, --name_filter    filter CTs by name pattern
  -d, --description    filter CTs by description pattern
  -L, --list           get possible field names
Backup
vzdump
vzdump - backup utility for virtual machines - see Vzdump manual
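As a sketch, a snapshot-mode backup of container 101 on PVE 3.x might look like this (the storage name is illustrative; check the vzdump man page for the options available on your version):

```shell
# Back up container 101 in snapshot mode, LZO-compressed, to the "local" storage:
vzdump 101 --mode snapshot --compress lzo --storage local
```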
vzrestore
vzrestore - restore OpenVZ vzdump backups - see Vzrestore manual
qmrestore
qmrestore - restore KVM vzdump backups - see Qmrestore manual
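A sketch of restoring such a KVM backup (the archive path, VM ID, and storage name are illustrative):

```shell
# Restore a vzdump archive as VM ID 101 onto the "local" storage:
qmrestore /var/lib/vz/dump/vzdump-qemu-101.vma.lzo 101 --storage local
```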
Cluster management
pveca
PVE Cluster Administration Toolkit
USAGE
- pveca -l # show cluster status
- pveca -c # create new cluster with localhost as master
- pveca -s [-h IP] # sync cluster configuration from master (or IP)
- pveca -d ID # delete a node
- pveca -a [-h IP] # add new node to cluster
- pveca -m # force local node to become master
- pveca -i # print node info (CID NAME IP ROLE)
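Using the options above, a minimal two-node cluster setup might look like this (the IP address is illustrative):

```shell
# On the designated master node:
pveca -c                 # create the cluster with this host as master

# On the second node, join the cluster using the master's IP:
pveca -a -h 192.168.1.10

# On any node, verify the cluster status:
pveca -l
```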
Software version check
pveversion
Proxmox VE version info - Print version information for Proxmox VE packages.
USAGE
pveversion [--verbose]
- without any argument it shows the version of pve-manager, something like:
- pve-manager/1.5/4660
or
- pve-manager/3.0/957f0862
- with the -v (--verbose) argument it shows the versions of all PVE-related packages, like:
- pve-manager: 1.5-7 (pve-manager/1.5/4660)
- running kernel: 2.6.18-2-pve
- proxmox-ve-2.6.18: 1.5-5
- pve-kernel-2.6.18-2-pve: 2.6.18-5
- pve-kernel-2.6.18-1-pve: 2.6.18-4
- qemu-server: 1.1-11
- pve-firmware: 1.0-3
- libpve-storage-perl: 1.0-10
- vncterm: 0.9-2
- vzctl: 3.0.23-1pve8
- vzdump: 1.2-5
- vzprocps: 2.0.11-1dso2
- vzquota: 3.0.11-1
- pve-qemu-kvm-2.6.18: 0.9.1-5
or
- pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
- running kernel: 2.6.32-20-pve
- proxmox-ve-2.6.32: 3.0-100
- pve-kernel-2.6.32-20-pve: 2.6.32-100
- lvm2: 2.02.95-pve3
- clvm: 2.02.95-pve3
- corosync-pve: 1.4.5-1
- openais-pve: 1.1.4-3
- libqb0: 0.11.1-2
- redhat-cluster-pve: 3.2.0-2
- resource-agents-pve: 3.9.2-4
- fence-agents-pve: 4.0.0-1
- pve-cluster: 3.0-4
- qemu-server: 3.0-20
- pve-firmware: 1.0-22
- libpve-common-perl: 3.0-4
- libpve-access-control: 3.0-4
- libpve-storage-perl: 3.0-8
- vncterm: 1.1-4
- vzctl: 4.0-1pve3
- vzprocps: 2.0.11-2
- vzquota: 3.1-2
- pve-qemu-kvm: 1.4-13
- ksm-control-daemon: 1.1-1
aptitude
Standard Debian package management tool
LVM
Most LVM commands are very similar to each other. Each command name is formed by prefixing the operation with the abbreviation for the object it acts on:
- Physical Volume = pv
- Volume Group = vg
- Logical Volume = lv
USAGE
The table shows which prefix (lvm, pv, vg, lv) each operation is available with:

Operation   | LVM (lvm*) | PV (pv*) | VG (vg*) | LV (lv*)
s           | No         | Yes      | Yes      | Yes
display     | No         | Yes      | Yes      | Yes
create      | No         | Yes      | Yes      | Yes
rename      | No         | No       | Yes      | Yes
change      | Yes        | Yes      | Yes      | Yes
move        | No         | Yes      | Yes      | No
extend      | No         | No       | Yes      | Yes
reduce      | No         | No       | Yes      | Yes
resize      | No         | Yes      | No       | Yes
split       | No         | No       | Yes      | No
merge       | No         | No       | Yes      | No
convert     | No         | No       | Yes      | Yes
import      | No         | No       | Yes      | No
export      | No         | No       | Yes      | No
importclone | No         | No       | Yes      | No
cfgbackup   | No         | No       | Yes      | No
cfgrestore  | No         | No       | Yes      | No
ck          | No         | Yes      | Yes      | No
scan        | diskscan   | Yes      | Yes      | Yes
mknodes     | No         | No       | Yes      | No
remove      | No         | Yes      | Yes      | Yes
dump        | Yes        | No       | No       | No
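Combining the prefixes and operations from the table, a typical workflow looks like this (the device, group, and volume names are illustrative; all commands require root):

```shell
pvcreate /dev/sdb1                    # initialize a partition as a physical volume
vgcreate vgdata /dev/sdb1             # create a volume group on that PV
lvcreate -n lvbackup -L 10G vgdata    # carve a 10 GB logical volume out of the VG
lvdisplay /dev/vgdata/lvbackup        # inspect the resulting logical volume
```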
iSCSI
DRBD
See DRBD
Debian Appliance Builder
dab
Other useful tools
pveperf
Simple host performance test.
(from man page)
USAGE
- pveperf [PATH]
DESCRIPTION
- Tries to gather some CPU/hard disk performance data for the hard disk mounted at PATH (/ is used as default)
It dumps on the terminal:
- CPU BOGOMIPS: bogomips sum of all CPUs
- REGEX/SECOND: regular expressions per second (perl performance test), should be above 300000
- HD SIZE: harddisk size
- BUFFERED READS: simple HD read test. Modern HDs should reach at least 40 MB/sec
- AVERAGE SEEK TIME: tests average seek time. Fast SCSI HDs reach values < 8 milliseconds. Common IDE/SATA disks get values from 15 to 20 ms.
- FSYNCS/SECOND: value should be greater than 200 (you should enable "write back" cache mode on your RAID controller - needs a battery backed cache (BBWC)).
- DNS EXT: average time to resolve an external DNS name
- DNS INT: average time to resolve a local DNS name
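Typical invocations, run as root (the mount point is illustrative):

```shell
pveperf                 # test the root filesystem (the default PATH)
pveperf /var/lib/vz     # test the disk mounted at /var/lib/vz instead
```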
Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like: <<sh: /proc/sys/vm/drop_caches: Permission denied unable to open HD at /usr/bin/pveperf line 149.>>