Command line tools - PVE 3.x
Note: This article is about the previous Proxmox VE 3.x releases.
Introduction
This page lists some important Proxmox VE 3.x and Debian command line tools.
KVM specific
qm
To view a list of KVMs:
qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 alpha001             running    2048               8.00 5858
       101 WINXP002             running    1024              32.00 6177
       102 Win2K                running    2048              32.00 113893
       105 axe002               running    16384            100.00 279613
To see how much memory your machine 101 has:
qm config 101 | grep ^memory
memory: 2048
see "man qm" for further info
QEMU Emulator: User Documentation | Monitor Commands
OpenVZ specific
vzps
This utility can be run on the Node just like the standard Linux ps. For information on the ps utility itself, consult the corresponding man page. vzps adds functionality for monitoring the separate Containers running on the Node.
The vzps utility has the following functionality added:
- The -E CT_ID command line switch can be used to show only the processes running inside the Container with the specified ID.
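For example, to show only the processes running inside Container 101:
vzps -E 101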
pvectl
pvectl - vzctl wrapper to manage OpenVZ containers - see "man pvectl"
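A minimal usage sketch; the subcommand names below follow the usual PVE CLI conventions, so run pvectl without arguments or see "man pvectl" for the authoritative list:
pvectl start 101
pvectl stop 101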
vzctl
vzctl - utility to control an OpenVZ container - see "man vzctl"
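A few common vzctl invocations, using Container 101 from the examples on this page:
vzctl status 101
vzctl start 101
vzctl enter 101
vzctl exec 101 df -h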
vztop
This utility can be run on the Node just like the standard Linux top. For information on the top utility itself, consult the corresponding man page. vztop adds functionality for monitoring the separate Containers running on the Node.
The vztop utility has the following functionality added:
- The -E CT_ID command line switch can be used to show only the processes running inside the Container with the ID specified. If -1 is specified as CT_ID, the processes of all running Containers are displayed.
- The e interactive command (the key pressed while top is running) can be used to show/hide the CTID column, which displays the Container where a particular process is running (0 stands for the Hardware Node itself).
- The E interactive command can be used to select another Container the processes of which are to be shown. If -1 is specified, the processes of all running Containers are displayed.
vztop - display top CPU processes
 10:28:52  up 31 days, 11:18,  1 user,  load average: 0.07, 0.06, 0.02
197 processes: 196 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:   0.2% user   0.1% system   0.0% nice   0.0% iowait  99.2% idle
CPU1 states:   1.3% user   2.1% system   0.0% nice   0.0% iowait  96.1% idle
CPU2 states:   6.3% user   0.1% system   0.0% nice   0.0% iowait  93.1% idle
CPU3 states:   2.0% user   1.0% system   0.0% nice   0.0% iowait  96.4% idle
Mem:  16251688k av, 16032764k used,  218924k free,  0k shrd,  364120k buff
       4448576k active,  10983652k inactive
Swap: 15728632k av,  36k used,  15728596k free   14170784k cached

    PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 390694 root      20   0  759M 528M  2432 S     6.7  3.3 244:53   1 kvm
 566767 root      20   0 40464 8908  5320 S     6.7  0.0   0:54   0 apache2
   7898 root      20   0  181M  34M  4076 S     0.3  0.2  73:12   2 pvestatd
      1 root      20   0 10604  848   744 S     0.0  0.0   0:16   0 init
      2 root      20   0     0    0     0 SW    0.0  0.0   0:00   2 kthreadd
      3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
      4 root      20   0     0    0     0 SW    0.0  0.0   0:19   0 ksoftirqd/0
      5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
      6 root      RT   0     0    0     0 SW    0.0  0.0   0:02   0 watchdog/0
      7 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
      8 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
      9 root      20   0     0    0     0 SW    0.0  0.0   0:24   1 ksoftirqd/1
     10 root      RT   0     0    0     0 SW    0.0  0.0   0:01   1 watchdog/1
     11 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 migration/2
     12 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
     13 root      20   0     0    0     0 SW    0.0  0.0   0:12   2 ksoftirqd/2
     14 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 watchdog/2
     15 root      RT   0     0    0     0 SW    0.0  0.0   0:07   3 migration/3
     ..
     ..
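For example, to watch only the processes of Container 101, use the -E switch described above:
vztop -E 101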
user_beancounters
cat /proc/user_beancounters
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize       11217945   16650240  243269632  268435456          0
            lockedpages           0        418      65536      65536          0
            privvmpages      134161     221093  9223372036854775807  9223372036854775807  0
            shmpages             16       3232  9223372036854775807  9223372036854775807  0
            dummy                 0          0          0          0          0
            numproc              56         99  9223372036854775807  9223372036854775807  0
            physpages         96245     122946          0     131072          0
            vmguarpages           0          0          0  9223372036854775807  0
            oomguarpages      53689      78279          0  9223372036854775807  0
            numtcpsock           49         82  9223372036854775807  9223372036854775807  0
            numflock              8         20  9223372036854775807  9223372036854775807  0
            numpty                0          6  9223372036854775807  9223372036854775807  0
            numsiginfo            0         33  9223372036854775807  9223372036854775807  0
            tcpsndbuf        927856    1619344  9223372036854775807  9223372036854775807  0
            tcprcvbuf        802816    1343488  9223372036854775807  9223372036854775807  0
            othersockbuf     152592     481248  9223372036854775807  9223372036854775807  0
            dgramrcvbuf           0       4624  9223372036854775807  9223372036854775807  0
            numothersock        124        152  9223372036854775807  9223372036854775807  0
            dcachesize      6032652   12378728  121634816  134217728          0
            numfile             629        915  9223372036854775807  9223372036854775807  0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            dummy                 0          0          0          0          0
            numiptent            20         20  9223372036854775807  9223372036854775807  0
        0:  kmemsize       34634728   65306624  9223372036854775807  9223372036854775807  0
            lockedpages        1360       6721  9223372036854775807  9223372036854775807  0
            privvmpages      317475     507560  9223372036854775807  9223372036854775807  0
            shmpages           4738       9645  9223372036854775807  9223372036854775807  0
            dummy                 0          0  9223372036854775807  9223372036854775807  0
            numproc             190        220  9223372036854775807  9223372036854775807  0
            physpages       3769163    3867750  9223372036854775807  9223372036854775807  0
            vmguarpages           0          0          0          0          0
            oomguarpages     182160     205746  9223372036854775807  9223372036854775807  0
            numtcpsock           12         29  9223372036854775807  9223372036854775807  0
            numflock              9         13  9223372036854775807  9223372036854775807  0
            numpty                4         12  9223372036854775807  9223372036854775807  0
            numsiginfo            3         84  9223372036854775807  9223372036854775807  0
            tcpsndbuf        249512    1760544  9223372036854775807  9223372036854775807  0
            tcprcvbuf        198920    1142000  9223372036854775807  9223372036854775807  0
            othersockbuf     233512     276832  9223372036854775807  9223372036854775807  0
            dgramrcvbuf           0       2576  9223372036854775807  9223372036854775807  0
            numothersock        179        193  9223372036854775807  9223372036854775807  0
            dcachesize     18688898   47058779  9223372036854775807  9223372036854775807  0
            numfile            1141       1410  9223372036854775807  9223372036854775807  0
            dummy                 0          0  9223372036854775807  9223372036854775807  0
            dummy                 0          0  9223372036854775807  9223372036854775807  0
            dummy                 0          0  9223372036854775807  9223372036854775807  0
            numiptent            20         20  9223372036854775807  9223372036854775807  0
vzlist
vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost
       102        121 running   10.100.100.18   mediawiki
       114         49 running   -               fbc14
- The fields available for selective display (with the -o option) are: ctid, nproc, status, ip, hostname.
- Field names are case sensitive and are used together with -H (suppress the header), as in -H -o field1,field2,... (see the example below).
- The binary is at: /usr/sbin/vzlist
- by default, vzlist lists only RUNNING CTs, stopped ones won't appear in its output (qm list for VMs, instead, lists also stopped ones)
See "man vzlist" for detailled options.
Backup
vzdump
vzdump - backup utility for virtual machine - see "man vzdump"
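A typical invocation, backing up guest 101 with an LZO-compressed snapshot into the default dump directory (VMID and path are only examples):
vzdump 101 --mode snapshot --compress lzo --dumpdir /var/lib/vz/dump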
vzrestore
vzrestore - restore OpenVZ vzdump backups - see "man vzrestore"
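Basic syntax is vzrestore &lt;archive&gt; &lt;CTID&gt;; for example (the archive name is illustrative only):
vzrestore /var/lib/vz/dump/vzdump-openvz-101-2013_06_01-12_00_00.tar.lzo 101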
qmrestore
qmrestore - restore KVM vzdump backups - see "man qmrestore"
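Basic syntax is qmrestore &lt;archive&gt; &lt;VMID&gt;, optionally with --storage to choose the target storage; for example (the archive name is illustrative only):
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2013_06_01-12_00_00.vma.lzo 100 --storage local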
Cluster management
pvecm
PVE Cluster Manager - see "man pvecm" for details.
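Commonly used subcommands (cluster name and address are only examples):
pvecm create YOUR-CLUSTER-NAME
pvecm add IP-ADDRESS-OF-EXISTING-NODE
pvecm status
pvecm nodes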
Software version check
pveversion
Proxmox VE version info - Print version information for Proxmox VE packages.
USAGE
pveversion [--verbose]
- Without any argument it shows the version of pve-manager, something like:
- pve-manager/3.0/957f0862
- With the -v (or --verbose) argument it shows the versions of the PVE-related packages, like:
- pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
- running kernel: 2.6.32-20-pve
- proxmox-ve-2.6.32: 3.0-100
- pve-kernel-2.6.32-20-pve: 2.6.32-100
- lvm2: 2.02.95-pve3
- clvm: 2.02.95-pve3
- corosync-pve: 1.4.5-1
- openais-pve: 1.1.4-3
- libqb0: 0.11.1-2
- redhat-cluster-pve: 3.2.0-2
- resource-agents-pve: 3.9.2-4
- fence-agents-pve: 4.0.0-1
- pve-cluster: 3.0-4
- qemu-server: 3.0-20
- pve-firmware: 1.0-22
- libpve-common-perl: 3.0-4
- libpve-access-control: 3.0-4
- libpve-storage-perl: 3.0-8
- vncterm: 1.1-4
- vzctl: 4.0-1pve3
- vzprocps: 2.0.11-2
- vzquota: 3.1-2
- pve-qemu-kvm: 1.4-13
- ksm-control-daemon: 1.1-1
LVM
Most of the commands in LVM are very similar to each other. Each valid command name begins with one of the following prefixes:
- Physical Volume = pv
- Volume Group = vg
- Logical Volume = lv
USAGE
Suffix      | LVM      | PV (Physical Volume) | VG (Volume Group) | LV (Logical Volume)
s           | No       | Yes                  | Yes               | Yes
display     | No       | Yes                  | Yes               | Yes
create      | No       | Yes                  | Yes               | Yes
rename      | No       | No                   | Yes               | Yes
change      | Yes      | Yes                  | Yes               | Yes
move        | No       | Yes                  | Yes               | No
extend      | No       | No                   | Yes               | Yes
reduce      | No       | No                   | Yes               | Yes
resize      | No       | Yes                  | No                | Yes
split       | No       | No                   | Yes               | No
merge       | No       | No                   | Yes               | No
convert     | No       | No                   | Yes               | Yes
import      | No       | No                   | Yes               | No
export      | No       | No                   | Yes               | No
importclone | No       | No                   | Yes               | No
cfgbackup   | No       | No                   | Yes               | No
cfgrestore  | No       | No                   | Yes               | No
ck          | No       | Yes                  | Yes               | No
scan        | diskscan | Yes                  | Yes               | Yes
mknodes     | No       | No                   | Yes               | No
remove      | No       | Yes                  | Yes               | Yes
dump        | Yes      | No                   | No                | No
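Reading the table: "Yes" means the prefix plus suffix exists as a command, so the "s" row stands for pvs, vgs and lvs. A short worked example, creating a volume group and a logical volume on a spare disk (device and names are only examples):
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb
lvcreate -L 10G -n lvdata vgdata
pvs
vgs
lvs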
Debian Appliance Builder
dab - Debian Appliance Builder command line tool - see "man dab"
Other useful tools
pveperf
Simple host performance test - see "man pveperf".
Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like: "sh: /proc/sys/vm/drop_caches: Permission denied unable to open HD at /usr/bin/pveperf line 149."
Example output
CPU BOGOMIPS:      26341.80
REGEX/SECOND:      1554770
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    49.83 MB/sec
AVERAGE SEEK TIME: 14.16 ms
FSYNCS/SECOND:     1060.47
DNS EXT:           314.58 ms
DNS INT:           236.94 ms (mypve.com)
pvesubscription
For managing a node's subscription key - see "man pvesubscription"
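For example, to display the current subscription state or upload a key (the key itself is a placeholder):
pvesubscription get
pvesubscription set &lt;subscription-key&gt;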