Command line tools

From Proxmox VE
{{PVE3}}

== Introduction ==

This page lists some important Proxmox VE 3.x and Debian command line tools. All CLI tools also have manual pages.
  
== KVM specific ==

=== qm ===

qm - qemu/kvm manager

To view a list of KVMs:

 qm list

<pre>
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 alpha001             running    2048               8.00 5858
       101 WINXP002             running    1024              32.00 6177
       102 Win2K                running    2048              32.00 113893
       105 axe002               running    16384            100.00 279613
</pre>

To see how much memory your machine 101 has:

 qm config 101 | grep ^memory

<pre>
memory: 2048
</pre>
  
See "man qm" for further info.

QEMU Emulator: [http://wiki.qemu.org/download/qemu-doc.html#QEMU-PC-System-emulator User Documentation] | [https://en.wikibooks.org/wiki/QEMU/Monitor Monitor Commands]
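The tabular output of qm list lends itself to standard text tools. As a minimal sketch, the snippet below filters the VMIDs of running machines; since qm only exists on a PVE node, it uses a hypothetical here-doc copy of sample output (the qm_list_sample helper is purely illustrative).

```shell
# Illustrative sample of `qm list` output; on a real node you would
# pipe the command itself: qm list | awk 'NR > 1 && $3 == "running" { print $1 }'
qm_list_sample() {
cat <<'EOF'
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 alpha001             running    2048               8.00 5858
       101 WINXP002             running    1024              32.00 6177
       102 Win2K                stopped    2048              32.00 -
EOF
}

# Skip the header row; print VMID (column 1) where STATUS (column 3) is "running".
qm_list_sample | awk 'NR > 1 && $3 == "running" { print $1 }'
```

This prints the VMIDs 100 and 101, one per line, and can feed a loop such as `for id in $(...); do qm config "$id"; done`.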
  
== OpenVZ specific ==

=== vzps ===

This utility can be run on the node just like the standard Linux ps; for general information, consult the ps man page. vzps provides additional functionality for monitoring the separate containers running on the node.

The vzps utility adds the following functionality:

* The -E CT_ID command line switch can be used to show only the processes running inside the container with the specified ID.
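Since vzps only exists on an OpenVZ host, here is a runnable sketch of the same idea: filtering a process table by a container-ID column. The sample table and the ps_sample helper are hypothetical, for illustration only.

```shell
# Hypothetical process table with a CTID column (vzps/vztop can display one);
# keeping only container 101's rows mirrors the effect of `vzps -E 101`.
ps_sample() {
cat <<'EOF'
CTID   PID COMMAND
0        1 init
101   4242 apache2
101   4243 apache2
102   5151 sshd
EOF
}

# Keep the header line plus rows whose CTID (column 1) is 101.
ps_sample | awk 'NR == 1 || $1 == "101"'
```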
  
=== pvectl ===

pvectl - vzctl wrapper to manage OpenVZ containers - see "man pvectl"
  
=== vzctl ===

vzctl - utility to control an OpenVZ container - see "man vzctl"
=== vztop ===

This utility can be run on the node just like the standard Linux top; for general information, consult the top man page. vztop provides additional functionality for monitoring the separate containers running on the node.

The vztop utility adds the following functionality:

* The -E CT_ID command line switch can be used to show only the processes running inside the container with the specified ID. If -1 is given as CT_ID, the processes of all running containers are displayed.
* The e interactive command (the key pressed while top is running) can be used to show/hide the CTID column, which displays the container in which a particular process is running (0 stands for the hardware node itself).
* The E interactive command can be used to select another container whose processes are to be shown. If -1 is given, the processes of all running containers are displayed.

vztop - display top CPU processes:
<pre>
 10:28:52  up 31 days, 11:18,  1 user,  load average: 0.07, 0.06, 0.02
197 processes: 196 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:   0.2% user   0.1% system    0.0% nice   0.0% iowait  99.2% idle
CPU1 states:   1.3% user   2.1% system    0.0% nice   0.0% iowait  96.1% idle
CPU2 states:   6.3% user   0.1% system    0.0% nice   0.0% iowait  93.1% idle
CPU3 states:   2.0% user   1.0% system    0.0% nice   0.0% iowait  96.4% idle
Mem:  16251688k av, 16032764k used,  218924k free,       0k shrd,  364120k buff
      4448576k active,            10983652k inactive
Swap: 15728632k av,      36k used, 15728596k free                 14170784k cached

   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
390694 root      20   0  759M 528M  2432 S     6.7  3.3 244:53   1 kvm
566767 root      20   0 40464 8908  5320 S     6.7  0.0   0:54   0 apache2
  7898 root      20   0  181M  34M  4076 S     0.3  0.2  73:12   2 pvestatd
     1 root      20   0 10604  848   744 S     0.0  0.0   0:16   0 init
     2 root      20   0     0    0     0 SW    0.0  0.0   0:00   2 kthreadd
     3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
     4 root      20   0     0    0     0 SW    0.0  0.0   0:19   0 ksoftirqd/0
     5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
     6 root      RT   0     0    0     0 SW    0.0  0.0   0:02   0 watchdog/0
     7 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
     8 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
     9 root      20   0     0    0     0 SW    0.0  0.0   0:24   1 ksoftirqd/1
    10 root      RT   0     0    0     0 SW    0.0  0.0   0:01   1 watchdog/1
    11 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 migration/2
    12 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
    13 root      20   0     0    0     0 SW    0.0  0.0   0:12   2 ksoftirqd/2
    14 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 watchdog/2
    15 root      RT   0     0    0     0 SW    0.0  0.0   0:07   3 migration/3
..
..
</pre>
  
=== user_beancounters ===

 cat /proc/user_beancounters
 
 
<pre>
Version: 2.5
       uid  resource                     held              maxheld              barrier                limit              failcnt
      101:  kmemsize                 11217945             16650240            243269632            268435456                    0
            lockedpages                     0                  418                65536                65536                    0
            privvmpages                134161               221093  9223372036854775807  9223372036854775807                    0
            shmpages                       16                 3232  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0                    0                    0                    0
            numproc                        56                   99  9223372036854775807  9223372036854775807                    0
            physpages                   96245               122946                    0               131072                    0
            vmguarpages                     0                    0                    0  9223372036854775807                    0
            oomguarpages                53689                78279                    0  9223372036854775807                    0
            numtcpsock                     49                   82  9223372036854775807  9223372036854775807                    0
            numflock                        8                   20  9223372036854775807  9223372036854775807                    0
            numpty                          0                    6  9223372036854775807  9223372036854775807                    0
            numsiginfo                      0                   33  9223372036854775807  9223372036854775807                    0
            tcpsndbuf                  927856              1619344  9223372036854775807  9223372036854775807                    0
            tcprcvbuf                  802816              1343488  9223372036854775807  9223372036854775807                    0
            othersockbuf               152592               481248  9223372036854775807  9223372036854775807                    0
            dgramrcvbuf                     0                 4624  9223372036854775807  9223372036854775807                    0
            numothersock                  124                  152  9223372036854775807  9223372036854775807                    0
            dcachesize                6032652             12378728            121634816            134217728                    0
            numfile                       629                  915  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            numiptent                      20                   20  9223372036854775807  9223372036854775807                    0
        0:  kmemsize                 34634728             65306624  9223372036854775807  9223372036854775807                    0
            lockedpages                  1360                 6721  9223372036854775807  9223372036854775807                    0
            privvmpages                317475               507560  9223372036854775807  9223372036854775807                    0
            shmpages                     4738                 9645  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            numproc                       190                  220  9223372036854775807  9223372036854775807                    0
            physpages                 3769163              3867750  9223372036854775807  9223372036854775807                    0
            vmguarpages                     0                    0                    0                    0                    0
            oomguarpages               182160               205746  9223372036854775807  9223372036854775807                    0
            numtcpsock                     12                   29  9223372036854775807  9223372036854775807                    0
            numflock                        9                   13  9223372036854775807  9223372036854775807                    0
            numpty                          4                   12  9223372036854775807  9223372036854775807                    0
            numsiginfo                      3                   84  9223372036854775807  9223372036854775807                    0
            tcpsndbuf                  249512              1760544  9223372036854775807  9223372036854775807                    0
            tcprcvbuf                  198920              1142000  9223372036854775807  9223372036854775807                    0
            othersockbuf               233512               276832  9223372036854775807  9223372036854775807                    0
            dgramrcvbuf                     0                 2576  9223372036854775807  9223372036854775807                    0
            numothersock                  179                  193  9223372036854775807  9223372036854775807                    0
            dcachesize               18688898             47058779  9223372036854775807  9223372036854775807                    0
            numfile                      1141                 1410  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            numiptent                      20                   20  9223372036854775807  9223372036854775807                    0
</pre>
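The interesting column in /proc/user_beancounters is the last one, failcnt: a non-zero value means the container hit that resource limit. A sketch that flags such rows, run here against a hypothetical two-line excerpt (the bc_sample helper is illustrative) rather than the live file:

```shell
# Hypothetical excerpt of /proc/user_beancounters data rows; the second
# row has a non-zero fail counter. On a real node you would read the file:
#   awk '$NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters
bc_sample() {
cat <<'EOF'
            numfile                       629                  915  9223372036854775807  9223372036854775807                    0
            numtcpsock                     49                   82                  100                  100                    3
EOF
}

# Report resources whose fail counter (last field) is non-zero.
bc_sample | awk '$NF > 0 { print $1, "failcnt =", $NF }'
```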
  
=== vzlist ===

<pre>
vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost
       102        121 running   10.100.100.18   mediawiki
       114         49 running   -               fbc14
</pre>

* The fields for selective display (the '''-o''' option) are: '''ctid, nproc, status, ip, hostname'''.
* All are case sensitive and are used with the options '''-H''' (no header) and '''-o''' [field1, field2, ...].
* The binary is at <tt>/usr/sbin/vzlist</tt>.
* By default, vzlist lists only running CTs; stopped ones won't appear in its output (qm list for VMs, by contrast, also lists stopped ones).

See "man vzlist" for detailed options.
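For scripting, -H and -o combine to give clean, header-less field output, e.g. vzlist -H -o ctid,status. Since vzlist only exists on an OpenVZ host, the sketch below applies the same column selection to a here-doc copy of the sample output above (the vzlist_sample helper is illustrative):

```shell
# Copy of the sample `vzlist` output shown above.
vzlist_sample() {
cat <<'EOF'
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost
       102        121 running   10.100.100.18   mediawiki
EOF
}

# Emulate `vzlist -H -o ctid,status`: drop the header, keep CTID and STATUS.
vzlist_sample | awk 'NR > 1 { print $1, $3 }'
```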
  
== Backup ==

=== vzdump ===

vzdump - backup utility for virtual machines - see "man vzdump"
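vzdump encodes the virtual machine type, the VMID, and the backup time into the archive name, for example vzdump-openvz-105-2009_10_09-11_04_43.tar, so several backups can share one directory. A sketch of composing such a name (the variable names are illustrative; assumes GNU date):

```shell
# Compose a vzdump-style archive name: vzdump-<type>-<vmid>-<date>-<time>.tar
vmtype=openvz
vmid=105
# GNU date: -d parses a fixed timestamp here so the result is reproducible
stamp=$(date -d '2009-10-09 11:04:43' '+%Y_%m_%d-%H_%M_%S')
name="vzdump-${vmtype}-${vmid}-${stamp}.tar"
echo "$name"
```

This prints vzdump-openvz-105-2009_10_09-11_04_43.tar, matching the example name above.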
  
=== vzrestore ===

vzrestore - restore OpenVZ vzdump backups - see "man vzrestore"
  
=== qmrestore ===

qmrestore - restore KVM vzdump backups - see "man qmrestore"
== Cluster management ==

=== pvecm ===

PVE Cluster Manager - see "man pvecm" for details.
== Software version check ==

=== pveversion ===

Proxmox VE version info - prints version information for Proxmox VE packages.

==== USAGE ====

 pveversion [--verbose]
* Without any argument it shows the version of pve-manager, something like:

:pve-manager/3.0/957f0862

* With the -v argument it shows a list of package versions related to PVE, like:

:pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
:running kernel: 2.6.32-20-pve
:proxmox-ve-2.6.32: 3.0-100
:pve-kernel-2.6.32-20-pve: 2.6.32-100
:lvm2: 2.02.95-pve3
:clvm: 2.02.95-pve3
:corosync-pve: 1.4.5-1
:openais-pve: 1.1.4-3
:libqb0: 0.11.1-2
:redhat-cluster-pve: 3.2.0-2
:resource-agents-pve: 3.9.2-4
:fence-agents-pve: 4.0.0-1
:pve-cluster: 3.0-4
:qemu-server: 3.0-20
:pve-firmware: 1.0-22
:libpve-common-perl: 3.0-4
:libpve-access-control: 3.0-4
:libpve-storage-perl: 3.0-8
:vncterm: 1.1-4
:vzctl: 4.0-1pve3
:vzprocps: 2.0.11-2
:vzquota: 3.1-2
:pve-qemu-kvm: 1.4-13
:ksm-control-daemon: 1.1-1
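Scripts often need a single package version out of pveversion -v. A sketch with sed over a here-doc copy of two of the lines above (the pveversion_sample helper is illustrative; on a real node, pipe pveversion -v instead):

```shell
# Two representative lines of `pveversion -v` output.
pveversion_sample() {
cat <<'EOF'
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
EOF
}

# Print only the running kernel version by stripping the label.
pveversion_sample | sed -n 's/^running kernel: //p'
```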
  
== LVM ==

Most of the commands in LVM are very similar to each other. Each valid command is prefixed with one of the following:

* Physical Volume = pv
* Volume Group = vg
* Logical Volume = lv
  
==== USAGE ====

{| border="1" cellpadding="1" cellspacing="1"
|-
! Command !! LVM !! PV !! VG !! LV
|-
| s || No || Yes || Yes || Yes
|-
| display || No || Yes || Yes || Yes
|-
| create || No || Yes || Yes || Yes
|-
| rename || No || No || Yes || Yes
|-
| change || Yes || Yes || Yes || Yes
|-
| move || No || Yes || Yes || No
|-
| extend || No || No || Yes || Yes
|-
| reduce || No || No || Yes || Yes
|-
| resize || No || Yes || No || Yes
|-
| split || No || No || Yes || No
|-
| merge || No || No || Yes || No
|-
| convert || No || No || Yes || Yes
|-
| import || No || No || Yes || No
|-
| export || No || No || Yes || No
|-
| importclone || No || No || Yes || No
|-
| cfgbackup || No || No || Yes || No
|-
| cfgrestore || No || No || Yes || No
|-
| ck || No || Yes || Yes || No
|-
| scan || diskscan || Yes || Yes || Yes
|-
| mknodes || No || No || Yes || No
|-
| remove || No || Yes || Yes || Yes
|-
| dump || Yes || No || No || No
|}
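The table above composes LVM command names from a prefix and a verb; for example, "display" is available for all three object types, giving pvdisplay, vgdisplay, and lvdisplay. A trivial sketch of that composition rule:

```shell
# Compose the "display" command names from the three prefixes listed above.
for prefix in pv vg lv; do
  echo "${prefix}display"
done
```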
  
== Debian Appliance Builder ==

=== dab ===

See [[Debian Appliance Builder]]

== Other useful tools ==
=== pveperf ===

Simple host performance test - see "man pveperf". It gathers some CPU and hard disk performance data for the hard disk mounted at PATH (/ is used as the default).

 pveperf [PATH]

Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like: <<sh: /proc/sys/vm/drop_caches: Permission denied unable to open HD at /usr/bin/pveperf line 149.>>

==== Example output ====
<pre>
CPU BOGOMIPS:      26341.80
REGEX/SECOND:      1554770
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    49.83 MB/sec
AVERAGE SEEK TIME: 14.16 ms
FSYNCS/SECOND:     1060.47
DNS EXT:           314.58 ms
DNS INT:           236.94 ms (mypve.com)
</pre>
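A single metric can be pulled out of pveperf output with awk; the sketch below extracts FSYNCS/SECOND from a here-doc copy of two of the lines above (the pveperf_sample helper is illustrative; on a node, pipe pveperf itself):

```shell
# Two representative lines of pveperf output.
pveperf_sample() {
cat <<'EOF'
CPU BOGOMIPS:      26341.80
FSYNCS/SECOND:     1060.47
EOF
}

# Split each line at "colon plus spaces" and print the value
# whose label is FSYNCS/SECOND.
pveperf_sample | awk -F': *' '$1 == "FSYNCS/SECOND" { print $2 }'
```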
  
* CPU BOGOMIPS: bogomips sum of all CPUs
* REGEX/SECOND: regular expressions per second (perl performance test); should be above 300000
* HD SIZE: hard disk size
* BUFFERED READS: simple HD read test; modern HDs should reach at least 40 MB/sec
* AVERAGE SEEK TIME: tests average seek time; fast SCSI HDs reach values below 8 ms, common IDE/SATA disks 15 to 20 ms
* FSYNCS/SECOND: value should be greater than 200 (you should enable "write back" cache mode on your RAID controller; this needs a battery-backed cache (BBWC))
* DNS EXT: average time to resolve an external DNS name
* DNS INT: average time to resolve a local DNS name

=== pvesubscription ===

For managing a node's subscription key - see "man pvesubscription"
=== Third party CLI Tools ===

* [https://raymii.org/s/software/ProxBash.html ProxBash]
  
[[Category: HOWTO]] [[Category: Installation]] [[Category:Proxmox VE 3.x]]

Latest revision as of 14:05, 27 October 2017

Note: This article is about the previous Proxmox VE 3.x releases.

            tcpsndbuf                  249512              1760544  9223372036854775807  9223372036854775807                    0
            tcprcvbuf                  198920              1142000  9223372036854775807  9223372036854775807                    0
            othersockbuf               233512               276832  9223372036854775807  9223372036854775807                    0
            dgramrcvbuf                     0                 2576  9223372036854775807  9223372036854775807                    0
            numothersock                  179                  193  9223372036854775807  9223372036854775807                    0
            dcachesize               18688898             47058779  9223372036854775807  9223372036854775807                    0
            numfile                      1141                 1410  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            numiptent                      20                   20  9223372036854775807  9223372036854775807                    0
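The failcnt column (the last field of each resource line) counts how often that resource limit was hit; non-zero values there are what to look for. A minimal sketch filtering for non-zero failcnt — it uses embedded sample data (the numproc failure is hypothetical) so it runs anywhere; on a real node point the same awk program at /proc/user_beancounters instead:

```shell
# Report beancounter resources whose failcnt (last column) is non-zero.
# Sample data below is illustrative (the numproc failure is hypothetical);
# on a real node run the same awk program against /proc/user_beancounters.
awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0 { print $(NF-5), "failcnt:", $NF }' <<'EOF'
Version: 2.5
       uid  resource      held   maxheld   barrier     limit  failcnt
      101:  kmemsize   2661983   2788048  11055923  11377049        0
            numproc         56        99  9223372036854775807  9223372036854775807        3
            numtcpsock      49        82  9223372036854775807  9223372036854775807        0
EOF
# → numproc failcnt: 3
```

Resource lines have either six fields (continuation lines) or seven (the first line of a CT, which carries the uid), so the resource name is addressed relative to the end of the line as $(NF-5).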

vzlist

vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         26 running   -               localhost
       102        121 running   10.100.100.18   mediawiki
       114         49 running   -               fbc14
  • The fields available for selective display (-o option) are: ctid, nproc, status, ip, hostname.
  • Field names are case sensitive; combine -o field1,field2,... with -H to suppress the header line.
  • The binary is at: /usr/sbin/vzlist
  • By default vzlist lists only running CTs; stopped ones won't appear in its output (qm list, in contrast, also lists stopped VMs).

See "man vzlist" for detailed options.
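The -H and -o options make vzlist output easy to consume from scripts. A small sketch, using hypothetical captured lines in place of a real `vzlist -a -H -o ctid,status` call:

```shell
# Extract the CTIDs of running containers from vzlist output.
# The sample lines stand in for: vzlist -a -H -o ctid,status
printf '%s\n' \
  '       101 running' \
  '       102 running' \
  '       114 stopped' |
awk '$2 == "running" { print $1 }'
# → 101
#   102
```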

Backup

vzdump

vzdump - backup utility for virtual machines - see "man vzdump"

vzrestore

vzrestore - restore OpenVZ vzdump backups - see "man vzrestore"

qmrestore

qmrestore - restore KVM vzdump backups - see "man qmrestore"

Cluster management

pvecm

PVE Cluster Manager - see "man pvecm" for details.

Software version check

pveversion

Proxmox VE version info - Print version information for Proxmox VE packages.

USAGE

pveversion [--verbose]

  • Without any argument it shows the version of pve-manager, something like:
pve-manager/3.0/957f0862
  • With the -v (--verbose) argument it shows the versions of all PVE-related packages, like:
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
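In scripts, a single package's version can be pulled out of the `pveversion -v` output with awk; a sketch using lines from the sample output above in place of the real command:

```shell
# Pull one package version out of `pveversion -v` output.
# The sample lines stand in for the real command's output.
printf '%s\n' \
  'pve-manager: 3.0-23 (pve-manager/3.0/957f0862)' \
  'qemu-server: 3.0-20' \
  'pve-qemu-kvm: 1.4-13' |
awk -F': ' '$1 == "qemu-server" { print $2 }'
# → 3.0-20
```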

LVM

Most LVM commands are very similar to each other: each is formed from a prefix naming the object type plus an operation suffix. The prefixes are:

  • Physical Volume = pv
  • Volume Group = vg
  • Logical Volume = lv

USAGE


The table below shows which operation suffixes exist for each prefix; "Yes" means the command exists with that prefix (e.g. pvcreate, vgcreate, lvcreate). The LVM column refers to the lvm-prefixed form (e.g. lvmdump, lvmdiskscan).

Suffix        LVM        PV    VG    LV
s             No         Yes   Yes   Yes
display       No         Yes   Yes   Yes
create        No         Yes   Yes   Yes
rename        No         No    Yes   Yes
change        Yes        Yes   Yes   Yes
move          No         Yes   Yes   No
extend        No         No    Yes   Yes
reduce        No         No    Yes   Yes
resize        No         Yes   No    Yes
split         No         No    Yes   No
merge         No         No    Yes   No
convert       No         No    Yes   Yes
import        No         No    Yes   No
export        No         No    Yes   No
importclone   No         No    Yes   No
cfgbackup     No         No    Yes   No
cfgrestore    No         No    Yes   No
ck            No         Yes   Yes   No
scan          diskscan   Yes   Yes   Yes
mknodes       No         No    Yes   No
remove        No         Yes   Yes   Yes
dump          Yes        No    No    No
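The scheme composes mechanically: object prefix plus operation suffix yields the command name. A small illustration for three of the suffixes:

```shell
# Compose LVM command names from object prefix + operation suffix,
# mirroring the table above (e.g. pvcreate, vgcreate, lvcreate).
for prefix in pv vg lv; do
  for suffix in create display remove; do
    echo "${prefix}${suffix}"
  done
done
```

This prints pvcreate, pvdisplay, pvremove, then the vg and lv variants, all of which exist as real LVM commands per the table.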


Debian Appliance Builder

dab

See Debian Appliance Builder

Other useful tools

pveperf

Simple host performance test - see "man pveperf".

Note: this command requires root privileges (or sudo); run unprivileged, it fails after printing the "HD SIZE" value with an error like: "sh: /proc/sys/vm/drop_caches: Permission denied" followed by "unable to open HD at /usr/bin/pveperf line 149."

Example output

CPU BOGOMIPS:      26341.80
REGEX/SECOND:      1554770
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    49.83 MB/sec
AVERAGE SEEK TIME: 14.16 ms
FSYNCS/SECOND:     1060.47
DNS EXT:           314.58 ms
DNS INT:           236.94 ms (mypve.com)
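For scripted health checks, the FSYNCS/SECOND figure is often the one worth watching (low values suggest slow storage or a missing write cache). A sketch parsing that line; the sample value comes from the output above, and the 500/s threshold is an arbitrary illustration, not an official recommendation:

```shell
# Warn when FSYNCS/SECOND from pveperf output falls below a chosen threshold.
# The printf line stands in for: pveperf | grep FSYNCS
fsyncs=$(printf 'FSYNCS/SECOND:     1060.47\n' | awk '{ print $2 }')
awk -v v="$fsyncs" 'BEGIN { if (v + 0 < 500) print "slow storage"; else print "fsync rate ok" }'
# → fsync rate ok
```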

pvesubscription

For managing a node's subscription key - see "man pvesubscription"

Third party CLI Tools