https://pve.proxmox.com/mediawiki/api.php?action=feedcontributions&user=Peetaur&feedformat=atomProxmox VE - User contributions [en]2024-03-28T13:12:56ZUser contributionsMediaWiki 1.35.11https://pve.proxmox.com/mediawiki/index.php?title=VM_Templates_and_Clones&diff=10049VM Templates and Clones2018-01-31T14:40:44Z<p>Peetaur: /* Deploy a VM from a Template */ fix some bad grammar and spelling</p>
<hr />
<div>== Introduction ==<br />
A template is a fully pre-configured operating system image that can be used to deploy KVM virtual machines. Creating a dedicated template is usually preferred over [[Duplicate Virtual Machines|cloning an existing VM]].<br />
<br />
Deploying virtual machines from templates is blazing fast, very convenient, and if you use linked clones you can optimize your storage by using base images and thin provisioning.<br />
<br />
Proxmox VE has included container-based templates since 2008; beginning with the 3.x series, KVM templates can also be created and deployed.<br />
<br />
== Definitions ==<br />
<br />
*'''VM '''- KVM based virtual machine <br />
*'''Templates '''- Templates are pre-configured operating system environments that deploy in a couple of clicks <br />
*'''Linked Clone''' - A linked clone VM requires less disk space but cannot run without access to the base VM Template <br />
*'''Full Clone''' - A full clone VM is a complete copy and is fully independent of the original VM or VM Template, but it requires the same disk space as the original.<br />
<br />
== Create VM Template ==<br />
<br />
Templates are created by converting a VM to a template. <br />
<br />
#Install your VM with all drivers and needed software packages <br />
#Remove all user data, passwords and keys - run sysprep on Windows, or similar tools or scripts on Linux, then power off the VM. <br />
#'''Right-click''' the VM and select "Convert to template"<br />
<br />
'''Note'''<br />
<br />
As soon as the VM is converted, it cannot be started anymore and the icon changes. If you want to modify an existing template, you need to deploy a full clone from this template and do the steps above again.<br />
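<br />
The same conversion can also be done from the command line. A minimal sketch, assuming the VM ID is 100 and that your qm version provides the ''template'' subcommand (check ''qm help''):<br />
<pre><br />
# stop the VM first, then convert it into a template<br />
qm stop 100<br />
qm template 100<br />
</pre><br />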
<br />
=== OS specific notes for Templates ===<br />
For production use it is highly recommended that a template does not include any data, user accounts or SSH keys, so remove all of these before you convert the VM to a template. On Linux systems you should remove SSH host keys, persistent network MAC configuration, user accounts and user data. Windows offers several tools for this, e.g. sysprep.<br />
<br />
For testing purposes it may be useful to use a fully installed OS as a template.<br />
<br />
==== GNU/Linux ====<br />
*Ubuntu: e.g. install with 'OEM mode' (press F4)<br />
*CentOS7: Most steps in this [https://github.com/rharmonson/richtech/wiki/CentOS-7-1511-Minimal-oVirt-Template guide] are valid for PVE too.<br />
==== Windows 7 ====<br />
*[http://technet.microsoft.com/en-us/library/ee523217(v=ws.10).aspx Building a Standard Image of Windows 7: Step-by-Step Guide]<br />
<br />
== Deploy a VM from a Template ==<br />
<br />
Right-click the template, and select "Clone".<br />
<br />
=== Full Clone ===<br />
A full clone VM is a complete copy and is fully independent from the original VM or VM Template, but it requires the same disk space as the original. <br />
<br />
=== Linked Clone ===<br />
A linked clone VM requires less disk space but cannot run without access to the base VM Template.<br />
<br />
Linked clones work with these storages: files in raw, qcow2 or vmdk format (either on local storage or NFS); LVM-thin, ZFS, RBD, Sheepdog, Nexenta.<br />
<br />
They are not supported on LVM and iSCSI storage.<br />
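<br />
Cloning can also be scripted on the command line with ''qm clone''. A minimal sketch, assuming template ID 9000 and new VM IDs 123/124 (option names may differ slightly between versions, check ''qm help clone''):<br />
<pre><br />
# linked clone (default on storages that support it)<br />
qm clone 9000 123 --name web01<br />
<br />
# full clone (independent copy, needs the full disk space)<br />
qm clone 9000 124 --name web02 --full<br />
</pre><br />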
<br />
== Video Tutorials ==<br />
tbd: [http://www.youtube.com/user/ProxmoxVE Proxmox VE Youtube channel]<br />
<br />
== Troubleshooting ==<br />
<br />
[[Category:HOWTO]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Install_Proxmox_VE_on_Debian_Jessie&diff=8116Install Proxmox VE on Debian Jessie2016-01-12T12:14:57Z<p>Peetaur: /* Connect to the Proxmox VE web interface */ added note about using root account if you didn't set up users yet</p>
<hr />
<div>== Introduction ==<br />
<br />
The installation of a supported Proxmox VE server should be done via [[Bare-metal_ISO_Installer]]. In some cases it makes sense to install Proxmox VE on top of a running Debian Jessie 64-bit system, especially if you want a custom partition layout. For this HowTo the following Debian Jessie ISO was used: [http://cdimage.debian.org/debian-cd/8.2.0/amd64/iso-cd/debian-8.2.0-amd64-netinst.iso debian-8.2.0-amd64-netinst.iso].<br />
<br />
== Install a standard Debian Jessie (amd64) ==<br />
Install a standard Debian Jessie, for details see [http://www.debian.org Debian], and select a fixed IP.<br />
It is recommended to only install the "standard" package selection and nothing else, as Proxmox VE brings its own packages for qemu and lxc.<br />
During installation, partition the hard disk manually and use the "Configure the Logical Volume Manager" option to create a volume group called "pve" and three logical volumes called "swap", "root" and "data". The mount point for "root" will be "/"; for "data", select the manual option and enter "/var/lib/vz". You can format with ext4.<br />
<br />
You can also create empty partitions during installation and then create [[ZFS]] pool on them.<br />
The suggested partition layout with LVM looks like this:<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 122 975872 83 Linux<br />
/dev/sda2 122 5222 40965120 8e Linux LVM<br />
<br />
LVM:<br />
root@pvedebian:~# lvs<br />
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert<br />
data pve -wi-ao---- 68.25g<br />
root pve -wi-ao---- 29.75g<br />
swap pve -wi-ao---- 7.00g<br />
<br />
<br />
<br />
=== Add an /etc/hosts entry for your IP address ===<br />
<br />
Please make sure that your hostname is resolvable via /etc/hosts, i.e. you need an entry in /etc/hosts which assigns an IPv4 address to that hostname.<br />
<br />
'''Note''': Make sure that no IPv6 address for your hostname is specified in `/etc/hosts`<br />
<br />
For instance, if your IP address is 192.168.15.77 and your hostname is prox4m1, then your <tt>/etc/hosts</tt> file should look like:<br />
<br />
<pre><br />
127.0.0.1 localhost.localdomain localhost<br />
192.168.15.77 prox4m1.proxmox.com prox4m1 pvelocalhost<br />
<br />
# The following lines are desirable for IPv6 capable hosts<br />
::1 localhost ip6-localhost ip6-loopback<br />
ff02::1 ip6-allnodes<br />
ff02::2 ip6-allrouters<br />
</pre><br />
<br />
You can test if your setup is ok using the '''getent''' command:<br />
<pre><br />
#verify that your hostname is resolved<br />
getent hosts $(hostname)<br />
192.168.15.77 prox4m1.proxmox.com prox4m1 pvelocalhost<br />
</pre><br />
<pre><br />
# verify that your IP address is resolved <br />
getent hosts 192.168.15.77<br />
192.168.15.77 prox4m1.proxmox.com prox4m1 pvelocalhost<br />
</pre><br />
<br />
== Install Proxmox VE ==<br />
=== Adapt your sources.list ===<br />
<br />
Add the Proxmox VE repository:<br />
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list<br />
<br />
<b>NOTE:</b> Due to a bug in debian/apt(?), it may be necessary to alter the above sources.list entry as follows if apt-get complains about not being able to find /binary-i386 ("Unable to find expected entry 'pve/binary-i386/Packages'"), despite it being a 64-bit Debian install:<br />
<pre>deb [arch=amd64] http://download.proxmox.com/debian jessie pve-no-subscription</pre><br />
<br />
If apt-get fails to download some files over '''http://''', replace it with '''ftp://''', especially in the first two URLs above.<br />
<br />
Add the Proxmox VE repository key:<br />
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -<br />
<br />
Update your repository and system by running:<br />
apt-get update && apt-get dist-upgrade<br />
<br />
=== Install Proxmox VE packages ===<br />
Install the Proxmox VE packages:<br />
<br />
apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon open-iscsi<br />
<br />
Accept the suggestion to remove Exim and configure postfix according to your network.<br />
If you have a mail server in your network, you should configure postfix as a '''satellite system''',<br />
and your existing mail server will be the 'relay host' which will route the emails sent by the <br />
Proxmox VE server to the end recipient.<br />
If you don't know what to enter here, choose '''local only'''. <br />
<br />
Finally, reboot your system; the new Proxmox VE kernel should be selected automatically in the GRUB menu.<br />
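<br />
After the reboot you can verify that the Proxmox VE kernel is running; the exact version string will differ on your system:<br />
<pre><br />
uname -r<br />
# a kernel name ending in -pve indicates the Proxmox VE kernel<br />
</pre><br />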
<br />
== Connect to the Proxmox VE web interface ==<br />
Connect to the admin web interface (<nowiki>https://youripaddress:8006</nowiki>), create a bridge called '''vmbr0''', and add your first network interface to it. If you have a fresh install and didn't add any users yet, you should use the root account with your linux root password, and select "PAM Authentication" to log in.<br />
<br />
[[Image:Screen-vmbr0-setup-for-pve2.png||Adapt vmbr0 settings]]<br />
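<br />
If you prefer to create the bridge by hand instead of via the GUI, a minimal sketch of the relevant part of ''/etc/network/interfaces'' looks like this (the IP addresses and the physical interface name eth0 are examples, adapt them to your setup):<br />
<pre><br />
auto vmbr0<br />
iface vmbr0 inet static<br />
        address 192.168.15.77<br />
        netmask 255.255.255.0<br />
        gateway 192.168.15.1<br />
        bridge_ports eth0<br />
        bridge_stp off<br />
        bridge_fd 0<br />
</pre><br />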
<br />
== Configure apt to use the new packages repositories ==<br />
In order to get the latest updates, you need to add one of the new package repositories; see [[Package repositories]]<br />
<br />
== Troubleshooting ==<br />
=== resolv.conf gets overwritten ===<br />
The PVE 4 GUI expects to control DNS management and will no longer take its DNS settings from /etc/network/interfaces.<br />
Any package that auto-generates (overwrites) /etc/resolv.conf will cause DNS to fail,<br />
e.g. the packages 'resolvconf' for IPv4 and 'rdnssd' for IPv6.<br />
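<br />
One possible fix, assuming you do not need those packages for anything else, is to remove them and let Proxmox VE manage the file:<br />
<pre><br />
apt-get purge resolvconf rdnssd<br />
# then set your DNS servers via the GUI (node -> DNS) or directly, e.g.:<br />
echo "nameserver 192.168.15.1" > /etc/resolv.conf<br />
</pre><br />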
<br />
== Optional Steps ==<br />
=== Optional: Remove the Debian kernel ===<br />
apt-get remove linux-image-amd64 linux-image-3.16.0-4-amd64 linux-base<br />
<br />
Check grub2 config by running:<br />
update-grub<br />
<br />
=== Optional: Developer Workstations with Proxmox VE and X11 ===<br />
Proxmox VE is primarily used as a virtualization platform with NO additional software installed. In some cases it makes sense to have a full desktop running on Proxmox VE, for example for developers using Proxmox VE as their primary workstation/desktop.<br />
<br />
For example, just install XFCE4 desktop and Firefox/Iceweasel browser:<br />
apt-get install xfce4 iceweasel lightdm<br />
<br />
If you prefer LXDE desktop instead just do:<br />
apt-get install lxde iceweasel<br />
<br />
Make sure network-manager is not installed, otherwise pve-cluster will not start in some cases:<br />
apt-get purge network-manager<br />
<br />
[[Category: HOWTO]][[Category: Installation]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Talk:Pvectl_manual&diff=6690Talk:Pvectl manual2014-09-11T13:54:32Z<p>Peetaur: Created page with "Someone please make sure the "pvectl: 3.2-4/e24a91c1" line is right? I don't know where that should come from. I just pasted from pveversion command, and made the page to look..."</p>
<hr />
<div>Someone please make sure the "pvectl: 3.2-4/e24a91c1" line is right? I don't know where that should come from. I just pasted it from the pveversion command, and made the page look like the vzctl one.</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Command_line_tools_-_PVE_3.x&diff=6688Command line tools - PVE 3.x2014-09-11T13:49:15Z<p>Peetaur: /* OpenVZ specific */ added pvectl</p>
<hr />
<div>= Introduction =<br />
<br />
This page lists the important Proxmox VE and Debian command line tools. All CLI tools also have manual pages. <br />
<br />
= KVM specific =<br />
<br />
== qm ==<br />
<br />
qm - qemu/kvm manager - see [[Manual: qm]] and [[Qm manual]] <br />
<br />
= OpenVZ specific =<br />
<br />
== vzps ==<br />
This utility program can be run on the Node just like the standard Linux ps. For information on the ps utility please consult the corresponding man page; vzps provides certain additional functionality related to monitoring separate Containers running on the Node.<br />
<br />
The vzps utility has the following functionality added:<br />
<br />
* The -E CT_ID command line switch can be used to show only the processes running inside the Container with the specified ID.<br />
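<br />
For example, to show only the processes of container 101 (the container ID is just an example):<br />
<pre><br />
vzps aux -E 101<br />
</pre><br />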
<br />
== pvectl ==<br />
<br />
pvectl - vzctl wrapper to manage OpenVZ containers - see [[Pvectl manual]] <br />
<br />
== vzctl ==<br />
<br />
vzctl - utility to control an OpenVZ container - see [[Vzctl manual]] <br />
<br />
== vztop ==<br />
This utility program can be run on the Node just like the standard Linux top. For information on the top utility please consult the corresponding man page; vztop provides certain additional functionality related to monitoring separate Containers running on the Node.<br />
<br />
The vztop utility has the following functionality added:<br />
<br />
* The -E CT_ID command line switch can be used to show only the processes running inside the Container with the ID specified. If -1 is specified as CT_ID, the processes of all running Containers are displayed.<br />
* The e interactive command (the key pressed while top is running) can be used to show/hide the CTID column, which displays the Container where a particular process is running (0 stands for the Hardware Node itself).<br />
* The E interactive command can be used to select another Container the processes of which are to be shown. If -1 is specified, the processes of all running Containers are displayed.<br />
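<br />
For example, to watch only the processes of container 101, or of all running containers (the container ID is just an example):<br />
<pre><br />
vztop -E 101<br />
vztop -E -1<br />
</pre><br />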
<br />
vztop - display top CPU processes<br />
<br />
<pre><br />
10:28:52 up 31 days, 11:18, 1 user, load average: 0.07, 0.06, 0.02<br />
197 processes: 196 sleeping, 1 running, 0 zombie, 0 stopped<br />
CPU0 states: 0.2% user 0.1% system 0.0% nice 0.0% iowait 99.2% idle<br />
CPU1 states: 1.3% user 2.1% system 0.0% nice 0.0% iowait 96.1% idle<br />
CPU2 states: 6.3% user 0.1% system 0.0% nice 0.0% iowait 93.1% idle<br />
CPU3 states: 2.0% user 1.0% system 0.0% nice 0.0% iowait 96.4% idle<br />
Mem: 16251688k av, 16032764k used, 218924k free, 0k shrd, 364120k buff<br />
4448576k active, 10983652k inactive<br />
Swap: 15728632k av, 36k used, 15728596k free 14170784k cached<br />
<br />
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND<br />
390694 root 20 0 759M 528M 2432 S 6.7 3.3 244:53 1 kvm<br />
566767 root 20 0 40464 8908 5320 S 6.7 0.0 0:54 0 apache2<br />
7898 root 20 0 181M 34M 4076 S 0.3 0.2 73:12 2 pvestatd<br />
1 root 20 0 10604 848 744 S 0.0 0.0 0:16 0 init<br />
2 root 20 0 0 0 0 SW 0.0 0.0 0:00 2 kthreadd<br />
3 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0<br />
4 root 20 0 0 0 0 SW 0.0 0.0 0:19 0 ksoftirqd/0<br />
5 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0<br />
6 root RT 0 0 0 0 SW 0.0 0.0 0:02 0 watchdog/0<br />
7 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1<br />
8 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1<br />
9 root 20 0 0 0 0 SW 0.0 0.0 0:24 1 ksoftirqd/1<br />
10 root RT 0 0 0 0 SW 0.0 0.0 0:01 1 watchdog/1<br />
11 root RT 0 0 0 0 SW 0.0 0.0 0:01 2 migration/2<br />
12 root RT 0 0 0 0 SW 0.0 0.0 0:00 2 migration/2<br />
13 root 20 0 0 0 0 SW 0.0 0.0 0:12 2 ksoftirqd/2<br />
14 root RT 0 0 0 0 SW 0.0 0.0 0:01 2 watchdog/2<br />
15 root RT 0 0 0 0 SW 0.0 0.0 0:07 3 migration/3<br />
..<br />
..<br />
</pre><br />
<br />
== user_beancounters ==<br />
<br />
cat /proc/user_beancounters<br />
<br />
<br />
<pre><br />
Version: 2.5<br />
uid resource held maxheld barrier limit failcnt<br />
101: kmemsize 11217945 16650240 243269632 268435456 0<br />
lockedpages 0 418 65536 65536 0<br />
privvmpages 134161 221093 9223372036854775807 9223372036854775807 0<br />
shmpages 16 3232 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 0 0 0<br />
numproc 56 99 9223372036854775807 9223372036854775807 0<br />
physpages 96245 122946 0 131072 0<br />
vmguarpages 0 0 0 9223372036854775807 0<br />
oomguarpages 53689 78279 0 9223372036854775807 0<br />
numtcpsock 49 82 9223372036854775807 9223372036854775807 0<br />
numflock 8 20 9223372036854775807 9223372036854775807 0<br />
numpty 0 6 9223372036854775807 9223372036854775807 0<br />
numsiginfo 0 33 9223372036854775807 9223372036854775807 0<br />
tcpsndbuf 927856 1619344 9223372036854775807 9223372036854775807 0<br />
tcprcvbuf 802816 1343488 9223372036854775807 9223372036854775807 0<br />
othersockbuf 152592 481248 9223372036854775807 9223372036854775807 0<br />
dgramrcvbuf 0 4624 9223372036854775807 9223372036854775807 0<br />
numothersock 124 152 9223372036854775807 9223372036854775807 0<br />
dcachesize 6032652 12378728 121634816 134217728 0<br />
numfile 629 915 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 0 0 0<br />
dummy 0 0 0 0 0<br />
dummy 0 0 0 0 0<br />
numiptent 20 20 9223372036854775807 9223372036854775807 0<br />
0: kmemsize 34634728 65306624 9223372036854775807 9223372036854775807 0<br />
lockedpages 1360 6721 9223372036854775807 9223372036854775807 0<br />
privvmpages 317475 507560 9223372036854775807 9223372036854775807 0<br />
shmpages 4738 9645 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 9223372036854775807 9223372036854775807 0<br />
numproc 190 220 9223372036854775807 9223372036854775807 0<br />
physpages 3769163 3867750 9223372036854775807 9223372036854775807 0<br />
vmguarpages 0 0 0 0 0<br />
oomguarpages 182160 205746 9223372036854775807 9223372036854775807 0<br />
numtcpsock 12 29 9223372036854775807 9223372036854775807 0<br />
numflock 9 13 9223372036854775807 9223372036854775807 0<br />
numpty 4 12 9223372036854775807 9223372036854775807 0<br />
numsiginfo 3 84 9223372036854775807 9223372036854775807 0<br />
tcpsndbuf 249512 1760544 9223372036854775807 9223372036854775807 0<br />
tcprcvbuf 198920 1142000 9223372036854775807 9223372036854775807 0<br />
othersockbuf 233512 276832 9223372036854775807 9223372036854775807 0<br />
dgramrcvbuf 0 2576 9223372036854775807 9223372036854775807 0<br />
numothersock 179 193 9223372036854775807 9223372036854775807 0<br />
dcachesize 18688898 47058779 9223372036854775807 9223372036854775807 0<br />
numfile 1141 1410 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 9223372036854775807 9223372036854775807 0<br />
dummy 0 0 9223372036854775807 9223372036854775807 0<br />
numiptent 20 20 9223372036854775807 9223372036854775807 0<br />
<br />
</pre><br />
<br />
== vzlist ==<br />
<br />
:example:<br />
<pre>vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
101 26 running - localhost.fantinibakery.com<br />
102 121 running 10.100.100.18 mediawiki.fantinibakery.com<br />
114 49 running - fbc14.fantinibakery.com<br />
</pre><br />
<br />
From PVE 3.0 onwards, the display will be:<br />
<pre>vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
101 26 running - localhost<br />
102 121 running 10.100.100.18 mediawiki<br />
114 49 running - fbc14<br />
</pre><br />
* The fields available for selective display (the '''-o''' option) are: '''ctid, nproc, status, ip, hostname'''.<br />
* All are case sensitive and are used with the options '''-H''' (no header) and '''-o''' [field1, field2, ...] - see the example below.<br />
* The binary is at: <tt>/usr/sbin/vzlist</tt><br />
* By default, vzlist lists only running CTs; stopped ones won't appear in its output (qm list, by contrast, also lists stopped VMs).<br />
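<br />
For example, to list all containers (including stopped ones) with a custom field selection and no header:<br />
<pre><br />
vzlist -a -H -o ctid,hostname,status<br />
</pre><br />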
<br />
=== USAGE ===<br />
<pre><br />
Usage: vzlist [-a | -S] [-n] [-H] [-o field[,field...] | -1] [-s [-]field]<br />
[-h pattern] [-N pattern] [-d pattern] [CTID [CTID ...]]<br />
vzlist -L | --list<br />
<br />
Options:<br />
-a, --all list all containers<br />
-S, --stopped list stopped containers<br />
-n, --name display containers' names<br />
-H, --no-header suppress columns header<br />
-t, --no-trim do not trim long values<br />
-j, --json output in JSON format<br />
-o, --output output only specified fields<br />
-1 synonym for -H -octid<br />
-s, --sort sort by the specified field<br />
('-field' to reverse sort order)<br />
-h, --hostname filter CTs by hostname pattern<br />
-N, --name_filter filter CTs by name pattern<br />
-d, --description filter CTs by description pattern<br />
-L, --list get possible field names<br />
</pre><br />
<br />
= Backup =<br />
<br />
== vzdump ==<br />
<br />
vzdump - backup utility for virtual machine - see [[Vzdump manual]] <br />
<br />
== vzrestore ==<br />
<br />
vzrestore - restore OpenVZ vzdump backups - see [[Vzrestore manual]] <br />
<br />
== qmrestore ==<br />
<br />
qmrestore - restore KVM vzdump backups - see [[Qmrestore manual]] <br />
<br />
= Cluster management =<br />
<br />
== pveca ==<br />
<br />
PVE Cluster Administration Toolkit <br />
<br />
=== USAGE ===<br />
<br />
*pveca -l # show cluster status <br />
*pveca -c # create new cluster with localhost as master <br />
*pveca -s [-h IP] # sync cluster configuration from master (or IP) <br />
*pveca -d ID # delete a node <br />
*pveca -a [-h IP] # add new node to cluster <br />
*pveca -m # force local node to become master <br />
*pveca -i # print node info (CID NAME IP ROLE)<br />
<br />
= Software version check =<br />
<br />
== pveversion ==<br />
<br />
Proxmox VE version info - Print version information for Proxmox VE packages. <br />
<br />
=== USAGE ===<br />
<br />
pveversion [--verbose] <br />
<br />
*without any argument shows the version of pve-manager, something like:<br />
<br />
:pve-manager/1.5/4660<br />
or<br />
:pve-manager/3.0/957f0862<br />
<br />
*with -v argument it shows a list of programs versions related to pve, like:<br />
<br />
:pve-manager: 1.5-7 (pve-manager/1.5/4660) <br />
:running kernel: 2.6.18-2-pve <br />
:proxmox-ve-2.6.18: 1.5-5 <br />
:pve-kernel-2.6.18-2-pve: 2.6.18-5 <br />
:pve-kernel-2.6.18-1-pve: 2.6.18-4 <br />
:qemu-server: 1.1-11 <br />
:pve-firmware: 1.0-3 <br />
:libpve-storage-perl: 1.0-10 <br />
:vncterm: 0.9-2 <br />
:vzctl: 3.0.23-1pve8 <br />
:vzdump: 1.2-5 <br />
:vzprocps: 2.0.11-1dso2 <br />
:vzquota: 3.0.11-1 <br />
:pve-qemu-kvm-2.6.18: 0.9.1-5<br />
or<br />
:pve-manager: 3.0-23 (pve-manager/3.0/957f0862)<br />
:running kernel: 2.6.32-20-pve<br />
:proxmox-ve-2.6.32: 3.0-100<br />
:pve-kernel-2.6.32-20-pve: 2.6.32-100<br />
:lvm2: 2.02.95-pve3<br />
:clvm: 2.02.95-pve3<br />
:corosync-pve: 1.4.5-1<br />
:openais-pve: 1.1.4-3<br />
:libqb0: 0.11.1-2<br />
:redhat-cluster-pve: 3.2.0-2<br />
:resource-agents-pve: 3.9.2-4<br />
:fence-agents-pve: 4.0.0-1<br />
:pve-cluster: 3.0-4<br />
:qemu-server: 3.0-20<br />
:pve-firmware: 1.0-22<br />
:libpve-common-perl: 3.0-4<br />
:libpve-access-control: 3.0-4<br />
:libpve-storage-perl: 3.0-8<br />
:vncterm: 1.1-4<br />
:vzctl: 4.0-1pve3<br />
:vzprocps: 2.0.11-2<br />
:vzquota: 3.1-2<br />
:pve-qemu-kvm: 1.4-13<br />
:ksm-control-daemon: 1.1-1<br />
<br />
== aptitude ==<br />
<br />
Standard Debian package update tool <br />
<br />
= LVM =<br />
<br />
Most of the commands in LVM are very similar to each other. Each command name is prefixed with one of the following: <br />
<br />
*Physical Volume = pv <br />
*Volume Group = vg <br />
*Logical Volume = lv<br />
<br />
=== USAGE ===<br />
<br />
{| width="200" border="1" align="center" cellpadding="1" cellspacing="1"<br />
|-<br />
| '''<br>''' <br />
| <br />
| '''Physical Volume''' <br />
| '''Volume Group''' <br />
| '''Logical Volume'''<br />
|-<br />
| <br />
| '''LVM''' <br />
| '''PV''' <br />
| '''VG''' <br />
| '''LV'''<br />
|-<br />
| s <br />
| No <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| display <br />
| No <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| create <br />
| No <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| rename <br />
| No <br />
| No <br />
| Yes <br />
| Yes<br />
|-<br />
| change <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| move <br />
| No <br />
| Yes <br />
| Yes <br />
| No<br />
|-<br />
| extend <br />
| No <br />
| No <br />
| Yes <br />
| Yes<br />
|-<br />
| reduce <br />
| No <br />
| No <br />
| Yes <br />
| Yes<br />
|-<br />
| resize <br />
| No <br />
| Yes <br />
| No <br />
| Yes<br />
|-<br />
| split <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| merge <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| convert <br />
| No <br />
| No <br />
| Yes <br />
| Yes<br />
|-<br />
| import <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| export <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| importclone <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| cfgbackup <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| cfgrestore <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| ck <br />
| No <br />
| Yes <br />
| Yes <br />
| No<br />
|-<br />
| scan <br />
| diskscan <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| mknodes <br />
| No <br />
| No <br />
| Yes <br />
| No<br />
|-<br />
| remove <br />
| No <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
| dump <br />
| Yes <br />
| No <br />
| No <br />
| No<br />
|}<br />
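<br />
A few example invocations following the naming scheme above (device and volume names are placeholders, adapt them to your system):<br />
<pre><br />
pvdisplay                      # show physical volumes<br />
vgdisplay pve                  # show the "pve" volume group<br />
lvdisplay /dev/pve/data        # show the "data" logical volume<br />
<br />
pvcreate /dev/sdb1             # prepare a new partition as a PV<br />
vgextend pve /dev/sdb1         # grow the "pve" VG with that PV<br />
lvextend -L +10G /dev/pve/data # grow the "data" LV by 10 GB<br />
</pre><br />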
<br />
<br><br />
<br />
= iSCSI =<br />
<br />
= DRBD =<br />
<br />
See [[DRBD]] <br />
<br />
= Debian Appliance Builder =<br />
<br />
== dab ==<br />
<br />
See [[Debian Appliance Builder]] <br />
<br />
= Other useful tools =<br />
<br />
== pveperf ==<br />
<br />
Simple host performance test. <br />
<br />
(from man page) <br />
<br />
=== USAGE ===<br />
<br />
:pveperf [PATH]<br />
<br />
=== DESCRIPTION ===<br />
<br />
:Tries to gather some CPU/hard disk performance data for the hard disk mounted at PATH (/ is used as default)<br />
<br />
It dumps on the terminal: <br />
<br />
*CPU BOGOMIPS: bogomips sum of all CPUs <br />
*REGEX/SECOND: regular expressions per second (perl performance test), should be above 300000 <br />
*HD SIZE: harddisk size <br />
*BUFFERED READS: simple HD read test. Modern HDs should reach at least 40 MB/sec <br />
*AVERAGE SEEK TIME: tests average seek time. Fast SCSI HDs reach values &lt; 8 milliseconds. Common IDE/SATA disks get values from 15 to 20 ms. <br />
*FSYNCS/SECOND: value should be greater than 200 (you should enable "write back" cache mode on your RAID controller - needs a battery-backed cache (BBWC)). <br />
*DNS EXT: average time to resolve an external DNS name <br />
*DNS INT: average time to resolve a local DNS name<br />
<br />
Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like: &lt;&lt;sh: /proc/sys/vm/drop_caches: Permission denied unable to open HD at /usr/bin/pveperf line 149.&gt;&gt; <br />
<br />
=== Example output ===<br />
<pre><br />
CPU BOGOMIPS: 26341.80<br />
REGEX/SECOND: 1554770<br />
HD SIZE: 94.49 GB (/dev/mapper/pve-root)<br />
BUFFERED READS: 49.83 MB/sec<br />
AVERAGE SEEK TIME: 14.16 ms<br />
FSYNCS/SECOND: 1060.47<br />
DNS EXT: 314.58 ms<br />
DNS INT: 236.94 ms (mypve.com)<br />
</pre><br />
<br />
== pvesubscription ==<br />
<br />
For managing a node's subscription key<br />
<br />
=== Usage ===<br />
<br />
To set the key use:<br />
<br />
* pvesubscription set <key><br />
<br />
The following updates the subscription status<br />
<br />
* pvesubscription update -force<br />
<br />
To print subscription status use<br />
<br />
* pvesubscription get <br />
<br />
<pre><br />
USAGE: pvesubscription <COMMAND> [ARGS] [OPTIONS]<br />
pvesubscription get <br />
pvesubscription set <key><br />
pvesubscription update [OPTIONS]<br />
<br />
pvesubscription help [<cmd>] [OPTIONS]<br />
</pre><br />
<br />
== Third party CLI Tools ==<br />
* [https://raymii.org/s/software/ProxBash.html ProxBash]<br />
<br />
[[Category:HOWTO]] [[Category:Installation]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=OpenVZ_Console&diff=5334OpenVZ Console2013-03-26T14:11:00Z<p>Peetaur: /* Centos 6 */ added quicker better way to save changes without rebooting the guest (Centos 5 specific) , edited on behalf of squeeb on IRC</p>
<hr />
<div>=Introduction=<br />
Beginning with Proxmox VE 2.2, we introduced a new console view (with login capability). Especially for beginners it is not that easy to understand and manage containers, but with the new console this is a big step forward. The OpenVZ and KVM consoles now look quite similar.<br />
<br />
But as most OpenVZ templates have terminals disabled, you need to enable them first. This article describes the needed changes for an already running OpenVZ container.<br />
<br />
'''Note:'''<br />
<br />
All Debian templates created with the latest [[Debian Appliance Builder]] already include these changes; just download them via the GUI to your Proxmox VE storage (Debian 6 and 7 templates are up to date, 32 and 64 bit)<br />
<br />
=Debian=<br />
This method works for Debian 5/6/7; you can do it on the host without entering the CT (but the CT must be running). Just log in to the Proxmox VE host and:<br />
<br />
edit all inittabs under /var/lib/vz/root/ :<br />
<pre><br />
nano /var/lib/vz/root/*/etc/inittab<br />
<br />
# add this<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
</pre><br />
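<br />
If you have many containers, a small loop can append the getty line to every inittab that does not have it yet (a sketch, run on the host; adjust the path if your containers live elsewhere):<br />
<pre><br />
for f in /var/lib/vz/root/*/etc/inittab; do<br />
    grep -q '1:2345:respawn:/sbin/getty 38400 tty1' "$f" || \<br />
        echo '1:2345:respawn:/sbin/getty 38400 tty1' >> "$f"<br />
done<br />
</pre><br />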
<br />
== Debian Lenny 5.0 ==<br />
[[Image:Screen-Debian-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 122<br />
<br />
root@debian:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Debian Squeeze 6.0 ==<br />
Same as Debian Lenny 5.0<br />
== Debian Wheezy 7.0 ==<br />
Same as Debian Lenny 5.0<br />
<br />
=Ubuntu=<br />
== Ubuntu 12.04 ==<br />
[[Image:Screen-Ubuntu-12.04-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 108<br />
<br />
root@ubuntu-1204:/# nano /etc/init/tty1.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
<br />
# tty1 - getty<br />
#<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/getty -8 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Ubuntu 10.04 ==<br />
Same as Ubuntu 12.04<br />
<br />
=Centos=<br />
== Centos 5 ==<br />
[[Image:Screen-Centos-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 111<br />
<br />
root@centos5-64:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/agetty tty1 38400 linux<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
== Centos 6 ==<br />
[[Image:Screen-Centos-6-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 109<br />
<br />
root@centos63-64:/# nano /etc/init/tty.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
<br />
Either run "start tty" without rebooting the container, or save the changes and shutdown/start the container via Console.<br />
<br />
=Troubleshooting=<br />
If you still want to use the previous method (vzctl enter CTID) you can open the host "Shell" and just type 'vzctl enter CTID' to manage your containers.<br />
==Java browser plugin==<br />
The console uses a Java applet, therefore you need the latest Oracle (Sun) Java browser plugin installed and enabled in your browser (Google Chrome and Firefox preferred). If you are on a Windows desktop, just go to java.com; if you run a Linux desktop you need to make sure that you run the Oracle (Sun) Java plugin instead of the default OpenJDK. For Debian/Ubuntu based desktops, see [[Java_Console_(Ubuntu)]] <br />
<br />
= Modifying your templates =<br />
<br />
If you don't want to apply the changes above for every single CT you create, you can simply update the templates accordingly. To do this, place the file you want to insert into your template (like etc/inittab for Debian containers) into your template folder and update the template. <br />
The following is specific to CentOS 6; just replace the filename/path and contents with the appropriate contents found above.<br />
<pre>cd [TEMPLATE LOCATION] #Modify this<br />
<br />
mkdir -p etc/init<br />
<br />
cat <<EOF >etc/init/tty.conf<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
EOF<br />
<br />
gunzip centos-6-standard_6.3-1_amd64.tar.gz<br />
tar -rf centos-6-standard_6.3-1_amd64.tar etc<br />
gzip centos-6-standard_6.3-1_amd64.tar<br />
<br />
rm etc/init/tty.conf<br />
rmdir -p etc/init</pre><br />
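<br />
To check that the file really ended up inside the template archive (filename as in the example above):<br />
<pre><br />
tar -tzf centos-6-standard_6.3-1_amd64.tar.gz | grep etc/init/tty.conf<br />
</pre><br />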
<br />
[[Category: HOWTO]][[Category: Technology]][[Category: Proxmox VE 2.0]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=OpenVZ_Console&diff=5333OpenVZ Console2013-03-26T14:10:41Z<p>Peetaur: Undo revision 5332 by Peetaur (Talk) was supposed to be centos 6 not 5</p>
<hr />
<div>=Introduction=<br />
Beginning with Proxmox VE 2.2, we introduced a new console view (with login capability). Especially for beginners it is not that easy to understand and manage containers, but with the new console this is a big step forward. The OpenVZ and KVM consoles now look quite similar.<br />
<br />
But as most OpenVZ templates have terminals disabled, you need to enable them first. This article describes the needed changes for an already running OpenVZ container.<br />
<br />
'''Note:'''<br />
<br />
All Debian templates created with the latest [[Debian Appliance Builder]] already include these changes; just download them via the GUI to your Proxmox VE storage (Debian 6 and 7 templates are up to date, 32 and 64 bit)<br />
<br />
=Debian=<br />
This method works for Debian 5/6/7; you can do it on the host without entering the CT (but the CT must be running). Just log in to the Proxmox VE host and:<br />
<br />
edit all inittabs under /var/lib/vz/root/ :<br />
<pre><br />
nano /var/lib/vz/root/*/etc/inittab<br />
<br />
# add this<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
</pre><br />
<br />
== Debian Lenny 5.0 ==<br />
[[Image:Screen-Debian-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 122<br />
<br />
root@debian:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Debian Squeeze 6.0 ==<br />
Same as Debian Lenny 5.0<br />
== Debian Wheezy 7.0 ==<br />
Same as Debian Lenny 5.0<br />
<br />
=Ubuntu=<br />
== Ubuntu 12.04 ==<br />
[[Image:Screen-Ubuntu-12.04-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 108<br />
<br />
root@ubuntu-1204:/# nano /etc/init/tty1.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
<br />
# tty1 - getty<br />
#<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/getty -8 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Ubuntu 10.04 ==<br />
Same as Ubuntu 12.04<br />
<br />
=Centos=<br />
== Centos 5 ==<br />
[[Image:Screen-Centos-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 111<br />
<br />
root@centos5-64:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/agetty tty1 38400 linux<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
== Centos 6 ==<br />
[[Image:Screen-Centos-6-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 109<br />
<br />
root@centos63-64:/# nano /etc/init/tty.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
=Troubleshooting=<br />
If you still want to use the previous method (vzctl enter CTID) you can open the host "Shell" and just type 'vzctl enter CTID' to manage your containers.<br />
==Java browser plugin==<br />
The console uses a Java applet, therefore you need the latest Oracle (Sun) Java browser plugin installed and enabled in your browser (Google Chrome and Firefox preferred). If you are on a Windows desktop, just go to java.com; if you run a Linux desktop you need to make sure that you run the Oracle (Sun) Java plugin instead of the default OpenJDK. For Debian/Ubuntu based desktops, see [[Java_Console_(Ubuntu)]] <br />
<br />
= Modifying your templates =<br />
<br />
If you don't want to apply the changes above for every single CT you create, you can simply update the templates accordingly. To do this, place the file you want to insert into your template (like etc/inittab for Debian containers) into your template folder and update the template. <br />
The following is specific to CentOS 6; just replace the filename/path and contents with the appropriate contents found above.<br />
<pre>cd [TEMPLATE LOCATION] #Modify this<br />
<br />
mkdir -p etc/init<br />
<br />
cat <<EOF >etc/init/tty.conf<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
EOF<br />
<br />
gunzip centos-6-standard_6.3-1_amd64.tar.gz<br />
tar -rf centos-6-standard_6.3-1_amd64.tar etc<br />
gzip centos-6-standard_6.3-1_amd64.tar<br />
<br />
rm etc/init/tty.conf<br />
rmdir -p etc/init</pre><br />
<br />
[[Category: HOWTO]][[Category: Technology]][[Category: Proxmox VE 2.0]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=OpenVZ_Console&diff=5332OpenVZ Console2013-03-26T14:10:06Z<p>Peetaur: /* Centos 5 */ added quicker better way to save changes without rebooting the guest (Centos 5 specific) , edited on behalf of squeeb on IRC</p>
<hr />
<div>=Introduction=<br />
Beginning with Proxmox VE 2.2, we introduced a new console view (with login capability). Especially for beginners it is not that easy to understand and manage containers, but with the new console this is a big step forward. The OpenVZ and KVM consoles now look quite similar.<br />
<br />
But as most OpenVZ templates have terminals disabled, you need to enable them first. This article describes the needed changes for an already running OpenVZ container.<br />
<br />
'''Note:'''<br />
<br />
All Debian templates created with the latest [[Debian Appliance Builder]] already include these changes; just download them via the GUI to your Proxmox VE storage (Debian 6 and 7 templates are up to date, 32 and 64 bit)<br />
<br />
=Debian=<br />
This method works for Debian 5/6/7; you can do it on the host without entering the CT (but the CT must be running). Just log in to the Proxmox VE host and:<br />
<br />
edit all inittabs under /var/lib/vz/root/ :<br />
<pre><br />
nano /var/lib/vz/root/*/etc/inittab<br />
<br />
# add this<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
</pre><br />
<br />
== Debian Lenny 5.0 ==<br />
[[Image:Screen-Debian-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 122<br />
<br />
root@debian:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Debian Squeeze 6.0 ==<br />
Same as Debian Lenny 5.0<br />
== Debian Wheezy 7.0 ==<br />
Same as Debian Lenny 5.0<br />
<br />
=Ubuntu=<br />
== Ubuntu 12.04 ==<br />
[[Image:Screen-Ubuntu-12.04-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 108<br />
<br />
root@ubuntu-1204:/# nano /etc/init/tty1.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
<br />
# tty1 - getty<br />
#<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/getty -8 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Ubuntu 10.04 ==<br />
Same as Ubuntu 12.04<br />
<br />
=Centos=<br />
== Centos 5 ==<br />
[[Image:Screen-Centos-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 111<br />
<br />
root@centos5-64:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/agetty tty1 38400 linux<br />
<br />
Either run "start tty" without rebooting the container, or save the changes and shutdown/start the container via Console.<br />
<br />
== Centos 6 ==<br />
[[Image:Screen-Centos-6-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 109<br />
<br />
root@centos63-64:/# nano /etc/init/tty.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
=Troubleshooting=<br />
If you still want to use the previous method (vzctl enter CTID) you can open the host "Shell" and just type 'vzctl enter CTID' to manage your containers.<br />
==Java browser plugin==<br />
The console uses a Java applet, therefore you need the latest Oracle (Sun) Java browser plugin installed and enabled in your browser (Google Chrome and Firefox preferred). If you are on a Windows desktop, just go to java.com; if you run a Linux desktop you need to make sure that you run the Oracle (Sun) Java plugin instead of the default OpenJDK. For Debian/Ubuntu based desktops, see [[Java_Console_(Ubuntu)]] <br />
<br />
= Modifying your templates =<br />
<br />
If you don't want to apply the changes above for every single CT you create, you can simply update the templates accordingly. To do this, place the file you want to insert into your template (like etc/inittab for Debian containers) into your template folder and update the template. <br />
The following is specific to CentOS 6; just replace the filename/path and contents with the appropriate contents found above.<br />
<pre>cd [TEMPLATE LOCATION] #Modify this<br />
<br />
mkdir -p etc/init<br />
<br />
cat <<EOF >etc/init/tty.conf<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
EOF<br />
<br />
gunzip centos-6-standard_6.3-1_amd64.tar.gz<br />
tar -rf centos-6-standard_6.3-1_amd64.tar etc<br />
gzip centos-6-standard_6.3-1_amd64.tar<br />
<br />
rm etc/init/tty.conf<br />
rmdir -p etc/init</pre><br />
<br />
[[Category: HOWTO]][[Category: Technology]][[Category: Proxmox VE 2.0]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Proxmox_VE_2.0_Cluster&diff=4424Proxmox VE 2.0 Cluster2012-08-16T12:10:58Z<p>Peetaur: /* Requirements */ added link for adding a qdisk (only link I had was Two-Node_High_Availability_Cluster); see tom's posts here: http://forum.proxmox.com/threads/7786-Cluster-and-quorum</p>
<hr />
<div>{{Note|Article about Proxmox VE 2.0}}<br />
<br />
=Introduction=<br />
Proxmox VE 2.0 Cluster enables central management of multiple physical servers. A Proxmox VE Cluster consists of several nodes (up to 16 physical nodes, probably more).<br />
<br />
==Main features==<br />
*Centralized web management, including secure VNC console<br />
*Support for multiple authentication sources (e.g. local, MS ADS, LDAP, ...)<br />
*Role based permission management for all objects (VM´s, storages, nodes, etc.)<br />
*Creates multi-master clusters (no single master anymore!)<br />
*[[Proxmox Cluster file system (pmxcfs)]]: Database-driven file system for storing configuration files, replicated in real-time on all nodes using corosync<br />
*Migration of Virtual Machines between physical hosts<br />
*Cluster-wide logging<br />
*RESTful web API<br />
<br />
==Requirements==<br />
*All nodes must be in the same network, as IP multicast is used to communicate between nodes (see also [http://www.corosync.org Corosync Cluster Engine]). Note: some switches do not support IP multicast by default and it must be manually enabled first. See [[multicast notes]] for more information about multicast, and the quick multicast test after this list.<br />
*Date and time have to be synchronized<br />
*SSH tunnel on port 22 between nodes is used<br />
*VNC console traffic is secured via SSL, using ports between 5900 and 5999<br />
*For reliable quorum, you must have at least 3 active nodes at all times, or use a qdisk as seen in [[Two-Node_High_Availability_Cluster]]<br />
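<br />
A quick way to check that multicast actually works between the nodes, assuming the ''omping'' package is installed (''apt-get install omping'') and the command is started on all nodes at roughly the same time (hostnames are examples):<br />
<pre><br />
omping -c 600 -i 1 -q node1 node2 node3<br />
# all nodes should report close to 0% loss for multicast<br />
</pre><br />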
<br />
=Proxmox VE Cluster=<br />
<br />
First, install the Proxmox VE nodes, see [[Installation]]. Make sure that each Proxmox VE node is installed with the final hostname and IP configuration. Changing the hostname and IP is not possible after cluster creation.<br />
<br />
Currently the cluster creation has to be done on the console, you can login to the Proxmox VE node via ssh. <br />
<br />
All settings can be done via "pvecm", the [https://pve.proxmox.com/pve2-api-doc/man/pvecm.1.html Proxmox VE cluster manager toolkit].<br />
<br />
==Create the Cluster==<br />
Login via ssh to the first Proxmox VE node. Use a unique name for your Cluster, this name cannot be changed later.<br />
<br />
'''Create:'''<br />
<pre>pvecm create YOUR-CLUSTER-NAME</pre> <br />
To check the state of cluster: <br />
<pre>pvecm status</pre><br />
<br />
==Adding nodes to the Cluster==<br />
Login via ssh to the other Proxmox VE nodes. Please note, the node cannot hold any VM´s. (If it does, you will get conflicts with identical VMID´s - to work around this, use vzdump to back up and restore to a different VMID after the cluster configuration).<br />
<br />
'''Add a node:''' <br />
<pre>pvecm add IP-ADDRESS-CLUSTER</pre><br />
<br />
For IP-ADDRESS-CLUSTER use an IP from an existing cluster node.<br />
<br />
To check the state of cluster: <br />
<pre>pvecm status</pre><br />
<br />
'''Display the state of cluster:''' <br />
<pre>pvecm status<br />
<br />
CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA<br />
1 : 192.168.7.104 M A 5 days 01:43 0.54 20% 1% 4%<br />
2 : 192.168.7.103 N A 2 days 05:02 0.04 26% 5% 29%<br />
3 : 192.168.7.105 N A 00:13 1.41 22% 3% 15%<br />
4 : 192.168.7.106 N A 00:05 0.54 17% 3% 3%</pre><br />
<br />
'''Display the nodes of cluster:'''<br />
<pre>pvecm nodes<br />
<br />
Node Sts Inc Joined Name<br />
1 M 156 2011-09-05 10:39:09 hp1<br />
2 M 156 2011-09-05 10:39:09 hp2<br />
3 M 168 2011-09-05 11:24:12 hp4<br />
4 M 160 2011-09-05 10:40:27 hp3<br />
</pre><br />
<br />
== Remove a cluster node ==<br />
<br />
Move all virtual machines out of the node; just use the [[Central Web-based Management 2.0]] to migrate or delete all VM´s. Make sure you have no local backups you want to keep, or save them accordingly. <br />
<br />
Log in to one remaining node via ssh. Issue a pvecm nodes command to identify the nodeID: <br />
<pre>pvecm nodes<br />
<br />
Node Sts Inc Joined Name<br />
1 M 156 2011-09-05 10:39:09 hp1<br />
2 M 156 2011-09-05 10:39:09 hp2<br />
3 M 168 2011-09-05 11:24:12 hp4<br />
4 M 160 2011-09-05 10:40:27 hp3<br />
</pre> <br />
Issue the delete command (here deleting node hp2): <br />
<pre>pvecm delnode hp2</pre> <br />
If the operation succeeds, no output is returned; just check the node list again with 'pvecm nodes' (or just 'pvecm n').<br />
<br />
ATTENTION: you need to power off the removed node, and make sure that it will not power on again.<br />
<br />
== Re-installing a cluster node ==<br />
<br />
Move all virtual machines off the node.<br />
<br />
Stop the following services:<br />
<pre>service pvestatd stop<br />
service pvedaemon stop<br />
service cman stop<br />
service pve-cluster stop<br />
</pre><br />
<br />
Backup /var/lib/pve-cluster/<br />
<pre>tar -czf /root/pve-cluster-backup.tar.gz /var/lib/pve-cluster<br />
</pre><br />
<br />
Backup /root/.ssh/ - there are two symlinks here to the shared pve config, authorized_keys and authorized_keys.orig; you need not worry about these two yet, as they're stored in /var/lib/pve-cluster/<br />
<pre>tar -czf /root/ssh-backup.tar.gz /root/.ssh<br />
</pre><br />
<br />
Shut the server down and re-install. Make sure the hostname is the same as it was before you continue.<br />
<br />
Stop the following services:<br />
<pre>service pvestatd stop<br />
service pvedaemon stop<br />
service cman stop<br />
service pve-cluster stop<br />
</pre><br />
<br />
Restore the files in /root/.ssh/<br />
<pre>tar -xzf /root/ssh-backup.tar.gz<br />
</pre><br />
<br />
Replace /var/lib/pve-cluster/ with your backup copy (again extracting relative to /):<br />
<pre>rm -rf /var/lib/pve-cluster<br />
tar -xzf /root/pve-cluster-backup.tar.gz -C /<br />
</pre><br />
<br />
Start pve-cluster & cman:<br />
<pre>service pve-cluster start<br />
service cman start<br />
</pre><br />
<br />
Restore the two ssh symlinks:<br />
<pre>ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys<br />
ln -sf /etc/pve/priv/authorized_keys /root/.ssh/authorized_keys.orig<br />
</pre><br />
<br />
Start the rest of the services:<br />
<pre>service pvestatd start<br />
service pvedaemon start<br />
</pre><br />
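<br />
Finally, verify that the re-installed node has rejoined the cluster:<br />
<pre>pvecm status<br />
pvecm nodes<br />
</pre><br />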
<br />
<br />
<br />
=Working with Proxmox VE Cluster=<br />
Now you can start creating virtual machines on the cluster nodes by using the [[Central Web-based Management 2.0]] on any node.<br />
<br />
=Troubleshooting=<br />
<br />
*Date and time have to be synchronized on all nodes (check with "ntpdc -p")<br />
*Check that /etc/hosts contains the actual IP address of each node (see the example below)<br />
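<br />
For example, /etc/hosts on each node should map the node's hostname to the IP address the cluster uses (hypothetical values shown, matching the status output above):<br />
<pre>127.0.0.1       localhost<br />
192.168.7.104   hp1.example.com hp1<br />
</pre><br />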
<br />
=Video Tutorials=<br />
[http://www.youtube.com/user/ProxmoxVE Proxmox VE Youtube channel]<br />
<br />
<br />
<br />
[[Category: Proxmox VE 2.0]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Fencing&diff=4350Fencing2012-07-02T09:09:13Z<p>Peetaur: /* Enable fencing on all nodes */ fixed typo "To the follwing" to "Do the following"</p>
<hr />
<div>{{Note|Article about Proxmox VE 2.0}}<br />
=Introduction=<br />
To ensure data integrity, only one node is allowed to run a VM or any other cluster-service at a time. The use of power switches in the hardware configuration enables a node to power-cycle another node before restarting that node's HA services during a fail-over process. This prevents two nodes from simultaneously accessing the same data and corrupting it. Fence devices are used to guarantee data integrity under all failure conditions.<br />
=Configure nodes to boot immediately and always after power cycle=<br />
Check your BIOS settings and test whether it works: just unplug the power cord and check whether the server boots up again after reconnecting it.<br />
<br />
If you use integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. Here are the different options:<br />
*make sure that acpid is not installed (remove it with: aptitude remove acpid) <br />
*disable ACPI soft-off in the BIOS<br />
*disable ACPI by adding acpi=off to the kernel boot command line (see the sketch below)<br />
<br />
In any case, you need to '''make sure that the node turns off immediately when fenced.''' If you have delays here, the HA resources cannot be moved.<br />
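<br />
For example, on a standard Debian-based Proxmox VE install with GRUB 2 you could do something like the following (a sketch, not a definitive recipe; adapt it to your boot loader):<br />
<pre># make sure acpid is not installed<br />
aptitude remove acpid<br />
# add acpi=off to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the boot config<br />
update-grub<br />
</pre><br />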
<br />
=List of supported fence devices=<br />
<br />
==APC Switch Rack PDU==<br />
E.g. AP7921; here is an example used in our test lab.<br />
<br />
===Create a user on the APC web interface===<br />
I just configured a new user via "Outlet User Management"<br />
*user name: hpapc<br />
*password: 12345678<br />
<br />
Make sure that you enable "Outlet Access" and SSH. Most importantly, make sure the physical servers are connected to the correct outlets (the port numbers used in the configuration below).<br />
<br />
===Example /etc/pve/cluster.conf.new with APC power fencing===<br />
This example uses the APC power switch as the fencing device. Additionally, a simple "TestIP" resource is used as an HA service for fail-over testing.<br />
<br />
cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new<br />
<br />
nano /etc/pve/cluster.conf.new<br />
<br />
<source lang="xml"><br />
<?xml version="1.0"?><br />
<cluster name="hpcluster765" config_version="28"><br />
<br />
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"><br />
</cman><br />
<br />
<fencedevices><br />
<fencedevice agent="fence_apc" ipaddr="192.168.2.30" login="hpapc" name="apc" passwd="12345678"/><br />
</fencedevices><br />
<br />
<clusternodes><br />
<br />
<clusternode name="hp4" votes="1" nodeid="1"><br />
<fence><br />
<method name="power"><br />
<device name="apc" port="4" secure="on"/><br />
</method><br />
</fence><br />
</clusternode><br />
<br />
<clusternode name="hp1" votes="1" nodeid="2"><br />
<fence><br />
<method name="power"><br />
<device name="apc" port="1" secure="on"/><br />
</method><br />
</fence><br />
</clusternode><br />
<br />
<clusternode name="hp3" votes="1" nodeid="3"><br />
<fence><br />
<method name="power"><br />
<device name="apc" port="3" secure="on"/><br />
</method><br />
</fence><br />
</clusternode><br />
<br />
<clusternode name="hp2" votes="1" nodeid="4"><br />
<fence><br />
<method name="power"><br />
<device name="apc" port="2" secure="on"/><br />
</method><br />
</fence><br />
</clusternode><br />
<br />
</clusternodes><br />
<br />
<rm><br />
<service autostart="1" exclusive="0" name="TestIP" recovery="relocate"><br />
<ip address="192.168.7.180"/><br />
</service><br />
</rm><br />
<br />
</cluster><br />
<br />
</source><br />
<br />
'''Note'''<br />
<br />
If you edit this file via the CLI, you ALWAYS need to increase the "config_version" number. This guarantees that all nodes apply the new settings.<br />
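<br />
For example, if the cluster element currently reads:<br />
<source lang="xml"><br />
<cluster name="hpcluster765" config_version="28"><br />
</source><br />
change it to:<br />
<source lang="xml"><br />
<cluster name="hpcluster765" config_version="29"><br />
</source><br />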
<br />
In order to apply this new config, go to the web interface (Datacenter/HA). There you can review the changes and, if the syntax is OK, commit them via the GUI to all nodes. By doing this, all nodes get the info about the new config and apply it automatically.<br />
<br />
===Enable fencing on all nodes===<br />
In order to get fencing active, you also need to join each node to the fencing domain. Do the following on all your cluster nodes.<br />
<br />
*Enable fencing in /etc/default/redhat-cluster-pve (Just uncomment the last line, see below):<br />
nano /etc/default/redhat-cluster-pve<br />
<pre># CLUSTERNAME=""<br />
# NODENAME=""<br />
# USE_CCS="yes"<br />
# CLUSTER_JOIN_TIMEOUT=300<br />
# CLUSTER_JOIN_OPTIONS=""<br />
# CLUSTER_SHUTDOWN_TIMEOUT=60<br />
# RGMGR_OPTIONS=""<br />
FENCE_JOIN="yes"</pre><br />
<br />
*join the fence domain with:<br />
fence_tool join<br />
<br />
To check the status, just run (this example shows all 3 nodes already joined):<br />
fence_tool ls<br />
<br />
<pre>fence domain<br />
member count 3<br />
victim count 0<br />
victim now 0<br />
master nodeid 1<br />
wait state none<br />
members 1 2 3</pre><br />
<br />
===Test fencing=== <br />
Before you use the fencing device, make sure that it works as expected. In my example configuration, the AP7921 uses the IP 192.168.2.30:<br />
<br />
Query the status of the power outlet (here outlet 1):<br />
fence_apc -x -l hpapc -p 12345678 -a 192.168.2.30 -o status -n 1 -v<br />
<br />
Reboot the server using fence_apc:<br />
fence_apc -x -l hpapc -p 12345678 -a 192.168.2.30 -o reboot -n 1 -v<br />
<br />
==[[Intel Modular Server HA]]==<br />
<br />
==[[Dell servers]]==<br />
<br />
You can use Dell DRAC cards as fencing devices.<br />
<br />
Your Proxmox VE hosts need network access via SSH to your Dell DRAC cards.<br />
<br />
This config was tested with DRAC V5 cards.<br />
<br />
<source lang="xml"><br />
<?xml version="1.0"?><br />
<cluster name="hpcluster765" config_version="28"><br />
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"><br />
</cman><br />
<fencedevices><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node1-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node2-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node3-drac" passwd="XXXX" secure="1"/><br />
</fencedevices><br />
<clusternodes><br />
<clusternode name="node1" nodeid="1" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="node1-drac"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="node2" nodeid="2" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="node2-drac"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="node3" nodeid="3" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="node3-drac"/><br />
</method><br />
</fence><br />
</clusternode><br />
</clusternodes><br />
<rm><br />
<service autostart="1" exclusive="0" name="TestIP" recovery="relocate"><br />
<ip address="192.168.7.180"/><br />
</service><br />
</rm><br />
</cluster><br />
</source><br />
<br />
For Dell iDRAC6 cards you can basically use the same config as for DRAC5, but you need to change the lines<br />
<br />
<source lang="xml"><br />
<fencedevices><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node1-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node2-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" ipaddr="X.X.X.X" login="root" name="node3-drac" passwd="XXXX" secure="1"/><br />
</fencedevices><br />
</source><br />
<br />
to<br />
<br />
<source lang="xml"><br />
<fencedevices><br />
<fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node1-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node2-drac" passwd="XXXX" secure="1"/><br />
<fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="X.X.X.X" login="root" name="node3-drac" passwd="XXXX" secure="1"/><br />
</fencedevices><br />
</source><br />
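<br />
As with the APC example above, it is a good idea to test the DRAC fence agent manually before relying on it. A sketch using the placeholders from the config above (verify the exact options against the fence_drac5 man page):<br />
 fence_drac5 -x -a X.X.X.X -l root -p XXXX -o status -v<br />
For iDRAC6, additionally pass the command prompt, e.g. -c "admin1->".<br />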
<br />
==[[Dell blade servers]]==<br />
PowerEdge M1000e Chassis Management Controller (CMC) acts as a network power switch of sorts. You configure a single IP address on the CMC, and connect to that IP for management. Individual blade slots can be powered up or down as needed. <br />
<br />
'''NOTE''': At the time of this writing, there is a bug that prevents the CMC from powering the blade back up after it is fenced. To recover from a fenced outage, manually power the blade on (or connect to the CMC and issue the command '''racadm serveraction -m server-# powerup'''). New code available for testing can correct this behavior. See [https://bugzilla.redhat.com/show_bug.cgi?id=466788 Bug 466788] for beta code and further discussions on this issue.<br />
<br />
'''NOTE''': Using the individual iDRAC on each Dell Blade is not supported at this time. Instead use the Dell CMC as described in this section. If desired, you may configure IPMI as your secondary fencing method for individual Dell Blades. For information on support of the Dell iDRAC, see [https://bugzilla.redhat.com/show_bug.cgi?id=496748 Bug 496748].<br />
<br />
To configure your nodes for DRAC CMC fencing:<br />
# For '''CMC IP Address''' enter the DRAC CMC IP address.<br />
# Enter the specific blade for '''Module Name'''. For example, enter '''server-1''' for blade 1, and '''server-4''' for blade 4.<br />
<br />
Example:<br />
<br />
<br />
<br />
<source lang="xml"><br />
<?xml version="1.0"?><br />
<cluster name="hpcluster765" config_version="28"><br />
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"><br />
</cman><br />
<fencedevices><br />
<fencedevice agent="fence_drac5" module_name="server-1" ipaddr="CMC IP Address (X.X.X.X)" login="root" secure="1" name="drac-cmc-blade1" passwd="drac_password"/><br />
<fencedevice agent="fence_drac5" module_name="server-2" ipaddr="CMC IP Address (X.X.X.X)" login="root" secure="1" name="drac-cmc-blade2" passwd="drac_password"/><br />
<fencedevice agent="fence_drac5" module_name="server-3" ipaddr="CMC IP Address (X.X.X.X)" login="root" secure="1" name="drac-cmc-blade3" passwd="drac_password"/><br />
</fencedevices><br />
<clusternodes><br />
<clusternode name="node1" nodeid="1" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="drac-cmc-blade1"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="node2" nodeid="2" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="drac-cmc-blade2"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="node3" nodeid="3" votes="1"><br />
<fence><br />
<method name="1"><br />
<device name="drac-cmc-blade3"/><br />
</method><br />
</fence><br />
</clusternode><br />
</clusternodes><br />
<rm><br />
<service autostart="1" exclusive="0" name="TestIP" recovery="relocate"><br />
<ip address="192.168.7.180"/><br />
</service><br />
</rm><br />
</cluster><br />
</source><br />
<br />
<br />
== [[IPMI (generic)]] ==<br />
<br />
This is a generic method for IPMI-based fencing.<br />
<br />
<br />
<source lang="xml"><br />
<?xml version="1.0"?><br />
<cluster name="clustername" config_version="6"><br />
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"><br />
</cman><br />
<fencedevices><br />
<fencedevice agent="fence_ipmilan" name="ipmi1" lanplus="1" ipaddr="X.X.X.X" login="ipmiusername" passwd="ipmipassword" power_wait="5"/><br />
<fencedevice agent="fence_ipmilan" name="ipmi2" lanplus="1" ipaddr="X.X.X.X" login="ipmiusername" passwd="ipmipassword" power_wait="5"/><br />
<fencedevice agent="fence_ipmilan" name="ipmi3" lanplus="1" ipaddr="X.X.X.X" login="ipmiusername" passwd="ipmipassword" power_wait="5"/><br />
</fencedevices><br />
<clusternodes><br />
<clusternode name="host1" votes="1" nodeid="1"><br />
<fence><br />
<method name="1"><br />
<device name="ipmi1"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="host2" votes="1" nodeid="2"><br />
<fence><br />
<method name="1"><br />
<device name="ipmi2"/><br />
</method><br />
</fence><br />
</clusternode><br />
<clusternode name="host3" votes="1" nodeid="3"><br />
<fence><br />
<method name="1"><br />
<device name="ipmi3"/><br />
</method><br />
</fence><br />
</clusternode><br />
</clusternodes><br />
<rm><br />
<service autostart="1" exclusive="0" name="ha_test_ip" recovery="relocate"><br />
<ip address="192.168.7.180"/><br />
</service><br />
</rm><br />
</cluster><br />
</source><br />
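<br />
You can test IPMI fencing manually from another node before activating it. A sketch using the placeholders from the config above (verify the exact options against the fence_ipmilan man page):<br />
 fence_ipmilan -a X.X.X.X -l ipmiusername -p ipmipassword -P -o status -v<br />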
<br />
==to be extended==<br />
tbd.<br />
<br />
[[Category: Proxmox VE 2.0]]<br />
<br />
[[Category: HOWTO]]</div>Peetaurhttps://pve.proxmox.com/mediawiki/index.php?title=Command_line_tools_-_PVE_3.x&diff=4343Command line tools - PVE 3.x2012-06-29T07:07:02Z<p>Peetaur: /* qm */ added link to Manual:_qm which is more recent and more complete</p>
<hr />
<div>=Introduction=<br />
This page lists the important Proxmox VE and Debian command line tools. All CLI tools also have manual pages.<br />
<br />
=KVM specific=<br />
==qm==<br />
qm - qemu/kvm manager - see [[Manual:_qm]] and [[qm manual]]<br />
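<br />
Some common invocations (assuming an existing guest with VMID 101; see the linked manuals for the full command set):<br />
<pre>qm list          # list all KVM guests on this node<br />
qm config 101    # show the configuration of guest 101<br />
qm start 101     # start guest 101<br />
qm stop 101      # hard-stop guest 101<br />
</pre><br />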
<br />
=OpenVZ specific=<br />
==vzctl==<br />
<br />
vzctl - utility to control an OpenVZ container - see [[vzctl manual]]<br />
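<br />
Some common invocations (assuming an existing container with CTID 101):<br />
<pre>vzctl start 101          # start container 101<br />
vzctl enter 101          # open a shell inside the container<br />
vzctl exec 101 df -h     # run a command inside the container<br />
vzctl stop 101           # stop the container<br />
</pre><br />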
<br />
==vztop==<br />
<br />
vztop - display top CPU processes<br />
<br />
==user_beancounters==<br />
cat /proc/user_beancounters<br />
<br />
==vzlist==<br />
:example:<br />
<pre><br />
vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
101 26 running - localhost.fantinibakery.com<br />
102 121 running 10.100.100.18 mediawiki.fantinibakery.com<br />
114 49 running - fbc14.fantinibakery.com<br />
</pre><br />
<br />
=Backup=<br />
==vzdump==<br />
<br />
vzdump - backup utility for virtual machine - see [[Vzdump manual]]<br />
<br />
==vzrestore==<br />
<br />
vzrestore - restore OpenVZ vzdump backups - see [[vzrestore manual]]<br />
<br />
==qmrestore==<br />
<br />
qmrestore - restore KVM vzdump backups - see [[qmrestore manual]]<br />
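<br />
A minimal backup/restore round trip might look like this (hypothetical IDs and archive paths; check the linked manuals for the full option set and the exact archive names vzdump produces):<br />
<pre># back up guest 101 into /backup<br />
vzdump 101 --dumpdir /backup<br />
# restore a KVM backup archive as VMID 102<br />
qmrestore /backup/vzdump-qemu-101.vma 102<br />
# restore an OpenVZ backup archive as CTID 202<br />
vzrestore /backup/vzdump-openvz-101.tar 202<br />
</pre><br />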
<br />
=Cluster management=<br />
==pveca==<br />
<br />
PVE Cluster Administration Toolkit<br />
<br />
===USAGE===<br />
* pveca -l # show cluster status<br />
* pveca -c # create new cluster with localhost as master<br />
* pveca -s [-h IP] # sync cluster configuration from master (or IP)<br />
* pveca -d ID # delete a node<br />
* pveca -a [-h IP] # add new node to cluster<br />
* pveca -m # force local node to become master<br />
* pveca -i # print node info (CID NAME IP ROLE)<br />
<br />
=Software version check=<br />
==pveversion==<br />
<br />
Proxmox VE version info - Print version information for Proxmox VE packages.<br />
<br />
===USAGE===<br />
pveversion [--verbose]<br />
<br />
* without any argument shows the version of pve-manager, something like:<br />
:pve-manager/1.5/4660<br />
<br />
* with the -v argument it shows a list of package versions related to PVE, like:<br />
<br />
:pve-manager: 1.5-7 (pve-manager/1.5/4660)<br />
:running kernel: 2.6.18-2-pve<br />
:proxmox-ve-2.6.18: 1.5-5<br />
:pve-kernel-2.6.18-2-pve: 2.6.18-5<br />
:pve-kernel-2.6.18-1-pve: 2.6.18-4<br />
:qemu-server: 1.1-11<br />
:pve-firmware: 1.0-3<br />
:libpve-storage-perl: 1.0-10<br />
:vncterm: 0.9-2<br />
:vzctl: 3.0.23-1pve8<br />
:vzdump: 1.2-5<br />
:vzprocps: 2.0.11-1dso2<br />
:vzquota: 3.0.11-1<br />
:pve-qemu-kvm-2.6.18: 0.9.1-5<br />
<br />
==aptitude==<br />
Standard Debian package update tool<br />
=LVM=<br />
=iSCSI=<br />
=DRBD=<br />
See [[DRBD]]<br />
=Debian Appliance Builder=<br />
==dab==<br />
See [[Debian_Appliance_Builder]]<br />
=Other useful tools=<br />
==pveperf==<br />
Simple host performance test.<br />
<br />
(from man page)<br />
<br />
===USAGE===<br />
:pveperf [PATH]<br />
<br />
===DESCRIPTION===<br />
:Tries to gather some CPU/hard disk performance data on the hard disk mounted at PATH (/ is used as default).<br />
It prints the following on the terminal:<br />
<br />
* CPU BOGOMIPS: bogomips sum of all CPUs<br />
* REGEX/SECOND: regular expressions per second (perl performance test), should be above 300000<br />
* HD SIZE: harddisk size<br />
* BUFFERED READS: simple HD read test. Modern HDs should reach at least 40 MB/sec<br />
* AVERAGE SEEK TIME: tests average seek time. Fast SCSI HDs reach values < 8 milliseconds. Common IDE/SATA disks get values from 15 to 20 ms.<br />
* FSYNCS/SECOND: value should be greater than 200 (you should enable "write back" cache mode on your RAID controller - this needs a battery backed write cache (BBWC)).<br />
* DNS EXT: average time to resolve an external DNS name<br />
* DNS INT: average time to resolve a local DNS name<br />
<br />
Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like:<br />
<pre>sh: /proc/sys/vm/drop_caches: Permission denied<br />
unable to open HD at /usr/bin/pveperf line 149.<br />
</pre><br />
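<br />
For example, to test the default local storage directory of Proxmox VE, run (as root):<br />
 pveperf /var/lib/vz<br />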
<br />
<br />
[[Category: HOWTO]][[Category: Installation]]</div>Peetaur