Installation
Revision as of 08:09, 14 October 2015
Introduction
Proxmox VE installs the complete operating system and management tools in 3 to 5 minutes (depending on the hardware used). The installation includes the following:
- Complete operating system (Debian Linux, 64-bit)
- Partitioning of the hard drive with ext4 (alternatively ext3 or xfs) or ZFS
- Proxmox VE Kernel with LXC and KVM support
- Complete toolset
- Web based management interface
Please note: by default, the installer uses the complete server and removes all existing data.
If you want to set custom options for the installer, or need to debug the installation process on your server, you can use some special boot options.
Video tutorials
- List of all official tutorials on our Proxmox VE YouTube Channel
- Tutorials in Spanish on the ITexperts.es YouTube playlist
System requirements
For production servers, high-quality server equipment is needed. Keep in mind: if you run 10 virtual servers on one machine and then experience a hardware failure, 10 services are lost. Proxmox VE supports clustering, which means that multiple Proxmox VE installations can be centrally managed thanks to the included cluster functionality.
Proxmox VE can use local storage (DAS), SAN, NAS and also distributed storage (Ceph RBD). For details, see Storage Model.
Minimum requirements, for evaluation
- CPU: 64-bit (Intel EM64T or AMD64), Intel VT/AMD-V capable CPU/mainboard (for KVM full virtualization support)
- RAM: 1 GB
- Hard drive
- One NIC
Recommended system requirements
- CPU: 64-bit (Intel EM64T or AMD64), multi-core CPU recommended, Intel VT/AMD-V capable CPU/mainboard (for KVM full virtualization support)
- RAM: 8 GB is good, more is better
- Hardware RAID with battery-protected write cache (BBU) or flash-based protection (software RAID is not supported)
- Fast hard drives; best results with 15k rpm SAS in RAID 10
- At least two NICs; depending on the storage technology used, you may need more
Certified hardware
Basically, you can use any 64-bit hardware that supports RHEL 6. If you are unsure, post in the forum.
Steps to get your Proxmox VE up and running
Install Proxmox VE server
Proxmox VE installation (Video Tutorial)
If you need to install the outdated 1.9 release, check Installing Proxmox VE v1.9 post Lenny retirement
Optional: Install Proxmox VE on Debian 6 Squeeze (64 bit)
See Install Proxmox VE on Debian Squeeze
Optional: Install Proxmox VE on Debian 7 Wheezy (64 bit)
See Install Proxmox VE on Debian Wheezy
Optional: Install Proxmox VE on Debian 8 Jessie (64 bit)
See Install Proxmox VE on Debian Jessie
Optional: Install Proxmox VE over iSCSI
See Proxmox ISCSI installation
Proxmox VE web interface
Configuration is done via the Proxmox VE web interface; just point your browser to the IP address given during installation (https://youripaddress:8006). Please make sure that your browser has the latest Oracle Java browser plugin installed. Proxmox VE is tested with IE 9, Firefox 10 and higher, and the latest Google Chrome.
Default login is "root" and the root password is defined during the installation process.
Configure basic system settings
Please review the NIC setup, IP and hostname.
Note: changing the IP or hostname after cluster creation is not possible (unless you know exactly what you are doing).
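The hostname must resolve to the server's IP address. A minimal /etc/hosts sketch (the IP address 192.168.1.100, hostname pve1 and domain yourdomain.com are assumptions; substitute your own values):

```
# /etc/hosts - example values, replace with your own
127.0.0.1       localhost.localdomain localhost
192.168.1.100   pve1.yourdomain.com pve1
```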
Get Appliance Templates
Download
Just go to the content tab of your storage (e.g. "local") and download pre-built virtual appliances directly to your server. This list is maintained by the Proxmox VE team, and more and more appliances will become available. This is the easiest way and a good place to start.
If you have an NFS server, you can use an NFS share for storing ISO images. To start, configure the NFS ISO store on the web interface (Configuration/Storage).
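Behind the scenes, such a storage definition ends up in /etc/pve/storage.cfg. A hedged sketch of an NFS ISO store entry (the storage ID, server address and export path are assumptions):

```
nfs: nfs-iso
        path /mnt/pve/nfs-iso
        server 192.168.1.50
        export /export/iso
        content iso
```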
Upload from your desktop
If you already have virtual appliances, you can upload them via the upload button. To install a virtual machine from an ISO image (using KVM full virtualization), just upload the ISO file via the upload button.
Directly to file system
Templates and ISO images are stored on the Proxmox VE server (see /var/lib/vz/template/cache for OpenVZ templates and /var/lib/vz/template/iso for ISO images). You can also transfer templates and ISO images via secure copy (scp) to these directories. If you work on a Windows desktop, you can use a graphical SCP client like WinSCP.
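The directory layout can be sketched as follows. The block below recreates it under a scratch directory so it can be tried on any machine; the commented scp line shows what the real transfer would look like (the hostname and ISO file name are assumptions):

```shell
# Recreate the Proxmox VE template/ISO layout under a scratch root
# (PVE_ROOT stands in for / on the server, so this runs anywhere).
PVE_ROOT=$(mktemp -d)
mkdir -p "$PVE_ROOT/var/lib/vz/template/cache"   # OpenVZ templates (*.tar.gz)
mkdir -p "$PVE_ROOT/var/lib/vz/template/iso"     # ISO images (*.iso)

# On a real host you would copy from your desktop instead, e.g.:
#   scp debian-8.2.0-amd64-netinst.iso root@your-pve-host:/var/lib/vz/template/iso/
touch example.iso
cp example.iso "$PVE_ROOT/var/lib/vz/template/iso/"
ls "$PVE_ROOT/var/lib/vz/template/iso"
```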
Create Virtual Machines
Container (OpenVZ)
First, get the appropriate appliance template(s).
Then just click "Create CT":
General
- Node: If you have several Proxmox VE servers, select the node where you want to create the new container
- VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
- Hostname: give a unique server name for the new container
- Resource Pool: select a previously created resource pool (optional)
- Storage: select the storage for your container
- Password: set the root password for your container
Template
- Storage: select your template data store (you need to download templates before you can select them here)
- Template: choose the template
Resources
- Memory (MB): set the memory (RAM)
- Swap (MB): set the swap
- Disk size (GB): set the total disk size
- CPUs: set the number of CPUs (if you run Java inside your container, choose at least 2 here)
Network
- Routed mode (venet): the default; assign a unique IP address
- Bridged mode (veth): needed only in some cases (see Differences_between_venet_and_veth on the OpenVZ wiki for details). If you select bridged Ethernet, the IP configuration has to be done inside the container, just as you would on a physical server.
DNS
- DNS Domain: e.g. yourdomain.com
- First/Second DNS Servers: enter DNS servers
Confirm
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking on them.
After you click "Finish", all settings are applied. Wait for completion (this process can take between a few seconds and a minute, depending on the template used and your hardware).
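The wizard steps above can also be scripted on the host with the OpenVZ tools. A hedged sketch, assuming container ID 101, a Debian template already present in /var/lib/vz/template/cache, and example network values; the block only prints a hint when vzctl is not installed, so it is safe to paste anywhere:

```shell
# Hedged sketch: create and start a container from the CLI, mirroring the
# wizard tabs (ID, template, hostname, IP and resources are assumptions).
CTID=101
TEMPLATE=debian-7.0-x86_64    # must exist in /var/lib/vz/template/cache
if command -v vzctl >/dev/null 2>&1; then
    vzctl create "$CTID" --ostemplate "$TEMPLATE"
    vzctl set "$CTID" --hostname ct101.yourdomain.com --save
    vzctl set "$CTID" --ipadd 192.168.1.101 --save   # routed (venet) IP
    vzctl start "$CTID"
else
    echo "vzctl not found - run this on a Proxmox VE host"
fi
```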
CentOS 7
If you can't ping a CentOS 7 container: to get rid of problems with venet0 when running a CentOS 7 container (OpenVZ), activate the patch redhat-add_ip.sh-patch as follows:
1. Transfer the patch file to your working directory on the Proxmox VE host (e.g. /root) and extract it (unzip).
2. Run this command:
# patch -p0 < redhat-add_ip.sh-patch
3. Stop and start the respective container.
Patch file: http://forum.proxmox.com/threads/22770-fix-for-centos-7-container-networking
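With -p0, patch applies the file path exactly as written in the patch header, relative to the current directory, which is why the patch file is extracted into the working directory first. A minimal local illustration (demo.sh and its contents are made up for the demonstration):

```shell
# Illustrate `patch -p0`: the path in the patch header ("demo.sh") is
# resolved relative to the current directory.
cd "$(mktemp -d)"
printf 'old line\n' > demo.sh
cat > demo.patch <<'EOF'
--- demo.sh
+++ demo.sh
@@ -1 +1 @@
-old line
+new line
EOF
patch -p0 < demo.patch
cat demo.sh    # demo.sh now contains "new line"
```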
Virtual Machines (KVM)
Just click "Create VM":
General
- Node: If you have several Proxmox VE servers, select the node where you want to create the new VM
- VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
- Name: choose a name for your VM (this is not the hostname), can be changed any time
- Resource Pool: select a previously created resource pool (optional)
OS
Select the Operating System (OS) of your VM
CD/DVD
- Use CD/DVD disc image file (iso): select the storage where you previously uploaded your ISO images and choose the file
- Use physical CD/DVD Drive: choose this to use the CD/DVD from your Proxmox VE node
- Do not use any media: choose this if you do not want any media
Hard disk
- Bus/Device: choose the bus type; as long as your guest supports it, go for virtio
- Storage: select the storage where you want to store the disk
- Disk size (GB): define the size
- Format: Define the disk image format. For good performance, go for raw. If you plan to use snapshots, go for qcow2.
- Cache: define the cache policy for the virtual disk
- Limits: (if necessary) set the maximum transfer speeds
CPU
- Sockets: set the number of CPU sockets
- Cores: set the number of CPU Cores per socket
- CPU type: select CPU type
- Total cores: never use more CPU cores than physically available on the Proxmox VE host
Memory
- Memory (MB): set the memory (RAM) for your VM
Network
- Bridged mode: this is the default setting; just choose the bridge where you want to connect your VM. If you want to use VLANs, you can define the VLAN tag for the VM
- NAT mode
- No network device
- Model: choose the emulated network device; as long as your guest supports it, go for virtio
- MAC address: use 'auto' or overwrite with a valid and unique MAC address
- Rate limit (MB/s): set a speed limit for this network adapter
Confirm
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking on them. After you click "Finish", all settings are applied. Wait for completion (this process takes just a second).
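The wizard settings above map onto the qm command-line tool on the Proxmox VE host. A hedged sketch (VM ID 100, the VM name, the storage "local" and the ISO file name are assumptions; the block only prints a hint when qm is unavailable, so it can be pasted anywhere):

```shell
# Hedged sketch: create a KVM VM from the CLI, mirroring the wizard tabs
# (VM ID, name, ISO file and storage names are assumptions).
VMID=100
if command -v qm >/dev/null 2>&1; then
    qm create "$VMID" \
        --name testvm \
        --memory 2048 \
        --sockets 1 --cores 2 \
        --net0 virtio,bridge=vmbr0 \
        --ide2 local:iso/debian-8.2.0-amd64-netinst.iso,media=cdrom \
        --virtio0 local:32,format=qcow2    # 32 GB disk on storage "local"
    qm start "$VMID"
else
    echo "qm not found - run this on a Proxmox VE host"
fi
```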
Video Tutorials
- See Proxmox VE YouTube Channel
- Windows 2003 Server on KVM (Video)
- Hardware setup for KVM guests (Video)
Managing Virtual Machines
Go to "VM Manager/Virtual Machines" to see a list of your Virtual Machines.
Basic tasks can be done by clicking on the red arrow (drop-down menu):
- start, restart, shutdown, stop
- migrate: migrate a Virtual Machine to another physical host (you need at least two Proxmox VE servers - see Proxmox VE Cluster)
- console: for container virtualization, the VNC console automatically logs in as root; for KVM Virtual Machines, the console shows the screen of the fully virtualized machine
For a detailed view and configuration changes just click on a Virtual Machine row in the list of VMs.
"Logs" on a container Virtual Machine:
- Boot/Init: shows the Boot/Init logs generated during start or stop
- Command: see the current/last executed task
- Syslog: see the real time syslog of the Virtual Machine
Networking and Firewall
A detailed guide on how to virtualize your VPS network infrastructure. [1]
On adding a second network card, enabling it on the Configuration -> System -> vmbr0 page, and choosing the eth1 card as the bridged interface, the /etc/network/interfaces file will look like:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0