Installation

From Proxmox VE
= Introduction =
Proxmox VE installs the complete operating system and management tools in 3 to 5 minutes, depending on the hardware used.

Including the following:

*Complete operating system (Debian 6 "Squeeze", 64-bit)
*Partitioning of the hard drive with [[LVM2]]
*[[Proxmox VE Kernel]] with OpenVZ and KVM support
*Backup/restore tools
*Web-based management interface

Please note: the complete server is used and all existing data is removed.
=== Video tutorials ===

List of all official tutorials: [[:Category:Video Tutorials|Video Tutorials]]

==== Screen cast video demo of container installation (OUTDATED) ====

*English [http://www.fridu.org/download/video/flv/intro-openvz-en.html]
*French [http://www.fridu.org/download/video/flv/intro-openvz-fr.html]

Note: These external videos are based on a beta version of Proxmox VE.


= System requirements =

For production servers, high quality server equipment is needed. Keep in mind: if you run 10 virtual servers on one machine and that machine suffers a hardware failure, 10 services are lost. Proxmox VE supports clustering, meaning that multiple Proxmox VE installations can be centrally managed via the included cluster functionality.

Proxmox VE can use local storage (DAS) and SAN. For details see [[Storage Model]].

== Minimum requirements for evaluation ==

*CPU: 64-bit (Intel EM64T or AMD64), Intel VT/AMD-V capable CPU/mainboard (for KVM full virtualization support)
*Minimum 1 GB RAM
*Hard drive
*One NIC

== Recommended system requirements ==

*CPU: 64-bit (Intel EM64T or AMD64), multi-core CPU recommended, Intel VT/AMD-V capable CPU/mainboard (for KVM full virtualization support)
*8 GB RAM is good, more is better
*Hardware RAID with battery-protected write cache (BBU) or flash-based protection
*Fast hard drives; best results with 15k rpm SAS in RAID 10
*At least two NICs; depending on the storage technology used, you may need more

== Certified hardware ==

Basically you can use all hardware supporting RHEL6, 64-bit. If you are unsure, post in the forum.
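Before installing on an existing Linux machine, you can verify the KVM requirement above by looking for the relevant CPU flags. This is a generic Linux check, not a Proxmox-specific tool:

```shell
# "vmx" marks Intel VT, "svm" marks AMD-V; one of these flags must be
# present in /proc/cpuinfo for KVM full virtualization to work.
if grep -qE '(vmx|svm)' /proc/cpuinfo; then
    echo "CPU supports KVM full virtualization"
else
    echo "No Intel VT / AMD-V flags found - only OpenVZ containers will run"
fi
```

Remember that virtualization extensions may also need to be enabled in the BIOS, so an absent flag can sometimes be fixed by a BIOS setting.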


= Steps to get your Proxmox VE up and running =
== Install Proxmox VE server ==


See [[Quick installation]]


[http://youtu.be/ckvPt1Bp9p0 Proxmox VE installation (Video Tutorial)]


If you need to install the outdated 1.9 release, see [[Installing Proxmox VE v1.9 post Lenny retirement]]


== Optional: Install Proxmox VE on existing Debian Squeeze (64-bit) ==


See [[Install Proxmox VE on Debian Squeeze]]


== Optional: Install Proxmox VE over iSCSI ==

See [[Proxmox ISCSI installation]]

== Proxmox VE web interface ==

Configuration is done via the web interface; just point your browser to the IP address given during installation (https://youripaddress:8006). Please make sure that your browser has the latest Oracle Java browser plugin installed. Proxmox VE is tested with IE9, Firefox 10 and higher, and Google Chrome (latest).

Default login is "root" and the root password is defined during the installation process.

=== Configure basic system setting ===
Please review the NIC setup, IP and hostname.


'''Note: changing the IP or hostname after cluster creation is not possible (unless you know exactly what you are doing).'''


=== Get Appliance Templates ===
==== Download ====


Just go to the content tab of your storage (e.g. "local") and download pre-built Virtual Appliances directly to your server. This list is maintained by the Proxmox VE team, and more and more appliances will become available. This is the easiest way and a good place to start.


==== Use an NFS share for ISOs ====

If you have an NFS server, you can use an NFS share for storing ISO images. To start, configure the NFS ISO store on the web interface (Configuration/Storage).
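The resulting entry in /etc/pve/storage.cfg might look like the following sketch; the storage name, server address and export path are placeholders, not values from this guide:

<pre>
nfs: nfs-iso
        server 192.168.1.50
        export /export/iso
        path /mnt/pve/nfs-iso
        content iso
</pre>

The "content iso" line is what makes the store show up as a source for ISO images in the web interface.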
==== Upload from your desktop ====

If you already have Virtual Appliances, you can upload them via the upload button. To install a virtual machine from an ISO image (using KVM full virtualization), just upload the ISO file via the upload button (limited to 2 GB).

==== Directly to file system ====


Templates and ISO images are stored on the Proxmox VE server (see /var/lib/vz/template/cache for OpenVZ templates and /var/lib/vz/template/iso for ISO images). You can also transfer templates and ISO images to these directories via secure copy (scp). If you work on a Windows desktop, you can use a graphical scp client like [http://winscp.net WinSCP].
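As a sketch, the transfer from a Linux desktop could look like this. The host address and file names are hypothetical, and the leading echo makes the commands print instead of run; remove it to perform the actual copy:

```shell
# Hypothetical Proxmox VE host - replace with your server's address.
HOST=root@192.168.1.100

# Print the scp commands for an OpenVZ template and a KVM install ISO
# (drop the "echo" to really copy the files).
echo scp my-template.tar.gz "$HOST:/var/lib/vz/template/cache/"
echo scp my-installer.iso "$HOST:/var/lib/vz/template/iso/"
```

After the copy, the template and ISO appear in the content tab of the "local" storage without any further steps.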


= Create Virtual Machines =
== Container (OpenVZ) ==

[[Image:Screen-create-container-mailgateway.png|thumb|Create OpenVZ container]]
[[Image:Screen-create-container-mailgateway-log.png|thumb|Create OpenVZ container - log]]
[[Image:Screen-virtual-machine-detail1.png|thumb|Running OpenVZ container]]

Just click "Create CT":

'''General'''

*Node: If you have several Proxmox VE servers, select the node where you want to create the new container
*VM ID: choose a virtual machine identification number; just use the given ID or overwrite the suggested one
*Hostname: give a unique server name for the new container
*Resource Pool: select a previously created resource pool (optional)
*Storage: select the storage for your container
*Password: set the root password for your container


'''Template'''


*Storage: select your template data store (you need to download templates before you can select them here)
*Template: choose the template


'''Resources'''


*Memory (MB): set the memory (RAM)
*Swap (MB): set the swap
*Disk size (GB): set the total disk size
*CPUs: set the number of CPUs (if you run Java inside your container, choose at least 2 here)


'''Network'''


*Routed mode (venet): the default ([http://wiki.openvz.org/Venet venet]); give the container a unique IP
*Bridged mode ([http://wiki.openvz.org/Veth veth]): only needed in some cases; see [http://wiki.openvz.org/Differences_between_venet_and_veth Differences_between_venet_and_veth] on the OpenVZ wiki for details. If you select bridged Ethernet, the IP configuration has to be done inside the container, just as you would on a physical server.


'''DNS'''

*DNS Domain: e.g. yourdomain.com
*First/Second DNS Servers: enter DNS servers


'''Confirm'''

This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking them.

After you click "Finish", all settings are applied - wait for completion (this process can take between a few seconds and a minute, depending on the template used and your hardware).
=== Video Tutorials ===

*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]

== Virtual Machines (KVM) ==


Just click "Create VM":


'''General'''


*Node: If you have several Proxmox VE servers, select the node where you want to create the new VM
*VM ID: choose a virtual machine identification number; just use the given ID or overwrite the suggested one
*Name: choose a name for your VM (this is not the hostname); it can be changed at any time
*Resource Pool: select a previously created resource pool (optional)


'''OS'''


Select the Operating System (OS) of your VM.


'''CD/DVD'''


*Use CD/DVD disc image file (iso): select the storage where you previously uploaded your ISO images and choose the file
*Use physical CD/DVD Drive: choose this to use the CD/DVD drive of your Proxmox VE node
*Do not use any media: choose this if you do not want any media


'''Hard disk'''


*Bus/Device: choose the bus type; as long as your guest supports it, go for virtio
*Storage: select the storage where you want to store the disk
*Disk size (GB): define the size
*Format: define the disk image format; for good performance go for raw, if you plan to use snapshots go for qcow2
*Cache: define the cache policy for the virtual disk
*Limits: set the maximum transfer speeds


'''CPU'''


*Sockets: set the number of CPU sockets
*Cores: set the number of CPU cores per socket
*CPU type: select the CPU type
*Total cores: never use more CPU cores than are physically available on the Proxmox VE host


'''Memory'''


*Memory (MB): set the memory (RAM) for your VM


'''Network'''

*Bridged mode: this is the default setting; just choose the bridge where you want to connect your VM. If you want to use VLANs, you can define the VLAN tag for the VM.
*NAT mode
*No network device
*Model: choose the emulated network device; as long as your guest supports it, go for virtio
*MAC address: use 'auto' or overwrite with a valid and unique MAC address
*Rate limit (MB/s): set a speed limit for this network adapter


'''Confirm'''

This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking them. After you click "Finish", all settings are applied - wait for completion (this process just takes a second).


=== Video Tutorials ===

*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
*[[Windows 2003 Server on KVM (Video)]]
*[[Hardware setup for KVM guests (Video)]]


== Managing Virtual Machines ==
 
Go to "VM Manager/Virtual Machines" to see a list of your Virtual Machines.


Basic tasks can be done via the red arrow drop-down menu:

*start, restart, shutdown, stop
*migrate: migrate a Virtual Machine to another physical host (you need at least two Proxmox VE servers; see [[Proxmox VE Cluster]])
*console: for container virtualization, the VNC console automatically logs in as root; for a KVM Virtual Machine, the console shows the screen of the fully virtualized machine
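The same basic tasks are also available from the host's command line: vzctl manages OpenVZ containers and qm manages KVM guests. The VMIDs below are examples:

<pre>
vzctl start 101        # start container 101
vzctl enter 101        # open a root shell inside the container
vzctl stop 101         # stop the container
qm start 102           # start KVM guest 102
qm shutdown 102        # clean (ACPI) shutdown of the guest
</pre>

This is handy when the web interface is unreachable, e.g. during network troubleshooting.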


For a '''detailed view''' and '''configuration changes''' just click on a Virtual Machine row in the list of VMs.


"Logs" on a container Virtual Machine:

*Boot/Init: shows the Boot/Init logs generated during start or stop
*Command: see the current/last executed task
*Syslog: see the real-time syslog of the Virtual Machine


== Networking and Firewall ==


A detailed guide on how to virtualize your VPS network infrastructure: [http://www.fridu.org/fulup-posts/40-hosting-a-sysadmin/52-openvz-virtualization#architecture]


After adding a second network card, enabling it on the '''Configuration -> System -> vmbr0''' page, and choosing the '''eth1''' card as the bridged interface, the '''/etc/network/interfaces''' file will look like:
<pre>
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.100
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
</pre>
The '''[http://www.dlink.com/products/?pid=284 D-Link DGE-530T] Gigabit PCI Desktop Adapter''' is detected by the '''e1000e''' [http://www.intel.com/support/network/sb/cs-006120.htm Intel(R) PRO/1000 Network Driver] during modprobe at boot and works well. This card is one of the few that also comes with a half-height low-profile bracket.
 
[[Category:HOWTO]] [[Category:Installation]]

Revision as of 17:40, 22 October 2012
