== Troubleshooting ==
 
See the [[Troubleshooting]] page.
===I can't switch virtual consoles in Linux KVM guests with alt-F1, alt-F2...===
 
The VNC viewer does not pass some key combinations, or they may be intercepted by your operating system.
 
To send custom key combinations to the guest, go to "Monitor" in the Virtual Machine Configuration for a given guest and use the "sendkey" command.
 
For example, to switch to the third console (tty3) you would use:
 
sendkey alt-f3
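If you prefer to stay on the Proxmox VE host's command line, the same monitor can be reached with <code>qm monitor</code> (a sketch; <VMID> is a placeholder for your guest's ID):

 # open an interactive monitor prompt for the guest
 qm monitor <VMID>
 # then, at the qm> prompt, type:
 sendkey alt-f3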
 
 
===How can I send sysrq to Linux KVM guests?===
 
Similarly to the above, go to "Monitor" in the Virtual Machine Configuration for a given guest and use the "sendkey" command.
 
For example, to issue "Emergency Sync", you would use:
 
sendkey alt-sysrq-s
 
In the VNC viewer for the given guest you should see:
 
SysRq : Emergency Sync
 
You should also see this entry if you run "dmesg" on this guest.
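If nothing shows up, the guest kernel may have SysRq disabled. A quick check inside the guest (assuming a standard Linux guest with /proc mounted):

 # 1 means all SysRq functions are enabled, 0 means disabled
 cat /proc/sys/kernel/sysrq
 # enable SysRq for the running system
 echo 1 > /proc/sys/kernel/sysrq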
 
See also http://en.wikipedia.org/wiki/Magic_SysRq_key for a full reference of possible combinations.
 
 
===How can I access Linux guests through a serial console?===
 
Sometimes, it is necessary to access the guest through a serial console:
* you lost network access to the guest, and VNC is either too slow for you or doesn't have the features you need (e.g. easy copy/paste between other terminals)
* your guest freezes or kernel panics and you want to debug it, but it's impossible to capture all messages on the VNC screen
* you are familiar with <code>xm console <guest></code> from Xen and you want to use a similar feature here
 
 
The necessary steps are:
* on the Proxmox VE host, in the guest's configuration file <code>/etc/qemu-server/<VMID>.conf</code>, add:
 
args: -serial unix:/var/run/qemu-server/<VMID>.serial,server,nowait
 
This will expose the console through the socket file <code>/var/run/qemu-server/<VMID>.serial</code>, which can be accessed with minicom or another serial communication program.
 
An alternative is to add:
 
args: -serial tcp:localhost:6000,server,nowait
 
With this, you can connect to the guest's serial console with telnet. Note that with telnet, any passwords will be visible on the screen.
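For example, with the TCP variant above you could connect from the Proxmox VE host like this (assuming a telnet client is installed):

 telnet localhost 6000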
 
 
* on the guest, in <code>/etc/inittab</code>, look for lines similar to the ones below and make sure "ttyS0" is present - this will be your serial console:
 
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty ttyS0
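After editing <code>/etc/inittab</code>, you can tell init to re-read it without rebooting (on SysV-init based guests):

 # make init re-examine /etc/inittab
 telinit q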
 
If you want kernel messages to be shown on both the serial and VGA consoles, you have to add kernel parameters in your bootloader's configuration. For grub, these would be the "console" entries:
 
kernel (hd0,0)/boot/vmlinuz root=LABEL=guest-root console=ttyS0 console=tty0
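For guests that use GRUB 2 instead of legacy grub, the usual equivalent (a sketch, assuming a Debian-style guest) is to extend the kernel command line in <code>/etc/default/grub</code> and regenerate the configuration:

 # in /etc/default/grub:
 #   GRUB_CMDLINE_LINUX="console=ttyS0 console=tty0"
 # then regenerate grub.cfg:
 update-grub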
 
 
To access the guest from minicom, configure it to use a path such as <code>unix#/var/run/qemu-server/<VMID>.serial</code> in "cOnfigure Minicom -> Serial port setup -> Serial Device".
 
 
To use multiple minicom configurations for several guests, create a file like <code>/etc/minicom/minirc.someguest</code> for each of your guests, with the contents:
 
pu port            unix#/var/run/qemu-server/<VMID>.serial
pu minit
pu mreset
 
 
Then, start the console with:
 
minicom someguest
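If you prefer not to use minicom, socat (where installed) can attach your terminal directly to the socket file:

 socat - UNIX-CONNECT:/var/run/qemu-server/<VMID>.serial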
 
 
See also http://www.proxmox.com/forum/showthread.php?p=5615
 
===How can I assign a physical disk to a VM?===
You don't have to do anything at the host level (i.e. no fstab entry or similar); just make the disk available directly to the KVM guest:
qm set <vmid> -ide# /dev/sdb
Or:
qm set <vmid> -ide# /dev/disk/by-id/[your disk ID]
 
The by-id form is preferable, since the plain device name can change (should you add a drive), which might have unintended consequences.
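To find a stable name for your disk, list the by-id symlinks and pick the entry that points at the right device:

 # each symlink points at a /dev/sdX device
 ls -l /dev/disk/by-id/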
 
Also see <code>/etc/qemu-server/<vmid>.conf</code> if you want to add it by editing the config file by hand (e.g. adding <code>ide1: /dev/sdb2</code>).
After that you can run the VM as usual, and you will have the new storage device available inside it.
Beware that you can't assign it to more than one running VM unless the filesystem is designed for such a scenario.
 
===How can I assign a physical disk to a CT?===
See [http://wiki.openvz.org/Bind_mounts OpenVZ: Bind mounts]
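In short, the method described there is a per-container mount script on the host. A minimal sketch, assuming you want the host directory /storage/data to appear as /mnt/data inside container <VMID> (paths are placeholders):

 #!/bin/bash
 # /etc/vz/conf/<VMID>.mount -- executed on the host each time the container starts
 source /etc/vz/vz.conf       # global OpenVZ settings
 source ${VE_CONFFILE}        # per-container settings, provides ${VE_ROOT}
 # bind-mount the host directory into the container's root filesystem
 mount -n --bind /storage/data ${VE_ROOT}/mnt/data

The target directory (/mnt/data here) must already exist inside the container.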
 
=== "error: out of partition" after a fresh install ===
The error message below may appear if you had two or more hard drives connected during the installation. Try disconnecting all but one disk.
error: out of partition.
grub rescue>
 
=== NFS Client Mount Error: "mount.nfs: No such device" ===
By default, NFS cannot be mounted in OpenVZ containers. See this page to set it up: [http://wiki.openvz.org/NFS OpenVZ: NFS]
See also this page to make a host directory visible to a container: [http://wiki.openvz.org/Bind_mounts OpenVZ: Bind mounts]
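In short, the linked OpenVZ page boils down to loading NFS support on the host and enabling the nfs feature for the container (a sketch; replace <VMID>):

 # on the Proxmox VE host
 modprobe nfs            # make the NFS client modules available
 vzctl set <VMID> --features "nfs:on" --save
 vzctl restart <VMID>    # the feature takes effect on restart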
 
=== See also ===
* [[Troubleshooting]]
* [[:Category:Troubleshooting]]
* [[:Category:HOWTO]]


TBD: if possible, detailed answers should be linked to internal wiki pages.

== General ==

=== What is a container, CT, VE, Virtual Private Server, VPS? ===

See [[Container and Full Virtualization]]

=== What is a KVM guest (KVM VM)? ===

A KVM guest or KVM VM is a guest system running virtualized under Proxmox VE with KVM.

=== What is a Virtual Appliance? ===

See [[Overview]]

=== What distribution is Proxmox VE based on? ===

Proxmox VE is based on Debian GNU/Linux; the Proxmox VE kernel is based on the RHEL6 kernel with OpenVZ patches.

== Installation and upgrade ==

=== Where can I find installation instructions? ===

See [[Installation]]

=== Proxmox VE command line tools ===

See [[Command line tools]]

== Hardware ==

=== CPU ===

==== Will Proxmox VE run on a 32-bit processor? ====

Proxmox VE runs only on 64-bit CPUs (AMD or Intel). There are no plans to support 32-bit platforms.

There are, however, unofficial (and unsupported) instructions for manually installing Proxmox VE on 32-bit systems.

==== Supported CPU chips ====

To check whether your CPU supports virtualization, look for the "vmx" or "svm" flag in the output of this command:

egrep '(vmx|svm)' /proc/cpuinfo
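A non-empty output means the flag is present. You can also verify that the matching KVM module is loaded on the Proxmox VE host:

 # expect kvm plus kvm_intel or kvm_amd
 lsmod | grep kvm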

'''Intel:''' 64-bit processors with Intel Virtualization Technology (Intel VT-x) support (see the list of processors with Intel VT and 64-bit).

'''AMD:''' 64-bit processors with AMD Virtualization Technology (AMD-V) support.

== Networking ==

=== How do I configure bridged networking in an OpenVZ Ubuntu/Debian container? ===

1. In the web GUI, under Virtual Machine Configuration, go to the network tab.
2. Remove the IP address for venet and save ("Bridged Ethernet Devices" will appear).
3. SSH into your host system and enter the container you want to set bridged networking for: <code>vzctl enter <VMID></code>
4. Edit <code>/etc/network/interfaces</code> using the following format and save (replace with the settings for your network):
     auto lo
     iface lo inet loopback
     
     auto eth0
     iface eth0 inet static
            address 10.0.0.17
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            gateway 10.0.0.10
    
5. Shut down the container.
6. Go back to the web GUI and, under "Bridged Ethernet Devices", configure eth0 to vmbr0 and save (a MAC address will be assigned automatically).
7. Start the container.

Finally, check that networking is working by entering the guest and viewing the output of <code>ifconfig</code>.
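For example, from the host (assuming the gateway 10.0.0.10 from the configuration above):

 # run commands inside the container without entering it
 vzctl exec <VMID> ifconfig eth0
 vzctl exec <VMID> ping -c 3 10.0.0.10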

* In a CentOS/RHEL container, also check that the gateway device is set correctly. Edit <code>/etc/sysconfig/network</code>:

NETWORKING="yes"
#GATEWAYDEV="venet0"              # comment this and add line below
GATEWAYDEV="eth0"
HOSTNAME="hostname"    # should be set by proxmox
GATEWAY=123.123.123.123         # CHANGE (and remove from ifcfg-eth0)
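After changing the file, restart networking inside the container so the new gateway takes effect:

 # CentOS/RHEL style init script
 service network restart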

== Virtualization ==

=== Why do you recommend 32-bit guests over 64-bit guests? ===

64-bit makes sense only if you need more than 4 GB of memory. 32-bit guests use less memory in certain situations; e.g. a standard installation of apache2 in a 64-bit container consumes much more memory than in a 32-bit one.
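To see which variant a given container runs and how much memory it actually uses, check inside the guest:

 # i686 = 32-bit, x86_64 = 64-bit
 uname -m
 # memory usage in megabytes
 free -m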

