Convert OpenVZ to LXC

From Proxmox VE
Latest revision as of 12:30, 9 January 2020

== Introduction ==

This article describes the migration of OpenVZ containers to Linux containers (LXC). OpenVZ is not available for kernels above 2.6.32, so a migration is necessary. [[Linux Container]] technology is available in all mainline Linux kernels and is the future-proof technology introduced in the Proxmox VE 4.x series.

== Move an OpenVZ container to LXC in 5 steps ==

=== General overview ===

Basically you have to follow these steps:

on the Proxmox VE 3.x node:
# note the network settings used by the container
# make a backup of the OpenVZ container

on the Proxmox VE 4.x node:
# restore/create an LXC container based on the backup
# configure the network with the previous settings
# boot and voilà, it works

Note that all the steps mentioned here can be done with the Web GUI. However, it is easier to split the steps into command line actions. This allows you to script the steps if you have a larger number of containers to convert.
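For example, the stop-and-backup commands from the backup section below lend themselves to a small script. A minimal Python sketch that prints one command line per container (the CTID list and storage name are examples, not taken from your setup):

```python
# Sketch: generate the per-container backup commands for a list of CTIDs.
ctids = [100, 101, 102]      # example OpenVZ container IDs
storage = "local"            # a backup storage listed by `pvesm status`

def backup_command(ctid, storage):
    """Stop the container, then start a backup right after the shutdown."""
    return f"vzctl stop {ctid} && vzdump {ctid} -storage {storage}"

for ctid in ctids:
    print(backup_command(ctid, storage))
```

The printed lines can then be reviewed and run on the Proxmox VE 3.x node.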

=== Unsupported OpenVZ templates ===

Not all OpenVZ templates are supported. If you try to convert a container with an unsupported OS, the <tt>pct restore</tt> command will print an error message and the restore will fail:

<pre>
unsupported fedora release 'Fedora release 14 (Laughlin)'
</pre>

== Step by step conversion ==

Log in with SSH on your Proxmox VE 3.x node.

Suppose you want to migrate three different containers: a CentOS, an Ubuntu, and a Debian container.

    vzlist 
     CTID      NPROC STATUS    IP_ADDR         HOSTNAME
      100         20 running   -               centos6vz.proxmox.com
      101         18 running   -               debian7vz.proxmox.com
      102         20 running   192.168.15.142  ubuntu12vz.proxmox.com

=== Get the network configuration of the OpenVZ containers, and note it somewhere ===

A) If your container uses a venet device, you get the address directly from the command line:

   vzlist 102
   CTID      NPROC STATUS    IP_ADDR         HOSTNAME
   102         20 running   192.168.15.142  ubuntu12vz.proxmox.com
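If many containers use venet, the addresses can also be pulled out of the <tt>vzlist</tt> output mechanically. A sketch (the helper name is ours; the sample text mirrors the output above):

```python
# Sketch: extract CTID -> IP_ADDR from `vzlist` output; '-' means the
# container has no venet address (veth is configured inside the container).
vzlist_sample = """\
CTID      NPROC STATUS    IP_ADDR         HOSTNAME
 100         20 running   -               centos6vz.proxmox.com
 101         18 running   -               debian7vz.proxmox.com
 102         20 running   192.168.15.142  ubuntu12vz.proxmox.com
"""

def venet_addresses(vzlist_output):
    """Return {ctid: ip} for every container that has a venet address."""
    addrs = {}
    for line in vzlist_output.splitlines()[1:]:   # skip the header row
        ctid, _nproc, _status, ip, _hostname = line.split()
        if ip != "-":
            addrs[int(ctid)] = ip
    return addrs

print(venet_addresses(vzlist_sample))   # only CT 102 has a venet address here
```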

B) If your container uses veth, the network configuration is done inside the container. How to find the network configuration depends on which OS is running inside the container:

If you have a CentOS based container, you can get the network configuration like this:

   # start a root shell inside the container 100
   vzctl enter 100
   cat /etc/sysconfig/network-scripts/ifcfg-eth0
   exit

There may be more than one network interface in CentOS; check for <tt>ifcfg-eth1</tt> and the like with the above command as well.

If you have a Debian, Ubuntu or Turnkey Linux appliance (here all network interfaces are listed in one file):

   vzctl enter 101
   cat /etc/network/interfaces
   exit
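When converting many containers, the noted settings can be extracted from a Debian-style interfaces file mechanically. A sketch (the sample stanza and helper name are illustrative, not taken from the article's containers):

```python
# Sketch: pull address/netmask/gateway out of a Debian-style
# /etc/network/interfaces static stanza (sample content is illustrative).
interfaces_sample = """\
auto eth0
iface eth0 inet static
    address 192.168.15.145
    netmask 255.255.255.0
    gateway 192.168.15.1
"""

def static_config(interfaces_text):
    """Return the address/netmask/gateway option -> value pairs."""
    wanted = {"address", "netmask", "gateway"}
    conf = {}
    for line in interfaces_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] in wanted:
            conf[parts[0]] = parts[1]
    return conf

print(static_config(interfaces_sample))
```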

=== Make a backup of your containers ===

First, choose the storage where you want to back up the containers.

<pre>
# List available storages:
pvesm status
freenas     nfs 1        27676672             128        27676544 0.50%
local       dir 1         8512928         2122088         6390840 25.43%
nas-iso     nfs 1      2558314496       421186560      2137127936 16.96%
</pre>

For example, you can use the "local" storage, which corresponds to the directory <tt>/var/lib/vz/dump</tt> on a standard Proxmox VE installation.

By default, this storage does not allow backups to be stored, so make sure you enable it for backup contents. (See [[Storage_Model#Storage_type_Content|Storage type Content]])

Then back up all the containers:

<pre>
# Stop the container, and start a backup right after the shutdown:
vzctl stop 100 && vzdump 100 -storage local
vzctl stop 101 && vzdump 101 -storage local
vzctl stop 102 && vzdump 102 -storage local
</pre>

At that point you can either:

* A) Upgrade your Proxmox VE 3.x node to Proxmox VE 4.x
* B) Copy the backups to a Proxmox VE 4.x node, and do the conversion on the Proxmox VE 4.x node

Suppose you follow option B) (copy the backups to the Proxmox VE 4.x node, and convert to LXC format)

<pre>
# Copy each container tar backup to the pve4 node via ssh:
cd /var/lib/vz/dump/
scp vzdump-openvz-100-2015_08_27-10_46_47.tar root@pve4:/var/lib/vz/dump
scp vzdump-openvz-101-2015_08_27-10_50_44.tar root@pve4:/var/lib/vz/dump
scp vzdump-openvz-102-2015_08_27-10_56_34.tar root@pve4:/var/lib/vz/dump
</pre>

=== Restore/Create LXCs based on your backup ===

Now switch to the Proxmox VE 4.x node, and create containers based on the backup:

   pct restore 100 /var/lib/vz/dump/vzdump-openvz-100-2015_08_27-10_46_47.tar
   pct restore 101 /var/lib/vz/dump/vzdump-openvz-101-2015_08_27-10_50_44.tar
   pct restore 102 /var/lib/vz/dump/vzdump-openvz-102-2015_08_27-10_56_34.tar

At that point you should be able to see your containers in the web interface, but they still have no network.

'''Note:''' if you want or need to restore to a storage other than the default 'local' one, add <tt>-storage STORAGEID</tt> to the <tt>pct restore</tt> command. E.g., if you have a ZFS storage called 'local-zfs', you can use the following command to restore:

   pct restore 100 /var/lib/vz/dump/vzdump-openvz-100-2015_08_27-10_46_47.tar -storage local-zfs

=== Add network configuration based on the original settings ===

LXC uses virtual network adapters which are bridged to the physical interface of your host. This works very similarly to the way veth devices work in OpenVZ.

In Proxmox VE 3.x the configuration of each container using a '''veth''' device had to be done inside the container. In Proxmox VE 4.x you can do this directly from the host.

==== Add network configuration via the GUI ====

For each container:
* Select the container by clicking on it
* Go to the Network tab
* Click Add device
* On the veth device panel, add a device with the parameters:
:ID: net0
:name: eth0
:put your IP address and the corresponding netmask in the following format: 192.168.5.75/24
==== Add network configuration via the CLI ====

  pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.144/24,gw=192.168.15.1
  pct set 102 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.145/24,gw=192.168.15.1
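The <tt>-net0</tt> value wants the address in CIDR form, while the settings noted from a CentOS <tt>ifcfg</tt> or Debian <tt>interfaces</tt> file usually come with a dotted netmask. A sketch that converts the netmask and assembles the option string (the helper name and example values are ours):

```python
import ipaddress

# Sketch: build the value for `pct set <ctid> -net0 ...` from the
# address/netmask/gateway noted earlier; the netmask is converted
# to a CIDR prefix length (255.255.255.0 -> /24).
def net0_option(address, netmask, gateway, bridge="vmbr0", name="eth0"):
    """Return the comma-separated -net0 option string."""
    prefix = ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen
    return f"name={name},bridge={bridge},ip={address}/{prefix},gw={gateway}"

print("pct set 101 -net0 " + net0_option("192.168.15.144", "255.255.255.0", "192.168.15.1"))
```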

=== Start the containers ===

  pct start 100
  pct start 101
  pct start 102

and voilà, you can now log in to a container and check that your services are running:

  pct enter 100

=== Optional steps ===

==== Graphical console ====

If you have not yet done so, [https://pve.proxmox.com/wiki/OpenVZ_Console you can add a console to your container], so you can log in to the containers via the web GUI.

==== OpenVZ bind mounts ====

If you use [https://openvz.org/Bind_mounts OpenVZ bind mounts], you need to recreate them in LXC. See [[LXC Bind Mounts]].

==== PTY Allocation ====

<tt>/dev/pts</tt> and <tt>/dev/shm</tt> used to be present in OpenVZ templates' <tt>/etc/fstab</tt>, but cannot be mounted with a current LXC (>= 3.0). If the container refuses to start, remove those lines from <tt>/etc/fstab</tt> (or the complete file, if nothing relevant remains). E.g. entries like

<pre>
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
</pre>

need to be removed:

# mount the container rootfs: <tt>pct mount $VMID</tt>
# edit the fstab: <tt>nano /var/lib/lxc/$VMID/rootfs/etc/fstab</tt>
# unmount the container rootfs: <tt>pct unmount $VMID</tt>
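The fstab cleanup can also be done programmatically when many containers are affected. A sketch that filters the two obsolete mount points out of an fstab's text (the helper name is ours; run it against the fstab inside the mounted rootfs, whose usual path is the one shown above):

```python
# Sketch: drop the /dev/pts and /dev/shm entries from an fstab's contents,
# e.g. /var/lib/lxc/$VMID/rootfs/etc/fstab while the rootfs is mounted
# with `pct mount` (verify the mount point on your node).
OBSOLETE_MOUNTPOINTS = {"/dev/pts", "/dev/shm"}

def strip_obsolete(fstab_text):
    """Keep every fstab line whose mount point (field 2) is not obsolete."""
    kept = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # field 2 of an fstab entry is the mount point; comments and blank
        # lines have no such field and are kept as-is
        if len(fields) >= 2 and fields[1] in OBSOLETE_MOUNTPOINTS:
            continue
        kept.append(line)
    return "\n".join(kept) + "\n"

example = "none /dev/pts devpts rw,gid=5,mode=620 0 0\n/dev/sda1 / ext4 defaults 0 1\n"
print(strip_obsolete(example))   # only the /dev/sda1 line survives
```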

== See also ==

=== CentOS 6 OpenVZ to LXC migration issues ===

https://forum.proxmox.com/threads/centos-6-openvz-to-lxc-migration-issues.35058/

[[Category: HOWTO]]