Proxmox Cluster file system (pmxcfs)

Introduction

Proxmox Cluster file system (pmxcfs) is a database-driven file system for storing configuration files, replicated in real time on all nodes using corosync. We use this to store all PVE-related configuration files. Although the file system stores all data inside a persistent database on disk, a copy of the data resides in RAM. That imposes a restriction on the maximal size, which is currently 30 MB. This is still enough to store the configuration of several thousand virtual machines. Note that this restriction is also imposed by the real-time replication through the totem ring protocol.
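
Since all data is also kept in the persistent database on disk, a rough idea of how close a cluster is to that size limit can be gained by looking at the database file itself (a simple check, using the default database path described in the Recovery section below):

du -h /var/lib/pve-cluster/config.db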

Advantages

  • seamless replication of all configuration to all nodes in real time
  • provides strong consistency checks to avoid duplicate VM IDs
  • read-only when a node loses quorum (see the quorum check below)
  • automatic updates of the corosync cluster configuration to all nodes
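
For example, whether the local node currently has quorum, and therefore whether /etc/pve is writable, can be checked with the cluster manager tool:

pvecm status

Without quorum, writes to files under /etc/pve are refused until quorum is regained.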

POSIX Compatibility

The file system is based on FUSE, so the behavior is POSIX-like, but many features are simply not implemented because we do not need them:

  • you can only create normal files and directories, but no symbolic links, ...
  • you can't rename non-empty directories (because this makes it easier to guarantee that VMIDs are unique).
  • you can't change file permissions (permissions are based on path; see the example after this list)
  • O_EXCL creates were not atomic (like old NFS)
  • O_TRUNC creates are not atomic (FUSE restriction)
  • ...
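
As a quick illustration of these restrictions, the following commands are expected to fail inside /etc/pve (storage.cfg is only used as an example target; any existing file will do):

# changing permissions is rejected, because permissions are based on path
chmod 600 /etc/pve/storage.cfg
# symbolic links cannot be created
ln -s storage.cfg /etc/pve/storage-link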

File access rights

All files/dirs are owned by user 'root' and have group 'www-data'. Only root has write permissions, but group 'www-data' can read most files. Files below the following paths:

/etc/pve/priv/
/etc/pve/nodes/${NAME}/priv/

are only accessible by root.
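
The ownership and permissions of an individual configuration file can be inspected as usual, e.g. (again using storage.cfg as an example):

stat -c '%U %G %a %n' /etc/pve/storage.cfg

This should report owner 'root' and group 'www-data', while files below the priv/ directories are not readable by 'www-data' at all.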

Technology

We use the Corosync Cluster Engine (http://www.corosync.org) for cluster communication, and SQLite (http://www.sqlite.org) for the database file. The filesystem is implemented in user space using FUSE (http://fuse.sourceforge.net/).
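
If you ever need to look at the underlying database directly, only do so on a copy and never on the live file while pve-cluster is running; a minimal inspection sketch with the sqlite3 command line client (which may need to be installed separately):

# work on a copy of the database, not the live file
cp /var/lib/pve-cluster/config.db /tmp/config.db
# list the tables contained in the database
sqlite3 /tmp/config.db .tables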

File system layout

The file system is mounted at:

/etc/pve
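
That this is the FUSE-based pmxcfs and not an ordinary directory can be verified with findmnt (part of util-linux):

findmnt /etc/pve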

Files

corosync.conf  => corosync/cman cluster configuration file (prior to PVE 4.x this file was called cluster.conf)
storage.cfg   => PVE storage configuration
user.cfg      => PVE access control configuration (users/groups/...)
domains.cfg   => PVE Authentication domains 
authkey.pub   => public key used by ticket system
priv/shadow.cfg  => shadow password file
priv/authkey.key => private key used by ticket system
nodes/${NAME}/pve-ssl.pem                 => public ssl key for web server
nodes/${NAME}/priv/pve-ssl.key            => private ssl key
nodes/${NAME}/qemu-server/${VMID}.conf    => VM configuration data for KVM VMs
nodes/${NAME}/openvz/${VMID}.conf         => VM configuration data for OpenVZ containers

Symbolic links

local => nodes/${LOCALNAME}
qemu-server => nodes/${LOCALNAME}/qemu-server/
openvz => nodes/${LOCALNAME}/openvz/
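
The links always resolve to the directory of the node you are currently logged in on, which can be confirmed with readlink:

readlink /etc/pve/local

This prints nodes/${LOCALNAME}, i.e. the directory of the local node.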

Special status files for debugging (JSON)

.version    => file versions (to detect file modifications)
.members    => Info about cluster members
.vmlist     => List of all VMs
.clusterlog => Cluster log (last 50 entries)
.rrd        => RRD data (most recent entries)
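
These files contain JSON, so they can be pretty-printed for easier reading, for example with python3 (assuming it is available on the node):

cat /etc/pve/.members | python3 -m json.tool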


Enable/Disable debugging

# enable verbose syslog messages
echo "1" >/etc/pve/.debug 
# disable verbose syslog messages
echo "0" >/etc/pve/.debug 

Recovery

If you have major problems with your Proxmox VE host, e.g. hardware issues, it can be helpful to copy the pmxcfs database file /var/lib/pve-cluster/config.db and move it to a new Proxmox VE host. On the new host (with nothing running), stop the pve-cluster service and replace the config.db file (required permissions: 600). Then adapt /etc/hostname and /etc/hosts to match the lost Proxmox VE host, reboot and check. (And don't forget your VM/CT data.)
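
A minimal sketch of that procedure on a systemd-based release, assuming the saved database was copied to /root/config.db on the new host (a hypothetical path; adjust as needed):

# stop the cluster file system before touching the database
systemctl stop pve-cluster
# replace the database and set the required permissions
cp /root/config.db /var/lib/pve-cluster/config.db
chmod 600 /var/lib/pve-cluster/config.db
# now adapt /etc/hostname and /etc/hosts to match the lost host, then
reboot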

Remove Cluster configuration

The recommended way is to reinstall the node after you removed it from your cluster. This makes sure that all secret cluster/ssh keys and any shared configuration data are destroyed.

In some cases, you might prefer to put a node back into local mode without reinstalling, which is described here:

  • stop the cluster file system in /etc/pve/
# service pve-cluster stop

or, if you use PVE 4.0 and newer:

# systemctl stop pve-cluster
  • start it again but forcing local mode
# pmxcfs -l
  • remove the cluster config
# rm -f /etc/pve/cluster.conf /etc/pve/corosync.conf 
# rm -f /etc/cluster/cluster.conf /etc/corosync/corosync.conf 
# rm /var/lib/pve-cluster/corosync.authkey
  • stop the cluster file system again
# # on PVE 3.4 and earlier
# service pve-cluster stop
# # on PVE 4.0 and newer
# systemctl stop pve-cluster
  • you may have to delete the lockfile of the cluster filesystem:
# rm /var/lib/pve-cluster/.pmxcfs.lockfile
  • restart pve services (or reboot)

on PVE 3.4 and older:

# service pve-cluster start
# service pvedaemon restart
# service pveproxy restart
# service pvestatd restart

on PVE 4.0 and newer:

# systemctl start pve-cluster
# systemctl restart pvedaemon
# systemctl restart pveproxy
# systemctl restart pvestatd
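
After the services are back up, the node should be in local (standalone) mode again. As a quick sanity check, /etc/pve should be writable and the corosync configuration should be gone (the test file name below is arbitrary):

# touch /etc/pve/.writetest && rm /etc/pve/.writetest
# ls /etc/pve/corosync.conf /etc/corosync/corosync.conf

The touch should succeed, while ls should report that neither file exists anymore.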

Source code

The source code is available at https://git.proxmox.com/?p=pve-cluster.git;a=summary.