Proxmox Cluster File System (pmxcfs)

The Proxmox Cluster file system (“pmxcfs”) is a database-driven file system for storing configuration files, replicated in real time to all cluster nodes using corosync. We use this to store all PVE-related configuration files.

Although the file system stores all data inside a persistent database on disk, a copy of the data resides in RAM. This imposes restrictions on the maximum size, which is currently 30MB. This is still enough to store the configuration of several thousand virtual machines.

This system provides the following advantages:

  • Seamless replication of all configuration to all nodes in real time

  • Provides strong consistency checks to avoid duplicate VM IDs

  • Read-only when a node loses quorum

  • Automatic updates of the corosync cluster configuration to all nodes

  • Includes a distributed locking mechanism

POSIX Compatibility

The file system is based on FUSE, so the behavior is POSIX-like. But some features are simply not implemented, because we do not need them (a brief illustration follows this list):

  • You can just generate normal files and directories, but no symbolic links, …

  • You can’t rename non-empty directories (because this makes it easier to guarantee that VMIDs are unique).

  • You can’t change file permissions (permissions are based on paths)

  • O_EXCL creates were not atomic (like old NFS)

  • O_TRUNC creates are not atomic (FUSE restriction)
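As a brief illustration of these restrictions (run as root on a Proxmox VE node; the file names are arbitrary examples, and whether a call fails with an error or is silently ignored may differ between versions):

ln -s /etc/pve/local /etc/pve/link-test   # symbolic links cannot be created here
chmod 600 /etc/pve/user.cfg               # permissions cannot be changed; they follow from the path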

File Access Rights

All files and directories are owned by user root and have group www-data. Only root has write permissions, but group www-data can read most files. Files below the following paths are only accessible by root:

/etc/pve/priv/
/etc/pve/nodes/${NAME}/priv/
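The effective ownership and permissions can be checked with standard tools; for example (the file names are taken from the tables below):

stat -c '%U:%G %a %n' /etc/pve/user.cfg
stat -c '%U:%G %a %n' /etc/pve/priv/authkey.key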

Technology

We use the Corosync Cluster Engine for cluster communication, and SQLite for the database file. The file system is implemented in user space using FUSE.
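Because the backing store is an ordinary SQLite database, it can be inspected offline if needed. A cautious sketch, assuming the sqlite3 command-line tool is installed (work on a copy, ideally with the pve-cluster service stopped, and treat the internal schema as an implementation detail that may change):

cp /var/lib/pve-cluster/config.db /tmp/config-copy.db
sqlite3 /tmp/config-copy.db '.tables'    # list internal tables
sqlite3 /tmp/config-copy.db '.schema'    # show their layout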

File System Layout

The file system is mounted at:

/etc/pve
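On a running node, the FUSE mount can be verified with, for example:

findmnt /etc/pve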

Files

authkey.pub

Public key used by the ticket system

ceph.conf

Ceph configuration file (note: /etc/ceph/ceph.conf is a symbolic link to this)

corosync.conf

Corosync cluster configuration file (prior to Proxmox VE 4.x, this file was called cluster.conf)

datacenter.cfg

Proxmox VE data center-wide configuration (keyboard layout, proxy, …)

domains.cfg

Proxmox VE authentication domains

firewall/cluster.fw

Firewall configuration applied to all nodes

firewall/<NAME>.fw

Firewall configuration for individual nodes

firewall/<VMID>.fw

Firewall configuration for VMs and containers

ha/crm_commands

Displays HA operations that are currently being carried out by the CRM

ha/manager_status

JSON-formatted information regarding HA services on the cluster

ha/resources.cfg

Resources managed by high availability, and their current state

nodes/<NAME>/config

Node-specific configuration

nodes/<NAME>/lxc/<VMID>.conf

VM configuration data for LXC containers

nodes/<NAME>/openvz/

Prior to PVE 4.0, used for container configuration data (deprecated, removed soon)

nodes/<NAME>/pve-ssl.key

Private SSL key for pve-ssl.pem

nodes/<NAME>/pve-ssl.pem

Public SSL certificate for web server (signed by cluster CA)

nodes/<NAME>/pveproxy-ssl.key

Private SSL key for pveproxy-ssl.pem (optional)

nodes/<NAME>/pveproxy-ssl.pem

Public SSL certificate (chain) for web server (optional override for pve-ssl.pem)

nodes/<NAME>/qemu-server/<VMID>.conf

VM configuration data for KVM VMs

priv/authkey.key

Private key used by ticket system

priv/authorized_keys

SSH keys of cluster members for authentication

priv/ceph*

Ceph authentication keys and associated capabilities

priv/known_hosts

SSH keys of the cluster members for verification

priv/lock/*

Lock files used by various services to ensure safe cluster-wide operations

priv/pve-root-ca.key

Private key of cluster CA

priv/shadow.cfg

Shadow password file for PVE Realm users

priv/storage/<STORAGE-ID>.pw

Contains the password of a storage in plain text

priv/tfa.cfg

Base64-encoded two-factor authentication configuration

priv/token.cfg

API token secrets of all tokens

pve-root-ca.pem

Public certificate of cluster CA

pve-www.key

Private key used for generating CSRF tokens

sdn/*

Shared configuration files for Software Defined Networking (SDN)

status.cfg

Proxmox VE external metrics server configuration

storage.cfg

Proxmox VE storage configuration

user.cfg

Proxmox VE access control configuration (users/groups/…)

virtual-guest/cpu-models.conf

For storing custom CPU models

vzdump.cron

Cluster-wide vzdump backup-job schedule

Symbolic links

Certain directories within the cluster file system use symbolic links, in order to point to a node’s own configuration files. Thus, the files pointed to in the table below refer to different files on each node of the cluster (see the example after the table).

local

nodes/<LOCAL_HOST_NAME>

lxc

nodes/<LOCAL_HOST_NAME>/lxc/

openvz

nodes/<LOCAL_HOST_NAME>/openvz/ (deprecated, removed soon)

qemu-server

nodes/<LOCAL_HOST_NAME>/qemu-server/
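On a given node, the link targets can be checked directly; the hostname pve1 below is only a hypothetical example:

readlink /etc/pve/local          # e.g. nodes/pve1
readlink /etc/pve/qemu-server    # e.g. nodes/pve1/qemu-server/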

Special status files for debugging (JSON)

.version

File versions (to detect file modifications)

.members

Info about cluster members

.vmlist

List of all VMs

.clusterlog

Cluster log (last 50 entries)

.rrd

RRD data (most recent entries)
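These status files are plain JSON and can simply be read, for example:

cat /etc/pve/.members
cat /etc/pve/.vmlist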

Enable/Disable debugging

You can enable verbose syslog messages with:

echo "1" >/etc/pve/.debug

And disable verbose syslog messages with:

echo "0" >/etc/pve/.debug

Recovery

If you have major problems with your Proxmox VE host, for example hardware issues, it could be helpful to copy the pmxcfs database file /var/lib/pve-cluster/config.db, and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the pve-cluster service and replace the config.db file (required permissions 0600). Following this, adapt /etc/hostname and /etc/hosts according to the lost Proxmox VE host, then reboot and check (and don’t forget your VM/CT data).
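A condensed sketch of these steps (host names and the backup location /root/config.db.bak are assumptions; adapt them to your setup, and make sure no cluster services are running on the new host yet):

systemctl stop pve-cluster
cp /root/config.db.bak /var/lib/pve-cluster/config.db
chown root:root /var/lib/pve-cluster/config.db
chmod 0600 /var/lib/pve-cluster/config.db
# then adapt /etc/hostname and /etc/hosts, reboot, and verify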

Remove Cluster Configuration

The recommended way is to reinstall the node after you remove it from your cluster. This ensures that all secret cluster/ssh keys and any shared configuration data is destroyed.

In some cases, you might prefer to put a node back to local mode without reinstalling, which is described in Separate A Node Without Reinstalling.
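For orientation only, the procedure referenced there amounts to roughly the following on the node being separated; treat this as a sketch and consult the linked documentation for the authoritative, current steps, since a mistake here takes a node out of the cluster:

systemctl stop pve-cluster corosync
pmxcfs -l                      # restart the cluster file system in local mode
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster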

Recovering/Moving Guests from Failed Nodes

For the guest configuration files in nodes/<NAME>/qemu-server/ (VMs) and nodes/<NAME>/lxc/ (containers), Proxmox VE sees the containing node <NAME> as the owner of the respective guest. This concept enables the usage of local locks instead of expensive cluster-wide locks for preventing concurrent guest configuration changes.

As a consequence, if the owning node of a guest fails (for example, due to a power outage, fencing event, etc.), a regular migration is not possible (even if all the disks are located on shared storage), because such a local lock on the (offline) owning node is unobtainable. This is not a problem for HA-managed guests, as Proxmox VE’s High Availability stack includes the necessary (cluster-wide) locking and watchdog functionality to ensure correct and automatic recovery of guests from fenced nodes.

If a non-HA-managed guest has only shared disks (and no other local resources which are only available on the failed node), a manual recovery is possible by simply moving the guest configuration file from the failed node’s directory in /etc/pve/ to an online node’s directory (which changes the logical owner or location of the guest).

For example, recovering the VM with ID 100 from an offline node1 to another node node2 works by running the following command as root on any member node of the cluster:

mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/
Warning: Before manually recovering a guest like this, make absolutely sure that the failed source node is really powered off/fenced. Otherwise Proxmox VE’s locking principles are violated by the mv command, which can have unexpected consequences.
Warning: Guests with local disks (or other local resources which are only available on the offline node) are not recoverable like this. Either wait for the failed node to rejoin the cluster or restore such guests from backups.
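The same manual move works for containers, using the lxc/ directory shown in the file table above; for example, recovering a hypothetical container 101 from the offline node1 to node2:

mv /etc/pve/nodes/node1/lxc/101.conf /etc/pve/nodes/node2/lxc/

The warnings above about fencing and local resources apply in exactly the same way.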