[pve-devel] [PATCH docs 1/2] Use correct xref: syntax and add pvecm prefix

Stefan Reiter s.reiter at proxmox.com
Mon Jul 8 18:26:33 CEST 2019


Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
---

Hope it's the correct style now.

I decided to put this into its own commit, since it could technically be applied
to the docs for pve 5 as well (although with little visual effect).
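
For reference, the pattern applied throughout is simply: rename the plain anchors
to carry a pvecm_ prefix and switch the angle-bracket references to the explicit
xref: macro. The snippet below is only an illustration (using the corosync
addresses anchor from one of the hunks further down), not an extra hunk:

----
// before
[[corosync-addresses]]
...
see <<corosync-addresses,Ring Address Types>>

// after
[[pvecm_corosync_addresses]]
...
see xref:pvecm_corosync_addresses[Ring Address Types]
----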

 pvecm.adoc | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 05756ca..1c0b9e7 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -150,7 +150,7 @@ Login via `ssh` to the node you want to add.
 ----
 
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
-An IP address is recommended (see <<corosync-addresses,Ring Address Types>>).
+An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address Types]).
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
@@ -212,7 +212,7 @@ Membership information
          4          1 hp4
 ----
 
-[[adding-nodes-with-separated-cluster-network]]
+[[pvecm_adding_nodes_with_separated_cluster_network]]
 Adding Nodes With Separated Cluster Network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -428,7 +428,7 @@ part is done by corosync, an implementation of a high performance low overhead
 high availability development toolkit. It serves our decentralized
 configuration file system (`pmxcfs`).
 
-[[cluster-network-requirements]]
+[[pvecm_cluster_network_requirements]]
 Network Requirements
 ~~~~~~~~~~~~~~~~~~~~
 This needs a reliable network with latencies under 2 milliseconds (LAN
@@ -486,7 +486,7 @@ Setting Up A New Network
 
 First, you have to set up a new network interface. It should be on a physically
 separate network. Ensure that your network fulfills the
-<<cluster-network-requirements,cluster network requirements>>.
+xref:pvecm_cluster_network_requirements[cluster network requirements].
 
 Separate On Cluster Creation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -510,9 +510,9 @@ systemctl status corosync
 ----
 
 Afterwards, proceed as described in the section to
-<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.
+xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
 
-[[separate-cluster-net-after-creation]]
+[[pvecm_separate_cluster_net_after_creation]]
 Separate After Cluster Creation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -521,7 +521,7 @@ its communication to another network, without rebuilding the whole cluster.
 This change may lead to short durations of quorum loss in the cluster, as nodes
 have to restart corosync and come up one after the other on the new network.
 
-Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
+Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
 Then open it and you should see a file similar to:
 
 ----
@@ -579,7 +579,7 @@ you do not see them already. Those *must* match the node name.
 Then replace the addresses in the 'ring0_addr' properties with the new
 addresses. You may use plain IP addresses or hostnames here. If you use
 hostnames, ensure that they are resolvable from all nodes. (see also
-<<corosync-addresses,Ring Address Types>>)
+xref:pvecm_corosync_addresses[Ring Address Types])
 
 In my example I want to switch my cluster communication to the 10.10.10.1/25
 network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
@@ -640,7 +640,7 @@ totem {
 ----
 
 Now, after a final check that all changed information is correct, we save it
-and see again the <<edit-corosync-conf,edit corosync.conf file>> section to
+and see again the xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to
 learn how to bring it into effect.
 
 As our change cannot be enforced live from corosync, we have to do a restart.
@@ -661,7 +661,7 @@ systemctl status corosync
 If corosync runs correctly again, restart corosync on all other nodes as well.
 They will then join the cluster membership one by one on the new network.
 
-[[corosync-addresses]]
+[[pvecm_corosync_addresses]]
 Corosync addresses
 ~~~~~~~~~~~~~~~~~~
 
@@ -708,7 +708,7 @@ RRP On Cluster Creation
 The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
 'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
 
-NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
+NOTE: See the xref:pvecm_corosync_conf_glossary[glossary] if you do not know what each parameter means.
 
 So if you have two networks, one on the 10.10.10.1/24 and the other on the
 10.10.20.1/24 subnet you would execute:
@@ -723,7 +723,7 @@ RRP On Existing Clusters
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
 You will take similar steps as described in
-<<separate-cluster-net-after-creation,separating the cluster network>> to
+xref:pvecm_separate_cluster_net_after_creation[separating the cluster network] to
 enable RRP on an already running cluster. The only difference is that you
 will add `ring1` and use it instead of `ring0`.
 
@@ -781,7 +781,7 @@ nodelist {
 ----
 
 Bring it in effect like described in the
-<<edit-corosync-conf,edit the corosync.conf file>> section.
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file] section.
 
 This is a change which cannot take effect live and needs at least a restart
 of corosync. A restart of the whole cluster is recommended.
@@ -979,7 +979,7 @@ For node membership you should always use the `pvecm` tool provided by {pve}.
 You may have to edit the configuration file manually for other changes.
 Here are a few best practice tips for doing this.
 
-[[edit-corosync-conf]]
+[[pvecm_edit_corosync_conf]]
 Edit corosync.conf
 ~~~~~~~~~~~~~~~~~~
 
@@ -1075,7 +1075,7 @@ the same content to avoid split brains. If you are not sure what went wrong
 it's best to ask the Proxmox Community to help you.
 
 
-[[corosync-conf-glossary]]
+[[pvecm_corosync_conf_glossary]]
 Corosync Configuration Glossary
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.20.1