User:Grin/Ceph Object Gateway
I mostly followed the instructions from the main Ceph site, but they were somewhat confusing: in some places they refer to installing Apache and FCGI, while in others they mention that Ceph uses "Civetweb". There is also mention of using ceph-deploy, but I knew that Proxmox uses its own pveceph tools. So, not wanting to affect my main Proxmox nodes too much, my first cut was to install a dual-NIC VM with one interface on the same VLAN as my storage network and the other on the PVE VLAN. That went well enough, and only required one additional package, so I decided to go ahead and install directly on the Proxmox nodes.
My Proxmox environment consists of 3 nodes: pve1, pve2, and pve3, and I wanted to run the Gateway on all three nodes for High Availability (I'm running HAProxy in front of these for SSL termination, HA, and load balancing).
I ran the following commands from the pve1 node, but they could have been run from any of the nodes.
First I created the keyring to store the keys:
root@pve1:~# ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
Next, I generated the keys and added them to the keyring:
root@pve1:~# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.pve1 --gen-key
root@pve1:~# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.pve2 --gen-key
root@pve1:~# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.pve3 --gen-key
And then I added the proper capabilities:
root@pve1:~# ceph-authtool -n client.radosgw.pve1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
root@pve1:~# ceph-authtool -n client.radosgw.pve2 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
root@pve1:~# ceph-authtool -n client.radosgw.pve3 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Finally, I added the keys to the cluster:
root@pve1:~# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.pve1 -i /etc/ceph/ceph.client.radosgw.keyring
root@pve1:~# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.pve2 -i /etc/ceph/ceph.client.radosgw.keyring
root@pve1:~# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.pve3 -i /etc/ceph/ceph.client.radosgw.keyring
I also copied the keyring into the Proxmox ClusterFS so that it'd be available on all nodes. Note: I might have been able to generate the keyring directly in the /etc/pve/priv folder and saved myself this step (see the sketch after the command below).
root@pve1:~# cp /etc/ceph/ceph.client.radosgw.keyring /etc/pve/priv
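For reference, that alternative would presumably look something like this (I haven't tested it; the ceph-authtool and ceph auth commands above would then point at the /etc/pve/priv path instead):
root@pve1:~# ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring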
Add the following lines to /etc/ceph/ceph.conf:
[client.radosgw.pve1]
host = pve1
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = s3.example.net

[client.radosgw.pve2]
host = pve2
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = s3.example.net

[client.radosgw.pve3]
host = pve3
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = s3.example.net
Here again, I think there's room for optimization. It's my understanding that multiple [client] sections can be combined, so everything below the host line could potentially be merged into a single section to eliminate repetition.
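For example, the shared values could presumably move into a generic [client] section, leaving only the per-node host lines. I haven't tested this, and note that a bare [client] section applies to every Ceph client on the node, not just the gateways, so it may not be the right trade-off:
[client]
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = s3.example.net

[client.radosgw.pve1]
host = pve1

[client.radosgw.pve2]
host = pve2

[client.radosgw.pve3]
host = pve3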
At this point it was time to log into each of the nodes and add the proper packages:
root@pve1:~# apt install radosgw
Then create the systemd service symlinks on each node:
root@pve1:~# mkdir /etc/systemd/system/ceph-radosgw.target.wants
root@pve1:~# ln -s /lib/systemd/system/ceph-radosgw@.service /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@radosgw.radosgw.pve1
root@pve1:~# systemctl daemon-reload
root@pve2:~# mkdir /etc/systemd/system/ceph-radosgw.target.wants
root@pve2:~# ln -s /lib/systemd/system/ceph-radosgw@.service /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@radosgw.radosgw.pve2
root@pve2:~# systemctl daemon-reload
root@pve3:~# mkdir /etc/systemd/system/ceph-radosgw.target.wants
root@pve3:~# ln -s /lib/systemd/system/ceph-radosgw@.service /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@radosgw.radosgw.pve3
root@pve3:~# systemctl daemon-reload
And then fire up the gateway per node:
root@pve1:~# systemctl start ceph-radosgw@radosgw.pve1
root@pve2:~# systemctl start ceph-radosgw@radosgw.pve2
root@pve3:~# systemctl start ceph-radosgw@radosgw.pve3
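If you want to confirm a gateway came up cleanly before hitting it over HTTP, checking the unit status is a quick way to do it, e.g. on pve1:
root@pve1:~# systemctl status ceph-radosgw@radosgw.pve1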
If all goes well, RADOSGW will create some default pools for you (see below), and you should be able to visit any of your nodes on port 7480 (e.g. http://pve1.example.net:7480) and see something like this:
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName/>
  </Owner>
  <Buckets/>
</ListAllMyBucketsResult>
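The same check works from the command line, for example:
root@pve1:~# curl http://pve1.example.net:7480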
If not, you can follow your logs to troubleshoot:
root@pve1:~# tail -f /var/log/ceph/client.radosgw.pve1.log
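If the log isn't telling you much, raising the gateway's verbosity can help: one option (not something I needed here) is to add a line like this to the matching [client.radosgw.pveN] section in /etc/ceph/ceph.conf and restart that gateway (20 is very chatty):
debug rgw = 20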
I was getting warnings on my Ceph cluster that the application hadn't been enabled on pools, so I ran the following:
root@pve1:~# ceph osd pool application enable .rgw.root rgw
root@pve1:~# ceph osd pool application enable default.rgw.control rgw
root@pve1:~# ceph osd pool application enable default.rgw.data.root rgw
root@pve1:~# ceph osd pool application enable default.rgw.gc rgw
root@pve1:~# ceph osd pool application enable default.rgw.log rgw
root@pve1:~# ceph osd pool application enable default.rgw.users.uid rgw
root@pve1:~# ceph osd pool application enable default.rgw.users.email rgw
root@pve1:~# ceph osd pool application enable default.rgw.users.keys rgw
root@pve1:~# ceph osd pool application enable default.rgw.buckets.index rgw
root@pve1:~# ceph osd pool application enable default.rgw.buckets.data rgw
root@pve1:~# ceph osd pool application enable default.rgw.lc rgw
Note: some of these pools only showed up when I first needed them (e.g. when creating a user), so I may need to go back and rerun this command for any newly created pools.
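One way to avoid chasing them one by one (untested here, so check which pools the grep matches before running it) would be to loop over the RGW pools:
root@pve1:~# for pool in $(ceph osd pool ls | grep -E '^(\.rgw\.root|default\.rgw\.)'); do ceph osd pool application enable "$pool" rgw; done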
Now you can set up your first user:
root@pve1:~# radosgw-admin user create --uid=testuser --display-name="Test User" --email=test.user@example.net
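The create command prints the generated access and secret keys in its JSON output; if you need to look them up again later, you can use:
root@pve1:~# radosgw-admin user info --uid=testuser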
That's it for configuration on the servers. If you plan to expose these through HAProxy as I did, don't forget to add a wildcard DNS entry for your domain (*.s3.example.net) so that bucket hostnames will resolve. I also ended up purchasing a wildcard SSL certificate, which I loaded onto HAProxy for SSL termination.
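For what it's worth, the HAProxy side can be as simple as something like this (a minimal sketch, not my exact config; the certificate path and backend/frontend names are placeholders):
frontend s3_frontend
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.s3.example.net.pem
    default_backend s3_gateways

backend s3_gateways
    balance roundrobin
    option httpchk GET /
    server pve1 pve1.example.net:7480 check
    server pve2 pve2.example.net:7480 check
    server pve3 pve3.example.net:7480 check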