Deploy Ceph RADOS Gateway in Kubernetes
Pointing to a separate PVE Ceph Cluster
This guide outlines how to deploy a RADOS Gateway to enable an S3 API for a Ceph pool. I use this to provide S3 storage to my Kubernetes cluster, with the Ceph cluster hosted by Proxmox VE. Many concepts are similar to the previous guide, Enable Ceph CSI for PVE Ceph; some steps refer back to that guide.
This guide makes the following assumptions:
* You are already running Ceph via PVE.
* You are using the PVE UI for Ceph actions where possible.
* You are deploying the RADOS Gateway to the object-store namespace in K8s.
* Flux is used to deploy to K8s using SOPS for secret encryption.
1. Ceph Pool & User Creation
These steps create the RGW realm, the required Ceph pools with appropriate replication, and a CephX user for the gateway.
- Create the RGW Realm on a PVE host from the shell. The endpoint used below is the in-cluster DNS name of the Service deployed in section 3; adjust it if you name the Service differently. Verification commands are given at the end of this section.
  - Create the realm: `radosgw-admin realm create --rgw-realm=default --default`
  - Create the zonegroup: `radosgw-admin zonegroup create --rgw-zonegroup=default --master --default --endpoints=http://ceph-rgw-svc.object-store.svc.cluster.local:8080`
  - Create the zone: `radosgw-admin zone create --rgw-zone=default --master --default`
  - Set the zone endpoint: `radosgw-admin zone modify --rgw-zone=default --endpoints=http://ceph-rgw-svc.object-store.svc.cluster.local:8080`
  - Ensure the zone is included in the zonegroup: `radosgw-admin zonegroup add --rgw-zonegroup=default --rgw-zone=default`
  - Update and commit the period: `radosgw-admin period update --commit`
  - Set the default realm: `radosgw-admin realm default --rgw-realm=default`
- The above commands will have created the following new pools. You do not need to create these manually.

  | Pool Name | Purpose |
  |---|---|
  | .rgw.root | Realm, zonegroup and zone configuration |
  | default.rgw.log | RGW log data |
  | default.rgw.control | Internal control/notification objects |
  | default.rgw.meta | User and bucket metadata |
- Create the two required pools for index and data in the PVE UI:

  | Pool Name | PG Autoscaler | Size | Min Size | Crush Rule |
  |---|---|---|---|---|
  | default.rgw.buckets.index | On | 3 | 2 | replicated_rule |
  | default.rgw.buckets.data | On | 3 | 2 | replicated_rule |
- Enable the RGW application: when a pool is created via PVE it is registered as an RBD pool by default, so run these commands to change both pools to RGW.
  - Disable RBD on the data pool: `ceph osd pool application disable default.rgw.buckets.data rbd --yes-i-really-mean-it`
  - Enable RGW: `ceph osd pool application enable default.rgw.buckets.data rgw`
  - Check with: `ceph osd pool application get default.rgw.buckets.data`
  - Repeat for the index pool: `ceph osd pool application disable default.rgw.buckets.index rbd --yes-i-really-mean-it`
  - Enable RGW: `ceph osd pool application enable default.rgw.buckets.index rgw`
  - Check with: `ceph osd pool application get default.rgw.buckets.index`
- Create a user for the RADOS Gateway:

  ```bash
  ceph auth get-or-create client.rgw.k8s.svc \
    mon 'allow r' \
    osd 'allow rwx pool=default.rgw.buckets.data, allow rwx pool=default.rgw.buckets.index, allow rwx pool=.rgw.root, allow rwx pool=default.rgw.meta, allow rwx pool=default.rgw.log, allow rwx pool=default.rgw.control' \
    -o /etc/ceph/ceph.client.rgw.k8s.svc.keyring
  ```
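Before moving on to Kubernetes, it is worth sanity-checking the result from the PVE shell. These are standard Ceph/RGW admin commands; adjust the names if you deviated from the defaults above:

```bash
# Confirm the realm/zonegroup/zone configuration was committed
radosgw-admin period get

# Confirm both bucket pools are now tagged with the rgw application
ceph osd pool application get default.rgw.buckets.data
ceph osd pool application get default.rgw.buckets.index

# Confirm the CephX user exists with the expected caps
ceph auth get client.rgw.k8s.svc
```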
2. Register Kubernetes Secrets
- Retrieve the following files from the Ceph host; they are required for the Kubernetes Secrets. Store them temporarily on your workstation.

  | File | Path | Purpose |
  |---|---|---|
  | ceph.conf | /etc/ceph/ceph.conf | Location of Ceph Monitors |
  | Keyring | /etc/ceph/ceph.client.rgw.k8s.svc.keyring | Auth token |
- CRITICAL: A newline must be present at the end of each file.
- CRITICAL: Remove all whitespace from the keyring file except newlines (see the example keyring layout at the end of this section).
- Create Secret manifests for deployment to K8s:

  ```bash
  kubectl create secret generic ceph-config \
    --namespace=object-store \
    --from-file=ceph.conf=./conf \
    --dry-run=client -o yaml > ceph-config-secret.yaml
  ```

  ```bash
  kubectl create secret generic ceph-keyring \
    --namespace=object-store \
    --from-file=keyring=./keyring \
    --dry-run=client -o yaml > ceph-keyring-secret.yaml
  ```
- Encrypt the secret manifests using sops:

  ```bash
  sops encrypt --in-place ./ceph-config-secret.yaml
  sops encrypt --in-place ./ceph-keyring-secret.yaml
  ```
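To make the two CRITICAL notes above concrete, this is roughly what the cleaned-up keyring file should look like once the leading whitespace is stripped: a single section header, the key line, and a trailing newline. The key value here is a placeholder:

```ini
[client.rgw.k8s.svc]
key = AQ...replace-with-your-actual-key...==
```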
3. Kubernetes Manifests
These should be treated as examples; read through them and make sure they match your environment.
Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: object-store
```
Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ceph-rgw-svc
  namespace: object-store
  labels:
    app.kubernetes.io/name: ceph-rgw
    app.kubernetes.io/component: gateway
spec:
  # The ClusterIP DNS name used for the RGW initialization:
  # http://ceph-rgw-svc.object-store.svc.cluster.local:8080
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http-api
  selector:
    app: ceph-rgw
  type: ClusterIP
```
Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ceph-rgw
  namespace: object-store
  labels:
    app.kubernetes.io/name: ceph-rgw
    app.kubernetes.io/component: gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ceph-rgw
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ceph-rgw
    spec:
      # CRUCIAL: Enforce Pods to be on separate nodes for HA
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ceph-rgw
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: rgw
          # Use the same Major.Minor as your PVE hosts
          image: quay.io/ceph/ceph:v18.2
          # Arguments to start the RGW process on port 8080
          args:
            - "radosgw"
            - "-f"                               # Run in foreground
            - "--conf=/etc/ceph/ceph.conf"       # Explicitly use the mounted config
            - "--name=client.rgw.k8s.svc"        # The exact CephX user name we created
            - "--rgw-frontends=beast port=8080"  # REQUIRED: Beast frontend for Ceph 18+
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 2000m
              memory: 2Gi
          ports:
            - containerPort: 8080
              name: rgw-http
          # Ensure the Pod does not run as root unnecessarily
          securityContext:
            runAsUser: 167    # A common non-root user ID for Ceph containers
            runAsGroup: 167
            allowPrivilegeEscalation: false
          volumeMounts:
            # Mount ceph.conf and the keyring directly into /etc/ceph/
            - name: ceph-config-vol
              mountPath: /etc/ceph/
      volumes:
        # Project both Secrets into a single /etc/ceph/ directory so the keyring
        # is found at the default path for client.rgw.k8s.svc
        - name: ceph-config-vol
          projected:
            defaultMode: 0444   # Global read for user 167
            sources:
              - secret:
                  name: ceph-config
                  items:
                    - key: ceph.conf
                      path: ceph.conf
              - secret:
                  name: ceph-keyring
                  items:
                    - key: keyring
                      path: ceph.client.rgw.k8s.svc.keyring
```
Commit these manifests to your Flux repository so Flux reconciles them into the cluster.
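Once Flux has reconciled the manifests and both RGW Pods are Running, a quick in-cluster request confirms the gateway is answering. This is just a convenience check; the curlimages/curl image is an arbitrary choice, any image with curl will do:

```bash
# An anonymous request to the RGW Service should return the S3 ListAllMyBuckets XML
kubectl run rgw-check --rm -it --restart=Never -n object-store \
  --image=curlimages/curl --command -- \
  curl -s http://ceph-rgw-svc.object-store.svc.cluster.local:8080
```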
4. RGW Admin Utility
Do not commit this to Flux; run it as and when required to manage RGW users and buckets.
Pod Manifest
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rgw-admin-utility
  namespace: object-store
spec:
  restartPolicy: Never
  containers:
    - name: rgw-admin-cli
      # Use the same image as your RGW deployment for consistency
      image: quay.io/ceph/ceph:v18.2
      # Use the /bin/bash entrypoint to allow manual command execution
      command: ["/bin/bash", "-c", "sleep 3600"]
      # Environment variable to explicitly define the CephX user for CLI tools
      env:
        - name: CEPH_ARGS
          value: "--name client.rgw.k8s.svc --keyring /etc/ceph/ceph.client.rgw.k8s.svc.keyring"
      volumeMounts:
        # Mount ceph.conf and the keyring directly into /etc/ceph/
        - name: ceph-config-vol
          mountPath: /etc/ceph/
  volumes:
    # Project both Secrets into a single /etc/ceph/ directory, matching the Deployment
    - name: ceph-config-vol
      projected:
        defaultMode: 0444   # Global read for user 167
        sources:
          - secret:
              name: ceph-config
              items:
                - key: ceph.conf
                  path: ceph.conf
          - secret:
              name: ceph-keyring
              items:
                - key: keyring
                  path: ceph.client.rgw.k8s.svc.keyring
```
Managing RGW
- Deploy the Pod: `kubectl apply -f {filepath}`
- Exec into the Pod: `kubectl exec -it rgw-admin-utility -n object-store -- bash`
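Once inside the pod, it can help to confirm that the mounted ceph.conf and keyring actually reach the cluster before creating anything. The CEPH_ARGS variable set in the Pod manifest makes the CLI use the client.rgw.k8s.svc identity, and the mon 'allow r' cap granted earlier is enough for a status call:

```bash
# Should print the cluster status; failures here point at ceph.conf, the keyring, or network reachability
ceph -s
```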
Create User
```bash
radosgw-admin user create --uid={uid} --display-name={display-name} --gen-key --gen-secret
```

CRITICAL: Copy the JSON output and save the access_key and secret_key.
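If the keys are not saved at creation time, they can be re-displayed later from the same admin pod; this is standard radosgw-admin behaviour rather than anything specific to this setup:

```bash
# Prints the user's metadata again, including access_key and secret_key
radosgw-admin user info --uid={uid}
```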
Create Bucket
```bash
radosgw-admin bucket create --bucket={bucket-name} --uid={owner-uid}
```
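To confirm the bucket exists and is owned by the expected user, still from inside the admin pod:

```bash
# List all buckets known to this gateway
radosgw-admin bucket list

# Show stats for the new bucket, including its owner
radosgw-admin bucket stats --bucket={bucket-name}
```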
Exit & Cleanup
```bash
exit
kubectl delete pod rgw-admin-utility -n object-store
```
5. Generate Secret for Client Access
Deploy this in the namespace of the application requiring S3 API access.
```bash
kubectl create secret generic s3-credentials \
  --namespace={application-namespace} \
  --from-literal=S3_ACCESS_KEY={access-key-from-user-creation} \
  --from-literal=S3_SECRET_KEY={secret-key-from-user-creation} \
  --dry-run=client -o yaml > s3-secret.yaml
```
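As with the earlier secrets, encrypt the manifest with sops before committing it to the Flux repository:

```bash
sops encrypt --in-place ./s3-secret.yaml
```

The application can then consume the keys as environment variables and point at `http://ceph-rgw-svc.object-store.svc.cluster.local:8080` as its S3 endpoint. The fragment below is a hypothetical example container spec, not part of this guide's manifests:

```yaml
env:
  - name: S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: S3_ACCESS_KEY
  - name: S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: S3_SECRET_KEY
```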