MicroCeph

Use MicroCeph if you need an external Ceph cluster to be consumed by Rook.

Overview#

Ceph provides block, object, and file storage. It supports both replicated and erasure coded storage.

We'll work with the CephCluster and CephBlockPool objects, the rook-ceph.rbd.csi.ceph.com provisioner pods, the ceph-rbd storage class, and the Ceph operator pod.
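These objects can be inspected from MicroK8s once the rook-ceph addon is enabled. A minimal sketch, assuming MicroK8s' bundled kubectl and that the external cluster's resources live in the rook-ceph-external namespace (matching the clusterID shown in Option 1 below); adjust the namespace for your deployment.

# Rook custom resources across all namespaces
microk8s kubectl get cephclusters,cephblockpools -A
# operator and csi-rbdplugin provisioner pods (namespace is an assumption)
microk8s kubectl -n rook-ceph-external get pods
# the RBD-backed storage class
microk8s kubectl describe storageclass ceph-rbd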

Option 1

Import an external MicroCeph cluster.

Components:

  • MicroCeph cluster.
  • MicroK8s cluster with rook-ceph addon connected to the external Ceph cluster.
  • Ceph cluster with pools e.g.
    pool 2 'microk8s-rbd0' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 31 lfor 0/0/29 flags hashpspool stripe_width 0 application rbd
    
  • Kubernetes StorageClass objects with parameter pool=mypool e.g.
    Name:                  ceph-rbd
    Provisioner:           rook-ceph.rbd.csi.ceph.com
    Parameters:            clusterID=rook-ceph-external,pool=microk8s-rbd0 ...
    
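To wire MicroK8s to the external MicroCeph cluster, recent MicroK8s releases ship a helper alongside the rook-ceph addon. This is a sketch, assuming the connect-external-ceph helper exists in your MicroK8s version; check the addon documentation if the command differs.

# on the MicroK8s side
sudo microk8s enable rook-ceph
# import the external MicroCeph cluster so Rook can consume it
sudo microk8s connect-external-ceph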

Option 2

Deploy Ceph on MicroK8s.

Components:

  • MicroK8s cluster with rook-ceph addon.
  • Deploy Ceph on the MicroK8s cluster using storage from the k8s nodes.
  • Not recommended for clusters whose virtual disks are backed by loop devices. The provision container of the rook-ceph-osd-prepare pod on each node will not use them, and pool creation will fail with "skipping OSD configuration as no devices matched the storage settings for this node".
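If OSD creation fails this way, the message appears in the provision container's logs. A sketch for checking them, assuming Rook's usual rook-ceph namespace and app=rook-ceph-osd-prepare label; adjust both for your deployment.

# inspect why no OSDs were prepared on the nodes
microk8s kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare -c provision --tail=100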

Prerequisites#

  1. Allocate disks. For a MicroCeph test cluster with only 1 node you can create virtual disks as loop devices (a special block device that maps to a file).
    On each node
    for l in a b c; do
        loop_file="$(sudo mktemp -p /mnt XXXX.img)"
        sudo truncate -s 60G "${loop_file}"
        loop_dev="$(sudo losetup --show -f "${loop_file}")"
        # the block-devices plug doesn't allow accessing /dev/loopX
        # devices so we make those same devices available under alternate
        # names (/dev/sdiY)
        minor="${loop_dev##/dev/loop}"
        sudo mknod -m 0660 "/dev/sdi${l}" b 7 "${minor}"
    done
    
    Verify
    On each node
    lsblk
    ls -al /dev/sdi*
    
  2. Review storage concepts like replication vs erasure coding.
    Erasure Coding

    Data chunks (k)   Coding chunks (m)   Total storage   Losses tolerated (m)   OSDs required (k+m)
    2                 1                   1.5x            1                      3
    2                 2                   2x              2                      4
    4                 2                   1.5x            2                      6
    3                 3                   2x              3                      6
    16                4                   1.25x           4                      20
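As a concrete example, an erasure-coded pool matching the k=2, m=1 row above can be created through an erasure-code profile. A sketch only; the profile and pool names are illustrative, and crush-failure-domain=osd is suitable only for small test clusters.

sudo ceph osd erasure-code-profile set ec-21 k=2 m=1 crush-failure-domain=osd
sudo ceph osd pool create ec-pool erasure ec-21
sudo ceph osd pool ls detail   # the new pool should show the erasure profile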

Setup#

  1. Install MicroCeph
    On all nodes
    sudo snap install microceph --channel=latest/edge
    snap info microceph # if you want to see instructions
    
    On the control plane node
    sudo microceph cluster bootstrap
    
  2. Join all nodes to the cluster
    On the control plane, for each node you want to add
    # prints a join token for the node; save it for the join step below
    sudo microceph cluster add $NODE_HOSTNAME
    
    It's best to have an odd number of monitors because Ceph needs a majority of monitors to be running, e.g. 2 out of 3.
    On each node you want to add to the cluster
    sudo microceph cluster join $JOIN_TOKEN
    
  3. Add the disks as OSDs. MicroCeph supports loop devices or disks. At the time of this writing it does not support partitions but that will change.
    On each node
    # After adding physical or virtual disks to your node/VM
    # for each disk (whatever device names your disks have)
    sudo microceph disk add /dev/sdia --wipe 
    sudo microceph disk add /dev/sdib --wipe
    sudo microceph disk add /dev/sdic --wipe
    
  4. Verify
    On the control plane (or any ceph node, really)
    sudo ceph status # detailed status
    # HEALTH_OK with all OSDs showing
    
    Learn more about pools.

You can also create and initialize other Ceph pools.
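For example, a replicated pool tagged for RBD could be created like this; the pool name microk8s-rbd1 is illustrative (rbd pool init is the usual alternative to the application enable step if the rbd CLI is available on the node).

# on any Ceph node
sudo ceph osd pool create microk8s-rbd1 32
sudo ceph osd pool application enable microk8s-rbd1 rbd
sudo ceph osd pool ls detail | grep microk8s-rbd1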

Usage#

# Ceph
sudo microceph status # deployment summary
sudo microceph disk list
sudo ceph osd metadata {osd-id} | grep osd_objectstore # check that it's a bluestore OSD
sudo ceph osd lspools
sudo ceph osd pool ls
sudo ceph osd pool ls detail -f json-pretty # list the pools with all details
sudo ceph osd pool stats # obtain stats from all pools, or from a specified pool
sudo ceph osd pool delete {pool-name} {pool-name} --yes-i-really-really-mean-it
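
To confirm that provisioning works end to end, a throwaway PVC against the ceph-rbd storage class is a quick check. A sketch, assuming the storage class name from Option 1; the PVC name test-rbd-pvc is illustrative.

# from the MicroK8s side: request a small RBD-backed volume
microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
EOF
microk8s kubectl get pvc test-rbd-pvc   # should become Bound once the CSI driver creates the RBD image
microk8s kubectl delete pvc test-rbd-pvc   # clean up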

Troubleshooting#

  • View logs
    sudo ls -al /var/snap/microceph/common/logs/
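    A sketch of reading them; snap logs works for any snap, while the per-daemon file names (ceph-osd.0.log and similar) are an assumption based on Ceph's usual naming.
    # follow the snap services' output
    sudo snap logs microceph -f
    # tail an individual daemon log (file names vary with your daemons)
    sudo tail -f /var/snap/microceph/common/logs/ceph-osd.0.log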