diff --git a/offline/docs_ubuntu_22.04.md b/offline/docs_ubuntu_22.04.md
index b6163825b..7ed125d04 100644
--- a/offline/docs_ubuntu_22.04.md
+++ b/offline/docs_ubuntu_22.04.md
@@ -514,6 +514,9 @@ ufw allow 25672/tcp;
 '
 ```
 
+### Deploy RabbitMQ cluster
+Follow the steps in [offline/rabbitmq_setup.md](./rabbitmq_setup.md) to create a RabbitMQ cluster suited to your setup.
+
 ### Preparation for Federation
 For enabling Federation, we need to have RabbitMQ in place. Please follow the instructions in [offline/federation_preparation.md](./federation_preparation.md) for setting up RabbitMQ.
 
diff --git a/offline/federation_preparation.md b/offline/federation_preparation.md
index 05ed3cb30..65ee74287 100644
--- a/offline/federation_preparation.md
+++ b/offline/federation_preparation.md
@@ -29,86 +29,6 @@ Adding remote instances to federate with happens in the `brig` subsection in [va
 ```
 Multiple domains with individual search policies can be added.
 
-## RabbitMQ
-There are two methods to deploy the RabbitMQ cluster:
-
-### Method 1: Install RabbitMQ inside kubernetes cluster with the help of helm chart
-
-To install the RabbitMQ service, first copy the value and secret files:
-```
-cp ./values/rabbitmq/prod-values.example.yaml ./values/rabbitmq/values.yaml
-cp ./values/rabbitmq/prod-secrets.example.yaml ./values/rabbitmq/secrets.yaml
-```
-By default this will create a RabbitMQ deployment with ephemeral storage. To use the local persistence storage of Kubernetes nodes, please refer to the related documentation in [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md).
-
-Now, update the `./values/rabbitmq/values.yaml` and `./values/rabbitmq/secrets.yaml` with correct values as needed.
-
-Deploy the `rabbitmq` helm chart:
-```
-d helm upgrade --install rabbitmq ./charts/rabbitmq --values ./values/rabbitmq/values.yaml --values ./values/rabbitmq/secrets.yaml
-```
-
-### Method 2: Install RabbitMQ outside of the Kubernetes cluster with an Ansible playbook
-
-Add the nodes on which you want to run rabbitmq to the `[rmq-cluster]` group in the `ansible/inventory/offline/hosts.ini` file. Also, update the `ansible/roles/rabbitmq-cluster/defaults/main.yml` file with the correct configurations for your environment.
-
-If you need RabbitMQ to listen on a different interface than the default gateway, set `rabbitmq_network_interface`
-
-You should have following entries in the `/ansible/inventory/offline/hosts.ini` file. For example:
-```
-[rmq-cluster:vars]
-rabbitmq_network_interface = enp1s0
-
-[rmq-cluster]
-ansnode1
-ansnode2
-ansnode3
-```
-
-
-#### Hostname Resolution
-RabbitMQ nodes address each other using a node name, a combination of a prefix and domain name, either short or fully-qualified (FQDNs). For e.g. rabbitmq@ansnode1
-
-Therefore every cluster member must be able to resolve hostnames of every other cluster member, its own hostname, as well as machines on which command line tools such as rabbitmqctl might be used.
-
-Nodes will perform hostname resolution early on node boot. In container-based environments it is important that hostname resolution is ready before the container is started.
-
-Hostname resolution can use any of the standard OS-provided methods:
-
-For e.g. DNS records
-Local host files (e.g. /etc/hosts)
-Reference - https://www.rabbitmq.com/clustering.html#cluster-formation-requirements
-
-
-For adding entries to local host file(`/etc/hosts`), run
-```
-d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/roles/rabbitmq-cluster/tasks/configure_dns.yml
-```
-
-Create the rabbitmq cluster:
-
-```
-d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/rabbitmq.yml
-```
-
-and run the following playbook to create values file for helm charts to look for RabbitMQ IP addresses -
-
-```
-d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/helm_external.yml --tags=rabbitmq-external
-```
-
-Make Kubernetes aware of where RabbitMQ external stateful service is running:
-```
-d helm install rabbitmq-external ./charts/rabbitmq-external --values ./values/rabbitmq-external/values.yaml
-```
-
-Configure wire-server to use the external RabbitMQ service:
-
-Edit the `/values/wire-server/prod-values.example.yaml` file to update the RabbitMQ host
-Under `brig` and `galley` section, you will find the `rabbitmq` config, update the host to `rabbitmq-external`, it should look like this:
-```
-rabbitmq:
-  host: rabbitmq-external
-```
+If you haven't already, refer to [offline/rabbitmq_setup.md](./rabbitmq_setup.md) for creating the RabbitMQ cluster.
diff --git a/offline/k8ssandra_setup.md b/offline/k8ssandra_setup.md
index b8812bf60..7472140b9 100644
--- a/offline/k8ssandra_setup.md
+++ b/offline/k8ssandra_setup.md
@@ -9,24 +9,7 @@ K8ssandra will need the following components to be installed in the cluster -
 - Configure minio bucket for backups
 
 ## [1] Dynamic Persistent Volume Provisioning
-If you already have a dynamic persistent volume provisioning setup, you can skip this step. If not, we will be using OpenEBS for dynamic persistent volume provisioning.
-
-Reference docs - https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation
-
-### Deploy OpenEBS
-
-```
-d helm install openebs charts/openebs --namespace openebs --create-namespace
-```
-The above helm chart is available in the offline artifact.
-
-After successful deployment of OpenEBS, you will see these storage classes:
-```
-d kubectl get sc
-NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
-openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
-openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
-```
+Refer to [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md) for dynamic persistent volume provisioning.
 
 ## [2] Install cert-manager
 cert-manager is a must requirement for k8ssandra - see https://docs.k8ssandra.io/install/local/single-cluster-helm/#deploy-cert-manager for why.
diff --git a/offline/local_persistent_storage_k8s.md b/offline/local_persistent_storage_k8s.md
index 97fdffbcf..b48a7f435 100644
--- a/offline/local_persistent_storage_k8s.md
+++ b/offline/local_persistent_storage_k8s.md
@@ -1,83 +1,23 @@
-# To Create a storage class for local persistent storage in Kubernetes
+## Dynamic Persistent Volume Provisioning
+If you already have a dynamic persistent volume provisioning setup, you can skip this step. If not, you can use OpenEBS for dynamic persistent volume provisioning.
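+
+To check whether dynamic provisioning is already available, you can list the existing storage classes first (a quick pre-check; if a storage class with a dynamic provisioner is already present, you can use it instead of OpenEBS):
+```
+d kubectl get sc
+```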
-
-#### Note: This is just an example to create a local-path storage class. For the actual usecase, you can create your own storageclass with provisioner of your choice and use in different places to deploy wire-server and other resources.
+
+Reference docs - https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation
-Create a storage class.
-
-You can find more information about the local persistent storage here: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
-Copy the following content in a file and name it sc.yaml
+### Deploy OpenEBS
 ```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: local-path
-provisioner: kubernetes.io/no-provisioner
-volumeBindingMode: WaitForFirstConsumer
-
+d helm install openebs charts/openebs --namespace openebs --create-namespace
 ```
+The above helm chart is available in the offline artifact.
-
-Create a Persistent Volume.
-
-You can find more information about the Persistent Volume here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-
-Note: The below example will create a Persistent Volume on the node kubenode1. You can change the node name as per your requirement. And also make sure that the path /data/local-path exists on the node kubenode1.
-
-Copy the following content in a file and name it pv.yaml
-
+After successful deployment of OpenEBS, you will see these storage classes:
 ```
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: local-path-pv
-spec:
-  capacity:
-    storage: 10Gi
-  accessModes:
-  - ReadWriteOnce
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: local-path
-  local:
-    path: /data/local-path
-  nodeAffinity:
-    required:
-      nodeSelectorTerms:
-      - matchExpressions:
-        - key: kubernetes.io/hostname
-          operator: In
-          values:
-          - kubenode1
+d kubectl get sc
+NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
+openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
 ```
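+
+As an illustration, a workload can then request storage through a PersistentVolumeClaim that references one of these storage classes (a minimal sketch; the claim name and size are placeholders):
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: example-pvc        # placeholder name
+spec:
+  storageClassName: openebs-hostpath
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi        # placeholder size
+```
+Note that because these storage classes use `WaitForFirstConsumer` volume binding, the volume is only provisioned once a pod actually uses the claim.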
-
-Create a Persistent Volume Claim.
-
-You can find more information about the Persistent Volume Claim here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+### Backup and Restore
-
-Copy the following content in a file and name it pvc.yaml
-
-```
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: local-path-pvc
-spec:
-  storageClassName: local-path
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 10Gi
-```
-
-Now, create the above resources using the following commands:
-
-```
-d kubectl apply -f sc.yaml
-d kubectl apply -f pv.yaml
-d kubectl apply -f pvc.yaml
-```
-
-After successfull creation, you should be able to see the resources with -
-
-```
-d kubectl get sc,pv,pvc
-```
+
+For backup and restore of the OpenEBS Local Storage, refer to the official docs at https://openebs.io/docs/user-guides/local-storage-user-guide/additional-information/backupandrestore
diff --git a/offline/rabbitmq_backup_restore.md b/offline/rabbitmq_backup_restore.md
deleted file mode 100644
index 9616d58a4..000000000
--- a/offline/rabbitmq_backup_restore.md
+++ /dev/null
@@ -1,65 +0,0 @@
-This document describes the backup and restore process for RabbitMQ deployed outside of Kubernetes.
-
-Although, this can vary based on your setup, it is also recommended to follow the official documentation here - https://www.rabbitmq.com/docs/backup
-
-## Backup
-Make sure to have the nodes on which RabbitMQ is running in the [ansible inventory file](https://github.com/wireapp/wire-server-deploy/blob/master/offline/docs_ubuntu_22.04.md#editing-the-inventory), under the `rmq-cluster` group.
-Then run the following command:
-```
-source bin/offline-env.sh
-```
-
-Replace `/path/to/backup` in the command below with the backup target path on the rabbitmq nodes.
-
-```
-d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/backup_rabbitmq.yml --extra-vars "backup_dir=/path/to/backup"
-```
-
-This ansible playbook will create `definitions.json` (Definitions) and `rabbitmq-backup.tgz` (Messages) files on all RabbitMQ nodes at `/path/to/backup`.
-
-Now, save these files on your host machine with scp command -
-```
-mkdir rabbitmq_backups
-cd rabbitmq_backups
-```
-Fetch the backup files for each node one by one,
-```
-scp -r <node_name>:/path/to/backup/ <node_name>/
-```
-
-
-## Restore
-You should have the definition and data backup files on your host machine for each node, in the specific `node_name` directory.
-To restore the RabbitMQ backup,
-Copy both files to the specific nodes at `/path/to/restore/from` for each node -
-```
-scp -r <node_name>/ <node_name>:/path/to/restore/from
-```
-
-### Restore Definitions
-ssh into each node and run the following command from the path `/path/to/restore/from` -
-```
-rabbitmqadmin import definitions.json
-```
-
-### Restore Data
-To restore the data, we need to stop the rabbitmq service on each node first -
-On each nodes, stop the service with -
-```
-ssh <node_name>
-sudo systemctl stop rabbitmq-server
-```
-
-Once the service is stopped, restore the data -
-
-```
-sudo tar xvf rabbitmq-backup.tgz -C /
-sudo chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/mnesia # To ensure the correct permissions
-```
-
-At the end, restart the RabbitMQ server on each node -
-```
-sudo systemctl start rabbitmq-server
-```
-
-At the end, please make sure that the RabbitMQ is running fine on all the nodes.
diff --git a/offline/rabbitmq_setup.md b/offline/rabbitmq_setup.md
new file mode 100644
index 000000000..eee3f4cd8
--- /dev/null
+++ b/offline/rabbitmq_setup.md
@@ -0,0 +1,149 @@
+## RabbitMQ
+
+There are two methods to deploy the RabbitMQ cluster:
+
+### Method 1: Install RabbitMQ inside the Kubernetes cluster with a Helm chart
+
+To install the RabbitMQ service, first copy the values and secrets files:
+```
+cp ./values/rabbitmq/prod-values.example.yaml ./values/rabbitmq/values.yaml
+cp ./values/rabbitmq/prod-secrets.example.yaml ./values/rabbitmq/secrets.yaml
+```
+By default this will create a RabbitMQ deployment with ephemeral storage. To use the local persistent storage of the Kubernetes nodes, please refer to [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md).
+
+Now, update `./values/rabbitmq/values.yaml` and `./values/rabbitmq/secrets.yaml` with the correct values as needed.
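+
+For example, to switch from ephemeral storage to a persistent volume, the persistence keys in `./values/rabbitmq/values.yaml` can be set along these lines (a sketch based on the commented defaults in `prod-values.example.yaml`; keep the nesting exactly as it appears in that file, and note that the `openebs-hostpath` storage class assumes the OpenEBS setup from [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md)):
+```
+size: 10Gi                       # example size, adjust as needed
+enabled: true                    # 'false' keeps ephemeral storage
+storageClass: openebs-hostpath   # assumes the OpenEBS storage class
+```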
+
+Deploy the `rabbitmq` helm chart:
+```
+d helm upgrade --install rabbitmq ./charts/rabbitmq --values ./values/rabbitmq/values.yaml --values ./values/rabbitmq/secrets.yaml
+```
+
+### Method 2: Install RabbitMQ outside of the Kubernetes cluster with an Ansible playbook
+
+Add the nodes on which you want to run RabbitMQ to the `[rmq-cluster]` group in the `ansible/inventory/offline/hosts.ini` file. Also, update the `ansible/roles/rabbitmq-cluster/defaults/main.yml` file with the correct configuration for your environment.
+
+If you need RabbitMQ to listen on a different interface than the default gateway, set `rabbitmq_network_interface`.
+
+You should have the following entries in the `ansible/inventory/offline/hosts.ini` file. For example:
+```
+[rmq-cluster:vars]
+rabbitmq_network_interface = enp1s0
+
+[rmq-cluster]
+ansnode1
+ansnode2
+ansnode3
+```
+
+#### Hostname Resolution
+RabbitMQ nodes address each other using a node name: a combination of a prefix and a hostname, either short or fully qualified (FQDN), e.g. rabbitmq@ansnode1.
+
+Therefore, every cluster member must be able to resolve the hostnames of every other cluster member, its own hostname, and the machines on which command line tools such as rabbitmqctl might be used.
+
+Nodes perform hostname resolution early on node boot. In container-based environments, it is important that hostname resolution is ready before the container is started.
+
+Hostname resolution can use any of the standard OS-provided methods, e.g.:
+- DNS records
+- Local host files (e.g. /etc/hosts)
+
+Reference - https://www.rabbitmq.com/clustering.html#cluster-formation-requirements
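+
+For illustration, with the example inventory above, the `/etc/hosts` file on each node could contain entries like these (the IP addresses are placeholders for your environment):
+```
+10.114.0.11 ansnode1
+10.114.0.12 ansnode2
+10.114.0.13 ansnode3
+```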
+
+For adding such entries to the local hosts file (`/etc/hosts`), run:
+```
+d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/roles/rabbitmq-cluster/tasks/configure_dns.yml
+```
+
+Create the RabbitMQ cluster:
+```
+d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/rabbitmq.yml
+```
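+
+To sanity-check that the cluster formed correctly, you can inspect it from any of the nodes (`rabbitmqctl` is installed alongside the RabbitMQ server):
+```
+ssh <node_name>
+sudo rabbitmqctl cluster_status
+```
+The output should list all cluster members as running nodes.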
+
+Then run the following playbook to create the values file that lets the helm charts find the RabbitMQ IP addresses:
+```
+d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/helm_external.yml --tags=rabbitmq-external
+```
+
+Make Kubernetes aware of where the external RabbitMQ stateful service is running:
+```
+d helm install rabbitmq-external ./charts/rabbitmq-external --values ./values/rabbitmq-external/values.yaml
+```
+
+Configure wire-server to use the external RabbitMQ service:
+
+Edit the `./values/wire-server/prod-values.example.yaml` file to update the RabbitMQ host. Under the `brig` and `galley` sections, you will find the `rabbitmq` config; update the host to `rabbitmq-external`, so it looks like this:
+```
+rabbitmq:
+  host: rabbitmq-external
+```
+
+## Backup and Restore
+
+The following steps describe the backup and restore process for RabbitMQ deployed outside of Kubernetes.
+
+This can vary based on your security, privacy, and administrative policies. It is also recommended to read and follow the official documentation here - https://www.rabbitmq.com/docs/backup
+
+## Backup
+Make sure the nodes on which RabbitMQ is running are listed in the [ansible inventory file](https://github.com/wireapp/wire-server-deploy/blob/master/offline/docs_ubuntu_22.04.md#editing-the-inventory), under the `rmq-cluster` group.
+Then run the following command to load your wire utility environment:
+```
+source bin/offline-env.sh
+```
+
+Replace `/path/to/backup` in the command below with the backup target path on the RabbitMQ nodes.
+
+```
+d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/backup_rabbitmq.yml --extra-vars "backup_dir=/path/to/backup"
+```
+
+This ansible playbook will create `definitions.json` (Definitions) and `rabbitmq-backup.tgz` (Messages) files on all RabbitMQ nodes at `/path/to/backup`.
+
+Now, save these files on your host machine with the scp command:
+```
+mkdir rabbitmq_backups
+cd rabbitmq_backups
+```
+Fetch the backup files for each node one by one:
+```
+scp -r <node_name>:/path/to/backup/ <node_name>/
+```
+
+## Restore
+You should have the definition and data backup files on your host machine for each node, in the corresponding `<node_name>` directory.
+To restore the RabbitMQ backup, copy both files to each node at `/path/to/restore/from`:
+```
+scp -r <node_name>/ <node_name>:/path/to/restore/from
+```
+
+### Restore Definitions
+ssh into each node and run the following command from the path `/path/to/restore/from`:
+```
+rabbitmqadmin import definitions.json
+```
+
+### Restore Data
+To restore the data, first stop the RabbitMQ service on each node:
+```
+ssh <node_name>
+sudo systemctl stop rabbitmq-server
+```
+
+Once the service is stopped, restore the data:
+```
+sudo tar xvf rabbitmq-backup.tgz -C /
+sudo chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/mnesia # To ensure the correct permissions
+```
+
+Finally, restart the RabbitMQ server on each node:
+```
+sudo systemctl start rabbitmq-server
+```
+
+Afterwards, please make sure that RabbitMQ is running fine on all the nodes.
diff --git a/values/rabbitmq/prod-values.example.yaml b/values/rabbitmq/prod-values.example.yaml
index b634e1431..cc6b67dbf 100644
--- a/values/rabbitmq/prod-values.example.yaml
+++ b/values/rabbitmq/prod-values.example.yaml
@@ -6,7 +6,7 @@ rabbitmq:
     size: 10Gi
     enabled: false
     ### To use a persistent volume, set the enabled to true
-    ### set and uncomment the name of your storageClass and existingClaim below, as per needed
-    ### also, you can refer to offline/local_persistent_storage_k8s.md for a local-path example
-    # storageClass: local-path
-    # existingClaim: local-path-pvc
+    ### set and uncomment the name of your storageClass below,
+    ### also, you can refer to offline/local_persistent_storage_k8s.md
+    ### for deploying OpenEBS for dynamic volume provisioning
+    # storageClass: openebs-hostpath