Update docs to always deploy RabbitMQ and add data persistence (#730)
* update rabbitmq values file

* update rabbitmq docs

* add backup/restore reference for openebs

* Update offline/rabbitmq_setup.md

* Update offline/rabbitmq_setup.md

* Apply suggestions from code review

Co-authored-by: Julia Longtin <[email protected]>

---------

Co-authored-by: Julia Longtin <[email protected]>
amitsagtani97 and julialongtin authored Sep 10, 2024
1 parent 07123cf commit 3f096d1
Showing 7 changed files with 171 additions and 241 deletions.
3 changes: 3 additions & 0 deletions offline/docs_ubuntu_22.04.md
@@ -514,6 +514,9 @@ ufw allow 25672/tcp;
'
```

### Deploy RabbitMQ cluster
Follow the steps in [offline/rabbitmq_setup.md](./rabbitmq_setup.md) to create a RabbitMQ cluster suited to your setup.

### Preparation for Federation
Enabling Federation requires RabbitMQ to be in place. Follow the instructions in [offline/federation_preparation.md](./federation_preparation.md) to set up RabbitMQ.

82 changes: 1 addition & 81 deletions offline/federation_preparation.md
@@ -29,86 +29,6 @@ Adding remote instances to federate with happens in the `brig` subsection in [va
```
Multiple domains with individual search policies can be added.


## RabbitMQ

There are two methods to deploy the RabbitMQ cluster:

### Method 1: Install RabbitMQ inside kubernetes cluster with the help of helm chart

To install the RabbitMQ service, first copy the values and secrets files:
```
cp ./values/rabbitmq/prod-values.example.yaml ./values/rabbitmq/values.yaml
cp ./values/rabbitmq/prod-secrets.example.yaml ./values/rabbitmq/secrets.yaml
```
By default this will create a RabbitMQ deployment with ephemeral storage. To use local persistent storage on the Kubernetes nodes, refer to the related documentation in [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md).

Now, update the `./values/rabbitmq/values.yaml` and `./values/rabbitmq/secrets.yaml` with correct values as needed.
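As an illustration of what a persistence-enabled configuration might look like — every key name below is an assumption, so verify it against `./values/rabbitmq/prod-values.example.yaml` before use:

```yaml
# Hypothetical values.yaml excerpt; key names are assumptions to be
# checked against ./values/rabbitmq/prod-values.example.yaml.
rabbitmq:
  replicaCount: 3
  persistence:
    enabled: true                    # keep queue data across pod restarts
    storageClass: openebs-hostpath   # any dynamic-provisioning storage class
    size: 10Gi
```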

Deploy the `rabbitmq` helm chart:
```
d helm upgrade --install rabbitmq ./charts/rabbitmq --values ./values/rabbitmq/values.yaml --values ./values/rabbitmq/secrets.yaml
```

### Method 2: Install RabbitMQ outside of the Kubernetes cluster with an Ansible playbook

Add the nodes on which you want to run rabbitmq to the `[rmq-cluster]` group in the `ansible/inventory/offline/hosts.ini` file. Also, update the `ansible/roles/rabbitmq-cluster/defaults/main.yml` file with the correct configurations for your environment.

If you need RabbitMQ to listen on a different interface than the default gateway, set `rabbitmq_network_interface`.

You should have the following entries in the `ansible/inventory/offline/hosts.ini` file, for example:
```
[rmq-cluster:vars]
rabbitmq_network_interface = enp1s0
[rmq-cluster]
ansnode1
ansnode2
ansnode3
```


#### Hostname Resolution
RabbitMQ nodes address each other using a node name, a combination of a prefix and a domain name, either short or fully qualified (FQDN), e.g. `rabbitmq@ansnode1`.

Therefore, every cluster member must be able to resolve the hostnames of every other cluster member, its own hostname, and the hostnames of machines on which command-line tools such as `rabbitmqctl` might be used.

Nodes will perform hostname resolution early on node boot. In container-based environments it is important that hostname resolution is ready before the container is started.

Hostname resolution can use any of the standard OS-provided methods, for example:

- DNS records
- Local host files (e.g. `/etc/hosts`)

Reference - https://www.rabbitmq.com/clustering.html#cluster-formation-requirements
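For example, static entries of the following shape in `/etc/hosts` on every cluster member satisfy this requirement (the IP addresses here are placeholders for this sketch):

```
192.168.122.21 ansnode1
192.168.122.22 ansnode2
192.168.122.23 ansnode3
```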


To add entries to the local host file (`/etc/hosts`), run:
```
d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/roles/rabbitmq-cluster/tasks/configure_dns.yml
```

Create the RabbitMQ cluster:

```
d ansible-playbook -i ansible/inventory/offline/hosts.ini ansible/rabbitmq.yml
```

Then run the following playbook to create the values file that the helm charts use to look up the RabbitMQ IP addresses:

```
d ansible-playbook -i ./ansible/inventory/offline/hosts.ini ansible/helm_external.yml --tags=rabbitmq-external
```

Make Kubernetes aware of where the external RabbitMQ stateful service is running:
```
d helm install rabbitmq-external ./charts/rabbitmq-external --values ./values/rabbitmq-external/values.yaml
```

Configure wire-server to use the external RabbitMQ service:

Edit the `./values/wire-server/prod-values.example.yaml` file to update the RabbitMQ host. Under the `brig` and `galley` sections you will find the `rabbitmq` config; update the host to `rabbitmq-external`, so that it looks like this:
```
rabbitmq:
  host: rabbitmq-external
```
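If you script this edit, the change amounts to setting one nested key per service in the parsed values mapping. A minimal sketch in Python (the helper name is ours; the real file is YAML and would be loaded and dumped with a YAML library):

```python
# Point each service's rabbitmq host at the external cluster in a
# parsed values mapping. Hypothetical helper for illustration only.
def set_rabbitmq_host(values, host, services=("brig", "galley")):
    for svc in services:
        values.setdefault(svc, {}).setdefault("rabbitmq", {})["host"] = host
    return values

values = {"brig": {"rabbitmq": {"host": "rabbitmq"}}, "galley": {}}
set_rabbitmq_host(values, "rabbitmq-external")
print(values["galley"]["rabbitmq"]["host"])
```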
If you haven't yet created the RabbitMQ cluster, refer to [offline/rabbitmq_setup.md](./rabbitmq_setup.md).
19 changes: 1 addition & 18 deletions offline/k8ssandra_setup.md
@@ -9,24 +9,7 @@ K8ssandra will need the following components to be installed in the cluster -
- Configure minio bucket for backups

## [1] Dynamic Persistent Volume Provisioning
If you already have a dynamic persistent volume provisioning setup, you can skip this step. If not, we will be using OpenEBS for dynamic persistent volume provisioning.

Reference docs - https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation

### Deploy OpenEBS

```
d helm install openebs charts/openebs --namespace openebs --create-namespace
```
The above helm chart is available in the offline artifact.

After successful deployment of OpenEBS, you will see these storage classes:
```
d kubectl get sc
NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
```
Refer to [offline/local_persistent_storage_k8s.md](./local_persistent_storage_k8s.md)

## [2] Install cert-manager
cert-manager is a hard requirement for k8ssandra; see https://docs.k8ssandra.io/install/local/single-cluster-helm/#deploy-cert-manager for details.
86 changes: 13 additions & 73 deletions offline/local_persistent_storage_k8s.md
````diff
@@ -1,83 +1,23 @@
 # To Create a storage class for local persistent storage in Kubernetes
+## Dynamic Persistent Volume Provisioning
+If you already have a dynamic persistent volume provisioning setup, you can skip this step. If not, we can use OpenEBS for dynamic persistent volume provisioning.

-#### Note: This is just an example to create a local-path storage class. For the actual use case, you can create your own storage class with a provisioner of your choice and use it in different places to deploy wire-server and other resources.
+Reference docs - https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation

-Create a storage class.
-
-You can find more information about local persistent storage here: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
-Copy the following content in a file and name it sc.yaml
+### Deploy OpenEBS

 ```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: local-path
-provisioner: kubernetes.io/no-provisioner
-volumeBindingMode: WaitForFirstConsumer
+d helm install openebs charts/openebs --namespace openebs --create-namespace
 ```
+The above helm chart is available in the offline artifact.

-Create a Persistent Volume.
-
-You can find more information about the Persistent Volume here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-
-Note: The below example will create a Persistent Volume on the node kubenode1. You can change the node name as per your requirement. Also make sure that the path /data/local-path exists on the node kubenode1.
-
-Copy the following content in a file and name it pv.yaml
+After successful deployment of OpenEBS, you will see these storage classes:

 ```
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: local-path-pv
-spec:
-  capacity:
-    storage: 10Gi
-  accessModes:
-    - ReadWriteOnce
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: local-path
-  local:
-    path: /data/local-path
-  nodeAffinity:
-    required:
-      nodeSelectorTerms:
-        - matchExpressions:
-            - key: kubernetes.io/hostname
-              operator: In
-              values:
-                - kubenode1
+d kubectl get sc
+NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
+openebs-device     openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
+openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  5d20h
 ```

-Create a Persistent Volume Claim.
-
-You can find more information about the Persistent Volume Claim here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+### Backup and Restore

-Copy the following content in a file and name it pvc.yaml
-
-```
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: local-path-pvc
-spec:
-  storageClassName: local-path
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 10Gi
-```
-
-Now, create the above resources using the following commands:
-
-```
-d kubectl apply -f sc.yaml
-d kubectl apply -f pv.yaml
-d kubectl apply -f pvc.yaml
-```
-
-After successful creation, you should be able to see the resources with -
-```
-d kubectl get sc,pv,pvc
-```
+For backup and restore of the OpenEBS Local Storage, refer to the official docs at - https://openebs.io/docs/user-guides/local-storage-user-guide/additional-information/backupandrestore
````
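To sketch how a workload would consume the OpenEBS-provisioned storage described above, a PersistentVolumeClaim referencing one of its storage classes could look like this (the claim name and size are arbitrary examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-openebs-pvc        # arbitrary example name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # arbitrary example size
```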
65 changes: 0 additions & 65 deletions offline/rabbitmq_backup_restore.md

This file was deleted.

