Merge pull request #40 from Ocelot-Social-Community/39-on-backups-rollbacks--release-v1.0.7-171

docs: 🍰 Add Docs For Backups, Rollbacks, And Release v1.0.7-171
Tirokk authored Dec 2, 2021
2 parents 9d48daf + b792a18 commit 6439e6d
Showing 4 changed files with 328 additions and 2 deletions.
305 changes: 305 additions & 0 deletions deployment/kubernetes/Backup.md
@@ -0,0 +1,305 @@
# Kubernetes Backup Of Ocelot.Social

One of the most important tasks in managing a running [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) network is backing up the data, e.g. the Neo4j database and the stored image files.

## Manual Offline Backup

To prepare, [kubectl](https://kubernetes.io/docs/tasks/tools/) must be installed and ready to use so that you have access to Kubernetes on your server.

Check if the correct context is used by running the following commands:

```bash
# check context and set the correct one
$ kubectl config get-contexts
# if the wrong context is selected, switch to the correct one
$ kubectl config use-context <your-context>
# optionally, check that all pods are running
$ kubectl -n default get pods -o wide
```
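
If you manage several clusters, you can alternatively point `kubectl` at a dedicated kubeconfig file instead of switching contexts. A minimal sketch; the file path is an example from a hypothetical setup:

```bash
# point kubectl at a cluster-specific kubeconfig (path is an example)
$ export KUBECONFIG=~/clusters/ocelot/kubeconfig.yaml
# confirm which context is now active
$ kubectl config current-context
```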

The very first step is to put the website into **maintenance mode**.

### Set Maintenance Mode

There are two ways to put the network into maintenance mode:

- via Kubernetes Dashboard
- via `kubectl`

#### Maintenance Mode Via Kubernetes Dashboard

In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.

In the list that appears, find the entry `ingress-ocelot-webapp` and click the three dots on its right to edit the entry.

You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.

In all entries, change the value of the `serviceName` entry from ***ocelot-webapp*** to `ocelot-maintenance` and the value of the `servicePort` entry from ***3000*** to `80`.
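
For orientation, an edited rule would then look roughly like the following sketch (the host name is a placeholder for one of your domains):

```yaml
rules:
  - host: network-domain.social
    http:
      paths:
        - backend:
            serviceName: ocelot-maintenance
            servicePort: 80
```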

Before clicking `Update`, check that your website is still online.
After you click `Update`, the new settings are applied and your website will be in maintenance mode.

#### Maintenance Mode Via `kubectl`

To put the network into maintenance mode, run the following commands in the terminal:

```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```

Change the content of the YAML file for all domains to:

```yaml
spec:
  rules:
    - host: network-domain.social
      http:
        paths:
          - backend:
              # serviceName: ocelot-webapp
              # servicePort: 3000
              serviceName: ocelot-maintenance
              servicePort: 80
```

Before saving, check that your website is still online.
After you save the file, the new settings are applied and your website will be in maintenance mode.
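
To verify from outside the cluster, you can request the page and check that the maintenance service now answers. A quick sketch, using the placeholder domain from the example above:

```bash
# the response should now come from the maintenance service
$ curl -sI https://network-domain.social | head -n 1
```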

### Neo4j Database Offline Backup

Before we can back up the database, we need to put it into **sleep mode**.

#### Set Neo4j To Sleep Mode

Again, there are two ways to put the database into sleep mode:

- via Kubernetes Dashboard
- via `kubectl`

##### Sleep Mode Via Kubernetes Dashboard

In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.

In the list that appears, find the entry `ocelot-neo4j` and click the three dots on its right to edit the entry.

Scroll to the end of the YAML file, where you will find the `spec.template.spec.containers` entry. Insert a `command` entry on a new line directly after `imagePullPolicy`.

```yaml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```

After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.

##### Sleep Mode Via `kubectl`

To put Neo4j into sleep mode, run the following commands in the terminal:

```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```

Scroll to the `spec.template.spec.containers` entry. Insert a `command` entry on a new line directly after `imagePullPolicy`.

```yaml
image: <network-DockerHub-name>/neo4j-community-branded:latest
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```

After you save the file and close the editor, the new settings will be applied. Check whether the `ocelot-neo4j-<ID>` pod restarts:

```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
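
If you prefer not to edit the Deployment interactively, the same change can be applied with `kubectl patch`. A sketch that assumes the Neo4j container is the first container in the pod spec:

```bash
# add the sleep-mode command without opening an editor
$ kubectl -n default patch deployment ocelot-neo4j --type='json' \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["tail", "-f", "/dev/null"]}]'
# wait until the new pod has rolled out
$ kubectl -n default rollout status deployment/ocelot-neo4j
```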

#### Generate Offline Backup

The offline backup is generated via `kubectl`:

```bash
# check for the Neo4j pod
$ kubectl -n default get pods -o wide
# ls: see which backup dumps are already there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
# bash: open a bash shell in the Neo4j pod
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- bash
# generate the dump
neo4j% neo4j-admin dump --to=/var/lib/neo4j/$(date +%F)-neo4j-dump
# exit bash
neo4j% exit
# ls: see if the new backup dump is there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
```
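
Before copying, you may want to check the size of the new dump; a small sketch:

```bash
# show the dump files with their sizes
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls -lh /var/lib/neo4j
```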

Let's copy the dump onto the backup volume:

```bash
# copy the dump directly onto the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }'):/var/lib/neo4j/$(date +%F)-neo4j-dump /Volumes/<volume-name>/$(date +%F)-neo4j-dump
```
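
Afterwards, it is worth verifying that the copy arrived on the backup volume. A sketch using the same placeholder volume name:

```bash
# the local copy should have roughly the same size as the dump in the pod
$ ls -lh /Volumes/<volume-name>/$(date +%F)-neo4j-dump
```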

#### Remove Sleep Mode From Neo4j

Again, there are two ways to put Neo4j back into working mode:

- via Kubernetes Dashboard
- via `kubectl`

##### Remove Sleep Mode Via Kubernetes Dashboard

In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.

In the list that appears, find the entry `ocelot-neo4j` and click the three dots on its right to edit the entry.

Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, so that this:

```yaml
containers:
  - name: container-ocelot-neo4j
    image: 'senderfm/neo4j-community-branded:latest'
    command:
      - tail
      - '-f'
      - /dev/null
    ports:
      - containerPort: 7687
        protocol: TCP
```

becomes this:

```yaml
containers:
  - name: container-ocelot-neo4j
    image: 'senderfm/neo4j-community-branded:latest'
    ports:
      - containerPort: 7687
        protocol: TCP
```

After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.

##### Remove Sleep Mode Via `kubectl`

To put Neo4j into working mode, run the following commands in the terminal:

```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```

Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, so that this:

```yaml
spec:
  containers:
    - command:
        - tail
        - -f
        - /dev/null
      envFrom:
        - configMapRef:
            name: configmap-ocelot-neo4j
```

becomes this:

```yaml
spec:
  containers:
    - envFrom:
        - configMapRef:
            name: configmap-ocelot-neo4j
```

After you save the file and close the editor, the new settings will be applied. Check whether the `ocelot-neo4j-<ID>` pod restarts:

```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
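
As with enabling sleep mode, the `command` entry can also be removed non-interactively with `kubectl patch`. A sketch that again assumes the Neo4j container is the first container in the pod spec:

```bash
# remove the sleep-mode command without opening an editor
$ kubectl -n default patch deployment ocelot-neo4j --type='json' \
    -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
# wait until the new pod has rolled out
$ kubectl -n default rollout status deployment/ocelot-neo4j
```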

### Backend Backup

To back up the images from the backend volume, run the following commands:

```bash
# ls: list the uploaded images in backend/public/uploads
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- ls public/uploads
# copy all images from the uploads folder directly onto the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }'):/app/public/uploads /Volumes/<volume-name>/$(date +%F)-public-uploads
```
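
To check that nothing was missed, you can compare the number of files in the pod with the number in the local copy. A sketch using the placeholder volume name from above:

```bash
# count the uploads inside the backend pod
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- sh -c 'ls public/uploads | wc -l'
# count the files in the local copy
$ ls /Volumes/<volume-name>/$(date +%F)-public-uploads | wc -l
```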

### Remove Maintenance Mode

There are two ways to put the network into working mode:

- via Kubernetes Dashboard
- via `kubectl`

#### Remove Maintenance Mode Via Kubernetes Dashboard

In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.

In the list that appears, find the entry `ingress-ocelot-webapp` and click the three dots on its right to edit the entry.

You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.

In all entries, change the value of the `serviceName` entry from ***ocelot-maintenance*** to `ocelot-webapp` and the value of the `servicePort` entry from ***80*** to `3000`.

Before clicking `Update`, check that your website is still in maintenance mode.
After you click `Update`, the new settings are applied and your website will be online again.

#### Remove Maintenance Mode Via `kubectl`

To put the network into working mode, run the following commands in the terminal:

```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```

Change the content of the YAML file for all domains to:

```yaml
spec:
  rules:
    - host: network-domain.social
      http:
        paths:
          - backend:
              serviceName: ocelot-webapp
              servicePort: 3000
              # serviceName: ocelot-maintenance
              # servicePort: 80
```

Before saving, check that your website is still in maintenance mode.
After you save the file, the new settings are applied and your website will be online again.

For reference, the original Human Connection documentation also describes creating a Neo4j backup dump in Kubernetes: [Neo4j offline backup – create a backup in Kubernetes](https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#create-a-backup-in-kubernetes)
6 changes: 6 additions & 0 deletions deployment/kubernetes/DigitalOcean.md
@@ -76,3 +76,9 @@ The IPs of the DigitalOcean machines are not necessarily stable, so the cluster'
## Deploy

Yeah, you're done here. Back to [Deployment with Helm for Kubernetes](/deployment/kubernetes/README.md).

## Backups On DigitalOcean

You can, and should, do [backups](/deployment/kubernetes/Backup.md) with Kubernetes.

In addition to backing up and copying the Neo4j database dump and the backend images, you can take a volume snapshot on DigitalOcean while the database is in sleep mode.
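
A minimal sketch for locating the volume with `doctl`; whether the snapshot itself is taken via `doctl` or the DigitalOcean control panel is up to you, and the snapshot subcommand syntax should be checked against your `doctl` version:

```bash
# list the cluster's block storage volumes to find the one used by Neo4j
$ doctl compute volume list
# take a snapshot of that volume (check `doctl compute volume --help` for the exact syntax)
$ doctl compute volume snapshot <volume-id> --snapshot-name $(date +%F)-neo4j-volume
```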
15 changes: 15 additions & 0 deletions deployment/kubernetes/README.md
@@ -183,6 +183,17 @@ $ helm upgrade ocelot ./
$ helm --kubeconfig=/../kubeconfig.yaml upgrade ocelot ./
```

#### Rollback

In case something went wrong, run a rollback:

```bash
# kubeconfig.yaml set globally
$ helm rollback ocelot
# or, with kubeconfig.yaml in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml rollback ocelot
```
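
Without further arguments, `helm rollback` reverts to the previous revision. To pick a specific revision, you can first inspect the release history; the revision number below is only an example:

```bash
# list all revisions of the release
$ helm history ocelot
# roll back to a specific revision
$ helm rollback ocelot 3
```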

#### Uninstall

Be aware that if you uninstall ocelot, the formerly bound volumes become unbound. Those volumes contain all data from uploads and the database. You have to manually free their references in order to bind them again when reinstalling. Once unbound from their former container references, they should automatically be rebound (provided the sizes did not change).
@@ -194,6 +205,10 @@ $ helm uninstall ocelot
$ helm --kubeconfig=/../kubeconfig.yaml uninstall ocelot
```

## Backups

You can, and should, do [backups](/deployment/kubernetes/Backup.md) with Kubernetes.

## Error Reporting

We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
4 changes: 2 additions & 2 deletions package.json
@@ -1,7 +1,7 @@
  {
    "name": "ocelot-social-branded",
-   "version": "1.0.6",
-   "ocelotDockerVersionTag": "1.0.6-170",
+   "version": "1.0.7",
+   "ocelotDockerVersionTag": "1.0.7-171",
    "dockerOrganisation": "ocelotsocialnetwork",
    "description": "ocelot.social Branded",
    "author": "ocelot.social Community",
