modify docs
sozenh authored and molliezhang committed Feb 8, 2022
1 parent 2871b7b commit df24640
Showing 17 changed files with 96 additions and 75 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -10,7 +10,7 @@ Thanks for taking the time to contribute to `clickhouse-operator`!

Please **do not** make a PR into the `master` branch.
We intend to keep `master` clean and stable and will not accept commits directly into `master`.
We always have a dedicated branch, named `x.y.z` (for example, `0.9.1` at the time of this writing), which is used as a testbed for the next release.
Please make your PR into this branch. If you are not sure which branch to use, create a new issue to discuss, and we'd be happy to assist.

Your submission should not contain more than one commit. Please **squash your commits**.
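
A hedged sketch of one way to do that with `git` (the commit count and branch name below are examples, not project requirements):

```bash
# squash the last 3 commits into one, interactively:
# mark the first commit 'pick' and the rest 'squash' in the editor
git rebase -i HEAD~3

# then update the already-pushed PR branch
git push --force-with-lease origin my-feature-branch
```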
46 changes: 31 additions & 15 deletions docs/README.md
@@ -1,21 +1,37 @@
# Table of Contents
1. [architecture.md](./architecture.md) - architecture overview
1. [chi_update_add_replication.md](./chi_update_add_replication.md) - how to add replication
1. [chi_update_clickhouse_version.md](./chi_update_clickhouse_version.md) - how to update version
1. [clickhouse_config_errors_handling.md](./clickhouse_config_errors_handling.md) - how the operator handles ClickHouse's config errors
1. [custom_resource_explained.md](./custom_resource_explained.md) - explains the Custom Resource Definition in detail
1. [grafana_setup.md](./grafana_setup.md) - how to set up Grafana

## Quick Start:
1. [quick_start.md](./quick_start.md) - quick start
1. [introduction.md](./introduction.md) - general introduction
1. [k8s_cluster_access.md](./k8s_cluster_access.md) - how to set up cluster access
1. [monitoring_setup.md](./monitoring_setup.md) - how to set up monitoring
1. [operator_build_from_sources.md](./operator_build_from_sources.md) - how to build operator from sources
1. [operator_configuration.md](./operator_configuration.md) - operator configuration in detail


## ClickHouse Operator:
1. [operator_installation_details.md](./operator_installation_details.md) - how to install the operator, in detail
1. [operator_upgrade.md](./operator_upgrade.md) - how to upgrade the operator to a different version
1. [prometheus_setup.md](./prometheus_setup.md) - how to set up Prometheus
1. [pull_request_template.md](./pull_request_template.md) - the template used by GitHub during the PR process
1. [quick_start.md](./quick_start.md) - quick start
1. [replication_setup.md](./replication_setup.md) - how to set up replication
1. [operator_configuration.md](./operator_configuration.md) - operator configuration in detail
1. [operator_build_from_sources.md](./operator_build_from_sources.md) - how to build operator from sources
1. [custom_resource_explained.md](./custom_resource_explained.md) - explains the Custom Resource Definition in detail
1. [clickhouse_config_errors_handling.md](./clickhouse_config_errors_handling.md) - how the operator handles ClickHouse's config errors
1. [architecture.md](./architecture.md) - architecture overview
1. [schema_migration.md](./schema_migration.md) - how operator migrates schema during cluster resize
1. [storage.md](./storage.md) - storage explained


## ClickHouse Installation:
1. [zookeeper_setup.md](./zookeeper_setup.md) - how to set up zookeeper
1. [replication_setup.md](./replication_setup.md) - how to set up replication
1. [chi_update_add_replication.md](./chi_update_add_replication.md) - how to add replication
1. [chi_update_clickhouse_version.md](./chi_update_clickhouse_version.md) - how to update version
1. [clickhouse_backup_and_restore.md](./clickhouse_backup_and_restore.md) - how to do backup and restore
1. [storage.md](./storage.md) - storage explained


## ClickHouse Monitor:
1. [monitoring_setup.md](./monitoring_setup.md) - how to set up monitoring
1. [prometheus_setup.md](./prometheus_setup.md) - how to set up Prometheus
1. [grafana_setup.md](./grafana_setup.md) - how to set up Grafana

## ClickHouse Backup:
1. [clickhouse_backup_and_restore.md](./clickhouse_backup_and_restore.md) - how to back up / restore a clickhouse cluster

## Others:
1. [k8s_cluster_access.md](./k8s_cluster_access.md) - how to set up cluster access
@@ -51,7 +51,7 @@ spec:
- clickhouse-server
- --config-file=/etc/clickhouse-server/config.xml
- name: clickhouse-backup
image: altinity/clickhouse-backup:1.0.11
image: radondb/clickhouse-backup:latest
imagePullPolicy: Always
command:
- /bin/bash
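
For reference, the `clickhouse-backup` sidecar shown above is typically run in server mode, which exposes a REST API. A hedged sketch of triggering it (port `7171` is the upstream default; the namespace and pod name are placeholders, and `curl` is assumed to be available in the sidecar image):

```bash
# trigger a new backup through the sidecar's REST API
kubectl exec -n <namespace> <chi-pod-name> -c clickhouse-backup -- \
  curl -s -X POST http://127.0.0.1:7171/backup/create

# list existing backups
kubectl exec -n <namespace> <chi-pod-name> -c clickhouse-backup -- \
  curl -s http://127.0.0.1:7171/backup/list
```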
6 changes: 3 additions & 3 deletions docs/chi-examples/99-clickhouseupgrade-draft.yaml
@@ -7,7 +7,7 @@ spec:
deployment:
zone:
matchLabels:
clickhouse.altinity.com/zone: zone1
clickhouse.radondb.com/zone: zone1
podTemplate: clickhouse-v20.6
dataVolumeClaimTemplate: default
logVolumeClaimTemplate: default
@@ -54,8 +54,8 @@ spec:
scenario: NodeMonopoly # 1 pod (CH server instance) per node (zone can be a set of n nodes) -> podAntiAffinity
zone:
matchLabels:
clickhouse.altinity.com/zone: zone4
clickhouse.altinity.com/kind: ssd
clickhouse.radondb.com/zone: zone4
clickhouse.radondb.com/kind: ssd
podTemplate: clickhouse-v20.6

replicas:
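
For context, the `NodeMonopoly` scenario above maps onto a standard kubernetes `podAntiAffinity` rule. A minimal sketch of such a rule on a pod spec, with illustrative label names (not taken from this repo):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # forbid two ClickHouse pods from landing on the same node
      - labelSelector:
          matchLabels:
            clickhouse.radondb.com/app: chi
        topologyKey: kubernetes.io/hostname
```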
3 changes: 3 additions & 0 deletions docs/chi_update_add_replication.md
@@ -4,6 +4,9 @@
1. Assume we have `clickhouse-operator` already installed and running
1. Assume we have `Zookeeper` already installed and running

Note: starting from version 2.1, you can use the automatic creation and configuration of ZooKeeper clusters provided by the operator.
For details, please refer to the `.spec.configuration.zookeeper` section of [custom_resource_explained.md](./custom_resource_explained.md).

## Install ClickHouseInstallation example
We are going to install everything into the `dev` namespace. `clickhouse-operator` is already installed into the `dev` namespace.

12 changes: 12 additions & 0 deletions docs/custom_resource_explained.md
@@ -45,6 +45,7 @@ clickhouse-installation-max 23h
The `.spec.configuration` section represents sources for ClickHouse configuration files, be it users, remote servers, or other configuration files.
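
For orientation, a minimal sketch of how such sources sit under `.spec.configuration` (the values and the `profiles` entry are illustrative assumptions, not defaults):

```yaml
spec:
  configuration:
    users:
      # paths mirror the users.xml structure
      admin/password: admin_password
      admin/networks/ip: "::/0"
    profiles:
      default/max_memory_usage: "1000000000"
```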

## .spec.configuration.zookeeper
If you want to use your own, manually deployed ZooKeeper, use the following configuration:
```yaml
zookeeper:
nodes:
@@ -59,6 +60,17 @@
root: /path/to/zookeeper/node
identity: user:password
```

or, if you want the operator to create the ZooKeeper cluster automatically, use the following configuration:
```yaml
zookeeper:
install: true
replica: 3
port: 2181
image: radondb/zookeeper:3.6.1
imagePullPolicy: IfNotPresent
```

`.spec.configuration.zookeeper` refers to the [<yandex><zookeeper></zookeeper></yandex>][server-settings_zookeeper] config section.
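
Putting it together, a minimal sketch of a `ClickHouseInstallation` that relies on the operator-managed ZooKeeper (the name and cluster layout are illustrative):

```yaml
apiVersion: "clickhouse.radondb.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "repl-demo"
spec:
  configuration:
    zookeeper:
      install: true
      replica: 3
      port: 2181
    clusters:
      - name: "replicated"
        layout:
          shardsCount: 1
          replicasCount: 2
```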

## .spec.configuration.profiles
10 changes: 5 additions & 5 deletions docs/grafana_setup.md
@@ -29,7 +29,7 @@ Login credentials:
- password: **admin**

Check that `http://localhost:3000/datasources` contains a `Prometheus` datasource
Check that `http://localhost:3000/dashboards` contains the [Altinity Clickhouse Operator Dashboard][altinity_recommended_dashboard]
Check that `http://localhost:3000/dashboards` contains the [RadonDB Clickhouse Operator Dashboard][radondb_recommended_dashboard]

## Install Grafana instance via pure kubernetes manifests
In case we do not have Grafana available, we can set it up directly in k8s and integrate it with Prometheus afterwards.
@@ -92,17 +92,17 @@ Data source configuration parameters:
where `svc.cluster.local` is the k8s-cluster-dependent part, though it is most often left at the default `svc.cluster.local`
- Access: choose **proxy**, which means the Grafana backend will send requests to Prometheus; **direct** means "directly from the browser", which will not work in a k8s installation.
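
For example, with Prometheus installed into the `prometheus` namespace as in [Prometheus Setup][prometheus_setup_doc], the data source URL would typically look like this (service name and port are assumptions based on that setup):

```
http://prometheus.prometheus.svc.cluster.local:9090
```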

By now, Prometheus data should be available for Grafana, and we can choose a nice dashboard to plot the data on. Altinity supplies the recommended [Grafana dashboard][altinity_recommended_dashboard] as an additional deliverable.
By now, Prometheus data should be available for Grafana, and we can choose a nice dashboard to plot the data on. RadonDB supplies the recommended [Grafana dashboard][radondb_recommended_dashboard] as an additional deliverable.

## Manual installation of the Grafana Dashboard

In order to install the dashboard:
1. Navigate to `main menu -> Dashboards -> Import` and pick `Upload .json file`.
1. Select the recommended [Grafana dashboard][altinity_recommended_dashboard]
1. Select the recommended [Grafana dashboard][radondb_recommended_dashboard]
1. Select a Prometheus data source from which data will be fetched
1. Click **Import**

By now the Altinity-recommended dashboard should be available for use.
By now the RadonDB-recommended dashboard should be available for use.

More [Grafana docs][grafana-docs]

@@ -111,7 +111,7 @@ More [Grafana docs][grafana-docs]
[grafana_manifest_yaml_secret]: ../deploy/grafana/grafana-manually/grafana.yaml#L56
[create_grafana_script]: ../deploy/grafana/grafana-manually/create-grafana.sh
[prometheus_setup_doc]: ./prometheus_setup.md
[altinity_recommended_dashboard]: ../grafana-dashboard/RadonDB_ClickHouse_Operator_dashboard.json
[radondb_recommended_dashboard]: ../grafana-dashboard/RadonDB_ClickHouse_Operator_dashboard.json
[install_grafana_operator_script]: ../deploy/grafana/grafana-with-grafana-operator/install-grafana-operator.sh
[install_grafana_dashboard_script]: ../deploy/grafana/grafana-with-grafana-operator/install-grafana-with-operator.sh
[grafana-docs]: http://docs.grafana.org/
1 change: 1 addition & 0 deletions docs/introduction.md
@@ -31,6 +31,7 @@ However, in case we'd like to have high-available ClickHouse installation, we ne
So, we can either use
1. An already existing Zookeeper instance, or
1. [Setup][zookeeper-setup-doc] our own Zookeeper - in most cases inside the same k8s installation, or
1. (In version 2.1+) Specify `.spec.configuration.zookeeper.install = true` in the CHI to let the operator create the ZooKeeper cluster.

[persistent-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[dynamic-provisioning]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
10 changes: 5 additions & 5 deletions docs/operator_build_from_sources.md
@@ -4,21 +4,21 @@

1. `go-lang` compiler
2. `mod` Package Manager
3. Get the sources from our repository using `go` git wrapper `go get github.com/altinity/clickhouse-operator`
3. Get the sources from our repository using `go` git wrapper `go get github.com/radondb/radondb-clickhouse-operator`

## Binary Build Procedure

1. Switch working dir to `src/github.com/altinity/clickhouse-operator`
1. Switch working dir to `src/github.com/radondb/radondb-clickhouse-operator`
2. Make sure all packages are linked properly by using `mod` package manager: `go mod tidy`
3. Build the sources: `go build -o ./clickhouse-operator cmd/operator/main.go`. This will create the `clickhouse-operator` binary, which can only be used inside a kubernetes environment (see the consolidated sketch below).
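
Taken together, the binary build boils down to the following (a consolidated sketch of the steps above, assuming a GOPATH-style source layout):

```bash
# fetch the sources
go get github.com/radondb/radondb-clickhouse-operator

# build from the repository root
cd $GOPATH/src/github.com/radondb/radondb-clickhouse-operator
go mod tidy
go build -o ./clickhouse-operator cmd/operator/main.go
```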

## Docker Image Build and Usage Procedure

This process requires neither the `go-lang` compiler nor the `mod` package manager. Instead, it requires `kubernetes` and `docker`.

1. Switch working dir to `src/github.com/altinity/clickhouse-operator`
2. Build docker image with `docker`: `docker build -t altinity/clickhouse-operator ./`
3. Register the freshly built `docker` image inside the `kubernetes` environment like so: `docker save altinity/clickhouse-operator | (eval $(minikube docker-env) && docker load)`
1. Switch working dir to `src/github.com/radondb/radondb-clickhouse-operator`
2. Build docker image with `docker`: `docker build -t radondb/clickhouse-operator ./`
3. Register the freshly built `docker` image inside the `kubernetes` environment like so: `docker save radondb/clickhouse-operator | (eval $(minikube docker-env) && docker load)`
4. Install `clickhouse-operator` as described here: [Install ClickHouse Operator][install]

[install]: ./operator_installation_details.md
4 changes: 2 additions & 2 deletions docs/operator_configuration.md
@@ -136,7 +136,7 @@ Defaults for ClickHouseInstallation can be provided by `ClickHouseInstallationTemplate`

`ClickHouseInstallationTemplate` has the same structure as `ClickHouseInstallation`, but all parts and fields are optional. Templates are included into an installation with the 'useTemplates' syntax. For example, one can define a template for a ClickHouse pod:

```apiVersion: "clickhouse.altinity.com/v1"
```apiVersion: "clickhouse.radondb.com/v1"
kind: "ClickHouseInstallationTemplate"

metadata:
Expand All @@ -154,7 +154,7 @@ spec:
Template needs to be deployed to some namespace, and later on used in the installation:
```
apiVersion: "clickhouse.altinity.com/v1"
apiVersion: "clickhouse.radondb.com/v1"
kind: "ClickHouseInstallation"
...
spec:
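
The installation example above is cut off in this diff; for completeness, a hedged sketch of how an installation references a deployed template via `useTemplates` (the names are illustrative):

```yaml
apiVersion: "clickhouse.radondb.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "chi-with-template"
spec:
  useTemplates:
    - name: clickhouse-installation-template
```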
8 changes: 4 additions & 4 deletions docs/operator_installation_details.md
@@ -7,12 +7,12 @@ It is located in the `deploy/operator` folder inside the `clickhouse-operator` sources.
The operator installation process is quite straightforward and consists of one main step: deploying the **ClickHouse operator**.
We'll apply the operator manifest directly from the github repo:
```bash
kubectl apply -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
kubectl apply -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml
```

The following results are expected:
```text
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.radondb.com created
serviceaccount/clickhouse-operator created
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator created
deployment.apps/clickhouse-operator configured
@@ -44,7 +44,7 @@ Let's walk over all resources created along with the ClickHouse operator, which are:

### Custom Resource Definition
```text
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.radondb.com created
```
A new [Custom Resource Definition][customresourcedefinitions] named **ClickHouseInstallation** is created.
The k8s API is extended with the new kind `ClickHouseInstallation`, and we'll be able to manage k8s resources of `kind: ClickHouseInstallation`.
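
For example, once the CRD is registered, resources of the new kind can be listed like any other kubernetes resource (a sketch; the `dev` namespace is illustrative):

```bash
# list ClickHouseInstallation resources by the full CRD name
kubectl get clickhouseinstallations.clickhouse.radondb.com -n dev
```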
@@ -98,7 +98,7 @@ Expected result
```text
NAME CREATED AT
...
clickhouseinstallations.clickhouse.altinity.com 2019-01-25T10:17:57Z
clickhouseinstallations.clickhouse.radondb.com 2019-01-25T10:17:57Z
...
```

18 changes: 9 additions & 9 deletions docs/operator_upgrade.md
@@ -24,9 +24,9 @@ spec:
spec:
serviceAccountName: clickhouse-operator
containers:
- image: altinity/clickhouse-operator:0.13.0
- image: radondb/chronus-operator:2.0
name: clickhouse-operator
- image: altinity/metrics-exporter:0.13.0
- image: radondb/chronus-metrics-operator:2.0
name: metrics-exporter
```
The latest available version is installed by default. If the version changes, there are three ways to upgrade the operator:
@@ -50,25 +50,25 @@
Service Account: clickhouse-operator
Containers:
clickhouse-operator:
Image: altinity/clickhouse-operator:0.13.0
Image: radondb/chronus-operator:2.0
metrics-exporter:
Image: altinity/metrics-exporter:0.13.0
Image: radondb/chronus-metrics-operator:2.0
<...>
```
The version is labeled and can also be displayed with the command:
```
$ kubectl get deployment clickhouse-operator -L version -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE VERSION
clickhouse-operator 1 1 1 1 19h 0.13.0
clickhouse-operator 1 1 1 1 19h 2.0
```

If we want to update to the new version, we can run the following command:

```
$ kubectl set image deployment.v1.apps/clickhouse-operator clickhouse-operator=altinity/clickhouse-operator:0.13.5 -n kube-system
$ kubectl set image deployment.v1.apps/clickhouse-operator clickhouse-operator=radondb/chronus-operator:2.1.1 -n kube-system
deployment.apps/clickhouse-operator image updated
$ kubectl set image deployment.v1.apps/clickhouse-operator metrics-exporter=altinity/clickhouse-operator:0.13.5 -n kube-system
$ kubectl set image deployment.v1.apps/clickhouse-operator metrics-exporter=radondb/chronus-metrics-operator:2.1.1 -n kube-system
deployment.apps/clickhouse-operator image updated
```
@@ -89,9 +89,9 @@
Service Account: clickhouse-operator
Containers:
clickhouse-operator:
Image: altinity/clickhouse-operator:0.13.5
Image: radondb/chronus-operator:2.1.1
metrics-exporter:
Image: altinity/metrics-exporter:0.13.5
Image: radondb/chronus-metrics-operator:2.1.1
<...>
```

6 changes: 3 additions & 3 deletions docs/prometheus_setup.md
@@ -72,7 +72,7 @@ We can either run [create-prometheus.sh][create-prometheus.sh] or setup the whol

- Set up `prometheus` in a dedicated namespace. `prometheus-operator` will be used to create the `prometheus` instance
```bash
kubectl apply --namespace=prometheus -f <(wget -qO- https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/prometheus/prometheus-template.yaml | PROMETHEUS_NAMESPACE=prometheus envsubst)
kubectl apply --namespace=prometheus -f <(wget -qO- https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/prometheus/prometheus-template.yaml | PROMETHEUS_NAMESPACE=prometheus envsubst)
```

- Set up the `alertmanager` Slack webhook; see https://api.slack.com/incoming-webhooks for how to enable external webhooks in the Slack API
@@ -82,14 +82,14 @@
export PROMETHEUS_NAMESPACE=prometheus
export ALERT_MANAGER_EXTERNAL_URL=https://your.external-domain.for-alertmanger/
kubectl apply --namespace=prometheus -f <( \
wget -qO- https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/prometheus/prometheus-alertmanager-template.yaml | \
wget -qO- https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/prometheus/prometheus-alertmanager-template.yaml | \
envsubst \
)
```

- Set up `clickhouse-operator` alert rules for `prometheus`
```bash
kubectl apply --namespace=prometheus -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/prometheus/prometheus-alert-rules.yaml
kubectl apply --namespace=prometheus -f https://raw.githubusercontent.com/radondb/radondb-clickhouse-operator/master/deploy/prometheus/prometheus-alert-rules.yaml
```

At this point Prometheus and AlertManager are up and running. Also, all kubernetes pods which carry the `meta.annotations` `prometheus.io/scrape: 'true'`, `prometheus.io/port: NNNN`, `prometheus.io/path: '/metrics'`, `prometheus.io/scheme: http` will be discovered via the prometheus `kubernetes_sd_config` job `kubernetes-pods`.
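
A hedged sketch of those annotations on a pod (the port value is a placeholder):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8888'
    prometheus.io/path: '/metrics'
    prometheus.io/scheme: 'http'
```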
16 changes: 0 additions & 16 deletions docs/pull_request_template.md

This file was deleted.

