
Merge pull request #107 from vshn/change/xplane114_docs
Adjust docs for crossplane 1.14
Kidswiss authored Nov 27, 2023
2 parents a9be16c + a5f725b commit 5aaeb63
Showing 5 changed files with 122 additions and 133 deletions.
82 changes: 27 additions & 55 deletions README.md
@@ -157,71 +157,43 @@ To add a new function to PostgreSQL by VSHN:

The entrypoint to start working with the gRPC server is to run:
```
go run main.go --log-level 1 start grpc --network tcp --socket ':9547' --devmode
go run main.go --log-level 1 functions --insecure
```

This will start the GRPC server listening on a TCP port. Afterward you can configure the composition to use this connection:
This will start the gRPC server listening on a TCP port. Afterwards you can configure the composition to use this connection in the component:

```yaml
### Macos:
functions:
- container:
image: redis
imagePullPolicy: IfNotPresent
runner:
# HOSTIP=$(docker inspect kindev-control-plane | jq '.[0].NetworkSettings.Networks.kind.Gateway') # On kind MacOS/Windows
# HOSTIP=host.docker.internal # On Docker Desktop distributions
# HOSTIP=host.lima.internal # On Lima backed Docker distributions
# For Linux users: `ip -4 addr show dev docker0 | grep inet | awk -F' ' '{print $2}' | awk -F'/' '{print $1}'`
endpoint: $HOSTIP:9547 # edit in component-appcat or directly using
# `k edit compositions.apiextensions.crossplane.io vshnpostgres.vshn.appcat.vshn.io`
services:
vshn:
enabled: true
externalDatabaseConnectionsEnabled: true
emailAlerting:
enabled: true
smtpPassword: "whatever"
postgres:
grpcEndpoint: $HOSTIP:9547
proxyFunction: true
```
Each service should have these params configured.
If you prefer to set the proxy endpoint directly in the composition, it's passed via the `input.data.proxyEndpoint` field.
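
For illustration, a hedged sketch of what such a pipeline step could look like in the composition. The step and function names as well as the input kind are placeholders; only the `input.data.proxyEndpoint` field comes from the text above:

```yaml
mode: Pipeline
pipeline:
  - step: pgsql-func                    # placeholder step name
    functionRef:
      name: function-appcat             # placeholder, use the function referenced by your composition
    input:
      apiVersion: v1
      kind: ConfigMap                   # assumed input kind for this sketch
      data:
        proxyEndpoint: 172.18.0.1:9547  # your $HOSTIP and port
```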

Then install the function proxy in kind:

```bash
make install-proxy
```

It's also possible to trigger fake request to gRPC server by client (to imitate Crossplane):
It's also possible to trigger a fake request to the gRPC server via the crank renderer:
```
cd test/grpc-client
go run main.go start grpc
go run github.com/crossplane/crossplane/cmd/crank beta render xr.yaml composition.yaml functions.yaml -r
```
Crank will return a list of all the objects this specific request would have produced, including the result messages.

Please have a look at the `hack/` folder for an example.
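
If you don't want to copy the files from `hack/`, a minimal `functions.yaml` for the renderer could look roughly like this. The function name and package are placeholders, and it assumes the locally started server listens on port 9443 via crank's `Development` runtime annotations:

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: Function
metadata:
  name: function-appcat                 # placeholder, must match the functionRef in composition.yaml
  annotations:
    render.crossplane.io/runtime: Development
    render.crossplane.io/runtime-development-target: localhost:9443
spec:
  package: ghcr.io/vshn/appcat:latest   # not pulled when the Development runtime is used
```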

If you want to run the gRPC server in a local kind cluster, please use:
1. [kindev](https://github.com/vshn/kindev). In makefile replace target:
1.
```
$(crossplane_sentinel): export KUBECONFIG = $(KIND_KUBECONFIG)
$(crossplane_sentinel): kind-setup local-pv-setup
# below line loads image to kind
kind load docker-image --name kindev ghcr.io/vshn/appcat-comp-functions
helm repo add crossplane https://charts.crossplane.io/stable
helm upgrade --install crossplane --create-namespace --namespace syn-crossplane crossplane/crossplane \
--set "args[0]='--debug'" \
--set "args[1]='--enable-composition-functions'" \
--set "args[2]='--enable-environment-configs'" \
--set "xfn.enabled=true" \
--set "xfn.args={--debug}" \
--set "xfn.image.repository=ghcr.io/vshn/appcat-comp-functions" \
--set "xfn.image.tag=latest" \
--wait
@touch $@
```
2. [component-appcat](https://github.com/vshn/component-appcat) please append [file](https://github.com/vshn/appcat/blob/master/tests/golden/vshn/appcat/appcat/21_composition_vshn_postgres.yaml) with:
1.
```
compositeTypeRef:
apiVersion: vshn.appcat.vshn.io/v1
kind: XVSHNPostgreSQL
# we have to add functions declaration to postgresql
functions:
- container:
image: postgresql
runner:
endpoint: unix-abstract:crossplane/fn/default.sock
name: pgsql-func
type: Container
resources:
- base:
apiVersion: kubernetes.crossplane.io/v1alpha1
```
That's all - you can now run your claims. This documentation and the above workaround are just a temporary solution; they should disappear once we actually implement composition functions.
19 changes: 16 additions & 3 deletions docs/modules/ROOT/pages/explanations/comp-functions/debug.adoc
@@ -1,7 +1,20 @@
= Debugging

If the GRPC server is in devmode, it will save the funcIOs of each function to configMaps in the default namespace. It will always keep two configMaps per composite and function. One contains the previous and one the current state. For example for VSHNPostgreSQL this could result in up to four configMaps; two for the postgresql function and two for the miniodev function (which provides the local S3 bucket).
The function server can be started in an insecure mode.
This mode makes it listen for plain, unencrypted gRPC traffic (HTTP/2 without TLS) on port 9443.
Any valid composition function gRPC request can be redirected to this port.

These configMaps can be used to see the state of the whole FuncIO and should help with local debugging. Additionally, the GRPC server now prints a diff of those configMaps on each reconcile.
There are two tools that help with debugging locally running function servers:

The configMaps and the diffs will only rotate on `.spec` changes on the claim/composite, it uses the generation to track current and previous states. This way we can track the "logical" diff of a change, even if it needs multiple reconcile loops to be applied fully.
* Crank renderer (github.com/crossplane/crossplane/cmd/crank beta render)
* Function proxy
The crank renderer takes the given XR, composition and functions as YAML files.
It then sends the given XR to the function server, which handles it accordingly.
It's also possible to add observed objects via a folder containing their YAML definitions.
This method of debugging is handy when the raw output of the function needs to be inspected.

The function proxy, on the other hand, is a function server instance running on the cluster itself.
It simply redirects any incoming gRPC call to the configured endpoint.
That endpoint can be the locally running function server.
The local function server can run in a debug session, so stepping through the execution is supported.
35 changes: 20 additions & 15 deletions docs/modules/ROOT/pages/explanations/comp-functions/runtime.adoc
@@ -1,26 +1,31 @@
= Runtime Library

The runtime library helps to facilitate the implementation of transformation go functions.
It allows to operate on underlying function-io resources and composites. There are 2 objects accessible
from a runtime object:
The runtime library facilitates the implementation of Go functions.
It allows operating on the underlying managed resources and composites.

- `Observed` - the observed state of the XR and any existing composed resources.
- `Desired` - the desired state of the XR and any composed resources.
For more information on how function-io operates check the https://docs.crossplane.io/knowledge-base/guides/composition-functions/#functionio[documentation]
For more information on how composition functions operate, check the https://docs.crossplane.io/knowledge-base/guides/composition-functions/[documentation]
from Crossplane.

== Desired Object
== Desired Objects

The runtime desired object has methods to obtain and update desired resources from function-io.
The desired objects are tracked in the `ServiceRuntime` until all steps have finished.
The `ServiceRuntime` will then generate the gRPC response object for Crossplane. All objects that have been added to the desired objects map will be applied by Crossplane.
If an object is not added to the desired map but exists on the cluster, it will be removed.
So always add all necessary objects on each reconcile.
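
A minimal sketch of a step adding a desired object, assuming a `ServiceRuntime` setter along the lines of `SetDesiredKubeObject` and the result helpers described below (names are illustrative, imports omitted like in the other examples):

[source,golang]
----
func addConfig(ctx context.Context, svc *runtime.ServiceRuntime) *xfnproto.Result {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-instance-config",
			Namespace: "my-instance-namespace",
		},
		Data: map[string]string{"key": "value"},
	}

	// Add the object to the desired map. Objects missing from the map are
	// removed from the cluster on the next reconcile, so add everything
	// the service needs on every call.
	if err := svc.SetDesiredKubeObject(cm, "my-instance-config"); err != nil {
		return runtime.NewFatalResult(err)
	}
	return runtime.NewNormalResult("config added")
}
----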

== Observed Object
== Observed Objects

The runtime observed object has methods to obtain observed resources from function-io.
These are the objects as applied to the cluster.
They contain any externally injected information, as well as the status object.
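
A short sketch of reading an observed resource; the getter name follows the pattern of `GetObservedComposedResourceConnectionDetails` used elsewhere in these docs but is an assumption here:

[source,golang]
----
sts := &appsv1.StatefulSet{}
err := svc.GetObservedComposedResource(sts, "my-instance-statefulset")
if err != nil {
	// On the first reconcile the resource hasn't been created yet.
	return runtime.NewWarningResult("statefulset not yet observed")
}
// Observed objects carry the cluster-injected fields, e.g. the status.
if sts.Status.ReadyReplicas == 0 {
	return runtime.NewWarningResult("statefulset not ready yet")
}
----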

== Result Object

Any transformation go function expects a `runtime.Result` object. This object type wraps the Crossplane
own Result type. The runtime library has simple functions that allows creation of `runtime.Result` objects
in various states - `fatal`, `warning` or `normal`. To understand the difference between these states
consult crossplane https://docs.crossplane.io/knowledge-base/guides/composition-functions/#functionio[documentation].
Any service step is expected to return a `*xfnproto.Result` object.
This object type is Crossplane's own Result type.
The runtime library has simple functions that allow the creation of `*xfnproto.Result` objects
in various states - `fatal`, `warning` or `normal`.
Please note that a fatal result will mark the whole composition as failed.
It should only be used if the error is severe enough that the composition cannot work properly.
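
A sketch of how the three result kinds could be used in a step; the constructor names mirror the states listed above and are assumptions, not a confirmed API:

[source,golang]
----
if err := provisionDatabase(); err != nil {
	// Fatal: marks the whole composition as failed. Use it only when the
	// composition cannot work properly without this step.
	return runtime.NewFatalResult(fmt.Errorf("cannot provision database: %w", err))
}
if err := configureAlerting(); err != nil {
	// Warning: surfaces the problem but lets the composition continue.
	return runtime.NewWarningResult("alerting could not be configured")
}
// Normal: everything worked as expected.
return runtime.NewNormalResult("step finished")
----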

For further information, please consult the Crossplane https://docs.crossplane.io/knowledge-base/guides/composition-functions/[documentation].
114 changes: 56 additions & 58 deletions docs/modules/ROOT/pages/explanations/connectiondetails.adoc
@@ -9,15 +9,11 @@ The following page assumes you've read and understand.
== Composite Resource Connection Details

These are the most straightforward kind of connection details to handle.
In terms of go code they are specified by `xfnv1alpha1.ExplicitConnectionDetail`.
As the name suggests, they are explicit and take a name and a value.
They just take a name and a value, which will then be exposed on the connection details of the claim.

[source,golang]
----
iof.Desired.PutCompositeConnectionDetail(ctx, xfnv1alpha1.ExplicitConnectionDetail{
Name: "ENDPOINT",
Value: bucket.Status.Endpoint,
})
svc.SetConnectionDetail(PostgresqlUrl, []byte(val))
----

== Managed Resource Connection Details
@@ -69,24 +65,21 @@ To actually expose the secrets we need to use `xfnv1alpha1.DerivedConnectionDetail`.
These are usually indirect connection details, but they can also take the value directly.

It's possible to reference connection details that are exposed via the managed resource, or any field path within the managed resources, like status fields.
With composition functions the `ConnectionDetailTypeFromConnectionSecretKey` is probably most commonly used, as field values can easily be accessed and set as an `xfnv1alpha1.ExplicitConnectionDetail` in the composite.
Crossplane composition functions in version 1.14 automatically add the connection details to the observed objects.
This makes it very easy to get and manipulate them.

[source,golang]
----
cd := []xfnv1alpha1.DerivedConnectionDetail{
{
Name: pointer.String(secretKeyName),
FromConnectionSecretKey: pointer.String(secretKeyName),
Type: xfnv1alpha1.ConnectionDetailTypeFromConnectionSecretKey,
},
{
Name: pointer.String(accessKeyName),
FromConnectionSecretKey: pointer.String(accessKeyName),
Type: xfnv1alpha1.ConnectionDetailTypeFromConnectionSecretKey,
},
}
// Get all connection details of a resource as a map.
cd, _ := svc.GetObservedComposedResourceConnectionDetails(comp.Name + "-release")
return iof.Desired.PutWithResourceName(ctx, user, "minio-user", runtime.AddDerivedConnectionDetails(cd))
mySecret := cd["AWS_ACCESS_KEY_ID"]
// Automatically add all connection details of a resource to the composite.
err = svc.AddObservedConnectionDetails(comp.Name + "-release")
if err != nil {
return err
}
----

Lastly, the `User` xrd needs to be made aware of the keys it should expose in the claim connection details.
Expand Down Expand Up @@ -157,71 +150,76 @@ However, provider-helm's connection detail management isn't perfect.
It cannot handle any transformations on the secrets.
So if anything needs to be concatenated, split, or changed in any other way, then this might unfortunately not be the way to do it.

These connection details then still need to be added to the composite as derived connectiondetails.
These connection details then still need to be added to the composite.

[source,golang]
----
cd := []xfnv1alpha1.DerivedConnectionDetail{
{
Name: pointer.String(connectionPasswordKey),
FromConnectionSecretKey: pointer.String(connectionPasswordKey),
Type: xfnv1alpha1.ConnectionDetailTypeFromConnectionSecretKey,
},
{
Name: pointer.String(connectionPortKey),
FromConnectionSecretKey: pointer.String(connectionPortKey),
Type: xfnv1alpha1.ConnectionDetailTypeFromConnectionSecretKey,
},
err = svc.AddObservedConnectionDetails(comp.Name + "-release")
if err != nil {
return err
}
return iof.Desired.PutWithResourceName(ctx, release, "minio-release", runtime.AddDerivedConnectionDetails(cd)
----

== Unmanaged Connection Details

There are cases where the connection details are in objects that aren't managed by the composition function or any of the providers, for example if the service is provisioned via an operator.
The same applies if the given methods to acquire the connection details aren't powerful enough, for example if some Helm release connection details need transformations.

To still get access to any information needed, an observer object can be leveraged.
This uses provider-kubernetes to deploy an `Object` with the `Observe` policy.
These objects will only be read and never modified, making it perfect for this use-case.

However this involves two steps, first to deploy the observer and then in the next reconcile to read it.
As of version v0.8.0 of provider-kubernetes, it supports exposing connection details directly from the `Object`.
It works similarly to how provider-helm exposes connection details.
Also like with provider-helm, it's possible to expose any field of any arbitrary k8s object that the provider is allowed to read.

. Deploy observer
. Deploy an operator's CR
[source,golang]
----
secret := &corev1.Secret{
secret := &opv1.MyThing{
ObjectMeta: metav1.ObjectMeta{
Name: "mysecret",
Name: "fancything",
Namespace: "myns",
},
Spec: {
ForProvider: // Omitted
// Let's assume that the CR provisions some service and writes a service and secret that we need for connection details.
ConnectionDetails: []v1beta1.ConnectionDetail{
{
ObjectReference: corev1.ObjectReference{
APIVersion: "v1",
Kind: "Secret",
Name: "mysecret",
Namespace: "myns",
FieldPath: "data.my-password", <2>
},
ToConnectionSecretKey: "connectionPasswordKey",
},
{
ObjectReference: corev1.ObjectReference{
APIVersion: "v1",
Kind: "Service",
Name: "myservice",
Namespace: "myns",
FieldPath: "spec.ports[0].port",
},
ToConnectionSecretKey: "connectionPortKey"
},
},
}
}
return iof.Desired.PutIntoObserveOnlyObject(ctx, secret, comp.Name+"-secret-observer")
return svc.SetDesiredKubeObserveObject(secret, comp.Name+"-my-cr")
----

. Read observer
The connection details can then be read as usual.

. Read the connection details
[source,golang]
----
secret := &corev1.Secret{}
err = iof.Observed.Get(ctx, secret, comp.Name+"-secret-observer")
if err != nil {
if err == runtime.ErrNotFound { <1>
return runtime.NewNormal()
}
return runtime.NewFatalErr(ctx, "cannot get observed deletion job", err)
}
secret := &xkube.Object{}
cd, _ := svc.GetObservedComposedResourceConnectionDetails(comp.Name + "-my-cr")
----
<1> We need to ignore not found errors here, as the observer will not exist during the first reconcile

Also, for secrets, the values here will not be base64 encoded, so they can be used directly.

[source,golang]
----
iof.Desired.PutCompositeConnectionDetail(ctx, v1alpha1.ExplicitConnectionDetail{
Name: "MY_PASSWORD",
Value: string(secret.Data["my_password"])
})
svc.SetConnectionDetail("MY_PASSWORD", cd["connectionPasswordKey"])
----
5 changes: 3 additions & 2 deletions docs/modules/ROOT/pages/explanations/dev-notes.adoc
@@ -40,8 +40,8 @@ When doing it this way the secret values are available in plaintext and NOT in b

== How to (not) delete resources in the composition function

For crossplane, if a given resource was in the desired array during the previous reconcile, but is missing in the current reconcile, then it will be removed.
If, for any reason, the resource name in the functionIO changes, even though it's still the same object, crossplane will delete and re-create it.
For crossplane, if a given resource was in the desired map during the previous reconcile, but is missing in the current reconcile, then it will be removed.
If, for any reason, the resource name in the map changes, even though it's still the same object, crossplane will delete and re-create it.
That's because for crossplane it's a new object in that case, as it uses the resource name to identify the resources.
The resource name of important resources (releases, deployments, pvcs, etc.) should never change, or expect unhappy users.
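
A minimal sketch of the naming rule; `comp` stands for the composite and the helper calls are standard object accessors:

[source,golang]
----
// Good: the resource name is derived only from the composite name,
// so it stays identical on every reconcile.
releaseName := comp.GetName() + "-release"

// Bad: anything generation- or time-dependent changes the resource name,
// which makes Crossplane delete and re-create the underlying object.
// badName := fmt.Sprintf("%s-release-%d", comp.GetName(), comp.GetGeneration())
----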

@@ -55,6 +55,7 @@ They should be used mainly to create and observe objects related to the instance

If there's a need for more complex one time operations, please consider putting them into a separate job that is deployed via the composition function.
Example for such a job is the restore of a backup.
Try to make those jobs idempotent if possible.

If there's a need for more complex operations on a schedule, please consider putting them into a cronjob that is deployed via the composition function.
Examples for such cronjobs are maintenance and backups.
