Local Service Chaining and Intra-App routing not working with selective deployment #357
Comments
Edited: never mind this comment, I confused myself for a moment here.
I think this needs to be an issue in Spin - there's no way (AFAIK) to tell it what it should do for unselected components. The service chaining contract doesn't let those requests suddenly become remote at the moment, so it might not be solvable as-is. For the self path, depending on how the shim resolves self, you could configure an ingress (which is unmanaged by SpinKube: https://www.spinkube.dev/docs/topics/routing/) with multiple paths (https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource), and that could potentially work; see the sketch below.
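For illustration, a minimal sketch of the multi-path ingress idea. It assumes the operator exposes each SpinApp as a Service of the same name on port 80, and that the components serve the /hello and /goodbye routes (names taken from the salutations example later in this thread, not from this comment):

```yaml
# Hypothetical ingress fanning one host out to two selectively deployed
# SpinApps. Service names, port, and paths are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: salutations-ingress
spec:
  rules:
    - http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: hello-spinapp
                port:
                  number: 80
          - path: /goodbye
            pathType: Prefix
            backend:
              service:
                name: goodbye-spinapp
                port:
                  number: 80
```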
It's a good point that Spin should probably make up its mind about the appropriate way of doing component-to-component communication (local service chaining or "self"), given that selective deployment is a thing. E.g., "if you do this, it's guaranteed to work."
@mikkelhegn, as you found, support for service chaining and self requests is limited with selective deployment. This is documented in Spin's docs, as that was the initial scope of support: https://developer.fermyon.com/spin/v3/running-apps#splitting-an-application-across-environments

One workaround is for Spin/SpinKube users to set the routes to other deployments as Spin application variables. For example, say that in the salutations app you update the hello component to call the goodbye component and append the contents to the HTTP response (effectively returning "hello goodbye"). You could do the following: add the goodbye address as an application variable in allowed hosts (and update the call in the implementation to use the application variable):
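A minimal sketch of what that could look like in the hello component's spin.toml, assuming a variable named goodbye_route (Spin supports templating application variables into allowed_outbound_hosts; the exact names here are illustrative):

```toml
# Declare an application variable holding the goodbye component's address.
[variables]
goodbye_route = { required = true }

[component.hello]
# Allow outbound requests to whatever host the variable resolves to.
allowed_outbound_hosts = ["{{ goodbye_route }}"]

# Expose the application variable to the component under the same name,
# so the implementation can read it via the variables API.
[component.hello.variables]
goodbye_route = "{{ goodbye_route }}"
```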
Add the application variable to the SpinApp:

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spinapp
spec:
  image: "ghcr.io/spinkube/spin-operator/salutations:20241105-223428-g4da3171"
  replicas: 1
  executor: containerd-shim-spin
  components: ["hello"]
  variables:
    - name: goodbye_route
      value: http://goodbye-spinapp.default.svc.cluster.local
---
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: goodbye-spinapp
spec:
  image: "ghcr.io/spinkube/spin-operator/salutations:20241105-223428-g4da3171"
  replicas: 1
  executor: containerd-shim-spin
  components: ["goodbye"]
```

Right now, the Spin Operator never looks at a spin.toml file and has no understanding of which components are trying to communicate. If we want to enable self requests with selective deployment, the operator would likely need to open the OCI artifacts and keep track of where each component has been deployed. We'd also need to implement a new way in Spin to map self requests.
Thanks for all the details @kate-goldenring, and for the workaround of providing addresses via config. I still think it would be great if there were an easy solution, one that doesn't require environment-specific config, for two Spin components to communicate even when using selective deployments. @tschneidereit and I discussed this earlier today "in person", and one idea that came up was to check for the use of local service chaining or "self" at deployment time and block selective deployment of components that depend on others. At least that would help users fail fast.
Trying to deploy a Spin application with two components using local service chaining, the component calling another component fails with the following message:

Using the intra-app method ("http://self") also does not work; it returns a 404 when trying to route to another component. It would be nice to see service chaining work across components with selective deployment.
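For context, a sketch of what the two approaches look like in spin.toml, using the salutations component names rather than config from this issue:

```toml
# Local service chaining: the hello component calls the goodbye
# component in-process via the *.spin.internal pseudo-host.
[component.hello]
allowed_outbound_hosts = ["http://goodbye.spin.internal"]

# Intra-app routing via "self": the request re-enters the app through
# its own HTTP trigger and is routed by path.
# allowed_outbound_hosts = ["http://self"]
```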