Hi !
I have the following question/issue: Go panics with panic: lost connection to pod even after all port-forwards were closed.
github.com/anthhub/forwarder v1.1.0
Here is the simplified code I use. The main function is actually executed in a goroutine (it is part of some e2e tests running in k8s):
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/anthhub/forwarder"
)

func PortForward(options []*forwarder.Option, config string) (*forwarder.Result, error) {
	ret, err := forwarder.WithForwarders(context.Background(), options, config)
	if err != nil {
		fmt.Printf("Error occurred while configuring port-forwarding with config (%s) and options %v\n", config, options)
		if ret != nil { // ret may be nil on error
			ret.Close()
		}
		return nil, err
	}
	ports, err := ret.Ready()
	if err != nil {
		fmt.Printf("Error occurred while waiting for port-forwarding to be ready with config (%s) and options %v\n", config, options)
		ret.Close()
		return nil, err
	}
	fmt.Printf("Port-forwarding established with ports: %+v\n", ports)
	fmt.Println("Make sure to close forwarding via Close()")
	return ret, nil
}

func main() {
	const httpsPort int = 443
	const certRenewalTimeoutSeconds = 120
	localPortForwardPort := 26842
	kubeConfig := "~/.kube/config"

	type test struct {
		Namespace  string
		Source     string
		RemotePort int
	}
	tests := []test{
		{
			Namespace:  "default",
			Source:     "svc/my-service-1",
			RemotePort: 8000,
		},
		{
			Namespace:  "default",
			Source:     "svc/my-service-2",
			RemotePort: 8443,
		},
	}

	for _, test := range tests {
		portForwardOptions := []*forwarder.Option{
			{
				LocalPort:  localPortForwardPort,
				RemotePort: test.RemotePort,
				Source:     test.Source,
				Namespace:  test.Namespace,
			},
		}
		portForward, err := PortForward(portForwardOptions, kubeConfig)
		if err != nil {
			fmt.Println(err)
			continue // nothing to close for this iteration
		}
		portForwardOpenedAt := time.Now().Unix()
		fmt.Printf("Portforward opened to %d at %d\n", localPortForwardPort, portForwardOpenedAt)
		closePF := func(rst *forwarder.Result, port int, timestamp int64) {
			fmt.Printf("Closing portforward %v on port %d opened at %d\n", rst, port, timestamp)
			rst.Close()
		}
		// note: these defers run when main returns, not at the end of each iteration
		defer closePF(portForward, localPortForwardPort, portForwardOpenedAt)
		// do some tests here, this code returns some value
		localPortForwardPort++
		fmt.Printf("Port incremented. Next port-forwarding will be opened at %d. They all will be closed on return from this func\n", localPortForwardPort)
	}
}
It all works great; in the logs, after this particular test concludes, I see
2024/06/19 10:30:42 Closing portforward &{0x1041d7c30 0x1041d7ab0 0x1041d79c0} on port 26843 opened at 1718782197
2024/06/19 10:30:42 Closing portforward &{0x1041d7c30 0x1041d7ab0 0x1041d79c0} on port 26842 opened at 1718782153
However, down the road, long after all ports were supposedly closed, Go panics in the next tests with
panic: lost connection to pod
goroutine 3903 [running]:
github.com/anthhub/forwarder.portForwardAPod.func1()
/vendor/github.com/anthhub/forwarder/forwarder.go:164 +0x2d
created by github.com/anthhub/forwarder.portForwardAPod in goroutine 3902
/vendor/github.com/anthhub/forwarder/forwarder.go:162 +0x419
In my initial implementation I called Close() on each iteration manually (not through defer), but on the second iteration the port from the first one was somehow still in use, so I resorted to using a separate port for each iteration. Now I have tried with defer, and the ports are closed on return (as they should be), but the issue still persists.
Any thoughts will be appreciated.
I've added some logs to my vendored copy of the package and observed that only the first call to Close() is handled.
It looks like the module-level var once sync.Once prevents subsequent calls, even though new port-forward instances are created (and they all run successfully).
As a workaround I've moved it to the top of the forwarders func; after that, all my port-forwards were successfully closed on defer.
This is probably just a hack, but it works for me for now.