support a manual failover being initiated by a user or application #39
Comments
---
GajaHebbar commented:
Is there any way to test the failover? Manual failover by killing the primary is not currently supported.

ERRO[2018-07-04T13:33:56Z] dial tcp: lookup pr-primary on 10.96.0.10:53: server misbehaving
ERRO[2018-07-04T13:33:56Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-07-04T13:33:56Z] Executing pre-hook: /hooks/watch-pre-hook
INFO[2018-07-04T13:33:56Z] Processing Failover: Strategy - latest
INFO[2018-07-04T13:33:56Z] Deleting existing primary...
INFO[2018-07-04T13:33:57Z] Deleted old primary
INFO[2018-07-04T13:33:57Z] Choosing failover replica...
INFO[2018-07-04T13:33:57Z] Chose failover target (pr-replica)
INFO[2018-07-04T13:33:57Z] Promoting failover replica...
DEBU[2018-07-04T13:33:57Z] executing cmd: [/opt/cpm/bin/promote.sh] on pod pr-replica in namespace kube-system container: postgres
INFO[2018-07-04T13:33:57Z] Relabeling failover replica...
DEBU[2018-07-04T13:33:57Z] label: name
DEBU[2018-07-04T13:33:57Z] label: replicatype
INFO[2018-07-04T13:33:57Z] Executing post-hook: /hooks/watch-post-hook

I see logs indicating the failover is successful (but pr-replica still does not have write permission after the primary got killed).
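A primary failure can be simulated in two different ways, and they fail differently (a sketch using the resource names from this thread; deleting the service only breaks DNS resolution of the pr-primary name, which is exactly the lookup error in the logs above, while deleting the pod kills the PostgreSQL instance itself):

kubectl delete service/pr-primary   # 'pr-primary' stops resolving; watch sees the DNS error shown above
kubectl delete pod/pr-primary       # the primary PostgreSQL instance itself goes away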
---
Dave Cramer commented:
I'd be curious what the logs were on the replica.
---
GajaHebbar commented:
Hi Dave,
Please find the details.

PODS
======
NAME             READY   STATUS    RESTARTS   AGE   LABELS
pod/pr-primary   1/1     Running   0          1m    name=pr-primary
pod/pr-replica   1/1     Running   0          1m    name=pr-replica,replicatype=trigger
pod/watch        1/1     Running   0          5s    name=crunchy-watch

SERVICE
=======
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   LABELS
service/pr-primary   ClusterIP   10.101.156.135   <none>        5432/TCP   1m    name=pr-primary
service/pr-replica   ClusterIP   10.102.91.247    <none>        5432/TCP   1m    name=pr-replica

Execute command:

kubectl delete service/pr-primary

PODS
======
NAME             READY   STATUS    RESTARTS   AGE   LABELS
pod/pr-replica   1/1     Running   0          16m   name=pr-primary,replicatype=trigger
pod/watch        1/1     Running   0          13m   name=crunchy-watch

SERVICE
=======
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
service/pr-replica   ClusterIP   10.102.91.247   <none>        5432/TCP   16m   name=pr-replica
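After watch's relabel step, the change is visible from kubectl itself; the listings above suggest these checks to confirm which pod watch now treats as the primary (a sketch, assuming the name/replicatype label scheme shown above):

kubectl get pods --show-labels        # full label set per pod
kubectl get pods -l name=pr-primary   # whichever pod currently carries the primary label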
---
GajaHebbar commented:
I hit the send button before attaching the logs. Please find the logs:

kubectl logs pod/pr-primary
Mon Jul 9 06:24:11 UTC 2018
Mon Jul 9 06:24:11 UTC 2018 INFO: Setting PGROOT to /usr/pgsql-10.
Mon Jul 9 06:24:11 UTC 2018 INFO: Cleaning up the old postmaster.pid file..
Mon Jul 9 06:24:11 UTC 2018 INFO: User ID is set to uid=26(postgres) gid=26(postgres) groups=26(postgres).
Mon Jul 9 06:24:11 UTC 2018 INFO: Working on primary..
Mon Jul 9 06:24:11 UTC 2018 INFO: Initializing the primary database..
Mon Jul 9 06:24:11 UTC 2018 INFO: PGDATA is empty. ID is uid=26(postgres) gid=26(postgres) groups=26(postgres). Creating the PGDATA directory..
Mon Jul 9 06:24:11 UTC 2018 INFO: Checking for restore..
total 0
Mon Jul 9 06:24:11 UTC 2018 INFO: No backup file found.
Mon Jul 9 06:24:11 UTC 2018 INFO: Starting initdb..
initdb -D /pgdata/pr-primary > /tmp/initdb.log &> /tmp/initdb.err
Mon Jul 9 06:24:12 UTC 2018 INFO: Overlaying PostgreSQL's default configuration with customized settings..
Mon Jul 9 06:24:12 UTC 2018 INFO: Checking for PITR WAL files to recover with..
Mon Jul 9 06:24:12 UTC 2018 INFO: Temporarily starting database to run setup.sql..
waiting for server to start....2018-07-09 02:24:12 EDT [62]: [1-1] user=,db=,app=,client=LOG: pgaudit extension initialized
2018-07-09 02:24:12 EDT [62]: [2-1] user=,db=,app=,client=LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-07-09 02:24:12 EDT [62]: [3-1] user=,db=,app=,client=LOG: listening on IPv6 address "::", port 5432
2018-07-09 02:24:12 EDT [62]: [4-1] user=,db=,app=,client=LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-07-09 02:24:12 EDT [62]: [5-1] user=,db=,app=,client=LOG: redirecting log output to logging collector process
2018-07-09 02:24:12 EDT [62]: [6-1] user=,db=,app=,client=HINT: Future log output will appear in directory "pg_log".
done
server started
Mon Jul 9 06:24:12 UTC 2018 INFO: Waiting for PostgreSQL to start..
pr-primary:5432 - accepting connections
Mon Jul 9 06:24:12 UTC 2018 INFO: The database is ready for setup.sql.
SET
CREATE EXTENSION
CREATE EXTENSION
ALTER ROLE
CREATE ROLE
CREATE ROLE
CREATE TABLE
GRANT
CREATE DATABASE
GRANT
You are now connected to database "userdb" as user "postgres".
CREATE EXTENSION
CREATE EXTENSION
You are now connected to database "userdb" as user "testuser".
CREATE SCHEMA
CREATE TABLE
INSERT 0 1
INSERT 0 1
GRANT
Mon Jul 9 06:24:12 UTC 2018 INFO: Stopping database after primary initialization..
waiting for server to shut down.... done
server stopped
Mon Jul 9 06:24:12 UTC 2018 INFO: Starting PostgreSQL..
Mon Jul 9 06:24:12 UTC 2018
2018-07-09 02:24:12 EDT [96]: [1-1] user=,db=,app=,client=LOG: pgaudit extension initialized
2018-07-09 02:24:12 EDT [96]: [2-1] user=,db=,app=,client=LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-07-09 02:24:12 EDT [96]: [3-1] user=,db=,app=,client=LOG: listening on IPv6 address "::", port 5432
2018-07-09 02:24:12 EDT [96]: [4-1] user=,db=,app=,client=LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-07-09 02:24:12 EDT [96]: [5-1] user=,db=,app=,client=LOG: redirecting log output to logging collector process
2018-07-09 02:24:12 EDT [96]: [6-1] user=,db=,app=,client=HINT: Future log output will appear in directory "pg_log".

kubectl logs pod/pr-replica
Mon Jul 9 06:24:11 UTC 2018
Mon Jul 9 06:24:11 UTC 2018 INFO: Setting PGROOT to /usr/pgsql-10.
Mon Jul 9 06:24:11 UTC 2018 INFO: Cleaning up the old postmaster.pid file..
Mon Jul 9 06:24:11 UTC 2018 INFO: User ID is set to uid=26(postgres) gid=26(postgres) groups=26(postgres).
Mon Jul 9 06:24:11 UTC 2018 INFO: Working on replica..
Mon Jul 9 06:24:11 UTC 2018 INFO: Initializing the replica.
Mon Jul 9 06:24:11 UTC 2018 INFO: Waiting to allow the primary database time to successfully start before performing the initial backup..
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - no response
pr-primary:5432 - accepting connections
Mon Jul 9 06:24:59 UTC 2018 INFO: The database is ready.
now
-------------------------------
2018-07-09 02:24:59.675783-04
(1 row)
Mon Jul 9 06:24:59 UTC 2018 INFO: The database is ready.
Mon Jul 9 06:25:00 UTC 2018 INFO: SYNC_REPLICA environment variable is not set.
Mon Jul 9 06:25:00 UTC 2018 INFO: pr-replica is the APPLICATION_NAME being used.
Mon Jul 9 06:25:00 UTC 2018 INFO: Starting PostgreSQL..
Mon Jul 9 06:25:00 UTC 2018
2018-07-09 02:25:00 EDT [84]: [1-1] user=,db=,app=,client=LOG: pgaudit extension initialized
2018-07-09 02:25:00 EDT [84]: [2-1] user=,db=,app=,client=LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-07-09 02:25:00 EDT [84]: [3-1] user=,db=,app=,client=LOG: listening on IPv6 address "::", port 5432
2018-07-09 02:25:00 EDT [84]: [4-1] user=,db=,app=,client=LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-07-09 02:25:00 EDT [84]: [5-1] user=,db=,app=,client=LOG: redirecting log output to logging collector process
2018-07-09 02:25:00 EDT [84]: [6-1] user=,db=,app=,client=HINT: Future log output will appear in directory "pg_log".

watch logs:
INFO[2018-07-09T06:39:22Z] Health Checking: 'pr-primary'
INFO[2018-07-09T06:39:22Z] Successfully reached 'pr-primary'
INFO[2018-07-09T06:39:52Z] Health Checking: 'pr-primary'
INFO[2018-07-09T06:39:52Z] Successfully reached 'pr-primary'
INFO[2018-07-09T06:40:22Z] Health Checking: 'pr-primary'
ERRO[2018-07-09T06:40:22Z] dial tcp: lookup pr-primary on 10.96.0.10:53: server misbehaving
ERRO[2018-07-09T06:40:22Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-07-09T06:40:22Z] Executing pre-hook: /hooks/watch-pre-hook
INFO[2018-07-09T06:40:22Z] Processing Failover: Strategy - latest
INFO[2018-07-09T06:40:22Z] Deleting existing primary...
INFO[2018-07-09T06:40:22Z] Deleted old primary
INFO[2018-07-09T06:40:22Z] Choosing failover replica...
INFO[2018-07-09T06:40:22Z] Chose failover target (pr-replica)
INFO[2018-07-09T06:40:22Z] Promoting failover replica...
DEBU[2018-07-09T06:40:22Z] executing cmd: [/opt/cpm/bin/promote.sh] on pod pr-replica in namespace kube-system container: postgres
INFO[2018-07-09T06:40:22Z] Relabeling failover replica...
DEBU[2018-07-09T06:40:22Z] label: name
DEBU[2018-07-09T06:40:22Z] label: replicatype
INFO[2018-07-09T06:40:22Z] Executing post-hook: /hooks/watch-post-hook
INFO[2018-07-09T06:40:52Z] Health Checking: 'pr-primary'
ERRO[2018-07-09T06:40:52Z] dial tcp: lookup pr-primary on 10.96.0.10:53: server misbehaving
ERRO[2018-07-09T06:40:52Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-07-09T06:41:22Z] Health Checking: 'pr-primary'
ERRO[2018-07-09T06:41:22Z] dial tcp: lookup pr-primary on 10.96.0.10:53: server misbehaving
ERRO[2018-07-09T06:41:22Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-07-09T06:41:52Z] Health Checking: 'pr-primary'
---
Dave Cramer commented:
So it seems that promote.sh is executed. I'd be curious to see why it can't write to the new primary now.
You should be able to log in to that instance and poke around to make sure it is now a primary instance (and writable).
Also, there's no reason you can't kill the primary to cause a failover (to address your very first email).
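One way to do that poking around is to check recovery status from inside the promoted pod (a sketch; it assumes psql is on the container's PATH and that local connections as the postgres user are permitted, and the throwaway table name is invented for the test):

kubectl exec -n kube-system pr-replica -c postgres -- psql -U postgres -c 'SELECT pg_is_in_recovery();'
# 'f' means the instance has left recovery and should accept writes; confirm with a throwaway write
kubectl exec -n kube-system pr-replica -c postgres -- psql -U postgres -c 'CREATE TABLE smoke_test (id int); DROP TABLE smoke_test;'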
---
GajaHebbar commented:
Thanks Dave,
I would also like to share some more info on this.
If I manually execute promote.sh on all my replicas (assuming I have two replicas, replica-1 and replica-2, then replica-1 becomes the master), I can then use that container for writes. But using watch I was not able to do that.
One thing here I am not sure about: watch does not execute promote.sh on all replicas. Is that the problem?
Regards,
Gaja
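For comparison, the promotion that watch performs can be reproduced by hand on a single replica; this mirrors the exact cmd, pod, namespace, and container shown in the DEBU log lines above. Note that promoting every replica would produce multiple writable primaries (split brain), so watch promoting only one chosen target is by design:

kubectl exec -n kube-system pr-replica -c postgres -- /opt/cpm/bin/promote.sh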
---
Dave Cramer commented:
@GajaHebbar: "watch does not execute promote.sh on all replicas. Is that the problem?"
Clearly that would be the problem, if it is the case.
---
GajaHebbar commented:
Hi Dave,
I was able to solve the problem: expenv was not present in my environment, and after adding it things got resolved.
I wanted to check one more scenario.
Currently pr-primary, pr-replica, and pr-replica-2 are running, and the watcher is running along with pgpool. I am able to do insert and read operations on a table through pgpool.
When the pod "pr-primary" gets removed, the watcher notices that the primary is dead, activates the replica as primary, and changes its label to primary; all operations keep working through pgpool with only 5-10 seconds of service interruption. These things are working fine.
Now only pr-replica (label name changed to pr-primary) and pr-replica-2 (label name changed to pr-replica) are left, plus the watcher and pgpool.
I was checking what happens if pr-replica goes down, and found that unfortunately the watcher was unable to switch pr-replica-2 to pr-primary, and pgpool cannot connect.
Is this the proper behavior?
Please find the watch logs from when pr-replica (which is the master at this time, carrying the label pr-primary) is deleted:
INFO[2018-08-03T08:32:38Z] Health Checking: 'pr-primary'
ERRO[2018-08-03T08:32:48Z] dial tcp 10.102.89.145:5432: i/o timeout
ERRO[2018-08-03T08:32:48Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-08-03T08:33:18Z] Health Checking: 'pr-primary'
ERRO[2018-08-03T08:33:28Z] dial tcp 10.102.89.145:5432: i/o timeout
ERRO[2018-08-03T08:33:28Z] Could not reach 'pr-primary' (Attempt: 1)
INFO[2018-08-03T08:33:58Z] Health Checking: 'pr-primary'
ERRO[2018-08-03T08:34:08Z] dial tcp 10.102.89.145:5432: i/o timeout
Regards,
Gaja
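One thing worth checking for this second failover (a hypothesis, not something confirmed in this thread): watch appears to select its failover target by pod label, so if no surviving pod carries the trigger label after the first failover, there may be no eligible candidate left. The current labels can be inspected with:

kubectl get pods -l replicatype=trigger --show-labels   # pods still eligible as failover targets, if the trigger label drives selection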
---
Original issue description:
This feature would let crunchy-watch support a manual failover, perhaps via a REST API. Another application, or an end user using curl for instance, might want to cause a manual failover for scheduled maintenance or other reasons; they need an API whereby to invoke this function.
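The shape such an API might take is sketched below; everything here is hypothetical (crunchy-watch exposes no such endpoint as of this thread, and the host, port, and path are invented for illustration):

# hypothetical: ask the watcher to fail over to a specific target
curl -X POST http://crunchy-watch:8080/failover -d 'target=pr-replica'
# hypothetical: let the watcher pick a target via its configured strategy (e.g. latest)
curl -X POST http://crunchy-watch:8080/failover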