Relayer retry API: add the # of messages prioritized in the queue to the http response #4059

Closed · Tracked by #3318
tkporter opened this issue Jun 26, 2024 · 3 comments · Fixed by #5043 · May be fixed by #5132
Labels: agent, good first issue

Comments

@tkporter (Collaborator)

Problem

Paul and I recently tried retrying messages for a particular domain shortly after the relayer started up. Because it takes some time for messages to actually land in the prepare queue, no messages were prioritized when we retried, and we had to dig through the logs to realize the cause was the slow startup.

Solution

Adding the number of queued messages that were prioritized to the HTTP response would make it obvious when this is the case.
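
For illustration only, a minimal sketch of what such a response body could carry (the field name here is an assumption; the shape that was actually merged is shown in the PR referenced below):

```rust
// Hypothetical sketch only: one possible shape for the retry endpoint's
// response body, reporting how many queued messages were prioritized.
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct RetryResponseSketch {
    /// How many queued operations matched the retry request and were prioritized.
    pub prioritized: usize,
}
```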


@kamiyaa (Collaborator) commented Dec 12, 2024

How is the retry being done? Via the CLI, or something else?

@daniel-savu (Contributor)

Chatted with @kamiyaa; leaving some notes here for open-sourceness' sake:

The changes can be tested via the server's existing unit test setup, but they can also be tried manually against the relayer running in the run-locally e2e tests for a more "prod-like" setup.

@kamiyaa (Collaborator) commented Dec 15, 2024

```rust
pub async fn process_retry_requests(&mut self) {
```

```rust
let mut batch = prepare_queue.pop_many(ops_to_prepare).await;
```

It looks like there is an internal queue mechanism for processing these retry messages (along with, potentially, other messages). With the current design, it's difficult to return the results of a message retry, because we would need some way to wait for the operation to be processed and to get its results back.

I'm thinking we just remove the queue mechanism (for message retry) and do the message retry logic directly in the HTTP handler.
What do you think? @tkporter @daniel-savu
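
For reference, a rough sketch (the type and function names here are hypothetical, not the existing relayer code) of how the handler could keep the queue and still learn the outcome: each retry request carries a reply channel that the queue-processing loop answers once it has matched pending operations. This is roughly the direction the merged fix below takes with an mpsc channel.

```rust
use tokio::sync::{mpsc, oneshot};

/// Hypothetical retry request carrying a reply channel back to the HTTP handler.
pub struct RetryRequest {
    /// Pattern describing which queued operations to prioritize.
    pub pattern: String,
    /// Sender the queue-processing loop uses to report how many operations matched.
    pub reply: oneshot::Sender<usize>,
}

/// HTTP-handler side: submit the retry request and wait for the match count.
pub async fn handle_retry(
    tx: mpsc::Sender<RetryRequest>,
    pattern: String,
) -> Result<usize, &'static str> {
    let (reply_tx, reply_rx) = oneshot::channel();
    tx.send(RetryRequest { pattern, reply: reply_tx })
        .await
        .map_err(|_| "retry queue task has shut down")?;
    // The queue-processing loop evaluates the pattern against its pending
    // operations and sends back the number that matched.
    reply_rx.await.map_err(|_| "retry queue task dropped the reply")
}
```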

@tkporter tkporter moved this to In Review in Hyperlane Tasks Dec 20, 2024
github-merge-queue bot pushed a commit that referenced this issue Jan 6, 2025 (#5043)

### Description

Add an mpsc channel between the relayer and the message retry handler, so the message retry handler knows the retry results from the relayer.

The `POST /message_retry` endpoint now returns a JSON body with details on the retry request:

```rust
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct MessageRetryResponse {
    /// ID of the retry request
    pub uuid: String,
    /// how many pending operations were processed
    pub processed: usize,
    /// how many of the pending operations matched the retry request pattern
    pub matched: u64,
}
```
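
As a rough illustration (the uuid and counts below are invented values), serializing this struct with `serde_json` shows the kind of body a caller of `POST /message_retry` would receive:

```rust
// Illustrative only: the uuid and counts are made-up example values.
fn example_body() -> serde_json::Result<String> {
    let response = MessageRetryResponse {
        uuid: "00000000-0000-0000-0000-000000000000".to_string(),
        processed: 12,
        matched: 3,
    };
    // Yields a JSON object with "uuid", "processed", and "matched" fields.
    serde_json::to_string(&response)
}
```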


### Related issues

Fixes #4059


### Testing

Updated unit tests to return a response so the handler is not blocked
@github-project-automation github-project-automation bot moved this from In Review to Done in Hyperlane Tasks Jan 6, 2025
tiendn pushed commits to tiendn/hyperlane-monorepo that referenced this issue Jan 11, 2025 (hyperlane-xyz#5043)