
Remember vanished pubkeys #21

Merged 5 commits into main on Oct 29, 2024

Conversation

dcadenas (Contributor):

Maps to #20.

NIP-62 specifies:

> Relays MUST ensure the deleted events cannot be re-broadcasted into the relay.

To address this, this PR introduces an in-process cache, initialized from a Redis ordered set, that tracks the last million deleted pubkeys. Once a pubkey is removed from our relay, new events from that pubkey are rejected for as long as it remains in the cache. The Redis ordered set ensures the cache persists across restarts. We assume one million entries are sufficient, and that it is highly unlikely for a deleted pubkey to reappear after one million further deletions.
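
For reference, the persistence side of that cache could look roughly like this. A minimal sketch assuming the deno.land/x/redis client; the key name, constants, and helper names are illustrative, not necessarily what this PR uses:

```ts
import { connect } from "https://deno.land/x/redis/mod.ts";

const CACHE_MAX_SIZE = 1_000_000; // the last million deleted pubkeys
const KEY = "vanished_pubkeys";   // illustrative key name

const redis = await connect({ hostname: "127.0.0.1", port: 6379 });

// Record a deleted pubkey; the timestamp score keeps the sorted set in
// deletion order, so the oldest entries can be trimmed first.
async function rememberVanished(pubkey: string): Promise<void> {
  await redis.zadd(KEY, Date.now(), pubkey);
  const size = await redis.zcard(KEY);
  if (size > CACHE_MAX_SIZE) {
    // Drop the oldest entries beyond the cap.
    await redis.zremrangebyrank(KEY, 0, size - CACHE_MAX_SIZE - 1);
  }
}

// On startup, rebuild the in-process cache from the persisted set.
async function loadVanished(): Promise<string[]> {
  return await redis.zrange(KEY, 0, -1);
}
```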

Additionally, NIP-62 also states:

> Relays SHOULD delete all NIP-59 Gift Wraps that p-tagged the .pubkey if their service URL is tagged in the event, deleting all DMs to the pubkey.

This PR ensures the removal of any gift wraps sent to the deleted pubkey.
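
For context, gift wraps are kind 1059 events per NIP-59, so the events to delete can be matched with a filter along these lines (a sketch only; how the deletion is executed is relay-specific):

```ts
// Builds a filter matching NIP-59 gift wraps (kind 1059) p-tagged with the
// vanished pubkey.
function giftWrapFilter(vanishedPubkey: string) {
  return { kinds: [1059], "#p": [vanishedPubkey] };
}
```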

dcadenas requested a review from mplorentz on October 22, 2024 at 11:45
dcadenas self-assigned this on October 22, 2024
mplorentz (Member) left a comment:

I have some questions and comments but don't want to hold up the deploy any longer. Nice work 👍

```ts
if (cache.has(pubkey)) {
  return {
    id: event.id,
    action: "shadowReject",
```

mplorentz (Member):

Why is this a "shadow" reject? Can we give a nice error message in this case?

dcadenas (Contributor, Author):

I can change it to a normal reject, indeed; there's no real reason to hide that it was deleted.
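
For illustration, the non-shadow variant could look like this, assuming strfry's write-policy plugin response format (the message text is hypothetical):

```ts
if (cache.has(pubkey)) {
  return {
    id: event.id,
    action: "reject",
    msg: "blocked: this pubkey has requested to vanish (NIP-62)",
  };
}
```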

```ts
// the relay. We store CACHE_MAX_SIZE items in redis so restarts don't lose the
// cache. This assumes that after CACHE_MAX_SIZE items are processed, the
// oldest ones are no longer relevant and probably no longer used in the
// network, which lets us avoid an unbounded cache.
```

mplorentz (Member):

I suppose that in the case where we want to change our cache size/strategy, we do have a record of all requests in the append-only log, right?

dcadenas (Contributor, Author):

Yes, the vanish_requests stream is unchanged. In fact, if we ever need to, we could replay all vanish requests from the beginning.
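
For context, a replay could look roughly like this, assuming vanish_requests is a Redis stream read via the deno.land/x/redis client and reusing the redis connection and rememberVanished helper from the sketch above; the "pubkey" field name is an assumption about the entry shape:

```ts
// Replay every vanish request from the beginning of the stream.
const entries = await redis.xrange("vanish_requests", "-", "+");
for (const { fieldValues } of entries) {
  const pubkey = fieldValues["pubkey"]; // field name is an assumption
  if (pubkey) {
    await rememberVanished(pubkey);
  }
}
```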

```ts
class PubkeyCache {
  private maxSize: number;
  private queue: string[] = [];
  private set: Set<string> = new Set();
```

mplorentz (Member):

Can you explain why we need a queue, a set, and a redis cache?

dcadenas (Contributor, Author):

I can add comments in the code too if needed:

The Set is used for quick O(1) in-process lookups when we need to read from the cache. This keeps lag minimal because these lookups are extremely fast, much faster than Redis itself. The Queue lets us remove the oldest entries when we exceed the maximum cache size, to control memory usage. Both the Set and the Queue are in-process data structures because Strfry blocks until it receives a response from a plugin, so write operations need to be as optimized as possible.

While Redis is great, it's still too slow to be called on each write operation: it's asynchronous and remote, whereas synchronous local queries are orders of magnitude faster. Therefore, the main use for Redis here is to persist the cache across restarts. By storing the cache in Redis, we ensure the application retains this state even after a restart, without compromising the performance of critical operations.
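
Putting that together, the write path might look roughly like this. A sketch only: the class fields match the snippet above, but the method bodies and the off-hot-path persistence note are illustrative, not the PR's actual code:

```ts
class PubkeyCache {
  private maxSize: number;
  private queue: string[] = [];          // insertion order, drives eviction
  private set: Set<string> = new Set();  // O(1) membership checks

  constructor(maxSize: number) {
    this.maxSize = maxSize;
  }

  // Synchronous, in-process lookup: strfry blocks on the plugin's
  // response, so the hot path never waits on Redis.
  has(pubkey: string): boolean {
    return this.set.has(pubkey);
  }

  add(pubkey: string): void {
    if (this.set.has(pubkey)) return;
    this.queue.push(pubkey);
    this.set.add(pubkey);
    // Evict the oldest entry once the cap is exceeded.
    if (this.queue.length > this.maxSize) {
      const oldest = this.queue.shift()!;
      this.set.delete(oldest);
    }
    // Persisting to the Redis sorted set can happen off the hot path,
    // e.g. fire-and-forget, so writes stay synchronous and fast.
  }
}
```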

```ts
  return this.zset.size;
}

async zremrangebyrank(
```

mplorentz (Member):

Should these z.* functions be somewhere other than the test file since they are called from the main code?

dcadenas (Contributor, Author):

This is just a test mock for the Deno redis library.
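
For context, such a mock might look roughly like this: an illustrative in-memory stand-in for the sorted-set commands the code under test uses, not the PR's actual test code:

```ts
// Minimal in-memory stand-in for the zset commands used by the main code.
class MockRedis {
  private zset: Map<string, number> = new Map();

  async zadd(_key: string, score: number, member: string): Promise<number> {
    this.zset.set(member, score);
    return 1;
  }

  async zcard(_key: string): Promise<number> {
    return this.zset.size;
  }

  async zremrangebyrank(
    _key: string,
    start: number,
    stop: number,
  ): Promise<number> {
    // Rank members by score, then remove the requested rank range
    // (inclusive; a negative stop counts from the end, as in Redis).
    const ranked = [...this.zset.entries()].sort((a, b) => a[1] - b[1]);
    const end = stop < 0 ? ranked.length + stop : stop;
    const removed = ranked.slice(start, end + 1);
    for (const [member] of removed) this.zset.delete(member);
    return removed.length;
  }
}
```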

dcadenas merged commit faff9ff into main on Oct 29, 2024
2 checks passed