Remember vanished pubkeys #21
Conversation
I have some questions and comments but don't want to hold up the deploy any longer. Nice work 👍
```ts
if (cache.has(pubkey)) {
  return {
    id: event.id,
    action: "shadowReject",
```
Why is this a "shadow" reject? Can we give a nice error message in this case?
I can indeed change it to a normal reject; there's no real reason to hide that it was deleted.
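For context, turning this into a normal reject mostly means swapping the action and adding a human-readable message. A minimal sketch, reusing the names from the hunk above (the exact message wording is just an illustration):

```ts
// Sketch of a visible rejection instead of a shadowReject. The `msg` field
// is what the relay reports back to the client; the text is only an example.
if (cache.has(pubkey)) {
  return {
    id: event.id,
    action: "reject",
    msg: "blocked: this pubkey requested to vanish from this relay",
  };
}
```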
strfry/plugins/pubkey_cache.ts (outdated)
```ts
// the relay. We store CACHE_MAX_SIZE items in redis so restarts don't lose the
// cache. This assumes that after CACHE_MAX_SIZE items are processed, the
// oldest ones are no longer relevant and probably no longer used in the
// network, so the cache doesn't grow without bound.
```
I suppose that if we ever want to change our cache size or strategy, we still have a record of all requests in the append-only log, right?
Yes, the vanish_requests stream is unchanged. In fact, if we ever need to, we could re-run all vanish requests from the beginning.
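As a rough illustration of that replay idea (the event shape and the `addToCache` callback below are hypothetical; the point is only that the append-only history is enough to rebuild the cache under a different size or strategy):

```ts
// Hypothetical shape of a stored vanish request; the real stream entries
// may carry more fields.
interface VanishRequest {
  pubkey: string;
  createdAt: number;
}

// Rebuild a cache of any size from the full, append-only history.
function rebuildCache(
  history: VanishRequest[],
  addToCache: (pubkey: string) => void,
): void {
  // Oldest first, so the newest requests end up last and survive eviction.
  [...history]
    .sort((a, b) => a.createdAt - b.createdAt)
    .forEach((req) => addToCache(req.pubkey));
}
```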
```ts
class PubkeyCache {
  private maxSize: number;
  private queue: string[] = [];
  private set: Set<string> = new Set();
```
Can you explain why we need a queue, a set, and a Redis cache?
I can add comments in the code too if needed:

The Set is used for quick O(1) in-process lookups when we need to read from the cache. This keeps lag to a minimum because these lookups are extremely fast, much faster than Redis itself. The Queue lets us drop the oldest entries when we exceed the maximum cache size, to control memory usage. Both the Set and the Queue are in-process data structures because strfry blocks until it receives a response from a plugin, so write operations need to be as optimized as possible.

While Redis is great, it's still too slow to call on every write operation: it's asynchronous and remote, whereas the local lookups are synchronous and orders of magnitude faster. The main use for Redis here is therefore to persist the cache across restarts. By storing the cache in Redis, the application retains this state even after a restart, without compromising the performance of the critical write path.
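A minimal sketch of how the two in-process structures can work together (this is not the exact code from the PR; the Redis side is abstracted behind a tiny interface so the synchronous write path never waits on it):

```ts
// Minimal persistence interface; in the real plugin this would be backed by
// the Deno redis client's sorted-set commands.
interface Persistence {
  persist(pubkey: string): Promise<void>;
  evict(count: number): Promise<void>;
}

class PubkeyCache {
  private queue: string[] = [];         // insertion order, used for eviction
  private set: Set<string> = new Set(); // O(1) membership checks

  constructor(private maxSize: number, private persistence: Persistence) {}

  // Called on every write; must stay synchronous and fast because strfry
  // blocks until the plugin answers.
  has(pubkey: string): boolean {
    return this.set.has(pubkey);
  }

  add(pubkey: string): void {
    if (this.set.has(pubkey)) return;
    this.set.add(pubkey);
    this.queue.push(pubkey);

    // Evict the oldest entries once we exceed the cap.
    let evicted = 0;
    while (this.queue.length > this.maxSize) {
      const oldest = this.queue.shift()!;
      this.set.delete(oldest);
      evicted++;
    }

    // Fire-and-forget: persistence happens off the hot path.
    this.persistence.persist(pubkey).catch(() => {});
    if (evicted > 0) this.persistence.evict(evicted).catch(() => {});
  }
}
```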
```ts
  return this.zset.size;
}

async zremrangebyrank(
```
Should these z.* functions be somewhere other than the test file, since they are called from the main code?
This is just a test mock for the Deno redis library.
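For anyone reading along, such a mock is essentially an in-memory stand-in for the sorted-set commands the plugin uses. A stripped-down version might look like this (the method names mirror the Redis commands; the real Deno redis client's signatures may differ slightly):

```ts
// In-memory stand-in for the Redis sorted-set commands used by the cache.
class MockRedis {
  private zset: Map<string, number> = new Map(); // member -> score

  async zadd(_key: string, score: number, member: string): Promise<number> {
    const added = this.zset.has(member) ? 0 : 1;
    this.zset.set(member, score);
    return added;
  }

  async zcard(_key: string): Promise<number> {
    return this.zset.size;
  }

  async zrange(_key: string, start: number, stop: number): Promise<string[]> {
    const ranked = [...this.zset.entries()].sort((a, b) => a[1] - b[1]);
    const end = stop < 0 ? ranked.length + stop + 1 : stop + 1;
    return ranked.slice(start, end).map(([member]) => member);
  }

  // Remove members whose rank (by ascending score) falls within [start, stop].
  async zremrangebyrank(_key: string, start: number, stop: number): Promise<number> {
    const ranked = [...this.zset.entries()].sort((a, b) => a[1] - b[1]);
    const end = stop < 0 ? ranked.length + stop + 1 : stop + 1;
    const victims = ranked.slice(start, end);
    victims.forEach(([member]) => this.zset.delete(member));
    return victims.length;
  }
}
```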
Maps to #20.
NIP-62 specifies:
To address this, this PR introduces an in-process cache, initialized from a Redis ordered set, that tracks the last million deleted pubkeys. Once a pubkey has been removed from our relay, new events from that pubkey are rejected for as long as the pubkey remains in the cache. The Redis ordered set guarantees that the cache persists across restarts. We assume that one million entries are sufficient, since it is highly unlikely that a deleted pubkey will reappear after one million later deletions.
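Roughly, that startup and trimming path could look like the sketch below. The interface only declares the sorted-set calls the cache needs (the real plugin uses the Deno redis client); the key name "vanished_pubkeys" is hypothetical, and the one-million cap comes from the description above.

```ts
// Only the sorted-set calls we need; backed by the Deno redis client in the
// real plugin. The key name "vanished_pubkeys" is a placeholder.
interface SortedSetClient {
  zadd(key: string, score: number, member: string): Promise<number>;
  zcard(key: string): Promise<number>;
  zrange(key: string, start: number, stop: number): Promise<string[]>;
  zremrangebyrank(key: string, start: number, stop: number): Promise<number>;
}

const CACHE_KEY = "vanished_pubkeys";
const CACHE_MAX_SIZE = 1_000_000;

// Warm the in-process cache from Redis so a restart doesn't forget
// previously vanished pubkeys.
async function warmCache(
  redis: SortedSetClient,
  add: (pubkey: string) => void,
): Promise<void> {
  const members = await redis.zrange(CACHE_KEY, 0, -1); // oldest first
  members.forEach(add);
}

// Record a new vanished pubkey and keep only the newest CACHE_MAX_SIZE.
async function recordVanished(redis: SortedSetClient, pubkey: string): Promise<void> {
  await redis.zadd(CACHE_KEY, Date.now(), pubkey);
  const size = await redis.zcard(CACHE_KEY);
  if (size > CACHE_MAX_SIZE) {
    // Drop the oldest (lowest-scored) entries beyond the cap.
    await redis.zremrangebyrank(CACHE_KEY, 0, size - CACHE_MAX_SIZE - 1);
  }
}
```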
NIP-62 also states:
This PR ensures the removal of any gift wraps sent to the deleted pubkey.
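For reference, the gift wraps in question are kind 1059 events (per NIP-59) that p-tag the recipient, so the cleanup amounts to deleting everything matching a filter like the one below. How the filter is fed to the relay's deletion mechanism is omitted here and depends on the deployment.

```ts
// Nostr filter matching gift wraps addressed to a vanished pubkey:
// kind 1059 (NIP-59 gift wrap) with a "p" tag pointing at the recipient.
function giftWrapFilter(vanishedPubkey: string) {
  return {
    kinds: [1059],
    "#p": [vanishedPubkey],
  };
}

// Example: JSON.stringify(giftWrapFilter("<hex pubkey>")) can be handed to
// whatever deletion path the relay provides.
```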